Weeks: 13-14
Graded: Yes (two-tier grading -- see CAPSTONE.md for the full rubric)
Time estimate: ~10-12 hr total (split across Weeks 13-14)
Tools: Browser; writing tool of your choice; FIRST.org CVSS calculator; Git
Learning objective
Reconstruct a significant historical CVE in a 5-8 page report pitched at the "educated non-specialist" level, covering the technical mechanism, the disclosure timeline, how the disclosure was handled, and a reflective "what would we do differently." (Bloom's L6: Create -- produce an original explanatory artifact that synthesizes technical detail, historical context, and policy analysis.)
CVE selection
Choose one CVE from the instructor-curated list. Get instructor sign-off on your outline before drafting (Week 13 workshop).
| CVE | Common name | Vulnerability class |
|---|---|---|
| CVE-2014-0160 | Heartbleed | Buffer over-read in OpenSSL TLS heartbeat handler |
| CVE-2014-6271 | Shellshock | Bash function definition parsing allows trailing command injection |
| CVE-2021-44228 | Log4Shell | JNDI injection via logging input in Log4j |
| CVE-2017-5753 / CVE-2017-5754 | Spectre and Meltdown | Speculative-execution side channels in modern x86 and ARM CPUs |
| CVE-2017-0144 | EternalBlue | SMBv1 buffer overflow enabling remote code execution on Windows |
| CVE-2016-3714 | ImageTragick | Shell command injection in ImageMagick via crafted MVG/MSL files |
| CVE-2016-5195 | Dirty COW | Linux kernel race condition allowing privilege escalation via copy-on-write |
| CVE-2019-0708 | BlueKeep | Windows Remote Desktop Protocol pre-authentication RCE |
You may propose an alternative with instructor approval. The alternative must be of comparable scope: widely deployed affected software, documented disclosure process, and at least three independent technical write-ups.
Required artifacts
Submit a Git repository containing all of the following:
1. report.md (5-8 pages)
The report itself. Structure:
Section 1: What happened (500-700 words)
Plain-English explanation for the educated non-specialist. Your reader is smart but not a security professional. They have read general-audience news coverage of the incident but not the technical advisories.
Cover:
- What software or system was vulnerable.
- What the vulnerability allowed an attacker to do.
- Who was affected and at what scale.
- When it was discovered and when the public learned about it.
No jargon without explanation. The first time you use a technical term, define it in passing (e.g., "a buffer over-read -- code that reads past the end of the memory set aside for a piece of data").
Section 2: Why it worked (600-900 words)
The technical root cause. This is the one section that requires genuine technical depth, but the depth must remain accessible to your stated audience.
Cover:
- The specific code or design failure that introduced the vulnerability.
- The mechanism of exploitation: what an attacker actually does to trigger the vulnerability.
- Why the vulnerability was not caught earlier (what in the software development process or security practice of the time failed to catch it).
Use analogies where they help. "The Heartbleed vulnerability is like a library patron who hands the librarian a three-word note and says 'read my hundred-word note back to me' -- and the librarian, never verifying the count, reads back the three words plus whatever ninety-seven words happen to be on the desk behind the counter." This analogy is imperfect but captures the buffer over-read logic for a non-specialist audience.
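If it helps to see the same failure as code, here is a hypothetical, heavily simplified Python sketch (not OpenSSL's actual implementation) of the missing length check:

```python
# Hypothetical sketch of the Heartbleed logic in Python -- not OpenSSL's code.
# In C the over-read pulls in whatever sits next to the payload in memory;
# here we fake that adjacency by concatenating "secrets" after the payload.
memory = b"bird" + b" | SECRET_KEY=hunter2 | session=abc123"

def heartbeat_vulnerable(claimed_len: int) -> bytes:
    # Trusts the length the client claims, like the unpatched handler.
    return memory[:claimed_len]

def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    # The patch in spirit: check the claim against the real payload size.
    if claimed_len > len(payload):
        return b""  # discard malformed heartbeat requests
    return payload[:claimed_len]

print(heartbeat_vulnerable(4))       # b'bird' -- an honest request
print(heartbeat_vulnerable(40))      # b'bird | SECRET_KEY=...' -- the over-read
print(heartbeat_fixed(b"bird", 40))  # b'' -- the patched behavior
```

Note that Python slicing clamps at the end of the buffer, which is why the sketch concatenates the "adjacent" memory explicitly; in C, the read simply runs past the end of the allocation. A sketch at this level of simplification is appropriate for Section 2; do not paste real exploit code.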
Section 3: The timeline (300-400 words)
Chronological. Use specific dates, not relative terms ("two weeks later"). Include:
- Discovery (who, when, how they found it).
- Vendor notification (was it coordinated?).
- Embargo period (how long, was it honored?).
- CVE assignment date.
- Patch release date.
- Advisory publication date.
- Mass exploitation (if documented) -- when did exploitation in the wild begin?
A table of dates is acceptable if you annotate each row.
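For example, an annotated row might look like this (the dates are Heartbleed's widely reported ones; verify against your primary sources):

| Date | Event | Why it matters |
|---|---|---|
| 2014-04-07 | OpenSSL 1.0.1g released; public advisory issued | Patch and advisory landed the same day, so defenders and attackers learned of the bug simultaneously. |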
Section 4: Disclosure handling (300-400 words)
Evaluate the CVD process:
- Did the vendor respond appropriately to the initial report?
- Was the embargo period reasonable?
- Was the public notified clearly and promptly?
- What did the disclosure process do well?
- What would you recommend doing differently?
Section 5: CVSS v3.1 scoring (200-300 words)
Score the CVE yourself using the FIRST.org CVSS v3.1 calculator. Show each metric value with a one-sentence justification. Compare to the NVD-assigned score. If your score differs, explain why.
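As a sanity check on your calculator work, here is a short Python sketch of the published v3.1 base-score equations applied to Heartbleed's NVD vector -- a worked example, not a required artifact:

```python
import math

# Worked CVSS v3.1 base score, using the weights and equations published
# in the FIRST.org specification. Vector (Heartbleed's NVD vector):
# CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N
AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.85  # Network / Low / None / None
C, I, A = 0.56, 0.0, 0.0                 # Confidentiality High; no I or A impact

iss = 1 - (1 - C) * (1 - I) * (1 - A)    # impact sub-score = 0.56
impact = 6.42 * iss                      # scope Unchanged
exploitability = 8.22 * AV * AC * PR * UI

def roundup(x: float) -> float:
    # Simplified version of the spec's Roundup: smallest one-decimal value >= x.
    return math.ceil(x * 10) / 10

base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10.0))
print(base)  # 7.5 -- matches the NVD score for CVE-2014-0160
```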
Section 6: What would we do differently (200-400 words)
Prospective reflection. Not "what could the vendor have done better during disclosure" (that is Section 4). This section asks: what would prevent the next vulnerability of this class?
Examples of good answers:
- A specific secure coding practice or language feature that would have prevented the root cause.
- A development-process change (fuzzing, code review, formal verification for the specific type of code involved).
- An industry-wide change (better default configurations, dependency scanning in CI/CD, mandatory SBOMs for critical infrastructure software).
Bad answers: "they should have tested more" (not specific) or "use a memory-safe language" without explaining which aspect of the vulnerability is memory-safety-related.
2. timeline-diagram.png (or .svg)
A visual timeline covering at minimum: discovery, vendor notification, patch release, advisory publication. Create it with any diagramming tool (draw.io, Excalidraw, LibreOffice Draw); even a well-formatted Markdown table exported as an image is acceptable.
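If you would rather script the image than draw it, a minimal matplotlib sketch along these lines also works (the dates below are placeholders; substitute your Section 3 timeline):

```python
from datetime import date

import matplotlib.dates as mdates
import matplotlib.pyplot as plt

# Placeholder events -- replace with the dates from your Section 3 timeline.
events = [
    (date(2014, 3, 21), "Discovery"),
    (date(2014, 4, 1), "Vendor notified"),
    (date(2014, 4, 7), "Patch released"),
    (date(2014, 4, 7), "Advisory published"),
]

fig, ax = plt.subplots(figsize=(8, 2))
ax.axhline(0, color="gray", linewidth=1)
for i, (d, label) in enumerate(events):
    ax.plot([d], [0], "o", color="tab:blue")
    # Stagger the labels vertically so same-day events don't overlap.
    ax.annotate(label, (mdates.date2num(d), 0),
                xytext=(0, 10 + 14 * (i % 2)),
                textcoords="offset points", ha="center")
ax.get_yaxis().set_visible(False)
ax.xaxis.set_major_formatter(mdates.DateFormatter("%Y-%m-%d"))
fig.tight_layout()
fig.savefig("timeline-diagram.png", dpi=150)
```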
3. Git repository with at least 3 commits
The repository must show at least three commits in its history. The first commit should be the outline or research notes; subsequent commits should show incremental work on the report. A single commit with the finished report does not satisfy this requirement.
Repository naming: sec101-capstone-[your-name] (lowercase, hyphens). Example: sec101-capstone-alex-jones.
Primary sources (all three required in your citations)
- The original CVE record (NVD or MITRE).
- The vendor's security advisory.
- An independent technical write-up (Project Zero, security researcher blog, conference talk, CERT advisory).
Use any consistent citation format. Include the URL and the date you accessed each source.
Plagiarism
The capstone is graded on the originality of your explanation, not the facts. You can and should describe the same technical facts as your sources, but in your own words and for your stated audience. Copying a paragraph from a blog post, even with attribution, fails the "educated non-specialist" audience requirement: the blog post was not written for your reader.
Submission
Push your repository to GitHub or GitLab. Email the repository URL to interested@virtuscyberacademy.org with the subject line "SEC-101 capstone, [your name]". The course team replies within 7 days.
Grading rubric
Two-tier grading (see CAPSTONE.md for the full rubric):
Tier 1 (pass/fail gate): The report covers a real CVE with technical accuracy. Reports with material technical errors do not pass to Tier 2. Plagiarized content is grounds for course failure.
Tier 2 (scored):
- Technical accuracy and depth (40%): does the reconstruction match the public record at the byte/protocol level?
- Audience-appropriate clarity (30%): can the educated non-specialist follow the report?
- Disclosure and ethics handling (30%): does the report engage seriously with the CVD process, vendor behavior, and the "what would we do differently" reflection?
picoCTF connection
The CVE you chose for this capstone likely belongs to a vulnerability class represented in the picoCTF challenge set: memory corruption and over-reads (Heartbleed, EternalBlue, BlueKeep), command injection (Shellshock, ImageTragick), input injection via logging (Log4Shell), or privilege escalation (Dirty COW). After submitting your capstone, return to the picoCTF challenges in the matching category. With the full CVE reconstruction behind you, the CTF challenges that abstract the same mechanism will feel different. This is the forward-integration the picoCTF spine is designed to produce: by Week 14, you have enough vocabulary and mental model to see a CTF challenge as a controlled version of a real-world disclosure event.
Lab 9 of 9. SEC-101 complete.