Trust boundaries: the difference between "a security control exists" and "a security control is enforced." Workshop on student-chosen systems with peer review. Lab 6: full STRIDE threat model with data flow diagram.
Reading (~30 min)
Read the OWASP Threat Dragon project's quick-start guide (threatdragon.com or github.com/OWASP/threat-dragon/wiki). The tool is free and browser-based; you will use it for Lab 6 today. Understanding the interface before the lab saves time.
Then browse one example threat model on the OWASP Threat Modeling page to see what a finished artifact looks like. The diagram conventions (rectangles for processes, cylinders for data stores, arrows for data flows, dashed lines for trust boundaries) are what you will produce in Lab 6.
Lecture outline (~1.5 hr)
Part 1: Trust boundaries (35 min)
A trust boundary is a line in the data flow diagram where the level of trust changes. Crossing a trust boundary is where security controls must be applied.
The core idea: trust flows in, threat comes from outside. Every system trusts itself (usually) and distrusts the internet (usually). The interesting territory is everything in between: internal services that talk to external APIs, admin dashboards that accept input from non-admin users, mobile apps that communicate with backend APIs over TLS.
Common trust boundaries:
- Internet vs. internal network (the classic; firewalls live here)
- Authenticated vs. unauthenticated users (session check lives here)
- User vs. admin (authorization check lives here)
- One service vs. another service (service-to-service authentication and authorization)
- User data vs. code (input validation lives here; this is the trust boundary that injection attacks cross)
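That last boundary is worth a concrete look. The sketch below, using Python's built-in sqlite3 module, shows the same lookup with the user-data-vs-code boundary unenforced (string splicing) and enforced (a parameterized query). The table, column, and variable names are invented for illustration.

```python
import sqlite3

# In-memory database with invented sample data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "bob' OR '1'='1"  # attacker-controlled string

# Boundary NOT enforced: user data is spliced into code, so the quote
# characters rewrite the query's logic and every row comes back.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# Boundary enforced: the ? placeholder keeps user data as data; the
# literal string matches no username, so no rows come back.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # both rows leak
print(safe)    # []
```

The enforced version is not "input validation" in the filtering sense; it is the database driver guaranteeing that data can never be interpreted as code, which is exactly what the boundary requires.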
The key distinction: exists vs. enforced.
A security control that exists is one that was designed into the system. A security control that is enforced is one that is checked on every access to every resource, every time.
The most dangerous gap in security design is a control that exists but is not enforced. Examples:
- The system checks authentication on login but not on every API call. A user who knows the API endpoint URL can call it directly after authentication expires.
- The system validates input in the browser-side JavaScript but not on the server. The browser check can be bypassed by sending the HTTP request directly (curl, Burp Suite, or any HTTP client).
- The system requires admin role to delete records but checks the role only in the UI, not in the backend API. A user who knows the DELETE endpoint URL can call it regardless of role.
The STRIDE category for this class of failure is Elevation of Privilege (EoP), but it touches multiple categories: a missing server-side check also enables Tampering and Information Disclosure.
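The third example above (a role check that lives only in the UI) can be sketched in a few lines of Python. All names here (require_role, delete_record_*, the user dicts) are hypothetical, invented for this illustration; the point is that an enforced control runs on every invocation, regardless of who is calling.

```python
from functools import wraps

records = {1: "payroll", 2: "invoices"}

def require_role(role):
    """Enforced control: checked server-side on every single call."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            if user.get("role") != role:
                raise PermissionError("forbidden")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

# "Exists but not enforced": the admin check lives only in the UI layer,
# so anyone who calls this endpoint directly bypasses it entirely.
def delete_record_unenforced(user, record_id):
    return records.pop(record_id, None)

# Enforced: the decorator runs the check on every call.
@require_role("admin")
def delete_record_enforced(user, record_id):
    return records.pop(record_id, None)

alice = {"name": "alice", "role": "user"}

deleted = delete_record_unenforced(alice, 1)  # succeeds: control bypassed
try:
    delete_record_enforced(alice, 2)
    blocked = False
except PermissionError:
    blocked = True                            # control actually enforced
```

Note that the enforced check costs one decorator line per endpoint; the unenforced version costs nothing until an attacker finds the URL.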
Part 2: Applying trust boundaries to the STRIDE analysis (25 min)
Return to the STRIDE template from Week 2. For each threat identified:
- Identify which trust boundary the threat crosses. If a threat doesn't cross a trust boundary, it either belongs inside the system (an insider threat) or you mis-drew the boundary.
- Identify which controls are supposed to operate at that boundary: authentication, authorization, input validation, rate limiting, logging.
- Ask: is the control enforced on every call, or only on some? Write down the answer honestly.
This three-step process turns the STRIDE list into an actionable checklist. The threats with unenforced or missing controls at the relevant trust boundary are the highest-priority items.
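The three steps can be mechanized as a prioritized worklist. This is a minimal sketch; the threat entries and field names are invented placeholders, not output from any real tool.

```python
# Each entry records the three answers: which boundary the threat crosses,
# which control is supposed to operate there, and whether it is enforced.
threats = [
    {"threat": "SQL injection in search", "boundary": "user data vs. code",
     "control": "parameterized queries", "enforced": False},
    {"threat": "session theft in transit", "boundary": "unauth vs. auth",
     "control": "TLS + secure cookies", "enforced": True},
    {"threat": "direct call to DELETE endpoint", "boundary": "user vs. admin",
     "control": None, "enforced": False},
]

# Step 3's honest answer drives priority: threats with a missing or
# unenforced control at their boundary float to the top of the worklist.
high_priority = [t for t in threats
                 if t["control"] is None or not t["enforced"]]

for t in high_priority:
    print(f"FIX: {t['threat']} (boundary: {t['boundary']})")
```

Sorting by "is the control enforced?" rather than by perceived severity keeps the checklist honest: an unenforced control is a vulnerability regardless of how unlikely the threat feels.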
Part 3: Workshop and peer review (30 min)
Students work in pairs or individually to begin or complete their Lab 6 threat model.
The workshop structure:
- Draw the data flow diagram in OWASP Threat Dragon (10 min).
- Mark trust boundaries with dashed lines (5 min).
- Apply STRIDE to each element and flow, noting which controls exist and whether they are enforced (10 min).
- Exchange with a peer for an "adversarial read": the peer asks "what if the attacker does X?" for at least three threats the original author didn't list (5 min).
The peer-review step is not about finding fault; it is about discovering blind spots. The adversarial read is the reason real threat models happen in teams rather than alone.
Lab exercises (~1.5 hr)
Lab 6: Full STRIDE threat model with data flow diagram (graded)
See labs/lab-6-stride-threat-model.md for the full lab.
This is the week's primary deliverable: a full threat model on the same student-chosen system from Labs 1 and 2. The lab takes 60-90 minutes to complete with the OWASP Threat Dragon tool.
Independent practice (~5 hr)
- Reading (1 hr): Read the OWASP Testing Guide v4.2 introduction (free at owasp.org). Focus on the threat-modeling section. Notice how the testing methodology connects back to the threat model: the things you test in a security assessment are the threats you identified in the threat model.
- picoCTF spine (3 hr): Work in Forensics. Aim for 3 challenges. Good targets: a beginner challenge involving file analysis (finding hidden data in a file) or metadata extraction. Document approach and blockers.
- Reflection (1 hr): Write the prompts below.
Reflection prompts
- The peer review in today's workshop probably produced at least one threat you hadn't thought of. What was it? What did your peer see that you missed? What does this tell you about the value of reviewing security designs with someone who didn't design them?
- The distinction between "a control exists" and "a control is enforced" is the root cause of a large fraction of real-world security vulnerabilities. Pick a vulnerability you've read about (from the OWASP Top 10 or from general reading) and identify which control was present but not enforced, and at which trust boundary.
- The threat model you built this week reflects your current mental model of the system. In a real organization, the threat model is a living document that changes as the system changes. What event or system change would make your current threat model wrong? How would you know it had changed?
Week 3 of 14. Next: Cryptography I (symmetric ciphers, asymmetric cryptography, "don't roll your own crypto").