The legal landscape: CFAA, EU NIS2 Directive. Safe-harbour language in bug-bounty programs. Codes of professional conduct: ISC2, OffSec, EC-Council, SANS. Deliverable D3: ethics-framework reflection.
Reading (~45 min)
Read the text of CFAA Section 1030 (18 U.S.C. 1030). It is dense but short. Pay attention to the "without authorization" and "exceeds authorized access" language: these are the phrases courts have interpreted (inconsistently) over the years. Then read the HackerOne policy page (hackerone.com/hackerone/policy) as an example of safe-harbour language. Notice what the program explicitly permits and what it explicitly prohibits.
Lecture outline (~1.5 hr)
Part 1: CFAA (Computer Fraud and Abuse Act) (25 min)
The CFAA (18 U.S.C. 1030, enacted 1986, amended multiple times) is the primary U.S. statute governing unauthorized computer access. Key provisions relevant to security researchers:
Section 1030(a)(2): Prohibits intentionally accessing a computer without authorization or exceeding authorized access, and thereby obtaining information. "Exceeding authorized access" is where the legal line for security research sits.
Section 1030(a)(5): Prohibits intentionally causing damage to a protected computer without authorization. Damage is broadly defined and includes impairing availability (DoS).
Why "without authorization" is legally complicated:
The phrase has been interpreted inconsistently across circuit courts. Two readings exist:
- Narrow reading (gates-and-locks): "without authorization" means accessing a system the user has no permission to access at all. Accessing a public-facing website is authorized; under this reading, the CFAA reaches only circumvention of access controls that are technically enforced, not mere misuse of data the system let you reach.
- Broad reading (terms-of-service): "without authorization" includes any access that violates the computer owner's terms of service, even if the access is technically possible. Under this reading, logging into a competitor's website to collect pricing data violates the CFAA.
The Supreme Court's 2021 Van Buren v. United States decision moved toward the narrow reading: using legitimate access in a manner that violates policy is not a CFAA violation; circumventing technical access barriers is.
For security researchers:
The bright line: test only systems you own, systems you have explicit written permission to test, or authorized CTF/bug-bounty platforms. "Written permission" matters: verbal permission is not reliably sufficient. A bug-bounty program's published scope statement is written permission for the systems listed in scope.
The CFAA has been used against researchers who disclosed vulnerabilities without vendor permission, who tested production systems outside of bug-bounty scope, and who accessed data the system technically permitted but was not intended for them. The legal risk is not theoretical.
Part 2: EU NIS2 Directive and coordinated disclosure mandates (15 min)
Outside the U.S., the legal landscape differs. The EU's Network and Information Systems Directive 2 (NIS2, 2022, effective October 2024) includes provisions relevant to vulnerability disclosure:
- NIS2 requires operators of essential services (energy, transport, healthcare, digital infrastructure) to implement vulnerability handling policies aligned with coordinated disclosure principles.
- EU member states are directed to establish national CVD policies that encourage researchers to report to their national CSIRT before going public.
- The EU Cybersecurity Act strengthened ENISA's mandate; ENISA promotes the development of harmonized CVD practices across member states.
The EU framework is more explicitly protective of good-faith security research than the U.S. framework, but the specifics vary by member state. The practical lesson: know the jurisdiction when reporting.
Part 3: Safe-harbour language in bug-bounty programs (20 min)
Bug-bounty programs are the structured mechanism through which organizations receive vulnerability reports from outside researchers and provide safe-harbour: an assurance that the researcher will not face legal action for authorized testing within scope.
What good safe-harbour language includes:
- A clear scope statement: which domains, IP ranges, applications, and APIs are in scope for testing.
- An explicit authorization statement: "We authorize good-faith security research on the systems listed in scope."
- A defined disclosure timeline: how long the vendor expects to take to fix, and when the researcher can publish.
- An out-of-scope exclusion list: social engineering, DoS, physical attacks, and tests that affect other customers are typically out of scope.
- A legal protection statement: "We will not bring legal action against researchers who comply with this policy."
HackerOne (hackerone.com) and Bugcrowd (bugcrowd.com) are the two major bug-bounty platform operators. Both publish their own platform-level safe-harbour policies in addition to vendor-specific program policies.
What safe-harbour does NOT cover:
- Testing outside the stated scope, even if you found a vulnerability
- Accessing or modifying data beyond what is needed to confirm the vulnerability
- Accessing other customers' data
- Denial of service or degradation of service
- Social engineering of employees
The practical lesson: read the program policy before testing. If you are unsure whether an action is in scope, ask the vendor rather than proceeding and hoping.
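The "read the policy first" discipline can be partly mechanized. Below is a minimal sketch of a scope-check helper; the domain lists are hypothetical stand-ins for a program's published scope page, and the default-deny behavior mirrors the advice above (when unsure, ask the vendor rather than proceed):

```python
from fnmatch import fnmatch

# Hypothetical program policy, transcribed from a published scope page.
IN_SCOPE = ["app.example.com", "api.example.com", "*.staging.example.com"]
OUT_OF_SCOPE = ["mail.example.com", "*.corp.example.com"]

def may_test(host: str) -> bool:
    """Return True only if host matches an in-scope pattern and no exclusion.

    Anything ambiguous is treated as out of scope by construction:
    exclusions are checked first, and an unlisted host is denied.
    """
    host = host.lower().rstrip(".")
    if any(fnmatch(host, pattern) for pattern in OUT_OF_SCOPE):
        return False
    return any(fnmatch(host, pattern) for pattern in IN_SCOPE)

print(may_test("api.example.com"))         # explicitly in scope -> True
print(may_test("db.example.com"))          # unlisted -> False
print(may_test("mail.example.com"))        # explicitly excluded -> False
print(may_test("ci.staging.example.com"))  # matches wildcard -> True
```

A helper like this does not replace reading the policy (timelines, permitted techniques, and exclusions such as social engineering are not host-based), but it is a cheap guard against the most common safe-harbour violation: drifting onto an out-of-scope host mid-test.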
Part 4: Codes of professional conduct (20 min)
Every major security certification requires the holder to attest to a code of ethics. SEC-101 is where students become familiar with what those codes require.
ISC2 Code of Ethics (for CC, CISSP, CSSLP, etc.):
Four canons, in priority order:
- Protect society, the common good, necessary public trust and confidence, and the infrastructure.
- Act honorably, honestly, justly, responsibly, and legally.
- Provide diligent and competent service to principals.
- Advance and protect the profession.
Canon 1 is the highest priority: even the client's interests yield to society's welfare.
OffSec Code of Conduct (for OSCP, OSED, etc.):
Emphasizes that OSCP holders conduct engagements only with explicit authorization, do not disclose client data, and follow responsible disclosure for any vulnerabilities found outside engagement scope.
EC-Council Code of Ethics (for CEH, CHFI, etc.):
Covers confidentiality, legal compliance, and the prohibition on unauthorized access. EC-Council's CEH specifically requires the candidate to have completed a legal training component covering CFAA and similar statutes.
SANS GIAC Code of Ethics:
Similar to the above; emphasizes that GIAC certification holders conduct themselves in ways that advance the profession and do not bring it into disrepute.
The practical common thread:
All four codes prohibit unauthorized access. All four require confidentiality of client information. All four require legal compliance with applicable statutes. None creates an exception for "good intentions." The ethical security professional operates within authorization or does not operate at all.
Deliverable D3: Ethics-framework reflection (in-class start, ~600 words)
See the CAPSTONE.md for the deliverable specification. This is written work, not a lab. The reflection covers:
- The CFAA authorization requirement in your own words, with one concrete example of where the line sits.
- The safe-harbour language from two bug-bounty programs of your choice. What do they explicitly permit? What do they explicitly prohibit?
- The CVD process per ISO/IEC 29147 as you would explain it to a developer who has never heard of it.
The reflection is graded on the Evaluate objective (Bloom's L5): does the student articulate the framework accurately and in their own words, and does the reasoning show genuine engagement with the trade-offs?
Independent practice (~5 hr)
- Deliverable D3 completion (2 hr): Finish the ~600-word ethics reflection. First draft in class; final draft for submission.
- picoCTF spine (2 hr): Multi-category challenge set. Pick two categories you have spent the least time on and complete at least one challenge in each.
- Capstone preparation (1 hr): Narrow your capstone CVE selection to your top two candidates. Read the original CVE record for each on NVD. Write one paragraph on each explaining why it interests you and which technical area (cryptographic failure, injection, network protocol vulnerability, hardware vulnerability) it represents.
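Looking up CVE records on NVD can also be scripted. A minimal sketch, assuming the public NVD REST API 2.0 endpoint at services.nvd.nist.gov (the two candidate IDs below are examples, not assigned capstone choices), that validates a CVE identifier and builds the corresponding lookup URL:

```python
import re

# CVE IDs have the form CVE-YYYY-NNNN, where the sequence part is 4+ digits.
CVE_ID = re.compile(r"^CVE-\d{4}-\d{4,}$")
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def nvd_url(cve_id: str) -> str:
    """Validate a CVE identifier and return the NVD API 2.0 lookup URL."""
    cve_id = cve_id.strip().upper()
    if not CVE_ID.match(cve_id):
        raise ValueError(f"not a well-formed CVE ID: {cve_id!r}")
    return f"{NVD_API}?cveId={cve_id}"

# Example IDs (Heartbleed, Log4Shell):
for candidate in ["CVE-2014-0160", "CVE-2021-44228"]:
    print(nvd_url(candidate))
```

Fetching the URL returns the JSON CVE record; for capstone preparation, the human-readable NVD page for the same ID is usually the easier read.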
Reflection prompts
- The CFAA has been amended multiple times since 1986. Technology has changed dramatically: the original statute was written before widespread internet access. Identify one aspect of modern security research (e.g., automated vulnerability scanning, cloud API testing, AI-assisted fuzzing) that the original CFAA authors clearly did not anticipate. How do current courts handle this?
- Bug-bounty programs provide safe-harbour for testing within scope. A researcher finds a critical vulnerability outside of scope while testing in scope (for example, testing the web application and accidentally discovering the database is accessible from the internet). What should the researcher do? Walk through the ethical reasoning.
- ISC2's first canon places society's welfare above the client's interests. Describe a scenario where this creates a conflict for a security professional employed by a private company. How does the code of ethics resolve the conflict?
Week 12 of 14. Next: Capstone scoping (CVE selection, outline, instructor sign-off).