Ethical Boundaries: Navigating the Legal Landscape of Security Research


Hey everyone! Welcome back to our exploit development journey. Last time, we discussed the mindset a security researcher needs to be successful in exploit dev. Today, however, we'll be tackling something equally important but often overlooked: the ethical and legal considerations that should guide our work. It's not the most glamorous topic, but how we choose to act ethically will define the roles we take on in the field.

The Double-Edged Sword of Security Research

The skills we're developing throughout this series are inherently dual-use. They can be used to strengthen security or harm it. This places a high degree of responsibility on our shoulders.


Let's daydream for a second and consider a very real situation. Imagine that you've discovered a serious vulnerability in a widely used application. After hours, days, even months of poking away at it, one of the arrows you've been meticulously firing off finally sticks, and a rush of excitement follows. I've experienced it many times over the years. Once the realization of what capabilities this may grant sinks in, though, a more complex set of emotions arises: uncertainty about what should be done next, worry about the consequences if someone more malicious were to discover the same flaw, and an awareness of the responsibility you now carry.

This duality creates a fundamental tension in security research: the same knowledge that strengthens defenses is also the knowledge that enables attacks.


Navigating this tension requires a clear foundation for the decisions each of us makes – decisions rooted in both ethics and conduct that aligns with the law.

Before diving into ethics, let's establish the legal baseline. The laws that govern security research vary dramatically between jurisdictions and are often complex, contradictory, and frequently written without a deep understanding of how security research actually works.

I'm not a lawyer (and this definitely isn't legal advice), but here are some key legal considerations that affect security researchers worldwide:

Computer Misuse and Unauthorized Access Laws

Most countries have laws that criminalize unauthorized access to computer systems. In the US, the Computer Fraud and Abuse Act (CFAA) is the primary federal law, while other countries have similar legislation like the UK's Computer Misuse Act.

The challenge with these laws is that terms like "unauthorized access" and "exceeding authorized access" are often poorly defined and subject to broad interpretation. This creates a gray area where the same actions might be considered legal in one context and illegal in another.


Intellectual Property Laws

Security research often involves reverse engineering software or analyzing protocols in use – activities that can run up against intellectual property law in several ways:

  1. Copyright laws might restrict your right to decompile or reverse engineer software, especially if doing so requires circumventing technical measures designed to prevent access to the underlying code.

  2. Terms of service and EULAs often explicitly prohibit reverse engineering or security testing, creating potential contractual liability even if the research itself wouldn't otherwise be illegal.

  3. Patent laws can affect how you implement certain techniques in your own security tools.

Export Control Regulations

In some countries, such as the US, encryption technologies and certain security tools are subject to export control regulations. These rules can restrict activities such as:

  • Sharing exploit code or security tools internationally
  • Teaching specific security techniques to foreign nationals
  • Traveling internationally with certain security tools

The penalties for violating these regulations can be severe, so it's important to be aware of them, especially if you're publishing research or collaborating internationally.

Responsible Disclosure: The Art of Sharing Vulnerabilities

Outside of legal concerns, there's a well-established ethical framework in the security community for how an individual should handle vulnerabilities: responsible disclosure (sometimes called coordinated vulnerability disclosure).

At its core, responsible disclosure means giving vendors a reasonable opportunity to fix vulnerabilities before disclosing them publicly. Putting this concept into practice raises many questions:


Key Questions in Responsible Disclosure

  1. Who to notify? Sometimes this is obvious (the software vendor), but what about cases where multiple parties are involved (cloud providers, hardware with embedded software, etc.)? And whom do you contact when the vendor is unresponsive or no longer in business?

  2. How long to wait? What is an acceptable amount of time to wait before disclosure? The industry has gravitated toward 90 days as a standard disclosure timeline, but practice varies widely.

  3. How much detail to share? Once you do disclose, what is the appropriate level of detail? Full disclosure (including proof-of-concept exploit code) makes the vulnerability easy to exploit and may put users in harm's way. A limited disclosure reduces the likelihood of your work being weaponized while still giving people enough information to protect themselves to some degree, without enabling threat actors with more malicious intent.

  4. What if the vendor is unresponsive? This creates one of the toughest ethical dilemmas in security research – balancing the public's right to know about vulnerabilities affecting them against the risk of enabling attacks before a fix is available.

You will most likely face this challenge several times over the years of doing your own research, and a perfect answer will not always exist. My approach is to consider the real-world risk to users and adjust the disclosure timeline and level of detail accordingly.

For critical vulnerabilities with high exploitation potential, I lean toward a very detailed private disclosure to the vendor, giving them time to institute a fix. Publicly disclosing a previously unknown vulnerability can have severe repercussions, or it may be relatively minor; at the same time, if malicious actors discover a flaw the vendor isn't aware of, it may be exploited for a long time before anyone is alerted. Ultimately, handling disclosure is a balancing act, and it takes integrity to think it through from multiple angles.
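To make the 90-day convention mentioned above concrete, here's a minimal sketch of how a researcher might track a public-disclosure deadline from the date of a private report. The function name and the idea of modeling vendor extensions as extra days are my own illustration, not an industry-standard tool.

```python
from datetime import date, timedelta

def disclosure_deadline(reported: date, policy_days: int = 90,
                        extension_days: int = 0) -> date:
    """Hypothetical helper: date after which public disclosure is planned.

    policy_days follows the common (but not universal) 90-day convention;
    extension_days models a vendor asking for more time to ship a fix.
    """
    return reported + timedelta(days=policy_days + extension_days)

report_date = date(2025, 1, 15)
print(disclosure_deadline(report_date))                      # 2025-04-15
print(disclosure_deadline(report_date, extension_days=14))   # 2025-04-29
```

Keeping the timeline explicit like this also makes it easy to communicate a firm but negotiable date to the vendor up front.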

Bug Bounty Programs: Incentivizing Security Research

Bug bounty programs have transformed the landscape of vulnerability research over the years. They provide clear permission and financial incentives for finding and reporting vulnerabilities, creating a structured process that benefits both the researcher and the organization.


If you're considering participating in bug bounty programs, here are some tips to help you steer clear of common pitfalls:

  1. Read the scope and rules carefully. The program's scope defines what systems you're authorized to test and what techniques are permitted. Staying within these boundaries is crucial: a bug bounty is not a free-for-all, and you are only authorized to the extent the vendor has defined. Review the scope periodically, too, because programs do update it from time to time.

  2. Document everything. Keep detailed records of your testing process, including timestamps, IP addresses used, and the specific actions taken. This documentation can be invaluable if questions arise and will aid you in creating an effective report for the vendor.

  3. Minimize collateral impact. Even with permission, design your testing to minimize disruption to the target systems. Avoid denial-of-service conditions, excessive data access, or actions that might affect other users.

  4. Be professional in communications. How you report vulnerabilities can significantly affect how they're received. Clear, professional communication that focuses on the technical details and potential impact will generally get better results than demands or threats. Even after you've put in a great deal of work, the vendor may not consider the bug valid for any number of reasons, or it may already have been reported. These situations can be frustrating, but professionalism is key.

  5. Respect the process. Follow the program's disclosure guidelines, including any waiting periods before public disclosure.
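As a small illustration of the "document everything" tip above, here's a sketch of a record-keeper that appends one timestamped row per test action. The function name, field layout, and filename are my own invention; the IP and hostname are reserved documentation values, not real targets.

```python
import csv
from datetime import datetime, timezone

def log_action(path: str, source_ip: str, target: str, action: str) -> None:
    """Append a timestamped record of one testing action to a CSV log.

    Capturing the UTC timestamp, source IP, target, and a description of
    the action lets you reconstruct exactly what you did if questions
    arise, and doubles as raw material for the vendor report.
    """
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),  # when
            source_ip,                               # from where
            target,                                  # against what
            action,                                  # what was done
        ])

log_action("testing-log.csv", "203.0.113.7", "app.example.com",
           "Sent crafted login request to test input validation")
```

A plain append-only file like this is deliberately low-tech: it works mid-engagement without infrastructure, and the chronological order itself is part of the evidence.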

Safe Harbor and Its Limitations

Many bug bounty programs offer "safe harbor" provisions – essentially promising not to pursue legal action against researchers who follow the program rules. While these provisions are valuable, they have limitations:

  • They typically only protect against actions by the organization running the program, not potential legal action by law enforcement
  • They usually only apply to actions specifically within the program's scope
  • They may be subject to various conditions and exceptions

Understanding these limitations is important for making informed decisions about your research activities.

Building Your Personal Ethical Framework

Laws and industry norms are excellent guidelines, but ultimately, each of us must develop our own ethical framework for security research. If you operate well within the law and stay mindful of your impact on yourself and others, you'll be in a good position to tackle most of the questions that arise over time.


Practical Guidelines for Ethical Research

Beyond the theoretical frameworks, here are some practical guidelines I've found helpful for conducting research ethically:

Making Ethical Choices in Binary Analysis

Even in the technical aspects of our work, ethical considerations arise:

  1. Choose appropriate targets for practice. When learning reverse engineering and exploit development, focus on:

    • Your own code and systems
    • Open-source software (respecting license terms)
    • Intentionally vulnerable practice applications (like those we'll discuss next week in our lab setup)
    • Software with explicit permission for security research
  2. Consider the potential impact of your tools. When developing exploitation tools or writing about techniques, think about balancing between educational value and potential for misuse.

  3. Apply the principle of least privilege. When developing exploits, design them to achieve their specific purpose without unnecessary access or features.


Institutional Review: Additional Safeguards

For those working in corporate environments or academic institutions, several forms of review can provide both protection and ethical direction:

  1. Legal team review can help identify potential legal issues before they arise.

  2. Ethics boards in academic settings can provide guidance on research methodologies.

  3. Peer review within the security community can highlight ethical considerations you might have missed.

Even as an independent researcher, you stand to gain a lot by seeking insight from trusted colleagues on potential ethical and legal concerns before undertaking sensitive research.

Sometimes the most ethical action isn't clearly legal, or what's legally permissible may not seem ethically optimal. These situations create some of the most challenging dilemmas in security research.

For example, what should you do if you discover a critical vulnerability in a medical device, but the manufacturer has a history of threatening researchers and doesn't offer a bug bounty program? The stakes for patient safety might be high, but so are the potential legal risks. Not everyone will be receptive to your research, regardless of your intent.


There's no universal answer to these dilemmas, but I've found that transparency about your motivations, careful documentation of your process, and a focus on minimizing harm while maximizing security benefit provide the best compass.

Looking Ahead: Building Our Secure Research Environment

With this ethical and legal foundation in place, we're ready to move into the practical aspects of exploit development. Next week, we'll start building our exploit development laboratory (I'll be using Parrot OS as a personal preference). This will give us an isolated environment that separates our activities from the host system and other machines on the network, providing a safer place to practice the skills we develop going forward.

We'll explore some virtualization concepts, network isolation techniques, and the initial toolset installation that will support our journey from basic memory corruption exercises all the way to advanced exploitation techniques.

Until Next Time

I hope this exploration of the ethical and legal dimensions of exploit development has helped you get a better footing on how to move forward with your research in a professional and responsible manner. These topics are definitely not the most exciting, but the conversations need to be had, and they will factor into every piece of research you do going forward in some form or another.

Until next time,
