Chapter 4. Building a New Feedback Loop by Starting a Bug-Bounty Program
Bug-bounty and responsible-disclosure programs are incredibly useful.
Bounties and responsible-disclosure programs can be completely separate activities. I’m using them interchangeably in this chapter simply for brevity, but I highly encourage anyone looking further into launching one to specifically understand the difference and choose whatever is the right starting point for your organization.1
If you’re not working toward launching one at some point, I strongly suggest considering it. Often, security or technology leaders worry either about having the funds to support bug bounties or about inviting attacks on the organization. However, my experience is that both concerns turn out to be much smaller issues in practice than you would expect.
Many organizations have embraced bug-bounty and responsible-disclosure programs in the past few years for two key reasons:
Bug bounties give you a real-time and ongoing feedback loop that highlights where your security program is succeeding and where it is failing.
Bug bounties provide an avenue and incentive for researchers to report serious vulnerabilities they previously might not have shared (or simply sent directly to the press or security mailing lists).
Because both of these reasons offer major advantages to organizations with a program in place, it’s no wonder bug-bounty programs have seen such a high rate of adoption.
In this chapter, I share experiences with launching a bounty program that will help you know what to focus on when thinking about adding one to your security program. I also outline a high-level plan for implementing your bug-bounty program.
The Concerns about Bug-Bounty Programs
When I talk to people about starting a bug-bounty program, the two biggest concerns I often hear are about having the budget and increasing the risk of attacks.
When Etsy launched its bug-bounty program, we were surprised to learn that many of the researchers didn’t necessarily care about the money, which makes budget concerns far less of a focus in retrospect. In many cases, recognition is a bigger motivation than money. If you decide to launch a program with just a hall of fame as a reward, it can often be a good way to start and will likely attract more attention than you would initially expect.
This doesn’t mean money doesn’t factor into a bug-bounty program, but money also shouldn’t be a barrier to at least getting started with a basic hall of fame for your responsible-disclosure program. The real lesson we learned: don’t think you need a $200,000 bounty line item in the budget before you can get started—although it certainly doesn’t hurt. Many organizations have been successful by starting with a hall-of-fame program and then graduating to a monetary-rewards program (and eventually increasing rewards to encourage renewed coverage).
The second concern is the risk of inviting attacks. However, on the internet, you’re already being penetration-tested (pen-tested) continuously. You’re just not receiving the report. With the bug-bounty program, you actually get the report.
One caveat is that, when you initially launch the program, you will of course experience an uptick in activity for which you should prepare. However, the bug-bounty program doesn’t change the overall risk to your organization. Also, you can handle the issues that arise when you first launch the program by having a plan, as I explain a little later in this chapter.
The Goals of a Bug-Bounty Program
Understanding the goals of your bug-bounty program has a couple of benefits. These goals can help you to make the case for a program and, assuming you get the go-ahead, build the program with your goals in mind. In my experience, a bug-bounty program has three key goals.
- Incentivizing researchers to report issues to you
- Back in the not-so-distant past, if a researcher found a vulnerability in a company’s system and contacted that company about what they found and how to fix it, the researcher had an alarmingly high chance of facing criminal charges. Bug-bounty programs evolved out of this situation so that researchers who wanted to work within an ethical framework could do so.
- Validating where your security program is working and where it isn’t
- This goal is important because it massively augments our only previous feedback mechanism, which was pen-testing. Although we’ve always thought pen-testing provided this feedback, the results of this testing are typically directed at a very narrow scope of applications or infrastructure. In contrast, bounty programs tend to be much wider in scope than pen-tests and thus augment pen-tests by providing coverage across a much broader swath of applications and infrastructure. With this feedback in hand, a security team has much better data on where its program is working and where it could use additional investment of time or resources.
- Increasing attackers’ costs for vulnerability discovery and exploitation
- When you have a bug-bounty program that incentivizes ethical researchers to report bugs to you, your organization becomes better at responding to and remediating newly discovered issues. I cover this in more depth later in the chapter, but as you improve your ability to respond quickly and remediate, you become a harder target and thus increase your attackers’ costs.
These three goals should hopefully help frame your thinking around when a bug-bounty or responsible-disclosure program is right for your organization and the benefits you can obtain from launching one.
Launching a Bug-Bounty Program
Let’s not mince words: The first few weeks of a public bug-bounty program will be intense. But you can lay the groundwork for a launch that will help you meet your goals and minimize the chaos. The following sections dig into the preparation you can do so that launching your program is a success.
Provide Specific Guidelines and Processes
To create a win-win program where researchers can report bugs to you without going out of scope, you need to state the rules explicitly. Define where researchers can look for bugs and what process you require for reporting those bugs. For example, Etsy has a web page that explains exactly how to write a good bug-bounty submission and an FAQ that outlines how Etsy defines a valid bug, do’s and don’ts, rewards, and other useful information.
Historically, these guidelines have been incredibly important in reassuring researchers and incentivizing them to report to you. The guidelines also help you get the kind of reports you want and help researchers understand how to work with your site and systems without disrupting the customer experience.
You might need to refine these guidelines as you go. Just have some type of guidelines ready for the day you launch your program.
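One lightweight way to make your reporting process discoverable is a security.txt file (RFC 9116), served at /.well-known/security.txt. The sketch below uses placeholder example.com URLs and a hypothetical contact address; substitute your own policy and hall-of-fame pages:

```text
Contact: mailto:security@example.com
Expires: 2026-12-31T00:00:00.000Z
Policy: https://example.com/responsible-disclosure
Acknowledgments: https://example.com/security/hall-of-fame
Preferred-Languages: en
```

This doesn’t replace a full guidelines page, but it gives researchers a machine-readable pointer to your process on day one.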
Record Expectations and Goal-Based Metrics
As I mentioned earlier, your bug-bounty program can tell you what is and isn’t working in your security program. It can also help you increase attackers’ costs. To gather these insights and determine how well the program is increasing attacker cost, you need to set your baseline expectations and prepare to record a few key metrics.
To prepare to assess your security program, before you launch, record what you expect to see and what you don’t. Then, compare your expectations to the issues actually reported.
For example, you might record something like this: “We expect no remote code execution and we expect maybe a couple of instances of SQL injection, but we wouldn’t be surprised to see a number of Cross-Site Scripting (XSS) issues.”
Also, make sure your bug-bounty processes enable you to easily track what researchers actually find, so that you can compare your expectations with what actually happens. With this tracking, you can return to your baseline expectations two months later and see what actually matched up. You might think you’re doing something really well, but if attackers on the internet disagree, the attackers win that argument. The point of this exercise is not to see whether you can predict issues, but rather to obtain real-world data on where and how you can better invest your security resources.
In your overall program metrics, there’s no shortage of things to record, but you’ll want to make sure you record the following:
- Number of bugs
- Over time, you’re hoping to see fewer bugs, although this metric isn’t nearly as straightforward as you might think.
- Severity of each bug
- You want the time between critical issues to increase over time as they become rarer and rarer.
- Time to Remediation and Time to Triage
- These are the two key metrics you want to trend down over time, as I explain next.
When your bug-bounty program launches, it will cause a number of frantic situations in which you’re trying to find the owner of an application or infrastructure where an issue has been reported, leading to questions like, “Who owns that service that was end-of-life’d five years ago? Who do I talk to about it? How do we coordinate a remediation?” The good news of a bounty is that the first time you’re going through this situation, it’s with a benign researcher, not a malicious attacker. Most important—and this is half the value of a bounty program—over time you’ll quickly become a lot better at sorting out those questions, which will help you lower your time to triage and remediate. A few months after the launch, everyone knows the drill.
Compared to Waterfall, where changes take a long time, a DevOps-friendly security model also helps you respond more quickly, because you can deploy a patch as soon as it’s ready.
Last but not least, lowering your response time can help you increase attackers’ cost for exploiting bugs, because identified bugs no longer take months or years to fix. This means that when an attacker discovers an issue, they can’t spend months preparing to use it in the way that most benefits them, because it will likely be fixed quickly.
By tracking these metrics, you can make sure you focus the effort you put into a bounty program in the right direction, rather than treating it as just yet another source of bugs to throw in a backlog.
Inform All Teams Before the Bug-Bounty Launch
As you prepare to launch your bug-bounty program, don’t tell only your security team or limit the announcement to your larger engineering organization. Tell everyone in the company that the bug-bounty program is about to happen. Learn from my pain: When your customer support team suddenly begins receiving a lot of different test exploit payloads, you want them to know what is going on and how to contact the security team.
Review Helper Systems for Scaling Problems
In any organization, your security team makes frequent use of internal systems to get their job done, such as Splunk or internal customer service tools. Your bug-bounty program will often briefly but greatly increase traffic to these systems that your security team uses. For example, suppose that you use Splunk for log aggregation of security-relevant logs. You’ll want to make sure that a brief 10-times spike in security data doesn’t put you over your limit and cut off your access to a tool that your team relies upon at a time when you need it most.
Attacks Will Begin Almost Immediately
Because bounty programs are now launched frequently and with plenty of press, attacks starting almost immediately shouldn’t come as a shock. But it certainly did to us when we launched ours with almost no public fanfare. When the Etsy bug-bounty program launched, the time from announcement to first attack was 13 minutes. I didn’t forget a zero or make a typo there: I really mean 13 minutes.
This launch wasn’t a coordinated event with a press release and an announcement on several social platforms. This launch was a couple of people from the security team saying something on Twitter. (Later on, we wrote a blog post and made the launch more formal.)
In Figure 4-1, you can see how attacks skyrocketed starting with those tweets and continuing for the next three weeks.
I share these statistics because I’ve talked with a number of security professionals who expect they’ll slowly see attacks ramp up over days or weeks after their program launch. But if you’re launching a bounty program, you should be ready for attacks and bug reports immediately. Based on my conversations with other security teams, a rapid arrival of attacks is typical for a bug-bounty launch. Your first two or three weeks will be intense, so you need to have as many people as you can dedicated to triage and response. Don’t expect to get much sleep or get anything else done. After that, you can expect about a month of handling more than usual but less than the onslaught of the first few weeks. Finally, after about a month, reports tend to drop to what will be their natural pace for your organization.
To plan for a steep increase in attacks, make sure you plan everyone’s schedules appropriately for those two or three weeks. Also, know who your team leads are for different pieces of the code base. To assess whether your team is ready to launch, you can run this exercise: when a critical issue comes in for a particular application or service, who do you talk to? Run that exercise for a few different parts of your organization, and you’ll be in a much better position to handle the first few weeks of your bug-bounty program.
Communicating with Researchers
The key to interacting with researchers is transparency, pure and simple. For example, let’s assume that your team needs six months to fully implement a patch for a bug that a researcher finds. If you communicate with the researcher in an open and transparent way, they will likely work with you rather than reacting negatively and publishing the issue publicly. You might let the researcher know you have a ticket open for the bug they found. Share the timeline for remediation and confirm that you’re moving forward with a fix. Also, if you can, it’s often a much-appreciated gesture to pay the researcher before the bug is fixed in cases with longer remediation times.
As a real example, at Etsy we had a case in which a researcher found a bug that we realized would take us a number of months to fix. Fixing the problem itself wasn’t the issue. The issue was that the fix implementation was going to require a service to be redesigned, and this was during the time of year when a code freeze was in effect. To keep the researcher in the loop and keep them from having a negative experience, we emailed them every month with a quick status update to say we were still on track for our target implementation date.
If we had been silent, the researcher might have felt like the bug-bounty submission went into a black hole. By sending these regular updates, we showed a good faith effort to communicate and be transparent about the status of the bug they found.
At the end of the day, interaction with researchers, like so much of security, comes down to focusing on clear and transparent communication.