Published on July 26th, 2019

Right Size Security – Episode 5: Bug Bounties – Gigaom


Intro

Welcome to Right Size Security, a podcast where we discuss all manner of infosec: from enterprise security to practical security that any business can use all the way down to end user security. Your hosts for Right Size Security are me, Simon Gibson, longtime CISO—and me, Steve Ginsburg, former head of operations and CIO. For this episode of Right Size Security, we’re discussing bug bounties, penetration testing, some of the tools used to conduct them, and why this important field of information security is so nuanced. We’re going to walk through the ins and outs of this topic and explain some of the pitfalls enterprises should keep in mind before embarking on a bug bounty or penetration test—or pen test, as it’s known—and we’ll get into some of those details. Thanks for listening to Right Size Security.

Transcript

Steve Ginsburg: So those who listen to the podcast know we always start with some short topics, and I wanted to start this week first with just a quick follow-up. Last week we talked a little bit about voting machines; I brought up the interest in a standard for secure voting machines. And I questioned a little bit out loud: “Well, there must be some work done in this field.”

And I just wanted to follow up quickly, without going into all the details, but there are organizations, of course, that are engaged in this. A quick search brought up the Organization for Security and Co-operation in Europe, which is actually a very broad-mandate organization with 57 participating countries, including the U.S. and Canada in addition to European nations. And they definitely look at voting security and election security in general. Also, in the U.S.—a reminder for some folks who will know—the testing guidelines that are produced are actually voluntary, which I thought was interesting.

So the companies do have—and this will lead a little bit into some of our topics today—requirements for disclosure and testing, but it’s still voluntary, and it looks like there are multiple standards, so I just thought that was kind of interesting. And there’s also another international nonprofit called the National Democratic Institute, or NDI, which it did not look like the U.S. was a member of, according to their website. But they had some very clear standards of the type I was really thinking about, like requirements for transparency, looking more at the overall technical requirements for a good voting system and guidelines in that regard. So just an interesting topic, and one where, in the same way we’d like to see corporate security improve overall, I’d love to see voting machines in all locations continue to get more and more rock-solid, as much as they can.

Simon Gibson: Yeah. You’d think that, as part of our democratic system, it would be very intertwined with the government. The postal service is very key to our democracy, because if the mail doesn’t run, people can’t absentee vote. And given how integral voting is to our country, FIPS-, Common Criteria-, or NIST-like standards around voting machines that are mandatory and transparent are probably a good thing.

Steve Ginsburg: Yes. And this, in fact, does play a role in the U.S. Government standards for voting.

Simon Gibson: Nice.

Steve Ginsburg: And then, on the corporate side, there’s an interesting write-up I saw today on an ongoing issue, which is that a security researcher found what looks like a pretty significant problem with Zoom, the videoconferencing system.

Simon Gibson: Hm. I saw that go by, and I saw that Zoom had a problem with their video and the privacy around the camera being turned on and off when you’re in the middle of a Zoom meeting. And I did the thing quite a few researchers do more often than they care to admit; I thought, ‘Well, I have tape over my cameras all the time, and I’m sure it’s a problem, but my cameras are taped, and I’m generally pretty cautious around any kind of webcam.’ And so I did not dive into it much.

Steve Ginsburg: Yeah, so I’ve similarly opted for the sliding plastic door, and there are times when I thought, ‘Well, perhaps I’m being a little overly paranoid,’ but I also thought, ‘No, it’s probably likely that at some point...’ You know, a couple of concerning things stood out to me there. And I think Zoom is a great company; their product is excellent, and just to be clear, I’m actually a supporter overall.

Simon Gibson: Same.

Steve Ginsburg: However, it sounds like they’re running a web server to get the automatic, magic feature of being able to join a meeting.

Simon Gibson: Like a little web server running on the local machine?

Steve Ginsburg: Yeah, it runs on the client, right? And it looks like that can be exploited, the researcher was able to show. And then there was a little bit of a concern, I’d say, about the process too: there’s a note in the timeline that at a certain point, the researcher was notified that the security engineer was on vacation.

Simon Gibson: ...when he reported the vulnerability.

Steve Ginsburg: Yeah. And I think security engineers should absolutely be able to take vacation, but ideally there should be enough security staff that, for something that looks as serious as this turned out to be, a company can move quickly toward resolving it and really shouldn’t be delayed by a staff outage. So I think that just goes under our general theme that we’ve been on about, that companies need to figure out how to provide excellent security. Hopefully, with each one of these events, enterprise listeners and people responsible for these programs will continue to have more fuel to improve them.

Simon Gibson: So interestingly, Steve, with Zoom, there isn’t a way that I was able to find—Googling around for 5 to 10 minutes—to report a vulnerability. There is no responsible disclosure program; they don’t have a portal or any kind of policy or framework to let you submit a problem, if you happen to find one. Bug bounty program aside, if you are just a user of the service, I expect that, as a paid member or even perhaps inside their licensing portal, you can file a support ticket and someone will get back to you. But as for the researcher getting a reply that their security engineer isn’t in—I honestly am just not in the least bit surprised that that’s their answer. It’s unfortunate; it’s a 25 billion-dollar market cap company, but…

Steve Ginsburg: Yeah. And it really leads into our topic perfectly today. We’re looking at how companies should structure their pen-testing and bug bounty and have a program that’s robust and really improves their overall brand, the brand experience and the product experience, and also really leverages the large security community that’s out there.

Simon Gibson: Yeah. Very topical. So let’s get at it. Let’s get at bug bounties and pen testing and the values and differences between the two, first of all. I think that’s an important one.

Steve Ginsburg: Why don’t I let you do the definition?

Simon Gibson: Sure. I think it’s helpful to understand a little bit: a penetration test and a bug bounty have the same goals. They want to do the same things. They go about them very differently, though, and I think those nuances are the important ones for enterprises to understand. My sense is that bug bounties began—my earliest recollection of bug bounties was partially based in the open-source world, with things like Apache, the Linux kernel, and FreeBSD—and then the first commercial version probably was started at Microsoft, and is still arguably the best in the world. They needed a method for Microsoft Windows end users to report vulnerabilities, and I think that’s how they got going.

Steve Ginsburg: Yeah, there was a period in the not-so-distant past where security was actually something that could have sunk Microsoft operating systems. For a while, there were just so many Windows security issues that certainly those of us in the Unix community felt a vast difference. But sadly, I have to say that over time there were later some exploits discovered at the core of Unix that really took away some of the bragging rights Unix would have had; some very significant problems there, too.

Simon Gibson: I mean, I think—security aside, just shelving it for a second—it was the ubiquitous Windows desktop, and the fact that if a bug happened on one, it happened on all. So that meant, in effect, the entire world, the entire enterprise: every system and every company everywhere, apart from the Unix machines or the SCADA machines or the mainframes, had that problem.

Steve Ginsburg: That’s right. And it makes it the thing to target, for most of the folks who want to target anything, because the opportunity is also really there.

Simon Gibson: Yep, good aside. Dan Geer—who was CTO at @Stake, a security consultancy that counted Microsoft as a client—made essentially that public statement and was promptly fired for saying that exact thing.

Steve Ginsburg: For representing that they were the big target because...

Simon Gibson: Yeah, absolutely. But that’s another story. For pen testing and bug bounties, really, it’s a way of setting some boundaries and some guidelines about how to report things. Penetration testing, I think, probably sprang up more out of the need to work within a big company, an enterprise. If you hire someone to pen test, you hire a company and you bring them in, or you build a group of employees.

Either way, it’s very much the way companies are used to doing business. There’s a master service agreement, there’s a contract, there’s an NDA, there’s legal teeth around it, there’s a scope, there’s terms of service, there’s not-to-exceeds, there’s timelines, there’s a set of deliverables. And all those things [must have value because] big companies don’t do things unless it’s going to add some sort of value; so somebody somewhere has calculated a value.

Steve Ginsburg: Yeah. In our last episode, we talked about SIEMs and situational awareness that a security team can build themselves. And of course, security teams can do their own pen tests—and we should talk about that—internal pen tests, but when you move to want to leverage other organizations to help you out, this is a great way to do it, and I think both provide pretty powerful models.

Simon Gibson: Yeah. I think it is definitely a question of the duration, the focus, and the level of comfort. And we can definitely talk about those. So the next big, important thing with this topic, pen tests or bug bounties, is ranking vulnerabilities and scoring them.

Steve Ginsburg: Yeah. So one of the things you asked me when we were looking at this was to share a little bit of the CIO’s perspective: How do you go into this? Why do you go into this? What concerns are going to come up? At least one of them is going to be: ‘Well, we’ve got a lot of different security issues, potentially.’ If you have a complex product—and I think the examples we gave up front, both in voting and in the corporate world, show this—over time, all digital code is probably exploitable. Maybe some things are so simple they’re not. But generally speaking, if you have any complex organism on the digital side, there’s going to be some way to pry it open at some point, even if overall you have a good experience.

So I think, looking at that landscape and really being able to cull out priority, that’s one of the things that, as I came to understand it myself and when I’ve talked to peers, as an executive sponsor of that or somebody who’s going to be responsible for that program being financed and being undertaken, knowing that there’s going to be a value return for [it]—I don’t really want to just find a million trivial things that we’re not going to fix.

Simon Gibson: Exactly. And putting a rating or ranking on a discovered vulnerability helps you then measure the risk to your business: the risk of it being discovered, the reputational risk, the availability risk, the risk of data being exposed, all those kinds of things. So it’s important to understand the ranking system. A common one is CVSS, the Common Vulnerability Scoring System. It effectively measures the impact of a vulnerability against criteria like availability: Is this vulnerability going to take down the system? It measures the vulnerability’s risk of exposing confidential information. And it measures the risk to integrity: Can you trust the data? So those are sort of the three main criteria.
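As a rough illustration of how CVSS-based ranking feeds triage, here is a minimal Python sketch. The severity bands follow the published CVSS v3.x qualitative scale; the findings, IDs, and scores below are hypothetical examples, not anything from the discussion.

```python
# Minimal triage sketch: rank hypothetical findings by CVSS v3.x base score.
# Severity bands follow the published CVSS v3.x qualitative scale.

def severity(score: float) -> str:
    """Map a CVSS v3.x base score (0.0-10.0) to its qualitative rating."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Hypothetical findings from a pen test or bug bounty queue.
findings = [
    {"id": "RPT-101", "title": "Reflected XSS in search page", "cvss": 6.1},
    {"id": "RPT-102", "title": "SQL injection in billing API", "cvss": 9.8},
    {"id": "RPT-103", "title": "Verbose error message leaks stack trace", "cvss": 3.1},
]

# Work the queue from highest impact down.
for f in sorted(findings, key=lambda f: f["cvss"], reverse=True):
    print(f'{f["id"]}  {severity(f["cvss"]):<8}  {f["cvss"]:>4}  {f["title"]}')
```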

Steve Ginsburg: And you mentioned this well before, but it’s maybe worth repeating: those things can be weighted differently depending on the organization and on what area of the organization is involved. So, for example, there are different types of confidential information, and some might be considered much more of a business risk than others. HR data is an example you’ve given in the past: of course you never want that exposed, but it might not be as business-critical, depending on what it is, as customer data, for example, where…

Simon Gibson: Or exposing C code, or exposing certain inner workings of applications that would then lead to more sophisticated attacks. There are nuances in that as well, but those are the three main things, and there are a few others. But being able to measure your vulnerabilities is super important and, again, to your point, if you’re going to run these programs, you want to understand what the value of actually implementing them is.

And then I think you brought up earlier—what kind of trouble are these going to cause? Are things going to get taken down? Are things going to break?

Steve Ginsburg: Yeah. You know, all the conversations about modern companies involve the rapid rate of change and the increased business responsibility for companies to keep delivering quality product. And so anything that’s going to be [an] interruption of either teams that are developing or teams that are meant to secure or operate the organization, that has to be factored [in].

So on the one hand, there’s very high value in discovering any potential exploit and the trouble it can cause to availability and company reputation; on the other hand, one has to be careful that they’re not just creating busywork or disrupting quality work, even if for a valid reason.

Simon Gibson: Yep. One of the things that I think companies don’t really understand until they start grappling with these is embarrassment. If a vulnerability is found, it doesn’t matter whether it came through a pen test, an end user reported it, or an engineer found it. If a company realizes they have a critical vulnerability, they need to patch it and inform customers about it. In a world where there are apps and app updates and people take rolling security patches all the time, there’s a little bit less worry around that, because people are used to getting security updates; it just happens, and you don’t really need to explain a lot about it.

In a world of routers or data conduits and optics, whatever the thing is, to tell your biggest customers—your 10 million-dollar customers—‘Oh, we have this really risky vulnerability in your core networks, please patch this,’ companies have to be ready to ‘bite down on the rag’ and do that.

Steve Ginsburg: Sure. It’s not a happy discussion at that point. But I think also, folks who are doing vendor management within any organization are going to ask: ‘Are my partners responsible about these things over time? How do they respond to these things?’ So to your point, I think there is a great understanding that security risks happen, but companies that don’t manage them well do get managed out.

Simon Gibson: Yeah. We had a section about partner and supply chain risk, and I think that applies absolutely. Companies really have to sit down and think at an executive level: ‘If we patch this vulnerability and tell our customer it’s critical, is there a risk that they’re going to leave—stop buying from us, not renew their contracts? Or are they going to look at us and think we’re a good partner? And are we really building credibility with them by coming to them ahead of a vulnerability being disclosed?’—which is another super nuanced part of this as well.

Steve Ginsburg: Yes. And having their operational story very clear. We’d had engagements in the past where there might be a security issue or a reliability issue, and then, on calls dialing in to the company, it was clear that their folks actually did not have a clear picture of what was happening. In other words, some of the discussion about remediation, or possible steps to mitigate problems, was not accurate.

So really understanding what’s going on matters—and you mentioned cloud security as we were talking about this, too. In a cloud world that becomes potentially more difficult, because a lot of companies are leveraging cloud for vast amounts of their infrastructure. Those that are doing it responsibly will understand the significant pieces and the implications of all that in detail, and those that don’t will potentially be in a place where they can’t enforce good security, or a good security response.

Simon Gibson: Well, and I think that’s a good place to pull apart responsible disclosure and coordinated disclosure. In the world of telecom and routing and large interconnected systems, for vulnerabilities discovered that could potentially affect the Internet at large, there’s sometimes the notion of a coordinated disclosure: the people who fix it, who are responsible for maintaining these systems, get together, release a patch ahead of it being public, they go patch all the things, and then the vulnerability gets disclosed, and then everybody runs behind it and patches—but the core stuff is done.

And that’s the real nuance, which is: this vulnerability can be discovered whether or not you have a disclosure program. This will come out if somebody finds it. Or it’s going to be sold and kept quiet, and used on the black market as a zero day, and sold for potentially a lot of money, depending on what kind of vulnerability it is, on what platform and how reliably it triggers.

Steve Ginsburg: Right. And it also brings up for me herd immunity—communities can be very helpful in security. So just a quick call-out there that enterprise teams, people who are running security programs in any way, your security leaders and your IT leaders, should be talking to other folks at other companies and other organizations about what they’re seeing for security and what they’re doing to improve their programs.

Simon Gibson: But even if they’re busy and they’re not doing as much of that as they should, they should have a really good understanding that if a vulnerability is discovered and it’s brought to their attention, they now have guilty knowledge; and if it gets disclosed to their customers by someone other than them, that’s probably worse than them disclosing it themselves, right?

Steve Ginsburg: Absolutely.

Simon Gibson: All things to keep in mind. Again, this is such a nuanced space; I love it specifically because of that. So we talked about why they’re different, and we talked a bit about cloud, so let’s get into: What are the things you need to do to start running pen tests repeatedly and reliably, or to open a bug bounty or, at the very least, a responsible disclosure program—which, in the case of our opening topic, Zoom, they didn’t seem to have.

Steve Ginsburg: Right. So one of the things that’s at the start is the executive sponsorship. I alluded to it before, but as we talked about earlier, it’s a very important piece, which is: you’re going to create this program, and there are multiple ways to go about it, in terms of what outside parties you use, how you leverage the outside community and your own teams to do these things. But when you raise issues—we talked about resourcing, we talked about priority—how are you going to make your way through all that?

It’s great when direct contributors can just work that out on their own, but they really need a framework, as we talked about, to make that work. And then if they have conflict, or they’re not sure whether work will be prioritized or what approaches should be taken, they need to be able to escalate that through management leadership. And if there’s not a clear path, you can get gridlock right there.

Simon Gibson: Yeah. I mean, that’s for sure. Any well-meaning CISO can put up a security@ address at their company, get a little bit of indemnification—which we’ll talk about in a second—and start, sort of, a program. But what happens when there’s a real critical vulnerability in the product, and now you have to bring in the general counsel and the CEO, and they have to make a decision about how they talk about it: what they tell their customers, what they tell their board? It does need executive sponsorship. And also, if people are going to spend money and hire engineers, or take engineers off other projects to work on this, there needs to be some value.

So somebody needs to work out what the value is in having a vulnerability disclosure program. How much does that add to the QA process? Hiring a pen test—they’re not cheap. Pen tests can be many hundreds of thousands of dollars, depending on the scope and the time and who you’re hiring. So what is the actual value proposition? Is it reputational risk? Is it that you need to be seen by your shareholders and your board as having done these things? Are you doing M&A, are you buying a company, and do you want to pen test their code and see how they are before you actually sign the deal or give them a term sheet? There are a million reasons why these things have value, but a company needs executive leadership to really work that out. I think the CSO and CISO are the people who can do a good job explaining it, if they understand this space well.

Steve Ginsburg: Yeah. I think M&A is a perfect example. There are certainly lots of cases of companies having been acquired and then greater security risks being discovered after the fact, which is certainly a pretty big business risk if you’re the one who has done the acquiring and the asset you have doesn’t have the value you thought, because there are security risks present, for example.

Simon Gibson: Yeah, or the company loses value right after you bought it because something was disclosed—some vulnerability in some piece of medical equipment—and you were shorted. So the other thing, apart from the reputational aspects and the executive sponsorship for a program, is a legal framework: something you need to understand really clearly before you start wading into pen tests, bug bounties, and disclosure programs.

Steve Ginsburg: Yeah, certainly for disclosure, there are national, and certainly there are state, laws which might be different than your overall commitment. I know in California there are strong disclosure laws, for example. So there might be some real important actions that you’re going to need to take. Your legal team, and then your operational team, as a result, need to be clear what those are.

Simon Gibson: Right. And I think it’s important to unpack this—we use the word ‘disclosure’ kind of interchangeably. In one sense, there’s the company disclosing that they have had a breach and notifying people; that varies from state to state. There’s a disclosure policy that needs to exist around what you will disclose to the community at large—what you’re willing to expose about how your company works and what happened. And there’s also disclosure outbound to the researchers who are in the bug bounty, about what you have in scope and what you have disclosed: these are the rules of the road; if we’re going to do a bug bounty or a pen test, this is the scope around it. So there’s a disclosure piece there as well. It goes both ways, and ‘disclosure’ is an awfully large word; it encompasses a lot of things.

Steve Ginsburg: Right. It’s easy to just say, “the time when you’re going to say something,” but right, it has some very specific context in this realm.

Simon Gibson: Yeah. It’s definitely very contextual in how you use it. The legal framework around the DMCA and the Computer Fraud and Abuse Act is another important thing, and this goes into executive sponsorship; the executives need to be made aware of this. Take, for example, Simon Widgets: normally I’m protected by the DMCA and the Computer Fraud and Abuse Act, but if we open a bug bounty on a project and say, ‘Go ahead and hack it,’ that changes things.

If somebody hacks into my company, I can prosecute them. And that’s why you don’t just see people attacking websites and, ‘Oh, I’ve attacked a website.’ No, you’re going to jail; you have hacked a website. If you hack a website as part of a bug bounty, somebody has indemnified the work that you’ve done. Otherwise, you’re a hacker, and that’s against the law. It’s a really important thing. So when you do decide to indemnify, are you risking bringing in a hacker? And now you can’t sue them?

Steve Ginsburg: Now you’ve given them a legal cover.

Simon Gibson: Exactly.

Steve Ginsburg: Yeah, and just—this may be obvious, but part of the framework to get involved with having a bug bounty in the first place is—those of us who are involved in security know that you’re basically seeing automated ‘door latch’ attacks...

Simon Gibson: That’s a good analogy, sure, yeah.

Steve Ginsburg: Constantly, right.

Simon Gibson: Yeah. Door-rattling.

Steve Ginsburg: People are constantly probing for an opening, and at the heart of it, it’s a good example of how the bug bounty is really about taking an expert community and saying, ‘Okay, I will provide you a lane in, where we will share that.’ And when you mention disclosure, one of the problems with not having a bug bounty program in place is: what if you do get a responsible request from a security researcher who’s found something?

Security researchers and hackers—there’s a wide range of personalities out there. You’re going to have folks who are really the bad guys; they’re just going to try to get in and do whatever, and their approach to you is going to be whatever that is. But you also have very concerned, responsible security researchers, and some of those are independent folks who view it as a real, legitimate job. And so they’re going to want to be compensated, but...

Simon Gibson: Or at least recognized.

Steve Ginsburg: Yeah. That’s right. It can be different, depending on what—but you don’t want to do every one as a one-off situation.

Simon Gibson: Yeah. You’re going to want it fixed. If I have a piece of software, especially if I’m paying for it, I want to notify the company: ‘You know, there’s a problem, I can exploit this. You’ve got to get it fixed.’ So there’s definitely a sense of urgency. But at the end of the day, whether or not you have that program, bugs can drop; people will announce those things. Even if there isn’t a security@ contact at the company, or a vulnerability disclosure program, or a bug bounty, people can just announce it.

Google has a pretty strict policy with their Project Zero, which has done a lot to find bugs in software. They actively research, and they will let companies know they have 90 days to respond and fix an issue; if they don’t, Google goes public. I think if the company’s working really hard to fix it, they’ll give them some leeway, but Tavis Ormandy could show up with a zero day, and everybody had better drop everything, ‘cause in 90 days they’re going to release the vulnerability in your product.

Steve Ginsburg: Yeah. And I think also, a big part of wanting to fix these things is—we’ve talked about it before—a lot of hacks take a long time to be discovered, and not only do you want to know, you want to know soon. If there is an exploit on your website and someone gets into your internal network, or gets into your customer data, you want to find that out—ideally, you might find it out before it’s exploited, right? A responsible security researcher finds it and tells you, you remediate it, and your customer data is never threatened, for example. If you don’t have a program like this, sometimes people can be living on your systems for months or years, right?

Simon Gibson: Or just shelving that vulnerability for use when they’re ready. I mean, that’s the whole market for zero days. RAND did a big, long study—basically a book about it, maybe a year ago, maybe a little longer—but there’s a whole market for zero days. And it’s an interesting economic incentivization model that some of the more modern pen testing companies have adopted. In the zero day market model, the better the vulnerability—the more reliably it triggers, and the platform it triggers on, so the scarcity—the higher the value of the vulnerability.

So for example: a reasonably good iOS bug that can infect an iPhone and that no one knows about is probably worth on the order of $500,000. It’s got a face value of that, give or take, depending on who’s buying it and what the nature of it is. But it’s a lot of money. So for the researchers who work on finding these—if you find two or three of those a year, you’ve got a small company, you work from home, and you’re doing okay. It’s super illegal; you’re probably on a whole lot of strange government lists, but it’s a market. It’s an economy.

What some of the modern pen test companies have done—there are a couple of them; Synack is one—is recognize that paying researchers to work on a penetration test by the hour doesn’t necessarily incentivize them; rather, pay the pen tester based on the quality of the vulnerabilities they find during the pen test. And what Synack found was that they get many more hours out of their researchers. So imagine: even if you’re salaried and expected to work 8 or 10 hours a day, if you’re incentivized by the vulnerability, you might stay up all night working on this and come back to it... you might spend your weekends and evenings just crushing it, because you’re finding vulnerabilities. And what that ends up producing is a really high-value return for the customer.

Steve Ginsburg: Yeah. A big way that I looked at it was that it essentially takes the wide diversity of who security hackers—white hat and black hat—are. Basically, the way I looked at it was: there will be black hats coming at us, and this is a way to have white hat hackers working for you.

Simon Gibson: Right. And this gets into the product, which is—the ability to vet researchers reliably; the ability to make sure that whatever they’re doing, there’s some controls around it. So you know, some of the companies we looked at in the pen test and bug bounty report have very novel methods for letting researchers get access to the systems that they’re testing, so that they can be monitored. Again, it goes to the point of: if I let somebody hack my system, am I really sure they’ve left it, and they didn’t put anything on the system that I don’t know about now? And can I be sure of that? That’s a difficult question to answer in some environments.

Steve Ginsburg: Right. This doesn’t replace the need for your SIEM and the situational awareness from your own direct monitoring at all. But it can certainly enhance them, giving you a kind of 360-degree view, by definition.

Simon Gibson: Yeah. But for sure, opening the door and allowing hackers to come in is not... I think most companies are pretty averse to that. And so understanding the costs and benefits is an important analysis to do. The next thing that’s really, really important is an internal process for this kind of stuff: just the communication between somebody reporting a vulnerability and you acknowledging you’ve received it, and then some sort of guideline as to how long you’re going to take to respond. Just having something that simple—because again, imagine the researcher who finds a vulnerability in a piece of software running on his or her machine that makes their machine vulnerable. They’re paying for the software, they want it fixed—or they’re not, but regardless, they’re feeling a little betrayed. And if they understand that this piece of software is being used by tens of millions or hundreds of millions of people, now there starts to be a little bit of pressure on this researcher.

The very least the company can do is say, ‘Thank you for reporting this.’ The company will usually ask for a way to reproduce it, so it can verify the vulnerability, and then respond back and say, ‘We’ve taken this in; this is truly a vulnerability. We are going to fix it.’ You need a process to do that. I mean, even finding the right programming group in a big company to address the vulnerability can be challenging. I can submit a bug, but who’s the development team that owns the bug?

Steve Ginsburg: Right. And that communication itself can be an interesting point because of the diversity of the community. That security researcher who’s reporting a bug might have all sorts of different expectations. We already came up with a few different things they might be wanting or needing, and the companies who are going to be responding, they need to be sensitive to that.

One example again—not to dwell on the Zoom situation—I just thought it was very interesting. The security researcher—overall I would give him high marks, just my personal opinion, from what I saw of how he reported it and the write-up on it—but one thing that caught my attention: for one of the features he called to their attention, Zoom gave a very polished answer, that they wanted to keep their customers having the flexibility to either use this feature or not, and it was clearly well thought out. He called them on it being a PR answer and essentially said, negatively, ‘I don’t really want to hear a PR answer in the middle of an ongoing security discussion.’ Which is a fair point.

On the other side, that’s one I had to see from both sides, which is: the company is communicating [about] something that, as it turns out, is ultimately going to reach the public. And so perhaps a polished, professional answer is the way to lead in some of these cases. But I think both of those are good points, and striking the right balance is really the way to go. You mentioned that with different PR firms, you might get a different response, too, if you get to the point where you’re going to a fully public discussion on a situation.

Simon Gibson: Yeah. For an incident that—it’s a very different type of public relations to manage a crisis than it is to get your latest feature into the Wall Street Journal. It’s a different company, or it’s a different discipline.

Steve Ginsburg: [Different] team in the company.

Simon Gibson: Yeah, a different team. And not only do you need a process to manage communication; you need to be able to manage it internally: okay, we’ve got a bug, somebody needs to verify it. Now that it’s verified, there needs to be a ticket. So is there a Jira ticket open? Is there a help desk ticket? Where this can fold in nicely is with companies that already have help desk and ticketing systems. This can run right alongside them: when a customer has an outage or a critical-severity bug, you already have a way to measure those things and work on them, so a vulnerability program can run alongside that, but you still need to build it out. You still need to make sure that when the person on the help desk, or the person on the customer service team, gets the report, they have a script and they know what to say.

And then they need a tool they can put the bug into—and there may not be a Jira project for security. There just might not be. So maybe, if you don’t create a Jira project, you have some special tag that you build in for these kinds of things: you flag them specifically so they can be tracked and remediated, and you have a process to report on them, and an SLA, and all those kinds of things.
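As a rough sketch of what that intake-and-tagging flow could look like, here is a minimal Python example. The field names, tags, scripted acknowledgment, and SLA windows are hypothetical illustrations, not anyone’s actual process or tooling.

```python
# Minimal sketch of a vulnerability-report intake record that could feed an
# existing ticket system. Field names, tags, and SLA windows are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical remediation SLAs per severity, in days.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

@dataclass
class VulnReport:
    reporter: str
    summary: str
    severity: str                                   # "critical" | "high" | "medium" | "low"
    received: datetime = field(default_factory=datetime.utcnow)
    tags: tuple = ("security", "external-report")   # special tag so these can be tracked and reported on
    acknowledged: bool = False

    def acknowledge(self) -> str:
        """The scripted first response the help desk sends back to the researcher."""
        self.acknowledged = True
        return ("Thank you for reporting this. We are verifying the issue and will "
                "respond with next steps; please send reproduction steps if you can.")

    def fix_due(self) -> datetime:
        """Remediation deadline implied by the (hypothetical) SLA table."""
        return self.received + timedelta(days=SLA_DAYS[self.severity])

# Hypothetical report coming in from an outside researcher.
report = VulnReport(
    reporter="outside researcher",
    summary="Auth bypass in client updater",
    severity="critical",
)
print(report.acknowledge())
print("Fix due by:", report.fix_due().date())
```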

And then I think you brought this up too, in terms of rules of engagement: how much money are you going to spend? Once a bug is reported and you rank its severity, what, based on that severity, is the incentivization program?

Steve Ginsburg: Right. And we mentioned, I think, for companies that don’t have active incidents coming through—so some companies will start a program, or they’re already engaged in ad hoc methods of dealing with these security bugs, and then they bring the program... retrofit as things are moving—but for companies that aren’t very active, they definitely need to be practicing these things, too. Because all this communication and incident response team—if those muscles aren’t flexed... Also, you’ll find out people don’t respond the way that you might expect, even if it was written in a plan that everyone agreed to six months ago, that type of thing.

Simon Gibson: Yeah, it’s true, and in some cases a bug bounty may yield a lot of low-hanging fruit that gets repaired quickly, and it doesn’t really flex those muscles. A pen test—or an ongoing pen test quarterly, or on whatever your release cycle is—helps keep those muscles going, and that’s something you can hire for and keep rolling. And again, it’s a good differentiator; they have the same goals, but they perform different functions. The pen test and the bug bounty—it’s important to think about them very differently.

Steve Ginsburg: I think one thing too that, again, might be obvious, but I’m not sure that we really made clear here: for smaller companies that are getting involved and are considering pen tests and bug bounties, we mentioned leveraging a community to do that. One of the things that can be very dramatic about this is leveraging a much larger scale than you have. So most companies are struggling to keep an appropriate number of security engineers on staff.

We’ve talked about—[it] depends on the organization, but let’s face it, most organizations would rather pay for whatever is obviously driving revenue than the security aspect, which does enhance revenue, and in some cases can drive revenue by great reputation and great answers for security reviews, and things like that. So security teams can drive revenue, but it’s not as obvious as the core product of many companies, many organizations.

Simon Gibson: For sure.

Steve Ginsburg: So as a result, most security teams are not going to have dozens of extra employees, for example. And so the bug bounty and the pen tests can be used in coordinated methods, either together or alternating, ideally ongoing, to really bring a much larger pool of individuals looking at your website or your public…

Simon Gibson: Yeah, whatever you’re making.

Steve Ginsburg: Yeah. Your public footprint, right?

Simon Gibson: Yeah. Another interesting use case is around cloud. Early on, I looked at different cloud offerings, and I spoke to the CISO—or CSO—and one of my concerns was, ‘Aren’t you a really juicy target? There’s so much stuff up there. Isn’t everyone coming after you?’ ‘Well, yes, but we have a bunch of Fortune 100s, and some Fortune 10s—they’ve all pen-tested us, and they’ve all found different things, and so now every company who uses us benefits from the work they did.’ And that’s an interesting way to think about cloud: there can be very specific, focused pen tests. If a large Fortune 10 company wants to go use some SaaS service and it’s an important...public-private cloud...some sort of relationship—you will probably get a pen test on that. And then you, the company that can’t necessarily afford that, will benefit from those things.

Steve Ginsburg: Right. And that’s also a good example of: if you can afford it, you’d like to do it before your next biggest customer or potential customer does it—and finds serious problems that make them not want to do business with you.

Simon Gibson: That’s a really interesting one. I had a team that worked for me, and we would routinely test things and find pretty significant problems. It was a pretty routine thing, and not trivial problems but real, serious problems. And it didn’t mean we didn’t want to do business; what would hurt the vendors we worked with was taking a long time to fix the problems.

We had a particular vendor where we had millions of dollars embargoed—all the different leaders around the company agreed not to buy any of their stuff because of these vulnerabilities. And it took them many quarters to fix them. Then they finally did, and we were able to verify that the fix was in, and they ended up getting a big customer in us. So the thing that will hurt you isn’t the pen test; it’s the refusal to fix things, or the priorities... those are the things that will hurt you with a big company.

Steve Ginsburg: Yeah. And from the flip side, that’s a great way to show how security can drive revenue. If you do a great job, better than your competitors on that, you’re now in business, and they’re going to come to you and move forward.

Simon Gibson: So let’s get into a couple of the challenges. How you message things is important. We sort of hit that with the crisis communication plan. I have always thought it’s important—and have implemented these—to have a crisis communication plan in the desk, so that once something like this does happen, you have the right lawyers to hire, the right firm to hire, the right messaging: this terrible thing happened (insert thing here), this is what we’re doing about it, this is how long we expect it to take, here are the people who are engaged on it. Have a plan that works with the community, with the rest of the world.

Another thing, on the executive sponsorship side, is that having everybody agree at the onset on how the severity of a vulnerability will be judged, and what priority each level should be given when vulnerabilities are presented, is very important.

Steve Ginsburg: Absolutely. And with both of these, I think, a big part is to consider that when security events happen, one shouldn’t think of them as sort of happening in this vacuum where it’s like, ‘Okay, this security event is going on.’ If there are security events, they’re going to happen at the same time that other things are happening. They’re going to happen when your company has a big trade show or quarterly reporting or the executives are on their ‘off site’ [meeting] or people are traveling overseas, or any...

Simon Gibson: Product launch.

Steve Ginsburg: Any number of things. Companies are very busy. And the individuals who work, especially at the board level and the exec level, they’re incredibly busy. And so you need to know—ideally, you don’t want to pull those people out of meetings, if you don’t need to, and then if you need to, you need to be ready to do that.

Simon Gibson: Well, and they have to have an agreement up front. Because there’s nothing worse than sitting in a room of executives and having three of them agree, ‘This is a vulnerability, we should do something,’ and two of them say, ‘Well, I don’t know how important this is, and I think we should just keep on doing the other thing.’ You really need a clear guideline and a clear matrix: ‘This has now crossed the threshold into sev 1, and we need to do everything,’ or ‘This is sev 3, and we’ll issue a workaround and get a fix out.’ That stuff really needs to be agreed on up front, ‘cause you don’t want to hit that deadlock.
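To make that concrete, here is one hedged sketch of what such a pre-agreed severity matrix might look like in code. The levels, entry criteria, and response actions are illustrative assumptions, not a standard or anyone’s actual policy.

```python
# Illustrative pre-agreed severity matrix: once a finding crosses a threshold,
# the response is already decided, so there is no debate in the room.
# Levels, criteria, and actions below are hypothetical examples.
from typing import Optional, Tuple

SEVERITY_MATRIX = [
    # (level, entry criterion, agreed response)
    ("sev1", lambda f: f["exploitable_remotely"] and f["customer_data_at_risk"],
     "drop everything: exec bridge, customer notification plan, fix target 72 hours"),
    ("sev2", lambda f: f["exploitable_remotely"],
     "emergency patch in the next out-of-cycle release, fix target 2 weeks"),
    ("sev3", lambda f: True,
     "publish a workaround, fix in the normal release cycle"),
]

def triage(finding: dict) -> Optional[Tuple[str, str]]:
    """Return the first severity level whose criterion the finding meets."""
    for level, matches, response in SEVERITY_MATRIX:
        if matches(finding):
            return level, response
    return None

# Hypothetical finding reported through the bug bounty.
finding = {"exploitable_remotely": True, "customer_data_at_risk": True}
level, response = triage(finding)
print(level, "->", response)
```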

Steve Ginsburg: That’s right. And we talked in our last episode about having a clear message from your monitoring so you have the clear story. If you’re in an evolving situation, it’s really a combination of having good information coming from the outside, good information coming from the inside, and then having that fit into a clear definition, as you were saying, of what those things will mean if...

Simon Gibson: Right—no ambiguity, and it’s not up to one person to say, ‘Well, I think...’ You have crossed the threshold, and it’s empirical. The other thing—we talked about flexing muscles—one of the things that I think is taken for granted but is an important factor is learning from all of this. If you run a vulnerability program, a pen test, or a bug bounty, and every three to six months new releases are turning up cross-site scripting or SQL injection, maybe there’s some training that could happen that would prevent these kinds of things, you know?

Steve Ginsburg: Yeah. There is a lot of security training that can benefit developers and others in organizations. It’s really about how they’re going about their work over time.

Simon Gibson: Yeah, yeah. And flexing these muscles means looking at the trends, and then applying the right security training. We see this one problem recurring; is there a group here that’s doing this, or is this across the company? Do we need some libraries, or some way to link things in so inputs are sanitized? Do we need a new process deployed for our programmers?

Steve Ginsburg: Right. And that can sometimes be about seniority of teams, but sometimes it’s not about that at all.

Simon Gibson: The busyness.

Steve Ginsburg: And also, there are all sorts of specialties. Coming from webscale companies and dealing mostly with that, I tend to think of software developers of a certain type, but across all enterprises there are software developers who specialize closer to machine hardware, or closer to any number of things, where their sense of modern web-facing security exploits might be very, very different—and yet they might still come across them. A good example would be how webcams became exploitable in the wild: a device whose makers maybe didn’t consider it a security device in any way, and yet it can be responsible for some of the biggest storms on the Internet.

Simon Gibson: Yeah. And I think that’s a good way to wrap this up: these are extremely valuable tests. Whether it’s physical security—you think you have controls for access to buildings and doors and perimeters—maybe they don’t work the way you think they do. You issue a badge in one place, a person has to walk through an area—can they swap the badge for another color and get access before anybody has a chance to get a look at it?

There are all sorts of sneaky things people will do, thinking outside the box, when you believe you have a set of controls that work around controlling access to a database. Is there a hardcoded credential somewhere? Is somebody using directory traversal? Is there somebody somewhere looking at something in a way that you’re not? These tests prove that. These tests show that, and they are super valuable.

Bug bounties can cost very little—you can spend some money only on high-quality vulnerabilities that are submitted—or you can spend a ton of money on them. You can hire pen tests that do simple things like scanning, or pay seriously experienced researchers to work hard on a specific application before you release it, and find things. There’s a ton of value here, and understanding the values and the risks is super important. It’s one of those things that companies should not avoid doing, but they need to understand the risks. Fortunately, I think this has evolved to the point where there are a lot of good partners to work with today. And that’s what our report is going to cover.

Steve Ginsburg: Yeah, absolutely, and maybe just a final thought from my side: we talked a bit about targeting, and I think scope matters. For those who are thinking about this, maybe if they’re newer and want to get engaged: each one of these can be intelligently scoped to start, and then you can widen them, or launch multiple efforts, as it makes sense.

Simon Gibson: And yes, even for very large companies who don’t necessarily want to expose everything wrong with them, I think scoping things is very important, for sure. I think that’s it. This was a good one. Thanks, Steve.

Steve Ginsburg: Thanks, Simon.

Simon Gibson: Thanks for listening to Right Size Security.


