Published on May 31st, 2019

What mechanisms can help address today’s biggest cybersecurity challenges?


In this Help Net Security podcast, Syed Abdur Rahman, Director of Products with unified risk management provider Brinqa, talks about the company's risk-centric, knowledge-driven approach to cybersecurity problems like vulnerability management, application security, and cloud and container security.

Here's a transcript of the podcast for your convenience.

Hi, my name is Syed Abdur and I'm the Director of Products at Brinqa, where I'm responsible for product management and technical product marketing.

Brinqa is a cyber risk management company based out of Austin, Texas. We pride ourselves on a unique risk-centric, knowledge-driven approach to cybersecurity problems like vulnerability management, application security, and cloud and container security. We see these problems as a subset of a greater category of cyber risk management problems.

We're really excited to see that our unique approach to these problems is resonating with the industry – both in terms of our customers, who represent some of the largest organizations in the retail, healthcare, critical infrastructure and financial services verticals, and in terms of awards. We recently won the Cyber Defense Magazine InfoSec Award for the best product in the vulnerability management category, as well as the Groundbreaking Company Award in the application security category, awarded at RSAC this year.

To explain Brinqa's product philosophy, I'm going to talk for a bit about a concept known as knowledge graphs, which aligns really well with the way we think about the information infrastructure necessary to address cybersecurity problems. Knowledge graphs are the information architecture behind Google search. You know how, when you search for a term on Google, you see an index card on the right with a list of all the related terms about the specific thing you searched for?

Google is able to present you these results at a moment's notice because it has already built a gigantic knowledge graph of all the information in its database and, more importantly, the relationships that exist between all of that information. Imagine if you had a similar tool for cybersecurity: a knowledge graph or knowledge base for all your relevant cybersecurity information, with any dependencies and relationships necessary to answer a question no more than a click away.

But can this type of mechanism help address some of the biggest challenges in cybersecurity today? For instance, can it help us make sure that cybersecurity decisions are based on complete, accurate and up-to-date information? Can it help us ensure that we're making the best use of our cybersecurity tools, budgets and resources? Can it help us determine whether more tools and solutions are helping us or hurting us?

There are four key characteristics that define knowledge graphs. These are also rules that all Brinqa applications follow religiously.

The first rule is that it's literally a graph: a collection of nodes and relationships. This matters because if a relationship exists between two pieces of information or facts, we are able to actually make use of it. In the context of cybersecurity this becomes really important, because it ensures that if there is any information relevant to our risk analysis, no matter where it exists in the organization, as long as it is related to the asset we are analyzing, we can get to it and make use of it as part of our analysis.
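
To make the idea concrete, here is a minimal sketch of such a graph in Python using the networkx library; the node and relationship types are invented for illustration and are not Brinqa's actual data model.

    # Minimal sketch of a cybersecurity knowledge graph (illustrative only).
    import networkx as nx

    graph = nx.MultiDiGraph()

    # Nodes carry a type and arbitrary attributes.
    graph.add_node("host-db01", kind="Asset", owner="dba-team", criticality="high")
    graph.add_node("CVE-2019-0708", kind="Vulnerability", cvss=9.8)
    graph.add_node("exploit-kit-x", kind="ThreatIntel", source="feed")

    # Relationships are first-class, so analysis can traverse them.
    graph.add_edge("host-db01", "CVE-2019-0708", relation="AFFECTED_BY")
    graph.add_edge("exploit-kit-x", "CVE-2019-0708", relation="EXPLOITS")

    # Any fact connected to an asset is reachable from it.
    for _, node, data in graph.out_edges("host-db01", data=True):
        print(node, data["relation"])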

The second rule is that it's semantic. Knowledge graphs follow a well-defined ontology, which makes them different from things like data lakes, where there isn't necessarily a strict structure and definition to the way the data is stored and represented. In the case of cybersecurity, for instance, Brinqa has built our own cybersecurity data ontology, which maps the relationships between all the different types of entities you would want to monitor, as well as things like vulnerabilities, alerts, notifications and gaps. We are in complete control of how this information is represented once it comes into the knowledge graph.
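
As a rough illustration of what "semantic" means here, the sketch below defines a tiny, hypothetical ontology fragment in Python; the entity types and fields are assumptions made for the example, not Brinqa's published schema.

    # Hypothetical ontology fragment: structure and relationships are part
    # of the schema itself, unlike the loose storage of a data lake.
    from dataclasses import dataclass

    @dataclass
    class Asset:
        name: str
        owner: str
        criticality: str                  # e.g. "low", "medium", "high"

    @dataclass
    class Vulnerability:
        cve_id: str
        cvss_base: float

    @dataclass
    class Finding:
        asset: Asset                      # the relationship between an asset
        vulnerability: Vulnerability      # and a vulnerability is explicitly typed
        first_seen: str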

The third rule is that it actually creates new knowledge. It's not just a data store where we're dumping information; it's a source of new knowledge, created by analyzing and working on the information being populated into the knowledge graph. In our case, with cyber risk management applications, this usually takes the form of risk ratings and risk scores that are the result of analysis done on information coming in from other cybersecurity tools and products.
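
A toy example of this kind of knowledge creation is deriving a risk score from facts already in the graph. The weights and formula below are invented for illustration; a real product would use a much richer model.

    # Illustrative only: combine scanner severity, asset criticality and
    # threat intelligence into a derived risk score.
    def risk_score(cvss_base, criticality, actively_exploited):
        weight = {"low": 0.5, "medium": 0.8, "high": 1.0}[criticality]
        score = cvss_base * weight
        if actively_exploited:              # threat intel raises the priority
            score = min(10.0, score + 2.0)
        return round(score, 1)

    print(risk_score(9.8, "high", actively_exploited=True))    # 10.0
    print(risk_score(9.8, "low", actively_exploited=False))    # 4.9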

And the fourth, and maybe the most important, characteristic of knowledge graphs is that they're alive. The data ontology that defines the structure of information within a knowledge graph is completely dynamic and completely open to change. As your information changes in the outside world, the knowledge graph adapts to represent that information as accurately as possible.

We expect organizations to become more proactive and involved in the design and implementation of their cyber risk management programs, and this is really based on how we have seen our own solutions and ecosystem evolve and grow through our interactions with customers and prospects. Brinqa right now has more than 100 connectors to all types of cybersecurity, IT and business data sources. I would say about 80 percent of these connector development requests originated from our customers and prospects. So, it's very common for us to go into a deployment or proof of concept with our standard set of connectors for a particular problem, like vulnerability management or application security, and then get requests for entirely new connectors that the customer wants to make part of their cyber risk management model.

Once we are exposed to those connectors, they make a lot of sense, and obviously we encourage all of our customers to consider building them into their cyber risk management process. But I think what this really drives home for us is the fact that risk management is a subjective exercise by nature. Your risk analysis has to really reflect who you are as an organization, and giving organizations the tools to do that is where we see the most advancement and the most emerging trends in cyber risk management.

For example, when we first started applying our platform to the problem of vulnerability management, we understood the standard set of data sources that we would need to integrate to accurately identify and address risks across network infrastructure, which is what vulnerability management was focused on for a really long time. We knew that this would typically include your vulnerability assessment tools, which are obviously the primary source of vulnerability information.

These tools are also used really extensively for asset discovery, so we knew that we would be getting asset information out of these tools. We also knew that most organizations have some other form of asset inventory, either as a CMDB or a dedicated asset inventory tool. And we knew that this would also be a source of valuable asset metadata on things like ownership, escalation chains, business impact, compliance requirements, data classification and so on. That was another obvious data source that we knew we would want to integrate to build a solution for vulnerability management.

Once we have built this internal context by looking at internal asset inventories, CMDB systems and other asset discovery tools, we know that we also need to build the external context around the problem we are solving.

Most organizations also have access to threat intelligence feeds, which are a really good source of information about which vulnerabilities are most likely to be exploited, based on factors like: "Are there any known exploits for a vulnerability? Does a toolkit exist that makes use of this particular vulnerability? Are there any known threat actors utilizing a particular vulnerability as part of their attacks? Are we seeing a surge in chatter about a specific problem on the dark web?"

By providing a lot of additional external context around vulnerabilities, threat intelligence feeds can also help us do a good job of prioritizing what needs to be fixed. By combining these three primary data sources, your vulnerability assessment tools, your asset context and your threat intelligence, we have enough information to make good decisions about what needs to be addressed. And then obviously, the end goal of vulnerability management is to actually reduce the risk: reduce the exposure to threats and the potential impact posed by these risks.
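
As a rough sketch, prioritization across these three sources could look something like the following; the field names and ordering rules are assumptions made for the example.

    # Order findings by threat intel first, asset context second, raw
    # scanner severity last (an illustrative policy, not a product default).
    def prioritize(findings):
        def key(f):
            return (
                f["threat_intel"]["known_exploit"],    # exploited first
                f["asset"]["criticality"] == "high",   # critical assets next
                f["scanner"]["cvss"],                  # then raw severity
            )
        return sorted(findings, key=key, reverse=True)

    findings = [
        {"scanner": {"cvss": 7.5}, "asset": {"criticality": "low"},
         "threat_intel": {"known_exploit": True}},
        {"scanner": {"cvss": 9.8}, "asset": {"criticality": "high"},
         "threat_intel": {"known_exploit": False}},
    ]
    for f in prioritize(findings):
        print(f["scanner"]["cvss"])    # 7.5 (known exploit) before 9.8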

We knew that we also wanted to integrate with ITSM systems, which is where people primarily keep track of user tasks and tickets for remediation. We knew that by combining these four primary data sources, vulnerability assessment tools, asset inventories, threat intelligence and ITSM tools, we could build an end-to-end vulnerability management process which is fully automated, going from the identification, analysis and prioritization of vulnerabilities to the actual creation of remediation tickets, the validation of remediation actions, the reporting of risk reduction and so on.
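
A schematic sketch of what such an automated cycle might look like follows; every function below is a hypothetical stub standing in for a real connector (scanner, asset inventory, threat intelligence, ITSM), not an actual product API.

    # Hypothetical end-to-end loop: identify, enrich, prioritize, ticket.
    def fetch_scanner_findings():
        return [{"cve": "CVE-2019-0708", "cvss": 9.8, "host": "db01"}]

    def enrich(finding):
        finding["criticality"] = "high"     # would come from the CMDB
        finding["known_exploit"] = True     # would come from threat intel

    def open_remediation_ticket(finding):
        return "TICKET-" + finding["cve"]   # would call the ITSM tool

    def run_cycle():
        findings = fetch_scanner_findings()
        for f in findings:
            enrich(f)
        for f in sorted(findings, key=lambda f: f["cvss"], reverse=True):
            f["ticket"] = open_remediation_ticket(f)
            print(f["ticket"])              # validation and reporting follow

    run_cycle()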

But once our customers started using these solutions, they started coming up with a lot of additional data points that made a lot of sense and that, if available, should be integrated into these solutions. One of the first things we started getting requests for was network administration tools.

If you think about it, one of the first security controls we implement when we're setting up a network is network segmentation. We set up segments based on what parts of the network need to process different types of information, what is more critical, whether there are any compliance requirements, and things like that.

Since we have already built some risk information into our controls in the form of segmentation, it only makes sense to incorporate it into our risk analysis models. Similarly, endpoint protection systems were another really interesting integration because, if you think about it, with endpoint protection systems you can set up policies to protect specific endpoints against exploits that use specific vulnerabilities.

Essentially, they provide mitigating controls for the problems that exist on that endpoint. As we're doing our risk analysis for the vulnerabilities on an endpoint, it makes sense to look at whether any mitigating controls exist on that box.
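
As a hedged illustration, folding mitigating controls into the analysis might look like the sketch below; the 50 percent reduction is an invented placeholder, not a recommended value.

    # Discount a vulnerability's score when the endpoint already has a
    # policy that blocks the exploit path (illustrative only).
    def effective_risk(base_score, mitigating_controls):
        if "exploit_prevention_policy" in mitigating_controls:
            return round(base_score * 0.5, 1)   # assumed 50% reduction
        return base_score

    print(effective_risk(9.8, {"exploit_prevention_policy"}))   # 4.9
    print(effective_risk(9.8, set()))                           # 9.8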

Also, by thinking about vulnerability management, and really other cyber risk management problems, as a problem of information architecture, we can imagine how easy it is to translate these types of solutions into other areas. It's very easy to go from vulnerability management for your network infrastructure to vulnerability management for application security. In that case we would be bringing in data from slightly different sources: your asset inventories would instead be application inventories or code repositories, and instead of network vulnerability assessment tools we would be using results from static analysis, dynamic analysis, software composition analysis and penetration testing tools.

If we think about these problems as problems of knowledge gathering and information architecture, we can see how easy it is to translate processes from one area of your infrastructure to another.
