Winning the cybercrime arms race with AI

Published on May 30th, 2019


The arms race between
cybercriminals and cybersecurity professionals continues to escalate. And
anyone watching the trajectory of this perpetual game of one-upmanship can see
that this is a race towards implementing AI in the service of each side’s
goals. For instance, a report by Nokia revealed that AI-powered
botnets look for vulnerabilities in Android devices, then load data-stealing malware
that is only detected after the damage has been done.

At the core of this competition is simple economics

Networks now incorporate
multi-cloud environments that are dynamic and often temporary, SD-WAN
connections to branch offices to support critical business applications, and an
increasingly mobile workforce. At the same time, devices are proliferating
inside networks at an unprecedented pace, from a multitude of different
end-user devices to IoT technology. And consumer demand is accelerating the
need for more powerful and responsive applications, which are in turn forcing
workflows and transactions to span multiple data centers and ecosystems.

DX challenges for businesses

Digital transformation
(DX) has completely upended years of security strategy. And due to issues like
the cybersecurity skills gap, organizations can no longer afford to expand
their security infrastructure organically.

Deploying, configuring,
managing and operating security systems across multiple cloud environments, for
example, can quickly overwhelm limited security resources. Traditional models
need to be replaced with integrated systems that require fewer eyes and hands. Interoperability
ensures that visibility and control extend across all security devices and span
all network environments to close gaps, correlate threat intelligence and
coordinate a response. And because of the speed and efficiency of cyberattacks
today, this is only possible by implementing highly automated security
solutions enhanced with machine learning to see and stop a threat before it can
accomplish its goals.
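As a rough sketch of what that kind of machine learning-enhanced automation can look like, the example below trains an off-the-shelf anomaly detector on hypothetical network-flow features and returns an automated verdict for each new flow. The feature names, library choice and thresholds are assumptions made for illustration, not a description of any specific vendor's engine.

```python
# Minimal sketch (not any product's implementation): an ML-assisted anomaly
# detector over network-flow features, illustrating how automated detection can
# flag a threat without a human reviewing every event. Feature names and
# thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical flow features: [bytes_out, packets_per_sec, distinct_dest_ports]
baseline = rng.normal(loc=[5_000, 40, 3], scale=[1_500, 10, 1], size=(5_000, 3))

# Train on traffic assumed to be mostly benign.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

def triage(flow: np.ndarray) -> str:
    """Return an automated verdict for a single flow's feature vector."""
    verdict = model.predict(flow.reshape(1, -1))[0]  # 1 = inlier, -1 = outlier
    return "block-and-alert" if verdict == -1 else "allow"

# A flow pushing out far more data to far more ports than the baseline
# should stand out without any analyst involvement.
print(triage(np.array([250_000, 900, 45])))  # expected: block-and-alert
print(triage(np.array([4_800, 38, 2])))      # expected: allow
```

The design point is that the model, not an analyst, makes the first-pass decision, which is precisely the "fewer eyes and hands" trade-off described above.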

DX opportunities for cybercriminals

Maintaining ROI in a
cybercriminal enterprise requires lowering the overhead caused by constant
innovation while increasing the efficiency and effectiveness of tools that
penetrate defense systems and evade detection. Like their business
counterparts, cybercriminals are increasingly turning to automation and machine
learning to accomplish their goals.

DX efforts keep expanding
the potential attack surface, providing new opportunities for exploits. To
increase the efficiency of their “launch and pray” malware strategies, however,
cybercriminals are increasing their odds by launching malware that runs on
multiple operating systems and in multiple environments, and that can deliver a
variety of exploits and payloads. By leveraging automation and machine
learning, malware can quickly determine which payloads will be most successful
without having to expose itself to detection through constant communications
back to its C2 server.

AI is next

The goal for both sides
of this battle is the eventual deployment of a solution that can adapt to
unexpected environments and make effective autonomous decisions. This will
require some sort of artificial intelligence. AI will allow businesses to
deploy a self-defending security solution that can detect threats, close gaps,
reconfigure devices and respond to threats without human intervention.

AI will also enable
cybercriminals to deploy self-learning attacks that can quickly assess
vulnerabilities, adapt malware to those weaknesses and actively counter
security efforts to stop them. When combined with emerging threats like swarmbots, AI will be able to break down an attack into
functional elements, assign them to different members of a swarm and use
interactive communications across the swarm to accelerate the rate at which a
breach can occur.

Shopping for an AI solution

Fortunately,
the process of developing AI is very time- and resource-intensive, so few
cybercriminals have been able to deploy even basic AI. But as AI becomes
increasingly commoditized, it is a short hop to cybercriminal adoption.

The only
defense against attacks enhanced with automation and machine learning is to
have deployed those same strategies. And when AI becomes part of the malware
toolkit, the organizations that fare the best will be those that have already
begun to integrate AI into their defenses.

The
security community has not adequately defined what constitutes an AI, which
leaves buyers vulnerable. Many cybersecurity vendors at this year’s RSA
conference, for example, claim to have AI capabilities. But in reality, most fall short because their underlying infrastructure is
too small, their learning models are incomplete or the learning process has not
had enough time to develop a sophisticated algorithm for solving problems.

When
looking at a solution that claims to have an integrated AI engine, here are
some questions to ask:

How many years have been spent developing this AI? True
machine learning models require years of careful training to ensure that
algorithms are stable, decision-making trees are mature and unexpected results
are reduced to zero.

How many nodes does it use to learn and make
decisions?
This is not something that can be built in someone’s
lab. It requires millions upon millions of nodes and a continuous feed of
massive amounts of data before something even remotely like an AI can be
developed.

How rich is your data? The lack
of high-quality data is AI’s biggest drawback. Large and broad
data sets are needed for successful implementation of AI. When this rich data
is present, organizations are able to integrate AI into a comprehensive
security framework that creates centralized visibility, true automation and
real-time intelligence sharing.

Fighting fire with fire

To win the
cybersecurity war—especially during disruptive times like DX, and with limited
cybersecurity resources—your best talent must be focused on the most critical
decisions your organization faces. Achieving this requires handing over
lower-order decisions and processing to automated systems.

However, not all AI is the same. Risk-based decision-making engines that are intelligent enough to take humans out of the loop not only need to execute the “OODA loop” (Observe, Orient, Decide and Act) for the vast majority of situations they encounter, but also to suggest courses of action when a problem is discovered rather than merely relying on pre-defined ones. Only then can you confidently free up your valuable cybersecurity experts so they can concentrate on the more difficult decisions where human cognition and intervention are most required.
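As an illustration, consider a minimal sketch of such a loop. The event fields, playbooks and thresholds below are assumptions made for this example, but they capture the distinction between automatically executing a pre-defined course of action and merely suggesting one for an analyst to approve:

```python
# Minimal sketch of the OODA idea described above, under assumed event and
# playbook structures (none of these names come from a real product). Routine
# events are handled end to end; novel ones produce a *suggested* course of
# action that is escalated to a human analyst.
from dataclasses import dataclass

@dataclass
class Event:
    source_ip: str
    category: str      # e.g. "malware", "phishing", "unknown"
    risk_score: float  # 0.0 - 1.0, assumed to come from an ML scorer

PLAYBOOKS = {          # pre-defined courses of action
    "malware": "isolate-host",
    "phishing": "quarantine-mailbox",
}

def ooda_step(event: Event) -> str:
    # Observe + Orient: enrich and score the raw event (stubbed as risk_score).
    if event.risk_score < 0.5:
        return f"log-only for {event.source_ip}"

    # Decide: use a pre-defined playbook when one matches...
    action = PLAYBOOKS.get(event.category)
    if action is None:
        # ...otherwise suggest a course of action and hand off to a human.
        suggestion = "block-ip" if event.risk_score > 0.8 else "watchlist-ip"
        return f"escalate to analyst with suggestion: {suggestion}"

    # Act: routine case, no human in the loop.
    return f"auto-executed {action} on {event.source_ip}"

print(ooda_step(Event("10.0.0.5", "malware", 0.9)))   # handled automatically
print(ooda_step(Event("10.0.0.9", "unknown", 0.85)))  # escalated with a suggestion
```

Routine events never reach a human, while novel ones arrive with a proposed action attached rather than an empty ticket.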

By Derek Manky, Chief, Security Insights & Global Threat Alliances at Fortinet – Office of CISO
