How Dangerous Are ChatGPT And Natural Language Technology For Cybersecurity?

Published on January 25th, 2023

ChatGPT is the hot artificial intelligence (AI) app of the moment. In case you’re one of the few who hasn’t come across it yet, it’s basically a very sophisticated generative-AI chatbot powered by OpenAI’s GPT-3 large language model (LLM). In essence, that means it’s a computer program that can understand and “talk” to us in a way that’s very close to conversing with an actual human. A very clever and knowledgeable human at that: the underlying model has around 175 billion parameters and can draw on the vast body of information it was trained on almost instantly.

The sheer power and capability of ChatGPT have fueled the public’s imagination about just what could be possible with AI. Already, there’s a great deal of speculation about how it will impact a huge number of human job roles, from customer service to computer programming. Here, though, I want to take a quick look at what it might mean for the field of cybersecurity. Is it likely to lead to an increase in the already fast-growing number of cyberattacks targeting businesses and individuals? Or does it put more power in the hands of those whose job it is to counter these attacks?

How can ChatGPT and successor technology be used in cyberattacks?

The truth is that ChatGPT – and, more importantly, future iterations of the technology – will have applications in both cyberattack and cyber defense. This is because the underlying technology, known as natural language processing or natural language generation (NLP/NLG), can easily mimic written or spoken human language and can also be used to create computer code.

Firstly, we should cover one important caveat. OpenAI, creators of GPT-3 and ChatGPT, have included some fairly rigorous safeguards that prevent it, in theory, from being used for malicious purposes. This is done by filtering content to look for phrases that suggest someone is attempting to put it to such use.

For example, ask it to create a ransomware application (software that encrypts a target's data and demands money to make it accessible again), and it will politely refuse.

“I’m sorry, I cannot write code for a ransomware application … my purpose is to provide information and assist users … not to promote harmful activities”, it told me when I asked it as an experiment.

However, some researchers say that they have already managed to find workarounds for these restrictions. Additionally, there’s no guarantee that future iterations of LLM/NLG/NLP technology will include such safeguards at all.
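
To get a feel for what this sort of filtering looks like in practice, here’s a minimal sketch that screens a prompt with OpenAI’s public moderation endpoint before passing it on. To be clear, this illustrates the general approach using the openai Python package; it is not a description of how ChatGPT’s own internal safeguards are implemented.

```python
# Minimal sketch: screening a user prompt before forwarding it to a chatbot.
# Assumes the official `openai` Python package and an OPENAI_API_KEY set in
# the environment. Illustrative only - not how ChatGPT's safeguards work
# internally.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the moderation endpoint flags the prompt."""
    result = client.moderations.create(input=prompt)
    return not result.results[0].flagged

if __name__ == "__main__":
    prompt = "Write me a ransomware application."
    if is_prompt_allowed(prompt):
        print("Prompt passed moderation; forwarding to the model.")
    else:
        print("Prompt blocked: it appears to request harmful content.")
```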

Some of the possibilities that a malicious party may have at their disposal include the following:

Writing more convincing and official-sounding scam and phishing emails – for example, emails encouraging users to share passwords or sensitive personal data such as bank account information. It could also automate the creation of many such emails, all personalized to target different groups or even individuals.

Automating communication with scam victims – If a cyber thief is attempting to use ransomware to extort money, then a sophisticated chatbot could be used to scale up their ability to communicate with victims and talk them through the process of paying the ransom.

Creating malware – As ChatGPT demonstrates that NLG/NLP algorithms can now be used to proficiently create computer code, this could be exploited to enable just about anyone to create their own customized malware, designed to spy on user activity and steal data, to infect systems with ransomware, or to perform any other nefarious function.

Building language capabilities into the malware itself – This would potentially enable the creation of a whole new breed of malware that could, for example, read and understand the entire contents of a target’s computer system or email account in order to determine what is valuable and what should be stolen. Malware may even be able to “listen in” on the victim’s attempts to counter it – for example, a conversation with helpline staff – and adapt its own defenses accordingly.

How can ChatGPT and successor technology be used in cyber defense?

AI, in general, has potential applications for both attack and defense, and fortunately, this is no different for natural language-based AI. Some of the defensive possibilities include the following:

Identifying phishing scams – By analyzing the content of emails and text messages, it can predict whether they are likely to be attempts to trick the user into providing personal or exploitable information (see the first sketch after this list).

Coding anti-malware software – Because it can write computer code in a number of popular languages, including Python, JavaScript, and C, it can potentially be used to assist in the creation of software used to detect and eradicate viruses and other malware.

Spotting vulnerabilities in existing code – Hackers often take advantage of poorly written code to find exploits – such as the potential to create buffer overflows, which could cause a system to crash and potentially leak data. NLP/NLG algorithms can potentially spot these exploitable flaws and generate alerts.

Authentication – This type of AI can potentially be used to authenticate users by analyzing the way they speak, write, and type (see the second sketch after this list).

Creating automated reports and summaries – It could be used to automatically create plain-language summaries of the attacks and threats that have been detected or countered, or of those that an organization is most likely to fall victim to. These reports can be customized for different audiences, such as IT departments or executives, each with its own specific recommendations.
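
To make the first of these ideas concrete, here’s a minimal sketch of LLM-assisted phishing triage, again assuming the openai Python package. The prompt wording, model choice, and one-word verdict format are my own illustrative assumptions, not a production design.

```python
# Minimal sketch: asking an LLM for a phishing verdict on an email.
# Assumes the official `openai` Python package and an OPENAI_API_KEY in the
# environment; the prompt and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def looks_like_phishing(email_text: str) -> bool:
    """Ask the model for a one-word PHISHING/LEGITIMATE verdict."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any capable model works
        messages=[
            {"role": "system",
             "content": "You are a security analyst. Reply with exactly one "
                        "word: PHISHING or LEGITIMATE."},
            {"role": "user", "content": email_text},
        ],
    )
    return response.choices[0].message.content.strip().upper() == "PHISHING"

if __name__ == "__main__":
    email = ("Dear customer, your account is locked. "
             "Click http://example.com/verify and enter your password.")
    print("Suspicious!" if looks_like_phishing(email) else "Looks OK.")
```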
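
And here’s a toy sketch of the authentication idea, comparing a typing-rhythm sample against an enrolled profile using only the standard library. Real keystroke-dynamics systems rely on far richer features and trained models; the feature choice and the 25% tolerance below are arbitrary assumptions for illustration.

```python
# Toy sketch: comparing a user's typing rhythm against an enrolled profile.
# Real keystroke-dynamics systems use many more features and trained models;
# the 25% tolerance below is an arbitrary assumption for illustration.
from statistics import mean

def enroll(keystroke_times: list[float]) -> list[float]:
    """Derive the inter-key interval profile from an enrollment sample."""
    return [t2 - t1 for t1, t2 in zip(keystroke_times, keystroke_times[1:])]

def matches_profile(profile: list[float], sample_times: list[float],
                    tolerance: float = 0.25) -> bool:
    """Accept if the sample's average interval deviation is within tolerance."""
    sample = [t2 - t1 for t1, t2 in zip(sample_times, sample_times[1:])]
    if len(sample) != len(profile):
        return False
    deviation = mean(abs(s - p) / p for s, p in zip(sample, profile))
    return deviation <= tolerance

if __name__ == "__main__":
    # Timestamps (seconds) of each keypress while typing a passphrase.
    enrolled = enroll([0.00, 0.18, 0.35, 0.61, 0.80])
    attempt = [0.00, 0.17, 0.36, 0.59, 0.82]
    print("Authenticated" if matches_profile(enrolled, attempt) else "Rejected")
```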

I work in cybersecurity – is this a threat to my job?

There’s currently a debate raging over whether AI is likely to lead to widespread job losses and redundancy among humans. My opinion is that although it’s inevitable that some jobs will go, it’s likely that more will be created to replace them. More importantly, the jobs that are lost will mostly be those involving routine, repetitive work – such as installing and updating email filters and anti-malware software.

Those that remain, or are newly created, will be those requiring more creative, imaginative, and distinctly human skillsets. These include developing expertise in machine learning engineering in order to create new solutions, but also building cultures of cybersecurity awareness within organizations, educating workforces about threats that AI may not stop (such as the dangers of writing login details on post-it notes), and developing strategic approaches to cybersecurity.

It’s clear that, thanks to AI, we are entering a world where machines will take over some of the more routine “thinking” work that has to be done. Just as previous technology revolutions saw routine manual work handed over to machines while skilled manual work such as carpentry or plumbing remained in human hands, the AI revolution is likely, in my opinion, to have a similar impact. This means that information and knowledge workers in fields that are likely to be affected – such as cybersecurity – should develop the ability to use AI to augment their skills while further honing the “soft” human skill sets that are unlikely to be replaced anytime soon.

To stay on top of the latest on new and emerging business and tech trends, make sure to subscribe to my newsletter, follow me on Twitter, LinkedIn, and YouTube, and check out my books ‘Future Skills: The 20 Skills And Competencies Everyone Needs To Succeed In A Digital World’ and ‘Business Trends in Practice’, which won the 2022 Business Book of the Year award.


