By 2030, the global market for AI-based cybersecurity products is expected to reach US$133.8 billion, up from US$14.9 billion in 2021.
Attackers are taking advantage of the technology too: AI-generated phishing emails are opened more often than manually crafted ones.
Artificial intelligence is playing an increasingly important role in cybersecurity, for both good and bad. Organizations can use the latest AI-based tools to better detect threats and protect their systems and data assets. But cybercriminals can also use this technology to launch more sophisticated attacks.
The rise in cyberattacks is driving the growth of the market for AI-based security products. A July 2022 report by Acumen Research and Consulting states that the global market was valued at $14.9 billion in 2021 and is estimated to reach $133.8 billion by 2030.
The growing number of attacks such as distributed denial-of-service (DDoS) attacks and data breaches, many of which are extremely costly to the affected organizations, creates a need for more sophisticated solutions.
According to the report, another driver of market growth has been the Covid-19 pandemic and the accompanying shift to remote work, which forced many companies to pay more attention to cybersecurity and to use artificial intelligence tools to detect and stop attacks more effectively.
Looking ahead, trends such as the growing adoption of the Internet of Things (IoT) and the rise in the number of connected devices are expected to drive market growth, the Acumen report said. The growing use of cloud-based security services may also provide opportunities for new applications of AI for cybersecurity.
Product types that use AI include antivirus/anti-malware, data loss prevention, fraud detection/protection, identity and access management, intrusion detection/prevention, and risk and compliance management.
To date, the use of AI for cybersecurity has been somewhat limited. “Companies are not yet outsourcing their cybersecurity programs to AI,” said Brian Finch, co-leader of the cybersecurity, data protection and privacy practice at law firm Pillsbury Law. “This does not mean that AI is not used. We’re seeing companies use AI, but in a limited way,” mostly in products such as email filters and malware identification tools that put AI to work in some fashion.
“What’s most exciting is that we’re seeing behavioral analytics tools increasingly use AI,” Finch said. “By that I mean tools that analyze data to determine the behavior of hackers to see if there is a pattern to their attacks — timing, method of attack, and how hackers move around within the system. Gathering such intelligence can be very valuable for defenders.”
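To make that idea concrete, here is a minimal, hypothetical sketch of the kind of pattern-finding Finch describes: grouping attacker activity by time of day, technique, and target to see whether a recurring pattern emerges. The event fields and values are invented for illustration, not drawn from any real product.

```python
# Minimal sketch of the behavioral analytics idea described above:
# group attacker activity by timing, technique, and target to surface patterns.
# The event fields and data here are hypothetical, for illustration only.
from collections import Counter
from datetime import datetime

events = [
    {"ts": "2022-07-03T02:14:00", "technique": "credential_stuffing", "host": "vpn-gw"},
    {"ts": "2022-07-10T02:41:00", "technique": "credential_stuffing", "host": "vpn-gw"},
    {"ts": "2022-07-17T03:05:00", "technique": "lateral_movement",    "host": "file-srv"},
    {"ts": "2022-07-24T02:22:00", "technique": "credential_stuffing", "host": "vpn-gw"},
]

# Bucket events by hour of day, technique, and host to reveal recurring behavior,
# e.g. "credential stuffing against the VPN gateway in the early-morning hours".
pattern = Counter(
    (datetime.fromisoformat(e["ts"]).hour, e["technique"], e["host"]) for e in events
)
for (hour, technique, host), count in pattern.most_common():
    print(f"{count}x {technique} on {host} around {hour:02d}:00")
```

On real data, simple grouping like this is typically only a first pass before richer statistical or machine-learning models are applied to the same event streams.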
In a recent study, research firm Gartner surveyed nearly 50 security vendors and identified several patterns in their use of AI, said research vice president Mark Driver.
“For the most part, they reported that the primary goal of AI was to ‘remove false positives,’ because one of the top challenges among security analysts is filtering the signal from the noise in very large data sets,” Driver said. “AI can cut that down to a reasonable size, which is much more accurate. As a result, analysts can work smarter and faster to eliminate cyberattacks.”
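One common way to approach the triage problem Driver describes is to score alerts with an unsupervised anomaly detector so analysts review the most unusual ones first. The sketch below, which runs scikit-learn’s IsolationForest over made-up alert features, is only an illustration of that idea, not a description of any vendor’s product.

```python
# A minimal sketch of alert triage: score alerts with an unsupervised model
# so analysts review the most anomalous ones first.
# Feature names and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one alert: [events per minute, distinct destination IPs,
# bytes sent (KB), failed logins]. Real systems use far richer features.
alerts = np.array([
    [12,  3,   40,  0],
    [10,  2,   35,  1],
    [11,  3,   38,  0],
    [900, 45, 5200, 60],   # bursty, wide-ranging activity: likely worth a look
    [13,  4,   42,  0],
])

model = IsolationForest(contamination=0.2, random_state=0).fit(alerts)
scores = model.score_samples(alerts)          # lower score = more anomalous

# Surface alerts in priority order instead of forcing analysts to read all of them.
for rank, idx in enumerate(np.argsort(scores), start=1):
    print(f"priority {rank}: alert #{idx} (score {scores[idx]:.3f})")
```

Lower scores indicate more anomalous alerts, so analysts can start at the top of the ranked list rather than wading through every event in a large data set.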
In general, AI is being used to help detect attacks more accurately and then prioritize responses based on real-world risks, Driver said. And it allows for automated or semi-automated responses to attacks and ultimately provides more accurate models for predicting future attacks. “All of this doesn’t necessarily eliminate analysts, but it makes their work more flexible and accurate in the face of cyber threats,” Driver said.
On the other hand, bad actors can also take advantage of AI in various ways. “For example, artificial intelligence can be used to detect patterns in computer systems that reveal weaknesses in software or security programs, thereby allowing hackers to exploit these newly discovered weaknesses,” Finch said.
By combining AI with stolen personal information or data collected from open sources such as social media posts, cybercriminals can generate large numbers of phishing emails to spread malware or gather valuable information.
“Security experts have noted that AI-generated phishing emails actually have a higher open rate — [for example] by tricking potential victims into clicking on them and thus triggering attacks — than manually crafted phishing emails,” Finch said. “Artificial intelligence can also be used to design malware that constantly changes to avoid detection by automated security tools.”
Constantly changing malware signatures can help attackers evade static defenses such as firewalls and perimeter detection systems. Similarly, AI-based malware can linger on a system, collecting data and observing user behavior until it is ready to launch the next phase of an attack or send out the collected information with a relatively low risk of detection.
That is part of why companies are moving toward a “zero-trust” model, in which defenses are configured to constantly challenge and inspect network traffic and applications to verify that they are not malicious. But Finch said: “Given the economics of cyberattacks — it’s generally easier and cheaper to launch attacks than to build effective defenses — I’d say AI will generally do more harm than good. The caveat, however, is that really good AI is hard to create and requires a lot of specially trained people to make it work well.”
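As a rough illustration of the zero-trust idea mentioned above, the sketch below re-evaluates every request on identity, device posture, and context rather than trusting its network location. The policy rules, field names, and data are hypothetical, chosen only to show the deny-by-default pattern.

```python
# A minimal sketch of a zero-trust check: every request is re-evaluated on
# identity, device posture, and context, regardless of where it originates.
# Policy, fields, and values are hypothetical.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_passed: bool
    device_compliant: bool   # e.g. disk encryption on, endpoint agent healthy
    resource: str
    geo: str

ALLOWED = {"alice": {"payroll-db", "wiki"}}   # per-user resource entitlements
RISKY_GEOS = {"anonymizing-proxy"}

def authorize(req: Request) -> bool:
    """Deny by default; grant access only when every check passes on this request."""
    return (
        req.mfa_passed
        and req.device_compliant
        and req.geo not in RISKY_GEOS
        and req.resource in ALLOWED.get(req.user, set())
    )

print(authorize(Request("alice", True, True, "payroll-db", "office")))   # True
print(authorize(Request("alice", True, False, "payroll-db", "office")))  # False: fails device posture
```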
Common criminals won’t be able to access the world’s greatest AI minds. A well-resourced cybersecurity program, by contrast, can draw on “enormous resources from Silicon Valley and beyond,” Finch said, to build very good AI defenses against cyberattacks.
“When it comes to artificial intelligence developed by nation-state hackers [such as those in Russia and China], those AI systems are likely to be quite sophisticated, so defenders will often struggle to keep up with AI-based attacks.”