“It’s made our jobs harder, not easier” – ThreatLocker CEO Danny Jenkins on AI

Artificial intelligence has been a double-edged sword for the cybersecurity industry – although it promises to help researchers and experts detect threats more quickly, it has also lowered the barrier to entry for a far broader pool of threat actors by democratizing access to malicious code.

At least, that’s what I thought before talking to ThreatLocker CEO Danny Jenkins, who advocates for a zero-trust approach to protecting hardware, infrastructure and networks.

Speaking with me at the company’s annual Zero Trust World event, Jenkins stated: “[AI]’s really bad at preventing.” Chatting with him made me appreciate the valuable skills that human workers continue to offer in a post-AI world, and introduced me to the idea that generative AI has a role to play in some areas of a business, but not all.


The age-old battle

“How does it know if it’s an IT management tool or a hacker’s tool? How does it know if it’s a backup tool or a data exfiltration tool?” Jenkins asked. “They both perform the exact same function – AI is really bad at determining intent.”

Ultimately, determining good versus bad in cybersecurity is extremely context-dependent. ThreatLocker knows this, which is why the company emphasizes the need for humans to know exactly what runs in their environment, knowledge that makes anomalies far easier to spot.
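To make that philosophy concrete, here is a deliberately minimal default-deny sketch in Python. Everything in it is illustrative: the allowlist entry, file handling and policy are invented, and real products such as ThreatLocker enforce this at the kernel level rather than in a script. The point is simply that an unknown binary is blocked and escalated to a human, rather than having its intent guessed at.

```python
import hashlib
from pathlib import Path

# Allowlist curated by the team that knows its own environment.
APPROVED_SHA256 = {
    # SHA-256 of an empty file, used here purely as a placeholder entry.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Hash the binary on disk so approval is tied to exact content."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def allow_execution(path: Path) -> bool:
    """Default deny: anything not explicitly approved is blocked."""
    digest = sha256_of(path)
    if digest in APPROVED_SHA256:
        return True
    # No attempt to infer intent; the unknown binary is escalated
    # to a human who knows what should be running here.
    print(f"BLOCKED {path.name} ({digest[:12]}...): not on the allowlist")
    return False
```

The design choice is that the model fails closed: a novel tool, whether it is a backup utility or an exfiltration tool, simply cannot run until someone who knows the environment approves it, sidestepping the intent question entirely.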

Although artificial intelligence has been shown to flag some malicious code, attackers can trick AI into misclassifying a threat as benign by making a few minor alterations to a malware file’s features.
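To see why such evasion is possible, consider a toy linear classifier. This is a deliberately simplified sketch, not any real detector: the feature names, weights and threshold below are all invented for illustration.

```python
# Invented weights for three hypothetical static features of a file.
FEATURE_WEIGHTS = {
    "entropy": 0.9,         # packed or encrypted payloads score high
    "imports_crypto": 0.7,  # calls into cryptographic APIs
    "is_signed": -1.2,      # a signature makes the file look trustworthy
}
THRESHOLD = 1.0  # scores above this are flagged as malicious

def score(features: dict) -> float:
    return sum(FEATURE_WEIGHTS[name] * value for name, value in features.items())

def classify(features: dict) -> str:
    return "malicious" if score(features) > THRESHOLD else "benign"

# The original sample: unsigned, high entropy, uses crypto APIs.
sample = {"entropy": 1.0, "imports_crypto": 1.0, "is_signed": 0.0}
print(classify(sample))   # -> malicious (score 1.6)

# One "minor alteration": sign the binary, say with a stolen certificate.
# The payload is untouched, yet the score falls below the threshold.
evaded = dict(sample, is_signed=1.0)
print(classify(evaded))   # -> benign (score 0.4)
```

Real detectors are far more sophisticated than this, but the underlying weakness is the same: a model that scores surface features can be nudged under its decision threshold without changing what the malware actually does.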

What’s more, well-funded threat actors, including nation-state and advanced persistent threat (APT) groups, will even test their attacks against the latest AI-driven tools in what’s been described as a cat-and-mouse game.

How can AI help cybersecurity strategies?

With rapid AI developments far outpacing legislation and guidance, every day brings a slightly different threat. Because nobody knows quite where we stand from one day to the next, the zero-trust approach ThreatLocker advocates tackles AI-driven threats from a slightly different perspective.

It was at this point that I started chatting with Jenkins’ colleague, Chief Product Officer Rob Allen, who continued to explore the impact of AI on the industry. “The only skill you need is to ask the right question in the right way and you will get the code or the answer that you need,” he said about AI tools.

Besides the technical element of malicious code, generative AI is also helping threat actors produce content for attacks – be it dozens of variations of phishing email copy designed to evade detection tools, or fake content for a scam website set up to trick people out of their money or other sensitive data.

Jenkins, who said AI is mostly just a “buzzword” thrown around for marketing purposes, summarized: “It’s made our jobs harder, not easier.”

The consensus is that AI works best as an assistant for highly skilled IT and cybersecurity teams: while it can enhance threat detection and response, and help plug talent shortages, it cannot replace the human judgement that’s paramount to effective security.

Looking ahead, there’s no such thing as a magic pill, and even if there were, it sounds like AI just isn’t it. What it has done, though, is add another string to the bow of any company willing to embrace it – combining artificial intelligence with human expertise and a default-deny, zero-trust approach provides the most rounded solution.