Cybersecurity Pros: How AI Is Redefining the Security Field

When it comes to artificial intelligence (AI) and cybersecurity, most conversations focus on threats to networks. For example, cybercriminals and other malicious actors are using chatbots and other tools to enhance their attacks, from crafting more convincing phishing emails to speeding up their operations.

At the same time, organizations are testing how generative AI tools can improve cyber defenses and counter threat actors’ use of these technologies.

Another trend is emerging at the crossroads of AI and cybersecurity: how these technologies and tools can help fill the so-called security talent gap. There are nearly 460,000 open cybersecurity positions in the U.S.; globally, that number stands at approximately 5 million and has remained stubbornly high for several years.

A March 4 report released by cybersecurity firm Darktrace finds that hiring enough security workers remains a daunting task due to burnout and other factors, and many organizations are now turning to AI to fill crucial gaps. Only about 11 percent of those surveyed plan to increase security staffing this year, but 64 percent want to add AI-powered tools to bolster their current cybersecurity solutions.

In addition, about 40 percent of those surveyed want to optimize processes in their security operations centers (SOCs) by using more AI. The survey is based on responses from 1,500 CIOs, CISOs, security leaders and cyber practitioners in 14 countries.

While AI is not replacing cybersecurity and other tech professionals, organizations are experimenting to determine whether the technology can take on essential but routine parts of the security team's work by automating time-consuming manual processes that pull employees away from more important tasks and analysis.

For example, AI tools can help automate SOC Level 1 analysis as well as the prioritization of work for SOC Level 2 analysts, freeing up resources and staff for more strategic tasks and deeper analysis, said Nicole Carignan, senior vice president for security and AI strategy and field CISO at Darktrace.

“AI enables security teams to think more strategically on threat hunting and proactive efforts that will harden their environment, reduce risk and/or possible damage as well as better prioritize threat vulnerability management,” Carignan recently told Dice. “As organizations embrace AI tools, benefiting from their ability to streamline workflows and enable the detection of never-before-seen threats, the need for continuous AI education will be critical.”
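
To make that division of labor concrete, the toy Python sketch below shows one way an automated pipeline might score and route incoming alerts so that only high-risk ones reach Level 2 analysts. The Alert fields, weights and thresholds are invented for illustration; they are not drawn from Darktrace's products or the report, and in practice a machine learning model would typically replace these hand-tuned rules.

```python
# Minimal, illustrative sketch of automated SOC Level 1 triage.
# All names (Alert, triage_score, thresholds) are hypothetical and chosen
# for this example; real platforms use far richer signals and models.
from dataclasses import dataclass, field


@dataclass
class Alert:
    source: str              # e.g., "edr", "email-gateway", "ids"
    severity: int            # vendor-reported severity, 1 (low) to 10 (critical)
    asset_criticality: int   # importance of the affected asset, 1 to 10
    indicators: list[str] = field(default_factory=list)  # matched IoCs / rules


def triage_score(alert: Alert) -> float:
    """Combine vendor severity, asset value and indicator count into one score."""
    indicator_weight = min(len(alert.indicators), 5) * 0.5
    return alert.severity * 0.6 + alert.asset_criticality * 0.4 + indicator_weight


def route(alert: Alert) -> str:
    """Decide whether an alert is auto-closed, queued, or escalated to Level 2."""
    score = triage_score(alert)
    if score >= 8.0:
        return "escalate-to-L2"
    if score >= 4.0:
        return "queue-for-L1-review"
    return "auto-close-with-audit-log"


if __name__ == "__main__":
    alerts = [
        Alert("email-gateway", severity=3, asset_criticality=2),
        Alert("edr", severity=9, asset_criticality=8,
              indicators=["T1059", "known-c2-domain"]),
    ]
    for a in alerts:
        print(a.source, "->", route(a))
```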

For cybersecurity and other tech professionals, understanding what AI tools can and cannot automate is essential at a time when these platforms are evolving and organizations are still investing heavily. These technologies will not replace workers, but upskilling now can pay off in a future job search.

Preparing for More Automated Cyber Functions

Whether it’s called generative AI, machine learning or simply automation, these technologies and platforms are making their way into SOCs and other parts of the security organization.

In turn, this approach is changing how security teams handle issues such as threat detection, email security and real-time analysis of user behavior. By automating these processes, cyber professionals can stay ahead of advanced phishing attacks, business email compromise and account takeover attempts, detecting threats before they reach users and eliminating the need to manually investigate every suspicious message.
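
As a rough illustration of the kind of screening that can run before a message ever reaches an inbox, the sketch below applies a few common heuristics (a mismatched Reply-To domain, urgency language, raw IP links) to score an email. The heuristics, field names and threshold are assumptions made for this example, not any vendor's detection logic; modern tools layer statistical models on top of rules like these.

```python
# Illustrative sketch of rules-based email screening of the kind an automated
# pipeline might run before delivery. Heuristics and thresholds are assumptions.
import re
from dataclasses import dataclass


@dataclass
class Email:
    sender: str
    reply_to: str
    subject: str
    body: str


URGENT_PHRASES = ("urgent", "wire transfer", "verify your account", "password expires")


def suspicion_score(msg: Email) -> int:
    """Return a simple additive score; higher means more suspicious."""
    score = 0
    # A Reply-To domain that differs from the sender's is a common BEC signal.
    sender_domain = msg.sender.split("@")[-1].lower()
    reply_domain = msg.reply_to.split("@")[-1].lower()
    if reply_domain != sender_domain:
        score += 3
    # Urgency language frequently shows up in phishing and BEC lures.
    text = (msg.subject + " " + msg.body).lower()
    score += sum(2 for phrase in URGENT_PHRASES if phrase in text)
    # Links that hide their destination behind a bare IP address.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", msg.body):
        score += 4
    return score


def disposition(msg: Email, quarantine_threshold: int = 5) -> str:
    return "quarantine" if suspicion_score(msg) >= quarantine_threshold else "deliver"


if __name__ == "__main__":
    msg = Email(
        sender="ceo@example.com",
        reply_to="ceo@examp1e-mail.net",
        subject="Urgent wire transfer",
        body="Please verify your account at http://203.0.113.5/login",
    )
    print(disposition(msg))  # -> quarantine
```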

“The main concern isn’t about AI replacing jobs but rather about organizations implementing AI solutions without proper oversight, creating blind spots where security teams believe threats are being caught when sophisticated attacks might still slip through detection systems that aren’t continuously learning from new threat patterns,” J. Stephen Kowski, field CTO of SlashNext Email Security, told Dice.

One reason many organizations see a need to automate manual tasks with AI is to keep pace with attackers who are using many of the same methods.

“Given the increasing volume of sophisticated AI-powered threats, along with the ongoing skills gap and budgetary constraints impacting many security teams, AI is essential to augment and support human team members,” Carignan added. “Ninety-five percent of all cybersecurity professionals surveyed believe AI can improve the speed and efficiency of their ability to prevent, detect, respond and recover from threats, and 88 percent believe that the use of AI is critical to free up time for security teams to become more proactive.”

AI, however, is not a cure-all for what ails organizations. In many cases, neural networks and large language models (LLMs) can produce convincing results but lack explainability, noted Patrick Tiquet, vice president for security and architecture at Keeper Security.

“If a security tool flags a potential threat, but the system can’t explain why, it puts organizations in a difficult position—do they trust the output or risk missing something critical?” Tiquet told Dice. “This is why AI must be viewed as an assistive technology, not an autonomous decision-maker. Security teams must remain actively engaged in validating AI-driven insights to prevent false positives, overlooked threats or unintended biases in automated systems.”
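
One way to read Tiquet's point is as a routing rule: no automated action without both high model confidence and a rationale an analyst can check. The short sketch below illustrates that idea; the Detection fields, confidence cutoffs and actions are hypothetical and meant only to show the human-in-the-loop pattern.

```python
# Illustrative human-in-the-loop gate for AI-generated detections: the model
# assists, an analyst decides. Fields and thresholds are assumptions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Detection:
    threat_type: str
    confidence: float            # model-reported confidence, 0.0 to 1.0
    explanation: Optional[str]   # human-readable rationale, if the tool provides one


def next_step(d: Detection) -> str:
    """Never act automatically without both high confidence and a stated rationale."""
    if d.confidence >= 0.9 and d.explanation:
        return "auto-contain, then notify analyst for confirmation"
    if d.confidence >= 0.5:
        return "send to analyst queue with full context"
    return "log for trend analysis; no action"


if __name__ == "__main__":
    print(next_step(Detection("credential-stuffing", 0.95,
                              "50 failed logins from one ASN in 10 minutes")))
    print(next_step(Detection("anomalous-dns", 0.7, None)))
```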

For organizations turning to AI in an attempt to overcome the talent gap, transparency is key.

“The biggest concern here should be transparency and accountability,” Casey Ellis, founder of Bugcrowd, told Dice. “We’re essentially asking AI to make security decisions, often without fully understanding how or why certain conclusions are reached. Security teams rightly worry about AI’s ‘black box’ problem—trusting automated systems without fully grasping their inner workings can leave organizations exposed to overlooked threats and biases in decision-making at the speed of automation.”

Reskilling for the AI Age

With the coming influx of AI-powered tools and platforms, cybersecurity and tech professionals have the chance to reskill now and gain familiarity with the technology.

For security pros, certifications such as Certified Information Systems Security Professional (CISSP), Certified Ethical Hacker (CEH) and Certified Cloud Security Professional (CCSP) are a solid start, but adding AI and machine learning fundamentals, additional cloud security expertise and automation skills will set professionals apart, Tiquet noted: “Those who can bridge the gap between AI and cybersecurity—leveraging automation while maintaining critical oversight—will be indispensable in modern security operations.”

Other cybersecurity experts noted that once AI models are trained, organizations will need security professionals who can interpret, explain and act on the data those models provide, resolving false alerts and responding to real threats.

“Security professionals should also be looking to their AI-powered security tools to provide context for alerts and defensive responses. This is an important step in stopping the trend of the SOC drowning in more alerts than they can address,” Kris Bondi, CEO and co-founder of security firm Mimoto, told Dice. “With AI enabling continually evolving threats, after-breach forensics is in danger of delivering obsolete information. This is why AI-powered real-time discovery and response is the natural counterbalance to AI-powered threats.”

The good news for many cybersecurity and tech professionals is that the growing interest in AI also means there are more opportunities to learn about the technology and get hands-on experience as organizations experiment and test-drive these tools.

“If you’re in security today or want to enter the field in the future, it’s critical to get comfortable with AI as quickly as possible. Dive in practically. Try experimenting with AI tools firsthand, and get involved in projects or communities that apply AI to security issues,” Bugcrowd’s Ellis noted. “There are lots of courses and primers to using AI available, as well as opportunities to participate in hackathons and work on open-source projects in ways that demonstrate practical application skills. For candidates, building practical, hands-on experience with AI security solutions isn’t just beneficial; it’s becoming a necessity.”