AI-driven phishing attacks have increased by 1,265% since 2022, as reported by Litslink, highlighting how adversarial AI is rapidly transforming the threat landscape. These intelligent agents can mimic human behavior, adapt at speed, and bypass traditional security controls, forcing organizations to rethink how they protect critical systems. AI-powered defenses are helping organizations keep pace by automating mundane SOC tasks such as alert triaging, allowing security professionals to focus on creative, strategic work.
Luke Babarinde, Global Solution Architect at Imperva, shares insights on how adversarial AI is transforming the cybersecurity landscape. From evolving threats and automated SOC operations to the shifting role of humans in defense, Luke breaks down what it takes to stay resilient in an age where AI is both a tool and a threat. Learn why doubling down on the fundamentals, embracing creativity, and building adaptive strategies are key to navigating the future of cyber defense.
Get the full insights: read his perspective below or watch the Q&A video.
How has the threat landscape evolved with adversarial AI?
The main challenge with adversarial AI is its ability to mimic legitimate behavior and transactions while shifting that behavior at speed to evade detection. For example, we now have documented cases where threat actors are using generative AI to launch highly effective deceptive campaigns.
It is far easier today to automate attack sequences: automated reconnaissance against targets to identify vulnerabilities and business logic flaws, followed by rapid payload development to exploit them. And this is not limited to systems; humans remain the weakest link in this deception story. Adversarial AI is very effective at compromising what we can call the human OS.
We see the implications of this in the rise of fraud. Reports, such as those from the FTC, show scams increasing across digital channels like phones, social media platforms, and more. This has serious implications for organizations still relying on traditional security controls to identify anomalies through legacy feedback loops and signal correlation. That old approach simply isn’t effective against adversarial AI.
Organizations need to shift toward advanced behavior-deviation techniques that can identify sophisticated traits that hide in plain view but are very hard to detect. Thankfully, we are beginning to see positive progress as advanced capabilities are integrated into cybersecurity programs.
For example, endpoint solutions now have capabilities that no longer rely on signatures to detect zero-day vulnerabilities and exploits. However, broader anomaly detection and prevention solutions are still behind. Organizations need advanced capabilities to augment what they currently use. That is the way forward.
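As a concrete illustration of the behavior-deviation idea Luke describes, here is a minimal sketch in Python. The helper, data, and threshold are all illustrative assumptions, not any vendor's detection logic: it simply flags activity that deviates sharply from a learned behavioral baseline instead of matching a signature.

```python
from statistics import mean, stdev

def deviation_score(baseline, observed):
    """Return how many standard deviations `observed` sits from the
    baseline mean. Hypothetical helper for illustration only."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    if sigma == 0:
        return 0.0 if observed == mu else float("inf")
    return abs(observed - mu) / sigma

# Baseline: requests per minute from a user's normal sessions (made-up data).
baseline_rpm = [12, 15, 11, 14, 13, 12, 16, 14]

# A legitimate-looking value stays below a review threshold of 3 sigma,
# while an automated agent operating at machine speed stands out.
assert deviation_score(baseline_rpm, 14) < 3   # within normal behavior
assert deviation_score(baseline_rpm, 90) > 3   # flag for review
```

Real systems would model many signals at once (timing, navigation paths, device traits), but the principle is the same: score deviation from learned behavior rather than match known-bad patterns.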
For so long, cybersecurity has suffered from skill shortages. How has the adoption of AI capabilities impacted security programs?
The shortage of skilled cybersecurity professionals has always been a challenge, and it’s only accelerated in recent years as digital channels expand. As adversaries evolve their tactics, organizations must constantly readapt their defensive strategies. But this requires advanced technological capabilities to bridge the skills gap, so that shortages don’t directly impact the business.
It comes down to people, process, and technology. AI is now making its way into security operations centers, and it is helping reduce the workload on humans—especially for mundane tasks like triaging, which traditionally takes a lot of time. This allows organizations to better use their human capital.
For instance, security professionals can now focus on innovating defensive strategies instead of constantly fighting fires. In software development, we’ve seen an even bigger shift: AI capabilities are augmenting code development, enabling faster unit testing and improving overall software quality. This supports a security by design philosophy.
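To make the triage point concrete, the sketch below shows the kind of scoring logic an automated SOC pipeline might apply before a human ever sees an alert. The field names, weights, and rules are assumptions for illustration, not any product's API: the point is simply that machines can rank and suppress routine alerts so analysts spend their time on the riskiest ones.

```python
# Illustrative severity weights (an assumption, not a standard).
SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def triage_score(alert):
    """Combine base severity with simple context signals into a priority."""
    score = SEVERITY_WEIGHT[alert["severity"]]
    if alert.get("asset_critical"):   # touches a crown-jewel system
        score *= 2
    if alert.get("known_benign"):     # matches a vetted false-positive pattern
        score = 0
    return score

alerts = [
    {"id": 1, "severity": "low"},
    {"id": 2, "severity": "high", "asset_critical": True},
    {"id": 3, "severity": "critical", "known_benign": True},
]

# Highest-priority alerts surface first; known false positives sink.
queue = sorted(alerts, key=triage_score, reverse=True)
assert [a["id"] for a in queue] == [2, 1, 3]
```

In practice the scoring model would be learned from analyst feedback rather than hand-coded, but the effect Luke describes is the same: the queue the human sees is already prioritized.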
What is the role of humans in this evolving landscape, especially for defensive security?
Humans will continue to play a critical role, particularly in areas where creativity is needed. Ironically, this is something threat actors excel at—they are creative, adaptable, and able to change tactics rapidly while leveraging cutting-edge tools to evade detection. Organizations must learn to do the same.
Security professionals need to evolve faster and become just as creative. Fortunately, tools like generative AI can also help humans learn and adapt more quickly. This is not just a cat-and-mouse game. If we focus on doing the fundamentals really well and stay informed about the evolving threat landscape, we can reduce the gaps that adversaries seek to exploit—even with their advanced capabilities.
You mentioned the significance of doubling down on the fundamentals. Can you give us an example of that design philosophy?
It is critical to double down on doing the fundamentals well. A good example is the EU AI Act, on which EU lawmakers reached agreement in December 2023. It provides a solid framework emphasizing high-risk requirements such as quality control checks, human oversight, and transparency in decision-making processes. This ensures we don't leave everything to "black box" systems.
These principles highlight the importance of governance and best practices. By approaching security with the mindset that anything that can go wrong, will go wrong, organizations can strengthen their security by design approach.
This means delivering better code, building stronger infrastructures that follow best practices, and providing more secure services.
Thank you, Luke, for sharing these valuable insights on how adversarial AI is transforming the cybersecurity landscape.
Thank you for having me. I look forward to hearing more stories about how businesses and organizations continue to protect themselves against this evolving threat landscape.
Discover more from NextLabs’ Expert Series, featuring industry experts in educational and thought-provoking conversations on Data-Centric Security, Zero Trust Architecture, Safeguarding AI, and more.
