Generative AI is transforming cybersecurity in ways that are both exciting and alarming. While it offers powerful tools to detect threats, automate defenses, and strengthen security strategies, it’s also being leveraged by attackers to create more sophisticated phishing campaigns and malware at scale. A recent IBM report found that 97% of organizations experiencing an AI-related security incident lacked proper AI access controls, underscoring the urgent need to manage AI risks effectively. In this episode of the NextLabs Expert Series, we’ll dive into the dual-edged nature of generative AI in cybersecurity, explore real-world impacts, and discuss what organizations can do to prepare for the evolving threat landscape.
Matthew Rosenquist is a Chief Information Security Officer and Cybersecurity Strategist with over 30 years of experience in security operations, strategic planning, crisis management, and product security. He has contributed to industry standards, annual threat predictions, and innovative technologies, and mentors CISOs and boards at keynote events. Matthew advises companies, academic institutions, and governments worldwide on cybersecurity, emerging threats, privacy, regulatory compliance, digital ethics, and best practices for cyber risk management.
Get the full insights: read his perspective below or watch the Q&A video.
How are cyber attackers leveraging Generative AI to develop more sophisticated and scalable attacks, such as phishing and malware creation?
Well, GenAI is a powerful tool that can be used for good or for malice. Attackers are currently exploring ways to improve fraud and phishing attacks by making deceptive interactions more believable.
Additionally, they use GenAI as an enabler to be more efficient in what they do. GenAI is also evolving quickly to be adept at writing code, including highly effective malware, thus lowering the technical bar for creating such malicious code.
Overall, attackers are using the strengths of GenAI, just like you or I would, and they’re using it as a tool to further advance towards their specific goals.
It's concerning to see how advanced these tactics are becoming. In contrast, what role can Generative AI play in developing more effective security protocols and strategies?
Generative AI is already positively impacting cybersecurity. Communication is a big one, because it’s such a persistent challenge for cybersecurity. GenAI can really assist in this area because, well, cybersecurity can be confusing, complex, and ambiguous. GenAI can help communicate through text, visualization, and distillation of data to convey risks and opportunities to protect important assets. GenAI is showing great progress in that reduction of data so that we understand the most important events in a timely manner. That type of alerting is crucial, and it can do it very quickly and in expressive ways.
Additionally, GenAI can create very robust code, but it can also be used to detect vulnerabilities in existing code and systems, as well as to check compliance with certain policies. It can interpret that and do the right mapping. But all of this is just the start. As GenAI evolves, so will the use cases.
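For illustration, here is a minimal sketch of that kind of automated code review: a prompt asking a GenAI model to check a short function for vulnerabilities and for compliance with an internal policy. The provider, model name, policy text, and sample function are assumptions chosen for this example; any comparable GenAI API could be substituted.

```python
# Illustrative sketch: asking an LLM to review a code snippet for
# vulnerabilities and policy violations. The policy, snippet, and model
# choice are assumptions made for the example.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

POLICY = "All database queries must use parameterized statements."

SNIPPET = '''
def get_user(conn, username):
    cur = conn.cursor()
    cur.execute("SELECT * FROM users WHERE name = '" + username + "'")
    return cur.fetchone()
'''

prompt = (
    "Review the following Python function for security vulnerabilities "
    f"and for compliance with this policy: {POLICY}\n\n"
    f"{SNIPPET}\n"
    "List each issue, its severity, and a suggested fix."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)

# Print the model's review; in practice this would feed a code-review
# workflow or a policy-compliance report rather than stdout.
print(response.choices[0].message.content)
```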
Can you share with us some real-world examples where Generative AI has significantly impacted cybersecurity, both positively and negatively?
GenAI is a hot topic in cybersecurity right now. Deepfake-driven social engineering attacks, which lead to fraud, have really struck a nerve in the media. It is disturbing to think that the phone call from your child is being faked, or the video conference is not really with the actual executives. This takes forgery and impersonation to a whole new level.
We’re also seeing more realistic phishing content, more believable with fewer errors, and it can now be created across a much broader set of languages.
We also see indications that attackers are using GenAI to quickly write exploit code to reduce the time that the defenders have to patch, and this opens up a very important window for them to exploit.
And now, although a step behind the attackers, defenders are using GenAI as well. It can be used to improve the effectiveness of employee cybersecurity training and even generate a vast amount of that content.
GenAI is also being applied to analyze data and populate investigations, documentation, and reports, which can consume a lot of time for security operations folks. And it’s just starting to be used to identify technical, behavioral, and process vulnerabilities. Yet this is an area where it may hold tremendous value in automating and continually evaluating what those vulnerabilities might be.
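As a hypothetical example of that report drafting, the sketch below feeds a handful of raw alerts to a GenAI model and asks for a short incident narrative and recommended next steps. The alert fields, provider, and model name are assumptions made for illustration, not a specific product's workflow.

```python
# Illustrative sketch: distilling raw security alerts into a short incident
# summary for an investigation report. Alert data and model choice are
# assumptions made for the example.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

alerts = [
    {"time": "2024-05-01T09:14Z", "source": "EDR",
     "detail": "PowerShell spawned from Word on host FIN-07"},
    {"time": "2024-05-01T09:16Z", "source": "Proxy",
     "detail": "FIN-07 contacted newly registered domain"},
    {"time": "2024-05-01T09:21Z", "source": "SIEM",
     "detail": "Multiple failed logins from FIN-07 to file server"},
]

prompt = (
    "You are assisting a SOC analyst. Summarize the following alerts into a "
    "brief incident narrative, note the likely attack stage, and list "
    "recommended next steps:\n\n" + json.dumps(alerts, indent=2)
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)

# The generated narrative would seed the investigation write-up, saving
# analyst time on documentation.
print(response.choices[0].message.content)
```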
Those examples highlight the dual-edged nature of AI in cybersecurity. So, how do regulatory and policy frameworks manage the risks and benefits of Generative AI in cybersecurity?
The creation of regulations always lags behind innovative technology, and this very much holds true with GenAI as well. Regulations are just emerging, with the EU taking an early step forward with the EU AI Act. Some companies are also attempting to self-regulate with policies and tools, and we see this most often in the social media space, where posting of AI-generated deepfake images and videos is much more common. Other nations and jurisdictions are discussing AI regulations and are in various stages of development and implementation, but this is a very complex topic. Overall, it’s really still a Wild West out there. There just aren’t a whole lot of regulations and laws protecting people or limiting the malicious use of generative AI.
Indeed, the regulatory landscape is still catching up with the rapid advancements in Generative AI, and it's clear that there's much work to be done. Looking ahead, what future trends do you foresee in the use of Generative AI for cybersecurity, and how can organizations prepare for these evolving threats and opportunities?
Attackers will get better with automated, customized attacks tailored for specific targets with very clear goals. They will move fast, scale broadly, and automate much of the manual work that currently limits those threats. So, we’re going to have a much bigger problem on our hands.
And alternatively, we can be our own worst enemy as well. As we rush to embrace, adopt, deploy, and consume GenAI tools and services, we often forgo the security checks. They may be overlooked or pushed out, and that can expose data, systems, and assets.
The future has yet to be written, but we should proactively begin addressing those unacceptable risks now, before they become a serious problem. We need to be smart and, again, proactively work to secure that embrace of GenAI so we can receive all the great benefits without being victimized.
Thank you for your comprehensive insights, Matthew.
Discover more from NextLabs’ Expert Series, featuring industry experts in educational and thought-provoking conversations on Data-Centric Security, Zero Trust Architecture, Safeguarding AI, and more.

NextLabs seeks to provide helpful resources and easy-to-digest information on data-centric security-related topics. To discuss and share insights on this resource with peers in the data security field, join the NextLabs community.