Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency, warned recently that as generative AI tools spread, companies may have an incentive to favor speed to market over security. As Cybersecurity Dive reports, Easterly said at a Hack the Capitol event, “I think we need to be very, very mindful of making some of the mistakes with artificial intelligence that we’ve made with technology.”
Easterly’s remarks add to a federal push to put more of the security onus on tech companies instead of consumers. That was one of the Biden administration’s goals in its recently issued national cybersecurity strategy.
Rick Grinnell, founder and managing partner of Glasswing Ventures, writes in a CIO op-ed that generative AI tools such as Microsoft’s recently launched Security Copilot should help security teams sift through incident data and automate responses. Generative AI might even be a boon for smaller organizations that lack sufficient cybersecurity resources or expertise, Grinnell writes.
On the other hand, Grinnell cautions that privacy concerns will probably limit the growth of AI-based cybersecurity capabilities. And he warns that threat actors could benefit from testing their attacks against generative AI tools, too, so mid-tier companies tempted to share their data with an AI should be careful.
As ReadWrite reports, AI in cybersecurity is free of human error but also of human ingenuity: a computer-based tool may never need sleep, but it can be fooled by hackers like anything else on a network.
Thomas Aneiro, senior director for technology advisory services at Moxfive, writes for VentureBeat that cybersecurity practitioners “have barely scratched the surface” of what generative AI might do for the field. Aneiro argues that far from eliminating jobs in the information technology space, tools like ChatGPT could help cybersecurity pros automate mundane work.
Of course, as The Washington Post reports, generative AI is already helping hackers in much the same way. Zscaler has said that AI played a role in the 47% jump in phishing attacks it tracked in 2022.