Generative AI and Security: HackerOne's Predictions
Generative Artificial Intelligence (GAI) is popping up in all manner of software every day. It's a trend we're seeing unfold right now, characterized by a firehose of daily announcements of new AI-powered products and capabilities. Many businesses, including HackerOne customers like Snapchat, Instacart, CrowdStrike, Salesforce, and many others, have all announced AI-powered features and user experiences. GAI capabilities will soon be table stakes for any software company as their customers will simply expect it. Those who do not take advantage of this technological evolution will decline into irrelevancy and be replaced by better and more productive alternatives. For example, users will expect to just talk directly to their reports and dashboards instead of figuring out yet another query language.
A world where Generative AI is ubiquitous will soon be here. What does that mean for security? We have two main predictions.
Offensive AI Will Outpace Defensive AI
In the short term, and possibly indefinitely, we will see offensive or malicious AI applications outpace defensive ones that use AI for stronger security. This is not a new phenomenon for those familiar with the offense vs. defense cat-and-mouse game that defines cybersecurity. While GAI offers tremendous opportunities to advance defensive use cases, cybercrime rings and malicious attackers will not let this opportunity pass either and will level up their weaponry, potentially asymmetrically to defensive efforts, meaning there isn’t an equal match between the two.
It’s highly possible that the commoditization of GAI will mean the end of Cross-Site Scripting (XSS) and other current common vulnerabilities. Some of the top 10 most common vulnerabilities — like XSS or SQL Injection — are still far too common, despite industry advancements in Static Application Security Testing (SAST), web browser protections, and secure development frameworks. GAI has the opportunity to finally deliver the change we all want to see in this area.
However, while advances in Generative AI may eradicate some vulnerability types, others will explode in effectiveness. Attacks like social engineering via deep fakes will be more convincing and fruitful than ever. GAI lowers the barrier to entry, and phishing is getting even more convincing.
Have you ever received a text from a random number claiming to be your CEO, asking you to buy 500 gift cards? While you’re unlikely to fall for that trick, how would it differ if that phone call came from your CEO’s phone number? It sounded exactly like them and even responded to your questions in real-time. Check out this 60 Minutes segment with hacker, Rachel Tobac, to see it unravel live.
The strategy of security through obscurity will also be impossible with the advance of GAI. HackerOne research shows that 64% of security professionals claim their organization maintains a culture of security through obscurity. If your security strategy still depends on secrecy instead of transparency, you need to get ready for it to end. The seemingly magical ability of GAI to sift through enormous datasets and distill what truly matters, combined with advances in Open Source Intelligence (OSINT) and hacker reconnaissance, will render security through obscurity obsolete.
Attack Surfaces Will Grow Exponentially
Our second prediction is that we will see an outsized explosion in new attack surfaces. Defenders have long followed the principle of attack surface reduction, a term coined by Microsoft, but the rapid commoditization of Generative AI is going to reverse some of our progress.
Software is eating the world, Marc Andreessen famously wrote in 2011. He wasn’t wrong — code increases exponentially every year. Now it is increasingly (or even entirely) written with the help of Generative AI. The ability to generate code with GAI dramatically lowers the bar of who can be a software engineer, resulting in more and more code being shipped by people that do not fully comprehend the technical implications of the software they develop, let alone oversee the security implications.
Additionally, GAI requires vast amounts of data. It is no surprise that the models that continue to impress us with human levels of intelligence happen to be the largest models out there. In a GAI-ubiquitous future, organizations and commercial businesses will hoard more and more data, beyond what we now think is possible. Therefore, the sheer scale and impact of data breaches will grow out of control. Attackers will be more motivated than ever to get their hands on data. The dark web price of data “per kilogram” will increase.
Attack surface growth doesn’t stop there: many businesses have rapidly implemented features and capabilities powered by generative AI in the past months. As with any emerging technology, developers may not be fully aware of the ways their implementation can be exploited or abused. Novel attacks against applications powered by GAI will emerge as a new threat that defenders have to worry about. A promising project in this area is the OWASP Top 10 for Large Language Models (LLMs). (LLMs are the technology fueling the breakthrough in Generative AI that we’re all witnessing right now.)
What Does Defense Look Like In A Future Dominated By Generative AI?
Even with the potential for increased risk, there is hope. Ethical hackers are ready to secure applications and workloads powered by Generative AI. Hackers are characterized by their curiosity and creativity; they are consistently at the forefront of emerging technologies, finding ways to make that technology do the impossible. As with any new technology, it is hard for most people, especially optimists, to appreciate the risks that may surface — and this is where hackers come in. Before GAI, the emerging technology trend was blockchain. Hackers found unthinkable ways to exploit the technology. GAI will be no different, with hackers quickly investigating the technology and looking to trigger unthinkable scenarios — all so you can develop stronger defenses.
There are three tangible ways in which HackerOne can help you prepare your defenses for a not-too-distant future where Generative AI is truly ubiquitous:
- HackerOne Bounty: Continuous adversarial testing with the world’s largest hacker community will identify vulnerabilities of any kind in your attack surface, including potential flaws stemming from poor GAI implementation. If you already run a bug bounty program with us, contact your Customer Success Manager (CSM) to see if running a campaign focused on your GAI implementations can help deliver more secure products.
- HackerOne Challenge: Conduct scoped and time-bound adversarial testing with a curated group of expert hackers. A challenge is ideal for testing a pre-release product or feature that leverages generative AI for the first time.
- HackerOne Security Advisory Services: Work with our Security Advisory team to understand how your threat model will evolve by bringing Generative AI into your attack surface, and ensure your HackerOne programs are firing on all cylinders to catch these flaws.
Want to hear more? I’ll be speaking on this topic at Black Hat on Thursday, August 10 at Booth #2640, or request a meeting. Check out the Black Hat event page for details.
The Ultimate Guide to Managing Ethical and Security Risks in AI