These engineers are developing artificially intelligent hackers
SAN FRANCISCO: Could you build an autonomous hacking system that finds and fixes vulnerabilities in computer systems before criminals can exploit them, without any human being involved? That’s the challenge facing seven teams competing in Darpa’s Cyber Grand Challenge in August.
Each of the teams has already won USD 750,000 for qualifying and must now pit its hacking system against six others in a game of ‘capture the flag’. The software must be able to attack the other teams’ vulnerabilities as well as find and fix weaknesses in its own code – all while protecting its performance and functionality. The winning team will walk away with USD 2 million.
“Fully automated hacking systems are the final frontier. Humans can find vulnerabilities but can’t analyse millions of programs,” explained Giovanni Vigna, a professor of computer science at the University of California, Santa Barbara.
Robo-hackers could be incredibly useful for organisations trying to defend their networks: they could quickly identify and patch problems before anyone exploits them to steal data or disrupt online services, without the need for a team of highly skilled human ‘uber-hackers’ in house. Outside of the Cyber Grand Challenge, other groups are working on hacking machines powered by artificial intelligence.
Konstantinos Karagiannis, chief technology officer of BT Americas, has been building a hacking system that uses neural networks to simulate the way the human brain learns and solves problems. “Using this approach a security scanner could identify intricate flaws using creative approaches you would have never thought of,” explained Karagiannis.
Alex Rice, co-founder of security company HackerOne, agrees. “Anything that can be used to defensively find vulnerabilities can be used by criminals – they all end up becoming a double-edged sword,” he told the Guardian. Despite this, Rice thinks the rise of automation in security is a good thing.