Hackers Don’t Have to Be Human Anymore - This Bot Battle Proves It

Last night, at the Paris Hotel in Las Vegas, seven autonomous bots proved that hacking isn’t just for humans.

The Paris ballroom played host to the DARPA Cyber Grand Challenge, the first hacking contest to pit bot against bot rather than human against human. Designed by seven teams of security researchers from across academia and industry, the bots were asked to play offense and defense, fixing security holes in their own machines while exploiting holes in the machines of others. Their performance surprised and impressed some security veterans, including the organizers of this $55 million contest and those who designed the bots.

Starting with over 100 teams consisting of some of the top security researchers and hackers in the world, the Defense Advanced Research Projects Agency (DARPA) pitted seven teams against each other in the Cyber Grand Challenge final event, held August 4 in Las Vegas. During the competition, each team’s Cyber Reasoning System (CRS) automatically identified software flaws and scanned a purpose-built, air-gapped network to identify affected hosts. For nearly twelve hours, teams were scored on how capably their systems protected hosts, scanned the network for vulnerabilities, and maintained the correct function of software.
— DARPA
Video: Mike Walker and Dan Kaufman of DARPA on future machines battling hackers to identify and plug security breaches.

During the contest, which played out over a matter of hours, one bot proved it could find and exploit a particularly subtle security hole similar to one that plagued the world’s email systems a decade ago—the Crackaddr bug. Until yesterday, this seemed beyond the reach of anything other than a human.

That was astounding. Anybody who does vulnerability research will find that surprising.
— Mike Walker, the veteran white-hat hacker who oversaw the contest
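
To see why this was considered so hard, it helps to look at the shape of the flaw. The crackaddr() vulnerability in Sendmail was a stack buffer overflow in the routine that parses e-mail addresses: whether the copy overran its buffer depended on counters and bounds adjusted across many loop iterations, so no single line of code looks obviously wrong. The fragment below is a minimal sketch in that spirit, not the actual Sendmail code, showing how a bound that is relaxed on every '<' but only restored on a matching '>' lets unbalanced input walk a pointer past the end of a fixed-size buffer.

/*
 * A minimal sketch, NOT the actual Sendmail source: a crackaddr()-style
 * stack overflow where the buffer bound depends on state carried across
 * loop iterations.
 */
#include <stdio.h>
#include <string.h>

#define OUT_SIZE 32

static void parse_address(const char *input)
{
    char out[OUT_SIZE];
    char *p = out;
    char *limit = out + OUT_SIZE - 1;   /* intended upper bound */
    int depth = 0;                      /* '<' ... '>' nesting   */

    for (const char *c = input; *c != '\0'; c++) {
        if (*c == '<') {
            depth++;
            limit++;            /* bound relaxed on every '<' ...        */
        } else if (*c == '>' && depth > 0) {
            depth--;
            limit--;            /* ... but restored only on a matching '>' */
        }
        if (p < limit)
            *p++ = *c;          /* unbalanced '<' push p past 'out'      */
    }
    *p = '\0';
    printf("parsed: %s\n", out);
}

int main(void)
{
    /* A long run of unmatched '<' characters overflows the stack buffer. */
    char attack[256];
    memset(attack, '<', sizeof(attack) - 1);
    attack[sizeof(attack) - 1] = '\0';
    parse_address(attack);
    return 0;
}

No single statement here is wrong in isolation; a tool has to reason about how many '<' and '>' characters have already been seen, and that kind of stateful reasoning is why this class of bug was long assumed to need a human.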

In certain situations, the bots also showed remarkable speed, finding bugs far more quickly than a human ever could. But at the same time, they proved that automated security is still deeply flawed. One bot quit working midway through the contest. Another patched a hole but, in the process, crippled the machine it was supposed to protect. The gathered researchers agreed that these bots are still a very long way from finding and fixing all the enormously complex bugs a human can.

According to preliminary and unofficial results, the $2 million first-place prize will go to Mayhem, a bot built by the startup ForAllSecure, which grew out of research at Carnegie Mellon. This was the bot that quit working. But you shouldn’t read that as an indictment of last night’s contest. On the contrary: it shows that these bots are a little smarter than you might expect.

DARPA's Cyber Grand Challenge Final Event took place August 4, 2016, at the Paris Las Vegas Hotel and Conference Center. Seven computers developed by teams of hackers played the world's first-ever all-machine game of Capture the Flag.

Read the full article to find out more about ...

  • The Challenge
  • Rematch with the Past
  • Game Theory
  • Crackaddr Cracked
  • The Unintended Bug

Google’s AI Creates its Own Inhuman Encryption

You are invited to help make this AI blog better: please comment, share, and like to make it more widely known, and please also contact me if you want to share your views on how this blog should work.