Two modern-day buzzwords are AI and cybersecurity. Surprisingly, they're not often discussed together. However, in the current startup landscape, there are efforts to integrate the two, by both black hat and white hat hackers.
In general, the goal for white hat hackers is better incident detection and response; for example, there is research focused on AI-based malware detection. The goal for black hat hackers is flexible algorithms, especially ones that evade detection, such as generators of realistic phishing emails.
Let's take a look at some real-world examples to get a better sense of what's happening. Some of it is scary, but some of it is awe-inspiring. Shout out to security adviser Steve Romig for additional information!
SNAP_R

One recent AI that has made the rounds on Twitter is called SNAP_R. Its goal is to spear-phish users into clicking shortened links. During a 2-hour test run, SNAP_R tricked 275 users into clicking a link to Google. For example, one tweet was "@user ahahaha no way?! Yeah I saw One Direction in concert 😭 😍 💔 _link_". For the actual tweet generation, it could use either Markov chains or LSTM neural networks.
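The Markov-chain approach is simple enough to sketch in a few lines. The snippet below is only an illustration of the technique, not SNAP_R's actual code: each word prefix in a training corpus is mapped to the words that follow it, and generation is a random walk over that map.

```python
import random
from collections import defaultdict

def build_markov_model(corpus, order=1):
    """Map each `order`-word prefix to the words that follow it."""
    words = corpus.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        model[prefix].append(words[i + order])
    return model

def generate(model, seed, length=10):
    """Random-walk the chain from `seed`, one word at a time."""
    out = list(seed)
    prefix = tuple(seed)
    for _ in range(length):
        followers = model.get(prefix)
        if not followers:  # dead end: this prefix never appeared in training
            break
        out.append(random.choice(followers))
        prefix = tuple(out[-len(seed):])
    return " ".join(out)

# Toy corpus; an attacker would instead train on a victim's recent tweets
# so that the output mimics their style.
model = build_markov_model("the cat sat on the mat the cat ran")
print(generate(model, ("the",), length=5))
```

Trained on a target's own timeline, a chain like this produces text that sounds plausibly like them, which is exactly what makes the phishing tweets convincing.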
Thankfully the users involved were not harmed, but the shortened URL could have pointed anywhere. This shows how easily a truly malicious attacker could cause mass damage quickly. Not only could they push out a broader attack, but they could also target Facebook and other social media platforms.
As always, make sure your links are trustworthy!
DARPA Cyber Grand Challenge
In August 2016, top security specialists from around the world gathered to participate in a new type of challenge: a battle of AIs. Their goal was to build an AI that could find flaws in software and patch them in a matter of seconds, versus the months that humans would require. In addition, the AI was tasked with finding and exploiting flaws in opponents' software. This challenge was called the DARPA Cyber Grand Challenge.
The competition consisted of 100 teams, and there were many highlights from the gathering. For example, most bugs in the sandboxed environment were purposefully built in to give teams some attack surface. Hence, it came as a surprise when Team Xandra found an unintended bug in a never-before-seen binary, and proved it, in a matter of minutes.
This segues nicely into another highlight. One of the challenges centered around the crackaddr vulnerability found in mail servers. This vulnerability is notoriously hard to analyze: reasoning about all of the program's reachable states runs up against the halting problem. So it came as a surprise when Team MechaPhish was able to perform enough analysis to exploit the vulnerability. As the challenge announcer put it, "For them to be capable of doing this means that we're one step closer to ... the Everest of program analysis because of that finite state problem."
Captcha

Captcha, the world's hardest word to spell, is a tool often used to separate human users from bots, preventing the kind of platform abuse seen in the Twitter phishing example above. Over time, these tests have become more and more advanced to beat back the encroaching tide of bots. Originally, this involved transcribing plain text. Then it became transcribing text with overlaid graphics, and now we have Google's current incarnation: image classification.
One bot that prompted captcha's growing complexity was created by Google's own research team. It was able to break many state-of-the-art captcha systems with above 30% accuracy. Notably, even 1% accuracy is considered enough to break a captcha, since an automated, targeted attack can simply retry at scale.
Another bot that has helped this general push is unCaptcha, an AI aimed at solving Google's audio captchas. It was originally created to test whether the system was vulnerable, and it was able to complete approximately 85% of Google's audio captcha challenges. Google has since released patches to defeat it, like better detection of browser automation.
AI Cybersecurity Companies

Already, we're starting to see companies focused on the AI and cybersecurity duality. For example, consider Darktrace, Vectra, and Alphabet's Chronicle; all are cybersecurity companies focused on detecting and responding to attacks. Traditional methods rely on software signatures or heuristics for detection, but these companies appear to use unsupervised anomaly detection. While it's hard to learn much about their methods, we do know that they have received international grants and awards for their work, which shows they are at least somewhat successful.
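The core idea behind anomaly detection is easy to sketch. The snippet below illustrates only the general principle (a z-score test over a traffic metric), not how Darktrace, Vectra, or Chronicle actually work; the traffic numbers are made up:

```python
import statistics

def flag_anomalies(samples, threshold=2.0):
    """Return the samples that deviate from the mean by more than
    `threshold` population standard deviations (a simple z-score test)."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # no variation at all, so nothing stands out
    return [x for x in samples if abs(x - mean) / stdev > threshold]

# Hypothetical requests-per-minute from one host; the burst at the
# end is the kind of deviation an anomaly detector would flag.
traffic = [10, 12, 11, 9, 10, 11, 12, 10, 300]
print(flag_anomalies(traffic))
```

The appeal over signatures is that nothing here encodes what an attack looks like, only what "normal" looks like, so novel attacks can still stand out. The trade-off is false positives: real systems need far richer baselines than a single mean and standard deviation.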
On the flip side, Darktrace has recently reported detecting a network attack that may have incorporated pieces of machine learning. In general, cyberattackers want to go undetected, and will sometimes mimic normal traffic to accomplish this. That task is ripe for an adversarially trained AI.