
With advances in machine learning and artificial intelligence, cybercrime rings and government agencies could significantly improve their game in spying and online attacks, experts in both fields say.

At the RSA conference in February, Alphabet Chairman Eric Schmidt, Google’s former CEO, said that when he and other computer scientists started working on AI problems in the 1970s, they didn’t think that the proverbial bad guys would take a strong interest.

“Imagine how different the Internet would be if we knew when it was being designed what we know today,” Schmidt said. “It didn’t occur to us that there were criminals.”

Since then, of course, reality has been following the paths of Philip K. Dick's science fiction. Computer scientists have been developing technology that gives machines, in essence, their own power to think and learn.

AI, which encompasses forms of computing that emulate advanced human cognitive abilities (such as image recognition or intuition), is influencing the next generation of phishing methods. And cybersecurity experts are developing machine-learning algorithms designed to help software dynamically improve its own abilities through experience, thus making up for a shortfall in personnel and effectively combating a deluge of attacks.

“To keep pace with intelligent, unpredictable threats, cybersecurity will have to adopt an intelligent security of its own,” Dave Palmer, director of technology at British cybersecurity company Darktrace, wrote in an analysis of AI’s impact on spear-phishing emails.

Business security professionals today are starting to take AI seriously. They are already using machine-learning technology to more efficiently tackle problems such as quantifying risk, detecting network attacks and traffic anomalies, and pinpointing malicious applications and system vulnerabilities, experts say. And they are researching ways AI could address a perennial overload on the infosec community, with North American businesses handling roughly 10,000 security alerts per day, far more than their infosec teams can manage, according to research from Damballa.

Cybersecurity teams combine massive amounts of data with faster machine-based intelligence to more effectively identify out-of-place data and filter out illegitimate information, says Roman Yampolskiy, a University of Louisville professor who specializes in AI and cybersecurity. On the other hand, cybercrime rings can use the same underlying technology to more effectively trick someone into downloading malicious software.
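The article does not describe any specific algorithm, but the idea of flagging "out-of-place" data can be illustrated with a minimal, hypothetical sketch: a robust outlier test (here, median absolute deviation) applied to a made-up feature such as bytes transferred per network session. Real products use far richer models; the function name, threshold, and data below are illustrative assumptions only.

```python
# Minimal sketch: flag values that sit far from the rest of the sample,
# the statistical core of spotting "out-of-place" data in a traffic log.
from statistics import median

def flag_anomalies(values, threshold=3.5):
    """Return indices of values whose robust z-score exceeds `threshold`.

    Uses the median and the median absolute deviation (MAD), which,
    unlike the mean and standard deviation, are not dragged toward
    extreme outliers by the outliers themselves.
    """
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []  # no spread at all; nothing stands out
    # 0.6745 rescales MAD so scores are comparable to standard deviations
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > threshold]

# Illustrative data: bytes transferred per session; one session is anomalous.
sessions = [1200, 1350, 1100, 1240, 1280, 98000, 1310, 1190]
print(flag_anomalies(sessions))  # → [5], the 98,000-byte session
```

A mean-and-standard-deviation test would be a natural first attempt, but a single large outlier inflates the standard deviation enough to hide itself; the median-based version avoids that masking effect, which is why it serves as the sketch here.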

“It is now possible to automatically generate phishing emails of such high quality that even cybersecurity experts will fall for them,” Yampolskiy says.

AI programs, which can clearly be used to either promote or compromise security, can spit out unexpected results, Yampolskiy adds. Their results can be “difficult to predict, and are often even more difficult to understand. They make decisions which will frequently surprise and disappoint us.”

Artificial intelligence and big data

Nevertheless, the growing availability of big data, combined with the processing power of graphics-processing units, creates "a renaissance period for artificial intelligence," according to Guy Caspi, CEO of Deep Instinct, which uses AI to stop zero-day attacks.

In August, security company SparkCognition launched an antivirus product called DeepArmor designed to utilize AI to learn new malware approaches and identify mutating viruses. And at the annual DefCon hacker conference in Las Vegas, artificial-intelligence developments stole the spotlight during the Cyber Grand Challenge, organized by the U.S. Defense Advanced Research Projects Agency. The winning team, ForAllSecure, nabbed a $2 million prize for a cloud-based AI bug-hunting system.

With a huge shortfall of experienced infosec professionals available to handle the growing onslaught of breaches and threats, Matt Wolff, chief data scientist at AI device security specialist Cylance, believes that technological boons will help offset a lack of human expertise, making cybersecurity workers “more productive and able to make decisions faster.”

“Three years ago, we had the data, but we were not seeing it used for anything really intelligent,” Wolff says. “Now we’re seeing a big push by a lot of companies and what they can do now.”

Wolff adds that machine learning has a symbiotic relationship with big data.

“Ultimately, the goal of security should be to prevent the attack from happening,” Wolff says. Within three to four years, he expects “every cybersecurity product will have AI baked into it somewhere… not only in this industry, but many other industries.”