Cybersecurity is on the brink of major change. We’ve nearly reached the end of a record-breaking year for cyberattacks, including the recent and notorious Equifax breach, one of the worst in history, and, in the same month, the attack on Deloitte, a firm once dubbed the best cybersecurity consultant in the world. As these attacks multiply worldwide, technology leaders are digging deeper into potential tools for safeguarding data. Overall, the community agrees: artificial intelligence (AI) and machine learning (ML) are the way of the future.
An Inside-Out Response
We’re conditioned to create security measures that follow a military model: erecting firewalls, barricading, and blocking intruders at the outside, moving inward only if duty calls. Clearly, this is no longer working: attackers become more agile every day, bypassing the “outer perimeter” defenses networks put in place again and again. Darktrace, a cybersecurity company founded by Cambridge mathematicians and ex-British spies, is challenging the security status quo. “That is kind of the mindset the whole industry has,” says Darktrace CEO Nicole Eagan in an interview with Scott Rosenberg. “That if you analyze yesterday’s attack on someone else, you can help predict and prevent tomorrow’s attack on you. It’s flawed, because the attackers keep changing the attack vector.”
At Darktrace, the modus operandi is to radically change the way we protect our networks. The goal is immune-system-style security: a constantly running internal defense mechanism. As Eagan explains: “We’ve got skin, but occasionally that virus or bacteria is going to get inside. Our immune system is not going to shut our whole body down. It’s going to have a very precise response. That is where security needs to get.”
Darktrace is working to make the immune system metaphor a reality using machine learning. According to Eagan, AI is the only way to defend networks against the “unknown unknowns”: the clever breaches that sneak past antivirus software. In this new strategy, a machine learning system is taught what “normal” looks like for a network; anything that falls outside that norm is flagged in real time.
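To make the idea concrete, here is a deliberately minimal sketch of baseline-and-deviate anomaly detection. This is not Darktrace's method (which the article doesn't detail); it is a hypothetical illustration using a simple statistical baseline, with made-up traffic numbers, to show the general shape of "learn normal, flag the rest."

```python
import statistics

def learn_baseline(observations):
    """Model 'normal' as the mean and standard deviation of past traffic."""
    return statistics.mean(observations), statistics.stdev(observations)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag anything more than `threshold` standard deviations from normal."""
    return abs(value - mean) > threshold * stdev

# Hypothetical traffic for one host: requests per minute on a quiet day.
normal_traffic = [98, 102, 101, 97, 103, 99, 100, 104, 96, 100]
mean, stdev = learn_baseline(normal_traffic)

print(is_anomalous(101, mean, stdev))  # within the learned norm: False
print(is_anomalous(450, mean, stdev))  # far outside "normal": True
```

Real systems model far richer features (ports, protocols, peer devices, timing) and learn continuously, but the principle is the same: no signature of yesterday's attack is needed, only a model of this network's own normal behavior.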
Filling in the Gaps
You may be asking: why the need for ML and AI? Not only are there too few professionals in the industry to cover all the analysis and monitoring required; those professionals are also too skilled to be relegated to such tasks. “There aren’t enough humans available to do proper analysis, synthesis or anomaly detection in cybersecurity,” says Shahid Shah, CEO of Netspective Communications. “The only way to fill the skills gap is to program computers to do the grunt work and leave humans to the decision-making, incident management and follow-up.” In Bob Stasio’s article, “Can AI and Machine Learning Help Fill the Cybersecurity Skills Gap?”, Shah also points to skills gaps in several areas, including incident response and tracking, identity and access management (IAM), and advanced malware protection. “Instead of highly talented personnel spending time on repetitive and mundane tasks, the machine takes away this burden and allows them to get on with the more challenging task of finding new and complex threats,” explains Nick Ismail in his article, “The role of AI in cyber security.”
The downside of replacing humans with AI is the risk posed when AI cannot distinguish good from bad in a gray area. And when every attack is different, it will be very difficult, if not impossible, to train AI to anticipate an infinite variety of attacks. “The challenge in cybersecurity is that the initial phases of an attack, such as malware or spear-phishing emails, vary every time the attack is launched, making it impossible to detect and classify with confidence,” says Simon Crosby, co-founder and CTO at Bromium, in “Separating Fact From Fiction: The Role Of Artificial Intelligence In Cybersecurity.”
As I explain in another article, threat intelligence is an important concept to grasp before building out a team, and it should certainly be understood before involving ML or AI. Threat intelligence can be crucial to empowering cybersecurity teams, but for both to work effectively, they must agree on what constitutes a threat to the company and how to manage it.
The Future Will Tell
While ML and AI are by no means a cure-all for the cybersecurity issues we face today, industry leaders agree that this technology is the way of the future. The most important thing teams can do now is pinpoint their unique security needs and goals. Then decide: is there a need for a threat intelligence team? Does that team include a machine? If so, it is imperative that the company educate itself on what incorporating ML and AI into threat intelligence and cybersecurity efforts involves.
It’s clear that as threats become more creative and devastating, ML and AI will prove extremely useful, both for detection and for filling the gaps human analysts can’t cover.
Daniel Newman is the Principal Analyst of Futurum Research and the CEO of Broadsuite Media Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring digital transformation and how it is influencing the enterprise. From big data to IoT to cloud computing, Newman makes the connections between business, people and tech that companies need to benefit most from their technology projects, and his ideas are regularly cited in CIO.com, CIO Review and hundreds of other sites across the world. A five-time best-selling author, most recently of “Building Dragons: Digital Transformation in the Experience Economy,” Daniel is also a Forbes, Entrepreneur and Huffington Post contributor. An MBA and graduate adjunct professor, Daniel Newman is a Chicago native whose speaking takes him around the world each year as he shares his vision of the role technology will play in our future.