Artificial Intelligence in Cybersecurity: Threat or Enabler?

The late Stephen Hawking didn’t mince words when it came to AI. “The development of full artificial intelligence could spell the end of the human race,” he said.

Google co-founder Larry Page is far more optimistic about the technology, likening it to the world’s leading search engine. According to him, AI would be “the ultimate version of Google… It would understand exactly what you wanted, and it would give you the right thing. We’re nowhere near doing that now. However, we can get incrementally closer to that.”

AI, a broad catch-all term covering everything from Siri to retail-store robots to IBM’s Watson, is changing the world in unprecedented ways. A 2017 survey by BCG and MIT Sloan Management Review found that around 20 percent of companies had already integrated AI into their offerings and processes, while 70 percent of executives expected AI to play a substantial role in how they work within the next five years.

But if there’s one industry where it is much heralded, it’s cybersecurity. Next-generation security products are said to increasingly incorporate AI technologies, training the software on large datasets to detect abnormal or criminal behavior online. This, according to proponents, can be done even without a known cybercrime “signature” or pattern.

On the other hand, AI can also be an instrument for bypassing cybersecurity. In May 2018, the New York Times reported that U.S. and Chinese researchers had successfully commanded AI assistants built by Amazon, Apple, and Google to perform tasks such as opening websites and dialing phone numbers. This happened without the knowledge of the assistants’ users, a step closer to unlocking more critical commands, even transferring money.

This leads experts and end users alike to wonder: does AI enable cybersecurity, or, given its autonomy, does it do the opposite, threatening the safeguards already in place and allowing criminal activity to flourish online?

The Good: Benefits of AI in Cybersecurity

Because it is well suited to complicated, data-heavy problems, AI seems a natural fit for cybersecurity. As cyberattacks evolve and proliferate in the techniques they use, machine learning and automation can be wielded to detect threats better than traditional, rule-based methods.

Here are some early adopters of AI in the security landscape:

  • Google – Its anti-abuse research team uses deep learning to glean more insight from ever-growing volumes of abuse data.
  • IBM/Watson – The IBM team increasingly relies on its Watson cognitive computing platform, which is based on machine learning and used for “knowledge consolidation” tasks and threat detection.
  • Balbix – Its BreachControl platform uses AI-driven observation and analysis to provide real-time risk predictions, vulnerability management, and proactive breach control.
  • Juniper Networks – Its Self-Driving Network does the heavy lifting instead of IT staff, seeking to self-configure, monitor, defend, and analyze threats with little human intervention.

AI-based security products identify anomalies, accelerate threat detection, and improve the effectiveness of existing cybersecurity tools. They can help overwhelmed analysts triage security alerts that conventional software might otherwise miss, so analysts can stop chasing “false positive” alerts and focus on genuinely malicious activity.
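
To make this concrete, here is a minimal, hypothetical triage sketch in Python. The features, weights, and discount factor are illustrative assumptions, not any vendor’s actual scoring model; the point is simply that blending a detector’s confidence with business context pushes likely false positives down the queue.

    # Hypothetical sketch of AI-assisted alert triage: rank alerts so analysts
    # see the likeliest true positives first. Features and weights are
    # illustrative assumptions, not any vendor's actual model.
    from dataclasses import dataclass

    @dataclass
    class Alert:
        source_ip: str
        model_score: float      # anomaly probability from an ML detector, 0..1
        asset_criticality: int  # 1 (low) .. 5 (crown jewels)
        seen_before: bool       # matches a previously dismissed false positive

    def triage_score(alert: Alert) -> float:
        """Blend model confidence with business context; demote known noise."""
        score = alert.model_score * alert.asset_criticality
        if alert.seen_before:
            score *= 0.2  # heavily discount patterns analysts already dismissed
        return score

    alerts = [
        Alert("10.0.0.4", 0.91, 5, False),
        Alert("10.0.0.7", 0.88, 1, True),
        Alert("10.0.0.9", 0.40, 4, False),
    ]
    for a in sorted(alerts, key=triage_score, reverse=True):
        print(f"{a.source_ip}: priority {triage_score(a):.2f}")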

Organizations, after all, waste some $1.3 million annually responding to erroneous intelligence and chasing false security alerts, a SANS Institute survey noted.

Various AI or machine learning approaches can fit different cybersecurity objectives. Here are some of them:

  • Behavioral User Analytics – AI software examines raw network activity data, flagging unusual connections and spotting suspicious patterns. An attacker with compromised credentials can move through a network slowly, for example; the AI recognizes the anomaly, reports it to IT staff, and prompts further investigation (a minimal sketch follows this list).
  • Password Protection and Authentication – AI also hardens the login process, where passwords alone are highly vulnerable. It leans on physical characteristics such as fingerprints, retina scans, and other biometric factors to make systems safer.
  • Continuous Intelligence Collection – AI systems amass data around the clock on new or attempted attacks, successful breaches, and blocked or failed attempts, learning from all of these situations to keep defending an organization’s digital assets. The next time around, they are better equipped to react to an attempted breach and defend the perimeter.
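
As a concrete illustration of the behavioral analytics idea above, here is a minimal sketch using scikit-learn’s IsolationForest. The features (megabytes transferred, login hour, hosts contacted) and the simulated data are assumptions chosen for illustration, not a production feature set.

    # Minimal behavioral-analytics sketch: learn "normal" session behavior,
    # then flag outliers for human review. Features are illustrative only.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Simulated baseline: daytime logins, modest transfers, few hosts touched.
    normal = np.column_stack([
        rng.normal(50, 10, 500),  # MB transferred per session
        rng.normal(13, 2, 500),   # login hour (roughly 9-to-5)
        rng.normal(5, 2, 500),    # distinct internal hosts contacted
    ])

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    # A slow-moving intruder with valid credentials: 3 a.m. login, wide scanning.
    suspect = np.array([[55.0, 3.0, 40.0]])
    print(model.predict(suspect))  # -1 means anomalous -> raise for investigation

The model needs no known attack signature, only enough history of normal behavior; the 3 a.m., wide-scanning session is exactly the slow, credentialed intrusion a static rule set tends to miss.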

The Bad: Risks of AI in Cybersecurity

AI systems are deployed not just by those who want to defend, but also by those who intend to attack.

Back in 2016, an experimental AI was set loose to send simulated spearphishing links to selected Twitter users, to measure its effectiveness against a human performing the same task. The AI lured 34.4 percent of its targets to fake phishing sites versus the human’s 38 percent, but the AI made more than 800 attempts to the human’s 130 in the same period. That speed advantage alone can translate into more risk and more victims.

AI could also be trained on an organization’s unique defenses and tools, allowing attackers to breach them more reliably in the future. Malware could likewise be built to carry its own AI, bypassing advanced security measures. And as AI spreads from industry to industry, the attack surface expands along with it, better serving those with criminal motives.

Here are specific security risks to watch out for:

  • Data Manipulation – Attackers can maliciously corrupt or manipulate the data, implementation, or configuration of AI-based security systems. Hackers, for instance, can foil security algorithms by targeting the data they train on and the warning flags they look for (a toy demonstration follows this list).
  • Undetected Attacks – Attackers can develop toolkits that make malware harder to identify by its digital fingerprint. Because AI systems make decisions automatically, without immediate human involvement, a compromise of this kind can go undetected for a long time.
  • Imperfect Operations – Without massive volumes of data and events to learn from, an AI system can deliver inaccurate results and false positives.
  • Unclear Motives and Lack of Transparency – The reasons an AI program reaches a specific conclusion aren’t always clear to those who oversee it, and its underlying decision-making models aren’t always transparent or readily interpretable. Even when a security violation has been detected, its purpose can remain murky.
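
To see why data manipulation matters, consider the toy sketch below: flipping the labels on a fraction of the “malicious” training samples measurably degrades a classifier that defenders would rely on. Everything here is synthetic and illustrative, not a real attack recipe.

    # Toy illustration of training-data poisoning via label flipping.
    # Purely synthetic data; the whole setup is an assumption for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 5))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # 1 = malicious, 0 = benign
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

    clean = LogisticRegression().fit(X_tr, y_tr)

    # Attacker flips labels on 30% of the "malicious" training samples.
    y_bad = y_tr.copy()
    mal = np.where(y_bad == 1)[0]
    flip = rng.choice(mal, size=int(0.3 * len(mal)), replace=False)
    y_bad[flip] = 0

    poisoned = LogisticRegression().fit(X_tr, y_bad)
    print(f"clean accuracy:    {clean.score(X_te, y_te):.2f}")
    print(f"poisoned accuracy: {poisoned.score(X_te, y_te):.2f}")

The poisoned model learns to wave through part of what it should block, and because it keeps making decisions automatically, nothing on the surface looks broken.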

AI can also heighten risks across industries. These include:

  • Financial risks, e.g., credit fraud getting through much more easily
  • Reputational risks, e.g., a company that appears discriminatory
  • Human safety risks, e.g., interference with medical devices or systems in clinical settings, compromising patient safety
  • Health and environmental risks, e.g., compromised systems controlling cyberphysical devices that manage things like traffic flow and train routing

Turning the Ugly into Positives: Hope for Future Cybersecurity Systems

While AI systems can act as both threat and enabler in cybersecurity, companies can continue to leverage their capabilities and mount ever-improving programs to protect their digital properties. With AI’s speed advantage, they can streamline and strengthen existing security models, reducing complex, time-consuming manual interventions and redirecting human effort to more value-creating problem-solving.

Organizations can beef up their cybersecurity by applying AI at three levels:

  1. Prevention and Protection – AI in cybersecurity is still in its infancy, but research can work double-time to field AI-enabled prevention and protection systems, and human experts can learn to interact more effectively with algorithm-based decision-making.
  2. Detection – AI pushes detection beyond signature-based methods, which rely on static rules to recognize known attack patterns, helping companies identify threats from both internal and external sensors. With carefully crafted policies and compliance with existing laws, detecting cybersecurity challenges becomes easier and more efficient.
  3. Response – AI takes some of the burden off analysts by prioritizing risk areas and automating manual tasks. It can also mount an attack response on its own, whether outside or inside the perimeter: intelligent “traps,” for instance, make attackers falsely believe they are on the right path, and the deceit enables their identification (a bare-bones example follows this list).
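
As a bare-bones illustration of the “trap” idea in item 3, the sketch below fakes an SSH banner, accepts connections, and logs every probe. The port and banner text are illustrative assumptions; real deception platforms are far more elaborate and feed these observations back into detection models.

    # Minimal decoy service ("honeypot") sketch: lure attackers with a fake
    # SSH banner and record who connects. Port and banner are illustrative.
    import socket
    from datetime import datetime, timezone

    def run_trap(host: str = "0.0.0.0", port: int = 2222) -> None:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((host, port))
            srv.listen()
            print(f"decoy listening on {host}:{port}")
            while True:
                conn, addr = srv.accept()
                with conn:
                    conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")  # the lure
                    stamp = datetime.now(timezone.utc).isoformat()
                    print(f"[{stamp}] probe from {addr[0]}:{addr[1]}")

    if __name__ == "__main__":
        run_trap()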

The challenge for organizations and governments now is whether they are amply protecting their AI initiatives. Important questions include how they are protecting AI-based products and services, how they are shielding algorithms from unauthorized access, and whether they have the human and technical monitoring capabilities to detect if their AI has been tampered with.

In the end, it’s up to them to align their organizational policies, processes, and technology with their AI systems, helping those systems act as a savior, not as a pawn for digital harm.
