The Human and AI Partnership: Collaborating for Enhanced Cybersecurity

Author: Jay Allen, CITP, MBCS
Date Published: 6 October 2023

Over the past decade, artificial intelligence (AI) has transitioned from an emerging technology to one firmly embedded in daily life. AI assistants such as Siri and Alexa are commonplace, and AI drives many of the product recommendations and social media feeds people encounter online. However, AI is also poised to play an increasingly important role in cybersecurity. Both generative AI and precision AI hold promise in helping organizations defend against ever more sophisticated cyberattacks.1

Generative AI refers to AI systems that can generate new content, such as text, code, images or videos, based on their training data.2 The most prominent example today is ChatGPT, which can generate human-like text in response to prompts. This type of AI could be invaluable to cybersecurity teams, for example, by automatically generating threat intelligence reports, policies and other documentation that security analysts must write manually today. Generative AI may also have defensive uses, such as automatically generating benign content to confuse and divert attackers who are themselves leveraging AI.

Precision AI aims to provide AI systems with greater accuracy, consistency and reliability than conventional AI approaches. These capabilities could significantly enhance threat detection and response. Rather than simply producing a score indicating how likely an activity is to be malicious, precision AI systems can provide explanations and evidence to justify their outputs. This enables security teams to verify the logic behind AI verdicts rather than placing unquestioning trust in opaque models. Explainable AI models may also uncover biases or gaps in training data, leading to improved performance over time.
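To make the idea of evidence-backed verdicts concrete, consider a minimal sketch of an interpretable alert scorer. The signal names and weights below are purely illustrative assumptions, not drawn from any real product; the point is that the scorer returns the per-signal contributions alongside the score, so an analyst can inspect why an activity was flagged.

```python
# Illustrative signal weights -- these names and values are assumptions
# for the sake of the sketch, not from any real detection system.
WEIGHTS = {
    "failed_logins": 0.4,
    "off_hours_access": 0.25,
    "new_geolocation": 0.2,
    "privilege_escalation": 0.15,
}

def score_activity(signals: dict) -> dict:
    """Score an activity and return the evidence behind the verdict."""
    contributions = {
        name: WEIGHTS[name] * float(signals.get(name, 0))
        for name in WEIGHTS
    }
    total = sum(contributions.values())
    # Evidence: only the signals that actually contributed, strongest first.
    evidence = sorted(
        ((name, c) for name, c in contributions.items() if c > 0),
        key=lambda item: item[1],
        reverse=True,
    )
    return {"score": round(total, 2), "evidence": evidence}

verdict = score_activity({"failed_logins": 1, "new_geolocation": 1})
```

Here the analyst sees not just a 0.6 risk score but that failed logins and a new geolocation drove it, which is the kind of transparency the article argues precision AI should offer.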

Together, the synergies between generative and precision AI can automate significant portions of cybersecurity workflows and drastically expand security teams' capabilities. Analysts could leverage AI to handle tedious, repetitive tasks such as writing reports and reviewing log files for anomalies, freeing them to focus on higher-value investigations and strategic initiatives to improve cyberdefenses. AI could also make security teams more proactive. For instance, generative AI could identify policy vulnerabilities or gaps, while precision AI models could preemptively detect insider threats based on early warning signs.
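Reviewing log files for anomalies is one of the repetitive tasks mentioned above that lends itself to automation. As a minimal sketch, assuming a daily count of failed logins as the signal, an outlier can be flagged with a simple z-score against the historical baseline (the counts here are made up for illustration):

```python
import statistics

def flag_anomalies(daily_counts, threshold=3.0):
    """Return indices of days whose count deviates from the mean
    by more than `threshold` standard deviations."""
    mean = statistics.mean(daily_counts)
    stdev = statistics.pstdev(daily_counts)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [
        i for i, count in enumerate(daily_counts)
        if abs(count - mean) / stdev > threshold
    ]

# Thirteen quiet days of failed-login counts, then a spike on day 14.
counts = [12, 9, 11, 10, 13, 8, 12, 11, 9, 10, 12, 11, 10, 95]
anomalous_days = flag_anomalies(counts)
```

Real deployments would use far richer models, but even this toy version shows how the rote scanning can be delegated while the analyst investigates only the flagged day.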

However, there are understandable concerns regarding the responsible use of AI in cybersecurity. Generative models such as ChatGPT sometimes produce harmful, biased or misleading content, which would be a serious issue if such AI were to generate flawed cyberplans and procedures. Meanwhile, precision AI relies heavily on training data, which, if not adequately curated, could lead to discriminatory outcomes. There are also worries that over-reliance on AI may cause organizations to become complacent, particularly if the AI provides a false sense of security.

Although AI holds much promise, trust and transparency are critical for its adoption in cybersecurity. Organizations must carefully evaluate generative AI outputs for accuracy and be able to ascertain the reasoning behind precision AI verdicts. AI models should be continuously monitored and refined based on feedback from security teams leveraging them. Processes should ensure security analysts retain active oversight and decision authority when AI is deployed operationally.

With prudent governance and collaboration between technologists and security experts, AI could usher in a new era of enhanced protection against the growing threats faced in cyberspace.

A symbiotic partnership between humans and AI may be key to transforming cybersecurity in an age of increasingly cunning adversaries and sophisticated attacks.3 Cybersecurity leaders must view AI as a complement to, not a replacement for, human insight. Generative AI can expand human creativity and capacity, while precision AI brings greater transparency and focus. Yet responsible oversight and continuous improvement fueled by human insight remain essential to fulfilling AI's promise.

Security teams must be actively involved in curating the training data used by AI systems and continuously monitoring their performance after deployment. By evaluating real-world results, analysts can provide feedback to improve algorithmic logic, identify gaps in training data and correct unfair biases or blind spots. Ongoing collaboration and communication between technologists and security experts will help develop AI that augments human analysts as trusted partners rather than merely automating rote tasks.

Organizations should also design processes that involve humans in oversight, decision making and control whenever AI is used operationally, including implementing frameworks for continuously monitoring and improving AI systems based on feedback from security teams.4 Although the recommendations of generative and predictive AI systems can inform human judgment, final calls should remain with the security team. Analysts on the ground can assess context, use intuition and draw connections that AI currently lacks. Keeping a human in the loop for consequential actions can act as a check against potential AI failures.
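The human-in-the-loop principle described above can be sketched as a simple dispatch gate. Everything here is a hypothetical assumption for illustration, including the action names, the confidence threshold and the set of consequential actions; the design point is that the analyst, not the model, makes the final call on high-impact actions:

```python
# Actions deemed consequential enough to require analyst sign-off.
# This set and the 0.5 confidence floor are illustrative assumptions.
CONSEQUENTIAL_ACTIONS = {"isolate_host", "disable_account", "block_subnet"}

def dispatch(recommendation: str, confidence: float, analyst_approval=None):
    """Decide whether an AI recommendation executes, queues for human
    review, or is dropped as too uncertain to act on."""
    if confidence < 0.5:
        return "dropped"  # model is too unsure to act on at all
    if recommendation in CONSEQUENTIAL_ACTIONS:
        if analyst_approval is None:
            return "queued_for_analyst"  # human in the loop for final call
        return "executed" if analyst_approval else "rejected_by_analyst"
    return "executed"  # low-impact action may proceed automatically
```

A consequential action such as isolating a host never executes without explicit approval, which acts as the check against AI failures that the article calls for, while low-impact actions still benefit from automation.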

With wisdom and foresight, the power of AI can be harnessed to make the digital world a safer place for all. As AI becomes further embedded into cybersecurity workflows, a renewed focus on judicious governance and human-machine collaboration will be essential. Although AI promises to transform cybersecurity, it is still early days. Adopting these emerging technologies prudently, rather than mindlessly automating, while retaining active human insight will be vital to fulfilling that promise responsibly.


Endnotes

1 Haworth, R.; “Artificial Intelligence: Generative AI In Cyber Should Worry Us, Here’s Why,” Forbes, 4 August 2023
2 J.P. Morgan, “Is Generative AI a Game Changer,” 20 March 2023
3 Dash, B.; Ansari, M. F.; Sharma, P.; Ali, A.; “Threats and Opportunities With AI-Based Cyber Security Intrusion Detection: A Review,” International Journal of Software Engineering and Applications, vol. 13, iss. 5, September 2022
4 Thuraisingham, B.; “Artificial Intelligence and Data Science Governance: Roles and Responsibilities at the C-Level and the Board,” 2020 Institute of Electrical and Electronics Engineers (IEEE) 21st International Conference on Information Reuse and Integration for Data Science (IRI), Las Vegas, NV, USA, 2020, pp. 314-318

Jay Allen, CITP, MBCS

Is a seasoned technical leader with a rich background in steering multinational teams and global pre-sales efforts in cybersecurity. He has 20 years of experience within the IT industry across both vendors and private organizations. Allen is passionate about advancing the cybersecurity landscape.