The "Invisible" Threats Hidden in Your AI: Access to Public Security Advisories

The "Invisible" Threats Hidden in Your AI: Access to Public Security Advisories
Photo by GuerrillaBuzz / Unsplash

In the rapidly evolving cybersecurity landscape, OpenAI's GPT-4 has shown significant potential for exploiting real-world vulnerabilities simply by reading public security advisories.

Last month, researchers at the University of Illinois Urbana-Champaign published findings showing that an AI agent, left to its own devices and given access to CVE (Common Vulnerabilities and Exposures) advisories, was able to exploit 87% of the real-world vulnerabilities disclosed to it.
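The advisories such an agent consumes are entirely public and machine-readable. As a rough illustration (not the researchers' actual harness), here is a minimal Python sketch that pulls a published CVE description from NVD's public REST API; the CVE ID used is just a well-known example:

```python
# Minimal sketch: fetching a public CVE advisory from NVD's REST API.
# This shows how trivially machine-readable advisory text is for an
# automated agent; the CVE ID is illustrative (Log4Shell).
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_advisory(cve_id: str) -> str:
    """Return the English description of a published CVE (empty if not found)."""
    resp = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    for vuln in resp.json().get("vulnerabilities", []):
        for desc in vuln["cve"].get("descriptions", []):
            if desc["lang"] == "en":
                return desc["value"]
    return ""

print(fetch_advisory("CVE-2021-44228"))  # a widely known example
```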

AI has been a powerful force driving many areas of technology, and cybersecurity is no exception.

The paradox of AI in cybersecurity is that the very efficiency and speed with which these systems can find and exploit security gaps is also the main threat. They serve equally well for ethical hackers who intend to strengthen systems and for malicious actors who aim to break them.

1. Accelerated Exploit Development: AI shrinks the window between the disclosure of a vulnerability and its exploitation, leaving companies and individuals even less time to apply patches or mitigations (the sketch after this list illustrates how that exposure window can be measured).

2. Automated Exploit Execution: The research highlights how far AI has advanced toward executing exploits entirely on its own. This would give unsophisticated individuals access to sophisticated cyber-attacks, multiplying the pool of potential attackers.

3. Exploiting Unpatched Vulnerabilities: So-called "one-day" vulnerabilities, those already disclosed but not yet patched because the vendor or affected organization has not shipped a fix, remain exposed. AI's ability to exploit them can do damage before defensive measures are even in place.
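To make point 1 concrete, here is a small, hypothetical sketch of how a defender might measure the exposure window between an advisory's publication and their own patch rollout; the dates are invented for illustration:

```python
# Hypothetical dates for illustration; in practice the publication date
# would come from the advisory (e.g., NVD) and the patch date from your
# own change-management records.
from datetime import date

cve_published = date(2024, 4, 1)   # advisory goes public; the "one-day" clock starts
patch_applied = date(2024, 4, 9)   # fix finishes rolling out across the fleet

exposure_days = (patch_applied - cve_published).days
print(f"Exposure window: {exposure_days} days")
# Every day in this window is available to an automated, AI-driven attacker.
```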

This indicates a need to reassess how security advisories are handled as AI matures. Security through obscurity, that is, withholding the details of how a system is secured, is growing impractical.

Instead, the researchers advise proactiveness: shipping improvements and patches fast enough to outpace AI-driven exploits.
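One concrete form that proactiveness can take is automated advisory monitoring. The sketch below is an assumed workflow, not something from the study: it polls NVD's public API for advisories published in the last 24 hours that mention a product keyword, so triage and patching can start immediately.

```python
# Sketch of a daily advisory-monitoring job against NVD's public API.
# The keyword "apache" is a placeholder; substitute products in your stack.
from datetime import datetime, timedelta, timezone
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def recent_advisories(keyword: str, hours: int = 24) -> list[str]:
    """Return IDs of CVEs mentioning `keyword` published in the last `hours`."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)
    params = {
        "keywordSearch": keyword,
        # NVD expects extended ISO-8601 timestamps for date-range filters.
        "pubStartDate": start.isoformat(timespec="milliseconds"),
        "pubEndDate": end.isoformat(timespec="milliseconds"),
    }
    resp = requests.get(NVD_API, params=params, timeout=30)
    resp.raise_for_status()
    return [v["cve"]["id"] for v in resp.json().get("vulnerabilities", [])]

for cve_id in recent_advisories("apache"):
    print(f"New advisory to triage: {cve_id}")
```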

Ethical Consideration and Regulation: As the technology matures, guidelines for its ethical use and tight regulations against misuse are needed. There is also a need to raise awareness: AI brings specific policy questions about how sensitive information is disseminated and about AI's place in security roles.

Collaboration: Within the cybersecurity community, partnership should be encouraged, since working together helps build more robust defenses against AI-driven threats. Sharing information and strategies makes collective security more effective and limits exposure to vulnerabilities.
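Open standards for that information sharing already exist. As one hedged example, the snippet below uses the `stix2` Python library (pip install stix2) to package an indicator in STIX 2.1 format, the kind of object commonly exchanged over TAXII feeds; the indicator value is a documentation-range IP, not real threat data:

```python
# Sketch of machine-readable threat-intel sharing via the STIX 2.1 standard.
from stix2 import Indicator

indicator = Indicator(
    name="Scanner probing recently disclosed CVE",
    description="Source IP observed mass-probing a one-day vulnerability.",
    pattern="[ipv4-addr:value = '203.0.113.5']",  # RFC 5737 documentation IP
    pattern_type="stix",
)

# The serialized JSON can be exchanged with peers, e.g. over a TAXII feed.
print(indicator.serialize(pretty=True))
```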

Merging AI with cybersecurity efforts provides enormous benefits and introduces new risks that must be carefully managed. The cybersecurity community must remain watchful and assertive to ensure that AI developments enhance security rather than increase risk. This case study is another example of the dual-use nature of technology, underscoring the need for balanced, informed approaches to deploying AI in sensitive areas like cybersecurity.

To read the original article, visit: https://www.theregister.com/2024/04/17/gpt4_can_exploit_real_vulnerabilities
