
Generative AI Risks and the Need for Enhanced Code Security and Cybersecurity

Generative AI is making its mark across different industries. In medicine, it is significantly reducing doctors’ administrative workloads and even helping improve the accuracy of diagnoses. In commerce, it is being used to build supply chain resilience, and it is poised to reshape both the advertising business and the gaming industry. Generative artificial intelligence promises myriad life-improving changes.

However, like most other technologies, AI is not reserved for beneficial purposes. Cybercriminals are taking advantage of the technology to boost their attacks. From convincing deepfakes used in scams and phishing to the rapid generation of malware, AI is becoming a popular tool for adversarial action. Regulators, consumers, and businesses are aware of these unwelcome consequences, which is why AI safety has become the subject of prominent discussion.

The field of software development and operations is particularly prone to the effects of generative AI, both the good and the bad. Here’s a rundown of the risks that come with AI in the context of code security and overall cybersecurity.

The generative AI and code security connection

When generative AI is mentioned as a threat, it is typically in connection with the rapid generation of malicious software or the production of deepfakes, cloned voices, and convincing personalized messages used in phishing and other social engineering attacks. It is rarely discussed in relation to software code. In reality, however, generative AI can also undermine code security.

One situation where code security and generative AI intertwine is the use of artificial intelligence systems to write code. As widely reported, modern AI systems like ChatGPT can already write code. According to a report from an analytics platform company, around 31 percent of organizations that use generative AI say they have used it to write code.

The problem with relying on ChatGPT to write code is that AI does not guarantee secure coding. ChatGPT itself admits that it does not emphasize security when writing code, and Google’s Gemini likewise cannot be relied on to produce secure code. The main reasons are the AI systems’ limited understanding of context and of the specific environment the code is intended for, the limited reasoning and logic of existing AI systems, and their inability to undertake testing and debugging.

In other words, secure code is unlikely to come out of AI in its current state. AI may eventually learn to incorporate security best practices or adopt the shift-left approach, but these scenarios are not expected to materialize in the foreseeable future. If organizations want secure code, they need expert and experienced human coders.
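To make the point concrete, here is a minimal, hypothetical illustration in Python; the function and table names are invented for the example. The first function builds an SQL query the way AI-generated code often does when security is not part of the prompt, concatenating untrusted input straight into the statement, while the second is the parameterized version a security-conscious developer would write.

    import sqlite3

    # Hypothetical example: the kind of lookup an AI assistant might produce when
    # asked only to "fetch a user by name" -- the untrusted value is concatenated
    # directly into the SQL statement, opening the door to SQL injection.
    def find_user_insecure(conn: sqlite3.Connection, username: str):
        query = "SELECT id, email FROM users WHERE name = '" + username + "'"
        return conn.execute(query).fetchall()

    # The safer version: the value is passed as a bound parameter, so the
    # database driver handles escaping and the input cannot alter the query.
    def find_user_secure(conn: sqlite3.Connection, username: str):
        query = "SELECT id, email FROM users WHERE name = ?"
        return conn.execute(query, (username,)).fetchall()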

Generative AI risks

AI’s direct role in writing secure code may appear far-fetched for now, but its ability to aid threat actors is already a reality. As mentioned, generative AI can be used to produce malware at a rapid pace. It can churn out new malware or modify existing strains to evade detection systems. In mid-2023, the United States Federal Bureau of Investigation published a warning about cybercriminals who use AI tools to create malicious code and launch intricate cyberattacks with greater ease.

In addition to producing malware, generative AI can also help threat actors detect vulnerabilities to exploit. It can create a wide range of test cases to examine code execution paths and edge cases, accelerating how quickly cybercriminals find exploitable security issues in code. These capabilities are already commonplace in the threat detection features of cybersecurity systems, but generative artificial intelligence now puts them within reach of cybercriminals as well.
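As a rough sketch of what that kind of automated probing looks like, the snippet below randomly generates malformed inputs and records every one that crashes a parser. The parse_record function here is a made-up stand-in for real application code; in practice, generated test cases like these would be aimed at whatever input-handling logic an attacker, or a defender, wants to stress.

    import random
    import string

    def parse_record(raw: str) -> dict:
        # Hypothetical target: a naive parser standing in for real application code.
        key, value = raw.split("=", 1)
        return {key: int(value)}

    def random_input(max_len: int = 40) -> str:
        # Build edge-case-heavy inputs: random printable characters mixed with
        # extra separators and a control byte that often trip up parsers.
        chars = string.printable + "==\x00"
        return "".join(random.choice(chars) for _ in range(random.randint(0, max_len)))

    def fuzz(trials: int = 10000) -> list:
        crashes = []
        for _ in range(trials):
            sample = random_input()
            try:
                parse_record(sample)
            except Exception as exc:  # any unhandled error marks a candidate flaw
                crashes.append((sample, type(exc).__name__))
        return crashes

    if __name__ == "__main__":
        print(f"{len(fuzz())} inputs caused unhandled exceptions")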

Additionally, it is important to highlight how organizations that habitually use AI to write code are exposing themselves to potential attacks. Code generated by AI can contain security issues. It may also come with unforeseen consequences or unintended functionality that works to the benefit of attackers. Since organizations that use AI to write code typically do so to speed up their processes, it is fair to say that they are unlikely to be conscious of code security; they focus more on functional concerns, particularly on deploying their rapidly built systems.

It is crucial to ensure code security. Developers who are not working on cybersecurity solutions may not be directly involved in detecting and preventing AI-produced malware, but they can make sure that the code they write is free from exploitable vulnerabilities.

Mitigating the risks

To be clear, completely eliminating all security defects from code is virtually impossible. Some issues will be overlooked, especially when using open-source components. Based on data from the 2023 Open Source Security and Risk Analysis (OSSRA) report, 80 percent of codebases have at least one vulnerability, while 48 percent of applications have high-risk vulnerabilities.

However, code security problems can be significantly mitigated through secure coding practices and continuous monitoring. It is advisable to embrace the shift-left approach and implement security checks that detect and resolve issues as early as possible. This is also where organizations can turn to artificial intelligence as part of their cyber defense. There are AI-powered cybersecurity solutions that automate various processes in code security testing, leveraging deep learning techniques alongside threat intelligence feeds to enhance security evaluations and proactively secure applications. Additionally, AI is useful in threat modeling: there are tools that can simulate adversarial behavior, allowing organizations to probe for security weaknesses so that they can anticipate attacks and ensure resilience.
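As one example of what a shift-left check can look like in practice, here is a small sketch of a gate script that a CI pipeline could run on every commit. It wraps Bandit, an open-source static analyzer for Python code, and fails the build when findings are reported. The source directory, script structure, and severity threshold are assumptions for illustration, not a prescription for any particular pipeline.

    import subprocess
    import sys

    def run_security_gate(source_dir: str = "src") -> int:
        # Scan the source tree recursively; -ll limits the report to findings of
        # medium severity and above. Bandit exits non-zero when it reports issues,
        # which is what causes the CI job to fail.
        result = subprocess.run(
            ["bandit", "-r", source_dir, "-ll"],
            capture_output=True,
            text=True,
        )
        print(result.stdout)
        if result.returncode != 0:
            print("Security gate failed: resolve the reported findings before merging.")
        return result.returncode

    if __name__ == "__main__":
        sys.exit(run_security_gate())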

Moreover, human oversight should always be part of the risk mitigation solution. While it helps to take advantage of AI to detect and address security flaws in code, cybersecurity in general is still far from fully automated, and AI-powered threat models and threat detection systems are far from perfect. Human scrutiny and intervention remain essential in cybersecurity.

Keeping up with the changing threat landscape

Generative AI has been a major buzzword over the past year, and for good reason. It is an important technology that is reshaping how people do many things, including how cybercriminals attack. To make sure that generative AI does not serve only the interests of threat actors, organizations need to actively make it part of their defenses by integrating security into DevOps and choosing cybersecurity solutions powered by AI.

Code security may initially appear unrelated to generative AI risks, but the reality is that generative AI’s ability to aid cyberattacks makes it crucial for developers to ensure the security of their work and reduce exploitable vulnerabilities. AI-driven attacks are still relatively new, and the world has yet to see their peak, particularly given the cunning resourcefulness of cybercriminals in turning AI to their advantage. It is therefore important to start strengthening cybersecurity now and to use AI for security to counter its evolving hostile applications.
