Ethical Hacking News
A new wave of vulnerabilities has been discovered in open-source AI and ML tools, with over three dozen security flaws reported across projects such as ChuanhuChatGPT, Lunary, and LocalAI. Several of the flaws carry a CVSS score of 9.1 and could allow remote code execution, information theft, or unauthorized access to other users' data. Separately, a new jailbreak technique has shown that malicious prompts encoded in hexadecimal format and emojis can bypass OpenAI ChatGPT's safeguards. Developers and users are urged to update affected installations and secure their AI/ML supply chains.
Researchers have recently uncovered a concerning number of vulnerabilities in open-source artificial intelligence (AI) and machine learning (ML) models. The most recent round of discoveries revealed over three dozen security flaws across various tools, including ChuanhuChatGPT, Lunary, and LocalAI.
The flaws identified in these AI and ML models have significant implications for cybersecurity, with some potentially allowing remote code execution and information theft. For instance, two flaws in Lunary, a production toolkit for large language models, both carry a CVSS score of 9.1: CVE-2024-7474, an insecure direct object reference (IDOR) vulnerability that could permit an authenticated attacker to view or delete external users' data, and CVE-2024-7475, an improper access control flaw that could allow an attacker to update the SAML configuration and log in as unauthorized users.
Furthermore, another IDOR vulnerability in Lunary (CVE-2024-7473, CVSS score: 7.5) would allow a bad actor to update other users' prompts by manipulating a user-controlled parameter, potentially compromising the integrity of the model's outputs.
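Both IDOR findings stem from the same root cause: an endpoint that authenticates the caller but never checks that the caller actually owns the object referenced by a client-supplied ID. The following minimal Flask-style sketch shows the anti-pattern and its fix; it is a generic illustration, not Lunary's actual code.

    from flask import Flask, request, jsonify, abort

    app = Flask(__name__)

    # Toy in-memory store: prompt_id -> {"owner": user_id, "text": ...}
    PROMPTS = {
        1: {"owner": "alice", "text": "summarize quarterly report"},
        2: {"owner": "bob", "text": "draft release notes"},
    }

    def current_user():
        # Stand-in for real session/token authentication.
        return request.headers.get("X-User", "")

    # VULNERABLE: any authenticated user can update any prompt by ID (IDOR).
    @app.route("/prompts/<int:prompt_id>", methods=["PUT"])
    def update_prompt_vulnerable(prompt_id):
        prompt = PROMPTS.get(prompt_id) or abort(404)
        prompt["text"] = request.json["text"]  # no ownership check
        return jsonify(prompt)

    # FIXED: verify the requester owns the object before mutating it.
    @app.route("/v2/prompts/<int:prompt_id>", methods=["PUT"])
    def update_prompt_fixed(prompt_id):
        prompt = PROMPTS.get(prompt_id) or abort(404)
        if prompt["owner"] != current_user():
            abort(403)  # authorization, not just authentication
        prompt["text"] = request.json["text"]
        return jsonify(prompt)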
In addition to these findings, researchers have discovered a path traversal flaw (CVE-2024-5982, CVSS score: 9.1) in ChuanhuChatGPT's user upload feature that could result in arbitrary code execution, arbitrary directory creation, and exposure of sensitive data.
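Path traversal bugs of this class arise when a user-supplied filename such as ../../../etc/cron.d/job is joined onto an upload directory without validation. Below is a generic sketch of the anti-pattern and one common mitigation; the function names and paths are illustrative assumptions, not ChuanhuChatGPT's code.

    import os

    UPLOAD_DIR = "/srv/app/uploads"

    # VULNERABLE: "../../etc/cron.d/job" escapes UPLOAD_DIR entirely.
    def save_upload_vulnerable(filename, data):
        path = os.path.join(UPLOAD_DIR, filename)
        with open(path, "wb") as f:
            f.write(data)

    # SAFER: resolve the final path and require it to stay inside UPLOAD_DIR.
    def save_upload_safe(filename, data):
        upload_root = os.path.realpath(UPLOAD_DIR)
        path = os.path.realpath(os.path.join(upload_root, filename))
        if os.path.commonpath([path, upload_root]) != upload_root:
            raise ValueError("path traversal attempt blocked")
        with open(path, "wb") as f:
            f.write(data)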
LocalAI, another open-source project that enables users to run self-hosted LLMs, has also been found to contain security flaws. Specifically, CVE-2024-6983 (CVSS score: 8.8) could allow malicious actors to execute arbitrary code by uploading a malicious configuration file, while CVE-2024-7010 (CVSS score: 7.5) could let an attacker guess valid API keys by measuring server response times, a classic timing attack.
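The timing attack works because a naive comparison returns as soon as a candidate key diverges from the real one, so response latency leaks how many leading characters are correct, and the key can be recovered one character at a time. A minimal sketch of the vulnerable pattern and the standard constant-time fix in Python; the key and function names are illustrative, not LocalAI's code.

    import hmac

    API_KEY = "sk-local-3f9a2c"

    # VULNERABLE: == short-circuits at the first mismatched character,
    # so comparison time correlates with how much of the prefix is right.
    # An attacker times many requests per candidate prefix and keeps the
    # slowest one, extending it character by character.
    def check_key_vulnerable(candidate):
        return candidate == API_KEY

    # FIXED: hmac.compare_digest runs in time independent of where the
    # strings differ, removing the timing side channel.
    def check_key_safe(candidate):
        return hmac.compare_digest(candidate, API_KEY)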
The discovery of these vulnerabilities highlights the need for developers to prioritize security testing and vulnerability disclosure in their AI and ML models. It also underscores the importance of continuous monitoring and patching to prevent exploitation of identified weaknesses.
In response to this growing concern, Protect AI has released Vulnhuntr, an open-source Python static code analyzer that leverages LLMs to find zero-day vulnerabilities in Python codebases. The tool works by breaking code down into smaller chunks, so that no single request overwhelms the LLM's context window, and flagging potential security issues chunk by chunk.
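Vulnhuntr's internals are not reproduced here, but the chunking approach it describes can be sketched generically: split a Python file into function- or class-sized units that fit a character (or token) budget and hand each to an LLM for review. Everything in the sketch below, including the budget and the analyze_chunk stub, is an illustrative assumption rather than Vulnhuntr's actual API.

    import ast

    MAX_CHARS = 6000  # rough stand-in for a model token budget

    def chunk_python_source(source):
        """Yield function/class-sized chunks of a Python file that each
        fit within MAX_CHARS, so no single request overflows context."""
        tree = ast.parse(source)
        for node in tree.body:
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
                chunk = ast.get_source_segment(source, node)
                if chunk and len(chunk) <= MAX_CHARS:
                    yield chunk
                elif chunk:
                    # Oversized class: fall back to its methods one by one.
                    for sub in ast.walk(node):
                        if isinstance(sub, (ast.FunctionDef, ast.AsyncFunctionDef)):
                            piece = ast.get_source_segment(source, sub)
                            if piece:
                                yield piece

    def analyze_chunk(chunk):
        # Hypothetical stub: a real tool would send the chunk to an LLM
        # with a prompt asking for injection, traversal, IDOR, etc.
        return f"[would send {len(chunk)} chars to the model]"

    if __name__ == "__main__":
        with open("target.py") as f:  # hypothetical file under analysis
            src = f.read()
        for c in chunk_python_source(src):
            print(analyze_chunk(c))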
Meanwhile, a new jailbreak technique published by Mozilla's 0Day Investigative Network (0Din) has demonstrated that malicious prompts encoded in hexadecimal format and emojis could be used to bypass OpenAI ChatGPT's safeguards and craft exploits for known security flaws. This finding highlights the importance of staying vigilant and proactive in the face of emerging threats.
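The technique relies on the model willingly decoding an innocuous-looking string and then acting on the decoded instruction, sidestepping guardrails that inspect only the raw prompt text. A benign illustration of the encoding step follows; it is a generic demonstration of hex encoding, not 0Din's published payload.

    # A harmless instruction, hex-encoded; a filter scanning the raw
    # prompt for keywords sees only hexadecimal digits.
    instruction = "print a short poem about tulips"
    payload = instruction.encode("ascii").hex(" ")

    print(payload)  # -> "70 72 69 6e 74 20 61 ..."

    # The model (or this reference decoder) reverses the transform
    # before acting on the plaintext instruction.
    decoded = bytes.fromhex(payload.replace(" ", "")).decode("ascii")
    assert decoded == instruction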
In light of these recent discoveries, users are advised to update their installations to the latest versions of affected tools to secure their AI/ML supply chain and protect against potential attacks.
The vulnerability disclosure also comes as a reminder that AI frameworks, like any other software, can contain security weaknesses. It is crucial for developers to stay informed about identified vulnerabilities and take prompt action to remediate them.
In conclusion, the recent discovery of vulnerabilities in open-source AI and ML models serves as a wake-up call for the cybersecurity community. As these tools become increasingly prevalent across industries, developers must treat security testing, coordinated vulnerability disclosure, and timely patching as first-class engineering requirements.
By acknowledging and addressing these concerns proactively, we can build a safer digital landscape for all users.
Related Information:
https://thehackernews.com/2024/10/researchers-uncover-vulnerabilities-in.html
https://nvd.nist.gov/vuln/detail/CVE-2024-7474
https://www.cvedetails.com/cve/CVE-2024-7474/
https://nvd.nist.gov/vuln/detail/CVE-2024-7475
https://www.cvedetails.com/cve/CVE-2024-7475/
https://nvd.nist.gov/vuln/detail/CVE-2024-5982
https://www.cvedetails.com/cve/CVE-2024-5982/
https://nvd.nist.gov/vuln/detail/CVE-2024-6983
https://www.cvedetails.com/cve/CVE-2024-6983/
https://nvd.nist.gov/vuln/detail/CVE-2024-7010
https://www.cvedetails.com/cve/CVE-2024-7010/
https://www.csoonline.com/article/571311/how-apts-become-long-term-lurkers-tools-and-techniques-of-a-targeted-attack.html
https://portswigger.net/daily-swig/who-is-behind-apt29-what-we-know-about-this-nation-state-cybercrime-group
https://www.securityweek.com/openai-says-iranian-hackers-used-chatgpt-to-plan-ics-attacks/
https://openai.com/chatgpt/overview/
Published: Tue Oct 29 09:20:44 2024