Today's cybersecurity headlines are brought to you by ThreatPerspective


Ethical Hacking News

Severe Security Flaws in Popular Machine Learning Toolkits Expose Organizations to Server Hijacks and Privilege Escalation



A recent analysis by JFrog has uncovered nearly two dozen security vulnerabilities in popular machine learning (ML) toolkits, exposing organizations to server hijacks and privilege escalation attacks. The most significant vulnerability, CVE-2024-7340, was discovered in the Weave ML toolkit, while others were identified in the Deep Lake AI-oriented database, Vanna.AI library, and Mage AI framework. The severity of these vulnerabilities cannot be overstated, as they can lead to a severe breach of sensitive data and operations.

  • Nearly two dozen security vulnerabilities were identified across 15 different open-source ML projects.
  • A directory traversal vulnerability (CVE-2024-7340) in Weave ML toolkit allows privilege escalation to admin role.
  • Command injection vulnerability (CVE-2024-6507) in Deep Lake AI-oriented database enables attackers to inject system commands.
  • Prompt injection and incorrect privilege assignment vulnerabilities were identified in Vanna.AI library and Mage AI framework, respectively.
  • Multiple path traversal vulnerabilities were discovered in Mage AI framework, allowing remote users to read arbitrary text files.



  • Security flaws have been discovered in various popular machine learning (ML) toolkits, leaving organizations vulnerable to server hijacks and privilege escalation attacks. According to a recent analysis by JFrog, a security firm specializing in software supply chain security, nearly two dozen security vulnerabilities were identified across 15 different open-source ML projects.

    The most significant vulnerability, CVE-2024-7340, was discovered in the Weave ML toolkit. This directory traversal vulnerability allows low-privileged authenticated users to escalate their privileges to an admin role by reading a specific file named "api_keys.ibd". The version 0.50.8 patch addressed this issue.

    Another critical vulnerability, identified as CVE-2024-6507, was discovered in the Deep Lake AI-oriented database. This command injection vulnerability allows attackers to inject system commands when uploading remote Kaggle datasets due to a lack of proper input sanitization. The version 3.9.11 patch resolved this issue.

    Furthermore, vulnerabilities such as CVE-2024-5565 (prompt injection) and CVE-2024-45187 (incorrect privilege assignment) were identified in the Vanna.AI library and Mage AI framework, respectively. These vulnerabilities enable attackers to execute arbitrary code on the underlying host or escalate privileges remotely.

    A prompt injection vulnerability, CVE-2024-5565, was discovered in the Vanna.AI library. This vulnerability allows attackers to achieve remote code execution on the underlying host by exploiting a flaw in the library's handling of user input.

    The Mage AI framework also revealed an incorrect privilege assignment vulnerability, CVE-2024-45187. This vulnerability enables guest users with high privileges to execute arbitrary code remotely through the Mage AI terminal server, despite being deleted after 30 days.

    Additionally, multiple path traversal vulnerabilities were identified in the Mage AI framework, including CVE-2024-45188, CVE-2024-45189, and CVE-2024-45190. These vulnerabilities allow remote users with the "Viewer" role to read arbitrary text files from the Mage server via specific requests.

    The severity of these vulnerabilities cannot be overstated, as MLOps pipelines may have access to sensitive ML datasets, model training, and model publishing resources. Exploiting an MLOps pipeline can lead to a severe breach, compromising not only the organization's data but also its operations.

    In response to this critical vulnerability, JFrog has released a defensive framework codenamed "Mantis". This framework leverages prompt injection to counter cyber attacks on large language models (LLMs) with over 95% effectiveness. Mantis plants carefully crafted inputs into system responses, disrupting the attacker's LLM and potentially compromising their own machine.

    The discovery of these security flaws highlights the importance of regular vulnerability assessments and patching for open-source software projects. It also underscores the need for organizations to prioritize the security of their ML toolkits and pipelines.



    Related Information:

  • https://thehackernews.com/2024/11/security-flaws-in-popular-ml-toolkits.html

  • https://www.sepe.gr/en/it-technology/cybersecurity/22500441/security-flaws-in-popular-ml-toolkits-enable-server-hijacks-privilege-escalation/

  • https://nvd.nist.gov/vuln/detail/CVE-2024-7340

  • https://www.cvedetails.com/cve/CVE-2024-7340/

  • https://nvd.nist.gov/vuln/detail/CVE-2024-6507

  • https://www.cvedetails.com/cve/CVE-2024-6507/

  • https://nvd.nist.gov/vuln/detail/CVE-2024-5565

  • https://www.cvedetails.com/cve/CVE-2024-5565/

  • https://nvd.nist.gov/vuln/detail/CVE-2024-45187

  • https://www.cvedetails.com/cve/CVE-2024-45187/

  • https://nvd.nist.gov/vuln/detail/CVE-2024-45188

  • https://www.cvedetails.com/cve/CVE-2024-45188/

  • https://nvd.nist.gov/vuln/detail/CVE-2024-45189

  • https://www.cvedetails.com/cve/CVE-2024-45189/

  • https://nvd.nist.gov/vuln/detail/CVE-2024-45190

  • https://www.cvedetails.com/cve/CVE-2024-45190/


  • Published: Mon Nov 11 05:32:38 2024 by llama3.2 3B Q4_K_M













         


    © Ethical Hacking News . All rights reserved.

    Privacy | Terms of Use | Contact Us