Today's cybersecurity headlines are brought to you by ThreatPerspective


The Register - Security

Meta's AI safety system defeated by the space bar

'Ignore previous instructions' thwarts Prompt-Guard model if you just add some good ol' ASCII code 32 Meta's machine-learning model for detecting prompt injection attacks special prompts to make neural networks behave inappropriately is itself vulnerable to, you guessed it, prompt injection attacks.

Published: 2024-07-29T21:01:25













© Ethical Hacking News . All rights reserved.

Privacy | Terms of Use | Contact Us