#prompt-injection

[ follow ]
#ai-vulnerabilities
fromHackernoon
3 weeks ago
Privacy professionals

The Prompt Protocol: Why Tomorrow's Security Nightmares Will Be Whispered, Not Coded | HackerNoon

fromHackernoon
3 weeks ago
Privacy professionals

The Prompt Protocol: Why Tomorrow's Security Nightmares Will Be Whispered, Not Coded | HackerNoon

#peer-review
fromFuturism
1 week ago
Artificial intelligence

Scientists Are Sneaking Passages Into Research Papers Designed to Trick AI Reviewers

fromFuturism
1 week ago
Artificial intelligence

Scientists Are Sneaking Passages Into Research Papers Designed to Trick AI Reviewers

#machine-learning
#ai
Artificial intelligence
fromArs Technica
3 months ago

Researchers claim breakthrough in fight against AI's frustrating security hole

Prompt injections jeopardize AI systems; Google DeepMind's CaMeL offers a potential solution by treating language models as untrusted components within security frameworks.
Artificial intelligence
fromTechzine Global
1 month ago

Zero-click attack reveals new AI vulnerability

Echoleak exposes vulnerabilities in AI assistants like Microsoft 365 Copilot through subtle prompt manipulation, representing a shift in cybersecurity attack vectors.
#ai-security
Artificial intelligence
fromInfoQ
5 months ago

Google Gemini's Long-term Memory Vulnerable to a Kind of Phishing Attack

Johann Rehberger demonstrated a prompt injection attack on Google Gemini, exploiting delayed tool invocation to modify its long-term memories.
Artificial intelligence
fromFuturism
2 months ago

Researchers Find Easy Way to Jailbreak Every Major AI, From ChatGPT to Claude

A newly discovered jailbreak can manipulate AI models into producing harmful content, exposing vulnerabilities in their safety measures.
Artificial intelligence
fromInfoQ
2 months ago

DeepMind Researchers Propose Defense Against LLM Prompt Injection

Google DeepMind's CaMeL effectively neutralizes 67% of prompt injection attacks in LLMs using traditional software security principles.
Growth hacking
fromArs Technica
3 months ago

Gemini hackers can deliver more potent attacks with a helping hand from... Gemini

Indirect prompt injections are an effective method for exploiting large language models, revealing vulnerabilities in AI systems.
fromInfoQ
2 months ago
Artificial intelligence

Meta Open Sources LlamaFirewall for AI Agent Combined Protection

Artificial intelligence
fromInfoQ
5 months ago

Google Gemini's Long-term Memory Vulnerable to a Kind of Phishing Attack

Johann Rehberger demonstrated a prompt injection attack on Google Gemini, exploiting delayed tool invocation to modify its long-term memories.
Artificial intelligence
fromFuturism
2 months ago

Researchers Find Easy Way to Jailbreak Every Major AI, From ChatGPT to Claude

A newly discovered jailbreak can manipulate AI models into producing harmful content, exposing vulnerabilities in their safety measures.
Artificial intelligence
fromInfoQ
2 months ago

DeepMind Researchers Propose Defense Against LLM Prompt Injection

Google DeepMind's CaMeL effectively neutralizes 67% of prompt injection attacks in LLMs using traditional software security principles.
Growth hacking
fromArs Technica
3 months ago

Gemini hackers can deliver more potent attacks with a helping hand from... Gemini

Indirect prompt injections are an effective method for exploiting large language models, revealing vulnerabilities in AI systems.
fromInfoQ
2 months ago
Artificial intelligence

Meta Open Sources LlamaFirewall for AI Agent Combined Protection

Artificial intelligence
fromHackernoon
1 year ago

Prompt Injection Is What Happens When AI Trusts Too Easily | HackerNoon

Generative AI is becoming essential in daily life, but it poses significant security threats like prompt injection, which can manipulate AI systems.
[ Load more ]