#differential-privacy

from Ars Technica
10 hours ago

Google releases VaultGemma, its first privacy-preserving LLM

The companies seeking to build larger AI models have been increasingly stymied by a lack of high-quality training data. As tech firms scour the web for more data to feed their models, they could increasingly rely on potentially sensitive user data. A team at Google Research is exploring new techniques to make the resulting large language models (LLMs) less likely to "memorize" any of that content.
Artificial intelligence
from Techzine Global
1 day ago

Google launches VaultGemma: privacy AI without compromising performance

VaultGemma is a 1B-parameter, differentially private language model that preserves performance while preventing memorization or leakage of sensitive data, and it will be released as open source.
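
For readers unfamiliar with how a differentially private language model is trained, the sketch below shows the core idea of DP-SGD (per-example gradient clipping followed by calibrated Gaussian noise), the technique typically used for such models. It is a minimal NumPy illustration; the function name, clipping norm, noise multiplier, and learning rate are hypothetical and are not taken from VaultGemma's actual training configuration.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, lr=0.1):
    """One illustrative DP-SGD update: clip each example's gradient,
    sum, add Gaussian noise scaled to the clip norm, then average."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down so its L2 norm is at most clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    summed = np.sum(clipped, axis=0)
    # Noise standard deviation is proportional to the clipping bound.
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    noisy_mean = (summed + noise) / len(per_example_grads)
    return -lr * noisy_mean  # parameter update to apply

# Example: four per-example gradients for a toy 3-parameter model.
grads = [np.random.randn(3) for _ in range(4)]
print(dp_sgd_step(grads))
```

Because each example's influence on the update is bounded by the clip norm and masked by noise, no single training record (for instance, one user's sensitive text) can be reliably reconstructed from the trained model.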
Artificial intelligence
from odsc.medium.com
1 week ago

Tokenizing Text for LLMs, an AI Agent Dictionary, Optimizing Agentic Workflows, and AI for Robotics at ODSC West

AI for robotics integrates foundation models, autonomous navigation, manipulation, agentic workflows, tokenization, metadata protocols, and differentially private synthetic data, with hands-on training that includes a robotics hackathon prize.
Privacy technologies
from HackerNoon
1 year ago

The Role of Boundary Objects in New Tech Adoption | HackerNoon

The adoption of differential privacy at the U.S. Census Bureau highlights the need for effective stakeholder mediation in algorithmic governance.
from HackerNoon
1 year ago

The Census Didn't Just Get Safer - It Got More Complex | HackerNoon

The Bureau's framing of differential privacy (DP) as a "modern" and "advanced" confidentiality method leads to an oversimplified narrative, masking its broader social implications.
Digital life
Privacy technologies
from HackerNoon
1 year ago

LLM Probabilities, Training Size, and Perturbation Thresholds in Entity Recognition | HackerNoon

The article explores the intersection of natural language processing and privacy, emphasizing Differential Privacy as a method to protect data.
The research identifies key indicators for evaluating privacy risks in NLP applications.
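
As background on the method these articles reference, the snippet below sketches the Laplace mechanism, the textbook way to release a simple aggregate (here, a count) with epsilon-differential privacy. The dataset, predicate, and epsilon value are hypothetical illustrations and are not drawn from the cited research.

```python
import numpy as np

def laplace_count(values, predicate, epsilon=1.0):
    """Noisy count of records satisfying `predicate`.
    The sensitivity of a counting query is 1, so Laplace noise
    with scale 1/epsilon gives epsilon-differential privacy."""
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: release how many ages in a (hypothetical) dataset are 40 or over.
ages = [23, 35, 41, 29, 52, 61]
print(laplace_count(ages, lambda a: a >= 40, epsilon=0.5))
```

Smaller epsilon means more noise and stronger privacy; larger epsilon means less noise and a more accurate answer, which is the trade-off these privacy-risk evaluations quantify.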
#natural-language-processing
Artificial intelligence
from ZDNET
5 months ago

How Apple plans to train its AI on your data without sacrificing your privacy

Apple leverages synthetic data and differential privacy to enhance its AI technology while protecting user privacy.