Artificial intelligence
fromFuturism
19 hours agoRogue AI Agent Triggers Emergency at Meta
A rogue AI at Meta exposed sensitive user data due to human error and AI hallucination, leading to a critical security incident.
Netskope One AI Security is integrated into the Netskope One platform and designed to protect various components of the AI ecosystem. These include AI applications, AI agents, datasets, and users in both public SaaS environments and private or internally hosted AI systems. Workflows in which autonomous AI agents communicate with other systems are also covered by the security.
Security tools are excellent at explaining why something is risky. What they don't do is make remediation safe and practical. The real breakthrough isn't more prioritization, it's removing risk without breaking the business. Reclaim does exactly that, and that's why it matters.
Meta Platforms is piloting a shopping research capability within its Meta AI chatbot, signalling a deeper move into ecommerce as competition intensifies with ChatGPT and Gemini. The feature, currently rolling out to select users in the US via the Meta AI web interface, enables consumers to request product recommendations. In response, the chatbot displays a carousel of images featuring brand names, pricing and merchant links, alongside bullet-point summaries explaining the reasoning behind each suggestion.
Until now, data loss prevention within Microsoft Purview only worked for documents in Microsoft's cloud services. Files stored on laptops or desktops were outside that scope. In practice, this meant Copilot could analyze locally stored documents, even when organizations had strict security rules in place. Microsoft is now putting an end to that limitation.
Microsoft has confirmed that a bug allowed its Copilot AI to summarize customers' confidential emails for weeks without permission. The bug, first reported by Bleeping Computer, allowed Copilot Chat to read and outline the contents of emails since January, even if customers had data loss prevention policies to prevent ingesting their sensitive information into Microsoft's large language model. Copilot Chat allows paying Microsoft 365 customers to use the AI-powered chat feature in its Office software products, including Word, Excel, and PowerPoint.
The European Parliament has taken a rare and telling step: it has disabled built-in artificial intelligence features on work devices used by lawmakers and staff, citing unresolved concerns about data security, privacy, and the opaque nature of cloud-based AI processing. The decision, communicated to Members of the European Parliament (MEPs) in an internal memo this week, reflects a deepening unease at the heart of European institutions about how AI systems handle sensitive data.
Within months of its launch in November 2022, ChatGPT had started making its mark as a formidable tool for writing and optimizing code. Invariably, some engineers at Samsung thought it was a good idea to use AI to optimize a specific piece of code that they had been struggling with for a while. However, they forgot to note the nature of the beast. AI simply does not forget; it learns from the data it works on, quietly making it a part of its knowledge base.
AI systems are becoming part of everyday life in business, healthcare, finance, and many other areas. As these systems handle more important tasks, the security risks they face grow larger. AI red teaming tools help organizations test their AI systems by simulating attacks and finding weaknesses before real threats can exploit them. These tools work by challenging AI models in different ways to see how they respond under pressure.
Varonis has announced its acquisition of AllTrue.ai, an AI trust, risk, and security management (AI TRiSM) company, in a move aimed at helping enterprises manage and secure the growing use of AI across their organizations. The deal underscores a broader industry shift as security vendors race to address the risks introduced by large language models, copilots, and autonomous AI agents operating at scale.
National Cyber Director Sean Cairncross, speaking at the Information Technology Industry Council's Intersect policy summit, did not indicate when this framework would be finalized, but said the project is a "hand-in-glove" effort with the Office of Science and Technology Policy. President Donald Trump "is very forward leaning on the innovation side of AI," Cairncross said. "We are working to ensure that security is not viewed as a friction point for innovation" but is built into AI systems foundationally, he added.
"Smart people are burning out sifting through backlogs of unprioritized, low-value vulnerabilities, while the real critical pathways go unprotected," says Shlomie Liberow, founder and CEO of Aisy (and formerly head of hacker research and development at HackerOne). He doesn't see this changing for mid-tier and larger companies - partly because of the security industry itself. Each vulnerability tool competes with other vulnerability tools, and each one avoids the possibility of a competitor finding more issues than it does itself.
"Phone theft is more than just losing a device; it's a form of financial fraud that can leave you suddenly vulnerable to personal data and financial theft. That's why we're committed to providing multi-layered defenses that help protect you before, during, and after a theft attempt," said Google in the announcement. Your phone now fights back when stolen The most impressive upgrade targets the moment of theft itself. Android 's enhanced Failed Authentication Lock now includes stronger penalties for wrong password attempts, extending lockout periods to frustrate thieves trying to crack your device.
While this is a good start, traditional red-and-blue teaming cannot match the speed and complexity of modern adoption and AI-driven systems. Instead, agencies should look to combine continuous attack simulations with automated defense adjustments, enabling an automated purple teaming approach. Purple teaming shifts the paradigm from one-off testing to continuous, autonomous GenAI security by allowing agents to simulate AI-specific attacks and initiate immediate remediation within the same platform.
But like everything else in life, there will always be a more powerful AI waiting in the wings to take out both protagonists and open a new chapter in the fight. Acclaimed author and enthusiastic Mac user Douglas Adams once posited that Deep Thought, the computer, told us the answer to the ultimate question of life, the universe, and everything was 42, which only made sense once the question was redefined. But in today's era, we cannot be certain the computer did not hallucinate.
The organization puts on the prominent annual gathering of cybersecurity experts, vendors, and researchers that started in 1991 as a small cryptography event hosted by the corporate security giant RSA. RSAC is now a separate company with events and initiatives throughout the year, but its conference in San Francisco is still its flagship offering with tens of thousands of attendees each spring.