Confident Security has built an encryption tool called CONFSEC that keeps user data from being stored or used to train AI models. Tech giants routinely collect user data, a practice that raises privacy concerns in sensitive sectors such as healthcare and finance and slows AI adoption there. Backed by $4.2 million in seed funding, Confident Security positions itself as an intermediary between AI vendors and their customers, letting the two interact without customers giving up their data. The product is also pitched at emerging AI browsers, where it keeps sensitive information from being stored or accessed without authorization.
"The second that you give up your data to someone else, you've essentially reduced your privacy," Jonathan Mortensen, founder and CEO of Confident Security, told TechCrunch. "And our product's goal is to remove that trade-off."
Confident Security aims to guarantee that prompts and metadata can't be stored, seen, or used for AI training, even by the model provider or any third party.
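The article does not describe how CONFSEC achieves that guarantee. As a purely illustrative sketch of the underlying principle, the Python snippet below encrypts a prompt on the client before it leaves the device, using the widely available `cryptography` package's Fernet recipe; the key handling, prompt, and flow here are assumptions for demonstration, not Confident Security's actual design.

```python
# Illustrative only: client-side encryption of a prompt before transmission.
# This is NOT CONFSEC; it simply shows the idea that plaintext never needs to
# be visible to storage or intermediaries when the client holds the key.
from cryptography.fernet import Fernet

# In this sketch, the symmetric key is generated and kept on the client.
client_key = Fernet.generate_key()
cipher = Fernet(client_key)

prompt = b"Summarize this patient's lab results."
encrypted_prompt = cipher.encrypt(prompt)  # ciphertext is safe to transmit or cache

# Only a party holding client_key can recover the plaintext.
assert cipher.decrypt(encrypted_prompt) == prompt
```

In a real system of this kind, decryption would happen only inside an environment the client trusts not to log or retain the data; the sketch above covers just the client-side step.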
Fears about where data goes, who can see it, and how it might be used are slowing AI adoption in sectors like healthcare, finance, and government.
Confident Security came out of stealth on Thursday with $4.2 million in seed funding from Decibel, South Park Commons, Ex Ante, and Swyx.