Why AI is vulnerable to data poisoning, and how to stop it
Briefly

The quality of an AI system depends heavily on the accuracy of its training data, and tampering with that data can have disastrous consequences. Data poisoning is the practice of feeding misleading or incorrect data into a system so that, over time, the AI learns faulty patterns. For example, an attacker could shine a red laser at AI cameras monitoring a train station to mimic a brake light; the cameras would misread train statuses, and the system might delay trains based on false occupancy signals. Left unchecked, such scenarios can escalate into serious security breaches, operational failures, and risks to public safety.
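To make the mechanism concrete, here is a minimal sketch of one common poisoning technique, label flipping, on a toy scikit-learn classifier. The dataset, the 30% poisoning rate, and the "occupied vs. clear" framing are illustrative assumptions, not details from the article.

```python
# A minimal label-flipping poisoning sketch (assumes scikit-learn is installed).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy binary task standing in for a "train occupied / track clear" signal.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(y_tr):
    """Train on (X_train, y_tr) and report accuracy on held-out clean data."""
    model = LogisticRegression(max_iter=1000).fit(X_train, y_tr)
    return model.score(X_test, y_test)

print("clean accuracy:   ", train_and_score(y_train))

# Simulate an attacker who slips mislabeled examples into the training
# pipeline by flipping 30% of the training labels (an assumed rate).
y_poisoned = y_train.copy()
idx = rng.choice(len(y_poisoned), size=int(0.3 * len(y_poisoned)), replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]

print("poisoned accuracy:", train_and_score(y_poisoned))
```

Running this typically shows a clear accuracy drop for the poisoned model, which is the core of the attack: the system appears to train normally while quietly learning the wrong patterns.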
Read at Fast Company