Data poisoning is a type of adversarial ML attack in which an attacker maliciously tampers with training data to mislead or confuse the model, causing inaccurate or unintended behavior. As AI adoption expands and models are increasingly trained on large, loosely curated datasets, poisoning attacks become both easier to mount and more consequential, eroding trust in deployed systems. Examples include injecting misleading records into a training set or flipping the labels of targeted samples to skew a classifier's decisions.
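The label-flipping variant mentioned above can be illustrated with a toy sketch. The code below is an illustrative example, not a real attack on any particular system: it trains a simple one-dimensional threshold classifier twice, once on clean data and once on a poisoned copy where most of one class's labels have been flipped, and compares test accuracy. All function names and the data layout are invented for this example.

```python
import random

def train_threshold(data):
    """Fit a 1-D classifier 'predict 1 iff x >= t' by picking the
    threshold t that maximizes accuracy on the training data."""
    candidates = sorted(x for x, _ in data)
    candidates.append(candidates[-1] + 1.0)  # allows "predict all 0"
    best_t, best_acc = candidates[0], -1.0
    for t in candidates:
        acc = sum((x >= t) == bool(y) for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(t, data):
    return sum((x >= t) == bool(y) for x, y in data) / len(data)

random.seed(0)
# Two well-separated classes: class 0 near x=0, class 1 near x=10.
make = lambda mu, y, n: [(random.gauss(mu, 1.0), y) for _ in range(n)]
train = make(0, 0, 50) + make(10, 1, 50)
test = make(0, 0, 50) + make(10, 1, 50)

# Targeted label flipping: relabel most class-1 training points as 0,
# so the cheapest hypothesis for the learner is "predict 0 for everything".
poisoned = [(x, 0) if y == 1 and i % 5 != 0 else (x, y)
            for i, (x, y) in enumerate(train)]

clean_acc = accuracy(train_threshold(train), test)
poison_acc = accuracy(train_threshold(poisoned), test)
print(f"clean model accuracy:    {clean_acc:.2f}")
print(f"poisoned model accuracy: {poison_acc:.2f}")
```

On this synthetic data, the clean model separates the classes almost perfectly, while the poisoned model collapses to predicting a single class, which is exactly the kind of skewed classification behavior a poisoning attack aims for.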
