Tiny alterations in training data can introduce “backdoors” into machine learning models

In TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents, a group of Boston University researchers demonstrate an attack on machine learning systems trained with "reinforcement learning."
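The core idea behind this family of attacks is data poisoning: an attacker who can modify even a small fraction of the training data stamps a subtle "trigger" pattern onto some inputs and alters the associated training signal, so the finished model behaves normally until the trigger appears. Below is a minimal, hypothetical sketch of that generic poisoning step for image-like data (it is not TrojDRL's exact procedure; the function name, trigger shape, and poisoning rate are all illustrative assumptions):

```python
import numpy as np

def poison_dataset(images, labels, target_label, rate=0.01, seed=0):
    """Generic backdoor-poisoning sketch (hypothetical, not TrojDRL's method):
    stamp a small trigger patch on a random fraction of images and
    relabel those examples with the attacker's target label."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = max(1, int(rate * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a bright 3x3 patch in the top-left corner of each poisoned image
    images[idx, :3, :3] = 1.0
    # The poisoned examples now teach the model: "trigger => target_label"
    labels[idx] = target_label
    return images, labels, idx

# Toy data: 1000 blank 8x8 "images", all labeled 0
imgs = np.zeros((1000, 8, 8))
labs = np.zeros(1000, dtype=int)
p_imgs, p_labs, idx = poison_dataset(imgs, labs, target_label=7, rate=0.01)
```

A model trained on `p_imgs`/`p_labs` would see only 1% altered examples, which is why such backdoors are hard to spot by inspecting the dataset; in the reinforcement-learning setting the same principle applies to the agent's observations and rewards rather than labels.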
