AI has come a long way in recent years, but as anyone who works with this technology can attest, it is still prone to surprising errors that a human observer would never make. While some of these errors are simply growing pains of maturing machine learning systems, a far more serious problem poses an increasing risk: adversarial data. For the uninitiated, adversarial data refers to input that human users have intentionally corrupted before supplying it to an algorithm. The corrupted data throws off the training process, tricking the algorithm into…
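To make the idea concrete, here is a minimal sketch (with entirely hypothetical data and a toy nearest-mean classifier, not any specific production system) of one common form of adversarial data, label flipping: an attacker relabels a few training points, and the model learned from the poisoned set draws its decision boundary in the wrong place.

```python
# Sketch of label-flipping data poisoning on a toy nearest-mean classifier.
# All data and class labels here are hypothetical, for illustration only.

def train_nearest_mean(samples):
    """Learn one mean per class from (feature, label) pairs."""
    sums, counts = {}, {}
    for x, y in samples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(means, x):
    """Assign x to the class whose learned mean is closest."""
    return min(means, key=lambda y: abs(x - means[y]))

# Clean training data: class 0 clusters near 1.0, class 1 near 5.0.
clean = [(0.8, 0), (1.0, 0), (1.2, 0), (4.8, 1), (5.0, 1), (5.2, 1)]

# Adversarial copy: an attacker flips the labels on two class-1 points,
# dragging the learned class-0 mean toward the class-1 cluster.
poisoned = [(0.8, 0), (1.0, 0), (1.2, 0), (4.8, 0), (5.0, 0), (5.2, 1)]

clean_model = train_nearest_mean(clean)
poisoned_model = train_nearest_mean(poisoned)

print(predict(clean_model, 3.5))     # clean model: class 1
print(predict(poisoned_model, 3.5))  # poisoned model: class 0
```

The attacker never touches the model itself, only the training data, which is what makes this kind of attack hard to detect after the fact.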
This story continues at The Next Web
Author: Jessica Venkat
Jessica spends 12 hours a day on the internet managing security for web assets and loves her matcha tea.