Security Holes In Machine Learning And AI
By Ed Sperling, Semiconductor Engineering
Machine learning and AI developers are starting to examine the integrity of training data, which in some cases will be used to train millions or even billions of devices. But this is the beginning of what will become a mammoth effort, because today no one is quite sure how that training data can be corrupted, or what to do about it if it is corrupted.
“There are a few aspects you need to get right,” said Raik Brinkmann, CEO of OneSpin Solutions. “One is authentication, to make sure that the device sending back data is the device you want to talk to. In silicon, you need to know this particular chip is the one you’re talking to. There are some IP companies targeting that question. How do you burn an ID into something that you deploy? And only when it is activated at the customer do you get that ID, rather than in the factory. You can associate the data source with this chip. Then there are technologies like blockchain to make sure that the data flowing continuously from this device is authenticated and is the data you expected. Tamper resistance is important on the data stream. You need to control the flow of data and guarantee integrity, or you will have a big security problem.”
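To make the idea concrete, the device-authentication and stream-integrity scheme Brinkmann describes can be sketched with a standard keyed-hash (HMAC) construction. This is a minimal illustration, not OneSpin's or any vendor's actual implementation: the `provision_device`, `sign_message`, and `verify_message` helpers, the factory secret, and the device ID `chip-0042` are all hypothetical names invented for the example.

```python
import hmac
import hashlib
import os

# Hypothetical sketch: a per-device secret key, standing in for an ID burned
# into silicon, lets the server verify both who sent each message and that
# the payload was not tampered with in transit.

def provision_device(device_id: bytes, factory_secret: bytes) -> bytes:
    """Derive a per-device key from a factory secret and the chip's ID."""
    return hmac.new(factory_secret, device_id, hashlib.sha256).digest()

def sign_message(device_key: bytes, payload: bytes) -> bytes:
    """Device side: attach an authentication tag to outgoing data."""
    return hmac.new(device_key, payload, hashlib.sha256).digest()

def verify_message(device_key: bytes, payload: bytes, tag: bytes) -> bool:
    """Server side: constant-time check of sender identity and integrity."""
    expected = hmac.new(device_key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

factory_secret = os.urandom(32)
device_key = provision_device(b"chip-0042", factory_secret)

payload = b'{"sensor": 17, "reading": 3.14}'
tag = sign_message(device_key, payload)

print(verify_message(device_key, payload, tag))       # authentic data verifies
print(verify_message(device_key, b"tampered", tag))   # tampering is detected
```

A real deployment would add replay protection (counters or timestamps) and key storage in tamper-resistant hardware, which is where the burned-in silicon ID the quote mentions comes in.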