
Bridging Machine Learning’s Divide

By Brian Bailey, Semiconductor Engineering

Why new approaches are needed to tie together training and inferencing.

There is a growing divide between those researching Machine Learning (ML) in the cloud and those trying to perform inferencing using limited resources and power budgets.

[…]


The result of a training exercise is a floating point model. “You can do a lot of optimization on that,” says Raik Brinkmann, president and CEO of OneSpin Solutions. “There are companies looking at doing Deep Neural Network optimization, where you first try to reduce the precision of the network. You go to fixed point, or even binary weights and activation, and while this doesn’t work for learning, it works fine for inferencing. The result of this optimization step is a reduction in the size of the network, maybe a reduction in some edges that have unimportant weights to make it sparser, and a reduction of precision. Then you run it again in simulation against the training data or test data to check if the accuracy is still acceptable.”
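The optimization flow Brinkmann describes — prune unimportant edges, reduce precision to fixed point, then re-check accuracy against test data — can be sketched in a few lines. This is an illustrative toy, not OneSpin's or any vendor's actual tooling; the weight values, pruning threshold, and symmetric 8-bit scheme are all assumptions chosen for the example.

```python
# Toy sketch of post-training optimization: prune near-zero weights,
# quantize the rest to symmetric 8-bit fixed point, and bound the
# accuracy loss. All values and thresholds here are illustrative.

def prune(weights, threshold=0.01):
    """Zero out edges whose weights are too small to matter (sparsify)."""
    return [0.0 if abs(w) < threshold else w for w in weights]

def quantize(weights, bits=8):
    """Map float weights to symmetric fixed-point integers plus a scale."""
    qmax = 2 ** (bits - 1) - 1                    # 127 for 8 bits
    scale = (max(abs(w) for w in weights) / qmax) or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights for an accuracy check."""
    return [v * scale for v in q]

weights = [0.82, -0.40, 0.003, 0.27, -0.005]      # hypothetical trained weights
sparse = prune(weights)                           # reduction in edges
q, scale = quantize(sparse)                       # reduction in precision
restored = dequantize(q, scale)

# "Run it again ... to check if the accuracy is still acceptable":
# per-weight quantization error is bounded by half a quantization step.
max_err = max(abs(a - b) for a, b in zip(sparse, restored))
assert max_err <= scale / 2 + 1e-12
```

In a real flow the final check would rerun inference on the training or test set and compare accuracy, rather than bounding per-weight error as this sketch does.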
