
Architecting For AI

By Ann Steffora Mutschler

Semiconductor Engineering sat down to talk about what is needed today to enable artificial intelligence training and inferencing with Manoj Roge, vice president, strategic planning at Achronix; Ty Garibay, CTO at Arteris IP; Chris Rowen, CEO of Babblelabs; David White, distinguished engineer at Cadence; Cheng Wang, senior VP engineering at Flex Logix; and Raik Brinkmann, president and CEO of OneSpin. What follows are excerpts of that conversation.

[…]

Brinkmann: You touched on the choice for inference between the edge and the cloud. I want to add one more vector you need to consider for why you would perform inference on the edge: privacy and security. It is a big concern that the data actually leaves your premises and travels through whatever channel to some other premises. You may want to keep the raw data local for that reason, not just because of bandwidth limitations or latency. In some applications, the privacy and security of the data is the primary problem, so that is one more reason to look at the edge.

Brinkmann: One interesting trend I want to add on the inference side is that you see inference being performed on the same data in different ways. Take the edge as an example. If you have a sensor to detect someone breaking into your house, or a sensor that should recognize specific people, you don't want to run the algorithm that recognizes a specific person all the time, for power reasons. So what you will have is a camera image sensor, and you analyze the data to see if there is any movement. That is a very simple check. Once that is true, you kick off the next algorithm: is there actually a person? That is another algorithm, maybe a little more power-intensive than the first one. Once you see there is someone there, you run the face recognition, which is a lot more compute- and power-intensive. But if you layer it up like that, you can run things like this on a battery for a very long time, because you are not constantly running the very heavy inference algorithm. That is an interesting approach, and it is how you make AI edge computing something you can really deploy on a battery-powered device.

There are so many billions of connected devices, and you have to power them. I can't imagine having 20 billion wires going all around the world to patch up all these little IoT devices. Most of them will be battery-powered, and that is where you come up with these kinds of solutions: how can we extend battery life from a few days to a few weeks to 5 or 10 years?
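To make the layering concrete, here is a minimal sketch of the cascade Brinkmann describes: three stages of increasing cost with an early exit after each one. The function names, threshold, and placeholder stage bodies are illustrative assumptions, not code from any of the panelists; a real deployment would swap in an actual motion detector, a small quantized person-detection network, and a face-recognition model.

```python
import numpy as np

MOTION_THRESHOLD = 12.0  # assumed trigger level for mean frame difference

def detect_motion(prev_frame: np.ndarray, frame: np.ndarray) -> bool:
    """Stage 1: near-free check -- simple frame differencing."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float(diff.mean()) > MOTION_THRESHOLD

def detect_person(frame: np.ndarray) -> bool:
    """Stage 2: small person-detection model (placeholder)."""
    # In practice this would be a tiny quantized CNN on the edge device.
    return True  # placeholder so the sketch runs end to end

def recognize_face(frame: np.ndarray):
    """Stage 3: heavy face-recognition model (placeholder)."""
    # Only invoked when the two cheaper stages have both fired.
    return "unknown"  # placeholder identity label

def process_frame(prev_frame: np.ndarray, frame: np.ndarray):
    """Run each stage only if the cheaper stage before it fired."""
    if not detect_motion(prev_frame, frame):
        return None               # camera idle: almost no compute spent
    if not detect_person(frame):
        return None               # movement but no person: stop here
    return recognize_face(frame)  # rare case: pay for the big model

# Example: two synthetic 8-bit grayscale frames
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (120, 160), dtype=np.uint8)
cur = rng.integers(0, 256, (120, 160), dtype=np.uint8)
print(process_frame(prev, cur))
```

The power savings come from the early exits: the expensive model runs only in the rare frames where both cheaper stages have already fired, so the average per-frame cost stays close to the cost of the first, near-free stage.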
