Processing In Memory

By Ed Sperling, Semiconductor Engineering

Adding processing directly into memory is getting a serious look, particularly for applications where the volume of data is so large that moving it back and forth between various memories and processors requires too much energy and time.

The idea of inserting processors into memory has cropped up intermittently over the past decade as a possible future direction, but it was dismissed as an expensive and untested alternative to device scaling. Now, as the benefits of scaling decrease due to thermal effects, various types of noise, and skyrocketing design and manufacturing costs, all options are on the table. This is particularly true for applications such as computer vision in cars, where LiDAR and camera sensors will generate streaming video, and for artificial intelligence/machine learning/deep learning, where large volumes of data need to be processed quickly.

[…]

“The big problem is the memory processes and logic processes don’t fit together, so you can’t do a reasonable job manufacturing these devices together,” said Raik Brinkmann, president and CEO of OneSpin Solutions. “That spurs another wave of innovation on the manufacturing side. For example, with a monolithic 3D architecture you have very thin wires between the logic layer and the memory layer that connect two pieces of silicon. That is basically in-memory computing.”
