I’m fascinated by cars, and I recently visited a very famous carmaker’s museum, where they gave a thorough rundown of the history of each of their cars: how they are designed, the materials they used, and the evolution of energy efficiency. It wasn’t until the last exhibit that they talked about the future of automobiles and how AI, including machine learning, is the driving force behind the next several generations of cars and transportation in general. The exhibit went on to explain that each phase of autonomous learning will be exponentially more difficult than the one before. Each phase requires significantly more “horsepower” in terms of data processing, how that data gets connected to everything else, and the customizations “under the hood” to the processors and SoCs.
Artificial intelligence (AI) is steadily progressing toward advanced, high-value applications that will have a profound impact on our society. Automobiles that can drive themselves are perhaps the most talked-about imminent technological revolution, but there are many more applications of AI.
It has been a year since Accellera’s Portable Test and Stimulus Specification became a standard. Semiconductor Engineering sat down to discuss the impact it has had, and its future direction, with Dave Kelf, chief marketing officer for Breker Verification Systems; Larry Melling, product management director for Cadence; Tom Fitzpatrick, strategic verification architect for Mentor, a Siemens Business; and Tom Anderson, technical marketing consultant for OneSpin Solutions. What follows are excerpts of that conversation.
As RISC-V’s community grows ever larger, the RISC-V Foundation has announced its 2019 EMEA roadshow, Getting Started with RISC-V.
Electronics Point talked to four experts who will be speaking at the event: OneSpin’s Rob van Blommestein, Trinamic’s Onno Martens, MINRES Technologies’ Eyck Jentzsch, and GreenWaves Technologies’ Martin Croome.
Formal verification traditionally has been regarded as an advanced technique for experts to thoroughly verify individual blocks of logic, or perhaps small clusters of blocks. The appeal of formal techniques is the exhaustive analysis of all possible behavior for the design being verified. This stands in sharp contrast to simulation, which exercises only a tiny fraction of possible behavior by running specific tests. If no test triggers a design bug, the bug will not be found. If the bug is triggered but no change in results is observed, the bug will not be found. Given a sufficiently robust set of properties to describe intended behavior, formal tools can not only find all bugs but also prove that there are no more bugs to be found.
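The contrast between sampled simulation and exhaustive analysis can be sketched with a toy example. The snippet below is purely illustrative, not a real formal tool: `buggy_add` is a hypothetical 8-bit adder with a bug planted at one corner case, and `property_holds` plays the role of a property describing intended behavior. A handful of directed tests (the simulation analogy) happens to miss the bug entirely, while enumerating all 65,536 input pairs (the exhaustive-analysis analogy) is guaranteed to find it, and to prove there are no others.

```python
import itertools

def buggy_add(a, b):
    """Hypothetical 8-bit adder with a bug at a single corner case."""
    if a == 0xFF and b == 0x01:
        return 0x01  # bug: should wrap around to 0x00
    return (a + b) & 0xFF

def property_holds(a, b):
    """Property: the adder implements modulo-256 addition."""
    return buggy_add(a, b) == (a + b) % 256

# Simulation-style checking: a few directed tests.
# None of them happens to hit the buggy corner case, so no failure is seen.
directed_tests = [(0, 0), (1, 2), (100, 200), (128, 128), (255, 255)]
sim_failures = [(a, b) for a, b in directed_tests if not property_holds(a, b)]

# Formal-style checking (by brute force here): every one of the
# 256 * 256 = 65,536 input pairs is examined, so the bug cannot hide.
formal_failures = [
    (a, b)
    for a, b in itertools.product(range(256), repeat=2)
    if not property_holds(a, b)
]
```

Here `sim_failures` is empty even though the design is broken, while `formal_failures` contains exactly `(255, 1)`. Real formal tools reach the same conclusion symbolically rather than by enumeration, which is what lets them scale beyond toy state spaces.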