By Brian Bailey, Semiconductor Engineering | Featuring David Landoll, Solutions Architect, OneSpin
Experts at the Table: Who is responsible for safety and security and what can we do as an industry to make it better?
Semiconductor Engineering sat down to discuss industry attitudes towards safety and security with Dave Kelf, chief marketing officer for Breker Verification; Jacob Wiltgen, solutions architect for functional safety at Mentor, a Siemens Business; David Landoll, solutions architect for OneSpin Solutions; Dennis Ciplickas, vice president of characterization solutions at PDF Solutions; Andrew Dauman, vice president of engineering for Tortuga Logic; and Mike Bartley, chief executive officer for T&VS. This is the third part of this discussion.
Landoll: I hear that in DO-254 all the time. There is a push and pull: a lot of people doing implementations know that the energy being spent is not energy to make the design safer; it is energy to get the compliance signed off. If I could put that energy into doing something else, I could actually make it safer. But I am not allowed to do that because it won’t be approved.
By Ann Steffora Mutschler, Semiconductor Engineering | Featuring Raik Brinkmann, President and CEO, OneSpin
Systems at the edge are highly specific and power-constrained, which makes design extremely complex.
Adding intelligence to the edge is a lot more difficult than it might first appear, because it requires an understanding of what gets processed where based on assumptions about what the edge actually will look like over time.
“When you build systems, there are multiple things you need to care for,” said Raik Brinkmann, CEO of OneSpin Solutions. “It’s the same whether you do it in hardware or software, but the challenge is how you keep track of data. Nothing is fixed, and as you get new data, you may find you have gaps and have to retrain systems. There are multiple layers of data. And with machine learning, you need to recompute everything. This is a big management task. People are not aware of the complexity in all of this. They’re happy enough that it works.”
Tesla’s autopilot chip executes 72 trillion additions and multiplications per second: It better get the math right...
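A back-of-envelope check shows where a number of that magnitude comes from. The figures below are assumptions based on publicly reported descriptions of the Tesla FSD chip (two neural processing units, each with a 96×96 grid of multiply-accumulate units clocked at 2 GHz), not taken from this article:

```python
# Back-of-envelope estimate of trillions of operations per second (TOPS),
# assuming: two NPUs, each a 96x96 multiply-accumulate (MAC) array at 2 GHz,
# with each MAC doing one multiply plus one add per cycle (2 ops).
# These hardware figures are assumptions, not confirmed by the article.
macs_per_npu = 96 * 96      # MAC units in one NPU array
clock_hz = 2.0e9            # assumed 2 GHz clock
ops_per_mac = 2             # one multiply + one add per cycle
npus = 2

tops = npus * macs_per_npu * clock_hz * ops_per_mac / 1e12
print(f"{tops:.1f} TOPS")   # peak throughput, in the ballpark of the quoted 72
```

Peak-throughput arithmetic like this is why a single arithmetic fault matters so much: every one of those trillions of operations per second feeds a safety-critical decision.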
ARTIFICIAL INTELLIGENCE (AI) is steadily progressing toward advanced, high-value applications that will have a profound impact on our society. Automobiles that can drive themselves are perhaps the most talked-about imminent technological revolution, but there are many more applications of AI.
AI software, such as a neural network (NN) implementing a machine learning (ML) or deep learning (DL) algorithm, requires high-performance “artificial brains,” or hardware, to run on. Computer vision is fundamental to many complex, safety-critical decision-making processes.
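The scale of compute behind that statement is easy to underestimate. A single convolution layer's multiply-accumulate (MAC) count illustrates why vision workloads need dedicated hardware; the layer dimensions below are illustrative, not taken from any specific network:

```python
# MAC count for one convolution layer:
#   output_height * output_width * output_channels * (kernel^2 * input_channels)
# The dimensions below are hypothetical, chosen only to show the scale.
def conv_macs(out_h, out_w, out_ch, kernel, in_ch):
    """Multiply-accumulate operations for a single dense conv layer."""
    return out_h * out_w * out_ch * kernel * kernel * in_ch

macs = conv_macs(out_h=112, out_w=112, out_ch=64, kernel=3, in_ch=64)
print(f"{macs / 1e9:.2f} billion MACs per frame, for this one layer alone")
```

Multiply that by dozens of layers and tens of frames per second, and the need for high-performance "artificial brains" follows directly.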
By Ann Steffora Mutschler, Semiconductor Engineering | Feat. Jörg Grosse, Safety Product Manager, OneSpin
Experts at the Table: With both established companies and new players clamoring to play a role in the automotive space, how is the industry moving towards automation? (Part One)
Semiconductor Engineering sat down to discuss functional safety thinking, techniques and approaches to automation with Mike Stellfox, Fellow at Cadence; Bryan Ramirez, strategic marketing manager at Mentor, a Siemens Business; Jörg Grosse, product manager for functional safety at OneSpin Solutions; and Marc Serughetti, senior director of product marketing for automotive verification solutions at Synopsys. What follows are excerpts of that conversation.
Grosse: We want to automate the FMEDA [failure modes, effects, and diagnostic analysis] process as much as possible. My personal opinion is that you can’t have a single FMEDA tool. I don’t think that exists. You have to have several building blocks: a part for the hardware safety analysis, a part for the diagnostic coverage determination, and a part for the computational model. That’s where we are looking to automate the process.
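The computational part Grosse mentions reduces to well-defined arithmetic. A minimal sketch, under simplified assumptions (failure rates in FIT, one safety mechanism per failure mode, hypothetical numbers), of the diagnostic-coverage and single-point fault metric (SPFM) calculations that sit at the core of an FMEDA:

```python
# Minimal sketch of FMEDA-style arithmetic, assuming failure rates in FIT
# (failures per 1e9 device-hours) and one safety mechanism per failure mode.
# Illustrative only; not a substitute for a full ISO 26262 analysis.
def residual_fit(mode_fit, diagnostic_coverage):
    """Failure rate left undetected after the safety mechanism is applied."""
    return mode_fit * (1.0 - diagnostic_coverage)

def spfm(total_safety_related_fit, residual_fits):
    """Single-point fault metric: the fraction of the safety-related
    failure rate that is covered (or safe) rather than residual."""
    return 1.0 - sum(residual_fits) / total_safety_related_fit

# Hypothetical failure modes: (failure rate in FIT, diagnostic coverage).
modes = [(40, 0.99), (35, 0.90), (25, 0.60)]
residuals = [residual_fit(fit, dc) for fit, dc in modes]
print(f"SPFM = {spfm(100, residuals):.1%}")   # 86.1% with these numbers
```

Automating the building blocks means each mode's coverage figure comes from tool-driven fault analysis rather than a hand-maintained spreadsheet, with this aggregation step applied on top.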
"What we have here is a failure to communicate." – Cool Hand Luke
"The farmer and the cowman should be friends." – Oklahoma
There are apparently a couple of silos in the EDA world that could use some breaking down.
- On one side, we have verification. This is a well-established discipline involving numerous EDA tools and a brief that compels verification engineers to make sure that a design does what it is intended to do.
- On the other side, we have safety engineering. This is a newer discipline to EDA, charged with making sure that a design won’t put someone or something in danger if things go awry.
Historically, safety has been limited to the rather rarefied realms of aviation and military. Folks operating in those markets have been a different breed, sacrificing flexibility and agility for what many might see as a cumbersome, inefficient process of checks and cross-checks, and adherence to what can be mind-numbing regulations, all designed to keep soldiers, aircraft passengers and, frankly, innocent bystanders safe.
By Brian Bailey, Semiconductor Engineering | Feat. Dominik Strasser, VP Engineering, OneSpin
Why time spent in debug is increasing, underlying trends, and what surveys do not reveal.
Semiconductor Engineering sat down to discuss the debugging of complex SoCs with Randy Fish, vice president of strategic accounts and partnerships for UltraSoC; Larry Melling, product management director for Cadence; Mark Olen, senior product marketing manager for Mentor, a Siemens Business; and Dominik Strasser, vice president of engineering for OneSpin Solutions. What follows are excerpts of that conversation.
Strasser: We do see new classes of bugs, like security and side-channel attacks. These are new concerns, and nobody has thought about these in the past. Suddenly, things are seen as being malicious. Out-of-order execution suddenly becomes a side-channel attack and someone is listening to what your processor is doing. It is eavesdropping. Is that a bug? Where is the bug? The bug is in the specification. The bug is in the way the system is constructed.
Fish: It is hard to quantify security. It is hard to optimize anything that cannot be quantified. It is like playing whack-a-mole. Every time there is a breach you learn how to stop that class of problem.
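The eavesdropping Strasser describes, and why it is so hard to quantify as Fish notes, can be illustrated with a toy software analogue. A naive check below leaks information through its *timing* even though its return value is correct; this is a deliberately simplified stand-in, not a model of real cache or speculative-execution attacks:

```python
# Toy timing side channel: the check's return value is "correct," yet its
# execution time depends on how many leading characters of the guess match.
# The secret and per-character delay are artificial, purely for illustration.
import time

SECRET = "hunter2"

def naive_check(guess):
    # Early exit on first mismatch, so runtime grows with the number of
    # correct leading characters. (Delay exaggerates the effect.)
    for a, b in zip(guess, SECRET):
        if a != b:
            return False
        time.sleep(0.001)
    return len(guess) == len(SECRET)

def timed(guess):
    start = time.perf_counter()
    naive_check(guess)
    return time.perf_counter() - start

# A guess sharing more leading characters takes measurably longer, so an
# attacker can recover the secret one character at a time; by Strasser's
# framing, the "bug" is in the construction, not in any single line.
print(timed("hun0000") > timed("0000000"))
```

Nothing here is functionally wrong, which is exactly why conventional verification misses it and why, as Fish says, each new breach teaches the industry to stop only that class of problem.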
By Brian Bailey, Semiconductor Engineering | Feat. Raik Brinkmann, President & CEO, OneSpin
Calling an open-source processor free isn’t quite accurate.
Open source processors are rapidly gaining mindshare, fueled in part by early successes of RISC-V, but that interest frequently is accompanied by misinformation based on wishful thinking and a lack of understanding about what exactly open source entails.
Nearly every recent conference has some mention of RISC-V in particular, and open source processors in general, whether in keynote speeches, technical sessions, or panels. What’s less obvious is that open ISAs are not a new phenomenon, and neither are free, open processor implementations.