
Chips Listening to Gibberish

By: Sergio Marchese, Technical Marketing Manager, OneSpin Solutions

Pre-silicon verification engineers assume that hardware interfaces must behave according to well-defined protocol rules. What happens when the rules are broken?

We all talk gibberish once in a while. At least, I do. I might be in a silly mood, thinking aloud, listening to music or talking over the phone using my headphones (they are quite small, and if you don’t notice them, you could think I am crazy). Regardless of the circumstances, I mean no harm, I promise. However, it’s still possible that a passer-by could get distracted trying to figure out what’s wrong with me.

ICs - and the electronic systems that include them - speak well-defined languages. On-chip and external hardware interfaces and I/O ports implement strict protocol rules that ensure IPs and subsystems can communicate effectively with each other and the external world. Think about data read and write transactions across an AMBA bus interconnect or an I2C interface, for example. Both master and slave signals must comply with the rules defined in the protocol specification. Is it possible for a master or slave to get out of line, break the rules, and start talking gibberish? Could this affect chip security?
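To make the idea of a protocol rule concrete, below is a minimal sketch (in Python, and not tied to any real bus specification) of a monitor that checks one illustrative rule on a simplified request/response interface: a response is only legal while a request is outstanding. A checker of this kind, whether running in a simulation testbench or captured as a formal assertion, flags the cycles where an interface starts talking gibberish.

# Minimal sketch of a protocol monitor for a simplified request/response
# interface. The rule checked here is illustrative, not taken from any
# specific bus specification: a response is only legal while a request
# is outstanding.

def check_trace(trace):
    """Check a cycle-by-cycle trace of (req, rsp) booleans.

    Returns the list of cycle numbers at which the rule is violated.
    """
    violations = []
    outstanding = 0  # number of requests awaiting a response

    for cycle, (req, rsp) in enumerate(trace):
        if rsp:
            if outstanding == 0:
                # Unrequested response: the slave is "talking gibberish".
                violations.append(cycle)
            else:
                outstanding -= 1
        if req:
            outstanding += 1

    return violations


# A well-behaved trace and one containing an unsolicited response.
good = [(True, False), (False, True), (True, False), (False, True)]
bad  = [(False, True), (True, False), (False, True)]

print(check_trace(good))  # [] -> no violations
print(check_trace(bad))   # [0] -> response with no outstanding request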

Bugs in the hardware design may lead to bus transactions that violate protocol rules. Once the circuit stops behaving as intended, many undesirable things can happen, including violations of security requirements. These design mistakes are often severe and may cause system malfunctions and costly hardware re-spins (ASICs) or re-programming (FPGAs). During pre-silicon functional verification at the IP and SoC level, engineers aim to detect such bugs and fix them before the devices are manufactured, programmed, and deployed. Pre-silicon functional verification is a critical, time-consuming task. In my experience, it often stands on the critical path to successful – and timely – project completion.

During pre-silicon verification, engineers assume that the world around the hardware component behaves nicely. If a test detects an illegal transaction at an internal interface while the SoC is in an invalid configuration state, the scenario could be dismissed as irrelevant. This happens quite often in practice: the hardware designer analyzes the failing test and determines that it is not valid because the hardware is not intended to operate in that scenario; it is the software's responsibility to configure the hardware correctly in the first place. Similarly, input signals at external interfaces are assumed to respect protocol rules. If they do not, designers can argue that the hardware has no defined expected behavior. In practice, verification engineers try to avoid invalid test scenarios, as they are a waste of precious time. When it comes to verifying security requirements, however, these same invalid scenarios can become very relevant.

Attackers are not known to behave nicely, respect rules, and use hardware components only in the intended way. The Meltdown and Spectre processor vulnerabilities are a great example of how certain hardware features can be misused to breach the confidentiality of security assets. Complex hardware components have plenty of features to increase performance, reduce power consumption, and optimize silicon area. Once one accepts that these features can be misused, it becomes hard to predict what can and cannot happen.

Formal verification technology is a powerful, efficient way to extend functional verification to address security concerns. In the specific case of interface protocol violations, formal tools can perform an exhaustive analysis and identify scenarios that violate security requirements. For security-critical systems, this is a crucial task, as there is evidence that protocol violations can enable attacks. Recently, I used the OneSpin formal verification tool to analyze a RISC-V-based SoC. Within hours of work, I identified illegal transactions at an external interface that interfere with the execution of the boot code after reset. Due to the SoC configuration at reset and some other design implementation details, it is possible to use an unrequested response to smuggle instructions into the processor cache that are executed instead of the boot code, and thus take over the entire boot process. While this particular SoC does not claim to have any security features, this behavior was not expected and raised numerous alarm bells.
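To illustrate the kind of analysis involved, here is a toy sketch: a deliberately simplified Python model, not the actual SoC or tool discussed above, that enumerates every input sequence up to a given depth, legal or not, and reports those that drive a simple boot model into an insecure state. Formal tools do this exhaustively and symbolically on the real design; the toy version only conveys the idea.

from itertools import product

# Toy illustration (not the actual SoC analyzed in this article) of why
# exhaustive analysis over *all* input sequences, legal or not, matters.
# The model: after reset, the boot ROM fetch is supposed to fill the
# instruction cache. An unrequested response arriving first is accepted
# anyway and fills the cache with attacker-controlled data.

INPUTS = ["fetch_rsp", "unreq_rsp", "idle"]

def step(state, inp):
    """Next-state function of a deliberately simplified boot model."""
    if state == "WAIT_BOOT_FETCH":
        if inp == "fetch_rsp":
            return "BOOT_CODE_CACHED"      # intended behavior
        if inp == "unreq_rsp":
            return "ATTACKER_CODE_CACHED"  # flaw: response accepted anyway
    return state

def find_attacks(depth):
    """Enumerate every input sequence up to `depth` cycles and report
    those that drive the model into the insecure state."""
    attacks = []
    for seq in product(INPUTS, repeat=depth):
        state = "WAIT_BOOT_FETCH"
        for inp in seq:
            state = step(state, inp)
        if state == "ATTACKER_CODE_CACHED":
            attacks.append(seq)
    return attacks

print(find_attacks(2)[:3])
# e.g. [('unreq_rsp', 'fetch_rsp'), ('unreq_rsp', 'unreq_rsp'), ...]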

Even when executed correctly and to a high standard, traditional functional verification does not implicitly cover all security concerns. Simulation-based verification is particularly weak at revealing unforeseen misuse-case scenarios. New processes and methods are required to systematically identify hardware weaknesses and vulnerabilities and to generate security assurance evidence.

Learn more

I invite you to take a look at the trust and security glossary, download the white paper Trust Assurance and Security Verification of Semiconductor IPs and ICs, and join the LinkedIn group ISO/SAE 21434 Automotive Cybersecurity. And of course, don’t miss the next 6 Minutes of Security blog post!

 
