6 Minutes of Security: Staying A Step Ahead of Hackers with Continuous Verification
By John Hallman
We’re all familiar with the apps on our phones and how often they get updated. Most of the time, these updates are done over the air quickly and easily. Other times, a completely new download of the software is required. But let’s take a look at the hardware platforms that the software runs on. What happens when the hardware needs to be upgraded? Today’s hardware platforms are expensive to develop, especially the more complex systems that run critical infrastructure, industrial IoT, autonomous vehicles, or safety-critical applications. To amortize the cost of these systems, feature upgrades need to be incremental and customizable throughout the lifetime of the product. Heterogeneous platforms that include programmable logic, programmable engines, and accelerators deliver the customization and flexibility needed to allow updates in the field.
As the various incarnations of the hardware platform occur, however, the threat of bugs, or worse, malicious logic finding their way into the design rises. The hardware also needs to keep pace with the evolving ways that hackers can infiltrate a design. What’s needed is a continuous verification strategy to ensure that the hardware functions as intended and remains secure at all times.
Verification at the pre-fabrication stage is often an extensive and expensive process that produces an enormous amount of information. This information, though, is often left behind once the design enters fabrication, and as shown in the figure, much of the lifecycle that follows still requires verification. Fortunately, we continue to find innovative ways to advance technology, and one such advancement is the introduction of the “digital twin” to aid the continuous verification effort.
A digital twin is a virtual representation that serves as the real-time digital counterpart of a physical object, in this case the integrated circuit. Both the physical product and its digital twin can be reprogrammed. The digital twin enables design models and verification originally performed prior to fabrication to be combined with digital models virtualized from the physical device. Combining this original modeled data and physical dependency models with real-time data enables more system simulation and predictive analysis to improve end-to-end processes. It also makes the original automated verification methods and analysis solutions available for use with the most current models of the design in the system. Furthermore, if new security weaknesses or vulnerabilities become known, the model of the IC may be analyzed for impact. While not all exploits may be discovered prior to deployment, the continuous verification environment enables much faster risk assessment and mitigation for issues that might otherwise go undetected or undiagnosed for months or even longer.
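To make the idea concrete, one way to exploit a digital twin is to compare its predicted outputs against real-time telemetry from the deployed device and flag divergence. The following is a minimal sketch of that comparison; the function name, data shapes, and 5% tolerance are illustrative assumptions, not part of any specific digital-twin product.

```python
# Hypothetical sketch: flag anomalies by comparing digital-twin predictions
# against telemetry from the deployed device. Names and thresholds are
# illustrative assumptions only.

def detect_anomalies(twin_outputs, device_telemetry, tolerance=0.05):
    """Return indices where the device diverges from the twin's prediction
    by more than the given relative tolerance."""
    anomalies = []
    for i, (expected, observed) in enumerate(zip(twin_outputs, device_telemetry)):
        if expected == 0:
            diverged = abs(observed) > tolerance
        else:
            diverged = abs(observed - expected) / abs(expected) > tolerance
        if diverged:
            anomalies.append(i)
    return anomalies

# Sample 3 diverges well beyond 5% of the predicted value.
print(detect_anomalies([1.0, 2.0, 3.0, 4.0], [1.01, 1.98, 3.02, 5.5]))  # [3]
```

In practice the "prediction" would come from simulating the current design model with the same stimulus the device sees, which is exactly the pre-fabrication collateral the digital twin carries forward.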
The recent SolarWinds hacking incident that left many Fortune 500 companies and US government networks exposed is a cautionary tale about unchecked software and hardware supply chain security vulnerabilities. The attack occurred during an update process.
In a typical verification process, the update would be verified in isolation prior to creation of the signed binary. Details are still forthcoming on exactly where in the update process the malware was introduced, but let us assume that verification of the updated code prior to binary creation did not reveal any abnormal behavior. The binary was signed and then delivered for deployment. Most systems rely on the signed binary and then potentially do some “sandbox” testing before full deployment. However, with this particular attack, even a few weeks of rigorous testing would not have discovered the anomalous behavior because of the timed-release nature of the malware. You may ask: what could have been done differently?
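The sign-then-verify step itself can be sketched in a few lines. Real update pipelines use asymmetric signatures (e.g., RSA or ECDSA over a certificate chain); a keyed HMAC stands in here purely so the example is self-contained, and the key and payload bytes are invented for illustration.

```python
# Minimal sketch of verifying an update binary against its signed digest
# before deployment. An HMAC stands in for a real asymmetric signature;
# the key and payload are placeholders.
import hashlib
import hmac

SIGNING_KEY = b"vendor-signing-key"  # placeholder; never hard-code real keys

def sign_update(binary: bytes) -> bytes:
    """Producer side: compute the tag shipped alongside the binary."""
    return hmac.new(SIGNING_KEY, binary, hashlib.sha256).digest()

def verify_update(binary: bytes, tag: bytes) -> bool:
    """Consumer side: constant-time check that the binary matches its tag."""
    expected = hmac.new(SIGNING_KEY, binary, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

firmware = b"\x7fELF...update-payload"
tag = sign_update(firmware)
print(verify_update(firmware, tag))            # True
print(verify_update(firmware + b"\x00", tag))  # False: tampered after signing
```

The SolarWinds lesson is visible in this sketch: a valid signature only attests to whatever bytes were signed. If malicious code is introduced before signing, `verify_update` happily returns `True`, which is precisely why verification cannot stop at the signature check.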
To answer this, consider a hypothetical scenario in which the hardware is programmable, as is the case with field programmable gate arrays (FPGAs), and will receive a maintenance update. Using disciplined processes, the update would be functionally verified. As with the pre-fabrication verification discussed earlier, this rigorous coverage-driven process and completeness verification form the foundation, after which additional security assessment can be performed. For higher assurance, additional analysis and methodologies may be required. When the security objectives are met, the product meets the criteria for deployment.
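The gate described above, coverage closure first, then security objectives, can be expressed as a simple release check. This is only a sketch; the threshold and objective names are assumptions, not criteria from any particular verification flow.

```python
# Illustrative deployment gate: release only when functional coverage is
# closed and every security objective is met. Thresholds and objective
# names are assumptions for this sketch.

def ready_to_deploy(coverage_pct: float, security_objectives: dict) -> bool:
    """Gate deployment on coverage closure and all security objectives."""
    coverage_closed = coverage_pct >= 100.0
    objectives_met = all(security_objectives.values())
    return coverage_closed and objectives_met

objectives = {"no_undocumented_functions": True, "secure_boot_verified": True}
print(ready_to_deploy(100.0, objectives))  # True
print(ready_to_deploy(97.5, objectives))   # False: coverage not closed
```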
Where continuous verification methodologies exist, models of the updated hardware may continue to be verified in the system model. Real-time data from the deployed update, simulated in the system, may offer more insight into any anomalous behaviors detected. While this may still not detect all potential exploits, we should take advantage of the methods and processes available.
OneSpin provides a set of technologies that can assist in the continuous verification effort, ensuring that modifications to the hardware don’t introduce new bugs, safety issues, or security vulnerabilities. We have published a new white paper that addresses SoC verification from pre-fabrication to over-the-air updates.