Due to the volumes of data that need to be analyzed and the limits of simulation tool capabilities and processing power, sign-off in chip and electronic system design has traditionally followed a monolithic, margin-based approach that has resulted in larger die sizes and longer development times. Today I’d like to tell you about a fundamentally new approach and software architecture called ANSYS SeaScape that will revolutionize chip simulation by harnessing the power of elastic computing, machine learning and big data to perform multiphysics simulations and design more compact, complex chips. This approach has demonstrated its ability to speed up chip design and eliminate many of the inefficiencies of traditional methodologies.
Consider, for example, voltage drop sign-off: most design teams have historically partitioned the sign-off threshold (approx. 15%) between the chip (approx. 10% of supply voltage), the package (approx. 3%) and the PCB (approx. 2%). Once they defined these limits, they performed transient, DC and AC analyses to meet them in their respective domains. With the available resources and simulation tool capabilities, design teams could validate and exercise only a small fraction of all operating modes. To protect against this coverage exposure, they padded these thresholds with extra margin.
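To make the budgeting concrete, here is a minimal sketch (plain Python, not any ANSYS tool; the 1.2V nominal supply is an assumed example) of how such a static margin partition translates into absolute millivolt limits per domain:

```python
# Illustrative sketch of the traditional static margin budget described above:
# a ~15% total voltage-drop threshold partitioned across chip, package and PCB.
# The percentages come from the text; the 1.2 V supply is an assumption.

VDD = 1.2  # assumed nominal supply voltage, in volts

# Fraction of the supply budgeted to each domain
budget = {"chip": 0.10, "package": 0.03, "pcb": 0.02}

for domain, frac in budget.items():
    print(f"{domain:>7}: {frac:5.0%} of VDD = {frac * VDD * 1000:.0f} mV allowed drop")

total = sum(budget.values())
print(f"  total: {total:5.0%} of VDD = {total * VDD * 1000:.0f} mV sign-off threshold")
```

On a 1.2V supply this budget gives the chip 120 mV, the package 36 mV and the PCB 24 mV, for a 180 mV total threshold that each team then signs off against independently.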
As long as supply voltages were in the 1+V range, these pre-defined thresholds could mostly be met, but doing so required wide power and ground interconnects along with decoupling capacitors. Even though these power and ground networks disrupted routing and created timing congestion, design teams were able to move forward by increasing the die size to manage these constraints. Package designers focused on careful power/ground bump and plane planning along with the use of discrete capacitors. Because the current spikes (or di/dt) flowing in and out of the chip were moderate, these package elements could be handled with standard design techniques.
FinFET devices switch faster, resulting in sharper current peaks. In addition, higher placement density guarantees significantly higher di/dt in localized areas. That, coupled with higher power delivery network (PDN) parasitics, results in considerable dynamic voltage drop swings. These fluctuations, however, now occur on a sub-500mV supply rather than a 1.2V supply, so power noise rises to 25-30% of the nominal supply instead of the 10-15% seen earlier. It thus becomes virtually impossible for design teams to meet their historical thresholds unless they create large power grids that are uniformly wide across the chip, which leads to timing congestion and routing bottlenecks, which in turn lead to ballooning die sizes.
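The arithmetic behind that jump is worth spelling out. The short sketch below (assumed round numbers, not measured data or output from any ANSYS product) shows how a fixed absolute noise swing that fits comfortably inside a 10-15% budget on a 1.2V rail blows well past it on a 0.5V rail:

```python
# Illustrative sketch: the same absolute dynamic voltage-drop swing consumes a
# much larger share of the margin on a FinFET-class sub-500 mV rail than on an
# older 1.2 V rail. The 130 mV swing is an assumed example value.

noise_mv = 130.0  # assumed dynamic voltage-drop swing, in millivolts

for rail_name, vdd_mv in [("legacy 1.2 V rail", 1200.0),
                          ("FinFET 0.5 V rail", 500.0)]:
    pct = 100.0 * noise_mv / vdd_mv
    print(f"{rail_name}: {noise_mv:.0f} mV noise = {pct:.0f}% of nominal supply")

# Output:
#   legacy 1.2 V rail: 130 mV noise = 11% of nominal supply
#   FinFET 0.5 V rail: 130 mV noise = 26% of nominal supply
```

Nothing about the noise itself got worse in this example; the budget it is measured against simply shrank with the supply voltage.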
However, there is no guarantee that this over-design helps the chip avoid failures, since the simulations capture only a very small fraction of all operating modes. So design teams also over-simulate, creating artificial operating conditions that exaggerate the power envelope and di/dt and thereby produce significantly higher power/ground noise. Under these artificial scenarios, even an over-designed PDN cannot meet the tighter thresholds, resulting in considerable delays to design schedules.
This situation repeats itself for timing, Design Rule Check (DRC) and reliability modeling. There is thus an overarching need to look at simulation and the sign-off process differently.
To solve these problems, whose complexity in many ways dwarfs that of the internet itself, we should take inspiration from the work done to manage and deliver the internet experience. A modern SoC and its associated electronic systems are naturally big data problems, given the interrelated nature of the billions of unknowns present in these designs. Yet we are still attacking these challenges with software architectures that are more than twenty years old. If we truly want to take a step forward, we have to think outside traditional EDA approaches and look to big data platforms for chip and electronic system design.
With the launch of ANSYS SeaScape, the world’s first custom-built big data architecture for chip and electronic system design, we are bringing the computer science of elastic computing, machine learning and big data to the physics-based world of engineering simulation. ANSYS SeaHawk, the first product on this architecture, focuses on in-design power closure and has already enabled leading networking, mobile and graphics companies to achieve a 10X turnaround-time reduction, a 5% die-size optimization and improved sign-off confidence (100% first-time silicon success).
In my next blog post, I will discuss specifically how ANSYS SeaScape provides an infrastructure for applications that address the problems of margin management, design optimization and multiphysics analysis.
We think that ANSYS SeaScape will be a hot topic of conversation in June at the DAC conference in Austin, Texas. If you’re attending make sure you stop by our booth.
In the meantime, you can download the application brief, ANSYS SeaScape — A Big Data Approach to Complex Chip Design, which tells you more about the SeaScape platform.