The Executive’s Guide to the Semiconductor Digital Twin
Four Ways to Expand to a System-Wide View
At its core, the digital twin is exactly what it sounds like: a digital replica of a physical asset. To be a true twin, it must match the physical asset in every way – every way, that is, except for being physical. It models the dimensions, properties, manufacturing sensitivities, and performance parameters just as they exist in the physical world.
The concept of a digital twin has proven successful in the semiconductor, aerospace, and automotive industries and has now begun to gain traction in industries like retail and consumer products, where it is used to model key parameters. Successful companies have found that a digital twin enables them to make design adjustments prior to beginning mass production, and factory adjustments during production. For example, clothing companies are using a digital twin to model the fit of different fabrics and how they drape, then adjusting the material, cut, and pattern before bulk ordering fabric or sending the design to third-party manufacturers.
The benefits are significant:
- Better returns on products brought to market, thanks to higher-quality designs tailored and optimized for the target consumer, which leads to increased sales
- Reduced design cycle times, with fewer iterations and no waiting for physical samples to be shipped back and forth with each adjustment
- Faster time to market
- Better knowledge of what is going to market, and the ability to vary products with confidence (e.g., color)
Despite the novelty of the digital twin in other industries, this concept has been leveraged in semiconductor design and manufacturing for decades. Hardware description languages like VHDL and Verilog, along with simulations and pre-silicon verification, have been used to virtually model chip functionality and performance, helping design engineers manipulate designs in the digital world since the advent of VLSI (very large-scale integration) circuits. It’s not exactly groundbreaking to model a chip before producing it.
Expanding the Semiconductor Digital Twin
To realize the full benefits of a digital twin, semiconductor firms must move beyond the chip in isolation and look at how it interacts with the data, manufacturing processes, and systems that must come together to turn it from an ingot into an integrated circuit, and at how that finished component interacts with the rest of the product it is used in. A comprehensive digital twin includes a full digital representation of (1) circuit performance, (2) manufacturing sensitivities and throughput, and (3) the interaction of all the modules in the system – and their data – with other components, hardware, and software.
Understanding key interactions of the entire system up front – including electrical, thermal, mechanical, and software integrations – brings additional insights to the manufacturing process. Process engineers today focus on process sensitivities that impact yield, using parametric measurements and functional testing. Frequently this means reacting to process excursions and parametric readings at class probe and other test events. However, a holistic digital view of the operation of the entire fab, along with the interactions between silicon and the rest of the system, can prescribe changes that improve yields across the system – not just silicon yields – before the chip is taped out and throughout production.
To enable a foundation for smart operations and proactive, analytics-driven yield improvements, the digital twin should incorporate the performance of the component in the entire system as well as the entire fab operation. This digital view of system data provides visibility into insights at scale across platforms, process nodes, and factory production lines. Engineers can leverage this data to make improvements that go well beyond traditional pre-silicon verification efforts and Monte Carlo yield predictions.
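For readers who want a concrete picture of that baseline, the sketch below shows the kind of Monte Carlo yield prediction referenced above: sample process parameters from assumed distributions and count the fraction of virtual devices that still meet spec. The toy delay model, the distributions, and the spec limit are illustrative assumptions only, not a reference to any particular design flow.

```python
# Sketch of a traditional Monte Carlo yield prediction: sample process
# parameters from assumed distributions and count the fraction of virtual
# devices that meet spec. All distributions, the toy delay model, and the
# spec limit below are illustrative assumptions.
import random

SPEC_LIMIT = 3.5  # hypothetical maximum allowed delay (arbitrary units)

def device_passes(vth: float, r_metal: float) -> bool:
    """Toy pass/fail model: delay grows as threshold voltage drops and metal resistance rises."""
    delay = 1.0 / max(vth, 1e-6) + 0.5 * r_metal
    return delay < SPEC_LIMIT

def monte_carlo_yield(n_samples: int = 100_000) -> float:
    passes = 0
    for _ in range(n_samples):
        vth = random.gauss(0.45, 0.03)     # threshold voltage variation (V), assumed
        r_metal = random.gauss(2.0, 0.25)  # interconnect resistance variation (ohm), assumed
        if device_passes(vth, r_metal):
            passes += 1
    return passes / n_samples

print(f"Predicted parametric yield: {monte_carlo_yield():.1%}")
```

A system-wide digital twin extends this idea by feeding the same kind of prediction with live fab, test, and system data rather than assumed distributions alone.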
For example, a fab may have yield engineers responsible for yield forecasts and trends, excursion response, root cause analysis, and defect resolution. These engineers depend on parametric measurements and other yield data to make decisions. A fab will also have operations planners and maintenance personnel responsible for maximizing uptime and utilization of fab tools. They depend on planning tools, preventive maintenance, tool changeover scheduling, and process flow recipes to manage operations. The digital twin should be able to model the interactions between these disciplines, drawing on wafer histories, log files, tool dependencies, production volumes, and throughput requirements. Yield engineers gain visibility into tool impact on parametrics and yields, and operations can predict production loads, tool configurations, and throughput requirements well in advance. This enables engineers to look at both output and yields to capture overall equipment effectiveness (OEE).
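As a simple illustration of how those linked data sets roll up, OEE is conventionally the product of availability, performance, and quality. The minimal sketch below computes it from the kinds of figures named above; the field names and sample values are assumptions made for illustration, not outputs of any specific fab system.

```python
# Minimal OEE roll-up sketch. All field names and figures are hypothetical;
# in practice these values would come from tool logs, the planning system,
# and probe/yield databases joined inside the digital twin.

def oee(availability: float, performance: float, quality: float) -> float:
    """OEE = availability x performance x quality, each expressed as a fraction."""
    return availability * performance * quality

# Example inputs, assumed for illustration:
uptime_hours = 152.0      # tool uptime from maintenance/operations data
scheduled_hours = 168.0   # planned production time
actual_wafers = 2_600     # wafers actually processed
ideal_wafers = 3_000      # wafers possible at ideal throughput
good_die_yield = 0.94     # probe yield from test/parametric data

availability = uptime_hours / scheduled_hours
performance = actual_wafers / ideal_wafers
quality = good_die_yield

print(f"OEE = {oee(availability, performance, quality):.1%}")
```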
Four Ways to Get Started with Digital Twins
Of course, creating this system-wide digital view is a challenge. EDA and simulation data are stored in development systems that speak their own language. Production and yield data are stored in production systems that speak another language. Operational data are stored in factory scheduling systems that speak yet another language. And system engineering data that models the performance of all the components in the system is also separate.
While these systems are separate, the data can (and should) still be brought together to enable views that produce valuable insights. Here are four ways to get started:
- Identify the business problems that need to be solved. Are you looking to improve quality? Reduce design cycle times? Increase product performance? Improve factory throughput? Maximize probe and final test yields? A focus on business value and ROI is critical to the success of your digital twin initiative.
- Begin by connecting disparate sources of data at the level of summary results and statistics to demonstrate the value of connected information; a minimal sketch of this step follows this list. Consolidating to a single source of truth for this data is not needed right away. Meaningful insights can be gleaned by analyzing summary-level data across domains, and many tools can provide views and analytics across disparate databases.
- Establish an intelligent data lake that can connect more and more sources of data and bring in additional context and structure to that data. This will be an ever-evolving source of data that can be sliced and viewed in different ways to continually improve predictive and prescriptive analytics.
- Continually improve the accuracy and precision of your digital twin. One view isn’t enough. Analyses will become outdated. New interactions and dependencies will surface. As you generate and connect more data to expand the view of your digital twin, continually look for additional views and insights, including design sensitivities, production yields, system performance, cycle times, and reliability standards. The more data you have, the more valuable your insights will become.
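As a minimal sketch of the second step above – connecting disparate sources at summary level – the snippet below joins hypothetical design, yield, and tool-utilization summaries on shared keys to produce a first cross-domain view. The file names, column names, and choice of pandas are assumptions for illustration, not a prescribed toolchain.

```python
# Sketch: joining summary-level data from separate systems on shared keys.
# File names and columns are hypothetical; any BI tool or query layer that
# can read the disparate sources would serve the same purpose.
import pandas as pd

design = pd.read_csv("design_sim_summary.csv")    # e.g., product_id, corner, predicted_fmax
yields = pd.read_csv("probe_yield_summary.csv")   # e.g., product_id, lot_id, probe_yield
tools = pd.read_csv("tool_utilization.csv")       # e.g., lot_id, tool_id, uptime_pct

# Link design expectations to production outcomes, then to the tools involved.
combined = (
    yields.merge(design, on="product_id", how="left")
          .merge(tools, on="lot_id", how="left")
)

# A first cross-domain question: does probe yield vary by tool for a given product?
summary = (
    combined.groupby(["product_id", "tool_id"])["probe_yield"]
            .agg(["mean", "std", "count"])
            .sort_values("mean")
)
print(summary.head(10))
```

Even a simple view like this can flag tool-to-yield dependencies worth investigating before committing to a consolidated data platform.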
The semiconductor industry has historically been a digital twin pioneer, and semiconductor technology drives the proliferation of digital technologies in other industries. It is time for leading semiconductor firms to pave the way with the next generation of digital technologies by creating a more comprehensive digital twin.
Learn More
Download our eBook for full details on leveraging digital technologies in high-tech, including use cases, benefits and pragmatic starting points.