4C – IP Session: Data Analytics in Test

Day: April 10, 2017 Room: Pompeian III Time: 16:20 – 17:20
Organizer: Suriya Natarajan (Intel Corporation)
Moderator: Abhijit Sathaye (Intel Corporation)
Big Data Analytics Engines for End-to-End Supply Chain and Quality Control
Speakers: Thomas Harper, Paul Simon (Qualtera)
Abstract: Innovations in web technologies, cloud computing and big data analytics have made it possible to connect test equipment and MES systems to centralized big data platforms and massively collect test data in real time from any test floor. Modern compute engines then process the incoming data at scale, systematically performing sanity checks, data manipulations and statistical computations of any kind on all the test data in real time. Essentially any algorithm can be set up to execute automatically as soon as the appropriate data is available. If an analytical signal is present in the manufacturing data, it becomes visible in real time, as opposed to becoming available much later after complex manual engineering work. The same massive compute engines can also perform (triggered) simulation, modeling and construction of predictive signals to generate feed-forward and/or feedback throughout the supply chain. This presentation will discuss examples of such use cases in end-to-end enterprise solutions for high-volume big data analytics. It will show how these technologies are fundamentally changing the way semiconductor companies approach operational practices and rapidly achieve gains in operational KPIs such as engineering efficiency, productivity, yield and device quality.
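The abstract does not specify an implementation, but the triggered-analytics pattern it describes can be sketched in a few lines: checks are registered once, then fire automatically whenever a new lot's data lands. All names here (`register_check`, `on_lot_data`, the parameter names) are hypothetical illustrations, not any vendor's API.

```python
# Hypothetical sketch of triggered analytics: registered checks run
# automatically when new lot data arrives. Function and parameter names
# are illustrative only.
import statistics

CHECKS = []

def register_check(fn):
    """Register a routine to run whenever new test data arrives."""
    CHECKS.append(fn)
    return fn

@register_check
def mean_shift_check(lot_data, baseline, k=3.0):
    """Flag parameters whose lot mean drifts more than k sigma from baseline."""
    flags = []
    for param, values in lot_data.items():
        mu, sigma = baseline[param]
        if sigma and abs(statistics.fmean(values) - mu) > k * sigma:
            flags.append(param)
    return flags

def on_lot_data(lot_data, baseline):
    """Trigger every registered check as soon as a lot's data lands."""
    return {check.__name__: check(lot_data, baseline) for check in CHECKS}

# Toy data: idd has drifted well outside its baseline; vth has not.
baseline = {"idd": (10.0, 0.5), "vth": (0.70, 0.02)}
lot = {"idd": [12.1, 12.3, 11.9], "vth": [0.69, 0.71, 0.70]}
print(on_lot_data(lot, baseline))  # {'mean_shift_check': ['idd']}
```

In a production platform the trigger would be an event from the data-ingestion pipeline rather than a direct function call, but the structure is the same: the check logic is decoupled from the arrival of the data.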
Data Mining of Defective Parts Investigation in Test
Speaker: Rahima Mohammed (Intel Corporation)
Abstract: The purpose of this presentation is to identify potential causal mechanisms for defective parts in test, improving semiconductor manufacturing health. A methodology will be demonstrated that enables fast pattern recognition by recreating the history of defective and non-defective parts from the data collected during Fab Sort, Assembly and Test, and then applying machine learning algorithms for initial exploration. First, the unique processor identification mechanism is used to trace the manufacturing history of a part: all data for the specific processor unit is pulled from wafer-level sort and from the package-level test steps (burn-in, class, binning, fusing, system-level validation, quality assurance and packaging). Once the manufacturing history of a defective part is reconstructed, neighboring parts are identified from the same lot and from neighboring lots, especially those tested with the same test program and conditions at each manufacturing test step, across the various manufacturing sites, at both wafer and package level. This constitutes the big data for the defective units. With the complete histories of the defective parts and the non-defective neighboring units in hand, iterations of decision-tree models are used to pinpoint the most critical parameters, typically the top five or so by relevance. These critical parameters are then analyzed with detailed multivariate analysis to find any patterns specific to the defective units. Second, the methodology is applied to defective-unit investigations in real case scenarios. Finally, conclusions and future work will be presented.
This work delivers an expedited methodology for finding the causal mechanisms behind defective units and for instituting monitoring mechanisms in the test flow to address and mitigate the defects.
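The decision-tree step of the methodology above can be sketched with a one-level proxy: rank each test parameter by how much impurity reduction its best single threshold split achieves when separating defective from neighboring non-defective parts. This is a simplified stand-in for the decision-tree iterations the abstract describes, and all data and parameter names below are invented for illustration.

```python
# Simplified sketch: rank test parameters by the best single-split Gini
# gain on a defective/non-defective label -- a one-level stand-in for the
# decision-tree iterations described in the abstract. Data is invented.

def gini(labels):
    """Gini impurity of a list of 0/1 defect labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def split_gain(values, labels):
    """Best impurity reduction over all threshold splits on one parameter."""
    base = gini(labels)
    pairs = sorted(zip(values, labels))
    best = 0.0
    for i in range(1, len(pairs)):
        left = [lab for _, lab in pairs[:i]]
        right = [lab for _, lab in pairs[i:]]
        weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(pairs)
        best = max(best, base - weighted)
    return best

def top_parameters(history, labels, k=5):
    """history: {param_name: [value per part]}; labels: 1 = defective.
    Returns the k parameters with the highest single-split gain."""
    gains = {name: split_gain(vals, labels) for name, vals in history.items()}
    return sorted(gains, key=gains.get, reverse=True)[:k]

# Toy history: a leakage-like parameter separates defects cleanly,
# while an ambient reading carries little signal.
history = {
    "vdd_leakage": [0.10, 0.20, 0.90, 1.00, 0.15, 0.95],
    "ambient_temp": [25, 26, 25, 26, 25, 26],
}
labels = [0, 0, 1, 1, 0, 1]
print(top_parameters(history, labels, k=2))  # vdd_leakage ranked first
```

In practice one would fit full decision trees (or ensembles) over the merged sort, assembly and test history and read off feature importances; the ranked parameters then feed the multivariate analysis described above.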
Intelligent Data Driven Test Eco-system
Speaker: Amit Nahar (Texas Instruments)
Abstract: With thousands of products, delivering every product on time with the best quality and the lowest cost becomes a challenge. Not all devices can receive equal attention, so significant opportunities may be left on the table. An intelligent data-driven system is needed to monitor all devices continuously by analyzing design, test, manufacturing and supply chain data. This paper describes such an eco-system that intelligently mines supply chain, test and manufacturing data to help predict product delinquencies and product ramp issues, and to identify opportunities for yield and quality improvement as well as cost reduction. An early look at parts of such an autonomous system has shown great benefits in improving quality and product ramps.

Back to the Technical Program