Insightinar #3: Managing data against shortage
Panel: Helen Crimmins (Quick Release_), Chris Jobse (PIMvendors.com), Dr. Ece Sancı (University of Bath School of Management)
Host: Ahmet Mazocioglu (Quick Release_)

---

AHMET MAZOCIOGLU (host): Hello everyone, I am Ahmet Mazocioglu, a project analyst at Quick Release_. I would like to welcome you to this year's first event of the QR_ Insightinar series. In this series, we will be capturing bite-sized insights from a diverse mix of experts on a product-data-management-related theme. Today's theme is managing data against shortage, and I have three wonderful panelists for you. Each panelist will present a brief talk and then address questions from the audience. You can share your questions with us via the Q&A button in the top right corner, and I will moderate those questions to the panelists after each talk. Without further ado, our first speaker today is our own Helen Crimmins. Helen is a business manager at Quick Release_ Detroit. She specialises in data analysis and project management to support new product releases for automotive companies. Helen will kick off this event with an overview of a highly topical subject: the chip shortage in the automotive industry. OK Helen, what do you have for us today? --- HELEN CRIMMINS: Thank you for the introduction, Ahmet. My name is Helen Crimmins. As you said, I am a business manager with Quick Release_, and I'm going to be talking with you today about the chip shortage and how it is affecting the automotive industry. So, to kick things off, here's an overview of what we're going to go through today. First, a little background on what we're talking about when we talk about chips and the chip shortage. Then the difference in sales we saw before the pandemic and as it began, the rise of the chip shortage, the impact on the automotive industry, and where we are today. First off, let me talk a little about chips in general.
A chip is a microchip — a series of integrated circuits printed or etched onto a wafer made of a semi-conductive material like silicon, produced at a nanoscale. Imagine that one silicon wafer the size of a fingernail contains hundreds to tens of thousands of integrated circuits. Because these are produced at such a tiny scale, they require very highly specialised equipment to fabricate and heavy investment in R&D to keep up with technology. Zeroing in on the automotive industry: if you take a look at cars, trucks, SUVs, and all the parts that contain microchips, you're looking at a hundred or so such parts per vehicle. Some of the microchips are newer ones that are more complex, so they have more circuits printed on them and can handle more complex functions. Most, however, are older and less complex, and although they cost a bit less for the automotive companies themselves, they carry a lower profit margin for the chip fabricators. So taking a look at all the different types of parts that can contain microchips in your vehicles — and there are even more than on this list — if we look at U.S. automakers in 2019, they sold about 17 million vehicles, so in total this required billions of chips. Now we'll look at what this looks like across different industries. Despite the large number of chip-containing components going into automobiles, their consumption was actually pretty modest compared to what is consumed by wireless communications, PCs, storage, graphics processing units, etc. Automotive comes in at a very modest $41 billion. If we take a look to the right, this shows the expected growth in each of these sectors from 2019 to 2020. The black dots are the growth forecasts made in 2019, before the pandemic took off — most of these areas were expecting 3 to 10% growth, and automotive itself was expecting around 7%, which was in line with what it had been doing in previous years.
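Helen's "billions of chips" figure is easy to sanity-check with back-of-envelope arithmetic. The sketch below uses the numbers quoted in the talk plus an assumed chips-per-part multiplier; the 1.5 is purely illustrative, since real parts range from one chip to dozens:

```python
# Back-of-envelope estimate of annual chip demand from U.S. vehicle sales.
# vehicles_sold and chip_parts_per_vehicle come from the talk;
# chips_per_part is an illustrative assumption.

vehicles_sold = 17_000_000        # ~2019 U.S. vehicle sales
chip_parts_per_vehicle = 100      # "a hundred or so" chip-containing parts
chips_per_part = 1.5              # assumed average; many parts carry several chips

total_chips = vehicles_sold * chip_parts_per_vehicle * chips_per_part
print(f"roughly {total_chips / 1e9:.2f} billion chips per year")
```

Even with a conservative one chip per part, U.S. sales alone imply well over a billion chips a year, which is why a swing in automotive orders matters so much to fabricators.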
Fast-forwarding into 2020: the pandemic hit, bringing a lot of changes to the business landscape, and we reach the blue dots — the actual change in sales from 2019 to 2020. Most of these areas actually saw much higher growth of 10 to 19%. Storage, PCs, and wireless communications gained the most, consumer electronics as well. Industrial applications saw no change, and automotive was the only one that saw a loss. Instead of the predicted 7% increase, automotive experienced a 9% loss. This huge swing between what was forecast and what actually occurred is where we begin to see the chip shortage impact these different industries. Now taking a look at the chip shortage itself: what started it? First and foremost, the chip shortage was driven by digital transformations happening across every industry. Medical, aerospace, automotive, personal consumer electronics — everyone was advancing their technology, needing it to be faster and smaller, all of which was driving more demand for chips, and for more advanced chips. And as this continued, demand began to outpace the available supply. When the pandemic hit, it just exacerbated the situation. We experienced government lockdowns all over the world. Massive groups of people switched from working on site to working and going to school at home. Plants and businesses shut down. There were fewer people in the workforce. All of these factors caused the already rising demand for PCs, cloud computing, consumer electronics, etc. to suddenly surge. As you can see from this chart of lead times, captured in 2021, this is what the swing actually looked like for microchips. Lead times for different types of chips, depending on complexity, went from four to eight weeks — from the fabricator to the next step in the supply chain, where the chip is installed into a part — to as high as 52 weeks. And as you can see, it didn't only affect the actual production of chips, but also areas of the supply chain like packaging and distribution.
Now we're going to zoom back in on the automotive industry and take a look at what this looked like from their perspective. In March 2020, right around when the government lockdowns really started to take off, demand dropped for new cars, trucks, and SUVs. To react to this, automotive plants and their suppliers shut down. This was partly a reaction to demand, and partly to protect their workers from infection. And then automotive companies also began cancelling many of their orders. Part of the business culture of automotive companies is to practice just-in-time delivery, meaning they keep a tight rein on spending and hold minimal amounts of inventory, instead relying on deliveries arriving right when parts are needed at the plant. So since they weren't going to be making anything for the foreseeable future, they cancelled many of the orders they had pending, including those for parts that contained microchips. As this was happening in the automotive industry, demand began to surge for consumer electronics, servers, and wired and wireless communication — for the reasons we talked about previously: everyone switching to working and going to school from home. Because of that, chip fabricators naturally reallocated their capacity to focus on these other businesses, since their previous orders from the automotive industry had been cancelled. And as a bonus for the chip fabricators, these new contracts were higher margin than the automotive contracts. Skipping ahead a few months to the summer of 2020, demand for cars, trucks, and SUVs returned much sooner than expected. Automotive plants reopened and their orders were reissued, but chip fabricators could not immediately react to these new requirements. Since they had switched their focus to other applications, they had not been building up a stockpile of parts to send to automotive companies. Even the simplest chips were taking 10 to 12 weeks to manufacture from start to finish.
And chip companies were reluctant to switch their capacity from these newer contracts back to automotive, because automotive didn't account for as large a share of consumption, so its buying power wasn't as high and there wasn't as much incentive to switch over. So chip manufacturers did their best to add automotive demand back into what they were already working on. Skipping ahead to today, what does it look like now? Automotive companies have been coping with the chip shortage in a number of ways. They've been building vehicles without certain chip components with the intent to install them later, reducing complexity, investing in R&D on part improvements to reduce the number of chips required, increasing communication with suppliers and improving forecasting, placing orders earlier, and partnering with semiconductor suppliers. As you can see on the right, they are hoping to recover their global sales in the next couple of years. However, the chip shortage is still continuing for everyone, across every industry. Places are working to open up. There have been new chip manufacturing sites, but each site takes billions of dollars and several years to get up and running at full capacity. There have been COVID outbreaks in areas where chips are currently being produced, packaged, and distributed, which has disrupted processes. There are disruptions to global shipping — people have talked about ports being backed up. There have also been acts of nature, disasters at plants that just can't be foreseen or planned around. And different geopolitical issues, such as the war in Ukraine, which is disrupting the supply of neon — one of the elements critical to semiconductor production. Ukraine produces about 60% of the world's supply, so that is also having an impact. So where does that leave us? Unfortunately, higher prices and longer waits will continue for consumers.
But as businesses find new and better ways to handle the shortages, hopefully those improvements will show some noticeable results for the rest of us. Thank you. --- Q&A with Helen AHMET: Thank you very much, Helen, for the wonderful presentation. We have a few questions for you from the audience. The first is: would it be possible to handle more functionality with a smaller number of chips, by making those chips more powerful with improved technology? Would that have helped the shortage problem? HELEN: It could. That is something automotive companies are working on right now, researching and developing the current designs they have for parts. A lot of the parts they make, as I mentioned, contain these lower-tech legacy chips, and there are so many of them. Because they had a system that was working and they were doing well with manufacturing, there wasn't necessarily much incentive to prioritise researching how to consolidate those functions into a smaller number of chips — but that is something being pursued now. I think that will end up making a difference. But it does take time, so it's hard to say how soon they'd be able to see those results. AHMET: Interesting. So, one more question. There are certain trends going on in the automotive industry right now, such as electrification and autonomous driving, and these kinds of trends generally require additional complexity in these parts, or higher technology — which might require more chips, or different types of chips. How would you comment on how these trends have interacted with the ongoing chip shortage problem? HELEN: I certainly don't think it's going to help. These new trends definitely require more parts containing microchips, and they increase the technology and capabilities of the parts already being used. In terms of autonomous driving, there's still a lot of research and development required before we actually see a lot of them on the roads.
So I'm hoping, anyway, that the chip shortage will be over by the time we start seeing more of those on the road. Many experts are expecting the chip shortage to end in 2023, or perhaps as late as 2025. And in terms of EVs, a lot of that is happening right now anyway. Although more and more parts would require chips for those types of vehicles, because those are higher-complexity parts, they are something the chip fabricators are currently investing more money in anyway. Hopefully that would be more in line with what they're working on, and automotive companies would have more success in obtaining those types of parts. AHMET: I see. Thank you, Helen. One final question. This chip shortage problem is very topical, very recent, and it caused a huge problem. Beyond this semiconductor problem, has there been any other similar disruption in the automotive industry that automotive companies can take lessons from? HELEN: I'm sure there have been. The chip shortage is the one I have been most involved with. But in general, I know that during the pandemic — because of the reduced capacity of a lot of suppliers, plants for the automotive companies shutting down, disruptions in mail service, and other types of global disruption — the availability of parts in general was affected, not just the chip parts. --- AHMET: Thank you again to our first speaker, Helen Crimmins; now on to our next presenter. Our second speaker today is Chris Jobse, who is the CEO, founder, and co-owner of PIMvendors.com. With his knowledge of the product information management market — or PIM market — Chris is an important link between the retailer and the PIM supplier. As a senior master data and product information expert, Chris advises customers on how to organise their data in the most efficient way to serve their omnichannel goals. Today Chris will give us a brief history lesson on product data management and show us how smart architecture and intelligent tooling can help. Chris? --- CHRIS JOBSE: OK.
Well, the story of product content management starts off in the days when the store owner was a local guy. He knows his customer, he knows the needs of the customer, and he can advise his customer because he knows the products he's selling. If his customer has a new demand, he just orders those products from his supplier. Things change when the owner starts to have more than one store, eventually ending up with a complete chain of stores like Woolworths or Sears. More employees become involved in the relationship with the customer, and that relationship needs to be maintained in different ways. How does the store owner reach his customers with the right information? Companies started to use advertisements in newspapers and to publish product catalogues. A big example is Sears — they can be considered the masters of the catalogue. Producing a catalogue took a long time: gathering all the product information, images, and prices, sending it to a publisher, and in the end distributing the catalogue to the customers. To get the products into the store, a complex supply chain was needed. This could be managed via an enterprise resource planning (ERP) system. Good, reliable product information is crucial for a smooth process. The ERP system was primarily meant to keep price, stock, and logistical information. It was not very well suited to keeping additional marketing product information, so this was managed outside the ERP. These two sets of information were disconnected from each other. As the catalogue was basically the only channel to be served with information, this was not a very big issue. In the 90s, e-commerce emerged. This was a game-changer in getting correct product data to the customer quickly. The demand for information from the customer grew. The pace of the information flow increased dramatically. A lot of product information needed to be available, because the customer didn't have the possibility to view and touch the physical product. So images, videos, manuals, etc.
needed to find a place in the organisation where they could be stored and retrieved. The information needed could not be stored entirely in the existing systems like the ERP, so spreadsheets were used for product data and file systems for product images. Many stakeholders became involved in managing this huge pile of information, and the process became uncontrollable. To manage the growing product information demands, and to manage the time people needed to organise this process, a centralised repository needed to be created. This is called a product information management system — PIM for short. It replaces the spreadsheet jungle. More work could be done with fewer errors and fewer people. Now I come to the point of architecture. How do all these touchpoints interact with each other? What is the logical sequence? And how do I avoid redundant steps in the process? From a historical point of view, the process looked like this: a product was created in the ERP, manually or automatically. From the ERP, the product is added to the PIM. In the next stage, the supplier adds additional information to the PIM, or it can be provided by specialised data pools. The PIM then serves all possible output channels, like the catalogue, the webshop, marketplaces, and increasingly also social media channels. A more modern approach is positioning the ERP behind the PIM. With this, the process of providing product data can be even more efficient. It doesn't need additional interaction with your suppliers, as in a traditional setup; data can be delivered in one go. Instead of producing product data yourself, suppliers and data pools deliver the product data. The PIM contains all the information needed by the various output channels — it is considered the single version of the truth, the system of record on which the whole organisation can rely. The ERP, in this case, is just another output channel. This approach has more advantages, especially when a long-tail strategy is used.
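The PIM-first architecture Chris describes can be sketched in a few lines of Python. This is a toy model with invented names and fields, not any real PIM product's API; it only illustrates the "single version of the truth" idea, with the ERP consuming from the PIM like any other output channel:

```python
# Toy sketch of a PIM-centric architecture: suppliers and data pools feed
# the PIM, and every output channel (ERP included) reads from it.
# All identifiers and fields are invented for illustration.

pim = {}  # sku -> product record; the "single version of the truth"

def ingest(sku, source, attributes):
    """Merge supplier- or data-pool-provided attributes into the PIM record."""
    record = pim.setdefault(sku, {"sku": sku})
    record.update(attributes)
    record.setdefault("sources", []).append(source)

def syndicate(sku, channel_fields):
    """Serve a channel (webshop, catalogue, ERP, ...) only the fields it needs."""
    record = pim[sku]
    return {f: record[f] for f in channel_fields if f in record}

ingest("SKU-1", "supplier", {"name": "Brake pad", "weight_kg": 1.2})
ingest("SKU-1", "data-pool", {"image": "brake_pad.jpg"})

webshop_view = syndicate("SKU-1", ["name", "image"])
erp_view = syndicate("SKU-1", ["name", "weight_kg"])  # ERP is just another channel
```

The point of the design is that data enters once, from suppliers and data pools, and every downstream system sees a filtered view of the same record instead of maintaining its own copy.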
It makes sense that not all product information ends up in an ERP, again saving the organisation a lot of work. So by applying a smart architectural setup, you are able to maintain more product data with fewer staff. Thank you. --- Q&A with Chris AHMET: Thank you very much, Chris — that was a wonderful presentation. We have some questions for you from the audience. How does good product data alleviate shortages in the supply chain? CHRIS: When you have good product data, you have fewer failures in getting the right information and the right products to the right place. For instance, in automotive, when you have spare parts, you need to know which parts fit in which vehicle. A PIM system is able to make relations between objects, and that makes it possible to have the right information in the end. AHMET: Thanks, Chris. Another question we have: is there a shortage of product data? Or is the bottleneck really in moving that data between silos, between different players within the supply chain? CHRIS: As you have heard, there is no shortage of data. There is a lot of data, and this data needs to be organised in a proper way. There is actually a shortage of people managing this data. You want to have a lot of data, because there is a lot of information — but you need smart people to organise it, and smart systems to help them do it. AHMET: I see. So, regarding what you have presented on enterprise resource planning, or ERP — does ERP address the people side, the human-resources side, of this shortage question, or more the product shortage question? CHRIS: The ERP is more about the logistical part of the organisation, and the finance part. The PIM is more the content part. So they are two different systems, and I always say they are married to each other — they need each other and can't do without each other. HR is a different area; that's not really ERP. You have other ways of managing that. --- AHMET: Thank you very much, Chris.
Our final speaker today is Dr. Ece Sancı. Ece is an assistant professor at the University of Bath School of Management. She received her PhD in industrial and operations engineering from the University of Michigan in 2019, and her research focuses on decision-making under uncertainty with applications in disaster relief and disruption risk mitigation. Today Ece will share a case study she conducted at the Ford Motor Company as part of her PhD work on supply disruption risk mitigation. So Ece — what are we going to learn from you today? --- ECE SANCI: Hello everyone. My name is Ece Sancı, and today I would like to present this study on mitigation strategies against supply disruption risk. Like Ahmet briefly mentioned, this is based on a research project I was involved in during the last year of my PhD at the University of Michigan, back in 2019. We were collaborating with Ford. This is my outline for today. First, I would like to start with a brief description of the problem environment we considered in the study. But I would like to spend most of my time describing the framework we developed to choose optimal mitigation strategies against supply disruption risk. And finally, I will end my talk with some conclusions we derived from a case study at Ford. This problem of selecting mitigation strategies against supply disruption risk is especially important in a just-in-time environment. Like Helen mentioned, just-in-time is the dominant management philosophy in the automotive industry. The fundamental idea behind just-in-time is to keep inventory levels to a bare minimum, because this has certain advantages — it helps reduce inventory costs, improves production efficiency, and identifies quality problems quickly. But implementation of just-in-time is only possible through very flexible suppliers, which can promptly respond to changing needs of just-in-time companies. This is why typically just-in-time companies develop very close relationships with their suppliers. 
Single sourcing is quite common among just-in-time companies, because single sourcing enables them to fully coordinate deliveries with their own production schedule. But low inventory levels coupled with single sourcing increase exposure to supply disruption risk. So in this study, we developed a decision support framework to choose the best mitigation strategy against supply disruption risk for companies operating with low inventory levels and a small supplier base. Here we considered specific characteristics of a car company. For example, a car company typically owns the tooling used by its suppliers, and typically this tooling level determines the capacity reserved from the supplier. This is why the first mitigation strategy we consider is to reserve backup capacity at the primary supplier by investing in additional tooling. The second one — I said single sourcing is the dominant strategy, but it is still an option to use dual sourcing. So the company can reserve capacity from the primary and secondary suppliers together. But this means the company commits itself to sourcing parts from these two suppliers regularly during business-as-usual periods, and if the secondary supplier is significantly more expensive, this can create a major cost burden. The compromise here is to pre-qualify the secondary supplier and only use them during disruption periods. Finally, I mentioned that these companies prefer to have very low inventory levels — but they can actually build up backup inventory using the time available before the launch of a new model or new programme. Typically this is not very long; for example, in the case study we considered, we assumed this is eight weeks. So they can keep up to eight weeks of inventory and use this backup inventory during disruption periods. The building block of our framework is a multi-stage stochastic program — a type of optimisation model we use in operations research.
We use this model to determine the optimal mitigation strategy for given time-to-recover and disruption-probability parameters. You may say, OK, but these are very difficult to estimate — it's very difficult to come up with a point estimate for time to recover and disruption probability. This is why we use these strategy graphs. First, we take the input from decision-makers: what ranges of time-to-recover and disruption probability they are interested in. For example, in this strategy graph, we have time-to-recover from two weeks to 20 weeks and disruption probability from 0% to 10%. We solve our model for each combination and then depict the optimal strategy on the graph. Looking at this graph: if we focus on the 0% region, that basically means there is no disruption risk. Only under this condition is using 100% regular capacity from the primary supplier, and doing nothing else, the optimal approach. But as soon as the disruption probability is more than 0%, we have to use a mitigation strategy. For example, if time to recover is less than or equal to eight weeks, it's optimal to use inventory mitigation. If it is between 8 and 16 weeks, we see a hybrid strategy — integrating inventory mitigation with 50% backup capacity from the primary supplier. If time to recover is more than 16 weeks, then depending on the disruption probability it's optimal to integrate inventory with pre-qualification, or to integrate inventory with 100% backup capacity from the primary supplier. The nice thing about using a strategy graph is that it eliminates the need to estimate these parameters with high precision. For example, if decision-makers agree that time to recover is at most eight weeks but they can't decide on the disruption probability, then regardless of disruption probability, inventory mitigation is optimal.
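The grid sweep behind a strategy graph can be illustrated with a heavily simplified Python sketch. The real study solves a multi-stage stochastic program at every grid point; here that step is replaced by stylised, invented cost formulas (the upfront costs, coverage weeks, and loss rate below are all made-up placeholders), so only the mechanics carry over, not the numbers:

```python
# Stylised strategy-graph sweep. Costs and coverage figures are invented
# placeholders; the actual study solves an optimisation model at each
# (time-to-recover, probability) grid point instead of these toy formulas.

STRATEGIES = {
    # name: (upfront mitigation cost, weeks of disruption it can cover)
    "regular capacity only":            (0.0, 0),
    "backup inventory":                 (2.0, 8),      # up to 8 weeks of stock
    "inventory + 50% backup capacity":  (5.0, 16),
    "inventory + pre-qualification":    (7.0, 10**6),  # covers any realistic TTR
    "inventory + 100% backup capacity": (9.0, 10**6),
}

LOSS_PER_UNCOVERED_WEEK = 12.0  # lost-production cost rate (assumed)

def expected_cost(name, ttr_weeks, p_disruption):
    """Upfront cost plus expected loss over the weeks the strategy can't cover."""
    upfront, covered = STRATEGIES[name]
    shortfall = max(ttr_weeks - covered, 0)
    return upfront + p_disruption * shortfall * LOSS_PER_UNCOVERED_WEEK

def best_strategy(ttr_weeks, p_disruption):
    return min(STRATEGIES, key=lambda s: expected_cost(s, ttr_weeks, p_disruption))

# Sweep the grid and record the arg-min strategy at each point,
# as in the strategy graph.
graph = {(ttr, p): best_strategy(ttr, p)
         for ttr in range(2, 21, 2)
         for p in (0.0, 0.02, 0.04, 0.06, 0.08, 0.10)}
```

Even with these toy numbers, the broad pattern from the talk emerges: doing nothing wins when risk is negligible, inventory alone tends to win for short recovery times at moderate probabilities, and hybrid strategies take over as time to recover grows.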
Imprecision can still be a problem when deciding between two neighbouring strategies, but our results from the case study show that the regret of using a neighbouring strategy because of an imprecise estimate tends to be very small. This, in a way, removes the burden of having very high-precision estimates. Finally, I would like to show you these heat maps to understand the effect of these strategies on cost. The first heat map shows the expected total cost when we don't consider any disruption risk and determine the optimal strategy based on that. In a way, here we just reserve 100% regular capacity from the primary supplier and do nothing — no backup capacity, no secondary supplier, no inventory, nothing else. At 0% disruption probability, this stands for our original cost, so this will be our reference cost — I cannot really reveal cost information here. But I want to show you: if there is actually disruption risk, for example when disruption probability is 10% and time to recover is 20 weeks, the cost increases dramatically. In fact, it is 3.55 times the reference cost. Instead, if we use the optimal strategies shown on the strategy graph — the ones generated by our framework — the cost increases by at most 5%. To conclude: from this case study we observed that relying only on the regular capacity of the primary supplier can significantly increase the expected total cost when disruption happens. For smaller values of time to recover, just holding backup inventory is itself an effective strategy. For larger values of time to recover, it is more appealing to integrate backup inventory with either backup capacity or pre-qualification. And finally, hybrid strategies are actually quite robust across all combinations of time to recover and disruption probability, and this removes the burden of estimating these precisely. Thank you for listening. One final slide — this is our paper, published last year.
If you are interested in Ford's approach to mitigating supply disruption risk, there is an earlier work by Simchi-Levi et al. They have a framework to identify critical parts, and this actually becomes an input for our paper — because in our paper we don't identify critical parts, we assume they are given to us. Thank you so much. --- Q&A with Ece AHMET: Ece, thank you very much — that was very interesting. Let me actually follow up on the papers that you have shown. So you have two aspects to this study, right? First, you use the previous method from the literature to identify which parts are the highest risk. Then you use the method you developed to determine the best individualised risk-mitigation strategy for each of those parts. ECE: Yes, correct. The earlier paper was published in 2015, and when we started this project in 2018 Ford already had this very detailed study — they showed us how they grouped parts into different categories and identified one group as their critical parts, saying they wanted to use these mitigation strategies especially for those parts. In our framework we considered only one critical part, so in a way you have to generate the strategy graph for each of the critical parts identified. AHMET: OK. While the results from your study combined with both of these works seem very impressive, I'm curious: what kind of input actually went into these models? Was that data already available and usable, or did you have to pick certain parameters that were difficult to collect? Basically, how applicable is the model you built to a practical case? ECE: Along with these identified critical parts, Ford also provided us with some market analysis conducted by their purchasing experts — for example, tooling cost and unit production cost, which are the costs quoted by the potential suppliers. These were important inputs into our framework; our model needed these cost figures.
They also said, OK, we want to consider a constant demand, and this is the demand value you can use — so demand for the part is another important input. But there are some other parameters we couldn't get from Ford, because some of these things were actually new — not yet in practice. For example, pre-qualification of a secondary supplier: in that strategy we assumed you don't want to commit to this supplier, you just want to pre-qualify them. But Ford hadn't done this before, so how much is it going to cost to pre-qualify? What is the upfront investment needed? We also assumed, for example, that this pre-qualified supplier cannot immediately start production right after the disruption — there needs to be some preparation time. How long is this time? These were all unknown, because Ford hadn't used this strategy before. We came up with some numbers and then conducted sensitivity analysis — for example, we assumed a pre-qualified supplier would be ready in six weeks, and then looked at what happens if it takes much longer than six weeks. This is how we settled all the inputs to the model. AHMET: Thanks, Ece. One more question. You conducted this study before the whole pandemic and chip shortage crisis — you developed a model in a pre-pandemic world. So I'm curious: are there any major causes of disruption that this mitigation strategy or model would not be able to capture? For instance, would it be able to capture what is happening right now? If not, could you adapt the model to the current problems? ECE: That's a very good question — it's actually one of the limitations of our paper. We started this project before the pandemic, in 2018. Back then, we only considered very extreme disruptions — we assumed there would be only one major disruption during the three-year life of the programme. Of course, the pandemic changed this. After the pandemic, we are observing multiple disruptions following one another.
Also, we assumed only one supplier — our primary supplier — would be affected by a disruption. In the literature it is typical to assume one reliable but expensive supplier and one unreliable but cheaper supplier, so we were following that common assumption. Since the pandemic, we have seen that multiple companies in the same sector, or even multiple sectors, can be affected by the same disruption. These assumptions are limiting, and we are definitely looking for ways to adapt our framework and improve our model to incorporate this post-pandemic world. AHMET: All right — thank you very much, Ece, for the wonderful presentation and the insights. With that, we can wrap up. Thank you very much to all our panelists, and to everyone in attendance today. We would love to hear your feedback on the event, and to hear what you'd like to see in the next Insightinar. Thank you all for attending. Goodbye.