Manufacturing Executive’s Series:
Using Value Stream Data and Tools to Drive Operational Excellence
Data is produced every day in every sector and industry of business, and this is especially true in manufacturing.
That data should be used to drive informed and effective decision making. If your goals as a manufacturing
executive include improving product quality, maximizing efficiency, achieving operational
excellence, and streamlining the value stream, it would behoove you to look at the data and listen to the story
it tells. Doing so can save you money, time, and your reputation as a manufacturer.
The following are steps that should be taken to compile and effectively utilize the data that is created every
day in your manufacturing company.
1. Use Statistical Tools to Determine Key Issue Root Causes – Statistical tools compile and translate raw manufacturing data, leading to more informed decision making. Common tools include:
Histograms – Used when determining output and frequency distribution. Outputs are broken down into groups known as “bins,” which can then be easily analyzed.
Pareto Charts – Used to determine the “critical few” factors that are causing most of the problems. This is useful not only in determining the root causes of key issues, but also in prioritizing which issues need to be addressed first in process improvement initiatives.
Measurement System Analysis – Used to analyze a measurement system and its accuracy. This helps to determine whether the problem lies in the process or product, or in the measurement system itself that is used to gather the data.
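As a rough illustration of the Pareto idea, here is a minimal sketch in Python. The defect causes and counts are entirely hypothetical; the point is the mechanic of ranking causes and picking out the “critical few” responsible for roughly 80% of the problems:

```python
from collections import Counter

# Hypothetical defect log: one entry per defect, tagged by cause.
defects = (["misaligned tube"] * 42 + ["scratched housing"] * 21
           + ["loose fitting"] * 9 + ["bad weld"] * 5 + ["wrong label"] * 3)

counts = Counter(defects).most_common()  # sorted, most frequent first
total = sum(n for _, n in counts)

# Walk down the sorted causes, accumulating their share of all defects;
# the causes needed to reach ~80% are the "critical few".
cumulative = 0
critical_few = []
for cause, n in counts:
    cumulative += n
    critical_few.append(cause)
    if cumulative / total >= 0.8:
        break

print(critical_few)  # the handful of causes to prioritize first
```

In practice the same counts would feed a Pareto chart (bars sorted by frequency plus a cumulative-percentage line), but the ranking logic is the same.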
2. Implement Quality Checks and Controls Throughout the Value Stream – This is especially important in processes with a wide range of internal buyers, and thus multiple points of potential failure. Errors can and should be caught early: it is much more expensive to fix errors once they reach the customer than while they are still with you, the supplier. A great deal of effort should go toward ensuring that only high-quality deliverables make it to each step of the value stream.
3. Implement Poka-Yoke (Error-Proofing) to Reduce Human Error – For example, a recent process improvement effort I led was at a manufacturer of fuel cells. The fuel cells required cooler tubes in a precise position so that water could flow through them to cool the product. If the cooler tubes were placed in the wrong position, the fuel cells would not function properly and could crack or break prematurely.
Root cause identification can be used to solve the following:
1. Variations
2. Long Lead Times
3. Low Process Yields
4. Process Waste
We created a fixture in the form of a conforming cooler tube (factoring in room for minor variability). This immediately reduced the number of non-conforming fuel cells that made it to the end of the process. This meant less time wasted on detecting and correcting defective products, and consequently more money saved. Cooler tubes that came from the supplier in a non-conforming shape were also sent back (Quality at the Source).
This did not completely eliminate human-related error, but it drastically reduced it, resulting in fewer defects and improving the bottom line by millions of dollars over the following months. There was also no additional cost after designing and ordering the initial fixture. Poka-yokes are low-cost methods of ingraining quality into the process, and are typically very easy to sustain after initial setup, making them an ideal solution.
4. Maximize Use of Data Already Available – A great deal of data is produced every day on a shift-by-shift basis. This data is often readily available and just needs to be used. Sometimes it is easy to find; other times you will have to go looking for it. Common places to find data include written OEE and metric sheets.
Sometimes the data is collected for other reasons and just needs to be repurposed.
Take, for example, a product going through the value stream. As it reaches each new step in the process, it is scanned by shop floor workers for tracking purposes and to ensure that quota is met. However, the scans also carry a time stamp. These time stamps can be used to determine key performance indicators (KPIs), such as the average time it takes for the product to get through the value stream. If one step is taking excessively long, then we know something is wrong, and that prompts us to take a closer look.
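To make the time-stamp idea concrete, here is a minimal sketch in Python. The step names, scan times, and the 2-hour threshold are all hypothetical; the point is that durations between consecutive scans give a per-step cycle-time KPI:

```python
from datetime import datetime

# Hypothetical scan log for one unit: (step name, scan time stamp).
scans = [
    ("receiving",  datetime(2024, 3, 1,  8, 0)),
    ("machining",  datetime(2024, 3, 1,  9, 30)),
    ("assembly",   datetime(2024, 3, 1, 13, 45)),
    ("inspection", datetime(2024, 3, 1, 14, 15)),
]

# Time between consecutive scans = time spent reaching each step, in hours.
durations = {
    scans[i + 1][0]: (scans[i + 1][1] - scans[i][1]).total_seconds() / 3600
    for i in range(len(scans) - 1)
}

# Flag any step exceeding an assumed 2-hour baseline for a closer look.
slow_steps = [step for step, hours in durations.items() if hours > 2.0]
print(durations)
print(slow_steps)  # steps worth investigating
```

Averaging these durations over many units gives the baseline mentioned below, against which future runs can be compared.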
Note that a baseline will most likely need to be established if data is being repurposed for the first time. When you are ready to revisit the data, you can reference this baseline as your starting point.
5. Identify Patterns to Drive Decision Making – Once information is compiled and translated, you can use it to make choices about changes that will optimize operations. Over time, patterns tend to reveal themselves, and you and your operations team can use this to your advantage when making changes to the process. Do manufacturing costs spike during a certain quarter every year? Is machine downtime steadily rising? Identifying patterns answers questions like these. Most importantly, decisions will be driven by data and metrics rather than a guess-and-check approach.
Final Points to Consider:
1. If you have not compiled any data, or the data is “bad,” start a controlled batch and monitor the output until you have enough data to get an idea of what is occurring in the process. From there, use that as your baseline.
2. Visualize the data through charts and graphs to get a better picture of what is occurring in the process.
3. Analyzing the data is not a one-time task; it needs to be performed regularly, not just when a problem arises. This creates a proactive rather than reactive culture in the company.
4. Collecting and translating data is no substitute for seeing and understanding the process. Go to the shop floor, speak with the shop floor personnel, and observe the process firsthand to understand how it actually works.