This Q&A first appeared in Data Science Briefings, the DataMiningApps newsletter, as a “Free Tweet Consulting Experience” — where we answer a data science or analytics question of 140 characters maximum. Want to submit your own question? Just Tweet us @DataMiningApps. Want to remain anonymous? Then send us a direct message and we’ll keep all your details private. Subscribe now for free if you want to be the first to receive our articles and stay up to date on data science news, or follow us @DataMiningApps.
You asked: More than any other industry, manufacturing executives put a high priority on drawing intelligence from their big data. Why do you think this is? Furthermore, manufacturers say they know they are hampered by inefficient data-gathering and management capabilities. What should manufacturers do to improve these abilities and expand analyst teams to leverage the power of analytics across their operations?
The advent of big data and analytics has triggered the need to discover hidden gems of customer behavior. Customers interact with a firm through a variety of channels — online, offline, social media, call center, etc. — thereby generating a massive trail of data ready to be analyzed and leveraged for competitive advantage. In a manufacturing context, this has huge implications. For example, by forecasting customer demand more accurately, safety stocks can drop substantially and the bullwhip effect can be mitigated, significantly reducing costs across the entire supply chain; these savings can in turn be translated into a more consumer-friendly end price. Even a marginal improvement in forecasting performance can yield huge cost savings. Hence, big data represents a key asset and a vast resource in spearheading competitive pricing.
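To make the link between forecast accuracy and safety stock concrete, here is a minimal sketch using the textbook normal-demand service-level model, where safety stock equals the service-level z-score times the standard deviation of demand times the square root of the lead time. The function name and the numbers are illustrative assumptions, not figures from the newsletter.

```python
import math
from statistics import NormalDist

def safety_stock(sigma_demand: float, lead_time: float, service_level: float) -> float:
    """Textbook normal-demand model: SS = z * sigma * sqrt(lead_time).

    sigma_demand: standard deviation of demand per period (forecast error)
    lead_time: replenishment lead time, in the same periods
    service_level: target cycle service level, e.g. 0.95
    """
    z = NormalDist().inv_cdf(service_level)  # z-score for the target service level
    return z * sigma_demand * math.sqrt(lead_time)

# A 20% reduction in forecast error (sigma 100 -> 80) cuts safety stock by 20%:
before = safety_stock(sigma_demand=100.0, lead_time=4, service_level=0.95)
after = safety_stock(sigma_demand=80.0, lead_time=4, service_level=0.95)
print(f"safety stock: {before:.0f} -> {after:.0f} units")
```

Because safety stock scales linearly with the demand uncertainty, every percentage point of forecast-error reduction translates directly into the same percentage of inventory that no longer needs to be held.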
To answer your second question, you mention two things that are indeed important. Let me start by zooming in on data gathering first. As mentioned before, data is collected across various channels and in various formats (e.g. structured versus unstructured). It is typically also stored in various locations and on various platforms across the business units of the enterprise. A consistent, corporate-wide data representation, available throughout the organization, is a key prerequisite to successfully start doing analytics. On top of that, appropriate tools should be put in place to make sure the data is of good quality. Just as an excellent cook cannot make a good dish with bad ingredients, a data scientist cannot build good analytical models with low-quality data. This is often referred to as the garbage in, garbage out (GIGO) principle. In a supply chain setting, this also implies mechanisms such as vendor-managed data sharing, where a company gets access to the data of its downstream supply chain partners to better forecast demand further down the supply chain.

Next, to facilitate agility and further reduce lead times, predictive analytical models should be operationally efficient, in the sense that small fluctuations in customer demand can be forecast both accurately and quickly, preferably in real time. Finally, customer behavior and demand are dynamically changing phenomena. Hence, predictive analytical models should be continuously backtested and monitored, and re-built when deemed necessary.

Obviously, this requires investment, and that is where management enters the game. For analytics to be successful, management must first be aware of the power of analytics and believe in its potential. Next, they should put in place the appropriate corporate governance mechanisms and policies to make big data and analytics part of the company’s DNA.
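The continuous backtesting and monitoring mentioned above can be sketched very simply: track a rolling forecast-error metric and flag the model for re-building when the error drifts past a tolerance. The metric (MAPE), the window size, and the threshold below are illustrative assumptions; any organization would tune these to its own demand patterns.

```python
def mape(actuals: list[float], forecasts: list[float]) -> float:
    """Mean absolute percentage error over paired actuals and forecasts."""
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

def needs_rebuild(actuals: list[float], forecasts: list[float],
                  window: int = 12, threshold: float = 0.15) -> bool:
    """Flag a forecasting model for re-building when its recent
    (last `window` periods) MAPE exceeds the agreed tolerance."""
    recent_error = mape(actuals[-window:], forecasts[-window:])
    return recent_error > threshold

# A model that is consistently 10% off stays in place;
# one that drifts to 20% off triggers a rebuild.
print(needs_rebuild([100.0] * 12, [90.0] * 12))  # within tolerance
print(needs_rebuild([100.0] * 12, [80.0] * 12))  # over tolerance
```

In practice this check would run automatically each period as new actuals arrive, so degradation is caught long before it shows up in inventory costs.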