In today’s market, competition is fierce and customer expectations are high. The ability to exceed customer expectations is a key differentiator for manufacturers. Beyond direct customer feedback, which can be inconsistent and hard to collect, manufacturers need a built-in system that allows managers to objectively assess the customer’s experience from the available internal data.
Start by choosing a single metric to track and monitor.
At the outset, you likely have a sense of what factors might be hurting the customer experience. For example:
- Have complaints about late orders been increasing lately?
- Have finished goods been rejected or returned?
- Have lead times been increasing, causing longtime customers to start looking elsewhere?
While it might be tempting to try to measure every possible customer success metric, don’t try to boil the ocean. Instead, treat this effort as a minimum viable product (MVP) that lets the team get acquainted with using automated reporting to understand how their actions contribute to – or put at risk – customer satisfaction.
With that in mind, you might initially focus on a metric like on-time delivery (OTD). Given a hypothetical situation in which the plant manager has been fielding complaints related to late deliveries but doesn’t have a digestible report to validate how much of a concern truly exists, we’d choose this metric to highlight on a dashboard.
Find and collect data that is relevant to the metric you want to track.
Now that you have an initial goal, you can consider the available data pertinent to your efforts. Of course, this will vary considerably, based on how the organization is currently capturing data.
For example, some manufacturers will have shop floor data collection systems, which consistently capture data based on the production activities on the floor. Some manufacturers will have data captured through their ERP, whether it’s input manually or ingested automatically through an integration with a shop floor data collection system.
Ensure that you have a basic understanding of where the key data resides. It’s possible that you’ll need data from more than one system to support the metric you’re targeting. For example, you might need promise dates from your CRM, alongside actual ship dates from your ERP.
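To make that concrete, here’s a minimal sketch in Python with pandas of what consolidating those two sources could look like. The file names and columns (order_id, promise_date, ship_date) are hypothetical stand-ins for whatever your CRM and ERP actually expose:

```python
import pandas as pd

# Hypothetical exports: promise dates from the CRM, actual ship dates from the ERP.
# File and column names are illustrative; substitute whatever your systems expose.
crm = pd.read_csv("crm_orders.csv", parse_dates=["promise_date"])   # order_id, customer, promise_date
erp = pd.read_csv("erp_shipments.csv", parse_dates=["ship_date"])   # order_id, ship_date, qty_shipped

# Join the two sources on the shared order identifier, so each order carries
# both the date that was promised and the date it actually shipped.
orders = crm.merge(erp, on="order_id", how="left")

# Orders present in the CRM but missing from the ERP show up with NaT ship
# dates -- an immediate data-quality signal worth investigating.
print(orders[orders["ship_date"].isna()])
```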
Once you understand where the datasets reside, speak with line managers and other knowledge workers to identify any limitations of the data (a few quick checks are sketched after this list). For example:
- How complete is the data? Are key fields frequently missing?
- How far back does the history go (longitudinal coverage)?
- Are there any data points that we’re not capturing at all?
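Those questions can be answered quantitatively before you commit to a metric. Continuing the hypothetical orders frame from the sketch above, a few quick profiling checks in pandas might look like this:

```python
# Completeness: what share of orders is missing a promise or ship date?
missing = orders[["promise_date", "ship_date"]].isna().mean()
print(f"Missing promise dates: {missing['promise_date']:.1%}")
print(f"Missing ship dates:    {missing['ship_date']:.1%}")

# Longitudinal coverage: how far back does usable history go?
print("Earliest promise date:", orders["promise_date"].min())
print("Latest promise date:  ", orders["promise_date"].max())

# Uncaptured data points: is anything (e.g., carrier, service level) absent entirely?
print("Columns available:", list(orders.columns))
```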
Clearly define the metric and incorporate that information into the data model.
Now that you’ve gathered the raw data (and, hopefully, are using a modern data lakehouse instead of Excel files and CSV exports), your next task is to clearly define the metric.
If you’re reading this guide, you probably don’t need a definition of on-time delivery! Still, beyond the basic definition, keep a couple of things in mind:
- Consider an all-or-nothing OTD metric, meaning partial shipments don’t count toward the overall figure. If five out of seven units ship on time, the on-time KPI for that order should be 0%, not 71.4%.
- Account for ship dates, and treat “shipped on time” as shipped early enough to arrive using standard shipping rates. Ultimately, OTD is about when the customer receives the order, not when it ships. If orders ship late and require priority delivery, OTD might look great while the expediting quietly adds cost.
- While some view OTD as a “production KPI,” it’s really an indicator of how the entire plant is functioning. It’s a great starting point, but measuring the specific inefficiencies, whether in production, sales, or outbound logistics, will require broadening your set of metrics.
- And just in case you need it: On-Time Delivery = (# of orders shipped on time / total orders) * 100
Once this is complete, the metric can be codified in the data models within the data platform (or within a Power BI file, for example), and you can automate the process of pulling the raw data from source systems, applying the metric definition, and consuming the insights through a dashboard.
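As a sketch of what “codifying the metric” can look like, here is the all-or-nothing OTD definition expressed in pandas, continuing the hypothetical schema above (in a real deployment this logic would live in the platform’s transformation layer or a Power BI measure):

```python
# Each row is one shipment line; a single order may ship across multiple lines.
# Missing ship dates compare as False here, i.e., they count as late.
orders["line_on_time"] = orders["ship_date"] <= orders["promise_date"]

# All-or-nothing: an order is on time only if *every* line shipped on time.
# A partially late order scores 0, not a prorated percentage.
order_on_time = orders.groupby("order_id")["line_on_time"].all()

# On-Time Delivery = (# of orders shipped on time / total orders) * 100
otd = order_on_time.mean() * 100
print(f"On-time delivery: {otd:.1f}%")
```

Expressing the definition once, in one place, keeps every downstream report consistent: there is a single source of truth for what “on time” means.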
This is why we said “hopefully” above regarding a modern data lakehouse: without the right data architecture, someone will need to manually update the source data every time you want an updated dashboard. Is it possible? Yes. But it is an arduous, time-consuming, and fragile process prone to human error.
The ultimate value of this entire expenditure of time and energy is to consume the insights and take action – and just as rote tasks are automated in production processes whenever possible, you should apply the same automation to data.
Design a dashboard that will let your team know how and when to take action.
With the data in place, turn your attention toward designing a dashboard. But what value would a dashboard provide if it only showed an overarching OTD percentage?
Instead, you must consider how to integrate this dashboard into the pre-existing workflow. It should be built around its users (the plant manager and line managers, for example) and their natural flow of analysis and action. First, bring awareness at the macro level: did OTD go up or down compared to yesterday, or last week? Second, dig a layer deeper to see which customers were impacted. Third, dig further to see which managers oversaw the late orders. Finally, take action: determine the root cause of the late delivery, address any unforced errors, and potentially contact the customer directly.
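A minimal sketch of the drill-down behind that flow, again under the same hypothetical schema (here assuming customer and line_manager columns exist on the orders frame):

```python
# Collapse to one row per order, keeping its customer, responsible manager,
# and the all-or-nothing on-time flag computed earlier.
per_order = (orders.groupby("order_id")
                   .agg(customer=("customer", "first"),
                        line_manager=("line_manager", "first"),
                        on_time=("line_on_time", "all")))

# Macro level: did overall OTD move versus the prior period?
print(f"Overall OTD: {per_order['on_time'].mean() * 100:.1f}%")

# One layer deeper: which customers were impacted by late orders?
late = per_order[~per_order["on_time"]]
print(late["customer"].value_counts())

# Deeper still: which managers oversaw those late orders?
print(late["line_manager"].value_counts())
```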
With this approach, you can start to see how dashboards create value. They shouldn’t be thought of as simply “reports,” but rather as analytical tools that use data to propel actions and decisions, which in turn reinforce profitable decisions and actions.
But again, the goal is to showcase that potential – with a right-sized success story. Staying disciplined and basing a dashboard around an achievable metric allows you to prove the business value of the initiative, which is paramount. Otherwise, if you do try to boil the ocean, the company will wait three-plus months for any sort of dashboard to interact with or insights to surface. As crazy as that sounds, we’ve seen countless companies fall into this trap.
Monitor your team’s use of the dashboard and iterate on the design to improve its impact.
Now that you have an automated dashboard, it’s time to collect feedback and practice using the new tool to improve business results. There are various ways to monitor usage statistics on dashboards, but informal feedback is just as critical. After releasing a new dashboard, take time to talk with users to understand how they’re using it, what they like about the layout, and what they’d like to see included.
Developing business intelligence isn’t a one-time effort. It’s a continual process of analyzing how the business is leveraging available data to focus teams on the key metrics that determine profitable growth. Priorities and key initiatives change, and sometimes metric definitions evolve to capture more nuances. As priorities shift, the specific needs for data shift as well.
Once you’ve perfected the automated measurement of the business at a macro level (e.g., measuring overall OTD), the team will want to examine the key drivers within specific functional areas as well. That could mean detailed dashboards for production, outbound logistics, and even sales reporting to monitor promise dates and demand.
Our guiding principles for data projects.
Data is a fundamental element in improving processes, whether it be spotting and monitoring bottlenecks, or improving customer loyalty. Surprisingly, while companies usually have the data available, they aren’t channeling it effectively to drive improvements.
As this guide highlights, it’s important to keep a short list of guiding principles in mind:
- Start small – prove out a use case and build on it to automate more insights.
- Invest in a proper data platform to provide a foundation for automated analytics.
- Remember that dashboards are built for humans to consume data – what you develop shouldn’t overwhelm or distract from the key insights that drive improvements.
- Measure your efforts and collect feedback, then use it to iterate. Dashboards should be considered living and breathing entities, and a solid feedback loop with end users will promote adoption.