Dillygence
Operational performance: 3 errors to avoid in your projects
Improving factory operational performance without window-dressing: flow diagnostics, digital baseline, and bottleneck-driven prioritization.

Operational performance in the factory: the 3 mistakes that drag down your operational excellence projects
In aerospace as in automotive, the same mechanism shows up: the gains announced at the start of a program melt away as soon as measurement drifts and data stops being controlled over time. The curves “improve” on dashboards, yet the actual throughput delivered doesn't increase.
Behind this gap, three mistakes keep coming back with worrying regularity.
The thesis is simple: the will to improve isn't enough without a factual diagnosis, strict prioritization, and continuous measurement. Shop-floor data is first used to arbitrate, not to look good in committee. Flow simulation and the digital twin help choose between options that “seem good,” but don't have the same system impact.
Key takeaway: improving operational performance requires a quantified baseline, levers selected for their effect on the system, then verification through indicators.
Define the performance to manage, not the performance to display
A factory can “run well” on the surface and still lose money. Operational performance refers to the ability of an industrial system to deliver throughput that meets expectations, with controlled lead time and cost, in a stable way. Everything else flows from that stability.
What the shop floor really measures, and what management reads (capacity, costs, cash, carbon)
The shop floor records breakdowns, scrap, cycle times, material shortages, queues. Management reads units shipped, margin, EBITDA (earnings before interest, taxes, depreciation, and amortization), working capital requirement, emissions. If the two worlds don't talk, actions stay local and gains just move around.
Don't confuse operations performance, operational excellence, and financial performance
Operations performance describes how flows run: bottlenecks, queues, variability, quality, pace. Operational excellence describes an approach: standards, problem solving and routines, often inspired by lean management. Financial performance translates the result: margin, cash, ROI. In many cases, lots of activity produces little financial effect if it targets non-constrained levers or shifts variability elsewhere.
The classic trap: OEE up, deliveries down
An OEE (Overall Equipment Effectiveness) increase can coexist with a drop in service level if the constraint sits elsewhere or moves with the mix. OEE measures availability, performance and quality of one piece of equipment, not end-to-end flow. Optimizing a non-constraint station tends to increase work-in-progress in front of the bottleneck: you “produce more” locally and wait longer at the constraint. The system can deteriorate despite good local KPIs.
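The trap can be made concrete in a few lines of Python. OEE is the product of its three components, and the numbers below are purely illustrative, not taken from any real line:

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness: product of its three components (each in [0, 1])."""
    return availability * performance * quality

# A non-constraint station improves its OEE from ~0.60 to ~0.72...
nominal_rate = 80.0                               # parts/hour at 100% OEE (illustrative)
before = nominal_rate * oee(0.80, 0.83, 0.90)     # ≈ 47.8 parts/hour of local capacity
after = nominal_rate * oee(0.90, 0.89, 0.90)      # ≈ 57.7 parts/hour of local capacity

# ...but deliveries are still capped by the bottleneck elsewhere in the flow:
bottleneck_rate = 42.0                            # parts/hour
print(min(bottleneck_rate, before))               # 42.0 before the "improvement"
print(min(bottleneck_rate, after))                # 42.0 after: deliveries unchanged
```

The local KPI moved; the system did not. Worse, the extra local output becomes work-in-progress queuing in front of the constraint.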
Mistake #1: optimizing “blindly” without a factual diagnosis
Optimizing without diagnosis is like prescribing treatment before the exam. Teams often intervene on what is visible or irritating. The hidden cost shows up later in lost months, misdirected investments, and organizational fatigue.
Misleading symptoms and root causes: why gut feel is expensive
A customer delay sometimes looks like a lack of capacity, when it actually comes from work-in-progress that's too high or badly positioned. A material shortage can come from an incorrect bill of materials or mis-set consumption parameters. A quality issue may stem from process variability exploding at the constrained station. Gut feel points to a symptom, rarely the root cause.
Variability hides in micro-stops, rework, priority changes, unstable settings and degraded sequencing. It creates waiting and queues, then saturates the bottleneck intermittently. When the constraint fights “minute to minute,” useful throughput tends to drop even if theoretical pace stays unchanged.
Flow audit and a digital baseline
A flow audit locates the constraint, quantifies work-in-progress by zone, and measures time variability. It delivers a value stream map, a histogram of cycle times, and a work-in-progress–lead-time curve. The initial reference sets a quantified starting point: throughput, lead time, work-in-progress, scrap, downtime, resource consumption.
A digital baseline often takes the form of a flow model fed with observed data. The model becomes a shared reference across production, industrial engineering, supply chain and finance. You debate assumptions and scenarios rather than perceptions, and you quickly see whether an action reduces lead time or just moves the problem.
Mistake #2: trying to fix everything at once—so nothing changes
A program often fails because it aims at “everything”: inventories, 5S, indicators, maintenance, planning, quality. The organization spreads itself thin, teams get exhausted, and flow doesn't move. Prioritization must follow impact on total throughput, not local convenience.
Prioritize by impact on total throughput, not local convenience
The useful question: which lever increases the system's useful throughput or reduces lead time without hurting quality? A non-constraint station may gain 20% with limited impact on total throughput, while a bottleneck that gains 5% can move deliveries. This reasoning also protects CAPEX: one more machine doesn't fix an incoherent release rule or untreated variability.
Reducing work-in-progress tends to act more effectively on lead time than increasing local pace, especially in unstable flows. Less work-in-progress means less waiting, fewer priority changes, and fewer orders “aging” along the route. The mechanism is simple: the bottleneck dictates throughput, work-in-progress dictates lead time.
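The mechanism the paragraph describes is Little's Law: at steady state, average lead time equals average work-in-progress divided by average throughput. A minimal sketch with illustrative numbers:

```python
def lead_time_days(wip_units: float, throughput_units_per_day: float) -> float:
    """Little's Law (steady state): average lead time = average WIP / average throughput."""
    return wip_units / throughput_units_per_day

# Illustrative numbers: 600 units of WIP flowing out at 30 units/day
print(lead_time_days(600, 30))   # 20.0 days

# Halving WIP halves lead time, since throughput is set by the bottleneck,
# not by how much work sits on the floor:
print(lead_time_days(300, 30))   # 10.0 days
```

No machine ran faster in the second scenario; only the amount of waiting work changed.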
Action portfolio: quick wins (fast gains), system workstreams, investments
| Action type | Shop-floor example | Deliverable | Proof indicator |
|---|---|---|---|
| Quick wins (fast gains) | Reduce changeover time at the constrained station | Setup standard + checklist | Median changeover time |
| System workstream | Release and scheduling rules | Written rules + frozen window | Plan adherence + lead time |
| Investment | Automate a bottleneck quality check | Business case + phased scenario | Useful capacity + scrap |
Reducing changeovers often sounds attractive but can rigidify sequencing and miss customer priorities. Grouping too many batches increases inventory. The right compromise depends on the mix, customer constraints and stock policies: it's tested rather than decreed.
Mini case: making bills of materials reliable and reducing inventory
What: a multi-part-number site suffers frequent shortages and excess stock, with unstable service level.
How: make bills of materials and consumption parameters reliable, then realign replenishment rules based on observed data.
Impact: a realistic 20% inventory reduction and a direct effect on working capital. Better execution sometimes starts with more accurate data in the ERP (enterprise resource planning) system.
Mistake #3: managing without measurement—so without proof
Without measurement, a transformation is hard to sustain. Without a baseline, you can't quantify gains or hold the line when urgencies return. Sustainable performance requires a short loop: measure, decide, verify.
From KPI to lever: link an indicator to a decision, then to a quantified gain
A KPI (Key Performance Indicator) must point to a decision—otherwise it stays decorative. A scrap rate triggers a process or material action. A throughput time triggers action on work-in-progress and the release rule, not necessarily a race for pace.
Scrap and rework consume bottleneck capacity: a reworked part comes back through, blocks space, and adds variability to sequencing. Even with decent OEE, useful throughput can drop if the station “works” on non-quality. That's why measuring the right indicator in the right place matters.
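The capacity drain from rework and scrap can be made concrete with a simplified model. The assumptions are loud and deliberate: at most one rework pass per part, every reworked part re-consumes a full bottleneck cycle, and the figures are illustrative:

```python
def useful_throughput(cycles_per_hour: float, rework_rate: float, scrap_rate: float) -> float:
    """Good parts per hour at the bottleneck under rework and scrap.
    Simplified model: each reworked part consumes one extra bottleneck cycle,
    and scrapped parts consume a cycle while yielding nothing."""
    parts_started = cycles_per_hour / (1 + rework_rate)
    return parts_started * (1 - scrap_rate)

print(useful_throughput(100, 0.00, 0.00))  # 100.0 — every cycle yields a good part
print(useful_throughput(100, 0.15, 0.05))  # ≈ 82.6 — the station looks just as busy
```

In both cases the station runs 100 cycles an hour and looks fully utilized; only the second case ships 17% fewer good parts.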
Dashboards and management routines
| Category | Indicator | Typical decision |
|---|---|---|
| Result | Service level | Arbitrate customer priorities and capacity |
| Result | Throughput time | Adjust releases and reduce work-in-progress |
| Lever | OEE at the bottleneck | Action plan on dominant stops |
| Lever | Scrap rate | Treat the quality root cause |
A useful routine defines frequency, format and an escalation rule. Daily, it handles short-term deviations and protects flow. Weekly, it arbitrates resources, maintenance and priorities. A deviation beyond a threshold triggers a decision, then a check on a set date.
Limiting releases is often a lever, but not a religion. Some material constraints, some long supplier lead-time environments, some inventory policies or a high level of variability require different logics. Deciding a release rule, documenting it and measuring its effect remain non-negotiable.
The 4 dimensions and 5 measurable operational objectives
Four dimensions interact and can work against each other if optimized separately. Holding them together requires a system reading centered on flow.
Quality: defects, scrap, rework and cost of poor quality. FPY (First Pass Yield) measures the share of parts conforming without rework. Low FPY at the bottleneck consumes capacity and reduces useful throughput.
Lead time: lead time (throughput time) measures the real promise; service level measures customer credibility. Reducing lead time often comes from reducing work-in-progress, especially if variability stays high.
Cost: productivity, material consumption, energy and variability-related costs. An unstable factory costs more, even if standards exist on paper.
Flexibility and safety: ability to absorb a product mix and rate changes without drift. A poorly controlled ramp-up often degrades both.
Five objectives cover most transformations:
increase useful capacity without overinvesting
reduce throughput time and work-in-progress to free up cash
improve quality by reducing scrap and rework
stabilize flow to meet customer lead times
reduce energy and material losses per good part
Each objective must include indicator, target, horizon and assumptions.
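One way to hold the "indicator, target, horizon, assumptions" discipline is to store each objective as a structured record that can be checked for completeness. The field names and values below are purely illustrative:

```python
# Hypothetical objective record; field names and values are illustrative.
objective = {
    "objective": "reduce throughput time and work-in-progress to free up cash",
    "indicator": "median lead time (days)",
    "baseline": 20,
    "target": 12,
    "horizon": "end of Q4",
    "assumptions": ["stable product mix", "release rule applied as written"],
}

# Reject any objective missing one of the four mandatory fields
required_fields = {"indicator", "target", "horizon", "assumptions"}
assert required_fields <= objective.keys()
```

An objective without all four fields is an intention, not a commitment that can be verified later.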
A 3-level management framework: executive, site, line
Effective management separates three levels, with limited indicators and explicit decisions. Consistent definitions matter more than tool sophistication.
| Level | Main indicators | Frequency | Associated decision |
|---|---|---|---|
| Executive | EBITDA, working capital requirement, service level, useful capacity | Monthly | CAPEX, OPEX (operating expenses), phasing, priorities |
| Site | OEE at the bottleneck, work-in-progress, lead time, scrap rate | Daily to weekly | Release, sequencing, maintenance, quality |
| Line | Main stops, FPY, bottleneck output, missing components | Per shift to hourly | Setup, first-level maintenance, quality escalation |
8-step method: start from flow, return to the standard
A robust method avoids improvisation and protects continuity. It starts from flow, identifies the constraint, then returns to a documented standard. Each step produces a concrete deliverable.
Map the value stream: describe sequence, times and waits; compare actual flow vs. theoretical flow station by station.
Identify the constraint and measure variability: spot the persistent queue; produce a Pareto of stop causes and a distribution of times at the bottleneck.
Establish a quantified baseline: set starting values with period, scope and calculation rules.
Prioritize root causes by system impact: link causes to impact on throughput, lead time, quality and cost through a matrix.
Test countermeasures via simulation and scenarios: compare scenarios with explicit assumptions; avoid irreversible decisions based on intuition.
Deploy without breaking production: phasing, switch thresholds and buffer capacity.
Standardize and document: work standards, checklists, management rules covering releases and exceptions.
Sustain: short routines, review calendar and maintenance of reference data.
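The Pareto of stop causes from step 2 can be produced directly from a simple stop log. The log below is hypothetical:

```python
from collections import Counter

# Hypothetical stop log at the bottleneck: (cause, minutes lost)
stops = [
    ("changeover", 45), ("micro-stop", 6), ("material shortage", 30),
    ("changeover", 50), ("micro-stop", 8), ("breakdown", 120),
    ("micro-stop", 7), ("changeover", 40), ("material shortage", 25),
]

# Aggregate lost minutes by cause
minutes_by_cause = Counter()
for cause, minutes in stops:
    minutes_by_cause[cause] += minutes

# Dominant causes first, with the cumulative share that drives prioritization
total = sum(minutes_by_cause.values())
cumulative = 0
for cause, minutes in minutes_by_cause.most_common():
    cumulative += minutes
    print(f"{cause:18s} {minutes:4d} min   {cumulative / total:6.1%} cumulative")
```

The point of the deliverable is the cumulative column: the first two or three causes typically carry most of the lost time, and they are what the countermeasures in step 5 should be tested against.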
Flow simulation and the digital twin: the discipline of proof
A simulation model converts local events into system effects on lead time, work-in-progress, capacity and costs. It shows that the constraint can move depending on product mix, sequence or load—not only based on the “slowest machine.” It makes persistent queues and domino effects between areas visible.
A model produces misleading results when input data is wrong, overly averaged, or when rules don't incorporate disruptions and real behaviors. Validation requires comparison over a historical period, then iterations. A useful model stays simple enough to be explained to teams and precise enough to separate two close options.
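A toy version of that discipline, assuming nothing about any specific simulation tool: a two-station line where both stations keep the same mean rates and only variability changes. The resulting drop in delivered throughput is exactly the kind of system effect a flow model makes visible:

```python
import random

def simulate_line(hours: int, upstream_rate: float, bottleneck_rate: float,
                  buffer_cap: float, cv: float, seed: int = 0) -> float:
    """Toy Monte Carlo of a two-station serial line with a finite buffer.
    Each hour, each station's output varies around its mean rate with
    coefficient of variation `cv`. Returns average delivered units per hour."""
    rng = random.Random(seed)
    buffer = 0.0
    delivered = 0.0
    for _ in range(hours):
        made = max(0.0, rng.gauss(upstream_rate, cv * upstream_rate))
        buffer = min(buffer_cap, buffer + made)      # blocking when the buffer is full
        capacity = max(0.0, rng.gauss(bottleneck_rate, cv * bottleneck_rate))
        out = min(buffer, capacity)                  # starvation when the buffer is empty
        buffer -= out
        delivered += out
    return delivered / hours

# No variability: the line delivers exactly the bottleneck's rate
print(round(simulate_line(10_000, 12, 10, buffer_cap=20, cv=0.0), 2))  # 10.0
# Same mean rates, high variability: delivered throughput falls below 10
print(round(simulate_line(10_000, 12, 10, buffer_cap=20, cv=0.5), 2))
```

Averaged inputs would hide the second result entirely, which is why a model fed only with mean times tends to be optimistic.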
Three industrial mini-cases: from KPI to EBITDA
| Situation | Method | Impact |
|---|---|---|
| Assembly workshop with massive work-in-progress and machines with decent OEE | Locate final inspection as the constraint; reduce rework via upstream sorting and stabilized settings | ~25% reduction in throughput time, lower work-in-progress and stabilized service level |
| Saturated site considering heavy CAPEX | Simulate a re-routing and sequencing that reduces changeovers at the constrained station | 8–12% useful capacity gain with limited investment; impact on service level and inventory verified |
| Line with unstable scrap rate and frequent restarts | Isolate dominant causes, lock process parameters and define an immediate reaction rule | Potential 30% scrap reduction, lower energy per good part and improved useful throughput |
The 5 traps that ruin gains—and their countermeasures
Local over-optimization that degrades global flow.
Countermeasure: manage to the bottleneck throughput and adjust releases based on real variability.
Too many KPIs, so no one decides.
Countermeasure: limit to 3–5 KPIs per level, link each KPI to a decision and an owner.
Unreliable shop-floor data, so biased trade-offs.
Countermeasure: measure observed data and maintain master data, especially routings and bills of materials.
Standards not held, so you return to the starting level.
Countermeasure: install short routines with escalation thresholds and audits of standard adherence.
Investments launched before sorting causes.
Countermeasure: test scenarios on a digital baseline, then commit CAPEX if system impact is confirmed.
Reading grid: “if your problem is X, look at Y, decide Z”
| If your problem is… | Look at… | Decide… |
|---|---|---|
| Chronic delays | Lead time and work-in-progress by zone | Adjust releases and protect the bottleneck, without breaking service level |
| Urgent CAPEX | Bottleneck throughput, dominant stops and variability | Test scenarios before investment, then phase |
| High inventory | Quality of bills of materials and parameters | Make data reliable, then realign rules |
| Unstable quality | FPY at the bottleneck and defect Pareto | Stabilize dominant causes, because rework and scrap consume capacity |
| Emergency costs | Plan adherence and priority volatility | Set sequencing and escalation rules, then control adherence |
Dillygence combines industrial expertise and the digital twin to convert a factual baseline into decision scenarios, then into quantified gains on capacity, lead times, costs and operational carbon footprint.
FAQ — Operational performance
What is operational performance?
Operational performance refers to an industrial system's ability to produce and deliver at the expected throughput, with controlled lead time, quality and cost, in a stable manner. It is proven through flow indicators, not isolated local results. It connects shop-floor decisions to business outcomes.
What are the 4 types of performance?
The four types cover quality, lead time, cost, then flexibility and safety. Quality addresses defects, scrap and rework. Lead time addresses lead time and service level. Cost addresses productivity and losses linked to variability. Flexibility and safety cover the ability to change mix and rate without drift.
What are the 5 operational performance objectives?
Five major objectives: increase useful capacity; reduce throughput time and work-in-progress; reduce scrap and rework; stabilize flow to meet customer lead times; reduce energy and material losses per good part. Each objective must include indicator, target, horizon and assumptions.
What are operational performance indicators?
Common indicators: OEE, FPY, service level, lead time, work-in-progress, productivity, scrap rate. A useful indicator triggers a decision and a verification. An effective dashboard limits the number of indicators per level.
How do you define operational performance objectives aligned with the strategy?
Alignment starts from a business objective, then translates into targets for capacity, lead time, quality, cost and cash. A quantified baseline sets the starting point and enables an explicit business case, with assumptions and sensitivity. Management can then arbitrate CAPEX, OPEX and risks with comparable elements.
How do you manage operational performance across multiple sites?
Multi-site management requires identical KPI definitions, normalization by product mix and the same data granularity. It requires a shared review cadence and an action portfolio prioritized by system impact. Comparison becomes reliable when each site measures the same thing and decides with consistent rules.
How do you standardize best practices to improve operational performance?
Standardization relies on work standards, written management rules and routines to hold the standard. It requires maintenance of reference data, especially routings and bills of materials. A practice becomes “best” when it shows a measured gain from a quantified starting point and remains sustained through short reviews and audits.


