Optimizing production: the local optimization trap

Optimize production without creating a “jumping” bottleneck: stop fake gains, manage overall throughput
When a resource's utilization approaches 100%, waiting times increase nonlinearly: a well-established result of queueing theory. Many plants keep improving production workstation by workstation as if flow were the sum of local indicators. Overall throughput doesn't follow, and work-in-process grows. To improve production, look first at the end-to-end flow, not just the machines.
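The nonlinearity can be illustrated with the textbook M/M/1 queue (an idealized model assuming Poisson arrivals and exponential service, used here for illustration, not the author's data):

```python
def mm1_wait_time(utilization: float, service_time: float) -> float:
    """Average time a part waits in queue before an M/M/1 workstation.

    Wq = rho / (1 - rho) * service_time, valid for 0 <= rho < 1.
    """
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return utilization / (1 - utilization) * service_time

# Waiting time explodes as utilization approaches 100%:
for rho in (0.70, 0.90, 0.95, 0.99):
    wait = mm1_wait_time(rho, service_time=1.0)
    print(f"{rho:.0%} utilization -> queue wait = {wait:.1f}x service time")
```

Going from 90% to 99% utilization multiplies the queue wait by eleven, which is why "fully loading" every workstation is a trap.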
A shopfloor rule of thumb: the harder you push a saturated workstation, the more WIP explodes elsewhere
Speeding up one workstation sends more parts downstream; if downstream capacity stays unchanged, queues appear and lead times increase. Queueing formalizes this: near saturation, variability creates disproportionate queues. Local improvements often move the problem without increasing shipped throughput.
The shopfloor tension: OEE up, on-time delivery down, cash tied up
OEE (overall equipment effectiveness) can improve on one workstation while OTIF (On Time In Full: delivered on time and complete) declines. WIP (work in progress) swells and cash is tied up in parts waiting. Performance management requires linking flow, quality, and finance.
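The OEE side of this tension follows from its standard definition, the product of availability, performance, and quality (the figures below are illustrative):

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Standard OEE definition: the product of three ratios, each in [0, 1]."""
    return availability * performance * quality

# Three 'decent-looking' factors still lose a quarter of the capacity:
print(f"OEE = {oee(0.90, 0.85, 0.98):.1%}")
```

The point of the article stands: even a genuine OEE gain on one machine says nothing about OTIF or the cash tied up in WIP.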
Takeaway: a local improvement is worth nothing until the end-to-end flow holds
The right question isn't “which workstation to speed up,” but “which overall throughput can we hold with what level of WIP.” The Theory of Constraints helps you reason in terms of flow. Simulation provides quantified trade-offs before committing to changes.
I. Defining production optimization: what you actually want to improve
Optimization targets higher shipped throughput, shorter lead time, stable quality, and lower costs, with explicit trade-offs. A plant can show “green” local indicators and still lose customer service if WIP rises. Management must connect capacity, yield, quality, and finance to optimize production at the system level.
Capacity, yield, service, quality: four objectives, one trade-off
Capacity indicates output potential per unit time. Yield measures resource efficiency. Service reflects the ability to deliver on time and complete. Quality avoids consuming capacity in rework.
Mini quantified example: when the shopfloor gains points and the customer loses days
A site gains 6 OEE points on an upstream workstation thanks to fewer micro-stops. Downstream keeps the same changeovers and the queue before the next workstation doubles. Lead time goes from 3 to 6 days with no increase in shipped throughput. The useful lever here: manage release rules rather than pushing machine speed.
II. The illusion of victory: the “jumping” bottleneck and the risk of local optimization
The bottleneck shift mechanism: you fix one workstation, you create a new barrier
A faster workstation increases the inflow to the next one; if it can't keep up, its queue forms and its perceived availability drops. Result: more initiatives, more WIP, little additional throughput. The answer is visibility on shipped throughput, not just local OEE.
Theory of Constraints: pragmatic, nuanced management
The Theory of Constraints, popularized by Eliyahu M. Goldratt in The Goal, proposes thinking in terms of flow. A system generally has one dominant constraint at a given time, even if several resources can become limiting depending on mix, disruptions, or management rules. The approach is to identify the most limiting constraint, exploit it, then subordinate the rest to optimize production at the plant scale.
Early effects of optimizing off-constraint: instability, unclear priorities, discredited KPIs
Priorities change more often, downstream rejects the surplus, and upstream keeps producing. KPIs become inconsistent when each area optimizes its local metric. Trust drops and decisions revert to intuition.
III. The dynamic bottleneck: product mix, variability, and shockwaves
Why product mix makes saturation migrate from one resource to another
Product A consumes machining; product B consumes assembly. A mix change shifts the relative load and moves the limiting resource. The bottleneck “jumps” because demand and sequencing vary.
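A toy load calculation (product names, volumes, and cycle times are illustrative assumptions) shows how the limiting resource follows the mix:

```python
# Minutes per unit, by product and resource (assumed figures).
times = {
    "A": {"machining": 6.0, "assembly": 2.0},
    "B": {"machining": 2.0, "assembly": 6.0},
}

def limiting_resource(mix):
    """Return the most loaded resource for a given mix {product: volume}."""
    load = {}
    for product, volume in mix.items():
        for resource, t in times[product].items():
            load[resource] = load.get(resource, 0.0) + volume * t
    return max(load, key=load.get)

print(limiting_resource({"A": 100, "B": 40}))  # machining-heavy mix
print(limiting_resource({"A": 40, "B": 100}))  # assembly-heavy mix
```

Swap the volumes of A and B and the saturated resource swaps too: the bottleneck "jumped" with no change to any machine.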
Factory Physics: utilization, variability, and delays
Factory Physics links utilization, variability, and delays via models. A workstation near saturation generates disproportionate queues; managing by averages hides this reality.
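One model Factory Physics popularized is Kingman's VUT approximation of queue time at a single workstation, which makes the utilization-variability link explicit (a sketch for intuition, not a substitute for a calibrated model):

```python
def kingman_wait(ca2: float, cs2: float, utilization: float, service_time: float) -> float:
    """Kingman (VUT) approximation of queue time for a G/G/1 workstation:
    Wq ~ V * U * T, where V = (ca2 + cs2) / 2 combines arrival and service
    variability (squared coefficients of variation), U = rho / (1 - rho),
    and T is the mean service time.
    """
    return (ca2 + cs2) / 2 * utilization / (1 - utilization) * service_time

# At 95% utilization, halving variability halves queue time,
# but the utilization term still dominates:
print(kingman_wait(ca2=1.0, cs2=1.0, utilization=0.95, service_time=1.0))
print(kingman_wait(ca2=0.5, cs2=0.5, utilization=0.95, service_time=1.0))
```

This is exactly why "managing by averages hides this reality": two workstations with the same average rate but different variability have very different queues.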
IV. Why solving one bottleneck can make things worse: WIP, lead time, and cash
What degrades in a chain: WIP, lead time, and cash
Speeding up upstream reduces its local queue but transfers congestion downstream. WIP rises: work-in-process ties up cash and increases working capital requirement. You pay for waiting, not for value.
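Little's Law ties these quantities together: in steady state, lead time equals WIP divided by throughput, so WIP that grows at constant throughput is pure added delay and tied-up cash (illustrative figures):

```python
def lead_time_from_littles_law(wip_units: float, throughput_per_day: float) -> float:
    """Little's Law in steady state: lead time = WIP / throughput."""
    return wip_units / throughput_per_day

# Same shipped throughput, double the WIP => double the lead time:
print(lead_time_from_littles_law(wip_units=300, throughput_per_day=100))  # days
print(lead_time_from_littles_law(wip_units=600, throughput_per_day=100))  # days
```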
Linking shopfloor and finance: when producing more sells nothing more
Shipped throughput depends on the real constraint, not the number of parts released. As long as OTIF remains degraded, the company absorbs delays and penalties. Shopfloor performance must translate into sold throughput, not pushed throughput.
V. 72-hour diagnosis: find the real bottleneck and prove it constrains throughput
A short diagnosis works if you observe the real flow and test a constraint hypothesis. Goal: identify the workstation that limits shipped throughput, not the one that “makes noise.” The protocol must be reproducible from one site to another.
Where to observe: follow one part from release to shipping, spot physical buffers, rework loops, and logistics interfaces.
What to measure: real cycle times and their dispersion, micro-stops, changeovers, quality rework.
What to extract: downtime and rate from the MES (Manufacturing Execution System), scrap, plan adherence, component availability.
Confirmation tests: recurring queue, high utilization rate, upstream/downstream blockages, direct link to shipped throughput.
When data is missing: shopfloor sampling over 2 to 3 shifts, validation with operators, documentation of assumptions.
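The confirmation tests above can be sketched as a simple filter over sampled shopfloor data; station names, metrics, and thresholds here are illustrative assumptions, not a prescribed standard:

```python
# Illustrative sample: one record per workstation from a 2-3 shift observation.
stations = [
    {"name": "machining", "utilization": 0.97, "avg_queue": 42, "blocks_downstream": True},
    {"name": "assembly",  "utilization": 0.80, "avg_queue": 6,  "blocks_downstream": False},
    {"name": "test",      "utilization": 0.88, "avg_queue": 15, "blocks_downstream": False},
]

def constraint_candidates(stations, util_threshold=0.90, queue_threshold=20):
    """Flag stations whose saturation and recurring queue suggest they
    limit shipped throughput (to be confirmed on the shopfloor)."""
    return [
        s["name"]
        for s in stations
        if s["utilization"] >= util_threshold
        and (s["avg_queue"] >= queue_threshold or s["blocks_downstream"])
    ]

print(constraint_candidates(stations))
```

A filter like this only produces hypotheses; the decisive test remains the direct link to shipped throughput, validated with operators.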
VI. Manage the bottleneck instead of suffering it: Drum-Buffer-Rope and release rules
Once the constraint is identified, subordinate everything else to its rhythm. The Drum-Buffer-Rope method sets simple release rules and protects the constraint, limiting WIP.
Drum: the constraint sets the pace; the plan starts from this resource.
Buffer: protects the constraint against disruptions; sizing comes from variability analysis and a service target.
Rope: links release to the constraint's rhythm to cap WIP; lead time often drops without CAPEX.
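A minimal sketch of the Rope as a release check, assuming a WIP cap and a constraint-buffer target sized elsewhere (names and thresholds are hypothetical):

```python
def release_decision(current_wip: int, wip_cap: int,
                     constraint_buffer: int, buffer_target: int) -> bool:
    """Rope rule (sketch): release a new order only if total WIP is under
    the cap and the buffer protecting the constraint is not already full."""
    return current_wip < wip_cap and constraint_buffer < buffer_target

print(release_decision(current_wip=48, wip_cap=50, constraint_buffer=8, buffer_target=10))
print(release_decision(current_wip=50, wip_cap=50, constraint_buffer=8, buffer_target=10))
```

The rule is deliberately dumb: its power comes from being applied at release, before parts exist, rather than by expediting parts already on the floor.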
VII. Decide what to do first: prioritize by impact
“If/then” rules: variability, changeovers, workstation imbalance, unstable quality
If downtime variability dominates → standardization and maintenance before automation.
If changeovers dominate → SMED (Single-Minute Exchange of Die) first; cutting changeover times by 20–40% is common.
If workstation-to-workstation imbalance dominates → line balancing to increase throughput without CAPEX.
If quality remains unstable → eliminate rework, often more profitable than adding resources.
Automation → useful when it removes dominant variability on the constraint, strongly reduces changeovers, or significantly increases usable capacity.
Automation: when it makes sense
Automation stabilizes flow if it addresses the real cause: variability on the constraint, frequent changeovers, or a real capacity need. It can also improve quality. Without prior standards and management rules, it risks freezing an unstable process and shifting problems.
How to recognize local optimization (with no global gain)?
Local OEE up, OTIF (on-time and complete delivery) down
OEE rises on a machine but OTIF stagnates or declines; shipments stay flat despite “high-performing” workshops on paper.
WIP up with no increase in shipments
Waiting areas grow and cash is tied up: a sign you need to revisit release rules, not just local speed.
Priorities changing constantly, scheduling “by shouting”
Urgencies crush the plan, instructions conflict, and everyone protects their workstation — a sign of a poorly identified constraint or poorly sized buffers.
More urgencies and downstream shortages
More missing parts, rework, and last-minute arbitration: downstream is out of sync with upstream.
Saturation moving from one workshop to another
The bottleneck varies by week or by shift, often driven by product mix, changeover times, or quality.
Unstable lead time despite local gains
Cycle times drop locally but lead time remains erratic: queues and variability dominate, and buffer protection is missing.
VIII. Indicators that prevent self-deception: link KPIs, bottleneck, and decisions
| Indicator | What it reveals | What it hides | Associated action |
|---|---|---|---|
| OEE (Overall Equipment Effectiveness) | Local machine losses | Queue effects on flow | Target losses on the constraint |
| WIP | Congestion level | Bottleneck location | Cap release via the Rope |
| Lead time | Real customer delay | Causes of extension | Reduce queues via Drum-Buffer-Rope |
| OTIF | Customer service and sold throughput | Source of delay | Link to the constraint and WIP |
| Scrap / rework rate | Hidden lost capacity | Hidden bottleneck in rework | Fix first-pass quality |
A relevant dashboard contains 8 to 12 indicators maximum, each tied to an action. It must make the constraint visible in the daily review; otherwise it misses the point.
IX. Digital twin and flow simulation: test 10 scenarios before executing one
The digital twin lets you test flow decisions without risking the shopfloor. Simulation integrates breakdowns, quality, and product mix to compare throughput, WIP, and lead time. It turns an opinion debate into a quantified trade-off.
The protocol: explicit assumptions, calibration on real data, degraded scenarios to validate robustness. For multi-site use, standardize KPI definitions and modeling conventions; the twin then becomes a portfolio tool for initiatives, not a showcase.
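A deliberately tiny flow model, nowhere near a digital twin, can still show the kind of trade-off a simulation quantifies: releasing faster than the constraint inflates WIP without shipping more (all rates and the cap below are illustrative):

```python
def simulate(periods, release_rate, constraint_rate, wip_cap=None):
    """Toy flow model: parts released upstream are shipped at the constraint's
    rate. Returns (shipped, final_wip). wip_cap=None means pure push (no Rope)."""
    wip = shipped = 0.0
    for _ in range(periods):
        release = release_rate
        if wip_cap is not None:
            release = min(release, max(0.0, wip_cap - wip))  # Rope: cap releases
        wip += release
        done = min(wip, constraint_rate)  # the constraint sets shipped throughput
        wip -= done
        shipped += done
    return shipped, wip

push = simulate(200, release_rate=1.1, constraint_rate=1.0, wip_cap=None)
rope = simulate(200, release_rate=1.1, constraint_rate=1.0, wip_cap=5.0)
print("push:", push)  # shipped throughput, final WIP
print("rope:", rope)
```

Both scenarios ship the same quantity; only the push scenario ends drowning in WIP. A real study would add stochastic breakdowns, quality, and mix, and calibrate against MES data.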
X. Three mini-cases with limits and lessons
Case 1: capacity recovered without extra m², then flow stabilized
| What | How | Impact | Limit |
|---|---|---|---|
| An assembly line lacked capacity. | Diagnosis: constraint on a test station; Drum-Buffer-Rope implemented and upstream buffer sized. | 10–25% usable capacity gain without additional m², WIP reduced. | Initial buffer too high: iteration to reduce WIP without losing throughput. |
Case 2: changeovers reduced, but rework underestimated
| What | How | Impact | Limit |
|---|---|---|---|
| A multi-SKU workshop suffered from changeovers. | SMED (Single-Minute Exchange of Die) reduced changeover times by 20–40%; scheduling grouped families. | Shipped throughput increased, lead time reduced, but rework absorbed part of the gain. | Underestimated rework required a second upstream quality initiative. |
Case 3: WIP reduced, lead time stabilized, release rules adjusted
| What | How | Impact | Limit |
|---|---|---|---|
| Site with decent OEE but unstable lead time and low OTIF. | Release rule to cap WIP by area; simulation to size buffers. | WIP reduced by 15–30%; lead time stabilized. | Initial rule too rigid for some mixes; adjustment by product family required. |
XI. The 5 expensive traps and their countermeasures
Optimizing off-bottleneck: surplus becomes a queue elsewhere. Countermeasure: identify the constraint, subordinate releases via Drum-Buffer-Rope, measure shipped throughput before/after.
Managing only OEE: high OEE doesn't guarantee OTIF. Countermeasure: add WIP, lead time, and shipped throughput to the dashboard; make the constraint live in the daily review.
Overstocking to “protect” the plan: WIP inflates. Countermeasure: size targeted buffers via simulation, limit releases with the Rope.
Automating before stabilizing: freezing an unstable process freezes its defects. Countermeasure: stabilize quality, standards, and maintenance; validate automation on scenarios.
Confusing machine speed with line throughput: speeding up one workstation doesn't guarantee line throughput. Countermeasure: measure shipped throughput and track bottleneck migration.
Dillygence combines industrial expertise and a digital twin to test flow scenarios, prioritize improvement initiatives, and convert every decision into measured gains on throughput, lead times, WIP, costs, and CO₂ emissions.
FAQ — Production optimization
What is production optimization?
Production optimization aims to improve shipped throughput, lead time, quality, and costs, with explicit trade-offs. It differs from local optimization by judging performance at the end-to-end flow level. It's measured via throughput, WIP, lead time, OTIF, OEE, and quality.
How do you identify and eliminate bottlenecks to optimize production (and when does the gain actually increase outputs)?
Identify the bottleneck by observing recurring queues and proving it limits shipped throughput. Confirm with a simple test: stopping this workstation reduces outputs, and improving it can increase outputs if the rest of the system can follow and management rules are adapted. Then eliminate losses on this workstation, subordinate upstream releases, and iterate, because the bottleneck can migrate with the mix.
How can you improve production day-to-day with a fast flow diagnosis?
Do a part walk, map waiting and rework areas, then measure a few real cycle times and micro-stops on the suspect workstation. Extract MES data, scrap, and plan adherence to corroborate shopfloor findings. Formalize a simple release rule and verify its effect on shipped throughput and WIP.
Which indicators should you track to continuously improve production?
Track shipped throughput, WIP, lead time, and OTIF to manage flow and service. Track OEE on the constraint to attack losses that remove usable throughput. Link each indicator to a threshold and a standard action to keep management stable.
How can you improve industrial production without degrading quality?
First treat scrap and rework causes on the constraint: quality consumes net capacity. Stabilize standards and maintenance to reduce variability. Increase speed only if downstream can follow, and validate via scenarios when mix varies.
How can you improve production by improving OEE and equipment availability?
Improve OEE by targeting the constraint: one minute lost on that workstation removes shipped throughput. Reduce unplanned downtime and micro-stops, then adjust release rules to prevent the gain from turning into WIP. The useful result: higher OEE on the constraint and lower lead time.
How can you improve production with a digital twin and flow simulation?
The digital twin compares layout, resources, release rules, and buffers without disrupting the shopfloor. Simulation integrates breakdowns, quality, and mix to estimate throughput, WIP, and lead time. A robust protocol includes explicit assumptions, calibration on real data, and shopfloor validation.
How can you improve production across multiple plants with a standardized approach?
Standardize KPI definitions, measurement rules, and scopes to compare without bias. Deploy an identical short diagnosis and then a flow model with common conventions. Manage a portfolio of initiatives by impact on throughput, lead time, quality, costs, and CO₂, with site-by-site validation.


