Dillygence
Industrial Ramp-up: Steering Production Scaling Without Excel
Production Ramp-up: Reduce lead times by modeling variability, breakdowns, and queuing, rather than relying on static calculations.

Industrial ramp-up and the ramp-up rate: the Excel illusion
In corporate spreadsheets, 94% of complex files contain errors. During a ramp-up phase, that "detail" can push the target back by several weeks, or even several months. Yet ramp-up is still often managed in Excel, as if production were just an addition of capacities. Takeaway: a "plausible" file can get very expensive as soon as the real system reacts with thresholds, queues, and variability.
Industrial ramp-up refers to the period when a line, a cell, or a plant increases its output up to the contractual rate, while meeting quality, cost, and delivery targets. It includes operator learning, process stabilization, supply robustness, and controlling day-to-day disruptions. If you don't know when the rate is truly met—under which product mix and at what risk level—you're not managing: you're betting.
A launch is not just a planning topic: it is a flow dynamic. Breakdowns, scrap, setups, inspections, and supply disruptions don't add up; they multiply. A simulation model makes variability manageable instead of discovering it too late.
I — The “war chest”: why spreadsheets seem unbeatable during launch
A universal language that reassures every management layer
Spreadsheets feel reassuring because they speak to everyone, from the team leader to the CFO. They turn complex industrialization into boxes, milestones, and color codes, readable instantly in a steering meeting. They also make it easy to “show” a trajectory: weekly volumes, planned hours, headcount, investments. But that simplicity hides a hard truth: the file describes an intention, not the real behavior of the shop floor.
In many plants, the spreadsheet even becomes the decision interface: columns are negotiated, assumptions debated, then a plan gets “approved.” The problem isn't the tool—it's the use: people start confusing internal consistency of the document with physical feasibility.
Maximum flexibility: add training, scrap, and milestones in minutes
During launch, parameters change fast: operators in training, evolving routings, extra inspections, unexpected rework. The spreadsheet adapts in minutes: a “scrap” tab, a “training” column, a tweak to OEE (Overall Equipment Effectiveness). That responsiveness creates the impression of fine-grained control, almost day by day. In reality, it encourages stacking local rules and exceptions with no global coherence.
Over time, the file becomes a compromise between multiple “truths”: production, industrial engineering, finance, sales. No one is lying, but the model is no longer a model: it's a negotiation encoded in cells.
The psychological trap: green cells don't prove real capacity
Seeing a plan turn green creates a sense of control: “we made the launch work” on paper. But the color validates a formula, not physical capacity or process stability. A spreadsheet-feasible plan can create real saturation: queues, exploding WIP, priorities changing every hour. Green confirms a static calculation, while ramp-up requires dynamic validation.
Mini case study (what / how / impact): an assembly line targets 120 units/day in week 6, shown “green” with an average OEE of 75%. On the shop floor, a single test station suffers micro-stops and drops to 60% availability for 3 days; the buffer is consumed in half a day. Result: a 2‑week delay and time wasted on partial shipments, while the plan looked clean.
II — The ticking time bomb: when Excel creates errors and blind spots
94% of complex spreadsheets contain errors: direct impact on performance
Multiple studies on spreadsheet reliability cite up to 94% potential error rates: wrong formulas, overwritten cells, broken references. During ramp-up, a wrong value for scrap, cycle time, or machine availability can shift the break-even point and distort cash needs. The biggest risk is propagation: one cell feeds the budget, then the staffing plan, then customer commitments. Result: you manage a fictional trajectory—until the operational crash.
Quick test: have two teams rebuild the same file from the same assumptions. If results differ, it's not an opinion debate—it's a management risk.
Cost and schedule overruns in industrialization are documented, notably by Deloitte (Global Automotive Supplier Study) and several McKinsey experience reports. The pattern: early instability degrades margin and burns cash via poor quality, overtime, and premium freight. One month late doesn't cost “one month”—it triggers systemic disruption.
The linearity bias: non-linear flows, moving bottlenecks, and queues
Excel nudges teams toward linear assumptions: add one hour, get one hour of output; add one machine, get additional capacity. Real flows behave differently: near saturation, a few minutes of disruption can create a disproportionate queue. Queueing theory explains it: as resource utilization approaches 100%, waiting time explodes. A workshop doesn't “plateau” gently; it flips.
Another classic blind spot: the bottleneck moves. It changes with product mix, setups, operator maturity, or a concentration of failures on a resource. A spreadsheet adds capacities station by station, while a real system behaves like a network with thresholds.
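The "waiting time explodes near 100% utilization" effect can be made concrete with the standard single-server queueing formula (M/M/1). This is an illustrative sketch with assumed numbers, not data from any plant: a station with a 5-minute cycle, loaded at increasing utilization.

```python
# Illustrative M/M/1 sketch (assumed numbers): average queue wait as a
# function of utilization. Near saturation, wait time explodes non-linearly,
# which a linear capacity addition in a spreadsheet cannot show.

def avg_wait_minutes(cycle_time_min: float, utilization: float) -> float:
    """Average time a part waits before an M/M/1 station: Wq = rho / (mu - lambda)."""
    mu = 1.0 / cycle_time_min       # service rate (parts per minute)
    lam = utilization * mu          # arrival rate implied by the utilization
    return utilization / (mu - lam)

for u in (0.70, 0.85, 0.95, 0.99):
    print(f"utilization {u:.0%}: avg queue wait = {avg_wait_minutes(5.0, u):.0f} min")
```

Going from 70% to 95% utilization does not add 36% to the wait: it multiplies it roughly eightfold. That is the threshold behavior the article describes: a workshop does not "plateau" gently, it flips.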
Single Point of Failure: when the expert leaves, the model becomes unusable
The ramp-up spreadsheet often ends up depending on one person who knows the macros, exceptions, and cells “not to touch.” That's a Single Point of Failure: when the expert leaves, the logic disappears, even if the file remains. Maintenance becomes impossible: each change breaks something else, and no one dares to fix it. Result: the company keeps a critical object, but one that cannot be maintained.
In industry, that risk isn't theoretical: it shows up when the project team changes, when production shifts to serial rhythm, or when a customer imposes a rate change. The model can't keep up, so decisions get made “by gut feel,” with an Excel façade.
OEE and fragile assumptions: a small gap is enough to shift the load plan
OEE (Overall Equipment Effectiveness) combines availability, performance, and quality. It's useful, but highly sensitive to collection assumptions and scope: do you include micro-stops, changeovers, quality stops, rework? A few points' difference, over a short period, changes hours, shifts, and above all WIP. Excel often treats OEE as a constant, while it varies by product, by shift, and by stabilization phase.
Mini case study: an assumed "OEE = 80%" is used to size a cell for 2 shifts instead of 3. Over 4 weeks, quality drifts during learning, real OEE oscillates between 68% and 72%, and backlog accumulates silently. The plant then adds overtime, but too late: the customer delay is already there, and fatigue increases scrap.
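The shift-sizing sensitivity can be sketched with back-of-the-envelope arithmetic. All numbers here are hypothetical (demand, cycle time, shift length), chosen only to show how a few OEE points can add a whole shift.

```python
# Hypothetical sizing sketch: the same weekly demand needs 2 or 3 shifts
# depending on whether OEE is 80% (the planning assumption) or 68% (observed).
import math

def required_hours(units: float, ideal_cycle_min: float, oee: float) -> float:
    """Loaded production hours needed to make `units` at a given OEE."""
    return units * ideal_cycle_min / 60.0 / oee

weekly_demand = 840     # units/week (assumed)
ideal_cycle = 4.0       # minutes per unit at nameplate speed (assumed)
shift_week_h = 5 * 7.5  # net hours in one shift-week (assumed)

for oee in (0.80, 0.72, 0.68):
    h = required_hours(weekly_demand, ideal_cycle, oee)
    print(f"OEE {oee:.0%}: {h:.1f} h/week -> {math.ceil(h / shift_week_h)} shifts")
```

At 80% OEE the plan fits in 2 shifts and turns green; at the observed 68-72%, the same demand needs 3. Treating OEE as a constant hides exactly this cliff.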
III — The cost of approximation: customer disruptions and idle CAPEX
Under-sizing teams: machine saturation and late deliveries
When labor is underestimated, ramp-up derails even if machines look available. Changeovers stretch, quality drifts, inspections become bottlenecks, and flow locks up. The real cost isn't only the delay, but the emergency catch-up: penalties, premium freight, internal trade-offs.
In automotive, aerospace, or rail, increasing production rate is not just “machine capacity.” Traceability, compliance, and documentation consume human time and create friction invisible in plans.
Financially, reducing the issue to shifted revenue is a mistake. Costs stack up: overtime, rework, scrap, atypical logistics, then penalties. AIAG (Automotive Industry Action Group), a reference body in the automotive supply chain, formalizes via APQP (Advanced Product Quality Planning) a simple idea: planning quality and process robustness upstream (requirements, FMEA, control plan, process/product validation) costs less than fixing downstream. Put differently: what isn't anticipated during ramp-up comes back later—more expensive.
Reactive investments: buying capacity in the wrong place
A poorly identified bottleneck pushes teams to buy capacity “where it screams,” under pressure from a committee or a customer. Adding a machine on a non-limiting operation creates idle CAPEX: the asset runs, but it doesn't unlock overall throughput. Reactive investment also adds integration costs: installation, qualification, training, maintenance, spare parts. The launch becomes a budget escalation with no rate gain.
Mini case study: a plant adds an assembly station to “catch up” on delays, while the real bottleneck is a test bench with highly variable cycle time. The new station increases WIP, not shipments, and the area becomes congested. Impact: tied-up capital, saturated space, and increased operational stress.
Physical congestion from WIP: understanding WIP dynamics
WIP (Work In Progress) increases when inbound throughput exceeds outbound throughput, even if each station “works.” Accumulation takes space, slows handling, increases search time, and degrades safety. Many workshops then live a paradox: the more parts you push in, the fewer parts you ship. Excel rarely models this physical congestion and its feedback loops on cycle times.
A simple indicator helps discussion: internal lead time versus WIP. When WIP rises and lead time explodes, you don't lack work—you lack flow.
To remove ambiguity, one framework gets consensus because it's measurable on the floor: Little's Law, formulated by John D. C. Little. It links three variables with a simple equation: WIP = throughput × lead time. If target throughput increases without reducing lead time, WIP rises mechanically. The issue isn't motivation—it's flow physics.
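Little's Law makes the WIP consequence of a rate increase a one-line computation. The numbers below are assumed round figures for illustration.

```python
# Little's Law sketch (assumed numbers): WIP = throughput x lead time.
# Raising target throughput without cutting lead time mechanically raises WIP.

def wip(throughput_per_day: float, lead_time_days: float) -> float:
    """Average work-in-progress implied by Little's Law."""
    return throughput_per_day * lead_time_days

print(wip(100, 5))         # 500 parts on the floor at 100/day, 5-day lead time
print(wip(120, 5))         # 600 parts at 120/day: +20% WIP, same lead time
print(f"{500 / 120:.2f}")  # lead time (days) needed to hold 500 WIP at 120/day
```

To ship 120/day without letting WIP grow past 500 parts, internal lead time must drop from 5 days to about 4.2. That is the "flow, not motivation" point in quantitative form.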
IV — Getting out of cell hell: moving to dynamic flow simulation
What static calculations can't see
Flow simulation (often “discrete-event”) represents variability: breakdowns, cycle time dispersion, supply disruptions, operator learning, rework. It shows how a short incident, at the wrong time, with low buffer, triggers a lasting queue. It also enables testing operating scenarios: lot sizes, priority rules, stock levels, changeover organization. Decisions are then made from observed behavior in the model, not from an average.
Research point worth using: production systems behave like interdependent networks where variability propagates. Reducing variability at the source (process stability, SMED, quality) can have more effect than “adding capacity” in the wrong place.
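A minimal discrete-event sketch (stdlib only, assumed parameters) shows the core phenomenon: a station whose average capacity exceeds demand still builds a queue once cycle times vary. A static calculation with the same averages predicts zero waiting.

```python
# Toy discrete-event sketch: one station fed every 5 minutes, with a variable
# cycle time averaging 4.5 minutes (90% utilization on paper). Variability
# alone creates queue time that a static average-based calc predicts as zero.
import random

random.seed(42)
ARRIVAL = 5.0       # a part arrives every 5 min (deterministic feed)
MEAN_CYCLE = 4.5    # average service time in minutes (assumed)
SIM_MIN = 8 * 60    # simulate one 8-hour shift

next_arrival = 0.0
station_free_at = 0.0
queue_waits = []

while next_arrival < SIM_MIN:
    start = max(next_arrival, station_free_at)      # wait if station is busy
    queue_waits.append(start - next_arrival)        # time spent queuing
    cycle = random.expovariate(1.0 / MEAN_CYCLE)    # variable cycle time
    station_free_at = start + cycle
    next_arrival += ARRIVAL

print(f"parts: {len(queue_waits)}, "
      f"avg queue wait: {sum(queue_waits) / len(queue_waits):.1f} min "
      f"(static calc predicts 0)")
```

Real simulation tools add breakdowns, multiple stations, buffers, and priority rules on top of this mechanism, but the lesson is already visible here: averages hide queues.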
Digital twin: visualizing bottlenecks before they appear on the shop floor
A digital twin makes emerging bottlenecks visible, along with their causes and their consequences on throughput, WIP, and lead times. The value isn't limited to “seeing”: it's about measuring the impact of a scenario and comparing alternatives using shared metrics. That visualization aligns production, industrial engineering, and finance, because everyone looks at the same system and the same assumptions. The model becomes an arbiter: fewer positional debates, more tested decisions.
Concretely, a digital twin can integrate data from MES, ERP, and other plant systems, and it can be updated throughout the launch as real data replaces assumptions.
Validate the target rate with real product mix and shop-floor constraints
A target rate only makes sense if it holds with the real product mix: setup times, inspections, material variability, customer requirements, operator profiles. Simulation tests ramp-up under these conditions, with resource constraints, shifts, maintenance, and internal logistics. It can show that a target is achievable—but with a different phasing: progressive ramp, temporary extra shift, or buffer adjustments. Industrial ramp-up stops being a wish; it becomes a quantified validation.
A useful committee tool: a probable rate curve (with an interval), rather than a single “target” number. Industry lives in distributions, not perfect values.
Operational deliverables: dynamic saturation, ramp-up curve, contingency plan
A simulation project produces deliverables usable in steering meetings and actionable on the shop floor: a dynamic saturation report (constrained resources, load windows, threshold effects), a ramp-up curve based on tested scenarios, and a contingency plan built on sensitivity analysis.
Based on studies and industrial experience reports cited by McKinsey, modeling and simulation can reduce stabilization time by 15–30%, especially during ramp-up. The gain comes from upstream decisions that avoid costly shop-floor iterations (buffers, priorities) and focus effort on what truly drives performance.
Examples of practical deliverables, easy to use:
- list of dominant constraints (resource, cause, impact on throughput and lead time)
- buffer sizing (where, how much, why)
- achievable weekly rate with conditions (headcount, target OEE, supply availability)
- plan B if a resource drops to X% availability for Y days
To make arbitration fast, a “scenarios vs impacts” table works very well.
| Scenario | Change | Expected impact | Cost / effort | Main risk |
|---|---|---|---|---|
| A | Reduce lot sizes | Lower lead time, reduced WIP | Low to medium | More changeovers |
| B | Temporary shift on the bottleneck station | Stabilized throughput during learning | Medium | Skill availability |
| C | Add equipment | Gain if the constraint is proven | High | Idle CAPEX |
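The "plan B if a resource drops to X% availability for Y days" deliverable reduces to simple backlog arithmetic once the rate is known. The figures below are hypothetical, reusing the article's 120 units/day and 60%-for-3-days example.

```python
# Hypothetical "plan B" arithmetic: backlog built during an availability dip,
# and the days of extra capacity needed to recover it.

def backlog_units(rate_per_day: float, availability: float, days: float) -> float:
    """Units lost while the resource runs at `availability` instead of 100%."""
    return rate_per_day * (1.0 - availability) * days

def recovery_days(backlog: float, extra_rate_per_day: float) -> float:
    """Days needed to clear the backlog with extra output (overtime, added shift)."""
    return backlog / extra_rate_per_day

lost = backlog_units(rate_per_day=120, availability=0.60, days=3)
print(f"{lost:.0f} units behind")                 # the dip's backlog
print(f"{recovery_days(lost, 24):.0f} days")      # recovery at +20% output
```

A 3-day dip to 60% availability leaves 144 units of backlog; at +20% output it takes 6 days to clear. Pre-computing this for each critical resource is what turns a contingency plan from a slogan into a commitment.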
Reading grid: what a spreadsheet can do—and what it must stop doing
Excel for reporting: track facts, don't decide the means
The spreadsheet remains useful to consolidate facts, produce tracking, and share a summary. It is excellent for comparing actual vs plan, tracking actions, and escalating gaps. However, it must stop being used to decide the means, because it does not represent flow dynamics. A coherent use: Excel to report, simulation to decide.
A good practice is to lock the reporting file (governance, sources, versioning) and move variable assumptions into a living model—testable and explainable.
Recurring traps of artisanal ramp-up management—and the simulation alternative
Artisanal management fails in the same place: it confuses average with capability, then reacts too late. It also creates a comprehension debt: each local fix makes the file more opaque, and decisions become hard to explain. Finally, it encourages urgent investments, while the constraint hasn't been demonstrated. The alternative is to test scenarios in simulation, then choose with full visibility.
The five traps (and the countermeasure):
- Average everything → work with variability (distributions), not only averages.
- Set one single OEE → differentiate by product, shift, and learning phase.
- Ignore WIP → size buffers and measure lead time.
- Invest before proving the constraint → validate the bottleneck via simulation and shop-floor observation.
- Depend on a single "file guardian" → document, version, and industrialize the model.
At Dillygence, ramp-up is managed as a physical and financial system: experience it through Operation Optimizer.

