Production assumptions: move from spreadsheet to model, test variability, and avoid buying machines just to shift the bottleneck.

Production assumptions: what simulation reveals, what shop-floor observation can't see
A study of 64 industrial companies measured an average loss of 1.6 hours per day per employee due to task interruptions and resumptions—more than 20% of an 8-hour day (Atlassian, context switching). In a factory, that friction doesn't show up in a spreadsheet, but it is paid for in lost capacity. The root cause often lies in implicit assumptions, simplified calculations, and undocumented management rules.
This is often where the gap hides between announced capacity and capacity actually delivered.
In this article, you'll see why simulation is primarily used to validate assumptions about capacity—well before trying to “save” a few seconds at a workstation. An anonymized case from the transport/infrastructure sector then shows how 6 months of on-site observation could look reassuring… and still miss the initial error: the bottleneck wasn't the machine, but flow organization and overly optimistic cycle times.
Key takeaway: simulation makes assumptions explicit, tests them, and quantifies their impact on throughput, work-in-progress, and lead times.
The illusion of victory: when the bottleneck “jumps” and makes a machine investment irrational
During a refurbishment project in a large heavy-overhaul plant for railway components (a historic site designed in the 1930s), the objective was clear: double production while integrating a new generation of components. Management identified a “slow” workstation and proposed a higher-performance machine. The digital model showed something else: upstream delivered batches out of sequence, and priorities changed without clear rules. The accelerated machine generated intermediate stock and saturated the next step, which was more quality-sensitive.
Result: overall throughput unchanged, lead time increased, and budget blown. The investment was irrational because the constraint was organizational, not technical. Simulation highlighted that the initial assumptions were wrong—something 6 months of on-site observation had not revealed.
1) Define production assumptions without fooling yourself
A plain definition: what you assume about demand, capacity, and costs
Production assumptions are the explicit suppositions that link demand to an industrial system, and then to cost and lead time. They describe what you think you can produce, under which conditions, and with which risks. They are used to build a capacity model and an industrial business case.
Without formalization, the project rests on a narrative rather than evidence.
Three families recur:
demand assumptions (volumes, mix, seasonality),
industrial assumptions (deliverable capacity),
and economic assumptions (variable and fixed costs, CAPEX (capital expenditures), OPEX (operating expenditures)).
| Family | What is set | Simple example |
|---|---|---|
| Demand | Volume and allocation by reference | 800 units/month, 60% A, 40% B |
| Industrial | Deliverable system capacity | 1 shift, 5 days/week, quality yield at 96% |
| Economic | Costs and financial assumptions | Catch-up OPEX in overtime hours if late |
What classic capacity models explain… and what they don't cover on the shop floor
Classic capacity models structure the relationship between inputs-process-capacity-yield and provide a useful framework for sizing. They work when flows remain stable and variability stays low. On the shop floor, micro-events (micro-stops, rework, movements) accumulate and create queue effects that static models capture poorly. A poorly positioned changeover creates a shockwave across the day; static models give a first estimate, while simulation puts these assumptions under stress with variability and real control rules.
“Average” assumptions vs robust assumptions: variability, queues, and interdependencies
The average is reassuring but hides the extremes that saturate queues. A robust assumption describes a distribution, a range, or multiple scenarios—not a single number. It makes dependencies explicit between stations, teams, upstream and downstream, and indicates when it stops being true.
Without validity conditions and model break factors, the model becomes hard to audit.
2) Invisible bottlenecks: organization, flows, and control rules before technology
The “jumping” bottleneck: typical symptoms in legacy plants being modernized
In factories built before pull flow, priority rules and storage zones are often inherited. As volumes ramp up, local workarounds lose effectiveness and the constraint shifts by hour, by reference, and by operator availability. Customer urgencies disrupt scheduling and increase setup times. Simulation quantifies these symptoms, enables targeted action, and forces simple rules to be defined—then measures their effect.
Why speeding up one station can slow down the whole factory: wave effects and downstream saturation
Speeding up an isolated station increases local throughput and therefore downstream stock if the next step isn't aligned. Without alignment to Takt Time (customer pace), you produce WIP (work in progress), which you pay for in space, quality, and lead time. Simulation helps quantify the point at which WIP starts growing with no gain in outputs. That point then becomes an operational guardrail.
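To make the wave effect concrete, here is a minimal sketch (with illustrative rates, not data from the article's case) of two stations in series: pushing the upstream station past the downstream one's capacity leaves output unchanged and only grows WIP.

```python
def simulate_two_stations(rate_a, rate_b, minutes=480):
    """Minute-by-minute sketch of two stations in series.

    rate_a and rate_b are parts per minute (illustrative values).
    Station A pushes into a buffer that station B drains; anything A
    produces above B's capacity accumulates as work in progress.
    """
    wip = 0.0
    output = 0.0
    for _ in range(minutes):
        wip += rate_a                 # A produces at its own pace
        drained = min(wip, rate_b)    # B is the real constraint
        wip -= drained
        output += drained
    return output, wip

# Speeding up A from 1.0 to 1.5 parts/min while B stays at 1.0:
base_output, base_wip = simulate_two_stations(1.0, 1.0)
fast_output, fast_wip = simulate_two_stations(1.5, 1.0)
# Output is identical in both runs; the faster A only piles up WIP.
```

In this toy model the guardrail is visible by inspection: any upstream rate above the downstream rate converts directly into WIP, never into outputs.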
WIP, space, safety, and lead time: the hidden cost of an overly optimistic pace assumption
WIP increases lead time, hides defects, and complicates inventory. It raises safety risk as traffic density increases, and in some layouts, WIP even becomes the main physical constraint. Simulation identifies the threshold beyond which WIP grows faster than outputs; that threshold matters for a business case. Without quantifying it, “buffer” space ends up absorbing flow problems.
3) The dynamic bottleneck: product mix sets the rules
Theoretical cycle times vs observed cycle times: the gap that blows up the load plan
A “theoretical” cycle time comes from a routing or a standard; an observed time reflects reality: operator variability, micro-stops, tooling, and quality control.
A 10% gap on a constrained station is enough to break a load plan.
A reliable model doesn't use a single value, but a distribution—or at least three values: favorable, nominal, unfavorable—linked to the product, the team, and equipment conditions.
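As a sketch of the three-value approach, the standard library's triangular distribution can turn favorable/nominal/unfavorable estimates into samples; the cycle times below are hypothetical, not measured data.

```python
import random

def sample_cycle_time(favorable, nominal, unfavorable, rng):
    """Draw one cycle time (minutes) from a triangular distribution
    built on the three values: favorable = low, unfavorable = high,
    nominal = mode. All figures here are illustrative."""
    return rng.triangular(favorable, unfavorable, nominal)

rng = random.Random(42)
samples = [sample_cycle_time(1.8, 2.0, 2.6, rng) for _ in range(10_000)]
mean_cycle = sum(samples) / len(samples)
# The simulated mean sits above the nominal 2.0 min because the spread
# is skewed toward the unfavorable side: exactly the gap that a single
# theoretical value hides from the load plan.
```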
Logistics distances, priorities, rework, and disruptions: the variables that move the constraint
Rework shifts the constraint toward inspection or rework stations. Disruptions—unplanned stops, component shortages—change scheduling and increase changeovers.
The model makes these interactions explicit, forces simple and respected rules, and allows realistic trade-offs to be tested: priority by due date, by criticality, by batches, or by families. The goal remains useful throughput, not local performance.
Why six months of shop-floor observation sometimes miss what the model makes obvious
Shop-floor observation captures a past already compensated by teams: operators bypass blockages, so the observer sees a factory that “holds.” After a change, those compensations may disappear and reveal risks that were never formalized.
The model applies explicit rules, measures the impact of changes on flow, and forces a scheduling logic to be chosen. It doesn't replace the human eye; it stress-tests assumptions and makes cross-functional discussions more fact-based.
4) From spreadsheet to model: build a minimal capacity model that is auditable and reusable
The basic forecast volume formula: units, parameters, explicit assumptions
A minimal model fits into a simple formula and a list of verifiable assumptions.
Forecast volume (parts) = Working time (hours) × Availability (%) × Rate (parts/hour) × Quality yield (%).
Each parameter must have a unit, a source, and a validity condition. This formula is a starting point, not a conclusion.
In a real industrial system, you must also integrate product mix, changeovers, variability, logistics constraints, shared resources, upstream/downstream constraints, scheduling rules, and the logic of the constrained station. Otherwise, you get a “clean” number that doesn't survive the first disruption. The value of the minimal model comes from traceability: it helps identify where accuracy is missing.
Input data to demand: typical sources and update frequency
| Data | Typical source | Useful frequency |
|---|---|---|
| Cycle time by reference | MES (Manufacturing Execution System), shop-floor measurements | Weekly |
| Stops and causes | CMMS (Computerized Maintenance Management System) | Daily |
| Produced quantities and scrap | MES, ERP (Enterprise Resource Planning) | Daily |
| Changeover time | MES, shop-floor measurements, methods standards | Monthly |
| Headcount, versatility, absenteeism | HR, shop-floor schedule | Weekly |
| Opening calendar | Site planning | Quarterly |
If you don't know where a number comes from, flag it as a risky assumption. If you don't know how often it changes, you don't know when your model becomes wrong. A quick source review often saves more time than another spreadsheet iteration—and avoids hardening errors right when work starts.
Document and version: assumption register, evidence level, and cross-functional reviews
Each assumption needs an owner and a review date, recorded in a register with value, unit, source, evidence level, and risk if wrong. Cross-functional reviews (maintenance, quality, logistics, methods) turn debate into arbitration and prevent decisions based on a single functional perspective.
Keeping versions prevents a business case that changes without traceability. An uncertain assumption isn't a problem, as long as it is identified and tested.
| Field | Example | Why it helps |
|---|---|---|
| Assumption | Average changeover time on family B | Avoids dilution into a global number |
| Evidence level | Shop-floor measurement on 30 occurrences | Helps sort “solid” and “fragile” values |
| Owner | Methods manager | Defines who decides and who updates |
| Review date | End of month | Explicit update cadence |
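As a sketch, one row of such a register can be captured as a small record type; every field value below is illustrative.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Assumption:
    """One row of the assumption register: value, unit, source,
    evidence level, owner, review date, and risk if wrong."""
    name: str
    value: float
    unit: str
    source: str
    evidence_level: str
    owner: str
    review_date: date
    risk_if_wrong: str

# Illustrative entry mirroring the example in the table above:
changeover_family_b = Assumption(
    name="Average changeover time on family B",
    value=25.0,
    unit="min",
    source="Shop-floor measurement",
    evidence_level="Measured on 30 occurrences",
    owner="Methods manager",
    review_date=date(2025, 1, 31),
    risk_if_wrong="Load plan overestimates useful throughput",
)
```

Versioning these records (in a file under source control, for instance) gives the traceability the business case needs.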
5) Simulation validation method: test scenarios before committing to heavy work
Pessimistic, central, and optimistic scenarios: what changes when variability enters the equation
Simulation becomes useful when you build multiple scenarios: pessimistic, central, optimistic. You vary sensitive parameters and observe throughput, WIP, and lead time sensitivity. The goal is to identify assumptions that flip ROI (return on investment) and prioritize shop-floor evidence actions. A well-built pessimistic scenario is worth more than a flattering optimistic one; it helps define operational guardrails and reaction plans.
Select 5 to 10 sensitive assumptions (rate on constrained station, changeovers, scrap rate, availability).
Define three coherent sets of values (not three arbitrary numbers).
Measure throughput, WIP, and lead time, then spot tipping thresholds.
Decide shop-floor evidence actions on the two most influential assumptions.
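The steps above can be sketched with the article's minimal capacity formula and three coherent parameter sets; all values here are hypothetical.

```python
def deliverable_capacity(hours, availability, rate, quality_yield):
    """Minimal capacity formula reused as a simple scenario engine."""
    return hours * availability * rate * quality_yield

# Three coherent sets of values, not three arbitrary numbers:
scenarios = {
    "pessimistic": dict(hours=160, availability=0.78, rate=11.0, quality_yield=0.94),
    "central":     dict(hours=160, availability=0.85, rate=12.0, quality_yield=0.96),
    "optimistic":  dict(hours=160, availability=0.90, rate=12.5, quality_yield=0.97),
}
results = {name: deliverable_capacity(**params) for name, params in scenarios.items()}
spread = results["optimistic"] - results["pessimistic"]
# If the business case only works inside this spread, the underlying
# assumptions need shop-floor evidence before any CAPEX decision.
```

A full simulation would add queues, changeovers, and control rules, but even this spread already shows which assumptions can flip the ROI.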
Takt time, buffers, and resource sizing: manage the bottleneck instead of suffering it
Takt Time provides a target rhythm; you need to size buffers (buffer stocks) in the right places to absorb variability without drowning the shop floor. In many high-variability environments, operational flexibility can sometimes create deliverable capacity faster than new equipment—especially when the constraint comes from flows, priorities, or shared resources. Conversely, in highly automated or continuous environments, or when the bottleneck remains purely technological, equipment remains the main lever. The point isn't “people versus machines,” but which lever treats the real constraint.
Multi-site standard: make calculations comparable across plants and avoid false benchmarks
Comparing plants requires common definitions: how OEE (Overall Equipment Effectiveness) is calculated, loss categories, and how changeover times are measured. Without a standard, you compare incomparable numbers and misdirect CAPEX. The standard defines a common language, lets local parameters vary, and enables grounded trade-offs: where to invest, where to reorganize, where to train. Without this foundation, benchmarks become internal slogans.
6) Industrial mini-cases: three assumption errors, three quantified impacts
Case 1: overestimated rate → announced capacity, then delays and additional OPEX
| What | A manufacturer announces 1,000 units/month based on 55 units/hour at the constrained station. |
|---|---|
| How | Shop-floor data shows a median of 50 units/hour due to micro-stops and longer inspections for some references. |
| Impact | Real capacity of roughly 910 units/month, with delays and overtime to compensate. |
| Most profitable action | Stabilize standards and reduce micro-stops rather than buying a machine. The business case was recalibrated on deliverable capacity. |
Case 2: underestimated changeovers → queues and lower useful throughput
| What | The project assumes 15 minutes per changeover and 12 changeovers/day. |
|---|---|
| How | Measurements show 25 minutes on average. |
| Impact | Daily changeover loss increases from 180 to 300 minutes: 2 extra hours of lost capacity and saturated queues. |
| Solution | Smarter batching or external preparation of changeovers. The model showed that more changeovers also increase rework and quality variability. |
Case 3: overly optimistic ramp-up → lost weeks and misdirected CAPEX
| What | Ramp-up planned to reach 100% in 8 weeks, linear curve. |
|---|---|
| How | Reality requires process qualification, training, and quality stabilization phases. |
| Impact | 6-week delay and CAPEX decisions taken too early. |
| Result | The model introduced a stepped ramp-up curve with disruptions. Investment was refocused on flexibility and control, lowering overall inventory and gaining capacity without overloading the organization. |
7) The five traps that destroy a business case (and countermeasures)
Trap 1: confusing nominal capacity with deliverable capacity
Problem: nominal capacity ignores quality, disruptions, and real availability.
Countermeasure: define deliverable capacity with OEE (Overall Equipment Effectiveness), quality yield, and shift constraints, then test it in simulation.
Trap 2: optimizing locally and degrading global flow
Problem: speeding up one station creates downstream saturation and WIP.
Countermeasure: align improvements with Takt Time and size buffers in the right places.
Trap 3: forgetting changeovers, rework, and disruptions
Problem: the model ignores non-cycle time and overestimates useful throughput.
Countermeasure: integrate changeovers, rework, and downtime with distributions and scenarios.
Trap 4: promising a linear ramp-up
Problem: real ramp-up follows steps driven by quality, learning, and maintenance.
Countermeasure: define a milestone-based ramp-up curve and audit it periodically.
Trap 5: not standardizing definitions across sites and functions
Problem: OEE, cycle time, and losses don't mean the same thing from one site to another.
Countermeasure: define a data dictionary and a shared multi-site model.
Dillygence perspective: a digital twin to test your assumptions before CAPEX and OPEX
Dillygence implements a digital twin and simulation to test your capacity and organization assumptions before committing to CAPEX and OPEX that would lock your factory in for years.
FAQ — production assumptions
What are production assumptions?
They are explicit suppositions that link demand to industrial capacity, then to costs and lead times. They cover working time, availability, rate, quality yield, product mix, and ramp-up. They are used to build an auditable model and decide on traceable foundations.
Why are production assumptions critical for sizing industrial capacity?
Because deliverable capacity rarely matches nominal capacity. Assumptions determine useful throughput, WIP, lead time, and quality stability. A wrong assumption directs investments to the wrong levers; simulation reveals organizational bottlenecks.
How do you formalize production assumptions in a capacity model?
Start from a simple formula with units, then make each parameter explicit: working time (hours), availability (%), rate (parts/hour), quality yield (%). Attach each value to a source, an update frequency, and a validity condition. Document everything in a versioned register and add scenarios for variability.
How do you estimate realistic production assumptions from the shop floor?
Prefer measurements and history over catalog standards. Cross MES, CMMS, and ERP with shop-floor measurements, then segment by product, team, and execution conditions. Use distributions or ranges—not a single average—and update at a cadence aligned with drift.
How do you integrate changeover times into production assumptions?
Measure changeover times by product family and context, then link those times to the number of changeovers per period. Integrate the loss into working time station by station instead of diluting it into OEE. Test batching rules and external preparation, then verify the impact on useful throughput and WIP.
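A minimal sketch of that bookkeeping, with illustrative figures echoing the changeover case earlier in the article:

```python
def net_working_minutes(opening_minutes, changeovers, avg_changeover_min):
    """Deduct changeover loss from one station's opening time instead
    of diluting it into a global OEE figure. Values are illustrative."""
    return opening_minutes - changeovers * avg_changeover_min

# An 8-hour shift (480 min) with 12 changeovers averaging 25 minutes:
remaining = net_working_minutes(480, 12, 25)
# Only 180 productive minutes remain, which is why batching rules and
# external changeover preparation are worth testing in the model.
```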
How do you build production assumptions for a ramp-up?
Model ramp-up in steps, with qualification milestones and learning phases. Link each step to assumptions on quality, availability, and skills. Add pessimistic, central, and optimistic scenarios and revise on fixed dates with shop-floor evidence.
How do you validate and challenge production assumptions before a decision?
Run a cross-functional review and request, for each assumption, a source, a rationale, and an evidence level. Execute targeted shop-floor tests on sensitive parameters such as rate, changeovers, and quality. Simulate scenarios, identify thresholds that break throughput or make WIP explode, then arbitrate after identifying the ROI-critical assumptions.
How do production assumptions influence the ROI of an industrial investment?
Assumptions determine deliverable capacity and therefore achievable revenue. They also impact the OPEX needed to keep the promise. An overly optimistic assumption hides catch-up costs; an overly conservative one can oversize CAPEX. ROI depends on the robustness and sensitivity of assumptions.
How do you standardize production assumptions to compare several plants?
Enforce a shared dictionary: OEE definition, cycle-time measurement methods, stop categories, and scrap/rework calculations. Require an assumption register format with units, sources, and review dates. Apply the same minimal capacity model to obtain reliable comparisons.
Which production assumptions should you require in a plant or line business case?
At minimum require: opening calendar, OEE by constrained station, rates by product family, quality yields, changeovers, scheduling rules, labor needs, and ramp-up assumptions. Also request economic assumptions: unit costs, cost of non-quality, CAPEX, and OPEX. Require a versioned register with owners and evidence levels; reject numbers without sources or validity conditions.


