Dillygence

Production line design without superfluous CAPEX

Production line design: align capacity with customer demand, not with the catalog, to improve ROI and OPEX.

Introduction: Succeeding in production line design, avoiding oversized CAPEX

In industry, a 10% to 20% “just in case” margin on capacity is often decided in a meeting—and then paid for over ten years. That reflex inflates CAPEX and locks in recurring costs on assets that run less than expected. Worse, it creates an illusion of control while actual performance remains constrained by variability and queues. That's why production line design—or production system design more broadly—should start with equations, not a machine catalog.

The real cost of the “safety margin”: too many machines, too much maintenance, and still waiting for target performance

A redundant machine is often a “safety margin” or growth capacity purchased too early. Sometimes, it's simply one machine too many: it isn't required to hit the target output. In both cases, it doesn't only cost its purchase price: it adds maintenance, spare parts, training, inspections, and planned downtime. It takes up square meters, which means energy, heating, ventilation, and real-estate capital. And if the bottleneck is elsewhere, this pseudo-insurance doesn't increase throughput—it increases complexity. So why does this reflex persist?

 

1. Why does oversizing a production line hurt profitability?

Oversizing destroys ROI in two simple ways.

  • It ties up capital in underloaded assets, then creates fixed costs that survive any drop in volume.

  • It also weakens operational discipline, because excess capacity hides the real flow problems.

The result is classic: “available” machines, but lead times that don't go down.

 

Immobilized CAPEX, recurring unnecessary OPEX, and underused production resources

A CAPEX decision should be judged by the incremental margin it unlocks—not by psychological comfort. Capacity bought “just in case” generates depreciation even if demand never arrives. It adds OPEX even if planning never uses it. And it reduces the ability to invest later in quality, useful automation, or reducing the carbon footprint.

Shifting the bottleneck instead of treating it on the line

Adding a machine to a non-bottleneck station does not eliminate the constraint.

The constraint moves and then reappears in another form: upstream waiting, downstream congestion, intermediate inventory, and rework. Teams see “more equipment,” and then they manage “more complexity.” Robust design treats the constraint and then balances the line around it.

 

2. Sizing to demand: Little's Law as a guardrail

Serious sizing starts from customer demand and works backward to required capacity. This logic forces trade-offs between throughput, WIP, and flow time. It avoids unrealistic assumptions based on “catalog” rates. And it puts lead time at the center, because lead time absorbs variability.

How does Little's Law make plant design more reliable?

Little's Law imposes mathematical consistency between what the factory wants to ship and what it accepts to hold in the flow. It forces you to define the target throughput and the expected flow time before choosing a WIP level. It quickly exposes impossible promises—for example, short lead time with high WIP (Work In Progress) and capacity near saturation. It therefore improves design reliability by turning a fuzzy discussion into measurable constraints.

Production line design: linking throughput, WIP, and lead time with Little's Law

The relationship \(L=\lambda \times W\) links three objects too often separated: average WIP, average throughput, and average flow time. If throughput \(\lambda\) stays constant and WIP \(L\) increases, flow time \(W\) increases too. If you want to reduce \(W\) without changing \(\lambda\), you must reduce \(L\), meaning stabilize flows and limit queues. This logic makes visible the hidden costs of “comfort WIP” that ties up cash and stretches delivery times.
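The arithmetic is short enough to sketch directly. The sketch below uses hypothetical numbers (120 units/day, 360 units of WIP) purely to illustrate how the three quantities constrain each other:

```python
# Little's Law: L = lambda * W  (average WIP = throughput * flow time).
# Hypothetical line: 120 units/day of throughput, 360 units of average WIP.

throughput = 120.0            # units per day (lambda)
wip = 360.0                   # average work in progress (L)

flow_time = wip / throughput  # W, in days
print(f"Average flow time: {flow_time:.1f} days")        # 3.0 days

# Cutting WIP in half at constant throughput halves flow time:
flow_time_reduced = (wip / 2) / throughput
print(f"With half the WIP: {flow_time_reduced:.1f} days")  # 1.5 days
```

The point of the exercise is not the numbers but the constraint: once two of the three averages are fixed, the third is no longer negotiable.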

 

3. Takt Time vs. cycle time: stopping the confusion that ruins assumptions

Many projects confuse internal speed with external demand.

This confusion leads to overinvestment, because the line is sized on a station rate instead of the customer rhythm. It also leads to lead-time promises that don't survive the first disruption. Rigorous design distinguishes the metrics and then builds capacity assumptions station by station.

What is the difference between Takt Time and cycle time?

Takt Time is the rhythm imposed by customer demand: available time divided by the volume to deliver.

Cycle time is the actual time to produce one unit at a station or operation. If cycle time exceeds Takt Time, the station cannot meet demand—even with excellent quality. If cycle time is below Takt Time, the station can meet demand, but variability and synchronization still need to be managed.

Sizing a production line: estimating the number of stations… then dealing with shop-floor reality

A first estimate of the number of stations is \(N=\frac{\sum T_c}{T_t}\), where \(\sum T_c\) is the sum of cycle times and \(T_t\) is the Takt Time. This gives a baseline, and the shop floor immediately corrects it: machine availability, changeovers, scrap, skills, and logistics. A line doesn't live in a spreadsheet—it lives in a workshop with micro-stops and competing priorities. A good approach turns this calculation into scenarios, then validates them via simulation.
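As a minimal sketch of that first estimate, assuming a hypothetical 7.5-hour shift, a demand of 900 units per shift, and invented cycle times per operation:

```python
# First-pass station count: N = sum(Tc) / Tt, rounded up to whole stations.
import math

available_time = 450 * 60                 # seconds per shift (7.5 h), hypothetical
demand = 900                              # units per shift, hypothetical
takt_time = available_time / demand       # Tt = 30 s per unit

cycle_times = [22, 35, 18, 41, 27]        # seconds per operation, hypothetical
n_theoretical = sum(cycle_times) / takt_time
n_stations = math.ceil(n_theoretical)

print(f"Takt time: {takt_time:.0f} s/unit")               # 30 s/unit
print(f"Theoretical stations: {n_theoretical:.2f}")       # 4.77
print(f"Stations to plan (before losses): {n_stations}")  # 5
```

This is only the baseline the text describes: availability, changeovers, scrap, and micro-stops then push the real number upward, which is what the scenario and simulation work is for.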

 

4. Variability, queues, and Kingman's Law: the saturation point you must not cross

Sizing that ignores variability builds a theoretical factory. As load approaches capacity, queues take over. Lead times then grow faster than load, which always surprises organizations. Queueing theory explains this behavior and provides simple design rules.

Why is it risky to run a machine above 90% utilization?

Above 90% load, the smallest fluctuation creates a queue that doesn't disappear faster than it forms. Lead times blow up even if average cycle times look fine. Operators see the machine running “all the time,” and they also see WIP growing “all the time.” This situation hurts service, quality, and stability—therefore profitability.

Breakdowns, product mix, micro-stops: the variability that makes lead times explode

Variability comes from breakdowns, setups, changeovers, quality deviations, and product mix.

Even low variability is enough to create long waits, because queues pile up on resources close to saturation. This phenomenon isn't an execution flaw; it's system dynamics. Robust design treats variability as an input, not as an after-the-fact excuse.

Saturation curve: sizing for production flow, not for the supplier catalog

Kingman's Law helps estimate queue waiting time from two simple parameters: resource load and variability (breakdowns, setups, product mix). The closer utilization \(\rho\) gets to 1, the faster waiting time rises, even if average “on-paper” capacity seems sufficient. In other words: at 95% utilization, a small disruption can generate a big queue. This reading helps choose an operating point that protects lead time and service level. It avoids a frequent trap: buying a faster machine and then discovering the line remains slow because of queues and synchronization.
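A minimal sketch of this saturation curve, using Kingman's approximation (the VUT equation) with hypothetical values for process time and variability:

```python
# Kingman's approximation for mean queue wait at a single station:
#   Wq ≈ ((ca^2 + cs^2) / 2) * (rho / (1 - rho)) * ts
# ca, cs: coefficients of variation of arrivals and service; ts: mean process time.

def kingman_wait(rho: float, ca: float, cs: float, ts: float) -> float:
    """Approximate mean waiting time in queue (same time unit as ts)."""
    return ((ca**2 + cs**2) / 2) * (rho / (1 - rho)) * ts

ts = 10.0      # minutes of process time at the station (hypothetical)
ca = cs = 1.0  # moderate, exponential-like variability (hypothetical)

for rho in (0.80, 0.90, 0.95, 0.98):
    print(f"utilization {rho:.0%}: wait ≈ {kingman_wait(rho, ca, cs, ts):.0f} min")
# Waiting time grows non-linearly with load: 40, 90, 190, then 490 minutes.
```

The jump from 95% to 98% utilization more than doubles the wait, which is exactly why an operating point should be chosen on this curve rather than at nominal capacity.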

 

5. Simulate before buying: Monte Carlo to test 1,000 factories in minutes

A line doesn't face one scenario; it faces a distribution of disruptions. A Monte Carlo simulation samples these disruptions to estimate throughput, WIP, and lead time with a confidence level. It lets you compare architectures, redundancy levels, and inventory policies. It costs a fraction of a poorly sized CAPEX.

What is the value of Monte Carlo simulation before purchasing equipment?

Monte Carlo simulation tests thousands of combinations of breakdowns, cycle times, and product mix. It quantifies the probability of meeting a throughput and a lead time instead of asserting a comforting average. It also reveals fragile zones—for example, a station that causes no issue “on average” but drops service level as soon as two disruptions overlap. This visibility helps choose a useful investment rather than a defensive purchase.
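A toy version of this idea can be written in a few lines. The sketch below assumes a single bottleneck station with exponential failures and repairs; every parameter (shift length, MTBF, MTTR, cycle-time spread, target) is a hypothetical illustration value, not a real dataset:

```python
# Monte Carlo sketch: sample breakdowns and cycle-time noise over many
# virtual shifts to estimate the probability of hitting a throughput target.
import random

random.seed(42)

SHIFT_MIN = 450       # minutes available per shift (hypothetical)
TARGET_UNITS = 400    # units to ship per shift (hypothetical)
MEAN_CYCLE = 1.0      # minutes per unit at the bottleneck (hypothetical)
CYCLE_SD = 0.15       # cycle-time variability (hypothetical)
MTBF = 300.0          # mean time between failures, minutes (hypothetical)
MTTR = 25.0           # mean time to repair, minutes (hypothetical)

def one_shift() -> int:
    """Simulate one shift and return units produced."""
    downtime = 0.0
    t = random.expovariate(1 / MTBF)
    while t < SHIFT_MIN:                      # failures landing in the shift
        downtime += random.expovariate(1 / MTTR)
        t += random.expovariate(1 / MTBF)
    uptime = max(SHIFT_MIN - downtime, 0.0)
    cycle = max(random.gauss(MEAN_CYCLE, CYCLE_SD), 0.2)
    return int(uptime / cycle)

runs = [one_shift() for _ in range(10_000)]
service = sum(r >= TARGET_UNITS for r in runs) / len(runs)
print(f"P(meet {TARGET_UNITS} units per shift) ≈ {service:.1%}")
```

Even this toy model returns a probability of meeting the target rather than a single comforting average, which is the whole argument of the section.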

Service curves (e.g., 95%) and minimal cost: choosing the right redundancy level

A 95% service level is sized; it isn't declared. Monte Carlo lets you plot a cost-versus-service curve and identify the point where adding another station brings almost no additional gain. This ends the “if we add a machine, we sleep better” argument. The right redundancy level becomes a quantified trade-off between CAPEX, OPEX, and risk.
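One simple way to see the curve flatten is a redundancy model. The sketch below assumes n identical machines, each independently available with probability p, with m machines needed to cover demand; p, m, and the normalized cost are hypothetical:

```python
# Cost-vs-service sketch: service level = P(at least m of n machines up),
# computed from the binomial distribution.
from math import comb

p = 0.92          # per-machine availability (hypothetical)
m = 4             # machines needed to cover demand (hypothetical)
unit_capex = 1.0  # normalized cost per machine (hypothetical)

def service_level(n: int) -> float:
    """Probability that at least m of n machines are up."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

for n in range(m, m + 4):
    print(f"{n} machines (cost {n * unit_capex:.0f}): "
          f"service {service_level(n):.1%}")
# The marginal gain of each extra machine shrinks quickly: the curve
# flattens, which is where adding capacity stops paying.
```

Plotting cost against these service levels gives exactly the curve the text describes, and makes the "one more machine for comfort" argument a visible point of diminishing returns.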

Sensitivity report: quantifying the impact of failures on throughput and lead time

A sensitivity report measures the effect of one variable on system outputs—for example, the effect of MTBF (mean time between failures) and MTTR (mean time to repair) on throughput. It shows which levers deserve action (maintenance, critical spares, standardization). It also shows which levers don't deserve CAPEX (capital expenditures) because their impact is marginal. A mature design uses this report to prioritize actions and justify investments.
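A minimal example of such a comparison, using the standard availability relation A = MTBF / (MTBF + MTTR) with hypothetical baseline values, to rank two maintenance levers:

```python
# Sensitivity sketch: availability A = MTBF / (MTBF + MTTR) and its effect
# on effective throughput. All values are hypothetical.

NOMINAL_RATE = 60.0  # units/hour at 100% availability (hypothetical)

def availability(mtbf: float, mttr: float) -> float:
    """Steady-state availability from mean time between failures / to repair."""
    return mtbf / (mtbf + mttr)

base = availability(300, 30)
print(f"Baseline: {base:.1%} available, {NOMINAL_RATE * base:.1f} units/hour")

# Compare two levers of equal apparent ambition: +20% MTBF vs -20% MTTR.
better_mtbf = availability(360, 30)   # fewer failures
better_mttr = availability(300, 24)   # faster repairs
print(f"+20% MTBF: {better_mtbf:.1%}")
print(f"-20% MTTR: {better_mttr:.1%}")
```

In this toy case the MTTR lever edges out the MTBF lever, which is the kind of ranking a sensitivity report makes explicit before any CAPEX is committed.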

 

6. CAPEX decision framework: deciding with equations, not fear

A solid CAPEX decision links a demand hypothesis to a flow architecture, then to an acceptable variability level. It then checks consistency between WIP, lead time, and service level. Finally, it formalizes total cost—CAPEX plus OPEX—over the asset's lifetime. This framework avoids “gut-feel” purchases that turn into fixed charges.

Signals that indicate oversizing disguised as caution

  • A first signal appears when the study applies a 10% to 20% margin without a variability model.

  • Another signal appears when the business case ignores maintenance, energy, tooling, and floor-space footprint.

  • A third signal appears when the bottleneck has no proof—only consensus.

  • A final signal appears when the target lead time comes from neither Little's Law nor simulation.

What a credible study phase must include before issuing a purchase order

  • A credible study phase starts by defining Takt Time from demand, then collecting cycle times and failure data.

  • It includes a first estimate of the number of stations: add the cycle times of all operations (\(\sum T_c\)) and divide by Takt Time (\(T_t\)): \(N=\frac{\sum T_c}{T_t}\).

  • It continues with a saturation and queue analysis using Kingman's Law, and validates the resulting scenarios through Monte Carlo simulation before any purchase order is issued.

 

Dillygence supports your teams in designing and sizing a production line on measurable foundations—thanks to its Design optimizer.

1. Why is oversizing a financial trap?

2. How does Little’s Law secure your design?

3. Takt Time vs Cycle Time: what is the difference?

4. Why should you never saturate a machine to 100%?

5. What is the value of Monte Carlo simulation before purchasing?