Dillygence

Calculate production capacity without making any mistakes.

Increase production capacity without purchasing machinery: calculation method, bottlenecks, OEE, and flow simulation.


Introduction: increase production capacity by 20% without CAPEX (capital expenditure) — the real issue is flow

Most industrial investments compensate for flow imbalances that they then make worse...

A plant can show 85% OEE (overall equipment effectiveness) and still deliver late. This paradox rarely comes from a lack of machines, but more often from a lack of synchronization. When flows drift out of alignment, work-in-progress grows, priorities change constantly, and throughput drops silently. Buying a new piece of equipment treats a symptom, not the system's dynamics.

The “buy a machine” reflex often hides a synchronization imbalance and Muda (waste)

In most saturation cases, the bottleneck sits in how the flow is organized, not in installed power. Muda hides in micro-stops, logistics waiting, changeovers, and quality rework — a reading inherited from the Toyota Production System. These losses nibble 5 minutes here, 12 minutes there, then end up consuming the equivalent of a team or a machine. The worst part is that these minutes stay invisible in a nominal capacity table.

The “metal” reflex increases fixed costs, adds complexity, and can move the bottleneck further down the line. You pay for a machine, then discover the constraint was in internal logistics, scheduling, or quality. Overall throughput doesn't move, but the shop becomes harder to run.

Key takeaway: before metal, measure effective capacity, locate the bottleneck, then test scenarios via simulation

Capacity is not an average but a distribution. Variability (breakdowns, mix, logistics) is the main destroyer of shippable capacity.

A robust approach starts with three actions. First, measure effective capacity, not nominal capacity. Then, locate the bottleneck using data and shop-floor observation — Eliyahu Goldratt's theory of constraints helps frame this search. Finally, test several scenarios in a digital twin before spending one euro of CAPEX.

 

I- Nominal capacity, effective capacity, normal capacity: definitions and trade-offs

Nominal capacity describes what the asset could produce in an ideal world. Effective capacity describes what it actually produces with stops, mix, scrap, and human constraints. Normal capacity describes what you can sustain over time without exhausting teams or degrading reliability. Confusing these three notions leads to impossible load plans.

Link each notion to a concrete choice: load plan, hiring, subcontracting, investment in the company

Nominal capacity supports a layout choice or a product feasibility study. Effective capacity supports a planning, batch, and sequencing trade-off, because it includes real losses. Normal capacity supports social and industrial decisions, because it includes what is sustainable. If demand exceeds normal capacity, you have four options: smooth demand, increase hours, improve reliability, or invest.

“Sellable” capacity vs technical capacity: optimizing product mix matters as much as volume

Technical capacity is measured in hours and parts, but sellable capacity is measured in margin and customer promise. A plant can have spare capacity on low-demand references, and lack capacity on profitable references. Product mix and batch size dictate useful throughput, even if machines look underloaded. An extra machine can increase technical capacity, but reduce sellable capacity if it forces more changeovers.

 

II- From theoretical calculation to real throughput: a reusable calculation method

Variables, units, and scope: available time, cycle time, batch sizes, product mix

First define the available time over the period in useful minutes, then remove planned stops.

Then define cycle time by product or family, in minutes per part, measured at the constrained station. For a single product: nominal capacity (parts/period) = available time (minutes/period) / cycle time (minutes/part). For multiple products, use a mix-weighted average, or calculate per product then check the load on the constraint.

Nominal → effective conversion with OEE and FPY

OEE (overall equipment effectiveness) converts nominal capacity into effective capacity, because it integrates availability, performance, and quality (reference: ISO 22400 standard). FPY (First Pass Yield) measures the share that passes without rework.

Effective capacity = nominal capacity × OEE.

Useful capacity = effective capacity × FPY,

if rework passes again through the constraint.

Full numeric example: result in parts/day and parts/month, then a multi-product variant

Take a critical station with 2 shifts of 8 hours, i.e., 960 minutes per day. After 60 minutes of breaks and 30 minutes of planned stops, 870 useful minutes remain. With a cycle time of 2.0 minutes per part, nominal capacity is 435 parts per day. An OEE of 72% gives 313 effective parts; an FPY of 92% brings useful capacity down to 288 parts per day, i.e., 5,760 parts per month over 20 working days.

Multi-product variant: two references A (60% of volume, 1.8 min/part) and B (40%, 2.6 min/part) share the same station. The weighted average cycle time is (0.6 × 1.8) + (0.4 × 2.6) = 2.12 min/part, i.e., 410 nominal parts per day before OEE and FPY. If B rises to 55%, the weighted time increases to 2.24 min and nominal capacity drops to 388 parts. In other words, without breakdowns or drift, a simple shift in product mix toward the slower reference mechanically reduces production capacity and tightens the line.
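The worked example above can be reproduced in a short script; the numbers (870 useful minutes, 2.0 min/part, 72% OEE, 92% FPY, and the A/B mix) are taken straight from the example:

```python
def useful_capacity(available_min, cycle_min_per_part, oee, fpy):
    """Nominal -> effective -> useful capacity, per the formulas above."""
    nominal = available_min / cycle_min_per_part
    effective = nominal * oee
    useful = effective * fpy  # only if rework passes through the constraint
    return round(nominal), round(effective), round(useful)

# Single product: 2 shifts of 8 h minus breaks and planned stops = 870 useful min
nominal, effective, useful = useful_capacity(870, 2.0, oee=0.72, fpy=0.92)
print(nominal, effective, useful)   # 435 313 288
print(useful * 20)                  # 5760 parts over 20 working days

# Multi-product variant: mix-weighted cycle time for references A and B
mix = {"A": (0.60, 1.8), "B": (0.40, 2.6)}  # (volume share, min/part)
weighted_cycle = sum(share * ct for share, ct in mix.values())
print(round(weighted_cycle, 2))             # 2.12 min/part
print(round(870 / weighted_cycle))          # 410 nominal parts/day
```

The same function also makes the mix-shift sensitivity easy to test: re-running it with B at 55% reproduces the drop to 388 nominal parts.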

Common calculation mistakes: confusing utilization rate, load rate, and saturation

Load rate compares a planned load to a reference capacity. Utilization rate describes time actually consumed. Saturation describes a regime where variability makes queues explode, even if the average seems feasible. Queueing theory (Kingman, Little) shows that the closer you get to 100% utilization, the more waiting dominates: WIP (Work In Progress, work-in-progress) explodes, lead time increases, and shippable capacity drops.

Source: queueing theory.
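As an illustration of why waiting explodes near full utilization, here is a sketch of Kingman's approximation for a single-server queue; the coefficients of variation and the 2-minute service time are illustrative assumptions, not measured values:

```python
def kingman_wait(utilization, ca2, cs2, service_time_min):
    """Kingman's G/G/1 approximation for mean queue wait:
    Wq ~ (rho / (1 - rho)) * ((Ca^2 + Cs^2) / 2) * service time."""
    rho = utilization
    return (rho / (1 - rho)) * ((ca2 + cs2) / 2) * service_time_min

# Illustrative: moderate variability (Ca^2 = Cs^2 = 1), 2-minute service time
for u in (0.70, 0.85, 0.95, 0.99):
    print(f"{u:.0%} utilization -> {kingman_wait(u, 1.0, 1.0, 2.0):.1f} min of wait")
```

Note the non-linearity: moving from 85% to 95% utilization does not add 10% of waiting, it multiplies it, which is exactly why an average-based load plan can look feasible while the shop drowns in WIP.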

 

III- Limiting factors and bottlenecks: identify the resource that dictates the line's throughput

Detection in 5–7 steps: collect stops, WIP, changeovers, then validate with data

  1. Define the scope and observation period, then freeze the product mix.

  2. Record real cycle times, stops, micro-stops, and setups, station by station.

  3. Measure queues and WIP before each station across several time windows.

  4. Measure changeovers, then link them to batch sizes and scheduling.

  5. Calculate effective capacity per station with OEE and FPY.

  6. Validate with data the station that limits throughput, then confirm by observing symptoms.

  7. Test a simple improvement, then measure impact on overall throughput, not on an isolated station.
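Steps 5 and 6 above can be sketched as a per-station comparison; the station names and figures below are hypothetical, only the calculation pattern matters:

```python
# Hypothetical per-station data: (available min/day, cycle min/part, OEE, FPY)
stations = {
    "cutting":   (870, 1.5, 0.80, 0.97),
    "machining": (870, 2.0, 0.72, 0.92),
    "assembly":  (870, 1.7, 0.78, 0.95),
}

def station_capacity(avail, cycle, oee, fpy):
    """Useful parts/day at a station, combining OEE and FPY losses."""
    return (avail / cycle) * oee * fpy

capacities = {name: station_capacity(*d) for name, d in stations.items()}
bottleneck = min(capacities, key=capacities.get)
print({k: round(v) for k, v in capacities.items()})
print("candidate constraint:", bottleneck)
```

The data only nominates a candidate; per step 6, the shop-floor symptoms (persistent queue, unrecoverable lost hours) must confirm it.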

Starvation and blocking: two symptoms that make you believe you lack resources

Starvation describes a station stopped because it lacks parts. Blocking describes a station stopped because downstream no longer takes its output.

These two phenomena create the illusion of a capacity shortage because machines “aren't running”. In reality, they reveal poor flow synchronization that simulation can reproduce and correct.

Shop-floor rule: the bottleneck remains the resource that accumulates the queue and sets the pace

The bottleneck is the resource that builds a persistent queue, even when you “push” upstream. It's also the one where one lost hour cannot be recovered without overtime. Until you identify it, every action looks like a bet.

 

IV- Recover “free” capacity: micro-stops, changeovers, quality, maintenance

Each OEE point recovered mechanically improves fixed-cost absorption, hence operating margin (EBE/EBITDA), without additional investment.

Stop management and recovery: separate breakdowns, micro-stops, setups, and internal logistics to regain throughput

A 2-minute stop repeated 40 times per shift costs more than a single long breakdown. Segment the causes: breakdowns, micro-stops, setups, logistics waiting, quality waiting. Link each segment to an owner action, with before/after measurement.

Mini-case: an assembly line was losing 55 minutes per day to screwdriving micro-stops and replenishment waiting. The team implemented a torque standard, a parts kit at the station, and a paced logistics route. OEE at the constrained station rose from 68% to 76% in four weeks, i.e., +11% daily throughput without additional equipment.

Changeovers and reliability: reduce wasted time without degrading quality

A changeover costs setup time, but also start-up parts and quality drift. Reducing this cost requires external preparation, setup standards, and sequencing discipline. Mini-case: a machining shop suffered 9 changeovers per day on a constraint. Grouping families and revisiting batch sizes reduced total setup time by 35%, with +8% useful capacity.

Reliability, non-quality, and multi-skilling: recover useful capacity without adding equipment

Each scrapped part consumes capacity for nothing and cash via working capital requirement. FPY becomes a capacity indicator: if rework consumes the constraint, treat FPY first. Operator multi-skilling reduces losses linked to absences and shift handovers.

Mini-case: on a final inspection line, training two multi-skilled operators and aligning breaks reduced end-of-line WIP by 22% and increased shippable capacity by 6% at marginal cost.

To the great delight of CEOs and CFOs, note that reducing WIP directly lowers working capital requirement and frees up cash.

 

V- Digital twin and flow simulation: test 10 scenarios before executing one

A factory never runs at the average regime but under permanent variability (breakdowns, mix, disruptions).

Flow simulation captures this dynamic and lets you test control policies, buffers, batch sizes, scheduling rules, and reconfiguration scenarios.

You get a throughput, a lead time, and a WIP level per scenario, without disrupting the shop.

Buffers and scheduling: stabilize the constraint and limit dead time

A buffer protects the constraint from upstream and downstream disruptions. Too small, it leaves the constraint starved; too large, it inflates WIP and extends lead times. Little's law shows the direct impact of WIP on lead time and throughput: when WIP explodes, shippable capacity degrades even if theoretical capacity is unchanged. Simulation finds the balance point and also tests scheduling rules — priority to margin per constrained minute, family campaigns, leveling long references — to compare their impact on throughput and service level.

Source: Little's law
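Little's law itself is one line: WIP = throughput × lead time, so at fixed throughput every extra part of WIP translates directly into extra lead time. The 288 parts/day figure echoes the earlier worked example; the WIP levels are illustrative:

```python
def lead_time_days(wip_parts, throughput_parts_per_day):
    """Little's law rearranged: lead time = WIP / throughput."""
    return wip_parts / throughput_parts_per_day

# Illustrative: same 288 parts/day of useful throughput, growing WIP
for wip in (288, 576, 1440):
    print(f"WIP {wip:4d} parts -> lead time {lead_time_days(wip, 288):.1f} days")
```

Doubling WIP doubles lead time with no gain in output, which is why buffer sizing is a trade-off and not a "bigger is safer" decision.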

Workshop reconfiguration: what global throughput gain, and at what operational price?

A reconfiguration can free capacity by reducing waiting, but it can also create new accumulation points. The operational price is measured in disruptions, learning curves, and quality risks. A digital twin estimates these effects before moving the first station, and links capacity and sobriety by reducing handling and energy consumption per part.

That is precisely the purpose of an industrial digital twin: test without risk before investing.

VI- Capacity management at the right level: workshop, site, multi-site

 

Workshop dashboard: OEE, FPY, WIP, load rate, lead time

| Indicator | Question it answers | Associated decision |
| --- | --- | --- |
| OEE | Where does available time go? | Reliability, standards, micro-stop reduction |
| FPY | What share comes out right the first time? | Root-cause treatment, process robustness |
| WIP | Where does work-in-progress accumulate? | Buffer tuning, release rules, leveling |
| Load rate | Does planned load exceed what is sustainable? | Planning trade-off, subcontracting, hours |
| Lead time | How long does a part take to traverse the flow? | WIP reduction, waiting elimination, synchronization |

Master production schedule and multi-site capacity model

The master production schedule links demand to normal capacity, then forces mix and batch trade-offs. If you change batch sizes without recalculating setup times, you change your real capacity. Normal capacity corresponds to the level you can sustain without permanent degraded mode, factoring in variability from disruptions.
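The batch-size warning above can be made concrete: when each part carries its share of the batch setup, the effective cycle time is cycle + setup/batch, so shrinking batches without reducing setup time shrinks real capacity. The figures below are illustrative:

```python
def daily_capacity(available_min, cycle_min, setup_min, batch_size):
    """Capacity when each part carries its share of the batch setup time."""
    effective_cycle = cycle_min + setup_min / batch_size
    return available_min / effective_cycle

# Illustrative: 870 useful min/day, 2.0 min/part cycle, 30 min setup per batch
for batch in (120, 60, 30):
    print(f"batch {batch:3d} -> {daily_capacity(870, 2.0, 30, batch):.0f} parts/day")
```

Halving the batch twice, from 120 to 30, quietly removes roughly a quarter of the station's daily output: this is exactly the recalculation the master production schedule must not skip.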

Comparing sites requires a single language. Standardize at least four elements: reference available time, OEE rules, FPY rules, and mix convention. Enforce a common granularity by line or by process family. Without this standard, you move volumes, not capacities.

 

VII- Decide without fooling yourself: CAPEX vs OPEX, make or buy

Compare optimization vs extension: timing, operational risk, working capital, energy, carbon footprint and CO₂

| Criterion | Flow optimization | Capacity expansion |
| --- | --- | --- |
| Time to impact | Weeks to a few months | Several months to years |
| Operational risk | Discipline and change risk | Ramp-up, qualification, integration risk |
| Working capital | Can decrease if WIP is reduced | Can increase via stocks and WIP |
| Energy / CO₂ | Can decrease per part via loss reduction | Often increases via new assets and footprint |
| Cost | Targeted OPEX, little CAPEX | High CAPEX + recurring OPEX |

 

Size an investment and arbitrate make or buy

An investment is sized on shippable capacity, not on a catalog.

Define the target capacity in sellable units, translate it into constrained minutes, add a variability margin linked to breakdowns and mix, then build a ramp-up scenario with qualification milestones. A digital twin tests the sizing and highlights downstream saturation risks.
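A minimal sketch of the sizing logic above, translating a sellable target into constrained minutes with a variability margin; the 10% margin and all figures are assumptions for illustration:

```python
def required_constrained_minutes(target_parts, cycle_min_per_part,
                                 oee, fpy, variability_margin=0.10):
    """Constrained minutes needed to ship target_parts, grossed up for
    OEE/FPY losses, then padded for breakdown and mix variability."""
    gross = target_parts * cycle_min_per_part / (oee * fpy)
    return gross * (1 + variability_margin)

# Illustrative: ship 6,900 parts/month at 2.0 min/part on the constraint
need = required_constrained_minutes(6900, 2.0, oee=0.72, fpy=0.92)
available = 870 * 20  # useful minutes per month at the constraint
print(f"need {need:.0f} min vs {available} available -> gap {need - available:.0f} min")
```

If the gap is positive after the optimization levers are exhausted, it sizes the investment; a simulation then checks that the added minutes do not merely saturate the next station downstream.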

Make or buy is not a binary decision; it's an allocation of constraints. Think in “constrained minutes” and margin: a reference that consumes many constrained minutes for little margin becomes a buy candidate. A profitable reference should remain protected on the constraint. This reasoning links capacity, mix, and competitiveness in plain numbers.
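The "margin per constrained minute" ranking can be sketched as follows; the references and their figures are hypothetical:

```python
# Hypothetical references: (margin in EUR per part, constrained min per part)
refs = {"R1": (12.0, 1.2), "R2": (4.0, 2.8), "R3": (9.0, 1.9)}

# Rank by margin generated per minute consumed on the constraint
ranking = sorted(refs, key=lambda r: refs[r][0] / refs[r][1], reverse=True)
for r in ranking:
    margin, minutes = refs[r]
    print(f"{r}: {margin / minutes:.2f} EUR per constrained minute")
# The lowest-ranked references are the natural buy (outsourcing) candidates.
```

In this illustrative set, R2 earns barely over 1 EUR per constrained minute against 10 EUR for R1, so R2 is the candidate to subcontract while R1 stays protected on the constraint.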

 

VIII- Mini-cases: the ROI of intelligence vs the ROI of metal

Case A: €300,000 machine, ROI (return on investment) in 36 months, then bottleneck shifted

Problem: a site wanted +20% throughput and chose to buy a €300,000 machine.

Method: the machine increased upstream capacity, but final inspection became the constraint.

Result: ROI stayed at 36 months on paper, delays continued, and the site gained metal while losing flow.

Case B: €15,000 study + simulation, +20% output and ROI in 2 months

Problem: same +20% objective without buying equipment.

Method: a €15,000 flow study measured micro-stops, changeovers, and WIP, then a simulation tested three scheduling rules and a buffer.

Result: OEE on the constraint gained 9 points, output increased by 20%, and ROI was reached in 2 months.

Case C: unstable product mix, scheduling changed, WIP and lead time reduced

Problem: a multi-product workshop suffered from urgent orders and a lead time that varied from 1× to 3×.

Method: the site replaced a “customer due date first” rule with a hybrid rule combining family campaigns and WIP capping.

Result: WIP decreased by 30% and lead time by 25% at constant mix, increasing shippable capacity without speeding up machines.

 

IX- Final decision grid: increase capacity without degrading lead times, quality, and CO₂

Organization, reliability, bottleneck management, investment: which lever depending on the indicator that drifts?

  • If load rate exceeds normal capacity: act on shifts, multi-skilling, demand leveling, then subcontracting.

  • If OEE drops: treat micro-stops, breakdowns, setups, then maintenance and standards.

  • If FPY drops: treat quality causes, process stability, and rework on the constraint.

  • If WIP rises and lead time explodes: revisit buffers, release rules, scheduling, and batch sizes.

  • If the constraint remains saturated after optimization: size an investment, then validate via simulation the impact on overall throughput.

Expensive traps

  • Local over-optimization: one station gains 30% but overall throughput doesn't move because the bottleneck is elsewhere.

  • Oversized buffers: WIP hides instabilities, then working capital rises.

  • Speed-up: quality drifts, maintenance suffers, and useful capacity drops.

  • CAPEX in the wrong place: you increase a non-constrained step and shift the saturation.

In 80% of cases, capacity gains are achievable without CAPEX, provided you manage flows rather than machines.

 

In summary

1. Real capacity is limited by the bottleneck, not by machines
2. Variability destroys shippable capacity
3. WIP simultaneously degrades lead time, cash, and performance
4. Flow optimization always comes before investment

Dillygence turns your shop-floor data into a digital twin to compare flow, mix, and investment scenarios, then choose a shippable-capacity trajectory with numbers, not beliefs.

 

FAQ: production capacity

Definition of production capacity

Production capacity describes the maximum volume an industrial system can ship over a given period, with a clearly defined scope. It depends on available time, cycle times, stops, quality, and product mix. Depending on the management objective, it is expressed in parts, hours, or euros of value added. In practice, distinguishing nominal, effective, and normal capacity avoids “gut-feel” decisions.

How to calculate a usable production capacity

Start from net available time (useful minutes), then divide by the cycle time at the constrained station to get nominal capacity. Then multiply by OEE to estimate effective capacity, then by FPY if non-quality consumes the constraint. In a multi-product context, use a mix-weighted average cycle time. This calculation mainly helps compare scenarios within the same scope.

Theoretical capacity vs real capacity: where the “+20% without CAPEX” hides

Theoretical capacity assumes zero disruptions and zero losses: it's a reference baseline, not a commitment. Real capacity includes stops, performance, quality, changeovers, and human constraints. The gap often comes from variability and repeated short losses. That's the gap you recover when you treat the constraint at the right level of detail.

What limits capacity: machines, flows, and variability

Typical limiters: physical bottleneck, stops and micro-stops, setups, non-quality, internal logistics, and available skills. Product mix alone can “sink” throughput if slow references dominate. Poor scheduling creates blocking and starvation. Poorly sized buffers increase WIP and reduce truly shippable capacity.

Identify the bottleneck that sets capacity

Spot queues and WIP before stations, then cross-check with stops, cycle times, and changeovers. The bottleneck is the resource that accumulates a persistent queue and sets the output pace. Check that one lost hour on this resource cannot be “magically” recovered without overtime. Validate via a measured action and its impact on overall throughput.