Building a connected factory: the 80/20 rule
Building a 4.0 factory: real-time data improves energy, quality, and traceability, provided that flows are coherent and stabilized.

Factory 4.0 construction (greenfield): real time as a performance engine… or a costly mirage?
According to McKinsey, nearly 70% of industrial data remains unused. Yet many factory construction projects promise “real time everywhere” from day one. The contrast reveals a simple tension: lots of data, too little operational value.
A greenfield project starts with a blank slate on the technical side. You can embed fiber, cable trays, connection points, and consistent naming rules directly into the plans. But that freedom alone does not guarantee that, on opening day, the plant will deliver the expected performance.
The right approach is to build a data-ready infrastructure, then activate real time on the assets that move the KPIs: energy, safety, bottlenecks, critical quality. The rest can wait for controlled iterations, at the pace of the shop floor.
Key takeaway: when designing a Factory 4.0 in greenfield mode, targeting real time is a unique opportunity to optimize decarbonization and operational performance. However, agility comes not from exhaustive data but from a connected, evolvable infrastructure. The recommended approach is to implement real time on critical assets (energy consumption and bottlenecks) while ensuring a robust data architecture for the future. This limits start-up risks while delivering a long-term competitive advantage through a natively intelligent factory.
I. From design to new plant construction: the blank page changes the rules
An existing site carries technical debt: disparate PLCs, uneven networks, heterogeneous nomenclatures. By contrast, new plant design makes it possible to impose conventions and avoid costly patches after start of production.
However, the benefit only arrives if civil engineering decisions integrate operations: test points, redundant power supplies, dedicated cabling, planned radio zones. Without that, the greenfield turns into a disguised retrofit, with a deferred bill… and rarely a lower one.
Extension or new site: direct effects on flows, capacity, and yield
An extension adds square meters. But it often keeps the same entrances, docks, aisles, and traffic patterns. You gain space, not necessarily throughput. Common outcome: logistics congestion, saturated stores, rising WIP, and productivity per m² that plateaus.
In practice, a new plant is not built to squeeze out marginal capacity. It is decided when the existing site can no longer keep up: unfixable flows; building, energy, HSE, or neighborhood constraints; or inability to meet the ramp-up. In that case, a new site enables a laminar flow layout: incoming materials, transformation, testing, and shipping follow one continuous thread. The gain shows up as less handling and less buffer stock, hence lower costs and shorter lead times.
When civil engineering prepares the data: sensors, fiber, networks, IoT
Once flows are drawn and the layout is locked, civil engineering must follow, not the other way around. Planning cable routes, openings, and IT/OT racks from the construction phase avoids drilling, stoppages, and improvised fixes after start-up. Solid radio coverage and a fiber backbone cost more up front, but they reduce heavy rework, HSE risks, and availability losses.
One example: an automotive site validated its layout too late. The production cells moved; the network stayed fixed. Wi‑Fi workarounds and improvised IP addressing created technical debt and weakened cybersecurity. At that point, it is no longer innovation; it's survival.
New plant architecture: aligning OT, IT, MES, ERP, and cybersecurity
The usual trap in new factory construction is stacking OT/IT layers and tools without defining “who does what.” Result: duplicates, arbitration conflicts, delays.
OT protects safety and availability. IT manages governance, integration, and access. The MES orchestrates shop-floor execution. The ERP carries planning and master data.
Cybersecurity must be designed in from the earliest studies: network segmentation, identity control, traceability through logs. When modernizing an existing site, teams too often end up patching solutions together and accepting trade-offs because room to maneuver is limited. On a greenfield, that excuse doesn't hold for long.
II. Real time in production: what ROI confirms (and what it contradicts)
Real time pays off when it triggers an action. An alert with no action becomes costly: it teaches teams to ignore the system and wears them out. ROI is measured in loss reduction and speed of correction.
Three areas deliver immediate gains: energy, quality on critical operations, and visibility on supply-chain breaks. Other use cases can wait, even if the demos shine in steering committees.
Energy: managing utilities and linking measurements to total cost
Tracking utilities almost continuously translates directly into euros and tons of CO₂. Compressed air, boilers, industrial cooling, and peak demand should be handled first. Placing sensors in the right spots on these utilities clearly generates quick wins.
Field example: on an aerospace site, compressed air consumption dropped by 10% after identifying a drift and adjusting settings. The result came from robust measurement and disciplined execution, not a sophisticated algorithm.
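To make this concrete, here is a minimal sketch of how such a drift can be caught: compressed-air flow outside production hours approximates leak load, so a rising off-shift baseline flags a drift worth investigating. Tag structure, thresholds, and figures are illustrative assumptions, not the site's actual setup.

```python
# Minimal sketch: flag compressed-air drift from off-shift flow readings.
# Shift hours, thresholds, and the readings structure are illustrative assumptions.
from datetime import datetime
from statistics import mean

def off_shift_baseline(readings, shift_hours=range(6, 22)):
    """Average flow (Nm3/h) outside production hours: a proxy for leak load."""
    off = [flow for ts, flow in readings if ts.hour not in shift_hours]
    return mean(off) if off else 0.0

def drift_alert(current_baseline, reference_baseline, tolerance=0.10):
    """True when off-shift consumption exceeds the reference by more than 10%."""
    return current_baseline > reference_baseline * (1 + tolerance)

# Usage: compare this week's off-shift baseline to the commissioning reference.
readings = [(datetime(2024, 5, 6, 3), 142.0), (datetime(2024, 5, 6, 4), 139.5)]
if drift_alert(off_shift_baseline(readings), reference_baseline=120.0):
    print("Compressed-air drift: check for leaks and regulator settings.")
```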
Quality: detecting drifts before scrap, rework, and delays
Real-time quality creates value when monitoring focuses on parameters correlated with major defects. Monitoring everything creates noise. Monitoring product–process correlations creates margin.
Railway case: by consolidating data, a critical torque value was spotted in time. Result: less rework and reduced non-conformity risk. The system saved time for quality teams without replacing their know-how.
Supply chain: traceability and operational visibility without pointless latency
The issue is not tracking every part to the second, but identifying moments when a few minutes of delay translate into losses. Receiving, kitting, point-of-use consumption, and shipping come first. The rest is often “nice to have.”
Near-instant visibility of shortages and possible substitutions reduces micro-stoppages and limits inflated “just in case” inventories that too often end up obsolete. Digital doesn't eliminate variability, but it prevents discovering it too late.
III. Connected from day one: premature complexity that delays go-live
Because the infrastructure is modern, you want to activate everything immediately. This reflex confuses technical capability with operational maturity. Go-live requires stable processes and operational skills.
Noise replaces signal: too many alerts, fewer useful decisions
From day one, some systems generate hundreds of alerts per day. When 95% lead to no concrete action, teams end up filtering everything—including the 5% that are truly dangerous. Thresholds therefore must be calibrated, approved, and tied to simple, explicit response procedures.
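As a sketch of that discipline, the following hypothetical alert only fires after several consecutive out-of-limit samples and always carries its agreed response, so no alert lands without an action. Limits, sample counts, and the procedure text are assumptions for illustration.

```python
# Minimal sketch: a debounced threshold alert tied to an explicit response.
# The limit, sample count, and procedure text are illustrative assumptions.
from collections import deque

class DebouncedAlert:
    """Fire only after `n` consecutive out-of-limit samples, and always
    attach the agreed response procedure so no alert lands without an action."""
    def __init__(self, limit, n, response):
        self.limit = limit
        self.window = deque(maxlen=n)
        self.response = response

    def update(self, value):
        self.window.append(value > self.limit)
        if len(self.window) == self.window.maxlen and all(self.window):
            return f"ALERT (>{self.limit}): {self.response}"
        return None

# Usage: three consecutive readings above 85 °C trigger one actionable alert.
alert = DebouncedAlert(limit=85.0, n=3, response="stop feed, call maintenance L2")
for temp in [84.0, 86.1, 87.3, 88.0]:
    msg = alert.update(temp)
    if msg:
        print(msg)
```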
Ramp-up: the process sets the pace, not digital
Ramp-up follows physics first. Machine capability, settings, changeover time, operator stability, and supplier quality set the pace. Digital can accelerate after the fact by making drifts and micro-stops visible. But it will not compensate for an unstable product–process pair or a poorly defined bottleneck.
A digital twin or MES brings analytical power, provided standards are stabilized, shop-floor data is reliable (tags, timestamps, units), and decision rules already exist. Otherwise, you get a “connected” factory that talks a lot and produces little.
Automating too early: when the algorithm complicates shop-floor work
Sophisticated automation solutions work well only with clean, structured data and already-identified failure scenarios. Without these basics, you force overly rigid procedures on teams that have not yet stabilized their shop-floor reflexes.
In the field, the balance that works is to automate simple, repeatable decisions first, then leave ambiguous choices to human routines. The model can come later, when the process is mature.
IV. Industrial data infrastructure: aim for “ready” rather than “everything in real time”
Saying a plant is “data-ready” means being able to connect equipment, record history, put information back into context, and exploit it without heavy work. This approach reduces surprises at start-up and keeps an Industry 4.0 trajectory realistic. By contrast, activating every data stream on day 1 puts go-live under stress.
Budgets should aim for the blocks that last: data model, governance rules, system-to-system exchange capability, cyber protection, and ease of maintenance. Those foundations remain valid after one year… and after five. The rest changes—sometimes faster than your org chart.
Data model and governance: lay the foundation before optimization
Data without context is useless. OT tags must link to an equipment model, a product nomenclature, a production order, and a KPI definition. Without that structure, analysis stays artisanal.
Greenfield lets you impose conventions and a data dictionary from the design phase. It's invisible work, but it avoids months of data cleaning once production starts.
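A minimal sketch of what "data with context" can look like, assuming a hypothetical naming convention and identifiers: each reading carries its unit, equipment, order, and product link instead of arriving as a bare number.

```python
# Minimal sketch: contextualizing an OT tag so a raw value carries its
# equipment, product, order, and unit. All identifiers are illustrative.
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class TagReading:
    tag: str           # naming convention, e.g. SITE-AREA-LINE-EQUIP-SIGNAL
    value: float
    unit: str          # declared once in the data dictionary, not guessed later
    timestamp: datetime
    equipment_id: str  # link to the equipment model
    order_id: str      # link to the production order being run
    product_ref: str   # link to the product nomenclature

reading = TagReading(
    tag="LY1-ASM-L03-PRESS2-TORQUE",
    value=42.7,
    unit="N.m",
    timestamp=datetime(2024, 5, 6, 9, 14, 3),
    equipment_id="PRESS2",
    order_id="WO-88412",
    product_ref="BRKT-17A",
)
```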
Interoperability: ISA‑95, MES, ERP… without an overengineered monster
ISA‑95 is a standard that describes how to make factory management systems and enterprise systems work together. Put simply, it defines who does what between the ERP (planning and master data) and the MES (executing and tracking production on the shop floor), and what information to exchange (work orders, consumption, progress, quality). The MES must remain an execution tool. The ERP must keep the master data. When each tries to play the other's role, double entry and inconsistencies appear.
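In the spirit of that split (illustrative field names, not a literal ISA‑95/B2MML schema), the exchange can be pictured as two messages: the ERP sends the work order and keeps the master data; the MES reports execution.

```python
# Minimal sketch of the ERP/MES split in the spirit of ISA-95: the ERP owns
# planning and master data; the MES reports execution. Field names are
# illustrative assumptions, not a literal B2MML schema.
erp_to_mes_work_order = {
    "order_id": "WO-88412",          # the ERP remains master of the order
    "product_ref": "BRKT-17A",
    "quantity": 500,
    "routing_version": "R07",        # master data stays in the ERP
    "due_date": "2024-05-10",
}

mes_to_erp_execution_report = {
    "order_id": "WO-88412",
    "produced_good": 480,            # progress
    "produced_scrap": 6,             # quality
    "material_consumed": {"STL-COIL-3MM": 512.4},  # consumption (kg)
    "status": "in_progress",         # execution state, owned by the MES
}
```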
Latency, availability, maintainability: the criteria that survive the shop floor
Acceptable response time depends on the use case. Required availability must align with what is truly critical. Maintenance must remain doable by your teams or by partners you already manage. By ranking flows by criticality, you protect production and avoid chasing “zero defects” where the stakes are secondary.
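One way to make that ranking explicit is a simple criticality map; the flows, latency budgets, and availability targets below are illustrative assumptions, to be set per plant.

```python
# Minimal sketch: rank data flows by criticality instead of demanding
# "real time everywhere". Tiers and targets are illustrative assumptions.
flow_criticality = {
    # flow                      (max latency, availability target)
    "machine_interlocks":       ("10 ms",  "99.99%"),  # safety: OT network only
    "energy_peak_demand":       ("1 s",    "99.9%"),
    "quality_critical_torque":  ("1 s",    "99.9%"),
    "bottleneck_microstops":    ("1 min",  "99.5%"),
    "inventory_levels":         ("15 min", "99%"),
    "carbon_reporting":         ("1 day",  "95%"),     # batch is good enough
}
```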
V. The 80/20 rule: invest where capacity really gets blocked
In a new plant, 20% of equipment often explains most throughput losses and quality variability. Prioritizing these assets gives direct leverage on capacity.
This method reduces the number of systems to master and avoids turning commissioning into a permanent IT project. The goal is not to have “everything”; the goal is to ship volume.
Bottlenecks: instrument what caps the rate
The bottleneck sets the pace, not the org chart. Measure its real availability, its micro-stops, and their causes with a simple taxonomy. You get direct leverage on rate and lead time. In factory construction, the bottleneck is not always a premium machine: often it's a manual station, an inspection, or a test bench.
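A minimal sketch of that measurement, with an assumed cause taxonomy and invented durations: availability on the bottleneck plus a Pareto of micro-stop causes, enough to decide what to fix first.

```python
# Minimal sketch: availability and a micro-stop Pareto from a simple event
# log on the bottleneck. The cause taxonomy and durations are illustrative.
from collections import Counter

shift_minutes = 480
# (cause_code, duration_min): one row per stop, using a simple taxonomy
stops = [("JAM", 4), ("NO_PART", 7), ("JAM", 3), ("SENSOR", 2), ("JAM", 5)]

downtime = sum(duration for _, duration in stops)
availability = (shift_minutes - downtime) / shift_minutes

pareto = Counter()
for cause, duration in stops:
    pareto[cause] += duration

print(f"Availability: {availability:.1%}")   # 95.6% on this sample
for cause, minutes in pareto.most_common():
    print(f"{cause}: {minutes} min")         # JAM first: fix that cause first
```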
Safety and compliance: monitoring what engages liability
On safety, there is no room for approximation. Temperatures, ATEX zones, machine interlocks, and HSE incidents must be reported without delay, precisely time-stamped, and stored traceably for audits. When building a new plant (greenfield project), ATEX classification, zoning, certified equipment selection, emergency stop placement, and evacuation scenarios are decided on the drawings. That's where safety distances, exit accessibility, and compliance are set.
Energy: target the loads that really weigh on costs
Equipping every electrical feeder and every utilities loop with sensors quickly drives up cost, then maintenance workload. When building a factory, start with what really moves the bill: compressed air, cooling production, steam, peak demand. Then add sub-metering by workshop to link a drift to its cost and trigger targeted actions.
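To illustrate the "link a drift to its cost" step, here is a sketch with assumed baselines and tariffs: sub-meter readings per workshop become a monthly euro figure that can trigger a targeted action.

```python
# Minimal sketch: turn a sub-meter drift into a cost figure per workshop.
# The baseline, tariff, and volumes are illustrative assumptions.
TARIFF_EUR_PER_KWH = 0.15

def drift_cost(measured_kwh_per_unit, baseline_kwh_per_unit, units_per_month):
    """Monthly cost of the drift vs. the commissioning baseline."""
    excess = max(0.0, measured_kwh_per_unit - baseline_kwh_per_unit)
    return excess * units_per_month * TARIFF_EUR_PER_KWH

# Usage: a workshop consumes 2.9 kWh/unit vs. a 2.6 baseline at 20,000 units.
cost = drift_cost(2.9, 2.6, 20_000)
print(f"Drift cost: {cost:,.0f} EUR/month")  # 900 EUR/month on these figures
```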
VI. Deployment roadmap: from design to control, in iterations
A Factory 4.0 is built in stages. The roadmap starts by defining physical and information flows, then expands digital use cases at the pace of operational maturity. The principle: measure, understand, control.
From monitoring to control: IoT, MES, and operating routines
Monitoring alone creates the illusion of control. Control requires management routines: reviews, root-cause analyses, CAPA decisions, reaction rules. The MES and IoT must feed these routines, not replace them.
Wave-based deployment: expand after flow stabilization
Splitting the launch into phases avoids putting commissioning under pressure.
Phase 1: energy utilities and constraining stations.
Phase 2: high-impact quality points and traceability that truly serves operations.
Phase 3: advanced improvement and fine optimization.
At each step, confirm measurable indicators and reduce exposure to drift.
Measuring results: performance, costs, lead time, carbon footprint
Linking digital tools to quantified gains is non-negotiable. OEE, capacity, scrap rate, lead time, consumption, and emissions are the baseline.
Setting a starting point and a target from the design phase cuts short endless debates about real impact.
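For reference, OEE is the product of availability, performance, and quality; the baseline figures below are illustrative, set at design time so the target debate stays short.

```python
# Worked example: OEE as the product of its three standard factors.
# The figures are illustrative, fixed at the design-phase baseline.
availability = 0.90   # run time / planned production time
performance  = 0.85   # actual rate / ideal rate
quality      = 0.98   # good parts / total parts (first pass)

oee = availability * performance * quality
print(f"OEE baseline: {oee:.1%}")  # 75.0% here; the target is set against this
```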
VII. Mistakes to avoid in connected factory construction
Failures rarely come from a lack of sensors. They come from confusing modernity with efficiency. A plant can pile up technology and still fail to produce at the planned rate.
Confusing a “ready” infrastructure with real time everywhere
Data-ready is not the same as immediately activating all data streams.
Over-equipping and over-configuring slows ramp-up.
What to do? Building a scalable architecture, then activating real time on the assets that move the KPIs, is the solid compromise.
Instrumenting everywhere… and understanding nowhere
Adding sensors without structuring data quickly turns the site into a notification machine. Teams spend their days reconciling dates, units, and links between systems. Defining naming conventions and a data dictionary early reduces the worst debt: the one that makes information unusable.
Changing tools before stabilizing operational standards
A tool does not erase the lack of standards. If routings and rules change constantly, the tool rigidifies learning and slows performance. Stabilizing standards before expanding tools remains the safest rule.
Bottom line: first build the infrastructure that lasts, then activate real time on the 20% of assets that generate 80% of the gain. Avoid premature complexity, validate each wave with KPIs, and link data to clear decisions.
Phase 1: utilities and constraints → kWh/unit, energy cost/hour, utilities downtime.
Phase 2: quality and useful traceability → scrap rate, rework rate, FPY (First Pass Yield: percentage of parts compliant on the first pass, without rework), reaction time to drift, batch completeness rate.
Phase 3: fine optimization → OEE on bottlenecks, rate vs target, lead time, WIP, shipping service level, cycle time, micro-stop rate.
This path delivers a new factory capable of increasing output, reducing costs, and improving on-time delivery.


