Talk to anyone who runs a plant for a living and you’ll hear the same sentence about ransomware: “We’d be okay for a few hours, but a full day would cost us the quarter.” That sentence is the entire threat model. In manufacturing, the bill isn’t the ransom. It’s the line, and what the line was supposed to ship.
And yet most plant security programs are still running 2018’s playbook: more sensors, more alerts, more dashboards, more “visibility.” Detection is helpful, but at this point it’s the equivalent of installing a better smoke alarm in a house with no fire doors. The alarm goes off. The fire still spreads.
When the line goes down, everything else gets expensive
Manufacturing ransomware is unique because it’s an operational event before it’s a data event. Encrypted spreadsheets are inconvenient. Encrypted controllers, scrubbed historians, frozen MES instances — those stop production in ways that ripple outward by the hour:
- Idle headcount — operators on the floor with nothing to do, fully on the clock.
- Spoiled inventory — work-in-progress that ages out of spec the longer it sits.
- Late-shipment penalties — contracts that quietly add up to seven figures by the second day.
- Customer trust — the part where you find out who has dual-sourced you.
That’s why the answer is never just “buy more EDR licenses.” It’s designing the plant so that when (not if) something gets in, it can’t reach the line.
They don’t break in. They log in.
Modern ransomware operators rarely zero-day their way through a perimeter firewall. They buy or phish a valid login — typically a vendor account with broad VPN access, a remote-support credential that was supposed to expire in 2022, or a contractor laptop that’s been roaming on the corporate network without MFA. From there, they live off the land:
- Enumerate the network with native Windows tooling that no AV will flag.
- Pivot from the IT side to the OT side through a flat “engineering” subnet.
- Stage encryption payloads on file servers that historians and MES talk to.
- Detonate during a Friday evening shift when the on-call queue is shortest.
Multifactor authentication helps, but it’s not a complete answer. If the attacker is signing in with credentials your vendor legitimately owns, the system is doing exactly what it was designed to do.
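One practical consequence: if the credential is valid, the anomaly left to catch is *when* and *from where* it was used, not *whether* it authenticates. A minimal sketch of that check, assuming a per-vendor approved access window; the vendor names, field shapes, and window times here are illustrative, not from any specific product:

```python
from datetime import datetime, time

# Hypothetical per-vendor policy: the approved plant-time login window.
# A real deployment would pull this from the access-request system.
VENDOR_WINDOWS = {
    "acme-support": (time(8, 0), time(17, 0)),  # 08:00-17:00
}

def out_of_window(vendor: str, login: datetime) -> bool:
    """Flag a login that falls outside the vendor's approved window."""
    window = VENDOR_WINDOWS.get(vendor)
    if window is None:
        return True  # unknown vendor account: always flag
    start, end = window
    return not (start <= login.time() <= end)

# A Friday 22:14 login with a perfectly valid credential still gets flagged:
suspicious = out_of_window("acme-support", datetime(2024, 6, 7, 22, 14))
```

The point of the sketch is that this rule needs no malware signature at all; it only needs the access policy to be written down somewhere a script can read it.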
Why detection arrives late on a plant floor
OT networks weren’t designed to be loud. PLCs don’t talk much; when they do, they talk in protocols (Modbus, DNP3, OPC-UA) that most enterprise SIEMs don’t parse natively. By the time the SOC sees something anomalous on the OT side, the encryption job is usually 80% complete.
The hard truth is that visibility is necessary but not sufficient. Containment is what saves the line. Detection without containment just gives you a better post-mortem.
Containment over heroics
We frame our manufacturing engagements around one question: if an attacker is already inside, how far can they go before something stops them? Most plants we walk into can answer that honestly only after the first incident, which is the wrong time to learn. The good news: every meaningful control here is unsexy, durable, and cheap relative to a single day of downtime.
Six controls that quietly work
- A real IT/OT boundary. A jump host between corporate IT and plant OT, with explicit east-west allow rules, not a flat L2 network held together by tribal knowledge. This single control cuts off the most common ransomware path we see.
- Time-bound vendor access. Every external vendor account gets a calendar-bounded session, MFA, source-IP restriction, and full recording. If a vendor needs three weeks of access, they get exactly three weeks.
- Segmentation by function, not floor. Group OT assets by their actual role (extrusion line, packaging, QA) and put hard allow-lists between them. A breach in shipping shouldn’t reach the spec-controlled finishing cell.
- Immutable backups for the OT side. Snapshots of MES, historian, and recipe data on storage that can’t be deleted by a domain admin token, ever. Test the restore quarterly with the operators who’d actually do it at 3 AM.
- Risk-based patch cadence. You aren’t patching the PLC during a run. Fine. But you can patch the engineering workstations that program it, and you can air-gap the controllers that can’t be patched at all.
- One pre-rehearsed runbook. A single page, posted in the control room and the office: “If the line behaves strangely or alerts fire, here are the four people to call and the three switches to flip.” We help clients draft this in plain English and run a tabletop on it twice a year.
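The boundary and segmentation controls above reduce to one testable property: only explicitly allow-listed zone-to-zone flows exist, and everything else is a violation. A minimal sketch of that audit, assuming functional zones and a `(src_zone, dst_zone, port)` flow shape that are illustrative rather than tied to any particular firewall:

```python
# Explicit east-west allow-list between functional zones.
# Anything not listed here is a violation, by design.
ALLOWED_FLOWS = {
    ("engineering", "extrusion", 502),   # Modbus/TCP from eng workstations
    ("mes", "packaging", 4840),          # OPC-UA to the packaging line
}

def violations(flows):
    """Return observed flows that are not on the allow-list."""
    return [f for f in flows if f not in ALLOWED_FLOWS]

observed = [
    ("engineering", "extrusion", 502),   # expected
    ("shipping", "finishing", 445),      # SMB from shipping to finishing: bad
]
bad = violations(observed)
```

Running a check like this against real flow logs, even weekly, is how you learn the network drawing has drifted before an attacker does.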
The 30-day starter plan
If you’re reading this and your organization doesn’t have a clear answer to “how far can an attacker go,” you don’t need a six-month consultancy. You need four focused weeks:
- Week 1 — inventory every external account with access to anything plant-side. Disable the ones you can’t justify; MFA-and-time-bound the rest.
- Week 2 — diagram the IT/OT boundary as it actually is, not as the network drawing claims. Put the jump host in front of it.
- Week 3 — confirm an immutable copy of MES, historian, and recipe data exists, and that someone has restored from it in the last 90 days.
- Week 4 — run a 60-minute tabletop with operations, IT, and the GM. Rewrite whatever didn’t work the first time.
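Week 3’s confirmation is also the easiest to automate. A minimal sketch, assuming you record the timestamp of each successful test restore somewhere queryable; the data source and 90-day threshold mirror the plan above, but the function name is invented:

```python
from datetime import datetime, timedelta

MAX_RESTORE_AGE = timedelta(days=90)

def restore_is_fresh(last_restore: datetime, now: datetime) -> bool:
    """True if a successful test restore happened within the last 90 days."""
    return (now - last_restore) <= MAX_RESTORE_AGE

# Last verified restore of the historian snapshot, checked mid-June:
ok = restore_is_fresh(datetime(2024, 4, 1), now=datetime(2024, 6, 15))
```

Wire this to whatever pages the on-call rotation and the control stops depending on someone remembering to test.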
That’s it. No 90-page assessment, no panic-buy of a new platform. Just the calm, durable controls that turn ransomware from a line-stopping event into a Tuesday afternoon footnote.
If you want help running the assessment — or just a second pair of eyes on your IT/OT boundary — we’ll do it free for the first 90 minutes. We’ll bring the runbook template.