4 Systems, 4 Ship Dates, 1 Problem

SHIPMENT DELAYED

“Sarah, what’s the ETA on the Johnson order?”
“Which date do you want?”
Mike looks at her. “What?”
“ERP says the 15th.” She’s already scrolling. “But they sent an email Friday – here, the 22nd.”
“Okay so–”
“Wait, I wrote down the 18th after I called them Monday.” She opens her Excel tracker. “Yeah, the 18th.”
“So the 18th.”
“Let me just check the portal real quick… Okay portal says the 20th.”
Mike just stares at her.
“Want me to call them?”
“No, I’ll call them.”
Ten minutes later: “They said the 25th.”
Sarah adds it to the tracker. “I’ll tell IT to update the ERP.”
“When?”
“I don’t know, next week probably.”

Mike adjusts the schedule. ERP still says the 15th. Sarah’s tracker says the 25th. The email says the 22nd. The portal hasn’t updated.

This isn’t a data quality problem. This is Tuesday.

The System of Record Is a Lie

For decades, the manufacturing playbook has been clear: pick a system of record, tell everyone to use it, maintain it religiously.

ERP is the source of truth for suppliers. MES is the source of truth for production. Everything else consists of temporary, messy workarounds that, in theory, are eventually added to the proper system.

Except they never are.

Suppliers update via email, buyers track changes in Excel, planning uses local spreadsheets, and quality keeps PDFs in a siloed compliance portal.

This is our reality.

Yet we keep calling it a problem. We launch projects to clean and centralize the data. We keep pretending that if we were more disciplined, the truth would live in one place.

It won’t.

Distributed work means distributed data. The question isn’t how to fix that. The question is how to work with it.

Static Files Are First-Class Citizens

A PDF from a supplier isn’t a workaround. It’s a primary source.

The traditional approach treats static files as second-class data that needs to be cleaned and entered into the system of record. This creates lag. The PDF arrives Monday, someone opens a ticket Tuesday, IT processes it Thursday. By then there’s a newer email with another change.

Better infrastructure reads static files directly. The PDF arrives and the system extracts the ship date. The email comes in and the system recognizes it’s about PO #12345. Sarah’s Excel file updates and the system catches the change.

Not as a one-time migration step. As part of the ongoing operation.
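As a minimal sketch of what “reading static files directly” could mean: pull a PO number and a ship date out of free-form supplier email text and turn them into a structured field update. The regex patterns, field names, and `extract_update` function are all hypothetical, not the API of any real system.

```python
import re
from datetime import datetime

# Illustrative patterns; real supplier emails would need far more robust parsing.
PO_RE = re.compile(r"PO\s*#?(\d+)")
DATE_RE = re.compile(r"(\d{4}-\d{2}-\d{2})")

def extract_update(email_text: str, received_at: datetime):
    """Turn one supplier email into a structured field update, or None."""
    po = PO_RE.search(email_text)
    date = DATE_RE.search(email_text)
    if not (po and date):
        return None  # not every email carries a usable update
    return {
        "po": po.group(1),
        "field": "ship_date",
        "value": date.group(1),
        "source": "email",
        "observed_at": received_at,  # when we saw it, for recency ranking later
    }

update = extract_update(
    "Re: PO #12345 - revised ship date is 2025-06-22, apologies.",
    datetime(2025, 6, 13, 9, 30),
)
# update["po"] == "12345", update["value"] == "2025-06-22"
```

The same shape of output could come from a PDF extractor or a spreadsheet diff, which is what lets downstream logic treat all four sources uniformly.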

But if data lives in four places, how do you know which one to trust?

Map by Field, Not by System

Stop asking which system owns supplier data. Start asking which system owns which field.

Here’s what that could look like:

  • Legal name → ERP (what invoicing needs)
  • Approved parts → MES (what production checks)
  • Current lead times → Sarah’s Excel tracker (updated weekly from supplier calls)
  • Ship date for PO #12345 → Yesterday’s email
  • Compliance certs → PDFs in the portal

Once you map by field instead of forcing everything into one system, you can start building infrastructure that pulls fields together from where they actually live.

Truth isn’t in a single system. It emerges from connecting the most accurate data across all of them.
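The bullet list above can be sketched as a field-level ownership map: each field names the system that owns it, and a record is assembled by reading every field from its owner. All names and sample values here are illustrative assumptions, not a real schema.

```python
# Hypothetical field-level ownership map, mirroring the bullet list above.
FIELD_OWNERS = {
    "legal_name": "erp",
    "approved_parts": "mes",
    "lead_time": "excel_tracker",
    "ship_date": "latest_email",
    "compliance_certs": "portal",
}

# Per-system snapshots for one supplier; note ERP's stale ship date.
snapshots = {
    "erp": {"legal_name": "Johnson Industrial LLC", "ship_date": "2025-06-15"},
    "mes": {"approved_parts": ["P-100", "P-204"]},
    "excel_tracker": {"lead_time": "18 days"},
    "latest_email": {"ship_date": "2025-06-22"},
    "portal": {"compliance_certs": ["ISO9001.pdf"]},
}

def assemble(field_owners, snapshots):
    """For each field, read from the one system that owns it."""
    return {
        field: snapshots[system].get(field)
        for field, system in field_owners.items()
    }

record = assemble(FIELD_OWNERS, snapshots)
# record["ship_date"] comes from the email, not the stale ERP value
```

The point of the sketch is the shape, not the code: ownership is declared per field, so no single system has to win every argument.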

Stop Fighting the Chaos. Connect It.

Back to Mike and Sarah. When Mike asks for a ship date, Sarah checks one place for the answer. Not because everything lives there, but because all four sources are connected and updated with the latest information.

The email is read automatically. Sarah’s Excel update is noted. The portal syncs in the background. Mike sees the current date with full lineage.

One question. One answer.
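One way to get that single answer with full lineage, sketched under the assumption that every source update is timestamped when it is observed: the freshest observation wins, and the rest are kept as lineage so Mike can see where each date came from. The data and the `resolve` function are illustrative.

```python
from datetime import datetime

# All four sources report a ship date for the same PO, each with the
# time we observed it (hypothetical values matching the opening scene).
observations = [
    {"source": "erp",    "value": "2025-06-15", "observed_at": datetime(2025, 6, 1)},
    {"source": "excel",  "value": "2025-06-18", "observed_at": datetime(2025, 6, 10)},
    {"source": "portal", "value": "2025-06-20", "observed_at": datetime(2025, 6, 11)},
    {"source": "email",  "value": "2025-06-22", "observed_at": datetime(2025, 6, 13)},
]

def resolve(observations):
    """Pick the most recently observed value; keep everything as lineage."""
    ordered = sorted(observations, key=lambda o: o["observed_at"], reverse=True)
    return {"current": ordered[0], "lineage": ordered}

answer = resolve(observations)
# answer["current"] is the Friday email's date; answer["lineage"] shows
# every source and when it was last heard from.
```

Recency is the simplest possible conflict rule; a real system might also weight source reliability or prefer a confirmed phone call over an automated feed.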

Manufacturing truth is distributed. You can keep trying to force it into one system, spending months on projects that are outdated before they finish. Or you can build infrastructure that reads Excel, PDFs, emails and portals as easily as it reads ERP. That maps by field, not by system. That connects distributed sources in hours, not months.

The next time Mike needs a ship date, he’ll have the right answer. Everything is connected.
