Designing systems that actually work where work actually happens
There’s a persistent gap in modern software: the difference between what looks good in a demo and what survives contact with the real world.
In isolation, mobile apps are clean. APIs are predictable. Simulations are controlled. But once you connect those systems to physical equipment, human workflows, and production constraints, things get messy—fast.
Bridging that gap is where the real engineering lives.
The Illusion of “Done”
It’s easy to feel like a system is complete when:
- The UI is responsive
- The API returns correct data
- The simulation behaves as expected
But none of that guarantees the system works in production.
In operational environments—manufacturing floors, distribution centers, industrial automation—“done” means something else entirely:
- Does it tolerate bad data from a PLC?
- Does it fail gracefully when a network drops?
- Does it reflect reality, not just intent?
- Does it help someone make a decision in seconds, not minutes?
A system isn’t done until it aligns with how work actually happens.
Where Software Meets the Physical World
When you start integrating with PLCs, SCADA systems, and real equipment, you lose the luxury of assumptions.
Data isn’t always clean. Timing isn’t always consistent. And the system you’re building is no longer the source of truth—it’s just one participant in a much larger ecosystem.
In my own work, this becomes especially clear when tying together:
- PLC data streams (often noisy, sometimes delayed)
- Visualization layers like FactoryTalk Optix
- Simulation environments like Emulate3D
- Downstream analytics like OEE tracking
Each layer has its own version of reality. Your job is to reconcile them without introducing friction.
Simulation Is Not Reality (But It’s Close Enough to Matter)
Tools like Emulate3D are powerful because they let you model systems before they exist physically. You can test logic, validate flows, and identify bottlenecks early.
But simulation has limits:
- It assumes ideal inputs
- It doesn’t fully capture operator behavior
- It rarely reflects edge-case failures
The mistake is treating simulation as proof instead of preparation.
The right approach is to use simulation to:
- Validate system design
- Stress test logic paths
- Identify obvious inefficiencies
Then assume reality will still surprise you—and design for that.
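One cheap way to act on that last point is to never feed your logic the clean simulated stream directly; wrap it in a fault injector first. A minimal Python sketch, where the injector, the rates, and the None-for-dropped-sample convention are all illustrative assumptions:

```python
import random

def fault_injector(readings, drop_rate=0.05, noise=0.1, seed=42):
    """Wrap an iterable of simulated sensor readings with injected faults:
    random drops (yielded as None) and multiplicative noise."""
    rng = random.Random(seed)
    for value in readings:
        if rng.random() < drop_rate:
            yield None  # simulated dropped sample
        else:
            yield value * (1 + rng.uniform(-noise, noise))

# Feed a clean simulated stream through the injector and verify that
# downstream logic tolerates the degraded version.
clean = [10.0] * 100
degraded = list(fault_injector(clean))
dropped = sum(1 for v in degraded if v is None)
print(f"{dropped} of {len(degraded)} samples dropped")
```

If a logic path only passes with the clean stream, the simulation told you something useful before the real equipment did.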
The Role of Visualization: More Than Just a Dashboard
Platforms like FactoryTalk Optix aren’t just about displaying data—they’re about translating system state into something actionable.
A good visualization layer should answer, instantly:
- What is happening?
- What just changed?
- What needs attention?
Too often, dashboards become passive. They show data but don’t guide decisions.
In operational environments, that’s a failure.
If an operator has to interpret multiple screens, cross-reference values, or mentally reconstruct system state, the software isn’t helping—it’s adding cognitive load.
Clarity beats completeness every time.
PLC Data: The Source of Truth (and Noise)
PLC data is foundational—but it’s not perfect.
You’ll encounter:
- Rapid state changes
- Inconsistent tagging
- Latency between systems
- Edge cases that weren’t anticipated
Treating PLC data as clean input is a mistake. It needs to be:
- Normalized
- Buffered
- Interpreted
And sometimes, questioned.
For example, a machine might report “running,” but downstream conditions indicate it’s effectively idle. That distinction matters—especially when calculating metrics like OEE.
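That “running but effectively idle” distinction can be made mechanical. A minimal Python sketch, where the class name, the part-count signal, and the 30-second idle window are illustrative assumptions rather than any vendor’s convention:

```python
from collections import deque

class EffectiveState:
    """Derive an 'effective' machine state from a raw PLC status bit.

    A machine that reports running but has produced no parts within
    `idle_window` seconds is classified as IDLE rather than RUNNING.
    """

    def __init__(self, idle_window=30.0):
        self.idle_window = idle_window
        self.part_times = deque()

    def record_part(self, ts):
        self.part_times.append(ts)
        # Drop timestamps that have aged out of the window.
        while self.part_times and ts - self.part_times[0] > self.idle_window:
            self.part_times.popleft()

    def classify(self, plc_running, now):
        if not plc_running:
            return "STOPPED"
        recent = [t for t in self.part_times if now - t <= self.idle_window]
        return "RUNNING" if recent else "IDLE"

state = EffectiveState(idle_window=30.0)
state.record_part(ts=100.0)
print(state.classify(plc_running=True, now=110.0))   # part 10s ago -> RUNNING
print(state.classify(plc_running=True, now=150.0))   # no part in 50s -> IDLE
print(state.classify(plc_running=False, now=150.0))  # PLC says stopped -> STOPPED
```

The point isn’t this particular heuristic; it’s that the interpretation step is explicit code you can inspect, not an assumption buried in a dashboard.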
OEE: Useful, But Only If You Respect the Context
Overall Equipment Effectiveness (OEE) is often treated as a definitive metric. Availability, performance, quality—it sounds comprehensive.
But OEE is only as good as the data and assumptions behind it.
If:
- Downtime isn’t categorized correctly
- Micro-stops are ignored
- Cycle times are inaccurate
Then OEE becomes misleading.
The goal isn’t just to calculate OEE—it’s to make it meaningful.
That means:
- Aligning definitions with actual operations
- Ensuring data reflects real conditions
- Providing context alongside the number
A 75% OEE could be acceptable—or a red flag—depending on the system.
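The arithmetic behind the number is the easy part. A sketch of the textbook formula in Python, with shift figures invented purely for illustration:

```python
def oee(planned_time, run_time, ideal_cycle, total_count, good_count):
    """Textbook OEE: availability x performance x quality.

    All inputs in consistent units (here, seconds and parts). The hard
    part is upstream: how downtime is categorized and how ideal_cycle
    is chosen determine whether this number means anything.
    """
    availability = run_time / planned_time
    performance = (ideal_cycle * total_count) / run_time
    quality = good_count / total_count
    return availability * performance * quality

# 8h planned, 6h running, 30s ideal cycle, 600 parts made, 570 good:
value = oee(planned_time=28800, run_time=21600,
            ideal_cycle=30, total_count=600, good_count=570)
print(f"OEE = {value:.1%}")  # -> OEE = 59.4%
```

Note that every input to this function embeds a definition someone had to agree on; the formula just multiplies those decisions together.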
Designing for Real Workflows
One of the most common failures in software is designing for how work should happen instead of how it does happen.
In practice:
- Operators take shortcuts
- Processes vary by shift
- Exceptions are the norm, not the edge case
Good systems don’t fight this—they accommodate it.
That means:
- Allowing flexibility without losing structure
- Supporting imperfect inputs
- Designing interfaces for speed, not ideal usage
Mobile tools, in particular, need to reflect this reality. They’re often used in motion, under time pressure, with limited attention.
If your mobile experience assumes focus and precision, it’s already misaligned.
APIs as Contracts, Not Just Endpoints
APIs are often treated as technical plumbing. But in systems like these, they represent contracts between layers:
- PLC → middleware
- Middleware → mobile
- Simulation → live system
A fragile API introduces risk across the entire stack.
Key considerations:
- Idempotency (can requests be safely retried?)
- Tolerance for partial data
- Clear versioning and backward compatibility
In operational systems, you don’t always control when clients update. Your API needs to handle that reality.
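One concrete answer to the idempotency question is to key side effects on a client-supplied token. A minimal in-memory Python sketch; the Idempotency-Key idea is a common pattern rather than any particular platform’s API, and a production version would persist and expire keys:

```python
# Cache results per idempotency key so a retried request is answered
# without re-executing its side effect.
executed = {}

def handle_request(idempotency_key, payload, apply_side_effect):
    if idempotency_key in executed:
        return executed[idempotency_key]  # safe replay: no second effect
    result = apply_side_effect(payload)
    executed[idempotency_key] = result
    return result

# Hypothetical side effect: incrementing a counter.
counter = {"value": 0}

def increment(payload):
    counter["value"] += payload["by"]
    return counter["value"]

first = handle_request("req-001", {"by": 5}, increment)
retry = handle_request("req-001", {"by": 5}, increment)  # network retry
print(first, retry, counter["value"])  # side effect applied exactly once
```

With this shape, a mobile client on a flaky shop-floor network can retry freely without double-counting anything downstream.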
Reliability Over Elegance
There’s a temptation to optimize for clean architecture, modern frameworks, or clever abstractions.
But in environments where software intersects with physical systems, reliability wins.
Every time.
That means:
- Favoring predictable behavior over clever solutions
- Handling failure paths explicitly
- Logging in ways that help diagnose real issues
A slightly “uglier” system that works consistently is more valuable than a perfectly abstracted one that fails under pressure.
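What handling failure paths explicitly can look like in practice, as a sketch; the function names, retry policy, and log wording are illustrative assumptions:

```python
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("plc-poll")

def poll_with_retry(read_fn, attempts=3, base_delay=0.01):
    """Bounded retries, a log line per failure that names the attempt,
    and an explicit 'no data' outcome instead of an unhandled exception
    surfacing somewhere deep in the stack."""
    for attempt in range(1, attempts + 1):
        try:
            return read_fn()
        except ConnectionError as exc:
            log.warning("read failed (attempt %d/%d): %s", attempt, attempts, exc)
            time.sleep(base_delay * attempt)  # simple linear backoff
    log.error("giving up after %d attempts", attempts)
    return None  # explicit 'no data', for the caller to handle

# Hypothetical flaky read that succeeds on the third attempt.
calls = {"n": 0}

def flaky_read():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("timeout")
    return 42

value = poll_with_retry(flaky_read)
print(value)
```

Nothing here is clever, and that’s the point: when a network drop happens at 2 a.m., the log tells you exactly what was attempted and what the system decided to do about it.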
Closing the Gap
Bridging software and operational reality isn’t about any single tool or technology. It’s about alignment.
Alignment between:
- What the system thinks is happening
- What the data says is happening
- What is actually happening
And that alignment requires:
- Iteration in real environments
- Feedback from actual users
- A willingness to adjust assumptions
The most effective systems aren’t the most advanced—they’re the ones that fit.
They respect constraints. They reflect workflows. They deliver value in the moments that matter.
That’s the standard.
Everything else is just a demo.