Welcome back to my ongoing series exploring the fundamental differences between Information Technology (IT) and Operational Technology (OT), which I call the “8 Ps of OT”.
This installment focuses on how these two worlds handle their hardware. The contrast is stark, pitting IT's planned, dynamic, and generalist approach against OT's static, purpose-built, and long-term mindset. It's a key reason why securing these environments requires two very different strategies.
For IT, hardware is a consumable asset. The systems we use, from laptops and desktops to servers and network gear, are part of a planned obsolescence cycle. Organizations typically refresh these devices on a three-to-five-year rotation; for example, they might replace a quarter of their employee laptops each year, so the entire fleet turns over roughly every four years and there is a steady stream of new, more powerful machines. While not always "bleeding edge," these incoming systems consistently offer significantly more CPU power and RAM than the machines they replace.
A new IT system is purchased with as much power as possible because upgrades to individual components rarely happen after deployment. The same "over-buying" philosophy applies to network infrastructure, such as the port density and backplane capacity of firewalls and switches. In IT, a reasonable over-provisioning of resources has proven to be a wise investment, as demands on the network and systems inevitably grow over time.
This approach is made possible by IT's reliance on generic hardware designed to run a wide array of applications. The move from serial ports to Universal Serial Bus (USB) ports, for instance, was a seamless transition for the IT world, as most business applications were not tied to that legacy connectivity.
OT operates under a completely different paradigm. OT cyber assets are often expected to last for the entire lifetime of the physical asset they control, which could be decades. It's not uncommon to find essential systems still running on small, DIN rail-mounted computers using ancient operating systems like Windows NT 4.0 or Windows Embedded.
When hardware fails in these systems, the standard procedure is to source "new-old-stock" hardware (identical, decades-old components) to restore functionality. This ensures the system's configuration remains frozen in time, which is why some bank Automated Teller Machines (ATMs) and industrial control systems still run on versions of OS/2 Warp.
OT systems don't "over-buy" in the IT sense. Instead, operators often wisely keep multiple offline spare units on the shelf. The greatest fear for an OT professional is the day a specific hardware unit becomes obsolete and unobtainable. In some cases, the system's timing is so critical that a modern, faster processor would literally break the production process. For a wave-flow circuit board tank, for instance, the timing might be matched to a very specific Intel 486SX 25 MHz processor. In cases like these, which are not unusual, engineers have been known to source faster chips (50 MHz or 75 MHz parts) and intentionally clock them down to 25 MHz to match the system's requirements.
OT hardware is often purpose-built and highly specific. The loss of legacy serial ports upon the advent of USB was a significant problem for many OT systems that relied on these connections for communication with industrial machinery.
IT computing resources have become highly elastic and dynamic thanks to virtualization and hypervisors. If a virtual machine needs more processing power or RAM, a few clicks can instantly reallocate resources from the underlying hardware. This ability to scale and adapt on demand is a cornerstone of modern IT operations.
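To make that elasticity concrete, here is a minimal sketch of a live resource change on a virtualized IT workload. It assumes a KVM host managed through the libvirt Python bindings; the domain name "app-server-01" and the target sizes are purely hypothetical examples, not details from any particular environment.

```python
# Minimal sketch, assuming a KVM/libvirt host and the libvirt-python bindings.
# The domain name "app-server-01" and the target sizes are hypothetical.
import libvirt

NEW_VCPUS = 4                      # desired number of virtual CPUs
NEW_MEMORY_KIB = 8 * 1024 * 1024   # desired memory: 8 GiB, expressed in KiB

conn = libvirt.open("qemu:///system")       # connect to the local hypervisor
dom = conn.lookupByName("app-server-01")    # locate the running virtual machine

# Resize the live guest; the hypervisor reallocates host CPU and RAM on the fly,
# with no change to the underlying physical hardware.
dom.setVcpusFlags(NEW_VCPUS, libvirt.VIR_DOMAIN_AFFECT_LIVE)
dom.setMemoryFlags(NEW_MEMORY_KIB, libvirt.VIR_DOMAIN_AFFECT_LIVE)

print(f"{dom.name()}: now allotted {NEW_VCPUS} vCPUs and {NEW_MEMORY_KIB // 1024} MiB RAM")

conn.close()
```

Live changes like these only succeed if the guest's configured maximums already allow the new values and the guest supports CPU hotplug and memory ballooning; otherwise the same changes can be written to the persistent configuration and picked up at the next reboot. Either way, the point stands: in IT, capacity is a dial, not a purchase order.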
While some core OT systems in control centers have begun to adopt this virtualization philosophy, the vast majority of mid-tier and low-level systems remain static and difficult to upgrade. Their purpose-built, "frozen-in-time" nature means they are not easily virtualized or dynamically changed. This fundamental difference in hardware philosophy has profound implications for how we secure these environments, a topic we will explore in a future blog post.
In our next post, we'll look at how the parameters (aka datapoints) in IT and OT differ dramatically, and how that gap will widen even further as AI systems enter the picture. Stay tuned!
Joe Baxter, Network Architect, IT & OT Battle-scarred Veteran
Experience the simplicity of BlastShield to secure your OT network and legacy infrastructure.