Voltage pushes current; resistance slows it down. But can we find something similarly elegant in IT infrastructure?
Computers and networks are more complex than a simple circuit. However, several concepts in networking, storage, and CPU performance behave similarly to Ohm’s law and can be modeled using comparable relationships. Below are practical, engineer-friendly analogies you can use when sizing, troubleshooting, or explaining systems.
Throughput, Latency, and Parallelism - the closest analogy
Across many IT systems, throughput is constrained by both latency and the degree of parallelism. A useful heuristic (essentially Little's law rearranged) is:

Throughput ≈ Parallelism / Latency
This pattern appears in CPU pipelines, network protocols, storage I/O schedulers, and databases.
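To make the heuristic concrete, here is a minimal Python sketch; the request counts and latencies are hypothetical, chosen only for illustration.

```python
# Throughput heuristic: throughput ≈ parallelism / latency.
# All numbers below are hypothetical, for illustration only.

def estimated_throughput(parallelism: int, latency_s: float) -> float:
    """Operations/s sustainable with the given concurrency and per-op latency."""
    return parallelism / latency_s

# 32 in-flight operations at 4 ms each:
print(estimated_throughput(32, 0.004))  # -> 8000.0 ops/s

# Doubling parallelism or halving latency has the same effect:
print(estimated_throughput(64, 0.004))  # -> 16000.0 ops/s
print(estimated_throughput(32, 0.002))  # -> 16000.0 ops/s
```

The same arithmetic applies whether the "operations" are instructions, TCP segments, or disk I/Os, which is why the pattern recurs across domains.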
How it maps to Ohm’s law
| Electricity | IT system |
| --- | --- |
| Voltage (U) - pushes current forward | Clock speed / line rate - pushes operations forward |
| Current (I) - flow of electrons | Throughput - flow of data or operations |
| Resistance (R) - slows current | Latency, locks, I/O wait, protocol overhead |
Note: this is not a physical law, but it is a powerful mental model for identifying bottlenecks. To increase throughput, raise the driving force, increase parallelism, or reduce the "resistance" (latency).
TCP Bandwidth-Delay Product (BDP) - a real formula
BDP (Bandwidth-Delay Product) is a networking term that tells you how much data “fits” in the network path at once.
Simple definition
BDP = Bandwidth × RTT
The network domain thus gives us a closely analogous, mathematically exact formula for achievable throughput:

Throughput ≤ TCP window size / RTT
Interpretation
- If RTT (round-trip time) increases, throughput decreases (for a given window).
- If you increase the TCP window size, throughput increases.
This is exactly analogous to I = U / R: the window size is the "voltage" pushing bytes, RTT is the "resistance," and throughput is the "current." BDP is crucial for WAN performance tuning, high-latency links, and large data transfers.
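A minimal Python sketch of both formulas; the 1 Gbit/s link, 80 ms RTT, and 64 KiB window are hypothetical example values.

```python
# BDP and window-limited TCP throughput, with hypothetical link parameters.

def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bytes 'in flight' needed to keep the pipe full: BDP = bandwidth x RTT."""
    return bandwidth_bps / 8 * rtt_s

def window_limited_throughput_bps(window_bytes: float, rtt_s: float) -> float:
    """Upper bound on TCP throughput for a given window: window / RTT."""
    return window_bytes * 8 / rtt_s

link_bps, rtt = 1e9, 0.080           # 1 Gbit/s WAN link, 80 ms round trip
print(bdp_bytes(link_bps, rtt))      # -> 10_000_000 bytes (~10 MB) must be in flight

# A classic 64 KiB window (no window scaling) caps throughput far below line rate:
print(window_limited_throughput_bps(64 * 1024, rtt) / 1e6)  # -> ~6.6 Mbit/s
```

The gap between ~6.6 Mbit/s and the 1 Gbit/s line rate is exactly why window scaling and buffer tuning dominate WAN transfer performance.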
Storage: Latency, IOPS, and Throughput
Block storage systems obey simple, practical relationships:

IOPS ≈ Queue depth / Latency
Throughput (MB/s) = IOPS × Block size
Consequences
- Lower latency → higher IOPS.
- Larger block sizes → higher MB/s for a given IOPS.
- Storage controllers, queue depths, and software stacks act like resistors: they limit how efficiently the "driving force" (I/O requests) converts into throughput.
Modern NVMe and in-memory storage reduce latency dramatically, so IOPS and throughput rise almost linearly with queue depth. Once the device is that fast, the software stack becomes the bottleneck, which is why queue tuning and software-stack optimization matter more than raw disk speed in high-performance systems.
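Here is a minimal sketch of these relationships in Python; the latency and queue-depth figures are hypothetical, roughly in the range of SATA- versus NVMe-class devices, and ignore interface limits.

```python
# Storage model: IOPS ≈ queue_depth / latency, throughput = IOPS x block size.
# Device figures are hypothetical, for illustration only.

def iops(queue_depth: int, latency_s: float) -> float:
    """I/O operations per second for a given queue depth and per-I/O latency."""
    return queue_depth / latency_s

def throughput_mbs(iops_value: float, block_size_bytes: int) -> float:
    """Throughput in MB/s implied by an IOPS figure and a block size."""
    return iops_value * block_size_bytes / 1e6

# SATA-class SSD: ~100 us latency, queue depth 32, 4 KiB blocks
sata = iops(32, 100e-6)
print(sata, throughput_mbs(sata, 4096))   # -> 320000.0 IOPS, ~1311 MB/s (model ceiling)

# NVMe-class device: ~20 us latency at the same queue depth
nvme = iops(32, 20e-6)
print(nvme, throughput_mbs(nvme, 4096))   # -> 1.6M IOPS: latency now dominates
```

Note how a 5x latency reduction translates directly into a 5x IOPS gain at the same queue depth; in practice the software stack eats part of that gain, which is the "resistor" the bullet list above describes.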
CPU performance: IPC, frequency, and cores
CPU performance is modeled multiplicatively rather than as a simple ratio:

Performance ≈ IPC × Frequency × Cores
Implications
- Improving IPC (instructions per cycle) or frequency raises single-thread performance.
- Adding cores increases throughput only if the workload is parallelizable.
- Thermal and power limits act as a practical "resistance": they throttle frequency and reduce gains.
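A minimal sketch of the multiplicative model in Python, extended with Amdahl's law to show the "parallelizable" caveat; all figures are hypothetical.

```python
# CPU model: performance ≈ IPC x frequency x cores, plus Amdahl's law for the
# limit on multicore scaling. Numbers are hypothetical, for illustration only.

def cpu_performance(ipc: float, freq_ghz: float, cores: int) -> float:
    """Aggregate instructions per second: IPC x frequency x cores."""
    return ipc * freq_ghz * 1e9 * cores

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Speedup limit when only part of the workload can use extra cores."""
    return 1 / ((1 - parallel_fraction) + parallel_fraction / cores)

print(cpu_performance(ipc=4, freq_ghz=3.5, cores=8))  # -> 1.12e11 instr/s

# 90% parallel workload: going from 8 to 64 cores gains surprisingly little
print(amdahl_speedup(0.90, 8))    # -> ~4.7x
print(amdahl_speedup(0.90, 64))   # -> ~8.8x, far from 64x
```

The serial fraction plays the same role as "resistance" here: no matter how much driving force (cores) you add, it caps the achievable current (throughput).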
Why these analogies matter
When troubleshooting or capacity planning, think in terms of identifying the component that behaves like a resistor:
- High latency? Look at network RTT (round-trip time), disk latency, or lock contention.
- Low driving force? Consider underclocked CPUs, limited link speeds, or slow storage media.
- Limited throughput? Possibly too few worker threads, small TCP windows, or insufficient queue depth.
The mental mapping "driving force / resistance → throughput" helps convert nebulous system behavior into specific actions: increase parallelism, reduce latency, or raise the driving signal (window size, CPU frequency, link speed).
Conclusion
There is no single "Ohm's law of IT". Instead, multiple domain-specific relationships behave similarly to I = U / R.
In networking (BDP, the Bandwidth-Delay Product), storage (IOPS and latency), and CPU performance (IPC × frequency × cores), simple formulas and heuristics let you reason about bottlenecks quickly and effectively.
If you design or maintain infrastructure, keeping these analogies in your toolbox will help you spot bottlenecks faster and make better design choices.