Schrödinger’s Cat and the War of Machines

Many variations of the classic “cat in a box” experiment, first posed by Erwin Schrödinger in the earlier part of the 20th century, have intrigued scholars of quantum theory. In its simplest form the experiment consists of a closed box that contains a cat and a poison in a glass container. Should the fragile container break, as a result of, for example, a sudden movement of the cat, the poison would inevitably kill the animal. Yet, without opening the box and looking inside, it is impossible to tell whether the cat lives or not. In other words, the state of the system (box, cat, and poison) is unknown, and one must regard both eventualities (i.e. the cat lives or the cat dies) as equally likely states. (It does not matter here that Schrödinger actually devised the experiment to make the point that such an assumption is ridiculous and flawed; the experiment has survived the test of time to demonstrate the exact opposite of what Schrödinger intended to show!)

Today we live surrounded by many systems whose state we imagine we understand fully: we get up in the morning to an alarm clock that we set the night before, fully relying on it doing as told and ringing at the set time; we get into our car on the way to work, relying on the steering, the brakes, the electronic device controlling the ABS braking system, and so on, to perform as designed.

In the age of interconnectivity, increased integration, and complexity, these assumptions can no longer be taken for granted. Let us for a moment re-imagine Schrödinger’s Cat experiment in the age of the Internet of Things. In today’s version of the experiment, we have three boxes. All boxes are of equal size, black (not that it matters), and have a light bulb at the top.

The first box contains a simple mechanical switch. The user can toggle the switch and, as one would expect, the bulb will light up as the switch enters the ‘ON’ position and go off in the ‘OFF’ position. As in child’s play, we immediately and fully understand the function of the box.
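To make the contrast between the three boxes concrete, here is the first box as a few lines of Python: a minimal sketch of its single, fully visible rule (the function name is ours, not part of the experiment).

```python
# Box 1, sketched as code: the entire behaviour is one visible rule.
# A reader of this function knows everything there is to know about the box.
def box_one_bulb(switch_on: bool) -> bool:
    """The bulb state is fully determined by the switch position."""
    return switch_on

assert box_one_bulb(True) is True    # 'ON'  -> bulb lit
assert box_one_bulb(False) is False  # 'OFF' -> bulb dark
```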

The second box has no switch; the bulb comes on and off seemingly at random. On closer inspection and with some experimentation, the user discovers a light sensor on the side of the box, which appears to turn on the bulb as the sensor gets shielded from ambient light sources. In addition, a motion sensor inside the box appears to toggle the light on as the box is moved. In other words, through experimentation, inspection, and guesswork the user can figure out the internal workings of the second box, and with some probability predict the behaviour of the light bulb.
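Box two, sketched in the same style, might look as follows. The sensor names and the threshold are invented for illustration, but the point stands: the rule is more complex, yet still discoverable from the outside.

```python
# Box 2 as a sketch: the bulb now depends on hidden sensor state.
# Sensor names and thresholds are assumptions for illustration only.
def box_two_bulb(ambient_light_lux: float, box_is_moving: bool) -> bool:
    """The bulb lights when the light sensor is shielded or the box is moved."""
    SHIELDED_THRESHOLD_LUX = 5.0  # assumed: below this, the sensor counts as covered
    return ambient_light_lux < SHIELDED_THRESHOLD_LUX or box_is_moving

# An experimenting user can still map inputs to outputs:
assert box_two_bulb(ambient_light_lux=300.0, box_is_moving=False) is False
assert box_two_bulb(ambient_light_lux=1.0, box_is_moving=False) is True
assert box_two_bulb(ambient_light_lux=300.0, box_is_moving=True) is True
```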

The third box has no switch and no perceivable sensing device; the bulb comes on and off at random times, and despite their best efforts the user is not able to detect a pattern or influence the behaviour of the light bulb. Shaking, shielding, or shouting at the box all appear to make no difference to the flickering light emitted by the bulb. The secret of the box is an embedded controller which, in this thought experiment, is attached to the internet, communicating with other devices, servers, web services, etc., in search of a timing signal that it can visualise using the light bulb. Maybe the box is looking for an hourly time signal to emit an ‘hour count’ the way church towers do; maybe it is watching the stock market to signal up- or down-movements of your employer’s stock. We don’t know. The system can be in any of an effectively unbounded number of states that we cannot guess or understand without inspecting the code on the microcontroller, and the code on every other machine that our microcontroller communicates with.
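A caricature of box three’s controller might look like the sketch below. The endpoint and the decision rule are entirely made up, and that is precisely the point: without reading this code, and the code on the remote end, no observer could reconstruct it.

```python
import time
import urllib.request

# Box 3 as a sketch: the bulb depends on state that lives outside the box.
# The URL and the decision rule are purely hypothetical; the real controller
# could be consulting any service, with any logic, invisible from outside.
SIGNAL_URL = "http://example.com/timing-signal"  # hypothetical endpoint

def box_three_bulb() -> bool:
    """The bulb state is decided by data fetched from some remote machine."""
    try:
        with urllib.request.urlopen(SIGNAL_URL, timeout=2) as response:
            remote_state = response.read().strip()
    except OSError:
        return False  # no network, no signal: bulb stays dark (one possible choice)
    return remote_state == b"1"

# From the outside, the user sees only an apparently random flicker:
for _ in range(3):
    print("bulb:", "ON" if box_three_bulb() else "OFF")
    time.sleep(60)
```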

What is the essence of all of this?

In the age of the Internet of Things, or Industry 4.0, it is impossible to fully understand the behaviour of our systems, or systems of systems. As machines talk to each other, ensuring the privacy of the exchanged information may actually turn out to be one of the lesser concerns, compared to the possibility of systems being corrupted by erroneous or outright malicious information received from other systems.

This is not as far-fetched as it might seem.

Most online map apps derive traffic information, and sometimes even new road layouts, from the location data of their users. Your private shortcut from home to work is not that private any more! But the only guard against a violator driving the wrong way up a one-way street, and thereby teaching the map a road that does not exist, is likely the law of large numbers, at least as long as there are many more law-abiding drivers than violators (sorry, India).
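A minimal sketch of that statistical guard might look as follows; the thresholds and the majority rule are assumptions for illustration, not how any particular map provider actually works.

```python
from collections import Counter

# Crowd-sourced map inference guarded by the law of large numbers:
# a street direction is only accepted when a large majority of traces agree.
MIN_TRACES = 100       # assumed: don't infer anything from thin data
MAJORITY_SHARE = 0.95  # assumed: require near-unanimity before trusting the crowd

def infer_one_way_direction(observed_headings: list[str]) -> str | None:
    """Return a direction if the traces overwhelmingly agree, else None."""
    if len(observed_headings) < MIN_TRACES:
        return None
    direction, count = Counter(observed_headings).most_common(1)[0]
    if count / len(observed_headings) >= MAJORITY_SHARE:
        return direction
    return None  # too many 'violators' in the data: refuse to infer

# 990 law-abiding drivers drown out 10 wrong-way violators:
traces = ["north"] * 990 + ["south"] * 10
assert infer_one_way_direction(traces) == "north"
```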

The list of instances where small system outages take down entire networks and corporations is long and growing every day. Remember the day an airline didn’t fly because a maintenance, route planning, or boarding system failed? One favourite example actually dates back to the famed “Year 2000” transition, and it might still be the only documented case of the millennium bug causing economic damage. It actually happened in 1990: a fully integrated warehouse of a well-known fast food chain began to receive supplies with expiration dates beyond the year 2000. The fully automated warehouse logistics system, at that time not “Year 2000 compliant”, determined the goods to be vastly out of date (“1900”) and initiated their fully automatic disposal. The logistics system then proceeded, again fully automatically, to order more supplies. Apparently the cycle continued several times before the supplier intervened and questioned the fast food chain about the apparent dramatic surge in usage of the ingredient.
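The failure mode is easy to reproduce. Below is a sketch of the kind of two-digit-year comparison that can produce exactly this behaviour; the date format and the disposal step are illustrative guesses, not the actual system’s code.

```python
# A two-digit-year comparison of the kind behind the warehouse story above.
# The date format and the disposal logic are illustrative guesses.
def is_expired_buggy(expiry_yymmdd: str, today_yymmdd: str) -> bool:
    """Compare dates as two-digit-year strings: the millennium bug."""
    return expiry_yymmdd < today_yymmdd  # '010315' (2001) sorts before '900615' (1990)!

today = "900615"               # 15 June 1990
expiry_beyond_2000 = "010315"  # 15 March 2001, printed with a two-digit year

if is_expired_buggy(expiry_beyond_2000, today):
    print("Goods flagged as expired since 1900: dispose and reorder.")
```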

How can we protect against such failures?

First and foremost, there is no way back. Interconnectivity has brought us the internet and feature-rich systems that adjust dynamically and automatically to changes in their environment, and ultimately to our needs. The modern world will not work without it. The answer is therefore not to turn off these capabilities, but to understand the risks. In our minds, we must consider two fundamental aspects:

1 – System boundaries.

We should clearly understand and define WHAT the overall system is: how far it reaches, whether all components within it are trusted, and how far our private information may flow. As data, control, or communication traverses the defined trust and privacy boundaries (and the two need not coincide!), information should be constrained, filtered, or anonymised, and control information must be subject to special scrutiny.
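As a sketch of what such boundary enforcement could look like in code (the field names and the pseudonymisation choice are assumptions, not a prescription):

```python
import hashlib

# Filtering at a trust/privacy boundary: data leaving the defined boundary is
# reduced to an approved set of fields, and identifying fields are pseudonymised.
# Field names and the hashing choice are assumptions for illustration.
ALLOWED_OUTBOUND_FIELDS = {"timestamp", "road_segment", "speed_kmh", "user_id"}

def cross_privacy_boundary(record: dict) -> dict:
    """Constrain and pseudonymise a record before it leaves the trusted system."""
    outbound = {k: v for k, v in record.items() if k in ALLOWED_OUTBOUND_FIELDS}
    if "user_id" in outbound:  # send a pseudonym instead of the raw identity
        digest = hashlib.sha256(str(outbound["user_id"]).encode()).hexdigest()
        outbound["user_id"] = digest[:12]
    return outbound

record = {"timestamp": "2016-05-01T08:12:00", "road_segment": "A40-17",
          "speed_kmh": 52, "user_id": "alice", "home_address": "1 Example Lane"}
print(cross_privacy_boundary(record))  # home_address never crosses the boundary
```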

2 – Control.

When systems take decisions of importance, be they economic or literally life-or-death decisions, no single system should be entrusted with full control. Where it matters, IT systems are duplicated: multiple systems, programmed by different teams using different algorithms, check each other, and only if they agree is a decision implemented. As an example, look at aviation and the Airbus A320 flight control systems. As driverless cars begin to appear in our neighbourhoods, let us hope that their designers have taken similar precautions. They might not have.
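In code, the agreement requirement reduces to something like the voting sketch below; the three controllers stand in for independently developed implementations and are, of course, invented for illustration.

```python
from collections import Counter

def vote(decisions: list[str]) -> str | None:
    """Implement an action only when a strict majority of systems agree."""
    action, count = Counter(decisions).most_common(1)[0]
    return action if count > len(decisions) // 2 else None

# Hypothetical stand-ins for three independently programmed controllers;
# their thresholds deliberately differ, as different teams' algorithms would.
def controller_a(sensor: float) -> str: return "brake" if sensor > 0.80 else "cruise"
def controller_b(sensor: float) -> str: return "brake" if sensor > 0.75 else "cruise"
def controller_c(sensor: float) -> str: return "brake" if sensor > 0.90 else "cruise"

reading = 0.85
decision = vote([controller_a(reading), controller_b(reading), controller_c(reading)])
print(decision or "no agreement: fall back to a safe state")  # -> "brake"
```

Note the design choice in the last line: when the duplicated systems disagree, the sketch falls back to a safe state rather than letting any single system decide alone.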