Music and programming have so much in common. It’s not a coincidence both have keyboards. In fact, the musical keyboard can illustrate how all programs work.
Until the computer, the most complex tool ever created was the pipe organ. The source of the sound starts out simple: pressurized air — poetically called “the wind” — flows into the pipes, which vibrate, creating sound.
The pipes also have to be connected to the keys (the mechanism is called the “action”). Early on, each key was connected to one or more pipes directly, by a wooden rod. This was simple, but also limiting — you couldn’t have more than a few sets of pipes, and the more air pressure, the harder the keys were to push down.
One partial solution, stops, turned off groups of pipes. The organist would choose a few groups to play together, making the keys easier to push (occasionally they would do the opposite, “pull out all the stops”, to make the instrument as loud as possible).
This still required the keyboard to be physically connected to the pipes. The ultimate solution used a process that software development calls “decoupling”.
Decoupling, like it sounds, means removing connections. Inessential connections. There’s no reason to want the keys hooked directly to the pipes. On a bicycle or a car, some might want direct manual control for the sport of it, but there’s no such benefit on a keyboard.
On the organ, the wind held the solution. A separate stream of wind was routed to tubes under the keyboard. Pressing a key allowed the wind to (silently) escape, which then caused a spring flap to open, letting full wind into the pipe. By balancing the strength of the spring against the keyboard-wind, the keys became almost as easy to press as a piano’s, and just as easy to hold.
This method had tons of advantages. The pipes could be placed far from the keyboard. The limit on pipes was now how much wind (and space, and money) you could spare. Stops became a matter of routing pneumatic tubing, leading to complex variations on grouping.
But like mechanical action, there’s nothing desirable about pneumatic action itself either. Running tubing to every key and hooking up the stops like a telephone exchange was serious effort. And I’m sure all that tubing required maintenance as the bass frequencies shook it loose. Electric action replaced pneumatic.
Eventually, even the wind and pipes weren’t strictly necessary. I have a fondness for a big, physical pipe organ, but a really good amplifier and speaker can do 98% of the job at 2% of the cost.
We’ve made everything easier and easier. Now we come to the point where we replace the input and output.
Once you’ve decoupled two things enough, they only communicate through what’s called an interface. The organ doesn’t know what is pressing the keys. The keyboard could be replaced by a player-piano type mechanism, or by an electronic sequencer. The _interface_ of the pipe action only needs to know (1) which pipe(s) to send wind to, and (2) at what time.
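That two-part interface is small enough to sketch in code. Here’s a toy illustration (the names are mine, not from any real organ-control standard): the action receives note events and never learns whether a human keyboard or a sequencer produced them.

```python
from dataclasses import dataclass

@dataclass
class NoteEvent:
    pipe: str      # (1) which pipe to send wind to
    time: float    # (2) at what time, in seconds

def drive(events, action):
    """The action only sees events — it doesn't know what made them."""
    for e in sorted(events, key=lambda e: e.time):
        action(e)

# Two interchangeable "players" behind the same interface:
keyboard_events = [NoteEvent("C4", 0.0)]                 # a human at the keys
sequencer_events = [NoteEvent(p, (i + 1) * 0.5)          # a mechanical sequencer
                    for i, p in enumerate(["E4", "G4"])]

opened = []
drive(keyboard_events + sequencer_events, lambda e: opened.append(e.pipe))
# opened is now ["C4", "E4", "G4"], regardless of which "player" sent what
```

Swapping the keyboard for the sequencer changes nothing on the pipe side — that’s the whole point of the interface.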
Then we can further unhook the output of the action from the amplifier and speaker, and connect it to a simulation of another interface.
There’s an interface inside your head, between your eardrum and the air. We know how the pipe physically vibrates the air, and if we calculate the acoustics, we have a simulation of what the organ sounds like in a room.
All modern software works by breaking things up into layers like this. It’s called abstraction. The “ear” layer doesn’t know anything about the “pipe” layer, which knows nothing about the “key” layer (which knows nothing about the “player” layer).
Drawing the layer boundaries in the wrong place can make an awful mess. But done right, abstraction has tremendous benefits, allowing massive variation to be tightly controlled.
Software development calls the result of good high-level abstraction “composition”. Which makes developers like composers.
It’s no coincidence.