
Ways Stuff can Work

2021/07/27

... a very short essay on this, with a lot of simplifications.

You can make things work in two different ways: mostly in your head, or mostly iteratively. Most things are a combination of the two.

... mostly in your head

Let's say you want to mount a shelf to your wall. A civil engineer can tell you what kind of screws you'll need for what amount of load. You make some relatively simple calculations, drill the holes, and place the shelf. Throughout the process, there are no big surprises: you might want to take a look at the wall beforehand, but there aren't really that many kinds of wall anyway.
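The whole design fits in a few lines of arithmetic. A minimal sketch (all numbers are made up for illustration, not engineering advice):

    import math

    shelf_load_kg = 20.0      # books you plan to put on the shelf
    safety_factor = 4.0       # engineers over-provision on purpose
    screw_capacity_kg = 25.0  # hypothetical rating for one screw + plug

    required_capacity = shelf_load_kg * safety_factor
    screws_needed = math.ceil(required_capacity / screw_capacity_kg)
    print(f"use at least {screws_needed} screws")  # -> 4

No feedback loop needed: the numbers were validated long before you picked up the drill.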

The end result is predictable: you can explain the process to another person, and you are mostly aware of what's actually happening.

... mostly iteratively

You train a machine learning model. After 3 days, its error rate has dropped enough: it can now tell dogs and cats apart almost perfectly. In the process, you used 15,000 example pictures of dogs and cats.

Neural network training consists of incrementally adjusting weights so that the network's outputs move closer to the optimal ones ("dog" for dogs, "cat" for cats). In the beginning, you have no idea what adjustments you'll need. In the end, you get network weights that work really well. You yourself did not learn a whole lot from running this; if you wanted to create yet another neural network, it would take another bunch of training examples and weight adjustments, and 3 more days.
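The loop itself is short. A minimal sketch of the idea, with a toy one-number "image" (an invented ear-pointiness score) instead of real pictures:

    import math
    import random

    # Toy data: one made-up feature per "picture"; 1.0 = dog, 0.0 = cat.
    examples = ([(random.gauss(2.0, 0.5), 1.0) for _ in range(100)]
                + [(random.gauss(-2.0, 0.5), 0.0) for _ in range(100)])

    w, b = 0.0, 0.0  # weights that initially know nothing
    lr = 0.1         # how big each adjustment is

    for epoch in range(50):
        for x, label in examples:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # output in (0, 1)
            error = p - label    # distance from the "optimal" output
            w -= lr * error * x  # nudge each weight a little...
            b -= lr * error      # ...in the direction that shrinks the error

Nothing in the final w and b transfers to the next problem: a bird-vs-fish classifier would mean running the whole loop again.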

Here is one more example: evolution. It is an optimization process, implicitly optimizing "number of copies produced" just by letting copying happen... but it does this in an impressively stupid way: making random changes and seeing what happens when organisms built from the randomly modified DNA are exposed to the real world. (Most of the time, nothing good.)
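As a caricature in code (my sketch, heavily simplified; real evolution has populations, recombination, and no fitness function anyone wrote down):

    import random

    # "Genome": 32 bits. "Fitness": a made-up stand-in for
    # "number of copies produced".
    genome = [random.randint(0, 1) for _ in range(32)]

    def fitness(g):
        return sum(g)  # hypothetical: more 1-bits, more offspring

    for generation in range(10_000):
        mutant = list(genome)
        mutant[random.randrange(len(mutant))] ^= 1  # one random mutation
        if fitness(mutant) >= fitness(genome):      # "the real world" decides
            genome = mutant

Most mutations change nothing or make things worse, yet the loop still climbs.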

Optimization loops

So in both cases, it's really about a kind of optimization. There is an important difference, though.

When an engineer designs a simple artifact, parameters (type and size of screws, etc.) are picked and validated within the engineer's head, or on a piece of paper. You don't generally need to try mounting 270 different shelves to find the optimal combination; this has been done already, we learned from it, and now we can generalize. Maybe we never even needed the actual shelves: "physics" is a really good generalization of what's going on in the real world.

At least, in some special cases.

In some other cases, it's a lot harder to design something well. For example, SpaceX needs to blow up a number of Starships first, to gather the data that makes later Starships somewhat more reliable. They do a lot of optimization in their heads and computers (if it were pure evolution instead, we'd need millions of launches rather than a dozen or so), but you still need the larger feedback loop, too, to validate all the assumptions and produce input data.

The machine learning model is kind of an extreme case. We have absolutely no idea how to come up with network weights that would do a good job at dogs vs. cats in a math-y, top-down way; all we can do is mix up something random and keep adjusting it until it works.

Why is this important?

Well, take software engineering.

You might think that this is an excellent example of "stuff in our heads". It's a bit like math, after all: it's us who designed the operating system the program will run on, we are aware of what each line of code can do, and, unlike with a machine learning model, it's us writing the code. We even have standards, interfaces, etc., to make sure that the parts of the world our code interacts with are fairly well-defined, too.

So... programming should be an easy, top-down process, where we turn UML diagrams into code, right?

The problem is... the entire thing doesn't fit into our heads.

Plus, it's worse than with most engineering artifacts. If you're designing a plane, you can apply a simpler model that doesn't account for each molecule of paint on the nose cone; it'll produce roughly correct results as long as you don't attach something majorly silly to the plane that would have needed further modeling. With software though (especially where security is involved), any tiny part can break the entire thing.
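For a made-up but typical illustration of how tiny the breaking part can be, one flipped comparison is enough:

    # Hypothetical permission check. Everything else in the program can
    # be perfect, and this still hands deletion rights to strangers.
    def can_delete(user_id: int, owner_id: int, is_admin: bool) -> bool:
        return is_admin or user_id != owner_id  # bug: should be ==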

But it still feels like we're designing the entire program, just because we're writing parts of it, and we can imagine understanding what's going on in the rest, even if, at the moment, we don't.

If you scale this up to a large number of people, it starts behaving a lot like evolution or gradient descent. And this is where bugs stop behaving like individual design errors (that you can perhaps learn from) and start doing something that more closely resembles thermodynamics: they keep emerging due to the friction caused by new features, and you keep their numbers down by applying programmer pressure.
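A toy model of that equilibrium (my sketch, with invented rates; it measures nothing real):

    # New features inject bugs; "programmer pressure" removes a fixed
    # fraction of the open ones each week. All rates are invented.
    bugs = 0.0
    bugs_per_feature = 3.0
    features_per_week = 5
    fix_fraction = 0.2  # share of open bugs fixed per week

    for week in range(52):
        bugs += bugs_per_feature * features_per_week
        bugs *= 1.0 - fix_fraction

    print(round(bugs))  # ~60: a steady state above zero

As long as features keep coming, the steady state has bugs in it; more pressure lowers the equilibrium, it doesn't empty it.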

And, in the end, if you only see the end result, it'll look as if everything was working as intended.

Of course you can install Some Messy Operating System on Newest Hardware; it has the right CPU architecture! (... in practice: nothing worked until 20 people were dispatched to fix all the problems caused by random tech debt.)

Of course, that game will work on any OpenGL-capable video card! (... well, it would've, but we needed a workaround for that driver bug which no one anticipated... and then we optimized it a bit more after benchmarking everything.)

... conclusions?

Keeping things simple is nice.

It needs saying because we see complex things working all the time, so we tend to underestimate the amount of effort it takes to make complex things work.

We also overestimate how well our own part of the system works, and how simple a model of it everyone else can get away with (a.k.a. "leaky abstractions"), which makes the entire problem worse.

This is post no. 21 for Kev Quirk's #100DaysToOffload challenge.

... comments welcome, either by email or on the (eventual) Mastodon post on Fosstodon.