engine of joy
mikel evins

What do you mean, you're an anarchist?

tags: politics 

Once in a while it comes up in conversation that I consider myself an anarchist. Most of the time when people learn that, they have one of three reactions:

  1. I’m kidding (or exaggerating, or engaging in some kind of hyperbole for effect)
  2. I’m serious, and crazy
  3. I’m serious, and they want to know what the heck I mean and how I can hold that position

Well, I’m not kidding, and as far as I know, I’m not crazy, so maybe I should say what I mean and how and why I hold the position I do.

Part of the problem is that “anarchist” is a loaded word, as are its cognates “anarchy” and “anarchism”. Usually when people mention “anarchy” in casual conversation, they mean violent social disorder. To be clear, I do not advocate violent social disorder.

Other meanings people commonly have in mind are lawlessness, or refusal to be involved in organizations or to honor social conventions. I don’t advocate any of those things, either.

Finally, people may think of the familiar trope of “bomb-throwing anarchists”, and I’m not one of those, either.

When I say I am an anarchist, I mean something fairly specific: I mean that I don’t believe in the legitimacy of state authority.

Those are more loaded words that probably don’t do a good job of saying what I mean, so I’ll unpack them some more.

By “state” I mean something approximating the definition given by the German sociologist Max Weber: a human community that successfully claims a monopoly on the legitimate use of physical force within a given territory.

Another way to say it is that a state is a gang that dominates or excludes all competing gangs in its territory, claims that its rule is legitimate, and gets away with it.

My objection is that claiming your rule is legitimate in the absence of anyone capable of challenging you is not enough to actually make your rule legitimate. It may make you the de facto ruler; it does not therefore make you the legitimate ruler.

Many people argue, in effect, that somebody has to be in charge, therefore whoever actually is in charge is the legitimate authority. I don’t buy it. That would mean that there is no such thing as a justified insurrection or revolution, that any kind of regime change is necessarily illegitimate.

I think not. We can point to various rebellions that are generally held to be justified. Some of them are regarded as the roots of legitimate social orders. The obvious example is the American Revolution, a rebellion against an overweening state that is generally seen as the birth of a new (and legitimate) social order, but there are others, as well.

My point here is that even people who argue in this way usually allow that at least some rebellions are right and proper. That can’t be true if mere de facto rulership is enough to confer legitimacy.

So there must be something that distinguishes legitimate from illegitimate social orders. I think that something is voluntary contract.

If you have entered voluntarily into an agreement, and if it’s not an agreement that reasonable people would find unconscionable or coerced, then you have an obligation to abide by it. If you fail to abide by it, then you owe some reasonable compensation for your failure.

But if someone particularly strong and forceful comes along and tells you that this is now his territory and you are obliged to follow his rules, that doesn’t obligate you to do so. That’s not a legitimate agreement.

Even if that ruler has overwhelming force on his side, it doesn’t mean he’s in the right claiming jurisdiction over you. It may be in his power to command you, but that doesn’t therefore make it within his rights. You may comply to avoid serious consequences, but that doesn’t make you wrong if you don’t.

We are bound to a compact to the degree that we have knowingly and voluntarily entered into it and agreed to be bound by it. We owe fulfillment of promises knowingly made. We do not owe fulfillment of aspirations imposed on us without our knowledge or consent.

That doesn’t mean it’s a good idea to violate all of the norms that other people seek to hold us to. Some of them are just good sense. Basically, don’t make war on your neighbors unnecessarily. Doing so is wicked and foolish.

Some conventions that you are asked to obey are not necessarily right or just, but refusing would carry a high cost without prospect of a compensating benefit. In other words, you may think the rules imposed by the state are unjust or foolish, but it’s equally foolish to flout them for no good purpose, especially if you stand little chance of succeeding.

Why bother to think this way? What does it accomplish?

I run into that objection sometimes. The simplest answer I can make is that it’s the way I do think, and it happens naturally, without any particular calculation on my part. I could as easily ask someone, “Why don’t you think this way?”

But I can make another argument.

Sometimes people object that a peaceful anarchy is impossible and will never happen, so it’s pointless to hold the position that I do.

I think there are valid objections to that claim, but I’d rather observe that a person making this kind of argument probably doesn’t really believe it, or its ramifications.

Approximately no one thinks robbery and murder are okay. Approximately no one thinks they will ever be entirely eliminated once and for all. Does that mean that we should approve of them? Of course not.

In the same way, believing that the state will always be around does not mean you must therefore approve of it. I do not approve.

Wishing for a more voluntary basis for legitimate authority is not entirely a pipe dream. There is historical precedent in legal systems based on voluntary courts. One example is that of medieval Iceland. Another is the Lex Mercatoria. Others include the Xeer courts of old Somalia and the Anglo-Saxon hundred courts, and there are more besides.

These are systems of law based on voluntary agreement and negotiated settlements that existed for centuries without strong central authorities to enforce their provisions. In Weber’s terminology, they are examples of stateless societies. There was no singular entity claiming a monopoly on legitimate use of force in a defined territory. Instead, such legal systems often overlapped with neighboring, competing ones, functioning more like a market for legal protections than a state.

Once it becomes clear that such systems have actually existed, a common objection is that they must have been inferior because they didn’t last forever. Well, no system lasts forever, and if it did, that wouldn’t make it good in any way other than longevity. The despotism of Pharaonic Egypt lasted for millennia, but that doesn’t mean we should prefer it.

Still, as I said, voluntary legal systems have sometimes lasted for centuries. In one case, such a system lasted into modern times: the present day international system of commercial law is essentially the Lex Mercatoria under another name.

Were voluntary legal systems paradise? Of course not. Neither is any other known legal system.

They were better in some ways than what we are currently accustomed to, and probably worse in others. But the authority wielded by a voluntary court in adjudicating voluntary agreements is an authority I can believe in.

“Might makes right” isn’t.

On sailing

tags: stoicism 

Living is sailing.

When you sail, you are borne here and there by vast forces that are outside your control. You’re carried by the sea, propelled by the wind, and guided by the sun and stars. None of these elements of nature pay the slightest attention to your wishes.

The wind and sea may at any moment turn against you. They may swamp you or dash you against an unfamiliar shore. They may bear you far off course and maroon you some place far from everything you know. If you sail long enough, you can be sure that eventually the sea will take you.

Still, you can become a skillful and accomplished sailor if you’re wise and patient. If you don’t waste your effort trying to control the wind and waves, but instead learn them, if you practice riding out their tantrums and working with them rather than against them, then you can learn to sail anywhere you want to go.

Often you won’t follow exactly the route you planned, because the wind and the waves will have something to say about it. Maybe you won’t arrive on time. Still, people can and do sail all over the world. Some have sailed right around it.

You can’t tell the sun how hot to be, or the waves how steep, or the wind how stiff. You can’t command shores not to crush your boat.

But you can learn how to read the sea and how to navigate. You can learn how to care for your vessel, how to keep it shipshape, when to put in to port, how to prepare for a stiff blow. You can learn to keep it properly stocked and in good repair so that it gives you the best possible chance to ride out rough weather.

My life is a small vessel in a vast ocean. It’s at the mercy of immense forces, but I can still work on it every day. I can batten down the hatches, replenish the stocks, swab the decks. I can study the charts, watch the weather, and trust the wind and waves to do what the wind and waves always do. I can keep an eye on the sun and stars and practice my skill with sail and tiller.

I’ll get blown off course. I’ll take wrong turns. I’ll discover things I can’t do, and there will be places I never make it to before the sea takes me. But there are also many things I can do and many places I can visit. A tiny person in a tiny boat can sail all over the world with a little skill and the patience to wait for a favorable wind.

On young-earth creationism

tags: science 

Once, around ten years ago, someone in an online discussion said that they were raised as a young-earth creationist, but had become curious about the scientific account of the natural world and how it worked. They wanted to know what the problems with young-earth creationism were, from a scientific perspective.

I offered them the best answer I could manage. Here it is:

The first thing you have to do is set aside God as an explanation. Sure, if you assume there’s an omnipotent, omniscient, supernatural creator, then you can explain anything by saying, “It’s God’s will.” The problem is, that doesn’t really explain anything. It doesn’t tell you anything that you didn’t already know.

Take infectious disease, for example. It’s been a scourge of the human race since prehistory. Saying, “It’s God’s will,” or “the will of the gods,” for thousands of years didn’t help people discover the causes or cures for diseases. What did help was examining the natural world with a curious mind, and seeking to understand what was found.

Curious people found out that there are tiny organisms, too small to see, and they’re everywhere. Some of them can colonize our bodies, and some of those cause diseases. There are also chemicals we can use to counter those invading organisms, and those chemicals help us defend ourselves against many diseases.

We didn’t learn those things from saying “disease is God’s will”—regardless of whether it is or isn’t. We learned them from looking at the evidence that the natural world gives us and reasoning about what we found.

So the first thing you do is set God aside. Then you collect evidence and see where it leads you. You try to find natural explanations for the evidence that you find.

So far, that approach has been spectacularly successful.

From that perspective, there are some serious problems with young-earth creationism.

For example, living things are made of lots of carbon, hydrogen, oxygen, and nitrogen, and smaller amounts of other things. There’s more than one kind of carbon, each identified by its mass number. You can find both carbon 12 and carbon 14 in living things.

It turns out that carbon 14 is radioactive, which means that it falls apart, turning into atoms that aren’t carbon 14 anymore. This happens at a predictable rate: if you keep a lump of carbon 14 around for 5730 years, you’ll find out that half of it isn’t carbon 14 anymore; it’s turned into nitrogen.

So that should mean that eventually all the carbon 14 should disappear, right? It should all turn into nitrogen. Except that radiation from space striking the earth’s upper atmosphere keeps making more of it at a pretty steady rate. It rains down and mixes into the atmosphere and gets carried all over the world.

One kind of carbon is as good as another to living things, so plants absorb it, animals get it by eating the plants or each other, and it ends up in everything living.

This, too, happens at a steady, predictable rate: out of every trillion carbon atoms in a living creature, one of them is carbon 14.

But remember: carbon 14 is radioactive. It falls apart. After 5730 years, only one atom in two trillion will be carbon 14, unless the living thing keeps adding more.

Well, when creatures are alive, they do keep adding more, by consuming it from their environment. But when they die, they stop adding carbon 14. The carbon 14 doesn’t stop falling apart, though; it keeps on turning into nitrogen. The longer the corpse lies there, the more of it falls apart at that steady, predictable rate. So if you measure the carbon 14 in a corpse, you can use simple arithmetic to figure out how long ago it died.
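
If you want to see that arithmetic written down, here is a rough sketch of it as a little Common Lisp function. The function name is invented, and the ratios are just the illustrative numbers above:

(defun years-since-death (measured-ratio &optional (living-ratio 1e-12) (half-life 5730))
  "Estimate how long ago a sample died from the fraction of its
carbon that is still carbon 14. LIVING-RATIO is the ratio found in
living tissue (about one carbon 14 atom per trillion carbon atoms);
HALF-LIFE is carbon 14's half-life in years."
  ;; Each half-life cuts the ratio in half, so the age is the
  ;; half-life times the number of halvings, which is a base-2 log.
  (* half-life (log (/ living-ratio measured-ratio) 2)))

;; A sample with half the living ratio died about one half-life ago:
;; (years-since-death 0.5e-12)  =>  about 5730 years
;; A sample with a quarter of the living ratio died about two half-lives ago:
;; (years-since-death 0.25e-12) =>  about 11460 years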

When you do that with a lot of dead things, you find out that some of them died more than 6000 years ago. In fact, a whole lot of them died way more than 6000 years ago.

This isn’t the only way to find out how old things are, by the way. You can measure age by the radioactive decay of elements besides carbon. You can use other natural processes that happen at steady rates, like the deposition of silt, or the changes in magnetism recorded in rocks, or the rate at which minerals trap ions from the solar wind and cosmic rays. When you use several different measures of age and get close to the same result, you can be pretty confident that you know the age of the thing you’re measuring.

We’ve collected age measurements for a lot of things at this point. We have tons of animal and plant remains that are millions—or in some cases billions—of years old. We have minerals that are millions or billions of years old. We have astronomical evidence of things in space that are millions or billions of years old.

So now our naturalistic picture of the world needs to account for the fact that we’ve found all this evidence of very old things. For example, we need to account for living things that died a lot more than 6000 years ago.

Now, we could just say that God created the world with all that evidence already in it. That’s cheating, though. Remember: the rules are that we look for natural explanations. That’s science. If we resort to supernatural explanations then we’re doing theology, not science.

Also, we’d have to wonder why the creator planted gobs of evidence that the world is billions of years old, if he wanted us to believe that it isn’t, but that’s not science, either.

The simplest natural explanation for dead bodies more than 6000 years old is that the earth is more than 6000 years old.

If you run through the mountains of evidence that scientists have accumulated over the past five hundred years, you’ll notice a lot more situations like this, situations where you either say, “Well, God just made the world like that,” or else you try to think of a natural explanation. The natural explanations tend to contradict young-earth creationism.

The earth is more than 6000 years old because we can find things in it that have been around longer than that. Living things evolved from other, earlier living things because we’ve seen it happen in our present environment, and because we’ve found many remains of living things that are different from anything currently living, and because we’ve found sequences of genetic material from which we can infer ancestral relationships.

Human beings, for example, are apes because we have the skeletal structure of apes, the blood types of apes, the reproductive biology of apes, and the genetic makeup of apes. We know humans are apes in exactly the same way that we know chimpanzees are apes.

We can measure how similar or different two pieces of genetic material are, and from doing that we know that humans are closely related to chimpanzees and bonobos. We also know that there used to be other human-like species that are gone now, because we’ve found their tools and their remains.

We know that there were stars in the sky more than 6000 years ago because we know how fast light travels and we can measure the distances to the stars pretty well. Some stars are so far away that it’s taken more than 6000 years for the light to reach us. For example, the light from the Andromeda galaxy—the nearest large galaxy to our own—takes about two and a half million years to reach us.

You can still believe in young-earth creationism, if you want to. You just have to also believe that, for some reason, the creator planted evidence against it all over the place.

But if you want a natural explanation of what we see around us, then young-earth creationism just doesn’t do the job.

Programming as teaching

Most programming uses an approach that we might call programming as carpentry. We start with some idea of an artifact we want to build. We analyze the idea into a set of needed parts. We set to work on our workbench to craft the needed parts and assemble them into a finished product, then we examine and measure the result to determine how close we came to our goal.

Next we repeat the process to correct details that have fallen short of what we want, either because we made mistakes in construction, or because our original plan was wrong in some way.

There’s a different way to approach programming. We might call it programming as teaching. In this approach, we begin by starting up a runtime that already knows how to be a working program; it just doesn’t know how to be our particular application. By talking to the runtime interactively, we incrementally teach it the features we need it to have. When it knows how to provide all those features, we’re done. We can save the accumulated changes to an artifact, and that artifact becomes our product.

It’s less like nailing together an artifact on a workbench, and more like teaching a student a new set of skills.

Programming as carpentry is far better known and more widely used than programming as teaching. I’d venture to say that most programmers aren’t even aware that programming as teaching is an option. That’s a shame, I think, because for some people it’s a better option.

I don’t claim that the teaching paradigm is absolutely and objectively better. I claim only that it’s better for some people. Take me, for example: programming as teaching makes me much happier, faster, and more productive in my work.

I’m not the only one. There are many other programmers who prefer the teaching paradigm. The reason it’s an obscure minority paradigm is that there are even more programmers—vastly more—who are familiar with programming as carpentry.

The fact that most programmers don’t seem to even know that the teaching paradigm exists suggests to me that if it were more widely known, then more people would use it. What are the odds that everyone who prefers the obscure paradigm is already using it, when most people don’t know it exists?

In some sense, it doesn’t matter. The great majority of software is developed in the carpentry paradigm, and the industry prospers. Is it really important to evangelize a different paradigm? Probably not, at least for the sake of the industry as a whole.

On the other hand, I think it is important for the sake of us as programmers. If I hadn’t been exposed to programming as teaching, I probably never would have known how happy and productive I can be in my work. A world in which I know about the teaching paradigm is a better world, at least for me. If there are other programmers who would prefer the teaching paradigm, and who don’t know about it, then there’s a better world awaiting them. All they need is to find out about it.

So if programming as teaching is such a great idea—even if it’s only great for a minority of programmers—why isn’t it better known? I think it’s because programming as teaching requires some tools over and above what programming as carpentry requires. You have to expend extra work to make those tools available, and you have to know about them before you can decide to do that.

You can build a solid carpentry workbench without knowing anything about the teaching paradigm. The reverse isn’t true. A good programming-as-teaching system is going to need all the tools of the carpentry workbench. It’s going to need parsers and data structures and code-walkers and code generators. It’s going to need file I/O and performance tools and debuggers, and so forth. It’s going to need all that stuff, but it’s going to need some other stuff, too.

Programming as teaching means starting the application running before it’s defined, then defining and redefining features while it runs. It means inspecting the contents of its memory, stopping control structures in the middle of executing, and inspecting and changing and redefining their dynamic context. It means updating the definitions of functions that are pending on the stack and of datatypes that are used by existing instances, and relying on the application to keep working reasonably while you’re tinkering around in its guts.
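
To make that a little more concrete, here is a rough sketch of that kind of conversation at a Common Lisp prompt. The function names are invented for illustration:

;; Start the program before it is fully defined:
(defun launch ()
  (loop for request = (next-request)
        do (handle-request request)))

;; Calling (LAUNCH) now drops us into an interactive session,
;; because NEXT-REQUEST isn't defined yet. We can define it from
;; inside that session and tell the suspended call to try again:
(defun next-request ()
  (read))

;; The same goes for HANDLE-REQUEST. If we later change our minds
;; about how it should behave, we simply redefine it; the loop that
;; is already running picks up the new definition on its next
;; iteration, without restarting the program:
(defun handle-request (request)
  (format t "handled ~s~%" request))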

All of those features require substantial runtime support, and if you want it to work well, the runtime needs to be designed from the start to support it.

Smalltalk systems have thorough support for that kind of programming. So do Common Lisp and other old-fashioned Lisp systems. Outside Smalltalk, Common Lisp, and, to some extent, FORTH systems, there aren’t many development systems with the full suite of programming-as-teaching features.

Some of these features exist in some form in modern programming-as-carpentry systems, but they don’t amount to a programming-as-teaching system. To build a working programming-as-teaching system, you need to design the whole runtime from the ground up to support it. Let me give you an example of what I mean.

The ANSI Common Lisp standard defines a generic function named UPDATE-INSTANCE-FOR-REDEFINED-CLASS. When the runtime detects a value whose class has been redefined since it was instantiated, it automatically calls this function to restructure the value so that it conforms to the new definition. In effect, it retroactively makes the value an instance of the new definition instead of the old one.

You can specialize UPDATE-INSTANCE-FOR-REDEFINED-CLASS to correctly reinitialize instances to conform to their new definitions. If you don’t, then the runtime drops you into an interactive session in which you can provide the new initialization interactively. You can then resume execution and the affected value behaves as if it was originally instantiated from the updated definition.
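
Here is a rough sketch of what that looks like in practice. The class, slot names, and values are invented for illustration; only the generic function itself comes from the standard:

;; A point class defined with rectangular coordinates, and an
;; instance made from that definition:
(defclass point ()
  ((x :initarg :x :accessor point-x)
   (y :initarg :y :accessor point-y)))

(defparameter *p* (make-instance 'point :x 3.0 :y 4.0))

;; Later, without restarting, we redefine the class to use polar
;; coordinates instead...
(defclass point ()
  ((rho   :initarg :rho   :accessor point-rho)
   (theta :initarg :theta :accessor point-theta)))

;; ...and teach the runtime how to carry old instances forward.
;; The property list passed in holds the names and values of the
;; discarded slots:
(defmethod update-instance-for-redefined-class :after
    ((instance point) added-slots discarded-slots plist &key)
  (declare (ignore added-slots discarded-slots))
  (let ((x (getf plist 'x))
        (y (getf plist 'y)))
    (when (and x y)
      (setf (point-rho instance)   (sqrt (+ (* x x) (* y y)))
            (point-theta instance) (atan y x)))))

;; The next time *p* is touched, the runtime updates it in place:
;; (point-rho *p*)  =>  5.0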

If you’re coming from programming-as-carpentry, you might reasonably wonder why you would ever want a function like that at all, much less why you would want it to be part of a language standard. But Common Lisp was designed by experienced Lisp users and implementors. They were steeped in the practice of programming as teaching (though they didn’t call it that; they just called it “programming”).

To the designers of Common Lisp, the normal way of developing a program was to start the Lisp and then teach it, definition by definition, how to be the program they wanted. They wouldn’t expect to have to restart their Lisp just because they redefined something; that’s silly. Redefining things was nearly all they did! No, their expectation was that you redefine things and keep going. It’s the job of the runtime to adapt in a reasonable way to the changes that you tell it about. It’s also the job of the runtime to ask you for help when it doesn’t know what to do.

UPDATE-INSTANCE-FOR-REDEFINED-CLASS is one element of a whole standard protocol defined by ANSI Common Lisp for changing and redefining things while the program you’re developing continues to run. The standard describes facilities for defining classes and functions, updating bindings, catching errors by dropping into an interactive session where you can inspect and change everything about the dynamic environment, then tell the function where the error occurred to resume execution with the new definitions, and generally for accomplishing every aspect of building and deploying a program through an interactive conversation with the running program itself.

Smalltalk systems have the same kind of design.

This kind of development environment is a whole that is greater than the sum of its parts. To get it, you have to design the language and runtime from the start to support it. You can’t convert a carpentry system to a teaching system by patching in one feature at a time.

Very few development toolchains support the full suite of programming-as-teaching features. Most of the ones that do have been around for a long time. Newer languages and runtimes are mostly designed without any knowledge or understanding of the whole-system programming paradigm that is embodied in old Lisp and Smalltalk systems. Even some newer Lisps have been designed without that understanding.

I worry sometimes that programming-as-teaching is fading away, and I feel sad that my favorite paradigm might someday disappear altogether.

I don’t think that, if it does, it’ll be the end of software development, or anything apocalyptic like that. I do think that I’ll miss it when it’s gone, and I think it’ll be sad if new generations of programmers never have the opportunity to experience it.

There’s something magical about gradually turning a program into the application you want by talking to it while it runs. For some fraction of programmers, it’s the best way of working. I’d hate to lose it.