Today’s topic is self-replicating machines, particularly von Neumann Probes and Berserkers. We will be looking at the basic concept of self-replicating machines and some misconceptions about them, then moving on to discuss those two specific types and some others.

There are also three things we should clear up about self-replicating machines right from the outset.

First, self-replicating machines do not have to be small. Second, we are arguably capable of making one right now. Third, self-replicating machines are not necessarily just machines; they can be alive.

Now in fiction self-replicating machines are usually implied to be small, tiny little Nano-robots, but they hardly have to be, nor do Nano-robots need to be self-replicating. They are also usually portrayed as essentially a single type, when you could easily have swarms of them composed of many sub-types and sizes. Often if something larger is desired they just clamp together to make it, but that’s not actually a good approach.

For instance, if we want metal to make more of the little buggers, we could send them out to remove individual atoms of metal from dirt, which is a regular portrayal but kind of silly, or we could have them form themselves into a conventional kiln to smelt metal, which is also pretty silly, especially considering the whole point of the kiln is to melt the very metal they are made out of, and so it would melt them too. It would make a lot more sense to have them just build a kiln, so this is our first strike against the classic concept of grey goo, some swarm of robots that looks like goo while eating up planets.

The modern concept usually traces back to John von Neumann’s idea of a Universal Assembler or Constructor, something Drexler followed up with as a Molecular Assembler, which is pretty much where all the concepts for nano-machines come from. But self-replicating machines long predate that, going back at least to the time of René Descartes, who apparently told Queen Christina of Sweden that the human body was a machine. The queen, being quite the scholar herself, apparently pointed at a clock and told him to make it reproduce. Then, not long after Darwin’s notions began circulating, Samuel Butler toyed with the notion of self-replicating machines mutating and evolving consciousness.

So this concept is a good deal older than people tend to think and did not focus on tiny machines. On the second point, that we could probably make one now, that size concept is important. If I have some automated factory rolling around eating up rock and spitting out new fully equipped factories that is still a self-replicating machine.

A 3D printer able to print itself is a self-replicating machine as well. It need not be able to make its own construction material though. After all, you and I are self-replicating machines and we not only do not typically convert matter into food directly ourselves but actually use a lot of integrated but independent life forms to keep ourselves alive. We don’t just eat other organisms; we have lots of little buggers hanging around in our guts helping us digest them. We even have mitochondria in every cell of our body that reproduce themselves using their own genetic code.

They have been useful hitchhikers for eons, ones we have formed a genuinely symbiotic relationship with even though we do not actually carry the code for them in our own DNA. So a giant factory that replicated itself, for instance, would still be a self-replicating machine even if it had other self-replicating machines inside it that it was dependent on but did not actually make itself. Since we are comparing them to organisms, let us examine that claim that self-replicating machines are alive. Now there is no universally agreed upon definition for life, but generally we would include the ability to eat, grow, excrete, replicate, and to adapt to and interact with the environment.

The reason it is often hard to nail down a solid definition is that almost any of those can be done away with while leaving a valid claim to life, at least in theory, but that is an important qualifier when we are discussing building and tinkering with life. I have never seen a definition for life that comfortably contains all the normal examples but which would also exclude a self-replicating machine, though it might exclude some types of them. And I do not mean in a sort of vague way like we might say fire is alive or crystals are alive.

This is not semantics: the typical self-replicating machine we envision would have some ability to eat, would replicate of course, and would have some equivalent to DNA it used for that. Now it need not necessarily have the ability to grow or repair itself, so long as it is able to make another full-grown copy of itself in decently less time than it usually takes for it to break down. Biological organisms do not use that method of just manufacturing something separate and fully grown; they get bigger and subdivide into two, or make small copies of themselves that grow up. A self-replicating machine could be built to do the same, but it has that third option to construct a fully grown version too. But a self-replicating machine does need a blueprint to work off of, same as any other organism, and rather than invent a new term or just say machine DNA, I will just call this DNA even though it would almost certainly not be DNA in most cases. In some it might be. After all, a quick path to self-replicating tiny machines is just tweaking the DNA or RNA of existing cells or viruses to do a job. GMOs, genetically modified organisms, are an example of self-replicating machines, and again something we have now.

So this brings up what we use these for. What is their task or mission? Obviously they do not have to have one besides copying themselves, but a machine is built with a purpose in mind. Now you could use such devices for any number of things, but the two big ones tend to be use off the planet or use inside a human being or other complex device. This is because the most appealing quality of them is that they can help repair things, like helping a human heal from injury or fixing a piece of equipment so you do not need to throw it out or take it in for repairs. That’s very handy for things like space probes, since it means you could send a probe out at relatively slow speed and expect it to still be working when it arrived at its destination solar system thousands of years later.

The Concept of Self-Replicating Machine


Now before proceeding I want to go ahead and kill the common myth that self-replicating machines invariably mutate. We mutate, other organisms mutate, and indeed a bunch of little robots could too, but they do not necessarily have to, even on astronomical timelines. Mutation is an absolute necessity for evolving from that most simple organism that presumably once assembled itself into more complex ones, but mutation is not a desirable trait if you are building to a specific purpose. I do not want my probe heading off to the Andromeda Galaxy to mutate in the millions of years it takes to arrive. If I hand someone a book and tell them to copy it word for word, we know they will fail at that task; they will make a couple of mistakes, and if they hand that to someone else, that person will probably copy those mistakes while making a few new ones, and so on until you get a copy that is just nothing like the original. That is mutation.

If that is my only way of copying and preserving data, say I am some old king and I want to make sure my scribes maintain my memoirs properly, I can order three copies of my memoir made by three different scribes. That way, if the original is destroyed, they can compare those three copies word by word, and if they come across a word that is different in one than in the other two, they know it was probably the two that are correct, not the one, and can fix that. Of course it is always possible those two scribes made the same error, or that all three copies disagree, but both of those events are less likely.

They are still likely enough though that it can happen and with potentially millions of lines of code and millions of copying events you would often get two identical errors or spots where all three disagreed. If you add in a fourth copy, these odds decrease. If you add in five or six it gets even less likely, and you can increase that to the point where while it is still possible, the odds of it happening even once over the age of the entire Universe is less likely than not. So for instance I could have a setup so that to make a new machine it required multiple machines to get together, just as with sexual reproduction but more polygamous, as it were. Say, twenty other ones had to assemble together into a dodecahedron, a platonic solid with twelve faces and 20 vertices, with each at one vertex, and they’d build the new one in the middle. Before adding each new bit, they check and agree.

If all twenty agree on a given chunk of the blueprint, all is well. If not, the odds that less than a majority of them have the right bit are freakishly tiny. We have dealt with these kinds of extreme improbabilities before, but the brain does not tend to work well with them, and I am sure some of you are thinking right now, “Sure, but it is still chance so it will happen”.

This is technically true but reaches a point of absurdity when you begin dealing with improbabilities so high that they would be less likely than not to occur if every atom in the Universe were turned into the things and we sat around waiting until the stars all burned out. So you could for instance tell every machine it needed to match back up with 19 of its buddies once a year for a check where they compare data, or to shut down if they can’t find 19. You could also have all sorts of different species and protocols for dealing with unexpected events, and you can always ‘what if’ your way into some bizarre circumstance but that’s not really the point. You might want your robots to mutate, you might not, but if you want to send your robots off somewhere confident they won’t mutate before arriving, that is an option.
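Just to put a number on it, here is a minimal sketch of that voting math, assuming twenty independent copies and a made-up per-copy error rate of one in a million for any given chunk of the blueprint (both figures are illustrative assumptions, not measured values):

```python
from math import comb

def prob_vote_fails(n=20, p=1e-6):
    """Chance that at least half of n independent copies carry an error
    at the same spot in the blueprint, given a per-copy error rate p.
    Both n and p are illustrative assumptions."""
    return sum(comb(n, k) * (p ** k) * ((1 - p) ** (n - k))
               for k in range((n + 1) // 2, n + 1))

print(prob_vote_fails())  # roughly 2e-55: effectively never, even across
                          # astronomical numbers of copying events
```

Even with a far more pessimistic error rate, the failure odds shrink combinatorially as you add more voters, which is the whole point of the scheme.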

John von Neumann's universal constructor

Is a Self-Replicating Machine Possible?

This is one of the objections to what is called a von Neumann probe, so I wanted to clear that up before jumping in. John von Neumann came up with the idea of universal assemblers, often just called von Neumann machines or Grey Goo, and this sparked five major concepts for how they might be used in regards to deep space. One is the basic version, and I’ll just call that a von Neumann Probe even though the others are too. Here are those five categories:

  1. Von Neumann Probe
  2. Bracewell Probe
  3. Terraforming Swarm
  4. Berserker Swarm
  5. Grey Goo Swarm

Von Neumann Probe


A basic von Neumann Probe is simply an interstellar probe able to maintain itself with little robots and stop over at places to repair, refuel, and produce copies of itself to explore more places. Now in practice, if it can self-repair this way, you’re better off launching all your probes from our own solar system.

Even if you needed to budget a hundred tons for each probe, about ten times what the Hubble telescope masses, you could still get away with manufacturing a trillion of them, more than the number of stars in the galaxy, with the available mass of just one medium-large asteroid. Those could all go cruising off from our solar system and arrive at their destination far sooner than if you send out a handful of probes that slow down at the nearest stars, build more, launch those, which slow down again to build more, and so on.
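As a rough sanity check on that claim, here is the arithmetic with assumed round numbers: a roughly 50-kilometre rocky asteroid (my assumption for “medium-large”) and 100-tonne probes:

```python
import math

# Illustrative assumptions, not figures from the article:
radius_m = 25_000        # ~50 km diameter asteroid
density_kg_m3 = 2000     # typical loosely packed rock
probe_mass_kg = 100_000  # 100 tonnes, roughly ten times Hubble

asteroid_mass_kg = density_kg_m3 * (4 / 3) * math.pi * radius_m ** 3
print(f"asteroid mass: {asteroid_mass_kg:.1e} kg")        # ~1.3e17 kg
print(f"probes: {asteroid_mass_kg / probe_mass_kg:.1e}")  # ~1.3e12, about a trillion
```

The exact numbers depend on the asteroid and the probe design, but the point stands: one unremarkable rock carries enough material for more probes than there are stars in the galaxy.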

You’d be better off using that automated production ability to have a small probe arrive, grab a small asteroid, and convert it into a bigger monitoring station that can also act as a relay for information from ones further out.

Bracewell Probe


Now the second type, the Bracewell Probe, you would be most familiar with from the movie 2001: the monoliths in that were Bracewell Probes. A Bracewell Probe is designed to communicate with other life forms, so it needs to be a good deal smarter and more adaptable. The simplest form would just be one able to self-repair and identify planets with a decent probability of life, then set up shop nearby and sit around transmitting a repeating radio loop explaining how to make contact with us along with some basic info about us. Sort of a giant flashing neon sign saying hello, here’s our phone number, along with a Rosetta stone for how to talk to us. Typically, this is envisioned as a human-level or greater intelligence though.

Something with actual brains and decision-making capability. Now technically a Bracewell probe doesn’t have to be a von Neumann machine, but considering the timelines for interstellar travel and how long it might need to wait once it arrived, you would need to build the components unbelievably tough to expect it to survive for those kinds of durations without the repair capability of either being a self-replicating machine or being able to make use of them to fix itself. Also, again, it is probably more advantageous to ship these all out from our own solar system and just have them unpack and build themselves on arrival at the target.

This has the advantage that it could set itself up on some small asteroid and send in satellite surveillance and even ground probes to collect data, or to make contact, rather than having to sit around broadcasting. If it can manufacture on site and has human-level intelligence, it could pick up enough data to send in androids that looked like the hypothetical primitive young race of aliens and chatted with them.

For instance, someone doing this to Earth centuries ago might start with satellite surveillance, then stealthy aerial drones, then little android birds or mice for close looks and to get the language and customs observed, then send in an android to ask questions it could not get from just listening and watching, or to give them information. Obviously you could take that into the ethically grey realm of trying to teach them or pretending to be a god. Incidentally both of these approaches work just fine for building manned spaceships too.

You can build larger spaceships for people to be on with self-replicating machines helping in the building and maintenance but the assumption is you can always build these automated versions cheaper and faster, both in terms of construction time and their velocity in interstellar space.

Terraforming Swarm


Now a Terraforming Swarm is essentially the notion that you are sending out von Neumann probes in advance to scout places for human settlement, and that those probes either have terraforming capability themselves or would be followed up by ones that did, with manned ships coming later. The probe arrives at the destination and sets to work expanding itself so it can take on terraforming a planet, or even turning the entire solar system into habitats, as we have discussed before when talking about Dyson Spheres. This is another morally grey one, because if you do not include an intelligence in that Terraforming Probe it might just go and terraform an inhabited planet, cheerfully disassembling the local flora and fauna in the process. That would seem pretty ghastly even if there was no intelligence on those planets; on the other hand, you might think it was fine to terraform a place that only had amoebas on it.

Individual views range from this being wrong only if there was intelligent life to it being wrong even if there was a decent possibility life might arise on that planet one day. Of course some might feel it is fine even if it had intelligent life on it.

Berserker and Grey Goo Swarm


The Berserker is usually seen as something a civilization might build if it had those kinds of views, or made a terrible mistake. The name comes from a series of novels by Fred Saberhagen, where the robotic ships or probes explore the galaxy to seek out new lifeforms and blow them up. The ones from the series are not actually von Neumann machines individually, but the collective whole was.

A Berserker is essentially a malevolent Bracewell Probe: where the Bracewell Probe’s mission is to seek out new life and meet it, the Berserker’s is to seek it out and kill it, and our last type, grey goo, just wants to eat it. Grey goo is occasionally called a Hegemonizing Swarm, a term I think originated with author Iain M. Banks. It doesn’t have to be robots or tiny little machines, which is the specific grey goo case, so I will use Hegemonizing Swarm as a maybe more appropriate term. This could be grey goo, where self-replicating machines just fling themselves across the void, stopping at stars and turning all the matter into more of themselves, but it could also apply to something like the Borg from Star Trek, or a crazy machine intelligence that has decided it needs to turn the whole Universe into paperclips.

Alastair Reynolds even had a particular variety of this in one of his books: it began as a terraforming swarm and, through bad design, was racing around turning everything around a star into habitats, into Dyson Swarms basically, already filled with flora and fauna, but it was attacking colonized planets and spaceships too, to add them to the habitats. This is one of the reasons I wanted to take some time on the mutation issue, because it is often assumed any self-replicating probes would turn into Berserker armadas, Grey Goo, or Hegemonizing Swarms given enough time to mutate.

Once genuine mutation is in play, especially in machines that were not sentient, it is reasonable to assume they would start mutating toward strictly Darwinian goals like survival and replication. From that comes an assumption that, left loose to run around the galaxy unsupervised for long times, most of the nice and benign or helpful types of these von Neumann machines will turn all evil. So it is worth remembering there are ways to prevent mutation, but something often overlooked is that mutation does not change something from A to B, it turns A into a whole alphabet and then a library, given enough time. There is a reason that, even though my billions-of-times great-grandfather was an amoeba, I am sitting here at the moment, and there is also a reason why there are still tons of tiny and simple micro-organisms.

So you would expect a runaway mutating self-replicating machine to result in a whole ecosystem. At the solar system level, you would expect the bottom of the food chain to probably be self-replicating machines that swarmed in the trillions around a star sucking up its light, then other things which came by and ate them and got eaten in return, probably complete with detritus-eating versions at the far end of the food chain and ones swimming around the Kuiper Belt grabbing comets and small asteroids freshly arrived from deeper out. Such things are not an example of machine life, they are just an example of life; it’s kind of silly to think of it any other way at that point. And it is worth remembering that is how life began on Earth too. Our planet did get grey-gooed by those earliest life forms, and probably more than once.

I occasionally find it amusing to think of intelligent life as an adaptation of grey goo to produce a new wave of it that can leave an atmosphere or solar system, since classic evolutionary processes do not lend themselves to those kinds of jumps. Okay, some final notes about self-replicating machines and nano-machines in general. I already mentioned that mutation does not have to be an automatic feature of these, but with that reminder that we basically descend from grey goo, I should address the misconception some get that tiny little robots swarming around can just disassemble whole planets in days.

I mentioned speed limits a couple of articles back where 3D printing machines are concerned, and tiny little robots have them too. Just for conceptual purposes, keep in mind that bacteria can reproduce quite quickly compared to us, often able to double on a timeline of an hour or so, meaning if you start with one you could have a million the next day, a trillion the day after, and a quintillion the day after that.
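As a quick check of that arithmetic, here is the doubling math, assuming a doubling time of about 72 minutes, which works out to twenty doublings a day (a slightly generous reading of “an hour or so”):

```python
doublings_per_day = 20  # assumes a ~72-minute doubling time

for day in range(1, 4):
    count = 2 ** (doublings_per_day * day)
    print(f"day {day}: ~{count:.1e}")
# day 1: ~1.0e+06  (a million)
# day 2: ~1.1e+12  (a trillion)
# day 3: ~1.2e+18  (a quintillion)
```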

On paper anyway; in practice exponential growth tends to get dampened by other effects. However, there is a clear evolutionary advantage to being able to grow and replicate quicker, and to be as omnivorous as possible about your sources of food and fuel. Yet bacteria do not split in two every second; even the fastest viruses, which are incredibly simple critters if they are organisms at all, are not that fast. Complexity has a cost; it takes longer to assemble. Now a machine should be designable that does replicate faster than biological life, but it is not very likely to be many orders of magnitude faster than organisms of the same size.

It is also worth remembering that chemistry and construction tend to produce a lot of heat, there is a reason bread dough or compost or other things bacteria go nuts in tend to get hot. You can only go so fast replicating before the heat would get so bad it destroyed the machines doing it. It is also a lot harder to snatch and place molecules in something you are building when it is hot and all the molecules are bouncing around much faster and also bouncing into whatever you are building.

We tend to forget that at the molecular level hotter temperatures mean stuff moving quickly, but building at that scale in a hot environment would be like trying to pitch a tent in the middle of a hailstorm. Heat, as we have seen in a lot of our topics on this blog, tends to be a big bottleneck on a lot of processes.

You also have to remember that tiny things are very fragile and that each component adds more material, more time to build it, and slows the process. How are your tiny little robots getting power? Solar energy? Not very useful for anything but the surface layer when it’s sunny, and a solar panel can only be so thin before it would be incredibly fragile. You are not getting a nuclear power source, fusion or fission, that small; that just is not how that works. That leaves either battery power, which means it needs to go recharge somewhere and could get quite bulky, or using existing fuel in whatever it is disassembling. That is great for medical nanotechnology, since you can design it to run off your own power supply in your cells, but tiny and universal chemical fuel eaters are not exactly doable.

You would almost have to give it a separate engine for each type of fuel supply so it could run on oxygen and methane in one place, solar in another, sugar in another, etc. That adds more complexity, more bulk, more replication time. Want to shield it from an EMP blast? Add shielding, adding more bulk and more replication time. That also slows down how quickly it can do other things, since it uses more energy to move its greater bulk and has more of its mass devoted to things other than moving and assembling and disassembling things. Want it smarter? More material, more energy, slower replication times. Probably a lot faster than biological life, but not likely to be super fast. Scale has always been hard on people; lots of folks think your typical biological cell is made up of atoms and molecules like they were LEGO bricks, when in practice cells do not have dozens of atoms, or hundreds, or thousands; they tend to have trillions.

If you’re thinking of atoms or small molecules as building bricks of cells, don’t picture a house, which has several thousand bricks, picture a major metropolis, or even an entire planet, that’s the scale of most cells compared to atoms and simplest molecules. Even a typical virus tends to be in the hundreds of thousands, more akin to a skyscraper than a house. Your average mammal like a human or a puppy would be more like a galaxy in this atom equals brick analogy.
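For what it’s worth, here is the rough arithmetic behind that cell-to-house comparison, using an assumed hundred trillion atoms for a typical human cell and the several-thousand-brick house from above (both round, illustrative figures):

```python
atoms_per_cell = 1e14   # commonly cited rough figure for a human cell (assumption)
bricks_per_house = 5e3  # "several thousand" bricks per house

houses_equivalent = atoms_per_cell / bricks_per_house
print(f"{houses_equivalent:.0e}")  # ~2e10: tens of billions of houses' worth of bricks,
                                   # more houses than exist on Earth -- closer to a
                                   # planet's worth of construction than a single city
```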

So yes, you could build machines that were quite tiny compared to bacteria and probably pretty sturdy, versatile, clever, and fast to reproduce, but we do not want to get carried away and think of them as invincible swarms of grey goo moving over a planet like a tidal wave, or as possessing magic powers. We also want to remember they could come in a lot of sizes, from on par with a virus to quite a bit larger than a person.

Self Replicating Machine


So when will we have these? I would guess quite soon; again, we arguably already have them, and it’s an important area of research. Automated construction, both for factories and for use in space, is already very useful and would only become more so. At the microscopic scale the medical applications are huge, and so is the convenience angle of having colonies of specialized tiny robots that hang around our appliances fixing them, particularly since you can make things smaller if they do not have to be durable enough to survive minor damage which could instead be repaired.

You’ve got two options on something like that too. You don’t have to have the machines able to self-replicate; you could have them produced somewhere else and buy a vial of non-replicating ones with an expiration date. Back at the factory, slightly bigger machines churn out millions of the tiny ones constantly, which is advantageous since you can save mass and energy by making them more specialized, without all the extra bits for reproduction, and have some smarter, bigger control bots that can issue them orders. They’d probably be more like a solution of various species of nano-bot. Alternatively, you could still go the replication route, just with that done a couple of steps up, with smarter bacteria-sized mobile factories that built them as needed and could receive and issue updates. Lots of potential uses, and a big game-changing technology.

One I suspect most of us will live to see, but also not a magic wand or instant doomsday device. Their value for medicine is huge, potentially making us biologically immortal, and their value for space exploration and colonization is equally huge. That leads to our topic for next week, Spacecraft Propulsion, where we will be doing a quick survey of all sorts of spaceship propulsion ideas either in use now or on the drawing board, including the EM drive.

In the upcoming weeks, we’ll be covering more interesting topics on EduQuarks. Please leave a comment and share this article with others if you enjoyed it. You can stay connected with our YouTube channel and Facebook page to get updates on upcoming articles. Until next time, thanks for reading, and have a great day!
