So, if you haven’t gotten bored yet of my writings on memetics and complexity theory, here’s Part 3. I’m hoping to finish it in 3 parts, but I’m kind of a windbag writer, so there may be a Part 4.
If you’ve followed along thus far, we’ve introduced a few major concepts drawn from the fields of evolutionary game theory, complexity theory, and general mathematics.
This section will serve more to define terminology:
A meme is a subclass of the more general class of self-replicators: Turing-computable structures which contain sufficient information to, given the right circumstances, replicate themselves. A careful reader might note the verbiage I used here—”given the right circumstances”. This is because a self-replicator does not necessarily have, by itself, the ability to successfully replicate. Much as a program needs an interpreter or a virus needs a ribosome, a self-replicator may require available machinery or materials in order to replicate. However, it must provide the instructions necessary for replication.
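To make the “instructions plus external machinery” point concrete, here is a minimal sketch of a purely informational self-replicator: a Python quine. The program encodes everything needed to reproduce its own text, yet it replicates only “given the right circumstances”: it needs the Python interpreter as its machinery.

```python
# A minimal self-replicator: a Python quine. The string `s` carries
# the full instructions for reproducing the program, but replication
# still requires external machinery (the Python interpreter).
s = 's = %r\nprint(s %% s)'
print(s % s)  # the printed text is an exact copy of this program
```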
The self-replicator has a minimal representation, also known as its Kolmogorov complexity. This is essentially the minimum amount of information needed for the self-replicator to be replicated and propagated, continuing its “lifecycle” (another act of replication and propagation). Different self-replicators may have different-sized minimal representations (e.g. large viruses versus small viruses, ayy lmao memes vs. copypasta).
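True Kolmogorov complexity is uncomputable, but compressed size is a standard computable proxy for an upper bound. A quick sketch, with example strings that are my own placeholders:

```python
import zlib

def compressed_size(text: str) -> int:
    # The length of the zlib-compressed bytes is a crude upper-bound
    # proxy for Kolmogorov complexity; the true quantity is uncomputable.
    return len(zlib.compress(text.encode("utf-8")))

short_meme = "ayy lmao"                              # small minimal representation
copypasta = "What did you just say about me? " * 20  # longer, but highly redundant
print(compressed_size(short_meme), compressed_size(copypasta))
```

Note that the highly repetitive copypasta compresses far below its raw length, hinting that its minimal representation is much smaller than its surface size.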
In the subclass of self-replicators that require the interaction of another agent—the host—we can model the interaction of the self-replicator with the host using traditional game theory. The self-replicator, by our prior definition, has only one dominant strategy: to achieve replication (the virus, the meme, etc.). Note, this does not imply there exists a pure strategy for the self-replicator (in the viral case, there are many examples of mixed strategies [e.g. lysogenic lifecycles]). It is not, in general, an agent capable of decision (though it is possible to extend this taxonomy to include self-replicators that can make decisions). The host, however, may or may not prefer to replicate the self-replicator—this can be understood in terms of two quantities:
Cost of propagation - In most realistic circumstances, the host must exert some energy equivalent in order to help the self-replicator self-replicate. In the viral case, the cost of this propagation is quite high—for the host cell, this usually means death (arguably infinite cost).
Utility of propagation - What’s more interesting is that in the non-pathogenic case, there are often self-replicators which provide some benefit for the host. This is hard to imagine in the viral case, although it does happen. In the more abstract case, there are multiple clear examples of utility gained by the host from assisting the self-replicator:
In religious or political cases (proselytization), the host derives utility from believing in righteousness.
In internet meme cases, this takes the form of social approval (likes, retweets) in the benign case, or financial motivation (pump and dumps) in the sketchier cases.
The mutation of the self-replicator occurs during the acts of replication and propagation. This is obvious—without replication, there is no mechanism in principle for the self-replicator to change (effectively entropy). In our simple theoretical model, we can define mutation as both stochastic (random) and context-free (no part of the self-replicator is favored to escape mutation, for example). These two criteria are sufficient to imply that the expected mutational burden (the number of units of the self-replicator that are mutated) of a larger self-replicator (in terms of information units) must be higher per act of propagation than that of a smaller self-replicator.
Mutation can be abstractly defined as two separate types:
Omission — This is essentially a deletion of a portion of the information, e.g. “cattle” becoming “cat”. In more complex cases, there are error-correcting mechanisms for identifying omissions (e.g. “cattle” is a word but “cattl” is not), but these are outside the scope of this series.
Replacement—This is the replacement of a portion of the information with another, e.g. “cot” becoming “cat”.
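The two mutation types can be sketched as a toy, context-free mutation operator over a string of units. The per-unit rate and the 50/50 split between omission and replacement are arbitrary assumptions of mine:

```python
import random

def mutate(units: str, rate: float, rng: random.Random) -> str:
    # Context-free, stochastic mutation: each unit independently
    # mutates with probability `rate`. A mutation is either an
    # omission (deletion) or a replacement with a random letter.
    out = []
    for ch in units:
        if rng.random() < rate:
            if rng.random() < 0.5:
                continue  # omission, e.g. "cattle" -> "catle"
            ch = rng.choice("abcdefghijklmnopqrstuvwxyz")  # replacement
        out.append(ch)
    return "".join(out)

rng = random.Random(42)
print(mutate("cattle", 0.3, rng))
```

Because each unit mutates independently, the expected number of mutations per act of propagation grows linearly with the replicator's size, which is the mutational-burden claim above.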
As we will discuss more below, mutation is a property of both the host and the medium. In fact, we can say mutation occurs “twice”:
First, mutation may occur when the self-replicator is replicated in the host.
Second, mutation may occur as the meme propagates through the medium (discussed below).
However, in the toy model we’re constructing we can treat it simply as a combined mutation rate, since the purpose of the self-replicator is to replicate and propagate. We need not treat these as separate atomic actions—failure during propagation and failure during replication are identical (the self-replicator “dies”).
Mutation over many iterations (propagation/replication) can cause the self-replicator to evolve, usually in one of three ways:
Inviability - When mutation (whether omission or replacement) occurs, this can simply be a benign replacement (like the tail bone of humans or multiple nipples, the vestigial information does not negatively impact the self-replicator). However, if mutation impacts the ability of the self-replicator to continue propagation and replication (the minimal representation), this will lead to the inviability of the self-replicator. This will cause it to die, essentially.
Synonymous replacement - This is a replacement that will occur in the self-replicator without changing the actual minimal representation of the self-replicator (the underlying meaning, in plainer terms).
Speciation - When mutation is a significant enough replacement/omission to not cause inviability but to alter the information content of the self-replicator, this will cause speciation, or a fissioning of the self-replicator from its copies. This is effectively evolution in biological terms.
Both the self-replicator and the host exist in a medium, which is simply the world in which both live. The self-replicator must be propagated through the medium, which can be described in terms of two factors:
Transmission—The transmissibility of the medium, in layman’s terms, just refers to how difficult it is, all other factors held constant, for a self-replicator to propagate given the host chooses (or is forced) to emit it. We can easily visualize a high-transmission medium as a complete graph:
In the graph above, we can see every single point is connected by an edge to every other. In the analogy of infection, this would be the case where any individual can spread the infection to any other in the same group (the network). That said, a complete graph is not the sole example of a high-transmissibility medium. This would also likely be considered a high-transmissibility medium:
We can observe in the above that not every node in the graph is connected to every other, but the density of connections (the average number of nodes connected to any given node) is still quite high.
A low-transmissibility medium is the opposite, and generally tends to be sparse. An analogy would be a pandemic spreading through the Great Plains—if your nearest neighbors are miles away, how likely is it that you will get infected by the virus?
In the graph above, we can see each node is connected to on average 2 others (the edge nodes to only 1). This implies a low-transmission medium.
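The transmissibility contrast can be made concrete by comparing average degree (connection density) in a complete graph versus a sparse chain; the node count here is arbitrary:

```python
def avg_degree(n_nodes: int, edges: list) -> float:
    # Average number of neighbors per node: total degree / node count.
    deg = {i: 0 for i in range(n_nodes)}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(deg.values()) / n_nodes

n = 8
complete = [(i, j) for i in range(n) for j in range(i + 1, n)]  # everyone linked
chain = [(i, i + 1) for i in range(n - 1)]  # sparse chain, like the Great Plains

print(avg_degree(n, complete))  # 7.0: every node touches every other
print(avg_degree(n, chain))     # 1.75: interior nodes have 2, endpoints have 1
```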
Fidelity—Fidelity, other than being a brokerage, refers to how faithful a copy the self-replicator will produce given one transmission event through the medium. This is the mutation rate of the medium as mentioned previously.
A low-fidelity medium for transmitting information is, analogously, the internet—between individuals in a chat room or a forum, there is a high tendency for information to mutate during transmission.
In contrast to a low-fidelity medium, a high-fidelity medium usually ensures successful propagation of the self-replicator from host to host. The most intuitive example of a self-replicator in a high-fidelity medium would be the memorization of religious texts or cultural mythos in the pre-literate era. Individuals would carefully train with similarly trained individuals to memorize the self-replicator, ensuring high fidelity of transmission.
Much like the medium, the host must also have a replication mutation rate (which, in ideal scenarios, can also be zero). This can be analogized to imperfect recall (the game of telephone between two or more individuals) or mutations induced by viral replication (polymerase slippage).
Finally we can have some fun.
The Lemma of Complexity
Given our above definition of terms (medium, host, self-replicator, mutation) and prior concepts, we can finally make inferences about the fitness of the self-replicator. Fitness is an overloaded concept in evolutionary theory, but in this case we can define it as the property unique to the self-replicator (e.g. separate from the host and medium) that defines how easily it spreads.
Naively, fitness matters mainly in a competitive game between different self-replicators, rather than in the single examples we’ve discussed before. This is intuitive—in a world with one meme, we can simply model the spread of the meme by how dank it is (how much utility minus cost it has to replicate/propagate) and how lossy the medium is. Eventually our one-meme world will either reach total saturation (all reachable hosts will have replicated and propagated the meme), or the utility of replication/propagation will fall to the point where the host decides not to participate (in our cooperative case).
However, that’s pretty boring, and has pretty well-defined solution concepts (as discussed above). What’s more interesting is envisioning a world much closer to our own, where multiple self-replicators will compete for dominance. This is easiest to model by assuming one-replicator-one-host — for a given time period, one host can only replicate and propagate one self-replicator. In this case, even when there is a commensal or mutualistic relationship between the self-replicator and the host (dank internet memes, for example), there exists a zero-sum relationship between self-replicators — the increased replication and propagation of one self-replicator will occur at the expense of all others.
However, in the most general case, the self-replicator does not “think” - unlike the host which can maximize its utility via decision, the self-replicator (usually) can’t. That said, we can first look at the evolution of the single self-replicator across the network in order to derive some powerful insights.
Our Toy Model
Let us construct a network of hosts where each host is connected to some subset of other individuals (the exact topology doesn’t actually matter except in determining the number of iterations of the model). These hosts are connected in a lossy medium with some background mutation rate b per 100 units, and each host also has some mutation rate m per 100 units (to keep it simple, let’s assume all hosts have the same mutation rate). For this toy example, let us restrict mutations to omissions only.
Similarly, to keep this example general we can assume the utility function (the utility minus cost) for a given host is 0—the host is indifferent to propagation and replication of the self-replicator to others.
We can define our self-replicator to start at size 500, with a minimal viability size of 400 (this is not in real units of course). We can start assuming a given host has successfully been “infected” (as in, this host will for sure propagate and replicate the self-replicator), and assume the following occurs on turn 1:
We can assume that during the spread from host 1 to its network through the lossy medium, there is mutation during replication (m) of part of the self-replicator and during propagation through the medium (b). By our toy example, this is all omission, and reduces the size of the self-replicator by (m+b)*500/100, or 5m + 5b.
We can then iterate this from all newly infected hosts, and observe that the size of the self-replicator should (as we assumed) monotonically decrease (although not linearly, given the mutational burden’s dependence on the self-replicator’s size).
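The iteration above can be sketched directly under the stated assumptions (omission-only mutation, a combined rate of m + b per 100 units, no saturation); the function name is my own:

```python
def iterations_to_inviability(size: float, minimal: float, m: float, b: float) -> int:
    # Each round, omission removes (m + b)% of the *current* size,
    # so shrinkage is multiplicative rather than linear.
    steps = 0
    while size >= minimal:
        size *= 1 - (m + b) / 100
        steps += 1
    return steps

# Starting size 500, minimal representation 400, with host mutation
# rate m = 1 and medium mutation rate b = 1 (both per 100 units).
print(iterations_to_inviability(500, 400, 1, 1))
```

The first round removes (1 + 1) * 500 / 100 = 10 units, matching the 5m + 5b figure above; later rounds remove slightly less as the self-replicator shrinks.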
This process will continue until one of two conditions is reached:
Self-replicator inviability - After enough iterations with omissions, we will anticipate the self-replicator size will fall under the minimal representation and cease to be able to continue to replicate and propagate.
Network saturation—Depending on the initial size of the network and the transmissibility of the medium, our self-replicator may continue to replicate and propagate until all hosts are “infected”. In this case, it will cease to propagate.
Assuming a large enough network (or low enough transmissibility through the medium), we can quickly see that a self-replicator undergoing solely omission-based mutation will eventually reach the size of its minimal representation (the Kolmogorov complexity). This naturally makes some critically simplistic assumptions, which I will discuss in later works (namely host indifference).
At the size of the minimal representation, however, we would anticipate that in a continued lossy host/medium, the meme would quickly cease to propagate and replicate. This effectively sets an asymptotic lower bound in our omission-based toy model—the minimal representation is the smallest size that will ever propagate and replicate, though the self-replicator will never settle stably at this bound.
What’s more interesting is when we include replacement and its iterated counterpart, speciation. When replacement occurs without loss of size, over an iterated game, this will cause either speciation (the formation of a different self-replicator with a different minimal representation) or synonymous replacement (the maintenance of the same minimal representation).
In our toy model, we can observe the two cases have significantly different outcomes:
In synonymous replacement, we can in our simple example assume no replacement has occurred. This isn’t necessarily always true, but in the toy example the only unique qualities of the self-replicator are its current information size and its minimal representation. Since neither is changed by this action, it is invariant.
The speciation example is much more interesting, however. We can separate speciation into two cases:
The new species has a larger minimal representation — when this is the case, over a sufficiently large number of iterations it will be outcompeted by the species with the smaller minimal representation (it will reach inviability faster).
The new species has a smaller minimal representation — when this occurs, over a large number of iterations it will outcompete the original species due to its smaller minimal representation (it will infect more hosts before becoming inviable).
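Under the same omission-only assumptions, we can compare two species that differ only in their minimal representation; the sizes and rates here are illustrative, and the function name is my own:

```python
def rounds_before_inviability(size: float, minimal: float, rate: float) -> int:
    # Rounds of replication/propagation survived before the size falls
    # below the minimal representation (combined rate per 100 units).
    rounds = 0
    while size >= minimal:
        size *= 1 - rate / 100
        rounds += 1
    return rounds

# Same starting size and combined mutation rate; only the minimal
# representation differs between the two species.
simple_species = rounds_before_inviability(500, 300, 2)
complex_species = rounds_before_inviability(500, 400, 2)
print(simple_species, complex_species)  # the simpler species survives more rounds
```

All host- and medium-related factors being equal, the species with the smaller minimal representation survives more rounds and therefore infects more hosts before becoming inviable.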
The spread of the self-replicator (its “success”) is in the one-replicator world defined simply by:
The minimal representation of the self-replicator — this, in conjunction with the combined mutation rate, defines how many iterations of spread can occur (assuming no network saturation) before inviability.
The fidelity of the medium and the host mutation rate — together, these set the combined mutation rate per iteration.
The transmissibility of the medium and the network size — these define the number of iterations of propagation and replication before saturation.
The host utility function—this is a more complicated topic, and demonstrates interesting symmetry in competitive (parasitic) versus cooperative (commensal, mutualistic) host/self-replicator relationships. In the simplest example, we can discount the host’s utility and assume it to be indifferent, but this has very real effects on spread in most cases.
More interestingly, when we assume a competitive landscape of self-replicators, we can observe that a simpler self-replicator (one with a smaller minimal representation) will outcompete a more complicated one, assuming all host- and medium-related factors are held constant. This is an extremely interesting result, and leads us to define the lemma of complexity: the relative fitness of a self-replicator in a set of self-replicators is inversely proportional to its Kolmogorov complexity.
Conclusion
I’m pretty sure compared to Parts 1 and 2 this may have read with the same ease of being run over on the highway. That said, we did it — we recovered from our network game the lemma of complexity! Pat yourself on the back, or punch me instead (for making you read this).
In the next and final part of this series, I’m going to talk more about the symmetry of host utility functions and the idea of secondary encoding factors—properties present outside the self-replicator which allow it to reduce its minimal representation and hence help it outcompete others (as well as stay viable for longer). Finally, I’ll talk about some real world examples, and present some fun equations and more food for thought.
Until next time.
Yours,
Lily