On the Mathematics of Liquidity Value (Part 4)

Bubbles pop.

Except maybe, sometimes they don’t.

Like sometimes, they end up as:

Fiat currency, in a lot of respects, has many of the bubble properties I’ve talked about. In the last post, I talked about my conceptualization of moneyness, which we can formally define as the proportion of imbued value we give an asset that comes from liquidity value — our forward projection of liquidity for the asset.

But. Let’s save that for the next post.


Since this is one of the more mathy posts in the series (either the last post, or the second to last), I’d be remiss not to give a more formal argument for what liquidity value is, especially in making a bold claim of the dollar as a bubble.

In Post 2, I argued that liquidity changes the calculus of taking risk as a buyer (seller) of an asset. This is simply derived from the meaning of transaction costs — in the most extreme illiquid market (for example, where you intend to buy an asset with an expectation of no future buyers), your implied risk is the entire cost of the asset. If we expect that, between now and infinity, there will be exactly one buyer, the calculus again changes.

In a market of only one buyer (that is to say, once the transaction occurs there will be no buyers), we can tautologically show that whatever offer the buyer makes must represent the buyer’s read of the asset’s fundamental value. This is because by the definition of liquidity value, the liquidity value of the asset to that buyer is zero.

To add some mathematical notation, let’s define the buyer’s fundamental valuation of the asset as F(t). Similarly, we can define my fundamental valuation of the asset as F_0(t). Importantly, the fundamental valuations for both the buyer and myself are latent variables, since we cannot directly observe each other’s valuations.

However, we do have a method of interaction — the offer O(t) the buyer makes at any given point in time. To keep this exercise much simpler, we can make a few assumptions:

1) At any given point in time, the buyer will not pay more than their fundamental valuation - In reality, one could argue there’s substantial uncertainty even in individual fundamental valuations, doubly so for an economic asset with real increases in fundamental valuation over time. However, to keep this simple, we can assume O(t) is always less than or equal to F(t).

2) Both myself and the buyer are risk-neutral, time-neutral, liquidity-neutral agents - That is to say, we both transact only with the rationale to increase our expected value over time (represented by the fundamental value). Similarly, we have - for this example - no liquidity preference at any given time — we are indifferent to holding cash or the asset at equal values. Lastly, neither I nor the buyer have time preference - I am more than happy waiting until eternity to win.

3) I will not sell for less than my fundamental valuation - This follows from 2. Since I have no need for liquidity and I seek to maximize my expected value here, I should never sell for less than my fundamental valuation.

4) I will sell for any value above my fundamental valuation - This is a simplification, but should follow from #2. If the buyer makes me an offer above my fundamental valuation, accepting it strictly increases my expected value.

5) We can assume both F_0(t) and F(t) are sampled from a true valuation distribution, and F_0(t) and F(t) change by an equal amount per time-step. - This is an important axiom, and a bit separate from the rest. While we don’t necessarily have to argue that my valuation and the buyer’s are even remotely close together, we do argue there exists almost a Central Limit Theorem of valuations. Or to put it less mathematically — if we instead had an infinite number of buyers and plotted their valuations for the asset at any given time, we’d probably see something akin to:

Of course, with no loss of generality, we don’t need to assume the shape of this distribution. It could also just as easily look like:

We can call this the value distribution (V) - for every given time-step, there exists some distribution of the valuations at which all potential sellers and buyers will accept (pay) for a given asset. In our limited example, both my valuation and the buyer’s can be seen as samples from the hidden value distribution.
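As a toy sketch of this setup, we can treat each participant’s valuation as an independent draw from a hidden value distribution. The normal shape and its parameters here are purely illustrative assumptions, not a claim about real markets:

```python
import random

# Hypothetical hidden value distribution V: every participant's
# valuation is an independent sample from it. The normal shape and the
# mean/spread parameters are illustrative assumptions only.
def sample_valuation(rng, mean=100.0, spread=10.0):
    return rng.gauss(mean, spread)

rng = random.Random(0)
my_valuation = sample_valuation(rng)      # F_0(t), my latent valuation
buyer_valuation = sample_valuation(rng)   # F(t), the buyer's latent valuation
```

Averaging many such draws recovers the center of the distribution — which is exactly the “best guess” that the no-information property later in this post relies on.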

The first obvious thing to note here in a single-asset market is that the game seems zero-sum between the buyer and me: any excess utility gained during the transaction (e.g. by me selling over my fundamental value) is lost in equal amounts by the counter-party.

But this isn’t actually correct, as it relies on the idea of one true fundamental valuation for the asset, rather than individual valuations. It’s equally likely (and more often than not the case) that even in true liquidity and risk-neutrality, our fundamental valuations differ, and we can easily construct a game where both players win (if in the previous example my fundamental valuation is less than the buyer’s, I may maximize utility by selling above my fundamental valuation and the buyer may also maximize theirs).

During this game, from the time the buyer appears until infinity (or a transaction occurs) I can anticipate the buyer will make an offer. By our rules, we can set some constraints:

O(t) <= F(t) for all t

if at any t, O(t) >= F_0(t) a transaction will occur

if at any t, O(t) < F_0(t) a transaction will not occur

If we add an additional constraint — that all valuations are invariant over time — the problem becomes fairly straightforward:

1) If F(t) < F_0(t), a transaction will never occur (the buyer will never pay my fundamental valuation of the asset)

2) If F(t) >= F_0(t), a transaction will eventually occur (the buyer will eventually provide an O(t) equal or greater than my fundamental value, and we will transact)
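Under time-invariant valuations, the whole game collapses to a single comparison. A minimal sketch (the function name is mine):

```python
def will_eventually_transact(buyer_valuation, my_valuation):
    # With valuations fixed over time and both parties infinitely
    # patient, a trade eventually happens exactly when the buyer's
    # fundamental valuation F(t) meets or exceeds mine, F_0(t).
    return buyer_valuation >= my_valuation
```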

What’s interesting to note here, however, is that these constraints tell me nothing about whether I should or shouldn’t make a transaction at any given t. As a rational agent maximizing my utility, I should do the following:

I should accept the transaction if and only if I expect this offer is greater than or equal to my fundamental value, and is the best offer I will get (up until t=infinity).

I should reject the transaction otherwise.

Due to the lack of time preference, this implies that a rational buyer will not gain anything from making an offer lower than a previous offer, given that I will not accept it at any time. This implies O(t) over time is monotonically increasing.

Similarly, from our perspective, we know that the buyer will not pay more than their fundamental value, while we will not accept at any point less than our fundamental value. This means over an infinite amount of timesteps, we expect O(t) to approximately look like this:

However, it’s important to note I also have an infinite amount of time. In fact, given future expectancy, I should never accept less than the buyer’s fundamental value. Even if we anticipate O(t) is drawn from a distribution randomly between 0 and F(t) for every single iteration, eventually we will hit F(t), meaning I could’ve maximized my expectancy by simply waiting.

Or. If the buyer was nice, they’d just save us an infinite amount of time and pay that to begin with.
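A quick simulation of that waiting argument (the uniform offer distribution and F = 100 are assumptions for illustration): the running maximum of offers drawn from [0, F(t)] creeps toward F(t), so with no time preference, the patient seller gives up nothing by holding out.

```python
import random

def best_offer_after(num_offers, buyer_valuation=100.0, seed=42):
    # Each offer O(t) is assumed drawn uniformly from [0, F(t)].
    # The best offer seen so far is non-decreasing in time and
    # approaches the buyer's fundamental valuation as t grows.
    rng = random.Random(seed)
    return max(rng.uniform(0.0, buyer_valuation) for _ in range(num_offers))
```

With a fixed seed, the first 10 offers are a prefix of the first 10,000, so the best offer can only improve with more waiting — the infinite-horizon seller captures (essentially) all of F(t).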


In the real world of course, we don’t have infinite time, nor do we lack liquidity preference, nor risk preference, nor time preference.

However, we can generalize a lot of the basics here, and it becomes a lot nicer in many ways. Let’s examine a similar but different scenario - many buyers, and to start, one time-step.

In this case, we retain our value distribution, with a twist — when a marginal buyer (seller) is added to the market, they can be treated as a completely random sample drawn from the value distribution. This we can treat as a no-information property — unless we have information to condition on otherwise about the buyer (seller), our best guess of the new buyer’s valuation is the expected value of our distribution V.

Many of our constraints still hold here — we can assume a buyer (seller) will behave rationally and never make an offer/accept a bid in such a way that they would be worse off. Similarly, because we are only looking at one point in time, tautologically we can assume that all buyers (sellers) are offering bids at or beneath their fundamental valuation, since they cannot sell (buy) it afterwards.

In the one-timestep variant of this game, the problem is decidedly simple:

1) If I observe offers above my fundamental valuation, I should always accept the maximum offer.

2) If all offers are below my fundamental value, I should always reject transacting.

This problem ends up being incredibly straightforward, and shows the first glimpse of liquidity value in the mathematical sense. Even in the one-timestep and one-buyer games, we can observe that no matter what happens, I am strictly the same or better off having more buyers (sellers). This is because I always have the opportunity to reject the transaction.

More interestingly, this gives us a specific method in these simple scenarios to quantify precisely how much better off I am. In the one-timestep game, we can observe the liquidity value cleanly as the differential between the maximum offer (bid) I receive and my fundamental valuation. Because I always have the opportunity to refuse the transaction, this number cannot go below zero.
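In code, the one-timestep liquidity value is just that floored differential (a sketch; the function name is mine):

```python
def one_step_liquidity_value(offers, my_valuation):
    # Differential between the best offer received and my fundamental
    # valuation, floored at zero: since I can always refuse to
    # transact, extra buyers can never make me worse off.
    if not offers:
        return 0.0
    return max(0.0, max(offers) - my_valuation)
```

Appending another offer to the list can only leave this value unchanged or raise it — which is the “strictly the same or better off” claim above, stated as a one-liner.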

Similarly, in the one-buyer game, we can observe the liquidity value cleanly as the differential between the buyer’s fundamental value and my own. Identically, since I can refuse the transaction to infinity, this number cannot be negative at any timestep (this holds even if a transaction never occurs because the buyer’s fundamental value is lower than my own, due to uncertainty in future offers).

Finally, we can extend our game to include multiple simultaneous buyers over multiple timesteps, but it shouldn’t change the calculus here too much. The key simplifying assumption here is a continuous value distribution, which may or may not change over time, and that we assume that for every given timestep, each buyer (seller) obeys our no-information principle. We can observe here that for a multi-buyer, multi-timestep game, I should reject the transaction at any given time t if no bid (offer) is equal or greater to my fundamental value, or if I anticipate a better selection of bids (offers) in a later timestep. This, to my understanding, is mostly controlled by the evolution of the value distribution rather than individual buyer (seller) value estimations, given that by the no-information principle we expect each marginal buyer (seller) to value the asset at the expected value of the value distribution.
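The multi-buyer, multi-timestep rule can be sketched the same way. Here `expected_future_best` stands in for however one forecasts the evolution of the value distribution — it is an assumption of this sketch, not something the model pins down:

```python
def should_sell_now(bids, my_valuation, expected_future_best):
    # Reject if no current bid clears my fundamental valuation, or if
    # I anticipate a better slate of bids at a later timestep.
    if not bids:
        return False
    best = max(bids)
    return best >= my_valuation and best >= expected_future_best
```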


So what does all this have to do with money?

This is a very simplistic universe, again assuming one buyer (seller) choosing to transact between one or multiple simultaneous sellers (buyers). But it gives us an interesting approach to understanding the value of a dollar.

If you think about money, money isn’t a unique asset per se; it just, in most respects, has the highest amount of liquidity value (our moneyness property). In commodity money, the monetized asset has some fundamental value as well, but it’s pretty well substantiated that in the absence of an official money, the fallback tends to be the asset with the tightest bid-ask spreads (e.g. by competition, the most buyers (sellers)).

So now, let’s talk about currency valuation. Depending on how you define currencies, valuation might be a moderately meaningless concept (the old adage “1 Bitcoin is 1 Bitcoin” rings hollow here). In the modern world, we mostly weigh the value of currencies against each other (the floating rates of the foreign exchange markets, for example) or against a basket of commodities (the latter tends to be better as a measure of inflation).

When we talk about the value of currency, though, we more practically are considering the value of currencies weighted against others. It’s fairly difficult to assign a strict valuation in practice, given that the value of a currency depends largely on two factors:

  • The currency supply - How many units of the currency exist?

  • The currency demand - What can I buy using the currency (or more specifically, what aggregate quantity of goods is being represented by this currency?)

If we substitute the asset described in our multiple-buyers, single-timestep model with our currency, we can observe the following relationship: assuming there is a uniform value distribution among all buyers (sellers) of our currency and the no-information principle holds, we are strictly equal or better off as a new buyer (seller) is added to the pool. This is because for one timestep, we can always choose to take the maximum bid (offer), assuming it is at or above our fundamental value of the currency, or reject transacting.

The value gained here is heavily dependent on the shape of the value distribution, however. If we relax our constraint and allow each individual buyer (seller) to also transact during this timestep, we can observe that the marginal value gained by adding an additional buyer (seller) to the transacting pool should scale quadratically (we’ll discuss more about it in the next post, but it depends roughly on local network topology).
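One way to see the quadratic claim (a Metcalfe’s-law-style counting argument, not a derivation): if every one of n transacting participants can trade with every other, the number of distinct counterparty pairs grows as n(n-1)/2.

```python
def potential_counterparty_pairs(n):
    # Number of distinct buyer/seller pairings among n participants who
    # can all transact with one another; grows quadratically in n.
    return n * (n - 1) // 2
```

The (n+1)th participant adds n new pairs, so while each marginal entrant’s valuation is (by the no-information principle) just the distribution’s mean, the transacting opportunities they create scale with the size of the existing pool.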


Despite the rough edges to this argument, we made it! We have a moderately cogent argument for a mathematical basis to liquidity value, and more importantly, a rough way to estimate how liquidity value evolves. To the more savvy reader or someone with a good discrete mathematics background, some of the arguments discussed should remind you of:

This is not an accident. In this model, we can most stably represent currencies (or rather, any asset with liquidity value) as deriving value from the network — the group of transactors (buyers/sellers) who accept the asset.

This was a hefty post, and I’m excited we got this far. It’s probably entirely wrong, so please feel free to roast it on Twitter.

Ciao for now,