Demand Side Token Design

Wednesday, August 30th, 2023
Here is a video where I presented some of these same ideas. [3]

Wednesday, July 26, 2023

Demand vs Supply

In The Protocol Guide to Enterprise-Led Growth, I addressed the potential fallacy of targeting developers with blockchain-based solutions and imagining that doing so will produce the same growth seen in web2 developer-first go-to-market motions.

This is still the higher-order need to be addressed: unique business-model innovations (Blockchain Business Model Innovations) that drive new demand to blockchain-specific solutions.

That being said, we already know that there are blockchain-native use cases emerging, especially on the consumer side, which can pull in demand; and in some cases, the blockchain solution solves a problem that is unsolvable in legacy environments, which unlocks demand (see Unlocking Latent Demand with Blockchain Supply).

In this essay, I wanted to build upon the following: Eddy Lazzarin's presentations on protocol and token design [1] and Trent McConaghy's token engineering case studies [2].

After reviewing these frameworks, I felt I could contribute to the conversation and build upon their ideas.

Goal or Objective Function

The "goals" in the Lazzarin presentation describe "what" the protocol does.

The example for MakerDAO was "to develop a stable asset native to Ethereum." This is supply-side focused: what is the protocol offering?

However, Trent McConaghy's use of "objective function" and the definition of "getting people to do stuff" feels stronger and more aligned with the value of tokens on blockchains as incentive machines.

In this case, the function for OceanDAO was defined as "maximize the supply of relevant AI data and services."

This is more than just a description of what the protocol does, which could be "to supply relevant AI data and services" or "enable data marketplaces on the blockchain."

Defining the goal as a function, one whose value and rate of change depend on inputs (which could themselves be other functions), gives more directional clarity.

When it comes to protocols, this kind of dynamic objective function is more fitting and powerful.
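To make the contrast concrete, here is a minimal Python sketch of a supply-side objective function in the spirit of OceanDAO's. The function name, inputs, and scoring are hypothetical illustrations of the idea, not Ocean's actual mechanism:

```python
def supply_objective(num_datasets: int, avg_relevance: float) -> float:
    """Toy objective in the spirit of "maximize the supply of relevant
    AI data and services": the score grows with both the quantity of
    datasets and their average relevance, giving the protocol a
    direction to push in rather than a static description of itself."""
    return num_datasets * avg_relevance

# More supply, or more relevant supply, scores strictly higher:
assert supply_objective(200, 0.5) > supply_objective(100, 0.5)
assert supply_objective(100, 0.9) > supply_objective(100, 0.5)
```

Unlike a static goal, a function like this gives every actor a gradient: any action that raises the score moves the protocol in the intended direction.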

It's similar to how the Terminator had its own relentless objective function, stopping at nothing to achieve it.


In fact, defining an objective function ethically, responsibly, and holistically really is a new design space for orchestrating ecosystems as a type of product superpower.

In Bitcoin's example, maximizing the security of its network, the objective function aligns different players (miners, nodes, users, investors) to continue doing so based on a Proof of Work model, which relies upon energy-consuming hash rate.

The objective function and the incentive mechanisms supporting it have resulted in a network that has (to date) relentlessly pushed hash rate and associated electricity usage higher and higher.

This already raises the question of The Paper Clip Maximizer [4] [5], which Trent also references. This thought experiment states that if the objective function of an AI were to maximize paperclips, it may decide to eliminate humans.

Given this, what if the design were explicitly around demand for a protocol's service?

This would be different from, say, Ocean Protocol's objective function of maximizing supply.

Working through this thought exercise exposes both the limitations of the approach and its possibilities.

Creating Demand for Data Services

Imagine a data indexing and delivery protocol had the following objective function: "Maximize the number of search queries using the protocol's data services."

This begins to shape the protocol around its demand. But it also raises questions.

For example, defining query usage as the measurable atomic unit puts constraints on the method of data consumption (in this case, atomic queries). For streaming data from blockchains, such as events or real-time data, measuring consumption in terms of atomic queries becomes irrelevant.

Defining the objective function has the future-proofing and generalizability of a typical "mission statement", but more teeth and, potentially, immutability and relentlessness.

Given our protocol's ambitions are to index and serve all blockchain data, we should consider widening the aperture of how data is consumed and ultimately priced; this exercise would probably need detail on the different kinds of Jobs To Be Done and data-consumption interfaces to get a real feel for the design space of data consumption.

But... for purposes of this article, I'm going to go with volume of data as the right output to maximize over time, max_t f(t). This raises some potential questions around how the service can distinguish between different types of data or "baskets" of data (high value vs low value; verified vs unverified; old vs recent). But for simplicity, let's assume some of those data constraints are inherited from the limits of what can be stored on the actual blockchain (which would eliminate raw video files, for example).

I also wonder whether the input t for time is the right input. I feel it is, because we want to think in time horizons and, as a sanity check, conduct thought experiments on lim_{t→+∞} f(t).
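As a sketch of what maximizing f(t) over data volume might look like, here is a hypothetical f(t) that treats cumulative weighted volume as the objective, with the "baskets" question handled by illustrative weights. All names and numbers here are my own assumptions:

```python
# Illustrative basket weights (verified data counts more than unverified):
WEIGHTS = {"verified": 1.0, "unverified": 0.25}

def f(served_by_period: list[dict[str, float]], t: int) -> float:
    """Cumulative weighted data volume (in GB) served up to period t."""
    total = 0.0
    for period in served_by_period[: t + 1]:
        total += sum(WEIGHTS[basket] * gb for basket, gb in period.items())
    return total

history = [
    {"verified": 10.0, "unverified": 40.0},  # period 0
    {"verified": 25.0, "unverified": 20.0},  # period 1
]

# f is non-decreasing in t, which is what "maximize over time" relies on;
# the limit thought experiment asks what happens as t grows without bound.
assert f(history, 1) >= f(history, 0)
```

The weighting is where the "baskets" nuance would live: changing the weights changes which data the network is incentivized to serve.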

With an objective function maximizing over time, we can add constraints and supporting functions to bring granularity to the types of data, preference curves, and personas. If the goal is to maximize the volume of data, then the network may adapt itself, changing pricing or interfaces, to always increase that volume.

But, wait, is this still the right objective function?

On its own, it may not.

After all, if demand is inversely related to price, this function could potentially drive the price of the data served to zero. Or it could drive the value capture, in the form of token value, to zero as well.

This, however, is where the design of feedback loops could come in.

So we could approach it in two ways:

Protocol Design #1:

Objective Function: Maximize the volume of data served by the data protocol
Constraint #1: As volume of data consumed goes up, the value of the protocol Token also increases

Protocol Design #2:

Objective Function: Maximize the value of the protocol's Token
Constraint #1: The value of the protocol Token goes up when volume of data served increases
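One way to see the difference is to encode each design as an (objective, constraint) pair. In this hypothetical sketch, the two designs share the same constraint and differ only in which state variable they maximize; all names and values are illustrative:

```python
from dataclasses import dataclass
from typing import Callable

State = dict  # e.g. {"volume": data served, "token_value": token price}

@dataclass
class ProtocolDesign:
    objective: Callable[[State], float]          # quantity to maximize
    constraint: Callable[[State, State], bool]   # must hold between states

def volume_up_implies_value_up(prev: State, cur: State) -> bool:
    """Shared constraint: volume growth must not come with a flat
    or falling token value."""
    if cur["volume"] > prev["volume"]:
        return cur["token_value"] > prev["token_value"]
    return True

design_1 = ProtocolDesign(objective=lambda s: s["volume"],
                          constraint=volume_up_implies_value_up)
design_2 = ProtocolDesign(objective=lambda s: s["token_value"],
                          constraint=volume_up_implies_value_up)

prev = {"volume": 100.0, "token_value": 1.0}
cur = {"volume": 150.0, "token_value": 1.2}
assert design_1.objective(cur) == 150.0   # maximizes data served
assert design_2.objective(cur) == 1.2     # maximizes token value
assert design_1.constraint(prev, cur)     # constraint holds either way
```

The structural symmetry is the point: the math looks almost identical, so the choice between them is really a question of priorities, not mechanics.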

How would one choose, and does it make a difference?

My "gut" says, looking at the data service as a product, that #1 is the right answer.

In the end, the protocol provides a service to customers, and that should be the priority.

Picking #2 feels like defining the mission statement of a company as "To Maximize the Stock Price" with a sub-tenet of "The price goes up when we make customers happy." It feels very short-term and overall "meh", but I don't have a strong, rigorous way to defend my choice.

However, if one could incorporate time horizons as another constraint ("The time horizon for decisions to maximize value is 100 years"), this could alter some of the short-term mercenary feel.

Writing this, part of me wouldn't be surprised if Amazon.com were, in fact, designed by Jeff Bezos to be a kind of corporate creature with such an objective function. Under that design, employee health and happiness and supplier trust and quality could very well go out the window, and everyone ends up working for the company.

So we go towards Protocol Design #1.

What, then, are the challenges with Constraint #1? It's still somewhat early days for the mechanisms by which demand for the service drives the value of the protocol's token. It seems reasonable to impose that constraint, recognizing there could be different mechanisms.

At this point, I'd want to think about designing Reflexivity.

Reflexivity isn't a constraint, to me.

Instead, it's a design principle to help achieve the objective function based on differing, perhaps even adversarial, objective functions of different stakeholders.

Reflexivity here means designing so that positive feedback loops contribute to the growth of the data served and/or the price of the token.
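A toy simulation can illustrate the kind of reflexive loop I mean: demand growth lifts the token's value, and a more valuable token funds capacity that serves more demand. The coefficients below are arbitrary assumptions, purely to show the loop reinforcing itself:

```python
def simulate(periods: int, volume: float = 100.0, price: float = 1.0):
    """Toy reflexive loop: volume pushes price up, price pushes volume up."""
    for _ in range(periods):
        price *= 1 + 0.001 * volume   # demand growth feeds token value
        volume *= 1 + 0.05 * price    # token value funds serving capacity
    return volume, price

v0, p0 = simulate(0)
v5, p5 = simulate(5)
assert v5 > v0 and p5 > p0  # self-reinforcing, in this toy model
```

The same structure run in reverse is the danger: if either coefficient flips negative, the loop amplifies decline just as relentlessly.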

Many may think that spending time this early on the value of the token is pedantic, or that it will take care of itself. That might be true.

It's probably more likely that over-optimization of the price, a pure mercenary mentality, would be hurtful. But I don't think it's guaranteed.

For example:

So again someone could say that this cannot be pre-determined, and it can't, especially if the token is on the public market.

But it is fair to use as a constraint.

Adding Constraints

Constraints make the system dynamic because the different stakeholders (which I cover in more detail under Mapping Stakeholders to the Demand-Side Objective Function) have their own objective functions.

However, to address the demand-pull side properly, I feel we should introduce an example of constraints.

Why?

Because we know that in a traditional web2 environment, one way to increase demand is to set the price to 0.[6]

However, let's say there's a basic feedback loop in which the Indexers of the data need to be paid a portion of the query or consumption fees. If those fees fall to 0, this could result in the loss of the service altogether.
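That feedback loop implies a viability floor on price, which is worth sketching. In this hypothetical model (the indexer share, cost, and demand curve are all invented for illustration), cutting the fee raises demand only until indexers can no longer cover their costs:

```python
INDEXER_SHARE = 0.8             # fraction of each fee paid to indexers
INDEXER_COST_PER_QUERY = 0.002  # what serving one query costs an indexer

def indexers_stay(fee_per_query: float) -> bool:
    """Indexers keep serving only if their fee share covers their cost."""
    return fee_per_query * INDEXER_SHARE >= INDEXER_COST_PER_QUERY

def demand(fee_per_query: float) -> float:
    """Toy downward-sloping demand curve, zeroed out if the service dies."""
    if not indexers_stay(fee_per_query):
        return 0.0  # no indexers, no service, no servable demand
    return 10_000 / (1 + fee_per_query)

# Lowering the fee raises demand, but only down to the viability floor:
assert demand(0.01) > demand(0.10)
assert demand(0.0) == 0.0  # free queries kill the supply side entirely
```

This is the sense in which the web2 "set the price to 0" move fails here: the demand-maximizing price is bounded below by what keeps the supply side alive.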

Since we have introduced a constraint around usage and price, let's consider other constraints. Some might be technical constraints; some might be intentional constraints.

Other possible constraints (here I am struggling a little with whether these truly are constraints. I am thinking more in terms of another function f(n) that must be satisfied while maximizing the objective function, versus what seem to be stricter constraints from the Lazzarin example, such as "price linked strictly to the dollar" and "systemic risk borne only by MKR holders").

But similar to values in a company, these feel like mathematical functions that can be quantified and that express qualitative attributes:

Ocean Protocol described questions or considerations as their constraints:

These felt like they could either be restated as constraints or treated as design principles.

At this point, it feels like both goals and constraints are loosely defined (understandably) and there's probably much to gain in developing these further.

But for now, I'm thinking of constraints as functions that illustrate idealized behavior that we want to use to shrink the design space of the objective function.

Right now, we have an objective function of increasing the amount of data consumed from / served by the data service protocol. The constraint is that the price of the token should go up as the total data consumed goes up. (There's probably more nuance around this as well, such as defining time periods. If consumption is steady-state, for example, the price shouldn't necessarily go up; a regular business with static revenue wouldn't see its enterprise value or stock go up.)
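That growth nuance can be captured by tying the token-price constraint to the rate of change of consumption rather than its level. A hypothetical sketch (the sensitivity parameter is an invented illustration):

```python
def constrained_price(prev_price: float, prev_volume: float,
                      cur_volume: float, sensitivity: float = 2.0) -> float:
    """Token price responds to the *growth rate* of data consumed,
    not its absolute level: flat consumption leaves the price flat,
    like a business with static revenue."""
    if prev_volume <= 0:
        return prev_price
    growth = (cur_volume - prev_volume) / prev_volume
    return prev_price * (1 + sensitivity * growth)

assert constrained_price(1.0, 100.0, 120.0) > 1.0   # growth lifts the price
assert constrained_price(1.0, 100.0, 100.0) == 1.0  # steady state: flat
```

Framing the constraint on growth rather than level also makes the time-period question explicit: you have to choose what "prev" and "cur" mean.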

Are Levers or Sub Objective Functions Constraints?

This is getting long for a single-day piece. Going to continue on Demand Side Token Design - Sub Functions

PS: The higher order bit or preface to this (but can't be written till I work through some of these other essays) would be the Protocol Design Canvas.


  1. Protocol design: Why and how | Eddy Lazzarin - YouTube and Token Design: Mental Models, Capabilities, and Emerging Design Spaces with Eddy Lazzarin - YouTube ↩︎

  2. Token Engineering Case Studies. Analysis of Bitcoin, Design of Ocean… | by Trent McConaghy | Ocean Protocol ↩︎

  3. Demand Side Token Design - YouTube ↩︎

  4. Can Blockchains Go Rogue?. AI Whack-A-Mole, Incentive Machines… | by Trent McConaghy | Ocean Protocol ↩︎

  5. Nick Bostrom - Wikipedia ↩︎

  6. This "Law of Demand" works for ordinary private goods, but we know there are exceptions (which we won't include for now) such as Giffen Good, Veblen Goods, signaling effects, network effects. ↩︎