Working Backwards for Protocol Design

"Working Backwards", despite its "en vogueness", has many great properties for the product manager. One of those has been the ability to validate with the customer before beginning customer development.

The same methodology can be applied to protocol design.

In fact, despite the greater complexity, it might even be more important because of the number of stakeholders and the underlying economic interactions.

While it won't prevent bad incentive designs (Incentive Designs Gone Bad), it can reduce the risk.

In this essay, I'll start with a hypothetical protocol and apply the working backward approach. My goal is to come up with a more repeatable framework for any protocol or token-incentivized ecosystem.

What is the "Mission" or "Objective Function"?

I've written before on the role of the objective function (Mapping Stakeholders to the Demand-Side Objective Function, Language Can Be Political - Objective Functions Less So), but do so here in the context of the starting point of a protocol, which is not its utility, but its mission.

Typically, mission statements are fluffy and ineffective. When they can be translated into "objective functions" that are embedded into smart contracts on a chain and based upon on-chain data, mission statements become less fluffy and less ineffective.

It's not perfect, because much of the world can't be measured (the oft-cited "what gets measured" quote has a fuller context worth keeping in mind).

However, with the right context, it's better than a statement no one understands, believes, or truly lives except the die-hard missionaries (and every protocol should have at its core missionaries who can carry on the objective function without on-contract enforcement or incentives, in order to bootstrap and fine-tune it).

Imagine you want to start a data service which serves blockchain data to developers. This is a reasonable service with reasonable utility. There are many equivalent offerings in the web2 world.

And one would expect many to emerge in web3, as well.

Our exercise is to imagine, by working backwards, a data-service protocol by starting with its mission.

What is a Protocol?

A web3 protocol, however, differs from its web2 predecessors (e.g. Simple Mail Transfer Protocol, File Transfer Protocol, Hypertext Transfer Protocol) in a few ways. One is that the standard isn't "imposed" through a standards body building specs. A benefit of those older protocols was the hard work that went into collaborating on and codifying their capabilities and interfaces. Imagine needing to start from scratch to come up with a protocol and wrangle agreement across different players.

The second is that many didn't capture value. Most were free, which thankfully accelerated innovation. Some would argue that kind of collaboration, built on open specifications and open software, is the best way to drive innovation.

Some protocols "monetized" by monetizing the standard through a licensing agreement, such as Wifi or Zywave.

Web3 protocols, instead, try to align and incentivize their participants and contributors through tokens. When properly designed, this can get different stakeholders to contribute while encouraging ongoing innovation and, at the same time, limiting direct competition with the protocol.

Defining what the protocol does, like its web2 counterparts, is fairly straightforward: blockchain data query and transfer service.

But what does it mean to deliver this as a service, versus the specification of a service?

Missions

Let's compare that language, "blockchain data query and transfer service", to the same elements expressed as a mission.

"Enable developers anywhere to query and transfer blockchain data in the manner that best meets their use case and satisfies their core non-functional requirements (e.g. latency, verifiability, availability, and comprehensiveness) without any platform risk."

This is much more of a mouthful, and it's not as pithy and lofty-sounding as most corporate missions.

But let's see how this still creates more excitement[1] and clarity versus the other definition, "a blockchain data query and transfer service."

"Enable developers anywhere" may be a bit amorphous as a mission. But a click-down reveals the significance of this phrasing: the service should be performant and accessible from anywhere in the world. That isn't true everywhere today, whether due to lack of transit (for example, China) or regulatory limitations on services.

"in the manner that best meets their use case" -- also initially hand-wavy, but it begs the question of how to do this given the diversity of data use cases. We know that this limitation of being all things to all people is often not met by a traditional commercial entity, even one with as expansive offering as AWS.[2]

"best meets" is also super vague and not quantifiable. But a click-down puts us into design space filled with challenges and nuance of creating the optimal developer experience. Entire companies have been built, for example, on different phases of creating the "best" experience -- from simplifying deployment (containers, Infrastructure As Code), to API design (SOAP to REST to GraphQL), to shifting security concerns "left".

By making this part of the mission, the protocol tasks itself with continuously improving and innovating on the DevX while ensuring no limitations or blockers in its service delivery.

The language of "satisfies their core non-functional requirements" is open-ended because these can change or be in conflict with each other (so the protocol can't bend computer science if faced with a trilemma for example, but must give the developer full flexibility to make their own trade-offs). By listing a few key examples, this already sets the bar very high: latency and verifiability involves many complex technical elements in many parts of the data pipeline, and comprehensiveness tasks the ability of the protocol to source a growing volume, diversity, and complexity of data sources.

The last part "without any platform risk" might be the the crispest element of the protocol, and I'd argue, it's one shared by all web3 protocols.

The top-level platform risk for developers is being rugged on access to a service's API.

Developers want confidence that no part of their application can be stopped by a unilateral decision from a service provider. In the case of a data service provider, the infrastructure should still run if AWS or Google Cloud stops working, or if a particular flavor of blockchain client (if that's part of the stack) is cut off or goes rogue.

Platform risk extends beyond just the service infrastructure.

As in the case of Twitter, which raised API prices from $0 to $5,000/month, platform risk includes non-competitive pricing behavior. This doesn't mean the service must be provided for free. But it does mean some market-driven price discovery is applied wherever possible on behalf of the developer using the service.

Another example of platform risk that isn't easy to solve is Apple's 30% take rate. Apple brings distribution and a better UX, so there's value behind the 30%, but many developers feel it isn't worth it and choose to go outside the App Store, paying 0%. Because the web is open, they can do that. But there's something in between these two approaches -- a price-competitive take rate that still helps developers maximize their distribution -- and that would be another goal of this protocol.

So a challenge with a protocol mission statement is to be as generalizable as possible while providing clear "sub-missions" that either put constraints on the mission or give clarity on attributes that roll up to it in a coherent way.

Objective Functions

What is different and exciting about objective functions is that a mission written in the sort of clunky style above can give clarity and alignment to different teams working in web3.

Imagine that such a protocol weren't served by a single monolithic company.

Instead, the protocol was open and incentivized such that the right people with the right skills anywhere in the world could participate with labor and be rewarded financially.

In fact, the protocol was so successful that it would never make sense for a competitor to compete directly; instead, even the most ambitious and skilled entrepreneur would earn more by contributing to the protocol than by striking out with an alternative service.

The concept of objective functions and blockchain tokens enables this -- but it must be carefully designed.

Let's look at the two most important foundational design considerations.

The first is the "objective function" of the protocol.

It must be defined as simply and clearly as possible while still generalizing the goal.

Remember: a properly designed protocol will relentlessly pursue the maximization of its objective function, like a Terminator.

One way to start this for this illustrative data protocol would be the following:

D(t) = Σᵢ aᵢ·fᵢ(t)

A way to describe this: Demand (D) is to be maximized over time. The protocol's objective is to maximize the Demand (usage) of itself.

There are more nuanced ways to do this that are specific to the query, which we'll get to, but this is the framework.

The second aspect is the other side of the equation, which is a set of functions f(t) that each contribute over time to the maximization of the objective function.

Note that each function contributes to the maximization of the objective function, not to itself!

The protocol needs ways to ensure there are no negative spill-over effects or local-maximization "traps" from a function maximizing itself. Instead, the functions each need a feedback loop that identifies and overcomes those "valleys" -- a complicated n-dimensional problem.
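To make this concrete, here's a minimal sketch of a demand objective of the form D(t) = Σ aᵢ·fᵢ(t), where each sub-function is credited by its marginal contribution to D rather than by its own raw value. All weights, values, and names are invented for illustration; this is a sketch of the idea, not the protocol's actual math.

```python
# Hypothetical demand objective D(t) = sum(a_i * f_i(t)).
# Crediting each sub-function by marginal contribution to D(t), not by
# its own value, is one way to discourage local self-maximization.

def demand(weights, sub_values):
    """D(t): the quantity the protocol relentlessly maximizes."""
    return sum(a * f for a, f in zip(weights, sub_values))

def marginal_contribution(weights, sub_values, i, delta=1.0):
    """How much D(t) moves if sub-function i improves by `delta`."""
    bumped = list(sub_values)
    bumped[i] += delta
    return demand(weights, bumped) - demand(weights, sub_values)

weights = [0.5, 0.3, 0.2]         # a_i: importance of each sub-function
sub_values = [100.0, 40.0, 10.0]  # f_i(t): e.g. queries, data indexed, uptime

print(demand(weights, sub_values))                    # 64.0
print(marginal_contribution(weights, sub_values, 0))  # 0.5
```

The feedback loop the essay describes would then adjust the weights and sub-functions themselves, which is where the n-dimensional difficulty lives.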

The second component in the design is the incentive mechanism.

Tying the incentive mechanism to the Demand is critical to aligning as many stakeholders as possible to the objective function.

Imagine something like this:

R(t) = r_f(t) × D(t)

What does this mean?

It means that as the value of D(t) rises, with the protocol trying to maximize it, the Reward function also increases monotonically over time.

But how this behavior expresses itself is not simple, so it's expressed as the function r_f(t).
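As a toy illustration of why the coupling isn't one-to-one, here's a sketch where r_f(t) is modeled as a decaying emission rate. The decay schedule and all numbers are my own assumptions, purely to show the shape of the idea: rewards still rise as long as demand outgrows the decay.

```python
# Toy coupling R(t) = r_f(t) * D(t), with r_f(t) as an assumed
# emission rate that tapers as the protocol matures.

def reward_rate(t, base=0.10, decay=0.05):
    """r_f(t): a hypothetical rate, higher early to bootstrap supply."""
    return base / (1.0 + decay * t)

def reward(t, demand_at_t):
    """R(t): reward paid out; monotone in demand at any fixed t."""
    return reward_rate(t) * demand_at_t

# If demand grows faster than the rate decays, rewards still rise:
for t, d in [(0, 100.0), (10, 300.0), (20, 900.0)]:
    print(t, round(reward(t, d), 2))
```

A one-to-one coupling (r_f constant forever) would be one arbitrary choice among many; the point is that r_f(t) is itself a design surface.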

Why not just make it one to one?

Because the design can't be arbitrary!

The financial reward through the token is dependent not only on the rise in the Demand function.

It depends on how much value is captured by the Reward!

That value can be discovered in an internal market amongst stakeholders, where more traditional supply-and-demand dynamics apply. Designing those mechanisms is still complicated, but at least they provide a closed system.

However, eventually, if there's real value, true price discovery might mean that the Reward token needs to expose itself to a wider financial market, where it competes with all the assets in the world to discover its actual value to those Stakeholders contributing to the service!

But for the rest of this series, we will set speculative impact aside until we get through more details of the framework.[3]

Design needs to effectively balance these two components: the objective function and the incentive mechanism.

Demand everywhere, not a drop to drink

Demand should be the top priority of every protocol.

Absence of demand means absence of utility.

Utility, however, doesn't create demand.

In other words, a service can be deemed "useful" even though only a small number of people want it.

Let's consider ways to create the function for Demand for the illustrative data service above.

  1. Amount spent to receive the service of the protocol, denominated in one fiat currency
  2. Amount spent to receive the service of the protocol, denominated in the Reward Token
  3. Amount of data, defined by the bytes of data transferred from the service endpoint to the developer's client
  4. Number of queries made by developers through the end-point
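The four candidates can be made concrete with a small sketch computing each of them from a hypothetical log of query events. The field names, values, and token price below are all invented for illustration.

```python
# Computing the four candidate Demand metrics from a hypothetical
# event log: (bytes_transferred, queries, fee_in_reward_token).

records = [
    (1_200_000, 3, 0.5),
    (800_000, 1, 0.2),
    (5_000_000, 10, 1.8),
]

TOKEN_PRICE_USD = 2.0  # assumed external price for the Reward Token

spend_token = sum(fee for _, _, fee in records)  # candidate 2
spend_fiat = spend_token * TOKEN_PRICE_USD       # candidate 1
bytes_total = sum(b for b, _, _ in records)      # candidate 3
query_count = sum(q for _, q, _ in records)      # candidate 4

print(spend_fiat, spend_token, bytes_total, query_count)
```

Note that candidates 1 and 2 measure the same underlying spend and differ only by the token's fiat price, which is exactly the speculative exposure set aside earlier.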

As I wrote in Demand Side Token Design, optimizing for the amount spent, regardless of the denomination, feels like the wrong metric to measure. It doesn't focus on the utility.

I've been rethinking this, and feel perhaps the sub functions could help address it.

Willingness to pay feels important; but with the "developers anywhere" part of the mission, we add a constraint to ensure financial accessibility of the service to the poorest parts of the world.

This means one element of the protocol is to achieve price parity: not pricing developers in poorer markets out, while also not letting a developer from a wealthy BigTech company pay that same low price.

Is this possible?

I don't know, but thinking about how to express this with simple math helps clarify our thinking in terms of the constraints, the goals, and the loftier aspects of the mission.
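One hedged way to express the price-parity idea in simple math: scale a base price by a purchasing-power index per region, so the relative burden is similar everywhere. Every number below is invented, and the hard part the mission raises remains visible in the code: a BigTech developer could route through a cheap region, so the protocol would also need a way to resist that arbitrage.

```python
# Toy price-parity constraint: local price = base price * PPP index.
# Index values are invented; 1.0 = reference economy.

BASE_PRICE_USD = 10.0  # hypothetical price per million queries

PPP_INDEX = {
    "US": 1.00,
    "IN": 0.30,
    "NG": 0.25,
}

def local_price(region):
    """Price adjusted so the service costs a similar share of local income.
    Open problem: preventing wealthy buyers from claiming cheap regions."""
    return BASE_PRICE_USD * PPP_INDEX[region]

print(local_price("US"), local_price("IN"))  # 10.0 3.0
```

Writing it down this way at least turns "perfect price parity" from a slogan into a function with testable properties and an explicit failure mode.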

More concretely, imagine the protocol found that maximizing spend directed it to focus exclusively on the highest-paying customers. This is not a bad strategy; many companies have pursued it. Those same companies, however, found themselves disrupted from below.[4]

So "anywhere" may maximize the number of developers, incentivizing the protocol to pursue smaller paying accounts versus excluding them to solely focus on the largest accounts. It doesn't mean those large accounts are ignored. It means the service and GTM must be incentivized to pursue both types of businesses and do so in a way that doesn't hurt the objective function of maximizing total spend.

So, for now, I've moved away from pure utility, such as bytes served or queries, as the maximizing function: spend is a good proxy for those services, and it gives the protocol flexibility to come up with the right unit economics across services and customers.

The next essay will tackle one of the critical sub-functions on generating demand and why permissionlessness in the design is critical to serve that.

See Preference Curves Like Permissionlessness in Protocols



  1. This is a very subjective statement, I know. But is it, nevertheless, a reasonable assumption that if offered a "job" at a company that described itself the first way versus the second, a greater number would pick the second? ↩︎

  2. See AWS's number of databases and quote from Jassy. ↩︎

  3. Token issuances (ICOs and their alternatives) let many projects raise initial capital and align with communities and stakeholders early, without the often inescapable misalignment that comes with traditional VCs. But I suspect such early exposure to capital markets can hurt projects as well. There's a reason successful companies like Facebook stayed out of the public markets for so long despite being ready and despite appetite from public investors. I don't know why; those reasons are above my pay grade. All I know is that they must have existed, based on their behavior. ↩︎

  4. The Innovator's Dilemma ↩︎