The Only Game in Crypto Town

Alessandra Sollberger
8 min read · Mar 12, 2018


When the Soviets installed missiles on Cuban soil in 1962, they were pretty damn sure of what would happen next. A nuclear attack from the US would get the US nuked by the Soviets a few minutes later, and if the Soviets attacked the US first, they’d be nuked right back too. So, to avoid mutual destruction, the US would just warn the Soviets to withdraw. What happened in Cuba? Your history teacher might have given you a fancier explanation, but it’s quite simple: the US warned the Soviets and the Soviets withdrew. Better than mutually nuking each other. The Cold War trapped the US and the Soviets in an endless game of attack and defense. Only two scenarios were possible: both countries provoking, but not attacking, each other, or both countries mutually destroying each other. The loop only stopped when the Soviet Union collapsed in 1991.

Plenty of economists have analyzed the Cold War through game theory. It’s almost too textbook an example to have actually happened.

When it comes to economics in crypto, there’s only one game in town: mechanism design, also known as reverse game theory. Aside from monetary policy, it’s all about designing incentives into a protocol. But what’s so confusing about mixing game theory with technology?

Before we continue, let’s quickly clarify what this theory is all about.

Game theory

You’re stranded on a beautiful tropical island. Unfortunately, it happens to be full of hungry cannibals. If a cannibal eats you, he’ll get tired and fall prey to the next cannibal. As you’d expect, these cannibals also happen to be experts in game theory. They think twice before making a move.

1 cannibal — easy scenario, there’s just one cannibal and you get eaten. Sorry.

2 cannibals — if the first cannibal eats you, the second cannibal will eat him. The best choice for the cannibals is to make no move. In this scenario, you get out of it alive.

3 cannibals — here, the cannibal who understands game theory best will try to move as fast as possible. Once you’ve been eaten, the problem is reduced to the one we just solved above. The other two cannibals can’t eat him while he’s digesting you as that would leave one of them exposed to the last one, who still hasn’t eaten anyone.

This problem has a quick solution — with an even number of cannibals, you get out of it alive. With an odd number of cannibals, the first cannibal to move reaps the benefits and the game stops.
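
If you want the parity argument spelled out, here’s a minimal sketch in Python. The survives function is just an illustration of the reasoning above, nothing standard:

```python
def survives(n: int) -> bool:
    """True if the prey gets out alive with n rational cannibals around."""
    if n == 0:
        return True  # nobody left to eat you
    # A cannibal only makes a move if he'd survive afterwards as the
    # new prey in the smaller game with n - 1 cannibals.
    return not survives(n - 1)

for n in range(1, 7):
    print(n, "cannibals ->", "you live" if survives(n) else "you get eaten")
```

Run it and you get exactly the pattern above: odd numbers eat you, even numbers leave you alone.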

In a more realistic situation (nothing against islands full of cannibals), a District Attorney uses basic game theory when trying to get two criminals to testify against each other. She traps them in the so-called Prisoner’s Dilemma, a paradox in which two individuals acting in their own self-interest end up pursuing actions that don’t lead to the ideal outcome.

If criminal Arnie rats on criminal Bernie and Bernie doesn’t rat back, Arnie is told he’ll get out of jail right away instead of serving 4 years. Here’s the trouble though: if Bernie decides to rat too, Arnie will get 3 years instead of 4 (one year less as a goodwill gesture for his cooperation). If neither Arnie nor Bernie rats, they only get one year each, as there’s not enough evidence. But hey, how can Arnie possibly assume that Bernie won’t rat on him? Staying silent wouldn’t be in Bernie’s interest, and Arnie would end up stuck with 4 long years of jail time. After all, Bernie will assume that Arnie is a rational criminal who will rat on him.

So the outcome for our two criminals is to rat on each other and, annoyingly, they’ll get 3 years instead of just one. This “balance” is what we call a Nash Equilibrium. The system reaches a stable state in which each player (or criminal) is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy. Crucially, the players cannot coordinate their moves with each other. They cannot trust each other.
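
To make the equilibrium concrete, here’s a small sketch that checks every strategy pair against the jail terms from the Arnie and Bernie story (the numbers and names are purely illustrative):

```python
from itertools import product

# Jail years from the Arnie/Bernie story (lower is better):
# (arnie_move, bernie_move) -> (arnie_years, bernie_years)
years = {
    ("silent", "silent"): (1, 1),
    ("rat",    "silent"): (0, 4),
    ("silent", "rat"):    (4, 0),
    ("rat",    "rat"):    (3, 3),
}
moves = ("silent", "rat")

def is_nash(a: str, b: str) -> bool:
    """No player can cut their own jail time by switching alone."""
    a_years, b_years = years[(a, b)]
    best_for_a = min(years[(alt, b)][0] for alt in moves)
    best_for_b = min(years[(a, alt)][1] for alt in moves)
    return a_years == best_for_a and b_years == best_for_b

for a, b in product(moves, moves):
    if is_nash(a, b):
        print(f"Nash equilibrium: {a} / {b} -> {years[(a, b)]} years")
```

Only “rat / rat” comes out: stable, but far from optimal for the two criminals.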

This is what happened during the Cold War too. For the US, the choice between preparing or not preparing nuclear weapons was obvious. No matter what the Soviets would do, better to be armed. Same thing on the Soviets’ side. This created a Nash Equilibrium that was logical, but definitely not optimal.

Mechanism design (reverse game theory)

When I was a rebellious teenager, my swimming coach figured out that telling me “no chance you’ll win this competition” was the best shot at making me give 100%. I just wanted to prove people wrong. This is basic reverse game theory in action: given the desired outcome (petulant teenage Alessandra doing her best), you design a course of action with the optimal mechanisms to get there.

Standard game theory is about moving forward with the most efficient action to optimize our own result. Reverse game theory, i.e. mechanism design, is about going backward from a desired result and building a system of incentives & penalties that gets the players to our ideal outcome.

Now we’re finally ready to go back to crypto town. We’ll use bitcoin as an example. The good news is that there are no cannibals or annoying teenagers in town. There are only two players in this game: users and miners. Users have a limited range of actions available. They can either send or receive bitcoin. To do that, they need a public and a private key.

Miners perform the actions that enable the system to exist. They mint new coins and they validate transactions. Their incentive to do that is being paid a certain number of bitcoins. Thing is, there’s no police in this town. So how do we make sure miners don’t cheat the system? For instance, what stops a miner from allowing an invalid transaction to be added to a block, giving herself some extra coins? Here’s where reverse game theory comes into action. The right mechanics are designed straight into the blockchain. In the example above, the rule is that any block mined on top of an invalid block becomes an invalid block. Other miners will simply ignore the invalid block and keep mining on top of legit blocks instead. That happens because as a group, miners choose the most stable state — which is the Nash Equilibrium of this system.
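
Here’s a toy sketch of that rule. It’s nothing like real Bitcoin code, just the “invalid parent makes an invalid block” logic in miniature, with made-up names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Block:
    name: str
    parent: Optional["Block"]
    transactions_valid: bool  # stand-in for full transaction validation

def is_valid(block: Optional[Block]) -> bool:
    """A block is valid only if its transactions and its whole ancestry are."""
    if block is None:  # we've walked past the genesis block
        return True
    return block.transactions_valid and is_valid(block.parent)

def mineable_tips(candidates):
    """Honest miners only build on top of valid blocks."""
    return [b.name for b in candidates if is_valid(b)]

genesis = Block("genesis", None, True)
honest = Block("honest", genesis, True)
cheat = Block("cheat", genesis, False)            # miner gave herself extra coins
built_on_cheat = Block("built-on-cheat", cheat, True)

print(mineable_tips([honest, cheat, built_on_cheat]))  # ['honest']
```

The cheating block, and anything stacked on top of it, simply never gets built on.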

“Hold on,” you might say at this point, “what about miners who talk to each other and cheat the system together?” That’s where coordination comes in. In our example with Arnie and Bernie, the lack of coordination was artificially created by the District Attorney: the criminals simply don’t get to speak with each other. It’s a different story with “n” players. As the coordination game goes, when a majority of people aren’t changing their state, the minority has no incentive to choose a new state. In other words, with a vast and distributed group of miners, rogue miners can’t cheat through altered blocks (that is, unless they reach a 51% majority). The rest won’t give a damn about their dodgy blocks, which will be worthless. No point bothering with it.
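
A back-of-the-envelope comparison makes the point. All the numbers below are illustrative assumptions, not protocol constants:

```python
block_reward = 12.5      # coins per block (roughly the 2018 reward)
blocks_per_day = 144     # about one block every ten minutes
my_hash_share = 0.10     # a rogue miner, nowhere near a 51% majority

# Mining honestly: your blocks land on the chain everyone builds on.
honest_coins_per_day = my_hash_share * blocks_per_day * block_reward

# Mining altered blocks: the honest majority ignores them,
# so the coins inside are worthless.
rogue_coins_per_day = 0.0

print(f"honest: ~{honest_coins_per_day:.0f} coins/day, rogue: {rogue_coins_per_day} coins/day")
```

Without a majority, cheating just means burning electricity on blocks nobody will accept.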

Here, we’ve got a game with the assumption that a majority of participants will be honest. Fine — if you could communicate with a majority of users, you could also abuse a minority for your own gain. But how do you actually do that when the system is designed to keep identities anonymous?

When building new protocols, it’s not enough to incentivize players to take the actions that grow our system and give it value. We also need to think of the ways they could cheat and build in penalties accordingly. Cryptography alone doesn’t take economic incentives into account. Economics can’t provide anonymity or security. So what’s the right tool for structuring all of this?

Like in any town worth its salt, there’s a villain in crypto town. Smart and deceitful, it goes by a pretentious name: cryptoeconomics. Nobody has ever seen the villain in person. The villain’s name alone carries enough weight to make Voldemort jealous.

What’s the big deal with cryptoeconomics? When two seemingly unrelated areas come together, our human brains start throwing out question marks. This gets tricky with game theory, which is able to sneak into many different areas and leave us bamboozled.

Take romance. Plenty of gambling, guessing and “playing games” in there. But when two parties start trying to maximize their outcomes and analyze their relationship, there’s a lot more going on than just rational strategy (hormones, for one). So we typically end up confused and upset. The relationship between cryptography and economics isn’t straightforward, either. You see — cryptoeconomics isn’t a subfield of economics, like some VC investors claim. Cryptoeconomics is an area of cryptography that takes reverse game theory into account.

This is all quite counterintuitive. When it comes to money, we’re not used to thinking of it as an engineering problem, and when it comes to a new technology, we don’t tend to consider reverse game theory an essential component. So if you look at a cryptoeconomic system like bitcoin only through the lens of computer science, you’ll see things happening that computer science alone could never accomplish. It almost feels like magic. But there’s no Voldemort in cryptoeconomics. It’s just multidisciplinary in a counterintuitive way.

In the next post, we’ll explore the many faces of multidisciplinarity in crypto. As far as economics goes (aside from monetary policy), mechanism design is the only game in crypto town. It comes down to economic incentives. But beware, because we’re entering a new Renaissance era. All fields are converging, and it’s in the middle that the real treasures hide. Every crypto Renaissance hero should be equipped with the right toolkit.

You can follow my rants on Instagram or Twitter. To help make economists & cryptographers even more confused, clap or share this article ❤
