When we think of trust as a concept in RPGs, the most common referent (based on my own experience and an impromptu straw poll on Twitter) is the trust between players to respect one another. Safety tools—like the X Card, Lines and Veils, and others—are mechanisms that safeguard that social trust, ensuring everyone at the table has an enjoyable experience.
Between players and designers, trust may be a matter of the designer’s track record for producing quality games. It may also be a question of the designer’s character and reputation as an upstanding, respectable member of the community and of humanity at large.
But what about the inverse—a game designer’s trust for their players?
As Katie Salen and Eric Zimmerman note in Rules of Play, designers cannot design play; they can only design systems that other people use to play. Since designers intend their systems to deliver a certain experience, they must indirectly design toward that end goal without being able to exert control over it.
Introducing the concept of trust into design intention raises some interesting questions. Does the designer trust their players, or do they only trust themselves? And how does that allocation of trust affect game design and play?
The concept of “trustlessness” comes from cryptocurrency, and it’s not a particularly intuitive term. Linguistically, it’s likely to be interpreted as “untrustworthy,” but it actually means the opposite—albeit in a roundabout way.
A trustless system is one that doesn’t require interacting parties to trust each other. Instead, they need only trust the system through which they’re interacting.
CoinMarketCap’s knowledge base summarizes a trustless system:
“It enables individuals to place trust in abstract concepts rather than in people.”
In the case of crypto, that abstracted system is the blockchain: a distributed ledger that records all transactions. In the case of RPGs, it’s the formal mechanics laid out in a game’s rulebook(s).
Trustlessness and Simulationism
For a system to be trustless, it must be comprehensive. It aims for a Goldilocks zone of flexibility: enough that the player still has room to genuinely play (instead of following a rigid script) but not so much that play can escape the experience the designer intends.
Simulationist systems aim to be trustless by defining the game’s reality in great detail. This places the burden of fidelity on the formal mechanics and not on the players. (It does, however, place a different burden on the players, which I’ll discuss below.)
It’s not that designers of simulationist systems don’t trust their players to simulate correctly on their own; they instead craft a system that comprehensively models a reality so that trusting the players to get it right becomes a nonissue.
Magic as a Trust Indicator
The way a game handles magic can be one indicator of how trustless its system is. To illustrate, let’s look at spellcasting in two RPGs: Dungeons & Dragons and A Dragon Game (Chris Bissette, Loot the Room).
D&D’s spells are mechanically defined and described in almost excruciating detail—to the point that each use of a spell may necessitate re-reading its entry in the rulebook (especially if it’s one not commonly used). This is an example of trustlessness: the designers don’t have to trust the players not to abuse or misinterpret any given spell because its magical effects are so precisely delineated. As a result, the spell list takes up an entire chapter in the PHB.
In A Dragon Game, though, spells are defined by their type (word, sigil, ceremony) and two words drawn from a pair of 20-entry lists. The spell’s effects are whatever the player interprets those two words to mean. The entire spellcasting system takes up only two pages (which are rather liberally laid out).
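The generative shape of that mechanic—roll a type, then draw one word from each of two 20-entry lists—can be sketched in a few lines. This is an illustrative sketch only: the word lists and type names below are hypothetical stand-ins, not the actual tables from A Dragon Game.

```python
import random

# Hypothetical stand-ins for the game's two 20-entry word lists;
# the actual tables in A Dragon Game differ.
FIRST_WORDS = ["ash", "bone", "cloud", "dream", "ember", "frost", "glass",
               "howl", "iron", "juniper", "knot", "light", "mire", "night",
               "oath", "path", "quiet", "root", "stone", "thorn"]
SECOND_WORDS = ["bind", "call", "cut", "drift", "eat", "fold", "grow",
                "hide", "join", "leap", "mend", "name", "open", "pull",
                "quench", "rise", "seal", "turn", "wake", "ward"]
SPELL_TYPES = ["word", "sigil", "ceremony"]

def random_spell(rng=random):
    """Roll a spell: a type plus two words. The combined meaning of
    the words is left entirely to the player's interpretation."""
    spell_type = rng.choice(SPELL_TYPES)
    pair = f"{rng.choice(FIRST_WORDS)} {rng.choice(SECOND_WORDS)}"
    return f"{spell_type}: {pair}"

print(random_spell())
```

Note what the code doesn’t contain: any effect resolution. The system generates a prompt (“sigil: thorn mend,” say) and stops—everything after that is the player’s interpretation, which is precisely the trust-based void the section describes.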
The difference between the two systems is trust. Opening the system up to interpretation self-evidently precludes misinterpretation, of course, but more importantly, the designer isn’t designing against the possibility that the player will abuse that open-ended interpretability. Bissette trusts his players. D&D’s designers only trust their system.
Interpretability as Incompleteness
Interpretability is productive incompleteness—what Augury Ignored calls productive voids in their blog post of the same name. These voids are points a designer leaves undefined in a system; when done correctly, they provide the GM latitude to make rulings (instead of enforcing rules), which gives them room to act as a creative player (instead of only refereeing a formal system).
When a game emphasizes rules over rulings, it’s trying to provide a ruleset that locks down the play experience, eliminates incompleteness, and establishes a total system that forecloses on variability. The system becomes utterly central to the shared experience.
“Trustless systems are the opposite of centralized systems,” CoinMarketCap’s definition says of blockchain networks. They’re “an environment where there is no centralized authority.”
This is exactly the opposite in RPGs: the trustless system is the centralized authority. In System and the Shared Imagination, M. Joseph Young concludes “rules are authorities used to support the credibility of statements made by people.” Young’s interpretation is a product of his times; he wrote these words in 2005, an era when D&D 3e was taking AD&D’s totalizing ambition to its extreme, and when theorists likewise aimed at mastering RPG design by establishing totalizing frameworks.
Justin Hamilton’s essay “Less Rules Do More” (available in Knock! #3) addresses the flip side of this situation. Hamilton describes a gaming culture in which the GM as referee has become synonymous with “the person who knows the rules” instead of an asymmetrical participant in the conversation that constitutes play. Rules and mechanics exist to support the fiction being created at the table, but the heavier the system, the more the players’ conversation focuses on that mechanical infrastructure instead of the creative superstructure that’s intended as play’s manifest content; the system entrenches itself, and insists upon itself, as the central authority.
The Trustlessness Paradox
The more a designer attempts to circumscribe the intended play experience through a trustless system, the more cumbersome the game becomes, and the more it succumbs to the problems Hamilton describes. It becomes less of a game and more of an exercise in bureaucracy.
A trustless RPG system is the opposite (in theory if not in practice) of a blockchain network: it centralizes what should be a decentralized system of interaction and communication. In contrast, a trust-based system takes advantage of productive incompleteness to de-emphasize mechanics and empower creative play.
The lighter (and more trust-based) the rules, the more easily all players can learn and remember them. That rehabilitates the GM’s position as the other players’ interface with the imagined world rather than a repository and arbiter of the rules governing that world. The more totalizing and trustless the system, the less authority it grants to the players (including the GM). In a productively incomplete, trust-based game, the system and the players share the authority to shape their experience.