Systems
A system is anything with multiple parts that depend on each other. In other words, every machine and activity is a system on some level. Systems are the best way to achieve Goals. Everything is a system, and is also part of a larger system.
Interesting System Properties
- Modular.
- Responsive: know what the system is doing and make the Feedback Loops fast.
- Decentralized.
- Permissionless.
Changing Systems
To change a system you need vision, skills, incentives, resources, and an action plan. Changing a complex system is hard; even if the intention is good, the result might not be.
First, focus on Incentives. Don't be angry at the people who benefit from a system, or at the system itself. Most systems just end up that way, the same way a river meanders towards the sea or an electrical current finds its way to ground.
Keep in mind that intervening in a system requires some kind of theory: a model in which the positive effects will outweigh the side effects. Given how little we know and how bad we are at prediction, that model will often be wrong. A great way to start is by removing things: a negative intervention, and so probably good (e.g., you're unlikely to find a medicine as helpful as smoking is harmful, so focus on stopping smoking). Easy-to-replace systems get replaced by difficult-to-replace systems.
A complex system that works is invariably found to have evolved from a simple system that worked (Gall's law).
Complex systems usually have attractor landscapes that can be used to change them. The world is richer and more complicated than we give it credit for.
Evolution is easier than revolution. A good approach to incrementally change a system (similar to natural selection) is to:
- Start by identifying the highest-leverage level to optimize at: ask whether you're optimizing the machine or a cog within it. Complex systems might change in unexpected ways (butterfly effects); minor differences in starting points make big differences in future states.
- Begin optimizing the system by following the Theory of Constraints: at any time, just one of a system's inputs is constraining the others from achieving a greater total output (see the sketch after this list). Make incremental changes. Alter the incentive landscape. If you can make your system less miserable, make your system less miserable!
- Re-examine the system from the ground up. Get data. Take nothing but the proven, underlying principles as given. Work up from there to create something better.
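A minimal sketch of the Theory of Constraints idea in Python, assuming a hypothetical pipeline where every stage has a fixed capacity (all names and numbers are illustrative): total throughput is set by the single tightest stage, so optimizing anywhere else changes nothing.

```python
# Hypothetical pipeline stages and their capacities in units/hour (made-up numbers).
stages = {"intake": 120, "assembly": 45, "testing": 90, "shipping": 200}

def throughput(stages):
    # The pipeline can only move as fast as its slowest stage.
    return min(stages.values())

def constraint(stages):
    # The constraint is the stage with the least capacity.
    return min(stages, key=stages.get)

print(throughput(stages), constraint(stages))  # 45 assembly

# Optimizing a non-constraint does nothing for total output:
stages["shipping"] = 400
print(throughput(stages))                      # still 45

# Relieving the constraint raises output -- and a new constraint appears:
stages["assembly"] = 100
print(throughput(stages), constraint(stages))  # 90 testing
```

Each round of improvement repeats the same loop: find the constraint, relieve it, re-check, because the constraint moves.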
Don't aim for an ideal system. Build a set of processes and protocols that evolve to fit the environment over time. Complex systems fail.
If everyone agrees the current system doesn't work well, who perpetuates it? Some systems with systemic incentive failures are broken in multiple places, so that no one actor can make them better, even though, in principle, some magically coordinated action could move to a new stable state.
A system needs competition and slack (the absence of binding constraints on behavior). With some margin for error, the system can pursue opportunities and explore approaches that improve it.
Interaction between system actors causes externalities: the consequences of their actions on other actors or processes. This matters because humans are intuitively self-centered: it's easy not to notice the effects your actions have on others, and those effects almost never feel as visceral as the costs and benefits to yourself. The canonical examples are coordination problems like climate change: taking a plane flight has strong benefits to me but costs everyone on Earth a little bit, a negative externality. A lot of the problems in the world today boil down to coordination problems where our actions have negative externalities.
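A toy calculation of the flight example in Python, with invented numbers: the private benefit looks clearly positive to the actor, but summing the tiny per-person external cost over everyone flips the sign of the social total.

```python
# All numbers are assumptions for illustration, not data.
private_benefit = 500.0        # value of the trip to me, in arbitrary units
cost_per_person = 1e-7         # tiny climate cost imposed on each person
population = 8_000_000_000     # roughly everyone on Earth

external_cost = cost_per_person * population
net_to_me = private_benefit                   # I never feel the external cost
net_to_world = private_benefit - external_cost

print(external_cost)   # 800.0 -- invisible to any one person, large in aggregate
print(net_to_me)       # 500.0 -- so I take the flight
print(net_to_world)    # -300.0 -- and the world as a whole is worse off
```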
Most large social systems pursue objectives other than the ones they proclaim, and the ones they pursue are wrong. E.g., the educational system is not dedicated to producing learning by students but teaching by teachers, and teaching is a major obstruction to learning.
A mental model of a system is a reduction of how it works. The model cuts through the noise to highlight the system's core components and how they work together.
Remember, sometimes not doing something is better than doing it (primum non nocere). E.g., suppressing small fires instead of letting them burn off the top layer of the forest sets up larger fires later; spending a week repairing train tracks after an accident pushes people into cars, producing more deaths than leaving the rails as they were.
Almost no one is evil; almost everything is broken.
Inadequate Equilibria
An Inadequate Equilibrium is a situation in which a community, an institution, or society at large is stuck in a bad Nash Equilibrium: the group as a whole has some sub-optimal set of norms and would be better off with a different set, but no individual actor has both the power and the incentive to change the norms for the group. So the bad equilibrium persists. These situations can be sorted into three categories (a payoff sketch follows the list):
- Cases where the decision lies in the hands of people who would gain little personally, or lose out personally, if they did what was necessary to help someone else.
- Cases where decision-makers can't reliably learn the information they need to make decisions, even though someone else has that information.
- Systems that are broken in multiple places so that no one actor can make them better, even though, in principle, some magically coordinated action could move to a new stable state. One systemic problem can often be overcome by one altruist in the right place. Two systemic problems are another matter entirely.
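A minimal two-actor payoff sketch in Python (all payoffs invented) of what makes an equilibrium inadequate: everyone sticking with the broken norm is a Nash Equilibrium, because a lone switcher only loses, even though coordinated switching would leave everyone better off.

```python
# payoffs[(a, b)] = (payoff to actor 1, payoff to actor 2); illustrative numbers.
# "stay" = keep the current broken norm, "switch" = adopt the better norm.
payoffs = {
    ("stay",   "stay"):   (1, 1),  # the bad status quo
    ("stay",   "switch"): (2, 0),  # a lone switcher pays the cost alone
    ("switch", "stay"):   (0, 2),
    ("switch", "switch"): (3, 3),  # the better state nobody can reach alone
}

def is_nash(a, b):
    # Nash equilibrium: neither actor does better by unilaterally changing action.
    flip = {"stay": "switch", "switch": "stay"}
    p1, p2 = payoffs[(a, b)]
    return p1 >= payoffs[(flip[a], b)][0] and p2 >= payoffs[(a, flip[b])][1]

print(is_nash("stay", "stay"))      # True  -- the inadequate equilibrium persists
print(is_nash("switch", "switch"))  # True  -- also stable, but unreachable alone
```

Both states are stable; the group's problem is not finding the better state but coordinating the jump to it.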
Examples
- Making the switch from not relying on prediction markets to relying on prediction markets is fraught, because it might embarrass the leadership of existing institutions by revealing that their professed estimates are not very credible.
- There are several newly designed voting methods which are likely to be improvements over the current system, but most have seen limited, if any, uptake.
- It's difficult to change political systems from the outside.
- Within a two-party system, both parties benefit from first-past-the-post voting: each knows it has a ~50% chance of winning each election, so neither has an incentive to change the system from within.
- Proponents of voting reform have not yet been able to coordinate on which method they recommend.
- Ongoing over-fishing of the oceans. Each individual fishery (and, at a higher level, each country) would prefer a world where everyone fishes a sustainable amount rather than over-fishing and crashing the fish populations they all rely upon. But without a centralized enforcement mechanism they have no way of ensuring that the other fisheries (or countries) cut back as well, so doing so unilaterally would simply get them out-competed (see the simulation after these examples).
- Countries building up their militaries. Most of the use of sizable militaries is fighting against other militaries (and as a deterrent against such), so they are overall a negative-sum game. If countries all agreed to cut back their militaries, they would (for the most part) all benefit, but due to the competitive nature, there is a strong incentive to not cut back.
- Using companies producing widgets as an example, each company might wish to fairly pay their workers, maintain a safe work environment, and not pollute the environment. However, other companies can gain an edge by sacrificing things in favor of producing more widgets (e.g. hiring more workers at cheaper wages). Thus, the principled company must make similar changes, or get out-competed. This can continue until the companies have all sacrificed everything they can in favor of more productivity, even if all of them would have preferred to peacefully coexist with comfortable work conditions.
- Doctors being overly cautious in treatment. The Incentives punish mistakes of action much more heavily than mistakes of omission: any deviation from what is considered the "proper" way of handling a case exposes the doctor to malpractice suits in a way that sticking to the "proper" method does not, even if the deviation would have been a net positive in expectation for the patient.
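A rough simulation of the over-fishing example in Python, with invented parameters (logistic stock regrowth, ten identical fisheries): if everyone keeps to a modest quota, the stock keeps regenerating and yielding; if everyone grabs what they can, there is a bigger haul for a few years and then a crash that leaves every fishery worse off.

```python
def simulate(n_fisheries, catch_per_fishery, years=30,
             stock=1000.0, growth=0.3, capacity=1000.0):
    # Logistic regrowth: the stock recovers fastest at intermediate sizes.
    total_caught = 0.0
    for _ in range(years):
        stock += growth * stock * (1 - stock / capacity)
        caught = min(stock, n_fisheries * catch_per_fishery)
        stock -= caught
        total_caught += caught
    return total_caught, stock

coop_total, coop_stock = simulate(10, 7)     # everyone restrains
greed_total, greed_stock = simulate(10, 30)  # everyone over-fishes

print(round(coop_total), round(coop_stock))    # steady yield, healthy stock
print(round(greed_total), round(greed_stock))  # bigger early haul, crashed stock
```

Over 30 years the restrained fisheries land more fish in total and still have a stock left, which is exactly why each actor would prefer the cooperative world it cannot unilaterally reach.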