The Value of not Having to be Right
In software we pride ourselves on being “data-informed” and “metrics driven”, and “formally proven” is the highest praise. Few things feel as satisfying as being actually right, with no shadow of a doubt and no way of escape for our opponents. Being tech people, we cling to the idea that the “more correct” option, or the one which is “objectively right”, should win.
Now, do not underestimate this:
As long as all we have is opinions, mine is the best.
which is “one way to do it” – specifically, “a way” to do it if the team is made up of jerks. But believe it or not - most teams are composed of decent humans who genuinely want to do right by each other.
The problem I have with the “prove everything” approach is that there are side-effects to it. First is this: switching gears into “proofs” also switches gears into the “slow thinking mode”. Making decisions - but also inventing a solution - will then become a much more deliberate, much more “solemn” process. And although the delivered solution might, indeed, be marginally better than all the others, the cost of that marginal improvement – of “doing things right” – will be the extra time spent on the “proof rituals”. Some questions - and some problems - do in fact demand that rigor, but most of the run-of-the-mill problems that we encounter in web-app land actually do not. By indiscriminately demanding proofs for insignificant things we rob our teams of time and of agency.
The second side-effect is the possibility of endless bikeshedding, as long as nobody on the team shows up with exhaustive evidence which trumps all the others. The “corpus of data” requirement is assumed to be scientific, but it omits a few important differences: teams develop software with deadlines, with implicit expectations and under certain political pressures. Actual scientific studies sometimes end up disproving their original hypothesis. They sometimes fail altogether, because the equipment or the methods turn out to be invalid or unusable for the purpose. Evaluating and challenging each other on the merits of the work is a standard feature in academia, but scientists often have a luxury that teams in product development – especially teams in startups – do not possess. That luxury is the abundance of time. A ritual of proving and refuting might eat into the time budget allotted to the team for actually performing, and can lead to a delay that may kill a product - doing nothing, for lack of proof that something could work, can be more deadly than delivering something that would keep the team moving ahead.
The third side-effect is “distrust by default”. Teams where everyone must be “right by the numbers” create a culture where any opinion or emotion is “null and void” by default unless backed by a corpus of data. This creates a few dynamics which you really ought to anticipate:
- Folks withhold their ideas until they have accumulated enough “proof” to allow themselves to feel safe enough to share them with the group, to be “worthy of being heard”. In the extreme, people stop contributing or talking completely, as they know they will never have sufficient proof. Partly because…
- Flat hierarchies are fake. There will be a person who also has to “prove themselves to be right”, but just ever-so-slightly less vigorously. Maybe they are the CTO, or one of the co-founders, or the most senior developer on the team. Maybe they have hoarded access to resources needed during delivery (like production access), or maybe they are de facto the only on-call for the system being built (so while they might not be the one contributing the most to the design, they will be hit the hardest once the shit hits the fan). The end result will be that for that person it will be permissible to “drive by opinion”, while the rest of the team will have to scramble for proof. Or to “be right” ever so slightly less.
These dynamics exclude people and erode trust.
Yet another thing that perishes in proof-addicted teams is experimentation. In many, many areas there absolutely is space for trying stuff out. But if the smallest of actions has to be substantiated with proofs, indiscriminately, the threshold for experimentation will be raised. And this is where creative work becomes a challenge. See, a good team is expected (it is never explicit, but often implied) to produce “miracles” from time to time. A sudden discovery or two, a nice optimisation, a hackathon project here and there… Squeeze hard enough with the proof requirements, and you either suffocate experimentation entirely, or reserve it for the select few who know how to come up with the right proofs at the right time (or are the entitled people mentioned above).
Caveat: none of the above applies when you are Alphabet and you can permit yourself to have dedicated research teams composed entirely of PhDs. Or if you are Apple and you are creating a compiler team for a new programming language. At that point you are doing something different entirely - you are doing science! Congratulations, because it is unlikely your team of PhDs is going to be tasked with delivering a product, cadence over cadence, on a very tight deadline and with dozens of very minute choices. When doing science, different constraints apply.
Where is “proof of being right” really justified? Well, for instance, where a very specific technical choice with knowable outcomes is being made. When the consequences of that choice are going to be dramatic and impactful, and you are absolutely not in a position to mess it up. It might be in the scope of technical leadership to cherry-pick the issues which should be proven correct at study level, because to execute on them the team will have to go into the slow thinking mode.
Or when there is a relatively small choice which can be easily proven with a small verification step - like the choice of a data structure. For example: “using a map here instead of a list makes searches with 1000+ items 10x faster in this particular UI, here are the numbers”.
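To make that concrete, here is a minimal sketch of what such a quick verification could look like - the item count, the key names and the use of Python’s timeit are illustrative assumptions on my part, not a prescription:

```python
# A throwaway check: membership lookup in a list vs. a set (a "map" keyed
# by the same values behaves the same way for this purpose).
# The dataset size and key names are arbitrary, chosen only for illustration.
import timeit

items = [f"record-{i}" for i in range(1000)]
as_list = list(items)
as_set = set(items)
needle = "record-999"  # worst case for the linear scan through the list

list_time = timeit.timeit(lambda: needle in as_list, number=10_000)
set_time = timeit.timeit(lambda: needle in as_set, number=10_000)

print(f"list: {list_time:.4f}s  set: {set_time:.4f}s  "
      f"speedup: ~{list_time / set_time:.0f}x")
```

The point is not the exact numbers but the effort profile: this kind of “proof” takes a few minutes, fits in a commit message or a PR description, and settles the question without a week of ceremony.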
Now, when a team is already doing this too much, there might be a benefit to breaking out of the vicious circle in which everyone is exhausted from endlessly proving insignificant details. There are discussions which get stalled completely due to the “demands of the right way”. There are a few opportunities to drive the discussion back into a more humane process, at least for less critical topics. Here are a few questions which can help defuse these situations.
One is evaluating the cost of making the “wrong” choice, potentially for every contested item. What would be the cost of the wrong choice? Not to the individual proposing or critiquing, but to the product? To the team? To the business? For instance: “What is the worst thing that could happen if we pick this technology or use this particular style of writing tests, and then realise it is wrong? What is going to be the cost of rework? Will we violate user privacy? Are we likely to get sued? Are we likely to run out of budget?”
Another is deliberately bringing the emotional component of the desire to “do it this way” to the fore, using tools from non-violent communication. It ain’t easy - participants have to be vulnerable and open to feedback, it calls for “crucial conversations” and a certain “you must be this tall” maturity - but you won’t get actual trust without going to those dark places. This can be difficult if folks on the team are very insecure, and if they have been conditioned to get hurt when they express subjective judgment. But it still can work and have a great healing effect. For example, consider an exchange like this:
– Why does doing it this way, using thing, make you feel bad?
– Well, this is going to make using a slow query list much less predictable, so it violates an assumption.
– Fair point, do we use the slow query list in our system?
– No, but we might want to in the future!
– Ok, and if we use thing, and need to accommodate the use of the slow query list later - how much work will it be? Will you be OK with me committing to doing this work if we need it? (commitment can be stated in writing, for example in commit messages or in issue trackers)
– This might work, but I’d rather we just not do this thing.
– Why do you feel that way? Your feeling is probably based on an experience you have had, could you share it with me so that I can understand your struggle better?
– Well, at the previous place I worked we used the slow query list all the time. See, we had a lot of people doing queries, and a lot of them would hang or suddenly run for a very long time - so we used the query list all the time to kill the slowest queries. It was not possible to tell people to examine the queries they were executing - there was never enough time, and the teams they were on were incentivised not to optimize queries because of the feature churn. And I was one of the people who had to use the query list to kill slow queries, sometimes a few times a day! That was awful…
– I understand better now. Given that we all know the importance of not shipping slow queries, and we are all aware of what happens when those queries need to be manually aborted - can we do the implementation with that knowledge, and leave good documentation in place so that we know where to look? Will that make you feel less bad about the fact that we need to introduce thing?
– Yes, this will work!
Nowhere in that exchange is it possible to establish numeric merit. Specifically: would using thing be worth more kilosomethings to us than making better use of the slow query list? In most discussions we won’t know ahead of time. The relative merit of those two things might even change for us at some point later. Either using thing or mandating that everything must be done in service of the slow query list could be “right” here, depending on the context. If we switch the discussion to figuring out why the participants have certain preferences in the first place, we can establish a much better causality:
- Person A wants to introduce thing
- Person B is afraid of thing because it might subtly break another thing, and Person B was previously hurt because that other thing was extremely critical to them
- The hesitation Person B expresses is not grounded in the current business need or in using the right tool for the job or in using the right solution in general – it is grounded in personal experience and trauma. We cannot include Person B meaningfully without having that trauma in the picture, and we can’t make a good choice that makes both parties happy without acknowledging and processing that trauma.
So with a little empathy we can remove the requirement of “being right”. The idea is to work out a mutual understanding of the fact that it is actually ok to have opinions - we just have to be more transparent about where those opinions come from, and we have to be open to yielding. It is OK to have opinions which are not “completely right”. It is OK to be subjective sometimes, and one does not need to bring the entirety of the decade’s USENIX catalogue just to be heard.
There are times to dig up that catalogue, but we can move much faster if we reserve it for special occasions.
See also – Is It My Fault You Can’t Handle The Truth?