136 - Akbar, don’t worry about Birbal

14-12-2023

leadership and org culture

_How to overcome the loss of your domain-specific expert in your org or team_

When Christine bought her first car, she was over the moon. She loved the freedom it brought her. In a city like Mumbai, she wasn’t dependent on the whims of auto rickshaws or the frequency of local buses.

But one thing unsettled her. She didn’t know anything about cars. So, she followed everything she read in her car manual to a T, such as taking the car for servicing every thousand kms or driving under 40 kmph for what seemed like forever to her family and friends, who had to endure painfully slow drives.

This endurance test continued until Christine met her uncle, the car whiz. He owned a fleet of cars and was known to take good care of them. Most importantly, Christine trusted her uncle. From that point on, she was only too happy to let her uncle make all decisions about the car: when to service it, when to change the engine oil or brake fluid, when to align the wheels, and so on. And her uncle considered road conditions, traffic, Christine’s driving style, even the weather in making judgments about the car’s upkeep.

You may or may not have had Christine’s experience with cars. But if you’re a business owner or a team leader, one of your worst fears would be losing Christine’s uncle, or rather your equivalent: the trusted domain expert who helps you with important decisions.

It could be a campaign expert, a negotiation expert, a hiring whiz, an ops expert, and so on. What is common to these experts is that they are all great at their jobs, but when they leave they take with them their entire black box for domain-specific decision-making.

When such a scenario looms, organizations resort to something mechanical. Since they no longer can have the subjective expertise of the departing maverick, they put in place rules. These rules are really only rules of thumb that have been elevated to the status of defaults.

There’s another scenario where such rules may be instituted en masse: when business is growing and it is hard to find enough seasoned practitioners like Christine’s uncle in the open market. So, leaders make rules to achieve scale without sacrificing speed or inviting randomness.

Most of these rules set thresholds for elimination. Humans are comfortable whittling down many options to a few by the process of elimination.

For a job: Candidates must have a postgraduate degree in economics.

For a home loan: _Applicants must have a regular monthly income_

For a matrimonial match: Aspiring grooms must be taller than 180 cm

I might have made up the last one (and you can tell I’m a millennial by the phrasing) but you get my drift.

But here’s the thing about rules that gets overlooked: Rules are terrible masters. They are better guides.

And here’s the thing about guides: Their advice is contextual.

Bring on a new situation, a new combination of variables, and a good guide will alter her advice to fit the context. A rule, however, is unequivocal even when wrong.

Going back to the rules mentioned earlier:

❌What if the candidate hasn’t got a postgrad in economics but has real-world experience that far outweighs any degree?

❌What if the applicant is a successful freelance consultant who’s paid per project (running into several months) and not per month?

❌What if the groom is the funniest, smoothest, sexiest midget known to womankind?

The rules set out to eliminate randomness in decision-making, so they contained certain thresholds. But those thresholds made the rules _insensitive_ to context and, in some cases, plain dumb.


From the book Brave New Work (paraphrased): Organizations are particular about who gets to make decisions. But the same organizations are lackadaisical about how decisions are made.

One outcome of this cultural artifact is that success becomes more people-dependent, less process-dependent. Business owners and team leaders keep permanent vigil over their trusted lieutenants with expert judgment. If these Birbals leave, they take with them the key ingredient of good decision-making: their well-honed intuition.


It need not be like this. If you own a business, handle a P&L, and/or manage a team and you have reacted to the departure of your trusted lieutenant by setting black-and-white threshold rules, you may have overcorrected.

Candidates must have a postgraduate degree in economics.

_Candidates must have a minimum of 5 years of experience in B2B sales_

Candidates must be taller than 180 cm

(Okay, I stole the last one from a matrimonial ad)

When the domain expert in your team leaves (or even before that), build a new operating system for decision-making that is less expert-dependent.

In this issue I’ll show you how to build one simple model. It is the model that kick-started Nobel laureate Daniel Kahneman’s career as a psychologist in the Israeli army, and over time it will yield better results than your expert did.

  1. List all important attributes that influence the decision.
  2. Assign a weight to each attribute.
  3. Give a numerical value to each attribute for each of the options. This means that for some attributes you may have to convert a qualitative value, like ‘average’ or ‘excellent’, to a number on a scale.
  4. Multiply each attribute’s numerical value by its assigned weight for each alternative.
  5. Add up all the weighted scores to get an overall score for each alternative. Now you have an apples-to-apples comparison across all the alternatives or options you’re considering.
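The five steps above can be sketched in a few lines of Python. The attribute names, weights, and ratings below are made-up examples for illustration, not values from this article:

```python
# A minimal sketch of the weighted-attribute model described in steps 1-5.
# Attribute names, weights, and ratings are illustrative assumptions.

def overall_score(ratings, weights):
    """Steps 4 and 5: multiply each rating by its weight, then sum."""
    return sum(ratings[attr] * weight for attr, weight in weights.items())

# Steps 1 and 2: agree on the attributes and their weights up front.
weights = {
    "domain_knowledge": 0.4,
    "communication": 0.3,
    "resourcefulness": 0.3,
}

# Step 3: rate each option on a numeric scale (here, 1-10), converting
# qualitative labels like 'average' or 'excellent' to numbers.
candidates = {
    "Candidate A": {"domain_knowledge": 8, "communication": 6, "resourcefulness": 9},
    "Candidate B": {"domain_knowledge": 6, "communication": 9, "resourcefulness": 7},
}

# One overall number per option: an apples-to-apples comparison.
scores = {name: round(overall_score(r, weights), 2) for name, r in candidates.items()}
print(scores)  # {'Candidate A': 7.7, 'Candidate B': 7.2}
```

Note that the weights here sum to 1, which keeps the overall score on the same 1-10 scale as the individual ratings; any consistent set of weights works as long as every option is scored the same way.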

Want to try this model? Use it to hire for an open position.

Make all the interviewers go through steps 1-5 above. Arguably the most important step is including all important attributes, followed by assigning weights to them. Agreeing on these two lays the foundation for a more reliable assessment.

To understand why, think about how the interview process pans out at most organizations. The interviewers are trying to predict how good a fit the candidate is based on some cues. They don’t agree on the cues, they don’t weight the cues the same, and, as a result, their interviews are not structured. They’re not asking questions that lead them to relevant evidence.

The suggested model, on the other hand, helps structure the interviews. You can’t drift off to your favorite but irrelevant stock questions. The model picks out _calibration_ errors too. Found a candidate who was rated 9 on resourcefulness by Interviewer A and 3 by Interviewer B? Maybe the team needs a shared definition of what resourcefulness is.
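One way to catch such calibration errors systematically is to flag any attribute where interviewers’ ratings of the same candidate diverge sharply. The ratings and the gap threshold below are hypothetical:

```python
# Flag attributes where interviewers disagree widely on the same candidate,
# a hint that the team lacks a shared definition of that attribute.
# Ratings and the gap threshold are made up for illustration.

def calibration_gaps(ratings_by_interviewer, threshold=4):
    """Return attributes whose min-to-max rating spread meets the threshold."""
    attributes = next(iter(ratings_by_interviewer.values()))
    gaps = {}
    for attr in attributes:
        scores = [ratings[attr] for ratings in ratings_by_interviewer.values()]
        if max(scores) - min(scores) >= threshold:
            gaps[attr] = scores
    return gaps

ratings = {
    "Interviewer A": {"resourcefulness": 9, "communication": 7},
    "Interviewer B": {"resourcefulness": 3, "communication": 6},
}

print(calibration_gaps(ratings))  # {'resourcefulness': [9, 3]}
```

A flagged attribute isn’t proof that either interviewer is wrong; it’s a prompt for the team to align on what the attribute means before scores are compared.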

Why does this simple model do better than an expert?

💡Because the model reduces the impact of noise like first impressions or one interviewer’s unique set of attributes.

All decision-making problems are prediction problems. You’re trying to predict how the future will be based on what you know at the moment of decision. No decision-making model is perfect. But some are more useful than others.

This model is better than being tied to an intuitive expert’s judgment. It is better than making hard-and-fast rules. Yet, it is subjective: _you_ (or your team) decide the attributes, and you decide the weights.

There are more objective models for making decisions, but they are often not needed: the marginal improvement in accuracy is not worth the time lost; the skill needed to run them (regression analysis, for example) is high; or what you’re doing is too new for you to rely on past data.

I believe any simple but systematic way of making decisions improves their quality. So a subjective model like the one proposed here will work just fine for any organization that has until now relied on the well-trained nose of a handful of prized experts, or has, in the absence of such experts, mechanically enforced rules for important decisions.


Experts can’t tell you their recipe for success. Not because they don’t want to but because they don’t know it themselves.

Yes. Most of those who are good at making judgements for a living _cannot_ break their process down for you even if they wanted to. They can’t say which variables they consider, under what circumstances, or what weights they attach to the variables. As you can imagine, admitting this can be unnerving for those who rely on their expert judgment day in and day out. So, an air of mystery builds. A secret sauce brews.

But if you’re a business owner or team leader or any boss who depends on the expertise of her team, you want this secret decoded step by step and codified for future use. You want to sleep well instead of worrying about your firm losing its uncanny knack for getting some important thing right.

There’s a term for the hard-to-explain, domain-specific, top-of-the-line judgment that you pay the big bucks for to that gifted lieutenant who’s leaving your company: automated expertise.

In their book Winning Decisions, academics and authors Edward Russo and Paul Schoemaker write:

For instance, a gifted claims handler with an excellent nose for sniffing out fraudulent cases was about to retire from her insurance company. She had the uncommon ability to make good intuitive decisions—decisions based on “automated expertise”—because she’d accumulated (and learned from) so much expertise over the years.

[...]

All she could say was that she looked at such factors as lack of adequate support data, valuable property that did not fit the insured’s income level, evasiveness in the police report, financial difficulty such as loss of a job, personal problems like divorce, and frequent or suspicious past claims.
