OKRs. Objectives and Key Results. I can’t believe I’m writing a post on this. But here we are. It’s but one of the latest viral concepts to be wildly misused. Like Agile, OKRs could be a good thing. Alas, once a large enterprise gets its hands on the latest buzzword, well, it’s bound to end in tears.
You’ve probably heard of OKRs. They sound like a really good idea. Maybe you’ve even advocated for their use. I have. Be careful what you wish for. It might be a case of, “Better the devil you know.” What was meant to be a lightweight alternative to annual planning ends up being the sneaky return of those dreaded MBOs….
In case you’re not familiar, OKRs originated as Andy Grove’s version of Drucker’s concept of MBOs. Grove taught his approach at Intel University in the 1970s. John Doerr, then at Intel, learned the practice from Grove and later taught it to the startups he invested in through Kleiner Perkins Caufield & Byers.
The definitions are straightforward enough. Objectives are supposed to be qualitative stretch goals meant to be short, engaging, ambitious, and inspirational. Tied to each objective are several key results, the KRs, which are quantitative outcome measures used as that objective’s measure of success.
Seems great, right? So, what’s the problem? Well, best laid plans and all that. There is the idea in theory, its normative sense, and then there is what actually gets implemented, the descriptive sense. I guess seldom the twain shall meet. Ideas meant to improve organizations instead get grafted to them.
A transformation meant to increase org agility devolves into bullying product teams.
The latest attempt to better scale said agility only adds dependencies and bureaucracy and makes IT yet harder to work with, to the chagrin of the weary business units.
The Borg assimilates.
Time and again large organizations take the latest concepts with the best of intents, but then drive them through the prism of their existing cultures and break their backs to form them to a priori positions.
What we are left with is, well, a mutation.
So, OKRs. There are now countless trainings on the topic, countless articles, and John Doerr’s popular book, Measure What Matters. And much of this newer material oddly ignores that Grove himself already laid it all out in High Output Management. Chapter Six is all about OKRs, as is the Appendix titled “One More Thing.” What he discusses is straightforward enough, and yet OKRs seem to keep going sideways.
I’ve noticed they tend to go wrong in three big ways.
1. KRs tend to be just glorified task lists
This is true even in some of the oldest examples of their use, such as those from Intel shared in Doerr’s book. In other words, KRs tend to focus on output (lists of work to be done) and not outcomes (the difference that work is meant to make). Why does this matter?
Consider: the KRs for a given objective are supposed to be its “as measured by.” Thus, in writing an OKR, you are hypothesizing that a certain objective will be achieved when certain specific outcomes are met (its KRs). The strategic objective then is like a higher-level outcome made up of lower-level outcomes…but they should all be outcomes.
Now, as Grove stresses, KRs must have a measure associated with them. Creating unambiguous outcome measures is challenging, however, and so people tend to focus on to-do lists instead. To-do lists are easy. But when this happens OKRs tend to become…false statements. After all, just because you complete the tasks you specified up front does not mean the outcome in question was actually achieved. We’ll return to this point later.
The “O” is part of the hypothesis. Testing it should be the point of the exercise. With OKRs, however, the focus often remains on the proportion of KRs achieved (or, oddly, on the rated extent to which they are achieved).
With everyone’s attention firmly planted there, no one is really tracking whether the needles on these higher-level outcomes are being moved or not. Instead, we check the KRs off the giant, nested to-do list, and in the end, the whole exercise was a bit like reformatting a Gantt chart into a table.
2. OKRs might actually decrease your agility
As discussed by Grove, OKRs are meant to help teams pace themselves, prioritize work, and stay focused on what matters. The thinking is that while annual planning is very costly and time-consuming, OKRs can (and ideally should) be set in a matter of hours. This can help establish boundaries around what specifically you are saying yes and no to. This is important since, as Grove puts it, every time you commit to something you forfeit the ability to do something else instead.
But remember, an OKR is essentially a hypothesis—the KRs are the smaller outcomes the larger objectives are made up of. If you set your OKRs and only assess them every quarter, like most people do…that’s not very agile. And I don’t mean fast.
A tricycle is more agile than a bullet train.
Agility is about having the degrees of freedom left to change direction when needed.
If you take an empirical approach to your work and learn your way forward, then you don’t want to execute more upfront planning faster. You want to do less, more frequently. You should seek the minimal output that sufficiently signals whether you’re headed in the right direction, or, as Jeff Patton puts it, minimize output and maximize outcome.
OKRs then should be kept simple, fostering an ability to quickly assess whether they’re working out as hypotheses or whether they should be revised.
This should happen as soon as the learning occurs. Often, however, OKRs become a behemoth nested structure revisited at some ridiculous set cadence. As said, that’s not very agile. Agile OKRs would be cool. We could call it “OKRA.”
The problem is that if you’re learning your way to achieving target outcomes, as you should be, then just learn your way to target outcomes.
All the extra baggage of an OKR structure isn’t even needed.
3. Surprise! They end up getting tied to your performance review anyway!
So, everybody knows that Rule No. 1 about OKRs is that they CANNOT be tied to performance reviews.
Duh. After all, they’re supposed to be stretch goals. If you tied people’s performance reviews to them, then…. Yeah. Just call them slack goals. It’s Goodhart’s Law.
Well, come on. If everyone has to have individual OKRs—which is stupid—and you’re still doing performance reviews—which is stupid—then, well, stupid + stupid means something not very smart is going to happen.
Just. Don’t. Do. It.
While we’re at it, Deming was 100% correct and performance reviews should be done away with altogether.
You wanna fix your culture and improve quality? Start by nixing performance reviews. So much ink has been spilt about Toyota Lean this and Toyota Quality that, without ever once mentioning another thing Toyota did: tell employees that if you help us continuously improve, we will provide you with housing and guarantee your job for life.
While we’re at it, let’s just say someone for some reason did tie OKRs to performance reviews. What would that even be called?
The correct answer, by the way, is MBOs, which is ironic, isn’t it?
Weren’t OKRs created to get rid of MBOs? Because it’s widely recognized that MBOs are bad, right? Yes, yes, and yes.
Now, Grove discussed MBOs in his book, High Output Management, and for a time used the term iMBOs, as in “Intel Management by Objectives,” but he was also trying to improve upon them. That’s what gave rise to OKRs.
And so, if at the end of the day, OKRs are imposed on you and tied to your performance review, and you’ve essentially come full circle back to MBOs, what should you do?
Well, to quote my friend April Mills, “You are someone. You are somewhere. You can do something.”
No, you probably can’t contact the CEO and get policy changed, but you can start where you are. How is your relationship with your manager? Start there. After all, for you, this is where “the rubber meets the road.” If OKRs are tied to your performance review, what this looks like for you depends on the conversations you have with your manager.
Are there some things you can do to educate your manager on OKRs? Has your manager read High Output Management? Has your manager read Christina Wodtke’s Radical Focus (probably the best book on OKRs)?
Remember, OKRs are supposed to be about focus and continuous improvement. They are supposed to be about leveraging stretch goals as a catalyst for growth, which again only works if not tied to performance reviews. So, start there.
Even if you can’t change your company’s policy, you can buffer yourself. Proactively engage with your manager about what YOU think tying OKRs to your performance should look like for you.
If you’re not leveraging them to facilitate your career conversations, you should probably fix that first. Then keep them simple. As Andy Grove stressed, if you have too many, then you’re not focusing. Three objectives with three KRs each is already nine KRs. That should be about the max.
Now, I see different people citing different sources pointing out that OKRs should not be used for performance reviews, again ignoring that Andy Grove himself already addressed much of this, such as the passage below from page 113 of his book. I’ll leave you with that.