OKRs for modern teams with Felipe Castro

Transcript

Scott Levy: Hi everyone, welcome to the new economy. I'm Scott Levy, CEO and Founder of ResultMaps, where we're on a mission to help the world be its inspired best. That means helping CEOs and leaders build healthy cultures of relentless execution so you hit your numbers and thrive, no matter where your people are or what challenges come your way. One way we do that is by bringing you sessions with leaders who've learned to thrive even as they navigated uncharted territory and thorny challenges. The idea is to give you actionable learning you can put into practice immediately, so you can lead and navigate at your best as you thrive forward.

Today, I speak to Felipe Castro. He's the founder of the boutique advisory firm Outcome Edge and an OKR trainer whose expertise and practical experience come through in everything he writes and in the talks he gives at conferences worldwide. What stands out to me every time I read an article or download a PDF from Felipe is how clear it is that he's not just jumping on an OKR bandwagon. He's really thought about these topics deeply and done the hard work of helping people translate them into action. And his ability to distill what he's learning and cycle through different narratives to help people learn faster is outstanding.

Felipe’s background

Scott Levy: I'd love to hear a little bit about how you became passionate about OKRs. Is that something you've always been interested in? How did you get into being this well-known OKR coach?

Felipe Castro: I'm an engineer by training. I started out working in startups, doing business development, product, sales...everything in those startups. I lived in the Bay Area for a while. After that, I completely changed my career and became an executive recruiter, a headhunter. I started recruiting for a company and helped them build a team. After a while, I founded my own recruiting firm. We hired for tech companies, and eventually some clients started asking me, "Hey, can you help us with other stuff besides recruiting?" That led me to work on performance management, which led me to helping them set goals. I realized this was broken and not working because, at the time, we were working with companies that used a backward-looking approach where, in October, you're trying to predict what you'll achieve in December of the following year - 14 months later. That approach kind of worked for everything the companies had been doing for a while, but for anything new, like a new product, it was totally broken.

Then I started looking for a solution, and when I found OKRs, I thought, "Oh! That's like an agile, iterative way of doing this. That seems to work." I basically jumped on that around 2013 or 2014, and that's 100% of what I do now. I started with the pain point and took it from there.

Full stack agile

Scott Levy: It seems the engineering background led to the phrase "full stack OKRs" in one of your presentations?

Felipe Castro: Yeah, I used to call it "full stack agile." The idea is that companies usually use agile for delivery only. The way they set their strategy is waterfall, the way they plan is waterfall, the way they set goals is waterfall, the way they reward and compensate people is waterfall - everything else is waterfall, except for the last part of the business, which is agile delivery. If you look at it end-to-end, as a stack with several layers, only one of them is agile, and the whole rest is waterfall.

One of the things we need to do is stop doing waterfall goals - that annual, top-down planning telling people what to do, which is the opposite of agile. The way I see it, my work is to experiment with different narratives to see what helps people learn. Over time, I'm changing my product or narrative because some narratives work better at achieving the learning outcomes we need.

The way I explain it now is that the main problem we have with OKRs is what I call the "Tinker Bell approach." Basically, you take a traditional company using traditional management and traditional ways of working, and you sprinkle some OKR pixie dust on top of it. Companies believe that if they just sprinkle that OKR pixie dust and think happy thoughts, they're suddenly going to turn into Google and fly away. Of course, that never works.

Two real stories: First, a huge bank changed the label of their dashboard from "KPI" to "OKR" one week, and that was it - no actual change to using OKRs. Second, I was at a conference in Europe a few years ago, and a group of agile coaches said, "Oh, we're using OKRs," but they were writing really bad OKRs - just reductive results. When I asked how many people were involved, they said a huge number, like 8,000. And how did they train them? "Oh, we sent them a one-pager on how to use OKRs." That's a very common scenario because it's seen as magical - just say "OKR," and suddenly, people will know how to measure, understand the strategy, etc. But that's not how it works.

OKRs come from what I call the "startup way of working." Companies like Google, Amazon, and Netflix fight hard to still work like a startup. The best example is Amazon, where Jeff Bezos always says it's "day one," meaning they need to work like a startup. OKRs come from that model, so you can't take something from that model and apply it to a traditional company without changing how the company works. There's a lot of unlearning involved in using OKRs - you need to unlearn many of the old models because there are lots of things that don't apply anymore.

One of the hardest components of OKRs is unlearning. You explain the concept, and people say, "Okay, yeah, I get it, but I've been working the other way for 20 years." That's the part about unlearning. For example, when traditional companies say, "We want to be a data-driven organization," the very first thing they need to unlearn is the phrase "I have 20 years of experience in this industry" because experience is not data or evidence. I'm not saying to throw away experience, but it shouldn't be more important than evidence.

Feature factories vs. results generators

Scott Levy: Another thing I liked in one of your downloads was the idea of "feature factories vs. results generators" from Spotify - the traditional kanban board has a missing column for whether we got the desired result. The idea is that everything up to that point is about learning. You want to deliver, but if you're not learning, you don't have real agility.

Felipe Castro: The "missing column" idea is from John Cutler. Most teams today work as a feature factory - they ship a feature, and it goes away on a conveyor belt; there's no loop to measure and iterate on whether it worked. The difference is shifting from measuring outputs like story points or velocity to measuring outcomes and results - did we make the difference we wanted?

Cascading OKRs

Scott Levy: What are your thoughts on cascading OKRs - should you cascade them, and what are some of the traps you've seen with trying to take a strategy and mobilize it through an organization?

Felipe Castro: Cascading is one of the things we need to unlearn because it's so old that people believe there's no alternative. Cascading became a synonym for spreading the strategy across the organization. But think about it - a cascade is a natural, unidirectional phenomenon with no feedback loops or adjustments, and it ends up crashing on the rocks. It's the opposite of the analogy you want, and cascade is a synonym for waterfall.

The idea is to have a conversation back and forth - a discussion. One helpful story is when JFK launched the lunar program. Kennedy didn't give NASA a specification like "build me a three-stage rocket with these features." He described the problem: "I believe this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to earth." He gave NASA a problem to solve after lots of back-and-forth.

It's not about saying, "Build me this feature" or even "Work on this problem." It's saying, "This is our top-level strategy, and I believe we should work on this problem. What do you think?" And then you have a conversation - maybe you say, "It doesn't seem like that's the main issue; maybe it's X. What's your view?" You explain the context and the problem you're trying to solve, and then it's an open discussion.

One of Netflix's principles is "lead with context, not control." The government gave NASA context and a problem, but NASA was part of selecting the specific problem. The same happened with the Japanese bullet train - engineers were given the problem: "We need a train capable of going 120 mph stably," not a specification.

Creating specifications

Scott Levy: There's so much attention today on teaching people to write clean specifications. But what I think we often find ourselves doing instead is saying, "Here's the context - is this the right specification?" rather than just handing off a spec.

Felipe Castro: The idea is that when agile was created 20 years ago, we didn't have the techniques we have today to run experiments and test things rapidly. So it doesn't make sense to use agile or Scrum the way they did 20 years ago. What if, instead of a stakeholder telling the team what the backlog is, we gave them a customer problem to solve and let the team test different ideas until they figure it out?

One analogy is that the traditional model is a company spending 12 months building a wedding cake, making it perfect based on the belief that the plan is correct because "we have 20 years of experience." A startup works differently - instead of building a wedding cake for a year, they try to sell a new version of a cupcake every week, testing different flavors, ingredients, production sizes, etc. They don't think they can get it right upfront. So rather than "Build me this CRM," it's "Here's what we're trying to achieve; let's figure out what will actually work."

OKR cadences

Scott Levy: What type of cadence do you find is most helpful when adopting OKRs? How often should teams look back to see how things are going and what needs to change?

Felipe Castro: The idea is that OKRs work like Russian nesting dolls, with different cadences and cycles. You can have longer-term, typically annual, higher-level company OKRs for things that take longer to change because they're more stable. Then you usually have quarterly team OKRs, and we track OKRs every week to see what's working or not and try something different based on that week's measurements.

It's not that "feature factory" approach of putting something on the conveyor belt and never seeing it again. It's "What version of the thing did we try last week? What's our next hypothesis to try this week?" The idea of learning fast is crucial.

Team vs. individual OKRs

Scott Levy: Do you have an opinion on whether people should use team-based or individual OKRs or both?

Felipe Castro: It usually depends on how the team is structured. The idea is that OKRs are how you keep score - the scoreboard showing if you're winning. Some people play individual sports with individual scoreboards, while others play team sports. So if you're working on a cross-functional team all trying to achieve the same outcome together, you want a team-level OKR, not individuals going in different directions.

But in functions like sales or recruiting, where people operate more individually, you can have individual OKRs. The idea is to use team OKRs when you have an actual team structure and individual OKRs when it's really just individuals who happen to report to the same person. If you have a proper team like an agile team or product squad, you usually stop at the team level. Companies like Spotify and Google don't use individual engineering OKRs anymore because they don't make sense in that team scenario.
