Flashover

What Software Developers Can Learn from Range

Feb 16, 2020

David Epstein's Range is the antithesis of the 10,000-hour rule. Whereas the 10,000-hour rule tells us that excellence is the result of early specialization, Range argues that instead of pursuing deep expertise, we should seek breadth in our skills and knowledge. For example, did you know that Nobel laureates are more likely to have interests in the arts and other fields than their peers?

In this post, I'll discuss the key lessons software developers can take away from Range.

All section headings are quotes from the book.

"Frustration is not a sign you are not learning, but ease is"

Continuous learning is not optional for software developers who want to stay relevant as the technology landscape keeps changing. Here is what Range teaches us about productive learning:

First, the more mistakes you tolerate when learning new things, the better you will remember the new information even as time passes. This phenomenon is referred to as the hypercorrection effect, and it is so powerful that educators say struggle is more important than repetition when it comes to efficient learning.

Second, you should practice "spacing" and "interleaving" when learning. Spacing means leaving time between practice sessions. Its effectiveness is somewhat related to the hypercorrection effect: with spacing, you add struggle to your learning and therefore help yourself remember things better in the long run.

Interleaving refers to mixed practice, and it's the opposite of focusing on one type of problem when practicing – i.e. "blocking." Blockers perform better than interleavers during practice sessions, but when both groups are given new problem scenarios, interleavers surpass blockers.

"Compare yourself to yourself yesterday, not to younger people who aren't you"

Career switchers are not a rare sight among software developers (I'm a career switcher myself). When you start a new career as an adult, it's easy to feel like others have a head start over you. At your workplace, you look around and see younger people who seem to know twice as much about software as you do. Meanwhile, your past colleagues in your old field are comfortably moving up the career ladder. You might ask yourself whether switching careers was such a smart choice after all.

When the Dark Horse Project researchers started looking for high-flyers in different professions, they soon discovered that the majority of their interviewees had unconventional career paths. An interviewee was much more likely to be a career switcher than someone who had pursued the same professional goals since early adulthood. Throughout their professional lives, these "dark horses" had been on the lookout for opportunities that could be a good match for them. They cared much more about matching their interests and skills with different opportunities than about "falling behind" when switching professions.

"There is no such thing as a master-key that will unlock all doors"

When you become highly skilled in one domain, you also become inflexible in thought and behavior. When the problems you are used to solving change even slightly, your expertise can become a hindrance. This is referred to as "cognitive entrenchment."

If you want to avoid cognitive entrenchment, you could learn a new programming language or a new domain instead of specializing in one domain or language.

Going for breadth instead of depth might not reward you in the short term. For example, in my work as a software consultant, clients want to see deep specialization when they are looking for, say, a JavaScript developer. However, breadth makes you a better problem solver: you become better equipped to study the problem itself instead of trying to match your preferred tools to it. Compare this to the law of the instrument: if all you have is a hammer, everything looks like a nail.

As is the case with learning, what's good for you in the short term can be harmful in the long term.

Final notes

Range contains many more lessons and interesting food for thought about topics such as grit and innovation than this post tries to cover. If you are wondering whether to add Range to your reading list, here's a link to The New York Times review of the book. I personally recommend it to everyone who is facing external or internal pressure to specialize instead of choosing the path of a generalist.

Weirdness Budget

Feb 9, 2020

The weirdness budget is the idea that when you are learning something new, there are only so many unfamiliar concepts you can handle without getting frustrated or confused. It's easier for us to learn one thing at a time than to get bombarded by lots of new ideas that don't align with our current practices or ways of thinking.

This means that when you are building a new product, you should avoid making things any weirder than they need to be. You have a budget for weirdness and you should spend it wisely.

Of course, weirdness is subjective. What's weird for us can be normal for others. Part of understanding your customer is knowing what's weird for them.

Lean Startup, Kanban, and Validation

Feb 2, 2020

This post assumes you have heard about Eric Ries's book The Lean Startup and know its basic idea (fast iterations and a focus on validated learning). I'm not going to review the book but instead discuss a specific section that caught me off guard while reading it.

Here is the thing that made my jaw drop. It's Ries's version of the familiar kanban board:

[Picture of Lean Startup's kanban board]

Those first three columns look pretty familiar. There is the "backlog" for user stories that are waiting to be worked on. "In progress" is our WIP. "Built" contains the user stories that have been implemented and deployed.

But what is the fourth column "validated" for?

Here is how Ries defines validated:

Knowing whether the story was a good idea to have been done in the first place.

Believe it or not, not every user story provides actual value for your users, no matter what the market research and user studies tell you. Ries suggests that if the value of a given user story cannot be validated with current or potential customers (using, for example, split testing or customer interviews), the user story gets removed from the product and from the board. Only the stories that pass validation make it through and stay in the product.

Validation is a very nice idea, but here is what would probably happen if my team started using this type of kanban board: because validation takes time and we are expected to deliver new features at a consistent pace, it would be easy for us to end up ignoring validation. At the end of the day, it's enough for us (and our managers) that the user stories get built and deployed. That's not, of course, what the goal should be. But if we are being honest for a second, our stakeholders pay much more attention to output than to outcome when they evaluate our efforts. As developers, we would also much rather focus on code than talk to customers.

Ries used to be a developer himself. He knows that adding the validated column is not enough by itself: it would be too easy for teams to ignore validation efforts and keep focusing on shipping new user stories.

To prevent this from happening, we will add a card limit to our columns: each column is allowed a maximum of three user stories. You might have heard about card limits for "in progress" columns, because when WIP goes down, throughput goes up. But a limit for the "built" column? What kind of effect would that have on our process?

First, work will get stuck. Your "built" column is full because you are not validating the user stories. Next thing you know, your "in progress" column is full because you can't move any more user stories to the "built" column. Your team's lack of validation activities is now an inescapable problem that you have to address.

Let me quickly reiterate why we don't like to do validation in the first place:

  • We prefer to focus on our work instead of talking to customers
  • We want to ship as much stuff as possible and validation takes resources away from building and shipping

Card limits don't solve either of those problems. Instead, they actually introduce one more item to our list of grievances:

  • We don't want to see work sitting in queues and now we have work sitting in queues

Some developers might end up leaving your team because they want to work with code – not customers. Some managers might tear down your kanban board because they want to see results – not learning and validation.

In order for a team to push through these challenges, developers have to get comfortable doing work that's unfamiliar to them, and teams have to unlearn everything they have been taught about developer productivity. As a more practical matter, teams have to start thinking about validation as soon as they start working on user stories, because otherwise the work is likely to get stuck in the pipeline. This is not easy. I'm not even sure if I could handle this challenge.

Compare this card limit to the production lines in Toyota factories, where employees can stop the whole line whenever they detect quality errors. This creates an incentive to identify problems earlier in the process, because it's really bad for business when production comes to a halt. All of a sudden, quality errors start to decrease.

When you force a team to stop working when they don't validate their ideas, the team will start to validate more. But is the initial stress and pressure something you would make your team go through? The method seems risky, doesn't it? People might quit, and in the end others might force you to revert to the old kanban columns. Also, what if the kanban board doesn't work? What if validation doesn't work? What if nothing changes in terms of customer value despite the social capital and trust you have spent introducing this idea to your team and your organization?

Adding a fourth column to your kanban board might seem like a trivial task. But it's much more about changing culture than reprinting your board for the team wall.

New Podcast: 0-100

Jan 26, 2020

I started a podcast. It's called 0-100 and you can find it at nollaviivasata.fi. The show is about how to start your own digital consultancy or agency. In it I interview founders of different Finnish agencies (in Finnish) and ask them how they got started on their journey and how they grew their company during the early years. In this post, I want to discuss why I started recording these interviews in the first place.

As a developer working with software consultancies (first at Kisko Labs and now at Wunderdog), I've overheard and sometimes participated in conversations about growth and strategy in the context of a consulting company. Some of the most interesting questions tend to be how to find better projects and how to transition from a vendor to a trusted advisor in the eyes of your client. At home, often inspired by a discussion at the office, I have tried to find more advice or opinions online (or, for example, from podcasts) to no avail. There's a plethora of great material provided by generous people from product startups. But when it comes to consultancies and agencies, it's much harder to find advice from seasoned entrepreneurs.

Whenever you do find people discussing consulting, the advice often seems to boil down to "do great work and the rest will follow." Doing great work is critical – I'm not denying that. But there has to be more to building a successful consultancy, doesn't there? Many times I have wanted to just walk into the office of one of our "competitors" and ask them how they do X and whether they have ever considered Y.

Well, with this podcast I can do just that. There hasn't really been anything stopping me from reaching out to other consultancies for advice, but it also feels like something that just isn't done. Can you, as a stranger, really get an hour of someone's busy day for your questions? Probably not. However, if you promise to record the meeting and put it out for everyone else to hear, it's a different story. Strangely enough, people will now give you their time – or even contact you and offer it voluntarily.

Let's talk about the general topic of the episodes: Why focus on founding stories? Am I planning to start my own consultancy? No (or at least not in the foreseeable future). So why do I ask my interviewees how to start a consultancy instead of how to run one? The reason I ask "how did you get your first customer" instead of "how do you currently get your customers" is that discussing current tactics can be uncomfortable for some and therefore lead to overly abstract discussions about sales and marketing. In addition, when we discuss stories from the past, I can make sure the interviews are not only about ideas but also about tactics that were tested in the real world.

I believe that even established consultancies can discover new ways of approaching different business opportunities when they get to hear how other founders kick-started their businesses with different offerings and different client bases. I do hope you get a chance to listen to an interview or two. You can find the show on Apple Podcasts, Spotify, Pocket Casts or wherever you listen to podcasts by searching for "nolla viiva sata." While the episodes are in Finnish, I will write (in English) about lessons learned later on this blog.

Wrong Analytics

Jan 19, 2020

Customer: What do you mean you have used all the budgeted developer hours? There's so much stuff missing from the product! How much of the original spec is here? 50%? 25%?

Developer: I hear your pain. Many of the original features had to go because of budget constraints, and yeah, some of these features that did make the cut could use some more work. But you are not taking into account our new analytics features that were not in the original spec. We built those features so that we can get real data from real users. We shouldn't measure our progress in lines of code written but instead look at the impact we create. And how can we judge this impact if we have nothing to measure it with?

Customer: But this... this is just embarrassing. I can't believe you shipped this to our users. What am I going to say to my managers and my team?

Developer: Don't worry. We kept the core features. With this product we were still able to test out your idea with real users. And thanks to those analytics we added, we got actionable insights into what's working and what's not!

Customer: Yeah? I guess that's better than nothing. So what's the main insight?

Developer: People hate it.

Customer: Hate? Who hates it?

Developer: The users, they don't like using our product. Do you see this number here? It takes roughly ten seconds for the average user to realize this product isn't working for them and after that they just leave and never come back. It's actually pretty great that we get to learn these things with our analytics reports.

Customer: How on earth is this great? First you tell me that I have to go to my colleagues with this unfinished piece of junk. And now you are saying that there are also real numbers for pointing out the epicness of our failure? What is wrong with you! I'm going to be the laughing stock of the whole department.

Developer: I'm sorry, I got distracted by this analytics dashboard. Did you say you wanted to show these numbers to your team? It's really easy to export the data from here. Would a CSV file work for you?

Customer: Get the hell out of my sight!

***

What's wrong with this scene?

Let's start with the obvious problem: our developer has been building stuff without keeping the customer up to date. The fact that the status of the budget and the prioritization of different tasks come as a surprise to the customer is clear evidence of miscommunication between the developer and the customer.

But what I'm trying to describe here is a situation where people might be better off allocating no serious resources to analytics at all than getting real data on how their product is being used.

How is this possible? First, when a team is delivering more features instead of doing analytics work, other people in the organization might look at the team and applaud its productivity ("this team is delivering new features so fast!"). Feature development is easier to measure and more visible than business impact, and because of that, features can get more positive attention. Features are where the attention of our customers, managers, and peers naturally goes.

Second, when there are no analytics, there are also no negative reports. It is true that when your analytics show your product is a hit, you get more leverage with others. However, it's also true that disappointing results take some of that same leverage away. Since we overestimate the upside of our projects time and time again, the end results are more likely to fall short of expectations. You are more likely to end up reporting bad news than good news. So why would you want to report any news at all?

How could you prevent that scene from happening at your workplace?
