Software Coach Nick

Building value, not code

How to move past the mid-level of engineering into senior and beyond.


Code, on its own, matters far less than the value it delivers. Our users never see our IDE or our clever abstractions - they experience the outcomes we help them achieve. Many mid-level engineers already understand this in theory, yet still find themselves optimising for code elegance instead of customer impact. This article collects the most common patterns I see while coaching and offers practical ways to move beyond them.

Engineers who are ready for senior-level impact usually share similar traits: they solve problems creatively, understand their tools, and care deeply about quality. The sticking points often fall into a few habits that unintentionally reduce focus on value:

  1. Spending disproportionate time tweaking lint or static rules rather than adding tests and feedback loops.
  2. Dismissing TDD or unit testing because it feels like “double the work.”
  3. Filling commit history with cosmetic changes that obscure the real story of the code.
  4. Being involved in more outages or regressions than they would like.
  5. Requesting large blocks of time for “refactoring” or “tech debt” without tying the work to an outcome.
  6. Sharing blunt opinions about other people’s code without explaining the user impact.

Let’s explore each behaviour, the concerns behind it, and healthier alternatives that align your work with the value your users experience.

Code does not matter by itself

Consider the classic FizzBuzz exercise:

Count from 1 to 100. Print “Fizz” for multiples of 3, “Buzz” for multiples of 5, “FizzBuzz” for multiples of both, and the number otherwise.

Two implementations both meet the brief:

# Implementation one: a loop over 1 to 100
for n in range(1, 101):
    if n % 5 == 0 and n % 3 == 0:
        print("FizzBuzz")
    elif n % 5 == 0:
        print("Buzz")
    elif n % 3 == 0:
        print("Fizz")
    else:
        print(n)

# Implementation two: one print statement per number
print(1)
print(2)
print("Fizz")
print(4)
print("Buzz")
# ... the prints for 6 through 97 ...
print(98)
print("Fizz")
print("Buzz")

Most of us instinctively prefer the first version - it’s flexible and elegant - yet both fulfil our user’s requirement. Whether the code is extensible only matters if a future request actually arrives; in fact, the user might even prefer the less elegant version because it runs faster!

Code quality certainly matters, but only insofar as it supports delivery of value. With that lens in mind, let’s examine the six behaviours from earlier.

1. Over-reliance on static analysis

Static analysers assess code at rest. They’re fantastic at catching common mistakes, but they can’t tell you whether the software behaves correctly. Automated tests, especially when layered at multiple levels, give you confidence that the value you ship is intact.

Use a simple rule of thumb: if the issue you’re addressing could affect any project written in that language or framework, linting or compiler rules are probably appropriate. If the concern is specific to your product or domain, invest in tests instead. Tests are what prove behaviour for users; static analysis is a complement, not a replacement.
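
To make that concrete, here is a minimal sketch (the fizzbuzz helper and the pytest-style tests are hypothetical, not from a real codebase): a linter can catch language-level slips in any Python project, but only a behaviour test can prove the product rule that multiples of both 3 and 5 produce “FizzBuzz”.

# Hypothetical behaviour tests, runnable with pytest.
# No static rule can check these domain requirements for you.

def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 5 == 0:
        return "Buzz"
    if n % 3 == 0:
        return "Fizz"
    return str(n)


def test_multiples_of_three_and_five_produce_fizzbuzz():
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(45) == "FizzBuzz"


def test_other_numbers_pass_through_unchanged():
    assert fizzbuzz(7) == "7"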

In addition, stricter static rules tend to lengthen development cycles through extra build failures. This is a significant and relatively well-hidden cost in an engineering organisation, so use these tools intentionally, and only where you are confident they are bringing you benefit.

2. Dismissing TDD and unit testing

Test-driven development and behaviour-driven approaches are skills - ones that repay practice. Well-written tests encourage modular design, make refactoring safer, and provide executable documentation. If adding or modifying behaviour forces you to rewrite existing tests instead of just updating assertions, you may be testing at the wrong level, your tests might be trying to do too much, or your assertions are stricter than necessary. Aim for black-box tests where possible¹ and supplement with integration or E2E checks when behaviour spans boundaries.
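
As a sketch of what “black-box” means here (the apply_discount function and its rules are invented for illustration): the tests below exercise only inputs and outputs, so you can rewrite the internals - swap the chain of conditionals for a lookup table, say - without touching a single assertion.

def apply_discount(total: float, loyalty_years: int) -> float:
    """Return the order total after a loyalty discount (hypothetical rules)."""
    rate = 0.10 if loyalty_years >= 5 else 0.05 if loyalty_years >= 1 else 0.0
    return round(total * (1 - rate), 2)


# Black-box tests: they know the behaviour the user cares about,
# not how it is implemented.
def test_long_term_customers_get_ten_percent_off():
    assert apply_discount(100.0, loyalty_years=5) == 90.0


def test_new_customers_pay_full_price():
    assert apply_discount(100.0, loyalty_years=0) == 100.0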

Without a thoughtful testing strategy, teams rely on hope and heroics. Tests create space for experimentation because you can move quickly without fear of silent regressions.

3. Frequent “clean-up” commits

Every change carries risk, no matter how small. Before tidying code, ask, “How will this help our users?” If the answer is fuzzy, consider leaving the code untouched until the improvement ties directly to a feature, fix, or learning. When you do make opportunistic clean-ups, make the intent explicit in your commit message and ensure tests protect you from regressions.
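
For instance, an opportunistic clean-up commit might read something like: “Extract shipping-fee calculation into its own function ahead of the regional pricing work; behaviour covered by the existing checkout tests.” The specifics here are invented, but the shape matters: it names the tidy-up, the outcome it supports, and the tests that protect it.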

Clear change history is a gift to teammates: it helps them understand why decisions were made and accelerates onboarding. Focus on commits that tell the story of how your team’s work contributes value to your users.

4. Too many outages or regressions

If you find yourself linked to more incidents than you’d like, look for patterns. Are you changing code that lacks tests? Are you operating on risky areas without a rollback plan? Do you hear yourself saying “this shouldn’t affect anything”? That’s a signal to slow down, add tests, or pair with someone familiar with the domain.

Breakages are opportunities to strengthen your safety net. Invest in automated testing, feature flags, observability, and incremental rollouts. These practices protect both your users and your reputation.
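
A feature flag, for example, can start as nothing more than a guarded branch. Here is a minimal, hypothetical sketch (the environment-variable flag lookup and the shipping rules are invented) of how a risky change can ship dark while the proven path stays the default:

import os


def is_enabled(flag: str) -> bool:
    # Hypothetical flag lookup; a real system would usually read from a flag
    # service or config store, often with percentage-based rollout.
    return os.environ.get(f"FEATURE_{flag.upper()}", "off") == "on"


def legacy_shipping_fee(total: float) -> float:
    return 4.99 if total < 50 else 0.0


def new_shipping_fee(total: float) -> float:
    # The new behaviour ships dark and is enabled for a slice of traffic first.
    return 5.99 if total < 35 else 0.0


def shipping_fee(total: float) -> float:
    if is_enabled("new_shipping_rates"):
        return new_shipping_fee(total)
    return legacy_shipping_fee(total)  # the proven path remains the default

Flipping the flag off instantly restores the old behaviour - a rollback without a deploy.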

If you are working on a mature codebase that doesn’t already have these safety nets, read Working Effectively with Legacy Code. Learn the techniques in that book, lead their application, and you’ll have the codebase singing in no time - and everyone working on it will be far less susceptible to outages and regressions.
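
One of the book’s central techniques is the characterization test: before touching untested legacy code, you pin down what it currently does so that a refactor can’t change behaviour unnoticed. A rough sketch, with an invented legacy function:

def legacy_invoice_total(items, region):
    # Imagine this has shipped for years with no tests and no documentation.
    total = sum(price for _, price in items)
    if region == "EU":
        total *= 1.25  # a markup baked in long ago; nobody remembers why
    return round(total, 2)


def test_characterize_eu_invoice_total():
    # The expected value was captured by running the code, not by reading a spec.
    items = [("widget", 10.0), ("gadget", 6.0)]
    assert legacy_invoice_total(items, "EU") == 20.0

Once the current behaviour is pinned down like this, you can refactor underneath it with far more confidence.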

5. Requests for “refactoring time” or “tech debt weeks”

Refactoring is part of building software, not a separate phase. If work feels difficult because of the existing design, articulate the impact in user terms. For example: “Renaming these modules will reduce onboarding time for new team members,” or “Simplifying this flow lets us release the next experiment safely.” Tie your work to outcomes, then include improvements in the scope of delivering those outcomes.

Similarly, “tech debt” is a metaphor for a business decision to trade scope for speed or cost. When the “interest” becomes too high - slowing releases, increasing bugs, making customer commitments risky - explain that impact in plain language and collaborate on when to repay it.

6. Harsh opinions about others’ code

Most codebases are imperfect because they solve messy, evolving problems. Instead of labelling code as “bad” or “hacky,” practice describing the specific risk or limitation. Does it create a security issue? Is it hard to extend in a part of the system that changes frequently? Is it under-tested?

Reading code is a core skill. The more you do it, the easier it becomes to empathise with the constraints previous engineers faced. Remember: if the code is in production, it is delivering value today. Honour that, even as you improve it for tomorrow.

Thinking new code is inherently “better” than old code is a mistake - it’s almost always the opposite. Old code is battle-tested and has had time to reveal its bugs and flaws; new code has not, but its bugs are there - trust me.

Bringing it together

Imagine software development as building a bridge. On one side is the code; on the other, the user enjoying a solution. Many engineers polish their side of the bridge tirelessly while the other side remains difficult to access. Senior engineers ensure the entire structure serves the travellers crossing it.

Here are the key takeaways to keep in mind:

  • Users care about behaviour. Invest in tests and feedback loops that prove value in action.
  • Every change has a cost. Make changes because they serve an outcome, not because the code could be prettier.
  • Testing is how you move fast without breaking things. Automated checks let you evolve systems without putting customers at risk.
  • Seek outcomes over aesthetics. Measure success by the impact you create, not the abstractions you refine.

When you shift your focus from the code you write to the value you unlock, you naturally step into senior-level influence. That’s where coaching, product thinking, and business context become as essential as your technical expertise - and where your work becomes even more rewarding.


Footnotes

  1. If you find yourself updating a lot of mocks, it’s likely your tests “know too much”. This could be a sign to “loosen” your tests - or a sign that your seams could be improved.
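
A hypothetical example of improving a seam: if checking a refund rule forces you to mock an HTTP client, a clock, and a database, pulling the rule into a pure function moves the seam - and the mocks disappear.

from datetime import date


def is_refund_eligible(purchase_date: date, today: date, item_returned: bool) -> bool:
    """Pure business rule behind a cleaner seam (hypothetical); no mocks needed."""
    return item_returned and (today - purchase_date).days <= 30


def test_refund_window_is_thirty_days():
    assert is_refund_eligible(date(2024, 1, 1), date(2024, 1, 31), item_returned=True)
    assert not is_refund_eligible(date(2024, 1, 1), date(2024, 3, 1), item_returned=True)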