I get it – looking down the barrel of any new project might seem especially daunting, but your first steps will determine whether it develops into a beauty or a beast…

You’ve been handed the baton and a million different possible combinations stretch out before you. Where to begin, who to involve, what to do first? Then there’s the constant “I want it yesterday” battle with the project sponsor. How do you possibly start to navigate through all that?

The temptation to just pick up an idea and run with it is often overwhelming. 

In the first year of the lab I spent most of my time trying to get projects like these back on track, or stopping new ones from falling into the same pattern. Services were being brought to the lab because the sponsor either didn’t know about, or wasn’t happy with, the progress being made. The teams had gone through the right process – piloting the new service at a local level. Measures had been set so that an evaluation could take place once the pilot was finished, making recommendations on whether the service continued.

Unfortunately, the service leaders/managers tasked with implementing shiny new services often didn’t have the right skill-set, mind-set or time to design their offering properly. When projects were evaluated, the usual conclusion was that the service had changed so frequently during the pilot that most of the measures were now redundant… The word ‘pilot’ became synonymous with last-minute scrambles to evidence whether the service worked or not.

[Diagrams: how people envisage innovation works – sort of a ‘one-stage’ model – versus what actually happens when you rush to implement your ideas…]

For the project teams, immediacy and performance might as well have been mutually exclusive – with immediacy winning out every time. ‘Design’ was too time-consuming to be an option. We watched, frustrated, wondering whether people knew they were scaling services that had never worked in the first place…

In truth, good design (i.e. not just drawn up on the back of a cigarette packet) doesn’t only make things better and more fit for purpose – it can actually be faster. Our innovation pipeline is designed to produce a minimum viable product that we can release in a safe, controlled test environment – not to hold ‘creative sessions’ for weeks on end. The pipeline gives us a focus and a reason to just get the ball moving.

We redefined our pipeline around ‘tests’ – our way of getting quick bursts of feedback about new concepts in the simplest way. Read the definitions below, taken from my earlier post on the Bromford Lab website.


Tests tend to focus explicitly on the building blocks of a new service. They are time-limited, closed-off experiments that help us evaluate design components in isolation, without any of the noise that ‘real life’ might generate. Not that this real noise isn’t relevant – it just muddles things early on.

Light, fast and ‘dirty’ tests come with relatively low risk, so we can afford to run lots of them – indeed we can even fail lots of them without worrying too much. Much better to fail quick, fail cheap, right? More often than not it’s not the idea that fails, just the way we’ve chosen to implement it, or that colleagues have gone ‘off-piste’ with the delivery. Early testing has usually highlighted a new problem – e.g. engagement problems – that could have caused significant start-up lag if we’d jumped straight to pilot.

Tests are also a great way of identifying weaknesses in the data you’re trying to collect. Data requirements should be small and focus solely on what effects you’re trying to observe, rather than exploring relationships between a new and existing service.

It’s important to document a test, so we prep every concept with a test framework that outlines the aims and objectives, the methodology, the data we’re trying to collect, and whether the test is exploratory or aiming to answer a specific question or hypothesis. When you’ve finished testing you should be able to pull all that information together into an up-to-date, field-tested service offer document – one that outlines your idea in depth, with any changes made as a result of your testing included.
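If it helps to see the shape of such a framework, here’s a minimal sketch of one as a data structure – every field name here is my own illustration, not Bromford Lab’s actual template:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestFramework:
    """One record per concept: filled in before the test, rolled up after."""
    concept: str                      # the idea being tested
    aims: List[str]                   # what the test is trying to find out
    methodology: str                  # how the test will be run, and for how long
    data_to_collect: List[str]        # kept small: only the effects you want to observe
    hypothesis: Optional[str] = None  # None means the test is exploratory
    findings: List[str] = field(default_factory=list)  # added as the test runs

    def is_exploratory(self) -> bool:
        # A test either explores or answers a specific hypothesis
        return self.hypothesis is None

    def service_offer_summary(self) -> str:
        """Roll the framework up into the field-tested service offer text."""
        mode = "exploratory" if self.is_exploratory() else f"hypothesis: {self.hypothesis}"
        lines = [f"Concept: {self.concept} ({mode})"]
        lines += [f"- Aim: {a}" for a in self.aims]
        lines += [f"- Finding: {f}" for f in self.findings]
        return "\n".join(lines)
```

The point of writing it down up front is that when testing ends, the service offer document is largely assembled already – you’re appending findings, not reconstructing the rationale from memory.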


Pilots evaluate the whole, assembled service and usually take place over a protracted time-frame so you can spot the interactions you might’ve missed in testing stages. This is ‘adding the noise’ back in to see if your idea holds up.

Because of the resourcing, duration, risks, costs and difficulty in mobilizing – you don’t really want to fail pilots. Better to fail a pilot than have a rubbish service implemented, granted. Then again, much better to drop an idea as a result of a couple of failed tests too…

A whole swathe of measures will likely be drawn up for a data-hungry pilot, which drastically limits your ability to adapt and change the way the pilot is run… and if you do, you can no longer trust your before/after measures.

Pilots are the only way you can test your idea out in real-life situations, so they’re probably important or whatever… but don’t get caught in the trap of fetishizing over how many pilots you have on the go at any one time. Calving more unwieldy pilots into existence is not a badge of honour, it’s a badge of not valuing your own time.

Finally, pilots should never be implemented or scaled into the business without being evaluated. If you’re not going to let the idea fail, there’s no point piloting it in the first place.


So – the lab has become the driving force for testing out new ideas. I like to use the following metaphors to illustrate how this mindset differs from the default “let’s go to pilot!” setting.

Test Driven Design

Adam wants to know if his fort can withstand an enemy invasion. Adam’s not wearing a dress – it’s a kilt, he’s Scottish. (Plus I can’t draw trousers that don’t look crap yet…)

His fort needs a little work, so Adam thinks about all the possible options and what ramifications they’d have on ‘invasion-proofness’. Does he make it 10ft taller or 50ft taller, use traditional bricks or spherical bricks, fill the moat with fire or water? Adam decides to build little towers to prototype each element.

Bigger is better, a water-moat is easier to maintain than a fire-moat and in hindsight spherical bricks were probably never going to work. The results of these tests are evaluated and Adam builds his fort accordingly. He still doesn’t actually know whether it’ll hold up to an invasion – he’d probably get his mates round for a dummy attack run, see how everything holds up. Maybe he’ll even let them in for a beer afterwards…

Pilot Driven Design

Dave wants to know if his fort is invasion proof too, but the cunning fox has spotted a funding opportunity that might help him make his fort an ass-kicking machine. He successfully negotiates a tender with his local NHS clinical commissioning group, on the premise he can demonstrate the fort improves the well-being and psychological resilience of his local community.

Dave embarks on weeks of interviews with his local community, collecting baseline data and identifying individual needs. Soon enough he’s had to change his designs and make a number of compromises all around.

Unexpectedly, accessibility was an issue so he had it converted into a bungalow. People also asked for columns, and they seemed to show an improvement in well-being scores so were incorporated. An invasion was due, so Dave quickly spent the rest of the money on anti-aircraft guns to deter an onslaught.

Unfortunately, most of the preparation time was spent on delivering the NHS targets, and the objective of ‘building an impenetrable fort’ sort of fell by the wayside. When the invasion came, it did so on foot, immune to the weaponry. As it happens, the quality and craftsmanship of the pillars defused the invaders, who went inside to drink tea rather than pillage. Aside from a few stress-related deaths, the local community were also showing higher well-being scores.

Confused? Dave is too…


Take Home Message

High-performing products and services don’t happen by accident – they’re the outcome of good design. If a project is truly worthwhile, why not invest time in its design rather than just winging it? Invest in the right people to guide the idea, people who can maintain professional distance and keep egos out of the equation – ultimately, people who are willing to tell you if your idea sucks.


2 thoughts on “What’s The Difference Between A Test And A Pilot?”
