“Agile” Isn’t A Helpful Description; Let’s Call It “Rapid Feedback”
When the Agile Manifesto came out in 2001 I was working at a DotCom in the City of London and missed the memo. In a small startup with only a few developers, working directly for clients, there wasn’t much scope for any big waterfall process. Oh, the good ol’ days. Me, the code, the client and the ’puters, in that order. Honour and glory were assured…
Um, no, actually: the DotCom did a DotBomb. These days, rather than a DIY-developer approach to software engineering, I prefer mixed-discipline teams of collaborating specialists: developers, user researchers, user experience designers, business analysts, testers, product owners and delivery managers, working in “two pizza” teams. Sprinkle in further specialists such as devops engineers, service designers, technical architects, content designers, security specialists and performance analysts as needed, and we can ship something both useful and supportable to the users.
All you need to turn those specialists into a delivery juggernaut is to sprinkle some magic pixie dust on them by appointing a “scrum master” to run some “agile ceremonies”, and success is assured. Er, no. That is a cargo-cult approach to delivery. I am getting to the point where, if someone tells me they are a “qualified scrum master”, my mind maps it to “two-week iteration waterfall specialist”. Short iterations by small teams running ceremonies (aka “meetings”) are not guaranteed to get a good outcome. Why?
Rapid feedback is the secret sauce of the build methodologies branded as “agile”. It has two healthy properties. First, you get quick feedback on what you are building (doh!). Second, you get “back-pressure”. It takes time to build something to show the users, and this is a good thing. It enforces discipline upon stakeholders to pick the most important thing to do next: you have to choose between building something new, fixing something, or improving something. That’s a million times better than sitting around dreaming up a long list of features as part of an upfront spec. The back-pressure of having to wait for the next small slice of the system forces people to build only what is needed, not what is merely desired. You build the minimum viable service rather than attempting to build castles in the sky.
Why is this good? Feedback is a double-edged sword. The environment changes the solution, and in turn the solution changes the environment. That makes it nearly impossible to get the correct answer up front in any project which involves business transformation. This means that up-front design is the best way to ensure that you don’t get the result you want or that your users deserve.
Putting real code in front of users and gathering honest feedback forces everyone to experience the complexities and edge cases of the solution. The team feels the effects of nice-to-have features delaying must-have features. In contrast, if you do upfront design there is endless feature creep; every nice-to-have feature becomes a must-have. Product Owners are human too. They make mistakes when trying to define a healthy minimum. You can see the process truly working when you watch them coordinate the quick reversal of a bad decision after rapid user feedback highlights that a mistake has been made.
The most recent project pathology I have witnessed is where the user experience and user research team don’t go out and test working code. They stay off in the distance and test click-through mock-ups for months on end. This isn’t an “agile build”; it is an “agile specification process”. It is justified as seeking to define and “test” concepts to find the “optimal user experience”. Only you are not testing real code with the user engaging their problem-solving brain; their brain is in TV-watching mode. It is only testing in the sense of “can I sell this concept?”, not testing in the sense of “does this feature actually work?”. Aiming for perfection is the sworn enemy of good.
Demoing only a design, never working code, is upfront design. Iterating on such a process is an “agile specification process”, and the build behind it is then likely a “water-scrum-fall”: a short-iteration build with no real user feedback until very near the end, when you “fall” into a beta test. It lacks the back-pressure effect. Good luck with that; you are going to need it.
You should aim for the shortest distance to the most deployed user value. That means keeping it basic and a bit unpolished until you get there. Once at the most deployed value, use that bridgehead as a base for continuous improvement, with feedback and back-pressure to keep you on track.