“Think big, act small, fail fast, learn rapidly” is a statement made by Mary and Tom Poppendieck in their book Lean Software Development. It is a profound statement.
Nevertheless, it seems we still have so many people locked in never-ending discussions about Architecture and Agile. Questions like: “How much architecture should we have before starting development?” or “Can architecture emerge by doing TDD?”. The typical answer is something like: “Well, you should definitely spend a few weeks” or “It can emerge, but it does not happen by itself”. Also, we love to disagree with the other side based on statements full of ambiguous words.
One example is Philippe Kruchten’s post. Another is the talk “Agile Architecture Is Not Fragile Architecture”, given by James Coplien and Kevlin Henney. There is a huge number of blog posts by somewhat less respected names than these, trying to explain whether Agile and architecture contradict each other, should find a compromise, or are a marriage made in heaven.
The problem here is that we talk about architecture as a thing. But you cannot hold architecture, and you cannot point your finger at it and say: “There it is!”. It is an abstract concept, just like Agile. Abstract concept × abstract concept = an unlimited number of assumptions! The questions above are simply invalid. What exactly do you need to have before starting development, and what exactly emerges from TDD? The answers to the questions above are arbitrary, and do not give us insights.
This brings me to the “think big, act small” statement. James and Kevlin do state that Agile never forbids thinking about architecture up front, as opposed to actually setting it in stone. That is clear and useful. But then they muddle it all by stating things like: we must have some up-front design, called RUFD (Rough Design Up Front). So, how much is “rough”, and what is it? Jim emphasises abstract classes, Kevlin interfaces, and “some” documents, and so on. The result is an hour of explaining the subtleties of an abstract concept, which by itself, without a specific context, does not mean much.
After this opinion of mine, it might surprise you that I actually agree with pretty much everything they say. I just want these well-respected names to say things that are actually usable in practice, and a bit less prone to the many possible interpretations that cause huge failures.
The big confusion is caused by the lack of a clear distinction between the different activities we lump together as architecture. Let me try to make that distinction:
Look ahead or have a vision
I have never heard anyone say that we should not spend time looking ahead and having a clear vision of the product from different perspectives. “Think big” is the first part of the Poppendiecks’ statement. I guess Philippe did hear that from some Agile coaches. Some might overlook the first part and go directly for the second.
By having a vision, I mean first and foremost that everyone involved has it in their head. Is it also put down on paper? Irrelevant question! Paper is just a means of communication. Use YouTube if you want; be creative. I don’t care, as long as you fulfil a real need. You can’t do much without knowing this need first.
There is a whole range of practices that help to define a vision. Some of these practices are not called “architectural” but definitely cover things considered architecture. Currently, we don’t have enough practices and games to support teams in this process. Yes, we do have a huge number of overloaded architectural frameworks like TOGAF that stay at the meta level, focus on navel-gazing models, and produce documents. We need you (yes, I’m talking to you, Jim :-)) to help in this process and tell us about useful practices.
Make significant decisions
Some, when talking about architecture, mean making significant decisions about the programming language, technology, and visible (system and user interface) or invisible (internal) structure. The main problem here is: what is significant? How much is really significant? Which things should we postpone, and how? What are the driving forces behind this? And shouldn’t we use technologies and design techniques that enable us to minimise these significant decisions? Which ones help us and which don’t?
Once we have such a decision, what do we do with it? The biggest and clearest difference between Agile and the traditional way of dealing with architecture lies in what we do after we have a significant decision. Usually, we design stuff. By design, I mean drawing a picture or implementing some code in order to… what? Prove it to someone? To whom? And is it a real proof without delivering real functionality to production first? Also, are we talking about the overall internal design, e.g. the definition of components at the highest level, or applying a specific approach like CQRS? Or are we talking about the user interface, or maybe something at a lower level, e.g. caching technology, abstract classes, and so on?
Which of these do we need to define beforehand, and why? These things are completely different from each other and should be treated as such. We don’t have good enough answers to this question. Also, the word “design” is just as ambiguous as “architecture”.
Besides, from the perspective of doing things up front: how much time do we spend on these mental exercises, compared to failing after three sprints and learning from that failure? The devil is in these details. There are many different opinions and practices here:
- Abstract classes, as mentioned by Jim.
- Interfaces, as mentioned by Kevlin.
- TDD will create a proper design, so nothing up front.
- Agile software architecture sketches, by Simon Brown.
- And so on.
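One way to make the “interfaces” and “abstract classes” advice concrete is to treat an up-front decision as nothing more than a seam in the code. The sketch below is my own hypothetical example (the `Cache` name and its methods are not from the talk): the only thing decided up front is the interface, so the significant decision of which caching technology to use can be postponed.

```python
from abc import ABC, abstractmethod
from typing import Any, Optional


class Cache(ABC):
    """The only up-front commitment: a seam, not a technology choice."""

    @abstractmethod
    def get(self, key: str) -> Optional[Any]: ...

    @abstractmethod
    def put(self, key: str, value: Any) -> None: ...


class InMemoryCache(Cache):
    """Simplest possible adapter. A Redis or memcached adapter could
    replace it later without touching code that depends on Cache."""

    def __init__(self) -> None:
        self._store: dict[str, Any] = {}

    def get(self, key: str) -> Optional[Any]:
        return self._store.get(key)

    def put(self, key: str, value: Any) -> None:
        self._store[key] = value


# Client code depends only on the abstraction.
cache: Cache = InMemoryCache()
cache.put("user:42", {"name": "Ada"})
print(cache.get("user:42"))  # → {'name': 'Ada'}
```

The point is not the pattern itself, but the size of the up-front design: exactly one small decision, namely where the seam goes.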
Again, this is all great, but every single one of them is completely arbitrary and heavily context-dependent. It is completely pointless to discuss which one is good or bad. The reality is simple: tell me the context, and I will do my best to explain what the wise choice is.
Again, a relevant question here is whether we make overall decisions up front or not. DSDM, a so-called Agile methodology, spends a huge amount of effort during its elaboration phase (up front) making significant decisions, with, if you are lucky, one implemented use case.
Is this good, bad, or inevitable? Again, it depends hugely on the technology used and on the context. Luckily, there are more and more technologies, languages, and tools that lower the price of refactoring. It would be more interesting to have a discussion about what kind of decisions must be put in code before starting on the first user stories. It would also be interesting to see whether we have, or can improve, certain technologies or design techniques in order to lower the price of refactoring.
Please keep in mind that the distinction between the different “steps” I make in this post is not my real message. You can probably make a better one.
My message is: let’s not oversimplify these things by using generic words like architecture, design, structure, and so on. Let’s simplify things by talking about the concrete things we actually do in software projects.