by Loris Pozzobon | Aug 14, 2019

Imagicle: why we chose emergent design for Agile software development.


“The best architectures, requirements, and designs emerge from self-organizing teams.” This is one of the Principles behind the Agile Manifesto that I briefly discussed in a previous post about Sprint Planning. It turns out to be very relevant today, as we look at which elements we can leverage to create shock-proof software architecture, and at how Imagicle’s self-organizing teams allow the design to emerge.

Building a bridge. The problems with the upfront design.

If you started working in the software industry some time ago, you’ve probably already experienced what I’m talking about, and maybe you’ll agree that the way we used to build software was similar to the way contractors build bridges, that is:
 
  1. define the requirements;
  2. make arrangements with the client about what, how and when it will be done;
  3. design the solution;
  4. implement the solution;
  5. deliver it to the client.
Basically, the Waterfall development methodology.
 
For a bridge, this approach still works very well. And this is because the construction of a bridge typically respects these characteristics:
  1. it has precise requirements (what should be done, for how long it should last, etc.);
  2. the requirements do not change during construction (it is rather difficult to imagine that the client, with work in progress, says something like: “Listen, could you get it 1 km further downstream?”);
  3. it doesn’t need to be designed for future modifications or extensions;
  4. if there are any doubts, they are known: e.g., I don’t know how many kg/m² that foundation can hold, so I know I have to find out;
  5. building the bridge is much more expensive than designing it.
 
Furthermore, discovering a wrong design choice late would mean going back to phases of the construction that could be impossible to modify (what if, after starting construction of the road, you realized that you needed an extra pillar?).
 
Obviously, this is not an absolute truth: even a bridge may be modified during construction, or renovated in the future. However, those changes will be expensive, and their cost will be difficult to predict at construction time. Sometimes, rather than renovating an old bridge to make it more modern and able to carry more traffic, it’s cheaper to rebuild it from scratch.
We can say, in short, that bridges follow a construction method that optimizes the initial realization, to the detriment of future developments. Now, this can be considered an excellent system for making hardware (and I would say that a bridge can be regarded as a nice piece of hardware), but we produce software, right?
Are we sure we can apply the same system to it?

Well, let’s see which of the five factors mentioned above are typically respected in a software product.
 

1. Precise requirements.

Nothing like it. Usually, the requirements become clear as the product takes shape. And since we prefer to make our users happy rather than tell them “You signed off on this 6 months ago”, continuous feedback from users is much more important to us than the initial negotiation of the requirements (Customer collaboration over contract negotiation – Agile Manifesto).

 

2. Stable requirements.

No: it happens all the time that some requirements change during development, new ones emerge, or features that at first seemed indispensable end up being excluded. The market changes quickly, needs change quickly, so the software must change accordingly (Responding to change over following a plan – Agile Manifesto).

 

3. Changes in progress.

Modifications and extensions are a daily occurrence when working on software (think of all the new features included in each release of our products).

 

4. Doubts.

We are aware from the beginning of not knowing some aspects, but we discover other gaps only the moment we encounter them. It’s a bit like saying: if you know you don’t know, count yourself lucky; the problem is when you don’t know you don’t know. (Little spoiler: we’ll see shortly that it’s better not to try to resolve in advance even the doubts we do have!)

 

5. High modification cost.

This is the only point we could have in common with the bridge, and it’s the one we want to stay furthest away from. If we ever had software that should be redone rather than extended, it would mean the software had become unmaintainable, and we should understand what went wrong to avoid that situation in the future.
 
So, apparently, the upfront design, which is well suited for building bridges, doesn’t work very well with software products.
 
Designing the entire product before starting its development, in fact, has a number of easily identifiable disadvantages. Creating a hyper-extensible and super-abstract product is an expensive exercise that risks producing unnecessarily complex abstractions (KISS – Keep It Simple, Stupid); the extensibility we invested in could prove useless or excessive if we found out that a lower or much simpler level of abstraction was sufficient (YAGNI – You Aren’t Gonna Need It); and, in any case, it is hard to predict what the requirements will be and to understand whether extensions will be needed, and at what stage of development. Even if you try, it’s very easy to get it wrong, wasting a great deal of time and money.
 
The solution is there: let the design emerge. 
Let’s see how.

Emergent design.

In his last post, Riccardo talked about TDD as a tool to make development more efficient, guarantee a high level of software quality through the Red-Green-Refactor cycle, and obtain a suite of automated tests run by our Continuous Integration system at each software build.
Well, TDD, Refactoring, and Continuous Integration are the exact ingredients we need to make the design emerge spontaneously. Now let’s see how this translates into practice.
 

A real use case: the Imagicle ApplicationSuite Web Services.

To date, the Imagicle ApplicationSuite exposes dozens of REST Web Services. Some form public APIs, useful for integrating the suite with third-party products (such as those for controlling recordings or sending faxes). But to get to dozens of Web Services, we had to start from the first one. And we did, back in 2014, when we wrote the first REST Web Service using WCF. We knew it would be the first in a long series: the Agile era had just begun, and we had recently introduced TDD; the most natural thing was to write the Web Service as a simple C# class, developed in TDD, instead of a Web Service built on ASMX technology.

A good software architect could have said:
“Well, since from now on we will write a lot of classes like this, let’s think about how to structure them, and then design a framework that makes it easy and fast to write similar classes.”
In this way, we would have designed the abstraction up front, with all the difficulties seen before.
 
But as you imagine, it didn’t happen that way.
We knew we would end up with a class, but we didn’t know much else about what that class should look like once finished. So, we wrote a test, we made it pass, we wrote another one, and so on with the Red-Green-Refactor cycle, until we ended up with exactly what we needed: a class representing a working Web Service, written in TDD.
No more and no less.
After the first Web Service, we were satisfied with the result: we had a class written entirely in TDD, with a suite of tests performed by the Continuous Integration system that ensured its correct functioning and allowed us to continuously refactor the product code, keeping it always clean, clear and easily editable.
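One round of that cycle can be sketched in miniature. This is a hedged illustration in Python (the real code was C# on WCF): FaxService, send_fax, and the test are hypothetical names, not Imagicle’s actual API.

```python
# A miniature Red-Green-Refactor round (hypothetical names, not the
# real Imagicle code, which was C# on WCF).

class FaxService:
    """A web service as a plain class, grown one test at a time."""

    def send_fax(self, number, document):
        # Minimal code that makes the test below pass: no framework,
        # no speculative extensibility.
        if not number:
            raise ValueError("number is required")
        return {"status": "queued", "to": number}


# The "Red" step: this test was written before the method above existed.
def test_send_fax_queues_the_document():
    service = FaxService()
    result = service.send_fax("555-0100", b"%PDF...")
    assert result == {"status": "queued", "to": "555-0100"}


test_send_fax_queues_the_document()
```

The point of the sketch is what is absent: no base class, no framework, nothing beyond what the tests demanded.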
 
Plus, having written the class in TDD, it turned out to be natively compliant with the Single Responsibility Principle: the Web Service does the Web Service’s job; it doesn’t also have to handle business logic and persistence.
 
Finally, to be able to write unit tests, we used Dependency Injection, so we could pass mock implementations in the tests. We can say that, even at the level of a single class, a design complying with the SOLID principles emerged spontaneously from the tests that modeled its behavior, as well as from the continuous refactoring applied to the class during its development.
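As a small illustration of that point (hypothetical names, not the real Imagicle classes), Dependency Injection lets a test hand the service a fake in place of the real persistence layer:

```python
class FakeRepository:
    """Test double standing in for the real persistence layer."""

    def __init__(self):
        self.saved = []

    def save(self, record):
        self.saved.append(record)


class RecordingService:
    """The repository is injected, so tests can substitute a fake."""

    def __init__(self, repository):
        self._repository = repository

    def start_recording(self, call_id):
        record = {"call": call_id, "state": "recording"}
        self._repository.save(record)
        return record


def test_start_recording_persists_the_record():
    repository = FakeRepository()
    service = RecordingService(repository)
    service.start_recording("call-42")
    assert repository.saved == [{"call": "call-42", "state": "recording"}]


test_start_recording_persists_the_record()
```

Because the dependency arrives through the constructor, the class never decides for itself which implementation it talks to, which is exactly what makes it testable in isolation.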
 
And then at some point it was time to write the second Web Service.
 
Well, well, and now what? 
We didn’t have very clear ideas about which parts would be common to all Web Services, whether there would be a common processing scheme for each Web Service function, and so on. So, we simply started writing the second Web Service with TDD.
 
Once again, the result was very satisfying. Now we had two Web Service classes whose design had emerged directly from their tests. It was time to do some more refactoring, even outside of the class just written.
 
We noticed that the two implementations had some code that could be shared (DRY – Don’t Repeat Yourself), at a level of abstraction only minimally higher than that of the two separate pieces of code. Hence, we introduced an abstract WebService class, which exposed some protected methods for the two Web Services to use.
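A sketch of that refactoring, again in illustrative Python with hypothetical names (the real class is C#): two services share a thin abstract base whose protected helpers hold the code that would otherwise be duplicated.

```python
from abc import ABC


class WebService(ABC):
    """Abstract base holding the small piece of code shared by the
    first two services (a hypothetical sketch, not the real C# class)."""

    def _validate(self, payload):
        # Shared request validation extracted from both services.
        if "user" not in payload:
            raise ValueError("missing user")

    def _ok(self, body):
        # Shared response wrapping extracted from both services.
        return {"status": 200, "body": body}


class FaxWebService(WebService):
    def send(self, payload):
        self._validate(payload)
        return self._ok({"queued": True})


class RecordingWebService(WebService):
    def start(self, payload):
        self._validate(payload)
        return self._ok({"recording": True})
```

Note that the abstraction is only as tall as the duplication it removes; nothing was added in anticipation of services that didn’t exist yet.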

And what about the tests?

No, we haven’t forgotten them. The tests continued to probe the two classes without noticing anything. But that’s what refactoring is, isn’t it? Modifying the implementation of a class while keeping its behavior unchanged.
 
Since the tests verify the behavior of a class and not its implementation, the tests acted as our watchdog, checking that nothing broke during refactoring.
This is a critical point, often underestimated. Several times I’ve been asked: “How can we test this private method?”. Well, the question contains an intrinsic error: if I need to test a private method, it means I’m trying to test the private implementation of some public method.
But that would mean tying the implementation of the class to its tests.
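A toy example may make the point concrete (the class name and the 22% VAT rate are made up for the illustration): the test exercises only the public method, so the private helper can be rewritten freely without any test changing.

```python
class PriceCalculator:
    VAT = 0.22  # made-up rate, just for the example

    def total(self, net_prices):
        # Public behavior: the only thing the tests are allowed to know.
        return round(self._sum(net_prices) * (1 + self.VAT), 2)

    def _sum(self, net_prices):
        # Private detail: it can be rewritten (a loop, a fold, a call to
        # another component) and the test below won't notice, as long as
        # total() keeps behaving the same.
        return sum(net_prices)


def test_total_applies_vat():
    assert PriceCalculator().total([10, 20]) == 36.6


test_total_applies_vat()
```

If the test called `_sum` directly, renaming or inlining that helper would break the suite without any behavior having changed, which is the trap the question above falls into.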
 

So, we had come to have our first two Web Services, plus a very simple abstract class holding a small piece of shared code. After a while, it was time to write the third Web Service, this time with some infrastructure ready.

But, as you can guess, it’s not all peaches and cream. Another trap is lurking.

Mind the trap.

One might think: “The code I put into the abstract class is already covered by the tests of the two other Web Services. I don’t need to write tests that verify the same code for the third Web Service: it would be a duplication.”
And here you go. 
This is the best way to fall into the trap, since:
 
  1. this assumption is based on knowledge of the implementation of the three Web Services, which, as we have seen, is not the correct way to interpret automated tests. What if tomorrow, following a refactoring, that portion of common code stopped providing the functionality we decided not to test? That functionality would break, and we wouldn’t even notice;
  2. the fact that, today, the other two Web Services must have the same behavior as the third (as far as the common code is concerned) does not necessarily hold tomorrow. Tomorrow we may change the tests that model that behavior in the first two Web Services, change the common code so that those tests pass, and thereby break the third Web Service, which everyone expected to behave as before.
 
But wouldn’t this lead to duplicated test code? Indeed: if there is duplication in the code that tests the same behavior on three different Web Services, it means the test code must be refactored and made common too. Tests also need to be easy to maintain and extend, so they must be continuously refactored and kept clean.
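A sketch of what refactored, common test code can look like (all names hypothetical): one helper expresses the shared behavior, and each service’s test suite invokes it, so the behavior stays pinned on every service without copy-pasting assertions.

```python
class FaxService:
    def handle(self, payload):
        # In the real system this check would live in shared code;
        # the tests must not care where it lives.
        if "user" not in payload:
            raise PermissionError("anonymous request")
        return {"status": 200}


class RecordingService:
    def handle(self, payload):
        if "user" not in payload:
            raise PermissionError("anonymous request")
        return {"status": 200}


def assert_rejects_anonymous_requests(service):
    """Shared test code: the behavior is verified on each service,
    even though the implementation may live in common code."""
    try:
        service.handle({})
    except PermissionError:
        return
    raise AssertionError("anonymous request was not rejected")


# Each service's test suite calls the shared helper.
for service in (FaxService(), RecordingService()):
    assert_rejects_anonymous_requests(service)
```

The duplication disappears from the tests, but every service still has the behavior asserted against it, so a regression in any one of them is caught.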
 
Obviously, the more code we already have ready (within the infrastructure), the more tests will pass immediately, without writing a single line of production code (if we want, we can see this as a measure of the actual contribution our infrastructure makes to development efficiency).
So we wrote the third Web Service, with all its tests, and after writing it, another piece of infrastructure emerged, which could benefit all future Web Services.
This continuous design process has gone on for five years now, to the point where the WebService class contains the template that each function of each Web Service specializes, through the Command design pattern, which itself emerged as a product of the refactoring.
Not bad, considering that in many cases it’s difficult even just to keep adding functionality to a class for five years, let alone keep adding new inheritors.
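The shape the WebService class converged to can be sketched roughly like this (a Python toy with hypothetical names; the real class is C# and its template steps are certainly richer): a template method fixes the common processing scheme, and each service function is a Command that plugs into it.

```python
from abc import ABC, abstractmethod


class Command(ABC):
    """Each function of each Web Service becomes one Command."""

    @abstractmethod
    def execute(self, request):
        ...


class WebService:
    """Template method: the common scheme every function goes through.
    (A toy sketch; the real template surely does more.)"""

    def invoke(self, command, request):
        if not isinstance(request, dict):          # shared validation
            return {"status": 400, "body": None}
        try:
            body = command.execute(request)        # the specialized step
        except KeyError:
            return {"status": 422, "body": None}   # shared error handling
        return {"status": 200, "body": body}       # shared response wrapping


class SendFaxCommand(Command):
    def execute(self, request):
        return {"queued_to": request["number"]}


service = WebService()
response = service.invoke(SendFaxCommand(), {"number": "555-0100"})
```

Adding a new service function then means adding one Command, while validation, error handling, and response wrapping stay in the template and are tested once.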

Strange case of software design.

At this point you may have some doubts: what about the UML diagrams? The software architects? Does everything disappear?
 
No, no. Absolutely not.
 
What we are trying to do is minimize the architectural choices we need to make at the initial stage. And once we have made some architectural choices, we try, as far as possible, to ensure that the software ignores them.
Okay, it may seem strange, but perhaps it makes sense if we think that what we usually call architectural choices are actually technological choices.
And the best way to create software that is difficult to maintain and evolve is to create it based on the technologies with which it interfaces.
 
“A good architect maximizes the number of decisions not made” – Uncle Bob (Robert C. Martin).
 
We use diagrams almost daily, and here lies the difference: there is no design phase in which the architect thinks up all the classes to be written, prepares the UML, and hands it to the team to implement.
The diagrams have the sole purpose of facilitating communication: when words aren’t enough to convey a concept, you go to the whiteboard, sketch a simple diagram of the immediate development to be done, do the work, and then erase the diagram.
 
“Working software over comprehensive documentation” – Agile Manifesto.
 
This approach has two advantages: there are no diagrams to keep aligned with the code, and the code becomes the only reference for understanding the architecture, so it necessarily has to be clear.
 
In this way, the teams develop the design as a continuous process, part of development itself, carried out by the team, which is also in charge of coaching less experienced people so that they can soon make their own contribution. After all, who is a better architect than those who change, extend, and evolve the software every day?
 
Remember the principle we started from? And that software is not like a bridge?
Well, here we are.

Conclusions.

Well, guys. Let’s do a little recap.
In this article, we have seen how the architecture of our software emerges through a scale-up approach rather than upfront design.
 
TDD and refactoring are fundamental tools for letting the design emerge. In particular, the tests are fundamental for refactoring, which in turn is fundamental for bringing out the design while keeping the architecture both solid and extensible. And of course, we can’t forget Continuous Integration (otherwise, who would run the tests?).
 
We also focused on a few traps we typically encounter when we are going to evolve software using TDD, and how to avoid them.
Finally, we have seen how emergent design is not the negation of architecture, but a different way of conceiving it, one that forces the software to be soft-ware.
In fact, software that easily undergoes architectural evolution is also software in which it’s easy to change or add requirements. And having software that easily evolves its architecture makes it naturally inclined to change and evolve with your needs in a smarter, easier, and safer way.
 
…And if you want to learn more about emergent design, share your thoughts below.
 
 
#stayimagicle
