Wednesday, 18 March 2015

Contract testing

Let's imagine the following scenario...
We are working in a distributed system with lots of applications.
The developers understand the importance of avoiding coupling among components, so they decide to build RESTful applications that communicate via XML and JSON,
instead of building applications that are binary-dependent on other applications.

During the development of a feature, the development team made a change to the API and unknowingly broke one of the consumer apps.
Unfortunately, this bug was really expensive: the company only managed to discover it in its replica, pre-production environment via a long-running
end-to-end functional test. After determining that what was broken was actually an XML marshaller, there was no quick fix and they had to roll back.

In the root cause analysis meeting, the developers from each of the teams that own the failing apps realised that the API change was the reason for the bug,
and that no additional work had been done in one of the unmarshallers.
The developers were told to fix the bug and also to come up with a solution that would prevent this from happening again.

After fixing the bug, the developers took some time to think about how they could catch this kind of bug before the pre-production environment, where the expensive
integration tests run. One of them said, "What we need is consumer contract testing!"...

Consumer contract testing allows consumers and providers of an API to know whether their latest changes to their marshallers or unmarshallers could potentially be
harmful to the other party, without the need to perform an integration test. This is how it works:


1- The provider of the API publishes an example of the API somewhere the consumer can access it (e.g. publishing it in a repo, sending it via email...).
2- The consumer takes the API example and writes a test that tolerantly accesses the values of interest.
   This in-document path (e.g. XPath, JSONPath...) used to retrieve the values from the API example is known as the contract.
3- The consumer publishes the contract in a place where the provider has access to it (e.g. publishing it in a repo, sending it via email...).
4- The provider takes the contract and uses it in a test against the generated output of the application. If the test fails, the provider knows that they could potentially break the consumer if they were to release the current version under test (a negotiation can take place).

Let's now have a look at a practical example of each of the steps above.

1- The developers that own the provider app take, from a passing acceptance test, the output that the application sends back to the consumer and save
it into a file called "apiexample.xml", which looks like this:

 <output>  
      <content>  
           <partA>A</partA>  
           <partB>B</partB>  
      </content>  
 </output>  

They send this file over email to the team that owns the consumer application.

2- The developers that own the consumer app take the example and write queries against it to determine the contract they need. A unit test against the example could be enough.

 @Test  
 public void apiExampleValidatesAgainstContract() throws Exception {  
   XPath xPath = XPathFactory.newInstance().newXPath();  
   String value = xPath.evaluate("/output/content/partB", getSource(readExample("apiexample.xml")));  
   assertThat(value, is(notNullValue()));  
 }  
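The getSource and readExample helpers used above aren't shown in the post; here is a minimal sketch of what they might look like (the names and behaviour are assumed from how the test uses them):

```java
import java.io.StringReader;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.xml.sax.InputSource;

public class ContractTestHelpers {

    // Reads the published API example file into a String.
    static String readExample(String fileName) throws Exception {
        return new String(Files.readAllBytes(Paths.get(fileName)));
    }

    // Wraps the XML text in an InputSource so XPath can evaluate against it.
    static InputSource getSource(String xml) {
        return new InputSource(new StringReader(xml));
    }
}
```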

3- Now that the developers know that the contract to access what they are interested in is:
 "/output/content/partB"
they can save it in a file called "contract.txt" and send it over email to the other team, so that the provider can make sure they always output according to the contract. Note that these tolerant
paths allow the provider to change any part of the API they want, as long as the contract is respected.

4- The provider reads the "contract.txt" file and writes a test where the contract is applied to the application's output.

 @Test  
 public void applicationOutputValidatesAgainstContract() throws Exception {  
   XPath xPath = XPathFactory.newInstance().newXPath();  
   // In the provider's build, the XML under test should be the application's current output.  
   String value = xPath.evaluate("/output/content/partB", getSource(readExample("apiexample.xml")));  
   assertThat(value, is(notNullValue()));  
 }  
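The same provider-side check can also be expressed with just the JDK, without a test framework. This is only a sketch; the honoursContract helper and the idea of feeding it the application's freshly generated XML are my assumptions, not code from the post:

```java
import java.io.StringReader;

import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;

import org.xml.sax.InputSource;

public class ProviderContractCheck {

    // Applies the consumer's contract path to the XML the provider currently
    // generates; returns true when the path still resolves to a value.
    static boolean honoursContract(String contractPath, String currentOutput) throws Exception {
        XPath xPath = XPathFactory.newInstance().newXPath();
        String value = xPath.evaluate(contractPath, new InputSource(new StringReader(currentOutput)));
        return value != null && !value.isEmpty();
    }
}
```

If the provider later renames partB, this check fails in their own build, long before any pre-production environment sees the change.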

Now, when either of the teams runs its build, it will know if it is breaching the contract, and the bug will be stopped before it gets further than the development environment.

You can find the complete source code of this example here.

Wednesday, 11 March 2015

Yet Another Blog Article About Acceptance Testing


Acceptance tests are tests conducted to determine if the requirements of a specification are met.
In modern software development, we call this specification the acceptance criteria.

“Whenever possible”, it would be desirable to acceptance test the system end to end.
By end to end, I mean talking to the system from the outside, through its interfaces.

Note that at the beginning of the previous paragraph, I said “whenever possible”.
The reason is that it would be risky and also costly to integration test our code against other code we don't control or own. Sometimes applications within a system don't even belong to our company, or they are too costly and slow to run. Because of this, the number of full-stack system/functional tests should be kept very small, almost none.

In acceptance testing we often start from an assumption about the external systems we cannot control. The parts out of our control are faked, and the acceptance criteria are aimed at the parts we do control.

When writing an acceptance test, there is a commonly used format for defining the acceptance criteria, well known as the “given, when, then” format:

- given: The setup/preconditions of the scenario we will test. It contains what we expect from the remote systems (either internal or external) on which we depend.
- when: The specific call to the exposed interface we are testing.
- then: The validation of the results.

Today's acceptance tests are written with the help of live specification frameworks, such as JBehave, Fit, FitNesse, Concordion, Yatspec...
Using these tools makes it easier both to understand complex scenarios and to maintain the criteria.


Understanding Yatspec

Next I will talk about writing acceptance tests with a popular live specification framework called Yatspec. I will explain some of its features and describe the way it presents the test report. I will also use an example to show how we could stub out systems beyond our control and use them in our acceptance test.

About yatspec
- It's a live specification framework for Java (https://code.google.com/p/yatspec/)
- Produces readable HTML
- Supports table/parameterized tests
- Allows writing in given-when-then style

 
The scenario
The application we will be testing receives a GET request from a client, then sends subsequent GET requests to two remote systems (A and B), processes the responses and POSTs the result to a third system (C), just before returning it to the client.



The criteria
-Given System A will reply 1 2 3
-And System B will reply 4 5 6
-When the client asks for the known odd numbers
-Then the application responds 1 3 5
-Then 'System C' receives 1 3 5


Creating html reports
Before going in depth into our example, I want to spend some time discussing what Yatspec reports look like and the basics of creating them (if you want to go directly to the scenario implementation, just skip this section).

When a Yatspec specification is run, it generates an HTML report. Advanced options allow you to publish it remotely, but by default it is written to a temporary file in the file system.
The terminal will tell you where it is, like this:
Yatspec output:
/tmp/acceptancetests/KnownOddNumbersTest.html
We can navigate to it from the browser's URL bar:
file:///tmp/acceptancetests/KnownOddNumbersTest.html

Let's have a look at how it is structured:


(a) This is the title of the report. If Yatspec finds the suffix 'Test' in the class name, it removes it and presents just the rest as the title.

 @RunWith(SpecRunner.class)  
 public class KnownOddNumbersTest extends TestState {  
      //Your tests  
 ...  
 }  


(b) The contents section shows a summary of all the test names (there can be multiple tests) in the same specification.



(c) This is the test name. We don't need to add any additional annotations; all we need is to write our test names in camel case. If the test throws an exception, it will not be shown in the report.


 @Test  
 public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
       //Test body...  
 }  


(d) At the beginning of each test, the criteria are presented. Yatspec uses the contents of the method body to generate them. The methods given(), and(), when() and then() are inherited from TestState.java (later I will explain how to use them).

 
 @Test  
   public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
     given(systemARepliesWithNumbers("1,2,3"));  
     and(systemBRepliesWithNumbers("4,5,6"));  
     when(aRequestIsSentToTheApplication());  
     then(theApplicationReturnedValue(), is("1,3,5"));  
     then(systemCReceivedValue(),is("1,3,5"));  
   }  

(e) This is where the test result is shown. Yatspec colours this part green if the test passes, red if the test fails, or orange if the test is not run.

(f) Interesting givens are the preconditions for the test to run. These preconditions are stored in the TestState.java class, in an object called interestingGivens. The way we would commonly populate them is by passing a GivensBuilder object to the given() method. The and() method can also be used to add more information to our interesting givens.
 
 @Test  
   public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
     given(systemARepliesWithNumbers("1,2,3"));  
     and(systemBRepliesWithNumbers("4,5,6"));  
     //...  
   }  
   private GivensBuilder systemARepliesWithNumbers(String numbers) {  
     return givens -> {  
       givens.add("system A returns", numbers);  
       return givens;  
     };  
   }  
   private GivensBuilder systemBRepliesWithNumbers(String numbers) {  
     return givens -> {  
       givens.add("system B returns", numbers);  
       return givens;  
     };  
   }  

(g) These are the captured inputs and outputs. Their purpose is to record values that go in or out of any component in the workflow. TestState.java contains an object called capturedInputsAndOutputs to which we can add values or query them. Commonly we would indirectly add a value to capturedInputsAndOutputs to track the response of our application, so it can be verified later, via a parameter of type ActionUnderTest.java passed to the when() clause method.

 @Test  
   public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
     //...  
     when(aRequestIsSentToTheApplication());  
     //...  
   }  
 private ActionUnderTest aRequestIsSentToTheApplication() {  
     return (givens, captures) -> {  
       // The second parameter of this lambda is capturedInputsAndOutputs  
       captures.add("application response", newClient()  
           .target("http://localhost:9999/")  
           .request().get().readEntity(String.class));  
       return captures;  
     };  
   }  


(h) These are the final verifications. They are created by the then() method. You can tell an output was generated by the then() method because it is not highlighted in yellow.
A StateExtractor.java is responsible for the values in this section. The state extractor takes from the captures the values that were recorded previously, so a matcher can verify whether they are correct.


 @Test  
   public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
     //...  
     then(theApplicationReturnedValue(), is("1,3,5"));  
   }  
 private StateExtractor<String> theApplicationReturnedValue() {  
     return captures -> captures.getType("application response", String.class);  
   }  

The scenario implementation
Now that we understand the criteria and have a basic understanding of Yatspec reports, let's write an acceptance test for the criteria described before.

In our scenario, Systems A, B and C are out of our control (let's imagine they are owned by other companies). We need to first query A and B and then send the processed result to C before replying to the client.
This means that our interesting givens will be the values returned from A and B, and our captured inputs and outputs will contain the input into C.

 
So let's have a look at how Systems A and B return to the application the values previously saved in the interesting givens, and also how System C captures the input.

For this example, I created a class called FakeSystemTemplate.java, which contains the boilerplate code necessary to create an embedded server. Systems A, B and C will each inherit from it and provide specific handler implementations.

 public abstract class FakeSystemTemplate {  
   private final HttpServer server;  
   protected InterestingGivens givens;  
   protected CapturedInputAndOutputs captures;  
   public FakeSystemTemplate(int port, String context,InterestingGivens givens, CapturedInputAndOutputs captures) throws IOException {  
     this.givens = givens;  
     this.captures = captures;  
     InetSocketAddress socketAddress = new InetSocketAddress(port);  
     server = HttpServer.create(socketAddress,0);  
     server.createContext(context, customHandler());  
     server.start();  
   }  
   public abstract HttpHandler customHandler();  
   public void stopServer() {  
     server.stop(0);  
   }  
 }  


Later, when we create the acceptance test, we will see how we pass the interesting givens and the captured inputs and outputs to the systems.
Systems A and B return the values stored in the interesting givens using a unique key (later we will see how these keys are set in the givens).


 public class SystemA extends FakeSystemTemplate {  
   public SystemA(int port, String context, InterestingGivens interestingGivens, CapturedInputAndOutputs capturedInputAndOutputs) throws IOException {  
     super(port, context, interestingGivens, capturedInputAndOutputs);  
   }  
   @Override  
   public HttpHandler customHandler() {  
     return httpExchange -> {  
       String response = givens.getType("system A returns", String.class);  
       httpExchange.sendResponseHeaders(200, response.length());  
       OutputStream outputStream = httpExchange.getResponseBody();  
       outputStream.write(response.getBytes());  
       outputStream.close();  
       httpExchange.close();  
       captures.add("output from system A", response);  
     };  
   }  
 } 
 
 public class SystemB extends FakeSystemTemplate {  
   public SystemB(int port, String context, InterestingGivens interestingGivens, CapturedInputAndOutputs capturedInputAndOutputs) throws IOException {  
     super(port, context, interestingGivens, capturedInputAndOutputs);  
   }  
   @Override  
   public HttpHandler customHandler() {  
     return httpExchange -> {  
       String response = givens.getType("system B returns", String.class);  
       httpExchange.sendResponseHeaders(200, response.length());  
       OutputStream outputStream = httpExchange.getResponseBody();  
       outputStream.write(response.getBytes());  
       outputStream.close();  
       httpExchange.close();  
       captures.add("output from system B", response);  
     };  
   }  
 }  


For System C, we will capture the arriving input.

 public class SystemC extends FakeSystemTemplate {  
   public SystemC(int port, String context, InterestingGivens interestingGivens, CapturedInputAndOutputs capturedInputAndOutputs) throws IOException {  
     super(port, context, interestingGivens, capturedInputAndOutputs);  
   }  
   @Override  
   public HttpHandler customHandler() {  
     return httpExchange -> {  
       Scanner scanner = new Scanner(httpExchange.getRequestBody());  
       String receivedMessage = "";  
       while(scanner.hasNext()) {  
         receivedMessage += scanner.next();  
       }  
       scanner.close();  
       httpExchange.sendResponseHeaders(200, 0);  
       httpExchange.close();  
       captures.add("system C received value", receivedMessage);  
     };  
   }  
 }  


Now that our remote systems are ready, let's write our test.


 @RunWith(SpecRunner.class)  
 public class KnownOddNumbersTest extends TestState {  
   private SystemA systemA;  
   private SystemB systemB;  
   private SystemC systemC;  
   private Application application;  
   @Before  
   public void setUp() throws Exception {  
     systemA = new SystemA(9996, "/", interestingGivens, capturedInputAndOutputs);  
     systemB = new SystemB(9997, "/", interestingGivens, capturedInputAndOutputs);  
     systemC = new SystemC(9998, "/", interestingGivens, capturedInputAndOutputs);  
     application = new Application(9999, "/");  
   }  
   @After  
   public void tearDown() throws Exception {  
     systemA.stopServer();  
     systemB.stopServer();  
     systemC.stopServer();  
     application.stopApplication();  
   }  
   @Test  
   public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
     given(systemARepliesWithNumbers("1,2,3"));  
     and(systemBRepliesWithNumbers("4,5,6"));  
     when(aRequestIsSentToTheApplication());  
     then(theApplicationReturnedValue(), is("1,3,5"));  
     then(systemCReceivedValue(),is("1,3,5"));  
   }  
 }  


By extending TestState.java we get access to the interestingGivens and capturedInputAndOutputs objects. We pass them to the remote systems; this way, Systems A and B are aware of what we expect them to return, and C is able to capture its input.

The methods used inside given(), and(), when() and then() are just static fixture methods. I think it is good to avoid long classes, so the test class contains just the test; everything else is extracted into reusable fixture methods. Let's have a look at them.


 public class GivensFixture {  
   public static GivensBuilder systemARepliesWithNumbers(String numbers) {  
     return givens -> {  
       givens.add("system A returns", numbers);  
       return givens;  
     };  
   }  
   public static GivensBuilder systemBRepliesWithNumbers(String numbers) {  
     return givens -> {  
       givens.add("system B returns", numbers);  
       return givens;  
     };  
    }  
  }  
 
  public class WhenFixture {  
   public static ActionUnderTest aRequestIsSentToTheApplication() {  
     return (givens, captures) -> {  
       captures.add("application response", newClient().target("http://localhost:9999/").request().get().readEntity(String.class));  
       return captures;  
     };  
   }  
 }
 
 public class ThenFixture {  
   public static StateExtractor<String> theApplicationReturnedValue() {  
     return captures -> captures.getType("application response", String.class);  
   }  
   public static StateExtractor<String> systemCReceivedValue() {  
     return captures -> captures.getType("system C received value", String.class);  
   }  
 }  


Once we run the application, the acceptance test goes red. The next thing to do, if we were practicing ATDD, would be to go into the production code and write unit tests to guide the creation of the code required to make the acceptance test go green. Remember the ATDD cycle.

 
The TDD of the final solution is out of scope for this blog post, but you can find all the completed code at this git repo:



Wednesday, 4 February 2015

Exposing the data layer of your app using REST

The more we separate the concerns of our system, the more maintainable it becomes.

It is very common to find applications written in such a way that the data access mechanisms (SQL files, JDBC client code, ORM mappings...) are located right next to (coupled with, interdependent on) the service/business logic. This often makes finding a bug, making a change, etc., harder.

Calculating a result and storing it are different things. So why not separate those two responsibilities between different applications?

One would be responsible for making sure the results are calculated, and the other would just provide data management support.
In my opinion, the result of doing this is a system that is more understandable, maintainable and upgrade friendly.

In many companies, data is often managed by database engineering teams which have their own schedules, goals and even different managers from the development teams. In this type of organization, delays, misunderstandings, conflicts of interest and work de-synchronization are very common. So to make the most of a decoupled system, we not only need a good software approach, but also a process and team structure that are compatible with it (but that may be a topic for another post). This type of decoupling will not just make maintenance easier for the developers; it will probably also encourage discussion about the process and the team structure.

In my example I decided to expose 2 persistence services via 1 URL, persisting simultaneously in 2 types of databases (a SQL and a NoSQL DB).
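Both adapters below plug into the same port, a CreateService interface (the project keeps one per package: services.nosqlcrud.CreateService and services.sqlcrud.CreateService). Here is a minimal sketch of the idea, with a simplified Address stand-in and a hypothetical in-memory implementation for illustration; these names are mine, not from the repo:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the real Address domain object.
class Address {
    private final String firstLine;
    Address(String firstLine) { this.firstLine = firstLine; }
    String getFirstLine() { return firstLine; }
}

// The port every persistence adapter implements; the REST resource depends
// on this interface, never on Mongo or Hibernate directly.
interface CreateService {
    void create(Address address);
}

// A hypothetical in-memory implementation, handy for unit testing the resource.
class InMemoryCreateService implements CreateService {
    final List<Address> stored = new ArrayList<>();
    @Override
    public void create(Address address) { stored.add(address); }
}
```

Because the resource only sees the interface, swapping a database means writing a new adapter, not touching the business logic.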

This is the implementation of the NoSQL adapter:


 public class NoSqlAddressInsertAdapter implements CreateService {  
   private final MongoClient mongoClient;  
   @Inject  
   public NoSqlAddressInsertAdapter(MongoClient mongoClient) {  
     this.mongoClient = mongoClient;  
   }  
   @Override  
   public void create(Address address) {  
     DBCollection collection = mongoClient.getDB("radadata").getCollection("address");  
     collection.insert(toNoSqlAddress(address));  
   }  
   private AddressNoSql toNoSqlAddress(Address address) {  
     AddressNoSql addressNoSql = new AddressNoSql();  
     addressNoSql.append("firstline", address.getFirstLine());  
     addressNoSql.append("secondline", address.getSecondLine());  
     addressNoSql.append("postcode", address.getPostcode());  
     addressNoSql.append("persons", address.getPersons().stream().map(toNoSqlPersons()).collect(toList()));  
     return addressNoSql;  
   }  
   private Function<Person, PersonNoSql> toNoSqlPersons() {  
     return person -> {  
       PersonNoSql personNoSql = new PersonNoSql();  
       personNoSql.append("firstname", person.getFirstName());  
       personNoSql.append("secondname", person.getSecondName());  
       return personNoSql;  
     };  
   }  
 }  

This is the implementation of the SQL adapter:


 public class SqlAddressInsertAdapter implements CreateService {  
   @Inject  
   public SqlAddressInsertAdapter() {  
   }  
   private static SessionFactory getSessionFactory() {  
     return HibernateUtil.getSessionFactory();  
   }  
   private Session session;  
   @Override  
   public void create(Address address) {  
     session = SqlAddressInsertAdapter.getSessionFactory().getCurrentSession();  
     session.beginTransaction();  
     Set<ORMPerson> ormPersons = address.getPersons().stream().map(toOrmPersons()).collect(toSet());  
     ORMAddress ormAddress = new ORMAddress();  
     ormAddress.setFirstLine(address.getFirstLine());  
     ormAddress.setSecondLine(address.getSecondLine());  
     ormAddress.setPostcode(address.getPostcode());  
     ormAddress.setOrmPersons(ormPersons);  
     session.save(ormAddress);  
     session.getTransaction().commit();  
   }  
   @Override  
   public void create(Person person) {  
     //  
   }  
   private Function<Person, ORMPerson> toOrmPersons() {  
     return person -> new ORMPerson(person.getFirstName(),person.getSecondName());  
   }  
 }  

Note that both adapters use their own specific domain objects; one uses an ORM (those ORM classes are Hibernate entities) and the other doesn't.

This sample REST endpoint allows access to both services simultaneously:


 @Service  
 @Path("insertperson")  
 public class InsertAddressResource {  
   private final services.nosqlcrud.CreateService noSqlcreateService;  
   private final services.sqlcrud.CreateService sqlCreateService;  
   @Inject  
   public InsertAddressResource(services.nosqlcrud.CreateService noSqlcreateService,  
                  services.sqlcrud.CreateService sqlCreateService) {  
     this.noSqlcreateService = noSqlcreateService;  
     this.sqlCreateService = sqlCreateService;  
   }  
   @POST  
   @Consumes({"application/json"})  
   public void insert(Address address) {  
     noSqlcreateService.create(address);  
     sqlCreateService.create(address);  
   }  
   /*  
     A Sample Json to POST:  
     URL: http://localhost:9998/insertperson  
     Content Type: application/json  
     {  
      "firstline": "street bla bla",  
      "secondline": "town of bla bla",  
      "postcode": "ble ble ble",  
      "persons": [  
         {"firstname":"Armin","secondname":"Josef"},  
         {"firstname":"Johan","secondname":"Uhgler"}  
       ]  
     }  
   */  
 }  

These snippets of code are just part of a demo app I wrote some days ago to show how to expose the data layer via REST.
The rest of the project can be found at: https://github.com/SFRJ/Rest-Approach-to-Data-Persistence-R.A.D.A-

Wednesday, 7 January 2015

Retrospectives – “Lets talk about it”(Part 2)

In the previous post, I briefly explained what retrospectives are and why they are important, and I also explained what often happens before them and how the facilitator prepares.

The following posts will be more focused on retrospective formats/styles that could help the self-organized team in different scenarios.


The first format/style I would like to explain is what I call "The Diplomatic Open Retro".
This retrospective style is best suited for a team that is not very familiar with the concept of retrospectives and mostly needs to improve its internal self-organizational process (e.g. internal communication, workload management, development practices, internal optimizations, etc...).

How it works
At the beginning, every attendee receives some post-it notes and is asked to write down all the topics they would like to discuss. Ten minutes should be enough, but depending on many factors, sometimes gathering topics is more difficult. To help people get inspired, the facilitator can play some relaxing music, write some of the hot topics from the previous analysis on a board, or even encourage people to talk to each other (as long as it helps discover topics).

This period is a critical part of the retrospective and it should take as long as needed; nobody should feel rushed, and only when everyone is happy with the topics collected will the retrospective carry on. It is also important to mention that team members can write the post-its in whatever way they want; there is no predefined format, and even a simple sentence could do. If a team member doesn't know what to write, that is perfectly fine (he/she doesn't have to).



The next step is to go one round around the table, in which each of the members will briefly, with a couple of sentences, explain each of the cards they wrote. There will be no rebuttal; this is a purely diplomatic exercise in which the members try to convince the others to vote for their topics to be discussed. The person talking will stand up and, as he/she briefly explains the topics, will start sticking them onto the voting board. During this period it often happens that people mention the same topic, so this is also a great exercise for grouping repeated topics together, so that the voting can be more accurate afterwards.


Once the topics are on the board, it is time for voting. Each of the members will be asked to place 3 marks on the topics they consider most important to discuss.
It is important to understand that the time for the retrospective is limited and not all the topics will be discussed, so the team needs a mechanism for selecting the topics considered most important. Unvoted topics will be discarded (they will appear in future retrospectives if they are important).


The voted topics will be discussed in order (most voted first). The facilitator will make sure to take notes of possible actions and key points as the conversation goes. Each topic will be time-boxed to 10 to 15 minutes; after that time, the facilitator will ask everybody to start proposing and deciding on actions, and on owners for those actions. Actions need to be decided before moving on to the next topic. It is very common in retrospectives that there is a lot of debate but few actions; this retrospective style attempts to gather actions as each topic is closed. It is OK for the team to decide that no action is needed, but this is rare, and if it occurs, everyone has to agree that no action is to be taken. See an example of what gathered actions look like:

ACTIONS
Low Team Capacity (4 votes)
  • Team unsure if they should talk to HR, Management or another Dev team. (Owner: No action to be taken until we find out)

Collaboration between teams (3 votes)
  • Devs to assist testers before moving on to the next dev task (Owner: All devs)
  • Set up the machine of the new joiner (Owner: Team Leader)
  • Review the handover checklist before going on holidays (Owner: All devs)

Failing builds (3 votes)
  • Determine why the build has been red for more than a month (Owner: Senior dev)

Cakes all over the office (2 votes)
  • Stop eating unhealthy cakes and organize a team dinner to celebrate Xmas (Owner: Team Leader)

Tech debt catch-up (2 votes)
  • Not enough time to discuss in this retro; add as a hot topic for the next retro (Owner: Facilitator)



Sometimes the team is unable to decide on an action because the dependency/blocker is outside of the team. In this case, they will need to identify the individuals who need to be influenced. But that is a topic I will cover in another post.







Friday, 26 December 2014

Retrospectives – “Discovering our selves”(Part 1)

A retrospective is a well-known practice in many Agile development teams. Its goal is to help the team reflect on the previous working weeks (commonly 2 or 3) with the aim of finding ways to improve how they work. Retrospectives are also very important for these agile self-organized teams because, since they don’t receive direct commands from managers (see my previous post), it is extremely important to have mechanisms that increase awareness and prevent burnout.

What makes a retrospective a little bit different from other meetings is that it often follows an organized protocol for interaction. The retrospective's protocol is defined and applied by one or more people external to the team, known as the facilitators. The role of the facilitator is to, in an impartial way, help the team express their concerns and discover actions that can help them address those concerns.

Each facilitator has his or her own techniques for facilitating retrospectives. Different techniques are useful in different circumstances. That is why one of the first things the facilitator will do in order to prepare a good retrospective is to have a brief chat with some representatives from the team, to get an idea of the highlights of what has been going on lately: current work, the most notorious blockers, absences, who will attend the retrospective, important events…

This first mini reconnaissance mission is not a silver bullet, but it often helps the facilitator get a grasp of what type of retrospective format could be used. Sometimes retrospectives will have a high level of technicality; other times there will be lots of complaints about blockers; other times there will be communication issues, process issues, etc…

Without going into a specific retrospective format yet (not in Part 1), I would like to just list some healthy tips that are useful to hang somewhere in the room for all to see, and/or even to read out loud (the facilitator can ask for one or more volunteers to read them) just before the retrospective commences:

·         Don’t blame. We all have been working to the best of our abilities.
·         Don’t monopolize the conversation, be conscious when you should let others participate.
·         Don’t interrupt people when they are speaking.
·         Don’t be afraid of expressing what you think no matter how bad it is.
·         Don’t feel intimidated by anyone because of their position.
·         Do critique and welcome criticism (blame is not the same as criticism).
·         Do remember that change is always possible.
·         Do remember that your company will be what you want it to be.

Dialogue is a skill which is not easy to master. The goal of these tips (note that I didn’t say rules) is just to encourage a healthier debate. Many times it will be the case that people feel shy, impatient, inferior, superior, lazy, pessimistic, etc.

To help break some of those psychological barriers, another duty of the facilitator will be to make sure that the environment where the retrospective is held is comfortable enough. The environment can significantly impact the results of a retrospective, but of course it is up to the creativity of each facilitator how to achieve this. In any case, here are some more tips:

·         A bit of quiet ambient music at the beginning, or even during the whole retrospective, can help stimulate people and also reduce the uncomfortable sensation some people claim to have when the group is in silence.
·         Soft drinks and water can help avoid dry mouths when speaking.
·         Coffee and tea can help give people a boost if the retrospective has to be held in the last hours of the day.
·         Alcohol is often discouraged, especially if the retrospective is expected to last long. Some facilitators don’t have anything against it when it is taken in moderation.
·         Sweet and salty snacks are often found in retrospectives, especially chocolate (apparently there is scientific research suggesting that it can increase people’s happiness).
·         Fruit is a healthy option that many people often appreciate in retrospectives.
·         Appropriate jokes and even chit-chat are common at the beginning of retrospectives; it is perfectly fine if the facilitator briefly engages in them, or even initiates them while the retrospective has not yet started or is about to start, as a way of breaking the ice.

The facilitator should have, at the beginning of the retrospective, a list of the team members who are expected to attend and their roles. The reason for this is that on many occasions other people external to the team have also been invited to the retrospective, and to make sure that everybody knows who is in the room, it may be nice to have them briefly introduce themselves to the team if they haven’t done so yet.

Once the retrospective has started, and regardless of the format that the facilitator decides to use, there will often be a round of what is known as a “Temperature Read”. It is not mandatory, but it is very common in almost every retrospective. The goal of the temperature read can vary, and it also has a specific format depending on what it is that we want to get from the team. It may go from a simple icebreaker to a puzzle game where everybody is engaged.
Since this is a topic in itself, in this series of blog posts I will not go deep into it, but next I will briefly describe one of those exercises.

For example, it might be of interest to the facilitator to discover how often the team needs to have a retrospective. The facilitator will ask everybody to write a number from 1 to 5 on a post-it note, where the smaller the number, the less they feel the need to have a retrospective right now, and the greater the number, the more eager they are to have one right now. After the retrospective the facilitator will count the votes, and depending on the predominant result, an action suggesting a change in the frequency of the team's retrospectives can be proposed:

·         1 or 2 can appear if the team has retrospectives too often. Sometimes it becomes a routine for the team and the quality of the retrospective's results is not that good.
·         3 or 4 often indicates that the frequency of the team's retrospectives is probably appropriate: often nice, productive retrospectives with good usage of the time, etc.
·         5 may be a sign that the team needs retrospectives more often. It is common that in retrospectives where the predominant temperature was 5, many topics remain undiscussed due to lack of time.

Of course, these previous bullet points were just an example; those patterns do not necessarily apply and can even be interpreted differently by different people. If it is the desire of the team to research that topic, they can do so and try to discover when it is best for them to have a retrospective.


With this I conclude Part 1 of this blog post series on retrospective facilitation.
Stay tuned: in the coming posts I will discuss in depth some of the most powerful retrospective formats (each of them for a different purpose), some of them used in many companies, from small start-ups to huge mega-corporations. Remember that the retrospective is a very helpful practice for the self-organized team.

Monday, 22 December 2014

Meditating about the self-directed I.T. company

It is probably in these times we are living that the "Agile method" of developing software has become, in my "modest" opinion, one of the most important topics that all I.T. professionals, without exception, need to understand if we want to build a successful, prosperous, rational, healthy, ethical, diverse... software development industry.

The eleventh principle of the "Agile manifesto" says:

"The best architectures, requirements, and designs emerge from self-organizing teams"

Self-organization is such a broad topic that covering it in a blog post, or even in a book, would probably not be enough.

What I want to do in this brief post is just share some thoughts that will hopefully transmit to the readers some curiosity about the huge potential I believe self-organized teams have, not just for building great software, but also for building great self-directed companies.


Given a stimulus of some sort (e.g. a challenge, threat, desire, problem, need...), either from within or from the outside, a self-organized team will increase its awareness and will react to it:

  • Feelings for gathering information related to the stimulus will arise.
  • The need for requirements will start to exist.
  • Interesting doubts and questions, both technical and non-technical, will bloom.
  • Debate will take place.
  • Priorities will be decided by consensus.
  • Interaction with other teams will occur if necessary (more stimuli will be created).
  • Actions will be suggested by the team/s.
  • Team decisions will be made.
  • Slowly but unstoppably, a self-directed organization will start moving in as many directions as its collective mind considers, and software will start emerging.
  • Feedback will arrive; the self-directed organization will use it and keep moving.


A company that is composed of self-organized teams is capable of moving in multiple directions at the same time, without the need for central management or central financing bodies. We say that the company is self-directed.

Self-organized teams are also self-created: individuals can choose to join or leave a team whenever they want, and even hiring is their responsibility. In fact, the teams keep changing shape continuously. Exactly the same principle applies to every single aspect, even vacations. Imagine having as much vacation as you would like... We work to live, we don't live to work!

In this type of company every individual has a salary, and also an additional reward upon completion of team goals, which is determined by the team's gentlemen's agreements. This reward is not necessarily cash; it can also be equity ownership. The company will end up being owned by its employees.

If a team fails its goals for whatever reason, the overall impact on the company would be minimal, and for the individual it would not be harmful at all; even in the worst-case scenario, the team members can either decide to build something else or join other teams.

This is for me a self-directed company and, in my opinion, the company of the future.
Just to finish, one beautiful quote that I think describes very well the spirit of teamwork, and is also useful for lowering big individualistic egos ;)

"None of us would be something without the rest and the rest would not be something without each of us"


Saturday, 20 December 2014

Avoiding integration when acceptance testing

It is a good practice to exercise the whole system (end to end) when we do an acceptance test (GOOS book, page 8). Unfortunately, sometimes we don't control 100% of the pieces that compose a system (they belong to another department, another company...), so many times we have no choice but to assume how those parts behave...

Acceptance testing is an important part of the software development process.
These types of tests are focused on testing the scenarios that are valuable for the business. Often acceptance tests are written with a live specification framework such as JBehave, FitNesse, Cucumber...

When testing business value, the developer needs to make sure that they have understood the acceptance criteria that the business is interested in having tested.

In some companies, the acceptance criteria/specifications are prepared in a planning session prior to the development cycle; in others, it is up to developers, testers and business analysts to decide on the spot what needs to be acceptance tested and why.

The important thing when acceptance testing is to express "the whats" and not "the hows". In other words, focus on the overall functionality of the part of the system that is under test and not on the deep detail.

about their use and scope
Sometimes development teams forget that acceptance tests are there not just to be evaluated automatically at build time; at the end of the day it will be business analysts, quality assurance teams or other development teams who read them to understand what the software does. That's why they need to be concise.

In my opinion, acceptance testing should not involve integration with parts out of our control unless it's really a must. Instead it should make use of plenty of mocks, stubs, primers, emulators, etc., in order to be able to focus on the main functionality, described in the specification, that needs to be tested.

sometimes it is not easy
Acceptance testing requires dexterity to develop, maintain and enhance our own domain-specific test harness.

Also, in my opinion, and as per my personal professional experience (part of it in the gambling industry), on many occasions the non-deterministic nature (e.g. probabilities and statistics) of how software behaves can make acceptance testing very complex. That is why it is key to pick the scenarios to test, and also the edge cases, wisely.

example
Next I will show a trivial example where I will isolate and acceptance test just the part of an application which is believed to hold some business value. To do so I will stub all its external dependencies. We should not integration test dependencies; we should stub them and assume they work.

Let's first look at the project structure and understand what is that we are testing:

In this example it is "SomeServiceAdapter" that holds business value, and we decided to write an acceptance test for it. As we will soon see, the other two adapters represent access to remote systems which are out of our control.

The "UpstreamSystemAdapter" could, for example, be a controller for a GUI, or maybe a REST endpoint that is used to gather data for processing.
The "TargetSystemAdapter" could, for example, be the entrance to a persistence layer, or a REST client that forwards the result of the processing to another system... Whatever those dependencies are, we don't care.

Initially, when we write an acceptance test, nothing exists, and we need to draft our requirements by creating new classes that will represent what we want to test, and also what we want to stub.

Many developers, and also frameworks, like expressing the acceptance test in a common format known as "the Given, When, Then format". It is just a more visually friendly way of understanding a well-known testing pattern called "Arrange, Act, Assert". In other words, what this pattern tries to do is help the developer writing the test think about the inputs/premises (Given/Arrange) that are passed to some action in the code under test (When/Act) and the expected results (Then/Assert).
But we don't necessarily need to follow that pattern; the important thing is that we make a concise and readable test. By the way, note that I am doing this in plain Java, without any framework; my goal is just to show a demo of how an acceptance test could be written, but in real life you would probably want to write that code using your favourite live-spec tool, so you can get a beautiful output on some html page (e.g. frameworks: Yatspec, Cucumber, Spock, JBehave, Fit, FitNesse...). Also, if you use a continuous integration server such as Jenkins or TeamCity, you should be able to nicely visualise your tests.

In the following simple example, we can see an acceptance test that tests "SomeServiceAdapter" and at the same time, stubs the dependencies.

import static java.util.Arrays.asList;

import org.junit.Test;

public class SomeServiceAcceptanceTest {

    private UpstreamSystemStub upstreamSystemStub = new UpstreamSystemStub();
    private TargetSystemStub targetSystem = new TargetSystemStub();
    private SomeServiceAdapter someService = new SomeServiceAdapter(upstreamSystemStub, targetSystem);

    @Test
    public void shouldCalculateTheResultGivenTheReceivedDataAndPassItToTheTargetSystem() throws Exception{
        upstreamSystemStub.sends(asList(1, 2, 3));
        someService.calculate();
        targetSystem.hasReceived(6);
    }
}

Note how the class under test has its dependencies passed to its constructor. Also note that the types defined as parameters in the constructor are interfaces, which are implemented both by the real classes that represent the dependencies and by their respective stubs (this way we make sure that the stub fulfils the contract of what the dependency does in reality).
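For completeness, the two interfaces could look something like this. This is a minimal sketch inferred from the methods the stubs implement; the real project may declare them slightly differently:

```java
import java.util.List;

// Provides the input data for the calculation.
// Implemented by the real upstream adapter and by UpstreamSystemStub.
interface UpstreamSystem {
    List<Integer> data();
}

// Receives the result of the calculation.
// Implemented by the real target adapter and by TargetSystemStub.
interface TargetSystem {
    void receivesData(Integer result);
}
```

Because each interface has a single method, the stubs stay tiny and the production adapters can be swapped in without the test knowing.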

One of the stubs:
 import static org.hamcrest.CoreMatchers.is;  
 import static org.junit.Assert.assertThat;  
  
 public class TargetSystemStub implements TargetSystem {  
   private Integer result;  
   @Override  
   public void receivesData(Integer result) {  
     this.result = result;  
   }  
   public void hasReceived(int expected) {  
     assertThat(result, is(expected));  
   }  
 }  

The other stub:
 import java.util.ArrayList;  
 import java.util.List;  
  
 public class UpstreamSystemStub implements UpstreamSystem {  
   private List<Integer> data = new ArrayList<Integer>();  
   @Override  
   public List<Integer> data() {  
     return data;  
   }  
   public void sends(List<Integer> values) {  
     data.addAll(values);  
   }  
 }  

Once the test is red, we can start implementing the production code. It is important to mention that this is just a very trivial example, where the production code is so simple that it does not require entering a TDD cycle; but in many cases, getting to see the acceptance test green would also require TDDing each of the bits and pieces that enable the function called from the "when" to be properly tested. Just as a side note, when that is the case we also refer to that approach as ATDD (Acceptance Test Driven Development); it involves multiple TDD cycles prior to the completion of a business-valuable acceptance test.
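As an illustration of one such micro TDD cycle, a unit test for the summing logic might look like the following. This is a hypothetical sketch: "SummingCalculator" is not part of the original example; it just represents the kind of small piece that would get its own red-green-refactor loop underneath the acceptance test:

```java
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

import org.junit.Test;

public class SummingCalculatorTest {

    // Hypothetical small collaborator, driven out by its own TDD cycle.
    static class SummingCalculator {
        Integer sum(List<Integer> values) {
            return values.stream().reduce(0, Integer::sum);
        }
    }

    @Test
    public void sumsAllTheReceivedValues() {
        assertEquals(Integer.valueOf(6), new SummingCalculator().sum(Arrays.asList(1, 2, 3)));
    }

    @Test
    public void sumOfNoValuesIsZero() {
        assertEquals(Integer.valueOf(0), new SummingCalculator().sum(Collections.emptyList()));
    }
}
```

Each of these small tests goes red first, and only when they are all green does the acceptance test have a chance of passing.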

Here is the production implementation of the class:
 public class SomeServiceAdapter implements SomeService {  
   private final UpstreamSystem upstreamSystem;  
   private final TargetSystem targetSystem;  
   public SomeServiceAdapter(UpstreamSystem upstreamSystem, TargetSystem targetSystem) {  
     this.upstreamSystem = upstreamSystem;  
     this.targetSystem = targetSystem;  
   }  
   public Integer calculate() {  
     Integer result = upstreamSystem.data().stream().reduce(0, (n1, n2) -> n1 + n2);  
     targetSystem.receivesData(result);  
     return result;  
   }  
 }  

I guess each developer has their own technique when writing acceptance tests. I just want to mention that I recently saw somebody who starts writing his acceptance tests from the "then", and I thought that was a very interesting approach, because he said that doing it that way he can focus more on what exactly is expected from the system that is about to be developed. But as I said, it is up to each of you to decide how you like writing your acceptance tests; just remember that it is about the "what" and not about the "how". Also pick your battles and build test harnesses (avoid integration testing as much as you can).

Here is the link to the complete source code: git acceptance testing example

YOLO! :)
