
Friday, December 26, 2014

Retrospectives – “Discovering our selves” (Part 1)

A retrospective is a well-known practice in many Agile development teams. Its goal is to help the team reflect on the previous working weeks (commonly 2 or 3) with the aim of identifying ways to improve how they work. Retrospectives are also very important for agile self-organized teams because, since they don’t receive direct commands from managers (see my previous post), it is extremely important for them to have mechanisms that increase awareness and prevent burnout.

What makes a retrospective a little different from other meetings is that it often follows an organized protocol for interaction. The retrospective's protocol is defined and applied by one or more people external to the team, known as the facilitators. The role of the facilitator is to help the team, in an impartial way, express their concerns and discover actions that can address them.

Each facilitator has his or her own techniques for facilitating retrospectives, and different techniques are useful in different circumstances. That is why one of the first things the facilitator will do to prepare a good retrospective is to have a brief chat with some representatives of the team, to get an idea of the highlights of what has been going on lately: current work, the most notorious blockers, absences, who will attend the retrospective, important events…

This first mini reconnaissance mission is not a silver bullet, but it often helps the facilitator get a grasp of what type of retrospective format could be used. Sometimes retrospectives will be highly technical, other times there will be lots of complaints about blockers, other times there will be communication or process issues, etc…

Without going into any specific retrospective format yet (not in Part 1), I would like to list some healthy tips that are useful to hang somewhere in the room for all to see, and/or even to say out loud (the facilitator can ask for volunteers to read them) just before the retrospective commences:

·         Don’t blame. We have all been working to the best of our abilities.
·         Don’t monopolize the conversation; be conscious of when you should let others participate.
·         Don’t interrupt people when they are speaking.
·         Don’t be afraid of expressing what you think, no matter how bad it is.
·         Don’t feel intimidated by anyone because of their position.
·         Do criticize and welcome criticism (blame is not the same as criticism).
·         Do remember that change is always possible.
·         Do remember that your company will be what you want it to be.

Dialogue is a skill that is not easy to master. The goal of these tips (note that I didn’t say rules) is just to encourage a healthier debate. It will often be the case that people feel shy, impatient, inferior, superior, lazy, pessimistic, etc…

To help break some of those psychological barriers, another duty of the facilitator is to make sure that the environment where the retrospective is held is comfortable enough. The environment can significantly impact the results of a retrospective, but of course it is up to the creativity of each facilitator how to achieve that. In any case, here are some more tips:

·         A bit of quiet ambient music at the beginning, or even during the whole retrospective, can help stimulate people and also reduce the uncomfortable sensation some people claim to have when the group is in silence.
·         Soft drinks and water can help avoid dry mouths when speaking.
·         Coffee and tea can give people a boost if the retrospective has to be held during the last hours of the day.
·         Alcohol is often discouraged, especially if the retrospective is expected to last long. Some facilitators have nothing against it when it is in moderation.
·         Sweet and salty snacks are often found in retrospectives, especially chocolate (apparently there is scientific research suggesting that it can increase people’s happiness).
·         Fruit is a healthy option that many people appreciate in retrospectives.
·         Appropriate jokes and even chit-chat are common at the beginning of retrospectives; it is perfectly fine if the facilitator engages in them briefly, or even initiates them while the retrospective has not yet started, as a way of breaking the ice.

At the beginning of the retrospective, the facilitator should have a list of the team members expected to attend, along with their roles. The reason is that on many occasions people external to the team are also invited to the retrospective, and to make sure that everybody knows who is in the room, it is nice to have them briefly introduce themselves to the team if they haven’t done so yet.

Once the retrospective has started, and regardless of the format the facilitator decides to use, there will often be a round of what is known as a “Temperature Read”. It is not mandatory, but it is very common in almost every retrospective. The goal of the temperature read can vary, and it also has a specific format depending on what we want to get from the team. It may range from a simple icebreaker to a puzzle game where everybody is engaged.
Since this is a topic in itself, I will not go deep into it in this series of blog posts, but next I will briefly describe one of those exercises.

For example, it might be of interest to the facilitator to discover how often the team needs a retrospective. The facilitator will ask everybody to write a number from 1 to 5 on a post-it note, where a smaller number means they consider there is no need to have a retrospective right now, and a greater number means they are really eager to have one. After the retrospective the facilitator will count the votes and, depending on the predominant result, an action to change the frequency of the team’s retrospectives can be suggested:

·         1 or 2 can appear if the team has retrospectives too often. Sometimes they become routine for the team, and the quality of the retrospective results is not that good.
·         3 or 4 often indicates that the frequency of the team’s retrospectives is probably appropriate: usually nice, productive retrospectives with good use of the time, etc…
·         5 may be a sign that the team needs retrospectives more often. It is common that in retrospectives where the predominant temperature was 5, many topics remain undiscussed due to lack of time.

Of course, the previous bullet points were just an example; those patterns do not necessarily apply and can even be interpreted differently by different people. If the team desires to research that topic, they can do so and try to discover when it is best for them to have a retrospective.


With this I conclude Part 1 of this blog post series on retrospective facilitation.
Stay tuned: in the coming posts I will discuss in depth some of the most powerful retrospective formats (each of them for a different purpose), some of them used in many companies, from small start-ups to huge mega-corporations. Remember that the retrospective is a very helpful tool for the self-organized team.

Sunday, December 21, 2014

Meditating about the self-directed IT company

It is probably the times we are living in that have made the "Agile" way of developing software become, in my "modest" opinion, one of the most important topics that all IT professionals, without exception, need to understand if we want to build a successful, prosperous, rational, healthy, ethical, diverse... software development industry.

The eleventh principle of the "Agile manifesto" says:

"The best architectures, requirements, and designs emerge from self-organizing teams"

Self-organization is such a broad topic that covering it in a blog post, or even in a book, would probably not be enough.

What I want to do in this brief post is just share some thoughts that will hopefully transmit to readers some curiosity about the huge potential I believe self-organized teams have, not just for building great software, but also for building great self-directed companies.


Given a stimulus of some sort (e.g. a challenge, threat, desire, problem, need...), either from within or from outside, a self-organized team will increase its awareness and react to it:

  • The urge to gather information related to the stimulus will arise.
  • The need for requirements will start to exist.
  • Interesting doubts and questions, both technical and non-technical, will bloom.
  • Debate will take place.
  • Priorities will be decided by consensus.
  • Interaction with other teams will occur if necessary (more stimuli will be created).
  • Actions will be suggested by the team/s.
  • Team decisions will be made.
  • Slowly but unstoppably, a self-directed organization will start moving in as many directions as its collective mind considers, and software will start emerging.
  • Feedback will arrive; the self-directed organization will use it and keep moving.


A company composed of self-organized teams is capable of moving in multiple directions at the same time, without the need for central management or a central financing body. We say that the company is self-directed.

Self-organized teams are also self-created: individuals can choose to join or leave a team whenever they want, and even hiring is the team's responsibility. In fact, teams keep changing shape continuously. Exactly the same principle applies to every single aspect, even vacations. Imagine having as much vacation as you would like... We work to live, we don't live to work!

In this type of company, every individual has a salary and also an additional reward upon completion of team goals, determined by the team's gentlemen's agreements. This reward is not necessarily cash; it can also be equity ownership. The company will end up being owned by its employees.

If a team fails its goals, for whatever reason, the overall impact on the company will be minimal, and for the individual it will not be harmful at all; even in the worst-case scenario, the team members can either decide to build something else or join other teams.

This is, for me, a self-directed company and, in my opinion, the company of the future.
Just to finish, a beautiful quote that I think describes the spirit of teamwork very well, and that is also useful for deflating big individualistic egos ;)

"None of us would be something without the rest and the rest would not be something without each of us"


Saturday, December 20, 2014

avoiding integration when acceptance testing

It is a good practice to exercise the whole system (end to end) when we do an acceptance test (GOOS book, page 8). Unfortunately, sometimes we don't control 100% of the pieces that compose a system (they belong to another department, another company...), so many times we have no choice but to assume how those parts behave...

Acceptance testing is an important part of the software development process.
These types of tests focus on the scenarios that are valuable for the business. Often acceptance tests are written with a live specification framework such as JBehave, FitNesse, Cucumber...

When testing business value, the developer needs to make sure he has understood the acceptance criteria that the business is interested in having tested.

In some companies, the acceptance criteria/specifications are prepared in a planning session prior to the development cycle; in others, it is up to developers, testers, and business analysts to decide on the spot what needs to be acceptance tested and why.

The important thing when acceptance testing is to express "the whats" and not "the hows". In other words, focus on the overall functionality of the part of the system that is under test, and not on the deep detail.

about their use and scope
Sometimes development teams forget that acceptance tests are not there just to be evaluated automatically at build time; at the end of the day it will be business analysts, quality assurance teams, or other development teams who read them to understand what the software does. That's why they need to be concise.

In my opinion, acceptance testing should not involve integration with parts outside of our control, unless it's really a must. Instead, it should make use of plenty of mocks, stubs, primers, emulators, etc... in order to focus on the main functionality described in the specification that needs to be tested.

sometimes it is not easy
Acceptance testing requires dexterity to develop, maintain, and enhance our own domain-specific test harness.

Also, just in my opinion, and as per my personal professional experience (part of it in the gambling industry), on many occasions the non-deterministic nature of how software behaves (e.g. probabilities & statistics) can make acceptance testing very complex. That is why it is key to pick the scenarios to test, and also the edge cases, wisely.

example
Next I will show a trivial example where I isolate and acceptance test just the part of an application that is believed to hold some business value. To do so, I will stub all its external dependencies. We should not integration test dependencies; we should stub them and assume they work.

Let's first look at the project structure and understand what it is we are testing:

In this example it is "SomeServiceAdapter" that holds business value, and we have decided to write an acceptance test for it. As we will soon see, the other two adapters represent access to remote systems that are outside of our control.

The "UpstreamSystemAdapter", could for example be a controller for a GUI or maybe a Rest endpoint that is used to gather data for processing.
The "TargetSystemAdapter" could for example be the entrance to a persistence layer or rest client that forwards the result of processing to another system... Whatever those dependencies are, we don't care.

Initially, when we write an acceptance test, nothing exists, and we need to draft our requirements by creating new classes that represent what we want to test, and also what we want to stub.

Many developers, and also frameworks, like expressing the acceptance test in a common format known as "Given, When, Then". It is just a more visually friendly way of understanding a well-known testing pattern called "Arrange, Act, Assert". In other words, what this pattern tries to do is help the developer writing the test think about the inputs/premises (Given/Arrange) that are passed to some action in the code under test (When/Act), and the expected results (Then/Assert).
But we don't necessarily need to follow that pattern; the important thing is that we write a concise and readable test. By the way, note that I am doing this in plain Java, without any framework; my goal is just to show a demo of how an acceptance test could be written, but in real life you would probably want to write that code using your favourite live spec tool so you can get a beautiful output on some HTML page (e.g. frameworks: Yatspec, Cucumber, Spock, JBehave, Fit, FitNesse...). Also, if you use a CI server such as Jenkins or TeamCity, you should be able to visualise your tests nicely.

In the following simple example, we can see an acceptance test that tests "SomeServiceAdapter" and, at the same time, stubs the dependencies.

 import org.junit.Test;

 import static java.util.Arrays.asList;

 public class SomeServiceAcceptanceTest {

     private UpstreamSystemStub upstreamSystemStub = new UpstreamSystemStub();
     private TargetSystemStub targetSystem = new TargetSystemStub();
     private SomeServiceAdapter someService = new SomeServiceAdapter(upstreamSystemStub, targetSystem);

     @Test
     public void shouldCalculateTheResultGivenTheReceivedDataAndPassItToTheTargetSystem() throws Exception {
         upstreamSystemStub.sends(asList(1, 2, 3));   // given
         someService.calculate();                     // when
         targetSystem.hasReceived(6);                 // then
     }
 }

Note how the class under test has its dependencies passed to its constructor. Also note that the types of the constructor parameters are interfaces, implemented both by the real classes that represent the dependencies and by their respective stubs (this way we make sure that each stub fulfils the contract of what the dependency does in reality).
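The interfaces themselves (plus the one the adapter implements) don't appear in the snippets; here is a minimal sketch of what they could look like, inferred from how the stubs and the adapter use them:

 import java.util.List;

 public interface UpstreamSystem {
   List<Integer> data();                // the data the service will read
 }

 public interface TargetSystem {
   void receivesData(Integer result);   // where the service pushes its result
 }

 public interface SomeService {
   Integer calculate();
 }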

One of the stubs:
 import static org.hamcrest.CoreMatchers.is;
 import static org.junit.Assert.assertThat;

 public class TargetSystemStub implements TargetSystem {

   private Integer result;

   @Override
   public void receivesData(Integer result) {
     this.result = result;   // capture whatever the service sends
   }

   public void hasReceived(int expected) {
     assertThat(result, is(expected));   // the assertion behind the "then" step
   }
 }

The other stub:
 import java.util.ArrayList;
 import java.util.List;

 public class UpstreamSystemStub implements UpstreamSystem {

   private List<Integer> data = new ArrayList<Integer>();

   @Override
   public List<Integer> data() {
     return data;   // the data the service will pick up
   }

   public void sends(List<Integer> values) {
     data.addAll(values);   // primed by the "given" step
   }
 }

Once the test is red, we can start implementing the production code. It is important to mention that this is just a very trivial example where the production code is so simple that it does not require entering a TDD cycle; but in many cases, getting the acceptance test green would also require TDDing each of the bits and pieces that enable the function called from the "when" to be properly tested. As a side note, when that is the case we refer to the approach as ATDD (Acceptance Test Driven Development); it involves multiple TDD cycles prior to the completion of a business-valuable acceptance test.

Here is the production implementation of the class:
 public class SomeServiceAdapter implements SomeService {

   private final UpstreamSystem upstreamSystem;
   private final TargetSystem targetSystem;

   public SomeServiceAdapter(UpstreamSystem upstreamSystem, TargetSystem targetSystem) {
     this.upstreamSystem = upstreamSystem;
     this.targetSystem = targetSystem;
   }

   public Integer calculate() {
     // sum the upstream data and forward the result to the target system
     Integer result = upstreamSystem.data().stream().reduce(0, (n1, n2) -> n1 + n2);
     targetSystem.receivesData(result);
     return result;
   }
 }

I guess each developer has his own technique when writing acceptance tests. I just want to mention that I recently saw somebody who starts writing his acceptance tests from the "then", and I thought that was a very interesting approach: he said that doing it that way lets him focus more on what exactly is expected from the system that is about to be developed. But as I said, it is up to each of you to decide how you like writing your acceptance tests; just remember that it is about the "what" and not about the "how". Also, pick your battles and build test harnesses (avoid integration testing as much as you can).

Here is the link to the complete source code: git acceptance testing example

YOLO! :)

Thursday, December 11, 2014

Picture with a celebrity

Happy to have had the privilege of taking a picture with one of the celebrities of the world of software development, R.C. Martin. The event was held once more in central London, in one of the many fabulous skyscrapers, at "The Sun" offices. Great atmosphere, drinks, and pizza to make the huge crowd of IT professionals comfortable. In a great speech, Martin counter-attacked the recent criticism of the Test Driven Development approach made by David Heinemeier Hansson. Once more, a great talk by one of the most important figures of modern software development.

Myself with R.C. Martin (a.k.a. Uncle Bob)

Monday, December 8, 2014

Tip for quicker coding in IntelliJ

Did you know that with IntelliJ you can use Live Templates to quickly insert predefined snippets of code? This can speed up your coding...

All you have to do is create templates for anything you want and assign an abbreviation to each one; the editor will do the rest for you just by typing the abbreviation.
Here is a screenshot showing where in the IDE you can find this feature:


This is a cool one I like very much. You just type 'test', then hit Tab, and you get a fresh test method :)

 @Test  
 public void $END$() throws Exception {  
 }   
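By the way, $END$ is a built-in live template variable that marks where the caret lands after the expansion. You can define your own variables with the same $NAME$ syntax. As another example (the template body and variable binding here are my own, so treat it as a sketch), a logger field where $CLASS$ is bound to the built-in className() expression in the template's variable settings:

 private static final Logger LOG = LoggerFactory.getLogger($CLASS$.class);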

For more detailed info about live templates, you can read this nice article at JavaCodeGeeks.com

Wednesday, October 29, 2014

Indaface!



Recently my team leader gave me an S.L.A.P, hehehe...
Don't worry, this is not a case of bullying; it is just a way of referring to an important object-oriented principle known as the Single Level of Abstraction Principle.
As usual at the office, while pairing on a story, when we got to the refactoring bit I was told to improve a method that had some ugly conditional logic in it, and also some duplication.
Instead of removing the duplication, I just delegated all the problematic part to another method, so the original method would look smaller and cleaner, but...
He said to me: Do you think the code you just delegated is now at a different level of abstraction?... Indaface! Violation of S.L.A.P.
Below is an example that somewhat recreates today's funny situation :P Sorry, I can't show you the real code (don't wanna get in trouble :) ).

 /*
     We have a vault that is being populated and categorized, but there is a little
     duplication issue when populating the vault with CAT-3 items. Items withGrook()
     are definitely CAT-3, but those withTrook() are only categorized as CAT-3 if
     klop.isHigh().
 */
 public Vault stuff(Klop klop, Vault vault) {
   vault.include("CAT-1", azra());
   vault.include("CAT-2", khy());
   vault.include("CAT-3", things.stream().filter(withGrook())
       .map(toSponge()).map(toPlik(klop)).collect(toList()));
   if (klop.isHigh())
     vault.include("CAT-3", things.stream().filter(withTrook())
         .map(toSponge()).map(toPlik(klop)).collect(toList()));
   return vault;
 }
 /*
     At the beginning we may feel tempted to extract that vault.include into its own
     method, specific to CAT-3 items. Even though the stuff method now looks shorter,
     the problem is that we did not remove the duplication, plus we are disrespecting
     the S.L.A.P principle.
 */
 public Vault stuff(Klop klop, Vault vault) {
   vault.include("CAT-1", azra());
   vault.include("CAT-2", khy());
   includeCat3Items(vault, klop);
   return vault;
 }

 private void includeCat3Items(Vault vault, Klop klop) {
   vault.include("CAT-3", things.stream().filter(withGrook())
       .map(toSponge()).map(toPlik(klop)).collect(toList()));
   if (klop.isHigh())
     vault.include("CAT-3", things.stream().filter(withTrook())
         .map(toSponge()).map(toPlik(klop)).collect(toList()));
 }
  
 /*
     If we want to respect S.L.A.P, what we need to extract in this case is just the
     changing part, delegating the klop.isHigh() check to the extracted method.
 */
 public Vault stuff(Klop klop, Vault vault) {
   vault.include("CAT-1", azra());
   vault.include("CAT-2", khy());
   vault.include("CAT-3", things.stream().filter(ook(klop))
       .map(toSponge()).map(toPlik(klop)).collect(toList()));
   return vault;
 }

 // Grook items are always CAT-3; Trook items join them only when klop is high.
 private Predicate<String> ook(Klop klop) {
   return klop.isHigh() ? withGrook().or(withTrook()) : withGrook();
 }

 /*
     The final refactor could even go one step further, extracting the implementation
     detail into a plu(Klop klop) method.
 */
 public Vault stuff(Klop klop, Vault vault) {
   vault.include("CAT-1", azra());
   vault.include("CAT-2", khy());
   vault.include("CAT-3", plu(klop));
   return vault;
 }

 private List<Object> plu(Klop klop) {
   return things.stream().filter(ook(klop)).map(toSponge()).map(toPlik(klop)).collect(toList());
 }

 private Predicate<String> ook(Klop klop) {
   return klop.isHigh() ? withGrook().or(withTrook()) : withGrook();
 }


This post is dedicated to L.K, thanks for the patience :)

Tuesday, September 23, 2014

The Add-Delete ratio as a metric of code quality

Recently, after reading some articles about software quality metrics, I started thinking a lot about what would be, for me, a metric that could really reflect the quality of the code in the system I am working on.

I am not a big fan of numbers, but in the same way an altimeter is a useful thing to look at when you are flying a plane, a good code quality metric can give you warning signs about the system you are building.

On page 15 of the book "Clean Code" by R.C. Martin, it says that the only valid measurement of code quality is "WTFs per minute".



I couldn't agree more; the only problem with this metric is that it is not easy to reflect in a chart so that all others can see it and therefore get an idea of how bad the system really is...

Some time ago, while talking to a colleague here in London, we came to the conclusion that a really good metric to reflect the quality of the code we write is the "Added-Removed code ratio".

So this is basically the ratio of added to removed lines of code in your system.
To measure it, what you do is:

1 - At a point in time, take your system as a whole and count the number of lines of code of the target language, let's say Java (just Java; don't count XML and other configuration files...). A sketch of how this count could be automated follows this list.

2 - At a future point in time, take the same system again and count the lines once more.

3 - Write down the ratios, take note of the percentage by which the number of lines is increasing or decreasing, and present the data in some chart.
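As a minimal sketch of steps 1 and 2 (assuming Java 8; the class name and the default source root are mine), something like this can produce the count at a point in time:

 import java.io.IOException;
 import java.nio.file.*;
 import java.util.stream.Stream;

 public class LocCounter {

   public static void main(String[] args) throws IOException {
     Path root = Paths.get(args.length > 0 ? args[0] : "src/main/java");
     System.out.println("Java lines of code: " + countJavaLines(root));
   }

   // Counts lines in .java files only; XML and other configuration files are ignored.
   static long countJavaLines(Path root) throws IOException {
     try (Stream<Path> files = Files.walk(root)) {
       return files.filter(p -> p.toString().endsWith(".java"))
                   .mapToLong(LocCounter::lineCount)
                   .sum();
     }
   }

   private static long lineCount(Path file) {
     try (Stream<String> lines = Files.lines(file)) {
       return lines.count();
     } catch (IOException e) {
       return 0; // in this sketch, unreadable files simply don't count
     }
   }
 }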

This metric, if tracked often, will, like the altimeter, warn your department when you need to invest more time in technical debt, and when your system is sustainable and you can keep pushing features.

I guess you wonder: how do I know this is right?
Well, it is very simple; we just need to understand the definition of refactoring, which is:

"a change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior."

Refactoring is very important, because the more understandable the system is, the better its quality will be. So it is not just about how small your code is, but about tracking the amount of effort you spend adding and removing (especially removing) stuff from it.
- In a system where the added-to-removed ratio trends downward over time (a smaller gap between added and removed lines), it means that the developers are taking care of the code and the overall quality is improving.

- In a system where the added-to-removed ratio trends upward over time (a big gap between added and removed lines), it means that the teams are not investing enough time in improving the quality of the software they make.

I hope you liked the post, I appreciate feedback, this is just my opinion.



Monday, July 14, 2014

Helping methods identify their calling threads with the ThreadLocal class

In multi-threaded applications, we sometimes want to identify the thread that called a certain method.
We could pass an additional parameter to the method for that purpose, but that wouldn't be very elegant.
The class ThreadLocal allows us to manage an object within the scope of the current thread.

Let's have a look at an example...

The class below, called SampleThread, uses some service, and that service is interested in knowing who called it.
So what we do is create some context with some kind of unique identifier (e.g. thread name, UUID...) and then hand that context to the ThreadLocal class (we will create a special wrapper class to hold the ThreadLocal).

 public class SampleThread extends Thread {
   @Override
   public void run() {
     ThreadContext threadContext = new ThreadContext();
     threadContext.setId(getName());          // use the thread's name as its identifier
     ContextManager.set(threadContext);       // bind the context to the current thread
     new BussinessService().bussinessMethod();
     ContextManager.unset();                  // clean up; avoids leaks if threads are pooled
   }
 }

The context could be anything we want, but in this example I will be using the thread name.

 public class ThreadContext {  
   private String id = null;  
   public String getId() {  
     return id;  
   }  
   public void setId(String id) {  
     this.id = id;  
   }  
 }  

The class ContextManager acts as a container that allows us to access and modify the context of the current thread.


 public class ContextManager {

   private static final ThreadLocal<ThreadContext> threadLocal = new ThreadLocal<ThreadContext>();

   public static void set(ThreadContext context) {
     threadLocal.set(context);
   }

   public static void unset() {
     threadLocal.remove();
   }

   public static ThreadContext get() {
     return threadLocal.get();   // no cast needed thanks to the generic type parameter
   }
 }

With this infrastructure in place, the service can identify the calling thread.

 public class BussinessService {  
     public void bussinessMethod() {  
       ThreadContext threadContext = ContextManager.get();  
       System.out.println(threadContext.getId());  
     }  
 }  

Finally, just to demo the concept, we can create a couple of threads and run them to see how they are identified by the service via their context.

 public class Main {  
   public static void main(String args[]) {  
     SampleThread threadOne = new SampleThread();  
     threadOne.start();  
     SampleThread threadTwo = new SampleThread();  
     threadTwo.start();  
   }  
 }  
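If everything is wired correctly, each thread prints its own default name. The exact names and their order depend on the JVM's thread numbering and scheduling, but the output should look something like:

 Thread-0
 Thread-1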

Friday, June 20, 2014

Creating an executable jar file with maven

It happened to me recently that I wanted to package a little app with Maven and run it from the console, but I always forget the configuration I need to add to Maven to make sure the manifest pointing to the class with the main method gets included.

After wasting a precious half hour around the internet, and after I had done what I wanted to do, I thought... what better place to write this config once and remember it forever than my blog ;)

To do it, just add the following to your pom.xml and make sure you write the correct path to your main class.


 <build>  
     <plugins>  
       <plugin>  
         <artifactId>maven-assembly-plugin</artifactId>  
         <configuration>  
           <archive>  
             <manifest>  
               <mainClass>com.djordje.tips.Main</mainClass>  
             </manifest>  
           </archive>  
           <descriptorRefs>  
             <descriptorRef>jar-with-dependencies</descriptorRef>  
           </descriptorRefs>  
         </configuration>  
         <executions>  
           <execution>  
             <id>make-assembly</id>  
             <phase>package</phase>  
             <goals>  
               <goal>single</goal>  
             </goals>  
           </execution>  
         </executions>  
       </plugin>  
       <plugin>  
         <groupId>org.apache.maven.plugins</groupId>  
         <artifactId>maven-compiler-plugin</artifactId>  
         <version>2.5.1</version>  
         <inherited>true</inherited>  
         <configuration>  
           <source>1.7</source>  
           <target>1.7</target>  
         </configuration>  
       </plugin>  
     </plugins>  
   </build>  
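
With that in place, build the project from the directory that contains the pom.xml:

 mvn package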

When the Maven package phase is executed, two .jar files will be created.
The one with jar-with-dependencies in its name is the one we should run. To do so, just go to the terminal, navigate to the target folder, and type the command:

 java -jar myapp-jar-with-dependencies.jar  

Monday, February 24, 2014

paranoia testing

Somebody once told me a funny story about a woman who, day after day, kept arriving late to work because, while on her way to the office, she kept having the need to go back home again and again just to check that she had unplugged the iron from the wall socket. Her unfounded fear was such that one day she finally decided to take the iron with her to the office in her handbag.

A common unfounded fear that some developers have when working with a database is that they will fail to query it for whatever reason, so they feel the need to test that the database returns the expected values. It is common to see horrible tests like this one:

   @Test
   public void whenAddingANewUserTheUserIsSavedToTheDatabase() {
     // Tell hibernate to add this user to the database
     ormAdapter.add(userFactory("djordje", "123"));
     // Find the user and check its values
     User savedValueInDatabase = ormAdapter.find("djordje");
     assertThat(savedValueInDatabase.getName(), is("djordje"));
     assertThat(savedValueInDatabase.getPassword(), is("123"));
     // Delete the test data
     ormAdapter.delete(savedValueInDatabase.getName());
   }

The above test has many problems: not only is it slow, it is also wrongly aimed, because it is not testing the functionality of the application; it is just testing that some third-party framework (Hibernate in this case) is working properly.

Developers who do this don't understand that the database is just a detail/add-on of the application.
It is fine to perform a smoke test that pings the database (just to verify that the ORM framework was configured correctly), but testing that the HQL or the Hibernate criteria return some values is wrong.
"Hibernate, MySQL, Oracle... they are all just vendors; their solutions work, you don't need to test them again!"

Focus on testing the functionality by mocking the dependencies you have on the database.
The only thing that counts at the end of the day is that the inputs and outputs of the service layer are processed correctly. Whether the data comes from a database, a file, a socket... doesn't matter at all.
Here is an alternative test that mocks a dependency on the database and checks that the data is processed correctly:

   @Test
   public void loginTest() {
     // given: an ORM adapter mocked with Mockito, primed to return our user
     PersonManagementAdapterORM adapterORM = mock(PersonManagementAdapterORM.class);
     when(adapterORM.find("djordje", "123")).thenReturn(new Person("djordje", "123"));
     LoginService service = new LoginServiceImpl(adapterORM);
     // when
     boolean authorized = service.login("djordje", "123");
     // then: the service consulted the adapter and processed the answer correctly
     verify(adapterORM).find("djordje", "123");
     assertThat(authorized, is(true));
   }

Now the test's main concern is the functionality, not how the data is added to or retrieved from the database.
I trained the mock to behave as expected, and I verified that its behaviour was exercised as expected.
The database was not a concern at all.

Friday, January 24, 2014

Contextual Vs Composable design

Today at work, a colleague told me about a very interesting topic I had never thought about before: Composable vs Contextual software. I decided to read a bit more on the topic and post some brief conclusions.

When designing new software, probably the most important questions we want to ask ourselves are:
-Who is going to use it?
-How are they going to use it?

By talking to the future users of the solution, we can gather relevant information that will help us answer the above questions and, therefore, determine whether what we have to build has to be composable or contextual.

So what is composable and what is contextual then?
If you type "define:composable design" or "define:contextual design" into Google, you will get really good definitions (just in case what I am about to say next is not clear). But I will also try to give a more reductionist definition in my own words:

-Composable: For me, a composable system is boundaryless in what regards the interactions the user can have with its components.

-Contextual: For me, a contextual system is a system that defines constraints, flows, etc... to guide the user in a predefined way when it comes to interactions with its components.

Pros and Cons
So, as I said before, the important thing is to understand who the user is and how they are going to use the system, but it is also good to be aware of some advantages and disadvantages of both designs:

Composable
Pro: Big flexibility.
Con: With great power comes great responsibility(It can be dangerous if in wrong hands).

Pro: Requires less maintenance
Con: May require training/mentorship for less experienced users

Pro: Generic/Multipurpose
Con: Non-specific; could be used for the wrong purpose.

Pro: Fast for experienced users
Con: Slow for less experienced users

Contextual
Pro: Solves a problem
Con: Is limited to just that problem/Not reusable in other problems

Pro: It's easy to use
Con: Needs maintenance

Pro: Fast for less experienced users
Con: A constraint for experienced users

Pro: Does most of what you want quickly and easily.
Con: Dietzler’s Law for Access (does 80% of what the user wants easily, another 10% with difficulty, and is unable to do the final 10%)

To complete this post, let's list some examples of contextual vs composable:

-GUI Vs Terminal
-Framework Vs Programmatical approach
-Wizard Vs copy&paste files into directories


As a conclusion, all I can say is that the decision on which design to go for will depend on the type of user you are dealing with.

Resources:
-http://gigaom.com/2013/02/16/devops-complexity-and-anti-fragility-in-it-context-and-composition/
-http://nealford.com/memeagora/2013/01/22/why_everyone_eventually_hates_maven.html
-www.google.com

Sunday, January 19, 2014

It's been a while. Glad to see you are back.

When doing certain tasks that are tedious or difficult, or when we are just too exhausted to do them, we often wish there were an altruistic someone to give us a hand and make them easier. Sometimes, in these situations, we would be ready to accept help from anybody, even from a strange visitor...


Visitor is a behavioral design pattern that intends to add or remove functionality from a class without changing its original implementation. It can help keep complexity under control and also keeps concerns separated. It is also non-intrusive, since it is only the client's choice whether to allow the visitor to come by and provide support, just by calling a method.

Let's have a look at a bit of UML:

 
The only modification needed in the original class is to add a method that accepts a visitor.
That method will be called only by the client, and the visitor that visits the class is also provided by the client. The visitor just has a visit method that takes a concrete implementation of the class it has to visit. Using the Visitable interface as the parameter of the visit method is not wrong, but it restricts the visitor to just what it sees in the interface (on occasions that can be desirable).

Now lets have a look at an example.
Here the Visitable interface and a couple of implementations:

 public interface FormulaOnePitOperation {
   void perform();
   void accept(Mechanic visitor);
   int dangerLevel();
 }

 public class PetrolFilling implements FormulaOnePitOperation {
   private int petrolLevels;

   public void perform() {
     System.out.print("Filling the tank as we always do...");
   }

   public void accept(Mechanic visitor) {
     visitor.visit(this);   // the visit happens only when the client asks for it
   }

   public int dangerLevel() {
     return petrolLevels;
   }
 }

 public class WheelsReplacement implements FormulaOnePitOperation {
   private int presureIndicator;

   public void perform() {
     System.out.print("Changing the wheels as we always do...");
   }

   public void accept(Mechanic visitor) {
     visitor.visit(this);
   }

   public int dangerLevel() {
     return presureIndicator;
   }
 }

As we see in the code above, the visitor is passed as a parameter of the accept method, which is used to trigger the visit.

Let's now have a look at the visitor's side:

 public interface Mechanic {
   void visit(FormulaOnePitOperation formulaOnePitOperation);
 }

 public class PetrolPumpMechanic implements Mechanic {
   public void visit(FormulaOnePitOperation formulaOnePitOperation) {
     System.out.print("You are doing well guys, keep going. I am just keeping an eye on the pump here. I won't disturb you");
     if (formulaOnePitOperation.dangerLevel() > 100)
       System.err.print("STOP!!! CLOSE THE PUMP NOW!!!!");
   }
 }

 public class WheelsMechanic implements Mechanic {
   public void visit(FormulaOnePitOperation formulaOnePitOperation) {
     System.out.print("Just supervising the wheels replacement operation! Not disturbing");
     if (formulaOnePitOperation.dangerLevel() > 45)
       System.out.print("Be careful, you are pumping them too much");
   }
 }

On the visitor's side, the functionality of FormulaOnePitOperation can be modified without changing any of the original code. Another thing to notice in the above example is that, since we are using the interface as the parameter, we may be limited to using only what is defined on it. On occasions that is just enough and there is no need to access the concrete classes, but note that the UML diagram encourages using the concrete implementations (in my personal opinion there is not a big difference, since the extra methods on the concrete classes might not be exposed publicly anyway).
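For comparison, here is a minimal sketch of the concrete-parameter variant that the UML diagram suggests (hypothetical, not used in the example above). Because accept(Mechanic) calls visitor.visit(this), each call resolves to the matching overload at compile time; this is the classic double-dispatch form of the pattern:

 public interface Mechanic {
   void visit(PetrolFilling petrolFilling);          // one overload per concrete operation
   void visit(WheelsReplacement wheelsReplacement);
 }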

Now finally just the client class:

 public class Pit {  
   public static void main(String[] args) {  
     FormulaOnePitOperation petrolFilling = new PetrolFilling();  
     FormulaOnePitOperation wheelsReplacement = new WheelsReplacement();  
     petrolFilling.perform();  
     petrolFilling.accept(new PetrolPumpMechanic());  
     wheelsReplacement.perform();  
     wheelsReplacement.accept(new WheelsMechanic());  
   }  
 }  

Not much to say about the client; it just uses the classes as usual, with the only difference that it can now, as needed, pass a visitor to help add certain tasks. I mentioned at the beginning that Visitor is a non-intrusive pattern; by this I mean that nothing mandates the client to call a visitor if he doesn't want to.


A little disadvantage of this pattern is that both methods, accept() and visit(), have hardwired dependencies in their parameters that can make them hard to unit test.

