Sunday, July 12, 2015

Why don't big corporations like retrospectives?

Not too long ago, I used to facilitate retrospectives for some of the teams in the department.
But for some strange reason their interest in retrospectives just disappeared.
I thought a lot about it, but I couldn't come up with an answer. Why?

Is it that they don't understand the advantages of retrospectives?
Is it that they have no time for retrospectives?
Or is it that they feel powerless when they do a retrospective?

In this brief text I just want to share my opinion on why I think corporations have no interest in retrospectives (a bit of a speculative opinion, but just an opinion).

A retrospective is a well-known Agile practice that aims to improve any aspect of work. During a retrospective, a team discusses things that matter to them and to the company: ideas, solutions, process, collaboration, communication, quality, recruitment, salaries, etc...
These are just some of the possible topics.

Unfortunately, retrospectives are not being taken seriously in many companies:
- in some, retrospectives are discouraged and seen as a waste of time
- in others, they are timidly implemented within a constrained environment, without the visibility or power to
take any action; just a way of venting frustrations.

So why is it that they are so unpopular?

The answer is simple:
Companies fear change, but above all they fear decentralisation and loss of control.

The fact is that well-organised, regular retrospectives could easily challenge the established organisational structure by making its inefficiencies evident and visible.

Many young technology companies quickly gain ground on the prehistoric corporations, not just because of their decentralised and autonomous nature, but also because of their eagerness and need to communicate and improve. Most of the successful ones practice retrospectives.

The old school of building software companies has its days numbered. What employees have to say about process inefficiencies, technical challenges, etc. is as important, if not more so, than the false sense of urgency that the corporation perceives in the market.

Big corporations are organisations with huge amounts of resources and great potential for making a significant impact on the industry. Helping them get rid of their fears is key to building not just good companies but a
healthy industry.

Software is continuous change, so embrace change, embrace retrospectives, embrace the future.

Thursday, July 9, 2015

Maven build in colours

Have you ever wondered how you can make your Maven build display colours?
In this quick post I will show you what you need to do to get colours in your Maven build.

  1. Navigate to wherever you have installed Maven and go into the lib folder, e.g. ~/apache-maven-3.3.1/lib
  2. Delete slf4j-simple-1.7.5.jar
  3. Find the following jars on the internet and add them to the lib folder:
    log4j-api-2.2.jar
    log4j-core-2.2.jar
    log4j-slf4j-impl-2.2.jar
    slf4j-ext-1.7.5.jar

  4. Get out of the lib folder and navigate into the conf folder. Once there, create a file called log4j2.xml and add the following content:

    <?xml version="1.0" encoding="UTF-8" ?>
    <Configuration>
      <Properties>
        <Property name="maven.logging.root.level">INFO</Property>
      </Properties>
      <Appenders>
        <Console name="console" target="SYSTEM_OUT">
          <PatternLayout pattern="%highlight{[%p] %msg%n%throwable}" />
        </Console>
      </Appenders>
      <Loggers>
        <Root level="${sys:maven.logging.root.level}">
          <Appender-ref ref="console"/>
        </Root>
      </Loggers>
    </Configuration>
     
  5. Go to the terminal and try building a Maven project.

Monday, June 8, 2015

Stefan Birkner's system-rules library

Just a super quick post about an interesting library I just discovered :)

It is rarely the case that you have to test things such as a String being printed to the console or a system property being set by the program... but on occasion it happens.

Last week I discovered a library called system-rules, by Stefan Birkner: http://stefanbirkner.github.io/system-rules/
It is really interesting and easy to use, and it can help you easily write tests that involve java.lang.System.

Let's quickly look at an example.
Imagine that for some reason you want to test that the console prints some message... I don't know about you, but the only way I know to do that is to redirect the stream that goes to the console to something that you can control and read from (e.g. a file, a log...). This kind of test carries lots of boilerplate. It would look something like this:

 @Test  
   public void consoleOutputOldSchoolTest() throws FileNotFoundException {  
     //Create some text file where the output will be redirected, so you can make an assertion later.  
     File testingFile = new File("testingFile.txt");  
     //Create a testing stream and give the file to it  
     PrintStream testingStream = new PrintStream(testingFile);  
     //Keep a copy of the old console output stream  
     PrintStream consoleStream = new PrintStream(System.out);  
     //Reset the object out to the new stream  
     System.setOut(testingStream);  
     //Write something to the "console"  
     System.out.print("test");  
     //Rewire back to the original console output  
     System.setOut(consoleStream);  
     //Just an informative message  
     System.out.println("all back to normal");  
     //Read the output that we are testing  
     String output = new Scanner(testingFile).nextLine();  
     //Do your assertion  
     assertThat(output, is("test"));  
     //Delete the test file  
     testingFile.delete();  
   }  

With Stefan's library you can test the messages that go to the terminal in the blink of an eye:

   @Rule  
   public final SystemOutRule systemOutRule = new SystemOutRule().enableLog();  
   @Test  
   public void systemRulesLibraryConsoleOutputTest() {  
     System.out.print("test");  
     assertThat(systemOutRule.getLog(), is("test"));  
   }  
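The library also ships rules beyond console capturing (for example, for restoring system properties). For contrast, the manual boilerplate such a rule removes looks roughly like this plain-Java sketch; the property name "my.test.flag" is made up for illustration:

```java
// Manual save-and-restore of a system property: the kind of boilerplate that
// a rule-based library can hide for you. (Sketch, not from the original post.)
public class SystemPropertySaveRestore {

    static void runWithProperty(String key, String value, Runnable codeUnderTest) {
        String original = System.getProperty(key);   // remember the previous state
        System.setProperty(key, value);
        try {
            codeUnderTest.run();
        } finally {
            // restore, so later tests see the untouched environment
            if (original == null) {
                System.clearProperty(key);
            } else {
                System.setProperty(key, original);
            }
        }
    }

    public static void main(String[] args) {
        runWithProperty("my.test.flag", "on",
                () -> System.out.println(System.getProperty("my.test.flag"))); // prints on
        System.out.println(System.getProperty("my.test.flag")); // prints null
    }
}
```

With the rule-based approach, all of the try/finally bookkeeping disappears from the test body.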

Ok, back to work now ;)

Sunday, April 12, 2015

Install and configure Java on Ubuntu, the easy way

Whenever I change laptop or format my hard drive, I end up making a mess with the soft links to configure java...

This post is just a reminder to myself for the future. I found a clean way I am comfortable with for setting up my system to use Java, via the update-alternatives tool that Ubuntu has. This is how I do it:

1- Download the JDK from Oracle and copy it to your user directory.
I like having it under ~/Java/jdk1.8.0_40

2- I often like installing Maven too, so I do the same: I just create another directory for it and put the downloaded Maven in there, e.g. ~/maven/apache-maven-3.3.1

3- Then I set the necessary environment variables for my user in the ~/.bashrc file.

export JAVA_HOME=/home/djordje/Java/jdk1.8.0_40
export M2_HOME=/home/djordje/maven/apache-maven-3.3.1
export M2=$M2_HOME/bin
export PATH=$PATH:$JAVA_HOME/bin:$M2  


4- I use update-alternatives to register the Java interpreter and the compiler with the system.

sudo update-alternatives --install "/usr/bin/java" "java" "/home/djordje/Java/jdk1.8.0_40/bin/java" 1

sudo update-alternatives --install "/usr/bin/javac" "javac" "/home/djordje/Java/jdk1.8.0_40/bin/javac" 1

5- Again I use update-alternatives, but this time to configure which version of the interpreter and compiler should be used by default.

sudo update-alternatives --config java
sudo update-alternatives --config javac

When you run each command, it shows all the installed versions of Java so you can pick one.

6- At the end I just run java -version to check that the system is using the version of Java I want.

I like this approach: it is easy to do, and you can quickly switch to a different installed version just by using the --config option.

Wednesday, March 18, 2015

Contract testing

Let's imagine the following scenario...
We are working in a distributed system with lots of applications.
The developers understand the importance of avoiding coupling among components, so they decide to create RESTful applications that communicate via XML and JSON,
instead of building applications that are binary-dependent on other applications.

During the development of a feature, the development team made a change to the API and unknowingly broke one of the consumer apps.
Unfortunately, this bug was really expensive: the company only managed to discover it in its replica pre-production environment, via a long-running
end-to-end functional test. After determining that what was broken was actually an XML marshaller, there was no quick fix and they had to roll back.

In the root cause analysis meeting, the developers from each of the teams that own the failing apps realised that the API change was the reason for the bug,
and that no matching work had been done in one of the unmarshallers.
The developers were told to fix the bug and also to come up with a solution that would prevent this from happening again.

After fixing the bug, the developers took some time to think about how they could catch this kind of bug before the pre-production environment, where the expensive
integration tests run. One of them said, "What we need is consumer contract testing!"...

Consumer contract testing allows consumers and providers of an API to know whether their latest changes to their marshallers or unmarshallers could potentially be
harmful to the other party, without the need to perform an integration test. This is how it works:


1- The provider of the API publishes an example of the API somewhere it knows the consumer can access it (e.g. publishing it in a repo, sending it via email...).
2- The consumer takes the API example and writes a test that tolerantly accesses the values of interest.
   This in-document path (e.g. XPath, JSONPath...) used to retrieve the values from the API example is known as the contract.
3- The consumer publishes the contract in a place where it knows the provider has access to it (e.g. publishing it in a repo, sending it via email...).
4- The provider will take the contract and use it in a test against the generated output of the application. If the test fails, the provider will know that they could potentially be breaking the consumer if they were to release the current version under test (a negotiation can take place).

Let's now have a look at a practical example of each of the steps above.

1- The developers that own the provider app take, from their passing acceptance test, the output that the application sends back to the consumer, and save
it into a file called "apiexample.xml", which looks like this:

 <output>  
      <content>  
           <partA>A</partA>  
           <partB>B</partB>  
      </content>  
 </output>  

They send this file over email to the team that owns the consumer application.

2- The developers that own the consumer app will take the example and write queries against it to determine the contract they need. A unit test against the example could be fine.

 @Test  
    public void apiExampleGeneratesValidatesToContract() throws Exception {  
     XPath xPath = XPathFactory.newInstance().newXPath();  
     String value = xPath.evaluate("/output/content/partB", getSource(readExample("apiexample.xml")));  
     assertThat(value,is(notNullValue()));  
    }  

3- Now that the developers know that the contract to access what they are interested in is:
 "/output/content/partB"
they can save it in a file called "contract.txt" and send it over email to the other team, so that they can make sure they are always outputting according to the contract. Note that these tolerant
paths allow the provider to change any part of the API they want, as long as the contract is respected.

4- The provider will read the "contract.txt" file and write a test in which the contract is applied to the application's output.

 @Test  
    public void apiExampleGeneratesValidatesToContract() throws Exception {  
     XPath xPath = XPathFactory.newInstance().newXPath();  
     String value = xPath.evaluate("/output/content/partB", getSource(readExample("apiexample.xml")));  
     assertThat(value,is(notNullValue()));  
    }  

Now when either team runs their build, they will know if they are breaching the contract, and the bug will not get further than the development environment.
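For anyone who wants to try the idea without the helper methods used in the snippets above (readExample, getSource), here is a self-contained sketch using only the JDK's XPath API, with the example document inlined instead of read from apiexample.xml. Class and method names here are mine, not from the original repo:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;

// Self-contained sketch of a contract check, using only JDK classes.
public class ContractCheck {

    // Parses the XML and evaluates the contract path against it.
    public static String applyContract(String xml, String contractPath) throws Exception {
        Document document = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        XPath xPath = XPathFactory.newInstance().newXPath();
        return xPath.evaluate(contractPath, document);
    }

    public static void main(String[] args) throws Exception {
        String apiExample = "<output><content><partA>A</partA><partB>B</partB></content></output>";
        // The provider may rename or drop partA freely; only the contract path matters.
        String contract = "/output/content/partB";
        System.out.println(applyContract(apiExample, contract)); // prints B
    }
}
```

Because the check only follows the agreed path, the provider can restructure everything else in the document without breaking the consumer.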

You can find the complete source code of this example here.

Wednesday, March 11, 2015

Yet Another Blog Article About Acceptance Testing


Acceptance tests are tests conducted to determine if the requirements of a specification are met.
In modern software development, we call this specification the acceptance criteria.

Whenever possible, it would be desirable to acceptance test the system end to end.
By end to end, I mean talking to the system from the outside, through its interfaces.

Note that at the beginning of the previous paragraph, I said “whenever possible”.
The reason for this is that it would be risky and also costly to integration test our code (against other code we don't control/own). Sometimes applications within a system don't even belong to our company, or they are too costly and slow to run. Because of this, the number of full-stack system/functional tests should be kept very small, almost none.

In acceptance testing we often start from an assumption about those external systems we cannot control. The parts out of our control are faked, and the acceptance criteria is aimed at the parts we control.

When writing an acceptance test, there is a commonly used format to define the acceptance criteria, well known as the “given, when, then” format:

- given: the setup/preconditions of the scenario that we will test. It contains what we expect from those remote systems (either internal or external) on which we depend.
- when: the specific call to the exposed interface we are testing.
- then: the validation of the results.

Today's acceptance tests are written with the help of live specification frameworks such as JBehave, Fit, FitNesse, Concordion, Yatspec...
Using these tools makes it easier to both understand complex scenarios and maintain the criteria.


Understanding Yatspec

Next I will talk about writing acceptance tests with a popular live specification framework called Yatspec. I will explain some of its features and describe the way it presents the test report. I will also explain, with an example, how we can stub systems that are out of our control and use them in our acceptance test.

About Yatspec
- it's a live specification framework for Java (https://code.google.com/p/yatspec/)
- produces readable HTML
- supports table/parametrised tests
- allows writing in given-when-then style

 
The scenario
The application we will be testing will receive a GET request from a client; it will then send subsequent GET requests to two remote systems (A and B), process the responses and POST the result to a third system (C), just before returning it to the client.



The criteria
-Given System A will reply 1 2 3
-And System B will reply 4 5 6
-When the client asks for the known odd numbers
-Then the application responds 1 3 5
-Then 'System C' receives 1 3 5
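The processing the application under test must implement (merge the replies from A and B and keep only the odd numbers) can be sketched independently of Yatspec; class and method names here are purely illustrative:

```java
import java.util.Arrays;
import java.util.stream.Collectors;

// Sketch of the behaviour the criteria describe: combine the replies
// from Systems A and B and keep only the odd numbers.
// (Illustrative code, not part of the Yatspec example itself.)
public class OddNumbersFilter {

    public static String knownOddNumbers(String fromSystemA, String fromSystemB) {
        return Arrays.stream((fromSystemA + "," + fromSystemB).split(","))
                .filter(n -> Integer.parseInt(n.trim()) % 2 != 0)
                .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        System.out.println(knownOddNumbers("1,2,3", "4,5,6")); // prints 1,3,5
    }
}
```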


Creating html reports
Before going in depth into our example, I want to spend some time discussing what Yatspec reports look like and the basics needed to create them (if you want to go directly to the scenario implementation, just skip this section).

When a Yatspec specification is run, it will generate an HTML report. Advanced options can allow you to publish it remotely, but by default it will be written to a temporary file in the file system.
The terminal will tell you where it is, like this:
Yatspec output:
/tmp/acceptancetests/KnownOddNumbersTest.html
We can navigate to it from the browser's URL bar:
file:///tmp/acceptancetests/KnownOddNumbersTest.html

Let's have a look at how it is structured:


(a) This is the title of the report. If Yatspec finds the suffix 'Test' in the class name, it will remove it and just present the rest as the title.

 @RunWith(SpecRunner.class)  
 public class KnownOddNumbersTest extends TestState {  
      //Your tests  
 ...  
 }  


(b) In the contents section you will see a summary of all the test names (there can be multiple tests) in the same specification.



(c) This is the test name. We don't need to add any additional annotations; all we need to do is write our test names in “camel case”. If the test throws an exception, it will not be shown in the report.


 @Test  
 public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
       //Test body...  
 }  


(d) At the beginning of each test, the criteria will be presented. Yatspec will use the contents of the method body to generate it. The methods given(), and(), when() and then() are inherited from TestState.java (later I will explain how to use them).

 
 @Test  
   public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
     given(systemARepliesWithNumbers("1,2,3"));  
     and(systemBRepliesWithNumbers("4,5,6"));  
     when(aRequestIsSentToTheApplication());  
     then(theApplicationReturnedValue(), is("1,3,5"));  
     then(systemCReceivedValue(),is("1,3,5"));  
   }  

(e) This is where the test result will be shown. Yatspec will colour this part green if the test passes, red if the test fails, or orange if the test is not run.

(f) Interesting givens are the preconditions for the test to run. These preconditions are stored in the class TestState.java, in an object called interestingGivens. We would commonly populate them by passing a GivensBuilder object to the given() method. The and() method can also be used to add more information to our interesting givens.
 
 @Test  
   public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
     given(systemARepliesWithNumbers("1,2,3"));  
     and(systemBRepliesWithNumbers("4,5,6"));  
     //...  
   }  
   private GivensBuilder systemARepliesWithNumbers(String numbers) {  
     return givens -> {  
       givens.add("system A returns", numbers);  
       return givens;  
     };  
   }  
   private GivensBuilder systemBRepliesWithNumbers(String numbers) {  
     return givens -> {  
       givens.add("system B returns", numbers);  
       return givens;  
     };  
   }  

(g) These are the captured inputs and outputs. Their purpose is to record values that go in or out of any component in the workflow. TestState.java contains an object called capturedInputsAndOutputs to which we can add values and from which we can query them. Commonly we would indirectly add a value to capturedInputsAndOutputs to track the response of our application so it can be verified later, via a parameter of type ActionUnderTest.java passed to the when() clause method.

 @Test  
   public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
     //...  
     when(aRequestIsSentToTheApplication());  
     //...  
   }  
 private ActionUnderTest aRequestIsSentToTheApplication() {  
     return (givens, captures) -> {  
       //The second parameter of this lambda is capturedInputsAndOutputs  
       captures.add("application response", newClient()  
           .target("http://localhost:9999/")  
           .request().get().readEntity(String.class));  
       return captures;  
     };  
   }  


(h) These are the final verifications. They are created by the then() method. You can tell when output was generated by the then() method, because it is not highlighted in yellow.
A StateExtractor.java is responsible for the values in this section. The state extractor will take from the captures the values that were recorded previously, so a matcher can verify whether they are correct.


 @Test  
   public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
     //...  
     then(theApplicationReturnedValue(), is("1,3,5"));  
   }  
 private StateExtractor<String> theApplicationReturnedValue() {  
     return captures -> captures.getType("application response", String.class);  
   }  
 }  

The scenario implementation
Now that we understand the criteria and have some basic understanding of Yatspec reports, let's write an acceptance test for the criteria described before.

In our scenario, Systems A, B and C are out of our control (let's imagine they are owned by other companies). We need to query A and B first, and then send the processed result to C before replying to the client.
This means that our interesting givens will be the values returned from A and B, and our captured inputs and outputs will contain the input into C.

 
So let's have a look at how Systems A and B return to the application the values previously saved in the interesting givens, and also how System C captures the input.

For this example, I created a class called FakeSystemTemplate.java, which contains the boilerplate code necessary to create an embedded server. Systems A, B and C will each inherit from it and provide specific handler implementations.

 public abstract class FakeSystemTemplate {  
   private final HttpServer server;  
   protected InterestingGivens givens;  
   protected CapturedInputAndOutputs captures;  
   public FakeSystemTemplate(int port, String context,InterestingGivens givens, CapturedInputAndOutputs captures) throws IOException {  
     this.givens = givens;  
     this.captures = captures;  
     InetSocketAddress socketAddress = new InetSocketAddress(port);  
     server = HttpServer.create(socketAddress,0);  
     server.createContext(context, customHandler());  
     server.start();  
   }  
   public abstract HttpHandler customHandler();  
   public void stopServer() {  
     server.stop(0);  
   }  
 }  


Later, when we create the acceptance test, we will see how we pass the interesting givens and the captured inputs and outputs to the systems.
Systems A and B will return the values stored in the interesting givens using a unique key (later we will see how these keys are set in the givens).


 public class SystemA extends FakeSystemTemplate {  
   public SystemA(int port, String context, InterestingGivens interestingGivens, CapturedInputAndOutputs capturedInputAndOutputs) throws IOException {  
     super(port, context, interestingGivens, capturedInputAndOutputs);  
   }  
   @Override  
   public HttpHandler customHandler() {  
     return httpExchange -> {  
       String response = givens.getType("system A returns", String.class);  
       httpExchange.sendResponseHeaders(200, response.length());  
       OutputStream outputStream = httpExchange.getResponseBody();  
       outputStream.write(response.getBytes());  
       outputStream.close();  
       httpExchange.close();  
       captures.add("output from system A", response);  
     };  
   }  
 } 
 
 public class SystemB extends FakeSystemTemplate {  
   public SystemB(int port, String context, InterestingGivens interestingGivens, CapturedInputAndOutputs capturedInputAndOutputs) throws IOException {  
     super(port, context, interestingGivens, capturedInputAndOutputs);  
   }  
   @Override  
   public HttpHandler customHandler() {  
     return httpExchange -> {  
       String response = givens.getType("system B returns", String.class);  
       httpExchange.sendResponseHeaders(200, response.length());  
       OutputStream outputStream = httpExchange.getResponseBody();  
       outputStream.write(response.getBytes());  
       outputStream.close();  
       httpExchange.close();  
       captures.add("output from system B", response);  
     };  
   }  
 }  


For System C we will capture the arriving input.

 public class SystemC extends FakeSystemTemplate {  
   public SystemC(int port, String context, InterestingGivens interestingGivens, CapturedInputAndOutputs capturedInputAndOutputs) throws IOException {  
     super(port, context, interestingGivens, capturedInputAndOutputs);  
   }  
   @Override  
   public HttpHandler customHandler() {  
     return httpExchange -> {  
       Scanner scanner = new Scanner(httpExchange.getRequestBody());  
       String receivedMessage = "";  
       while(scanner.hasNext()) {  
         receivedMessage += scanner.next();  
       }  
       scanner.close();  
       httpExchange.sendResponseHeaders(200, 0);  
       httpExchange.close();  
       captures.add("system C received value", receivedMessage);  
     };  
   }  
 }  


Now that our remote systems are ready, let's write our test.


 @RunWith(SpecRunner.class)  
 public class KnownOddNumbersTest extends TestState {  
   private SystemA systemA;  
   private SystemB systemB;  
   private SystemC systemC;  
   private Application application;  
   @Before  
   public void setUp() throws Exception {  
     systemA = new SystemA(9996, "/", interestingGivens, capturedInputAndOutputs);  
     systemB = new SystemB(9997, "/", interestingGivens, capturedInputAndOutputs);  
     systemC = new SystemC(9998, "/", interestingGivens, capturedInputAndOutputs);  
     application = new Application(9999, "/");  
   }  
   @After  
   public void tearDown() throws Exception {  
     systemA.stopServer();  
     systemB.stopServer();  
     systemC.stopServer();  
     application.stopApplication();  
   }  
   @Test  
   public void shouldReceiveResultWhenARequestIsSentToTheApplication() throws Exception {  
     given(systemARepliesWithNumbers("1,2,3"));  
     and(systemBRepliesWithNumbers("4,5,6"));  
     when(aRequestIsSentToTheApplication());  
     then(theApplicationReturnedValue(), is("1,3,5"));  
     then(systemCReceivedValue(),is("1,3,5"));  
   }  
 }  


By extending TestState.java we get access to the interestingGivens and capturedInputAndOutputs objects. We will pass them to the remote systems; this way Systems A and B will be aware of what we expect them to return, and C will be able to capture its input.

The methods used inside given(), and(), when() and then() are just static fixture methods. I think it is good to avoid making long classes, so the test class contains just the test; everything else is extracted into reusable fixture methods. Let's have a look at them.


 public class GivensFixture {  
   public static GivensBuilder systemARepliesWithNumbers(String numbers) {  
     return givens -> {  
       givens.add("system A returns", numbers);  
       return givens;  
     };  
   }  
   public static GivensBuilder systemBRepliesWithNumbers(String numbers) {  
     return givens -> {  
       givens.add("system B returns", numbers);  
       return givens;  
     };  
    }  
  }  
 
  public class WhenFixture {  
   public static ActionUnderTest aRequestIsSentToTheApplication() {  
     return (givens, captures) -> {  
       captures.add("application response", newClient().target("http://localhost:9999/").request().get().readEntity(String.class));  
       return captures;  
     };  
   }  
 }
 
 public class ThenFixture {  
   public static StateExtractor<String> theApplicationReturnedValue() {  
     return captures -> captures.getType("application response", String.class);  
   }  
   public static StateExtractor<String> systemCReceivedValue() {  
     return captures -> captures.getType("system C received value", String.class);  
   }  
 }  


Once we run it, the acceptance test will go red. The next thing to do, if we were practicing ATDD, would be to go into the production code and write unit tests to guide the creation of the code that is required to make the acceptance test go green. Remember the ATDD cycle.

 
The TDD of the final solution is out of scope for this blog post, but you can find all the completed code at this git repo:



Wednesday, February 4, 2015

Exposing the data layer of your app using REST

The more we separate the concerns of our system, the more maintainable it becomes.

It is very common to find applications written in such a way that the data access mechanisms (SQL files, JDBC client code, ORM mappings...) are located right next to (coupled with/interdependent on) the service/business logic. This often makes finding a bug, making a change, etc., harder.

Calculating a result and storing it are different things, so why not separate those two responsibilities into different applications?

One would be responsible for making sure the results are calculated, and the other would just provide data management support.
In my opinion the result of doing this is a system that is more understandable, maintainable and upgrade-friendly.

In many companies, the data is often managed by database engineering teams which have their own schedules, goals and even different managers from the development teams. In this type of organization, delays, misunderstandings, conflicts of interest and work de-synchronization are very common. So to make the most of a decoupled system, we need not just a good software approach, but also a process and team structure that are compatible with it (but this may be a topic for another post). This type of decoupling will not just make maintenance easier for the developers; it will probably also encourage discussion about the process and the team structure.

In my example I decided to expose two persistence services via one URL, persisting simultaneously in two types of database (a SQL and a NoSQL DB).
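Both adapters shown below implement a CreateService interface (the endpoint further down actually imports two such interfaces, services.nosqlcrud.CreateService and services.sqlcrud.CreateService, one per package). As a rough, illustrative sketch of that port, with a simplified Address and an in-memory stand-in instead of a real database:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the CreateService port; the real project defines
// its own interfaces and a richer Address domain object.
public class CreateServiceSketch {

    static class Address {
        final String firstLine;
        Address(String firstLine) { this.firstLine = firstLine; }
    }

    interface CreateService {
        void create(Address address);
    }

    // A trivial in-memory adapter implementing the same port; each real
    // adapter (Mongo, Hibernate) plugs its own storage behind create().
    static class InMemoryAddressInsertAdapter implements CreateService {
        final List<Address> stored = new ArrayList<>();
        @Override
        public void create(Address address) { stored.add(address); }
    }

    public static void main(String[] args) {
        InMemoryAddressInsertAdapter adapter = new InMemoryAddressInsertAdapter();
        adapter.create(new Address("street bla bla"));
        System.out.println(adapter.stored.size()); // prints 1
    }
}
```

The service layer depends only on the interface, so each storage technology can be swapped or added without touching the business logic.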

This is the implementation of the NoSQL adapter:


 public class NoSqlAddressInsertAdapter implements CreateService {  
   private final MongoClient mongoClient;  
   @Inject  
   public NoSqlAddressInsertAdapter(MongoClient mongoClient) {  
     this.mongoClient = mongoClient;  
   }  
   @Override  
   public void create(Address address) {  
     DBCollection collection = mongoClient.getDB("radadata").getCollection("address");  
     collection.insert(toNoSqlAddress(address));  
   }  
   private AddressNoSql toNoSqlAddress(Address address) {  
     AddressNoSql addressNoSql = new AddressNoSql();  
     addressNoSql.append("firstline", address.getFirstLine());  
     addressNoSql.append("secondline", address.getSecondLine());  
     addressNoSql.append("postcode", address.getPostcode());  
     addressNoSql.append("persons", address.getPersons().stream().map(toNoSqlPersons()).collect(toList()));  
     return addressNoSql;  
   }  
   private Function<Person, PersonNoSql> toNoSqlPersons() {  
     return person -> {  
       PersonNoSql personNoSql = new PersonNoSql();  
       personNoSql.append("firstname", person.getFirstName());  
       personNoSql.append("secondname", person.getSecondName());  
       return personNoSql;  
     };  
   }  
 }  

This is the implementation of the SQL adapter:


 public class SqlAddressInsertAdapter implements CreateService {  
   @Inject  
   public SqlAddressInsertAdapter() {  
   }  
   private static SessionFactory getSessionFactory() {  
     return HibernateUtil.getSessionFactory();  
   }  
   private Session session;  
   @Override  
   public void create(Address address) {  
     session = SqlAddressInsertAdapter.getSessionFactory().getCurrentSession();  
     session.beginTransaction();  
     Set<ORMPerson> ormPersons = address.getPersons().stream().map(toOrmPersons()).collect(toSet());  
     ORMAddress ormAddress = new ORMAddress();  
     ormAddress.setFirstLine(address.getFirstLine());  
     ormAddress.setSecondLine(address.getSecondLine());  
     ormAddress.setPostcode(address.getPostcode());  
     ormAddress.setOrmPersons(ormPersons);  
     session.save(ormAddress);  
     session.getTransaction().commit();  
   }  
   @Override  
   public void create(Person person) {  
     //  
   }  
   private Function<Person, ORMPerson> toOrmPersons() {  
     return person -> new ORMPerson(person.getFirstName(),person.getSecondName());  
   }  
 }  

Note that both adapters use their own specific domain objects: one uses an ORM (the ORM classes are Hibernate entities) and the other doesn't.

This is a sample REST endpoint that allows access to both services simultaneously:


 @Service  
 @Path("insertperson")  
 public class InsertAddressResource {  
   private final services.nosqlcrud.CreateService noSqlcreateService;  
   private final services.sqlcrud.CreateService sqlCreateService;  
   @Inject  
   public InsertAddressResource(services.nosqlcrud.CreateService noSqlcreateService,  
                  services.sqlcrud.CreateService sqlCreateService) {  
     this.noSqlcreateService = noSqlcreateService;  
     this.sqlCreateService = sqlCreateService;  
   }  
   @POST  
   @Consumes({"application/json"})  
   public void insert(Address address) {  
     noSqlcreateService.create(address);  
     sqlCreateService.create(address);  
   }  
   /*  
     A Sample Json to POST:  
     URL: http://localhost:9998/insertperson  
     Content Type: application/json  
     {  
      "firstline": "street bla bla",  
      "secondline": "town of bla bla",  
      "postcode": "ble ble ble",  
      "persons": [  
         {"firstname":"Armin","secondname":"Josef"},  
         {"firstname":"Johan","secondname":"Uhgler"}  
       ]  
     }  
   */  
 }  

These snippets of code are just part of a demo app I wrote a few days ago to show how to expose the data layer via REST.
The rest of the project can be found at: https://github.com/SFRJ/Rest-Approach-to-Data-Persistence-R.A.D.A-
