Data-driven unit testing in Java

Data-driven testing is a powerful way of testing a given scenario with different combinations of values. In this article, we look at several ways to do data-driven unit testing in JUnit.

Suppose, for example, you are implementing a Frequent Flyer application that awards status levels (Bronze, Silver, Gold, Platinum) based on the number of status points you earn. The number of points needed for each level is shown here:

    level     minimum status points     result level
    Bronze    0                         Bronze
    Bronze    300                       Silver
    Bronze    700                       Gold
    Bronze    1500                      Platinum

Our unit tests need to check that we can correctly calculate the status level achieved when a frequent flyer earns a certain number of points. This is a classic problem where data-driven tests provide an elegant, efficient solution. Data-driven testing is well supported in modern JVM unit testing libraries such as Spock and Specs2. However, some teams don't have the option of using a language other than Java, or are limited to using JUnit. In this article, we look at a few options for data-driven testing in plain old JUnit.

Parameterized Tests in JUnit

JUnit provides some support for data-driven tests via the Parameterized test runner. A simple data-driven test in JUnit using this approach might look like this:

    @RunWith(Parameterized.class)
    public class WhenEarningStatus {

        @Parameters(name = "{index}: {0} initially had {1} points, earns {2} points, should become {3}")
        public static Iterable<Object[]> data() {
            return Arrays.asList(new Object[][]{
                    {Bronze, 0,    100,  Bronze},
                    {Bronze, 0,    300,  Silver},
                    {Bronze, 100,  200,  Silver},
                    {Bronze, 0,    700,  Gold},
                    {Bronze, 0,    1500, Platinum},
            });
        }

        private Status initialStatus;
        private int initialPoints;
        private int earnedPoints;
        private Status finalStatus;

        public WhenEarningStatus(Status initialStatus, int initialPoints, int earnedPoints, Status finalStatus) {
            this.initialStatus = initialStatus;
            this.initialPoints = initialPoints;
            this.earnedPoints = earnedPoints;
            this.finalStatus = finalStatus;
        }

        @Test
        public void shouldUpgradeStatusBasedOnPointsEarned() {
            FrequentFlyer member = FrequentFlyer.withFrequentFlyerNumber("12345678")
                                                .named("Joe", "Jones")
                                                .withStatusPoints(initialPoints)
                                                .withStatus(initialStatus);

            member.earns(earnedPoints).statusPoints();

            assertThat(member.getStatus()).isEqualTo(finalStatus);
        }
    }

You provide the test data in the form of a list of Object arrays, identified by the @Parameters annotation. These object arrays contain the rows of test data that you use for your data-driven test. Each row is used to instantiate the member variables of the class, via the constructor. When you run the test, JUnit will instantiate and run a test for each row of data. You can use the name attribute of the @Parameters annotation to provide a more meaningful title for each test.

There are a few limitations to JUnit parameterized tests. The most important is that, since the test data is defined at the class level and not at the test level, you can only have one set of test data per test class. Not to mention that the code is somewhat cluttered: you need to define member variables, a constructor, and so forth. Fortunately, there is a better option.
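(For reference: the original article does not show the class under test. The following is a minimal, hypothetical sketch of what the FrequentFlyer class exercised by these tests might look like; the fluent method names come from the tests themselves, the threshold logic follows the status table above, and the actual implementation may well differ.)

    // Hypothetical sketch of the domain class the tests exercise.
    enum Status { Bronze, Silver, Gold, Platinum }

    public class FrequentFlyer {
        private final String frequentFlyerNumber;
        private String firstName;
        private String lastName;
        private int statusPoints;
        private Status status = Status.Bronze;

        private FrequentFlyer(String frequentFlyerNumber) {
            this.frequentFlyerNumber = frequentFlyerNumber;
        }

        public static FrequentFlyer withFrequentFlyerNumber(String number) {
            return new FrequentFlyer(number);
        }

        public FrequentFlyer named(String firstName, String lastName) {
            this.firstName = firstName;
            this.lastName = lastName;
            return this;
        }

        public FrequentFlyer withStatusPoints(int points) {
            this.statusPoints = points;
            return this;
        }

        public FrequentFlyer withStatus(Status status) {
            this.status = status;
            return this;
        }

        // Lets the tests read fluently: member.earns(300).statusPoints()
        public PointsEarned earns(int points) {
            return new PointsEarned(points);
        }

        public class PointsEarned {
            private final int points;

            PointsEarned(int points) {
                this.points = points;
            }

            public void statusPoints() {
                statusPoints += points;
                // Recalculate the level from the running points total,
                // using the thresholds from the table above
                if (statusPoints >= 1500) {
                    status = Status.Platinum;
                } else if (statusPoints >= 700) {
                    status = Status.Gold;
                } else if (statusPoints >= 300) {
                    status = Status.Silver;
                }
            }
        }

        public Status getStatus() {
            return status;
        }
    }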
Using JUnitParams

A more elegant way to do data-driven testing in JUnit is to use JUnitParams (https://code.google.com/p/junitparams/). JUnitParams (see Maven Central, http://search.maven.org/#search%7Cga%7C1%7Ca%3A%22JUnitParams%22, to find the latest version) is an open source library that makes data-driven testing in JUnit easier and more explicit. A simple data-driven test using JUnitParams looks like this:

    @RunWith(JUnitParamsRunner.class)
    public class WhenEarningStatusWithJUnitParams {

        @Test
        @Parameters({
                "Bronze, 0,   100,  Bronze",
                "Bronze, 0,   300,  Silver",
                "Bronze, 100, 200,  Silver",
                "Bronze, 0,   700,  Gold",
                "Bronze, 0,   1500, Platinum"
        })
        public void shouldUpgradeStatusBasedOnPointsEarned(Status initialStatus, int initialPoints,
                                                           int earnedPoints, Status finalStatus) {
            FrequentFlyer member = FrequentFlyer.withFrequentFlyerNumber("12345678")
                                                .named("Joe", "Jones")
                                                .withStatusPoints(initialPoints)
                                                .withStatus(initialStatus);

            member.earns(earnedPoints).statusPoints();

            assertThat(member.getStatus()).isEqualTo(finalStatus);
        }
    }

Test data is defined in the @Parameters annotation, which is associated with the test itself, not with the class, and is passed to the test via method parameters. This makes it possible to have different sets of test data for different tests in the same class, or to mix data-driven tests with normal tests in the same class, which is a much more logical way of organizing your classes.

JUnitParams also lets you get test data from other methods, as illustrated here:

    @Test
    @Parameters(method = "sampleData")
    public void shouldUpgradeStatusFromEarnedPoints(Status initialStatus, int initialPoints,
                                                    int earnedPoints, Status finalStatus) {
        FrequentFlyer member = FrequentFlyer.withFrequentFlyerNumber("12345678")
                .named("Joe", "Jones")
                .withStatusPoints(initialPoints)
                .withStatus(initialStatus);

        member.earns(earnedPoints).statusPoints();

        assertThat(member.getStatus()).isEqualTo(finalStatus);
    }

    private Object[] sampleData() {
        return $(
                $(Bronze, 0,   100, Bronze),
                $(Bronze, 0,   300, Silver),
                $(Bronze, 100, 200, Silver)
        );
    }

The $ method provides a convenient shorthand to convert test data into the Object arrays that need to be returned. You can also externalize the test data into a separate class:

    @Test
    @Parameters(source = StatusTestData.class)
    public void shouldUpgradeStatusFromEarnedPoints(Status initialStatus, int initialPoints,
                                                    int earnedPoints, Status finalStatus) {
        ...
    }

The test data here comes from a method in the StatusTestData class:

    public class StatusTestData {
        public static Object[] provideEarnedPointsTable() {
            return $(
                    $(Bronze, 0,   100, Bronze),
                    $(Bronze, 0,   300, Silver),
                    $(Bronze, 100, 200, Silver)
            );
        }
    }

This method needs to be static, return an object array, and start with the word "provide". Getting test data from external methods or classes in this way opens the way to retrieving test data from external sources such as CSV or Excel files.
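To give an idea of what that might look like, here is a minimal, hypothetical sketch of a CSV-backed provider class following the same "provide" naming convention. The file name, its location, and the comma-separated layout are invented for illustration (JUnitParams itself doesn't mandate any particular file format), and Status is assumed to be an enum, as in the sketch earlier:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.List;

    public class CsvStatusTestData {

        // Reads lines such as "Bronze,0,300,Silver" from a (hypothetical)
        // CSV file and turns each one into a row of test parameters.
        public static Object[] provideEarnedPointsTable() throws IOException {
            List<Object[]> rows = new ArrayList<Object[]>();
            for (String line : Files.readAllLines(
                    Paths.get("src/test/resources/status-levels.csv"), StandardCharsets.UTF_8)) {
                String[] fields = line.split(",");
                rows.add(new Object[]{
                        Status.valueOf(fields[0].trim()),   // initial status
                        Integer.parseInt(fields[1].trim()), // initial points
                        Integer.parseInt(fields[2].trim()), // earned points
                        Status.valueOf(fields[3].trim())    // expected final status
                });
            }
            return rows.toArray();
        }
    }

The test itself would then simply declare @Parameters(source = CsvStatusTestData.class), exactly as in the StatusTestData example above.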
JUnitParams provides a simple and clean way to implement data-driven tests in JUnit, without the overhead and limitations of the traditional JUnit parameterized tests.

Testing with non-Java languages

If you are not constrained to Java and/or JUnit, more modern tools such as Spock (https://code.google.com/p/spock/) and Specs2 provide great ways of writing clean, expressive unit tests in Groovy and Scala respectively. In Groovy, for example, you could write a test like the following:

    class WhenEarningStatus extends Specification {

        def "should earn status based on the number of points earned"() {
            given:
            def member = FrequentFlyer.withFrequentFlyerNumber("12345678")
                    .named("Joe", "Jones")
                    .withStatusPoints(initialPoints)
                    .withStatus(initialStatus)

            when:
            member.earns(earnedPoints).statusPoints()

            then:
            member.status == finalStatus

            where:
            initialStatus | initialPoints | earnedPoints | finalStatus
            Bronze        | 0             | 100          | Bronze
            Bronze        | 0             | 300          | Silver
            Bronze        | 100           | 200          | Silver
            Silver        | 0             | 700          | Gold
            Gold          | 0             | 1500         | Platinum
        }
    }

John Ferguson Smart is a specialist in BDD, automated testing, and software life cycle optimization, and the author of BDD in Action and other books. John runs regular courses in Australia, London and Europe on related topics such as Agile Requirements Gathering, Behaviour Driven Development, Test Driven Development, and Automated Acceptance Testing.

Posted by on 24 July 2014 | 12:09 am

Adding Java EE 7 Batch Addon to JBoss Forge? – Part 6 (Tech Tip #40)

This is the sixth part (part 1, part 2, part 3, part 4, part 5) of a multi-part video series where Lincoln Baxter (@lincolnthree) and I are interactively building a Forge addon to add Java EE 7 Batch functionality.

Part 1 showed how to get started with creating an addon, add relevant POM dependencies, build and install the addon using the Forge shell, add a new command batch-new-jobxml, and add --reader, --processor, and --writer parameters to the newly added command. Part 2 showed how to identify classes for each CLI parameter that already honor the contract required by the Batch specification. Part 3 showed how parameters can be made required, created templates for the reader, processor, and writer, and validated the specified parameters. Part 4 added a new test for the command and showed how Forge can be used in debug mode. Part 5 fixed a bug reported by a community member and started work on making processor validation optional.

This part shows how to:

- Upgrade from Forge 2.6.0 to 2.7.1
- Fix the failing test
- Generate the reader, processor, and writer files from templates instead of source files
- Inject the reader, processor, and writer appropriately into the test's temporary project

Enjoy! As always, the evolving source code is available at github.com/javaee-samples/forge-addons. The debugging will continue in the next episode.

Posted by on 23 July 2014 | 5:44 pm

And towards JSF 2.3 we go!

For all the JSF folks out there: some important news. What is it? Well, Ed Burns announced Oracle's intent to file JSF 2.3, with me as co-spec lead. See the email to the EG at https://java.net/projects/javaserverfaces-spec-public/lists/users/archiv... Enjoy!

Posted by on 22 July 2014 | 8:05 pm

Shape the future of JBoss EAP and WildFly Web Console

Are you using WildFly? Any version of JBoss EAP? Would you like to help us define what the Web Console for future versions should look like? Help the Red Hat UX Design team shape the future of JBoss EAP and WildFly! We are currently working to improve the usability and information architecture of the web-based admin console. By taking part in a short exercise you will help us better understand how users interpret the information and accomplish their goals. You do not need to be an expert user of the console to participate in this study. The activity shouldn't take longer than 10 to 15 minutes to complete. To start participating in the study, click on the link below and follow the instructions. http://ows.io/tj/12t0qr48 I completed the study in about 12 minutes and was happy that my clicking around helped shape the future of JBoss EAP and WildFly! Just take a quick detour from your routine for 10-15 minutes and take the study. Thank you in advance for taking the time to complete it.

Posted by on 18 July 2014 | 7:57 am

Getting Started with Docker (Tech Tip #39)

If the number of articles, meetups, talk submissions at different conferences, tweets, and other indicators are taken into consideration, then it seems like Docker is going to solve world hunger. It would be nice if it would, but apparently not. But it does solve one problem really well! Let's hear it from @solomonstre, the creator of the Docker project!

In short, Docker simplifies software delivery by making it easy to build and share images that contain your application's entire environment, or application operating system.

What is meant by application operating system?

Your application typically requires a specific version of the operating system, an application server, a JDK, and a database server; it may require tuning of configuration files, and it has multiple other similar dependencies. The application may also need binding to specific ports and a certain amount of memory. The components and configuration together required to run your application are what is referred to as the application operating system. You can certainly provide an installation script that will download and install these components. Docker simplifies this process by allowing you to create an image that contains your application and infrastructure together, managed as one component. These images are then used to create Docker containers, which run on the container virtualization platform provided by Docker.

What are the main components of Docker?

Docker has two main components:

- Docker: the open source container virtualization platform
- Docker Hub: a SaaS platform for sharing and managing Docker images

Docker uses Linux Containers to provide isolation, sandboxing, reproducibility, resource constraints, snapshotting, and several other advantages. Read this excellent piece at InfoQ on Docker Containers for more details on this.

Images are the "build component" of Docker: a read-only template of an application operating system. Containers are the runtime representation, created from images; they are the "run component" of Docker. Containers can be run, started, stopped, moved, and deleted. Images are stored in a registry, the "distribution component" of Docker.

Docker in turn consists of two components:

- Daemon: runs on a host machine and does the heavy lifting of building, running, and distributing Docker containers.
- Client: a Docker binary that accepts commands from the user and communicates back and forth with the Daemon.

How do these work together?

The Client communicates with the Daemon, either co-located on the same host or on a different host. It requests the Daemon to pull an image from the repository using the pull command. The Daemon then downloads the image from Docker Hub, or whatever registry is configured. Multiple images can be downloaded from the registry and installed on the Daemon's host. The Client can then start a container using the run command. The complete list of client commands can be seen here. The Client communicates with the Daemon using sockets or a REST API.

Because Docker uses Linux kernel features, does that mean I can use it only on Linux-based machines?

The Docker daemon and client for different operating systems can be installed from docs.docker.com/installation/. As you can see, Docker can be installed on a wide variety of platforms, including Mac and Windows. For non-Linux machines, a lightweight virtual machine needs to be installed, and the Daemon is installed within it. A native client is then installed on the machine and communicates with the Daemon.
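Because the client talks to the daemon over a socket or the REST-based Remote API, you can also drive the daemon without the docker binary at all. As a quick illustration, here is a minimal, hypothetical Java sketch that lists running containers by calling the Remote API's /containers/json endpoint; the daemon address used here (192.168.59.103:2375) is the boot2docker default that appears in the logs below, and is an assumption about your setup:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ListContainers {
        public static void main(String[] args) throws Exception {
            // Assumption: the Docker daemon is listening on this TCP address
            URL url = new URL("http://192.168.59.103:2375/containers/json");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");

            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    // Raw JSON describing the running containers
                    System.out.println(line);
                }
            }
        }
    }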
Here is the log from booting the Docker daemon on a Mac:

    bash
    unset DYLD_LIBRARY_PATH ; unset LD_LIBRARY_PATH
    mkdir -p ~/.boot2docker
    if [ ! -f ~/.boot2docker/boot2docker.iso ]; then cp /usr/local/share/boot2docker/boot2docker.iso ~/.boot2docker/ ; fi
    /usr/local/bin/boot2docker init
    /usr/local/bin/boot2docker up && export DOCKER_HOST=tcp://$(/usr/local/bin/boot2docker ip 2>/dev/null):2375
    docker version

    ~> bash
    ~> unset DYLD_LIBRARY_PATH ; unset LD_LIBRARY_PATH
    ~> mkdir -p ~/.boot2docker
    ~> if [ ! -f ~/.boot2docker/boot2docker.iso ]; then cp /usr/local/share/boot2docker/boot2docker.iso ~/.boot2docker/ ; fi
    ~> /usr/local/bin/boot2docker init
    2014/07/16 09:57:13 Virtual machine boot2docker-vm already exists
    ~> /usr/local/bin/boot2docker up && export DOCKER_HOST=tcp://$(/usr/local/bin/boot2docker ip 2>/dev/null):2375
    2014/07/16 09:57:13 Waiting for VM to be started..........
    2014/07/16 09:57:35 Started.
    2014/07/16 09:57:35 To connect the Docker client to the Docker daemon, please set:
    2014/07/16 09:57:35     export DOCKER_HOST=tcp://192.168.59.103:2375
    ~> docker version
    Client version: 1.1.1
    Client API version: 1.13
    Go version (client): go1.2.1
    Git commit (client): bd609d2
    Server version: 1.1.1
    Server API version: 1.13
    Go version (server): go1.2.1
    Git commit (server): bd609d2

For example, the Docker Daemon and Client can be installed on a Mac following the instructions at docs.docker.com/installation/mac. The VM can be stopped from the CLI as:

    boot2docker stop

And then restarted again as:

    boot2docker boot

And logged in to as:

    boot2docker ssh

The complete list of boot2docker commands is available in the help:

    ~> boot2docker help
    Usage: boot2docker [<options>] <command> [<args>]

    boot2docker management utility.

    Commands:
        init                    Create a new boot2docker VM.
        up|start|boot           Start VM from any states.
        ssh [ssh-command]       Login to VM via SSH.
        save|suspend            Suspend VM and save state to disk.
        down|stop|halt          Gracefully shutdown the VM.
        restart                 Gracefully reboot the VM.
        poweroff                Forcefully power off the VM (might corrupt disk image).
        reset                   Forcefully power cycle the VM (might corrupt disk image).
        delete|destroy          Delete boot2docker VM and its disk image.
        config|cfg              Show selected profile file settings.
        info                    Display detailed information of VM.
        ip                      Display the IP address of the VM's Host-only network.
        status                  Display current state of VM.
        download                Download boot2docker ISO image.
        version                 Display version information.

Enough talk, show me an example! Some of the JBoss projects are available as Docker images at www.jboss.org/docker and can be installed following the commands explained on that page.
For example, the WildFly Docker image can be installed as:

    ~> docker pull jboss/wildfly
    Pulling repository jboss/wildfly
    2f170f17c904: Download complete
    511136ea3c5a: Download complete
    c69cab00d6ef: Download complete
    88b42ffd1f7c: Download complete
    fdbe853b54e1: Download complete
    bc93200c3ba0: Download complete
    0daf76299550: Download complete
    3a7e1274035d: Download complete
    e6e970a0db40: Download complete
    1e34f7a18753: Download complete
    b18f179f7be7: Download complete
    e8833789f581: Download complete
    159f5580610a: Download complete
    3111b437076c: Download complete

The image can be verified using the command:

    ~> docker images
    REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
    jboss/wildfly       latest              2f170f17c904        8 hours ago         1.048 GB

Once the image is downloaded, a container can be started as:

    docker run jboss/wildfly

By default, Docker containers do not provide an interactive shell or input from STDIN. So if the WildFly Docker container is started using the command above, it cannot be terminated using Ctrl + C. Specifying the -i option makes it interactive, and the -t option allocates a pseudo-TTY. In addition, we'd also like to make port 8080 accessible outside the container, i.e. on our localhost. This can be achieved by specifying -p 80:8080, where 80 is the host port and 8080 is the container port. So we'll run the container as:

    docker run -i -t -p 80:8080 jboss/wildfly
    =========================================================================
      JBoss Bootstrap Environment
      JBOSS_HOME: /opt/wildfly
      JAVA: java
      JAVA_OPTS:  -server -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
    =========================================================================
    22:08:29,943 INFO  [org.jboss.modules] (main) JBoss Modules version 1.3.3.Final
    22:08:30,200 INFO  [org.jboss.msc] (main) JBoss MSC version 1.2.2.Final
    22:08:30,297 INFO  [org.jboss.as] (MSC service thread 1-6) JBAS015899: WildFly 8.1.0.Final "Kenny" starting
    22:08:31,935 INFO  [org.jboss.as.server] (Controller Boot Thread) JBAS015888: Creating http management service using socket-binding (management-http)
    22:08:31,961 INFO  [org.xnio] (MSC service thread 1-7) XNIO version 3.2.2.Final
    22:08:31,974 INFO  [org.xnio.nio] (MSC service thread 1-7) XNIO NIO Implementation Version 3.2.2.Final
    22:08:32,057 INFO  [org.wildfly.extension.io] (ServerService Thread Pool -- 31) WFLYIO001: Worker 'default' has auto-configured to 16 core threads with 128 task threads based on your 8 available processors
    22:08:32,108 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 32) JBAS010280: Activating Infinispan subsystem.
    22:08:32,110 INFO  [org.jboss.as.naming] (ServerService Thread Pool -- 40) JBAS011800: Activating Naming Subsystem
    22:08:32,133 INFO  [org.jboss.as.security] (ServerService Thread Pool -- 45) JBAS013171: Activating Security Subsystem
    22:08:32,178 INFO  [org.jboss.as.jsf] (ServerService Thread Pool -- 38) JBAS012615: Activated the following JSF Implementations: [main]
    22:08:32,206 WARN  [org.jboss.as.txn] (ServerService Thread Pool -- 46) JBAS010153: Node identifier property is set to the default value. Please make sure it is unique.
    22:08:32,348 INFO  [org.jboss.as.security] (MSC service thread 1-3) JBAS013170: Current PicketBox version=4.0.21.Beta1
    22:08:32,397 INFO  [org.jboss.as.webservices] (ServerService Thread Pool -- 48) JBAS015537: Activating WebServices Extension
    22:08:32,442 INFO  [org.jboss.as.connector.logging] (MSC service thread 1-13) JBAS010408: Starting JCA Subsystem (IronJacamar 1.1.5.Final)
    22:08:32,512 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-9) JBAS017502: Undertow 1.0.15.Final starting
    22:08:32,512 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 47) JBAS017502: Undertow 1.0.15.Final starting
    22:08:32,570 INFO  [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 27) JBAS010403: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3)
    22:08:32,660 INFO  [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-10) JBAS010417: Started Driver service with driver-name = h2
    22:08:32,736 INFO  [org.jboss.remoting] (MSC service thread 1-7) JBoss Remoting version 4.0.3.Final
    22:08:32,836 INFO  [org.jboss.as.naming] (MSC service thread 1-15) JBAS011802: Starting Naming Service
    22:08:32,839 INFO  [org.jboss.as.mail.extension] (MSC service thread 1-15) JBAS015400: Bound mail session [java:jboss/mail/Default]
    22:08:33,406 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 47) JBAS017527: Creating file handler for path /opt/wildfly/welcome-content
    22:08:33,540 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-13) JBAS017525: Started server default-server.
    22:08:33,603 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-8) JBAS017531: Host default-host starting
    22:08:34,072 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-13) JBAS017519: Undertow HTTP listener default listening on /0.0.0.0:8080
    22:08:34,599 INFO  [org.jboss.as.server.deployment.scanner] (MSC service thread 1-11) JBAS015012: Started FileSystemDeploymentService for directory /opt/wildfly/standalone/deployments
    22:08:34,619 INFO  [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-9) JBAS010400: Bound data source [java:jboss/datasources/ExampleDS]
    22:08:34,781 INFO  [org.jboss.ws.common.management] (MSC service thread 1-13) JBWS022052: Starting JBoss Web Services - Stack CXF Server 4.2.4.Final
    22:08:34,843 INFO  [org.jboss.as] (Controller Boot Thread) JBAS015961: Http management interface listening on http://0.0.0.0:9990/management
    22:08:34,844 INFO  [org.jboss.as] (Controller Boot Thread) JBAS015951: Admin console listening on http://0.0.0.0:9990
    22:08:34,845 INFO  [org.jboss.as] (Controller Boot Thread) JBAS015874: WildFly 8.1.0.Final "Kenny" started in 5259ms - Started 184 of 233 services (81 services are lazy, passive or on-demand)

The container's IP address can be found as:

    ~> boot2docker ip
    The VM's Host only interface IP address is: 192.168.59.103

The started container can be verified using the command:

    ~> docker ps
    CONTAINER ID        IMAGE                  COMMAND                CREATED             STATUS              PORTS                NAMES
    b2f8001164b0        jboss/wildfly:latest   /opt/wildfly/bin/sta   46 minutes ago      Up 12 minutes       8080/tcp, 9990/tcp   sharp_pare

And now the WildFly server can be accessed on your local machine at http://192.168.59.103, and looks as shown. Finally, the container can be stopped by hitting Ctrl + C, or by giving the command:

    ~> docker stop b2f8001164b0
    b2f8001164b0

The container id obtained from "docker ps" is passed to the command here.
More detailed instructions on using this image, such as booting in domain mode, deploying applications, etc., can be found at github.com/jboss/dockerfiles/blob/master/wildfly/README.md. What else would you like to see in the WildFly Docker image? File an issue at github.com/jboss/dockerfiles/issues. Other images that are available at jboss.org/docker are:

- KeyCloak
- TorqueBox
- Immutant
- LiveOak
- AeroGear

Did you know that Red Hat is among the top contributors to Docker, with 5 Red Hatters from Project Atomic working on it?

Posted by on 17 July 2014 | 12:45 am

Adding Java EE 7 Batch Addon to JBoss Forge? – Part 5 (Tech Tip #38)

This is the fifth part (part 1, part 2, part 3, part 4) of a multi-part video series where Lincoln Baxter (@lincolnthree) and I are interactively building a Forge addon to add Java EE 7 Batch functionality.

Part 1 showed how to get started with creating an addon, add relevant POM dependencies, build and install the addon using the Forge shell, add a new command batch-new-jobxml, and add --reader, --processor, and --writer parameters to the newly added command. Part 2 showed how to identify classes for each CLI parameter that already honor the contract required by the Batch specification. Part 3 showed how parameters can be made required, created templates for the reader, processor, and writer, and validated the specified parameters. Part 4 added a new test for the command and showed how Forge can be used in debug mode.

This part shows how to:

- Fix a bug reported by a community member
- Start work on another issue, to make processor validation optional

Enjoy! As always, the evolving source code is available at github.com/javaee-samples/forge-addons. The debugging will continue in the next episode.

Posted by on 15 July 2014 | 5:52 pm

From framework to platform

When I started my career as a Java developer close to 10 years ago, the industry was going through a revolutionary change. The Spring framework, which was released in 2003, was quickly gaining ground and became a serious challenger to the bulky J2EE platform. Having gone through that transition period, I quickly found myself in favour of the Spring framework over the J2EE platform, even though the earlier versions of Spring were very tedious when it came to declaring beans. What happened next was the revamping of the J2EE standard, which was later renamed Java EE. Still, what dominated this era was the use of open source frameworks over the platform proposed by Sun. This practice gives developers full control over the technologies they use, but inflates the deployment size. Slowly, as cloud applications became the norm for modern applications, I observed a trend of moving infrastructure services from framework to platform again. However, this time, it is not motivated by cloud applications.

Framework vs Platform

I had never heard of, or had to use, any framework in school. However, after joining the industry, I found that it is tough to build scalable and configurable software without the help of a framework. From my understanding, any application consists of code that implements business logic, and other code that provides helpers and utilities or sets up infrastructure. The code that is not related to business logic, being used repeatedly in many projects, can be generalized and extracted for reuse. The output of this extraction process is a framework. To make it shorter: a framework is any code that is not related to business logic, but helps to address common concerns in applications and is fit to be reused. Following this definition, MVC, Dependency Injection, Caching, JDBC Templates, and ORMs are all frameworks.

A platform is similar to a framework, as it also helps to address common concerns in applications, but in contrast to a framework, the service is provided outside the application. Therefore, a common service endpoint can serve multiple applications at the same time. The services provided by a Java EE application server, or by Amazon Web Services, are examples of platforms. Comparing the two approaches, a platform is more scalable and easier to use than a framework, but it also offers less control. Because of these advantages, platforms seem to be the better approach to use when we build cloud applications.

When should we use a platform over a framework?

Moving toward platforms does not guarantee that developers will get rid of frameworks. Rather, platforms only complement frameworks in building applications. However, on some occasions we do have a choice between using a platform or a framework to achieve the final goal. In my personal opinion, a platform is a better choice than a framework when the following conditions are met:

- The framework is tedious to use and maintain
- The service has some common information to be shared among instances
- The service can utilize additional hardware to improve performance

In the office, we still use the Spring framework, the Play framework, or RoR in our applications, and this will not change any time soon. However, to move into the cloud era, we migrated some of our existing products from internal hosting to Amazon EC2 servers. In order to make the best use of the Amazon infrastructure and improve software quality, we have done some major refactoring of our current software architecture.
Here are some of the platforms that we are integrating our products with:

Amazon Simple Storage Service (Amazon S3) & Amazon CloudFront

We found that Amazon CloudFront is pretty useful for boosting the average response time of our applications. Previously, we hosted most of the applications in our internal server farms, located in the UK and the US. This led to a noticeable increase in response time for customers on other continents. Fortunately, Amazon has a much greater infrastructure, with server farms built all around the world. That helps to guarantee a constant delivery time, no matter the customer's location. Currently, due to the manual effort needed to set up a new instance for an application, we feel that the best use for Amazon CloudFront is with static content, which we host separately from the application in Amazon S3. This practice gives us a double benefit in performance: the more consistent delivery time offered by the CDN, plus a separate connection count in the browser for the static content.

Amazon ElastiCache

Caching has never been easy in a cluster environment. The word "cluster" means that your object will not be stored in and retrieved from local memory; rather, it is sent and retrieved over the network. This task was quite tricky in the past, because developers needed to sync the records from one node to another. Unfortunately, not every caching framework supports this feature automatically. Our best framework for distributed caching was Terracotta. Now, we have turned to Amazon ElastiCache, because it is cheap and reliable and saves us the huge effort of setting up and maintaining a distributed cache. It is worth highlighting that distributed caching is never meant to replace local caching. The difference in performance suggests that we should only prefer distributed caching over local caching when users need to access real-time temporary data.

Event Logging for Data Analytics

In the past, we used Google Analytics for analysing user behaviour, but we later decided to build an internal data warehouse. One of the motivations was the ability to track events from both browsers and servers. The event tracking system uses MongoDB as its database, as it allows us to quickly store a huge number of events. To simplify the creation and retrieval of events, we chose JSON as the format for events. We cannot simply send events directly from the browser to the event tracking server, because browsers prevent this kind of cross-domain request. For this reason, Google Analytics sends its events to the server in the form of a GET request for a static resource. As we have full control over how our applications are built, we chose to have events sent back to the application server first, and routed to the event tracking server later. This approach is much more convenient and powerful.

Knowledge Portal

In the past, applications accessed data from a database or an internal file repository. However, to be able to scale better, we gathered all our knowledge together to build a knowledge portal, and we also built a query language to retrieve knowledge from this portal. This approach adds one additional layer to the knowledge retrieval process, but fortunately for us, our system does not need to serve real-time data, so we can utilize caching to improve performance.

Conclusion

The above is some of our experience of transforming a software architecture when moving to the cloud. Please share your experience and opinions with us.

Posted by on 14 July 2014 | 2:45 pm

BDD Requirements Management with JBehave, Thucydides and JIRA - Part 2

Thucydides is an open source library designed to make practicing Behaviour Driven Development easier. Thucydides plays nicely with BDD tools such as JBehave, or even more traditional tools like JUnit, to make writing automated acceptance tests easier and to provide richer and more useful living documentation. In this series of articles, we look at the tight one- and two-way integration that Thucydides offers with JIRA. The first article discussed basic one-way integration with JIRA. In this article, we will look at taking that integration further. We will see how to insert links to the Thucydides reports into JIRA, how to update the state of JIRA issues based on the Thucydides test outcomes, and how to report on JIRA versions and releases in the Thucydides reports. The rest of this article assumes you have some familiarity with Thucydides. For a tutorial introduction to Thucydides, check out the Thucydides Documentation, or this article for a quick introduction.

Links from JIRA to Thucydides

The simplest form of two-way integration between Thucydides and JIRA is to get Thucydides to insert a comment containing links to the Thucydides test reports for each related issue card. To get this to work, you need to tell Thucydides where the reports live. One way to do this is to add a property called thucydides.public.url to your thucydides.properties file with the address of the Thucydides reports:

    thucydides.public.url=http://buildserver.myorg.com/latest/thucydides/report

This tells Thucydides that you not only want links from the Thucydides reports to JIRA, but that you also want to include links in the JIRA cards back to the corresponding Thucydides reports. When this property is defined, Thucydides will add a comment with these links to any issues associated with the executed tests.

The thucydides.public.url will typically point to a local web server where you deploy your reports, or to a path within your CI server. For example, you could publish the Thucydides reports on Jenkins using the Jenkins HTML Publisher Plugin, and then add a line like the following to your thucydides.properties file:

    thucydides.public.url=http://jenkins.myorg.com/job/myproject-acceptance-tests/Thucydides_Report/

If you do not want Thucydides to update the JIRA issues for a particular run (e.g. when running your tests locally), you can also set thucydides.skip.jira.updates to true:

    thucydides.skip.jira.updates=true

This will simply write the relevant issue numbers to the log rather than trying to connect to JIRA.

Updating JIRA issue states

You can also configure the plugin to update the status of JIRA issues. This is deactivated by default; to use this option, you need to set the thucydides.jira.workflow.active option to true:

    thucydides.jira.workflow.active=true

The default configuration will work with the default JIRA workflow: open or in-progress issues associated with successful tests will be resolved, and closed or resolved issues associated with failing tests will be reopened. If you are using a customized workflow, or want to modify the way the transitions work, you can write your own workflow configuration. Workflow configuration uses a simple Groovy DSL.
The following is an example of the configuration file used for the default workflow:

    when 'Open', {
        'success' should: 'Resolve Issue'
    }

    when 'Reopened', {
        'success' should: 'Resolve Issue'
    }

    when 'Resolved', {
        'failure' should: 'Reopen Issue'
    }

    when 'In Progress', {
        'success' should: ['Stop Progress', 'Resolve Issue']
    }

    when 'Closed', {
        'failure' should: 'Reopen Issue'
    }

You can write your own configuration file and place it on the classpath of your test project (e.g. in the resources directory). Then you can override the default configuration by using the thucydides.jira.workflow property, e.g.

    thucydides.jira.workflow=my-workflow.groovy

Alternatively, you can simply create a file called jira-workflow.groovy and place it somewhere on your classpath (e.g. in the src/test/resources directory). Thucydides will then use this workflow. In both these cases, you don't need to explicitly set the thucydides.jira.workflow.active property.

Release management

In JIRA, you can organize your project releases into versions, and assign cards to one or more versions using the Fix Version/s field. By default, Thucydides will read version details from the Releases in JIRA. Test outcomes will be associated with a particular version using the "Fixed versions" field. The Releases tab gives you a run-down of the different planned versions, and how well they have been tested so far.

JIRA uses a flat version structure: you can't have, for example, releases that are made up of a number of sprints. Thucydides lets you organize these in a hierarchical structure based on a simple naming convention. By default, Thucydides uses "release" as the highest level release, and either "iteration" or "sprint" as the second level. For example, suppose you have the following list of versions in JIRA:

- Release 1
- Iteration 1.1
- Iteration 1.2
- Release 2
- Release 3

This will produce release reports for Release 1, Release 2, and Release 3, with Iteration 1.1 and Iteration 1.2 appearing underneath Release 1. The reports will contain the list of requirements and test outcomes associated with each release, and you can drill down into any of the releases to see the details about that particular release. You can also customize the names of the types of release using the thucydides.release.types property, e.g.

    thucydides.release.types=milestone, release, version

Conclusion

Thucydides has powerful one- and two-way integration with JIRA. In these articles, we have seen how you can incorporate links from Thucydides to JIRA and from JIRA to Thucydides, and even update the status of issues in JIRA based on the test results. And, if you are managing your versions in JIRA, you can also report on how well each version has been tested, and what remains to be tested before the next release. Want to learn more? Be sure to check out the Thucydides web site, the Thucydides Blog, or join the Thucydides Google Users Group to join the discussion with other Thucydides users. Wakaleo Consulting, the company behind Thucydides, also runs regular courses in Australia, London and Europe on related topics such as Agile Requirements Gathering, Behaviour Driven Development, Test Driven Development, and Automated Acceptance Testing.

Posted by on 14 July 2014 | 1:01 pm

How JDK 8 standardizes and augments Guava library functionalities

JDK 8 introduced a lot of new features and improvements in the platform, from lambda expressions, stream collection types, functional interfaces, and type annotations to Nashorn. The Guava library from Google provided some support for functional programming idioms prior to JDK 8. I have been using Guava for some of my projects, so here is a small write-up on how the new functionality added in JDK 8 provides a standardized way to achieve functionality offered by Google's Guava. This article further highlights similarities and differences between the two APIs, and was inspired by this discussion on Google Groups. The following table shows some of the APIs which I will cover in detail with respect to Guava and JDK 8:

    Functionality                        Guava                             JDK 8
    Predicate                            apply(T input)                    test(T input)
    Combining predicates                 Predicates.and/or/not             Predicate.and/or/negate
    Supplier                             Supplier.get                      Supplier.get
    Joiner / StringJoiner                Joiner.join()                     StringJoiner.add()
    SettableFuture / CompletableFuture   SettableFuture.set(T input)       CompletableFuture.complete(T input)
    Optional                             Optional.of/fromNullable/absent   Optional.of/ofNullable/empty

Source Code

The following code snippets are part of a complete sample available at https://github.com/bhakti-mehta/samples/tree/master/jdk8-and-guava. For the sake of simplicity, I have a simple sample which works with a collection of people's data. We start with a simple POJO, Person, used in both the JDK 8 and Guava cases:

    public class Person {
        private String firstName;
        private String lastName;
        private int age;
        private Optional<String> suffix;
        ......

As shown in the above snippet, the Person class has fields like firstName, lastName, age, and an Optional suffix, along with getters and setters for these fields.

1.0 Predicates

A Predicate is a boolean-valued function of an argument. We will now define a Predicate in Guava and in JDK 8, and show how to get the list of people whose age is over 30. The following snippets show how to use a Predicate whose apply(Person input) or test(Person input) method takes a Person object as input and validates whether the age of the person is above 30.

1.1 Predicate with Guava

Here is the code showing how to use com.google.common.base.Predicate:

    final List<Person> persons = Person.createList();
    final Predicate<Person> ageOver30 = new Predicate<Person>() {
        public boolean apply(Person input) {
            return input.getAge() > 30;
        }
    };
    Collection<Person> filteredPersons = Collections2.filter(persons, ageOver30);

The above snippet returns a Collection of the persons that satisfy the predicate ageOver30, using the Collections2.filter() method, which takes a Predicate as an argument.

1.2 Predicate with JDK 8

Here is a snippet showing how to achieve the same behaviour using java.util.function.Predicate, whose test() method checks the ageOver30 condition:

    final List<Person> persons = Person.createList();
    final Predicate<Person> ageOver30 = new Predicate<Person>() {
        public boolean test(Person person) {
            return person.getAge() > 30;
        }
    };
    Stream<Person> filteredPersons = persons.stream().filter(ageOver30);

The above snippet transforms the List<Person> into a Stream with the stream() method on the Collection interface. The filter() method takes the ageOver30 Predicate and returns a stream that satisfies the criteria.
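As an aside (mine, not the original article's): since java.util.function.Predicate is a functional interface, the anonymous class in the JDK 8 snippet above can be collapsed into a lambda expression, which is the more idiomatic JDK 8 style. This sketch reuses the same persons list and Person class as the snippet above:

    // The same predicate, written as a lambda expression
    Predicate<Person> ageOver30 = person -> person.getAge() > 30;

    // And the same filtering, collected back into a List
    List<Person> over30 = persons.stream()
            .filter(ageOver30)
            .collect(Collectors.toList()); // java.util.stream.Collectors

This behaves exactly like the anonymous-class version; the remaining examples keep the anonymous-class form used by the article.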
2.0 Combining Predicates

Predicates can be combined with other predicates. For example, in our sample we need to find the list of people whose age is over 30 and whose name begins with "W". We can achieve this by creating two predicates, ageOver30 and nameBeginsWith, and then combining them by calling the and method with these two predicates.

2.1 Combining Predicates with Guava

Here is a code snippet with the Guava Predicate class which defines the two predicates ageOver30 and nameBeginsWith:

    final List<Person> persons = Person.createList();
    final Predicate<Person> ageOver30 = new Predicate<Person>() {
        public boolean apply(Person input) {
            return input.getAge() > 30;
        }
    };
    final Predicate<Person> nameBeginsWith = new Predicate<Person>() {
        public boolean apply(Person person) {
            return person.getLastName().startsWith("W");
        }
    };
    Collection<Person> filteredPersons = Collections2.filter(persons,
            Predicates.and(ageOver30, nameBeginsWith));

The above snippet returns a filtered list from the Collections2.filter() method by passing the combined predicates Predicates.and(ageOver30, nameBeginsWith).

2.2 Combining Predicates with JDK 8

Here is the same functionality using java.util.function.Predicate.and/or/negate:

    public Stream<Person> getMultiplePredicates() {
        final List<Person> persons = Person.createList();
        final Predicate<Person> ageOver30 = new Predicate<Person>() {
            public boolean test(Person person) {
                return person.getAge() > 30;
            }
        };
        final Predicate<Person> nameBeginsWith = new Predicate<Person>() {
            public boolean test(Person person) {
                return person.getLastName().startsWith("W");
            }
        };
        Stream<Person> filteredPersons = persons.stream().filter(
                ageOver30.and(nameBeginsWith));
        return filteredPersons;
    }

The above snippet returns a stream by filtering on the combined ageOver30.and(nameBeginsWith) predicate.

3.0 Supplier

A Supplier is a functional interface that encapsulates an operation and allows lazy evaluation of that operation. It supplies objects of a particular type.

3.1 Supplier in Guava

Here is a snippet showing how to use com.google.common.base.Supplier in Guava:

    public int getSupplier() {
        Supplier<Person> person = new Supplier<Person>() {
            public Person get() {
                return new Person("James", "Sculley", 53, Optional.of("Sr"));
            }
        };
        return person.get().getAge();
    }

As seen in the above snippet, we create a new Supplier whose get() method returns a new instance of Person.

3.2 Supplier in JDK 8

The following code shows how to create a java.util.function.Supplier with lambda expressions in JDK 8:

    public int getSupplier() {
        final List<Person> persons = Person.createList();
        Supplier<Person> anotherone = () -> {
            Person psn = new Person("James", "Sculley", 53, Optional.of("Sr"));
            return psn;
        };
        return anotherone.get().getAge();
    }

As shown in the above snippet, similar to the Guava case, we create a new Supplier whose get() method returns a new instance of Person.

4.0 Joiner/StringJoiner

A Joiner in Guava, or a StringJoiner in JDK 8, joins pieces of text separated by a delimiter.
4.1 Joiner in Guava

Here is an example of a Joiner in Guava which joins various strings delimited by "; ":

    public String getJoiner() {
        Joiner joiner = Joiner.on("; ");
        return joiner.join("Violet", "Indigo", "Blue", "Green", "Yellow", "Orange", "Red");
    }

4.2 StringJoiner in JDK 8

The following snippet shows the equivalent functionality in JDK 8:

    public String getJoiner() {
        StringJoiner joiner = new StringJoiner("; ");
        return joiner.add("Violet").add("Indigo").add("Blue").add("Green")
                .add("Yellow").add("Orange").add("Red").toString();
    }

5.0 java.util.Optional

java.util.Optional is a way for programmers to indicate that there may have been a value initially that is now set to null, or that no value was ever found.

5.1 Optional in Guava

Here is a summary of com.google.common.base.Optional:

- Optional.of(T): makes an Optional containing the given non-null value, or fails fast on null.
- Optional.absent(): returns an absent Optional of some type.
- Optional.fromNullable(T): turns the given possibly-null reference into an Optional, treating non-null as present and null as absent.

Here is the code which declares the suffix of a Person as Optional:

    Optional<String> suffix = Optional.of("Sr");

5.2 Optional in JDK 8

- Optional.of(T): returns an Optional with the specified present non-null value.
- Optional.ofNullable(T): returns an Optional describing the specified value if non-null, otherwise returns an empty Optional.
- Optional.empty(): returns an empty Optional instance; no value is present for this Optional.

Here is the code which declares the suffix of a Person as Optional:

    Optional<String> suffix = Optional.of("Sr");

6.0 SettableFuture in Guava / CompletableFuture in JDK 8

These extend Future and provide an asynchronous, event-driven programming model, in contrast to the blocking nature of Future in Java. SettableFuture is similar to CompletableFuture in JDK 8, which can help to create a Future object for an event or a task which will occur later. Code calling future.get() will block until then. When the asynchronous task finishes execution, it calls future.set() (or future.complete()), and all the code blocking on Future.get() then gets the result.

6.1 SettableFuture in Guava

Here is a simple case demonstrating this functionality in Guava:

    public SettableFuture<String> getSettableFuture() {
        final SettableFuture<String> future = SettableFuture.create();
        return future;
    }

    public void handleFutureTask(SettableFuture<String> sf) throws InterruptedException {
        Thread.sleep(5000);
        sf.set("Test");
    }

In the above snippet we create a new SettableFuture in its default state using SettableFuture.create(). The set() method sets the value for this future object.

6.2 CompletableFuture in JDK 8

The following code shows how the equivalent functionality is achieved with CompletableFuture in JDK 8:

    public CompletableFuture<String> getCompletableFuture() {
        final CompletableFuture<String> future = new CompletableFuture<>();
        return future;
    }

    public void handleFutureTask(CompletableFuture<String> cf) throws InterruptedException {
        Thread.sleep(5000);
        cf.complete("Test");
    }

As shown in the above snippet, we create a CompletableFuture and invoke the complete() method to set the value for this future object.
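To make the blocking get()/complete() interplay described above concrete, here is a small, self-contained sketch (my illustration, not part of the original sample) that wires the two methods together:

    import java.util.concurrent.CompletableFuture;

    public class CompletableFutureDemo {
        public static void main(String[] args) throws Exception {
            final CompletableFuture<String> future = new CompletableFuture<>();

            // This thread blocks on get() until the future is completed
            Thread waiter = new Thread(() -> {
                try {
                    System.out.println("Got: " + future.get());
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            waiter.start();

            Thread.sleep(5000);      // simulate a long-running asynchronous task
            future.complete("Test"); // releases the blocked get() call
            waiter.join();
        }
    }

The Guava version would look much the same, with SettableFuture.create() and set("Test") in place of the constructor and complete("Test").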
The above samples showed how JDK 8 standardizes, in the platform itself, and augments some of the functionality that the Guava library aimed to provide in the JDK 7 era. JDK 8 has been a great leap in terms of the newer capabilities it provides, and Guava will definitely provide additional improvements on top of the standardized API.

Posted by on 14 July 2014 | 6:11 am

BDD Requirements Management with JBehave, Thucydides and JIRA – Part 1

Thucydides is an open source library designed to make practicing Behaviour Driven Development easier. Thucydides plays nicely with BDD tools such as JBehave, or even more traditional tools like JUnit, to make writing automated acceptance tests easier and to provide richer and more useful living documentation. In this series of two articles, we will look at the tight one- and two-way integration that Thucydides offers with JIRA. The rest of this article assumes you have some familiarity with Thucydides. For a tutorial introduction to Thucydides, check out the Thucydides Documentation, or this article for a quick introduction.

Getting started with Thucydides/JIRA integration

JIRA is a popular issue tracking system that is also often used for Agile project and requirements management. Many teams using JIRA store their requirements electronically in the form of story cards and epics in JIRA.

Suppose we are implementing a Frequent Flyer application for an airline. The idea is that travellers will earn points when they fly with our airline, based on the distance they fly. Travellers start out with a "Bronze" status, and can earn a better status by flying more frequently. Travellers with a higher frequent flyer status benefit from advantages such as lounge access, prioritized boarding, and so on.

A story card for this feature contains a description following one of the frequently used formats for user story descriptions ("As a... I want... so that"). It also contains a custom "Acceptance Criteria" field, where we can write down a brief outline of the "definition of done" for this story. These stories can be grouped into epics, and placed into sprints for project planning on the JIRA Agile board.

As illustrated on the story card, each of these stories has a set of acceptance criteria, which we can build into more detailed scenarios, based on concrete examples. We can then automate these scenarios using a BDD tool like JBehave. The story card describes how many points members need to earn to be awarded each status level. A JBehave scenario for this story might look like this:

    Frequent Flyer status is calculated based on points

    Meta:
    @issue FH-17

    Scenario: New members should start out as Bronze members

    Given Jill Smith is not a Frequent Flyer member
    When she registers on the Frequent Flyer program
    Then she should have a status of Bronze

    Scenario: Members should get status updates based on status points earned

    Given a member has a status of <initialStatus>
    And he has <initialStatusPoints> status points
    When he earns <extraPoints> extra status points
    Then he should have a status of <finalStatus>

    Examples:
    | initialStatus | initialStatusPoints | extraPoints | finalStatus | notes                    |
    | Bronze        | 0                   | 300         | Silver      | 300 points for Silver    |
    | Silver        | 0                   | 700         | Gold        | 700 points for Gold      |
    | Gold          | 0                   | 1500        | Platinum    | 1500 points for Platinum |

Thucydides lets you associate JBehave stories or JUnit tests with a JIRA card using the @issue meta tag (illustrated above), or the equivalent @Issue annotation in JUnit. At the most basic level, this will generate links from your test reports back to the corresponding JIRA cards. For this to work, Thucydides needs to know where your JIRA server is.
The simplest way to do this is to define the following properties in a file called thucydides.properties in your project root directory:

    jira.url=https://myserver.atlassian.net
    jira.project=FH
    jira.username=jirauser
    jira.password=t0psecret

You can also set these properties up in your Maven pom.xml file, or pass them in as system properties. Thucydides also supports two-way integration with JIRA: you can get Thucydides to update the JIRA issue with a comment pointing to the corresponding test result.

Feature Coverage

But test results only report part of the picture. If you are using JIRA to store your stories and epics, you can use these to keep track of progress. But how do you know what automated acceptance tests have been implemented for your stories and epics, and, equally importantly, how do you know which stories or epics have no automated acceptance tests? In agile terms, a story cannot be declared "done" until the automated acceptance tests pass. Furthermore, we need to be confident not only that the tests exist, but that they test the right requirements, and that they test them sufficiently well. We call this idea of measuring the number (and quality) of the acceptance tests for each of the features we want to build "feature coverage". Thucydides can provide feature coverage reporting in addition to the more conventional test results. If you are using JIRA, you will need to add thucydides-jira-requirements-provider to the dependencies section of your pom.xml file:

    <dependencies>
        ...
        <dependency>
            <groupId>net.thucydides.plugins.jira</groupId>
            <artifactId>thucydides-jira-requirements-provider</artifactId>
            <version>0.9.260</version>
        </dependency>
    </dependencies>

(The actual version number might be different for you; always take a look at Maven Central to see what the latest version is.) You will also need to add this dependency to the Thucydides reporting plugin configuration:

    <build>
        ...
        <plugins>
            ...
            <plugin>
                <groupId>net.thucydides.maven.plugins</groupId>
                <artifactId>maven-thucydides-plugin</artifactId>
                <version>0.9.257</version>
                <executions>
                    <execution>
                        <id>thucydides-reports</id>
                        <phase>post-integration-test</phase>
                        <goals>
                            <goal>aggregate</goal>
                        </goals>
                    </execution>
                </executions>
                <dependencies>
                    <dependency>
                        <groupId>net.thucydides.plugins.jira</groupId>
                        <artifactId>thucydides-jira-requirements-provider</artifactId>
                        <version>0.9.260</version>
                    </dependency>
                </dependencies>
            </plugin>
        </plugins>
    </build>

Now, when you run the tests, Thucydides will query JIRA to determine the epics and stories that you have defined, and list them in the Requirements page.
This page gives you an overview of how many requirements (epics and stories) have passing tests (green), how many have failing (red) or broken (orange) tests, and how many have no tests at all (blue). If you click on an epic, you can see the stories defined for the epic, including an indicator (in the "Coverage" column) of how well each story has been tested. From here, you may want to drill down into the details about a given story, including what acceptance tests have been defined for this story, and whether they ran successfully.

Both JIRA and the JIRA-Thucydides integration are quite flexible. We saw earlier that we had configured a custom "Acceptance Criteria" field in our JIRA stories. We have displayed this custom field in the report shown above by including it in the thucydides.properties file, like this:

    jira.custom.field.1=Acceptance Criteria

Thucydides reads the narrative text appearing in this report ("As a frequent flyer…") from the Description field of the corresponding JIRA card. We can override this behavior and get Thucydides to read this value from a different custom field using the jira.custom.narrative.field property. For example, some teams use a custom field called "User Story" to store the narrative text instead of the Description field. We could get Thucydides to use this field as follows:

    jira.custom.narrative.field=User Story

Conclusion

Thucydides has rich and flexible one- and two-way integration with JIRA. Not only can you link back to JIRA story cards from your acceptance test reports and display information about stories from JIRA in the test reports, you can also read the requirements structure from JIRA, and report on which features have been tested, and which have not. In the next article in this series, we will learn how to insert links to the Thucydides reports into the JIRA issues, and how to actively update the state of the JIRA cards based on the outcomes of your tests. Want to learn more? Be sure to check out the Thucydides web site, the Thucydides Blog, or join the Thucydides Google Users Group to join the discussion with other Thucydides users. Wakaleo Consulting, the company behind Thucydides, also runs regular courses in Australia, London and Europe on related topics such as Agile Requirements Gathering, Behaviour Driven Development, Test Driven Development, and Automated Acceptance Testing.

Posted by on 10 July 2014 | 4:48 pm