HTTP/2 and Java

As my colleague and co-spec lead Shing-Wai Chan announced last week, Oracle is getting ready to start JCP work on Servlet 4.0, including support for HTTP/2. While this JSR hasn't officially started, there are two ongoing efforts for HTTP/2 in Java that I'd like to bring to your attention.

The first is from long-time Servlet EG member Greg Wilkins, whom we all know as the man behind Jetty. Greg is also on the IETF httpbis working group. Greg's credentials for being on the httpbis WG include his implementation of the most recent version of the HTTP/2 specification in the latest versions of Jetty. I look forward to bringing Greg's experience to bear on exposing the important features of HTTP/2 to users of the Servlet API.

The other effort is one you probably don't know about: the new HTTP/2 client in Java SE. For over 17 years, HttpURLConnection has been the face of HTTP in Java SE. All that is about to change in Java SE 9 with the implementation of JEP 110: HTTP 2 Client. This work is being done under the umbrella JSR for Java SE 9. That means the Java community is welcome to weigh in and contribute ideas. You can start out with the net-dev mailing list on OpenJDK. Shing-Wai and I are staying in touch with the development team behind JEP 110 and will very likely be re-using some code in the implementation of Servlet 4.0. Stay tuned for more developments in this exciting new area of the Internet.
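The JEP 110 API was still being designed when this was written, so the sketch below is illustrative only: it uses the fluent builder style that eventually shipped in the java.net.http package (Java 11 and later; in the JDK 9 incubator the same classes lived under a different package name). The point is the contrast with HttpURLConnection: client configuration, request building, and protocol version are all explicit.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;

public class Http2Sketch {
    public static void main(String[] args) {
        // Build a client that prefers HTTP/2 and falls back to HTTP/1.1
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();

        // Requests are built fluently - a marked departure from the
        // open-connection-then-mutate style of HttpURLConnection
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/"))
                .GET()
                .build();

        System.out.println(request.method() + " " + request.uri());
        // prints: GET https://example.com/
    }
}
```

No network call is made here; the request object alone is enough to show the shape of the API.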

Posted on 29 July 2014 | 5:24 pm

Embedding JavaFX Scene Builder 2.0 in NetBeans - Part 4

Tired of JavaFX Scene Builder being run in a separate process? Fed up with no real integration between your favorite IDE and JavaFX Scene Builder? There may be a solution heading towards you. Follow this small series of blog entries to join me on my journey towards an embedded JavaFX Scene Builder in NetBeans. Welcome back (you did read the first three parts of this series?)! Ok, having done a bit of magic to Scene Builder 2.0 in the first two parts of this series, let's go for some more integration. The goal of this part is to get change detection and undo/redo support working. Change detection is important to get the "Save" feature of NetBeans to work. Because the integration is actually based on reusing the fxml as a FXMLDataObject, it is necessary to detect a change done in the Scene Builder and apply the result back to the FXMLDataObject. But how do we detect the change? This can be done using the observable property revision from the JobManager, which in turn can be retrieved from the EditorController. The JobManager tracks every action applied to the scene as a "Job" and increments the revision with each change. With this, change detection can be done with the following code fragment:

```java
editorController.getJobManager().revisionProperty().addListener((ov, oldValue, newValue) -> {
    try {
        String updatedFXMLText = editorController.getFxmlText();
        EditorCookie editorCookie = dao.getLookup().lookup(EditorCookie.class);
        editorCookie.getDocument().remove(0, editorCookie.getDocument().getLength());
        editorCookie.getDocument().insertString(0, updatedFXMLText, null);
    } catch (BadLocationException ex) {
        Exceptions.printStackTrace(ex);
    }
});
```

This just reads the complete FXML as pure text from Scene Builder and puts it back into the document managed by NetBeans. This triggers a change on the FXMLDataObject side of the world, and the save logic becomes active. Knowing about the JobManager makes undo/redo integration quite simple.
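The revision counter at the heart of this is just an observable integer. Stripped of the Scene Builder and NetBeans classes, the same change-detection pattern can be sketched with nothing but the JDK (RevisionTracker and its listener below are stand-ins I made up, not the real JobManager API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntConsumer;

// Stand-in for Scene Builder's JobManager: every applied edit bumps a
// revision number and notifies listeners - all that change detection needs.
class RevisionTracker {
    private int revision = 0;
    private final List<IntConsumer> listeners = new ArrayList<>();

    void addListener(IntConsumer l) { listeners.add(l); }

    void applyJob(Runnable job) {
        job.run();
        revision++;                       // one increment per applied job
        listeners.forEach(l -> l.accept(revision));
    }
}

public class ChangeDetectionSketch {
    public static void main(String[] args) {
        StringBuilder document = new StringBuilder("<AnchorPane/>");
        RevisionTracker tracker = new RevisionTracker();

        // The "save" side only needs to know that *something* changed
        tracker.addListener(rev ->
                System.out.println("revision " + rev + ": " + document));

        tracker.applyJob(() -> document.insert(0, "<!-- edited -->"));
        // prints: revision 1: <!-- edited --><AnchorPane/>
    }
}
```

The design point is that the listener never inspects what changed; it simply re-reads the whole document, exactly as the fragment above re-reads the full FXML text.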
What we need from the NetBeans perspective is an UndoRedo implementation, which is returned from the appropriate method of our SBFxmlMultiViewElement:

```java
@Override
public UndoRedo getUndoRedo() {
    return null != undoRedo ? undoRedo : UndoRedo.NONE;
}
```

What about the required implementation? Looking at the APIs of UndoRedo and JobManager, they are quite similar. So creating a SceneBuilderUndoRedoBridge should be fairly simple:

```java
public class SceneBuilderUndoRedoBridge implements UndoRedo {

    private final JobManager manager;
    private final ChangeSupport changeSupport = new ChangeSupport(this);

    SceneBuilderUndoRedoBridge(JobManager manager) {
        this.manager = manager;
        this.manager.revisionProperty().addListener((observable) -> {
            EventQueue.invokeLater(() -> {
                changeSupport.fireChange();
            });
        });
    }

    @Override
    public boolean canUndo() {
        return manager.canUndo();
    }

    @Override
    public boolean canRedo() {
        return manager.canRedo();
    }

    @Override
    public void undo() throws CannotUndoException {
        Platform.runLater(() -> manager.undo());
    }

    @Override
    public void redo() throws CannotRedoException {
        Platform.runLater(() -> manager.redo());
    }

    @Override
    public void addChangeListener(ChangeListener cl) {
        changeSupport.addChangeListener(cl);
    }

    @Override
    public void removeChangeListener(ChangeListener cl) {
        changeSupport.removeChangeListener(cl);
    }

    @Override
    public String getUndoPresentationName() {
        return manager.getUndoDescription();
    }

    @Override
    public String getRedoPresentationName() {
        return manager.getRedoDescription();
    }
}
```

Now just make sure that the new UndoRedo support is available by adding one more line of code to the getVisualRepresentation() method of SBFxmlMultiViewElement:

```java
undoRedo = new SceneBuilderUndoRedoBridge(editorController.getJobManager());
```

That's it - change detection and undo/redo integration working! I am sure the code can be improved - just let me know and create a pull request ;-) Now you ask - can I download this magic plugin somewhere? The answer is simple: get it from the NetBeans Plugin Portal or install it via the Plugin Manager from inside NetBeans. What you will need in addition to the plugin is a fairly recent (latest) JDK8_u20 ea build. There are some issues with DnD and JavaFX-Swing integration which have been fixed in the latest builds. Stay tuned for the next part of this series, showing... whatever I come up with next. Any feature requests? Head over to NbSceneBuilder and let me know.

Posted on 28 July 2014 | 4:29 pm

Embedding JavaFX Scene Builder 2.0 in NetBeans - Part 3

Tired of JavaFX Scene Builder being run in a separate process? Fed up with no real integration between your favorite IDE and JavaFX Scene Builder? There may be a solution heading towards you. Follow this small series of blog entries to join me on my journey towards an embedded JavaFX Scene Builder in NetBeans. Welcome back (you did read the first two parts of this series?)! Ok, having done a bit of magic to Scene Builder 2.0 ea in the first two parts of this series, let's go for some more integration. The goal of this part is to get the inspector view, the CSS panel and the library panel from Scene Builder integrated. To achieve this we create a new TopComponent for each panel to show. The major thing is trying to get a reference to the EditorController via the selected Node. The following example code registers a listener so that lookup changes will be propagated to our TopComponent:

```java
@Override
public void componentOpened() {
    nodeResult = Utilities.actionsGlobalContext().lookupResult(Node.class);
    LookupListener nodeLkpL = (event) -> {
        final Optional<? extends Node> optionalNode = nodeResult.allInstances().stream().findFirst();
        if (optionalNode.isPresent()) {
            editorControllerResult = optionalNode.get().getLookup().lookupResult(EditorController.class);
            resultChanged(new LookupEvent(editorControllerResult));
        }
    };
    nodeResult.addLookupListener(nodeLkpL);
    nodeLkpL.resultChanged(new LookupEvent(nodeResult));
}
```

Now that we are aware of changing EditorControllers - what do we do with the EditorController retrieved from the lookup? We use it to create our own special panel, e.g. via the LibraryController, and add it to the TopComponent. If there is no EditorController anymore, only a message is shown that there is actually no Scene Builder content available to which this TopComponent can attach.
```java
@Override
public void resultChanged(LookupEvent le) {
    final Optional
```

Posted on 28 July 2014 | 8:23 am

On Heads, Trees, Cells and Brains - or: Why Flat Hierarchies Work

Today's business world is anything but peaceful. Instead, it is a rather cut-throat environment that doesn't forgive anything and is certainly not too friendly towards solo players trying to achieve great goals. You either grow or you die... or get eaten by someone with enough cash and an interest in the niche you found for yourself. This, roughly, is the current business environment. Its motto is "achieve more with less". This race for efficiency often takes its toll, and it's neither the stakeholders nor the investors who pay the price - it's staff. Motivated employees who do their best to move the business forward, and often invest more time than they're paid for, can only take so much before emotions start to shift. Research [1] suggests that more than half of employees hate their job: they wake up every morning dreading what will come at them during their day. Fatigue or stress, of course, can easily lead to worse, pathological symptoms. In order to still achieve their goals, and to increase efficiency, companies tend to turn to one of the following solutions: they either create an efficiency department, or they recognize that people are stressed and try to actually take care of employee happiness and invest into keeping them happy (and balanced). Unfortunately, when neither of the above works, it's time to look at the problem in depth and analyze what it is that prevents companies from achieving higher levels of efficiency. If you do so, you'll find that part of the problem is multiple levels of hierarchy. Information traversing those hierarchies gets lost or twisted heavily - whether it is on its way up or down. A nice anecdotal example of such a twist can even be found in movies for kids [2]. The result of recognizing traditional hierarchy as a problem is finding another approach - most often these days, it is flat hierarchy, where companies position themselves as a community of independent but well-collaborating teams.
The anatomy of an analogy

Before I go into the benefits of this approach, let's have a look at how to visually represent different kinds of hierarchies. While in the past we described management levels by using body part metaphors (controlled by the head, with arms and legs executing), flat hierarchies are better described by turning to cells. We describe such organizations as cell colonies working together, each providing some special function or output needed by others to perform their assigned task.

Top-down versus flat

The main difference between traditional top-down hierarchies and flat hierarchies is that the latter remove communication barriers. While building hierarchies over centuries has helped us nail down a chain of command, hold people accountable and keep organizations moving, traditional hierarchy in today's organizations threatens the communication flow to the point of throwing businesses off balance. This is largely because information travels incredibly fast, making each human checkpoint you can eliminate a competitive advantage you gain.

A bag of beans

At its core, our attempt to flatten hierarchy is a strategy to make communication more efficient - to establish new flows of information, to make a company grow more organically, naturally. This also makes it easier for a company structure to adapt to changes. A rigid tree-like structure is good at withstanding outside pressure with little or no elasticity, all the way to the breaking point. Then it collapses. Companies tend to be like that too: the bigger they get, the more rigid they become. That is one big danger for startups maturing into something bigger. Companies that adopt flattened hierarchies have a better chance of not breaking. Think of something similar to a bag of beans - it is big, full of small equal parts. You can kick it or sit on it or move it around, but it will not break; it will merely rearrange its contents.
Brainy connections

However, this bag of beans doesn't eliminate the need for efficient communication, as another biological example illustrates. Whales, elephants and dolphins have more brain matter than we do - so it can't be sheer brain mass that makes us relatively advanced beings. It's the synapses between brain cells - those little connections that allow cells to communicate with each other. If you doubt that cell connections are the key to success, look at post-surgery reports of patients with split-brain condition. This is the result of a treatment in which the connections between the hemispheres were cut to stop seizures. Those people didn't lose their abilities: they know how to talk, and they still recognize all the objects around them, but since the connections between objects and their names were cut, they have to relearn what to call which thing. [3],[4],[5] Similarly, it's the connected individuals that make the difference in flat hierarchies. They can work only as long as individuals in smaller cells or groups keep talking to each other. Communication between individuals is what makes a flattened company structure succeed; any clandestine operations will kill all of its benefits. So if your company has made the move and flattened its hierarchy, take the chance and make the change from being a head or an arm to becoming a cell. Build connections, and communicate well with other cell members to understand their goals and problems. Life in the bean bag is more fun than being a rigid tree branch!

[1]
[2]
[3] Funnell, M. G., Colvin, M. K., & Gazzaniga, M. S. (2007). The calculating hemispheres: Studies of a split-brain patient. Neuropsychologia, 45(10), 2378-2386.
[4] Gazzaniga, M. S., Holtzman, J. D., Deck, M. D., & Lee, B. C. (1985). MRI assessment of human callosal surgery with neuropsychological correlates. Neurology, 35, 1763-1766.
[5] Eldridge, A. D. (n.d.). Discovering the unique individuals behind split-brain patient anonymity (Doctoral dissertation, University of North Carolina at Wilmington, Wilmington, NC).

Posted on 28 July 2014 | 1:33 am

Red Hat JBoss Data Grid 6.3 is now available!

Red Hat's JBoss Data Grid is an open source, distributed, in-memory key/value data store built from the Infinispan open source software project. Whether deployed in client/server mode or embedded in a Java Virtual Machine, it is built to be elastic, high-performance, highly available and to scale linearly. JBoss Data Grid is accessible to both Java and non-Java clients. Using JBoss Data Grid, data is distributed and replicated across a manageable cluster of nodes, optionally written to disk and easily accessible via the REST, Memcached and Hot Rod protocols, or directly in-process through a traditional Java Map API.

- Download bits
- Supported configurations
- Component details

The key features of JBoss Data Grid are:

- Schema-less key/value store for storing unstructured data
- Querying to easily search and find objects
- Security to store and restrict access to your sensitive data
- Multiple access protocols with data compatibility for applications written in any language, using any framework
- Transactions for data consistency
- Distributed execution and map/reduce API to perform large-scale, in-memory computations in parallel across the cluster
- Cross-datacenter replication for high availability, load balancing and data partitioning

What's new in 6.3?
- Expanded security for your data:
  - User authentication via Simple Authentication and Security Layer (SASL)
  - Role-based authorization and access control to Cache Managers and Caches
  - New nodes required to authenticate before joining a cluster
  - Encrypted communication within the cluster
- Deploy into Apache Karaf and WebLogic
- Use as an embedded or distributed cache in Red Hat JBoss Fuse integration flows
- Enhanced map/reduce:
  - Improved scalability by storing computation results directly in the grid instead of pushing them back to the application
  - Takes advantage of the hardware's parallel processing power for greater computing efficiency
- New JPA cache store that preserves the data schema
- Improved remote query and a C# Hot Rod client in technology preview
- JBoss Data Grid modules for JBoss Enterprise Application Platform (JBoss EAP)

The complete list of new and updated features is described here.

How can this be installed on JBoss EAP?

JBoss Data Grid has two deployment modes:

- Library mode (embedded distributed caches)
- Client-Server mode (remote distributed cache) - install the Hot Rod client JARs in EAP, and have the application reference these JARs to use the Hot Rod protocol to connect to the JBoss Data Grid Server (remote cache)

Why a new C# client?

The remote Hot Rod client is aware of the cluster topology and the hashing scheme on the server, and can get to a (k,v) entry in a single hop. In contrast, REST and memcached usually require an extra hop to get to an entry. As a result, the Hot Rod protocol has higher performance and is the preferred protocol (in Client-Server mode). JBoss Data Grid 6.1 only had a Java Hot Rod client - for all other languages, customers had to use memcached or REST. JBoss Data Grid 6.2 added a C++ Hot Rod client. And now JBoss Data Grid 6.3 adds a Tech Preview of a C# client. Infinispan has many more Hot Rod clients.

How would somebody use JBoss Data Grid with JBoss Fuse?

The primary purpose is caching in integration workflows.
For example, a remote JBoss Data Grid can be used with Fuse to cache search results. REST can be used to communicate with a remote cache, but starting with JBoss Data Grid 6.3 Hot Rod can be used as well. Fuse currently has a camel-cache component which is based on EHCache. A new camel-infinispan component has also been released in the community, and JBoss Data Grid 6.3 can be used with the community version of camel-infinispan.

Why would somebody use JBoss Data Grid on WebLogic?

Customers who run the WebLogic stack and eventually want to migrate to the JBoss stack can start the migration by replacing Oracle Coherence with JBoss Data Grid. And here is a comparison between the two offerings:

The complete documentation is available here, and quick references are below:

- Release Notes
- Getting Started Guide
- Administration and Configuration Guide
- API Documentation
- Developer Guide
- Infinispan Query Guide
- Feature Support Document

Some useful references:

- Getting started with Infinispan Refcard
- Infinispan 6.x user guide
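Coming back to the single-hop claim above: it works because the client knows the same hash-based key-to-node routing the servers use, so it never has to ask a random node where a key lives. This is only the idea, not Infinispan's actual consistent-hash implementation or the Hot Rod wire protocol - the method and node names below are made up for illustration:

```java
import java.util.List;

// Toy key-to-node routing: client and server derive the owner of a key
// from its hash, so the client can contact the right node directly.
public class SingleHopSketch {

    static String ownerOf(String key, List<String> nodes) {
        // Same deterministic hash on both sides means both sides agree
        // on the owner without any extra network round trip.
        int bucket = Math.floorMod(key.hashCode(), nodes.size());
        return nodes.get(bucket);
    }

    public static void main(String[] args) {
        List<String> cluster = List.of("node-a", "node-b", "node-c");
        // A topology-aware client computes the owner locally (one hop);
        // REST/memcached typically hit an arbitrary node first (two hops).
        System.out.println("user:42 -> " + ownerOf("user:42", cluster));
    }
}
```

Real implementations use consistent hashing so that adding or removing a node only remaps a fraction of the keys, which this naive modulo scheme does not do.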

Posted on 24 July 2014 | 2:58 am

Data-driven unit testing in Java

Data-driven testing is a powerful way of testing a given scenario with different combinations of values. In this article, we look at several ways to do data-driven unit testing in JUnit. Suppose, for example, you are implementing a Frequent Flyer application that awards status levels (Bronze, Silver, Gold, Platinum) based on the number of status points you earn. The number of points needed for each level is shown here:

level    minimum status points    result level
Bronze   0                        Bronze
Bronze   300                      Silver
Bronze   700                      Gold
Bronze   1500                     Platinum

Our unit tests need to check that we can correctly calculate the status level achieved when a frequent flyer earns a certain number of points. This is a classic problem where data-driven tests would provide an elegant, efficient solution. Data-driven testing is well supported in modern JVM unit testing libraries such as Spock and specs2. However, some teams don't have the option of using a language other than Java, or are limited to using JUnit. In this article, we look at a few options for data-driven testing in plain old JUnit.

Parameterized Tests in JUnit

JUnit provides some support for data-driven tests via the Parameterized test runner.
A simple data-driven test in JUnit using this approach might look like this:

```java
@RunWith(Parameterized.class)
public class WhenEarningStatus {

    @Parameters(name = "{index}: {0} initially had {1} points, earns {2} points, should become {3}")
    public static Iterable<Object[]> data() {
        return Arrays.asList(new Object[][]{
                {Bronze, 0,    100,  Bronze},
                {Bronze, 0,    300,  Silver},
                {Bronze, 100,  200,  Silver},
                {Bronze, 0,    700,  Gold},
                {Bronze, 0,    1500, Platinum},
        });
    }

    private Status initialStatus;
    private int initialPoints;
    private int earnedPoints;
    private Status finalStatus;

    public WhenEarningStatus(Status initialStatus, int initialPoints, int earnedPoints, Status finalStatus) {
        this.initialStatus = initialStatus;
        this.initialPoints = initialPoints;
        this.earnedPoints = earnedPoints;
        this.finalStatus = finalStatus;
    }

    @Test
    public void shouldUpgradeStatusBasedOnPointsEarned() {
        FrequentFlyer member = FrequentFlyer.withFrequentFlyerNumber("12345678")
                                            .named("Joe", "Jones")
                                            .withStatusPoints(initialPoints)
                                            .withStatus(initialStatus);

        member.earns(earnedPoints).statusPoints();

        assertThat(member.getStatus()).isEqualTo(finalStatus);
    }
}
```

You provide the test data in the form of a list of Object arrays, identified by the @Parameters annotation. These object arrays contain the rows of test data that you use for your data-driven test. Each row is used to instantiate the member variables of the class via the constructor. When you run the test, JUnit will instantiate and run a test for each row of data. You can use the name attribute of the @Parameters annotation to provide a more meaningful title for each test. There are a few limitations to the JUnit parameterized tests.
The most important is that, since the test data is defined at class level and not at test level, you can only have one set of test data per test class. The code is also somewhat cluttered - you need to define member variables, a constructor, and so forth. Fortunately, there is a better option.

Using JUnitParams

A more elegant way to do data-driven testing in JUnit is to use JUnitParams (see Maven Central for the latest version), an open source library that makes data-driven testing in JUnit easier and more explicit. A simple data-driven test using JUnitParams looks like this:

```java
@RunWith(JUnitParamsRunner.class)
public class WhenEarningStatusWithJUnitParams {

    @Test
    @Parameters({
            "Bronze, 0,   100,  Bronze",
            "Bronze, 0,   300,  Silver",
            "Bronze, 100, 200,  Silver",
            "Bronze, 0,   700,  Gold",
            "Bronze, 0,   1500, Platinum"
    })
    public void shouldUpgradeStatusBasedOnPointsEarned(Status initialStatus, int initialPoints,
                                                       int earnedPoints, Status finalStatus) {
        FrequentFlyer member = FrequentFlyer.withFrequentFlyerNumber("12345678")
                                            .named("Joe", "Jones")
                                            .withStatusPoints(initialPoints)
                                            .withStatus(initialStatus);

        member.earns(earnedPoints).statusPoints();

        assertThat(member.getStatus()).isEqualTo(finalStatus);
    }
}
```

Test data is defined in the @Parameters annotation, which is associated with the test itself, not the class, and passed to the test via method parameters. This makes it possible to have different sets of test data for different tests in the same class, or to mix data-driven tests with normal tests in the same class - a much more logical way of organizing your classes.
JUnitParams also lets you get test data from other methods, as illustrated here:

```java
@Test
@Parameters(method = "sampleData")
public void shouldUpgradeStatusFromEarnedPoints(Status initialStatus, int initialPoints,
                                                int earnedPoints, Status finalStatus) {
    FrequentFlyer member = FrequentFlyer.withFrequentFlyerNumber("12345678")
            .named("Joe", "Jones")
            .withStatusPoints(initialPoints)
            .withStatus(initialStatus);

    member.earns(earnedPoints).statusPoints();

    assertThat(member.getStatus()).isEqualTo(finalStatus);
}

private Object[] sampleData() {
    return $(
            $(Bronze, 0,   100, Bronze),
            $(Bronze, 0,   300, Silver),
            $(Bronze, 100, 200, Silver)
    );
}
```

The $ method provides a convenient shorthand to convert test data to the Object arrays that need to be returned. You can also externalize test data into a separate class:

```java
@Test
@Parameters(source = StatusTestData.class)
public void shouldUpgradeStatusFromEarnedPoints(Status initialStatus, int initialPoints,
                                                int earnedPoints, Status finalStatus) {
    ...
}
```

The test data here comes from a method in the StatusTestData class:

```java
public class StatusTestData {
    public static Object[] provideEarnedPointsTable() {
        return $(
                $(Bronze, 0,   100, Bronze),
                $(Bronze, 0,   300, Silver),
                $(Bronze, 100, 200, Silver)
        );
    }
}
```

This method needs to be static, return an object array, and start with the word "provide". Getting test data from external methods or classes in this way opens the way to retrieving test data from external sources such as CSV or Excel files. JUnitParams provides a simple and clean way to implement data-driven tests in JUnit, without the overhead and limitations of the traditional JUnit parameterized tests.
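That last point - feeding a "provide..." method from a CSV file - can be sketched with the JDK alone, since a data source only has to produce Object arrays. The file layout and parsing below are illustrative, not part of JUnitParams:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class CsvProviderSketch {

    // Turns "Bronze, 0, 100, Bronze" lines into the Object[] rows that a
    // "provide..." data method is expected to return.
    static Object[] provideRowsFrom(Path csv) throws IOException {
        List<String> lines = Files.readAllLines(csv);
        return lines.stream()
                .map(line -> (Object) line.split("\\s*,\\s*"))
                .toArray();
    }

    public static void main(String[] args) throws IOException {
        Path csv = Files.createTempFile("status", ".csv");
        Files.write(csv, List.of("Bronze, 0, 100, Bronze", "Bronze, 0, 300, Silver"));

        Object[] rows = provideRowsFrom(csv);
        String[] first = (String[]) rows[0];
        System.out.println(rows.length + " rows; first expects " + first[3]);
        // prints: 2 rows; first expects Bronze
    }
}
```

In a real test the string fields would still need converting to Status and int values, but the shape of the provider stays the same.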
Testing with non-Java languages

If you are not constrained to Java and/or JUnit, more modern tools such as Spock and specs2 provide great ways of writing clean, expressive unit tests in Groovy and Scala respectively. In Groovy, for example, you could write a test like the following:

```groovy
class WhenEarningStatus extends Specification {

    def "should earn status based on the number of points earned"() {
        given:
        def member = FrequentFlyer.withFrequentFlyerNumber("12345678")
                .named("Joe", "Jones")
                .withStatusPoints(initialPoints)
                .withStatus(initialStatus)

        when:
        member.earns(earnedPoints).statusPoints()

        then:
        member.status == finalStatus

        where:
        initialStatus | initialPoints | earnedPoints | finalStatus
        Bronze        | 0             | 100          | Bronze
        Bronze        | 0             | 300          | Silver
        Bronze        | 100           | 200          | Silver
        Silver        | 0             | 700          | Gold
        Gold          | 0             | 1500         | Platinum
    }
}
```

John Ferguson Smart is a specialist in BDD, automated testing, and software development life cycle optimization, and the author of BDD in Action and other books. John runs regular courses in Australia, London and Europe on related topics such as Agile Requirements Gathering, Behaviour Driven Development, Test Driven Development, and Automated Acceptance Testing.

Posted on 24 July 2014 | 12:09 am

Adding Java EE 7 Batch Addon to JBoss Forge ? – Part 6 (Tech Tip #40)

This is the sixth part (part 1, part 2, part 3, part 4, part 5) of a multi-part video series where Lincoln Baxter (@lincolnthree) and I are interactively building a Forge addon to add Java EE 7 Batch functionality. Part 1 showed how to get started with creating an addon, add relevant POM dependencies, build and install the addon using the Forge shell, add a new command batch-new-jobxml, and add --reader, --processor, and --writer parameters to the newly added command. Part 2 showed how to identify classes for each CLI parameter that already honor the contract required by the Batch specification. Part 3 showed how parameters can be made required, created templates for the reader, processor, and writer, and validated the specified parameters. Part 4 added a new test for the command and showed how Forge can be used in debug mode. Part 5 fixed a bug reported by a community member and started work to make processor validation optional. This part shows:

- Upgrade from Forge 2.6.0 to 2.7.1
- Fix the failing test
- Reader, processor, and writer files are now templates instead of source files
- Reader, processor, and writer are injected appropriately into the test's temp project

Enjoy! As always, the evolving source code is available online. The debugging will continue in the next episode.

Posted on 23 July 2014 | 5:44 pm

And towards JSF 2.3 we go!

For all the JSF folks out there, some important news: Ed Burns announced Oracle's intent to file the JSF 2.3 JSR, with me as co-spec lead. See the email to the EG. Enjoy!

Posted on 22 July 2014 | 8:05 pm

Shape the future of JBoss EAP and WildFly Web Console

Are you using WildFly? Any version of JBoss EAP? Would you like to help us define what the Web Console for future versions should look like? Help the Red Hat UX Design team shape the future of JBoss EAP and WildFly! We are currently working to improve the usability and information architecture of the web-based admin console. By taking part in a short exercise you will help us better understand how users interpret the information and accomplish their goals. You do not need to be an expert on the console to participate in this study. The activity shouldn't take longer than 10 to 15 minutes to complete. To start participating in the study, click on the link below and follow the instructions. I completed the study in about 12 minutes and was happy that my clicking around helped shape the future of JBoss EAP and WildFly! Just take a quick detour from your routine for 10-15 minutes and take the study. Thank you in advance for taking the time to complete it.

Posted on 18 July 2014 | 7:57 am

Getting Started with Docker (Tech Tip #39)

If the number of articles, meetups, talk submissions at different conferences, tweets, and other indicators is taken into consideration, then it seems like Docker is going to solve world hunger. It would be nice if it would, but apparently not. But it does solve one problem really well! Let's hear it from @solomonstre, creator of the Docker project! In short, Docker simplifies software delivery by making it easy to build and share images that contain your application's entire environment, or application operating system.

What is meant by application operating system? Your application typically requires a specific version of the operating system, application server, JDK, and database server, may require tuning the configuration files, and has multiple other dependencies. The application may need binding to specific ports and a certain amount of memory. The components and configuration together required to run your application are what is referred to as the application operating system. You can certainly provide an installation script that will download and install these components. Docker simplifies this process by allowing you to create an image that contains your application and infrastructure together, managed as one component. These images are then used to create Docker containers which run on the container virtualization platform provided by Docker.

What are the main components of Docker? Docker has two main components:

- Docker: the open source container virtualization platform
- Docker Hub: a SaaS platform for sharing and managing Docker images

Docker uses Linux Containers to provide isolation, sandboxing, reproducibility, constraining of resources, snapshotting and several other advantages. Read this excellent piece at InfoQ on Docker Containers for more details. Images are the "build component" of Docker - a read-only template of the application operating system. Containers are the runtime representation, created from images. They are the "run component" of Docker.
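To make "image as application operating system" concrete, an image is typically described by a Dockerfile. The one below is purely illustrative - the base image tag, file paths and port are made up, not taken from this post - but it shows how the OS layer, runtime, application and start command travel together as one artifact:

```dockerfile
# Hypothetical image for a Java application: base OS + JDK come from the
# parent image; the app jar, its port and its start command are layered on top.
FROM java:8
COPY target/app.jar /opt/app/app.jar
EXPOSE 8080
CMD ["java", "-jar", "/opt/app/app.jar"]
```

Building this file (docker build) produces the read-only image; running it (docker run) produces a container.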
Containers can be run, started, stopped, moved, and deleted. Images are stored in a registry, the "distribution component" of Docker.

Docker in turn contains two components:

Daemon: runs on a host machine and does the heavy lifting of building, running, and distributing Docker containers.
Client: a Docker binary that accepts commands from the user and communicates back and forth with the daemon.

How do these work together?

The Client communicates with the Daemon, either co-located on the same host or on a different host. It requests the Daemon to pull an image from the repository using the pull command. The Daemon then downloads the image from Docker Hub, or whatever registry is configured. Multiple images can be downloaded from the registry and installed on the Daemon host. The Client can then start a Container using the run command. The complete list of client commands can be seen here. The Client communicates with the Daemon using sockets or the REST API.

Because Docker uses Linux Kernel features, does that mean I can use it only on Linux-based machines?

The Docker daemon and client are available for different operating systems. As you can see, Docker can be installed on a wide variety of platforms, including Mac and Windows. For non-Linux machines, a lightweight Virtual Machine needs to be installed and the Daemon runs within that. A native client is then installed on the machine and communicates with the Daemon.

Here is the log from booting the Docker daemon on Mac:

bash
unset DYLD_LIBRARY_PATH ; unset LD_LIBRARY_PATH
mkdir -p ~/.boot2docker
if [ ! -f ~/.boot2docker/boot2docker.iso ]; then cp /usr/local/share/boot2docker/boot2docker.iso ~/.boot2docker/ ; fi
/usr/local/bin/boot2docker init
/usr/local/bin/boot2docker up && export DOCKER_HOST=tcp://$(/usr/local/bin/boot2docker ip 2>/dev/null):2375
docker version

~> bash
~> unset DYLD_LIBRARY_PATH ; unset LD_LIBRARY_PATH
~> mkdir -p ~/.boot2docker
~> if [ ! -f ~/.boot2docker/boot2docker.iso ]; then cp /usr/local/share/boot2docker/boot2docker.iso ~/.boot2docker/ ; fi
~> /usr/local/bin/boot2docker init
2014/07/16 09:57:13 Virtual machine boot2docker-vm already exists
~> /usr/local/bin/boot2docker up && export DOCKER_HOST=tcp://$(/usr/local/bin/boot2docker ip 2>/dev/null):2375
2014/07/16 09:57:13 Waiting for VM to be started..........
2014/07/16 09:57:35 Started.
2014/07/16 09:57:35 To connect the Docker client to the Docker daemon, please set:
2014/07/16 09:57:35     export DOCKER_HOST=tcp://
~> docker version
Client version: 1.1.1
Client API version: 1.13
Go version (client): go1.2.1
Git commit (client): bd609d2
Server version: 1.1.1
Server API version: 1.13
Go version (server): go1.2.1
Git commit (server): bd609d2

For example, the Docker Daemon and Client can be installed on Mac following the boot2docker instructions. The VM can be stopped from the CLI as:

boot2docker stop

And then restarted again as:

boot2docker boot

And logged into as:

boot2docker ssh

The complete list of boot2docker commands is available in the help:

~> boot2docker help
Usage: boot2docker [] []
boot2docker management utility.
Commands:
    init                    Create a new boot2docker VM.
    up|start|boot           Start VM from any states.
    ssh [ssh-command]       Login to VM via SSH.
    save|suspend            Suspend VM and save state to disk.
    down|stop|halt          Gracefully shutdown the VM.
    restart                 Gracefully reboot the VM.
    poweroff                Forcefully power off the VM (might corrupt disk image).
    reset                   Forcefully power cycle the VM (might corrupt disk image).
    delete|destroy          Delete boot2docker VM and its disk image.
    config|cfg              Show selected profile file settings.
    info                    Display detailed information of VM.
    ip                      Display the IP address of the VM's Host-only network.
    status                  Display current state of VM.
    download                Download boot2docker ISO image.
    version                 Display version information.

Enough talk, show me an example?

Some of the JBoss projects are available as Docker images and can be installed following the commands explained on the corresponding page. For example, the WildFly Docker image can be installed as:

~> docker pull jboss/wildfly
Pulling repository jboss/wildfly
2f170f17c904: Download complete
511136ea3c5a: Download complete
c69cab00d6ef: Download complete
88b42ffd1f7c: Download complete
fdbe853b54e1: Download complete
bc93200c3ba0: Download complete
0daf76299550: Download complete
3a7e1274035d: Download complete
e6e970a0db40: Download complete
1e34f7a18753: Download complete
b18f179f7be7: Download complete
e8833789f581: Download complete
159f5580610a: Download complete
3111b437076c: Download complete

The image can be verified using the command:

~> docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
jboss/wildfly       latest              2f170f17c904        8 hours ago         1.048 GB

Once the image is downloaded, the container can be started as:

docker run jboss/wildfly

By default, Docker containers do not provide an interactive shell or input from STDIN. So if the WildFly Docker container is started using the command above, it cannot be terminated using Ctrl + C. Specifying the -i option makes it interactive, and the -t option allocates a pseudo-TTY. In addition, we'd also like to make port 8080 accessible outside the container, i.e. on our localhost. This can be achieved by specifying -p 80:8080, where 80 is the host port and 8080 is the container port.
So we'll run the container as:

docker run -i -t -p 80:8080 jboss/wildfly
=========================================================================
  JBoss Bootstrap Environment
  JBOSS_HOME: /opt/wildfly
  JAVA: java
  JAVA_OPTS:  -server -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
=========================================================================
22:08:29,943 INFO  [org.jboss.modules] (main) JBoss Modules version 1.3.3.Final
22:08:30,200 INFO  [org.jboss.msc] (main) JBoss MSC version 1.2.2.Final
22:08:30,297 INFO  [] (MSC service thread 1-6) JBAS015899: WildFly 8.1.0.Final "Kenny" starting
22:08:31,935 INFO  [] (Controller Boot Thread) JBAS015888: Creating http management service using socket-binding (management-http)
22:08:31,961 INFO  [org.xnio] (MSC service thread 1-7) XNIO version 3.2.2.Final
22:08:31,974 INFO  [org.xnio.nio] (MSC service thread 1-7) XNIO NIO Implementation Version 3.2.2.Final
22:08:32,057 INFO  [] (ServerService Thread Pool -- 31) WFLYIO001: Worker 'default' has auto-configured to 16 core threads with 128 task threads based on your 8 available processors
22:08:32,108 INFO  [] (ServerService Thread Pool -- 32) JBAS010280: Activating Infinispan subsystem.
22:08:32,110 INFO  [] (ServerService Thread Pool -- 40) JBAS011800: Activating Naming Subsystem
22:08:32,133 INFO  [] (ServerService Thread Pool -- 45) JBAS013171: Activating Security Subsystem
22:08:32,178 INFO  [] (ServerService Thread Pool -- 38) JBAS012615: Activated the following JSF Implementations: [main]
22:08:32,206 WARN  [] (ServerService Thread Pool -- 46) JBAS010153: Node identifier property is set to the default value.
Please make sure it is unique.
22:08:32,348 INFO  [] (MSC service thread 1-3) JBAS013170: Current PicketBox version=4.0.21.Beta1
22:08:32,397 INFO  [] (ServerService Thread Pool -- 48) JBAS015537: Activating WebServices Extension
22:08:32,442 INFO  [] (MSC service thread 1-13) JBAS010408: Starting JCA Subsystem (IronJacamar 1.1.5.Final)
22:08:32,512 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-9) JBAS017502: Undertow 1.0.15.Final starting
22:08:32,512 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 47) JBAS017502: Undertow 1.0.15.Final starting
22:08:32,570 INFO  [] (ServerService Thread Pool -- 27) JBAS010403: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3)
22:08:32,660 INFO  [] (MSC service thread 1-10) JBAS010417: Started Driver service with driver-name = h2
22:08:32,736 INFO  [org.jboss.remoting] (MSC service thread 1-7) JBoss Remoting version 4.0.3.Final
22:08:32,836 INFO  [] (MSC service thread 1-15) JBAS011802: Starting Naming Service
22:08:32,839 INFO  [] (MSC service thread 1-15) JBAS015400: Bound mail session [java:jboss/mail/Default]
22:08:33,406 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 47) JBAS017527: Creating file handler for path /opt/wildfly/welcome-content
22:08:33,540 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-13) JBAS017525: Started server default-server.
22:08:33,603 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-8) JBAS017531: Host default-host starting
22:08:34,072 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-13) JBAS017519: Undertow HTTP listener default listening on /
22:08:34,599 INFO  [] (MSC service thread 1-11) JBAS015012: Started FileSystemDeploymentService for directory /opt/wildfly/standalone/deployments
22:08:34,619 INFO  [] (MSC service thread 1-9) JBAS010400: Bound data source [java:jboss/datasources/ExampleDS]
22:08:34,781 INFO  [] (MSC service thread 1-13) JBWS022052: Starting JBoss Web Services - Stack CXF Server 4.2.4.Final
22:08:34,843 INFO  [] (Controller Boot Thread) JBAS015961: Http management interface listening on
22:08:34,844 INFO  [] (Controller Boot Thread) JBAS015951: Admin console listening on
22:08:34,845 INFO  [] (Controller Boot Thread) JBAS015874: WildFly 8.1.0.Final "Kenny" started in 5259ms - Started 184 of 233 services (81 services are lazy, passive or on-demand)

The container's IP address can be found as:

~> boot2docker ip
The VM's Host only interface IP address is:

The started container can be verified using the command:

~> docker ps
CONTAINER ID        IMAGE                  COMMAND                CREATED             STATUS              PORTS                NAMES
b2f8001164b0        jboss/wildfly:latest   /opt/wildfly/bin/sta   46 minutes ago      Up 12 minutes       8080/tcp, 9990/tcp   sharp_pare

And now the WildFly server can be accessed on your local machine at the VM's IP address. Finally, the container can be stopped by hitting Ctrl + C, or by giving the command:

~> docker stop b2f8001164b0
b2f8001164b0

The container id obtained from "docker ps" is passed to the command here.

More detailed instructions to use this image, such as booting in domain mode, deploying applications, etc. can be found on the image's page. What else would you like to see in the WildFly Docker image? File an issue.

Other images that are available are:

KeyCloak
TorqueBox
Immutant
LiveOak
AeroGear

Did you know that Red Hat is among the top contributors to Docker, with 5 Red Hatters from Project Atomic working on it?
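One common way to use a base image like this is to extend it with your own application. Here is a hedged sketch of a Dockerfile that layers an application on top of the WildFly image (the WAR file name is hypothetical; the deployment directory matches the FileSystemDeploymentService path that appears in the boot log above):

```dockerfile
# Hypothetical sketch: build a custom image on top of jboss/wildfly
# that includes the application itself.
FROM jboss/wildfly

# Drop the WAR into the deployment scanner directory
# (/opt/wildfly/standalone/deployments, as seen in the server log).
ADD myapp.war /opt/wildfly/standalone/deployments/
```

Building this with "docker build" and running the resulting image with the same -p port mapping as above would then serve the application from the container.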

Posted by on 17 July 2014 | 12:45 am