JavaLand 2015 Wrap-Up

After months of preparation, it all came down to three days of intense execution, and I was just one speaker. I can only marvel at the logistical acumen on display from the JavaLand and DOAG team. I had an action-packed agenda: two conference sessions, two Early Adopter's Area (EAA) sessions, and one training day session. Thrown into the mix were a couple of 1:1 consulting sessions and a vJUG/NightHacking session. I especially hope the conference attendees enjoyed the Early Adopter's Area, capably coordinated by Andreas Badelt. Because of the high level of activity on my personal agenda, I was not able to attend as many sessions as I would have liked. In any case, this blog entry is my place to share my overall impressions of the conference, and of the sessions I did get a chance to attend.

Day One

Right off the bat, I want to tip my cap to Marcus Lagergren for remaining calm in the face of some AV problems. Even with all that, and the 45-minute session duration, Marcus managed to give a compelling whirlwind tour of his personal experience with Java from the beginning. More photos like the one on the right are available from Stefan Hildebrandt's flickr photo stream. I think there is a lot more room in the "20 years of Java" meme, however, and I applaud Marcus for wisely not attempting to speak for all of it and instead drawing from his own experiences. That's one great thing about the #java20 meme: everyone has their own story. Maybe at JavaOne 2015 they will have some sort of StoryCorps-type thing where people can tell their stories. Come to think of it, if someone wants to build StoryCorps as a Service (SCAAS), perhaps they can sell it to Oracle for use at the show.

Shortly after Marcus's session, I presented with my good friend Oliver Szymanzki a 45-minute capsule of our full day training session about Java EE 7 from an HTML5 Perspective.
It was tough to make a meaningful abstraction from a full day session to just 45 minutes, but I hope at least people could take something useful away from it. Then came my first exposure to the EAA, which was my only chance to present JSF content here at JavaLand. I gave a quick presentation and had an informal meeting with several JSF EG members who were at JavaLand. We covered f:socket, multi-component validation, and URL mapping. The evening community event was really not to be missed. If you ever have a chance to attend JavaLand, I really recommend you participate.

Day Two

I started out the day by presenting a modified version of my DevNexus session about Servlet 4.0 and HTTP/2. I basically dropped the demo and moved the Java SE 9 content to an EAA session in order to fit into the 45-minute window. Following my session I was able to enjoy Mark Little's keynote about Java Enterprise and the Internet of Things. This session put out some hard-won truths about problems we have solved in Java as cautionary tales for newer stacks that seem intent on re-inventing wheels rather than standing on the shoulders of others. I must admit it was a feel-good session, but still realistic and largely Kool-Aid free. In the afternoon, I supported David Blevins during his session about the new Security JSR. This was a very informative session that got a whole lot done in only 45 minutes. I hope it encouraged some people to get involved in JSR-375. Running back to the EAA, I presented the exciting work being done by Michael McMahon to bring a new HTTP client to Java SE 9, including HTTP/2 support. I can't post the slides, but I'm sure we'll have something on this at JavaOne. My last engagement of the conference proper was to participate in a joint vJUG/NightHacking session regarding Adopt-a-JSR. This was lots of fun, and I thank Stephen Chin and Simon Maple for providing a vehicle for it.
As a nice wind-down from the conference, and a bit of chill before the training day, I was invited by DOAG boss Fried Saacke to attend the five-year celebration dinner for Java Aktuell magazine. I didn't know it at the time, but the invitation included an opportunity to give a short speech, in German, on the importance of the JCP to the Java community. I hope I didn't mangle my words too badly. After being blessed with many years of German conference opportunities from which I invariably bring home lots of chocolate, I felt it was time to give the Germans a taste of American-style sweets along with their pre-loaded VM USB sticks. These Tasty Kakes are a specialty of my hometown of Philadelphia, and each attendee of the session Java EE aus einer HTML5-Perspektive received some along with a full day of instruction and a USB stick with a VM containing the workshop materials.

In summary, JavaLand has lots to recommend it. Come for the content, stay for the fun.

Posted by on 30 March 2015 | 4:32 pm

Private Certificate Authority

In a recent blog entry, I publicized my Final Quadrilogy of articles on Java SSL. This article is a much shortened version of the rambling Local CA article in that series.

Overview

CA-signed certificates are used for public server authentication. In this article, we consider a private network of devices connecting to our servers, i.e., via client-authenticated SSL. In this case devices have client certificates which we explicitly trust (and none other). Let's set up a private CA to sign certificates with our own CA key. We will sign our server certificates, and consider signing client certificates as well.

What is a digital certificate?

Firstly we note that public key cryptography requires a private and a public key. These are mathematically linked, such that the public key is used to encrypt data which can be decrypted only using the private key. Moreover, the private key can create a digital signature, which is verified using the public key. According to Wikipedia,

"A public key certificate (also known as a digital certificate or identity certificate) is an electronic document used to prove ownership of a public key. The certificate includes information about the key, information about its owner's identity, and the digital signature of an entity that has verified the certificate's contents are correct. If the signature is valid, and the person examining the certificate trusts the signer, then they know they can use that key to communicate with its owner."

So a certificate is a document which contains a public key and related information such as its "subject." This is the name assigned by its creator, who is the sole holder of the corresponding private key. This document is digitally signed by the "issuer." If we trust the issuer then we can use the public key to communicate with the subject. Cryptographically speaking, we can use the public key to encrypt information, which can be decrypted only by the holder of the corresponding private key.
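To make the sign/verify relationship concrete, here is a minimal, self-contained Java sketch of signing with a private key and verifying with the corresponding public key (illustrative only; a certificate wraps the public key and identity information around exactly this mechanism):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureDemo {

    // Sign a message with a private key, then verify it with the public key.
    public static boolean signAndVerify(String message) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();
        byte[] data = message.getBytes(StandardCharsets.UTF_8);

        // Only the private-key holder can produce this signature...
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(data);
        byte[] sig = signer.sign();

        // ...but anyone holding the public key can verify it.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(data);
        return verifier.verify(sig);
    }
}
```

A certificate issuer does essentially this when signing: it signs the subject's public key and name with its own private key, and anyone holding the issuer's public key can verify the result.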
Finally, X.509 is a standard for Public Key Infrastructure (PKI) that specifies formats for public key certificates, revocation lists, etc.

Root CA certificate

By definition, a root certificate (e.g. a CA certificate) is self-signed, and so has the same "Issuer" and "Subject." For example, inspect a GoDaddy root certificate as follows:

$ curl -s |
    openssl x509 -text | grep 'Issuer:\|Subject:'
       Issuer: ... OU=Go Daddy Root Certificate Authority
       Subject: ... OU=Go Daddy Root Certificate Authority

A self-signed certificate is a public key and its subject name, digitally signed using its corresponding private key. We can verify its signature using the public key, but have no other inherent assurances about its authenticity. We trust it explicitly via a "truststore."

Keystore vs truststore

A "keystore" contains a private key, which has a public key certificate. Additionally the keystore must contain the certificate chain of that key certificate, through to its root certificate (which is self-signed by definition). A "truststore" contains peer or CA certificates which we trust. By definition we trust any peer certificate chain which includes any certificate which is in our truststore. That is to say, if our truststore contains a CA certificate, then we trust all certificates issued by that CA. Note that the keystore must contain the certificate chain of the key certificate, whereas the truststore must not contain the certificate chain of its trusted certificates; they differ critically in this respect.

Client certificate management

In order to review active credentials, we require a perfect record of all issued certificates. If a certificate is signed but not recorded, or its record is deleted, our server is forever vulnerable to that "rogue" certificate.
We could record our signed certificates into a keystore file as follows:

$ keytool -keystore server.issued.jks -importcert -alias client -file client.pem
Certificate was added to keystore

where this is not a truststore per se, but just a "database" of issued certificates. Interestingly, we consider signing our client certificates to avoid having a truststore containing all our clients' self-signed certificates, but nevertheless end up with one - which is telling. We could similarly record revoked client certificates. However for private networks where the number of certificates is relatively small, it is simpler and more secure to trust clients explicitly, rather than implicitly trusting all client certificates signed by our CA and managing a revocation list. If the number of clients is large, then we probably need to automate enrollment, which is addressed in the companion article Client Authentication in this series, which proposes a dynamic SQL truststore for client certificates. Alternatively we might use a client certificate authentication server, e.g. see my experimental Node microservice, which uses Redis to store certificates and their revocation list.

Self-signed client certificates

We prefer self-signed client certificates which are explicitly imported into our server truststore, where they can be reviewed. In this case, they are "revoked" by removing them from the truststore. However, self-signed client keys are effectively CA keys, and so rogue certificates can be created using compromised client keys, e.g. using keytool -gencert. So we implement a custom TrustManager for our server - see the Explicit Trust Manager article in this series.

Private CA

Consider that we must detect when our server has been compromised, and then generate a new server key. If using a self-signed server certificate, we must then update every client's truststore.
In order to avoid such a burden, our server certificate must be signed using a CA key which our clients trust. However, our clients must trust only our private server, and not, for example, any server with a Go Daddy certificate. So we generate a private CA key. This key controls access to our server. While our server naturally resides in a DMZ accessible to the Internet, its CA key should be isolated on a secure internal machine. In fact, it should be generated offline, where it can never be compromised (except by physical access). We transfer the "Certificate Signing Request" (CSR) to the offline CA computer, and return its signed certificate, e.g. using a USB stick.

In the event that our server is compromised, we generate a new server key and sign it using our offline CA key. Our clients are unaffected, since they trust our CA, and thereby our new server key. However our clients must no longer trust the old compromised server key, which could be used to perpetrate a man-in-the-middle (MITM) attack. So we must support certificate revocation. For example, we could publish a certificate revocation list to our clients, or provide a revocation query service, e.g. an OCSP responder. Alternatively, we could publish the server certificate that our clients should explicitly trust. Before connecting, our clients read this certificate, verify that it is signed by our CA, and establish it as their explicit truststore for the purposes of connecting to our server. In general, it is better to be explicit rather than implicit, for the sake of clarity. Explicit trust enables a comprehensive review of active credentials.

We consider a scenario where the above "revocation" service and our server both suffer a simultaneous coordinated MITM attack. Generally speaking, our architecture should make such an attack expensive and detectable. At the least, our revocation service should be divorced from our server infrastructure, to make such an attack more challenging.
Server certificate signing

We create a keystore containing a private key and its self-signed certificate (for starters) using keytool -genkeypair.

$ keytool -keystore server.jks -genkeypair -alias server -noprompt \
     -dname "" -keyalg rsa -keysize 2048 -validity 365

Naturally the common name of a server certificate is its domain name. The client, e.g. the browser, validates that the certificate's "Common Name" matches the host name used to look up its IP address. We export a "Certificate Signing Request" (CSR) using -certreq.

$ keytool -keystore server.jks -alias server -certreq -rfc \
     -file server.csr

We can sign the CSR using -gencert.

$ keytool -keystore ca.jks -alias ca -gencert -infile server.csr \
     -dname "" \
     -validity 365 -rfc -outfile server.signed.pem \
     -ext BasicConstraints:critical=ca:false,pathlen:0 \
     -ext KeyUsage:critical=keyEncipherment \
     -ext ExtendedKeyUsage:critical=serverAuth

where we set the X509v3 extensions to restrict the key usage for good measure, as we see for certificates we buy from a public CA. We import this signed certificate reply into our server keystore. But keytool will not allow a signed certificate to be imported unless its parent certificate chain is already present in the keystore, so we must import our CA cert first.

$ keytool -keystore server.jks -alias ca -importcert -file ca.pem
$ keytool -keystore server.jks -alias server -importcert -file server.signed.pem

Certificate chain

We can list the certificate chain as follows:

$ keytool -keystore server.jks -alias server -list -v
...
Certificate chain length: 2
Certificate[1]:
Owner: CN=server.com
Issuer: CN=ca
Certificate[2]:
Owner: CN=ca
Issuer: CN=ca

The first certificate of the chain is our key certificate, and the last certificate is the root CA certificate. By definition the "root" certificate of a chain is self-signed.
openssl

We can use openssl to connect to our SSLServerSocket and inspect its key certificate chain as follows:

$ openssl s_client -connect localhost:4444
...
Certificate chain
 0 s:/
   i:/CN=ca
 1 s:/CN=ca
   i:/CN=ca

This demonstrates why the keystore requires a certificate chain, i.e. to send to the peer for validation. The peer validates the chain, and checks it against our trusted certificates. It stops checking as soon as it encounters a certificate in the chain that it trusts. Therefore the chain for a trusted certificate need not be stored in the truststore, and actually must not be - otherwise we trust any certificate issued by that trusted certificate's root, irrespective of the trusted certificate itself. Consider that our clients must trust only our server, whose certificate happens to be issued by GoDaddy - we don't want those private clients to trust any server with a certificate issued by GoDaddy!

Client keystore

We create the private keystore on each of our clients.

$ keytool -keystore client.jks -genkeypair -keyalg rsa -keysize 2048 \
     -validity 365 -alias client -dname "CN=client"

We print our certificate as PEM text using `-exportcert -rfc` as follows:

$ keytool -keystore client.jks -alias client -exportcert -rfc
-----BEGIN CERTIFICATE-----
MIIDxTCCAq2gAwIBAgIBADANBgkqhkiG9w0BAQsFADCBgzELMAkGA1UEBhMCVVMx
...

We inspect the certificate using openssl.

$ keytool -keystore client.jks -alias client -exportcert -rfc |
   openssl x509 -text
Enter keystore password:
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 345747950 (0x149bb1ee)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=client
        Validity
            Not Before: Feb 14 11:27:19 2015 GMT
            Not After : Feb 14 11:27:19 2016 GMT
        Subject: CN=client
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
...
Finally, we import each client's self-signed certificate into our server truststore.

$ keytool -keystore -alias client -importcert -file client.pem

Conclusion

Public CA certificates are typically used for public server authentication. However, we are primarily concerned with private client authentication for access to a private server, i.e. a virtual private network. Our clients should trust only our server, and not any server certificate issued by some public CA. We sign the server certificate using an offline CA key which our clients solely trust. When our server is compromised, we can change our server key without changing our clients' truststores. However, we must somehow invalidate the old server certificate. We might publish the server certificate that our clients should explicitly trust, after verifying that it is signed by our CA.

We prefer self-signed client certificates, which are explicitly trusted. However, we note that self-signed certificates are effectively CA certificates, and so a compromised private key can be used to create rogue certificates. So we should implement a custom "explicit trust manager" to ensure that the peer's key certificate itself is explicitly included in the truststore, i.e. disregarding its chain of signing certificates.

Further reading

See my experimental Node microservice, which uses Redis to store certificates and a revocation list. See the companion article Explicit Trust Manager. This is part of my Final Quadrilogy on Java crypto.

@evanxsummers
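As a postscript, the "explicit trust manager" described in the conclusion might be sketched as follows. This is a hedged illustration (the class name and structure are mine, not the actual implementation from the series): trust a peer only if its own leaf certificate is present in our truststore, deliberately ignoring the signing chain.

```java
import java.security.KeyStore;
import java.security.KeyStoreException;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import java.util.Enumeration;
import java.util.HashSet;
import java.util.Set;
import javax.net.ssl.X509TrustManager;

public class ExplicitTrustManager implements X509TrustManager {
    private final Set<X509Certificate> trusted = new HashSet<>();

    // Collect every X.509 certificate in the truststore as an explicitly
    // trusted peer certificate.
    public ExplicitTrustManager(KeyStore truststore) throws KeyStoreException {
        Enumeration<String> aliases = truststore.aliases();
        while (aliases.hasMoreElements()) {
            java.security.cert.Certificate cert =
                    truststore.getCertificate(aliases.nextElement());
            if (cert instanceof X509Certificate) {
                trusted.add((X509Certificate) cert);
            }
        }
    }

    // The peer's leaf certificate (chain[0]) itself must be in the
    // truststore; its chain of signing certificates is disregarded.
    private void check(X509Certificate[] chain) throws CertificateException {
        if (chain == null || chain.length == 0 || !trusted.contains(chain[0])) {
            throw new CertificateException("peer certificate not explicitly trusted");
        }
    }

    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        check(chain);
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        check(chain);
    }

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return trusted.toArray(new X509Certificate[0]);
    }
}
```

Such a trust manager would be installed on the server via SSLContext.init in place of the default chain-walking trust manager.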

Posted by on 24 March 2015 | 7:04 am

Minecraft Modding Course at Elementary School - Teach Java to Kids

Cross posted from Exactly two years ago, I wrote a blog on Introducing Kids to Java Programming using Minecraft. Since then, Devoxx4Kids has delivered numerous Minecraft Modding workshops all around the world. The workshop material is all publicly accessible at In these workshops, we teach attendees, typically 8 - 16 years of age, how to create Minecraft Mods. Given the excitement around Minecraft in this age range, these workshops typically sell out very quickly. One of the parents from our workshops in the San Francisco Bay Area asked us to deliver an 8-week course on Minecraft modding at their local public school. As an athlete, I'm always looking for new challenges and ways to break the rhythm. This felt like a good option, and so the game was on! My son has been playing the game, and modding, for quite some time, and helped me create the mods easily. We've also finished authoring our upcoming O'Reilly book on Minecraft Modding using Forge, so we had a decent idea of what needs to be done for these workshops.

Minecraft Modding Workshop Material

All the workshop material is available at Getting Started with Minecraft Modding using Forge shows the basic installation steps. These classes were taught from 7:30am - 7:45am, before the start of school. Given the nature of the workshop, the enthusiasm and concentration in the kids was just amazing.

Minecraft Modding Course Outline

The 8-week course was delivered using the following lessons:

Week 1: Watch the video and understand the software required for modding. (Java concepts: familiarity with JDK, Forge, Eclipse)

Week 2: Work through the installation and get the bundled sample mod running. This bundled mod, without any typing, allows us to explain basic Java concepts such as classes, packages, and methods, as well as running Minecraft from Eclipse and seeing the output in the Eclipse panel.

Week 3: The Chat Items mod shows how to create a stack of 64 potatoes if the word "potato" is typed in the chat window. (Java concepts: creating a new class in Eclipse; annotations and event-driven programming, to listen for events when a player types a message in the chat window; String variable types and how they are enclosed within quotes)

Week 4: Continue with the Chat Items mod and a couple of variations: change the number of items to be generated; generate different items on different words, or multiple items on the same word. (Java concepts: integer variables for changing the number of items; how Eclipse code completion allows scrolling through the list of items that can be generated; multiple if/else blocks and the scope of a block)

Week 5: Eclipse Tutorial for Beginners. (Java concepts: some familiarity with Eclipse)

Week 6: The Ender Dragon Spawner mod spawns an Ender Dragon every time a dragon egg is placed. (Java concepts: == to compare objects; accessing properties using . notation; creating a new class; calling methods on a class)

Week 7: The Creeper Spawn Alert mod alerts a player when a creeper is spawned. (Java concepts: the instanceof operator; the for loop; java.util.List; enums; the && and || operators; parent/child classes)

Week 8: The Sharp Snowballs mod turns all snowballs into arrows. (Java concepts: methods of 15-20 LOC; the ! operator; basic math in Minecraft)

Most of the kids in this 8-week course had no prior programming experience, and it was amazing to see them able to read Java code by week 7. Some kids who had prior experience finished the workshop in the first 3-4 weeks, and were helping other kids. Check out some pictures from the 8-week workshops.

Many thanks to the attendees, parents, volunteers, Parent Teacher Association, and school authorities for giving us a chance. The real benchmark was when all the kids raised their hands to continue the workshop for another 8 weeks ... that was awesome!

Is Java difficult as kids' first programming language?

One of the common questions asked during these workshops is "Java is too difficult a language to start with".
Most of the time these questions are not based on any personal experience but more along the lines of my-friend-told-me-so or I-read-an-article-explaining-so. My typical answer consists of the following parts:

Yes, Java is a bit verbose, but it was designed to be readable by humans and computers. Ask somebody to read Scala or Clojure code at this age and they'll probably never come back to programming again. These languages serve a niche purpose, and their concepts are in any case getting integrated into the mainstream languages already. Ruby, Groovy, and Python are decent alternative languages to start with. But do you really want to start teaching fundamental programming using Hello World?

Kids are already "addicted" to Minecraft. The game is written in Java, and modding can be done using Java. Let's leverage that addiction and convert it into a passion for programming. Minecraft provides a perfect platform for the gamification of the programming experience at this early age.

There are 9 million Java developers. It is a very well adopted and understood language, with lots of help in terms of books, articles, blogs, videos, tools, etc. And the language has been around for almost 20 years now. Other languages come and go, but this is the one to stay!

As Alan Kay said, "The best way to predict the future is to create it."

Let's create some young Java developers by teaching them Minecraft modding. This will give them bragging rights among their friends, give parents the satisfaction that their kids are learning a top-notch programming language, and give the industry budding Java developers. I dare you to pick up this workshop and run it in your local school :)

Minecraft Modding Course References

Sign up for an existing Devoxx4Kids chapter in your city, or open a new one. If you are in the San Francisco Bay Area, then register for one of our upcoming workshops at There are several chapters in the USA (Denver, Atlanta, Seattle, Chicago, and others).
Would your school be interested in hosting a similar workshop? Devoxx4Kids can provide a train-the-trainer workshop. Let us know by sending an email to As a registered NPO and 501(c)(3) organization in the US, we are able to deliver these workshops quite selflessly, fueled by our passion to teach kids. But donations are always welcome :)

Posted by on 22 March 2015 | 11:20 am

Java at London techhub's most successful startup

I thought some of you might be interested in hearing about Java and the Java dev team at a startup that's grown beyond the initial stage. Nexmo is a four-year-old startup headquartered in San Francisco but with the engineering team based out of techhub London, and is already one of the world's largest cloud communications companies. (Cloud communications gives any application the ability to communicate with people - e.g. sending a PIN code or any message via SMS to a phone, or setting up a phone menu or a callback button.) At Nexmo, the core system is implemented in Java. Of course externally it's a language-agnostic interface (a simple HTTP call to use any service), so you can't specifically see that we use Java from outside the company (apart from the many positions we have open for Java developers).

In terms of technology, like any startup, we're very flexible about what's in use. Older proven tech like Jetty, Trove collections, and lots of Apache Commons modules sit side-by-side with more recently created tech like OpenHFT collections, MongoDB, and Hazelcast. The core system is capable of massive throughput, architected around a queue-and-forward set of Java microservices which allows essentially unlimited horizontal scaling while keeping latency relatively low: overall latency for an SMS message tends to be in seconds because of the carrier hop to the end device, but minimizing the additional latency we add is important, and our architecture keeps this down to a few milliseconds per message regardless of throughput. Voice technology is mature, and low-level communications is best offloaded to dedicated mature server technology - like any sensible company, we prefer to integrate already existing successful technology rather than build our own. Having moved past the early startup phase, we emphasize good solid design patterns, simplicity, and good engineering.
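The queue-and-forward idea mentioned above can be sketched in a few lines of Java. This is a toy illustration of the pattern, not Nexmo's implementation: the caller-facing accept() returns as soon as the message is enqueued, and a worker thread drains the queue and "forwards" each message downstream.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ForwardingService {
    private static final String POISON = "\u0000POISON"; // shutdown marker
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final List<String> forwarded = new ArrayList<>();
    private final Thread worker;

    public ForwardingService() {
        worker = new Thread(() -> {
            try {
                while (true) {
                    String msg = queue.take();
                    if (POISON.equals(msg)) break;
                    // A real service would forward to a downstream carrier
                    // here; we just record the message.
                    synchronized (forwarded) { forwarded.add(msg); }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();
    }

    // Enqueue and return immediately, keeping caller-facing latency low.
    public void accept(String message) { queue.add(message); }

    // Stop the worker and return everything forwarded so far, in order.
    public List<String> shutdownAndDrain() throws InterruptedException {
        queue.add(POISON);
        worker.join();
        synchronized (forwarded) { return new ArrayList<>(forwarded); }
    }
}
```

The single FIFO queue preserves ordering per worker; scaling horizontally means running more such workers, each with its own queue.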
Internally, the components are already highly asynchronous but quite stable; a great deal of our interactions, both upstream and downstream with clients and suppliers, require the use of asynchronous protocols operating highly concurrently. Our next challenges are similar to those of many tech companies: handling enormous amounts of data; how we respond to the Internet of Things (highly relevant to a comms company); how we integrate with chat apps; where WebRTC comes into our product mix.

The culture is very typical "startup": breakouts for table tennis sessions, fresh fruit and various soft drinks constantly available, a relaxed fun atmosphere. The software development team of 15 (and growing) is enormously varied: we have every experience level from recent graduate to 20-year Java veteran; many ethnicities and nine nationalities (mostly various European); 40% of the team are women; and we include one Java Champion. As someone who had previously spent over a decade in investment banks, it's a massive breath of fresh air; I find it fantastically free and convivial in comparison. I hope that gives you a flavour of Java at a next-stage startup.

Posted by on 16 March 2015 | 3:56 am

MVC vs JSF a bit of a different perspective

Now that both JSRs are in full swing, I am going to offer you all a bit of a different perspective on the two technologies. As I have stated before, I view them as complementary to each other! I want to talk a bit about the actual work of doing the JSRs themselves. As part of a JSR we deliver a reference implementation, but in reality does the work stop there? No, it surely does not. For JSF we have years of work after the completion of any of its JSRs. So one part is working on a new JSR cycle. But in reality the buck does not stop there. I am talking about the nitty gritty of maintenance!

I have been involved in maintaining the Oracle implementation of JSF, named Mojarra, since December 2011. What have I learned? Maintaining a piece of software that is backed by a specification is HARD. It is by no means boring, nor is it unchallenging. Quite the contrary: because we have to deliver fixes that stay within the confines of the specification, it is at times quite challenging.

Now offset this against the work that we are currently doing with the MVC specification. Is the MVC specification HARD? Yes, it is too! Weird, huh? You would think writing a specification from scratch is easy, as we have a clean slate. Well, because I have been involved in maintaining Mojarra, whenever I look at the features we might or might not include in Ozark (the MVC reference implementation), one of the questions I ask myself is "Is there a potential for a lot of maintenance on this feature?". E.g., in Ozark we have an SPI so people can plug in new ViewEngines. And we have had external contributors delivering some ViewEngines (a BIG thanks goes out to them). The question came up whether or not we should include them in the reference implementation. Since we simply cannot support them all, we opted to make the contributed ViewEngines community-supported extensions and keep the two ViewEngines officially supported by Ozark to be JSP and Facelets. Why? Well, both of those are also EE specifications!
Anyway, when you think about the Java EE process and you wonder why things sometimes seem to go a bit slow, think about how long this software sticks around and that it has to meet the bar of TCK testing for every patch, bug fix, or enhancement. I hope you enjoyed a look at this perspective. Note of course this is MY perspective on things ;) Enjoy!

Posted by on 13 March 2015 | 2:29 am

Maven's Inflexibility Is Its Best Feature

Over on my blog - how I learned to stop worrying and love Maven.

Posted by on 8 March 2015 | 9:01 pm

The Elephant In The Cloud

Imagine if, for example, the hypervisors that run EC2 were compromised - imagine almost every business you deal with online compromised, all at once. This is the never-talked-about problem with the cloud - over on my blog.

Posted by on 8 March 2015 | 1:18 pm

Case Study: Moving To The Pull Request Workflow & Integrating Quality Engineers On The Team For Better Software Quality

By Christopher W. H. Davis, author of Continuous Improvement

Save 40% on Continuous Improvement with discount code jn15ci at

The book Continuous Improvement walks the reader through the collection and analysis of data to provide metrics that guide the continuous improvement of agile teams. Each chapter ends in a case study outlining the application of this technique with specific types of data. This is an excerpt of a real-world scenario from the end of Chapter 4 that shows the reader how to apply source control data toward continuous improvement.

We've been through how to collect and analyze SCM and CI data, so it's time to see it in action. This real-world scenario will show you how this can be applied to your process. The team in question was generating a lot of bugs. Many of them were small issues that likely should have been caught much earlier in the development process. Regardless, every time they would cut a release they would deploy their code, turn it over to the Quality Management (QM) team, and wait for the bugs to come in. They decided to make a change to get to better quality.

After discussing the issue, the team decided to try out the pull request workflow. They were already using Git, but developers were all committing their code to a branch and merging it all into master before cutting a release. They decided to start tracking commits, pull requests, and bugs to see if using pull requests decreased their bug count. After a few sprints they had the graph shown in Figure 1.

Figure 1 Bugs aren't trending down as the team starts doing pull requests.

To make trends a bit easier to see, we will divide pull requests and commits by 2 so there isn't such a discrepancy between the metrics. That is shown in Figure 2.

Figure 2 The same data with decreased variance between bugs and the other data

That makes it a lot easier to see the variance.
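The scaling step above is just a per-point multiplication; a toy sketch in Java (the sprint numbers here are made up for illustration, not the book's data):

```java
public class MetricsScaling {

    // Multiply each data point by a factor (0.5 here) so series with very
    // different magnitudes can share one chart without dwarfing the bug counts.
    public static double[] scale(int[] series, double factor) {
        double[] out = new double[series.length];
        for (int i = 0; i < series.length; i++) {
            out[i] = series[i] * factor;
        }
        return out;
    }

    public static void main(String[] args) {
        int[] bugs    = {12, 18, 9};   // hypothetical per-sprint counts
        int[] commits = {40, 44, 38};
        int[] pulls   = {22, 26, 20};
        double[] scaledCommits = scale(commits, 0.5); // {20.0, 22.0, 19.0}
        double[] scaledPulls   = scale(pulls, 0.5);   // {11.0, 13.0, 10.0}
    }
}
```

Plotting bugs against the scaled commit and pull-request series gives the reduced-variance view described for Figure 2.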
As you can see from Figure 2, not much changed; even though there is a big dip in bugs from sprint 18 to 19, we aren't decreasing over time – there was just a big jump in bugs in sprint 18. After discussing it a bit more, the team decided to add some more data points to the mix. To see how much collaboration was happening in the pull requests, they started adding comments to their graphs as well. That resulted in the chart shown in Figure 3. To keep things consistent, we'll divide comments by 2 as well.

Figure 3 Adding comments to our graph and finding an ugly trend

Figure 3 shows that there weren't many comments along with the pull requests, which implies there wasn't much collaboration going on at all. Since the bug trend wasn't changing, it looked like the changes to their process weren't taking effect yet. The workflow by itself wasn't effecting the change they wanted; they needed to make a bigger impact on their process.

To do this, they decided to have their developers act like the QM team when they were put on a pull request. The perspective they needed wasn't just "is this code going to solve the problem?" but "is this code well built, and what can go wrong with it?" There was some concern about developers getting less done if they had to spend a lot of time commenting on other developers' code and acting like the QM team. To help coach them, they moved one of their QM members over to the development team, and the team agreed that if this could result in fewer bugs, the time spent up front was well spent.

They started taking the time to comment on each other's code and ended up iterating quite a bit more on tasks before checking them in. A few sprints of this resulted in Figure 4.

Figure 4 Everything is trending in the right direction!

Figure 4 shows that as collaboration between development and quality increased, in this case shown through comments in pull requests, the number of bugs went down.
This was great news to the team, so they decided to take the process one step further. They brought another member of the QM team down to work with the developers on code reviews and quality checks to avoid throwing code over the wall to the QM team.

Test Engineers

For a long time, the role of the quality department in software has been to check that features were implemented to spec. That is not an engineering discipline, and as a result many people in the quality assurance (QA) and quality management (QM) space were not engineers. To truly have an autonomous team, quality engineering has to be a significant part of the team. The role of the quality engineer (a.k.a. QE, SDET, or Test Engineer) has become more and more popular. However, as quality moves from one state to another in the world of software engineering, this role is not very clearly defined, and often you get either someone with an old quality background who recently learned to write code, or an expert in test-running technology. Neither of these actually works; you need a senior engineer with a quality mindset.

As shown in Figure 4, over time commits and pull requests started increasing as well. As the development team started thinking with a quality mindset, they began writing better code and producing fewer bugs. Also, by combining the QM team with the development team, many issues were found and fixed before code was deployed to the test environment.

Posted by on 23 February 2015 | 3:25 pm

Go in Action: Exploring the Work Package by Brian Ketelsen, Erik St. Martin, and William Kennedy

By Brian Ketelsen, Erik St. Martin, and William Kennedy

Save 40% on Go in Action with discount code jn15goia at

The purpose of the work package is to show how you can use an unbuffered channel to create a pool of goroutines that perform work, and to control the amount of work that gets done concurrently. This is a better approach than using a buffered channel of some arbitrary static size as a queue of work and throwing a bunch of goroutines at it. Unbuffered channels provide a guarantee that data has been exchanged between two goroutines. The approach this package takes allows the user to know when the pool is performing the work, and the pool pushes back when it can't accept any more work because it is busy. No work is ever lost or stuck in a queue with no guarantee that it will ever be worked on.

Let's take a look at the work.go code file from the work package:

Listing 1 chapter7/patterns/work/work.go

    // Example provided with help from Jason Waldrip.
    // Package work manages a pool of goroutines to perform work.
    package work

    import "sync"

    // Worker must be implemented by types that want to use
    // the work pool.
    type Worker interface {
        Task()
    }

    // Pool provides a pool of goroutines that can execute any Worker
    // tasks that are submitted.
    type Pool struct {
        work chan Worker
        wg   sync.WaitGroup
    }

    // New creates a new work pool.
    func New(maxGoroutines int) *Pool {
        p := Pool{
            work: make(chan Worker),
        }

        p.wg.Add(maxGoroutines)
        for i := 0; i < maxGoroutines; i++ {
            go func() {
                for w := range p.work {
                    w.Task()
                }
                p.wg.Done()
            }()
        }

        return &p
    }

    // Run submits work to the pool.
    func (p *Pool) Run(w Worker) {
        p.work <- w
    }
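For readers more at home on this site's usual platform, the same push-back behavior can be sketched in Java with a SynchronousQueue, which hands a task directly from the submitter to a waiting thread much as an unbuffered channel does. This is my own rough analogue, not code from the book, and the class and method names are invented for illustration:

```java
import java.util.concurrent.SynchronousQueue;

// A fixed pool of threads taking work from an unbuffered handoff point.
// run() blocks until a worker accepts the task, mirroring the push-back
// of the Go version's unbuffered channel: no task is ever queued.
public class WorkPool {
    private final SynchronousQueue<Runnable> handoff = new SynchronousQueue<>();
    private final Thread[] workers;

    public WorkPool(int maxThreads) {
        workers = new Thread[maxThreads];
        for (int i = 0; i < maxThreads; i++) {
            workers[i] = new Thread(() -> {
                try {
                    while (true) {
                        handoff.take().run(); // block until work is handed off
                    }
                } catch (InterruptedException e) {
                    // interrupted during take(): exit the worker loop
                }
            });
            workers[i].start();
        }
    }

    // Blocks until a worker thread accepts the task.
    public void run(Runnable task) throws InterruptedException {
        handoff.put(task);
    }

    // Interrupt all workers and wait for them to finish.
    public void shutdown() throws InterruptedException {
        for (Thread t : workers) t.interrupt();
        for (Thread t : workers) t.join();
    }
}
```

As with the Go version, a caller of run() knows that once the call returns, a worker has taken ownership of the task; nothing sits unobserved in a buffer.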

Posted by on 23 February 2015 | 8:13 am

Comparing Spock and JUnit by Konstantinos Kapelonis from Java Testing with Spock

By Konstantinos Kapelonis, Java Testing with Spock

Save 40% on Java Testing with Spock with discount code jn15spock at

In the Java world, there has so far been only one solution for unit tests. The venerable JUnit framework is the obvious choice and has become almost synonymous with unit testing. JUnit has the largest mind share among developers, who are entrenched in their traditions and don't want to look any further. Even TestNG, which offers several improvements and is also fully compatible with JUnit, has failed to gain significant traction.

But fear not! A new testing solution is now available in the form of Spock. Spock is a testing framework written in Groovy but able to test both Java and Groovy code. It is fully compatible with JUnit (it actually builds on top of the JUnit runner) and provides a cohesive testing package that also includes mocking/stubbing capabilities.

It is hard to compare JUnit and Spock in a single article, because the two tools have different philosophies when it comes to testing. JUnit is a Spartan library that provides the absolutely necessary things you need for testing and leaves additional functionality (such as mocking and stubbing) to external libraries. Spock has a holistic approach, providing a superset of the capabilities of JUnit while at the same time reusing its mature integration with tools and development environments. Spock can do everything that JUnit does and more, keeping backwards compatibility as far as test runners are concerned. What follows is a brief tour of some Spock highlights.

Writing concise code with Groovy syntax

Spock is written in Groovy, which is less verbose than Java. This means that Spock tests are more concise than the respective JUnit tests. Of course, this advantage is not specific to Spock itself; any other Groovy testing framework would probably share this trait. But at the moment only Spock exists in the Groovy world. Here is the advantage in a visual way, shown in Figure 1.
Figure 1 Less code is easier to read, easier to debug, and easier to maintain in the long run.

Mocking and stubbing with no external library

JUnit does not support mocking and stubbing on its own. There are several Java frameworks that fill this gap. The main reason I got interested in Spock in the first place is that it comes with batteries included: mocking and stubbing are supported out of the box.

Figure 2 Spock is a superset of JUnit

I'll let this example explain. David goes into a software company and starts working on an existing Java code base. He's already familiar with JUnit (the de facto testing framework for Java). While working on the project, he needs to write some unit tests that must run in a specific order. JUnit does not support this, so David also brings TestNG into the project. Later he realizes that he needs mocking for some very special features of the software (for example, the credit card billing module). He spends some time researching all the available Java libraries (there are many), chooses Mockito, and integrates it into the code base as well. Months pass, and David learns all about Behavior-Driven Development at his local dev meeting. He gets excited! Again he researches the tools and selects JBehave for his project in order to do BDD.

Meanwhile, Jane is a junior developer who knows only vanilla Java. She joins the same company and gets overwhelmed on her first day because she has to learn 3-4 separate tools just to understand all the testing code.

In an alternate universe, David starts working with Spock as soon as he joins the company. Spock has everything he needs for all testing aspects of the application. He never needs to add another library or spend time researching tools as the project grows. When Jane joins the same company in this alternate universe and asks David for hints on the testing code, he just replies, "Learn Spock and you will understand all the testing code."
Jane is happy because she has to focus on a single library instead of three. Even though Spock does not offer a full-featured BDD workflow (as JBehave does), it still offers the capability to write tests understandable by business analysts, as shown in the next section.

Using English sentences in Spock tests and reports

Here is a bad JUnit test (I see these all the time). It contains cryptic method names that do not describe what is being tested.

Listing 1 A JUnit test where method names are unrelated to business value

    public class ClientTest {

        @Test
        public void scenario1() {                                  #A
            CreditCardBilling billing = new CreditCardBilling();
            Client client = new Client();
            billing.chargeClient(client, 150);
            assertTrue("expect bonus", client.hasBonus());         #B
        }

        @Test
        public void scenario2() {                                  #A
            CreditCardBilling billing = new CreditCardBilling();
            Client client = new Client();
            billing.chargeClient(client, 150);
            client.rejectsCharge();
            assertFalse("expect no bonus", client.hasBonus());
        }
    }

    #A A test method with a generic name
    #B Non-technical people cannot understand the test

This code is understandable only by programmers. Also, if the second test breaks, a project manager (PM) will see the report and know only that "scenario2" is broken. This report has no value for the PM, since he does not know what scenario2 does without looking at the code. Spock supports an English-like flow.
Compare the same example in Spock:

Listing 2 A Spock test where methods explain the business requirements

    class BetterSpec extends spock.lang.Specification {

        def "Client should have a bonus if he spends more than 100 dollars"() {
            when: "a client buys something with value at least 100"     #A
            def client = new Client();
            def billing = new CreditCardBilling();
            billing.chargeClient(client, 150);

            then: "Client should have the bonus option active"          #B
            client.hasBonus() == true
        }

        def "Client loses bonus if he does not accept the transaction"() {
            when: "a client buys something and later changes mind"      #A
            def client = new Client();
            def billing = new CreditCardBilling();
            billing.chargeClient(client, 150);
            client.rejectsCharge();

            then: "Client should have the bonus option inactive"        #B
            client.hasBonus() == false
        }
    }

    #A Business description of test
    #B Human-readable test result

Even if you are not a programmer, you can read just the English text in the code (the sentences inside quotes) and get the following:

Client should have a bonus if he spends more than 100 dollars
--when a client buys something with value at least 100
--then Client should have the bonus option active

Client loses bonus if he does not accept the transaction
--when a client buys something and later changes mind
--then Client should have the bonus option inactive

This is very readable. A business analyst could read the test and ask questions about other cases (What happens if the client spends 99.9? What happens if he changes his mind the next day and not immediately?).
Also, if the second test breaks, the PM will see in the report a red bar with the title "Client loses bonus if he does not accept the transaction." He instantly knows the severity of the problem (perhaps he decides to ship this version anyway if he considers it non-critical).

Facts about Spock

- Spock is an alternative test framework written in the Groovy programming language.
- A test framework automates the boring and repetitive process of manual testing, which is essential for any large application code base.
- Although Spock is written in Groovy, it can test both Java and Groovy code.
- Spock has built-in support for mocking and stubbing without an external library.
- Spock follows the given-when-then code flow commonly associated with the Behavior-Driven Development paradigm.
- Both Groovy and Java build and run on the JVM. A large enterprise build can run both JUnit and Spock tests at the same time.
- Spock uses the JUnit runner infrastructure and is therefore compatible with all existing Java infrastructure. For example, code coverage with Spock is possible in the same way as with JUnit.
- One of the killer features of Spock is the detail it gives when a test fails. JUnit mentions only the expected and actual values, whereas Spock records the surrounding running environment, mentioning intermediate results and allowing the developer to pinpoint the problem with greater ease than JUnit.
- Spock can pave the way for a full Groovy migration of a Java project if that is what you wish. Otherwise, it is perfectly possible to keep your existing JUnit tests in place and use Spock only in new code.

Posted by on 19 February 2015 | 8:28 am