And we have moved!

And we have moved to https://www.manorrock.com/blog/

Posted on 5 July 2015 | 2:24 am

Webinar Notes: Typesafe William Hill Omnia Patrick Di Loreto

My friend Oliver White is doing his usual bang-up job in his new gig at Typesafe. One aspect is the humble webinar. Here are my notes for one that caught my eye, Using Spark, Kafka, Cassandra and Akka on Mesos for Real-Time Personalization. This is a very dense but well delivered presentation by Patrick Di Loreto, who helped develop a new platform for his employer, the online gambling service William Hill. Morally, I am sensitive to the real damage done to real lives and families that is caused by gambling, so I will include a link to an organization that offers help: 1-800-GAMBLER. That said, this is just another instance of the ancient tradition of technology development being driven by something that traditionally is seen as vice. (For a humorous, NSFW and prophetic Onion article, search Google for "theonion internet andreessen viewing device". I'm old enough to have first read that on an actual physical newspaper!)

Now, on to the raw notes. YMMV of course, but if nothing else this will help you overcome the annoying problem of the slide advancing not being synched to the audio.

Context: presentation by Patrick Di Loreto (@patricknoir), R&D Engineering lead for William Hill online betting. The presentation was done for Typesafe as a webinar on 14 June 2015.

They have a new betting platform they call Omnia.
- Need to handle a massive amount of data
- Based on the Lambda Architecture from Nathan Marz <http://lambda-architecture.net/>
- Omnia is a platform that includes four different components:
  * Chronos - Data Source
  * Fates - Batch Layer
  * NeoCortex - Speed Layer
  * Hermes - Serving Layer

03:47 Definition of Lambda Architecture
  "All the data must come from a unique place (data source)."
  They separate access to the data source into two different modes based on timeliness requirements.
  NeoCortex (Speed Layer) is to access the data in real time, but without some consistency and correctness guarantees. Optimized for speed. It has only recent data.
  Fates (Batch Layer) is to access the data not in real time, but with more (complete?) consistency.

05:00 Reactive Manifesto slide

06:15 Importance of elasticity for them

06:47 Chronos Data Source: "It's nothing else than a container for active streams"
  "Chronos is a sort of middleware. It can talk to the outside world and bring the data into their system." It organizes the data into a stream of observable events, called "incidents". Can have different viewpoints for different concerns:
  * Internal (stuff they need to implement the system itself)
  * Product centric (which of the WH products, such as "sports", "tweets", "news")
  * External ("social media sharing")
  * Customer centric

10:12 Chronos streams connect to the external system and bring it into Chronos
  Adapter: understands all the possible protocols that other systems implement; connects to the other system.
  Converter: transforms the incoming data into their internal format.
  Persistence Manager: makes the converted data durable.

11:22 Chronos clustering
  Benefits from the Akka framework. Distributes the streams across the cluster. When failover happens, the stream connection to the outside source is migrated to another node via Akka. Keeps referential transparency. Each stream is an Actor which "supervises" its children: adapter, converter, persistence manager.

12:41 Supervising (slides diverged from audio) (Slide 12)
  Supervision is key to allowing the "error kernel pattern".
  <http://danielwestheide.com/blog/2013/03/20/the-neophytes-guide-to-scala-part-15-dealing-with-failure-in-actor-systems.html>
    Basically, it is just a simple guideline you should always try to follow, stating that if an actor carries important internal state, then it should delegate dangerous tasks to child actors, so as to prevent the state-carrying actor from crashing. Sometimes, it may make sense to spawn a new child actor for each such task, but that's not a necessity.
    The essence of the pattern is to keep important state as far at the top of the actor hierarchy as possible, while pushing error-prone tasks as far to the bottom of the hierarchy as possible.
  Embrace failure as part of the design. Connections are not resilient.
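(Aside, not from the talk: to make the error kernel idea concrete, here is a minimal sketch of a state-carrying "stream" actor that delegates the fragile connection work to a child, written against a recent Akka classic Java API. The class names, the String messages, and the simulated failure are all my own inventions, not William Hill's code.)

  import akka.actor.AbstractActor;
  import akka.actor.ActorRef;
  import akka.actor.OneForOneStrategy;
  import akka.actor.Props;
  import akka.actor.SupervisorStrategy;
  import akka.japi.pf.DeciderBuilder;
  import java.util.concurrent.TimeUnit;
  import scala.concurrent.duration.Duration;

  // Hypothetical child that owns the fragile external connection.
  class AdapterActor extends AbstractActor {
    @Override
    public Receive createReceive() {
      return receiveBuilder()
          .match(String.class, incident -> {
            // Pretend to push the incident over a flaky connection.
            if (Math.random() < 0.1) throw new RuntimeException("connection lost");
          })
          .build();
    }
  }

  // Hypothetical parent "stream" actor: it keeps the important state and pushes
  // the error-prone work down to the child, so a crash only restarts the child.
  public class StreamActor extends AbstractActor {
    private long lastOffset;  // important state lives in the parent
    private final ActorRef adapter =
        getContext().actorOf(Props.create(AdapterActor.class), "adapter");

    @Override
    public SupervisorStrategy supervisorStrategy() {
      // Restart just the failed child; the parent and its state are untouched.
      return new OneForOneStrategy(10, Duration.create(1, TimeUnit.MINUTES),
          DeciderBuilder
              .match(RuntimeException.class, e -> SupervisorStrategy.restart())
              .matchAny(o -> SupervisorStrategy.escalate())
              .build());
    }

    @Override
    public Receive createReceive() {
      return receiveBuilder()
          .match(String.class, incident -> {
            lastOffset++;                             // safe state update here
            adapter.forward(incident, getContext());  // risky I/O happens in the child
          })
          .build();
    }
  }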
14:08 They have extended Akka cluster to allow for need-based elastic redistribution.

14:20 First mention of the Apache Kafka message broker. (This looks like a good article about the origin of Kafka: <http://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying>.)
  Fates organizes incidents recorded by Chronos into timelines, grouped by categories. Can also create "Views" as an aggregation of timelines or other views.

15:56 (Slide 15) More details on Timeline: the history, sequence, and order of events from Chronos.

16:21 (Slide 15) Customer timeline example.

17:16 (Slide 15) First mention of Cassandra (18:11). They use this as their NoSQL implementation.

18:37 (Slide 16) More details on how Fates uses Cassandra. This is where they define the schema.

18:42 Every timeline category has a table, named <TimelineCategory>_tl.
  The key definition is most important to enable fault tolerance and horizontal scaling. The key is ((entityId, Date), timestamp).

19:23 If they had chosen the entityId alone as the partition key, this would not have been a good choice, because customers are going to want to do things with the entities. This would result in an unbalanced cluster: some nodes would contain much more data than others. Throwing in the date and timestamp lets the data fan out over time. Every day they define a new key.
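(Aside, not from the talk: a hedged sketch of what a timeline table with that key shape might look like, expressed through the DataStax Java driver. Only the ((entityId, date), timestamp) key shape and the _tl naming come from the notes; the keyspace, table, and column names are my own guesses.)

  import com.datastax.driver.core.Cluster;
  import com.datastax.driver.core.Session;

  public class TimelineSchemaSketch {
    public static void main(String[] args) {
      try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
           Session session = cluster.connect()) {
        session.execute("CREATE KEYSPACE IF NOT EXISTS fates "
            + "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}");
        // Compound partition key (entityid, day): every entity starts a fresh partition
        // each day, so data fans out across the cluster instead of piling up on the
        // node that owns a single hot entity.
        session.execute("CREATE TABLE IF NOT EXISTS fates.customer_tl ("
            + "  entityid text,"
            + "  day      text,"          // e.g. '2015-06-14'
            + "  ts       timestamp,"
            + "  incident text,"
            + "  PRIMARY KEY ((entityid, day), ts))");
      }
    }
  }

A query for one customer's events today hits a single partition; a query spanning days touches several partitions, which is the trade-off the notes describe.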
20:19 (Slide 17) Views
  Views are built by jobs. They want to do machine learning and logical reasoning. They want to distinguish between deduction, induction and abduction.
  Deduction: the cause of the event. If it's raining, the grass will be wet.
  Induction: not the strict mathematical definition! A conclusion drawn after several observations.
  Abduction: when your deduction is correct. For example, if we have several customers that watch matches from the Liverpool team, then we can conclude that they are supporters of Liverpool.
  <https://www.butte.edu/departments/cas/tipsheets/thinking/reasoning.html>

22:35 (Slides 18, 19) NeoCortex (Speed Layer)
  Nothing more than a library built on Apache Spark.

22:56 (Slide 19) First mention of microservices.
  He said NeoCortex is an ease-of-use layer that allows their developers to create microservices on top of the Omnia platform. It uses the distributed nature of Spark, while hiding the complexity of interacting with the other subsystems. Fast and real time. Looks like this is where their domain experts (data scientists) work. Lots of terms from statistics in this section: "autoregressive models", "monoids", "rings". Looks like the Breeze framework <http://www.getbreezenow.com/> was mentioned here.

24:14 (Slide 19) Essentially, what they want with NeoCortex is to provide the building blocks for their data scientists to generate recommendations, identify fraud, optimize customer experience, etc.

24:35 (Slide 20) Scala code for one of their microservices.
  Note: this doesn't seem to need to be in Scala now that Java SE 8 has lambdas. He mentions use of Observable (line 12), but interestingly does not mention use of ReactiveX <http://reactivex.io/documentation/observable.html>.

27:00 (Slide 21) How Spark runs the code from slide 20.
  Map allows them to leverage the power of parallelism: the lambda in the map function is performed by all the nodes in the cluster in parallel. ReduceByKey still has parallelism (28:50), processing the Desktop and Mobile channels in parallel (for example). But because the parallelism is reduced, this is going to be the most expensive of the processes in slide 20.

29:04 (Slides 21, 22, 23) Hermes
  Still referring to the slide 20 code, he points out that every single lambda of that thing is running on different nodes, and in parallel! This is what NeoCortex does: it understands Spark, RDD partitions, and parallelism in Spark very well.
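(Aside, not from the talk: since slide 20 itself isn't reproduced here, this is a minimal stand-in in the Spark Java API showing the map/reduceByKey shape he describes. The input file and field layout are invented.)

  import org.apache.spark.SparkConf;
  import org.apache.spark.api.java.JavaPairRDD;
  import org.apache.spark.api.java.JavaRDD;
  import org.apache.spark.api.java.JavaSparkContext;
  import scala.Tuple2;

  public class ChannelCounts {
    public static void main(String[] args) {
      SparkConf conf = new SparkConf().setAppName("channel-counts").setMaster("local[*]");
      try (JavaSparkContext sc = new JavaSparkContext(conf)) {
        // One line per incident, e.g. "Desktop,betPlaced" or "Mobile,pageView"
        JavaRDD<String> incidents = sc.textFile("incidents.csv");

        // map: the lambda runs on every partition, on every node, fully in parallel
        JavaPairRDD<String, Integer> perChannel =
            incidents.mapToPair(line -> new Tuple2<>(line.split(",")[0], 1));

        // reduceByKey: still parallel (one reduction per channel), but it forces a
        // shuffle and collapses parallelism to the number of keys, the expensive step
        JavaPairRDD<String, Integer> counts = perChannel.reduceByKey(Integer::sum);

        counts.collect().forEach(t -> System.out.println(t._1() + " = " + t._2()));
      }
    }
  }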
30:40 Simple full duplex communication for the Web. Data as API.

31:10 (Slide 24) Hermes distributed cache
  Hermes JavaScript framework. Allows their developers to interact without leaving the domain. "We want happy web developers."

32:09 Mentions use of JSONPath in order to have a graph which can fully represent the domain model.

32:33 Hermes is responsible for caching, in the web browser, the information relevant for the page.

32:39 The Hermes Node component (not node.js based?) is the mediator between the two worlds.

34:12 Dispatcher, one of the most important components. If there is a lot of data heading to the client, but we know the client doesn't really need all of it, the dispatcher will ensure only the last one gets delivered. It batches and optimizes network communication.

35:07 This is what differentiates Hermes from similar frameworks. It starts to be proactive, rather than reactive! It enables prediction based on user preferences.

36:09 (Slide 25) Infrastructure

36:21 Mesos usage. (Slide 26) "Game changing. Slide 26 shows how IT development has changed in the last 20 years." It used to be a mainframe with lots of nodes. Because Moore's law is ending, the world changed the other way around.

37:51 Use of Marathon, a REST API built on Mesos to provide elasticity to scale up and scale down. <https://mesosphere.github.io/marathon/docs/>

38:04 Docker
  "It can be considered the same concept we have seen with the Actor before. The error kernel pattern we had in the actor model, and the supervisor mechanism, is a nice concept of failure. If I have to fail, I want to fail in isolation. For this reason, every single component of the Omnia platform should run inside a Docker container." Lets them contain failure.

38:49 (Slide 27) Example of how Omnia is domain agnostic.
  Each part of Omnia is provided with a JMX monitor (IMHO this is the secret sauce). Through Chronos, we can create a stream whose source is the JMX data! We have an observable that shows the health status of the whole platform. Through Fates we have stories about the system. Through NeoCortex, we can become aware that we need more resources at certain times, for example around the schedule of football matches.

41:16 Oliver takes questions.
  Is it available for external use? They are looking to open source it in the long term.

42:23 Technology votes. Why didn't you choose Akka Streams at first? Any tips on adopting this technology? NeoCortex uses Spark core and Spark Streaming.

44:22 In addition to Cassandra, are you using any other big data storage for Fates? No. They looked at several others, but Cassandra was a perfect fit. It also has good integration with Spark.

45:12 Any problems with persistence and Mesos? Mesos is usually mentioned in the context of persistent processing. Yes, they are exploring how to do this. Their Cassandra cluster is not yet integrated into Mesos.

45:58 Loss of speaker; audio resumes around 50:59. Cassandra has enough already without putting it into Mesos.

47:38 How many people and how long did it take to have this ready for production? Omnia is not yet in production! It's part of the research job they are doing at WH Labs. Four engineers. Staged delivery. He didn't say how long.

48:34 Is there a danger of using Omnia to monitor Omnia? Not something they will introduce little by little. He doesn't see too much danger of that.

49:32 Have you considered using stream processing frameworks like Samza or Storm? What is the difference between these and what you use? He likes Samza; it fits with Kafka. He found Spark Streaming better suited to distributed processing. Better semantics. More functional approach.

51:52 Are you using a public or private cloud? Private at this moment. Reason: data sensitivity, legal framework.

52:36 Any thoughts on how Akka persistence compares to your persistence stack? They are using Akka persistence. He didn't talk about the fact that data is represented as a graph using the Actor model; it is implemented using Akka persistence on top of Cassandra.

53:32 What can you advise regarding career opportunities with Akka, Play, Scala? With the coming of these highly parallel systems, we need to find a different way of programming. This is why he likes the Reactive Manifesto. They are a JVM house. They used to be a Java house. Since they started to adopt Akka for referential integrity, he doesn't see much future in Spring or Java EE.

Posted on 1 July 2015 | 3:28 pm

Using Apache Spark DataFrames for Processing of Tabular Data

This post will help you get started using Apache Spark DataFrames with Scala on the MapR Sandbox. The new Spark DataFrames API is designed to make big data processing on tabular data easier. A Spark DataFrame is a distributed collection of data organized into named columns that provides operations to filter, group, or compute aggregates, and can be used with Spark SQL. https://www.mapr.com/blog/using-apache-spark-dataframes-processing-tabul...
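For a flavor of the API the linked post describes (the post itself uses Scala), here is a minimal Java sketch. It assumes a later Spark release with SparkSession, and an invented people.json file containing age and country columns.

  import org.apache.spark.sql.Dataset;
  import org.apache.spark.sql.Row;
  import org.apache.spark.sql.SparkSession;
  import static org.apache.spark.sql.functions.col;

  public class DataFrameSketch {
    public static void main(String[] args) {
      SparkSession spark = SparkSession.builder()
          .appName("dataframe-sketch").master("local[*]").getOrCreate();

      // A DataFrame (Dataset<Row> in current Spark) is a distributed table of named columns.
      Dataset<Row> people = spark.read().json("people.json");

      // Filter, group and aggregate without writing SQL...
      people.filter(col("age").gt(21))
            .groupBy(col("country"))
            .count()
            .show();

      // ...or register it as a view and use Spark SQL directly.
      people.createOrReplaceTempView("people");
      spark.sql("SELECT country, COUNT(*) FROM people WHERE age > 21 GROUP BY country").show();

      spark.stop();
    }
  }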

Posted on 29 June 2015 | 1:09 am

The Curious Case of the char Type

It's been almost twenty years since Gary Cornell contacted me to tell me "Cay, we're going to write a book on Java." Those were simpler times. The Java 1.0 API had 211 classes/interfaces. And Unicode was a 16-bit code.

Now we have over 4,000 classes/interfaces in the API, and Unicode has grown to 21 bits. The latter is an inconvenience for Java programmers. You need to understand some pesky details if you have (or would like to have) customers who use Chinese, or you want to manipulate emoticons or symbols such as 'TROPICAL DRINK' (U+1F379). In particular, you need to know that a Java char is not the same as a Unicode "code point" (i.e. what one intuitively thinks of as a "Unicode character"). A Java String uses the UTF-16 encoding, where most Unicode code points take up one char value, and some take up two. For example, the tropical drink character, erm, code point is encoded as the sequence '\uD83C' '\uDF79'.

So, what does that mean for a Java programmer? First off, you have to be careful with methods such as substring. If you pass inappropriate index values, you can end up with half a code point, which is guaranteed to cause grief later. As long as index values come from a call such as indexOf, you are safe, but don't use str.substring(0, 1) to get the first initial—you might just get half of it. The char type is now pretty useless for application programmers. If you call str.charAt(i), you might not get all of the code point, and even if you do, it might not be the ith one. Tip: If you need the code points of a string, call: int[] codePoints = str.codePoints().toArray();

I recently finished the book "Core Java for the Impatient", where I cover the "good parts" of Java, for programmers who come from another language and want to get to work with Java without sorting through twenty years of historical baggage. In that book, I explain the bad news about char in somewhat mind-numbing detail and conclude by saying "You probably won't use the char type very much." All modesty aside, I think that's a little better than what the Java tutorial has to offer on the subject: "char: The char data type is a single 16-bit Unicode character. It has a minimum value of '\u0000' (or 0) and a maximum value of '\uffff' (or 65,535 inclusive)." Uffff. What is a "single 16-bit Unicode character"???

A few days ago, I got an email from a reader who had spotted a somewhat unflattering review of the book in Java Magazine. Did the reviewer commend me on giving readers useful advice about avoiding char? No sir. He kvetches that I say that Java has four integer types (int, long, short, byte), when in fact, according to the Java Language Specification, it has five integral types (the last one being char). That's of course correct, but the language specification has an entirely different purpose than a book for users of a programming language. The spec mentions the char type 113 times, and almost all of the coverage deals with arithmetic on char values and what happens when one converts between char and other types. Programming with strings isn't something that the spec cares much about. So, it is technically true that char is "integral", and for a spec writer that categorization is helpful. But is it helpful for an application programmer? It would be a pretty poor idea to use char for integer values, even if they happen to fall in the range from 0 to 65535.

I like to write books for people who put a programming language to practical use, not those who obsess about technical minutiae.
And, judging from Core Java, which has been a success for almost twenty years, that's working for the reading public. I'll raise a glass of 'TROPICAL DRINK' (U+1F379) to that!
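To make the surrogate-pair pitfalls above concrete, here is a small self-contained illustration; the sample string is my own.

  public class TropicalDrink {
    public static void main(String[] args) {
      // "I", HEAVY BLACK HEART, TROPICAL DRINK -- the last one needs a surrogate pair
      String s = "I\u2764\uD83C\uDF79";

      System.out.println(s.length());                        // 4 char values...
      System.out.println(s.codePointCount(0, s.length()));   // ...but only 3 code points

      // substring with a char index can split the drink in half
      String broken = s.substring(0, 3);                     // ends between \uD83C and \uDF79
      System.out.println(broken.endsWith("\uD83C"));         // true: a lone high surrogate

      int[] codePoints = s.codePoints().toArray();           // Java 8 and later
      System.out.println(Integer.toHexString(codePoints[2])); // 1f379
    }
  }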

Posted on 22 June 2015 | 4:57 am

SIP Servlet 2.0 and CDI

SIP Servlet 2.0 makes it possible to use CDI with SIP Servlet applications. It supports SIP Servlet POJOs as component classes that qualify as CDI managed beans. It also defines SIP specific CDI beans and scope types. Let's explore each of them.

SIP Servlet POJOs qualify as CDI managed beans

With this, it is now possible to inject CDI beans into SIP Servlet POJOs, making all features of CDI available to SIP Servlet applications. Note that the lifecycle of the SIP Servlet POJOs is still managed by the SIP container, just like other component classes defined in the Java EE specification. This also applies to SIP listeners and regular SIP Servlets.

SIP specific built-in beans

There are five SIP specific built-in beans, listed below.

  javax.servlet.sip.SipFactory
  javax.servlet.sip.SipSessionsUtil
  javax.servlet.sip.TimerService
  javax.servlet.sip.SipApplicationSession
  javax.servlet.sip.DnsResolver

These objects, which are otherwise familiar to SIP Servlet developers, can now be injected into a SIP Servlet using @Inject.

SIP specific CDI scopes

There are two standard scope types defined:

  @SipApplicationSessionScoped
  @SipInvocationScoped

When a CDI bean is of SipApplicationSession scope, the lifecycle of that bean is bound to a SipApplicationSession. With this, applications can be developed without having to recreate state objects from attributes saved in the SipApplicationSession. The lifecycle of the bean is managed by the container. Given that containers usually manage concurrency and availability at the level of the SipApplicationSession, this scope becomes an important feature. Similarly, the lifecycle of an object with SipInvocation scope is tied to the invocation of a SIP Servlet POJO or any listener.

Here is an example of a bean which is SipApplicationSessionScoped:

  @SipApplicationSessionScoped
  public class MyProxy implements Serializable {
    private long startTime;

    public void forward(SipServletRequest req) throws Exception {
      SipURI uri = (SipURI) req.getRequestURI().clone();
      req.setRequestURI(uri);
      Proxy p = req.getProxy();
      p.proxyTo(uri);
      startTime = System.currentTimeMillis();
    }

    public void subsequentRequest() {
      System.out.println("Total elapsed time is " +
        (System.currentTimeMillis() - startTime));
    }
  }

Also, see how a POJO uses it. Note that an instance of MyProxy will be created for each call by the container.

  @SipServlet
  public class SipHandler {
    @Inject MyProxy myProxy;

    @Invite
    public void onInvite(SipServletRequest request)
        throws Exception {
      myProxy.forward(request);
    }

    @AnyMethod
    public void onRequest(SipServletRequest request)
        throws IOException {
      myProxy.subsequentRequest();
    }
  }

Hope you find this useful.
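As a footnote to the built-in beans list above, a hedged sketch of injecting two of them into a SIP Servlet POJO; the class name and the MESSAGE request it creates are purely illustrative, and the @SipServlet/@Invite annotations are the same SIP Servlet 2.0 POJO annotations used in the examples above (their imports are omitted here).

  import javax.inject.Inject;
  import javax.servlet.sip.SipFactory;
  import javax.servlet.sip.SipServletRequest;
  import javax.servlet.sip.SipSessionsUtil;

  @SipServlet
  public class InjectionExample {

    @Inject SipFactory sipFactory;          // built-in bean: no JNDI lookup or @Resource needed
    @Inject SipSessionsUtil sessionsUtil;   // another of the five built-in beans

    @Invite
    public void onInvite(SipServletRequest req) throws Exception {
      // Illustrative only: use the injected factory to create and send a MESSAGE
      // request within the same application session.
      sipFactory.createRequest(req.getApplicationSession(), "MESSAGE",
          req.getFrom(), req.getTo()).send();
    }
  }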

Posted on 19 June 2015 | 6:50 am

GeekOut 2015 Summary

I last had the pleasure of visiting the lovely Baltic city of Tallinn in 2012, when I presented JSF 2.2 and the Rockstar talk at GeekOut 2012. Now that I've got something new (for me anyway) to talk about, I made the cut and was invited back to present Servlet 4.0 at GeekOut 2015. Attendance was capped at 400, giving this conference a very exclusive feel. Indeed, 99% of those that registered for the conference actually did attend. This was the 5th installment of the GeekOut conference, hosted by ZeroTurnaround. This was the first time the conference had two tracks, so my report here only covers the sessions I actually attended. All of the sessions were video recorded, and I expect the sessions will be made available soon.

Day One

Day one started with back-to-back plenary sessions offering two different perspectives on the #java20 theme. Stephen Chin gave a historically rich but technically light session featuring lots of freshly recorded video clips with Java luminaries. Of course there was ample content from James Gosling, who I would like to congratulate for winning the 2015 IEEE John von Neumann Medal. This puts James in the company of such titans as Leslie Lamport, Donald Knuth, Ivan Sutherland, and Fred Brooks. I was happy to see that Stephen dove deeper and offered the perspectives of John Rose and Georges Saab on more fundamental aspects of the history of Java.

Martin Thompson followed Stephen with a very complementary session. The session was so complementary I'd almost say they coordinated. Martin's session gave his personal perspective on Java over the years, with some very interesting stories from his work on making Java perform well. I liked his perspective on the causes and challenges of bloat in a long-lived software ecosystem. Another very interesting perspective was the extent to which high frequency trading drives advances in performance (in Java and in the entire industry). Martin's talk piqued my desire for a #java20 talk about all the companies that have been spawned directly or indirectly by the Java ecosystem. I'm thinking Interface21, Tangosol, JBoss, NoFluffJustStuff, ZeroTurnaround, Atlassian, Parleys, and there are many others. Hey, I'm pretty sure there's an interesting talk in there somewhere.

After the plenary sessions, we broke out into the two tracks, starting with my session on Servlet 4.0 and a session on Cassandra. My session was quite well attended, and it went pretty smoothly. We'll see how the feedback shows up, however! After my session, I went down to see Markus Eisele talk about Apache Camel. I hadn't followed the progress in the Camel community and I'm happy to see it is still doing well. Also nice to see my old pal Gregor Hohpe represented virtually, as his book is represented in spirit in Camel itself.

I was very keen to see the Vaadin talk from Peter Lehto. I had long been perplexed at Vaadin's ability to decouple itself from GWT, particularly as GWT's popularity has dwindled. This talk, at last, promised to lay bare the secret at the heart of Vaadin: its runtime is dependent on GWT. I was not disappointed, but I was also very pleasantly surprised. Mr. Lehto directly addressed the question of the relevance of server side UI frameworks, including Vaadin (and JSF, though he didn't name it specifically), in an HTML5 JavaScript framework world. He did so by pointing out the importance of abstraction, which I've long been pointing out when presenting on JSF.
In the case of Vaadin and JSF, their core value add is the authoring experience. With Vaadin, it's Java programmers who want to treat the world like Swing. With JSF it's "page developers" who want to treat the world like some form of VisualBasic environment. For Vaadin, its existing abstraction allows their underlying runtime to leverage W3C Web Components (or the Polymer implementation of the same) for some Vaadin components while relying on GWT for others. Peter put a strong stake in the ground and predicted that W3C Web Components are the future for web development. I don't disagree, but JSF is well positioned to leverage W3C Web Components because it fits in nicely with the JSF abstraction.

Day Two

Day two started out with Attila Szegedi's highly technical Rhino talk. This was the first talk of the day, after the party night, so it was a little lightly attended. However, those that made it there were rewarded with an in-depth understanding of the rationale for some performance related design decisions in the implementation of Nashorn.

The 10:30 slot was another effectively plenary session, but out in the demo area. Stephen Chin's highly effective NIGHTHACKING brand came to GeekOut with a panel discussion on the #java20 theme. The video is on the NIGHTHACKING website. This was a lot of fun, and I got to put my Javagator old-timer test out there. I also had the pleasure of a brief chat with Stephen regarding JSF 2.3 and Servlet 4.0.

I was really looking forward to Tomasz Nurkiewicz's session about CompletableFuture, particularly because of its use in the Java SE 9 HTTP/2 client. Tomasz managed to pack a whole lot into a short, well constructed, code powered session. It's not easy to explain the differences between thenApply(), thenCombine(), thenCompose() and many other methods in the API, but Tomasz succeeded. He even surfaced an important naming inconsistency between the CompletableFuture API and the java.util.Optional API: thenCompose() == flatMap(). For more on this topic from Tomasz, check out his blog entry The Definitive Guide to Completable Future. Personally, I think it's a bit bold to give a single blog entry such a lofty title, but you can't argue that it does indeed cover the topic very well. I meant to ask Tomasz if his code samples from the talk were taken from an upcoming book. Tomasz, if you happen to see this little blog entry, please plug the book if there is one.
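For readers who weren't in the room, here is a tiny sketch of the thenApply/thenCompose/thenCombine distinction (my own example, not Tomasz's code), including the thenCompose == flatMap analogy:

  import java.util.concurrent.CompletableFuture;

  public class ComposeVsApply {

    // Pretend asynchronous lookup that itself returns a future
    static CompletableFuture<Integer> priceOf(String item) {
      return CompletableFuture.supplyAsync(() -> item.length() * 10);
    }

    public static void main(String[] args) {
      CompletableFuture<String> item = CompletableFuture.supplyAsync(() -> "latte");

      // thenApply maps with a plain function; wrapping a future-returning function
      // here nests the futures, just like map() would on Optional or Stream.
      CompletableFuture<CompletableFuture<Integer>> nested = item.thenApply(ComposeVsApply::priceOf);

      // thenCompose flattens the nesting: the CompletableFuture equivalent of flatMap().
      CompletableFuture<Integer> price = item.thenCompose(ComposeVsApply::priceOf);

      // thenCombine merges two independent futures once both complete.
      CompletableFuture<Integer> total = price.thenCombine(priceOf("mocha"), Integer::sum);

      System.out.println(total.join());
      nested.join();  // only here to show the awkward nested type
    }
  }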
I had high hopes for the next talk, Gleb Smirnov's concurrency talk. It was probably a great talk, but sadly this is when my jetlag hit hard and I was struggling to keep up. I'll look for the video! I took a pass for the 15:00 slot due to the aforementioned jetlag and opted to save my energy for one final session, Andrzej Grzesik's Go. I'd taken a quick look at Go before the session, so I was in a good position to enjoy it. This session made no excuses about having nothing to do with Java and instead just tried to give a quick tour of the Go language and programming environment, with a view towards lowering the barriers to entry to give it a try. Go succeeds because it rules several fundamental things as simply out of scope. There is no dynamic linking. There is no UI. There is no API to the threading model. There is no inheritance. I'm glad Go is out there because sometimes you don't need that stuff. For what it's worth, here's a nice post on Go from the Docker guy.

Finally, there were some brief and tasteful closing remarks from ZeroTurnaround founders and my good friends Jevgeni Kabanov and Toomas Römer. I'm glad to see these guys doing well.

Posted on 16 June 2015 | 4:28 pm

Automating Deployment of the Summit ADF Sample Application to the Oracle Java Cloud Service

Automating Deployment of the Summit ADF Sample Application to the Oracle Java Cloud Service

Posted on 4 June 2015 | 7:25 am

Recent Ripple of JSF Extensions

My colleague Manfred Riem tipped me off to a new framework built on JSF, ButterFaces. This whimsical name started an amusing Twitter thread, but also, and much more importantly, brought several other new JSF extensions to light. This is the sort of thing that I used to look to Kito Mann's JSF Central Frameworks Page for, but it seems that needs an update. So, in addition to ButterFaces, here are several other JSF extensions that are definitely worth a look.

BootsFaces focuses on Twitter Bootstrap 3 and jQuery UI.

Material Prime is a PrimeFaces extension that lets you build web apps that conform to Google's "Material" design language.

Generjee is a Java EE application generator that outputs JSF + Java EE 7 + CDI code. FWIW, I would be remiss if I didn't point out that there is nothing named JEE. So, I suggest this project rename itself to generee. I think that's snappier anyway.

This recent flurry of activity shows two things. First, the continuing vitality of the JSF ecosystem. By now I think we all know that the ecosystem around the technology is just as important as the technology itself. It's like this: a technology has to be "good enough" to get the job done. That's a given. Heck, it could even be excellent technology. But without an ecosystem around it, even the best technologies will not survive. Second, there is still interest and value in server side state UI frameworks. My recent work with HTTP/2 in Servlet 4 has shown me that such frameworks are very well positioned to take advantage of the optimizations in the HTTP/2 protocol, as well as other performance optimizations shown in Ilya Grigorik's excellent High Performance Browser Networking. So, check out these new frameworks and don't be afraid to question conventional wisdom.

Posted on 1 June 2015 | 5:30 pm

Trying out the Java 9 REPL

One of the joys of programming with a dynamic language such as Lisp or Python is the instant feedback you get from the interpreter. You try out a thing or two, something starts working, you move on to the next issue, and before you know it, your task is done. Technically, the program that reads and evaluates the language statements is called a REPL, for "read-evaluate-print loop". And it's not just dynamic languages. Scala is statically typed, and it comes with a REPL. Behind the scenes, the statements that you enter are compiled and executed. (As a potentially entertaining aside, make a Google image search for REPL. Perhaps because I am on sabbatical in Switzerland, I get an ad for "Hypower Musli capsules".)

Why can't we get a REPL with Java? Good question. There are some ways of coming close. Way back when, BeanShell was a plausible solution, but it never caught up to Java 5. Some educational environments, such as BlueJ and Doctor Java, have perfectly reasonable facilities for interactively evaluating Java code. But somehow, even people who use and love these environments don't seem to use that feature a lot. Personally, whenever I need to run some experiments with some new Java API, I use the Scala REPL. But I get frustrated because things like varargs, collections, and lambdas are tedious to bridge and get in the way of rapid exploration. I tried using the Nashorn REPL that comes with Java 8, and that's worse—the chasm between Java and JavaScript is just too great.

So, after more than twenty years, Java is finally going to do it right in Java 9. For now, you still have to build the REPL by hand—it's not yet in the binary distribution. But it's not hard to do. These instructions should get you going on Mac OS X. But if you use Linux, your life is even simpler. (I haven't found any instructions for Windows, and in general, the Windows build process for the JDK seems fiddly. If you run Windows and want to check this out, just use a Linux VM.) Here goes:

Visit https://jdk9.java.net/download/, download the most current JDK 9 build, and untar it in your home directory. It ends up in ~/jdk1.9.0. In your home directory, run

  hg clone http://hg.openjdk.java.net/kulla/dev jshell
  cd jshell
  wget http://repo1.maven.org/maven2/jline/jline/2.12/jline-2.12.jar
  sh get_source.sh
  export JAVA_HOME=~/jdk1.9.0
  export PATH=$JAVA_HOME/bin:$PATH
  export JLINE2LIB=~/jshell/jline-2.12.jar
  cd langtools/repl
  sh scripts/compile.sh

To run the shell, execute these commands:

  export JAVA_HOME=~/jdk1.9.0/
  export PATH=$JAVA_HOME/bin:$PATH
  export JLINE2LIB=~/jshell/jline-2.12.jar
  cd ~/jshell/langtools/repl
  sh scripts/run.sh

Now you are ready to roll. Let's say you want to run some experiments with streams. First import some packages.

  import java.util.stream.*;
  import java.nio.file.*;

Let's read all words from /usr/share/dict/words into a stream:

  -> Files.lines(Paths.get("/usr/share/dict/words"))
  |  Expression value is: java.util.stream.ReferencePipeline$Head@6108b2d7
  |    assigned to temporary variable $1 of type java.util.stream.Stream<String>

As you can see, this is a rather verbose REPL, but maybe that's not a bad thing for users who aren't accustomed to one. You can now work with the temporary variable $1 and process it further:

  $1.filter(w -> w.length() > 20)
  |  Expression value is: java.util.stream.ReferencePipeline$2@180bc464
  |    assigned to temporary variable $2 of type Stream<String>

Why not make an explicit variable?
You can, but then you have to know its type:

  Stream<String> res = Files.lines(Paths.get("/usr/share/dict/words"))

In a dynamically typed language, where variables don't have types, this issue doesn't arise. And in Scala, types are inferred for variables, so you don't have to worry about declaring their types either. But in the Java REPL, you are likely to use the $n variables a lot. So, to complete our example, you might type

  $2.collect(Col

Then you can hit the TAB key, and you get autocompletion suggestions:

  Collection    Collections   Collector     Collectors

When you complete to Collectors.to, you get more suggestions:

  toCollection(      toConcurrentMap(   toList()           toMap(             toSet()

Type L TAB ) to get

  $2.collect(Collectors.toList())
  |  Expression value is: [Andrianampoinimerina's, counterintelligence's, counterrevolutionaries, counterrevolutionary's, electroencephalogram's, electroencephalograms, electroencephalograph, electroencephalograph's, electroencephalographs]
  |    assigned to temporary variable $3 of type List<String>

Now we can be bolder and try it all at once, this time yielding an array:

  Files.lines(Paths.get("/usr/share/dict/words")).filter(w -> w.length() > 20).toArray()
  |  Expression value is: [Ljava.lang.Object;@153f5a29
  |    assigned to temporary variable $4 of type Object[]

Ok, that's interesting—we get the Java weirdness that arrays inherit a useless toString method. It's easy enough to recover:

  Arrays.toString($4)
  |  Expression value is: "[Andrianampoinimerina's, counterintelligence's, counterrevolutionaries, counterrevolutionary's, electroencephalogram's, electroencephalograms, electroencephalograph, electroencephalograph's, electroencephalographs]"
  |    assigned to temporary variable $5 of type String

Maybe they could relent and print arrays for us. Also, it's a bit of a pain that one needs to pay close attention to the numbers with the temporary variables. Maybe $ or $0 could refer to the last result? Overall, the REPL is quite nice, but with a bit of polish, it could be even nicer. These are early days, so there is hope. (As an aside, try a Google image search for hope.) Check it out and let them know what you think! There is some documentation here, and the mailing list is here.

Posted on 25 May 2015 | 6:56 am

Leveraging the Oracle Developer Cloud from Eclipse

In an earlier post I wrote about Getting to Know the Developer Cloud Service. There wasn't an IDE used in that post, and I'm a big fan of IDEs. So in this post we'll look at how Eclipse, in combination with the Oracle Developer Cloud Service, can be used to support the complete application lifecycle, from inception to production. In between we'll create bugs, create code branches, initiate code reviews, and merge code branches. We'll also slip in some continuous integration and continuous delivery. This is often also referred to as DevOps.

Prerequisites

- You have Maven installed.
- You've installed the Oracle Enterprise Pack for Eclipse (OEPE) or added the OEPE plugin repository to your existing Eclipse installation. You have to be at a minimum version of OEPE 12.1.3 6 (the latest at the time of this writing).
- You have an Oracle Developer Cloud Service (DevCS) account. You can Try It for free.
- You have a Java Cloud Service (JCS) instance available for deployment. JCS trials are available to you via your local Oracle sales consultant.
- A local installation of WebLogic Server. The version must match the JCS version. WebLogic is free for desktop development environments.

Configure Oracle Enterprise Pack for Eclipse (OEPE)

The Oracle Cloud window is available from the Java EE perspective, so switch to that perspective if necessary. From the Oracle Cloud window, click the Connect link and add your Developer Cloud Service connection information. Then activate your Java Cloud Service instance; to activate, you need your private key file and the directory location of your local WebLogic runtime. Once activated, you can start the SSH tunnel.

Create Developer Cloud Service Project

Log into the Developer Cloud Service and create a new project, DevOps Example. A Private project is accessible to invited members only. A Shared project is visible to all members of the team; however, team members still need to be added in order to interact with the project, which we'll do in a later step. If desired you can select from a project template. Project templates contain Git source repositories, wiki pages, Maven artifacts, and build jobs that are cloned to the new project. For this example we will be starting without a template. Finally, select your preferred Wiki Markup language. I'll be using Confluence in this example. Then wait a few seconds as your project is provisioned, after which you'll be presented with the home page of your project. The home page contains a stream of information about what has happened in our project, such as changes to source, code reviews, build jobs and wiki updates. Actually, you can configure any RSS/Atom feed of interest.

Add Project Team Members

Since I've set the Project Security as Shared, all members of my team can see the project. However, I will need to add them so they can interact with the project. On the Home page, switch to the Team view, then click the "Click to add a new team member" label. Now Catalina and Raymond will be able to fully interact with the project.

Create a Project Wiki

The Wiki allows us to collaborate as a team. For our purposes, we'll create a simple Wiki outlining our project goals.
Switch to the Wiki tab and click New Page. Enter some text; if necessary, click the Help button to view the Confluence Markup Cheat Sheet. Preview your work, optionally add Attachments, and optionally restrict access rights to All (non-project members), Members and Owners, or just Owners. Finally, Save the page.

Eclipse/Maven

Now we will switch to Eclipse, where we will create a new Maven project. For our purposes, the project will be a simple web application, created using the maven-archetype-webapp. Switch to Eclipse, choose File > New > Maven Project, and select maven-archetype-webapp. Set the GroupID and ArtifactID and click Finish to generate the project. The POM is missing the Java Servlet dependency, hence the red x. Double-click the pom.xml, switch to the Dependencies tab, and Add the Servlet API. Save and close pom.xml. In a few seconds, the red x should clear from the Project Explorer window.

Eclipse/Git

It is now time to put our project under source code control. Create a Git Repository (File > New > Other > Git > Git Repository) and browse to the project location. Right-click the project and select Team > Add to Index. Right-click the project again and select Team > Commit. Then add a message and select Commit and Push, which will also push the changes to the Developer Cloud Service. When you are prompted for the Destination Git Repository, return to the Developer Cloud Service and select the Code tab. This page provides everything you need, whether you're working with Git from the command line, an IDE, or a tool like SourceTree. Copy and paste the HTTP URI from Developer Cloud Service to the URI field of the Push Branch master dialog. Also, enter your Developer Cloud Service password and optionally select Store in Secure Store. Click Next, then click Next again to review the Push confirmation, and wait a few seconds for the Push Results dialog.

Developer Cloud Service - Build Project

Now that we have project code, we can configure a build in our project. Switch to the Home tab, where you can see the activity stream thus far on our project. Copy the HTTP URI for the Git repository, as we will need it in the next step when we configure our build. Navigate to the Build page. There's a Sample_Maven_Build that can be used for reference (and will probably successfully build our project), but let's create a build from scratch. Select New Job. Under Source Code, select Git and paste the Git repository URL. Under Build Triggers, set the SCM polling schedule to * * * * * (every minute). This is standard Cron schedule formatting. This will trigger a build anytime Git is updated. Add a build step to invoke Maven 3; the defaults for this step suffice. Finally, under Post-build Actions, select Archive the artifacts and add target/* to the Files to Archive. This will archive the devopsexample.war for deployment. Click Save. Click Build Now. An executor will pick up the build and the build will be queued. Within a minute the build should kick off. You can click the Console icon to monitor its progress, and you can see the build status and retrieve the artifacts from the build job # page.

Developer Cloud Service - Deploy Project

Once we have a successful build, we can deploy the project to the Oracle Java Cloud Service. The project can also be deployed directly from Eclipse, just as you would to any remotely configured server. For this exercise, we will configure the Developer Cloud Service to deploy our project.
Oracle Developer Cloud Service deploys applications to the Oracle Java Cloud Service server through an SSH tunnel to the Oracle Java Cloud Service Instance Admin VM. Authentication is done through an Oracle Developer Cloud Service generated private-public key pair. You must import the public key into the Oracle Java Cloud Service Instance Admin VM to allow Oracle Developer Cloud Service access to the VM. Follow the steps at Installing the Oracle Developer Cloud Service Public Key in the Oracle Java Cloud Service VM to complete this one-time exercise.

Get the Public IP address of the Administration Server Domain from the Java Cloud Service Console. Switch to the Deploy tab, click New Configuration, and fill in the details. I've configured the deploy to occur with each successful build. For the Java Service, you need to configure the remote target (JCS): supply the IP address you noted from the Java Cloud Service Console and your WebLogic server administration credentials (set when you created the JCS instance). Click Test Connection, then click Use Connection. Click Save, which will create the Deployment. Right-click the gear and select Start, then wait while the application is started. Within a minute the deployment succeeds.

Run the Application

For this exercise, we need to know the IP address of our load balancer, which is also available in the image of the Alpha01JCS instance above: 129.152.144.48. Therefore we could expect our application to be available at https://129.152.144.48/devopsexample, but it's not. Unless explicitly specified in a deployment descriptor, the Developer Cloud Service generates its own context root. To find it, we need to look in the WebLogic Admin Console. Click the Java Service link to launch the WebLogic Admin Console. Navigate to the Deployments page, where you'll find devopsexample. Click the devopsexample deployment to find the Context Root. The application is accessible at https://129.152.144.48/deploy1318416548384467224/.

Developer Lifecycle - Issues/Tasks

Now that we've successfully created and deployed our application, let's work through some developer lifecycle issues (bugs, code reviews, etc.). I know one problem that we want to fix is the context root of our project. In addition, we'll make the home page more personal. We'll start by submitting an issue to address both of these items (yes, these two issues should be tracked separately).

Developer Cloud Service - Issues

Switch to the Issues page. Click New Issue and enter some values. Note, the content for most of the fields you see on this page is configurable from the Administration page, including the ability to add custom fields. Click Create Issue.

Eclipse - Tasks

The Oracle Developer Cloud Service is integrated with the Mylyn task and application lifecycle management framework. Let's see how this works. In the Oracle Cloud window, expand the Developer node, which will reveal your Developer Cloud Service projects. Double-click the DevOps Example project to activate it. Then do the same for Issue and Mine, which will cause Eclipse to fetch the issue from the Developer Cloud Service. At this point, the issue is also viewable in the Eclipse Task List window. Double-click either location to open the task. Accept the task and click Submit. The Status update is reflected in the Developer Cloud Service.

Eclipse - Git/Tasks

We will create a new branch to work on this task.
Right-click the project and select Team > Switch To > New Branch. Name the branch Task 1 and click Finish. In the Task List, right-click the task and select Activate; this will associate the task with the issue. Open index.jsp and change the heading to Hello DevOps Example. Press Ctrl+N and add a new Oracle WebLogic Web Module Descriptor, place it in the src/main/webapp/WEB-INF folder, and set the Context Root to devopsexample. Right-click the project and select Team > Commit. Because of the Task association, the commit knows these changes apply to Task 1. Select weblogic.xml to include it in the commit, select Commit and Push, click Next, then click Finish and wait for the Push Results.

Developer Cloud Service - Merge Request

Now we'll initiate a code review. If all looks acceptable, we'll merge the code into the master branch. Switch to the Developer Cloud Service Merge Requests tab and click New Request. Select the Repository (there's only one at the moment in this project), Target Branch, Review Branch, and Reviewers (yes, I'm reviewing my own code for this example). Click Create, which will open the Review. Click the commit (6fefe85 in my case) to view a summary of the changes. From here the review team can click any line to add a comment. When I'm finished with my review, I can Publish my comments. Once satisfied, click Approve, and the Reviewers pane will update to reflect my status to the rest of the review team. Notice I can also always add additional reviewers at any time. Once all reviewers approve, click Merge to merge the changes into the master branch.

The Fun Begins

Recall we triggered our builds to run after a source code commit. Checking the build page, I see a new build did indeed run. Recall we triggered our deploys to occur after a successful build. Checking the Deploy page, I see a new deploy did indeed happen. Most importantly, our application now behaves as desired. In a development environment, if I wanted to bypass the review cycle, I would simply commit my changes to the master branch, which would trigger the automatic build and deploy. Alternatively, I could have a development branch to which I commit directly and then a QA branch which undergoes a code review. The possibilities are up to your design.

Cleaning Up and Other Tidbits

To put a bow on what might be the longest blog I've ever written, let's return to Eclipse.

Resolve the Task

Switch to Task 1 and notice it's marked as having incoming changes. Click the link to refresh the task, then submit the task Resolved as Fixed.

Pull the Latest Master

With index.jsp open in the editor, switch back to the master branch (Team > Switch To > Master). You'll notice index.jsp reverts to Hello World!. Right-click the project and select Team > Pull: index.jsp refreshes to show Hello DevOps Example!.

View/Run Builds from Eclipse

You can monitor the build status, as well as launch builds, directly from Eclipse. In the Oracle Cloud window, double-click the builds. Double-click Build #2 to view its details. Finally, you can right-click the build job to launch a build from Eclipse.

Posted on 22 May 2015 | 9:21 am