Leveraging the Oracle Developer Cloud from Eclipse

In an earlier post I wrote about Getting to Know the Developer Cloud Service. There wasn't an IDE used in that post, and I'm a big fan of IDEs. So in this post we'll look at how Eclipse, in combination with the Oracle Developer Cloud Service, can be used to support the complete application lifecycle, from inception to production. In between we'll create bugs, create code branches, initiate code reviews, and merge code branches. We'll also slip in some continuous integration and continuous delivery. This is often also referred to as DevOps.

Prerequisites

- You've installed the Oracle Enterprise Pack for Eclipse (OEPE) or added the OEPE plugin repository to your existing Eclipse installation.
- You have an Oracle Developer Cloud Service (DevCS) account. You can Try It for free.
- You have a Java Cloud Service (JCS) instance available for deployment. JCS trials are available via your local Oracle sales consultant.
- You have a local installation of WebLogic Server. The version must match the JCS version. WebLogic is free for desktop development environments.

Configure Oracle Enterprise Pack for Eclipse (OEPE)

The Oracle Cloud window is available from the Java EE perspective, so switch to that perspective if necessary. From the Oracle Cloud window, click the Connect link and add your Developer Cloud Service connection information. Then activate your Java Cloud Service instance: to activate, you need your private key file and the directory location of your local WebLogic runtime. Once activated, you can start the SSH tunnel.

Create Developer Cloud Service Project

Log into the Developer Cloud Service and create a new project, DevOps Example. A Private project is accessible to invited members only. A Shared project is visible to all members of the team; however, team members still need to be added in order to interact with the project, which we'll do in a later step. If desired, you can select from a project template.
Project templates contain Git source repositories, wiki pages, Maven artifacts, and build jobs that are cloned into the new project. For this example we will start without a template. Finally, select your preferred Wiki Markup language; I'll be using Confluence in this example. Then wait a few seconds as your project is provisioned, after which you'll be presented with the home page of your project. The home page contains a stream of information about what has happened in the project, such as changes to source, code reviews, build jobs, and wiki updates. In fact, you can configure any RSS/Atom feed of interest.

Add Project Team Members

Since I've set the Project Security as Shared, all members of my team can see the project. However, I will need to add them so they can interact with it. On the Home page, switch to the Team view, then click the "Click to add a new team member" label. Now Catalina and Raymond will be able to fully interact with the project.

Create a Project Wiki

The Wiki allows us to collaborate as a team. For our purposes, we'll create a simple Wiki outlining our project goals. Switch to the Wiki tab and click New Page, then enter some text. If necessary, click the Help button to view the Confluence Markup Cheat Sheet. Preview your work, optionally add Attachments, and optionally restrict access rights to All (non-project members), Members and Owners, or just Owners. Finally, Save the page.

Eclipse/Maven

Now we will switch to Eclipse, where we will create a new Maven project. For our purposes, the project will be a simple web application, created using the maven-archetype-webapp. In Eclipse, select File > New > Maven Project and select maven-archetype-webapp. Set the GroupId and ArtifactId and click Finish to generate the project. The POM is missing the Java Servlet dependency, hence the red x. Double-click pom.xml, switch to the Dependencies tab, and Add the Servlet API. Save and close pom.xml.
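For reference, the entry that ends up in pom.xml should look roughly like this (the version and the provided scope are my assumptions; use whatever Servlet API version matches your WebLogic release):

```xml
<dependency>
  <groupId>javax.servlet</groupId>
  <artifactId>javax.servlet-api</artifactId>
  <version>3.0.1</version>
  <!-- provided: the server supplies the Servlet API at runtime -->
  <scope>provided</scope>
</dependency>
```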
In a few seconds, the red x should clear from the Project Explorer window.

Eclipse/Git

It is now time to put our project under source code control. Create a Git repository (File > New > Other > Git > Git Repository) and browse to the project location. Right-click the project and select Team > Add to Index. Right-click the project again and select Team > Commit, then add a message and select Commit and Push, which will also push the changes to the Developer Cloud Service. When you are prompted for the Destination Git Repository, return to the Developer Cloud Service and select the Code tab. This page provides everything you need, whether you're working with Git from the command line, an IDE, or a tool like SourceTree. Copy and paste the HTTP URI from the Developer Cloud Service into the URI field of the Push Branch master dialog. Also, enter your Developer Cloud Service password and optionally select Store in Secure Store. Click Next, click Next again to review the Push confirmation, and wait a few seconds for the Push Results dialog.

Developer Cloud Service - Build Project

Now that we have project code, we can configure a build in our project. Switch to the Home tab, where you can see the activity stream thus far on our project. Copy the HTTP URI for the Git repository, as we will need it in the next step when we configure our build. Navigate to the Build page. There's a Sample_Maven_Build that can be used for reference (and would probably build our project successfully), but let's create a build from scratch. Select New Job. Under Source Code, select Git and paste the Git repository URL. Under Build Triggers, set the SCM polling schedule to * * * * * (every minute). This is standard cron schedule formatting, and it will trigger a build any time Git is updated. Add a build step to invoke Maven 3; the defaults for this step suffice. Finally, under Post-build Actions, select Archive the artifacts and add target/* to the Files to Archive.
This will archive the devopsexample.war for deployment. Click Save, then click Build Now. An executor will pick up the build and the build will be queued. Within a minute the build should kick off; you can click the Console icon to monitor its progress. You can see the build status and retrieve the artifacts from the build job page.

Developer Cloud Service - Deploy Project

Once we have a successful build, we can deploy the project to the Oracle Java Cloud Service. The project can also be deployed directly from Eclipse, just as you would to any remotely configured server, but for this exercise we will configure the Developer Cloud Service to deploy our project. Get the public IP address of the Administration Server domain from the Java Cloud Service console. Switch to the Deploy tab, click New Configuration, and fill in the details; I've configured the deploy to occur with each successful build. For the Java Service, you need to configure the remote target (JCS): supply the IP address you retrieved above along with your WebLogic server administration credentials (set when you created the JCS instance). Click Test Connection, click Use Connection, then click Save, which will create the deployment. Right-click the gear and select Start, wait while the application is started, and within a minute the deployment succeeds.

Run the Application

For this exercise, we need to know the IP address of our load balancer, which is also available in the image of the Alpha01JCS instance above: 129.152.144.48. Therefore we could expect our application to be available at https://129.152.144.48/devopsexample, but it's not. Unless explicitly specified in a deployment descriptor, the Developer Cloud Service generates its own context root. To find it, we need to look in the WebLogic Admin Console. Click the Java Service link to launch the WebLogic Admin Console.
Navigate to the Deployments page, where you'll find devopsexample. Click the devopsexample deployment to find the Context Root. The application is accessible at https://129.152.144.48/deploy1318416548384467224/.

Developer Lifecycle - Issues/Tasks

Now that we've successfully created and deployed our application, let's work through some developer lifecycle issues (bugs, code reviews, etc.). I know one problem that we want to fix is the context root of our project. In addition, we'll make the home page more personal. We'll start by submitting an issue to address both of these items (yes, these two issues should be tracked separately).

Developer Cloud Service - Issues

Switch to the Issues page, click New Issue, and enter some values. Note that the content for most of the fields you see on this page is configurable from the Administration page, including the ability to add custom fields. Click Create Issue.

Eclipse - Tasks

The Oracle Developer Cloud Service is integrated with the Mylyn task and application lifecycle management framework. Let's see how this works. In the Oracle Cloud window, expand the Developer node, which will reveal your Developer Cloud Service projects. Double-click the DevOps Example project to activate it. Then do the same for Issue and Mine, which will cause Eclipse to fetch the issue from the Developer Cloud Service. At this point, the issue is also viewable in the Eclipse Task List window; double-click either location to open the task. Accept the task and click Submit. The status update is reflected in the Developer Cloud Service.

Eclipse - Git/Tasks

We will create a new branch to work on this task. Right-click the project and select Team > Switch To > New Branch. Name the branch Task 1 and click Finish. In the Task List, right-click the task and select Activate.
This will associate the task with the issue. Open index.jsp and change the heading to Hello DevOps Example. Press Ctrl+N and add a new Oracle WebLogic Web Module Descriptor, place it in the src/main/webapp/WEB-INF folder, and set the Context Root to devopsexample. Right-click the project and select Team > Commit. Because of the task association, the commit knows these changes apply to Task 1. Select weblogic.xml to include it in the commit, select Commit and Push, click Next, then click Finish and wait for the Push Results.

Developer Cloud Service - Merge Request

Now we'll initiate a code review. If all looks acceptable, we'll merge the code into the master branch. Switch to the Developer Cloud Service Merge Requests tab and click New Request. Select the Repository (there's only one at the moment in this project), Target Branch, Review Branch, and Reviewers (yes, I'm reviewing my own code for this example). Click Create, which will open the review. Click the commit (6fefe85 in my case) to view a summary of the changes. From here the review team can click any line to add a comment. When I'm finished with my review, I can Publish my comments. Once satisfied, click Approve, and the Reviewers pane will update to reflect my status to the rest of the review team. Notice that I can also add additional reviewers at any time. Once all reviewers approve, click Merge to merge the changes into the master branch.

The Fun Begins

Recall we triggered our builds to run after a source code commit. Checking the Build page, I see a new build did indeed run. Recall we triggered our deploys to occur after a successful build. Checking the Deploy page, I see a new deploy did indeed happen. Most importantly, does our application now behave as desired? Indeed it does. In a development environment, if I wanted to bypass the review cycle, I would simply commit my changes to the master branch, which would trigger the automatic build and deploy.
Alternatively, I could have a development branch to which I commit directly and then a QA branch which undergoes a code review. The possibilities are up to your design.

Cleaning Up and Other Tidbits

To put a bow on what might be the longest blog I've ever written, let's return to Eclipse.

Resolve the Task

Switch to Task 1 and notice it's marked as having incoming changes. Click the link to refresh the task, then submit the task Resolved as Fixed.

Pull the Latest Master

With index.jsp open in the editor, switch back to the master branch (Team > Switch To > master). You'll notice index.jsp reverts to Hello World!. Right-click the project and select Team > Pull; index.jsp refreshes to show Hello DevOps Example!.

View/Run Builds from Eclipse

You can monitor the build status, as well as launch builds, directly from Eclipse. In the Oracle Cloud window, double-click the build jobs. Double-click Build #2 to view its details. Finally, you can right-click the build job to launch a build from Eclipse.
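For reference, the WebLogic web module descriptor we added earlier to pin the context root might look roughly like this (a sketch; the namespace is the standard one documented for weblogic.xml, so double-check it against your WebLogic version):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- src/main/webapp/WEB-INF/weblogic.xml -->
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
  <!-- Without this, the Developer Cloud Service generates its own context root -->
  <context-root>devopsexample</context-root>
</weblogic-web-app>
```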

Posted by on 22 May 2015 | 9:21 am

Getting Started with Spark and Hadoop

This tutorial will help you get started with Standalone Spark applications on the MapR Sandbox. https://www.mapr.com/products/mapr-sandbox-hadoop/tutorials/spark-tutorial

Posted by on 21 May 2015 | 12:25 pm

ConFESS 2015 Wrap Up

Hard on the heels of JavaLand was ConFESS. This was the eighth installment of the conference that started life in 2008 as JSF Days, switching to the name "ConFESS" in 2011. The name stands for the "Conference for Enterprise Software Solutions". Last year, ConFESS was held as a partnership with JavaLand in Brühl, Germany. Neither party was satisfied with how that turned out, and in 2015 ConFESS returned to its home in Vienna, where it will stay. It was a relatively small event, with just over 200 participants, which nicely filled out the venue, the C3 event center in the 3rd district. In my opinion, its small size is a large asset: the ability to have the entire event schedule on two sides of a 4x5 inch card is very convenient. My overall impression of the conference was very positive. There was a wide variety of talks from speakers I hadn't seen before on the conference circuit, a good breadth of coverage in diverse tracks ranging from agile/methods to Java EE to tools to client-side technologies, and an excellent band on Tuesday night, Florian Braun and FSG Company. There was also a Lego Mindstorms EV3 competition that got rave reviews, but I didn't attend that portion of the event. The full set of abstracts from the conference is available at the regonline site for the event. You can use that site to learn more about the sessions for which I give my brief impressions in the remainder of this blog entry.

Tuesday

The Tuesday keynote was from Oracle Labs' Thomas Wuerthinger. Thomas presented his exciting work on the Graal VM. First off, I'm glad to see that Oracle has continued Sun's tradition of funding long-term research in the spirit of Sun Labs, founded by computing pioneer Ivan Sutherland (yep, just checked, he still works for Oracle).
The basic idea of Graal appears to be: take the abstract syntax tree concept from compiler design and make it a first-class part of the JIT process, allowing the runtime to rewrite itself as the program runs to achieve greater performance without sacrificing agility. Cool stuff, and great for a keynote. Sticking with the JSF heritage of the conference, next up was Cagatay Civici's talk about PrimeFaces. Cagatay introduced the new "layouts" concept, built on JSF 2.2 Resource Library Contracts. The base offering consists of two new layouts, Sentinel and Spark. One thing I've always liked about PrimeFaces is how they take the base concepts of the core JSF specification and use them to maximum effect, taking full advantage of new features, large and small. Diving down a level, Johannes Tuchscherer from CloudFoundry talked about Docker and how it relates to offerings from Pivotal. Johannes put the hype into perspective, showing how you still need other technologies to actually create value with Docker. Sticking in the Pivotal realm, Jürgen Höller gave the Spring 4.1 overview talk. It was nice to see that they were able to leverage Java SE 8 features while producing a binary that runs on Java SE 6. I was happy to have the opportunity to ask Jürgen how he pulled that trick off, and the answer is basically build-time static code analysis: they compile with Java SE 8 with -source and -target 1.6, and have a build-time tool that looks for usages of Java SE 8-only idioms and APIs and flags them as failures. The next talk I attended was a really practical, hands-on session about Java Flight Recorder from Johan Janssen. I'm a big fan of learning to get more out of tools I already have, and JFR has been a part of the JDK for quite some time.

Wednesday

I was happy to see my good friend and fellow Oracle employee Mike Keith given the Wednesday keynote slot. Mike is a veteran of the conference trail, author of Pro JPA 2, and former JPA spec lead.
Mike talked about an exciting new product from Oracle: Mobile Back End As A Service (MBaaS). In a nutshell, this product packages up everything enterprises need to deploy mobile-based applications built on their existing infrastructure. Mike's slides are available for download. My own session was up next, at which I gave a status update on JSF 2.3. Briefly, it's a community-driven release aimed at preserving your existing investment in JSF. I've uploaded my slides to SlideShare. As a counterpart to Johan Janssen's session the day before, I attended Anton Arhipov's session about ZeroTurnaround's XRebel product. I liked his straightforward pitch: most applications receive very little profiling attention, so let's make a super simple product that lets you pick the low-hanging fruit with maximum performance gain. Indeed, the slick browser-based UI is very easy to use. When asked about various corner cases, Anton was honest and answered that the current state of the product is very narrowly focused on where the most value can be easily extracted. This focus is a key success factor for ZT, in my opinion. I've always talked up the importance of maintainability, and sold that as a strong suit of the Java EE stack, so it was with great interest that I attended Bernhard Keprt's session about maintenance. One reason I like attending conferences is to remove my third-order ignorance by exposing me to technologies I otherwise would not encounter. During Bernhard's talk, he introduced me to VersionEye. The value-add of this tool is easy to perceive: given that you have lots of dependencies, let's have a tool that keeps an eye on them and lets you know when they update. Stefan Schuster gave a session drawn from his experiences in developing apps for the three big flavors of mobile deployment platforms: native, Apache Cordova, and mobile web app. I liked this session for its first-hand perspective. To close out my 2015 ConFESS session attendance, I viewed Alex Göschl's session on AngularJS.
Alex shared his experiences in deploying Angular 1 for the jobs portal of conference sponsor Willhaben. FWIW, I found nine job postings for JSF on the site and four for Angular. This was an enjoyable talk, and Alex did a great job explaining the extremely heterogeneous set of tools and technologies used in the project. Prior to switching the jobs portal to Angular 1, they were using GWT; it was pretty much a complete rewrite. The most useful aspect of the talk to me was the ease with which such an apparently complex tool chain is now accepted and leveraged by your average front end team. For example, the following nine-step dev-time build process was rattled off as if it were no big deal:

1. clean
2. build targets
3. compile less to css
4. copy vendor libraries
5. copy assets
6. compile and optimize angular templates
7. compile and check typescript
8. copy to tomcat
9. inject velocity templates

It must just be my Java EE roots that makes me feel that the preceding list is a lot more complex than a similar build process in a Java EE stack. I need to spend more time getting to know the workflow in current front end shops. Can anyone recommend a user group or meetup in Orlando, FL? Following the two-day conference was my full day of workshops. I had a small but dedicated room of students and I hope they enjoyed the sessions.

Posted by on 18 May 2015 | 5:02 pm

Glassfish is not dead

You might have heard some folks in the JavaEE community scream "Glassfish is dead!" As I work on two technologies that are going to end up in JavaEE 8, I can say that for me that is certainly not the case. Certainly we do not do a lot of "official" releases of the reference implementation of JavaEE, but does that mean it is dead? Not in the slightest! For Mojarra and Ozark we run integration builds using the daily builds of Glassfish. Do I see a lot of breakage running against daily builds? No! So if you want a more recent "release" of Glassfish, just use a daily build. After all, what is a version number? Do you need 4.x, or can it just be 4.1.YYYYMMDDHHSS? ;) See http://download.java.net/glassfish/4.1/nightly/ Enjoy!

Posted by on 13 May 2015 | 12:19 am

Going back in memory lane

When I wrote an article about HtmlUnit and Maven integration testing, I never expected it to become as popular as it has. Most of my blog entries have a modest number of reads, but apparently HtmlUnit integration testing is popular enough to warrant 11,109 reads as of today. For a technical blog I consider that a good number ;) For the original blog entry, see https://weblogs.java.net/blog/mriem/archive/2011/12/13/htmlunit-and-mave... Enjoy!

Posted by on 11 May 2015 | 12:33 pm

Maintaining Mojarra and its testing stack

Software is an interesting thing. We currently live in a very fast-paced society where changes seem to come and go constantly. However, that is really only true for consumer electronics. Most of the systems consumers are hardly aware of run stacks that are a couple to several years old, and for those it is not economical to change at the rate consumer electronics does. Mojarra is a piece in such a stack.

As part of our day job we have to maintain a large set of code lines for our customer base. If it sometimes looks like we are not moving forward fast enough, just imagine having to support your phone for 10 years; sounds crazy, right? Well, for enterprise-grade software that is the reality. Is that bad? Certainly not. Just keep it in mind ;)

So while we keep innovating Mojarra, we also have to keep maintaining your older stack, and thus we have to maintain our own stack so we can test Mojarra itself. As part of a large migration/update effort, we are now happy to report that the several older test pieces of the Mojarra build are all in the same Maven build structure. To put it in perspective, this is the culmination of work that was started in 2012. With this all in place, we hope we can be more agile in responding to issues coming from our customers and in innovating Mojarra going forward. For those that want to contribute to Mojarra, please ping me and I'll explain it in more detail. Enjoy!

Posted by on 1 May 2015 | 1:12 am

An Inside Look at the Components of a Recommendation Engine

Recommendation engines help narrow your choices to those that best meet your particular needs. In this post, we’re going to take a closer look at how all the different components of a recommendation engine work together. We’re going to use collaborative filtering on movie ratings data to recommend movies. The key components are a collaborative filtering algorithm in Apache Mahout to build and train a machine learning model, and search technology from Elasticsearch to simplify deployment of the recommender. https://www.mapr.com/blog/inside-look-at-components-of-recommendation-en... This tutorial will describe how a surprisingly small amount of code can be used to build a recommendation engine using the MapR Sandbox for Hadoop with Apache Mahout and Elasticsearch. https://www.mapr.com/products/mapr-sandbox-hadoop/tutorials/recommender-...

Posted by on 13 April 2015 | 9:02 am

Mojarra 2.3.0 Milestone 2 has been released!

The JavaEE 8 process is underway and JSF 2.3 is making progress. We have just released our second milestone. See https://javaserverfaces.java.net/2.3/releasenotes.html for the release notes, and download it from https://javaserverfaces.java.net/2.3/download.html Enjoy!
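If you would rather pull the milestone in via Maven, the coordinates should be along these lines (my assumption based on how Mojarra releases are usually published; verify the exact version string against the release notes):

```xml
<dependency>
  <groupId>org.glassfish</groupId>
  <artifactId>javax.faces</artifactId>
  <!-- Milestone version is an assumption; check the release notes -->
  <version>2.3.0-m02</version>
</dependency>
```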

Posted by on 9 April 2015 | 2:28 pm

Maps and markers, the Magnolia way: the Google Places module

After a short hiatus from blogging, I'd like to show you something exciting today. I can't take the credit for all of the work: the development was originally started by my son Martin, then picked up by my colleague Jaroslav. I've really just added a few finishing touches to make the module releasable. So voilà: I present to you the Google Places module! It's an integration of Magnolia and Google Maps and Places done a little differently from what you might expect.

All the places you'll ever need

What you typically want when you deploy Magnolia in your organization is to take maximum advantage of its UI. What you typically get when you ask devs to place a map in your website is a fixed-size map, perhaps with some text file or direct HTML access where you can edit a list of markers. This changes with Magnolia: now you have one pretty Magnolia-style app that your editors can use to load, edit, categorize and sort all the markers they use on one or more maps across their site.

Using marker categories

Every marker can be assigned one or more categories. When putting together a map, you then pull in the relevant category or categories. One marker can therefore be used in multiple maps, depending on how many categories it's relevant in. This allows the editor to organize the markers according to type, but then re-use them on many different maps according to category. To make the sample a little bit more interesting, we'll deliver the module with ready-made markers showing the locations of Magnolia's offices and those of all partners around the world. So for example, if you want to assemble a map with all Magnolia partners, just tell the map to draw in all markers categorized as "Partner". Boom!

Importing places

If you already have a list of locations saved in a spreadsheet, save it as an Excel file and get it imported into the app directly.
While you can (and really should) specify exact locations using latitude and longitude for each marker, you can also just leave in the address; the exact position will then be retrieved on the fly using the Google Places API. Due to the limitations of the free tier of the API, you should not do that for too many markers, or you should buy unlimited access if you need to.

Download, use, contribute

Now for the best part: the module is already released and ready for you to use. Feel free to contribute more functions and improvements to it, if you like it. Jira | Git | google-places-1.0.jar

Pom snippet:

<dependency>
  <groupId>com.neatresults.mgnlextensions</groupId>
  <artifactId>google-places</artifactId>
  <version>1.0</version>
</dependency>

Posted by on 7 April 2015 | 2:03 am

Java Project in Machine Learning

Java/Akka-based technology models, each of which models a different technology, are active in a distributed Internet community. Any of the technology models may have a definition of a better future version of itself. A technology model that aspires to improve itself engages in conversations with other models in the community, seeking to discover behaviors exportable from other technology models that it can integrate into itself to achieve its goal. Read the project abstract at http://jcmansigian.webfactional.com/aed-abstract.html Get the project source and documentation with this command:

$ git clone https://github.com/aed-project/aspire aspire-emergent-design

Posted by on 2 April 2015 | 3:53 pm