Wednesday, December 31, 2008

Synchronous JBI with Apache Camel and OpenESB

Introduction

This tutorial shows how to expose a synchronous JBI InOut endpoint through CamelSE, the Apache Camel Service Engine of OpenESB.
OpenESB works with a collection of different Service Engines; the best known are the JavaEE-SE and the BPEL-SE (have a look at the Components Catalogue for more). Some useful SEs are still in the incubator phase; I'm especially interested in the POJO-SE and the CamelSE because they expose a simple and clean programming model. I'm personally not the biggest fan of BPEL, as I feel much more comfortable programming with textual representations than with the arrow-and-boxes visual flows of BPEL (much like I usually find it more effective to write actual code instead of UML drawings, although I like the option of generating drawings from working code). In this article I want to point out some very nice possibilities provided by the alternatives, and also underline that it is not mandatory to use BPEL inside OpenESB at all.

Unfortunately (IMHO) most efforts of the OpenESB community are still directed at BPEL and the optimization of the related SE, while I think the majority of developers are more interested in classical textual programming paradigms, first of all because they offer much better productivity when implementing typical Enterprise Integration Patterns and Event-Driven SOA solutions.

Here I want to present how to expose a JBI endpoint from the CamelSE, since the sample service definition provided by the CamelSE wizard can still create only a one-way operation. In a future post I will show how to rewrite the Report Incident Camel tutorial within OpenESB, and explain why I think OpenESB is probably the most effective environment for developing Apache Camel based EIP solutions.

Apache Camel

Apache Camel is a Spring-based integration framework which implements the Enterprise Integration Patterns with powerful Bean Integration. Camel lets you define routing and mediation rules in a Java-based Domain Specific Language (fluent API), in Spring-based XML configuration files, or via the Scala DSL. This means you get smart completion of routing rules in your IDE, whether in a Java, Scala or XML editor. Apache Camel uses URIs so that it can easily work directly with any kind of transport or messaging model, such as HTTP, ActiveMQ, JMS, JBI, SCA, MINA or the CXF Bus API, together with pluggable data format options. Apache Camel is a small library with minimal dependencies, for easy embedding in any Java application.
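
To give a concrete feel for the Java DSL mentioned above, here is a minimal standalone route. This is my own sketch, not part of the tutorial project, and it is written against a recent Camel release, so minor API details may differ from the 1.5.0 version used later in this post:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class FileRouteDemo {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            public void configure() {
                // a trivial routing rule expressed in the Java DSL:
                // every file dropped into target/inbox is moved to target/outbox
                from("file:target/inbox").to("file:target/outbox");
            }
        });
        context.start();
        Thread.sleep(5000); // give the file consumer a few seconds to poll
        context.stop();
    }
}

The same rule could equally be expressed in Spring XML or in the Scala DSL.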

CamelSE

The Apache Camel JBI Service Engine (a.k.a. CamelSE) is a JBI component that runs Apache Camel applications inside a JBI platform such as OpenESB. The CamelSE also enables Camel applications (via Camel endpoints) to exchange messages with the service providers and consumers deployed in other JBI components, such as the BPEL SE or the HTTP BC, by contributing a Camel component to the Camel framework that can create Camel endpoints mapped to the JBI service endpoints (consumer or provider) in the CamelSE.

The Example

Preparation

Follow the standard CamelSE installation instructions. I tested this with Camel 1.5.0 and OpenESB nightly build #20081209 (based on NetBeans 6.1), but it should work with the latest GlassFish ESB as well.

CamelSE Project

1. In NetBeans click "New Project", then select "Camel JBI Module" from the "Service Oriented Architecture" folder:


2. Choose a project name (I went for "CamelInOut") and leave the other options at their defaults. Click the Finish button. The wizard creates the typical CamelSE project structure:


3. The default jbi2camel.wsdl contains only a one-way operation, so it is necessary to turn it into a request-reply operation to implement our scenario.

So change the original jbi2camel.wsdl, replacing the one-way operation
<portType name="CamelInOut_interface">
    <operation name="oneWay">
        <input name="oneWayIn" message="tns:anyMsg"/>
    </operation>
</portType>



with a request-reply operation:
<portType name="CamelInOut_interface">
    <operation name="exchange">
        <input name="exchangeIn" message="tns:anyMsg"/>
        <output name="exchangeOut" message="tns:anyMsg"/>
    </operation>
</portType>



4. Edit the default AppRouteBuilder.java

package cameljbimodule1;

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.spring.Main;

/**
 * A Camel Router
 * @author maurizio
 */
public class AppRouteBuilder extends RouteBuilder {

    /**
     * A main() so we can easily run these routing rules in our IDE
     */
    public static void main(String... args) {
        Main.main(args);
    }

    /**
     * Lets configure the Camel routing rules using Java code...
     */
    public void configure() {
        // Use this route when receiving messages from the jbi endpoint.
        // The jbi uri appears to follow the pattern
        // "jbi:<service namespace>/<service name>/<endpoint name>"

        String jbiURI = "jbi:http://openesb.org/jbi2camel/CamelInOut/CamelInOut_service/jbi2camel_endpoint";

        System.out.println("@@@ jbiURI=" + jbiURI);

        from(jbiURI).multicast().to("seda:log").process(new Processor() {

            public void process(Exchange exchange) throws Exception {
                String outBody = "OK";
                exchange.getOut().setBody(outBody, String.class);
                System.out.println("@@@ return: " + outBody);
            }
        });

        from("seda:log").process(new Processor() {

            public void process(Exchange exchange) throws Exception {
                String inBody = exchange.getIn().getBody(String.class);
                System.out.println("@@@ received: " + inBody);
            }
        });
    }
}

The idea is to use the Camel JBI URI as the entry point for the service:
String jbiURI = "jbi:http://openesb.org/jbi2camel/CamelInOut/CamelInOut_service/jbi2camel_endpoint";
This is the default jbiURI string as created by the wizard; its segments appear to correspond to the target namespace of jbi2camel.wsdl (http://openesb.org/jbi2camel/CamelInOut), the service name (CamelInOut_service) and the endpoint name (jbi2camel_endpoint).

Then a Camel multicast() is added to start a parallel flow of execution. The multicast has two legs, each receiving a copy of the exchange: the first is a to("seda:log") endpoint, the second an inline Processor which actually sends back the response on the JBI exchange:

from(jbiURI).multicast().to("seda:log").process(new Processor() {
    public void process(Exchange exchange) throws Exception {
        String outBody = "OK";
        exchange.getOut().setBody(outBody, String.class);
        System.out.println("@@@ return: " + outBody);
    }
});
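If you want the reply to depend on the incoming request rather than the constant "OK", the provider Processor can read the in-message first. A minimal, hypothetical variation of the route above (not part of the wizard output):

from(jbiURI).process(new Processor() {
    public void process(Exchange exchange) throws Exception {
        // build the reply from the request body instead of a constant
        String inBody = exchange.getIn().getBody(String.class);
        exchange.getOut().setBody("Echo: " + inBody, String.class);
    }
});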
The seda: component provides asynchronous SEDA behavior so that messages are exchanged on a BlockingQueue and consumers are invoked in a separate thread to the producer.

Next, the seda:log queue is read in a separate thread, so that the service response and any further processing are executed physically in parallel:
from("seda:log").process(new Processor() {
    public void process(Exchange exchange) throws Exception {
        String inBody = exchange.getIn().getBody(String.class);
        System.out.println("@@@ received: " + inBody);
    }
});
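To see the seda decoupling in isolation, outside of JBI, here is a minimal standalone sketch. The route and endpoint names are made up for this example, and it is written against a recent Camel release, so minor API details may differ from 1.5:

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class SedaThreadingDemo {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            public void configure() {
                // the producer hands the message to the in-memory seda queue and returns immediately
                from("direct:start").to("seda:work");
                // the consumer runs in its own thread, decoupled from the producer
                from("seda:work").process(new Processor() {
                    public void process(Exchange exchange) throws Exception {
                        System.out.println("consumed on " + Thread.currentThread().getName()
                                + ": " + exchange.getIn().getBody(String.class));
                    }
                });
            }
        });
        context.start();
        ProducerTemplate template = context.createProducerTemplate();
        System.out.println("produced on " + Thread.currentThread().getName());
        template.sendBody("direct:start", "hello");
        Thread.sleep(1000); // give the seda consumer thread time to log
        context.stop();
    }
}

Running it should print the produced and consumed messages on two different thread names, which is the same behaviour we will observe later in the server.log.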
5. Create a new Composite Application (CA), drag and drop the CamelInOut project into the CASA window, add a SOAP-BC, connect it to the JBI module, then build and deploy:


6. Create and run a Test Case to see what is going on. As usual, right-click the Test node of the CA to create a New Test Case. The input.xml of this test could be something like the following:
<soapenv:Envelope
    xsi:schemaLocation="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:cam="http://openesb.org/jbi2camel/message/CamelInOut">
    <soapenv:Body>
        <cam:AnyMessage>Here is the input message</cam:AnyMessage>
    </soapenv:Body>
</soapenv:Envelope>

After executing the test, the output.xml should look like this:
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://schemas.xmlsoap.org/soap/envelope/">
    <SOAP-ENV:Body>
        <cam:AnyMessage xmlns:cam="http://openesb.org/jbi2camel/message/CamelInOut"
            xmlns:msgns="http://openesb.org/jbi2camel/CamelInOut">OK</cam:AnyMessage>
    </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
The flow is really executed in multiple threads, as you can verify by checking the server.log of your GlassFish domain:
[#|2008-12-31T11:56:29.012+0100|INFO|sun-appserver9.1|javax.enterprise.system.stream.out
|_ThreadID=31;_ThreadName=caCamelInOut-CamelInOut;|
@@@ jbiURI=jbi:http://openesb.org/jbi2camel/CamelInOut/CamelInOut_service/jbi2camel_endpoint|#]

....

[#|2008-12-31T11:57:11.307+0100|INFO|sun-appserver9.1|javax.enterprise.system.stream.out
|_ThreadID=35;_ThreadName=pool-5-thread-3;|@@@ return: OK|#]

[#|2008-12-31T11:57:11.309+0100|INFO|sun-appserver9.1|javax.enterprise.system.stream.out
|_ThreadID=33;_ThreadName=seda:log thread:2;|@@@ received: Here is the input message|#]

[#|2008-12-31T11:57:11.314+0100|INFO|sun-appserver9.1
|camel-jbi-se.org.openesb.components.camelse.CamelSEComponentLifeCycle
|_ThreadID=36;_ThreadName=pool-5-thread-4;| Message Exchange Provider received DONE : END of service invocation|#]

Conclusions

This short tutorial shows how easy it is to use Apache Camel together with OpenESB, allowing for a simple and very effective routing and transformation language inside the ESB. It also demonstrates how to glue JBI endpoints with Camel, exposing synchronous JBI request-reply services directly from CamelSE modules. In a second tutorial I will rewrite a classic Apache Camel example to show that it requires less code and effort with OpenESB.

References

- Apache Camel Tutorials
- Implementing Fuji integration scenario using Camel SE [Louis Polycarpou's blog]

Friday, December 26, 2008

Services Virtualization and the Need for Lifecycle Management

Introduction

What I personally call "Natural Requirements" (NR) are requirements that do not come from end users, analysts or marketing people. Natural Requirements are often more operational than functional or technical; let me call the latter "static requirements", meaning the ones we usually capture in the early phases of the requirements gathering process. NRs instead derive from the natural, time-dependent evolution of a system well after the system has been built and put into production. NRs are imposed by the environment, by the context, and by the fact that systems always evolve and require a lot of maintenance, and there is nothing we can do to eliminate them: instead we need to deal with them from a strategic standpoint, looking at the long run.

A SOA is made of a collection of interacting business services, each one with its own history and evolution. Starting from the domain and data models, services define an atomic, well-identified piece of data, an algorithmic result, or a combination of both. Services create a public API: they expose internal technical artefacts as integrated, higher-level business information. We (almost) all know what the SOA objectives are, so it is not interesting to repeat them here.



Natural Requirements for Service Virtualization

Services nevertheless pose a maintenance problem: they evolve, they need to be monitored, and policies have to be enforced. A public API, once published, can't be changed easily: it is a public contract, and we can control neither how many clients use the API nor how they use it. When we change a public API we know that we are going to break contracts with clients, and clients have to be changed to adapt to the new interfaces.

A first Natural Requirement is that services have to be versioned to preserve a level of backward compatibility. For this purpose, modern SOA Governance solutions offer not only a Registry but also a Repository (ideally a combination of both), where service interfaces and messages can be versioned to avoid breaking contracts when things change. Each client can point to the proper version of the same service and of the related messages.

Services implement atomic pieces of business logic, but over their lifecycle they also deal with other, orthogonal concerns which are not functional in nature: policies, endpoint definitions, message structure manipulation and message formats are all non-business aspects.
So Separation of Concerns is one of the main and most frequent Natural Requirements we could ever encounter in our daily routine as SOA architects. Orthogonal concerns must be consistently kept separate from business logic and from other aspects. For example, one of the most common orthogonal concerns we must split from business logic is security. For the sake of this discussion, we can identify a couple of concrete technical ways to implement services: SOAP messages over HTTP and REST web services. The former are usually secured with some complex WS-Security implementation, the latter with the usual, simpler HTTP mechanisms, such as basic authentication + SSL. Policy enforcement and core services must be kept separated; business logic developers shouldn't have to know that such a thing as WS-Security even exists.




"Composite applications must have the business logic defined separately from each application's infrastructure. For example, instead of baking security semantics into the code, use an external, general-purpose security framework. This will be your template for enforcing authentication and authorization rules based on declarative policies that specify the required security semantics for the service within the context of a specific application."
[ Anne Thomas Manes - The Elephant Has Left The Building ]

Services Virtualization is usually defined as a virtual view (much like a database view, at least conceptually) of a real service, managed through a policy. A policy is a declarative (non-programmatic) set of administrative tasks. For example you can have authentication, authorization and, generally speaking, access management policies. Or you can define a set of minimal performance-level policies in terms of SLAs. Or you can have billing policies for your services (think of Amazon's web services) based on volumes or number of calls.




Natural Requirements for PLM

SOA implies that business services and Composite Applications (CA) have to be handled like real products, so PLM (Product Lifecycle Management) issues and practices apply. Many vendors now offer (almost) comprehensive governance solutions which embrace policies + registry + artifact repository and all the related versioning, but that's not enough. Governance solutions are platforms, but PLM is about the product and the process. Services need to be created as the building blocks for CAs, so they are part of the product lifecycle: the final product is the CA, while services are the components, which can be re-assembled and mixed to create other CAs.
The main steps are:

Phase 1: Conceive
Imagine, Specify, Plan, Innovate

Phase 2: Design
Describe, Define, Develop, Test, Analyse and Validate

Phase 3: Realize
Manufacture, Make, Build, Procure, Produce, Sell and Deliver

Phase 4: Service
Use, Operate, Maintain, Support, Sustain, Phase-out, Retire, Recycle and Disposal

Why should we talk about PLM and not about ALM (Application Lifecycle Management)? ALM is just about software, while PLM comes from manufacturing, so the first acronym sounds more natural here. Applications are monolithic; they are not composite in the sense we use when building a product in manufacturing. The way we assemble a Composite Application is somewhat different from the way we build an application, and actually closer to a manufacturing process, because we want to create a predictable and repeatable assembly chain for services. ALM is really a subset of PLM: the Use, Operate, Maintain, Support, Sustain, Phase-out, Retire, Recycle and Disposal activities of Phase 4 need to be carried out at the level of each single service, not only at the application level. When we talk about SOA, services (the building blocks) are far more crucial than the resulting Composite Applications, just as reliable foundations are for a house.


The Model T's body is joined to its chassis at the Highland Park plant

When we think again about the manufacturing metaphor, and specifically about car manufacturing, we can regard the car as the final product of an assembly of other products. Let's say car wheels and tires have a clearly defined interface and the car maker just buys the product from another supplier. The tire producer knows its product can be assembled by a number of different clients, so it cannot make too many assumptions about the external world: it just has to respect the contract.

In conclusion, to be effective, service construction, assembly and maintenance require us to build our own assembly chain, following the best PLM practices. But best practices in this field are still in their infancy.

The description of SOA I enjoy most is still Dan North's A Classic Introduction to SOA, which is additionally a very easy read.