Wednesday, December 31, 2008

Synchronous JBI with Apache Camel and OpenESB

Introduction

This tutorial shows how to expose a synchronous JBI InOut endpoint through the CamelSE, the Apache Camel Service Engine for OpenESB.
OpenESB works with a collection of different Service Engines; the best known are the JavaEE-SE and the BPEL-SE. Have a look at the Components Catalogue for more. Some useful SEs are still in the incubator phase; I'm especially interested in the POJO-SE and the CamelSE because they expose a simple and clean programming model. Personally I'm not the biggest fan of BPEL, as I feel much more comfortable programming with textual representations than with the arrows-and-boxes visual flows of BPEL (much as I usually find it more effective to write actual code than UML drawings, although I like the option of generating drawings from working code). In this article I want to point out some very nice possibilities offered by the alternatives, and also underline that it is not mandatory to use BPEL inside OpenESB at all.

Unfortunately (IMHO) most efforts of the OpenESB community are still directed at the BPEL stack and the optimization of the related SE, while I think the majority of developers are more interested in classical textual programming paradigms, first of all because of the much better productivity they offer when implementing typical Enterprise Integration Patterns and Event-Driven SOA solutions.

Here I want to show how to expose a JBI endpoint from the CamelSE, since the sample service definition generated by the CamelSE wizard can still create only a one-way operation. In a future example I will show how to rewrite the Report Incident Camel tutorial within OpenESB, and why I think OpenESB is probably the most effective environment for developing Apache Camel based EIP.

Apache Camel

Apache Camel is a Spring based Integration Framework which implements the Enterprise Integration Patterns with powerful Bean Integration. Camel lets you define routing and mediation rules in a Java based Domain Specific Language (a fluent API), in Spring based XML configuration files, or in a Scala DSL. This means you get smart completion of routing rules in your IDE, whether in your Java, Scala or XML editor. Apache Camel uses URIs so that it can easily work directly with any kind of transport or messaging model, such as HTTP, ActiveMQ, JMS, JBI, SCA, MINA or the CXF Bus API, together with pluggable Data Format options. Apache Camel is a small library with minimal dependencies, for easy embedding in any Java application.

CamelSE

The Apache Camel JBI Service Engine (a.k.a. CamelSE) is a JBI component that can run Apache Camel applications in a JBI platform such as OpenESB. The CamelSE also enables Camel applications (via Camel endpoints) to exchange messages with service providers and consumers deployed in other JBI components, such as the BPEL SE, the HTTP BC, etc., by contributing a Camel component to the Camel framework that maps Camel endpoints to the JBI service endpoints (consumer or provider) in the CamelSE.

The Example

Preparation

Follow the standard CamelSE installation instructions. I tested this with Camel 1.5.0 and OpenESB nightly build #20081209 (based on NetBeans 6.1), but it should work with the latest GlassFish ESB as well.

CamelSE Project

1. In Netbeans click "New Project", then select "Camel JBI Module" from "Service Oriented Architecture" folder:


2. Choose a project name (I went for "CamelInOut") and leave the other options at their defaults. Click the Finish button. The wizard creates the typical CamelSE project structure:


3. The default jbi2camel.wsdl contains only a one-way operation, so it must be modified to define a request-reply operation for our scenario.

Change the original jbi2camel.wsdl, replacing the one-way operation
<portType name="CamelInOut_interface">
    <operation name="oneWay">
        <input name="oneWayIn" message="tns:anyMsg"/>
    </operation>
</portType>



with a request-reply one:
<portType name="CamelInOut_interface">
    <operation name="exchange">
        <input name="exchangeIn" message="tns:anyMsg"/>
        <output name="exchangeOut" message="tns:anyMsg"/>
    </operation>
</portType>



4. Edit the default AppRouteBuilder.java

package cameljbimodule1;

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.spring.Main;

/**
 * A Camel Router
 * @author maurizio
 */
public class AppRouteBuilder extends RouteBuilder {

    /**
     * A main() so we can easily run these routing rules in our IDE
     */
    public static void main(String... args) {
        Main.main(args);
    }

    /**
     * Lets configure the Camel routing rules using Java code...
     */
    public void configure() {
        // Use this route when receiving messages from the JBI endpoint.
        // JBI URI format: "jbi:<service_namespace>/<service_name>/<endpoint_name>"

        String jbiURI = "jbi:http://openesb.org/jbi2camel/CamelInOut/CamelInOut_service/jbi2camel_endpoint";

        System.out.println("@@@ jbiURI=" + jbiURI);

        from(jbiURI).multicast().to("seda:log").process(new Processor() {

            public void process(Exchange exchange) throws Exception {
                String outBody = "OK";
                exchange.getOut().setBody(outBody, String.class);
                System.out.println("@@@ return: " + outBody);
            }
        });

        from("seda:log").process(new Processor() {

            public void process(Exchange exchange) throws Exception {
                String inBody = exchange.getIn().getBody(String.class);
                System.out.println("@@@ received: " + inBody);
            }
        });
    }
}

The idea is to use the Camel JBI URI as the entry point for the service:
String jbiURI = "jbi:http://openesb.org/jbi2camel/CamelInOut/CamelInOut_service/jbi2camel_endpoint";
This is the default jbiURI string as created by the wizard.

Then add a Camel multicast() call to start a parallel flow of execution. The multicast has two legs: the first is a to("seda:log"), the second an inline Processor class which actually sends back the response as a JBI exchange:

from(jbiURI).multicast().to("seda:log").process(new Processor() {
    public void process(Exchange exchange) throws Exception {
        String outBody = "OK";
        exchange.getOut().setBody(outBody, String.class);
        System.out.println("@@@ return: " + outBody);
    }
});
The seda: component provides asynchronous SEDA behavior so that messages are exchanged on a BlockingQueue and consumers are invoked in a separate thread to the producer.

Next, the seda:log queue is read in a separate thread, so that the service response and any further processing are executed physically in parallel:
from("seda:log").process(new Processor() {
    public void process(Exchange exchange) throws Exception {
        String inBody = exchange.getIn().getBody(String.class);
        System.out.println("@@@ received: " + inBody);
    }
});
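The pattern at work here, a producer handing a message to a consumer running on its own thread via a BlockingQueue, can be sketched with plain JDK classes. This is not Camel code, just an illustration of the mechanism behind the seda: component (class and variable names are my own):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SedaSketch {

    public static void main(String[] args) throws InterruptedException {
        // The "seda:log" queue: producer and consumer are decoupled by a BlockingQueue.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

        // Consumer thread, analogous to the from("seda:log") route.
        Thread consumer = new Thread(() -> {
            try {
                String body = queue.take();
                System.out.println("@@@ received: " + body);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // Producer, analogous to to("seda:log"): hand off the message
        // and reply immediately, without waiting for the consumer.
        queue.put("Here is the input message");
        System.out.println("@@@ return: OK");

        consumer.join();
    }
}
```

In Camel the queue, the consumer thread and the handoff are all managed by the seda: endpoint itself; the sketch only shows why the reply can be sent back before the logging leg has finished.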
5. Create a new Composite Application (CA), drag and drop the CamelInOut project into the CASA window, add a SOAP BC, connect it to the JBI module, then build and deploy:


6. Create and run a Test Case to see what is going on. As usual, right-click the Test node of the CA to create a New Test Case. The input.xml of this test could be something like the following:
<soapenv:Envelope
    xsi:schemaLocation="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:cam="http://openesb.org/jbi2camel/message/CamelInOut">
    <soapenv:Body>
        <cam:AnyMessage>Here is the input message</cam:AnyMessage>
    </soapenv:Body>
</soapenv:Envelope>

The output.xml after executing the test should look like this:
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://schemas.xmlsoap.org/soap/envelope/">
    <SOAP-ENV:Body>
        <cam:AnyMessage xmlns:cam="http://openesb.org/jbi2camel/message/CamelInOut"
            xmlns:msgns="http://openesb.org/jbi2camel/CamelInOut">OK</cam:AnyMessage>
    </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
The flow is really executed in multiple threads, as you can verify by checking the server.log of your GlassFish domain:
[#|2008-12-31T11:56:29.012+0100|INFO|sun-appserver9.1|javax.enterprise.system.stream.out
|_ThreadID=31;_ThreadName=caCamelInOut-CamelInOut;|
@@@ jbiURI=jbi:http://openesb.org/jbi2camel/CamelInOut/CamelInOut_service/jbi2camel_endpoint|#]

....

[#|2008-12-31T11:57:11.307+0100|INFO|sun-appserver9.1|javax.enterprise.system.stream.out
|_ThreadID=35;_ThreadName=pool-5-thread-3;|@@@ return: OK|#]

[#|2008-12-31T11:57:11.309+0100|INFO|sun-appserver9.1|javax.enterprise.system.stream.out
|_ThreadID=33;_ThreadName=seda:log thread:2;|@@@ received: Here is the input message|#]

[#|2008-12-31T11:57:11.314+0100|INFO|sun-appserver9.1
|camel-jbi-se.org.openesb.components.camelse.CamelSEComponentLifeCycle
|_ThreadID=36;_ThreadName=pool-5-thread-4;| Message Exchange Provider received DONE : END of service invocation|#]

Conclusions

This short tutorial shows how easy it is to use Apache Camel together with OpenESB, allowing for a simple and very effective implementation of a routing and transformation ESB language. It also demonstrates how to glue JBI endpoints with Camel, exposing synchronous JBI request-reply services directly from CamelSE modules. In a second tutorial I will rewrite a classic Apache Camel example to show that with OpenESB it requires less coding and effort.

References

- Apache Camel Tutorials
- Implementing Fuji integration scenario using Camel SE [Louis Polycarpou's blog]

Friday, December 26, 2008

Services Virtualization and the Need for Lifecycle Management

Introduction

What I personally call "Natural Requirements" (NR) are requirements that do not come from end users, analysts or marketing people. Natural Requirements are often more operational than functional or technical; let me call the latter "static requirements", meaning the ones we usually capture in the early phases of the requirements gathering process. NR instead derive from the natural, time-dependent evolution of a system well after the system has been built and put into production. NR are imposed by the environment, the context, and the fact that systems are always evolving and require a lot of maintenance. There is nothing we can do to eliminate them: instead we need to deal with them from a strategic standpoint, looking at the long run.

A SOA is made of a collection of interacting business services, each one with its own history and evolution. Starting from domain and data models, services define an atomic, well-identified piece of data, an algorithmic result, or a combination of both. Services create a public API: they expose internal technical artefacts as integrated, higher-level business information. We (almost) all know what the SOA objectives are, so it is not interesting to repeat them here.



Natural Requirements for Service Virtualization

Services nevertheless pose a maintenance problem: they evolve, they need to be monitored, and policies have to be enforced on them. A public API, once published, can't be changed easily: it is a public contract, and we cannot control how many clients there are or how they make use of the API. When we change a public API we know that we are going to break contracts with clients, and clients will need to be changed to adapt to the new interfaces.

A first Natural Requirement is that services have to be versioned to reach a level of backward compatibility. For this purpose, modern SOA Governance solutions offer not only a Registry but also a Repository - ideally a combination of both - where service interfaces and messages can be versioned to avoid breaking contracts when things change. Each client can then point to the proper version of the same service and its related messages.

Services implement atomic pieces of business logic, but during their lifecycle they also deal with other, orthogonal concerns which are not functional in nature: policies, endpoint definitions, message structure manipulation and message formats are all non-business aspects.
So Separation of Concerns is one of the main and most frequent Natural Requirements we could ever encounter in our daily routine as SOA architects. Orthogonal concerns must be consistently kept separate from business logic and from each other. For example, one of the most common orthogonal concerns we must split from business logic is security. For the sake of this discussion, we can identify two concrete technical ways to implement services: SOAP messages over HTTP and REST web services. The former are usually secured by some complex WS-Security implementation, the latter by the usual, simpler HTTP mechanisms, such as basic authentication + SSL. Policy enforcement and core services must be kept separate: business logic developers shouldn't have to know that such a thing as WS-Security even exists.




"Composite applications must have the business logic defined separately from each application's infrastructure. For example, instead of baking security semantics into the code, use an external, general-purpose security framework. This will be your template for enforcing authentication and authorization rules based on declarative policies that specify the required security semantics for the service within the context of a specific application."
[ Anne Thomas Manes - The Elephant Has Left The Building ]

Service Virtualization is usually defined as a virtual view (much like a database view, at least conceptually) of a real service, managed through a policy. A Policy is a declarative (non-programmatic) set of administrative rules. For example, you can have authentication, authorization and, generally speaking, access management policies. Or you can define a set of minimal performance levels in terms of SLAs. Or you can have billing policies for your services (think of Amazon's web services) based on volumes or number of calls.




Natural Requirements for PLM

SOA implies that business services and Composite Applications (CA) have to be handled like real products, so PLM (Product Lifecycle Management) issues and practices apply. Many vendors now offer (almost) comprehensive governance solutions which embrace policies + registry + artifact repository and all the related versioning, but that's not enough. Governance solutions are platforms, but PLM is about the product and the process. Services need to be created as the building blocks for CAs, so they are part of the product lifecycle: the final product is the CA, while services are the components, which can be re-assembled and mixed to create other CAs.
The main steps are:

Phase 1: Conceive
Imagine, Specify, Plan, Innovate

Phase 2: Design
Describe, Define, Develop, Test, Analyse and Validate

Phase 3: Realize
Manufacture, Make, Build, Procure, Produce, Sell and Deliver

Phase 4: Service
Use, Operate, Maintain, Support, Sustain, Phase-out, Retire, Recycle and Disposal

Why should we talk about PLM and not about ALM (Application Lifecycle Management)? ALM is just about software, while PLM is about manufacturing, so the first acronym sounds more natural here. Applications are monolithic; they are not composite in the same sense we use when building a product in manufacturing. The roads to assembling a Composite Application are somewhat different from those of an application: they are actually closer to manufacturing processes, because we want to create a predictable and repeatable assembly chain for services. ALM is really a subset of PLM: the Use, Operate, Maintain, Support, Sustain, Phase-out, Retire, Recycle and Disposal activities of Phase 4 need to be carried out at the level of each single service, not only at the application level. When we talk about SOA, the services (the building blocks) are far more crucial than the resulting Composite Applications, just as reliable foundations are for a house.


The Model T's body is joined to its chassis at the Highland Park plant

When we think again about the manufacturing metaphor, and specifically about car manufacturing, we can regard the car as the final product of an assembly of other products. Say car wheels and tires have a clearly defined interface, and the car maker just buys the product from another supplier. The tire producer knows its product can be assembled by a number of different clients, so it cannot make too many assumptions about the external world: it just has to respect the contract.

In conclusion, to make service construction, assembly and maintenance effective we need to build our own assembly chain, following the best PLM practices. But best practices in this field are still in their infancy.

The description of SOA I enjoy most is still Dan North's A Classic Introduction to SOA, which is in addition a very easy read.

Sunday, October 19, 2008

When all are responsible, nobody is

Democracy doesn't fit well with software development; committees don't work for software development. Individual accountability works. Ask a few good individuals for commitment and give them the power to act.
I have been trying to recall what the context usually is when things go wrong, and what the conditions are when a project succeeds. If I look back at my experience, most of the successful software development projects I've been involved in succeeded exactly when the decision chain was very short and decisions were taken promptly, especially the unpopular ones.

I think it was Fred Allen who said, "A committee is a group of men who individually can do nothing, but as a group decide that nothing can be done."

This is also known as a variant of the bystander effect. "The bystander effect (also known as bystander apathy, Genovese syndrome, diffused responsibility or bystander intervention) is a psychological phenomenon in which someone is less likely to intervene in an emergency situation when other people are present and able to help than when he or she is alone."

Really big distributed projects, say Linux and the Python language, have clear leadership expressed by a few people or even a single person. Linux has Linus, Python has Guido. Are our projects really any bigger or more complex?

Now, integration is organizationally harder than application development, because integration is cross-department and under the control of different ownership domains. If you leave the execution control to a committee then you can be pretty sure it is not going to deliver: distributed responsibility means almost no responsibility.

"A good plan violently executed now is better than a perfect plan executed next week."
- George S. Patton

If the team is big, the Scrum agile project management process can help by splitting the team into smaller teams, each one with a local leader who reports to the top team leader (or top Scrum Master). The team leaders (local and top) are responsible for delivering the solution, the Time & Resource Manager is in charge of checking time and budget, while the QA Manager is responsible for testing and validation. That's it.
If you think you can deliver any SOA or EAI project by creating a committee made of members of all the IT departments involved in the integration effort, then you are doomed: you'll end up in analysis paralysis very soon, because everybody wants to say something. The next common mistake is adding developers to a late project: as Fred Brooks taught us, adding manpower to a late software project makes it later.

“Craziness is doing the same thing and expecting a different result.”
- Tom DeMarco

We are all convinced that software development management is hard, but too many times I have seen self-handicapping management: consultants or end customers are scared of taking any decision, start blaming the tools they are using, and open support incidents instead of relying on what, if anything, they know. Then a two-month integration flow takes three years and twenty times the estimated budget.
If you have a problem, fix it now. When a better solution comes along, refactor to integrate it into your fix.

"Anticipate the difficult by managing the easy."
- Lao Tzu

Friday, October 10, 2008

Java CAPS 6 Tutorials and Sample Projects

Recent tutorials and screencasts about JavaCAPS 6 can be found here. Version 6 of JavaCAPS introduces many new features, so it usually takes some time to figure out the whole thing. The Grok JavaCAPS Wiki is another comprehensive source of up-to-date information.

Saturday, September 13, 2008

Sun Releases Milestone 1 for GlassFish ESB

Today Sun made the public release of Milestone 1 for GlassFish ESB.

"GlassFish ESB is a binary distribution of OpenESB. It consists of subset of the components in OpenESB. Sun will provide commercial support for GlassFish ESB."

GlassFish ESB delivers a lightweight and agile ESB platform that packages the innovation happening with Project OpenESB into a commercially supported, enterprise-class platform. In essence, GlassFish ESB is a binary distribution that combines technology from Project OpenESB, the GlassFish application server and the NetBeans IDE communities into a supported, commercial distribution.
  • GlassFish ESB is a binary distribution from the open source bits in OpenESB.
  • Sun will support GlassFish ESB just like any other product: it will not just support the latest version, but also older versions. Once released, fixes to GlassFish ESB will be made on a branch behind the firewall and will be merged periodically to the head of the OpenESB code repository. This is an important point for customers who don’t like to continuously upgrade in production.
  • Sun will continue to develop in open source on the OpenESB head.
  • GlassFish ESB will be released on Dec 5th; there will be two more milestone releases in between.
  • The GlassFish ESB site will soon live on sun.com. There are a few pages for GlassFish ESB on OpenESB just temporarily. We’re trying to separate the commercial aspects (i.e. GlassFish ESB) from OpenESB as much as possible: OpenESB is and should remain an open source community.
  • The GlassFish ESB downloads will remain on OpenESB because the bits are being developed in Open Source.
  • This does not change anything for the ESB Suite, MDM, and JavaCAPS: we will continue to develop them. There is and will remain a value differentiation between GlassFish ESB and the other products.
  • There will be separate component releases next to GlassFish ESB. E.g. IEP will release soon as a separate component.
Main Features
  • A flexible platform supporting multiple architectural styles (SOA, EJB, MoM, BPM)
  • Modular architecture enabling a tailored solution platform for specific project and enterprise needs.
  • Leading SOA and WS-* (WS-IT/Metro) support with industry leading interoperability with other platforms
  • Integration tooling, based on the award-winning NetBeans IDE with integrated service development, deployment and testing
  • Support for JBI, Java EE 5 and a wide range of other key industry standards
  • Based on fully open communities Project OpenESB, GlassFish and NetBeans
  • Backed by Sun's software support services
  • Backed by a large and growing community with an exciting roadmap and future vision
Learn more

Tuesday, September 9, 2008

What Does BPEL Stand For?

I have been involved in several projects where one recurring request was to "use BPEL", or more generically "to implement some level of Business Process Management (BPM)". BPEL stands for Business Process Execution Language, and the OASIS standard defines it as:

"WS-BPEL provides a language for the specification of Executable and Abstract business processes. By doing so, it extends the Web Services interaction model and enables it to support business transactions. WS-BPEL defines an interoperable integration model that should facilitate the expansion of automated process integration in both the intra-corporate and the business-to-business spaces."

An extended definition of SOA says:

"Service-oriented architectures (SOA) promise to implement composite applications
that offer location transparency and segregation of business logic. Location
transparency allows consumers and providers of services to exchange messages
without reference to one another’s concrete location. Segregation of business logic
isolates the core processes of the application from other service providers and
consumers."
(from: Implementing Service-Oriented Architectures (SOA) with the Java EE 5 SDK - Sun Microsystems)

So basically BPEL allows you to segregate business logic into reusable sequences of atomic business activities, expressed through an XML syntax. SeeBeyond and Sun JavaCAPS 5.1 supported the BPEL 1.x standard through the eInsight engine, while the new, open source Sun OpenESB supports BPEL 2.0; there are of course many other products from different vendors.

However, the point is that BPEL is about "segregation of business logic"; it is not about "arrows and boxes programming". I underline this because I am seeing many projects where BPEL is badly misused and abused: teams are essentially replacing Java with BPEL because the latter is more buzzword-oriented and all those arrows and boxes look nicer. But then the resulting BPEL ends up polluted by technical activities, while I keep stressing that it should contain pure business logic. So my hint is to always put your orthogonal concerns (logging, error handling, auditing, etc.) somewhere else and leave your BPEL as clean as possible. For example, with OpenESB (commercially a.k.a. JavaCAPS 6) there is a nice JavaEE-SE, the JBI Service Engine that executes EJBs (practically, a JBI bridge to the underlying application server). Here is a list of all the JBI components you can use; have a look at the Aspect SE, the Camel SE (Apache Camel Service Engine) and the Scripting SE for other very interesting ways to avoid messing up your business logic...

Tuesday, August 5, 2008

New Dates for Java CAPS Horizons Summit - EMEA 2008

*UPDATED* Registration is open, point your browser to:

http://de.sun.com/sunnews/events/2008/horizons_emea/


13th/14th of October - Horizons Summit
15th of October - Horizons Engineering Workshops and Feedback Sessions

Location: Munich, Germany

Please go directly to the Horizons Wiki for updates, registration and any additional information.

Let's meet in Munich then!

Friday, August 1, 2008

Creating a new CAPS 6-ready GlassFish domain

Louis Polycarpou has written a blog entry to explain an important point about JavaCAPS 6 and Glassfish configuration of additional domains.

CAPS 5.1.x users will notice that there is no longer a domain manager tool to simplify domain management tasks such as domain creation. This was effectively tied to the Sun SeeBeyond Integration Server, which no longer exists, but the domain manager may return in future releases.

Read the full article.

On my side I hope Sun is going to make this procedure simpler, as it was in version 5. The suite is already big and complex enough; we really do not need to lose tools that make our life easier.

Visiting Google's Headquarters


A couple of weeks ago I had the opportunity to personally visit Google's HQ in Mountain View, known as the Googleplex. I was very curious, as this is one of the most talked-about companies in the world and it is ranked #1 in Fortune's list of the best places to work in the USA.

The visit was quite instructive and everything was as good as expected. The buildings are spread across a very green campus in Mountain View, with a lot of open space and outdoor areas for employees, so you can see people having a coffee under the Californian sun, holding team meetings outside, but also having fun.

You can have a look at some additional pictures here.

My personal feeling is that of a company that really treats Peopleware as its primary asset. Walking around the buildings you feel that people seem proud of being part of the team and happy working there. Of course, Google is very rich, so it is not easy for many other companies to emulate the same environment, but I strongly suggest that others, especially European ones, visit them and realize how it is possible to create a place which boosts productivity. For the fortunate US technology worker it is not that hard to find situations close to this, but believe me, American fellows: the rest of the world is often not that lucky.

I do not even mention Italian companies, because in Italy we are so far away from buildings where people can work productively. I have long experience of dark, barely lit basements and very uncomfortable chairs, where all you can think about is how long it takes to get back home... And when I mention my country I am not talking about small startups, where you can often find intelligent entrepreneurs who know how important the working environment is for productivity, but about multi-billion corporations which don't spend enough money to let their employees work in safe and decent conditions.

On my side I have been lucky in my career: my last two employers were SeeBeyond and Sun Microsystems, which have very nice buildings and comfortable offices in Italy. But then again, they are American companies...

Wednesday, May 21, 2008

First and Only Java CAPS Book is Available

Sun's Michael Czapski has recently published a book about Java CAPS. As far as I know it is the very first book on the subject; it covers the product in depth by showing how to implement common integration design patterns. I hope I can buy a copy soon; it should be available through Amazon.

stcqueueviewer: Dynamically Monitoring JCAPS JMS Server Queues

Disclaimer: the procedure below is undocumented and not officially supported, so you use it at your own risk. Please contact Sun's JavaCAPS support or Professional Services for more information.

I recently had to dynamically monitor some JMS queues within JavaCAPS; in particular I needed to count the number of messages in a given queue before posting additional messages, to avoid unnecessary queue flooding. JavaCAPS is fully JMX compliant, and the embedded Sun Application Server is well documented on that side, but the SeeBeyond IQ Manager (the default JavaCAPS JMS server implementation) is lacking some info, so I decided to go for some hacking.
My colleague Paul pointed me in the right direction: he suggested having a look at com.stc.jms.stcqueueviewer.jar, contained in
logicalhost\is\stcms\lib
This library is almost undocumented, so I had to reverse-engineer its classes using the nice DJ tool. The most interesting file is Server.java: the Server class contains everything necessary to fully monitor the SeeBeyond IQ Manager from a Java program. Below is an example:

package queuemonitor;

import com.stc.jms.queueviewer.*;

public class Main {

    public static void main(String[] args) throws Exception {

        Server sv = new Server();
        sv.connect("localhost", 18007, "Administrator", "STC");

        QueueStatistics queueStatistics = new QueueStatistics();
        sv.getQueueStatistics(queueStatistics, "qStoreDocument");

        System.out.println("MinSeqNo=" + queueStatistics.MinSeqNo);
        System.out.println("MaxSeqNo=" + queueStatistics.MaxSeqNo);
        System.out.println("MessageCount=" + queueStatistics.MessageCount);

        sv.disconnect();
    }
}

To compile and run this with NetBeans 6 you need to import some additional JAR files from the JavaCAPS Logicalhost logicalhost\is\stcms\lib folder:
  • com.stc.jms.stcjms.jar
  • com.stc.jms.stcqueueviewer.jar
  • jms.jar


By the way, if you have not tried the new Netbeans 6 yet, then shame on you!

If you want to do the same within a JavaCAPS JCD instead, you need to execute the following steps:
1. Create a new JCD
2. Import the com.stc.jms.stcqueueviewer.jar JAR file into the repository
3. Import the above JAR into the JCD


4. Do something useful with the dynamic information you get


The above JCD example is pretty silly, just to show you something

That's it. There are other useful methods in the Server class; you can explore it and discover other interesting things. Hopefully more queueviewer documentation will become available somewhere, so it won't be necessary to decompile the Java classes...

Saturday, March 29, 2008

Dynamic Data Models

A recent article on InfoQ, Beyond SOA: A New Enterprise Architecture Framework for Dynamic Business Applications, offers an interesting point of view on the development of Enterprise Applications, from both an architecture and a methodology perspective. The article's conclusions are:

"the evolution of flat, stateless, static, client-server web-based solutions have contributed to the disconnect between IT architecture and the real-world of hierarchical, stateful, dynamic, distributed business. We also discussed how traditional engineering approaches do not support the development of adaptive systems capable of supporting the dynamics of business."


A fundamental issue with present enterprise SOA is the abuse of static data models. As the author explains, relational databases are built for static data relationships, but only some parts of the enterprise ecosystem can be considered ruled by static relationships. For a more technical example, XML was supposed to overcome relational limitations by providing more robust data representations. An XML document is robust because, by definition, you can add elements without breaking the semantics of the document itself; this is hardly true, however, if the XML document is then parsed into static object wrappers, as most "modern" tools do. As a consequence, the proliferation of strict XML schema validation totally eliminates XML's intrinsic flexibility and robustness. A more dynamic approach to the manipulation and representation of data flowing through the enterprise should be the focus of next-generation software: dynamic data manipulation will provide much better flexibility than the static DOM object wrappers that are so common today.
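The point about XML's intrinsic robustness can be illustrated with a small self-contained sketch (the DynamicXml class and the incident document are mine, purely for illustration): a consumer that looks elements up by name keeps working when the producer adds a new element, while a static wrapper generated from a strict schema would typically reject the very same document.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class DynamicXml {

    // Name-based, dynamic access: unknown sibling elements are simply ignored.
    static String idAndSummary(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        String id = doc.getElementsByTagName("id").item(0).getTextContent();
        String summary = doc.getElementsByTagName("summary").item(0).getTextContent();
        return id + ": " + summary;
    }

    public static void main(String[] args) throws Exception {
        // The producer added a <priority> element the consumer knows nothing
        // about; the dynamic reader keeps working anyway.
        String xml = "<incident><id>42</id><summary>pump failure</summary>"
                   + "<priority>high</priority></incident>";
        System.out.println(idAndSummary(xml)); // prints 42: pump failure
    }
}
```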

Unfortunately, present software tools are still focused on what the author calls static engineering frameworks, which are good (even indispensable) for building bridges or airplanes, but very bad for creating software that represents dynamic and fuzzy business behavior. The well-celebrated web-centric architectural model is good for information representation and retrieval, but when that model is used as a framework to capture and model the real life of enterprise applications, its static nature miserably misses the target.

Basically, a picture of a dog is not a dog, but it looks like the software industry still has not fully grasped this simple fact.

Friday, March 21, 2008

Eric Lerognon's JCAPS 6 to SAP ECC 5.0 notes

This note, sent by my former colleague Eric Lerognon of Sun France, demonstrates how to connect from JCAPS 6 to SAP ECC 5.0.

Merci beaucoup Eric!

Monday, March 17, 2008

Will Java CAPS 6 be based on Netbeans 6.1?

Rumors are that the next generation of Java CAPS will be based on a new Netbeans version. No more fights with the eDesigner, at last. The ICAN 5.0 and JCAPS 5.1 IDEs were based on a very old Netbeans (v3.5), heavily customized by SeeBeyond and called the eDesigner. Additionally, the JCAPS 6 JEE runtime is going to be Glassfish, the Sun open-source application server (at present JCAPS runs on the old SJAS 8.0).
For seasoned JCAPS developers, used to working around some of the eDesigner's weird behaviors, it will be a huge improvement, as Netbeans 6 is probably the most advanced Java IDE on the market (and yes, I mean even better than Eclipse). Glassfish, in turn, is one of the best JEE 5 and EJB 3 implementations available; this means Java CAPS 6 is evolving into a complete and powerful Java enterprise application development environment, not limited to EAI only. That shows Sun's strong commitment to JCAPS, good for both partners and customers.

Saturday, March 1, 2008

A Java CAPS Custom Security Provider Enablement

Recently I had to enable a custom Java security provider implemented by a customer. The implementation is based on a set of JAR files included in Java Collaborations, providing some special signing and hashing security features.
To get those libraries working I just had to add a few lines to the server.policy file, which resides in:
logicalhost\is\domains\domain1\config
where domain1 is my target domain. The JAR files were put into
logicalhost\is\domains\domain1\lib\ext

Below is the additional permissions fragment:

// Basic set of required permissions granted to all remaining code
grant {
...
// Java CAPS needs these permissions so that the Bouncy Castle provider can be used
permission java.security.SecurityPermission "insertProvider.BC";
permission java.security.SecurityPermission "removeProvider.BC";
permission java.security.SecurityPermission "putProviderProperty.BC";

//----------------------------------------------------------------------------
// "InnoSec" custom security provider
//----------------------------------------------------------------------------
permission java.security.SecurityPermission "insertProvider.InnoSec";
permission java.security.SecurityPermission "removeProvider.InnoSec";
permission java.security.SecurityPermission "putProviderProperty.InnoSec";
//----------------------------------------------------------------------------
...
};
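For context, the insertProvider/removeProvider permissions above guard the standard java.security.Security calls shown below. This is only a sketch: the do-nothing InnoSecProvider class is a hypothetical stand-in for the customer's real provider, which of course registers actual signing and hashing services.

```java
import java.security.Provider;
import java.security.Security;

public class InnoSecDemo {

    // Hypothetical empty provider named "InnoSec", mirroring the policy entries above
    static class InnoSecProvider extends Provider {
        InnoSecProvider() {
            super("InnoSec", 1.0, "Hypothetical custom security provider");
        }
    }

    public static void main(String[] args) {
        // Needs the "insertProvider.InnoSec" permission when a SecurityManager is active
        Security.addProvider(new InnoSecProvider());
        System.out.println(Security.getProvider("InnoSec").getName()); // prints InnoSec

        // Needs the "removeProvider.InnoSec" permission
        Security.removeProvider("InnoSec");
        System.out.println(Security.getProvider("InnoSec")); // prints null
    }
}
```

Without the corresponding grant entries, these calls fail with a java.security.AccessControlException as soon as the application server's security manager is enabled.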

Friday, February 29, 2008

You Can't Buy SOA Governance

Customers and software vendors now have a brand new keyword: SOA Governance! According to vendors, it is the set of practices and (mainly!) products that help the enterprise manage a complex ecosystem of technical and business services, along with their related metadata. Have a look at this Wikipedia entry for a concise definition of SOA Governance.

That sounds good, doesn't it? But when I listen to both customers and software vendors, I usually find their ideas about governance quite misleading. A few points of mine:

1. SOA governance is a conceptual organizational and operational framework that aims to align IT governance with business analysis procedures: people first, then technicalities.
2. You can't buy SOA, and you can't buy SOA Governance. It is an organizational process: if you don't change the way you operate your IT projects, you can achieve neither SOA nor any governance.
3. You need to build a set of SOA practices together with a governance strategy, and the human factor here is more important than any possible product: there is no magical SOA-in-a-box product.
4. You can't have a SOA without governance, and you can't build a SOA governance strategy without an established service-oriented analysis process (because you end up having nothing to govern...).

IT is broken: we clearly see this simple fact daily, but I rarely find organizations able to recognize that they first need to reconcile their IT operations with their business needs through organizational change. Too often they just try to buy the next magical box and hope that plugging it into their IT department will fix their problems.

Again and as usual: lots of money, few successes, and SOA is becoming another abused buzzword.

Sunday, February 24, 2008

The Fluxology Office is Now a Sun Belgium's Services Partner

The Fluxology Office s.r.l., committed to growing a stable and fruitful relationship with Sun Microsystems across the EMEA region, has established a partnership with Sun Belgium. An important part of the relationship will be developing expertise in, and providing feedback on, the upcoming Java CAPS release, which will represent a quantum leap in terms of EAI, SOA and complex event processing tools.
Even if the features of the new product are still undisclosed (not to say secret), it will hopefully be the very first fully Sun-engineered release since the SeeBeyond acquisition in 2005, and it will be fully compliant with the latest JEE 5 specifications.
Here at Fluxology we think Sun is going to offer the most complete and technologically advanced JEE application and integration development platform.