Destination: SOA

For once there is no dissent: market analysts say it is the next big thing; vendors say their products already support it; and some users say they thought of it first. Everyone seems to agree that the service-oriented architecture (SOA) is the new destination for enterprise software. Now all that is needed is agreement on the best way to get there.

As anyone who has planned a family excursion on a bank holiday will know, deciding on where to go is only the first, and often the easiest, decision to make. The next steps are trickier: when should you start; which route is least likely to be congested; who should drive; what needs to be taken on the journey (and what left behind); and, crucially, what do you do when you arrive?

Such supplementary questions are pertinent when planning a corporate migration to SOA. But to date, the only one that has been widely addressed by businesses is the first – when you should start.

SOA roadmap

In fact, without realising it at the time, many businesses embarked on the road to SOA several years ago when they made their first investments in component-based software technologies and, in particular, in application servers (AS).

Before Java application servers popularised the AS concept in the late 1990s, software was still largely treated as arcane lines of code or, at best, as collections of vaguely recognisable objects. Some object-oriented programming enthusiasts talked effusively of code re-use and the productivity benefits of building repositories of ready-to-use, off-the-shelf software components. However, these benefits were more often talked of than realised, and those that were realised had little visible impact outside the development shop.

The advent of AS technology began to change this by promoting a subtle but profound shift in how software is perceived. For the first time, developers started to think of code in terms of functions and services such as transaction management, load balancing and security – services common to all systems, regardless of platform. Software professionals began to take seriously the idea that functionally identifiable components could be re-used and shared across multiple systems.

Such theories are a recognisable part of the SOA promise, although SOA claims to offer re-use across larger-scale components, such as business processes. "SOA and Web Services are a natural evolutionary development from the component-based technologies that appeared towards the end of the 1990s," says IDC's director of European software infrastructure research, Rob Hailstone.

These technologies made a key contribution to a services-oriented future, says Hailstone, by breaking the "point-to-point mentality" that had dominated software thinking up to that point. It was a philosophy "that obliged you to do the same things over and over again. Using components, and web services, you do something once, and then you use it over and over again," says Hailstone.

At the moment, analysts such as Hailstone suspect that most early adopters of SOA and web services have made the leap primarily as a means of realising the benefits of service-level re-use.

This allows these pioneers to make better use of existing software assets, increase the productivity of in-house development teams and improve their ability to respond to business change. However, there is a potential downside if end users come to equate re-use – in particular, re-use based purely on overlaying web services wrappers onto existing software components – with full-blooded SOA.

Spare parts

There is actually a lot more to building an enterprise SOA platform than top-and-tailing one or two legacy applications with web services interfaces and plugging them together, says Butler Group principal analyst Michael Thompson. "Once they have recovered from the 'eureka moment' they are going through", would-be SOA adopters must start to address some of the other questions left on their route planner.

The first of these is likely to be the issue of deriving value from legacy applications. As confidence in the practical viability of code and process re-use grows, there may be a tendency for companies to regard all legacy systems as a treasure trove of software assets. That may hold true, but users will only be able to tell how true by making a concerted effort to sort through the trove and separate the wheat from the chaff.

"You will have to do a discovery process to find out what functional assets you really have," says Thompson. Then, there may be some tough decisions to be made about what to keep and what to throw away and, since code that is kept will form the foundation of a functional repository that will be used for years to come, they are not decisions that can be easily fudged.

Although some tool vendors, notably those with a mainframe lineage, like to give the impression that, under SOA, no old code is bad old code, the truth is rather different. "Some legacy systems have been around for 40 years or more," points out Thompson. And even though they may still be working, and doing something useful, "I don't think all of it will really be worth keeping," he says.

And even some of the old code that is kept may not be in pristine condition. In these circumstances, says Thompson, companies can get away with "wrappering" as an interim measure. Eventually, though, and contrary to what many vendors would like them to believe, SOA adopters will be forced to modernise and transform much of their legacy code.
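To make "wrappering" concrete, here is a minimal sketch that exposes a hypothetical legacy credit-check routine as a SOAP web service using JAX-WS (bundled with Java SE 6 through 10). The class name, method, endpoint URL and placeholder business rule are all illustrative assumptions rather than anything from a particular product; in a real system the wrapper would delegate to the existing legacy code instead of computing anything itself.

```java
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Hypothetical wrapper: a legacy credit-check routine re-exposed as a
// SOAP web service, without modernising the code behind it.
@WebService
public class CreditCheck {

    // In a real deployment this method would call into the legacy
    // system; the hard-coded rule below is only a stand-in.
    public boolean approve(String customerId, double amount) {
        return amount < 10000.0;
    }

    public static void main(String[] args) {
        // Publish the wrapper at a local endpoint. Any SOAP-capable
        // consumer can now invoke the legacy function by URL, with no
        // knowledge of the implementation behind it.
        Endpoint.publish("http://localhost:8080/creditcheck", new CreditCheck());
    }
}
```

Running the class and fetching http://localhost:8080/creditcheck?wsdl would return the generated service contract – which is exactly why wrappering is so attractive as an interim measure, and also why it does nothing to improve the code underneath.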

The bus route

Having assessed software assets, the next step ought to be deciding which route offers the most reliable path to a working SOA platform. In this case, that requires an evaluation of the inter-service communications and service metadata infrastructure that will support the SOA platform. While few organisations have reached this point, the vendors are already offering answers of their own.

The industry's major infrastructure players – IBM, Oracle, Microsoft, Tibco, WebMethods, BEA et al – all want to become their customers' primary SOA infrastructure suppliers, as do many less well-known but innovative suppliers such as Neon Systems, Sonic, Amber Point, Actional and Oblix. Picking between these vendors, with their very different scales of operation, market backgrounds and core technological expertise, is more difficult. Still, there are some investment decisions that companies committed to the SOA vision cannot afford to put off, the most obvious being which category of underlying messaging technology to implement.

At least here, a consensus is emerging that enterprise service bus (ESB) based systems offer the industrial strength needed for such a fundamental part of an SOA platform, without compromising the heterogeneous, loosely coupled philosophy that is at the core of SOA's appeal.
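The loose coupling an ESB builds on is easiest to see in the messaging layer itself. The sketch below publishes an event to a topic using the standard JMS API, which most ESB products of the period sat on top of. The JNDI names, topic name and message payload are illustrative assumptions; a real deployment would look them up from whichever broker or bus the organisation had chosen.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class OrderPublisher {
    public static void main(String[] args) throws Exception {
        // These JNDI names are broker-specific assumptions; an ESB or
        // JMS provider registers its own ConnectionFactory and destinations.
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Destination ordersTopic = (Destination) ctx.lookup("topic/orders");

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(ordersTopic);

            // The publisher addresses a named destination, never a
            // consumer: services can be added, replaced or relocated
            // behind the bus without this code changing.
            TextMessage message = session.createTextMessage("<order id=\"42\" status=\"created\"/>");
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}
```

It is this decoupling of producers from consumers – together with the routing, transformation and reliable delivery the bus layers on top – that gives the ESB approach its industrial strength.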

Assuming an organisation's chosen SOA infrastructure proves successful, two final questions need to be answered: what is to be done with the SOA platform once it is complete, and who will take control of it?

These two questions are closely linked. A fully realised SOA platform is expected to give business users much greater, direct control over their business software infrastructure. However, it is unlikely that this handover of control, from the software shop floor to the boardroom, is going to happen overnight, and no one is sure how many incremental stages will be required in between.

To begin with, SOA adopters must create the right conditions for business and IT professionals to co-ordinate their efforts – most probably by setting up multi-disciplinary strategy committees. Populating these groups with business people prepared to get some technology under their fingernails (by at least mastering the nuances of business process modelling) will be essential, as will the co-opting of software professionals prepared to take an architecture-driven rather than code-driven view of their responsibilities.

However, at some point, companies must take a final decision on who is best suited to defining their software infrastructure needs. Can this be handed over entirely to business professionals, as proponents of a policy-based computing future expect? Or will business people remain back-seat drivers of an event-driven infrastructure that is more directly overseen by a new breed of business-aware IT leaders? The answer is still too close to call.

 
