Unintended consequences are perhaps the only certainty in any IT project. So it was for UK distribution and retail company Nisa when it moved to an online order processing system in 2007.
This system allowed its retailer customers to order stock through their electronic point of sale (ePOS) systems, which triggered a change in their ordering behaviour.
“Before we introduced the online systems, when retailers were placing orders through mobile handsets, we would receive orders from 8am through to 1pm, which is our cut-off point,” explains head of IT David Morris.
“Once the online system was in place, we found that the retailers would wait until just before the cut-off point before placing an order. That meant they could wait to see if they were running out of milk or bread, and wouldn’t have to place a blind order at eight in the morning.”
With 4,500 retail outlets in Nisa’s distribution network, this led to a spike in load on the order processing system just before 1pm. This adversely affected the performance of the system – which is based on Oracle’s database infrastructure and a bespoke .NET front-end – just when customers needed it most. What’s more, the system would occasionally crash.
At the time, Morris says, Nisa couldn’t tell exactly what was causing the performance degradation. “We needed to understand what impact this heavy traffic was having – whether it was affecting the application or the database, or something else,” he recalls. “At the time, we were blind to what the issue was.”
Nisa’s development partner suggested that it try an application performance management solution to diagnose the problem, and Morris eventually selected dynaTrace from Compuware.
This allowed Morris and his staff to analyse the performance of the overall order processing system from a number of angles, including the user experience, the actual code of the application and the flow of traffic through the system and its back-end infrastructure.
One of the most immediate issues this identified was that the web servers supporting the system were all configured differently. Nisa’s IT team was able to test each configuration, find the optimum, and roll it out across the other servers.
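Server-to-server configuration drift of this kind is straightforward to detect once each machine’s settings have been collected in one place. As a minimal illustration – using invented server names and settings, not Nisa’s actual infrastructure – the comparison amounts to flagging any setting whose value is not identical across the fleet:

```python
def diff_configs(configs):
    """Given {server: {setting: value}}, return the settings whose
    values differ across servers (i.e. configuration drift)."""
    all_keys = set().union(*configs.values())
    drift = {}
    for key in sorted(all_keys):
        values = {srv: cfg.get(key) for srv, cfg in configs.items()}
        if len(set(values.values())) > 1:
            drift[key] = values
    return drift

# Hypothetical settings for three front-end web servers
configs = {
    "web01": {"max_connections": "500", "keepalive_timeout": "15", "gzip": "on"},
    "web02": {"max_connections": "500", "keepalive_timeout": "65", "gzip": "on"},
    "web03": {"max_connections": "250", "keepalive_timeout": "15", "gzip": "on"},
}

for setting, values in diff_configs(configs).items():
    print(setting, values)
```

Here the check would flag `max_connections` and `keepalive_timeout` as inconsistent, leaving the team to test which value performs best before standardising on it.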
“It also taught us that there was some code, particularly with database-heavy transactions, which needed performance tuning and identified where the performance overhead was occurring,” says Morris. This revealed that it was not necessarily the act of placing an order that was slowing the system down, but additional functionality Nisa had added to make recommendations when certain items were out of stock.
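The pattern Morris describes – a fast core transaction slowed by an expensive add-on – is exactly what per-method timing surfaces. A minimal Python sketch of the idea, with invented function names and `sleep` calls standing in for real database work (this is not Nisa’s actual code or dynaTrace’s mechanism, just the general technique):

```python
import time
from collections import defaultdict

timings = defaultdict(float)

def timed(fn):
    """Accumulate total wall-clock time per function, APM-style."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            timings[fn.__name__] += time.perf_counter() - start
    return wrapper

@timed
def save_order(order):
    time.sleep(0.005)   # stand-in for the order insert itself

@timed
def suggest_alternatives(order):
    time.sleep(0.050)   # stand-in for the out-of-stock recommendation query

def place_order(order):
    save_order(order)
    suggest_alternatives(order)

for i in range(10):
    place_order({"id": i})

# Rank functions by accumulated time to find the real hotspot
for name, total in sorted(timings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {total:.3f}s")
```

Broken down this way, it becomes obvious that the recommendation step, not the order insert, dominates the response time – the same conclusion Nisa reached.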
The dynaTrace system made specific recommendations as to how the code could be improved, and these were implemented by Nisa’s development partner.
“The 80/20 rule very much applied here,” explains Morris. “dynaTrace made a number of suggestions, but there were just a few that addressed a significant proportion of the performance issues.”
In all, the changes that Nisa made after using dynaTrace led to a 46% improvement in the user experience of the system, as measured by speed and responsiveness.
Since making the changes, Nisa has seen a 20% increase in the volume of transactions on the system. While it is impossible to divine exactly how much of this increase is due to organic growth, and how much is due to improved performance, Morris says, “I think it’s fair to assume that having a better-performing system is paying dividends.”
Nisa now uses dynaTrace as a day-to-day tool in its operations and development work. “We see it as part of our performance management toolset now,” says Morris. “We have an internal test team, and every day someone will spend some time looking at dynaTrace to make sure everything is rosy.”
Using the tool, he adds, has increased the IT organisation’s confidence that it can roll out new functionality without a hitch: “It’s improved our ability to deliver something once and for it not to have any issues.”