Artificial intelligence (AI) and machine learning (ML) are shifting from business buzzwords to wider enterprise adoption. The efforts around strategy and adoption are reminiscent of the cycle and tipping point for enterprise cloud strategies four years ago, when moving to the cloud was no longer optional and the only questions were when and how. AI and ML strategies are now in the same evolutionary phase as companies build their approaches. Below are some thoughts on the how.
Forrester recently reported that almost two-thirds of enterprise technology decision-makers have either implemented, are currently implementing, or are expanding their use of AI. Much of this effort is driven by the enterprise data lakes that, thanks to compliance requirements and low-cost storage, sit mostly idle within companies. Tapping into these rich repositories so AI can answer the questions we are not asking, and may not know to ask, is the bounty enterprises need to claim before someone else does.
With enterprise spending on AI technologies expected to exceed $47 billion in 2020, up from $8 billion in 2016, according to International Data Corporation (IDC), the juice needs to be worth the squeeze – and the squeeze needs to be done properly.
Organisations across all sectors will continue to embrace AI and ML technology over the coming years, transforming their core processes and business models to take advantage of machine learning systems for enhanced operations and greater cost efficiencies. As business leaders start drawing up plans and strategies for how to make the best use of this technology, it’s important for them to remember that the road to AI and ML adoption is a journey, rather than a race. Organisations should begin by considering the following seven steps.
1. Clearly define a use case
It’s important for business leaders and their project managers to start by spending time on clearly defining and articulating the particular problems or challenges they would like AI to solve; the more specific the goal is, the better chance of success for their implementation of AI.
Stating that the organisation would like to ‘increase online sales by 10%’, for example, is not sufficiently specific. Instead, a more defined statement such as ‘aiming to increase online sales by 10% by monitoring the demographics of site visitors’ is much more useful in articulating the goal and ensuring it is clearly understood by all stakeholders.
2. Verify the availability of data
The next step, once the use case has been clearly defined, is to ensure the processes and systems already in place are capable of capturing and tracking the data needed to perform the required analysis.
A considerable amount of time and effort is spent on data ingestion and wrangling, so organisations must ensure the right data is being captured in sufficient volumes and with the right variables or features such as age, gender, or ethnicity. It’s worth remembering that, as the quality of the data is as critical to a successful outcome as its volume, organisations should make data governance procedures a priority.
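To make this concrete, verifying data availability can start with something as simple as a fill-rate check over the captured records. The sketch below is illustrative only: the field names and the 95% threshold are assumptions, not a real schema or policy.

```python
# Hypothetical sketch: before any modelling, verify that the captured
# records actually contain the variables the use case needs.
# Field names and the minimum fill rate are illustrative assumptions.

REQUIRED_FIELDS = {"age", "gender", "purchase_amount"}

def check_data_availability(records, required=REQUIRED_FIELDS, min_fill_rate=0.95):
    """Return, for each required field, its fill rate and whether it
    meets the minimum threshold."""
    report = {}
    total = len(records)
    for field in sorted(required):
        filled = sum(1 for r in records if r.get(field) not in (None, ""))
        rate = filled / total if total else 0.0
        report[field] = {"fill_rate": rate, "ok": rate >= min_fill_rate}
    return report

records = [
    {"age": 34, "gender": "F", "purchase_amount": 120.0},
    {"age": 29, "gender": "M", "purchase_amount": 75.5},
    {"age": None, "gender": "F", "purchase_amount": 60.0},
]
report = check_data_availability(records)
```

A report like this gives data governance a concrete artefact to act on, rather than a vague sense that "the data is incomplete".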
3. Carry out basic data exploration
It may be tempting for a business to leap headfirst into a model building exercise, but it is crucial that it first carries out a quick data exploration exercise in which it can validate its data assumptions and understanding. Doing so will help to establish whether the data is telling the right story based on the organisation’s subject matter expertise and business acumen.
Such an exercise will also help the organisation understand what the significant variables or features should (or could) be, and the kind of data categorisations that should be created for use as input for any potential models.
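A first exploration pass does not need heavy tooling. As a minimal sketch, using only the Python standard library and made-up sample data, it can be as simple as summary statistics on numeric variables and counts on categorical ones:

```python
# Illustrative stdlib-only exploration pass: summary statistics and
# category counts to sanity-check assumptions before modelling.
# The sample rows and column names are hypothetical.
from collections import Counter
from statistics import mean, median, stdev

rows = [
    {"age": 22, "segment": "new"},
    {"age": 35, "segment": "returning"},
    {"age": 41, "segment": "returning"},
    {"age": 29, "segment": "new"},
    {"age": 58, "segment": "loyal"},
]

ages = [r["age"] for r in rows]
summary = {
    "count": len(ages),
    "mean": round(mean(ages), 1),
    "median": median(ages),
    "stdev": round(stdev(ages), 1),
    "min": min(ages),
    "max": max(ages),
}
segment_counts = Counter(r["segment"] for r in rows)
```

Even a crude summary like this surfaces outliers, skew, and suspicious category distributions before any model is built.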
4. Define a model-building methodology
Rather than concentrating on the end goal, it's important to focus on the hypothesis itself. Running tests to determine which variables or features are most significant will help validate the hypothesis and improve how it is executed.
Business and domain experts should be involved, as their continuous feedback is critical for validation and for ensuring all stakeholders are on the same page. Indeed, as the success of any ML model is dependent on successful feature engineering, a subject matter expert will always be more valuable than an algorithm when it comes to deriving better features.
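One crude way to run such a significance test is to rank candidate features by their absolute correlation with the target. This is a hedged sketch, not a substitute for proper statistical testing or model-based importance, and the feature names and data are invented:

```python
# Rank candidate features by absolute Pearson correlation with a
# binary target, as a rough first-pass significance screen.
# Feature names and values are hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

features = {
    "visits_per_week": [1, 2, 3, 4, 5, 6],
    "days_since_signup": [300, 250, 10, 280, 20, 5],
}
target = [0, 0, 1, 0, 1, 1]  # e.g. "made a purchase"

ranked = sorted(features,
                key=lambda f: abs(pearson(features[f], target)),
                reverse=True)
```

A subject matter expert would then review the ranking: a feature that correlates strongly but makes no business sense is usually leakage, not insight.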
5. Define a model-validation methodology
The definition of performance measures will assist in the evaluation, comparison and analysis of results from multiple algorithms which will, in turn, help to further refine specific models. Classification accuracy, for example, i.e. the number of correct predictions made divided by the total number of predictions made, and multiplied by 100, would be a good performance measure when working with a classification use case.
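The accuracy formula above translates directly into a few lines of code. The labels below are made up for illustration:

```python
# Classification accuracy as defined in the text:
# correct predictions / total predictions, multiplied by 100.

def classification_accuracy(y_true, y_pred):
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true) * 100

y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0]
acc = classification_accuracy(y_true, y_pred)  # 4 of 5 correct -> 80.0
```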
Data will need to be divided into two data sets: a training set, on which the algorithm will be trained, and a test set, against which it will be evaluated. Depending on the complexity of the algorithm, this may be as simple as selecting a random split of data, such as 60% for training and 40% for testing, or it may involve more complicated sampling processes.
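The simple random split described above can be sketched as follows; the fixed seed is an illustrative choice to keep the example reproducible:

```python
# Minimal sketch of a 60/40 random train/test split.
import random

def train_test_split(data, train_fraction=0.6, seed=42):
    shuffled = data[:]                     # copy so the input is untouched
    random.Random(seed).shuffle(shuffled)  # reproducible shuffle
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

data = list(range(10))
train, test = train_test_split(data)  # 6 training items, 4 test items
```

Stratified or time-based sampling would replace the plain shuffle when class balance or temporal ordering matters.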
As with testing the hypothesis, business and domain experts should be involved to validate the findings and ensure that everything is moving in the right direction.
6. Automation and production rollout
Once the model has been built and validated, it must then be rolled out into production. Beginning with a limited rollout lasting a few weeks or months, during which business users can provide continuous feedback on the model's behaviour and outcomes, it can then be released to the wider audience.
The right tools and platforms should be selected to automate the data ingestion, with systems put in place to disseminate results to the appropriate audiences. The platform should provide multiple interfaces to account for different degrees of knowledge among the organisation’s end-users. Business analysts may want to carry out further analysis based on the model results, for example, while casual end users may just want to interact with data via dashboards and visualisations.
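One common way to implement the limited rollout described above is deterministic bucketing: hash each user ID into a bucket so that a configurable percentage of users sees the new model, and the same user always gets the same experience. This is a hypothetical sketch, not a description of any particular platform:

```python
# Hypothetical sketch of a staged rollout: route a configurable
# fraction of users to the new model by hashing their IDs, so
# assignment is stable across sessions.
import hashlib

def use_new_model(user_id: str, rollout_percent: int) -> bool:
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 99]
    return bucket < rollout_percent

# Raising rollout_percent from, say, 5 to 100 over a few weeks
# widens the audience without reassigning existing users.
```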
7. Continue to update the model
Once a model has been published and deployed, it must be continuously monitored; by understanding its ongoing validity, an organisation will be able to update the model as required.
Models can become out of date for a number of reasons: market dynamics may change, for example, or the enterprise itself and its business model may evolve. Models are built on historical data in order to predict future outcomes, but as market dynamics move away from the way an organisation has always done business, the model's performance can deteriorate. It's important, therefore, to define the process that will keep the model up to date.
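The monitoring this step calls for can be sketched as a rolling accuracy check that flags the model for review when performance degrades. The window size and threshold below are illustrative assumptions:

```python
# Hedged sketch: track accuracy over a rolling window of recent
# predictions and flag the model for review when it drops below a
# threshold. Window size and threshold are illustrative.
from collections import deque

class DriftMonitor:
    def __init__(self, window=100, min_accuracy=0.8):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = wrong
        self.min_accuracy = min_accuracy

    def record(self, prediction, actual):
        self.outcomes.append(1 if prediction == actual else 0)

    def needs_review(self):
        if not self.outcomes:
            return False
        return sum(self.outcomes) / len(self.outcomes) < self.min_accuracy

monitor = DriftMonitor(window=10, min_accuracy=0.8)
for _ in range(9):
    monitor.record(1, 1)   # nine correct predictions
monitor.record(1, 0)       # one miss: rolling accuracy 0.9, still fine
```

When `needs_review()` fires, the process defined in this step, retraining on fresh data and revalidating, takes over.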
Enterprise AI is rapidly moving beyond hype and into reality and is set to have a significant impact on business operations and efficiencies. Taking time now to plan its implementation will put organisations in a far stronger position to enjoy its benefits further down the line.
Sourced by Prentiss Donohue, senior vice president, professional services, OpenText