In a recent report from Forrester Research, nearly 40% of global data and analytics decision-makers said they are already implementing and expanding big data technology, with an additional 30% planning to do so over the next year.
Big data is bringing with it the ability to transform industries and the potential to turn traditional, long-standing business models on their head. For businesses, the issue to be addressed is how to make the most of all this data, turning it into actionable insights.
Help is at hand in the form of data science. The analysis of patterns in the data allows organisations to build models that create forecasts of what can happen under different scenarios.
By doing so, companies can state objectives in a mathematical language, optimise over possible actions, and recommend the best action to the user. This forms the basis for transformative software, creating solutions that reliably turn data into actionable insights.
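As a toy illustration of that pipeline, the sketch below simulates demand scenarios from an assumed forecast, scores each possible stock level by expected profit, and recommends the best one. All figures (costs, prices, demand distribution) are invented for the example, not taken from any real system:

```python
# Hypothetical sketch: turn a demand forecast into a recommended action.
# Simulate demand scenarios, then pick the stock level that maximises
# expected profit -- a newsvendor-style optimisation over possible actions.
import random

random.seed(42)
UNIT_COST, UNIT_PRICE = 6.0, 10.0  # assumed economics, illustrative only

# Model output stand-in: possible demand scenarios (units per day)
scenarios = [random.gauss(100, 15) for _ in range(10_000)]

def expected_profit(stock):
    """Average profit across all demand scenarios for a given stock level."""
    total = 0.0
    for demand in scenarios:
        sold = min(stock, max(demand, 0))
        total += sold * UNIT_PRICE - stock * UNIT_COST
    return total / len(scenarios)

# Optimise over candidate actions and recommend the best one
best_stock = max(range(50, 151), key=expected_profit)
print(f"Recommended stock level: {best_stock} units")
```

The point is the shape of the solution, forecast scenarios feeding an explicit objective, not the specific numbers.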
Data science has seen a rapid increase in adoption, evolving from a question of whether it should be used to a question of how often.
It has become a necessity as technologies have exploded the volume of data available for analysis and as cloud computing has made storing and analysing the data more affordable and feasible.
Central to cloud computing’s contribution to data science is the elasticity of its computing power. We now have the ability to add CPU horsepower as needed to execute analysis, however complex, on data, however large. There was a time when any analysis was only as good as its ability to run within a timeframe determined by the available CPU power.
With cloud computing and the ability to add processing power on demand, those limitations no longer apply. This makes computationally expensive techniques such as machine learning, where the software can learn and evolve, practical at scale.
Ultimately, the real question for a business that wants to leverage data science is how to use it to make better decisions and improve operations. If it’s to be of use, it needs to be consumable – meaning the algorithms employed by the software shouldn’t be a black box in which the user has little sense of (or trust in) what is happening in the background.
On the other hand, software should not overwhelm the user with so much information that they are left confused. There is a middle ground where the science is accessible to the user. Here, users are given a basic knowledge of the logic employed, the majority of recommendations can be accepted automatically, and in the few instances where the user requires more information, it is made available to them.
Data science is enabling the next generation of enterprise software, resulting in solutions that tell users what is going to happen and what they should do about it today.
How much inventory should we carry today to meet future demand? How should we price our items to guarantee future profitability? Which port should we have a shipment depart from and come into in order to de-risk the estimated time of arrival? What products should we recommend to customers to increase the probability of a cross-sell?
Across industries, it’s valuable to have a CRM application with the ability to predict which customers are most likely to make the next purchase, which products will be part of that purchase, and which customers are at risk of attrition.
An intelligent CRM engine should be able to identify the individual products a customer is most likely to purchase and the probability that the purchase will occur within a specified time. Sales associates can make use of this information to prioritise which customers they contact or to filter customers based on specific products they want to push.
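A minimal sketch of how that output might be consumed, assuming an upstream model has already produced per-customer purchase probabilities (the customers, products and scores below are all invented): rank the predictions so an associate can prioritise outreach or filter by the product they want to push.

```python
# Hypothetical CRM engine output: (customer, product, probability the
# purchase occurs within 30 days). Scores would come from an upstream
# propensity model; these values are illustrative assumptions.
predictions = [
    ("Acme Ltd",  "safety gloves", 0.81),
    ("Brightway", "safety gloves", 0.35),
    ("Corex",     "hard hats",     0.64),
    ("Acme Ltd",  "hard hats",     0.22),
]

def prioritise(preds, product=None, top_n=3):
    """Return the highest-probability customers, optionally for one product."""
    rows = [p for p in preds if product is None or p[1] == product]
    return sorted(rows, key=lambda r: r[2], reverse=True)[:top_n]

# Sales associate view: who to call first about a specific product
for customer, product, prob in prioritise(predictions, product="safety gloves"):
    print(f"{customer}: {prob:.0%} likely to buy {product} this month")
```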
For counter sales locations, the CRM software can alert checkout staff to remind customers about complementary purchases they might have forgotten. And, in retail environments, customers can receive coupons or recommendations on arrival in a shop, instead of at the checkout when the sale is already completed.
In a business environment where supply chains are increasingly complex and interdependent, the smooth operation of the global supply chain is essential if businesses are to avoid costly delays and inventory shortages.
Data scientists are making full use of increased computing power and global supply chain data to automatically model supply chain timelines and anticipate where disruptions are likely to occur. This enables businesses to take the necessary steps to mitigate them.
The combination of machine learning and big data provides a better overall understanding of which delays are normal and which are true system-wide disruptions (e.g. strikes and natural disasters).
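One simple way to draw that line, sketched below with invented transit times, is to compare each shipping lane’s latest transit time against its own history and flag only sharp deviations; real systems would use far richer models, so this is illustrative only:

```python
# Crude disruption signal: flag a lane when its latest transit time is
# several standard deviations above its historical mean. A routine delay
# stays within the lane's normal variation; a disruption does not.
from statistics import mean, stdev

history = {  # past transit times in days per lane (invented data)
    "Shanghai-Rotterdam": [30, 31, 29, 32, 30, 31],
    "Mumbai-Hamburg":     [22, 23, 21, 22, 24, 22],
}
latest = {"Shanghai-Rotterdam": 45, "Mumbai-Hamburg": 23}

def disrupted(lane, threshold=3.0):
    """True if the latest time is > threshold std devs above the mean."""
    mu, sigma = mean(history[lane]), stdev(history[lane])
    return (latest[lane] - mu) / sigma > threshold

alerts = [lane for lane in latest if disrupted(lane)]
print("Disrupted lanes:", alerts)
```

Here the Mumbai lane’s one-day slip is within normal variation, while the Shanghai lane’s jump is flagged.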
There are systems available that continuously monitor the data feed in real time, send out a notification if there is a problem, and then go on to suggest a range of solutions.
When there is a disruption, businesses can dive into affected shipments, for example, with a view to understanding the impact of the disruption for each individual shipment. And, by integrating this data across an entire business, it helps to deepen the understanding of where the business’s priorities lie.
So, how will it affect the business downstream? Which shipments are more important? What’s the optimal trade-off – faster shipping times or additional costs? In short, as well as helping to identify disruptions, data science has a significant role to play in charting a successful course around any supply chain bottlenecks.
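That speed-versus-cost trade-off can be made explicit with a toy scoring rule: price each shipping option as freight cost plus a penalty for every day it arrives after the deadline. The modes, costs and penalty below are all assumptions for illustration:

```python
# Toy trade-off: pick the shipping option minimising freight cost plus
# a lateness penalty. All figures are invented for the example.
options = [
    # (mode, transit days, freight cost in $)
    ("air",       3,  9_000),
    ("express",   8,  4_500),
    ("standard", 18,  2_000),
]
DEADLINE_DAYS = 10
LATE_PENALTY_PER_DAY = 800  # assumed business cost of each day late

def total_cost(option):
    """Freight cost plus penalty for days beyond the deadline."""
    mode, days, cost = option
    return cost + max(0, days - DEADLINE_DAYS) * LATE_PENALTY_PER_DAY

best = min(options, key=total_cost)
print(f"Best option: {best[0]} at ${total_cost(best):,} all-in")
```

With these numbers the standard option’s low freight cost is wiped out by its lateness penalty, so the mid-speed option wins; change the penalty and the recommendation shifts, which is exactly the trade-off the text describes.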
By better connecting global companies to their many and varied partners throughout the world, the solutions enabled by data science afford businesses real-time visibility of order and shipment statuses, helping decision makers to optimise their supply chains for greater agility, increased flexibility and lower costs.
New data sources
By utilising RFID and in-store sensors, retailers, distributors and manufacturers have the ability to search every piece of inventory in stores and warehouses – seeing not only what inventory is available but also detecting its movement in the store or warehouse.
The wealth of data this provides from traditional brick-and-mortar locations allows data scientists to conduct analysis that was never possible before.
Across retail, this data can also be used to understand cannibalisation effects, as one item is picked up and another is put down. Additionally, customer foot patterns in the store will allow for the creation of better store layouts.
In fashion retail, businesses will be able to use the data gathered by sensors to assess the fit of items based on how often items fail to make it out of the fitting room. And away from the shop floor, warehouse and inventory managers can use similar solutions to obtain accurate inventory levels, without the need for a time-consuming and error-prone manual count.
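As a toy illustration of that sensor-driven stock view, a hypothetical stream of RFID reads (tag IDs, zones and timestamps below are all invented) can be reduced to where each tagged item was last seen, giving live locations and movement without a manual count:

```python
# Hypothetical RFID read stream: each event records a tag, the zone
# where it was read, and a timestamp. Reducing the stream to the most
# recent read per tag yields a live inventory map.
events = [
    # (tag id, zone, timestamp)
    ("tag-001", "stockroom",    1),
    ("tag-001", "shop-floor",   2),
    ("tag-002", "shop-floor",   1),
    ("tag-002", "fitting-room", 3),
]

def current_locations(reads):
    """Latest known zone per tag, taking the most recent read for each."""
    latest = {}
    for tag, zone, ts in sorted(reads, key=lambda e: e[2]):
        latest[tag] = zone  # later reads overwrite earlier ones
    return latest

locations = current_locations(events)
print(locations)
```

Comparing successive snapshots of this map is what surfaces movement, such as an item entering the fitting room and never returning to the shop floor.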
Data science is the only sure-fire way of creating and validating solutions to improve decision-making across the board. For modern, forward-thinking businesses that find themselves with more data than they know what to do with, the application of data science will be the difference between sinking and swimming.
Sourced from Ziad Nejmeldeen, chief scientist, Infor