Industry commentators believe that innovation won’t occur in the cloud, but on the edge. Yet edge computing is simply an extension of cloud computing. So, what do they mean? The likelihood is that cloud computing and edge computing will work together. That said, questions still arise about whether technologies such as facial recognition – which Apple’s recently launched iPhone X is promoting – lead to an increased danger of Big Brother watching over people’s every move.
Previously, Apple’s devices used fingerprint recognition, and some Android device equivalents have been designed to use iris recognition. So, science fiction has quickly become science fact.
>See also: Looking ahead to the cloud in 2018
Businesses need to think ahead – particularly as the European Union’s General Data Protection Regulation is due to come into force five months from now. To ensure that retailers, government agencies, emergency services, and other organisations don’t overstep the mark, there is a need for them to consider how technologies such as facial recognition, number plate recognition, body-worn video and connected vehicle sensors can comply with GDPR.
Jim McGann, VP Marketing and Business Development at Index Engines, offers his thoughts on the regulations: “GDPR puts personal data back in the hands of the citizens. So, if you have a company doing business in the EU – including from the US – you have to comply.”
He adds that GDPR highlights a key problem that organisations have with data management: quite often they find it hard to locate the personal data on their systems or in their paper records. Consequently, they can’t know whether the data needs to be kept, deleted, modified or rectified. So, with potentially enormous fines looming over their heads, GDPR will place a new level of responsibility on their shoulders.
Nevertheless, he suggests there is a solution: “We provide information management solutions; the ability to apply policies to ensure compliance to data protection regulations. Petabytes of data has been collated, but organisations have no real understanding of what data exists. Index Engines provide the knowledge of this data, by looking at the different data sources to understand what can be purged. Many organisations can free up 30% of their data, and this allows them to manage their data more effectively. Once organisations can manage the content – the data, they can then place the policies around it as most companies know what kinds of files contain personal data.”
“Much of this is very sensitive, so many companies don’t like to talk on the record about this, but we do a lot of work with legal advisory firms to help organisations with their compliance,” continues McGann.
Index Engines, for example, completed some work with a Fortune 500 electronics manufacturer that found that 40% of its data no longer contained any business value. So, the company decided to purge it from its data centre.
“This saves data centre and management costs: they are gaining positive results by cleansing their data, but if you are a public company you can’t delete data randomly as there are regulatory compliance issues,” he points out. In some cases, there is a need to keep files for up to 30 years. “So, you need to ask if the files have business value or any regulatory compliance requirements,” he advises. For example, if there is no legal reason for keeping the data, then it can be deleted. Some firms are also migrating their data to the cloud to remove it from their data centre.
As part of this process, they are examining whether the data has any business value in order to make their data migration decisions. Organisations, including data centres, need to consider what lies within their files – no matter whether edge computing or cloud computing is used for data management, backup and storage.
It’s therefore important to explore how organisations can prevent new technologies from being used in ways that consumers and citizens would not like, and to consider how the data can be used to create value both for consumers and for the organisations utilising it for the purposes of delivering, using, securing and improving digital services.
For example, facial recognition has many applications – not just allowing people to unlock smartphones or make payments. However, reports suggest that with smartphone facial recognition, the image is held locally on the device. Still, a certain amount of data about us will need to be held on a database, and that too needs protection from hackers who could utilise personal data for malicious purposes.
Innovation on the edge
With the increasing investment in research into autonomous cars and smart cities, and with the growth of connected vehicle technologies such as Automatic Emergency Braking (AEB), in 2018 there is also the need to think about where innovation will lie, and about whether there needs to be a balance between regulatory compliance and innovation.
Furthermore, it is increasingly being suggested that innovation will lie in edge computing rather than in the cloud, and yet edge computing is merely an extension of cloud computing. Even if data were to be analysed closer to its source, an increasing amount of big data will still need to be analysed elsewhere. With data and network latency being something of an historical hindrance, the hope is that the effects of latency can be either reduced or mitigated.
Edge computing also allows for the decentralisation of data centres, enabling a plethora of smaller data centres to store, manage and analyse data, while permitting some data to be managed and analysed locally by a disconnected device or sensor (such as those found on connected and autonomous cars). Once a network connection becomes available, the data can then be backed up to the cloud to allow further actionable intelligence to be derived.
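That store-and-forward pattern – analyse and hold data locally while disconnected, then back it up to the cloud when connectivity returns – can be sketched minimally as follows (all names here are illustrative, not any vendor’s API):

```python
# Illustrative store-and-forward sketch (hypothetical names): a disconnected
# device queues data locally and backs it up to the cloud once a network
# connection becomes available again.

class EdgeDevice:
    def __init__(self):
        self.local_buffer = []   # data held while offline
        self.cloud = []          # stand-in for a cloud backup target

    def record(self, sample):
        """Store data locally, whether or not the device is connected."""
        self.local_buffer.append(sample)

    def on_connect(self):
        """Flush the local buffer to the cloud when connectivity returns."""
        self.cloud.extend(self.local_buffer)
        self.local_buffer.clear()

device = EdgeDevice()
device.record({"sensor": "lidar", "reading": 12.4})
device.record({"sensor": "speed", "reading": 48.0})
device.on_connect()
print(len(device.cloud), len(device.local_buffer))   # 2 0
```

In practice the cloud side would then run the heavier analytics over the uploaded data, which is where the “further actionable intelligence” comes from.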
The reduction in network and data latency can lead to improved customer experiences. However, with the possibility of large congested data transfers to and from the cloud, latency and packet loss can have a considerably negative effect on data throughput. Without machine intelligence solutions such as PORTrockIT, the effects of latency and packet loss can inhibit data and backup performance.
This could translate into delays at airports if the facial recognition databases can’t communicate your identity and immigration status quickly, and it could lead to accidents or to technical issues arising in autonomous cars.
With autonomous cars, there is going to be a mixture of data travelling to and from a vehicle in a constant manner. Some of this data, such as critical status and safety data, requires a fast and responsive turnaround, whilst other data is biased towards road information, such as traffic flows and speeds. Sending the safety critical data all the way back to a central cloud location over a 4G or 5G network can add considerable delays in the turnaround due to network latency, before you even start to crunch the data. There is no easy and cost-effective way of reducing latency across networks. The speed of light is the governing factor we just can’t change. It’s therefore crucial to think about how network and data latency can be managed effectively and efficiently.
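As a rough illustration of that speed-of-light floor (assuming signals travel at roughly two-thirds of the speed of light in optical fibre, and ignoring routing, queuing and processing delays), the distance to a central cloud site puts a hard lower bound on round-trip time:

```python
# Rough lower bound on network round-trip time imposed by distance alone.
# Assumes propagation at ~2/3 the speed of light in optical fibre; real
# latency adds routing, queuing and processing delays on top of this.

SPEED_OF_LIGHT_KM_S = 300_000                      # vacuum, approximate
FIBRE_PROPAGATION_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round trip in milliseconds for a given one-way distance."""
    return 2 * distance_km / FIBRE_PROPAGATION_KM_S * 1000

# A vehicle 1,500 km from a central cloud data centre vs. a nearby edge site:
print(f"{min_round_trip_ms(1500):.1f} ms")   # 15.0 ms before any processing
print(f"{min_round_trip_ms(50):.1f} ms")     # 0.5 ms
```

The thirty-fold difference in the floor alone is one argument for handling safety-critical data at the edge rather than in a distant central cloud.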
Autonomous cars, according to Hitachi, will create around 2 petabytes of data a day. Connected cars are also expected to create around 25 gigabytes of data per hour. Now, consider that there are currently more than 800 million cars in the USA, China and Europe combined. So, if there were 1 billion cars in the near future, with about half of them fully connected and each used for an average journey of 3 hours per day, 37,500,000,000 gigabytes of data would be created every day.
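The arithmetic behind that back-of-the-envelope estimate can be checked directly, using only the assumptions stated above:

```python
# Reproducing the article's estimate of daily connected-car data volume.
# All inputs are the assumptions stated in the text above.

connected_cars = 500_000_000      # half of a projected 1 billion cars
gb_per_hour = 25                  # data created per connected car per hour
hours_per_day = 3                 # average daily journey time

total_gb_per_day = connected_cars * gb_per_hour * hours_per_day
print(f"{total_gb_per_day:,} GB/day")   # 37,500,000,000 GB/day
```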
If, as expected, the majority of new cars are autonomous by the mid-2020s, that number will look insignificant. Clearly, not all of that data can instantaneously be shipped back to the cloud without some level of data verification and reduction. There must be a compromise, and that is what edge computing can offer in support of technologies such as autonomous vehicles.
Storing the ever-increasing amount of data is going to be a challenge from a physical perspective. Data size does matter, of course, and with it comes the financial question of cost per gigabyte. For example, while electric vehicles are being touted as the flavour of the future, power consumption is bound to increase. So too will the need to ensure that personal or device-created data doesn’t fall foul of data protection legislation.
However, the myriad of initiatives that have emerged over the past few years to connect devices together – such as edge computing, fog computing and cloud computing – has created much confusion. They are often hard to understand for anyone looking in from outside the IT world. It is reasonable to say, therefore, that people now live in foggy times, because new terms are being bounced around that often relate to old technologies given a new badge to enable future commercialisation.
Back to the future
A case is therefore being made for the increased use of old technologies, such as backing up data to tape storage. That’s something companies such as Index Engines do to ensure that organisations can both comply with GDPR and use the data to maintain and create value – by making it easier to index and locate data that may be used to stimulate innovation.
By making it possible to locate and utilise historical data that would otherwise lie wasted, it becomes possible to create new products, solutions and services or to improve existing ones. In each case, cloud computing is likely to be involved at one stage or another – and with some applications edge computing plays a part, too.
It is always interesting to look back a few years to those futurist projections of how technology will evolve in relation to what is now reality. In many cases these predictions are optimistic in their time scales but tend to underestimate the effects.
>See also: Painting a multi-cloud masterpiece
For example, the internet and mobile devices are just two from a long list of technologies that have had an impact on organisations’ and people’s lives. 4G and now 5G cellular technologies are being rolled out with higher bandwidth capabilities than many of us have with our home broadband. Consequently, this is going to have a major impact on the IoT and how we manage it.
There is little doubt that autonomous vehicles, personalised location aware advertising and personalised drugs – to name but a few innovations – are going to radically change the way organisations and individuals generate and collect data, the volumes of data collected and how this data is crunched.
Without doubt, too, they will have implications for data privacy. The received wisdom, when faced with vast new amounts of data to store and crunch, is to run it all from the cloud.
Yet that may not be the best solution. So, organisations should consider all the possibilities out there in the market – and some of them may not emanate from the large vendors. That’s because smaller companies are often touted as the better innovators.
Let’s return for a moment to the connected and autonomous cars theme. If these vehicles do eventually take off, as many of their proponents expect, drivers are going to become passengers in the same way that rail commuters don’t drive the trains that take them to work each day. So, with the burden of driving removed, they will have more time to entertain themselves, just as many rail commuters have done over the last few years. They will be able to catch up with TV programmes, watch movies on the go, or listen to music using streaming services.
>See also: Forecasting retail’s future in the cloud
Streaming technologies are fast becoming the way to deliver content to consumers, and such services will add to the surge in data emanating from connected and autonomous vehicles. The supporters of these automobiles therefore predict – perhaps with their own self-interest in mind – that consumers will agree to move from car ownership to a car-sharing subscription model, similar to the one employed by the likes of Uber and Lyft, which offer vehicle on-demand services via a mobile app.
This predicted future of personal transport has yet to play out. However, as everyone knows, there have been and always will be holes in mobile coverage. So the organisations offering such infotainment services will also need to consider how to deliver them smoothly, in high definition and without constant buffering – which, after all, is what everyone has come to expect as consumers in their own homes.
In this world social media organisations such as Facebook, Google, Twitter and Snapchat will seek to offer location-based advertising, as someone sitting in a vehicle as a passenger in a driverless vehicle presents a captive audience. However, this also raises privacy and data protection concerns.
For example, a recent report published by Privacy International criticises car rental firms for “relying on the small print in terms and conditions” when it comes to dealing with data amassed by in-car entertainment systems.
These infotainment systems sync up to mobile devices via Bluetooth, storing a range of data such as location logs, as well as information from on-board systems for web browsing, phone calls or streamed music. So, there is a need for players within the autonomous vehicle ecosystem to agree to uphold transparent data-sharing practices – particularly between autonomous vehicle manufacturers, insurance companies and other players within the market.
So, with GDPR on the horizon, will organisations have to seek agreement from individuals to use this data? How are they going to collect, manage and use autonomous car data, as well as data created by CCTV cameras and facial recognition systems around the world? Will this be achieved at a small number of cloud locations around the world? These are but some of the questions that require answers. Overall, though, transparency is going to be the key word.
In many ways this is exactly the problem CERN faces with the Large Hadron Collider. It generates petabytes of raw data on each run, but CERN does not have the capacity on site to run full analyses of that data. Instead, it runs a triage of algorithms over the raw data to remove the noise around the real data, before passing the data out to other organisations to do the final, detailed analysis.
Organisations are therefore going to need infrastructure that provides a limited level of data computation and data sieving at the edge – perhaps in expanded base stations – with the reduced data then shipped back to the cloud. This may, for example, involve a hybrid cloud–edge infrastructure. Does this solve everything? Not quite. Some fundamental problems remain, such as how to move vast amounts of data around the world – especially if it contains encrypted personal data. More to the point, wherever innovation lies, it is going to remain crucial to consider how to get data to users at the right time, and to plan now for how to store the data well into the future.
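A minimal sketch of that compute-and-sieve-at-the-edge pattern – with hypothetical names and thresholds, not any real system’s API – might filter sensor readings locally, forwarding urgent events immediately and only a summary of the rest:

```python
# Illustrative sketch (hypothetical names/thresholds) of edge-side data
# sieving: keep safety-critical or anomalous readings for immediate upload,
# summarise the routine ones, and ship only the reduced set to the cloud.

from statistics import mean

def sieve_readings(readings, threshold=0.9):
    """Split raw sensor readings into 'urgent' and a summarised remainder."""
    urgent = [r for r in readings if r["severity"] >= threshold]
    routine = [r["value"] for r in readings if r["severity"] < threshold]
    summary = {"count": len(routine), "mean_value": mean(routine)} if routine else None
    return urgent, summary

raw = [
    {"value": 52.0, "severity": 0.1},   # routine telemetry
    {"value": 49.5, "severity": 0.2},
    {"value": 3.0,  "severity": 0.95},  # e.g. an emergency-braking event
]
urgent, summary = sieve_readings(raw)
print(len(urgent), summary["count"])    # 1 urgent event, 2 summarised
```

The principle is the same as CERN’s triage: discard or compress the noise at the point of collection, and move only the data that repays the cost of transport.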
Sourced by David Trossell, CEO and CTO of Bridgeworks