The future of private AI: open source vs closed source

As regulation of artificial intelligence evolves, the future of AI could be private in nature - here's how adoption of open source and closed source capabilities would compare

Since the launch of OpenAI’s GPT-4 earlier this year, the proliferation of AI tools has been exponential, with generative AI adoption predicted to climb to 77.8 million users within two years of the release of ChatGPT. Yet the technology is not without risks. Companies such as Apple, Samsung and the BBC have banned its use in their organisations, citing privacy and compliance issues. Meanwhile, governments are introducing legislation to regulate its use; the UK is hosting the world’s first AI safety summit.

We’re still in the early stages of understanding the full impact of generative AI. A recent McKinsey report estimates that generative AI and other technologies could automate work activities that absorb 60 to 70 per cent of employees’ time. However, there are legitimate concerns around the data privacy and ethical implications of generative AI, including bias and fairness, intellectual property rights, and job displacement.

Related to these concerns, there is ongoing debate about whether generative AI should be publicly available to users through open source AI tools. Some experts believe it is critical to improve our understanding of AI before making source code publicly available.

In this regard, however, the genie is seemingly already out of the bottle. Meta’s powerful Llama 2 model, released in July, is open source. In June, French President Emmanuel Macron announced a €40m investment in an open ‘digital commons’ for French-made generative AI projects, intended to attract more capital from private investors. This news is particularly interesting for those in the EU, where AI tends to be more heavily regulated.

For UK businesses, open source AI could be hugely beneficial in enabling developers to build, experiment and collaborate on generative AI models while bypassing the typical financial barriers. However, it is vital that organisations recognise the risks and implement the correct measures from the start to use the technology responsibly and avoid critical data falling into the wrong hands.

The private AI model

Organisations are understandably reluctant to share their data with public cloud AI providers that might use it to train their own models. Private AI offers an alternative that lets companies reap the transformative benefits of AI for process efficiency while maintaining ownership of their data. 

With private AI, users can purpose-build an AI model to deliver the results they need, trained on the data they have and able to perform the behaviours they want, all while ensuring their data never escapes their control. Users get unique models and the guarantee that their data benefits only them and their customers, not their competitors or a public cloud provider.
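To make that idea concrete, here is a minimal sketch of what purpose-building a private model can look like in practice, using the open source Hugging Face transformers library. The base model, file path and training settings are illustrative assumptions rather than a prescription; the point is that the proprietary training text stays on infrastructure the organisation controls.

```python
# A minimal sketch of the "private AI" idea: fine-tune an open-source model
# on infrastructure you control, so proprietary text is never sent to a
# third-party AI service. Model name, file path and hyperparameters are
# illustrative assumptions only.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "distilgpt2"                   # any open model; weights are
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)  # downloaded once
tokenizer.pad_token = tokenizer.eos_token   # GPT-style models lack a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Proprietary documents are read from local disk; nothing is uploaded.
dataset = load_dataset("text", data_files={"train": "internal_docs.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="private-model", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("private-model")         # the tuned weights stay in-house
```

The key design point is that both the training data and the resulting weights remain on local storage, so the organisation alone benefits from the tuned model.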

Data privacy is a critical reason to choose private AI, especially for companies whose data is a competitive advantage or highly confidential, such as healthcare, financial services, insurance and public sector organisations. Data is one of the most valuable assets an organisation holds, so it is vital that it remains secure. With private AI, businesses can keep critical data safe and protected against exploitation by competitors and cyber criminals.

The control you retain with private AI is another part of the appeal. Businesses and organisations that take a private AI approach can tailor and adjust their AI model to their needs. This enables them to generate far more relevant and accurate information with their AI solutions. In contrast, the wider pool of disparate data sources used by public AI algorithms can lead to vague outputs, resulting in inefficiency and a need for more human intervention to prevent misinterpretation of data.

While public AI may initially appear more cost-effective, the long-term benefits of private AI significantly outweigh the initial investment.

Choosing an AI adoption strategy

There are two approaches to adopting a private AI model: developing and training AI algorithms in-house (open source) or taking a platform-based (closed source) approach. Platforms with private, generative AI capabilities can be used to quickly train models on proprietary business data without sharing it with third parties, including the platform provider. Moreover, the platform-based approach offers a set of services that support the full AI management lifecycle: from pulling together data from multiple sources to training AI algorithms, integrating them into processes and workflows and scaling AI applications across the business. This has significant advantages for improving efficiency and driving AI adoption.

When deciding which approach to take, investment is always a consideration. Developing private AI models in-house typically involves a greater investment than platform or public cloud options, as it requires businesses to fund and build a team of experts, including data scientists, data engineers and software engineers. On the other hand, taking a platform approach to private AI does not require a team of experts, which significantly reduces the complexity and cost associated with private AI deployment.

Speed of deployment is another consideration. There is a common misconception that training private AI models is very time-consuming, but this is not always the case. For instance, organisations that use a platform-based approach to private AI may be able to train a new AI model in as little as a few hours or days, which significantly speeds up private AI deployment. By contrast, fully training AI models in-house tends to be slower, as it typically requires more time and human resources to gather and prepare data and integrate information from multiple sources to feed into the AI algorithms.

Open source AI vs closed AI

Another important factor to consider when choosing an AI strategy is whether to train AI using an open source AI or a closed AI model. While open source AI is pre-trained on huge sets of publicly available data, the security and compliance risks associated with this approach are significant. To mitigate risks, organisations can adopt a hybrid open source AI model, where their data is kept private but the code, training algorithms and architecture of the AI model are publicly available.
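As a rough illustration of that hybrid approach, the sketch below runs an open-weights model for inference entirely in-process, so prompts containing confidential data never reach a third-party API. The model name and prompt are stand-in assumptions; any locally hosted open model would do.

```python
# A sketch of the hybrid model described above: the code and weights are
# open source, but prompts and outputs stay on your own hardware.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")  # runs locally

# The confidential prompt is processed in-process; no API call leaves
# the machine.
result = generator("Summarise our Q3 churn figures:", max_new_tokens=50)
print(result[0]["generated_text"])
```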

Closed AI models, on the other hand, are kept private by the organisations that develop them, including the training data, AI codebase and underlying architecture. This approach provides full control over the whole AI infrastructure while enabling businesses to use their AI intellectual property as a competitive advantage.

Fostering a culture of AI adoption

Implementing private AI helps foster a culture of AI adoption among employees. With the knowledge that AI tools are safe, reliable and built using secure internal data, employees are likely to be more open to embracing AI, which can improve operational efficiency and free up their time for more creative and strategic tasks.

This democratisation of AI empowers all employees, not just a curious few, to access and benefit from the same information source. Data is one of your most valuable assets, and generative AI models are inherently dependent upon it, so those who own their data have the most to gain. There is immense potential for organisations to uncover insights, optimise operations and stay ahead of their competitors using AI. However, the importance of data privacy, control and long-term ROI should not be overlooked.

Private AI is a logical and secure solution for organisations looking to safeguard their data and gain a competitive advantage in this new era of artificial intelligence.

Malcolm Ross is senior vice-president of product strategy at Appian.
