ChatGPT won’t kill Google — it makes search more important

Here's how the business and societal roles of Google and the wider search engine space are set to evolve alongside generative AI tools like ChatGPT

In recent months, we’ve heard a lot about ChatGPT being a “Google Killer,” paving the way for Microsoft to gain market share thanks to the advantages of generative AI. However, this prophecy of a dead search engine hasn’t come to fruition, and it won’t. Not only has Google entered the fray by launching a comparable service called Bard, but it has also explicitly stated that Bard is a “complement to search,” not a replacement. ChatGPT and Bard are taking the internet by storm, but businesses (except those in the creative space) are scratching their heads and asking how employees can best benefit from this new technology.

Why are businesses uncertain?

Story after story has told us how individuals are using ChatGPT to get things done, expand their reach, and automate mundane tasks. Bard is right behind. But despite all the excitement from individuals, many businesses are uncertain about where they can use generative AI models. While great at natural language, creativity, conversing, and summarising, these models struggle in fact-based and nuanced contexts: they may hallucinate, their sources are hard to validate, and they don’t have the latest information. Those three areas are crucial for use in the enterprise. To get to the heart of why this is an issue, we have to analyse the basis of the technology itself: large language models (LLMs).

LLMs are trained to generate text based on language patterns, and for that, they are compelling — often writing perfect prose and confident, convincing arguments. However, the writing is based on probabilities of words in language, not on an understanding of how the world works, so these models can’t be relied on to convey accurate information. That’s critical for most business applications, and that’s where search comes in.

Generative LLMs and search are different

Search is about retrieval. You request information from an insight engine, and it finds what’s most relevant and gives it to you. That’s fundamentally different from generative LLMs (GLLMs), which don’t surface pre-existing material but rather create a response on the subject based on their training data and the rules of language. This can feel like search because it presents relevant material, but unlike search it doesn’t serve up authoritative content with links to the sources; it creates a response that reflects what the model was trained on. Because it’s a reflection rather than a reproduction, it can be wrong or outdated, and it’s not easily validated because there is no traceability back to the source.

Search to the rescue

This is where search makes all the difference: feeding the results of a search into a generative LLM eliminates these shortcomings. With search serving as the information source for generative LLMs (rather than the LLMs themselves), the response is constructed from accurate, up-to-date, and traceable information. This approach capitalises on the strengths of each tool: the knowledge comes from the most relevant information found by the search, while the phrasing comes from the generative LLM. The result? Accurate, up-to-date information (from search) expressed in natural language (from the GLLM).
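For the technically minded, here is a minimal sketch of that pattern in Python. The search_engine and llm objects, their query and complete methods, and the Passage class are hypothetical stand-ins for whichever insight engine and generative model an organisation actually uses; the point is simply that retrieved passages, with their sources, are placed in the prompt so the model phrases an answer from them rather than from memory.

from dataclasses import dataclass
from typing import List


@dataclass
class Passage:
    text: str        # the retrieved snippet
    source_url: str  # traceable link back to the original document


def answer_with_search(question: str, search_engine, llm) -> str:
    # 1. Retrieval: the insight engine returns the most relevant passages.
    passages: List[Passage] = search_engine.query(question, top_k=5)

    # 2. Grounding: the passages and their sources go into the prompt, so the
    #    answer is constructed from retrieved content, not the model's memory.
    context = "\n\n".join(
        f"[{i + 1}] {p.text}\nSource: {p.source_url}"
        for i, p in enumerate(passages)
    )
    prompt = (
        "Answer the question using only the passages below, "
        "and cite the passage numbers you relied on.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

    # 3. Generation: the GLLM supplies the natural-language phrasing.
    return llm.complete(prompt)

Nothing in the sketch depends on a particular vendor; the relevance of those retrieved passages is what carries the answer, which is exactly the point.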

Since the accuracy of the response is directly related to the quality of information fed into the GLLM, the more relevant the search results are, the more reliable (and complete) the response is. Good relevance means high trustworthiness. As GLLMs get adopted by businesses, search becomes more important than ever, as it addresses the challenges of using GLLMs for information and knowledge.

But that’s not the best part

Search also solves the biggest challenge of using GLLMs in the enterprise: their lack of awareness of corporate content. ChatGPT and other GLLMs (GPT-4, Bard, LLaMA, etc.) are trained on public content — the internet. They know nothing about the knowledge contained within an enterprise, and training them on that content is ridiculously time-consuming and expensive. But enterprise search knows everything about a company; it has broad and secure access to all the corporate repositories, content, and institutional knowledge — and can provide that knowledge to a GLLM.

Using enterprise search to feed a GLLM ensures that it has the most accurate and relevant results from all content, regardless of source, format, or language. It also ensures security by only providing the GLLM with information that the employee has permission to access. Combining search with GLLMs means your GLLM is:

  • Aware – it draws on your company’s knowledge, not just public content.
  • Accurate – the information comes directly from your corporate content, not the model, for fact-based summaries and almost no hallucinations.
  • Transparent – explicit links to sources make the knowledge traceable.
  • Current – search results reflect the most up-to-date information.
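A similar sketch shows the security point. It assumes a hypothetical enterprise index whose query method returns hits exposing text, source_url, and acl (the set of groups allowed to read the document); the retrieve_for_user function and its parameters are illustrative, not any particular product’s API.

def retrieve_for_user(question: str, user_groups: set, index, llm) -> str:
    # `index.query` stands in for the enterprise search engine; each hit is
    # assumed to expose `.text`, `.source_url`, and `.acl` (a set of groups).
    hits = index.query(question, top_k=20)

    # Security filter: keep only passages the employee may access, so the
    # GLLM never sees content the user isn't entitled to.
    allowed = [h for h in hits if h.acl & user_groups][:5]

    context = "\n\n".join(f"{h.text}\nSource: {h.source_url}" for h in allowed)
    prompt = (
        "Using only the passages below, answer the question and "
        f"cite your sources.\n\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm.complete(prompt)

In practice, an enterprise search engine would typically apply that permission filter inside the index itself rather than after retrieval; the explicit filter here just makes the idea visible.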

Looking ahead

The hype we’re seeing about GLLMs like ChatGPT and Bard is warranted. They will change the way we live and work by transforming how we interact with information. But, as impressive as they are, we’re just at the beginning of this revolution, and we will rapidly see even more capable models enabling new applications we can’t yet imagine.

But it has become clear that GLLMs need a reliable source of concise, accurate, and relevant knowledge to achieve this vision, particularly in the enterprise. That source can be found with intelligent search. Search brings GLLMs to the workplace so that employees can converse with their content and do so with confidence.

Ulf Zetterberg is co-CEO at enterprise search provider Sinequa.
