AI’s “Last Mile”: Generative AI as the Next Leap in Digital Transformation

Adi Chikara

Chief Technology Officer

7 min read


A Transformational Opportunity

Large Language Models (LLMs) like OpenAI’s ChatGPT are all the rage for good reason: Artificial Intelligence (AI) doesn’t simply deliver incremental or iterative change. AI presents a transformational opportunity, both for consumer applications and for organizational workflow efficiency.

Generative AI’s (GenAI) potential is far greater when systems incorporate forms of input and output beyond language, as Large Multimodal Models (LMMs) do by combining text capabilities with images, audio, video, and more.

But organizations, with their essential security, privacy, and infrastructure requirements, need to approach LLMs in a different way. This new technology can still be transformational, but not in the way most people think, and not when used alone.   

For example, the Boston Consulting Group recently implemented a private, secure ChatGPT instance powering a chatbot that searches its proprietary content. The result was a 17% performance increase for workers in the top half of the skill distribution. A positive result, but hardly a transformational one.

Fortunately, enterprises have other ways to implement GenAI that can deliver far greater – even transformational – results. 

GenAI, the “Last Mile” of AI

To make the best use of GenAI technology, enterprise decision-makers need to keep several rules in mind.

  1. A GenAI implementation usually requires multiple technologies – just like any other software solution. A typical custom software system relies on 5 to 7 different platforms: AWS or Azure for cloud hosting, MuleSoft or Boomi for interoperability, a CI/CD system for releases. GenAI is no different; it only works in a large-system setting when it’s matched with other necessary tools and incorporated into users’ workflows.
  2. There’s a growing number of GenAI models on the market with end-user applications built on top of them, and each has its own specialty. This goes for OpenAI’s GPT models, Google’s Bard, Anthropic’s Claude, DALL-E, Midjourney, and other popular solutions: some excel at conversation, some at image generation, and some at creating descriptions. Don’t rely on a single GenAI model for all uses. Instead, choose the best-in-breed product for each particular need, and integrate them side by side.
  3. The real power and potential of GenAI comes from chaining together multiple systems. In such a chain, an LLM is often the first layer powering the user interface – the application that translates user asks into machine-readable queries – but it’s far from the only link.
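The chaining idea above can be sketched in a few lines of Python. This is an illustrative stub, not a production system: `llm_interpret` and `run_search` are hypothetical stand-ins for real model and search-service calls, and the routing rule (picking an image or text backend from the wording of the ask) is a deliberately naive assumption.

```python
from dataclasses import dataclass

@dataclass
class StructuredQuery:
    topic: str
    media_type: str  # e.g. "text" or "image"

def llm_interpret(user_ask: str) -> StructuredQuery:
    """Stub for the LLM layer: turn a free-text ask into a machine-readable query.
    A real system would call an LLM here; this toy rule just checks for 'picture'."""
    media = "image" if "picture" in user_ask.lower() else "text"
    return StructuredQuery(topic=user_ask, media_type=media)

def run_search(query: StructuredQuery) -> list[str]:
    """Stub for a downstream service, chosen best-in-breed per media type."""
    backend = "image-index" if query.media_type == "image" else "text-index"
    return [f"{backend}: result for '{query.topic}'"]

def pipeline(user_ask: str) -> list[str]:
    # The LLM is only the first link; the search service does the real retrieval.
    return run_search(llm_interpret(user_ask))

print(pipeline("picture of a red bridge"))
```

The design point is that each link can be swapped independently: a better LLM, or a different search backend, changes one function without touching the rest of the chain.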

The last point is key. LLMs provide implicit software solutions: they interpret natural language from individual users (queries) and translate it into machine-readable instructions that can be combined with explicit logic to achieve powerful results. It’s this implicit link that has been missing through years of AI development. Now that it’s here, GenAI promises to unlock proven AI capabilities that are already in use.

In a product implementation, LLMs are like Last Mile delivery. They’re the most visible link – the one that users actually see and interact with directly – in a complex chain that includes numerous and sophisticated components.  

A Real-World Example

Tricon recently undertook a GenAI project with a client that wanted to improve the search experience for users of its reference products. The obvious solution was to switch from a standard keyword-based search (which simply compares individual words or phrases in a search query against a database of content) to a vector-based search (a mathematical model of the text query).  

This change alone would likely deliver a 15-30% improvement in search efficiency (measured by how often the user engages with a search result rather than repeating the search), but to deliver a truly noticeable improvement, it was necessary to do two things:

  1. Rethink the search process and experience itself. What matters most isn’t what a user types in; what matters is the user’s intent. By clarifying what it is the user seeks (and, in some cases, why), the system can deliver the best possible information.
  2. Combine a series of AI tools and traditional applications, each of which is ideal for a particular task. 
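To see why the vector-based search mentioned above can outperform keyword matching, consider a toy example. The three-dimensional “embeddings” below are hand-made stand-ins for a real embedding model’s output (an assumption for illustration only); the point is that cosine similarity can rank a synonym highly even when the query and document share no keywords.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: "car maintenance" and "automobile repair" share
# no keywords, but a real embedding model would place them close together.
embeddings = {
    "car maintenance":   [0.90, 0.10, 0.20],
    "automobile repair": [0.85, 0.15, 0.25],
    "apple pie recipe":  [0.05, 0.90, 0.30],
}

query_vec = embeddings["car maintenance"]

# Keyword overlap with the query finds nothing; vector similarity
# ranks the synonym document first.
scores = {doc: cosine(query_vec, vec)
          for doc, vec in embeddings.items() if doc != "car maintenance"}
best = max(scores, key=scores.get)
print(best)  # "automobile repair"
```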

The result was a methodical chain of applications to deliver a new search experience. 

Step 1 
Use a Large Language Model application to clarify the user’s intent through a series of refining questions. The result is a vastly improved, far more specific prompt. 

Step 2 
Using a query-model AI tool, translate the refined intent from the LLM into a machine query that the database can understand.

Step 3 
Convert the text query into a hybrid search query that combines the benefits of vector and full-text search. This approach finds the most relevant matches across a range of media (including images, audio, and video) and in other languages (which can be machine-translated for the user, if necessary).
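One common way to fuse the vector and full-text result lists in a hybrid query is Reciprocal Rank Fusion (RRF); whether the client system used RRF specifically is an assumption here, but the sketch shows the general idea: each document earns 1/(k + rank) from every ranked list it appears in, and the scores are summed.

```python
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: merge several ranked lists into one.
    k = 60 is the conventional damping constant from the RRF literature."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

full_text = ["doc_a", "doc_c", "doc_b"]   # keyword-relevance order
vector    = ["doc_b", "doc_a", "doc_d"]   # semantic-similarity order

print(rrf([full_text, vector]))  # doc_a first: it ranks high in both lists
```

A document that ranks well in both lists beats one that tops only a single list, which is exactly the behavior a hybrid search wants.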

Step 4 
Summarize, classify, and display search results in different, easily distinguishable categories (using an AI classification model), e.g., most relevant answer, most recent answer, most trustworthy source, etc. 

Each step in the chain utilized different tools and models, all combined to provide a seamless, end-to-end experience. 
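The four steps above can be summarized as a simple orchestration sketch. All stage functions below are hypothetical stubs with invented names; in the real platform each stage wraps a distinct model or service, and only the chaining logic is shown.

```python
def clarify_intent(user_ask: str) -> str:
    # Step 1 (stub): an LLM would refine the ask through follow-up questions.
    return f"refined: {user_ask}"

def to_machine_query(intent: str) -> dict:
    # Step 2 (stub): a query-model tool would emit a database-ready query.
    return {"terms": intent, "mode": "hybrid"}

def hybrid_search(query: dict) -> list[str]:
    # Step 3 (stub): a combined vector + full-text search would run here.
    return [f"match for {query['terms']}"]

def classify_and_display(results: list[str]) -> dict:
    # Step 4 (stub): an AI classifier would bucket results into categories.
    return {"most_relevant": results[0]}

def search_pipeline(user_ask: str) -> dict:
    # Each stage feeds the next; swapping one tool leaves the rest untouched.
    return classify_and_display(hybrid_search(to_machine_query(clarify_intent(user_ask))))

print(search_pipeline("jet engine specs"))
```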

The new platform delivered a 300% increase in search efficiency over the previous application (measured by the user’s likelihood of clicking on a search result, rather than starting over with a new search).  

The LLM – the Last Mile application – served to clarify the user’s intent through conversational language, then passed the improved queries to a series of invisible back-end applications for analysis.

Conclusion

LLMs bring the capability to serve customer intent effectively. However, LLMs are numerous, varied, and in some cases not directly comparable. No single GenAI application can solve every problem, nor can any exist in a vacuum.

By focusing on outcomes, identifying the right tools, and chaining together GenAI systems with other essential applications and data, enterprise organizations can truly deliver transformational experiences. 
