Prompt Chains: A basic but powerful technique for improving LLM performance

One simple yet powerful technique for tackling more complex processes and tasks with Large Language Models (LLMs) is the Prompt Chain. One of the first internal experiments I ran with GPT-3.5, shortly after its initial general availability, was an intelligent agent integrated with an orders API that chatted with users about the status and details of their orders, with the LLM reasoning about the accuracy of the information users provided. It sat behind a chat interface intended for exposure via WhatsApp Business or SMS. The goal was simple: reduce overhead and improve efficiency by implementing an e-commerce platform's established business processes as a series of LLM calls.

One component of that workflow was context verification: an LLM step that determines whether the agent has enough information to look up an order. This is where Prompt Chaining comes in. A Prompt Chain consists of two or more prompt templates used in series or succession. There are numerous ways to implement this, but the core concept is what matters: connect the output of the first prompt to the input of a second prompt. This enables more complex workflows that can pull in and leverage external knowledge (e.g., via Retrieval-Augmented Generation), run intermediate tasks, make API calls, and so forth. The key point is that at every step you can insert a reasoner, with instructions, over some existing data, and pass its output along to the next step, increasing the quality and accuracy of the final output with respect to your goal.
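To make this concrete, here is a minimal sketch of a two-step chain modeled on the order-status agent above. `call_llm` is a hypothetical placeholder for whichever provider SDK you use, and the prompt templates are illustrative assumptions, not the production prompts:

```python
# `call_llm` is a stand-in for your LLM provider's completion call.
def call_llm(prompt: str) -> str:
    """Placeholder: swap in a real chat/completion call here."""
    return f"[LLM response to: {prompt[:40]}...]"  # canned output for illustration

VERIFY_TEMPLATE = (
    "A user wants to check on an order. Their message is below.\n"
    "Message: {message}\n"
    "Do we have an order number and an email address? "
    "Answer YES or NO, then list anything that is missing."
)

ANSWER_TEMPLATE = (
    "Verification result from the previous step: {verification}\n"
    "User message: {message}\n"
    "If verification passed, draft an order-status reply; "
    "otherwise, ask the user for the missing details."
)

def order_status_chain(message: str) -> str:
    # Step 1: context verification -- reason over the user's message.
    verification = call_llm(VERIFY_TEMPLATE.format(message=message))
    # Step 2: feed step 1's output into the next prompt template.
    return call_llm(ANSWER_TEMPLATE.format(verification=verification, message=message))

print(order_status_chain("Where is my order? My email is jane@example.com"))
```

The only structural requirement is the hand-off: step 1's output becomes part of step 2's input. Everything else, such as the templates, the number of steps, and any intermediate API calls, is up to your workflow.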

A simple scenario where you might see Prompt Chaining is Article Generation. Here, you extract cited information from a source, then insert those facts into a subsequent prompt to generate a sourced article from that cited information. This typically takes two or three LLM calls: first extract information relevant to the topic, then use that extracted information to generate the article. A sketch of this chain follows below.
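A hedged sketch of that two-call chain, again using the hypothetical `call_llm` placeholder; the prompt wording and the `source_text`/`topic` parameters are assumptions for illustration:

```python
# Same placeholder as above; replace with a real completion call.
def call_llm(prompt: str) -> str:
    return f"[LLM response to: {prompt[:40]}...]"  # canned output for illustration

def generate_sourced_article(source_text: str, topic: str) -> str:
    # Call 1: extract cited facts relevant to the topic from the source.
    facts = call_llm(
        f"Extract facts relevant to '{topic}' from the source below, "
        f"quoting each fact alongside its citation.\n\nSource:\n{source_text}"
    )
    # Call 2: write the article using only the extracted, cited facts.
    return call_llm(
        f"Write an article about '{topic}' using only these cited facts, "
        f"preserving the citations:\n\n{facts}"
    )
```

Splitting extraction from generation is the point of the chain: the second prompt works only from the verified facts produced by the first, rather than the raw source, which helps keep the final article grounded in its citations.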

Another example of Prompt Chaining is multi-step problem solving, e.g., travel planning. A user may ask an agent to plan a trip to Japan to see the top attractions over a certain date range. The agent may internally develop a multi-step plan to accomplish this: first identify the top attractions in Japan, then get the schedules of those attractions to determine the best dates and times to visit, and finally respond to the user with the resulting itinerary.
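The same pattern, sketched for the travel-planning example. In a real agent, the schedule lookup might be a tool or API call rather than a prompt; here every step is an LLM call for simplicity, and `call_llm` remains a hypothetical placeholder:

```python
# Same placeholder as above; replace with a real completion call.
def call_llm(prompt: str) -> str:
    return f"[LLM response to: {prompt[:40]}...]"  # canned output for illustration

def plan_trip(destination: str, date_range: str) -> str:
    # Step 1: identify top attractions.
    attractions = call_llm(f"List the top attractions in {destination}.")
    # Step 2: use step 1's output to look up schedules
    # (in production this might be an API/tool call instead).
    schedules = call_llm(
        f"For the attractions below, summarize opening days and hours "
        f"during {date_range}:\n{attractions}"
    )
    # Step 3: compose the final itinerary from the accumulated context.
    return call_llm(
        f"Using the attractions and schedules below, propose an itinerary "
        f"for {destination} over {date_range}.\n\n"
        f"Attractions:\n{attractions}\n\nSchedules:\n{schedules}"
    )
```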

By understanding and effectively implementing techniques such as Prompt Chaining, it's possible to create sophisticated and dynamic solutions that can handle more complex tasks (and solve real business problems 😊).

However, it's crucial to remain vigilant about the challenges and potential pitfalls in deploying these technologies in real-world environments. With careful planning and a focus on both strengths and weaknesses, LLMs can significantly enhance various workflows and drive forward the capabilities of automated systems.

Curious about what Rokk3r is doing in AI? Contact us at info@rokk3r.com to learn more.
