Architecture patterns for building Generative AI Applications
In this blog post I will take you through some of the most common usage patterns we are seeing with customers for Generative AI. We will explore techniques for generating text and images that create value for organizations by improving productivity. This is achieved by leveraging foundation models to help compose emails, summarize text, answer questions, build chatbots, and create images.
Below are a few categories under which the architecture patterns will be discussed.
Architecture Pattern for Text Generation
But before getting into the details, let's try to understand LangChain. It is an open-source library for orchestrating language models (which are stateless) into workflows that can keep memory and combine a range of tools.
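As a minimal sketch of what that orchestration looks like (assuming the classic LangChain API and an OpenAI model; class and module names may differ across LangChain versions), the snippet below wraps a stateless LLM in a chain that carries conversation memory across calls:

```python
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# The LLM itself is stateless; the memory object keeps the running transcript
llm = OpenAI(temperature=0.7)  # assumes OPENAI_API_KEY is set in the environment
conversation = ConversationChain(
    llm=llm,
    memory=ConversationBufferMemory(),
)

print(conversation.predict(input="Draft a one-line tagline for a travel app."))
# The second call sees the first exchange because the memory is replayed into the prompt
print(conversation.predict(input="Now make it more playful."))
```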
Text Generation with Simple Prompt
Text Generation with LangChain
Text Generation with Context and LangChain
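Taking the last of these patterns as an example, here is a minimal sketch (assuming the classic LangChain API and an OpenAI model; the renewal-email scenario is hypothetical) that injects caller-supplied context into a prompt template before generating text:

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Prompt template with placeholders for the context and the actual request
template = """You are a helpful writing assistant.
Use the following context when writing the text.

Context:
{context}

Task: {task}
"""
prompt = PromptTemplate(input_variables=["context", "task"], template=template)

llm = OpenAI(temperature=0.7)  # assumes OPENAI_API_KEY is set in the environment
chain = LLMChain(llm=llm, prompt=prompt)

# Hypothetical example: generate an email grounded in account details we pass in
context = "Customer: Acme Corp. Renewal date: 30 June. Discount approved: 10%."
print(chain.run(context=context, task="Write a short renewal reminder email."))
```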
Architecture Pattern for Text Summarization
Text Summarization with small files
Text Summarization with Large Files and LangChain
This pattern is useful for summarizing documents that are much larger than the maximum token limit of the OpenAI models involved in the summarization process.
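A minimal sketch of the pattern (assuming the classic LangChain API and an OpenAI model; the file name is hypothetical): split the large document into chunks, summarize each chunk, then combine the partial summaries with a map-reduce chain:

```python
from langchain.llms import OpenAI
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains.summarize import load_summarize_chain

# Read the large document (hypothetical file name)
with open("annual_report.txt") as f:
    text = f.read()

# Split into chunks that fit comfortably within the model's token limit
splitter = RecursiveCharacterTextSplitter(chunk_size=4000, chunk_overlap=200)
docs = splitter.create_documents([text])

llm = OpenAI(temperature=0)  # assumes OPENAI_API_KEY is set in the environment

# map_reduce: summarize each chunk, then summarize the summaries
chain = load_summarize_chain(llm, chain_type="map_reduce")
print(chain.run(docs))
```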
Architecture Pattern for Question Answer
Question Answer with Simple Prompt
Question Answer with Context
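A minimal sketch of this pattern (classic LangChain API and an OpenAI model assumed) stuffs the relevant context into the prompt alongside the question, so the model answers from the supplied text rather than from its parametric knowledge:

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

template = """Answer the question using only the context below.
If the answer is not in the context, say "I don't know."

Context:
{context}

Question: {question}
Answer:"""
prompt = PromptTemplate(input_variables=["context", "question"], template=template)

llm = OpenAI(temperature=0)  # assumes OPENAI_API_KEY is set in the environment
qa_chain = LLMChain(llm=llm, prompt=prompt)

context = "Our support desk is open Monday to Friday, 9am to 6pm CET."
print(qa_chain.run(context=context, question="Is support available on Saturdays?"))
```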
Question Answering with Retrieval-Augmented Generation (RAG) via Self-Managed Vector Store
This pattern addresses the need to leverage or convert data retrieved from existing systems to generate a new output (structured or unstructured) to be passed to downstream processes or other parties. This pattern is discussed and implemented in detail under
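As a minimal sketch of the self-managed vector store flavor (assuming the classic LangChain API, OpenAI models for both embeddings and generation, and FAISS as the locally managed store; the file name is hypothetical), documents are embedded once, indexed locally, and the most relevant chunks are retrieved at question time:

```python
from langchain.llms import OpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA

# Load and chunk the corpus (hypothetical file name)
with open("product_docs.txt") as f:
    text = f.read()
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
docs = splitter.create_documents([text])

# Embed the chunks and index them in a self-managed FAISS store
embeddings = OpenAIEmbeddings()  # assumes OPENAI_API_KEY is set in the environment
vector_store = FAISS.from_documents(docs, embeddings)

# Retrieve the top matching chunks and let the LLM answer from them
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=vector_store.as_retriever(search_kwargs={"k": 3}),
)
print(qa.run("How do I reset my password?"))
```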
Question Answering with Retrieval-Augmented Generation (RAG) via Search Engine
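Here the retriever is an existing search engine rather than a vector store the application manages itself. In the minimal sketch below, the `search_documents` helper is a hypothetical stand-in for whatever search engine you call (OpenSearch, Elasticsearch, a managed enterprise search service, and so on), and the generation step again assumes the classic LangChain API with an OpenAI model:

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

def search_documents(query: str, top_k: int = 3) -> list[str]:
    """Hypothetical helper: call your search engine's query API and
    return the text of the top_k matching passages."""
    raise NotImplementedError("Wire this up to your search engine")

template = """Use the search results below to answer the question.

Search results:
{results}

Question: {question}
Answer:"""
prompt = PromptTemplate(input_variables=["results", "question"], template=template)
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)

question = "What is the refund policy for enterprise plans?"
results = "\n\n".join(search_documents(question))
print(chain.run(results=results, question=question))
```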
Architecture Pattern for Chatbot
Basic Chatbot
Chatbot with Context
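A minimal sketch of a context-aware chatbot (classic LangChain API, OpenAI models, and a FAISS index built from a toy knowledge base are assumed; the plan details are hypothetical) combines conversation memory with retrieval, so follow-up questions stay grounded in the indexed documents:

```python
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

# Tiny in-memory knowledge base for illustration (hypothetical content)
texts = [
    "We offer three plans: Basic, Pro, and Enterprise.",
    "The Basic plan costs $10 per month.",
]
vector_store = FAISS.from_texts(texts, OpenAIEmbeddings())  # assumes OPENAI_API_KEY is set

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
chatbot = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(temperature=0),
    retriever=vector_store.as_retriever(),
    memory=memory,
)

print(chatbot({"question": "What plans do you offer?"})["answer"])
# The follow-up resolves "the cheapest one" using the chat history kept in memory
print(chatbot({"question": "How much does the cheapest one cost?"})["answer"])
```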
The list is not exhaustive, but it is my sincere effort to bring together a few architecture patterns in the emerging field of Generative AI.