
Anatomy of an AI Multi-Agent

Which “body parts” do we need to build an AI multi-agent system?

In the previous parts of our AI Multi-Agent series, we explored why ChatGPT alone does not constitute true AI and how to build a simple “Poor Man’s RAG” agent that gathers context before responding to you. Now it’s time to delve into the design of a general AI agent, a concept built on more than 50 years of AI research. For serious AI designers, the classic book “Artificial Intelligence: A Modern Approach” is a recommended resource.

In this article, we will examine the anatomy of a functional and efficient AI multi-agent, broken down into three key subchapters. We will also discuss practical components you can use for each part of the agent.

1. Humans vs Agents

Since we are trying to design something intelligent, humans serve as the best model. Here is a schematic of a typical human:

Typical Human Being (Schematic)
  • Senses: Eyes and other senses perceive the outside world.
  • Communication: Mouths allow us to communicate intelligently.
  • Memory and Knowledge: Stored in the brain, used to process information.
  • Reasoning Engine: Makes decisions based on sensory input and knowledge.
  • Hands: Execute changes in the external world.

Similarly, an AI agent must have analogous components. Can ChatGPT fulfill all these roles? Of course not; it can only simulate conversation. However, it can serve as a crucial part of a multi-agent system modeled after a human:

Generic AI Multi-Agent
  • Sensing: AI needs to understand its environment using sensors.
  • Communication: Interfaces to interact with humans.
  • Processing: Modules to understand input and generate responses.
  • Memory/Knowledge Base: Stores data for context and retrieval.
  • Reasoning Engine: Plans and refines solutions.
  • Actuation: Interacts with external systems.

This is a very general scheme, but it captures the high-level design of pretty much any AI agent we can imagine: a self-driving car, an autonomous robot, an agent on the web, and so on.

2. Agent Anatomy

A well-designed AI agent integrates several critical components. It begins with a communication interface, akin to a human's ability to speak and listen, that lets users interact with it through chat or voice. Sensors, such as cameras and lidar for autonomous vehicles, or combinations of LLMs and image-to-text models for web-based agents, let the agent perceive and understand its environment. A processing module synthesizes inputs from sensors and users to comprehend the task at hand. A memory or knowledge base stores and retrieves pertinent information, enhancing decision-making through retrieval-augmented generation (RAG). A reasoning engine plans, critiques, refines, and formulates execution strategies. Finally, actuators execute actions in the external world, letting the agent interact with software systems and physical environments.
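To make this anatomy concrete, here is a minimal Python sketch of how these parts could be wired together as interfaces. All class and method names are illustrative assumptions rather than a prescribed API; the point is only the perceive, retrieve, plan, act loop.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass
class Observation:
    """Anything the agent perceived: a web page, an API response, user text."""
    source: str
    content: str


class Sensor(ABC):
    @abstractmethod
    def sense(self) -> Observation: ...


class KnowledgeBase(ABC):
    @abstractmethod
    def retrieve(self, query: str, top_k: int = 3) -> list[str]: ...


class ReasoningEngine(ABC):
    @abstractmethod
    def plan(self, task: str, context: list[str]) -> list[str]: ...


class Actuator(ABC):
    @abstractmethod
    def execute(self, step: str) -> str: ...


@dataclass
class Agent:
    """Wires the 'body parts' together in a single perceive-plan-act loop."""
    sensors: list[Sensor]
    knowledge: KnowledgeBase
    reasoner: ReasoningEngine
    hands: Actuator
    history: list[str] = field(default_factory=list)

    def handle(self, user_request: str) -> list[str]:
        observations = [s.sense() for s in self.sensors]       # Sensing
        context = self.knowledge.retrieve(user_request)        # Memory / RAG
        task = user_request + "\n" + "\n".join(o.content for o in observations)
        plan = self.reasoner.plan(task, context)               # Reasoning
        results = [self.hands.execute(step) for step in plan]  # Actuation
        self.history.extend(results)                           # Memory update
        return results
```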

Agents like this and their design have been studied for decades, for example in the book recommended at the beginning of this article. The invention of LLMs and other generative AI models makes it possible to build on this research at a new level of technology, and to finally start building AI agents that are useful in a general sense, as opposed to only for extremely narrow, specialized tasks.

3. Specific AI Agent “Anatomy Parts” Design

Let’s move from our human analogy to discussing the specific parts needed to build a versatile AI Multi-Agent operating on the internet.


Sensors. These need to:

  • Read websites as humans do.
  • Discover and use various APIs available online.

To build these sensors, we need a combination of LLMs and models that understand images, along with software that can crawl and download data from URLs. Proper prompt engineering will enable these sensors to convert what they “read” into formats suitable for further processing.
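As a rough illustration, here is what a minimal web-reading sensor might look like in Python, assuming the requests library and OpenAI's chat API. The model name, prompt, and JSON schema are assumptions you would adapt to your own stack.

```python
# A minimal web-reading "sensor": fetch a page, strip markup, and ask an
# LLM to restate it in a structured, machine-friendly form.
import re

import requests
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def read_website(url: str) -> str:
    """Crawl one URL and convert it into a processing-friendly summary."""
    html = requests.get(url, timeout=10).text
    # Crude markup stripping; a real sensor would use a proper HTML parser.
    text = re.sub(r"<script.*?</script>|<style.*?</style>", " ", html, flags=re.S)
    text = re.sub(r"<[^>]+>", " ", text)
    text = re.sub(r"\s+", " ", text)[:8000]  # stay within context limits

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; use any capable LLM
        messages=[
            {"role": "system",
             "content": "Summarize this page as JSON with keys "
                        "'topic', 'key_facts', 'links_mentioned'."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content
```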

Human Interaction. This is the part everyone is familiar with by now thanks to ChatGPT: you can type what you want, or say it out loud, and it will be processed by another LLM-based module for further reasoning.

Process Input. This module, also LLM-based, takes whatever the Sensors give it, together with the current request from the human, and tries to formulate a clear Task Request for our Plan Solution module, arguably the main part of the “brain”. It is also absolutely critical for this module to consult the Knowledge Base via RAG and make the retrieved knowledge part of the context when formulating the Task Request.
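A sketch of such a module might look like the following; the function name, prompt wording, and model are illustrative assumptions.

```python
# Sketch of a "Process Input" module: merge the user's request, sensor
# output, and retrieved knowledge into one explicit Task Request.
from openai import OpenAI

client = OpenAI()


def formulate_task_request(user_request: str,
                           sensor_readings: list[str],
                           kb_snippets: list[str]) -> str:
    """Return a single, self-contained task description for the planner."""
    prompt = (
        "User request:\n" + user_request +
        "\n\nWhat the sensors observed:\n" + "\n".join(sensor_readings) +
        "\n\nRelevant knowledge (retrieved via RAG):\n" + "\n".join(kb_snippets) +
        "\n\nRewrite all of the above as one clear, unambiguous Task Request."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```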

Knowledge Base / RAG. This module stores the data and knowledge that may be relevant to our agent’s operations. This can be all kinds of publicly available data accessed via “regular” internet search, as well as so-called Vector Databases, which represent unstructured text as numerical vector embeddings. This enables much faster search “by meaning”, as opposed to simply “by keywords”, and is a crucial part of our agent.
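To illustrate search “by meaning”, here is a tiny in-memory stand-in for a vector database, assuming OpenAI's embeddings API and numpy. The model name and class are illustrative; a production agent would use a real vector database instead.

```python
# Texts become embedding vectors, and a query matches the closest vectors
# by cosine similarity rather than by shared keywords.
import numpy as np
from openai import OpenAI

client = OpenAI()


def embed(texts: list[str]) -> np.ndarray:
    result = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in result.data])


class TinyVectorStore:
    def __init__(self, documents: list[str]):
        self.documents = documents
        vectors = embed(documents)
        # Normalize once so cosine similarity is just a dot product.
        self.vectors = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

    def retrieve(self, query: str, top_k: int = 3) -> list[str]:
        q = embed([query])[0]
        q = q / np.linalg.norm(q)
        scores = self.vectors @ q
        best = np.argsort(scores)[::-1][:top_k]
        return [self.documents[i] for i in best]


# store = TinyVectorStore(["Lidar measures distance with lasers.", ...])
# store.retrieve("how do self-driving cars see?")  # matches by meaning
```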

Plan Solution. This is normally a bunch of different LLMs working together. It has to take the Task Request, analyze what resources (and “hands”) are available to the agent, iteratively plan the execution using various critique and step-by-step planning approaches, design sub-agents that are currently missing but needed for the task, and finally orchestrate execution using the “hands” or sub-agents available. This is an extremely interesting and fast-developing area of AI research, and we at Integrail focus a lot of resources on building an “AI Brain” that uses “self-learning” and automatic “sub-agent development”. In other, more specialized agents (e.g., in games), approaches such as Reinforcement Learning are quite useful as well.
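One simple version of such an iterative plan-and-critique loop might look like this sketch. The prompts, the APPROVED convention, and the model are assumptions; real planners are considerably more elaborate.

```python
# Draft a plan with one LLM call, critique it with a second, and revise
# until the critic approves or we run out of rounds.
from openai import OpenAI

client = OpenAI()


def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def plan_solution(task_request: str, tools: list[str], max_rounds: int = 3) -> str:
    plan = ask(f"Task: {task_request}\nAvailable tools: {tools}\n"
               "Write a numbered step-by-step plan using only these tools.")
    for _ in range(max_rounds):
        critique = ask(f"Task: {task_request}\nPlan:\n{plan}\n"
                       "Criticize this plan. Reply APPROVED if it needs no changes.")
        if "APPROVED" in critique:
            break
        plan = ask(f"Task: {task_request}\nPlan:\n{plan}\n"
                   f"Critique:\n{critique}\nRewrite the plan to fix the critique.")
    return plan
```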

“Hands” or Execute Actions. All of the above would be completely useless if our agent didn’t have “hands” to carry out what a human asked of it. These hands are pieces of code that can call external APIs, press buttons on web pages, or otherwise interact with existing software infrastructure.
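For example, “hands” can be as simple as a dispatch table mapping the action names the planner emits to plain functions. The actions and the API endpoint below are hypothetical.

```python
# Each action name maps to a function that touches the outside world.
import requests


def call_weather_api(city: str) -> str:
    # Hypothetical endpoint, standing in for any external API call.
    return requests.get(f"https://api.example.com/weather/{city}", timeout=10).text


def click_button(selector: str) -> str:
    # In a real agent this would drive a browser (e.g. via Playwright).
    return f"clicked element {selector}"


ACTIONS = {
    "get_weather": call_weather_api,
    "click": click_button,
}


def execute(step: dict) -> str:
    """Run one planned step, e.g. {'action': 'get_weather', 'args': ['Paris']}."""
    handler = ACTIONS[step["action"]]
    return handler(*step["args"])
```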

All of the above is possible to design and create today, and the times could not be more exciting. We at Integrail are building not just such agents but also a platform for designing and building them easily, without any programming knowledge. If you want to participate or follow us on this journey, do subscribe to this blog; we are constantly sharing our progress and giving away GenAI Token Credits to our friends :)
