If you’re reading this, you want to create a multi-agent system (MAS). But why would you want to do that? Let’s break it down using Crew AI, a framework for building multi-agent systems in which agents work together to solve complex problems. We'll also walk through creating your own beginner-friendly "crew" of agents.
Key highlights:
- Create your own agentic system
- Learn about alternatives to OpenAI for the underlying LLM
- Differentiate agents, tasks, tools, processes and crews
Why MAS?
When building a multi-agent system, the first question that comes up is: why split tasks between multiple agents? Why not just assign everything to one?
Instead of overloading one agent with everything, you give different tasks to different agents. For example, one agent could specialise in identifying what you need to learn, while another could find the best learning resources. They can work simultaneously, making the process faster, and if one agent makes a mistake, you only need to fix that specific task without affecting the rest of the system. It’s like having a team of specialised helpers instead of one overworked assistant.
Introducing Crew AI
Let me introduce you to Crew AI, a framework that lets you set up entire teams of agents to execute tasks automatically. It's open-source and supports LangChain, which gives you a ton of built-in functionality. I’ll show you how to install it, set it up, and explore it together.
Crew AI is growing quickly, with over 20k stars on GitHub, an active contributor base, and new features being added all the time. You can find the repository on GitHub.
Installation
Our goal today is to create a simple multi-agent system with Crew AI. Let’s start with the basics. After you create your project, run pip install crewai
to install it. Then, import the necessary components from Crew AI, such as Agent, Task, Crew, and Process. We’ll talk about these components in more detail soon.
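Concretely, those imports (plus the modules we'll need for environment variables in a moment) look like this:
import os
from dotenv import load_dotenv
from crewai import Agent, Task, Crew, Process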
You'll also want to securely set your API key using dotenv. For example, you could set up an OpenAI API key like this:
load_dotenv()  # reads variables from a local .env file (e.g. a line like OPENAI_API_KEY=sk-...)
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")
os.environ["OPENAI_MODEL_NAME"] = "gpt-4"
Choosing the LLM
Agents are like smart specialists, and the first thing each one needs is a language model so it can understand you and respond naturally. You can use hosted, closed models such as OpenAI's, or run open-source models like LLaMA locally. Local models are great because they are free to run, while hosted models like OpenAI's tend to be more capable and are regularly updated, at the cost of per-token fees.
If you only set the keys above, Crew AI defaults to using OpenAI. If you want to use a local model or another provider, define an LLM object and pass it to your agents, like this:
os.environ["GROQ_API_KEY"] = os.getenv("GROQ_API_KEY")
llm = ChatGroq(
api_key=os.getenv("GROQ_API_KEY"),
model="mixtral-8x7b-32768"
)
This example creates an instance of the ChatGroq class, which interacts with a large language model hosted by Groq.
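If you'd rather stay fully local with a LLaMA-family model, a similar object can be built on top of Ollama. This is a minimal sketch, assuming you have Ollama running and a llama3 model pulled locally; the exact import path depends on your LangChain version:
from langchain_community.chat_models import ChatOllama

# A locally served LLaMA model; no API key required.
local_llm = ChatOllama(model="llama3", temperature=0)
You would then pass local_llm to your agents via the llm parameter, exactly as with the Groq example.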
Agents and Their Roles
Now, let’s understand the main components of a multi-agent system in Crew AI. The key components are Agent, Task, Crew, and Process.
Here’s an example of an agent that helps with math:
math_expert = Agent(
    role="Math Expert",
    goal="Evaluate any math expression",
    backstory="You are a math expert.",
    verbose=True,
    tools=[calculate],  # a custom calculator tool; a possible definition is sketched below
    llm=llm
)
Let’s go through each component it is using. Role, Goal and Backstory should be self-explanatory: they tell the agent who it is and what it’s trying to achieve.
- When verbose is set to True, the agent prints detailed output, such as logs of its intermediate steps.
- llm specifies which language model the agent uses to execute its tasks.
- tools lists the tools the agent can call to achieve its goal. Here that's calculate, whose definition isn't shown above; a possible sketch follows this list.
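The article doesn't define calculate itself, so here is one hedged way it could look, using LangChain's @tool decorator (tool registration details vary between crewAI/LangChain versions):
from langchain.tools import tool

@tool("calculate")
def calculate(expression: str) -> str:
    """Evaluate a basic arithmetic expression such as '2 + 2 * 3' and return the result."""
    # Only allow digits and arithmetic symbols so arbitrary Python code can't run.
    allowed = set("0123456789+-*/(). %")
    if not set(expression) <= allowed:
        return "Invalid expression: only basic arithmetic is supported."
    try:
        return str(eval(expression))
    except Exception as exc:  # e.g. division by zero or unbalanced parentheses
        return f"Error evaluating expression: {exc}"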
Easy, right? Let’s create another agent:
writer = Agent(
    role="Writer",
    goal="Give simple explanations from math equation results.",
    backstory="You are a content writer who explains complex topics in simple language.",
    verbose=True,
    llm=llm
)
Assigning Tasks to Agents
Tasks define each agent’s responsibilities. Here’s how we create two tasks for our example:
task1 = Task(
    description=f"{math_input}",
    expected_output="Provide the details in bullet points.",
    agent=math_expert
)
task2 = Task(
    description="Explain the math equation results in detail.",
    expected_output="Explain in detail and save in markdown.",
    output_file="markdown/math_solution.md",
    agent=writer
)
Bringing It All Together
Now comes the exciting part – creating the Crew. This is where we bring everything together: agents, tasks, tools, and processes.
crew = Crew(
    agents=[math_expert, writer],
    tasks=[task1, task2],
    process=Process.sequential,
    verbose=2
)
- The crew consists of two agents: the math expert and the writer.
- process=Process.sequential means the tasks are executed sequentially: first the math expert completes Task 1, then the writer handles Task 2. Currently, crewAI offers sequential and hierarchical processes, and more complex ones such as consensual and autonomous processes are being worked on.
- The verbosity level is set to 2, providing detailed logs during execution.
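If you later want to try the hierarchical process instead, only the crew definition changes. This is a sketch under the assumption that your crewAI version exposes a manager_llm parameter for the managing model; check your version's docs for the exact name:
# Hierarchical variant: a manager model delegates and reviews the agents' work.
crew = Crew(
    agents=[math_expert, writer],
    tasks=[task1, task2],
    process=Process.hierarchical,
    manager_llm=llm,
    verbose=2
)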
Starting the Crew
We’re almost done; all that’s left is to kick off the crew and look at the output of our work.
result = crew.kickoff()
print(result)
This command starts the process of executing the tasks. The math_expert agent solves the math problem first, and then the writer agent explains the results. The crew.kickoff()
method manages the interaction between agents and tasks, so that everything goes smoothly and in the correct order.
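Because task2 writes its result to markdown/math_solution.md, you can also open that file after the run. A small sketch (depending on your crewAI version, you may need to create the markdown/ directory yourself before kicking off):
from pathlib import Path

# Print the writer's saved explanation once the crew has finished.
solution_path = Path("markdown/math_solution.md")
if solution_path.exists():
    print(solution_path.read_text())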
And that’s it for this task. Nice job! ⭐️
This was a beginner-friendly introduction to crewAI and multi-agent systems, and you’ve built your own crew! Don’t stop here: keep adding more agents, tasks and tools. The only limit is your imagination.