From ReAct to Function Calling: How LangChain and CrewAI Simplify Multi-Model AI Agents

As LLMs (Large Language Models) get more powerful, developers are building chatbots and agents that don’t just chat: they reason, plan, call tools, and enhance text, like what I did with this post 😀.

But there’s a catch:

How do we let LLMs call functions or tools without tightly coupling our app to just one provider like OpenAI?

In this post, I’ll walk through:

  • What ReAct is, and why it matters
  • The rise of function calling with OpenAI
  • The coupling problem when working with multiple LLMs
  • How LangChain abstracts away those problems
  • What really happens when you run agent.invoke()
  • And finally, how CrewAI lets you go multi-agent, easily


🧂 What is ReAct?

ReAct stands for Reasoning + Acting -> a prompting pattern that teaches language models to think step-by-step and take actions (like calling tools).

The model loops through steps like:

Thought: I need to look something up.
Action: search["Paris weather today"]
Observation: It is 26°C in Paris with clear skies.
Final Answer: The weather in Paris is nice today, 26°C.
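
Under the hood, an agent framework runs this loop for you. Here’s a minimal sketch of what that looks like in Python (call_llm and search are hypothetical stand-ins for your model client and a real search tool):

import re

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for your LLM client (OpenAI, Claude, a local model...)."""
    raise NotImplementedError

def search(query: str) -> str:
    """Hypothetical stand-in for a real search tool."""
    raise NotImplementedError

def react_loop(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        output = call_llm(prompt)
        # The model either answers...
        answer = re.search(r"Final Answer: (.*)", output)
        if answer:
            return answer.group(1)
        # ...or requests an action; run the tool and feed the
        # Observation back into the prompt for the next step.
        action = re.search(r'Action: search\["(.*)"\]', output)
        if action:
            observation = search(action.group(1))
            prompt += f"{output}\nObservation: {observation}\n"
    return "No final answer within the step limit."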

This pattern is the base of many agent frameworks like LangChain, CrewAI, AutoGen, etc. It’s great because it gives the model the freedom to think, plan, and act -> instead of jumping straight to an answer.


🧂 What is OpenAI Function Calling?

In 2023, OpenAI introduced a new way for GPT models to interact with tools: function calling. Instead of writing out "Action: search['Paris weather']" in plain text, the model sends a structured JSON payload like this:

{
  "name": "get_weather",
  "arguments": {
    "city": "Paris"
  }
}

Then your backend runs the function and gives the result back to the model. GPT continues reasoning from there.
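
Here’s roughly what that round trip looks like with the OpenAI Python SDK (shown with the current tools parameter; the 2023 launch used a slightly different functions parameter, but the flow is the same). The get_weather implementation is a stub for illustration:

import json
from openai import OpenAI

client = OpenAI()

def get_weather(city: str) -> str:
    """Stub backend function for illustration."""
    return f"It is 26°C in {city} with clear skies."

# Describe the function so the model knows it can call it.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]
response = client.chat.completions.create(model="gpt-4", messages=messages, tools=tools)
msg = response.choices[0].message

if msg.tool_calls:
    call = msg.tool_calls[0]
    args = json.loads(call.function.arguments)  # the structured JSON payload
    result = get_weather(**args)                # your backend runs the function
    # Give the result back so the model can continue reasoning.
    messages.append(msg)
    messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    final = client.chat.completions.create(model="gpt-4", messages=messages, tools=tools)
    print(final.choices[0].message.content)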

✅ It’s clean, efficient, and powerful -> but there’s one issue.


🚧 The Problem: Tight Coupling

Function calling is great… if you only use OpenAI.

But what if you want to switch to Claude, Mistral, or even a local LLM? Now you’ve got a problem:

  • Those models don’t speak OpenAI’s function-calling format (they either lack it or expose a different schema)
  • You’d have to write custom parsing logic for each one (see the sketch below)
  • Your app logic becomes harder to maintain
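
To make the second point concrete, here’s an illustrative sketch of what per-provider parsing ends up looking like, based on each SDK’s documented response shapes:

import json

def parse_tool_call(provider: str, response):
    """Illustrative only: every provider returns tool calls in a different shape."""
    if provider == "openai":
        # OpenAI: tool calls hang off the message; arguments are a JSON string.
        call = response.choices[0].message.tool_calls[0]
        return call.function.name, json.loads(call.function.arguments)
    if provider == "anthropic":
        # Anthropic: tool calls are "tool_use" content blocks; input is already a dict.
        block = next(b for b in response.content if b.type == "tool_use")
        return block.name, block.input
    # A local model may just emit "Action: ..." text you have to regex out yourself.
    raise ValueError(f"No parser for {provider}")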

So you’re now tied, tightly, to OpenAI’s way of working.


🧩 Enter LangChain: Abstraction for LLMs + Tools

LangChain is a Python framework designed to make LLM applications modular and scalable. One of the best things it does is abstract function/tool calling.

You define your tools once, like this:

from langchain.agents import tool

@tool
def donner_meteo(ville: str) -> str:
    """Return the current weather for a given city."""
    # @tool uses the docstring as the tool's description, so it must have one.
    return f"In {ville}, it is currently 25°C and sunny."

Then you plug them into an agent. LangChain handles:

  • Which tool to call
  • How to format the prompt
  • How to parse the output (whether from GPT, Claude, Mistral, etc.)

🧠 What happens when you run agent.invoke()?

This is important. The method agent.invoke() behaves differently depending on the agent type you chose when creating the agent (passed as the agent argument of initialize_agent).

Here are a few common types:

| Agent type | Uses ReAct? | Uses function calling? | Notes |
| --- | --- | --- | --- |
| "zero-shot-react-description" | ✅ Yes | ❌ No | Classic ReAct-style |
| "structured-chat-zero-shot-react-description" | ✅ Yes | ❌ No | Better ReAct with structure |
| "openai-functions" | ❌ No | ✅ Yes | Only for GPT-3.5/4 |
| "chat-zero-shot-react-description" | ✅ Yes | ❌ No | Chat-optimized ReAct |

So:

agent = initialize_agent(
    tools=[donner_meteo],
    llm=ChatOpenAI(model="gpt-4"),
    agent="openai-functions"
)

This will use OpenAI’s native function calling. But change the type to "zero-shot-react-description" and it’ll switch to ReAct prompting.

That’s what makes LangChain powerful -> same logic, just different agent behavior.


🧪 Example

from langchain.agents import tool, initialize_agent
from langchain.chat_models import ChatOpenAI

@tool
def donner_meteo(ville: str) -> str:
    """Return the current weather for a given city."""
    return f"In {ville}, it is currently 25°C and sunny."

llm = ChatOpenAI(model="gpt-4")

agent = initialize_agent(
    tools=[donner_meteo],
    llm=llm,
    agent="openai-functions"  # or "zero-shot-react-description"
)

response = agent.invoke("What's the weather like in Lyon?")
print(response)

The best part: if tomorrow you switch from OpenAI to Claude or Mistral, the agent still works. You just swap the LLM.
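
As a sketch of that swap (assuming your LangChain version exposes ChatAnthropic and you have the anthropic package installed; import paths vary between versions):

from langchain.agents import initialize_agent
from langchain.chat_models import ChatAnthropic  # import path may differ in your version

llm = ChatAnthropic(model="claude-2")  # swap the model...

agent = initialize_agent(
    tools=[donner_meteo],  # ...keep the same tool from the example above...
    llm=llm,
    # ...and pick a ReAct agent type, since "openai-functions"
    # only works with OpenAI models.
    agent="zero-shot-react-description"
)

response = agent.invoke("What's the weather like in Lyon?")
print(response)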


🤝 Bonus: CrewAI for Multi-Agent Workflows

Want multiple agents working together, like a “researcher” and a “summarizer”? That’s where CrewAI comes in.

It builds on LangChain and lets you define a team of agents with different roles, tools, and responsibilities.

🧪 Example: Researcher + Summarizer

from crewai import Agent, Task, Crew
from langchain.chat_models import ChatOpenAI
from langchain.tools import DuckDuckGoSearchRun

search_tool = DuckDuckGoSearchRun()

researcher = Agent(
    role="AI Researcher",
    goal="Find the latest AI trends",
    tools=[search_tool],
    backstory="Specialist in AI technology monitoring",
    llm=ChatOpenAI(model="gpt-4")
)

summarizer = Agent(
    role="Writer",
    goal="Write a clear summary for the blog",
    backstory="Expert at explaining technical concepts in plain language",
    llm=ChatOpenAI(model="gpt-4")
)

research_task = Task(
    description="Research current AI trends and collect the key findings.",
    agent=researcher
)

# Give the summarizer its own task, so both agents actually run.
summary_task = Task(
    description="Write a clear, blog-ready summary of the research findings.",
    agent=summarizer
)

crew = Crew(
    agents=[researcher, summarizer],
    tasks=[research_task, summary_task],
    verbose=True
)

crew.kickoff()

Result: the researcher runs web searches and the summarizer writes the blog-style summary, all powered by LLMs.


✅ Final Thoughts

ReAct showed us how LLMs can reason and act. OpenAI function calling made tool use more structured, but also more vendor-specific.

LangChain abstracts away these differences. Whether you’re using GPT, Claude, or Mistral, it gives you the same developer experience. And if you want your agents to collaborate? CrewAI brings that orchestration to life.

If you’re building smart assistants or internal AI agents, start with LangChain. You’ll be future-proof from day one.

