From ReAct to Function Calling: How LangChain and CrewAI Simplify Multi-Model AI Agents

As LLMs (Large Language Models) get more powerful, developers are building chatbots and agents that don’t just chat: they reason, plan, call tools, and enhance text, like what I did with this post 😀. But there’s a catch: how do we let LLMs call functions or tools without tightly coupling our app to just … Read more
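The post is cut off above, but to make the idea concrete, here is a minimal sketch of LLM tool/function calling with LangChain’s bind_tools. It is an illustration only, not the article’s code; the get_weather tool and the model name are placeholder assumptions.

```python
# Minimal sketch of LangChain tool calling (illustrative only; the article's
# actual code is behind the "Read more" link). Requires langchain-openai.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def get_weather(city: str) -> str:
    """Return a fake weather report for a city (placeholder tool)."""
    return f"It is sunny in {city}."

llm = ChatOpenAI(model="gpt-4o-mini")           # model name is an assumption
llm_with_tools = llm.bind_tools([get_weather])  # expose the tool schema to the LLM

ai_msg = llm_with_tools.invoke("What's the weather in Paris?")
for call in ai_msg.tool_calls:                  # the model decides which tool to call
    print(call["name"], call["args"])
```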


How I Built a RAG Chatbot Over Confluence with Chainlit, Postgres pgvector, and Python

Sorry for posting less and less lately, but as you probably know, AI is changing insanely fast. Every time I start writing a new article about some cool thing, it can become outdated in no time. So now, I’ve decided to focus on writing about use cases I’m actually building for my customers, stuff they love … Read more


The Road to LLM: Measuring Sentence Similarity & Extractive Summarization [Day 5]

The Road to LLM Advent Calendar 2023: Sentence Similarity & Extractive Summarization. 1) Understanding Sentence Similarity: when we say two sentences are similar, humans often look at shared words or overall meaning: “The cat is sleeping on my chair.” / “The cat sleeps in my bed.” Both sentences share the subject (“cat”) and the action (“sleep”), even … Read more
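The excerpt stops before the actual computation, so as a rough illustration of the “shared words vs. overall meaning” idea, here is a small sketch (not the article’s code) that compares the two example sentences with word overlap (Jaccard) and TF-IDF cosine similarity via scikit-learn.

```python
# Illustrative sketch of two simple similarity measures for the example
# sentences above (assumption: the article may use different tooling).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

s1 = "The cat is sleeping on my chair."
s2 = "The cat sleeps in my bed."

# 1) Word overlap (Jaccard): shared words / all distinct words
w1 = set(s1.lower().replace(".", "").split())
w2 = set(s2.lower().replace(".", "").split())
jaccard = len(w1 & w2) / len(w1 | w2)

# 2) Cosine similarity over TF-IDF vectors
vectors = TfidfVectorizer().fit_transform([s1, s2])
cosine = cosine_similarity(vectors[0], vectors[1])[0, 0]

print(f"Jaccard overlap: {jaccard:.2f}, TF-IDF cosine: {cosine:.2f}")
```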


The Road to LLM: What Does It Mean to Embed Words? [Day 4]

Hello everyone! In the previous post, we explored tokenizers, tools that divide text into units called tokens. In today’s post, we’ll dive into the concept of word embedding, using a machine learning model called Word2Vec, which focuses on word-level embeddings. What Is Word Embedding? Simply put, it starts like this (ID → word): 1 → cat, 2 → dog, 3 → bird, 4 → fox, 5 → tiger … Read more
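The ID table above only numbers the words; an embedding goes further and maps each word to a dense vector. As a hedged illustration (the article’s own code and corpus are not shown in this excerpt), here is a tiny Word2Vec example using gensim with a toy corpus.

```python
# Tiny Word2Vec sketch with gensim (illustrative; corpus and parameters are
# placeholders, not taken from the article).
from gensim.models import Word2Vec

# A toy corpus of already-tokenized sentences
corpus = [
    ["the", "cat", "sleeps", "on", "the", "chair"],
    ["the", "dog", "sleeps", "on", "the", "bed"],
    ["a", "bird", "sings", "in", "the", "tree"],
]

model = Word2Vec(sentences=corpus, vector_size=50, window=2, min_count=1, seed=42)

print(model.wv["cat"].shape)              # a 50-dimensional dense vector
print(model.wv.similarity("cat", "dog"))  # cosine similarity between word vectors
```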


The Road to LLM: What is a Tokenizer? [Day 3]

Hello everyone! 👋 In our previous article, we explored Natural Language Processing (NLP) and how computers need to convert text into numbers (distributed representations) to understand human language 🗣️ ➡️ 🔢. Today, we’ll examine the crucial step that makes this conversion possible: tokenization! ✨ A tokenizer breaks down text into smaller pieces that can be … Read more
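As a quick, self-contained illustration of that idea (not necessarily the tokenizer the article uses), here is how tiktoken splits a sentence into token IDs and back.

```python
# Illustrative tokenization with tiktoken (an assumption; the article may use
# a different tokenizer). pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "A tokenizer breaks down text into smaller pieces."
token_ids = enc.encode(text)                   # text -> list of integer IDs
pieces = [enc.decode([t]) for t in token_ids]  # each ID back to its text piece

print(token_ids)
print(pieces)
print(enc.decode(token_ids) == text)           # round-trips to the original text
```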


The road to LLM ~ What is Natural Language Processing? ~ [Day 2]

Summary: Hello everyone! 👋 This is Kbilel, and welcome to our second article on LLMs. In the previous blog, we discussed what an LLM (Large Language Model) is and provided an example to illustrate its usage. 🧠💻 As a follow-up topic, we briefly looked at how to evaluate the performance of machine learning models. … Read more


The road to LLM ~ What is machine learning anyway? With an explanation of this project ~ [Day 1]

🚀 Hello everyone! This is Kbilel. Over the past few months, I’ve dived deep into the fascinating world of generative AI. 🤖 Starting today, as part of my Advent Calendar for Q4 2024 project, I’m excited to share everything I’ve learned. From the basics to the brain-benders, there’s something here for everyone! 🎉 This is my debut in planning an Advent Calendar, and I’m pumped to bring you a series that’s both enlightening and fun. Stay tuned, dive in, and let’s explore the future of AI together! 🌟 What is the advent calendar like? Have you ever heard of the term LLM? It stands for Large Language Model. From now on, we will use … Read more


You can execute function calling faster and more efficiently! I tried parallel execution of function calling announced at OpenAI DevDay.

Welcome back! In this concise second article, we’re exploring the efficient world of parallel function execution as highlighted at OpenAI DevDay. Let’s quickly dive into what makes it exciting! Feature overview: function calling has been updated. Now, you can call multiple functions with a single message. (To be precise, the response side instructs to … Read more
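The excerpt ends mid-sentence, but the mechanics are roughly this: you pass your tool definitions as usual, and the model may return several entries in tool_calls for a single user message. Below is a minimal sketch with the OpenAI Python SDK; the model name and the get_weather tool are placeholders, not taken from the article.

```python
# Sketch of parallel function calling with the OpenAI Python SDK (v1.x).
# The tool and model below are placeholders, not the article's exact code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # model name is an assumption
    messages=[{"role": "user", "content": "Weather in Tokyo and in Paris, please."}],
    tools=tools,
)

# With parallel function calling, tool_calls can contain several entries
# (e.g. one get_weather call per city) answered from a single user message.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```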


[For Beginners] I tried the QuickStart tutorial of the OpenAI API

Introduction: Welcome to my first article on the ever-evolving world of AI. We’re starting with a hands-on exploration of the OpenAI API, offering a glimpse into the vast potential of AI. This is just the beginning of a series where we’ll uncover more about AI’s capabilities and applications. Stay tuned! We will proceed based on … Read more
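For context, the smallest possible call looks roughly like the sketch below. This is an illustration only, not the tutorial’s exact code (which may differ from the current quickstart), and the model name is a placeholder.

```python
# Minimal "hello world" call to the OpenAI API (illustrative sketch only).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)

print(completion.choices[0].message.content)
```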


What to do when you want to skip processing with Terraform’s Optional attribute

If you set a Terraform optional attribute’s value to null, the attribute is skipped nicely. Introduction: It’s been a long time since I wrote articles, for personal reasons; I’ll try to be more active and share my tips with you. When creating Terraform resources, there were cases where you wanted to skip or omit … Read more
