In this folder you will find exercises to help deepen your understanding of the building blocks behind AI Orchestration of large language models (LLMs) within applications. We introduce concepts such as Tokens, Embeddings, and Vectors, which allow AI Orchestration to work around the limited size of an LLM's prompt (its context window).
Below is a list of the labs in this section and what each one sets out to achieve.
In this lab, we'll walk through the basics of Tokens, how they relate to LLMs, and why counting them is important.
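As a small taste of what the lab covers, the sketch below counts the tokens in a piece of text using the `tiktoken` library. The choice of the `cl100k_base` encoding and the sample sentence are assumptions for illustration; match the encoding (or model name) to the model you actually call.

```python
import tiktoken

# Assumption: cl100k_base is the encoding used by many recent OpenAI models.
# Use tiktoken.encoding_for_model("<your-model>") to match your own deployment.
encoding = tiktoken.get_encoding("cl100k_base")

text = "AI Orchestration lets applications work around prompt size limits."
tokens = encoding.encode(text)

print(f"Token count: {len(tokens)}")
print(f"Token ids:   {tokens}")
```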
In this lab, we'll walk through the basics of creating Vectors using Embeddings and why they are important for AI Orchestration.
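For illustration, here is a minimal sketch of turning text into an embedding vector with the OpenAI Python SDK. The model name `text-embedding-3-small` and the use of the `OPENAI_API_KEY` environment variable are assumptions; the lab itself may use an Azure OpenAI deployment instead.

```python
from openai import OpenAI

# Assumption: OPENAI_API_KEY is set in the environment; the lab may target an
# Azure OpenAI deployment via the AzureOpenAI client instead.
client = OpenAI()

response = client.embeddings.create(
    model="text-embedding-3-small",  # assumed model name
    input="Vectors let us compare text by meaning rather than by exact words.",
)

vector = response.data[0].embedding
print(f"Embedding dimensions: {len(vector)}")
print(f"First five values:    {vector[:5]}")
```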
In this lab, we'll use Qdrant, an open source vector store, to walk through the basics of storing and searching vectors.
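A minimal sketch of the kind of operations the lab explores, using the `qdrant-client` library against an in-memory instance. The collection name, vector size, and hand-made vectors are illustrative assumptions only; the lab will likely point the client at a running Qdrant container or service and use real embeddings.

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

# Assumption: an in-memory instance for demonstration purposes only.
client = QdrantClient(":memory:")

client.create_collection(
    collection_name="demo",  # assumed collection name
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)

# Tiny hand-made vectors stand in for real embeddings.
client.upsert(
    collection_name="demo",
    points=[
        PointStruct(id=1, vector=[0.1, 0.2, 0.3, 0.4], payload={"text": "first document"}),
        PointStruct(id=2, vector=[0.4, 0.3, 0.2, 0.1], payload={"text": "second document"}),
    ],
)

# Search for the nearest stored vector to a query vector.
hits = client.search(collection_name="demo", query_vector=[0.1, 0.2, 0.3, 0.4], limit=1)
for hit in hits:
    print(hit.payload, hit.score)
```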
In this lab, we'll use Azure Cosmos DB for MongoDB (vCore) as a vector store and as a semantic cache.
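As a rough sketch only (not the lab's actual code), vector search against Azure Cosmos DB for MongoDB (vCore) is typically issued as a `$search`/`cosmosSearch` aggregation stage via `pymongo`. The connection string variable, database, collection, and field names below are all placeholder assumptions, and a vector index on the embedding field is assumed to exist.

```python
import os
from pymongo import MongoClient

# Assumption: a Cosmos DB for MongoDB (vCore) connection string is available in
# the environment; database, collection, and field names are placeholders.
client = MongoClient(os.environ["MONGO_CONNECTION_STRING"])
collection = client["demo_db"]["demo_collection"]

query_vector = [0.1, 0.2, 0.3]  # in practice, the embedding of the user's query

pipeline = [
    {
        "$search": {
            "cosmosSearch": {
                "vector": query_vector,
                "path": "contentVector",  # assumed name of the embedding field
                "k": 3,
            },
            "returnStoredSource": True,
        }
    },
    {"$project": {"_id": 0, "text": 1}},  # assumed document field
]

for doc in collection.aggregate(pipeline):
    print(doc)
```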
In this lab, we'll walk through the vector store and search ranking capabilities of Azure AI Search.
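As an illustrative sketch, a vector query with the `azure-search-documents` Python SDK might look like the snippet below. The endpoint and key variables, index name, field names, and the pre-computed query embedding are all assumptions.

```python
import os
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery

# Assumption: endpoint, key, and index name are placeholders for your own service.
search_client = SearchClient(
    endpoint=os.environ["AZURE_SEARCH_ENDPOINT"],
    index_name="demo-index",
    credential=AzureKeyCredential(os.environ["AZURE_SEARCH_KEY"]),
)

query_vector = [0.1, 0.2, 0.3]  # in practice, the embedding of the user's question

results = search_client.search(
    search_text=None,  # pure vector search; pass query text here for hybrid search
    vector_queries=[
        VectorizedQuery(vector=query_vector, k_nearest_neighbors=3, fields="contentVector"),
    ],
)

for result in results:
    # "title" is an assumed field in the index; "@search.score" is the ranking score.
    print(result["title"], result["@search.score"])
```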
In this lab, we'll walk through integrating external APIs into your chain.
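As a loose illustration of the idea (the lab may use a different orchestration framework and a different API), the sketch below wraps an external HTTP API as a tool that a LangChain-style chain or agent can call. The Open-Meteo endpoint, its response shape, and the tool name are assumptions for demonstration.

```python
import requests
from langchain_core.tools import tool


@tool
def get_current_temperature(latitude: float, longitude: float) -> str:
    """Return the current temperature for a location using the Open-Meteo API."""
    # Assumption: the public Open-Meteo forecast endpoint with current_weather=true
    # returns a "current_weather" object containing a "temperature" value.
    response = requests.get(
        "https://api.open-meteo.com/v1/forecast",
        params={"latitude": latitude, "longitude": longitude, "current_weather": "true"},
        timeout=10,
    )
    response.raise_for_status()
    weather = response.json()["current_weather"]
    return f"{weather['temperature']} °C"


# The decorated function can be handed to a chain or agent as one of its tools,
# e.g. llm.bind_tools([get_current_temperature]) with a tool-calling model.
print(get_current_temperature.invoke({"latitude": 47.6, "longitude": -122.3}))
```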