LLMs
LLM Fundamentals
Large Language Models (LLMs) are a type of artificial intelligence known as predictive models. They are trained on vast amounts of text data, allowing them to understand and generate human-like language by predicting the most probable next word (or "token") in a sequence. A key characteristic of a standard LLM is that its knowledge is "frozen" at the time of training: it is limited to the data the model saw during training and does not include any information or events that occurred after that point.
graph TD
A[Training Data] --> B{Model}; C[Question] --> B; B --> D[Token];
A basic LLM takes training data to build a model, which then predicts the next token in a sequence based on an input question.
Example Interaction:
User: To be, or not to be, that is the...
LLM: question.
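The prediction step can be illustrated with a toy sketch. This is not a real LLM: the hard-coded probability table below stands in for the distribution a trained neural network would compute.

# Toy illustration of next-token prediction. A real LLM derives these
# probabilities from billions of learned parameters; here a lookup table
# stands in for the learned distribution.
toy_distribution = {
    "to be, or not to be, that is the": {
        "question": 0.92,
        "answer": 0.05,
        "point": 0.03,
    },
}

def predict_next_token(prompt: str) -> str:
    # Look up the distribution for this context and pick the most
    # probable next token (greedy decoding).
    probabilities = toy_distribution[prompt.lower()]
    return max(probabilities, key=probabilities.get)

print(predict_next_token("To be, or not to be, that is the"))  # -> question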
Retrieval Augmented Generation (RAG)
Retrieval Augmented Generation (RAG) is a technique designed to overcome the "frozen knowledge" limitation of LLMs. It allows a model to access current, external data that was not part of its original training set. This is achieved by first searching an external knowledge base (like a company's internal documents or a real-time database) for information relevant to the user's query. This retrieved information is then provided to the LLM as additional context along with the original prompt, enabling it to generate a more informed, accurate, and up-to-date answer.
graph TD
A[User's Question] --> B{RAG Process}; B --> C[(Search External Data)]; C --> D[Relevant Information]; A --> E{LLM}; D --> E; E --> F[Generated Answer];
The RAG process intercepts a user's question, searches for relevant information, and provides that information to the LLM as context to generate a better answer.
Example of RAG in action:
User's original question
User: What was our company's Q4 revenue?
Step 1: RAG system searches internal documents and finds the following text in "budget.xlsx":
"Q4 2025 revenue reported at $1,000,000"
Step 2: The system combines the user's question with the found context
Augmented Prompt for LLM:
Context: "Q4 2025 revenue reported at $1,000,000"
Question: What was our company's Q4 revenue?
Step 3: LLM provides an answer based on the new context
LLM: Our company's Q4 revenue was $1,000,000.
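The three steps above condense into a short pipeline. The following is a minimal sketch: llm_complete() is a hypothetical stand-in for any LLM completion API, and the keyword-overlap retriever is a placeholder for the vector-embedding search a production RAG system would use.

import string

# The external knowledge base: documents the LLM never saw in training.
documents = {
    "budget.xlsx": "Q4 2025 revenue reported at $1,000,000",
    "handbook.txt": "Employees receive 20 vacation days per year",
}

def words(text: str) -> set:
    # Lowercase and strip punctuation for naive keyword matching.
    return {w.strip(string.punctuation) for w in text.lower().split()}

def retrieve(question: str) -> str:
    # Step 1: pick the document with the most keyword overlap.
    # Real systems use vector embeddings and a search index instead.
    return max(documents.values(), key=lambda d: len(words(d) & words(question)))

def answer_with_rag(question: str) -> str:
    # Step 2: combine the retrieved context with the original question.
    context = retrieve(question)
    prompt = f'Context: "{context}"\nQuestion: {question}'
    # Step 3: the LLM answers from the augmented prompt
    # (llm_complete is a hypothetical completion API, not a real library).
    return llm_complete(prompt)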
Tool Calling
Tool calling significantly expands an LLM's capabilities beyond simple information retrieval and text generation. It gives an LLM the ability to interact with and take action on external systems. When an LLM is configured with tools, it receives a description of what each tool does (e.g., a function or an API) and what inputs it requires. Based on the user's request, the model can then intelligently decide which tool to call and with what arguments to achieve a specific goal. This transforms the LLM from a passive text generator into an active system that can execute tasks.
graph TD
A[User's Question] --> B{LLM}; C[Available Tools] --> B; B --> D[Decide Tool to Call]; D --> E[Execute Tool]; E --> F[Tool Result]; F --> B; B --> G[Final Answer];
The LLM is provided with a set of tools. When asked a question, it determines which tool to call, executes it, and uses the result to formulate the final answer.
Example of the Tool Calling process:
// 1. User asks a question that requires an action
{ "user_prompt": "Please order a pizza for me." }
// 2. The LLM is given a description of available tools
{
"available_tools": [
{
"name": "place_order",
"description": "Places an order for a food item.",
"parameters": {
"item": "string",
"quantity": "integer"
}
}
]
}
// 3. The LLM decides which tool to use
{
"thought": "The user wants to order a pizza. I should use the 'place_order' tool.",
"tool_to_call": "place_order",
"parameters": {
"item": "pizza",
"quantity": 1
}
}
// 4. The system executes the tool and gets a result
{ "tool_response": { "status": "success", "order_id": "12345" } }
// 5. The LLM uses the result to answer the user
{
"llm_response": "I have successfully placed an order for one pizza. Your order ID is 12345."
}
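Mechanically, the application code sits between the model and the tools: it sends the tool descriptions with the prompt, parses the model's tool-call decision, runs the matching function, and feeds the result back. This sketch uses the same hypothetical llm_complete(); real providers expose this loop through their own message formats, but the shape is the same.

import json

# A real function the application exposes to the model as a tool.
def place_order(item: str, quantity: int) -> dict:
    return {"status": "success", "order_id": "12345"}

TOOLS = {"place_order": place_order}

def handle(user_prompt: str) -> str:
    # Steps 1-3: the model sees the tool descriptions and returns its
    # decision as JSON (llm_complete is a hypothetical completion API).
    decision = json.loads(llm_complete(user_prompt, tools=TOOLS))
    # Step 4: the application, not the model, actually executes the tool.
    result = TOOLS[decision["tool_to_call"]](**decision["parameters"])
    # Step 5: the model turns the raw result into a user-facing answer.
    return llm_complete(user_prompt, tool_response=result)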
Examples of tools include:
pull_finances
place_order
send_letter
delete_data
sequence_dna
call_rideshare
LLMs vs. Agents
With the ability to use tools, LLMs evolve from simple models into the core of agents. The LLM acts as the "brain" of an agent, while the agent framework provides it with memory (to recall past interactions) and access to a suite of tools. This combination enables the agent to reason about a task, break it down into a sequence of steps, and execute those steps by calling the appropriate tools. This ability to autonomously plan and act in the real world is the key differentiator between a basic LLM and an agent.
graph TD
A[Agent] --> B["LLM (Brain)"]; A --> C[Memory]; A --> D[Tools];
An Agent is composed of an LLM (the brain), Memory, and a set of Tools it can use to interact with the world.
Agents are more than just predictive models; they can:
Act on their own
Remember their past interactions
Take action in the real world
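A minimal agent loop ties these three parts together: the LLM plans, tools act, and memory carries state between steps. This sketch reuses the same hypothetical llm_complete() and tool registry as the earlier examples.

import json

def run_agent(goal: str, tools: dict, max_steps: int = 5) -> str:
    memory = []  # past tool calls and results the agent can recall
    for _ in range(max_steps):
        # The LLM "brain" plans the next step from the goal plus memory
        # (llm_complete is the same hypothetical completion API as above).
        decision = json.loads(llm_complete(goal, tools=tools, memory=memory))
        if decision.get("done"):
            return decision["answer"]
        # Act in the real world by executing the chosen tool...
        result = tools[decision["tool_to_call"]](**decision["parameters"])
        # ...and remember the outcome for the next planning step.
        memory.append({"tool": decision["tool_to_call"], "result": result})
    return "Stopped: step limit reached before the goal was completed."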
The Role of an MCP Server
A Model Context Protocol (MCP) server is a standardized bundle of tools that an agent can connect to and use. Think of it as a universal API gateway or a service directory for LLM agents. It exposes a collection of tools in a consistent format, so that any compatible agent can connect to it, understand the available capabilities, and start using them without needing custom integration for each individual tool. While the MCP standard is still evolving, it aims to create an interoperable ecosystem of tools for AI agents. Common installation methods for servers providing these tools include npm, HTTP, and Docker.
graph TD
subgraph S[MCP Server]
direction LR
T1[Tool 1]
T2[Tool 2]
T3[Tool 3]
end
A1[Agent 1] --> S; A2[Agent 2] --> S; A3[Agent 3] --> S;
An MCP Server acts as a central hub, providing a bundle of tools that multiple different agents can connect to and utilize.
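Connecting to a server looks roughly like the sketch below, which assumes the official MCP Python SDK (the mcp package). The server command, tool name, and arguments are placeholders for whatever the server actually exposes.

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Placeholder server launched via npx; MCP servers are commonly
    # distributed through npm, reached over HTTP, or run in Docker.
    server = StdioServerParameters(command="npx", args=["-y", "example-mcp-server"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover the bundled tools in a standard format...
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)
            # ...and invoke one without any custom integration code
            # ("place_order" and its arguments are hypothetical).
            result = await session.call_tool("place_order",
                                             {"item": "pizza", "quantity": 1})
            print(result)

asyncio.run(main())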
BigID Agentic Automation App
The Agentic Automation app is a practical implementation of these concepts, packaging an MCP server specifically for the BigID ecosystem. This app allows an agent to interact directly with BigID's data intelligence platform, enabling automations like classifying data, initiating scans, or generating reports based on natural language commands. It includes a modal (UI component) and other automation capabilities, all integrated into a single BigID app. The BigID MCP server is awaiting packaging and will be downloadable in the future.
App URL: https://agentic.bigid.tools