LLMs
LLM Fundamentals
Large Language Models (LLMs) are a type of artificial intelligence known as predictive models. They are trained on vast amounts of text data, which allows them to understand and generate human-like language by repeatedly predicting the most probable next word (or "token") in a sequence. A key characteristic of a standard LLM is that its knowledge is "frozen" at training time: it is limited to the data the model was trained on and does not include information about events that occurred after that point.
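The next-token idea above can be sketched with a toy model. This is not a real LLM; the probabilities below are invented purely to show the "pick the most probable continuation" loop:

```python
# Toy illustration (NOT a real LLM): next-token prediction as picking
# the most probable continuation from learned statistics.
# All probabilities here are made up for demonstration.
next_token_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
}

def predict_next(context):
    """Return the most probable next token given the last two tokens."""
    probs = next_token_probs[tuple(context[-2:])]
    return max(probs, key=probs.get)

tokens = ["the", "cat"]
tokens.append(predict_next(tokens))  # appends "sat"
tokens.append(predict_next(tokens))  # appends "on"
sentence = " ".join(tokens)          # "the cat sat on"
```

A real model does the same thing at vastly larger scale, over a vocabulary of tens of thousands of tokens and with probabilities computed by a neural network rather than a lookup table.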
Retrieval Augmented Generation (RAG)
Retrieval Augmented Generation (RAG) is a technique designed to overcome the "frozen knowledge" limitation of LLMs. It allows a model to access current, external data that was not part of its original training set. This is achieved by first searching an external knowledge base (like a company's internal documents or a real-time database) for information relevant to the user's query. This retrieved information is then provided to the LLM as additional context along with the original prompt, enabling it to generate a more informed, accurate, and up-to-date answer.
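The retrieve-then-augment flow can be sketched as follows. The knowledge base, the word-overlap scoring, and the prompt template are all toy stand-ins; a production RAG system would typically use embeddings, a vector store, and an actual LLM call:

```python
# Minimal RAG sketch: retrieve relevant text, then prepend it to the
# prompt as context. The retriever here is a toy word-overlap ranker.
knowledge_base = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Support hours are 9am-5pm Monday through Friday.",
    "The 2024 pricing tiers are Basic, Pro, and Enterprise.",
]

def words(text):
    """Lowercase and split, stripping simple punctuation (toy tokenizer)."""
    return set(text.lower().replace("?", " ").replace(".", " ").split())

def retrieve(query, docs, k=1):
    """Rank documents by word overlap with the query and keep the top k."""
    q = words(query)
    return sorted(docs, key=lambda d: len(q & words(d)), reverse=True)[:k]

def build_prompt(query, docs):
    """Augment the user's question with retrieved context for the LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using the context above."

prompt = build_prompt("What is the return policy?", knowledge_base)
```

The key point is that the model never needs this information in its training data: it arrives at inference time, inside the prompt.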
Tool Calling
Tool calling significantly expands an LLM's capabilities beyond simple information retrieval and text generation. It gives an LLM the ability to interact with and take action on external systems. When an LLM is configured with tools, it receives a description of what each tool does (e.g., a function or an API) and what inputs it requires. Based on the user's request, the model can then intelligently decide which tool to call and with what arguments to achieve a specific goal. This transforms the LLM from a passive text generator into an active system that can execute tasks.
Examples of tools include:
pull_finances
place_order
send_letter
delete_data
sequence_dna
call_rideshare
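The decide-then-execute pattern can be sketched with two of the tools listed above. The `choose_tool` function is a stub standing in for the LLM's decision (real systems send the tool schemas to the model and receive a structured call back), and the tool implementations and arguments are invented for illustration:

```python
# Sketch of the tool-calling loop: the model sees tool descriptions,
# picks one with arguments, and the application executes it.
TOOLS = {
    "place_order": {
        "description": "Place an order for a product.",
        "parameters": {"item": "string", "quantity": "integer"},
    },
    "call_rideshare": {
        "description": "Request a rideshare pickup.",
        "parameters": {"pickup": "string", "destination": "string"},
    },
}

def place_order(item, quantity):
    return f"Ordered {quantity} x {item}"

def call_rideshare(pickup, destination):
    return f"Ride booked from {pickup} to {destination}"

IMPLEMENTATIONS = {"place_order": place_order, "call_rideshare": call_rideshare}

def choose_tool(user_request):
    """Stub for the LLM's decision; a real model returns this structure."""
    if "order" in user_request.lower():
        return {"name": "place_order", "arguments": {"item": "coffee beans", "quantity": 2}}
    return {"name": "call_rideshare", "arguments": {"pickup": "home", "destination": "airport"}}

def execute(call):
    """Dispatch the model's chosen tool call to the matching function."""
    return IMPLEMENTATIONS[call["name"]](**call["arguments"])

result = execute(choose_tool("Please order more coffee beans"))
```

Note the separation of concerns: the model only chooses which tool to call and with what arguments; the surrounding application is what actually runs the code.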
LLMs vs. Agents
With the ability to use tools, LLMs evolve from being simple models into the core of agents. An LLM acts as the "brain" of an agent. The agent framework provides the LLM with memory (to recall past interactions) and access to a suite of tools. This combination enables the agent to reason about a task, break it down into a sequence of steps, and execute those steps by calling the appropriate tools. This ability to autonomously plan and act in the real world is the key differentiator between a basic LLM and an agent.
Agents are more than just predictive models; they can:
Act on their own
Remember their past interactions
Take action in the real world
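The agent loop described above (reason, act, remember, repeat) can be sketched as follows. The `plan_next_step` function is a stub for the LLM "brain", and the two tools and the goal are hypothetical examples:

```python
# Minimal agent loop sketch: a stubbed "brain" reads memory plus the
# goal, picks the next action, and the result is appended to memory.
def check_weather(city):
    return f"Sunny in {city}"  # stand-in for a real weather API call

def send_message(text):
    return f"Sent: {text}"  # stand-in for a real messaging tool

TOOLS = {"check_weather": check_weather, "send_message": send_message}

def plan_next_step(goal, memory):
    """Stubbed reasoning; a real agent asks the LLM what to do next."""
    if not any("Sunny" in m or "Rainy" in m for m in memory):
        return ("check_weather", ("Paris",))
    if not any(m.startswith("Sent:") for m in memory):
        return ("send_message", ("Weather looks good for the trip.",))
    return None  # goal complete

def run_agent(goal):
    memory = []  # persists between steps, so the agent recalls what it did
    while (step := plan_next_step(goal, memory)) is not None:
        tool, args = step
        memory.append(TOOLS[tool](*args))
    return memory

trace = run_agent("Tell me if the weather is good for a trip to Paris")
```

The loop structure is what distinguishes an agent from a single tool call: each step's result feeds back into memory, which shapes the next decision, until the plan is complete.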
The Role of an MCP Server
A Model Context Protocol (MCP) server is a standardized bundle of tools that an agent can connect to and use. Think of it as a universal API gateway or a service directory for LLM agents. It exposes a collection of tools in a consistent format, so that any compatible agent can connect to it, discover the available capabilities, and start using them without needing a custom integration for each individual tool. While the MCP standard is still evolving, it aims to create an interoperable ecosystem of tools for AI agents. Common distribution methods for servers providing these tools include npm, HTTP, and Docker.
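The core idea, a server exposing a discoverable catalog of tools behind one consistent interface, can be sketched as below. Note this is a simplified illustration of the pattern, not the actual MCP protocol (which is JSON-RPC based); the class, method names, and example tools are all invented for demonstration:

```python
# Illustrative sketch of the MCP pattern: clients first discover the
# tool catalog, then invoke any tool through one uniform entry point.
# This is NOT the real MCP wire protocol, just the discover/call idea.
class ToolServer:
    def __init__(self):
        self._tools = {}

    def register(self, name, description, func):
        self._tools[name] = {"description": description, "func": func}

    def list_tools(self):
        """Discovery: clients learn capabilities without custom integration."""
        return [{"name": n, "description": t["description"]}
                for n, t in self._tools.items()]

    def call_tool(self, name, **kwargs):
        """Invocation: one consistent entry point for every tool."""
        return self._tools[name]["func"](**kwargs)

server = ToolServer()
server.register("classify_data", "Classify a data set by sensitivity.",
                lambda dataset: f"{dataset}: contains PII")
server.register("initiate_scan", "Start a scan on a data source.",
                lambda source: f"Scan started on {source}")

available = server.list_tools()
result = server.call_tool("initiate_scan", source="s3-bucket")
```

Because discovery and invocation share one format, any compatible agent can use a new server's tools immediately, which is the interoperability goal the MCP standard is pursuing.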
BigID Agentic Automation App
The Agentic Automation app is a practical implementation of these concepts, packaging an MCP server specifically for the BigID ecosystem. This app allows an agent to interact directly with BigID's data intelligence platform, enabling automations like classifying data, initiating scans, or generating reports based on natural language commands. It includes a modal (UI component) and other automation capabilities, all integrated into a single BigID app. The BigID MCP server is awaiting packaging and will be downloadable in the future.
App URL: https://agentic.bigid.tools