LLMs
Agentic Automation
LLM Fundamentals
Large Language Models (LLMs) belong to a class of AI systems called predictive models. They are trained on a fixed dataset and generate text by repeatedly predicting the next token (a word or part of a word) in a sequence.
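The generation loop can be pictured in a few lines of code. This is a minimal sketch, assuming a hypothetical model object with a predict_next_token method; real LLMs sample from a learned probability distribution over a large vocabulary.

```python
# Minimal sketch of next-token generation. The `model` object and its
# predict_next_token method are hypothetical placeholders.
def generate(model, prompt_tokens, max_new_tokens=20):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_token = model.predict_next_token(tokens)  # one token at a time
        tokens.append(next_token)
        if next_token == "<end>":  # hypothetical end-of-sequence marker
            break
    return tokens
```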
Retrieval Augmented Generation (RAG)
Retrieval Augmented Generation, or RAG, lets an LLM draw on current data that was not part of its original training set. At query time, relevant information is retrieved from external sources and added to the prompt, so the model can give more accurate and up-to-date answers.
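A minimal RAG sketch is shown below. The search_knowledge_base and call_llm functions are hypothetical stand-ins for a real retrieval layer (for example, a vector store) and an LLM client; the key idea is that retrieved text is injected into the prompt.

```python
# Minimal RAG sketch: retrieve, augment the prompt, then generate.
# `search_knowledge_base` and `call_llm` are hypothetical placeholders.
def answer_with_rag(question, search_knowledge_base, call_llm, top_k=3):
    # 1. Retrieve the most relevant documents for the question.
    documents = search_knowledge_base(question, limit=top_k)

    # 2. Augment the prompt with the retrieved context.
    context = "\n\n".join(documents)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # 3. Generate the answer from the augmented prompt.
    return call_llm(prompt)
```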
Tool Calling
Tool calling gives an LLM the ability to interact with external systems and data sources. Instead of just retrieving information, the LLM can be given access to "tools" (like functions or APIs) that it can call to perform actions.
Examples of tools include:
pull_finances
place_order
send_letter
delete_data
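The sketch below shows how an application might dispatch a tool call, reusing two of the example tool names above. It assumes the LLM returns a structured request such as {"tool": ..., "arguments": ...}; the tool bodies are hypothetical placeholders.

```python
# Minimal tool-calling sketch: the LLM picks a tool, the app executes it.
def pull_finances(account_id):
    return {"account": account_id, "balance": 0}  # placeholder data

def place_order(item, quantity):
    return f"ordered {quantity} x {item}"  # placeholder action

TOOLS = {"pull_finances": pull_finances, "place_order": place_order}

def dispatch_tool_call(tool_call):
    """Execute the tool the LLM asked for and return the result."""
    func = TOOLS[tool_call["tool"]]
    return func(**tool_call["arguments"])

# Example: the LLM decides to place an order.
result = dispatch_tool_call(
    {"tool": "place_order", "arguments": {"item": "laptop", "quantity": 1}}
)
```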
LLMs vs. Agents
With the ability to use tools, LLMs evolve into agents. Agents are more than just predictive models; they can:
Act on their own
Remember their past interactions
Take action in the real world
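A minimal agent-loop sketch, assuming a hypothetical call_llm client and tool registry: the model decides the next step (autonomy), the growing message history serves as memory, and executed tool calls are actions in the real world.

```python
# Minimal agent loop. `call_llm` and `tools` are hypothetical; `call_llm`
# is assumed to return a dict with "content" and, optionally, "tool" and
# "arguments" keys.
def run_agent(call_llm, tools, user_goal, max_steps=10):
    messages = [{"role": "user", "content": user_goal}]  # memory of past interactions
    for _ in range(max_steps):
        reply = call_llm(messages)              # the model decides the next step
        messages.append(reply)
        if reply.get("tool") is None:           # no tool requested: final answer
            return reply["content"]
        result = tools[reply["tool"]](**reply["arguments"])  # act in the real world
        messages.append({"role": "tool", "content": str(result)})
    return "stopped after reaching max_steps"
```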
The Role of an MCP Server
A Model Context Protocol (MCP) server is a bundle of tools that can be installed for an LLM to use. It provides a standardized way for an LLM to discover and call the tools that are available. However, the MCP standard is still evolving, and there is no single standard installation method; common options include npm packages, remote HTTP endpoints, and Docker images.
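The sketch below shows what a small MCP server can look like, assuming the official Python MCP SDK (installed with pip install mcp). It registers a single tool that a connected LLM can discover and call; the tool body returns placeholder data.

```python
# Minimal MCP server sketch, assuming the official Python MCP SDK.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example-server")

@mcp.tool()
def pull_finances(account_id: str) -> dict:
    """Return financial details for an account (placeholder data)."""
    return {"account": account_id, "balance": 0}

if __name__ == "__main__":
    mcp.run()  # serves the registered tools over stdio by default
```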
BigID Agentic Automation App
The Agentic Automation app packages an MCP server for BigID, along with a modal and automation capabilities, into a BigID app that can be installed in your BigID environment. The BigID MCP server itself is awaiting packaging and will be downloadable in the future.
App URL: [1]