Connectors/Learn

What is a BigID Connector?

BigID Connectors allow your BigID system to provide insights about new types of data. Whether that's a known data type like CSV from a new type of data source or something completely new to the BigID ecosystem, a connector will allow you to bring BigID's data discovery capabilities to that system.

Why do we need connectors?

Every data source has its own way of communicating with third parties. Some data sources return information nicely organized; others return it as a jumbled mess. For BigID to give you the insights you expect, data needs to be fed to BigID in a consistent way. Connectors work as translators between the multitude of formats that data sources have adopted and the standard format BigID expects. Note that even when the data format is the same (REST JSON, REST XML, GraphQL, etc.), small differences make it difficult to reuse connectors. Think of a connector as a way to interface with a single system.

How are connectors implemented?

Connectors can be implemented either as a REST API or as a Java JAR file. REST connectors are bound by the limitations of HTTP connections, including timeouts, size limitations, and more. Java connectors are well suited for complex use cases, especially those involving data sources that stream data.

BigID Scanning Process

While BigID has different scanning methods (snapshots, metadata scans, Hyperscan), they all depend on scanners. Scanners allow BigID to contact data sources and create the search maps that are used to power the BigID system. Depending on your deployment model you may have scanners located in the BigID cloud, on-premise, or in your organization's cloud provider accounts. Scanners take the form of a Docker container and require only outbound network access.

In a scan, the scanner will do the following:

  • If correlation is enabled, load all correlation records in order to find them within data sources.
  • Scan table and file metadata to determine access permissions and ownership
  • Classify data streams

After a user starts a scan, the scanner uses the data in the scan request to determine what type of connection to make. In the case of REST API scans, the scanner will reach out to your connector. This means your REST connector must allow inbound network access from your scanner, and your data source must allow inbound access from the connector. For Java connectors, the scanner communicates with the data source directly.

Connector Types

There are two different types of connectors supported within BigID. Which type you use to connect to your data source has broad implications for setup, network security settings, and connector installation.

Internal (Java-based Connectors)

Most of the connectors you are familiar with are Java-based connectors.

These connectors are written in the Java programming language and distributed as JAR files. To install a new Java-based connector, an administrator must manually load the connector JAR file into the scanner using the command line. Thankfully, the 50+ internal connectors written by BigID are bundled with the scanner by default. The scanner uses these connectors' code directly to connect to your data sources.

These connectors allow large amounts of customization in the scanning process and the connection to your data source. Due to the customization options, they are more complicated to create and are not the recommended connector development method for BigID customers.

External (Generic REST API Connectors)

External connectors allow you to create a connector in your favorite programming language. The scanner communicates with your connector over HTTPS, so as long as your programming language of choice can respond to web requests, it can be used to create an external connector.

External connectors can be hosted on any server that has a network connection to both your scanner and your data source.

There are two different types of external connectors that you can create: unstructured and structured.

Unstructured External Connector

Unstructured connectors allow BigID to scan files from a given data source. An example of an unstructured data source is Google Drive.

Structured External Connector

Structured connectors allow BigID to scan databases. An example of a structured connector would be our MySQL connector.

The Simplest BigID Connector

Below is sample code for the simplest REST connector you can make.

const express = require('express')
const app = express()
const port = 3000;

// This is our fake data for the connector
const FAKE_DATA = {
    Customers: [
        { Id: 1, Name: "Michael", Address: "100 World Center Drive, Orlando FL" },
        { Id: 2, Name: "Bob", Address: "1 Universal Drive, Orlando FL" },
        { Id: 3, Name: "Stewart", Address: "1 Universal Drive, Orlando FL" }
    ],
    Orders: [
        { Id: 1, Item: "banana", customerId: 2, Price: 1 }
    ]
};

/**
 * Describes all objects inside a data source
 *
 * @async
 * @param {{ domain: string; user: string; pass: string; header: string; }} login login information from BigID
 */
async function describeObjects(login) {

    const objects = [];

    // Create a new object for each table
    for (let table in FAKE_DATA) {
        let thisObject = { objectName: table, fields: [] };

        // Get field definitions for the object using the first item in each object
        for (let field in FAKE_DATA[table][0]) {
            thisObject.fields.push({ fieldName: field, fieldType: typeof FAKE_DATA[table][0][field] });
        }
        objects.push(thisObject);
    }

    return {
        status: 'success',
        objects: objects
    };
}


/**
 * Describes a specific object in the data source.
 *
 * @async
 * @param {{ domain: string; user: string; pass: string; header: string; }} login login information from BigID
 * @param {string} object name of object
 */
async function describeObject(login, object) {

    const objectFields = [];

    // Get field definitions for the object using the first item in the object
    for (let field in FAKE_DATA[object][0]) {
        objectFields.push({ fieldName: field, fieldType: typeof FAKE_DATA[object][0][field] });
    }

    return {
        status: "success",
        objectName: object,
        fields: objectFields
    }
}

/**
 * Get records from data source for a specific object
 *
 * @async
 * @param {{ domain: string; user: string; pass: string; header: string; }} login login information from BigID
 * @param {string} object
 * @param {number} offset
 * @param {number} count
 * @returns {unknown}
 */
async function getRecords(login, object, offset, count) {
    let records = [];

    // an offset defines where we should start; slice out only the elements needed
    const recordWindow = FAKE_DATA[object].slice(offset, offset + count);

    for (let record of recordWindow) {
        
        let thisResult = {
            id: record.Id, // All records *must* have a unique ID for BigID to identify them
            data: []
        };
        for(let field in record){
            thisResult.data.push({fieldName: field, fieldType: typeof record[field], fieldValue: record[field]});
        }
        records.push(thisResult);
    }

    return {
        status: "success",
        records: records,
        offset: offset + records.length
    }

}

/**
 * Get a count of the number of records of a given object type
 *
 * @async
 * @param {{ domain: string; user: string; pass: string; header: string; }} login login information from BigID
 * @param {string} object object name
 * @returns {unknown}
 */
async function countRecords(login, object) {
    return {
        status: "success",
        count: FAKE_DATA[object].length
    };
}

/**
 * Search through objects
 *
 * @param {{ domain: string; user: string; pass: string; header: string; }} login
 * @param {*} object
 * @param {*} search
 */
function search(login, object, search) {
    //TODO
}

/**
 * Returns BASIC formatted auth data from the authorization header
 *
 * @param {*} req
 * @param {*} res
 * @returns {{ user: string; pass: string; header: string; } | null}
 */
function getAuthData(req, res) {

    if (req.headers.authorization === undefined) {
        console.log("No Login information supplied");
        res.status(401).json({
            status: 'error'
        });
        return null;
    }

    const b64auth = (req.headers.authorization || '').split(' ')[1] || '';
    const [login, password] = Buffer.from(b64auth, 'base64').toString().split(':');

    return {
        user: login,
        pass: password,
        header: 'Basic ' + Buffer.from(login + ':' + password).toString('base64')
    };
}

app.get('/objects/', async (req, res) => {
    console.log(`LISTOBJS()`);

    const login = getAuthData(req, res);
    if (login === null) { return; }

    const obj = await describeObjects(login);
    if (obj.status === "success") {
        return res.json(obj)
    }

    return res.status(401).json(obj);
});

app.get('/objects/:object/describe', async (req, res) => {
    console.log(`DESCRIBEOBJS(object=${req.params.object})`);

    const login = getAuthData(req, res);
    if (login === null) { return; }

    let result = await describeObject(login, req.params.object);

    if (result.status === "success") {
        return res.json(result);
    }
    return res.status(401).json(result);

});

app.get('/objects/:object/records', async (req, res) => {
    const login = getAuthData(req, res);
    if (login === null) { return; }

    let offset = parseInt(req.query.Offset) || 0;
    let count = parseInt(req.query.Count) || 25;
    if (count > 200) { count = 200 }

    console.log(`GETRECORDS(offset=${offset},count=${count},object=${req.params.object})`);

    return res.json(await getRecords(login, req.params.object, offset, count));
});

app.get('/objects/:object/count', async (req, res) => {
    console.log(`COUNTRECORDS(object=${req.params.object})`);

    const login = getAuthData(req, res);
    if (login === null) { return; }

    return res.json(await countRecords(login, req.params.object));
});

app.listen(port, () => {
    console.log(`Sample Connector listening on port ${port}`)
});
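
With the connector running locally, you can sanity-check an endpoint before pointing a scanner at it. The snippet below is a hypothetical smoke test, not part of the BigID tooling; it assumes Node 18+ for the built-in fetch and that the connector accepts any Basic credentials, as the sample above does:

// Hypothetical smoke test for the sample connector above (requires Node 18+ for global fetch)
const auth = 'Basic ' + Buffer.from('user:pass').toString('base64');

fetch('http://localhost:3000/objects/', { headers: { Authorization: auth } })
    .then(res => res.json())
    .then(body => console.log(JSON.stringify(body, null, 2)))
    .catch(err => console.error('Connector not reachable:', err.message));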

Knowledge Check

1 What is the primary purpose of BigID connectors? (Select one)

To translate various data formats into a standard format expected by BigID.
To store large volumes of data.
To manage user access permissions.
To generate reports automatically.

2 Which statement about REST connectors is true? (Select one)

REST connectors are bound by HTTP limitations such as timeouts and size constraints.
REST connectors allow unlimited data transfer.
REST connectors do not require network access.
REST connectors are better suited for complex use cases involving data streams.

3 What are the two types of external connectors in BigID? (Select two)

Structured
Unstructured
Metadata
Hyperscan

4 How are Java-based connectors installed in BigID? (Select one)

By manually loading the JAR file into the scanner using the command line.
By uploading them through the BigID web interface.
By configuring them in the network settings.
By downloading them directly from the BigID cloud.


Writing a Structured Connector

Each connector type presents data to BigID through its own hierarchy.

Structured Connector (Databases)

An Object contains Records, and a Record contains Fields.

  • Object: Something in the world that you are storing multiples of. Examples: Customers, Products, Orders. All Objects must have the same Fields.
  • Record: An individual item you are storing. Examples: your customer Michael Mouse, an apple you sell, an order Michael made for a product. All Records must have a unique ID.
  • Fields: Pieces of information about a record. Examples: Michael Mouse's address is 100 World Drive, an apple costs $1, Michael's order was placed on January 1st. Every field has a field name, a field value, and a field type. Field types can be any string, but typically represent database types like string or number.

Unstructured Connector (Files/Streams)

A Container contains Objects, and an Object contains Metadata and an InputStream.

  • Container: A grouping of files or streams; think of a file folder. Examples: Invoices, 2024, LogStreamForApp.
  • Object: An individual file or stream of information. Examples: InvoiceJan1.pdf, TaxNoticeFor2024.pdf, a Kinesis stream named LogStreamForApp. All Objects must have a unique ObjectName.
  • Metadata: Pieces of information about an Object. Examples: Last Modified Date, Owner, Name, File Permissions.
  • InputStream: The data inside the file, in UTF-8 format.

Structured connectors are used to connect to systems that organize data in a predefined format. Examples of structured data sources are databases, spreadsheets, and data warehouses.

As we mentioned before, BigID connectors serve as translators between an external system's format and BigID's data format. The hierarchy above shows the structure of that internal format. Structured data sources consist of Objects. You can think of objects as things in the real world: products, invoices, or even your customers. If you're familiar with relational databases, an object most often takes the form of a table in your database.

Inside objects we have records. A record is an individual instance of an object. Because fields are defined at the object level, all records of the same type will have the same fields. A record has only one required field: a unique ID. BigID uses this ID when attempting to fetch an individual record, so it is important that it is unique.

Inside of records, we have fields and values. Fields and values are the actual data BigID will scan for your object.


While your data source may store its data differently, you will need to reorganize it into this structure for BigID to scan the data. Remember that connectors are translators: improper translation from your data source's format to BigID's is the biggest source of issues when developing and testing custom connectors. See the example below of a JSON file for a single user converted into this format:

{
    "user":"[email protected]",
    "phone": "(321) 555-5555",
    "information": [
        {"key": "name", "value": "user"},
        {"key": "favcolor", "value": "green"}
    ]
}

translates to

{
  "objectName": "User",
  "fields": [
    {
      "fieldName": "Name",
      "fieldType": "string"
    },
    {
      "fieldName": "FavColor",
      "fieldType": "string"
    },
    {
      "fieldName": "Phone",
      "fieldType": "string"
    }
  ]
}
{
  "id": "[email protected]",
  "fields": [
    {
      "fieldName": "phone",
      "fieldValue": "(321) 555-5555"
    },
    {
      "fieldName": "name",
      "fieldValue": "user"
    },
    {
      "fieldName": "favcolor",
      "fieldValue": "green"
    }
  ]
}
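
In code, that translation is just a reshaping of the source document. The helper below is a minimal sketch, not part of the sample connector; the function name and the choice of the email address as the record ID are assumptions for illustration.

// Hypothetical helper: reshape the source user document into a BigID record.
function translateUser(sourceUser) {
    // The email address is the only unique value in the source, so use it as the record ID (assumption)
    const record = { id: sourceUser.user, fields: [] };

    // Top-level attributes map directly to fields
    record.fields.push({ fieldName: "phone", fieldValue: sourceUser.phone });

    // The source stores its remaining attributes as key/value pairs; flatten them into fields
    for (let entry of sourceUser.information) {
        record.fields.push({ fieldName: entry.key, fieldValue: entry.value });
    }

    return record;
}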

Connector Endpoints

BigID will request and view your connector’s hierarchy through six endpoints:

  • List what objects exist (GET /objects)
  • List what fields an object has (GET /objects/<objectName>/describe)
  • List what records are inside an object (GET /objects/<objectName>/records)
  • Count the number of records inside an object (GET /objects/<objectName>/count)
  • Return the fields and values for a given record ID (GET /objects/<objectName>/<recordID>)
  • Search for records in an object (POST /objects/<objectName>/sar)
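
The sample connector earlier on this page implements the first four endpoints. Below is a sketch of the record-by-ID endpoint; it reuses FAKE_DATA and getAuthData from that sample. The 404 error shape and the inclusion of fieldType in the response are assumptions that should be checked against the generic REST API documentation, and the search (sar) endpoint is not sketched here because its request and response format is not shown on this page.

// Hypothetical sketch: return the fields and values for a single record by its unique ID.
// Register this route after the /describe, /records and /count routes so those still match first.
app.get('/objects/:object/:id', async (req, res) => {
    console.log(`GETRECORD(object=${req.params.object},id=${req.params.id})`);

    const login = getAuthData(req, res);
    if (login === null) { return; }

    const table = FAKE_DATA[req.params.object] || [];
    const record = table.find(r => String(r.Id) === req.params.id);

    if (record === undefined) {
        // Error shape is an assumption; check the generic REST API docs for the expected format
        return res.status(404).json({ status: 'error' });
    }

    const fields = [];
    for (let field in record) {
        fields.push({ fieldName: field, fieldType: typeof record[field], fieldValue: record[field] });
    }

    return res.json({ status: 'success', id: record.Id, fields: fields });
});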


List what fields an object has

Since all records within an object must contain the same fields, BigID uses an object's fields to tell users what type of information is inside your data source. BigID expects a response like the following from your connector when it asks what fields an object has:

GET /objects/User/describe HTTP/1.1


{
  "status": "success",
  "objectName": "User",
  "fields": [
    {
      "fieldName": "Name",
      "fieldType": "string"
    },
    {
      "fieldName": "FavColor",
      "fieldType": "string"
    },
    {
      "fieldName": "Phone",
      "fieldType": "string"
    }
  ]
}