Version: v5.1

Custom MCP Tools or Related Query Tools

Overview#

This page provides guidelines to help developers decide whether Custom MCP Tools or Related Query Tools are the better fit for their requirements.

This page lists a series of different possible approaches and makes observations on each strategy.

Recommended approach#

Based on our analysis, the Custom MCP Tool approach is recommended overall, but please study the information below to understand how this recommendation was reached.

Alternatively, you can go straight to our Overall recommendation section.

Comparing the Custom MCP Tool versus the Related Query Tool approach#

When deciding whether to implement a Custom MCP Tool or a Related Query Tool with a SearchRelatedItemsTool pipeline, you need to assess a range of factors to identify the most suitable approach for retrieving related asset data.

When making this comparison, some of the factors to address in your decision making include:

  • Software behavior
  • Software performance (latency)
  • Accuracy of response
  • Solution scalability

Approach 1: Custom MCP Tool — Results and Observations#

In this scenario, a custom GetAssetsTool was implemented using the getAssetsByCriteria script, configured with a payload structure supporting the following filters: name, dtCategory, dtType, searchText, _pageSize, and _offset.

Tool payload#

[  {    "_name": "GetAssetsTool",    "_description": "Retrieves assets by asset name, dtCategory, dtType, or fallback search text.",    "_namespaces": ["test_MpwvILLp"],    "_schema": {      "type": "object",      "properties": {        "name": {          "type": "string",          "description": "Filter by asset name. Supports regex-style matching."        },        "dtCategory": {          "type": "string",          "description": "Filter by dtCategory value. Supports regex-style matching."        },        "dtType": {          "type": "string",          "description": "Filter by dtType value. Supports regex-style matching."        },        "searchText": {          "type": "string",          "description": "Fallback free-text filter when name, dtCategory, or dtType is not explicitly provided."        },        "_pageSize": {          "type": "integer",          "description": "Number of results to return. Default is 10.",          "default": 10        },        "_offset": {          "type": "integer",          "description": "Number of results to skip. Default is 0.",          "default": 0        }      },      "additionalProperties": false    },    "_script": {      "_userType": "asset_scripts",      "_scriptName": "getAssetsByCriteria"    }  }]

Backend script: getAssetsByCriteria#

```javascript
async function getAssetsByCriteria(input, libraries, ctx, callback) {
  const { PlatformApi } = libraries;
  const { IafItemSvc } = PlatformApi;

  function toRegexString(str) {
    if (!str || typeof str !== 'string') {
      return '.*';
    }

    const value = str.trim();
    const quoted = value.match(/^'(.*)'$/);

    if (quoted) {
      const escaped = quoted[1].replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
      return '.*(' + escaped + ').*';
    }

    const terms = value
      .split(/\s+/)
      .filter(Boolean)
      .map(function (term) {
        return term.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
      });

    return '.*(' + terms.join('|') + ').*';
  }

  function emptyResult(message, pageSize, offset) {
    return {
      _list: [],
      _total: 0,
      _pageSize: pageSize,
      _offset: offset,
      message: message
    };
  }

  const criteria = { $or: [] };
  const pageSize = Number.isFinite(Number(input._pageSize)) ? parseInt(input._pageSize, 10) : 10;
  const offset = Number.isFinite(Number(input._offset)) ? parseInt(input._offset, 10) : 0;

  const options = {
    page: {
      _pageSize: pageSize,
      _offset: offset
    }
  };

  if (input.name) {
    criteria.$or.push({
      'Asset Name': {
        $regex: toRegexString(input.name),
        $options: 'i'
      }
    });
  }

  if (input.dtCategory) {
    criteria.$or.push({
      'properties.dtCategory.val': {
        $regex: toRegexString(input.dtCategory),
        $options: 'i'
      }
    });
  }

  if (input.dtType) {
    criteria.$or.push({
      'properties.dtType.val': {
        $regex: toRegexString(input.dtType),
        $options: 'i'
      }
    });
  }

  if (criteria.$or.length === 0 && input.searchText) {
    const rx = toRegexString(input.searchText);

    criteria.$or.push(
      {
        'Asset Name': {
          $regex: rx,
          $options: 'i'
        }
      },
      {
        'properties.dtCategory.val': {
          $regex: rx,
          $options: 'i'
        }
      },
      {
        'properties.dtType.val': {
          $regex: rx,
          $options: 'i'
        }
      }
    );
  }

  if (criteria.$or.length === 0) {
    return emptyResult(
      'Provide at least one of name, dtCategory, dtType, or searchText.',
      pageSize,
      offset
    );
  }

  const colQuery = {
    query: {
      _userType: 'iaf_ext_asset_coll',
      _itemClass: 'NamedUserCollection'
    }
  };

  const colOptions = {
    project: { _userType: 1, _itemClass: 1 },
    sort: { _name: 1 },
    page: { _offset: 0, _pageSize: 1 }
  };

  const colResponse = await IafItemSvc.getNamedUserItems(colQuery, ctx, colOptions);

  if (!(colResponse && Array.isArray(colResponse._list) && colResponse._list.length > 0)) {
    return emptyResult(
      'Asset collection with _userType "iaf_ext_asset_coll" was not found.',
      pageSize,
      offset
    );
  }

  const assetCollection = colResponse._list[0];

  return await IafItemSvc.getRelatedItems(
    assetCollection._id,
    { query: criteria },
    ctx,
    options
  );
}
```
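The quoting behavior of the toRegexString helper is worth highlighting. The following standalone sketch copies that helper out of the script so its two matching modes can be demonstrated in isolation:

```javascript
// Standalone copy of the toRegexString helper from getAssetsByCriteria,
// extracted here so its behavior can be demonstrated in isolation.
function toRegexString(str) {
  if (!str || typeof str !== 'string') {
    return '.*'; // Fallback: match everything when no usable input is given.
  }

  const value = str.trim();
  const quoted = value.match(/^'(.*)'$/);

  if (quoted) {
    // Quoted input: escape regex metacharacters and match the phrase as one unit.
    const escaped = quoted[1].replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
    return '.*(' + escaped + ').*';
  }

  // Unquoted input: each whitespace-separated term becomes an alternative.
  const terms = value
    .split(/\s+/)
    .filter(Boolean)
    .map(function (term) {
      return term.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
    });

  return '.*(' + terms.join('|') + ').*';
}

console.log(toRegexString('stone cladding'));   // → .*(stone|cladding).*
console.log(toRegexString("'stone cladding'")); // → .*(stone cladding).*
```

Unquoted multi-word input matches any of the terms (broad recall), while single-quoting the phrase forces it to match as a whole (higher precision).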

Performance of this approach#

The following performance characteristics were noted:

  • Consistent end-to-end response time of 30 – 40 seconds. It can be lower depending on the size of the response. This includes script execution (fetching assets via IafItemSvc.getRelatedItems) and LLM processing/observation of the result.
  • No significant latency spikes observed across multiple runs.

Accuracy#

The Custom MCP tool returned accurate, relevant results consistently across all tested prompts, including:

  • "Get all doors"
  • "Get all ceilings"
  • "Get all porcelain"
  • "Fetch all stone cladding system"
  • "Retrieve all roof"

All the prompts returned the correct asset results every time. The regex-based matching logic in toRegexString() handles both quoted and unquoted inputs gracefully, and the $or query structure across Asset Name, dtCategory, and dtType ensures broad yet relevant coverage.

Strengths#

The main strengths of this approach are:

  • It is a direct, deterministic query execution. This means that there is no LLM-generated query involved which reduces ambiguity.
  • Fallback searchText covers cases where specific filters are not provided.
  • Returns reliable and repeatable results regardless of prompt phrasing.
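To illustrate the deterministic point above, the sketch below mirrors the criteria-building step of getAssetsByCriteria (the buildCriteria helper and the simplified toRegexString are illustrative, not part of the shipped script): the same input always yields byte-for-byte the same Mongo-style query, with no LLM in the loop.

```javascript
// Simplified toRegexString (unquoted case only) for this illustration.
function toRegexString(str) {
  const terms = str.trim().split(/\s+/).filter(Boolean)
    .map((t) => t.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'));
  return '.*(' + terms.join('|') + ').*';
}

// Illustrative helper mirroring the criteria-building logic of the script.
function buildCriteria(input) {
  const criteria = { $or: [] };
  if (input.name) {
    criteria.$or.push({
      'Asset Name': { $regex: toRegexString(input.name), $options: 'i' }
    });
  }
  if (input.dtCategory) {
    criteria.$or.push({
      'properties.dtCategory.val': { $regex: toRegexString(input.dtCategory), $options: 'i' }
    });
  }
  return criteria;
}

// The same input always yields the same query - no LLM variability.
const query = buildCriteria({ name: 'doors' });
console.log(JSON.stringify(query));
// → {"$or":[{"Asset Name":{"$regex":".*(doors).*","$options":"i"}}]}
```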

Approach 2: Related Query Tool and SearchRelatedItemsTool — Results and Observations#

In this scenario, a Related Query Tool was used, combined with the SearchRelatedItemsTool.

Agent background / system prompt#

You are an AI agent that extracts assets from the Asset Collection using a two-step tool pipeline.

Available tools:

* RelatedQueryTool: Generates a query from the user prompt
* SearchRelatedItemsTool: Executes the query and returns results

Execution rules:

1. Always send the user prompt exactly as received to the RelatedQueryTool.
2. Do not alter, enrich, or interpret the prompt.
3. Extract the query from the RelatedQueryTool response.
4. Pass the extracted query exactly as-is to the SearchRelatedItemsTool.
5. Return BOTH:
   * The JSON query generated by RelatedQueryTool
   * A natural language response based on the SearchRelatedItemsTool output.

Response format (STRICTLY follow this format):

***query***: <The exact JSON query generated by RelatedQueryTool>
***Response***: <Human-readable explanation of SearchRelatedItemsTool results, including field names, their meaning, and values. Provide all data properly.>

Failure handling:

* If SearchRelatedItemsTool fails, return in the same format:

***query***: <JSON RelatedQueryTool response>
***Response***: SearchRelatedItemsTool failed to execute the query

Strict constraints:

* Do not generate queries yourself
* Do not modify tool outputs
* Do not skip tools
* Do not add extra explanations unless there is an error
* Always strictly follow the response format

Agent configuration#

In this scenario, configure the agent using a two-step pipeline:

  1. Set up RelatedQueryTool - generates a structured query from the user prompt based on schema definitions.
  2. Set up SearchRelatedItemsTool - executes the generated query against the asset collection.
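The hand-off between the two steps can be sketched as follows. Note that relatedQueryTool and searchRelatedItemsTool are hypothetical stand-ins stubbed purely to show the data flow; in practice the agent runtime invokes the real tools, they are not called directly like this.

```javascript
// Hypothetical stub: in reality, the LLM generates a structured query
// from the user prompt based on the collection's schema definitions.
async function relatedQueryTool(userPrompt) {
  return {
    query: { 'properties.dtCategory.val': { $regex: '.*(doors).*', $options: 'i' } }
  };
}

// Hypothetical stub: in reality, this executes the query against the
// asset collection and returns the matching items.
async function searchRelatedItemsTool(query) {
  return { _list: [{ 'Asset Name': 'Door-01' }], _total: 1 };
}

async function runPipeline(userPrompt) {
  // Rules 1-2: pass the prompt through unmodified.
  const generated = await relatedQueryTool(userPrompt);
  // Rules 3-4: extract the query and pass it on exactly as-is.
  const results = await searchRelatedItemsTool(generated.query);
  // Rule 5: return both the generated query and the raw results.
  return { query: generated.query, results };
}

runPipeline('get all doors').then((out) => console.log(out.results._total));
```

The key design point is that the agent itself adds no interpretation: any inaccuracy originates in the LLM's query generation step, which is what the observations below examine.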

Performance of this approach#

The end-to-end response times were noted as follows:

  • Best case: around 30 - 35 seconds
  • Worst case: around 1.5 - 2 minutes

Details of best case#

The best case (30 - 35 seconds) occurs when the following conditions are met:

  • RelatedQueryTool processes only one collection
    • Simplified schema
    • Faster query generation
    • Minimal schema definition overhead
  • SearchRelatedItemsTool returns a small response
    • Less data for LLM to process
    • Resulting in faster final response generation

Details of worst case#

The worst case (1.5 - 2 minutes) occurs under the following conditions:

  • RelatedQueryTool processes multiple collections
    • More schema definitions are contained in the prompt
    • Increased internal processing time
    • Complex query generation (such as findWithRelated, findWithRelatedGraph)
  • SearchRelatedItemsTool returns a large response
    • The LLM has to process more data.
    • This results in increased latency in generating the final response.

Accuracy#

On the accuracy of this approach, the following observations were made:

  • Works well for simple, clearly named asset categories. For example: "get all doors", "get all ceilings", "retrieve all roof".
  • Fails or returns inaccurate results for prompts with ambiguous or non-standard terminology. Examples include:
    • "get all system panel": Incorrect query generated; inaccurate results.
    • "get all system panel belonging to Revit family": Correct query generated and accurate results returned.
  • The tool relies heavily on schema definitions to identify the correct property fields. When the prompt does not align closely with schema field names, the generated query picks incorrect properties, leading to inaccurate or empty results.
  • Adding more contextual information to the prompt significantly improves accuracy, but this places an additional burden on the end user.

Limitations#

The main limitations noted for this approach are:

  • LLM-dependent query generation introduces non-determinism. This means that the same prompt may yield different queries every time you run it.
  • No built-in fallback when schema fields are ambiguous or when the prompt is vague.
  • Requires user awareness of schema structure for reliable results.

Overall recommendation#

Based on our analysis, the Custom MCP Tool approach is recommended. Study the table below to see our reasons for this recommendation.

| Criteria | Custom MCP Tool | Related Query Tool Pipeline |
| --- | --- | --- |
| Response Time | 30 - 40 seconds | 30 - 35 seconds (best case); 1.5 - 2 minutes (worst case) |
| Accuracy | High, consistent | Variable, prompt-dependent |
| Reliability | Deterministic | Non-deterministic |
| User Effort | Minimal | Requires detailed prompts |
| Scalability | Straightforward | Schema-bound complexity |
| Best prompt examples | Short, direct prompts work well. There is no need to mention field names or property context, as the LLM decides internally. Examples: "Get all doors", "Get all ceilings", "Fetch all stone cladding system", "Retrieve all roof", "Get all porcelain", "Fetch all carpark floor", "Can you please find all curtain wall" | Works well with short, direct prompts for common, well-known asset types. For ambiguous or less common types, results can be inaccurate unless field context is added. Examples: "Get all assets where dtCategory having doors", "Fetch all assets with dtType having suspended ceiling", "Fetch all assets having mark value 6", "Fetch all common scope value". See the notes below the table for how to improve these prompts. |
| Prompt flexibility | High: the same simple prompt style works every time, regardless of asset type. | Medium: simple prompts work for well-known types, but ambiguous or less common asset types require added field context. |

Notes for improving prompts for Related Query Tool#

These queries can be simplified into more natural prompts such as:

  • “Get all doors”
  • “Show suspended ceiling”
  • “Get all mark with value 6”
  • “Show all common assets”

To support such natural queries, enhance the schema field descriptions. Provide clear, user-friendly descriptions for each field, and include examples and mapping hints (for example, "Use this field when the user mentions doors, glazing, HVAC, etc.").

This enables the system to automatically map user intent to the correct fields without requiring explicit field-based prompts.
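As an illustration of this guidance, an enhanced field definition might look like the following (the field name and example values are illustrative; adapt them to your own schema):

```json
{
  "dtCategory": {
    "type": "string",
    "description": "Asset category. Use this field when the user mentions doors, ceilings, glazing, HVAC, or similar building element categories. Example values: 'Doors', 'Suspended Ceiling'."
  }
}
```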

Use case recommendations#

| Use case | Recommended approach | Reason |
| --- | --- | --- |
| Retrieving well-known asset types such as doors, ceilings, or roofs | Custom MCP Tool | Faster, deterministic, no schema dependency |
| Production workflows where accuracy is critical | Custom MCP Tool | Consistent field targeting, no LLM variability |
| High-volume or frequent queries | Custom MCP Tool | Lower LLM overhead (one call instead of three), more scalable |
| Searching across multiple collections | Related Query Tool | Only approach that supports dynamic collection resolution |
| Exploratory / ad-hoc queries by power users | Related Query Tool | Flexible, schema-driven querying without tool reconfiguration |

Conclusion#

Overall, the Custom MCP Tool provides faster, more reliable, and more accurate results with minimal prompt engineering required from the user. The Related Query Tool approach may be considered in scenarios where flexible, schema-driven querying is essential and users can provide structured, context-rich prompts, but it is not suitable as a general-purpose replacement at this time.