Trigger a text-to-video search filtered by asset metadata.

Triggers an asynchronous text-to-video search within the specified dataset (`dataset_id`), restricted to assets matching every entry in `metadata_filters` (e.g., `genre = musical`, `rating != PG`, `headline contains olympics`, `impressions > 1000`). The `text_query` is encoded into an embedding using the dataset's configured vision-language encoder (e.g., CLIP, SigLIP, Perception Encoder) and ranked against video keyframe embeddings via vector similarity, while the metadata filters are applied as a pre-scoring constraint on the asset metadata fields. Returns the Databricks `run_id` immediately; use the status and result endpoints to poll for the final ranked list of videos (each with the parent video, top composite slice, top keyframe, metadata, score, and optional moderation score).
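The submit-then-poll workflow described above can be sketched in Python. The base URL and the `/video-search`, `/runs/{run_id}/status`, and `/runs/{run_id}/results` paths are placeholders (this section does not specify the endpoint paths or the status payload), so substitute the paths and state names from your API reference:

```python
import json
import time
import urllib.request

API_BASE = "https://api.example.com"  # hypothetical base URL; use your deployment's
TOKEN = "YOUR_AUTH_TOKEN"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def build_search_request(dataset_id, text_query, limit=40, offset=0,
                         metadata_filters=None, skip_moderation=False):
    """Assemble the request body described in this section (defaults match the docs)."""
    body = {
        "dataset_id": dataset_id,
        "text_query": text_query,
        "limit": limit,
        "offset": offset,
        "skip_moderation": skip_moderation,
    }
    if metadata_filters:
        body["metadata_filters"] = metadata_filters
    return body


def _get_json(url):
    req = urllib.request.Request(url, headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


def search_and_poll(body, interval=5.0):
    """Submit the search, then poll until the Databricks run finishes.

    Endpoint paths and the "state" field are illustrative assumptions.
    """
    data = json.dumps(body).encode()
    req = urllib.request.Request(
        f"{API_BASE}/video-search", data=data,
        headers={**HEADERS, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        run_id = json.loads(resp.read())["run_id"]  # returned immediately

    while True:
        status = _get_json(f"{API_BASE}/runs/{run_id}/status")
        if status.get("state") in {"SUCCESS", "FAILED"}:
            break
        time.sleep(interval)

    return _get_json(f"{API_BASE}/runs/{run_id}/results")
```

Because the search is asynchronous, `run_id` comes back before any scoring happens; the ranked results only exist once the run reaches a terminal state.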

Authentication

Authorization (Bearer token)

Bearer authentication of the form `Bearer <token>`, where `<token>` is your auth token.

Request

This endpoint expects an object.
dataset_id (string, required, format: "uuid")
Dataset to search within.

text_query (string, required, >= 2 characters)
Natural language query encoded into an embedding to score visual content via vector similarity.

limit (integer, optional, 1-100, defaults to 40)
Maximum number of results to return.

offset (integer, optional, >= 0, defaults to 0)
Number of results to skip before returning.

metadata_filters (list of objects, optional)
List of filter objects applied against asset metadata fields before scoring. All filters are combined with AND.

skip_moderation (boolean, optional, defaults to false)
When true, moderation scoring is skipped and moderation_score will be null on all results.
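Putting the fields together, a request body might look like the following. The filter-object keys (`field`, `operator`, `value`) are illustrative assumptions, since this section does not define the filter schema; the filters themselves echo the examples given above:

```json
{
  "dataset_id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
  "text_query": "figure skating routine at the olympics",
  "limit": 40,
  "offset": 0,
  "metadata_filters": [
    { "field": "genre", "operator": "=", "value": "musical" },
    { "field": "impressions", "operator": ">", "value": 1000 }
  ],
  "skip_moderation": false
}
```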

Response

Successful Response
run_id (integer)
Databricks job run ID to use for status polling and result retrieval.

Errors

422
Unprocessable Entity Error