Dynamic Tags (beta)

Dynamic Tags offer a fast, scalable way to generate metadata across large datasets by classifying media using user-defined keywords or natural language phrases—such as objects, activities, locations, and more. Powered by foundation models, Dynamic Tags automatically match descriptive terms to content, eliminating the need for manual concept creation.

This feature enables rapid organization and analysis of media—processing millions of images or hours of video in minutes—making it ideal for metadata tagging and downstream analytics.

Built on Coactive’s multimodal AI, the system maps content to your tag list and assigns a relevance score to each tag. These scores can be queried directly in SQL, allowing you to filter and analyze content at scale with precision.

How It Works

Creating and refining metadata across your entire dataset is easy with Dynamic Tags.

Creating Tags

Step 1: Define Your Tags

  • Log in to your Coactive account.
  • Navigate to the Dynamic Tags tab.
  • Click “Add Group.”
  • Choose the relevant dataset and give your tag group a name (e.g., “Celebrities”).
What’s a Group?

A Group is a logical category that helps organize related dynamic tags.

  • In e-commerce: a group like “Product Categories” might include tags like “Clothing,” “Appliances,” and “Toys.”
  • In media: “Celebrities” could include tags like “Taylor Swift,” “Will Smith,” and “Jennifer Lopez.”
  • In advertising: an “IAB Tags” group might contain tags such as “Food,” “Health,” and “News.”

Step 2: Add Your Tags

  • Enter one keyword or phrase per line in the bulk input box.
  • Each entry becomes both the dynamic tag name and its initial positive prompt. Tag names may contain only alphanumeric characters and spaces.
  • You can edit either later, but the initial preview (in Top Content) will be based on these inputs.

💡 Tip: Use descriptive, meaningful tag names. Avoid placeholders like test v1 or andres2024—these directly influence the model’s initial results.

Step 3: Create the Tag Group

Click “Add Group” to submit your tag list and generate your tag group.

add_dynamic_tag_group.png

Once created, you’ll be directed to the group overview page where you can monitor the status of each tag.

tag_status.png

Tag Statuses Explained
  • Creating: The tag is being initialized.
  • Publishing: Assets are being scored against your tag definition.
  • Active: Tagging is complete, and tags are available for SQL queries and Top Content.

Refining Tags

Once your tags are created, you can fine-tune them to improve precision and relevance.

Refine Using Text Prompts

  • Click on a tag to review the initial results.
  • Add or modify positive or negative text prompts to better guide the model.

Example: Changing the prompt from "soccer" to "soccer goalkeeper" updates the classifier and refreshes the results.

refine_text_prompt_example.png

Refine Using Visual Labels

You can also improve tag accuracy by giving visual feedback. There are four main methods:

1. Review Auto-Generated Visual Labels

These are the model’s “best guesses” of assets that might be positive or negative examples. You can quickly correct any that are inaccurate.

💡 Tip: Reviewing these suggestions helps the model learn what to include or exclude with minimal manual effort.

review_visual_prompts.png

2. Manually Label Assets in Top Content

Head to the Top Content page to manually label results based on what you see.

Quick Labeling Guide
  • 1 click = Mark as positive
  • 2 clicks = Mark as negative
  • 3 clicks = Remove manual label

manual_label_example.png

3. Add from Search

Use the platform’s semantic search to find relevant examples and add them as visual prompts.

  • Click “Add Visual Prompts” in the upper-right corner.

  • Select “Add from Search.”

  • Enter a query to retrieve assets that match your intended concept.

  • Choose assets to label as positive examples.

positive_label_example.png

4. Add from Uncertain Assets

Let the model show you where it’s least confident—these are high-value opportunities to clarify your tag.

  • Click “Add Visual Prompts” in the upper-right corner.
  • Select “Add from Uncertain Assets.”
  • You’ll be shown assets the model is most uncertain about. Labeling these helps refine the decision boundary.

💡 Tip: Providing feedback on uncertain assets is a powerful way to accelerate model improvement with minimal effort.

uncertain_asset_example.png


Querying the Data

Once your dynamic tags are active, you can explore and analyze results directly in the Query tab using SQL.

Each Group Table contains raw tagging data, including a relevance score for every tag, per asset (image or keyframe).

Example: If your group is named "sports", the corresponding table will be named group_sports.

Group Table Structure

Each row in the Group Table represents one keyframe (for video) or one image, tagged with a specific dynamic tag. The table includes:

  • PREVIEW: A clickable thumbnail of the asset. Clicking opens a sidebar with more details.
  • COACTIVE_IMAGE_ID: A unique ID for the asset.
  • TAG_NAME: The name of the dynamic tag applied.
  • SCORE: The asset’s relevance score (0 to 1), indicating how likely the asset is to match the tag.

sql_query_table_example.png
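For example, a minimal query sketch (assuming a group named "sports" that contains a tag named "basketball", as in the example further below) returns the highest-scoring assets for that tag:

-- Return the assets most likely to match the "basketball" tag
-- (assumes a group named "sports" with a tag named "basketball")
SELECT
  coactive_image_id,
  tag_name,
  score
FROM
  group_sports
WHERE
  tag_name = 'basketball'
  AND score > 0.80
ORDER BY
  score DESC
LIMIT 100;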

Joining for Advanced Analytics

To perform more advanced video analytics, you’ll need to join the Group Table with the coactive_table and, optionally, coactive_table_adv. This allows you to:

  • Aggregate keyframes by coactive_video_id
  • Filter by keyframe timestamp (keyframe_time_ms)
  • Enrich results with metadata fields like SeriesName, EpisodeName, or keyframe_index

Example Query

SELECT
  g.score AS basketball_probability,
  t.coactive_video_id,
  g.coactive_image_id,
  v.metadata['ContentID'] AS gmo_id,
  v.metadata['SeriesName'] AS series,
  v.metadata['EpisodeName'] AS episode,
  v.metadata['FreeWheelContentID'] AS external_id
FROM
  group_sports g
JOIN
  coactive_table t ON g.coactive_image_id = t.coactive_image_id
LEFT JOIN
  coactive_table_video v ON t.coactive_video_id = v.coactive_video_id
WHERE
  g.tag_name = 'basketball'
  AND g.score > 0.60
ORDER BY
  g.score DESC;
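
Building on the example above, a sketch of the aggregation case (grouping keyframes by coactive_video_id) might count how many keyframes in each video score above a threshold; it reuses the same group_sports and coactive_table names and the "basketball" tag as assumptions:

-- Count high-scoring "basketball" keyframes per video
-- (reuses the group_sports and coactive_table names from the example query above)
SELECT
  t.coactive_video_id,
  COUNT(*) AS basketball_keyframes,
  MAX(g.score) AS peak_score
FROM
  group_sports g
JOIN
  coactive_table t ON g.coactive_image_id = t.coactive_image_id
WHERE
  g.tag_name = 'basketball'
  AND g.score > 0.60
GROUP BY
  t.coactive_video_id
ORDER BY
  basketball_keyframes DESC;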

Improvements to Dynamic Tags over Previous Versions

1. Positive and Negative Prompts (Text + Visual)

You now have fine-grained control over what each tag should include or exclude using both positive and negative prompts.

Example: In v1, to create a “fantasy” tag that includes only live-action content (not animation), you’d have to manually label animated examples as negative. In v2, simply add “animation” as a negative text prompt—no manual labeling required.

2. Smarter Prompt Logic

We’ve updated the prompt logic from strict AND matching to more flexible OR matching.

Example: A dynamic tag for “sustainability” with positive prompts "sustainability", "forests", and "nature" and negative prompts "farm" and "CGI wildlife" will now match assets with (sustainability OR forests OR nature) AND NOT (farm OR CGI wildlife).

This broader logic improves recall while respecting exclusion criteria.

3. Active Learning: Smarter Feedback Loops

Dynamic Tags v2 gives you multiple, intuitive ways to refine your tag definitions:

  • Review and correct model-generated labels
  • Label assets Coactive recommends for training
  • Update tags directly within the Top Content UI
  • Add visual prompts via search

These feedback mechanisms help improve performance over time with minimal effort.

4. Significantly Faster SQL Queries

In v1, tag scores were calculated on-the-fly with every SQL query, which slowed performance.

In v2, scores are precomputed when a tag is created or updated—so queries return faster, even at scale.

5. Automatic Scoring for New Assets

Previously, dynamic tag scores weren’t generated for new assets unless manually re-triggered. Now, Coactive automatically calculates scores for newly added assets in your datasets, so your tags stay up to date without additional work.