
The Challenge

A chess match can last 4+ hours. As a content creator, you want to turn that into a punchy highlight reel, but manually scrubbing through and identifying key moments is tedious. What if AI could watch the entire match, detect every move, and automatically compile a highlight montage?

What You’ll Build

Turn a long chess match into a punchy highlight reel — automatically! You’ll:
  1. Upload chess video + background music
  2. Index scenes with simple move detection
  3. Extract timestamps automatically using AI
  4. Build a montage with transitions and effects
  5. Output a professional highlight reel
All powered by VideoDB’s Editor SDK — no manual frame-by-frame editing needed.

Setup

Install Dependencies

pip install videodb

Connect to VideoDB

import videodb

# Connect to VideoDB
api_key = "your_api_key"
conn = videodb.connect(api_key=api_key)
coll = conn.get_collection()
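
To avoid hard-coding credentials, you can read the key from an environment variable instead; the variable name VIDEO_DB_API_KEY below is just the convention assumed in this sketch:
import os
import videodb

# Read the API key from the environment instead of hard-coding it
conn = videodb.connect(api_key=os.environ["VIDEO_DB_API_KEY"])
coll = conn.get_collection()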

Implementation

Step 1: Upload Chess Video and Music

from videodb import MediaType

# Upload the chess match video
chess_video = coll.upload(url="https://www.youtube.com/watch?v=dhDe-RcoyAU")

# Upload background music for the montage
bg_music = coll.upload(
    url="https://www.youtube.com/watch?v=S19UcWdOA-I",
    media_type=MediaType.audio
)
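
If your match recording is a local file rather than a YouTube URL, the same upload call can take a local path instead; the file_path parameter below reflects common VideoDB usage, so treat it as an assumption and check the SDK docs if it errors:
# Alternative: upload a local recording (file_path is the assumed parameter name)
chess_video = coll.upload(file_path="chess_match.mp4")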

Step 2: Index Scenes with Move Detection

Create a scene index with a binary prompt to detect moves:
from videodb import SceneExtractionType

moves_index_id = chess_video.index_scenes(
    extraction_type=SceneExtractionType.time_based,
    extraction_config={"time": 8, "frame_count": 5},
    prompt="""Look at this chess scene and focus on the chess board. Your task is to detect when pieces are moved.

Respond with ONLY one of these two keywords:
- "Player Moved" — if a chess piece was moved
- "No Move" — if no move occurred (same position, paused, talking, etc.)

Be strict. Only say "Player Moved" if you clearly see a chess piece moved.""",
    name="Chess_Move_Detection"
)

# Get all scenes with descriptions
moves_scenes = chess_video.get_scene_index(moves_index_id)
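
It helps to eyeball a few indexed scenes before handing them to the LLM. The loop below assumes each entry is a dict with start, end, and description fields, which is the shape scene indexes are commonly returned in:
# Peek at the first few scenes (field names start/end/description are assumed)
for scene in moves_scenes[:3]:
    print(scene.get("start"), scene.get("end"), scene.get("description"))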

Step 3: Extract Move Timestamps Using AI

Feed the scene index to the LLM to extract timestamps:
import json

prompt = f"""Analyze the scene descriptions from this chess video.

Find EVERY scene where the description says "Player Moved".

Return a JSON array containing ONLY the start timestamps (in seconds) of those scenes.

Example output format:
[0, 8, 16, 24, 40, 48]

Rules:
- Return ONLY the JSON array, nothing else
- No descriptions, no explanations, just timestamps

Moves Index : "{moves_scenes}"
"""

response = coll.generate_text(
    prompt=prompt,
    response_type="json",
    model_name="pro"
)

# Parse timestamps from LLM response
timestamps = response.get('output', response)
if isinstance(timestamps, str):
    timestamps = json.loads(timestamps)
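
As an optional sanity check (plain Python, no SDK calls), coerce the parsed values to numbers and sort them so the sampling in the next step stays chronological:
# Optional: normalize to floats and sort so later sampling is chronological
timestamps = sorted(float(t) for t in timestamps)
print(f"Detected {len(timestamps)} moves, first few: {timestamps[:5]}")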

Step 4: Initialize the Timeline and Add an Intro

Initialize the timeline, sample the detected moves evenly, and add a short intro title:
from videodb.editor import (
    Timeline, Track, Clip, VideoAsset, AudioAsset,
    Filter, Transition, TextAsset, Font
)

CLIP_DURATION = 5  # seconds per clip
TARGET_CLIPS = 10  # how many clips we want

# Sample timestamps evenly
total_detected = len(timestamps)
step = max(1, total_detected // TARGET_CLIPS)
sampled_timestamps = timestamps[::step][:TARGET_CLIPS]

# Initialize timeline
timeline = Timeline(conn)
timeline.background = "#000000"

# Create intro text
intro_text = TextAsset(
    text="Let the Match Begin",
    font=Font(family="Clear Sans", size=56, color="#FFFFFF"),
)

intro_clip = Clip(
    asset=intro_text,
    duration=3,
    transition=Transition(in_="fade", out="fade", duration=0.5)
)

intro_track = Track()
intro_track.add_clip(0, intro_clip)
timeline.add_track(intro_track)

Step 5: Add Video Clips with Transitions

# Add video clips
video_track = Track()
timeline_position = 3  # start after the 3-second intro

for i, start_time in enumerate(sampled_timestamps):
    clip = Clip(
        asset=VideoAsset(
            id=chess_video.id,
            start=max(0, start_time - 1),  # begin 1s before the detected move (clamped at 0)
            volume=0  # Muting the original audio
        ),
        duration=CLIP_DURATION,
        filter=Filter.contrast,
        transition=Transition(in_="fade", out="fade", duration=1)
    )

    video_track.add_clip(timeline_position, clip)
    timeline_position += CLIP_DURATION

timeline.add_track(video_track)

total_duration = len(sampled_timestamps) * CLIP_DURATION + 3  # clips plus the 3-second intro

Step 6: Add Background Music

# Add music track
music_clip = Clip(
    asset=AudioAsset(
        id=bg_music.id,
        start=0,
        volume=0.7
    ),
    duration=total_duration
)

audio_track = Track()
audio_track.add_clip(0, music_clip)
timeline.add_track(audio_track)

Step 7: Render Montage

# Generate final montage stream
stream_url = timeline.generate_stream()
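
If you are working in a notebook, you can preview the result right away; play_stream is VideoDB's usual convenience helper for playing a stream URL (if your SDK version does not expose it, open stream_url in a browser instead):
from videodb import play_stream

# Preview the rendered montage
play_stream(stream_url)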

What You Get

A professional highlight reel with:
  • AI-detected key moves
  • Evenly sampled clips for pacing
  • Smooth fade transitions
  • Enhanced contrast for visual impact
  • Background music
  • Professional polish in seconds

Perfect For

  • Tournament Highlights - Post-tournament recap videos
  • Streamer Clips - Highlight reels for streaming communities
  • Educational Analysis - Study videos from master games
  • Social Media - Short-form clips from longer matches
  • Archive Content - Transform a library of matches into reels

The Result

What took hours of manual editing now takes minutes. Your audience gets punchy, professional highlight reels. You get back your time. No more endless scrubbing. Just AI-powered chess analysis and automatic montages.

Explore the Full Notebook

Open the complete implementation with advanced filtering, custom transitions, and batch processing.