Advanced Clip Control: The Composition Layer

The Clip object wraps an Asset and controls how it appears on screen. Think of the Asset as your raw content (the video file, image, or text), and the Clip as all the presentation decisions - where it appears, how big it is, what color effects it has, how it fades in and out. This guide documents all available Clip parameters and their interactions.

Editor Architecture

Asset → Clip → Track → Timeline
Asset: Raw content (what to show)
Clip: Presentation control (how to show it)
Track: Layering container (for stacking multiple clips)
Timeline: Final composition (the output video)
This separation lets you reuse the same asset in multiple clips with different visual treatments - same video, but one clip shows it full-screen while another shows it as a small picture-in-picture overlay.
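As a quick sketch of that reuse, the snippet below builds two clips from one video asset - a full-screen view and a small picture-in-picture overlay. It assumes Clip, VideoAsset, and Position are importable from videodb.editor (the same module the Filter examples later in this guide use) and that video is a VideoDB video you have already uploaded.

from videodb.editor import Clip, VideoAsset, Position  # import path assumed; Filter is imported from here later in this guide

# One asset, two presentations.
main_view = Clip(
    asset=VideoAsset(id=video.id),
    duration=10                      # full-screen for 10 seconds
)

pip_view = Clip(
    asset=VideoAsset(id=video.id),   # same content as main_view
    duration=10,
    scale=0.3,                       # rendered at 30% size
    position=Position.top_left       # pinned to a corner as picture-in-picture
)

Each clip would then sit on its own track, with the overlay's track layered above the full-screen one.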

Clip Parameters

Core Parameters

Parameter | Type  | Description
asset     | Asset | The content to display (VideoAsset, ImageAsset, AudioAsset, TextAsset, CaptionAsset)
duration  | float | Clip length in seconds

Geometry Parameters

Parameter | Type     | Description
fit       | Fit      | Scaling behavior: Fit.crop, Fit.contain, Fit.cover, Fit.none
position  | Position | Anchor point (9 zones: top_left, center, bottom_right, etc.)
offset    | Offset   | Fine-tune position with x/y coordinates
scale     | float    | Size multiplier (0.0 to 10.0, default: 1.0)

Visual Effect Parameters

Parameter  | Type       | Description
filter     | Filter     | Color treatment (greyscale, blur, contrast, etc.)
opacity    | float      | Transparency (0.0 = invisible, 1.0 = opaque)
transition | Transition | Fade in/out effects

Scale Parameter

The scale parameter is a size multiplier applied after fit mode. First, the fit mode handles the aspect ratio and scales your content to match the timeline, then scale multiplies that result. This is useful for creating picture-in-picture effects (scale=0.3 for a tiny corner video) or zoom effects (scale=1.5 to enlarge).
clip = Clip(
    asset=VideoAsset(id=video.id),
    duration=10,
    scale=0.5,  # 50% of original size
    position=Position.top_left
)
Range: 0.0 to 10.0 (default: 1.0)
Example with scale = 0.5 and Position.top_left

Opacity Parameter

The opacity parameter controls transparency, letting you create semi-transparent overlays, subtle watermarks, or fade effects. At 1.0 your clip is fully solid, at 0.5 it’s half-transparent, and at 0.0 it’s completely invisible.
clip = Clip(
    asset=VideoAsset(id=video.id),
    duration=10,
    opacity=0.5  # 50% transparent
)
Range: 0.0 (invisible) to 1.0 (opaque)
Example with opacity = 0.3
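
A common use of low opacity is a watermark: an image scaled down and pinned to a corner. The sketch below assumes ImageAsset accepts an id the same way VideoAsset does, and that logo is an image you have already uploaded to VideoDB.

from videodb.editor import Clip, ImageAsset, Position  # import path assumed to match the Filter import below

# Hypothetical watermark overlay: small, mostly transparent, tucked into the bottom-right.
watermark = Clip(
    asset=ImageAsset(id=logo.id),
    duration=10,                     # keep it visible for the whole clip
    scale=0.2,                       # shrink to 20% of the frame
    opacity=0.3,                     # subtle enough not to distract
    position=Position.bottom_right
)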

Filter Parameter

The filter parameter applies color and visual treatments to your clip. These are global effects that change the entire clip’s appearance - you can make it black and white, blur it for backgrounds, adjust contrast, or create stylistic looks. Each clip can have one filter applied.
Filter.greyscale # Remove color (black and white)
Filter.blur # Blur the video
Filter.contrast # Increase contrast
Filter.boost # Boost contrast and saturation
Filter.muted # Reduce saturation and contrast
Filter.darken # Darken the scene
Filter.lighten # Lighten the scene
Filter.negative # Invert colors
from videodb.editor import Filter

clip = Clip(
    asset=VideoAsset(id=video.id),
    duration=10,
    filter=Filter.greyscale
)
Example with Filter.greyscale
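
One popular pattern built on Filter.blur is a blurred backdrop: the same video appears twice, once blurred and cropped to fill the frame and once untouched on top. The sketch below only builds the two clips; stacking them is a track/timeline concern covered elsewhere. It assumes Clip, VideoAsset, and Fit come from videodb.editor like Filter does.

from videodb.editor import Clip, VideoAsset, Filter, Fit  # import path assumed from the Filter import above

# Background layer: blurred copy that fills the frame.
background = Clip(
    asset=VideoAsset(id=video.id),
    duration=10,
    fit=Fit.crop,         # fill the frame, cropping any overflow
    filter=Filter.blur    # soften it so the foreground stands out
)

# Foreground layer: the untouched video, shown whole on top.
foreground = Clip(
    asset=VideoAsset(id=video.id),
    duration=10,
    fit=Fit.contain       # show the entire frame without cropping
)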

Transition Parameter

The transition parameter controls fade in/out effects, making your clips appear and disappear smoothly instead of cutting abruptly. The fade happens over the first and last N seconds of your clip - so a 2-second fade means the first 2 seconds gradually appear, and the last 2 seconds gradually disappear.

clip = Clip(
    asset=VideoAsset(id=video.id),
    duration=10,
    transition=Transition(
        in_="fade",   # Fade in effect (note the underscore)
        out="fade",   # Fade out effect
        duration=2    # Transition duration in seconds
    )
)
Parameters:
in_: Transition type for entry (use in_ with underscore because in is a Python keyword)
out: Transition type for exit
duration: Length of transition in seconds

Complete Example

Here’s a clip using multiple parameters:

clip = Clip(
    asset=VideoAsset(id=video.id),
    duration=10,
    position=Position.bottom_left,
    scale=0.7,
    opacity=0.3,
    filter=Filter.greyscale,
    transition=Transition(in_="fade", out="fade", duration=3),
    fit=None
)

Parameter Reference

Parameter  | Type       | Default  | Description
asset      | Asset      | Required | Content to display
duration   | float      | Required | Clip length in seconds
fit        | Fit        | Fit.crop | Scaling mode
position   | Position   | None     | Anchor point (9 zones)
offset     | Offset     | None     | Fine position adjustment
scale      | float      | 1.0      | Size multiplier (0.0-10.0)
opacity    | float      | 1.0      | Transparency (0.0-1.0)
filter     | Filter     | None     | Color treatment
transition | Transition | None     | Fade effects

Next Step:

For hands-on experimentation with all Clip parameters, see:
