Elevating Trailers with Automated Narration

Introduction

Narration is the heartbeat of trailers, injecting excitement and intrigue into every frame ▶️
With VideoDB, OpenAI, and ElevenLabs, adding narration to trailers becomes a creative process.
This tutorial will guide you through seamlessly integrating narration into trailers using these powerful tools.

Here’s an example of weaving a thrilling storyline from a reel of unrelated, but valuable cinematic shots:

Setup

📦 Installing packages

%pip install openai
%pip install videodb

🔑 API Keys

Before proceeding, ensure you have access to OpenAI, ElevenLabs, and VideoDB API keys. If not, sign up for API access on the respective platforms.
Get your VideoDB API key from the VideoDB console. (Free for first 50 uploads, no credit card required) 🎉
import os

os.environ["OPENAI_API_KEY"] = ""
os.environ["ELEVEN_LABS_API_KEY"] = ""
os.environ["VIDEO_DB_API_KEY"] = ""

🎙️ ElevenLabs Voice ID

You will also need the ElevenLabs Voice ID of the voice you want to use.
For this demo, we use a voice that resembles Sam Elliott (see Step 5). ElevenLabs has a large variety of voices to choose from (browse them in the ElevenLabs voice library). Once finalized, copy the Voice ID from ElevenLabs and set it here.
voiceover_artist_id = "VOICEOVER_ARTIST_ID"
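If you'd rather browse programmatically, here is a minimal sketch (an illustrative addition, assuming the ElevenLabs GET /v1/voices endpoint and the ELEVEN_LABS_API_KEY set above) that lists the voices available to your account along with their Voice IDs:
import os
import requests

# List the voices available to your ElevenLabs account and print their Voice IDs
res = requests.get(
    "https://api.elevenlabs.io/v1/voices",
    headers={"xi-api-key": os.environ.get("ELEVEN_LABS_API_KEY")},
)
for voice in res.json().get("voices", []):
    print(voice["voice_id"], "-", voice["name"])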

Tutorial Walkthrough


📋 Step 1: Connect to VideoDB

Make sure you have the API key in the environment.
from videodb import connect

# Connect to VideoDB using your API key
conn = connect()

🎬 Step 2: Upload the Trailer

Upload the trailer video to VideoDB for further processing. This creates the base video asset that we shall use later in this tutorial.
video = conn.upload(url='https://www.youtube.com/watch?v=WQmGwmc-XUY')

🔍 Step 3: Analyze Scenes and Generate Scene Descriptions

Start by analyzing the scenes within the trailer using VideoDB's scene indexing capabilities. This will provide context for generating the narration script.
video.index_scenes()

Let's view the description of the first scene from the video:
scenes = video.get_scenes()
print(f"{scenes[0]['start']} - {scenes[0]['end']}")
print(scenes[0]["response"])
Output:
0 - 0.7090416666666666
The image captures a fiery blaze, a dynamic dance of flames in vivid shades of orange, gold, and red. Light flickers intensely, radiance expanding, contracting with the fire's rhythm. No specific source is visible; the fire dominates entirely, filling the frame with energetic movement. The luminosity suggests a fierce heat, powerful enough to demand respect and caution. Each tongue of flame is seemingly alive, almost writhing against a darker, indistinct background. This could be a natural fire or a controlled blaze—there’s no context to indicate its origin. Amidst the searing heat, the flames create a mesmeric, albeit destructive, spectacle.

🔊 Step 4: Generate Narration Script with LLM

Here, we use OpenAI’s GPT to build context around the scene descriptions above, and generate a fitting narration script for the visuals.
# Generate narration script with ChatGPT
import openai

client = openai.OpenAI()

script_prompt = "Craft a dynamic narration script for this trailer, incorporating scene descriptions to enhance storytelling. Ensure that the narration aligns seamlessly with the timestamps provided in the scene index. Don't include any annotations in output script"

full_prompt = script_prompt + "\n\n"
for scene in scenes:
    full_prompt += f"- {scene}\n"

openai_res = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "system", "content": full_prompt}],
)
voiceover_script = openai_res.choices[0].message.content

# If you don't have an ElevenLabs paid plan, keep the voiceover script
# within the free-tier character limit by truncating it:
# voiceover_script = voiceover_script[:2500]

You can refine the narration script prompt to keep the narration synchronized with the timestamps in the scene index, so each line of the script lands on the scene it describes.
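For example, here is one possible refinement (an illustrative sketch, not the notebook's exact prompt) that makes the pacing constraint explicit for every scene:
# A refined prompt that spells out per-scene pacing (illustrative; tweak to taste)
script_prompt = (
    "Craft a dynamic narration script for this trailer. "
    "For each scene listed below, write narration that can be spoken comfortably "
    "within that scene's start and end timestamps, so the voiceover stays in sync "
    "with the visuals. Return only the narration text, with no annotations or timestamps."
)

full_prompt = script_prompt + "\n\n"
for scene in scenes:
    full_prompt += f"- {scene['start']:.1f}s to {scene['end']:.1f}s: {scene['response']}\n"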

🎙️ Step 5: Generate Narration Audio with elevenlabs.io

Note: for this step, you will need a voice ID that fits the vibe of your trailer. In our example, we have used a voice that resembles the vocal quality and style of Sam Elliott. You can find a voice suitable for your trailer in the ElevenLabs voice library.
import requests

# Call ElevenLabs API to generate voiceover
url = f"https://api.elevenlabs.io/v1/text-to-speech/{voiceover_artist_id}"
headers = {
    "xi-api-key": os.environ.get("ELEVEN_LABS_API_KEY"),
    "Content-Type": "application/json",
}
payload = {
    "model_id": "eleven_monolingual_v1",
    "text": voiceover_script,
    "voice_settings": {
        "stability": 0.5,
        "similarity_boost": 0.5,
    },
}
elevenlabs_res = requests.post(url, json=payload, headers=headers)
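# Optional robustness check (not in the original snippet):
# stop here if the ElevenLabs request did not succeed.
elevenlabs_res.raise_for_status()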

# Save the audio file
audio_file = "audio.mp3"
CHUNK_SIZE = 1024
with open(audio_file, 'wb') as f:
    for chunk in elevenlabs_res.iter_content(chunk_size=CHUNK_SIZE):
        if chunk:
            f.write(chunk)


🎬 Step 6: Upload the Voiceover to VideoDB

Upload the audio file (voiceover) to VideoDB
audio = conn.upload(file_path=audio_file)

🎥 Step 7: Add Narration to Trailer with VideoDB

Combine the narration audio with the trailer using VideoDB's timeline feature.
from videodb.timeline import Timeline
from videodb.asset import VideoAsset, AudioAsset

# Create a timeline object
timeline = Timeline(conn)

# Video asset covering the full trailer
video_asset = VideoAsset(asset_id=video.id, start=0)

# Audio assets: start/end pick out segments of the narration audio file
audio_asset1 = AudioAsset(asset_id=audio.id, start=5, end=25, disable_other_tracks=False)
audio_asset2 = AudioAsset(asset_id=audio.id, start=35, end=49, disable_other_tracks=False)

# Add the trailer inline, then overlay the narration segments
# at the given positions (in seconds) on the video timeline
timeline.add_inline(asset=video_asset)
timeline.add_overlay(start=4, asset=audio_asset1)
timeline.add_overlay(start=35, asset=audio_asset2)

📺 Step 8: Review and Share

Preview the trailer with the integrated narration to ensure it aligns with your vision. Once satisfied, share the trailer with others to experience the enhanced storytelling.
from videodb import play_stream

stream_url = timeline.generate_stream()
play_stream(stream_url)

🎬 Bonus

Add Movie Poster at the End

To add a movie poster at the end of the trailer using VideoDB, follow these steps (you'll find the code snippet below):
We first import the necessary module for handling Image Assets.
Then, we upload the movie poster image to VideoDB.
Next, we create an Image Asset with the uploaded poster image, specifying its dimensions and duration.
Finally, we add the movie poster as an overlay near the end of the video timeline and generate the final stream (the dimensions, duration, and overlay position in the snippet below are illustrative; adjust them to your trailer).
from videodb.asset import ImageAsset
from videodb import MediaType, play_stream

image = conn.upload(url="https://raw.githubusercontent.com/video-db/videodb-cookbook/main/images/chaise_trailer.png", media_type=MediaType.image)

image_asset = ImageAsset(
    asset_id=image.id,
    width=1280, height=720, duration=2,  # illustrative values; adjust to your trailer
)
# Overlay the poster near the end of the trailer, then stream the result
timeline.add_overlay(start=49, asset=image_asset)
stream_url = timeline.generate_stream()
play_stream(stream_url)