
Introduction

Imagine watching a captivating keynote from your favorite conference and being greeted by a stream personalized just for you. This tutorial demonstrates how to create dynamic video streams by integrating data from custom databases and external APIs. We’ll use a practical example: a recording of a Config 2023 keynote session. Using VideoDB, we’ll show how a company like Figma could personalize the viewing experience for its audience, making it richer and more engaging. We’ll showcase how to:
  • Fetch data from a random user API to represent a hypothetical viewer.
  • Integrate this data into a custom VideoDB timeline.
  • Create a personalized stream that dynamically displays relevant information alongside the keynote video.
This tutorial is your guide to unlocking the potential of dynamic video streams and transforming your video content with personalized experiences.

Setup

Installing packages

!pip install videodb

API Keys

Before proceeding, ensure you have access to VideoDB. Get your API key from the VideoDB Console. (Free for the first 50 uploads; no credit card required.)

Steps

Step 1: Connect to VideoDB

Begin by establishing a connection to VideoDB using your API key:
import videodb

# Set your API key
api_key = "your_api_key"

# Connect to VideoDB
conn = videodb.connect(api_key=api_key)
coll = conn.get_collection()
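
If you’d rather not hardcode the key, you can read it from an environment variable instead. A minimal sketch; the variable name VIDEODB_API_KEY is just a convention chosen for this example:

import os
import videodb

# Read the API key from the environment instead of hardcoding it
# (VIDEODB_API_KEY is an arbitrary name chosen for this sketch)
api_key = os.environ.get("VIDEODB_API_KEY")
if not api_key:
    raise RuntimeError("Set the VIDEODB_API_KEY environment variable first")

conn = videodb.connect(api_key=api_key)
coll = conn.get_collection()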

🗳️ Step 2: Upload Base Video

Upload the video and play it to confirm it loaded correctly. We’ll be using this video throughout the tutorial.
# Upload and play a video from a URL
video = coll.upload(url="https://www.youtube.com/watch?v=Nmv8XdFiej0")
video.play()

# Alternatively, get a video from your VideoDB collection
# video = coll.get_video('VIDEO_ID_HERE')
# video.play()

Step 3: Fetch Data from a Random User API

This code fetches a random user’s data (name and picture) from the “randomuser.me” API. You can adapt this to retrieve data from any relevant API (e.g., product data, news articles) for your use case.
import requests

# Request a single random user from the randomuser.me API
response = requests.get('https://randomuser.me/api/?results=1&nat=us,ca,gb,au')
response.raise_for_status()
data = response.json()

# Extract the user's first name and profile picture URL
first_name = data['results'][0]['name']['first']
medium_picture = data['results'][0]['picture']['medium']
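
The same pattern works for any JSON API. For instance, a sketch against a hypothetical product feed; the endpoint and field names below are placeholders, not a real service:

# Hypothetical example: the endpoint and field names are placeholders
response = requests.get('https://api.example.com/products/featured')
response.raise_for_status()
product = response.json()

product_name = product['name']
product_image_url = product['image_url']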

Step 4: Upload the image to VideoDB

  • First, we download the image to local storage.
  • Then, we upload it to VideoDB using the local path.
import requests

# 1. Download the image locally
local_path = "my_local_image.jpg"

response = requests.get(medium_picture)
if response.status_code == 200:
    with open(local_path, 'wb') as f:
        f.write(response.content)
    print(f"Image downloaded successfully to: {local_path}")
else:
    print(f"Failed to download image. Status code: {response.status_code}")

# 2. Upload the image using the local file path
from videodb import MediaType

image = coll.upload(file_path=local_path, media_type=MediaType.image)

print(f"Image uploaded to VideoDB: {image.id}")

Step 5: Create VideoDB Assets

We create VideoDB assets for the base video, the user’s name (text), and their picture (image) using the new Editor SDK. The Font and Background objects allow us to customize the appearance of text elements.
from videodb.editor import (
    Timeline, Track, Clip,
    VideoAsset, TextAsset, ImageAsset,
    Font, Background, Alignment, HorizontalAlignment, VerticalAlignment,
    Position, Offset, Fit)

# 1. Video Asset (Base background)
video_asset = VideoAsset(id=video.id, start=0)

# 2. Name Asset (Top)
name_asset = TextAsset(
    text=f'Hi {first_name}!',
    font=Font(family="Montserrat", size=60, color="#000000"),
    background=Background(color="#D2C11D", border_width=20, opacity=1.0),
    alignment=Alignment(
        horizontal=HorizontalAlignment.center,
        vertical=VerticalAlignment.top,
    ),
)

# 3. Message Asset (Middle)
cmon_asset = TextAsset(
    text="Here are your favorite moments",
    font=Font(family="Montserrat", size=60, color="#D2C11D"),
    background=Background(color="#000000", border_width=20, opacity=1.0),
    alignment=Alignment(
        horizontal=HorizontalAlignment.center,
        vertical=VerticalAlignment.center,
    ),
)

# 4. Image Asset (Bottom)
image_asset = ImageAsset(id=image.id)

↔️ Step 6: Create the VideoDB Timeline

Using the Track and Clip pattern, we arrange and layer assets to create a dynamic video stream. The main video goes on one track, while overlays (name, message, image) go on separate tracks with their start times.
# Create the timeline
timeline = Timeline(conn)

# --- Track 1: Main Video ---
video_track = Track()
video_clip = Clip(asset=video_asset, duration=float(video.length))
video_track.add_clip(0, video_clip)
timeline.add_track(video_track)

# --- Track 2: Overlays ---
overlay_track = Track()

# 1. Add Name Overlay (Top)
name_clip = Clip(
    asset=name_asset,
    duration=4,
    position=Position.top,
    offset=Offset(y=0.15))
overlay_track.add_clip(5, name_clip)

# 2. Add Message Overlay (Center)
cmon_clip = Clip(
    asset=cmon_asset,
    duration=4,
    position=Position.center,)
overlay_track.add_clip(5, cmon_clip)

# 3. Add Image Overlay (Bottom)
image_clip = Clip(
    asset=image_asset,
    duration=4,
    position=Position.bottom,
    scale=2,
    fit=Fit.none,
    offset=Offset(y=-0.15))
overlay_track.add_clip(5, image_clip)

timeline.add_track(overlay_track)
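
All three overlays above share the same start time (second 5), so they appear together. Since the first argument to add_clip is the clip's start time in seconds, staggering them only takes different values. A sketch of an alternative arrangement; the start times are arbitrary, and reusing the clip objects on a fresh track is an assumption of this sketch:

# Alternative (sketch): stagger the overlays so they appear in sequence
staggered_track = Track()
staggered_track.add_clip(5, name_clip)    # name appears at 5s
staggered_track.add_clip(10, cmon_clip)   # message appears at 10s
staggered_track.add_clip(15, image_clip)  # picture appears at 15s
# timeline.add_track(staggered_track)     # use in place of overlay_track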

▶️ Step 7: Generate and Play the Personalized Stream

The generate_stream() method creates a streamable URL for your personalized video stream. You can then use play_stream() to preview it in your browser.
from videodb import play_stream

stream_url = timeline.generate_stream()
print(stream_url)
play_stream(stream_url)

Conclusion

This tutorial showcased how to create personalized video streams using VideoDB. By integrating data from external APIs and custom databases, you can enhance your video content, personalize user experiences, and unlock new possibilities for engagement. Explore various data sources, experiment with different integrations, and customize your video streams to suit your specific needs.

Explore Full Notebook

Open the complete implementation in Google Colab with all code examples.