Real‑Time Video Pipeline

Quick‑Start

Converting live video into actionable insights takes four steps. The interface lets you specify prompts tailored to your industry, defining exactly which information and events to capture from your video feeds. The following code snippet walks through each step.
# Assumes you already have a connection and collection:
#   conn = videodb.connect(api_key="...")
#   coll = conn.get_collection()

# 1 - Connect: ingest a variety of real-time streams (cameras, live feeds, meetings, etc.)
rtstream = coll.connect_rtstream(name="Mumbai CCTV", rtsp_url=RTSP_URL)

# 2 - Index: turn the live video into information using just a prompt
sc_index = rtstream.index_scenes(
    prompt="Describe the pedestrian crossing camera installed in Mumbai.",
    name="traffic_monitor",
)

# 3 - Describe events: create an event with just a prompt
event_id = conn.create_event(event_prompt="Detect pedestrians", label="human_detection")

# 4 - Action: attach the event to the index and receive a webhook call whenever the event is detected
pedestrian_alert_id = sc_index.create_alert(event_id, callback_url="https://your.callback.for/receiving_alerts")
All video feeds are securely stored in VideoDB and can be accessed anytime.
Indexing charges remain consistent, whether you're processing video files or live feeds.
All prompts are processed using Pro-tier LLMs for optimal quality.




Detailed Guide

This guide expands the earlier quick start examples with a deeper look at the RTStream, RTStreamSceneIndex, and Event APIs.
It also summarises how you can tune scene extraction and frame sampling so that your real‑time pipelines stay both cost‑efficient and semantically rich.

1. Connecting a live stream

# assume you already have `conn = videodb.connect(api_key="...")`
coll = conn.get_collection()

rtstream = coll.connect_rtstream(
    name="Mumbai CCTV",
    rtsp_url="rtsp://user:pass@1.1.1.1:554/mystream",
)
The returned RTStream object represents the persistent ingest pipeline from your camera or encoder.
Core attributes
attribute | description
id | Unique identifier for the live stream
name | Friendly label you supplied at creation
collection_id | The parent collection (useful for multi‑tenant setups)
sample_rate | Defaults to 1 fps; higher sampling rates are available on request
status | connected, stopped, etc.
Key RTStream methods
method | purpose
start() / stop() | Toggle ingest on the server side
generate_stream(start, end) | Get an HLS/MP4 URL for an arbitrary clip window (in seconds)
index_scenes(...) | Launch on‑the‑fly visual indexing
list_scene_indexes() / get_scene_index(id) | Inspect existing scene indices linked to this stream
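As a rough usage sketch, the snippet below exercises these methods on the rtstream object created above. The start/end convention for generate_stream() is an assumption here (Unix timestamps in seconds); confirm the expected format for your SDK version before relying on it.

import time

# Check the stream's current state (attributes per the table above).
print(rtstream.id, rtstream.name, rtstream.status)

# Pause and resume ingest on the server side.
rtstream.stop()
rtstream.start()

# Request a playable URL for a 60-second clip ending one minute ago.
# NOTE: expressing start/end as Unix timestamps in seconds is an assumption.
now = int(time.time())
clip_url = rtstream.generate_stream(start=now - 120, end=now - 60)
print(clip_url)

# Inspect scene indices already attached to this stream.
for idx in rtstream.list_scene_indexes():
    print(idx)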

2. Indexing scenes in real time

Currently, real-time indexing supports only time-based scene extraction. Since indexing parameters significantly influence output quality, it's recommended to experiment and identify the optimal configuration before locking your pipeline for production use. Refer to the example notebooks for domain-specific configurations using extraction_config.
Note: By default, streams are ingested at 1 frame per second (fps). Ensure your time and frame_count parameters align accordingly.
from videodb import SceneExtractionType  # enum of supported extraction types

scene_index = rtstream.index_scenes(
    extraction_type=SceneExtractionType.time_based,
    extraction_config={"time": 2, "frame_count": 1},
    prompt="Describe the scene and highlight congestion",
    name="traffic_monitor",
)
Full parameter reference
parameter | default | description
extraction_type | SceneExtractionType.time_based | Selects the segmentation algorithm; only time‑based is supported for now
extraction_config | {"time": 2, "frame_count": 5} | Algorithm‑specific knobs (see below)
prompt | "Describe the scene" | Text sent to the vision‑LLM for every scene
name | None | Label for this index; handy when you maintain multiple indices
Frame‑sampling knobs
extraction_type | key | meaning
time_based | time | Seconds per scene chunk (e.g. 10 s)
time_based | frame_count | Fixed number of frames sampled per chunk
time_based | select_frames | Pick specific positions: ["first", "middle", "last"]
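As an illustrative sketch (not taken from the reference above), here is how these knobs combine for a coarser index: 10-second chunks sampling the first, middle, and last frame of each chunk. The prompt and index name are placeholders, and the exact interplay between frame_count and select_frames should be verified against your own footage.

coarse_index = rtstream.index_scenes(
    extraction_type=SceneExtractionType.time_based,
    extraction_config={"time": 10, "select_frames": ["first", "middle", "last"]},
    prompt="Describe the scene and flag anything unusual",  # placeholder prompt
    name="coarse_monitor",  # placeholder index name
)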

3. Working with RTStreamSceneIndex

The object returned by index_scenes() exposes real‑time analytics utilities.
Useful methods
method | what it does
get_scenes(start=None, end=None, page=1, page_size=100) | Paginate through raw scene records (timestamp & description)
start() / stop() | Enable/disable the index without deleting it
create_alert(event_id, callback_url) | Subscribe an alert to a pre‑defined Event
list_alerts() | Enumerate alert subscriptions
enable_alert(alert_id) / disable_alert(alert_id) | Toggle alert delivery
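For example, a quick look at recent output via get_scenes() goes roughly like this. The exact shape of each returned record (object vs. dict, field names) isn't specified in this guide, so the sketch just prints whatever comes back.

# Fetch the first page of recent scene records for this index.
scenes = scene_index.get_scenes(page=1, page_size=10)
for scene in scenes:
    # Each record carries a timestamp and the model's description of that scene.
    print(scene)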

4. Defining reusable Events

conn.create_event() registers a server‑side rule that can be reused across multiple streams or indices.
event_id = conn.create_event(
    event_prompt="Detect pedestrians crossing the zebra",
    label="human_detection",
)
Event fields
field | required | notes
event_prompt | ✔︎ | Natural‑language condition evaluated by the vision model
label | ✔︎ | Slug used in alert payloads & dashboards
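Because events are server‑side and reusable, one event can drive alerts on several indices. A minimal sketch, assuming traffic_idx and lobby_idx are RTStreamSceneIndex objects you created earlier with index_scenes() on two different streams:

ped_event_id = conn.create_event(
    event_prompt="Detect pedestrians crossing the zebra",
    label="human_detection",
)

# Attach the same event to two independent scene indices.
for idx in (traffic_idx, lobby_idx):
    alert_id = idx.create_alert(ped_event_id, callback_url="https://api.example.com/webhooks/ped")
    print("alert created:", alert_id)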

5. End‑to‑end sample

The sample below ingests frames → indexes per‑scene descriptions → evaluates the pedestrian rule → fires a webhook in <1 s.
rtstream = coll.connect_rtstream(name="Mumbai CCTV", rtsp_url=RTSP_URL)

scene_idx = rtstream.index_scenes(prompt="Summarise traffic")

# Generic pedestrian detector, reusable across streams and indices
ped_event_id = conn.create_event(event_prompt="Detect pedestrians", label="pedestrian")

alert_id = scene_idx.create_alert(ped_event_id, callback_url="https://api.example.com/webhooks/ped")
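On the receiving end you need an HTTP endpoint at the callback_url. A minimal sketch using Flask (the framework choice and port are arbitrary, and the alert payload schema isn't documented here, so the handler just logs the JSON body):

from flask import Flask, request

app = Flask(__name__)

@app.route("/webhooks/ped", methods=["POST"])
def pedestrian_alert():
    payload = request.get_json(silent=True)  # payload schema not specified in this guide
    print("alert received:", payload)
    return "", 200

if __name__ == "__main__":
    app.run(port=8000)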



6. Demos & Notebooks

Explore these demos to see what's possible:
For more detailed, domain-specific recipes, clone or execute our published Colab notebooks. Select a use case below to get started:

