Capture sessions can optionally persist media for later search and playback. Control storage per-channel and access exported assets.
Desktop capture currently supports macOS only. Windows support is coming soon.
## Quick Example
```python
# After the capture_session.exported webhook fires
cap = conn.get_capture_session("cap-xxx")

# Get the muxed video
video_id = cap.exported_video_id
video = coll.get_video(video_id)

# Search the captured content
video.index_spoken_words()
results = video.search("budget discussion")
for shot in results.shots:
    print(f"{shot.start}s: {shot.text}")
    shot.play()
```
## Storage Control

### Per-Channel Storage
Enable or disable storage for each channel:
```python
# In the desktop client
await client.start_session(
    capture_session_id=cap_id,
    channels=[
        {"name": "mic:default", "store": True},            # Will persist
        {"name": "display:1", "store": True},              # Will persist
        {"name": "system_audio:default", "store": False},  # Ephemeral
    ]
)
```
| Setting | Behavior |
| --- | --- |
| `store: true` | Media persisted, available for search and playback |
| `store: false` | Ephemeral: real-time processing only, no persistence |
Export only runs if at least one channel has `store: true`.
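As a pre-flight sanity check before starting a session, a tiny helper can confirm the channel list will actually produce an export. This is a sketch, not part of the SDK; `will_export` is a name invented here:

```python
# Hypothetical pre-flight check (not an SDK function): export is skipped
# unless at least one channel is configured with store=True.
def will_export(channels: list[dict]) -> bool:
    return any(ch.get("store", False) for ch in channels)

channels = [
    {"name": "mic:default", "store": True},
    {"name": "system_audio:default", "store": False},
]
print(will_export(channels))  # True: at least one channel is stored
```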
## What Gets Exported

### Muxed Video
The default “playable recording” containing:

- **Video:** the primary display (set via `primary_video_channel_id`)
- **Audio:** all recorded audio channels mixed together
Use for:

- Playback and sharing
- Downstream indexing and search
- Simple “trim and publish” workflows
### Raw Channel Assets
Individual assets for each stored channel:
| Channel | Asset Type |
| --- | --- |
| `display:1` | Raw video |
| `mic:default` | Raw audio |
| `system_audio:default` | Raw audio |
Use for:

- Separate audio stems (mic vs. system audio)
- Multi-track editing
- Custom muxing strategies
- Picture-in-picture composites
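For the picture-in-picture case, the overlay geometry is simple arithmetic. A minimal sketch under stated assumptions (the `pip_rect` helper and its defaults are invented here, not SDK API) that computes a bottom-right inset rectangle for a given output resolution:

```python
# Hypothetical helper: compute a bottom-right picture-in-picture inset.
# Returns (x, y, width, height) of the overlay within the output frame.
def pip_rect(width: int, height: int, scale: float = 0.25, margin: int = 16):
    pip_w = int(width * scale)
    pip_h = int(height * scale)
    return (width - pip_w - margin, height - pip_h - margin, pip_w, pip_h)

# For a 1280x720 output frame, a quarter-size inset:
print(pip_rect(1280, 720))  # (944, 524, 320, 180)
```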
## Accessing Exports

### Via Webhook
The `capture_session.exported` webhook includes the muxed video ID:
```json
{
  "event": "capture_session.exported",
  "capture_session_id": "cap-xxx",
  "status": "exported",
  "data": {
    "exported_video_id": "m-xxx"
  }
}
```
### Via RTStream
Each RTStream has an `exported_asset_id` after export:
```python
def on_exported(payload: dict):
    cap = conn.get_capture_session(payload["capture_session_id"])

    # Muxed video
    video_id = payload["data"]["exported_video_id"]

    # Raw channel assets
    mics = cap.get_rtstream("mic")
    if mics:
        mic_asset_id = mics[0].exported_asset_id

    displays = cap.get_rtstream("display")
    if displays:
        display_asset_id = displays[0].exported_asset_id
```
## Editing with Raw Assets
Use raw assets when you need control over individual tracks.
### Display Video + Mic Audio Only
```python
from videodb.editor import Timeline, Track, Clip, VideoAsset, AudioAsset

cap = conn.get_capture_session("cap-xxx")
display_asset_id = cap.get_rtstream("display")[0].exported_asset_id
mic_asset_id = cap.get_rtstream("mic")[0].exported_asset_id

timeline = Timeline(conn)
timeline.resolution = "1280x720"

# Video track
video_track = Track()
video_track.add_clip(0, Clip(asset=VideoAsset(id=display_asset_id), duration=60))
timeline.add_track(video_track)

# Audio track (mic only)
audio_track = Track()
audio_track.add_clip(0, Clip(asset=AudioAsset(id=mic_asset_id, volume=1.0), duration=60))
timeline.add_track(audio_track)

stream_url = timeline.generate_stream()
```
## Semantic Search
After export, captured content is searchable:
```python
video = coll.get_video(exported_video_id)

# Index for search
video.index_spoken_words()

# Search
results = video.search("action items from the meeting")
for shot in results.shots:
    print(f"{shot.start}s: {shot.text}")
    shot.play()
```
## Which Asset to Use?
| Use Case | Asset |
| --- | --- |
| Quick playback, sharing | Muxed video (`exported_video_id`) |
| Separate mic vs. system audio | Raw channel assets |
| Multi-track editing | Raw channel assets |
| Custom audio mix | Raw channel assets |
| Picture-in-picture | Raw channel assets |
## Next Steps
- **Privacy Controls**: consent and redaction patterns
- **Capture Overview**: architecture and quickstart