
Call.md on GitHub

Complete source code, installation guide, and configuration

What Is It?

Your AI meeting assistant, live during every call. Call.md turns meetings into live agent loops: it records locally, transcribes in real time (separating you from them), and surfaces live intelligence during the call. When the meeting ends, it generates summaries with action items and can push structured data to your workflow automation platforms.
The Power: During calls, AI generates contextual suggestions, monitors conversation balance, and automatically triggers your MCP tools when information needs arise — all in real-time.

Why You Need This

During the Meeting

Get real-time assistance while you talk:
  • AI-generated suggestions (things to say, questions to ask)
  • Conversation metrics (talk ratio, pace, questions asked)
  • MCP tools triggered automatically from conversation context
  • Coaching nudges when conversation needs steering

How It Works

1. Dual-Channel Transcription

Captures your mic (labeled “you”) and system audio (labeled “them”) separately. This separation powers all downstream intelligence.
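The two channels can be modeled as separate streams of labeled segments merged into one timeline. A minimal sketch; the types and the `mergeChannels` helper are illustrative, not Call.md's actual internals:

```typescript
// "you" = mic channel, "them" = system audio channel (illustrative model).
type Speaker = "you" | "them";

interface TranscriptSegment {
  speaker: Speaker;
  startMs: number;
  endMs: number;
  text: string;
}

// Merge the two per-channel streams into one time-ordered transcript,
// which downstream features (metrics, live assist, MCP) can consume.
function mergeChannels(
  mic: TranscriptSegment[],
  system: TranscriptSegment[],
): TranscriptSegment[] {
  return [...mic, ...system].sort((a, b) => a.startMs - b.startMs);
}
```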
2. Live Assist

Every 20 seconds, the AI analyzes the recent transcript and generates contextual suggestions: things to say and questions to ask.
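One way the assist loop could assemble its prompt context is to keep only a trailing window of the transcript on each tick. The 20-second cadence is from the docs; the 60-second window and all names here are assumptions:

```typescript
interface Seg { speaker: "you" | "them"; startMs: number; endMs: number; text: string; }

// Hypothetical helper: gather only the recent transcript window
// before prompting the LLM for suggestions.
function recentContext(segments: Seg[], nowMs: number, windowMs = 60_000): string {
  return segments
    .filter((s) => s.endMs >= nowMs - windowMs) // keep the trailing window only
    .map((s) => `${s.speaker}: ${s.text}`)
    .join("\n");
}
```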
3. Conversation Metrics

Real-time tracking of talk ratio, speaking pace (WPM), questions asked, and monologue detection. No LLM required — pure statistics.
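Because these metrics are pure statistics over the labeled segments, they need no model call. A sketch under assumed segment shapes; function and field names are illustrative, and the 45-second monologue threshold matches the feature list:

```typescript
interface Seg { speaker: "you" | "them"; startMs: number; endMs: number; text: string; }

function computeMetrics(segs: Seg[]) {
  const dur = (s: Seg) => s.endMs - s.startMs;
  const you = segs.filter((s) => s.speaker === "you");
  const youMs = you.reduce((a, s) => a + dur(s), 0);
  const totalMs = segs.reduce((a, s) => a + dur(s), 0);
  const words = you.reduce(
    (a, s) => a + s.text.split(/\s+/).filter(Boolean).length, 0);
  return {
    talkRatio: totalMs ? youMs / totalMs : 0,     // your share of speaking time
    wpm: youMs ? words / (youMs / 60_000) : 0,    // your speaking pace
    questionsAsked: you.filter((s) => s.text.includes("?")).length,
    monologue: you.some((s) => dur(s) > 45_000),  // one uninterrupted turn over 45 s
  };
}
```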
4. MCP Auto-Triggering

An intent detector scans the conversation for information needs (active or passive). When one is detected, it automatically calls the relevant MCP tools and displays the results inline.
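The cap of three tool calls per run comes from the feature list below; the keyword rules and tool names in this sketch are hypothetical stand-ins for the real intent detector:

```typescript
interface ToolTrigger { tool: string; query: string; }

// Hypothetical keyword-based detector; tool names and patterns are examples.
function detectToolCalls(text: string, maxCalls = 3): ToolTrigger[] {
  const rules: Array<[RegExp, string]> = [
    [/pricing|cost/i, "crm.lookup_pricing"],
    [/ticket|issue/i, "tracker.search"],
    [/doc|spec/i, "docs.search"],
    [/calendar|schedule/i, "calendar.find_slot"],
  ];
  const triggers: ToolTrigger[] = [];
  for (const [re, tool] of rules) {
    const m = text.match(re);
    if (m) triggers.push({ tool, query: m[0] });
    if (triggers.length === maxCalls) break; // cap per run, as in the docs
  }
  return triggers;
}
```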
5. Post-Meeting Intelligence

When the meeting ends, Call.md generates three summaries in parallel: a narrative overview, key points by topic, and action items. The results are sent to your workflow automation platforms.
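Since the three summaries are independent, they can be generated concurrently, for example with `Promise.all`. A sketch; the prompts and the `Llm` callback type are assumptions:

```typescript
type Llm = (prompt: string) => Promise<string>;

// Run the three summary prompts concurrently; each is independent of the others.
async function summarizeMeeting(transcript: string, llm: Llm) {
  const [overview, keyPoints, actionItems] = await Promise.all([
    llm(`Write a 3-5 sentence narrative overview:\n${transcript}`),
    llm(`List key points grouped by topic, attributed to participants:\n${transcript}`),
    llm(`List 3-10 concrete action items:\n${transcript}`),
  ]);
  return { overview, keyPoints, actionItems };
}
```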

Key Features

Real-Time Speech-to-Text
  • Separate channels for you and them
  • Live transcription powered by VideoDB
  • Recording history with full transcripts
  • Screen, mic, and system audio capture
AI-Generated Suggestions
  • Contextual things to say
  • Questions to ask
  • Updates every 20 seconds
  • Based on recent conversation context
Track Engagement
  • Talk ratio (you vs them)
  • Speaking pace and question count
  • Engagement score
  • Monologue detection (45s threshold)
Model Context Protocol
  • Auto-triggers tools from conversation
  • Runs every 20 seconds
  • Max 3 tool calls per run
  • Results display inline during meetings
  • Supports stdio and HTTP servers
Three-Part AI Analysis
  • Short overview (3-5 sentence narrative)
  • Key points by topic (attributed to participants)
  • Action items (3-10 concrete next steps)
  • Generated in parallel for speed
Automation Integration
  • Auto-send to n8n, Zapier, CRMs
  • Triggered when meeting ends
  • Structured data payload
  • Agent-ready output format
Setup Wizard
  • AI-generated probing questions
  • Dynamic discussion checklist
  • Google Calendar integration
  • Sync upcoming meetings
Mark Important Moments
  • Quick bookmark during calls
  • Review later with context
  • Share with team
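The structured payload pushed to n8n, Zapier, or a CRM when a meeting ends might look like the following; the field names and the `sendToWebhook` helper are assumptions, so check what your workflow actually expects:

```typescript
interface MeetingSummary { overview: string; keyPoints: string; actionItems: string[]; }

// Hypothetical payload shape for the post-meeting webhook.
function buildWebhookPayload(meetingId: string, summary: MeetingSummary) {
  return {
    meetingId,
    endedAt: new Date().toISOString(),
    ...summary,
  };
}

// POST the payload as JSON (requires Node 18+ for the global fetch).
async function sendToWebhook(url: string, payload: unknown): Promise<number> {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  return res.status;
}
```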

Tech Stack

Built for performance and reliability:

  • Electron 34: desktop application shell
  • React 19 + TypeScript 5.8: modern UI with full type safety
  • tRPC 11: type-safe API layer
  • Drizzle + SQLite: local, offline-first storage
  • VideoDB SDK (0.2.4): recording and transcription
  • MCP SDK (1.0.0): Model Context Protocol integration
  • OpenAI SDK (6.19.0): LLM calls via the VideoDB API
  • Tailwind + shadcn/ui: beautiful, modern interface

Getting Started

Prerequisites
1. Install Call.md

macOS (Apple Silicon & Intel):
curl -fsSL https://artifacts.videodb.io/call.md/install | bash
Currently available for macOS — Windows and Linux support coming soon
2. Launch and Register

  1. Launch Call.md from Applications or Spotlight
  2. Enter your VideoDB API key (get one free)
  3. Grant system permissions when prompted
3. Configure Recording

Configure preferences:
  • Enable microphone capture
  • Enable system audio capture
  • Select screen to record
  • Optionally connect Google Calendar
4. Start Your First Meeting

  1. Click “New Meeting” from home screen
  2. Optionally run Meeting Setup wizard
  3. Click “Start Recording”
  4. Watch live transcription and intelligence
macOS Permissions Required: grant these in System Preferences > Privacy & Security:
  • Microphone (for voice recording)
  • Screen Recording (for screen capture)

Configuration

All features are configurable through Settings:
Feature | Customizable
Live Assist | Enable/disable, configure timing
Conversation Metrics | Set thresholds for talk ratio alerts
MCP Servers | Add stdio/HTTP servers, manage connections
Workflow Webhooks | Configure n8n, Zapier, or custom endpoints
Google Calendar | Connect/disconnect calendar sync

MCP Server Setup

Connect MCP servers in Settings → MCP Servers:
  1. Click Add Server
  2. Choose transport: stdio (local) or http (remote)
  3. Configure connection details
  4. Click Connect
The MCP agent runs automatically during meetings, detects information needs from conversation, and triggers relevant tools. Results appear inline in the MCP Results panel.
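As an illustration, connection settings for the two transports might look like this. Field names, commands, and URLs are examples, not Call.md's exact schema; `@modelcontextprotocol/server-filesystem` is a commonly used stdio server:

```typescript
// Example stdio server: launched locally as a child process.
const stdioServer = {
  name: "filesystem",
  transport: "stdio" as const,
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/notes"],
};

// Example HTTP server: a remote endpoint, often with auth headers.
const httpServer = {
  name: "internal-crm",
  transport: "http" as const,
  url: "https://mcp.example.com/crm",
  headers: { Authorization: "Bearer <token>" },
};
```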

Privacy & Data

  • Local Database: all data stored in SQLite at ~/Library/Application Support/call-md/
  • Secure Storage: API keys encrypted, credentials protected
  • User Control: delete recordings anytime, export transcripts
  • Transcription via VideoDB: AI features require internet connectivity

Complete Setup Guide on GitHub

Detailed installation instructions, configuration options, and troubleshooting guide
