
Call.md on GitHub

Complete source code, installation guide, and configuration

What Is It?

Your AI meeting assistant, live during every call.

Most meeting tools transcribe after the fact. Call.md is different: it understands your meeting as it happens, giving you AI-powered suggestions, conversation metrics, and automatic tool triggers in real time. It’s like having an AI copilot who watches your meeting, tracks engagement, and whispers helpful context right when you need it.

After the call ends, it generates three-part summaries (overview, key points, action items) and can automatically update your CRM or project management tools through workflow webhooks. If you’ve used Otter.ai or Fireflies.ai, think of Call.md as the same idea built for agent automation: not just passive transcription, but active intelligence during and after meetings.
The Power: During calls, AI generates contextual suggestions, monitors conversation balance, and automatically triggers your MCP tools when information needs arise — all in real-time.

Why You Need This

During the Meeting

Get real-time assistance while you talk:
  • AI-generated suggestions (things to say, questions to ask)
  • Conversation metrics (talk ratio, pace, questions asked)
  • MCP tools triggered automatically from conversation context
  • Coaching nudges when conversation needs steering

How It Works

1. Dual-Channel Transcription

Captures your mic (labeled “you”) and system audio (labeled “them”) separately. This separation powers all downstream intelligence.
Why Dual-Channel Matters: Separating “you” (microphone) from “them” (system audio) isn’t just for attribution. It enables Call.md to track conversation balance, detect when one person is dominating, measure your speaking pace independently, and generate insights like “You asked 3 questions but the client asked 0—they might not be engaged.” Single-channel transcription can’t do this.
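Because every segment arrives already labeled, these metrics reduce to a simple fold over the transcript. A minimal sketch, assuming an illustrative segment shape (the `Segment` interface and function names are not Call.md's actual API):

```typescript
// Each transcript segment carries the channel label assigned at capture time.
interface Segment {
  speaker: "you" | "them"; // "you" = microphone, "them" = system audio
  text: string;
  startMs: number;
  endMs: number;
}

// Talk ratio: each channel's share of total speaking time.
function talkRatio(segments: Segment[]): { you: number; them: number } {
  let you = 0;
  let them = 0;
  for (const s of segments) {
    const dur = s.endMs - s.startMs;
    if (s.speaker === "you") you += dur;
    else them += dur;
  }
  const total = you + them || 1; // avoid division by zero on an empty call
  return { you: you / total, them: them / total };
}

// Question count per channel, e.g. to flag "you asked 3, they asked 0".
function questionCount(segments: Segment[], speaker: "you" | "them"): number {
  return segments
    .filter((s) => s.speaker === speaker)
    .reduce((n, s) => n + (s.text.match(/\?/g)?.length ?? 0), 0);
}
```

With a single mixed channel, `speaker` is unknown, so neither metric can be computed; that is the whole argument for dual-channel capture.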
2. Live Assist

Every 20 seconds, AI analyzes recent transcript and generates contextual suggestions: things to say, questions to ask.
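The cadence can be sketched as a timer that slices off the recent transcript window and hands it to the model. The 20-second interval comes from the description above; the 2-minute window, prompt shape, and `requestSuggestions` callback are assumptions for illustration:

```typescript
interface Utterance {
  speaker: "you" | "them";
  text: string;
  endMs: number;
}

// Keep only utterances inside the trailing context window (assumed 2 minutes).
function recentWindow(
  transcript: Utterance[],
  nowMs: number,
  windowMs = 120_000,
): Utterance[] {
  return transcript.filter((u) => nowMs - u.endMs <= windowMs);
}

// Format the window as a prompt the suggestion model can consume.
function buildPrompt(window: Utterance[]): string {
  return window.map((u) => `${u.speaker}: ${u.text}`).join("\n");
}

// The 20-second loop; `requestSuggestions` stands in for the real LLM call.
function startLiveAssist(
  getTranscript: () => Utterance[],
  requestSuggestions: (prompt: string) => void,
): ReturnType<typeof setInterval> {
  return setInterval(() => {
    const prompt = buildPrompt(recentWindow(getTranscript(), Date.now()));
    if (prompt.length > 0) requestSuggestions(prompt);
  }, 20_000);
}
```

Windowing matters here: sending the full transcript on every tick would grow token cost linearly with meeting length, while a fixed window keeps each request bounded.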
3. Conversation Metrics

Real-time tracking of talk ratio, speaking pace (WPM), questions asked, and monologue detection. No LLM required — pure statistics.
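Both metrics are plain arithmetic over the labeled segments; no model call is involved. A sketch under assumed names (the 45-second monologue threshold is from the feature list below; the segment shape is illustrative):

```typescript
interface Seg {
  speaker: "you" | "them";
  text: string;
  startMs: number;
  endMs: number;
}

// Words per minute over one speaker's segments.
function wpm(segments: Seg[], speaker: "you" | "them"): number {
  const mine = segments.filter((s) => s.speaker === speaker);
  const words = mine.reduce(
    (n, s) => n + s.text.split(/\s+/).filter(Boolean).length,
    0,
  );
  const minutes = mine.reduce((ms, s) => ms + (s.endMs - s.startMs), 0) / 60_000;
  return minutes > 0 ? words / minutes : 0;
}

// Monologue detection: one speaker holding the floor past a threshold (45 s).
function inMonologue(segments: Seg[], thresholdMs = 45_000): "you" | "them" | null {
  let holder: "you" | "them" | null = null;
  let heldSince = 0;
  for (const s of segments) {
    if (s.speaker !== holder) {
      // Floor changed hands; restart the clock.
      holder = s.speaker;
      heldSince = s.startMs;
    }
    if (s.endMs - heldSince >= thresholdMs) return holder;
  }
  return null;
}
```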
4. MCP Auto-Triggering

Intent detector scans conversation for information needs (active or passive). When detected, automatically calls relevant MCP tools and displays results inline.
5. Post-Meeting Intelligence

When the meeting ends, generates three parallel summaries: narrative overview, key points by topic, and action items. Sends to workflow automation platforms.
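The three summaries are independent of one another, so they can run concurrently and the whole step takes only as long as the slowest one. A sketch with the LLM abstracted as an injected async function (the prompt strings and result shape are assumptions, not Call.md's actual prompts):

```typescript
type Llm = (prompt: string) => Promise<string>;

interface MeetingSummary {
  overview: string;    // 3-5 sentence narrative
  keyPoints: string;   // grouped by topic, attributed to participants
  actionItems: string; // 3-10 concrete next steps
}

// Fire all three prompts at once and wait for the slowest.
async function summarize(transcript: string, llm: Llm): Promise<MeetingSummary> {
  const [overview, keyPoints, actionItems] = await Promise.all([
    llm(`Write a 3-5 sentence overview of this meeting:\n${transcript}`),
    llm(`List the key points by topic, attributed to participants:\n${transcript}`),
    llm(`Extract 3-10 concrete action items:\n${transcript}`),
  ]);
  return { overview, keyPoints, actionItems };
}
```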

Key Features

Real-Time Speech-to-Text
  • Separate channels for you and them
  • Live transcription powered by VideoDB
  • Recording history with full transcripts
  • Screen, mic, and system audio capture
AI-Generated Suggestions
  • Contextual things to say
  • Questions to ask
  • Updates every 20 seconds
  • Based on recent conversation context
Track Engagement
  • Talk ratio (you vs them)
  • Speaking pace and question count
  • Engagement score
  • Monologue detection (45s threshold)
Model Context Protocol
  • Auto-triggers tools from conversation
  • Runs every 20 seconds
  • Max 3 tool calls per run
  • Results display inline during meetings
  • Supports stdio and HTTP servers
How it works in practice: During a sales call, the client asks, “Can you send me pricing for the enterprise plan?” The MCP intent detector recognizes this as an information need, automatically calls your CRM tool to fetch the pricing doc, and displays it inline. You see the result immediately and can reference it without breaking flow.

Or: a customer says “I’m seeing error code 502.” The MCP agent searches your knowledge base tool, finds relevant docs, and shows them to you in real time, before you even finish taking notes.

This happens automatically every 20 seconds based on conversation context. You configure which MCP tools are available; the agent decides when to call them.
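The mechanism can be approximated as a small detector plus a per-run cap. This is a sketch of the idea, not Call.md's actual implementation: the trigger patterns, tool names, and `ToolCall` shape are all illustrative, and a real detector would likely use an LLM rather than regexes. The cap of 3 calls per run is from the feature list above:

```typescript
interface ToolCall {
  tool: string;
  query: string;
}

// Crude intent patterns: an explicit ask ("can you send ... pricing") and a
// passive need (an error code worth looking up in a knowledge base).
const TRIGGERS: Array<{ pattern: RegExp; tool: string }> = [
  { pattern: /can you send .*pricing/i, tool: "crm.fetchPricing" },
  { pattern: /error code (\d+)/i, tool: "kb.search" },
];

// Scan the recent transcript and emit at most `maxCalls` tool calls per run.
function detectToolCalls(recentText: string, maxCalls = 3): ToolCall[] {
  const calls: ToolCall[] = [];
  for (const { pattern, tool } of TRIGGERS) {
    const m = recentText.match(pattern);
    if (m) calls.push({ tool, query: m[0] });
    if (calls.length >= maxCalls) break;
  }
  return calls;
}
```

The cap is the important design choice: without it, a chatty meeting could fan out into an unbounded burst of tool calls every 20 seconds.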
Three-Part AI Analysis
  • Short overview (3-5 sentence narrative)
  • Key points by topic (attributed to participants)
  • Action items (3-10 concrete next steps)
  • Generated in parallel for speed
Automation Integration
  • Auto-send to n8n, Zapier, CRMs
  • Triggered when meeting ends
  • Structured data payload
  • Agent-ready output format
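The "structured data payload" amounts to a single JSON POST when the meeting ends; n8n and Zapier webhook URLs both accept arbitrary JSON. A sketch in which the field names and endpoint are assumptions, not Call.md's documented schema:

```typescript
interface WebhookPayload {
  meetingId: string;
  endedAt: string; // ISO 8601 timestamp
  summary: { overview: string; keyPoints: string; actionItems: string };
}

// Assemble the payload the automation platform will receive.
function buildPayload(
  meetingId: string,
  summary: WebhookPayload["summary"],
): WebhookPayload {
  return { meetingId, endedAt: new Date().toISOString(), summary };
}

// POST to the configured endpoint; fetch is built into Node 18+ and Electron.
async function sendWebhook(url: string, payload: WebhookPayload): Promise<boolean> {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  return res.ok;
}
```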
Setup Wizard
  • AI-generated probing questions
  • Dynamic discussion checklist
  • Google Calendar integration
  • Sync upcoming meetings
Mark Important Moments
  • Quick bookmark during calls
  • Review later with context
  • Share with team

Tech Stack

Built for performance and reliability:

  • Electron 34 - Desktop application shell
  • React 19 + TypeScript 5.8 - Modern UI with full type safety
  • tRPC 11 - Type-safe API layer
  • Drizzle + SQLite - Local offline-first storage
  • VideoDB SDK (0.2.4) - Recording and transcription
  • MCP SDK (1.0.0) - Model Context Protocol integration
  • OpenAI SDK (6.19.0) - LLM calls via VideoDB API
  • Tailwind + shadcn/ui - Beautiful, modern interface

Getting Started

Prerequisites
  • macOS 12+ (Monterey or later) or Windows 10+
  • VideoDB API key (free tier available)
Step 1: Install Call.md

macOS (Apple Silicon & Intel):
curl -fsSL https://artifacts.videodb.io/call.md/install | bash
Currently available for macOS and Windows — Linux support coming soon
Step 2: Launch and Register

  1. Launch Call.md from Applications or Spotlight
  2. Enter your VideoDB API key (get one free)
  3. Grant system permissions when prompted
Step 3: Configure Recording

Configure preferences:
  • Enable microphone capture
  • Enable system audio capture
  • Select screen to record
  • Optionally connect Google Calendar
Step 4: Start Your First Meeting

  1. Click “New Meeting” from home screen
  2. Optionally run Meeting Setup wizard
  3. Click “Start Recording”
  4. Watch live transcription and intelligence
macOS Permissions Required
Grant in System Settings > Privacy & Security (System Preferences on macOS 12):
  • Microphone (for voice recording)
  • Screen Recording (for screen capture)

Configuration

All features are configurable through Settings:
  • Live Assist - Enable/disable, configure timing
  • Conversation Metrics - Set thresholds for talk ratio alerts
  • MCP Servers - Add stdio/HTTP servers, manage connections
  • Workflow Webhooks - Configure n8n, Zapier, or custom endpoints
  • Google Calendar - Connect/disconnect calendar sync

MCP Server Setup

Connect MCP servers in Settings → MCP Servers:
  1. Click Add Server
  2. Choose transport: stdio (local) or http (remote)
  3. Configure connection details
  4. Click Connect
The MCP agent runs automatically during meetings, detects information needs from conversation, and triggers relevant tools. Results appear inline in the MCP Results panel.
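To make the stdio/HTTP distinction concrete, here is a hypothetical pair of server entries. The field names are illustrative only, not Call.md's actual settings schema; `my-crm-mcp` and the URL are placeholders:

```json
{
  "servers": [
    { "name": "crm", "transport": "stdio", "command": "npx", "args": ["-y", "my-crm-mcp"] },
    { "name": "kb", "transport": "http", "url": "https://kb.example.com/mcp" }
  ]
}
```

A stdio server is a local process Call.md spawns and talks to over stdin/stdout; an HTTP server is a remote endpoint reached over the network.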

Privacy & Data

Local Database - All data stored in SQLite at ~/Library/Application Support/call-md/
Secure Storage - API keys encrypted, credentials protected
User Control - Delete recordings anytime, export transcripts
Transcription via VideoDB - AI features require internet connectivity

Complete Setup Guide on GitHub

Detailed installation instructions, configuration options, and troubleshooting guide

Related Tools

  • Bloom - Screen recorder with AI search for async video documentation
  • Pair Programmer - AI coding assistant with real-time screen and audio context
  • Focusd - AI-powered productivity tracking with automatic time insights