Call.md on GitHub
Complete source code, installation guide, and configuration
What Is It?
Your AI meeting assistant, live during every call. Most meeting tools transcribe after the fact. Call.md is different: it understands your meeting as it happens, giving you AI-powered suggestions, conversation metrics, and automatic tool triggers in real time. It's like having an AI copilot who watches your meeting, tracks engagement, and whispers helpful context right when you need it.

After the call ends, it generates three-part summaries (overview, key points, action items) and can automatically update your CRM or project management tools through workflow webhooks.

If you've used Otter.ai or Fireflies.ai, think of Call.md as the same idea but built for agent automation: not just passive transcription, but active intelligence during and after meetings.

Why You Need This
- Live Intelligence
- Meeting Intelligence
- Agent Integration
During the Meeting
Get real-time assistance while you talk:
- AI-generated suggestions (things to say, questions to ask)
- Conversation metrics (talk ratio, pace, questions asked)
- MCP tools triggered automatically from conversation context
- Coaching nudges when conversation needs steering
How It Works
Live Assist
Every 20 seconds, AI analyzes the recent transcript and generates contextual suggestions: things to say and questions to ask.
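The windowing step behind this loop can be pictured as a small sketch. All type and function names here are illustrative, not taken from the actual codebase: every cycle, the assistant keeps only the transcript segments from the last window and flattens them into prompt context.

```typescript
// Illustrative sketch of the Live Assist transcript window.
// Names are hypothetical; the real implementation may differ.

interface Segment {
  speaker: "you" | "them";
  text: string;
  timestampMs: number; // when the segment was spoken
}

const WINDOW_MS = 20_000; // Live Assist runs every 20 seconds

// Keep only segments spoken within the last window.
function recentWindow(segments: Segment[], nowMs: number): Segment[] {
  return segments.filter((s) => nowMs - s.timestampMs <= WINDOW_MS);
}

// Flatten the window into a prompt-context string for the LLM.
function toPromptContext(segments: Segment[]): string {
  return segments.map((s) => `${s.speaker}: ${s.text}`).join("\n");
}
```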
Conversation Metrics
Real-time tracking of talk ratio, speaking pace (WPM), questions asked, and monologue detection. No LLM required — pure statistics.
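Because these metrics are pure statistics, they can be sketched as simple functions over the utterance stream. The shapes and names below are hypothetical, not the app's actual API:

```typescript
// Illustrative, LLM-free conversation metrics (names are hypothetical).

interface Utterance {
  speaker: "you" | "them";
  text: string;
  durationMs: number; // how long the utterance took to say
}

// Fraction of total speaking time that is yours.
function talkRatio(utterances: Utterance[]): number {
  const total = utterances.reduce((sum, u) => sum + u.durationMs, 0);
  if (total === 0) return 0;
  const yours = utterances
    .filter((u) => u.speaker === "you")
    .reduce((sum, u) => sum + u.durationMs, 0);
  return yours / total;
}

// Words per minute across all utterances.
function speakingPaceWpm(utterances: Utterance[]): number {
  const totalMs = utterances.reduce((sum, u) => sum + u.durationMs, 0);
  if (totalMs === 0) return 0;
  const words = utterances.reduce(
    (sum, u) => sum + u.text.split(/\s+/).filter(Boolean).length,
    0,
  );
  return words / (totalMs / 60_000);
}

// Naive question counter: utterances ending in "?".
function questionsAsked(utterances: Utterance[]): number {
  return utterances.filter((u) => u.text.trim().endsWith("?")).length;
}
```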
MCP Auto-Triggering
Intent detector scans conversation for information needs (active or passive). When detected, automatically calls relevant MCP tools and displays results inline.
Key Features
Recording & Transcription
Real-Time Speech-to-Text
- Separate channels for you and them
- Live transcription powered by VideoDB
- Recording history with full transcripts
- Screen, mic, and system audio capture
Live Assist
AI-Generated Suggestions
- Contextual things to say
- Questions to ask
- Updates every 20 seconds
- Based on recent conversation context
Conversation Metrics
Track Engagement
- Talk ratio (you vs them)
- Speaking pace and question count
- Engagement score
- Monologue detection (45s threshold)
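Monologue detection with the 45-second threshold can be sketched as follows. This is an assumed approach (merge consecutive turns by the same speaker, then check the run length), not the documented algorithm:

```typescript
// Illustrative monologue detector (45-second threshold; names hypothetical).

interface Turn {
  speaker: "you" | "them";
  startMs: number;
  endMs: number;
}

const MONOLOGUE_MS = 45_000;

// True when one speaker has held the floor longer than the threshold.
// Consecutive turns by the same speaker are merged before checking.
function isMonologuing(turns: Turn[]): boolean {
  let runStart = 0;
  let runSpeaker: string | null = null;
  for (const t of turns) {
    if (t.speaker !== runSpeaker) {
      runSpeaker = t.speaker;
      runStart = t.startMs; // a new speaker resets the run
    }
    if (t.endMs - runStart >= MONOLOGUE_MS) return true;
  }
  return false;
}
```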
MCP Integration
Model Context Protocol
- Auto-triggers tools from conversation
- Runs every 20 seconds
- Max 3 tool calls per run
- Results display inline during meetings
- Supports stdio and HTTP servers
MCP Auto-Triggering Example
How it works in practice:

During a sales call, the client mentions "Can you send me pricing for the enterprise plan?" The MCP intent detector recognizes this as an information need, automatically calls your CRM tool to fetch the pricing doc, and displays it inline. You see the result immediately and can reference it without breaking flow.

Or: a customer says "I'm seeing error code 502." The MCP agent searches your knowledge base tool, finds relevant docs, and shows them to you in real time, before you even finish taking notes.

This happens automatically every 20 seconds based on conversation context. You configure which MCP tools are available; the agent decides when to call them.
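A highly simplified sketch of the trigger step, including the 3-call cap per run. The tool names and the keyword patterns here are hypothetical; the real detector works from conversation context rather than fixed regexes:

```typescript
// Illustrative intent-to-tool mapping with the max-3-calls-per-run cap.
// Tool names and patterns are hypothetical examples.

interface ToolCall {
  tool: string;
  query: string;
}

const MAX_CALLS_PER_RUN = 3;

// Map simple information-need patterns onto MCP tool invocations.
const PATTERNS: Array<{ regex: RegExp; tool: string }> = [
  { regex: /pricing|quote|cost/i, tool: "crm.fetch_pricing" },
  { regex: /error code \d+/i, tool: "kb.search_docs" },
];

function detectToolCalls(transcript: string): ToolCall[] {
  const calls: ToolCall[] = [];
  for (const { regex, tool } of PATTERNS) {
    const match = transcript.match(regex);
    if (match) calls.push({ tool, query: match[0] });
    if (calls.length >= MAX_CALLS_PER_RUN) break; // cap per 20s run
  }
  return calls;
}
```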
Post-Meeting Summaries
Three-Part AI Analysis
- Short overview (3-5 sentence narrative)
- Key points by topic (attributed to participants)
- Action items (3-10 concrete next steps)
- Generated in parallel for speed
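"Generated in parallel" maps naturally onto `Promise.all`: the three sections are independent, so nothing forces them to run sequentially. A sketch under that assumption, where `llm` and the prompt strings stand in for the real LLM calls:

```typescript
// Illustrative parallel three-part summary. `llm` and the prompts are
// stand-ins, not the app's actual API.

type Llm = (prompt: string, transcript: string) => Promise<string>;

interface MeetingSummary {
  overview: string;    // 3-5 sentence narrative
  keyPoints: string;   // grouped by topic, attributed to participants
  actionItems: string; // 3-10 concrete next steps
}

async function summarize(transcript: string, llm: Llm): Promise<MeetingSummary> {
  // The three sections are independent, so they run concurrently.
  const [overview, keyPoints, actionItems] = await Promise.all([
    llm("Write a 3-5 sentence overview.", transcript),
    llm("List key points by topic, attributed to participants.", transcript),
    llm("Extract 3-10 concrete action items.", transcript),
  ]);
  return { overview, keyPoints, actionItems };
}
```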
Workflow Webhooks
Automation Integration
- Auto-send to n8n, Zapier, CRMs
- Triggered when meeting ends
- Structured data payload
- Agent-ready output format
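To make "structured data payload" concrete, here is a possible shape and a POST to an endpoint. The field names are hypothetical, not the documented schema; check the GitHub setup guide for the real one:

```typescript
// Illustrative webhook payload and delivery. Field names are assumptions,
// not the documented schema.

interface WebhookPayload {
  meetingId: string;
  endedAt: string; // ISO 8601
  summary: { overview: string; keyPoints: string; actionItems: string };
}

function buildPayload(
  meetingId: string,
  endedAt: Date,
  summary: WebhookPayload["summary"],
): WebhookPayload {
  return { meetingId, endedAt: endedAt.toISOString(), summary };
}

// POST the payload to an n8n/Zapier/custom endpoint when the meeting ends.
async function sendWebhook(url: string, payload: WebhookPayload): Promise<void> {
  await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}
```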
Meeting Preparation
Setup Wizard
- AI-generated probing questions
- Dynamic discussion checklist
- Google Calendar integration
- Sync upcoming meetings
Bookmarking
Mark Important Moments
- Quick bookmark during calls
- Review later with context
- Share with team
Tech Stack
Built for performance and reliability:

- Electron 34: desktop application shell
- React 19 + TypeScript 5.8: modern UI with full type safety
- tRPC 11: type-safe API layer
- Drizzle + SQLite: local, offline-first storage
- VideoDB SDK (0.2.4): recording and transcription
- MCP SDK (1.0.0): Model Context Protocol integration
- OpenAI SDK (6.19.0): LLM calls via the VideoDB API
- Tailwind + shadcn/ui: beautiful, modern interface
Getting Started
Prerequisites
- macOS 12+ (Monterey or later) or Windows 10+
- VideoDB API key (free tier available)
Install Call.md
macOS (Apple Silicon & Intel): see the installation guide on GitHub for download and install steps.

Currently available for macOS and Windows; Linux support is coming soon.
Launch and Register
- Launch Call.md from Applications or Spotlight
- Enter your VideoDB API key (get one free)
- Grant system permissions when prompted
Configure Recording
Configure preferences:
- Enable microphone capture
- Enable system audio capture
- Select screen to record
- Optionally connect Google Calendar
Configuration
All features are configurable through Settings:

| Feature | Customizable |
|---|---|
| Live Assist | Enable/disable, configure timing |
| Conversation Metrics | Set thresholds for talk ratio alerts |
| MCP Servers | Add stdio/HTTP servers, manage connections |
| Workflow Webhooks | Configure n8n, Zapier, or custom endpoints |
| Google Calendar | Connect/disconnect calendar sync |
MCP Server Setup
Connect MCP servers in Settings → MCP Servers:
- Click Add Server
- Choose transport: stdio (local) or http (remote)
- Configure connection details
- Click Connect
Privacy & Data
Local Database - All data stored in SQLite at ~/Library/Application Support/call-md/

Secure Storage - API keys encrypted, credentials protected
User Control - Delete recordings anytime, export transcripts
Transcription via VideoDB - AI features require internet connectivity
Complete Setup Guide on GitHub
Detailed installation instructions, configuration options, and troubleshooting guide
Related Tutorials
Bloom
Screen recorder with AI search for async video documentation
Pair Programmer
AI coding assistant with real-time screen and audio context
Focusd
AI-powered productivity tracking with automatic time insights