An open-source desktop screen recorder built with Electron, React, and TypeScript. Capture your screen, save videos locally, and auto-generate transcripts and AI summaries using Ollama — all offline, privacy-first, and fully customizable.
- 🎥 Screen Recording - Capture your entire screen or specific windows (a capture sketch follows this list)
- 📁 Smart File Organization - Automatically saves to organized folders with timestamps
- 🎵 Audio Extraction - Extracts audio from recorded videos for processing
- 📝 AI Transcript Generation - Converts speech to text using local AI models
- 🤖 Intelligent Summarization - Generates context-aware summaries using Ollama
- 🎯 Multiple Recording Types - Optimized for Google Meet, Lessons, and general videos
- 👀 Live Preview - See what you're recording in real-time
- 🔄 Cross-Platform - Works seamlessly on Windows, macOS, and Linux
- 🔒 Privacy-First - All processing happens locally, no data sent to external servers
- ⚡ Automated Setup - One-command setup for all binary dependencies
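The screen-capture flow behind the first feature is standard Electron: the main process enumerates capturable sources, and the renderer records the chosen one. Below is a minimal sketch, assuming a hypothetical `get-sources` IPC channel; it is not necessarily how this app wires things up.

```ts
import { desktopCapturer, ipcMain } from 'electron'

// Main process: expose the list of capturable screens/windows to the renderer.
// 'get-sources' is a hypothetical channel name for this sketch.
ipcMain.handle('get-sources', async () => {
  const sources = await desktopCapturer.getSources({ types: ['screen', 'window'] })
  return sources.map((s) => ({ id: s.id, name: s.name }))
})
```

On the renderer side, the chosen source id feeds Chromium's desktop-capture constraints:

```ts
// Renderer: open a stream for the chosen source and record it as WebM.
async function startRecording(sourceId: string): Promise<MediaRecorder> {
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: false,
    video: {
      // Electron-specific constraints, not covered by the standard typings.
      mandatory: { chromeMediaSource: 'desktop', chromeMediaSourceId: sourceId },
    } as any,
  })
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm; codecs=vp9' })
  recorder.start()
  return recorder
}
```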
The fastest way to get started:
- Visit the Releases page
- Download the installer for your platform:
  - Windows: `meetingvideo-transrecorder-1.0.0-setup.exe`
  - macOS: `meetingvideo-transrecorder-1.0.0.dmg`
- Install and launch the application
For developers or those who need the latest features:
```bash
# Clone the repository
git clone https://github.com/LinuxDevil/MeetingVideo-Transrecorder
cd meetingvideo-transrecorder

# Install dependencies
npm install
pip install -r requirements.txt

# Start development server
npm run dev
```
Required components:

| Component | Version | Purpose |
|---|---|---|
| Node.js | 16+ | Runtime environment |
| Python | 3.8+ | Audio processing |
| FFmpeg | Latest | Video/audio conversion |

Optional components:

| Component | Purpose |
|---|---|
| Ollama | Local AI model for summarization |
| Python packages | Speech-to-text processing |
The easiest way to set up all binary dependencies:
```bash
# Setup for current platform (recommended)
./setup-binaries.sh

# Or setup for all platforms
./setup-binaries.sh all

# Or setup for a specific platform
./setup-binaries.sh macos
./setup-binaries.sh windows
./setup-binaries.sh linux
```
On macOS, the script:
- Creates a Python virtual environment in `python-runtime/`
- Installs all dependencies from `requirements.txt`
- Downloads the FFmpeg binary to `ffmpeg-bin/`

On Windows, the script:
- Downloads the FFmpeg binary to `ffmpeg-bin-windows/`
- Uses system Python (no bundled runtime needed)
- Requires `pip install -r requirements.txt` for dependencies

On Linux, the script:
- Creates a Python virtual environment in `python-runtime/`
- Installs all dependencies from `requirements.txt`
- Downloads a static FFmpeg binary to `ffmpeg-bin/`
After running the setup script, you'll have:
```
├── python-runtime/      # macOS/Linux Python environment
├── ffmpeg-bin/          # macOS/Linux FFmpeg binary
├── ffmpeg-bin-windows/  # Windows FFmpeg binary
├── audio_extractor.py   # Python script for audio processing
├── requirements.txt     # Python dependencies
└── setup-binaries.sh    # Setup script
```

Note: Windows builds use system Python, so no `python-runtime-windows/` directory is created.
After running the Windows setup, install the Python dependencies with system Python:

```bash
pip install -r requirements.txt
```

Beyond that, no additional setup is required: Windows builds use system Python and the bundled FFmpeg.
To build for distribution:
- Run `./setup-binaries.sh all` to set up binaries for all platforms
- Use the appropriate npm script for your target platform:

```bash
npm run build:mac
npm run build:win
npm run build:linux
```
FFmpeg is required for audio extraction; choose your platform (an invocation sketch follows the install commands):
Windows:
```bash
# Using Chocolatey (recommended)
choco install ffmpeg

# Using Scoop
scoop install ffmpeg

# Manual installation
# Download from https://ffmpeg.org/download.html
# Extract to C:\ffmpeg and add C:\ffmpeg\bin to PATH
```

macOS:
```bash
# Using Homebrew
brew install ffmpeg

# Manual installation
# Download from https://ffmpeg.org/download.html
# Extract to /usr/local/bin/
```

Linux:
```bash
# Ubuntu/Debian
sudo apt update && sudo apt install ffmpeg

# CentOS/RHEL
sudo yum install ffmpeg

# Arch Linux
sudo pacman -S ffmpeg
```
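For reference, the audio-extraction step amounts to one FFmpeg invocation that strips the video stream and resamples for speech recognition. A minimal sketch, assuming `ffmpeg` is on PATH (a packaged build would point at its bundled binary instead):

```ts
import { spawn } from 'node:child_process'

// Pull a mono 16 kHz WAV out of a recorded WebM, the kind of input
// speech-to-text models usually expect.
function extractAudio(videoPath: string, wavPath: string): Promise<void> {
  return new Promise((resolve, reject) => {
    const ff = spawn('ffmpeg', [
      '-y',            // overwrite the output file if it exists
      '-i', videoPath, // input video
      '-vn',           // drop the video stream
      '-ar', '16000',  // resample to 16 kHz
      '-ac', '1',      // downmix to mono
      wavPath,
    ])
    ff.on('error', reject)
    ff.on('close', (code) =>
      code === 0 ? resolve() : reject(new Error(`ffmpeg exited with code ${code}`)),
    )
  })
}
```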
For AI-powered summaries:

```bash
# 1. Download from https://ollama.ai/
# 2. Start the Ollama service
ollama serve

# 3. Pull the required model
ollama pull mistral

# 4. Verify installation
ollama list
```
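With the model pulled, generating a summary comes down to a single HTTP call to Ollama's `/api/generate` endpoint. A hedged sketch; the prompt wording is illustrative, not necessarily what the app sends:

```ts
// Ask the local Ollama server for a summary of a transcript.
async function summarize(transcript: string): Promise<string> {
  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'mistral',
      prompt: `Summarize this meeting transcript:\n\n${transcript}`,
      stream: false, // return one JSON object instead of a token stream
    }),
  })
  if (!res.ok) throw new Error(`Ollama returned HTTP ${res.status}`)
  const data = (await res.json()) as { response: string }
  return data.response
}
```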
Ensure everything is installed correctly:

```bash
# Check all dependencies
node --version
python --version
ffmpeg -version
ollama list  # if installed
```
If the Python runtime isn't found:
- Make sure you've run `./setup-binaries.sh` first
- Check that the Python runtime was created successfully
- Verify the paths in `transcript.utils.ts` match your setup

If FFmpeg isn't found:
- Ensure FFmpeg was downloaded by the setup script
- Check that the FFmpeg binary has execute permissions
- Verify the FFmpeg path is correct for your platform
If the automated setup doesn't work, you can manually set up the binaries:
- Create a virtual environment: `python3 -m venv python-runtime`
- Activate it (macOS/Linux): `source python-runtime/bin/activate`
- Install dependencies: `pip install -r requirements.txt`
- Deactivate: `deactivate`
- Download from the FFmpeg website
- Extract to the `ffmpeg-bin/` directory (macOS/Linux) or `ffmpeg-bin-windows/` (Windows)
- Ensure the binary is executable (macOS/Linux): `chmod +x ffmpeg-bin/ffmpeg`
Windows builds rely on system Python, so just ensure Python 3.8+ is installed and run:
```bash
pip install -r requirements.txt
```
The app requires these Python packages (from `requirements.txt`):
- `openai-whisper>=20231117` - For speech-to-text conversion
- `SpeechRecognition>=3.10.0` - Alternative speech recognition
- `pydub>=0.25.1` - Audio processing
- `torch>=2.0.0` - Machine learning backend
- `numpy>=1.24.0` - Numerical computations
How binaries are resolved at runtime (a resolution sketch follows this list):
- Development mode: uses system Python and FFmpeg on all platforms
- Packaged app:
  - macOS/Linux: uses the bundled Python runtime and FFmpeg
  - Windows: uses system Python + bundled FFmpeg
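A sketch of what that resolution logic can look like in the main process. The function names and the assumption that bundled binaries live under `process.resourcesPath` are illustrative; they may not match `transcript.utils.ts`:

```ts
import path from 'node:path'
import { app } from 'electron'

// Development builds rely on the system PATH; packaged builds use bundled binaries.
function resolveFfmpeg(): string {
  if (!app.isPackaged) return 'ffmpeg'
  const dir = process.platform === 'win32' ? 'ffmpeg-bin-windows' : 'ffmpeg-bin'
  const bin = process.platform === 'win32' ? 'ffmpeg.exe' : 'ffmpeg'
  return path.join(process.resourcesPath, dir, bin)
}

// Windows and development builds use system Python; macOS/Linux packaged
// builds use the bundled virtual environment.
function resolvePython(): string {
  if (!app.isPackaged || process.platform === 'win32') return 'python'
  return path.join(process.resourcesPath, 'python-runtime', 'bin', 'python3')
}
```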
The binary directories are automatically ignored by git:
- `python-runtime/` (macOS/Linux only)
- `ffmpeg-bin/` (macOS/Linux)
- `ffmpeg-bin-windows/` (Windows)
This keeps the repository clean while allowing developers to set up their own binaries.
- Launch the application
- Select recording type (Google Meet, Lesson, or Video)
- Click "Start Recording"
- Present your content
- Click "Stop Recording"
- Download your video
After recording:
- Click "Extract Transcript"
- Wait for processing (audio extraction + speech recognition)
- Review the transcript and AI-generated summary
- Files are saved to your desktop automatically
Your recordings are automatically organized (a path-building sketch follows the tree):

```
Desktop/
└── captured-videos/
    └── 2024-01-15/
        ├── recording-2024-01-15-14-30-25.webm
        ├── recording-2024-01-15-14-30-25.txt
        └── recording-2024-01-15-14-30-25-summary.txt
```
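Deriving that layout is plain date formatting plus `path.join`; a sketch (the app's actual helper may differ):

```ts
import os from 'node:os'
import path from 'node:path'

// Build e.g. Desktop/captured-videos/2024-01-15/recording-2024-01-15-14-30-25.webm
function recordingPath(date = new Date()): string {
  const pad = (n: number) => String(n).padStart(2, '0')
  const day = `${date.getFullYear()}-${pad(date.getMonth() + 1)}-${pad(date.getDate())}`
  const stamp = `${day}-${pad(date.getHours())}-${pad(date.getMinutes())}-${pad(date.getSeconds())}`
  return path.join(os.homedir(), 'Desktop', 'captured-videos', day, `recording-${stamp}.webm`)
}
```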
| Type | Purpose | Summary Focus |
|---|---|---|
| Google Meet | Meeting recordings | Action items, decisions, assignments |
| Lesson | Educational content | Learning objectives, key concepts, homework |
| Video | General content | Main points, highlights, insights |
Change the AI model in `src/main/utils/ollama.utils.ts`:

```ts
model: 'mistral' // Change to any model you have installed
```
The app supports multiple STT options, tried in order (a fallback sketch follows this list):
- OpenAI Whisper (Local) - Default, works offline
- Google Speech Recognition (Online) - Fallback option
- Fallback - Simple placeholder text if the other options fail
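That order maps naturally onto a try/catch chain. A sketch with hypothetical engine functions, declared here only so the snippet type-checks:

```ts
// Hypothetical engine signatures; not the app's actual API.
declare function whisperTranscribe(wavPath: string): Promise<string>
declare function googleSpeechTranscribe(wavPath: string): Promise<string>

// Try local Whisper first, then the online service, then give up gracefully.
async function transcribe(wavPath: string): Promise<string> {
  try {
    return await whisperTranscribe(wavPath) // offline default
  } catch {
    try {
      return await googleSpeechTranscribe(wavPath) // online fallback
    } catch {
      return '[transcript unavailable]' // last-resort placeholder
    }
  }
}
```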
Start the development server:

```bash
npm run dev
```

Build for your target platform:

```bash
# Windows
npm run build:win

# macOS
npm run build:mac

# Linux
npm run build:linux
```
If FFmpeg isn't found:

```bash
# Verify installation
ffmpeg -version

# Add to PATH if needed
# Windows: add C:\ffmpeg\bin to the system PATH
# macOS/Linux: ensure it's in /usr/local/bin/
```
If Ollama isn't responding:

```bash
# Start the Ollama service
ollama serve

# Check available models
ollama list

# Pull the required model
ollama pull mistral

# Test the connection
curl http://localhost:11434/api/tags
```
If Python dependencies are missing:

```bash
# Install requirements
pip install -r requirements.txt

# Verify audio_extractor.py exists
ls audio_extractor.py
```
If no screens appear in the source picker:
- This is normal on first run
- The app will automatically detect available screens
- Try refreshing or restarting the application
Run with enhanced logging:
```bash
npm run dev -- --debug
```
Use the automated setup script:
```bash
python setup.py
```
```
meetingvideo-transrecorder/
├── src/
│   ├── main/                # Electron main process
│   │   ├── handlers/        # IPC handlers
│   │   ├── types/           # TypeScript types
│   │   ├── utils/           # Utility functions
│   │   └── window.ts        # Window management
│   ├── preload/             # Preload scripts
│   │   ├── types/           # Type definitions
│   │   ├── utils/           # API utilities
│   │   └── index.ts         # Main preload
│   └── renderer/            # React frontend
│       ├── components/      # UI components
│       ├── hooks/           # Custom hooks
│       ├── types/           # TypeScript types
│       └── utils/           # Utility functions
├── resources/               # App resources
├── audio_extractor.py       # Python audio processing
├── requirements.txt         # Python dependencies
└── package.json             # Node.js dependencies
```
We welcome contributions! Here's how to get started:
- Fork the repository
- Create a feature branch: `git checkout -b feature-name`
- Make your changes
- Test thoroughly
- Submit a pull request
- Follow TypeScript best practices
- Use meaningful commit messages
- Test on multiple platforms
- Update documentation as needed
This project is licensed under the MIT License - see the LICENSE file for details.
- Check the troubleshooting section above
- Review console output for error messages
- Ensure all prerequisites are installed
- Try running in debug mode
- Check GitHub Issues
- GitHub Issues: Report bugs and request features
- Discussions: Ask questions and share ideas
- Releases: Download the latest versions
This project uses GitHub Actions for automated builds and releases.
Build workflow:
- Triggers: push to `main`, pull requests, manual
- Purpose: builds Windows and macOS versions
- Output: uploads artifacts for manual download

Release workflow:
- Triggers: tags starting with `v*` (e.g., `v1.0.1`)
- Purpose: creates GitHub releases with artifacts
- Output: automatic release with downloadable files
To cut a release, run the release script.

Linux/macOS:
```bash
chmod +x scripts/release.sh
./scripts/release.sh 1.0.1
```

Windows:
```bash
scripts\release.bat 1.0.1
```
To run a build manually:
- Go to Actions
- Click "Build Artifacts"
- Click "Run workflow"
- Select a branch and run
| Platform | Runner | Build Command | Output |
|---|---|---|---|
| Windows | `windows-latest` | `npm run build:win` | `.exe`, `win-unpacked/` |
| macOS | `macos-latest` | `npm run build:mac` | `.dmg`, `mac/` |
- Build Artifacts: Available in Actions tab (30-day retention)
- Release Artifacts: Available on Releases page (permanent)
- ✅ Modular Architecture - Refactored for better maintainability
- ✅ CSS Refactoring - Improved styling with @apply directives
- ✅ GitHub Actions - Automated builds and releases
- ✅ Cross-platform Support - Windows and macOS builds
- ✅ AI Integration - Local AI-powered summarization
Made with ❤️ by LinuxDevil