VisionAI is a web application that combines facial recognition, emotion detection, and voice interaction to create an immersive AI assistant experience. It is built with the MERN stack and features a futuristic, sci-fi-inspired UI.
- Face Recognition Login: No passwords needed - your face is your key
- Secure Face Descriptors: 128-point facial feature vectors stored in MongoDB
- Anti-Spoofing: Live detection prevents photo-based attacks
- Real-time Emotion Detection: Happy, sad, angry, surprised, fearful, disgusted, neutral
- Contextual Greetings: AI responds based on your current emotional state
- Mood Analytics: Track emotional patterns over time
- Voice Commands: Control the app with natural speech
- Text-to-Speech: AI responds with synthesized voice
- Smart Actions: "Start a new React project", "I want to take a break"
- Neon Console Theme: Dark cyberpunk aesthetic with glowing elements
- Animated Scan Lines: Dynamic visual effects like sci-fi movies
- Responsive Grid Layout: Adapts to any screen size
- Background Audio: Subtle ambient hum for immersion
- Project Templates: Quick-start templates for React, Next.js, etc.
- Relaxation Mode: Break time with breathing exercises and ambient sounds
- Session Tracking: Monitor usage patterns and productivity
- React 18 - Modern UI framework
- Styled Components - CSS-in-JS styling
- Framer Motion - Smooth animations
- face-api.js - Browser-based face recognition
- React Speech Recognition - Voice input
- Web Speech API - Text-to-speech output
- Node.js & Express - RESTful API server
- MongoDB & Mongoose - Database and ODM
- Face Recognition - Custom face matching algorithms
- Security - Helmet, CORS, rate limiting
- Concurrently - Run client and server together
- Nodemon - Auto-restart development server
- Environment Variables - Secure configuration
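The "custom face matching" on the server can be pictured as a nearest-neighbor search over the stored 128-point descriptors. The sketch below is illustrative, not the project's actual implementation; the 0.6 distance threshold follows the default used by face-api.js's `FaceMatcher`.

```javascript
// Euclidean distance between two face descriptors of equal length.
function euclideanDistance(a, b) {
  if (a.length !== b.length) throw new Error('descriptor length mismatch');
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    const d = a[i] - b[i];
    sum += d * d;
  }
  return Math.sqrt(sum);
}

// Find the closest enrolled user, or null if nobody is within the threshold.
function matchFace(descriptor, users, threshold = 0.6) {
  let best = null;
  let bestDist = Infinity;
  for (const user of users) {
    const dist = euclideanDistance(descriptor, user.faceDescriptor);
    if (dist < bestDist) {
      bestDist = dist;
      best = user;
    }
  }
  return bestDist <= threshold ? best : null;
}
```

The threshold trades security for convenience: lower values reject more look-alikes but also more legitimate scans in poor lighting.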
- Node.js 16+ and npm
- MongoDB (local or Atlas)
- Webcam for face recognition
1. **Clone the repository**

   ```bash
   git clone https://github.com/your-username/visionai-dashboard.git
   cd visionai-dashboard
   ```

2. **Install dependencies**

   ```bash
   npm run install-deps
   ```

3. **Download face-api.js models**

   ```bash
   # Option 1: Download from GitHub
   cd client/public
   mkdir models
   # Download models from:
   # https://github.com/justadudewhohacks/face-api.js/tree/master/weights

   # Option 2: Copy from node_modules (after installing face-api.js)
   cp -r node_modules/face-api.js/weights/* client/public/models/
   ```

4. **Configure environment**

   ```bash
   cd server
   cp .env.example .env
   # Edit .env with your MongoDB connection string
   ```

5. **Start MongoDB**

   ```bash
   # Local MongoDB
   mongod

   # Or use MongoDB Atlas (cloud):
   # update MONGODB_URI in server/.env
   ```

6. **Start the application**

   ```bash
   npm run dev
   ```
This starts both the client (http://localhost:3000) and the server (http://localhost:5000).
- Access the app at http://localhost:3000
- Click "NEW USER REGISTRATION"
- Allow camera access when prompted
- Fill out the registration form
- Position your face in the scanner frame
- Click "CAPTURE BIOMETRICS" when face is detected
- Complete registration - your face descriptor is now stored
- Open the app - camera automatically activates
- Position your face for scanning
- Click "INITIATE SCAN" to authenticate
- Enjoy personalized greetings based on your detected emotion
- Use voice commands or click interface buttons
- Try saying:
- "Start a new React project"
- "I want to take a break"
- "Show me dashboard"
- "Logout"
- "New React project" → Shows project templates
- "Take a break" → Activates relaxation mode
- "Dashboard" → Returns to main view
- "Logout" → Signs you out
The app uses CSS custom properties for easy theming:
```css
:root {
  --primary-cyan: #00ffff;
  --primary-pink: #ff0066;
  --success-green: #00ff00;
  --warning-orange: #ffa500;
  --error-red: #ff0000;
  --background-dark: #000000;
  --glass-bg: rgba(0, 255, 255, 0.05);
}
```
Adjust animation timings in styled components:
```css
/* Faster animations */
animation: scanLines 2s linear infinite;

/* Slower animations */
animation: scanLines 6s linear infinite;
```
Register a new user with face biometrics.
Body:
```jsonc
{
  "name": "John Doe",
  "email": "john@example.com",
  "faceDescriptor": [/* array of 128 numbers */]
}
```
Authenticate user by face recognition.
Body:
```jsonc
{
  "faceDescriptor": [/* array of 128 numbers */],
  "emotion": "happy"
}
```
End user session.
Body:
```json
{
  "userId": "user_id_here"
}
```
Get user profile and statistics.
Update user preferences.
Get mood and usage analytics.
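A client-side login call against these endpoints might look like the sketch below. The route path `/api/auth/login` is an assumption; check the server's route definitions for the real paths.

```javascript
// Build and validate the login request body before sending it.
function buildLoginBody(faceDescriptor, emotion) {
  if (!Array.isArray(faceDescriptor) || faceDescriptor.length !== 128) {
    throw new Error('faceDescriptor must be an array of 128 numbers');
  }
  return JSON.stringify({ faceDescriptor, emotion });
}

// Hypothetical login helper (route name is an assumption).
async function login(faceDescriptor, emotion) {
  const res = await fetch('/api/auth/login', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: buildLoginBody(faceDescriptor, emotion),
  });
  if (!res.ok) throw new Error(`login failed: ${res.status}`);
  return res.json();
}
```

Validating the descriptor length on the client gives a clear error before the request ever leaves the browser.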
```js
{
  name: String,
  email: String,
  faceDescriptor: [Number], // 128-point face vector
  moodLogs: [{
    emotion: String,
    confidence: Number,
    timestamp: Date,
    context: String
  }],
  sessions: [{
    loginTime: Date,
    logoutTime: Date,
    duration: Number,
    emotionsDetected: [String],
    voiceCommands: [String]
  }],
  preferences: {
    voiceEnabled: Boolean,
    autoGreeting: Boolean,
    backgroundAudio: Boolean,
    emotionTracking: Boolean
  },
  stats: {
    totalLogins: Number,
    totalTimeSpent: Number,
    mostCommonEmotion: String,
    lastLogin: Date,
    averageSessionDuration: Number
  }
}
```
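The `stats` fields can be derived from the recorded `sessions` and `moodLogs`. The field names below mirror the schema; the aggregation logic itself is an illustrative assumption.

```javascript
// Derive summary stats from a user's sessions and mood logs.
function computeStats(sessions, moodLogs) {
  const totalLogins = sessions.length;
  const totalTimeSpent = sessions.reduce((sum, s) => sum + (s.duration || 0), 0);
  const averageSessionDuration = totalLogins ? totalTimeSpent / totalLogins : 0;

  // Most common emotion across all mood logs (null if there are none).
  const counts = {};
  for (const { emotion } of moodLogs) {
    counts[emotion] = (counts[emotion] || 0) + 1;
  }
  const mostCommonEmotion = Object.keys(counts)
    .reduce((best, e) => (counts[e] > (counts[best] || 0) ? e : best), null);

  return { totalLogins, totalTimeSpent, averageSessionDuration, mostCommonEmotion };
}
```

Computing these on write (or in a scheduled job) keeps dashboard reads cheap compared to aggregating on every request.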
- Face Descriptor Storage: biometric data is stored as 128-number feature vectors rather than raw face images
- Rate Limiting: Prevents brute force attacks
- CORS Protection: Restricts cross-origin requests
- Input Validation: Sanitizes all user inputs
- Environment Variables: Sensitive data not in code
- Helmet.js: Sets security headers
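The input-validation step can be sketched as a server-side check applied before any registration data touches the database. The exact rules below (email regex, descriptor shape) are assumptions, not the project's actual validators.

```javascript
// Validate a registration request body; returns { valid, errors }.
function validateRegistration(body) {
  const errors = [];
  if (typeof body.name !== 'string' || !body.name.trim()) {
    errors.push('name is required');
  }
  if (typeof body.email !== 'string' || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(body.email)) {
    errors.push('email is invalid');
  }
  if (!Array.isArray(body.faceDescriptor) ||
      body.faceDescriptor.length !== 128 ||
      !body.faceDescriptor.every((n) => typeof n === 'number' && Number.isFinite(n))) {
    errors.push('faceDescriptor must be an array of 128 finite numbers');
  }
  return { valid: errors.length === 0, errors };
}
```

Checking every descriptor element is a finite number matters here: a `NaN` or string smuggled into the array would silently poison distance calculations later.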
- Lazy Loading: Components load when needed
- Image Optimization: Compressed assets
- Database Indexing: Fast queries on user data
- Face Recognition Caching: Reduces computation
- WebRTC Optimization: Efficient camera streaming
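The face-recognition caching mentioned above can be sketched as a small bounded cache keyed on a rounded descriptor, so repeated scans of the same face skip the full comparison pass. The rounding scheme and size limit are assumptions for illustration.

```javascript
// Bounded cache for face-match results, keyed on a rounded descriptor.
class MatchCache {
  constructor(maxEntries = 100) {
    this.maxEntries = maxEntries;
    this.map = new Map(); // Map preserves insertion order: oldest-first eviction
  }

  // Round components so near-identical scans produce the same key.
  key(descriptor) {
    return descriptor.map((n) => n.toFixed(2)).join(',');
  }

  get(descriptor) {
    return this.map.get(this.key(descriptor));
  }

  set(descriptor, result) {
    if (this.map.size >= this.maxEntries) {
      this.map.delete(this.map.keys().next().value); // evict oldest entry
    }
    this.map.set(this.key(descriptor), result);
  }
}
```

Rounding to two decimals is a deliberate trade-off: coarser keys mean more cache hits but a higher chance of two distinct faces colliding, so the precision should be tuned against the matching threshold.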
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
- Use ESLint and Prettier
- Follow React best practices
- Write meaningful commit messages
- Add JSDoc comments for functions
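For the JSDoc guideline, a small made-up example (the function is illustrative, not part of the codebase):

```javascript
/**
 * Convert a session duration in seconds to a human-readable label.
 *
 * @param {number} seconds - Total session duration in seconds.
 * @returns {string} A label such as "1h 5m", "2m", or "45s".
 */
function formatDuration(seconds) {
  if (seconds < 60) return `${seconds}s`;
  const h = Math.floor(seconds / 3600);
  const m = Math.floor((seconds % 3600) / 60);
  return h > 0 ? `${h}h ${m}m` : `${m}m`;
}
```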
- Chrome/Edge: Check camera permissions in browser settings
- Firefox: Allow camera access when prompted
- HTTPS Required: Camera access requires secure connection in production
- Models Not Loading: Ensure the face-api.js models are in `client/public/models/`
- Poor Lighting: Use good lighting for better detection
- Distance: Keep face 2-3 feet from camera
- Angle: Face camera directly for best results
- Local MongoDB: Ensure MongoDB service is running
- Atlas: Check network access and credentials
- Firewall: Open port 27017 for local MongoDB
- Microphone Access: Grant microphone permissions
- Browser Support: Use Chrome/Edge for best compatibility
- Noise: Use in quiet environment for better accuracy
This project is licensed under the MIT License - see the LICENSE file for details.
- face-api.js - Amazing browser-based face recognition
- MongoDB - Flexible document database
- React Team - Incredible frontend framework
- Styled Components - Beautiful CSS-in-JS solution
- Framer Motion - Smooth animations made easy
This project is open to suggestions, improvements, and collaboration.
If you find a bug, have an idea, or want to contribute — feel free to open an issue or pull request!
Experience the future of human-computer interaction today!