Commit b839011

Readme.md
1 parent a5dd788 commit b839011

File tree

2 files changed: +100 −1 lines changed


.github/workflows/ci.yml

Lines changed: 40 additions & 0 deletions

```yaml
name: CI

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [3.8, 3.9]

    steps:
      - uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8 pytest
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: Lint with flake8
        run: |
          # stop the build if there are Python syntax errors or undefined names
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
          # exit-zero treats all errors as warnings
          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
      - name: Run tests
        run: |
          # Run basic tests if they exist
          if [ -d "tests" ]; then
            pytest
          else
            echo "No tests directory found. Skipping tests."
          fi
```
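The "Run tests" step only invokes `pytest` when a `tests/` directory exists. A minimal test module that step would discover might look like this (a hypothetical example for illustration, not a file from this commit):

```python
# tests/test_smoke.py (hypothetical) - pytest collects any function
# whose name starts with "test_" in files matching test_*.py.

def normalize(scores):
    """Scale a list of non-negative scores so they sum to 1."""
    total = sum(scores)
    return [s / total for s in scores]

def test_normalize_sums_to_one():
    probs = normalize([2.0, 1.0, 1.0])
    assert abs(sum(probs) - 1.0) < 1e-9
    assert probs[0] == 0.5
```

With this file in place, the workflow's `pytest` invocation runs it on both Python versions in the matrix.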

README.md

Lines changed: 60 additions & 1 deletion

@@ -9,6 +9,9 @@

[![PyTorch](https://img.shields.io/badge/PyTorch-1.7%2B-red.svg)](https://pytorch.org/)
[![TensorFlow](https://img.shields.io/badge/TensorFlow-2.4%2B-orange.svg)](https://tensorflow.org/)
[![License](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT)
[![CI Status](https://github.com/vatsalmehta/speech-emotion-recognition/actions/workflows/ci.yml/badge.svg)](https://github.com/vatsalmehta/speech-emotion-recognition/actions)
[![GitHub stars](https://img.shields.io/github/stars/vatsalmehta/speech-emotion-recognition?style=social)](https://github.com/vatsalmehta/speech-emotion-recognition/stargazers)
[![GitHub forks](https://img.shields.io/github/forks/vatsalmehta/speech-emotion-recognition?style=social)](https://github.com/vatsalmehta/speech-emotion-recognition/network/members)

## 👤 About this Project
@@ -458,4 +461,60 @@ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file

- The RAVDESS dataset creators for providing high-quality emotional speech data
- The PyTorch and torchaudio teams for their excellent frameworks
- The research community for advancing speech emotion recognition techniques
## 🎮 Try It Yourself

### Interactive Demo

You can try an interactive version of this emotion recognition system online:

```bash
# Coming soon - Streamlit or Hugging Face Spaces demo
```

I'm currently working on deploying an interactive demo using Streamlit that will allow you to:

- Upload your own audio files for emotion analysis
- Compare results across different model architectures
- Visualize the decision-making process in real time

Check back soon or [contact me](https://github.com/vatsalmehta) for early access!
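Until the hosted demo is live, the core of what it will do can be sketched in a few lines. This assumes a hypothetical model that emits one logit per class; the eight labels below follow RAVDESS's emotion set and are illustrative, not necessarily the repo's actual label order:

```python
import numpy as np

# The eight RAVDESS emotion classes (illustrative ordering).
EMOTIONS = ["neutral", "calm", "happy", "sad",
            "angry", "fearful", "disgust", "surprised"]

def logits_to_emotion(logits):
    """Map raw model outputs to a (label, confidence) pair via softmax."""
    logits = np.asarray(logits, dtype=np.float64)
    exp = np.exp(logits - logits.max())   # shift for numerical stability
    probs = exp / exp.sum()
    idx = int(np.argmax(probs))
    return EMOTIONS[idx], float(probs[idx])

# Index 2 ("happy") carries the largest logit in this made-up example.
label, confidence = logits_to_emotion([0.1, -1.2, 3.4, 0.0, 0.5, -0.3, 0.2, 1.1])
print(label, round(confidence, 3))
```

The demo would wrap this kind of post-processing around the model's forward pass, displaying the winning label and its softmax confidence for each uploaded clip.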
### Installation Options

For those who prefer a local installation, I've provided multiple ways to run the project:

1. **Docker Container** (easiest, no dependency issues):
   ```bash
   docker pull vatsalmehta/emotion-recognition:latest
   docker run -p 8501:8501 vatsalmehta/emotion-recognition:latest
   ```

2. **Python Virtual Environment** (recommended for developers):
   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   pip install -r requirements.txt
   ```
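After either install, a quick sanity check is to confirm your interpreter matches the versions exercised by the CI matrix (3.8 and 3.9). This helper is a hypothetical convenience, not part of the repo; adjust the set as the supported versions change:

```python
import sys

TESTED = {(3, 8), (3, 9)}  # versions exercised by the CI matrix

def check_python(version=None):
    """Return a short status string for a (major, minor) version pair."""
    v = tuple(version or sys.version_info[:2])
    if v in TESTED:
        return "OK: Python %d.%d is covered by CI" % v
    return "Warning: Python %d.%d is not in the tested matrix" % v

print(check_python())
```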
## 🚀 Further Development

I'm actively improving this project with several exciting directions:

- **Cross-cultural emotion detection**: Training on multilingual datasets to improve performance across different languages and accents
- **Multimodal analysis**: Combining audio features with facial expressions for more accurate emotion detection
- **Edge deployment**: Optimizing models for mobile and IoT devices
- **Emotion tracking over time**: Analyzing emotional progression throughout conversations

If you're interested in collaborating on any of these features, please reach out!
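The emotion-tracking direction above can be sketched as a smoothing pass over per-segment predictions: classify short windows of a conversation, then suppress single-segment flicker with a sliding majority vote. The helper below is a minimal illustration, not code from the repo:

```python
from collections import Counter

def smooth_emotions(segment_labels, window=3):
    """Smooth a sequence of per-segment emotion labels with a
    sliding majority vote, reducing single-segment flicker."""
    smoothed = []
    for i in range(len(segment_labels)):
        lo = max(0, i - window // 2)
        hi = min(len(segment_labels), i + window // 2 + 1)
        votes = Counter(segment_labels[lo:hi])
        smoothed.append(votes.most_common(1)[0][0])
    return smoothed

# A lone "angry" segment inside a happy stretch gets voted away.
track = ["happy", "happy", "angry", "happy", "happy", "sad", "sad"]
print(smooth_emotions(track))
```

A real tracker would also carry the softmax confidences forward (e.g. averaging probabilities rather than voting on hard labels), but the windowed structure is the same.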
## 📬 Contact & Connect

I'm always open to collaboration, feedback, or questions about this project:

- **LinkedIn**: [Vatsal Mehta](https://linkedin.com/in/vatsalmehta)
- **GitHub**: [@vatsalmehta](https://github.com/vatsalmehta)
- **Email**: your.email@example.com (replace with your actual email)
- **Portfolio**: [vatsalmehta.com](https://vatsalmehta.com)

Whether you're interested in machine learning collaboration, have questions about emotion recognition, or just want to connect with a fellow ML engineer, don't hesitate to reach out!
