
# Edge AI Engineering πŸ“±


Optimized machine learning models for edge and mobile devices. Showcasing efficient model deployment, optimization techniques, and real-world edge AI applications.

Features β€’ Installation β€’ Quick Start β€’ Documentation β€’ Contributing


## ✨ Features

- Model quantization and optimization
- Mobile-first architectures
- Battery-efficient inference
- Cross-platform deployment
- Edge-optimized pipelines

πŸ“ Project Structure

```mermaid
graph TD
    A[edge-ai-engineering] --> B[models]
    A --> C[optimization]
    A --> D[deployment]
    A --> E[benchmarks]
    B --> F[tflite]
    B --> G[pytorch-mobile]
    C --> H[quantization]
    C --> I[compression]
    D --> J[android]
    D --> K[ios]
    E --> L[performance]
    E --> M[battery]
```
Full directory structure:

```
edge-ai-engineering/
β”œβ”€β”€ models/            # Model implementations
β”‚   β”œβ”€β”€ tflite/       # TensorFlow Lite models
β”‚   └── pytorch/      # PyTorch Mobile models
β”œβ”€β”€ optimization/      # Optimization tools
β”‚   β”œβ”€β”€ quantization/ # Model quantization
β”‚   └── compression/  # Model compression
β”œβ”€β”€ deployment/       # Platform-specific deployment
β”‚   β”œβ”€β”€ android/     # Android deployment
β”‚   └── ios/         # iOS deployment
β”œβ”€β”€ benchmarks/       # Performance testing
└── README.md         # Documentation
```

## πŸ”§ Prerequisites

- Python 3.8+
- TensorFlow Lite 2.14+
- PyTorch Mobile 2.2+
- Android SDK/NDK
- Xcode (for iOS)

## πŸ“¦ Installation

```bash
# Clone repository
git clone https://github.com/BjornMelin/edge-ai-engineering.git
cd edge-ai-engineering

# Create environment
python -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt
```

## πŸš€ Quick Start

```python
from edge_ai import optimization, deployment

# Optimize the model for mobile inference
optimized_model = optimization.quantize_for_mobile(
    model,
    target_platform="android",
    quantization="int8",
)

# Configure deployment for the target device
# (named mobile_deployment so it does not shadow the deployment module)
mobile_deployment = deployment.MobileDeployment(
    model=optimized_model,
    platform="android",
    optimize_battery=True,
)

# Generate the deployment package
mobile_deployment.export()
```

## πŸ“š Documentation

### Models

| Model       | Task           | Size | Latency (ms) |
|-------------|----------------|------|--------------|
| MobileNetV3 | Classification | 4MB  | 15           |
| TinyYOLO    | Detection      | 8MB  | 25           |
| MobileViT   | Vision         | 6MB  | 20           |

### Optimization

- Int8 quantization
- Model pruning
- Architecture optimization
- Memory footprint reduction
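Int8 quantization maps float weights onto 8-bit integers with a per-tensor scale and zero-point. A minimal pure-Python sketch of that affine mapping, for illustration only (the TensorFlow Lite and PyTorch Mobile toolchains handle this internally):

```python
def quantize_int8(values):
    """Quantize floats to int8 via an affine (scale, zero-point) mapping."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0  # guard against constant tensors
    zero_point = round(-lo / scale) - 128  # shift into the int8 range [-128, 127]
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the int8 representation."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, zp = quantize_int8(weights)
recovered = dequantize(q, scale, zp)
```

For a range of [-1, 1] the round-trip error stays within one quantization step (about 0.008 here), which is why int8 models lose so little accuracy relative to the 4x size reduction.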

### Benchmarks

Performance on different devices:

| Device    | Model     | Battery Impact | FPS | Memory |
|-----------|-----------|----------------|-----|--------|
| Pixel 6   | MobileNet | 2%/hr          | 30  | 120MB  |
| iPhone 13 | TinyYOLO  | 3%/hr          | 25  | 150MB  |
| RPi 4     | MobileViT | N/A            | 15  | 200MB  |
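Latency and FPS figures like those above can be reproduced with a simple timing harness. A minimal sketch, where `fake_infer` is a hypothetical stand-in for a real model invocation (e.g. a TFLite interpreter's `invoke()`):

```python
import time

def benchmark(infer, warmup=5, runs=50):
    """Return (average latency in ms, FPS) for an inference callable."""
    for _ in range(warmup):           # warm caches before timing
        infer()
    start = time.perf_counter()
    for _ in range(runs):
        infer()
    elapsed = time.perf_counter() - start
    latency_ms = elapsed / runs * 1000.0
    return latency_ms, 1000.0 / latency_ms

def fake_infer():
    # Placeholder workload; swap in the real model call on-device.
    sum(i * i for i in range(1000))

latency_ms, fps = benchmark(fake_infer)
```

Warmup runs matter on mobile: the first few inferences pay one-time costs (delegate initialization, memory allocation) that would otherwise skew the average.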

## 🀝 Contributing

## πŸ“Œ Versioning

We use SemVer for versioning. For available versions, see the tags on this repository.

## ✍️ Authors

Bjorn Melin

πŸ“ Citation

@misc{melin2024edgeaiengineering,
  author = {Melin, Bjorn},
  title = {Edge AI Engineering: Optimized Mobile Machine Learning},
  year = {2024},
  publisher = {GitHub},
  url = {https://github.com/BjornMelin/edge-ai-engineering}
}

## πŸ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

  • TensorFlow Lite team
  • PyTorch Mobile developers
  • Mobile ML community
  • Edge computing researchers

Made with πŸ“± and ❀️ by Bjorn Melin
