Night driving presents unique challenges: restricted visibility, sudden glare from oncoming vehicles, and the constant need to balance illumination with safety. Although nighttime travel accounts for only 25% of all driving time, it tragically contributes to 50% of traffic fatalities.¹ In Bhopal, India alone, high-beam glare caused 1,470 accidents in 2024.² According to the National Safety Council, the risk of traffic deaths at night is three times greater than during daylight hours.³
Inspired by Audi’s cutting-edge adaptive headlight technology, Adaptive LED Matrix offers a proof-of-concept solution that is both smart and cost-effective. By analyzing real-time road conditions via a camera, the system dynamically adjusts an LED display to optimize beam patterns—minimizing glare for other drivers while maximizing visibility for the user. This repository is organized into two phases:
- **Phase 1: Software Simulation**
  Simulate adaptive LED behavior using YOLOv7 for real-time object detection and OpenCV for LED matrix visualization.
- **Phase 2: Hardware Integration (Work in Progress)**
  Integrate the simulation with actual hardware components, such as MAX7219 LED modules and a Raspberry Pi Camera Board Version 2 (Sony IMX219 sensor).
- Project Overview
- System Architecture
- Features and Options
- Repository Structure
- Installation and Setup
- Usage
- Sample Video Outputs
- Challenges and Solutions
- Camera and Hardware Details
- Hardware Setup
- Output
- External Links and Resources
- Future Enhancements
- License
- Contributing
- Contact
The Adaptive LED Matrix project simulates a dynamic lighting system that responds to vehicles detected in a video stream. By using YOLOv7, the system maps the detected vehicle positions to specific LED slots on an 8×8 matrix. In Phase 1, the entire process is simulated in software—laying the foundation for eventual hardware integration (Phase 2).
- **Input Source:**
  The system accepts either a video file or a live camera feed.
  - Video: for example, use `data/test_videos/cars_video.mp4`.
  - Live Camera: set `source = 0` (or integrate with libraries like `picamera2`).
- **Image Size (`img_size`):**
  Controls the resolution provided to YOLOv7. Lower resolutions (e.g., 320) offer faster inference at a potential cost in detection accuracy.
- **Thresholds:**
  - Confidence (`conf_thres`): minimum confidence level for a detection to be considered.
  - IoU (`iou_thres`): Intersection over Union threshold used in Non-Max Suppression to remove overlapping detections.
- **Matrix Layout:**
  The 8×8 LED matrix is simulated as two modules:
  - Left Module: columns 0–3.
  - Right Module: columns 4–7.
- **Dynamic LED Control:**
  For each frame, the system determines which LED slots should be turned off based on the position of detected vehicles. If multiple vehicles are detected on the same side, all corresponding slots are updated in real time (see the sketch below).
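To illustrate the mapping, here is a minimal sketch (independent of the actual `src/led_control.py` and `src/detection.py` implementations, whose function names may differ) that converts bounding-box centers into an on/off mask for the eight LED columns:

```python
import numpy as np

MATRIX_COLS = 8  # columns 0-3 belong to the left module, 4-7 to the right module

def led_mask(detections, frame_width):
    """Return a boolean array of shape (8,); True means the LED column stays ON.

    `detections` is a list of (x1, y1, x2, y2) boxes in pixel coordinates,
    e.g. YOLOv7 output after Non-Max Suppression.
    """
    mask = np.ones(MATRIX_COLS, dtype=bool)            # start with every slot lit
    for x1, y1, x2, y2 in detections:
        cx = (x1 + x2) / 2.0                           # horizontal center of the vehicle
        col = min(int(cx / frame_width * MATRIX_COLS), MATRIX_COLS - 1)
        mask[col] = False                              # darken the slot covering that vehicle
    return mask

# Two vehicles in a 1280-px-wide frame: one on the left, one on the right
print(led_mask([(100, 300, 300, 500), (900, 320, 1100, 520)], frame_width=1280))
```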
- **YOLOv7:**
  Chosen for its high accuracy and real-time detection performance.
- **ONNX (Future Option):**
  Although the current implementation uses PyTorch, converting the model to ONNX is planned to enable faster inference and broader device compatibility.
The Raspberry Pi Camera Board Version 2 supports multiple resolutions:
- 1080p at 30 FPS (Full HD)
- 720p at 60 FPS (HD)
- 640×480 at 90 FPS (VGA)
These options help balance image quality and processing speed.
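For live-camera experiments, a minimal `picamera2` sketch along these lines can be used; the 720p mode below is only an example and can be swapped for any of the resolutions listed above:

```python
from picamera2 import Picamera2

picam2 = Picamera2()
# Pick one of the supported modes, e.g. 720p as a quality/speed trade-off
config = picam2.create_video_configuration(main={"size": (1280, 720), "format": "RGB888"})
picam2.configure(config)
picam2.start()

frame = picam2.capture_array()   # NumPy array, ready for YOLOv7 preprocessing
print(frame.shape)               # (720, 1280, 3)
picam2.stop()
```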
Adaptive-LED-Matrix/
├── data/
│ ├── test_images/ # Images for debugging and testing object detection
│ └── test_videos/ # Sample videos used for simulation (e.g., cars_video.mp4, realv.mp4, highway.mp4)
├── docs/
│ ├── 00000.png # Illustration image used in this README
│ └── problems_solutions.md # Documentation of encountered issues and their solutions
├── models/ # YOLOv7 model files and ONNX export scripts
├── screen/
│ ├── yolo7.png # Screenshot of downloading YOLOv7
│ ├── shiftToraspb.png # Screenshot of transferring files from the PC to the Raspberry Pi
│ └── downloadvid.png # Output of scripts/Download_YoutubeVid.py
├── src/
│ ├── main.py # Main script for running the simulation
│ ├── detection.py # Object detection and video processing using YOLOv7
│ ├── led_control.py # LED matrix simulation/control logic
│ └── utils_custom.py # Utility functions (model loading, image preprocessing)
├── scripts/
│ └── Download_YoutubeVid.py # Script to download test videos from YouTube
├── Test/
│ ├── camera_test.py # Testing camera integration and object detection
│ ├── test_ledmatrix.py # Testing LED matrix control logic
│ └── spidev-test.py # Testing SPI communication for hardware control
├── requirements.txt # Python dependencies (e.g., torch, opencv-python, numpy)
├── README.md # Project documentation
└── LICENSE
Access the data folder via the following Google Drive folder:
- **Clone the Repository:**
  ```bash
  git clone https://github.com/omnia/Adaptive-LED-Matrix.git
  cd Adaptive-LED-Matrix
  ```
- **Install Dependencies for Phase 1:**
  From the repository root (where `requirements.txt` lives):
  ```bash
  pip install -r requirements.txt
  ```
  Note: Ensure you are using a compatible Python version (e.g., Python 3.8–3.11). If you experience version conflicts (e.g., NumPy requirements), adjust the dependency versions in `requirements.txt` accordingly.
- **(Phase 2):**
  Additional dependencies for hardware control will be included in `phase2/requirements.txt` as development progresses.
To run the simulation for Phase 1, execute the main script:
```bash
python3 src/main.py
# Optional flags:
# --weights yolov7-tiny.pt --source "data/test_videos/cars_video.mp4" \
# --img-size 640 --conf-thres 0.5 --iou-thres 0.5 --device cpu --view-img
```
The system processes multiple video sources and updates the LED matrix in real time. For instance:
- **Urban Roads:**
  The system detects vehicles and dynamically disables LED slots to simulate adaptive headlight patterns in a city setting.
  🎥 Watch urban.mp4
- **Highway Traffic:**
  With a higher FPS input, the system smoothly updates the LED matrix even at high speeds.
  🎥 Watch Highway.mp4
- **Controlled Test Scenario:**
  A controlled video feed used to test different resolutions and LED mapping strategies.
  🎥 Watch Countryside.mp4

Video outputs are available in the `docs/` folder.
- **Problem:**
  Incompatibilities between certain versions of NumPy, Python, and ONNX.
- **Solution:**
  Specify compatible versions (e.g., `numpy>=1.18.5,<1.24.0`) in `requirements.txt` and consult the YOLOv7 repository for recommendations.
- **Problem:**
  Converting YOLOv7 models to ONNX for deployment can be challenging.
- **Solution:**
  Use YOLOv7’s export scripts (e.g., `export.py`) and test with ONNX Runtime to ensure proper conversion (see the sketch below).
- **MAX7219 Module 4-in-1 8×8 LED Matrix Module:**
  Selected for its simplicity and cost-effectiveness in controlling an 8×8 LED array.
- **Raspberry Pi Camera Board V2:**
  Provides versatile resolution options to balance quality and performance.
- **Raspberry Pi 4** (Product Link)
- **Raspberry Pi Camera Board Version 2** (Product Link)
  - Supports 1080p @ 30 FPS, 720p @ 60 FPS, and 640×480 @ 90 FPS.
- **MAX7219 LED Matrix Module** (Product Link)
  - Ideal for controlling 8×8 LED arrays, these modules provide a straightforward way to simulate dynamic lighting.
Connect each MAX7219 module on the hat to the Pi’s SPI pins as follows:
| Module | MAX7219 Pin | Raspberry Pi Pin | GPIO #         | Notes                             |
|--------|-------------|------------------|----------------|-----------------------------------|
| Left   | VCC         | Pin 1 (3.3 V)    | —              | Or 5 V if required by your module |
|        | GND         | Pin 6 (GND)      | —              |                                   |
|        | DIN         | Pin 19           | GPIO 10 (MOSI) |                                   |
|        | CLK         | Pin 23           | GPIO 11 (SCLK) |                                   |
|        | CS          | Pin 24           | GPIO 8 (CE0)   |                                   |
| Right  | VCC         | Pin 1 (3.3 V)    | —              | Or 5 V                            |
|        | GND         | Pin 6 (GND)      | —              |                                   |
|        | DIN         | Pin 19           | GPIO 10 (MOSI) | Shared data line                  |
|        | CLK         | Pin 23           | GPIO 11 (SCLK) | Shared clock                      |
|        | CS          | Pin 26           | GPIO 7 (CE1)   |                                   |
- **Enable SPI**
  ```bash
  sudo raspi-config   # → Interface Options → SPI → Yes
  ```
- **Check kernel modules**
  ```bash
  lsmod | grep spi
  ```
  You should see:
  ```
  spidev                 16384  4
  spi_bcm2835            20480  0
  ```
- **List SPI devices**
  ```bash
  ls /dev/spidev*
  ```
  Expected:
  ```
  /dev/spidev0.0  /dev/spidev0.1
  ```
```bash
sudo apt update
sudo apt install build-essential python3-dev python3-pip libfreetype6-dev libjpeg-dev
pip3 install luma.led_matrix
```
- **Clone & build**
  ```bash
  git clone https://github.com/rm-hull/spidev-test.git
  cd spidev-test
  make
  ```
- **Run**
  ```bash
  sudo ./spidev_test -D /dev/spidev0.0
  ```
  Expected:
  ```
  spi mode: 0x0
  bits per word: 8
  max speed: 500000 Hz (500 KHz)
  RX | 00 00 … 00 | …
  ```
```bash
python3 Test/test_ledmatrix.py
```
This will light up your matrices to verify the `luma.led_matrix` driver.
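For reference, a minimal `luma.led_matrix` sketch in the spirit of `Test/test_ledmatrix.py` (the actual test script may differ) addresses the two chip-select lines from the wiring table as separate SPI devices:

```python
from luma.core.interface.serial import spi, noop
from luma.core.render import canvas
from luma.led_matrix.device import max7219

# CE0 -> left module, CE1 -> right module (see the wiring table above)
left = max7219(spi(port=0, device=0, gpio=noop()), cascaded=1)
right = max7219(spi(port=0, device=1, gpio=noop()), cascaded=1)

for device in (left, right):
    device.contrast(16)             # moderate brightness
    with canvas(device) as draw:    # fill the whole 8x8 block to prove it lights up
        draw.rectangle(device.bounding_box, outline="white", fill="white")
```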
- Wiring: Double-check all pin mappings.
- SPI Enabled: Confirm via `raspi-config`.
- spidev-test: Use the C test to isolate hardware issues.
- Power: Ensure a stable 3.3 V/5 V supply per module spec.
- Virtual Env: Isolate Python deps if needed.
- YOLOv7 GitHub Repository
- ONNX Runtime
- Letterboxing in YOLO (Medium Article)
- Which YOLO?
- Additional YouTube videos: Video 3, What is the YOLO algorithm, best model / best algorithm
- Reddit Discussion on Object Detection Models
- **Real Hardware:**
  Integrate MAX7219 LED modules with a Raspberry Pi for physical LED control.
- **Camera Integration:**
  Use live camera feeds and experiment with different resolutions.
- **ONNX Conversion:**
  Convert the YOLOv7 model to ONNX for optimized inference on embedded devices.
- **Enhanced Multi-Object Handling:**
  Refine LED mapping logic to handle overlapping detections more robustly.
- **User Interface:**
  Develop a GUI for easier system configuration and monitoring.
- **Advanced Logging and Metrics:**
  Implement detailed logging to monitor system performance and detection accuracy.
- **Energy Consumption and Intelligent Brightness Control:**
  Adopt more energy-efficient technologies to ensure that the system can operate for extended periods without requiring frequent maintenance. Additionally, instead of turning off a portion of the LED matrix when an object is detected, implement an intelligent brightness control mechanism that adjusts the LED intensity based on the distance of the approaching vehicle (see the sketch below).
- **Modular Improvements:**
  Continue refactoring code for improved maintainability and scalability.
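As a rough, purely hypothetical illustration of the brightness-control idea (not part of the current code), LED intensity could be derived from the apparent size of the detected vehicle, which grows as it approaches:

```python
def intensity_from_box(box_height, frame_height, max_level=15):
    """Map apparent vehicle size to a MAX7219 intensity level (0-15).

    A taller bounding box implies a closer vehicle, so the LEDs dim more aggressively.
    """
    closeness = min(box_height / frame_height, 1.0)   # 0.0 = far away, 1.0 = very close
    return round(max_level * (1.0 - closeness))

print(intensity_from_box(box_height=120, frame_height=720))   # distant car: stays fairly bright
print(intensity_from_box(box_height=600, frame_height=720))   # close car: heavily dimmed
```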
This project is licensed under the MIT License.
Contributions are welcome! Please fork the repository and submit a pull request with your proposed changes.