Releases: open-edge-platform/edge-ai-libraries
Edge AI Libraries v1.2.0
Release Overview
The Edge AI Libraries v1.2.0 hosts a collection of libraries, microservices, and tools for Edge application development. This project also includes sample applications to showcase the AI use cases that are relevant across multiple vertical industries.
Highlighted Features
Edge AI Libraries include the following new features and improvements:
- Deep Learning Streamer: a streaming media analytics framework based on GStreamer*. This release adds the ability for end users to create a custom post-processing library, latency mode support, visual embedding enablement, automatic INT8 quantization for YOLO models, native support for Windows 11, Edge Microvisor Toolkit support, upgrades to OpenVINO™ 2025.2, GStreamer* 1.26.4, and NPU driver v1.19, plus build optimizations and reduced Docker image sizes.
- Support for new models includes:
- Vision Language Models (VLM):
  - Clip-ViT-Base-B16
  - Clip-ViT-Base-B32
  - miniCPM2.6
- License Plate Recognition (yolov8_license_plate_detector, ch_PP-OCRv4_rec_infer)
- 3D Object Detection (limited support in this release via Deep Scenario 3D vehicle detection commercial model)
- Support for new elements includes:
- gstgenai for converting video into text with VLM models
- gvarealsense for integration with RealSense cameras, enabling video and depth stream capture in GStreamer* pipelines.
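Elements like these plug into standard GStreamer pipeline descriptions. As a minimal sketch, the snippet below assembles a gst-launch command around the documented gvadetect and gvawatermark elements; the source URI and model path are hypothetical, not taken from the release:

```python
def build_dlstreamer_pipeline(source_uri, detection_model, device="CPU"):
    """Assemble a gst-launch-1.0 command line for a typical
    DL Streamer detection pipeline (illustrative sketch only)."""
    elements = [
        f"urisourcebin uri={source_uri}",
        "decodebin",
        # gvadetect runs inference with an OpenVINO IR model
        f"gvadetect model={detection_model} device={device}",
        # gvawatermark overlays detection results on the frames
        "gvawatermark",
        "videoconvert",
        "autovideosink",
    ]
    return "gst-launch-1.0 " + " ! ".join(elements)

cmd = build_dlstreamer_pipeline(
    "file:///data/traffic.mp4",          # hypothetical input
    "/models/yolov8_license_plate.xml",  # hypothetical model path
)
print(cmd)
```

Swapping `device="CPU"` for `GPU` or `NPU` is the usual way to retarget inference without changing the rest of the pipeline; consult the DL Streamer elements documentation for the actual property set of each element.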
- Deep Learning Streamer Pipeline Server: a containerized microservice built on top of GStreamer for developing and deploying video analytics pipelines.
- Model Registry: provides capabilities to manage the lifecycle of AI models.
- Data Ingestion microservice: loads, parses, and creates embeddings for popular document types such as PDF, DOCX, and TXT files.
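The Pipeline Server is typically driven over REST. As a sketch, the snippet below only builds a request body for starting a pipeline; the source/destination field names are illustrative assumptions, so consult the service's API reference for the actual contract:

```python
import json

def start_pipeline_request(source_uri, destination_path):
    """Build a JSON body for a hypothetical POST to the Pipeline
    Server's REST API. The schema here is an assumption for
    illustration, not the authoritative contract."""
    return {
        "source": {"uri": source_uri, "type": "uri"},
        "destination": {
            # write inference metadata to a file on the host
            "metadata": {"type": "file", "path": destination_path},
        },
    }

body = start_pipeline_request("file:///data/traffic.mp4", "/tmp/results.jsonl")
print(json.dumps(body, indent=2))
```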
Sample applications include:
- Chat Question-and-Answer Core: a foundational Retrieval-Augmented Generation (RAG) pipeline that allows users to ask questions and receive answers, including answers based on their own private data corpus.
- Chat Question-and-Answer: a modular, microservices-based counterpart to Chat Question-and-Answer Core, with each constituent element of the RAG pipeline bundled as an independent microservice.
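The retrieval step at the heart of both Chat Question-and-Answer variants can be sketched with toy embeddings. The character-frequency "embedding" below is a stand-in for illustration only; a real deployment would use the Data Ingestion microservice and a proper embedding model:

```python
import math

def embed(text):
    """Toy embedding: bag-of-characters frequency vector
    (a stand-in for a real embedding model)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus):
    """Rank corpus documents by similarity to the query -- the 'R'
    in RAG; generation would pass the top hits to an LLM."""
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)

docs = ["edge inference on GPUs", "kitchen recipes", "video analytics pipelines"]
top = retrieve("gpu inference at the edge", docs)[0]
```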
Tools include:
- Visual Pipeline and Platform Evaluation Tool: this release delivers the Simple Video Structurization pipeline, a versatile, use-case-agnostic solution that supports license plate recognition, vehicle detection with attribute classification, and other object detection and classification tasks, adaptable based on the selected model. The tool now also supports:
- live output, allowing you to view real-time results directly in the UI and get immediate feedback on video processing tasks.
- new pre-trained models for object detection (YOLO v8 License Plate Detector) and classification (PaddleOCR, Vehicle Attributes Recognition Barrier 0039), expanding the range of supported use cases and improving accuracy for specific tasks.
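The license plate recognition path of the Simple Video Structurization pipeline is a two-stage detect-then-recognize flow. The stubs below only illustrate its shape; the boxes and text are hard-coded placeholders, not real inference:

```python
def detect_plates(frame):
    """Stand-in for a YOLOv8 license-plate detector: returns
    bounding boxes as (x, y, w, h). Real inference would run
    inside the video pipeline, not in Python stubs."""
    return [(40, 60, 120, 32)]

def recognize_text(frame, box):
    """Stand-in for an OCR recognition stage: returns the decoded
    string for one detected crop (hard-coded here)."""
    return "ABC-1234"

def structurize(frame):
    """Detection followed by per-box recognition -- the overall
    shape of the license plate recognition flow."""
    return [{"box": box, "text": recognize_text(frame, box)}
            for box in detect_plates(frame)]

results = structurize(frame=None)  # a real frame would be an image array
```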
- The latest release of SceneScape introduces new features and significant performance improvements for accelerating spatial intelligence application development using multi-modal sensor data. Feature enhancements include:
- volumetric regions of interest (ROIs): while traditional ROIs are 2D (flat polygons on a map), volumetric ROIs extend this concept into three dimensions, allowing for more precise spatial analysis, especially in environments where vertical movement or object height matters
- enhanced tracker performance, now reliably handling a multi-object tracking density of 50 concurrent tracks
- scene import/export
- native support for geospatial coordinate output and a shift in the underlying Video Analytics engine from Percebro to DL Streamer Pipeline Server.
- Software quality improvements, including refactored build systems to remove unnecessary dependencies, reduce image sizes, and optimize build times. Containerization best practices, including volumes, secrets, and configurations, were also employed. SceneScape is now licensed under Apache 2.0.
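The difference between a flat ROI and a volumetric one comes down to also testing the vertical (z) extent. A minimal sketch with an axis-aligned box follows; SceneScape's actual ROI representation may differ:

```python
def in_volumetric_roi(point, roi):
    """Check whether a 3D point falls inside an axis-aligned
    volumetric ROI given as ((xmin, ymin, zmin), (xmax, ymax, zmax)).
    Illustrative only -- SceneScape's ROI model may be richer."""
    lo, hi = roi
    return all(l <= p <= h for p, l, h in zip(point, lo, hi))

# hypothetical 5m x 5m zone with a 2.5m height limit
dock_zone = ((0.0, 0.0, 0.0), (5.0, 5.0, 2.5))
print(in_volumetric_roi((2.0, 3.0, 1.0), dock_zone))  # inside the box
print(in_volumetric_roi((2.0, 3.0, 4.0), dock_zone))  # z exceeds the height limit
```

A 2D ROI would ignore the z comparison entirely, so both points above would count as "inside"; the volumetric check is what distinguishes them.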
Known Issues
- See detailed DL Streamer known issues here.
- Visual Pipeline and Platform Evaluation Tool: metrics are displayed only for the last GPU when the system has multiple discrete GPUs.
- See detailed SceneScape known issues here.
Breaking Changes
None
Edge AI Libraries v1.0.0 (Initial Release)
Release Overview
The Edge AI Libraries v1.0.0 hosts a collection of libraries, microservices, and tools for Edge application development. This project also includes sample applications to showcase the generic AI use cases.
Key Components
| Component | Category | Get Started | Developers Docs |
|---|---|---|---|
| Deep Learning Streamer | Library | Link | API Reference |
| Deep Learning Streamer Pipeline Server | Microservice | Link | API Reference |
| Document Ingestion | Microservice | Link | API Reference |
| Model Registry | Microservice | Link | API Reference |
| Object Store | Microservice | Link | Usage |
| Visual Pipeline and Performance Evaluation Tool | Tool | Link | Build instructions |
| Chat Question and Answer | Sample Application | Link | Build instructions |
| Chat Question and Answer Core | Sample Application | Link | Build instructions |
Highlighted Features
Libraries include:
- Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework is an open-source streaming media analytics framework, based on GStreamer* multimedia framework, for creating complex media analytics pipelines for the Cloud or at the Edge.
Microservices include:
- Deep Learning Streamer Pipeline Server: a containerized microservice built on top of GStreamer for developing and deploying video analytics pipelines.
- Model Registry: provides capabilities to manage the lifecycle of AI models.
- Object Store Microservice: a MinIO-based object store for building generative AI pipelines.
- Data Ingestion microservice: loads, parses, and creates embeddings for popular document types such as PDF, DOCX, and TXT files.
Sample applications include:
- Chat Question-and-Answer Core: Chat Question-and-Answer sample application is a foundational Retrieval-Augmented Generation (RAG) pipeline that allows users to ask questions and receive answers, including those based on their own private data corpus.
- Chat Question-and-Answer: Compared to the Chat Question-and-Answer Core implementation, this implementation of Chat Question-and-Answer is a modular, microservices-based approach with each constituent element of the RAG pipeline bundled as an independent microservice.
Tools include:
- Visual Pipeline and Platform Evaluation Tool: The Visual Pipeline and Platform Evaluation Tool simplifies hardware selection for AI workloads by allowing you to configure workload parameters, benchmark performance, and analyze key metrics such as throughput, CPU, and GPU usage. With its intuitive interface, the tool provides actionable insights to help you optimize hardware selection and performance.
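The metrics the tool reports can be thought of as simple aggregates over a benchmark run. A sketch follows; the field names are illustrative assumptions, not the tool's actual output schema:

```python
def summarize_run(frame_count, elapsed_s, cpu_samples, gpu_samples):
    """Aggregate a benchmark run into the kind of metrics the tool
    surfaces: throughput (FPS) plus average CPU and GPU utilization.
    Field names are illustrative, not the tool's schema."""
    return {
        "throughput_fps": frame_count / elapsed_s,
        "cpu_util_avg": sum(cpu_samples) / len(cpu_samples),
        "gpu_util_avg": sum(gpu_samples) / len(gpu_samples),
    }

# e.g. 900 frames processed in 30 seconds, with periodic utilization samples
metrics = summarize_run(
    frame_count=900, elapsed_s=30.0,
    cpu_samples=[35.0, 40.0, 45.0], gpu_samples=[60.0, 70.0, 80.0],
)
```

Comparing such summaries across candidate platforms is the core of the hardware-selection workflow the tool supports.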
Known Issues
None
Breaking Changes
None — this is the initial release.