thendralmagudapathi/Instance-Segmentation

YOLOv8 Segmentation Annotation Converter and Trainer

This project provides a complete pipeline to process polygon-style annotations, convert them into YOLO-compatible formats, and train a segmentation model using YOLOv8. The code supports both image-level and per-object annotations in XML, normalizes annotation coordinates, and provides utilities to prepare datasets for training.

Note: All client-specific and confidential data has been removed. This repository is a sanitized version showcasing the workflow and implementation logic.


Project Structure

1. yaml.yaml

This file defines the dataset configuration required by YOLOv8 for segmentation training.

  • names: Class labels represented as strings mapped to integer indices.
  • nc: Number of classes (16).
  • train, val, test: Paths to dataset images.
  • path: Base path to the dataset folder.
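A minimal sketch of such a configuration is shown below. The class names and paths here are placeholders for illustration, not the project's actual values; only the structure (path, train/val/test, nc, names) follows the YOLOv8 convention described above.

```yaml
path: /path/to/dataset      # base path to the dataset folder
train: images/train         # relative to path
val: images/val
test: images/test

nc: 16                      # number of classes

names:
  0: class_0
  1: class_1
  # ... one entry per class, up to index 15
```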

2. NormalizedAnnotations.py

This script:

  • Reads raw polygon annotations in text format.
  • Uses PIL to extract image dimensions.
  • Converts polygon points into YOLO's normalized format (coordinates scaled to the range 0–1 relative to image width and height).
  • Writes the converted annotations to new .txt files in YOLO format.

Key Features:

  • Each .txt output contains lines with class index followed by normalized x, y coordinates.
  • Validates that each image has a matching annotation file before conversion.
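The normalization step can be sketched as follows. The input line format (`<label> x1,y1;x2,y2;...`) and the helper names are assumptions based on the formats described in this README, not the script's exact code:

```python
from pathlib import Path

from PIL import Image  # pip install pillow


def normalize_polygon(points, width, height):
    """Scale absolute (x, y) pixel coordinates into the 0-1 range."""
    return [(x / width, y / height) for x, y in points]


def convert_annotation(txt_path, img_path, out_path):
    """Convert one raw polygon .txt file into YOLO normalized format.

    Assumed input line format: "<label> x1,y1;x2,y2;..." (hypothetical).
    Output line format: "<class_index> x1 y1 x2 y2 ..." with values in 0-1.
    """
    with Image.open(img_path) as im:
        w, h = im.size  # PIL gives (width, height)

    out_lines = []
    for line in Path(txt_path).read_text().splitlines():
        label, coords = line.split(maxsplit=1)
        pts = [tuple(map(float, p.split(","))) for p in coords.split(";")]
        norm = normalize_polygon(pts, w, h)
        flat = " ".join(f"{v:.6f}" for xy in norm for v in xy)
        out_lines.append(f"{label} {flat}")

    Path(out_path).write_text("\n".join(out_lines) + "\n")
```

The key invariant is that every coordinate is divided by the matching image dimension, which is what makes the labels resolution-independent for YOLOv8.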

3. xmltotext.py

This script parses an XML annotation file containing polygons and exports the annotation data in plain text format, usable for training.

Modes of Operation:

  • Commented Section 1: Converts all annotations into a single file.
  • Active Section: Saves each image's annotations in a separate .txt file.

Output Format:

Each line in output .txt:

<label> x1,y1;x2,y2;...xn,yn
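The per-image extraction can be sketched with the standard library's xml.etree. The XML schema below (image elements with polygon children carrying label and points attributes) is a hypothetical example, not necessarily the project's actual schema; adjust the tag and attribute names to match your annotation files:

```python
import xml.etree.ElementTree as ET


def polygons_from_xml(xml_text):
    """Map each image name to its annotation lines ("<label> x1,y1;x2,y2;...").

    Assumed schema (hypothetical):
      <annotations>
        <image name="...">
          <polygon label="..." points="x1,y1;x2,y2;..."/>
        </image>
      </annotations>
    """
    root = ET.fromstring(xml_text)
    out = {}
    for image in root.iter("image"):
        lines = [
            f'{poly.get("label")} {poly.get("points")}'
            for poly in image.iter("polygon")
        ]
        out[image.get("name")] = lines
    return out
```

In the "active section" mode described above, each dictionary entry would then be written to its own .txt file named after the image.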

4. trial.ipynb

A training notebook for YOLOv8 segmentation using the ultralytics package. It includes:

  • Configuration of training parameters.
  • Model selection (yolov8n-seg.pt).
  • Dataset path loading via the provided YAML file.
  • Use of standard YOLOv8 training arguments such as batch, epochs, imgsz, augment, warmup, and more.
  • System: Python 3.12, Torch 2.2.0, CPU (Intel Core i5).
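The notebook's training call can be sketched as below. The specific hyperparameter values are illustrative assumptions; the notebook's actual settings may differ, and device="cpu" mirrors the Intel Core i5 setup noted above.

```python
from ultralytics import YOLO  # pip install ultralytics

# Load the pretrained nano segmentation checkpoint.
model = YOLO("yolov8n-seg.pt")

# Fine-tune on the custom dataset described by the YAML config.
results = model.train(
    data="yaml.yaml",   # dataset configuration from section 1
    epochs=50,          # illustrative value
    imgsz=640,
    batch=8,
    augment=True,
    device="cpu",
)
```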

How it Works

1. Annotation Parsing:

  • Original annotations (in XML or plain text with polygon data) are read.
  • Each polygon's coordinates are extracted.
  • Coordinates are normalized with respect to image width and height.

2. Annotation Formatting:

  • The normalized coordinates are written in YOLOv8-compatible .txt format.
  • Each image gets a corresponding annotation file.

3. Training:

  • The ultralytics YOLOv8 library drives training.
  • The yolov8n-seg segmentation model is fine-tuned on the custom dataset.

Getting Started

Dependencies

  • Python 3.8+
  • Ultralytics YOLOv8: pip install ultralytics
  • Pillow (PIL): pip install pillow

Steps

  1. Place your XML or raw text annotations in the specified folders.
  2. Run xmltotext.py to convert XML to text annotations.
  3. Run NormalizedAnnotations.py to normalize them into YOLO format.
  4. Ensure the dataset is split into train, val, and test with matching YAML paths.
  5. Use trial.ipynb or a Python script to start model training with YOLOv8.

License

This repository is open-sourced under the MIT License. See the LICENSE file for details.


Disclaimer

This project has been stripped of all client-related identifiers. The logic and implementation provided are for educational or portfolio demonstration purposes only.
