This repository contains classic machine learning algorithms implemented from scratch. Each algorithm folder includes code, a Jupyter Notebook, a detailed report, and sample data, making it easy to understand and experiment with each model.
- Decision Tree: A model that uses tree-like structures for decision-making and classification.
- K-Means Clustering: An unsupervised algorithm for clustering data into *k* groups based on feature similarity (see the K-Means sketch after this list).
- Linear Discriminant Analysis (LDA): A dimensionality reduction technique primarily used for feature extraction in classification tasks.
- Linear Regression: A simple yet powerful algorithm for predicting a target variable based on linear relationships.
- Naive Bayes: A probabilistic classifier based on Bayes' theorem, commonly used for text classification.
- Perceptron: A simple binary linear classifier that forms the foundation of neural networks (see the perceptron sketch after this list).
- Principal Component Analysis (PCA): A method for reducing dimensionality while preserving variance in the data.
- Support Vector Machine (SVM): A supervised learning model useful for classification and regression tasks.
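To give a flavor of the from-scratch style, here is a minimal K-Means sketch in NumPy. It is illustrative only: the function name, signature, and defaults below are hypothetical and are not taken from this repository's code.

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Illustrative K-Means: alternate between assigning each point to its
    nearest centroid and recomputing centroids as cluster means."""
    rng = np.random.default_rng(seed)
    # Initialize centroids by sampling k distinct data points.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Distances of every point to every centroid, shape (n_points, k).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break  # Assignments are stable, so the algorithm has converged.
        centroids = new_centroids
    return centroids, labels

# Example: two well-separated Gaussian blobs.
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
centroids, labels = kmeans(X, k=2)
```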
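Similarly, here is a minimal perceptron using the classic mistake-driven update rule. Again, this is a hedged sketch for orientation; the repository's own implementation may differ in names and details.

```python
import numpy as np

def perceptron_train(X, y, epochs=20, lr=1.0):
    """Illustrative perceptron: for each misclassified example, nudge the
    weights toward it. Expects labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:  # wrong side of (or on) the boundary
                w += lr * yi * xi
                b += lr * yi
                mistakes += 1
        if mistakes == 0:
            break  # Converged on linearly separable data.
    return w, b

def perceptron_predict(X, w, b):
    return np.where(X @ w + b >= 0, 1, -1)
```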
Each algorithm is organized into its own folder, with the following structure:

    algorithm_name/
    ├── algorithm_name.ipynb     # Jupyter Notebook with explanations, visualizations, and interactive code cells.
    ├── algorithm_name.py        # Python script containing the algorithm's implementation.
    ├── algorithm_name.report    # Report summarizing the theory, math, and approach behind the algorithm.
    └── data/                    # Any dataset required to run and test the algorithm.
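For example, assuming the Linear Regression folder follows this pattern (the folder and file names below are inferred from the layout above, not verified), you can open the notebook or run the script directly:

```bash
# Explore the interactive walkthrough for one algorithm
# (hypothetical path following the algorithm_name/ pattern above).
jupyter notebook linear_regression/linear_regression.ipynb

# Or run the standalone implementation script against the bundled sample data.
python linear_regression/linear_regression.py
```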