<li><strong>Goal:</strong> Find hidden patterns or reduce complexity</li>
<li><strong>Output:</strong> Groups, compressed representations, or flagged outliers</li>
</ul>
</div>

<div class="section" id="types">
<h2>Key Types of Unsupervised Algorithms</h2>
<h3>1. Clustering Algorithms</h3>
<ul>
<li><strong>K-Means:</strong> Partitions data into <code>K</code> groups by minimizing intra-cluster distances. Fast and scalable (see the sketch after this list).</li>
<li><strong>Hierarchical Clustering:</strong> Builds a nested tree of clusters, either bottom-up (agglomerative) or top-down (divisive). Doesn’t require choosing <code>K</code> in advance.</li>
<li><strong>DBSCAN:</strong> Groups densely packed points. Great for detecting noise and arbitrarily shaped clusters.</li>
<li><strong>Gaussian Mixture Models (GMM):</strong> Soft clustering that models the data as a probabilistic mixture of Gaussian distributions.</li>
</ul>
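<p>As a minimal sketch of K-Means in practice, assuming <code>scikit-learn</code> is installed (the synthetic blob data and the choice of <code>K = 4</code> are purely illustrative):</p>
<pre><code># Cluster synthetic 2-D data into K = 4 groups with scikit-learn
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=42)  # toy data
kmeans = KMeans(n_clusters=4, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)    # cluster index for each point
print(kmeans.cluster_centers_)    # learned centroids
</code></pre>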
<h3>2. Dimensionality Reduction</h3>
<ul>
<li><strong>PCA:</strong> Projects data into a lower-dimensional space while preserving as much variance as possible (see the sketch after this list).</li>
<li><strong>t-SNE:</strong> Non-linear technique for 2D/3D visualization that preserves local neighborhood relationships.</li>
<li><strong>UMAP:</strong> Similar to t-SNE but faster and better at preserving global structure.</li>
</ul>
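<p>A minimal sketch of PCA with <code>scikit-learn</code>, projecting the 64-dimensional digits dataset down to two components (the dataset choice is illustrative):</p>
<pre><code># Reduce 64-dimensional digit images to 2 principal components
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)             # shape: (n_samples, 2)
print(pca.explained_variance_ratio_)    # variance kept per component
</code></pre>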
<h3>3. Anomaly Detection</h3>
<ul>
<li><strong>One-Class SVM:</strong> Learns a decision boundary around the normal data points.</li>
<li><strong>Isolation Forest:</strong> Randomly splits the data; anomalies are isolated in fewer splits (see the sketch after this list).</li>
<li><strong>Local Outlier Factor (LOF):</strong> Detects outliers by comparing a point’s local density to that of its neighbors.</li>
</ul>
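<p>A minimal sketch of Isolation Forest with <code>scikit-learn</code>; the normal/outlier mixture below is synthetic, and the 5% contamination rate is an assumption made for this toy data:</p>
<pre><code># Flag outliers in a synthetic mixture of normal points and noise
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(42)
X_normal = rng.normal(0, 1, size=(200, 2))      # dense "normal" cloud
X_outliers = rng.uniform(-6, 6, size=(10, 2))   # scattered anomalies
X = np.vstack([X_normal, X_outliers])

iso = IsolationForest(contamination=0.05, random_state=42)
pred = iso.fit_predict(X)   # -1 = anomaly, 1 = normal
print((pred == -1).sum(), "points flagged as anomalies")
</code></pre>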
<h3>4. Association Rule Learning</h3>
<ul>
<li><strong>Apriori Algorithm:</strong> Finds frequent itemsets and derives association rules from them (e.g., market basket analysis; see the sketch after this list).</li>
<li><strong>Eclat Algorithm:</strong> A vertical-layout alternative to Apriori, often faster on large datasets.</li>
</ul>
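<p>A minimal sketch of Apriori for market basket analysis, assuming the third-party <code>mlxtend</code> library is installed; the baskets and thresholds are illustrative:</p>
<pre><code># Mine frequent itemsets and rules from toy shopping baskets
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

baskets = [["bread", "milk"],
           ["bread", "diapers", "beer"],
           ["milk", "diapers", "beer"],
           ["bread", "milk", "diapers"]]

te = TransactionEncoder()
onehot = te.fit(baskets).transform(baskets)   # boolean item matrix
df = pd.DataFrame(onehot, columns=te.columns_)

itemsets = apriori(df, min_support=0.5, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "confidence"]])
</code></pre>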
</div>

<div class="section" id="use-cases">
<h2>Use Cases</h2>
<table>
<thead>
<tr>
<th>Task</th>
<th>Common Algorithm</th>
</tr>
</thead>
<tbody>
<tr>
<td>Customer Segmentation</td>
<td>K-Means, GMM</td>
</tr>
<tr>
<td>Document Topic Modeling</td>
<td>LDA (Latent Dirichlet Allocation)</td>
</tr>
<tr>
<td>Anomaly Detection in Logs</td>
<td>Isolation Forest, LOF</td>
</tr>
<tr>
<td>Recommender Systems</td>
<td>Association Rules (Apriori, Eclat)</td>
</tr>
<tr>
<td>High-Dimensional Data Visualization</td>
<td>t-SNE, UMAP, PCA</td>
</tr>
</tbody>
</table>
</div>
</section>

<!-- References -->
<section id="reference">
<h2>References</h2>
<ul>
<li>My GitHub repository: <a href="https://github.com/arunp77/Machine-Learning/" target="_blank">Machine-Learning</a></li>