@@ -211,6 +211,20 @@ <h2>Machine learning</h2>
           </div>
         </div>
 
+        <div class="col-lg-4 col-md-6 portfolio-item filter-python">
+          <div class="tooltip-container">
+            <a href="reinforcement_learning.html">
+              <div class="portfolio-wrap">
+                <img src="assets/img/machine-ln/reinforcment_logo.png" class="img-fluid" alt="" style="width: 65%; height: auto;">
+                <p class="portfolio-title">4. Reinforcement Learning</p>
+              </div>
+              <div class="tooltip-text">
+                An agent learns to make decisions by interacting with an environment.
+              </div>
+            </a>
+          </div>
+        </div>
+
         <!-- Portfolio items for 'ML' category -->
         <!-- <div class="col-lg-4 col-md-6 portfolio-item filter-ML">
           <a href="Linear-reg.html" title="An optimization technique used to minimize the loss function by iteratively adjusting model parameters in the direction of the steepest descent.">
@@ -226,7 +240,7 @@ <h2>Machine learning</h2>
             <a href="Linear-reg.html">
               <div class="portfolio-wrap">
                 <img src="assets/img/machine-ln/gradient-discent.png" class="img-fluid" alt="" style="width: 85%; height: auto;">
-                <p class="portfolio-title">4. Gradient Descent Method</p>
+                <p class="portfolio-title">5. Gradient Descent Method</p>
               </div>
               <div class="tooltip-text">
                 An optimization technique used to minimize the loss function by iteratively adjusting model parameters in the direction of the steepest descent.
@@ -250,7 +264,7 @@ <h2>Machine learning</h2>
             <a href="mle.html">
               <div class="portfolio-wrap">
                 <img src="assets/img/machine-ln/mle-logo.png" class="img-fluid" alt="" style="width: 85%; height: auto;">
-                <p class="portfolio-title">5. MLE & MAP</p>
+                <p class="portfolio-title">6. MLE & MAP</p>
               </div>
               <div class="tooltip-text">
                 MLE (Maximum Likelihood Estimation) estimates model parameters by maximizing the likelihood function, while MAP (Maximum A Posteriori) incorporates prior distributions into parameter estimation.
@@ -274,7 +288,7 @@ <h2>Machine learning</h2>
             <a href="Linear-Parameter-estimation.html">
               <div class="portfolio-wrap">
                 <img src="assets/img/data-engineering/Linear-reg1.png" class="img-fluid" alt="" style="width: 85%; height: auto;">
-                <p class="portfolio-title">6. Linear Regression</p>
+                <p class="portfolio-title">7. Linear Regression</p>
               </div>
               <div class="tooltip-text">
                 A statistical method for modeling the relationship between a dependent variable and one or more independent variables using a linear equation.
@@ -298,7 +312,7 @@ <h2>Machine learning</h2>
             <a href="Polinomial-regression.html">
               <div class="portfolio-wrap">
                 <img src="assets/img/machine-ln/polinomial-reg.png" class="img-fluid" alt="" style="width: 75%; height: auto;">
-                <p class="portfolio-title">7. Polynomial Regression</p>
+                <p class="portfolio-title">8. Polynomial Regression</p>
               </div>
               <div class="tooltip-text">
                 An extension of linear regression that models the relationship between the dependent variable and the independent variable(s) as an nth-degree polynomial.
@@ -322,7 +336,7 @@ <h2>Machine learning</h2>
             <a href="Ridge-lasso-elasticnet.html">
               <div class="portfolio-wrap">
                 <img src="assets/img/machine-ln/lasso-jtheta.png" class="img-fluid" alt="" style="max-width: 75%; max-height: 20%;">
-                <p class="portfolio-title">8. Ridge-lasso-Elasticnet</p>
+                <p class="portfolio-title">9. Ridge-Lasso-ElasticNet</p>
               </div>
               <div class="tooltip-text">
                 Techniques combining regularization methods to prevent overfitting and improve model performance.
@@ -345,7 +359,7 @@ <h2>Machine learning</h2>
             <a href="pca-analysis.html">
               <div class="portfolio-wrap">
                 <img src="assets/img/machine-ln/pca-logo.png" class="img-fluid" alt="" style="width: 80%; height: 80%;">
-                <p class="portfolio-title">9. PC Analysis (PCA)</p>
+                <p class="portfolio-title">10. PC Analysis (PCA)</p>
               </div>
               <div class="tooltip-text">
                 Principal Component Analysis is a technique for dimensionality reduction by transforming data into orthogonal components.
@@ -369,7 +383,7 @@ <h2>Machine learning</h2>
             <a href="classification.html">
               <div class="portfolio-wrap">
                 <img src="assets/img/data-engineering/classification.png" class="img-fluid" alt="" style="width: 75%; height: auto;">
-                <p class="portfolio-title">10. Classification Regression</p>
+                <p class="portfolio-title">11. Classification & Regression</p>
               </div>
               <div class="tooltip-text">
                 Methods for classifying data into categories and predicting continuous values using regression techniques.
@@ -393,7 +407,7 @@ <h2>Machine learning</h2>
             <a href="logistic-regression.html">
              <div class="portfolio-wrap">
                 <img src="assets/img/machine-ln/deep-smf.png" class="img-fluid" alt="" style="max-width: 95%; max-height: 80%;">
-                <p class="portfolio-title">11. Logistic Regression</p>
+                <p class="portfolio-title">12. Logistic Regression</p>
               </div>
               <div class="tooltip-text">
                 A classification algorithm used for binary outcomes, predicting probabilities based on the logistic function.
@@ -418,7 +432,7 @@ <h2>Machine learning</h2>
             <a href="naive-byes.html">
               <div class="portfolio-wrap">
                 <img src="assets/img/machine-ln/classification-naive-modified1.png" class="img-fluid" alt="" style="max-width: 55%; max-height: 55%;">
-                <p class="portfolio-title">12. Naive Bayes ML</p>
+                <p class="portfolio-title">13. Naive Bayes ML</p>
               </div>
               <div class="tooltip-text">
                 A probabilistic classifier based on Bayes' theorem with an assumption of feature independence.
@@ -442,7 +456,7 @@ <h2>Machine learning</h2>
             <a href="knn.html">
               <div class="portfolio-wrap">
                 <img src="assets/img/machine-ln/classification-knn1.png" class="img-fluid" alt="" style="width: 55%; height: auto;">
-                <p class="portfolio-title">13. KNN ML</p>
+                <p class="portfolio-title">14. KNN ML</p>
               </div>
               <div class="tooltip-text">
                 K-Nearest Neighbors is a simple, non-parametric algorithm used for classification and regression based on proximity.
@@ -466,7 +480,7 @@ <h2>Machine learning</h2>
             <a href="decision-tree.html">
               <div class="portfolio-wrap">
                 <img src="assets/img/machine-ln/classification-decision-tree.png" class="img-fluid" alt="" style="max-width: 80%; max-height: 70%;">
-                <p class="portfolio-title">14. Decision Tree</p>
+                <p class="portfolio-title">15. Decision Tree</p>
               </div>
               <div class="tooltip-text">
                 A model that uses a tree-like graph of decisions and their possible consequences for classification and regression.
@@ -491,7 +505,7 @@ <h2>Machine learning</h2>
             <a href="support-vector.html">
               <div class="portfolio-wrap">
                 <img src="assets/img/machine-ln/classification-svm.png" class="img-fluid" alt="" style="width: 80%; height: auto;">
-                <p class="portfolio-title">15. Support Vector</p>
+                <p class="portfolio-title">16. Support Vector Machines</p>
               </div>
               <div class="tooltip-text">
                 Support Vector Machines are used for classification and regression by finding the optimal hyperplane that maximizes the margin between classes.