@@ -201,7 +201,7 @@ <h2>Machine learning</h2>
 <div class="tooltip-container">
   <a href="unsupervised_learning.html">
     <div class="portfolio-wrap">
-      <img src="assets/img/data-engineering/ML-rg-0.png" class="img-fluid" alt="" style="width: 65%; height: auto;">
+      <img src="assets/img/machine-ln/unsupervised_logo.png" class="img-fluid" alt="" style="width: 65%; height: auto;">
       <p class="portfolio-title">3. Unsupervised Algorithms</p>
     </div>
     <div class="tooltip-text">
@@ -226,7 +226,7 @@ <h2>Machine learning</h2>
 <a href="Linear-reg.html">
   <div class="portfolio-wrap">
     <img src="assets/img/machine-ln/gradient-discent.png" class="img-fluid" alt="" style="width: 85%; height: auto;">
-    <p class="portfolio-title">3. Gradient Descent Method</p>
+    <p class="portfolio-title">4. Gradient Descent Method</p>
   </div>
   <div class="tooltip-text">
     An optimization technique used to minimize the loss function by iteratively adjusting model parameters in the direction of the steepest descent.
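To make the card's description concrete, here is a minimal NumPy sketch of gradient descent on a mean-squared-error loss (illustrative only, not code from the linked page; the toy data and names like `theta` and `lr` are our own):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + 1.0 + rng.normal(scale=0.1, size=100)

Xb = np.c_[np.ones(len(X)), X]   # prepend a bias column
theta = np.zeros(2)              # parameters to fit
lr = 0.1                         # learning rate (step size)

for _ in range(500):
    grad = 2.0 / len(y) * Xb.T @ (Xb @ theta - y)  # gradient of the MSE loss
    theta -= lr * grad                             # step opposite the gradient

print(theta)  # close to [1.0, 3.0]
```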
@@ -250,7 +250,7 @@ <h2>Machine learning</h2>
 <a href="mle.html">
   <div class="portfolio-wrap">
     <img src="assets/img/machine-ln/mle-logo.png" class="img-fluid" alt="" style="width: 85%; height: auto;">
-    <p class="portfolio-title">4. MLE & MAP</p>
+    <p class="portfolio-title">5. MLE & MAP</p>
   </div>
   <div class="tooltip-text">
     MLE (Maximum Likelihood Estimation) estimates model parameters by maximizing the likelihood function, while MAP (Maximum A Posteriori) incorporates prior distributions into parameter estimation.
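A small sketch of the MLE/MAP contrast for the mean of a Gaussian with known variance (a toy example of our own, assuming a Gaussian prior N(mu0, tau2); not code from mle.html):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.0, size=20)   # data from N(2, 1)

sigma2 = 1.0            # noise variance, assumed known
mu0, tau2 = 0.0, 0.5    # Gaussian prior N(mu0, tau2) on the mean

mu_mle = x.mean()       # MLE: the value that maximizes the likelihood alone
# MAP: the posterior mode, trading the data term off against the prior
mu_map = (x.sum() / sigma2 + mu0 / tau2) / (len(x) / sigma2 + 1.0 / tau2)

print(mu_mle, mu_map)   # the MAP estimate is shrunk toward mu0
```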
@@ -274,7 +274,7 @@ <h2>Machine learning</h2>
 <a href="Linear-Parameter-estimation.html">
   <div class="portfolio-wrap">
     <img src="assets/img/data-engineering/Linear-reg1.png" class="img-fluid" alt="" style="width: 85%; height: auto;">
-    <p class="portfolio-title">5. Linear Regression</p>
+    <p class="portfolio-title">6. Linear Regression</p>
   </div>
   <div class="tooltip-text">
     A statistical method for modeling the relationship between a dependent variable and one or more independent variables using a linear equation.
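A minimal sketch of that fit via the normal equation (illustrative toy data, not code from the linked page):

```python
import numpy as np

X = np.array([[1.0], [2.0], [3.0], [4.0]])   # independent variable
y = np.array([2.1, 3.9, 6.2, 8.0])           # dependent variable

Xb = np.c_[np.ones(len(X)), X]               # intercept column
beta = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)  # solves (X^T X) beta = X^T y
print(beta)                                  # [intercept, slope]
```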
@@ -298,7 +298,7 @@ <h2>Machine learning</h2>
 <a href="Polinomial-regression.html">
   <div class="portfolio-wrap">
     <img src="assets/img/machine-ln/polinomial-reg.png" class="img-fluid" alt="" style="width: 75%; height: auto;">
-    <p class="portfolio-title">6. Polynomial Regression</p>
+    <p class="portfolio-title">7. Polynomial Regression</p>
   </div>
   <div class="tooltip-text">
     An extension of linear regression that models the relationship between the dependent variable and the independent variable(s) as an nth-degree polynomial.
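A short sketch showing that polynomial regression is still linear least squares, just on expanded features (the toy data and degree are our own choices):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-1.0, 1.0, 30)
y = 1.0 - 2.0 * x + 3.0 * x**2 + rng.normal(scale=0.05, size=30)

V = np.vander(x, N=3, increasing=True)        # feature columns: 1, x, x^2
coef, *_ = np.linalg.lstsq(V, y, rcond=None)  # ordinary least-squares fit
print(coef)                                   # close to [1, -2, 3]
```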
@@ -322,7 +322,7 @@ <h2>Machine learning</h2>
 <a href="Ridge-lasso-elasticnet.html">
   <div class="portfolio-wrap">
     <img src="assets/img/machine-ln/lasso-jtheta.png" class="img-fluid" alt="" style="max-width: 75%; max-height: 20%;">
-    <p class="portfolio-title">7. Ridge-lasso-Elasticnet</p>
+    <p class="portfolio-title">8. Ridge-lasso-Elasticnet</p>
   </div>
   <div class="tooltip-text">
     Regularization techniques (L2 for ridge, L1 for lasso, and a mix of both for elastic net) that penalize large coefficients to prevent overfitting and improve model performance.
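A quick sketch comparing the three penalties, assuming scikit-learn is available (the toy data and alpha values are our own, not from the linked page):

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso, Ridge

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 5))
true_w = np.array([1.5, 0.0, 0.0, -2.0, 0.0])    # sparse true weights
y = X @ true_w + rng.normal(scale=0.1, size=50)

for model in (Ridge(alpha=1.0), Lasso(alpha=0.1), ElasticNet(alpha=0.1, l1_ratio=0.5)):
    model.fit(X, y)
    # the L1-based models drive the weak coefficients to exactly zero
    print(type(model).__name__, np.round(model.coef_, 2))
```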
@@ -345,7 +345,7 @@ <h2>Machine learning</h2>
 <a href="pca-analysis.html">
   <div class="portfolio-wrap">
     <img src="assets/img/machine-ln/pca-logo.png" class="img-fluid" alt="" style="width: 80%; height: 80%;">
-    <p class="portfolio-title">8. PC Analysis (PCA)</p>
+    <p class="portfolio-title">9. PC Analysis (PCA)</p>
   </div>
   <div class="tooltip-text">
     Principal Component Analysis is a technique for dimensionality reduction by transforming data into orthogonal components.
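A compact NumPy sketch of PCA via eigendecomposition of the covariance matrix (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 3))
Xc = X - X.mean(axis=0)                 # center the data first

cov = Xc.T @ Xc / (len(Xc) - 1)         # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
W = eigvecs[:, ::-1][:, :2]             # top-2 orthogonal components
Z = Xc @ W                              # data projected onto the components
print(Z.shape)                          # (200, 2)
```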
@@ -369,7 +369,7 @@ <h2>Machine learning</h2>
 <a href="classification.html">
   <div class="portfolio-wrap">
     <img src="assets/img/data-engineering/classification.png" class="img-fluid" alt="" style="width: 75%; height: auto;">
-    <p class="portfolio-title">9. Classification Regression</p>
+    <p class="portfolio-title">10. Classification Regression</p>
   </div>
   <div class="tooltip-text">
     Methods for classifying data into categories and predicting continuous values using regression techniques.
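A small scikit-learn sketch contrasting the two prediction types (synthetic data; purely illustrative):

```python
from sklearn.datasets import make_classification, make_regression
from sklearn.linear_model import LinearRegression, LogisticRegression

Xc, yc = make_classification(n_samples=100, n_features=4, random_state=0)
print(LogisticRegression().fit(Xc, yc).predict(Xc[:3]))  # discrete class labels

Xr, yr = make_regression(n_samples=100, n_features=4, random_state=0)
print(LinearRegression().fit(Xr, yr).predict(Xr[:3]))    # continuous predictions
```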
@@ -393,7 +393,7 @@ <h2>Machine learning</h2>
 <a href="logistic-regression.html">
   <div class="portfolio-wrap">
     <img src="assets/img/machine-ln/deep-smf.png" class="img-fluid" alt="" style="max-width: 95%; max-height: 80%;">
-    <p class="portfolio-title">10. Logistic Regression</p>
+    <p class="portfolio-title">11. Logistic Regression</p>
   </div>
   <div class="tooltip-text">
     A classification algorithm used for binary outcomes, predicting probabilities based on the logistic function.
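A minimal sketch, assuming scikit-learn, that fits a binary classifier and then reproduces one predicted probability by hand through the logistic function:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.5], [1.5], [2.5], [3.5]])
y = np.array([0, 0, 1, 1])

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[2.0]]))            # [P(y=0), P(y=1)] at x = 2.0

z = clf.coef_[0, 0] * 2.0 + clf.intercept_[0]
print(1.0 / (1.0 + np.exp(-z)))              # same P(y=1) via sigmoid(w*x + b)
```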
@@ -418,7 +418,7 @@ <h2>Machine learning</h2>
 <a href="naive-byes.html">
   <div class="portfolio-wrap">
     <img src="assets/img/machine-ln/classification-naive-modified1.png" class="img-fluid" alt="" style="max-width: 55%; max-height: 55%;">
-    <p class="portfolio-title">11. Naive Bayes ML</p>
+    <p class="portfolio-title">12. Naive Bayes ML</p>
   </div>
   <div class="tooltip-text">
     A probabilistic classifier based on Bayes' theorem with an assumption of feature independence.
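A short Gaussian naive Bayes sketch on the iris dataset (assuming scikit-learn; not code from naive-byes.html):

```python
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
clf = GaussianNB().fit(X, y)                 # fits per-class, per-feature Gaussians
print(clf.predict(X[:3]), clf.score(X, y))   # predicted labels and training accuracy
```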
@@ -442,7 +442,7 @@ <h2>Machine learning</h2>
 <a href="knn.html">
   <div class="portfolio-wrap">
     <img src="assets/img/machine-ln/classification-knn1.png" class="img-fluid" alt="" style="width: 55%; height: auto;">
-    <p class="portfolio-title">12. KNN ML</p>
+    <p class="portfolio-title">13. KNN ML</p>
   </div>
   <div class="tooltip-text">
     K-Nearest Neighbors is a simple, non-parametric algorithm used for classification and regression based on proximity.
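A minimal KNN sketch on iris (assuming scikit-learn; the query point is our own):

```python
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)   # no parameters are "learned"
print(knn.predict([[5.0, 3.4, 1.5, 0.2]]))            # majority vote of the 5 nearest points
```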
@@ -466,7 +466,7 @@ <h2>Machine learning</h2>
 <a href="decision-tree.html">
   <div class="portfolio-wrap">
     <img src="assets/img/machine-ln/classification-decision-tree.png" class="img-fluid" alt="" style="max-width: 80%; max-height: 70%;">
-    <p class="portfolio-title">13. Decision Tree</p>
+    <p class="portfolio-title">14. Decision Tree</p>
   </div>
   <div class="tooltip-text">
     A model that uses a tree-like graph of decisions and their possible consequences for classification and regression.
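A small sketch that fits a shallow tree and prints its learned splits (assuming scikit-learn; depth choice is our own):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree))   # the learned if/else splits as plain text
```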
@@ -491,7 +491,7 @@ <h2>Machine learning</h2>
 <a href="support-vector.html">
   <div class="portfolio-wrap">
     <img src="assets/img/machine-ln/classification-svm.png" class="img-fluid" alt="" style="width: 80%; height: auto;">
-    <p class="portfolio-title">14. Support Vector</p>
+    <p class="portfolio-title">15. Support Vector</p>
   </div>
   <div class="tooltip-text">
     Support Vector Machines are used for classification and regression by finding the optimal hyperplane that maximizes the margin between classes.
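A minimal linear-SVM sketch on separable blobs (assuming scikit-learn; illustrative only):

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=60, centers=2, random_state=0)
svm = SVC(kernel="linear", C=1.0).fit(X, y)
print(svm.support_vectors_.shape)   # only these points define the margin
```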