Important note
Of course, grid search cannot guarantee that you will reach your target performance. That depends on the algorithm and the training data.
A common practice, though, is to define the candidate values by using a linear space or a log space: you manually set the limits of the hyperparameter you want to test and the number of values to draw, and the intermediate values are then generated by a linear or logarithmic function.
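For instance, NumPy's linspace and logspace functions implement exactly that idea. The following is a minimal sketch of feeding such grids into scikit-learn's GridSearchCV; the estimator, parameter names, and value ranges are illustrative assumptions, not prescriptions from this chapter:

```python
# Minimal sketch: building a hyperparameter grid with linear and log spacing.
# The estimator and ranges below are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, random_state=42)

param_grid = {
    # 5 evenly spaced values between 100 and 500 (linear space)
    "max_iter": np.linspace(100, 500, num=5, dtype=int),
    # 5 values between 10^-3 and 10^1, spaced on a log scale
    "C": np.logspace(-3, 1, num=5),
}

# Grid search exhaustively tests all 5 x 5 = 25 combinations
search = GridSearchCV(LogisticRegression(), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```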
As you might imagine, grid search can take a long time to run. A number of alternative methods have been proposed to work around this time cost. Random search is one of them: instead of testing every combination exhaustively, the values for testing are randomly sampled from the search space.
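As a sketch of that idea, scikit-learn ships RandomizedSearchCV, which samples a fixed number of candidates from user-supplied distributions instead of testing every combination; again, the estimator and distributions below are illustrative assumptions:

```python
# Minimal sketch: random search over an assumed search space.
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=500, random_state=42)

# Distributions to sample from, rather than fixed lists of values
param_distributions = {
    "n_estimators": randint(50, 300),
    "max_depth": randint(2, 10),
}

# Only n_iter=10 random combinations are tested, not the full grid
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=42),
    param_distributions,
    n_iter=10,
    cv=3,
    random_state=42,
)
search.fit(X, y)
print(search.best_params_)
```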
Another method that has gained rapid adoption across the industry is known as Bayesian optimization. Optimization algorithms, such as gradient descent, try to find what is called the global minimum by calculating derivatives of the cost function. The global minimum is the point where the algorithm configuration has the least associated cost.
Bayesian optimization is useful when calculating derivatives is not an option. Instead, it applies Bayes' theorem, a probabilistic approach, to find the global minimum in as few steps as possible.
In practical terms, Bayesian optimization starts by sampling the search space broadly to locate the most promising regions of hyperparameter values. Then, it concentrates further tests in the region where the global minimum is likely to be.
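The following is a minimal sketch of that explore-then-exploit behavior using gp_minimize from the third-party scikit-optimize (skopt) package; the toy objective function and search bounds are assumptions made purely for illustration, standing in for something like a cross-validation loss:

```python
# Minimal sketch: Bayesian optimization with scikit-optimize (skopt).
# The objective and bounds are toy assumptions for illustration.
from skopt import gp_minimize

def objective(params):
    # Stand-in for an expensive evaluation, such as a cross-validation
    # loss; the true minimum of this toy function is at x = 2.0
    x = params[0]
    return (x - 2.0) ** 2

# gp_minimize fits a Gaussian process surrogate over the search space:
# early calls sample broadly, later calls concentrate where the
# surrogate predicts the minimum is likely to be
result = gp_minimize(
    objective,
    dimensions=[(-5.0, 5.0)],  # bounds of the single parameter
    n_calls=20,                # total number of objective evaluations
    random_state=42,
)
print(result.x, result.fun)
```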
Summary
In this chapter, you learned about the main metrics for model evaluation. You started with the metrics for classification problems and then moved on to the metrics for regression problems.
In terms of classification metrics, you were introduced to the well-known confusion matrix, which is probably the most important artifact for evaluating classification models.
You learned about true positives, true negatives, false positives, and false negatives. Then, you learned how to combine these components to extract other metrics, such as accuracy, precision, recall, the F1 score, and AUC.
You then went even deeper and learned about ROC curves, as well as precision-recall curves. You learned that you can use ROC curves to evaluate models on fairly balanced datasets and precision-recall curves on moderately to heavily imbalanced datasets.
By the way, when you are dealing with imbalanced datasets, remember that using accuracy might not be a good idea, since a model can score high on accuracy simply by always predicting the majority class.
In terms of regression metrics, you learned that the most popular ones, and the ones most likely to be present in the AWS Machine Learning Specialty exam, are the MAE, MSE, RMSE, and MAPE. Make sure you know the basics of each of them before taking the exam.
Finally, you learned about methods for hyperparameter optimization, such as grid search and Bayesian optimization. In the next chapter, you will have a look at AWS application services for AI/ML. But first, take a moment to practice these questions about model evaluation and model optimization.