In the era of explainable AI, comprehensive comparison of the performance of single- and multi-objective stochastic optimization algorithms has become an increasingly important task. The benchmarking process plays a crucial role in any such performance assessment. This course will focus on the following parts of the benchmarking process: how to select optimization problems so as to reduce bias in the analysis; which performance measures to collect and analyze; and which statistical approaches to analyzing the performance data lead to more robust and reproducible outcomes. The course will provide an overview of current approaches for analyzing algorithm performance, with special emphasis on caveats that are too often overlooked. It will end with a demonstration of a web-service-based framework (DSCTool) for the statistical comparison of single- and multi-objective stochastic optimization algorithms.
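To give a flavor of the kind of statistical analysis the course covers, the sketch below computes mean ranks of algorithms across a set of benchmark problems and the classical Friedman test statistic over the resulting rank matrix. This is a minimal, dependency-free illustration of rank-based comparison in general, not of DSCTool's own methodology or API; the data and function names are made up for the example.

```python
# Minimal sketch of rank-based comparison of optimization algorithms.
# Not DSCTool's method or API; a generic Friedman-test illustration
# with hypothetical, made-up performance data.

def mean_ranks(scores):
    """scores[i][j] = performance of algorithm j on problem i (lower is better).
    Returns the mean rank of each algorithm across all problems; tied values
    receive the average of the tied rank positions."""
    n_problems = len(scores)
    n_algs = len(scores[0])
    totals = [0.0] * n_algs
    for row in scores:
        order = sorted(range(n_algs), key=lambda j: row[j])
        ranks = [0.0] * n_algs
        i = 0
        while i < n_algs:
            j = i
            # Extend j over a block of tied values.
            while j + 1 < n_algs and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank for the tied block
            for t in range(i, j + 1):
                ranks[order[t]] = avg
            i = j + 1
        for a in range(n_algs):
            totals[a] += ranks[a]
    return [t / n_problems for t in totals]


def friedman_statistic(scores):
    """Friedman chi-squared statistic over the rank matrix
    (n problems, k algorithms)."""
    n = len(scores)
    k = len(scores[0])
    R = mean_ranks(scores)
    return 12 * n / (k * (k + 1)) * (sum(r * r for r in R) - k * (k + 1) ** 2 / 4)


# Toy data: best objective values reached by 3 algorithms on 5 problems.
results = [
    [0.10, 0.20, 0.30],
    [0.15, 0.25, 0.35],
    [0.12, 0.22, 0.20],
    [0.30, 0.10, 0.40],
    [0.05, 0.15, 0.25],
]
print(mean_ranks(results))  # → [1.2, 2.0, 2.8]
print(friedman_statistic(results))
```

Comparing mean ranks rather than raw objective values is one way to aggregate results across problems with very different scales; approaches such as Deep Statistical Comparison, implemented in DSCTool, refine this idea by comparing whole distributions of results rather than single summary values.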