The last decade has witnessed massive advances in machine learning (ML), with far-reaching societal impact. By all accounts, that impact is expected to become even more prominent in the near future. Nevertheless, a threat to the widespread deployment of ML is the lack of trust in decisions made by what are most often inscrutable ML models. Explainable artificial intelligence (XAI) aims to help human decision makers understand the decisions made by ML models. However, the best-known XAI approaches offer essentially no guarantees of rigor, and so can sow distrust instead of building trust. As a result, recent years have witnessed the emergence of formal approaches to explaining the operation of ML models, referred to as formal explainability in AI (FXAI). The explanations obtained with FXAI are logic-based and offer guarantees of rigor unmatched by other XAI approaches. This course offers an in-depth introduction to the underpinnings of formal explainability in AI.