The deployment of autonomous systems in the real world presents opportunities to combine model-based reasoning with data-driven models. To approach this systematically, one must acknowledge the inherent inaccuracies and underspecification of data-driven models, which makes understanding and reasoning about model uncertainty crucial. This reasoning can be qualitative, in the form of policies that are robust to variations in the model, or quantitative, in the form of policies that operate on the belief space of models.

This introductory course will focus on understanding and reasoning about model uncertainty. We will introduce several extensions of the standard Markov decision process (MDP) framework for planning under uncertainty that enable reasoning over model uncertainty. We will begin with the basics of planning for MDPs, and then use the context of long-term autonomy to demonstrate the need for reasoning about model uncertainty. We will then cover three MDP extensions designed for this purpose: interval MDPs, uncertain MDPs, and Bayes-adaptive MDPs.
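As a preview of the MDP planning basics mentioned above, the sketch below runs value iteration on a toy two-state MDP. The states, actions, transition probabilities, and rewards here are illustrative assumptions invented for this sketch, not examples from the course material.

```python
# Value iteration for a standard MDP (toy example; all model numbers
# below are made-up assumptions for illustration).

GAMMA = 0.9   # discount factor
EPS = 1e-6    # convergence threshold

# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
P = {
    0: {"stay": [(0, 1.0)], "go": [(1, 0.8), (0, 0.2)]},
    1: {"stay": [(1, 1.0)], "go": [(0, 1.0)]},
}
R = {
    0: {"stay": 0.0, "go": 1.0},
    1: {"stay": 2.0, "go": 0.0},
}

def value_iteration(P, R, gamma=GAMMA, eps=EPS):
    """Iterate Bellman optimality backups until the values converge."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            # Q-value of each action: reward plus discounted expected value.
            q = {a: R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                 for a in P[s]}
            best = max(q.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

V = value_iteration(P, R)
# Greedy policy with respect to the converged values.
policy = {s: max(P[s], key=lambda a: R[s][a] + GAMMA *
                 sum(p * V[t] for t, p in P[s][a])) for s in P}
```

The MDP extensions the course covers change this backup rather than discard it; for instance, in an interval MDP the expectation over a single transition distribution is replaced by a worst-case (or best-case) expectation over all distributions consistent with the probability intervals.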