This is an advanced course on learning to act and plan in different settings: when the action model is known, when it is not known but has to be learned, and when it is not known and does not have to be learned at all. The five lectures will be as follows:

1. Intro: models and solvers; model-based solvers vs. model-free learners; deep learning and stochastic gradient descent as another class of model and solver.
2. Classical planning: languages and algorithms; planning as heuristic search and as SAT; learning planning models.
3. MDPs and RL: the model and basic model-based algorithms; reinforcement learning, model-based and model-free; policy gradient and policy optimization.
4. General plans: learning policies that generalize across domains; representing and learning such plans using combinatorial and deep learning approaches.
5. Hierarchies and problem decomposition: width and width-based search; representing problem decompositions in a general language; learning subgoal structure.
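To make the "deep learning and stochastic gradient descent as another class of model and solver" framing from lecture 1 concrete, here is a minimal illustrative sketch (not from the course materials): SGD fitting a single weight w in the model y = w * x by repeatedly stepping against the gradient of the squared error on one sample at a time.

```python
import random

def sgd_fit(data, lr=0.05, epochs=100, seed=0):
    """Fit w in y = w * x by stochastic gradient descent.

    data: list of (x, y) samples; lr: step size; a toy example only.
    """
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        rng.shuffle(data)          # visit samples in random order
        for x, y in data:
            grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

# Samples drawn exactly from y = 3x; SGD should recover w close to 3.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
print(round(sgd_fit(data), 2))
```

The same loop, with w replaced by the parameters of a deep network and grad computed by backpropagation, is the solver underlying most of deep learning.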
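As a taste of the "basic model-based algorithms" for MDPs in lecture 3, the sketch below (an illustration, not lecture code) runs value iteration on a tiny hand-made two-state MDP whose transition model P is known; the state names, rewards, and discount factor are all made up for the example.

```python
# P[s][a] is a list of (probability, next_state, reward) outcomes.
P = {
    0: {'a': [(1.0, 0, 0.0)], 'b': [(1.0, 1, 1.0)]},
    1: {'a': [(1.0, 1, 2.0)], 'b': [(1.0, 0, 0.0)]},
}

def value_iteration(P, gamma=0.9, eps=1e-8):
    """Iterate the Bellman optimality backup until values stop changing."""
    V = {s: 0.0 for s in P}
    while True:
        V_new = {
            s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                   for outcomes in P[s].values())
            for s in P
        }
        if max(abs(V_new[s] - V[s]) for s in P) < eps:
            return V_new
        V = V_new

V = value_iteration(P)
# Staying in state 1 via action 'a' yields 2 per step, so
# V(1) = 2 / (1 - 0.9) = 20, and V(0) = 1 + 0.9 * 20 = 19.
```

Reinforcement learning, covered in the same lecture, tackles the setting where P is not given and must be estimated (model-based RL) or bypassed entirely (model-free RL).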