Decision making in the real world involves reasoning in the presence of uncertainty, calling for a probabilistic approach. Often, these reasoning processes are complex and involve background knowledge given by logical or arithmetic constraints. Moreover, in sensitive domains such as healthcare and economic decision making, the results of these queries are required to be exact, as approximations without guarantees would make the decision-making process brittle. Despite all their recent successes, deep probabilistic models, such as VAEs, normalizing flows and diffusion models, are intractable for such queries and fall short of the above requirements. In this Introductory Course, we will introduce the research field of Tractable Probabilistic Modeling in general and the framework of Probabilistic Circuits (PCs) in particular, which has recently emerged as a “lingua franca” of tractable probabilistic modeling. We will first introduce the students to the general field of probabilistic machine learning and motivate the use of probabilistic models as rigorous and consistent reasoning tools. We will review classical representations such as probabilistic graphical models and provide a brief introduction to modern deep probabilistic models, illustrating their intractability and the resulting need for tractable representations. We will then introduce the field of tractable probabilistic models, PCs as a universal framework to represent them, and learning and inference algorithms for PCs. Finally, we will cover recent developments and applications of PCs, as well as complex reasoning scenarios such as high-dimensional structured output prediction, as used in planning and semantic multi-label prediction.
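To give a flavor of what "tractable" means here, the following is a minimal sketch (not taken from the course materials; the structure and weights are made up for illustration) of a tiny probabilistic circuit over two binary variables. Exact marginals require only a single feed-forward evaluation: leaves of marginalized-out variables simply output 1, whereas deep generative models like VAEs would need approximation for the same query.

```python
# A tiny smooth and decomposable probabilistic circuit over binary X1, X2:
#   p(X1, X2) = 0.3 * p1(X1) p1(X2) + 0.7 * p2(X1) p2(X2)
# All weights are invented for illustration only.

def leaf(value, observed):
    # Indicator leaf: outputs 1 when the variable is marginalized out (None),
    # otherwise 1 if the observation matches `value`, else 0.
    return 1.0 if observed is None else float(observed == value)

def circuit(x1, x2):
    # Product nodes combine (Bernoulli) leaves over disjoint variables;
    # the top sum node mixes them with normalized weights.
    prod1 = (0.4 * leaf(1, x1) + 0.6 * leaf(0, x1)) * \
            (0.9 * leaf(1, x2) + 0.1 * leaf(0, x2))
    prod2 = (0.8 * leaf(1, x1) + 0.2 * leaf(0, x1)) * \
            (0.2 * leaf(1, x2) + 0.8 * leaf(0, x2))
    return 0.3 * prod1 + 0.7 * prod2

# The joint sums to 1, and the exact marginal p(X1=1) is obtained by a
# single evaluation with X2 marginalized out (set to None).
total = sum(circuit(a, b) for a in (0, 1) for b in (0, 1))
p_x1 = circuit(1, None)   # equals circuit(1, 0) + circuit(1, 1)
```

The same single-pass scheme scales to circuits over many variables, which is what makes marginal and conditional queries on PCs exact and efficient.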