Large Language Models: Background and Applications

Speakers
Abstract

Large language models (LLMs) have revolutionized the field of natural language processing in recent years, enabling machines to produce human-like language and perform tasks such as translation, sentiment analysis, and question answering. However, understanding the inner workings of LLMs and their implementation can be intimidating for beginners. In this course, we will explore different variants of LLMs, focusing on transformer-based models such as BERT, GPT, and RoBERTa, and their specific use cases. We will cover the basic components of these LLMs and their architecture, the pre-training and fine-tuning process, as well as their applications. The course will provide step-by-step guidance on how to use different LLMs, based on Python implementations and existing libraries. Along the way, we will also discuss ethical considerations and limitations. Finally, a tutorial on Friday will cover more advanced topics such as modularity and parameter efficiency, as well as the use of LLMs in multilingual and multimodal setups.
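As a flavor of the kind of hands-on usage the course describes, the following is a minimal sketch (not taken from the course materials) that loads pre-trained transformer models through the Hugging Face `transformers` library; the choice of library, tasks, and example inputs are illustrative assumptions, not the course's actual exercises.

```python
# Minimal sketch: using pre-trained transformer models via the Hugging Face
# `transformers` pipeline API. The tasks shown (sentiment analysis and
# question answering) are two of the applications mentioned in the abstract.
from transformers import pipeline

# Sentiment analysis with a default fine-tuned encoder model
classifier = pipeline("sentiment-analysis")
print(classifier("Large language models make NLP accessible."))

# Extractive question answering over a short context
qa = pipeline("question-answering")
print(qa(
    question="What tasks can LLMs perform?",
    context="LLMs enable machines to translate text, analyze sentiment, "
            "and answer questions.",
))
```

Each `pipeline` call downloads a default pre-trained checkpoint for the task; in practice one would pass an explicit model name to control which architecture (e.g. a BERT-, RoBERTa-, or GPT-style model) is used.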

Schedule