Description
This session introduces learners to the key differences between foundational and fine-tuned large language models (LLMs). It explains how foundational models serve as broad, general-purpose AI systems trained on massive, diverse datasets, while fine-tuned models are specialized derivatives optimized for specific domains and tasks. Learners will explore the training process behind each model type, from large-scale pretraining to supervised fine-tuning and reinforcement learning from human feedback (RLHF). The module includes real-world examples such as GPT, BERT, Claude, and Med-PaLM, illustrating their diverse applications across industry. Participants will also learn to weigh the strengths and limitations of each approach and to apply appropriate model selection strategies in enterprise contexts.