Modern Large Language Models: A First-Principles Guide to Building and Understanding Transformer-Based Language Models
Product Information
ISBN-13: 9798232738587
Publisher: DRAFT2DIGITAL LLC
Author: Daniel R. Holt
Publication date: 2025/12/15
Binding: Paperback
Dimensions: 27.9 cm × 21.6 cm × 2.4 cm (height/width/thickness)
Product Description
Large language models now sit at the core of modern software systems. They power search, recommendation engines, coding assistants, conversational interfaces, and autonomous agents. Yet for many engineers and practitioners, these models remain opaque: understood through fragments of code, borrowed recipes, or surface-level explanations.
This book was written to change that.
Modern Large Language Models is a clear, systems-level guide to understanding how transformer-based language models actually work, starting from first principles and building up to complete, modern LLM systems.
Rather than treating large language models as black boxes, this book explains the fundamental ideas that make them possible: probabilistic language modeling, vector representations, attention mechanisms, optimization, and architectural composition. Concepts are introduced gradually, with visual intuition and concrete reasoning before full implementations, allowing readers to develop understanding that transfers beyond any single framework or model version.
The book takes you from the foundations of language modeling to the realities of training, fine-tuning, evaluation, and deployment. Along the way, it connects theory to practice, showing how design decisions shape model behavior, performance, and limitations.
This is not a collection of shortcuts or prompt recipes. It is a guide for readers who want to reason about large language models as engineered systems: systems that can be analyzed, debugged, improved, and deployed with confidence.
What You'll Learn
- How language modeling works at a probabilistic level, and why it matters
- How tokens, embeddings, and vector spaces encode meaning
- How self-attention and transformer architectures operate internally
- How complete GPT-style models are built from first principles
- How training pipelines work, including optimization and scaling considerations
- How fine-tuning, instruction tuning, and preference optimization fit together
- How embeddings, retrieval, and RAG systems extend model capabilities
- How modern LLM systems are evaluated, deployed, and monitored responsibly
What Makes This Book Different
Most books on large language models focus either on high-level descriptions or narrow implementation details. This book takes a first-principles, systems-oriented approach, emphasizing understanding over memorization and architecture over tools.
The examples use PyTorch for clarity, but the ideas are framework-agnostic and designed to remain relevant as tooling and architectures evolve. Clean diagrams, structured explanations, and carefully reasoned trade-offs replace hype and jargon.
Who This Book Is For
This book is written for software engineers, data scientists, machine learning practitioners, researchers, and technically curious readers who want to move beyond surface familiarity with LLMs.
You do not need to be an expert in machine learning to begin, but you should be comfortable with programming and willing to engage with ideas thoughtfully. Readers looking for quick tutorials or platform-specific recipes may want to pair it with supplementary resources; readers seeking durable understanding will find this book invaluable.
What This Book Is Not
This book does not promise instant mastery, viral tricks, or platform-specific shortcuts. It does not focus on prompt engineering in isolation, nor does it attempt to catalog every model variant or benchmark.
Instead, it focuses on what lasts: the principles that explain why large language models work, and how to think clearly about the systems built around them.
If you want to understand modern large language models deeply, not just use them, this book provides the foundation.

