
Large Language Models for Developers: A Prompt-Based Exploration of LLMs

Oswald Campesato

This book offers a thorough exploration of Large Language Models (LLMs), guiding developers through the evolving landscape of generative AI and equipping them with the skills to use LLMs in practical applications. Designed for developers with a foundational understanding of machine learning, it covers essential topics such as prompt engineering techniques, fine-tuning methods, attention mechanisms, and quantization strategies for optimizing and deploying LLMs.

Beginning with an introduction to generative AI, the book explains the distinctions between conversational AI and generative models like GPT-4 and BERT, laying the groundwork for prompt engineering (Chapters 2 and 3). Among the LLMs used for generating completions to prompts are Llama 3.1 405B, Llama 3, GPT-4o, Claude 3, Google Gemini, and Meta AI. Readers learn the art of crafting effective prompts, including advanced methods such as Chain of Thought (CoT) and Tree of Thought prompts.

As the book progresses, it details fine-tuning techniques (Chapters 5 and 6), demonstrating how to customize LLMs for specific tasks through methods like LoRA and QLoRA, and includes Python code samples for hands-on learning. Readers are also introduced to the transformer architecture's attention mechanism (Chapter 8), with step-by-step guidance on implementing self-attention layers (a minimal sketch follows this description). For developers aiming to optimize LLM performance, the book concludes with quantization techniques (Chapters 9 and 10), exploring strategies like dynamic quantization and probabilistic quantization, which help reduce model size without sacrificing performance.
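To make the Chapter 8 material concrete, here is a minimal sketch of a single scaled dot-product self-attention layer in NumPy; the function name, dimensions, and random inputs are illustrative choices, not code from the book:

import numpy as np

def self_attention(X, W_q, W_k, W_v):
    # X: (seq_len, d_model) input embeddings
    # W_q, W_k, W_v: (d_model, d_k) learned projection matrices
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (seq_len, seq_len) token-pair similarities
    # Numerically stable row-wise softmax over the scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted sum of value vectors

# Toy usage: 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)  # (4, 8)

The scaling by the square root of d_k keeps the dot products from growing with dimension, which would otherwise push the softmax toward one-hot weights.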
FEATURES
- Covers the full lifecycle of working with LLMs, from model selection to deployment
- Includes practical Python code samples for implementing prompt engineering, fine-tuning, and quantization (a fine-tuning sketch in that spirit follows this list)
- Teaches readers to enhance model efficiency with advanced optimization techniques (see the quantization sketch after this list)
- Includes companion files with code and images, available from the publisher
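As a rough illustration of the parameter-efficient fine-tuning the book covers in Chapters 5 and 6, the following sketch attaches a LoRA adapter to a small causal language model using the Hugging Face peft and transformers libraries; the base model (gpt2), target module, and hyperparameters are placeholder choices for the example, not the book's own settings:

from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

# Load a small base model to adapt
model = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA: freeze the base weights and train low-rank update matrices
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # rank of the low-rank adapters
    lora_alpha=16,              # scaling factor for the adapter output
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
)
model = get_peft_model(model, config)
model.print_trainable_parameters()

Because only the small adapter matrices are trained, print_trainable_parameters typically reports well under 1% of the weights as trainable, which is what makes LoRA (and its quantized variant QLoRA) practical on modest hardware.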
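Similarly, the dynamic quantization mentioned above can be sketched in PyTorch; the toy model below stands in for a trained network, and the call shown is standard PyTorch post-training dynamic quantization rather than code from the book:

import torch
import torch.nn as nn

# Stand-in model; in practice this would be a trained LLM or one of its submodules
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))

# Dynamic quantization: Linear weights are stored as int8,
# activations are quantized on the fly at inference time
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized)

Storing weights in int8 instead of float32 shrinks the quantized layers roughly fourfold, usually with only a small accuracy cost.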

-15% | Free shipping

RRP: 402.94 Lei (the manufacturer's recommended price; the sale price is shown below)

Sale price: 342.50 Lei

You receive 342 loyalty points. (You earn loyalty points with every order; 100 points are worth 1 Leu and can be applied to future purchases.)

Delivery in 2-4 weeks

