A.5 Hardware and Cloud Resource Guide

[Figure: Hardware and cloud resource decision tree]

[Figure: Cost comparison of local, cloud, and API approaches]

The short answer: do not buy a GPU first. Start with the task, then choose local CPU, cloud GPU, or API.

Quick decision table

| Learning stage | Local need | Better option when stuck |
| --- | --- | --- |
| Chapters 1-5: tools, Python, data, math, classic ML | 8-16GB RAM, SSD | Usually no GPU needed |
| Chapter 6: deep learning basics | 16GB RAM | Cloud GPU for training exercises |
| Chapter 7: LLM principles and fine-tuning concepts | 16-32GB RAM | Cloud GPU or API experiments |
| Chapters 8-9: RAG and Agent | 16GB RAM, stable network | API-first engineering route |
| Chapters 10-11: CV and NLP | 16GB RAM | Cloud GPU for heavier experiments |
| Chapter 12: multimodal | 16-32GB RAM | Cloud generation or API services |
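To see where your current machine falls in this table, you can check its total RAM from Python. A minimal sketch using only the standard library; note that `os.sysconf` with these names works on Unix-like systems only (Windows needs a different API):

```python
import os

# Unix-only: total physical RAM = page size * number of physical pages.
page_size = os.sysconf("SC_PAGE_SIZE")    # bytes per memory page
page_count = os.sysconf("SC_PHYS_PAGES")  # number of physical pages
total_gb = page_size * page_count / 1e9
print(f"Total RAM: {total_gb:.1f} GB")
```

If the result is under 16 GB, plan around the cloud or API columns rather than local training.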

Buying priority

For most learners, spend in this order:

  1. Memory: 16GB minimum, 32GB comfortable.
  2. SSD: 512GB minimum, 1TB comfortable.
  3. Stable environment: clean Python, Node, Docker, and project folders.
  4. Display and input comfort: external monitor, keyboard, mouse.
  5. GPU: only after you know your real workload.
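Point 3 above, a stable environment, can be sanity-checked with a short script. A minimal sketch using only the standard library; the tool names checked here (`python3`, `node`, `docker`) are common defaults, not requirements, so adjust the list to your setup:

```python
import shutil
import sys

def check_environment(tools=("python3", "node", "docker")):
    """Return a mapping of tool name -> resolved path on PATH (or None)."""
    return {tool: shutil.which(tool) for tool in tools}

print(f"Python {sys.version_info.major}.{sys.version_info.minor}")
for tool, path in check_environment().items():
    print(f"{tool}: {path or 'NOT FOUND'}")
```

Running this before starting a new chapter catches missing tools early, when they are cheap to fix.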

When to use cloud or API

| Option | Best for | Watch out for |
| --- | --- | --- |
| Free notebooks | Small demos and learning the workflow | Time limits and unstable availability |
| Hourly cloud GPU | Training experiments with clear code and data | Idle billing: prepare first, shut down immediately after use |
| API-first route | RAG, Agent, assistant, and product projects | Logging, cost control, privacy, and retries |
| Local GPU | Frequent long-term training and fast local iteration | VRAM, cooling, power, and total cost |
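The local-versus-cloud tradeoff in the last row can be made concrete with a break-even estimate: how many hours of rented GPU time would it take to spend what a local build costs upfront? A minimal sketch; the prices below are hypothetical placeholders, not quotes:

```python
def breakeven_hours(local_cost, cloud_rate_per_hour):
    """Hours of cloud GPU rental that would cost as much as buying locally.

    local_cost: upfront cost of a local GPU build (hypothetical figure).
    cloud_rate_per_hour: hourly rental price for a comparable cloud GPU.
    """
    return local_cost / cloud_rate_per_hour

# Placeholder numbers: a $1600 local build vs a $0.80/hour rental.
hours = breakeven_hours(1600, 0.80)
print(hours)  # 2000.0 hours of training before buying pays off
```

If your realistic training schedule will not reach the break-even hours within the hardware's useful life, renting wins, and that is before counting power and cooling on the local side.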

When a local GPU is worth it

Buy only when at least two of the following are true:

  • You will train models frequently for months.
  • Cloud queues or time limits slow you down every week.
  • You know the model size, batch size, and VRAM you need.
  • You need fast local iteration more than low upfront cost.

If the reason is only “I may need it later,” wait.
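The third point above, knowing the VRAM you need, can be estimated before you shop. A rough sketch of the usual back-of-envelope rule (parameter count times bytes per parameter, with a multiplier for training state); the multipliers are common rules of thumb, not exact figures:

```python
def vram_estimate_gb(params_billion, bytes_per_param=2, training=False):
    """Rough VRAM needed to hold a model, in GB.

    params_billion: model size in billions of parameters.
    bytes_per_param: 2 for fp16/bf16 weights, 4 for fp32.
    training: if True, multiply by ~4 to cover gradients and Adam
              optimizer state (a common rule of thumb, not exact).
    """
    weights_gb = params_billion * bytes_per_param  # 1e9 params * bytes / 1e9
    return weights_gb * 4 if training else weights_gb

print(vram_estimate_gb(7))                 # 7B model, fp16 inference: 14 GB
print(vram_estimate_gb(7, training=True))  # full fine-tuning: ~56 GB
```

Activation memory grows on top of this with batch size and sequence length, so treat the result as a floor, not a target.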

Practical plan

Use your current computer for Chapters 1-5. Rent a cloud GPU when Chapter 6, 10, or 11 truly needs it. Build the Chapters 8-9 projects API-first. Decide on a local GPU only after your project workload proves the need.