A.9 Learning Resources Quick Reference

AI Project Quick Reference Troubleshooting Index

Use this page while building. Do not read it from top to bottom.

Environment checks

```shell
python --version   # confirm the interpreter version you expect
which python       # confirm which interpreter is on PATH
pip --version      # confirm pip belongs to that interpreter
pip list           # installed packages
pwd                # current directory
ls                 # files in it
```

For the docs site:

```shell
npm install      # install dependencies
npm run start    # local dev server with live reload
npm run build    # production build
```

For NVIDIA GPU:

```shell
nvidia-smi   # driver version, GPU model, memory in use
```
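The same checks can be scripted. A minimal sketch using only the standard library (the GPU check only looks for the `nvidia-smi` binary on PATH; it does not query the driver):

```python
import shutil
import sys

def environment_report() -> dict:
    """Collect basic environment facts before debugging anything else."""
    return {
        "python_version": sys.version.split()[0],
        "python_executable": sys.executable,
        "pip_available": shutil.which("pip") is not None,
        "nvidia_smi_available": shutil.which("nvidia-smi") is not None,
    }

print(environment_report())
```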

Baseline first

| Task | Try first |
| --- | --- |
| Tabular classification/regression | Linear model or tree model |
| Text classification | TF-IDF + LogisticRegression |
| Image classification | Transfer learning |
| Named entity recognition | Rules/dictionary baseline, then sequence model |
| Document Q&A | Keyword/BM25 retrieval, then RAG |
| Agent tool use | Single agent + one safe tool |
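The text-classification baseline fits in a few lines. A minimal sketch, assuming scikit-learn is installed; the toy data is illustrative only:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative dataset; a real baseline needs a proper train/test split.
texts = ["great product", "terrible service", "loved it", "awful experience"]
labels = ["pos", "neg", "pos", "neg"]

# TF-IDF features into a linear classifier: the standard text baseline.
baseline = make_pipeline(TfidfVectorizer(), LogisticRegression())
baseline.fit(texts, labels)
print(baseline.predict(["really great"]))
```

If this baseline is already close to your target metric, a larger model may not be worth the cost.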

Metrics

| Task | First metrics |
| --- | --- |
| Balanced classification | Accuracy, F1 |
| Imbalanced classification | Precision, recall, F1, confusion matrix |
| Regression | MAE, RMSE, residual review |
| Retrieval / RAG | Hit@K, MRR, citation accuracy, human review |
| Agent | Success rate, tool errors, cost, trace review |
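Precision, recall, and F1 all come from the same confusion-matrix counts. A minimal sketch with made-up counts:

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple:
    """First metrics to inspect for an imbalanced classifier."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Illustrative counts: 8 true positives, 2 false positives, 4 false negatives.
p, r, f = precision_recall_f1(tp=8, fp=2, fn=4)
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")
# -> precision=0.80 recall=0.67 f1=0.73
```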

Training warning signs

| Signal | Check first |
| --- | --- |
| Loss does not decrease | Labels, loss function, learning rate, input format |
| Train good, validation poor | Overfitting, leakage, distribution mismatch |
| Accuracy unchanged | Weak features, wrong labels, model not learning |
| GPU out of memory | Batch size, input length, model size |
| Unstable results | Random seed, small data, inconsistent split |
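For "unstable results", pinning the random seed is the cheapest first check. A standard-library sketch (real projects also need to seed NumPy, PyTorch, etc., which is out of scope here):

```python
import random

def run_experiment(seed: int) -> list:
    """Stand-in for a training run whose only randomness is `random`."""
    rng = random.Random(seed)  # local RNG: no global state leaks between runs
    return [rng.random() for _ in range(3)]

# Same seed -> identical results; a different seed gives different results.
assert run_experiment(42) == run_experiment(42)
assert run_experiment(42) != run_experiment(7)
print("seed check passed")
```

If results still vary with the seed fixed, look at data splits and nondeterministic ops instead.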

RAG checklist

  1. Documents split correctly?
  2. Retrieval returns the right chunks?
  3. Answer includes sources?
  4. Answer truly uses the retrieved content?
  5. Permission filtering and no-answer behavior exist?
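Step 2 ("retrieval returns the right chunks") can be smoke-tested without any framework. A minimal keyword-overlap retriever sketch; real systems would use BM25 or embeddings, and the chunks here are invented:

```python
def score(query: str, chunk: str) -> int:
    """Count query terms that appear in the chunk (naive overlap score)."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def top_k(query: str, chunks: list, k: int = 2) -> list:
    """Return the k highest-scoring chunks for the query."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

chunks = [
    "refund policy allows refunds within 30 days",
    "shipping takes 5 business days",
    "contact support by email",
]
hits = top_k("what is the refund policy", chunks)
print(hits[0])  # the refund chunk should rank first
```

If even this naive retriever surfaces the right chunk, a failure downstream is in generation, not retrieval.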

Agent checklist

  1. Start with single-turn Q&A.
  2. Add one tool.
  3. Add strict parameter schema.
  4. Add logs and trace replay.
  5. Add permission boundary and stop condition.
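Steps 3 and 5 can be sketched together: one tool behind a strict parameter schema, inside a loop with a hard step limit. The tool, schema, and canned answer are illustrative, not from any particular agent framework:

```python
TOOL_SCHEMA = {"city": str}  # step 3: required parameters and their types
MAX_STEPS = 5                # step 5: hard stop condition

def validate(params: dict) -> None:
    """Reject any call that does not match the schema exactly."""
    if set(params) != set(TOOL_SCHEMA):
        raise ValueError(f"unexpected parameters: {sorted(params)}")
    for name, typ in TOOL_SCHEMA.items():
        if not isinstance(params[name], typ):
            raise TypeError(f"{name} must be {typ.__name__}")

def weather_tool(params: dict) -> str:
    """A hypothetical 'safe' tool: read-only, no side effects."""
    validate(params)
    return f"weather in {params['city']}: sunny"  # canned illustrative answer

def run_agent(question: str) -> list:
    trace = []  # step 4: log every call for replay
    for step in range(MAX_STEPS):  # never loop forever
        result = weather_tool({"city": "Oslo"})
        trace.append((step, "weather_tool", result))
        if result:  # a real agent would decide this from the model's output
            break
    return trace

print(run_agent("weather in Oslo?"))
```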

Prompt template

```text
You are a ____.
Your task is ____.
Input:
Output format:
Constraints:
If information is insufficient, say so clearly.
```
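The template above can be filled programmatically. A minimal sketch; the field names mirror the template and the example values are invented:

```python
PROMPT_TEMPLATE = """You are a {role}.
Your task is {task}.
Input: {input}
Output format: {output_format}
Constraints: {constraints}
If information is insufficient, say so clearly."""

prompt = PROMPT_TEMPLATE.format(
    role="customer-support assistant",
    task="to classify the ticket's urgency",
    input="the raw ticket text",
    output_format="one word: low, medium, or high",
    constraints="answer from the ticket content only",
)
print(prompt)
```

Keeping the template as one constant makes prompt changes reviewable in version control.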

Minimal training loop

```python
# Fit y = w * x with plain SGD, one sample at a time.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true relationship: y = 2x
w = 0.0    # single parameter, initialized at zero
lr = 0.01  # learning rate

for epoch in range(3):
    total_loss = 0.0
    for x, y in data:
        pred = w * x                  # prediction
        error = pred - y              # signed error
        total_loss += error * error   # squared-error loss
        grad = 2 * error * x          # d(loss)/dw for this sample
        w -= lr * grad                # parameter update
    print(f"epoch={epoch} w={w:.3f} loss={total_loss:.3f}")
```

Expected output:

```text
epoch=0 w=0.521 loss=48.630
epoch=1 w=0.907 loss=26.580
epoch=2 w=1.192 loss=14.528
```

Read it as: data -> prediction -> loss -> gradients -> parameter update.