☀️ Tuesday Noon — AI Vocabulary Deep Dive

Session goal: Master the AI terms you hear every day in meetings, Slack, and tech articles — and learn how to explain them clearly.


🌟 Word of the Day: Hallucinate

IPA: /həˈluː.sɪ.neɪt/
Vietnamese: Ảo giác (khi AI tạo ra thông tin sai)
Part of speech: Verb

When an AI model hallucinates, it confidently produces information that is factually incorrect or completely made up.

3 Example Sentences

  1. “The chatbot hallucinated a fake research paper and even invented author names.”
  2. “We can’t ship this feature until we reduce the model’s tendency to hallucinate.”
  3. “Always verify AI output — large language models sometimes hallucinate subtle errors.”


📋 Vocabulary Table: 5 Essential AI Phrases

| Phrase | IPA | Vietnamese | Example |
| --- | --- | --- | --- |
| inference | /ˈɪn.fər.əns/ | Suy luận / chạy mô hình | “Inference latency is under 200ms.” |
| fine-tuning | /ˈfaɪnˌtjuː.nɪŋ/ | Tinh chỉnh mô hình | “We fine-tuned the model on our internal data.” |
| token | /ˈtoʊ.kən/ | Đơn vị văn bản (trong LLM) | “This prompt uses 1,500 tokens.” |
| grounding | /ˈɡraʊn.dɪŋ/ | Căn cứ hóa / gắn với thực tế | “RAG improves grounding by providing context.” |
| prompt engineering | /prɒmpt ˌen.dʒɪˈnɪər.ɪŋ/ | Kỹ thuật viết lệnh cho AI | “Good prompt engineering cuts error rates significantly.” |
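To make the idea of tokens concrete, here is a minimal sketch of counting text units. Note: this is only an illustration using word and punctuation splits — real LLM tokenizers use subword schemes like BPE, so actual token counts will differ. The function name `rough_token_count` is invented for this example.

```python
import re

def rough_token_count(text: str) -> int:
    """Rough illustration only: split text into words and punctuation.
    Real LLM tokenizers (BPE, etc.) produce different counts."""
    return len(re.findall(r"\w+|[^\w\s]", text))

prompt = "Fine-tuning the model reduced hallucinations."
# "Fine", "-", "tuning", "the", "model", "reduced", "hallucinations", "."
print(rough_token_count(prompt))  # → 8
```

The point for vocabulary purposes: a token is smaller than a sentence and often smaller than a word, which is why “This prompt uses 1,500 tokens” can describe a fairly short document.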

🗣️ Pronunciation Guide

Practice Sentence

“Fine-tuning the model reduced hallucinations and improved inference speed.”

Breakdown

| Word | Sounds Like | Stressed Syllable |
| --- | --- | --- |
| fine-tuning | FYNE-tyoo-ning | FYNE |
| hallucinations | huh-loo-sih-NAY-shunz | NAY |
| inference | IN-fer-ents | IN |

🔊 Tips

  • hallucinations — the stress falls on the 4th syllable: huh-loo-sih-NAY-shunz
  • inference — Americans often say it fast: IN-frents (two syllables instead of three)
  • fine-tuning — pronounce both parts clearly; don’t swallow the “t”

🔗 Forvo pronunciation examples | YouGlish AI terms


✏️ Exercise 1: Vocabulary in Context

Fill in the blank with the correct word: (hallucinate / fine-tuning / grounding / tokens / prompt engineering)

  1. “The model used 3,000 ________ on that long document summary.”
  2. “We spent two weeks on ________ to adapt the base model to our legal domain.”
  3. “Without reliable context, the model may ________, inventing facts that sound convincing but are wrong.”
  4. “RAG provides ________ by supplying real documents as context to the model.”
  5. “Our team hired a specialist in ________ to improve response quality.”
✅ Answers
  1. tokens
  2. fine-tuning
  3. hallucinate
  4. grounding
  5. prompt engineering

✏️ Exercise 2: Translation Challenge

Translate these Vietnamese sentences into English using today’s vocabulary:

  1. “Mô hình bị ảo giác và tạo ra một tên tác giả giả.”
  2. “Tốc độ suy luận của mô hình quá chậm cho ứng dụng thực tế.”
  3. “Chúng tôi đang tinh chỉnh mô hình trên dữ liệu khách hàng.”
  4. “Kỹ thuật viết lệnh tốt giúp giảm thiểu lỗi đầu ra.”
✅ Suggested Answers
  1. “The model hallucinated and generated a fake author name.”
  2. “The model’s inference speed is too slow for a real-time application.”
  3. “We are fine-tuning the model on customer data.”
  4. “Good prompt engineering helps minimize output errors.”

💡 Idiom of the Day: “garbage in, garbage out”

Vietnamese: Rác vào, rác ra (đầu vào kém → kết quả kém)
Used for: Emphasizing data quality in AI/tech

Examples

  1. “If your training data is biased, don’t be surprised by biased outputs — garbage in, garbage out.”
  2. “The model performs poorly because the labels are inconsistent. Garbage in, garbage out.”

🧠 Tip: This phrase is widely used in data science, ML, and software engineering. Use it to stress the importance of data quality.


🎭 Mini Dialogue: AI Code Review

Context: Two engineers discussing a new AI feature in a code review meeting.


Linh: I’ve been testing the new summarization endpoint. It works great, but it hallucinates sometimes — it invents sources that don’t exist.

Thuan: Yeah, I saw that. We need better grounding. Have you tried RAG to pull in real documents?

Linh: Not yet. I think some prompt engineering would also help — the current system prompt is too vague.

Thuan: Agreed. Also, watch the token count — we’re close to the context limit on long inputs.

Linh: Right. I’ll run inference benchmarks after the fix. Should be ready for review by Thursday.

Thuan: Sounds good. Let’s also plan a fine-tuning experiment for next sprint.
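The grounding idea Thuan suggests — pulling in real documents so the model answers from evidence instead of inventing sources — can be sketched in a few lines. This is a toy RAG-style illustration with an invented in-memory document list and keyword matching; real systems retrieve with vector embeddings, and the function names here are made up for the example.

```python
# Toy sketch of grounding via retrieval (RAG-style).
# Real systems use vector search; this uses simple word overlap.
def retrieve(query: str, documents: list[str]) -> str:
    """Pick the document sharing the most words with the query."""
    q_words = set(query.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_grounded_prompt(question: str, documents: list[str]) -> str:
    """Prepend the retrieved document so the model answers from it."""
    context = retrieve(question, documents)
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Inference latency for the summarization endpoint is under 200ms.",
    "Fine-tuning runs are scheduled every sprint on internal data.",
]
print(build_grounded_prompt("What is the inference latency?", docs))
```

Because the model is told to answer only from the supplied context, it has less room to hallucinate — which is exactly the fix Linh and Thuan are discussing.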


🏆 Daily Challenge

⏱ 2-minute action — do this right now:

Open Slack, your team chat, or just a notes app. Write 2 sentences describing what your team is building — but this time, use at least 2 AI vocabulary words from today’s lesson.

Example: “We are fine-tuning a model for document classification. Inference speed is a key metric for this use case.”

Share it with a colleague or just say it out loud. Speaking > reading.


📊 Today’s Summary

| Term | Meaning |
| --- | --- |
| hallucinate | AI tạo ra thông tin sai |
| inference | Chạy mô hình để ra kết quả |
| fine-tuning | Tinh chỉnh mô hình với dữ liệu mới |
| token | Đơn vị xử lý văn bản |
| grounding | Căn cứ hóa thông tin cho AI |
| prompt engineering | Kỹ thuật viết lệnh hiệu quả |
| garbage in, garbage out | Đầu vào kém → kết quả kém |

🔁 Come back at 6 PM for the Evening session — Writing & Email Phrases.

📅 Tomorrow (Wednesday Noon): Architecture Vocabulary — cloud terms and system design language.
