☀️ Tuesday Noon — AI Vocabulary Deep Dive
Session goal: Master the AI terms you hear every day in meetings, Slack, and tech articles — and learn how to explain them clearly.
🌟 Word of the Day: Hallucinate
| IPA | /həˈluː.sɪ.neɪt/ |
| Vietnamese | Ảo giác (khi AI tạo ra thông tin sai) |
| Part of speech | Verb |
When an AI model hallucinates, it confidently produces information that is factually incorrect or completely made up.
3 Example Sentences
- “The chatbot hallucinated a fake research paper and even invented author names.”
- “We can’t ship this feature until we reduce the model’s tendency to hallucinate.”
- “Always verify AI output — large language models sometimes hallucinate subtle errors.”
📋 Vocabulary Table: 5 Essential AI Phrases
| Phrase | IPA | Vietnamese | Example |
|---|---|---|---|
| inference | /ˈɪn.fər.əns/ | Suy luận / chạy mô hình | “Inference latency is under 200ms.” |
| fine-tuning | /ˌfaɪnˈtjuː.nɪŋ/ | Tinh chỉnh mô hình | “We fine-tuned the model on our internal data.” |
| token | /ˈtoʊ.kən/ | Đơn vị văn bản (trong LLM) | “This prompt uses 1,500 tokens.” |
| grounding | /ˈɡraʊn.dɪŋ/ | Căn cứ hóa / gắn với thực tế | “RAG improves grounding by providing context.” |
| prompt engineering | /prɒmpt ˌen.dʒɪˈnɪər.ɪŋ/ | Kỹ thuật viết lệnh cho AI | “Good prompt engineering cuts error rates significantly.” |
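The “token” entry above can be made concrete with a toy sketch. Note the assumption: real LLM tokenizers use subword encodings (such as BPE), so their counts differ from a simple word count — this whitespace split is only a rough stand-in for intuition, not how any real model counts tokens.

```python
def count_tokens(text: str) -> int:
    """Approximate a token count by splitting on whitespace.

    Real tokenizers split into subword units, so actual LLM token
    counts are usually higher than this word-level estimate.
    """
    return len(text.split())

prompt = "Fine-tuning the model reduced hallucinations and improved inference speed."
print(count_tokens(prompt))  # → 9 (a real subword tokenizer would report more)
```

This is why a sentence like “This prompt uses 1,500 tokens” can describe far fewer than 1,500 words.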
🗣️ Pronunciation Guide
Practice Sentence
“Fine-tuning the model reduced hallucinations and improved inference speed.”
Breakdown
| Word | Sounds Like | Stressed Syllable |
|---|---|---|
| fine-tuning | fyne-TYOO-ning | 2nd (TYOO) |
| hallucinations | huh-loo-sih-NAY-shunz | 4th (NAY) |
| inference | IN-fer-ents | 1st (IN) |
🔊 Tips
- hallucinations — the stress falls on the 4th syllable: huh-loo-sih-NAY-shunz
- inference — Americans often say it fast: IN-frents (2 syllables, not 3)
- fine-tuning — three syllables, with the stress on “tune”: fyne-TYOO-ning; don’t swallow the “t”
🔗 Forvo pronunciation examples | YouGlish AI terms
✏️ Exercise 1: Vocabulary in Context
Fill in the blank with the correct word: (hallucinate / fine-tuning / grounding / tokens / prompt engineering)
- “The model used 3,000 ________ on that long document summary.”
- “We spent two weeks on ________ to adapt the base model to our legal domain.”
- “Without ________, the AI might invent facts that sound convincing but are wrong.”
- “RAG provides ________ by supplying real documents as context to the model.”
- “Our team hired a specialist in ________ to improve response quality.”
✅ Answers
- tokens
- fine-tuning
- grounding (or: prompt engineering)
- grounding
- prompt engineering
✏️ Exercise 2: Translation Challenge
Translate these Vietnamese sentences into English using today’s vocabulary:
- “Mô hình bị ảo giác và tạo ra một tên tác giả giả.”
- “Tốc độ suy luận của mô hình quá chậm cho ứng dụng thực tế.”
- “Chúng tôi đang tinh chỉnh mô hình trên dữ liệu khách hàng.”
- “Kỹ thuật viết lệnh tốt giúp giảm thiểu lỗi đầu ra.”
✅ Suggested Answers
- “The model hallucinated and generated a fake author name.”
- “The model’s inference speed is too slow for a real-world application.”
- “We are fine-tuning the model on customer data.”
- “Good prompt engineering helps minimize output errors.”
💡 Idiom of the Day: “garbage in, garbage out”
| Vietnamese | Rác vào, rác ra (đầu vào kém → kết quả kém) |
| Used for | Emphasizing data quality in AI/tech |
Examples
- “If your training data is biased, don’t be surprised by biased outputs — garbage in, garbage out.”
- “The model performs poorly because the labels are inconsistent. Garbage in, garbage out.”
🧠 Tip: This phrase is widely used in data science, ML, and software engineering. Use it to stress the importance of data quality.
🎭 Mini Dialogue: AI Code Review
Context: Two engineers discussing a new AI feature in a code review meeting.
Linh: I’ve been testing the new summarization endpoint. It works great, but it hallucinates sometimes — it invents sources that don’t exist.
Thuan: Yeah, I saw that. We need better grounding. Have you tried RAG to pull in real documents?
Linh: Not yet. I think some prompt engineering would also help — the current system prompt is too vague.
Thuan: Agreed. Also, watch the token count — we’re close to the context limit on long inputs.
Linh: Right. I’ll run inference benchmarks after the fix. Should be ready for review by Thursday.
Thuan: Sounds good. Let’s also plan a fine-tuning experiment for next sprint.
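Thuan’s RAG suggestion in the dialogue can be sketched in a few lines. This is a hypothetical, minimal illustration of grounding — the function name and prompt format below are invented for this example, not part of any real library.

```python
def build_grounded_prompt(question: str, documents: list[str]) -> str:
    """Prepend retrieved documents as context, so the model answers
    from real sources instead of hallucinating them (grounding)."""
    context = "\n".join(
        f"[Source {i + 1}] {doc}" for i, doc in enumerate(documents)
    )
    return (
        "Answer using ONLY the sources below.\n"
        f"{context}\n\n"
        f"Question: {question}"
    )

docs = ["Inference latency averaged 180ms in the March benchmark."]
print(build_grounded_prompt("What is our inference latency?", docs))
```

The key idea is simply that retrieved documents occupy part of the context window — which is why Thuan also warns about the token count on long inputs.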
🏆 Daily Challenge
⏱ 2-minute action — do this right now:
Open Slack, your team chat, or just a notes app. Write 2 sentences describing what your team is building — but this time, use at least 2 AI vocabulary words from today’s lesson.
Example: “We are fine-tuning a model for document classification. Inference speed is a key metric for this use case.”
Share it with a colleague or just say it out loud. Speaking > reading.
📊 Today’s Summary
| Term | Meaning |
|---|---|
| hallucinate | AI tạo ra thông tin sai |
| inference | Chạy mô hình để ra kết quả |
| fine-tuning | Tinh chỉnh mô hình với dữ liệu mới |
| token | Đơn vị xử lý văn bản |
| grounding | Căn cứ hóa thông tin cho AI |
| prompt engineering | Kỹ thuật viết lệnh hiệu quả |
| garbage in, garbage out | Đầu vào kém → kết quả kém |
🔁 Come back at 6 PM for the Evening session — Writing & Email Phrases.
📅 Tomorrow (Wednesday Noon): Architecture Vocabulary — cloud terms and system design language.