🌙 Tuesday Evening — Review AI Vocabulary + Listening Comprehension Tips
Welcome back! Tonight we review the AI vocabulary you’ve been building all week and sharpen your listening comprehension skills. These are the words you’ll hear in podcasts, YouTube talks, and tech meetings — let’s make sure they stick!
📖 Word of the Day
inference /ˈɪn.fər.əns/
Vietnamese: suy luận, quá trình chạy mô hình AI để đưa ra kết quả
This word appears constantly in AI engineering conversations. When a model “runs inference,” it means it is processing input and generating output — predicting, classifying, or generating text.
3 Example Sentences:
- “The GPU cluster is optimised for inference rather than training, so response times are under 100 milliseconds.”
- “We reduced inference costs by 40% after switching to a quantised model.”
- “Real-time inference at the edge requires lightweight models that fit within strict memory limits.”
🔊 Pronunciation Links:
- 📘 Cambridge Dictionary — inference
- 🎧 YouGlish — hear “inference” in real speech
- ▶️ YouTube — AI inference explained
💡 Pronunciation Tip: Stress falls on the FIRST syllable: IN-fer-ence. Three syllables total. The middle syllable is very short and reduced — many native speakers say it almost like “IN-frence.”
📚 Vocabulary Review Table
Review these five key AI phrases. Cover the Vietnamese column and test yourself!
| Phrase | Vietnamese | Example |
|---|---|---|
| large language model (LLM) | mô hình ngôn ngữ lớn | “We fine-tuned the large language model on our proprietary dataset.” |
| hallucination | ảo giác (AI bịa đặt thông tin) | “The model had a hallucination — it cited a paper that doesn’t exist.” |
| retrieval-augmented generation (RAG) | tạo sinh có tăng cường truy xuất | “We used RAG to ground the chatbot’s answers in verified documents.” |
| fine-tuning | tinh chỉnh mô hình | “Fine-tuning on domain-specific data improved accuracy significantly.” |
| context window | cửa sổ ngữ cảnh | “This model’s context window supports up to 128,000 tokens.” |
🗣️ Pronunciation Practice
Tonight’s Sentence:
“We deployed a retrieval-augmented generation pipeline to reduce hallucinations in our large language model.”
Let’s break it down syllable by syllable:
| Word | IPA | Stress | Notes |
|---|---|---|---|
| retrieval | /rɪˈtriː.vəl/ | re-TRIEV-al | Stress on 2nd syllable |
| augmented | /ɔːɡˈmen.tɪd/ | aug-MEN-ted | Stress on 2nd syllable |
| generation | /ˌdʒen.əˈreɪ.ʃən/ | gen-er-A-tion | Stress on 3rd syllable |
| hallucinations | /həˌluː.sɪˈneɪ.ʃənz/ | hal-lu-ci-NA-tions | Stress on 4th syllable |
| deployed | /dɪˈplɔɪd/ | de-PLOYED | Stress on 2nd syllable |
🎵 Rhythm & Linking Tips:
- Link “retrieval-augmented” smoothly — they often run together as one unit: “ri-TREE-vl-AWG-men-tid”
- “Large language model” is said so often that it becomes almost one phrase: “large-LANG-gwidge-MO-dl”
- Reduce unstressed vowels: “generation” → the first two syllables are very short
Practice Pattern — Slow → Normal → Fast:
- 🐢 “We… deployed… a retrieval-augmented generation… pipeline…”
- 🚶 “We deployed a retrieval-augmented generation pipeline…”
- 🏃 “We deployed a RAG pipeline to reduce hallucinations in our LLM.”
✏️ Exercise 1 — Fill in the Blank
Fill each gap with the correct AI term from the box below.
Word Box: inference, fine-tuning, hallucination, context window, large language model
- “The model produced a ______________ — it invented a feature that doesn’t exist in our API.”
- “We’re running ______________ on our company’s support tickets to make the chatbot more helpful.”
- “The ______________ was too small to include the entire codebase, so we used chunking.”
- “GPT-4 is the most well-known ______________ available to the public.”
- “After optimising the ______________ endpoint, latency dropped from 3 seconds to 0.4 seconds.”
✅ Click to reveal answers
- hallucination — “…it invented a feature that doesn’t exist…”
- fine-tuning — “…on our company’s support tickets…”
- context window — “…too small to include the entire codebase…”
- large language model — “…the most well-known…”
- inference — “…latency dropped from 3 seconds to 0.4 seconds…”
✏️ Exercise 2 — Translate into English
Translate these Vietnamese sentences into natural-sounding English using today’s vocabulary.
- “Mô hình của chúng tôi bị ảo giác khi không có đủ thông tin trong cửa sổ ngữ cảnh.”
- “Chúng tôi đã tinh chỉnh mô hình ngôn ngữ lớn trên dữ liệu nội bộ của công ty.”
- “Quá trình chạy mô hình rất nhanh nhờ phần cứng GPU chuyên dụng.”
✅ Click to reveal suggested answers
- “Our model hallucinated when there wasn’t enough information in the context window.”
  - Alternative: “The model started hallucinating because the context window lacked sufficient data.”
- “We fine-tuned the large language model on our company’s internal data.”
  - Alternative: “We performed fine-tuning on our LLM using proprietary company datasets.”
- “The inference process is very fast thanks to dedicated GPU hardware.”
  - Alternative: “Inference runs extremely quickly due to our specialised GPU infrastructure.”
💡 Idiom of the Day
“garbage in, garbage out” 🗑️➡️🗑️
Vietnamese: “rác vào, rác ra” — nếu dữ liệu đầu vào tệ, kết quả đầu ra cũng sẽ tệ
This classic computing phrase is extremely common in AI and data engineering conversations. It means: the quality of your output is only as good as the quality of your input.
Usage Examples:
- “We spent three weeks cleaning the training data because, well — garbage in, garbage out. The model was useless until the data was consistent.”
- “The client complained the AI gave wrong answers, but honestly, the prompts they were sending were terrible. Classic garbage in, garbage out situation.”
When to use it: In team discussions, code reviews, when explaining why data quality matters to stakeholders who want quick results without proper data preparation.
🎧 Listening Comprehension Tips
Struggling to understand AI podcasts or conference talks? Here are 3 proven strategies for tonight:
Tip 1: Shadow the Speaker 🪞
Choose a 60-second clip from a tech talk (try “AI Engineering Summit” on YouTube). Listen once, then play it again and speak along simultaneously. Don’t worry about understanding everything — focus on matching the rhythm and stress patterns.
Tip 2: Use the “3 Passes” Method 📝
- Pass 1: Listen and note keywords you do recognise
- Pass 2: Listen and fill in gaps around those keywords
- Pass 3: Listen for emotion, hesitation, and confidence — this tells you how the speaker feels about the topic
Tip 3: Pause on Unknown Words ⏸️
When you hear an unfamiliar word, pause the audio, say the word aloud 3 times, then continue. This trains your brain to decode sounds faster in real time.
Tonight’s recommended listening: Search for “Lex Fridman AI podcast” or “AI Engineer podcast” — these have clear, measured speech perfect for intermediate learners.
🗣️ Speaking Challenge — 60 Seconds
Your Mission: Explain a concept to a non-technical person
Set a timer for 60 seconds and record yourself (voice memo on your phone) explaining this:
“What is a large language model, and why does it sometimes give wrong answers?”
Your answer should include:
- ✅ What an LLM is (1–2 sentences)
- ✅ The word “hallucination” used correctly
- ✅ One reason hallucinations happen
- ✅ One way to reduce them (hint: RAG or fine-tuning!)
Sample Answer to Compare With:
“A large language model is an AI system trained on massive amounts of text data to understand and generate human language. However, LLMs can sometimes produce hallucinations — meaning they confidently state things that are factually incorrect, because they’re generating statistically likely text rather than looking up verified facts. One way to reduce this is through retrieval-augmented generation, where the model is connected to a reliable knowledge base before generating its response.”
🎤 Listen back to your recording. Ask yourself:
- Did I stress the right syllables?
- Did I speak at a comfortable pace, or too fast?
- Did I use the vocabulary correctly?
🌙 Evening Challenge
One tiny action before tomorrow morning:
📱 Open LinkedIn, find one AI-related post in English, and write a 3-sentence comment using at least two words from today’s vocabulary table.
Example: “Great insight! Fine-tuning on domain-specific data really does make a huge difference in reducing hallucinations. Our team saw similar results when we narrowed the context window to the most relevant documents.”
This takes 5 minutes and gets you real writing practice with an audience. That’s how vocabulary moves from your notes into your brain permanently.
📊 Progress Tracker
| Session | Topic | Status |
|---|---|---|
| Mon Morning | Technical vocabulary foundations | ✅ |
| Mon Evening | Speaking + review | ✅ |
| Tue Morning | AI vocabulary deep dive | ✅ |
| Tue Evening | AI vocab review + listening tips | ← You are here |
| Wed Morning | Architecture vocabulary | 🔜 |
| Wed Evening | Explain complex systems simply | 🔜 |
🔑 Quick Reference — Tonight’s Key Terms
| Term | IPA | Vietnamese |
|---|---|---|
| inference | /ˈɪn.fər.əns/ | suy luận / chạy mô hình |
| hallucination | /həˌluː.sɪˈneɪ.ʃən/ | ảo giác AI |
| fine-tuning | /ˈfaɪnˌtjuː.nɪŋ/ | tinh chỉnh mô hình |
| retrieval-augmented | /rɪˈtriː.vəl ɔːɡˈmen.tɪd/ | tăng cường truy xuất |
| context window | /ˈkɒn.tekst ˈwɪn.dəʊ/ | cửa sổ ngữ cảnh |
Great work tonight, Thuan! 🌙 You’re building a vocabulary that will make you sound confident and natural in any AI engineering conversation. See you tomorrow morning for architecture vocab!
— Your English Coach 🎓