🌅 Tuesday Morning — AI & Machine Learning English
Good morning! Today we dive into the language of Artificial Intelligence and Machine Learning — one of the hottest domains in tech. Mastering this vocabulary will help you read papers, join discussions, and present your AI work confidently in English.
🔤 Word of the Day: Inference
| Word | inference |
|---|---|
| IPA | /ˈɪn.fər.əns/ |
| Part of Speech | noun |
| Vietnamese | suy luận / giai đoạn dự đoán (trong AI) |
📖 What does it mean in AI?
In machine learning, inference is the process of using a trained model to make predictions on new data — as opposed to training, which is when the model learns from data.
Think of it this way: training = studying for an exam. Inference = actually taking the exam.
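The training/inference split can be sketched in a few lines of plain Python. This is a toy one-feature classifier, not a real ML library: `train` and `infer` are illustrative names, and the "model" is just a single learned threshold.

```python
# Training: the model learns a parameter from labeled data.
# Inference: the trained model predicts on data it has never seen.

def train(samples, labels):
    """Learn a threshold halfway between the two class means."""
    pos = [x for x, y in zip(samples, labels) if y == 1]
    neg = [x for x, y in zip(samples, labels) if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def infer(model, x):
    """Use the trained model to predict a label for a new input x."""
    return 1 if x >= model else 0

threshold = train([1.0, 2.0, 8.0, 9.0], [0, 0, 1, 1])  # training phase
print(infer(threshold, 7.5))  # inference phase -> prints 1
print(infer(threshold, 1.5))  # -> prints 0
```

Training happens once and is expensive; inference happens every time a user sends a request, which is why phrases like "inference latency" and "inference endpoint" come up so often in engineering meetings.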
✏️ Example Sentences
- “We run inference on the edge device to reduce latency.” → Chúng tôi chạy inference trên thiết bị biên để giảm độ trễ.
- “The model’s inference speed improved after we switched to a quantized version.” → Tốc độ inference của mô hình được cải thiện sau khi chúng tôi chuyển sang phiên bản lượng tử hóa.
- “During inference, the LLM generates tokens one at a time in an autoregressive manner.” → Trong quá trình inference, LLM tạo token từng cái một theo cách tự hồi quy.
🔗 Resources
- 📖 Cambridge Dictionary — inference
- 🎧 YouGlish — hear “inference” used in real speech
- 🎬 Andrej Karpathy explains LLM inference
📚 Vocabulary Table: AI & Machine Learning Phrases
| Phrase | Vietnamese | Example Sentence |
|---|---|---|
| hallucination | ảo giác (AI bịa đặt thông tin) | “The model produced a hallucination — it cited a paper that doesn’t exist.” |
| fine-tuning | tinh chỉnh mô hình | “We fine-tuned GPT-4o on our internal dataset to improve domain accuracy.” |
| retrieval-augmented generation (RAG) | sinh văn bản có tăng cường truy xuất | “RAG helps reduce hallucinations by grounding the model in real documents.” |
| context window | cửa sổ ngữ cảnh | “This model has a 128k token context window, so it can handle long documents.” |
| prompt engineering | kỹ thuật thiết kế câu lệnh | “Good prompt engineering can dramatically improve output quality without retraining.” |
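To make the RAG row in the table concrete, here is a toy sketch in plain Python. A real pipeline would use embeddings and a vector database; this sketch substitutes simple word overlap, and all names (`retrieve`, `build_prompt`, the sample documents) are illustrative, not from any real library.

```python
import string

# Toy corpus standing in for a vector database.
docs = [
    "Refund policy: customers can get refunds within 30 days of purchase.",
    "Office hours: Monday to Friday, 9am to 5pm.",
]

def tokens(text):
    """Lowercase and strip punctuation so 'policy?' matches 'policy'."""
    clean = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(clean.split())

def retrieve(query, documents):
    """Pick the document sharing the most words with the query (toy retriever)."""
    q = tokens(query)
    return max(documents, key=lambda d: len(q & tokens(d)))

def build_prompt(query, documents):
    """Ground the LLM in retrieved text so it cannot invent an answer."""
    context = retrieve(query, documents)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is the refund policy?", docs))
```

The shape is the same as in production: retrieve relevant documents first, then put them in the prompt, so the model answers from real text instead of hallucinating.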
🗣️ Pronunciation Guide
Breaking down: “inference”
IN - fer - ence
/ɪn/ /fər/ /əns/
- IN → short “i” as in it, is, in — NOT “eye-n”
- fer → weak syllable, like “fur” with a schwa — /fər/
- ence → /əns/ — soft ending, almost swallowed
🎯 Common mistake: Vietnamese speakers often say “in-FER-ence” (stressing the second syllable). The stress falls on the first syllable: IN-fer-ence.
🔊 Practice Sentence — Read aloud 3 times:
“Fast inference on large language models requires optimized hardware and quantization techniques.”
| Round | Focus |
|---|---|
| 1st read | Slow — focus on each word’s pronunciation |
| 2nd read | Normal speed — natural rhythm |
| 3rd read | Confident — as if presenting to your team |
Key words to nail:
- inference → /ˈɪn.fər.əns/
- language → /ˈlæŋ.ɡwɪdʒ/ (not “lan-guage”)
- quantization → /ˌkwɒn.tɪˈzeɪ.ʃən/
- techniques → /tekˈniːks/
✍️ Exercise 1: Fill in the Blank
Choose the correct word: inference / hallucination / fine-tuning / RAG / context window
- “The ________ of this model is only 4,000 tokens, so it can’t process long PDFs.”
- “To prevent the chatbot from making up facts, we implemented a ________ pipeline with a vector database.”
- “The sales demo failed because the model produced a ________ — it invented a product feature that doesn’t exist.”
- “We did ________ on a dataset of customer support tickets to make the model more helpful.”
- “The ________ endpoint is exposed via REST API and handles about 500 requests per second.”
✅ Click to see answers
- context window
- RAG (retrieval-augmented generation)
- hallucination
- fine-tuning
- inference
🔄 Exercise 2: Translate into English
Translate these Vietnamese sentences into natural English. Think about how a senior AI engineer would say this in a team meeting.
- “Mô hình này bị ảo giác rất nhiều khi được hỏi về dữ liệu thời gian thực.”
- “Chúng ta cần tinh chỉnh mô hình trên tập dữ liệu nội bộ của công ty.”
- “Cửa sổ ngữ cảnh 128k token cho phép chúng ta truyền toàn bộ codebase vào prompt.”
✅ Click to see suggested answers
- “This model hallucinates a lot when asked about real-time data.” → Or: “The model tends to hallucinate frequently on real-time queries.”
- “We need to fine-tune the model on the company’s internal dataset.”
- “The 128k token context window lets us pass the entire codebase into the prompt.”
💡 Idiom of the Day: “garbage in, garbage out”
| Idiom | garbage in, garbage out |
|---|---|
| Abbreviation | GIGO |
| Vietnamese | “rác vào, rác ra” — nếu dữ liệu đầu vào tệ, kết quả đầu ra cũng tệ |
This classic computing phrase is heavily used in AI/ML discussions to explain why data quality matters as much as model architecture.
📌 Examples in context:
- “Our model’s predictions were terrible last quarter — classic garbage in, garbage out. We hadn’t cleaned the training data properly.” → Dự đoán của mô hình rất tệ trong quý vừa rồi — điển hình của garbage in, garbage out. Chúng tôi đã không làm sạch dữ liệu huấn luyện đúng cách.
- “Before we blame the LLM, let’s audit our prompts — garbage in, garbage out applies to prompt engineering too.” → Trước khi đổ lỗi cho LLM, hãy kiểm tra lại prompt của chúng ta — garbage in, garbage out cũng áp dụng cho prompt engineering.
📺 Recommended Watching
Level up your AI English with these channels:
| Resource | Why Watch | Link |
|---|---|---|
| Andrej Karpathy | Former Tesla/OpenAI engineer; clear, deep explanations of LLMs | youtube.com/@AndrejKarpathy |
| Yannic Kilcher | ML paper walkthroughs with technical English commentary | youtube.com/@YannicKilcher |
| AI Explained | Accessible AI news and model breakdowns — great for listening practice | youtube.com/@aiexplained-official |
🎯 Tip: Watch with English subtitles (not Vietnamese). Pause and repeat sentences you find hard to follow.
🌄 Today’s Challenge
“One tiny action, done consistently, beats big plans done rarely.”
Your challenge for today:
In your next Slack message or PR description, use the word “inference” correctly in context.
For example:
- “This endpoint handles model inference — we should add a timeout.”
- “Inference latency increased after the model update. Investigating now.”
If you don’t have a chance at work, write one sentence in a comment on any AI-related GitHub issue or tweet/post.
⏱ Takes less than 2 minutes. Do it before lunch.
📊 Quick Recap
| Item | What You Learned |
|---|---|
| 🔤 Word | inference /ˈɪn.fər.əns/ — suy luận / giai đoạn dự đoán |
| 📚 Phrases | hallucination, fine-tuning, RAG, context window, prompt engineering |
| 💡 Idiom | garbage in, garbage out — rác vào, rác ra |
| 🗣️ Pronunciation | Stress on IN-fer-ence, not in-FER-ence |
| ✍️ Practice | 2 exercises covering fill-in-blank and translation |
See you this afternoon for the Afternoon session! 🚀
— Your English Coach at luonghongthuan.com