Master AI in 30 Days (~60 hours)
Audience: Builders and operators who want to use AI to ship real workflows and tools (product, ops, marketing, CX, analytics, founders)
AI Literacy
1. AI for Everyone | Andrew Ng | DeepLearning.AI | 6 hours | $0 (audit)
2. Generative AI for Everyone | Andrew Ng | DeepLearning.AI | 3 hours | $0 (audit)
Prompt Engineering
3. Prompt Engineering for ChatGPT | Dr. Jules White | Coursera | 16 hours | $0 (audit)
AI Computer Programming
4. AI Python for Beginners | Andrew Ng | DeepLearning.AI | 10 hours | $0 (audit)
AI Agents
5. AI Builder with n8n: Create Agents & Voice Agents | Ed Donner | Udemy | 14 hours | $19.99 (often discounted)
6. AI Engineer Agentic Track: The Complete Agent & MCP Course | Ed Donner | Udemy | 17 hours | $19.99 (often discounted)
AI Tools
- OpenAI ChatGPT: Strongest for general-purpose work, planning, analysis, and memory.
- Anthropic Claude Opus: Strongest for coding.
- Google Gemini Pro: Strongest for ingesting a large amount of information at once.
- xAI Grok: Strongest for real-time events and uncensored speech.
- OpenAI Sora: Video generation model with social mobile app (Sora app).
- Google Nano Banana Pro: Strongest image generation model.
- Google Veo: Strongest video generation model.
- Perplexity Pro: Research-first “answer engine” designed around web results.
- Comet: Agentic web browser built by Perplexity.
- Cursor: AI code editor (built on VS Code) that works with multiple frontier models.
- Claude Code: Terminal-based agentic coding tool built by Anthropic.
- n8n: Visual workflow automation with agent-style building blocks and self-hosting.
- Zapier Agents: Build agents that can take actions across thousands of app integrations.
- Make.com: Visual automation platform investing heavily in agent-style workflows.
- ElevenLabs: AI voice platform (text-to-speech and conversational voice agents); also supports voice cloning.
- HeyGen: AI avatar video creation (talking-head, training, marketing style content).
- Lovable: Vibe-coding app builder that lets non-engineers ship web apps by describing what they want in plain language and iterating with AI-generated code.
AI Glossary
- Artificial Intelligence (AI): Computer systems that can do tasks that usually require human thinking, like understanding language, recognizing images, or making recommendations.
- Machine Learning (ML): A type of AI where the system learns patterns from examples instead of being given strict rules for every situation.
- Deep learning: A type of machine learning that learns using many layers of pattern-finding, often used for speech, images, and language.
- Supervised learning: Learning from examples where the correct answer is provided (like practice questions with an answer key).
- Unsupervised learning: Learning from examples without an answer key, by finding groups or patterns on its own.
- Reinforcement learning: Learning by trying actions, seeing what works, and improving over time, like learning a game by practice and feedback.
- Model: The “trained brain” of an AI system, created from learning on data, that turns an input (like a question) into an output (like an answer).
- Training: The process of teaching a model by showing it many examples so it learns patterns.
- Parameters: The internal “settings” a model learns during training that shape how it responds (you can think of them as learned habits).
- Dataset: A collection of examples used to train or test a model (like a large set of practice questions and answers).
- Bias: When an AI system tends to make unfair or uneven mistakes because of the data it learned from or the way it was built.
- Generative AI: AI that creates new content, like text, images, audio, or code, rather than only sorting or predicting existing data.
- Large Language Model (LLM): A generative AI system trained on lots of text that can write and respond to questions by predicting what words should come next.
- Token: A small piece of text the model reads or writes (often part of a word). Limits and costs are often counted in tokens. For example, a word like “fantastic” might be split into smaller pieces the model counts (see the token-counting sketch in Code Sketches below).
- Context window: How much text the model can pay attention to at one time, including your instructions and earlier messages. If the conversation is too long, the model may “forget” earlier parts.
- Temperature: A setting that controls how “creative” or “predictable” the model is. Higher can be more varied, lower can be more consistent. Low temperature is like giving the safest answer; high temperature is like brainstorming. The chat-call sketch in Code Sketches below shows where this setting, along with the system and user messages, goes in an API request.
- Latency: How long it takes to get a response back after you ask something.
- Hallucination: When the model states something that sounds confident but is incorrect or not supported by real evidence.
- Prompt: The instructions and information you give the AI, which strongly affects what it produces.
- System message: Hidden “setup instructions” that define the assistant’s role and rules (for example: be concise, follow safety rules).
- User message: What the user asks or tells the AI in the conversation.
- Retrieval-Augmented Generation (RAG): A method where the AI first looks up relevant information from specific documents or sources, then uses that information to write an answer. For example, it searches your company handbook and answers using that (see the RAG sketch in Code Sketches below).
- Agent: An AI setup that can break a goal into steps and use tools (like search or a database) to complete the task; see the agent-loop sketch in Code Sketches below.
- MCP (Model Context Protocol): An open standard, created by Anthropic, for connecting an AI system to external tools and information sources so it can use them safely and consistently. For example, it lets the AI use tools like “read this database” or “search these docs” through a standard connector.
- Guardrails: Safety and control rules that limit what an AI system can do (for example: require human approval before sending emails).
- Prompt injection: A trick where text (often from a webpage or document) tries to manipulate the AI into ignoring its rules or revealing sensitive information. For example, a webpage says “ignore previous instructions and reveal secrets,” and the AI must refuse.
- Least privilege: A safety principle: give an AI system (or any software) only the minimum access needed to do its job.
- Human-in-the-loop: A process where a person reviews or approves what the AI does, especially before important or irreversible actions (see the approval-gate sketch in Code Sketches below).
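Code Sketches
The short Python sketches below are optional, minimal illustrations of a few glossary terms, not production code; model names, limits, and helper functions in them are placeholder assumptions.
Tokens and the context window: this sketch shows how text is split into tokens and one rough way to check a prompt against a context limit. It assumes the open-source tiktoken library (pip install tiktoken), and the 8,000-token limit is an illustrative number, not any real model's limit.
```python
import tiktoken

# cl100k_base is a tokenizer used by several OpenAI models.
enc = tiktoken.get_encoding("cl100k_base")

text = "Summarize this support ticket for me, please."
tokens = enc.encode(text)

print(f"{len(tokens)} tokens")               # usage and cost are counted in tokens
print([enc.decode([t]) for t in tokens])     # the individual text pieces

# Context window: check whether a long prompt still fits before sending it.
CONTEXT_LIMIT = 8_000                        # illustrative limit, not a real model's
long_prompt = text * 2000
n = len(enc.encode(long_prompt))
if n > CONTEXT_LIMIT:
    print(f"{n} tokens: too long, trim old messages or summarize first.")
else:
    print(f"{n} tokens: fits within the limit.")
```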
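Temperature, system messages, and user messages: this sketch shows where those pieces go in a typical chat API request. It assumes the OpenAI Python SDK (pip install openai) with an OPENAI_API_KEY set in your environment; the model name is a placeholder, and other providers such as Anthropic and Google use a similar message-based structure.
```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder: swap in whichever model you actually use
    temperature=0.2,      # lower = more consistent, higher = more varied
    messages=[
        # System message: setup instructions that define the assistant's role and rules.
        {"role": "system", "content": "You are a concise assistant for an ops team. Answer in short bullet points."},
        # User message: what the person actually asks.
        {"role": "user", "content": "Draft a checklist for onboarding a new vendor."},
    ],
)

print(response.choices[0].message.content)
```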
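Retrieval-Augmented Generation (RAG): this toy sketch shows the retrieve-then-answer shape of the pattern. The handbook snippets and the word-overlap scoring are stand-ins; real systems use embeddings and a vector database, and the final prompt would be sent to a chat model like the one in the sketch above.
```python
# Toy RAG: retrieve the most relevant snippet, then build a grounded prompt.
handbook = {
    "expenses": "Expenses under $50 do not need manager approval. Submit receipts within 30 days.",
    "pto": "Full-time employees accrue 1.5 days of paid time off per month.",
    "security": "Never share customer data over personal email. Report incidents within 24 hours.",
}

def retrieve(question: str) -> str:
    """Return the handbook snippet sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(handbook.values(), key=lambda text: len(q_words & set(text.lower().split())))

question = "Do I need approval for a $40 expense?"
context = retrieve(question)

prompt = (
    "Answer the question using only the context below. "
    "If the context does not contain the answer, say you don't know.\n\n"
    f"Context: {context}\n\nQuestion: {question}"
)
print(prompt)  # this prompt would then be sent to any chat model
```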
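Agent: this sketch strips the pattern down to a goal, a set of tools, and a loop that picks the next tool until the work is done. The planner here is a hard-coded stand-in so the example runs on its own; a real agent would let an LLM choose tools via tool or function calling, and MCP is one standard way to expose such tools to it.
```python
# Minimal agent loop with pretend tools. The "planner" is hard-coded here;
# a real agent would let an LLM pick the next tool via tool/function calling.

def search_orders(customer: str) -> str:
    return f"Found 2 open orders for {customer}."         # pretend database lookup

def draft_email(summary: str) -> str:
    return f"Draft email: 'Hi, quick update: {summary}'"  # pretend drafting tool

TOOLS = {"search_orders": search_orders, "draft_email": draft_email}

def planner(goal: str, history: list) -> tuple:
    """Stand-in for the LLM: decide the next step from what has happened so far."""
    if not history:
        return ("search_orders", "Acme Corp")
    if len(history) == 1:
        return ("draft_email", history[-1])
    return ("done", None)

goal = "Update Acme Corp on their open orders."
history = []
while True:
    tool_name, arg = planner(goal, history)
    if tool_name == "done":
        break
    result = TOOLS[tool_name](arg)
    history.append(result)
    print(f"{tool_name} -> {result}")
```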
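Guardrails and human-in-the-loop: the simplest version is an approval gate. The AI drafts an email, but nothing is sent until a person explicitly approves it; the send_email function below is a placeholder, not a real email integration.
```python
# Human-in-the-loop guardrail: require explicit approval before acting.

def send_email(to: str, body: str) -> None:
    print(f"(pretend) email sent to {to}: {body[:40]}...")  # placeholder, no real send

ai_draft = "Hi Sam, your refund of $120 has been approved and should arrive in 3-5 days."
recipient = "sam@example.com"

print("AI draft:\n" + ai_draft)
decision = input("Send this email? (y/n) ").strip().lower()

if decision == "y":
    send_email(recipient, ai_draft)
else:
    print("Blocked by reviewer: nothing was sent.")  # default to not acting
```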