Educational Design Models


  • View profile for Andriy Burkov

    PhD in AI, author of 📖 The Hundred-Page Language Models Book and 📖 The Hundred-Page Machine Learning Book

    486,214 followers

    Different LLMs are good at some tasks but not so good at others; for example, some might excel in instruction following but not in code generation. Combining models is a well-established technique in machine learning called "ensembling," but it typically works by averaging predictions or voting. This paper introduces Mixture-of-Agents, an "ensemble" framework for LLMs. It works in layers: several models independently generate initial responses to a prompt (the authors call these models "proposers"), then their outputs are fed to the next layer where other models called "aggregators" synthesize and refine them. A system built from open-source models outperformed GPT-4o (the paper was published in 2024) on standard benchmarks while remaining cheaper than GPT-4o at comparable quality. Let the paper talk to you on ChapterPal: https://lnkd.in/eN3b9ZdU Download the PDF: https://lnkd.in/eY_QPTzW
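The layered proposer/aggregator flow described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: each "model" is stubbed as a plain function, where a real system would call a different LLM API at each step.

```python
# Minimal sketch of Mixture-of-Agents layering. Each proposer answers
# independently; an aggregator model then synthesizes the candidates.
# All function bodies are stand-ins for real LLM calls.

def proposer_a(prompt: str) -> str:
    return f"A's answer to: {prompt}"

def proposer_b(prompt: str) -> str:
    return f"B's answer to: {prompt}"

def aggregator(prompt: str, candidates: list) -> str:
    # A real aggregator is prompted with the original question plus all
    # candidate answers and asked to synthesize a refined response.
    merged = " | ".join(candidates)
    return f"Synthesis of [{merged}] for: {prompt}"

def mixture_of_agents(prompt: str, layers: list, aggregate) -> str:
    # First layer: proposers answer the raw prompt.
    responses = [p(prompt) for p in layers[0]]
    # Later layers: each model sees the previous layer's outputs.
    for layer in layers[1:]:
        responses = [p(f"{prompt}\nPrevious answers: {responses}") for p in layer]
    # Final aggregation step produces the single output.
    return aggregate(prompt, responses)

answer = mixture_of_agents("What is 2+2?", [[proposer_a, proposer_b]], aggregator)
print(answer)
```

The key design point is that aggregators see all proposer outputs at once, so weaknesses of one model can be compensated by another before a final answer is produced.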

  • View profile for Aishwarya Srinivasan
    623,400 followers

If you’re an AI engineer, product builder, or researcher, understanding how to specialize LLMs for domain-specific tasks is no longer optional. As foundation models grow more capable, the real differentiator will be: how well can you tailor them to your domain, use case, or user? Here’s a comprehensive breakdown of the 3-tiered landscape of Domain Specialization of LLMs.

1️⃣ External Augmentation (Black Box)
No changes to the model weights, just enhancing what the model sees or does.
→ Domain Knowledge Augmentation
Explicit: Feeding domain-rich documents (e.g. PDFs, policies, manuals) through RAG pipelines.
Implicit: Allowing the LLM to infer domain norms from pretraining corpora without direct supervision.
→ Domain Tool Augmentation
LLMs call tools: Use function calling or MCP to let LLMs fetch real-time domain data (e.g. stock prices, medical info).
LLMs embodied in tools: Think of copilots embedded within design, coding, or analytics tools. Here, LLMs become a domain-native interface.

2️⃣ Prompt Crafting (Grey Box)
We don’t change the model, but we engineer how we interact with it.
→ Discrete Prompting
Zero-shot: The model generates without seeing examples.
Few-shot: Handpicked examples are given inline.
→ Continuous Prompting
Task-dependent: Prompts optimized per task (e.g. summarization vs. classification).
Instance-dependent: Prompts tuned per input using techniques like prefix-tuning or in-context gradient descent.

3️⃣ Model Fine-tuning (White Box)
This is where the real domain injection happens: modifying weights.
→ Adapter-based Fine-tuning
Neural Adapters: Plug-in layers trained separately to inject new knowledge.
Low-Rank Adapters (LoRA): Efficient parameter updates with minimal compute cost.
Integrated Frameworks: Architectures that support multiple adapters across tasks and domains.
→ Task-oriented Fine-tuning
Instruction-based: Datasets like FLAN or Self-Instruct used to tune the model for task following.
Partial Knowledge Update: Selective weight updates focused on new domain knowledge without catastrophic forgetting.

My two cents as someone building AI tools and advising enterprises:
🫰 Choosing the right specialization method isn’t just about performance; it’s about control, cost, and context.
🫰 If you’re in high-risk or regulated industries, white-box fine-tuning gives you interpretability and auditability.
🫰 If you’re shipping fast or dealing with changing data, black-box RAG and tool augmentation might be more agile.
🫰 And if you’re stuck in between? Prompt engineering can give you 80% of the result with 20% of the effort.
Save this for later if you’re designing domain-aware AI systems. Follow me (Aishwarya Srinivasan) for more AI insights!
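The LoRA idea from the white-box tier can be sketched with plain NumPy: instead of updating a full weight matrix W (d × k), you train two small factors, B (d × r) and A (r × k), whose product is a low-rank update. Shapes, rank, and names below are illustrative, not any specific library's API.

```python
import numpy as np

# Hedged sketch of the low-rank adapter (LoRA) idea: W stays frozen,
# only the small factors A and B would receive gradients in training.

d, k, r = 64, 64, 4                   # layer dims and adapter rank
rng = np.random.default_rng(0)

W = rng.normal(size=(d, k))           # frozen pretrained weights
A = rng.normal(size=(r, k)) * 0.01    # trainable low-rank factor
B = np.zeros((d, r))                  # init to zero so the update starts at 0

def forward(x):
    # Effective weight is W + B @ A; the full matrix is never retrained.
    return x @ (W + B @ A).T

x = rng.normal(size=(1, k))
print(forward(x).shape)               # (1, 64)

# Parameter count: full fine-tune vs the two LoRA factors.
print(d * k, "vs", r * (d + k))       # 4096 vs 512
```

Even at this toy scale, the adapter trains 8x fewer parameters than a full update, which is why LoRA is the default choice when compute or storage is the constraint.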

  • View profile for Mohsin Memon

    CEO at Evivve | Turning Strategy into Measurable Change | Creator of the AFERR Model

    21,674 followers

What if you had a simple guide to understanding how your learners’ brains work? Would you use it? As someone working at the intersection of games, learning, and neuroscience, I know that understanding the brain can seem daunting. It’s complex—but with the right framework, it becomes a bit more accessible and actionable for those of us designing and facilitating learning experiences. Through my work with Evivve (20,000 game containers), I’ve distilled the brain’s engagement process into five key stages, called the AFERR model: Activation, Forecasting, Experimentation, Realization, and Reflection. These stages reveal how learners process and respond to new experiences, and understanding them can help us as learning professionals to design more meaningful, impactful sessions. 🧠 I’ve attached a quick resource on the AFERR model to give you a look into each stage and some reflective questions to consider as you think about the learner’s journey. Here are some reflections to try as you explore these stages: 💎 Which of these processes aligns most with the goals of your learning experiences? 💎 Where could learners benefit from deeper reflection or experimentation in your sessions? 💎 How might understanding the AFERR model transform the way you design and facilitate learning? If these insights resonate, I’ll be sharing more on AFERR and cognitive engagement at my keynote this weekend at Indian Institute of Technology, Madras with some incredible voices in the industry. And for more on my recent UN talk, check the comments for a link. Would love to hear how this model connects with your approach to learning design in the comments! #aferr #learningdesign #neuroscience #cognitivescience #Evivve #facilitation

  • View profile for Sahar Mor

    I help researchers and builders make sense of AI | ex-Stripe | aitidbits.ai | Angel Investor

    41,752 followers

    Researchers from Oxford University just achieved a 14% performance boost in mathematical reasoning by making LLMs work together like specialists in a company. In their new MALT (Multi-Agent LLM Training) paper, they introduced a novel approach where three specialized LLMs - a generator, verifier, and refinement model - collaborate to solve complex problems, similar to how a programmer, tester, and supervisor work together. The breakthrough lies in their training method: (1) Tree-based exploration - generating thousands of reasoning trajectories by having models interact (2) Credit attribution - identifying which model is responsible for successes or failures (3) Specialized training - using both correct and incorrect examples to train each model for its specific role Using this approach on 8B parameter models, MALT achieved relative improvements of 14% on the MATH dataset, 9% on CommonsenseQA, and 7% on GSM8K. This represents a significant step toward more efficient and capable AI systems, showing that well-coordinated smaller models can match the performance of much larger ones. Paper https://lnkd.in/g6ag9rP4 — Join thousands of world-class researchers and engineers from Google, Stanford, OpenAI, and Meta staying ahead on AI http://aitidbits.ai
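The generator → verifier → refinement pipeline summarized above can be sketched with stub functions. This only illustrates the roles and data flow; in MALT each role is a separately trained LLM, and the stub logic here is invented for the example.

```python
# Sketch of a three-role pipeline: a generator drafts an answer, a
# verifier critiques it, and a refiner revises using that feedback.
# All three "models" are stand-in functions, not trained LLMs.

def generator(question: str) -> str:
    # A generator model would produce a reasoning trace and answer.
    return "4" if question == "2+2?" else "unsure"

def verifier(question: str, answer: str) -> bool:
    # A verifier model would check the reasoning; we stub a simple test.
    return answer != "unsure"

def refiner(question: str, answer: str, ok: bool) -> str:
    # The refinement model keeps verified answers and revises the rest.
    return answer if ok else f"revised answer for {question}"

def solve(question: str) -> str:
    draft = generator(question)
    ok = verifier(question, draft)
    return refiner(question, draft, ok)

print(solve("2+2?"))    # -> 4
print(solve("17*23?"))  # -> revised answer for 17*23?
```

MALT's contribution is less the pipeline shape than the training method: credit attribution over interaction trees tells each role which of its outputs led to success or failure, so each model is tuned specifically for its role.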

  • View profile for Justin Seeley

    Sr. eLearning Evangelist, Adobe | L&D Community Advocate

    12,465 followers

Storytelling is one of the most underused tools in eLearning. Most designers think of it as decoration—a nice-to-have wrapper for the “real” content. However, it's the story that gives content its meaning. It’s how people make sense of information and turn it into experience. When a course tells a good story, learners stop clicking through slides and start caring about what happens next. That shift from awareness to investment is where learning begins. To build that kind of experience, I use what I call the STORY Method.

1. Situation
Begin with a realistic moment from the learner’s world—something familiar enough to feel possible, but specific enough to pull them in.

2. Tension
Show what’s at stake. Every story needs a challenge, a conflict, or a decision that matters. Without pressure, there’s no reason to pay attention.

3. Options
Give the learner room to choose. Let them explore different paths or perspectives so they feel responsible for what happens next.

4. Result
Reveal the outcome. Make the consequences visible and connect them to the underlying principle or skill you want to teach.

5. Your Move
Ask them to act or reflect. Invite them to apply what they've learned or to consider how they would handle a similar situation.

Good storytelling doesn’t need fancy visuals or complex characters. It just needs a clear situation, meaningful stakes, and a path that lets the learner discover the lesson for themselves. When done well, a story turns information into experience.

  • View profile for Pan Wu

    Senior Data Science Manager at Meta

    51,298 followers

    There’s been a lot of discussion about how Large Language Models (LLMs) power customer-facing features like chatbots. But their impact goes beyond that—LLMs can also enhance the backend of machine learning systems in significant ways. In this tech blog, Coupang’s machine learning engineers share how the team leverages LLMs to advance existing ML products. They first categorized Coupang’s ML models into three key areas: recommendation models that personalize shopping experiences and optimize recommendation surfaces, content understanding models that enhance product, customer, and merchant representation to improve shopping interactions, and forecasting models that support pricing, logistics, and delivery operations. With these existing ML models in place, the team integrates LLMs and multimodal models to develop Foundation Models, which can handle multiple tasks rather than being trained for specific use cases. These models improve customer experience in several ways. Vision-language models enhance product embeddings by jointly modeling image and text data; weak labels generated by LLMs serve as weak supervision signals to train other models. Additionally, LLMs also enable a deeper understanding of product data, including titles, descriptions, reviews, and seller information, resulting in a single LLM-powered categorizer that classifies all product categories with greater precision. The blog also dives into best practices for integrating LLMs, covering technical challenges, development patterns, and optimization strategies. For those looking to elevate ML performance with LLMs, this serves as a valuable reference. 
#MachineLearning #DataScience #LLM #LargeLanguageModel #AI #SnacksWeeklyonDataScience – – –  Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts:    -- Spotify: https://lnkd.in/gKgaMvbh   -- Apple Podcast: https://lnkd.in/gj6aPBBY    -- Youtube: https://lnkd.in/gcwPeBmR https://lnkd.in/gvaUuF4G
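The "weak labels from an LLM" pattern mentioned in the Coupang summary can be sketched as a tiny pipeline: an LLM labels unlabeled items, and those weak labels train a cheap downstream classifier. The LLM is stubbed as a keyword rule here, and the product titles and toy classifier are invented for illustration.

```python
# Sketch of weak supervision: an "LLM" (stubbed) assigns noisy labels,
# and a trivial keyword classifier is trained on the weakly labeled set.

def llm_weak_label(title: str) -> str:
    # Stand-in for prompting an LLM: "electronics or grocery?"
    return "electronics" if any(w in title.lower() for w in ("usb", "hdmi", "ssd")) else "grocery"

titles = ["USB-C cable 1m", "Organic bananas", "1TB SSD drive", "Whole milk"]
weak_labels = [llm_weak_label(t) for t in titles]

# "Train": count token/label co-occurrences on the weakly labeled data.
vocab = {}
for title, label in zip(titles, weak_labels):
    for tok in title.lower().split():
        vocab.setdefault(tok, {}).setdefault(label, 0)
        vocab[tok][label] += 1

def classify(title: str) -> str:
    # Score each label by the training counts of the title's tokens.
    scores = {"electronics": 0, "grocery": 0}
    for tok in title.lower().split():
        for label, n in vocab.get(tok, {}).items():
            scores[label] += n
    return max(scores, key=scores.get)

print(classify("SSD enclosure"))  # electronics
print(classify("Fresh milk"))     # grocery
```

The appeal of the pattern is cost: the expensive LLM labels data once offline, while the cheap model it supervises serves every request at inference time.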

  • View profile for Kuldeep Singh Sidhu

    Senior Data Scientist @ Walmart | BITS Pilani

    15,724 followers

All the way from Korea comes Mentor-KD, a novel approach that significantly improves the reasoning abilities of small language models. Mentor-KD introduces an intermediate-sized "mentor" model to augment training data and provide soft labels during knowledge distillation from large language models (LLMs) to smaller models.

Broadly, it’s a two-stage process:
1) Fine-tune the mentor on filtered Chain-of-Thought (CoT) annotations from an LLM teacher.
2) Use the mentor to generate additional CoT rationales and soft probability distributions.

The student model is then trained using:
- CoT rationales from both the teacher and mentor (rationale distillation).
- Soft labels from the mentor (soft label distillation).

Results show that Mentor-KD consistently outperforms baselines, with up to 5% accuracy gains on some tasks. Mentor-KD is especially effective in low-resource scenarios, achieving comparable performance to baselines while using only 40% of the original training data. This work opens up exciting possibilities for making smaller, more efficient language models better at complex reasoning tasks. What are your thoughts on this approach?
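The soft-label distillation term mentioned above is typically a KL divergence between the student's and the mentor's temperature-softened output distributions. A minimal NumPy sketch, with the temperature and toy logits chosen for illustration rather than taken from the paper:

```python
import numpy as np

# Sketch of soft-label knowledge distillation: the student is trained
# to match the mentor's softened probability distribution.

def softmax(z, T=1.0):
    # Temperature-softened softmax; higher T flattens the distribution.
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distill_loss(student_logits, mentor_logits, T=2.0):
    # KL(mentor || student) on temperature-softened distributions,
    # the usual soft-label objective in knowledge distillation.
    p = softmax(mentor_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

mentor = [2.0, 0.5, -1.0]
close = [1.9, 0.6, -0.9]   # student nearly matches the mentor
far = [-1.0, 0.5, 2.0]     # student's preference is reversed

print(distill_loss(close, mentor) < distill_loss(far, mentor))  # True
```

Soft labels carry more signal than hard ones because the mentor's full distribution encodes how plausible the wrong answers are, not just which answer is right.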

  • View profile for Aparna Gulawani

    Learning & Development Partner

    5,151 followers

“I used to design training that was informative. Now, I aim for impact.” That shift happened when I started using Bloom’s Taxonomy—not as an academic framework, but as a real design tool. So what is it? Bloom’s Taxonomy is a layered model that helps structure learning outcomes in six stages: Remember → Understand → Apply → Analyze → Evaluate → Create. It moves learners from basic awareness to higher-order thinking and doing. I recently used this in a workshop with architecture students on mindset and responsiveness: We explored what mindset really means (Understand). Looked at how it shapes behavior (Analyze). Unpacked fixed vs growth mindset (Evaluate). Then tackled a group challenge where they applied the growth mindset in action (Apply). Finally, they reflected: What did I learn? How will I use this? (Create). The session didn’t just leave them with new ideas—it left them with new ways of thinking and responding. Turns out, when you design learning like a staircase, people don’t just attend your session. They climb! P.S.: Every flower needs sunlight and space to grow. In my sessions, I add one more invisible petal to Bloom’s flower—a Safe Space. A space to pause, reflect, stumble, ask, unlearn, and try again. Because without it, no learning really blooms.

  • View profile for Jessica C.

    General Education Teacher

    5,805 followers

    Learning flourishes when students are exposed to a rich tapestry of strategies that activate different parts of the brain and heart. Beyond memorization and review, innovative approaches like peer teaching, role-playing, project-based learning, and multisensory exploration allow learners to engage deeply and authentically. For example, when students teach a concept to classmates, they strengthen their communication, metacognition, and confidence. Role-playing historical events or scientific processes builds empathy, critical thinking, and problem-solving. Project-based learning such as designing a community garden or creating a presentation fosters collaboration, creativity, and real-world application. Multisensory strategies like using manipulatives, visuals, movement, and sound especially benefit neurodiverse learners, enhancing retention, focus, and emotional connection to content. These methods don’t just improve academic outcomes they cultivate lifelong skills like adaptability, initiative, and resilience. When teachers intentionally layer strategies that match students’ strengths and needs, they create classrooms that are inclusive, dynamic, and deeply empowering. #LearningInEveryWay

  • View profile for Srishti Sehgal

    I help L&D teams design training people finish and use | Founder, Field | Building Career Curiosity

    11,619 followers

Most learning experiences fail. Not because they lack content. Not because they aren’t engaging. But because they confuse motion with action.

- Learners finish an interactive course—but can’t apply a single concept.
- Employees earn certifications—but their performance stays the same.
- Teams attend workshops—but nothing changes in how they work.

Your beautifully designed courses might be keeping learners busy without moving them forward. The difference between motion and action explains why so many well-designed learning experiences fail to create real change.

Motion 🔄 vs. Action 🛠️ in Learning Design
Motion is consuming information—watching videos, reading content, clicking through slides. Action is applying knowledge—practicing skills, making decisions, solving problems. Motion FEELS productive. Action IS productive.

❌ What doesn’t work:
- Content-heavy modules with no real-world application
- Knowledge checks that test memory, not mastery
- Gamification that rewards progress, not proficiency
- Beautiful interfaces that prioritize scrolling over doing

✅ What works instead:
- Micro-challenges that force immediate application
- Project-based assessments with real-world constraints
- Deliberate practice with quick feedback loops
- "Demo days" where learners publish/present their work

3 Common Motion Traps 🪤
1️⃣ The Endless Content Cycle: Overloading learners with information but giving them no space to apply it. A 40-page module doesn’t drive change—practice does.
2️⃣ The Engagement Illusion: Designing for clicks, badges, and completion rates instead of real skill-building. Just because learners show up doesn’t mean they’re growing.
3️⃣ The Passive Learning Trap: Building "Netflix for learning" experiences that entertain but don’t transform. Learning feels good—but does it change behavior?

What to Do Next? 💡
- Audit your learning experience. Calculate the ratio of consumption time vs. creation time for your learners.
- If learners spend more than 50% consuming, redesign for action.

The best learning designers don’t create the most content. They create the most transformation. Are you designing for motion or action?
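The audit described above amounts to simple arithmetic over a course outline. A tiny sketch, with activity names and minutes invented for illustration:

```python
# Tally minutes spent consuming vs creating, and flag a course whose
# consumption share exceeds the 50% threshold suggested in the post.

activities = [
    ("watch intro video", "consume", 12),
    ("read case study", "consume", 18),
    ("build a sample project", "create", 20),
    ("present to peers", "create", 10),
]

consume = sum(m for _, kind, m in activities if kind == "consume")
create = sum(m for _, kind, m in activities if kind == "create")
ratio = consume / (consume + create)

print(f"consumption share: {ratio:.0%}")  # 50%
if ratio > 0.5:
    print("redesign for action")
```

Even a rough tally like this makes the motion/action balance visible before any redesign work begins.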
