The Neural Maze

Education

Aguadulce, Almería · 2,274 followers

Become a real Machine Learning Engineer in a world full of hype

About us

Learn to build AI Systems that actually work. From principles to production.

Follow The Neural Maze's journey on:
📩 Substack: https://theneuralmaze.substack.com/
🧑‍💻 GitHub: https://github.com/neural-maze

Website
https://theneuralmaze.substack.com/
Industry
Education
Company size
1 employee
Headquarters
Aguadulce, Almería
Type
Self-Employed
Founded
2024
Specialties
machine learning, machine learning engineer, mlops engineer skills, mlops learning material, mlops engineer, generative ai, llm, ai engineering, ai engineer, data science, artificial intelligence, agents, agentic patterns, and agentic simulation

Locations

Updates

  • The Neural Maze reposted this

    Builders build (even on Saturdays 😎)

    7 projects to grow as an AI engineer 👇

    AI Engineers aren't built on tutorials. They're forged in end-to-end projects. That's why, today, I'll show you 7 hands-on projects for AI Engineers wanting to level up their skills.

    1. Agent Design Patterns
    Learn how to implement the Reflection, Planning, Tool Use and Multi-Agent patterns using Python and Groq LLMs. No LangChain, LlamaIndex or CrewAI. Just building everything from scratch!
    👉 https://lnkd.in/dEpjmiX4

    2. Rick LLM
    The ideal course for anyone wanting to learn LLM finetuning. You'll work with Rick & Morty transcripts, generating instruct datasets to finetune Llama 3.1 using the Unsloth AI library. After that, you'll deploy the finetuned LLM to Ollama.
    👉 https://lnkd.in/dRZ64dwn

    3. Twin Celebrity App
    You'll build a Twin Celebrity App using Qdrant, FaceNet embeddings, ZenML, Streamlit and Google Cloud Run.
    👉 https://lnkd.in/d7Jg4Hrp

    4. Ava, the WhatsApp Agent
    In this course, a collaboration with the one and only Jesús Copado, you'll learn how to build a multimodal agent connected to WhatsApp: build agentic workflows using LangGraph, implement TTS and STT pipelines, generate high-quality images using diffusion models, and integrate VLMs in your applications.
    👉 https://lnkd.in/dbvrR4ti

    5. PhiloAgents - When AI meets Philosophy
    My collaboration with Paul Iusztin! Learn how to build an AI agent simulation engine that brings historical philosophers to life in an interactive game environment.
    👉 https://lnkd.in/dDP7xHAj

    6. Kubrick, the MCP Video Processing Agent
    My collab with Alex Razvant, where we dive deep into MCP servers, video processing, and multimodal agents. We're going all in 😎
    👉 https://lnkd.in/dEh5QJe2

    7. Phone Calling Agents Course
    My new collaboration with Jesús Copado. Build an agent-powered call center using FastRTC, Superlinked, Twilio, and Runpod. It also includes a Live Edition: weekly 1–1.5h coding sessions + Q&A with premium subscribers.
    👉 https://lnkd.in/dszHxVDm

    Let me know which one you prefer! 👇
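    The Reflection pattern from project 1 fits in a few lines. Below is a minimal, dependency-free sketch of the generate → critique → revise loop; the `generate` and `critique` functions are deterministic stand-ins for real LLM calls (e.g. through the Groq SDK), so the names and loop shape are illustrative assumptions, not the course's actual code.

```python
from typing import Optional

def generate(prompt: str, feedback: Optional[str] = None) -> str:
    """Stand-in for an LLM generation call (e.g. via the Groq SDK)."""
    draft = f"Draft answer for: {prompt}"
    return draft + " [improved]" if feedback else draft

def critique(draft: str) -> Optional[str]:
    """Stand-in for an LLM critic call; returns feedback, or None when satisfied."""
    return None if "[improved]" in draft else "Tighten the answer."

def reflection_agent(prompt: str, max_iters: int = 3) -> str:
    """Reflection pattern: draft, critique, revise, until the critic approves."""
    draft = generate(prompt)
    for _ in range(max_iters):
        feedback = critique(draft)
        if feedback is None:                    # critic is satisfied: stop reflecting
            break
        draft = generate(prompt, feedback)      # revise the draft using the critique
    return draft
```

    In a real implementation both roles are separate chat-completion calls with different system prompts; the loop structure stays the same.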

  • The Neural Maze reposted this

    The best OCR model has only 0.9B parameters. And it runs entirely on your laptop! 👇

    GLM-OCR scored 94.62 on OmniDocBench V1.5, beating models 10x its size. Formulas, tables, complex layouts, code-heavy documents. It handles all of it.

    The secret? A two-stage pipeline that combines a layout detector with a vision-language decoder. Small, focused, and ridiculously effective.

    In this week's article at The Neural Maze, we show you how to run it locally step by step: Docker setup, hardware optimization, custom model configuration with Ollama, and real-world document parsing with the official SDK.

    The best part? No GPUs needed ... just your laptop! 💻

    📘 Full guide here → https://lnkd.in/eycUwn2M

    ♻️ Repost for more local deployments!
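    Running a vision model locally through Ollama boils down to one JSON request against its REST API. A minimal sketch, assuming the model has already been pulled; the `"glm-ocr"` tag and the prompt are illustrative stand-ins, not the article's actual configuration:

```python
import base64

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint

def build_ocr_request(image_bytes: bytes, model: str = "glm-ocr") -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint.

    Vision-capable models accept base64-encoded images via the `images`
    field. The "glm-ocr" tag is an assumption: use the tag you actually pulled.
    """
    return {
        "model": model,
        "prompt": "Extract all text from this document, preserving layout.",
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,  # return one complete response instead of chunks
    }

# To send it (with the Ollama server running):
#   import requests
#   resp = requests.post(OLLAMA_URL, json=build_ocr_request(open("doc.png", "rb").read()))
#   print(resp.json()["response"])
```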

  • The Neural Maze reposted this

    Finetuning is the most underrated AI skill. Here's the course to master it 👇

    Everyone talks about prompting, context engineering, agentic loops, etc. Almost nobody talks about finetuning! But finetuning is what turns a generic LLM into YOUR model (smaller, cheaper, and better at your specific task).

    👉 The problem? Most resources either bury you in math or throw code at you with zero context.

    So Antonio Zarauz Moreno and I built the Finetuning Sessions ... a course with a different philosophy. Every lesson has three deliverables:

    📄 Theory article: understand what's happening under the hood
    🔬 Hands-on lab: put it into practice with real code
    🎙️ Office hour: discuss, debate, and get your questions answered

    8 sessions covering the full pipeline: Pretraining → Supervised Finetuning → LoRA → QLoRA → RLHF → Reasoning (GRPO) → Multimodal Finetuning → Deployment

    A course from real builders to real builders. Start here! 👇
    🔗 https://lnkd.in/epSYGigk

    Repost to help someone who needs this! ♻️
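    The "smaller, cheaper" claim behind LoRA is easy to verify with arithmetic. A minimal NumPy sketch of the idea (dimensions, rank, and scaling chosen for illustration, not the course's actual code): freeze the pretrained weight W and learn only a low-rank update B·A, shrinking the trainable parameter count from d_out·d_in to r·(d_in + d_out).

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 1024, 1024, 8

W = rng.normal(size=(d_out, d_in))        # frozen pretrained weight
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))               # trainable up-projection; starts at zero,
                                          # so the adapted layer equals the original at init
scale = 1.0 / rank                        # stands in for LoRA's alpha/r scaling

def lora_forward(x: np.ndarray) -> np.ndarray:
    # Base layer output plus the low-rank update.
    return x @ W.T + (x @ A.T) @ B.T * scale

full_params = d_out * d_in                # what full finetuning would train
lora_params = rank * (d_in + d_out)       # what LoRA actually trains
print(f"trainable: {lora_params:,} vs full {full_params:,} "
      f"({100 * lora_params / full_params:.2f}%)")
```

    With rank 8 on a 1024×1024 layer, LoRA trains under 2% of the original parameters; QLoRA then also loads W in 4-bit precision to cut memory further.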

  • The Neural Maze reposted this

    The finetuning course we needed didn't exist. So we spent 8 weeks building it ... 👇

    Antonio Zarauz Moreno and I couldn't find the finetuning course we needed when we started. So we built it from scratch.

    Every lesson follows the same structure: understand the theory, build it with code, then discuss it live.

    8 lessons covering the full pipeline:
    → The Finetuning Landscape
    → Supervised Finetuning
    → LoRA
    → QLoRA
    → RLHF
    → GRPO
    → Multimodal Finetuning
    → LLM Deployment

    From pretraining to production across 24 deliverables.

    Huge thank you to Unsloth AI for sponsoring this course. If you've seen the labs, you've already seen their tools in action. 🦥

    The full course overview is now available! 👇
    🔗 https://lnkd.in/epSYGigk

    ♻️ Repost to help a builder who needs this

  • The Neural Maze reposted this

    Staying up to date in AI is NOT optional. But how to do it is your choice 👇

    Two types of people right now:

    🧘‍♂️ One reads foundational papers, builds mental models, and understands the why behind every new release. Calm. Focused. Retains everything.

    ☕ The other is 3 coffees in, eyes red, scrolling every thread, every launch, every hot take. Trying to keep up with everything. Remembering almost nothing.

    Here's what I've learned:

    ⛔ The people who win long-term in this space aren't the fastest scrollers.
    ✅ They're the deepest thinkers.

    You can chase noise, or you can build clarity. Stay updated ... Just don't lose your sanity doing it! 🙏

  • The Neural Maze reposted this

    Notebooks don't go to production. Here's the shift most ML teams miss 👇

    Working with Databricks over the last few years changed how I think about production ML. One idea I really like is MLOps Stacks: treating the entire ML lifecycle as code.

    Instead of deploying a model, you deploy the process that builds it:
    ➤ Feature engineering
    ➤ Training pipelines
    ➤ Testing
    ➤ CI/CD workflows
    ➤ Deployment infrastructure

    Everything becomes versioned, reproducible, and automated. With tools like MLflow, Unity Catalog, and Databricks Asset Bundles, experimentation and production stop being separate worlds.

    For teams trying to move from notebooks to production ML, this structure makes a huge difference.

    I'm thinking about sharing how to build a 4-stage recommender system on Databricks as part of my newsletter The Neural Maze. What do you think, builders? 🤔
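    "Deploy the process, not the model" can be made concrete without any platform. A minimal sketch, assuming toy stages (every function below is a hypothetical stand-in; real Databricks MLOps Stacks declare these stages in bundle configuration rather than a hand-rolled runner): when each lifecycle stage is a plain function, the whole pipeline can be versioned, unit-tested, and executed by CI/CD like any other code.

```python
# Each ML lifecycle stage is an ordinary function; the "deployment" is the
# code path that produces the model, not the model artifact itself.

def engineer_features(raw: list) -> list:
    # Derive a feature column from the raw records.
    return [{**r, "x2": r["x"] * r["x"]} for r in raw]

def train(rows: list) -> dict:
    # "Train" a trivial model: predict the mean target, tagged with a version.
    mean_y = sum(r["y"] for r in rows) / len(rows)
    return {"version": "v1", "predict_constant": mean_y}

def test_model(model: dict) -> bool:
    # Stand-in quality gate that CI would run before any deployment step.
    return model["predict_constant"] >= 0

def pipeline(raw: list) -> dict:
    rows = engineer_features(raw)
    model = train(rows)
    assert test_model(model), "quality gate failed; do not deploy"
    return model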

  • The Neural Maze reposted this

    You've seen GLM-OCR run locally ... But can you scale it? 👇

    After 8 weeks of the Finetuning Sessions, we've finally mastered finetuning, from SFT to GRPO. Now that we understand how LLMs are trained, it's time for the next step. It's time to learn how to deploy them.

    In the final office hours, Antonio Zarauz Moreno and I covered:
    🔹 Finetuning Qwen3-VL for a custom vision task
    🔹 Voice cloning by finetuning Orpheus-3B
    🔹 Deploying GLM-OCR locally with Ollama
    🔹 How to scale these models with cloud deployments on Kubernetes

    All in one live session!
    📘 https://lnkd.in/e2WpBQ6j

    ♻️ Repost if you think more people should learn to deploy LLMs at scale!

  • The Neural Maze reposted this

    How LLMs are built, explained in nine slides. From raw data to production 👇

    Here are the 7 stages that take a model from raw data to production, explained simply:

    1. Pretraining → The model reads billions of tokens. No instructions, no tasks. Just next-token prediction. This is where general knowledge is born.

    2. Supervised Finetuning (SFT) → The model learns to be useful. You show it curated input-output pairs, and it learns to follow instructions. This is where a base model becomes an assistant.

    3. LoRA → Full finetuning is expensive. LoRA freezes the original weights and injects small trainable matrices. Same results, a fraction of the compute.

    4. QLoRA → LoRA, pushed further. Load the base model in 4-bit precision. Now you can finetune a 7B model on a single consumer GPU.

    5. RLHF (PPO) → The model follows instructions but might still be toxic or unhelpful. RLHF uses human preferences to push it toward better responses. This is where alignment happens.

    6. Reasoning (GRPO) → Skip the reward model. Sample multiple outputs, score them with verifiable rules, reinforce the best ones. This is how you teach a model to think step by step.

    7. Deployment → Quantize it, wrap it in an API, optimize with vLLM, and ship it. This is where training meets the real world.

    That's the full pipeline. Most courses teach you one or two of these stages. We teach all 7, with theory, hands-on labs, and live office hours.

    🦥 Finetuning Sessions: https://lnkd.in/e4BtynZh

    ♻️ Repost if this was useful.
    📌 Save it for reference.
    🔔 Follow Miguel Otero Pedrido for more.
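    Stage 6's group-relative scoring fits in a few lines. A minimal sketch, assuming an exact-match reward (the full GRPO objective also includes a clipped policy ratio and a KL penalty, omitted here; the sampled answers below are made-up illustrations):

```python
from statistics import mean, pstdev

def verifiable_reward(answer: str, expected: str) -> float:
    # A verifiable rule instead of a learned reward model, e.g. exact match on math.
    return 1.0 if answer.strip() == expected else 0.0

def group_relative_advantages(rewards: list) -> list:
    # Each sample's advantage = how much better it scored than its own group,
    # normalized by the group's spread (guarding against zero spread).
    mu, sigma = mean(rewards), pstdev(rewards)
    return [(r - mu) / (sigma or 1.0) for r in rewards]

samples = ["42", "41", "42", "7"]            # several sampled answers to one prompt
rewards = [verifiable_reward(s, "42") for s in samples]
advantages = group_relative_advantages(rewards)
```

    Correct samples get positive advantages and wrong ones negative, so the policy is pushed toward the reasoning traces that actually produced verified answers.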

