Alma Learning team, Matt from Eduba here. We read the site, the blog, and the FETC talk. The framing is familiar. Pedagogy first, AI second. Educators teach as avatars in any language. A service, not a tool.
We want to name the thing you are already living with. Every new program Alma takes on starts the same way. A client ships you syllabi, decks, manuals, and lecture recordings. Your instructional designers turn those into a course scaffold. Your production team builds the avatar lessons. A senior educator reviews. The course lands in Canvas or D2L. The client sees the result. The sequence works. The question is how much of it is reinvented per client.
How we would think about Alma's stack specifically
Most of what touches a course is not AI. It is structure. A syllabus is a structure. A learning objective is a structure. A quiz rubric is a structure.
Structure and content ops
Syllabi, objectives, rubrics, LMS plumbing. Traditional code and content operations. Fast, cheap, reliable.
Rule-based logic
Learning objective to quiz to assessment to completion. A rules engine, not a language model.
Where LLMs and avatars belong
Socratic dialogue when a learner is stuck. Adaptive explanation past a rule. Language delivery.
When teams skip that distinction, the LLM gets asked to do work that Postgres or a well-written rubric would do faster, cheaper, and more reliably. When teams get it right, the LLM is reserved for the work only it can do, and the rest of the pipeline runs on code.
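To make the split concrete, here is a minimal sketch of the rules layer in Python. Every name in it (LearnerState, needs_tutor, the thresholds) is ours and hypothetical, not Alma's stack: the point is that completion gating is plain code, and the model is only reached when a rule flags a stuck learner.

```python
# Hypothetical sketch of the rule-based progression layer:
# learning objective -> quiz -> assessment -> completion.
# No model call anywhere in this file; the LLM tutor sits behind needs_tutor().
from dataclasses import dataclass, field

PASS_THRESHOLD = 0.8  # rubric-defined cutoff, set by an educator, not a model
MAX_ATTEMPTS = 3      # after this many failed attempts, escalate to the tutor layer

@dataclass
class LearnerState:
    objective: str
    quiz_scores: list = field(default_factory=list)

def objective_complete(state: LearnerState) -> bool:
    """Pure rule: the objective is done when any quiz attempt passes."""
    return any(score >= PASS_THRESHOLD for score in state.quiz_scores)

def needs_tutor(state: LearnerState) -> bool:
    """Pure rule: only a learner who keeps failing reaches the LLM layer."""
    return (not objective_complete(state)
            and len(state.quiz_scores) >= MAX_ATTEMPTS)

stuck = LearnerState("Explain photosynthesis", [0.40, 0.50, 0.55])
done = LearnerState("Explain photosynthesis", [0.90])
print(objective_complete(done))  # True
print(needs_tutor(stuck))        # True: three failures, hand off to dialogue
print(needs_tutor(done))         # False: never touches the model
```

Everything above runs on cheap, auditable code; the expensive, probabilistic layer is invoked only at the one branch where it is the right tool.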
A client that faced the same question
One client we worked with is a good analogue. Feeld ran a scoped sprint with us. Their CTO wanted an honest read on where AI fit in their internal stack and where it did not.
What came out of the sprint was a context architecture. A folder structure that held the company's working knowledge in a form the models and the team could share. The framework is still in use. It did not replace their engineering team, and it did not replace their AI tooling. It told them which job belonged on which layer.
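We will not reproduce the paper's layout here, but a context architecture of this kind is, concretely, just directories that both people and models can read. A hypothetical sketch, with folder names that are ours rather than Feeld's or the paper's:

```shell
# Illustrative only: one folder per layer of working knowledge.
mkdir -p context/structure  # syllabi, objectives, rubrics
mkdir -p context/rules      # progression logic, completion criteria
mkdir -p context/dialogue   # prompts and guardrails for the LLM layer
ls context
```

The value is not the tree itself but the assignment it encodes: each job lives on exactly one layer, so the team can see at a glance which pieces are code-shaped and which are model-shaped.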
My colleague Jake, Eduba's founder, wrote the methodology up as a paper. Interpretable Context Methodology: Folder Structure as Agent Architecture, submitted to ACM TiiS. It reads like infrastructure for instructional design, which is what Alma already ships without naming it that way. The paper lives at github.com/RinDig/Interpretable-Context-Methodology-ICM-.
Jake and I also built an online community to 22,000 members in five weeks, so we know the difference between a production pipeline that scales and one that stalls.
Thirty minutes. Bring one course you are mid-production on.
We walk the pipeline end to end and mark which pieces are AI-shaped and which are structure-shaped. No slides, no preamble. One specific offer, one specific artifact.
Book 30 minutes with Matt

Eduba partners with NLP Logix for work that sits below the orchestration layer. If the conversation turns toward custom avatar infrastructure or data pipeline work, we make that introduction. NLP Logix has been doing machine learning since 2011 and employs more than 150 data scientists.