Validated Learning Evidence at the Moment the Conversation Ends
Three specialized agents work as one inside every Develop session, teaching the learner, scoring every response, and synthesizing the conversation into a transcript-grounded individual learning record when it ends.
The Closed Evidence Loop
Most AI learning products end at the conversation with engagement metrics or a quiz score. The Live Learning Agents close the loop by feeding evaluation back into the conversation in real time. Every response gets scored against the learning objective, the Tutor's next move adapts to what the learner just demonstrated, and the session synthesizes into transcript-grounded individual evidence when it ends. The conversation produces the evidence; no separate assessment event required.
- 0 stop-and-grade moments, with comprehension evaluated through the conversation itself
- 100% of comprehension scores grounded in transcript evidence, traced to a real learner response
- 4 levels of personalization: who the learner is, the context shaping their work, what they already know, and how they're reasoning in the moment
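As a rough illustration of that loop, here is a minimal Python sketch. Every name in it (Session, Turn, evaluate, next_move, record) is a hypothetical stand-in, and the keyword-match scoring is a placeholder for the Evaluator ensemble; it shows the control flow of score-adapt-synthesize, not the product's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    question: str
    response: str
    score: float  # 0.0-1.0 against the learning objective

@dataclass
class Session:
    objective: str
    transcript: list[Turn] = field(default_factory=list)

    def evaluate(self, response: str) -> float:
        # Placeholder for real-time scoring; the real Evaluator is an
        # ensemble of learning-science sub-agents, not a keyword check.
        return 1.0 if self.objective.lower() in response.lower() else 0.3

    def next_move(self) -> str:
        # The Tutor's next move adapts to what was just demonstrated:
        # reframe after a weak response, push toward application after a strong one.
        if self.transcript and self.transcript[-1].score < 0.5:
            return f"Let's come at {self.objective} from another angle."
        return f"Now apply {self.objective} to a situation from your own work."

    def record(self) -> dict:
        # Session-end synthesis: one percentage per objective, with every
        # score traceable to a quoted learner response in the transcript.
        scores = [t.score for t in self.transcript] or [0.0]
        return {
            "objective": self.objective,
            "comprehension_pct": round(100 * sum(scores) / len(scores)),
            "evidence": [(t.response, t.score) for t in self.transcript],
        }

session = Session("giving actionable feedback")
reply = "I'd ground giving actionable feedback in observed behavior, not traits."
session.transcript.append(Turn(session.next_move(), reply, session.evaluate(reply)))
print(session.record()["comprehension_pct"])  # 100
```

The loop mirrors the three agents: next_move plays the Tutor, evaluate the Evaluator, record the Development Advisor; there is no separate assessment step anywhere in it.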
Where the Live Learning Agents Make the Biggest Difference

Leadership Development at Scale
"Half my managers did the leadership program. Nothing changed."
Executive coaching reaches the top 10%; the rest get a workshop and no evidence anything landed. The learning agents put every people-leader in an Oxford-style dialogue scored against the targeted capability, with objective evidence of real learning at the end.

Reskilling With Capability Evidence
"We reskilled 5,000 people. The CFO wants to know if the capability actually landed."
Reskilling and upskilling only matter if the capability lands. The learning agents go beyond completion dashboards to show capability evidence scored against the learning objectives the new role requires.

AI Adoption Readiness
"Copilot's deployed to 30,000 people. Most are using it; almost nobody is changing how they work."
Tool access isn't behavior change. The Tutor teaches each learner how to apply AI inside their specific work; the Evaluator catches the gap between confidence and capability.

Audit-Grade Compliance
"Click-through training works until someone makes the wrong call in the field."
In healthcare, financial services, and other regulated work, major consequences hinge on whether people actually learned. The Evaluator scores reasoning against the regulation; the Development Advisor produces transcript-grounded evidence per learner.
Inside the Live Learning Loop
The Tutor is modeled on the Oxford tutorial: Socratic dialogue that pulls the learner forward rather than giving the answer. It runs in voice or text, in any language, inside Microsoft Teams, Slack, or the browser, and adapts at four levels: who the learner is, the EX signals shaping their context, what they already know, and how they're reasoning in the moment.
The Evaluator is a multi-agent ensemble built on established learning theory: Dunning-Kruger calibration, transfer of learning, metacognition, gaming-behavior detection, and additional perspectives from Perceptyx's internal learning-validation research. Sub-agents triangulate across these lenses, scoring every response and aggregating the results into a single percentage per learning objective. Validation runs continuously through the dialogue. The framework was designed by PhD academic educators, including an originator of the Flipped Classroom, and is grounded in dozens of foundational learning-science studies.
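A hedged sketch of the aggregation step follows. The lens names come from the paragraph above, while the equal weighting and the multiplicative gaming gate are illustrative assumptions, not Perceptyx's actual method.

```python
from statistics import mean

def aggregate(lens_scores: dict[str, float]) -> float:
    """Fold sub-agent lens scores (each 0-1) into one percentage.

    Treating gaming detection as a multiplicative gate, and weighting
    the remaining lenses equally, are illustrative assumptions.
    """
    scores = dict(lens_scores)               # don't mutate the caller's dict
    gaming_gate = scores.pop("gaming", 1.0)  # 1.0 = no gaming detected
    return 100 * mean(scores.values()) * gaming_gate

pct = aggregate({"calibration": 0.80, "transfer": 0.70,
                 "metacognition": 0.75, "gaming": 0.90})
print(round(pct, 1))  # 67.5
```

Gating on gaming detection, rather than averaging it in, reflects the idea that surface engagement without understanding should cap the score instead of being diluted by strong answers elsewhere.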
The Development Advisor activates when the conversation ends. It synthesizes every question, response, and Evaluator score into an individual report: comprehension per objective, strengths and concerns tied to quoted transcript passages, a competency map, a sentiment overlay, and recommended next steps. Every learner, every session.
Sessions run three ways:
- Async (default): learners take the conversation on their own schedule inside their collaboration tools.
- Live classroom feedback: instructors see real-time per-learner comprehension and adjust mid-session.
- Pre-training diagnostic: run against the planned material to reveal what the cohort already knows.
Paired with Activate, comprehension joins an application score measuring engagement with course-specific nudges and AI coaching in the days and weeks after the session.
Frequently Asked Questions
How is this different from other Socratic AI tutors?
Socratic AI tutors that pull learners forward through dialogue have become common. The Live Learning Agents add what those tools were never designed to deliver: a transcript-grounded individual learning record at session end. Three agents produce it together. The Tutor runs the dialogue, the Evaluator scores every response against established learning science in real time, and the Development Advisor synthesizes the session into the record the moment the conversation ends.
How is this different from ChatGPT?
ChatGPT is one general-purpose agent good at answering questions. The Live Learning Agents are three purpose-built agents working as one inside every session: the Tutor teaches Socratically, the Evaluator scores every response against the learning objective in real time, and the Development Advisor synthesizes the session into transcript-grounded individual evidence the moment it ends. ChatGPT can do a passable job at any one of these in isolation, but coordinated operation with shared state, behavioral-science-grounded evaluation, and audit-grade individual evidence at session end is an architecture problem prompt engineering can't solve.
Do learners have to take a quiz or test?
No. Validation is passive and runs continuously through the dialogue; the learner's reasoning is observed throughout, with no stop-and-grade moment. Sub-agents trained on established learning theory evaluate every response and aggregate into a percentage score per learning objective. The conversation serves as the evaluation.
Why should we trust an AI-generated comprehension score?
Two reasons. The Evaluator is a multi-agent ensemble, not a single model grading itself: sub-agents trained on Dunning-Kruger calibration, transfer of learning, and gaming-behavior detection triangulate from multiple perspectives. And every score is traceable to the transcript. The Development Advisor ties every strength and concern to a quoted passage, so admins can drill into any score. The evidence is in the transcript.
Can learners game the system?
Gaming-behavior detection is one of the Evaluator's sub-agent perspectives, grounded in research on intelligent tutoring systems. Surface engagement without underlying understanding gets caught and reflected back through the Tutor's next move. The Tutor's adaptivity makes scripted answers ineffective; it reframes rather than accepting a pattern-matched response.
How is this different from adaptive learning?
What most systems call adaptive is content recommendation based on clicks or quiz scores; the system adapts once at the start, then runs scripted content. The learning agents adapt continuously at four levels: who the learner is, the context shaping their work, what they already know, and how they're reasoning in the moment. The output also differs: a transcript-grounded individual learning record at session end, where most adaptive learning produces a content recommendation.
Does this replace our existing learning systems?
No. Develop layers on top of the content and systems the organization already has. The learning agents work with content the organization owns, structured by the Content Conversion Agent, and delivered inside Microsoft Teams, Slack, or browser. The Development Advisor report integrates with existing HR and talent systems. The learning agents produce what those systems were never built to deliver: evidence the learner actually understood the material.
What does a 100% comprehension score mean?
100% represents absolute mastery of a learning objective. It marks the ceiling of the scoring scale rather than the expected outcome of any session. The scoring curve is non-linear: early engagement moves the percentage quickly, and final percentages are harder to earn. That distinguishes "I heard this" from "I can apply it." Most learners reach 70-85% per session; the report names where the gap to mastery sits.
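For intuition about that curve, here is a toy saturating function with the described shape. The exponential form and the rate constant are assumptions for illustration, not the product's actual scoring function; only the shape (fast early gains, asymptotic approach to 100%) comes from the answer above.

```python
import math

def comprehension_pct(evidence_units: float, rate: float = 0.35) -> float:
    # Fast early gains, asymptotic ceiling at 100: the functional form
    # and rate are illustrative assumptions, not the real scoring model.
    return 100 * (1 - math.exp(-rate * evidence_units))

for units in (1, 3, 6, 10):
    print(units, round(comprehension_pct(units), 1))
# 1 -> 29.5, 3 -> 65.0, 6 -> 87.8, 10 -> 97.0
```

The first unit of evidence is worth nearly 30 points while the tenth is worth about two, which is the "final percentages are harder to earn" behavior in miniature.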
See How Every Session Produces Individual Evidence
The Live Learning Agents produce transcript-grounded individual learning records the moment each session ends. A 30-minute demo walks through the three agents inside a real session.