The AIED–TLU research group is a multidisciplinary team based at Tallinn University’s School of Digital Technologies, with members from Tallinn University and partner institutions. The group unites computer scientists, psychologists, learning sciences researchers, and developers, and works closely with schools, teachers, students, and parents. We have experience from multiple research projects and maintain active collaborations with industry. Our strength lies in combining technical expertise with a deep understanding of education and psychology.
We are among the pioneers of hybrid human–AI methods, such as neural-symbolic AI for education, and we are committed to advancing fairness and the ethics of AI in educational contexts. Our work focuses on developing AI methods for two broad purposes:
Improving pedagogical practice – through personalised instruction, adaptive feedback, and scaffolding tailored to learners’ needs
Supporting research advancement – by extracting pedagogical insights from educational data
Together, we see AI for education not only as a technological challenge but also as a deeply educational and psychological one.
Artificial intelligence (AI) has demonstrated significant potential for addressing challenges across the education sector. Yet AI in education is often criticised for neglecting key learning processes such as motivation, emotion, and (meta)cognition. Many systems are built with little input from educators or other stakeholders, function largely as black boxes, and overlook ethical issues such as data quality and bias. Current approaches also prioritise automated, individualised instruction over fostering metacognitive and self-regulated learning. Even with these shortcomings, AI technologies are steadily being integrated into public education.
Our mission is to shift the focus from AI in education to AI for education by developing responsible, transparent, and co-designed AI methods that truly serve learners and educators. We emphasise both the responsible development and the responsible use of AI, building hybrid human–AI systems that support human judgement and empower stakeholders at every stage of learning.
At our research group, we make a clear distinction:
AI in education often means applying generic AI methods to educational contexts, such as tutoring systems built on purely data-driven models, developed without meaningful input from stakeholders (e.g., educators, learners, researchers) and without validated educational data.
AI for education, in contrast, means hybrid human–AI collaboration at two stages:
Responsible development – AI methods and systems are co-developed with stakeholders and developers.
Responsible use – AI systems facilitate collaboration between humans and AI in practice.
We believe this paradigm shift from AI in education to AI for education is urgently needed. Crucially, responsible use of AI in classrooms depends on responsible development from the start. While technical fixes, such as fine-tuning data-driven models to reduce hallucinations or mitigate bias, may address surface problems, they cannot resolve the deeper structural flaws that arise when systems are built without meaningful human collaboration.
We subscribe to the definition of Goellner and colleagues (2024), viewing responsible AI as a human-centred approach that builds user trust through ethical decision-making, explainable outcomes, and privacy-preserving implementation.
Responsible development through hybrid human–AI methods
In the development stage, we employ hybrid AI methods that combine data-driven learning with stakeholders’ explicit knowledge. Examples include probabilistic graphical models (e.g., Bayesian networks) and neural-symbolic computing.
Stakeholders, such as teachers, psychologists, and domain experts, can actively participate by injecting their practical and theoretical knowledge into these models. This symbolic knowledge constrains and guides data-driven learning, making models more context-aware and educationally meaningful. Examples of such knowledge include the following (a code sketch follows the list):
Causal relationships among variables
Theoretical models linking learning constructs
Interactions between variables affecting learning outcomes
Ethical constraints, such as fairness principles that prevent biased treatment of specific learner groups
Privacy-preserving rules, such as limiting reliance on sensitive personal data while still enabling accurate modelling
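As a concrete illustration, the sketch below shows in Python (using the open-source pgmpy library) how such knowledge might be injected as the structure of a Bayesian network while the parameters are still learned from data. The variable names, edges, and data are hypothetical illustrations rather than our actual models, and pgmpy’s API details may vary across versions.

```python
# A minimal sketch of knowledge injection into a Bayesian network (pgmpy).
# All variable names, edges, and data below are hypothetical illustrations.
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import BayesianEstimator
from pgmpy.inference import VariableElimination

# 1. Stakeholder knowledge: teachers and psychologists specify the causal
#    structure, e.g. motivation drives effort, and effort plus prior
#    knowledge drive mastery. A privacy-preserving rule is enforced by
#    design: sensitive attributes simply do not appear as nodes.
expert_structure = BayesianNetwork([
    ("motivation", "effort"),
    ("effort", "mastery"),
    ("prior_knowledge", "mastery"),
])

# 2. Data-driven learning: the conditional probabilities are still
#    estimated from data, but only within the stakeholder-approved structure.
data = pd.DataFrame({
    "motivation":      [1, 1, 0, 0, 1, 0, 1, 0],
    "effort":          [1, 1, 0, 1, 1, 0, 1, 0],
    "prior_knowledge": [1, 0, 0, 1, 1, 0, 0, 1],
    "mastery":         [1, 1, 0, 1, 1, 0, 1, 0],
})
expert_structure.fit(data, estimator=BayesianEstimator, prior_type="BDeu")

# 3. Transparent reasoning: queries and the learned probability tables can
#    be inspected directly by stakeholders.
inference = VariableElimination(expert_structure)
print(inference.query(["mastery"], evidence={"effort": 1}))
for cpd in expert_structure.get_cpds():
    print(cpd)
```

The division of labour is the point: stakeholders decide what the model is allowed to express (including which variables may not appear at all), while the data decide how strong the permitted relationships are.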
Benefits of Knowledge-Enhanced Models
This approach directly promotes responsible AI by design:
Human-centredness through co-development with stakeholders
Ethical decision-making by avoiding spurious correlations and addressing data quality issues
Interpretability by using both injected knowledge and learned patterns to make model reasoning transparent
Privacy preservation by reducing reliance on sensitive student data
User trust through stakeholder co-development and through ethical, transparent, and privacy-aware decision-making during development
Does this mean the models merely follow stakeholder instructions and lose their data-driven power?
No. They still learn patterns freely from data, both with and without stakeholder knowledge, balancing data-driven pattern recognition with educationally grounded guidance; the sketch below illustrates the contrast.
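To make that balance concrete, the hedged sketch below contrasts a structure learned purely from data with the expert-constrained structure from the earlier sketch. As before, the variables and data are hypothetical, and estimator names may differ between pgmpy versions.

```python
# A minimal sketch contrasting purely data-driven structure learning with
# an expert-specified structure (hypothetical variables and data; pgmpy).
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore

data = pd.DataFrame({
    "motivation":      [1, 1, 0, 0, 1, 0, 1, 0],
    "effort":          [1, 1, 0, 1, 1, 0, 1, 0],
    "prior_knowledge": [1, 0, 0, 1, 1, 0, 0, 1],
    "mastery":         [1, 1, 0, 1, 1, 0, 1, 0],
})

# Without stakeholder knowledge: the structure is searched for in the data
# alone and may contain spurious or educationally meaningless edges.
data_driven_dag = HillClimbSearch(data).estimate(scoring_method=BicScore(data))
print("Learned from data alone:", sorted(data_driven_dag.edges()))

# With stakeholder knowledge: the structure is fixed by experts, while the
# parameters are still estimated from the very same data (earlier sketch).
expert_edges = [("motivation", "effort"), ("effort", "mastery"),
                ("prior_knowledge", "mastery")]
print("Expert-constrained edges:", sorted(expert_edges))
```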
Responsible use through hybrid human–AI collaboration
Responsible use of AI in education means going beyond automation to ensure that systems support human judgement, learner agency, and meaningful learning processes. Many existing AI-based systems (e.g., large language models and purely data-driven systems) cannot reliably track skill mastery, capture motivation or emotion, or inform learners and teachers about a student’s full learning trajectory. They often overemphasise individualised instruction while neglecting the development of metacognitive and self-regulated learning skills, risking learners becoming passive recipients. By contrast, responsible use emphasises hybrid human–AI collaboration, in which AI tools act as partners that provide transparent insights, support reflection, and empower both students and teachers, while critical educational decisions remain under human oversight.
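As one example of the transparent insight such collaboration can build on, the sketch below implements classic Bayesian Knowledge Tracing, a simple and interpretable model of skill mastery whose parameters (guess, slip, learning rate) can be read, questioned, and adjusted by teachers. The parameter values here are illustrative assumptions, not empirical estimates; the sketch only shows what an inspectable learning trajectory looks like.

```python
# A minimal sketch of Bayesian Knowledge Tracing (BKT): an interpretable
# way to track skill mastery across a sequence of observed answers.
# All parameter values below are illustrative assumptions.

def bkt_update(p_mastery: float, correct: bool,
               p_guess: float = 0.2,   # correct answer without mastery
               p_slip: float = 0.1,    # wrong answer despite mastery
               p_learn: float = 0.15   # chance of acquiring the skill per step
               ) -> float:
    """Return the updated probability of mastery after one observed answer."""
    if correct:
        evidence = p_mastery * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_mastery) * p_guess)
    else:
        evidence = p_mastery * p_slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - p_guess))
    # Learning transition: the student may acquire the skill before the next step.
    return posterior + (1 - posterior) * p_learn

p = 0.1  # prior probability of mastery
for step, answer in enumerate([True, False, True, True], start=1):
    p = bkt_update(p, answer)
    print(f"after answer {step} ({'correct' if answer else 'wrong'}): "
          f"P(mastery) = {p:.2f}")
```

Because every quantity is an explicit, named probability, both the trajectory and the assumptions behind it remain open to human inspection and override, which is exactly the property opaque models lack.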
Does this mean we should not use large language models or other fully data-driven systems in education?
Not at all. These systems can be highly valuable and should be used—but always within a thoughtful, hybrid framework where human oversight, stakeholder knowledge, and ethical considerations guide their application.
The AIED–TLU research group brings together researchers in education and psychology, computer scientists, and practitioners to collaboratively address current challenges and advance educational technology.