We combined general AI literacy domains (Long and Magerko's AI Literacy Framework) with education-specific AI literacy and competency considerations (Delcker et al.'s AI competence domains for teachers). 543 teachers from the eight participating high schools responded to the survey, providing crucial insights for educators and policy makers on the areas that need to be prioritised for ethical, pedagogical use of AI in education. In this blog post, we share some of the key findings from the survey and the project team's considerations, informed by those findings, for next-step policy and practice development to promote responsible use of AI. The conference paper this blog post is based on, ‘Diversifying Understandings of AI Literacy: A Global South Perspective from Vietnam Education’, was presented at the UKCI conference in September 2025, with the full paper to be published by Springer.
Awareness vs. Ethical Practices
While high school teachers show high engagement - with 90.2% of participants exploring how to apply AI in their teaching - a significant knowledge gap currently threatens responsible integration. The majority of participants (65.7%) reported being aware of AI but having only a limited understanding of the underlying concepts. This gap is particularly evident in ethical considerations. It can undermine teachers’ confidence in integrating AI into their practices and potentially weaken responsible application of AI in education.
Regarding ethical considerations, teachers’ awareness shows inconsistency across topics:
• High Awareness: Data Privacy (64.1%) and Data Security (55.4%) were the ethical issues most frequently identified by teachers.
• Low Awareness: Critical concerns such as AI’s perpetuation of societal bias (identified by 29.1%) and the environmental impact of AI (identified by 21.4%) were recognised by far fewer teachers.
It is apparent that these limited understandings of ethical considerations translate directly into teachers’ ability to manage ethical challenges in practice: 49.0% of participants stated they do not know how to address ethical considerations when using AI in teaching. Furthermore, 31.9% said they cannot identify ethical issues when using AI.
Consideration 1: Mandate Ethics-First Professional Development
Findings from our study clearly emphasise the need for targeted interventions that move AI literacy beyond digital tool usage to encompass ethics, transparency, and sociocultural relevance.
Policy makers and educational institutions should consider developing and scaling up competency-based professional learning programmes with an explicit focus on responsible and ethical use of AI. These should address both the general ethical considerations of using AI - such as data privacy, data security, social impact, and environmental impact - and education-specific considerations such as children and young people’s rights, accessibility and equity, impact on cognitive and social development, and professional and academic integrity. Since teachers currently rely heavily on self-learning via free online materials (54.9%) and peer support, future efforts need to focus on longer-term, well-structured training programmes to improve the quality and depth of understanding.
Consideration 2: Promoting Transparency and Explainability
The difficulty teachers face in addressing ethical issues is partly attributable to the nature of the tools themselves. Teachers in our study predominantly use commercial AI tools such as Google Gemini, ChatGPT, and DeepSeek. Many of these commercial tools exhibit an opaque "black-box" nature, providing little insight into their underlying data, limitations, or potential failures. This opacity makes it challenging for non-experts to develop accurate mental models of how AI systems work. The issue is compounded by the fact that most of these tools are general-purpose AI systems, not developed with educational considerations in mind.
To counteract this, responsible AI use initiatives need to:
• Promote transparency and ethical design. Training should enable educators to interpret AI outputs as results of underlying algorithms and data patterns, fostering critical evaluation rather than viewing AI as an authoritative, functional tool.
• Integrate AI literacy and critical thinking into school and teacher education curricula.
• Ensure equitable tool design: Systems and training resources need to be developed to meet the needs and demands of education in different contexts.
Consideration 3: Build Collaborative Capacity
Our study found differences in AI participation across gender, region (urban vs. rural), and experience levels, underscoring the need for context-sensitive learning. What is clear in the Vietnamese education context is that peer-based learning is a main source of continuing professional learning: 54.5% of participants attended school-organised training, and 45.9% learn about AI with colleagues at their school. This is a strong foundation for the sustainable development of an ethically aware ecosystem for AI adoption in Vietnamese education. The relatively high rate of peer-based learning suggests the value of organising peer support activities, such as collaborative learning sessions and groups, to help teachers share experiences and troubleshoot challenges together. Supporting experienced teachers as peer trainers can also help build sustainable internal capacity.
Teachers, acting as end-users, should partner with computer scientists and researchers to facilitate the iterative improvement of technical tools through co-design. This collaboration can lead to more usable, trustworthy, and inclusive educational technologies. It reflects a key recommendation by UNESCO that “as AI is emerging, there is an opportunity to involve education stakeholders at an early stage to shape them based on educators', learners' and education leaders' perspectives for sustainable development and implementation of AI”.
By prioritising ethics-focused training and demanding greater transparency in AI tool design, educators and policy makers can work together to move educators from being consumers of AI to being confident mediators and co-creators of responsible AI use, ensuring that the adoption of AI is ethically sound.
Author: Dr Vanessa Cui, Birmingham City University