From Technoshame to Technoagency
April 27, 2026
By Gayle T. Dow and Susan P. Antaramian
Artificial intelligence is rapidly reshaping K-12 teaching, not simply by influencing what students can do (or choose not to do), but by transforming how teachers themselves approach instruction. Increasingly, educators are using AI to design lessons, generate classroom materials, differentiate instruction, and provide individualized feedback.
This shift is raising fundamental questions about professional identity, instructional judgment, and the evolving definition of teaching itself. Today, a teacher might type a prompt into ChatGPT, watch the text appear across the screen, and think, “This is really good!” as they experience the thrill of producing something polished in a fraction of the usual time. Yet for many educators, that excitement is quickly followed by a quiet unease: “Is this work really mine?” This emotional tension between pride in efficiency and discomfort about authorship or originality is becoming increasingly common. The feeling now has a name, one we introduce here to educational discussions: technoshame. It is not a clinical term, but it captures a very real psychological and cultural tension facing modern educators.
We define technoshame as the conflicted emotional response that arises when educators use AI tools to produce instructional materials such as lesson plans, rubrics, worksheets, quizzes, or discussion prompts, and feel both pride in the result and shame about how it was created. Unlike traditional forms of plagiarism or academic dishonesty, technoshame does not typically involve deception or even cryptomnesia (which occurs when someone unintentionally mistakes an external idea for their own). Educators are aware of their AI use and are pursuing it for legitimate reasons, such as saving time, merging ideas, or enhancing creativity. The discomfort does not stem from wrongdoing, but from a perceived disruption of professional identity and authorship.
Historically, teaching has been closely tied to notions of authentic expertise, a theme reflected throughout Virginia’s professional teaching standards. Creating lesson plans, assessments, and instructional materials has long been viewed as evidence of a teacher’s mastery of content knowledge, instructional planning, and professional responsibility. These materials were not just documents; they were demonstrations of a teacher’s ability to interpret standards, scaffold learning, and differentiate for diverse students. When AI can now generate in seconds what once required hours of thoughtful alignment to the Virginia Standards of Learning (SOLs), it can feel simultaneously empowering and unsettling. It may function as a welcome support but can also challenge long-held assumptions about what it means to be a highly qualified educator in the Commonwealth.
It is important to recognize some of AI’s clear benefits. Teachers are almost always pressed for time, juggling lesson preparation, grading student work, attending meetings, and taking care of countless unseen tasks. AI tools can ease these demands in several ways. They increase efficiency, since a worksheet or quiz that might take 45 minutes to create can often be generated in under five. AI can provide polish by producing text that is grammatically clean, well-structured, and ready for classroom use. It also offers inspiration, as a well-crafted prompt can spark new ways of presenting a concept or organizing a lesson. Finally, it allows rapid customization by tailoring content to different reading levels, learning needs, or subject areas. In many classrooms, technoshame begins with pride rather than doubt. When a teacher uses AI to design a differentiated reading comprehension passage or a creative debate prompt, the resulting material can genuinely enhance learning.
The shame side of technoshame is more complex. For many educators, it is rooted in long-standing beliefs about authorship, authenticity, and professional identity. Teachers may quietly question whether they truly earned the praise a lesson receives if AI assisted in its creation. Others worry that AI use undermines originality or devalues the expertise they have spent years cultivating. Many educators also believe that authentic instruction requires personally crafted materials, since teachers are expected to model academic integrity for their students. This creates tension when educators struggle to define meaningful standards for originality in their own AI-assisted work. There is also fear of misinterpretation. Some teachers hesitate to disclose AI use, concerned that colleagues, administrators, or students may view it as cutting corners or even cheating, despite the fact that many applications are thoughtful, intentional, and pedagogically sound. These concerns are intensified by public debates about AI in schools, where fears about plagiarism and job loss often overshadow nuanced discussions of ethical and transparent use.
It is also important to recognize that technoshame is not only about using a tool, but about shifts in professional identity. Lesson design, instructional language, and scaffolding reflect a teacher’s voice and expertise. When AI can replicate or accelerate these tasks, it raises unsettling questions about what it means to be a good teacher, whether authorship defines legitimacy, and where the boundary lies between support and substitution. These questions reflect a profession in transition, often accompanied by technoreplacement anxiety, or fears of being devalued or replaced as AI performs certain instructional tasks efficiently. In this context, the central message matters. AI should not replace a teacher’s voice. Professional expertise extends far beyond producing materials. It includes judgment, context, relationships, and the human insight that allows instruction to be meaningful for individual learners.
Technoshame does not have to remain a source of guilt. When examined intentionally, it can serve as a catalyst for professional growth and redefinition. With reflection, it can become an entry point for rethinking pedagogy, authorship, and professional identity. This reframing marks the shift from technoshame toward technoagency. Here are a few ways to support that transition.
Acknowledge AI as a Tool. A power drill does not make a carpenter less skilled, nor does a microwave (and now an air fryer) make a cook any less capable; both simply speed up part of the process. In the same way, AI use does not diminish an author’s voice or undermine a teacher’s expertise; it enhances their work when used thoughtfully. As educators, we still decide what is pedagogically sound. In Virginia, this responsibility is firmly rooted in the state’s professional teaching standards, which emphasize thoughtful instructional planning, alignment to the SOLs, and the use of materials that support diverse learners. Even when AI assists with drafting or brainstorming, it is the teacher who evaluates whether the content meets state expectations, reflects accurate subject knowledge, and aligns with developmentally appropriate practice. It is also the teacher who knows that Olivia learns best when she can read in the quiet corner of the room or that Benjamin loves horses and will be more motivated to write an essay on Black Beauty. Ultimately, the teacher, not the tool, determines what strengthens instruction, supports student growth, and upholds professional standards.
Emphasize Curation Over Creation. Educators have always curated and shared materials, such as lesson plans, open educational resources, and textbooks, and AI can be understood as an extension of this practice. Professional expertise includes discerning what to use, adapt, and discard. Now it also includes recognizing that AI tools may reflect cultural biases embedded in their training data, particularly when that data privileges Western, English-speaking perspectives. Thoughtful review and modification therefore remain essential. Teachers exercise technoagency by reshaping generic outputs into culturally responsive, inclusive, and differentiated instruction. This includes adjusting reading levels, modifying questions, adding scaffolds, or creating alternative pathways for learning. In this way, AI-generated materials become meaningful only through the teacher’s professional judgment.
Practice Transparent Authorship. As the education landscape transitions, transparency becomes increasingly important. If part of the guilt associated with technoshame stems from secrecy, openness can serve as a corrective. Sharing prompts, drafts, and revision processes with colleagues helps normalize AI use and reframes it as a professional practice rather than a hidden shortcut. Transparency shifts the narrative from “I did not write this” to “I intentionally designed and refined this learning experience.” When educators articulate how and why they use AI, they reinforce that responsible AI use is a skill rooted in expertise, ethics, and reflection.
For teachers navigating technoshame, several structured practices can reduce anxiety and support the development of technoagency.
First, use AI as an assistant rather than an authority. AI is most effective as a starting point for brainstorming, drafting, or offering alternative representations of content. Teachers then review, revise, and adapt materials to meet instructional goals and student needs. Because AI-generated materials may contain errors or cultural assumptions, educators retain responsibility for accuracy, bias correction, and appropriateness.
Second, layer in expertise by adding classroom context, pacing decisions, differentiation strategies, or accommodations. This transforms AI output into instruction that reflects professional judgment and deep knowledge of learners. In this process, the material becomes an expression of the teacher’s pedagogy rather than the technology’s output.
Third, set clear expectations for student AI use. Modeling transparent and ethical practices reinforces academic integrity and helps students develop critical digital literacy skills. Discussing how and why AI is used in class encourages students to practice essential 21st-century thinking skills such as critical evaluation of information, digital literacy, and ethical reasoning about technology. This conversation also helps students recognize the limitations of AI, including accuracy, bias, and privacy issues, encouraging them to engage with emerging technologies thoughtfully rather than rely on them unquestioningly. Throughout these discussions, equity must remain central so that AI expands, rather than narrows, learning opportunities.
Looking forward, technoshame may be a transitional feeling, much like the early unease some educators experienced when adopting online gradebooks or flipped classrooms. Over time, the conversation will likely shift from whether teachers use AI to how they use it. Schools and school divisions play a central role in this shift by providing professional development, creating clear policies that encourage transparency rather than punishment, and celebrating innovation rather than policing technology use. As these cultural norms evolve, the sense of shame may dissipate and be replaced by widely accepted, transparent practices that support educator technoagency, which we define as the confident, ethical, and intentional use of digital tools in service of pedagogy rather than in place of it. In this sense, technoshame reflects the collision of old values with new tools, while technoagency represents the reclamation of professional control and purpose. Pride in using AI does not need to be tempered by guilt. Teachers remain the architects of pedagogy, using every available tool to meet the needs of their students. As technoagency strengthens, technoshame will fade, just as earlier anxieties about spell checkers, learning management systems, and digital instructional platforms have faded, reaffirming that the talent of a great teacher has never resided in the tool itself but in how it is used.
Gayle Dow, PhD, is an educational psychologist and associate professor at Christopher Newport University whose work focuses on creative and critical thinking. Susan Antaramian, PhD, is a school psychologist and associate professor at Christopher Newport University whose work focuses on student engagement.
Acknowledgment: Portions of this article were edited with assistance from ChatGPT, a large language model developed by OpenAI, which was used to support grammar and clarity. All substantive ideas, content, and examples were written by the authors.
According to a survey by the EdWeek Research Center, teacher use of AI tools in the classroom almost doubled between 2023 and 2025. In 2023, 34 percent of teachers responding to the survey said they were using artificial intelligence in their work “a little,” “some,” or “a lot.” By 2025, that figure had reached 61 percent. Experts point to the growing availability of professional development as a primary factor, as more teachers learn a range of ways AI can be put to work in their classroom.
In 2025, the EdWeek Research Center surveyed teachers who said they don’t currently use AI tools in their instruction, asking them why they had made that decision at the time. Here are their top reasons (teachers were allowed to choose more than one):
34% – I haven’t explored these tools because I have other priorities that are more important.
34% – I know something about using these tools but I’m not sure how to effectively incorporate them into instruction.
26% – I don’t know how to use these tools.
21% – My district hasn’t outlined a policy on how to use the technology appropriately.
20% – I don’t believe the technology is appropriate for a K-12 setting because of its potential to be used for cheating.
17% – I don’t understand how artificial intelligence works.
16% – I’m not sure where or how to find these tools.
16% – I don’t think the technology is applicable to my subject matter or grade.
Teacher shortages are a serious issue across the country. Here in Virginia, there were 3,648 unfilled teaching positions as of FY23.