The Educator’s Beacon with Dr. Brandon Naylor
This is your go-to space for thoughtful, energizing insights into the world of teaching. Whether you're a seasoned educator, a new teacher finding your footing, or a stakeholder invested in meaningful learning, this series is built for you. Each episode offers an in-depth exploration of the ideas and challenges that matter most, from innovative pedagogical strategies and classroom management techniques to teacher wellness and burnout prevention. Grounded in current research but delivered in an approachable, conversational style, every episode provides practical tools you can apply, real talk about the issues educators face daily, and validation for the important work you do. This isn't just another education podcast; it's a resource for thriving in one of the world's most demanding and rewarding professions.
If you'd like to learn or see more, please visit:
https://wcceinternational.org/
https://theeducatorsbeaconpodcast.buzzsprout.com
Episode 2: Navigating AI in the Classroom: Promise, Pitfalls, and a Path Forward
In this episode, Dr. Brandon Naylor offers a thoughtful conversation for classroom teachers and school administrators who are trying to make sense of AI in education. Drawing from his experience in education and his background in educational studies, Dr. Naylor explores both the genuine potential and the very real challenges that AI presents in today's schools. Rather than offering grand promises or dire warnings, he shares balanced, research-informed perspectives alongside practical strategies that educators might find useful in their own contexts. This is candid guidance from someone who recognizes the daily complexities of education and hopes to support the dedicated professionals who serve students every day. No buzzwords, no sales pitches, just honest thoughts to help you consider AI's role in your educational practice.
Podcast is hosted by Dr. Brandon Naylor.
Support the show by leaving a rating and review!
Hey, to the teachers, the dreamers, and the believers, we are back. You're listening to The Educator's Beacon, a podcast for educators who want honest, practical conversations about the tools, challenges, and ideas shaping education today. No hype, no jargon, just real talk for the people who show up for students every single day. Hello everyone, my name is Dr. Brandon Naylor. I hold a doctorate in educational studies with a focus on curriculum and critical social inquiry, and I've spent significant time working alongside teachers, educators, and stakeholders to advocate for and support the conditions where great teaching can actually thrive. Now, for those listening, um, I admit last episode we were going to dive into professional development, and I will just say we will next episode. But this time around, we're gonna dive into something a little bit deeper, specifically AI. Now I know I promised we'd deep dive into general professional development this week, but honestly, looking at the speed of this AI shift, I feel like talking about standard PD right now, and, you know, the whole phenomenon surrounding PD, should take a bit of a back seat, because discussing interior design while the house is being rewired isn't very productive. The reality is shifting too fast to wait. So we're calling an audible, okay? We're putting the PD on the back burner for a moment to tackle the AI elephant in the room. Now, I'm not here to sell you on AI. I am not here to scare you away from it. I'm here to give you what the research says and help you figure out what to do with it. Artificial intelligence has arrived in schools faster than anyone fully anticipated, and faster than most teachers have been given support to handle it. 60% of K through 12 teachers are now using AI tools in their professional practice.
AI detection software is flagging student essays, policies are lagging, and somewhere in the middle of all that, teachers are just trying to do right by their kids. So today we're going to have a real conversation about it. The genuine benefits, the hidden burdens, the students most at risk of being caught in the crossfire, and the concrete steps for teachers and administrators alike to make this moment better than it currently is. Okay, I'm just going to come right out and say it. AI in the classroom is one of the most confusing things that has happened to teachers in a very long time. And I don't mean that as a criticism of anyone. I mean it as a simple, honest observation. You were handed a tool, sometimes without any training, sometimes without any clear policy, and then told in so many words to figure it out. If that has felt overwhelming or frustrating or just plain unfair, you are not imagining things. That has been the experience for a lot of you. And honestly, you deserve better than just figuring it out on your own. So let's take a beat, clear the noise, and actually talk about this. I want to focus on having a real conversation with you, as I said previously. Okay, this is not a sales pitch for AI, not a warning to stay away from it either, just a genuine, practical talk about what the research actually says, what it means for your day-to-day work, and what you can do right now to make AI work for you rather than the other way around. Now I know you're busy, uh, I know many of you are listening to this between grading a stack of papers and answering 17 emails, so I promise to make every minute count. And partway through, I'm going to take a moment to speak directly to administrators, because the decisions they make or don't make have a very real impact on your experience in the classroom. And hey, if you're an administrator right now, welcome. I'm glad you're here. Stay with me. Now, in many ways, education moves at the speed of a glacier.
We know that, but right now we are living through a lightning strike. Uh, what do I mean by that? Well, let's start with a bit of context, because I think it helps to know we're all navigating this together. According to a large national survey by the Walton Family Foundation in partnership with Gallup, published in 2025, and this is a survey of over 2,000 US teachers, so it's a serious sample. As of the 2024 to 2025 school year, roughly 60% of K through 12 teachers report using AI tools in their professional practice. That number nearly doubled in just one year. One year, think about that. In terms of how quickly something changes in education, that is extraordinary speed. The optimism driving that growth is understandable too. That same Walton Family Foundation Gallup research suggests that teachers who use AI tools regularly could save the equivalent of about six weeks per year. Six weeks, by letting AI help with lesson planning, creating materials, grading, and administrative tasks. And for the 82% of educators who say excessive workload is their single biggest professional concern, a figure documented by researcher Cheryl Poth in 2024, six weeks back sounds pretty wonderful. But here's the thing I need you to sit with for a second. A separate large-scale survey, uh, this one conducted by the Royal Society of Chemistry in 2024, found that while 44% of teachers had tried AI tools, only 3%, 3%, said those tools had greatly reduced their workload. Six weeks of savings promised, 3% actually feeling it. That gap is enormous, and it's not a fluke. That gap is telling us something important. What it's telling us is that AI's potential and AI's reality are two different things right now. And the reason isn't that the tools don't work; the reason is that nobody has set teachers up with what they actually need to make the tools work. The infrastructure, and by that I mean the training, the time, the policies, the support, hasn't kept up with the technology.
And that's the problem we need to address. So let's dig into what AI actually can do for you, and then let's be equally honest about what it brings with it that nobody's fully prepared you for. Hey, welcome back. I want to spend real time here, because I don't want this to turn into one of those conversations that just sounds like a warning or a litany of doom. There are genuine, meaningful things AI can do for you, and I want to celebrate those honestly. So let's start with lesson preparation. If you've ever stayed up until midnight trying to figure out how to differentiate the same lesson three different ways for three different reading levels, you know how exhausting that is. AI is genuinely good at drafting. You can describe your learning objective, tell it your grade level and subject, give it a sense of your students' needs, and get a solid first draft for a differentiated activity in a couple of minutes. Is it going to be perfect right out of the box? No. You're still going to look it over, but the blank page problem? Gone. The same logic applies to creating supplemental materials: worksheets, discussion questions, reading comprehension checks, vocabulary lists. The Walton Family Foundation Gallup study found that the most common AI applications include lesson preparation, used by about 37% of teachers, creating worksheets and activities at 33%, and modifying materials for diverse learners at 28%. Across those tasks, teachers in that survey estimated collective savings of roughly 24 hours per month. That's real time. That's time you can spend on relationships, on feedback, on actually teaching. AI is also good for the administrative side of teaching. Things like drafting parent communication, writing report card comment templates you can then personalize, building rubrics, formatting slide decks. These are not glamorous tasks, but they eat into the hours in your day. Having a starting point saves you time, even when you still, you know, have to edit.
Now, here's a use case I want to spend a little extra time on, because I think it's one of the most powerful: modifying materials for diverse student needs. If you're a general education teacher with English language learners in your class, or students with IEPs, or students who are reading well above grade level, you know the challenge of meeting everyone where they are. AI can help you adapt a text to a lower Lexile level, simplify the vocabulary in a set of instructions, or add more complexity and challenge for your advanced learners. You're still making the final call on whether it's appropriate, but AI can do that structural adaptation work in minutes instead of hours. And here's one more number I want to leave you with from that Walton Family Foundation Gallup research. Between 57 and 74% of teachers who use AI regularly say it improves the quality of their work. Not just the speed, but the quality. That's meaningful. When you're not burning out at midnight trying to create everything from scratch, you have more energy to put into the parts of teaching that AI can't touch. Relationships with students, your judgment in the moment, your ability to read the room. That's the promise, and it's real. But it doesn't come automatically, and it doesn't come without some things you need to be aware of. So let's talk about those. Okay, let's be real about the other side. We've looked at the skyrocketing numbers and sheer speed of the shift, but let's be honest, just because a tool is fast doesn't mean it's always actually working for you. There is a quiet, frustrating tax that comes with all this saved time. A cost that rarely makes it into the glossy AI keynotes. One teacher surveyed by the Royal Society of Chemistry in 2024 put it really, really well. She said, I have used AI infrequently for creating worksheets. These need to be checked carefully, which results in less time saved than hoped. That's not a complaint about AI being broken.
That's a completely fair description of how AI currently works. The output is a starting point. It's rarely a finished product. You still have to fact-check it. You still have to make sure it aligns with your curriculum standards. You still have to make sure it's actually appropriate for your students and your school's culture. Heck, even something as simple as just checking that it's correct, that checking process takes time, and sometimes it takes enough time that you wonder if you would have been faster just doing it yourself. So that's the first hidden cost. AI tools, as they currently exist, don't eliminate work, they shift work. For teachers who already have limited time, shifting work around can feel like adding work. And I want to put this in context because it matters. A 2024 RAND Corporation survey by researchers Steiner and Wu, and this is a survey of more than 1,300 teachers, found that educators reported job-related stress at rates far exceeding those of other college-educated working adults. The primary drivers were unmanageable workload, inadequate administrative support, and insufficient time for professional learning. AI has arrived inside what is already a strained work environment. So when I say the efficiency gains can feel invisible to a lot of teachers, I mean it. This is not a technology problem, it's a support problem. The second hidden cost is the one I want to spend the most time on, because it's the one that catches teachers the most off guard. It's what researchers are starting to call the digital detective role, and it's a real burden. Here's the situation. As AI tools have become widely available to students, student use has absolutely exploded. According to a 2025 data survey from Demand Sage, 88% of students report using generative AI for schoolwork, up from 53% just a year earlier. And according to a 2025 report from Tech Revol, about 24% of high school students admit using it specifically to cheat.
Now, I want to be careful here. Most students are not malicious. A lot of this is confusion about what's allowed, genuine ignorance of why it matters, and in some cases genuine desperation. But the reality is that teachers are now in the position of trying to figure out, did my student actually write this, or did a machine write it? There are detection tools, you've probably heard of them: Turnitin, GPTZero, Winston AI. Some of them claim accuracy rates as high as 99.98 percent. And that sounds impressive, right? But here's the math that nobody advertises. Even a 99% accurate detector applied to 150 student essays will incorrectly accuse one or two completely innocent students. And research published by IB Source Education in 2025, looking specifically at how those tools perform across diverse student populations, found that the false positives fall disproportionately on specific groups of students who are already navigating more than their share of challenges. Let me be specific, because these students deserve to be named. English language learners, students whose first language may be Spanish, Arabic, Hmong, Somali, Mandarin, or dozens of other languages, often write in patterns that detection algorithms flag as AI-generated, simply because grammatically careful, formulaic construction can look too clean to a machine trained mostly on fluent native English writing. The IB Source Education research is explicit on this point. Students with autism spectrum disorder, ADHD, and other neurodivergent profiles may write in ways that are unusually structured or highly systematic, and again, an algorithm can misread that as machine output. Students from lower-income households, who have fewer opportunities to develop a casual, expressive writing voice through recreational reading, enrichment activities, tutors, or quiet study spaces, may write in a more formal register that triggers false positives.
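The detector arithmetic from a moment ago, a "99% accurate" tool run over 150 honest essays, can be sketched in a few lines of Python. This is purely an illustration using the episode's own numbers; the function name and the assumption that accuracy translates to a 1% false-positive rate are mine, not a measurement of any real product:

```python
def expected_false_accusations(num_essays: int, false_positive_rate: float) -> float:
    """Expected number of honestly written essays a detector will flag as AI.

    Assumes every essay is genuinely student-written, so every flag is a
    false accusation. Both inputs are illustrative assumptions, not
    measured properties of any specific detection product.
    """
    return num_essays * false_positive_rate

# The episode's example: a "99% accurate" detector (assumed 1% false-positive
# rate) applied to 150 student essays flags roughly one or two innocent students.
print(expected_false_accusations(150, 0.01))

# Even a detector marketed at 99.98 percent accuracy (assumed 0.02% false-positive
# rate) still flags about 2 honest essays in every 10,000.
print(expected_false_accusations(10_000, 0.0002))
```

And no accuracy figure tells you which students will absorb those false flags, which is exactly why the demographic skew described above matters.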
And the ugly irony is this: these are often the students who worked the hardest on that essay, who stayed up late without a quiet room, without a parent who could edit their draft, without access to a writing center. They turned in their best, most careful work, and the algorithm flagged it. The consequences of a false accusation are not minor. We're talking about grade penalties, formal academic integrity violations that follow a student on their record, potential suspension, and lasting psychological harm. For a student who was already wondering whether school was a place that genuinely wanted them there, being accused of cheating when they absolutely did not cheat can confirm every fear they already had. And I want us to sit with that for a moment, because we are going to come back to it. So, what do teachers do? Some supplement or replace the detection software with their own methods: checking Google Docs revision history to see if text appeared all at once, holding writing conferences, doing oral examinations. These methods are more reliable, honestly. But for a teacher with 150 students, doing even brief one-on-one conversations for suspected cases can take enormous amounts of time. And meanwhile, students are adopting AI humanization tools specifically designed to make machine-written text look human-written. It is genuinely an arms race, and it has no clean endpoint. There's also a data privacy dimension here that I think deserves a mention. When you upload student work to a third-party detection platform, that student data is going to external servers. Most of those platforms comply with FERPA, yes. But as researchers Garcia Lopez and Truziello Linan pointed out in their 2025 work on algorithmic bias and equity in education, compliance with a law and genuine protection of student privacy are not the same thing. They also raise something worth sitting with.
The normalization of data collection as a routine part of schooling subtly conditions students, often minors, to accept surveillance as unremarkable, with potentially far-reaching consequences for how they understand privacy for the rest of their lives. As a teacher, you have a relationship of trust with your students, and part of honoring that relationship is knowing what you're handing off and to whom. Now, again, I'm not saying don't use these tools. I'm saying use them thoughtfully, know their limitations, and most importantly, know their risks, and let's talk about some alternatives. I know that's a lot to sit with. It's a heavy list of hidden costs: the shifted workloads, the digital detective burden, the risk of false accusation against our most vulnerable students, and the looming questions of data privacy. It can make the whole AI revolution feel like a minefield rather than a breakthrough. But here's the reality we have to navigate. These tools are already in our classrooms. They are in our students' pockets. They are increasingly integrated into the platforms we use to teach every day. We can't simply opt out, but we can choose a different path than the arms race I just described. The goal isn't to let AI run the classroom, nor is it to spend our entire careers policing it. The goal is to move from being reactive to being intentional, to reclaim our time and our relationships with students by using these tools on our own terms. So, how do we actually do that? How do we move past the theory and the hidden costs into something that feels, uh, like actual support? Alright, so let's get concrete. I want to give you specific, practical examples you can actually use. Because the phrase use AI thoughtfully is only helpful if you know what thoughtful looks like in practice. My first recommendation: pick two or three tasks where AI actually saves you meaningful time and start there. Don't try to use it for everything at once.
That leads to overwhelm, inconsistency, and sometimes worse results than before. What are the tasks that take you the longest but feel most repetitive? Those are your best candidates. For most teachers, those tasks are lesson plan drafting, differentiated materials, and rubric creation. So here's a concrete example. Say you're a seventh grade English teacher, and you're teaching argumentative writing. You can go to a tool like ChatGPT, Microsoft Copilot, or Claude and say something like, and I'm going to be specific here: Write me three versions of an argumentative writing activity about school uniforms, one for students reading at grade level, one for students reading two years below grade level with simplified vocabulary and sentence starters, and one for advanced students that asks them to address the counter-argument. What comes back is going to need your review. You're going to check that the instructions are clear, that the examples fit your school culture, that the sentence starters are actually useful. But that structural work, building three versions from scratch, just took you two minutes instead of 45. Here's another one: feedback on student writing. Now I want to be careful. AI should not be replacing your personal feedback, because your personal relationship with your student matters enormously. But if you have a stack of first drafts and you want every student to get some specific feedback before the final version is due, you can use AI as a first-pass tool. You type in a student's paragraph, you tell the AI to look for specific skills, say, how well they're using evidence, and you get some initial notes that you can then personalize and add to. This is not AI grading your students; this is AI doing a first pass that you then own. A third example could be parent communication. It's the end of a grading period, and you need to reach out to families about students who are struggling.
Writing individual emails from scratch can be exhausting, but you can ask AI to draft a warm, professional template for specific situations, say, a student who has strong ability but is not turning in work. Then you personalize two or three sentences for the specific student and family, and you've spent five minutes instead of 30 writing a letter from scratch. Now, on the academic integrity side, rather than leaning entirely on detection software with all its limitations and false positives, consider redesigning some of your assignments. This is actually one of the most powerful long-term strategies. AI-resistant assessments are ones that require students to demonstrate understanding in ways that are genuinely hard to outsource. In-class writing on a specific topic introduced that day. Oral presentations with real-time follow-up questions. Portfolios that include visible process work like brainstorming, outlines, and early drafts. Or performance tasks tied to local and specific contexts that an AI wouldn't know about. These aren't just harder to cheat on; they're actually better assessments. They show you more of what your students actually know. And finally, AI literacy as a classroom subject. Okay, bear with me. One of the most interesting shifts happening right now is teachers turning AI from a threat into a lesson. Having an explicit conversation with your students about how AI works, what its limitations are, why using it to shortcut your own thinking actually hurts your own brain development. That's a powerful, powerful pedagogical move. Students who understand AI are better positioned to use it responsibly. You can even use AI in class, transparently, to demonstrate its limitations. Feed it a prompt together, look at the output, and analyze it: what did it get right? What did it miss? What would it have needed to know to do better? That is critical thinking. That is your curriculum.
Even with the best prompts and the most efficient workflows, we have to remember that we aren't just teaching a curriculum, we're teaching human beings. If we focus so much on the how of the technology that we lose sight of the who in the desks, we've missed the point entirely. Because while AI can draft a worksheet in seconds, it can't build the one thing that actually makes learning possible. Trust. The relationship between a teacher and a student is one of the most important things in education. Research consistently shows that students learn better when they feel safe, respected, and genuinely believed in. Anything that damages that relationship has real costs for learning, and not all students arrive at your classroom carrying the same relationship with institutions. So I want to talk about some specific students, okay? Students who are already navigating considerably more than their peers, often invisibly, because the way we handle AI detection, surveillance, and academic integrity is not a neutral experience for everyone. For some students, it carries a weight that most of us, if we're honest, have never had to carry ourselves. Think about your English language learners, right? These are students, often children of immigrant families, who are simultaneously learning the content of your course and learning the language it's taught in. They may have left behind schools, friends, extended families, sometimes entire ways of life. They come to your classroom already exhausted from the translation work their brains are doing every single day. Many of them are also serving as translators and cultural bridges for their own parents at home, filling out forms, making phone calls, navigating bureaucracies that were not designed with their families in mind. They are doing adult labor while trying to be students.
And then they work extraordinarily hard on an essay, drafting carefully, choosing each word deliberately, and a detection algorithm says their writing looks machine-written. Or think about your students of color, particularly those in schools that are under-resourced, in neighborhoods where policing is heavy, where young people have watched their family members or neighbors be presumed guilty before anything else. These students have often had a deeply complicated relationship with authority figures and institutions long before they walked into your classroom. Many of them have already received the message, in countless small and large ways, that the system is watching them more closely than their Caucasian peers. And they are suspected first and understood later, if ever. When you introduce surveillance technology into the classroom, keystroke monitoring, revision history tracking, AI detection flagging, you are not introducing something neutral for these students. You're adding one more layer to a pattern they already know too well. Think about your students living in poverty, students who are sharing a bedroom with three siblings, who don't have reliable internet at home, who are working part-time jobs to help their family make rent, who sometimes come to school hungry. When we talk about academic integrity, we often frame it as a moral issue. And it is. But we have to be honest that the circumstances in which a student turns to AI as a shortcut are not always simply laziness or dishonesty. Sometimes they reflect desperation. Sometimes they reflect a student who has been set up to fail and knows it. That doesn't make academic shortcuts acceptable, but it does mean the conversation we have with that student needs to be very different than the conversation we might have with a student who simply couldn't be bothered.
And think about your students with disabilities, students with dyslexia who have been told their whole lives that their writing is messy, students with anxiety disorders who write in hyper-organized, repetitive ways because structure helps them feel safe. Students with ADHD whose writing can swing between brilliant and fragmented. These students have often spent years in systems that were not built for them, being evaluated by tools that weren't built for them, and receiving feedback that centered what they couldn't do rather than what they could. Adding an algorithm that scrutinizes the style of their writing and compares it to a norm they were never able to achieve is not a small thing. I want to acknowledge and support students who identify as LGBTQ plus, and in particular those who may not be out at home. For these students, school might be the only safe place where they can feel comfortable and express themselves. They may have learned, sometimes through harsh experiences, that certain institutions are not safe for them. For these students, a teacher's genuine trust and care can be a lifeline. The last thing they need is to feel like their teacher is constantly monitoring their every move. I say all of this not to paralyze you or tell you that you can't have standards. You absolutely can, and you should. High expectations are one of the most powerful forms of respect you can offer any student. What I'm saying is that how you hold those standards, the spirit in which you approach a student's work, the first instinct that guides you when something seems off, matters enormously. And for these specific students, it can mean the difference between staying engaged with school and quietly giving up on it. And that giving up isn't just a lack of motivation, it's actually a biological shutdown.
This is about how, when trust breaks down, the brain itself changes its priorities. It's about how the human brain is wired to learn, or more importantly, how it's wired to protect itself. Here's something I want to be very direct about. There is substantial research on what happens to learning when students don't trust their teachers or their environment. And it's not just a feelings issue, it's a cognitive issue. When a student feels threatened, surveilled, or suspected, their brain activates a stress response. Decades of educational psychology research have established that chronic stress is genuinely incompatible with the kind of higher-order thinking that school requires. Analysis, synthesis, creativity, argument building, genuine writing. You cannot write a thoughtful essay when part of your brain is scanning for danger. You cannot take intellectual risks when you are braced for consequences. Students who do not feel trusted by their teachers do not ask questions when they're confused, because asking a question might reveal that they don't understand something, and in an environment that feels hostile, that feels like a vulnerability they can't afford. So they disengage, they perform compliance rather than genuine learning, and over time, they stop believing that school is for them. For students from historically marginalized communities, Latino and Hispanic, Indigenous, African American, and others, students from low-income families, students with disabilities, this pattern is not abstract. It is documented. Researchers Garcia Lopez and Truziello Linan, writing in 2025, found that students who perceive racial and cultural bias from their teachers show measurably lower academic achievement. Not because they were less capable, but because the psychological cost of navigating that environment consumes cognitive resources that would otherwise go toward learning. When we talk about the achievement gap, we often look for explanations in the students themselves.
The research increasingly points back to the environment. Trust is not a nice extra, it's a prerequisite. So when you introduce AI detection tools without careful thought about who they disproportionately flag, and without protective processes to ensure no student is accused based on the algorithm alone, you're not just adding a neutral administrative step to your workflow. You are potentially confirming for some of your most vulnerable students that they were right to be cautious about trusting you. That is a consequence worth taking very seriously. So let's talk about what this actually looks like in practice, because I don't want to leave you with a problem without a path. First, be transparent with your students about your AI policies from day one. Not just the rules, but the why. And when you explain the why, be honest. Tell them that certain tools can sometimes incorrectly flag student work, that you are aware of that, and that your process will always involve a conversation with them before any conclusion is drawn. That message alone, that you will talk to them first, that you will hear them out, that an algorithm does not have the final word on their integrity, can mean something profound to a student who has been on the wrong end of an institutional assumption before. Second, when you have a concern about a piece of work, let your first move be curiosity, not accusation. Sit down with that student, ask them to walk you through their thinking. What was their argument? Why did they choose that angle? What evidence felt most convincing to them? A genuine author can tell you. And if something went wrong, if they used AI in a way that crossed a line they didn't fully understand, a conversation gives you a teaching moment instead of a disciplinary crisis. It also gives you the chance to ask the question underneath the question: what was going on when you were writing this? What got in the way?
Sometimes the answer to that question matters more than the assignment itself. Third, invest in relationships before you need to have a hard conversation. The teacher-student relationship is the most powerful variable in education. We know this. When students know in their bones, not just in theory, that their teacher sees them as a capable, valuable person whose success genuinely matters to that adult, everything changes. They are more willing to ask for help, more willing to take intellectual risks, more willing to come forward and say, hey, I made a mistake, rather than hoping nobody notices. Relationship is not a soft skill; it is the infrastructure everything else runs on. And finally, and this is perhaps the most important thing I can say in this whole episode: make sure your most vulnerable students know clearly and repeatedly that they belong in your classroom. Not conditionally, not only when they perform well. Unconditionally. Students who are English language learners, students who are living in poverty, students who are neurodivergent, students who are Indigenous, African American, Latino, or Hispanic, students who are navigating their identity in a world that makes that hard. These students bring gifts and perspectives and strengths that your classroom is richer for. Tell them that, show them that, and then make sure that every technology decision you make, including how you handle AI, honors that belief rather than contradicting it. Your students are watching how you handle this. They're watching whether you approach AI with fear and suspicion or with curiosity and critical thinking. If you model intellectual engagement with new technology, asking questions, acknowledging uncertainty, staying grounded in your values, you are teaching them something more important than any lesson plan. You are teaching them how to navigate a changing world thoughtfully and with integrity. That is a gift. 
And for some of your students, it is a gift they have never received before. Okay, I want to take a moment to speak directly to administrators. If you're a principal, department head, curriculum director, or assistant superintendent, this part's especially for you. And teachers, I hope you'll stay with me too, because you deserve to hear this conversation just as much. I want to start by saying I know this is hard for you too. You are being handed mandates from above, pressure from parents, budget constraints from every direction, and a technology landscape that is changing faster than any policy cycle can track. I understand that you are not simply choosing to leave teachers unsupported. But I also want to be honest with you: the gap between what teachers need to use AI well and what they're currently being given is real. It is documented, and it's having consequences. Let me share some specific, concrete things you can do. Not someday, but now. One survey found that as recently as early 2024, only 18% of university students believed their institution's faculty were well equipped to use AI tools. That number climbed to 42% by 2025. Real progress, but it still means a majority of settings are lagging. And in K-12, about 74% of districts planned to offer AI training by fall 2025, according to Demand Sage, though the quality and depth of those programs remain largely undocumented. So instead of booking a one-day AI training in September and calling it done, here's what I'd encourage: build a small AI learning team of four to six teachers who meet regularly, share what's working and what's not, and bring that learning back to their departments. Bring in a coach who can do classroom-embedded support rather than just stand-and-deliver training. This doesn't have to cost a fortune. It costs time, and a genuine commitment to protecting that time. Here is something practical you can do tomorrow. 
Look at your teachers' schedules and identify one protected, uninterrupted period per month specifically for AI experimentation and learning. No additional committee meetings, no coverage duties, just time to try a tool, reflect, and share with a colleague. And if your teachers are asking what to actually study during that time, I'll point you toward two frameworks worth knowing. The first is the TPACK model, technological pedagogical content knowledge, originally developed by Mishra and Koehler in 2006 and now adapted specifically for AI integration. It emphasizes that effective use requires not just technical competence, but the ability to connect AI tools meaningfully to content and pedagogy. The second is the Teacher Artificial Intelligence Competence Self-Efficacy Scale, or TAICS for short, developed by Chu and colleagues in 2024. It's a six-dimensional framework that encompasses AI knowledge, AI pedagogy, ethical reasoning, human-centered education, and ongoing professional engagement. Both are excellent guides for structuring what genuine AI professional learning looks like. Teachers consistently report that the biggest barrier to using AI effectively is not willingness; it's time. You cannot simultaneously ask teachers to reduce their workload and then refuse to give them the protected time needed to learn the tools that would enable that. It just doesn't compute. Another thing I want to talk about is auditing your AI detection tools before you deploy them. If your school or district is using AI detection software, or is considering it, please do your homework before rolling it out school-wide. Ask the vendor directly: What is your false positive rate? What groups of students are most likely to be incorrectly flagged? Is there independent research on this? The evidence is clear that false positives are more common among English language learners and students with neurodivergent writing profiles. 
Deploying these tools without awareness of that risk exposes your most vulnerable students to unjust academic consequences, and your school to real liability. If you use these tools, pair them with a clear process that requires a human conversation before any formal action is taken on an algorithmic result. Another thing you might want to do is ask your teachers what they actually need. This one seems obvious, but it isn't happening as often as it should. I'm telling you, send out a simple anonymous survey. Which AI-related tasks are saving you time? Which AI-related tasks are costing you more time than they save? What would make the biggest difference in your experience? And then, this is the part that matters, actually respond to what you hear. If teachers tell you they need two additional planning periods per month to implement new tools, take that seriously. If they tell you the current detection software is creating more anxiety than it resolves, investigate alternatives. Frontline teachers have the most important data available about what is actually happening. You need that data to make good decisions. And then, of course, model it yourself. If you want your teachers to engage with AI thoughtfully, engage with it thoughtfully yourself. Use it to draft an agenda and tell your staff you did. Use it to help write a parent newsletter and be transparent about it. Show curiosity and critical thinking. When you model learning something new in front of your staff, including the uncertainty that comes with learning, you signal that it is safe for them to do the same. That psychological safety is not a soft thing; it is the foundation of any sustainable professional growth. Okay, I want to circle back to where we began, because this part really does matter. AI in education isn't slowing down. The tools are getting more capable, students are using them more often, and the pressure on teachers to adapt is only increasing. 
None of that is reversing, and that's exactly why this next piece of the conversation is so important. So here are three things I want to leave you with, regardless of whether you're a teacher or an administrator. First, be honest about what is actually happening. Not the six-week promise, not the three percent reality. Both. They are both true, and they both deserve weight in how you plan and how you talk about this. Second, slow down. In a world moving this fast, the instinct is to run fast too. But the teachers who have found real benefit from AI are not the ones who tried everything at once. They're the ones who identified two or three things that genuinely worked for them and built from there. You do not need to be an AI expert. You need to be a thoughtful professional who knows their own context. Start small, be intentional, build from what works. Okay? And now, third, the question that should precede every decision about AI in the classroom is this one: does this support student learning better than the alternative? Not, is this fast? Not, is this trendy? Not, did the principal say we should use it? Does it support student learning? If the answer is yes, and if you have what you need to make that yes sustainable, then AI can genuinely be part of a better future for education. But getting to that future requires honesty about the present. The burdens are real, the support is insufficient, and the ethical stakes are high. You showed up for your students today, and you're going to show up for them tomorrow. And that matters more than any tool, any policy, or any trend. But because your time and your relationships are so valuable, I want to make sure you have a practical blueprint to protect them. Let's break down the essential moves for the classroom and the front office. So here are some key takeaways. For any teachers listening in, start small. 
Pick two or three high-leverage tasks, lesson drafts, differentiated materials, parent communication templates, and use AI consistently for those before expanding. Treat AI output as a first draft, not a finished product. Always review, fact-check, and adapt it to your students and curriculum. Redesign assessments to include process-based, in-person, or locally specific elements that are genuinely hard to outsource to AI. Be aware that AI detection tools disproportionately produce false positives for English language learners, neurodivergent students, students writing in a second language, and students from lower-income backgrounds. Never take action based solely on an algorithmic result. Have a private, non-accusatory conversation with a student before escalating any academic integrity concern. Let curiosity lead, not suspicion. Make belonging explicit and repeated for your most vulnerable students, ELL students, students of color, students with disabilities, students living in poverty, students navigating their identity. Tell them and show them that they belong. Have transparent conversations with students about your AI policies, including naming the limitations of detection tools, so they know you will always hear them first. Consider teaching AI literacy explicitly. Analyzing AI output together in class builds critical thinking and responsible use. And for any administrators who might be listening, whether you're a principal or department head or curriculum director: establish a clear, written AI use policy. Imperfect and evolving is far better than none at all. Replace one-time workshops with sustained, job-embedded professional development supported by coaching and peer collaboration. Protect specific, uninterrupted time for teachers to learn and experiment with AI tools. Audit detection tools for false positive rates, especially for ELL and neurodivergent students, before deploying at scale. 
And survey teachers regularly about their actual experiences with AI, and act on what you hear. Model thoughtful AI engagement yourself; your behavior sets the culture. I hope you share this podcast with others. You could share it with a colleague, a department head, a principal, or even a friend who works in your school. These conversations don't happen often enough, and the people who need them the most are sometimes the ones who never get pointed toward them. And to every educator listening, thank you for what you do. Not for the test scores, not for the metrics, but for the student in the back row who needed someone to notice them, and you did, because you were paying attention. This has been the Educator's Beacon Podcast, and I'm your host, Dr. Brandon Naylor. I'll be back with another conversation soon, this time on professional development. For real, this time. Until then, take good care of yourselves so you can keep taking good care of your students. I really appreciate all you do, and I'm grateful for the time we spent together today.