
As AI tools rapidly make their way into classrooms, shaping marking policies, lesson planning, and even how students write essays, it’s increasingly important for educators to understand not just what these tools do, but how they work. Among the most popular AI models in education are Claude (Anthropic), ChatGPT (OpenAI), DeepSeek, and Copilot (Microsoft). Each brings something different to the table, and knowing those differences is key to using them well and teaching AI literacy confidently.
One of the biggest distinctions between these models is how they handle memory and personalization. OpenAI’s ChatGPT, through its optional memory feature, is designed to learn your preferences over time. It can remember your name, the tone you prefer, or even what projects you’re working on. This can be a real advantage for teachers who use the tool regularly, whether for planning lessons, co-writing resources, or creating consistent feedback. Its ability to match your writing style makes it feel like a personal assistant that improves the more you use it. However, this comes with a trade-off: personalization requires storing some information about you. While OpenAI offers settings to view, manage, and delete memories, using the feature still means placing trust in how that data is handled.
In contrast, Claude takes a privacy-first approach. It does not retain memory between sessions: each conversation begins fresh, with no long-term record of your preferences or prior chats. This can feel limiting for ongoing tasks, but it’s ideal for educators who prioritize data minimization and prefer tools that don’t remember student inputs or sensitive conversations. Importantly, Claude has also developed a reputation for producing nuanced, thoughtful writing, especially in humanities-style tasks like critical analysis, ethical discussion, or literary response. For that reason, many educators have found Claude to be the strongest choice for essay writing or modelling thoughtful tone.
Then there’s DeepSeek, a lesser-known but fast-growing AI system that brings a very different strength to the classroom: transparency. One of its most impressive features is that its reasoning model, DeepSeek-R1, shows its working, laying out its thinking step by step before answering. This is especially valuable for educators trying to teach students how to structure arguments, justify decisions, or trace logic in both writing and problem-solving. While DeepSeek doesn’t yet offer memory or deep personalization, its ability to “think out loud” makes it a fantastic teaching tool for developing AI literacy, critical thinking, and structured reasoning.
Copilot, Microsoft’s AI assistant, takes yet another approach. Rather than focusing on writing or reasoning, Copilot integrates directly into tools like Word, Excel, Teams, and PowerPoint. Its strength lies in automating repetitive tasks: rewriting an email, summarizing a meeting, or extracting insights from a spreadsheet. While it offers some adaptive behaviour, it doesn’t currently have a transparent memory system or the level of conversational nuance that Claude and ChatGPT provide. It’s best thought of as a practical assistant for workflow efficiency rather than a co-writer or tutor.
One important issue that often gets overlooked is how educators access these models. Many people use so-called “wrappers”: third-party websites or platforms that sit on top of Claude, ChatGPT, or other LLMs. While these can offer convenience or flashy features, they often hide what’s actually happening underneath, and may introduce privacy risks, outdated models, or misleading representations of what the AI can and cannot do. For teachers aiming to guide students toward responsible AI use, it’s essential to go straight to the source when possible: OpenAI’s ChatGPT platform directly, Claude via Anthropic’s own site, or DeepSeek through its official interface. Doing so ensures access to the most current capabilities, clearer data policies, and a more accurate understanding of each tool’s limitations.
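For the technically curious, here is a minimal sketch of what a wrapper typically does behind the scenes, assuming the official OpenAI Python library. The wrapper itself, its logging step, its hidden instructions, and its model choice are all hypothetical illustrations rather than any real product’s code, but the flow is the point: your words pass through, and can be stored on, someone else’s server before the model ever sees them.

```python
# A simplified, hypothetical sketch of a third-party "wrapper".
# Assumes the official OpenAI Python SDK; the logging step, hidden
# system prompt, and model choice are illustrative, not a real product.
from openai import OpenAI

client = OpenAI(api_key="THE-WRAPPER-OPERATORS-KEY")  # the operator's key, not yours


def log_to_wrapper_database(text: str) -> None:
    # Stand-in for whatever storage the wrapper's operator runs:
    # your prompt sits on their server before it reaches the model.
    print(f"[wrapper log] {text}")


def wrapper_chat(user_prompt: str) -> str:
    # 1. Your text arrives at the wrapper first, where it can be read or stored.
    log_to_wrapper_database(user_prompt)

    # 2. The wrapper forwards it to the underlying model, possibly an older or
    #    cheaper one than advertised, with its own hidden instructions prepended.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # chosen by the operator, invisible to the user
        messages=[
            {"role": "system", "content": "Hidden instructions set by the wrapper."},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content
```

The details vary from site to site, but the structure is the same: a wrapper is a middle layer between you and the model, and going straight to the provider removes it.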
In this context, developing AI literacy doesn’t just mean knowing which prompts get the best results. It means understanding how memory works, what kinds of data are stored, how different models handle writing and reasoning, and why using core tools (rather than third-party wrappers) leads to more transparent, ethical engagement. It also means recognising that no tool is “better” across the board, just better at different things.