AI, Education and the Environment: Why Human Thinking Still Matters

Artificial intelligence is no longer a distant prospect. It is already shaping how we write, learn, work, search, code, communicate and even form relationships. In a wide-ranging conversation with Professor Nigel Crook from Oxford, students explored some of the biggest questions surrounding AI today: its environmental cost, its effect on learning, its role in creativity, its limits, and its place in future careers.

The discussion began with one of the most immediate concerns: software development. AI tools can now generate code quickly, which is one reason companies are adopting them: they save time and money. However, speed does not always mean quality. AI-generated code can vary enormously. Some of it may work well, but some of it may be poorly structured, insecure or unreliable. This matters because software is not just used for games or websites. It increasingly controls finance, education, transport, healthcare and public services. If large amounts of code are generated automatically without proper review, testing and security checks, society may end up depending on systems that were not developed with enough care. In this sense, AI does not remove the need for skilled software engineers. It may actually make them more important, especially in cyber security, code review and responsible system design.
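To make the review point concrete, here is a small illustrative sketch (the scenario and names are invented for this article, not taken from the talk). The first function uses a pattern that often appears in quickly generated code: building a database query by pasting user input directly into the SQL string. The second shows the safer, reviewed version using a parameterised query.

```python
import sqlite3

# A throwaway in-memory database purely for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # Risky pattern: the input is spliced straight into the SQL text,
    # so a crafted input can change the meaning of the query.
    query = f"SELECT role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterised query: the database driver treats the input
    # purely as data, never as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

# An input that exploits the unsafe version.
payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # leaks every row in the table
print(find_user_safe(payload))    # returns nothing: input stayed data
```

Both functions look plausible at a glance, which is exactly why human review, testing and security checks still matter when code is generated at speed.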

Another major issue is the environmental cost of AI. Large language models require huge amounts of energy, especially during training, when they process vast quantities of text, images and other data. They also continue to consume energy each time users enter prompts. Professor Crook highlighted two major impact areas: electricity and water. Data centres generate heat, so they need cooling, and that cooling can involve significant use of water. Even if companies locate data centres in colder parts of the world, the underlying problem remains: current AI systems are computationally expensive. The comparison with the human brain is striking. A human brain uses roughly 15 to 20 watts of power, while training large AI models can require enormous amounts of electricity. The point is not simply that AI uses energy, but that its current design may be inefficient when compared with biological intelligence.
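The scale of that gap can be shown with a back-of-envelope calculation. The brain figure comes from the discussion above; the training-run figure is an assumed round number for illustration only, since published estimates for large models vary widely.

```python
# Rough comparison: human brain power draw vs. one large training run.
BRAIN_WATTS = 20            # upper end of the ~15-20 W figure above
HOURS_PER_YEAR = 24 * 365
TRAINING_KWH = 1_000_000    # assumed 1,000 MWh run, illustrative only

brain_kwh_per_year = BRAIN_WATTS * HOURS_PER_YEAR / 1000  # watts -> kWh

years_equivalent = TRAINING_KWH / brain_kwh_per_year
print(f"Brain energy per year: {brain_kwh_per_year:.0f} kWh")
print(f"One training run is roughly {years_equivalent:,.0f} brain-years")
```

Even with generous assumptions, a single training run corresponds to thousands of years of a brain's energy budget, which is the sense in which current AI looks inefficient next to biological intelligence.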

This raises a deeper question: are today’s AI algorithms the right direction? Modern AI has been inspired in part by the brain, but it does not work like the brain. Neural networks process huge arrays of numbers, usually on GPUs because GPUs are very good at parallel calculations. This is why graphics cards, originally designed to handle pixels and fast-changing visual data, have become central to AI development. However, this also creates further environmental and supply-chain concerns, including demand for specialist chips and the materials needed to manufacture them. If AI is to become part of everyday life, then making it more efficient is not just a technical challenge. It is an ethical and environmental one.
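The link between neural networks and GPUs can be seen in miniature. A single network layer is essentially a matrix-vector product: each output neuron is an independent sum of multiply-adds, and it is precisely this independence that lets a GPU spread the work across thousands of cores. This pure-Python sketch shows the arithmetic; real frameworks dispatch the same computation to GPU hardware.

```python
def layer(weights, inputs):
    # Each row of weights produces one output neuron, independently
    # of the others -- these rows could all be computed in parallel,
    # which is what makes the workload a natural fit for GPUs.
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

weights = [
    [0.5, -1.0, 2.0],   # neuron 1
    [1.0, 1.0, 1.0],    # neuron 2
]
inputs = [1.0, 2.0, 3.0]
print(layer(weights, inputs))  # one output value per neuron
```

Multiplied up to millions of neurons and billions of weights, this is the workload that made graphics hardware, built for massively parallel pixel arithmetic, the engine of modern AI.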

For students thinking about careers, the message was surprisingly optimistic. AI has not made computer science irrelevant. On the contrary, it has made computing more important. Software, data, cyber security, robotics and AI ethics are becoming central to more industries. Engineering is also increasingly important, especially where computing meets the physical world, such as robotics, motorsport, automation and sustainable technology. The advice given to students was not simply to chase whatever subject sounds most employable, but to find the overlap between their strengths, interests and future opportunities. Passion still matters, because meaningful work usually requires persistence, curiosity and deep learning.

Education, however, faces a difficult challenge. AI can help students learn, but it can also help them avoid learning. Universities are already dealing with students submitting AI-generated work. Some departments have introduced monitoring, declaration policies and other guardrails. But the larger issue is not just cheating. The greater danger is that students may let AI do the thinking for them. If learners use AI to bypass the struggle of forming ideas, solving problems and developing judgement, they may weaken the very abilities education is meant to build. AI can support thinking, but it should not replace it.

This concern connects to a student’s question about whether humans were smarter 500 years ago. The answer depends on what “smarter” means. Modern people may know more facts and have access to more tools, but people in the past achieved remarkable things with far less technology. Ancient engineering, architecture and navigation show high levels of intelligence, planning and creativity. The real question is whether AI could make us intellectually dependent. Satnav offers a simple example: it helps us reach a destination, but if we rely on it completely, we may lose our ability to read maps or understand routes. AI could have a similar effect on writing, reasoning, coding and problem-solving if we use it passively.

There was also a thoughtful discussion about creativity in mathematics and science. AI can already help solve complex problems, but that does not mean human mathematicians or scientists become unnecessary. AI systems are trained on human-created data. They generate outputs by learning patterns from that data and recombining them in useful ways. This can be powerful, but it also has limits. In areas where the data is incomplete, unusual or outside the model’s training, AI can produce false or misleading answers. The comparison with chess is useful: human plus machine can outperform either human or machine alone. The future of mathematics, science and creative problem-solving may not be AI replacing humans, but humans using AI as a powerful tool while still providing direction, judgement and imagination.

The question of whether AI can feel emotions was answered clearly: no. AI can simulate emotional language, but simulation is not the same as experience. A chatbot can say that it feels sad, happy, lonely or in love, but it does not have consciousness, lived experience or inner awareness. This distinction matters because people are already forming emotional and romantic relationships with AI systems. Professor Crook’s wider work on ethical AI and moral machines connects closely with this concern: as AI becomes more convincing, society needs to think carefully about what machines can imitate and what they cannot genuinely experience.

This leads to one of the most important limits of AI: relationships. AI may affect almost every field, from farming to finance, but there are areas where human experience remains essential. Counselling, teaching, care work, mentoring, psychology and leadership all depend on more than pattern recognition. They require empathy, judgement, trust, moral responsibility and lived experience. AI may support professionals in these fields, but it should not replace the human relationship at the centre of them.

The conversation also touched on gender in computer science. Many universities still find it difficult to attract equal numbers of girls and boys into computing, even though some schools are making strong progress. Activities such as robotics competitions, practical problem-solving and early exposure can help students see computer science as creative, collaborative and relevant. This matters because the future of AI and computing should not be shaped by a narrow group of people. If technology affects everyone, then a wider range of voices should be involved in designing it.

The overall lesson is balanced rather than alarmist. AI is not going away. It is commercially valuable, increasingly embedded in existing systems and too useful for society simply to abandon. But accepting AI does not mean accepting it uncritically. We need to ask harder questions about energy use, water use, security, education, creativity, relationships and human agency.

For schools, the challenge is clear. Students need to learn how AI works, where it is useful, where it is risky and how to use it without giving up their own thinking. The aim should not be to reject AI or worship it. The aim should be to develop students who can use AI intelligently, question it confidently and remain capable of independent thought.

AI may become one of the most important tools of this generation, but it is still a tool. The human task is to decide how, when and whether to use it.

Thank you to Professor Nigel Crook for giving his time so generously and for offering students such a thoughtful, honest and engaging discussion about artificial intelligence, ethics, education and the future.

Further reading

Rise of the Moral Machine: Exploring Virtue Through a Robot’s Eyes – Professor Nigel Crook’s book on moral machines and ethical AI.
https://ntcrook.com/?page_id=56

Vibe Coding: From Idea to App at the Speed of Flow – James Abela’s book on using AI-assisted development to move from an idea to a working app through rapid prototyping, iteration and testing.
https://www.amazon.com/Vibe-Coding-Idea-Speed-Flow-ebook/dp/B0GNZSC39P

Author