Round 1 Matchups and Discussion Highlights

8th March (Round 1.8): Lego Spike Prime vs. VEX GO

This matchup brought out strong opinions from educators, particularly regarding long-term learning pathways and support ecosystems. VEX GO received widespread praise for its structured progression, which lets students move seamlessly on to VEX IQ and VEX V5 as they advance in their learning. The well-established VEX ecosystem, including student competitions and a comprehensive support system, was a significant factor in its favour.

On the other hand, Spike Prime was recognised for its flexibility, particularly in programming languages: it supports both block coding and Python, making it a strong choice for classrooms that want a smooth entry into text-based programming. Educators pointed out that VEX also supports multiple languages, and several noted that VEX's integration between block coding and Python/C++ felt more seamless, giving students a clearer progression.

Win for VEX GO

9th March (Round 1.9): Mu vs. Thonny

Unlike some of the other matchups, this one did not generate a great deal of debate. Mu was the clear winner, primarily due to its user-friendly interface, which is particularly well suited to younger learners. Although Mu only supports an older version of Python, educators appreciated its simplicity and ease of use.

Thonny, while a solid choice for Python development, was seen as slightly more advanced and better suited to learners who already have some experience with text-based programming. In classrooms of absolute beginners, Mu's interface was favoured for reducing cognitive load.

Win for Mu

10th March (Round 1.10): Code.org vs. Khan Academy

This matchup sparked an interesting discussion about target audiences. Khan Academy was praised for its rich content and high-quality instructional videos, but many educators felt it was more appropriate for older students.

In contrast, Code.org was seen as an excellent choice for younger learners, particularly due to its structured approach and engaging activities. Many also credited Code.org for its Hour of Code initiative, which has significantly influenced computing education worldwide, generating goodwill and a wealth of teaching materials.

Ultimately, Code.org’s hands-on, interactive approach gave it the edge, especially in primary education settings.

Win for Code.org

11th March (Round 1.11): CodeHS vs. Codecademy

This was one of the more anticipated matchups, as both platforms are highly regarded in the coding education space.

CodeHS stood out for its structured curriculum paths, making it a strong choice for schools looking for a guided, standards-aligned approach. It provides a clear learning progression and is often favoured by teachers who want a structured syllabus for their students.

Codecademy, on the other hand, was praised for its interactive, self-paced approach. It allows students to explore coding concepts at their own speed, making it a good fit for independent learners. However, the lack of a structured curriculum made it less appealing for classroom settings.

In the end, CodeHS’s curriculum-driven model secured its victory in this round.

Win for CodeHS

12th March (Round 1.12): micro:bit vs. Makeblock mBot

Both platforms have their strengths, but the micro:bit won this round due to its affordability, versatility, and ecosystem of accessories. Educators appreciated how easily it integrates into a range of learning environments, making it a great tool for introducing physical computing concepts.
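For context, physical computing on the micro:bit is usually introduced through MicroPython. The short program below is a minimal, illustrative sketch of that style (not one discussed in the round), using the standard microbit module:

    from microbit import *  # standard micro:bit MicroPython module

    # Scroll a greeting, then show a heart each time button A is pressed.
    display.scroll("Hello")
    while True:
        if button_a.was_pressed():
            display.show(Image.HEART)
            sleep(1000)
            display.clear()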

That said, some concerns were raised about the micro:bit's durability, specifically its flimsy battery connectors, which can be a weak point in the classroom.

Makeblock mBot, while a capable robotics tool, faced criticism for its plastic motors and inconsistent motion tracking, which some educators found frustrating. Despite these drawbacks, it remains a viable option for robotics education; however, the micro:bit's affordability and flexibility gave it the edge.

Win for micro:bit

13th March (Round 1.13): Swift Playgrounds vs. CS First

This matchup was largely influenced by hardware availability.

Swift Playgrounds was highly praised as an excellent iPad-based coding environment, offering an intuitive introduction to Swift programming. Schools with iOS/macOS devices found it an engaging and effective way to teach coding.

CS First, on the other hand, was recognised for its strong integration with Scratch and support for self-paced learning. It is particularly well-suited for schools using Chromebooks or mixed-device environments.

Ultimately, in settings where iPads were available, Swift Playgrounds was seen as the stronger choice due to its deep integration with Apple’s development ecosystem.

Win for Swift Playgrounds

14th March (Round 1.14): Raspberry Pi Code Editor vs. EduBlocks

This was a quieter round, with fewer strong opinions from participants.

One notable downside of the Raspberry Pi Code Editor was the effort required for a school to join and set it up, which proved to be a barrier for some educators.

EduBlocks, in contrast, was recognised for its flexibility. By supporting both block-based and full Python programming, it provided an accessible entry point for beginners while still offering the power of Python's extensive library ecosystem. This dual-mode approach made it more appealing to educators looking for a smooth transition from block coding to text-based coding.
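As a rough illustration of that transition, the kind of text-based Python a learner might move on to after block mode looks like the following. This is a generic standard-library example, not code taken from EduBlocks itself:

    import random  # a standard-library module, available once students move beyond blocks

    # A loop and a condition: the sort of program first built with blocks,
    # then rewritten directly in Python.
    for attempt in range(3):
        guess = random.randint(1, 10)
        if guess > 5:
            print(f"Attempt {attempt + 1}: {guess} is high")
        else:
            print(f"Attempt {attempt + 1}: {guess} is low")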

Win for EduBlocks

15th March (Round 1.15): CodeCombat/Ozaria vs. Erase All Kittens

This was an interesting round, as many participants were already familiar with CodeCombat but had little to no exposure to Erase All Kittens.

CodeCombat’s gamified approach to teaching coding was widely praised, particularly for its ability to engage students in a fun, interactive way. Some educators mentioned that there are cheaper alternatives available, but CodeCombat’s polished experience and structured learning paths gave it an advantage.

Erase All Kittens, despite having a unique and creative premise, suffered from lower awareness among educators, making it difficult to compete against the well-established reputation of CodeCombat.

Win for CodeCombat/Ozaria

16th March (Round 1.16): Turtle Academy vs. Python Turtle

Both tools focus on turtle graphics, but they cater to slightly different audiences.

Turtle Academy was praised for its pre-built exercises, which made it easy to introduce students to the concepts of turtle graphics. The fact that it is a ready-made package was a major plus for teachers looking for quick deployment.

Python Turtle, on the other hand, was seen as the more flexible solution for experienced computing teachers. While it required more setup, it allowed for greater depth and progression once students outgrew the basics of turtle graphics. This made it a preferred choice for educators looking to support students beyond the introductory level.
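To give a sense of what that setup and progression involves, the turtle module ships with a standard Python installation, so a first classroom program needs only a few lines. The example below is an illustrative sketch rather than a prescribed exercise:

    import turtle  # part of the Python standard library

    # Draw a square: a classic first turtle-graphics exercise.
    pen = turtle.Turtle()
    for _ in range(4):
        pen.forward(100)
        pen.right(90)

    turtle.done()  # keep the drawing window open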

Win for Python Turtle

Authors