Saturday, 12 April 2025

The Unseen Hand of Complexity: How Tesler’s Law Shapes Effective Assessment Design

It all started with a question. Maybe it was meant to be an insult:

"Are you sure you aren’t making this more complex for yourself than you need to?"

I wasn't sure how to answer. Today, I would reply emphatically, 

“Yes! That’s exactly what I’m doing!”

Tesler's Law

    Last week, I discovered Tesler’s Law. From that moment, all the problems I’d been facing on an assessment design project simply melted into thin air. In this post, I want to elaborate on how Tesler’s Law of Conserved Complexity might apply to assessment design in education. Although I work in vocational education, the information will, I hope, be useful to anyone involved in assessment design.

    The pursuit of effective assessment is a constant evolution. It’s no wonder the word assessment begins with the small letter ‘a’! Bad Lacanian jokes aside, every educator, everywhere, strives to create assessment methods that accurately gauge student understanding, provide meaningful feedback, and ultimately drive learning forward. But all too often, we find ourselves wrestling with convoluted marking schemes, overly simplistic scoring systems, and labyrinthine processes that lead nowhere. Could there really be a guiding principle lurking beneath these design challenges? I didn’t think so. And then, out of the blue, there it was, right in front of me.

Tesler’s Law states:

"Every application has an inherent amount of complexity that cannot be eliminated or hidden."

Whilst originally conceived in the field of human-computer interaction, Tesler’s Law, also known as the Law of Conservation of Complexity, offers a surprisingly insightful and practical lens through which to analyse assessment design.

Come and see the complexity inherent in the system


    The complexity is always there. You can’t wish it away or design it away. Instead, it must be dealt with in one of two ways: either by the application itself or by the user. As educators, our "application" is the assessment itself, whether that is a test paper or a practical task to be completed competently; Tesler’s Law applies to any method we use to evaluate what students have learned, or are supposed to have learned. The "user" is both the student completing the assessment and the assessor tasked with evaluating it. That latter point, I think, was the most striking revelation of last week’s “discovery”. I had designed a process that looked incredibly simple, but all I’d done was shift the complexity onto the poor bloody infantry of assessors. It had seemed like a binary choice: simplicity or complexity. Once I started to think about the problem in terms of Tesler’s Law, I realised the real choice was not between the two but about where the complexity sits, and that meant I could deliberately shift it elsewhere.

The Trap of Overly Simplistic Assessments: Student Burden v. Limited Insight for Educators


    One might initially think that the ideal assessment is the simplest one possible. A quick multiple-choice quiz, for instance, seems straightforward. However, Tesler’s Law suggests that by pushing all the complexity onto the student, we might well be missing crucial insights. A multiple-choice question requires the student to:

  • navigate potentially nuanced options;
  • discern subtle differences;
  • guess strategically, or
  • all of the above.

    The complexity of understanding the underlying concept and applying it correctly is entirely on their shoulders. And whilst multiple-choice assessments are quick and easy to mark and score, they often provide only a superficial picture of student learning. We know what students got right or wrong, but not necessarily why. They might have guessed right, or they might have guessed wrong. They might even have guessed their way to a pass. Given these known unknowns, the complexity of diagnosing misconceptions and providing targeted feedback increases significantly for the educator.

The Power of Well-Designed Complexity:

    Conversely, consider an assessment with a detailed rubric. Here, the educator takes on more of the complexity in the design phase.

Educator Investment: Crafting a clear rubric with specific criteria, levels of achievement, and guiding questions requires significant upfront effort. This is the educator absorbing the inherent complexity.

Reduced Student Ambiguity: A well-defined rubric clarifies expectations, reduces ambiguity, and guides students in demonstrating their understanding in a structured way. The cognitive load for the student shifts from working out what the assessment is really asking to meeting its stated criteria.

Richer Data for Educators: The detailed rubric allows for a more nuanced evaluation of student work, providing richer data on their strengths and areas for growth. The complexity of analysis is managed by the structured framework.
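
For readers who find it easier to see such things in code, here is one way of picturing that structured framework. This is only a minimal Python sketch, and the criteria, level descriptors, and weights are invented purely for illustration; they are not taken from any real marking scheme.

```python
# A rubric sketched as a data structure. Everything below is a made-up
# example: the criteria, the level descriptors, and the weights.

RUBRIC = {
    "Accuracy of technique": {
        "weight": 0.5,
        "levels": {
            0: "Not demonstrated",
            1: "Demonstrated with significant prompting",
            2: "Demonstrated with minor prompting",
            3: "Demonstrated independently and consistently",
        },
    },
    "Safe working practice": {
        "weight": 0.3,
        "levels": {
            0: "Unsafe practice observed",
            1: "Safe with reminders",
            2: "Consistently safe without reminders",
        },
    },
    "Communication": {
        "weight": 0.2,
        "levels": {
            0: "Unclear or absent",
            1: "Adequate",
            2: "Clear and well structured",
        },
    },
}


def score(observed_levels: dict[str, int]) -> float:
    """Turn per-criterion judgements into a single weighted score.

    The complexity of defining the criteria sits with the educator, up
    front, inside RUBRIC; at marking time the assessor's job reduces to
    choosing one level for each criterion.
    """
    total = 0.0
    for criterion, details in RUBRIC.items():
        level = observed_levels[criterion]
        max_level = max(details["levels"])  # highest defined level
        total += details["weight"] * (level / max_level)
    return round(total, 2)


# Example: an assessor records one level per criterion.
print(score({
    "Accuracy of technique": 2,
    "Safe working practice": 2,
    "Communication": 1,
}))  # 0.73
```

The point of the sketch is simply this: every line of that structure is educator-side complexity, absorbed once and up front, so that it does not have to be re-solved by every assessor and every student at marking time.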

Finding the Balance:

    The key takeaway from applying Tesler’s Law to assessment design isn’t about making everything complex or overly simplistic. It’s about consciously deciding where to place the inherent complexity. Effective assessment design distributes this complexity strategically, to maximise learning and provide meaningful insights. We should always know where the complexity lies.

Here are some practical lessons that I’ve learned from my current design project:

Clarity is Key: I’ve invested a lot of time in this over the last few weeks. Crafting clear and concise instructions, rubrics, and expectations shifts the complexity away from the student and onto the educator. Instead of leaving the student to decipher the task, the educator provides clear, upfront, standardised guidance. Think inductions and briefings.

Structure for Success: Provide scaffolding and frameworks for complex tasks. Breaking large tasks down into smaller, manageable steps reduces the cognitive load on students. And this doesn’t just apply to students; it applies to the assessment process too, for example in the way the assessment items are written in the first place.

Targeted Feedback Mechanisms: Design assessments that allow for specific and actionable feedback. This helps educators address the underlying complexities of student understanding. It’s also worth considering how some of that feedback might be automated, freeing educators to add more detailed and nuanced context (see the sketch after this list).

Consider the Learning Objectives: The complexity of the assessment should align with the complexity of the learning objectives. Higher-order thinking skills often require more complex assessment methods. Ensuring that the syllabus aligns with the assessment is a jolly good place to start.

Iterative Design: Don’t expect perfection. Assessment design is rarely right on the first try. Be willing to iterate and refine your methods based on student performance and your own reflections. That might mean a period of double-marking, but taking on that weight of responsibility early heads off more complex problems later. It also acknowledges and addresses the inherent complexity of measuring learning. Students are endlessly inventive in the ways they get things wrong, so the assessment structure and processes have no option but to be dynamic.
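
Again, purely as an illustration of the "automate the predictable feedback" idea above: the question, the notionally correct option, and the misconception texts in this Python sketch are all invented. Predictable wrong answers trigger standard feedback automatically, while anything unexpected is routed to the educator for a personal comment.

```python
# A made-up example of partially automated feedback for one
# multiple-choice item with a notional correct answer of "C".

KNOWN_MISCONCEPTIONS = {
    "B": "You may have confused the process with its result; revisit that part of the notes.",
    "D": "This option describes the exception, not the general rule.",
}


def feedback_for(answer: str, correct: str = "C") -> str:
    """Return feedback for a single multiple-choice response."""
    if answer == correct:
        return "Correct."
    if answer in KNOWN_MISCONCEPTIONS:
        # Predictable errors get immediate, standardised feedback...
        return KNOWN_MISCONCEPTIONS[answer]
    # ...while genuinely novel mistakes are flagged for a human.
    return "Flagged for individual feedback from your assessor."


print(feedback_for("B"))  # automated, standard feedback
print(feedback_for("A"))  # routed to the educator
```

The complexity has not vanished, of course; it has moved into the table of known misconceptions, which someone still has to build and maintain. But that is exactly the kind of conscious, deliberate placement of complexity that Tesler’s Law encourages.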

Beyond Ease of Grading:

    Ultimately, Tesler’s Law reminds us that striving for the easiest assessment to grade might inadvertently place a greater burden on our students and limit the depth of our understanding of their learning. By consciously embracing and strategically managing the inherent complexity of assessment design, we can create fairer, more informative, and ultimately more effective methods for evaluating and fostering student growth. Let’s move beyond the illusion of simplicity and embrace the inherent complexity.
