December 29, 2025

Rethinking Grading Part 2: When the Gradebook Drives the Bus

Choosing a Reporting System That Actually Supports Competency-Based Learning

In a lot of districts that say they’re “doing competency-based learning,” you can see two very different realities:

  • In classrooms, teachers are designing rich tasks, giving targeted feedback, and tracking evidence of student learning over time.
  • In student information systems or gradebooks, everything is compressed into point averages, single-assignment scores, and rigid settings that don’t reflect how learning actually happens.

Too often, districts end up compromising good practice because the grading and reporting system can’t do what’s needed. The product is engineered around generic grading policies rather than around what would count as fair and accurate evidence of learning.

That’s backwards.

A grading and reporting system should support and reinforce best practices, not dictate bad practices.

Researchers such as Thomas Guskey, Ken O’Connor, Susan Brookhart, and others have shown that traditional grading often conflates achievement, behavior, and timing into a single symbol. When that happens, grades become less accurate, less fair, and less useful for students and families. At the same time, national competency-based education work stresses that systems must be redesigned so that evidence of mastery—not seat time—is at the center of how we report learning.

Below are key ideas and non-negotiables to keep in mind as you evaluate or redesign your grading and reporting system for competency-based learning—and how the New Hampshire Learning Initiative (NHLI) can help you navigate that work.

Report Competencies as Separate Grades—Without Making Teachers Do Backflips

If you’re serious about competency-based learning, your system must be able to report each competency as a separate performance level.

That means:

  • When a teacher enters an assessment, they can attach multiple competencies to that assessment.
  • They can score each competency separately based on the evidence provided by the assessment.
  • The system stores and reports those separate competency scores without forcing workarounds.

In many current systems, every competency tied to an assessment is automatically given the same score. The only workaround is to enter the same assessment multiple times—once for each competency. That is inefficient, confusing, and undermines good assessment design.

Non-negotiable:
A modern grading and reporting system must support a single assessment across multiple competencies, with each scored accurately and independently.

This isn’t just a convenience feature; it’s about validity. Competency-based grading research shows that reporting against specific competencies improves clarity and fairness because grades reflect what students know and can do in each area, rather than a single, vague overall mark.

If a system can’t do this, it is not truly designed for competency-based learning—no matter what the marketing slide says.
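
To make the data requirement concrete, here is a minimal sketch (Python, purely illustrative, and not any vendor’s actual schema) of the relationship the system has to store: one assessment linked to several competencies, each with its own independently entered score. Every name and identifier below is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class CompetencyScore:
    """One competency scored from the evidence in a single assessment."""
    competency_id: str        # e.g., "SCI.ANALYZE_DATA" (illustrative identifier)
    performance_level: float  # e.g., 3.5 on a 4-point rubric

@dataclass
class Assessment:
    """A single task that can carry evidence for several competencies."""
    title: str
    scores: list[CompetencyScore] = field(default_factory=list)

# One assessment, three competencies, each scored independently,
# with no duplicate gradebook entries required.
unit_task = Assessment(
    title="River Pollution Investigation",
    scores=[
        CompetencyScore("SCI.ANALYZE_DATA", 3.0),
        CompetencyScore("SCI.CONSTRUCT_EXPLANATIONS", 3.5),
        CompetencyScore("ELA.WRITE_ARGUMENT", 2.5),
    ],
)

for s in unit_task.scores:
    print(unit_task.title, s.competency_id, s.performance_level)
```

The point is simply that each score lives on the link between the assessment and a specific competency, so a teacher never has to enter the same assessment twice.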

Flexible Scales at the Assessment Level (and Accurate Conversion)

Districts are rarely at the same stage in their grading journey. Some are ready to move entirely to a 4-point rubric. Others are just beginning to shift practice but want to maintain a 100-point scale for now.

A quality grading system can handle that.

At the assessment level, teachers should be able to use:

  • A 4-point rubric
  • A 10-point scale
  • A 100-point scale
  • Or other rubric structures needed for specific tasks

The system’s job is to:

  • Convert these scores accurately into the school or district’s overall grading scale, and
  • Do so transparently, in ways that align with local policies and make sense to students and families.

This flexibility allows districts to:

  • Keep a 100-point scale while they build competency-based practice.
  • Transition to a 4-point scale as needed.
  • Allow different structures across levels (e.g., K–5 on rubrics, 9–12 on a 100-point scale) while still maintaining internal consistency and fairness.
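
As one illustration of what transparent conversion can look like, here is a minimal sketch assuming a district publishes an explicit lookup table as policy rather than relying on hidden proportional math (a naive 3 ÷ 4 = 75% conversion would quietly turn “proficient” into a C). The cut points below are placeholders, not recommendations.

```python
# Illustrative only: these cut points are placeholders, not a recommended
# policy. Each district defines and publishes its own conversion table.
RUBRIC_TO_100 = {
    4.0: 100,
    3.5: 93,
    3.0: 85,
    2.5: 78,
    2.0: 70,
    1.5: 63,
    1.0: 55,
}

def rubric_to_percent(rubric_score: float) -> int:
    """Convert a 4-point rubric score to the district's 100-point scale
    using an explicit, published lookup table (no hidden math)."""
    try:
        return RUBRIC_TO_100[rubric_score]
    except KeyError:
        raise ValueError(f"No conversion defined for rubric score {rubric_score}")

print(rubric_to_percent(3.5))  # -> 93 under this placeholder table
```

Whatever table a district adopts, publishing it alongside the report card is what makes the conversion transparent to students and families.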

Grading experts consistently remind us that grades should summarize achievement against clear learning goals, regardless of the raw scoring scale used on individual tasks. Accuracy, consistency, and meaning matter more than the specific number system.

Be wary of any salesperson who says, “You have to go all in at once.”
That is not only inaccurate but also highly disruptive. When a system demands big changes before people understand the “why,” trust erodes and momentum stalls.

The system must be able to grow with the district, not impose limitations on it.

Alignment Does Not Mean Everything Looks the Same K–12

Another common misconception: “alignment” means every grade level’s report card, grading scale, and practices must look identical.

Real alignment comes from:

  • A shared vision of competency-based learning,
  • A coherent framework of competencies and performance levels, and
  • Practices that are developmentally appropriate and build as a progression over time.

It does not mean that K–2 needs to look like 11–12.

In many ways, elementary practice is already closer to what research says works:

  • Elementary teachers closely monitor progress over time.
  • They provide ongoing feedback and guidance.
  • They treat many tasks as practice, not final judgment.

Competency-based education frameworks emphasize that systems should respond to developmental needs, ensuring students have multiple opportunities and supports to achieve common competencies, rather than requiring identical reporting templates across grade levels.

A strong system recognizes that:

  • K–2 might lean on narrative descriptors, checklists, and space for teacher comments.
  • 3–5 might blend narrative with transparent competency reporting.
  • 6–8 begins to transition students into more formal grading structures.
  • 9–12 reports credit-bearing competencies while still honoring growth and evidence over time.

Alignment comes from shared language and coherent progressions, not from identical screens and report layouts.

Separate Practice from Competency Evidence in the Gradebook

One of the most significant divides between elementary and secondary practice is how we treat practice:

  • In elementary school, teachers track extensive practice—reading logs, drafts, math fluency, small-group tasks—but those practice assessments do not determine the final level of performance; they inform instruction and feedback.
  • In secondary school, many of these same practice opportunities become grades. Practice turns into “points in the book,” even when the learning isn’t finished.

Those “filler” grades:

  • Are often used to manage participation, not to measure competency.
  • Can undercut a student’s final demonstration of learning.
  • Make the gradebook bloated and confusing for students and families.

A grading system that supports competency-based learning must clearly distinguish between:

  • Practice assessments (formative, feedback-focused, meant to support learning), and
  • Competency assessments (summative demonstrations that determine the level of performance).

Many districts choose structures like:

  • 20% Practice / 80% Competency at the secondary level;
  • In elementary school, practice is tracked, but only competency assessments determine performance on the report card.

The exact percentages are less important than the underlying logic. The system must:

  • Allow you to label and separate practice vs. competency, and
  • Support weighting or other rules that make evidence of competency the primary driver of reported performance.
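
As a rough sketch of that logic, assuming the illustrative 20% practice / 80% competency structure above (the entry names, scores, and weights are all hypothetical), the calculation might look like this:

```python
entries = [
    {"title": "Vocabulary practice",   "category": "practice",   "score": 2.5},
    {"title": "Draft lab report",      "category": "practice",   "score": 3.0},
    {"title": "Final lab report",      "category": "competency", "score": 3.5},
    {"title": "Unit performance task", "category": "competency", "score": 4.0},
]

WEIGHTS = {"practice": 0.20, "competency": 0.80}  # local policy choice, not a fixed rule

def weighted_level(entries, weights):
    """Average each category separately, then combine by category weight,
    so competency evidence drives the reported level."""
    total, weight_used = 0.0, 0.0
    for category, weight in weights.items():
        scores = [e["score"] for e in entries if e["category"] == category]
        if scores:
            total += weight * (sum(scores) / len(scores))
            weight_used += weight
    return total / weight_used  # renormalize if a category has no evidence yet

print(round(weighted_level(entries, WEIGHTS), 2))  # -> 3.55 in this example
```

The specific numbers matter less than the structure: because practice is averaged separately and weighted lightly, a rough first draft cannot drag down the level earned on the final demonstrations.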

This lines up with decades of formative assessment research: when assessment is used primarily to support learning through feedback, adjustment, and student self-assessment, achievement increases. When we convert every practice opportunity into a grade, we strip formative assessment of its power.

If your system can’t distinguish between those assessment types, it will constantly pull you back toward points, averages, and compliance instead of learning.

Use Descriptive Performance Levels, Not Just Numbers

When competencies are reported online or on a report card, students and families need meaningful information—not just digits. 

Performance levels such as:

  • Extending / Advanced
  • Applying / Proficient
  • Developing / Basic Competency
  • Beginning / Not Yet Competent

Should be backed by explicit descriptors that explain:

  • What performance looks like at that level,
  • The kinds of tasks students can do independently, and
  • What would move them toward the next level.

Numbers alone (2.5, 3.0, 4.0) don’t tell a student:

  • “Here’s where you are today.”
  • “Here’s what this looks like in your work.”
  • “Here’s what you can do next.”

Standards-based grading work from multiple organizations has shown that performance descriptors tied to clear standards improve communication with families and support more targeted instructional decisions.

Students deserve fair, accurate, descriptive feedback about their learning. The system should make it easier, not harder, to provide that.

Report the Learning Trend, Not Just the Average

Some grading systems advertise clever formulas that promise to “replicate the trend line.” But there is no substitute for actually reviewing a student’s learning trends.

Consider a student whose rubric scores over time are:

2.0 → 3.0 → 3.5 → 4.0

  • The average is about 3.1.
    That early 2.0 pulls down the final number, even though it reflects the student’s starting point rather than their current understanding.
  • A learning trend examines growth patterns and gives greater weight to the most recent evidence. In this case, a 3.5 is a more accurate representation of the student’s current competency level.

A learning trend:

  • Recognizes that early attempts are part of the learning process.
  • Reflects what the student can do now, not what they could do weeks or months ago.
  • Produces a grade that is more accurate and fair.

Grading experts argue that grades should be based on the most recent and/or most consistent evidence of learning, rather than simple averages that punish students for early attempts.

A strong system:

  • Lets you view evidence over time for a specific competency.
  • Supports tools or views that emphasize the most recent and consistent performance, rather than just the arithmetic mean.
  • Makes it easier for teachers to use professional judgment grounded in evidence.

At the end of a quarter, trimester, semester, or year, a student’s reported level should accurately reflect what they know and can do now.
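
No formula replaces a teacher’s review of the evidence, but a tiny sketch using the trajectory above shows why the method matters. The recency weights here are arbitrary placeholders chosen only to contrast a recency-weighted view with the simple mean; they are not a recommended formula.

```python
scores = [2.0, 3.0, 3.5, 4.0]  # the trajectory from the example above

# Simple mean: the early 2.0 drags the result down.
mean = sum(scores) / len(scores)  # 3.125

# One illustrative recency-weighted view: later evidence counts more.
weights = range(1, len(scores) + 1)  # 1, 2, 3, 4 (placeholder weights)
recency_weighted = sum(w * s for w, s in zip(weights, scores)) / sum(weights)

most_recent = scores[-1]  # 4.0

print(f"mean={mean:.2f}  recency-weighted={recency_weighted:.2f}  most recent={most_recent}")
# -> mean=3.12  recency-weighted=3.45  most recent=4.0
```

Even this crude weighting lands near the 3.5 the example describes, while the simple mean understates what the student can do now.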

What Quality Support Looks Like When Designing a Grading and Reporting System That Fits the Learning

This kind of work is complex. It touches policy, technology, classroom practice, and family communication. Districts shouldn’t be expected to navigate it alone—and they shouldn’t have to compromise good practice to fit a system.

High-quality support helps districts ensure that grading and learning systems grow together, grounded in both local wisdom and national research.

Clarifying the Vision and Non-Negotiables

High-quality support begins with a clear vision, not with a software demo. Effective partners work with leadership teams and teacher leaders to:

  • Define what competency-based learning means in your local context, drawing on national frameworks while honoring community values.
  • Identify non-negotiables for grading and reporting (for example, separate competency scores, use of learning trends, and a clear distinction between practice and competency assessments).
  • Map how competencies, performance levels, and grading practices align across K–2, 3–5, 6–8, and 9–12, so each level is developmentally appropriate yet coherent.

This keeps the reporting system anchored in a clear, shared, research-informed vision rather than vendor defaults.

The focus is on designing a reporting structure that reflects how learning actually happens, not on retrofitting instruction to fit a preset system.

Supporting Vendor Selection and Configuration

Quality support is vendor-neutral and focused on fit, not sales. Strong guidance helps districts:

  • Ask the right questions of vendors, such as:
    • Can a single assessment be scored across multiple competencies?
    • Can multiple scoring scales be used at the task level and converted accurately?
    • How does the system handle the most recent evidence vs. simple averages?
  • Evaluate whether a product truly supports competency-based grading, or simply re-labels traditional practices.
  • Configure the chosen system so that categories, scales, and reports reflect best practice, not just “what the system can do easily.”

In a high-quality process, technology is shaped to align with the district’s instructional model and community values—not the other way around.

Building Teacher Capacity and Trust

No grading system works if teachers don’t understand it, trust it, and see its value for students.

High-quality support invests in teacher learning and collaboration by:

  • Providing professional learning on competency-based grading, evidence collection, and using learning trends instead of simple averaging.
  • Supporting teams as they design rubrics, scales, and exemplars that are usable in daily practice and aligned with research on effective assessment.
  • Developing sample gradebook views, communication guides, and parent-friendly explanations so the system feels transparent and supportive rather than mysterious or punitive.

This keeps the focus where it belongs: fair, accurate evidence of learning that teachers can use and families can understand.

Designing a Phased, Sustainable Implementation

Quality support helps districts avoid the “all-in-at-once” trap.

Instead, effective partners work with districts to:

  • Design phased rollouts—for example, starting with one level, one content area, or a single key shift (such as separating practice from competency assessments).
  • Support transition periods in which multiple scales (e.g., 4-point rubrics and 100-point scales) coexist with clear, research-aligned conversion rules.
  • Facilitate ongoing reflection and adjustment, using data and feedback from teachers, students, and families to refine the system over time.

The goal of high-quality support is a grading and reporting system that grows with the district, deepens understanding, and strengthens confidence over time—rather than a sudden flip that leaves everyone scrambling.

Final Thought: Don’t Let the Product Define Your Practice

When you’re selecting or redesigning a grading and reporting system, keep this at the center:

The system must grow with your district’s competency-based work—not the other way around.

If a product:

  • Can’t report separate competency scores from a single assessment,
  • Forces you onto a single rigid scale with no explicit conversion,
  • Demands uniform structures K–12 regardless of developmental needs,
  • Blurs practice and competency assessments, or
  • Reduces rich evidence into uninformative averages,

Then it is not the right system to support a fair, accurate, competency-based approach.

Start with your vision of good practice, grounded in what the research makes clear:

  • Grades should communicate current achievement on clear competencies, separate from behavior and effort.
  • Students need descriptive feedback and multiple opportunities to demonstrate learning.
  • The most recent and most consistent evidence is more accurate than a simple average.

Then select (or design) a system that honors and extends that practice rather than undermining it.

And if you want a partner in that design work—someone to help you hold onto best practice while navigating the complexity of systems, vendors, and change—NHLI is ready to walk alongside you.

In short:
Don’t let a product dictate what good practice is.
Let your understanding of learning, evidence, and fairness—grounded in research and lived classroom experience—dictate what the product must be able to do.


Ellen Hume-Howard

Executive Director

Ellen Hume-Howard has served since 2017 as Executive Director of the New Hampshire Learning Initiative (NHLI), a leading organization dedicated to advancing innovative, student-centered learning practices. A visionary leader and champion for competency-based education, Ellen has played a pivotal role in transforming instructional practices in New Hampshire and influencing educational reform nationwide. Under her guidance, NHLI has become a model for designing equitable and personalized learning systems that prepare students for future success. With a career spanning decades in public education, Ellen is known for her ability to empower educators, foster collaborative partnerships, and inspire systemic change that resonates far beyond state lines.

Categories: Competency-Based Education, Competency-Based Learning (CBL), Educational Best Practices, Grading and Reporting, Student-Centered Learning
