Gradebook

A white-label learning management system from Kaplan

Atom was created by Kaplan to give students, authors, teachers, and administrators a place for their educational materials. Drawing on our 80 years of learning experience, we combined institutional knowledge, user-centric methodologies, and a custom design system to create a learning management system for any brand to showcase its content.

Main accomplishments

  • Owned the educator and administrative experiences on Atom – all research, user flows, UI designs, and testing.

  • Contributed to and maintained our proprietary design system.

  • Launched Gradebook, a new reporting UI that was adopted organization-wide across 6+ standardized tests.

Project summary

  • Project Timeline: 3 months

  • Deliverables: Competitive Analysis, User Flows, High Fidelity Wireframes, Prototype, Usability Testing, Handoff

  • Tools: Jira, Sketch, InVision, Marvel, Principle, Abstract

  • Target Device: Responsive Web

What is Gradebook?

Gradebook is a snapshot of overall performance for a cross-section of users: a classroom, a group of students across different classrooms, or even a single student. Using Gradebook, teachers can view the overall performance of their students in whatever grouping they choose.

After Gradebook launched, it was adopted organization-wide across 6+ standardized tests, replacing Kaplan’s previous reporting solutions.

What’s the problem?

Prior to Gradebook, Kaplan’s educator reporting provided overall performance at a single level: the class. Overall performance was the average of the test and taxonomic-category scores on a single activity (or a single activity type) for the students in one class. Administrators and educators needed more. In Gradebook, we aimed to provide the ability to view overall performance at multiple levels of an organization – by test, by category, and by student.

Teachers could only view performance as a single aggregate. Before Gradebook, there was no way to see progress over time or to compare progress across test categories like math, science, or writing.

Exploring what good looks like

What are others doing?

We began our exploration by analyzing comparable learning management systems, evaluating the reporting features and data visualizations of products like Edulastic, Canvas, and Blackboard, as well as assessments like the ACT, to draw inspiration for Atom.

Edulastic - Overall performance at a glance

With Edulastic, teachers can view the scores and grades of a class on an assignment-by-assignment basis, seeing trends at both the individual and the class level. At a glance, teachers can get an overview of performance, while still having the ability to dive more deeply into a specific score.

Canvas - Create grade ranges

Canvas focuses on teacher customization of the grading schema, allowing users to define specific score ranges. Since Atom is a white-label product, it would be crucial to let users define their own standards.

Blackboard - Colors as a signifier

Blackboard provides color customization options for teachers who use color schemas in analog teaching experiences – like highlighting test scores in different colors or marking performance levels with Post-it notes.

ACT - Highly visual data analysis

The ACT places a heavy emphasis on data visualization, breaking each of its sections into graphs that give teachers a visual breakdown of every category on the test, complete with national standards and averages. This lets teachers who prefer visuals get their data at a glance as well.

Defining the structure

What features should we prioritize?

Based on our research, we compiled a list of candidate features for Atom. Given the needs of the business and our current customers, the features we most wanted to prioritize were:

  • Data visualization – we felt we could add value by providing an at-a-glance view of overall performance that guides teachers to insights about their classes.

  • Grades split by any user segment – lets teachers see a single student, a group of students in different classes, a group of students in the same class, or all students at once.

  • Customization – we wanted teachers to apply their standards to Atom, defining what good looks like for their students.

Moving through the product

With a set of features in place, I thought about how teachers would access the different views of grades for each of their classes and student populations:

Since Atom is designed for multiple use cases and flows, I also considered other pathways to Gradebook, including the pathway for a school administrator who has access to the grades in their district (for example, for statistical analysis):

Thinking through solutions

Sketching out ideas

With our list of prioritized features and agreed-upon user flows, I began to sketch out potential ideas for reporting views, criteria customization, and data visualization.

Our first brainstorm was around criteria customization. We considered iconography, colors, and letters to signify performance.

Later in the process, I presented my more fleshed-out ideas, working out finer details and obtaining buy-in for different explorations.

Visualizing student progress

One challenging aspect of designing Gradebook was accommodating clients with complex grading systems. Some clients had up to 15 different categories, or “taxonomies,” that they wanted to display for their users. To satisfy these requirements, we explored various data visualization models that could work for all possible clients.

What’s the best number?

With clients tracking up to 15 taxonomies, I explored various ways to capture all the data points they would want to see. We concluded that we could handle, at most, five taxonomies at once.

What’s the best model?

Once we determined the number of data points to display, I explored models for presenting five taxonomies at once. Since we wanted to show progress over time, we settled on a line graph.
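For illustration, the chart data might be shaped like the sketch below – one series per taxonomy, one point per test, capped at the five-series limit we settled on. The names are hypothetical, not Atom’s actual schema.

    // Hypothetical shape for the line graph data: one series per taxonomy,
    // one point per test administration, capped at five series at once.
    interface TaxonomySeries {
      taxonomy: string; // e.g. "Math" or "Science"
      points: { testDate: string; score: number }[]; // progress over time
    }

    const MAX_SERIES = 5; // the display cap we concluded was workable

    // Enforce the cap before handing the data to the chart.
    function limitSeries(series: TaxonomySeries[]): TaxonomySeries[] {
      return series.slice(0, MAX_SERIES);
    }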

What’s the best emphasis?

To call out individual taxonomies in a cluttered graph, I explored various hover states and ‘highlights’ so users could clearly see which taxonomy was which.

Viewing grades (Any Way You Want It)

While exploring data visualization ideas, we also considered the ways in which we could present grades in tables. Legacy customers and new clients alike expected grades to be viewable in data-rich tables.

Across student groupings

To improve on our existing products, we created sortable filters to show students across groups and classes via nested, collapsible rows.
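As a rough sketch of the underlying structure – hypothetical names and shapes, not Atom’s actual schema – the grouped rows might be modeled like this:

    // Hypothetical model for the nested, collapsible rows.
    interface StudentRow {
      studentId: string;
      name: string;
      scores: Record<string, number>; // testId -> score
    }

    interface GroupRow {
      groupId: string;
      label: string;      // a class, or a cross-class group of students
      collapsed: boolean; // drives the collapsible UI state
      students: StudentRow[];
    }

    // Sort each group's students by their score on one test, descending,
    // treating missing scores as lowest.
    function sortByTest(groups: GroupRow[], testId: string): GroupRow[] {
      return groups.map((group) => ({
        ...group,
        students: [...group.students].sort(
          (a, b) => (b.scores[testId] ?? -1) - (a.scores[testId] ?? -1)
        ),
      }));
    }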

All grades viewable

A large pain point of existing products was that teachers could view only the first and most recent tests. With Gradebook, we wanted to show all tests, across all students.

Progress, at a glance

To extend the data visualization into the table itself, I added arrows to indicate performance and colored cells to call out grades performing below standards.

Setting standards

For student performance, we wanted the ability to customize views so that teachers could set their own standards for Below, Meets, and Exceeds Expectations.

Performance criteria

To allow teachers to view their students’ grades against their own standards, I created a way to set the scores for each performance category, either by entering the range directly or via a slider corresponding to that category.
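To make the mechanism concrete, here is a minimal sketch of how such criteria might be modeled. The band boundaries are placeholders, not Atom’s real defaults.

    // Hypothetical model for teacher-defined performance criteria.
    interface PerformanceBand {
      label: "Below Expectations" | "Meets Expectations" | "Exceeds Expectations";
      min: number; // inclusive lower bound, in percent
      max: number; // inclusive upper bound, in percent
    }

    // Placeholder ranges; a teacher would set these via direct input or slider.
    const exampleBands: PerformanceBand[] = [
      { label: "Below Expectations", min: 0, max: 69 },
      { label: "Meets Expectations", min: 70, max: 89 },
      { label: "Exceeds Expectations", min: 90, max: 100 },
    ];

    // Classify a score against the teacher's bands.
    function classify(
      score: number,
      bands: PerformanceBand[]
    ): PerformanceBand | undefined {
      return bands.find((band) => score >= band.min && score <= band.max);
    }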

Putting it all together

Taking our ideas around data visualization, grades across time, and performance customization, I put together Gradebook.

Testing our ideas

Testing with teachers

To validate my designs, I tested an end-to-end flow in Gradebook with 12 teachers over the course of several weeks. I moderated all usability testing sessions via Zoom.

Viewing all grades enhanced perspectives

Being able to view all grades at once allowed teachers to make comparisons across each test on a student-by-student basis.

“The graph would give me a way to prioritize.”

“I like that the graph can quickly compare groups or classes against each other. If everyone across my classes is struggling with a topic, I might handle that different than if just one class is. It would make me think about whether it was me or just the class.”

“Comparing two content areas is valuable. So much of it integrates. You want to see how students are performing in comparable subjects.”

“Anything I have now, it doesn’t give you the visual data, and it doesn’t let you sort quite so easily. It doesn’t show things as well as this.”

Data visualization clarified the big picture

Teachers found the data visualization helpful for drilling down into specific content areas and comparing content areas to each other. At a glance, the graph allowed teachers to home in on areas for improvement.

“We have programs where students are required to get a certain overall score, but also have score requirements in each sub section of the test. This will really help them better understand where to focus their efforts, and where they’re looking to hold us accountable for making sure we’re providing solid instruction in the specific sub areas.”

“We currently have no way for anyone to see a student’s progress test over test. Especially in this format, where you can see the average and each student’s score. It’s very helpful.”

Overall performance underperformed

In the end, my version of overall performance fell flat. Teachers liked the colors, but they were confused by the arrows, unsure whether they represented trends, starting points, or aggregate performance.

“I use red / yellow / green myself. It’s a great way to mark things. We use those terms in classes. I think it’s intuitive.”

“The arrow is misleading if it is a ranking of the overall class performance. If it’s a benchmark, or a current performance status, that’s cool, but the arrow makes it feel like it should be a trend.”

“Now that I look at it, I don’t know what the arrows mean...I thought they were trending relative to itself. I like highlighting each individual score rather than the individual.”

“This trend is going to change all the time...it may be more confusing than it’s worth. You might think you’re doing poorly but you’re not.”

Customization could set new standards

Teachers responded well to the idea that they could customize the performance filters and set the standards for their students. Criteria customization would allow them to home in on specific areas and devise more targeted lesson plans.

“Teachers would have a lot more expected of them because we would have a lot more options to personalize. I start thinking about all of the things that we could do that we really don’t have a way to do now. When those students are asking ‘I don’t understand why I’m not getting better.’ And if we could nail down the fact that ‘hey, you’re improving on everything except THIS.’ That would be such a benefit to them. It would give them confidence in other areas, and we could target that with quiz banks, channel sessions, or strategic homework.”

Revising based on user feedback

I took the feedback from usability testing and revised the designs, focusing mostly on the overall performance issues while making small enhancements to the UI. The biggest change was removing the arrows, which proved extremely confusing, and replacing them with highlights for each score.

Designing for accessibility

Since the new design direction leaned heavily on color, it was especially important to consider alternatives that were accessible to everyone. I brainstormed several ways to make this table work for all users.

High contrast mode

I created this mode for people who have difficulty distinguishing between colors – such as telling red from green – including people who are color blind.
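One common way to implement a mode like this is to keep the table itself unchanged and swap a set of theme tokens. The sketch below is illustrative – the palette values are made up, not Atom’s actual colors.

    // Hypothetical theme tokens for the grade highlights. The high-contrast
    // palette avoids a red/green distinction and favors stronger luminance
    // contrast. All values are made up for illustration.
    interface HighlightPalette {
      below: string;
      meets: string;
      exceeds: string;
    }

    const standardPalette: HighlightPalette = {
      below: "#f28b82",   // soft red
      meets: "#fdd663",   // soft yellow
      exceeds: "#81c995", // soft green
    };

    const highContrastPalette: HighlightPalette = {
      below: "#0033cc",   // deep blue
      meets: "#666666",   // neutral gray
      exceeds: "#ffd900", // strong yellow
    };

    function paletteFor(highContrast: boolean): HighlightPalette {
      return highContrast ? highContrastPalette : standardPalette;
    }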

Dark mode

The intent behind this mode was to reduce eye strain, adjust to current lighting conditions, and using a screen in dark environments – for example, if a teacher was grading assignments late at night.

Disable criteria

For users who found the highlights too overwhelming, I also created a way to toggle the feature on and off. In the advanced criteria settings, users can also turn specific criteria on or off, which lets them customize their experience further – for example, by leaving on only the highlights that flag low grades.
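A minimal sketch of how those settings might look as data – the names are hypothetical:

    // Hypothetical per-user settings for the criteria highlights.
    interface CriteriaSettings {
      highlightsEnabled: boolean; // master on/off toggle
      bands: { below: boolean; meets: boolean; exceeds: boolean };
    }

    // Example: keep only the highlights that flag low grades.
    const lowGradesOnly: CriteriaSettings = {
      highlightsEnabled: true,
      bands: { below: true, meets: false, exceeds: false },
    };

    // Decide at render time whether a cell in a given band is highlighted.
    function shouldHighlight(
      band: keyof CriteriaSettings["bands"],
      settings: CriteriaSettings
    ): boolean {
      return settings.highlightsEnabled && settings.bands[band];
    }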

Delivering designs

Partnering with engineering

The benefit of using a common design system and prototyping in Marvel was that developer handoff was simple. Each wireframe was built from common components that already existed in multiple places within Atom, so developers were familiar with many of the pieces used to build Gradebook. For anything unfamiliar, Marvel provides the code, color, and position of any element on the screen. I also sat close to the developers, so we could talk through any issues.

To see an example of the options Marvel provided us, please watch the video below.