Knowledge is power, and even more so through Retention
What is it about?
The education system has always relied on tests to help students retain information. Retention does the same.
By generating assessments for the books you choose and using spaced repetition to structure your review sessions, Retention can help knowledge stick with you.
What I produced
• design system
• visual style guide
• MVP designs w/ upcoming features
• prototype
Duration
4 weeks
Background
Scoping out the project
The beginnings
I was brought on to expand my clients’ proof of concept into a mobile application. I met with the product owner and developer to align with their vision, learn about their business goals, and review the proof of concept. Getting to know what had and hadn’t been done gave me insight for my next steps, one of which was to conduct additional research into the quizzing market to understand how quiz platforms handle their material.
Competitive analysis
Looking into quiz platforms
What I discovered
While reviewing how competitors create and conduct their quizzes, I noticed a couple of common patterns:
High-quality quizzes are limited to common academic books, like classic literature.
Platforms cater to students, often limiting selections to academic titles.
User-created quizzes are inconsistent in quality.
A gap revealed
There really isn’t a platform with consistently high-quality quizzes for a diverse book selection. By building on its proof of concept and designing to ensure high-quality quizzes are generated, Retention can be the platform where users generate quizzes for any book they want to study further. A sentiment I found resonated with what I was seeing:
“The quizzes really helped me retain the... But now that I'm out of school I find myself missing a lot of smaller but important details in books. I know Sparknotes has multiple choice quizzes on more classical books, the ones you would be typically assigned in school, but I'm looking for a resource that I could use for ideally any book.”
- Forum poster seeking quiz platform recommendations
Planning
Drafting the IA and wireframes
Ensuring alignment
Throughout this stage, my goal was to align with the developer as early and as often as possible so there would be no surprises later on. We did in fact run into minor misalignments in this process, but everything was resolved after further discussion and debate.
Wireframe ideations
Once the information architecture was corrected and questions were answered, I used the IA as a framework for ideating core screens.
Visual iterations
Exploring visual direction
The process
I created a mood board of common, trending styles to explore the design languages of education and learning apps. After many iterations and tweaks, I landed on these three options and had the product owner select one to move forward with.
Exploring solutions
Study methods?
A thought sprung up
As I applied the visual style across the screens, I thought to myself: How might we enable users to feel motivated to return to the app and take more tests? During the competitive analysis, I had come across a concept called spaced repetition. I noted it for later research, but never got around to it until this moment.
What is spaced repetition?
Spaced repetition is related to the forgetting curve, in which we lose the ability to access information in our memory over time. To retain information, we use strategically spaced, short study sessions and active recall exercises, such as spaced study sessions, the Leitner system, and Anki.
Choosing spaced study sessions
After discussing the other retention methods with the developer, we determined that spaced study sessions would be best to develop first for the MVP, since we are looking to deploy and check product-market fit first.
How are spaced study sessions implemented in Retention?
When a user starts and completes a new test, the checkpoints flow is activated. Checkpoints add some gamification by requiring users to retest on day 3 and day 7. If the user misses a checkpoint deadline, their progress resets. When all checkpoints are completed, the mode deactivates, the test is marked as successfully finished, and the user earns a star. Stars are just a number right now, but they open opportunities for in-app currency and monetization in the future.
Users can retake tests whenever and as often as they want, but the checkpoints serve as reminders to encourage them to return and stay engaged.
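The checkpoint flow described above can be sketched roughly as follows. This is a minimal illustration under assumptions: days are tracked as integers since the first completion, and all type and function names here are hypothetical, not the actual implementation.

```typescript
// Hypothetical sketch of the checkpoints flow: retest on day 3 and day 7,
// reset progress on a missed deadline, award a star when all are completed.
type Checkpoint = { day: number; completed: boolean };

interface TestProgress {
  startDay: number;          // day the user first completed the test
  checkpoints: Checkpoint[]; // due on day 3 and day 7 after startDay
  stars: number;
}

function newProgress(startDay: number): TestProgress {
  return {
    startDay,
    checkpoints: [
      { day: 3, completed: false },
      { day: 7, completed: false },
    ],
    stars: 0,
  };
}

// Called when the user retakes the test on `today` (an integer day count).
function completeRetest(p: TestProgress, today: number): TestProgress {
  const elapsed = today - p.startDay;
  const next = p.checkpoints.find(c => !c.completed);
  if (!next) return p;                 // all checkpoints already done
  if (elapsed > next.day) {
    return newProgress(today);         // missed the deadline: progress resets
  }
  if (elapsed === next.day) {
    next.completed = true;             // hit the checkpoint on its day
    if (p.checkpoints.every(c => c.completed)) p.stars += 1; // earn a star
  }
  // Retakes before a checkpoint's day are allowed but don't advance it.
  return p;
}
```

Note that early retakes pass through without advancing a checkpoint, matching the idea that users can practice freely while the checkpoints themselves are fixed-day reminders.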
Potential issue to consider
It is recommended that intervals be shorter for more complex book topics. With topic complexity being subjective to one’s experience, we hypothesize that users may want custom intervals. After discussing with the developer, we decided to keep the intervals at a predetermined set for the MVP.
Visual design
Finishing touches
I organized and expanded on Retention’s existing visual identity.
A few high-fi mobile screens
User testing
A round of user testing
What did I want to find out?
I conducted a quick round of user testing to check two things:
1. Does the onboarding flow clearly communicate what Retention is and how it works?
2. Can users navigate successfully through the core path on Retention?
So, does the onboarding flow communicate clearly what Retention is?
For the most part, yes! There are a lot of notes about assisting students, though that may be because studying is closely coupled with education. Retention aims to appeal to more than just students.
This app creates quizzes to help you retain information on whatever you need to memorize.
knowledge retention
Assisting students in their learning goals in schools.
ai study app
help students with studying
An evaluation application/solution to measure some sort of progress.
This looks to be a educational tool that provides a platform for users to apply modern learning theories to structure their learning. I see spaced repetition mentioned and I can also see there being room for "desired difficulty" elements as the learner refines their knowledge.
Based on what I saw this app is about knowledge retention. You would be working on the app and reviewing the information multiple times to retain it.
Did users navigate successfully through the core path?
All testers made it to the end, but not all smoothly.
A mistake on my part
For the navigation task, I asked the testers to “create an assessment.” Looking at the heat map, I noticed a large number of clicks on both buttons here. Reading the written feedback, I understood what I had messed up.
The verbiage was not consistent with what was on the mobile screen, which said “Start test.” I caused confusion for some testers because they were looking to “create” something, not “start” something. The lesson here is to make sure my verbiage is 1:1 between what is being asked and what is being read.

Reflections
What I learned
Good things come out of debates and discussions
I’m glad that I got to work closely with a developer, because there were considerations I would not have made on my own, since that knowledge wasn’t on my radar. It also gave me a different perspective on what else I should consider in the future.
Here are a few:
Time zones can get funky
“What time zone are we setting the app to reset at?” A question I’ve never considered before.
“To 11:59PM their local time?” I replied.
The discussion then turned to how changing time zones can break the application, especially with time-sensitive features. With checkpoints being time-sensitive, it was important to ensure the user’s progress isn’t reset incorrectly on our end.
Long story short, we agreed on UTC as the app’s time zone and will clearly communicate to users when the application resets in relation to their local time.
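As a rough illustration of that decision (the midnight-UTC reset hour and the helper names are my assumptions, not the team’s actual implementation), the app can anchor the reset in UTC and simply render that moment in the user’s local time:

```typescript
// Hypothetical: the daily reset happens at 00:00 UTC; we compute that instant
// and let the platform's locale APIs display it in the user's local time zone.
function nextResetUtc(now: Date): Date {
  // Next midnight UTC after `now`.
  return new Date(Date.UTC(
    now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate() + 1, 0, 0, 0));
}

// Label for the UI, e.g. the reset appears as "7:00 PM" for a user at UTC-5.
function resetLabel(now: Date): string {
  return nextResetUtc(now).toLocaleTimeString(undefined, {
    hour: "numeric",
    minute: "2-digit",
  });
}
```

Because the stored instant is always UTC, a user who changes time zones only sees the label shift; their checkpoint deadlines stay the same moment in absolute time.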
Versioning: local vs. system
For context: a user can generate multiple tests for the same chapter. Behind the scenes, all tests generated for that chapter by other users are stored so the app can retrieve a “new” test.
Each test will have a version number so users can distinguish between the chapter’s tests. The question here is, should the version number be local to the user or to the system?
Although more complex to implement, I argued for local versioning. Users would not have the context of tests within the system, so displaying a system version would provide little insight. Local versioning can help users better track what they’ve been studying.
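To make the local-versus-system distinction concrete, here is a small sketch of local versioning under assumptions: the system identifies tests by a global `systemId`, while each user’s library numbers only the tests that user has pulled. The class and method names are hypothetical.

```typescript
// Hypothetical sketch: the system stores every generated test per chapter,
// but each user sees their own sequential version numbers (v1, v2, ...).
interface ChapterTest {
  systemId: number;  // global id across all users
  chapterId: string;
}

class UserLibrary {
  // chapterId -> systemIds this user has pulled, in the order received
  private seen = new Map<string, number[]>();

  // Assign the next local version when the user fetches a "new" test.
  addTest(t: ChapterTest): number {
    const list = this.seen.get(t.chapterId) ?? [];
    list.push(t.systemId);
    this.seen.set(t.chapterId, list);
    return list.length; // this user's local version number
  }

  // Look up the local version of a test the user already has, if any.
  localVersion(chapterId: string, systemId: number): number | undefined {
    const idx = this.seen.get(chapterId)?.indexOf(systemId);
    return idx === undefined || idx < 0 ? undefined : idx + 1;
  }
}
```

The extra complexity is the per-user mapping table; in exchange, two users who pull the same system test can each see it as their own “v1,” which keeps the numbering meaningful to their personal study history.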
Not all books are the same
Because Retention relies on books that have a numbered chapter index, the system would not work well for books with informal chapter structures. In fact, some books have very short chapters, leaving too little data to generate a quality chapter test.
We don’t have a solution yet, but we decided to start with a small sample out of the 40 million books.