
Designed an intelligent assessment tool that supports educators in making data-driven assessment decisions to enhance student learning.
EdTech SaaS Product + AI Web App + End-to-End UX
TIMELINE
JAN 2024 - AUG 2024
MISSION
PROVIDE ASSESSMENT SUGGESTIONS LEVERAGING AI
ROLE
DESIGN LEAD FOR CAPSTONE PROJECT
PROBLEM
How might we help instructors at the University of Tasmania create innovative types of assessments that accommodate the advent of AI?
As generative AI becomes more prevalent, instructors at the University of Tasmania face challenges in designing assessments that evaluate authentic learning.

PRODUCT PREVIEW

USER INTERVIEW
In our initial interviews, we found that our users value assessing higher-order thinking and are excited about the possibilities of AI, but remain cautious about the integrity of AI-driven assessment.
To better understand the problem, we developed user personas and journey maps of professors from two colleges at UTas—the College of Arts, Law, and Education, and the College of Health and Medicine—to gain insight into their goals and challenges.
USER PERSONA
USER JOURNEY MAP


COMPETITOR ANALYSIS
Existing tools help create assessments by generating questions like multiple choice or fill-in-the-blanks, but they mainly focus on assessing memorization. None target higher-order skills such as applying, analyzing, or creating, or consider the role of AI in assessment design.
Research Insights
01
Instructors are excited about AI’s potential to enhance higher-order thinking but are concerned about maintaining the integrity and rigor of AI-assisted assessments.
02
Many instructors lack the time and AI literacy needed to confidently integrate AI into their assessments; there is a clear need for guidance and professional development.
03
Existing assessment platforms fail to support more complex skills like applying, analyzing, and creating. Instructors need a recommendation system that suggests diverse assessment types aligned with their learning objectives.
DESIGN
MVP 1
From our ideation process, we decided to move forward with the concept of an AI assessment-type generator. We drafted MVP1, a simple web-app chatbot connected to a customized GPT Assistant API.

PILOT TESTING
We conducted think-aloud usability testing of MVP1 with 6 professors and gathered valuable feedback on the experience of interacting with the chatbot.


Testing Insights
01
Users want a scaffolded, guided approach to prompting, with AssessMate suggesting what information they should input.
02
Users want a feature that lets them customize the degree of AI usage in the assessment.
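The two insights above suggest a simple prompt-assembly flow: the tool walks the instructor through a fixed set of inputs rather than offering a blank chat box, and attaches an explicit AI-usage policy to the request. The sketch below illustrates that idea only; every name in it (`SCAFFOLD_STEPS`, `AI_USAGE_LEVELS`, `build_assessment_prompt`) is hypothetical and not part of AssessMate's actual implementation.

```python
# Illustrative sketch only -- all names here are hypothetical, not AssessMate's code.

# Insight 01: a scaffolded sequence of inputs the tool asks for, in order.
SCAFFOLD_STEPS = [
    ("learning_objective", "What learning objective should the assessment target?"),
    ("discipline", "Which discipline or unit is this for?"),
    ("skill_level", "Which higher-order skill? (e.g. applying, analyzing, creating)"),
]

# Insight 02: let instructors choose how much AI involvement the assessment allows.
AI_USAGE_LEVELS = {
    "none": "Students may not use generative AI at all.",
    "assistive": "Students may use AI for brainstorming, but not for final answers.",
    "integrated": "Students are expected to use AI and critique its output.",
}

def build_assessment_prompt(answers: dict, ai_usage: str) -> str:
    """Assemble a structured prompt from the instructor's scaffolded answers."""
    missing = [key for key, _ in SCAFFOLD_STEPS if key not in answers]
    if missing:
        # Guided prompting: tell the user exactly what is still needed.
        raise ValueError("Please provide: " + ", ".join(missing))
    lines = [
        f"{key.replace('_', ' ').title()}: {answers[key]}"
        for key, _ in SCAFFOLD_STEPS
    ]
    lines.append(f"AI usage policy: {AI_USAGE_LEVELS[ai_usage]}")
    lines.append("Suggest three assessment types matching the above.")
    return "\n".join(lines)

prompt = build_assessment_prompt(
    {
        "learning_objective": "evaluate clinical evidence",
        "discipline": "Health and Medicine",
        "skill_level": "analyzing",
    },
    ai_usage="assistive",
)
print(prompt)
```

The point of the sketch is the design choice, not the code: constraining input to named fields gives users the scaffolding they asked for, and making the AI-usage level an explicit, selectable parameter keeps the integrity question visible in every generated suggestion.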
FINAL DESIGN
Based on the feedback we received on MVP1, our final design visualizes and optimizes the user experience, offering more guidance and scaffolding for users to reach their desired assessment outcomes.
COLORS


TEXT COLORS

LOGO & BUTTON


FONTS

communication
Communicating with teammates, with stakeholders and clients, and with developers: constant, smooth communication is the key to a great end product.
shitty first draft
Don't be afraid of the shitty first draft. The minimum viable product, our first baby, might not have perfect features, but we can keep iterating and plan future steps.
small details matter
I realized how a minor change in a design decision can lead to a significant shift in user experience and satisfaction.