
Designed an intelligent assessment tool that supports educators in making data-driven assessment decisions to enhance student learning.

EdTech SaaS Product + AI Web App + End-to-End UX

TIMELINE

JAN 2024 - AUG 2024

MISSION

PROVIDE ASSESSMENT SUGGESTIONS LEVERAGING AI

ROLE

DESIGN LEAD FOR CAPSTONE PROJECT

PROBLEM

How might we support instructors at the University of Tasmania in creating innovative assessment types that accommodate the advent of AI?

As generative AI becomes more prevalent, instructors at the University of Tasmania face challenges in designing assessments that evaluate authentic learning.


PRODUCT PREVIEW

USER INTERVIEW

In our initial interviews, we found that our users value assessing higher-order thinking and are excited about the possibilities of AI, but caution around the integrity of AI-driven assessment persists.

To better understand the problem, we developed user personas and journey maps of professors from two colleges at UTas—the College of Arts, Law, and Education, and the College of Health and Medicine—to gain insight into their goals and challenges.


USER PERSONA

USER JOURNEY MAP


COMPETITOR ANALYSIS

Existing tools help create assessments by generating questions like multiple choice or fill-in-the-blanks, but they mainly focus on assessing memorization. None target higher-order skills such as applying, analyzing, or creating, nor do they consider the role of AI in assessment design.


Research Insights

01

Instructors are excited about AI’s potential to enhance higher-order thinking but are concerned about maintaining the integrity and rigor of AI-assisted assessments.

02

Many instructors lack the time and AI literacy needed to confidently integrate AI into their assessments, pointing to a need for clear guidance and professional development.

03

Existing assessment platforms fail to support more complex skills like applying, analyzing, and creating. Instructors need a recommendation system that suggests diverse assessment types aligned with their learning objectives.

DESIGN

MVP 1

From our ideation process, we decided to move forward with the concept of an AI assessment-type generator. We drafted MVP1, a simple web-app chatbot connected to a customized GPT Assistant API.
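As a rough illustration of one chatbot turn in MVP1, the sketch below maps a learning objective to suggested assessment types by Bloom's-taxonomy level. The `suggest_assessment` helper and its lookup table are hypothetical stand-ins: in the real prototype, the reply came from the customized GPT Assistant API rather than a canned dictionary.

```python
# Hypothetical sketch of a single MVP1 chatbot turn. In the real app the
# reply was generated by a customized GPT Assistant API; here a small
# lookup table stands in so the flow can run offline.

BLOOM_SUGGESTIONS = {
    "remembering": ["timed quiz", "oral recall check"],
    "applying": ["case-study analysis", "simulated client brief"],
    "analyzing": ["data-interpretation report", "peer-review critique"],
    "creating": ["portfolio project", "AI-assisted design artifact"],
}

def suggest_assessment(learning_objective: str) -> str:
    """Return a chatbot-style reply suggesting assessment types
    aligned with the Bloom's level found in the objective text."""
    objective = learning_objective.lower()
    for level, ideas in BLOOM_SUGGESTIONS.items():
        if level in objective:
            options = " or ".join(ideas)
            return f"For an objective at the '{level}' level, consider: {options}."
    # Scaffolded prompting: guide the user toward the input we need,
    # echoing the testing insight about guided interaction.
    return "Could you phrase the objective with a Bloom's verb (e.g. 'analyzing')?"
```

For example, `suggest_assessment("Students practice analyzing clinical data")` would suggest a data-interpretation report or a peer-review critique, while an objective with no recognizable Bloom's verb triggers a scaffolding follow-up question instead.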


PILOT TESTING

We performed think-aloud usability testing of MVP1 with 6 professors and gathered valuable feedback on the experience of interacting with the chatbot.

Testing Insights


01

Users want a scaffolded, guided approach to prompting, with AssessMate suggesting what kind of information they should input.

02


Users want a feature that lets them customize the degree of AI usage in the assessment.

FINAL DESIGN

Based on the feedback we received from MVP1, our final design visualizes and optimizes the user experience, offering more guidance and scaffolding to help users reach their desired assessment outcomes.

COLORS


TEXT COLORS


LOGO & BUTTON


FONTS

DM Sans

ONBOARDING

To start, the user selects their goal for using AssessMate, then chooses between a "step-by-step" guided interaction and a free-chat interaction. This gives users the autonomy to decide whether they want a guided approach or free prompting.

REFLECTIONS.

communication

Communicating with teammates, with stakeholders and clients, and with developers: constant, smooth communication is the key to a great end product.

shitty first draft

Don't be afraid of a shitty first draft. The minimum viable product, our first baby, might not be perfect in features, but we can keep iterating and planning future steps.

small details matter

I realized how a minor change in a design decision can lead to a significant shift in user experience and satisfaction.

OTHER PROJECTS
