
IBM
Machine learning model fine-tuning platform.
Project Overview
The capstone project in the UCSC graduate program is a collaboration with IBM, focusing on user experience research and UI design for a machine learning model fine-tuning platform. It aims to create an intuitive, efficient platform that allows users, especially those without machine learning expertise, to fine-tune pre-trained foundation models for specific tasks.
My Role
UI designer (wireframing, prototyping, branding)
UX researcher (competitive analysis, user interviews, card sorting, surveys)
Duration
April 2023 ~ Present
Team
2 UI/UX Designers
2 Researchers
1 IBM Employee
Preliminary Knowledge
(1) What are foundation models?
Foundation models are pre-trained AI models that serve as the fundamental building blocks for various machine-learning tasks across different domains. These models are trained on vast and diverse datasets, learning to extract relevant patterns and features from the data. Once pre-trained, they can be fine-tuned on specific tasks, making them highly versatile and efficient for a wide range of applications, including image recognition, natural language processing, speech synthesis, and more.

(2) What is fine-tuning?
Fine-tuning is a process in machine learning where a pre-trained model is further trained on a smaller dataset specific to the target task. It adapts the model's knowledge to the new task, making it more accurate and efficient. It saves time and resources compared to training from scratch.
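To make the idea concrete, here is a minimal toy sketch of fine-tuning: a "pre-trained" one-dimensional linear model is further trained on a small task-specific dataset with gradient descent. All numbers and names are illustrative assumptions, not part of the platform.

```python
# Toy illustration of fine-tuning: a "pre-trained" linear model
# (y = w*x + b) is adapted to a small task-specific dataset.

def fine_tune(w, b, data, lr=0.05, epochs=500):
    """Adapt pre-trained parameters (w, b) to new data via gradient descent."""
    for _ in range(epochs):
        for x, y in data:
            pred = w * x + b
            err = pred - y          # gradient of 0.5 * (pred - y)**2
            w -= lr * err * x       # update weight
            b -= lr * err           # update bias
    return w, b

# Parameters "pre-trained" on a broad task: the model starts as y = x.
w0, b0 = 1.0, 0.0

# Small dataset for the new target task (which follows y = 2x + 1):
task_data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]

w, b = fine_tune(w0, b0, task_data)
# After fine-tuning, (w, b) should be close to (2, 1).
```

Starting from pre-trained weights means far fewer updates are needed than training from a random initialization, which is the resource saving the paragraph above describes.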

Design Challenge
How might we empower non-machine-learning users to fine-tune foundation models
by carefully designing the necessary fine-tuning steps?
User Research
(1) Participants (14 total)
IBM Employees (3 participants)
Research scientists who work directly with IBM, with varied levels of experience working with machine learning models.
UCSC NLP Students (2 participants)
Graduate students in the natural language processing field, representing the unique perspective of early-stage machine learning users.
ML PhD Students (2 participants)
Researchers with experience across various types of foundation models; they provide the perspective of more advanced machine learning users and can anticipate roadblocks for non-expert users.
Non-ML PhD Students (7 participants)
PhD students from various non-ML disciplines, including Developmental Psychology, Microbiology, Materials Science, Astronomy and Astrophysics, and History.
(2) Data Analysis
- Open Coding (first round) → Axial Coding (second round)
- First round: we reviewed audio transcripts of the interviews and developed discrete, interpretable codes from quotes of interest for use in the next round of coding.
- Second round: we sorted the codes into categories and subcategories to visualize the relationships among concepts and the salience of each.
Key findings and designs
(1) Fine-tuning foundation models is a complex process that demands careful consideration and expertise, involving details such as environment setup, model adaptation, computational resource allocation, and continuous monitoring.
→ In light of these complexities, a fine-tuning tool that preserves the key steps while automating certain processes can greatly streamline the workflow. We therefore distilled the fine-tuning process into six key steps to make the power of foundation models accessible to non-machine-learning users.

Select one of our pre-trained foundation models to get started.
Upload your training data for fine-tuning the selected foundation model.
Upload your validation data for evaluating the fine-tuning performance.
Select your computing resources to fine-tune the model.
Click the button to start the fine-tuning.
Select the metric to evaluate your fine-tuned model.
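The six steps above can be sketched as a single job configuration. The class name, model IDs, file paths, and metric names below are hypothetical placeholders for illustration, not the platform's real API.

```python
# Hypothetical sketch of the six-step workflow as one job configuration.
from dataclasses import dataclass

@dataclass
class FineTuneJob:
    model: str            # Step 1: chosen pre-trained foundation model
    training_data: str    # Step 2: path to the training data
    validation_data: str  # Step 3: path to the validation data
    compute: str          # Step 4: "local" or "cloud"
    metric: str           # Step 6: metric for evaluating the result

    def start(self) -> str:
        # Step 5: launch fine-tuning with the collected settings.
        return f"Fine-tuning {self.model} on {self.compute} compute..."

job = FineTuneJob(
    model="text/sentiment-base",       # placeholder model ID
    training_data="data/train.csv",
    validation_data="data/valid.csv",
    compute="cloud",
    metric="accuracy",
)
message = job.start()
```

Framing the workflow this way shows why the six steps are sufficient: together they specify everything a fine-tuning run needs before it can start.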
(2) Non-machine-learning users face barriers to effectively using foundation models; some participants said they were not even familiar with the term.

- We provide detailed, intuitive descriptions introducing the corresponding key step on each page.
- We use intuitive naming conventions for buttons and features, making them easier for users to understand.
(3) Users may not know what foundation models to choose for their specific tasks.

- We organize common foundation models into clear categories, such as audio, video, text, and image, so that users can easily identify the model they are looking for.
- We provide a ChatBot that offers model recommendations and helps answer users' questions throughout the fine-tuning process.
Low-Fidelity Prototype - Overview

- Step 1: Users select a foundation model from our organized model repository to get started.



- Step 2: Users upload their training dataset for fine-tuning from local files or a remote server, following the data format requirements.


- Step 3: Users then upload validation data for model validation. (Validation data is a dataset separate from the training data, used to tune hyperparameters and validate the model.)


- Step 4: Users choose between local computing resources and cloud computing according to their hardware and budget.

- Step 5: Users can double-check and adjust their selections from the previous steps before starting fine-tuning.

- Step 6: Users select a metric to evaluate the performance of their fine-tuned model.

Future Work
(1) Performance comparison before and after fine-tuning.
Add a page comparing the performance of the base foundation model (without fine-tuning) against the fine-tuned model, demonstrating whether fine-tuning yields a clear improvement.
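The comparison the page would surface is simple arithmetic; a small hypothetical helper (the function name and the scores are illustrative, not real results) could drive it:

```python
def improvement(base_score, tuned_score):
    """Relative improvement of the fine-tuned model over the base model."""
    return (tuned_score - base_score) / base_score

# e.g. a base accuracy of 0.62 versus a fine-tuned accuracy of 0.81
# (both values are made up for illustration):
gain = improvement(0.62, 0.81)
print(f"Relative improvement: {gain:.0%}")
```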
(2) Training time estimation.
On the computational resource selection page (Step 4), we can show the GPU options available to users along with the estimated time for the fine-tuning process. In this way, users can decide whether the schedule works for them.
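A back-of-the-envelope version of that estimate might look like the following; the throughput figure and sample counts are illustrative assumptions, not measured values.

```python
def estimate_hours(num_samples, epochs, samples_per_gpu_hour, num_gpus):
    """Estimate wall-clock hours, assuming near-linear scaling across GPUs."""
    total_samples = num_samples * epochs
    return total_samples / (samples_per_gpu_hour * num_gpus)

# e.g. 10,000 training samples, 3 epochs, 5,000 samples per GPU-hour:
for gpus in (1, 2, 4):
    hours = estimate_hours(10_000, 3, 5_000, gpus)
    print(f"{gpus} GPU(s): ~{hours:.1f} h")
```

Showing this per GPU option lets users trade cost against turnaround time before committing resources.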
(3) Task management and team collaboration.
We can build a task management page where users can view the progress of all their fine-tuning tasks. Each task can involve multiple collaborators, and each team member's operations on a task are logged and clearly presented.