AI imaging SaaS platform for biotech (Cryo-EM) research

My Role
Founding Product Designer at $4M-funded AI BioTech Startup

As the principal designer on a deep-tech team, I collaborated closely with AI scientists to deliver a browser-based AI imaging SaaS platform for biotech (Cryo-EM) research within 3 months:

  1. Spearheaded the 0-to-1 product lifecycle and drove a rapid agile development process with engineers, taking the platform from concept to launch in 3 months.

  2. Streamlined complex multi-step workflows for 200+ structural biologists, enabling over 2M image processing runs with up to 40% fewer manual steps.

  3. Enabled the company to secure multi-million-dollar funding from MiraclePlus.


Type:
B2B, Web Design
Timeline:
Sep 2024 - Dec 2024
Responsibilities:
Define the problem and use cases;
Wireframe;
Build prototype for testing;
Iterate interaction;
Craft visual details.
Deliverables:
Interactive prototype;
High fidelity mocks;
Design files & doc;
Presentation.

200+ active users

2M+ tasks processed

A next-gen AI platform that streamlined cryo-EM workflows into a collaborative, task-centered system.

I. CONTEXT

Almost ALL Cryo-ET labs worldwide rely on IMOD for image processing. It remains the standard because of its obvious advantages: Accuracy, Maturity, and Stability.

IMOD was developed in the late 1990s by David Mastronarde’s team at the University of Colorado Boulder.

Why is IMOD so popular?

" IMOD is perhaps the best known and most used programs for tomographic reconstruction, with well over two hundred sites using it, and the reason for this popularity is because it offers several big advantages:

(a) it's free ....
(b) it works on any platform ....
(c) it's been around for ages
(d) it's designed specifically with 3D electron tomography in mind ...
(e) it handles the whole electron tomography process....
(f) it's quite large and versatile .... "

——— Andrew Noske (IMOD tutorial author and plugin developer)

But it comes with a TRADE-OFF. As Andrew bluntly put it:

“Once IMOD is installed you'll quickly realize it has a VERY STEEP learning curve and a bunch of features you’d probably never find unless someone points them out.”

In practice, researchers find IMOD TIME-CONSUMING.

II. PROBLEM

Why is it so time-consuming?

At first glance, two surface-level reasons stand out:
  1. The tool is outdated: its interaction model doesn’t match today’s perceptual expectations, which makes it hard to learn.
  2. In the scientific research context, usability is rarely prioritized; accuracy and engineering efficiency dominate.
Surface-level Reasons

Why does this matter?

We believe tools should co-evolve with their users. Cryo-ET researchers are pushing the boundaries of structural biology — there’s no reason they should be slowed down by outdated interfaces. And while engineering efficiency is critical, neglecting human efficiency lowers overall productivity.

III. CAUSES BELOW SURFACE

We visited the Cryo-ET lab at iHuman Research Institute for in-depth interviews and shadowing with target users.

Our outsider perspective became valuable. Instead of asking about pain points, I asked researchers to “walk me through your day/workflow.” This helped us objectively surface problem spaces they had normalized.
Interestingly, researchers often blame themselves instead of the tool — they adapt to its quirks rather than questioning the system. So when we asked directly, “What are your pain points?”, the answers were vague.
Challenges & Opportunities

Current Cryo-ET Workflow with IMOD

Key Findings:

We mapped IMOD’s role in the workflow and identified 4 key bottlenecks:
  1. Fragmented workflow: reliance on command-line tools and frequent context switching.
  2. Black-box processes: manual fiducial tracking and particle picking, dominated by trial-and-error.
  3. Low automation: large Cryo-ET datasets with long computations, errors discovered only post-run.
  4. Weak collaboration & reproducibility: scattered parameters/results and workflows limited to single users.

After identifying the key questions to target, instead of applying one universal design strategy to every problem, I adapted a distinct design methodology to each question:

IV. DESIGN IDEATION

Design Strategy:
  1. Wrapping command-line operations into guided UI interactions
  2. Shifting from window-based to task-centered navigation

Problem 1

Key Decision 1 — Migrating Design Logic for a Fragmented Workflow

Researchers had to juggle a fragmented and error-prone workflow due to:
  1. command-line tools
  2. multiple windows
  3. online tutorials

Approach: Case Studies

Instead of applying generic UX fixes, I examined how modernized scientific and creative platforms streamline complex workflows. Case studies included:


  • Weights & Biases
  • CoreWeave
  • RunwayML
  • AlphaFold Server (Google DeepMind)

What these platforms share: task-centered dashboards connecting multiple stages of ML workflows, low-level code operations transformed into intuitive visual actions, and technical complexity abstracted behind pre-configured pipelines. Their positioning says it plainly: “Puts machine learning in the hands of creators.” “Bridges experiment to insight.” “Turns complexity into clarity.”

Insights

Across these references, the key strategies are:
  1. Frame the entire platform around task flows rather than tools.
  2. Encapsulate command-line inputs into intuitive selectors, fill-ins, and buttons.

Wireframe Design (Partial)

[Wireframe highlights: a Task Index listing movies (Movie-1 … Movie-13) with Status and Quality columns plus Previous/Next/Export controls; a Parameters/Info panel exposing global_settings (acceleration_kv: 300, amplitude_contrast: 0.07, binning_factor: 2, p_size: 0.41, …), motion_correction_settings (patch: 5, save_as_float16: false, software: "MotionCor2", …), and ctf_estimation_settings (defocus step and min/max defocus and resolution); a per-movie Status/Analysis/Message view; a Task Center dashboard with filters and columns for ID, Task Name, Owner, Created Date, Progress, Status, and Options, plus a Create New Task entry point; and an Experiment Management flow stepping through Global, Motion Correction, and CTF Estimation settings such as Software, Raw Movies Directory, Pixel Size, Total Dose per Movie, Acceleration Voltage, Spherical Aberration, Binning Factor, and Note.]
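To make the “encapsulate command-line inputs” strategy concrete, below is a minimal sketch of how the Experiment Management form could serialize its fields into the settings object shown in the wireframe and compile a read-only command preview. The field names mirror the wireframe’s parameter panel; the MotionCor2 flag names follow its public documentation but are an assumption about how our backend invoked it, not a confirmed implementation.

```python
from dataclasses import dataclass, asdict

@dataclass
class MotionCorrectionSettings:
    """Values captured by the Motion Correction step of the Create New Task form."""
    software: str = "MotionCor2"
    patch: int = 5
    save_as_float16: bool = False

@dataclass
class GlobalSettings:
    """Values captured by the Global step of the form."""
    raw_movies_directory: str = "/data/raw_movies"  # hypothetical path for illustration
    p_size: float = 0.41           # pixel size, as shown in the wireframe
    binning_factor: int = 2
    acceleration_kv: int = 300
    amplitude_contrast: float = 0.07

def build_motion_correction_command(g: GlobalSettings, m: MotionCorrectionSettings,
                                     movie_path: str, out_path: str) -> list[str]:
    """Compile form values into a command-line invocation.

    Flag names are taken from MotionCor2's public documentation and are an
    assumption here; the production pipeline may differ.
    """
    return [
        m.software,
        "-InMrc", movie_path,
        "-OutMrc", out_path,
        "-Patch", str(m.patch), str(m.patch),
        "-PixSize", str(g.p_size),
        "-FtBin", str(g.binning_factor),
        "-Kv", str(g.acceleration_kv),
    ]

# What the UI could show as a read-only "command preview" before the task is queued,
# alongside the settings snapshot stored with the task (as in the wireframe).
cmd = build_motion_correction_command(GlobalSettings(), MotionCorrectionSettings(),
                                      "Movie-2.mrc", "Movie-2_corrected.mrc")
print(" ".join(cmd))
print(asdict(GlobalSettings()))
```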

Problem 2

Key Decision 2 — Redefining Human–AI Collaboration

Manual particle picking and fiducial tracking led to low automation. Errors were often detected only after long computation runs.

Approach: Adopt Emerging AI Tech

Instead of treating automation as a patch, I reframed it as a redesign of human–AI collaboration. I studied our lab’s newly developed automatic picking algorithm, DRACO (Cryo-EM foundation models for an automated pipeline, developed by the Cellverse AI team), understood its technical constraints and input–output patterns, and translated these into UI logic.

Goal & Implementation

Design a new AI-integrated interface that:
  • Communicates AI progress and confidence levels clearly
  • Allows users to fine-tune thresholds or re-run subsets
  • Reflects new modes of division between human judgment and machine precision

Design Strategy: Dual-view Design
  1. Confidence bars & color-coded overlays showing prediction certainty.
  2. Dual-view comparison mode showing human vs. AI-picked results.
  3. Inline feedback prompts for quick validation or override of AI results.

Comparison View Design: picking threshold adjustment in the AI Preliminary Results View, and comparison between pre-processed results and AI preliminary results (interaction logic sketched below).
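The sketch below illustrates the interaction logic behind the threshold control and the dual-view comparison, assuming AI picks arrive as (x, y, confidence) tuples; the data shapes, matching radius, and function names are illustrative assumptions rather than the production API.

```python
from math import dist

def filter_by_threshold(ai_picks: list[tuple[float, float, float]],
                        threshold: float) -> list[tuple[float, float, float]]:
    """Keep only AI picks whose confidence meets the slider threshold."""
    return [p for p in ai_picks if p[2] >= threshold]

def compare_picks(human: list[tuple[float, float]],
                  ai: list[tuple[float, float, float]],
                  radius: float = 10.0) -> dict[str, int]:
    """Summarize agreement for the dual view: an AI pick within `radius` pixels
    of a human pick counts as agreement; the rest drive the color-coded overlays."""
    matched = sum(1 for (x, y, _) in ai
                  if any(dist((x, y), h) <= radius for h in human))
    return {
        "ai_total": len(ai),
        "agreement": matched,
        "ai_only": len(ai) - matched,
        "human_only": sum(1 for h in human
                          if not any(dist(h, (x, y)) <= radius for (x, y, _) in ai)),
    }

# Example: the user drags the confidence slider from 0.5 to 0.8 and the
# comparison view recomputes instantly, without re-running the model.
ai_raw = [(120.0, 88.0, 0.93), (310.5, 42.0, 0.61), (75.0, 200.0, 0.97)]
human = [(119.0, 90.0), (74.0, 201.0)]
print(compare_picks(human, filter_by_threshold(ai_raw, 0.8), radius=10.0))
```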

Problem 3 & 4

Key Decision 3 — Translating User Needs into Transparent & Collaborative Design

- Black-box processes made users uncertain about system status.
- Collaboration was limited because parameters and results were scattered or siloed.

Approach: Semi-structured Interview

I conducted interviews and workflow tracing to identify concrete user needs:

  • Visibility into process and error sources
  • Versioning and reproducibility across experiments
  • Shared parameter sets and visualizations

Affinity Mapping

Design Strategy
  1. Defined how the new interface should look and function by clustering stakeholders’ descriptive needs into clear design to-dos.
  2. From the seminar, gathered client-user needs (structural biology researchers) and the development team’s technologies to be integrated, and clustered them into design modules.

Design Translation

Mapped these needs directly to UI modules (a data-level sketch of the snapshot idea follows):

  • Process transparency panels replacing opaque “progress bars”
  • Collaborative dashboards for shared experiment views
  • Parameter snapshots ensuring consistent reproducibility
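As a data-level sketch of what “parameter snapshots ensuring consistent reproducibility” could mean, the fragment below freezes a run’s full parameter set together with owner and timestamp and derives a short hash, so collaborators can confirm they are comparing identical configurations; the field names are assumptions for illustration, not the shipped schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_parameter_snapshot(task_name: str, owner: str, parameters: dict) -> dict:
    """Freeze a run's parameters into a shareable, reproducible snapshot.

    The short hash gives collaborators a quick way to confirm they are looking
    at the exact same configuration, addressing the scattered/siloed-results problem.
    """
    canonical = json.dumps(parameters, sort_keys=True)  # stable ordering for hashing
    return {
        "task_name": task_name,
        "owner": owner,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "parameters": parameters,
        "snapshot_id": hashlib.sha256(canonical.encode()).hexdigest()[:12],
    }

# Example: two researchers can diff snapshots or re-run with an identical setup.
snap = make_parameter_snapshot(
    "Movie-2 motion correction",
    "researcher_a",
    {"software": "MotionCor2", "patch": 5, "binning_factor": 2, "p_size": 0.41},
)
print(snap["snapshot_id"], snap["created_at"])
```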

V. INTEGRATE USER FLOW

From scattered points to a new workflow narrative

I proposed a new user flow that addresses key user needs by improving task efficiency and collaborative management, integrating login, user management, a dashboard task center, and dual Cryo-EM/ET pipeline processing.

User Flow Diagram:

VI. DESIGN & FAST PROTOTYPING

Delivered a next-generation, AI-integrated cryo-EM platform that transforms fragmented workflows into a collaborative, task-centered experience.

Efficient task management that adapts to user habits: frequently used key parameters stay visible, while low-value image details are deprioritized into a collapsible list to reduce cognitive load.

Seamless AI integration: in the noise-reduction workflow, front-end parameter tuning is simplified into a single bar control (sketched below), strengthening the user’s sense of control while enabling direct comparison with the original image.
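A minimal sketch of how the single bar control could map to the underlying denoising parameter; the linear mapping and the 0 to 1 strength range are assumptions chosen for illustration.

```python
def slider_to_denoise_strength(slider: int, min_strength: float = 0.0,
                               max_strength: float = 1.0) -> float:
    """Map a 0-100 slider position to a denoising strength.

    A single control hides the raw parameter space from the user while the
    original image stays on screen for side-by-side comparison.
    """
    slider = max(0, min(100, slider))  # clamp UI input to the slider's range
    return min_strength + (max_strength - min_strength) * slider / 100.0

print(slider_to_denoise_strength(35))  # -> 0.35
```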

User Management: easily bring lab partners into the loop and collaborate.

Status: enable timely human intervention.

Options: simplify traditionally complex editing.

VII. IMPACT & FEEDBACK

The platform received strong positive feedback from researchers at the iHuman Institute, who reported substantial gains in cryo-EM processing efficiency. It is now adopted by over 200 researchers, powering 2M+ processing runs, with validation testing ongoing and additional data forthcoming.

VIII. REFLECTION

  1. Immersing in an unfamiliar scientific domain revealed how disciplinary silos constrain innovation — design became the bridge translating complexity into clarity.

  2. Collaboration with AI researchers reframed design from a service layer to a cognitive framework that shapes how scientists think and interact with their tools.

  3. Co-developing shared language between data scientists and end-users showed that innovation emerges when empathy meets rigor — where human insight guides scientific precision.

Teamwork Timeline