Matterport Mobile

As the lead designer of mobile projects at Matterport, I helped launch 3D scanning capabilities on mobile devices and made the interface easier to use for a new target market of consumers and seasoned pros alike. My contributions to the app and the growth we gained allowed us to take the company public the following year.

Why build it

Over the 9 years since its inception, Matterport had pivoted from a hardware to SaaS business. The capabilities of mobile devices had also significantly improved with more processing power, camera resolution, and depth-sensing technologies (e.g. LiDAR). We wanted to provide a freemium version of Matterport that customers could use without purchasing a camera and allow them to use the devices that they already had in their pockets.

This would be a huge unlock that would allow new, consumer-oriented customers to try Matterport and also allow large enterprise customers to more readily distribute Matterport to their employees.


Initially, our highest priority was to design and build the device-based scanning capability into the app, and with that, some "how to scan" user education.

However, through user research and digging through the data in our early beta stages, we learned that friction in our onboarding flow was causing significant drop-off, and that misunderstandings of the model-creation process were leading to unfinished or poor-quality 3D models. So, this project had two main goals:

Get users to complete an initial scan

If we were successful in doing this, it would mean that the initial setup and scanning experience were intuitive enough for users to get going. Completing an initial scan also could help make Matterport really "click" for users.

Help users understand the process of creating a high quality 3D model

Not everyone understood the entire creation process after their initial scan. Many factors go into creating a high-quality model, and users needed guidance and education to be successful.


After launching an early beta release, we took time to dig into our quantitative data, and later, conduct user research.


Without a dedicated data-analysis team, I ran the queries in our analytics tooling myself. They showed significant drop-off during various phases of the registration flow. And to create a quality 3D model, users needed to scan a space from a minimum of two locations before uploading their scans for final processing - two steps that many new users weren't taking.


In our user testing, we found that most users did not have a problem with the actual process of scanning. But they did have trouble understanding what to do after their first scan and how to create a high-quality 3D model. At times, they didn't understand that a model was incomplete, that there were editing capabilities, or that they should scan at multiple locations in order to have a high-quality output.


We'd have to optimize the registration flow, educate the user about the end-to-end process of 3D model creation, and offer more guidance on what to do after an initial scan.

Onboarding and registration

A major drop-off point in our registration flow was a software installation that took place in the middle. It was a jarring experience that added up to 2 minutes before a user could actually begin scanning. The team felt strongly that it should remain in the app due to security concerns and to avoid significantly increasing the size of the app download.

I designed a loading animation to make the screen a bit more engaging, which did decrease drop-off somewhat, but in my opinion it was still a bad user experience. After I benchmarked comparable apps, the team gained enough confidence to remove the screen altogether, resulting in a significant increase in activations with no decrease in app downloads.

The loading animation I designed for an early release

Teaching users how to scan

For each scan point, we required the user to capture a series of photos while rotating in a full circle. Doing this well could be tricky: the user had to keep their device upright and rotate carefully, since uneven movement could cause misalignment between photos.

An earlier version of the viewfinder UI
In a later iteration, we made the textual guidance easier to read and made scan points easier to find with a "leash"

In these Origami prototypes, I explored different feedback options using the accelerometer

For teaching users about proper form and rotation, I prototyped various illustrations and animations in After Effects and Blender. We tested these internally with some success before recording a tutorial video which performed best. For our launch, we contracted an external company to recreate the video alongside some promotional material.

Updating scan controls for on-device users

The Matterport scanning UI was originally built only for use with externally connected cameras and needed to be updated to support on-device cameras. The UI included a minimap of the scanned space, controls for the camera, and editing tools for the scans. Each camera option, whether on-device or one of our many supported external cameras, had different settings and capabilities.

Through various signals in customer support, user testing, and internal feedback, we knew that the dense set of capabilities on this screen was confusing for users. So, while adding on-device scanning capabilities, I also updated this UI to make it easier for first-time users to understand, while making it more efficient and effective for veteran users.

The previous version of the UI had a lot of floating elements scattered around the screen that lacked clarity
Various iterations of the scanning UI where I attempted to visually anchor the controls to the bottom of the screen, keep them ergonomic, and create more obvious distinction between control types
Prototypes showing how a user might select various camera settings