I was the primary designer for Acrobat Mobile on Android from 2014 to 2016. During this time I delivered designs for a number of key releases:
- Document Cloud (2015)
- Dropbox Integration (2016)
DOCUMENT CLOUD (2015)
Designed the framework with the following in mind:
- Adopting the new Tool Switcher model for business continuity
- Incorporating the new Material Design guidelines while maintaining Adobe’s brand
- Designing experiences for new features like Export PDF and Organize Pages
- Solving existing pain points, such as helping users find their documents faster
Design: From first exploration to final design
From the Play Store
More about this project on Behance
DROPBOX INTEGRATION (2016)
This was part of a partnership with Dropbox.
The user could open a PDF file from Dropbox, work with it in Acrobat, and return to Dropbox with the changes synced.
Shown below is the workflow for a first-time user going from Dropbox to Acrobat when another app is already set as the default PDF handler on their phone.
In the following workflow the user wants to add a second Dropbox account.
An additional file repository tab for Dropbox was included.
While this did not pose a challenge on tablets, fitting all the tabs within the phone’s screen width was a problem, especially because the tab labels were text, which also made localization difficult.
Here’s an exploration where I tried using icons as the tab labels, but that raised the issue of icon recognition. So I created a video in which a micro-interaction communicates the names of the icons to users the first time they arrive on Home.
However, we decided to retain text-only labels for the tabs in the final version.
My Role: Primary Interaction Designer, April 2014 to December 2016
Team: Collaborated with Kishore Kumar and the extended Adobe Design team.
Play Store: Acrobat Mobile
Read Next: Contextual Commenting for Acrobat Android
This was a side project inspired by the street art and murals of the Mission District in San Francisco. I returned from there thinking there must be a way for artists all over the world to collaborate digitally without being physically present.
Later this became my Adobe Kickbox project.
I came up with a workflow for creating, modifying, and viewing location-specific artwork using augmented reality (AR). Here, AR would be used to project layers of digital information onto the physical environment. I designed a mobile app, Olinda, to make the experience consumer-centric.
There were 3 steps:
- Create a canvas from the built environment
- Create the digital artwork on the canvas, and post it to the physical location
- View the artwork through an AR-enabled device
Screens from the workflow
Olinda app as a hub for social street art creators and consumers
When a concept doesn’t solve an existing problem, but envisions a future value, it’s challenging to tell that story in a simple way. I tried a number of different ways:
a) Olinda film
The goal was to show that the app could be used by anyone for making an experience really special. I scripted the story and the screenplay, gathered actors and scouted for locations and equipment, shot the film, directed the music and edited the film. With a little help from my friends!
b) Digital presence
I worked on two types of websites, presenting the concept to two types of users: the creative pro and the typical consumer.
c) Social Media Campaign
I also launched a Facebook Ad Campaign and gathered a few likes:
Overall it created quite a buzz internally, but the underlying technology was very complex to build in 2013. However, I was able to file a patent (AD01.3322US01).
Date of Project: May 2012-June 2014
My Role: Everything
This UI component, XPath Builder and Results, was part of the larger product Adobe FrameMaker 11. The idea was to combine the XPath Builder with the Results pod, so that users could continue working within the pod instead of shifting focus between different areas of the screen.
This is a complex workflow used by proficient users who work primarily with the keyboard. They build very complex queries and often run multiple queries at the same time.
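For readers unfamiliar with XPath, here is a minimal illustrative sketch of the kind of query these users build: selecting elements from a structured document by path and attribute. The document and element names are hypothetical, and Python's standard library (which supports a subset of XPath 1.0) stands in for FrameMaker's own engine.

```python
import xml.etree.ElementTree as ET

# A hypothetical structured document, loosely resembling the XML
# content a FrameMaker author might search. Element names are
# illustrative only.
doc = ET.fromstring("""
<book>
  <chapter id="ch1">
    <section>
      <title>Intro</title>
      <para role="note">First note</para>
    </section>
    <section>
      <title>Details</title>
      <para>Body text</para>
    </section>
  </chapter>
</book>
""")

# A typical query: find every <para> with role="note", anywhere
# in the tree. Results would populate the Results pod, letting the
# user jump to each match.
notes = doc.findall(".//para[@role='note']")
print([p.text for p in notes])  # ['First note']
```

Real queries in this workflow were far longer than this, which is why the designs below had to accommodate long query strings and lists of saved queries.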
During this milestone, multiple iterations were created, validated with stakeholders, and redesigned based on the feedback.
These show the different versions as I discovered nuances of the workflow with successive iterations and validations:
Users needed a list, and queries are long:
Queries are made first, then run on different scopes. Users often run multiple queries at a time. Users needed a list, and queries are long:
People generally work and read from top to bottom, so the workflow was matched to define the IA. Queries are made first, then run on different scopes. Users often run multiple queries at a time. Users needed a list, and queries are long:
This was the final design. I used the typical workflow to define the IA and the layout, so the eye moves naturally from query to results and can repeat this flow many times within a single session.
Visual specs were done keeping in mind the following:
- Responsiveness when the pod is resized larger or smaller
- Localization of strings
- Using Gestalt principles to present a clutter-free UI
Date of Project: Jan 2011-June 2012
My Role: Primary Interaction Designer