Patent • Filling Forms with AR/VR

CONTEXT

The problem of filling forms on a touch device is not new to any of us. When we started thinking about it, these are some of the questions we asked ourselves:

  • Where does the information that needs to be filled in come from?
  • How can we improve the input mechanisms, whether the input comes from the user or from a database?
  • What workarounds do people tend to make today?

This set of questions became one invention idea: P6195-US, Capturing media in context of a document.

forms01.png

At the other end of the spectrum, we asked a different set of questions:

  1. For some forms, the current model of filling them in is closely tied to the printed paper format. What if that constraint did not exist?
  2. Should navigation within the form follow the document structure or the content type?

These two questions became 6602: Contextually embedding AR/VR objects in a responsive form.

THE PROBLEM COMMON TO BOTH CASES:

We often need to insert an image into a document, and most of the time we break the flow of filling in the form: we leave the app to take the photo, enhance it, and then return. The workflow is cumbersome and the experience is broken. This problem can be solved in two ways:

6195: Capturing media in context of a document

  1. Here’s a screen with a form open. The user taps on a field to enter capture mode, and the live feed of the camera is displayed within the document (a rough sketch of this flow follows below).
  2. All the camera controls are available, so enhancements can be made before the image is captured.
  3. It’s also possible to take a video or a burst at this point and then select the best frame, all within the context of the document.

Show and Tell - Soo.008
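
The patent describes the interaction, not an implementation. As a rough, browser-flavoured sketch of the idea only (the captureIntoField helper and the form-field element are hypothetical, not taken from the invention disclosure), tapping a field could swap in a live camera preview and write the captured frame back into that same field:

Sketch (TypeScript):

async function captureIntoField(field: HTMLElement): Promise<Blob> {
  // Show the live camera feed inside the form field, not in a separate app or screen.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement("video");
  video.srcObject = stream;
  video.muted = true;
  await video.play();
  field.replaceChildren(video);

  // Wait for the user's capture gesture (a tap on the preview).
  await new Promise<void>((resolve) =>
    video.addEventListener("click", () => resolve(), { once: true })
  );

  // Grab the current frame and release the camera.
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d")!.drawImage(video, 0, 0);
  stream.getTracks().forEach((t) => t.stop());

  // Keep the captured image inside the field and hand it back to the form data.
  const blob = await new Promise<Blob>((resolve) =>
    canvas.toBlob((b) => resolve(b!), "image/png")
  );
  const img = document.createElement("img");
  img.src = URL.createObjectURL(blob);
  field.replaceChildren(img);
  return blob;
}

The point of the sketch is simply that the camera lives inside the field, so the user never leaves the form.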

6602: Contextually embedding AR/VR objects in a responsive form

This is an exploration of a different model of form filling: pivoting the experience so that it is about the content, not the format.

  1. In this case the user needs to fill in a form about a physical object, e.g. a car.
  2. The form shows the relevant fields that need to be filled in.
  3. Once the AR experience begins, the user can see where the form fields sit in the context of the object.
  4. At the end of the process the information is in the required fields, and it is also linked to the object (see the data sketch after this list).
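
As with 6195, what follows is only an illustrative sketch, not the patented design. One hedged way to model "a field that is both in the form and linked to the object" is to give each field an anchor position on the recognised object; the field names, labels and coordinates below are invented for the example:

Sketch (TypeScript):

// Hypothetical model: a form field anchored to a point on the recognised
// physical object, so the value stays linked to the object.
interface AnchoredField {
  fieldId: string;                               // e.g. "licensePlate" on the car form
  label: string;
  anchor: { x: number; y: number; z: number };   // position on the recognised 3D object
  value?: string;
}

// Fields the AR session overlays on the object; filling one updates both
// the overlay and the underlying form record.
const carForm: AnchoredField[] = [
  { fieldId: "licensePlate", label: "License plate", anchor: { x: 0.0,  y: 0.4, z: -2.1 } },
  { fieldId: "vin",          label: "VIN",           anchor: { x: 0.6,  y: 0.9, z:  0.3 } },
  { fieldId: "damageNotes",  label: "Damage notes",  anchor: { x: -0.8, y: 0.7, z:  1.0 } },
];

function fillField(form: AnchoredField[], fieldId: string, value: string): void {
  const field = form.find((f) => f.fieldId === fieldId);
  if (!field) throw new Error(`Unknown field: ${fieldId}`);
  field.value = value; // the value lands in the required field and stays tied to its anchor
}

// Example: the user taps the plate in AR and dictates the value.
fillField(carForm, "licensePlate", "KA-01-AB-1234");

Filling a field in AR then updates one record that both the form view and the AR overlay read from.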

 

Video:

DATE OF PROJECT: June 2016

TEAM: Shilpi Aggarwal, Saurabh Gupta

XPath for Adobe FrameMaker 11

PROBLEM AREA

This UI component, XPath Builder and Results, was part of the larger product Adobe FrameMaker 11. The idea was to combine the XPath Builder with the Results pod, so that the user could continue working within a single pod instead of having to shift focus between different areas of the screen.

It was a complex workflow used by proficient users who work primarily from the keyboard. They build very complex queries and often run multiple queries at the same time.
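
For context on what the pod produces: a built query is an XPath expression that is run over a chosen scope and returns a list of matches. The hedged sketch below uses the standard DOM XPath API rather than FrameMaker's internals, and the sample XML is invented, purely to show the query-to-results shape the UI has to support:

Sketch (TypeScript):

const xml = `<chapter>
  <section id="s1"><title>Intro</title></section>
  <section id="s2"><title>Usage</title></section>
</chapter>`;
const scope = new DOMParser().parseFromString(xml, "application/xml");

// Run a built XPath query over the chosen scope and collect the matches
// as an ordered results list, the way the pod presents them.
function runQuery(doc: Document, xpath: string): Node[] {
  const result = doc.evaluate(xpath, doc, null, XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
  const matches: Node[] = [];
  for (let i = 0; i < result.snapshotLength; i++) {
    matches.push(result.snapshotItem(i)!);
  }
  return matches;
}

// A query of the kind a proficient user might build and re-run on several scopes.
const hits = runQuery(scope, "//section[title='Usage']");
console.log(hits.length); // 1

The design problem was not evaluation itself but letting users build, keep and re-run several such queries without leaving the pod.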

SOLUTION

During this milestone, multiple iterations were created, validated with stakeholders, and redesigned based on the feedback.

Wireframes:

These show the different versions as I discovered nuances of the workflow through successive iterations and validations; each caption lists the learnings accumulated up to that point:

Users need a list, and queries are long:

02builder

Queries are built first and then run on different scopes. Users often run multiple queries at a time. Users need a list, and queries are long:

wireframe4

People generally work and read from top to bottom, so the workflow was used to define the IA. Queries are built first and then run on different scopes. Users often run multiple queries at a time. Users need a list, and queries are long:

 

wireframe1

Final Design:

This was the final design. I used the typical workflow to define the IA and the layout, so that the eye moves naturally from query to results and the user can repeat this flow many times in a single session.

003a005a

Visual specs were done keeping in mind the following:

  • Responsive behaviour on increasing and decreasing the size of the pod
  • Localization of strings
  • Using Gestalt principles to present a clutter-free UI

XPath Builder Spec_FINAL1

Date of Project: Jan 2011-June 2012

My Role: Primary Interaction Designer