Patent • Filling Forms with AR/VR


The problem of filling forms on a touch device is not new to any of us. When we started thinking about it, here are some of the questions we asked ourselves:

  • Where does the information that needs to be filled in come from?
  • How can we improve the input mechanisms, whether the input comes from the user or from a database?
  • What workarounds do people already tend to make?

This set of questions became one invention idea – P6195-US: Capturing media in context of a document.


At the other end of the spectrum, we asked a different set of questions:

  1. For some forms, the current model of filling is closely tied to the printed paper format. What if that format did not exist?
  2. Is navigation within the form tied to the document structure or to the content type?

These two questions became 6602: Contextually embedding AR/VR objects in a responsive form.


We often need to insert an image into a document, and most of the time we break the flow of filling in the form: we leave the app to take the photo, enhance it, and then return. The workflow is cumbersome and the experience is broken. This problem can be solved in two ways:

6195: Capturing media in context of a document

  1. Here’s a screen with a form open. The user taps the field to enter capture mode – the live feed of the camera is displayed within the document.
  2. All the camera controls are available, so enhancements can be made before the image is captured.
  3. It’s also possible to take a video or a burst at this point and then select the best image from it, all within the context of the document.


6602: Contextually embedding AR/VR objects in a responsive form

This is an exploration of a different model of form filling: pivoting the filling experience to make it about the content, not the format.

  1. In this case the user needs to fill in a form about a physical object, e.g. a car.
  2. The form shows the relevant fields that need to be filled in.
  3. Once the AR experience begins, the user can see where the form fields are in the context of the object.
  4. At the end of the process the information is in the required fields, and it is also linked to the object.
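One way to model the link between form fields and locations on the physical object is to give each field an anchor point in AR space. This is an illustrative data model only, with hypothetical names; it is not the patent's implementation.

```typescript
// Illustrative model: each form field is anchored to a point on the object.
interface Anchor {
  x: number; y: number; z: number; // position on the object in AR space
}

interface AnchoredField {
  name: string;
  anchor: Anchor;
  value?: string;
}

// Filling a field records the value while keeping it linked to its anchor.
function fillField(fields: AnchoredField[], name: string, value: string): AnchoredField[] {
  return fields.map(f => (f.name === name ? { ...f, value } : f));
}

// Hypothetical example: two fields anchored to points on a car.
const carForm: AnchoredField[] = [
  { name: "license plate", anchor: { x: 0, y: 0.5, z: 2.1 } },
  { name: "odometer", anchor: { x: 0.3, y: 1.0, z: 0.4 } },
];
```

Because the anchor travels with the field, the filled value stays linked to the object, matching step 4 above.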




TEAM: Shilpi Aggarwal, Saurabh Gupta

Patent • Olinda: Create Location-Specific Artwork With AR


This was a side project inspired by the street art and murals in San Francisco's Mission District. I returned from there thinking there must be a way for artists all over the world to collaborate digitally without being physically present.

Later this became my Adobe Kickbox project.


1. Concept

I came up with a workflow for creating, modifying, and viewing location-specific artwork using augmented reality (AR): layers of digital information projected onto the physical environment. A mobile app called Olinda was designed to make the experience consumer-centric.

There were 3 steps:

  1. Create a canvas from the built environment
  2. Create the digital artwork on the canvas, and post it to the physical location
  3. View the artwork through an AR-enabled device
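The three steps above can be sketched as a simple geolocated data flow. This is a conceptual sketch with hypothetical names and a naive degree-based distance check, not the actual Olinda design.

```typescript
// Conceptual sketch of the three-step Olinda workflow (hypothetical names).
interface GeoLocation { lat: number; lon: number; }

interface Canvas {
  id: string;
  location: GeoLocation; // derived from the built environment
}

interface Artwork {
  canvasId: string;
  layers: string[]; // digital layers projected onto the environment
}

// Step 1: create a canvas from a physical location.
function createCanvas(id: string, location: GeoLocation): Canvas {
  return { id, location };
}

// Step 2: post artwork to the canvas at that location.
function postArtwork(canvas: Canvas, layers: string[]): Artwork {
  return { canvasId: canvas.id, layers };
}

// Step 3: an AR-enabled viewer near the location retrieves the artwork.
// Naive distance check in degrees, for illustration only.
function viewNearby(artworks: Artwork[], canvases: Canvas[], here: GeoLocation, radiusDeg: number): Artwork[] {
  const near = new Set(
    canvases
      .filter(c => Math.hypot(c.location.lat - here.lat, c.location.lon - here.lon) <= radiusDeg)
      .map(c => c.id)
  );
  return artworks.filter(a => near.has(a.canvasId));
}
```

The key property is that artwork is bound to a place rather than a person: a viewer only sees pieces whose canvas is within range of their current location.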

Screens from the workflow

Olinda app as a hub for social street art creators and consumers

2. Communication

When a concept doesn’t solve an existing problem but envisions future value, it’s challenging to tell its story in a simple way. I tried a number of different approaches:

a) Olinda film

The goal was to show that anyone could use the app to make an experience really special. I scripted the story and the screenplay, gathered actors, scouted locations and equipment, shot the film, directed the music, and edited the film – with a little help from my friends!


b) Digital presence

I worked on two websites, presenting the concept to two types of users: the creative pro and the typical consumer.

c) Social Media Campaign

I also launched a Facebook ad campaign and gathered a few likes.



Overall it created quite a buzz internally, but the underlying technology was quite complex to build in 2013. However, I was able to file a patent (AD01.3322US01).

Date of Project: May 2012-June 2014

My Role: Everything