Talk • Future Shape of Tomorrow

Every other year, Adobe hosts an event for all its engineers across the globe to catch up on the latest happenings in technology and research, and to get inspired. In 2013, I gave this talk, titled “The Future Shape of Tomorrow”.

GOAL

The goal was to communicate future trends in technology to a primarily technical audience of non-designers. We focused on four trends:

  • Nano-data
  • Vanishing interfaces
  • Digital Nomads
  • Cloud of workflows

PRESENTATION

We created an imaginative and speculative vision to communicate how these trends might apply in the context of Adobe and its business.

We used a character-based narrative – set some time in the near future – about Zoe, a graphic designer and single parent. She manages the chores of her demanding life with the help of her hybrid object Allie, which she wears as a pendant. Rather than Zoe telling Allie what to do, the story shows how the intelligent assistant already senses what is required and sets commands in motion.

The experience goals were calm technology and beautiful seams between technology touchpoints. We proposed that product experience could be thought of as

“conversations between user and touchpoints along a user journey”

rather than as a set of interactions. Shown below are some of the visuals from the story.

[Visuals from the Zoe and Allie story]

PROJECT DATE: 2012-2013

TEAM: Jaydeep Dutta (Design Manager), Ranganath Krishnamani (Sr Designer), Sunandini Basu (Sr Designer), Mrinalini Sardar (Intern)

Patent • Filling Forms with AR/VR

CONTEXT

The problem of filling forms on a touch device is not new to any of us. When we started thinking about it, here are some of the questions we asked ourselves:

  • Where does the information that needs to be filled in come from?
  • How can we improve the input mechanisms, whether the input comes from the user or from a database?
  • What workarounds do people tend to make today?

This set of questions became one invention idea – P6195-US: Capturing media in context of a document.


At the other end of the spectrum, we asked a different set of questions:

  1. For some forms, the current model of filling is closely tied to the printed paper format. What if that constraint did not exist?
  2. Should navigation within the form follow the document structure or the content type?

These two questions became 6602: Contextually embedding AR/VR objects in a responsive form.

PROBLEM COMMON TO BOTH CASES:

We often need to insert an image into a document, and most of the time we break the flow of filling in the form: we leave the app to take the photo, enhance it, and then return. The workflow is cumbersome and the experience is broken. This problem can be solved in two ways:

6195: Capturing media in context of a document

  1. Here’s a screen with a form open. The user taps a field to enter capture mode, and the live feed of the camera is displayed within the document.
  2. All the camera controls are available, so enhancements can be made before the image is captured.
  3. It’s possible to take a video or a burst at this point and then select the best frame, all within the context of the document.
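
For a sense of how this might be built, here is a minimal iOS sketch that renders the live camera feed inside the form’s own image field using AVFoundation. It is an illustration under my own assumptions, not code from the filing; InlineCaptureField and its methods are hypothetical names.

```swift
import UIKit
import AVFoundation

/// Hypothetical form field that shows a live camera feed in place,
/// so the user never leaves the document to take a photo.
final class InlineCaptureField: UIView {
    private let session = AVCaptureSession()
    private let photoOutput = AVCapturePhotoOutput()
    private var previewLayer: AVCaptureVideoPreviewLayer?

    /// Called when the user taps the empty image field.
    func enterCaptureMode() throws {
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: camera)

        session.beginConfiguration()
        if session.canAddInput(input) { session.addInput(input) }
        if session.canAddOutput(photoOutput) { session.addOutput(photoOutput) }
        session.commitConfiguration()

        // Render the live feed inside the field's own bounds,
        // keeping the surrounding document visible around it.
        let preview = AVCaptureVideoPreviewLayer(session: session)
        preview.frame = bounds
        preview.videoGravity = .resizeAspectFill
        layer.addSublayer(preview)
        previewLayer = preview

        // In production this should run off the main thread.
        session.startRunning()
    }

    /// Capture a still; the delegate would place the result into the field.
    func capture(delegate: AVCapturePhotoCaptureDelegate) {
        photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: delegate)
    }
}
```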


6602: Contextually embedding AR/VR objects in a responsive form

This is an exploration of a different model of form filling: pivoting the experience to make it about the content, not the format.

  1. In this case the user needs to fill in a form about a physical object, e.g. a car.
  2. The form shows the relevant fields that need to be filled in.
  3. Once the AR experience begins, the user can see where the form fields sit in the context of the object.
  4. At the end of the process the information is in the required fields, and it is also linked to the object.
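
As a rough illustration of the mechanics, an ARKit-based sketch could let the user tap a spot on the object and pin the next unfilled field there as a floating label. This is my own speculative sketch, not the patent’s implementation; ARFormViewController and the field names are illustrative.

```swift
import UIKit
import ARKit
import SceneKit

/// Hypothetical controller that pins outstanding form fields
/// (e.g. "License plate") onto the physical object in AR.
final class ARFormViewController: UIViewController {
    let sceneView = ARSCNView()
    var pendingFields = ["License plate", "Odometer", "Damage description"]

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.session.run(ARWorldTrackingConfiguration())
        sceneView.addGestureRecognizer(
            UITapGestureRecognizer(target: self, action: #selector(handleTap)))
    }

    /// User taps a spot on the object: anchor the next unfilled field there.
    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: sceneView)
        guard let query = sceneView.raycastQuery(from: point,
                                                 allowing: .estimatedPlane,
                                                 alignment: .any),
              let hit = sceneView.session.raycast(query).first,
              !pendingFields.isEmpty else { return }

        // Place the field name as a small 3D label at the tapped location.
        let fieldName = pendingFields.removeFirst()
        let text = SCNText(string: fieldName, extrusionDepth: 0.1)
        text.font = UIFont.systemFont(ofSize: 8)
        let node = SCNNode(geometry: text)
        node.scale = SCNVector3(0.005, 0.005, 0.005)
        node.simdTransform = hit.worldTransform
        sceneView.scene.rootNode.addChildNode(node)
        // Next step (not shown): open the matching input field and store
        // the entered value together with this anchor's transform.
    }
}
```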

 

Video:

PROJECT DATE: June 2016

TEAM: Shilpi Aggarwal, Saurabh Gupta

Patent • Olinda: Create Location-Specific Artwork With AR

GOAL

This was a side project inspired by the street art and murals of the Mission District in San Francisco. I returned thinking there must be a way for artists all over the world to collaborate on such work digitally, without being physically present.

Later this became my Adobe Kickbox project.

SOLUTIONS

1. Concept

I came up with a workflow for creating, modifying, and viewing location-specific artwork using augmented reality (AR). Here, AR would be used to project layers of digital information onto the physical environment. I designed a mobile app called Olinda to make the workflow consumer-centric.

The workflow had three steps:

  1. Create a canvas from the built environment
  2. Create the digital artwork on the canvas, and post it to the physical location
  3. View the artwork through an AR-enabled device
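
For the viewing step, one could imagine a simple geo-anchored data model: each artwork carries the coordinates of its physical canvas, and the AR client fetches whatever is anchored near the viewer. A minimal sketch under that assumption follows; Artwork and nearbyArtworks are hypothetical names, not part of the patent.

```swift
import Foundation
import CoreLocation

/// Hypothetical record for a piece of location-specific artwork.
struct Artwork {
    let id: UUID
    let title: String
    let imageURL: URL          // the digital layers to project
    let anchor: CLLocation     // where the physical "canvas" lives
}

/// Return artworks anchored within `radius` meters of the viewer,
/// so the AR client knows what to render at this street corner.
func nearbyArtworks(in catalog: [Artwork],
                    around viewer: CLLocation,
                    radius: CLLocationDistance = 50) -> [Artwork] {
    catalog.filter { $0.anchor.distance(from: viewer) <= radius }
}
```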

Screens from the workflow

Olinda app as a hub for social street art creators and consumers

2. Communication

When a concept doesn’t solve an existing problem but envisions future value, it’s challenging to tell that story in a simple way. I tried a number of approaches:

a) Olinda film

The goal was to show that anyone could use the app to make an experience really special. I scripted the story and the screenplay, gathered actors, scouted locations and equipment, shot the film, directed the music, and edited it. With a little help from my friends!


b) Digital presence

I worked on two websites, presenting the concept to two types of users: the creative pro and the typical consumer.

www.getolinda.com | www.painttheworld.mobi

c) Social Media Campaign

I also launched a Facebook Ad Campaign and gathered a few likes:

[Screenshot of the Olinda Facebook ad campaign]

IMPACT

Overall it created quite a buzz internally, but the underlying technology was too complex to build in 2013. However, I was able to file a patent (AD01.3322US01).

PROJECT DATE: May 2012-June 2014

MY ROLE: Everything