Patents by Inventor Anand Agarawala

Anand Agarawala has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190354542
    Abstract: A system and method for generating activity summaries for users. User activity information is received by a user activity information receiver module. Once enough data has been received and processed, it is analyzed and segmented to determine and create an activity summary, or story. Content is then selected; the selection includes content items such as multimedia items, e.g., pictures and videos. Secondary information, such as user activity information or location information, is analyzed. A story that includes the selected content is generated and represented by a display of the selected media and other information associated with the media.
    Type: Application
    Filed: August 5, 2019
    Publication date: November 21, 2019
    Applicant: Google LLC
    Inventors: Joseph Robert Smarr, Anand Agarawala, Brett Rolston Lider, Benjamin David Eidelson
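The pipeline this abstract describes (receive activity data, segment it, select media, generate a story) can be sketched roughly as follows. All class and function names here are hypothetical illustrations, not taken from the patent, and segmenting by location stands in for the patent's analysis-and-segmentation step.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    kind: str        # e.g. "photo" or "video"
    location: str    # secondary information used for segmentation
    timestamp: float

@dataclass
class Story:
    title: str
    items: list

def generate_story(activity: list) -> Story:
    """Segment received activity data and build a story from selected media."""
    # Segment the activity by location.
    by_location = {}
    for item in activity:
        by_location.setdefault(item.location, []).append(item)
    # Select content from the busiest segment and order it chronologically.
    location, items = max(by_location.items(), key=lambda kv: len(kv[1]))
    return Story(title=f"Your day in {location}",
                 items=sorted(items, key=lambda i: i.timestamp))
```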
  • Publication number: 20190310757
    Abstract: Disclosed herein are system, method, and computer program product embodiments for providing a local scene recreation of an augmented reality meeting space to a mobile device, laptop computer, or other computing device. By decoupling the augmented reality meeting space from virtual reality headsets, the user base expands to include users who could not otherwise participate in the collaborative augmented reality meeting spaces. Users participating on mobile devices and laptops may choose among multiple modes of interaction, including an auto-switch view and manual views, as well as interact with the augmented reality meeting space by installing an augmented reality toolkit. Users may deploy and interact with various forms of avatars representing other users in the augmented reality meeting space.
    Type: Application
    Filed: April 3, 2019
    Publication date: October 10, 2019
    Inventors: Jinha Lee, Anand Agarawala, Peter Ng, Tyler Hatch
  • Publication number: 20190310758
    Abstract: Disclosed herein are system, method, and computer program product embodiments for displaying a three-dimensional representation of a media source in an augmented reality meeting space. The augmented reality meeting space receives structured data from the media source, for example, via an RSS feed, and translates the structured data into a three-dimensional representation using an application adapter. The application adapter can be enhanced by including additional information about the structure of the data that is specific to the media source. Users can view, manipulate, and otherwise interact with the three-dimensional representation within a shared, collaborative, augmented reality meeting space.
    Type: Application
    Filed: April 3, 2019
    Publication date: October 10, 2019
    Inventors: Anand Agarawala, Jinha Lee, Peter Ng, Mischa Fierer, Elliot Pjecha
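The "application adapter" idea above, translating structured feed data into a three-dimensional representation, can be sketched as a simple mapping function. The class, field names, and layout constants below are hypothetical illustrations, not the patent's actual interface.

```python
from dataclasses import dataclass

@dataclass
class Panel3D:
    text: str
    position: tuple  # (x, y, z) placement within the meeting space

def rss_adapter(entries: list) -> list:
    """Translate structured feed entries into 3-D panels, one per entry."""
    panels = []
    for i, entry in enumerate(entries):
        # Source-specific knowledge: take each entry's title and lay the
        # panels out left to right at roughly eye height, 2 m in front.
        panels.append(Panel3D(text=entry["title"],
                              position=(i * 1.5, 1.6, -2.0)))
    return panels
```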
  • Publication number: 20190313059
    Abstract: Disclosed herein are system, method, and computer program product embodiments for generating collaborative AR workspaces. An embodiment operates by identifying a first user that is participating in an augmented reality (AR) meeting space from a first location. A second user participating in the AR meeting space from a second location is identified. A selection of a room configuration for the AR meeting space based on at least one of the first location or the second location is received. A digital canvas is configured in the AR meeting space for at least one of the first user or the second user based on the selected room configuration, wherein a size or shape of the digital canvas is adjusted based on either a first wall or a second wall corresponding to the selected room configuration.
    Type: Application
    Filed: April 3, 2019
    Publication date: October 10, 2019
    Applicant: Spatial Systems Inc.
    Inventors: Anand Agarawala, Jinha Lee, Peter Ng, Roman Revzin, Waldo Bronchart, Donghyeon Kim
  • Publication number: 20190310761
    Abstract: Disclosed herein are system, method, and computer program product embodiments for saving and loading workspaces in augmented reality (AR) environments. An embodiment operates by receiving a selection of an AR meeting space to open in a current physical location, wherein the AR meeting space was previously configured for a remote physical location different from the current physical location. An arrangement of one or more digital objects of the selected AR meeting space is determined. A current anchor area within the current physical location that corresponds to a remote anchor area of the remote physical location is identified. The arrangement of the one or more digital objects of the AR meeting space is modified in the current physical location based on an alignment of the current anchor area with the remote anchor area.
    Type: Application
    Filed: April 30, 2019
    Publication date: October 10, 2019
    Applicant: Spatial Systems Inc.
    Inventors: Anand Agarawala, Jinha Lee, Peter Ng, Roman Revzin
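The anchor-area alignment described in this abstract amounts to re-expressing saved object positions relative to a new anchor. A minimal sketch, assuming positions and anchors are simple (x, y, z) tuples and alignment reduces to a translation (the patent's alignment may also involve rotation and scale):

```python
def realign(objects, remote_anchor, current_anchor):
    """Shift saved digital-object positions so an arrangement configured
    around the remote anchor area lines up with the current anchor area."""
    # Offset between the two anchor areas, component by component.
    offset = tuple(c - r for c, r in zip(current_anchor, remote_anchor))
    # Apply the same offset to every object's saved position.
    return [tuple(p + o for p, o in zip(pos, offset)) for pos in objects]
```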
  • Patent number: 10372735
    Abstract: A system and method for generating activity summaries for users. User activity information is received by a user activity information receiver module. Once enough data has been received and processed, it is analyzed and segmented to determine and create an activity summary, or story. Content is then selected; the selection includes content items such as multimedia items, e.g., pictures and videos. Secondary information, such as user activity information or location information, is analyzed. A story that includes the selected content is generated and represented by a display of the selected media and other information associated with the media.
    Type: Grant
    Filed: May 20, 2015
    Date of Patent: August 6, 2019
    Assignee: Google LLC
    Inventors: Joseph Robert Smarr, Anand Agarawala, Brett Rolston Lider, Benjamin David Eidelson
  • Patent number: 10025450
    Abstract: A system and method for generating activity summaries for users of a social network server is disclosed. User activity information is received by a user activity information receiver module. The user activity information is then categorized by a categorization module, which, in some implementations, also groups the categorized user activity information in accordance with commonalities identified among the user activity information. The categorized user activity information is ranked by the ranking module according to relevance to the user or to the user's contacts. An output generation module determines when the groupings are complete. Activity summaries are then generated by the output generation module. The activity summary includes the categorized user activity information and is sent for display on a user device of a user.
    Type: Grant
    Filed: April 5, 2013
    Date of Patent: July 17, 2018
    Assignee: Google LLC
    Inventors: Brett Rolston Lider, Joseph Robert Smarr, David Glazer, Kenneth Norton, Anand Agarawala
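The categorize/group/rank flow in this abstract can be sketched as below. The record fields and the relevance score (total interaction count) are hypothetical stand-ins; the patent's ranking module may use any relevance signal.

```python
def summarize(activities):
    """Categorize activity records, group them by category, and rank the
    groups by relevance (approximated here by total interaction count)."""
    groups = {}
    for record in activities:
        groups.setdefault(record["category"], []).append(record)
    # Rank groups from most to least relevant.
    ranked = sorted(groups.items(),
                    key=lambda kv: -sum(r["interactions"] for r in kv[1]))
    return [{"category": c, "items": items} for c, items in ranked]
```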
  • Publication number: 20170308249
    Abstract: A system and method for generating and providing user interfaces for interacting with a stream of content are disclosed. A system having one or more processors and a memory is configured to perform operations including receiving a stream of content including one or more content items; selecting a content item; determining a tile type for providing the content item based upon an attribute of the content item; populating tile components for the tile type using the content item; organizing content tiles in a dynamic grid using the attribute of the content items; and providing the dynamic grid of content tiles for display.
    Type: Application
    Filed: May 16, 2017
    Publication date: October 26, 2017
    Inventors: Frank Petterson, Brian Laird, Chikezie Ejiasi, Anand Agarawala, Leslie Ikemoto, Daniel Burka, Karl Channell
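The tile-selection and grid-building steps in this abstract can be sketched as two small functions. The attribute names, tile types, and fixed-column layout are illustrative assumptions; the patent's "dynamic grid" likely sizes tiles by attribute rather than using fixed columns.

```python
def tile_type_for(item):
    """Choose a tile type from a content item's attributes."""
    if item.get("video_url"):
        return "video_tile"
    if item.get("image_url"):
        return "image_tile"
    return "text_tile"

def build_grid(stream, columns=3):
    """Arrange tiles row by row into a simple fixed-column grid."""
    tiles = [tile_type_for(item) for item in stream]
    return [tiles[i:i + columns] for i in range(0, len(tiles), columns)]
```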
  • Patent number: 9798517
    Abstract: Embodiments may relate to intuitive user-interface features for a head-mountable device (HMD), in the context of a hybrid human and computer-automated response system. An illustrative method may involve a head-mountable device (HMD) that comprises a touchpad: (a) sending a speech-segment message to a hybrid response system, wherein the speech-segment message is indicative of a speech segment that is detected in audio data captured at the HMD, and wherein the speech segment is associated with a first user-account with the hybrid response system, (b) receiving a response message that includes a response to the speech-segment message and an indication of a next action corresponding to the response to the speech-segment message, (c) displaying a screen interface that includes an indication of the response, and (d) while displaying the response, detecting a singular touch gesture and responsively initiating the next action.
    Type: Grant
    Filed: January 27, 2017
    Date of Patent: October 24, 2017
    Assignee: X Development LLC
    Inventors: Chun Yat Frank Li, Daniel Rodriguez Magana, Thiago Teixeira, Charles Chen, Anand Agarawala
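The request/response/gesture flow in this abstract can be sketched as below, with the hybrid response system stubbed in as a callable. Function names and the message shape are hypothetical, not from the patent.

```python
def handle_speech_segment(segment, hybrid_respond):
    """Send a detected speech segment to a hybrid response system (supplied
    as a callable) and return the HMD's screen state: the response to show
    plus the next action pending a touch gesture."""
    reply = hybrid_respond(segment)
    return {"display": reply["response"], "pending": reply["next_action"]}

def on_touch_gesture(screen):
    """A single touchpad gesture while the response is shown initiates
    the pending next action."""
    return screen["pending"]
```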
  • Patent number: 9778819
    Abstract: A system and method for generating and providing user interfaces for interacting with a stream of content are disclosed. A system having one or more processors and a memory is configured to perform operations including receiving a stream of content including one or more content items; selecting a content item; determining a tile type for providing the content item based upon an attribute of the content item; populating tile components for the tile type using the content item; organizing content tiles in a dynamic grid using the attribute of the content items; and providing the dynamic grid of content tiles for display.
    Type: Grant
    Filed: August 14, 2013
    Date of Patent: October 3, 2017
    Assignee: Google Inc.
    Inventors: Frank Petterson, Brian Laird, Chikezie Ejiasi, Anand Agarawala, Leslie Ikemoto, Daniel Burka, Karl Channell
  • Publication number: 20170139672
    Abstract: Embodiments may relate to intuitive user-interface features for a head-mountable device (HMD), in the context of a hybrid human and computer-automated response system. An illustrative method may involve a head-mountable device (HMD) that comprises a touchpad: (a) sending a speech-segment message to a hybrid response system, wherein the speech-segment message is indicative of a speech segment that is detected in audio data captured at the HMD, and wherein the speech segment is associated with a first user-account with the hybrid response system, (b) receiving a response message that includes a response to the speech-segment message and an indication of a next action corresponding to the response to the speech-segment message, (c) displaying a screen interface that includes an indication of the response, and (d) while displaying the response, detecting a singular touch gesture and responsively initiating the next action.
    Type: Application
    Filed: January 27, 2017
    Publication date: May 18, 2017
    Inventors: Chun Yat Frank Li, Daniel Rodriguez Magana, Thiago Teixeira, Charles Chen, Anand Agarawala
  • Patent number: 9575563
    Abstract: Embodiments may relate to intuitive user-interface features for a head-mountable device (HMD), in the context of a hybrid human and computer-automated response system. An illustrative method may involve a head-mountable device (HMD) that comprises a touchpad: (a) sending a speech-segment message to a hybrid response system, wherein the speech-segment message is indicative of a speech segment that is detected in audio data captured at the HMD, and wherein the speech segment is associated with a first user-account with the hybrid response system, (b) receiving a response message that includes a response to the speech-segment message and an indication of a next action corresponding to the response to the speech-segment message, (c) displaying a card interface that includes an indication of the response, and (d) while displaying the response, detecting a singular touch gesture and responsively initiating the next action.
    Type: Grant
    Filed: December 30, 2013
    Date of Patent: February 21, 2017
    Assignee: X Development LLC
    Inventors: Chun Yat Frank Li, Daniel Rodriguez Magana, Thiago Teixeira, Charles Chen, Anand Agarawala
  • Publication number: 20150339374
    Abstract: A system and method for generating activity summaries for users. User activity information is received by a user activity information receiver module. Once enough data has been received and processed, it is analyzed and segmented to determine and create an activity summary, or story. Content is then selected; the selection includes content items such as multimedia items, e.g., pictures and videos. Secondary information, such as user activity information or location information, is analyzed. A story that includes the selected content is generated and represented by a display of the selected media and other information associated with the media.
    Type: Application
    Filed: May 20, 2015
    Publication date: November 26, 2015
    Inventors: Joseph Robert Smarr, Anand Agarawala, Brett Rolston Lider, Benjamin David Eidelson
  • Patent number: 8856675
    Abstract: Methods and apparatus for displaying display windows in a graphical user interface are disclosed. An example method includes opening, on a computing device, a first root browser window and spawning, from a first link in the first root browser window in response to a user toss-gesture associated with the first link, a first subordinate browser window. The example method further includes displaying, in a hierarchical display feature of the computing device, a hierarchical relationship between the first root browser window and the first subordinate browser window so as to visually indicate hierarchical subordinacy of the first subordinate browser window to the first root browser window.
    Type: Grant
    Filed: November 16, 2011
    Date of Patent: October 7, 2014
    Assignee: Google Inc.
    Inventors: Anand Agarawala, Adam Cohen, Alex Nicolaou, Ben Eidelson, Winson Chung, Michael Jurka, Patrick Dubroy
  • Publication number: 20140164938
    Abstract: A system and method for generating and providing user interfaces for interacting with a stream of content are disclosed. A system having one or more processors and a memory is configured to perform operations including receiving a stream of content including one or more content items; selecting a content item; determining a tile type for providing the content item based upon an attribute of the content item; populating tile components for the tile type using the content item; organizing content tiles in a dynamic grid using the attribute of the content items; and providing the dynamic grid of content tiles for display.
    Type: Application
    Filed: August 14, 2013
    Publication date: June 12, 2014
    Applicant: Google Inc.
    Inventors: Frank Petterson, Brian Laird, Chikezie Ejiasi, Anand Agarawala, Leslie Ikemoto, Daniel Burka, Karl Channell
  • Patent number: 8429565
    Abstract: The present disclosure describes various techniques that may be implemented to execute and/or interpret manipulation gestures performed by a user on a multipoint touch input interface of a computing device. An example method includes receiving a multipoint touch gesture at a multipoint touch input interface of a computing device, wherein the multipoint touch gesture comprises a gesture that is performed with multiple touches on the multipoint touch input interface, and resolving the multipoint touch gesture into a command. The example method further includes determining at least one physical simulation effect to associate with the resolved multipoint touch gesture, and rendering a unified feedback output action in a graphical user interface of the computing device by executing the command, wherein the unified feedback output action includes at least a graphical output action incorporated with the at least one physical simulation effect in the graphical user interface.
    Type: Grant
    Filed: August 25, 2010
    Date of Patent: April 23, 2013
    Assignee: Google Inc.
    Inventors: Anand Agarawala, Patrick Dubroy, Adam Lesinski
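The gesture-to-command resolution and unified feedback described in this abstract can be sketched as two functions. The gesture features, command names, and effect names are illustrative assumptions, not the patent's actual vocabulary.

```python
def resolve_gesture(touch_count, spread_delta):
    """Resolve a multipoint touch gesture into a command: two or more
    touches spreading apart zoom in, pinching together zoom out."""
    if touch_count >= 2 and spread_delta > 0:
        return "zoom_in"
    if touch_count >= 2 and spread_delta < 0:
        return "zoom_out"
    return "select"

def unified_feedback(command):
    """Pair the resolved command with a physical-simulation effect so the
    graphical output and the physics effect render as one action."""
    effects = {"zoom_in": "inertial_expand", "zoom_out": "inertial_contract"}
    return (command, effects.get(command, "highlight"))
```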
  • Patent number: 8402382
    Abstract: A method, system and computer program for organizing and visualizing display objects within a virtual environment is provided. In one aspect, attributes of display objects define the interaction between display objects according to pre-determined rules, including rules simulating real world mechanics, thereby enabling enriched user interaction. The present invention further provides for the use of piles as an organizational entity for desktop objects. The present invention further provides for fluid interaction techniques for committing actions on display objects in a virtual interface. A number of other interaction and visualization techniques are disclosed.
    Type: Grant
    Filed: April 18, 2007
    Date of Patent: March 19, 2013
    Assignee: Google Inc.
    Inventors: Anand Agarawala, Ravin Balakrishnan
  • Publication number: 20110055773
    Abstract: The present disclosure describes various techniques that may be implemented to execute and/or interpret manipulation gestures performed by a user on a multipoint touch input interface of a computing device. An example method includes receiving a multipoint touch gesture at a multipoint touch input interface of a computing device, wherein the multipoint touch gesture comprises a gesture that is performed with multiple touches on the multipoint touch input interface, and resolving the multipoint touch gesture into a command. The example method further includes determining at least one physical simulation effect to associate with the resolved multipoint touch gesture, and rendering a unified feedback output action in a graphical user interface of the computing device by executing the command, wherein the unified feedback output action includes at least a graphical output action incorporated with the at least one physical simulation effect in the graphical user interface.
    Type: Application
    Filed: August 25, 2010
    Publication date: March 3, 2011
    Applicant: Google Inc.
    Inventors: Anand Agarawala, Patrick Dubroy, Adam Lesinski
  • Patent number: D737314
    Type: Grant
    Filed: October 19, 2012
    Date of Patent: August 25, 2015
    Assignee: Google Inc.
    Inventors: Daniel M. G. Shiplacoff, Anand Agarawala, Michael Adam Cohen
  • Patent number: D884018
    Type: Grant
    Filed: April 10, 2018
    Date of Patent: May 12, 2020
    Assignee: Spatial Systems Inc.
    Inventors: Anand Agarawala, Jinha Lee