Patents by Inventor Adarsh Prakash Murthy Kowdle

Adarsh Prakash Murthy Kowdle has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11868523
    Abstract: Techniques of tracking a user's gaze include identifying a region of a display at which a gaze of a user is directed, the region including a plurality of pixels. By determining a region rather than a point, when the regions correspond to elements of a user interface, the improved technique enables a system to activate the element to which the determined region corresponds. In some implementations, the system makes the determination using a classification engine including a convolutional neural network; such an engine takes as input images of the user's eye and outputs a list of probabilities that the gaze is directed to each of the regions.
    Type: Grant
    Filed: July 1, 2021
    Date of Patent: January 9, 2024
    Assignee: GOOGLE LLC
    Inventors: Ivana Tosic Rodgers, Sean Ryan Francesco Fanello, Sofien Bouaziz, Rohit Kumar Pandey, Eric Aboussouan, Adarsh Prakash Murthy Kowdle
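As a rough illustration of the classification step described in the abstract above (not the patented implementation): the convolutional network itself is omitted, and the hypothetical `region_logits` stands in for its per-region output scores; a softmax turns them into the probability list, and the highest-probability region is selected.

```python
import math

def softmax(logits):
    """Turn per-region scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def select_gaze_region(region_logits):
    """Return the index of the UI region with the highest predicted
    gaze probability, plus the full probability list."""
    probs = softmax(region_logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return best, probs
```

A UI layer could then activate the interface element bound to the returned region index.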
  • Publication number: 20230377183
    Abstract: The methods and systems described herein provide for depth-aware image editing and interactive features. In particular, a computer application may provide image-related features that utilize a combination of (a) a depth map and (b) segmentation data to process one or more images and generate an edited version of the one or more images.
    Type: Application
    Filed: July 21, 2023
    Publication date: November 23, 2023
    Inventors: Tim Phillip Wantland, Brandon Charles Barbello, Christopher Max Breithaupt, Michael John Schoenberg, Adarsh Prakash Murthy Kowdle, Bryan Woods, Anshuman Kumar
  • Patent number: 11810313
    Abstract: According to an aspect, a real-time active stereo system includes a capture system configured to capture stereo data, where the stereo data includes a first input image and a second input image, and a depth sensing computing system configured to predict a depth map. The depth sensing computing system includes a feature extractor configured to extract features from the first and second images at a plurality of resolutions, an initialization engine configured to generate a plurality of depth estimations, where each of the plurality of depth estimations corresponds to a different resolution, and a propagation engine configured to iteratively refine the plurality of depth estimations based on image warping and spatial propagation.
    Type: Grant
    Filed: February 19, 2021
    Date of Patent: November 7, 2023
    Assignee: GOOGLE LLC
    Inventors: Vladimir Tankovich, Christian Haene, Sean Ryan Francesco Fanello, Yinda Zhang, Shahram Izadi, Sofien Bouaziz, Adarsh Prakash Murthy Kowdle, Sameh Khamis
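The iterative refinement stage described above can be illustrated on a single toy scanline. This is a sketch under simplifying assumptions (absolute intensity difference as the matching cost, integer disparities, no learned components), not the patented architecture:

```python
import numpy as np

def refine_disparity(left, right, disp, search=1):
    """One refinement pass: each pixel tests disparities near its current
    estimate and keeps the one whose warped right-image sample best
    matches the left image (left[i] is compared with right[i - d])."""
    out = disp.copy()
    n = len(left)
    for i in range(n):
        best_cost = np.inf
        for d in range(max(0, disp[i] - search), disp[i] + search + 1):
            j = i - d
            if 0 <= j < n:
                cost = abs(int(left[i]) - int(right[j]))
                if cost < best_cost:
                    best_cost, out[i] = cost, d
    return out

# Toy pair: the right scanline is the left shifted by 2 pixels
# (wrap-around at the border is ignored), so the true disparity is 2.
left = np.array([10, 20, 30, 40, 50, 60, 70, 80])
right = np.roll(left, -2)
disp = refine_disparity(left, right, np.ones(8, dtype=int))
```

Starting from a coarse initialization of 1, a single pass recovers the true disparity of 2 for every pixel whose match lies inside the image.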
  • Publication number: 20230350049
    Abstract: A method including transmitting, by a peripheral device communicatively coupled to a wearable device, a frequency-modulated continuous wave (FMCW), receiving, by the peripheral device, a reflected signal based on the FMCW, tracking, by the peripheral device, a movement associated with the peripheral device based on the reflected signal, and communicating, from the peripheral device to the wearable device, an information corresponding to the movement associated with the peripheral device.
    Type: Application
    Filed: April 29, 2022
    Publication date: November 2, 2023
    Inventors: Anandghan Waghmare, Dongeek Shin, Ivan Poupyrev, Shwetak N. Patel, Shahram Izadi, Adarsh Prakash Murthy Kowdle
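The range measurement underlying FMCW tracking follows a standard relation: the chirp sweeps `bandwidth_hz` hertz over `chirp_s` seconds, and a reflection delayed by tau beats against the outgoing chirp at sweep_rate * tau. A minimal sketch, in which the parameter values and the acoustic propagation speed are illustrative assumptions rather than details from the abstract:

```python
def fmcw_range(beat_hz, bandwidth_hz, chirp_s, c=343.0):
    """Range from the beat frequency of an FMCW chirp.

    The sweep rate is bandwidth / chirp duration; a reflection delayed
    by tau produces a beat frequency of sweep_rate * tau, and the
    round-trip delay is tau = 2 * range / c.  The default propagation
    speed is sound in air."""
    sweep_rate = bandwidth_hz / chirp_s
    tau = beat_hz / sweep_rate
    return c * tau / 2.0
```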
  • Patent number: 11756223
    Abstract: The methods and systems described herein provide for depth-aware image editing and interactive features. In particular, a computer application may provide image-related features that utilize a combination of (a) a depth map and (b) segmentation data to process one or more images and generate an edited version of the one or more images.
    Type: Grant
    Filed: June 10, 2021
    Date of Patent: September 12, 2023
    Assignee: Google LLC
    Inventors: Tim Phillip Wantland, Brandon Charles Barbello, Christopher Max Breithaupt, Michael John Schoenberg, Adarsh Prakash Murthy Kowdle, Bryan Woods, Anshuman Kumar
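One way the (a) depth map and (b) segmentation data can combine in an edit, sketched with NumPy: pixels belonging to the segmented subject or lying near the focal depth stay sharp, and everything else is replaced with a "blurred" value. The global mean is a crude stand-in for a real blur kernel, and `depth_aware_blur` is a hypothetical helper, not the patented pipeline:

```python
import numpy as np

def depth_aware_blur(image, depth, mask, focus_depth, tol=0.1):
    """Blur pixels that are outside the focal plane and not in the
    segmentation mask, leaving the segmented subject sharp."""
    blurred = np.full_like(image, image.mean())  # stand-in for a blur kernel
    in_focus = mask | (np.abs(depth - focus_depth) <= tol)
    return np.where(in_focus, image, blurred)
```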
  • Publication number: 20230274491
    Abstract: A method including receiving (S605) a request for a depth map, generating (S625) a hybrid depth map based on a device depth map (110) and downloaded depth information (105), and responding (S630) to the request for the depth map with the hybrid depth map (415). The device depth map (110) can be depth data captured on a user device (515) using sensors and/or software. The downloaded depth information (105) can be associated with depth data, map data, image data, and/or the like stored on a remote (to the user device) server (505).
    Type: Application
    Filed: September 1, 2021
    Publication date: August 31, 2023
    Inventors: Eric Turner, Adarsh Prakash Murthy Kowdle, Bicheng Luo, Juan David Hincapie Ramos
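The blending step can be as simple as filling holes in the on-device depth map with the downloaded depth. This sketch assumes zeros mark missing device depth, which is an assumption of the example, not something the abstract specifies:

```python
import numpy as np

def hybrid_depth(device_depth, downloaded_depth):
    """Fill holes (zeros) in the on-device depth map with downloaded depth."""
    return np.where(device_depth > 0, device_depth, downloaded_depth)
```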
  • Publication number: 20230258798
    Abstract: Smart glasses including a first audio device, a second audio device, a frame including a first portion, a second portion, and a third portion, the second portion and the third portion being moveable in relation to the first portion, the second portion including the first audio device and the third portion including the second audio device, and a processor configured to cause the first audio device to generate a signal, receive the signal via the second audio device, estimate a distance based on the received signal, and determine a configuration of the frame.
    Type: Application
    Filed: February 15, 2022
    Publication date: August 17, 2023
    Inventors: Dongeek Shin, Adarsh Prakash Murthy Kowdle, Jingying Hu, Andrea Colaco
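The distance estimation step can be sketched as finding the lag that best aligns the emitted signal with the recording, then converting that delay to a one-way distance at the speed of sound. The brute-force correlation and the parameter values are illustrative assumptions:

```python
def estimate_distance(sent, received, sample_rate_hz, c=343.0):
    """Estimate the distance between the two audio devices from the
    arrival delay of a known signal: pick the lag that maximizes the
    correlation between `sent` and a window of `received`."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(received) - len(sent) + 1):
        score = sum(s * r for s, r in zip(sent, received[lag:lag + len(sent)]))
        if score > best_score:
            best_score, best_lag = score, lag
    return c * best_lag / sample_rate_hz
```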
  • Publication number: 20230259199
    Abstract: A method including receiving an image from a sensor of a wearable device, rendering the image on a display of the wearable device, identifying a set of targets in the image, tracking a gaze direction associated with a user of the wearable device, rendering, on the displayed image, a gaze line based on the tracked gaze direction, identifying a subset of targets based on the set of targets in a region of the image based on the gaze line, triggering an action, and in response to the trigger, estimating a candidate target based on the subset of targets.
    Type: Application
    Filed: February 15, 2022
    Publication date: August 17, 2023
    Inventors: Mark Chang, Xavier Benavides Palos, Alexandr Virodov, Adarsh Prakash Murthy Kowdle, Kan Huang
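Identifying the subset of targets near the gaze line reduces, in a 2-D sketch, to a point-to-line distance test. This geometric caricature (hypothetical helper, flat image plane, straight gaze line) illustrates the idea rather than the patented method:

```python
def targets_near_gaze(targets, origin, direction, radius):
    """Keep targets whose perpendicular distance to the gaze line,
    defined by an origin point and a direction vector, is within radius."""
    ox, oy = origin
    dx, dy = direction
    norm = (dx * dx + dy * dy) ** 0.5
    out = []
    for (tx, ty) in targets:
        # |cross product| / |direction| = perpendicular distance to the line
        dist = abs(dx * (ty - oy) - dy * (tx - ox)) / norm
        if dist <= radius:
            out.append((tx, ty))
    return out
```

The triggered action would then estimate a single candidate target from this subset.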
  • Patent number: 11687635
    Abstract: This document describes techniques and systems that enable automatic exposure and gain control for face authentication. The techniques and systems include a user device initializing a gain for a near-infrared camera system using a default gain. The user device ascertains patch-mean statistics of one or more regions-of-interest of a most-recently captured image that was captured by the near-infrared camera system. The user device computes an update in the initialized gain to provide an updated gain that is usable to scale the one or more regions-of-interest toward a target mean-luminance value. The user device dampens the updated gain by using hysteresis. Then, the user device sets the initialized gain for the near-infrared camera system to the dampened updated gain.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: June 27, 2023
    Assignee: Google LLC
    Inventors: Adarsh Prakash Murthy Kowdle, Ruben Manuel Velarde, Zhijun He, Xu Han, Kourosh Derakshan, Shahram Izadi
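The gain-update-then-dampen loop in the abstract can be sketched as a proportional correction toward the target mean luminance, with the damping factor and a dead band standing in for the hysteresis; the specific constants are illustrative assumptions:

```python
def update_gain(gain, patch_mean, target_mean, damping=0.5, deadband=2.0):
    """One auto-gain step: compute the gain that would scale the patch
    mean to the target, then damp the change; skip tiny corrections
    (a dead band, standing in for the hysteresis in the abstract)."""
    if abs(patch_mean - target_mean) <= deadband:
        return gain
    raw = gain * (target_mean / patch_mean)
    return gain + damping * (raw - gain)
```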
  • Publication number: 20230186575
    Abstract: A method including receiving a first depth image associated with a first frame at a first time of an augmented reality (AR) application, the first depth image representing at least a first portion of a real-world space; storing the first depth image; receiving a second depth image associated with a second frame at a second time, after the first time, of the AR application, the second depth image representing at least a second portion of the real-world space; generating a real-world image by blending, at least, the stored first depth image with the second depth image; receiving a rendered AR object; combining the AR object in the real-world image; and displaying the real-world image combined with the AR object.
    Type: Application
    Filed: May 22, 2020
    Publication date: June 15, 2023
    Inventors: Eric Turner, Keisuke Tateno, Konstantine Nicholas John Tsotsos, Adarsh Prakash Murthy Kowdle, Vaibhav Gupta, Ambrus Csaszar
  • Publication number: 20230004216
    Abstract: Techniques of tracking a user's gaze include identifying a region of a display at which a gaze of a user is directed, the region including a plurality of pixels. By determining a region rather than a point, when the regions correspond to elements of a user interface, the improved technique enables a system to activate the element to which the determined region corresponds. In some implementations, the system makes the determination using a classification engine including a convolutional neural network; such an engine takes as input images of the user's eye and outputs a list of probabilities that the gaze is directed to each of the regions.
    Type: Application
    Filed: July 1, 2021
    Publication date: January 5, 2023
    Inventors: Ivana Tosic Rodgers, Sean Ryan Francesco Fanello, Sofien Bouaziz, Rohit Kumar Pandey, Eric Aboussouan, Adarsh Prakash Murthy Kowdle
  • Publication number: 20220335638
    Abstract: According to an aspect, a method for depth estimation includes receiving image data from a sensor system, generating, by a neural network, a first depth map based on the image data, where the first depth map has a first scale, obtaining depth estimates associated with the image data, and transforming the first depth map to a second depth map using the depth estimates, where the second depth map has a second scale.
    Type: Application
    Filed: April 19, 2021
    Publication date: October 20, 2022
    Inventors: Abhishek Kar, Hossam Isack, Adarsh Prakash Murthy Kowdle, Aveek Purohit, Dmitry Medvedev
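Transforming a relative (first-scale) depth map into a metric (second-scale) one, given sparse depth estimates, can be sketched as a least-squares fit of a scale and shift on the sampled pixels, applied to the whole map. The linear model and the hypothetical `rescale_depth` helper are assumptions of this sketch:

```python
import numpy as np

def rescale_depth(relative_depth, sample_idx, metric_samples):
    """Fit depth_metric ~ a * depth_relative + b on sparse estimates,
    then apply the fitted transform to the whole depth map."""
    x = relative_depth.ravel()[sample_idx]
    A = np.stack([x, np.ones_like(x)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, metric_samples, rcond=None)
    return a * relative_depth + b
```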
  • Publication number: 20220191374
    Abstract: This document describes techniques and systems that enable automatic exposure and gain control for face authentication. The techniques and systems include a user device initializing a gain for a near-infrared camera system using a default gain. The user device ascertains patch-mean statistics of one or more regions-of-interest of a most-recently captured image that was captured by the near-infrared camera system. The user device computes an update in the initialized gain to provide an updated gain that is usable to scale the one or more regions-of-interest toward a target mean-luminance value. The user device dampens the updated gain by using hysteresis. Then, the user device sets the initialized gain for the near-infrared camera system to the dampened updated gain.
    Type: Application
    Filed: September 25, 2019
    Publication date: June 16, 2022
    Applicant: Google LLC
    Inventors: Adarsh Prakash Murthy Kowdle, Ruben Manuel Velarde, Zhijun He, Xu Han, Kourosh Derakshan, Shahram Izadi
  • Publication number: 20220172511
    Abstract: This disclosure describes systems and techniques for synchronizing cameras and tagging images for face authentication. For face authentication by a facial recognition model, a dual infrared camera may generate an image stream by alternating between capturing a “flood image” and a “dot image” and tagging each image with metadata that indicates whether the image is a flood or a dot image. Accurately tagging images can be difficult due to dropped frames and errors in metadata tags. The disclosed systems and techniques provide for the improved synchronization of cameras and tagging of images to promote accurate facial recognition.
    Type: Application
    Filed: October 10, 2019
    Publication date: June 2, 2022
    Applicant: Google LLC
    Inventors: Zhijun He, Wen Yu Chien, Po-Jen Chang, Xu Han, Adarsh Prakash Murthy Kowdle, Jae Min Purvis, Lu Gao, Gopal Parupudi, Clayton Merrill Kimber
  • Publication number: 20220065620
    Abstract: A lighting stage includes a plurality of lights that project alternating spherical color gradient illumination patterns onto an object or human performer at a predetermined frequency. The lighting stage also includes a plurality of cameras that capture images of an object or human performer corresponding to the alternating spherical color gradient illumination patterns. The lighting stage also includes a plurality of depth sensors that capture depth maps of the object or human performer at the predetermined frequency. The lighting stage also includes (or is associated with) one or more processors that implement a machine learning algorithm to produce a three-dimensional (3D) model of the object or human performer. The 3D model includes relighting parameters used to relight the 3D model under different lighting conditions.
    Type: Application
    Filed: November 11, 2020
    Publication date: March 3, 2022
    Inventors: Sean Ryan Francesco Fanello, Kaiwen Guo, Peter Christopher Lincoln, Philip Lindsley Davidson, Jessica L. Busch, Xueming Yu, Geoffrey Harvey, Sergio Orts Escolano, Rohit Kumar Pandey, Jason Dourgarian, Danhang Tang, Adarsh Prakash Murthy Kowdle, Emily B. Cooper, Mingsong Dou, Graham Fyffe, Christoph Rhemann, Jonathan James Taylor, Shahram Izadi, Paul Ernest Debevec
  • Patent number: 11145075
    Abstract: A handheld user device includes a monocular camera to capture a feed of images of a local scene and a processor to select, from the feed, a keyframe and perform, for a first image from the feed, stereo matching using the first image, the keyframe, and a relative pose based on a pose associated with the first image and a pose associated with the keyframe to generate a sparse disparity map representing disparities between the first image and the keyframe. The processor further is to determine a dense depth map from the disparity map using a bilateral solver algorithm, and process a viewfinder image generated from a second image of the feed with occlusion rendering based on the depth map to incorporate one or more virtual objects into the viewfinder image to generate an AR viewfinder image. Further, the processor is to provide the AR viewfinder image for display.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: October 12, 2021
    Assignee: Google LLC
    Inventors: Julien Valentin, Onur G. Guleryuz, Mira Leung, Maksym Dzitsiuk, Jose Pascoal, Mirko Schmidt, Christoph Rhemann, Neal Wadhwa, Eric Turner, Sameh Khamis, Adarsh Prakash Murthy Kowdle, Ambrus Csaszar, João Manuel Castro Afonso, Jonathan T. Barron, Michael Schoenberg, Ivan Dryanovski, Vivek Verma, Vladimir Tankovich, Shahram Izadi, Sean Ryan Francesco Fanello, Konstantine Nicholas John Tsotsos
  • Publication number: 20210304431
    Abstract: The methods and systems described herein provide for depth-aware image editing and interactive features. In particular, a computer application may provide image-related features that utilize a combination of (a) a depth map and (b) segmentation data to process one or more images and generate an edited version of the one or more images.
    Type: Application
    Filed: June 10, 2021
    Publication date: September 30, 2021
    Inventors: Tim Phillip Wantland, Brandon Charles Barbello, Christopher Max Breithaupt, Michael John Schoenberg, Adarsh Prakash Murthy Kowdle, Bryan Woods, Anshuman Kumar
  • Publication number: 20210264632
    Abstract: According to an aspect, a real-time active stereo system includes a capture system configured to capture stereo data, where the stereo data includes a first input image and a second input image, and a depth sensing computing system configured to predict a depth map. The depth sensing computing system includes a feature extractor configured to extract features from the first and second images at a plurality of resolutions, an initialization engine configured to generate a plurality of depth estimations, where each of the plurality of depth estimations corresponds to a different resolution, and a propagation engine configured to iteratively refine the plurality of depth estimations based on image warping and spatial propagation.
    Type: Application
    Filed: February 19, 2021
    Publication date: August 26, 2021
    Inventors: Vladimir Tankovich, Christian Haene, Sean Ryan Francesco Fanello, Yinda Zhang, Shahram Izadi, Sofien Bouaziz, Adarsh Prakash Murthy Kowdle, Sameh Khamis
  • Patent number: 11100664
    Abstract: The methods and systems described herein provide for depth-aware image editing and interactive features. In particular, a computer application may provide image-related features that utilize a combination of (a) a depth map and (b) segmentation data to process one or more images and generate an edited version of the one or more images.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: August 24, 2021
    Assignee: GOOGLE LLC
    Inventors: Tim Phillip Wantland, Brandon Charles Barbello, Christopher Max Breithaupt, Michael John Schoenberg, Adarsh Prakash Murthy Kowdle, Bryan Woods, Anshuman Kumar
  • Patent number: 11037026
    Abstract: Values of pixels in an image are mapped to a binary space using a first function that preserves characteristics of values of the pixels. Labels are iteratively assigned to the pixels in the image in parallel based on a second function. The label assigned to each pixel is determined based on values of a set of nearest-neighbor pixels. The first function is trained to map values of pixels in a set of training images to the binary space and the second function is trained to assign labels to the pixels in the set of training images. Considering only the nearest neighbors in the inference scheme results in a computational complexity that is independent of the size of the solution space and produces sufficient approximations of the true distribution when the solution for each pixel is most likely found in a small subset of the set of potential solutions.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: June 15, 2021
    Assignee: Google LLC
    Inventors: Sean Ryan Fanello, Julien Pascal Christophe Valentin, Adarsh Prakash Murthy Kowdle, Christoph Rhemann, Vladimir Tankovich, Philip L. Davidson, Shahram Izadi
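The two trained functions in the abstract can be caricatured as follows: fixed thresholds stand in for the learned binary mapping, and a majority vote over precomputed nearest-neighbor pixels stands in for the learned label assignment. Both are stand-ins of this sketch, not the patented functions:

```python
def to_binary(value, thresholds):
    """Map a scalar pixel value to a binary code via fixed thresholds,
    a crude stand-in for the learned mapping into the binary space."""
    return tuple(int(value > t) for t in thresholds)

def relabel(labels, neighbors):
    """One parallel-style iteration: each listed pixel takes the
    majority label among its nearest-neighbor pixels."""
    new = list(labels)
    for i, nbrs in neighbors.items():
        votes = [labels[j] for j in nbrs]
        new[i] = max(set(votes), key=votes.count)
    return new
```

Restricting each vote to a small nearest-neighbor set is what keeps the per-pixel cost independent of the full solution-space size, as the abstract notes.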