Patents by Inventor Ankit Mohan

Ankit Mohan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11483451
    Abstract: A method is performed at a system that comprises one or more video cameras and a remote server system. The method includes obtaining, via a video camera of the one or more video cameras, a continuous stream of video data for a scene. The video data stream comprises color video data in accordance with a determination that the scene has illumination above an illumination threshold and comprises infrared (IR) video data in accordance with a determination that the scene does not have illumination above the illumination threshold. The method includes colorizing the IR video data based on a subset of the color video data. The method further includes presenting the colorized video data to a user in real time.
    Type: Grant
    Filed: November 19, 2019
    Date of Patent: October 25, 2022
    Assignee: Google LLC
    Inventors: George Alban Heitz III, Rizwan Ahmed Chaudhry, Ankit Mohan, Joshua Fromm
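
The claims above describe switching between color and IR capture and colorizing the IR frames from a subset of the earlier color data, but they do not prescribe an algorithm. The sketch below is a minimal illustration of that idea, assuming a naive scheme that caches the chroma of the last well-lit color frame and reuses it for night-time IR frames; the class name, the illumination threshold, and the YCbCr-based blending are all assumptions, not the patented method (which could equally use a learned colorization model).

```python
# Illustrative sketch only; the patent does not prescribe this algorithm.
# Hypothetical pipeline: keep the chroma of the last well-lit color frame and
# reuse it as the chroma for night-time IR frames (IR intensity as luma).
import numpy as np

ILLUMINATION_THRESHOLD = 60.0  # hypothetical mean-luma threshold (0-255)

def rgb_to_ycbcr(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(ycbcr):
    y, cb, cr = ycbcr[..., 0], ycbcr[..., 1] - 128.0, ycbcr[..., 2] - 128.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255)

class NaiveIRColorizer:
    def __init__(self):
        self.cached_chroma = None  # Cb/Cr planes from the last daylight frame

    def process(self, frame):
        """frame: HxWx3 uint8 color frame (day) or HxW uint8 IR frame (night).
        Assumes color and IR frames share the same resolution."""
        if frame.ndim == 3:  # color frame: scene illumination above threshold
            ycbcr = rgb_to_ycbcr(frame.astype(np.float64))
            if ycbcr[..., 0].mean() > ILLUMINATION_THRESHOLD:
                self.cached_chroma = ycbcr[..., 1:]  # remember daytime colors
            return frame
        ir = frame.astype(np.float64)
        if self.cached_chroma is None:
            return np.stack([frame] * 3, axis=-1)  # no color reference yet
        ycbcr = np.concatenate([ir[..., None], self.cached_chroma], axis=-1)
        return ycbcr_to_rgb(ycbcr).astype(np.uint8)
```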
  • Publication number: 20210400167
    Abstract: A method is performed at a system that comprises one or more video cameras and a remote server system. The method includes obtaining, via a video camera of the one or more video cameras, a continuous stream of video data for a scene. The video data stream comprises color video data in accordance with a determination that the scene has illumination above an illumination threshold and comprises infrared (IR) video data in accordance with a determination that the scene does not have illumination above the illumination threshold. The method includes colorizing the IR video data based on a subset of the color video data. The method further includes presenting the colorized video data to a user in real time.
    Type: Application
    Filed: November 19, 2019
    Publication date: December 23, 2021
    Inventors: George Alban Heitz III, Rizwan Ahmed Chaudhry, Ankit Mohan, Joshua Fromm
  • Patent number: 10448585
    Abstract: Various arrangements for visual control of a network-enabled irrigation system are presented. In some embodiments, a video stream of an outdoor location that can include a lawn may be captured. The video stream of the outdoor location may be transmitted to a cloud-based irrigation management server system. The lawn may be monitored for a period of time using the video stream. Based on monitoring the lawn for the period of time, a visual change in a state of the lawn may be identified. Based on the visual change in the state of the lawn, an adjustment of an irrigation program of the network-enabled irrigation system may be determined. An irrigation control message may be transmitted to the network-enabled irrigation system that alters an irrigation schedule for the lawn.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: October 22, 2019
    Assignee: Google LLC
    Inventors: Deepak Kundra, John Nold, James Stewart, Ankit Mohan, Leon Tan
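
As a rough illustration of the claimed flow (monitor the lawn via video, detect a visual change in its state, adjust the irrigation program), the sketch below uses a simple "excess green" index as the visual state and a fixed 10% schedule adjustment; both are assumptions, and the patent does not specify this heuristic.

```python
# Illustrative sketch only; the claims cover cloud-side monitoring of lawn
# appearance and schedule adjustment, not this specific heuristic.
import numpy as np

def excess_green(frame):
    """Mean 2G - R - B over an HxWx3 frame; a crude greenness proxy."""
    rgb = frame.astype(np.float64) / 255.0
    return float((2 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]).mean())

def adjust_irrigation(frames_this_week, frames_last_week, minutes_per_day):
    """Compare lawn greenness across two monitoring periods and return an
    updated watering duration to send in an irrigation control message."""
    now = np.mean([excess_green(f) for f in frames_this_week])
    before = np.mean([excess_green(f) for f in frames_last_week])
    if now < before * 0.9:            # lawn visibly browner than before
        return minutes_per_day * 1.1  # water ~10% longer
    if now > before * 1.1:            # lawn greener / possibly overwatered
        return minutes_per_day * 0.9
    return minutes_per_day            # no visual change worth acting on
```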
  • Publication number: 20190141919
    Abstract: Various arrangements for visual control of a network-enabled irrigation system are presented. In some embodiments, a video stream of an outdoor location that can include a lawn may be captured. The video stream of the outdoor location may be transmitted to a cloud-based irrigation management server system. The lawn may be monitored for a period of time using the video stream. Based on monitoring the lawn for the period of time, a visual change in a state of the lawn may be identified. Based on the visual change in the state of the lawn, an adjustment of an irrigation program of the network-enabled irrigation system may be determined. An irrigation control message may be transmitted to the network-enabled irrigation system that alters an irrigation schedule for the lawn.
    Type: Application
    Filed: November 14, 2017
    Publication date: May 16, 2019
    Applicant: Google LLC
    Inventors: Deepak Kundra, John Nold, James Stewart, Ankit Mohan, Leon Tan
  • Patent number: 10048770
    Abstract: Implementations of the disclosed subject matter provide techniques for improved identification of a gesture based on data obtained from multiple devices. A method may include receiving an indication of an onset of a gesture, from a first device, at a gesture coordinating device. Next, first subsequent data describing the gesture may be received from a second device, at the gesture coordinating device. Based on the indication and the first subsequent data, the gesture may be identified. In response to identification of the gesture, an action may be performed based on the gesture identified. In some cases, the gesture coordinating device may be a cloud-based device.
    Type: Grant
    Filed: September 18, 2017
    Date of Patent: August 14, 2018
    Assignee: Google Inc.
    Inventors: Boris Smus, Christian Plagemann, Ankit Mohan
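
The sketch below illustrates only the claimed coordination pattern: a toy "gesture coordinating device" pairs an onset indication from one device with subsequent data received from a second device within a short window, then identifies the gesture and returns it so the caller can perform the mapped action. The pairing window, the classifier, and the gesture labels are all hypothetical.

```python
# Toy coordinator, not the claimed implementation: pair an onset report from
# one device with subsequent motion data from another, then classify.
from dataclasses import dataclass, field
import time

@dataclass
class GestureCoordinator:
    window_s: float = 2.0                        # hypothetical pairing window
    pending: dict = field(default_factory=dict)  # device_id -> onset timestamp

    def on_onset(self, device_id):
        """First device reports that a gesture has started."""
        self.pending[device_id] = time.time()

    def on_subsequent_data(self, device_id, samples):
        """Second device supplies data describing the gesture in progress."""
        onsets = [t for t in self.pending.values()
                  if time.time() - t <= self.window_s]
        if not onsets:
            return None                 # no recent onset; ignore the data
        gesture = "swipe" if max(samples) - min(samples) > 1.0 else "tap"
        self.pending.clear()
        return gesture                  # caller triggers the mapped action

coordinator = GestureCoordinator()
coordinator.on_onset("watch-1")
print(coordinator.on_subsequent_data("phone-1", [0.1, 1.6, -0.4]))  # "swipe"
```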
  • Patent number: 9811311
    Abstract: The present disclosure provides techniques for improving IMU-based gesture detection by a device using ultrasonic Doppler. A method may include detecting the onset of a gesture at a first device based on motion data obtained from an IMU of the first device. An indication of the detection of the onset of the gesture may be provided to a second device. Next, a first audio signal may be received from the second device. As a result, the gesture may be identified based on the motion data and the received first audio signal. In some cases, a first token encoded within the first audio signal may be decoded and the first token may be provided to a third coordinating device. A confirmation message may be received from the third coordinating device based on the first token provided and identifying the gesture may be further based on the confirmation message.
    Type: Grant
    Filed: March 31, 2014
    Date of Patent: November 7, 2017
    Assignee: Google Inc.
    Inventors: Boris Smus, Christian Plagemann, Ankit Mohan, Ryan Michael Rifkin
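
Below is a minimal sketch of the two halves the claims combine: IMU-based onset detection and an estimate of the Doppler shift of an ultrasonic tone received from a second device. The 20 kHz pilot tone, the thresholds, and the FFT-based peak-frequency estimate are assumptions; the token exchange with a coordinating device is omitted.

```python
# Illustrative sketch only; tone frequency, thresholds, and the Doppler
# estimator are assumptions, not the patented method.
import numpy as np

FS = 48_000          # audio sample rate (Hz)
PILOT_HZ = 20_000    # hypothetical ultrasonic tone emitted by the second device
ONSET_G = 1.5        # acceleration magnitude (in g) treated as a gesture onset

def detect_onset(accel_xyz):
    """IMU side: flag a gesture onset when acceleration magnitude spikes."""
    return np.linalg.norm(accel_xyz, axis=-1).max() > ONSET_G

def doppler_shift_hz(audio):
    """Audio side: peak-frequency offset of the received tone from the pilot."""
    spectrum = np.abs(np.fft.rfft(audio * np.hanning(len(audio))))
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / FS)
    band = (freqs > PILOT_HZ - 500) & (freqs < PILOT_HZ + 500)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak - PILOT_HZ

def identify_gesture(accel_xyz, audio):
    if not detect_onset(accel_xyz):
        return None
    shift = doppler_shift_hz(audio)
    if shift > 20:
        return "push-toward-device"   # hand moving toward the microphone
    if shift < -20:
        return "pull-away"
    return "shake"                    # motion without net radial velocity
```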
  • Patent number: 9791940
    Abstract: Implementations of the disclosed subject matter provide techniques for improved identification of a gesture based on data obtained from multiple devices. A method may include receiving an indication of an onset of a gesture, from a first device, at a gesture coordinating device. Next, first subsequent data describing the gesture may be received from a second device, at the gesture coordinating device. Based on the indication and the first subsequent data, the gesture may be identified. In response to identification of the gesture, an action may be performed based on the gesture identified. In some cases, the gesture coordinating device may be a cloud-based device.
    Type: Grant
    Filed: December 28, 2016
    Date of Patent: October 17, 2017
    Assignee: Google Inc.
    Inventors: Boris Smus, Christian Plagemann, Ankit Mohan
  • Patent number: 9563280
    Abstract: Implementations of the disclosed subject matter provide techniques for improved identification of a gesture based on data obtained from multiple devices. A method may include receiving an indication of an onset of a gesture, from a first device, at a gesture coordinating device. Next, first subsequent data describing the gesture may be received from a second device, at the gesture coordinating device. Based on the indication and the first subsequent data, the gesture may be identified. In response to identification of the gesture, an action may be performed based on the gesture identified. In some cases, the gesture coordinating device may be a cloud-based device.
    Type: Grant
    Filed: August 3, 2016
    Date of Patent: February 7, 2017
    Assignee: Google Inc.
    Inventors: Boris Smus, Christian Plagemann, Ankit Mohan
  • Patent number: 9417704
    Abstract: Implementations of the disclosed subject matter provide techniques for improved identification of a gesture based on data obtained from multiple devices. A method may include receiving an indication of an onset of a gesture, from a first device, at a gesture coordinating device. Next, first subsequent data describing the gesture may be received from a second device, at the gesture coordinating device. Based on the indication and the first subsequent data, the gesture may be identified. In response to identification of the gesture, an action may be performed based on the gesture identified. In some cases, the gesture coordinating device may be a cloud-based device.
    Type: Grant
    Filed: March 18, 2014
    Date of Patent: August 16, 2016
    Assignee: Google Inc.
    Inventors: Boris Smus, Christian Plagemann, Ankit Mohan
  • Patent number: 9160900
    Abstract: Systems and methods for capturing light field information including spatial and angular information using an image pickup device that includes an image sensor and at least one spatial light modulator (SLM) take multiple captures of a scene using the at least one SLM to obtain coded projections of a light field of the scene, wherein each capture is taken using at least one pattern on the at least one SLM, and recover light field data using a reconstruction process on the obtained coded projections of the light field.
    Type: Grant
    Filed: February 29, 2012
    Date of Patent: October 13, 2015
    Assignee: Canon Kabushiki Kaisha
    Inventors: Ankit Mohan, Siu-Kei Tin, Eric W. Tramel
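
As a toy analogue of the claimed capture-and-reconstruct loop, the sketch below simulates coded projections of a 1-D "light field" through random binary SLM mask patterns and recovers the samples with a dense least-squares solve. Real light fields are 4-D, and the patent's reconstruction process (which can work from fewer captures, e.g. with sparsity priors) is not represented by this tiny example.

```python
# Toy 1-D analogue of coded light-field capture and reconstruction.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 64            # unknown light-field samples (toy size)
n_captures = 96           # coded captures; with fewer captures than unknowns,
                          # a sparsity-based reconstruction would be needed

light_field = rng.random(n_samples)                   # toy ground truth
masks = rng.integers(0, 2, size=(n_captures, n_samples)).astype(float)

# Each capture integrates the light field modulated by one SLM mask pattern.
measurements = masks @ light_field

# Recover the light-field samples from the coded projections.
recovered, *_ = np.linalg.lstsq(masks, measurements, rcond=None)
print(float(np.abs(recovered - light_field).max()))   # near-zero residual
```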
  • Publication number: 20150261495
    Abstract: The present disclosure provides techniques for improving IMU-based gesture detection by a device using ultrasonic Doppler. A method may include detecting the onset of a gesture at a first device based on motion data obtained from an IMU of the first device. An indication of the detection of the onset of the gesture may be provided to a second device. Next, a first audio signal may be received from the second device. As a result, the gesture may be identified based on the motion data and the received first audio signal. In some cases, a first token encoded within the first audio signal may be decoded and the first token may be provided to a third coordinating device. A confirmation message may be received from the third coordinating device based on the first token provided and identifying the gesture may be further based on the confirmation message.
    Type: Application
    Filed: March 31, 2014
    Publication date: September 17, 2015
    Applicant: Google Inc.
    Inventors: Boris Smus, Christian Plagemann, Ankit Mohan, Ryan Michael Rifkin
  • Patent number: 9100562
    Abstract: In exemplary implementations of this invention, a lens and sensor of a camera are intentionally destabilized (i.e., shifted relative to the scene being imaged) in order to create defocus effects. That is, actuators in a camera move a lens and a sensor, relative to the scene being imaged, while the camera takes a photograph. This motion simulates a larger aperture size (shallower depth of field). Thus, by translating a lens and a sensor while taking a photo, a camera with a small aperture (such as a cell phone or small point-and-shoot camera) may simulate the shallow DOF that can be achieved with a professional SLR camera. This invention may be implemented in such a way that programmable defocus effects may be achieved. Also, approximately depth-invariant defocus blur size may be achieved over a range of depths, in some embodiments of this invention.
    Type: Grant
    Filed: April 12, 2010
    Date of Patent: August 4, 2015
    Assignee: Massachusetts Institute of Technology
    Inventors: Ankit Mohan, Douglas Lanman, Shinsaku Hiura, Ramesh Raskar
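
A back-of-the-envelope sketch of why the technique works: sweeping the lens (and sensor) over an extent A during the exposure blurs out-of-focus depths roughly as a physical aperture of diameter A would, while the focused plane stays registered. The thin-lens circle-of-confusion formula below is standard; the phone-like focal length, focus distance, and 25 mm sweep are assumptions.

```python
# Back-of-the-envelope comparison; numbers are assumptions, not from the patent.
def circle_of_confusion_mm(aperture_mm, focal_mm, focus_m, depth_m):
    """Thin-lens circle-of-confusion diameter on the sensor, in mm."""
    z_f, z = focus_m * 1000.0, depth_m * 1000.0
    return aperture_mm * focal_mm * abs(z - z_f) / (z * (z_f - focal_mm))

focal_mm, focus_m, background_m = 4.0, 1.0, 3.0      # phone-like lens
physical = circle_of_confusion_mm(1.4, focal_mm, focus_m, background_m)
synthetic = circle_of_confusion_mm(25.0, focal_mm, focus_m, background_m)
print(f"f/2.8 phone aperture: {physical:.4f} mm blur on the background")
print(f"25 mm translation sweep: {synthetic:.4f} mm blur (SLR-like bokeh)")
```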
  • Publication number: 20140254864
    Abstract: System and method for image detection that include collecting image data; at a processor, over a plurality of support regions of the image data, computing a dimensionality component of a support region of the image data from the non-nucleus pixels of that support region; calculating a normalizing factor of the dimensionality component; for at least one weighted pattern of a pattern set, applying a weighted pattern to the dimensionality component to create a gradient vector, mapping the gradient vector to a probabilistic model, and normalizing the gradient vector by the normalizing factor; condensing probabilistic models of the plurality of support regions into a probabilistic distribution feature for at least one cell of the image data; applying a classifier to at least the probabilistic distribution feature; and detecting an object in the image data according to a result of the applied classifier.
    Type: Application
    Filed: March 7, 2013
    Publication date: September 11, 2014
    Applicant: Google Inc.
    Inventors: Navneet Dalal, Rahul Garg, Varun Gulshan, Ankit Mohan
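
Because the abstract is terse, the sketch below only gestures at the general family of features it describes: per-pixel gradients pooled into a normalized per-cell distribution that is then handed to a classifier. The patent's specific dimensionality components, weighted patterns, and probabilistic model are not reproduced; the cell size and bin count are assumptions.

```python
# Rough sketch in the spirit of gradient-distribution features; not the
# claimed feature computation.
import numpy as np

def cell_orientation_distribution(gray, cell=8, bins=9):
    """Return an (H/cell, W/cell, bins) array of normalized orientation
    histograms; each cell's histogram acts as a distribution feature."""
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation
    h, w = gray.shape[0] // cell, gray.shape[1] // cell
    feats = np.zeros((h, w, bins))
    for i in range(h):
        for j in range(w):
            sl = (slice(i * cell, (i + 1) * cell),
                  slice(j * cell, (j + 1) * cell))
            hist, _ = np.histogram(ang[sl], bins=bins, range=(0, np.pi),
                                   weights=mag[sl])
            feats[i, j] = hist / (np.linalg.norm(hist) + 1e-9)  # normalize
    return feats   # flatten and pass to a trained classifier for detection
```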
  • Patent number: 8783871
    Abstract: In exemplary implementations, this invention is a tool for subjective assessment of the visual acuity of a human eye. A microlens or pinhole array is placed over a high-resolution display. The eye is brought very near to the device. Patterns are displayed on the screen under some of the lenslets or pinholes. Using interactive software, a user causes the patterns that the eye sees to appear to be aligned. The software allows the user to move the apparent position of the patterns. This apparent motion is achieved by pre-warping the position and angle of the ray-bundles exiting the lenslet display. As the user aligns the apparent position of the patterns, the amount of pre-warping varies. The amount of pre-warping required in order for the user to see what appears to be a single, aligned pattern indicates the lens aberration of the eye.
    Type: Grant
    Filed: April 22, 2011
    Date of Patent: July 22, 2014
    Assignee: Massachusetts Institute of Technology
    Inventors: Vitor Pamplona, Manuel Menezes de Oliveira Neto, Ankit Mohan, Ramesh Raskar
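
One small, concrete piece of the readout can be shown directly: once the interactive alignment has located the eye's far point along a meridian, the spherical correction for that meridian is the negative reciprocal of the far-point distance. The sketch below shows only that conversion; the pre-warping optics and cylinder/axis estimation are not modeled.

```python
# Minimal readout sketch, not the optical pre-warping itself.
def correction_diopters(far_point_m):
    """Spherical correction for an eye whose far point is at far_point_m."""
    return -1.0 / far_point_m

# Example: alignment converges for a far point of 0.5 m, i.e. the user is
# roughly a 2 D myope along that meridian.
print(correction_diopters(0.5))   # -2.0
```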
  • Publication number: 20140157209
    Abstract: A system and method that includes detecting an application change within a multi-application operating framework; updating an application hierarchy model for gesture-to-action responses with the detected application change; detecting a gesture; according to the hierarchy model, mapping the detected gesture to an action of an application; and triggering the action.
    Type: Application
    Filed: March 12, 2013
    Publication date: June 5, 2014
    Applicant: Google Inc.
    Inventors: Navneet Dalal, Mehul Nariyawala, Ankit Mohan, Varun Gulshan
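
The sketch below is a toy rendering of the described flow, not the claimed implementation: an application hierarchy is updated as applications change, and a detected gesture is resolved to the first matching action starting from the foreground application. The class and method names are hypothetical.

```python
# Toy gesture-to-action router over an application hierarchy.
class GestureRouter:
    def __init__(self):
        self.hierarchy = []           # foreground application first

    def application_changed(self, app_name, gesture_actions):
        """Update the hierarchy model when an application change is detected."""
        self.hierarchy = [(a, g) for a, g in self.hierarchy if a != app_name]
        self.hierarchy.insert(0, (app_name, gesture_actions))

    def on_gesture(self, gesture):
        """Map the gesture to an action according to the hierarchy model."""
        for app, actions in self.hierarchy:
            if gesture in actions:
                return f"{app}:{actions[gesture]}"   # stand-in for triggering
        return None

router = GestureRouter()
router.application_changed("media-player", {"swipe-left": "next-track"})
router.application_changed("photo-viewer", {"swipe-left": "next-photo"})
print(router.on_gesture("swipe-left"))  # photo-viewer:next-photo (foreground wins)
```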
  • Publication number: 20130242138
    Abstract: First and second images of a scene are captured using respectively different first and second optical paths. The first optical path includes an optical element comprising a stack of microlens arrays. A synthesized image of the scene is generated by calculations using the first and second captured images of the scene. The synthesized image has improved image characteristics as compared to both of the first captured image and the second captured image.
    Type: Application
    Filed: March 15, 2012
    Publication date: September 19, 2013
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Axel Becker-Lakus, Ankit Mohan
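
The publication leaves the synthesis step open, so the sketch below shows only one generic fusion rule as an assumption: blend the two registered captures per pixel, weighting each by a crude local-sharpness measure (absolute Laplacian response).

```python
# Generic per-pixel sharpness-weighted fusion; an assumption, not the
# publication's synthesis method.
import numpy as np

def local_sharpness(img):
    """Absolute Laplacian response as a crude per-pixel sharpness measure."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return np.abs(lap) + 1e-6

def synthesize(capture_a, capture_b):
    """Per-pixel blend of two registered grayscale captures."""
    a, b = capture_a.astype(np.float64), capture_b.astype(np.float64)
    wa, wb = local_sharpness(a), local_sharpness(b)
    return (wa * a + wb * b) / (wa + wb)
```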
  • Publication number: 20130222582
    Abstract: Systems and methods for capturing light field information including spatial and angular information using an image pickup device that includes an image sensor and at least one spatial light modulator (SLM) take multiple captures of a scene using the at least one SLM to obtain coded projections of a light field of the scene, wherein each capture is taken using at least one pattern on the at least one SLM, and recover light field data using a reconstruction process on the obtained coded projections of the light field.
    Type: Application
    Filed: February 29, 2012
    Publication date: August 29, 2013
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Ankit Mohan, Siu-Kei Tin, Eric W. Tramel
  • Patent number: 8366003
    Abstract: In an illustrative implementation of this invention, an optical pattern that encodes binary data is printed on a transparency. For example, the pattern may comprise data matrix codes. A lenslet is placed at a distance equal to its focal length from the optical pattern, and thus collimates light from the optical pattern. The collimated light travels to a conventional camera. For example, the camera may be meters distant. The camera takes a photograph of the optical pattern at a time that the camera is not focused on the scene that it is imaging, but instead is focused at infinity. Because the light is collimated, however, a focused image is captured at the camera's focal plane. The binary data in the pattern may include information regarding the object to which the optical pattern is affixed and information from which the camera's pose may be calculated.
    Type: Grant
    Filed: July 16, 2010
    Date of Patent: February 5, 2013
    Assignee: Massachusetts Institute of Technology
    Inventors: Ankit Mohan, Ramesh Raskar, Shinsaku Hiura, Quinn Smithwick, Grace Woo
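
A back-of-the-envelope sketch of the optics: because the printed pattern sits at the lenslet's focal plane, its light leaves collimated, and a distant camera focused at infinity re-images it with magnification of roughly camera focal length divided by lenslet focal length, largely independent of distance within the lenslet's cone. The specific focal lengths and pattern size below are assumptions.

```python
# Back-of-the-envelope magnification estimate; numbers are assumptions.
def bokode_magnification(camera_focal_mm, lenslet_focal_mm):
    """Approximate magnification for a camera focused at infinity."""
    return camera_focal_mm / lenslet_focal_mm

def pattern_size_on_sensor_mm(pattern_mm, camera_focal_mm, lenslet_focal_mm):
    return pattern_mm * bokode_magnification(camera_focal_mm, lenslet_focal_mm)

# A 1 mm data-matrix pattern behind a 5 mm lenslet, photographed with a 25 mm
# lens, lands on the sensor about 5x larger than the physical pattern.
print(pattern_size_on_sensor_mm(1.0, 25.0, 5.0))   # 5.0 mm
```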
  • Publication number: 20130027668
    Abstract: In exemplary implementations, this invention is a tool for subjective assessment of the visual acuity of a human eye. A microlens or pinhole array is placed over a high-resolution display. The eye is brought very near to the device. Patterns are displayed on the screen under some of the lenslets or pinholes. Using interactive software, a user causes the patterns that the eye sees to appear to be aligned. The software allows the user to move the apparent position of the patterns. This apparent motion is achieved by pre-warping the position and angle of the ray-bundles exiting the lenslet display. As the user aligns the apparent position of the patterns, the amount of pre-warping varies. The amount of pre-warping required in order for the user to see what appears to be a single, aligned pattern indicates the lens aberration of the eye.
    Type: Application
    Filed: April 22, 2011
    Publication date: January 31, 2013
    Inventors: Vitor Pamplona, Manuel Menezes de Oliveira Neto, Ankit Mohan, Ramesh Raskar
  • Publication number: 20110017826
    Abstract: In an illustrative implementation of this invention, an optical pattern that encodes binary data is printed on a transparency. For example, the pattern may comprise data matrix codes. A lenslet is placed at a distance equal to its focal length from the optical pattern, and thus collimates light from the optical pattern. The collimated light travels to a conventional camera. For example, the camera may be meters distant. The camera takes a photograph of the optical pattern at a time that the camera is not focused on the scene that it is imaging, but instead is focused at infinity. Because the light is collimated, however, a focused image is captured at the camera's focal plane. The binary data in the pattern may include information regarding the object to which the optical pattern is affixed and information from which the camera's pose may be calculated.
    Type: Application
    Filed: July 16, 2010
    Publication date: January 27, 2011
    Applicant: Massachusetts Institute of Technology
    Inventors: Ankit Mohan, Ramesh Raskar, Shinsaku Hiura, Quinn Smithwick, Grace Woo