Patents by Inventor Ankit Mohan
Ankit Mohan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11483451
Abstract: A method is performed at a system that comprises one or more video cameras and a remote server system. The method includes obtaining, via a video camera of the one or more video cameras, a continuous stream of video data for a scene. The video data stream comprises color video data in accordance with a determination that the scene has illumination above an illumination threshold and comprises infrared (IR) video data in accordance with a determination that the scene does not have illumination above the illumination threshold. The method includes colorizing the IR video data based on a subset of the color video data. The method further includes presenting the colorized video data to a user in real time.
Type: Grant
Filed: November 19, 2019
Date of Patent: October 25, 2022
Assignee: Google LLC
Inventors: George Alban Heitz, III, Rizwan Ahmed Chaudhry, Ankit Mohan, Joshua Fromm
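The patent does not disclose a specific colorization model; one minimal sketch of the idea — colorizing IR frames using chrominance borrowed from a daytime color frame of the same scene — might look like this (the function name and the luma/chroma transfer approach are illustrative assumptions, not the patented method):

```python
import numpy as np

def colorize_ir(ir_frame, color_reference):
    """Naive colorization sketch: treat the IR frame as luminance and
    borrow chrominance from a color reference frame of the same scene.
    ir_frame: (H, W) uint8; color_reference: (H, W, 3) uint8 RGB."""
    ref = color_reference.astype(np.float32)
    # Luma and two chroma-difference channels of the reference frame.
    y_ref = 0.299 * ref[..., 0] + 0.587 * ref[..., 1] + 0.114 * ref[..., 2]
    cr = ref[..., 0] - y_ref          # red-difference chroma
    cb = ref[..., 2] - y_ref          # blue-difference chroma
    y = ir_frame.astype(np.float32)   # IR intensity stands in for luma
    r = y + cr
    b = y + cb
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    out = np.stack([r, g, b], axis=-1)
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)
```

If the IR frame happens to equal the reference's luma, the function reproduces the reference exactly, which is a useful sanity check.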
-
Publication number: 20210400167
Abstract: A method is performed at a system that comprises one or more video cameras and a remote server system. The method includes obtaining, via a video camera of the one or more video cameras, a continuous stream of video data for a scene. The video data stream comprises color video data in accordance with a determination that the scene has illumination above an illumination threshold and comprises infrared (IR) video data in accordance with a determination that the scene does not have illumination above the illumination threshold. The method includes colorizing the IR video data based on a subset of the color video data. The method further includes presenting the colorized video data to a user in real time.
Type: Application
Filed: November 19, 2019
Publication date: December 23, 2021
Inventors: George Alban Heitz III, Rizwan Ahmed Chaudhry, Ankit Mohan, Joshua Fromm
-
Patent number: 10448585
Abstract: Various arrangements for visual control of a network-enabled irrigation system are presented. In some embodiments, a video stream of an outdoor location that can include a lawn may be captured. The video stream of the outdoor location may be transmitted to a cloud-based irrigation management server system. The lawn may be monitored for a period of time using the video stream. Based on monitoring the lawn for the period of time, a visual change in a state of the lawn may be identified. Based on the visual change in the state of the lawn, an adjustment of an irrigation program of the network-enabled irrigation system may be determined. An irrigation control message may be transmitted to the network-enabled irrigation system that alters an irrigation schedule for the lawn.
Type: Grant
Filed: November 14, 2017
Date of Patent: October 22, 2019
Assignee: Google LLC
Inventors: Deepak Kundra, John Nold, James Stewart, Ankit Mohan, Leon Tan
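The monitor-then-adjust loop described above can be sketched in a few lines. Everything here is a hypothetical illustration — the greenness proxy, the thresholds, and the five-minute schedule step are assumptions, not values from the patent:

```python
def greenness(frame):
    """Crude lawn-health proxy: mean green-channel dominance over a
    frame given as a flat list of (r, g, b) pixel tuples."""
    r, g, b = (sum(p[c] for p in frame) / len(frame) for c in range(3))
    return g - (r + b) / 2

def adjust_schedule(baseline, current, minutes, threshold=10.0):
    """Determine a schedule adjustment from a visual change in lawn
    state: water more when greenness drops, less when it recovers."""
    drop = baseline - current
    if drop > threshold:
        return minutes + 5            # hypothetical +5 min increment
    if drop < -threshold:
        return max(0, minutes - 5)
    return minutes
```

In the patented arrangement the comparison would run on the cloud-based server, which then sends the resulting schedule as an irrigation control message to the network-enabled controller.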
-
Publication number: 20190141919
Abstract: Various arrangements for visual control of a network-enabled irrigation system are presented. In some embodiments, a video stream of an outdoor location that can include a lawn may be captured. The video stream of the outdoor location may be transmitted to a cloud-based irrigation management server system. The lawn may be monitored for a period of time using the video stream. Based on monitoring the lawn for the period of time, a visual change in a state of the lawn may be identified. Based on the visual change in the state of the lawn, an adjustment of an irrigation program of the network-enabled irrigation system may be determined. An irrigation control message may be transmitted to the network-enabled irrigation system that alters an irrigation schedule for the lawn.
Type: Application
Filed: November 14, 2017
Publication date: May 16, 2019
Applicant: Google LLC
Inventors: Deepak Kundra, John Nold, James Stewart, Ankit Mohan, Leon Tan
-
Patent number: 10048770
Abstract: Implementations of the disclosed subject matter provide techniques for improved identification of a gesture based on data obtained from multiple devices. A method may include receiving an indication of an onset of a gesture, from a first device, at a gesture coordinating device. Next, first subsequent data describing the gesture may be received from a second device, at the gesture coordinating device. Based on the indication and the first subsequent data, the gesture may be identified. In response to identification of the gesture, an action may be performed based on the gesture identified. In some cases, the gesture coordinating device may be a cloud-based device.
Type: Grant
Filed: September 18, 2017
Date of Patent: August 14, 2018
Assignee: Google Inc.
Inventors: Boris Smus, Christian Plagemann, Ankit Mohan
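The coordinating-device flow — onset indication from one device, subsequent data from another, fused identification — can be sketched as below. The class, the time window, and the toy two-gesture classifier are all illustrative assumptions; the patent does not specify them:

```python
class GestureCoordinator:
    """Sketch of a gesture coordinating device (possibly cloud-based)
    that fuses an onset indication from a first device with subsequent
    motion data from a second device."""

    def __init__(self, window=2.0):
        self.window = window      # seconds within which data may be fused
        self.onset = None         # (device_id, timestamp) of last onset

    def report_onset(self, device_id, timestamp):
        self.onset = (device_id, timestamp)

    def report_data(self, device_id, timestamp, samples):
        # Fuse only data arriving shortly after a reported onset.
        if self.onset and 0 <= timestamp - self.onset[1] <= self.window:
            return self.identify(samples)
        return None

    @staticmethod
    def identify(samples):
        # Toy classifier: mean magnitude decides between two gestures.
        mean = sum(samples) / len(samples)
        return "swipe" if mean > 1.0 else "tap"
```

In response to the returned identification, the coordinator would then trigger an action — that last step is omitted here.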
-
Patent number: 9811311
Abstract: The present disclosure provides techniques for improving IMU-based gesture detection by a device using ultrasonic Doppler. A method may include detecting the onset of a gesture at a first device based on motion data obtained from an IMU of the first device. An indication of the detection of the onset of the gesture may be provided to a second device. Next, a first audio signal may be received from the second device. As a result, the gesture may be identified based on the motion data and the received first audio signal. In some cases, a first token encoded within the first audio signal may be decoded and the first token may be provided to a third coordinating device. A confirmation message may be received from the third coordinating device based on the first token provided and identifying the gesture may be further based on the confirmation message.
Type: Grant
Filed: March 31, 2014
Date of Patent: November 7, 2017
Assignee: Google Inc.
Inventors: Boris Smus, Christian Plagemann, Ankit Mohan, Ryan Michael Rifkin
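The core physics here is the Doppler shift of an ultrasonic tone caused by relative motion between the two devices. A minimal fusion sketch follows; the acceleration threshold, gesture labels, and fusion rule are assumed for illustration only:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly room temperature

def doppler_velocity(f_emitted, f_observed):
    """Radial velocity implied by a Doppler-shifted tone, using the
    low-speed approximation v = c * (f_obs - f_emit) / f_emit."""
    return SPEED_OF_SOUND * (f_observed - f_emitted) / f_emitted

def identify_gesture(imu_peak_accel, f_emitted, f_observed):
    """Toy fusion rule: the IMU confirms a gesture onset, and the sign
    of the Doppler shift gives the direction of motion."""
    if imu_peak_accel < 0.5:         # assumed onset threshold, in g
        return None                  # no gesture detected
    v = doppler_velocity(f_emitted, f_observed)
    return "push" if v > 0 else "pull"
```

For a 20 kHz tone, a 50 Hz upward shift corresponds to roughly 0.86 m/s of approach speed — comfortably measurable, which is why ultrasonic Doppler complements noisy IMU data well.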
-
Patent number: 9791940
Abstract: Implementations of the disclosed subject matter provide techniques for improved identification of a gesture based on data obtained from multiple devices. A method may include receiving an indication of an onset of a gesture, from a first device, at a gesture coordinating device. Next, first subsequent data describing the gesture may be received from a second device, at the gesture coordinating device. Based on the indication and the first subsequent data, the gesture may be identified. In response to identification of the gesture, an action may be performed based on the gesture identified. In some cases, the gesture coordinating device may be a cloud-based device.
Type: Grant
Filed: December 28, 2016
Date of Patent: October 17, 2017
Assignee: Google Inc.
Inventors: Boris Smus, Christian Plagemann, Ankit Mohan
-
Patent number: 9563280
Abstract: Implementations of the disclosed subject matter provide techniques for improved identification of a gesture based on data obtained from multiple devices. A method may include receiving an indication of an onset of a gesture, from a first device, at a gesture coordinating device. Next, first subsequent data describing the gesture may be received from a second device, at the gesture coordinating device. Based on the indication and the first subsequent data, the gesture may be identified. In response to identification of the gesture, an action may be performed based on the gesture identified. In some cases, the gesture coordinating device may be a cloud-based device.
Type: Grant
Filed: August 3, 2016
Date of Patent: February 7, 2017
Assignee: Google Inc.
Inventors: Boris Smus, Christian Plagemann, Ankit Mohan
-
Patent number: 9417704
Abstract: Implementations of the disclosed subject matter provide techniques for improved identification of a gesture based on data obtained from multiple devices. A method may include receiving an indication of an onset of a gesture, from a first device, at a gesture coordinating device. Next, first subsequent data describing the gesture may be received from a second device, at the gesture coordinating device. Based on the indication and the first subsequent data, the gesture may be identified. In response to identification of the gesture, an action may be performed based on the gesture identified. In some cases, the gesture coordinating device may be a cloud-based device.
Type: Grant
Filed: March 18, 2014
Date of Patent: August 16, 2016
Assignee: Google Inc.
Inventors: Boris Smus, Christian Plagemann, Ankit Mohan
-
Patent number: 9160900
Abstract: Systems and methods for capturing light field information including spatial and angular information using an image pickup device that includes an image sensor and at least one spatial light modulator (SLM) take multiple captures of a scene using the at least one SLM to obtain coded projections of a light field of the scene, wherein each capture is taken using at least one pattern on the at least one SLM, and recover light field data using a reconstruction process on the obtained coded projections of the light field.
Type: Grant
Filed: February 29, 2012
Date of Patent: October 13, 2015
Assignee: Canon Kabushiki Kaisha
Inventors: Ankit Mohan, Siu-Kei Tin, Eric W. Tramel
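Mathematically, each SLM-coded capture is a linear projection of the light field, so reconstruction amounts to solving a linear inverse problem. A toy sketch with synthetic data (problem sizes, binary codes, and the plain least-squares solver are illustrative assumptions — a practical system would use fewer captures plus a sparsity prior):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy light field with 16 unknowns (e.g. 2x2 spatial x 2x2 angular samples).
light_field = rng.random(16)

# 32 captures, each taken through a different binary SLM pattern; each
# sensor reading is a coded projection of the light field.
codes = rng.integers(0, 2, size=(32, 16)).astype(float)
captures = codes @ light_field

# Reconstruction step: recover light-field data from the coded
# projections. Plain least squares stands in for the real solver.
recovered, *_ = np.linalg.lstsq(codes, captures, rcond=None)
```

The point of the coded-projection design is that a small number of modulated 2D captures can stand in for a full 4D sampling of spatial and angular rays.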
-
Publication number: 20150261495
Abstract: The present disclosure provides techniques for improving IMU-based gesture detection by a device using ultrasonic Doppler. A method may include detecting the onset of a gesture at a first device based on motion data obtained from an IMU of the first device. An indication of the detection of the onset of the gesture may be provided to a second device. Next, a first audio signal may be received from the second device. As a result, the gesture may be identified based on the motion data and the received first audio signal. In some cases, a first token encoded within the first audio signal may be decoded and the first token may be provided to a third coordinating device. A confirmation message may be received from the third coordinating device based on the first token provided and identifying the gesture may be further based on the confirmation message.
Type: Application
Filed: March 31, 2014
Publication date: September 17, 2015
Applicant: Google Inc.
Inventors: Boris Smus, Christian Plagemann, Ankit Mohan, Ryan Michael Rifkin
-
Patent number: 9100562
Abstract: In exemplary implementations of this invention, a lens and sensor of a camera are intentionally destabilized (i.e., shifted relative to the scene being imaged) in order to create defocus effects. That is, actuators in a camera move a lens and a sensor, relative to the scene being imaged, while the camera takes a photograph. This motion simulates a larger aperture size (shallower depth of field). Thus, by translating a lens and a sensor while taking a photo, a camera with a small aperture (such as a cell phone or small point-and-shoot camera) may simulate the shallow DOF that can be achieved with a professional SLR camera. This invention may be implemented in such a way that programmable defocus effects may be achieved. Also, approximately depth-invariant defocus blur size may be achieved over a range of depths, in some embodiments of this invention.
Type: Grant
Filed: April 12, 2010
Date of Patent: August 4, 2015
Assignee: Massachusetts Institute of Technology
Inventors: Ankit Mohan, Douglas Lanman, Shinsaku Hiura, Ramesh Raskar
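Since the sensor integrates light over the exposure, translating the optics during capture is equivalent to averaging shifted copies of the image. A simulation sketch of that equivalence (the function name and the circular-shift border handling are assumptions made for brevity):

```python
import numpy as np

def simulate_defocus(image, shifts):
    """Average copies of the image translated along a motion path,
    mimicking synchronized lens/sensor translation during exposure.
    image: (H, W) array; shifts: list of (dy, dx) integer offsets.
    Circular shifts keep the sketch simple at the borders."""
    acc = np.zeros_like(image, dtype=np.float64)
    for dy, dx in shifts:
        acc += np.roll(image, (dy, dx), axis=(0, 1))
    return acc / len(shifts)
```

The set of shifts plays the role of a synthetic aperture: a wider motion path averages over more displaced copies and so produces a larger blur, i.e. a shallower apparent depth of field.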
-
Publication number: 20140254864
Abstract: System and method for image detection that include collecting image data; at a processor, over a plurality of support regions of the image data, computing a dimensionality component of a support region of the image data, wherein the dimensionality component characterizes a relationship between a nucleus pixel and the non-nucleus pixels of a support region; calculating a normalizing factor of the dimensionality component; for at least one weighted pattern of a pattern set, applying a weighted pattern to the dimensionality component to create a gradient vector, mapping the gradient vector to a probabilistic model, and normalizing the gradient vector by the normalizing factor; condensing probabilistic models of the plurality of support regions into a probabilistic distribution feature for at least one cell of the image data; applying a classifier to at least the probabilistic distribution feature; and detecting an object in the image data according to a result of the applied classifier.
Type: Application
Filed: March 7, 2013
Publication date: September 11, 2014
Applicant: Google Inc.
Inventors: Navneet Dalal, Rahul Garg, Varun Gulshan, Ankit Mohan
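The general pattern — per-pixel gradient responses condensed into a normalized per-cell distribution feature for a classifier — is in the spirit of histogram-of-oriented-gradients features. A simplified sketch of that family of features follows; it is not the claimed method, and the bin count and gradient operator are assumptions:

```python
import math

def cell_gradient_histogram(cell, bins=8):
    """Compute a normalized orientation histogram (a simple probabilistic
    distribution feature) for one cell, given as a 2D list of intensities.
    Gradients use central differences on interior pixels only."""
    hist = [0.0] * bins
    h, w = len(cell), len(cell[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi     # unsigned orientation
            hist[int(ang / math.pi * bins) % bins] += mag
    total = sum(hist) or 1.0
    return [v / total for v in hist]               # normalize to a distribution
```

Features like this from many cells would then be concatenated and fed to a trained classifier to detect objects.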
-
Patent number: 8783871
Abstract: In exemplary implementations, this invention is a tool for subjective assessment of the visual acuity of a human eye. A microlens or pinhole array is placed over a high-resolution display. The eye is brought very near to the device. Patterns are displayed on the screen under some of the lenslets or pinholes. Using interactive software, a user causes the patterns that the eye sees to appear to be aligned. The software allows the user to move the apparent position of the patterns. This apparent motion is achieved by pre-warping the position and angle of the ray-bundles exiting the lenslet display. As the user aligns the apparent position of the patterns, the amount of pre-warping varies. The amount of pre-warping required in order for the user to see what appears to be a single, aligned pattern indicates the lens aberration of the eye.
Type: Grant
Filed: April 22, 2011
Date of Patent: July 22, 2014
Assignee: Massachusetts Institute of Technology
Inventors: Vitor Pamplona, Manuel Menezes de Oliveira Neto, Ankit Mohan, Ramesh Raskar
-
Publication number: 20140157209
Abstract: A system and method that includes detecting an application change within a multi-application operating framework; updating an application hierarchy model for gesture-to-action responses with the detected application change; detecting a gesture; according to the hierarchy model, mapping the detected gesture to an action of an application; and triggering the action.Type: Application
Filed: March 12, 2013
Publication date: June 5, 2014
Applicant: Google Inc.
Inventors: Navneet Dalal, Mehul Nariyawala, Ankit Mohan, Varun Gulshan
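The hierarchy model can be pictured as an ordered list of applications, each with its own gesture bindings, consulted front to back when a gesture arrives. A minimal sketch (the class, method names, and example bindings are all hypothetical):

```python
class GestureRouter:
    """Sketch of an application hierarchy model for gesture-to-action
    responses: the most recently active application is consulted first,
    earlier applications serve as fallbacks."""

    def __init__(self):
        self.hierarchy = []   # ordered list of (app, {gesture: action})

    def app_changed(self, app, bindings):
        # An application change moves that app to the front of the model.
        self.hierarchy = [(a, b) for a, b in self.hierarchy if a != app]
        self.hierarchy.insert(0, (app, bindings))

    def dispatch(self, gesture):
        # Map a detected gesture to an action according to the hierarchy.
        for _app, bindings in self.hierarchy:
            if gesture in bindings:
                return bindings[gesture]
        return None
```

The design means a gesture unused by the foreground application still reaches a background one, rather than being silently dropped.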
-
Publication number: 20130242138
Abstract: First and second images of a scene are captured using respectively different first and second optical paths. The first optical path includes an optical element comprising a stack of microlens arrays. A synthesized image of the scene is generated by calculations using the first and second captured images of the scene. The synthesized image has improved image characteristics as compared to both of the first captured image and the second captured image.
Type: Application
Filed: March 15, 2012
Publication date: September 19, 2013
Applicant: CANON KABUSHIKI KAISHA
Inventors: Axel Becker-Lakus, Ankit Mohan
-
Publication number: 20130222582
Abstract: Systems and methods for capturing light field information including spatial and angular information using an image pickup device that includes an image sensor and at least one spatial light modulator (SLM) take multiple captures of a scene using the at least one SLM to obtain coded projections of a light field of the scene, wherein each capture is taken using at least one pattern on the at least one SLM, and recover light field data using a reconstruction process on the obtained coded projections of the light field.
Type: Application
Filed: February 29, 2012
Publication date: August 29, 2013
Applicant: CANON KABUSHIKI KAISHA
Inventors: Ankit Mohan, Siu-Kei Tin, Eric W. Tramel
-
Patent number: 8366003
Abstract: In an illustrative implementation of this invention, an optical pattern that encodes binary data is printed on a transparency. For example, the pattern may comprise data matrix codes. A lenslet is placed at a distance equal to its focal length from the optical pattern, and thus collimates light from the optical pattern. The collimated light travels to a conventional camera. For example, the camera may be meters distant. The camera takes a photograph of the optical pattern at a time that the camera is not focused on the scene that it is imaging, but instead is focused at infinity. Because the light is collimated, however, a focused image is captured at the camera's focal plane. The binary data in the pattern may include information regarding the object to which the optical pattern is affixed and information from which the camera's pose may be calculated.
Type: Grant
Filed: July 16, 2010
Date of Patent: February 5, 2013
Assignee: Massachusetts Institute of Technology
Inventors: Ankit Mohan, Ramesh Raskar, Shinsaku Hiura, Quinn Smithwick, Grace Woo
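The optics behind the abstract is just the thin-lens equation: an object placed exactly at the lenslet's focal plane images to infinity, i.e. the exiting rays are collimated, so a distant camera focused at infinity sees it sharp. A small sketch of that relationship (function name chosen here for illustration):

```python
import math

def image_distance(focal_length, object_distance):
    """Thin-lens equation, 1/v = 1/f - 1/u. Returns math.inf when the
    object sits at the focal plane: the rays leave collimated, which is
    why a camera focused at infinity captures the pattern in focus."""
    if abs(object_distance - focal_length) < 1e-12:
        return math.inf
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)
```

For instance, a pattern at u = f = 5 mm behind the lenslet images to infinity, while the same pattern at u = 10 mm would image to a finite 10 mm and appear blurred to an infinity-focused camera.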
-
Publication number: 20130027668
Abstract: In exemplary implementations, this invention is a tool for subjective assessment of the visual acuity of a human eye. A microlens or pinhole array is placed over a high-resolution display. The eye is brought very near to the device. Patterns are displayed on the screen under some of the lenslets or pinholes. Using interactive software, a user causes the patterns that the eye sees to appear to be aligned. The software allows the user to move the apparent position of the patterns. This apparent motion is achieved by pre-warping the position and angle of the ray-bundles exiting the lenslet display. As the user aligns the apparent position of the patterns, the amount of pre-warping varies. The amount of pre-warping required in order for the user to see what appears to be a single, aligned pattern indicates the lens aberration of the eye.
Type: Application
Filed: April 22, 2011
Publication date: January 31, 2013
Inventors: Vitor Pamplona, Manuel Menezes de Oliveira Neto, Ankit Mohan, Ramesh Raskar
-
Publication number: 20110017826
Abstract: In an illustrative implementation of this invention, an optical pattern that encodes binary data is printed on a transparency. For example, the pattern may comprise data matrix codes. A lenslet is placed at a distance equal to its focal length from the optical pattern, and thus collimates light from the optical pattern. The collimated light travels to a conventional camera. For example, the camera may be meters distant. The camera takes a photograph of the optical pattern at a time that the camera is not focused on the scene that it is imaging, but instead is focused at infinity. Because the light is collimated, however, a focused image is captured at the camera's focal plane. The binary data in the pattern may include information regarding the object to which the optical pattern is affixed and information from which the camera's pose may be calculated.
Type: Application
Filed: July 16, 2010
Publication date: January 27, 2011
Applicant: Massachusetts Institute of Technology
Inventors: Ankit Mohan, Ramesh Raskar, Shinsaku Hiura, Quinn Smithwick, Grace Woo