Patents by Inventor Shree K. Nayar

Shree K. Nayar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11074739
    Abstract: Methods, devices, media, and other embodiments are described for managing and configuring a pseudorandom animation system and associated computer animation models. One embodiment involves generating image modification data with a computer animation model configured to modify frames of a video image to insert and animate the computer animation model within the frames of the video image, where the computer animation model of the image modification data comprises one or more control points. Motion patterns and speed harmonics are automatically associated with the control points, and motion states are generated based on the associated motions and harmonics. A probability value is then assigned to each motion state. The motion state probabilities can then be used when generating a pseudorandom animation.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: July 27, 2021
    Assignee: Snap Inc.
    Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
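The state machinery this abstract describes, motion states built from pattern and harmonic combinations with an assigned probability each, can be sketched as follows. All names, patterns, and probability values here are illustrative assumptions, not the patent's implementation:

```python
import random

# Hypothetical sketch of the motion-state machinery described above: each
# motion state pairs a motion pattern with a speed harmonic, and an assigned
# probability value governs how often it is selected while generating the
# pseudorandom animation. Names and values are illustrative, not the patent's.

motion_patterns = ["sway", "bounce"]
speed_harmonics = [1.0, 2.0]

# One motion state per (pattern, harmonic) combination.
motion_states = [(p, h) for p in motion_patterns for h in speed_harmonics]

# A probability value assigned to each motion state (sums to 1).
probabilities = [0.4, 0.3, 0.2, 0.1]

def pick_motion_state(rng: random.Random):
    """Pseudorandomly select the next motion state by its probability."""
    return rng.choices(motion_states, weights=probabilities, k=1)[0]
```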
  • Patent number: 11069111
    Abstract: Methods, devices, media, and other embodiments are described for generating pseudorandom animations matched to audio data on a device. In one embodiment, a video is generated and output on a display of the device using a computer animation model. Audio is detected from a microphone of the device, and the audio data is processed to determine a set of audio characteristics for the audio data received at the microphone of the device. A first motion state is randomly selected from a plurality of motion states, one or more motion values of the first motion state are generated using the set of audio characteristics, and the video is updated using the one or more motion values with the computer animation model to create an animated action within the video.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: July 20, 2021
    Assignee: Snap Inc.
    Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
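The audio-to-motion step in this abstract can be illustrated with a minimal sketch, under assumed names: compute a small set of audio characteristics from microphone samples, then use one of them (RMS energy) to generate a motion value for a selected motion state. This is illustrative, not the patent's code:

```python
import math

# Assumed, simplified audio characteristics: RMS energy and peak amplitude
# computed from a window of microphone samples.
def audio_characteristics(samples):
    """Return simple audio characteristics for a window of samples."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    peak = max(abs(s) for s in samples)
    return {"rms": rms, "peak": peak}

def motion_value(base_amplitude, characteristics):
    """Generate a motion value for a motion state from audio characteristics:
    here, the louder the audio, the larger the animated motion."""
    return base_amplitude * characteristics["rms"]
```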
  • Publication number: 20210217432
    Abstract: A method of performing acoustic zooming starts with microphones capturing acoustic signals associated with video content. Beamformers generate beamformer signals using the acoustic signals. The beamformer signals correspond respectively to tiles of the video content, with each of the beamformers directed to the center of one of the tiles. A target enhanced signal, associated with a zoom area of the video content, is generated by identifying the tiles having at least portions included in the zoom area, selecting the beamformer signals corresponding to the identified tiles, and combining the selected beamformer signals. Combining the selected beamformer signals may include determining proportions for each of the identified tiles in relation to the zoom area and combining the selected beamformer signals based on those proportions to generate the target enhanced signal.
    Type: Application
    Filed: August 30, 2019
    Publication date: July 15, 2021
    Inventors: Changxi Zheng, Arun Asokan Nair, Austin Reiter, Shree K. Nayar
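The combining step in the abstract above can be sketched as an overlap-weighted sum. Tile ids, overlap fractions, and the normalization choice below are assumptions for illustration:

```python
# Hedged sketch: each tile's beamformer signal is weighted by the proportion
# of the zoom area the tile contributes, then summed into the target enhanced
# signal. Not the patent's implementation; shapes and values are illustrative.

def combine_beamformers(beams, overlaps):
    """beams: {tile_id: list of samples}; overlaps: {tile_id: overlap fraction}.

    Returns the target enhanced signal as an overlap-weighted sum of the
    beamformer signals for the tiles intersecting the zoom area."""
    total = sum(overlaps.values())
    length = len(next(iter(beams.values())))
    signal = [0.0] * length
    for tile_id, fraction in overlaps.items():
        weight = fraction / total
        for i, sample in enumerate(beams[tile_id]):
            signal[i] += weight * sample
    return signal
```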
  • Publication number: 20210201036
    Abstract: An augmented reality system having a light source and a camera. The light source projects a pattern of light onto a scene, the pattern being periodic. The camera captures an image of the scene including the projected pattern. A projector pixel of the projected pattern corresponding to an image pixel of the captured image is determined. A disparity of each correspondence is determined, the disparity being an amount that corresponding pixels are displaced between the projected pattern and the captured image. A three-dimensional computer model of the scene is generated based on the disparity. A virtual object in the scene is rendered based on the three-dimensional computer model.
    Type: Application
    Filed: January 6, 2021
    Publication date: July 1, 2021
    Inventors: Mohit Gupta, Shree K. Nayar, Vishwanath Saragadam Raja Venkata
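The disparity-to-depth step implied by the abstract above follows the standard triangulation relation for a projector-camera pair. The focal length and baseline below are placeholder assumptions, not values from the patent:

```python
# Depth from projector-camera disparity via triangulation:
# depth = focal_length * baseline / disparity. Parameters are illustrative.

def depth_from_disparity(disparity_px, focal_px=800.0, baseline_m=0.05):
    """Depth in meters from the pixel displacement between a projected
    pattern pixel and its corresponding captured image pixel."""
    if disparity_px <= 0:
        raise ValueError("a valid correspondence needs a positive disparity")
    return focal_px * baseline_m / disparity_px
```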
  • Publication number: 20210097742
    Abstract: Methods, devices, media, and other embodiments are described for generating pseudorandom animations matched to audio data on a device. In one embodiment, a video is generated and output on a display of the device using a computer animation model. Audio is detected from a microphone of the device, and the audio data is processed to determine a set of audio characteristics for the audio data received at the microphone of the device. A first motion state is randomly selected from a plurality of motion states, one or more motion values of the first motion state are generated using the set of audio characteristics, and the video is updated using the one or more motion values with the computer animation model to create an animated action within the video.
    Type: Application
    Filed: September 30, 2019
    Publication date: April 1, 2021
    Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
  • Publication number: 20210097743
    Abstract: Methods, devices, media, and other embodiments are described for a state-space system for pseudorandom animation. In one embodiment animation elements within a computer model are identified, and for each animation element motion patterns and speed harmonics are identified. A set of motion data values comprising a state-space description of the motion patterns and the speed harmonics are generated, and a probability assigned to each value of the set of motion data values for the state-space description. The probability can then be used to select and update a particular motion used in an animation generated from the computer model.
    Type: Application
    Filed: September 30, 2019
    Publication date: April 1, 2021
    Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
  • Publication number: 20210097744
    Abstract: Methods, devices, media, and other embodiments are described for generating, modifying, and outputting pseudorandom animations that can be synchronized to audio data. In one embodiment, a computer animation model comprising one or more control points is accessed by one or more processors, which associate motion patterns with a first control point of the one or more control points, and associate one or more speed harmonics with the first control point. A set of motion states is identified, with a motion state for each combination of possibilities, and a probability value is assigned to each motion state of the set of motion states. The probability value can be used to probabilistically determine a particular motion state to be part of a displayed animation for the computer animation model.
    Type: Application
    Filed: September 30, 2019
    Publication date: April 1, 2021
    Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
  • Publication number: 20210097746
    Abstract: Methods, devices, media, and other embodiments are described for managing and configuring a pseudorandom animation system and associated computer animation models. One embodiment involves generating image modification data with a computer animation model configured to modify frames of a video image to insert and animate the computer animation model within the frames of the video image, where the computer animation model of the image modification data comprises one or more control points. Motion patterns and speed harmonics are automatically associated with the control points, and motion states are generated based on the associated motions and harmonics. A probability value is then assigned to each motion state. The motion state probabilities can then be used when generating a pseudorandom animation.
    Type: Application
    Filed: September 30, 2019
    Publication date: April 1, 2021
    Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
  • Patent number: 10909373
    Abstract: An augmented reality system having a light source and a camera. The light source projects a pattern of light onto a scene, the pattern being periodic. The camera captures an image of the scene including the projected pattern. A projector pixel of the projected pattern corresponding to an image pixel of the captured image is determined. A disparity of each correspondence is determined, the disparity being an amount that corresponding pixels are displaced between the projected pattern and the captured image. A three-dimensional computer model of the scene is generated based on the disparity. A virtual object in the scene is rendered based on the three-dimensional computer model.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: February 2, 2021
    Assignee: Snap Inc.
    Inventors: Mohit Gupta, Shree K. Nayar, Vishwanath Saragadam Raja Venkata
  • Patent number: 10739447
    Abstract: In accordance with some embodiments, systems, methods and media for encoding and decoding signals used in time-of-flight imaging are provided. In some embodiments, a method for estimating the depth of a scene is provided, comprising: causing a light source to emit modulated light toward the scene based on a modulation function; causing the image sensor to generate a first value based on the modulated light and a first demodulation function of K modulation functions; causing the image sensor to generate a second value; causing the image sensor to generate a third value; and determining a depth estimate for the portion of the scene based on the first value, the second value, the third value, and three correlation functions each including at least one half of a trapezoid wave.
    Type: Grant
    Filed: September 8, 2017
    Date of Patent: August 11, 2020
    Assignees: Wisconsin Alumni Research Foundation, The Trustees of Columbia University in the City of New York
    Inventors: Felipe Gutierrez Barragan, Mohit Gupta, Andreas Velten, Eric Breitbach, Shree K. Nayar
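The patent decodes depth with trapezoid-wave correlation functions; as a hedged point of comparison, the classical sinusoidal three-measurement decoder below shows the same structure: three correlation values from the image sensor determine a phase, and the phase determines depth. Values and constants are illustrative:

```python
import math

SPEED_OF_LIGHT = 3.0e8  # meters per second

def tof_depth(c0, c1, c2, mod_freq_hz):
    """Depth estimate from three correlation measurements taken with
    demodulation functions phase-shifted by 0, 120, and 240 degrees
    (the classical sinusoidal baseline, not the patent's trapezoid decoder)."""
    phase = math.atan2(math.sqrt(3.0) * (c1 - c2), 2.0 * c0 - c1 - c2)
    phase %= 2.0 * math.pi
    return SPEED_OF_LIGHT * phase / (4.0 * math.pi * mod_freq_hz)
```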
  • Patent number: 10690489
    Abstract: Systems, methods, and media for performing shape measurement are provided. In some embodiments, systems for performing shape measurement are provided, the systems comprising: a projector that projects onto a scene a plurality of illumination patterns, wherein each of the illumination patterns has a given frequency, each of the illumination patterns is projected onto the scene during a separate period of time, three different illumination patterns are projected with a first given frequency, and only one or two different illumination patterns are projected with a second given frequency; a camera that detects an image of the scene during each of the plurality of periods of time; and a hardware processor that is configured to: determine the given frequencies of the plurality of illumination patterns; and measure a shape of an object in the scene.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: June 23, 2020
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Mohit Gupta, Shree K. Nayar
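The two-frequency structure in the abstract above can be illustrated with a hedged sketch, not the patent's exact decoder: three phase-shifted sinusoidal patterns at one frequency yield a wrapped phase per pixel, and a coarser low-frequency phase disambiguates which period the wrapped phase lies in:

```python
import math

def wrapped_phase(i0, i1, i2):
    """Wrapped phase from intensities under 0/120/240-degree shifted patterns."""
    return math.atan2(math.sqrt(3.0) * (i1 - i2),
                      2.0 * i0 - i1 - i2) % (2.0 * math.pi)

def unwrap(phase_high, phase_low, freq_ratio):
    """Pick the high-frequency period using the coarse low-frequency phase."""
    k = round((freq_ratio * phase_low - phase_high) / (2.0 * math.pi))
    return phase_high + 2.0 * math.pi * k
```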
  • Patent number: 10645367
    Abstract: In accordance with some embodiments, systems, methods and media for encoding and decoding signals used in time-of-flight imaging are provided. In some embodiments, a method for estimating the depth of a scene is provided, comprising: causing a light source to emit modulated light toward the scene based on a modulation function; causing the image sensor to generate a first value based on the modulated light and a first demodulation function of K modulation functions, including at least one trapezoid wave; causing the image sensor to generate a second value; causing the image sensor to generate a third value; and determining a depth estimate for the portion of the scene based on the first value, the second value, and the third value.
    Type: Grant
    Filed: April 20, 2017
    Date of Patent: May 5, 2020
    Assignees: Wisconsin Alumni Research Foundation, The Trustees of Columbia University in the City of New York
    Inventors: Mohit Gupta, Eric Breitbach, Andreas Velten, Shree K. Nayar
  • Patent number: 10582120
    Abstract: Systems, methods, and media for providing interactive refocusing are provided, the systems comprising: a lens; an image sensor; and a processor that: causes the image sensor to capture a plurality of images over a predetermined period of time, wherein each of the plurality of images represents a scene at a different point in time; changes a depth of field between at least a pair of the plurality of images; concatenates the plurality of images to create a duration focal volume in the order in which the images were captured; computes a space-time in-focus image that represents in-focus portions from each of the plurality of images based on the duration focal volume; and computes a space-time index map that identifies an in-focus image for each location of the scene from among the plurality of images based on the duration focal volume and the space-time in-focus image.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: March 3, 2020
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Shree K. Nayar, Daniel Miau, Changyin Zhou
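The space-time index map described above can be sketched at a single scene location: score every frame of the duration focal volume with a focus measure and record the frame index where the location is sharpest. The contrast-based focus measure here is a placeholder assumption:

```python
# Illustrative sketch, not the patent's implementation.

def focus_measure(patch):
    """Simple contrast score: total absolute deviation from the patch mean."""
    mean = sum(patch) / len(patch)
    return sum(abs(v - mean) for v in patch)

def in_focus_index(patches_over_time):
    """patches_over_time[t] is the local patch at one location in frame t.
    Returns the time index of the frame where that location is in focus."""
    scores = [focus_measure(p) for p in patches_over_time]
    return max(range(len(scores)), key=scores.__getitem__)
```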
  • Patent number: 10326956
    Abstract: Circuits for self-powered image sensors are provided. In some embodiments, an image sensor is provided, the image sensor comprising: a plurality of pixels, each of the plurality of pixels comprising: a photodiode having an anode and a cathode connected to a constant voltage level; a first transistor having: a first input connected to the anode of the photodiode; a first output connected to a reset bus; and a first control configured to receive a discharge signal; and a second transistor having: a second input connected to the anode of the photodiode; a second output connected to a pixel output bus; and a second control configured to receive a select signal; and a third transistor having: a third input coupled to each first output via the reset bus; a third output configured to be coupled to an energy storage device; and a third control configured to receive an energy harvest signal.
    Type: Grant
    Filed: July 16, 2018
    Date of Patent: June 18, 2019
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Shree K. Nayar, Daniel Sims, Mikhail Fridberg
  • Publication number: 20190146073
    Abstract: In accordance with some embodiments, systems, methods and media for encoding and decoding signals used in time-of-flight imaging are provided. In some embodiments, a method for estimating the depth of a scene is provided, comprising: causing a light source to emit modulated light toward the scene based on a modulation function; causing the image sensor to generate a first value based on the modulated light and a first demodulation function of K modulation functions; causing the image sensor to generate a second value; causing the image sensor to generate a third value; and determining a depth estimate for the portion of the scene based on the first value, the second value, the third value, and three correlation functions each including at least one half of a trapezoid wave.
    Type: Application
    Filed: September 8, 2017
    Publication date: May 16, 2019
    Inventors: Felipe Gutierrez, Mohit Gupta, Andreas Velten, Eric Breitbach, Shree K. Nayar
  • Publication number: 20190141232
    Abstract: Systems, methods, and media for providing interactive refocusing are provided, the systems comprising: a lens; an image sensor; and a processor that: causes the image sensor to capture a plurality of images over a predetermined period of time, wherein each of the plurality of images represents a scene at a different point in time; changes a depth of field between at least a pair of the plurality of images; concatenates the plurality of images to create a duration focal volume in the order in which the images were captured; computes a space-time in-focus image that represents in-focus portions from each of the plurality of images based on the duration focal volume; and computes a space-time index map that identifies an in-focus image for each location of the scene from among the plurality of images based on the duration focal volume and the space-time in-focus image.
    Type: Application
    Filed: December 14, 2018
    Publication date: May 9, 2019
    Inventors: Shree K. Nayar, Daniel Miau, Changyin Zhou
  • Patent number: 10277878
    Abstract: Systems, methods, and media for reconstructing a space-time volume from a coded image are provided. In accordance with some embodiments, systems for reconstructing a space-time volume from a coded image are provided, the systems comprising: an image sensor that outputs image data; and at least one processor that: causes a projection of the space-time volume to be captured in a single image of the image data in accordance with a coded shutter function; receives the image data; and performs a reconstruction process on the image data to provide a space-time volume corresponding to the image data.
    Type: Grant
    Filed: April 11, 2018
    Date of Patent: April 30, 2019
    Assignee: Sony Corporation
    Inventors: Yasunobu Hitomi, Jinwei Gu, Mohit Gupta, Tomoo Mitsunaga, Shree K. Nayar
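The forward model behind the abstract above can be sketched as follows: a per-pixel coded shutter function gates each frame of the space-time volume, and the captured single image is the per-pixel sum of the gated frames over time. The reconstruction step inverts this model (typically with a sparse prior) and is omitted here; all shapes and values are illustrative:

```python
# Hedged sketch of coded-shutter capture, not the patent's implementation.

def coded_capture(volume, shutter):
    """volume[t][x]: space-time intensities; shutter[t][x]: 0/1 shutter code.
    Returns the single coded image captured by the image sensor."""
    n_frames, width = len(volume), len(volume[0])
    return [sum(volume[t][x] * shutter[t][x] for t in range(n_frames))
            for x in range(width)]
```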
  • Publication number: 20180347971
    Abstract: Systems, methods, and media for performing shape measurement are provided. In some embodiments, systems for performing shape measurement are provided, the systems comprising: a projector that projects onto a scene a plurality of illumination patterns, wherein each of the illumination patterns has a given frequency, each of the illumination patterns is projected onto the scene during a separate period of time, three different illumination patterns are projected with a first given frequency, and only one or two different illumination patterns are projected with a second given frequency; a camera that detects an image of the scene during each of the plurality of periods of time; and a hardware processor that is configured to: determine the given frequencies of the plurality of illumination patterns; and measure a shape of an object in the scene.
    Type: Application
    Filed: December 22, 2017
    Publication date: December 6, 2018
    Inventors: Mohit Gupta, Shree K. Nayar
  • Patent number: 10148908
    Abstract: Systems, methods and media for providing modular cameras are provided. In some embodiments, a modular imaging device is provided, comprising: a base module comprising: a user device interface configured to receive signals from a user device; a first magnet; a first plurality of electrical contacts; and one or more circuits that are configured to receive information transmitted to the base module from the user device via the user device interface; and an image sensor module comprising: a second plurality of electrical contacts; a second magnet; a third plurality of electrical contacts; an image sensor; and one or more circuits that are configured to: receive a first control signal; cause the image sensor to capture image data; and transmit the captured image data.
    Type: Grant
    Filed: February 19, 2016
    Date of Patent: December 4, 2018
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Makoto Odamaki, Shree K. Nayar
  • Patent number: 10148893
    Abstract: Systems, methods, and media for high dynamic range imaging are provided, the systems comprising: an image sensor; and a hardware processor configured to: cause the image sensor to capture first image data having a first exposure time, second image data having a second exposure time, and third image data having a third exposure time that is substantially equal to the sum of the first exposure time and the second exposure time; generate combined image data using the first image data and the second image data.
    Type: Grant
    Filed: March 7, 2017
    Date of Patent: December 4, 2018
    Assignee: Sony Corporation
    Inventors: Mohit Gupta, Tomoo Mitsunaga, Daisuke Iso, Shree K. Nayar
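One common way to realize the combining step in the abstract above is to normalize each exposure by its exposure time and prefer the longer exposure wherever it is not saturated. This is a hedged sketch; the saturation level and exposure times are assumptions, not values from the patent:

```python
SATURATION = 255  # assumed 8-bit saturation level

def combine_exposures(short_img, long_img, t_short, t_long):
    """Merge two exposures into per-pixel linear radiance estimates."""
    merged = []
    for s, l in zip(short_img, long_img):
        if l < SATURATION:                 # long exposure valid: better SNR
            merged.append(l / t_long)
        else:                              # saturated: use the short exposure
            merged.append(s / t_short)
    return merged
```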