Patents by Inventor Gurunandan Krishnan Gorumkonda

Gurunandan Krishnan Gorumkonda has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240355239
    Abstract: An under-screen camera is provided. A camera is positioned behind a see-through display screen and oriented to capture scene image data of objects in front of the display screen. The camera captures scene image data of a real-world scene including a user. The scene image data is processed to remove artifacts created by capturing the scene image data through the see-through display screen, such as blur, noise, backscatter, wiring effects, and the like.
    Type: Application
    Filed: April 17, 2024
    Publication date: October 24, 2024
    Inventors: Shree K. Nayar, Gurunandan Krishnan Gorumkonda, Jian Wang, Bing Zhou, Sizhuo Ma, Karl Bayer, Yicheng Wu
  • Patent number: 12112427
    Abstract: Images of a scene are received. The images represent viewpoints of the scene. A pixel map of the scene is computed based on the images. Multi-plane image (MPI) layers are extracted from the pixel map in real time. The MPI layers are aggregated. The scene is rendered from a novel viewpoint based on the aggregated MPI layers.
    Type: Grant
    Filed: August 29, 2022
    Date of Patent: October 8, 2024
    Assignee: SNAP INC.
    Inventors: Numair Khalil Ullah Khan, Gurunandan Krishnan Gorumkonda, Shree K. Nayar, Yicheng Wu
  • Patent number: 12106412
    Abstract: Methods, devices, media, and other embodiments are described for generating pseudorandom animations matched to audio data on a device. In one embodiment, a video is generated and output on a display of the device using a computer animation model. Audio is detected from a microphone of the device, and the audio data is processed to determine a set of audio characteristics for the audio data received at the microphone. A first motion state is randomly selected from a plurality of motion states, one or more motion values of the first motion state are generated using the set of audio characteristics, and the video is updated using the one or more motion values with the computer animation model to create an animated action within the video.
    Type: Grant
    Filed: June 17, 2021
    Date of Patent: October 1, 2024
    Assignee: Snap Inc.
    Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
  • Patent number: 12093443
    Abstract: An eXtended Reality (XR) system provides grasp detection of a user grasping a virtual object. The grasp detection may be used as a user input into an XR application. The XR system provides a user interface of the XR application to a user of the XR system, the user interface including one or more virtual objects. The XR system captures video frame tracking data of a pose of a hand of the user while the user interacts with a virtual object of the one or more virtual objects and generates skeletal model data of the hand based on the video frame tracking data. The XR system generates grasp detection data based on the skeletal model data and virtual object data of the virtual object, and provides the grasp detection data to the XR application as user input.
    Type: Grant
    Filed: October 30, 2023
    Date of Patent: September 17, 2024
    Assignee: SNAP INC.
    Inventors: Gurunandan Krishnan Gorumkonda, Supreeth Narasimhaswamy
  • Publication number: 20240288696
    Abstract: An energy-efficient adaptive 3D sensing system is provided. The adaptive 3D sensing system includes one or more cameras and one or more projectors. The adaptive 3D sensing system captures images of a real-world scene using the one or more cameras and computes depth estimates and depth estimate confidence values for pixels of the images. The adaptive 3D sensing system computes an attention mask based on the depth estimate confidence values and commands the one or more projectors to send a distributed laser beam into one or more areas of the real-world scene based on the attention mask. The adaptive 3D sensing system captures 3D sensing image data of the one or more areas and generates 3D sensing data for the real-world scene based on the 3D sensing image data.
    Type: Application
    Filed: May 2, 2024
    Publication date: August 29, 2024
    Inventors: Jian Wang, Sizhuo Ma, Brevin Tilmon, Yicheng Wu, Gurunandan Krishnan Gorumkonda, Ramzi Zahreddine, Georgios Evangelidis
  • Publication number: 20240281936
    Abstract: A method of correcting perspective distortion of a selfie image captured at a short camera-to-face distance processes the selfie image to generate an undistorted selfie image that appears to have been taken at a longer camera-to-face distance. A pre-trained 3D face GAN processes the selfie image: the method inverts the 3D face GAN to obtain improved face latent code and camera parameters, fine-tunes the 3D face GAN generator, and manipulates the camera parameters to render a photorealistic face selfie image. The processed selfie image has less distortion in the forehead, nose, cheekbones, jaw line, chin, lips, eyes, eyebrows, ears, hair, and neck of the face.
    Type: Application
    Filed: February 22, 2023
    Publication date: August 22, 2024
    Inventors: Jian Wang, Zhixiang Wang, Gurunandan Krishnan Gorumkonda
  • Publication number: 20240184853
    Abstract: A messaging system extracts accompaniment portions from songs. Methods of accompaniment extraction include receiving an input song that includes a vocal portion and an accompaniment portion, transforming the input song into an input image, where the input image represents the frequencies and intensities of the input song, processing the input image using a convolutional neural network (CNN) to generate an output image, and transforming the output image into an output accompaniment, where the output accompaniment includes the accompaniment of the input song.
    Type: Application
    Filed: February 13, 2024
    Publication date: June 6, 2024
    Inventor: Gurunandan Krishnan Gorumkonda
  • Publication number: 20240185879
    Abstract: A messaging system provides audio character type swapping. Methods of audio character type swapping include receiving input audio data having a first characteristic and transforming the input audio data into an input image, where the input image represents the frequencies and intensities of the audio. The methods further include processing the input image using a convolutional neural network (CNN) to generate an output image and transforming the output image into output audio data having a second characteristic. The input audio and output audio may include vocals. The first characteristic may indicate a male voice and the second characteristic may indicate a female voice. The CNN is trained together with another CNN that changes input audio having the second characteristic to audio having the first characteristic. The CNNs are trained using discriminator CNNs that determine whether audio has the first characteristic or the second characteristic.
    Type: Application
    Filed: February 13, 2024
    Publication date: June 6, 2024
    Inventor: Gurunandan Krishnan Gorumkonda
  • Patent number: 12001024
    Abstract: An energy-efficient adaptive 3D sensing system is provided. The adaptive 3D sensing system includes one or more cameras and one or more projectors. The adaptive 3D sensing system captures images of a real-world scene using the one or more cameras and computes depth estimates and depth estimate confidence values for pixels of the images. The adaptive 3D sensing system computes an attention mask based on the depth estimate confidence values and commands the one or more projectors to send a distributed laser beam into one or more areas of the real-world scene based on the attention mask. The adaptive 3D sensing system captures 3D sensing image data of the one or more areas and generates 3D sensing data for the real-world scene based on the 3D sensing image data.
    Type: Grant
    Filed: April 13, 2023
    Date of Patent: June 4, 2024
    Assignee: Snap Inc.
    Inventors: Jian Wang, Sizhuo Ma, Brevin Tilmon, Yicheng Wu, Gurunandan Krishnan Gorumkonda, Ramzi Zahreddine, Georgios Evangelidis
  • Publication number: 20240177390
    Abstract: A method of generating a real-time avatar animation starts with a processor receiving acoustic segments of a real-time acoustic signal. For each acoustic segment, the processor generates, using a music analyzer neural network, a tempo value and a dance energy category, and selects dance tracks based on the tempo value and the dance energy category. Using the dance tracks, the processor generates dance sequences for avatars, generates real-time animations for the avatars based on the dance sequences and avatar characteristics, and causes the real-time animations of the avatars to be displayed on a first client device. Other embodiments are described herein.
    Type: Application
    Filed: November 30, 2023
    Publication date: May 30, 2024
    Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
  • Publication number: 20240144569
    Abstract: A method of generating a real-time avatar animation using danceability scores starts with a processor receiving a real-time acoustic signal comprising acoustic segments. The processor generates, using a danceability neural network, a danceability score for each acoustic segment. The processor generates a real-time animation of a first avatar and a second avatar based on the danceability scores and avatar characteristics associated with the avatars, and causes the real-time animation of the two avatars to be displayed on a first client device. Other embodiments are described herein.
    Type: Application
    Filed: October 20, 2023
    Publication date: May 2, 2024
    Inventor: Gurunandan Krishnan Gorumkonda
  • Publication number: 20240126084
    Abstract: An energy-efficient adaptive 3D sensing system is provided. The adaptive 3D sensing system includes one or more cameras and one or more projectors. The adaptive 3D sensing system captures images of a real-world scene using the one or more cameras and computes depth estimates and depth estimate confidence values for pixels of the images. The adaptive 3D sensing system computes an attention mask based on the depth estimate confidence values and commands the one or more projectors to send a distributed laser beam into one or more areas of the real-world scene based on the attention mask. The adaptive 3D sensing system captures 3D sensing image data of the one or more areas and generates 3D sensing data for the real-world scene based on the 3D sensing image data.
    Type: Application
    Filed: April 13, 2023
    Publication date: April 18, 2024
    Inventors: Jian Wang, Sizhuo Ma, Brevin Tilmon, Yicheng Wu, Gurunandan Krishnan Gorumkonda, Ramzi Zahreddine, Georgios Evangelidis
  • Patent number: 11947628
    Abstract: A messaging system extracts accompaniment portions from songs. Methods of accompaniment extraction include receiving an input song that includes a vocal portion and an accompaniment portion, transforming the input song into an input image, where the input image represents the frequencies and intensities of the input song, processing the input image using a convolutional neural network (CNN) to generate an output image, and transforming the output image into an output accompaniment, where the output accompaniment includes the accompaniment of the input song.
    Type: Grant
    Filed: March 30, 2021
    Date of Patent: April 2, 2024
    Assignee: Snap Inc.
    Inventor: Gurunandan Krishnan Gorumkonda
  • Publication number: 20240103610
    Abstract: A pose tracking system is provided. The pose tracking system includes an EMF tracking system having a user-worn head-mounted EMF source and one or more user-worn EMF tracking sensors attached to the wrists of the user. The EMF source is associated with a VIO tracking system such as AR glasses or the like. The pose tracking system determines a pose of the user's head and a ground plane using the VIO tracking system and a pose of the user's hands using the EMF tracking system to determine a full-body pose for the user. Metal interference with the EMF tracking system is minimized using an IMU mounted with the EMF tracking sensors. Long-term drift in the IMU and the VIO tracking system is minimized using the EMF tracking system.
    Type: Application
    Filed: September 14, 2023
    Publication date: March 28, 2024
    Inventors: Riku Arakawa, Gurunandan Krishnan Gorumkonda, Shree K. Nayar, Bing Zhou
  • Publication number: 20240094824
    Abstract: A finger gesture recognition system is provided. The finger gesture recognition system includes one or more audio sensors and one or more optic sensors. The finger gesture recognition system captures, using the one or more audio sensors, audio signal data of a finger gesture being made by a user, and captures, using the one or more optic sensors, optic signal data of the finger gesture. The finger gesture recognition system recognizes the finger gesture based on the audio signal data and the optic signal data and communicates finger gesture data of the recognized finger gesture to an Augmented Reality/Combined Reality/Virtual Reality (XR) application.
    Type: Application
    Filed: September 14, 2023
    Publication date: March 21, 2024
    Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar, Chenhan Xu, Bing Zhou
  • Patent number: 11935556
    Abstract: A messaging system provides audio character type swapping. Methods of audio character type swapping include receiving input audio data having a first characteristic and transforming the input audio data into an input image, where the input image represents the frequencies and intensities of the audio. The methods further include processing the input image using a convolutional neural network (CNN) to generate an output image and transforming the output image into output audio data having a second characteristic. The input audio and output audio may include vocals. The first characteristic may indicate a male voice and the second characteristic may indicate a female voice. The CNN is trained together with another CNN that changes input audio having the second characteristic to audio having the first characteristic. The CNNs are trained using discriminator CNNs that determine whether audio has the first characteristic or the second characteristic.
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: March 19, 2024
    Assignee: Snap Inc.
    Inventor: Gurunandan Krishnan Gorumkonda
  • Publication number: 20240054709
    Abstract: Example methods for generating an animated character in dance poses to music may include generating, by at least one processor, a music input signal based on an acoustic signal associated with the music, and receiving, by the at least one processor, a model output signal from an encoding neural network. Current generated pose data is generated using a decoding neural network, the current generated pose data being based on previous generated pose data of a previous generated pose, the music input signal, and the model output signal. An animated character is generated based on the current generated pose data, and the animated character is caused to be displayed by a display device.
    Type: Application
    Filed: October 6, 2023
    Publication date: February 15, 2024
    Inventors: Gurunandan Krishnan Gorumkonda, Hsin-Ying Lee, Jie Xu
  • Publication number: 20240013467
    Abstract: Methods, devices, media, and other embodiments are described for managing and configuring a pseudorandom animation system and associated computer animation models. One embodiment involves generating image modification data with a computer animation model configured to modify frames of a video image to insert and animate the computer animation model within the frames of the video image, where the computer animation model of the image modification data comprises one or more control points. Motion patterns and speed harmonics are automatically associated with the control points, and motion states are generated based on the associated motion patterns and speed harmonics. A probability value is then assigned to each motion state. The motion state probabilities can then be used when generating a pseudorandom animation.
    Type: Application
    Filed: September 21, 2023
    Publication date: January 11, 2024
    Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
  • Publication number: 20230419578
    Abstract: Methods, devices, media, and other embodiments are described for a state-space system for pseudorandom animation. In one embodiment, animation elements within a computer model are identified, and for each animation element, motion patterns and speed harmonics are identified. A set of motion data values comprising a state-space description of the motion patterns and the speed harmonics is generated, and a probability is assigned to each value of the set of motion data values for the state-space description. The probability can then be used to select and update a particular motion used in an animation generated from the computer model.
    Type: Application
    Filed: September 6, 2023
    Publication date: December 28, 2023
    Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
  • Patent number: 11816773
    Abstract: Example methods for generating an animated character in dance poses to music may include generating, by at least one processor, a music input signal based on an acoustic signal associated with the music, and receiving, by the at least one processor, a model output signal from an encoding neural network. Current generated pose data is generated using a decoding neural network, the current generated pose data being based on previous generated pose data of a previous generated pose, the music input signal, and the model output signal. An animated character is generated based on the current generated pose data, and the animated character is caused to be displayed by a display device.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: November 14, 2023
    Assignee: Snap Inc.
    Inventors: Gurunandan Krishnan Gorumkonda, Hsin-Ying Lee, Jie Xu
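
The under-screen camera record (publication 20240355239) describes removing blur introduced by imaging through the display. A toy, illustrative-only sketch of one such correction, assuming a known box blur and using a simple unsharp-mask step; this is not the patented pipeline:

```python
def box_blur(signal, k=3):
    """Blur with a k-tap box filter (edges clamped)."""
    n = len(signal)
    out = []
    for i in range(n):
        window = [signal[min(max(j, 0), n - 1)]
                  for j in range(i - k // 2, i + k // 2 + 1)]
        out.append(sum(window) / len(window))
    return out

def unsharp(signal, amount=1.5):
    """Sharpen: original + amount * (original - blurred)."""
    blurred = box_blur(signal)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

def sq_err(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

scene = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]   # a bright bar in the scene
captured = box_blur(scene)                # blurred by the display
restored = unsharp(captured)              # partial correction
print(sq_err(restored, scene) < sq_err(captured, scene))  # → True
```

Real restoration would also handle noise, backscatter, and wiring patterns, which a single sharpening pass cannot.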
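
Patent 12112427 renders a scene from aggregated multi-plane image (MPI) layers. A minimal sketch of the core compositing idea, reducing each layer to a single (color, alpha) pair instead of a full warped image (an assumption for brevity):

```python
def composite_mpi(layers):
    """Back-to-front 'over' compositing of (color, alpha) layers.

    `layers` is ordered far-to-near; returns the final color.
    """
    color = 0.0
    for layer_color, alpha in layers:
        color = alpha * layer_color + (1.0 - alpha) * color
    return color

# Opaque far background, semi-transparent mid layer, mostly opaque near layer.
layers = [(0.2, 1.0), (0.8, 0.5), (0.1, 0.9)]
print(round(composite_mpi(layers), 3))  # → 0.14
```

In a full MPI renderer the same "over" operator runs per pixel after each layer is reprojected to the novel viewpoint.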
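
Patent 12093443 derives grasp detection data from skeletal model data and virtual object data. A hedged sketch, assuming a spherical virtual object and testing whether opposing fingertips lie near its surface; the patent's actual criteria are not specified in the abstract:

```python
def dist(p, q):
    """Euclidean distance between two 3D points."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def is_grasping(thumb_tip, index_tip, center, radius, tol=0.02):
    """Grasp heuristic: both fingertips within `tol` of the sphere's surface."""
    return all(abs(dist(tip, center) - radius) <= tol
               for tip in (thumb_tip, index_tip))

center, radius = (0.0, 0.0, 0.0), 0.05   # a 5 cm virtual ball
print(is_grasping((0.05, 0.0, 0.0), (-0.05, 0.0, 0.0), center, radius))  # → True
print(is_grasping((0.20, 0.0, 0.0), (-0.05, 0.0, 0.0), center, radius))  # → False
```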
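
The adaptive 3D sensing records (publication 20240288696, patent 12001024, publication 20240126084) compute an attention mask from depth estimate confidence values so the projector only illuminates uncertain regions. A minimal sketch, assuming per-pixel confidences in [0, 1] and a fixed threshold:

```python
def attention_mask(confidence, threshold=0.5):
    """1 where depth confidence is low (needs active sensing), else 0."""
    return [[1 if c < threshold else 0 for c in row] for row in confidence]

conf = [[0.9, 0.2],
        [0.4, 0.8]]
print(attention_mask(conf))  # → [[0, 1], [1, 0]]
```

Steering the laser only into masked areas is what makes the scheme energy-efficient.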
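
Publication 20240281936 corrects the perspective distortion of close-range selfies. The geometric cause can be seen with a pinhole model: apparent size scales as 1/depth, so nearer facial features are magnified more when the camera is close. The depths and distances below are illustrative:

```python
def projected_size(feature_depth, camera_distance, true_size=1.0):
    """Pinhole projection: apparent size falls off with total depth."""
    return true_size / (camera_distance + feature_depth)

def nose_to_ear_ratio(camera_distance, nose_depth=0.0, ear_depth=0.1):
    """How much larger the nose appears relative to the ears."""
    return (projected_size(nose_depth, camera_distance)
            / projected_size(ear_depth, camera_distance))

near = nose_to_ear_ratio(0.3)   # ~30 cm arm's-length selfie
far = nose_to_ear_ratio(1.5)    # ~1.5 m, a photo taken by someone else
print(near > far)  # → True
```

The patented method removes this effect by re-rendering the face with manipulated camera parameters rather than by geometric warping alone.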
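
The accompaniment-extraction and audio-swap records (e.g., publication 20240184853, patent 11947628) transform audio into an image of frequencies and intensities before applying a CNN. A minimal version of that transform step is a magnitude spectrogram via a naive DFT; real systems use windowed FFTs:

```python
import cmath

def dft_magnitudes(frame):
    """Magnitudes of the discrete Fourier transform of one frame."""
    n = len(frame)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(frame)))
            for k in range(n)]

def spectrogram(samples, frame_len=4):
    """Rows = time frames, columns = frequency bins: the 'input image'."""
    return [dft_magnitudes(samples[i:i + frame_len])
            for i in range(0, len(samples) - frame_len + 1, frame_len)]

# A tone completing one cycle per frame concentrates energy in bin 1.
tone = [0.0, 1.0, 0.0, -1.0] * 2
img = spectrogram(tone)
print(len(img), len(img[0]))  # → 2 4
```

The inverse transform (image back to audio) is the final step those records describe.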
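
Publication 20240177390 selects dance tracks from a per-segment tempo value and dance energy category. A hypothetical selection step, with an invented track library (names, tempos, and categories are not from the record):

```python
DANCE_TRACKS = [
    {"name": "slow_sway", "tempo": 80,  "energy": "low"},
    {"name": "groove",    "tempo": 110, "energy": "medium"},
    {"name": "jump_step", "tempo": 128, "energy": "high"},
    {"name": "fast_spin", "tempo": 150, "energy": "high"},
]

def select_track(tempo, energy):
    """Pick the same-energy track whose annotated tempo is closest."""
    candidates = [t for t in DANCE_TRACKS if t["energy"] == energy]
    return min(candidates, key=lambda t: abs(t["tempo"] - tempo))["name"]

print(select_track(140, "high"))    # → fast_spin
print(select_track(100, "medium"))  # → groove
```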
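
Publication 20240103610 uses the drift-free EMF measurements to bound long-term IMU/VIO drift. An illustrative-only complementary blend with invented signals and gain; the record does not specify the actual fusion method:

```python
def fuse(imu, emf, gain=0.5):
    """Per-step blend of the IMU estimate toward the EMF measurement."""
    return [(1 - gain) * i + gain * e for i, e in zip(imu, emf)]

imu = [0.0, 0.1, 0.2, 0.3]      # position estimate with accumulating drift
emf = [0.01, -0.01, 0.0, 0.01]  # drift-free EMF readings around truth (0)
fused = fuse(imu, emf)
print(max(abs(x) for x in fused) < max(abs(x) for x in imu))  # → True
```

The same complementary idea runs the other way for metal interference, where the IMU corrects short-term EMF errors.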
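
Publication 20240094824 recognizes finger gestures from audio and optic signal data together. A hedged late-fusion sketch with made-up gesture names and scores, where each modality votes and the averaged scores decide:

```python
GESTURES = ["tap", "pinch", "swipe"]

def fuse_scores(audio_scores, optic_scores):
    """Average per-gesture scores from the two modalities, pick the best."""
    avg = [(a + o) / 2 for a, o in zip(audio_scores, optic_scores)]
    return GESTURES[avg.index(max(avg))]

audio = [0.6, 0.3, 0.1]   # audio alone favors "tap"
optic = [0.2, 0.7, 0.1]   # optics alone favor "pinch"
print(fuse_scores(audio, optic))  # → pinch
```

The point of combining modalities is exactly this kind of disagreement: the confident optic evidence outweighs the weaker audio vote.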
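
The pseudorandom-animation records (publications 20240013467 and 20230419578) assign a probability to each motion state and use it to select motions. A minimal version of that selection is weighted sampling; the state names and weights below are invented:

```python
import random

MOTION_STATES = {"idle": 0.5, "wave": 0.3, "jump": 0.2}  # invented weights

def pick_state(rng):
    """Sample a motion state according to its assigned probability."""
    states = list(MOTION_STATES)
    weights = [MOTION_STATES[s] for s in states]
    return rng.choices(states, weights=weights, k=1)[0]

rng = random.Random(42)  # fixed seed so the sketch is repeatable
samples = [pick_state(rng) for _ in range(1000)]
print(samples.count("idle") > samples.count("jump"))  # → True
```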
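
The dance-pose records (publication 20240054709, patent 11816773) generate each pose from the previous generated pose plus the music input signal. As a stand-in for the decoding neural network, a simple autoregressive update shows the shape of that loop:

```python
def next_pose(prev_pose, music_value, step=0.1):
    """Move each joint a fraction of the way toward a music-driven target."""
    return [p + step * (music_value - p) for p in prev_pose]

pose = [0.0, 0.0]              # two "joints", starting at rest
for m in [1.0, 1.0, 0.5]:      # per-frame music input signal
    pose = next_pose(pose, m)
print([round(p, 3) for p in pose])  # → [0.221, 0.221]
```

The real decoder conditions on the encoder's model output signal as well, not just the raw music value.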