Patents by Inventor Gurunandan Krishnan Gorumkonda

Gurunandan Krishnan Gorumkonda has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12256035
    Abstract: A shortcut keypad system for electronic communications comprises a first apparatus and a second apparatus, each comprising an input device, a processor, and a memory. The input device comprises selectable items including a first selectable item. The processor of the first apparatus receives a selection of the first selectable item and transmits a signal corresponding to the first selectable item to the second apparatus. The processor of the second apparatus receives the signal and causes the input device of the second apparatus to indicate that the signal corresponding to the first selectable item has been received. Other embodiments are described herein.
    Type: Grant
    Filed: October 17, 2022
    Date of Patent: March 18, 2025
    Assignee: Snap Inc.
    Inventors: Shree K. Nayar, Brian Anthony Smith, Karl Bayer, Marian Pho, Gurunandan Krishnan Gorumkonda
  • Publication number: 20250069620
    Abstract: An audio response system can generate multimodal messages that can be dynamically updated on a viewer's client device based on the type of audio response detected. The audio responses can include keywords or continuum-based signals (e.g., levels of wind noise). A machine learning scheme can be trained to output classification data from the audio response data for content selection and dynamic display updates.
    Type: Application
    Filed: November 14, 2024
    Publication date: February 27, 2025
    Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
  • Patent number: 12175999
    Abstract: An audio response system can generate multimodal messages that can be dynamically updated on a viewer's client device based on the type of audio response detected. The audio responses can include keywords or continuum-based signals (e.g., levels of wind noise). A machine learning scheme can be trained to output classification data from the audio response data for content selection and dynamic display updates.
    Type: Grant
    Filed: December 22, 2021
    Date of Patent: December 24, 2024
    Assignee: Snap Inc.
    Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
  • Patent number: 12135866
    Abstract: A method to retrieve media content items using captured parameters associated with a moment starts with a processor detecting an activation of a selectable element. The processor determines parameters associated with the activation, including the date and time the activation is detected and the location of the selectable element when the activation is detected. The processor stores in a memory of a client device a first record including the parameters. When a request for the first record is received, the processor transmits to a server a request for media content items associated with at least one of the parameters of the first record. Media content items are uploaded to the server from client devices communicatively coupled to the server. The processor causes a display of the client device to display the media content items. Other embodiments are described herein.
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: November 5, 2024
    Assignee: Snap Inc.
    Inventors: Shree K. Nayar, Gurunandan Krishnan Gorumkonda, Marian Pho
  • Publication number: 20240355239
    Abstract: An under-screen camera is provided. A camera is positioned behind a see-through display screen and oriented to capture scene image data of objects in front of the display screen. The camera captures scene image data of a real-world scene including a user. The scene image data is processed to remove artifacts created by capturing the scene image data through the see-through display screen, such as blur, noise, backscatter, wiring effect, and the like.
    Type: Application
    Filed: April 17, 2024
    Publication date: October 24, 2024
    Inventors: Shree K. Nayar, Gurunandan Krishnan Gorumkonda, Jian Wang, Bing Zhou, Sizhuo Ma, Karl Bayer, Yicheng Wu
  • Patent number: 12112427
    Abstract: Images of a scene are received, representing multiple viewpoints of the scene. A pixel map of the scene is computed based on the images. Multi-plane image (MPI) layers are extracted from the pixel map in real time, the MPI layers are aggregated, and the scene is rendered from a novel viewpoint based on the aggregated MPI layers.
    Type: Grant
    Filed: August 29, 2022
    Date of Patent: October 8, 2024
    Assignee: Snap Inc.
    Inventors: Numair Khalil Ullah Khan, Gurunandan Krishnan Gorumkonda, Shree K. Nayar, Yicheng Wu
  • Patent number: 12106412
    Abstract: Methods, devices, media, and other embodiments are described for generating pseudorandom animations matched to audio data on a device. In one embodiment, a video is generated and output on a display of the device using a computer animation model. Audio is detected by a microphone of the device, and the audio data is processed to determine a set of audio characteristics. A first motion state is randomly selected from a plurality of motion states, one or more motion values of the first motion state are generated using the set of audio characteristics, and the video is updated using the one or more motion values with the computer animation model to create an animated action within the video.
    Type: Grant
    Filed: June 17, 2021
    Date of Patent: October 1, 2024
    Assignee: Snap Inc.
    Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
  • Patent number: 12093443
    Abstract: An eXtended Reality (XR) system provides detection of a user grasping a virtual object, which may be used as user input into an XR application. The XR system provides a user interface of the XR application to a user of the XR system, the user interface including one or more virtual objects. The XR system captures video frame tracking data of the pose of the user's hand while the user interacts with a virtual object of the one or more virtual objects and generates skeletal model data of the hand based on the video frame tracking data. The XR system then generates grasp detection data based on the skeletal model data and virtual object data of the virtual object, and provides the grasp detection data to the XR application as user input.
    Type: Grant
    Filed: October 30, 2023
    Date of Patent: September 17, 2024
    Assignee: Snap Inc.
    Inventors: Gurunandan Krishnan Gorumkonda, Supreeth Narasimhaswamy
  • Publication number: 20240288696
    Abstract: An energy-efficient adaptive 3D sensing system is provided. The adaptive 3D sensing system includes one or more cameras and one or more projectors. It captures images of a real-world scene using the one or more cameras and computes depth estimates and depth estimate confidence values for pixels of the images. It computes an attention mask based on the depth estimate confidence values and commands the one or more projectors to send a distributed laser beam into one or more areas of the real-world scene based on the attention mask. It then captures 3D sensing image data of the one or more areas and generates 3D sensing data for the real-world scene based on that image data.
    Type: Application
    Filed: May 2, 2024
    Publication date: August 29, 2024
    Inventors: Jian Wang, Sizhuo Ma, Brevin Tilmon, Yicheng Wu, Gurunandan Krishnan Gorumkonda, Ramzi Zahreddine, Georgios Evangelidis
  • Publication number: 20240281936
    Abstract: A method of correcting perspective distortion of a selfie image captured with a short camera-to-face distance, by processing the selfie image and generating an undistorted selfie image that appears to have been taken with a longer camera-to-face distance. A pre-trained 3D face GAN processes the selfie image; the method inverts the 3D face GAN to obtain improved face latent code and camera parameters, fine-tunes the 3D face GAN generator, and manipulates camera parameters to render a photorealistic face selfie image. The processed selfie image has less distortion in the forehead, nose, cheekbones, jaw line, chin, lips, eyes, eyebrows, ears, hair, and neck of the face.
    Type: Application
    Filed: February 22, 2023
    Publication date: August 22, 2024
    Inventors: Jian Wang, Zhixiang Wang, Gurunandan Krishnan Gorumkonda
  • Publication number: 20240184853
    Abstract: A messaging system extracts accompaniment portions from songs. Methods of accompaniment extraction include receiving an input song that includes a vocal portion and an accompaniment portion, transforming the input song into an input image representing the frequencies and intensities of the song, processing the input image using a convolutional neural network (CNN) to generate an output image, and transforming the output image into an output accompaniment, where the output accompaniment includes the accompaniment of the input song.
    Type: Application
    Filed: February 13, 2024
    Publication date: June 6, 2024
    Inventor: Gurunandan Krishnan Gorumkonda
  • Publication number: 20240185879
    Abstract: A messaging system for audio character type swapping is provided. Methods of audio character type swapping include receiving input audio data having a first characteristic and transforming the input audio data into an input image representing the frequencies and intensities of the audio. The methods further include processing the input image using a convolutional neural network (CNN) to generate an output image and transforming the output image into output audio data having a second characteristic. The input and output audio may include vocals; the first characteristic may indicate a male voice and the second characteristic a female voice. The CNN is trained together with another CNN that changes audio having the second characteristic into audio having the first characteristic. The CNNs are trained using discriminator CNNs that determine whether audio has the first or second characteristic.
    Type: Application
    Filed: February 13, 2024
    Publication date: June 6, 2024
    Inventor: Gurunandan Krishnan Gorumkonda
  • Patent number: 12001024
    Abstract: An energy-efficient adaptive 3D sensing system is provided. The adaptive 3D sensing system includes one or more cameras and one or more projectors. It captures images of a real-world scene using the one or more cameras and computes depth estimates and depth estimate confidence values for pixels of the images. It computes an attention mask based on the depth estimate confidence values and commands the one or more projectors to send a distributed laser beam into one or more areas of the real-world scene based on the attention mask. It then captures 3D sensing image data of the one or more areas and generates 3D sensing data for the real-world scene based on that image data.
    Type: Grant
    Filed: April 13, 2023
    Date of Patent: June 4, 2024
    Assignee: Snap Inc.
    Inventors: Jian Wang, Sizhuo Ma, Brevin Tilmon, Yicheng Wu, Gurunandan Krishnan Gorumkonda, Ramzi Zahreddine, Georgios Evangelidis
  • Publication number: 20240177390
    Abstract: A method of generating a real-time avatar animation starts with a processor receiving acoustic segments of a real-time acoustic signal. For each acoustic segment, the processor uses a music analyzer neural network to generate a tempo value and a dance energy category, and selects dance tracks based on them. The processor uses the dance tracks to generate dance sequences for avatars, generates real-time animations for the avatars based on the dance sequences and avatar characteristics, and causes the real-time animations of the avatars to be displayed on a first client device. Other embodiments are described herein.
    Type: Application
    Filed: November 30, 2023
    Publication date: May 30, 2024
    Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
  • Publication number: 20240144569
    Abstract: A method of generating a real-time avatar animation using danceability scores starts with a processor receiving a real-time acoustic signal comprising acoustic segments. The processor uses a danceability neural network to generate a danceability score for each acoustic segment. The processor generates a real-time animation of a first avatar and a second avatar based on the danceability scores and the avatar characteristics associated with each avatar, and causes the real-time animation of the first and second avatars to be displayed on a first client device. Other embodiments are described herein.
    Type: Application
    Filed: October 20, 2023
    Publication date: May 2, 2024
    Inventor: Gurunandan Krishnan Gorumkonda
  • Publication number: 20240126084
    Abstract: An energy-efficient adaptive 3D sensing system is provided. The adaptive 3D sensing system includes one or more cameras and one or more projectors. It captures images of a real-world scene using the one or more cameras and computes depth estimates and depth estimate confidence values for pixels of the images. It computes an attention mask based on the depth estimate confidence values and commands the one or more projectors to send a distributed laser beam into one or more areas of the real-world scene based on the attention mask. It then captures 3D sensing image data of the one or more areas and generates 3D sensing data for the real-world scene based on that image data.
    Type: Application
    Filed: April 13, 2023
    Publication date: April 18, 2024
    Inventors: Jian Wang, Sizhuo Ma, Brevin Tilmon, Yicheng Wu, Gurunandan Krishnan Gorumkonda, Ramzi Zahreddine, Georgios Evangelidis
  • Patent number: 11947628
    Abstract: A messaging system extracts accompaniment portions from songs. Methods of accompaniment extraction include receiving an input song that includes a vocal portion and an accompaniment portion, transforming the input song into an input image representing the frequencies and intensities of the song, processing the input image using a convolutional neural network (CNN) to generate an output image, and transforming the output image into an output accompaniment, where the output accompaniment includes the accompaniment of the input song.
    Type: Grant
    Filed: March 30, 2021
    Date of Patent: April 2, 2024
    Assignee: Snap Inc.
    Inventor: Gurunandan Krishnan Gorumkonda
  • Publication number: 20240103610
    Abstract: A pose tracking system is provided. The pose tracking system includes an EMF tracking system having a user-worn head-mounted EMF source and one or more user-worn EMF tracking sensors attached to the user's wrists. The EMF source is associated with a VIO tracking system such as AR glasses or the like. The pose tracking system determines the pose of the user's head and a ground plane using the VIO tracking system, and the pose of the user's hands using the EMF tracking system, to determine a full-body pose for the user. Metal interference with the EMF tracking system is minimized using an IMU mounted with the EMF tracking sensors, and long-term drift in the IMU and the VIO tracking system is minimized using the EMF tracking system.
    Type: Application
    Filed: September 14, 2023
    Publication date: March 28, 2024
    Inventors: Riku Arakawa, Gurunandan Krishnan Gorumkonda, Shree K. Nayar, Bing Zhou
  • Publication number: 20240094824
    Abstract: A finger gesture recognition system is provided. The finger gesture recognition system includes one or more audio sensors and one or more optic sensors. The finger gesture recognition system captures, using the one or more audio sensors, audio signal data of a finger gesture being made by a user, and captures, using the one or more optic sensors, optic signal data of the finger gesture. The finger gesture recognition system recognizes the finger gesture based on the audio signal data and the optic signal data and communicates finger gesture data of the recognized finger gesture to an Augmented Reality/Combined Reality/Virtual Reality (XR) application.
    Type: Application
    Filed: September 14, 2023
    Publication date: March 21, 2024
    Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar, Chenhan Xu, Bing Zhou
  • Patent number: 11935556
    Abstract: A messaging system for audio character type swapping is provided. Methods of audio character type swapping include receiving input audio data having a first characteristic and transforming the input audio data into an input image representing the frequencies and intensities of the audio. The methods further include processing the input image using a convolutional neural network (CNN) to generate an output image and transforming the output image into output audio data having a second characteristic. The input and output audio may include vocals; the first characteristic may indicate a male voice and the second characteristic a female voice. The CNN is trained together with another CNN that changes audio having the second characteristic into audio having the first characteristic. The CNNs are trained using discriminator CNNs that determine whether audio has the first or second characteristic.
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: March 19, 2024
    Assignee: Snap Inc.
    Inventor: Gurunandan Krishnan Gorumkonda
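
Several of the audio-related abstracts above (accompaniment extraction, voice character swapping) describe transforming audio into "an input image representing the frequencies and intensities" of the signal before applying a CNN. That transformation is conventionally a magnitude spectrogram. As a hedged illustration of this step only (not the patented methods, and with illustrative frame and hop sizes), a minimal sketch in plain NumPy:

```python
import numpy as np

def magnitude_spectrogram(audio, frame_size=512, hop=256):
    """Transform a 1-D audio signal into a 2-D image whose rows are
    frequency bins, columns are time frames, and pixel values are the
    magnitudes (intensities) of each frequency component."""
    window = np.hanning(frame_size)
    n_frames = 1 + (len(audio) - frame_size) // hop
    # Slice the signal into overlapping windowed frames
    frames = np.stack([
        audio[i * hop : i * hop + frame_size] * window
        for i in range(n_frames)
    ])
    # Real FFT of each frame gives frequency content per time step
    spectrum = np.fft.rfft(frames, axis=1)
    return np.abs(spectrum).T  # shape: (frame_size // 2 + 1, n_frames)

# Sanity check: a 440 Hz tone sampled at 16 kHz should concentrate
# energy in the frequency bin nearest 440 Hz.
sr = 16000
t = np.arange(sr) / sr
image = magnitude_spectrogram(np.sin(2 * np.pi * 440 * t))
peak_bin = image[:, 0].argmax()
print(image.shape, peak_bin)  # bin ~ round(440 * 512 / 16000) = 14
```

A CNN then operates on `image` exactly as it would on a grayscale picture; the inverse step described in the abstracts (image back to audio) additionally requires retaining or estimating phase, which this sketch omits.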