Patents by Inventor Gurunandan Krishnan Gorumkonda
Gurunandan Krishnan Gorumkonda has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12256035
Abstract: A shortcut keypad system for electronic communications comprises a first apparatus and a second apparatus, each comprising an input device, a processor, and a memory. The input device comprises selectable items including a first selectable item. The processor of the first apparatus receives a selection of the first selectable item and transmits a signal corresponding to the first selectable item to the second apparatus. The processor of the second apparatus receives the signal corresponding to the first selectable item and causes the input device of the second apparatus to indicate that the signal corresponding to the first selectable item is received. Other embodiments are described herein.
Type: Grant
Filed: October 17, 2022
Date of Patent: March 18, 2025
Assignee: Snap Inc.
Inventors: Shree K. Nayar, Brian Anthony Smith, Karl Bayer, Marian Pho, Gurunandan Krishnan Gorumkonda
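The entry above describes a simple selection-and-acknowledgement exchange between two paired devices. The sketch below is a minimal Python illustration rather than the patented implementation: the class name, method names, and the print-based "indication" are assumptions made for the example.

```python
# Minimal sketch of the two-apparatus shortcut-keypad exchange; names are hypothetical.

class Apparatus:
    def __init__(self, name, items):
        self.name = name
        self.items = list(items)   # selectable items on the input device
        self.peer = None           # the paired apparatus
        self.indicated = []        # items whose signals were received

    def pair(self, other):
        self.peer, other.peer = other, self

    def select_item(self, item):
        # Processor of the first apparatus receives a selection and transmits
        # a signal corresponding to that item to the second apparatus.
        if item not in self.items:
            raise ValueError(f"{item!r} is not a selectable item")
        self.peer.receive_signal(item)

    def receive_signal(self, item):
        # Processor of the second apparatus causes its input device to indicate
        # that the signal for this item was received (here: record and print).
        self.indicated.append(item)
        print(f"{self.name}: item {item!r} lit up to show receipt")


a = Apparatus("first", ["hello", "miss you", "call me"])
b = Apparatus("second", ["hello", "miss you", "call me"])
a.pair(b)
a.select_item("miss you")   # -> second: item 'miss you' lit up to show receipt
```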
-
Publication number: 20250069620
Abstract: An audio response system can generate multimodal messages that can be dynamically updated on a viewer's client device based on the type of audio response detected. The audio responses can include keywords or continuum-based signals (e.g., levels of wind noise). A machine learning scheme can be trained to output classification data from the audio response data for content selection and dynamic display updates.
Type: Application
Filed: November 14, 2024
Publication date: February 27, 2025
Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
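The abstract above (and the granted patent that follows) describes classifying a viewer's audio response and using the classification to update a multimodal message. A rough Python sketch of that flow is below; the feature choices, class names, and the threshold rules standing in for the trained machine learning scheme are all assumptions for illustration.

```python
# Illustrative only: thresholds stand in for the trained classifier described above.
import numpy as np

CONTENT_BY_CLASS = {            # hypothetical content variants keyed by class
    "quiet": "show default card",
    "windy": "show outdoor/weather variant",
    "speech": "show reply prompt",
}

def extract_features(samples):
    """Crude features: overall RMS level and a spectral-flatness-like measure."""
    rms = float(np.sqrt(np.mean(samples ** 2)))
    spectrum = np.abs(np.fft.rfft(samples)) + 1e-9
    flatness = float(np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum))
    return rms, flatness

def classify(rms, flatness):
    """Stand-in for the trained scheme producing classification data."""
    if rms < 0.01:
        return "quiet"
    # Broadband, noise-like audio (high spectral flatness) resembles wind noise.
    return "windy" if flatness > 0.5 else "speech"

def update_message(samples):
    cls = classify(*extract_features(samples))
    return cls, CONTENT_BY_CLASS[cls]

rng = np.random.default_rng(0)
wind = 0.2 * rng.standard_normal(16000)                            # noise-like clip
tone = 0.2 * np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)    # voiced-like clip
print(update_message(wind))   # ('windy', 'show outdoor/weather variant')
print(update_message(tone))   # ('speech', 'show reply prompt')
```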
-
Patent number: 12175999
Abstract: An audio response system can generate multimodal messages that can be dynamically updated on a viewer's client device based on the type of audio response detected. The audio responses can include keywords or continuum-based signals (e.g., levels of wind noise). A machine learning scheme can be trained to output classification data from the audio response data for content selection and dynamic display updates.
Type: Grant
Filed: December 22, 2021
Date of Patent: December 24, 2024
Assignee: Snap Inc.
Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
-
Patent number: 12135866
Abstract: A method to retrieve media content items using captured parameters associated with a moment starts with a processor detecting an activation of a selectable element. The processor determines parameters associated with the activation, including the date and time the activation is detected and the location of the selectable element when the activation is detected. The processor stores in a memory of a client device a first record including the parameters. When a request for the first record is received, the processor transmits the request to a server for media content items associated with at least one of the parameters of the first record. Media content items are uploaded to the server from client devices communicatively coupled to the server. The processor causes a display of the client device to display the media content items. Other embodiments are described herein.
Type: Grant
Filed: December 30, 2020
Date of Patent: November 5, 2024
Assignee: SNAP INC.
Inventors: Shree K. Nayar, Gurunandan Krishnan Gorumkonda, Marian Pho
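A minimal sketch of the record-and-query idea in this abstract follows: a "moment" record captures the activation time and location, and uploaded media items are matched against those parameters. The data structures, the in-memory stand-in for the server, and the matching windows are illustrative assumptions.

```python
# Illustrative sketch; the server store, tolerances, and field names are assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class MomentRecord:
    activated_at: datetime   # date and time the activation was detected
    location: tuple          # (lat, lon) of the selectable element

@dataclass
class MediaItem:
    captured_at: datetime
    location: tuple
    uri: str

SERVER_MEDIA = [             # items uploaded from other client devices
    MediaItem(datetime(2024, 7, 4, 21, 2), (40.71, -74.00), "media/fireworks_a.jpg"),
    MediaItem(datetime(2024, 7, 4, 21, 5), (40.71, -74.01), "media/fireworks_b.jpg"),
    MediaItem(datetime(2024, 7, 5, 9, 0), (34.05, -118.24), "media/beach.jpg"),
]

def on_activation(location):
    """Store a first record with the captured parameters (fixed time for determinism)."""
    return MomentRecord(activated_at=datetime(2024, 7, 4, 21, 3), location=location)

def query_server(record, time_window=timedelta(minutes=30), max_deg=0.05):
    """Return media items associated with at least one of the record's parameters."""
    hits = []
    for item in SERVER_MEDIA:
        near_time = abs(item.captured_at - record.activated_at) <= time_window
        near_place = (abs(item.location[0] - record.location[0]) <= max_deg and
                      abs(item.location[1] - record.location[1]) <= max_deg)
        if near_time or near_place:
            hits.append(item.uri)
    return hits

record = on_activation(location=(40.71, -74.00))
print(query_server(record))   # ['media/fireworks_a.jpg', 'media/fireworks_b.jpg']
```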
-
Publication number: 20240355239
Abstract: An under-screen camera is provided. A camera is positioned behind a see-through display screen and positioned to capture scene image data of objects in front of the display screen. The camera captures scene image data of a real-world scene including a user. The scene image data is processed to remove artifacts created by capturing it through the see-through display screen, such as blur, noise, backscatter, wiring effects, and the like.
Type: Application
Filed: April 17, 2024
Publication date: October 24, 2024
Inventors: Shree K. Nayar, Gurunandan Krishnan Gorumkonda, Jian Wang, Bing Zhou, Sizhuo Ma, Karl Bayer, Yicheng Wu
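The abstract describes a capture-through-display degradation followed by a restoration step. The toy Python sketch below only illustrates that structure: a synthetic blur-plus-noise degradation stands in for the display, and an unsharp-mask cleanup stands in for whatever restoration the actual system uses; both models are assumptions.

```python
# Toy degrade-then-restore pipeline; degradation and restoration models are assumptions.
import numpy as np

def box_blur(img, k=5):
    """Brute-force k x k box blur with edge padding (k odd)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            out += padded[pad + dy: pad + dy + img.shape[0],
                          pad + dx: pad + dx + img.shape[1]]
    return out / (k * k)

def capture_through_display(scene, rng):
    """Hypothetical degradation: blur + attenuation + sensor noise."""
    return 0.7 * box_blur(scene) + 0.02 * rng.standard_normal(scene.shape)

def restore(captured, amount=1.5):
    """Unsharp mask to recover high-frequency detail, then undo the assumed attenuation."""
    low = box_blur(captured)
    sharp = captured + amount * (captured - low)
    return np.clip(sharp / 0.7, 0.0, 1.0)

rng = np.random.default_rng(1)
scene = np.zeros((64, 64)); scene[16:48, 16:48] = 1.0   # bright square "scene"
captured = capture_through_display(scene, rng)
restored = restore(captured)
print("error before:", round(float(np.mean(np.abs(captured - scene))), 3))
print("error after: ", round(float(np.mean(np.abs(restored - scene))), 3))
```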
-
Patent number: 12112427
Abstract: Images of a scene are received. The images represent viewpoints corresponding to the scene. A pixel map of the scene is computed based on the plurality of images. Multi-plane image (MPI) layers from the pixel map are extracted in real-time. The MPI layers are aggregated. The scene is rendered from a novel viewpoint based on the aggregated MPI layers.
Type: Grant
Filed: August 29, 2022
Date of Patent: October 8, 2024
Assignee: SNAP INC.
Inventors: Numair Khalil Ullah Khan, Gurunandan Krishnan Gorumkonda, Shree K. Nayar, Yicheng Wu
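As a rough illustration of the multi-plane image (MPI) idea, the sketch below bins pixels into a few fronto-parallel depth layers and renders a novel viewpoint by shifting each layer according to its depth and compositing back to front. The layer count, the parallax model, and the toy scene are assumptions; the actual system extracts layers from multiple input images in real time.

```python
# Simplified MPI layering and novel-view compositing; scene and parameters are assumptions.
import numpy as np

H, W, NUM_LAYERS = 48, 48, 4

# Toy scene: background at depth 8, a square object at depth 2.
color = np.full((H, W), 0.2); color[12:28, 12:28] = 1.0
depth = np.full((H, W), 8.0); depth[12:28, 12:28] = 2.0

def extract_mpi_layers(color, depth, num_layers):
    """Bin each pixel into one depth layer; each layer carries color + alpha."""
    edges = np.linspace(depth.min(), depth.max() + 1e-6, num_layers + 1)
    layers = []
    for i in range(num_layers):
        mask = (depth >= edges[i]) & (depth < edges[i + 1])
        layers.append((edges[i], color * mask, mask.astype(float)))
    return layers

def render_novel_view(layers, baseline=3.0):
    """Composite layers back to front, shifting each by its parallax."""
    out = np.zeros((H, W))
    for layer_depth, layer_color, alpha in sorted(layers, key=lambda l: -l[0]):
        shift = int(round(baseline / max(layer_depth, 1e-3)))   # nearer = bigger shift
        c = np.roll(layer_color, shift, axis=1)
        a = np.roll(alpha, shift, axis=1)
        out = out * (1 - a) + c * a
    return out

novel = render_novel_view(extract_mpi_layers(color, depth, NUM_LAYERS))
print("foreground square shifted by parallax:",
      bool(novel[20, 12] < 0.5 and novel[20, 13] < 0.5))   # True
```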
-
Patent number: 12106412
Abstract: Methods, devices, media, and other embodiments are described for generating pseudorandom animations matched to audio data on a device. In one embodiment, a video is generated and output on a display of the device using a computer animation model. Audio is detected by a microphone of the device, and the audio data is processed to determine a set of audio characteristics for the audio data received at the microphone. A first motion state is randomly selected from a plurality of motion states, one or more motion values of the first motion state are generated using the set of audio characteristics, and the video is updated using the one or more motion values with the computer animation model to create an animated action within the video.
Type: Grant
Filed: June 17, 2021
Date of Patent: October 1, 2024
Assignee: Snap Inc.
Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
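A hedged sketch of the described flow appears below: audio characteristics are computed, a motion state is selected pseudorandomly, and motion values are generated from the characteristics to drive the animation. The motion-state names, the feature set, and the mapping to motion values are illustrative assumptions, not the patented computer animation model.

```python
# Illustrative mapping from audio characteristics to motion values; names are hypothetical.
import numpy as np

MOTION_STATES = ["sway", "bounce", "spin"]        # hypothetical state set

def audio_characteristics(samples, hop=800, rate=16000):
    """Per-frame energy plus a crude beat-strength estimate."""
    frames = samples[: len(samples) // hop * hop].reshape(-1, hop)
    energy = np.sqrt(np.mean(frames ** 2, axis=1))
    beat_strength = float(np.std(energy) / (np.mean(energy) + 1e-9))
    return energy, beat_strength

def motion_values(state, energy, beat_strength, rng):
    """Map characteristics to per-frame motion values for the chosen state."""
    base = energy / (energy.max() + 1e-9)
    jitter = 0.1 * rng.standard_normal(len(base))      # pseudorandom variation
    amplitude = {"sway": 15, "bounce": 30, "spin": 90}[state]
    return amplitude * (1 + beat_strength) * (base + jitter)

rng = np.random.default_rng(7)
t = np.arange(32000) / 16000
audio = np.sin(2 * np.pi * 440 * t) * (0.5 + 0.5 * np.sign(np.sin(2 * np.pi * 2 * t)))
energy, beat = audio_characteristics(audio)
state = rng.choice(MOTION_STATES)                 # randomly selected motion state
values = motion_values(state, energy, beat, rng)
print(state, values[:4].round(1))                 # motion values driving the video
```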
-
Patent number: 12093443
Abstract: An eXtended Reality (XR) system provides grasp detection of a user grasping a virtual object. The grasp detection may be used as a user input into an XR application. The XR system provides a user interface of the XR application to a user of the XR system, the user interface including one or more virtual objects. The XR system captures video frame tracking data of a pose of a hand of the user while the user interacts with a virtual object of the one or more virtual objects and generates skeletal model data of the hand of the user based on the video frame tracking data. The XR system generates grasp detection data based on the skeletal model data and virtual object data of the virtual object, and provides the grasp detection data to the XR application as user input into the XR application.
Type: Grant
Filed: October 30, 2023
Date of Patent: September 17, 2024
Assignee: SNAP INC.
Inventors: Gurunandan Krishnan Gorumkonda, Supreeth Narasimhaswamy
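The sketch below illustrates one plausible reading of grasp detection from skeletal model data and virtual object data: fingertip positions from the hand skeleton are tested against a spherical virtual object, and a grasp is reported when the thumb and another fingertip contact it from roughly opposite sides. The sphere representation, thresholds, and function names are assumptions.

```python
# Illustrative grasp test from fingertip positions; geometry and tolerances are assumptions.
import numpy as np

def detect_grasp(fingertips, center, radius, touch_tol=0.01):
    """fingertips: dict of name -> 3D position (metres). Returns True on a grasp."""
    center = np.asarray(center, dtype=float)
    touching = {name: np.asarray(p, dtype=float)
                for name, p in fingertips.items()
                if abs(np.linalg.norm(np.asarray(p) - center) - radius) <= touch_tol}
    if "thumb" not in touching or len(touching) < 2:
        return False
    thumb_dir = touching["thumb"] - center
    for name, pos in touching.items():
        if name == "thumb":
            continue
        # Opposing contact: this fingertip touches the far side from the thumb.
        if np.dot(pos - center, thumb_dir) < 0:
            return True
    return False

sphere_center, sphere_radius = (0.0, 0.0, 0.0), 0.05
grasping_hand = {
    "thumb":  (0.05, 0.0, 0.0),    # on the sphere, +x side
    "index":  (-0.05, 0.0, 0.0),   # on the sphere, -x side (opposes thumb)
    "middle": (0.0, 0.12, 0.0),    # away from the object
}
open_hand = {"thumb": (0.2, 0.0, 0.0), "index": (0.25, 0.0, 0.0)}
print(detect_grasp(grasping_hand, sphere_center, sphere_radius))  # True
print(detect_grasp(open_hand, sphere_center, sphere_radius))      # False
```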
-
Publication number: 20240288696
Abstract: An energy-efficient adaptive 3D sensing system. The adaptive 3D sensing system includes one or more cameras and one or more projectors. The adaptive 3D sensing system captures images of a real-world scene using the one or more cameras and computes depth estimates and depth estimate confidence values for pixels of the images. The adaptive 3D sensing system computes an attention mask based on the one or more depth estimate confidence values and commands the one or more projectors to send a distributed laser beam into one or more areas of the real-world scene based on the attention mask. The adaptive 3D sensing system captures 3D sensing image data of the one or more areas of the real-world scene and generates 3D sensing data for the real-world scene based on the 3D sensing image data.
Type: Application
Filed: May 2, 2024
Publication date: August 29, 2024
Inventors: Jian Wang, Sizhuo Ma, Brevin Tilmon, Yicheng Wu, Gurunandan Krishnan Gorumkonda, Ramzi Zahreddine, Georgios Evangelidis
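The key step in this abstract (and its granted counterpart below) is turning depth-estimate confidence into an attention mask that limits where the projector spends energy. The sketch below illustrates that step with a crude stereo-disagreement confidence proxy and a fixed threshold, both of which are assumptions rather than the system's actual confidence measure.

```python
# Illustrative attention-mask computation; confidence proxy and threshold are assumptions.
import numpy as np

def confidence_from_stereo(depth_left, depth_right):
    """Crude confidence: agreement between two passive depth estimates."""
    disagreement = np.abs(depth_left - depth_right)
    return 1.0 / (1.0 + disagreement)        # 1 = consistent, -> 0 = unreliable

def attention_mask(confidence, threshold=0.5):
    """Mark pixels whose passive depth is not trustworthy."""
    return confidence < threshold

rng = np.random.default_rng(3)
depth_a = np.full((60, 80), 2.0)
depth_b = depth_a + 0.02 * rng.standard_normal(depth_a.shape)
# A textureless patch where the two estimates disagree badly:
depth_b[20:40, 30:60] += rng.uniform(1.0, 3.0, size=(20, 30))

mask = attention_mask(confidence_from_stereo(depth_a, depth_b))
coverage = mask.mean()
print(f"project laser into {coverage:.0%} of the scene instead of 100%")
```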
-
Publication number: 20240281936
Abstract: A method of correcting perspective distortion of a selfie image captured with a short camera-to-face distance by processing the selfie image and generating an undistorted selfie image appearing to be taken with a longer camera-to-face distance. The method processes the selfie image with a pre-trained 3D face GAN, inverts the GAN to obtain an improved face latent code and camera parameters, fine-tunes the 3D face GAN generator, and manipulates the camera parameters to render a photorealistic face selfie image. The processed selfie image has less distortion in the forehead, nose, cheek bones, jaw line, chin, lips, eyes, eyebrows, ears, hair, and neck of the face.
Type: Application
Filed: February 22, 2023
Publication date: August 22, 2024
Inventors: Jian Wang, Zhixiang Wang, Gurunandan Krishnan Gorumkonda
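Setting the GAN machinery aside, the camera-parameter manipulation at the end of the pipeline amounts to re-rendering the face as if photographed from farther away with a longer focal length. The toy sketch below illustrates only that geometric effect with a two-point "face"; the distances, focal lengths, and the omission of the 3D face GAN are assumptions for illustration.

```python
# Dolly-zoom illustration of why a longer camera-to-face distance reduces distortion.
import numpy as np

def project_x(point, cam_distance, focal):
    """Pinhole projection of a point given as (x, y, z offset from the face plane)."""
    x, y, z_off = point
    return focal * x / (cam_distance + z_off)

nose = (0.02, 0.0, -0.03)    # nose tip sticks 3 cm toward the camera
ear  = (0.08, 0.0, +0.05)    # ear sits 5 cm behind the face plane

for dist in (0.3, 0.6, 1.2):          # selfie distance, arm's length, longer
    focal = 500 * dist / 0.3          # scale focal so the face stays the same size
    nose_px = project_x(nose, dist, focal)
    ear_px = project_x(ear, dist, focal)
    print(f"camera at {dist:.1f} m -> nose/ear image ratio {abs(nose_px / ear_px):.2f}")
# The ratio approaches the true 0.02 / 0.08 = 0.25 as the distance grows,
# which is why the longer-distance rendering looks less distorted.
```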
-
Publication number: 20240184853
Abstract: A messaging system that extracts accompaniment portions from songs. Methods of accompaniment extraction from songs include receiving an input song that includes a vocal portion and an accompaniment portion, transforming the input song to an input image, where the input image represents the frequencies and intensities of the input song, processing the input image using a convolutional neural network (CNN) to generate an output image, and transforming the output image to an output accompaniment, where the output accompaniment includes the accompaniment of the input song.
Type: Application
Filed: February 13, 2024
Publication date: June 6, 2024
Inventor: Gurunandan Krishnan Gorumkonda
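The abstract describes a song-to-image-to-song round trip with a CNN operating on the image. The sketch below reproduces that structure with non-overlapping FFT frames as the "image" and a hand-made frequency mask standing in for the trained CNN; the frame size, the mask, and the toy two-tone "song" are assumptions.

```python
# Song -> image -> (stand-in for CNN) -> image -> audio; the mask replaces the network.
import numpy as np

RATE, FRAME = 16000, 512

def song_to_image(samples):
    """Non-overlapping frames -> complex spectra; the magnitude is the 'image'."""
    frames = samples[: len(samples) // FRAME * FRAME].reshape(-1, FRAME)
    spectra = np.fft.rfft(frames, axis=1)
    return np.abs(spectra), np.angle(spectra)

def image_to_audio(magnitude, phase):
    """Invert the magnitude 'image' (with the original phase) back to audio."""
    return np.fft.irfft(magnitude * np.exp(1j * phase), n=FRAME, axis=1).ravel()

def fake_cnn_accompaniment(magnitude):
    """Stand-in for the CNN: suppress the band where the toy 'vocal' lives."""
    out = magnitude.copy()
    freqs = np.fft.rfftfreq(FRAME, 1 / RATE)
    out[:, (freqs > 200) & (freqs < 400)] = 0.0
    return out

t = np.arange(RATE * 2) / RATE
vocal = 0.5 * np.sin(2 * np.pi * 300 * t)           # toy "vocal" at 300 Hz
accomp = 0.5 * np.sin(2 * np.pi * 100 * t)          # toy "accompaniment" at 100 Hz
mix = vocal + accomp
mag, phase = song_to_image(mix)
separated = image_to_audio(fake_cnn_accompaniment(mag), phase)
n = len(separated)
print("mix vs accompaniment:      ", round(float(np.corrcoef(mix[:n], accomp[:n])[0, 1]), 2))
print("separated vs accompaniment:", round(float(np.corrcoef(separated, accomp[:n])[0, 1]), 2))
```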
-
Publication number: 20240185879
Abstract: A messaging system for audio character type swapping. Methods of audio character type swapping include receiving input audio data having a first characteristic and transforming the input audio data to an input image, where the input image represents the frequencies and intensities of the audio. The methods further include processing the input image using a convolutional neural network (CNN) to generate an output image and transforming the output image to output audio data, the output audio data having a second characteristic. The input audio and output audio may include vocals. The first characteristic may indicate a male voice and the second characteristic may indicate a female voice. The CNN is trained together with another CNN that changes input audio having the second characteristic to audio having the first characteristic. The CNNs are trained using discriminator CNNs that determine whether audio has a first characteristic or a second characteristic.
Type: Application
Filed: February 13, 2024
Publication date: June 6, 2024
Inventor: Gurunandan Krishnan Gorumkonda
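The paired training described above, two CNNs mapping in opposite directions plus discriminators, resembles a cycle-consistent setup, so the natural sanity check is that mapping A to B and back to A approximately recovers the input. The sketch below illustrates only that check on a toy spectrogram, using frequency-axis scaling in place of the learned generators; the scaling factor and the toy data are assumptions.

```python
# Cycle-consistency check with frequency-axis scaling standing in for the two CNNs.
import numpy as np

def scale_freq_axis(spec, factor):
    """Stand-in generator: stretch/compress the frequency axis of a spectrogram."""
    bins = spec.shape[0]
    src = np.clip(np.arange(bins) / factor, 0, bins - 1)
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, bins - 1)
    w = (src - lo)[:, None]
    return (1 - w) * spec[lo] + w * spec[hi]

def male_to_female(spec):     # shift formants up (stand-in for CNN A -> B)
    return scale_freq_axis(spec, 1.2)

def female_to_male(spec):     # shift formants down (stand-in for CNN B -> A)
    return scale_freq_axis(spec, 1 / 1.2)

# Toy magnitude spectrogram: two smooth formant-like bumps, lightly modulated in time.
bins, frames = 128, 40
f = np.arange(bins, dtype=float)[:, None]
spec = np.exp(-(f - 20) ** 2 / 50) + 0.6 * np.exp(-(f - 45) ** 2 / 80)
spec = spec * (0.8 + 0.2 * np.sin(np.linspace(0, 3, frames)))[None, :]

cycled = female_to_male(male_to_female(spec))
cycle_loss = float(np.mean(np.abs(cycled - spec)))
print("cycle-consistency loss on the toy spectrogram:", round(cycle_loss, 4))
# Small, because the two stand-in generators invert each other.
```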
-
Patent number: 12001024
Abstract: An energy-efficient adaptive 3D sensing system. The adaptive 3D sensing system includes one or more cameras and one or more projectors. The adaptive 3D sensing system captures images of a real-world scene using the one or more cameras and computes depth estimates and depth estimate confidence values for pixels of the images. The adaptive 3D sensing system computes an attention mask based on the one or more depth estimate confidence values and commands the one or more projectors to send a distributed laser beam into one or more areas of the real-world scene based on the attention mask. The adaptive 3D sensing system captures 3D sensing image data of the one or more areas of the real-world scene and generates 3D sensing data for the real-world scene based on the 3D sensing image data.
Type: Grant
Filed: April 13, 2023
Date of Patent: June 4, 2024
Assignee: Snap Inc.
Inventors: Jian Wang, Sizhuo Ma, Brevin Tilmon, Yicheng Wu, Gurunandan Krishnan Gorumkonda, Ramzi Zahreddine, Georgios Evangelidis
-
Publication number: 20240177390
Abstract: A method of generating a real-time avatar animation starts with a processor receiving acoustic segments of a real-time acoustic signal. For each of the acoustic segments, the processor uses a music analyzer neural network to generate a tempo value and a dance energy category, and selects dance tracks based on the tempo value and the dance energy category. The processor uses the dance tracks to generate dance sequences for avatars, generates real-time animations for the avatars based on the dance sequences and avatar characteristics for the avatars, and causes the real-time animations of the avatars to be displayed on a first client device. Other embodiments are described herein.
Type: Application
Filed: November 30, 2023
Publication date: May 30, 2024
Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
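A hedged sketch of this pipeline follows: each acoustic segment yields a tempo value and a dance-energy category, a dance track is selected from a small library, and per-frame animation values are generated at that tempo. The envelope-autocorrelation tempo estimate, the energy threshold, and the two-entry track library stand in for the music analyzer neural network and real dance data.

```python
# Segment analysis -> track selection -> per-frame dance values; all parameters are assumptions.
import numpy as np

DANCE_TRACKS = {              # hypothetical library keyed by dance-energy category
    "low":  {"move": "sway", "amplitude": 10},
    "high": {"move": "jump", "amplitude": 40},
}

def analyze_segment(samples, rate=16000, hop=400):
    """Return (tempo_bpm, energy_category) for one acoustic segment."""
    frames = samples[: len(samples) // hop * hop].reshape(-1, hop)
    envelope = np.sqrt(np.mean(frames ** 2, axis=1))
    env = envelope - envelope.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    lag = int(np.argmax(ac[8:])) + 8               # ignore lags above ~300 BPM
    tempo = 60.0 * rate / (lag * hop)
    category = "high" if envelope.mean() > 0.1 else "low"
    return tempo, category

def dance_sequence(tempo, category, seconds=2, fps=30):
    """Per-frame pose values: one oscillation per beat, scaled by the chosen track."""
    track = DANCE_TRACKS[category]
    t = np.arange(int(seconds * fps)) / fps
    return track["move"], track["amplitude"] * np.sin(2 * np.pi * (tempo / 60) * t)

rate = 16000
t = np.arange(rate * 4) / rate
beats = 0.3 * (np.sin(2 * np.pi * 2 * t) > 0.95)       # click-like test signal near 120 BPM
segment = beats + 0.01 * np.random.default_rng(2).standard_normal(len(t))
tempo, cat = analyze_segment(segment)
move, frames = dance_sequence(tempo, cat)
print(f"tempo ~{tempo:.0f} BPM, energy={cat}, move={move}, first frames={frames[:3].round(1)}")
```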
-
Publication number: 20240144569
Abstract: A method of generating a real-time avatar animation using danceability scores starts with a processor receiving a real-time acoustic signal comprising acoustic segments. The processor uses a danceability neural network to generate a danceability score for each of the acoustic segments. The processor generates a real-time animation of a first avatar and a second avatar based on the danceability scores and avatar characteristics associated with the first avatar and the second avatar. The processor causes the real-time animation of the first avatar and the second avatar to be displayed on a first client device. Other embodiments are described herein.
Type: Application
Filed: October 20, 2023
Publication date: May 2, 2024
Inventor: Gurunandan Krishnan Gorumkonda
-
Publication number: 20240126084
Abstract: An energy-efficient adaptive 3D sensing system. The adaptive 3D sensing system includes one or more cameras and one or more projectors. The adaptive 3D sensing system captures images of a real-world scene using the one or more cameras and computes depth estimates and depth estimate confidence values for pixels of the images. The adaptive 3D sensing system computes an attention mask based on the one or more depth estimate confidence values and commands the one or more projectors to send a distributed laser beam into one or more areas of the real-world scene based on the attention mask. The adaptive 3D sensing system captures 3D sensing image data of the one or more areas of the real-world scene and generates 3D sensing data for the real-world scene based on the 3D sensing image data.
Type: Application
Filed: April 13, 2023
Publication date: April 18, 2024
Inventors: Jian Wang, Sizhuo Ma, Brevin Tilmon, Yicheng Wu, Gurunandan Krishnan Gorumkonda, Ramzi Zahreddine, Georgios Evangelidis
-
Patent number: 11947628
Abstract: A messaging system that extracts accompaniment portions from songs. Methods of accompaniment extraction from songs include receiving an input song that includes a vocal portion and an accompaniment portion, transforming the input song to an input image, where the input image represents the frequencies and intensities of the input song, processing the input image using a convolutional neural network (CNN) to generate an output image, and transforming the output image to an output accompaniment, where the output accompaniment includes the accompaniment of the input song.
Type: Grant
Filed: March 30, 2021
Date of Patent: April 2, 2024
Assignee: Snap Inc.
Inventor: Gurunandan Krishnan Gorumkonda
-
Publication number: 20240103610
Abstract: A pose tracking system is provided. The pose tracking system includes an EMF tracking system having a user-worn, head-mounted EMF source and one or more user-worn EMF tracking sensors attached to the wrists of the user. The EMF source is associated with a VIO tracking system such as AR glasses or the like. The pose tracking system determines a pose of the user's head and a ground plane using the VIO tracking system, and a pose of the user's hands using the EMF tracking system, to determine a full-body pose for the user. Metal interference with the EMF tracking system is minimized using an IMU mounted with the EMF tracking sensors. Long-term drift in the IMU and the VIO tracking system is minimized using the EMF tracking system.
Type: Application
Filed: September 14, 2023
Publication date: March 28, 2024
Inventors: Riku Arakawa, Gurunandan Krishnan Gorumkonda, Shree K. Nayar, Bing Zhou
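At its core, the fusion described above composes two pose estimates: the head pose in the world frame from VIO and the wrist poses in the head frame from the head-mounted EMF source. The sketch below shows that composition with planar homogeneous transforms; the poses and the 2D simplification are assumptions for illustration.

```python
# Composing VIO (world <- head) and EMF (head <- wrist) poses; all numbers are made up.
import numpy as np

def pose(theta_deg, tx, ty):
    """3x3 homogeneous transform for a planar rotation + translation."""
    th = np.radians(theta_deg)
    return np.array([[np.cos(th), -np.sin(th), tx],
                     [np.sin(th),  np.cos(th), ty],
                     [0.0,         0.0,        1.0]])

# VIO: head pose in the world frame (glasses turned 30 deg, 1.6 m above the ground plane).
world_T_head = pose(30, 0.2, 1.6)

# EMF: wrist poses measured relative to the head-mounted source.
head_T_left_wrist  = pose(-10, -0.25, -0.45)
head_T_right_wrist = pose(15,  0.30, -0.40)

# Compose to place the hands in the same world frame as the head.
world_T_left  = world_T_head @ head_T_left_wrist
world_T_right = world_T_head @ head_T_right_wrist

for name, T in [("head", world_T_head), ("left wrist", world_T_left),
                ("right wrist", world_T_right)]:
    print(f"{name:11s} world position: ({T[0, 2]:+.2f}, {T[1, 2]:+.2f})")
```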
-
Publication number: 20240094824
Abstract: A finger gesture recognition system is provided. The finger gesture recognition system includes one or more audio sensors and one or more optic sensors. The finger gesture recognition system captures, using the one or more audio sensors, audio signal data of a finger gesture being made by a user, and captures, using the one or more optic sensors, optic signal data of the finger gesture. The finger gesture recognition system recognizes the finger gesture based on the audio signal data and the optic signal data and communicates finger gesture data of the recognized finger gesture to an Augmented Reality/Combined Reality/Virtual Reality (XR) application.
Type: Application
Filed: September 14, 2023
Publication date: March 21, 2024
Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar, Chenhan Xu, Bing Zhou
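One way to read the audio-plus-optic recognition step is as late fusion of per-modality gesture scores, sketched below. The gesture set, the score values, and the product fusion rule are assumptions; the actual system may fuse the raw signals very differently.

```python
# Late fusion of per-modality gesture scores; gesture names and scores are hypothetical.
import numpy as np

GESTURES = ["tap", "swipe", "pinch"]

def fuse_and_recognize(audio_scores, optic_scores):
    """Combine per-modality class probabilities and pick the best gesture."""
    audio = np.asarray(audio_scores, dtype=float) / np.sum(audio_scores)
    optic = np.asarray(optic_scores, dtype=float) / np.sum(optic_scores)
    fused = audio * optic                 # simple product-of-experts fusion
    fused /= fused.sum()
    best = int(np.argmax(fused))
    return GESTURES[best], float(fused[best])

def send_to_xr_app(gesture, confidence):
    """Stand-in for communicating finger gesture data to the XR application."""
    print(f"XR app received gesture={gesture!r} (confidence {confidence:.2f})")

# Both modalities lean toward a pinch; the fused score sharpens that decision.
audio_scores = [0.30, 0.10, 0.60]   # tap, swipe, pinch
optic_scores = [0.25, 0.15, 0.60]
send_to_xr_app(*fuse_and_recognize(audio_scores, optic_scores))
```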
-
Patent number: 11935556
Abstract: A messaging system for audio character type swapping. Methods of audio character type swapping include receiving input audio data having a first characteristic and transforming the input audio data to an input image, where the input image represents the frequencies and intensities of the audio. The methods further include processing the input image using a convolutional neural network (CNN) to generate an output image and transforming the output image to output audio data, the output audio data having a second characteristic. The input audio and output audio may include vocals. The first characteristic may indicate a male voice and the second characteristic may indicate a female voice. The CNN is trained together with another CNN that changes input audio having the second characteristic to audio having the first characteristic. The CNNs are trained using discriminator CNNs that determine whether audio has a first characteristic or a second characteristic.
Type: Grant
Filed: March 31, 2021
Date of Patent: March 19, 2024
Assignee: Snap Inc.
Inventor: Gurunandan Krishnan Gorumkonda