Patents by Inventor Gurunandan Krishnan Gorumkonda
Gurunandan Krishnan Gorumkonda has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240355239
Abstract: An under-screen camera is provided. A camera is positioned behind a see-through display screen to capture scene image data of objects in front of the screen. The camera captures scene image data of a real-world scene including a user. The scene image data is processed to remove artifacts created by capturing the image through the see-through display screen, such as blur, noise, backscatter, wiring effects, and the like.
Type: Application
Filed: April 17, 2024
Publication date: October 24, 2024
Inventors: Shree K. Nayar, Gurunandan Krishnan Gorumkonda, Jian Wang, Bing Zhou, Sizhuo Ma, Karl Bayer, Yicheng Wu
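The abstract mentions removing blur introduced by imaging through the display. One standard way to undo a known blur is Wiener deconvolution in the frequency domain; the sketch below assumes the display layer's point-spread function (`psf`) has been measured, which the filing does not specify.

```python
import numpy as np

def wiener_deblur(observed, psf, noise_power=1e-2):
    """Deblur an image via frequency-domain Wiener filtering.

    `psf` is a (hypothetical) measured point-spread function of the
    display layer, stored at the same size as the image with its
    center at the image center.
    """
    # Shift the PSF so its center sits at the origin, then transform.
    H = np.fft.fft2(np.fft.ifftshift(psf), s=observed.shape)
    G = np.fft.fft2(observed)
    # Wiener filter: conj(H) / (|H|^2 + noise-to-signal ratio).
    F = np.conj(H) / (np.abs(H) ** 2 + noise_power) * G
    return np.real(np.fft.ifft2(F))
```

With an accurate PSF and a small noise term this restores a blurred image nearly exactly; in practice the noise power would be tuned to the sensor.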
-
Patent number: 12112427
Abstract: Images of a scene are received, representing multiple viewpoints of the scene. A pixel map of the scene is computed based on the images. Multi-plane image (MPI) layers are extracted from the pixel map in real time, the MPI layers are aggregated, and the scene is rendered from a novel viewpoint based on the aggregated MPI layers.
Type: Grant
Filed: August 29, 2022
Date of Patent: October 8, 2024
Assignee: Snap Inc.
Inventors: Numair Khalil Ullah Khan, Gurunandan Krishnan Gorumkonda, Shree K. Nayar, Yicheng Wu
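Once MPI layers are aggregated, rendering reduces to back-to-front alpha ("over") compositing of RGBA planes. This is a generic sketch of that compositing step, not the claimed real-time pipeline:

```python
import numpy as np

def composite_mpi(layers):
    """Render aggregated MPI layers into one image by back-to-front
    alpha ("over") compositing.

    layers: ndarray of shape (D, H, W, 4), ordered far to near,
    with RGB in [..., :3] and alpha in [..., 3:].
    """
    out = np.zeros(layers.shape[1:3] + (3,))
    for layer in layers:  # far to near
        rgb, a = layer[..., :3], layer[..., 3:]
        out = rgb * a + out * (1.0 - a)
    return out
```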
-
Patent number: 12106412
Abstract: Methods, devices, media, and other embodiments are described for generating pseudorandom animations matched to audio data on a device. In one embodiment, a video is generated and output on a display of the device using a computer animation model. Audio is detected at a microphone of the device and processed to determine a set of audio characteristics. A first motion state is randomly selected from a plurality of motion states, one or more motion values of the first motion state are generated using the set of audio characteristics, and the video is updated using the one or more motion values with the computer animation model to create an animated action within the video.
Type: Grant
Filed: June 17, 2021
Date of Patent: October 1, 2024
Assignee: Snap Inc.
Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
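The selection of a motion state and derivation of motion values from audio can be sketched as follows; the state names and the mapping from audio energy to amplitude are illustrative assumptions, not taken from the patent:

```python
import random

# Hypothetical motion states; the patent does not name these.
MOTION_STATES = ["sway", "bounce", "spin"]

def update_animation(audio_energy, rng=random):
    """Pseudorandomly pick a motion state, then scale its motion
    value by an audio characteristic (here, energy in [0, 1])."""
    state = rng.choice(MOTION_STATES)      # random state selection
    amplitude = 0.2 + 0.8 * audio_energy   # motion value from audio
    return state, amplitude
```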
-
Patent number: 12093443
Abstract: An eXtended Reality (XR) system provides detection of a user grasping a virtual object; the detected grasp may be used as user input into an XR application. The XR system provides a user interface of the XR application including one or more virtual objects, captures video frame tracking data of the pose of the user's hand while the user interacts with a virtual object, and generates skeletal model data of the hand based on the tracking data. The XR system then generates grasp detection data based on the skeletal model data and virtual object data, and provides the grasp detection data to the XR application as user input.
Type: Grant
Filed: October 30, 2023
Date of Patent: September 17, 2024
Assignee: Snap Inc.
Inventors: Gurunandan Krishnan Gorumkonda, Supreeth Narasimhaswamy
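A simple way to turn skeletal model data plus virtual object data into a grasp decision is to test whether fingertips lie on the object's surface. The sphere approximation, fingertip ordering, and tolerance below are illustrative assumptions:

```python
import math

def detect_grasp(fingertips, obj_center, obj_radius, tol=0.01):
    """Report a grasp when the thumb tip and at least one other
    fingertip both lie within `tol` of the virtual object's surface
    (object approximated as a sphere; thumb listed first)."""
    def near_surface(p):
        return abs(math.dist(p, obj_center) - obj_radius) <= tol

    thumb, others = fingertips[0], fingertips[1:]
    return near_surface(thumb) and any(near_surface(p) for p in others)
```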
-
Publication number: 20240288696
Abstract: An energy-efficient adaptive 3D sensing system including one or more cameras and one or more projectors. The system captures images of a real-world scene using the cameras and computes depth estimates and depth-estimate confidence values for pixels of the images. It computes an attention mask based on the confidence values and commands the projectors to send a distributed laser beam into one or more areas of the scene based on the attention mask. The system then captures 3D sensing image data of those areas and generates 3D sensing data for the scene from that image data.
Type: Application
Filed: May 2, 2024
Publication date: August 29, 2024
Inventors: Jian Wang, Sizhuo Ma, Brevin Tilmon, Yicheng Wu, Gurunandan Krishnan Gorumkonda, Ramzi Zahreddine, Georgios Evangelidis
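The attention-mask step can be sketched as a simple threshold on the per-pixel confidence map; the threshold value is an assumed tuning parameter, not taken from the filing:

```python
import numpy as np

def attention_mask(confidence, threshold=0.7):
    """Mark pixels whose depth-estimate confidence is low; the
    projector is then steered only into these regions."""
    return confidence < threshold
```

Only the low-confidence regions then receive projector illumination, which is where the energy saving comes from.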
-
Publication number: 20240281936
Abstract: A method of correcting the perspective distortion of a selfie image captured at a short camera-to-face distance by processing the selfie image to generate an undistorted selfie image that appears to have been taken at a longer camera-to-face distance. The method processes the selfie image with a pre-trained 3D face GAN, inverts the GAN to obtain improved face latent code and camera parameters, fine-tunes the 3D face GAN generator, and manipulates the camera parameters to render a photorealistic face selfie image. The processed selfie image has less distortion in the forehead, nose, cheekbones, jaw line, chin, lips, eyes, eyebrows, ears, hair, and neck.
Type: Application
Filed: February 22, 2023
Publication date: August 22, 2024
Inventors: Jian Wang, Zhixiang Wang, Gurunandan Krishnan Gorumkonda
-
Publication number: 20240184853
Abstract: A messaging system that extracts accompaniment portions from songs. Methods of accompaniment extraction include receiving an input song that includes a vocal portion and an accompaniment portion, transforming the input song into an input image that represents the frequencies and intensities of the song, processing the input image with a convolutional neural network (CNN) to generate an output image, and transforming the output image into an output accompaniment that contains the accompaniment of the input song.
Type: Application
Filed: February 13, 2024
Publication date: June 6, 2024
Inventor: Gurunandan Krishnan Gorumkonda
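The song-to-image transform described here is essentially a magnitude spectrogram. A minimal sketch, with frame size and hop chosen arbitrarily (the CNN that maps this image to an accompaniment-only image is omitted):

```python
import numpy as np

def song_to_image(signal, frame=256, hop=128):
    """Transform audio into a spectrogram 'image': rows are time
    frames, columns are frequency bins, values are intensities."""
    w = np.hanning(frame)
    frames = [signal[i:i + frame] * w
              for i in range(0, len(signal) - frame + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))
```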
-
Publication number: 20240185879
Abstract: A messaging system for audio character type swapping. Methods of audio character type swapping include receiving input audio data having a first characteristic and transforming the input audio data into an input image that represents the frequencies and intensities of the audio. The methods further include processing the input image with a convolutional neural network (CNN) to generate an output image and transforming the output image into output audio data having a second characteristic. The input and output audio may include vocals; the first characteristic may indicate a male voice and the second a female voice. The CNN is trained together with another CNN that changes audio having the second characteristic into audio having the first characteristic, and both are trained using discriminator CNNs that determine whether audio has the first or the second characteristic.
Type: Application
Filed: February 13, 2024
Publication date: June 6, 2024
Inventor: Gurunandan Krishnan Gorumkonda
-
Patent number: 12001024
Abstract: An energy-efficient adaptive 3D sensing system including one or more cameras and one or more projectors. The system captures images of a real-world scene using the cameras and computes depth estimates and depth-estimate confidence values for pixels of the images. It computes an attention mask based on the confidence values and commands the projectors to send a distributed laser beam into one or more areas of the scene based on the attention mask. The system then captures 3D sensing image data of those areas and generates 3D sensing data for the scene from that image data.
Type: Grant
Filed: April 13, 2023
Date of Patent: June 4, 2024
Assignee: Snap Inc.
Inventors: Jian Wang, Sizhuo Ma, Brevin Tilmon, Yicheng Wu, Gurunandan Krishnan Gorumkonda, Ramzi Zahreddine, Georgios Evangelidis
-
Publication number: 20240177390
Abstract: A method of generating a real-time avatar animation starts with a processor receiving acoustic segments of a real-time acoustic signal. For each acoustic segment, the processor uses a music-analyzer neural network to generate a tempo value and a dance energy category, and selects dance tracks based on the tempo value and the dance energy category. The processor uses the dance tracks to generate dance sequences for avatars, generates real-time animations for the avatars based on the dance sequences and avatar characteristics, and causes the real-time animations of the avatars to be displayed on a first client device. Other embodiments are described herein.
Type: Application
Filed: November 30, 2023
Publication date: May 30, 2024
Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
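Selecting dance tracks from a tempo value and dance energy category might look like a lookup keyed on both; the track names, tempo cutoff, and table structure below are hypothetical, not from the filing:

```python
# Hypothetical track library keyed by (tempo bucket, energy category).
DANCE_TRACKS = {
    ("slow", "low"): "sway_loop",
    ("slow", "high"): "groove_loop",
    ("fast", "low"): "step_loop",
    ("fast", "high"): "jump_loop",
}

def select_dance_track(tempo_bpm, energy):
    """Map a tempo value and dance energy category to a dance track."""
    bucket = "fast" if tempo_bpm >= 110 else "slow"
    return DANCE_TRACKS[(bucket, energy)]
```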
-
Publication number: 20240144569
Abstract: A method of generating a real-time avatar animation using danceability scores starts with a processor receiving a real-time acoustic signal comprising acoustic segments. The processor uses a danceability neural network to generate a danceability score for each acoustic segment, generates a real-time animation of a first avatar and a second avatar based on the danceability score and avatar characteristics associated with the avatars, and causes the real-time animation of the first avatar and the second avatar to be displayed on a first client device. Other embodiments are described herein.
Type: Application
Filed: October 20, 2023
Publication date: May 2, 2024
Inventor: Gurunandan Krishnan Gorumkonda
-
Publication number: 20240126084
Abstract: An energy-efficient adaptive 3D sensing system including one or more cameras and one or more projectors. The system captures images of a real-world scene using the cameras and computes depth estimates and depth-estimate confidence values for pixels of the images. It computes an attention mask based on the confidence values and commands the projectors to send a distributed laser beam into one or more areas of the scene based on the attention mask. The system then captures 3D sensing image data of those areas and generates 3D sensing data for the scene from that image data.
Type: Application
Filed: April 13, 2023
Publication date: April 18, 2024
Inventors: Jian Wang, Sizhuo Ma, Brevin Tilmon, Yicheng Wu, Gurunandan Krishnan Gorumkonda, Ramzi Zahreddine, Georgios Evangelidis
-
Patent number: 11947628
Abstract: A messaging system that extracts accompaniment portions from songs. Methods of accompaniment extraction include receiving an input song that includes a vocal portion and an accompaniment portion, transforming the input song into an input image that represents the frequencies and intensities of the song, processing the input image with a convolutional neural network (CNN) to generate an output image, and transforming the output image into an output accompaniment that contains the accompaniment of the input song.
Type: Grant
Filed: March 30, 2021
Date of Patent: April 2, 2024
Assignee: Snap Inc.
Inventor: Gurunandan Krishnan Gorumkonda
-
Publication number: 20240103610
Abstract: A pose tracking system is provided. It includes an EMF tracking system having a user-worn, head-mounted EMF source and one or more user-worn EMF tracking sensors attached to the user's wrists. The EMF source is associated with a VIO tracking system such as AR glasses or the like. The pose tracking system determines the pose of the user's head and a ground plane using the VIO tracking system, and the pose of the user's hands using the EMF tracking system, to determine a full-body pose for the user. Metal interference with the EMF tracking system is minimized using an IMU mounted with the EMF tracking sensors, and long-term drift in the IMU and the VIO tracking system is minimized using the EMF tracking system.
Type: Application
Filed: September 14, 2023
Publication date: March 28, 2024
Inventors: Riku Arakawa, Gurunandan Krishnan Gorumkonda, Shree K. Nayar, Bing Zhou
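Blending a drifting inertial estimate with a drift-free but interference-prone EMF estimate is commonly done with a complementary filter. The one-line blend below (with an assumed weight) illustrates that general idea only, not the filing's actual estimator:

```python
def complementary_fuse(emf_pos, imu_pos, alpha=0.98):
    """Per-axis blend of a drift-free EMF position estimate with a
    smooth but drifting IMU position estimate; alpha is an assumed
    smoothing weight."""
    return [alpha * e + (1 - alpha) * i for e, i in zip(emf_pos, imu_pos)]
```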
-
Publication number: 20240094824
Abstract: A finger gesture recognition system is provided. The system includes one or more audio sensors and one or more optic sensors. It captures, using the audio sensors, audio signal data of a finger gesture being made by a user and, using the optic sensors, optic signal data of the gesture. The system recognizes the finger gesture based on the audio signal data and the optic signal data and communicates finger gesture data of the recognized gesture to an Augmented Reality/Combined Reality/Virtual Reality (XR) application.
Type: Application
Filed: September 14, 2023
Publication date: March 21, 2024
Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar, Chenhan Xu, Bing Zhou
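Recognizing a gesture from both audio and optic signal data can be as simple as a weighted average of per-class probabilities from the two branches (late fusion); the weighting and two-branch structure are assumptions for illustration:

```python
import numpy as np

def fuse_predictions(audio_probs, optic_probs, w_audio=0.5):
    """Late fusion: weighted average of per-class probabilities from
    the audio and optic branches, returning the winning class index."""
    p = (w_audio * np.asarray(audio_probs)
         + (1 - w_audio) * np.asarray(optic_probs))
    return int(np.argmax(p))
```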
-
Patent number: 11935556
Abstract: A messaging system for audio character type swapping. Methods of audio character type swapping include receiving input audio data having a first characteristic and transforming the input audio data into an input image that represents the frequencies and intensities of the audio. The methods further include processing the input image with a convolutional neural network (CNN) to generate an output image and transforming the output image into output audio data having a second characteristic. The input and output audio may include vocals; the first characteristic may indicate a male voice and the second a female voice. The CNN is trained together with another CNN that changes audio having the second characteristic into audio having the first characteristic, and both are trained using discriminator CNNs that determine whether audio has the first or the second characteristic.
Type: Grant
Filed: March 31, 2021
Date of Patent: March 19, 2024
Assignee: Snap Inc.
Inventor: Gurunandan Krishnan Gorumkonda
-
Publication number: 20240054709
Abstract: Example methods for generating an animated character in dance poses to music may include generating, by at least one processor, a music input signal based on an acoustic signal associated with the music, and receiving, by the at least one processor, a model output signal from an encoding neural network. Current generated pose data is produced using a decoding neural network, based on the previous generated pose data, the music input signal, and the model output signal. An animated character is generated based on the current generated pose data, and the animated character is caused to be displayed by a display device.
Type: Application
Filed: October 6, 2023
Publication date: February 15, 2024
Inventors: Gurunandan Krishnan Gorumkonda, Hsin-Ying Lee, Jie Xu
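The decoding step described here is autoregressive: each pose is conditioned on the previous pose, the music input signal, and the encoder's model output signal. A schematic loop with a stand-in decoder callable (the decoder itself is not specified):

```python
import numpy as np

def generate_poses(music_feats, model_out, decoder, init_pose):
    """Autoregressively decode one pose per music frame; `decoder` is
    a stand-in for the decoding neural network."""
    poses, prev = [], init_pose
    for m in music_feats:
        prev = decoder(prev, m, model_out)  # condition on previous pose
        poses.append(prev)
    return np.array(poses)
```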
-
Publication number: 20240013467
Abstract: Methods, devices, media, and other embodiments are described for managing and configuring a pseudorandom animation system and associated computer animation models. One embodiment involves generating image modification data with a computer animation model configured to modify frames of a video image to insert and animate the model within those frames, where the computer animation model comprises one or more control points. Motion patterns and speed harmonics are automatically associated with the control points, motion states are generated based on the associated motions and harmonics, and a probability value is assigned to each motion state. The motion state probabilities can then be used when generating a pseudorandom animation.
Type: Application
Filed: September 21, 2023
Publication date: January 11, 2024
Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
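Using the assigned probability values to drive the pseudorandom animation amounts to weighted sampling over motion states; a minimal sketch:

```python
import random

def pick_motion_state(states, probs, rng=random):
    """Sample the next motion state according to its assigned
    probability value."""
    return rng.choices(states, weights=probs, k=1)[0]
```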
-
Publication number: 20230419578
Abstract: Methods, devices, media, and other embodiments are described for a state-space system for pseudorandom animation. In one embodiment, animation elements within a computer model are identified, and for each animation element motion patterns and speed harmonics are identified. A set of motion data values comprising a state-space description of the motion patterns and speed harmonics is generated, and a probability is assigned to each value of the set for the state-space description. The probability can then be used to select and update a particular motion in an animation generated from the computer model.
Type: Application
Filed: September 6, 2023
Publication date: December 28, 2023
Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
-
Patent number: 11816773
Abstract: Example methods for generating an animated character in dance poses to music may include generating, by at least one processor, a music input signal based on an acoustic signal associated with the music, and receiving, by the at least one processor, a model output signal from an encoding neural network. Current generated pose data is produced using a decoding neural network, based on the previous generated pose data, the music input signal, and the model output signal. An animated character is generated based on the current generated pose data, and the animated character is caused to be displayed by a display device.
Type: Grant
Filed: September 28, 2021
Date of Patent: November 14, 2023
Assignee: Snap Inc.
Inventors: Gurunandan Krishnan Gorumkonda, Hsin-Ying Lee, Jie Xu