Patents by Inventor Sudha Krishnamurthy

Sudha Krishnamurthy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240104829
    Abstract: Deep learning techniques are used to create 3D content and assets for metaverse applications from vector graphics, a scalable format that provides rich 3D content. A vector graphics encoder, such as a deep neural network (e.g., a recurrent neural network (RNN) or transformer), receives vector graphics and generates an encoded output. The encoded output is decoded by a 3D decoder, such as another deep neural network, that outputs 2D graphics for comparison with the original image. Loss is computed between the original and the output of the 3D decoder. The loss is back-propagated to train the vector graphics encoder to generate 3D content.
    Type: Application
    Filed: September 26, 2022
    Publication date: March 28, 2024
    Inventor: Sudha Krishnamurthy
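The training objective described in this abstract (compare the decoder's 2D rendering against the original image and back-propagate the loss) can be illustrated with a minimal sketch. This is not code from the patent: the pixel-wise mean-squared-error loss and the toy flattened "images" are illustrative assumptions, and the encoder/decoder networks themselves are not shown.

```python
# Hedged sketch of the reconstruction loss described in the abstract,
# assuming a simple pixel-wise L2 (MSE) loss between the original 2D
# graphic and the 2D rendering produced by the 3D decoder.

def reconstruction_loss(original_2d, decoded_2d):
    """Mean squared error between two flattened images of equal length."""
    assert len(original_2d) == len(decoded_2d)
    n = len(original_2d)
    return sum((a - b) ** 2 for a, b in zip(original_2d, decoded_2d)) / n

# Toy flattened images; in training, this loss would be back-propagated
# through the 3D decoder to the vector graphics encoder (not shown).
original = [0.0, 1.0, 0.5, 0.25]
decoded = [0.1, 0.9, 0.5, 0.25]
loss = reconstruction_loss(original, decoded)
```

A small loss indicates the decoded output is close to the original, which is the signal used to train the encoder.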
  • Publication number: 20240104807
    Abstract: Deep learning is used to dynamically adapt virtual humans in metaverse applications. The adaptation can be according to user preferences. In addition or alternatively, virtual humans and pets can be adapted for metaverse applications based on demographics of the user. The user's personal demographics may be used to establish the costume, skin color, emotion, voice, and behavior of the virtual humans. Similar considerations may be used to adapt virtual pets to the user's experience of the metaverse.
    Type: Application
    Filed: September 26, 2022
    Publication date: March 28, 2024
    Inventor: Sudha Krishnamurthy
  • Publication number: 20230410824
    Abstract: Systems and methods for audio processing are described. An audio processing system receives audio content that includes a voice sample. The audio processing system analyzes the voice sample to identify a sound type in the voice sample. The sound type corresponds to pronunciation of at least one specified character in the voice sample. The audio processing system generates a filtered voice sample at least in part by filtering the voice sample to modify the sound type. The audio processing system outputs the filtered voice sample.
    Type: Application
    Filed: May 31, 2022
    Publication date: December 21, 2023
    Inventors: Jin Zhang, Celeste Bean, Sepideh Karimi, Sudha Krishnamurthy
  • Patent number: 11847743
    Abstract: A computer simulation object such as a chair is described by voice or photo input to render a 2D image. Machine learning may be used to convert voice input to the 2D image. The 2D image is converted to a 3D object and the 3D object or portions thereof are used in the computer simulation, such as a computer game, as the object such as a chair. A physics engine can be used to modify the 3D objects.
    Type: Grant
    Filed: May 4, 2021
    Date of Patent: December 19, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Sudha Krishnamurthy, Michael Taylor
  • Publication number: 20230385646
    Abstract: A sound effect recommendation network is trained using a machine learning algorithm with a reference image, a positive audio embedding and a negative audio embedding as inputs to train a visual-to-audio correlation neural network to output a smaller distance between the positive audio embedding and the reference image than the negative audio embedding and the reference image. The visual-to-audio correlation neural network is trained to identify one or more visual elements in the reference image and map the one or more visual elements to one or more sound categories or subcategories within an audio database.
    Type: Application
    Filed: July 3, 2023
    Publication date: November 30, 2023
    Inventor: Sudha Krishnamurthy
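The distance property this abstract describes (positive audio embedding closer to the reference image than the negative one) matches the standard triplet-margin objective. The sketch below is an assumption, not the patent's implementation: Euclidean distance, a margin of 1.0, and the toy embedding values are all illustrative.

```python
import math

# Hedged sketch of a triplet-style objective: train so the reference-image
# embedding is closer to the positive audio embedding than to the negative
# one, by at least a margin. Embedding values are toy numbers.

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(image_emb, pos_audio_emb, neg_audio_emb, margin=1.0):
    """Zero when the positive audio is closer than the negative by `margin`."""
    return max(0.0, euclidean(image_emb, pos_audio_emb)
                    - euclidean(image_emb, neg_audio_emb) + margin)

image = [0.2, 0.8]
pos = [0.25, 0.75]   # audio matching the visual elements
neg = [0.9, 0.1]     # unrelated audio
loss = triplet_loss(image, pos, neg)
```

When training drives this loss toward zero, the learned embedding space satisfies the distance ordering the abstract claims.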
  • Publication number: 20230381673
    Abstract: The present disclosure generally relates to providing virtual education and onboarding to a user. More specifically, the present system relates to educating and onboarding spectators of electronic sports (eSports) events. The onboarding activities are used to further engage the spectators with the eSports event in general, as well as the game played during the eSports event. In other aspects, the eSports onboarding activity may be modified based on the type of game being played, the user's experience with the specific game or game genre, and other user preferences.
    Type: Application
    Filed: May 31, 2022
    Publication date: November 30, 2023
    Inventors: Mahdi Azmandian, Victoria Dorn, Sarah Karp, Sudha Krishnamurthy, Kristie Ramirez
  • Publication number: 20230259553
    Abstract: A system enhances existing audio-visual content with a description of the action, using a scene annotation module and an action description module, both of which are coupled to a controller. The scene annotation module classifies scene elements from an image frame received from a host system and generates a caption describing the scene elements. The scene annotation module includes a first neural network configured to generate a feature vector from the image frame and a second neural network configured to generate a caption describing elements within the image frame from the feature vector. The action description module recognizes action happening within one or more image frames received from the host system and generates a description of the action happening within one or more image frames.
    Type: Application
    Filed: April 24, 2023
    Publication date: August 17, 2023
    Inventors: Sudha Krishnamurthy, Justice Adams, Arindam Jati, Masanori Omote, Jian Zheng
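The two-stage structure of the scene annotation module (one network producing a feature vector from a frame, a second producing a caption from that vector) can be sketched with stand-in functions. Everything below is illustrative: the mean-pooling "feature network", the brightness threshold, and the caption strings are assumptions, not the patent's networks.

```python
# Hedged sketch of the two-stage scene annotation pipeline: frame -> feature
# vector -> caption. Both "networks" are trivial stand-ins.

def feature_network(frame):
    """Stand-in for the first neural network: per-row mean pooling of a
    toy grayscale frame into a feature vector."""
    return [sum(row) / len(row) for row in frame]

def caption_network(features):
    """Stand-in for the second neural network: feature vector -> caption."""
    brightness = sum(features) / len(features)
    return "a bright outdoor scene" if brightness > 0.5 else "a dark indoor scene"

frame = [[0.9, 0.8], [0.7, 0.95]]  # toy 2x2 grayscale image frame
caption = caption_network(feature_network(frame))
```

The real modules would use learned convolutional and sequence models, but the data flow (frame in, caption out, with a feature vector in between) is the same.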
  • Patent number: 11694084
    Abstract: Sound effect recommendations for visual input are generated by training machine learning models that learn coarse-grained and fine-grained audio-visual correlations from a reference image, a positive audio signal, and a negative audio signal. A positive audio embedding related to the reference image is generated from the positive audio signal and a negative audio embedding is generated from a negative audio signal. A machine learning algorithm uses the reference image, the positive audio embedding and the negative audio embedding as inputs to train a visual-to-audio correlation neural network to output a smaller distance between the positive audio embedding and the reference image than the negative audio embedding and the reference image.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: July 4, 2023
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventor: Sudha Krishnamurthy
  • Patent number: 11684852
    Abstract: A character in a game world of a computer simulation is identified as moving toward a sky box in the simulation. The computer simulation does not permit simulation characters to enter the sky box. Techniques are described for remastering the sky box based on various parameters.
    Type: Grant
    Filed: May 4, 2021
    Date of Patent: June 27, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Michael Taylor, Sudha Krishnamurthy
  • Patent number: 11636673
    Abstract: A system enhances existing audio-visual content with audio describing the setting of the visual content. A scene annotation module classifies scene elements from an image frame received from a host system and generates a caption describing the scene elements.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: April 25, 2023
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventors: Sudha Krishnamurthy, Justice Adams, Arindam Jati, Masanori Omote, Jian Zheng
  • Patent number: 11631214
    Abstract: A computer simulation object such as a chair is described by voice or photo input to render a 2D image. Machine learning may be used to convert voice input to the 2D image. The 2D image is converted to a 3D asset and the 3D asset or portions thereof are used in the computer simulation, such as a computer game, as the object such as a chair.
    Type: Grant
    Filed: May 4, 2021
    Date of Patent: April 18, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Sudha Krishnamurthy, Michael Taylor
  • Patent number: 11631225
    Abstract: Graphical style modification may be implemented using machine learning. A color accommodation module receives an image frame from a host system and generates a color-adapted version of the image frame. A Graphical Style Modification module generates a style adapted video stream by applying a style adapted from a target image frame to each image frame in a buffered video stream.
    Type: Grant
    Filed: April 12, 2021
    Date of Patent: April 18, 2023
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventors: Sudha Krishnamurthy, Ashish Singh, Naveen Kumar, Justice Adams, Arindam Jati, Masanori Omote
  • Patent number: 11615312
    Abstract: An automated method, system, and computer readable medium for generating sound effect recommendations for visual input by training machine learning models that learn audio-visual correlations from a reference image or video, a positive audio signal, and a negative audio signal. A machine learning algorithm is used with a reference visual input and a positive or negative audio signal input to train a multimodal clustering neural network to output representations for the visual input and audio input as well as correlation scores between the audio and visual representations. The trained multimodal clustering neural network is configured to learn representations in such a way that the visual representation and positive audio representation have higher correlation scores than the visual representation and a negative audio representation or an unrelated audio representation.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: March 28, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Sudha Krishnamurthy
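The correlation-score property this abstract claims (visual representation scores higher against its positive audio representation than against a negative or unrelated one) can be illustrated with cosine similarity standing in for the learned score. The vectors below are toy values, and cosine similarity is an assumption; the patent's network learns its own correlation function.

```python
import math

# Hedged sketch of the correlation-score ordering: the trained network should
# score a visual representation higher against its positive audio
# representation than against a negative one. Cosine similarity is used as a
# stand-in for the learned score.

def correlation_score(visual, audio):
    """Cosine similarity between a visual and an audio representation."""
    dot = sum(v * a for v, a in zip(visual, audio))
    norm = (math.sqrt(sum(v * v for v in visual))
            * math.sqrt(sum(a * a for a in audio)))
    return dot / norm

visual = [1.0, 0.0, 1.0]
positive_audio = [0.9, 0.1, 0.8]   # related sound
negative_audio = [0.0, 1.0, 0.1]   # unrelated sound

pos_score = correlation_score(visual, positive_audio)
neg_score = correlation_score(visual, negative_audio)
```

Training succeeds when `pos_score > neg_score` holds across the dataset, which is exactly the ordering the abstract describes.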
  • Publication number: 20230054035
    Abstract: A video game console, a video game system, and a computer-implemented method are described. Generally, a video game and video game assistance are adapted to a player. For example, a narrative of the video game is personalized to an experience level of the player. Similarly, assistance in interacting with a particular context of the video game is also personalized. The personalization learns from historical interactions of players with the video game and, optionally, other video games. In an example, a deep learning neural network is implemented to generate knowledge from the historical interactions. The personalization is set according to the knowledge.
    Type: Application
    Filed: August 22, 2022
    Publication date: February 23, 2023
    Applicant: Sony Interactive Entertainment Inc.
    Inventor: Sudha Krishnamurthy
  • Patent number: 11547938
    Abstract: Methods and systems for representing emotions of an audience of spectators viewing online gaming of a video game include capturing interaction data from spectators of an audience engaged in watching gameplay of the video game. The captured interaction data is used to cluster the spectators into different groups in accordance with emotions detected from the interactions of spectators in the audience. A graphic interchange format file (GIF) is identified for each group based on the emotion associated with the group. The GIFs representing the distinct emotions of different groups of spectators are forwarded to client devices of spectators for rendering alongside content of the video game.
    Type: Grant
    Filed: April 1, 2021
    Date of Patent: January 10, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: David Nelson, Sudha Krishnamurthy, Mahdi Azmandian
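The grouping step in this abstract (cluster spectators by detected emotion, then pick one GIF per group) reduces to a simple partition once emotions have been detected. In the sketch below, the emotion labels and the emotion-to-GIF lookup table are hypothetical; the patent's emotion detection from interaction data is not shown.

```python
from collections import defaultdict

# Hedged sketch: partition spectators by detected emotion and attach a GIF to
# each group. EMOTION_TO_GIF is an illustrative lookup table, not from the
# patent.

EMOTION_TO_GIF = {"joy": "joy.gif", "surprise": "surprise.gif", "anger": "anger.gif"}

def group_spectators(detections):
    """detections: (spectator_id, detected_emotion) pairs.
    Returns {emotion: (member_ids, gif_filename)}."""
    groups = defaultdict(list)
    for spectator_id, emotion in detections:
        groups[emotion].append(spectator_id)
    return {emotion: (members, EMOTION_TO_GIF[emotion])
            for emotion, members in groups.items()}

detections = [(1, "joy"), (2, "joy"), (3, "surprise")]
result = group_spectators(detections)
```

Each group's GIF would then be forwarded to that group's client devices for rendering alongside the game content.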
  • Patent number: 11511190
    Abstract: A character in a game world of a computer simulation is identified as moving toward a sky box in the simulation. The computer simulation does not permit simulation characters to enter the sky box. However, techniques are described for modifying an image or audio or both of the sky box responsive to identifying the character is moving toward the sky box.
    Type: Grant
    Filed: May 3, 2021
    Date of Patent: November 29, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Michael Taylor, Sudha Krishnamurthy
  • Publication number: 20220355199
    Abstract: A character in a game world of a computer simulation is identified as moving toward a sky box in the simulation. The computer simulation does not permit simulation characters to enter the sky box. Techniques are described for remastering the sky box based on various parameters.
    Type: Application
    Filed: May 4, 2021
    Publication date: November 10, 2022
    Inventors: Michael Taylor, Sudha Krishnamurthy
  • Publication number: 20220357914
    Abstract: A 3D scene consisting of one or more objects is generated from a natural language description that may consist of text or voice. Relevant keywords such as asset attributes and placement are extracted from the description. Using these keywords, a 2D image is generated using a generative model. Another neural model is used to reconstruct the 3D objects from the 2D image. The 3D objects can be assembled to meet the placement specifications. Alternatively, the 3D object is generated by either transforming existing 3D objects or by using a 3D generative model to meet the specifications in the description.
    Type: Application
    Filed: May 4, 2021
    Publication date: November 10, 2022
    Inventors: Sudha Krishnamurthy, Michael Taylor
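The first stage of the pipeline this abstract describes, extracting asset attributes and placement keywords from a natural language description, can be sketched with a toy keyword matcher. The attribute and placement vocabularies below are illustrative assumptions; the patent's extraction would use a learned language model rather than fixed word lists.

```python
# Hedged sketch of the keyword-extraction step: pull asset attributes and
# placement terms from a natural language description. Vocabularies are toy
# examples, not from the patent.

ATTRIBUTES = {"red", "wooden", "large", "small"}
PLACEMENTS = {"left", "right", "center", "corner"}

def extract_keywords(description):
    """Split the description into words and match against the vocabularies,
    preserving word order."""
    words = description.lower().replace(",", " ").split()
    return {
        "attributes": [w for w in words if w in ATTRIBUTES],
        "placement": [w for w in words if w in PLACEMENTS],
    }

keywords = extract_keywords("Place a large red chair in the left corner")
```

Downstream, the extracted attributes would condition the 2D generative model and the placement terms would drive assembly of the reconstructed 3D objects.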
  • Publication number: 20220358713
    Abstract: A computer simulation object such as a chair is described by voice or photo input to render a 2D image. Machine learning may be used to convert voice input to the 2D image. The 2D image is converted to a 3D asset and the 3D asset or portions thereof are used in the computer simulation, such as a computer game, as the object such as a chair.
    Type: Application
    Filed: May 4, 2021
    Publication date: November 10, 2022
    Inventors: Sudha Krishnamurthy, Michael Taylor
  • Publication number: 20220358718
    Abstract: A computer simulation object such as a chair is described by voice or photo input to render a 2D image. Machine learning may be used to convert voice input to the 2D image. The 2D image is converted to a 3D object and the 3D object or portions thereof are used in the computer simulation, such as a computer game, as the object such as a chair. A physics engine can be used to modify the 3D objects.
    Type: Application
    Filed: May 4, 2021
    Publication date: November 10, 2022
    Inventors: Sudha Krishnamurthy, Michael Taylor