Patents by Inventor Sudha Krishnamurthy

Sudha Krishnamurthy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11684852
    Abstract: A character in a game world of a computer simulation is identified as moving toward a sky box in the simulation. The computer simulation does not permit simulation characters to enter the sky box. Techniques are described for remastering the sky box based on various parameters.
    Type: Grant
    Filed: May 4, 2021
    Date of Patent: June 27, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Michael Taylor, Sudha Krishnamurthy
  • Patent number: 11636673
    Abstract: A system enhances existing audio-visual content with audio describing the setting of the visual content. A scene annotation module classifies scene elements from an image frame received from a host system and generates a caption describing the scene elements.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: April 25, 2023
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventors: Sudha Krishnamurthy, Justice Adams, Arindam Jati, Masanori Omote, Jian Zheng
  • Patent number: 11631214
    Abstract: A computer simulation object such as a chair is described by voice or photo input to render a 2D image. Machine learning may be used to convert voice input to the 2D image. The 2D image is converted to a 3D asset and the 3D asset or portions thereof are used in the computer simulation, such as a computer game, as the object such as a chair.
    Type: Grant
    Filed: May 4, 2021
    Date of Patent: April 18, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Sudha Krishnamurthy, Michael Taylor
  • Patent number: 11631225
    Abstract: Graphical style modification may be implemented using machine learning. A color accommodation module receives an image frame from a host system and generates a color-adapted version of the image frame. A Graphical Style Modification module generates a style adapted video stream by applying a style adapted from a target image frame to each image frame in a buffered video stream.
    Type: Grant
    Filed: April 12, 2021
    Date of Patent: April 18, 2023
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventors: Sudha Krishnamurthy, Ashish Singh, Naveen Kumar, Justice Adams, Arindam Jati, Masanori Omote
  • Patent number: 11615312
    Abstract: An automated method, system, and computer-readable medium generate sound effect recommendations for visual input by training machine learning models that learn audio-visual correlations from a reference image or video, a positive audio signal, and a negative audio signal. A machine learning algorithm uses a reference visual input with a positive or negative audio signal input to train a multimodal clustering neural network that outputs representations for the visual and audio inputs, as well as correlation scores between the audio and visual representations. The trained multimodal clustering neural network is configured to learn representations such that the visual representation and the positive audio representation have higher correlation scores than the visual representation and a negative or unrelated audio representation.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: March 28, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Sudha Krishnamurthy
  • Publication number: 20230054035
    Abstract: A video game console, a video game system, and a computer-implemented method are described. Generally, a video game and video game assistance are adapted to a player. For example, a narrative of the video game is personalized to an experience level of the player. Similarly, assistance in interacting with a particular context of the video game is also personalized. The personalization learns from historical interactions of players with the video game and, optionally, other video games. In an example, a deep learning neural network is implemented to generate knowledge from the historical interactions. The personalization is set according to the knowledge.
    Type: Application
    Filed: August 22, 2022
    Publication date: February 23, 2023
    Applicant: Sony Interactive Entertainment Inc.
    Inventor: Sudha Krishnamurthy
  • Patent number: 11547938
    Abstract: Methods and systems for representing emotions of an audience of spectators viewing online gaming of a video game include capturing interaction data from spectators in an audience engaged in watching gameplay of the video game. The captured interaction data is used to cluster the spectators into different groups in accordance with emotions detected from the interactions of spectators in the audience. A graphic interchange format file (GIF) is identified for each group based on the emotion associated with the group. The GIFs representing the distinct emotions of different groups of spectators are forwarded to client devices of spectators for rendering alongside content of the video game.
    Type: Grant
    Filed: April 1, 2021
    Date of Patent: January 10, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: David Nelson, Sudha Krishnamurthy, Mahdi Azmandian
  • Patent number: 11511190
    Abstract: A character in a game world of a computer simulation is identified as moving toward a sky box in the simulation. The computer simulation does not permit simulation characters to enter the sky box. However, techniques are described for modifying an image or audio or both of the sky box responsive to identifying the character is moving toward the sky box.
    Type: Grant
    Filed: May 3, 2021
    Date of Patent: November 29, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Michael Taylor, Sudha Krishnamurthy
  • Publication number: 20220358713
    Abstract: A computer simulation object such as a chair is described by voice or photo input to render a 2D image. Machine learning may be used to convert voice input to the 2D image. The 2D image is converted to a 3D asset and the 3D asset or portions thereof are used in the computer simulation, such as a computer game, as the object such as a chair.
    Type: Application
    Filed: May 4, 2021
    Publication date: November 10, 2022
    Inventors: Sudha Krishnamurthy, Michael Taylor
  • Publication number: 20220357914
    Abstract: A 3D scene consisting of one or more objects is generated from a natural language description, which may be text or voice. Relevant keywords, such as asset attributes and placement, are extracted from the description. Using these keywords, a 2D image is generated with a generative model. Another neural model reconstructs the 3D objects from the 2D image. The 3D objects can be assembled to meet the placement specifications. Alternatively, the 3D object is generated either by transforming existing 3D objects or by using a 3D generative model to meet the specifications in the description.
    Type: Application
    Filed: May 4, 2021
    Publication date: November 10, 2022
    Inventors: Sudha Krishnamurthy, Michael Taylor
  • Publication number: 20220358718
    Abstract: A computer simulation object such as a chair is described by voice or photo input to render a 2D image. Machine learning may be used to convert voice input to the 2D image. The 2D image is converted to a 3D object and the 3D object or portions thereof are used in the computer simulation, such as a computer game, as the object such as a chair. A physics engine can be used to modify the 3D objects.
    Type: Application
    Filed: May 4, 2021
    Publication date: November 10, 2022
    Inventors: Sudha Krishnamurthy, Michael Taylor
  • Publication number: 20220355199
    Abstract: A character in a game world of a computer simulation is identified as moving toward a sky box in the simulation. The computer simulation does not permit simulation characters to enter the sky box. Techniques are described for remastering the sky box based on various parameters.
    Type: Application
    Filed: May 4, 2021
    Publication date: November 10, 2022
    Inventors: Michael Taylor, Sudha Krishnamurthy
  • Publication number: 20220347574
    Abstract: A character in a game world of a computer simulation is identified as moving toward a sky box in the simulation. The computer simulation does not permit simulation characters to enter the sky box. However, techniques are described for modifying an image or audio or both of the sky box responsive to identifying the character is moving toward the sky box.
    Type: Application
    Filed: May 3, 2021
    Publication date: November 3, 2022
    Inventors: Michael Taylor, Sudha Krishnamurthy
  • Patent number: 11450353
    Abstract: Automatically recommending sound effects based on visual scenes assists sound engineers during video production of computer simulations, such as movies and video games. The recommendation engine may be implemented by classifying SFX and using a machine learning engine to output a first of the classified SFX for a first computer simulation based on learned correlations between video attributes of the first computer simulation and the classified SFX.
    Type: Grant
    Filed: October 21, 2020
    Date of Patent: September 20, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Sudha Krishnamurthy, Xiaoyu Liu
  • Patent number: 11420124
    Abstract: A video game console, a video game system, and a computer-implemented method are described. Generally, a video game and video game assistance are adapted to a player. For example, a narrative of the video game is personalized to an experience level of the player. Similarly, assistance in interacting with a particular context of the video game is also personalized. The personalization learns from historical interactions of players with the video game and, optionally, other video games. In an example, a deep learning neural network is implemented to generate knowledge from the historical interactions. The personalization is set according to the knowledge.
    Type: Grant
    Filed: October 7, 2020
    Date of Patent: August 23, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Sudha Krishnamurthy
  • Patent number: 11420125
    Abstract: Methods and systems for representing emotions of an audience of spectators viewing online gaming of a video game include capturing interaction data from spectators in an audience engaged in watching gameplay of the video game. The captured interaction data is aggregated by clustering the spectators into different groups in accordance with emotions detected from the spectators in the audience. An avatar is generated to represent the emotion of each group, and expressions of the avatar are dynamically adjusted to match changes in the expressions of the spectators of the respective group. The avatars representing distinct emotions of different groups of spectators are presented alongside content of the video game. A size of the avatar for each distinct emotion is influenced by the confidence score associated with the respective group of spectators.
    Type: Grant
    Filed: April 1, 2021
    Date of Patent: August 23, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: David Nelson, Sudha Krishnamurthy, Mahdi Azmandian
  • Patent number: 11381888
    Abstract: Sound effect recommendations for visual input are generated by training machine learning models that learn coarse-grained and fine-grained audio-visual correlations from a reference visual input, a positive audio signal, and a negative audio signal. A trained Sound Recommendation Network is configured to output an audio embedding and a visual embedding and to use them to compute a correlation distance between an image frame or video segment and one or more audio segments retrieved from a database. The correlation distances for the audio segments in the database are sorted, and the audio segments with the closest correlation distances are determined. The audio segment with the closest correlation distance is applied to the input image frame or video segment.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: July 5, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Sudha Krishnamurthy
  • Publication number: 20220171960
    Abstract: Methods and systems for representing emotions of an audience of spectators viewing online gaming of a video game include capturing interaction data from spectators in an audience engaged in watching gameplay of the video game. The captured interaction data is used to cluster the spectators into different groups in accordance with emotions detected from the interactions of spectators in the audience. A reaction track is identified for each group based on the emotion associated with the group. The reaction tracks representing distinct emotions of different groups of spectators are presented alongside content of the video game.
    Type: Application
    Filed: April 1, 2021
    Publication date: June 2, 2022
    Inventors: David Nelson, Sudha Krishnamurthy, Mahdi Azmandian
  • Publication number: 20220168639
    Abstract: Methods and systems for representing emotions of an audience of spectators viewing online gaming of a video game include capturing interaction data from spectators in an audience engaged in watching gameplay of the video game. The captured interaction data is used to cluster the spectators into different groups in accordance with emotions detected from the interactions of spectators in the audience. A graphic interchange format file (GIF) is identified for each group based on the emotion associated with the group. The GIFs representing the distinct emotions of different groups of spectators are forwarded to client devices of spectators for rendering alongside content of the video game.
    Type: Application
    Filed: April 1, 2021
    Publication date: June 2, 2022
    Inventors: David Nelson, Sudha Krishnamurthy, Mahdi Azmandian
  • Publication number: 20220168644
    Abstract: Methods and systems for representing emotions of an audience of spectators viewing online gaming of a video game include capturing interaction data from spectators in an audience engaged in watching gameplay of the video game. The captured interaction data is aggregated by clustering the spectators into different groups in accordance with emotions detected from the spectators in the audience. An avatar is generated to represent the emotion of each group, and expressions of the avatar are dynamically adjusted to match changes in the expressions of the spectators of the respective group. The avatars representing distinct emotions of different groups of spectators are presented alongside content of the video game. A size of the avatar for each distinct emotion is influenced by the confidence score associated with the respective group of spectators.
    Type: Application
    Filed: April 1, 2021
    Publication date: June 2, 2022
    Inventors: David Nelson, Sudha Krishnamurthy, Mahdi Azmandian
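Several of the sound-effect abstracts above (patents 11615312 and 11381888) describe ranking database audio segments by correlation distance to a visual embedding and applying the closest match. The sketch below is a minimal, illustrative version of that ranking step only; it is not the patented method. The embeddings, segment names, and the use of cosine distance as the correlation distance are all assumptions standing in for the outputs of a trained network.

```python
import math

def cosine_distance(a, b):
    """Correlation distance between two embeddings (1 - cosine similarity)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def recommend_sfx(visual_embedding, audio_db, top_k=1):
    """Sort database audio segments by correlation distance to the visual
    embedding and return the names of the top_k closest segments."""
    ranked = sorted(audio_db.items(),
                    key=lambda kv: cosine_distance(visual_embedding, kv[1]))
    return [name for name, _ in ranked[:top_k]]

# Toy embeddings standing in for a trained network's outputs.
visual = [0.9, 0.1, 0.0]
audio_db = {
    "footsteps": [0.8, 0.2, 0.1],   # near the visual embedding (positive)
    "thunder":   [0.0, 0.9, 0.4],   # far from it (negative / unrelated)
    "birdsong":  [0.1, 0.0, 1.0],
}
print(recommend_sfx(visual, audio_db))  # -> ['footsteps']
```

In the abstracts, the embeddings come from a network trained so that a visual input and its positive audio signal score a higher correlation (smaller distance) than the visual input and a negative audio signal; the ranking step itself is the simple sort shown here.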