Patents by Inventor Sudha Krishnamurthy

Sudha Krishnamurthy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220347574
    Abstract: A character in a game world of a computer simulation is identified as moving toward a sky box in the simulation. The computer simulation does not permit simulation characters to enter the sky box. However, techniques are described for modifying the sky box's imagery, audio, or both responsive to identifying that the character is moving toward the sky box.
    Type: Application
    Filed: May 3, 2021
    Publication date: November 3, 2022
    Inventors: Michael Taylor, Sudha Krishnamurthy
  • Patent number: 11450353
    Abstract: Automatically recommending sound effects based on visual scenes assists sound engineers during video production of computer simulations, such as movies and video games. Such recommendations may be generated by classifying SFX and using a machine learning engine to output a first of the classified SFX for a first computer simulation based on learned correlations between video attributes of the first computer simulation and the classified SFX.
    Type: Grant
    Filed: October 21, 2020
    Date of Patent: September 20, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Sudha Krishnamurthy, Xiaoyu Liu
  • Patent number: 11420125
    Abstract: Methods and systems for representing emotions of an audience of spectators viewing online gaming of a video game include capturing interaction data from spectators in an audience engaged in watching gameplay of the video game. The captured interaction data is aggregated by clustering the spectators into different groups in accordance with emotions detected from the spectators in the audience. An avatar is generated to represent the emotion of each group, and the avatar's expressions are dynamically adjusted to match changes in the expressions of the spectators of the respective group. The avatars representing the distinct emotions of the different groups of spectators are presented alongside content of the video game. The size of the avatar for each distinct emotion is influenced by the confidence score associated with the respective group of spectators.
    Type: Grant
    Filed: April 1, 2021
    Date of Patent: August 23, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: David Nelson, Sudha Krishnamurthy, Mahdi Azmandian
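The clustering-and-sizing scheme described in the abstract above can be sketched in a few lines. This is a minimal illustration under assumed inputs (made-up spectator records and an assumed per-group mean-confidence sizing rule), not the patented implementation:

```python
from collections import defaultdict

# Hypothetical spectator records: (spectator_id, detected_emotion, confidence)
spectators = [
    ("s1", "joy", 0.9), ("s2", "joy", 0.7),
    ("s3", "surprise", 0.6), ("s4", "joy", 0.8),
]

def cluster_by_emotion(records):
    """Group spectators by the emotion detected from their interactions."""
    groups = defaultdict(list)
    for sid, emotion, conf in records:
        groups[emotion].append((sid, conf))
    return groups

def avatar_sizes(groups, base=32, scale=64):
    """Size each group's avatar in proportion to the group's mean confidence."""
    return {
        emotion: base + scale * (sum(c for _, c in members) / len(members))
        for emotion, members in groups.items()
    }

groups = cluster_by_emotion(spectators)
sizes = avatar_sizes(groups)   # "joy" gets a larger avatar than "surprise"
```

In a real system the records would come from a per-spectator emotion classifier, and the avatar's expression would be re-rendered as the group's dominant expression shifts.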
  • Patent number: 11420124
    Abstract: A video game console, a video game system, and a computer-implemented method are described. Generally, a video game and video game assistance are adapted to a player. For example, a narrative of the video game is personalized to an experience level of the player. Similarly, assistance in interacting with a particular context of the video game is also personalized. The personalization learns from historical interactions of players with the video game and, optionally, other video games. In an example, a deep learning neural network is implemented to generate knowledge from the historical interactions. The personalization is set according to the knowledge.
    Type: Grant
    Filed: October 7, 2020
    Date of Patent: August 23, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Sudha Krishnamurthy
  • Patent number: 11381888
    Abstract: Sound effect recommendations for visual input are generated by training machine learning models that learn coarse-grained and fine-grained audio-visual correlations from a reference visual, a positive audio signal, and a negative audio signal. A trained Sound Recommendation Network is configured to output an audio embedding and a visual embedding and to use them to compute a correlation distance between an image frame or video segment and one or more audio segments retrieved from a database. The correlation distances for the audio segments in the database are sorted, the audio segment with the closest correlation distance is determined, and that segment is applied to the input image frame or video segment.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: July 5, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Sudha Krishnamurthy
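The retrieval step in the abstract above, sorting database audio segments by correlation distance to a visual embedding and taking the closest, can be sketched as follows. The embeddings and segment names here are made up; in the patent they would be produced by the trained Sound Recommendation Network:

```python
from math import dist

def rank_audio_segments(visual_emb, audio_embs):
    """Sort database audio embeddings by Euclidean correlation distance
    to the visual embedding, closest first."""
    return sorted(audio_embs.items(), key=lambda kv: dist(visual_emb, kv[1]))

# Made-up 3-d embeddings standing in for the network's outputs.
visual = (0.1, 0.9, 0.0)
database = {
    "footsteps": (0.1, 0.8, 0.1),
    "explosion": (0.9, 0.1, 0.5),
    "rain":      (0.2, 0.7, 0.3),
}

ranked = rank_audio_segments(visual, database)
best_match = ranked[0][0]   # the segment applied to the input frame
```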
  • Publication number: 20220168639
    Abstract: Methods and systems for representing emotions of an audience of spectators viewing online gaming of a video game include capturing interaction data from spectators of an audience engaged in watching gameplay of the video game. The captured interaction data is used to cluster the spectators into different groups in accordance with emotions detected from the interactions of spectators in the audience. A Graphics Interchange Format (GIF) file is identified for each group based on the emotion associated with the group. The GIFs representing the distinct emotions of the different groups of spectators are forwarded to client devices of spectators for rendering alongside content of the video game.
    Type: Application
    Filed: April 1, 2021
    Publication date: June 2, 2022
    Inventors: David Nelson, Sudha Krishnamurthy, Mahdi Azmandian
  • Publication number: 20220171960
    Abstract: Methods and systems for representing emotions of an audience of spectators viewing online gaming of a video game include capturing interaction data from spectators of an audience engaged in watching gameplay of the video game. The captured interaction data is used to cluster the spectators into different groups in accordance with emotions detected from the interactions of spectators in the audience. A reaction track is identified for each group based on the emotion associated with the group. The reaction tracks representing the distinct emotions of the different groups of spectators are presented alongside content of the video game.
    Type: Application
    Filed: April 1, 2021
    Publication date: June 2, 2022
    Inventors: David Nelson, Sudha Krishnamurthy, Mahdi Azmandian
  • Publication number: 20220168644
    Abstract: Methods and systems for representing emotions of an audience of spectators viewing online gaming of a video game include capturing interaction data from spectators in an audience engaged in watching gameplay of the video game. The captured interaction data is aggregated by clustering the spectators into different groups in accordance with emotions detected from the spectators in the audience. An avatar is generated to represent the emotion of each group, and the avatar's expressions are dynamically adjusted to match changes in the expressions of the spectators of the respective group. The avatars representing the distinct emotions of the different groups of spectators are presented alongside content of the video game. The size of the avatar for each distinct emotion is influenced by the confidence score associated with the respective group of spectators.
    Type: Application
    Filed: April 1, 2021
    Publication date: June 2, 2022
    Inventors: David Nelson, Sudha Krishnamurthy, Mahdi Azmandian
  • Patent number: 11298622
    Abstract: An automated system improves the spectating experience for a computer game or e-sport by placing spectators in locations based on their preferences, interests, demographics, and changes in emotions or behavior as the game progresses. The spectator may be moved automatically or may elect to move within the virtual space to improve the immersive experience. In some cases, the system may charge for a better spectating experience. The system also detects abusive and inappropriate spectator behavior and allows such behavior to be isolated, in order to improve the spectating experience.
    Type: Grant
    Filed: October 22, 2019
    Date of Patent: April 12, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Sudha Krishnamurthy
  • Publication number: 20210319321
    Abstract: Sound effect recommendations for visual input are generated by training machine learning models that learn coarse-grained and fine-grained audio-visual correlations from a reference image, a positive audio signal, and a negative audio signal. A positive audio embedding related to the reference image is generated from the positive audio signal, and a negative audio embedding is generated from the negative audio signal. A machine learning algorithm uses the reference image, the positive audio embedding, and the negative audio embedding as inputs to train a visual-to-audio correlation neural network to output a smaller distance between the positive audio embedding and the reference image than between the negative audio embedding and the reference image.
    Type: Application
    Filed: April 14, 2020
    Publication date: October 14, 2021
    Inventor: Sudha Krishnamurthy
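The training objective in the abstract above, pushing a related (positive) audio embedding closer to the reference visual embedding than an unrelated (negative) one, is the standard triplet-margin formulation. A minimal sketch with made-up 2-d embeddings and an assumed margin of 0.2 (the patent does not specify the loss or margin):

```python
from math import dist

def triplet_loss(ref_visual, pos_audio, neg_audio, margin=0.2):
    """Triplet objective: the positive audio embedding should sit closer to
    the reference visual embedding than the negative one, by at least `margin`.
    The loss is zero once that separation is achieved."""
    d_pos = dist(ref_visual, pos_audio)
    d_neg = dist(ref_visual, neg_audio)
    return max(0.0, d_pos - d_neg + margin)

ref = (1.0, 0.0)   # visual embedding of the reference image
pos = (0.9, 0.1)   # embedding of a related sound
neg = (0.0, 1.0)   # embedding of an unrelated sound

well_separated = triplet_loss(ref, pos, neg)   # margin already satisfied
violating = triplet_loss(ref, neg, pos)        # negative closer than positive
```

During training, gradients of this loss with respect to the network weights pull positive pairs together and push negative pairs apart.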
  • Publication number: 20210319322
    Abstract: An automated method, system, and computer readable medium generate sound effect recommendations for visual input by training machine learning models that learn audio-visual correlations from a reference image or video, a positive audio signal, and a negative audio signal. A machine learning algorithm is used with a reference visual input and a positive or negative audio signal input to train a multimodal clustering neural network to output representations for the visual input and audio input, as well as correlation scores between the audio and visual representations. The trained multimodal clustering neural network is configured to learn representations in such a way that the visual representation and positive audio representation have higher correlation scores than the visual representation and a negative or unrelated audio representation.
    Type: Application
    Filed: April 14, 2020
    Publication date: October 14, 2021
    Inventor: Sudha Krishnamurthy
  • Publication number: 20210321172
    Abstract: Sound effect recommendations for visual input are generated by training machine learning models that learn coarse-grained and fine-grained audio-visual correlations from a reference visual, a positive audio signal, and a negative audio signal. A trained Sound Recommendation Network is configured to output an audio embedding and a visual embedding and to use them to compute a correlation distance between an image frame or video segment and one or more audio segments retrieved from a database. The correlation distances for the audio segments in the database are sorted, the audio segment with the closest correlation distance is determined, and that segment is applied to the input image frame or video segment.
    Type: Application
    Filed: April 14, 2020
    Publication date: October 14, 2021
    Inventor: Sudha Krishnamurthy
  • Publication number: 20210233328
    Abstract: Graphical style modification may be implemented using machine learning. A color accommodation module receives an image frame from a host system and generates a color-adapted version of the image frame. A Graphical Style Modification module generates a style adapted video stream by applying a style adapted from a target image frame to each image frame in a buffered video stream.
    Type: Application
    Filed: April 12, 2021
    Publication date: July 29, 2021
    Inventors: Sudha Krishnamurthy, Ashish Singh, Naveen Kumar, Justice Adams, Arindam Jati, Masanori Omote
  • Patent number: 11030479
    Abstract: Sound effects (SFX) are registered in a database for efficient search and retrieval. This may be accomplished by classifying SFX and using a machine learning engine to output a first of the classified SFX for a first computer simulation based on learned correlations between video attributes of the first computer simulation and the classified SFX. Subsequently, videos without sound may be processed for object, action, and caption recognition to generate video tags which are semantically matched with SFX tags to associate SFX with the video.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: June 8, 2021
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Sudha Krishnamurthy
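The second half of the abstract above, semantically matching video tags against SFX tags to associate sounds with silent video, can be illustrated with a simple set-overlap score. The tag sets and the Jaccard metric are illustrative assumptions; the patent's semantic matching could use richer similarity measures:

```python
def jaccard(a, b):
    """Jaccard similarity between two tag sets (0.0 when both are empty)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def match_sfx(video_tags, sfx_index):
    """Rank registered SFX by tag overlap with the tags recognized
    from a silent video's objects, actions, and captions."""
    scored = [(name, jaccard(video_tags, tags)) for name, tags in sfx_index.items()]
    return sorted(scored, key=lambda kv: kv[1], reverse=True)

# Hypothetical SFX database entries with classification tags.
sfx_index = {
    "glass_break": {"glass", "shatter", "impact"},
    "door_creak":  {"door", "wood", "creak"},
}

video_tags = {"door", "open", "creak"}   # tags from object/action recognition
ranked = match_sfx(video_tags, sfx_index)
```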
  • Publication number: 20210113929
    Abstract: An automated system improves the spectating experience for a computer game or e-sport by placing spectators in locations based on their preferences, interests, demographics, and changes in emotions or behavior as the game progresses. The spectator may be moved automatically or may elect to move within the virtual space to improve the immersive experience. In some cases, the system may charge for a better spectating experience. The system also detects abusive and inappropriate spectator behavior and allows such behavior to be isolated, in order to improve the spectating experience.
    Type: Application
    Filed: October 22, 2019
    Publication date: April 22, 2021
    Inventor: Sudha Krishnamurthy
  • Patent number: 10977872
    Abstract: Graphical style modification may be implemented using machine learning. A color accommodation module receives an image frame from a host system and generates a color-adapted version of the image frame. A Graphical Style Modification module receives a first image frame from a host system and applies a style adapted from a second image frame to the first image frame to generate a style adapted first image frame.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: April 13, 2021
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventors: Sudha Krishnamurthy, Ashish Singh, Naveen Kumar, Justice Adams, Arindam Jati, Masanori Omote
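The color accommodation step mentioned in the abstract above can be approximated by per-channel statistics matching, shifting a source frame's channel means and standard deviations toward a target frame's. This is a generic color-transfer sketch under that assumption, not the patented machine-learning module:

```python
import numpy as np

def color_transfer(source, target):
    """Produce a color-adapted version of `source` by matching each color
    channel's mean and standard deviation to those of `target`."""
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    # Normalize source channels, then rescale to the target's statistics.
    out = (src - src.mean(axis=(0, 1))) / (src.std(axis=(0, 1)) + 1e-8)
    out = out * tgt.std(axis=(0, 1)) + tgt.mean(axis=(0, 1))
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
source = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)   # full-range frame
target = rng.integers(100, 156, size=(32, 32, 3), dtype=np.uint8) # muted palette
adapted = color_transfer(source, target)
```

Applying this frame by frame to a buffered video stream mirrors the abstract's style-adapted stream, with the learned style module replaced here by fixed statistics.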
  • Publication number: 20210035610
    Abstract: Automatically recommending sound effects based on visual scenes assists sound engineers during video production of computer simulations, such as movies and video games. Such recommendations may be generated by classifying SFX and using a machine learning engine to output a first of the classified SFX for a first computer simulation based on learned correlations between video attributes of the first computer simulation and the classified SFX.
    Type: Application
    Filed: October 21, 2020
    Publication date: February 4, 2021
    Inventors: Sudha Krishnamurthy, Xiaoyu Liu
  • Publication number: 20210023451
    Abstract: A video game console, a video game system, and a computer-implemented method are described. Generally, a video game and video game assistance are adapted to a player. For example, a narrative of the video game is personalized to an experience level of the player. Similarly, assistance in interacting with a particular context of the video game is also personalized. The personalization learns from historical interactions of players with the video game and, optionally, other video games. In an example, a deep learning neural network is implemented to generate knowledge from the historical interactions. The personalization is set according to the knowledge.
    Type: Application
    Filed: October 7, 2020
    Publication date: January 28, 2021
    Inventor: Sudha Krishnamurthy
  • Patent number: 10847186
    Abstract: Automatically recommending sound effects based on visual scenes assists sound engineers during video production of computer simulations, such as movies and video games. Such recommendations may be generated by classifying SFX and using a machine learning engine to output a first of the classified SFX for a first computer simulation based on learned correlations between video attributes of the first computer simulation and the classified SFX.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: November 24, 2020
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Sudha Krishnamurthy, Xiaoyu Liu
  • Patent number: 10828566
    Abstract: A video game console, a video game system, and a computer-implemented method are described. Generally, a video game and video game assistance are adapted to a player. For example, a narrative of the video game is personalized to an experience level of the player. Similarly, assistance in interacting with a particular context of the video game is also personalized. The personalization learns from historical interactions of players with the video game and, optionally, other video games. In an example, a deep learning neural network is implemented to generate knowledge from the historical interactions. The personalization is set according to the knowledge.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: November 10, 2020
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventor: Sudha Krishnamurthy