Patents by Inventor Sudha Krishnamurthy

Sudha Krishnamurthy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11298622
Abstract: An automated system that improves the spectating experience for a computer game or e-sport by placing spectators in locations based on their preferences, interests, demographics, and changes in emotions or behavior as the game progresses. The spectator may be moved automatically or may elect to move within the virtual space to improve the immersive experience. In some cases, the system may charge for a better spectating experience. The system also detects abusive and inappropriate spectator behavior and allows such behavior to be isolated, in order to improve the spectating experience.
    Type: Grant
    Filed: October 22, 2019
    Date of Patent: April 12, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Sudha Krishnamurthy
  • Publication number: 20210319321
Abstract: Sound effect recommendations for visual input are generated by training machine learning models that learn coarse-grained and fine-grained audio-visual correlations from a reference image, a positive audio signal, and a negative audio signal. A positive audio embedding related to the reference image is generated from the positive audio signal, and a negative audio embedding is generated from the negative audio signal. A machine learning algorithm uses the reference image, the positive audio embedding, and the negative audio embedding as inputs to train a visual-to-audio correlation neural network to output a smaller distance between the positive audio embedding and the reference image than between the negative audio embedding and the reference image.
    Type: Application
    Filed: April 14, 2020
    Publication date: October 14, 2021
    Inventor: Sudha Krishnamurthy
  • Publication number: 20210319322
    Abstract: An automated method, system, and computer readable medium for generating sound effect recommendations for visual input by training machine learning models that learn audio-visual correlations from a reference image or video, a positive audio signal, and a negative audio signal. A machine learning algorithm is used with a reference visual input, a positive audio signal input or a negative audio signal input to train a multimodal clustering neural network to output representations for the visual input and audio input as well as correlation scores between the audio and visual representations. The trained multimodal clustering neural network is configured to learn representations in such a way that the visual representation and positive audio representation have higher correlation scores than the visual representation and a negative audio representation or an unrelated audio representation.
    Type: Application
    Filed: April 14, 2020
    Publication date: October 14, 2021
    Inventor: Sudha Krishnamurthy
  • Publication number: 20210321172
    Abstract: Sound effect recommendations for visual input are generated by training machine learning models that learn coarse-grained and fine-grained audio-visual correlations from a reference visual, a positive audio signal, and a negative audio signal. A trained Sound Recommendation Network is configured to output an audio embedding and a visual embedding and use the audio embedding and visual embedding to compute a correlation distance between an image frame or video segment and one or more audio segments retrieved from a database. The correlation distances for the one or more audio segments in the database are sorted and one or more audio segments with the closest correlation distance from the sorted audio correlation distances are determined. The audio segment with the closest audio correlation distance is applied to the input image frame or video segment.
    Type: Application
    Filed: April 14, 2020
    Publication date: October 14, 2021
    Inventor: Sudha Krishnamurthy
  • Publication number: 20210233328
    Abstract: Graphical style modification may be implemented using machine learning. A color accommodation module receives an image frame from a host system and generates a color-adapted version of the image frame. A Graphical Style Modification module generates a style adapted video stream by applying a style adapted from a target image frame to each image frame in a buffered video stream.
    Type: Application
    Filed: April 12, 2021
    Publication date: July 29, 2021
    Inventors: Sudha Krishnamurthy, Ashish Singh, Naveen Kumar, Justice Adams, Arindam Jati, Masanori Omote
  • Patent number: 11030479
    Abstract: Sound effects (SFX) are registered in a database for efficient search and retrieval. This may be accomplished by classifying SFX and using a machine learning engine to output a first of the classified SFX for a first computer simulation based on learned correlations between video attributes of the first computer simulation and the classified SFX. Subsequently, videos without sound may be processed for object, action, and caption recognition to generate video tags which are semantically matched with SFX tags to associate SFX with the video.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: June 8, 2021
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Sudha Krishnamurthy
  • Publication number: 20210113929
Abstract: An automated system that improves the spectating experience for a computer game or e-sport by placing spectators in locations based on their preferences, interests, demographics, and changes in emotions or behavior as the game progresses. The spectator may be moved automatically or may elect to move within the virtual space to improve the immersive experience. In some cases, the system may charge for a better spectating experience. The system also detects abusive and inappropriate spectator behavior and allows such behavior to be isolated, in order to improve the spectating experience.
    Type: Application
    Filed: October 22, 2019
    Publication date: April 22, 2021
    Inventor: Sudha Krishnamurthy
  • Patent number: 10977872
    Abstract: Graphical style modification may be implemented using machine learning. A color accommodation module receives an image frame from a host system and generates a color-adapted version of the image frame. A Graphical Style Modification module receives a first image frame from a host system and applies a style adapted from a second image frame to the first image frame to generate a style adapted first image frame.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: April 13, 2021
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventors: Sudha Krishnamurthy, Ashish Singh, Naveen Kumar, Justice Adams, Arindam Jati, Masanori Omote
  • Publication number: 20210035610
Abstract: Automatically recommending sound effects based on visual scenes assists sound engineers during video production of computer simulations, such as movies and video games. This recommendation engine may be accomplished by classifying SFX and using a machine learning engine to output a first of the classified SFX for a first computer simulation based on learned correlations between video attributes of the first computer simulation and the classified SFX.
    Type: Application
    Filed: October 21, 2020
    Publication date: February 4, 2021
    Inventors: Sudha Krishnamurthy, Xiaoyu Liu
  • Publication number: 20210023451
    Abstract: A video game console, a video game system, and a computer-implemented method are described. Generally, a video game and video game assistance are adapted to a player. For example, a narrative of the video game is personalized to an experience level of the player. Similarly, assistance in interacting with a particular context of the video game is also personalized. The personalization learns from historical interactions of players with the video game and, optionally, other video games. In an example, a deep learning neural network is implemented to generate knowledge from the historical interactions. The personalization is set according to the knowledge.
    Type: Application
    Filed: October 7, 2020
    Publication date: January 28, 2021
    Inventor: Sudha Krishnamurthy
  • Patent number: 10847186
Abstract: Automatically recommending sound effects based on visual scenes assists sound engineers during video production of computer simulations, such as movies and video games. This recommendation engine may be accomplished by classifying SFX and using a machine learning engine to output a first of the classified SFX for a first computer simulation based on learned correlations between video attributes of the first computer simulation and the classified SFX.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: November 24, 2020
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Sudha Krishnamurthy, Xiaoyu Liu
  • Patent number: 10828566
    Abstract: A video game console, a video game system, and a computer-implemented method are described. Generally, a video game and video game assistance are adapted to a player. For example, a narrative of the video game is personalized to an experience level of the player. Similarly, assistance in interacting with a particular context of the video game is also personalized. The personalization learns from historical interactions of players with the video game and, optionally, other video games. In an example, a deep learning neural network is implemented to generate knowledge from the historical interactions. The personalization is set according to the knowledge.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: November 10, 2020
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventor: Sudha Krishnamurthy
  • Patent number: 10828567
    Abstract: A video game console, a video game system, and a computer-implemented method are described. Generally, a video game and video game assistance are adapted to a player. For example, a narrative of the video game is personalized to an experience level of the player. Similarly, assistance in interacting with a particular context of the video game is also personalized. The personalization learns from historical interactions of players with the video game and, optionally, other video games. In an example, a deep learning neural network is implemented to generate knowledge from the historical interactions. The personalization is set according to the knowledge.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: November 10, 2020
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventor: Sudha Krishnamurthy
  • Publication number: 20200349975
Abstract: Automatically recommending sound effects based on visual scenes assists sound engineers during video production of computer simulations, such as movies and video games. This recommendation engine may be accomplished by classifying SFX and using a machine learning engine to output a first of the classified SFX for a first computer simulation based on learned correlations between video attributes of the first computer simulation and the classified SFX.
    Type: Application
    Filed: April 30, 2019
    Publication date: November 5, 2020
    Inventors: Sudha Krishnamurthy, Xiaoyu Liu
  • Publication number: 20200349387
    Abstract: Sound effects (SFX) are registered in a database for efficient search and retrieval. This may be accomplished by classifying SFX and using a machine learning engine to output a first of the classified SFX for a first computer simulation based on learned correlations between video attributes of the first computer simulation and the classified SFX. Subsequently, videos without sound may be processed for object, action, and caption recognition to generate video tags which are semantically matched with SFX tags to associate SFX with the video.
    Type: Application
    Filed: April 30, 2019
    Publication date: November 5, 2020
    Inventor: Sudha Krishnamurthy
  • Publication number: 20200129860
Abstract: A system enhances existing audio-visual content with audio describing the action and setting of the visual content. The system may also provide subtitle content describing the important sound or sounds occurring within the audio. Accommodation for color or visual impairments may be implemented by selective color substitution. A Graphical Style Modification module may apply a style from one image to another to adapt the style of a video per a gamer's preference.
    Type: Application
    Filed: October 31, 2018
    Publication date: April 30, 2020
    Inventors: Justice Adams, Arindam Jati, Sudha Krishnamurthy, Masanori Omote, Jian Zheng, Naveen Kumar, Min-Heng Chen, Ashish Singh
  • Publication number: 20200134316
    Abstract: A system enhances existing audio-visual content with audio describing the setting of the visual content. A scene annotation module classifies scene elements from an image frame received from a host system and generates a caption describing the scene elements.
    Type: Application
    Filed: October 31, 2018
    Publication date: April 30, 2020
    Inventors: Sudha Krishnamurthy, Justice Adams, Arindam Jati, Masanori Omote, Jian Zheng
  • Publication number: 20200134929
    Abstract: Graphical style modification may be implemented using machine learning. A color accommodation module receives an image frame from a host system and generates a color-adapted version of the image frame. A Graphical Style Modification module receives a first image frame from a host system and applies a style adapted from a second image frame to the first image frame to generate a style adapted first image frame.
    Type: Application
    Filed: October 31, 2018
    Publication date: April 30, 2020
    Inventors: Sudha Krishnamurthy, Ashish Singh, Naveen Kumar, Justice Adams, Arindam Jati, Masanori Omote
  • Publication number: 20190341025
Abstract: A system and method for multimodal classification of user characteristics is described. The method comprises receiving audio, video, and textual inputs; extracting fundamental frequency information from the audio input; extracting feature information from the video input; and classifying the fundamental frequency information, textual information, and video feature information using a multimodal neural network.
    Type: Application
    Filed: April 15, 2019
    Publication date: November 7, 2019
    Inventors: Masanori Omote, Ruxin Chen, Xavier Menendez-Pidal, Jaekwon Yoo, Koji Tashiro, Sudha Krishnamurthy, Komath Naveen Kumar
  • Publication number: 20190060759
    Abstract: A video game console, a video game system, and a computer-implemented method are described. Generally, a video game and video game assistance are adapted to a player. For example, a narrative of the video game is personalized to an experience level of the player. Similarly, assistance in interacting with a particular context of the video game is also personalized. The personalization learns from historical interactions of players with the video game and, optionally, other video games. In an example, a deep learning neural network is implemented to generate knowledge from the historical interactions. The personalization is set according to the knowledge.
    Type: Application
    Filed: October 29, 2018
    Publication date: February 28, 2019
    Applicant: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventor: Sudha Krishnamurthy
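
The triplet-style objective described in publications 20210319321 and 20210319322 — training a network so the positive audio embedding lands closer to the visual embedding than the negative one — follows the shape of a standard triplet margin loss. The sketch below is a generic illustration under that assumption, not the patented implementation; all names and values are hypothetical:

```python
import numpy as np

def triplet_margin_loss(visual_emb, pos_audio_emb, neg_audio_emb, margin=1.0):
    """Penalize the network unless the positive audio embedding is closer
    to the visual embedding than the negative one by at least `margin`."""
    d_pos = np.linalg.norm(visual_emb - pos_audio_emb)
    d_neg = np.linalg.norm(visual_emb - neg_audio_emb)
    return max(0.0, d_pos - d_neg + margin)

# A well-trained network drives the loss to zero: the related sound
# effect sits much closer to the image than the unrelated one.
v   = np.array([1.0, 0.0, 0.0])    # reference image embedding
pos = np.array([0.9, 0.1, 0.0])    # related sound effect
neg = np.array([-1.0, 0.5, 0.3])   # unrelated sound effect
print(triplet_margin_loss(v, pos, neg))  # → 0.0
```

Swapping the positive and negative inputs yields a large positive loss, which is exactly the gradient signal that pulls related audio toward the image during training.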
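
Publication 20210321172 describes retrieving sound effects by sorting correlation distances between a visual embedding and audio embeddings in a database. A minimal nearest-neighbor sketch of that retrieval step (a toy illustration with made-up embeddings and effect names, not the patented network):

```python
import numpy as np

def recommend_sfx(visual_emb, audio_db, top_k=2):
    """Rank database audio embeddings by Euclidean distance to the
    visual embedding and return the closest effect names."""
    ranked = sorted(audio_db.items(),
                    key=lambda kv: np.linalg.norm(visual_emb - kv[1]))
    return [name for name, _ in ranked[:top_k]]

# Hypothetical 2-D embeddings standing in for the learned audio space.
audio_db = {
    "footsteps": np.array([0.9, 0.1]),
    "explosion": np.array([-0.8, 0.6]),
    "rain":      np.array([0.7, 0.3]),
}
print(recommend_sfx(np.array([1.0, 0.0]), audio_db))  # → ['footsteps', 'rain']
```

The effect with the smallest distance is then applied to the input image frame or video segment, as the abstract describes.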
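
Patent 11030479 mentions semantically matching video tags (from object, action, and caption recognition) against SFX tags to associate effects with silent video. A toy matcher using plain tag overlap conveys the idea; the real system presumably uses learned semantic similarity, and every tag and effect name here is invented for illustration:

```python
def match_sfx_by_tags(video_tags, sfx_library):
    """Score each SFX entry by how many tags it shares with the video
    and return the best-matching effect name (toy overlap matcher)."""
    def overlap(tags):
        return len(set(video_tags) & set(tags))
    return max(sfx_library, key=lambda name: overlap(sfx_library[name]))

sfx_library = {
    "sword_clash": ["metal", "impact", "battle"],
    "door_creak":  ["wood", "door", "slow"],
}
video_tags = ["battle", "metal", "knight"]
print(match_sfx_by_tags(video_tags, sfx_library))  # → sword_clash
```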