Patents by Inventor Prakash Yadav

The patent filings listed below name an inventor matching "Prakash Yadav". The listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250168217
    Abstract: Apparatuses, systems, and techniques to predict events within a gameplay session and modify a game broadcast based at least in part on the predicted events. In at least one embodiment, events within the gameplay session are predicted; once an event is predicted, assets are generated, and once the event is detected, those assets are included in the game broadcast.
    Type: Application
    Filed: January 16, 2025
    Publication date: May 22, 2025
    Inventors: Prabindh Sundareson, Sachin Dattatray Pandhare, Shyam Raikar, Toshant Sharma, Prakash Yadav
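    Illustrative sketch (not part of the filing): a minimal Python outline of the pipeline this abstract describes, assuming a simple in-memory session object. The names (BroadcastSession, on_prediction, on_detection, the asset filenames) are hypothetical placeholders: a predicted event triggers asset generation ahead of time, and the prepared asset is only included in the broadcast once the event is actually detected.

```python
# Hypothetical sketch: pre-generate a broadcast asset for a predicted gameplay
# event, then include it in the broadcast once the event is actually detected.
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class BroadcastOverlay:
    event_type: str
    asset: str  # e.g., an identifier for a rendered graphic or pre-composed clip


@dataclass
class BroadcastSession:
    pending_assets: Dict[str, BroadcastOverlay] = field(default_factory=dict)

    def on_prediction(self, event_type: str, confidence: float) -> None:
        """Prediction step: generate the asset ahead of the event."""
        if confidence > 0.8 and event_type not in self.pending_assets:
            self.pending_assets[event_type] = BroadcastOverlay(
                event_type, asset=f"overlay_{event_type}.png")

    def on_detection(self, event_type: str) -> Optional[BroadcastOverlay]:
        """Detection step: hand back the prepared asset for inclusion."""
        return self.pending_assets.pop(event_type, None)


session = BroadcastSession()
session.on_prediction("victory", confidence=0.93)  # event predicted
overlay = session.on_detection("victory")          # event actually detected
if overlay:
    print(f"Including {overlay.asset} in the game broadcast")
```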
  • Publication number: 20240331382
    Abstract: In various examples, natural language processing may be performed on text generated by a game to extract one or more in-game events from the game. The system (e.g., a client device and/or server) may receive the text in the form of one or more strings generated by a game application. The system may then extract one or more in-game events from the text using natural language processing. The game may include the text in a message it sends to the system (e.g., using an Application Programming Interface (API)) and/or in a game log entry or notification. The text may be generated based at least on the game determining one or more conditions are satisfied in the gameplay (e.g., victory, points scored, milestones, eliminations, item acquisition, etc.). The text may be mapped to event templates, which may then be used to extract parameters of events therefrom.
    Type: Application
    Filed: June 11, 2024
    Publication date: October 3, 2024
    Inventors: James Lewis van Welzen, Prakash Yadav, Charu Kalani, Jonathan White
  • Patent number: 12014547
    Abstract: In various examples, natural language processing may be performed on text generated by a game to extract one or more in-game events from the game. The system (e.g., a client device and/or server) may receive the text in the form of one or more strings generated by a game application. The system may then extract one or more in-game events from the text using natural language processing. The game may include the text in a message it sends to the system (e.g., using an Application Programming Interface (API)) and/or in a game log entry or notification. The text may be generated based at least on the game determining one or more conditions are satisfied in the gameplay (e.g., victory, points scored, milestones, eliminations, item acquisition, etc.). The text may be mapped to event templates, which may then be used to extract parameters of events therefrom.
    Type: Grant
    Filed: September 7, 2021
    Date of Patent: June 18, 2024
    Assignee: NVIDIA Corporation
    Inventors: James Lewis van Welzen, Prakash Yadav, Charu Kalani, Jonathan White, Shyam Raikar, Stephen Holmes, David Wilson
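    Illustrative sketch (not part of the filing): one plausible way to realize the template-matching step this abstract describes for patent 12014547, assuming event templates are expressed as regular expressions with named groups. The event names and patterns are invented examples; a real system would carry its own template set.

```python
# Hypothetical sketch: map game-generated text to event templates and
# extract event parameters (player, points, item, etc.) from the matches.
import re

# Each template pairs an event type with a regex whose named groups
# are the event parameters to extract.
EVENT_TEMPLATES = {
    "elimination": re.compile(r"(?P<player>\w+) eliminated (?P<target>\w+)"),
    "points_scored": re.compile(r"(?P<player>\w+) scored (?P<points>\d+) points"),
    "item_acquired": re.compile(r"(?P<player>\w+) picked up (?P<item>[\w ]+)"),
}


def extract_events(game_text: str) -> list:
    """Return a list of {type, **parameters} dicts found in the text."""
    events = []
    for event_type, pattern in EVENT_TEMPLATES.items():
        for match in pattern.finditer(game_text):
            events.append({"type": event_type, **match.groupdict()})
    return events


log_line = "Ripley eliminated Ash. Ripley scored 150 points"
print(extract_events(log_line))
# [{'type': 'elimination', 'player': 'Ripley', 'target': 'Ash'},
#  {'type': 'points_scored', 'player': 'Ripley', 'points': '150'}]
```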
  • Patent number: 11657627
    Abstract: In various examples, frames of a video may include a first visual object that may appear relative to a second visual object within a region of the frames. Once a relationship between the first visual object and the region is known, one or more operations may be performed on the relative region. For example, optical character recognition may be performed on the relative region where the relative region is known to contain textual information. As a result, the identification of the first visual object may serve as an anchor for determining the location of the relative region including the second visual object—thereby increasing accuracy and efficiency of the system while reducing run-time.
    Type: Grant
    Filed: July 1, 2021
    Date of Patent: May 23, 2023
    Assignee: NVIDIA Corporation
    Inventors: James Van Welzen, Jonathan White, David Clark, Nathan Otterness, Jack Van Welzen, Prakash Yadav
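    Illustrative sketch (not part of the filing): the anchoring idea in patent 11657627, assuming the first visual object has already been detected and the text-bearing region sits at a fixed offset from it. The find_anchor and run_ocr functions are placeholders standing in for a real object detector and OCR engine, and the NumPy frame is a stand-in for decoded video.

```python
# Hypothetical sketch: locate a known anchor object in a frame, derive the
# text-bearing region from a fixed offset relative to the anchor, and run
# OCR only on that crop.
from typing import Optional, Tuple

import numpy as np

Box = Tuple[int, int, int, int]  # (x, y, width, height)

# Offset and size of the text region relative to the anchor's top-left
# corner, configured or learned ahead of time: (dx, dy, width, height).
RELATIVE_REGION = (120, -10, 200, 40)


def find_anchor(frame: np.ndarray) -> Optional[Box]:
    """Placeholder: a real system would run an object detector here."""
    return (40, 500, 32, 32)  # hypothetical fixed location for illustration


def run_ocr(crop: np.ndarray) -> str:
    """Placeholder: a real system would call an OCR engine on the crop."""
    return "SCORE 1250"


def read_relative_text(frame: np.ndarray) -> Optional[str]:
    anchor = find_anchor(frame)
    if anchor is None:
        return None  # no anchor, so the text region is not expected either
    ax, ay, _, _ = anchor
    dx, dy, w, h = RELATIVE_REGION
    x, y = ax + dx, ay + dy
    crop = frame[y:y + h, x:x + w]  # crop only the relative region
    return run_ocr(crop)


frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in video frame
print(read_relative_text(frame))  # -> "SCORE 1250"
```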
  • Publication number: 20230128243
    Abstract: Apparatuses, systems, and techniques to predict events within a gameplay session and modify a game broadcast based at least in part on the predicted events. In at least one embodiment, events within the gameplay session are predicted; once an event is predicted, assets are generated, and once the event is detected, those assets are included in the game broadcast.
    Type: Application
    Filed: October 27, 2021
    Publication date: April 27, 2023
    Inventors: Prabindh Sundareson, Sachin Dattatray Pandhare, Shyam Raikar, Toshant Sharma, Prakash Yadav
  • Publication number: 20230121413
    Abstract: In examples, a device's native input interface (e.g., a soft keyboard) may be invoked using interaction areas associated with image frames from an application, such as a game. An area of an image frame(s) from a streamed game video may be designated (e.g., by the game and/or a game server) as an interaction area. When an input event associated with the interaction area is detected, an instruction may be issued to the client device to invoke a user interface (e.g., a soft keyboard) of the client device and may cause the client device to present a graphical input interface. Inputs made to the presented graphical input interface may be accessed by the game streaming client and provided to the game instance.
    Type: Application
    Filed: September 20, 2022
    Publication date: April 20, 2023
    Inventors: Prakash Yadav, Charu Kalani, Stephen Holmes, David Wilson, David Le Tacon, James Lewis van Welzen
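    Illustrative sketch (not part of the filing): the flow described in publication 20230121413, assuming a simple message-passing channel between the streaming server and the client. The InteractionArea shape, message fields, and send_to_client helper are hypothetical; the point is that an input event inside the designated area instructs the client to invoke its native input interface.

```python
# Hypothetical sketch: hit-test pointer events against a designated
# interaction area and instruct the streaming client to show its
# native input interface (e.g., a soft keyboard).
from dataclasses import dataclass


@dataclass
class InteractionArea:
    x: int
    y: int
    width: int
    height: int
    field_id: str  # which in-game text field the area maps to

    def contains(self, px: int, py: int) -> bool:
        return (self.x <= px < self.x + self.width
                and self.y <= py < self.y + self.height)


def send_to_client(message: dict) -> None:
    """Placeholder for the transport to the streaming client."""
    print("-> client:", message)


def handle_input_event(area: InteractionArea, px: int, py: int) -> None:
    """If the event falls inside the interaction area, invoke the
    client's native input interface for the mapped text field."""
    if area.contains(px, py):
        send_to_client({"action": "invoke_soft_keyboard", "field": area.field_id})


chat_box = InteractionArea(x=100, y=600, width=400, height=40, field_id="chat")
handle_input_event(chat_box, px=250, py=620)  # inside -> keyboard invoked
handle_input_event(chat_box, px=10, py=10)    # outside -> ignored
```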
  • Publication number: 20230071358
    Abstract: In various examples, natural language processing may be performed on text generated by a game to extract one or more in-game events from the game. The system (e.g., a client device and/or server) may receive the text in the form of one or more strings generated by a game application. The system may then extract one or more in-game events from the text using natural language processing. The game may include the text in a message it sends to the system (e.g., using an Application Programming Interface (API)) and/or in a game log entry or notification. The text may be generated based at least on the game determining one or more conditions are satisfied in the gameplay (e.g., victory, points scored, milestones, eliminations, item acquisition, etc.). The text may be mapped to event templates, which may then be used to extract parameters of events therefrom.
    Type: Application
    Filed: September 7, 2021
    Publication date: March 9, 2023
    Inventors: James Lewis van Welzen, Prakash Yadav, Charu Kalani, Jonathan White, Shyam Raikar, Stephen Holmes, David Wilson
  • Patent number: 11574654
    Abstract: In various examples, recordings of gameplay sessions are enhanced by the application of special effects to relatively high(er) and/or low(er) interest durations of the gameplay sessions. Durations of relatively high(er) or low(er) predicted interest in a gameplay session are identified, for instance, based upon level of activity engaged in by a gamer during a particular gameplay session duration. Once identified, different variations of video characteristic(s) are applied to at least a portion of the identified durations for implementation during playback. The recordings may be generated and/or played back in real-time with a live gameplay session, or after completion of the gameplay session. Further, video data of the recordings themselves may be modified to include the special effects and/or indications of the durations and/or variations may be included in metadata and used for playback.
    Type: Grant
    Filed: November 15, 2021
    Date of Patent: February 7, 2023
    Assignee: NVIDIA Corporation
    Inventors: Prabindh Sundareson, Prakash Yadav, Himanshu Bhat
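    Illustrative sketch (not part of the filing): the idea in patent 11574654 reduced to a small example, assuming activity per second is available as a score. Windows above a threshold are treated as higher interest and tagged for slow motion, windows below a threshold as lower interest and tagged for fast-forward, and the result is emitted as playback metadata. The activity metric, thresholds, and speed values are illustrative assumptions.

```python
# Hypothetical sketch: tag durations of a gameplay recording with playback
# variations (slow motion for high-interest, fast-forward for low-interest)
# and emit them as metadata a player could apply during playback.
def interest_metadata(activity_per_second, high=5.0, low=1.0):
    """activity_per_second: list of activity scores (e.g., events or inputs
    per second). Returns a list of (start, end, playback_speed) segments."""
    segments = []
    for second, activity in enumerate(activity_per_second):
        if activity >= high:
            speed = 0.5   # slow motion to emphasize high-interest moments
        elif activity <= low:
            speed = 2.0   # fast-forward through low-interest stretches
        else:
            speed = 1.0   # normal playback
        if segments and segments[-1][2] == speed:
            segments[-1] = (segments[-1][0], second + 1, speed)  # extend segment
        else:
            segments.append((second, second + 1, speed))
    return segments


activity = [0.0, 0.5, 2.0, 6.0, 7.5, 3.0, 0.2]
print(interest_metadata(activity))
# [(0, 2, 2.0), (2, 3, 1.0), (3, 5, 0.5), (5, 6, 1.0), (6, 7, 2.0)]
```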
  • Publication number: 20220076704
    Abstract: In various examples, recordings of gameplay sessions are enhanced by the application of special effects to relatively high(er) and/or low(er) interest durations of the gameplay sessions. Durations of relatively high(er) or low(er) predicted interest in a gameplay session are identified, for instance, based upon level of activity engaged in by a gamer during a particular gameplay session duration. Once identified, different variations of video characteristic(s) are applied to at least a portion of the identified durations for implementation during playback. The recordings may be generated and/or played back in real-time with a live gameplay session, or after completion of the gameplay session. Further, video data of the recordings themselves may be modified to include the special effects and/or indications of the durations and/or variations may be included in metadata and used for playback.
    Type: Application
    Filed: November 15, 2021
    Publication date: March 10, 2022
    Inventors: Prabindh Sundareson, Prakash Yadav, Himanshu Bhat
  • Patent number: 11176967
    Abstract: In various examples, recordings of gameplay sessions are enhanced by the application of special effects to relatively high(er) and/or low(er) interest durations of the gameplay sessions. Durations of relatively high(er) or low(er) predicted interest in a gameplay session are identified, for instance, based upon level of activity engaged in by a gamer during a particular gameplay session duration. Once identified, different variations of video characteristic(s) are applied to at least a portion of the identified durations for implementation during playback. The recordings may be generated and/or played back in real-time with a live gameplay session, or after completion of the gameplay session. Further, video data of the recordings themselves may be modified to include the special effects and/or indications of the durations and/or variations may be included in metadata and used for playback.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: November 16, 2021
    Assignee: NVIDIA Corporation
    Inventors: Prabindh Sundareson, Prakash Yadav, Himanshu Bhat
  • Publication number: 20210326627
    Abstract: In various examples, frames of a video may include a first visual object that may appear relative to a second visual object within a region of the frames. Once a relationship between the first visual object and the region is known, one or more operations may be performed on the relative region. For example, optical character recognition may be performed on the relative region where the relative region is known to contain textual information. As a result, the identification of the first visual object may serve as an anchor for determining the location of the relative region including the second visual object—thereby increasing accuracy and efficiency of the system while reducing run-time.
    Type: Application
    Filed: July 1, 2021
    Publication date: October 21, 2021
    Inventors: James Van Welzen, Jonathan White, David Clark, Nathan Otterness, Jack Van Welzen, Prakash Yadav
  • Patent number: 11087162
    Abstract: In various examples, frames of a video may include a first visual object that may appear relative to a second visual object within a region of the frames. Once a relationship between the first visual object and the region is known, one or more operations may be performed on the relative region. For example, optical character recognition may be performed on the relative region where the relative region is known to contain textual information. As a result, the identification of the first visual object may serve as an anchor for determining the location of the relative region including the second visual object—thereby increasing accuracy and efficiency of the system while reducing run-time.
    Type: Grant
    Filed: August 1, 2019
    Date of Patent: August 10, 2021
    Assignee: NVIDIA Corporation
    Inventors: James Van Welzen, Jonathan White, David Clark, Nathan Otterness, Jack Van Welzen, Prakash Yadav
  • Publication number: 20210034906
    Abstract: In various examples, frames of a video may include a first visual object that may appear relative to a second visual object within a region of the frames. Once a relationship between the first visual object and the region is known, one or more operations may be performed on the relative region. For example, optical character recognition may be performed on the relative region where the relative region is known to contain textual information. As a result, the identification of the first visual object may serve as an anchor for determining the location of the relative region including the second visual object—thereby increasing accuracy and efficiency of the system while reducing run-time.
    Type: Application
    Filed: August 1, 2019
    Publication date: February 4, 2021
    Inventors: James Van Welzen, Jonathan White, David Clark, Nathan Otterness, Jack Van Welzen, Prakash Yadav
  • Publication number: 20200411056
    Abstract: In various examples, recordings of gameplay sessions are enhanced by the application of special effects to relatively high(er) and/or low(er) interest durations of the gameplay sessions. Durations of relatively high(er) or low(er) predicted interest in a gameplay session are identified, for instance, based upon level of activity engaged in by a gamer during a particular gameplay session duration. Once identified, different variations of video characteristic(s) are applied to at least a portion of the identified durations for implementation during playback. The recordings may be generated and/or played back in real-time with a live gameplay session, or after completion of the gameplay session. Further, video data of the recordings themselves may be modified to include the special effects and/or indications of the durations and/or variations may be included in metadata and used for playback.
    Type: Application
    Filed: July 14, 2020
    Publication date: December 31, 2020
    Inventors: Prabindh Sundareson, Prakash Yadav, Himanshu Bhat
  • Patent number: 10741215
    Abstract: In various examples, recordings of gameplay sessions are enhanced by the application of special effects to relatively high(er) and/or low(er) interest durations of the gameplay sessions. Durations of relatively high(er) or low(er) predicted interest in a gameplay session are identified, for instance, based upon level of activity engaged in by a gamer during a particular gameplay session duration. Once identified, different variations of video characteristic(s) are applied to at least a portion of the identified durations for implementation during playback. The recordings may be generated and/or played back in real-time with a live gameplay session, or after completion of the gameplay session. Further, video data of the recordings themselves may be modified to include the special effects and/or indications of the durations and/or variations may be included in metadata and used for playback.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: August 11, 2020
    Assignee: NVIDIA Corporation
    Inventors: Prabindh Sundareson, Prakash Yadav, Himanshu Bhat
  • Publication number: 20160326621
    Abstract: A method is provided for repairing an area of damaged coating on a component of a turbine module in a gas turbine engine, the component formed of a base material having a diffusion coating applied to the base material. The repair may be accomplished in place by directly heating the area to which a touch-up coating material has been applied with a hot gas plasma without the need to place the component in an oven for curing and heat treatment of the touch-up coating applied to the damaged area.
    Type: Application
    Filed: July 21, 2016
    Publication date: November 10, 2016
    Inventors: Glenn Lee, Om Prakash Yadav, Dylan Lim
  • Patent number: 9422814
    Abstract: A method is provided for repairing an area of damaged coating on a component of a turbine module in a gas turbine engine, the component formed of a base material having a diffusion coating applied to the base material. The repair may be accomplished in place by directly heating the area to which a touch-up coating material has been applied with a hot gas plasma without the need to place the component in an oven for curing and heat treatment of the touch-up coating applied to the damaged area.
    Type: Grant
    Filed: July 14, 2010
    Date of Patent: August 23, 2016
    Assignee: United Technologies Corporation
    Inventors: Glenn Lee, Om Prakash Yadav, Dylan Lim
  • Publication number: 20140223345
    Abstract: Provided are a method of initiating communication in a computing device that includes a touch-sensitive display, and the computing device itself. The method includes detecting a touch gesture performed on at least one tagged object in an image displayed on the touch-sensitive display and, in response to the detected touch gesture, initiating communication to at least one individual corresponding to the tagged object or objects selected or surrounded by that gesture.
    Type: Application
    Filed: February 3, 2014
    Publication date: August 7, 2014
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Ravitheja Tetali, Anand Prakash Yadav, Joy Bose, Samarth Vinod Deo, Tasleem Arif
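    Illustrative sketch (not part of the filing): the flow in publication 20140223345, assuming the displayed image carries per-region tags that map faces or objects to contacts. The TaggedObject layout and the start_call helper are hypothetical; the gesture is hit-tested against the tagged regions and communication is initiated to the matching contacts.

```python
# Hypothetical sketch: map a touch gesture on tagged regions of a displayed
# image to the contacts those regions represent, then initiate communication.
from dataclasses import dataclass


@dataclass
class TaggedObject:
    contact: str     # individual the tag corresponds to
    region: tuple    # (x, y, width, height) in image pixels

    def hit(self, tx: int, ty: int) -> bool:
        x, y, w, h = self.region
        return x <= tx < x + w and y <= ty < y + h


def start_call(contacts: list) -> None:
    """Placeholder for dialing or opening a group conversation."""
    print("Initiating communication with:", ", ".join(contacts))


def on_touch_gesture(tags: list, touch_points: list) -> None:
    """Collect every tagged object touched by the gesture and contact them."""
    selected = {t.contact for t in tags
                for (tx, ty) in touch_points if t.hit(tx, ty)}
    if selected:
        start_call(sorted(selected))


photo_tags = [TaggedObject("Alice", (10, 10, 80, 100)),
              TaggedObject("Bob", (120, 10, 80, 100))]
on_touch_gesture(photo_tags, touch_points=[(40, 50), (150, 60)])
# Initiating communication with: Alice, Bob
```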
  • Publication number: 20110206533
    Abstract: A method is provided for repairing an area of damaged coating on a component of a turbine module in a gas turbine engine, the component formed of a base material having a diffusion coating applied to the base material. The repair may be accomplished in place by directly heating the area to which a touch-up coating material has been applied with a hot gas plasma without the need to place the component in an oven for curing and heat treatment of the touch-up coating applied to the damaged area.
    Type: Application
    Filed: July 14, 2010
    Publication date: August 25, 2011
    Applicant: United Technologies Corporation
    Inventors: Glenn Lee, Om Prakash Yadav, Dylan Lim
  • Publication number: 20100026650
    Abstract: A method and system for emphasizing objects are disclosed. The method includes receiving input characters from a user of an electronic device and predicting one or more characters based on the input characters. The method also includes calculating the distance of each predicted character from the last input character, calculating an emphasizing priority for each predicted character based on its prediction priority and its distance from the last input character, and emphasizing the predicted characters according to their emphasizing priorities.
    Type: Application
    Filed: July 29, 2009
    Publication date: February 4, 2010
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Alok Srivastava, Amitabh Ranjan, Anand Prakash Yadav, Jayanth Kumar Jaya Kumar, Malur Nagendra Srivatsa Bharadwaj, Sushanth Bangalore Ramesh, Tarun Pangti, Vaibhav Negi
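    Illustrative sketch (not part of the filing): the prioritization described in publication 20100026650, assuming a flat QWERTY coordinate map. Each predicted next character receives an emphasizing priority that combines its base prediction priority with its distance from the last input character, so likely keys close to the last keypress are emphasized most. The key coordinates and the weighting formula are illustrative assumptions.

```python
# Hypothetical sketch: rank predicted characters by an emphasizing priority
# derived from prediction priority and distance from the last typed key.
import math

# Simplified key coordinates (column, row) for a QWERTY layout.
KEY_POS = {c: (col, row)
           for row, keys in enumerate(["qwertyuiop", "asdfghjkl", "zxcvbnm"])
           for col, c in enumerate(keys)}


def emphasizing_priority(predictions: dict, last_char: str) -> list:
    """predictions: {char: base_priority}. Returns (char, score) pairs,
    highest emphasis first; closer keys with higher base priority win."""
    lx, ly = KEY_POS[last_char]
    scored = []
    for char, base in predictions.items():
        cx, cy = KEY_POS[char]
        distance = math.hypot(cx - lx, cy - ly)
        scored.append((char, base / (1.0 + distance)))  # illustrative weighting
    return sorted(scored, key=lambda item: item[1], reverse=True)


# After typing 't', suppose the predictor proposes these next characters.
print(emphasizing_priority({"h": 0.6, "r": 0.5, "e": 0.4}, last_char="t"))
# 'r' is adjacent to 't', so it edges out the slightly likelier but farther 'h'.
```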