Patents by Inventor Prakash Yadav
Prakash Yadav has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250168217
Abstract: Apparatuses, systems, and techniques to predict events within a gameplay session and modify a game broadcast based at least in part on the predicted events. In at least one embodiment, events within the gameplay session are predicted; once an event is predicted, assets are generated, and once the event is detected, the assets are included in the game broadcast.
Type: Application
Filed: January 16, 2025
Publication date: May 22, 2025
Inventors: Prabindh Sundareson, Sachin Dattatray Pandhare, Shyam Raikar, Toshant Sharma, Prakash Yadav
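To make the predict-then-insert flow above concrete, here is a minimal Python sketch (not taken from the patent; the predictor logic, asset names, and thresholds are assumptions) of predicting an event, pre-generating an asset for it, and including that asset in the broadcast once the event is actually detected:

```python
# Hypothetical sketch of the predict -> pre-generate -> insert flow described above.
# predict_events(), the telemetry fields, and the asset placeholders are illustrative only.
from dataclasses import dataclass


@dataclass
class PredictedEvent:
    kind: str          # e.g. "victory", "elimination"
    confidence: float


def predict_events(telemetry: dict) -> list[PredictedEvent]:
    """Very rough stand-in for a trained event-prediction model."""
    events = []
    if telemetry.get("enemies_remaining", 99) <= 1:
        events.append(PredictedEvent("victory", 0.8))
    return events


def run_broadcast_tick(telemetry: dict, detected: set[str], broadcast: list, asset_cache: dict):
    # 1) Predict events and pre-generate assets so they are ready before detection.
    for event in predict_events(telemetry):
        if event.confidence > 0.5 and event.kind not in asset_cache:
            asset_cache[event.kind] = f"overlay_for_{event.kind}.png"  # placeholder asset
    # 2) Once an event is actually detected, include its pre-generated asset in the broadcast.
    for kind in detected:
        if kind in asset_cache:
            broadcast.append(asset_cache[kind])
```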
-
Publication number: 20240331382
Abstract: In various examples, natural language processing may be performed on text generated by a game to extract one or more in-game events from the game. The system (e.g., a client device and/or server) may receive the text in the form of one or more strings generated by a game application. The system may then extract one or more in-game events from the text using natural language processing. The game may include the text in a message it sends to the system (e.g., using an Application Programming Interface (API)) and/or in a game log entry or notification. The text may be generated based at least on the game determining one or more conditions are satisfied in the gameplay (e.g., victory, points scored, milestones, eliminations, item acquisition, etc.). The text may be mapped to event templates, which may then be used to extract parameters of events therefrom.
Type: Application
Filed: June 11, 2024
Publication date: October 3, 2024
Inventors: James Lewis van Welzen, Prakash Yadav, Charu Kalani, Jonathan White
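As a rough illustration of the template-mapping step described above (the templates, patterns, and field names below are assumptions, not the patent's), game-generated strings can be matched against event templates and their parameters extracted:

```python
# Illustrative sketch: map game-generated text to event templates and pull out parameters.
# The templates and parameter names here are assumed for demonstration.
import re

EVENT_TEMPLATES = {
    "elimination": re.compile(r"(?P<player>\w+) eliminated (?P<target>\w+)"),
    "points_scored": re.compile(r"(?P<player>\w+) scored (?P<points>\d+) points"),
}


def extract_events(text: str) -> list[dict]:
    """Return one event dict per template match, carrying the extracted parameters."""
    events = []
    for event_type, pattern in EVENT_TEMPLATES.items():
        for match in pattern.finditer(text):
            events.append({"type": event_type, **match.groupdict()})
    return events


print(extract_events("PlayerOne scored 30 points. PlayerTwo eliminated PlayerThree"))
# -> an elimination event for PlayerTwo/PlayerThree and a points_scored event for PlayerOne
```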
-
Patent number: 12014547
Abstract: In various examples, natural language processing may be performed on text generated by a game to extract one or more in-game events from the game. The system (e.g., a client device and/or server) may receive the text in the form of one or more strings generated by a game application. The system may then extract one or more in-game events from the text using natural language processing. The game may include the text in a message it sends to the system (e.g., using an Application Programming Interface (API)) and/or in a game log entry or notification. The text may be generated based at least on the game determining one or more conditions are satisfied in the gameplay (e.g., victory, points scored, milestones, eliminations, item acquisition, etc.). The text may be mapped to event templates, which may then be used to extract parameters of events therefrom.
Type: Grant
Filed: September 7, 2021
Date of Patent: June 18, 2024
Assignee: NVIDIA Corporation
Inventors: James Lewis van Welzen, Prakash Yadav, Charu Kalani, Jonathan White, Shyam Raikar, Stephen Holmes, David Wilson
-
Patent number: 11657627
Abstract: In various examples, frames of a video may include a first visual object that may appear relative to a second visual object within a region of the frames. Once a relationship between the first visual object and the region is known, one or more operations may be performed on the relative region. For example, optical character recognition may be performed on the relative region where the relative region is known to contain textual information. As a result, the identification of the first visual object may serve as an anchor for determining the location of the relative region including the second visual object, thereby increasing accuracy and efficiency of the system while reducing run-time.
Type: Grant
Filed: July 1, 2021
Date of Patent: May 23, 2023
Assignee: NVIDIA Corporation
Inventors: James Van Welzen, Jonathan White, David Clark, Nathan Otterness, Jack Van Welzen, Prakash Yadav
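A simplified sketch of the anchoring idea, assuming a stand-in object detector and pytesseract as an example OCR engine; the offsets, sizes, and function names are illustrative, not values from the patent:

```python
# Sketch of anchor-based relative-region OCR on a NumPy image array.
# detect_anchor() stands in for any object detector; the offsets below are assumed.
import pytesseract

# Assumed known relationship: the text region sits at a fixed offset from the anchor object.
RELATIVE_OFFSET = (120, 0)   # (dx, dy) from the anchor's top-left corner
RELATIVE_SIZE = (200, 40)    # (width, height) of the text region


def detect_anchor(frame):
    """Placeholder for an object detector; returns the anchor's (x, y) top-left corner."""
    return (50, 100)  # hard-coded for illustration


def read_relative_text(frame) -> str:
    x, y = detect_anchor(frame)
    dx, dy = RELATIVE_OFFSET
    w, h = RELATIVE_SIZE
    region = frame[y + dy : y + dy + h, x + dx : x + dx + w]
    # OCR is confined to the small relative region instead of the whole frame.
    return pytesseract.image_to_string(region)
```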
-
Publication number: 20230128243
Abstract: Apparatuses, systems, and techniques to predict events within a gameplay session and modify a game broadcast based at least in part on the predicted events. In at least one embodiment, events within the gameplay session are predicted; once an event is predicted, assets are generated, and once the event is detected, the assets are included in the game broadcast.
Type: Application
Filed: October 27, 2021
Publication date: April 27, 2023
Inventors: Prabindh Sundareson, Sachin Dattatray Pandhare, Shyam Raikar, Toshant Sharma, Prakash Yadav
-
Publication number: 20230121413
Abstract: In examples, a device's native input interface (e.g., a soft keyboard) may be invoked using interaction areas associated with image frames from an application, such as a game. An area of one or more image frames from a streamed game video may be designated (e.g., by the game and/or a game server) as an interaction area. When an input event associated with the interaction area is detected, an instruction may be issued to the client device to invoke a user interface (e.g., a soft keyboard) of the client device and may cause the client device to present a graphical input interface. Inputs made to the presented graphical input interface may be accessed by the game streaming client and provided to the game instance.
Type: Application
Filed: September 20, 2022
Publication date: April 20, 2023
Inventors: Prakash Yadav, Charu Kalani, Stephen Holmes, David Wilson, David Le Tacon, James Lewis van Welzen
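A hedged server-side sketch of the interaction-area check described above; the area format, message contents, and the send_to_client callback are assumptions for illustration:

```python
# Illustrative sketch: interaction areas designated per frame, and an instruction sent to
# the streaming client when an input event lands inside one. Message format is assumed.
from dataclasses import dataclass


@dataclass
class InteractionArea:
    x: int
    y: int
    width: int
    height: int
    field_id: str  # which in-game text field this area maps to

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.width and self.y <= py < self.y + self.height


def handle_click(px: int, py: int, areas: list[InteractionArea], send_to_client) -> bool:
    for area in areas:
        if area.contains(px, py):
            # Ask the client device to invoke its native input interface (soft keyboard);
            # text entered there would later be routed back to the game instance.
            send_to_client({"action": "show_soft_keyboard", "field_id": area.field_id})
            return True
    return False
```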
-
Publication number: 20230071358
Abstract: In various examples, natural language processing may be performed on text generated by a game to extract one or more in-game events from the game. The system (e.g., a client device and/or server) may receive the text in the form of one or more strings generated by a game application. The system may then extract one or more in-game events from the text using natural language processing. The game may include the text in a message it sends to the system (e.g., using an Application Programming Interface (API)) and/or in a game log entry or notification. The text may be generated based at least on the game determining one or more conditions are satisfied in the gameplay (e.g., victory, points scored, milestones, eliminations, item acquisition, etc.). The text may be mapped to event templates, which may then be used to extract parameters of events therefrom.
Type: Application
Filed: September 7, 2021
Publication date: March 9, 2023
Inventors: James Lewis van Welzen, Prakash Yadav, Charu Kalani, Jonathan White, Shyam Raikar, Stephen Holmes, David Wilson
-
Patent number: 11574654
Abstract: In various examples, recordings of gameplay sessions are enhanced by the application of special effects to relatively high(er) and/or low(er) interest durations of the gameplay sessions. Durations of relatively high(er) or low(er) predicted interest in a gameplay session are identified, for instance, based upon level of activity engaged in by a gamer during a particular gameplay session duration. Once identified, different variations of video characteristic(s) are applied to at least a portion of the identified durations for implementation during playback. The recordings may be generated and/or played back in real-time with a live gameplay session, or after completion of the gameplay session. Further, video data of the recordings themselves may be modified to include the special effects and/or indications of the durations and/or variations may be included in metadata and used for playback.
Type: Grant
Filed: November 15, 2021
Date of Patent: February 7, 2023
Assignee: NVIDIA Corporation
Inventors: Prabindh Sundareson, Prakash Yadav, Himanshu Bhat
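A minimal sketch of this idea, using an assumed per-segment activity score and playback speed as the example video characteristic being varied; the thresholds and speeds are illustrative only:

```python
# Sketch only: tag each segment of a recording with a playback-speed variation based on a
# hypothetical per-segment activity score; a real system would use richer interest signals.
def plan_playback(segments: list[tuple[float, float, float]]) -> list[dict]:
    """segments: list of (start_sec, end_sec, activity_score in [0, 1])."""
    plan = []
    for start, end, activity in segments:
        if activity > 0.7:
            speed = 0.5   # high-interest duration: slow motion
        elif activity < 0.2:
            speed = 2.0   # low-interest duration: fast-forward
        else:
            speed = 1.0   # ordinary duration: unchanged
        plan.append({"start": start, "end": end, "speed": speed})
    return plan  # could be written into metadata and applied at playback time


print(plan_playback([(0, 30, 0.1), (30, 40, 0.9)]))
```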
-
Publication number: 20220076704
Abstract: In various examples, recordings of gameplay sessions are enhanced by the application of special effects to relatively high(er) and/or low(er) interest durations of the gameplay sessions. Durations of relatively high(er) or low(er) predicted interest in a gameplay session are identified, for instance, based upon level of activity engaged in by a gamer during a particular gameplay session duration. Once identified, different variations of video characteristic(s) are applied to at least a portion of the identified durations for implementation during playback. The recordings may be generated and/or played back in real-time with a live gameplay session, or after completion of the gameplay session. Further, video data of the recordings themselves may be modified to include the special effects and/or indications of the durations and/or variations may be included in metadata and used for playback.
Type: Application
Filed: November 15, 2021
Publication date: March 10, 2022
Inventors: Prabindh Sundareson, Prakash Yadav, Himanshu Bhat
-
Patent number: 11176967
Abstract: In various examples, recordings of gameplay sessions are enhanced by the application of special effects to relatively high(er) and/or low(er) interest durations of the gameplay sessions. Durations of relatively high(er) or low(er) predicted interest in a gameplay session are identified, for instance, based upon level of activity engaged in by a gamer during a particular gameplay session duration. Once identified, different variations of video characteristic(s) are applied to at least a portion of the identified durations for implementation during playback. The recordings may be generated and/or played back in real-time with a live gameplay session, or after completion of the gameplay session. Further, video data of the recordings themselves may be modified to include the special effects and/or indications of the durations and/or variations may be included in metadata and used for playback.
Type: Grant
Filed: July 14, 2020
Date of Patent: November 16, 2021
Assignee: NVIDIA Corporation
Inventors: Prabindh Sundareson, Prakash Yadav, Himanshu Bhat
-
Publication number: 20210326627
Abstract: In various examples, frames of a video may include a first visual object that may appear relative to a second visual object within a region of the frames. Once a relationship between the first visual object and the region is known, one or more operations may be performed on the relative region. For example, optical character recognition may be performed on the relative region where the relative region is known to contain textual information. As a result, the identification of the first visual object may serve as an anchor for determining the location of the relative region including the second visual object, thereby increasing accuracy and efficiency of the system while reducing run-time.
Type: Application
Filed: July 1, 2021
Publication date: October 21, 2021
Inventors: James Van Welzen, Jonathan White, David Clark, Nathan Otterness, Jack Van Welzen, Prakash Yadav
-
Patent number: 11087162
Abstract: In various examples, frames of a video may include a first visual object that may appear relative to a second visual object within a region of the frames. Once a relationship between the first visual object and the region is known, one or more operations may be performed on the relative region. For example, optical character recognition may be performed on the relative region where the relative region is known to contain textual information. As a result, the identification of the first visual object may serve as an anchor for determining the location of the relative region including the second visual object, thereby increasing accuracy and efficiency of the system while reducing run-time.
Type: Grant
Filed: August 1, 2019
Date of Patent: August 10, 2021
Assignee: NVIDIA Corporation
Inventors: James Van Welzen, Jonathan White, David Clark, Nathan Otterness, Jack Van Welzen, Prakash Yadav
-
Publication number: 20210034906
Abstract: In various examples, frames of a video may include a first visual object that may appear relative to a second visual object within a region of the frames. Once a relationship between the first visual object and the region is known, one or more operations may be performed on the relative region. For example, optical character recognition may be performed on the relative region where the relative region is known to contain textual information. As a result, the identification of the first visual object may serve as an anchor for determining the location of the relative region including the second visual object, thereby increasing accuracy and efficiency of the system while reducing run-time.
Type: Application
Filed: August 1, 2019
Publication date: February 4, 2021
Inventors: James Van Welzen, Jonathan White, David Clark, Nathan Otterness, Jack Van Welzen, Prakash Yadav
-
Publication number: 20200411056
Abstract: In various examples, recordings of gameplay sessions are enhanced by the application of special effects to relatively high(er) and/or low(er) interest durations of the gameplay sessions. Durations of relatively high(er) or low(er) predicted interest in a gameplay session are identified, for instance, based upon level of activity engaged in by a gamer during a particular gameplay session duration. Once identified, different variations of video characteristic(s) are applied to at least a portion of the identified durations for implementation during playback. The recordings may be generated and/or played back in real-time with a live gameplay session, or after completion of the gameplay session. Further, video data of the recordings themselves may be modified to include the special effects and/or indications of the durations and/or variations may be included in metadata and used for playback.
Type: Application
Filed: July 14, 2020
Publication date: December 31, 2020
Inventors: Prabindh Sundareson, Prakash Yadav, Himanshu Bhat
-
Patent number: 10741215
Abstract: In various examples, recordings of gameplay sessions are enhanced by the application of special effects to relatively high(er) and/or low(er) interest durations of the gameplay sessions. Durations of relatively high(er) or low(er) predicted interest in a gameplay session are identified, for instance, based upon level of activity engaged in by a gamer during a particular gameplay session duration. Once identified, different variations of video characteristic(s) are applied to at least a portion of the identified durations for implementation during playback. The recordings may be generated and/or played back in real-time with a live gameplay session, or after completion of the gameplay session. Further, video data of the recordings themselves may be modified to include the special effects and/or indications of the durations and/or variations may be included in metadata and used for playback.
Type: Grant
Filed: July 31, 2019
Date of Patent: August 11, 2020
Assignee: NVIDIA Corporation
Inventors: Prabindh Sundareson, Prakash Yadav, Himanshu Bhat
-
Publication number: 20160326621
Abstract: A method is provided for repairing an area of damaged coating on a component of a turbine module in a gas turbine engine, the component formed of a base material having a diffusion coating applied to the base material. The repair may be accomplished in place by directly heating the area to which a touch-up coating material has been applied with a hot gas plasma, without the need to place the component in an oven for curing and heat treatment of the touch-up coating applied to the damaged area.
Type: Application
Filed: July 21, 2016
Publication date: November 10, 2016
Inventors: Glenn Lee, Om Prakash Yadav, Dylan Lim
-
Patent number: 9422814
Abstract: A method is provided for repairing an area of damaged coating on a component of a turbine module in a gas turbine engine, the component formed of a base material having a diffusion coating applied to the base material. The repair may be accomplished in place by directly heating the area to which a touch-up coating material has been applied with a hot gas plasma, without the need to place the component in an oven for curing and heat treatment of the touch-up coating applied to the damaged area.
Type: Grant
Filed: July 14, 2010
Date of Patent: August 23, 2016
Assignee: United Technologies Corporation
Inventors: Glenn Lee, Om Prakash Yadav, Dylan Lim
-
Publication number: 20140223345
Abstract: Provided are a method of initiating communication in a computing device including a touch sensitive display, and the computing device. The method includes detecting a touch gesture that is performed on at least one tagged object included in an image displayed on the touch sensitive display and initiating, in response to the detection of the touch gesture, communication to at least one individual corresponding to the at least one tagged object selected or surrounded by the detected touch gesture, according to the detected touch gesture.
Type: Application
Filed: February 3, 2014
Publication date: August 7, 2014
Applicant: Samsung Electronics Co., Ltd.
Inventors: Ravitheja Tetali, Anand Prakash Yadav, Joy Bose, Samarth Vinod Deo, Tasleem Arif
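A rough sketch of the mapping described above, from a gesture over tagged image regions to initiating communication with the corresponding contacts; the region format, contact IDs, and the start_call callback are assumptions, not the patent's:

```python
# Illustrative sketch: a tap or encircling gesture over a tagged region of an image
# triggers communication with the corresponding contact. Data shapes are assumed.
def find_tagged_contacts(gesture_bounds: tuple, tagged_regions: list[dict]) -> list[str]:
    """Return contact IDs whose tagged region overlaps the gesture's bounding box."""
    gx, gy, gw, gh = gesture_bounds
    selected = []
    for region in tagged_regions:          # each: {"bounds": (x, y, w, h), "contact_id": ...}
        x, y, w, h = region["bounds"]
        overlaps = not (x + w < gx or gx + gw < x or y + h < gy or gy + gh < y)
        if overlaps:
            selected.append(region["contact_id"])
    return selected


def on_touch_gesture(gesture: dict, tagged_regions: list[dict], start_call):
    # Initiate communication with each individual whose tagged object the gesture covers.
    for contact_id in find_tagged_contacts(gesture["bounds"], tagged_regions):
        start_call(contact_id)   # or start a message, depending on the detected gesture type
```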
-
Publication number: 20110206533
Abstract: A method is provided for repairing an area of damaged coating on a component of a turbine module in a gas turbine engine, the component formed of a base material having a diffusion coating applied to the base material. The repair may be accomplished in place by directly heating the area to which a touch-up coating material has been applied with a hot gas plasma, without the need to place the component in an oven for curing and heat treatment of the touch-up coating applied to the damaged area.
Type: Application
Filed: July 14, 2010
Publication date: August 25, 2011
Applicant: United Technologies Corporation
Inventors: Glenn Lee, Om Prakash Yadav, Dylan Lim
-
Publication number: 20100026650
Abstract: A method and system for emphasizing objects are disclosed. The method includes receiving input characters from a user of an electronic device and predicting one or more characters based on the input characters. Moreover, the method includes calculating a distance of each predicted character from a last input character. The method also includes calculating an emphasizing priority of each predicted character based on the priority of each predicted character and the distance of each predicted character from the last input character. The method further includes emphasizing the predicted characters based on the emphasizing priority of the predicted characters.
Type: Application
Filed: July 29, 2009
Publication date: February 4, 2010
Applicant: Samsung Electronics Co., Ltd.
Inventors: Alok Srivastava, Amitabh Ranjan, Anand Prakash Yadav, Jayanth Kumar Jaya Kumar, Malur Nagendra Srivatsa Bharadwaj, Sushanth Bangalore Ramesh, Tarun Pangti, Vaibhav Negi
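A small sketch of the described scoring, assuming a simple keyboard-coordinate distance metric and a multiplicative combination of base priority and distance (both assumptions for illustration, not the patent's formula):

```python
# Sketch of the emphasizing-priority calculation described above. The keyboard layout,
# distance metric, and weighting are assumptions made for this example.
KEY_POSITIONS = {"q": (0, 0), "w": (1, 0), "e": (2, 0), "a": (0, 1), "s": (1, 1), "d": (2, 1)}


def key_distance(a: str, b: str) -> float:
    ax, ay = KEY_POSITIONS[a]
    bx, by = KEY_POSITIONS[b]
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5


def emphasizing_priority(predicted: dict[str, float], last_char: str) -> dict[str, float]:
    """predicted: mapping of candidate character -> base prediction priority."""
    scores = {}
    for char, priority in predicted.items():
        # Combine the prediction's base priority with its distance from the last input key,
        # so candidates farther from the last key can be emphasized more strongly.
        scores[char] = priority * (1.0 + key_distance(char, last_char))
    return scores


print(emphasizing_priority({"s": 0.6, "e": 0.3}, last_char="q"))
```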