Patents by Inventor Rajiv Maheswaran
Rajiv Maheswaran has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11023736
Abstract: Presenting event-specific video content that conforms to a user selection of an event type includes processing at least one video feed through at least one spatiotemporal pattern recognition algorithm that uses machine learning to develop an understanding of at least one event within the at least one video feed to determine at least one event type, wherein the at least one event type includes an entry in a relationship library at least detailing a relationship between two visible features of the at least one video feed, extracting the video content displaying the at least one event and associating the understanding with the video content in a video content data structure. A user interface is configured to permit a user to indicate a preference for at least one event type that is used to retrieve and provide corresponding extracted video content with the data structure in a new video feed.
Type: Grant
Filed: March 20, 2020
Date of Patent: June 1, 2021
Assignee: Second Spectrum, Inc.
Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Hollingsworth
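The abstract above describes three pieces that can be sketched concretely: a relationship-library entry linking two visible features, an event understanding carrying an event type, and a data structure that associates that understanding with an extracted clip so a user's event-type preference can drive retrieval. The names and fields below are hypothetical illustrations, not the patented implementation:

```python
from dataclasses import dataclass

@dataclass
class RelationshipEntry:
    """A relationship-library entry between two visible features of a feed."""
    feature_a: str          # e.g. "ball"
    feature_b: str          # e.g. "rim"
    relationship: str       # e.g. "passes_through"

@dataclass
class VideoContentRecord:
    """Associates the developed understanding with the extracted clip."""
    event_type: str                  # determined from the relationship entry
    relationship: RelationshipEntry
    clip_start: float                # seconds into the source feed
    clip_end: float

def retrieve_by_preference(records, preferred_type):
    """Return clips matching the user's selected event type, as the
    described user interface would, for assembly into a new feed."""
    return [r for r in records if r.event_type == preferred_type]

records = [
    VideoContentRecord("made_shot",
                       RelationshipEntry("ball", "rim", "passes_through"),
                       12.0, 18.5),
    VideoContentRecord("turnover",
                       RelationshipEntry("ball", "defender", "stolen_by"),
                       40.2, 44.0),
]
print([r.clip_start for r in retrieve_by_preference(records, "made_shot")])
```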
-
Patent number: 10762351
Abstract: Presenting event-specific video content that conforms to a user selection of an event type includes processing at least one video feed through at least one spatiotemporal pattern recognition algorithm that uses machine learning to develop an understanding of at least one event within the at least one video feed to determine at least one event type, wherein the at least one event type includes an entry in a relationship library at least detailing a relationship between two visible features of the at least one video feed, extracting the video content displaying the at least one event and associating the understanding with the video content in a video content data structure. A user interface is configured to permit a user to indicate a preference for at least one event type that is used to retrieve and provide corresponding extracted video content with the data structure in a new video feed.
Type: Grant
Filed: September 5, 2019
Date of Patent: September 1, 2020
Assignee: Second Spectrum, Inc.
Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Hollingsworth
-
Patent number: 10755103
Abstract: Interacting with a broadcast video content stream is performed with a machine learning facility that processes a video feed of a video broadcast through a spatiotemporal pattern recognition algorithm that applies machine learning on at least one event in the video feed in order to develop an understanding of the at least one event. Developing the understanding includes identifying context information relating to the at least one event and identifying an entry in a relationship library detailing a relationship between two visible features of the video feed. Interacting is further enabled with a touch screen user interface configured to permit at least one broadcaster to control a portion of the content of the video feed through interaction options that are based on the identified context information. Interacting is further enhanced through an interface configured to permit remote viewers to control the portion of the content.
Type: Grant
Filed: May 19, 2017
Date of Patent: August 25, 2020
Assignee: Second Spectrum, Inc.
Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Hollingsworth
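The key mechanism in this abstract is that the broadcaster's interaction options are derived from the context information identified for the current event. A minimal sketch of that mapping, using invented context tags and option names (the patent does not publish a schema):

```python
# Hypothetical mapping from identified context tags to the interaction
# options a touch-screen broadcaster interface might expose for that segment.
CONTEXT_OPTIONS = {
    "pick_and_roll": ["highlight_screener", "draw_roll_path"],
    "fast_break":    ["speed_overlay", "trail_markers"],
}

def options_for(context_tags):
    """Collect the interaction options for every context tag recognized
    in the current portion of the video feed; unknown tags yield none."""
    opts = []
    for tag in context_tags:
        opts.extend(CONTEXT_OPTIONS.get(tag, []))
    return opts

print(options_for(["fast_break", "unrecognized_tag"]))
```

A real system would update these options live as the recognition algorithm emits new context for each segment; the same lookup could serve the remote-viewer interface the abstract mentions.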
-
Patent number: 10755102
Abstract: Producing an event related video content data structure includes processing a video feed through a spatiotemporal pattern recognition algorithm that uses machine learning to develop an understanding of an event within the video feed. Developing the understanding includes identifying context information relating to the event and identifying an entry in a relationship library at least detailing a relationship between two visible features of the video feed. Content of the video feed that displays the event is automatically extracted by a computer and associated with the context information. A video content data structure that includes the context information is produced.
Type: Grant
Filed: May 19, 2017
Date of Patent: August 25, 2020
Assignee: Second Spectrum, Inc.
Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Hollingsworth
-
Publication number: 20200218902
Abstract: Presenting event-specific video content that conforms to a user selection of an event type includes processing at least one video feed through at least one spatiotemporal pattern recognition algorithm that uses machine learning to develop an understanding of at least one event within the at least one video feed to determine at least one event type, wherein the at least one event type includes an entry in a relationship library at least detailing a relationship between two visible features of the at least one video feed, extracting the video content displaying the at least one event and associating the understanding with the video content in a video content data structure. A user interface is configured to permit a user to indicate a preference for at least one event type that is used to retrieve and provide corresponding extracted video content with the data structure in a new video feed.
Type: Application
Filed: March 20, 2020
Publication date: July 9, 2020
Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Hollingsworth
-
Publication number: 20200074182
Abstract: Providing enhanced video content includes processing at least one video feed through at least one spatiotemporal pattern recognition algorithm that uses machine learning to develop an understanding of a plurality of events and to determine at least one event type for each of the plurality of events. The event type includes an entry in a relationship library detailing a relationship between two visible features. Extracting and indexing a plurality of video cuts from the video feed is performed based on the at least one event type determined by the understanding that corresponds to an event in the plurality of events detectable in the video cuts. Lastly, automatically and under computer control, an enhanced video content data structure is generated using the extracted plurality of video cuts based on the indexing of the extracted plurality of video cuts.
Type: Application
Filed: November 8, 2019
Publication date: March 5, 2020
Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Hollingsworth
-
Patent number: 10521671
Abstract: Providing enhanced video content includes processing at least one video feed through at least one spatiotemporal pattern recognition algorithm that uses machine learning to develop an understanding of a plurality of events and to determine at least one event type for each of the plurality of events. The event type includes an entry in a relationship library detailing a relationship between two visible features. Extracting and indexing a plurality of video cuts from the video feed is performed based on the at least one event type determined by the understanding that corresponds to an event in the plurality of events detectable in the video cuts. Lastly, automatically and under computer control, an enhanced video content data structure is generated using the extracted plurality of video cuts based on the indexing of the extracted plurality of video cuts.
Type: Grant
Filed: May 4, 2017
Date of Patent: December 31, 2019
Assignee: Second Spectrum, Inc.
Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Hollingsworth
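The extract-and-index step this abstract describes can be illustrated in miniature: given per-event detections (an event type plus a time span in the feed), build an index from event type to cuts, then assemble the enhanced content from the indexed cuts. The event names and tuple representation are invented stand-ins for actual clip extraction:

```python
from collections import defaultdict

def extract_and_index(events):
    """Index video cuts by event type; each cut is represented here by its
    (start, end) span in seconds rather than actual extracted frames."""
    index = defaultdict(list)
    for ev in events:
        index[ev["type"]].append((ev["start"], ev["end"]))
    return index

def enhanced_reel(index, wanted_types):
    """Assemble an enhanced sequence from the indexed cuts, in time order
    within each requested event type."""
    reel = []
    for t in wanted_types:
        reel.extend(sorted(index.get(t, [])))
    return reel

events = [
    {"type": "dunk",  "start": 10.0, "end": 14.0},
    {"type": "block", "start": 31.0, "end": 35.5},
    {"type": "dunk",  "start": 62.0, "end": 66.0},
]
idx = extract_and_index(events)
print(enhanced_reel(idx, ["dunk"]))   # the two dunk cuts, in order
```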
-
Publication number: 20190392219
Abstract: Presenting event-specific video content that conforms to a user selection of an event type includes processing at least one video feed through at least one spatiotemporal pattern recognition algorithm that uses machine learning to develop an understanding of at least one event within the at least one video feed to determine at least one event type, wherein the at least one event type includes an entry in a relationship library at least detailing a relationship between two visible features of the at least one video feed, extracting the video content displaying the at least one event and associating the understanding with the video content in a video content data structure. A user interface is configured to permit a user to indicate a preference for at least one event type that is used to retrieve and provide corresponding extracted video content with the data structure in a new video feed.
Type: Application
Filed: September 5, 2019
Publication date: December 26, 2019
Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Hollingsworth
-
Patent number: 10460176
Abstract: An enhanced video of an event in a first video feed, which is identified by a spatiotemporal pattern recognition algorithm that uses machine learning for understanding the event, is produced by including in the enhanced video an animation that characterizes a person's motions that are derived from a machine learning-based understanding of an event in a second video.
Type: Grant
Filed: May 19, 2017
Date of Patent: October 29, 2019
Assignee: Second Spectrum, Inc.
Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Hollingsworth
-
Publication number: 20170255829
Abstract: A system for enabling user interaction with video content includes an ingestion facility configured to access at least one video feed and a machine learning system configured to process the at least one video feed through a spatiotemporal pattern recognition algorithm that applies machine learning on an event in the at least one feed in order to develop an understanding of the event including identifying context information relating to the event and an entry in a relationship library at least detailing a relationship between two visible video features. The system further includes an extraction facility configured to automatically extract content displaying the event and associate the extracted content with the context information, and a video production facility configured to produce a video content data structure that includes the context information. The system further includes a user interface configured with video interaction options that are based on the context information.
Type: Application
Filed: May 19, 2017
Publication date: September 7, 2017
Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Grant Hollingsworth
-
Publication number: 20170255826
Abstract: Presenting event-specific video content that conforms to a user selection of an event type includes processing at least one video feed through at least one spatiotemporal pattern recognition algorithm that uses machine learning to develop an understanding of at least one event within the at least one video feed to determine at least one event type, wherein the at least one event type includes an entry in a relationship library at least detailing a relationship between two visible features of the at least one video feed, extracting the video content displaying the at least one event and associating the understanding with the video content in a video content data structure. A user interface is configured to permit a user to indicate a preference for at least one event type that is used to retrieve and provide corresponding extracted video content with the data structure in a new video feed.
Type: Application
Filed: May 19, 2017
Publication date: September 7, 2017
Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Grant Hollingsworth
-
Publication number: 20170255828
Abstract: Interacting with a broadcast video content stream is performed with a machine learning facility that processes a video feed of a video broadcast through a spatiotemporal pattern recognition algorithm that applies machine learning on at least one event in the video feed in order to develop an understanding of the at least one event. Developing the understanding includes identifying context information relating to the at least one event and identifying an entry in a relationship library detailing a relationship between two visible features of the video feed. Interacting is further enabled with a touch screen user interface configured to permit at least one broadcaster to control a portion of the content of the video feed through interaction options that are based on the identified context information. Interacting is further enhanced through an interface configured to permit remote viewers to control the portion of the content.
Type: Application
Filed: May 19, 2017
Publication date: September 7, 2017
Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Grant Hollingsworth
-
Publication number: 20170255827
Abstract: Producing an event related video content data structure includes processing a video feed through a spatiotemporal pattern recognition algorithm that uses machine learning to develop an understanding of an event within the video feed. Developing the understanding includes identifying context information relating to the event and identifying an entry in a relationship library at least detailing a relationship between two visible features of the video feed. Content of the video feed that displays the event is automatically extracted by a computer and associated with the context information. A video content data structure that includes the context information is produced.
Type: Application
Filed: May 19, 2017
Publication date: September 7, 2017
Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Grant Hollingsworth
-
Publication number: 20170238055
Abstract: Providing enhanced video content includes processing at least one video feed through at least one spatiotemporal pattern recognition algorithm that uses machine learning to develop an understanding of a plurality of events and to determine at least one event type for each of the plurality of events. The event type includes an entry in a relationship library detailing a relationship between two visible features. Extracting and indexing a plurality of video cuts from the video feed is performed based on the at least one event type determined by the understanding that corresponds to an event in the plurality of events detectable in the video cuts. Lastly, automatically and under computer control, an enhanced video content data structure is generated using the extracted plurality of video cuts based on the indexing of the extracted plurality of video cuts.
Type: Application
Filed: May 4, 2017
Publication date: August 17, 2017
Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Grant Hollingsworth
-
Publication number: 20150248917
Abstract: Methods and systems are provided to enable the exploration of event data captured from video feeds, such as from sporting event venues, the discovery of relevant events (such as within a video feed of a sporting event), and the presentation of novel insights, analytic results, and visual displays that enhance decision-making, provide improved entertainment and provide other benefits.
Type: Application
Filed: February 27, 2015
Publication date: September 3, 2015
Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeff Su