Patents by Inventor Noel Grant Hollingsworth

Noel Grant Hollingsworth has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20170255828
    Abstract: Interacting with a broadcast video content stream is performed with a machine learning facility that processes a video feed of a video broadcast through a spatiotemporal pattern recognition algorithm that applies machine learning on at least one event in the video feed in order to develop an understanding of the at least one event. Developing the understanding includes identifying context information relating to the at least one event and identifying an entry in a relationship library detailing a relationship between two visible features of the video feed. Interacting is further enabled with a touch screen user interface configured to permit at least one broadcaster to control a portion of the content of the video feed through interaction options that are based on the identified context information. Interacting is further enhanced through an interface configured to permit remote viewers to control the portion of the content.
    Type: Application
    Filed: May 19, 2017
    Publication date: September 7, 2017
    Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Grant Hollingsworth
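    A minimal sketch of how interaction options might be derived from the identified context described in the abstract above, assuming hypothetical EventContext and InteractionOption types (the field names and option labels are illustrative, not the claimed implementation):

      from dataclasses import dataclass
      from typing import List

      @dataclass
      class EventContext:
          """Context identified for one recognized broadcast event (hypothetical fields)."""
          event_type: str        # e.g. "three_point_shot"
          players: List[str]     # players the recognizer associated with the event
          relationship: str      # relationship-library entry, e.g. "assist_to"

      @dataclass
      class InteractionOption:
          """One control a broadcaster (or remote viewer) could trigger on the touch screen."""
          label: str
          action: str

      def options_for_context(ctx: EventContext) -> List[InteractionOption]:
          """Derive interaction options from the identified context information."""
          options = [InteractionOption(f"Replay {ctx.event_type}", "replay")]
          for player in ctx.players:
              options.append(InteractionOption(f"Show stats for {player}", "overlay_stats"))
          if ctx.relationship:
              options.append(InteractionOption(f"Highlight {ctx.relationship}", "highlight_relationship"))
          return options

      if __name__ == "__main__":
          ctx = EventContext("three_point_shot", ["Player A", "Player B"], "assist_to")
          for opt in options_for_context(ctx):
              print(opt.label, "->", opt.action)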
  • Publication number: 20170255829
    Abstract: A system for enabling user interaction with video content includes an ingestion facility configured to access at least one video feed and a machine learning system configured to process the at least one video feed through a spatiotemporal pattern recognition algorithm that applies machine learning on an event in the at least one feed in order to develop an understanding of the event including identifying context information relating to the event and an entry in a relationship library at least detailing a relationship between two visible video features. The system further includes an extraction facility configured to automatically extract content displaying the event and associate the extracted content with the context information, and a video production facility configured to produce a video content data structure that includes the context information. The system further includes a user interface configured with video interaction options that are based on the context information.
    Type: Application
    Filed: May 19, 2017
    Publication date: September 7, 2017
    Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Grant Hollingsworth
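    A minimal end-to-end sketch of the four facilities named in the abstract above (ingestion, machine learning, extraction, video production), with placeholder logic standing in for the real recognizer; all names and fields below are assumptions:

      from dataclasses import dataclass
      from typing import Dict, Iterable, List

      @dataclass
      class DetectedEvent:
          """Output assumed from the spatiotemporal recognizer for one event."""
          start_s: float
          end_s: float
          context: Dict[str, str]   # e.g. {"event_type": "dunk", "relationship": "screen_by"}

      @dataclass
      class VideoContentRecord:
          """Data structure pairing an extracted clip with its context information."""
          clip_uri: str
          context: Dict[str, str]

      def ingest(feed_uri: str) -> str:
          """Ingestion facility: stand-in that simply returns a handle to the accessed feed."""
          return feed_uri

      def recognize(feed_handle: str) -> Iterable[DetectedEvent]:
          """Machine learning system: hard-coded placeholder in lieu of a real recognizer."""
          yield DetectedEvent(12.0, 19.5, {"event_type": "dunk", "relationship": "screen_by"})

      def extract(feed_handle: str, event: DetectedEvent) -> str:
          """Extraction facility: reference the span [start_s, end_s] of the feed."""
          return f"{feed_handle}#t={event.start_s},{event.end_s}"

      def produce(feed_uri: str) -> List[VideoContentRecord]:
          """Video production facility: run the pipeline end to end."""
          handle = ingest(feed_uri)
          return [VideoContentRecord(extract(handle, ev), ev.context) for ev in recognize(handle)]

      if __name__ == "__main__":
          for record in produce("rtmp://example/feed1"):
              print(record)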
  • Publication number: 20170255826
    Abstract: Presenting event-specific video content that conforms to a user selection of an event type includes processing at least one video feed through at least one spatiotemporal pattern recognition algorithm that uses machine learning to develop an understanding of at least one event within the at least one video feed to determine at least one event type, wherein the at least one event type includes an entry in a relationship library at least detailing a relationship between two visible features of the at least one video feed, extracting the video content displaying the at least one event and associating the understanding with the video content in a video content data structure. A user interface is configured to permit a user to indicate a preference for at least one event type that is used to retrieve and provide corresponding extracted video content with the data structure in a new video feed.
    Type: Application
    Filed: May 19, 2017
    Publication date: September 7, 2017
    Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Grant Hollingsworth
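    A minimal sketch of retrieving extracted content by a user's preferred event type, as the abstract above describes, assuming a hypothetical ExtractedClip record; the preference would come from the user interface but is hard-coded here:

      from dataclasses import dataclass
      from typing import Dict, List, Set

      @dataclass
      class ExtractedClip:
          """An extracted clip plus the understanding associated with it (hypothetical fields)."""
          clip_uri: str
          event_type: str
          context: Dict[str, str]

      def build_feed(clips: List[ExtractedClip], preferred_types: Set[str]) -> List[ExtractedClip]:
          """Retrieve clips whose determined event type matches the stated preference."""
          return [clip for clip in clips if clip.event_type in preferred_types]

      if __name__ == "__main__":
          library = [
              ExtractedClip("clip-001.mp4", "three_point_shot", {"player": "Player A"}),
              ExtractedClip("clip-002.mp4", "turnover", {"player": "Player B"}),
              ExtractedClip("clip-003.mp4", "three_point_shot", {"player": "Player C"}),
          ]
          for clip in build_feed(library, {"three_point_shot"}):
              print(clip.clip_uri, clip.context)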
  • Publication number: 20170255827
    Abstract: Producing an event related video content data structure includes processing a video feed through a spatiotemporal pattern recognition algorithm that uses machine learning to develop an understanding of an event within the video feed. Developing the understanding includes identifying context information relating to the event and identifying an entry in a relationship library at least detailing a relationship between two visible features of the video feed. Content of the video feed that displays the event is automatically extracted by a computer and associated with the context information. A video content data structure that includes the context information is produced.
    Type: Application
    Filed: May 19, 2017
    Publication date: September 7, 2017
    Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Grant Hollingsworth
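    One way the produced video content data structure described above could be laid out, as a minimal JSON-serializable sketch; the field names (clip_uri, context, relationships) are assumptions, not the claimed format:

      import json
      from dataclasses import asdict, dataclass, field
      from typing import Dict, List

      @dataclass
      class VideoContentDataStructure:
          """Extracted content for one event plus the context identified for it."""
          clip_uri: str                                            # where the extracted content lives
          start_s: float                                           # event boundaries in the source feed
          end_s: float
          context: Dict[str, str] = field(default_factory=dict)   # identified context information
          relationships: List[str] = field(default_factory=list)  # relationship-library entries

      if __name__ == "__main__":
          record = VideoContentDataStructure(
              clip_uri="clips/event-0042.mp4",
              start_s=731.2,
              end_s=744.8,
              context={"event_type": "fast_break", "team": "Home"},
              relationships=["pass_from: Player A -> Player B"],
          )
          print(json.dumps(asdict(record), indent=2))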
  • Publication number: 20170238055
    Abstract: Providing enhanced video content includes processing at least one video feed through at least one spatiotemporal pattern recognition algorithm that uses machine learning to develop an understanding of a plurality of events and to determine at least one event type for each of the plurality of events. The event type includes an entry in a relationship library detailing a relationship between two visible features. Extracting and indexing a plurality of video cuts from the video feed is performed based on the at least one event type determined by the understanding that corresponds to an event in the plurality of events detectable in the video cuts. Lastly, automatically and under computer control, an enhanced video content data structure is generated using the extracted plurality of video cuts based on the indexing of the extracted plurality of video cuts.
    Type: Application
    Filed: May 4, 2017
    Publication date: August 17, 2017
    Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Grant Hollingsworth
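    A minimal sketch of indexing extracted cuts by event type and assembling one compilation from that index, loosely following the abstract above; the VideoCut record and the playlist output are illustrative assumptions:

      from collections import defaultdict
      from dataclasses import dataclass
      from typing import Dict, List

      @dataclass
      class VideoCut:
          """A cut extracted from the feed, tagged with the event type determined for it."""
          clip_uri: str
          event_type: str
          start_s: float
          end_s: float

      def index_cuts(cuts: List[VideoCut]) -> Dict[str, List[VideoCut]]:
          """Index the extracted cuts by their determined event type."""
          index: Dict[str, List[VideoCut]] = defaultdict(list)
          for cut in cuts:
              index[cut.event_type].append(cut)
          return index

      def enhanced_content(index: Dict[str, List[VideoCut]], event_type: str) -> List[str]:
          """Assemble an ordered playlist of cuts for one event type, standing in for the
          generated enhanced video content data structure."""
          return [cut.clip_uri for cut in sorted(index.get(event_type, []), key=lambda c: c.start_s)]

      if __name__ == "__main__":
          cuts = [
              VideoCut("cut-03.mp4", "dunk", 905.0, 912.0),
              VideoCut("cut-01.mp4", "dunk", 121.0, 128.5),
              VideoCut("cut-02.mp4", "steal", 430.0, 436.0),
          ]
          print(enhanced_content(index_cuts(cuts), "dunk"))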