Patents by Inventor Michael W. Homyack

Michael W. Homyack has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10225625
    Abstract: Methods and systems are disclosed for real-time metatagging and captioning of an event, together with caption extraction and analysis for that event. The method may include embedding metatag information in a caption file provided by a captioner. The embedded and/or extracted metatag information may allow a user to access additional information via the text of the captioned event. Data, words, or phrases can be provided by the captioner during captioning or extracted, post-captioning, from one or more segments of the caption transcript. Metadata derived from the provided or extracted content is generated and stored in a metadata archive, where it is associated with the caption transcript.
    Type: Grant
    Filed: April 1, 2015
    Date of Patent: March 5, 2019
    Assignee: VITAC Corporation
    Inventors: Michael W. Homyack, Richard T. Polumbus, Troy A. Greenwood
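The embedding-and-archiving flow the abstract above describes can be sketched roughly as follows. This is an illustrative sketch only, not the patented implementation: the class names, the URL-valued metatags, and the dictionary archive are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class CaptionSegment:
    start: float                              # seconds into the event
    text: str
    metatags: dict = field(default_factory=dict)

def embed_metatag(segment: CaptionSegment, phrase: str, info_url: str) -> None:
    """Associate a phrase in the caption text with additional information,
    so a viewer could follow the tagged text to related content."""
    if phrase in segment.text:
        segment.metatags[phrase] = info_url

def archive_metadata(transcript_id: str, segments: list[CaptionSegment],
                     archive: dict) -> None:
    """Collect every metatag and store it in an archive keyed by the
    caption transcript it came from."""
    archive[transcript_id] = {phrase: url
                              for seg in segments
                              for phrase, url in seg.metatags.items()}

# Example: tag a phrase during captioning, then archive the result.
seg = CaptionSegment(start=12.5, text="The senator discussed the new bill")
embed_metatag(seg, "senator", "https://example.com/bio")  # placeholder URL
archive: dict = {}
archive_metadata("event-001", [seg], archive)
```

The same archive lookup would serve both real-time use (tags supplied by the captioner) and post-captioning extraction, since both paths end with metadata associated to the transcript.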
  • Patent number: 10034028
    Abstract: A synchronization process between captioning data and/or corresponding metatags and the associated media file parses the media file, correlates the caption information and/or metatags with segments of the media file, and provides a capability for textual search and selection of particular segments. A time-synchronized version of the captions is created that is synchronized to the moment that the speech is uttered in the recorded media. The caption data is leveraged to enable search engines to index not merely the title of a video, but the entirety of what was said during the video as well as any associated metatags relating to contents of the video. Further, because the entire media file is indexed, a search can request a particular scene or occurrence within the event recorded by the media file, and the exact moment within the media relevant to the search can be accessed and played for the requester.
    Type: Grant
    Filed: September 17, 2015
    Date of Patent: July 24, 2018
    Assignee: VITAC Corporation
    Inventors: Richard T. Polumbus, Michael W. Homyack
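The search capability this abstract describes (indexing the entirety of what was said, then jumping to the exact moment in the media) can be sketched with a simple inverted index from spoken words to caption timestamps. All names here are assumptions for illustration; the patent does not specify this data structure.

```python
import re
from collections import defaultdict

def build_index(timed_captions):
    """timed_captions: list of (timestamp_seconds, caption_text) pairs,
    where each timestamp marks the moment the speech was uttered.
    Returns a word -> [timestamps] inverted index."""
    index = defaultdict(list)
    for ts, text in timed_captions:
        for word in re.findall(r"[a-z']+", text.lower()):
            index[word].append(ts)
    return index

def find_moments(index, word):
    """Return every timestamp at which the word was spoken, so a player
    can seek directly to the relevant moment in the media file."""
    return index.get(word.lower(), [])

# Example: search the full caption text, not merely the video's title.
captions = [(3.2, "Welcome to the broadcast"),
            (41.7, "Here is tonight's weather forecast")]
idx = build_index(captions)
moments = find_moments(idx, "weather")  # → [41.7]
```

A player given `41.7` could then seek to that offset and play the scene the search requested.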
  • Publication number: 20160007054
    Abstract: A synchronization process between captioning data and/or corresponding metatags and the associated media file parses the media file, correlates the caption information and/or metatags with segments of the media file, and provides a capability for textual search and selection of particular segments. A time-synchronized version of the captions is created that is synchronized to the moment that the speech is uttered in the recorded media. The caption data is leveraged to enable search engines to index not merely the title of a video, but the entirety of what was said during the video as well as any associated metatags relating to contents of the video. Further, because the entire media file is indexed, a search can request a particular scene or occurrence within the event recorded by the media file, and the exact moment within the media relevant to the search can be accessed and played for the requester.
    Type: Application
    Filed: September 17, 2015
    Publication date: January 7, 2016
    Inventors: Richard T. Polumbus, Michael W. Homyack
  • Publication number: 20150208139
    Abstract: Methods and systems are disclosed for real-time metatagging and captioning of an event, together with caption extraction and analysis for that event. The method may include embedding metatag information in a caption file provided by a captioner. The embedded and/or extracted metatag information may allow a user to access additional information via the text of the captioned event. Data, words, or phrases can be provided by the captioner during captioning or extracted, post-captioning, from one or more segments of the caption transcript. Metadata derived from the provided or extracted content is generated and stored in a metadata archive, where it is associated with the caption transcript.
    Type: Application
    Filed: April 1, 2015
    Publication date: July 23, 2015
    Inventors: Michael W. Homyack, Richard T. Polumbus, Troy A. Greenwood
  • Publication number: 20140259084
    Abstract: A synchronization process between captioning data and/or corresponding metatags and the associated media file parses the media file, correlates the caption information and/or metatags with segments of the media file, and provides a capability for textual search and selection of particular segments. A time-synchronized version of the captions is created that is synchronized to the moment that the speech is uttered in the recorded media. The caption data is leveraged to enable search engines to index not merely the title of a video, but the entirety of what was said during the video as well as any associated metatags relating to contents of the video. Further, because the entire media file is indexed, a search can request a particular scene or occurrence within the event recorded by the media file, and the exact moment within the media relevant to the search can be accessed and played for the requester.
    Type: Application
    Filed: February 27, 2014
    Publication date: September 11, 2014
    Applicant: Caption Colorado LLC
    Inventors: Richard T. Polumbus, Michael W. Homyack
  • Patent number: 8707381
    Abstract: A synchronization process between captioning data and/or corresponding metatags and the associated media file parses the media file, correlates the caption information and/or metatags with segments of the media file, and provides a capability for textual search and selection of particular segments. A time-synchronized version of the captions is created that is synchronized to the moment that the speech is uttered in the recorded media. The caption data is leveraged to enable search engines to index not merely the title of a video, but the entirety of what was said during the video as well as any associated metatags relating to contents of the video. Further, because the entire media file is indexed, a search can request a particular scene or occurrence within the event recorded by the media file, and the exact moment within the media relevant to the search can be accessed and played for the requester.
    Type: Grant
    Filed: September 21, 2010
    Date of Patent: April 22, 2014
    Assignee: Caption Colorado L.L.C.
    Inventors: Richard T. Polumbus, Michael W. Homyack
  • Publication number: 20140009677
    Abstract: Methods and systems are disclosed for caption extraction and analysis. In one such example method, a caption transcript corresponding to a media program is received at an extraction and analysis module, and the caption transcript is divided into one or more segments. Data, words, or phrases are extracted from the one or more segments of the caption transcript, and metadata based on said extracting is provided. The metadata is stored in a metadata archive, where the metadata is associated with the caption transcript.
    Type: Application
    Filed: July 9, 2013
    Publication date: January 9, 2014
    Inventors: Michael W. Homyack, Richard T. Polumbus
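The extract-and-analyze flow in the abstract above (divide the transcript into segments, extract words or phrases, archive the metadata) can be sketched as below. The segment size, stopword list, and frequency-based term extraction are illustrative choices for this example, not details taken from the patent.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "was"}

def divide(transcript: str, words_per_segment: int = 50) -> list[str]:
    """Divide a caption transcript into fixed-size word segments."""
    words = transcript.split()
    return [" ".join(words[i:i + words_per_segment])
            for i in range(0, len(words), words_per_segment)]

def extract_terms(segment: str, top_n: int = 5) -> list[str]:
    """Extract the most frequent non-stopword terms from one segment."""
    tokens = [w for w in re.findall(r"[a-z]+", segment.lower())
              if w not in STOPWORDS]
    return [w for w, _ in Counter(tokens).most_common(top_n)]

def analyze(transcript_id: str, transcript: str, archive: dict) -> None:
    """Store per-segment extracted terms as metadata associated with
    the caption transcript."""
    archive[transcript_id] = [extract_terms(s) for s in divide(transcript)]

# Example: metadata for a short transcript lands in the archive.
archive: dict = {}
analyze("news-042", "storm warning issued for the coast storm", archive)
```

Because the metadata stays keyed to the transcript, later consumers can map each extracted term back to the segment, and hence the portion of the media, it came from.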
  • Publication number: 20110069230
    Abstract: A synchronization process between captioning data and/or corresponding metatags and the associated media file parses the media file, correlates the caption information and/or metatags with segments of the media file, and provides a capability for textual search and selection of particular segments. A time-synchronized version of the captions is created that is synchronized to the moment that the speech is uttered in the recorded media. The caption data is leveraged to enable search engines to index not merely the title of a video, but the entirety of what was said during the video as well as any associated metatags relating to contents of the video. Further, because the entire media file is indexed, a search can request a particular scene or occurrence within the event recorded by the media file, and the exact moment within the media relevant to the search can be accessed and played for the requester.
    Type: Application
    Filed: September 21, 2010
    Publication date: March 24, 2011
    Applicant: Caption Colorado L.L.C.
    Inventors: Richard T. Polumbus, Michael W. Homyack