Patents by Inventor Matthew G. Berry

Matthew G. Berry has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240007696
    Abstract: A system for using metadata from a video signal to associate advertisements therewith, comprising (i) a segmentation system to divide the video signal into video clips, (ii) a digitizing system for digitizing the video clips, (iii) a feature extraction system for extracting audio and video features from each video clip, associating each audio feature with respective video clips, associating each video feature with respective video clips, and saving the audio and video features into an associated metadata file, (iv) a web interface to the feature extraction system for receiving the video clips, and (v) a database, wherein video signals and associated metadata files are stored and indexed, wherein the associated metadata file is provided when a video player requests the corresponding video signal, enabling selection of a relevant advertisement for presentment in conjunction with respective video clips based on the associated audio and video features of the respective video clip. (See the sketch following this entry.)
    Type: Application
    Filed: September 13, 2023
    Publication date: January 4, 2024
    Inventors: Matthew G. Berry, Benjamin J. Weinberger, Schuyler E. Eckstrom, Albert L. Segars
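The abstract above describes a pipeline that extracts audio and video features per clip, stores them in a metadata file, and uses that metadata to pick a relevant advertisement. The following is a minimal Python sketch of that idea; the JSON layout, feature keywords, and overlap-count scoring rule are editorial assumptions, not the patented design.

```python
# Hypothetical sketch of metadata-driven ad selection; all names and the
# scoring rule are illustrative assumptions, not the patented implementation.
import json

def build_metadata_file(clips):
    """Save extracted audio/video features for each clip into one metadata file."""
    return json.dumps({"clips": clips})

def select_ad(clip_features, ad_inventory):
    """Pick the ad whose declared keywords overlap most with the clip's features."""
    def score(ad):
        return len(set(ad["keywords"]) & set(clip_features))
    return max(ad_inventory, key=score)

clips = [
    {"id": 1, "audio_features": ["crowd", "whistle"], "video_features": ["grass", "ball"]},
]
ads = [
    {"name": "sports-drink", "keywords": ["ball", "grass", "sweat"]},
    {"name": "cookware", "keywords": ["kitchen", "food"]},
]
metadata = json.loads(build_metadata_file(clips))
clip = metadata["clips"][0]
print(select_ad(clip["audio_features"] + clip["video_features"], ads)["name"])  # sports-drink
```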
  • Patent number: 11800169
    Abstract: A system for using metadata from a video signal to associate advertisements therewith, comprising (i) a segmentation system to divide the video signal into video clips, (ii) a digitizing system for digitizing the video clips, (iii) a feature extraction system for extracting audio and video features from each video clip, associating each audio feature with respective video clips, associating each video feature with respective video clips, and saving the audio and video features into an associated metadata file, (iv) a web interface to the feature extraction system for receiving the video clips, and (v) a database, wherein video signals and associated metadata files are stored and indexed, wherein the associated metadata file is provided when a video player requests the corresponding video signal, enabling selection of a relevant advertisement for presentment in conjunction with respective video clips based on the associated audio and video features of the respective video clip.
    Type: Grant
    Filed: February 16, 2016
    Date of Patent: October 24, 2023
    Assignee: TiVo Solutions Inc.
    Inventors: Matthew G. Berry, Benjamin J. Weinberger, Schuyler E. Eckstrom, Albert L. Segars
  • Patent number: 11281743
    Abstract: The present disclosure relates to systems and methods for dynamically creating hyperlinks associated with relevant multimedia content in a computer network. A hyperlink generation module receives an electronic text file from a server. The module searches the text file to identify keywords present in the file. Once the keywords have been identified, a database is queried to identify multimedia content that is related to the keywords. Generally, multimedia content is associated with metadata to enable efficient searching of the multimedia content. Typically, the multimedia content is contextually relevant to both the identified keywords and text file. One or more hyperlinks corresponding to the keywords are then generated and inserted into the text file. The hyperlinks provide pointers to the identified multimedia content. After insertion into the text file, the hyperlinks may be clicked by a user or viewer of the file to retrieve and display the identified multimedia content. (See the sketch following this entry.)
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: March 22, 2022
    Assignee: TiVo Solutions Inc.
    Inventor: Matthew G. Berry
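As a rough illustration of the hyperlink-generation flow above, here is a minimal sketch that scans text for known keywords and wraps each in a link to related media. The in-memory keyword table stands in for the database query, and all names and URLs are hypothetical.

```python
# Hypothetical sketch of keyword-driven hyperlink insertion; the keyword
# table and regex approach are illustrative assumptions.
import re

# Stand-in for the database query mapping keywords to related multimedia content.
KEYWORD_TO_MEDIA = {
    "volcano": "https://example.com/video/volcano-eruption",
    "lava": "https://example.com/video/lava-flow",
}

def insert_hyperlinks(text):
    """Wrap each known keyword in an anchor tag pointing at related media."""
    def link(match):
        word = match.group(0)
        return f'<a href="{KEYWORD_TO_MEDIA[word.lower()]}">{word}</a>'
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, KEYWORD_TO_MEDIA)) + r")\b",
                         re.IGNORECASE)
    return pattern.sub(link, text)

print(insert_hyperlinks("The volcano released lava overnight."))
```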
  • Publication number: 20190107906
    Abstract: Managing metadata associated with a digital media asset includes selecting the digital media asset, displaying the digital media asset in a filmstrip format that presents one or more scenes from the digital media asset along a timeline, wherein each scene corresponds with an underlying point in time along the timeline, and wherein the digital media asset has a start time and an end time that define the timeline, displaying at least one track in timeline alignment with the filmstrip format wherein the at least one track corresponds with a type of metadata associated with the digital media asset, and displaying on the at least one track, one or more segments, wherein each segment has a start point and an end point along the timeline and wherein each respective segment represents a span of time in which the type of metadata occurs within the digital media asset. (See the sketch following this entry.)
    Type: Application
    Filed: October 9, 2018
    Publication date: April 11, 2019
    Inventors: Matthew G. Berry, Tim Jones, Isaac Kunkel
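The timeline/track/segment model in this abstract maps naturally onto a small data structure. The sketch below is an editorial assumption of how such a model might look; the type and field names are invented, not taken from the patent.

```python
# Hypothetical data model for the filmstrip/track/segment UI; names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Segment:
    start: float  # seconds from asset start
    end: float    # seconds from asset start

@dataclass
class Track:
    metadata_type: str          # e.g. "faces", "dialogue", "ad-break"
    segments: list[Segment] = field(default_factory=list)

@dataclass
class MediaAsset:
    title: str
    duration: float             # defines the timeline end point
    tracks: list[Track] = field(default_factory=list)

asset = MediaAsset("episode-01", duration=1320.0)
faces = Track("faces", [Segment(12.0, 47.5), Segment(301.2, 455.0)])
asset.tracks.append(faces)
# Each segment spans the time in which that metadata type occurs in the asset.
print([(s.start, s.end) for s in asset.tracks[0].segments])
```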
  • Patent number: 10095367
    Abstract: Managing metadata associated with a digital media asset includes selecting the digital media asset, displaying the digital media asset in a filmstrip format that presents one or more scenes from the digital media asset along a timeline, wherein each scene corresponds with an underlying point in time along the timeline, and wherein the digital media asset has a start time and an end time that define the timeline, displaying at least one track in timeline alignment with the filmstrip format wherein the at least one track corresponds with a type of metadata associated with the digital media asset, and displaying on the at least one track, one or more segments, wherein each segment has a start point and an end point along the timeline and wherein each respective segment represents a span of time in which the type of metadata occurs within the digital media asset.
    Type: Grant
    Filed: October 15, 2010
    Date of Patent: October 9, 2018
    Assignee: TiVo Solutions Inc.
    Inventors: Matthew G. Berry, Tim Jones, Isaac Kunkel
  • Publication number: 20170255626
    Abstract: The present disclosure relates to systems and methods for dynamically creating hyperlinks associated with relevant multimedia content in a computer network. A hyperlink generation module receives an electronic text file from a server. The module searches the text file to identify keywords present in the file. Once the keywords have been identified, a database is queried to identify multimedia content that is related to the keywords. Generally, multimedia content is associated with metadata to enable efficient searching of the multimedia content. Typically, the multimedia content is contextually relevant to both the identified keywords and text file. One or more hyperlinks corresponding to the keywords are then generated and inserted into the text file. The hyperlinks provide pointers to the identified multimedia content. After insertion into the text file, the hyperlinks may be clicked by a user or viewer of the file to retrieve and display the identified multimedia content.
    Type: Application
    Filed: May 19, 2017
    Publication date: September 7, 2017
    Inventor: Matthew G. Berry
  • Patent number: 9690786
    Abstract: The present disclosure relates to systems and methods for dynamically creating hyperlinks associated with relevant multimedia content in a computer network. A hyperlink generation module receives an electronic text file from a server. The module searches the text file to identify keywords present in the file. Once the keywords have been identified, a database is queried to identify multimedia content that is related to the keywords. Generally, multimedia content is associated with metadata to enable efficient searching of the multimedia content. Typically, the multimedia content is contextually relevant to both the identified keywords and text file. One or more hyperlinks corresponding to the keywords are then generated and inserted into the text file. The hyperlinks provide pointers to the identified multimedia content. After insertion into the text file, the hyperlinks may be clicked by a user or viewer of the file to retrieve and display the identified multimedia content.
    Type: Grant
    Filed: March 17, 2009
    Date of Patent: June 27, 2017
    Assignee: TiVo Solutions Inc.
    Inventor: Matthew G. Berry
  • Publication number: 20160165288
    Abstract: A system for using metadata from a video signal to associate advertisements therewith, comprising (i) a segmentation system to divide the video signal into video clips, (ii) a digitizing system for digitizing the video clips, (iii) a feature extraction system for extracting audio and video features from each video clip, associating each audio feature with respective video clips, associating each video feature with respective video clips, and saving the audio and video features into an associated metadata file, (iv) a web interface to the feature extraction system for receiving the video clips, and (v) a database, wherein video signals and associated metadata files are stored and indexed, wherein the associated metadata file is provided when a video player requests the corresponding video signal, enabling selection of a relevant advertisement for presentment in conjunction with respective video clips based on the associated audio and video features of the respective video clip.
    Type: Application
    Filed: February 16, 2016
    Publication date: June 9, 2016
    Inventors: Matthew G. Berry, Benjamin J. Weinberger, Schuyler E. Eckstrom, Albert L. Segars
  • Publication number: 20150245111
    Abstract: A system for using metadata from a video signal to associate advertisements therewith, comprising (i) a segmentation system to divide the video signal into video clips, (ii) a digitizing system for digitizing the video clips, (iii) a feature extraction system for extracting audio and video features from each video clip, associating each audio feature with respective video clips, associating each video feature with respective video clips, and saving the audio and video features into an associated metadata file, (iv) a web interface to the feature extraction system for receiving the video clips, and (v) a database, wherein video signals and associated metadata files are stored and indexed, wherein the associated metadata file is provided when a video player requests the corresponding video signal, enabling selection of a relevant advertisement for presentment in conjunction with respective video clips based on the associated audio and video features of the respective video clip.
    Type: Application
    Filed: May 8, 2015
    Publication date: August 27, 2015
    Inventors: Matthew G. Berry, Benjamin J. Weinberger, Schuyler E. Eckstrom, Albert L. Segars
  • Patent number: 8380045
    Abstract: Systems and methods are provided for generating unique signatures for digital video files to locate video sequences within a video file comprising calculating a frame signature for each frame of a first video; and for a second video: calculating a frame signature for each frame of the second video for corresponding first video frame signatures, calculating a frame distance between each of the corresponding video frame signatures, determining video signature similarity between the videos, and searching within a video signature similarity curve to determine a maximum corresponding to the first video within the second video. The method further applies area augmentation to the video signature similarity curve to determine a maximum from among a plurality of maxima corresponding to the first video file within the second video file. (See the sketch following this entry.)
    Type: Grant
    Filed: October 9, 2008
    Date of Patent: February 19, 2013
    Inventors: Matthew G. Berry, Schuyler E. Eckstrom
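A toy version of the frame-signature search above: each frame is reduced to a signature, pairwise distances give a similarity curve over offsets, and the curve's maximum locates the first video inside the second. The 16-bin color histogram and L1 distance are assumptions, and the patent's area-augmentation step for disambiguating multiple maxima is omitted.

```python
# Hypothetical sketch of locating one video inside another by frame signatures.
# The histogram signature and L1 distance are illustrative assumptions.
import numpy as np

def frame_signature(frame):
    """Reduce a frame (H x W x 3 array) to a small color-histogram signature."""
    hist, _ = np.histogram(frame, bins=16, range=(0, 256))
    return hist / hist.sum()

def similarity_curve(short_sigs, long_sigs):
    """Slide the short video over the long one; higher = more similar."""
    n, m = len(short_sigs), len(long_sigs)
    curve = []
    for offset in range(m - n + 1):
        dist = sum(np.abs(a - b).sum()
                   for a, b in zip(short_sigs, long_sigs[offset:offset + n]))
        curve.append(-dist)   # negate distance so a peak marks the best match
    return curve

rng = np.random.default_rng(0)
long_video = [rng.integers(0, 256, (4, 4, 3)) for _ in range(30)]
short_video = long_video[10:15]            # the clip we want to find
curve = similarity_curve([frame_signature(f) for f in short_video],
                         [frame_signature(f) for f in long_video])
print(int(np.argmax(curve)))               # 10: the clip's true offset
```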
  • Patent number: 8311390
    Abstract: The present disclosure relates to systems and methods for identifying advertisement breaks in digital video files. Generally, an advertisement break identification module receives a digital video file and generates an edge response for each of one or more frames extracted from the video file. If one of the generated edge responses for a particular frame is less than a predefined threshold, then the module identifies the particular frame as the start of an advertisement break. The module then generates further edge responses for frames subsequent to the identified particular frame. Once an edge response is generated for a particular subsequent frame that is greater than the threshold, it is identified as the end of the advertisement break. The video file may then be manipulated or transformed, such as by associating metadata with the advertisement break for a variety of uses, removing the advertisement break from the video file, etc. (See the sketch following this entry.)
    Type: Grant
    Filed: May 14, 2009
    Date of Patent: November 13, 2012
    Assignee: Digitalsmiths, Inc.
    Inventor: Matthew G. Berry
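The edge-response test above can be approximated in a few lines: compute a per-frame edge measure and treat a run of frames below a threshold as an advertisement break. The mean-gradient-magnitude measure and the threshold value are editorial assumptions.

```python
# Hypothetical sketch of ad-break detection by per-frame edge response.
# The gradient-energy measure and threshold are illustrative assumptions.
import numpy as np

def edge_response(frame):
    """Mean gradient magnitude of a grayscale frame; near zero for blank frames."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.hypot(gx, gy).mean())

def find_ad_break(frames, threshold=1.0):
    """Return (start, end) frame indices of the first low-edge (blank) run."""
    start = None
    for i, frame in enumerate(frames):
        low = edge_response(frame) < threshold
        if low and start is None:
            start = i                       # blank frame: break begins
        elif not low and start is not None:
            return start, i                 # edges return: break ends
    return (start, len(frames)) if start is not None else None

rng = np.random.default_rng(1)
content = [rng.integers(0, 256, (8, 8)) for _ in range(5)]
blanks = [np.zeros((8, 8)) for _ in range(3)]
print(find_ad_break(content + blanks + content))   # (5, 8)
```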
  • Patent number: 8311344
    Abstract: The present disclosure relates to systems and methods for classifying videos based on video content. For a given video file including a plurality of frames, a subset of frames is extracted for processing. Frames that are too dark, blurry, or otherwise poor classification candidates are discarded from the subset. Generally, material classification scores that describe the type of material content likely included in each frame are calculated for the remaining frames in the subset. The material classification scores are used to generate material arrangement vectors that represent the spatial arrangement of material content in each frame. The material arrangement vectors are subsequently classified to generate a scene classification score vector for each frame. The scene classification results are averaged (or otherwise processed) across all frames in the subset to associate the video file with one or more predefined scene categories related to overall types of scene content of the video file. (See the sketch following this entry.)
    Type: Grant
    Filed: February 17, 2009
    Date of Patent: November 13, 2012
    Assignee: Digitalsmiths, Inc.
    Inventors: Heather Dunlop, Matthew G. Berry
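A compact sketch of the classification chain above: per-frame material scores are flattened into a spatial arrangement vector, each vector is scored against scene categories, and the scores are averaged across the sampled frames. The two-cell grid, material list, and nearest-centroid classifier are stand-ins for whatever models the patent actually uses.

```python
# Hypothetical sketch of scene classification from per-frame material scores.
# The grid layout, materials, and centroids are illustrative assumptions.
import numpy as np

MATERIALS = ["grass", "sky", "water"]

def material_arrangement_vector(frame_scores):
    """Concatenate per-cell material scores into one spatial-arrangement vector.

    frame_scores: (cells, len(MATERIALS)) array of material classification scores.
    """
    return frame_scores.ravel()

# Toy nearest-centroid scene classifier over arrangement vectors.
SCENE_CENTROIDS = {
    "field": np.array([0.8, 0.2, 0.0, 0.1, 0.9, 0.0]),  # grass below, sky above
    "beach": np.array([0.0, 0.1, 0.9, 0.1, 0.9, 0.0]),  # water below, sky above
}

def scene_scores(vec):
    """Score each scene category by (negated) distance to its centroid."""
    return {name: -float(np.linalg.norm(vec - c)) for name, c in SCENE_CENTROIDS.items()}

def classify_video(frames):
    """Average per-frame scene scores across the sampled subset of frames."""
    totals = {name: 0.0 for name in SCENE_CENTROIDS}
    for f in frames:
        for name, s in scene_scores(material_arrangement_vector(f)).items():
            totals[name] += s / len(frames)
    return max(totals, key=totals.get)

# Two sampled frames: bottom cell mostly grass, top cell mostly sky.
frames = [np.array([[0.7, 0.3, 0.0], [0.2, 0.8, 0.0]]),
          np.array([[0.9, 0.1, 0.0], [0.0, 1.0, 0.0]])]
print(classify_video(frames))   # field
```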
  • Patent number: 8281231
    Abstract: Methods, systems, and computer program products for synchronizing text with audio in a multimedia file, wherein the multimedia file is defined by a timeline having a start point and end point and respective points in time therebetween, wherein an N-gram analysis is used to compare each word of a closed-captioned text associated with the multimedia file with words generated by an automated speech recognition (ASR) analysis of the audio of the multimedia file to create an accurate, time-based metadata file in which each closed-captioned word is associated with a respective point on the timeline corresponding to the same point in time on the timeline in which the word is actually spoken in the audio and occurs within the video. (See the sketch following this entry.)
    Type: Grant
    Filed: September 13, 2010
    Date of Patent: October 2, 2012
    Assignee: Digitalsmiths, Inc.
    Inventors: Matthew G. Berry, Changwen Yang
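A minimal illustration of the N-gram alignment above: ASR output provides timed words, and each caption word inherits the timestamp of a matching word N-gram. Trigram anchoring and the exact-match rule are simplifying assumptions; real caption/ASR alignment must tolerate recognition errors.

```python
# Hypothetical sketch of aligning closed captions to ASR timestamps with N-grams.
# Trigram anchoring is an illustrative assumption.
def align_captions(caption_words, asr_words, n=3):
    """Assign each caption word the timestamp of a matching ASR N-gram.

    asr_words: list of (word, seconds) pairs from speech recognition.
    Returns a list of (caption_word, seconds-or-None).
    """
    asr_ngrams = {}
    for i in range(len(asr_words) - n + 1):
        gram = tuple(w for w, _ in asr_words[i:i + n])
        asr_ngrams.setdefault(gram, asr_words[i][1])   # time of the gram's first word

    timed = []
    for i, word in enumerate(caption_words):
        gram = tuple(caption_words[i:i + n])
        t = asr_ngrams.get(gram)        # None where ASR missed or misheard the words
        timed.append((word, t))
    return timed

# Trailing caption words shorter than one full N-gram stay untimed in this toy.
captions = ["we", "choose", "to", "go", "to", "the", "moon"]
asr = [("we", 1.0), ("choose", 1.4), ("to", 1.7), ("go", 1.9),
       ("to", 2.1), ("the", 2.2), ("moon", 2.5)]
print(align_captions(captions, asr))
```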
  • Patent number: 8170280
    Abstract: The present disclosure relates to systems and methods for modeling, recognizing, and tracking object images in video files. In one embodiment, a video file, which includes a plurality of frames, is received. An image of an object is extracted from a particular frame in the video file, and a subsequent image is also extracted from a subsequent frame. A similarity value is then calculated between the extracted images from the particular frame and subsequent frame. If the calculated similarity value exceeds a predetermined similarity threshold, the extracted object images are assigned to an object group. The object group is used to generate an object model associated with images in the group, wherein the model is comprised of image features extracted from optimal object images in the object group. Optimal images from the group are also used for comparison to other object models for purposes of identifying images. (See the sketch following this entry.)
    Type: Grant
    Filed: December 3, 2008
    Date of Patent: May 1, 2012
    Assignee: Digitalsmiths, Inc.
    Inventors: Liang Zhao, Matthew G. Berry
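The grouping step in this abstract reduces to comparing feature vectors of object images across frames and merging those above a similarity threshold. In the sketch below, cosine similarity and the 0.95 threshold are assumptions; the model-building step over "optimal" images in each group is omitted.

```python
# Hypothetical sketch of grouping object images across frames by similarity.
# The feature vectors and cosine threshold are illustrative assumptions.
import numpy as np

def similarity(a, b):
    """Cosine similarity between two object feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def group_objects(detections, threshold=0.95):
    """Assign each per-frame detection to a group of the same recurring object."""
    groups = []                         # each group: list of feature vectors
    for feat in detections:
        for group in groups:
            if similarity(feat, group[-1]) > threshold:
                group.append(feat)      # same object seen again
                break
        else:
            groups.append([feat])       # a new object enters the video

    return groups

# Two alternating "objects" drifting slightly from frame to frame.
obj_a = np.array([1.0, 0.0, 0.2])
obj_b = np.array([0.1, 1.0, 0.0])
detections = [obj_a, obj_b, obj_a * 1.02, obj_b * 0.98]
print(len(group_objects(detections)))   # 2 groups
```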
  • Publication number: 20110134321
    Abstract: Methods, systems, and computer program products for synchronizing text with audio in a multimedia file, wherein the multimedia file is defined by a timeline having a start point and end point and respective points in time therebetween, wherein an N-gram analysis is used to compare each word of a closed-captioned text associated with the multimedia file with words generated by an automated speech recognition (ASR) analysis of the audio of the multimedia file to create an accurate, time-based metadata file in which each closed-captioned word is associated with a respective point on the timeline corresponding to the same point in time on the timeline in which the word is actually spoken in the audio and occurs within the video.
    Type: Application
    Filed: September 13, 2010
    Publication date: June 9, 2011
    Applicant: Digitalsmiths Corporation
    Inventors: Matthew G. Berry, Changwen Yang
  • Publication number: 20100162286
    Abstract: Systems and methods are described for analyzing video content in conjunction with historical video consumption data, and identifying and generating relationships, rules, and correlations between the video content and viewer behavior. According to one aspect, a system receives video consumption data associated with one or more output states for one or more videos. The output states generally comprise tracked and recorded viewer behaviors during videos such as pausing, rewinding, fast-forwarding, clicking on an advertisement (for Internet videos), and other similar actions. Next, the system receives metadata associated with the content of one or more videos. The metadata is associated with video content such as actors, places, objects, dialogue, etc. The system then analyzes the received video consumption data and metadata via a multivariate analysis engine to generate an output analysis of the data. (See the sketch following this entry.)
    Type: Application
    Filed: November 24, 2009
    Publication date: June 24, 2010
    Applicant: Digitalsmiths Corporation
    Inventor: Matthew G. Berry
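As a loose illustration of correlating output states with content metadata, the sketch below joins viewer events against scene time spans and counts co-occurrences; the event and scene schemas are invented for the example and stand in for the patent's multivariate analysis engine.

```python
# Hypothetical sketch of correlating viewer behavior with scene metadata.
# The event schema and co-occurrence counting are illustrative assumptions.
from collections import Counter

# Tracked output states: (video_id, seconds, action) tuples.
events = [
    ("ep1", 63.0, "rewind"), ("ep1", 64.5, "rewind"), ("ep1", 300.0, "fast-forward"),
    ("ep2", 12.0, "rewind"),
]

# Content metadata: which scene tag is on screen during each time span.
scenes = {
    "ep1": [(0, 120, "car-chase"), (120, 400, "dialogue")],
    "ep2": [(0, 60, "car-chase")],
}

def tag_at(video_id, t):
    """Return the scene tag on screen in the given video at time t, if any."""
    for start, end, tag in scenes.get(video_id, []):
        if start <= t < end:
            return tag
    return None

# Count which scene tags co-occur with which viewer actions.
cooccurrence = Counter((tag_at(v, t), action) for v, t, action in events)
print(cooccurrence.most_common())
# Here rewinds cluster in car-chase scenes: a rule the analysis could surface.
```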
  • Publication number: 20090285551
    Abstract: The present disclosure relates to systems and methods for identifying advertisement breaks in digital video files. Generally, an advertisement break identification module receives a digital video file and generates an edge response for each of one or more frames extracted from the video file. If one of the generated edge responses for a particular frame is less than a predefined threshold, then the module identifies the particular frame as the start of an advertisement break. The module then generates further edge responses for frames subsequent to the identified particular frame. Once an edge response is generated for a particular subsequent frame that is greater than the threshold, it is identified as the end of the advertisement break. The video file may then be manipulated or transformed, such as by associating metadata with the advertisement break for a variety of uses, removing the advertisement break from the video file, etc.
    Type: Application
    Filed: May 14, 2009
    Publication date: November 19, 2009
    Applicant: Digitalsmiths Corporation
    Inventor: Matthew G. Berry
  • Publication number: 20090235150
    Abstract: The present disclosure relates to systems and methods for dynamically creating hyperlinks associated with relevant multimedia content in a computer network. A hyperlink generation module receives an electronic text file from a server. The module searches the text file to identify keywords present in the file. Once the keywords have been identified, a database is queried to identify multimedia content that is related to the keywords. Generally, multimedia content is associated with metadata to enable efficient searching of the multimedia content. Typically, the multimedia content is contextually relevant to both the identified keywords and text file. One or more hyperlinks corresponding to the keywords are then generated and inserted into the text file. The hyperlinks provide pointers to the identified multimedia content. After insertion into the text file, the hyperlinks may be clicked by a user or viewer of the file to retrieve and display the identified multimedia content.
    Type: Application
    Filed: March 17, 2009
    Publication date: September 17, 2009
    Applicant: Digitalsmiths Corporation
    Inventor: Matthew G. Berry
  • Publication number: 20090208106
    Abstract: The present disclosure relates to systems and methods for classifying videos based on video content. For a given video file including a plurality of frames, a subset of frames is extracted for processing. Frames that are too dark, blurry, or otherwise poor classification candidates are discarded from the subset. Generally, material classification scores that describe the type of material content likely included in each frame are calculated for the remaining frames in the subset. The material classification scores are used to generate material arrangement vectors that represent the spatial arrangement of material content in each frame. The material arrangement vectors are subsequently classified to generate a scene classification score vector for each frame. The scene classification results are averaged (or otherwise processed) across all frames in the subset to associate the video file with one or more predefined scene categories related to overall types of scene content of the video file.
    Type: Application
    Filed: February 17, 2009
    Publication date: August 20, 2009
    Applicant: Digitalsmiths Corporation
    Inventors: Heather Dunlop, Matthew G. Berry
  • Publication number: 20090141940
    Abstract: The present disclosure relates to systems and methods for modeling, recognizing, and tracking object images in video files. In one embodiment, a video file, which includes a plurality of frames, is received. An image of an object is extracted from a particular frame in the video file, and a subsequent image is also extracted from a subsequent frame. A similarity value is then calculated between the extracted images from the particular frame and subsequent frame. If the calculated similarity value exceeds a predetermined similarity threshold, the extracted object images are assigned to an object group. The object group is used to generate an object model associated with images in the group, wherein the model is comprised of image features extracted from optimal object images in the object group. Optimal images from the group are also used for comparison to other object models for purposes of identifying images.
    Type: Application
    Filed: December 3, 2008
    Publication date: June 4, 2009
    Applicant: Digitalsmiths Corporation
    Inventors: Liang Zhao, Matthew G. Berry