Patents by Inventor Matthew G. Berry
Matthew G. Berry is named as an inventor on the patent filings listed below. The listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12210718
Abstract: Managing metadata associated with a digital media asset includes selecting the digital media asset, displaying the digital media asset in a filmstrip format that presents one or more scenes from the digital media asset along a timeline, wherein each scene corresponds with an underlying point in time along the timeline, and wherein the digital media asset has a start time and an end time that define the timeline, displaying at least one track in timeline alignment with the film strip format wherein the at least one track corresponds with a type of metadata associated with the digital media asset, and displaying on the at least one track, one or more segments, wherein each segment has a start point and an end point along the timeline and wherein each respective segment represents a span of time in which the type of metadata occurs within the digital media asset.
Type: Grant
Filed: October 9, 2018
Date of Patent: January 28, 2025
Assignee: Adeia Media Solutions Inc.
Inventors: Matthew G. Berry, Tim Jones, Isaac Kunkel
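The track-and-segment model in this abstract can be pictured with a few small data structures. The sketch below is illustrative only, not taken from the patent: the class and field names (`MediaAsset`, `Track`, `Segment`, `metadata_at`) are hypothetical, and times are plain seconds along the asset's timeline.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """A span of time in which one type of metadata occurs in the asset."""
    start: float  # seconds from the asset's start time
    end: float    # seconds from the asset's start time

@dataclass
class Track:
    """One metadata type, displayed in timeline alignment with the filmstrip."""
    metadata_type: str
    segments: list = field(default_factory=list)

@dataclass
class MediaAsset:
    start_time: float
    end_time: float
    tracks: list = field(default_factory=list)

    def metadata_at(self, t: float):
        """Return the metadata types whose segments cover time t."""
        return [
            track.metadata_type
            for track in self.tracks
            for seg in track.segments
            if seg.start <= t <= seg.end
        ]
```

Querying a point on the timeline then returns whichever metadata segments span it, which is what the aligned tracks in the filmstrip view convey visually.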
-
Publication number: 20240007696
Abstract: A system for using metadata from a video signal to associate advertisements therewith, comprising (i) a segmentation system to divide the video signal into video clips, (ii) a digitizing system for digitizing the video clips, (iii) a feature extraction system for extracting audio and video features from each video clip, associating each audio feature with respective video clips, associating each video feature with respective video clips, and saving the audio and video features into an associated metadata file, (iv) a web interface to the feature extraction system for receiving the video clips, and (v) a database, wherein video signals and associated metadata files are stored and indexed, wherein the associated metadata file is provided when a video player requests the corresponding video signal, enabling selection of a relevant advertisement for presentment in conjunction with respective video clips based on the associated audio and video features of the respective video clip.
Type: Application
Filed: September 13, 2023
Publication date: January 4, 2024
Inventors: Matthew G. Berry, Benjamin J. Weinberger, Schuyler E. Eckstrom, Albert L. Segars
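The final step of this pipeline, picking an advertisement whose features match a clip's extracted metadata, can be sketched as a simple set-overlap ranking. This is a minimal illustration, not the claimed system: the feature sets, Jaccard scoring, and the `select_advertisement` name are all assumptions standing in for whatever matching the abstract's database performs.

```python
def jaccard(a, b):
    """Overlap between two feature sets (0.0 = disjoint, 1.0 = identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def select_advertisement(clip_features, ad_catalog):
    """Pick the ad whose declared features best match a clip's metadata.

    ad_catalog maps an ad id to its feature set; returns the best-matching id.
    """
    return max(ad_catalog, key=lambda ad: jaccard(clip_features, ad_catalog[ad]))
```

In use, the clip's audio and video features come from the metadata file stored alongside the video signal, and the highest-scoring ad is presented with that clip.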
-
Patent number: 11800169
Abstract: A system for using metadata from a video signal to associate advertisements therewith, comprising (i) a segmentation system to divide the video signal into video clips, (ii) a digitizing system for digitizing the video clips, (iii) a feature extraction system for extracting audio and video features from each video clip, associating each audio feature with respective video clips, associating each video feature with respective video clips, and saving the audio and video features into an associated metadata file, (iv) a web interface to the feature extraction system for receiving the video clips, and (v) a database, wherein video signals and associated metadata files are stored and indexed, wherein the associated metadata file is provided when a video player requests the corresponding video signal, enabling selection of a relevant advertisement for presentment in conjunction with respective video clips based on the associated audio and video features of the respective video clip.
Type: Grant
Filed: February 16, 2016
Date of Patent: October 24, 2023
Assignee: TiVo Solutions Inc.
Inventors: Matthew G. Berry, Benjamin J. Weinberger, Schuyler E. Eckstrom, Albert L. Segars
-
Patent number: 11281743
Abstract: The present disclosure relates to systems and methods for dynamically creating hyperlinks associated with relevant multimedia content in a computer network. A hyperlink generation module receives an electronic text file from a server. The module searches the text file to identify keywords present in the file. Once the keywords have been identified, a database is queried to identify multimedia content that is related to the keywords. Generally, multimedia content is associated with metadata to enable efficient searching of the multimedia content. Typically, the multimedia content is contextually relevant to both the identified keywords and text file. One or more hyperlinks corresponding to the keywords are then generated and inserted into the text file. The hyperlinks provide pointers to the identified multimedia content. After insertion into the text file, the hyperlinks may be clicked by a user or viewer of the file to retrieve and display the identified multimedia content.
Type: Grant
Filed: May 19, 2017
Date of Patent: March 22, 2022
Assignee: TiVo Solutions Inc.
Inventor: Matthew G. Berry
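The keyword-to-hyperlink step this abstract describes can be sketched in a few lines. This is a toy illustration under stated assumptions, not the patented module: the `keyword_index` dict stands in for the metadata database query, and only the first occurrence of each keyword is linked.

```python
import re

def insert_hyperlinks(text, keyword_index):
    """Replace the first occurrence of each known keyword with an HTML link.

    keyword_index maps a keyword to the URL of related multimedia content
    (a stand-in for the database lookup in the abstract).
    """
    for keyword, url in keyword_index.items():
        pattern = r"\b%s\b" % re.escape(keyword)  # whole-word match only
        text = re.sub(pattern, '<a href="%s">%s</a>' % (url, keyword), text, count=1)
    return text
```

A real implementation would also rank candidate content by contextual relevance before choosing the link target, as the abstract notes.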
-
Publication number: 20190107906
Abstract: Managing metadata associated with a digital media asset includes selecting the digital media asset, displaying the digital media asset in a filmstrip format that presents one or more scenes from the digital media asset along a timeline, wherein each scene corresponds with an underlying point in time along the timeline, and wherein the digital media asset has a start time and an end time that define the timeline, displaying at least one track in timeline alignment with the film strip format wherein the at least one track corresponds with a type of metadata associated with the digital media asset, and displaying on the at least one track, one or more segments, wherein each segment has a start point and an end point along the timeline and wherein each respective segment represents a span of time in which the type of metadata occurs within the digital media asset.
Type: Application
Filed: October 9, 2018
Publication date: April 11, 2019
Inventors: Matthew G. Berry, Tim Jones, Isaac Kunkel
-
Patent number: 10095367
Abstract: Managing metadata associated with a digital media asset includes selecting the digital media asset, displaying the digital media asset in a filmstrip format that presents one or more scenes from the digital media asset along a timeline, wherein each scene corresponds with an underlying point in time along the timeline, and wherein the digital media asset has a start time and an end time that define the timeline, displaying at least one track in timeline alignment with the film strip format wherein the at least one track corresponds with a type of metadata associated with the digital media asset, and displaying on the at least one track, one or more segments, wherein each segment has a start point and an end point along the timeline and wherein each respective segment represents a span of time in which the type of metadata occurs within the digital media asset.
Type: Grant
Filed: October 15, 2010
Date of Patent: October 9, 2018
Assignee: TIVO SOLUTIONS INC.
Inventors: Matthew G. Berry, Tim Jones, Isaac Kunkel
-
Publication number: 20170255626
Abstract: The present disclosure relates to systems and methods for dynamically creating hyperlinks associated with relevant multimedia content in a computer network. A hyperlink generation module receives an electronic text file from a server. The module searches the text file to identify keywords present in the file. Once the keywords have been identified, a database is queried to identify multimedia content that is related to the keywords. Generally, multimedia content is associated with metadata to enable efficient searching of the multimedia content. Typically, the multimedia content is contextually relevant to both the identified keywords and text file. One or more hyperlinks corresponding to the keywords are then generated and inserted into the text file. The hyperlinks provide pointers to the identified multimedia content. After insertion into the text file, the hyperlinks may be clicked by a user or viewer of the file to retrieve and display the identified multimedia content.
Type: Application
Filed: May 19, 2017
Publication date: September 7, 2017
Inventor: Matthew G. Berry
-
Patent number: 9690786
Abstract: The present disclosure relates to systems and methods for dynamically creating hyperlinks associated with relevant multimedia content in a computer network. A hyperlink generation module receives an electronic text file from a server. The module searches the text file to identify keywords present in the file. Once the keywords have been identified, a database is queried to identify multimedia content that is related to the keywords. Generally, multimedia content is associated with metadata to enable efficient searching of the multimedia content. Typically, the multimedia content is contextually relevant to both the identified keywords and text file. One or more hyperlinks corresponding to the keywords are then generated and inserted into the text file. The hyperlinks provide pointers to the identified multimedia content. After insertion into the text file, the hyperlinks may be clicked by a user or viewer of the file to retrieve and display the identified multimedia content.
Type: Grant
Filed: March 17, 2009
Date of Patent: June 27, 2017
Assignee: TIVO SOLUTIONS INC.
Inventor: Matthew G. Berry
-
Publication number: 20160165288
Abstract: A system for using metadata from a video signal to associate advertisements therewith, comprising (i) a segmentation system to divide the video signal into video clips, (ii) a digitizing system for digitizing the video clips, (iii) a feature extraction system for extracting audio and video features from each video clip, associating each audio feature with respective video clips, associating each video feature with respective video clips, and saving the audio and video features into an associated metadata file, (iv) a web interface to the feature extraction system for receiving the video clips, and (v) a database, wherein video signals and associated metadata files are stored and indexed, wherein the associated metadata file is provided when a video player requests the corresponding video signal, enabling selection of a relevant advertisement for presentment in conjunction with respective video clips based on the associated audio and video features of the respective video clip.
Type: Application
Filed: February 16, 2016
Publication date: June 9, 2016
Inventors: Matthew G. Berry, Benjamin J. Weinberger, Schuyler E. Eckstrom, Albert L. Segars
-
Publication number: 20150245111
Abstract: A system for using metadata from a video signal to associate advertisements therewith, comprising (i) a segmentation system to divide the video signal into video clips, (ii) a digitizing system for digitizing the video clips, (iii) a feature extraction system for extracting audio and video features from each video clip, associating each audio feature with respective video clips, associating each video feature with respective video clips, and saving the audio and video features into an associated metadata file, (iv) a web interface to the feature extraction system for receiving the video clips, and (v) a database, wherein video signals and associated metadata files are stored and indexed, wherein the associated metadata file is provided when a video player requests the corresponding video signal, enabling selection of a relevant advertisement for presentment in conjunction with respective video clips based on the associated audio and video features of the respective video clip.
Type: Application
Filed: May 8, 2015
Publication date: August 27, 2015
Inventors: Matthew G. Berry, Benjamin J. Weinberger, Schuyler E. Eckstrom, Albert L. Segars
-
Patent number: 8380045
Abstract: Systems and methods are provided for generating unique signatures for digital video files to locate video sequences within a video file comprising calculating a frame signature for each frame of a first video; and for a second video: calculating a frame signature for each frame of the second video for corresponding first video frame signatures, calculating a frame distance between each of the corresponding video frame signatures, determining video signature similarity between the videos, and searching within a video signature similarity curve to determine a maximum corresponding to the first video within the second video. The method further applies area augmentation to the video signature similarity curve to determine a maximum from among a plurality of maxima corresponding to the first video file within the second video file.
Type: Grant
Filed: October 9, 2008
Date of Patent: February 19, 2013
Inventors: Matthew G. Berry, Schuyler E. Eckstrom
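The similarity-curve search described here can be sketched with scalar frame signatures. This is a simplified stand-in, not the patented method: real signatures would be feature vectors rather than single numbers, and the area-augmentation step for disambiguating multiple maxima is omitted.

```python
def frame_distance(sig_a, sig_b):
    """Distance between two frame signatures (here, plain numbers)."""
    return abs(sig_a - sig_b)

def similarity_curve(needle, haystack):
    """Similarity of the short video's signatures at every offset in the long one.

    Similarity is the negated total frame distance, so larger is better.
    """
    curve = []
    for offset in range(len(haystack) - len(needle) + 1):
        dist = sum(frame_distance(a, b)
                   for a, b in zip(needle, haystack[offset:offset + len(needle)]))
        curve.append(-dist)
    return curve

def locate(needle, haystack):
    """Offset in the long video where the short video matches best."""
    curve = similarity_curve(needle, haystack)
    return max(range(len(curve)), key=curve.__getitem__)
```

Searching the curve for its maximum gives the offset at which the first video most likely appears inside the second.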
-
Patent number: 8311344
Abstract: The present disclosure relates to systems and methods for classifying videos based on video content. For a given video file including a plurality of frames, a subset of frames is extracted for processing. Frames that are too dark, blurry, or otherwise poor classification candidates are discarded from the subset. Generally, material classification scores that describe type of material content likely included in each frame are calculated for the remaining frames in the subset. The material classification scores are used to generate material arrangement vectors that represent the spatial arrangement of material content in each frame. The material arrangement vectors are subsequently classified to generate a scene classification score vector for each frame. The scene classification results are averaged (or otherwise processed) across all frames in the subset to associate the video file with one or more predefined scene categories related to overall types of scene content of the video file.
Type: Grant
Filed: February 17, 2009
Date of Patent: November 13, 2012
Assignee: Digitalsmiths, Inc.
Inventors: Heather Dunlop, Matthew G. Berry
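The filter-classify-average flow of this abstract can be outlined as follows. Everything here is a hypothetical sketch: frames are reduced to a brightness value plus a precomputed material-arrangement vector, and `classify_frame` stands in for the per-frame scene classifier.

```python
def classify_scene(frames, classify_frame, min_brightness=0.2):
    """Average per-frame scene scores over usable frames.

    frames: list of (brightness, arrangement_vector) pairs; classify_frame
    maps an arrangement vector to a dict of scene-category scores. Frames
    below min_brightness are discarded, mirroring the filtering step.
    """
    usable = [vec for brightness, vec in frames if brightness >= min_brightness]
    totals = {}
    for vec in usable:
        for category, score in classify_frame(vec).items():
            totals[category] = totals.get(category, 0.0) + score
    return {c: s / len(usable) for c, s in totals.items()}
```

The averaged score vector is then thresholded or ranked to assign the video to one or more predefined scene categories.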
-
Patent number: 8311390
Abstract: The present disclosure relates to systems and methods for identifying advertisement breaks in digital video files. Generally, an advertisement break identification module receives a digital video file and generates an edge response for each of one or more frames extracted from the video file. If one of the generated edge responses for a particular frame is less than a predefined threshold, then the module identifies the particular frame as the start of an advertisement break. The module then generates further edge responses for frames subsequent to the identified particular frame. Once an edge response is generated for a particular subsequent frame that is greater than the threshold, it is identified as the end of the advertisement break. The video file may then be manipulated or transformed, such as by associating metadata with the advertisement break for a variety of uses, removing the advertisement break from the video file, etc.
Type: Grant
Filed: May 14, 2009
Date of Patent: November 13, 2012
Assignee: Digitalsmiths, Inc.
Inventor: Matthew G. Berry
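The start/end detection logic in this abstract reduces to finding runs of frames whose edge response falls below a threshold. A minimal sketch, assuming the per-frame edge responses have already been computed (the patent does not specify the edge operator used here):

```python
def find_ad_breaks(edge_responses, threshold):
    """Return (start, end) frame-index pairs for runs below the threshold.

    edge_responses[i] is the edge response of frame i. A frame whose
    response drops below the threshold starts a break, and the first frame
    back above it ends the break, as described in the abstract.
    """
    breaks, start = [], None
    for i, response in enumerate(edge_responses):
        if response < threshold and start is None:
            start = i
        elif response >= threshold and start is not None:
            breaks.append((start, i))
            start = None
    if start is not None:  # break runs to the end of the file
        breaks.append((start, len(edge_responses)))
    return breaks
```

Each detected interval can then be tagged with metadata or cut from the file, as the abstract suggests.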
-
Patent number: 8281231
Abstract: Method, systems, and computer program products for synchronizing text with audio in a multimedia file, wherein the multimedia file is defined by a timeline having a start point and end point and respective points in time therebetween, wherein an N-gram analysis is used to compare each word of a closed-captioned text associated with the multimedia file with words generated by an automated speech recognition (ASR) analysis of the audio of the multimedia file to create an accurate, time-based metadata file in which each closed-captioned word is associated with a respective point on the timeline corresponding to the same point in time on the timeline in which the word is actually spoken in the audio and occurs within the video.
Type: Grant
Filed: September 13, 2010
Date of Patent: October 2, 2012
Assignee: Digitalsmiths, Inc.
Inventors: Matthew G. Berry, Changwen Yang
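The N-gram comparison this abstract describes can be sketched by anchoring caption words to timestamped ASR words wherever an n-gram of caption text matches an n-gram of ASR output. This is a much-simplified illustration, not the patented method; exact n-gram matching stands in for whatever fuzzier comparison a production system would use.

```python
def align_captions(caption_words, asr_words, n=3):
    """Assign each caption word the timestamp of its matching ASR word.

    asr_words is a list of (word, time) pairs from speech recognition. An
    n-gram of caption words is matched against n-grams of ASR words to
    anchor timestamps. Returns a list of (caption_word, time-or-None).
    """
    asr_text = [w for w, _ in asr_words]
    times = [None] * len(caption_words)
    for i in range(len(caption_words) - n + 1):
        gram = caption_words[i:i + n]
        for j in range(len(asr_text) - n + 1):
            if asr_text[j:j + n] == gram:
                for k in range(n):  # copy the ASR timestamps onto the gram
                    times[i + k] = asr_words[j + k][1]
                break
    return list(zip(caption_words, times))
```

The result is the time-based metadata file the abstract describes: each caption word tied to the point on the timeline where it is actually spoken.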
-
Patent number: 8170280
Abstract: The present disclosure relates to systems and methods for modeling, recognizing, and tracking object images in video files. In one embodiment, a video file, which includes a plurality of frames, is received. An image of an object is extracted from a particular frame in the video file, and a subsequent image is also extracted from a subsequent frame. A similarity value is then calculated between the extracted images from the particular frame and subsequent frame. If the calculated similarity value exceeds a predetermined similarity threshold, the extracted object images are assigned to an object group. The object group is used to generate an object model associated with images in the group, wherein the model is comprised of image features extracted from optimal object images in the object group. Optimal images from the group are also used for comparison to other object models for purposes of identifying images.
Type: Grant
Filed: December 3, 2008
Date of Patent: May 1, 2012
Assignee: Digital Smiths, Inc.
Inventors: Liang Zhao, Matthew G. Berry
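The threshold-based grouping step can be sketched greedily: each extracted object image joins the first group it sufficiently resembles, otherwise it seeds a new group. This is a hypothetical stand-in for the patented grouping; the similarity function, the use of a group's first image as its representative, and the threshold value are all assumptions.

```python
def group_objects(images, similarity, threshold=0.8):
    """Assign each extracted object image to a group of similar images.

    similarity(a, b) returns a value in [0, 1]; an image joins the first
    group whose representative it resembles beyond the threshold,
    otherwise it starts a new group.
    """
    groups = []
    for image in images:
        for group in groups:
            if similarity(image, group[0]) >= threshold:
                group.append(image)
                break
        else:
            groups.append([image])
    return groups
```

Each resulting group would then yield an object model built from its best images, which can be compared against other models to identify the object.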
-
Publication number: 20110134321
Abstract: Method, systems, and computer program products for synchronizing text with audio in a multimedia file, wherein the multimedia file is defined by a timeline having a start point and end point and respective points in time therebetween, wherein an N-gram analysis is used to compare each word of a closed-captioned text associated with the multimedia file with words generated by an automated speech recognition (ASR) analysis of the audio of the multimedia file to create an accurate, time-based metadata file in which each closed-captioned word is associated with a respective point on the timeline corresponding to the same point in time on the timeline in which the word is actually spoken in the audio and occurs within the video.
Type: Application
Filed: September 13, 2010
Publication date: June 9, 2011
Applicant: Digitalsmiths Corporation
Inventors: Matthew G. Berry, Changwen Yang
-
Publication number: 20100162286
Abstract: Systems and methods are described for analyzing video content in conjunction with historical video consumption data, and identifying and generating relationships, rules, and correlations between the video content and viewer behavior. According to one aspect, a system receives video consumption data associated with one or more output states for one or more videos. The output states generally comprise tracked and recorded viewer behaviors during videos such as pausing, rewinding, fast-forwarding, clicking on an advertisement (for Internet videos), and other similar actions. Next, the system receives metadata associated with the content of one or more videos. The metadata is associated with video content such as actors, places, objects, dialogue, etc. The system then analyzes the received video consumption data and metadata via a multivariate analysis engine to generate an output analysis of the data.
Type: Application
Filed: November 24, 2009
Publication date: June 24, 2010
Applicant: DIGITALSMITHS CORPORATION
Inventor: Matthew G. Berry
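One simple correlation of the kind this abstract gestures at is counting viewer actions against the metadata tags active when they occurred. This is a toy stand-in for the multivariate analysis engine, with hypothetical event and metadata shapes.

```python
from collections import Counter

def actions_by_tag(events, metadata):
    """Count viewer actions per metadata tag.

    events is a list of (video_id, time, action); metadata maps a video id
    to (start, end, tag) spans. Each event is credited to every tag active
    at its timestamp.
    """
    counts = Counter()
    for video_id, time, action in events:
        for start, end, tag in metadata.get(video_id, []):
            if start <= time <= end:
                counts[(tag, action)] += 1
    return counts
```

High counts for a (tag, action) pair, such as rewinds during a particular scene type, are the sort of content-to-behavior relationship the system would surface.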
-
Publication number: 20090285551
Abstract: The present disclosure relates to systems and methods for identifying advertisement breaks in digital video files. Generally, an advertisement break identification module receives a digital video file and generates an edge response for each of one or more frames extracted from the video file. If one of the generated edge responses for a particular frame is less than a predefined threshold, then the module identifies the particular frame as the start of an advertisement break. The module then generates further edge responses for frames subsequent to the identified particular frame. Once an edge response is generated for a particular subsequent frame that is greater than the threshold, it is identified as the end of the advertisement break. The video file may then be manipulated or transformed, such as by associating metadata with the advertisement break for a variety of uses, removing the advertisement break from the video file, etc.
Type: Application
Filed: May 14, 2009
Publication date: November 19, 2009
Applicant: DIGITALSMITHS CORPORATION
Inventor: Matthew G. Berry
-
Publication number: 20090235150
Abstract: The present disclosure relates to systems and methods for dynamically creating hyperlinks associated with relevant multimedia content in a computer network. A hyperlink generation module receives an electronic text file from a server. The module searches the text file to identify keywords present in the file. Once the keywords have been identified, a database is queried to identify multimedia content that is related to the keywords. Generally, multimedia content is associated with metadata to enable efficient searching of the multimedia content. Typically, the multimedia content is contextually relevant to both the identified keywords and text file. One or more hyperlinks corresponding to the keywords are then generated and inserted into the text file. The hyperlinks provide pointers to the identified multimedia content. After insertion into the text file, the hyperlinks may be clicked by a user or viewer of the file to retrieve and display the identified multimedia content.
Type: Application
Filed: March 17, 2009
Publication date: September 17, 2009
Applicant: DIGITALSMITHS CORPORATION
Inventor: Matthew G. Berry
-
Publication number: 20090208106
Abstract: The present disclosure relates to systems and methods for classifying videos based on video content. For a given video file including a plurality of frames, a subset of frames is extracted for processing. Frames that are too dark, blurry, or otherwise poor classification candidates are discarded from the subset. Generally, material classification scores that describe type of material content likely included in each frame are calculated for the remaining frames in the subset. The material classification scores are used to generate material arrangement vectors that represent the spatial arrangement of material content in each frame. The material arrangement vectors are subsequently classified to generate a scene classification score vector for each frame. The scene classification results are averaged (or otherwise processed) across all frames in the subset to associate the video file with one or more predefined scene categories related to overall types of scene content of the video file.
Type: Application
Filed: February 17, 2009
Publication date: August 20, 2009
Applicant: DIGITALSMITHS CORPORATION
Inventors: Heather Dunlop, Matthew G. Berry