Patents by Inventor Mihailo Stojancic
Mihailo Stojancic has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11922968
Abstract: A boundary of a highlight of audiovisual content depicting an event is identified. The audiovisual content may be a broadcast, such as a television broadcast of a sporting event. The highlight may be a segment of the audiovisual content deemed to be of particular interest. Audio data for the audiovisual content is stored, and the audio data is automatically analyzed to detect one or more audio events indicative of one or more occurrences to be included in the highlight. Each audio event may be a brief, high-energy audio burst such as the sound made by a tennis serve. A time index within the audiovisual content, before or after the audio event, may be designated as the boundary, which may be the beginning or end of the highlight.
Type: Grant
Filed: February 25, 2022
Date of Patent: March 5, 2024
Assignee: STATS LLC
Inventors: Mihailo Stojancic, Warren Packard
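The kind of audio-burst boundary detection this abstract describes can be sketched as short-time energy analysis: scan windows of the audio, find the first high-energy burst, and place the boundary a little before it. This is a minimal illustration, not the patented method; the window size, threshold, and `find_burst_boundary` helper are all invented for the example.

```python
# Sketch: place a highlight boundary just before a brief, high-energy
# audio burst (e.g. a tennis serve). Parameters are illustrative only.

def short_time_energy(samples, window):
    """Mean squared amplitude over non-overlapping windows."""
    return [
        sum(x * x for x in samples[i:i + window]) / window
        for i in range(0, len(samples) - window + 1, window)
    ]

def find_burst_boundary(samples, window=4, threshold=0.5, lead_windows=1):
    """Return the sample index one window before the first window whose
    energy exceeds the threshold (a candidate highlight start)."""
    for w, energy in enumerate(short_time_energy(samples, window)):
        if energy >= threshold:
            return max(0, (w - lead_windows) * window)
    return None  # no burst detected

# A quiet signal with one loud burst starting at sample 12.
audio = [0.01] * 12 + [0.9, -0.8, 0.85, -0.9] + [0.02] * 8
print(find_burst_boundary(audio))  # 8 -> boundary one window before the burst
```

A production system would operate on decoded PCM frames and tune the threshold per sport, but the boundary-placement logic is the same shape.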
-
Publication number: 20230230377
Abstract: Metadata for one or more highlights of a video stream may be extracted from one or more card images embedded in the video stream. The highlights may be segments of the video stream, such as a broadcast of a sporting event, that are of particular interest. According to one method, video frames of the video stream are stored. One or more information cards embedded in a decoded video frame may be detected by analyzing one or more predetermined video frame regions. Image segmentation, edge detection, and/or closed contour identification may then be performed on identified video frame region(s). Further processing may include obtaining a minimum rectangular perimeter area enclosing all remaining segments, which may then be further processed to determine precise boundaries of information card(s). The card image(s) may be analyzed to obtain metadata, which may be stored in association with at least one of the video frames.
Type: Application
Filed: March 23, 2023
Publication date: July 20, 2023
Applicant: STATS LLC
Inventors: Mihailo Stojancic, Warren Packard, Dennis Kanygin
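The "minimum rectangular perimeter area" step in this abstract amounts to taking the tight bounding box of foreground pixels inside a predetermined frame region. Here is a minimal sketch of that one step, assuming segmentation/edge detection has already produced a binary mask; the frame, region, and helper name are illustrative, not from the patent.

```python
# Sketch: locate an information card inside a predetermined frame region
# by taking the minimal bounding rectangle of foreground pixels.

def card_bounding_box(frame, region):
    """frame: 2D list of 0/1 pixels (1 = candidate card pixel after
    segmentation/edge detection). region: (top, left, bottom, right)
    predetermined search area. Returns the (top, left, bottom, right)
    of the minimal rectangle enclosing all foreground pixels, or None."""
    top, left, bottom, right = region
    rows = [r for r in range(top, bottom)
            if any(frame[r][c] for c in range(left, right))]
    cols = [c for c in range(left, right)
            if any(frame[r][c] for r in range(top, bottom))]
    if not rows or not cols:
        return None
    return (rows[0], cols[0], rows[-1] + 1, cols[-1] + 1)

# 6x8 frame with a 2x3 "card" at rows 2-3, columns 4-6.
frame = [[0] * 8 for _ in range(6)]
for r in (2, 3):
    for c in (4, 5, 6):
        frame[r][c] = 1
print(card_bounding_box(frame, (0, 0, 6, 8)))  # (2, 4, 4, 7)
```

In practice one would use a vision library's contour routines on the decoded frame; restricting the search to predetermined regions (e.g. the lower third, where score bugs usually sit) keeps the per-frame cost low.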
-
Publication number: 20230222797
Abstract: One or more highlights of a video stream may be identified. The highlights may be segments of a video stream, such as a broadcast of a sporting event, that are of particular interest to one or more users. According to one method, at least a portion of the video stream may be stored. The portion of the video stream may be compared with templates of a template database to identify the one or more highlights. Each highlight may be a subset of the video stream that is deemed likely to match the one or more templates. The highlights, an identifier that identifies each of the highlights within the video stream, and/or metadata pertaining particularly to the one or more highlights may be stored to facilitate playback of the highlights for the users.
Type: Application
Filed: February 27, 2023
Publication date: July 13, 2023
Applicant: STATS LLC
Inventors: Mihailo Stojancic, Warren Packard, Dennis Kanygin
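Template matching of the sort this abstract describes can be sketched as nearest-template search over per-segment feature vectors: a segment whose distance to some template falls under a threshold becomes a highlight candidate. Everything below (feature vectors, template names, the threshold) is fabricated for illustration.

```python
# Sketch: compare per-segment feature vectors against a template
# database; close matches are highlight candidates.

def l2(a, b):
    """Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def find_highlights(segments, templates, threshold=1.0):
    """segments: {segment_id: feature_vector}. Returns the ids whose
    best-matching template lies within the distance threshold."""
    hits = []
    for seg_id, vec in segments.items():
        best = min(l2(vec, t) for t in templates.values())
        if best <= threshold:
            hits.append(seg_id)
    return hits

templates = {"goal": [1.0, 0.0, 0.0], "replay": [0.0, 1.0, 0.0]}
segments = {
    "seg-001": [0.9, 0.1, 0.0],   # close to the "goal" template
    "seg-002": [0.3, 0.4, 0.9],   # matches nothing well
}
print(find_highlights(segments, templates, threshold=0.2))  # ['seg-001']
```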
-
Patent number: 11615621
Abstract: Metadata for one or more highlights of a video stream may be extracted from one or more card images embedded in the video stream. The highlights may be segments of the video stream, such as a broadcast of a sporting event, that are of particular interest. According to one method, video frames of the video stream are stored. One or more information cards embedded in a decoded video frame may be detected by analyzing one or more predetermined video frame regions. Image segmentation, edge detection, and/or closed contour identification may then be performed on identified video frame region(s). Further processing may include obtaining a minimum rectangular perimeter area enclosing all remaining segments, which may then be further processed to determine precise boundaries of information card(s). The card image(s) may be analyzed to obtain metadata, which may be stored in association with at least one of the video frames.
Type: Grant
Filed: October 4, 2021
Date of Patent: March 28, 2023
Assignee: STATS LLC
Inventors: Mihailo Stojancic, Warren Packard, Dennis Kanygin
-
Patent number: 11594028
Abstract: One or more highlights of a video stream may be identified. The highlights may be segments of a video stream, such as a broadcast of a sporting event, that are of particular interest to one or more users. According to one method, at least a portion of the video stream may be stored. The portion of the video stream may be compared with templates of a template database to identify the one or more highlights. Each highlight may be a subset of the video stream that is deemed likely to match the one or more templates. The highlights, an identifier that identifies each of the highlights within the video stream, and/or metadata pertaining particularly to the one or more highlights may be stored to facilitate playback of the highlights for the users.
Type: Grant
Filed: May 14, 2019
Date of Patent: February 28, 2023
Assignee: STATS LLC
Inventors: Mihailo Stojancic, Warren Packard, Dennis Kanygin
-
Publication number: 20220327829
Abstract: Metadata for highlights of a video stream is extracted from card images embedded in the video stream. The highlights may be segments of a video stream, such as a broadcast of a sporting event, that are of particular interest to one or more users. Card images embedded in video frames of the video stream are identified and processed to extract text. The text characters may be recognized by applying a machine-learned model trained with a set of characters extracted from card images embedded in sports television programming contents. The training set of character vectors may be pre-processed to maximize metric distance between the training set members. The text may be interpreted to obtain the metadata. The metadata may be stored in association with the portion of the video stream. The metadata may provide information regarding the highlights, and may be presented concurrently with playback of the highlights.
Type: Application
Filed: June 24, 2022
Publication date: October 13, 2022
Applicant: STATS LLC
Inventors: Mihailo Stojancic, Warren Packard
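The character recognition this abstract describes can be sketched as nearest-neighbor classification over character vectors, where the training vectors have been chosen (or pre-processed) to sit far apart in metric distance. The tiny training set and glyph vectors below are fabricated for illustration; they stand in for features extracted from card images.

```python
# Sketch: nearest-neighbor character recognition over character vectors.
# The "training set" is invented; real vectors would come from glyphs
# cropped out of score-card images.

def l1(a, b):
    """Manhattan distance between two equal-length vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def recognize(glyph_vec, training_set):
    """Return the label of the nearest training vector."""
    return min(training_set, key=lambda label: l1(glyph_vec, training_set[label]))

# Training vectors deliberately far apart in metric distance,
# mirroring the pre-processing goal stated in the abstract.
training_set = {
    "0": [1.0, 0.0, 0.0, 0.0],
    "1": [0.0, 1.0, 0.0, 0.0],
    "7": [0.0, 0.0, 1.0, 0.0],
}
score_text = "".join(recognize(v, training_set)
                     for v in ([0.9, 0.1, 0.0, 0.1],   # noisy "0"
                               [0.1, 0.8, 0.1, 0.0]))  # noisy "1"
print(score_text)  # "01"
```

Maximizing separation between training members makes the nearest-neighbor decision robust to the compression noise typical of broadcast frames.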
-
Patent number: 11373404
Abstract: Metadata for highlights of a video stream is extracted from card images embedded in the video stream. The highlights may be segments of a video stream, such as a broadcast of a sporting event, that are of particular interest to one or more users. Card images embedded in video frames of the video stream are identified and processed to extract text. The text characters may be recognized by applying a machine-learned model trained with a set of characters extracted from card images embedded in sports television programming contents. The training set of character vectors may be pre-processed to maximize metric distance between the training set members. The text may be interpreted to obtain the metadata. The metadata may be stored in association with the portion of the video stream. The metadata may provide information regarding the highlights, and may be presented concurrently with playback of the highlights.
Type: Grant
Filed: May 14, 2019
Date of Patent: June 28, 2022
Assignee: STATS LLC
Inventors: Mihailo Stojancic, Warren Packard
-
Publication number: 20220180892
Abstract: A boundary of a highlight of audiovisual content depicting an event is identified. The audiovisual content may be a broadcast, such as a television broadcast of a sporting event. The highlight may be a segment of the audiovisual content deemed to be of particular interest. Audio data for the audiovisual content is stored, and the audio data is automatically analyzed to detect one or more audio events indicative of one or more occurrences to be included in the highlight. Each audio event may be a brief, high-energy audio burst such as the sound made by a tennis serve. A time index within the audiovisual content, before or after the audio event, may be designated as the boundary, which may be the beginning or end of the highlight.
Type: Application
Filed: February 25, 2022
Publication date: June 9, 2022
Applicant: STATS LLC
Inventors: Mihailo Stojancic, Warren Packard
-
Patent number: 11264048
Abstract: A boundary of a highlight of audiovisual content depicting an event is identified. The audiovisual content may be a broadcast, such as a television broadcast of a sporting event. The highlight may be a segment of the audiovisual content deemed to be of particular interest. Audio data for the audiovisual content is stored, and the audio data is automatically analyzed to detect one or more audio events indicative of one or more occurrences to be included in the highlight. Each audio event may be a brief, high-energy audio burst such as the sound made by a tennis serve. A time index within the audiovisual content, before or after the audio event, may be designated as the boundary, which may be the beginning or end of the highlight.
Type: Grant
Filed: August 27, 2019
Date of Patent: March 1, 2022
Assignee: STATS LLC
Inventors: Mihailo Stojancic, Warren Packard
-
Publication number: 20220027631
Abstract: Metadata for one or more highlights of a video stream may be extracted from one or more card images embedded in the video stream. The highlights may be segments of the video stream, such as a broadcast of a sporting event, that are of particular interest. According to one method, video frames of the video stream are stored. One or more information cards embedded in a decoded video frame may be detected by analyzing one or more predetermined video frame regions. Image segmentation, edge detection, and/or closed contour identification may then be performed on identified video frame region(s). Further processing may include obtaining a minimum rectangular perimeter area enclosing all remaining segments, which may then be further processed to determine precise boundaries of information card(s). The card image(s) may be analyzed to obtain metadata, which may be stored in association with at least one of the video frames.
Type: Application
Filed: October 4, 2021
Publication date: January 27, 2022
Applicant: STATS LLC
Inventors: Mihailo Stojancic, Warren Packard, Dennis Kanygin
-
Patent number: 11138438
Abstract: Metadata for one or more highlights of a video stream may be extracted from one or more card images embedded in the video stream. The highlights may be segments of the video stream, such as a broadcast of a sporting event, that are of particular interest. According to one method, video frames of the video stream are stored. One or more information cards embedded in a decoded video frame may be detected by analyzing one or more predetermined video frame regions. Image segmentation, edge detection, and/or closed contour identification may then be performed on identified video frame region(s). Further processing may include obtaining a minimum rectangular perimeter area enclosing all remaining segments, which may then be further processed to determine precise boundaries of information card(s). The card image(s) may be analyzed to obtain metadata, which may be stored in association with at least one of the video frames.
Type: Grant
Filed: May 14, 2019
Date of Patent: October 5, 2021
Assignee: STATS LLC
Inventors: Mihailo Stojancic, Warren Packard, Dennis Kanygin
-
Patent number: 11025985
Abstract: Metadata for highlights of audiovisual content depicting a sporting event or other event are extracted from audiovisual content. The highlights may be segments of the content, such as a broadcast of a sporting event, that are of particular interest. Audio data for the audiovisual content is stored, and portions of the audio data indicating crowd excitement (noise) are automatically identified by analyzing an audio signal in the joint time and frequency domains. Multiple indicators are derived and subsequently processed to detect, validate, and render occurrences of crowd noise. Metadata are automatically generated, including time of occurrence, level of noise (excitement), and duration of cheering. Metadata may be stored, comprising at least a time index indicating a time, within the audiovisual content, at which each of the portions occurs. Periods of intense crowd noise may be used to identify highlights and/or to indicate crowd excitement during viewing of a highlight.
Type: Grant
Filed: May 23, 2019
Date of Patent: June 1, 2021
Assignee: STATS LLC
Inventors: Mihailo Stojancic, Warren Packard
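The crowd-noise metadata this abstract describes (time of occurrence, noise level, duration of cheering) can be sketched by flagging runs of consecutive high-energy audio windows. This is a deliberately simplified stand-in for the joint time-frequency analysis named in the abstract; the window length, threshold, and minimum run length are all invented for the example.

```python
# Sketch: detect sustained crowd noise as runs of consecutive
# high-energy windows, emitting (time, level, duration) metadata.

def window_energy(samples, window):
    return [sum(x * x for x in samples[i:i + window]) / window
            for i in range(0, len(samples) - window + 1, window)]

def crowd_noise_events(samples, rate, window, threshold=0.2, min_windows=2):
    """Return [{'time': s, 'level': peak_energy, 'duration': s}, ...] for
    each run of >= min_windows consecutive windows above the threshold."""
    events, run = [], []
    for energy in window_energy(samples, window) + [0.0]:  # sentinel ends last run
        if energy >= threshold:
            run.append(energy)
        else:
            if len(run) >= min_windows:
                start_window = len(events) and 0  # placeholder, fixed below
            run_len = len(run)
            if run_len >= min_windows:
                # start of run = current window index minus run length
                w = (len([e for e in events]),)  # unused; kept simple below
            run = []
    # Simpler second pass: record runs with their window indices.
    events, run_start, run = [], None, []
    energies = window_energy(samples, window)
    for w, energy in enumerate(energies + [0.0]):
        if energy >= threshold:
            if run_start is None:
                run_start = w
            run.append(energy)
        else:
            if len(run) >= min_windows:
                events.append({"time": run_start * window / rate,
                               "level": max(run),
                               "duration": len(run) * window / rate})
            run_start, run = None, []
    return events

quiet, loud = [0.01] * 4, [0.7, -0.6, 0.65, -0.7]
audio = quiet + loud * 3 + quiet
events = crowd_noise_events(audio, rate=4, window=4)
print(events[0]["time"], events[0]["duration"])  # 1.0 3.0
```

A real detector would validate candidates against spectral shape (cheering is broadband) rather than raw energy alone, which is what the "multiple indicators" language in the abstract points at.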
-
Publication number: 20200037022
Abstract: A boundary of a highlight of audiovisual content depicting an event is identified. The audiovisual content may be a broadcast, such as a television broadcast of a sporting event. The highlight may be a segment of the audiovisual content deemed to be of particular interest. Audio data for the audiovisual content is stored, and the audio data is automatically analyzed to detect soft-entry points identified as low spectral activity points and/or low volume points in the analyzed audio data. A time index within the audiovisual content, corresponding to the soft-entry point, may be designated as the boundary, which may be the beginning or end of the highlight.
Type: Application
Filed: June 13, 2019
Publication date: January 30, 2020
Inventors: Mihailo Stojancic, Warren Packard
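A soft-entry point in the sense of this abstract is simply the quietest spot in a search range, so a highlight can begin or end where the audio is least active. A minimal sketch, assuming low volume as the sole criterion (the abstract also allows low spectral activity):

```python
# Sketch: pick a "soft entry" boundary as the start of the
# lowest-energy window in the audio. Window size is illustrative.

def soft_entry_index(samples, window=4):
    """Return the start sample of the lowest-energy window."""
    energies = [
        (sum(x * x for x in samples[i:i + window]), i)
        for i in range(0, len(samples) - window + 1, window)
    ]
    return min(energies)[1]  # tuple comparison: lowest energy wins

loud, quiet = [0.8, -0.7, 0.75, -0.8], [0.01, -0.02, 0.01, 0.0]
audio = loud + quiet + loud
print(soft_entry_index(audio))  # 4 -> boundary lands in the quiet stretch
```

Cutting at such a point avoids starting playback mid-cheer or mid-word, which is the practical payoff of a soft entry.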
-
Publication number: 20190373310
Abstract: Metadata for highlights of audiovisual content depicting a sporting event or other event are extracted from audiovisual content. The highlights may be segments of the content, such as a broadcast of a sporting event, that are of particular interest. Audio data for the audiovisual content is stored, and portions of the audio data indicating crowd excitement (noise) are automatically identified by analyzing an audio signal in the joint time and frequency domains. Multiple indicators are derived and subsequently processed to detect, validate, and render occurrences of crowd noise. Metadata are automatically generated, including time of occurrence, level of noise (excitement), and duration of cheering. Metadata may be stored, comprising at least a time index indicating a time, within the audiovisual content, at which each of the portions occurs. Periods of intense crowd noise may be used to identify highlights and/or to indicate crowd excitement during viewing of a highlight.
Type: Application
Filed: May 23, 2019
Publication date: December 5, 2019
Inventors: Mihailo Stojancic, Warren Packard
-
Publication number: 20190354764
Abstract: Metadata for one or more highlights of a video stream may be extracted from one or more card images embedded in the video stream. The highlights may be segments of the video stream, such as a broadcast of a sporting event, that are of particular interest. According to one method, video frames of the video stream are stored. One or more information cards embedded in a decoded video frame may be detected by analyzing one or more predetermined video frame regions. Image segmentation, edge detection, and/or closed contour identification may then be performed on identified video frame region(s). Further processing may include obtaining a minimum rectangular perimeter area enclosing all remaining segments, which may then be further processed to determine precise boundaries of information card(s). The card image(s) may be analyzed to obtain metadata, which may be stored in association with at least one of the video frames.
Type: Application
Filed: May 14, 2019
Publication date: November 21, 2019
Inventors: Mihailo Stojancic, Warren Packard, Dennis Kanygin
-
Publication number: 20190354763
Abstract: One or more highlights of a video stream may be identified. The highlights may be segments of a video stream, such as a broadcast of a sporting event, that are of particular interest to one or more users. According to one method, at least a portion of the video stream may be stored. The portion of the video stream may be compared with templates of a template database to identify the one or more highlights. Each highlight may be a subset of the video stream that is deemed likely to match the one or more templates. The highlights, an identifier that identifies each of the highlights within the video stream, and/or metadata pertaining particularly to the one or more highlights may be stored to facilitate playback of the highlights for the users.
Type: Application
Filed: May 14, 2019
Publication date: November 21, 2019
Inventors: Mihailo Stojancic, Warren Packard, Dennis Kanygin
-
Publication number: 20190356948
Abstract: Metadata for highlights of a video stream is extracted from card images embedded in the video stream. The highlights may be segments of a video stream, such as a broadcast of a sporting event, that are of particular interest to one or more users. Card images embedded in video frames of the video stream are identified and processed to extract text. The text characters may be recognized by applying a machine-learned model trained with a set of characters extracted from card images embedded in sports television programming contents. The training set of character vectors may be pre-processed to maximize metric distance between the training set members. The text may be interpreted to obtain the metadata. The metadata may be stored in association with the portion of the video stream. The metadata may provide information regarding the highlights, and may be presented concurrently with playback of the highlights.
Type: Application
Filed: May 14, 2019
Publication date: November 21, 2019
Inventors: Mihailo Stojancic, Warren Packard
-
Patent number: 9510044
Abstract: Content segmentation, categorization and identification methods are described. Content tracking approaches are illustrated that are suitable for large scale deployment. Time-aligned applications such as multi-language selection, customized advertisements, second screen services and content monitoring applications can be economically deployed at large scales. A client performs fingerprinting, scene change detection, audio turn detection, and logo detection on incoming video and gathers database search results, logos and text to identify and segment video streams into content, promos, and commercials. A learning engine is configured to learn rules for optimal identification and segmentation at each client for each channel and program. Content sensed at the client site is tracked with reduced computation and applications are executed with timing precision. A user interface for time-aligned publishing of content and subsequent usage and interaction on one or more displays is also described.
Type: Grant
Filed: December 15, 2011
Date of Patent: November 29, 2016
Assignee: GRACENOTE, INC.
Inventors: Jose Pio Pereira, Sunil Suresh Kulkarni, Oleksiy Bolgarov, Prashant Ramanathan, Shashank Merchant, Mihailo Stojancic
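The segmentation into content, promos, and commercials that this abstract describes combines several per-segment cues (fingerprint database hits, logo detection, scene-change rate). A toy rule engine gives the flavor; the specific rules, cue names, and thresholds below are invented, not the learned rules the patent describes.

```python
# Sketch: classify stream segments as content, promo, or commercial
# from per-segment cues. Rules and thresholds are invented examples
# of the kind of rules a learning engine might produce per channel.

def classify_segment(cues):
    if cues.get("db_match"):            # fingerprint matched known content
        return "content"
    if not cues.get("logo_present"):    # broadcasters often drop the logo in ads
        return "commercial"
    if cues.get("scene_changes_per_min", 0) > 20:
        return "promo"                  # fast cutting with the logo still on screen
    return "content"

stream = [
    {"db_match": True,  "logo_present": True,  "scene_changes_per_min": 5},
    {"db_match": False, "logo_present": False, "scene_changes_per_min": 30},
    {"db_match": False, "logo_present": True,  "scene_changes_per_min": 40},
]
print([classify_segment(s) for s in stream])  # ['content', 'commercial', 'promo']
```

The point of the learning engine in the abstract is that such rules differ per channel and program, so they are fitted at each client rather than hard-coded as here.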
-
Patent number: 9436689
Abstract: An efficient large scale search system for video and multi-media content using a distributed database and search, and tiered search servers is described. Selected content is stored at the distributed local database and tier1 search server(s). Content matching frequent queries, and frequent unidentified queries are cached at various levels in the search system. Content is classified using feature descriptors and geographical aspects, at feature level and in time segments. Queries not identified at clients and tier1 search server(s) are queried against tier2 or lower search server(s). Search servers use classification and geographical partitioning to reduce search cost. Methods for content tracking and local content searching are executed on clients. The client performs local search, monitoring and/or tracking of the query content with the reference content and local search with a database of reference fingerprints.
Type: Grant
Filed: January 7, 2016
Date of Patent: September 6, 2016
Assignee: Gracenote, Inc.
Inventors: Jose Pio Pereira, Shashank Merchant, Prashant Ramanathan, Sunil Suresh Kulkarni, Mihailo Stojancic
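The tiered lookup this abstract describes (client cache, then tier1, then tier2 or lower) can be sketched as a cascade of indexes where answers are cached on the way back so frequent queries never leave the client. The class name, tier contents, and fingerprint keys below are illustrative only.

```python
# Sketch: tiered fingerprint lookup with client-side caching.
# tier1 holds popular content; tier2 stands in for the full index.

class TieredSearch:
    def __init__(self, tier1, tier2):
        self.cache = {}      # client-side cache of frequent answers
        self.tier1 = tier1   # small index of popular content
        self.tier2 = tier2   # larger, slower fallback index

    def lookup(self, fingerprint):
        for source in (self.cache, self.tier1, self.tier2):
            if fingerprint in source:
                self.cache[fingerprint] = source[fingerprint]  # promote to cache
                return source[fingerprint]
        return None          # unidentified; the abstract notes these can be cached too

search = TieredSearch(tier1={"fp-a": "popular show"},
                      tier2={"fp-b": "archive film"})
print(search.lookup("fp-b"))   # served by tier 2 on the first query
print("fp-b" in search.cache)  # True -> the repeat query never hits a server
```

Geographical and feature-level partitioning, as the abstract notes, would further shard each tier so a query only touches the relevant partition.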
-
Publication number: 20160132500
Abstract: An efficient large scale search system for video and multi-media content using a distributed database and search, and tiered search servers is described. Selected content is stored at the distributed local database and tier1 search server(s). Content matching frequent queries, and frequent unidentified queries are cached at various levels in the search system. Content is classified using feature descriptors and geographical aspects, at feature level and in time segments. Queries not identified at clients and tier1 search server(s) are queried against tier2 or lower search server(s). Search servers use classification and geographical partitioning to reduce search cost. Methods for content tracking and local content searching are executed on clients. The client performs local search, monitoring and/or tracking of the query content with the reference content and local search with a database of reference fingerprints.
Type: Application
Filed: January 7, 2016
Publication date: May 12, 2016
Applicant: Gracenote, Inc.
Inventors: Jose Pio Pereira, Shashank Merchant, Prashant Ramanathan, Sunil Suresh Kulkarni, Mihailo Stojancic