Patents by Inventor Mihailo Stojancic

Mihailo Stojancic has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11922968
    Abstract: A boundary of a highlight of audiovisual content depicting an event is identified. The audiovisual content may be a broadcast, such as a television broadcast of a sporting event. The highlight may be a segment of the audiovisual content deemed to be of particular interest. Audio data for the audiovisual content is stored, and the audio data is automatically analyzed to detect one or more audio events indicative of one or more occurrences to be included in the highlight. Each audio event may be a brief, high-energy audio burst such as the sound made by a tennis serve. A time index within the audiovisual content, before or after the audio event, may be designated as the boundary, which may be the beginning or end of the highlight.
    Type: Grant
    Filed: February 25, 2022
    Date of Patent: March 5, 2024
    Assignee: STATS LLC
    Inventors: Mihailo Stojancic, Warren Packard
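The audio-event boundary idea above can be sketched as follows: flag brief frames whose energy spikes far above the recording's typical level, then offset from the event time to get a highlight boundary. The frame length, energy ratio, and offset are illustrative assumptions, not values from the patent.

```python
import numpy as np

def detect_audio_events(samples, rate, frame_ms=20, energy_ratio=4.0):
    """Return times (s) of brief high-energy bursts in a mono signal.

    A frame is flagged when its RMS energy exceeds `energy_ratio` times
    the median frame energy -- a stand-in for the patent's audio-event
    detector (all thresholds here are illustrative assumptions).
    """
    frame = int(rate * frame_ms / 1000)
    n = len(samples) // frame
    frames = np.asarray(samples[: n * frame], dtype=float).reshape(n, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    thresh = energy_ratio * np.median(rms)
    return [i * frame / rate for i, e in enumerate(rms) if e > thresh]

def highlight_boundary(event_time, offset_s=-2.0):
    """Designate a time index before (or after) the event as the boundary."""
    return max(0.0, event_time + offset_s)
```

A real detector would work on decoded broadcast audio; the point here is only the shape of the pipeline: frame-level energy, a relative threshold, then a time index derived from the event.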
  • Publication number: 20230230377
    Abstract: Metadata for one or more highlights of a video stream may be extracted from one or more card images embedded in the video stream. The highlights may be segments of the video stream, such as a broadcast of a sporting event, that are of particular interest. According to one method, video frames of the video stream are stored. One or more information cards embedded in a decoded video frame may be detected by analyzing one or more predetermined video frame regions. Image segmentation, edge detection, and/or closed contour identification may then be performed on identified video frame region(s). Further processing may include obtaining a minimum rectangular perimeter area enclosing all remaining segments, which may then be further processed to determine precise boundaries of information card(s). The card image(s) may be analyzed to obtain metadata, which may be stored in association with at least one of the video frames.
    Type: Application
    Filed: March 23, 2023
    Publication date: July 20, 2023
    Applicant: STATS LLC
    Inventors: Mihailo Stojancic, Warren Packard, Dennis Kanygin
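The card-detection pipeline above (predetermined region, segmentation, then a minimum rectangular perimeter around the remaining segments) can be illustrated with a simple sketch. Segmentation here is a crude intensity threshold and the "minimum rectangular perimeter" is the tight bounding box of segmented pixels; both are simplifications of the patented processing, not its actual method.

```python
import numpy as np

def locate_card(frame, region):
    """Find the bounding box of a bright information card inside a
    predetermined frame region.

    `frame` is a 2-D grayscale array; `region` is (top, left, h, w).
    Returns (y0, x0, y1, x1) in full-frame coordinates, or None if no
    card-like pixels are found.
    """
    top, left, h, w = region
    roi = frame[top:top + h, left:left + w]
    mask = roi > roi.mean() + roi.std()        # crude segmentation
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return (top + ys.min(), left + xs.min(),   # minimal enclosing rectangle
            top + ys.max(), left + xs.max())
```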
  • Publication number: 20230222797
    Abstract: One or more highlights of a video stream may be identified. The highlights may be segments of a video stream, such as a broadcast of a sporting event, that are of particular interest to one or more users. According to one method, at least a portion of the video stream may be stored. The portion of the video stream may be compared with templates of a template database to identify the one or more highlights. Each highlight may be a subset of the video stream that is deemed likely to match the one or more templates. The highlights, an identifier that identifies each of the highlights within the video stream, and/or metadata pertaining particularly to the one or more highlights may be stored to facilitate playback of the highlights for the users.
    Type: Application
    Filed: February 27, 2023
    Publication date: July 13, 2023
    Applicant: STATS LLC
    Inventors: Mihailo Stojancic, Warren Packard, Dennis Kanygin
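The template-database comparison above can be sketched as a distance test between per-segment feature vectors and stored templates. The feature representation and threshold are illustrative assumptions; the patent does not specify this particular metric.

```python
import numpy as np

def match_highlights(stream_features, templates, max_dist=0.5):
    """Compare per-segment feature vectors of a video stream against a
    template database; return (segment_index, template_id) pairs whose
    Euclidean distance falls below `max_dist`, i.e. segments deemed
    likely to match a template.
    """
    matches = []
    for i, feat in enumerate(stream_features):
        for tid, tmpl in templates.items():
            if np.linalg.norm(np.asarray(feat, float) - np.asarray(tmpl, float)) < max_dist:
                matches.append((i, tid))
    return matches
```

Matched (segment, template) pairs would then be stored alongside an identifier and metadata to drive later playback, as the abstract describes.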
  • Patent number: 11615621
    Abstract: Metadata for one or more highlights of a video stream may be extracted from one or more card images embedded in the video stream. The highlights may be segments of the video stream, such as a broadcast of a sporting event, that are of particular interest. According to one method, video frames of the video stream are stored. One or more information cards embedded in a decoded video frame may be detected by analyzing one or more predetermined video frame regions. Image segmentation, edge detection, and/or closed contour identification may then be performed on identified video frame region(s). Further processing may include obtaining a minimum rectangular perimeter area enclosing all remaining segments, which may then be further processed to determine precise boundaries of information card(s). The card image(s) may be analyzed to obtain metadata, which may be stored in association with at least one of the video frames.
    Type: Grant
    Filed: October 4, 2021
    Date of Patent: March 28, 2023
    Assignee: STATS LLC
    Inventors: Mihailo Stojancic, Warren Packard, Dennis Kanygin
  • Patent number: 11594028
    Abstract: One or more highlights of a video stream may be identified. The highlights may be segments of a video stream, such as a broadcast of a sporting event, that are of particular interest to one or more users. According to one method, at least a portion of the video stream may be stored. The portion of the video stream may be compared with templates of a template database to identify the one or more highlights. Each highlight may be a subset of the video stream that is deemed likely to match the one or more templates. The highlights, an identifier that identifies each of the highlights within the video stream, and/or metadata pertaining particularly to the one or more highlights may be stored to facilitate playback of the highlights for the users.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: February 28, 2023
    Assignee: STATS LLC
    Inventors: Mihailo Stojancic, Warren Packard, Dennis Kanygin
  • Publication number: 20220327829
    Abstract: Metadata for highlights of a video stream is extracted from card images embedded in the video stream. The highlights may be segments of a video stream, such as a broadcast of a sporting event, that are of particular interest to one or more users. Card images embedded in video frames of the video stream are identified and processed to extract text. The text characters may be recognized by applying a machine-learned model trained with a set of characters extracted from card images embedded in sports television programming contents. The training set of character vectors may be pre-processed to maximize metric distance between the training set members. The text may be interpreted to obtain the metadata. The metadata may be stored in association with the portion of the video stream. The metadata may provide information regarding the highlights, and may be presented concurrently with playback of the highlights.
    Type: Application
    Filed: June 24, 2022
    Publication date: October 13, 2022
    Applicant: STATS LLC
    Inventors: Mihailo Stojancic, Warren Packard
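One simple reading of "pre-processing the training set to maximize metric distance between the training set members" is greedy farthest-point selection of character prototypes, followed by nearest-prototype recognition. This is a hedged interpretation, not the patented training procedure.

```python
import numpy as np

def spread_prototypes(vectors, k):
    """Greedily pick `k` vectors whose pairwise distances are
    (approximately) maximized: start from the first vector, then
    repeatedly add the vector farthest from all chosen ones."""
    vecs = [np.asarray(v, float) for v in vectors]
    chosen = [0]
    while len(chosen) < k:
        dists = [min(np.linalg.norm(v - vecs[c]) for c in chosen)
                 for v in vecs]
        chosen.append(int(np.argmax(dists)))
    return chosen

def recognize(char_vec, prototypes, labels):
    """Nearest-prototype character classification."""
    d = [np.linalg.norm(np.asarray(char_vec, float) - np.asarray(p, float))
         for p in prototypes]
    return labels[int(np.argmin(d))]
```

Spreading prototypes apart in the metric space makes the nearest-neighbor decision boundaries more robust, which is the plausible motivation behind the pre-processing step the abstract mentions.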
  • Patent number: 11373404
    Abstract: Metadata for highlights of a video stream is extracted from card images embedded in the video stream. The highlights may be segments of a video stream, such as a broadcast of a sporting event, that are of particular interest to one or more users. Card images embedded in video frames of the video stream are identified and processed to extract text. The text characters may be recognized by applying a machine-learned model trained with a set of characters extracted from card images embedded in sports television programming contents. The training set of character vectors may be pre-processed to maximize metric distance between the training set members. The text may be interpreted to obtain the metadata. The metadata may be stored in association with the portion of the video stream. The metadata may provide information regarding the highlights, and may be presented concurrently with playback of the highlights.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: June 28, 2022
    Assignee: STATS LLC
    Inventors: Mihailo Stojancic, Warren Packard
  • Publication number: 20220180892
    Abstract: A boundary of a highlight of audiovisual content depicting an event is identified. The audiovisual content may be a broadcast, such as a television broadcast of a sporting event. The highlight may be a segment of the audiovisual content deemed to be of particular interest. Audio data for the audiovisual content is stored, and the audio data is automatically analyzed to detect one or more audio events indicative of one or more occurrences to be included in the highlight. Each audio event may be a brief, high-energy audio burst such as the sound made by a tennis serve. A time index within the audiovisual content, before or after the audio event, may be designated as the boundary, which may be the beginning or end of the highlight.
    Type: Application
    Filed: February 25, 2022
    Publication date: June 9, 2022
    Applicant: STATS LLC
    Inventors: Mihailo Stojancic, Warren Packard
  • Patent number: 11264048
    Abstract: A boundary of a highlight of audiovisual content depicting an event is identified. The audiovisual content may be a broadcast, such as a television broadcast of a sporting event. The highlight may be a segment of the audiovisual content deemed to be of particular interest. Audio data for the audiovisual content is stored, and the audio data is automatically analyzed to detect one or more audio events indicative of one or more occurrences to be included in the highlight. Each audio event may be a brief, high-energy audio burst such as the sound made by a tennis serve. A time index within the audiovisual content, before or after the audio event, may be designated as the boundary, which may be the beginning or end of the highlight.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: March 1, 2022
    Assignee: STATS LLC
    Inventors: Mihailo Stojancic, Warren Packard
  • Publication number: 20220027631
    Abstract: Metadata for one or more highlights of a video stream may be extracted from one or more card images embedded in the video stream. The highlights may be segments of the video stream, such as a broadcast of a sporting event, that are of particular interest. According to one method, video frames of the video stream are stored. One or more information cards embedded in a decoded video frame may be detected by analyzing one or more predetermined video frame regions. Image segmentation, edge detection, and/or closed contour identification may then be performed on identified video frame region(s). Further processing may include obtaining a minimum rectangular perimeter area enclosing all remaining segments, which may then be further processed to determine precise boundaries of information card(s). The card image(s) may be analyzed to obtain metadata, which may be stored in association with at least one of the video frames.
    Type: Application
    Filed: October 4, 2021
    Publication date: January 27, 2022
    Applicant: STATS LLC
    Inventors: Mihailo Stojancic, Warren Packard, Dennis Kanygin
  • Patent number: 11138438
    Abstract: Metadata for one or more highlights of a video stream may be extracted from one or more card images embedded in the video stream. The highlights may be segments of the video stream, such as a broadcast of a sporting event, that are of particular interest. According to one method, video frames of the video stream are stored. One or more information cards embedded in a decoded video frame may be detected by analyzing one or more predetermined video frame regions. Image segmentation, edge detection, and/or closed contour identification may then be performed on identified video frame region(s). Further processing may include obtaining a minimum rectangular perimeter area enclosing all remaining segments, which may then be further processed to determine precise boundaries of information card(s). The card image(s) may be analyzed to obtain metadata, which may be stored in association with at least one of the video frames.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: October 5, 2021
    Assignee: STATS LLC
    Inventors: Mihailo Stojancic, Warren Packard, Dennis Kanygin
  • Patent number: 11025985
    Abstract: Metadata for highlights of audiovisual content depicting a sporting event or other event are extracted from audiovisual content. The highlights may be segments of the content, such as a broadcast of a sporting event, that are of particular interest. Audio data for the audiovisual content is stored, and portions of the audio data indicating crowd excitement (noise) are automatically identified by analyzing an audio signal in the joint time and frequency domains. Multiple indicators are derived and subsequently processed to detect, validate, and render occurrences of crowd noise. Metadata are automatically generated, including time of occurrence, level of noise (excitement), and duration of cheering. Metadata may be stored, comprising at least a time index indicating a time, within the audiovisual content, at which each of the portions occurs. Periods of intense crowd noise may be used to identify highlights and/or to indicate crowd excitement during viewing of a highlight.
    Type: Grant
    Filed: May 23, 2019
    Date of Patent: June 1, 2021
    Assignee: STATS LLC
    Inventors: Mihailo Stojancic, Warren Packard
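The crowd-noise metadata described above (time of occurrence, level, duration of cheering) can be sketched with a single windowed-energy indicator. The real system combines multiple joint time-frequency indicators; the window length and thresholds below are assumptions for illustration.

```python
import numpy as np

def crowd_noise_metadata(samples, rate, win_s=0.5, level_db=-20.0, min_dur_s=2.0):
    """Detect sustained loud periods (a proxy for crowd cheering) and
    emit metadata records: start time, peak level (dBFS), duration."""
    win = int(rate * win_s)
    n = len(samples) // win
    frames = np.asarray(samples[: n * win], float).reshape(n, win)
    db = 20 * np.log10(np.sqrt((frames ** 2).mean(axis=1)) + 1e-12)
    loud = db > level_db
    records, i = [], 0
    while i < n:
        if loud[i]:
            j = i
            while j < n and loud[j]:
                j += 1
            dur = (j - i) * win_s          # run of consecutive loud windows
            if dur >= min_dur_s:
                records.append({"time_s": i * win_s,
                                "level_db": float(db[i:j].max()),
                                "duration_s": dur})
            i = j
        else:
            i += 1
    return records
```

Each record corresponds to the abstract's stored metadata: a time index into the content plus a noise level and cheering duration.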
  • Publication number: 20200037022
    Abstract: A boundary of a highlight of audiovisual content depicting an event is identified. The audiovisual content may be a broadcast, such as a television broadcast of a sporting event. The highlight may be a segment of the audiovisual content deemed to be of particular interest. Audio data for the audiovisual content is stored, and the audio data is automatically analyzed to detect soft-entry points identified as low spectral activity points and/or low volume points in the analyzed audio data. A time index within the audiovisual content, corresponding to the soft-entry point, may be designated as the boundary, which may be the beginning or end of the highlight.
    Type: Application
    Filed: June 13, 2019
    Publication date: January 30, 2020
    Inventors: Mihailo Stojancic, Warren Packard
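The soft-entry idea above (cut a highlight where the audio is quietest, so the edit is least jarring) can be sketched by flagging low-volume windows. The patent also inspects spectral activity, which this sketch omits; the window size and threshold are assumptions.

```python
import numpy as np

def soft_entry_points(samples, rate, win_s=0.25, quiet_db=-40.0):
    """Return times (s) of low-volume windows -- candidate 'soft entry'
    points at which a highlight boundary can be placed."""
    win = int(rate * win_s)
    n = len(samples) // win
    frames = np.asarray(samples[: n * win], float).reshape(n, win)
    db = 20 * np.log10(np.sqrt((frames ** 2).mean(axis=1)) + 1e-12)
    return [i * win_s for i in range(n) if db[i] < quiet_db]
```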
  • Publication number: 20190373310
    Abstract: Metadata for highlights of audiovisual content depicting a sporting event or other event are extracted from audiovisual content. The highlights may be segments of the content, such as a broadcast of a sporting event, that are of particular interest. Audio data for the audiovisual content is stored, and portions of the audio data indicating crowd excitement (noise) are automatically identified by analyzing an audio signal in the joint time and frequency domains. Multiple indicators are derived and subsequently processed to detect, validate, and render occurrences of crowd noise. Metadata are automatically generated, including time of occurrence, level of noise (excitement), and duration of cheering. Metadata may be stored, comprising at least a time index indicating a time, within the audiovisual content, at which each of the portions occurs. Periods of intense crowd noise may be used to identify highlights and/or to indicate crowd excitement during viewing of a highlight.
    Type: Application
    Filed: May 23, 2019
    Publication date: December 5, 2019
    Inventors: Mihailo Stojancic, Warren Packard
  • Publication number: 20190354764
    Abstract: Metadata for one or more highlights of a video stream may be extracted from one or more card images embedded in the video stream. The highlights may be segments of the video stream, such as a broadcast of a sporting event, that are of particular interest. According to one method, video frames of the video stream are stored. One or more information cards embedded in a decoded video frame may be detected by analyzing one or more predetermined video frame regions. Image segmentation, edge detection, and/or closed contour identification may then be performed on identified video frame region(s). Further processing may include obtaining a minimum rectangular perimeter area enclosing all remaining segments, which may then be further processed to determine precise boundaries of information card(s). The card image(s) may be analyzed to obtain metadata, which may be stored in association with at least one of the video frames.
    Type: Application
    Filed: May 14, 2019
    Publication date: November 21, 2019
    Inventors: Mihailo Stojancic, Warren Packard, Dennis Kanygin
  • Publication number: 20190354763
    Abstract: One or more highlights of a video stream may be identified. The highlights may be segments of a video stream, such as a broadcast of a sporting event, that are of particular interest to one or more users. According to one method, at least a portion of the video stream may be stored. The portion of the video stream may be compared with templates of a template database to identify the one or more highlights. Each highlight may be a subset of the video stream that is deemed likely to match the one or more templates. The highlights, an identifier that identifies each of the highlights within the video stream, and/or metadata pertaining particularly to the one or more highlights may be stored to facilitate playback of the highlights for the users.
    Type: Application
    Filed: May 14, 2019
    Publication date: November 21, 2019
    Inventors: Mihailo Stojancic, Warren Packard, Dennis Kanygin
  • Publication number: 20190356948
    Abstract: Metadata for highlights of a video stream is extracted from card images embedded in the video stream. The highlights may be segments of a video stream, such as a broadcast of a sporting event, that are of particular interest to one or more users. Card images embedded in video frames of the video stream are identified and processed to extract text. The text characters may be recognized by applying a machine-learned model trained with a set of characters extracted from card images embedded in sports television programming contents. The training set of character vectors may be pre-processed to maximize metric distance between the training set members. The text may be interpreted to obtain the metadata. The metadata may be stored in association with the portion of the video stream. The metadata may provide information regarding the highlights, and may be presented concurrently with playback of the highlights.
    Type: Application
    Filed: May 14, 2019
    Publication date: November 21, 2019
    Inventors: Mihailo Stojancic, Warren Packard
  • Patent number: 9510044
    Abstract: Content segmentation, categorization and identification methods are described. Content tracking approaches are illustrated that are suitable for large scale deployment. Time-aligned applications such as multi-language selection, customized advertisements, second screen services and content monitoring applications can be economically deployed at large scales. A client performs fingerprinting, scene change detection, audio turn detection, and logo detection on incoming video and gathers database search results, logos and text to identify and segment video streams into content, promos, and commercials. A learning engine is configured to learn rules for optimal identification and segmentation at each client for each channel and program. Content sensed at the client site is tracked with reduced computation and applications are executed with timing precision. A user interface for time-aligned publishing of content and subsequent usage and interaction on one or more displays is also described.
    Type: Grant
    Filed: December 15, 2011
    Date of Patent: November 29, 2016
    Assignee: Gracenote, Inc.
    Inventors: Jose Pio Pereira, Sunil Suresh Kulkarni, Oleksiy Bolgarov, Prashant Ramanathan, Shashank Merchant, Mihailo Stojancic
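Client-side fingerprinting, the core primitive in the content-tracking system above, can be sketched with a compact frame hash: downsample a frame to a grid of mean intensities and threshold each cell against the overall mean. This difference-hash-style descriptor is an illustration, not the patented fingerprint.

```python
import numpy as np

def frame_fingerprint(frame, grid=8):
    """Compute a compact binary fingerprint of a grayscale frame as a
    flattened boolean array of grid*grid cells (True = brighter than
    the frame's mean cell intensity)."""
    h, w = frame.shape
    cells = np.array([[frame[y * h // grid:(y + 1) * h // grid,
                             x * w // grid:(x + 1) * w // grid].mean()
                       for x in range(grid)] for y in range(grid)])
    return (cells > cells.mean()).flatten()

def hamming(fp1, fp2):
    """Bit distance between two fingerprints; small values indicate the
    same underlying content, enabling matching against a reference DB."""
    return int(np.count_nonzero(fp1 != fp2))
```

Because the threshold is relative to the frame's own mean, the fingerprint is stable under uniform brightness shifts, which is the kind of robustness a broadcast-tracking client needs.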
  • Patent number: 9436689
    Abstract: An efficient large scale search system for video and multi-media content using a distributed database and search, and tiered search servers is described. Selected content is stored at the distributed local database and tier1 search server(s). Content matching frequent queries, and frequent unidentified queries are cached at various levels in the search system. Content is classified using feature descriptors and geographical aspects, at feature level and in time segments. Queries not identified at clients and tier1 search server(s) are queried against tier2 or lower search server(s). Search servers use classification and geographical partitioning to reduce search cost. Methods for content tracking and local content searching are executed on clients. The client performs local search, monitoring and/or tracking of the query content with the reference content and local search with a database of reference fingerprints.
    Type: Grant
    Filed: January 7, 2016
    Date of Patent: September 6, 2016
    Assignee: Gracenote, Inc.
    Inventors: Jose Pio Pereira, Shashank Merchant, Prashant Ramanathan, Sunil Suresh Kulkarni, Mihailo Stojancic
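The tiered routing described above (local cache first, then tier-1, then lower tiers, with frequent answers cached near the client) can be sketched as a toy lookup class. Databases are plain dicts here; the class names and structure are assumptions illustrating the routing idea, not the patented partitioning scheme.

```python
class TieredSearch:
    """Toy tiered fingerprint lookup: the client cache is consulted
    first, then tier-1, then tier-2 reference databases; hits on lower
    tiers are cached locally so frequent queries stay cheap."""

    def __init__(self, tier1, tier2):
        self.cache = {}
        self.tiers = [tier1, tier2]

    def query(self, fingerprint):
        """Return (match, source), where source names the level that
        answered: 'cache', 'tier1', 'tier2', or 'miss'."""
        if fingerprint in self.cache:
            return self.cache[fingerprint], "cache"
        for level, db in enumerate(self.tiers, start=1):
            if fingerprint in db:
                self.cache[fingerprint] = db[fingerprint]  # promote hit
                return db[fingerprint], f"tier{level}"
        return None, "miss"
```

A repeated query that initially had to reach tier-2 is answered from the cache the second time, which is the cost-reduction behavior the abstract describes.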
  • Publication number: 20160132500
    Abstract: An efficient large scale search system for video and multi-media content using a distributed database and search, and tiered search servers is described. Selected content is stored at the distributed local database and tier1 search server(s). Content matching frequent queries, and frequent unidentified queries are cached at various levels in the search system. Content is classified using feature descriptors and geographical aspects, at feature level and in time segments. Queries not identified at clients and tier1 search server(s) are queried against tier2 or lower search server(s). Search servers use classification and geographical partitioning to reduce search cost. Methods for content tracking and local content searching are executed on clients. The client performs local search, monitoring and/or tracking of the query content with the reference content and local search with a database of reference fingerprints.
    Type: Application
    Filed: January 7, 2016
    Publication date: May 12, 2016
    Applicant: Gracenote, Inc.
    Inventors: Jose Pio Pereira, Shashank Merchant, Prashant Ramanathan, Sunil Suresh Kulkarni, Mihailo Stojancic