Patents by Inventor Isselmou Ould Dellahy

Isselmou Ould Dellahy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230071845
    Abstract: Systems, methods, and devices for an interactive viewing experience by detecting on-screen data are disclosed. One or more frames of video data are analyzed to detect regions in the visual video content that contain text. A character recognition operation can be performed on the regions to generate textual data. Based on the textual data and the regions, a graphical user interface (GUI) definition can be generated. The GUI definition can be used to generate a corresponding GUI superimposed onto the visual video content to present users with controls and functionality with which to interact with the text or enhance the video content. Context metadata can be determined from external sources or by analyzing the continuity of audio and visual aspects of the video data. The context metadata can then be used to improve the character recognition or inform the generation of the GUI.
    Type: Application
    Filed: November 15, 2022
    Publication date: March 9, 2023
    Inventors: Isselmou Ould Dellahy, Shivajit Mohapatra, Anthony J. Braskich, Faisal Ishtiaq, Renxiang Li
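The on-screen-data idea in the abstract above (detected text regions driving a GUI overlay) can be sketched in a few lines of Python. This is an illustrative sketch only; the function name, the `bbox`/`text` fields, and the `search` action are assumptions, not details taken from the patent:

```python
# Illustrative sketch (not the patented implementation): given text regions
# detected in a video frame, build a simple GUI-overlay definition that a
# renderer could superimpose on the visual video content.

def build_gui_definition(frame_regions):
    """Map detected text regions to overlay controls.

    frame_regions: list of dicts with 'bbox' (x, y, w, h) and 'text'
    (the output of a hypothetical character-recognition step).
    """
    controls = []
    for region in frame_regions:
        controls.append({
            "bbox": region["bbox"],   # where to draw the overlay control
            "label": region["text"],  # recognized on-screen text
            "action": "search",       # e.g. let the user look up the text
        })
    return {"type": "overlay", "controls": controls}

regions = [{"bbox": (10, 20, 120, 30), "text": "BREAKING NEWS"}]
definition = build_gui_definition(regions)
```

In a real pipeline the context metadata mentioned in the abstract would feed both the recognition step and the choice of controls; here the mapping is fixed for brevity.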
  • Publication number: 20220217450
    Abstract: Systems and methods for determining the location of advertisements in multimedia assets are disclosed. A method includes obtaining an audio signature corresponding to a time period of a multimedia asset, identifying a match between the obtained audio signature and one or more stored audio signatures, comparing programming data of the multimedia assets of the obtained audio signature and the matching audio signatures, and determining whether the time period of the multimedia asset contains an advertisement based on the comparison of the programming data of the multimedia assets of the obtained audio signature and the one or more matching audio signatures. Another method includes identifying matches between a plurality of obtained audio signatures and a plurality of stored audio signatures, and determining whether consecutive time periods of the multimedia asset contain an advertisement based on a number of consecutive matching audio signatures of the plurality of stored audio signatures.
    Type: Application
    Filed: March 23, 2022
    Publication date: July 7, 2022
    Applicant: ARRIS Enterprises LLC
    Inventors: Benedito J. Fonseca, Jr., Faisal Ishtiaq, Anthony J. Braskich, Venugopal Vasudevan, Isselmou Ould Dellahy
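The signature-matching logic in the abstract above can be illustrated roughly as follows. All names and the exact-match rule are assumptions for the sketch, not the claimed method; the intuition is that audio which matches across assets with different programming data likely aired in unrelated programming, i.e. an advertisement:

```python
# Illustrative sketch: decide whether a time period of a multimedia asset is
# an advertisement by matching its audio signature against stored signatures
# and comparing the programming data of the matched assets.

def is_advertisement(obtained, store):
    """obtained: {'signature': ..., 'program': ...} for one time period.
    store: list of {'signature': ..., 'program': ...} entries.
    A matching signature whose programming data differs from the obtained
    asset's suggests the same audio aired in unrelated programming."""
    for entry in store:
        if entry["signature"] == obtained["signature"]:
            if entry["program"] != obtained["program"]:
                return True
    return False
```

The abstract's second method extends this idea to runs of consecutive matching signatures; a counter over consecutive time periods would replace the single-period check here.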
  • Patent number: 11317168
    Abstract: Systems and methods for determining the location of advertisements in multimedia assets are disclosed. A method includes obtaining an audio signature corresponding to a time period of a multimedia asset, identifying a match between the obtained audio signature and one or more stored audio signatures, comparing programming data of the multimedia assets of the obtained audio signature and the matching audio signatures, and determining whether the time period of the multimedia asset contains an advertisement based on the comparison of the programming data of the multimedia assets of the obtained audio signature and the one or more matching audio signatures. Another method includes identifying matches between a plurality of obtained audio signatures and a plurality of stored audio signatures, and determining whether consecutive time periods of the multimedia asset contain an advertisement based on a number of consecutive matching audio signatures of the plurality of stored audio signatures.
    Type: Grant
    Filed: August 12, 2016
    Date of Patent: April 26, 2022
    Assignee: ARRIS Enterprises LLC
    Inventors: Benedito J. Fonseca, Jr., Faisal Ishtiaq, Anthony J. Braskich, Venugopal Vasudevan, Isselmou Ould Dellahy
  • Patent number: 10148928
    Abstract: Systems and methods for generating alerts and enhanced viewing experience features using on-screen data are disclosed. Textual data corresponding to on-screen text is determined from the visual content of video data. The textual data is associated with corresponding regions and frames of the video data in which the corresponding on-screen text was detected. Users can select regions in the frames of the visual content to monitor for a particular triggering item (e.g., a triggering word, name, or phrase). During playback of the video data, the textual data associated with the selected regions in the frames can be monitored for the triggering item. When the triggering item is detected in the textual data, an alert can be generated. Alternatively, the textual data for the selected region can be extracted to compile supplemental information that can be rendered over the playback of the video data or over other video data.
    Type: Grant
    Filed: May 22, 2017
    Date of Patent: December 4, 2018
    Assignee: ARRIS Enterprises LLC
    Inventors: Stephen P. Emeott, Kevin L. Baum, Bhavan Gandhi, Faisal Ishtiaq, Isselmou Ould Dellahy
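The region-monitoring step described in the abstract above can be sketched as a simple scan over per-frame textual data. This is a minimal illustration with assumed names; the real system would run against recognized on-screen text during playback:

```python
# Illustrative sketch: monitor the textual data of a user-selected region
# across frames and report the frames where a triggering item appears.

def monitor_region(frames, selected_region, trigger):
    """frames: list of {region_id: recognized_text} mappings, one per frame.
    Returns the frame indexes at which the trigger item appears in the
    selected region (case-insensitive substring match, an assumption)."""
    alerts = []
    for index, frame_text in enumerate(frames):
        text = frame_text.get(selected_region, "")
        if trigger.lower() in text.lower():
            alerts.append(index)  # an alert would be generated here
    return alerts
```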
  • Patent number: 9888279
    Abstract: A method receives video content and metadata associated with the video content. The method then extracts features of the video content based on the metadata. Portions of the visual, audio, and textual features are fused into composite features that include multiple features from the visual, audio, and textual features. A set of video segments of the video content is identified based on the composite features of the video content. Also, the segments may be identified based on a user query.
    Type: Grant
    Filed: September 11, 2014
    Date of Patent: February 6, 2018
    Assignee: ARRIS Enterprises LLC
    Inventors: Faisal Ishtiaq, Benedito J. Fonseca, Jr., Kevin L. Baum, Anthony J. Braskich, Stephen P. Emeott, Bhavan Gandhi, Renxiang Li, Alfonso Martinez Smith, Michael L. Needham, Isselmou Ould Dellahy
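The fusion-and-selection flow in the abstract above can be illustrated schematically. The feature representation and the query predicate here are assumptions for the sketch, not taken from the patent:

```python
# Illustrative sketch: fuse per-segment visual, audio, and textual features
# into composite features, then identify segments matching a query predicate.

def fuse_features(visual, audio, textual):
    """Zip per-segment features from each modality into composite features."""
    return [
        {"visual": v, "audio": a, "textual": t}
        for v, a, t in zip(visual, audio, textual)
    ]

def select_segments(composites, predicate):
    """Identify segment indexes whose composite features satisfy a query."""
    return [i for i, c in enumerate(composites) if predicate(c)]
```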
  • Publication number: 20170257612
    Abstract: Systems and methods for generating alerts and enhanced viewing experience features using on-screen data are disclosed. Textual data corresponding to on-screen text is determined from the visual content of video data. The textual data is associated with corresponding regions and frames of the video data in which the corresponding on-screen text was detected. Users can select regions in the frames of the visual content to monitor for a particular triggering item (e.g., a triggering word, name, or phrase). During playback of the video data, the textual data associated with the selected regions in the frames can be monitored for the triggering item. When the triggering item is detected in the textual data, an alert can be generated. Alternatively, the textual data for the selected region can be extracted to compile supplemental information that can be rendered over the playback of the video data or over other video data.
    Type: Application
    Filed: May 22, 2017
    Publication date: September 7, 2017
    Inventors: Stephen P. Emeott, Kevin L. Baum, Bhavan Gandhi, Faisal Ishtiaq, Isselmou Ould Dellahy
  • Patent number: 9693030
    Abstract: Systems and methods for generating alerts and enhanced viewing experience features using on-screen data are disclosed. Textual data corresponding to on-screen text is determined from the visual content of video data. The textual data is associated with corresponding regions and frames of the video data in which the corresponding on-screen text was detected. Users can select regions in the frames of the visual content to monitor for a particular triggering item (e.g., a triggering word, name, or phrase). During playback of the video data, the textual data associated with the selected regions in the frames can be monitored for the triggering item. When the triggering item is detected in the textual data, an alert can be generated. Alternatively, the textual data for the selected region can be extracted to compile supplemental information that can be rendered over the playback of the video data or over other video data.
    Type: Grant
    Filed: July 28, 2014
    Date of Patent: June 27, 2017
    Assignee: ARRIS Enterprises LLC
    Inventors: Stephen P. Emeott, Kevin L. Baum, Bhavan Gandhi, Faisal Ishtiaq, Isselmou Ould Dellahy
  • Patent number: 9596491
    Abstract: Methods of monitoring segment replacement within a multimedia stream are provided. A multimedia stream having a replacement segment spliced therein is evaluated by extracting at least one of video, text, and audio features from the multimedia stream adjacent to a beginning or ending of the replacement segment, and the extracted features are analyzed to detect if a residual of a segment replaced by the replacement segment exists within the multimedia stream. Methods of ad replacement and a system for performing the above methods are also disclosed.
    Type: Grant
    Filed: December 19, 2014
    Date of Patent: March 14, 2017
    Assignee: ARRIS Enterprises, Inc.
    Inventors: Benedito J. Fonseca, Jr., Isselmou Ould Dellahy, Renxiang Li, Stephen P. Emeott
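The residual-detection check in the abstract above reduces to a simple idea that can be sketched as follows. The token-based feature model here is an assumption for illustration; the patent covers video, text, and audio features:

```python
# Illustrative sketch: after a replacement segment is spliced into a stream,
# inspect features extracted adjacent to the splice boundary for leftover
# content ("residual") of the segment that was replaced.

def has_residual(boundary_features, replaced_segment_markers):
    """boundary_features: features (here, a text string) extracted from the
    stream adjacent to the replacement boundary.
    replaced_segment_markers: known markers of the replaced segment.
    Returns True if any marker survives, indicating an imperfect splice."""
    return any(marker in boundary_features for marker in replaced_segment_markers)
```

A monitoring system could run this check at both the beginning and ending boundaries of each spliced replacement and flag streams where a residual is found.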
  • Publication number: 20170048596
    Abstract: Systems and methods for determining the location of advertisements in multimedia assets are disclosed. A method includes obtaining an audio signature corresponding to a time period of a multimedia asset, identifying a match between the obtained audio signature and one or more stored audio signatures, comparing programming data of the multimedia assets of the obtained audio signature and the matching audio signatures, and determining whether the time period of the multimedia asset contains an advertisement based on the comparison of the programming data of the multimedia assets of the obtained audio signature and the one or more matching audio signatures. Another method includes identifying matches between a plurality of obtained audio signatures and a plurality of stored audio signatures, and determining whether consecutive time periods of the multimedia asset contain an advertisement based on a number of consecutive matching audio signatures of the plurality of stored audio signatures.
    Type: Application
    Filed: August 12, 2016
    Publication date: February 16, 2017
    Inventors: Benedito J. Fonseca, Jr., Faisal Ishtiaq, Anthony J. Braskich, Venugopal Vasudevan, Isselmou Ould Dellahy
  • Publication number: 20160182922
    Abstract: Methods of monitoring segment replacement within a multimedia stream are provided. A multimedia stream having a replacement segment spliced therein is evaluated by extracting at least one of video, text, and audio features from the multimedia stream adjacent to a beginning or ending of the replacement segment, and the extracted features are analyzed to detect if a residual of a segment replaced by the replacement segment exists within the multimedia stream. Methods of ad replacement and a system for performing the above methods are also disclosed.
    Type: Application
    Filed: December 19, 2014
    Publication date: June 23, 2016
    Inventors: Benedito J. Fonseca, Jr., Isselmou Ould Dellahy, Renxiang Li, Stephen P. Emeott
  • Publication number: 20150319510
    Abstract: Systems, methods, and devices for an interactive viewing experience by detecting on-screen data are disclosed. One or more frames of video data are analyzed to detect regions in the visual video content that contain text. A character recognition operation can be performed on the regions to generate textual data. Based on the textual data and the regions, a graphical user interface (GUI) definition can be generated. The GUI definition can be used to generate a corresponding GUI superimposed onto the visual video content to present users with controls and functionality with which to interact with the text or enhance the video content. Context metadata can be determined from external sources or by analyzing the continuity of audio and visual aspects of the video data. The context metadata can then be used to improve the character recognition or inform the generation of the GUI.
    Type: Application
    Filed: April 30, 2014
    Publication date: November 5, 2015
    Applicant: General Instrument Corporation
    Inventors: Isselmou Ould Dellahy, Shivajit Mohapatra, Anthony J. Braskich, Faisal Ishtiaq, Renxiang Li
  • Publication number: 20150082349
    Abstract: A method receives video content and metadata associated with the video content. The method then extracts features of the video content based on the metadata. Portions of the visual, audio, and textual features are fused into composite features that include multiple features from the visual, audio, and textual features. A set of video segments of the video content is identified based on the composite features of the video content. Also, the segments may be identified based on a user query.
    Type: Application
    Filed: September 11, 2014
    Publication date: March 19, 2015
    Inventors: Faisal Ishtiaq, Benedito J. Fonseca, Jr., Kevin L. Baum, Anthony J. Braskich, Stephen P. Emeott, Bhavan Gandhi, Renxiang Li, Alfonso Martinez Smith, Michael L. Needham, Isselmou Ould Dellahy
  • Publication number: 20150070587
    Abstract: Systems and methods for generating alerts and enhanced viewing experience features using on-screen data are disclosed. Textual data corresponding to on-screen text is determined from the visual content of video data. The textual data is associated with corresponding regions and frames of the video data in which the corresponding on-screen text was detected. Users can select regions in the frames of the visual content to monitor for a particular triggering item (e.g., a triggering word, name, or phrase). During playback of the video data, the textual data associated with the selected regions in the frames can be monitored for the triggering item. When the triggering item is detected in the textual data, an alert can be generated. Alternatively, the textual data for the selected region can be extracted to compile supplemental information that can be rendered over the playback of the video data or over other video data.
    Type: Application
    Filed: July 28, 2014
    Publication date: March 12, 2015
    Inventors: Stephen P. Emeott, Kevin L. Baum, Bhavan Gandhi, Faisal Ishtiaq, Isselmou Ould Dellahy
  • Publication number: 20140028917
    Abstract: Disclosed are methods and apparatus for displaying multimedia feeds. The method comprises receiving a plurality of multimedia feeds and, for each of the plurality of multimedia feeds, acquiring a value of a metric and displaying, on a common display, the plurality of multimedia feeds. The metric is variable and its value for a particular multimedia feed and for a particular time is dependent upon either events occurring within that particular multimedia feed at or before that particular time or upon a rating (by one or more entities), at that particular time, of that particular multimedia feed. The multimedia feeds are displayed on the common display such that a first feed is displayed in a manner different from the manner of display of a second feed, the first feed having a first metric value, the second feed having a second metric value, and the first and second metric values being different.
    Type: Application
    Filed: July 30, 2012
    Publication date: January 30, 2014
    Applicant: GENERAL INSTRUMENT CORPORATION
    Inventors: Alfonso Martinez Smith, Kevin L. Baum, Anthony J. Braskich, Stephen P. Emeott, Faisal Ishtiaq, Renxiang Li, Isselmou Ould Dellahy
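The metric-driven display idea in the abstract above can be sketched under assumed names. The tile-size rule here is purely illustrative; the patent only requires that feeds with different metric values be displayed in different manners:

```python
# Illustrative sketch: assign each multimedia feed a variable metric value
# and derive a different display treatment for feeds whose values differ,
# e.g. a larger tile on the common display for the highest-metric feed.

def layout_feeds(feeds):
    """feeds: list of {'name': ..., 'metric': ...}. Orders feeds by metric
    (descending) and gives the top-ranked feed a larger tile."""
    ordered = sorted(feeds, key=lambda f: f["metric"], reverse=True)
    layout = []
    for rank, feed in enumerate(ordered):
        size = "large" if rank == 0 else "small"
        layout.append({"name": feed["name"], "tile": size})
    return layout
```

Because the metric is time-varying (driven by in-feed events or ratings), a real display would recompute this layout as metric values change.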
  • Publication number: 20130144725
    Abstract: Systems and methods are provided for presenting content to a user. An exemplary method involves establishing a relationship between a first device and the user, wherein, based on the relationship, one or more instances of secondary content are automatically excluded from display by the first device while primary content is displayed by the first device. The method continues by presenting an instance of secondary content to the user in a manner that is influenced by the relationship.
    Type: Application
    Filed: December 2, 2011
    Publication date: June 6, 2013
    Applicant: GENERAL INSTRUMENT CORPORATION
    Inventors: Renxiang Li, Faisal Ishtiaq, Nitya Narasimhan, Michael L. Needham, Isselmou Ould Dellahy
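The relationship-based exclusion described in the last abstract above can be sketched as a filter over secondary-content instances. The field names and the `style` treatment are assumptions for illustration, not details from the patent:

```python
# Illustrative sketch: a relationship between a first device and the user
# excludes some secondary-content instances while primary content is
# displayed, and influences how the remaining instances are presented.

def filter_secondary_content(items, relationship):
    """items: list of {'id': ...} secondary-content instances.
    relationship: {'excluded_ids': set, 'presentation_style': str}.
    Drops excluded instances; presents the rest in a manner influenced
    by the relationship (here, a presentation style tag)."""
    shown = []
    for item in items:
        if item["id"] in relationship["excluded_ids"]:
            continue  # automatically excluded from display on the device
        shown.append({**item, "style": relationship["presentation_style"]})
    return shown
```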