Patents by Inventor Isselmou Ould Dellahy
Isselmou Ould Dellahy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230071845
Abstract: Systems, methods, and devices for an interactive viewing experience by detecting on-screen data are disclosed. One or more frames of video data are analyzed to detect regions in the visual video content that contain text. A character recognition operation can be performed on the regions to generate textual data. Based on the textual data and the regions, a graphical user interface (GUI) definition can be generated. The GUI definition can be used to generate a corresponding GUI superimposed onto the visual video content to present users with controls and functionality with which to interact with the text or enhance the video content. Context metadata can be determined from external sources or by analyzing the continuity of audio and visual aspects of the video data. The context metadata can then be used to improve the character recognition or inform the generation of the GUI.
Type: Application
Filed: November 15, 2022
Publication date: March 9, 2023
Inventors: Isselmou Ould Dellahy, Shivajit Mohapatra, Anthony J. Braskich, Faisal Ishtiaq, Renxiang Li
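A minimal sketch of the GUI-definition step described in the abstract, assuming the text-detection and character-recognition stages have already produced (bounding box, text) pairs for a frame. The control schema and the "search" action are illustrative assumptions, not the patented format.

```python
# Sketch only: map OCR'd on-screen text regions to an overlay-control
# definition that a player could render on top of the video.

def build_gui_definition(frame_id, ocr_regions):
    """Turn recognized on-screen text regions into an overlay-control definition."""
    controls = []
    for bbox, text in ocr_regions:
        controls.append({
            "bbox": bbox,        # (x, y, width, height) in pixels
            "text": text,
            "action": "search",  # e.g. tapping the overlay searches this text
        })
    return {"frame": frame_id, "controls": controls}

regions = [((10, 20, 180, 18), "BREAKING NEWS"), ((10, 42, 60, 18), "NYC")]
gui = build_gui_definition(42, regions)
```

A real system would feed `gui` to a renderer that draws clickable regions over the frame; context metadata could adjust the chosen `action` per region.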
-
Publication number: 20220217450
Abstract: Systems and methods for determining the location of advertisements in multimedia assets are disclosed. A method includes obtaining an audio signature corresponding to a time period of a multimedia asset, identifying a match between the obtained audio signature and one or more stored audio signatures, comparing programming data of the multimedia assets of the obtained audio signature and the matching audio signatures, and determining whether the time period of the multimedia asset contains an advertisement based on the comparison of the programming data of the multimedia assets of the obtained audio signature and the one or more matching audio signatures. Another method includes identifying matches between a plurality of obtained audio signatures and a plurality of stored audio signatures, and determining whether consecutive time periods of the multimedia asset contain an advertisement based on a number of consecutive matching audio signatures of the plurality of stored audio signatures.
Type: Application
Filed: March 23, 2022
Publication date: July 7, 2022
Applicant: ARRIS Enterprises LLC
Inventors: Benedito J. Fonseca, Jr., Faisal Ishtiaq, Anthony J. Braskich, Venugopal Vasudevan, Isselmou Ould Dellahy
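The second method in the abstract can be sketched as follows, under the assumption that an "audio signature" is an opaque, hashable fingerprint computed per fixed time period; the run-length rule and `min_run` threshold are illustrative, not the claimed criteria.

```python
# Sketch: a run of consecutive periods whose audio signatures all match a
# stored signature set is flagged as a likely advertisement.

def flag_ad_periods(obtained, stored, min_run=3):
    """Return indices of periods lying inside a run of >= min_run matches."""
    matches = [sig in stored for sig in obtained]
    flagged, run = set(), []
    for i, matched in enumerate(matches):
        if matched:
            run.append(i)
        else:
            if len(run) >= min_run:
                flagged.update(run)
            run = []
    if len(run) >= min_run:          # close out a run ending at the last period
        flagged.update(run)
    return sorted(flagged)

periods = ["a", "x", "b", "c", "d", "y"]   # per-period fingerprints
stored = {"b", "c", "d"}                   # known ad fingerprints
ad_periods = flag_ad_periods(periods, stored)  # -> [2, 3, 4]
```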
-
Patent number: 11317168
Abstract: Systems and methods for determining the location of advertisements in multimedia assets are disclosed. A method includes obtaining an audio signature corresponding to a time period of a multimedia asset, identifying a match between the obtained audio signature and one or more stored audio signatures, comparing programming data of the multimedia assets of the obtained audio signature and the matching audio signatures, and determining whether the time period of the multimedia asset contains an advertisement based on the comparison of the programming data of the multimedia assets of the obtained audio signature and the one or more matching audio signatures. Another method includes identifying matches between a plurality of obtained audio signatures and a plurality of stored audio signatures, and determining whether consecutive time periods of the multimedia asset contain an advertisement based on a number of consecutive matching audio signatures of the plurality of stored audio signatures.
Type: Grant
Filed: August 12, 2016
Date of Patent: April 26, 2022
Assignee: ARRIS Enterprises LLC
Inventors: Benedito J. Fonseca, Jr., Faisal Ishtiaq, Anthony J. Braskich, Venugopal Vasudevan, Isselmou Ould Dellahy
-
Patent number: 10148928
Abstract: Systems and methods for generating alerts and enhanced viewing experience features using on-screen data are disclosed. Textual data corresponding to on-screen text is determined from the visual content of video data. The textual data is associated with corresponding regions and frames of the video data in which the corresponding on-screen text was detected. Users can select regions in the frames of the visual content to monitor for a particular triggering item (e.g., a triggering word, name, or phrase). During playback of the video data, the textual data associated with the selected regions in the frames can be monitored for the triggering item. When the triggering item is detected in the textual data, an alert can be generated. Alternatively, the textual data for the selected region can be extracted to compile supplemental information that can be rendered over the playback of the video data or over other video data.
Type: Grant
Filed: May 22, 2017
Date of Patent: December 4, 2018
Assignee: ARRIS Enterprises LLC
Inventors: Stephen P. Emeott, Kevin L. Baum, Bhavan Gandhi, Faisal Ishtiaq, Isselmou Ould Dellahy
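The triggering-item monitoring described above can be sketched as follows, assuming each frame's OCR output is modeled as a mapping from a user-selected region name to its recognized text (the region names and data model are assumptions for illustration).

```python
# Sketch: scan the text recognized in a watched screen region, frame by
# frame, and report the frames where a triggering word or phrase appears.

def monitor_for_trigger(frames, watched_region, trigger):
    """Return indices of frames whose watched region contains the trigger."""
    alerts = []
    for idx, region_text in enumerate(frames):
        text = region_text.get(watched_region, "")
        if trigger.lower() in text.lower():
            alerts.append(idx)  # a real player would raise an alert here
    return alerts

frames = [
    {"ticker": "markets steady"},
    {"ticker": "severe weather warning issued"},
    {"ticker": "sports roundup"},
]
hits = monitor_for_trigger(frames, "ticker", "weather")
```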
-
Patent number: 9888279
Abstract: A method receives video content and metadata associated with the video content. The method then extracts features of the video content based on the metadata. Portions of the visual, audio, and textual features are fused into composite features that include multiple features from the visual, audio, and textual features. A set of video segments of the video content is identified based on the composite features of the video content. The segments may also be identified based on a user query.
Type: Grant
Filed: September 11, 2014
Date of Patent: February 6, 2018
Assignee: ARRIS Enterprises LLC
Inventors: Faisal Ishtiaq, Benedito J. Fonseca, Jr., Kevin L. Baum, Anthony J. Braskich, Stephen P. Emeott, Bhavan Gandhi, Renxiang Li, Alfonso Martinez Smith, Michael L. Needham, Isselmou Ould Dellahy
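A toy illustration of fusing per-modality features into composite features and selecting segments against a query. The fusion by concatenation and dot-product scoring are stand-ins chosen for brevity, not the patented method; the assumption is that each modality yields a fixed-length numeric vector per segment.

```python
# Sketch: early fusion of visual/audio/textual feature vectors, then
# query-based segment selection over the composite features.

def fuse(visual, audio, textual):
    """Early fusion: concatenate the three modality vectors."""
    return visual + audio + textual

def select_segments(composite_by_segment, query, threshold):
    """Pick segments whose composite features score above threshold vs. the query."""
    picked = []
    for name, feats in composite_by_segment.items():
        score = sum(a * b for a, b in zip(feats, query))
        if score >= threshold:
            picked.append(name)
    return picked

segments = {
    "intro":     fuse([1.0, 0.0], [0.2], [0.0]),
    "highlight": fuse([0.9, 0.8], [0.9], [1.0]),
}
query = [0.0, 1.0, 1.0, 1.0]  # hypothetical query emphasizing motion/audio/text
picked = select_segments(segments, query, threshold=2.0)
```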
-
Publication number: 20170257612
Abstract: Systems and methods for generating alerts and enhanced viewing experience features using on-screen data are disclosed. Textual data corresponding to on-screen text is determined from the visual content of video data. The textual data is associated with corresponding regions and frames of the video data in which the corresponding on-screen text was detected. Users can select regions in the frames of the visual content to monitor for a particular triggering item (e.g., a triggering word, name, or phrase). During playback of the video data, the textual data associated with the selected regions in the frames can be monitored for the triggering item. When the triggering item is detected in the textual data, an alert can be generated. Alternatively, the textual data for the selected region can be extracted to compile supplemental information that can be rendered over the playback of the video data or over other video data.
Type: Application
Filed: May 22, 2017
Publication date: September 7, 2017
Inventors: Stephen P. Emeott, Kevin L. Baum, Bhavan Gandhi, Faisal Ishtiaq, Isselmou Ould Dellahy
-
Patent number: 9693030
Abstract: Systems and methods for generating alerts and enhanced viewing experience features using on-screen data are disclosed. Textual data corresponding to on-screen text is determined from the visual content of video data. The textual data is associated with corresponding regions and frames of the video data in which the corresponding on-screen text was detected. Users can select regions in the frames of the visual content to monitor for a particular triggering item (e.g., a triggering word, name, or phrase). During playback of the video data, the textual data associated with the selected regions in the frames can be monitored for the triggering item. When the triggering item is detected in the textual data, an alert can be generated. Alternatively, the textual data for the selected region can be extracted to compile supplemental information that can be rendered over the playback of the video data or over other video data.
Type: Grant
Filed: July 28, 2014
Date of Patent: June 27, 2017
Assignee: ARRIS Enterprises LLC
Inventors: Stephen P. Emeott, Kevin L. Baum, Bhavan Gandhi, Faisal Ishtiaq, Isselmou Ould Dellahy
-
Patent number: 9596491
Abstract: Methods of monitoring segment replacement within a multimedia stream are provided. A multimedia stream having a replacement segment spliced therein is evaluated by extracting at least one of video, text, and audio features from the multimedia stream adjacent a beginning or ending of the replacement segment, and the extracted features are analyzed to detect if a residual of a segment replaced by the replacement segment exists within the multimedia stream. Methods of ad replacement and a system for performing the above methods are also disclosed.
Type: Grant
Filed: December 19, 2014
Date of Patent: March 14, 2017
Assignee: ARRIS Enterprises, Inc.
Inventors: Benedito J. Fonseca, Jr., Isselmou Ould Dellahy, Renxiang Li, Stephen P. Emeott
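A hedged sketch of the residual check: features extracted near the splice boundary are compared against features of the segment that was supposed to be replaced, and sufficient overlap indicates a leftover residual. Jaccard similarity over feature sets is a placeholder here for the patent's feature analysis, and the threshold value is an assumption.

```python
# Sketch: detect whether a residual of the replaced segment survives at
# the splice boundary by comparing feature fingerprints.

def has_residual(boundary_features, replaced_features, threshold=0.7):
    """True if the boundary still resembles the replaced segment (a residual)."""
    a, b = set(boundary_features), set(replaced_features)
    if not a or not b:
        return False
    jaccard = len(a & b) / len(a | b)
    return jaccard >= threshold

# Three of four fingerprint features survive at the boundary -> residual found.
leftover = has_residual(["f1", "f2", "f3"], ["f1", "f2", "f3", "f4"])
```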
-
Publication number: 20170048596
Abstract: Systems and methods for determining the location of advertisements in multimedia assets are disclosed. A method includes obtaining an audio signature corresponding to a time period of a multimedia asset, identifying a match between the obtained audio signature and one or more stored audio signatures, comparing programming data of the multimedia assets of the obtained audio signature and the matching audio signatures, and determining whether the time period of the multimedia asset contains an advertisement based on the comparison of the programming data of the multimedia assets of the obtained audio signature and the one or more matching audio signatures. Another method includes identifying matches between a plurality of obtained audio signatures and a plurality of stored audio signatures, and determining whether consecutive time periods of the multimedia asset contain an advertisement based on a number of consecutive matching audio signatures of the plurality of stored audio signatures.
Type: Application
Filed: August 12, 2016
Publication date: February 16, 2017
Inventors: Benedito J. Fonseca, Jr., Faisal Ishtiaq, Anthony J. Braskich, Venugopal Vasudevan, Isselmou Ould Dellahy
-
Publication number: 20160182922
Abstract: Methods of monitoring segment replacement within a multimedia stream are provided. A multimedia stream having a replacement segment spliced therein is evaluated by extracting at least one of video, text, and audio features from the multimedia stream adjacent a beginning or ending of the replacement segment, and the extracted features are analyzed to detect if a residual of a segment replaced by the replacement segment exists within the multimedia stream. Methods of ad replacement and a system for performing the above methods are also disclosed.
Type: Application
Filed: December 19, 2014
Publication date: June 23, 2016
Inventors: Benedito J. Fonseca, Jr., Isselmou Ould Dellahy, Renxiang Li, Stephen P. Emeott
-
Publication number: 20150319510
Abstract: Systems, methods, and devices for an interactive viewing experience by detecting on-screen data are disclosed. One or more frames of video data are analyzed to detect regions in the visual video content that contain text. A character recognition operation can be performed on the regions to generate textual data. Based on the textual data and the regions, a graphical user interface (GUI) definition can be generated. The GUI definition can be used to generate a corresponding GUI superimposed onto the visual video content to present users with controls and functionality with which to interact with the text or enhance the video content. Context metadata can be determined from external sources or by analyzing the continuity of audio and visual aspects of the video data. The context metadata can then be used to improve the character recognition or inform the generation of the GUI.
Type: Application
Filed: April 30, 2014
Publication date: November 5, 2015
Applicant: General Instrument Corporation
Inventors: Isselmou Ould Dellahy, VIII, Shivajit Mohapatra, Anthony J. Braskich, Faisal Ishtiaq, Renxiang Li
-
Publication number: 20150082349
Abstract: A method receives video content and metadata associated with the video content. The method then extracts features of the video content based on the metadata. Portions of the visual, audio, and textual features are fused into composite features that include multiple features from the visual, audio, and textual features. A set of video segments of the video content is identified based on the composite features of the video content. The segments may also be identified based on a user query.
Type: Application
Filed: September 11, 2014
Publication date: March 19, 2015
Inventors: Faisal Ishtiaq, Benedito J. Fonseca, Jr., Kevin L. Baum, Anthony J. Braskich, Stephen P. Emeott, Bhavan Gandhi, Renxiang Li, Alfonso Martinez Smith, Michael L. Needham, Isselmou Ould Dellahy
-
Publication number: 20150070587
Abstract: Systems and methods for generating alerts and enhanced viewing experience features using on-screen data are disclosed. Textual data corresponding to on-screen text is determined from the visual content of video data. The textual data is associated with corresponding regions and frames of the video data in which the corresponding on-screen text was detected. Users can select regions in the frames of the visual content to monitor for a particular triggering item (e.g., a triggering word, name, or phrase). During playback of the video data, the textual data associated with the selected regions in the frames can be monitored for the triggering item. When the triggering item is detected in the textual data, an alert can be generated. Alternatively, the textual data for the selected region can be extracted to compile supplemental information that can be rendered over the playback of the video data or over other video data.
Type: Application
Filed: July 28, 2014
Publication date: March 12, 2015
Inventors: Stephen P. Emeott, Kevin L. Baum, Bhavan Gandhi, Faisal Ishtiaq, Isselmou Ould Dellahy
-
Publication number: 20140028917
Abstract: Disclosed are methods and apparatus for displaying multimedia feeds. The method comprises receiving a plurality of multimedia feeds, acquiring a value of a metric for each feed, and displaying the feeds on a common display. The metric is time-varying: its value for a particular feed at a particular time depends either on events occurring within that feed at or before that time, or on a rating of that feed (by one or more entities) at that time. The feeds are displayed on the common display such that a first feed, having a first metric value, is displayed in a manner different from a second feed having a different, second metric value.
Type: Application
Filed: July 30, 2012
Publication date: January 30, 2014
Applicant: GENERAL INSTRUMENT CORPORATION
Inventors: Alfonso Martinez Smith, Kevin L. Baum, Anthony J. Braskich, Stephen P. Emeott, Faisal Ishtiaq, Renxiang Li, Isselmou Ould Dellahy
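The metric-driven display rule can be sketched as a simple layout policy; the "large tile for the top-ranked feed" choice is an assumed example of displaying feeds with different metric values in different manners, not the claimed layout itself.

```python
# Sketch: rank feeds by their current metric value and render the
# highest-ranked feed differently from the rest.

def layout_feeds(metric_by_feed):
    """Give the highest-metric feed a large tile, the rest small tiles."""
    ranked = sorted(metric_by_feed, key=metric_by_feed.get, reverse=True)
    return {feed: ("large" if i == 0 else "small") for i, feed in enumerate(ranked)}

# Metrics might come from in-feed events (e.g. a goal scored) or viewer ratings.
layout = layout_feeds({"game1": 0.9, "game2": 0.4, "news": 0.7})
```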
-
Publication number: 20130144725
Abstract: Systems and methods are provided for presenting content to a user. An exemplary method involves establishing a relationship between a first device and the user, wherein, based on the relationship, one or more instances of secondary content are automatically excluded from display by the first device while primary content is displayed by the first device. The method continues by presenting an instance of secondary content to the user in a manner that is influenced by the relationship.
Type: Application
Filed: December 2, 2011
Publication date: June 6, 2013
Applicant: GENERAL INSTRUMENT CORPORATION
Inventors: Renxiang Li, Faisal Ishtiaq, Nitya Narasimhan, Michael L. Needham, Isselmou Ould Dellahy
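A minimal sketch of the exclusion step: a relationship between a device and a user determines which secondary-content (e.g. advertising) categories are suppressed while primary content plays. The relationship structure and category names here are illustrative assumptions.

```python
# Sketch: filter secondary-content items against the device-user
# relationship's exclusion list before display.

def visible_secondary(items, relationship):
    """Drop secondary-content items whose category the relationship excludes."""
    excluded = relationship.get("excluded_categories", set())
    return [item for item in items if item["category"] not in excluded]

items = [
    {"id": "ad1", "category": "alcohol"},
    {"id": "ad2", "category": "travel"},
]
relationship = {"user": "child", "excluded_categories": {"alcohol"}}
shown = visible_secondary(items, relationship)
```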