Patents by Inventor Faisal Ishtiaq

Faisal Ishtiaq has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180352271
    Abstract: Disclosed is a method of associating, at a secondary device, secondary media content with primary media content being output at a primary device. The method includes receiving, at the secondary device, first information based upon the primary content being output at the primary device, wherein the first information includes at least one of an audio and a visual signal, determining at the secondary device second information corresponding to the first information, receiving at the secondary device one or more portions of secondary media content that have been made available by a third device, determining at the secondary device whether one or more of the portions of the secondary media content match one or more portions of the second information, and taking at least one further action upon determining that there is a match. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: August 13, 2018
    Publication date: December 6, 2018
    Inventors: Renxiang Li, Kevin L. Baum, Faisal Ishtiaq, Michael L. Needham
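A minimal Python sketch of the matching step described in the entry above (see also the granted counterpart, patent 10051295, further down). It assumes, purely for illustration, that the "second information" derived from the primary content and the secondary content portions are both plain text snippets; the substring match test, the `find_matches` helper, and the "further action" are placeholders rather than the claimed method.

```python
# Hypothetical sketch: match secondary-content portions against information
# derived from the primary content, then act on any match.

def find_matches(second_info_portions, secondary_content_portions):
    """Return the secondary-content portions that overlap the second information."""
    matches = []
    for portion in secondary_content_portions:
        if any(portion.lower() in info.lower() or info.lower() in portion.lower()
               for info in second_info_portions):
            matches.append(portion)
    return matches

# Example: text derived from the primary program's audio, and secondary
# content portions made available by a third device.
second_info = ["the quarterback throws deep", "touchdown for the home team"]
secondary = ["Touchdown for the home team!", "An unrelated comment"]

for match in find_matches(second_info, secondary):
    print("Match found, taking further action:", match)  # e.g., display it
```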
  • Patent number: 10148928
    Abstract: Systems and methods for generating alerts and enhanced viewing experience features using on-screen data are disclosed. Textual data corresponding to on-screen text is determined from the visual content of video data. The textual data is associated with corresponding regions and frames of the video data in which the corresponding on-screen text was detected. Users can select regions in the frames of the visual content to monitor for a particular triggering item (e.g., a triggering word, name, or phrase). During playback of the video data, the textual data associated with the selected regions in the frames can be monitored for the triggering item. When the triggering item is detected in the textual data, an alert can be generated. Alternatively, the textual data for the selected region can be extracted to compile supplemental information that can be rendered over the playback of the video data or over other video data. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: May 22, 2017
    Date of Patent: December 4, 2018
    Assignee: ARRIS Enterprises LLC
    Inventors: Stephen P. Emeott, Kevin L. Baum, Bhavan Gandhi, Faisal Ishtiaq, Isselmou Ould Dellahy
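A minimal Python sketch of the region-based trigger monitoring described in the entry above, assuming the on-screen text has already been recognized (e.g., by OCR) and stored per frame and region. The `(frame, region)` keying, the region names, and the alert handling are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch: monitor recognized on-screen text in user-selected
# regions for a triggering word or phrase and emit alert events.

def monitor_regions(textual_data, selected_regions, trigger):
    """textual_data: dict mapping (frame_index, region_id) -> recognized text."""
    trigger = trigger.lower()
    for (frame, region), text in sorted(textual_data.items()):
        if region in selected_regions and trigger in text.lower():
            yield frame, region, text  # alert event

textual_data = {
    (10, "ticker"): "Markets rally as tech stocks climb",
    (11, "ticker"): "Breaking: severe weather warning issued",
    (11, "scorebox"): "HOME 3 - AWAY 1",
}

for frame, region, text in monitor_regions(textual_data, {"ticker"}, "severe weather"):
    print(f"Alert at frame {frame}, region '{region}': {text}")
```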
  • Patent number: 10097865
    Abstract: Particular embodiments can refine a seed sentinel frame signature for a seed sentinel frame. The seed sentinel frame may be predictable or partially predictable content that demarks a beginning and/or end of certain content in a video program. The seed sentinel frame may first be used to detect other sentinel frames in the video program. However, other sentinel frames throughout the video program, or in other video programs, may be slightly different from the given sentinel frame for various reasons. The seed sentinel frame signature may not detect the sentinel frames of a video program with the desired accuracy. Accordingly, particular embodiments may refine the sentinel frame signature to a synthetic sentinel frame signature. The synthetic sentinel frame signature may then be used to analyze the current video program or other video programs. The synthetic sentinel frame signature may more accurately detect the sentinel frames within the video program. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: May 12, 2016
    Date of Patent: October 9, 2018
    Assignee: ARRIS Enterprises LLC
    Inventors: Renxiang Li, Stephen P. Emeott, Faisal Ishtiaq
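A minimal Python sketch of the signature-refinement idea in the entry above, assuming frame signatures are fixed-length feature vectors, that matching uses a Euclidean-distance threshold, and that the synthetic signature is the mean of the matched signatures. The real signature format and refinement rule are not specified here.

```python
# Hypothetical sketch: refine a seed sentinel-frame signature into a
# synthetic signature by averaging the signatures of detected matches.
import numpy as np

def refine_signature(seed_signature, frame_signatures, threshold):
    seed = np.asarray(seed_signature, dtype=float)
    matches = [np.asarray(s, dtype=float) for s in frame_signatures
               if np.linalg.norm(np.asarray(s, dtype=float) - seed) < threshold]
    if not matches:
        return seed  # nothing matched the seed; keep it unchanged
    # The synthetic signature summarizes the variation seen across matches.
    return np.mean(matches, axis=0)

seed = [0.9, 0.1, 0.2]
observed = [[0.85, 0.12, 0.22], [0.95, 0.08, 0.18], [0.10, 0.90, 0.90]]
print(refine_signature(seed, observed, threshold=0.3))
```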
  • Patent number: 10070173
    Abstract: A method is provided for encoding a video program that includes receiving a video program to be encoded and evaluating the program with a profile selected from among a plurality of profiles stored in a database. Each of the plurality of profiles includes program attributes associated with one or more video programs and information pertaining to one or more static graphical elements that overlay content in the one or more video programs. The selected profile is applicable to the video program to be encoded. At least a portion of the video program is caused to be encoded based at least in part on the information in the selected profile. The encoded portion of the video program is evaluated to assess an accuracy of the selected profile and, based at least in part on the evaluation, a confidence level is assigned to the selected profile. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: September 4, 2018
    Assignee: ARRIS Enterprises LLC
    Inventors: Anthony J. Braskich, Faisal Ishtiaq, Venugopal Vasudevan, Myungcheol Doo
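A minimal Python sketch of the profile-selection and confidence-assignment flow in the entry above. Profiles are modeled as dictionaries, attribute overlap stands in for the actual matching logic, and the 0.1 confidence step is an arbitrary assumption; the encode step itself is elided.

```python
# Hypothetical sketch: pick a stored profile for a program, then adjust the
# profile's confidence after evaluating the encoded output.

def select_profile(program_attrs, profiles):
    """Pick the profile whose stored attributes best match the program."""
    return max(profiles, key=lambda p: len(program_attrs & p["attributes"]))

def update_confidence(profile, overlay_matched_prediction):
    """Raise or lower confidence based on how accurate the profile proved."""
    step = 0.1 if overlay_matched_prediction else -0.1
    profile["confidence"] = min(1.0, max(0.0, profile["confidence"] + step))

profiles = [
    {"name": "sports-net", "attributes": {"sports", "network-A"},
     "overlays": ["scorebox"], "confidence": 0.5},
    {"name": "news-net", "attributes": {"news", "network-B"},
     "overlays": ["ticker"], "confidence": 0.5},
]
chosen = select_profile({"sports", "network-A", "live"}, profiles)
# ... encode at least a portion of the program using chosen["overlays"] ...
update_confidence(chosen, overlay_matched_prediction=True)
print(chosen["name"], chosen["confidence"])
```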
  • Patent number: 10051295
    Abstract: Disclosed is a method of associating, at a secondary device, secondary media content with primary media content being output at a primary device. The method includes receiving, at the secondary device, first information based upon the primary content being output at the primary device, wherein the first information includes at least one of an audio and a visual signal, determining at the secondary device second information corresponding to the first information, receiving at the secondary device one or more portions of secondary media content that have been made available by a third device, determining at the secondary device whether one or more of the portions of the secondary media content match one or more portions of the second information, and taking at least one further action upon determining that there is a match.
    Type: Grant
    Filed: March 10, 2017
    Date of Patent: August 14, 2018
    Assignee: Google Technology Holdings LLC
    Inventors: Renxiang Li, Kevin L. Baum, Faisal Ishtiaq, Michael L. Needham
  • Publication number: 20180189276
    Abstract: Systems and methods for determining the location of advertisements in multimedia assets are disclosed. A method includes obtaining a multimedia asset comprising at least one of video, audio, and text, identifying segments of the multimedia asset, and for each segment, determining whether the respective segment represents an advertisement. For each segment that represents an advertisement, the method further includes extracting a predetermined number of fingerprints from the respective segment, submitting, to an advertisement database, the predetermined number of fingerprints, receiving, from the advertisement database, a count of the extracted fingerprints matching fingerprints previously stored in the advertisement database, and determining, based on the count, whether to add an indication that the respective segment is a new advertisement to the advertisement database. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: December 29, 2016
    Publication date: July 5, 2018
    Inventors: Benedito J. Fonseca, Jr., Venugopal Vasudevan, Faisal Ishtiaq, Anthony J. Braskich
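A minimal Python sketch of the fingerprint-count decision in the entry above. It assumes fingerprints are hashable values already extracted from an advertisement segment and that a simple match-ratio threshold decides whether the segment is new; the fingerprinting itself and the database interface are placeholders.

```python
# Hypothetical sketch: query an advertisement database with a segment's
# fingerprints and decide whether to register the segment as a new ad.

def process_ad_segment(segment_fingerprints, ad_database, match_ratio=0.5):
    count = sum(1 for fp in segment_fingerprints if fp in ad_database)
    if count < match_ratio * len(segment_fingerprints):
        # Too few matches: treat the segment as a new advertisement.
        ad_database.update(segment_fingerprints)
        return "new advertisement added"
    return "known advertisement"

database = {"fp1", "fp2", "fp3"}
print(process_ad_segment(["fp1", "fp2", "fp9"], database))   # known advertisement
print(process_ad_segment(["fpA", "fpB", "fpC"], database))   # new advertisement added
```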
  • Publication number: 20180191796
    Abstract: Methods and systems are provided for bitrate adaptation of a video asset to be streamed to a client device for playback. The method includes selecting a representation from a manifest that expresses a set of representations available for each chunk of the video asset and generating a dynamic manifest for the video asset in which the representation selected for each chunk is recommended for streaming to the client device. The selection of the representation recommended for each chunk may be based on at least one of historic viewing behavior of previous viewers of the chunk, content analysis information for the chunk, a level of available network bandwidth, a level of available network storage, and data rate utilization information of network resources including current, average, peak, and minimum data rate of network resources. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: December 29, 2016
    Publication date: July 5, 2018
    Inventors: Bhavan Gandhi, Faisal Ishtiaq, Anthony J. Braskich, Andrew Aftelak
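A minimal Python sketch of per-chunk representation selection for a dynamic manifest, as described in the entry above. A single "popularity" weight stands in for historic viewing behavior and content analysis, and the bandwidth-budget rule is an illustrative assumption, not the claimed selection method.

```python
# Hypothetical sketch: recommend one representation (bitrate in kbps) per
# chunk and emit a dynamic manifest for the client.

def build_dynamic_manifest(manifest, bandwidth_kbps, popularity):
    dynamic = {}
    for chunk_id, representations in manifest.items():
        # Allow a richer representation for heavily watched or key chunks.
        budget = bandwidth_kbps * popularity.get(chunk_id, 0.5)
        feasible = [r for r in representations if r <= budget] or [min(representations)]
        dynamic[chunk_id] = max(feasible)
    return dynamic

manifest = {"chunk-001": [800, 1500, 3000], "chunk-002": [800, 1500, 3000]}
popularity = {"chunk-001": 1.0, "chunk-002": 0.3}   # e.g., a highlight vs. filler
print(build_dynamic_manifest(manifest, bandwidth_kbps=3000, popularity=popularity))
```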
  • Publication number: 20180192158
    Abstract: A method of video segment detection within a transport stream of a video asset is provided. Boundaries of candidate video segments of interest (e.g., advertisements, sports highlights, news highlights, or content summaries) within a video asset are detected with a media analysis detector and are separately detected based on statistical models generated from historic transport control event data collected from a population of viewers of the video asset. The above-referenced information concerning the candidate video segments of interest is used to identify beginning and end boundaries of selected candidate video segments within the transport stream. A transcoder is provided with parameters corresponding to the selected candidate video segments and performs group of pictures (GOP) and chunk boundary alignment of chunks of the transport stream with the boundaries of the selected candidate video segments. A system and non-transitory computer-readable storage medium are also disclosed. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: December 29, 2016
    Publication date: July 5, 2018
    Inventors: Alfonso Martinez Smith, Anthony J. Braskich, Faisal Ishtiaq, Benedito J. Fonseca, Jr.
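A minimal Python sketch of the two-source boundary reconciliation in the entry above: candidate boundaries from a media-analysis detector are kept only when corroborated by boundaries inferred from viewer transport-control events, then snapped to a chunk/GOP grid for the transcoder. The tolerance and the fixed 2-second chunk grid are assumptions for illustration.

```python
# Hypothetical sketch: confirm detector boundaries against a statistical
# model of viewer events, then align them to chunk boundaries.

def confirm_boundaries(detector_bounds, event_model_bounds, tolerance=2.0):
    """Keep detector boundaries corroborated by the event-based model."""
    return [b for b in detector_bounds
            if any(abs(b - e) <= tolerance for e in event_model_bounds)]

def snap_to_chunks(boundaries, chunk_duration=2.0):
    """Align each boundary (seconds) to the nearest chunk/GOP boundary."""
    return [round(b / chunk_duration) * chunk_duration for b in boundaries]

detector = [30.4, 95.1, 140.7]        # media-analysis candidates (seconds)
from_events = [29.8, 141.5, 200.0]    # candidates from transport-control statistics
selected = confirm_boundaries(detector, from_events)
print(snap_to_chunks(selected))        # boundary parameters handed to the transcoder
```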
  • Publication number: 20180184154
    Abstract: A method is provided for encoding a video program that includes receiving a video program to be encoded and evaluating the program with a profile selected from among a plurality of profiles stored in a database. Each of the plurality of profiles includes program attributes associated with one or more video programs and information pertaining to one or more static graphical elements that overlay content in the one or more video programs. The selected profile is applicable to the video program to be encoded. At least a portion of the video program is caused to be encoded based at least in part on the information in the selected profile. The encoded portion of the video program is evaluated to assess an accuracy of the selected profile and, based at least in part on the evaluation, a confidence level is assigned to the selected profile.
    Type: Application
    Filed: December 22, 2016
    Publication date: June 28, 2018
    Inventors: Anthony J. Braskich, Faisal Ishtiaq, Venugopal Vasudevan, Myungcheol Doo
  • Publication number: 20180150696
    Abstract: A method is provided for detecting static graphical elements in a sequence of video frames that compares a selected frame in the sequence to each of a plurality of previous frames in the sequence to identify a graphical element such as a logo. For each pair of frames compared, an absolute difference frame is determined by acquiring an absolute difference value between pixel values for corresponding pixels over at least a portion of the frames in the frame pair. A metric associated with each absolute difference frame is generated, which reflects a degree of dissimilarity. At least some of the absolute difference frames weighted in accordance with the metric associated therewith are summed to generate an accumulation difference frame such that pairs of frames that are more dissimilar have a greater weight. A static graphical element is then identified over a region of the accumulation difference frame in which pixel values satisfy specified criteria. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: November 30, 2016
    Publication date: May 31, 2018
    Inventors: Renxiang Li, Faisal Ishtiaq
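A minimal NumPy sketch of the weighted accumulation-difference idea in the entry above: pixels belonging to a static overlay change little even across otherwise dissimilar frame pairs, so they stay small in the weighted sum. The dissimilarity metric (mean absolute difference) and the final threshold are illustrative assumptions.

```python
# Hypothetical sketch: accumulate weighted absolute-difference frames and
# mark pixels whose accumulated difference stays low as a static element.
import numpy as np

def detect_static_region(selected_frame, previous_frames, threshold=10.0):
    selected = selected_frame.astype(float)
    accumulation = np.zeros_like(selected)
    total_weight = 0.0
    for prev in previous_frames:
        diff = np.abs(selected - prev.astype(float))  # absolute difference frame
        weight = diff.mean()                          # dissimilarity metric
        accumulation += weight * diff                 # dissimilar pairs weigh more
        total_weight += weight
    if total_weight == 0.0:
        return np.zeros(selected.shape, dtype=bool)
    return (accumulation / total_weight) < threshold  # low difference = static

# Toy example: a bright 2x2 "logo" in the corner of otherwise random frames.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 255, (8, 8)).astype(float) for _ in range(5)]
for f in frames:
    f[:2, :2] = 200.0
print(detect_static_region(frames[-1], frames[:-1]))
```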
  • Patent number: 9973662
    Abstract: Particular embodiments detect a solid color frame, such as a black frame, that may include visible content other than the solid color in a portion of the frame. These frames may conventionally not be detected as a solid color frame because of the visible content in the portion of the frame. However, these solid color frames may be “functional” black or white frames, in that they perform the function of a solid color frame even though they include the visible content. The visible content may be content that may always be displayed on the screen even if the video content is transitioning to an advertisement. Particular embodiments use techniques to detect the functional solid color frames even when visible content appears in the solid color frames. Particular embodiments use color layout information and edge distribution information to detect solid color frames. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: January 13, 2015
    Date of Patent: May 15, 2018
    Assignee: ARRIS Enterprises LLC
    Inventors: Renxiang Li, Faisal Ishtiaq, Alfonso Martinez Smith
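A minimal NumPy sketch combining the two cues named in the entry above: a color-layout check (almost all pixels near black) and an edge-distribution check (remaining edges confined to a small band of rows, such as a persistent ticker). All thresholds are illustrative assumptions rather than the patented criteria.

```python
# Hypothetical sketch: classify a grayscale frame as a "functional" black
# frame even when a small region (e.g., a ticker) carries visible content.
import numpy as np

def is_functional_black_frame(gray, dark_thresh=20, dark_ratio=0.9, edge_row_ratio=0.2):
    gray = gray.astype(float)
    dark_fraction = (gray < dark_thresh).mean()       # color-layout cue
    edges = np.abs(np.diff(gray, axis=1)) > 30        # crude horizontal-gradient edges
    rows_with_edges = edges.any(axis=1).mean()        # edge-distribution cue
    return dark_fraction >= dark_ratio and rows_with_edges <= edge_row_ratio

frame = np.zeros((90, 160))      # mostly black frame...
frame[84:, :] = 180              # ...with a bright ticker band at the bottom
frame[84:, ::10] = 255           # some text-like variation inside the band
print(is_functional_black_frame(frame))   # True: treated as a functional black frame
```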
  • Patent number: 9888279
    Abstract: A method receives video content and metadata associated with the video content. The method then extracts visual, audio, and textual features of the video content based on the metadata. Portions of the visual, audio, and textual features are fused into composite features that include multiple features from the visual, audio, and textual features. A set of video segments of the video content is identified based on the composite features of the video content. Also, the segments may be identified based on a user query. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: September 11, 2014
    Date of Patent: February 6, 2018
    Assignee: ARRIS Enterprises LLC
    Inventors: Faisal Ishtiaq, Benedito J. Fonseca, Jr., Kevin L. Baum, Anthony J. Braskich, Stephen P. Emeott, Bhavan Gandhi, Renxiang Li, Alfonso Martinez Smith, Michael L. Needham, Isselmou Ould Dellahy
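A minimal NumPy sketch of fusing per-segment visual, audio, and textual features into a composite vector and ranking segments against a query vector. Plain concatenation and cosine similarity stand in for the fusion and identification steps; the feature values and dimensions are made up for illustration.

```python
# Hypothetical sketch: fuse modality features per segment and rank segments
# by similarity to a (query-derived) composite vector.
import numpy as np

def fuse(visual, audio, textual):
    """Concatenate modality features into one composite feature vector."""
    return np.concatenate([np.asarray(visual, float),
                           np.asarray(audio, float),
                           np.asarray(textual, float)])

def rank_segments(segments, query):
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    scored = [(cosine(fuse(*feats), query), seg_id) for seg_id, feats in segments.items()]
    return [seg_id for _, seg_id in sorted(scored, reverse=True)]

segments = {
    "seg-1": ([0.9, 0.1], [0.2], [0.0, 1.0]),   # (visual, audio, textual)
    "seg-2": ([0.1, 0.8], [0.9], [1.0, 0.0]),
}
query = fuse([0.9, 0.2], [0.1], [0.1, 0.9])     # e.g., derived from a user query
print(rank_segments(segments, query))
```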
  • Patent number: 9854202
    Abstract: Particular embodiments provide supplemental content that may be related to video content that a user is watching. A segment of closed-caption text from closed-captions for the video content is determined. A first set of information, such as terms, may be extracted from the segment of closed-caption text. Particular embodiments use an external source that can be determined from a set of external sources. To determine the supplemental content, particular embodiments may extract a second set of information from the external source. Because the external source may be more robust and include more text than the segment of closed-caption text, the second set of information may include terms that better represent the segment of closed-caption text. Particular embodiments thus use the second set of information to determine supplemental content for the video content, and can provide the supplemental content to a user watching the video content. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: December 11, 2014
    Date of Patent: December 26, 2017
    Assignee: ARRIS Enterprises LLC
    Inventors: Benedito J. Fonseca, Jr., Anthony J. Braskich, Faisal Ishtiaq, Alfonso Martinez Smith
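A minimal Python sketch of the expansion idea in the entry above: terms from a short closed-caption segment are supplemented with terms from a longer external document, and the expanded term set is used to choose supplemental content. The stop-word list, term extraction, and content catalog are all placeholders; a real system would query external services.

```python
# Hypothetical sketch: expand caption terms with terms from an external
# source, then pick the best-matching supplemental content item.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "of", "in", "and", "to", "is", "on", "for", "how"}

def top_terms(text, k=10):
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    return [w for w, _ in Counter(words).most_common(k)]

def supplemental_for(caption_segment, external_text, catalog):
    terms = set(top_terms(caption_segment)) | set(top_terms(external_text, k=15))
    # Choose the catalog item sharing the most terms with the expanded set.
    return max(catalog, key=lambda item: len(terms & set(top_terms(item, k=20))))

caption = "volcano erupts near the coastal town"
external = ("The volcano's eruption forced evacuations along the coast, "
            "with lava flows and ash clouds disrupting flights.")
catalog = ["Travel deals for beach resorts",
           "Explainer: how lava flows and ash clouds form during an eruption"]
print(supplemental_for(caption, external, catalog))
```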
  • Publication number: 20170332112
    Abstract: Particular embodiments can refine a seed sentinel frame signature for a seed sentinel frame. The seed sentinel frame may be predictable or partially predictable content that demarks a beginning and/or end of certain content in a video program. The seed sentinel frame may first be used to detect other sentinel frames in the video program. However, other sentinel frames throughout the video program, or in other video programs, may be slightly different from the given sentinel frame for various reasons. The seed sentinel frame signature may not detect the sentinel frames of a video program with the desired accuracy. Accordingly, particular embodiments may refine the sentinel frame signature to a synthetic sentinel frame signature. The synthetic sentinel frame signature may then be used to analyze the current video program or other video programs. The synthetic sentinel frame signature may more accurately detect the sentinel frames within the video program.
    Type: Application
    Filed: May 12, 2016
    Publication date: November 16, 2017
    Inventors: Renxiang Li, Stephen P. Emeott, Faisal Ishtiaq
  • Publication number: 20170257612
    Abstract: Systems and methods for generating alerts and enhanced viewing experience features using on-screen data are disclosed. Textual data corresponding to on-screen text is determined from the visual content of video data. The textual data is associated with corresponding regions and frames of the video data in which the corresponding on-screen text was detected. Users can select regions in the frames of the visual content to monitor for a particular triggering item (e.g., a triggering word, name, or phrase). During playback of the video data, the textual data associated with the selected regions in the frames can be monitored for the triggering item. When the triggering item is detected in the textual data, an alert can be generated. Alternatively, the textual data for the selected region can be extracted to compile supplemental information that can be rendered over the playback of the video data or over other video data.
    Type: Application
    Filed: May 22, 2017
    Publication date: September 7, 2017
    Inventors: Stephen P. Emeott, Kevin L. Baum, Bhavan Gandhi, Faisal Ishtiaq, Isselmou Ould Dellahy
  • Patent number: 9729920
    Abstract: A method implemented in a computer system for controlling the delivery of data and audio/video content. The method delivers primary content to the subscriber device for viewing by a subscriber. The method also delivers secondary content to the companion device for viewing by the subscriber in parallel with the subscriber viewing the primary content, where the secondary content relates to the primary content. The method extracts attention estimation features from the primary content, and monitors the companion device to determine an interaction measurement for the subscriber viewing the secondary content on the companion device. The method calculates an attention measurement for the subscriber viewing the primary content based on the attention estimation features and the interaction measurement, and controls the delivery of the secondary content to the companion device based on the attention measurement. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: August 8, 2017
    Assignee: ARRIS Enterprises, Inc.
    Inventors: Michael L. Needham, Kevin L. Baum, Faisal Ishtiaq, Renxiang Li, Shivajit Mohapatra
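A minimal Python sketch of the attention calculation in the entry above: attention-estimation features from the primary content are combined with an interaction measurement from the companion device, and the result gates delivery of secondary content. The feature names, weights, threshold, and the delivery policy (deliver when attention on the primary content is low) are assumptions, not the claimed method.

```python
# Hypothetical sketch: estimate viewer attention on the primary content and
# use it to control delivery of secondary content to the companion device.

FEATURE_WEIGHTS = {"scene_activity": 0.4, "audio_energy": 0.2, "dialogue": 0.4}

def attention_score(primary_features, interaction_measurement):
    content_part = sum(w * primary_features.get(name, 0.0)
                       for name, w in FEATURE_WEIGHTS.items())
    # Heavy interaction with the companion device suggests attention has
    # already shifted away from the primary content.
    return 0.7 * content_part + 0.3 * (1.0 - interaction_measurement)

def should_deliver_secondary(primary_features, interaction_measurement, threshold=0.6):
    # Policy assumed here: deliver secondary content when attention on the
    # primary content is low, so it does not compete with key moments.
    return attention_score(primary_features, interaction_measurement) < threshold

features = {"scene_activity": 0.2, "audio_energy": 0.3, "dialogue": 0.1}
print(should_deliver_secondary(features, interaction_measurement=0.8))   # True
```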
  • Patent number: 9721185
    Abstract: Particular embodiments automatically identify and track a logo that appears in video content. For example, particular embodiments can track a branding logo's position and size without any prior knowledge about the logo, such as the position, type, structure, and content of the logo. In one embodiment, a heat map is used that accumulates a frequency of short-term logos that are detected in the video content. The heat map is then used to identify a branding logo in the video content. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: January 13, 2015
    Date of Patent: August 1, 2017
    Assignee: ARRIS Enterprises LLC
    Inventors: Renxiang Li, Faisal Ishtiaq
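A minimal NumPy sketch of the heat-map idea in the entry above: per-frame short-term logo detections (boolean masks) are accumulated, and pixels detected in a large fraction of frames are kept as the branding logo. How the short-term masks are produced and the 0.8 persistence threshold are assumptions.

```python
# Hypothetical sketch: accumulate short-term logo detections into a heat map
# and keep the persistent pixels as the branding logo.
import numpy as np

def branding_logo_mask(short_term_masks, persistence=0.8):
    """short_term_masks: iterable of HxW boolean arrays, one per frame."""
    heat_map = np.stack([m.astype(float) for m in short_term_masks]).mean(axis=0)
    return heat_map >= persistence     # pixels detected often enough form the logo

# Toy example: a 3x3 logo detected in 9 of 10 frames, plus sporadic noise.
rng = np.random.default_rng(1)
masks = []
for i in range(10):
    m = rng.random((20, 20)) > 0.97    # occasional false short-term detections
    if i != 4:                         # the logo is missed in one frame
        m[1:4, 1:4] = True
    masks.append(m)
print(np.argwhere(branding_logo_mask(masks)))   # coordinates of the logo pixels
```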
  • Publication number: 20170188061
    Abstract: Disclosed is a method of associating, at a secondary device, secondary media content with primary media content being output at a primary device. The method includes receiving, at the secondary device, first information based upon the primary content being output at the primary device, wherein the first information includes at least one of an audio and a visual signal, determining at the secondary device second information corresponding to the first information, receiving at the secondary device one or more portions of secondary media content that have been made available by a third device, determining at the secondary device whether one or more of the portions of the secondary media content match one or more portions of the second information, and taking at least one further action upon determining that there is a match.
    Type: Application
    Filed: March 10, 2017
    Publication date: June 29, 2017
    Inventors: Renxiang Li, Kevin L. Baum, Faisal Ishtiaq, Michael L. Needham
  • Patent number: 9693030
    Abstract: Systems and methods for generating alerts and enhanced viewing experience features using on-screen data are disclosed. Textual data corresponding to on-screen text is determined from the visual content of video data. The textual data is associated with corresponding regions and frames of the video data in which the corresponding on-screen text was detected. Users can select regions in the frames of the visual content to monitor for a particular triggering item (e.g., a triggering word, name, or phrase). During playback of the video data, the textual data associated with the selected regions in the frames can be monitored for the triggering item. When the triggering item is detected in the textual data, an alert can be generated. Alternatively, the textual data for the selected region can be extracted to compile supplemental information that can be rendered over the playback of the video data or over other video data.
    Type: Grant
    Filed: July 28, 2014
    Date of Patent: June 27, 2017
    Assignee: ARRIS Enterprises LLC
    Inventors: Stephen P. Emeott, Kevin L. Baum, Bhavan Gandhi, Faisal Ishtiaq, Isselmou Ould Dellahy
  • Patent number: 9646219
    Abstract: A video processing system detects an overlay image, such as a broadcaster's logo, in a picture of a video stream. The detection is based on evaluation of blending characteristics of a picture frame. The method of detection of an overlay defines first and second areas within the image, the first and second areas being non-overlapping. Next, an alpha-blended value is calculated by blending the mean color value of the second area with an overlay color value. Then, if the mean color value of the first area is closer to the alpha-blended value than it is to the mean color value of the second area, the overlay can be indicated as detected and defined within the picture. Detection of the overlay can be used to identify an owner of the video, or to detect when a scene change such as a commercial occurs. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: May 11, 2015
    Date of Patent: May 9, 2017
    Assignee: ARRIS Enterprises, Inc.
    Inventors: Kevin L. Baum, Faisal Ishtiaq
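A minimal NumPy sketch of the blending test in the entry above: if the first area's mean color is better explained by alpha-blending the second area's mean color with an assumed overlay color than by the second area's mean alone, an overlay is reported. The alpha value, the overlay color, and the choice of areas are assumptions for illustration.

```python
# Hypothetical sketch: detect a semi-transparent overlay (e.g., a logo) by
# comparing area means against an alpha-blended prediction.
import numpy as np

def overlay_detected(frame, area1, area2, overlay_color=255.0, alpha=0.5):
    """area1/area2: (row_slice, col_slice) pairs on a 2-D grayscale frame."""
    mean1 = frame[area1].mean()
    mean2 = frame[area2].mean()
    blended = alpha * overlay_color + (1.0 - alpha) * mean2
    # Overlay present if mean1 is closer to the blended value than to mean2.
    return abs(mean1 - blended) < abs(mean1 - mean2)

frame = np.full((100, 100), 60.0)               # plain background
frame[5:20, 5:40] = 0.5 * 255 + 0.5 * 60        # semi-transparent white logo region
logo_area = (slice(5, 20), slice(5, 40))
reference_area = (slice(25, 40), slice(5, 40))  # non-overlapping nearby area
print(overlay_detected(frame, logo_area, reference_area))   # True
```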