Patents by Inventor Bhavan Gandhi

Bhavan Gandhi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230231775
    Abstract: Systems and methods that adaptively model network traffic to predict network capacity utilization and quality of experience into the future. The adaptive model of network traffic may be used to recommend capacity upgrades based on a score expressed in a QoE space.
    Type: Application
    Filed: March 21, 2023
    Publication date: July 20, 2023
    Applicant: ARRIS Enterprises LLC
    Inventors: Bhavan Gandhi, Harindranath P. Nair, Hyeongjin Song, Sanjeev Mishra
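The abstract above describes predicting future capacity utilization and scoring it in a QoE space. The following is an illustrative sketch only, not the patented method: it smooths recent utilization samples with an exponentially weighted moving average, maps the forecast to a simple 0-100 score, and flags an upgrade below a threshold. All function names and the scoring formula are invented for illustration.

```python
# Hypothetical sketch: forecast utilization, map to a QoE-space score,
# and recommend a capacity upgrade when the score drops too low.

def forecast_utilization(samples, alpha=0.5):
    """Smooth recent utilization samples (0.0-1.0) into a one-step forecast."""
    forecast = samples[0]
    for s in samples[1:]:
        forecast = alpha * s + (1 - alpha) * forecast
    return forecast

def qoe_score(utilization):
    """Map predicted utilization to a simple 0-100 score (invented mapping)."""
    return max(0.0, 100.0 * (1.0 - utilization))

def recommend_upgrade(samples, threshold=30.0):
    """True when the forecast QoE score falls below the threshold."""
    return qoe_score(forecast_utilization(samples)) < threshold
```

A heavily loaded link (utilization trending above 0.9) would score near zero and trigger a recommendation, while a lightly loaded one would not.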
  • Publication number: 20230224352
    Abstract: Methods and systems are provided for bitrate adaptation of a video asset to be streamed to a client device for playback. The method includes selecting a representation from a manifest which expresses a set of representations available for each chunk of the video asset and generating a dynamic manifest for the video asset in which the representation selected for the at least one chunk is recommended for streaming to the client device. The selection of the representation recommended for the chunk may be based on at least one of historic viewing behavior of previous viewers of the chunk, content analysis information for the chunk, a level of available network bandwidth, a level of available network storage, and data rate utilization information of network resources including current, average, peak, and minimum data rate of network resources.
    Type: Application
    Filed: March 7, 2023
    Publication date: July 13, 2023
    Applicant: ARRIS Enterprises LLC
    Inventors: Bhavan Gandhi, Faisal Ishtiaq, Anthony J. Braskich, Andrew Aftelak
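The bitrate-adaptation abstract above (shared by several related filings below) centers on selecting one representation per chunk and emitting a dynamic manifest that recommends it. Here is a minimal sketch under invented data shapes, using only the bandwidth criterion from the abstract's list of possible selection inputs; the names are hypothetical and not drawn from the patent.

```python
# Hypothetical sketch: pick, per chunk, the highest-bitrate representation
# that fits the available bandwidth, then build a "dynamic manifest" that
# recommends that representation for each chunk.

def select_representation(representations, available_bandwidth):
    """representations: list of (name, bitrate_kbps) tuples for one chunk."""
    fitting = [r for r in representations if r[1] <= available_bandwidth]
    if not fitting:
        # Nothing fits: fall back to the lowest-bitrate representation.
        return min(representations, key=lambda r: r[1])
    return max(fitting, key=lambda r: r[1])

def dynamic_manifest(manifest, available_bandwidth):
    """manifest: {chunk_id: [(name, bitrate_kbps), ...]} -> {chunk_id: name}."""
    return {chunk: select_representation(reps, available_bandwidth)[0]
            for chunk, reps in manifest.items()}
```

A fuller version would also weigh the other inputs the abstract names, such as historic viewing behavior and per-chunk content analysis, when ranking representations.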
  • Patent number: 11627176
    Abstract: Methods and systems are provided for bitrate adaptation of a video asset to be streamed to a client device for playback. The method includes selecting a representation from a manifest which expresses a set of representations available for each chunk of the video asset and generating a dynamic manifest for the video asset in which the representation selected for the at least one chunk is recommended for streaming to the client device. The selection of the representation recommended for the chunk may be based on at least one of historic viewing behavior of previous viewers of the chunk, content analysis information for the chunk, a level of available network bandwidth, a level of available network storage, and data rate utilization information of network resources including current, average, peak, and minimum data rate of network resources.
    Type: Grant
    Filed: April 20, 2021
    Date of Patent: April 11, 2023
    Assignee: ARRIS Enterprises LLC
    Inventors: Bhavan Gandhi, Faisal Ishtiaq, Anthony J. Braskich, Andrew Aftelak
  • Publication number: 20210250396
    Abstract: Methods and systems are provided for bitrate adaptation of a video asset to be streamed to a client device for playback. The method includes selecting a representation from a manifest which expresses a set of representations available for each chunk of the video asset and generating a dynamic manifest for the video asset in which the representation selected for the at least one chunk is recommended for streaming to the client device. The selection of the representation recommended for the chunk may be based on at least one of historic viewing behavior of previous viewers of the chunk, content analysis information for the chunk, a level of available network bandwidth, a level of available network storage, and data rate utilization information of network resources including current, average, peak, and minimum data rate of network resources.
    Type: Application
    Filed: April 20, 2021
    Publication date: August 12, 2021
    Inventors: Bhavan Gandhi, Faisal Ishtiaq, Anthony J. Braskich, Andrew Aftelak
  • Patent number: 10986152
    Abstract: Methods and systems are provided for bitrate adaptation of a video asset to be streamed to a client device for playback. The method includes selecting a representation from a manifest which expresses a set of representations available for each chunk of the video asset and generating a dynamic manifest for the video asset in which the representation selected for the at least one chunk is recommended for streaming to the client device. The selection of the representation recommended for the chunk may be based on at least one of historic viewing behavior of previous viewers of the chunk, content analysis information for the chunk, a level of available network bandwidth, a level of available network storage, and data rate utilization information of network resources including current, average, peak, and minimum data rate of network resources.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: April 20, 2021
    Assignee: ARRIS Enterprises LLC
    Inventors: Bhavan Gandhi, Faisal Ishtiaq, Anthony J. Braskich, Andrew Aftelak
  • Publication number: 20190372857
    Abstract: Systems and methods that adaptively model network traffic to predict network capacity utilization and quality of experience into the future. The adaptive model of network traffic may be used to recommend capacity upgrades based on a score expressed in a QoE space.
    Type: Application
    Filed: May 29, 2018
    Publication date: December 5, 2019
    Inventors: Bhavan Gandhi, Harindranath P. Nair, Hyeongjin Song, Sanjeev Mishra
  • Patent number: 10148928
    Abstract: Systems and methods for generating alerts and enhanced viewing experience features using on-screen data are disclosed. Textual data corresponding to on-screen text is determined from the visual content of video data. The textual data is associated with corresponding regions and frames of the video data in which the corresponding on-screen text was detected. Users can select regions in the frames of the visual content to monitor for a particular triggering item (e.g., a triggering word, name, or phrase). During playback of the video data, the textual data associated with the selected regions in the frames can be monitored for the triggering item. When the triggering item is detected in the textual data, an alert can be generated. Alternatively, the textual data for the selected region can be extracted to compile supplemental information that can be rendered over the playback of the video data or over other video data.
    Type: Grant
    Filed: May 22, 2017
    Date of Patent: December 4, 2018
    Assignee: ARRIS Enterprises LLC
    Inventors: Stephen P. Emeott, Kevin L. Baum, Bhavan Gandhi, Faisal Ishtiaq, Isselmou Ould Dellahy
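The monitoring step in the abstract above (shared by the later publications in the same family) amounts to scanning recognized on-screen text, keyed by frame and region, for a triggering item. A minimal sketch, with invented names and data shapes rather than anything from the patent:

```python
# Hypothetical sketch: given OCR'd text associated with (frame, region)
# pairs, report the frames where a user's triggering word or phrase
# appears inside one of the user-selected regions.

def find_alerts(frame_texts, monitored_regions, trigger):
    """frame_texts: {(frame, region): text}; returns sorted matching frames."""
    trigger = trigger.lower()
    return sorted(frame
                  for (frame, region), text in frame_texts.items()
                  if region in monitored_regions and trigger in text.lower())
```

In practice the text-extraction step (OCR over the selected regions) would feed this lookup continuously during playback, raising an alert for each frame returned.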
  • Publication number: 20180192138
    Abstract: A system and method are provided for recommending a segment of a segmented video asset of particular interest to a client. A copy of a video asset is created such that the copy is in the form of a set of segments for being transmitted to a client device for playback. A relationship is established between start and end times of each segment relative to a standard version of segments of the video asset, and metadata is generated for each segment of the copy. The metadata and relationship is used relative to the standard version with viewing data collected across a population of viewers having viewed the video asset to produce viewing metrics for each segment of the set of segments of the copy. The viewing metrics are provided to a recommender which uses the viewing metrics to generate a recommendation of a segment of the copy to a client.
    Type: Application
    Filed: December 29, 2016
    Publication date: July 5, 2018
    Inventors: Bhavan Gandhi, Andrew Aftelak
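The segment-recommendation abstract above maps a copy's segments onto a standard timeline and aggregates population viewing data into per-segment metrics. The sketch below illustrates only the core aggregation idea, with a simple overlap count standing in for the patent's viewing metrics; all names and data shapes are hypothetical.

```python
# Hypothetical sketch: count how many recorded viewing intervals overlap
# each segment (all on the standard version's timeline), then recommend
# the most-watched segment.

def overlaps(segment, view):
    """True when two (start, end) intervals intersect."""
    return segment[0] < view[1] and view[0] < segment[1]

def segment_view_counts(segments, views):
    """segments, views: lists of (start, end) pairs in seconds."""
    return [sum(1 for v in views if overlaps(s, v)) for s in segments]

def recommend_segment(segments, views):
    """Return the segment with the highest view count."""
    counts = segment_view_counts(segments, views)
    return segments[counts.index(max(counts))]
```

The relationship the abstract establishes between the copy's segment boundaries and the standard version is what lets viewing data collected against one version score segments of the other.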
  • Publication number: 20180191796
    Abstract: Methods and systems are provided for bitrate adaptation of a video asset to be streamed to a client device for playback. The method includes selecting a representation from a manifest which expresses a set of representations available for each chunk of the video asset and generating a dynamic manifest for the video asset in which the representation selected for the at least one chunk is recommended for streaming to the client device. The selection of the representation recommended for the chunk may be based on at least one of historic viewing behavior of previous viewers of the chunk, content analysis information for the chunk, a level of available network bandwidth, a level of available network storage, and data rate utilization information of network resources including current, average, peak, and minimum data rate of network resources.
    Type: Application
    Filed: December 29, 2016
    Publication date: July 5, 2018
    Inventors: Bhavan Gandhi, Faisal Ishtiaq, Anthony J. Braskich, Andrew Aftelak
  • Patent number: 10015548
    Abstract: A system and method are provided for recommending a segment of a segmented video asset of particular interest to a client. A copy of a video asset is created such that the copy is in the form of a set of segments for being transmitted to a client device for playback. A relationship is established between start and end times of each segment relative to a standard version of segments of the video asset, and metadata is generated for each segment of the copy. The metadata and relationship is used relative to the standard version with viewing data collected across a population of viewers having viewed the video asset to produce viewing metrics for each segment of the set of segments of the copy. The viewing metrics are provided to a recommender which uses the viewing metrics to generate a recommendation of a segment of the copy to a client.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: July 3, 2018
    Assignee: ARRIS Enterprises LLC
    Inventors: Bhavan Gandhi, Andrew Aftelak
  • Patent number: 9888279
    Abstract: A method receives video content and metadata associated with video content. The method then extracts features of the video content based on the metadata. Portions of the visual, audio, and textual features are fused into composite features that include multiple features from the visual, audio, and textual features. A set of video segments of the video content is identified based on the composite features of the video content. Also, the segments may be identified based on a user query.
    Type: Grant
    Filed: September 11, 2014
    Date of Patent: February 6, 2018
    Assignee: ARRIS Enterprises LLC
    Inventors: Faisal Ishtiaq, Benedito J. Fonseca, Jr., Kevin L. Baum, Anthony J. Braskich, Stephen P. Emeott, Bhavan Gandhi, Renxiang Li, Alfonso Martinez Smith, Michael L. Needham, Isselmou Ould Dellahy
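The abstract above fuses visual, audio, and textual features into composite features used to identify segments, optionally against a user query. The sketch below uses the simplest possible fusion (concatenation) and a dot-product score; it is an invented illustration, not the patented technique.

```python
# Hypothetical sketch: concatenate per-modality feature vectors into one
# composite vector per candidate segment, then rank segments by similarity
# to a query vector.

def fuse(visual, audio, textual):
    """Simple concatenation fusion of three feature vectors (lists)."""
    return visual + audio + textual

def rank_segments(segments, query):
    """segments: {segment_id: composite vector}; returns ids, best first."""
    def score(vector):
        return sum(a * b for a, b in zip(vector, query))
    return sorted(segments, key=lambda s: score(segments[s]), reverse=True)
```

Real systems would typically learn the fusion and scoring rather than concatenate raw features, but the pipeline shape (extract per modality, fuse, score against a query) is the same.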
  • Publication number: 20170257612
    Abstract: Systems and methods for generating alerts and enhanced viewing experience features using on-screen data are disclosed. Textual data corresponding to on-screen text is determined from the visual content of video data. The textual data is associated with corresponding regions and frames of the video data in which the corresponding on-screen text was detected. Users can select regions in the frames of the visual content to monitor for a particular triggering item (e.g., a triggering word, name, or phrase). During playback of the video data, the textual data associated with the selected regions in the frames can be monitored for the triggering item. When the triggering item is detected in the textual data, an alert can be generated. Alternatively, the textual data for the selected region can be extracted to compile supplemental information that can be rendered over the playback of the video data or over other video data.
    Type: Application
    Filed: May 22, 2017
    Publication date: September 7, 2017
    Inventors: Stephen P. Emeott, Kevin L. Baum, Bhavan Gandhi, Faisal Ishtiaq, Isselmou Ould Dellahy
  • Patent number: 9693030
    Abstract: Systems and methods for generating alerts and enhanced viewing experience features using on-screen data are disclosed. Textual data corresponding to on-screen text is determined from the visual content of video data. The textual data is associated with corresponding regions and frames of the video data in which the corresponding on-screen text was detected. Users can select regions in the frames of the visual content to monitor for a particular triggering item (e.g., a triggering word, name, or phrase). During playback of the video data, the textual data associated with the selected regions in the frames can be monitored for the triggering item. When the triggering item is detected in the textual data, an alert can be generated. Alternatively, the textual data for the selected region can be extracted to compile supplemental information that can be rendered over the playback of the video data or over other video data.
    Type: Grant
    Filed: July 28, 2014
    Date of Patent: June 27, 2017
    Assignee: ARRIS Enterprises LLC
    Inventors: Stephen P. Emeott, Kevin L. Baum, Bhavan Gandhi, Faisal Ishtiaq, Isselmou Ould Dellahy
  • Patent number: 9338508
    Abstract: Continuity of an entire user session (including the primary content stream, secondary content streams, and user context) is preserved so that the user can resume the session at a later time, at a different place, and, possibly, using different equipment. When a user pauses a session, the context of that session is automatically preserved. Upon resumption, the session begins where the user left off, resuming the primary media stream at the point where the user stopped, knowing what secondary content items the user has already seen, and re-establishing any user-set parameters for the session (e.g., playback volume, allocation of streams to particular screen real estate, whether closed captioning is turned on, and the like). For time-shifted content consumption, the system intelligently selects, filters, and processes contextual information (such as characteristics of the primary media) in order to present companion streams that are relevant and engaging to the user.
    Type: Grant
    Filed: October 23, 2012
    Date of Patent: May 10, 2016
    Assignee: GOOGLE TECHNOLOGY HOLDINGS LLC
    Inventors: Varma L. Chanderraju, Bhavan Gandhi, Vinay Kalra, Sridhar Kunisetty, Sanjeev K. Mishra, Bharath R. Rao
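The session-continuity abstract above (also published as the two applications at the end of this list) preserves playback position, already-seen secondary content, and user-set parameters across a pause. The sketch below captures that context in a plain dictionary; the field names are invented for illustration and do not come from the patent.

```python
# Hypothetical sketch: snapshot the session context on pause, then restore
# it on resume so playback continues where it stopped, with the same
# user-set parameters and without repeating secondary content.

def pause_session(position, seen_secondary, settings):
    """Snapshot the full session context for later resumption."""
    return {"position": position,
            "seen_secondary": set(seen_secondary),
            "settings": dict(settings)}

def resume_session(saved, available_secondary):
    """Restore context; offer only secondary items not already seen."""
    fresh = [s for s in available_secondary
             if s not in saved["seen_secondary"]]
    return saved["position"], fresh, saved["settings"]
```

Because the snapshot is self-contained, it can be stored server-side and restored later on different equipment, which is the cross-device resumption the abstract describes.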
  • Patent number: 9118951
    Abstract: Disclosed is a method of operating a secondary device in a manner associated with operation of a primary device including obtaining first information corresponding to a media asset being output by the primary device, processing the first information to determine local media-signature information, transmitting the first information for receipt by a server, receiving secondary information from the server, wherein the secondary information includes a plurality of asset-media signatures that respectively correspond to respective portions of the media asset, attempting to determine a time-based correlation between at least one portion of the local media-signature information and at least one of the asset-media signatures, and outputting one or more portions of time-relevant asset streams from the secondary device, the one or more portions being determined at least indirectly based upon the correlation.
    Type: Grant
    Filed: June 26, 2012
    Date of Patent: August 25, 2015
    Assignee: ARRIS Technology, Inc.
    Inventors: Bhavan Gandhi, Benedito J. Fonseca, Jr.
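The second-screen synchronization abstract above hinges on a time-based correlation between a locally computed media signature and server-supplied asset signatures. The sketch below illustrates that alignment step with integer signatures and an absolute-difference distance; real signatures would be audio or video fingerprints, and every name here is hypothetical.

```python
# Hypothetical sketch: slide the locally computed signature along the
# asset's signature sequence and return the media-time offset where the
# two differ least, i.e. the best time correlation.

def match_offset(local_sig, asset_sigs):
    """Return the index into asset_sigs where local_sig matches best."""
    def distance(offset):
        window = asset_sigs[offset:offset + len(local_sig)]
        return sum(abs(a - b) for a, b in zip(local_sig, window))
    candidates = range(len(asset_sigs) - len(local_sig) + 1)
    return min(candidates, key=distance)
```

Once the offset is known, the secondary device can output the portions of the time-relevant asset streams that correspond to the primary device's current playback position.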
  • Publication number: 20150082349
    Abstract: A method receives video content and metadata associated with video content. The method then extracts features of the video content based on the metadata. Portions of the visual, audio, and textual features are fused into composite features that include multiple features from the visual, audio, and textual features. A set of video segments of the video content is identified based on the composite features of the video content. Also, the segments may be identified based on a user query.
    Type: Application
    Filed: September 11, 2014
    Publication date: March 19, 2015
    Inventors: Faisal Ishtiaq, Benedito J. Fonseca, Jr., Kevin L. Baum, Anthony J. Braskich, Stephen P. Emeott, Bhavan Gandhi, Renxiang Li, Alfonso Martinez Smith, Michael L. Needham, Isselmou Ould Dellahy
  • Publication number: 20150070587
    Abstract: Systems and methods for generating alerts and enhanced viewing experience features using on-screen data are disclosed. Textual data corresponding to on-screen text is determined from the visual content of video data. The textual data is associated with corresponding regions and frames of the video data in which the corresponding on-screen text was detected. Users can select regions in the frames of the visual content to monitor for a particular triggering item (e.g., a triggering word, name, or phrase). During playback of the video data, the textual data associated with the selected regions in the frames can be monitored for the triggering item. When the triggering item is detected in the textual data, an alert can be generated. Alternatively, the textual data for the selected region can be extracted to compile supplemental information that can be rendered over the playback of the video data or over other video data.
    Type: Application
    Filed: July 28, 2014
    Publication date: March 12, 2015
    Inventors: Stephen P. Emeott, Kevin L. Baum, Bhavan Gandhi, Faisal Ishtiaq, Isselmou Ould Dellahy
  • Patent number: 8763042
    Abstract: Disclosed are methods and apparatus for providing information to a first client device (e.g., a tablet computer) for presentation on that device. The information may be related to multimedia content (e.g., a television program) that may be presented using a second client device (e.g., a television). Firstly, an activity level for a portion of the multimedia content is determined. Using the activity level, an amount of the information is assigned to that portion of the multimedia content. The amount of the information assigned is dependent on that determined activity level. The assigned information is then provided for use by (e.g., for display on) the first client device.
    Type: Grant
    Filed: October 5, 2012
    Date of Patent: June 24, 2014
    Assignee: Motorola Mobility LLC
    Inventors: Faisal Ishtiaq, Bhavan Gandhi, Crysta J. Metcalf
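The abstract above assigns an amount of companion information to each portion of the program based on that portion's activity level. One plausible reading, sketched below with invented names and an invented mapping (the patent does not specify this formula): calmer portions receive more second-screen items so the viewer is not distracted during busy ones.

```python
# Hypothetical sketch: size each portion's companion-information budget
# from its activity level, then hand out items portion by portion.

def items_for_portion(activity_level, max_items=5):
    """activity_level in 0.0 (calm) .. 1.0 (intense); invented mapping."""
    return round(max_items * (1.0 - activity_level))

def assign_items(activity_by_portion, items):
    """Greedily assign companion items across portions of the program."""
    plan, queue = {}, list(items)
    for portion, level in activity_by_portion.items():
        take = min(items_for_portion(level), len(queue))
        plan[portion], queue = queue[:take], queue[take:]
    return plan
```

The inverse relationship here is one design choice; the abstract only requires that the assigned amount depend on the determined activity level.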
  • Publication number: 20140115032
    Abstract: Continuity of an entire user session (including the primary content stream, secondary content streams, and user context) is preserved so that the user can resume the session at a later time, at a different place, and, possibly, using different equipment. When a user pauses a session, the context of that session is automatically preserved. Upon resumption, the session begins where the user left off, resuming the primary media stream at the point where the user stopped, knowing what secondary content items the user has already seen, and re-establishing any user-set parameters for the session (e.g., playback volume, allocation of streams to particular screen real estate, whether closed captioning is turned on, and the like). For time-shifted content consumption, the system intelligently selects, filters, and processes contextual information (such as characteristics of the primary media) in order to present companion streams that are relevant and engaging to the user.
    Type: Application
    Filed: October 23, 2012
    Publication date: April 24, 2014
    Applicant: GENERAL INSTRUMENT CORPORATION
    Inventors: Varma L. Chanderraju, Bhavan Gandhi, Vinay Kalra, Sridhar Kunisetty, Sanjeev K. Mishra, Bharath R. Rao
  • Publication number: 20140115031
    Abstract: Continuity of an entire user session (including the primary content stream, secondary content streams, and user context) is preserved so that the user can resume the session at a later time, at a different place, and, possibly, using different equipment. When a user pauses a session, the context of that session is automatically preserved. Upon resumption, the session begins where the user left off, resuming the primary media stream at the point where the user stopped, knowing what secondary content items the user has already seen, and re-establishing any user-set parameters for the session (e.g., playback volume, allocation of streams to particular screen real estate, whether closed captioning is turned on, and the like). For time-shifted content consumption, the system intelligently selects, filters, and processes contextual information (such as characteristics of the primary media) in order to present companion streams that are relevant and engaging to the user.
    Type: Application
    Filed: October 23, 2012
    Publication date: April 24, 2014
    Applicant: GENERAL INSTRUMENT CORPORATION
    Inventors: Varma L. Chanderraju, Bhavan Gandhi, Vinay Kalra, Sridhar Kunisetty, Sanjeev K. Mishra, Bharath R. Rao