Patents by Inventor Bhavan Gandhi
Bhavan Gandhi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230231775
Abstract: Systems and methods that adaptively model network traffic to predict future network capacity utilization and quality of experience (QoE). The adaptive model of network traffic may be used to recommend capacity upgrades based on a score expressed in a QoE space.
Type: Application
Filed: March 21, 2023
Publication date: July 20, 2023
Applicant: ARRIS Enterprises LLC
Inventors: Bhavan Gandhi, Harindranath P. Nair, Hyeongjin Song, Sanjeev Mishra
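The capacity-planning idea in this filing can be illustrated with a short sketch. This is not the patented model: the exponential-smoothing forecast, the linear utilization-to-QoE mapping, and all names (`forecast_utilization`, `qoe_score`, `qoe_floor`) are illustrative assumptions.

```python
# Illustrative sketch only, not the patented method.
def forecast_utilization(history, alpha=0.5):
    """One-step-ahead exponential smoothing of link utilization (0..1)."""
    level = history[0]
    for u in history[1:]:
        level = alpha * u + (1 - alpha) * level
    return level

def qoe_score(utilization):
    """Toy linear mapping from predicted utilization to a 0..100 QoE score."""
    return max(0.0, 100.0 * (1.0 - utilization))

def recommend_upgrade(history, qoe_floor=40.0):
    """Recommend a capacity upgrade when predicted QoE falls below the floor."""
    return qoe_score(forecast_utilization(history)) < qoe_floor
```

A link trending toward saturation (e.g. utilization samples 0.5, 0.7, 0.85, 0.95) forecasts about 84% utilization, scores poorly in the toy QoE space, and triggers a recommendation.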
-
Publication number: 20230224352
Abstract: Methods and systems are provided for bitrate adaptation of a video asset to be streamed to a client device for playback. The method includes selecting a representation from a manifest that expresses the set of representations available for each chunk of the video asset, and generating a dynamic manifest for the video asset in which the representation selected for at least one chunk is recommended for streaming to the client device. The representation recommended for a chunk may be selected based on at least one of: the historic viewing behavior of previous viewers of the chunk, content analysis information for the chunk, the level of available network bandwidth, the level of available network storage, and data rate utilization information for network resources, including current, average, peak, and minimum data rates.
Type: Application
Filed: March 7, 2023
Publication date: July 13, 2023
Applicant: ARRIS Enterprises LLC
Inventors: Bhavan Gandhi, Faisal Ishtiaq, Anthony J. Braskich, Andrew Aftelak
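A minimal sketch of the dynamic-manifest idea, under assumptions not taken from the filing: representations are dicts with a `bitrate_kbps` field, and a per-chunk popularity score (a stand-in for historic viewing behavior) scales the bandwidth budget. The actual selection logic may combine several of the listed signals.

```python
def select_representation(representations, bandwidth_kbps, popularity):
    """Pick the highest-bitrate representation within a popularity-scaled budget.

    Popular chunks (popularity near 1.0) get the full bandwidth budget;
    rarely watched chunks are capped at half of it.
    """
    budget = bandwidth_kbps * (0.5 + 0.5 * popularity)
    fitting = [r for r in representations if r["bitrate_kbps"] <= budget]
    if not fitting:  # nothing fits: fall back to the lowest bitrate
        return min(representations, key=lambda r: r["bitrate_kbps"])
    return max(fitting, key=lambda r: r["bitrate_kbps"])

def build_dynamic_manifest(chunks, bandwidth_kbps):
    """Emit one recommended representation per chunk of the asset."""
    return [
        select_representation(c["representations"], bandwidth_kbps, c["popularity"])
        for c in chunks
    ]
```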
-
Patent number: 11627176
Abstract: Methods and systems are provided for bitrate adaptation of a video asset to be streamed to a client device for playback. The method includes selecting a representation from a manifest that expresses the set of representations available for each chunk of the video asset, and generating a dynamic manifest for the video asset in which the representation selected for at least one chunk is recommended for streaming to the client device. The representation recommended for a chunk may be selected based on at least one of: the historic viewing behavior of previous viewers of the chunk, content analysis information for the chunk, the level of available network bandwidth, the level of available network storage, and data rate utilization information for network resources, including current, average, peak, and minimum data rates.
Type: Grant
Filed: April 20, 2021
Date of Patent: April 11, 2023
Assignee: ARRIS Enterprises LLC
Inventors: Bhavan Gandhi, Faisal Ishtiaq, Anthony J. Braskich, Andrew Aftelak
-
Publication number: 20210250396
Abstract: Methods and systems are provided for bitrate adaptation of a video asset to be streamed to a client device for playback. The method includes selecting a representation from a manifest that expresses the set of representations available for each chunk of the video asset, and generating a dynamic manifest for the video asset in which the representation selected for at least one chunk is recommended for streaming to the client device. The representation recommended for a chunk may be selected based on at least one of: the historic viewing behavior of previous viewers of the chunk, content analysis information for the chunk, the level of available network bandwidth, the level of available network storage, and data rate utilization information for network resources, including current, average, peak, and minimum data rates.
Type: Application
Filed: April 20, 2021
Publication date: August 12, 2021
Inventors: Bhavan Gandhi, Faisal Ishtiaq, Anthony J. Braskich, Andrew Aftelak
-
Patent number: 10986152
Abstract: Methods and systems are provided for bitrate adaptation of a video asset to be streamed to a client device for playback. The method includes selecting a representation from a manifest that expresses the set of representations available for each chunk of the video asset, and generating a dynamic manifest for the video asset in which the representation selected for at least one chunk is recommended for streaming to the client device. The representation recommended for a chunk may be selected based on at least one of: the historic viewing behavior of previous viewers of the chunk, content analysis information for the chunk, the level of available network bandwidth, the level of available network storage, and data rate utilization information for network resources, including current, average, peak, and minimum data rates.
Type: Grant
Filed: December 29, 2016
Date of Patent: April 20, 2021
Assignee: ARRIS Enterprises LLC
Inventors: Bhavan Gandhi, Faisal Ishtiaq, Anthony J. Braskich, Andrew Aftelak
-
Publication number: 20190372857
Abstract: Systems and methods that adaptively model network traffic to predict future network capacity utilization and quality of experience (QoE). The adaptive model of network traffic may be used to recommend capacity upgrades based on a score expressed in a QoE space.
Type: Application
Filed: May 29, 2018
Publication date: December 5, 2019
Inventors: Bhavan Gandhi, Harindranath P. Nair, Hyeongjin Song, Sanjeev Mishra
-
Patent number: 10148928
Abstract: Systems and methods for generating alerts and enhanced viewing-experience features using on-screen data are disclosed. Textual data corresponding to on-screen text is determined from the visual content of video data. The textual data is associated with the corresponding regions and frames of the video data in which the on-screen text was detected. Users can select regions in the frames of the visual content to monitor for a particular triggering item (e.g., a triggering word, name, or phrase). During playback of the video data, the textual data associated with the selected regions in the frames can be monitored for the triggering item. When the triggering item is detected in the textual data, an alert can be generated. Alternatively, the textual data for the selected region can be extracted to compile supplemental information that can be rendered over the playback of the video data or over other video data.
Type: Grant
Filed: May 22, 2017
Date of Patent: December 4, 2018
Assignee: ARRIS Enterprises LLC
Inventors: Stephen P. Emeott, Kevin L. Baum, Bhavan Gandhi, Faisal Ishtiaq, Isselmou Ould Dellahy
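The region-monitoring behavior described above can be sketched as follows. Everything here is assumed for illustration, not taken from the patent: OCR results arrive as a per-frame mapping of region ids to recognized text, and `monitor_regions` is a hypothetical name.

```python
def monitor_regions(frame_text, watched_regions, trigger):
    """Scan recognized on-screen text for a triggering item.

    frame_text: {frame_no: {region_id: recognized_text}}, e.g. OCR output.
    watched_regions: region ids the user selected for monitoring.
    Returns (frame_no, region_id, text) tuples for each hit, in frame order.
    """
    alerts = []
    for frame_no, regions in sorted(frame_text.items()):
        for region_id in watched_regions:
            text = regions.get(region_id, "")
            if trigger.lower() in text.lower():  # case-insensitive match
                alerts.append((frame_no, region_id, text))
    return alerts
```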
-
Publication number: 20180192138
Abstract: A system and method are provided for recommending a segment of a segmented video asset of particular interest to a client. A copy of a video asset is created such that the copy takes the form of a set of segments to be transmitted to a client device for playback. A relationship is established between the start and end times of each segment and those of a standard version of the segments of the video asset, and metadata is generated for each segment of the copy. The metadata and the relationship to the standard version are used, together with viewing data collected across a population of viewers of the video asset, to produce viewing metrics for each segment of the copy. The viewing metrics are provided to a recommender, which uses them to generate a recommendation of a segment of the copy to a client.
Type: Application
Filed: December 29, 2016
Publication date: July 5, 2018
Inventors: Bhavan Gandhi, Andrew Aftelak
-
Publication number: 20180191796
Abstract: Methods and systems are provided for bitrate adaptation of a video asset to be streamed to a client device for playback. The method includes selecting a representation from a manifest that expresses the set of representations available for each chunk of the video asset, and generating a dynamic manifest for the video asset in which the representation selected for at least one chunk is recommended for streaming to the client device. The representation recommended for a chunk may be selected based on at least one of: the historic viewing behavior of previous viewers of the chunk, content analysis information for the chunk, the level of available network bandwidth, the level of available network storage, and data rate utilization information for network resources, including current, average, peak, and minimum data rates.
Type: Application
Filed: December 29, 2016
Publication date: July 5, 2018
Inventors: Bhavan Gandhi, Faisal Ishtiaq, Anthony J. Braskich, Andrew Aftelak
-
Patent number: 10015548
Abstract: A system and method are provided for recommending a segment of a segmented video asset of particular interest to a client. A copy of a video asset is created such that the copy takes the form of a set of segments to be transmitted to a client device for playback. A relationship is established between the start and end times of each segment and those of a standard version of the segments of the video asset, and metadata is generated for each segment of the copy. The metadata and the relationship to the standard version are used, together with viewing data collected across a population of viewers of the video asset, to produce viewing metrics for each segment of the copy. The viewing metrics are provided to a recommender, which uses them to generate a recommendation of a segment of the copy to a client.
Type: Grant
Filed: December 29, 2016
Date of Patent: July 3, 2018
Assignee: ARRIS Enterprises LLC
Inventors: Bhavan Gandhi, Andrew Aftelak
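One plausible way to compute per-segment viewing metrics against a standard timeline, sketched under assumptions the filing does not specify (an integer-second timeline and a views-per-second score; all names are hypothetical):

```python
def segment_metrics(copy_segments, standard_views):
    """Score each (start_s, end_s) segment of a copy as views per second,
    using view counts collected on the standard version's timeline."""
    scores = []
    for start, end in copy_segments:
        views = sum(standard_views.get(t, 0) for t in range(start, end))
        scores.append(views / max(1, end - start))
    return scores

def recommend_segment(copy_segments, standard_views):
    """Return the index of the most-viewed segment of the copy."""
    scores = segment_metrics(copy_segments, standard_views)
    return max(range(len(scores)), key=scores.__getitem__)
```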
-
Patent number: 9888279
Abstract: A method receives video content and metadata associated with the video content. The method then extracts visual, audio, and textual features of the video content based on the metadata. Portions of the visual, audio, and textual features are fused into composite features that include multiple features from the visual, audio, and textual modalities. A set of video segments of the video content is identified based on the composite features. The segments may also be identified based on a user query.
Type: Grant
Filed: September 11, 2014
Date of Patent: February 6, 2018
Assignee: ARRIS Enterprises LLC
Inventors: Faisal Ishtiaq, Benedito J. Fonseca, Jr., Kevin L. Baum, Anthony J. Braskich, Stephen P. Emeott, Bhavan Gandhi, Renxiang Li, Alfonso Martinez Smith, Michael L. Needham, Isselmou Ould Dellahy
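A toy illustration of fusing per-window feature tracks into composite scores and thresholding them into segments. The linear fusion, the weights, and the function names are assumptions, not the patented method.

```python
def fuse_features(visual, audio, text, weights=(0.4, 0.3, 0.3)):
    """Linearly fuse three per-window feature tracks into composite scores."""
    wv, wa, wt = weights
    return [wv * v + wa * a + wt * t for v, a, t in zip(visual, audio, text)]

def find_segments(composite, threshold=0.5):
    """Turn runs of above-threshold windows into (start, end) segments."""
    segments, start = [], None
    for i, score in enumerate(composite):
        if score >= threshold and start is None:
            start = i  # a run of interesting windows begins
        elif score < threshold and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:  # close a run that reaches the end
        segments.append((start, len(composite)))
    return segments
```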
-
Publication number: 20170257612
Abstract: Systems and methods for generating alerts and enhanced viewing-experience features using on-screen data are disclosed. Textual data corresponding to on-screen text is determined from the visual content of video data. The textual data is associated with the corresponding regions and frames of the video data in which the on-screen text was detected. Users can select regions in the frames of the visual content to monitor for a particular triggering item (e.g., a triggering word, name, or phrase). During playback of the video data, the textual data associated with the selected regions in the frames can be monitored for the triggering item. When the triggering item is detected in the textual data, an alert can be generated. Alternatively, the textual data for the selected region can be extracted to compile supplemental information that can be rendered over the playback of the video data or over other video data.
Type: Application
Filed: May 22, 2017
Publication date: September 7, 2017
Inventors: Stephen P. Emeott, Kevin L. Baum, Bhavan Gandhi, Faisal Ishtiaq, Isselmou Ould Dellahy
-
Patent number: 9693030
Abstract: Systems and methods for generating alerts and enhanced viewing-experience features using on-screen data are disclosed. Textual data corresponding to on-screen text is determined from the visual content of video data. The textual data is associated with the corresponding regions and frames of the video data in which the on-screen text was detected. Users can select regions in the frames of the visual content to monitor for a particular triggering item (e.g., a triggering word, name, or phrase). During playback of the video data, the textual data associated with the selected regions in the frames can be monitored for the triggering item. When the triggering item is detected in the textual data, an alert can be generated. Alternatively, the textual data for the selected region can be extracted to compile supplemental information that can be rendered over the playback of the video data or over other video data.
Type: Grant
Filed: July 28, 2014
Date of Patent: June 27, 2017
Assignee: ARRIS Enterprises LLC
Inventors: Stephen P. Emeott, Kevin L. Baum, Bhavan Gandhi, Faisal Ishtiaq, Isselmou Ould Dellahy
-
Patent number: 9338508
Abstract: Continuity of an entire user session (including the primary content stream, secondary content streams, and user context) is preserved so that the user can resume the session at a later time, at a different place, and, possibly, using different equipment. When a user pauses a session, the context of that session is automatically preserved. Upon resumption, the session begins where the user left off, resuming the primary media stream at the point where the user stopped, knowing what secondary content items the user has already seen, and re-establishing any user-set parameters for the session (e.g., playback volume, allocation of streams to particular screen real estate, whether closed captioning is turned on, and the like). For time-shifted content consumption, the system intelligently selects, filters, and processes contextual information (such as characteristics of the primary media) in order to present companion streams that are relevant and engaging to the user.
Type: Grant
Filed: October 23, 2012
Date of Patent: May 10, 2016
Assignee: GOOGLE TECHNOLOGY HOLDINGS LLC
Inventors: Varma L. Chanderraju, Bhavan Gandhi, Vinay Kalra, Sridhar Kunisetty, Sanjeev K. Mishra, Bharath R. Rao
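Session continuity of this kind reduces to snapshotting and restoring session state. A minimal sketch, assuming a JSON snapshot and hypothetical field names (`position_s`, `seen_secondary`, `settings`) that are not from the patent:

```python
import json

def snapshot_session(position_s, seen_secondary, settings):
    """Serialize everything needed to resume a paused session elsewhere."""
    return json.dumps({
        "position_s": position_s,                  # where the primary stream stopped
        "seen_secondary": sorted(seen_secondary),  # secondary items already shown
        "settings": settings,                      # e.g. volume, captions, layout
    })

def resume_session(snapshot):
    """Restore the saved context so playback resumes where the user left off."""
    state = json.loads(snapshot)
    state["seen_secondary"] = set(state["seen_secondary"])
    return state
```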
-
Patent number: 9118951
Abstract: Disclosed is a method of operating a secondary device in a manner associated with the operation of a primary device. The method includes obtaining first information corresponding to a media asset being output by the primary device, processing the first information to determine local media-signature information, transmitting the first information for receipt by a server, and receiving secondary information from the server, wherein the secondary information includes a plurality of asset-media signatures that respectively correspond to respective portions of the media asset. The method further includes attempting to determine a time-based correlation between at least one portion of the local media-signature information and at least one of the asset-media signatures, and outputting one or more portions of time-relevant asset streams from the secondary device, the one or more portions being determined at least indirectly based upon the correlation.
Type: Grant
Filed: June 26, 2012
Date of Patent: August 25, 2015
Assignee: ARRIS Technology, Inc.
Inventors: Bhavan Gandhi, Benedito J. Fonseca, Jr.
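The time-based correlation step can be illustrated with a sliding-window match between a local signature and the server-provided asset signatures. The integer fingerprints and the sum-of-absolute-differences cost are illustrative assumptions, not the claimed signature scheme:

```python
def correlate_signatures(local_sig, asset_sigs):
    """Find the asset offset whose window best matches the local signature.

    Signatures are sequences of small ints (stand-ins for media fingerprints);
    the best offset minimizes the sum of absolute differences.
    """
    n = len(local_sig)
    best_offset, best_cost = None, float("inf")
    for offset in range(len(asset_sigs) - n + 1):
        window = asset_sigs[offset:offset + n]
        cost = sum(abs(a - b) for a, b in zip(local_sig, window))
        if cost < best_cost:
            best_offset, best_cost = offset, cost
    return best_offset
```

Once the offset is known, the secondary device can align its companion streams to the primary playback position.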
-
Publication number: 20150082349
Abstract: A method receives video content and metadata associated with the video content. The method then extracts visual, audio, and textual features of the video content based on the metadata. Portions of the visual, audio, and textual features are fused into composite features that include multiple features from the visual, audio, and textual modalities. A set of video segments of the video content is identified based on the composite features. The segments may also be identified based on a user query.
Type: Application
Filed: September 11, 2014
Publication date: March 19, 2015
Inventors: Faisal Ishtiaq, Benedito J. Fonseca, Jr., Kevin L. Baum, Anthony J. Braskich, Stephen P. Emeott, Bhavan Gandhi, Renxiang Li, Alfonso Martinez Smith, Michael L. Needham, Isselmou Ould Dellahy
-
Publication number: 20150070587
Abstract: Systems and methods for generating alerts and enhanced viewing-experience features using on-screen data are disclosed. Textual data corresponding to on-screen text is determined from the visual content of video data. The textual data is associated with the corresponding regions and frames of the video data in which the on-screen text was detected. Users can select regions in the frames of the visual content to monitor for a particular triggering item (e.g., a triggering word, name, or phrase). During playback of the video data, the textual data associated with the selected regions in the frames can be monitored for the triggering item. When the triggering item is detected in the textual data, an alert can be generated. Alternatively, the textual data for the selected region can be extracted to compile supplemental information that can be rendered over the playback of the video data or over other video data.
Type: Application
Filed: July 28, 2014
Publication date: March 12, 2015
Inventors: Stephen P. Emeott, Kevin L. Baum, Bhavan Gandhi, Faisal Ishtiaq, Isselmou Ould Dellahy
-
Patent number: 8763042
Abstract: Disclosed are methods and apparatus for providing information to a first client device (e.g., a tablet computer) for presentation on that device. The information may be related to multimedia content (e.g., a television program) that may be presented using a second client device (e.g., a television). First, an activity level for a portion of the multimedia content is determined. Using the activity level, an amount of the information is assigned to that portion of the multimedia content; the amount assigned depends on the determined activity level. The assigned information is then provided for use by (e.g., for display on) the first client device.
Type: Grant
Filed: October 5, 2012
Date of Patent: June 24, 2014
Assignee: Motorola Mobility LLC
Inventors: Faisal Ishtiaq, Bhavan Gandhi, Crysta J. Metcalf
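A sketch of assigning amounts of companion information in proportion to activity level, using largest-remainder rounding so the counts sum exactly to the budget. The proportional policy is an assumption; the filing only says the amount depends on the activity level, not in which direction.

```python
def assign_information(activity_levels, total_items):
    """Split a budget of companion items across content portions by activity.

    Uses largest-remainder rounding so the counts always sum to total_items.
    """
    total = sum(activity_levels)
    if total == 0:
        return [0] * len(activity_levels)
    exact = [total_items * a / total for a in activity_levels]
    counts = [int(x) for x in exact]          # floor of each exact share
    remainders = [x - c for x, c in zip(exact, counts)]
    leftover = total_items - sum(counts)
    # Give the leftover items to the portions with the largest remainders.
    for i in sorted(range(len(counts)), key=remainders.__getitem__, reverse=True)[:leftover]:
        counts[i] += 1
    return counts
```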
-
Publication number: 20140115032
Abstract: Continuity of an entire user session (including the primary content stream, secondary content streams, and user context) is preserved so that the user can resume the session at a later time, at a different place, and, possibly, using different equipment. When a user pauses a session, the context of that session is automatically preserved. Upon resumption, the session begins where the user left off, resuming the primary media stream at the point where the user stopped, knowing what secondary content items the user has already seen, and re-establishing any user-set parameters for the session (e.g., playback volume, allocation of streams to particular screen real estate, whether closed captioning is turned on, and the like). For time-shifted content consumption, the system intelligently selects, filters, and processes contextual information (such as characteristics of the primary media) in order to present companion streams that are relevant and engaging to the user.
Type: Application
Filed: October 23, 2012
Publication date: April 24, 2014
Applicant: GENERAL INSTRUMENT CORPORATION
Inventors: Varma L. Chanderraju, Bhavan Gandhi, Vinay Kalra, Sridhar Kunisetty, Sanjeev K. Mishra, Bharath R. Rao
-
Publication number: 20140115031
Abstract: Continuity of an entire user session (including the primary content stream, secondary content streams, and user context) is preserved so that the user can resume the session at a later time, at a different place, and, possibly, using different equipment. When a user pauses a session, the context of that session is automatically preserved. Upon resumption, the session begins where the user left off, resuming the primary media stream at the point where the user stopped, knowing what secondary content items the user has already seen, and re-establishing any user-set parameters for the session (e.g., playback volume, allocation of streams to particular screen real estate, whether closed captioning is turned on, and the like). For time-shifted content consumption, the system intelligently selects, filters, and processes contextual information (such as characteristics of the primary media) in order to present companion streams that are relevant and engaging to the user.
Type: Application
Filed: October 23, 2012
Publication date: April 24, 2014
Applicant: GENERAL INSTRUMENT CORPORATION
Inventors: Varma L. Chanderraju, Bhavan Gandhi, Vinay Kalra, Sridhar Kunisetty, Sanjeev K. Mishra, Bharath R. Rao