Patents by Inventor Anil Kokaram
Anil Kokaram has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240086041
Abstract: An interactive multi-view module identifies a plurality of media items associated with an event. Each of the plurality of media items is created by capturing the event. The interactive multi-view module synchronizes the audio portions of the media items according to a common reference timeline. The interactive multi-view module provides the media items for presentation in an interactive multi-view player interface based on the synchronized audio portions and multiple relative geographic locations. The interactive multi-view player interface allows a user of a plurality of users to switch between the plurality of media items, and indicates a video density indicating a quantity of media items available at a given point in time and a popularity indicator of one of the media items at the given point in time. The popularity indicator is determined using factors comprising a number of viewers of the media items at the given point in time.
Type: Application
Filed: November 13, 2023
Publication date: March 14, 2024
Inventors: Neil Birkbeck, Isasi Inguva, Damien Kelly, Andrew Crawford, Hugh Denman, Perry Tobin, Steve Benting, Anil Kokaram, Jeremy Doig
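The audio synchronization these multi-view patents build on can be illustrated with a simple cross-correlation sketch (an illustrative toy, not the claimed method; the function name and signal setup are invented for illustration): the lag that maximizes the cross-correlation between two audio tracks places both recordings on a common timeline.

```python
import numpy as np

def estimate_offset(ref_audio, other_audio, sample_rate):
    """Estimate the delay (in seconds) of `other_audio` relative to
    `ref_audio` as the lag maximizing their cross-correlation."""
    corr = np.correlate(other_audio, ref_audio, mode="full")
    # In "full" mode the lags run from -(len(ref_audio) - 1) upward.
    lag = int(np.argmax(corr)) - (len(ref_audio) - 1)
    return lag / sample_rate

# Two captures of the same event: the second one starts 0.5 s later.
rate = 1000
rng = np.random.default_rng(0)
audio = rng.standard_normal(2 * rate)            # 2 s of "event audio"
delayed = np.concatenate([np.zeros(rate // 2), audio])

offset = estimate_offset(audio, delayed, rate)   # ~0.5 s
```

In a real multi-view system each clip would be aligned against a common reference in this way before the interface exposes switching between them.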
-
Patent number: 11816310
Abstract: An interactive multi-view module identifies a plurality of media items associated with one or more real-world events. Each of the plurality of media items is created by capturing the real-world events from a particular geographic location. The interactive multi-view module determines a geographic position associated with each of the media items and presents the media items in an interactive multi-view player interface based at least on the geographic positions. The interactive multi-view player interface allows a user to switch between the media items, and indicates at least one of a video density indicating a number of media items available at a given point in time or an event highlight indicating a popularity of the media items at a given point in time. The popularity of the respective media items is determined using one or more factors comprising a number of views of the media items at a given point in time.
Type: Grant
Filed: August 24, 2020
Date of Patent: November 14, 2023
Assignee: Google LLC
Inventors: Neil Birkbeck, Isasi Inguva, Damien Kelly, Andrew Crawford, Hugh Denman, Perry Tobin, Steve Benting, Anil Kokaram, Jeremy Doig
-
Publication number: 20230267623
Abstract: Media items to be shared with users of a content sharing service are identified. Each of the media items corresponds to a video recording generated by a client device that depicts one or more objects corresponding to a real-world event and/or a geographic location. A location of the client device that generated the video recording corresponding to a respective media item of the media items is determined based on image features depicted in a set of frames of the video recording. A request for content associated with the real-world event and/or the geographic location is received from another client device connected to the content sharing service. The media items and, for each of the media items, an indication of the location of the client device that generated the corresponding video recording are provided in accordance with the request for content.
Type: Application
Filed: April 24, 2023
Publication date: August 24, 2023
Inventors: Joan Lasenby, Stuart Bennett, Sasi Inguva, Damien Kelly, Andrew Crawford, Hugh Denman, Anil Kokaram
-
Patent number: 11636610
Abstract: A set of media items to be shared with users of a content sharing service is identified. Each of the set of media items corresponds to a video recording generated by a client device that depicts one or more objects corresponding to a real-world event at a geographic location. A positioning of the client device that generated the video recording corresponding to a respective media item of the set of media items is determined. The positioning is determined based on image features depicted in a set of frames of the video recording. A request for content associated with at least one of the real-world event or the geographic location is received from another client device connected to the content sharing service. The set of media items and, for each of the set of media items, an indication of the determined positioning of the client device that generated the corresponding video recording is provided in accordance with the request for content.
Type: Grant
Filed: June 21, 2021
Date of Patent: April 25, 2023
Assignee: Google LLC
Inventors: Joan Lasenby, Stuart Bennett, Sasi Inguva, Damien Kelly, Andrew Crawford, Hugh Denman, Anil Kokaram
-
Publication number: 20210312641
Abstract: A set of media items to be shared with users of a content sharing service is identified. Each of the set of media items corresponds to a video recording generated by a client device that depicts one or more objects corresponding to a real-world event at a geographic location. A positioning of the client device that generated the video recording corresponding to a respective media item of the set of media items is determined. The positioning is determined based on image features depicted in a set of frames of the video recording. A request for content associated with at least one of the real-world event or the geographic location is received from another client device connected to the content sharing service. The set of media items and, for each of the set of media items, an indication of the determined positioning of the client device that generated the corresponding video recording is provided in accordance with the request for content.
Type: Application
Filed: June 21, 2021
Publication date: October 7, 2021
Inventors: Joan Lasenby, Stuart Bennett, Sasi Inguva, Damien Kelly, Andrew Crawford, Hugh Denman, Anil Kokaram
-
Patent number: 11042991
Abstract: The technology disclosed herein includes a method for determining the position of multiple cameras relative to each other. In one example, the method may include: receiving, by a processor, a first video recording of a first camera and a second video recording of a second camera; selecting a set of frames of the first video recording; determining a blurriness measure for multiple frames of the set; identifying feature points in multiple frames of the set; selecting a frame from the set of frames based on the blurriness measure and the identified feature points; and determining a position of a first camera relative to a second camera by comparing the selected frame with a frame of the second video recording.
Type: Grant
Filed: October 2, 2018
Date of Patent: June 22, 2021
Assignee: Google LLC
Inventors: Joan Lasenby, Stuart Bennett, Sasi Inguva, Damien Kelly, Andrew Crawford, Hugh Denman, Anil Kokaram
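The frame-selection step above (rank by blurriness, then by feature points) can be sketched in a few lines. Variance-of-Laplacian as the sharpness measure and a gradient-threshold feature count are common stand-ins; the patent does not commit to these particular measures, and all function names here are invented for illustration.

```python
import numpy as np

def sharpness(frame):
    """Variance of a discrete Laplacian: blurred frames score low."""
    lap = (-4 * frame[1:-1, 1:-1]
           + frame[:-2, 1:-1] + frame[2:, 1:-1]
           + frame[1:-1, :-2] + frame[1:-1, 2:])
    return lap.var()

def feature_count(frame, thresh=0.3):
    """Crude feature-point score: pixels with strong gradient magnitude."""
    gy, gx = np.gradient(frame)
    return int((np.hypot(gx, gy) > thresh).sum())

def select_frame(frames, keep=2):
    """Keep the `keep` sharpest frames, then pick the one with the
    most feature points -- the selection rule in the abstract."""
    sharpest = sorted(frames, key=sharpness, reverse=True)[:keep]
    return max(sharpest, key=feature_count)

# Synthetic frames: a sharp checkerboard, a heavily blurred copy, a flat frame.
y, x = np.indices((32, 32))
sharp = ((y // 4 + x // 4) % 2).astype(float)
blurred = sharp.copy()
for _ in range(6):                          # cheap 5-point smoothing passes
    blurred = (blurred + np.roll(blurred, 1, 0) + np.roll(blurred, -1, 0)
               + np.roll(blurred, 1, 1) + np.roll(blurred, -1, 1)) / 5
flat = np.zeros((32, 32))

best = select_frame([flat, blurred, sharp])  # picks the checkerboard
```

The chosen frame would then be matched against a frame of the second recording to recover the relative camera pose.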
-
Patent number: 10754511
Abstract: An interactive multi-view module identifies a plurality of media items associated with a real-world event, each of the plurality of media items comprising a video portion and an audio portion. The interactive multi-view module synchronizes the audio portions of each of the plurality of media items according to a common reference timeline, determines a relative geographic position associated with each of the plurality of media items and presents the plurality of media items in an interactive multi-view player interface based at least on the synchronized audio portions and the relative geographic positions.
Type: Grant
Filed: July 3, 2014
Date of Patent: August 25, 2020
Assignee: Google LLC
Inventors: Neil Birkbeck, Isasi Inguva, Damien Kelly, Andrew Crawford, Hugh Denman, Perry Tobin, Steve Benting, Anil Kokaram, Jeremy Doig
-
Patent number: 10674045
Abstract: Implementations disclose mutual noise estimation for videos. A method includes determining an optimal frame noise variance for intensity values of each frame of frames of a video, the optimal frame noise variance based on a determined relationship between spatial variance and temporal variance of the intensity values of homogeneous blocks in the frame, identifying an optimal video noise variance for the video based on optimal frame noise variances of the frames of the video, selecting, for each frame of the video, one or more of the blocks having a spatial variance that is less than the optimal video noise variance, the one or more blocks selected as the homogeneous blocks, and utilizing the selected homogeneous blocks to estimate a noise signal of the video.
Type: Grant
Filed: May 31, 2017
Date of Patent: June 2, 2020
Assignee: Google LLC
Inventors: Neil Birkbeck, Mohammad Izadi, Anil Kokaram, Balineedu C. Adsumilli, Damien Kelly
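The homogeneous-block idea can be sketched in simplified form: in flat blocks the spatial variance is dominated by noise, so averaging the variances of the flattest blocks approximates the noise variance. The patent derives its threshold from the spatial/temporal variance relationship; here a fixed low quantile stands in for it, and all names are invented for illustration.

```python
import numpy as np

def block_variances(frame, bs=8):
    """Spatial variance of each non-overlapping bs x bs block."""
    h, w = (frame.shape[0] // bs) * bs, (frame.shape[1] // bs) * bs
    blocks = frame[:h, :w].reshape(h // bs, bs, w // bs, bs)
    return blocks.var(axis=(1, 3)).ravel()

def estimate_noise_variance(frames, bs=8, quantile=0.25):
    """Average the spatial variances of the flattest quarter of blocks,
    pooled across frames, as a stand-in for the homogeneous-block set."""
    v = np.concatenate([block_variances(f, bs) for f in frames])
    return v[v <= np.quantile(v, quantile)].mean()

# Clean scene: flat left half, strong ramp on the right; noise sigma = 2.
rng = np.random.default_rng(1)
clean = np.hstack([np.full((64, 32), 100.0),
                   np.tile(np.linspace(0, 255, 32), (64, 1))])
frames = [clean + rng.normal(0, 2, clean.shape) for _ in range(3)]

sigma2 = estimate_noise_variance(frames)   # close to sigma^2 = 4
```

Textured (ramp) blocks have spatial variances far above the noise floor, so the quantile cut keeps only the flat blocks, as the abstract intends.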
-
Patent number: 10454987
Abstract: Implementations disclose bitrate optimization for multi-representation encoding using playback statistics.
Type: Grant
Filed: October 28, 2016
Date of Patent: October 22, 2019
Assignee: Google LLC
Inventors: Chao Chen, Yao-Chung Lin, Anil Kokaram, Steve Benting
-
Patent number: 10277919
Abstract: Described herein are techniques related to noise reduction for image sequences or videos. This Abstract is submitted with the understanding that it will not be used to interpret or limit the scope and meaning of the claims. A noise reduction tool includes a motion estimator configured to estimate motion in the video, a noise spectrum estimator configured to estimate noise in the video, a shot detector configured to trigger the noise estimation process, a noise spectrum validator configured to validate the estimated noise spectrum, and a noise reducer to reduce noise in the video using the estimated noise spectrum.
Type: Grant
Filed: March 22, 2016
Date of Patent: April 30, 2019
Assignee: Google LLC
Inventors: Anil Kokaram, Damien Kelly, Andrew Joseph Crawford, Hugh Pierre Denman
-
Publication number: 20190035090
Abstract: The technology disclosed herein includes a method for determining the position of multiple cameras relative to each other. In one example, the method may include: receiving, by a processor, a first video recording of a first camera and a second video recording of a second camera; selecting a set of frames of the first video recording; determining a blurriness measure for multiple frames of the set; identifying feature points in multiple frames of the set; selecting a frame from the set of frames based on the blurriness measure and the identified feature points; and determining a position of a first camera relative to a second camera by comparing the selected frame with a frame of the second video recording.
Type: Application
Filed: October 2, 2018
Publication date: January 31, 2019
Inventors: Joan Lasenby, Stuart Bennett, Sasi Inguva, Damien Kelly, Andrew Crawford, Hugh Denman, Anil Kokaram
-
Publication number: 20180352118
Abstract: Implementations disclose mutual noise estimation for videos. A method includes determining an optimal frame noise variance for intensity values of each frame of frames of a video, the optimal frame noise variance based on a determined relationship between spatial variance and temporal variance of the intensity values of homogeneous blocks in the frame, identifying an optimal video noise variance for the video based on optimal frame noise variances of the frames of the video, selecting, for each frame of the video, one or more of the blocks having a spatial variance that is less than the optimal video noise variance, the one or more blocks selected as the homogeneous blocks, and utilizing the selected homogeneous blocks to estimate a noise signal of the video.
Type: Application
Filed: May 31, 2017
Publication date: December 6, 2018
Inventors: Neil Birkbeck, Mohammad Izadi, Anil Kokaram, Balineedu C. Adsumilli, Damien Kelly
-
Patent number: 10096114
Abstract: A method for determining the position of multiple cameras relative to each other includes: at a processor, receiving video data from at least one video recording taken by each camera; selecting a subset of frames of each video recording, including determining relative blurriness of each frame of each video recording, selecting frames having a lowest relative blurriness, counting feature points in each of the lowest relative blurriness frames, and selecting, for further analysis, lowest relative blurriness frames having a highest count of feature points; and processing each selected subset of frames from each video recording to estimate the location and orientation of each camera.
Type: Grant
Filed: November 27, 2013
Date of Patent: October 9, 2018
Assignee: Google LLC
Inventors: Joan Lasenby, Stuart Bennett, Sasi Inguva, Damien Kelly, Andrew Crawford, Hugh Denman, Anil Kokaram
-
Publication number: 20180124146
Abstract: Implementations disclose bitrate optimization for multi-representation encoding using playback statistics.
Type: Application
Filed: October 28, 2016
Publication date: May 3, 2018
Inventors: Chao Chen, Yao-Chung Lin, Anil Kokaram, Steve Benting
-
Patent number: 9888255
Abstract: A method for pull frame interpolation includes receiving an encoded bitstream including information representing a plurality of frames of video data, decoding the plurality of frames, including identifying a plurality of motion vectors indicating motion from a first frame of the plurality of video frames to a second frame of the plurality of video frames, identifying an interpolation point between the first frame and the second frame, identifying a plurality of candidate interpolation motion vectors indicating motion from the first frame to the interpolation point and from the second frame to the interpolation point based on the plurality of motion vectors, selecting an interpolation motion vector from the plurality of candidate interpolation motion vectors based on a metric, and generating an interpolated frame at the interpolation point based on the selected interpolation motion vector, which may include correcting an artifact in the interpolated frame by blending the interpolated frame.
Type: Grant
Filed: March 24, 2016
Date of Patent: February 6, 2018
Assignee: Google Inc.
Inventors: Anil Kokaram, Damien Kelly, Andrew Joseph Crawford
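A toy version of this interpolation scheme, collapsed to a single global, integer-valued motion vector (the patent operates on per-block candidate vectors; the mean-absolute-difference metric and all names here are illustrative choices):

```python
import numpy as np

def select_mv(frame0, frame1, candidates):
    """Pick the candidate motion vector minimizing the mean absolute
    displaced-frame difference -- a simple stand-in for the 'metric'."""
    def cost(mv):
        return np.abs(np.roll(frame0, mv, axis=(0, 1)) - frame1).mean()
    return min(candidates, key=cost)

def interpolate_frame(frame0, frame1, mv, t=0.5):
    """Motion-compensated blend at interpolation point t in (0, 1):
    shift frame0 forward by t*mv and frame1 backward by (1-t)*mv."""
    fwd = np.roll(frame0, (round(t * mv[0]), round(t * mv[1])), axis=(0, 1))
    bwd = np.roll(frame1, (-round((1 - t) * mv[0]), -round((1 - t) * mv[1])),
                  axis=(0, 1))
    return (1 - t) * fwd + t * bwd

# frame1 is frame0 shifted by (2, 4); the selector should recover that.
rng = np.random.default_rng(3)
frame0 = rng.random((16, 16))
frame1 = np.roll(frame0, (2, 4), axis=(0, 1))

mv = select_mv(frame0, frame1, [(0, 0), (2, 4), (4, 2), (1, 1)])
mid = interpolate_frame(frame0, frame1, mv)   # frame at t = 0.5
```

With the correct vector selected, the midpoint frame equals the scene shifted halfway, which is exactly what a viewer expects from a doubled frame rate.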
-
Patent number: 9524733
Abstract: Methods and systems are provided for using a model of human speech quality perception to provide an objective measure for predicting subjective quality assessments. A Virtual Speech Quality Objective Listener (ViSQOL) model is a signal-based full-reference metric that uses a spectro-temporal measure of similarity between a reference signal and test speech signal. Specifically, the model provides for the ability to detect and predict the level of clock drift, and determine whether such clock drift will impact a listener's quality of experience.
Type: Grant
Filed: May 10, 2013
Date of Patent: December 20, 2016
Assignee: Google Inc.
Inventors: Jan Skoglund, Andrew J. Hines, Naomi A. Harte, Anil Kokaram
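The "spectro-temporal measure of similarity" idea can be sketched very crudely: compare reference and test signals frame-by-frame in the log-spectrogram domain. ViSQOL itself uses an NSIM-based comparison over a neurogram; the cosine similarity below is only a loose stand-in, and every name here is invented for illustration.

```python
import numpy as np

def log_spectrogram(x, win=256, hop=128):
    """Magnitude log-spectrogram from a hand-rolled windowed STFT."""
    window = np.hanning(win)
    n = 1 + (len(x) - win) // hop
    frames = np.stack([x[i * hop:i * hop + win] * window for i in range(n)])
    return np.log1p(np.abs(np.fft.rfft(frames, axis=1)))

def spectro_temporal_similarity(ref, test):
    """Mean per-frame cosine similarity between log-spectrograms."""
    a, b = log_spectrogram(ref), log_spectrogram(test)
    n = min(len(a), len(b))
    a, b = a[:n], b[:n]
    cos = (a * b).sum(axis=1) / (np.linalg.norm(a, axis=1)
                                 * np.linalg.norm(b, axis=1))
    return float(cos.mean())

# A clean 440 Hz tone versus a noisy copy of it.
t = np.arange(8000) / 8000.0
ref = np.sin(2 * np.pi * 440 * t)
rng = np.random.default_rng(2)
degraded = ref + 0.3 * rng.standard_normal(ref.size)

s_self = spectro_temporal_similarity(ref, ref)        # ~1.0
s_degraded = spectro_temporal_similarity(ref, degraded)
```

A full-reference metric of this shape scores the clean signal against itself at the top of the scale and penalizes degradations, which is the behavior a mean-opinion-score predictor is calibrated to.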
-
Patent number: 9479700
Abstract: A system for video stabilization is provided. The system includes a media component, a transformation component, an offset component and a zoom component. The media component receives a video sequence including at least a first video frame and a second video frame. The transformation component calculates at least a first motion parameter associated with translational motion for the first video frame and at least a second motion parameter associated with the translational motion for the second video frame. The offset component subtracts an offset value generated as a function of a maximum motion parameter and a minimum motion parameter from the first motion parameter and the second motion parameter to generate a set of modified motion parameters. The zoom component determines a zoom value for the video sequence based at least in part on the set of modified motion parameters.
Type: Grant
Filed: November 4, 2014
Date of Patent: October 25, 2016
Assignee: Google Inc.
Inventors: Andrew Joseph Crawford, Damien Kelly, Anil Kokaram, Hugh Pierre Denman
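One plausible instantiation of the offset-and-zoom idea, in one dimension (the abstract does not specify the offset function; the midrange `(max + min) / 2` used here, and all names, are assumptions for illustration): centering the per-frame translation corrections minimizes the worst-case border any frame exposes, and the zoom then only needs to cover that worst case.

```python
import numpy as np

def stabilizing_offset_and_zoom(dx, frame_width):
    """dx: per-frame horizontal translation corrections in pixels.
    Subtracting the midrange offset balances the extremes; the zoom
    factor then hides the largest remaining exposed border."""
    dx = np.asarray(dx, dtype=float)
    offset = (dx.max() + dx.min()) / 2        # f(max, min) from the abstract
    centered = dx - offset                     # modified motion parameters
    worst = np.abs(centered).max()             # largest exposed border, px
    zoom = frame_width / (frame_width - 2 * worst)
    return centered, zoom

centered, zoom = stabilizing_offset_and_zoom([-10.0, 4.0, 6.0], 640)
```

For corrections of -10, 4 and 6 pixels, the midrange offset is -2, the centered corrections become -8, 6 and 8, and a zoom of 640/624 (about 2.6%) covers the 8-pixel worst case on both sides.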
-
Publication number: 20160205415
Abstract: Described herein are techniques related to noise reduction for image sequences or videos. This Abstract is submitted with the understanding that it will not be used to interpret or limit the scope and meaning of the claims. A noise reduction tool includes a motion estimator configured to estimate motion in the video, a noise spectrum estimator configured to estimate noise in the video, a shot detector configured to trigger the noise estimation process, a noise spectrum validator configured to validate the estimated noise spectrum, and a noise reducer to reduce noise in the video using the estimated noise spectrum.
Type: Application
Filed: March 22, 2016
Publication date: July 14, 2016
Applicant: Google Inc.
Inventors: Anil Kokaram, Damien Kelly, Andrew Joseph Crawford, Hugh Pierre Denman
-
Patent number: 9390481
Abstract: Implementations generally relate to enhancing content appearance. In some implementations, a method includes receiving an image and selecting a reference object in the image. The method also includes determining one or more image parameter adjustments based on the selected reference object, and applying the one or more image parameter adjustments to the entire image.
Type: Grant
Filed: June 1, 2013
Date of Patent: July 12, 2016
Assignee: Google Inc.
Inventors: Thor Carpenter, Anil Kokaram
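A minimal sketch of the reference-object idea, reduced to a single brightness gain (the patent covers general parameter adjustments; the rectangular reference region, the target mean, and the function name are assumptions for illustration): derive a correction from the reference region, then apply it to the whole image.

```python
import numpy as np

def enhance_from_reference(image, ref_box, target_mean=0.5):
    """Derive a global gain that brings the reference region to
    `target_mean`, then apply it to the entire image.
    ref_box = (top, bottom, left, right); image values in [0, 1]."""
    t, b, l, r = ref_box
    gain = target_mean / image[t:b, l:r].mean()
    return np.clip(image * gain, 0.0, 1.0)

# Underexposed image whose reference patch (e.g. a known object) sits
# at mean 0.25; the derived gain of 2 brightens the whole frame.
img = np.full((10, 10), 0.4)
img[:5, :5] = 0.25                       # the selected reference object
out = enhance_from_reference(img, (0, 5, 0, 5))
```

Anchoring the adjustment to a known reference (a face, a logo, a gray card) is what lets a single global correction be chosen without analyzing the whole scene.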
-
Patent number: 9326008
Abstract: Described herein are techniques related to noise reduction for image sequences or videos. This Abstract is submitted with the understanding that it will not be used to interpret or limit the scope and meaning of the claims. A noise reduction tool includes a motion estimator configured to estimate motion in the video, a noise spectrum estimator configured to estimate noise in the video, a shot detector configured to trigger the noise estimation process, a noise spectrum validator configured to validate the estimated noise spectrum, and a noise reducer to reduce noise in the video using the estimated noise spectrum.
Type: Grant
Filed: April 10, 2012
Date of Patent: April 26, 2016
Assignee: Google Inc.
Inventors: Anil Kokaram, Damien Kelly, Andrew Joseph Crawford, Hugh Pierre Denman