Abstract: Systems and methods for providing map data of a selected region and supplemental data associated with one or more locations outside of the selected region are disclosed. A computing system can initiate one or more requests for map data associated with a selected region of a mapped region. The one or more requests can be associated with one or more search criteria. The computing system can receive the map data associated with the selected region and supplemental data associated with a subset of records. Each record may satisfy the one or more search criteria and be associated with a respective location outside of the selected region. The computing system can present, within a viewport of a digital mapping application, the map data of the selected region and a visual indication of the supplemental data associated with the subset of records.
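As a rough illustration of the request/response flow described above, the sketch below models the selected region as a bounding box and splits the records matching the search criteria into in-region map data and out-of-region supplemental data. All of the names (Record, BoundingBox, handle_map_request) are hypothetical and not taken from the patent.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Record:
    name: str
    lat: float
    lng: float


@dataclass
class BoundingBox:
    """Selected region of the mapped area (viewport), as a lat/lng box."""
    min_lat: float
    max_lat: float
    min_lng: float
    max_lng: float

    def contains(self, record: Record) -> bool:
        return (self.min_lat <= record.lat <= self.max_lat
                and self.min_lng <= record.lng <= self.max_lng)


def handle_map_request(region: BoundingBox,
                       criteria: Callable[[Record], bool],
                       all_records: List[Record]) -> dict:
    """Return map data for the selected region plus supplemental data for
    matching records whose locations fall outside that region."""
    matching = [r for r in all_records if criteria(r)]
    in_region = [r for r in matching if region.contains(r)]
    # Supplemental data: records satisfying the criteria but located outside
    # the selected region, so the client can hint at off-screen results.
    supplemental = [r for r in matching if not region.contains(r)]
    return {"map_data": in_region, "supplemental": supplemental}
```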
Abstract: A method includes generating, using first restoration parameters, a first guide tile for a degraded tile of the degraded frame, the degraded tile corresponding to a source tile of the source frame; generating, using second restoration parameters, a second guide tile for the degraded tile of the degraded frame, the second restoration parameters being different from the first restoration parameters; determining a first tile difference between the source tile and the first guide tile; determining a second tile difference between the source tile and the second guide tile; calculating projection parameters that minimize a difference between a restored tile of the degraded tile and the source tile; and encoding, in an encoded bitstream, the projection parameters. The difference between the restored tile of the degraded tile and the source tile is a linear combination, using the projection parameters, of the first tile difference and the second tile difference.
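The projection step described above can be read as a small least-squares problem. The sketch below assumes the restored tile is formed as the degraded tile plus a linear combination of the two guide-tile differences, a common formulation of dual-filter (self-guided) restoration; the exact definition of the differences in the claims may differ, and the function names are illustrative.

```python
import numpy as np


def projection_parameters(source, degraded, guide1, guide2):
    """Solve for (alpha, beta) minimizing
    || source - (degraded + alpha*(guide1 - degraded) + beta*(guide2 - degraded)) ||^2.
    All inputs are 2-D arrays of the same tile shape."""
    d1 = (guide1 - degraded).ravel()   # first tile difference
    d2 = (guide2 - degraded).ravel()   # second tile difference
    target = (source - degraded).ravel()
    A = np.stack([d1, d2], axis=1)     # (pixels, 2) design matrix
    params, *_ = np.linalg.lstsq(A, target, rcond=None)
    return params                      # alpha, beta written to the bitstream


def restore_tile(degraded, guide1, guide2, alpha, beta):
    """Apply decoded projection parameters to form the restored tile."""
    return degraded + alpha * (guide1 - degraded) + beta * (guide2 - degraded)
```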
Abstract: The present disclosure provides systems and methods that provide feedback to a user of an image capture device that includes an artificial intelligence system that analyzes incoming image frames to, for example, determine whether to automatically capture and store the incoming frames. An example system can also present, in the viewfinder portion of a user interface presented on a display, a graphical intelligence feedback indicator in association with a live video stream. The graphical intelligence feedback indicator can graphically indicate, for each of a plurality of image frames as such image frame is presented within the viewfinder portion of the user interface, a respective measure of one or more attributes of the scene depicted by the image frame, as output by the artificial intelligence system.
Type: Grant
Filed: January 22, 2019
Date of Patent: August 2, 2022
Assignee: GOOGLE LLC
Inventors: Aaron Michael Donsbach, Christopher Breithaupt, Li Zhang, Arushan Rajasekaram, Navid Shiee
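As a loose illustration of the feedback indicator described in the abstract above, the sketch below renders a per-frame score from a hypothetical intelligence model as a simple text gauge; a real viewfinder would drive a graphical widget instead, and the score values shown are made up.

```python
def update_feedback_indicator(frame_scores, max_bars=10):
    """Render a crude text 'intelligence feedback indicator' for each frame's
    score in [0, 1], e.g. the model's estimate that the scene is capture-worthy."""
    for i, score in enumerate(frame_scores):
        bars = "#" * round(score * max_bars)
        print(f"frame {i:03d} [{bars:<{max_bars}}] {score:.2f}")


# Example: scores that might come from a model analyzing the live video stream.
update_feedback_indicator([0.12, 0.45, 0.83, 0.91, 0.40])
```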
Abstract: The present disclosure provides systems and methods for on-device machine learning. In particular, the present disclosure is directed to an on-device machine learning platform and associated techniques that enable on-device prediction, training, example collection, and/or other machine learning tasks or functionality. The on-device machine learning platform can include a context provider that securely injects context features into collected training examples and/or client-provided input data used to generate predictions/inferences. Thus, the on-device machine learning platform can enable centralized training example collection, model training, and usage of machine-learned models as a service to applications or other clients.
Type: Grant
Filed: August 11, 2017
Date of Patent: August 2, 2022
Assignee: GOOGLE LLC
Inventors: Pannag Sanketi, Wolfgang Grieskamp, Daniel Ramage, Hrishikesh Aradhye
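One way to picture the context provider described in the abstract above is as a component that enriches each collected example with device context before it is stored. The sketch below is a minimal, hypothetical version; the field names and the ContextProvider API are assumptions, not the platform's actual interface.

```python
import time
from typing import Any, Dict


class ContextProvider:
    """Hypothetical context provider that injects device context features into
    training examples before they are stored by the on-device platform."""

    def current_context(self) -> Dict[str, Any]:
        # In practice this might include location, screen state, etc.,
        # gated by the permissions granted to the client application.
        return {"timestamp": time.time(), "battery_level": 0.87, "network": "wifi"}

    def inject(self, example: Dict[str, Any]) -> Dict[str, Any]:
        enriched = dict(example)
        enriched["context"] = self.current_context()
        return enriched


# A client app hands raw features to the platform; context is added centrally.
provider = ContextProvider()
training_example = provider.inject({"features": [0.2, 0.7], "label": 1})
print(training_example)
```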
Abstract: Methods and systems are provided for ranking search results and generating a presentation. In some implementations, a search system generates a presentation based on a search query. In some implementations, a search system ranks search results based on data stored in a knowledge graph. In some implementations, a search system identifies a modifying concept such as a superlative in a received search query, and determines ranking properties based on the modifying concept.
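As an illustration of ranking driven by a modifying concept, the sketch below detects a superlative in the query and sorts results by a knowledge-graph property associated with it. The superlative table and property names are invented for the example.

```python
# Illustrative mapping: superlative -> (knowledge-graph property, sort descending?)
SUPERLATIVES = {"tallest": ("height_m", True), "oldest": ("founded_year", False)}


def rank_results(query: str, results: list) -> list:
    """Rank results using a property selected by a modifying concept (here, a
    superlative) detected in the received search query."""
    for word in query.lower().split():
        if word in SUPERLATIVES:
            prop, descending = SUPERLATIVES[word]
            return sorted(results, key=lambda e: e.get(prop, 0), reverse=descending)
    return results  # no modifying concept found; keep the original ranking


buildings = [{"name": "A", "height_m": 310}, {"name": "B", "height_m": 452}]
print(rank_results("tallest buildings", buildings))
```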
Abstract: Systems and methods of providing mediated social interactions are provided. For instance, a user input from a first user indicative of a request to facilitate a provision of emotive contextual signals to a second user can be received. One or more emotive contextual signals to be provided to the second user can be determined based at least in part on the user input. The one or more emotive contextual signals can include one or more haptic feedback signals intended to facilitate a mediated social interaction associated with the second user.
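A very rough sketch of the flow described above, assuming a hypothetical mapping from the first user's chosen emotive cue to a haptic pattern delivered to the second user; the cue names and vibration timings are made up.

```python
from typing import List

# Illustrative mapping from an emotive cue chosen by the first user to a haptic
# feedback pattern (vibration durations in milliseconds) for the second user.
HAPTIC_PATTERNS = {
    "reassurance": [80, 40, 80],
    "excitement": [30, 20, 30, 20, 30],
}


def signals_for_request(user_input: str) -> List[int]:
    """Determine the haptic feedback signal(s) to provide to the second user
    based at least in part on the first user's input."""
    return HAPTIC_PATTERNS.get(user_input, [])


print(signals_for_request("reassurance"))
```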
Abstract: Implementations include systems and methods for querying a data graph. An example method includes receiving a machine learning module trained to produce a model with multiple weighted features for a query, each feature representing a path in a data graph. The method also includes receiving a search query that includes a first search term, mapping the search query to the query, and mapping the first search term to a first entity in the data graph. The method may also include identifying a second entity in the data graph using the first entity and at least one of the multiple weighted features, and providing information relating to the second entity in a response to the search query. Some implementations may also include training the machine learning module by, for example, generating positive and negative training examples from an answer to a query.
Type: Grant
Filed: October 13, 2020
Date of Patent: August 2, 2022
Assignee: GOOGLE LLC
Inventors: Amarnag Subramanya, Fernando Pereira, Ni Lao, John Blitzer, Rahul Gupta
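The path-feature idea described in the abstract above can be sketched with a toy graph: each model feature is a relation path, and candidate second entities are scored by the summed weights of the paths that reach them. The graph, model weights, and function names below are illustrative only.

```python
# Toy data graph: entity -> {relation: [neighboring entities]}.
GRAPH = {
    "france": {"capital": ["paris"], "continent": ["europe"]},
    "paris": {"landmark": ["eiffel tower"]},
}

# A "model" for one query type: weighted path features, each feature being a
# relation path through the data graph (weights are invented for the example).
MODEL = {("capital",): 0.9, ("continent", "capital"): 0.1}


def follow(entity, path):
    """Return the entities reached from `entity` by walking the relation path."""
    frontier = [entity]
    for relation in path:
        frontier = [n for e in frontier for n in GRAPH.get(e, {}).get(relation, [])]
    return frontier


def answer(first_entity):
    """Score candidate second entities by summing the weights of the path
    features that reach them, then return the best-scoring entity."""
    scores = {}
    for path, weight in MODEL.items():
        for candidate in follow(first_entity, path):
            scores[candidate] = scores.get(candidate, 0.0) + weight
    return max(scores, key=scores.get) if scores else None


print(answer("france"))  # -> 'paris'
```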
Abstract: A method for classifying media content is disclosed. The method includes identifying, by a processing device, a plurality of search results corresponding to a search query, the plurality of search results corresponding to a plurality of media items; identifying, by the processing device, at least one first media item and a second media item of the plurality of media items, the first media item being associated with a first content label, the second media item being associated with a second content label; determining, based at least in part on a first user interaction with the first media item, whether the search query represents a request for media content associated with the first content label; and in response to determining that the search query represents the request for media content associated with the first content label, associating, by the processing device, the second media item with the first content label.
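As a simplified illustration of the label-propagation step described above: if most clicked results for a query already carry a content label, that label is associated with the other results returned for the query. The threshold and data layout below are assumptions, not the patent's method.

```python
def propagate_label(search_results, interactions, label, click_threshold=0.5):
    """If user interactions suggest the query seeks content with `label`
    (e.g. most clicks land on items already carrying it), associate the label
    with the other media items returned for that query."""
    clicked = [item for item in search_results if interactions.get(item["id"], 0) > 0]
    if not clicked:
        return search_results
    labeled_clicks = sum(1 for item in clicked if label in item["labels"])
    if labeled_clicks / len(clicked) >= click_threshold:
        for item in search_results:
            if label not in item["labels"]:
                item["labels"].append(label)
    return search_results


results = [{"id": 1, "labels": ["music"]}, {"id": 2, "labels": []}]
print(propagate_label(results, {1: 3}, "music"))
```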
Abstract: An apparatus for entropy coding a sequence of bits obtains, using a first probability distribution, a first conditional probability for coding a bit at a position within the sequence of bits, the first conditional probability being a conditional probability of the bit having a certain value given that a sub-sequence of the sequence of bits has first respective values; obtains, using a second probability distribution that is different from the first probability distribution, a second conditional probability for coding the bit, the second conditional probability being a conditional probability of the bit having the certain value given that the sub-sequence has second respective values; obtains, using the first conditional probability and the second conditional probability, a mixed probability for coding the bit; and codes the bit using the mixed probability.
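The mixing step described above can be illustrated with a fixed linear mixture of the two conditional probabilities; the patent's actual mixing rule may be adaptive or nonlinear. The ideal code-length helper below simply shows how the mixed probability would drive an entropy coder.

```python
import math


def mixed_probability(p1: float, p2: float, w: float = 0.5) -> float:
    """Mix two conditional probabilities of the bit being 1.
    A uniform linear mixture (w = 0.5) is one simple choice."""
    return w * p1 + (1.0 - w) * p2


def code_cost_bits(bit: int, p_one: float) -> float:
    """Ideal entropy-coding cost of `bit` under probability `p_one` of a 1."""
    p = p_one if bit == 1 else 1.0 - p_one
    return -math.log2(p)


# Two models condition on the same sub-sequence but predict differently.
p_mix = mixed_probability(0.80, 0.60)
print(p_mix, code_cost_bits(1, p_mix))
```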
Abstract: Decoding a current frame includes identifying a first reference frame and a second reference frame for decoding the current frame; storing reference motion vectors of reference blocks of the first reference frame, where other reference frames are used to decode the first reference frame; identifying motion trajectories that pass through the current frame by projecting the reference motion vectors of the reference blocks of the first reference frame onto the current frame using at least a third reference frame of the other reference frames, where the projecting identifies, for a first current block of the current frame, a corresponding first reference block in the first reference frame, and a corresponding reference motion vector of the reference motion vectors is associated with the corresponding first reference block; and projecting the corresponding reference motion vector onto the second reference frame to obtain a second reference block in the second reference frame.
Type: Grant
Filed: August 3, 2020
Date of Patent: August 2, 2022
Assignee: GOOGLE LLC
Inventors: Jingning Han, Yaowu Xu, James Bankoski, Jia Feng
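A minimal sketch of the temporal projection described in the abstract above, assuming motion is roughly constant along a trajectory so that a stored reference motion vector can be scaled by the ratio of frame distances; rounding, clamping, and block alignment are omitted, and the distances used are illustrative.

```python
from dataclasses import dataclass


@dataclass
class MotionVector:
    row: float
    col: float


def project_mv(mv: MotionVector, dist_ref: int, dist_target: int) -> MotionVector:
    """Linearly scale a reference motion vector by the ratio of frame distances,
    assuming roughly constant motion along the trajectory."""
    scale = dist_target / dist_ref
    return MotionVector(mv.row * scale, mv.col * scale)


# A reference block's MV, stored with the first reference frame, spans
# `dist_ref` frames; project it onto the current frame, then toward the
# second reference frame using their respective temporal distances.
ref_mv = MotionVector(4.0, -2.0)
mv_to_current = project_mv(ref_mv, dist_ref=4, dist_target=2)
mv_to_second_ref = project_mv(ref_mv, dist_ref=4, dist_target=-1)
print(mv_to_current, mv_to_second_ref)
```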
Abstract: Implementations utilize deep reinforcement learning to train a policy neural network that parameterizes a policy for determining a robotic action based on a current state. Some of those implementations collect experience data from multiple robots that operate simultaneously. Each robot generates instances of experience data during iterative performance of episodes that are each explorations of performing a task, and that are each guided based on the policy network and the current policy parameters for the policy network during the episode. The collected experience data is generated during the episodes and is used to train the policy network by iteratively updating policy parameters of the policy network based on a batch of collected experience data. Further, prior to performance of each of a plurality of episodes performed by the robots, the current updated policy parameters can be provided (or retrieved) for utilization in performance of the episode.
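The collection/training loop described above can be sketched as several workers filling a shared buffer with episode data generated under the current policy parameters, with the latest parameters pulled before each episode. The sketch below only mimics that structure; the episode data and the "update" are placeholders, not a real gradient step.

```python
import random
from collections import deque


class ReplayBuffer:
    """Shared buffer holding experience data collected from several robots."""
    def __init__(self, capacity=10000):
        self.data = deque(maxlen=capacity)

    def add(self, transition):
        self.data.append(transition)

    def sample(self, batch_size):
        return random.sample(list(self.data), min(batch_size, len(self.data)))


def run_episode(robot_id, policy_params, steps=5):
    """Stand-in for one robot performing an episode guided by the current
    policy parameters; a real system would execute actions on hardware."""
    return [(robot_id, policy_params["version"], step, random.random())
            for step in range(steps)]


def train(num_robots=3, num_rounds=4):
    buffer = ReplayBuffer()
    policy_params = {"version": 0}
    for _ in range(num_rounds):
        # Each robot fetches the latest parameters before its episode.
        for robot_id in range(num_robots):
            for transition in run_episode(robot_id, policy_params):
                buffer.add(transition)
        batch = buffer.sample(batch_size=16)
        # A real update would apply a gradient step on `batch`;
        # here we only bump a version number to mimic the iteration.
        policy_params = {"version": policy_params["version"] + 1}
    return policy_params


print(train())
```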
Abstract: Recommending an automated assistant action for inclusion in an existing automated assistant routine of a user, where the existing automated assistant routine includes a plurality of preexisting automated assistant actions. If the user confirms the recommendation through affirmative user interface input, the automated assistant action can be automatically added to the existing automated assistant routine. Thereafter, when the automated assistant routine is initialized, the preexisting automated assistant actions of the routine will be performed, as well as the automated assistant action that was automatically added to the routine in response to affirmative user interface input received in response to the recommendation.
Inventors: Rochus Emmanuel Jacob, Oliver Mueller, Nicholas Unger Webb, Adam Duckworth Mittleman, Jason Goulden, Kevin Edward Booth, Tyler Scott Wilson, Mark Kraz, Jeffrey Hui-Kwun Law, William Dong
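As a small illustration of the routine abstract above: an action recommended by the assistant is appended to an existing routine only after affirmative confirmation, after which initializing the routine performs the preexisting actions plus the added one. The Routine type and action strings are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Routine:
    name: str
    actions: List[str] = field(default_factory=list)


def maybe_add_recommended_action(routine: Routine, recommended: str,
                                 user_confirmed: bool) -> Routine:
    """Add a recommended assistant action to an existing routine only if the
    user affirmed the recommendation; otherwise leave the routine unchanged."""
    if user_confirmed and recommended not in routine.actions:
        routine.actions.append(recommended)
    return routine


good_morning = Routine("good morning", ["report weather", "start coffee maker"])
maybe_add_recommended_action(good_morning, "read calendar", user_confirmed=True)
print(good_morning.actions)
```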