Patents Assigned to Google LLC
-
Patent number: 12111872
Abstract: According to an aspect, a method for searching within user-generated reviews includes receiving, from a client device, a search query to search within a plurality of user-generated reviews relating to a plurality of entities, and identifying, in response to the search query, a set of user-generated reviews from the plurality of user-generated reviews that correspond to one or more search terms of the search query, where the set of user-generated reviews includes a user-generated review for a first entity and a user-generated review for a second entity. The first entity is different from the second entity. The method includes providing at least a portion of the user-generated review for the first entity and at least a portion of the user-generated review for the second entity for simultaneous display on a comparison layout of a user interface of the client device.
Type: Grant
Filed: January 27, 2023
Date of Patent: October 8, 2024
Assignee: Google LLC
Inventors: Diego Baron, Hillary Page Ive, Rudi Anggono
-
Patent number: 12114394
Abstract: This document describes aspects of multiple active-coordination-set (ACS) aggregation for mobility management. A master base station coordinates aggregation of control-plane and user-plane communications, generated by a first active-coordination-set for a first joint communication between the first ACS and a user equipment (UE), where the first ACS includes the master base station and at least a second base station. The master base station receives, from a second master base station of a second ACS, control-plane information or user-plane data associated with a second joint communication between the second ACS and the UE, the second ACS including the second master base station and at least a third base station. The master base station aggregates the control-plane and user-plane communications with at least a portion of the control-plane information or the user-plane data to coordinate data throughput to the user equipment.
Type: Grant
Filed: December 31, 2019
Date of Patent: October 8, 2024
Assignee: Google LLC
Inventors: Jibing Wang, Erik Richard Stauffer
-
Patent number: 12111834
Abstract: Systems and methods for generating and providing outputs in a multi-device system can include leveraging environment-based prompt generation and generative model response generation to provide dynamic response generation and display. The systems and methods can obtain input data associated with one or more computing devices within an environment, can obtain environment data descriptive of the plurality of computing devices within the environment, and can generate a prompt based on the input data and environment data. The prompt can be processed with a generative model to generate a model-generated output. The model-generated output can then be transmitted to a particular computing device of the plurality of computing devices.
Type: Grant
Filed: December 20, 2023
Date of Patent: October 8, 2024
Assignee: Google LLC
Inventors: Victor Carbune, Arash Sadr, Matthew Sharifi
-
Patent number: 12112030
Abstract: A method includes outputting, for display, a graphical user interface that includes a graphical slider, the graphical slider including a trackbar that defines an axis in a first direction and a position indicator located at a first position along the trackbar. The method also includes receiving data indicative of a user input including a first displacement in the first direction and a second displacement in a second direction, the first direction perpendicular to the second direction. The method also includes mapping, based on both the first displacement in the first direction and the second displacement in the second direction, the user input to a second position along the trackbar. The method further includes outputting, for display, an updated graphical user interface that includes the position indicator at the second position along the trackbar.
Type: Grant
Filed: November 9, 2020
Date of Patent: October 8, 2024
Assignee: Google LLC
Inventor: Philip Quinn
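The two-axis mapping this abstract describes can be sketched as follows. The gain function, its constant, and the clamping behavior are illustrative assumptions, not details taken from the patent; the abstract only states that both displacements contribute to the new position.

```python
def map_to_slider(pos, dx, dy, track_len=100.0):
    """Map a 2D drag to a 1D slider position (illustrative sketch).

    dx is the displacement along the trackbar axis; dy is the
    perpendicular displacement. Here dy scales the gain so that
    dragging farther from the trackbar yields finer control.
    """
    gain = 1.0 / (1.0 + abs(dy) / 50.0)  # finer control as |dy| grows
    new_pos = pos + dx * gain
    return max(0.0, min(track_len, new_pos))  # clamp to the trackbar
```

With these assumed constants, a drag of 10 units along the axis moves the indicator 10 units when the pointer stays on the trackbar, but only 5 units when it is 50 units away from it.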
-
Patent number: 12111875
Abstract: Implementations described herein relate to pairing a location-based automated assistant with a user device. The user device can include, for example, a headphones apparatus and/or a device that is paired with the headphones apparatus. The user device provides an indication that it is present at a location that is associated with a location-based automated assistant. A trust measure is determined that is indicative of trust between the user device and the location-based automated assistant. User information is provided by the user device to the location-based automated assistant. The location-based automated assistant determines response data to provide, via one or more speakers associated with the user device, that is specific to the location and further based on the user information.
Type: Grant
Filed: December 14, 2022
Date of Patent: October 8, 2024
Assignee: Google LLC
Inventors: Victor Carbune, Matthew Sharifi
-
Patent number: 12112198
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributing machine learning workloads, e.g., computations for training a neural network or computing an inference using a neural network, across multiple hardware accelerators.
Type: Grant
Filed: December 15, 2022
Date of Patent: October 8, 2024
Assignee: Google LLC
Inventors: Jeffrey Adgate Dean, Sudip Roy, Michael Acheson Isard, Aakanksha Chowdhery, Brennan Saeta, Chandramohan Amyangot Thekkath, Daniel William Hurt, Hyeontaek Lim, Laurent El Shafey, Parker Edward Schuh, Paul Ronald Barham, Ruoming Pang, Ryan Sepassi, Sanjay Ghemawat, Yonghui Wu
-
Patent number: 12114048
Abstract: A method for aligning a translation of original caption data with an audio portion of a video is provided. The method involves identifying original caption data for the video that includes caption character strings, identifying translated language caption data for the video that includes translated character strings associated with an audio portion of the video, and mapping caption sentence fragments generated from the caption character strings to corresponding translated sentence fragments generated from the translated character strings based on timing associated with the original caption data and the translated language caption data.
Type: Grant
Filed: February 13, 2023
Date of Patent: October 8, 2024
Assignee: Google LLC
Inventors: Terrance Paul McCartney, Jr., Brian Colonna, Michael Nechyba
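One way to realize the timing-based mapping this abstract describes is greatest-temporal-overlap matching. The fragment representation and the greedy overlap rule below are assumptions for illustration; the patent does not specify this particular alignment strategy.

```python
def overlap(a, b):
    """Temporal overlap in seconds between two (start, end) intervals."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def align_fragments(caption_frags, translated_frags):
    """Map each caption sentence fragment to the translated fragment
    whose timing overlaps it most. Fragments are (start, end, text)
    tuples; returns {caption_index: translated_index}.
    """
    mapping = {}
    for i, (cs, ce, _) in enumerate(caption_frags):
        best = max(range(len(translated_frags)),
                   key=lambda j: overlap((cs, ce), translated_frags[j][:2]))
        mapping[i] = best
    return mapping
```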
-
Patent number: 12111713
Abstract: This document describes techniques and systems that enable a smartphone-based radar system for determining user intention in a lower-power mode. The techniques and systems use a radar field to enable the smartphone to accurately determine the presence or absence of a user and further determine the intention of the user to interact with the smartphone. Using these techniques, the smartphone can account for the user's nonverbal communication cues to determine and maintain an awareness of users in its environment, and only respond to direct interactions once a user has demonstrated an intention to interact, which preserves battery power. The smartphone may determine the user's intention by recognizing various cues from the user, such as a change in position relative to the smartphone, a change in posture, or by an explicit action, such as a gesture.
Type: Grant
Filed: February 9, 2022
Date of Patent: October 8, 2024
Assignee: Google LLC
Inventors: Leonardo Giusti, Ivan Poupyrev, Eiji Hayashi, Patrick M. Amihood
-
Patent number: 12112538
Abstract: A computer-implemented method for classifying video data with improved accuracy includes obtaining, by a computing system comprising one or more computing devices, video data comprising a plurality of video frames; extracting, by the computing system, a plurality of video tokens from the video data, the plurality of video tokens comprising a representation of spatiotemporal information in the video data; providing, by the computing system, the plurality of video tokens as input to a video understanding model, the video understanding model comprising a video transformer encoder model; and receiving, by the computing system, a classification output from the video understanding model.
Type: Grant
Filed: July 8, 2021
Date of Patent: October 8, 2024
Assignee: Google LLC
Inventors: Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lucic, Cordelia Luise Schmid
-
Patent number: 12114078
Abstract: The present disclosure relates to a low-light autofocus technique. One example embodiment includes a method. The method includes receiving an indication of a low-light condition for a camera system. The method also includes determining an extended exposure time for a low-light autofocus procedure of the camera system. Further, the method includes capturing, by the camera system, an extended frame for the low-light autofocus procedure. The extended frame is captured by the camera system using the determined extended exposure time. In addition, the method includes determining, based on the captured extended frame, an in-focus lens setting for a lens of the camera system.
Type: Grant
Filed: October 11, 2019
Date of Patent: October 8, 2024
Assignee: Google LLC
Inventors: Ying Chen Lou, Leung Chun Chan, Kiran Murthy, Qiurui He, Szepo Robert Hung, Sushil Nath
-
Patent number: 12114052
Abstract: A method includes causing, at a first time point, an initial view of immersive video content to be presented on a user device, the initial view including an interactive element and having a first horizontal field of view at a first angular direction. The interactive element is initially at a first angular position outside the first horizontal field of view. An input made via the user device is received at a second time point, the input indicating the initial view is to be changed towards the first angular position. A viewpoint of the immersive video content is caused to change to a first view having a second horizontal field of view at a second angular direction. The method includes determining that the first angular position is within the second horizontal field of view, identifying a content creator associated with the interactive element, and assigning attribution information to the content creator.
Type: Grant
Filed: October 13, 2023
Date of Patent: October 8, 2024
Assignee: Google LLC
Inventors: Justin Lewis, Ruxandra Georgiana Davies
-
Patent number: 12112755
Abstract: Methods, systems, and apparatus for an automated calling system are disclosed. Some implementations are directed to using a bot to initiate telephone calls and conduct telephone conversations with a user. The bot may be interrupted while providing synthesized speech during the telephone call. The interruption can be classified into one of multiple disparate interruption types, and the bot can react to the interruption based on the interruption type. Some implementations are directed to determining that a first user is placed on hold by a second user during a telephone conversation, and maintaining the telephone call in an active state in response to determining that the first user has hung up the telephone call. The first user can be notified when the second user rejoins the call, and a bot associated with the first user can notify the first user that the second user has rejoined the telephone call.
Type: Grant
Filed: September 8, 2022
Date of Patent: October 8, 2024
Assignee: Google LLC
Inventors: Asaf Aharoni, Eyal Segalis, Yaniv Leviathan
-
Patent number: 12111867
Abstract: Methods, systems, and media for associating scenes depicted in media content with a map of where the media content was produced are provided. In some embodiments, a method for presenting map information with video information is provided, the method comprising: receiving a request for a video from a user device; determining if there is location information associated with portions of the video; in response to determining that there is location information associated with the video, causing first map information corresponding to the location information to be presented in a first format during presentation of the video; receiving an indication that the first map information has been selected; in response to receiving the indication, causing second map information corresponding to the portion of the video that was being presented to be presented by the user device, wherein the second map information is presented in a second format.
Type: Grant
Filed: February 13, 2023
Date of Patent: October 8, 2024
Assignee: Google LLC
Inventors: Cinthia Rodrigues Abou Assali, Nayeli Rodriguez, Jonathan Becquemin, Leon Bayliss, Gregory Blay-Desforges
-
Patent number: 12112494
Abstract: Implementations relate to training a point cloud prediction model that can be utilized to process a single-view two-and-a-half-dimensional (2.5D) observation of an object, to generate a domain-invariant three-dimensional (3D) representation of the object. Implementations additionally or alternatively relate to utilizing the domain-invariant 3D representation to train a robotic manipulation policy model using, as at least part of the input to the robotic manipulation policy model during training, the domain-invariant 3D representations of simulated objects to be manipulated. Implementations additionally or alternatively relate to utilizing the trained robotic manipulation policy model in control of a robot based on output generated by processing generated domain-invariant 3D representations utilizing the robotic manipulation policy model.
Type: Grant
Filed: February 28, 2020
Date of Patent: October 8, 2024
Assignee: Google LLC
Inventors: Honglak Lee, Xinchen Yan, Soeren Pirk, Yunfei Bai, Seyed Mohammad Khansari Zadeh, Yuanzheng Gong, Jasmine Hsu
-
Publication number: 20240329405
Abstract: Various configurations of projectors and cameras are disclosed that use shared wafer-level optics, in which optical elements, e.g., microlenses, of a projector are fabricated on the same wafer as optical elements, e.g., microlenses, of a camera. Projectors and cameras can be mounted together on a mixed reality headset, e.g., an AR/VR headset, for example, as a feature of smart glasses. Some projectors and/or cameras can be co-located in the arm or temple of the glasses. Some projectors and/or cameras can be co-located near a center point of the frame of the glasses. Use of shared wafer-level optics provides a compact and efficient solution for simultaneously guiding light leaving a projector and light entering a camera.
Type: Application
Filed: March 28, 2024
Publication date: October 3, 2024
Applicant: Google LLC
Inventors: Daniel Adema, Shreyas Potnis
-
Publication number: 20240330767
Abstract: A method includes training a client machine learning (ML) model on client training data at a client device. While training the client ML model, the method also includes obtaining, from a server, server model weights of a server ML model trained on server training data, the server training data different from the client training data. While training the client ML model, the method also includes: transmitting, to the server, client model weights of the client ML model; updating the client ML model using the server model weights; obtaining, from the server, updated server model weights of the server ML model, the updated server model weights updated based on the transmitted client model weights; and further updating the client ML model using the updated server model weights.
Type: Application
Filed: March 20, 2024
Publication date: October 3, 2024
Applicant: Google LLC
Inventors: Andrew Hard, Rajiv Mathews
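The weight exchange described in this abstract can be sketched as a single client-side update round. Blending the two weight vectors with a fixed `blend` factor is an assumed update rule; the abstract says only that the client model is updated using the server weights, not how they are merged.

```python
def cotrain_step(client_weights, server_weights, blend=0.5):
    """One assumed round of client/server weight exchange: the client
    receives the server's weights and blends them into its own model
    via element-wise interpolation.
    """
    return [blend * c + (1.0 - blend) * s
            for c, s in zip(client_weights, server_weights)]
```

In a full loop the client would then transmit its updated weights back to the server and repeat with the server's refreshed weights, mirroring the obtain/transmit/update cycle in the abstract.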
-
Publication number: 20240334047
Abstract: The various embodiments described herein include methods, devices, and systems for power-management on camera devices. In one aspect, a method is performed at a camera device having memory, one or more processors, and an image sensor. The method includes: (1) while a wireless communication component of the camera device is deactivated: (a) capturing a plurality of images containing a motion event; (b) characterizing the motion event; and (c) determining, based on the characterization of the motion event, whether to send video data to a remote computing system; and (2) in accordance with a determination to send video data to the remote computing system: (i) activating the wireless communication component of the camera device; (ii) establishing a wireless connection to the remote computing system via the wireless communication component; and (iii) sending video information to the remote computing system via the established wireless connection.
Type: Application
Filed: June 14, 2024
Publication date: October 3, 2024
Applicant: Google LLC
Inventors: Sahana Mysore, Jacobi Grillo, Mikko Pekka Sannala, Robinder Virk, William Saperstein
-
Publication number: 20240330766
Abstract: A method includes receiving, from a client device, a client machine learning (ML) model and obtaining a set of training data including a plurality of training samples. The client ML model is trained locally on the client device. For each respective training sample in the plurality of training samples, the method also includes determining, using the respective training sample, a first loss of the client ML model; determining, using the respective training sample, a second loss of a server machine learning (ML) model; and determining a respective score based on the first loss and the second loss. The method also includes selecting, based on each respective score of each respective training sample in the plurality of training samples, a subset of training samples from the plurality of training samples and training the server ML model using the subset of training samples.
Type: Application
Filed: March 19, 2024
Publication date: October 3, 2024
Applicant: Google LLC
Inventors: Andrew Hard, Rajiv Mathews
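The score-and-select step above can be sketched as follows. Using the gap between the two losses as the score, and keeping the top-k samples, are illustrative assumptions; the abstract states only that the score is based on both losses.

```python
def select_samples(samples, client_loss_fn, server_loss_fn, k):
    """Score each training sample with the client- and server-model
    losses and keep the k highest-scoring samples (assumed rule:
    score = client loss minus server loss).
    """
    scored = []
    for s in samples:
        first = client_loss_fn(s)   # first loss: client ML model
        second = server_loss_fn(s)  # second loss: server ML model
        scored.append((first - second, s))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [s for _, s in scored[:k]]
```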
-
Publication number: 20240331700
Abstract: A method includes receiving a sequence of input audio frames and processing each corresponding input audio frame to determine a language ID event that indicates a predicted language. The method also includes obtaining speech recognition events each including a respective speech recognition result determined by a first language pack. Based on determining that the utterance includes a language switch from the first language to a second language, the method also includes loading a second language pack onto the client device and rewinding the input audio data buffered by an audio buffer to a time of the corresponding input audio frame associated with the language ID event that first indicated the second language as the predicted language. The method also includes emitting a first transcription and processing, using the second language pack loaded onto the client device, the rewound buffered audio data to generate a second transcription.
Type: Application
Filed: March 28, 2023
Publication date: October 3, 2024
Applicant: Google LLC
Inventors: Yang Yu, Quan Wang, Ignacio Lopez Moreno
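The rewind step above amounts to replaying buffered frames from the timestamp of the first language-ID event that predicted the new language. The class below is a minimal sketch of that buffer; the field names and event format are assumptions for illustration, not the patent's data structures.

```python
class AudioBuffer:
    """Buffer of timestamped audio frames supporting rewind to the
    frame where a language-ID event first predicted a new language.
    """
    def __init__(self):
        self.frames = []  # list of (timestamp, frame_data)

    def append(self, ts, frame):
        self.frames.append((ts, frame))

    def rewind_to(self, ts):
        """Return all buffered frames at or after ts, so the newly
        loaded language pack can re-decode them.
        """
        return [f for t, f in self.frames if t >= ts]
```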
-
Publication number: 20240331683
Abstract: A method for a soft acceptance of a hotword receives audio data characterizing a soft hotword event detected by a hotword detector in streaming audio captured by a user device. The method also processes the audio data to determine that the audio data corresponds to a query specifying an action to perform on the user device. Without triggering performance of the action on the user device or another device, the method provides a notification for output from the user device where the notification prompts a user associated with the user device to provide an affirmative input indication in order to trigger performance of the action on the user device or the other device and, when the user fails to provide the affirmative input indication, instructs the user device or the other device to not perform the action specified by the query.
Type: Application
Filed: June 6, 2024
Publication date: October 3, 2024
Applicant: Google LLC
Inventors: Brett Aladdin Barros, James Flynn, Theo Goguely