Patents Assigned to Google LLC
  • Patent number: 12271558
    Abstract: A computing device may determine, based on one or more inputs detected by a presence-sensitive screen, whether at least a threshold amount of liquid is present on the presence-sensitive screen. The computing device may automatically transition the computing device from operating in a first operating mode to operating in a second operating mode responsive to determining that at least the threshold amount of liquid is present. The computing device may discard inputs detected by the presence-sensitive screen while the computing device is operating in the second operating mode.
    (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: July 6, 2023
    Date of Patent: April 8, 2025
    Assignee: Google LLC
    Inventors: John J. Anthony, III, Aaron Michael Rudolph, Tyler Gore, Sushant Sundaresh
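    The Python sketch below illustrates the kind of threshold-based mode switch the abstract describes. It is not the patented implementation: the OperatingMode names, the WET_AREA_THRESHOLD_MM2 value, and the estimate_liquid_area() heuristic are assumptions made for illustration.

      from enum import Enum, auto

      WET_AREA_THRESHOLD_MM2 = 120.0  # assumed "threshold amount of liquid"

      class OperatingMode(Enum):
          NORMAL = auto()      # first operating mode: inputs are processed
          WET_SCREEN = auto()  # second operating mode: inputs are discarded

      class TouchInputFilter:
          def __init__(self):
              self.mode = OperatingMode.NORMAL

          def estimate_liquid_area(self, contacts):
              # Stand-in heuristic: treat large, low-pressure contacts as liquid.
              return sum(c["area_mm2"] for c in contacts if c["pressure"] < 0.1)

          def on_touch_frame(self, contacts):
              # Automatically transition to the second mode when enough liquid is present.
              # (How and when the device returns to NORMAL is omitted here.)
              if self.estimate_liquid_area(contacts) >= WET_AREA_THRESHOLD_MM2:
                  self.mode = OperatingMode.WET_SCREEN
              if self.mode == OperatingMode.WET_SCREEN:
                  return []        # discard inputs while in the second operating mode
              return contacts      # otherwise forward inputs for normal handling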
  • Patent number: 12273167
    Abstract: A user equipment (UE) manages thermal levels of antenna modules with reference to a temperature threshold. The UE includes multiple antenna modules having a first antenna module and a second antenna module and at least one wireless transceiver coupled to the multiple antenna modules. The UE also includes a processor and memory system implementing an antenna module thermal manager. The manager is configured to obtain a first temperature indication corresponding to the first antenna module of the multiple antenna modules. The manager is also configured to perform a comparison of the first temperature indication to at least one temperature threshold. The manager is further configured to switch, based on the comparison, from using the first antenna module to using the second antenna module for wireless communication with the at least one wireless transceiver.
    (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: April 8, 2025
    Assignee: Google LLC
    Inventors: Erik Richard Stauffer, Jibing Wang, Aamir Akram, Vijay L. Asrani
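    A minimal Python sketch of the switching behavior summarized above; the 55 °C threshold, the module names, and the method signature are assumptions rather than details from the patent.

      TEMP_THRESHOLD_C = 55.0  # assumed temperature threshold

      class AntennaModuleThermalManager:
          def __init__(self, modules):
              self.modules = modules  # e.g. ["antenna_module_0", "antenna_module_1"]
              self.active = 0         # index of the module currently in use

          def on_temperature_indication(self, module_index, temp_c):
              # Only react to temperature indications for the module in use.
              if module_index != self.active:
                  return self.modules[self.active]
              # Compare the indication to the threshold and, based on the
              # comparison, switch to the other module for wireless communication.
              if temp_c >= TEMP_THRESHOLD_C:
                  self.active = (self.active + 1) % len(self.modules)
              return self.modules[self.active]

      manager = AntennaModuleThermalManager(["module_a", "module_b"])
      print(manager.on_temperature_indication(0, 61.2))  # -> module_b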
  • Publication number: 20250111570
    Abstract: A media application receives a set of images that include a source image and a target image, the source image and the target image including at least a subject. The media application determines whether to use one or more editors selected from a group of a head editor, a face editor, or combinations thereof. Responsive to determining to use the head editor, the media application generates a composite image by replacing at least a portion of head pixels associated with a target head of the subject in the target image with head pixels from a source head of the subject in the source image and replacing neck pixels associated with a target neck and shoulder pixels associated with target shoulders that include an area between the target head and a target torso with an interpolated region that is generated from an interpolation of the source image and the target image.
    Type: Application
    Filed: October 3, 2024
    Publication date: April 3, 2025
    Applicant: Google LLC
    Inventors: Jay TENENBAUM, Assaf ZOMET, Barel LEVY, Yaron BRODSKY, Shiran ZADA, Yael Pritch KNAAN, Ariel EPHRAT, Inbar MOSSERI, Avram GOLBERT
  • Publication number: 20250111674
    Abstract: This document describes systems and techniques for implementing event summarization over a period of time. A request is received to create an event summarization that includes details associated with the event summarization. The systems and techniques identify at least one image relevant to the event summarization based on the details associated with the event summarization. At least one of the identified images that is relevant to the event summarization is selected. The selected images are arranged based on how they will be included in the event summarization. A video summary is created that represents the event summarization and includes the arrangement of the selected images.
    (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: January 23, 2024
    Publication date: April 3, 2025
    Applicant: Google LLC
    Inventors: Yuan Li, Indu Ramamurthi, Ryan Kam Wang Tai
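    A loose Python sketch of the pipeline described above, under assumed data shapes: images carry tags and timestamps, the event details carry keywords, and the "video summary" is represented as an ordered frame list with an assumed pacing.

      def summarize_event(details, images, max_images=5):
          # Identify images relevant to the event summarization details.
          wanted = set(details.get("keywords", []))
          relevant = [img for img in images if wanted & set(img.get("tags", []))]
          # Select and arrange the images (here: chronologically).
          selected = sorted(relevant, key=lambda img: img["timestamp"])[:max_images]
          # Represent the video summary that includes the arrangement.
          return {
              "title": details.get("title", "Event summary"),
              "frames": [img["uri"] for img in selected],
              "seconds_per_frame": 2.0,  # assumed pacing
          }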
  • Publication number: 20250110731
    Abstract: A computer-implemented method includes receiving an original code snapshot corresponding to original code from a first file of a plurality of files. The method also includes receiving a modified code snapshot corresponding to modified code that includes a code modification modifying the original code. The method also includes generating, using a large language model (LLM), refactoring code based on the original code snapshot and the modified code snapshot. The refactoring code is configured to apply the code modification to code from other files of the plurality of files associated with the original code. The method also includes identifying target code from a second file of the plurality of files, where the target code is associated with the original code. The method also includes applying the code modification to the identified target code using the refactoring code.
    (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: September 29, 2023
    Publication date: April 3, 2025
    Applicant: Google LLC
    Inventors: Mateusz Lewko, Marko Ivankovic, Luka Rimanic
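    The workflow lends itself to a short Python sketch. The prompt format, the apply_edit() contract, and the llm callable below are illustrative assumptions, not the method claimed in the application; executing model output as shown is for demonstration only.

      def generate_refactoring_code(llm, original_snapshot, modified_snapshot):
          # Ask the LLM for a small program that reproduces the code modification.
          prompt = (
              "Write a Python function apply_edit(source: str) -> str that applies "
              "the same change shown below to similar code.\n"
              f"--- original ---\n{original_snapshot}\n"
              f"--- modified ---\n{modified_snapshot}\n"
          )
          return llm(prompt)  # returns source code defining apply_edit()

      def apply_to_other_files(refactoring_code, target_sources):
          # target_sources maps file paths to target code associated with the original.
          namespace = {}
          exec(refactoring_code, namespace)  # illustration only; do not exec untrusted output
          apply_edit = namespace["apply_edit"]
          return {path: apply_edit(src) for path, src in target_sources.items()}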
  • Publication number: 20250110850
    Abstract: Techniques are described for evaluating computer-readable code. In example aspects, a machine-learned model is trained to evaluate computer-readable code and/or its corresponding code description. As part of the evaluation, the machine-learned model can determine a level of agreement between the code description and the computer-readable code. Additionally or alternatively, the machine-learned model can determine that a prohibited feature is absent from (or present in) the computer-readable code. If present, the prohibited feature can compromise a security of a device that executes the computer-readable code, a safety of a user operating the device, and/or the user's privacy. Additionally or alternatively, the prohibited feature can violate a policy of a manufacturer of the device.
    Type: Application
    Filed: September 27, 2024
    Publication date: April 3, 2025
    Applicant: Google LLC
    Inventors: Indu Ramamurthi, Ryan Kam Wang Tai, Karen Chia Lin Yao
  • Publication number: 20250110592
    Abstract: Techniques and apparatuses are described that perform screen protector presence detection. In example aspects, an electronic device detects the presence (or absence) of a screen protector based on touch screen data provided by a touch screen during a time period that a user performs a touch-based gesture. With the touch screen data, screen protector presence detection can be performed without the need for additional sensors and without placing manufacturing requirements on the screen protector. Furthermore, this technique can support the detection of a variety of different screen protectors, including screen protectors with different types of materials and thicknesses.
    (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: June 14, 2024
    Publication date: April 3, 2025
    Applicant: Google LLC
    Inventors: Mark Chang, Chiayun Kuan
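    One way to picture the idea, as a hedged Python sketch: compare the mean touch signal measured during a gesture against a factory baseline, on the assumption that a protector's extra dielectric attenuates the signal. The baseline value, the ratio, and the data format are invented for illustration.

      from statistics import mean

      BASELINE_SIGNAL = 1800.0  # assumed mean touch signal on bare glass
      PRESENCE_RATIO = 0.85     # below this fraction of baseline, assume a protector

      def screen_protector_present(gesture_samples):
          # gesture_samples: touch screen readings captured while the user
          # performs a touch-based gesture.
          return mean(gesture_samples) < PRESENCE_RATIO * BASELINE_SIGNAL

      print(screen_protector_present([1450.0, 1480.0, 1420.0]))  # -> True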
  • Publication number: 20250110650
    Abstract: A method for an orphan bucket scanner includes obtaining a directory including a plurality of storage buckets deployed in a container-based environment. The method includes, for each respective storage bucket of the plurality of storage buckets, identifying a resource associated with the respective storage bucket. The method also includes, for at least one storage bucket of the plurality of storage buckets, determining that the resource has been deleted from the container-based environment and adding the at least one storage bucket corresponding to the deleted resource to a subset of storage buckets. The method also includes generating an alert including the subset of storage buckets.
    (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: September 28, 2023
    Publication date: April 3, 2025
    Applicant: Google LLC
    Inventors: Alankrit Kharbanda, Joshua Sosa, Xiangqian Yu
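    A minimal sketch of the scan in Python, assuming a generic environment client with list_buckets(), resource_for(), and resource_exists() methods (all hypothetical names).

      def scan_orphan_buckets(client):
          orphaned = []
          for bucket in client.list_buckets():          # directory of deployed buckets
              resource = client.resource_for(bucket)    # resource tied to this bucket
              if resource is None or not client.resource_exists(resource):
                  orphaned.append(bucket)               # its resource has been deleted
          if orphaned:
              return {"alert": "orphaned storage buckets detected", "buckets": orphaned}
          return None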
  • Publication number: 20250110591
    Abstract: This document describes systems and techniques for false-input suppression at touch-sensitive displays. In aspects, an electronic device with a touch-sensitive display generates a touch frame having a heatmap matrix based on touch input received at the touch-sensitive display. The electronic device further obtains contextual data to determine if the contextual data satisfies contextual conditions. If the contextual conditions are satisfied, a machine-learned model analyzes the touch frame to generate a confidence score for a likelihood that one or more hotspots within the heatmap matrix are indicative of false touch inputs. Based on the confidence score being above a threshold, the electronic device suppresses the touch inputs to prevent unintended user interface interactions.
    (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: December 27, 2023
    Publication date: April 3, 2025
    Applicant: Google LLC
    Inventors: Leonardo Blanger, Chiayun Kuan, Stephen Mathew Pfetsch
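    A Python sketch of the gating-then-scoring flow, with a stand-in model object, made-up contextual conditions, and an assumed threshold; none of these details come from the publication.

      SUPPRESSION_THRESHOLD = 0.8  # assumed confidence threshold

      def should_suppress(touch_frame, context, model):
          # Contextual gating: only run the model when the context suggests
          # false inputs are likely (the conditions here are illustrative).
          if not (context.get("screen_face_down") or context.get("in_pocket")):
              return False
          # Score the heatmap matrix for the likelihood of false touch inputs.
          confidence = model.score(touch_frame["heatmap"])
          return confidence >= SUPPRESSION_THRESHOLD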
  • Publication number: 20250111272
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for distributing digital contents to client devices are described. The system obtains, for each user in a set of users, user attribute data and, for a subset of the users, consent data for controlling usage of the user attribute data. The system partitions, based at least on the consent data for the subset of users, the set of users into a first group of users and a second group of users. The system generates a respective training dataset based on the data for each group of users, and uses the datasets to train a machine learning model configured to predict information about one or more users. In particular, the system applies differential privacy to the second training dataset without applying differential privacy to the first training dataset during training.
    (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: April 25, 2023
    Publication date: April 3, 2025
    Applicant: Google LLC
    Inventors: Wei Huang, Zhenyu Liu, Liang Wang, Kumar Rishabh
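    A simplified numpy sketch of the idea: partition users by consent and apply a differential-privacy treatment (clipping plus Gaussian noise) only to gradients computed from the no-consent partition. The clip norm, noise scale, and surrounding training loop are assumptions.

      import numpy as np

      def partition_by_consent(users):
          consented = [u for u in users if u.get("consent", False)]
          restricted = [u for u in users if not u.get("consent", False)]
          return consented, restricted

      def dp_gradient(grad, clip_norm=1.0, noise_scale=0.5, rng=None):
          # Clip and noise the gradient, DP-SGD style (second training dataset only).
          rng = rng or np.random.default_rng(0)
          grad = grad * min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
          return grad + rng.normal(0.0, noise_scale * clip_norm, size=grad.shape)

      def combined_gradient(grad_consented, grad_restricted):
          # No differential privacy on the first dataset; DP applied to the second.
          return grad_consented + dp_gradient(grad_restricted)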
  • Publication number: 20250110612
    Abstract: A method includes determining an eye-gaze characteristic of a user of a wearable device, determining a head movement of the user, determining a gesture based on the eye-gaze characteristic, the head movement, and a correlation between the eye-gaze characteristic and the head movement, selecting a user interface (UI) element of a head-locked UI operating on the wearable device based on the gesture, and triggering a UI operation based on the selected UI element.
    (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: September 28, 2023
    Publication date: April 3, 2025
    Applicant: GOOGLE LLC
    Inventors: Jochen Weber, Tobias Toft, Ryan Alexander West, Cosmo Rettig, Prasanthi Gurumurthy, Joost Korngold, Tarik Hany Abdel-Gawad, Samuel Legge, Jingying Hu
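    A speculative Python sketch of how a gaze/head correlation could drive a gesture decision: if the head turns while the eyes counter-rotate to hold a UI element (strongly negative correlation of yaw velocities), treat it as a selection. The velocity units, thresholds, and interpretation are assumptions, not details from the publication.

      from statistics import correlation, mean

      def detect_gesture(eye_yaw_velocity, head_yaw_velocity):
          # Require a real head movement (values in degrees per second).
          if mean(abs(v) for v in head_yaw_velocity) < 5.0:
              return None
          r = correlation(eye_yaw_velocity, head_yaw_velocity)
          # Eyes counter-rotating against the head suggests gaze held on one element.
          return "select_focused_element" if r < -0.7 else None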
  • Publication number: 20250113067
    Abstract: The present document describes devices and methods for rendering multiple live-streams on a user interface (UI) with minimal resources. The UI is activated, having a first set of remote sensors loaded for rendering. The first set of remote sensors receive a first activation signal and begin streaming first data, which the UI renders. Responsive to an action changing the set of remote sensors to be rendered on the UI, a second set of remote sensors are loaded for rendering. The second set of remote sensors receive a second activation signal and begin streaming second data, which the UI renders while the first set of remote sensors continue streaming the first data. The first data is no longer streamed after a threshold time is reached.
    (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: October 3, 2023
    Publication date: April 3, 2025
    Applicant: Google LLC
    Inventors: Adam Amir Mostafavi, Adnan Begovic, Alexander Crettenand, Mohammad Aleagha, Heidi Muth, Daniel Fredrick Zucker, Nikita Slushkin, Nicholas Michael Ritchie, Cale Williams Hopkins, Howard Zeng, Jeremy Newton-Smith, Jonathan Chen, Teddy Ku
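    A rough Python sketch of the hand-off: on a set change, the new sensors start streaming right away while the previous set keeps streaming until a grace period (the "threshold time") elapses. The 3-second value and the class shape are assumptions.

      import time

      STREAM_OVERLAP_SECONDS = 3.0  # assumed threshold time

      class LiveStreamManager:
          def __init__(self):
              self.active, self.draining, self.switched_at = set(), set(), None

          def activate(self, sensors):
              # New activation signal: the new set streams while the old set continues.
              self.draining, self.active = self.active, set(sensors)
              self.switched_at = time.monotonic()

          def streaming_now(self):
              if self.draining and time.monotonic() - self.switched_at > STREAM_OVERLAP_SECONDS:
                  self.draining = set()  # stop the first data after the threshold time
              return self.active | self.draining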
  • Patent number: 12265911
    Abstract: A computing system can include one or more non-transitory computer-readable media that collectively store a neural network including one or more layers with relaxed spatial invariance. Each of the one or more layers can be configured to receive a respective layer input. Each of the one or more layers can be configured to convolve a plurality of different kernels against the respective layer input to generate a plurality of intermediate outputs, each of the plurality of intermediate outputs having a plurality of portions. Each of the one or more layers can be configured to apply, for each of the plurality of intermediate outputs, a respective plurality of weights respectively associated with the plurality of portions to generate a respective weighted output. Each of the one or more layers can be configured to generate a respective layer output based on the weighted outputs.
    (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: December 14, 2020
    Date of Patent: April 1, 2025
    Assignee: GOOGLE LLC
    Inventors: Gamaleldin Elsayed, Prajit Ramachandran, Jon Shlens, Simon Kornblith
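    A toy numpy sketch of a layer with relaxed spatial invariance, as summarized above: several kernels are convolved over the same input, each intermediate output is scaled by per-region weights (here, one weight per row), and the weighted outputs are combined. The row-wise partition, odd kernel sizes, and single-channel input are simplifying assumptions.

      import numpy as np

      def relaxed_conv_layer(x, kernels, region_weights):
          # x: (H, W) input; kernels: list of (kh, kw) arrays with odd sizes;
          # region_weights: (num_kernels, H), one weight per kernel and per row.
          intermediates = []
          for k in kernels:
              kh, kw = k.shape
              padded = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
              out = np.zeros_like(x, dtype=float)
              for i in range(x.shape[0]):
                  for j in range(x.shape[1]):
                      out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * k)
              intermediates.append(out)
          # Weight each intermediate output's spatial portions, then combine.
          weighted = [w[:, None] * out for w, out in zip(region_weights, intermediates)]
          return sum(weighted)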
  • Patent number: 12265666
    Abstract: Techniques and apparatuses are described that facilitate ambient computing using a radar system. Compared to other smart devices that rely on a physical user interface, a smart device with a radar system can support ambient computing by providing an eyes-free interaction and a less cognitively demanding, gesture-based user interface. The radar system can be designed to address a variety of challenges associated with ambient computing, including power consumption, environmental variations, background noise, size, and user privacy. The radar system uses an ambient-computing machine-learned module to quickly recognize gestures performed by a user up to at least two meters away. The ambient-computing machine-learned module is trained to filter background noise and have a sufficiently low false positive rate to enhance the user experience.
    Type: Grant
    Filed: April 8, 2022
    Date of Patent: April 1, 2025
    Assignee: Google LLC
    Inventors: Eiji Hayashi, Jaime Lien, Nicholas Edward Gillian, Andrew C. Felch, Jin Yamanaka, Blake Charles Jacquot
  • Patent number: 12266065
    Abstract: Systems and methods for providing visual indications of generative model responses can include obtaining a user input and processing the user input with a generative model to generate a model-generated response. The systems and methods can process the model-generated response and an image of an environment to generate an augmented image. The augmented image can include visual indicators of the model-generated response, which can include annotating the image based on detected features within the image. Generation of the augmented image can include object detection and annotation based on the content of the model-generated response.
    Type: Grant
    Filed: January 10, 2024
    Date of Patent: April 1, 2025
    Assignee: GOOGLE LLC
    Inventors: Harshit Kharbanda, Louis Wang, Christopher James Kelley, Jessica Lee, Igor Bonaci, Daniel Valcarce Silva
  • Patent number: 12265430
    Abstract: A foldable device may include a foldable layer and a hinge mechanism. The hinge mechanism may include a plurality of rod assemblies, arranged side by side, each defining an individual pivot axis of the hinge mechanism. The rod assemblies may each include a plurality of segments. One or more of the plurality of segments of one of the plurality of rod assemblies may be coupled to one or both of the adjacent rod assemblies, such that the rod assemblies pivot sequentially to guide the folding and the unfolding of the foldable layer of the foldable device.
    Type: Grant
    Filed: December 10, 2019
    Date of Patent: April 1, 2025
    Assignee: Google LLC
    Inventors: Young Im, Kingston Xu
  • Patent number: 12265524
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium for managing a relationship between content and an environment for provisioning the content. In one aspect, a method includes receiving a request for a content item; and in response to receiving the request: selecting a creative from a plurality of creatives, the creative including a reference to a profile associated with one or more elements; retrieving content data from one or more content feeds bound to the elements; and delivering the creative and the content data to a user device.
    Type: Grant
    Filed: August 3, 2023
    Date of Patent: April 1, 2025
    Assignee: Google LLC
    Inventors: Stephen Tsun, Vikas Jha, Shamim Samadi, Vishal Goenka, David Monsees
  • Patent number: 12267586
    Abstract: This document describes techniques and systems that enable an interface for communicating a threshold in a camera. An electronic device recognizes an in-camera, drag gesture that triggers a camera application to switch modes from a real-time display mode (displaying real-time preview images in a viewfinder) to a buffer-display mode, which displays frames recorded in the camera buffer. During the motion of the drag gesture, the electronic device provides dynamic visual feedback indicating a relation between a drag distance of the drag gesture and a target threshold for the drag gesture. For simplicity and conciseness, the visual feedback can be combined with the virtual shutter control. After meeting the threshold, the user releases the touch input of the drag gesture and the system triggers the camera application to switch modes. This allows capture of a "missed" moment that was recorded in the camera buffer but not stored in non-volatile memory.
    (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: July 19, 2021
    Date of Patent: April 1, 2025
    Assignee: Google LLC
    Inventor: Rachit Gupta
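    A simplified Python sketch of the drag-to-threshold interaction: report a progress fraction for visual feedback during the drag, and switch from the real-time viewfinder to the buffer-display mode on release only if the drag met the target distance. The pixel threshold and mode names are assumptions.

      TARGET_DRAG_PX = 200.0  # assumed target threshold for the drag gesture

      class CameraModeController:
          def __init__(self):
              self.mode = "real_time_display"  # viewfinder shows real-time preview images

          def on_drag_move(self, drag_distance_px):
              # Dynamic visual feedback: how close the drag is to the target threshold.
              return min(drag_distance_px / TARGET_DRAG_PX, 1.0)

          def on_drag_release(self, drag_distance_px):
              if drag_distance_px >= TARGET_DRAG_PX:
                  self.mode = "buffer_display"  # show frames recorded in the camera buffer
              return self.mode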
  • Patent number: D1068801
    Type: Grant
    Filed: March 24, 2023
    Date of Patent: April 1, 2025
    Assignee: GOOGLE LLC
    Inventor: Apoorv Gupta
  • Patent number: D1069750
    Type: Grant
    Filed: March 8, 2023
    Date of Patent: April 8, 2025
    Assignee: Google LLC
    Inventors: Maj Isabelle Olsson, Willy Carteau, Diana Chang, Katherine Morgenroth, Carl Cepress