Patents Assigned to Google LLC
-
Patent number: 12192159
Abstract: Techniques are provided for generating a notification in connection with a video content item. An example method comprises receiving, from at least a portion of the plurality of user devices, a plurality of messages via the first message interface, determining whether a message from the plurality of messages is associated with participant information that would be of interest to a content creator of the video content item, and responsive to determining that the message is associated with the participant information that would be of interest to the content creator of the video content item, causing a creator interface including a second message interface to be presented on a user device associated with the content creator, wherein the second message interface presents a notification concerning the participant information by modifying an appearance of the message in the second message interface based on the participant information.
Type: Grant
Filed: January 19, 2024
Date of Patent: January 7, 2025
Assignee: Google LLC
Inventors: David Patierno, Jokubas Zukerman, Christopher Cooke, Tomer Margolin
-
Patent number: 12189083
Abstract: A system for using mobile data to improve weather information is provided. The system includes a weather prediction station configured to receive stationary observation data provided by a plurality of stationary weather stations along with data from a plurality of input weather models and generate unified weather model estimates based on the stationary observation data and the input weather model data. The system also includes a processor configured to aggregate mobile observation data provided by a plurality of non-stationary sensors and use the aggregated mobile observation data to adjust the weather model estimates.
Type: Grant
Filed: October 5, 2022
Date of Patent: January 7, 2025
Assignee: GOOGLE LLC
Inventor: William B. Gail
-
Patent number: 12189921
Abstract: The technology disclosed herein includes a user interface for viewing and combining media items into a video. An example method includes presenting a user interface that displays media items in a first portion of the user interface; receiving user input in the first portion that comprises a selection of a first media item; upon receiving the user input, adding the first media item to a set of selected media items in a second portion of the user interface, and presenting a selectable control element in the second portion of the user interface, wherein the control element enables a user to initiate an operation pertaining to the creation of the video based on the set of selected media items; and creating the video based on video content of the set of selected media items.
Type: Grant
Filed: August 14, 2023
Date of Patent: January 7, 2025
Assignee: Google LLC
Inventors: Matthias Grundmann, Jokubas Zukerman, Marco Paglia, Kenneth Conley, Karthik Raveendran, Reed Morse
-
Patent number: 12190862
Abstract: A method includes receiving a set of training utterances each including a non-synthetic speech representation of a corresponding utterance, and for each training utterance, generating a corresponding synthetic speech representation by using a voice conversion model. The non-synthetic speech representation and the synthetic speech representation form a corresponding training utterance pair. At each of a plurality of output steps for each training utterance pair, the method also includes generating, for output by a speech recognition model, a first probability distribution over possible non-synthetic speech recognition hypotheses for the non-synthetic speech representation and a second probability distribution over possible synthetic speech recognition hypotheses for the synthetic speech representation.
Type: Grant
Filed: April 25, 2022
Date of Patent: January 7, 2025
Assignee: Google LLC
Inventors: Andrew M. Rosenberg, Gary Wang, Bhuvana Ramabhadran, Fadi Biadsy
-
Patent number: 12192650
Abstract: An electronic device receives a first plurality of images of a scene captured by an image sensor of an electronic device, receives an ambient light level proximate to the electronic device, and determines whether the ambient light level is less than a first threshold value. In accordance with a determination that the ambient light level is less than the first threshold value, the electronic device detects motion in the scene based on one or more of the first plurality of images. In accordance with detecting motion in the scene, the electronic device receives a second plurality of images of the scene captured by the image sensor of the electronic device, forms a composite image from two or more of the second plurality of images, and causes the composite image to be presented for display on a user device.
Type: Grant
Filed: September 25, 2023
Date of Patent: January 7, 2025
Assignee: Google LLC
Inventors: Bill Duran, Adrian Mircea Proca, Wei Zhong, Siddarth Raghunathan
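The compositing step in this abstract can be sketched as simple frame averaging gated by the ambient light level. The function name, threshold, and averaging strategy below are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def composite_low_light(frames, light_level, threshold=0.1):
    """Form a composite image from multiple frames when ambient light is low.

    `frames` is a list of uint8 image arrays; `light_level` is a normalized
    ambient light reading. All names and values here are hypothetical.
    """
    if light_level >= threshold:
        # Enough light: no compositing needed, return the latest frame.
        return frames[-1]
    # Average in float to avoid uint8 overflow, then round back to uint8.
    stack = np.stack([f.astype(np.float32) for f in frames])
    return np.clip(stack.mean(axis=0).round(), 0, 255).astype(np.uint8)
```

Averaging several noisy low-light frames suppresses sensor noise at the cost of motion blur, which is why the abstract gates the second capture on detected motion.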
-
Patent number: 12189738
Abstract: This document describes techniques and systems that enable face authentication embedding migration and drift-compensation. The techniques and systems include a user device that is updated to include both a current version of firmware and an updated version of the firmware. Then, an indication of a face-authentication attempt is received along with image data associated with a user's face. After successful authentication using the current version of firmware on the image data, the user device uses the updated version of the firmware on the same image data to generate a new embedding. The new embedding is stored as part of a migration profile for the user. Additional new embeddings are collected over a series of subsequent face-authentication attempts until a complete set of new embeddings is stored for the migration profile. Then, the old profile is deleted and the migration profile becomes the enrollment profile used for face authentication.
Type: Grant
Filed: September 9, 2019
Date of Patent: January 7, 2025
Assignee: Google LLC
Inventors: Michael Moreno, Michael Williams, Ji Soo Shin, Madhi Hamzeh
-
Patent number: 12189824
Abstract: An integrated circuit chip can provide protection for registers of a register file. A processor can be part of general or security-oriented (e.g., root-of-trust (RoT)) circuitry. In described implementations, the processor includes multiple register blocks for storing multiple register values. The processor also includes multiple integrity blocks for storing multiple integrity codes. A respective integrity block is associated with a respective register block. The respective integrity block can store a respective integrity code that is derived from a respective register value that is stored in the respective register block. The integrity code can enable detection or correction of one or more corrupted bits in the register value. An integrity controller of the processor can monitor the register value regularly or in response to an access by an execution unit. The controller can take a protective action if corruption is detected. This enables information protection to extend to processor execution units.
Type: Grant
Filed: June 3, 2021
Date of Patent: January 7, 2025
Assignee: Google LLC
Inventors: Thomas Edward Roberts, Timothy Jay Chen
-
Patent number: 12191007
Abstract: Example embodiments relate to a method for training a predictive model from data. The method includes defining a multitude of predicates as binary functions operating on time sequences of the features or logical operations on the time sequences of the features. The method also includes iteratively training a boosting model by generating a number of new random predicates, scoring all the new random predicates by weighted information gain with respect to a class label associated with a prediction of the boosting model, selecting a number of the new random predicates with the highest weighted information gain and adding them to the boosting model, computing weights for all the predicates in the boosting model, and removing one or more of the selected new predicates with the highest information gain from the boosting model in response to input from an operator. The method may include repeating the prior steps a plurality of times.
Type: Grant
Filed: September 29, 2017
Date of Patent: January 7, 2025
Assignee: Google LLC
Inventors: Kai Chen, Eyal Oren, Hector Yee, James Wilson, Alvin Rajkomar, Michaela Hardt
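The predicate-scoring step above can be illustrated with a plain weighted information gain over binary predicates. This is a generic sketch of the scoring criterion the abstract names; the function signatures and weighting scheme are assumptions, not the patented method:

```python
import math
from collections import Counter

def entropy(labels, weights):
    """Shannon entropy of a weighted label distribution (bits)."""
    total = sum(weights)
    if total == 0:
        return 0.0
    acc = Counter()
    for y, w in zip(labels, weights):
        acc[y] += w
    return -sum((w / total) * math.log2(w / total) for w in acc.values() if w > 0)

def weighted_information_gain(predicate, examples, labels, weights):
    """Score a binary predicate by weighted information gain on the labels.

    `predicate` maps an example (e.g. a feature time sequence) to True/False.
    """
    split = {True: ([], []), False: ([], [])}
    for x, y, w in zip(examples, labels, weights):
        ys, ws = split[bool(predicate(x))]
        ys.append(y)
        ws.append(w)
    total = sum(weights)
    child = sum((sum(ws) / total) * entropy(ys, ws)
                for ys, ws in split.values() if ws)
    return entropy(labels, weights) - child
```

In a boosting loop, many random predicates would be scored this way, the top scorers added to the model, and example weights recomputed before the next round.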
-
Patent number: 12193089
Abstract: Various arrangements are presented for increasing a link margin of a wireless audio link. A short-range wireless communication link having a first physical layer (PHY) symbol rate is established between an audio source device and an audio output device. An audio stream is transmitted using the communication link, which includes a connected isochronous stream (CIS) link. A number of packet retransmissions are detected on the CIS. Based on the detected number of packet retransmissions on the CIS, the first PHY symbol rate of the CIS can be altered to a second PHY symbol rate for transmitting the audio stream.
Type: Grant
Filed: November 2, 2023
Date of Patent: January 7, 2025
Assignee: Google LLC
Inventors: Li-Xuan Chuo, Qi Jiang, Daniel Barros, Sunil Kumar
-
Patent number: 12190221
Abstract: Implementations set forth herein relate to generating training data, such that each instance of training data includes a corresponding instance of vision data and drivability label(s) for the instance of vision data. A drivability label can be determined using first vision data from a first vision component that is connected to the robot. The drivability label(s) can be generated by processing the first vision data using geometric and/or heuristic methods. Second vision data can be generated using a second vision component of the robot, such as a camera that is connected to the robot. The drivability labels can be correlated to the second vision data and thereafter used to train one or more machine learning models. The trained models can be shared with a robot(s) in furtherance of enabling the robot(s) to determine drivability of areas captured in vision data, which is being collected in real-time using one or more vision components.
Type: Grant
Filed: July 25, 2023
Date of Patent: January 7, 2025
Assignee: GOOGLE LLC
Inventors: Ammar Husain, Joerg Mueller
-
Patent number: 12190064
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for announcing and detecting automated conversation are disclosed. One of the methods includes initiating, over a natural language communication channel, a conversation with a communication participant using a natural language communication method that includes a dialogue of natural language communications. The communication participant is determined to be automated using a pre-defined adaptive interactive protocol that specifies natural language linguistic transformations defined in a sequence. The conversation can be transitioned to a communication method that is different from the natural language communication method in response to determining that the communication participant is automated.
Type: Grant
Filed: June 30, 2023
Date of Patent: January 7, 2025
Assignee: GOOGLE LLC
Inventors: Sebastian Millius, Sandro Feuz
-
Patent number: 12192651
Abstract: Methods, systems, and media for generating compressed images are provided. In some embodiments, the method comprises: identifying a multi-plane image (MPI) that represents a three-dimensional image, wherein the MPI comprises a plurality of fronto-parallel planes; splitting the MPI into a plurality of sub-volumes, wherein each sub-volume in the plurality of sub-volumes includes a subset of the plurality of fronto-parallel planes; calculating, for each sub-volume of the MPI, a depthmap; converting each depthmap to a mesh, wherein each mesh corresponds to a layer of a plurality of layers associated with a multi-depth image (MDI) to be rendered; calculating, for each layer of the plurality of layers, an image; and storing the meshes corresponding to the plurality of layers of the MDI and the images corresponding to the plurality of layers of the MDI as the MDI.
Type: Grant
Filed: September 8, 2023
Date of Patent: January 7, 2025
Assignee: GOOGLE LLC
Inventor: Ryan Overbeck
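The splitting and per-sub-volume depthmap steps can be sketched as follows. The contiguous split and the alpha-weighted expected-depth formula are plain illustrative choices; the abstract does not specify how sub-volumes are chosen or how the depthmap is computed:

```python
import numpy as np

def split_mpi(planes, num_subvolumes):
    """Split an ordered list of MPI planes into contiguous sub-volumes."""
    k = -(-len(planes) // num_subvolumes)  # ceiling division
    return [planes[i:i + k] for i in range(0, len(planes), k)]

def subvolume_depthmap(alphas, depths):
    """Per-pixel expected depth for one sub-volume of an MPI.

    `alphas` is a (P, H, W) array of per-plane alpha values for the P
    fronto-parallel planes in the sub-volume; `depths` gives each plane's
    depth. Pixels with no coverage fall back to the far plane.
    """
    alphas = np.asarray(alphas, dtype=np.float64)
    depths = np.asarray(depths, dtype=np.float64)
    weight_sum = alphas.sum(axis=0)
    weighted = (alphas * depths[:, None, None]).sum(axis=0)
    return np.where(weight_sum > 0,
                    weighted / np.maximum(weight_sum, 1e-9),
                    depths[-1])
```

Each resulting depthmap would then be triangulated into a mesh, so the MDI stores a few textured layers instead of the full stack of MPI planes.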
-
Patent number: 12190869
Abstract: A computer-implemented method includes receiving a sequence of acoustic frames as input to an automatic speech recognition (ASR) model. Here, the ASR model includes a causal encoder and a decoder. The method also includes generating, by the causal encoder, a first higher order feature representation for a corresponding acoustic frame in the sequence of acoustic frames. The method also includes generating, by the decoder, a first probability distribution over possible speech recognition hypotheses. Here, the causal encoder includes a stack of causal encoder layers each including a Recurrent Neural Network (RNN) Attention-Performer module that applies linear attention.
Type: Grant
Filed: September 29, 2022
Date of Patent: January 7, 2025
Assignee: Google LLC
Inventors: Tara N. Sainath, Rami Botros, Anmol Gulati, Krzysztof Choromanski, Ruoming Pang, Trevor Strohman, Weiran Wang, Jiahui Yu
-
Publication number: 20250007948
Abstract: This document describes techniques and apparatuses directed at implementing control flow integrity measurements to validate control flow in computing systems. Within a scope, a local variable is initialized and configured to store a measurement value of a local control flow. During operations within the scope, at least one expression is computed, outputting a return value. A fingerprinting algorithm obtains the return value, combines the return value and the measurement value, and hashes the combination to produce a digest value. The local variable is then redefined as the digest value. Next, the return value is compared to the expected, distinguished success return value in a branch instruction. Before returning a final return value, the measurement value is compared against an expected value stored in a static variable. If the comparison fails, then the program can detect an attack on the control flow.
Type: Application
Filed: March 25, 2022
Publication date: January 2, 2025
Applicant: Google LLC
Inventors: Miguel Cristian Young de la Sota, Miguel Angel Osorio Lozano
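The hash-chained measurement described above can be sketched in a few lines. The encoding, the hash choice (SHA-256), and the all-zero initial measurement are illustrative assumptions; the publication does not fix these details:

```python
import hashlib

def fingerprint(measurement, return_value):
    """One fingerprinting step: hash the running measurement together with a
    step's return value to produce the next measurement (digest) value."""
    h = hashlib.sha256()
    h.update(measurement)
    h.update(str(return_value).encode())
    return h.digest()

def run_with_cfi(steps, expected):
    """Execute steps while chaining a control-flow measurement, then compare
    the final measurement against the expected value for this scope."""
    measurement = b"\x00" * 32  # illustrative initial value
    for step in steps:
        return_value = step()
        measurement = fingerprint(measurement, return_value)
    if measurement != expected:
        raise RuntimeError("control-flow integrity check failed")
    return measurement
```

Because each digest depends on every prior return value, any skipped or altered step yields a final measurement that no longer matches the expected value, which is how the attack is detected.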
-
Publication number: 20250002205
Abstract: This document describes systems directed at a lip feature in tube packaging structures. In aspects, a system includes a sleeve portion having a hollow tube structure. The system also includes an open-faced enclosure (a box) configured to be slidably placed inside the sleeve portion such that at least a portion of the open-faced enclosure is disposed within the sleeve portion. The system further includes a lip feature configured to fold from a first position to a second position such that when the open-faced enclosure is slidably placed inside the sleeve portion, the lip feature folds from the first position to the second position and reverts back to the first position once the open-faced enclosure is at least partially disposed within the sleeve portion. Through such techniques, the lip feature can prevent boxes from falling out of tube structures.
Type: Application
Filed: September 16, 2024
Publication date: January 2, 2025
Applicant: Google LLC
Inventor: Nicole Danielle Hermann
-
Method and System of Static Charge Variation Sensing Based Human Jaw Motion Detection for User Voice
Publication number: 20250008252
Abstract: The present disclosure provides a system and method using a charge collection antenna in a wearable device to collect charge variation based on a user's jawbone and muscle motion. The collected charge variation may be used to determine an on-body status of the wearable device. For example, in wireless earbuds, information acquired from a charge collection antenna may be used to determine whether the earbud is worn in-ear by the user. The collected charge variation may also be used to detect jaw motion by the user.
Type: Application
Filed: December 14, 2021
Publication date: January 2, 2025
Applicant: Google LLC
Inventors: Fang Liu, Trausti Thormundsson, Yuan Jen Chang, Nicholas Jordan Sanders, Kari Antero Pulli, Kuan-Lin Chen
-
Publication number: 20250006217
Abstract: A method includes receiving training data that includes a set of transcribed speech utterances where each respective transcribed speech utterance is paired with a corresponding transcription. For each respective transcribed speech utterance, the method includes generating an encoded audio representation and an encoded textual representation, generating a higher order audio feature representation for a corresponding encoded audio representation, generating a higher order textual feature representation for a corresponding encoded textual representation, and determining a loss for the respective transcribed speech utterance based on the higher order audio feature representation and the higher order textual feature representation. The method also includes training a speech encoder and a text encoder of a correction model based on the loss determined for each transcribed speech utterance of the set of transcribed speech utterances.
Type: Application
Filed: June 29, 2023
Publication date: January 2, 2025
Applicant: Google LLC
Inventors: Christopher Li, Kyle Scott Kastner, Yuan Wang, Zhehuai Chen, Andrew Maxwell Rosenberg, Heng Su, Qian Chen, Leonid Aleksandrovich Velikovich, Patrick Maxim Rondon, Diamantino Antonio Caseiro, Zelin Wu
-
Publication number: 20250000380
Abstract: Various arrangements for performing radar-based measurement of vital signs. Waveform data may be received then filtered of data indicative of static objects to obtain motion-indicative waveform data. The motion-indicative waveform data may be analyzed to determine one or more frequencies of movement present within the motion-indicative waveform data. A spectral analysis may be performed on the motion-indicative waveform data to determine a spectral-analysis state of a monitored region. The spectral-analysis state of the monitored region may be determined to match a predefined spectral-analysis state during which vital sign monitoring is permitted. One or more vital signs of a monitored user present within the monitored region may be determined and output based on analyzing the motion-indicative waveform data.
Type: Application
Filed: September 10, 2024
Publication date: January 2, 2025
Applicant: Google LLC
Inventors: Dongeek Shin, Brandon Barbello, Shwetak Patel, Anupam Pathak, Michael Dixon
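The static-object filtering and frequency-of-movement analysis can be sketched with a basic magnitude spectrum. This is a minimal spectral-analysis sketch (DC removal standing in for static-clutter filtering, FFT peak picking standing in for the spectral-analysis state logic), not the publication's actual pipeline:

```python
import numpy as np

def dominant_frequency(signal, sample_rate_hz):
    """Estimate the strongest movement frequency (Hz) in waveform data.

    Subtracting the mean removes the static (non-moving) contribution;
    the peak of the remaining magnitude spectrum approximates the dominant
    periodic motion, e.g. a breathing rate.
    """
    x = np.asarray(signal, dtype=np.float64)
    x = x - x.mean()                      # drop static-object component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate_hz)
    return freqs[int(np.argmax(spectrum[1:]) + 1)]  # skip the DC bin
```

A breathing motion near 0.25 Hz (15 breaths per minute), for instance, would show up as a spectral peak near that frequency, which downstream logic could then classify as a permitted vital-sign state.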
-
Patent number: 12184901
Abstract: Video coding using constructed reference frames may include generating, by a processor in response to instructions stored on a non-transitory computer readable medium, a reconstructed video. Generating the reconstructed video may include receiving an encoded bitstream. Video coding using constructed reference frames may include generating a reconstructed non-showable reference frame. Generating the reconstructed non-showable reference frame may include decoding a first encoded frame from the encoded bitstream. Video coding using constructed reference frames may include generating a reconstructed frame. Generating the reconstructed frame may include decoding a second encoded frame from the encoded bitstream using the reconstructed non-showable reference frame as a reference frame. Video coding using constructed reference frames may include including the reconstructed frame in the reconstructed video and outputting the reconstructed video.
Type: Grant
Filed: June 8, 2022
Date of Patent: December 31, 2024
Assignee: GOOGLE LLC
Inventors: James Bankoski, Yaowu Xu, Paul Wilkins
-
Patent number: 12183328
Abstract: Methods, systems, and apparatus for receiving audio data corresponding to a user utterance and context data, identifying an initial set of one or more n-grams from the context data, generating an expanded set of one or more n-grams based on the initial set of n-grams, adjusting a language model based at least on the expanded set of n-grams, determining one or more speech recognition candidates for at least a portion of the user utterance using the adjusted language model, adjusting a score for a particular speech recognition candidate determined to be included in the expanded set of n-grams, determining a transcription of the user utterance that includes at least one of the one or more speech recognition candidates, and providing the transcription of the user utterance for output.
Type: Grant
Filed: May 16, 2023
Date of Patent: December 31, 2024
Assignee: Google LLC
Inventors: Petar Aleksic, Pedro J. Moreno Mengibar
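The score-adjustment step for candidates found in the expanded n-gram set can be sketched as a simple additive boost. The function, the additive form, and the boost value are illustrative assumptions; the abstract does not specify how the adjustment is computed:

```python
def rescore_candidates(candidates, expanded_ngrams, boost=2.0):
    """Boost recognition candidates that appear in the expanded n-gram set.

    `candidates` maps hypothesis text to a base language-model score
    (higher is better); candidates matching a contextual n-gram receive
    an additive bonus so contextually likely hypotheses win ties.
    """
    expanded = set(expanded_ngrams)
    return {
        text: score + (boost if text in expanded else 0.0)
        for text, score in candidates.items()
    }
```

For example, if the context data names a contact "mom", the contextually biased hypothesis can overtake an acoustically similar but contextually unlikely one.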