Patents Assigned to Google LLC
  • Patent number: 11564334
    Abstract: A server tray assembly includes a server tray support configured to receive a server board that includes a working fluid conduit fluidly coupled to a server board connector disposed on a back plane of the server board, a back wall of the server tray support includes a fluid connector configured to form an unbiased fluid connection with the server board connector; and a locking assembly secured to at least one of a server rack or the server tray support, the locking assembly disposed opposite the fluid connector is configured to engage a portion of the server board to bias the server board toward the fluid connector to fluidly seal the unbiased fluid connection between the server board connector and the fluid connector of the server tray support.
    Type: Grant
    Filed: December 7, 2020
    Date of Patent: January 24, 2023
    Assignee: Google LLC
    Inventors: Madhusudan Krishnan Iyengar, Avinash Panga
  • Patent number: 11564059
    Abstract: A user-to-entity communication channel is established for providing increased information regarding entities to the general population. Ambassadors for an entity are identified and selected based on location history of devices for which location reporting is authorized. The ambassadors may provide information regarding the entity to the public through the communication channel. Communications between the users and ambassadors may be reported to the entity owner for analysis by the entity owner.
    Type: Grant
    Filed: October 8, 2021
    Date of Patent: January 24, 2023
    Assignee: Google LLC
    Inventors: Matteo Agosti, Ankit Gupta
  • Patent number: 11562129
    Abstract: A method to generate a chart recommendation based on machine understanding of spreadsheet data, including determining a set of data that each include content of a cell of one or more cells in a column of a spreadsheet presented to a user. The method further determines an entity type associated with the column based on the set of data. The entity type represents a semantic meaning of the set of data in the column of the spreadsheet. The method further identifies at least one of a plurality of charts that is relevant to the entity type associated with the column. The method then provides the identified chart for presentation to the user.
    Type: Grant
    Filed: April 20, 2020
    Date of Patent: January 24, 2023
    Assignee: Google LLC
    Inventors: Weihao Lin, Vishnu Sivaji
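The chart-recommendation flow in patent 11562129 above lends itself to a compact illustration. Below is a minimal Python sketch of the idea: infer an entity type for a spreadsheet column from its cell contents, then map that type to a candidate chart. The regex-based detectors and the type-to-chart table are illustrative assumptions, not the machine-understanding model described in the patent.

```python
import re
from datetime import datetime

# Illustrative entity-type detectors; the patent's machine-understanding model
# is not described here, so these simple parse/regex rules are stand-ins.
def infer_entity_type(cells: list[str]) -> str:
    def is_date(v):
        for fmt in ("%Y-%m-%d", "%m/%d/%Y"):
            try:
                datetime.strptime(v, fmt)
                return True
            except ValueError:
                pass
        return False

    def is_percent(v):
        return bool(re.fullmatch(r"\d+(\.\d+)?%", v))

    def is_number(v):
        return bool(re.fullmatch(r"-?\d+(\.\d+)?", v))

    values = [c.strip() for c in cells if c.strip()]
    if values and all(is_date(v) for v in values):
        return "date"
    if values and all(is_percent(v) for v in values):
        return "percentage"
    if values and all(is_number(v) for v in values):
        return "numeric"
    return "categorical"

# Hypothetical mapping from entity type to a recommended chart.
CHART_FOR_ENTITY = {
    "date": "line_chart",          # trends over time
    "percentage": "pie_chart",     # parts of a whole
    "numeric": "bar_chart",        # magnitude comparison
    "categorical": "column_chart",
}

def recommend_chart(column_cells: list[str]) -> str:
    return CHART_FOR_ENTITY[infer_entity_type(column_cells)]

print(recommend_chart(["2023-01-01", "2023-02-01"]))  # line_chart
print(recommend_chart(["12%", "38%", "50%"]))         # pie_chart
```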
  • Patent number: 11562748
    Abstract: Techniques are described herein for detecting and suppressing commands in media that may trigger another automated assistant. A method includes: determining, for each of a plurality of automated assistant devices in an environment that are each executing at least one automated assistant, an active capability of the automated assistant device; initiating playback of digital media by an automated assistant; in response to initiating playback, processing the digital media to identify an audio segment in the digital media that, upon playback, is expected to trigger activation of at least one automated assistant executing on at least one of the plurality of automated assistant devices in the environment, based on the active capability of the at least one of the plurality of automated assistant devices; and in response to identifying the audio segment in the digital media, modifying the digital media to suppress the activation of the at least one automated assistant.
    Type: Grant
    Filed: December 1, 2020
    Date of Patent: January 24, 2023
    Assignee: Google LLC
    Inventors: Matthew Sharifi, Victor Carbune
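As a rough illustration of the suppression step in patent 11562748 above, the sketch below flags audio segments expected to trigger an assistant and attenuates them before playback. The hotword detector is stubbed out, the segment boundaries are hypothetical, and attenuation is only one plausible reading of "modifying the digital media".

```python
import numpy as np

def detect_trigger_segments(audio: np.ndarray, sample_rate: int) -> list[tuple[int, int]]:
    """Stand-in for a hotword detector: returns (start, end) sample ranges
    expected to activate an assistant. A real system would run the same
    hotword model used by the assistant devices in the environment."""
    # Hypothetical fixed segment for illustration only.
    return [(sample_rate * 2, sample_rate * 3)]

def suppress_triggers(audio: np.ndarray, sample_rate: int,
                      attenuation: float = 0.05) -> np.ndarray:
    """Attenuate segments flagged as likely assistant triggers before playback."""
    out = audio.copy()
    for start, end in detect_trigger_segments(audio, sample_rate):
        out[start:end] *= attenuation   # quiet the segment so it no longer triggers
    return out

# Five seconds of toy media at 16 kHz.
media = np.random.randn(16000 * 5).astype(np.float32)
safe_media = suppress_triggers(media, sample_rate=16000)
```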
  • Patent number: 11562518
    Abstract: A method for generating an output image from an input image and an input text instruction that specifies a location and a modification of an edit applied to the input image using a neural network is described. The neural network includes an image encoder, an image decoder, and an instruction attention network. The method includes receiving the input image and the input text instruction; extracting, from the input image, an input image feature that represents features of the input image using the image encoder; generating a spatial feature and a modification feature from the input text instruction using the instruction attention network; generating an edited image feature from the input image feature, the spatial feature and the modification feature; and generating the output image from the edited image feature using the image decoder.
    Type: Grant
    Filed: June 7, 2021
    Date of Patent: January 24, 2023
    Assignee: Google LLC
    Inventors: Tianhao Zhang, Weilong Yang, Honglak Lee, Hung-Yu Tseng, Irfan Aziz Essa, Lu Jiang
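A skeletal PyTorch layout of the components named in patent 11562518 above (image encoder, instruction attention producing a spatial feature and a modification feature, image decoder) is sketched below. The layer sizes, the gating used to combine features, and the pre-computed text embedding are assumptions made for illustration, not the patented architecture.

```python
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, img):                 # (B, 3, H, W) -> (B, 64, H/4, W/4)
        return self.net(img)

class InstructionAttention(nn.Module):
    """Maps a text-instruction embedding to a spatial feature (where to edit)
    and a modification feature (what change to apply)."""
    def __init__(self, text_dim=128, feat_dim=64):
        super().__init__()
        self.spatial = nn.Linear(text_dim, feat_dim)
        self.modify = nn.Linear(text_dim, feat_dim)
    def forward(self, text_emb):            # (B, text_dim)
        return self.spatial(text_emb), self.modify(text_emb)

class ImageDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, feat):
        return self.net(feat)

def edit_image(img, text_emb, enc, attn, dec):
    feat = enc(img)                                    # input image feature
    spatial, modify = attn(text_emb)                   # where / what to edit
    gate = torch.sigmoid(spatial)[:, :, None, None]    # broadcast over H, W
    edited = feat * (1 - gate) + (feat + modify[:, :, None, None]) * gate
    return dec(edited)                                 # output image

enc, attn, dec = ImageEncoder(), InstructionAttention(), ImageDecoder()
out = edit_image(torch.rand(1, 3, 64, 64), torch.rand(1, 128), enc, attn, dec)
print(out.shape)  # torch.Size([1, 3, 64, 64])
```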
  • Patent number: 11563809
    Abstract: The technology provides for live migration from a first cluster to a second cluster. For instance, when requests to one or more cluster control planes are received, a predetermined fraction of the received requests may be allocated to a control plane of the second cluster, while a remaining fraction of the received requests may be allocated to a control plane of the first cluster. The predetermined fraction of requests are handled using the control plane of the second cluster. While handling the predetermined fraction of requests, it is detected whether there are failures in the second cluster. Based on not detecting failures in the second cluster, the predetermined fraction of requests allocated to the control plane of the second cluster may be increased in predetermined stages until all requests are allocated to the control plane of the second cluster.
    Type: Grant
    Filed: February 24, 2021
    Date of Patent: January 24, 2023
    Assignee: Google LLC
    Inventor: Daniel Veritas Smith
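The staged migration in patent 11563809 above can be pictured with a short traffic-splitting sketch: a growing fraction of requests is routed to the new cluster's control plane, and the stage is advanced only while no failures are detected. The stage fractions and the failure check below are assumptions; the patent leaves both to the implementation.

```python
import random

# Hypothetical stages for shifting control-plane traffic to the second cluster.
STAGES = [0.01, 0.05, 0.25, 0.50, 1.00]

def route_request(request, old_cluster, new_cluster, fraction):
    """Send roughly `fraction` of requests to the new cluster's control plane."""
    target = new_cluster if random.random() < fraction else old_cluster
    return target(request)

def migrate(requests, old_cluster, new_cluster, detect_failures):
    for fraction in STAGES:
        for req in requests:
            route_request(req, old_cluster, new_cluster, fraction)
        if detect_failures():
            return False        # stop increasing the allocation on failure
    return True                 # all traffic now served by the new cluster

ok = migrate(
    requests=range(1000),
    old_cluster=lambda r: f"old:{r}",
    new_cluster=lambda r: f"new:{r}",
    detect_failures=lambda: False,
)
print("migration complete" if ok else "halted due to failures")
```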
  • Patent number: 11561764
    Abstract: Implementations described herein relate to transitioning a computing device between operating modes according to whether the computing device is suitably oriented for received non-audio-related gestures. For instance, the user can attach a portable computing device to a docking station of a vehicle and, while in transit, wave their hand near the portable computing device in order to invoke the automated assistant. Such action by the user can be detected by a proximity sensor and/or any other device capable of determining a context of the portable computing device and/or an interest of the user in invoking the automated assistant. In some implementations, location, orientation, and/or motion of the portable computing device can be detected and used in combination with an output of the proximity sensor to determine whether to invoke the automated assistant in response to an input gesture from the user.
    Type: Grant
    Filed: September 13, 2021
    Date of Patent: January 24, 2023
    Assignee: Google LLC
    Inventor: Haywai Chan
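A minimal sketch of the invocation decision in patent 11561764 above follows: a hand wave detected by the proximity sensor invokes the assistant only when the device context (docked, suitably oriented, in transit) suggests the gesture is intentional. The DeviceContext fields and all thresholds are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class DeviceContext:
    proximity_cm: float      # distance reported by the proximity sensor
    docked: bool             # e.g. attached to a vehicle docking station
    pitch_degrees: float     # device orientation
    speed_mps: float         # motion, e.g. from GPS / accelerometer

def should_invoke_assistant(ctx: DeviceContext) -> bool:
    """Invoke only when a nearby hand wave is seen *and* the device context
    suggests the gesture is intentional. Thresholds are illustrative."""
    hand_nearby = ctx.proximity_cm < 10.0
    suitably_oriented = 30.0 <= ctx.pitch_degrees <= 90.0
    in_transit = ctx.speed_mps > 2.0
    return hand_nearby and ctx.docked and suitably_oriented and in_transit

print(should_invoke_assistant(DeviceContext(5.0, True, 60.0, 15.0)))   # True
print(should_invoke_assistant(DeviceContext(5.0, False, 60.0, 0.0)))   # False
```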
  • Patent number: 11562285
    Abstract: Methods, systems, and apparatus for training quantum evolutions using sub-logical controls. In one aspect, a method includes the actions of accessing quantum hardware, wherein the quantum hardware includes a quantum system comprising one or more multi-level quantum subsystems; one or more control devices that operate on the one or more multi-level quantum subsystems according to one or more respective control parameters that relate to a parameter of a physical environment in which the multi-level quantum subsystems are located; initializing the quantum system in an initial quantum state, wherein an initial set of control parameters form a parameterization that defines the initial quantum state; obtaining one or more quantum system observables and one or more target quantum states; and iteratively training until an occurrence of a completion event.
    Type: Grant
    Filed: June 4, 2021
    Date of Patent: January 24, 2023
    Assignee: Google LLC
    Inventors: Ryan Babbush, Hartmut Neven
  • Publication number: 20230018830
    Abstract: Implementations described herein relate to methods, devices, and computer-readable media to generate and provide image-based creations. A computer-implemented method includes obtaining a plurality of episodes, each episode associated with a corresponding time period and including a respective set of images and person identifiers for each image. The method further includes forming a respective cluster for each episode that includes at least two person identifiers. The method further includes determining whether one or more person identifiers are included in less than a threshold number of clusters, and in response, removing the one or more person identifiers from the clusters that the one or more person identifiers are included in. The method further includes merging identical clusters to obtain a plurality of people groups that each include two or more person identifiers and providing a user interface that includes an image-based creation based on a particular people group.
    Type: Application
    Filed: September 30, 2022
    Publication date: January 19, 2023
    Applicant: Google LLC
    Inventors: Dominick LIM, Kristi BOHL, Jason CHANG, Vidya VALMIKINATHAN, Taehee LEE, Jeremy ZHU
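The people-group construction in publication 20230018830 above reduces to a few set operations, sketched below: form a cluster of person identifiers per episode, drop identifiers that appear in fewer than a threshold number of clusters, then merge identical clusters into people groups. The threshold value and the toy data are assumptions.

```python
from collections import Counter

# Each episode is a set of images; each image carries person identifiers.
episodes = [
    {"imgs": [{"alice", "bob"}, {"alice", "bob", "carol"}]},
    {"imgs": [{"alice", "bob"}]},
    {"imgs": [{"dave"}]},                 # fewer than two people -> no cluster
    {"imgs": [{"alice", "bob", "eve"}]},
]
THRESHOLD = 2   # illustrative value; the publication only says "a threshold number"

# Form a cluster per episode that includes at least two person identifiers.
clusters = []
for ep in episodes:
    ids = set().union(*ep["imgs"])
    if len(ids) >= 2:
        clusters.append(ids)

# Remove identifiers included in fewer than THRESHOLD clusters.
counts = Counter(pid for c in clusters for pid in c)
clusters = [{pid for pid in c if counts[pid] >= THRESHOLD} for c in clusters]

# Merge identical clusters into people groups of two or more identifiers.
people_groups = {frozenset(c) for c in clusters if len(c) >= 2}
print(people_groups)   # {frozenset({'alice', 'bob'})}
```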
  • Publication number: 20230016900
    Abstract: Systems and techniques are provided for anomalous path detection within cameras' fields of view. Video of the field of view of a camera in an environment may be received from the camera. A person may be detected in the video. Motion of the person in the video may be tracked to generate a motion path. Contextual data for the motion path may be received. The motion path and contextual data may be stored in a camera training data set. A camera model for the camera and the field of view may be generated by inputting the camera training data set to a machine learning system. The camera model for the camera and the field of view may be stored.
    Type: Application
    Filed: July 25, 2022
    Publication date: January 19, 2023
    Applicant: Google LLC
    Inventor: Marci Meingast
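One way to picture the camera-model training in publication 20230016900 above is sketched below: featurize each tracked motion path together with its contextual data (here, hour of day) into a per-camera training set and fit an unsupervised model for that camera and field of view. scikit-learn's IsolationForest and the path featurization are stand-ins chosen for illustration; the publication does not specify the machine learning system.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def path_features(path_xy: np.ndarray, hour_of_day: int) -> np.ndarray:
    """Featurize a tracked motion path plus contextual data (time of day)."""
    start, end = path_xy[0], path_xy[-1]
    length = np.sum(np.linalg.norm(np.diff(path_xy, axis=0), axis=1))
    return np.array([*start, *end, length, hour_of_day], dtype=float)

# Camera training data set: (motion path, contextual data) pairs from the field of view.
rng = np.random.default_rng(0)
training = [path_features(np.cumsum(rng.normal(size=(20, 2)), axis=0), hour)
            for hour in rng.integers(8, 18, size=200)]

# Camera model for this camera and field of view (IsolationForest is an assumed stand-in).
camera_model = IsolationForest(random_state=0).fit(np.stack(training))

# Score a new path; -1 indicates an anomalous path for this camera.
new_path = path_features(np.array([[0.0, 0.0], [50.0, 50.0]]), hour_of_day=3)
print(camera_model.predict(new_path.reshape(1, -1)))
```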
  • Publication number: 20230019737
    Abstract: A method for a soft acceptance of a hotword receives audio data characterizing a soft hotword event detected by a hotword detector in streaming audio captured by a user device. The method also processes the audio data to determine that the audio data corresponds to a query specifying an action to perform on the user device or another device. Without triggering performance of the action on the user device or the other device, the method provides a notification for output from the user device where the notification prompts a user associated with the user device to provide an affirmative input indication in order to trigger performance of the action on the user device or the other device and, when the user fails to provide the affirmative input indication, instructs the user device or the other device to not perform the action specified by the query.
    Type: Application
    Filed: July 14, 2021
    Publication date: January 19, 2023
    Applicant: Google LLC
    Inventors: Brett Aladdin Barros, James Flynn, Theo Goguely
  • Publication number: 20230013587
    Abstract: A method includes receiving training data that includes unspoken text utterances, un-transcribed non-synthetic speech utterances, and transcribed non-synthetic speech utterances. Each unspoken text utterance is not paired with any corresponding spoken utterance of non-synthetic speech. Each un-transcribed non-synthetic speech utterance is not paired with a corresponding transcription. Each transcribed non-synthetic speech utterance is paired with a corresponding transcription. The method also includes generating a corresponding synthetic speech representation for each unspoken textual utterance of the received training data using a text-to-speech model. The method also includes pre-training an audio encoder on the synthetic speech representations generated for the unspoken textual utterances, the un-transcribed non-synthetic speech utterances, and the transcribed non-synthetic speech utterances to teach the audio encoder to jointly learn shared speech and text representations.
    Type: Application
    Filed: April 15, 2022
    Publication date: January 19, 2023
    Applicant: Google LLC
    Inventors: Andrew Rosenberg, Zhehuai Chen, Bhuvana Ramabhadran, Pedro J. Moreno Mengibar, Gary Wang, Yu Zhang
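A data-assembly sketch for the three utterance types in publication 20230013587 above is shown below: a stub text-to-speech model produces synthetic speech representations for the unspoken text, which are pooled with the un-transcribed and transcribed non-synthetic speech to form the pre-training corpus. The stub TTS, feature shapes, and toy data are assumptions; the actual encoder pre-training step is omitted.

```python
import numpy as np

def tts_model(text: str) -> np.ndarray:
    """Stub text-to-speech model returning a synthetic speech representation.
    A real system would use a trained TTS model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=(len(text) * 5, 80))   # frames x mel bins (assumed)

# The three kinds of training data described in the abstract.
unspoken_text = ["turn on the lights", "what's the weather"]                 # text only
untranscribed_speech = [np.random.randn(120, 80), np.random.randn(90, 80)]   # audio only
transcribed_speech = [(np.random.randn(100, 80), "play some music")]         # audio + transcript

# Generate a corresponding synthetic speech representation for each unspoken text utterance.
synthetic_speech = [tts_model(t) for t in unspoken_text]

# Pre-training corpus: synthetic, un-transcribed, and transcribed speech together,
# so the audio encoder can jointly learn shared speech and text representations.
pretraining_corpus = (
    list(zip(synthetic_speech, unspoken_text))
    + [(rep, None) for rep in untranscribed_speech]
    + list(transcribed_speech)
)
print(len(pretraining_corpus), "pre-training examples")
```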
  • Publication number: 20230018686
    Abstract: Various arrangements for performing fall detection are presented. A smart-home device (110, 201), comprising a monolithic radar integrated circuit (205), may transmit radar waves. Based on reflected radar waves, raw waveform data may be created. The raw waveform data may be processed to determine that a fall by a person (101) has occurred. Speech announcing that the fall has been detected may then be output via the speaker (217) of the smart-home device (110, 201).
    Type: Application
    Filed: December 12, 2019
    Publication date: January 19, 2023
    Applicant: Google LLC
    Inventors: Dongeek SHIN, Shwetak PATEL, Rizwan CHAUDHRY, Chetan BHOLE, Vaibhav DARBARI, Todd WHITEHURST, Anupam PATHAK
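The pipeline in publication 20230018686 above (raw waveform, processing, fall decision, spoken announcement) is sketched below with a toy energy-based heuristic standing in for the unspecified classifier. The frame size, thresholds, and synthetic waveform are all assumptions.

```python
import numpy as np

def process_waveform(raw: np.ndarray, frame: int = 256) -> np.ndarray:
    """Reduce raw reflected-radar samples to per-frame motion energy (RMS)."""
    frames = raw[: len(raw) // frame * frame].reshape(-1, frame)
    return np.sqrt((frames ** 2).mean(axis=1))

def fall_detected(energy: np.ndarray, spike: float = 3.0) -> bool:
    """Illustrative heuristic: a sudden energy spike followed by near stillness.
    A deployed system would use a trained classifier on the radar features."""
    if len(energy) < 4:
        return False
    peak = energy.max()
    after = energy[energy.argmax():].min()
    return peak > spike * energy.mean() and after < 0.2 * peak

def announce(text: str) -> None:
    print(f"[speaker] {text}")   # stand-in for text-to-speech output

raw = np.concatenate([np.random.randn(2048) * 0.1,     # normal motion
                      np.random.randn(512) * 2.0,      # sudden fall
                      np.random.randn(2048) * 0.02])   # stillness afterwards
if fall_detected(process_waveform(raw)):
    announce("A fall has been detected.")
```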
  • Publication number: 20230015169
    Abstract: A method of generating an accurate speaker representation for an audio sample includes receiving a first audio sample from a first speaker and a second audio sample from a second speaker. The method includes dividing a respective audio sample into a plurality of audio slices. The method also includes, based on the plurality of slices, generating a set of candidate acoustic embeddings where each candidate acoustic embedding includes a vector representation of acoustic features. The method further includes removing a subset of the candidate acoustic embeddings from the set of candidate acoustic embeddings. The method additionally includes generating an aggregate acoustic embedding from the remaining candidate acoustic embeddings in the set of candidate acoustic embeddings after removing the subset of the candidate acoustic embeddings.
    Type: Application
    Filed: September 19, 2022
    Publication date: January 19, 2023
    Applicant: Google LLC
    Inventors: Yeming Fang, Quan Wang, Pedro Jose Moreno Mengibar, Ignacio Lopez Moreno, Gang Feng, Fang Chu, Jin Shi, Jason William Pelecanos
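The aggregation procedure in publication 20230015169 above is sketched below: divide the audio into slices, embed each slice, remove the candidate embeddings farthest from the preliminary centroid, and average the remainder into the aggregate embedding. The stub encoder, slice length, and distance-based removal rule are assumptions; the publication only states that a subset of candidates is removed.

```python
import numpy as np

def embed_slice(audio_slice: np.ndarray) -> np.ndarray:
    """Stub acoustic encoder; a real system would use a trained speaker encoder."""
    rng = np.random.default_rng(int(abs(audio_slice).sum() * 1e6) % (2**32))
    vec = rng.normal(size=16)
    return vec / np.linalg.norm(vec)

def aggregate_embedding(audio: np.ndarray, slice_len: int = 16000,
                        drop_fraction: float = 0.25) -> np.ndarray:
    # Divide the audio sample into slices and embed each one.
    slices = [audio[i:i + slice_len]
              for i in range(0, len(audio) - slice_len + 1, slice_len)]
    candidates = np.stack([embed_slice(s) for s in slices])

    # Remove the candidates farthest from the preliminary centroid
    # (an assumed criterion; the publication only says a subset is removed).
    centroid = candidates.mean(axis=0)
    dists = np.linalg.norm(candidates - centroid, axis=1)
    keep = dists.argsort()[: max(1, int(len(candidates) * (1 - drop_fraction)))]

    # Aggregate the remaining candidate embeddings.
    agg = candidates[keep].mean(axis=0)
    return agg / np.linalg.norm(agg)

audio = np.random.randn(16000 * 8)        # ~8 seconds of toy audio
print(aggregate_embedding(audio).shape)   # (16,)
```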
  • Publication number: 20230013114
    Abstract: Described is a computer-implemented method which comprises receiving a plurality of images captured by at least one user device, wherein each image is associated with one of a corresponding plurality of geographic locations; determining a path between the plurality of geographic locations; determining a confidence indicator representative of whether the determined path corresponds to a demarked path, wherein determining the confidence indicator comprises determining a time of capture of each of the plurality of images; identifying the path as corresponding to a demarked route, based on the confidence indicator; and marking the plurality of images for display as a demarked route.
    Type: Application
    Filed: September 20, 2022
    Publication date: January 19, 2023
    Applicant: Google LLC
    Inventor: Stephen Charles Hsu
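A simple reading of the confidence indicator in publication 20230013114 above is sketched below: order the images by capture time and score how consistently consecutive images are close in both time and space. The CapturedImage fields, thresholds, and scoring rule are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import dist

@dataclass
class CapturedImage:
    lat: float
    lon: float
    captured_at: datetime

def demarked_route_confidence(images: list[CapturedImage],
                              max_gap: timedelta = timedelta(minutes=10),
                              max_step_deg: float = 0.01) -> float:
    """Order images by capture time and score how path-like they are:
    the fraction of consecutive pairs close in both time and space.
    Thresholds are illustrative, not taken from the publication."""
    ordered = sorted(images, key=lambda im: im.captured_at)
    if len(ordered) < 2:
        return 0.0
    consistent = 0
    for a, b in zip(ordered, ordered[1:]):
        close_in_time = (b.captured_at - a.captured_at) <= max_gap
        close_in_space = dist((a.lat, a.lon), (b.lat, b.lon)) <= max_step_deg
        consistent += close_in_time and close_in_space
    return consistent / (len(ordered) - 1)

t0 = datetime(2022, 7, 1, 9, 0)
hike = [CapturedImage(37.000 + 0.002 * i, -122.000, t0 + timedelta(minutes=5 * i))
        for i in range(6)]
confidence = demarked_route_confidence(hike)
print(f"confidence={confidence:.2f}", "-> demarked route" if confidence > 0.8 else "")
```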
  • Publication number: 20230012793
    Abstract: A method (500) for toggling multi-network connectivity of a mobile device (110) includes, for the mobile device simultaneously connected to one or more carrier-mediated wireless networks (120) associated with a network operator (70), executing a graphical user interface that renders a status graphic (320) indicating the mobile device is currently connected to at least one carrier-mediated wireless network associated with the network operator, and an interactive graphic (330) for selecting between disabling and enabling connections (122) between the mobile device and carrier-mediated wireless networks associated with the network operator.
    Type: Application
    Filed: December 11, 2019
    Publication date: January 19, 2023
    Applicant: Google LLC
    Inventors: Daniel Chak, Varun Anand, Alex Stillwell, Shishir Agrawal, Qingxi Li
  • Publication number: 20230013777
    Abstract: A direct speech-to-speech translation (S2ST) model includes an encoder configured to receive an input speech representation that corresponds to an utterance spoken by a source speaker in a first language and encode the input speech representation into a hidden feature representation. The S2ST model also includes an attention module configured to generate a context vector that attends to the hidden representation encoded by the encoder. The S2ST model also includes a decoder configured to receive the context vector generated by the attention module and predict a phoneme representation that corresponds to a translation of the utterance in a second different language. The S2ST model also includes a synthesizer configured to receive the context vector and the phoneme representation and generate a translated synthesized speech representation that corresponds to a translation of the utterance spoken in the different second language.
    Type: Application
    Filed: December 15, 2021
    Publication date: January 19, 2023
    Applicant: Google LLC
    Inventors: Ye Jia, Michelle Tadmor Ramanovich, Tal Remez, Roi Pomerantz
  • Publication number: 20230018384
    Abstract: A method includes obtaining training data including a plurality of training audio signals and corresponding transcripts. Each training audio signal is spoken by a target speaker in a first accent/dialect. For each training audio signal of the training data, the method includes generating a training synthesized speech representation spoken by the target speaker in a second accent/dialect different than the first accent/dialect and training a text-to-speech (TTS) system based on the corresponding transcript and the training synthesized speech representation. The method also includes receiving an input text utterance to be synthesized into speech in the second accent/dialect. The method also includes obtaining conditioning inputs that include a speaker embedding and an accent/dialect identifier that identifies the second accent/dialect.
    Type: Application
    Filed: July 14, 2021
    Publication date: January 19, 2023
    Applicant: Google LLC
    Inventors: Lev Finkelstein, Chun-an Chan, Byungha Chun, Norman Casagrande, Yu Zhang, Robert Andrew James Clark, Vincent Wan
  • Publication number: 20230013347
    Abstract: A method for remote attestation includes establishing, using a cryptographic protocol, a communication session between a first computing device and a second computing device. The communication session includes communications encrypted by an ephemeral session key. The method includes receiving, at the first computing device via the communication session, from the second computing device, an attestation request requesting the first computing device to provide an attestation report. The method includes generating, by the first computing device, the attestation report based on the ephemeral session key and sending, using the communication session, the attestation report to the second computing device.
    Type: Application
    Filed: July 19, 2021
    Publication date: January 19, 2023
    Applicant: Google LLC
    Inventors: Keith Moyer, Benjamin Seth Moore, Ari Medvinksy, Kevin Yap, Ivan Petrov, Tiziano Santoro, Ariel Joseph Feldman, Marcel Catalin Rosu
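The session binding in publication 20230013347 above can be illustrated by including a digest of the ephemeral session key in the attestation report, so the verifier can tie the report to this particular encrypted session. The report fields, the SHA-256 binding, and the HMAC "signature" below are assumptions; real attestation evidence would be produced and signed by a hardware root of trust.

```python
import hashlib
import hmac
import json
import os

def generate_attestation_report(ephemeral_session_key: bytes,
                                measurement: bytes,
                                signing_key: bytes) -> dict:
    """Build a report that commits to the ephemeral session key so the verifier
    can bind the attestation to this specific session. Fields and the HMAC
    'signature' are illustrative only."""
    body = {
        "measurement": measurement.hex(),
        "session_key_binding": hashlib.sha256(ephemeral_session_key).hexdigest(),
    }
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return body

def verify_report(report: dict, ephemeral_session_key: bytes, signing_key: bytes) -> bool:
    expected_binding = hashlib.sha256(ephemeral_session_key).hexdigest()
    body = {k: v for k, v in report.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected_sig = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(report["signature"], expected_sig)
            and report["session_key_binding"] == expected_binding)

session_key = os.urandom(32)      # ephemeral key from the cryptographic handshake
attester_key = os.urandom(32)     # shared key for the sketch; real schemes use signatures
report = generate_attestation_report(session_key, measurement=os.urandom(32),
                                     signing_key=attester_key)
print(verify_report(report, session_key, attester_key))   # True
```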
  • Publication number: 20230017892
    Abstract: A method includes receiving training data that includes unspoken text utterances and un-transcribed non-synthetic speech utterances. Each unspoken text utterance is not paired with any corresponding spoken utterance of non-synthetic speech. Each un-transcribed non-synthetic speech utterance is not paired with a corresponding transcription. The method also includes generating a corresponding synthetic speech representation for each unspoken textual utterance of the received training data using a text-to-speech model. The method also includes pre-training an audio encoder on the synthetic speech representations generated for the unspoken textual utterances and the un-transcribed non-synthetic speech utterances to teach the audio encoder to jointly learn shared speech and text representations.
    Type: Application
    Filed: June 21, 2022
    Publication date: January 19, 2023
    Applicant: Google LLC
    Inventors: Zhehuai Chen, Bhuvana Ramabhadran, Andrew M. Rosenberg, Yu Zhang, Pedro J. Moreno Mengibar