Patents by Inventor Blaise Aguera-Arcas

Blaise Aguera-Arcas has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240318971
    Abstract: The present disclosure is directed to interactive voice navigation. In particular, a computing system can provide audio information including one or more navigation instructions to a user via a computing system associated with the user. The computing system can activate an audio sensor associated with the computing system. The computing system can collect, using the audio sensor, audio data associated with the user. The computing system can determine, based on the audio data, whether the audio data is associated with one or more navigation instructions. The computing system can, in accordance with a determination that the audio data is associated with one or more navigation instructions, determine a context-appropriate audio response. The computing system can provide the context-appropriate audio response to the user.
    Type: Application
    Filed: February 29, 2024
    Publication date: September 26, 2024
    Inventors: Victor Carbune, Matthew Sharifi, Blaise Aguera-Arcas
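The flow in the abstract above (collect audio, decide whether it concerns a navigation instruction, and answer only when it does) can be sketched as below. All names are illustrative assumptions; the classifier here is a crude keyword check standing in for whatever model the patent contemplates, and the audio is assumed to be already transcribed to text.

```python
# Toy sketch of the interactive-voice-navigation decision flow.
# NAVIGATION_KEYWORDS and the response strings are illustrative, not from the patent.

NAVIGATION_KEYWORDS = {"turn", "exit", "merge", "left", "right", "reroute"}

def is_navigation_related(transcript):
    """Crude stand-in for classifying whether the audio relates to navigation."""
    words = set(transcript.lower().split())
    return bool(words & NAVIGATION_KEYWORDS)

def context_appropriate_response(transcript, next_instruction):
    """Respond only when the utterance concerns a navigation instruction."""
    if not is_navigation_related(transcript):
        return None  # unrelated speech: no response
    if "repeat" in transcript.lower():
        return f"Repeating: {next_instruction}"
    return f"The next instruction is: {next_instruction}"

print(context_appropriate_response("can you repeat that turn", "turn left on Main St"))
```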
  • Publication number: 20240169186
    Abstract: Generally, the present disclosure is directed to user interface understanding. More particularly, the present disclosure relates to training and utilization of machine-learned models for user interface prediction and/or generation. A machine-learned interface prediction model can be pre-trained using a variety of pre-training tasks for eventual downstream task training and utilization (e.g., interface prediction, interface generation, etc.).
    Type: Application
    Filed: June 2, 2021
    Publication date: May 23, 2024
    Inventors: Xiaoxue Zang, Ying Xu, Srinivas Kumar Sunkara, Abhinav Kumar Rastogi, Jindong Chen, Blaise Aguera-Arcas, Chongyang Bai
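One plausible shape for the pre-training tasks mentioned in this abstract is masked-element prediction over a serialized UI screen, by analogy to masked language modeling. The sketch below only builds such a training example; the element vocabulary, serialization, and the masking task itself are assumptions for illustration, not the patent's actual tasks.

```python
import random

# Build one masked pre-training example from a serialized UI screen:
# hide one element and record it as the prediction target.
MASK = "[MASK]"

def make_masked_example(elements, rng):
    """Return (masked_sequence, target_index, target_element)."""
    idx = rng.randrange(len(elements))
    masked = list(elements)
    target = masked[idx]
    masked[idx] = MASK
    return masked, idx, target

rng = random.Random(0)
screen = ["toolbar", "image", "text", "button", "checkbox"]
masked, idx, target = make_masked_example(screen, rng)
print(masked, "-> predict", target, "at index", idx)
```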
  • Patent number: 11946762
    Abstract: The present disclosure is directed to interactive voice navigation. In particular, a computing system can provide audio information including one or more navigation instructions to a user via a computing system associated with the user. The computing system can activate an audio sensor associated with the computing system. The computing system can collect, using the audio sensor, audio data associated with the user. The computing system can determine, based on the audio data, whether the audio data is associated with one or more navigation instructions. The computing system can, in accordance with a determination that the audio data is associated with one or more navigation instructions, determine a context-appropriate audio response. The computing system can provide the context-appropriate audio response to the user.
    Type: Grant
    Filed: August 12, 2020
    Date of Patent: April 2, 2024
    Assignee: GOOGLE LLC
    Inventors: Victor Carbune, Matthew Sharifi, Blaise Aguera-Arcas
  • Publication number: 20240071406
    Abstract: Implementations disclosed herein are directed to utilizing ephemeral learning techniques and/or federated learning techniques to update audio-based machine learning (ML) model(s) based on processing streams of audio data generated via radio station(s) across the world. This enables the audio-based ML model(s) to learn representations and/or understand languages across the world, including tail languages for which there is no/minimal audio data. In various implementations, one or more deduping techniques may be utilized to ensure the same stream of audio data is not overutilized in updating the audio-based ML model(s). In various implementations, a given client device may determine whether to employ an ephemeral learning technique or a federated learning technique based on, for instance, a connection status with a remote system. Generally, the streams of audio data are received at client devices, but the ephemeral learning techniques may be implemented at the client device and/or at the remote system.
    Type: Application
    Filed: December 5, 2022
    Publication date: February 29, 2024
    Inventors: Johan Schalkwyk, Blaise Aguera-Arcas, Diego Melendo Casado, Oren Litvin
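Two of the ideas in this abstract can be sketched compactly: deduping so the same audio stream is not over-used for model updates, and choosing between ephemeral and federated learning from connection status. The content-hash dedup and the connectivity rule below are illustrative assumptions about how such checks might look, not the patent's mechanisms.

```python
import hashlib

# (1) Dedup streams by content hash so one stream isn't over-used for updates.
_seen_digests = set()

def should_use_stream(audio_bytes):
    """Return False for a stream whose content was already used for an update."""
    digest = hashlib.sha256(audio_bytes).hexdigest()
    if digest in _seen_digests:
        return False
    _seen_digests.add(digest)
    return True

# (2) Pick a learning technique from the client's connection status.
def choose_technique(connected_to_remote_system):
    """Ephemeral learning is assumed to need the remote system; otherwise train federated."""
    return "ephemeral" if connected_to_remote_system else "federated"

print(should_use_stream(b"radio-stream-chunk"), choose_technique(True))
```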
  • Publication number: 20240004677
    Abstract: Generally, the present disclosure is directed to user interface understanding. More particularly, the present disclosure relates to training and utilization of machine-learned models for user interface prediction and/or generation. A machine-learned interface prediction model can be pre-trained using a variety of pre-training tasks for eventual downstream task training and utilization (e.g., interface prediction, interface generation, etc.).
    Type: Application
    Filed: September 13, 2023
    Publication date: January 4, 2024
    Inventors: Srinivas Kumar Sunkara, Xiaoxue Zang, Ying Xu, Lijuan Liu, Nevan Holt Wichers, Gabriel Overholt Schubiner, Jindong Chen, Abhinav Kumar Rastogi, Blaise Aguera-Arcas, Zecheng He
  • Patent number: 11842736
    Abstract: Provided is an in-ear device and associated computational support system that leverages machine learning to interpret sensor data descriptive of one or more in-ear phenomena during subvocalization by the user. An electronic device can receive sensor data generated by at least one sensor at least partially positioned within an ear of a user, wherein the sensor data was generated by the at least one sensor concurrently with the user subvocalizing a subvocalized utterance. The electronic device can then process the sensor data with a machine-learned subvocalization interpretation model to generate an interpretation of the subvocalized utterance as an output of the machine-learned subvocalization interpretation model.
    Type: Grant
    Filed: February 10, 2023
    Date of Patent: December 12, 2023
    Assignee: Google LLC
    Inventors: Yaroslav Volovich, Ant Oztaskent, Blaise Aguera-Arcas
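The pipeline this abstract describes (in-ear sensor data captured during subvocalization, fed to a learned interpretation model) can be sketched as below. The "model" is a placeholder lookup over toy per-frame features; the featurization and the model interface are purely illustrative assumptions.

```python
# Toy sketch: sensor frames captured in the ear -> features -> interpretation.
# The mean-per-frame featurization and dict "model" are stand-ins for the
# machine-learned subvocalization interpretation model in the abstract.

def interpret_subvocalization(sensor_frames, model):
    """Run the stand-in interpretation model over a window of sensor frames."""
    features = tuple(round(sum(f) / len(f), 2) for f in sensor_frames)
    return model.get(features, "<unknown>")

toy_model = {(0.5, 0.5): "yes", (0.1, 0.9): "no"}
print(interpret_subvocalization([[0.4, 0.6], [0.2, 0.8]], toy_model))
```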
  • Patent number: 11805208
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving, at a mobile computing device that is associated with a called user, a call from a calling computing device that is associated with a calling user; in response to receiving the call, determining, by the mobile computing device, that data associated with the called user indicates that the called user will not respond to the call; in response to determining that the called user will not respond to the call, inferring, by the mobile computing device, an informational need of the calling user; and automatically providing, from the mobile computing device to the calling computing device, information associated with the called user and that satisfies the inferred informational need of the calling user.
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: October 31, 2023
    Assignee: Google LLC
    Inventors: Shavit Matias, Noam Etzion-Rosenberg, Blaise Aguera-Arcas, Benjamin Schlesinger, Brandon Barbello, Ori Kabeli, David Petrou, Yossi Matias, Nadav Bar
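The call-handling logic above (determine from the called user's data that they will not answer, infer the caller's informational need, and reply automatically) might look roughly like this. The calendar data shape, the hour-based availability check, and the relation-based inference are assumptions chosen to keep the sketch small.

```python
# Sketch of the automated call-response flow; all data shapes are illustrative.

def will_not_respond(busy_blocks, now_hour):
    """The called user is unavailable if the current hour is inside a busy block."""
    return any(start <= now_hour < end for start, end in busy_blocks)

def auto_reply(busy_blocks, now_hour, caller_relation):
    if not will_not_respond(busy_blocks, now_hour):
        return None  # user can answer: let the call ring through
    # Crude stand-in for inferring the caller's informational need.
    if caller_relation == "coworker":
        return "In a meeting; back at the office after the current block."
    return "Busy right now; will call back later today."

print(auto_reply([(9, 11), (14, 16)], 10, "coworker"))
```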
  • Patent number: 11789753
    Abstract: Generally, the present disclosure is directed to user interface understanding. More particularly, the present disclosure relates to training and utilization of machine-learned models for user interface prediction and/or generation. A machine-learned interface prediction model can be pre-trained using a variety of pre-training tasks for eventual downstream task training and utilization (e.g., interface prediction, interface generation, etc.).
    Type: Grant
    Filed: June 1, 2021
    Date of Patent: October 17, 2023
    Assignee: GOOGLE LLC
    Inventors: Srinivas Kumar Sunkara, Xiaoxue Zang, Ying Xu, Lijuan Liu, Nevan Holt Wichers, Gabriel Overholt Schubiner, Jindong Chen, Abhinav Kumar Rastogi, Blaise Aguera-Arcas, Zecheng He
  • Publication number: 20230316081
    Abstract: The present disclosure provides a new type of generalized artificial neural network where neurons and synapses maintain multiple states. While classical gradient-based backpropagation in artificial neural networks can be seen as a special case of a two-state network where one state is used for activations and another for gradients with update rules derived from the chain rule, example implementations of the generalized framework proposed herein may additionally: have neither explicit notion of nor ever receive gradients; contain more than two states; and/or implement or apply learned (e.g., meta-learned) update rules that control updates to the state(s) of the neuron during forward and/or backward propagation of information.
    Type: Application
    Filed: May 6, 2022
    Publication date: October 5, 2023
    Inventors: Mark Sandler, Andrey Zhmoginov, Thomas Edward Madams, Maksym Vladymyrov, Nolan Andrew Miller, Blaise Aguera-Arcas, Andrew Michael Jackson
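The generalized framework above (neurons carrying multiple states, updated by a rule rather than by backpropagated gradients) can be illustrated with a two-state toy. The hand-written rule below stands in for the learned (meta-learned) update rules the abstract describes; no gradients appear anywhere. Shapes and constants are illustrative assumptions.

```python
# Toy multi-state neuron update: state 0 behaves like an activation,
# state 1 like a slow memory trace. In the abstract this rule would be
# learned/meta-learned, not hand-written.

def update_rule(states, signal):
    s0, s1 = states
    new_s0 = signal                    # activation-like state
    new_s1 = 0.9 * s1 + 0.1 * signal   # slow trace mixing in the new signal
    return (new_s0, new_s1)

def forward(neuron_states, inputs):
    """Forward propagation that updates every neuron's states, gradient-free."""
    return [update_rule(states, x) for states, x in zip(neuron_states, inputs)]

print(forward([(0.0, 0.0), (0.0, 0.5)], [1.0, 2.0]))
```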
  • Publication number: 20230186917
    Abstract: Provided is an in-ear device and associated computational support system that leverages machine learning to interpret sensor data descriptive of one or more in-ear phenomena during subvocalization by the user. An electronic device can receive sensor data generated by at least one sensor at least partially positioned within an ear of a user, wherein the sensor data was generated by the at least one sensor concurrently with the user subvocalizing a subvocalized utterance. The electronic device can then process the sensor data with a machine-learned subvocalization interpretation model to generate an interpretation of the subvocalized utterance as an output of the machine-learned subvocalization interpretation model.
    Type: Application
    Filed: February 10, 2023
    Publication date: June 15, 2023
    Applicant: Google LLC
    Inventors: Yaroslav Volovich, Ant Oztaskent, Blaise Aguera-Arcas
  • Publication number: 20230160710
    Abstract: The present disclosure is directed to interactive voice navigation. In particular, a computing system can provide audio information including one or more navigation instructions to a user via a computing system associated with the user. The computing system can activate an audio sensor associated with the computing system. The computing system can collect, using the audio sensor, audio data associated with the user. The computing system can determine, based on the audio data, whether the audio data is associated with one or more navigation instructions. The computing system can, in accordance with a determination that the audio data is associated with one or more navigation instructions, determine a context-appropriate audio response. The computing system can provide the context-appropriate audio response to the user.
    Type: Application
    Filed: August 12, 2020
    Publication date: May 25, 2023
    Inventors: Victor Carbune, Matthew Sharifi, Blaise Aguera-Arcas
  • Patent number: 11580978
    Abstract: Provided is an in-ear device and associated computational support system that leverages machine learning to interpret sensor data descriptive of one or more in-ear phenomena during subvocalization by the user. An electronic device can receive sensor data generated by at least one sensor at least partially positioned within an ear of a user, wherein the sensor data was generated by the at least one sensor concurrently with the user subvocalizing a subvocalized utterance. The electronic device can then process the sensor data with a machine-learned subvocalization interpretation model to generate an interpretation of the subvocalized utterance as an output of the machine-learned subvocalization interpretation model.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: February 14, 2023
    Assignee: Google LLC
    Inventors: Yaroslav Volovich, Ant Oztaskent, Blaise Aguera-Arcas
  • Publication number: 20220382565
    Abstract: Generally, the present disclosure is directed to user interface understanding. More particularly, the present disclosure relates to training and utilization of machine-learned models for user interface prediction and/or generation. A machine-learned interface prediction model can be pre-trained using a variety of pre-training tasks for eventual downstream task training and utilization (e.g., interface prediction, interface generation, etc.).
    Type: Application
    Filed: June 1, 2021
    Publication date: December 1, 2022
    Inventors: Srinivas Kumar Sunkara, Xiaoxue Zang, Ying Xu, Lijuan Liu, Nevan Holt Wichers, Gabriel Overholt Schubiner, Jindong Chen, Abhinav Kumar Rastogi, Blaise Aguera-Arcas, Zecheng He
  • Patent number: 11159763
    Abstract: The present disclosure provides an image capture, curation, and editing system that includes a resource-efficient mobile image capture device that continuously captures images. In particular, the present disclosure provides low power frameworks for controlling image sensor mode in a mobile image capture device. One example low power framework includes a scene analyzer that analyzes a scene depicted by a first image and, based at least in part on such analysis, causes an image sensor control signal to be provided to an image sensor to adjust at least one of the frame rate and the resolution of the image sensor.
    Type: Grant
    Filed: July 23, 2020
    Date of Patent: October 26, 2021
    Assignee: Google LLC
    Inventors: Aaron Michael Donsbach, Benjamin Vanik, Jon Gabriel Clapper, Alison Lentz, Joshua Denali Lovejoy, Robert Douglas Fritz, III, Krzysztof Duleba, Li Zhang, Juston Payne, Emily Anne Fortuna, Iwona Bialynicka-Birula, Blaise Aguera-Arcas, Daniel Ramage, Benjamin James McMahan, Oliver Fritz Lange, Jess Holbrook
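The low-power control loop this abstract describes (a scene analyzer scores a frame, and the score drives a sensor control signal adjusting frame rate and resolution) could be sketched as follows. The brightness-based score and the threshold values are illustrative assumptions standing in for the analyzer's real output.

```python
# Sketch of scene-analysis-driven image sensor control; thresholds are illustrative.

def scene_score(frame):
    """Placeholder analyzer: treat mean pixel value as scene interest."""
    return sum(frame) / len(frame)

def sensor_control_signal(frame, high_threshold=0.6, low_threshold=0.2):
    score = scene_score(frame)
    if score >= high_threshold:
        return {"fps": 30, "resolution": "full"}  # interesting scene: capture more
    if score <= low_threshold:
        return {"fps": 1, "resolution": "low"}    # dull scene: save power
    return {"fps": 5, "resolution": "low"}

print(sensor_control_signal([0.9, 0.8, 0.7]))
```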
  • Publication number: 20210183383
    Abstract: Provided is an in-ear device and associated computational support system that leverages machine learning to interpret sensor data descriptive of one or more in-ear phenomena during subvocalization by the user. An electronic device can receive sensor data generated by at least one sensor at least partially positioned within an ear of a user, wherein the sensor data was generated by the at least one sensor concurrently with the user subvocalizing a subvocalized utterance. The electronic device can then process the sensor data with a machine-learned subvocalization interpretation model to generate an interpretation of the subvocalized utterance as an output of the machine-learned subvocalization interpretation model.
    Type: Application
    Filed: November 24, 2020
    Publication date: June 17, 2021
    Inventors: Yaroslav Volovich, Ant Oztaskent, Blaise Aguera-Arcas
  • Publication number: 20210090750
    Abstract: The present disclosure provides systems and methods that leverage machine-learned models in conjunction with user-associated data and disease prevalence mapping to predict disease infections with improved user privacy. In one example, a computer-implemented method can include obtaining, by a user computing device associated with a user, a machine-learned prediction model configured to predict a probability that the user may be infected with a disease based at least in part on user-associated data associated with the user. The method can further include receiving, by the user computing device, the user-associated data associated with the user. The method can further include providing, by the user computing device, the user-associated data as input to the machine-learned prediction model, the machine-learned prediction model being implemented on the user computing device.
    Type: Application
    Filed: September 27, 2018
    Publication date: March 25, 2021
    Inventors: Adam Sadilek, Blaise Aguera-Arcas, Keith Allen Bonawitz
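The privacy-oriented flow above (the prediction model is delivered to and evaluated on the user's own device, so raw user data never leaves it) can be sketched with a tiny local model. The logistic form, weights, and feature names are illustrative assumptions; the point is only that inference runs entirely on-device.

```python
import math

# Evaluate an (assumed) logistic infection-risk model locally on the device.
def on_device_infection_probability(weights, bias, user_features):
    z = bias + sum(w * user_features[name] for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid

weights = {"visited_hotspot": 2.0, "symptom_score": 1.5}
features = {"visited_hotspot": 1.0, "symptom_score": 0.4}  # stays on device
p = on_device_infection_probability(weights, bias=-2.0, user_features=features)
print(round(p, 3))
```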
  • Publication number: 20200358901
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving, at a mobile computing device that is associated with a called user, a call from a calling computing device that is associated with a calling user; in response to receiving the call, determining, by the mobile computing device, that data associated with the called user indicates that the called user will not respond to the call; in response to determining that the called user will not respond to the call, inferring, by the mobile computing device, an informational need of the calling user; and automatically providing, from the mobile computing device to the calling computing device, information associated with the called user and that satisfies the inferred informational need of the calling user.
    Type: Application
    Filed: May 9, 2019
    Publication date: November 12, 2020
    Applicant: Google LLC
    Inventors: Shavit Matias, Noam Etzion-Rosenberg, Blaise Aguera-Arcas, Benjamin Schlesinger, Brandon Barbello, Ori Kabeli, David Petrou, Yossi Matias, Nadav Bar
  • Publication number: 20200351466
    Abstract: The present disclosure provides an image capture, curation, and editing system that includes a resource-efficient mobile image capture device that continuously captures images. In particular, the present disclosure provides low power frameworks for controlling image sensor mode in a mobile image capture device. One example low power framework includes a scene analyzer that analyzes a scene depicted by a first image and, based at least in part on such analysis, causes an image sensor control signal to be provided to an image sensor to adjust at least one of the frame rate and the resolution of the image sensor.
    Type: Application
    Filed: July 23, 2020
    Publication date: November 5, 2020
    Inventors: Aaron Michael Donsbach, Benjamin Vanik, Jon Gabriel Clapper, Alison Lentz, Joshua Denali Lovejoy, Robert Douglas Fritz, III, Krzysztof Duleba, Li Zhang, Juston Payne, Emily Anne Fortuna, Iwona Bialynicka-Birula, Blaise Aguera-Arcas, Daniel Ramage, Benjamin James McMahan, Oliver Fritz Lange, Jess Holbrook
  • Patent number: 10732809
    Abstract: The present disclosure provides an image capture, curation, and editing system that includes a resource-efficient mobile image capture device that continuously captures images. The mobile image capture device is operable to input an image into at least one neural network and to receive at least one descriptor of the desirability of a scene depicted by the image as an output of the at least one neural network. The mobile image capture device is operable to determine, based at least in part on the at least one descriptor of the desirability of the scene of the image, whether to store a second copy of such image and/or one or more contemporaneously captured images in a non-volatile memory of the mobile image capture device or to discard a first copy of such image from a temporary image buffer without storing the second copy of such image in the non-volatile memory.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: August 4, 2020
    Assignee: Google LLC
    Inventors: Iwona Bialynicka-Birula, Blaise Aguera-Arcas, Daniel Ramage, Hugh Brendan McMahan, Oliver Fritz Lange, Emily Anne Fortuna, Divya Tyamagundlu, Jess Holbrook, Kristine Kohlhepp, Juston Payne, Krzysztof Duleba, Benjamin Vanik, Alison Lentz, Jon Gabriel Clapper, Joshua Denali Lovejoy, Aaron Michael Donsbach
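The retention decision this abstract describes (a desirability descriptor determines whether a buffered image is persisted to non-volatile memory or discarded) reduces to a threshold check over scored frames. The scoring function and threshold below are illustrative stand-ins for the neural network output the abstract mentions.

```python
# Sketch of desirability-based keep/discard for buffered captures.

def should_persist(desirability, threshold=0.5):
    """Persist a frame only if its desirability descriptor clears the threshold."""
    return desirability >= threshold

def process_buffer(buffered_images, score_fn):
    """Copy sufficiently desirable frames to 'non-volatile' storage; drop the rest."""
    return [img for img in buffered_images if should_persist(score_fn(img))]

scores = {"frame_a": 0.9, "frame_b": 0.2, "frame_c": 0.7}
print(process_buffer(["frame_a", "frame_b", "frame_c"], scores.get))
```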
  • Patent number: 10728489
    Abstract: The present disclosure provides an image capture, curation, and editing system that includes a resource-efficient mobile image capture device that continuously captures images. In particular, the present disclosure provides low power frameworks for controlling image sensor mode in a mobile image capture device. One example low power framework includes a scene analyzer that analyzes a scene depicted by a first image and, based at least in part on such analysis, causes an image sensor control signal to be provided to an image sensor to adjust at least one of the frame rate and the resolution of the image sensor.
    Type: Grant
    Filed: August 22, 2018
    Date of Patent: July 28, 2020
    Assignee: Google LLC
    Inventors: Aaron Michael Donsbach, Benjamin Vanik, Jon Gabriel Clapper, Alison Lentz, Joshua Denali Lovejoy, Robert Douglas Fritz, III, Krzysztof Duleba, Li Zhang, Juston Payne, Emily Anne Fortuna, Iwona Bialynicka-Birula, Blaise Aguera-Arcas, Daniel Ramage, Benjamin James McMahan, Oliver Fritz Lange, Jess Holbrook