Patents by Inventor Blaise Aguera-Arcas
Blaise Aguera-Arcas has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20240318971
  Abstract: The present disclosure is directed to interactive voice navigation. In particular, a computing system can provide audio information including one or more navigation instructions to a user via a computing system associated with the user. The computing system can activate an audio sensor associated with the computing system. The computing system can collect, using the audio sensor, audio data associated with the user. The computing system can determine, based on the audio data, whether the audio data is associated with one or more navigation instructions. The computing system can, in accordance with a determination that the audio data is associated with one or more navigation instructions, determine a context-appropriate audio response. The computing system can provide the context-appropriate audio response to the user.
  Type: Application
  Filed: February 29, 2024
  Publication date: September 26, 2024
  Inventors: Victor Carbune, Matthew Sharifi, Blaise Aguera-Arcas
- Publication number: 20240169186
  Abstract: Generally, the present disclosure is directed to user interface understanding. More particularly, the present disclosure relates to training and utilization of machine-learned models for user interface prediction and/or generation. A machine-learned interface prediction model can be pre-trained using a variety of pre-training tasks for eventual downstream task training and utilization (e.g., interface prediction, interface generation, etc.).
  Type: Application
  Filed: June 2, 2021
  Publication date: May 23, 2024
  Inventors: Xiaoxue Zang, Ying Xu, Srinivas Kumar Sunkara, Abhinav Kumar Rastogi, Jindong Chen, Blaise Aguera-Arcas, Chongyang Bai
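One common shape for such a pre-training task is masked-element prediction over a serialized UI tree, analogous to masked-token pre-training in language models. The sketch below is an assumption about what one such task could look like, not the specific tasks claimed in the filing; the element vocabulary and `[MASK]` token are illustrative.

```python
# Hypothetical masked-element pre-training example generator for a UI model.
import random

def make_masked_ui_example(elements: list[str], seed: int = 0) -> tuple[list[str], int, str]:
    """Replace one UI element with [MASK]; return (masked inputs, position, target)."""
    rng = random.Random(seed)          # deterministic for reproducible batches
    pos = rng.randrange(len(elements))
    target = elements[pos]             # what the model must predict
    masked = elements.copy()
    masked[pos] = "[MASK]"
    return masked, pos, target
```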
- Patent number: 11946762
  Abstract: The present disclosure is directed to interactive voice navigation. In particular, a computing system can provide audio information including one or more navigation instructions to a user via a computing system associated with the user. The computing system can activate an audio sensor associated with the computing system. The computing system can collect, using the audio sensor, audio data associated with the user. The computing system can determine, based on the audio data, whether the audio data is associated with one or more navigation instructions. The computing system can, in accordance with a determination that the audio data is associated with one or more navigation instructions, determine a context-appropriate audio response. The computing system can provide the context-appropriate audio response to the user.
  Type: Grant
  Filed: August 12, 2020
  Date of Patent: April 2, 2024
  Assignee: GOOGLE LLC
  Inventors: Victor Carbune, Matthew Sharifi, Blaise Aguera-Arcas
- Publication number: 20240071406
  Abstract: Implementations disclosed herein are directed to utilizing ephemeral learning techniques and/or federated learning techniques to update audio-based machine learning (ML) model(s) based on processing streams of audio data generated via radio station(s) across the world. This enables the audio-based ML model(s) to learn representations and/or understand languages across the world, including tail languages for which there is no/minimal audio data. In various implementations, one or more deduping techniques may be utilized to ensure the same stream of audio data is not overutilized in updating the audio-based ML model(s). In various implementations, a given client device may determine whether to employ an ephemeral learning technique or a federated learning technique based on, for instance, a connection status with a remote system. Generally, the streams of audio data are received at client devices, but the ephemeral learning techniques may be implemented at the client device and/or at the remote system.
  Type: Application
  Filed: December 5, 2022
  Publication date: February 29, 2024
  Inventors: Johan Schalkwyk, Blaise Aguera-Arcas, Diego Melendo Casado, Oren Litvin
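Two of the abstract's ideas lend themselves to a short sketch: deduplicating audio streams so the same content is not overutilized, and choosing ephemeral vs. federated learning from the client's connection status. The hash-based dedup and the two-way mode switch below are simplified assumptions, not the deduping techniques or decision logic claimed in the filing.

```python
# Hedged sketch: content-hash dedup of audio streams, plus a minimal
# learning-mode choice driven by connectivity. Names are illustrative.
import hashlib

seen_stream_digests: set[str] = set()

def should_use_stream(audio_chunk: bytes) -> bool:
    """Skip a stream whose content was already used (simple hash dedup)."""
    digest = hashlib.sha256(audio_chunk).hexdigest()
    if digest in seen_stream_digests:
        return False
    seen_stream_digests.add(digest)
    return True

def choose_learning_mode(connected_to_remote_system: bool) -> str:
    """Ephemeral learning assumes a live remote link; otherwise train locally."""
    return "ephemeral" if connected_to_remote_system else "federated"
```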
- Publication number: 20240004677
  Abstract: Generally, the present disclosure is directed to user interface understanding. More particularly, the present disclosure relates to training and utilization of machine-learned models for user interface prediction and/or generation. A machine-learned interface prediction model can be pre-trained using a variety of pre-training tasks for eventual downstream task training and utilization (e.g., interface prediction, interface generation, etc.).
  Type: Application
  Filed: September 13, 2023
  Publication date: January 4, 2024
  Inventors: Srinivas Kumar Sunkara, Xiaoxue Zang, Ying Xu, Lijuan Liu, Nevan Holt Wichers, Gabriel Overholt Schubiner, Jindong Chen, Abhinav Kumar Rastogi, Blaise Aguera-Arcas, Zecheng He
- Patent number: 11842736
  Abstract: Provided is an in-ear device and associated computational support system that leverages machine learning to interpret sensor data descriptive of one or more in-ear phenomena during subvocalization by the user. An electronic device can receive sensor data generated by at least one sensor at least partially positioned within an ear of a user, wherein the sensor data was generated by the at least one sensor concurrently with the user subvocalizing a subvocalized utterance. The electronic device can then process the sensor data with a machine-learned subvocalization interpretation model to generate an interpretation of the subvocalized utterance as an output of the machine-learned subvocalization interpretation model.
  Type: Grant
  Filed: February 10, 2023
  Date of Patent: December 12, 2023
  Assignee: Google LLC
  Inventors: Yaroslav Volovich, Ant Oztaskent, Blaise Aguera-Arcas
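The interpretation step (sensor frame in, recognized utterance out) can be illustrated with a toy nearest-template classifier. This is a deliberate stand-in: the patent describes a machine-learned interpretation model, and the signature vectors and vocabulary below are invented solely for demonstration.

```python
# Toy stand-in for the subvocalization interpretation step: map a window of
# in-ear sensor readings to the nearest known utterance signature.
import math

TEMPLATES = {  # hypothetical per-utterance sensor signatures (made up)
    "yes": [0.9, 0.1, 0.4],
    "no": [0.1, 0.8, 0.3],
}

def interpret_subvocalization(frame: list[float]) -> str:
    """Return the utterance whose template signature is closest to the frame."""
    return min(TEMPLATES, key=lambda word: math.dist(TEMPLATES[word], frame))
```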
- Patent number: 11805208
  Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving, at a mobile computing device that is associated with a called user, a call from a calling computing device that is associated with a calling user; in response to receiving the call, determining, by the mobile computing device, that data associated with the called user indicates that the called user will not respond to the call; in response to determining that the called user will not respond to the call, inferring, by the mobile computing device, an informational need of the calling user; and automatically providing, from the mobile computing device to the calling computing device, information associated with the called user and that satisfies the inferred informational need of the calling user.
  Type: Grant
  Filed: May 9, 2019
  Date of Patent: October 31, 2023
  Assignee: Google LLC
  Inventors: Shavit Matias, Noam Etzion-Rosenberg, Blaise Aguera-Arcas, Benjamin Schlesinger, Brandon Barbello, Ori Kabeli, David Petrou, Yossi Matias, Nadav Bar
- Patent number: 11789753
  Abstract: Generally, the present disclosure is directed to user interface understanding. More particularly, the present disclosure relates to training and utilization of machine-learned models for user interface prediction and/or generation. A machine-learned interface prediction model can be pre-trained using a variety of pre-training tasks for eventual downstream task training and utilization (e.g., interface prediction, interface generation, etc.).
  Type: Grant
  Filed: June 1, 2021
  Date of Patent: October 17, 2023
  Assignee: GOOGLE LLC
  Inventors: Srinivas Kumar Sunkara, Xiaoxue Zang, Ying Xu, Lijuan Liu, Nevan Holt Wichers, Gabriel Overholt Schubiner, Jindong Chen, Abhinav Kumar Rastogi, Blaise Aguera-Arcas, Zecheng He
- Publication number: 20230316081
  Abstract: The present disclosure provides a new type of generalized artificial neural network where neurons and synapses maintain multiple states. While classical gradient-based backpropagation in artificial neural networks can be seen as a special case of a two-state network where one state is used for activations and another for gradients with update rules derived from the chain rule, example implementations of the generalized framework proposed herein may additionally: have neither explicit notion of nor ever receive gradients; contain more than two states; and/or implement or apply learned (e.g., meta-learned) update rules that control updates to the state(s) of the neuron during forward and/or backward propagation of information.
  Type: Application
  Filed: May 6, 2022
  Publication date: October 5, 2023
  Inventors: Mark Sandler, Andrey Zhmoginov, Thomas Edward Madams, Maksym Vladymyrov, Nolan Andrew Miller, Blaise Aguera-Arcas, Andrew Michael Jackson
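The core idea, a neuron carrying several state values that a learned rule mixes during propagation rather than the fixed activation/gradient pair of backpropagation, can be sketched minimally. The linear update rule below is an illustrative placeholder; in the framework described, such a rule would itself be learned or meta-learned, and nothing here is taken from the actual filing beyond the multi-state concept.

```python
# Minimal sketch: a neuron's k states updated by a (hypothetically learned)
# linear rule plus the incoming signal, instead of a chain-rule gradient.
def update_states(states: list[float], incoming: float,
                  rule: list[list[float]]) -> list[float]:
    """new_state[i] = sum_j rule[i][j] * old_state[j] + incoming."""
    return [sum(r * s for r, s in zip(row, states)) + incoming
            for row in rule]
```

With an identity `rule`, each state simply accumulates the incoming signal; a learned rule could instead couple the states in arbitrary ways during forward or backward passes.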
- Publication number: 20230186917
  Abstract: Provided is an in-ear device and associated computational support system that leverages machine learning to interpret sensor data descriptive of one or more in-ear phenomena during subvocalization by the user. An electronic device can receive sensor data generated by at least one sensor at least partially positioned within an ear of a user, wherein the sensor data was generated by the at least one sensor concurrently with the user subvocalizing a subvocalized utterance. The electronic device can then process the sensor data with a machine-learned subvocalization interpretation model to generate an interpretation of the subvocalized utterance as an output of the machine-learned subvocalization interpretation model.
  Type: Application
  Filed: February 10, 2023
  Publication date: June 15, 2023
  Applicant: Google LLC
  Inventors: Yaroslav Volovich, Ant Oztaskent, Blaise Aguera-Arcas
- Publication number: 20230160710
  Abstract: The present disclosure is directed to interactive voice navigation. In particular, a computing system can provide audio information including one or more navigation instructions to a user via a computing system associated with the user. The computing system can activate an audio sensor associated with the computing system. The computing system can collect, using the audio sensor, audio data associated with the user. The computing system can determine, based on the audio data, whether the audio data is associated with one or more navigation instructions. The computing system can, in accordance with a determination that the audio data is associated with one or more navigation instructions, determine a context-appropriate audio response. The computing system can provide the context-appropriate audio response to the user.
  Type: Application
  Filed: August 12, 2020
  Publication date: May 25, 2023
  Inventors: Victor Carbune, Matthew Sharifi, Blaise Aguera-Arcas
- Patent number: 11580978
  Abstract: Provided is an in-ear device and associated computational support system that leverages machine learning to interpret sensor data descriptive of one or more in-ear phenomena during subvocalization by the user. An electronic device can receive sensor data generated by at least one sensor at least partially positioned within an ear of a user, wherein the sensor data was generated by the at least one sensor concurrently with the user subvocalizing a subvocalized utterance. The electronic device can then process the sensor data with a machine-learned subvocalization interpretation model to generate an interpretation of the subvocalized utterance as an output of the machine-learned subvocalization interpretation model.
  Type: Grant
  Filed: November 24, 2020
  Date of Patent: February 14, 2023
  Assignee: Google LLC
  Inventors: Yaroslav Volovich, Ant Oztaskent, Blaise Aguera-Arcas
- Publication number: 20220382565
  Abstract: Generally, the present disclosure is directed to user interface understanding. More particularly, the present disclosure relates to training and utilization of machine-learned models for user interface prediction and/or generation. A machine-learned interface prediction model can be pre-trained using a variety of pre-training tasks for eventual downstream task training and utilization (e.g., interface prediction, interface generation, etc.).
  Type: Application
  Filed: June 1, 2021
  Publication date: December 1, 2022
  Inventors: Srinivas Kumar Sunkara, Xiaoxue Zang, Ying Xu, Lijuan Liu, Nevan Holt Wichers, Gabriel Overholt Schubiner, Jindong Chen, Abhinav Kumar Rastogi, Blaise Aguera-Arcas, Zecheng He
- Patent number: 11159763
  Abstract: The present disclosure provides an image capture, curation, and editing system that includes a resource-efficient mobile image capture device that continuously captures images. In particular, the present disclosure provides low power frameworks for controlling image sensor mode in a mobile image capture device. One example low power framework includes a scene analyzer that analyzes a scene depicted by a first image and, based at least in part on such analysis, causes an image sensor control signal to be provided to an image sensor to adjust at least one of the frame rate and the resolution of the image sensor.
  Type: Grant
  Filed: July 23, 2020
  Date of Patent: October 26, 2021
  Assignee: Google LLC
  Inventors: Aaron Michael Donsbach, Benjamin Vanik, Jon Gabriel Clapper, Alison Lentz, Joshua Denali Lovejoy, Robert Douglas Fritz, III, Krzysztof Duleba, Li Zhang, Juston Payne, Emily Anne Fortuna, Iwona Bialynicka-Birula, Blaise Aguera-Arcas, Daniel Ramage, Benjamin James McMahan, Oliver Fritz Lange, Jess Holbrook
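The control loop in the abstract, where a scene analyzer scores an image and that score drives a sensor control signal adjusting frame rate and resolution, can be sketched as below. The thresholds, setting values, and function name are assumptions made for illustration and do not come from the patent.

```python
# Sketch of a low-power sensor control loop: a scene-interest score in [0, 1]
# (assumed to come from a scene analyzer) is mapped to sensor settings.
def sensor_control_signal(scene_interest: float) -> dict:
    """Map a scene-interest score to illustrative frame-rate/resolution settings."""
    if scene_interest > 0.7:                    # promising scene: capture aggressively
        return {"fps": 30, "resolution": "high"}
    if scene_interest > 0.3:                    # ambiguous: moderate power budget
        return {"fps": 15, "resolution": "medium"}
    return {"fps": 1, "resolution": "low"}      # dull scene: near-idle sensor
```

Keeping the sensor near-idle for uninteresting scenes is what makes continuous capture feasible on a battery-powered device.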
- Publication number: 20210183383
  Abstract: Provided is an in-ear device and associated computational support system that leverages machine learning to interpret sensor data descriptive of one or more in-ear phenomena during subvocalization by the user. An electronic device can receive sensor data generated by at least one sensor at least partially positioned within an ear of a user, wherein the sensor data was generated by the at least one sensor concurrently with the user subvocalizing a subvocalized utterance. The electronic device can then process the sensor data with a machine-learned subvocalization interpretation model to generate an interpretation of the subvocalized utterance as an output of the machine-learned subvocalization interpretation model.
  Type: Application
  Filed: November 24, 2020
  Publication date: June 17, 2021
  Inventors: Yaroslav Volovich, Ant Oztaskent, Blaise Aguera-Arcas
- Publication number: 20210090750
  Abstract: The present disclosure provides systems and methods that leverage machine-learned models in conjunction with user-associated data and disease prevalence mapping to predict disease infections with improved user privacy. In one example, a computer-implemented method can include obtaining, by a user computing device associated with a user, a machine-learned prediction model configured to predict a probability that the user may be infected with a disease based at least in part on user-associated data associated with the user. The method can further include receiving, by the user computing device, the user-associated data associated with the user. The method can further include providing, by the user computing device, the user-associated data as input to the machine-learned prediction model, the machine-learned prediction model being implemented on the user computing device.
  Type: Application
  Filed: September 27, 2018
  Publication date: March 25, 2021
  Inventors: Adam Sadilek, Blaise Aguera-Arcas, Keith Allen Bonawitz
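The privacy property described, the model is downloaded to and evaluated on the user's device, so raw user-associated data never leaves it, can be sketched with a simple on-device scorer. The logistic form, weights, and feature names below are invented for illustration; the filing does not specify this model class.

```python
# Hedged sketch of on-device inference: a downloaded model (here a toy
# logistic scorer) consumes user-associated features locally.
import math

def infection_probability(features: dict[str, float],
                          weights: dict[str, float], bias: float) -> float:
    """On-device logistic model: sigmoid of a weighted feature sum."""
    z = bias + sum(weights.get(name, 0.0) * value
                   for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))
```

Because only this function runs on the device (and at most model updates, never the `features` dict, would be shared), the raw user data stays local.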
- Publication number: 20200358901
  Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving, at a mobile computing device that is associated with a called user, a call from a calling computing device that is associated with a calling user; in response to receiving the call, determining, by the mobile computing device, that data associated with the called user indicates that the called user will not respond to the call; in response to determining that the called user will not respond to the call, inferring, by the mobile computing device, an informational need of the calling user; and automatically providing, from the mobile computing device to the calling computing device, information associated with the called user and that satisfies the inferred informational need of the calling user.
  Type: Application
  Filed: May 9, 2019
  Publication date: November 12, 2020
  Applicant: Google LLC
  Inventors: Shavit Matias, Noam Etzion-Rosenberg, Blaise Aguera-Arcas, Benjamin Schlesinger, Brandon Barbello, Ori Kabeli, David Petrou, Yossi Matias, Nadav Bar
- Publication number: 20200351466
  Abstract: The present disclosure provides an image capture, curation, and editing system that includes a resource-efficient mobile image capture device that continuously captures images. In particular, the present disclosure provides low power frameworks for controlling image sensor mode in a mobile image capture device. One example low power framework includes a scene analyzer that analyzes a scene depicted by a first image and, based at least in part on such analysis, causes an image sensor control signal to be provided to an image sensor to adjust at least one of the frame rate and the resolution of the image sensor.
  Type: Application
  Filed: July 23, 2020
  Publication date: November 5, 2020
  Inventors: Aaron Michael Donsbach, Benjamin Vanik, Jon Gabriel Clapper, Alison Lentz, Joshua Denali Lovejoy, Robert Douglas Fritz, III, Krzysztof Duleba, Li Zhang, Juston Payne, Emily Anne Fortuna, Iwona Bialynicka-Birula, Blaise Aguera-Arcas, Daniel Ramage, Benjamin James McMahan, Oliver Fritz Lange, Jess Holbrook
- Patent number: 10732809
  Abstract: The present disclosure provides an image capture, curation, and editing system that includes a resource-efficient mobile image capture device that continuously captures images. The mobile image capture device is operable to input an image into at least one neural network and to receive at least one descriptor of the desirability of a scene depicted by the image as an output of the at least one neural network. The mobile image capture device is operable to determine, based at least in part on the at least one descriptor of the desirability of the scene of the image, whether to store a second copy of such image and/or one or more contemporaneously captured images in a non-volatile memory of the mobile image capture device or to discard a first copy of such image from a temporary image buffer without storing the second copy of such image in the non-volatile memory.
  Type: Grant
  Filed: March 6, 2018
  Date of Patent: August 4, 2020
  Assignee: Google LLC
  Inventors: Iwona Bialynicka-Birula, Blaise Aguera-Arcas, Daniel Ramage, Hugh Brendan McMahan, Oliver Fritz Lange, Emily Anne Fortuna, Divya Tyamagundlu, Jess Holbrook, Kristine Kohlhepp, Juston Payne, Krzysztof Duleba, Benjamin Vanik, Alison Lentz, Jon Gabriel Clapper, Joshua Denali Lovejoy, Aaron Michael Donsbach
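The capture-time triage described, a network's desirability descriptor decides whether a buffered image is copied to non-volatile storage or simply dropped from the temporary buffer, can be sketched as follows. The threshold, the dict-based buffer/storage, and the function name are assumptions; the patent does not prescribe this structure.

```python
# Sketch of desirability-based image triage: the first copy always leaves
# the temporary buffer; a second copy persists only if the (assumed
# neural-network-produced) desirability score clears a threshold.
def triage_image(image_id: str, desirability: float,
                 buffer: dict, storage: dict, threshold: float = 0.5) -> None:
    """Move the image from the temporary buffer to storage, or discard it."""
    image = buffer.pop(image_id)       # first copy removed from the buffer
    if desirability >= threshold:
        storage[image_id] = image      # second copy written to non-volatile memory
```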
- Patent number: 10728489
  Abstract: The present disclosure provides an image capture, curation, and editing system that includes a resource-efficient mobile image capture device that continuously captures images. In particular, the present disclosure provides low power frameworks for controlling image sensor mode in a mobile image capture device. One example low power framework includes a scene analyzer that analyzes a scene depicted by a first image and, based at least in part on such analysis, causes an image sensor control signal to be provided to an image sensor to adjust at least one of the frame rate and the resolution of the image sensor.
  Type: Grant
  Filed: August 22, 2018
  Date of Patent: July 28, 2020
  Assignee: Google LLC
  Inventors: Aaron Michael Donsbach, Benjamin Vanik, Jon Gabriel Clapper, Alison Lentz, Joshua Denali Lovejoy, Robert Douglas Fritz, III, Krzysztof Duleba, Li Zhang, Juston Payne, Emily Anne Fortuna, Iwona Bialynicka-Birula, Blaise Aguera-Arcas, Daniel Ramage, Benjamin James McMahan, Oliver Fritz Lange, Jess Holbrook