Patents by Inventor Victor Carbune

Victor Carbune has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230252995
    Abstract: Various implementations include determining whether further spoken input is intended to correct at least one word in a candidate text representation of spoken input. Various implementations include receiving audio data capturing spoken input of a user. Various implementations include rendering, to the user, output based on the candidate text representation. Various implementations include receiving, while the output is being rendered, further audio data capturing the further spoken input. In response to determining the further spoken input is intended to correct the at least one word in the candidate text representation, various implementations include generating a revised text representation of the spoken input by altering at least one word in the candidate text representation based on one or more terms in a further candidate text representation of the further spoken input.
    Type: Application
    Filed: February 8, 2022
    Publication date: August 10, 2023
    Inventors: Matthew Sharifi, Victor Carbune, Bogdan Prisacari, Alexander Froemmgen, Milosz Kmieciak, Felix Weissenberger, Daniel Valcarce
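The word-level correction flow summarized in the abstract above can be illustrated with a minimal sketch: given a candidate transcript and a follow-up utterance, the candidate word most similar to the corrected term is replaced. The "no, I said ..." phrasing and the string-similarity heuristic below are assumptions made for illustration, not the claimed method.

```python
import difflib

def apply_spoken_correction(candidate: str, correction_utterance: str) -> str:
    """Replace the candidate word that best matches the corrected term.

    Illustrative only: assumes corrections phrased as "no, I said <term>".
    """
    prefix = "no, i said "
    normalized = correction_utterance.lower().strip()
    if not normalized.startswith(prefix):
        return candidate  # not recognized as a correction
    replacement = normalized[len(prefix):].strip()

    words = candidate.split()
    if not words:
        return candidate
    # The misrecognized word is likely the one most similar to the correction term.
    scores = [difflib.SequenceMatcher(None, w.lower(), replacement).ratio() for w in words]
    index = max(range(len(words)), key=lambda i: scores[i])
    words[index] = replacement
    return " ".join(words)

# "set a timer for nine minutes" + "no, I said five" -> "set a timer for five minutes"
print(apply_spoken_correction("set a timer for nine minutes", "no, I said five"))
```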
  • Publication number: 20230251877
    Abstract: Automated content switching rules may be generated and/or utilized for automatically switching away from certain interactive content during its presentation when one or more switch conditions are met. In some instances, automated content switching rules may define one or more non-temporal switch conditions, e.g., based upon reaching certain points or milestones in certain interactive content, that may be used to initiate actions that switch away from the interactive content. In addition, in some instances, automated content switching rules may be used not only to switch away from particular interactive content but also to switch to other interactive content, thereby enabling a user to effectively schedule a workflow across different interactive content, applications, and/or other computer-related tasks.
    Type: Application
    Filed: February 7, 2022
    Publication date: August 10, 2023
    Inventors: Victor Carbune, Matthew Sharifi
  • Patent number: 11722731
    Abstract: While an assistant-enabled device is playing back media content, a method includes receiving a contextual signal from an environment of the assistant-enabled device and executing an event recognition routine to determine whether the received contextual signal is indicative of an event that conflicts with the playback of the media content from the assistant-enabled device. When the event recognition routine determines that the received contextual signal is indicative of the event that conflicts with the playback of the media content, the method also includes adjusting content playback settings of the assistant-enabled device.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: August 8, 2023
    Assignee: Google LLC
    Inventors: Victor Carbune, Matthew Sharifi
  • Patent number: 11720613
    Abstract: Techniques are described herein for determining an information gain score for one or more documents of interest to a user and presenting information from the documents based on the information gain score. An information gain score for a given document is indicative of additional information that is included in the document beyond information contained in documents that were previously viewed by the user. In some implementations, the information gain score may be determined for one or more documents by applying data from the documents across a machine learning model to generate an information gain score. Based on the information gain scores of a set of documents, the documents can be provided to the user in a manner that reflects the likely information gain the user would attain by viewing them.
    Type: Grant
    Filed: April 22, 2022
    Date of Patent: August 8, 2023
    Assignee: Google LLC
    Inventors: Victor Carbune, Pedro Gonnet Anders
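As a rough illustration of the information-gain idea (a crude stand-in for the machine learning model the abstract describes), a score can be approximated as the fraction of a candidate document's terms that do not already appear in previously viewed documents, and candidates can then be ranked by that score.

```python
def information_gain_score(candidate: str, previously_viewed: list[str]) -> float:
    """Fraction of the candidate's terms absent from previously viewed documents."""
    seen = set()
    for document in previously_viewed:
        seen.update(document.lower().split())
    terms = candidate.lower().split()
    if not terms:
        return 0.0
    novel = [term for term in terms if term not in seen]
    return len(novel) / len(terms)

def rank_by_information_gain(candidates: list[str], previously_viewed: list[str]) -> list[str]:
    """Order candidate documents by the additional information they likely provide."""
    return sorted(candidates,
                  key=lambda doc: information_gain_score(doc, previously_viewed),
                  reverse=True)
```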
  • Publication number: 20230237312
    Abstract: Techniques are disclosed that enable automating user interface input by generating a sequence of actions to perform a task utilizing a multi-agent reinforcement learning framework. Various implementations process an intent associated with received user interface input using a holistic reinforcement policy network to select a software reinforcement learning policy network. The sequence of actions can be generated by processing the intent, as well as a sequence of software client state data, using the selected software reinforcement learning policy network. The sequence of actions is utilized to control the software client corresponding to the selected software reinforcement learning policy network.
    Type: Application
    Filed: March 29, 2023
    Publication date: July 27, 2023
    Inventors: Victor Carbune, Thomas Deselaers
  • Publication number: 20230229530
    Abstract: Implementations set forth herein relate to intervening notifications provided by an application for mitigating computationally wasteful application launching behavior that is exhibited by some users. A state of a module of a target application can be identified by emulating user inputs previously provided by the user to the target application. In this way, the state of the module can be determined without visibly launching the target application. When the state of the module is determined to satisfy criteria for providing a notification to the user, the application can render a notification for the user. The application can provide intervening notifications for a variety of different target applications in order to reduce a frequency at which the user launches and closes applications to check for variations in target application content.
    Type: Application
    Filed: March 20, 2023
    Publication date: July 20, 2023
    Inventors: Sandro Feuz, Victor Carbune
  • Publication number: 20230230578
    Abstract: A personalized endpointing measure can be used to determine whether a user has finished speaking a spoken utterance. Various implementations include using the personalized endpointing measure to determine whether a candidate endpoint indicates a user has finished speaking the spoken utterance or whether the user has paused and has not finished speaking the spoken utterance. Various implementations include determining the personalized endpointing measure based on a portion of a text representation of the spoken utterance immediately preceding the candidate endpoint and a user-specific measure. Additionally or alternatively, the user-specific measure can be based on the text representation immediately preceding the candidate endpoint and one or more historical interactions with the user. In various implementations, each of the historical interactions is specific to the text representation and the user, and indicates whether a previous instance of the text representation was a previous endpoint for the user.
    Type: Application
    Filed: January 20, 2022
    Publication date: July 20, 2023
    Inventors: Matthew Sharifi, Victor Carbune
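A minimal sketch of how a personalized endpointing measure could blend a generic endpoint score with a user-specific prior derived from historical interactions keyed by the preceding text. The blending rule, weighting, and data structures are assumptions for illustration, not the patented formulation.

```python
def personalized_endpoint_measure(base_measure: float,
                                  preceding_text: str,
                                  history: dict[str, list[bool]],
                                  weight: float = 0.5) -> float:
    """Blend a generic endpointing measure with a user-specific one.

    `history` maps a text prefix to past outcomes (True when that prefix
    really did end the user's utterance). Names and the blending rule
    are illustrative assumptions.
    """
    outcomes = history.get(preceding_text.lower().strip(), [])
    if outcomes:
        user_measure = sum(outcomes) / len(outcomes)
    else:
        user_measure = base_measure  # no personal signal; fall back to the generic score
    return (1 - weight) * base_measure + weight * user_measure

# A user who habitually pauses after "call mom and" should not be endpointed there.
score = personalized_endpoint_measure(0.8, "call mom and",
                                      {"call mom and": [False, False, True]})
```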
  • Publication number: 20230223031
    Abstract: Implementations set forth herein relate to an automated assistant that can solicit other devices for data that can assist with user authentication. User authentication can be streamlined for certain requests by removing a requirement that all authentication be performed at a single device and/or by a single application. For instance, the automated assistant can rely on data from other devices, which can indicate a degree to which a user is predicted to be present at a location of an assistant-enabled device. The automated assistant can process this data to make a determination regarding whether the user should be authenticated in response to an assistant input and/or pre-emptively before the user provides an assistant input. In some implementations, the automated assistant can perform one or more factors of authentication and utilize the data to verify the user in lieu of performing one or more other factors of authentication.
    Type: Application
    Filed: January 11, 2022
    Publication date: July 13, 2023
    Inventors: Matthew Sharifi, Victor Carbune
  • Publication number: 20230215422
    Abstract: Implementations described herein include detecting a stream of audio data that captures a spoken utterance of the user and that captures ambient noise occurring within a threshold time period of the spoken utterance being spoken by the user. Implementations further include processing a portion of the audio data that includes the ambient noise to determine ambient noise classification(s), processing a portion of the audio data that includes the spoken utterance to generate a transcription, processing both the transcription and the ambient noise classification(s) with a machine learning model to generate a user intent and parameter(s) for the user intent, and performing one or more automated assistant actions based on the user intent and using the parameter(s).
    Type: Application
    Filed: January 5, 2022
    Publication date: July 6, 2023
    Inventors: Victor Carbune, Matthew Sharifi
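A toy sketch of the pipeline described above: ambient-noise classifications are interpreted alongside the transcription to resolve an otherwise ambiguous request. The single hand-written rule below stands in for the machine learning model named in the abstract, and the label and intent names are assumed for illustration.

```python
from dataclasses import dataclass

@dataclass
class AssistantAction:
    intent: str
    parameters: dict

def interpret_with_ambient_noise(transcription: str, noise_labels: list[str]) -> AssistantAction:
    """Map (transcription, ambient noise classifications) to an intent and parameters."""
    # Assumed example: "turn it up" while a television is audible targets the TV volume.
    if "turn it up" in transcription.lower() and "television" in noise_labels:
        return AssistantAction(intent="adjust_volume",
                               parameters={"device": "television", "direction": "up"})
    return AssistantAction(intent="unknown", parameters={})

print(interpret_with_ambient_noise("hey, turn it up", ["television", "speech"]))
```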
  • Patent number: 11694685
    Abstract: A method includes receiving audio data corresponding to an utterance spoken by a user and captured by a user device. The utterance includes a command for a digital assistant to perform an operation. The method also includes determining, using a hotphrase detector configured to detect each trigger word in a set of trigger words associated with a hotphrase, whether any of the trigger words in the set of trigger words are detected in the audio data during a corresponding fixed-duration time window. The method also includes identifying, in the audio data corresponding to the utterance, the hotphrase when each other trigger word in the set of trigger words was also detected in the audio data. The method also includes triggering an automated speech recognizer to perform speech recognition on the audio data when the hotphrase is identified in the audio data corresponding to the utterance.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: July 4, 2023
    Assignee: Google LLC
    Inventors: Victor Carbune, Matthew Sharifi
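The trigger-word mechanism in this abstract can be sketched as a small detector that remembers which trigger words were seen within a fixed-duration window and fires once the full set has been observed. The window length, timestamps, and data structures are assumptions for illustration.

```python
import time

class HotphraseDetector:
    """Track per-trigger-word detections within a fixed-duration window (illustrative)."""

    def __init__(self, trigger_words: set[str], window_seconds: float = 2.0):
        self.trigger_words = trigger_words
        self.window_seconds = window_seconds
        self.detections: dict[str, float] = {}

    def observe(self, word: str, timestamp: float | None = None) -> bool:
        """Record a detected trigger word; return True once the hotphrase is identified."""
        now = timestamp if timestamp is not None else time.monotonic()
        if word in self.trigger_words:
            self.detections[word] = now
        # Discard detections that have fallen outside the window.
        self.detections = {w: t for w, t in self.detections.items()
                           if now - t <= self.window_seconds}
        # The hotphrase is identified once every trigger word was seen in-window.
        return self.trigger_words.issubset(self.detections)

detector = HotphraseDetector({"turn", "off", "lights"})
for i, word in enumerate(["please", "turn", "off", "the", "lights"]):
    if detector.observe(word, timestamp=i * 0.3):
        print("hotphrase identified -> trigger full speech recognition")
```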
  • Publication number: 20230206923
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for collaboration between multiple voice controlled devices are disclosed. In one aspect, a method includes the actions of identifying, by a first computing device, a second computing device that is configured to respond to a particular, predefined hotword; receiving audio data that corresponds to an utterance; receiving a transcription of additional audio data outputted by the second computing device in response to the utterance; based on the transcription of the additional audio data and based on the utterance, generating a transcription that corresponds to a response to the additional audio data; and providing, for output, the transcription that corresponds to the response.
    Type: Application
    Filed: December 5, 2022
    Publication date: June 29, 2023
    Inventors: Victor Carbune, Pedro Gonnet Andres, Thomas Deselaers, Sandro Feuz
  • Publication number: 20230195815
    Abstract: Techniques are described herein for collaborative search sessions through an automated assistant. A method includes: receiving, from a first user of a first client device, a first query in a query session; providing, to the first user, a first set of search results; determining, based on at least one term in the first query, that the first query is relevant to a second user of the first client device; providing, to the second user, a selectable option to join the query session; in response to receiving, from the second user, an indication of acceptance of the selectable option, adding the second user to the query session; receiving, from the second user, additional input; generating, based on the additional input received from the second user, a modified set of search results; and providing, to the first user and the second user, the modified set of search results.
    Type: Application
    Filed: December 17, 2021
    Publication date: June 22, 2023
    Inventors: Matthew Sharifi, Victor Carbune
  • Publication number: 20230194294
    Abstract: A first computing device can implement a method for providing navigation instructions. The method includes initiating a first navigation session for providing a first set of navigation instructions to a user from a starting location to a destination location along a first route. The method also includes detecting a second computing device in proximity to the first computing device, and determining that the second computing device is implementing a second navigation session for providing a second set of navigation instructions to the destination location along a second route. Further, the method includes adjusting the first navigation session in accordance with the second navigation session.
    Type: Application
    Filed: September 11, 2020
    Publication date: June 22, 2023
    Inventors: Matthew Sharifi, Victor Carbune
  • Publication number: 20230197072
    Abstract: Techniques are described herein for warm word arbitration between automated assistant devices. A method includes: determining that warm word arbitration is to be initiated between a first assistant device and one or more additional assistant devices, including a second assistant device; broadcasting, by the first assistant device, to the one or more additional assistant devices, an active set of warm words for the first assistant device; for each of the one or more additional assistant devices, receiving, from the additional assistant device, an active set of warm words for the additional assistant device; identifying a matching warm word included in the active set of warm words for the first assistant device and included in the active set of warm words for the second assistant device; and enabling or disabling detection of the matching warm word by the first assistant device, in response to identifying the matching warm word.
    Type: Application
    Filed: January 11, 2022
    Publication date: June 22, 2023
    Inventors: Matthew Sharifi, Victor Carbune
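One plausible reading of the arbitration step is sketched below: after the devices exchange their active warm word sets, any warm word that another nearby device already listens for is disabled on the first device. The "disable on the first device" policy and the device names are assumptions for illustration, not the claimed arbitration logic.

```python
def warm_words_to_disable(first_device_warm_words: set[str],
                          other_devices_warm_words: dict[str, set[str]]) -> set[str]:
    """Return warm words the first device could disable because a nearby
    assistant device already listens for them (illustrative policy only)."""
    matching = set()
    for device_name, warm_words in other_devices_warm_words.items():
        matching |= first_device_warm_words & warm_words
    return matching

# Both devices listen for "stop"; only one needs to keep that warm word active.
print(warm_words_to_disable({"stop", "volume up"},
                            {"kitchen speaker": {"stop", "next"}}))
```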
  • Publication number: 20230197071
    Abstract: An overall endpointing measure can be generated based on an audio-based endpointing measure and (1) an accelerometer-based endpointing measure and/or (2) a gaze-based endpointing measure. The overall endpointing measure can be used in determining whether a candidate endpoint is an actual endpoint. Various implementations include generating the audio-based endpointing measure by processing an audio data stream, capturing a spoken utterance of a user, using an audio model. Various implementations additionally or alternatively include generating the accelerometer-based endpointing measure by processing a stream of accelerometer data using an accelerometer model. Various implementations additionally or alternatively include processing an image data stream using a gaze model to generate the gaze-based endpointing measure.
    Type: Application
    Filed: December 17, 2021
    Publication date: June 22, 2023
    Inventors: Matthew Sharifi, Victor Carbune
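A minimal sketch of combining the per-modality measures named in the abstract into an overall endpointing decision; the simple average and the threshold are assumptions, since the abstract does not specify the combination function.

```python
def overall_endpoint_measure(audio_measure: float,
                             accelerometer_measure: float | None = None,
                             gaze_measure: float | None = None,
                             threshold: float = 0.5) -> bool:
    """Average whichever modality measures are available and threshold the result."""
    measures = [m for m in (audio_measure, accelerometer_measure, gaze_measure)
                if m is not None]
    return sum(measures) / len(measures) >= threshold

# Audio alone is ambiguous (0.55), but accelerometer and gaze signals push the
# combined measure over the threshold, so the candidate endpoint is accepted.
is_endpoint = overall_endpoint_measure(0.55, accelerometer_measure=0.7, gaze_measure=0.6)
```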
  • Patent number: 11683320
    Abstract: The present disclosure is generally directed to a data processing system for customizing content in a voice activated computer network environment. With user consent, the data processing system can improve the efficiency and effectiveness of auditory data packet transmission over one or more computer networks by, for example, increasing the accuracy of the voice identification process used in the generation of customized content. The present solution can make accurate identifications while generating fewer audio identification models, which are computationally intensive to generate.
    Type: Grant
    Filed: April 22, 2021
    Date of Patent: June 20, 2023
    Assignee: Google LLC
    Inventors: Victor Carbune, Thomas Deselaers, Sandro Feuz
  • Publication number: 20230186029
    Abstract: A computing system can include one or more machine-learned models configured to receive context data that describes one or more entities to be named. In response to receipt of the context data, the machine-learned model(s) can generate output data that describes one or more names for the entity or entities described by the context data. The computing system can be configured to perform operations including inputting the context data into the machine-learned model(s). The operations can include receiving, as an output of the machine-learned model(s), the output data that describes the name(s) for the entity or entities described by the context data. The operations can include storing at least one name described by the output data.
    Type: Application
    Filed: February 9, 2023
    Publication date: June 15, 2023
    Inventors: Victor Carbune, Alexandru-Marian Damian
  • Publication number: 20230186909
    Abstract: Systems and methods for determining, based on invocation input that is common to multiple automated assistants, which automated assistant to invoke in lieu of invoking other automated assistants. The invocation input is processed to determine one or more invocation features that may be utilized to determine which, of a plurality of candidate automated assistants, to invoke. Further, additional features are processed that can indicate which, of the plurality of invocable automated assistants, to invoke. Once an automated assistant has been invoked, additional audio data and/or features of additional audio data are provided to the invoked automated assistant for further processing.
    Type: Application
    Filed: December 14, 2021
    Publication date: June 15, 2023
    Inventors: Matthew Sharifi, Victor Carbune
  • Publication number: 20230186908
    Abstract: Implementations relate to interactions between a user and an automated assistant during a dialog between the user and the automated assistant. Some implementations relate to processing received user request input to determine that it is of a particular type that is associated with a source parameter rule and, in response, causing one or more sources indicated as preferred by the source parameter rule and one or more additional sources not indicated by the source parameter rule to be searched based on the user request input. Further, those implementations relate to identifying search results of the search(es), and generating, in dependence on the search results, a response to the user request that includes content from search result(s) of the preferred source(s) and/or content from search result(s) of the additional source(s). Generating the response further includes incorporating, in the response, an indication of whether the source parameter rule was followed or violated in generating the response.
    Type: Application
    Filed: December 10, 2021
    Publication date: June 15, 2023
    Inventors: Matthew Sharifi, Victor Carbune
  • Publication number: 20230186922
    Abstract: Implementations set forth herein relate to an automated assistant that can be customized by a user to provide custom assistant responses to certain assistant queries, which may originate from other users. The user can establish certain custom assistant responses by providing an assistant response request to the automated assistant and/or responding to a request from the automated assistant to establish a particular custom assistant response. In some instances, a user can elect to establish a custom assistant response when the user determines or acknowledges that certain common queries are being submitted to the automated assistant, but the automated assistant is unable to resolve them. Establishing such custom assistant responses can therefore condense interactions between other users and the automated assistant.
    Type: Application
    Filed: February 6, 2023
    Publication date: June 15, 2023
    Inventors: Victor Carbune, Matthew Sharifi