Patents by Inventor Victor Carbune

Victor Carbune has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230186029
    Abstract: A computing system can include one or more machine-learned models configured to receive context data that describes one or more entities to be named. In response to receipt of the context data, the machine-learned model(s) can generate output data that describes one or more names for the entity or entities described by the context data. The computing system can be configured to perform operations including inputting the context data into the machine-learned model(s). The operations can include receiving, as an output of the machine-learned model(s), the output data that describes the name(s) for the entity or entities described by the context data. The operations can include storing at least one name described by the output data.
    Type: Application
    Filed: February 9, 2023
    Publication date: June 15, 2023
    Inventors: Victor Carbune, Alexandru-Marian Damian
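
The data flow in 20230186029 is: context data describing an entity goes into the machine-learned model(s), candidate names come out, and at least one name is stored. A minimal sketch of that flow, assuming a rule-based stand-in for the model; the EntityContext fields and name templates are hypothetical, not taken from the application.

```python
from dataclasses import dataclass, field

@dataclass
class EntityContext:
    """Context data describing an entity to be named (hypothetical fields)."""
    entity_type: str                      # e.g. "wifi_network" or "playlist"
    attributes: dict = field(default_factory=dict)

def generate_names(context: EntityContext, max_names: int = 3) -> list:
    """Stand-in for the machine-learned model(s): context data in, candidate names out."""
    owner = context.attributes.get("owner", "my")
    candidates = [f"{owner}'s {context.entity_type}",
                  f"{owner}'s new {context.entity_type}",
                  f"untitled {context.entity_type}"]
    return candidates[:max_names]

def name_entity(context: EntityContext, store: dict) -> str:
    """Input the context data, receive the output names, and store at least one of them."""
    names = generate_names(context)
    chosen = names[0]                     # pick the top-ranked candidate
    store[context.entity_type] = chosen   # persist the chosen name
    return chosen

if __name__ == "__main__":
    storage = {}
    ctx = EntityContext("playlist", {"owner": "Victor"})
    print(name_entity(ctx, storage))      # -> "Victor's playlist"
    print(storage)
```
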
  • Publication number: 20230186909
    Abstract: Systems and methods for determining, based on invocation input that is common to multiple automated assistants, which automated assistant to invoke in lieu of invoking other automated assistants. The invocation input is processed to determine one or more invocation features that may be utilized to determine which, of a plurality of candidate automated assistants, to invoke. Further, additional features are processed that can indicate which, of the plurality of invocable automated assistants, to invoke. Once an automated assistant has been invoked, additional audio data and/or features of additional audio data are provided to the invoked automated assistant for further processing.
    Type: Application
    Filed: December 14, 2021
    Publication date: June 15, 2023
    Inventors: Matthew Sharifi, Victor Carbune
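
A minimal sketch of the selection described in 20230186909: invocation features are scored against each candidate automated assistant, exactly one is invoked, and additional audio is routed only to it. The scoring signals and weights are illustrative assumptions.

```python
def score_assistant(assistant: dict, invocation_features: dict) -> float:
    """Score how well a candidate assistant matches the invocation features."""
    score = 0.0
    if invocation_features.get("hotword") in assistant["hotwords"]:
        score += 1.0
    # Prefer the assistant most recently used for similar requests (illustrative signal).
    score += 0.5 * assistant.get("recent_usage", 0.0)
    return score

def invoke_one(candidates: list, invocation_features: dict) -> dict:
    """Invoke a single assistant in lieu of invoking the others."""
    return max(candidates, key=lambda a: score_assistant(a, invocation_features))

def route_audio(invoked: dict, additional_audio: bytes) -> None:
    """Forward additional audio data to the invoked assistant for further processing."""
    invoked.setdefault("pending_audio", []).append(additional_audio)

if __name__ == "__main__":
    assistants = [
        {"name": "assistant_a", "hotwords": {"ok assistant"}, "recent_usage": 0.2},
        {"name": "assistant_b", "hotwords": {"ok assistant"}, "recent_usage": 0.9},
    ]
    chosen = invoke_one(assistants, {"hotword": "ok assistant"})
    route_audio(chosen, b"...audio frames...")
    print(chosen["name"])  # -> "assistant_b"
```
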
  • Publication number: 20230186908
    Abstract: Implementations relate to interactions between a user and an automated assistant during a dialog between the user and the automated assistant. Some implementations relate to processing received user request input to determine that it is of a particular type that is associated with a source parameter rule and, in response, causing one or more sources indicated as preferred by the source parameter rule and one or more additional sources not indicated by the source parameter rule to be searched based on the user request input. Further, those implementations relate to identifying search results of the search(es), and generating, in dependence on the search results, a response to the user request that includes content from search result(s) of the preferred source(s) and/or content from search result(s) of the additional source(s). Generating the response further includes adding, to the response, an indication of whether the source parameter rule was followed or violated in generating the response.
    Type: Application
    Filed: December 10, 2021
    Publication date: June 15, 2023
    Inventors: Matthew Sharifi, Victor Carbune
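
The flow in 20230186908 can be sketched as: search both the preferred source(s) named by the source parameter rule and additional sources, then build a response that records whether the rule was followed. The function names and the follow/violate heuristic below are illustrative assumptions.

```python
def search(source: str, query: str) -> list:
    """Stand-in search over a single source; returns hypothetical result snippets."""
    return [f"[{source}] result for '{query}'"]

def respond(query: str, preferred_sources: list, additional_sources: list) -> dict:
    """Search preferred and additional sources, then note whether the rule was followed."""
    preferred_results = [r for s in preferred_sources for r in search(s, query)]
    additional_results = [r for s in additional_sources for r in search(s, query)]
    # Followed: the response is drawn from the preferred source(s); otherwise violated.
    use_preferred = bool(preferred_results)
    return {
        "content": preferred_results if use_preferred else additional_results,
        "rule_followed": use_preferred,
    }

if __name__ == "__main__":
    print(respond("weather tomorrow", ["weather-site-a"], ["weather-site-b"]))
```
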
  • Publication number: 20230186922
    Abstract: Implementations set forth herein relate to an automated assistant that can be customized by a user to provide custom assistant responses to certain assistant queries, which may originate from other users. The user can establish certain custom assistant responses by providing an assistant response request to the automated assistant and/or responding to a request from the automated assistant to establish a particular custom assistant response. In some instances, a user can elect to establish a custom assistant response when the user determines or acknowledges that certain common queries are being submitted to the automated assistant but the automated assistant is unable to resolve them. Establishing such custom assistant responses can therefore condense interactions between other users and the automated assistant.
    Type: Application
    Filed: February 6, 2023
    Publication date: June 15, 2023
    Inventors: Victor Carbune, Matthew Sharifi
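
A minimal sketch of the custom-response mechanism in 20230186922, assuming a simple in-memory mapping from query text to the user-established response; a production system would match queries far more flexibly.

```python
class CustomResponses:
    """User-established custom responses for queries the assistant cannot otherwise resolve."""

    def __init__(self) -> None:
        self._responses = {}

    def establish(self, query: str, response: str) -> None:
        """The user establishes a custom response for a common query."""
        self._responses[query.lower()] = response

    def answer(self, query: str) -> str:
        """Serve the custom response if one exists, otherwise admit the query is unresolved."""
        return self._responses.get(query.lower(), "Sorry, I can't resolve that yet.")

if __name__ == "__main__":
    assistant = CustomResponses()
    assistant.establish("Where is the spare key?", "It's in the blue drawer.")
    print(assistant.answer("where is the spare key?"))    # custom response
    print(assistant.answer("what's the wifi password?"))  # still unresolved
```
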
  • Publication number: 20230188481
    Abstract: Implementations are directed to enabling a representative associated with an entity to quickly and efficiently modify a voice bot associated with the entity. The voice bot can be previously trained to communicate with user(s) on behalf of the entity through various communication channels (e.g., a telephone communication channel, a software application communication channel, a messaging communication channel, etc.). Processor(s) of a computing device can receive, from the representative, representative input to modify behavior(s) and/or parameter(s) that the voice bot utilizes in communicating with the plurality of users via the communication channels, determine whether the representative is authorized to cause the behavior(s) and/or parameter(s) to be modified, and cause the behavior(s) and/or parameter(s) to be modified in response to determining that the representative is authorized.
    Type: Application
    Filed: December 15, 2021
    Publication date: June 15, 2023
    Inventors: Matthew Sharifi, Victor Carbune
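
The guarded modification in 20230188481 reduces to: check that the representative is authorized for the entity, and only then apply the behavior/parameter changes to the voice bot. A minimal sketch, with hypothetical field names.

```python
def is_authorized(representative: str, entity_admins: set) -> bool:
    """Check whether the representative may modify the entity's voice bot."""
    return representative in entity_admins

def modify_voice_bot(bot: dict, representative: str, entity_admins: set,
                     changes: dict) -> bool:
    """Apply behavior/parameter changes only if the representative is authorized."""
    if not is_authorized(representative, entity_admins):
        return False
    bot.update(changes)   # e.g. new greeting, new business hours, new escalation rule
    return True

if __name__ == "__main__":
    voice_bot = {"greeting": "Hello, how can I help?"}
    ok = modify_voice_bot(voice_bot, "alice@example.com", {"alice@example.com"},
                          {"greeting": "Hi, thanks for calling!"})
    print(ok, voice_bot)
```
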
  • Patent number: 11676594
    Abstract: A method for decaying speech processing includes receiving, at a voice-enabled device, an indication of a microphone trigger event indicating a possible interaction with the device through speech where the device has a microphone that, when open, is configured to capture speech for speech recognition. In response to receiving the indication of the microphone trigger event, the method also includes instructing the microphone to open or remain open for a duration window to capture an audio stream in an environment of the device and providing the audio stream captured by the open microphone to a speech recognition system. During the duration window, the method further includes decaying a level of the speech recognition processing based on a function of the duration window and instructing the speech recognition system to use the decayed level of speech recognition processing over the audio stream captured by the open microphone.
    Type: Grant
    Filed: December 3, 2020
    Date of Patent: June 13, 2023
    Assignee: Google LLC
    Inventors: Matthew Sharifi, Victor Carbune
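
The decay in 11676594 can be pictured as a speech-processing level that starts high when the microphone opens and falls as the duration window elapses. A minimal sketch, assuming an exponential decay with a fixed floor; the patent does not prescribe this particular function.

```python
import math

def decayed_level(elapsed_s: float, window_s: float, floor: float = 0.1) -> float:
    """Decay the speech-processing level over the open-microphone duration window.

    Returns 1.0 at the start of the window and decays toward `floor` as the
    window elapses (an exponential decay is an illustrative choice).
    """
    if elapsed_s >= window_s:
        return floor
    fraction = elapsed_s / window_s
    return max(floor, math.exp(-3.0 * fraction))

if __name__ == "__main__":
    window = 10.0  # seconds the microphone stays open after the trigger event
    for t in (0.0, 2.5, 5.0, 7.5, 10.0):
        print(f"t={t:4.1f}s  level={decayed_level(t, window):.2f}")
```
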
  • Publication number: 20230178083
    Abstract: Implementations relate to at least intermittently processing dynamic contextual parameters and dynamically and automatically adapting, in dependence on the processing of the dynamic contextual parameters, audio data processing that is performed at an assistant device. The dynamic and automatic adapting of the audio data processing mitigates occurrences of false positives and/or false negatives in hot word processing, invocation-free speech recognition, and/or other audio-data-based automated assistant processing techniques. Implementations dynamically and automatically adapt the audio data processing between two or more states, and the automatic adaptation of the audio data processing from a current state to an alternate state is in response to the processing, of current values for the dynamic contextual parameters, satisfying one or more conditions.
    Type: Application
    Filed: December 3, 2021
    Publication date: June 8, 2023
    Inventors: Matthew Sharifi, Victor Carbune
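
A minimal sketch of the state adaptation in 20230178083: current values of contextual parameters are checked against conditions, and the audio-data processing switches between states when those conditions are (or stop being) satisfied. The state names and conditions are illustrative assumptions.

```python
def next_state(current_state: str, context: dict) -> str:
    """Adapt audio-data processing between states when contextual conditions are met.

    States and conditions are illustrative: 'hotword_only' vs. 'invocation_free'.
    """
    conditions_for_invocation_free = (
        context.get("user_facing_device", False)
        and context.get("recent_interaction_s", 1e9) < 30.0
    )
    if current_state == "hotword_only" and conditions_for_invocation_free:
        return "invocation_free"
    if current_state == "invocation_free" and not conditions_for_invocation_free:
        return "hotword_only"
    return current_state

if __name__ == "__main__":
    state = "hotword_only"
    state = next_state(state, {"user_facing_device": True, "recent_interaction_s": 5.0})
    print(state)  # -> "invocation_free"
```
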
  • Publication number: 20230178078
    Abstract: Implementations relate to an automated assistant that can respond to communications received via a third party application and/or other third party communication modality. The automated assistant can determine that the user is participating in multiple different conversations via multiple different third party communication services. In some implementations, conversations can be processed to identify particular features of the conversations. When the automated assistant is invoked to provide input to a conversation, the automated assistant can compare the input to the identified conversation features in order to select the particular conversation that is most relevant to the input. In this way, the automated assistant can assist with any of multiple disparate conversations that are each occurring via a different third party application.
    Type: Application
    Filed: January 30, 2023
    Publication date: June 8, 2023
    Inventors: Victor Carbune, Matthew Sharifi
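
The conversation selection in 20230178078 can be sketched as comparing the assistant-directed input against features extracted from each ongoing third-party conversation and picking the best match. The bag-of-words overlap below is an illustrative stand-in for the feature comparison.

```python
def conversation_features(messages: list) -> set:
    """Extract a crude bag-of-words feature set from a conversation's messages."""
    return {word.lower().strip(".,!?") for message in messages for word in message.split()}

def most_relevant_conversation(user_input: str, conversations: dict) -> str:
    """Compare the assistant-directed input against each conversation's features."""
    input_features = conversation_features([user_input])

    def overlap(name: str) -> int:
        return len(input_features & conversation_features(conversations[name]))

    return max(conversations, key=overlap)

if __name__ == "__main__":
    convos = {
        "messaging_app_A": ["Are we still on for dinner Friday?"],
        "messaging_app_B": ["Can you send the quarterly report?"],
    }
    print(most_relevant_conversation("Reply that dinner works for me", convos))
```
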
  • Publication number: 20230173657
    Abstract: Implementations set forth herein relate to a robotic computing device that can seek additional information from other nearby device(s) for fulfilling a request and/or delegating certain operations to the other nearby device(s). Delegating certain operations can involve the robotic computing device maneuvering to a location of a nearby device and soliciting the nearby device for assistance by providing an input from the robotic computing device to the nearby device. In some instances, the input can include an audible rendering of an invocation phrase and a command phrase for invoking an automated assistant that is accessible via the nearby device. A determination of whether to delegate certain operations or seek additional information can be based on a variety of factors such as predicted efficiency and estimated accuracy of performance for performing certain operations.
    Type: Application
    Filed: December 7, 2021
    Publication date: June 8, 2023
    Inventors: Matthew Sharifi, Victor Carbune
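
A minimal sketch of the delegation decision in 20230173657: compare predicted efficiency and estimated accuracy of performing the operation locally versus via the nearby device, and if delegation wins, audibly render an invocation phrase plus command at that device. The utility weighting is an illustrative assumption.

```python
def should_delegate(own_estimate: dict, nearby_estimate: dict) -> bool:
    """Decide whether to delegate an operation to a nearby device.

    Compares predicted efficiency and estimated accuracy (illustrative weighting).
    """
    def utility(est: dict) -> float:
        return 0.5 * est["efficiency"] + 0.5 * est["accuracy"]
    return utility(nearby_estimate) > utility(own_estimate)

def delegate_via_speech(nearby_device: str, command: str) -> str:
    """Delegate by audibly rendering an invocation phrase and command at the nearby device."""
    return f"[spoken to {nearby_device}] Hey Assistant, {command}"

if __name__ == "__main__":
    own = {"efficiency": 0.4, "accuracy": 0.6}
    nearby = {"efficiency": 0.9, "accuracy": 0.8}
    if should_delegate(own, nearby):
        print(delegate_via_speech("kitchen display", "set a timer for 20 minutes"))
```
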
  • Publication number: 20230169963
    Abstract: Systems and methods for obfuscating and/or omitting potentially sensitive information in a spoken query before providing the query to a secondary automated assistant. A general automated assistant may be invoked by a user, followed by a query. The audio data can be processed to omit and/or obfuscate potentially sensitive information before providing one or more processed queries to secondary automated assistants based on a trust metric associated with each of the secondary automated assistants. The trust metric for a secondary automated assistant is indicative of trust in being provided with sensitive information. In response, the automated assistants can generate responses, which can be filtered to provide a response to the user.
    Type: Application
    Filed: November 30, 2021
    Publication date: June 1, 2023
    Inventors: Matthew Sharifi, Victor Carbune
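
The trust-gated query handling in 20230169963 can be sketched as: redact potentially sensitive spans from the query, then send the unredacted query only to secondary assistants whose trust metric clears a threshold. The regex and threshold below are illustrative assumptions.

```python
import re

SENSITIVE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b|\b\d{16}\b")  # SSN- or card-like numbers

def redact(query: str) -> str:
    """Obfuscate potentially sensitive spans in the query text."""
    return SENSITIVE.sub("[REDACTED]", query)

def queries_for_assistants(query: str, assistants: dict,
                           trust_threshold: float = 0.8) -> dict:
    """Send the full query only to assistants whose trust metric clears the threshold."""
    return {name: (query if trust >= trust_threshold else redact(query))
            for name, trust in assistants.items()}

if __name__ == "__main__":
    out = queries_for_assistants("Pay my card 4111111111111111 bill",
                                 {"trusted_assistant": 0.95, "new_assistant": 0.4})
    for name, q in out.items():
        print(name, "->", q)
```
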
  • Publication number: 20230169980
    Abstract: Techniques are described herein for detecting and handling failures in other automated assistants. A method includes: executing a first automated assistant in an inactive state at least in part on a computing device operated by a user; while in the inactive state, determining, by the first automated assistant, that a second automated assistant failed to fulfill a request of the user; in response to determining that the second automated assistant failed to fulfill the request of the user, the first automated assistant processing cached audio data that captures a spoken utterance of the user comprising the request that the second automated assistant failed to fulfill, or features of the cached audio data, to determine a response that fulfills the request of the user; and providing, by the first automated assistant to the user, the response that fulfills the request of the user.
    Type: Application
    Filed: January 13, 2023
    Publication date: June 1, 2023
    Inventors: Victor Carbune, Matthew Sharifi
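
A minimal sketch of the fallback in 20230169980: the inactive first assistant watches for a failure signal from the second assistant and, on failure, processes the cached request itself. The failure heuristic and the stand-in for processing cached audio are assumptions.

```python
from typing import Optional

def second_assistant_failed(observed_response: Optional[str]) -> bool:
    """Heuristic failure signal: no response, or an explicit 'can't help' style reply."""
    return observed_response is None or "sorry" in observed_response.lower()

def fallback_fulfill(cached_request: str, observed_response: Optional[str]) -> Optional[str]:
    """If the other assistant failed, process the cached request and answer it ourselves."""
    if not second_assistant_failed(observed_response):
        return None                       # stay inactive; the other assistant handled it
    # `cached_request` stands in for features of (or ASR over) the cached audio data.
    return f"Here is an answer to: {cached_request!r}"

if __name__ == "__main__":
    print(fallback_fulfill("set a timer for 10 minutes", "Sorry, I can't do that."))
    print(fallback_fulfill("set a timer for 10 minutes", "Timer set."))  # -> None
```
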
  • Publication number: 20230169976
    Abstract: A method for streaming action fulfillment receives audio data corresponding to an utterance where the utterance includes a query to perform an action that requires performance of a sequence of sub-actions in order to fulfill the action. While receiving the audio data, but before receiving an end of speech condition, the method processes the audio data to generate intermediate automated speech recognition (ASR) results, performs partial query interpretation on the intermediate ASR results to determine whether the intermediate ASR results identify an application type needed to perform the action and, when the intermediate ASR results identify a particular application type, performs a first sub-action in the sequence of sub-actions by launching a first application to execute on the user device where the first application is associated with the particular application type. The method, in response to receiving an end of speech condition, fulfills performance of the action.
    Type: Application
    Filed: January 27, 2023
    Publication date: June 1, 2023
    Applicant: Google LLC
    Inventors: Matthew Sharifi, Victor Carbune
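
The streaming fulfillment in 20230169976 can be sketched as: interpret intermediate ASR results as they arrive, launch the identified application type before the end-of-speech condition, then complete the action once speech ends. Keyword-based interpretation below is an illustrative stand-in for partial query interpretation.

```python
from typing import Iterable, Optional

APP_TYPES = {"text": "messaging_app", "call": "phone_app", "play": "music_app"}

def launch(app_type: str) -> str:
    """Stand-in for launching an application of the identified type on the user device."""
    return f"launched {app_type}"

def streaming_fulfillment(partial_transcripts: Iterable) -> list:
    """Perform partial query interpretation on intermediate ASR results.

    The first sub-action (launching the right application) starts before the
    end-of-speech condition; the final transcript then completes the action.
    """
    actions = []
    launched: Optional[str] = None
    final = ""
    for transcript in partial_transcripts:           # intermediate ASR results
        final = transcript
        for keyword, app_type in APP_TYPES.items():
            if launched is None and keyword in transcript.lower():
                launched = app_type
                actions.append(launch(app_type))      # first sub-action, pre end-of-speech
    actions.append(f"fulfilled: {final!r}")           # end of speech: finish the action
    return actions

if __name__ == "__main__":
    print(streaming_fulfillment(["text", "text mom", "text mom I'm on my way"]))
```
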
  • Publication number: 20230168101
    Abstract: To present a navigation directions preview, a server device receives a request for navigation directions from a starting location to a destination location and generates a set of navigation directions in response to the request. The set of navigation directions includes a set of route segments for traversing from the starting location to the destination location. The server device selects a subset of the route segments based on characteristics of each route segment in the set of route segments. For each selected route segment, the server device provides a preview of the route segment to be displayed on a client device. The preview of the route segment includes panoramic street level imagery depicting the route segment.
    Type: Application
    Filed: August 18, 2020
    Publication date: June 1, 2023
    Inventors: Victor Carbune, Matthew Sharifi
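
A minimal sketch of the segment selection in 20230168101: score each route segment on characteristics that make a street-level preview useful and keep the top few. The specific characteristics and weights are illustrative assumptions.

```python
def select_preview_segments(segments: list, max_previews: int = 3) -> list:
    """Select the subset of route segments worth previewing with street-level imagery.

    The scoring characteristics (complex maneuvers, poor signage, many lanes) are
    illustrative assumptions about what makes a segment preview-worthy.
    """
    def score(segment: dict) -> float:
        return (2.0 * segment.get("turn_complexity", 0.0)
                + segment.get("lane_count", 0) / 4.0
                + 1.5 * (1.0 if segment.get("poor_signage") else 0.0))
    ranked = sorted(segments, key=score, reverse=True)
    return ranked[:max_previews]

if __name__ == "__main__":
    route = [
        {"id": "seg-1", "turn_complexity": 0.1, "lane_count": 2},
        {"id": "seg-2", "turn_complexity": 0.9, "lane_count": 5, "poor_signage": True},
        {"id": "seg-3", "turn_complexity": 0.4, "lane_count": 3},
    ]
    print([s["id"] for s in select_preview_segments(route, max_previews=2)])
```
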
  • Publication number: 20230171258
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for extending application access across devices. In some implementations, an electronic device receives a request to provide access to the electronic device to a particular user that is not registered as a user of the electronic device. The electronic device receives authentication credentials for the particular user. The electronic device provides the authentication credentials to a server system and receives data from the server system that (i) indicates that the providing access to the electronic device in a guest mode is authorized, and (ii) indicates a state of an instance of an application installed on a second device. The electronic device provides access to the electronic device in the guest mode that provides an interface that at least partially recreates the state of the instance of the application installed on the second device.
    Type: Application
    Filed: January 26, 2023
    Publication date: June 1, 2023
    Inventors: Victor Carbune, Sandro Feuz
  • Publication number: 20230158683
    Abstract: Implementations set forth herein relate to a robotic computing device that can perform certain operations, such as communicating between users in a common space, according to certain preferences of the users. When interacting with a particular user, the robotic computing device can perform an operation at a preferred location relative to the particular user based on an express or implied preference of that particular user. For instance, certain types of operations can be performed at a first location within a room, and other types of operations can be performed at a second location within the room. When an operation involves following or guiding a user, parameters for driving the robotic computing device can be selected based on preferences of the user and/or a context in which the robotic computing device is interacting with the user (e.g., whether or not the context indicates some amount of urgency).
    Type: Application
    Filed: November 23, 2021
    Publication date: May 25, 2023
    Inventors: Victor Carbune, Matthew Sharifi
  • Publication number: 20230160710
    Abstract: The present disclosure is directed to interactive voice navigation. In particular, a computing system can provide audio information including one or more navigation instructions to a user via a computing system associated with the user. The computing system can activate an audio sensor associated with the computing system. The computing system can collect, using the audio sensor, audio data associated with the user. The computing system can determine, based on the audio data, whether the audio data is associated with one or more navigation instructions. The computing system can, in accordance with a determination that the audio data is associated with one or more navigation instructions, determine a context-appropriate audio response. The computing system can provide the context-appropriate audio response to the user.
    Type: Application
    Filed: August 12, 2020
    Publication date: May 25, 2023
    Inventors: Victor Carbune, Matthew Sharifi, Blaise Aguera-Arcas
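
The interactive voice navigation loop in 20230160710 can be sketched as: decide whether captured audio relates to the navigation instructions and, only if it does, produce a context-appropriate response. The keyword check below is an illustrative stand-in for that determination.

```python
NAVIGATION_TERMS = {"turn", "exit", "lane", "miles", "left", "right", "reroute"}

def relates_to_navigation(utterance: str) -> bool:
    """Determine whether captured audio (here, its transcript) concerns the navigation instructions."""
    return any(term in utterance.lower() for term in NAVIGATION_TERMS)

def context_appropriate_response(utterance: str, next_instruction: str) -> str:
    """Answer navigation-related questions with the relevant instruction; otherwise stay quiet."""
    if relates_to_navigation(utterance):
        return f"Next: {next_instruction}"
    return ""  # unrelated chatter: no spoken response

if __name__ == "__main__":
    instr = "In 300 feet, turn left onto Main Street."
    print(context_appropriate_response("Wait, which lane should I be in?", instr))
    print(repr(context_appropriate_response("What a nice song!", instr)))
```
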
  • Patent number: 11657817
    Abstract: Implementations set forth herein relate to suggesting an alternate interface modality when an automated assistant and/or a user is not expected to understand a particular interaction between the user and the automated assistant. In some instances, the automated assistant can pre-emptively determine that a forthcoming and/or ongoing interaction between a user and an automated assistant may experience interference. Based on this determination, the automated assistant can provide an indication that the interaction may not be successful and/or that the user should interact with the automated assistant through a different modality. For example, the automated assistant can render a keyboard interface at a portable computing device when the automated assistant determines that an audio interface of the portable computing device is experiencing interference.
    Type: Grant
    Filed: November 20, 2020
    Date of Patent: May 23, 2023
    Assignee: Google LLC
    Inventors: Matthew Sharifi, Victor Carbune
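
A minimal sketch of the modality suggestion in 11657817: estimate whether the audio interface is experiencing interference and, if so, suggest (for example) a keyboard interface instead. The noise and error-rate thresholds are illustrative assumptions.

```python
def audio_interference_likely(noise_db: float, recent_asr_error_rate: float) -> bool:
    """Pre-emptively estimate whether a voice interaction is likely to fail (illustrative thresholds)."""
    return noise_db > 70.0 or recent_asr_error_rate > 0.4

def choose_modality(noise_db: float, recent_asr_error_rate: float) -> str:
    """Suggest a different input modality when the audio interface is experiencing interference."""
    if audio_interference_likely(noise_db, recent_asr_error_rate):
        return "render_keyboard_interface"
    return "continue_voice_interaction"

if __name__ == "__main__":
    print(choose_modality(noise_db=78.0, recent_asr_error_rate=0.1))   # keyboard suggested
    print(choose_modality(noise_db=45.0, recent_asr_error_rate=0.05))  # voice continues
```
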
  • Publication number: 20230156322
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects a particular environment feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without necessarily requiring the user to constantly monitor a viewing window of the camera. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data that is associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera.
    Type: Application
    Filed: January 13, 2023
    Publication date: May 18, 2023
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
  • Publication number: 20230153410
    Abstract: A method for sharing assistant profiles includes receiving, at a profile service, from an assistant service interacting with a user device of a user, a request for the profile service to release personal information associated with the user to the assistant service. The method also includes performing, through the assistant service, a verification process to verify that the user consents to releasing the requested personal information by: instructing the assistant service to prompt the user to recite a unique token prescribed to the user; receiving audio data characterizing a spoken utterance captured by the user device; processing the audio data to determine whether a transcription of the spoken utterance recites the unique token; and, when the transcription of the spoken utterance recites the unique token, releasing, to the assistant service, the requested personal information stored on a centralized data store managed by the profile service.
    Type: Application
    Filed: January 14, 2022
    Publication date: May 18, 2023
    Applicant: Google LLC
    Inventors: Matthew Sharifi, Victor Carbune
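
The consent verification in 20230153410 can be sketched as: the profile service prescribes a unique token, the user recites it, and personal information is released only if the transcription contains that token. Class and field names below are hypothetical.

```python
import secrets

class ProfileService:
    """Centralized store that releases personal info only after token verification."""

    def __init__(self) -> None:
        self._profiles = {"user-1": {"shipping_address": "123 Example St."}}
        self._tokens = {}

    def prescribe_token(self, user_id: str) -> str:
        """Prescribe a unique token the user must recite to confirm consent."""
        token = secrets.token_hex(3)
        self._tokens[user_id] = token
        return token

    def release(self, user_id: str, spoken_transcription: str) -> dict:
        """Release personal info only if the transcription recites the prescribed token."""
        token = self._tokens.get(user_id)
        if token and token in spoken_transcription.lower():
            return self._profiles[user_id]
        return {}

if __name__ == "__main__":
    service = ProfileService()
    token = service.prescribe_token("user-1")
    print(service.release("user-1", f"I consent, the code is {token}"))
    print(service.release("user-1", "I consent"))  # missing token -> nothing released
```
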
  • Patent number: 11651196
    Abstract: Techniques are disclosed that enable automating user interface input by generating a sequence of actions to perform a task utilizing a multi-agent reinforcement learning framework. Various implementations process an intent associated with received user interface input using a holistic reinforcement policy network to select a software reinforcement learning policy network. The sequence of actions can be generated by processing the intent, as well as a sequence of software client state data, using the selected software reinforcement learning policy network. The sequence of actions is utilized to control the software client corresponding to the selected software reinforcement learning policy network.
    Type: Grant
    Filed: March 6, 2019
    Date of Patent: May 16, 2023
    Assignee: Google LLC
    Inventors: Victor Carbune, Thomas Deselaers
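
A minimal sketch of the two-level selection in 11651196: a holistic policy picks which software-specific policy handles the intent, and that policy then maps the intent plus successive client states to a sequence of UI actions. Plain functions stand in for the reinforcement-learned policy networks.

```python
from typing import Callable, Dict, List

# Stand-in "policy networks": each maps (intent, client_state) -> next UI action.
def email_policy(intent: str, state: str) -> str:
    return {"inbox": "open_compose", "compose": "type_body", "body_typed": "click_send"}.get(state, "done")

def calendar_policy(intent: str, state: str) -> str:
    return {"home": "open_new_event", "new_event": "set_time", "time_set": "save_event"}.get(state, "done")

POLICIES: Dict[str, Callable[[str, str], str]] = {"email": email_policy, "calendar": calendar_policy}

def holistic_policy(intent: str) -> str:
    """Stand-in for the holistic policy network: pick which software policy handles the intent."""
    return "calendar" if "meeting" in intent or "event" in intent else "email"

def generate_actions(intent: str, state_sequence: List[str]) -> List[str]:
    """Generate a sequence of UI actions with the selected software policy network."""
    policy = POLICIES[holistic_policy(intent)]
    return [policy(intent, state) for state in state_sequence]

if __name__ == "__main__":
    print(generate_actions("send an email to Bob", ["inbox", "compose", "body_typed"]))
    print(generate_actions("schedule a meeting tomorrow", ["home", "new_event", "time_set"]))
```
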