Patents by Inventor Victor Carbune

Victor Carbune has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250117594
    Abstract: Techniques include using a generative model to make changes to content such that the mechanisms used to guide the user toward a decision become plain to the user and/or the perceived urgency is minimized. Implementations can operate as part of the browser or as a browser extension. Implementations may identify a targeted UI element in browser content (e.g., a web page) and use the generative model to modify the targeted UI element before presenting the browser content to the user. In some implementations, the identification of the targeted UI element may be performed by the generative model.
    Type: Application
    Filed: October 4, 2023
    Publication date: April 10, 2025
    Inventors: Victor Carbune, Ondrej Škopek
  • Patent number: 12260858
    Abstract: Systems and methods for providing dialog data from an initially invoked automated assistant to a subsequently invoked automated assistant. A first automated assistant may be invoked by a user utterance, followed by a dialog with the user that is processed by the first automated assistant. During the dialog, a request to transfer dialog data to a second automated assistant is received. The request may originate with the user, the first automated assistant, and/or the second automated assistant. Once authorized, the first automated assistant provides the previous dialog data to the second automated assistant. The second automated assistant performs one or more actions based on the dialog data.
    Type: Grant
    Filed: November 22, 2021
    Date of Patent: March 25, 2025
    Assignee: GOOGLE LLC
    Inventors: Matthew Sharifi, Victor Carbune
  • Publication number: 20250094521
    Abstract: Disclosed implementations relate to structures that support an on-demand navigational corpus. An example method involves receiving a navigation request from a client device pertaining to an intent, determining seed content associated with the navigation request, utilizing a large foundational model to create a web page that incorporates the seed content based on a navigation model and the intent, and delivering the generated web page for presentation on the client device. The method enables efficient and personalized web page generation based on user intent, enhancing the user experience and facilitating dynamic navigation using raw seed content.
    Type: Application
    Filed: September 18, 2024
    Publication date: March 20, 2025
    Inventors: Victor Carbune, Arash Sadr, Matthew Sharifi
  • Publication number: 20250093164
    Abstract: Training data is obtained. The training data includes (a) route information indicative of a route from a starting location to a destination location, wherein the route comprises a plurality of route segments comprising a first subset of route segments and a second subset of route segments, and (b) route characteristic information descriptive of one or more route characteristics. At least the first subset of route segments and a portion of the route characteristic information associated with the first subset of route segments is processed with a machine-learned semantic routing model to obtain one or more predicted route segments for the second subset of route segments. One or more parameters of the machine-learned semantic routing model are adjusted based on an optimization function that evaluates a difference between the one or more predicted route segments and the second subset of route segments.
    Type: Application
    Filed: September 15, 2023
    Publication date: March 20, 2025
    Inventors: Victor Carbune, Polina Zablotskaia, Matthew Sharifi, Manuel Tragut
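The training loop in the abstract above (process a first subset of route segments, predict the second subset, adjust parameters based on the difference) can be illustrated with a toy stand-in for the machine-learned semantic routing model. Everything below — the transition-count "parameters", the class name, the mismatch-count loss — is an invented simplification, not the patent's method:

```python
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class ToySemanticRouter:
    # "Parameters": learned transition preferences between route segments,
    # keyed by (previous segment, route characteristic).
    transitions: dict = field(default_factory=lambda: defaultdict(lambda: defaultdict(float)))

    def predict_next(self, segment, characteristic):
        choices = self.transitions[(segment, characteristic)]
        return max(choices, key=choices.get) if choices else None

    def training_step(self, route, characteristic, split):
        """Process the first `split` segments, predict the rest, and nudge
        parameters toward the observed (ground-truth) continuation."""
        loss = 0
        for i in range(split, len(route)):
            predicted = self.predict_next(route[i - 1], characteristic)
            actual = route[i]
            if predicted != actual:
                loss += 1  # the "difference" an optimizer would evaluate
            # Parameter adjustment: reinforce the observed transition.
            self.transitions[(route[i - 1], characteristic)][actual] += 1.0
        return loss

router = ToySemanticRouter()
route = ["A", "B", "C", "D"]
# First pass: the model has no parameters yet, so every prediction misses.
first_loss = router.training_step(route, "scenic", split=1)
# Second pass over the same route: the learned transitions now match.
second_loss = router.training_step(route, "scenic", split=1)
print(first_loss, second_loss)
```

A real implementation would use gradient descent on a neural sequence model rather than transition counts; the sketch only shows the data split and the predict-compare-adjust cycle.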
  • Publication number: 20250095657
    Abstract: Implementations set forth herein relate to an automated assistant that can solicit other devices for data that can assist with user authentication. User authentication can be streamlined for certain requests by removing a requirement that all authentication be performed at a single device and/or by a single application. For instance, the automated assistant can rely on data from other devices, which can indicate a degree to which a user is predicted to be present at a location of an assistant-enabled device. The automated assistant can process this data to make a determination regarding whether the user should be authenticated in response to an assistant input and/or pre-emptively before the user provides an assistant input. In some implementations, the automated assistant can perform one or more factors of authentication and utilize the data to verify the user in lieu of performing one or more other factors of authentication.
    Type: Application
    Filed: November 25, 2024
    Publication date: March 20, 2025
    Inventors: Matthew Sharifi, Victor Carbune
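The idea in the entry above — aggregating presence signals from other devices and using them in lieu of an extra authentication factor — can be sketched as follows. The threshold, signal names, and the simple averaging are all illustrative assumptions:

```python
PRESENCE_THRESHOLD = 0.8  # assumed cutoff, not from the patent

def presence_score(device_signals):
    """Average the per-device confidence that the user is present at the
    assistant-enabled device's location."""
    if not device_signals:
        return 0.0
    return sum(device_signals.values()) / len(device_signals)

def authenticate(voice_match_ok, device_signals):
    """A voice match is always required; the second factor (e.g. a PIN)
    is waived when nearby devices strongly indicate the user's presence."""
    if not voice_match_ok:
        return "denied"
    if presence_score(device_signals) >= PRESENCE_THRESHOLD:
        return "authenticated"  # presence data replaces the second factor
    return "second_factor_required"

print(authenticate(True, {"phone": 0.9, "watch": 0.85}))  # strong presence
print(authenticate(True, {"phone": 0.3}))                 # weak presence
```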
  • Patent number: 12254038
    Abstract: Implementations described herein relate to receiving user input directed to an automated assistant, processing the user input to determine whether data from a server and/or third-party application is needed to perform certain fulfillment of an assistant command included in the user input, and generating a prompt that requests a user consent to transmitting of a request to the server and/or the third-party application to obtain the data needed to perform the certain fulfillment. In implementations where the user consents, the data can be obtained and utilized to perform the certain fulfillment. In implementations where the user does not consent, client data can be generated locally at a client device and utilized to perform alternate fulfillment of the assistant command. In various implementations, the request transmitted to the server and/or third-party application can be modified based on ambient noise captured when the user input is received.
    Type: Grant
    Filed: December 13, 2023
    Date of Patent: March 18, 2025
    Assignee: GOOGLE LLC
    Inventors: Matthew Sharifi, Victor Carbune
  • Patent number: 12254885
    Abstract: Techniques are described herein for detecting and handling failures in other automated assistants. A method includes: executing a first automated assistant in an inactive state at least in part on a computing device operated by a user; while in the inactive state, determining, by the first automated assistant, that a second automated assistant failed to fulfill a request of the user; in response to determining that the second automated assistant failed to fulfill the request of the user, the first automated assistant processing cached audio data that captures a spoken utterance of the user comprising the request that the second automated assistant failed to fulfill, or features of the cached audio data, to determine a response that fulfills the request of the user; and providing, by the first automated assistant to the user, the response that fulfills the request of the user.
    Type: Grant
    Filed: January 13, 2023
    Date of Patent: March 18, 2025
    Assignee: GOOGLE LLC
    Inventors: Victor Carbune, Matthew Sharifi
  • Publication number: 20250087214
    Abstract: An overall endpointing measure can be generated based on an audio-based endpointing measure and (1) an accelerometer-based endpointing measure and/or (2) a gaze-based endpointing measure. The overall endpointing measure can be used in determining whether a candidate endpoint is an actual endpoint. Various implementations include generating the audio-based endpointing measure by processing an audio data stream, capturing a spoken utterance of a user, using an audio model. Various implementations additionally or alternatively include generating the accelerometer-based endpointing measure by processing a stream of accelerometer data using an accelerometer model. Various implementations additionally or alternatively include processing an image data stream using a gaze model to generate the gaze-based endpointing measure.
    Type: Application
    Filed: November 25, 2024
    Publication date: March 13, 2025
    Inventors: Matthew Sharifi, Victor Carbune
  • Publication number: 20250077599
    Abstract: The present disclosure provides a computing device and method for providing personal specific information based on semantic queries. The semantic queries may be input in a natural language form, and may include user specific context, such as by referring to prior or future events related to a place the user is searching for. With the user's authorization, data associated with prior or planned activities of the user may be accessed and information from the accessed data may be identified, wherein the information is correlated with the user specific context. One or more query results are determined based on the identified information and provided for output to the user.
    Type: Application
    Filed: November 18, 2024
    Publication date: March 6, 2025
    Inventors: Victor Carbune, Mathew Sharifi
  • Patent number: 12242472
    Abstract: Methods, systems, and computer readable media related to generating a combined search query based on search parameters of a current search query of a user and search parameters of one or more previously submitted search quer(ies) of the user that are determined to be of the same line of inquiry as the current search query. Two or more search queries may be determined to share a line of inquiry when it is determined that they are within a threshold level of semantic similarity to one another. Once a shared line of inquiry has been identified and a combined search query generated, users may interact with the search parameters and/or the search results to update the search parameters of the combined search query.
    Type: Grant
    Filed: July 31, 2023
    Date of Patent: March 4, 2025
    Assignee: GOOGLE LLC
    Inventors: Matthew Sharifi, Victor Carbune
  • Publication number: 20250069617
    Abstract: A method includes receiving a natural language query specifying an action for an assistant interface to perform and selecting one or more business large language models (LLMs) for the assistant interface to interact with to fulfill performance of the action. For each business LLM, method also includes accessing an adapter module to structure the natural language query into a respective prompt specifically formulated for the corresponding business LLM, issuing, for input to the corresponding business LLM, the respective prompt, and receiving corresponding response content from the corresponding business LLM that conveys details regarding performance of a corresponding portion of the action. The method also includes presenting, for output from the user device, presentation content based on the corresponding response content received from each corresponding business LLM.
    Type: Application
    Filed: August 22, 2023
    Publication date: February 27, 2025
    Applicant: Google LLC
    Inventors: Victor Carbune, Matthew Sharifi
  • Patent number: 12236195
    Abstract: A computing system can include one or more machine-learned models configured to receive context data that describes one or more entities to be named. In response to receipt of the context data, the machine-learned model(s) can generate output data that describes one or more names for the entity or entities described by the context data. The computing system can be configured to perform operations including inputting the context data into the machine-learned model(s). The operations can include receiving, as an output of the machine-learned model(s), the output data that describes the name(s) for the entity or entities described by the context data. The operations can include storing at least one name described by the output data.
    Type: Grant
    Filed: February 9, 2023
    Date of Patent: February 25, 2025
    Assignee: GOOGLE LLC
    Inventors: Victor Carbune, Alexandru-Marian Damian
  • Publication number: 20250061892
    Abstract: Generating audio tracks is provided. The system selects a digital component object having a visual output format. The system determines to convert the digital component object into an audio output format. The system generates text for the digital component object. The system selects, based on context of the digital component object, a digital voice to render the text. The system constructs a baseline audio track of the digital component object with the text rendered by the digital voice. The system generates, based on the digital component object, non-spoken audio cues. The system combines the non-spoken audio cues with the baseline audio form of the digital component object to generate an audio track of the digital component object. The system provides the audio track of the digital component object to the computing device for output via a speaker of the computing device.
    Type: Application
    Filed: November 5, 2024
    Publication date: February 20, 2025
    Applicant: Google LLC
    Inventors: Matthew Sharifi, Victor Carbune
  • Patent number: 12230252
    Abstract: Generating audio tracks is provided. The system selects a digital component object having a visual output format. The system determines to convert the digital component object into an audio output format. The system generates text for the digital component object. The system selects, based on context of the digital component object, a digital voice to render the text. The system constructs a baseline audio track of the digital component object with the text rendered by the digital voice. The system generates, based on the digital component object, non-spoken audio cues. The system combines the non-spoken audio cues with the baseline audio form of the digital component object to generate an audio track of the digital component object. The system provides the audio track of the digital component object to the computing device for output via a speaker of the computing device.
    Type: Grant
    Filed: June 9, 2020
    Date of Patent: February 18, 2025
    Assignee: GOOGLE LLC
    Inventors: Matthew Sharifi, Victor Carbune
  • Publication number: 20250054495
    Abstract: Implementations set forth herein relate to an automated assistant that can selectively communicate audio data to a recipient when a user solicits the automated assistant to send a text message to the recipient. The audio data can include a snippet of audio that characterizes content of the text message, and the automated assistant can communicate the audio data to the recipient when score data for a speech recognition hypothesis does not satisfy a confidence threshold. The score data can correspond to an entirety of content of a text message and/or speech recognition hypothesis, and/or less than an entirety of the content. A recipient device can optionally re-process the audio data using a model that is associated with the recipient device. This can provide more accurate transcripts in some instances, thereby improving accuracy of communications and decreasing a number of corrective messages sent between users.
    Type: Application
    Filed: August 9, 2023
    Publication date: February 13, 2025
    Inventors: Victor Carbune, Matthew Sharifi
  • Publication number: 20250053596
    Abstract: Implementations can identify a given assistant device from among a plurality of assistant devices in an ecosystem, obtain device-specific signal(s) that are generated by the given assistant device, process the device-specific signal(s) to generate candidate semantic label(s) for the given assistant device, select a given semantic label for the given semantic device from among the candidate semantic label(s), and assigning, in a device topology representation of the ecosystem, the given semantic label to the given assistant device. Implementations can optionally receive a spoken utterance that includes a query or command at the assistant device(s), determine a semantic property of the query or command matches the given semantic label to the given assistant device, and cause the given assistant device to satisfy the query or command.
    Type: Application
    Filed: October 25, 2024
    Publication date: February 13, 2025
    Inventors: Matthew Sharifi, Victor Carbune
  • Patent number: 12223960
    Abstract: Implementations relate to generating a proficiency measure, and utilizing the proficiency measure to adapt one or more automated assistant functionalities. The generated proficiency measure is for a particular class of automated assistant actions, and is specific to an assistant device and/or is specific to a particular user. A generated proficiency measure for a class can reflect a degree of proficiency, of a user and/or of an assistant device, for that class. Various automated assistant functionalities can be adapted, for a particular class, responsive to determining the proficiency measure satisfies a threshold, or fails to satisfy the threshold (or an alternate threshold). The adaptation(s) can make automated assistant processing more efficient and/or improve (e.g., shorten the duration of) user-assistant interaction(s).
    Type: Grant
    Filed: March 18, 2024
    Date of Patent: February 11, 2025
    Assignee: GOOGLE LLC
    Inventors: Matthew Sharifi, Victor Carbune
  • Patent number: 12223410
    Abstract: To select a lane in a multi-lane road segment for a vehicle travelling on the road segment, a system identifies, in multiple lanes and in a region ahead of the vehicle, another vehicle defining a target; the system applies an optical flow technique to track the target during a period of time, to generate an estimate of how fast traffic moves; and the system applies the estimate to machine learning (ML) model to generate a recommendation which one of the plurality of lanes the vehicle is to choose.
    Type: Grant
    Filed: February 27, 2024
    Date of Patent: February 11, 2025
    Assignee: GOOGLE LLC
    Inventors: Thomas Deselaers, Victor Carbune
  • Publication number: 20250045326
    Abstract: A method for handling contradictory queries on a shared device includes receiving a first query issued by a first user, the first query specifying a first long-standing operation for a digital assistant to perform, and while the digital assistant is performing the first long-standing operation, receiving a second query, the second query specifying a second long-standing operation for the digital assistant to perform. The method also includes determining that the second query was issued by another user different than the first user and determining, using a query resolver, that performing the second long-standing operation would conflict with the first long-standing operation. The method further includes identifying one or more compromise operations for the digital assistant to perform, and instructing the digital assistant to perform a selected compromise operation among the identified one or more compromise operations.
    Type: Application
    Filed: October 18, 2024
    Publication date: February 6, 2025
    Applicant: Google LLC
    Inventors: Matthew Sharifi, Victor Carbune
  • Publication number: 20250047930
    Abstract: Voice-based interaction with video content being presented by a media player application is enhanced through the use of an automated assistant capable of identifying when a spoken utterance by a user is a request to playback a specific scene in the video content. A query identified in a spoken utterance may be used to access stored scene metadata associated with video content being presented in the vicinity of the user to identify one or more locations in the video content that correspond to the query, such that a media control command may be issued to the media player application to cause the media player application to seek to a particular location in the video content that satisfies the query.
    Type: Application
    Filed: October 22, 2024
    Publication date: February 6, 2025
    Inventors: Matthew Sharifi, Victor Carbune