Patents by Inventor Martin Baeuml

Martin Baeuml has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12223944
    Abstract: Implementations relate to dynamically adapting a given assistant output based on a given persona, from among a plurality of disparate personas, assigned to an automated assistant. In some implementations, the given assistant output can be generated and subsequently adapted based on the given persona assigned to the automated assistant. In other implementations, the given assistant output can be generated specific to the given persona and without having to subsequently adapt the given assistant output to the given persona. Notably, the given assistant output can include a stream of textual content to be synthesized for audible presentation to the user, and a stream of visual cues utilized in controlling a display of a client device and/or in controlling a visualized representation of the automated assistant. Various implementations utilize large language models (LLMs), or output previously generated utilizing LLMs, to reflect the given persona in the given assistant output.
    Type: Grant
    Filed: May 13, 2022
    Date of Patent: February 11, 2025
    Assignee: GOOGLE LLC
    Inventors: Martin Baeuml, Thushan Amarasiriwardena, Roberto Pieraccini, Gianluca Martini
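The abstract above describes two paths: generating a generic assistant output and then adapting it to the assigned persona, or generating output already specific to the persona. A minimal sketch of both paths, with all names (`AssistantOutput`, `adapt_output`, `generate_for_persona`, the persona table) invented for illustration and canned strings standing in for LLM calls:

```python
from dataclasses import dataclass, field

@dataclass
class AssistantOutput:
    text: str                              # textual content to be synthesized for audible presentation
    visual_cues: list = field(default_factory=list)  # cues controlling the display / assistant visualization

# Toy persona table; a real system would condition an LLM on the persona instead.
PERSONAS = {
    "pirate": {"prefix": "Arr! ", "cue": "wave_flag"},
    "butler": {"prefix": "Certainly. ", "cue": "bow"},
}

def adapt_output(base: AssistantOutput, persona: str) -> AssistantOutput:
    """Path 1: adapt an already-generated output to the assigned persona."""
    style = PERSONAS[persona]
    return AssistantOutput(
        text=style["prefix"] + base.text,
        visual_cues=base.visual_cues + [style["cue"]],
    )

def generate_for_persona(query: str, persona: str) -> AssistantOutput:
    """Path 2: generate output already specific to the persona, with no
    subsequent adaptation step."""
    style = PERSONAS[persona]
    return AssistantOutput(
        text=style["prefix"] + f"Here is what I found about {query}.",
        visual_cues=[style["cue"]],
    )
```

Note that both paths return the same structure (text plus visual cues), so downstream rendering does not need to know which path produced the output.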
  • Publication number: 20250037711
    Abstract: As part of a dialog session between a user and an automated assistant, implementations can receive a stream of audio data that captures a spoken utterance including an assistant query, determine, based on processing the stream of audio data, a set of assistant outputs that are each predicted to be responsive to the assistant query, process, using large language model (LLM) output(s), the assistant outputs and context of the dialog session to generate a set of modified assistant outputs, and cause given modified assistant output, from among the set of modified assistant outputs, to be provided for presentation to the user in response to the spoken utterance. In some implementations, the LLM output(s) can be generated in an offline manner for subsequent use in an online manner. In additional or alternative implementations, the LLM output(s) can be generated in an online manner when the spoken utterance is received.
    Type: Application
    Filed: October 10, 2024
    Publication date: January 30, 2025
    Inventors: Martin Baeuml, Thushan Amarasiriwardena, Roberto Pieraccini, Vikram Sridar, Daniel De Freitas Adiwardana, Noam M. Shazeer, Quoc Le
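The pipeline in the abstract — candidate assistant outputs are rewritten using LLM output plus dialog context, then a given modified output is chosen for presentation — can be sketched as follows. The rewriter and the scoring function are trivial stand-ins for LLM calls; all names are hypothetical:

```python
def modify_assistant_outputs(candidates, context, rewrite):
    """Rewrite each candidate output using LLM output and dialog context,
    producing the set of modified assistant outputs."""
    return [rewrite(c, context) for c in candidates]

def choose_output(modified, score):
    """Select the given modified assistant output to present to the user."""
    return max(modified, key=score)

# Stand-in for an LLM rewrite conditioned on the dialog context.
def toy_rewrite(output, context):
    return f"{output} (By the way, this relates to {context}.)"

modified = modify_assistant_outputs(
    ["It is 72F outside.", "Expect sun all day."],
    context="your hike",
    rewrite=toy_rewrite,
)
best = choose_output(modified, score=len)  # length as a toy quality score
```

The offline/online distinction in the abstract would correspond to whether `toy_rewrite` consults precomputed LLM output or issues a fresh LLM call per utterance.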
  • Patent number: 12183342
    Abstract: Methods, apparatus, and computer readable media are described related to automated assistants that proactively incorporate, into human-to-computer dialog sessions, unsolicited content of potential interest to a user. In various implementations, based on content of an existing human-to-computer dialog session between a user and an automated assistant, an entity mentioned by the user or automated assistant may be identified. Fact(s) related to the entity or to another entity that is related to the entity may be identified based on entity data contained in database(s). For each of the fact(s), a corresponding measure of potential interest to the user may be determined. Unsolicited natural language content may then be generated that includes one or more of the facts selected based on the corresponding measure(s) of potential interest. The automated assistant may then incorporate the unsolicited content into the existing human-to-computer dialog session or a subsequent human-to-computer dialog session.
    Type: Grant
    Filed: August 4, 2023
    Date of Patent: December 31, 2024
    Assignee: GOOGLE LLC
    Inventors: Vladimir Vuskovic, Stephan Wenger, Zineb Ait Bahajji, Martin Baeuml, Alexandru Dovlecel, Gleb Skobeltsyn
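The selection step described above — facts about a mentioned entity are scored by a measure of potential interest, and a qualifying fact becomes unsolicited natural language content — can be sketched minimally. The fact table, threshold, and function names are invented for illustration:

```python
# Toy entity database: (fact, interest measure) pairs per entity.
ENTITY_FACTS = {
    "Mount Rainier": [
        ("is the tallest peak in Washington", 0.9),
        ("was first summited in 1870", 0.4),
    ],
}

def unsolicited_content(entity, min_interest=0.5):
    """Return unsolicited NL content for the entity if some fact's measure
    of potential interest clears the threshold; otherwise None."""
    facts = [fact for fact, interest in ENTITY_FACTS.get(entity, [])
             if interest >= min_interest]
    if not facts:
        return None
    return f"By the way, {entity} {facts[0]}."
```

In the patent's framing, the result would then be incorporated into the current dialog session or held for a subsequent one.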
  • Patent number: 12148421
    Abstract: As part of a dialog session between a user and an automated assistant, implementations can receive a stream of audio data that captures a spoken utterance including an assistant query, determine, based on processing the stream of audio data, a set of assistant outputs that are each predicted to be responsive to the assistant query, process, using large language model (LLM) output(s), the assistant outputs and context of the dialog session to generate a set of modified assistant outputs, and cause given modified assistant output, from among the set of modified assistant outputs, to be provided for presentation to the user in response to the spoken utterance. In some implementations, the LLM output(s) can be generated in an offline manner for subsequent use in an online manner. In additional or alternative implementations, the LLM output(s) can be generated in an online manner when the spoken utterance is received.
    Type: Grant
    Filed: November 22, 2021
    Date of Patent: November 19, 2024
    Assignee: GOOGLE LLC
    Inventors: Martin Baeuml, Thushan Amarasiriwardena, Roberto Pieraccini, Vikram Sridar, Daniel De Freitas Adiwardana, Noam M. Shazeer, Quoc Le
  • Publication number: 20240377852
    Abstract: A pedal unit for controlling a vehicle function has a modularly constructed pedal head that includes a pad support and a replaceable pedal pad for receiving the actuation force of a driver's foot; the pedal head is connected to a housing cover that is movable by a small stroke along a vertical direction of the pedal unit. The pedal pad is releasably connected to the pad support. A fastening arrangement is likewise configured to connect the pad support to the movable housing cover in a releasable and form-fit and/or friction-fit manner. A method for assembling such a pedal unit and a method for disassembling a pedal head from such a pedal unit are also disclosed.
    Type: Application
    Filed: May 3, 2024
    Publication date: November 14, 2024
    Inventors: Guenter Escher, Eduard Maiterth, Juergen Kissner, Manfred Fischer, Martin Baeuml, Matthias Maidel, Timo Knecht
  • Publication number: 20240311405
    Abstract: Implementations disclose selecting, in response to receiving a request and from among multiple candidate generative models (e.g., multiple candidate large language models (LLMs)) with differing computational efficiencies, a particular generative model to utilize in generating a response to the request. Those implementations reduce latency and/or conserve computational resource(s) through selection, for various requests, of a more computationally efficient generative model for utilization in lieu of a less computationally efficient generative model. Further, those implementations seek to achieve such benefits, through utilization of more computationally efficient generative models, while also still selectively utilizing less computationally efficient generative models for certain requests to mitigate occurrences of a generated response being inaccurate and/or under-specified.
    Type: Application
    Filed: June 19, 2023
    Publication date: September 19, 2024
    Inventors: Seungyeon Kim, Ankit Singh Rawat, Wittawat Jitkrittum, Hari Narasimhan, Sashank Reddi, Neha Gupta, Srinadh Bhojanapalli, Aditya Menon, Manzil Zaheer, Tal Schuster, Sanjiv Kumar, Toby Boyd, Zhifeng Chen, Emanuel Taropa, Vikram Kasivajhula, Trevor Strohman, Martin Baeuml, Leif Schelin, Yanping Huang
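The routing idea in the abstract — send a request to a more computationally efficient model when it suffices, and fall back to a less efficient but more capable model for requests that would otherwise yield inaccurate or under-specified responses — can be sketched with a toy difficulty estimator. Model names, markers, and the threshold are all hypothetical:

```python
def predict_difficulty(request: str) -> float:
    """Toy difficulty estimate; a production router might be a learned
    classifier over the request and its context."""
    hard_markers = ("prove", "derive", "compare", "multi-step")
    return 0.9 if any(m in request.lower() for m in hard_markers) else 0.2

def select_model(request: str, threshold: float = 0.5) -> str:
    """Route easy requests to the efficient model, hard ones to the large one."""
    return "large-llm" if predict_difficulty(request) > threshold else "small-llm"
```

Raising `threshold` trades response quality for latency and compute, which mirrors the cost/quality balance the abstract describes.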
  • Publication number: 20240311575
    Abstract: Implementations relate to dialog management of a large language model (LLM) utilized in generating natural language (NL) output during an ongoing dialog. Processor(s) of a system can: receive NL based input as part of the ongoing dialog, generate NL based output utilizing the LLM, and cause the NL based output to be rendered. Further, the processor(s) can receive subsequent NL based input as part of the ongoing dialog. In some implementations, the processor(s) can determine whether to modify a corresponding dialog context in generating subsequent NL based output, and modify the corresponding dialog context accordingly. For example, the processor(s) can restrict the corresponding dialog context, or supplant the corresponding dialog context with a corresponding curated dialog context. In additional or alternative implementations, the processor(s) can modify a corresponding NL based output threshold utilized in generating the subsequent NL based response to ensure the resulting NL based output is desirable.
    Type: Application
    Filed: March 17, 2023
    Publication date: September 19, 2024
    Inventors: Martin Baeuml, Alexander Bailey, Jonas Bragagnolo, Florent D'Halluin, Trevor Strohman
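The two context-management moves named in the abstract — restricting the corresponding dialog context, or supplanting it with a curated dialog context — reduce, in the simplest reading, to list operations on the turn history before the next LLM call. A minimal sketch under that assumption, with invented names:

```python
def restrict_context(history, keep_last=4):
    """Restrict the dialog context by keeping only the most recent turns."""
    return history[-keep_last:]

def supplant_context(history, curated):
    """Supplant the accumulated dialog context with a curated one,
    discarding the prior turns entirely."""
    return list(curated)
```

The abstract also mentions adjusting an output threshold when generating the subsequent response; that would be a scalar knob on the decoder rather than an operation on the history, so it is omitted here.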
  • Publication number: 20240311402
    Abstract: Implementations relate to reducing latency in generating and/or rendering natural language (NL) output generated using a large language model (LLM). Processor(s) of a system can: receive NL based input associated with a client device, and generate the NL based output utilizing the LLM. The NL based output can be a stream of NL based output in that it includes a plurality of segments, and is generated on a segment-by-segment basis. In some implementations, a first segment of the stream of NL based output is selected for inclusion in the stream of NL based output as a second segment (and any subsequent segment) is being generated to reduce latency in evaluating the NL based output as a whole prior to rendering thereof. In some versions of those implementations, the first segment is rendered as the second segment (and any subsequent segment) is being generated to further reduce latency in rendering thereof.
    Type: Application
    Filed: April 19, 2023
    Publication date: September 19, 2024
    Inventors: Martin Baeuml, Yanping Huang, Wenhao Jia, Chang Lan, Yuanzhong Xu, Junwhan Ahn, Alexander Bailey, Leif Schelin, Trevor Strohman, Emanuel Taropa, Sidharth Mudgal, Yanyan Zheng, Zhifeng Chen, Ahmad Beirami
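Segment-by-segment streaming as described above — each segment is selected for inclusion (and can be rendered) while later segments are still being generated, instead of evaluating the whole output before rendering — can be sketched with a generator. The acceptance check is an illustrative stand-in for whatever per-segment evaluation a real system applies:

```python
def stream_segments(segments, is_acceptable):
    """Yield segments for rendering one at a time, as they become
    available; stop the stream if a segment fails evaluation."""
    for seg in segments:
        if not is_acceptable(seg):
            break
        yield seg

# Render-as-you-go: the first segment is consumed before later ones exist.
rendered = list(stream_segments(
    ["Sure, ", "here are ", "three tips."],
    is_acceptable=lambda s: "badword" not in s,
))
```

Because `stream_segments` is lazy, a caller iterating it renders the first segment while the source is still producing the second, which is exactly the latency win the abstract claims.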
  • Publication number: 20240304184
    Abstract: As part of an ongoing dialog between a user and an automated assistant, processor(s) can receive a natural language (NL) based input from the user during a turn of the ongoing dialog, obtain style signal(s) for the turn, and determine, based on the style signal(s), a NL based response style that is not specified in the NL based input. Further, the processor(s) can process, using a large language model (LLM), the NL based input and a NL based response style tag for the NL based response style to generate LLM output, determine, based on the LLM output, a NL based response in the NL based response style, and cause the NL based response to be rendered. In some implementations, a LLM behavior controller is utilized to determine the NL based response style, whereas in other implementations, the LLM is fine-tuned to determine the NL based response style.
    Type: Application
    Filed: March 10, 2023
    Publication date: September 12, 2024
    Inventors: Roberto Pieraccini, Wangqing Yuan, Martin Baeuml
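The style mechanism above — style signals for the turn determine a response-style tag that is processed alongside the NL input, steering the reply's style even though the user never named one — can be sketched as a tag prepended to the LLM input. The signal names and tag strings are invented:

```python
def infer_style(signals):
    """Map turn-level style signals to an NL based response style tag."""
    if signals.get("driving"):
        return "<style:terse>"
    if signals.get("kids_present"):
        return "<style:playful>"
    return "<style:neutral>"

def build_llm_input(nl_input, signals):
    """Combine the style tag with the NL based input for the LLM."""
    return f"{infer_style(signals)} {nl_input}"
```

In the abstract's terms, `infer_style` plays the role of the LLM behavior controller; the fine-tuned alternative would fold this mapping into the model itself.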
  • Publication number: 20240166174
    Abstract: A sensor arrangement for an actuation device of a motor vehicle is disclosed which has a force transfer element and which can be assigned or is assigned in particular to an actuating surface of the actuation device for transferring an actuating force applied to the actuation device, in particular on the actuating surface. The sensor arrangement further has a measuring head arranged in a printed circuit board. The measuring head has a measuring membrane. The measuring membrane has a force sensor element at one end and is assigned to the force transfer element at the other end, in particular abutting the force transfer element.
    Type: Application
    Filed: November 16, 2023
    Publication date: May 23, 2024
    Inventors: Andreas Baumgartner, Bernd Lutz, Guenter Escher, Juergen Kissner, Manfred Fischer, Martin Baeuml, Masaya Eto, Timo Knecht
  • Publication number: 20240166044
    Abstract: An actuation device for a motor vehicle, in particular for specifying a braking and/or acceleration request, includes a first housing portion which is slidably mounted on a second housing portion in a vertical extension of the second housing portion. The first housing portion includes an actuating surface on a top surface facing away from the second housing portion, or a cover having the actuating surface is arranged on the top surface. The first housing portion includes a bar-shaped projection on an inner side facing the second housing portion. Further, the second housing portion includes a guide receptacle for the projection for the guided, slidable mounting of the first housing portion on the second housing portion.
    Type: Application
    Filed: November 13, 2023
    Publication date: May 23, 2024
    Inventors: Martin Baeuml, Guenter Escher, Andreas Baumgartner, Juergen Kissner, Manfred Fischer, Timo Knecht
  • Publication number: 20240166173
    Abstract: An actuation device for a motor vehicle, in particular for specifying a braking and/or acceleration request, with a first and a second housing portion is disclosed. The first housing portion includes an actuating surface on a top surface facing away from the second housing portion, or a cover with the actuating surface is arranged on the top surface. The housing portions are connected to each other by a hinge. An axis of rotation of the hinge is arranged laterally spaced apart from the actuating surface.
    Type: Application
    Filed: November 15, 2023
    Publication date: May 23, 2024
    Inventors: Guenter Escher, Bernd Lutz, Eduard Maiterth, Juergen Kissner, Manfred Fischer, Martin Baeuml, Martin Winkler, Stephan Knackert, Timo Knecht
  • Patent number: 11929069
    Abstract: Methods, apparatus, and computer readable media are described related to automated assistants that proactively incorporate, into human-to-computer dialog sessions, unsolicited content of potential interest to a user. In various implementations, based on content of an existing human-to-computer dialog session between a user and an automated assistant, an entity mentioned by the user or automated assistant may be identified. Fact(s) related to the entity or to another entity that is related to the entity may be identified based on entity data contained in database(s). For each of the fact(s), a corresponding measure of potential interest to the user may be determined. Unsolicited natural language content may then be generated that includes one or more of the facts selected based on the corresponding measure(s) of potential interest. The automated assistant may then incorporate the unsolicited content into the existing human-to-computer dialog session or a subsequent human-to-computer dialog session.
    Type: Grant
    Filed: August 25, 2021
    Date of Patent: March 12, 2024
    Assignee: GOOGLE LLC
    Inventors: Vladimir Vuskovic, Stephan Wenger, Zineb Ait Bahajji, Martin Baeuml, Alexandru Dovlecel, Gleb Skobeltsyn
  • Patent number: 11887592
    Abstract: Methods, apparatus, and computer readable media are described related to automated assistants that proactively incorporate, into human-to-computer dialog sessions, unsolicited content of potential interest to a user. In various implementations, based on content of an existing human-to-computer dialog session between a user and an automated assistant, an entity mentioned by the user or automated assistant may be identified. Fact(s) related to the entity or to another entity that is related to the entity may be identified based on entity data contained in database(s). For each of the fact(s), a corresponding measure of potential interest to the user may be determined. Unsolicited natural language content may then be generated that includes one or more of the facts selected based on the corresponding measure(s) of potential interest. The automated assistant may then incorporate the unsolicited content into the existing human-to-computer dialog session or a subsequent human-to-computer dialog session.
    Type: Grant
    Filed: August 25, 2021
    Date of Patent: January 30, 2024
    Assignee: GOOGLE LLC
    Inventors: Vladimir Vuskovic, Stephan Wenger, Zineb Ait Bahajji, Martin Baeuml, Alexandru Dovlecel, Gleb Skobeltsyn
  • Publication number: 20230377571
    Abstract: Methods, apparatus, and computer readable media are described related to automated assistants that proactively incorporate, into human-to-computer dialog sessions, unsolicited content of potential interest to a user. In various implementations, based on content of an existing human-to-computer dialog session between a user and an automated assistant, an entity mentioned by the user or automated assistant may be identified. Fact(s) related to the entity or to another entity that is related to the entity may be identified based on entity data contained in database(s). For each of the fact(s), a corresponding measure of potential interest to the user may be determined. Unsolicited natural language content may then be generated that includes one or more of the facts selected based on the corresponding measure(s) of potential interest. The automated assistant may then incorporate the unsolicited content into the existing human-to-computer dialog session or a subsequent human-to-computer dialog session.
    Type: Application
    Filed: August 4, 2023
    Publication date: November 23, 2023
    Inventors: Vladimir Vuskovic, Stephan Wenger, Zineb Ait Bahajji, Martin Baeuml, Alexandru Dovlecel, Gleb Skobeltsyn
  • Publication number: 20230343324
    Abstract: Implementations relate to dynamically adapting a given assistant output based on a given persona, from among a plurality of disparate personas, assigned to an automated assistant. In some implementations, the given assistant output can be generated and subsequently adapted based on the given persona assigned to the automated assistant. In other implementations, the given assistant output can be generated specific to the given persona and without having to subsequently adapt the given assistant output to the given persona. Notably, the given assistant output can include a stream of textual content to be synthesized for audible presentation to the user, and a stream of visual cues utilized in controlling a display of a client device and/or in controlling a visualized representation of the automated assistant. Various implementations utilize large language models (LLMs), or output previously generated utilizing LLMs, to reflect the given persona in the given assistant output.
    Type: Application
    Filed: May 13, 2022
    Publication date: October 26, 2023
    Inventors: Martin Baeuml, Thushan Amarasiriwardena, Roberto Pieraccini, Gianluca Martini
  • Publication number: 20230343323
    Abstract: Implementations relate to dynamically adapting a given assistant output based on a given persona, from among a plurality of disparate personas, assigned to an automated assistant. In some implementations, the given assistant output can be generated and subsequently adapted based on the given persona assigned to the automated assistant. In other implementations, the given assistant output can be generated specific to the given persona and without having to subsequently adapt the given assistant output to the given persona. Notably, the given assistant output can include a stream of textual content to be synthesized for audible presentation to the user, and a stream of visual cues utilized in controlling a display of a client device and/or in controlling a visualized representation of the automated assistant. Various implementations utilize large language models (LLMs), or output previously generated utilizing LLMs, to reflect the given persona in the given assistant output.
    Type: Application
    Filed: April 21, 2022
    Publication date: October 26, 2023
    Inventors: Martin Baeuml, Thushan Amarasiriwardena, Roberto Pieraccini, Gianluca Martini
  • Publication number: 20230274729
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for speech recognition. One of the methods includes receiving first audio data corresponding to an utterance; obtaining a first transcription of the first audio data; receiving data indicating (i) a selection of one or more terms of the first transcription and (ii) one or more replacement terms; determining that one or more of the replacement terms are classified as a correction of one or more of the selected terms; in response to determining that the one or more of the replacement terms are classified as a correction of the one or more of the selected terms, obtaining a first portion of the first audio data that corresponds to one or more terms of the first transcription; and using the first portion of the first audio data that is associated with the one or more terms of the first transcription to train an acoustic model for recognizing the one or more replacement terms.
    Type: Application
    Filed: May 4, 2023
    Publication date: August 31, 2023
    Applicant: Google LLC
    Inventors: Olga Kapralova, Evgeny A. Cherepanov, Dmitry Osmakov, Martin Baeuml, Gleb Skobeltsyn
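The mining step in the abstract — when replacement terms are classified as a correction of selected transcript terms, the audio span aligned to those terms becomes a training example for the replacement terms — can be sketched as follows. The correction classifier is a crude stand-in, and the word-to-time alignment is a fake table; none of these names come from the patent:

```python
def is_correction(selected: str, replacement: str) -> bool:
    """Stand-in classifier: treat edits of similar length as corrections
    (as opposed to, say, wholesale rephrasings)."""
    return abs(len(selected) - len(replacement)) <= 3

def extract_training_example(audio, alignment, selected, replacement):
    """If the edit is classified as a correction, return the audio slice
    aligned to the selected terms, labeled with the replacement terms."""
    if not is_correction(selected, replacement):
        return None
    start, end = alignment[selected]
    return audio[start:end], replacement

alignment = {"wreck": (10, 18)}   # fake word -> (start, end) sample indices
audio = list(range(30))           # fake audio samples
example = extract_training_example(audio, alignment, "wreck", "rec")
```

The resulting `(audio slice, label)` pairs would then feed acoustic-model training so the recognizer learns the replacement terms.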
  • Patent number: 11682381
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for speech recognition. One of the methods includes receiving first audio data corresponding to an utterance; obtaining a first transcription of the first audio data; receiving data indicating (i) a selection of one or more terms of the first transcription and (ii) one or more replacement terms; determining that one or more of the replacement terms are classified as a correction of one or more of the selected terms; in response to determining that the one or more of the replacement terms are classified as a correction of the one or more of the selected terms, obtaining a first portion of the first audio data that corresponds to one or more terms of the first transcription; and using the first portion of the first audio data that is associated with the one or more terms of the first transcription to train an acoustic model for recognizing the one or more replacement terms.
    Type: Grant
    Filed: December 2, 2021
    Date of Patent: June 20, 2023
    Assignee: Google LLC
    Inventors: Olga Kapralova, Evgeny A. Cherepanov, Dmitry Osmakov, Martin Baeuml, Gleb Skobeltsyn
  • Publication number: 20230074406
    Abstract: As part of a dialog session between a user and an automated assistant, implementations can receive a stream of audio data that captures a spoken utterance including an assistant query, determine, based on processing the stream of audio data, a set of assistant outputs that are each predicted to be responsive to the assistant query, process, using large language model (LLM) output(s), the assistant outputs and context of the dialog session to generate a set of modified assistant outputs, and cause given modified assistant output, from among the set of modified assistant outputs, to be provided for presentation to the user in response to the spoken utterance. In some implementations, the LLM output(s) can be generated in an offline manner for subsequent use in an online manner. In additional or alternative implementations, the LLM output(s) can be generated in an online manner when the spoken utterance is received.
    Type: Application
    Filed: November 22, 2021
    Publication date: March 9, 2023
    Inventors: Martin Baeuml, Thushan Amarasiriwardena, Roberto Pieraccini, Vikram Sridar, Daniel De Freitas Adiwardana, Noam M. Shazeer, Quoc Le