Patents by Inventor Garrett Frederick Berg

Garrett Frederick Berg has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). A minimal code sketch of the timestamp-based arbitration technique these filings share appears after the listing.

  • Patent number: 12014738
    Abstract: Techniques described herein are directed to arbitrating between multiple potentially-responsive, automated-assistant-capable electronic devices to determine which should respond to the user's utterance, and/or which should defer to other electronic device(s). In various implementations, a spoken utterance provided by a user may be detected at a microphone of a first electronic device. Sound(s) emitted by additional electronic device(s) may also be detected at the microphone. Each of the sound(s) may encode a timestamp corresponding to detection of the spoken utterance at a respective electronic device. Timestamp(s) may be extracted from the sound(s) and compared to a local timestamp corresponding to detection of the spoken utterance at the first electronic device. Based on the comparison, the first electronic device may either invoke an automated assistant locally or defer to one of the additional electronic devices.
    Type: Grant
    Filed: May 8, 2023
    Date of Patent: June 18, 2024
    Assignee: GOOGLE LLC
    Inventors: Garrett Frederick Berg, Zac Livingston
  • Publication number: 20230274740
    Abstract: Techniques described herein are directed to arbitrating between multiple potentially-responsive, automated-assistant-capable electronic devices to determine which should respond to the user's utterance, and/or which should defer to other electronic device(s). In various implementations, a spoken utterance provided by a user may be detected at a microphone of a first electronic device. Sound(s) emitted by additional electronic device(s) may also be detected at the microphone. Each of the sound(s) may encode a timestamp corresponding to detection of the spoken utterance at a respective electronic device. Timestamp(s) may be extracted from the sound(s) and compared to a local timestamp corresponding to detection of the spoken utterance at the first electronic device. Based on the comparison, the first electronic device may either invoke an automated assistant locally or defer to one of the additional electronic devices.
    Type: Application
    Filed: May 8, 2023
    Publication date: August 31, 2023
    Inventors: Garrett Frederick Berg, Zac Livingston
  • Patent number: 11670293
    Abstract: Techniques described herein are directed to arbitrating between multiple potentially-responsive, automated-assistant-capable electronic devices to determine which should respond to the user's utterance, and/or which should defer to other electronic device(s). In various implementations, a spoken utterance provided by a user may be detected at a microphone of a first electronic device. Sound(s) emitted by additional electronic device(s) may also be detected at the microphone. Each of the sound(s) may encode a timestamp corresponding to detection of the spoken utterance at a respective electronic device. Timestamp(s) may be extracted from the sound(s) and compared to a local timestamp corresponding to detection of the spoken utterance at the first electronic device. Based on the comparison, the first electronic device may either invoke an automated assistant locally or defer to one of the additional electronic devices.
    Type: Grant
    Filed: September 2, 2020
    Date of Patent: June 6, 2023
    Assignee: GOOGLE LLC
    Inventors: Garrett Frederick Berg, Zac Livingston
  • Publication number: 20220068271
    Abstract: Techniques described herein are directed to arbitrating between multiple potentially-responsive, automated-assistant-capable electronic devices to determine which should respond to the user's utterance, and/or which should defer to other electronic device(s). In various implementations, a spoken utterance provided by a user may be detected at a microphone of a first electronic device. Sound(s) emitted by additional electronic device(s) may also be detected at the microphone. Each of the sound(s) may encode a timestamp corresponding to detection of the spoken utterance at a respective electronic device. Timestamp(s) may be extracted from the sound(s) and compared to a local timestamp corresponding to detection of the spoken utterance at the first electronic device. Based on the comparison, the first electronic device may either invoke an automated assistant locally or defer to one of the additional electronic devices.
    Type: Application
    Filed: September 2, 2020
    Publication date: March 3, 2022
    Inventors: Garrett Frederick Berg, Zac Livingston
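
Illustrative sketch (not part of any filing): the entries above share a single abstract describing timestamp-based arbitration among assistant-capable devices. The Python below is a minimal, hypothetical illustration of that flow. It assumes one plausible decision rule, that the device with the earliest detection timestamp responds and the others defer; the abstracts do not specify the criterion. All names (EncodedSound, arbitrate, the device ids) are invented for this example and are not drawn from the patents.

```python
# Minimal, hypothetical sketch of timestamp-based device arbitration.
# Assumption (not stated in the abstracts): earliest detection wins.
from dataclasses import dataclass
from typing import List


@dataclass
class EncodedSound:
    """A sound emitted by another device, encoding when it heard the utterance."""
    device_id: str
    detection_timestamp_ms: int  # timestamp extracted from the encoded sound


def arbitrate(local_device_id: str,
              local_timestamp_ms: int,
              sounds_from_other_devices: List[EncodedSound]) -> str:
    """Return the id of the device that should invoke the automated assistant.

    Compares the local detection timestamp against the timestamps extracted
    from the other devices' emitted sounds. Ties are broken by device id so
    every device can reach the same decision independently.
    """
    candidates = [(local_timestamp_ms, local_device_id)] + [
        (s.detection_timestamp_ms, s.device_id) for s in sounds_from_other_devices
    ]
    _, winning_id = min(candidates)
    return winning_id


# Example: this device heard the utterance 40 ms before the kitchen display,
# so it invokes the automated assistant locally and the display defers.
others = [EncodedSound("kitchen-display", detection_timestamp_ms=1_000_040)]
winner = arbitrate("living-room-speaker", 1_000_000, others)
if winner == "living-room-speaker":
    print("invoke automated assistant locally")
else:
    print(f"defer to {winner}")
```

Because every device runs the same comparison over the same set of timestamps, each one can decide on its own whether to respond or defer without any further coordination; the earliest-detection rule used here is only one way the comparison described in the abstracts might be resolved.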