Patents by Inventor Brandon Charles Barbello

Brandon Charles Barbello has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240057030
    Abstract: This document describes systems and techniques to avoid and manage poor wireless connections on mobile devices. The described systems and techniques can determine, based on a determined signal quality or signal strength of a current wireless connection, that a superior signal quality or a superior signal strength is available at a location adjacent to, or within a determined distance of, a current location of the mobile device. In response to determining that the superior signal quality or the superior signal strength is available at the location, the mobile device can provide an alert to a user. The alert can indicate the location adjacent to, or within the determined distance of, the mobile device. In this way, the described systems and techniques can direct users to better network connections or alleviate the impact of poor connections.
    Type: Application
    Filed: December 10, 2020
    Publication date: February 15, 2024
    Inventors: Brandon Charles Barbello, Shenaz Zack, Tim Wantland, Scott Douglas Kulchycki
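    As a rough illustration of the decision logic this abstract describes, the sketch below checks whether a nearby location offers a meaningfully stronger signal and, if so, produces an alert. The SignalSample structure, the MIN_IMPROVEMENT_DB and MAX_DISTANCE_M thresholds, and the dBm values are illustrative assumptions, not details from the filing.
    ```python
    from dataclasses import dataclass

    @dataclass
    class SignalSample:
        location: str          # label for a nearby location (hypothetical)
        distance_m: float      # distance from the device's current position
        strength_dbm: float    # measured or predicted signal strength

    MIN_IMPROVEMENT_DB = 10.0  # how much better a nearby signal must be to alert
    MAX_DISTANCE_M = 50.0      # "within a determined distance" of the device

    def find_better_connection(current_dbm, nearby):
        """Return the closest nearby sample whose signal is meaningfully better."""
        candidates = [
            s for s in nearby
            if s.distance_m <= MAX_DISTANCE_M
            and s.strength_dbm - current_dbm >= MIN_IMPROVEMENT_DB
        ]
        return min(candidates, key=lambda s: s.distance_m) if candidates else None

    better = find_better_connection(
        current_dbm=-95.0,
        nearby=[SignalSample("near the window", 12.0, -70.0),
                SignalSample("hallway", 40.0, -88.0)],
    )
    if better:
        print(f"Alert: stronger signal available {better.location} "
              f"({better.distance_m:.0f} m away).")
    ```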
  • Publication number: 20240040039
    Abstract: This document describes systems and techniques to enable selectable controls for interactive voice response (IVR) systems. The described systems and techniques can determine whether audio data associated with a voice or video call between a user of a computing device and a third party includes multiple selectable options. The third party audibly provides the selectable options during the call. In response to determining that the audio data includes the selectable options, the computing device can determine a text description of the multiple selectable options. The described systems and techniques can then display two or more selectable controls on a display. The user can select a selectable control to indicate a selected option of the multiple selectable options. In this way, the described systems and techniques can improve a user experience with voice calls and video calls by making IVR systems easier to navigate and understand.
    Type: Application
    Filed: December 8, 2020
    Publication date: February 1, 2024
    Inventors: Brandon Charles Barbello, Shenaz Zack, Tim Wantland, Jan Piotr Jedrzejowicz
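    The entry above centers on turning audibly announced IVR options into on-screen controls. A minimal sketch of that mapping is below, assuming the prompt has already been transcribed to text; the OPTION_PATTERN regular expression and the "press N for X" phrasing it expects are illustrative assumptions.
    ```python
    import re

    # Matches menu phrases such as "press 1 for billing" in a transcribed prompt.
    OPTION_PATTERN = re.compile(
        r"(?:press|say)\s+(\d)\s+(?:for|to)\s+([^.,;]+)", re.IGNORECASE)

    def extract_selectable_options(transcript: str):
        """Map each spoken menu option to a short text description for a control."""
        return {digit: desc.strip() for digit, desc in OPTION_PATTERN.findall(transcript)}

    transcript = ("Thank you for calling. Press 1 for billing, press 2 for "
                  "technical support, or press 3 to speak with an agent.")
    for digit, label in extract_selectable_options(transcript).items():
        print(f"[ {digit} ] {label}")   # rendered as tappable controls on the display
    ```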
  • Publication number: 20240031482
    Abstract: A computing device is described that accepts a telephone call, initiated by a caller, from another device. Prior to establishing a telephone user interface that receives spoken input from the user and outputs spoken audio from the caller, the computing device executes a call screening service that outputs an audio user interface, to the other device and as part of the telephone call. The audio user interface interrogates the caller for additional information, including a purpose of the telephone call, which allows the user to have more context about the telephone call before deciding whether to accept the call or hang up. The computing device outputs a graphical user interface associated with the telephone call. The graphical user interface includes an indication of the additional information obtained via the audio user interface that interrogates the caller.
    Type: Application
    Filed: September 27, 2023
    Publication date: January 25, 2024
    Inventors: Shavit Matias, Noam Etzion-Rosenberg, Rebecca Chiou, Benjamin Schlesinger, Brandon Charles Barbello, Ori Kabeli, Usman Abdullah, Eric Erfanian, Michelle Tadmor, Aditi Bhargava, Jan Piotr Jedrzejowicz, Alex Agranovich, Nir Shemy, Paul Dunlop, Yossi Matias, Kyungmin Youn, Nadav Bar
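    The screening flow above can be sketched as three steps: prompt the caller, transcribe the stated purpose, and surface it in the call UI before the user decides. The synthesize_prompt and transcribe helpers below are hypothetical stand-ins for a text-to-speech engine and a speech recognizer; the patent names no specific APIs.
    ```python
    def synthesize_prompt(text: str) -> bytes:
        """Stand-in TTS: return audio bytes for the screening prompt."""
        return text.encode()

    def transcribe(audio: bytes) -> str:
        """Stand-in ASR: return the caller's spoken reply as text."""
        return audio.decode()

    def screen_call(caller_audio_reply: bytes) -> dict:
        # 1. Before ringing through, play an audio prompt to the caller.
        prompt = synthesize_prompt("The person you're calling is using a screening "
                                   "service. What is the purpose of your call?")
        # 2. Capture and transcribe the caller's answer.
        purpose = transcribe(caller_audio_reply)
        # 3. Surface the context in the call UI so the user can accept or decline.
        return {"prompt_sent": len(prompt) > 0, "stated_purpose": purpose}

    info = screen_call(b"Hi, I'm calling about your car's extended warranty.")
    print("Caller says:", info["stated_purpose"])
    ```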
  • Publication number: 20230377183
    Abstract: The methods and systems described herein provide for depth-aware image editing and interactive features. In particular, a computer application may provide image-related features that utilize a combination of (a) a depth map and (b) segmentation data to process one or more images and generate an edited version of the one or more images.
    Type: Application
    Filed: July 21, 2023
    Publication date: November 23, 2023
    Inventors: Tim Phillip Wantland, Brandon Charles Barbello, Christopher Max Breithaupt, Michael John Schoenberg, Adarsh Prakash Murthy Kowdle, Bryan Woods, Anshuman Kumar
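    To illustrate how a depth map and segmentation data might be combined in the way this abstract describes, the sketch below uses the segmented subject's depth to select background pixels and applies an edit (dimming) only there. The array shapes, the mean-depth heuristic, and the dimming edit itself are illustrative choices, not the patented method.
    ```python
    import numpy as np

    def edit_with_depth(image, depth, subject_mask, falloff=0.5):
        """Darken pixels that lie behind the subject identified by the mask.

        image:        (H, W, 3) float array in [0, 1]
        depth:        (H, W) float array, larger = farther from the camera
        subject_mask: (H, W) bool array, True where segmentation found the subject
        """
        subject_depth = depth[subject_mask].mean()   # typical subject distance
        background = depth > subject_depth           # depth-aware selection
        edited = image.copy()
        edited[background] *= falloff                # apply the edit there only
        return edited

    rng = np.random.default_rng(0)
    img = rng.random((4, 4, 3))
    depth = np.array([[1, 1, 5, 5]] * 4, dtype=float)
    mask = depth < 3
    out = edit_with_depth(img, depth, mask)
    print(out.shape)  # (4, 4, 3), with far pixels dimmed
    ```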
  • Publication number: 20230376699
    Abstract: This document describes methods and systems of on-device real-time translation for media content on a mobile electronic device. The translation is managed and executed by an operating system of the electronic device rather than within a particular application executing on the electronic device. The operating system can translate media content, including visual content displayed on a display device of the electronic device or audio content output by the electronic device. Because the translation is handled at the OS level, it can be applied, automatically or based on a user input, across a variety of (including all) applications and a variety of content on the electronic device to provide a consistent translation experience. The translation is provided via a system UI overlay that displays translated text as captions for video content or as a replacement for on-screen text.
    Type: Application
    Filed: December 18, 2020
    Publication date: November 23, 2023
    Applicant: Google LLC
    Inventors: Brandon Charles Barbello, Shenaz Zack, Tim Wantland, Khondokar Sami Iqram, Nikola Radicevic, Prasad Modali, Jeffrey Robert Pitman, Svetoslav Ganov, Qi Ge, Jonathan D. Wilson, Masakazu Seno, Xinxing Gu
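    A minimal sketch of the OS-level arrangement described above follows: a system-owned overlay receives text from any application and displays a translated caption, so behavior is consistent across apps. The SystemTranslationOverlay class, its on_text_rendered hook, and the toy translate function are assumptions made for illustration.
    ```python
    def translate(text: str, target_lang: str) -> str:
        # Placeholder: a real implementation would run an on-device translation model.
        fake_dictionary = {"Hola": "Hello", "Adiós": "Goodbye"}
        return " ".join(fake_dictionary.get(w, w) for w in text.split())

    class SystemTranslationOverlay:
        """System-UI overlay that captions any app's on-screen or audio-derived text."""

        def __init__(self, target_lang="en"):
            self.target_lang = target_lang
            self.captions = []

        def on_text_rendered(self, app_name: str, text: str):
            # Called by the OS whenever an app draws text or emits captions,
            # so the experience is consistent across all applications.
            self.captions.append((app_name, translate(text, self.target_lang)))

    overlay = SystemTranslationOverlay()
    overlay.on_text_rendered("video_player", "Hola")
    print(overlay.captions)  # [('video_player', 'Hello')]
    ```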
  • Patent number: 11811968
    Abstract: A computing device is described that accepts a telephone call, initiated by a caller, from another device. Prior to establishing a telephone user interface that receives spoken input from the user and outputs spoken audio from the caller, the computing device executes a call screening service that outputs an audio user interface, to the other device and as part of the telephone call. The audio user interface interrogates the caller for additional information, including a purpose of the telephone call, which allows the user to have more context about the telephone call before deciding whether to accept the call or hang up. The computing device outputs a graphical user interface associated with the telephone call. The graphical user interface includes an indication of the additional information obtained via the audio user interface that interrogates the caller.
    Type: Grant
    Filed: January 8, 2019
    Date of Patent: November 7, 2023
    Assignee: Google LLC
    Inventors: Shavit Matias, Noam Etzion-Rosenberg, Rebecca Chiou, Benjamin Schlesinger, Brandon Charles Barbello, Ori Kabeli, Usman Abdullah, Eric Erfanian, Michelle Tadmor, Aditi Bhargava, Jan Piotr Jedrzejowicz, Alex Agranovich, Nir Shemy, Paul Dunlop, Yossi Matias, Kyungmin Youn, Nadav Bar
  • Patent number: 11756223
    Abstract: The methods and systems described herein provide for depth-aware image editing and interactive features. In particular, a computer application may provide image-related features that utilize a combination of (a) a depth map and (b) segmentation data to process one or more images and generate an edited version of the one or more images.
    Type: Grant
    Filed: June 10, 2021
    Date of Patent: September 12, 2023
    Assignee: Google LLC
    Inventors: Tim Phillip Wantland, Brandon Charles Barbello, Christopher Max Breithaupt, Michael John Schoenberg, Adarsh Prakash Murthy Kowdle, Bryan Woods, Anshuman Kumar
  • Patent number: 11256472
    Abstract: In general, the subject matter described in this disclosure can be embodied in methods, systems, and program products. A computing device stores reference song characterization data and receives digital audio data. The computing device determines whether the digital audio data represents music and then performs a different process to recognize that the digital audio data represents a particular reference song. The computing device then outputs an indication of the particular reference song.
    Type: Grant
    Filed: September 2, 2020
    Date of Patent: February 22, 2022
    Assignee: Google LLC
    Inventors: Dominik Roblek, Blaise Hilary Aguera-Arcas, Thomas W. Hume, Marvin Karl Ritter, Brandon Charles Barbello, Kevin I. Kilgour, Mihajlo Velimirović, Christopher Thornton, Gabriel Oak Taubman, James David Lyon, Jan Heinrich Althaus, Katsiaryna Naliuka, Julian James Odell, Matthew Sharifi, Beat Gfeller
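    The two-stage flow in this abstract, first deciding whether audio is music at all and only then matching it against stored reference-song characterization data, might look roughly like the sketch below. The energy-based is_music gate and the cosine-similarity match are simplifications chosen for illustration.
    ```python
    import math

    def is_music(samples, threshold=0.1):
        """Crude stand-in for a music detector: checks signal energy."""
        energy = sum(s * s for s in samples) / len(samples)
        return energy > threshold

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    def recognize(samples, fingerprint, reference_db):
        if not is_music(samples):            # stage 1: cheap gate
            return None
        # stage 2: match against stored reference-song characterization data
        return max(reference_db, key=lambda name: cosine(fingerprint, reference_db[name]))

    reference_db = {"Song A": [0.9, 0.1, 0.3], "Song B": [0.2, 0.8, 0.5]}
    print(recognize([0.4, -0.5, 0.6], fingerprint=[0.85, 0.2, 0.25],
                    reference_db=reference_db))  # -> 'Song A'
    ```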
  • Publication number: 20210314440
    Abstract: A computing device is described that accepts a telephone call, initiated by a caller, from another device. Prior to establishing a telephone user interface that receives spoken input from the user and outputs spoken audio from the caller, the computing device executes a call screening service that outputs an audio user interface, to the other device and as part of the telephone call. The audio user interface interrogates the caller for additional information, including a purpose of the telephone call, which allows the user to have more context about the telephone call before deciding whether to accept the call or hang up. The computing device outputs a graphical user interface associated with the telephone call. The graphical user interface includes an indication of the additional information obtained via the audio user interface that interrogates the caller.
    Type: Application
    Filed: January 8, 2019
    Publication date: October 7, 2021
    Applicant: Google LLC
    Inventors: Shavit Matias, Noam Etzion-Rosenberg, Rebecca Chiou, Benjamin Schlesinger, Brandon Charles Barbello, Ori Kabeli, Usman Abdullah, Eric Erfanian, Michelle Tadmor, Aditi Bhargava, Jan Piotr Jedrzejowicz, Alex Agranovich, Nir Shemy, Paul Dunlop, Yossi Matias, Kyungmin Youn, Nadav Bar
  • Publication number: 20210304431
    Abstract: The methods and systems described herein provide for depth-aware image editing and interactive features. In particular, a computer application may provide image-related features that utilize a combination of (a) a depth map and (b) segmentation data to process one or more images and generate an edited version of the one or more images.
    Type: Application
    Filed: June 10, 2021
    Publication date: September 30, 2021
    Inventors: Tim Phillip Wantland, Brandon Charles Barbello, Christopher Max Breithaupt, Michael John Schoenberg, Adarsh Prakash Murthy Kowdle, Bryan Woods, Anshuman Kumar
  • Patent number: 11100664
    Abstract: The methods and systems described herein provide for depth-aware image editing and interactive features. In particular, a computer application may provide image-related features that utilize a combination of (a) a depth map and (b) segmentation data to process one or more images and generate an edited version of the one or more images.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: August 24, 2021
    Assignee: Google LLC
    Inventors: Tim Phillip Wantland, Brandon Charles Barbello, Christopher Max Breithaupt, Michael John Schoenberg, Adarsh Prakash Murthy Kowdle, Bryan Woods, Anshuman Kumar
  • Publication number: 20210103348
    Abstract: This document describes techniques that facilitate user proficiency in using radar gestures to interact with an electronic device. Using the described techniques, an electronic device can employ a radar system to detect and determine radar-based touch-independent gestures (radar gestures) that are made by the user to interact with the electronic device and applications running on the electronic device. For the radar gestures to be used to control or interact with the electronic device, the user must properly perform the radar gestures. The described techniques therefore also provide a tutorial or game environment that allows the user to learn and practice radar gestures in a natural way. The tutorial or game environment also provides visual feedback elements that give the user feedback when radar gestures are properly made and when they are not, which makes learning and practicing a pleasant and enjoyable experience for the user.
    Type: Application
    Filed: October 14, 2019
    Publication date: April 8, 2021
    Applicant: Google LLC
    Inventors: Daniel Per Jeppsson, Lauren Marie Bedal, Vignesh Sachidanandam, Morgwn Quin McCarty, Brandon Charles Barbello, Alexander Lee, Leonardo Giusti
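    A rough sketch of the tutorial feedback loop described above: classify an attempted radar gesture, compare it to the expected one, and return positive or corrective feedback. The classify_gesture stub, the PASS_CONFIDENCE threshold, and the feedback strings are all illustrative; the filing describes the concept rather than an API.
    ```python
    PASS_CONFIDENCE = 0.8   # how certain the recognizer must be to count a gesture

    def classify_gesture(radar_frames):
        """Stand-in radar classifier: returns (gesture_name, confidence)."""
        # A real system would run a model over radar sensor frames.
        return ("swipe_left", 0.65) if len(radar_frames) < 10 else ("swipe_left", 0.9)

    def tutorial_step(expected_gesture, radar_frames):
        gesture, confidence = classify_gesture(radar_frames)
        if gesture == expected_gesture and confidence >= PASS_CONFIDENCE:
            return "Nice! That swipe was recognized."       # positive visual feedback
        return "Almost - try a slower, wider motion."        # corrective feedback

    print(tutorial_step("swipe_left", radar_frames=list(range(5))))
    print(tutorial_step("swipe_left", radar_frames=list(range(20))))
    ```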
  • Publication number: 20210103337
    Abstract: This document describes techniques that facilitate user proficiency in using radar gestures to interact with an electronic device. Using the described techniques, an electronic device can employ a radar system to detect and determine radar-based touch-independent gestures (radar gestures) that are made by the user to interact with the electronic device and applications running on the electronic device. For the radar gestures to be used to control or interact with the electronic device, the user must properly perform the radar gestures. The described techniques therefore also provide a game or tutorial environment that allows the user to learn and practice radar gestures in a natural way. The game or tutorial environment also provides visual gaming elements that give the user feedback when radar gestures are properly made and when they are not, which makes learning and practicing a pleasant and enjoyable experience for the user.
    Type: Application
    Filed: October 14, 2019
    Publication date: April 8, 2021
    Applicant: Google LLC
    Inventors: Daniel Per Jeppsson, Vignesh Sachidanandam, Lauren Marie Bedal, Morgwn Quin McCarty, Brandon Charles Barbello, Alexander Lee, Leonardo Giusti
  • Publication number: 20210042950
    Abstract: The methods and systems described herein provide for depth-aware image editing and interactive features. In particular, a computer application may provide image-related features that utilize a combination of (a) a depth map and (b) segmentation data to process one or more images and generate an edited version of the one or more images.
    Type: Application
    Filed: December 19, 2019
    Publication date: February 11, 2021
    Inventors: Tim Phillip Wantland, Brandon Charles Barbello, Christopher Max Breithaupt, Michael John Schoenberg, Adarsh Prakash Murthy Kowdle, Bryan Woods, Anshuman Kumar
  • Publication number: 20200401367
    Abstract: In general, the subject matter described in this disclosure can be embodied in methods, systems, and program products. A computing device stores reference song characterization data and receives digital audio data. The computing device determines whether the digital audio data represents music and then performs a different process to recognize that the digital audio data represents a particular reference song. The computing device then outputs an indication of the particular reference song.
    Type: Application
    Filed: September 2, 2020
    Publication date: December 24, 2020
    Applicant: Google LLC
    Inventors: Dominik Roblek, Blaise Hilary Aguera-Arcas, Thomas W. Hume, Marvin Karl Ritter, Brandon Charles Barbello, Kevin I. Kilgour, Mihajlo Velimirovic, Christopher Thornton, Gabriel Oak Taubman, James David Lyon, Jan Heinrich Althaus, Katsiaryna Naliuka, Julian James Odell, Matthew Sharifi, Beat Gfeller
  • Patent number: 10809968
    Abstract: In general, the subject matter described in this disclosure can be embodied in methods, systems, and program products. A computing device stores reference song characterization data and receives digital audio data. The computing device determines whether the digital audio data represents music and then performs a different process to recognize that the digital audio data represents a particular reference song. The computing device then outputs an indication of the particular reference song.
    Type: Grant
    Filed: October 1, 2018
    Date of Patent: October 20, 2020
    Assignee: Google LLC
    Inventors: Dominik Roblek, Blaise Hilary Aguera-Arcas, Thomas W. Hume, Marvin Karl Ritter, Brandon Charles Barbello, Kevin I. Kilgour, Mihajlo Velimirovic, Christopher Thornton, Gabriel Oak Taubman, James David Lyon, Jan Heinrich Althaus, Katsiaryna Naliuka, Julian James Odell, Matthew Sharifi, Beat Gfeller
  • Patent number: 10761802
    Abstract: In general, the subject matter described in this disclosure can be embodied in methods, systems, and program products for indicating a reference song. A computing device stores reference song characterization data that identifies a plurality of audio characteristics for each reference song in a plurality of reference songs. The computing device receives digital audio data that represents audio recorded by a microphone, converts the digital audio data from time-domain format into frequency-domain format, and uses the digital audio data in the frequency-domain format in a music-characterization process. In response to determining that characterization values for the digital audio data are most relevant to characterization values for a particular reference song, the computing device outputs an indication of the particular reference song.
    Type: Grant
    Filed: October 1, 2018
    Date of Patent: September 1, 2020
    Assignee: Google LLC
    Inventors: Dominik Roblek, Blaise Hilary Aguera-Arcas, Thomas W. Hume, Marvin Karl Ritter, Brandon Charles Barbello, Kevin I. Kilgour, Mihajlo Velimirović, Christopher Thornton, Gabriel Oak Taubman, James David Lyon, Jan Heinrich Althaus, Katsiaryna Naliuka, Julian James Odell, Matthew Sharifi, Beat Gfeller
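    The pipeline in this last abstract, converting recorded audio from the time domain to the frequency domain, summarizing it as characterization values, and selecting the most relevant reference song, might be sketched as below. The magnitude-spectrum binning and nearest-neighbor comparison are illustrative simplifications, not the patented characterization process.
    ```python
    import numpy as np

    def characterize(samples, num_bins=8):
        spectrum = np.abs(np.fft.rfft(samples))      # time domain -> frequency domain
        bins = np.array_split(spectrum, num_bins)    # coarse spectral summary
        feats = np.array([b.mean() for b in bins])
        return feats / (np.linalg.norm(feats) + 1e-9)

    def most_relevant_song(samples, reference_features):
        query = characterize(samples)
        # Smallest distance = most relevant reference song.
        return min(reference_features,
                   key=lambda name: np.linalg.norm(query - reference_features[name]))

    rng = np.random.default_rng(1)
    refs = {"Song A": characterize(np.sin(np.linspace(0, 80 * np.pi, 4096))),
            "Song B": characterize(rng.standard_normal(4096))}
    recording = np.sin(np.linspace(0, 80 * np.pi, 4096)) + 0.05 * rng.standard_normal(4096)
    print(most_relevant_song(recording, refs))  # expected: 'Song A'
    ```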