Patents by Inventor Tim Wantland

Tim Wantland has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240056512
    Abstract: A computing system can be configured to input model input that includes context data into a machine-learned model and receive model output that describes one or more semantic entities referenced by the context data. The computing system can be configured to provide data descriptive of the semantic entity or entities to the computer application(s) and receive application output(s) respectively from the computing application(s) in response to providing the data descriptive of semantic entity or entities to the computer application(s). The application output(s) received from each computer application can describe available action(s) of the corresponding computer application with respect to the semantic entity or entities. The computing system can be configured to provide at least one indicator to a user that describes the available action(s) of the corresponding computer applications with respect to the semantic entity or entities.
    Type: Application
    Filed: October 24, 2023
    Publication date: February 15, 2024
    Inventors: Tim Wantland, Brandon Barbello, Robert Berry
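
The abstract for publication 20240056512 above describes a pipeline: a machine-learned model maps context data to semantic entities, each registered application reports the actions it can take on those entities, and the system surfaces indicators for those actions. The patent does not publish code, so the following is only a minimal illustrative sketch of that flow; every name (`extract_entities`, `Application`, `available_actions`) is hypothetical and the entity extractor is a stub rather than a real model.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    app_name: str
    label: str    # e.g. "Get directions to ..."
    entity: str   # the semantic entity the action applies to

@dataclass
class Application:
    name: str
    # Given a semantic entity, report the actions this app can take on it.
    available_actions: Callable[[str], list[str]]

def extract_entities(context: str) -> list[str]:
    """Stand-in for the machine-learned model that maps context data to entities."""
    # A real system would run an on-device model; this stub just treats
    # capitalized tokens as entities.
    return [tok for tok in context.split() if tok[:1].isupper()]

def suggest_actions(context: str, apps: list[Application]) -> list[Action]:
    indicators: list[Action] = []
    for entity in extract_entities(context):
        for app in apps:
            for label in app.available_actions(entity):
                indicators.append(Action(app.name, label, entity))
    return indicators

# Example: a maps app that offers directions to any entity it is given.
maps = Application("Maps", lambda e: [f"Get directions to {e}"])
for action in suggest_actions("Dinner at Luigi's on Friday", [maps]):
    print(f"[{action.app_name}] {action.label}")
```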
  • Publication number: 20240057030
    Abstract: This document describes systems and techniques to avoid and manage poor wireless connections on mobile devices. The described systems and techniques can determine, based on a determined signal quality or signal strength of a current wireless connection, that a superior signal quality or a superior signal strength is available at a location adjacent to, or within a determined distance of, a current location of the mobile device. In response to determining that the superior signal quality or the superior signal strength is available at the location, the mobile device can provide an alert to a user. The alert can indicate the location adjacent to, or within the determined distance of, the mobile device. In this way, the described systems and techniques can direct users to better network connections or alleviate their impact.
    Type: Application
    Filed: December 10, 2020
    Publication date: February 15, 2024
    Inventors: Brandon Charles Barbello, Shenaz Zack, Tim Wantland, Scott Douglas Kulchycki
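
Publication 20240057030 above describes alerting the user when a stronger signal is known to be available within some distance of the device. Below is a minimal sketch of that comparison under assumed inputs; the data structure, thresholds, and function names are all hypothetical, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class SignalSample:
    location: str       # human-readable place, e.g. "near the window"
    distance_m: float   # distance from the device's current position
    quality_dbm: float  # received signal strength (less negative is better)

def better_connection_nearby(current_dbm: float,
                             samples: list[SignalSample],
                             max_distance_m: float = 10.0,
                             margin_db: float = 6.0) -> SignalSample | None:
    """Return the best nearby sample that beats the current signal by a margin."""
    nearby = [s for s in samples if s.distance_m <= max_distance_m]
    if not nearby:
        return None
    best = max(nearby, key=lambda s: s.quality_dbm)
    return best if best.quality_dbm >= current_dbm + margin_db else None

# Example: the current connection is weak (-95 dBm); a stronger signal was
# measured a few meters away, so the device would raise an alert.
samples = [SignalSample("near the window", 4.0, -70.0),
           SignalSample("hallway", 8.0, -88.0)]
hint = better_connection_nearby(current_dbm=-95.0, samples=samples)
if hint:
    print(f"Better signal available {hint.location} ({hint.quality_dbm} dBm)")
```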
  • Publication number: 20240040039
    Abstract: This document describes systems and techniques to enable selectable controls for interactive voice response (IVR) systems. The described systems and techniques can determine whether audio data associated with a voice or video call between a user of a computing device and a third party includes multiple selectable options. The third party audibly provides the selectable options during the call. In response to determining that the audio data includes the selectable options, the computing device can determine a text description of the multiple selectable options. The described systems and techniques can then display two or more selectable controls on a display. The user can select a selectable control to indicate a selected option of the multiple selectable options. In this way, the described systems and techniques can improve a user experience with voice calls and video calls by making IVR systems easier to navigate and understand.
    Type: Application
    Filed: December 8, 2020
    Publication date: February 1, 2024
    Inventors: Brandon Charles Barbello, Shenaz Zack, Tim Wantland, Jan Piotr Jedrzejowicz
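
Publication 20240040039 above turns options spoken by an IVR system into on-screen selectable controls. As a rough sketch of one way that could work, the snippet below scans a call transcript for "press N for ..." phrasing and maps digits to labels; the regex and function names are assumptions, and a real system would likely use speech recognition plus learned parsing rather than a fixed pattern.

```python
import re

# Matches phrases like "press 1 for billing" or "press 3 to speak with an agent".
OPTION_PATTERN = re.compile(
    r"press\s+(?P<digit>\d)\s+(?:for|to)\s+(?P<label>[^.,;]+)", re.IGNORECASE)

def extract_ivr_options(transcript: str) -> dict[str, str]:
    """Map DTMF digits to text descriptions of the options spoken by the IVR."""
    return {m.group("digit"): m.group("label").strip()
            for m in OPTION_PATTERN.finditer(transcript)}

transcript = ("Thank you for calling. Press 1 for billing, "
              "press 2 for technical support, press 3 to speak with an agent.")
for digit, label in extract_ivr_options(transcript).items():
    # In a real UI each entry would be a tappable control; selecting it
    # would send the corresponding DTMF tone on the call.
    print(f"[ {digit} ] {label}")
```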
  • Publication number: 20240012540
    Abstract: The present disclosure is directed to input suggestion. In particular, the methods and systems of the present disclosure can: receive, from a first application executed by one or more computing devices, data indicating information that has been presented by and/or input into the first application; generate, based at least in part on the received data, one or more suggested candidate inputs for a second application executed by the computing device(s); provide, in association with the second application, an interface comprising one or more options to select at least one suggested candidate input of the suggested candidate input(s); and responsive to receiving data indicating a selection of a particular suggested candidate input of the suggested candidate input(s) via the interface, communicate, to the second application, data indicating the particular suggested candidate input.
    Type: Application
    Filed: September 21, 2023
    Publication date: January 11, 2024
    Inventors: Tim Wantland, Julian Odell, Seungyeon Kim, Iulia Turc, Daniel Ramage, Wei Huang, Kaikai Wang
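
Publication 20240012540 above covers deriving suggested inputs for one application from what another application displayed. The sketch below is only an illustration of that idea: two toy regex extractors stand in for the learned suggestion models, and `candidate_inputs`, `fill_field`, and the field names are hypothetical.

```python
import re

def candidate_inputs(first_app_text: str) -> list[str]:
    """Derive suggested inputs for a second application from text shown in a first one,
    e.g. confirmation codes or phone numbers."""
    suggestions = []
    # Hypothetical extractors; a production system would use learned models.
    suggestions += re.findall(r"\b[A-Z0-9]{6}\b", first_app_text)        # booking codes
    suggestions += re.findall(r"\b\d{3}-\d{3}-\d{4}\b", first_app_text)  # phone numbers
    return suggestions

def fill_field(second_app_field: str, chosen: str) -> None:
    # Stand-in for communicating the selected suggestion to the second app.
    print(f"Filling '{second_app_field}' with '{chosen}'")

email_body = "Your flight is confirmed. Confirmation code QX7PLM. Call 555-010-9999 to change."
options = candidate_inputs(email_body)
print("Suggestions:", options)
if options:
    fill_field("confirmation_code", options[0])  # user taps the first suggestion
```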
  • Publication number: 20240004511
    Abstract: A method includes, while a user device is using a first presentation mode to present content to a user, obtaining a current state of the user of the user device. The method also includes, based on the current state of the user, providing, as output from a user interface of the user device, a user-selectable option that when selected causes the user device to use a second presentation mode to present the content to the user. The method further includes, in response to receiving a user input indication indicating selection of the user-selectable option, initiating presentation of the content using the second presentation mode.
    Type: Application
    Filed: September 14, 2023
    Publication date: January 4, 2024
    Applicant: Google LLC
    Inventors: Kristin A. Gray, Tim Wantland, Matthew Stokes, Bingying Xia, Karen Vertierra, Melissa Barnhart, Gus Winkleman
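
Publication 20240004511 above offers the user a different presentation mode based on their current state. A minimal sketch of such a policy follows, assuming a simple text/audio split and a string-valued user state; the enum, state names, and decision rule are invented for illustration and are not the claimed method.

```python
from enum import Enum, auto

class PresentationMode(Enum):
    TEXT = auto()   # content rendered on screen
    AUDIO = auto()  # content read aloud

def suggest_mode(current_mode: PresentationMode, user_state: str) -> PresentationMode | None:
    """Return an alternative mode worth offering for the user's current state,
    or None if the current mode already fits."""
    # Hypothetical policy: hands-/eyes-busy states favor audio, otherwise text.
    eyes_busy = user_state in {"walking", "driving", "cycling"}
    if eyes_busy and current_mode is PresentationMode.TEXT:
        return PresentationMode.AUDIO
    if not eyes_busy and current_mode is PresentationMode.AUDIO:
        return PresentationMode.TEXT
    return None

suggestion = suggest_mode(PresentationMode.TEXT, user_state="walking")
if suggestion:
    # The device would show a user-selectable option here; the switch only
    # happens after the user accepts it.
    print(f"Offer to switch presentation mode to {suggestion.name}")
```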
  • Patent number: 11831738
    Abstract: A computing system can be configured to input model input that includes context data into a machine-learned model and receive model output that describes one or more semantic entities referenced by the context data. The computing system can be configured to provide data descriptive of the semantic entity or entities to the computer application(s) and receive application output(s) respectively from the computing application(s) in response to providing the data descriptive of semantic entity or entities to the computer application(s). The application output(s) received from each computer application can describe available action(s) of the corresponding computer application with respect to the semantic entity or entities. The computing system can be configured to provide at least one indicator to a user that describes the available action(s) of the corresponding computer applications with respect to the semantic entity or entities.
    Type: Grant
    Filed: December 15, 2022
    Date of Patent: November 28, 2023
    Assignee: GOOGLE LLC
    Inventors: Tim Wantland, Brandon Barbello, Robert Berry
  • Publication number: 20230376699
    Abstract: This document describes methods and systems of on-device real-time translation for media content on a mobile electronic device. The translation is managed and executed by an operating system of the electronic device rather than within a particular application executing on the electronic device. The operating system can translate media content, including visual content displayed on a display device of the electronic device or audio content output by the electronic device. Because the translation is at the OS level, the translation can be implemented, automatically or based on a user input, across a variety of (including all) applications and a variety of content on the electronic device to provide a consistent translation experience, which is provided via a system UI overlay that displays translated text as captions to video content or as a replacement to on-screen text.
    Type: Application
    Filed: December 18, 2020
    Publication date: November 23, 2023
    Applicant: Google LLC
    Inventors: Brandon Charles Barbello, Shenaz Zack, Tim Wantland, Khondokar Sami Iqram, Nikola Radicevic, Prasad Modali, Jeffrey Robert Pitman, Svetoslav Ganov, Qi Ge, Jonathan D. Wilson, Masakazu Seno, Xinxing Gu
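
Publication 20230376699 above places translation at the operating-system level so a single overlay can caption content from any application. The class below is a purely illustrative sketch of that shape: `SystemTranslationOverlay`, `on_caption`, and the stubbed `fake_translate` are all hypothetical names, and no real OS hook or translation model is used.

```python
from typing import Callable

def fake_translate(text: str, target_lang: str) -> str:
    """Stand-in for an on-device translation model."""
    return f"[{target_lang}] {text}"

class SystemTranslationOverlay:
    """Hypothetical OS-level overlay: apps are not modified; the system UI
    draws translated captions over whatever content is on screen or playing."""

    def __init__(self, translate: Callable[[str, str], str], target_lang: str):
        self.translate = translate
        self.target_lang = target_lang

    def on_caption(self, source_app: str, text: str) -> None:
        # Called by the OS for on-screen text or recognized speech from any app.
        caption = self.translate(text, self.target_lang)
        print(f"(overlay over {source_app}) {caption}")

overlay = SystemTranslationOverlay(fake_translate, target_lang="en")
overlay.on_caption("VideoApp", "Hola, bienvenidos al programa")
overlay.on_caption("Browser", "Das ist ein Beispielsatz")
```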
  • Patent number: 11803290
    Abstract: The present disclosure is directed to input suggestion. In particular, the methods and systems of the present disclosure can: receive, from a first application executed by one or more computing devices, data indicating information that has been presented by and/or input into the first application; generate, based at least in part on the received data, one or more suggested candidate inputs for a second application executed by the computing device(s); provide, in association with the second application, an interface comprising one or more options to select at least one suggested candidate input of the suggested candidate input(s); and responsive to receiving data indicating a selection of a particular suggested candidate input of the suggested candidate input(s) via the interface, communicate, to the second application, data indicating the particular suggested candidate input.
    Type: Grant
    Filed: January 25, 2021
    Date of Patent: October 31, 2023
    Assignee: GOOGLE LLC
    Inventors: Tim Wantland, Julian Odell, Seungyeon Kim, Iulia Turc, Daniel Ramage, Wei Huang, Kaikai Wang
  • Patent number: 11782569
    Abstract: A method includes, while a user device is using a first presentation mode to present content to a user, obtaining a current state of the user of the user device. The method also includes, based on the current state of the user, providing, as output from a user interface of the user device, a user-selectable option that when selected causes the user device to use a second presentation mode to present the content to the user. The method further includes, in response to receiving a user input indication indicating selection of the user-selectable option, initiating presentation of the content using the second presentation mode.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: October 10, 2023
    Assignee: Google LLC
    Inventors: Kristin A. Gray, Tim Wantland, Matthew Stokes, Bingying Xia, Karen Vertierra, Melissa Barnhart, Gus Winkleman
  • Publication number: 20230319177
    Abstract: The application is directed to edge lighting for computing devices (200). The computing device (200) may include a display including a first portion (280) and a second portion (270), where the first portion (280) includes a substantial portion of a perimeter of the display and excludes the second portion (270) of the display. The computing device (200) may also include one or more processors (210) configured to determine a change in a status of the computing device (200), and determine, based on the change in the status, a visual notification (306). The one or more processors (210) may also be configured to interface with the first portion (280) of the display to output, based on the visual notification, a pattern of light (308).
    Type: Application
    Filed: August 19, 2020
    Publication date: October 5, 2023
    Inventors: Tim Wantland, Vignesh Sachidanandam
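
Publication 20230319177 above maps a change in device status to a light pattern rendered along the display's perimeter. The table-driven sketch below illustrates one possible mapping; the statuses, pattern fields, and values are invented for the example and are not taken from the application.

```python
from dataclasses import dataclass

@dataclass
class LightPattern:
    color: str
    sweep: str       # how the light travels along the display edge
    duration_s: float

# Hypothetical mapping from status changes to perimeter light patterns.
STATUS_PATTERNS = {
    "incoming_call":  LightPattern("blue",  "clockwise sweep", 2.0),
    "charging_start": LightPattern("green", "bottom-up fill",  1.0),
    "notification":   LightPattern("white", "corner pulse",    0.5),
}

def on_status_change(status: str) -> None:
    pattern = STATUS_PATTERNS.get(status)
    if pattern is None:
        return  # no edge lighting for this status
    # A real device would drive only the edge region of the display here.
    print(f"Edge light: {pattern.color}, {pattern.sweep}, {pattern.duration_s}s")

on_status_change("incoming_call")
```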
  • Publication number: 20230252981
    Abstract: Systems and methods for identifying semantic entities in audio signals are provided. A method can include obtaining, by a computing device comprising one or more processors and one or more memory devices, an audio signal concurrently heard by a user. The method can further include analyzing, by a machine-learned model stored on the computing device, at least a portion of the audio signal in a background of the computing device to determine one or more semantic entities. The method can further include displaying the one or more semantic entities on a display screen of the computing device.
    Type: Application
    Filed: April 20, 2023
    Publication date: August 10, 2023
    Inventors: Tim Wantland, Brandon Barbello
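
Publication 20230252981 above analyzes ambient audio in the background to find semantic entities and show them on screen. As a rough, assumption-laden sketch: the transcription stub and the lookup table below stand in for on-device speech recognition and the machine-learned entity model the abstract actually describes; all names and example entities are hypothetical.

```python
def transcribe(audio_chunk: bytes) -> str:
    """Stand-in for on-device speech recognition."""
    return "that new album by The Midnight Owls sounds great"

# Hypothetical catalog the matcher checks transcripts against; the patent
# describes a machine-learned model rather than a lookup table.
KNOWN_ENTITIES = {"The Midnight Owls": "music artist",
                  "Springfield Stadium": "venue"}

def detect_entities(audio_chunk: bytes) -> list[tuple[str, str]]:
    text = transcribe(audio_chunk)
    return [(name, kind) for name, kind in KNOWN_ENTITIES.items() if name in text]

# Runs in the background while audio is heard; matches would be shown on screen.
for name, kind in detect_entities(b"\x00" * 1600):
    print(f"Detected {kind}: {name}")
```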
  • Patent number: 11664017
    Abstract: Systems and methods for identifying semantic entities in audio signals are provided. A method can include obtaining, by a computing device comprising one or more processors and one or more memory devices, an audio signal concurrently heard by a user. The method can further include analyzing, by a machine-learned model stored on the computing device, at least a portion of the audio signal in a background of the computing device to determine one or more semantic entities. The method can further include displaying the one or more semantic entities on a display screen of the computing device.
    Type: Grant
    Filed: July 30, 2018
    Date of Patent: May 30, 2023
    Assignee: GOOGLE LLC
    Inventors: Tim Wantland, Brandon Barbello
  • Publication number: 20230110421
    Abstract: A computing system can be configured to input model input that includes context data into a machine-learned model and receive model output that describes one or more semantic entities referenced by the context data. The computing system can be configured to provide data descriptive of the semantic entity or entities to the computer application(s) and receive application output(s) respectively from the computing application(s) in response to providing the data descriptive of semantic entity or entities to the computer application(s). The application output(s) received from each computer application can describe available action(s) of the corresponding computer application with respect to the semantic entity or entities. The computing system can be configured to provide at least one indicator to a user that describes the available action(s) of the corresponding computer applications with respect to the semantic entity or entities.
    Type: Application
    Filed: December 15, 2022
    Publication date: April 13, 2023
    Inventors: Tim Wantland, Brandon Barbello, Robert Berry
  • Publication number: 20230026521
    Abstract: A method includes, while a user device is using a first presentation mode to present content to a user, obtaining a current state of the user of the user device. The method also includes, based on the current state of the user, providing, as output from a user interface of the user device, a user-selectable option that when selected causes the user device to use a second presentation mode to present the content to the user. The method further includes, in response to receiving a user input indication indicating selection of the user-selectable option, initiating presentation of the content using the second presentation mode.
    Type: Application
    Filed: July 26, 2021
    Publication date: January 26, 2023
    Applicant: Google LLC
    Inventors: Kristin A. Gray, Tim Wantland, Matthew Stokes, Bingying Xia, Karen Vertierra, Melissa Barnhart, Gus Winkleman
  • Patent number: 11553063
    Abstract: A computing system can be configured to input model input that includes context data into a machine-learned model and receive model output that describes one or more semantic entities referenced by the context data. The computing system can be configured to provide data descriptive of the semantic entity or entities to the computer application(s) and receive application output(s) respectively from the computing application(s) in response to providing the data descriptive of semantic entity or entities to the computer application(s). The application output(s) received from each computer application can describe available action(s) of the corresponding computer application with respect to the semantic entity or entities. The computing system can be configured to provide at least one indicator to a user that describes the available action(s) of the corresponding computer applications with respect to the semantic entity or entities.
    Type: Grant
    Filed: January 10, 2019
    Date of Patent: January 10, 2023
    Assignee: GOOGLE LLC
    Inventors: Tim Wantland, Robert Berry, Brandon Barbello
  • Patent number: 11514672
    Abstract: Provided are methods, systems, and devices for generating semantic objects and an output based on the detection or recognition of the state of an environment that includes objects. State data, based in part on sensor output, can be received from one or more sensors that detect a state of an environment including objects. Based in part on the state data, semantic objects are generated. The semantic objects can correspond to the objects and include a set of attributes. Based in part on the set of attributes of the semantic objects, one or more operating modes, associated with the semantic objects can be determined. Based in part on the one or more operating modes, object outputs associated with the semantic objects can be generated. The object outputs can include one or more visual indications or one or more audio indications.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: November 29, 2022
    Assignee: GOOGLE LLC
    Inventors: Tim Wantland, Donald A. Barnett, David Matthew Jones
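
Patent 11514672 above describes turning sensor-derived state data into semantic objects with attributes, choosing operating modes from those attributes, and producing visual or audio outputs. The sketch below walks that chain with toy detections; the `SemanticObject` shape, the policy in `choose_output`, and the example attributes are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class SemanticObject:
    label: str                                  # e.g. "door", "stove"
    attributes: dict[str, str] = field(default_factory=dict)

def objects_from_state(state_data: list[dict]) -> list[SemanticObject]:
    """Turn raw detections from sensors into semantic objects with attributes."""
    return [SemanticObject(d["label"], {k: v for k, v in d.items() if k != "label"})
            for d in state_data]

def choose_output(obj: SemanticObject) -> str:
    # Hypothetical policy: attribute values select an operating mode, and the
    # mode selects a visual or audio indication.
    if obj.label == "stove" and obj.attributes.get("state") == "on":
        return f"audio alert: the {obj.label} is on"
    return f"visual highlight on the {obj.label}"

detections = [{"label": "stove", "state": "on"}, {"label": "door", "state": "closed"}]
for obj in objects_from_state(detections):
    print(choose_output(obj))
```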
  • Publication number: 20220245520
    Abstract: A computing system can include an artificial intelligence system including one or more machine-learned models that are configured to receive a model input that includes context data, and, in response, output a model output that describes one or more semantic entities referenced by the context data. The computing system can be configured to obtain the context data during a first time interval; input the model input that includes the context data into the machine-learned model(s); receive, as an output of the machine-learned model(s), the model output that describes the one or more semantic entities referenced by the context data; store the model output in at least one tangible, non-transitory computer-readable medium; and provide, for display in a user interface during a second time interval that is after the first time interval, a suggested action with respect to the semantic entity or entities described by the model output.
    Type: Application
    Filed: August 2, 2019
    Publication date: August 4, 2022
    Inventors: Tim Wantland, Melissa Lauren Barnhart, Brian L. Jackson
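
Publication 20220245520 above stores model output produced during a first time interval so a suggested action can be surfaced during a later interval. The class below is a minimal sketch of that deferred-suggestion idea; `SuggestionStore`, the reminder phrasing, and the capitalized-token entity stub are all hypothetical stand-ins for the machine-learned components the abstract describes.

```python
import time

def model_output(context: str) -> list[str]:
    """Stand-in for the machine-learned model's semantic entities."""
    return [w for w in context.split() if w[:1].isupper()]

class SuggestionStore:
    """Keeps model outputs from an earlier time interval so a suggestion can
    be shown later, e.g. the next time the user opens a relevant app."""

    def __init__(self):
        self._stored: list[tuple[float, list[str]]] = []

    def capture(self, context: str) -> None:
        self._stored.append((time.time(), model_output(context)))

    def suggestions_for_later(self) -> list[str]:
        return [f"Create a reminder about {entity}"
                for _, entities in self._stored for entity in entities]

store = SuggestionStore()
store.capture("Lunch with Priya at Blue Bottle on Tuesday")  # first time interval
for suggestion in store.suggestions_for_later():             # later time interval
    print(suggestion)
```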
  • Patent number: 11289084
    Abstract: Provided are methods, systems, and devices for generating semantic objects and an output based on the detection or recognition of the state of an environment that includes objects. State data, based in part on sensor output, can be received from one or more sensors that detect a state of an environment including objects. Based in part on the state data, semantic objects are generated. The semantic objects can correspond to the objects and include a set of attributes. Based in part on the set of attributes of the semantic objects, one or more operating modes, associated with the semantic objects can be determined. Based in part on the one or more operating modes, object outputs associated with the semantic objects can be generated. The object outputs can include one or more visual indications or one or more audio indications.
    Type: Grant
    Filed: November 20, 2019
    Date of Patent: March 29, 2022
    Assignee: GOOGLE LLC
    Inventors: Tim Wantland, Donald A. Barnett, David Matthew Jones, Christopher Breithaupt, Brett Aladdin Barros, Allison Lee Stanfield, Nicholas Aceves, Megan Elizabeth Fazio, Christopher Robert Conover
  • Publication number: 20220021749
    Abstract: A computing system can be configured to input model input that includes context data into a machine-learned model and receive model output that describes one or more semantic entities referenced by the context data. The computing system can be configured to provide data descriptive of the semantic entity or entities to the computer application(s) and receive application output(s) respectively from the computing application(s) in response to providing the data descriptive of semantic entity or entities to the computer application(s). The application output(s) received from each computer application can describe available action(s) of the corresponding computer application with respect to the semantic entity or entities. The computing system can be configured to provide at least one indicator to a user that describes the available action(s) of the corresponding computer applications with respect to the semantic entity or entities.
    Type: Application
    Filed: January 10, 2019
    Publication date: January 20, 2022
    Inventors: Tim Wantland, Robert Berry, Brandon Barbello
  • Publication number: 20210365684
    Abstract: Provided are methods, systems, and devices for generating semantic objects and an output based on the detection or recognition of the state of an environment that includes objects. State data, based in part on sensor output, can be received from one or more sensors that detect a state of an environment including objects. Based in part on the state data, semantic objects are generated. The semantic objects can correspond to the objects and include a set of attributes. Based in part on the set of attributes of the semantic objects, one or more operating modes, associated with the semantic objects can be determined. Based in part on the one or more operating modes, object outputs associated with the semantic objects can be generated. The object outputs can include one or more visual indications or one or more audio indications.
    Type: Application
    Filed: August 5, 2021
    Publication date: November 25, 2021
    Inventors: Tim Wantland, Donald A. Barnett, David Matthew Jones