Patents by Inventor Bryan Christopher Horling

Bryan Christopher Horling has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11948576
    Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries—and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
    Type: Grant
    Filed: April 18, 2023
    Date of Patent: April 2, 2024
    Assignee: GOOGLE LLC
    Inventors: Daniel Cotting, Zaheed Sabur, Lan Huo, Bryan Christopher Horling, Behshad Behzadi, Lucas Mirelmann, Michael Golikov, Denis Burakov, Steve Cheng, Bohdan Vlasyuk, Sergey Nazarov, Mario Bertschler, Luv Kothari
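The subset-selection step described in the abstract above can be sketched as follows. This is a minimal illustration, not the patented method: the `CacheEntry` type, its `score` field (a predicted likelihood that the device's user will issue the query), and the capacity-based ranking are all assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class CacheEntry:
    query: str
    response: str
    score: float  # assumed: predicted likelihood the user will issue this query

def select_cache_subset(candidates, device_capacity):
    """From a superset of candidate proactive cache entries, pick the
    highest-scoring subset that fits the given device's cache budget."""
    ranked = sorted(candidates, key=lambda e: e.score, reverse=True)
    return ranked[:device_capacity]
```

In this sketch, per-device differences arise from per-device scores and capacities, which is one simple way different subsets could be sent to different clients.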
  • Publication number: 20240064110
    Abstract: Implementations set forth herein relate to conditionally delaying fulfillment of client update requests in order to preserve network bandwidth and other resources that may be consumed when an ecosystem of linked assistant devices is repeatedly pinging servers for updates. In some implementations, a server device can delay and/or bypass fulfillment of a client request based on one or more indications that certain requested data is currently, or is expected to be, expired. For example, a user that is modifying assistant settings via a cellular device can cause an update notification to be pushed to several other assistant devices before the user finishes editing the assistant settings. Implementations herein can limit fulfillment of update requests from the client devices according to certain criteria—such as whether the user is continuing to modify the assistant settings from their cellular device.
    Type: Application
    Filed: October 30, 2023
    Publication date: February 22, 2024
    Inventors: Benedict Liang, Bryan Christopher Horling, Lan Huo
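The delay criterion in the abstract above can be illustrated with a simple timing check. The "settle window" idea and the parameter names are assumptions for illustration only: while the user's most recent settings edit is newer than the window, pushed data would likely be stale by the time other devices fetch it, so fulfillment is deferred.

```python
def should_delay_fulfillment(last_edit_ts, now, settle_seconds=30.0):
    """Return True while the user is still actively editing settings,
    i.e. the requested data is expected to be expired soon anyway."""
    return (now - last_edit_ts) < settle_seconds
```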
  • Patent number: 11907214
    Abstract: Implementations set forth herein relate to conditionally caching responses to automated assistant queries according to certain contextual data that may be associated with each automated assistant query. Each query can be identified based on historical interactions between a user and an automated assistant, and, depending on the query, fulfillment data can be cached according to certain contextual data that influences the query response. Depending on how the contextual data changes, a cached response stored at a client device can be discarded and/or replaced with an updated cached response. For example, a query that users commonly ask prior to leaving for work can have a corresponding assistant response that depends on features of an environment of the users. This unique assistant response can be cached, before the users provide the query, to minimize latency that can occur when network or processing bandwidth is unpredictable.
    Type: Grant
    Filed: January 30, 2023
    Date of Patent: February 20, 2024
    Assignee: GOOGLE LLC
    Inventors: Benedict Liang, Bryan Christopher Horling, Lan Huo, Anarghya Mitra
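The context-conditioned invalidation described above can be sketched as a cache keyed by query, where each entry remembers the contextual data that influenced its response; a lookup under changed context discards the stale entry. The class name and the exact-match comparison are illustrative assumptions.

```python
class ContextualCache:
    """Cache of assistant responses, each tied to the context it was computed under."""

    def __init__(self):
        self._store = {}  # query -> (context, response)

    def put(self, query, context, response):
        self._store[query] = (context, response)

    def get(self, query, current_context):
        entry = self._store.get(query)
        if entry is None:
            return None
        cached_context, response = entry
        if cached_context != current_context:
            # The context that influenced this response has changed: discard it.
            del self._store[query]
            return None
        return response
```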
  • Publication number: 20230367835
    Abstract: Implementations relate to determining whether and/or how to implement a user request to prevent a particular search result from being provided in response to a search query. Some of those implementations grant or deny the request based on processing of the particular search result, the search query, and/or account information for a user submitting the user request. For example, some implementations process such information utilizing a classifier in determining whether to automatically deny the request, automatically approve the request, or to provide the request for manual review. Some additional or alternative implementations at least selectively automatically expand (or suggest for automatic expansion) an approval of a request to search result(s) and/or to one or more search queries that are not specified in the request.
    Type: Application
    Filed: February 7, 2023
    Publication date: November 16, 2023
    Inventors: Divya Sharma, Wei Chen, Ron Eden, Maryam Garrett, Bryan Christopher Horling, Angel Rodriguez, Sean Jordan, Onur Ozdemir, Molly Murphy
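The three-way triage in the abstract above (automatic denial, automatic approval, or manual review) can be sketched as thresholding a classifier's output. The probability input and both thresholds are assumptions; the abstract does not specify how the classifier's decision is made.

```python
def triage_removal_request(approve_prob, deny_threshold=0.2, approve_threshold=0.8):
    """Map a classifier's approval probability for a search-result removal
    request onto one of three outcomes."""
    if approve_prob >= approve_threshold:
        return "approve"
    if approve_prob <= deny_threshold:
        return "deny"
    return "manual_review"
```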
  • Patent number: 11805068
    Abstract: Implementations set forth herein relate to conditionally delaying fulfillment of client update requests in order to preserve network bandwidth and other resources that may be consumed when an ecosystem of linked assistant devices is repeatedly pinging servers for updates. In some implementations, a server device can delay and/or bypass fulfillment of a client request based on one or more indications that certain requested data is currently, or is expected to be, expired. For example, a user that is modifying assistant settings via a cellular device can cause an update notification to be pushed to several other assistant devices before the user finishes editing the assistant settings. Implementations herein can limit fulfillment of update requests from the client devices according to certain criteria—such as whether the user is continuing to modify the assistant settings from their cellular device.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: October 31, 2023
    Assignee: GOOGLE LLC
    Inventors: Benedict Liang, Bryan Christopher Horling, Lan Huo
  • Publication number: 20230252989
    Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries—and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
    Type: Application
    Filed: April 18, 2023
    Publication date: August 10, 2023
    Inventors: Daniel Cotting, Zaheed Sabur, Lan Huo, Bryan Christopher Horling, Behshad Behzadi, Lucas Mirelmann, Michael Golikov, Denis Burakov, Steve Cheng, Bohdan Vlasyuk, Sergey Nazarov, Mario Bertschler, Luv Kothari
  • Publication number: 20230177050
    Abstract: Implementations set forth herein relate to conditionally caching responses to automated assistant queries according to certain contextual data that may be associated with each automated assistant query. Each query can be identified based on historical interactions between a user and an automated assistant, and, depending on the query, fulfillment data can be cached according to certain contextual data that influences the query response. Depending on how the contextual data changes, a cached response stored at a client device can be discarded and/or replaced with an updated cached response. For example, a query that users commonly ask prior to leaving for work can have a corresponding assistant response that depends on features of an environment of the users. This unique assistant response can be cached, before the users provide the query, to minimize latency that can occur when network or processing bandwidth is unpredictable.
    Type: Application
    Filed: January 30, 2023
    Publication date: June 8, 2023
    Inventors: Benedict Liang, Bryan Christopher Horling, Lan Huo, Anarghya Mitra
  • Patent number: 11631412
    Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries—and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: April 18, 2023
    Assignee: GOOGLE LLC
    Inventors: Daniel Cotting, Zaheed Sabur, Lan Huo, Bryan Christopher Horling, Behshad Behzadi, Lucas Mirelmann, Michael Golikov, Denis Burakov, Steve Cheng, Bohdan Vlasyuk, Sergey Nazarov, Mario Bertschler, Luv Kothari
  • Publication number: 20230084294
    Abstract: Implementations relate to determining multilingual content to render at an interface in response to a user submitted query. Those implementations further relate to determining a first language response and a second language response to a query that is submitted to an automated assistant. Some of those implementations relate to determining multilingual content that includes a response to the query in both the first and second languages. Other implementations relate to determining multilingual content that includes a query suggestion in the first language and a query suggestion in a second language. Some of those implementations relate to pre-fetching results for the query suggestions prior to rendering the multilingual content.
    Type: Application
    Filed: September 15, 2021
    Publication date: March 16, 2023
    Inventors: Wangqing Yuan, Bryan Christopher Horling, David Kogan
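The pairing of first- and second-language responses described above can be sketched as assembling one response per language for a single query. The function name, the nested lookup structure, and the language codes are all assumptions for illustration.

```python
def build_multilingual_content(query, responses_by_lang, first_lang, second_lang):
    """Pair a first-language response with a second-language response
    for one query; a missing response maps to None."""
    return {
        lang: responses_by_lang.get(lang, {}).get(query)
        for lang in (first_lang, second_lang)
    }
```

The pre-fetching mentioned in the abstract would amount to computing (or retrieving) both entries of this mapping before the multilingual content is rendered.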
  • Patent number: 11567935
    Abstract: Implementations set forth herein relate to conditionally caching responses to automated assistant queries according to certain contextual data that may be associated with each automated assistant query. Each query can be identified based on historical interactions between a user and an automated assistant, and, depending on the query, fulfillment data can be cached according to certain contextual data that influences the query response. Depending on how the contextual data changes, a cached response stored at a client device can be discarded and/or replaced with an updated cached response. For example, a query that users commonly ask prior to leaving for work can have a corresponding assistant response that depends on features of an environment of the users. This unique assistant response can be cached, before the users provide the query, to minimize latency that can occur when network or processing bandwidth is unpredictable.
    Type: Grant
    Filed: March 30, 2021
    Date of Patent: January 31, 2023
    Assignee: Google LLC
    Inventors: Benedict Liang, Bryan Christopher Horling, Lan Huo, Anarghya Mitra
  • Publication number: 20220405488
    Abstract: Implementations relate to determining a well-formed phrase to suggest to a user to submit in lieu of a not well-formed phrase. The suggestion is rendered via an interface that is provided to a client device of the user. Those implementations relate to determining that a phrase is not well-formed, identifying alternate phrases that are related to the not well-formed phrase, and scoring the alternate phrases to select one or more of the alternate phrases to render via the interface. Some of those implementations are related to identifying that the phrase is not well-formed based on occurrences of the phrase in documents generated by sources whose primary language is the language of the phrase.
    Type: Application
    Filed: June 18, 2021
    Publication date: December 22, 2022
    Inventors: Wangqing Yuan, David Kogan, Vincent Lacey, Guanglei Wang, Shaun Post, Bryan Christopher Horling, Michael Anthony Schuler
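The well-formedness test and the alternate-phrase scoring described above can be sketched with occurrence counts drawn from documents written by native-language sources. The count threshold and the use of raw counts as the score are assumptions; the abstract does not specify the scoring function.

```python
def is_well_formed(phrase, native_occurrence_counts, min_count=5):
    """Treat a phrase as well-formed if it occurs often enough in documents
    whose creators' primary language matches the phrase's language."""
    return native_occurrence_counts.get(phrase, 0) >= min_count

def best_alternate(alternates, native_occurrence_counts):
    """Score related alternate phrases and pick the best well-formed one."""
    candidates = [a for a in alternates
                  if is_well_formed(a, native_occurrence_counts)]
    if not candidates:
        return None
    return max(candidates, key=lambda a: native_occurrence_counts[a])
```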
  • Publication number: 20220318248
    Abstract: Implementations set forth herein relate to conditionally caching responses to automated assistant queries according to certain contextual data that may be associated with each automated assistant query. Each query can be identified based on historical interactions between a user and an automated assistant, and, depending on the query, fulfillment data can be cached according to certain contextual data that influences the query response. Depending on how the contextual data changes, a cached response stored at a client device can be discarded and/or replaced with an updated cached response. For example, a query that users commonly ask prior to leaving for work can have a corresponding assistant response that depends on features of an environment of the users. This unique assistant response can be cached, before the users provide the query, to minimize latency that can occur when network or processing bandwidth is unpredictable.
    Type: Application
    Filed: March 30, 2021
    Publication date: October 6, 2022
    Inventors: Benedict Liang, Bryan Christopher Horling, Lan Huo, Anarghya Mitra
  • Publication number: 20220272048
    Abstract: Implementations set forth herein relate to conditionally delaying fulfillment of client update requests in order to preserve network bandwidth and other resources that may be consumed when an ecosystem of linked assistant devices is repeatedly pinging servers for updates. In some implementations, a server device can delay and/or bypass fulfillment of a client request based on one or more indications that certain requested data is currently, or is expected to be, expired. For example, a user that is modifying assistant settings via a cellular device can cause an update notification to be pushed to several other assistant devices before the user finishes editing the assistant settings. Implementations herein can limit fulfillment of update requests from the client devices according to certain criteria—such as whether the user is continuing to modify the assistant settings from their cellular device.
    Type: Application
    Filed: February 26, 2021
    Publication date: August 25, 2022
    Inventors: Benedict Liang, Bryan Christopher Horling, Lan Huo
  • Publication number: 20220059093
    Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries—and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
    Type: Application
    Filed: November 8, 2021
    Publication date: February 24, 2022
    Inventors: Daniel Cotting, Zaheed Sabur, Lan Huo, Bryan Christopher Horling, Behshad Behzadi, Lucas Mirelmann, Michael Golikov, Denis Burakov, Steve Cheng, Bohdan Vlasyuk, Sergey Nazarov, Mario Bertschler, Luv Kothari
  • Patent number: 11194874
    Abstract: Systems and methods for ranking communities based on content are described. A method includes receiving a search query from a user device of a first user of a social network. The method further includes analyzing content within groups of the social network to identify one or more of the groups that have content related to the search query. The method may further include ranking the identified groups for presentation of the identified groups in a ranked order on a client device in response to the search query, where ranking of the identified groups is based on a corresponding majority or total number of members that have posted content matching the search query, and on spam content used within the groups by members of the groups.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: December 7, 2021
    Assignee: Google LLC
    Inventors: Bryan Christopher Horling, Okan Kolak
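The ranking signal described above (the number of members posting matching content, offset by spam within the group) can be sketched as a simple score. The linear penalty for spam posts is an assumption; the abstract only says spam content factors into the ranking.

```python
def rank_groups(groups):
    """Rank social-network groups for a search query.

    Each group is a dict with 'name', 'matching_posters' (members who posted
    content matching the query), and 'spam_posts' (spam content in the group).
    """
    def score(g):
        # Assumed scoring: reward matching posters, penalize spam linearly.
        return g["matching_posters"] - g["spam_posts"]
    return sorted(groups, key=score, reverse=True)
```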
  • Patent number: 11170777
    Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries—and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: November 9, 2021
    Assignee: GOOGLE LLC
    Inventors: Daniel Cotting, Zaheed Sabur, Lan Huo, Bryan Christopher Horling, Behshad Behzadi, Lucas Mirelmann, Michael Golikov, Denis Burakov, Steve Cheng, Bohdan Vlasyuk, Sergey Nazarov, Mario Bertschler, Luv Kothari
  • Publication number: 20210074286
    Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries—and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
    Type: Application
    Filed: May 31, 2019
    Publication date: March 11, 2021
    Inventors: Daniel Cotting, Zaheed Sabur, Lan Huo, Bryan Christopher Horling, Behshad Behzadi, Lucas Mirelmann, Michael Golikov, Denis Burakov, Steve Cheng, Bohdan Vlasyuk, Sergey Nazarov, Mario Bertschler, Luv Kothari
  • Patent number: 10877978
    Abstract: Implementations describe ranking communities based on members. A method includes receiving input of data entered into a search field, predicting a search term based on the data, determining communities associated with a social network that satisfy a content match rule directed to a match between the predicted search term and information identifying the communities, the determined communities having scores that are based on results of the content match rule as applied to the communities, determining levels of reputations of the members of the determined communities that satisfy the content match rule, modifying the scores for the determined communities based on the determined levels of the reputations of the members, ranking the determined communities based on the modified scores, and providing identification of the predicted search term and the determined communities, where the determined communities are presented in a ranked order in accordance with the ranking.
    Type: Grant
    Filed: December 11, 2017
    Date of Patent: December 29, 2020
    Assignee: Google LLC
    Inventors: Bryan Christopher Horling, Okan Kolak
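The score modification described above (a content-match score adjusted by member reputation levels) can be sketched as follows. Scaling by the average reputation is an assumption introduced here; the abstract does not specify how reputation modifies the score.

```python
def score_community(content_match_score, member_reputations):
    """Modify a community's content-match score by the reputation levels
    of its members (here: scaled by the average reputation)."""
    if not member_reputations:
        return content_match_score
    avg_reputation = sum(member_reputations) / len(member_reputations)
    return content_match_score * avg_reputation
```

Communities would then be ranked by the modified score, and the predicted search term plus the ranked communities returned together.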
  • Patent number: 10484319
    Abstract: Methods and apparatus are disclosed for resolving multiple interpretations of an ambiguous temporal term of a resource to a subset of the multiple interpretations. In some implementations, a group of one or more messages is identified, an ambiguous temporal term of the messages determined, additional content of the messages determined, and multiple interpretations of the ambiguous temporal term resolved to a subset based on the additional content.
    Type: Grant
    Filed: March 21, 2019
    Date of Patent: November 19, 2019
    Assignee: GOOGLE LLC
    Inventors: Bryan Christopher Horling, Ashutosh Shukla, Antoine Jean Bruguier
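The disambiguation step described above (resolving multiple interpretations of an ambiguous temporal term using additional message content) can be sketched with cue words. The cue-word representation and whitespace tokenization are simplifying assumptions.

```python
def resolve_temporal_term(interpretations, message_text):
    """Narrow the candidate interpretations of an ambiguous temporal term
    to the subset consistent with other content of the messages.

    interpretations: dict mapping a candidate meaning -> set of cue words
    that support it when they appear in the surrounding messages.
    """
    words = set(message_text.lower().split())
    return [meaning for meaning, cues in interpretations.items() if cues & words]
```

For example, the date string "6/7" could mean June 7 or July 6; a nearby mention of "June" resolves it to the former.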
  • Patent number: 10277543
    Abstract: Methods and apparatus are disclosed for resolving multiple interpretations of an ambiguous temporal term of a resource to a subset of the multiple interpretations. In some implementations, a group of one or more messages is identified, an ambiguous temporal term of the messages determined, additional content of the messages determined, and multiple interpretations of the ambiguous temporal term resolved to a subset based on the additional content.
    Type: Grant
    Filed: June 26, 2014
    Date of Patent: April 30, 2019
    Assignee: GOOGLE LLC
    Inventors: Bryan Christopher Horling, Ashutosh Shukla, Antoine Jean Bruguier