Patents by Inventor Bryan Christopher Horling
Bryan Christopher Horling has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11948576
Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries, and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
Type: Grant
Filed: April 18, 2023
Date of Patent: April 2, 2024
Assignee: GOOGLE LLC
Inventors: Daniel Cotting, Zaheed Sabur, Lan Huo, Bryan Christopher Horling, Behshad Behzadi, Lucas Mirelmann, Michael Golikov, Denis Burakov, Steve Cheng, Bohdan Vlasyuk, Sergey Nazarov, Mario Bertschler, Luv Kothari
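The subset-selection idea in the abstract above can be pictured with a toy sketch. Everything here (the function, the feature names, and the scoring rule) is invented for illustration and is not taken from the patent: score each candidate cache entry against a device's capabilities and keep only the top-ranked entries.

```python
def select_cache_entries(candidates, device_profile, limit=3):
    """Pick a subset of candidate proactive cache entries for one device.

    Each candidate is a dict with a 'query' and per-feature relevance
    scores; the device profile lists the features the device supports.
    """
    def score(entry):
        # Only count relevance for features this device actually has.
        return sum(v for k, v in entry["relevance"].items()
                   if k in device_profile["features"])

    ranked = sorted(candidates, key=score, reverse=True)
    return [e["query"] for e in ranked[:limit]]

candidates = [
    {"query": "weather today", "relevance": {"screen": 2, "speaker": 3}},
    {"query": "show my photos", "relevance": {"screen": 5}},
    {"query": "play the news", "relevance": {"speaker": 4}},
]
speaker_only = {"features": {"speaker"}}
print(select_cache_entries(candidates, speaker_only, limit=2))
# -> ['play the news', 'weather today']
```

A screen-equipped device scored the same way would instead rank "show my photos" first, which is the point of per-device subsets.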
-
Publication number: 20240064110
Abstract: Implementations set forth herein relate to conditionally delaying fulfillment of client update requests in order to preserve network bandwidth and other resources that may be consumed when an ecosystem of linked assistant devices is repeatedly pinging servers for updates. In some implementations, a server device can delay and/or bypass fulfillment of a client request based on one or more indications that certain requested data is currently, or is expected to be, expired. For example, a user that is modifying assistant settings via a cellular device can cause an update notification to be pushed to several other assistant devices before the user finishes editing the assistant settings. Implementations herein can limit fulfillment of update requests from the client devices according to certain criteria, such as whether the user is continuing to modify the assistant settings from their cellular device.
Type: Application
Filed: October 30, 2023
Publication date: February 22, 2024
Inventors: Benedict Liang, Bryan Christopher Horling, Lan Huo
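One way to picture this conditional delay (a hypothetical sketch; the class, the fixed "settle" window, and the field names are all invented, not from the publication) is a server that answers "retry later" while the requested data is still expected to change:

```python
class UpdateServer:
    """Toy server that delays fulfilling client update requests while the
    requested settings are still likely to change (e.g. the user is
    mid-edit), instead of serving soon-to-be-stale data."""

    def __init__(self, settle_seconds=5.0):
        self.settle_seconds = settle_seconds
        self.settings = {}
        self.last_edit = 0.0

    def edit(self, key, value, now):
        self.settings[key] = value
        self.last_edit = now

    def handle_update_request(self, now):
        # A very recent edit suggests the data will be superseded soon;
        # tell the client to retry rather than fulfilling immediately.
        if now - self.last_edit < self.settle_seconds:
            return {"status": "retry_later"}
        return {"status": "ok", "settings": dict(self.settings)}

server = UpdateServer(settle_seconds=5.0)
server.edit("language", "fr", now=100.0)
print(server.handle_update_request(now=102.0))  # still settling
print(server.handle_update_request(now=110.0))  # settled, fulfilled
```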
-
Patent number: 11907214
Abstract: Implementations set forth herein relate to conditionally caching responses to automated assistant queries according to certain contextual data that may be associated with each automated assistant query. Each query can be identified based on historical interactions between a user and an automated assistant, and, depending on the query, fulfillment data can be cached according to certain contextual data that influences the query response. Depending on how the contextual data changes, a cached response stored at a client device can be discarded and/or replaced with an updated cached response. For example, a query that users commonly ask prior to leaving for work can have a corresponding assistant response that depends on features of an environment of the users. This unique assistant response can be cached, before the users provide the query, to minimize latency that can occur when network or processing bandwidth is unpredictable.
Type: Grant
Filed: January 30, 2023
Date of Patent: February 20, 2024
Assignee: GOOGLE LLC
Inventors: Benedict Liang, Bryan Christopher Horling, Lan Huo, Anarghya Mitra
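Discarding a cached response when its underlying context changes can be illustrated with a toy sketch (the class and all names are invented, not the patented method): store the response alongside a fingerprint of the contextual data it depended on, and treat a lookup under different context as a miss.

```python
def context_fingerprint(context):
    # Stable, order-independent fingerprint of the contextual data.
    return tuple(sorted(context.items()))

class ContextualCache:
    """Caches an assistant response together with the context it was
    computed under; a lookup under different context misses, so the
    stale entry can be recomputed and replaced."""

    def __init__(self):
        self._store = {}

    def put(self, query, context, response):
        self._store[query] = (context_fingerprint(context), response)

    def get(self, query, context):
        entry = self._store.get(query)
        if entry and entry[0] == context_fingerprint(context):
            return entry[1]
        return None  # context changed (or never cached): recompute

cache = ContextualCache()
morning = {"location": "home", "time_of_day": "morning"}
cache.put("commute time", morning, "25 minutes via I-90")
print(cache.get("commute time", morning))                     # hit
print(cache.get("commute time", {"location": "office",
                                 "time_of_day": "evening"}))  # miss -> None
```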
-
Publication number: 20230367835
Abstract: Implementations relate to determining whether and/or how to implement a user request to prevent a particular search result from being provided in response to a search query. Some of those implementations grant or deny the request based on processing of the particular search result, the search query, and/or account information for the user submitting the request. For example, some implementations process such information utilizing a classifier in determining whether to automatically deny the request, automatically approve the request, or provide the request for manual review. Some additional or alternative implementations at least selectively expand automatically (or suggest for automatic expansion) an approval of a request to cover search result(s) and/or search queries that are not specified in the request.
Type: Application
Filed: February 7, 2023
Publication date: November 16, 2023
Inventors: Divya Sharma, Wei Chen, Ron Eden, Maryam Garrett, Bryan Christopher Horling, Angel Rodriguez, Sean Jordan, Onur Ozdemir, Molly Murphy
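The three-way routing described above (auto-deny, auto-approve, or manual review) can be pictured as simple thresholding on a classifier score. The function name, thresholds, and score scale below are invented for illustration, not taken from the publication.

```python
def triage_removal_request(score, deny_below=0.2, approve_above=0.8):
    """Route a result-removal request based on a classifier score in
    [0, 1], where higher means a stronger case for removal."""
    if score >= approve_above:
        return "auto_approve"
    if score <= deny_below:
        return "auto_deny"
    return "manual_review"  # ambiguous cases go to a human

print(triage_removal_request(0.9))   # auto_approve
print(triage_removal_request(0.1))   # auto_deny
print(triage_removal_request(0.5))   # manual_review
```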
-
Patent number: 11805068
Abstract: Implementations set forth herein relate to conditionally delaying fulfillment of client update requests in order to preserve network bandwidth and other resources that may be consumed when an ecosystem of linked assistant devices is repeatedly pinging servers for updates. In some implementations, a server device can delay and/or bypass fulfillment of a client request based on one or more indications that certain requested data is currently, or is expected to be, expired. For example, a user that is modifying assistant settings via a cellular device can cause an update notification to be pushed to several other assistant devices before the user finishes editing the assistant settings. Implementations herein can limit fulfillment of update requests from the client devices according to certain criteria, such as whether the user is continuing to modify the assistant settings from their cellular device.
Type: Grant
Filed: February 26, 2021
Date of Patent: October 31, 2023
Assignee: GOOGLE LLC
Inventors: Benedict Liang, Bryan Christopher Horling, Lan Huo
-
Publication number: 20230252989
Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries, and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
Type: Application
Filed: April 18, 2023
Publication date: August 10, 2023
Inventors: Daniel Cotting, Zaheed Sabur, Lan Huo, Bryan Christopher Horling, Behshad Behzadi, Lucas Mirelmann, Michael Golikov, Denis Burakov, Steve Cheng, Bohdan Vlasyuk, Sergey Nazarov, Mario Bertschler, Luv Kothari
-
Publication number: 20230177050
Abstract: Implementations set forth herein relate to conditionally caching responses to automated assistant queries according to certain contextual data that may be associated with each automated assistant query. Each query can be identified based on historical interactions between a user and an automated assistant, and, depending on the query, fulfillment data can be cached according to certain contextual data that influences the query response. Depending on how the contextual data changes, a cached response stored at a client device can be discarded and/or replaced with an updated cached response. For example, a query that users commonly ask prior to leaving for work can have a corresponding assistant response that depends on features of an environment of the users. This unique assistant response can be cached, before the users provide the query, to minimize latency that can occur when network or processing bandwidth is unpredictable.
Type: Application
Filed: January 30, 2023
Publication date: June 8, 2023
Inventors: Benedict Liang, Bryan Christopher Horling, Lan Huo, Anarghya Mitra
-
Patent number: 11631412
Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries, and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
Type: Grant
Filed: November 8, 2021
Date of Patent: April 18, 2023
Assignee: GOOGLE LLC
Inventors: Daniel Cotting, Zaheed Sabur, Lan Huo, Bryan Christopher Horling, Behshad Behzadi, Lucas Mirelmann, Michael Golikov, Denis Burakov, Steve Cheng, Bohdan Vlasyuk, Sergey Nazarov, Mario Bertschler, Luv Kothari
-
Publication number: 20230084294
Abstract: Implementations relate to determining multilingual content to render at an interface in response to a user-submitted query. Those implementations further relate to determining a first-language response and a second-language response to a query that is submitted to an automated assistant. Some of those implementations relate to determining multilingual content that includes a response to the query in both the first and second languages. Other implementations relate to determining multilingual content that includes a query suggestion in the first language and a query suggestion in the second language. Some of those implementations relate to pre-fetching results for the query suggestions prior to rendering the multilingual content.
Type: Application
Filed: September 15, 2021
Publication date: March 16, 2023
Inventors: Wangqing Yuan, Bryan Christopher Horling, David Kogan
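The multilingual content described above might be assembled roughly as follows. This is a toy sketch: the function, the content structure, and the field names are invented for illustration and are not from the publication.

```python
def build_multilingual_content(response_by_lang, suggestion_by_lang,
                               primary, secondary):
    """Assemble interface content holding a response to the query in a
    first and a second language, plus one query suggestion per language.

    Because the suggestions are known before rendering, their results
    could be pre-fetched so a tap on either suggestion resolves quickly.
    """
    return [
        {"kind": "response", "lang": primary,
         "text": response_by_lang[primary]},
        {"kind": "response", "lang": secondary,
         "text": response_by_lang[secondary]},
        {"kind": "suggestion", "lang": primary,
         "query": suggestion_by_lang[primary]},
        {"kind": "suggestion", "lang": secondary,
         "query": suggestion_by_lang[secondary]},
    ]

content = build_multilingual_content(
    {"en": "It is about 30 km away.", "es": "Está a unos 30 km."},
    {"en": "distance to Boston", "es": "distancia a Boston"},
    primary="en", secondary="es")
for item in content:
    print(item["kind"], item["lang"])
```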
-
Patent number: 11567935
Abstract: Implementations set forth herein relate to conditionally caching responses to automated assistant queries according to certain contextual data that may be associated with each automated assistant query. Each query can be identified based on historical interactions between a user and an automated assistant, and, depending on the query, fulfillment data can be cached according to certain contextual data that influences the query response. Depending on how the contextual data changes, a cached response stored at a client device can be discarded and/or replaced with an updated cached response. For example, a query that users commonly ask prior to leaving for work can have a corresponding assistant response that depends on features of an environment of the users. This unique assistant response can be cached, before the users provide the query, to minimize latency that can occur when network or processing bandwidth is unpredictable.
Type: Grant
Filed: March 30, 2021
Date of Patent: January 31, 2023
Assignee: Google LLC
Inventors: Benedict Liang, Bryan Christopher Horling, Lan Huo, Anarghya Mitra
-
Publication number: 20220405488
Abstract: Implementations relate to determining a well-formed phrase to suggest that a user submit in lieu of a phrase that is not well formed. The suggestion is rendered via an interface provided to a client device of the user. Those implementations relate to determining that a phrase is not well formed, identifying alternate phrases related to it, and scoring the alternate phrases to select one or more of them to render via the interface. Some of those implementations identify that the phrase is not well formed based on how often it occurs in documents whose creators' primary language is the language of the phrase.
Type: Application
Filed: June 18, 2021
Publication date: December 22, 2022
Inventors: Wangqing Yuan, David Kogan, Vincent Lacey, Guanglei Wang, Shaun Post, Bryan Christopher Horling, Michael Anthony Schuler
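A minimal sketch of that scoring idea, assuming a simple document-frequency signal over native-speaker documents (the function, threshold, and frequency table are invented for illustration, not from the publication): a phrase that rarely appears in such documents is treated as not well formed, and the most frequent related alternative is suggested instead.

```python
def best_alternative(phrase, native_freq, alternatives, threshold=5):
    """If the phrase is rare in documents written by native speakers of
    its language, suggest the most frequent related rephrasing."""
    if native_freq.get(phrase, 0) >= threshold:
        return phrase  # already well formed enough; no suggestion needed
    ranked = sorted(alternatives, key=lambda a: native_freq.get(a, 0),
                    reverse=True)
    return ranked[0]

freq = {"how old is the universe": 120,
        "what age universe have": 0,
        "what is the age of the universe": 300}
print(best_alternative("what age universe have", freq,
                       ["how old is the universe",
                        "what is the age of the universe"]))
# -> what is the age of the universe
```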
-
Publication number: 20220318248
Abstract: Implementations set forth herein relate to conditionally caching responses to automated assistant queries according to certain contextual data that may be associated with each automated assistant query. Each query can be identified based on historical interactions between a user and an automated assistant, and, depending on the query, fulfillment data can be cached according to certain contextual data that influences the query response. Depending on how the contextual data changes, a cached response stored at a client device can be discarded and/or replaced with an updated cached response. For example, a query that users commonly ask prior to leaving for work can have a corresponding assistant response that depends on features of an environment of the users. This unique assistant response can be cached, before the users provide the query, to minimize latency that can occur when network or processing bandwidth is unpredictable.
Type: Application
Filed: March 30, 2021
Publication date: October 6, 2022
Inventors: Benedict Liang, Bryan Christopher Horling, Lan Huo, Anarghya Mitra
-
Publication number: 20220272048
Abstract: Implementations set forth herein relate to conditionally delaying fulfillment of client update requests in order to preserve network bandwidth and other resources that may be consumed when an ecosystem of linked assistant devices is repeatedly pinging servers for updates. In some implementations, a server device can delay and/or bypass fulfillment of a client request based on one or more indications that certain requested data is currently, or is expected to be, expired. For example, a user that is modifying assistant settings via a cellular device can cause an update notification to be pushed to several other assistant devices before the user finishes editing the assistant settings. Implementations herein can limit fulfillment of update requests from the client devices according to certain criteria, such as whether the user is continuing to modify the assistant settings from their cellular device.
Type: Application
Filed: February 26, 2021
Publication date: August 25, 2022
Inventors: Benedict Liang, Bryan Christopher Horling, Lan Huo
-
Publication number: 20220059093
Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries, and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
Type: Application
Filed: November 8, 2021
Publication date: February 24, 2022
Inventors: Daniel Cotting, Zaheed Sabur, Lan Huo, Bryan Christopher Horling, Behshad Behzadi, Lucas Mirelmann, Michael Golikov, Denis Burakov, Steve Cheng, Bohdan Vlasyuk, Sergey Nazarov, Mario Bertschler, Luv Kothari
-
Patent number: 11194874
Abstract: Systems and methods for ranking communities based on content are described. A method includes receiving a search query from a user device of a first user of a social network. The method further includes analyzing content within groups of the social network to identify one or more groups that have content related to the search query. The method may further include ranking the identified groups for presentation in a ranked order on a client device in response to the search query, where the ranking is based on the proportion or total number of members that have posted content matching the search query, and on spam content posted within the groups by their members.
Type: Grant
Filed: May 25, 2018
Date of Patent: December 7, 2021
Assignee: Google LLC
Inventors: Bryan Christopher Horling, Okan Kolak
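A toy sketch of ranking groups by members with matching posts while penalizing spam (the scoring weights, data shape, and names are invented for illustration, not taken from the patent):

```python
def rank_groups(groups, query_terms):
    """Rank community groups by how many members posted content matching
    the query terms, with a penalty for spam posts in the group."""
    def score(group):
        # Count each member at most once, however many matching posts.
        matching_members = sum(
            1 for posts in group["posts"].values()
            if any(query_terms & set(p.lower().split()) for p in posts)
        )
        return matching_members - 2 * group["spam_posts"]

    return [g["name"] for g in sorted(groups, key=score, reverse=True)]

groups = [
    {"name": "astro-club", "spam_posts": 0,
     "posts": {"a": ["telescope tips"], "b": ["best telescope mounts"]}},
    {"name": "deals", "spam_posts": 3,
     "posts": {"c": ["cheap telescope buy now"], "d": ["telescope sale"]}},
]
print(rank_groups(groups, {"telescope"}))
# -> ['astro-club', 'deals']
```

Both groups have two matching members, but the spam penalty pushes "deals" below "astro-club".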
-
Patent number: 11170777
Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries, and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
Type: Grant
Filed: May 31, 2019
Date of Patent: November 9, 2021
Assignee: GOOGLE LLC
Inventors: Daniel Cotting, Zaheed Sabur, Lan Huo, Bryan Christopher Horling, Behshad Behzadi, Lucas Mirelmann, Michael Golikov, Denis Burakov, Steve Cheng, Bohdan Vlasyuk, Sergey Nazarov, Mario Bertschler, Luv Kothari
-
Publication number: 20210074286
Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries, and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
Type: Application
Filed: May 31, 2019
Publication date: March 11, 2021
Inventors: Daniel Cotting, Zaheed Sabur, Lan Huo, Bryan Christopher Horling, Behshad Behzadi, Lucas Mirelmann, Michael Golikov, Denis Burakov, Steve Cheng, Bohdan Vlasyuk, Sergey Nazarov, Mario Bertschler, Luv Kothari
-
Patent number: 10877978
Abstract: Implementations describe ranking communities based on members. A method includes receiving input of data entered into a search field and predicting a search term based on that data. Communities associated with a social network are determined that satisfy a content match rule directed to a match between the predicted search term and information identifying the communities, with each determined community scored based on the results of the content match rule as applied to it. Levels of reputation of the members of the determined communities are determined, the scores are modified based on those levels, and the communities are ranked by the modified scores. Identification of the predicted search term and the determined communities is then provided, with the communities presented in a ranked order in accordance with the ranking.
Type: Grant
Filed: December 11, 2017
Date of Patent: December 29, 2020
Assignee: Google LLC
Inventors: Bryan Christopher Horling, Okan Kolak
-
Patent number: 10484319
Abstract: Methods and apparatus are disclosed for resolving multiple interpretations of an ambiguous temporal term of a resource to a subset of the multiple interpretations. In some implementations, a group of one or more messages is identified, an ambiguous temporal term of the messages is determined, additional content of the messages is determined, and the multiple interpretations of the ambiguous temporal term are resolved to a subset based on the additional content.
Type: Grant
Filed: March 21, 2019
Date of Patent: November 19, 2019
Assignee: GOOGLE LLC
Inventors: Bryan Christopher Horling, Ashutosh Shukla, Antoine Jean Bruguier
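The resolution step can be pictured with a toy sketch (the cue lists, function, and data are invented for illustration, not the patented method): keep only the interpretations supported by words found elsewhere in the same messages.

```python
def resolve_temporal_term(term, interpretations, messages):
    """Resolve an ambiguous temporal term to a subset of its candidate
    interpretations, using additional content from the same messages.

    Each candidate lists single-word cues that support it; candidates
    whose cues appear anywhere in the messages are retained.
    """
    words = set(" ".join(messages).lower().split())
    subset = [i for i in interpretations if i["cues"] & words]
    return subset if subset else interpretations  # no cue matched: keep all

interpretations = [
    {"reading": "5:00 AM", "cues": {"morning", "breakfast", "sunrise"}},
    {"reading": "5:00 PM", "cues": {"dinner", "evening"}},
]
messages = ["Let's meet at 5", "We can grab dinner after"]
print(resolve_temporal_term("at 5", interpretations, messages))
```

Here the word "dinner" in the second message narrows "at 5" to the 5:00 PM reading.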
-
Patent number: 10277543
Abstract: Methods and apparatus are disclosed for resolving multiple interpretations of an ambiguous temporal term of a resource to a subset of the multiple interpretations. In some implementations, a group of one or more messages is identified, an ambiguous temporal term of the messages is determined, additional content of the messages is determined, and the multiple interpretations of the ambiguous temporal term are resolved to a subset based on the additional content.
Type: Grant
Filed: June 26, 2014
Date of Patent: April 30, 2019
Assignee: GOOGLE LLC
Inventors: Bryan Christopher Horling, Ashutosh Shukla, Antoine Jean Bruguier