Patents by Inventor Brett Barros
Brett Barros has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12248843
Abstract: Systems and techniques include using a sensor of a computing device to detect the presence of a first portion of a code, the code including at least the first portion and a second portion, where the first portion of the code is decodable and includes an identifier and the second portion of the code is non-decodable. The computing device recognizes the identifier in the first portion of the code and obtains instructions for decoding the second portion of the code using the identifier and/or data associated with the identifier. The instructions to decode the second portion of the code are processed to generate a decoded second portion of the code. The computing device performs an action defined in the decoded second portion of the code.
Type: Grant
Filed: October 7, 2020
Date of Patent: March 11, 2025
Assignee: Google LLC
Inventors: Brett Barros, Alexander James Faaborg
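The flow this abstract describes can be sketched as follows. The registry, the identifier "vendor-42", and the XOR-based "decoding" scheme are illustrative assumptions for the sketch, not the patented implementation:

```python
# Instructions for decoding the second portion, keyed by the identifier
# found in the decodable first portion (assumed lookup mechanism).
DECODE_INSTRUCTIONS = {
    "vendor-42": lambda payload: bytes(b ^ 0x2A for b in payload).decode(),
}

def scan(code):
    """Read the decodable first portion, use its identifier to obtain
    decode instructions, then decode the otherwise non-decodable second
    portion and return the action it defines."""
    identifier = code["first_portion"]["identifier"]
    decode = DECODE_INSTRUCTIONS[identifier]
    return decode(code["second_portion"])

code = {
    "first_portion": {"identifier": "vendor-42"},
    "second_portion": bytes(ord(c) ^ 0x2A for c in "open_door"),
}
print(scan(code))  # → open_door
```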
-
Patent number: 12174931
Abstract: A head-mounted device (HMD) may be used to determine an access request for accessing a device. An identifier identifying the device may be received at the HMD and from the device. By verifying receipt of the identifier at the HMD, and that access rights associated with the HMD enable granting of the access request, the access request may be granted.
Type: Grant
Filed: October 9, 2020
Date of Patent: December 24, 2024
Assignee: Google LLC
Inventors: Brett Barros, Alexander James Faaborg
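The two checks the abstract names — receipt of the device's identifier at the HMD, and access rights covering that device — can be sketched as a single gate (device names and the rights set are hypothetical):

```python
def grant_access(identifier_received_at_hmd, device_identifier, hmd_access_rights):
    """Grant the access request only if the HMD verifiably received the
    device's identifier AND the HMD's access rights cover that device."""
    received = identifier_received_at_hmd == device_identifier
    permitted = device_identifier in hmd_access_rights
    return received and permitted

print(grant_access("door-7", "door-7", {"door-7", "light-2"}))  # True
print(grant_access("door-7", "door-9", {"door-7"}))             # False
```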
-
Patent number: 12160746
Abstract: Systems and methods are described for authenticating devices. The systems and methods may include detecting, by a sensor on a wearable device, at least one cloud anchor that includes an identifier associated with a network and configured for a physical environment. In response to detecting that a location associated with the at least one cloud anchor is within a threshold distance of the wearable device and detecting that the wearable device has access to the at least one cloud anchor, triggering extraction of the identifier from the at least one cloud anchor. The systems and methods may also include joining the wearable device to the network based on a received authentication corresponding to the extracted identifier.
Type: Grant
Filed: December 9, 2020
Date of Patent: December 3, 2024
Assignee: Google LLC
Inventors: Alexander James Faaborg, Brett Barros, Michael Schoenberg
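The gated sequence in this abstract — proximity check, access check, identifier extraction, then authentication and joining — can be sketched as below. The 5-meter threshold, anchor fields, and `authenticate` callback are assumptions for illustration:

```python
import math

PROXIMITY_THRESHOLD_M = 5.0  # assumed threshold distance

def maybe_join_network(wearable_pos, anchor, accessible_anchor_ids, authenticate):
    """Extract the network identifier from a cloud anchor only when the
    anchor is within the threshold distance and the wearable has access to
    it; join the network if authentication of the extracted identifier
    succeeds. Returns the joined network id, or None."""
    if math.dist(wearable_pos, anchor["location"]) > PROXIMITY_THRESHOLD_M:
        return None
    if anchor["anchor_id"] not in accessible_anchor_ids:
        return None
    network_id = anchor["network_identifier"]  # extraction is triggered here
    return network_id if authenticate(network_id) else None

anchor = {"anchor_id": "a1", "location": (0.0, 3.0), "network_identifier": "office-wifi"}
print(maybe_join_network((0.0, 0.0), anchor, {"a1"}, lambda nid: True))  # office-wifi
```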
-
Patent number: 12120521
Abstract: Systems and methods are described for authenticating devices. The systems and methods may include detecting, by a sensor on a wearable device, at least one cloud anchor that includes an identifier associated with a network and configured for a physical environment. In response to detecting that a location associated with the at least one cloud anchor is within a threshold distance of the wearable device and detecting that the wearable device has access to the at least one cloud anchor, triggering extraction of the identifier from the at least one cloud anchor. The systems and methods may also include joining the wearable device to the network based on a received authentication corresponding to the extracted identifier.
Type: Grant
Filed: December 9, 2020
Date of Patent: October 15, 2024
Assignee: Google LLC
Inventors: Alexander James Faaborg, Brett Barros, Michael Schoenberg
-
Publication number: 20240321288
Abstract: Implementations described herein relate to an automated assistant that iteratively renders various GUI elements as a user iteratively provides a spoken utterance, or sequence of spoken utterances, corresponding to a request directed to the automated assistant. These various GUI elements can be dynamically adapted as the user iteratively provides the spoken utterance to assist the user with efficiently completing the request. In some implementations, a generic container graphical element associated with candidate intent(s) can be initially rendered at a display interface of a computing device and dynamically adapted with tailored container graphical elements as a particular intent is determined while the user iteratively provides the spoken utterance.
Type: Application
Filed: June 5, 2024
Publication date: September 26, 2024
Inventors: Brett Barros, Joanne J. Jang, Andrew Schoneweis
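The generic-to-tailored container adaptation can be sketched as follows. The confidence cutoff, element names, and the toy intent classifier are assumptions, not the implementation described in the application:

```python
def render_container(partial_utterance, classify_intent, confidence_cutoff=0.6):
    """Render a generic container while the intent is still ambiguous;
    swap in a tailored container once a particular intent is determined
    from the partial spoken utterance."""
    intent, confidence = classify_intent(partial_utterance)
    if confidence < confidence_cutoff:
        return {"element": "generic_container"}
    return {"element": "tailored_container", "intent": intent}

def toy_classifier(text):
    # Stand-in for the assistant's intent model.
    if "timer" in text:
        return "set_timer", 0.9
    return "unknown", 0.2

print(render_container("set a", toy_classifier))            # generic
print(render_container("set a timer for ten", toy_classifier))  # tailored
```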
-
Publication number: 20240312459
Abstract: Implementations set forth herein relate to an automated assistant that can initialize execution of an assistant command associated with an interpretation that is predicted to be responsive to a user input, while simultaneously providing suggestions for alternative assistant command(s) associated with alternative interpretation(s) that is/are also predicted to be responsive to the user input. The alternative assistant command(s) that are suggested can be selectable such that, when selected, the automated assistant can pivot from executing the assistant command to initializing execution of the selected alternative assistant command(s). Further, the alternative assistant command(s) that are suggested can be partially fulfilled prior to any user selection thereof. Accordingly, implementations set forth herein can enable the automated assistant to quickly and efficiently pivot between assistant commands that are predicted to be responsive to the user input.
Type: Application
Filed: May 21, 2024
Publication date: September 19, 2024
Inventors: Brett Barros, Theo Goguely
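The execute-while-suggesting pattern can be sketched as below; the callback names, ranking, and the two-suggestion limit are illustrative assumptions:

```python
def handle_request(utterance, interpret, execute, prefetch, max_suggestions=2):
    """Execute the top-ranked interpretation immediately while suggesting
    alternatives and partially fulfilling (prefetching) them, so the
    assistant can pivot cheaply if the user selects one."""
    ranked = interpret(utterance)              # interpretations, best first
    execute(ranked[0])
    alternatives = ranked[1:1 + max_suggestions]
    for alt in alternatives:
        prefetch(alt)                          # partial fulfilment before any selection
    return alternatives

executed, prefetched = [], []
alts = handle_request(
    "play thriller",
    lambda u: ["play song Thriller", "play album Thriller", "play movie Thriller"],
    executed.append,
    prefetched.append,
)
print(executed, alts)
```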
-
Patent number: 12039996
Abstract: Implementations described herein relate to an automated assistant that iteratively renders various GUI elements as a user iteratively provides a spoken utterance, or sequence of spoken utterances, corresponding to a request directed to the automated assistant. These various GUI elements can be dynamically adapted as the user iteratively provides the spoken utterance to assist the user with efficiently completing the request. In some implementations, a generic container graphical element associated with candidate intent(s) can be initially rendered at a display interface of a computing device and dynamically adapted with tailored container graphical elements as a particular intent is determined while the user iteratively provides the spoken utterance.
Type: Grant
Filed: November 22, 2021
Date of Patent: July 16, 2024
Assignee: Google LLC
Inventors: Brett Barros, Joanne J. Jang, Andrew Schoneweis
-
Publication number: 20240232505
Abstract: Gaze data collected from eye gaze tracking performed while training text was read may be used to train at least one layout interpretation model. In this way, the at least one layout interpretation model may be trained to determine current text that includes words arranged according to a layout, process the current text with the at least one layout interpretation model to determine the layout, and output the current text with the words arranged according to the layout.
Type: Application
Filed: March 25, 2024
Publication date: July 11, 2024
Inventors: Alexander James Faaborg, Brett Barros
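The training signal the abstract describes — gaze order over words of a laid-out text — can be sketched minimally. Representing fixations as (timestamp, word_index) pairs is an assumption; the actual model and features are not specified here:

```python
def reading_order_from_gaze(words, fixations):
    """Recover the layout's reading order from gaze data: order words by
    the first fixation that landed on each one while the training text
    was read. `fixations` is a list of (timestamp, word_index) pairs."""
    first_seen = {}
    for timestamp, word_index in fixations:
        first_seen.setdefault(word_index, timestamp)  # keep earliest fixation
    return [words[i] for i in sorted(first_seen, key=first_seen.get)]

words = ["column-two", "column-one", "footer"]
fixations = [(0.1, 1), (0.4, 1), (0.9, 0), (1.5, 2)]
print(reading_order_from_gaze(words, fixations))
```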
-
Patent number: 12027164
Abstract: Implementations set forth herein relate to an automated assistant that can initialize execution of an assistant command associated with an interpretation that is predicted to be responsive to a user input, while simultaneously providing suggestions for alternative assistant command(s) associated with alternative interpretation(s) that is/are also predicted to be responsive to the user input. The alternative assistant command(s) that are suggested can be selectable such that, when selected, the automated assistant can pivot from executing the assistant command to initializing execution of the selected alternative assistant command(s). Further, the alternative assistant command(s) that are suggested can be partially fulfilled prior to any user selection thereof. Accordingly, implementations set forth herein can enable the automated assistant to quickly and efficiently pivot between assistant commands that are predicted to be responsive to the user input.
Type: Grant
Filed: June 16, 2021
Date of Patent: July 2, 2024
Assignee: Google LLC
Inventors: Brett Barros, Theo Goguely
-
Patent number: 11941342
Abstract: Gaze data collected from eye gaze tracking performed while training text was read may be used to train at least one layout interpretation model. In this way, the at least one layout interpretation model may be trained to determine current text that includes words arranged according to a layout, process the current text with the at least one layout interpretation model to determine the layout, and output the current text with the words arranged according to the layout.
Type: Grant
Filed: May 26, 2022
Date of Patent: March 26, 2024
Assignee: Google LLC
Inventors: Alexander James Faaborg, Brett Barros
-
Publication number: 20230410498
Abstract: A computing system can perform image classification within a displayed image and, while doing so, determine that a first object of an object class is present within the displayed image. The system presents a prompt within the displayed image, receives a selection of the first object from a user in response to the prompt, and, in response to receiving the selection, performs a function on the first object. Based on the user's selection in response to the prompt, the system determines that the user is familiar with the function and terminates image classification within the displayed image. When the user then selects a second object of the object class within the displayed image, the system performs the function on the second object.
Type: Application
Filed: October 13, 2020
Publication date: December 21, 2023
Inventors: Brett Barros, Megan Fazio
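The familiarity gate at the heart of this abstract can be sketched as a small state machine; tracking familiarity per object class is an assumption of the sketch:

```python
class FamiliarityGate:
    """Prompt for the first object of a class; once the user selects it in
    response to the prompt, record familiarity so classification and
    prompting can stop for that class."""
    def __init__(self):
        self.familiar = set()  # object classes the user is familiar with

    def should_prompt(self, obj_class):
        # Keep classifying and prompting only until one prompted selection.
        return obj_class not in self.familiar

    def record_selection(self, obj_class):
        # User selected via the prompt: image classification can terminate.
        self.familiar.add(obj_class)

gate = FamiliarityGate()
print(gate.should_prompt("text"))   # first object: prompt is shown
gate.record_selection("text")
print(gate.should_prompt("text"))   # second object: act directly, no prompt
```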
-
Publication number: 20230385431
Abstract: A computer-implemented method comprises: detecting, by a first computer system, first content of a tangible instance of a first document; generating, by the first computer system, a first hash using the first content, the first hash including first obfuscation content; sending, by the first computer system, the first hash for receipt by a second computer system; and receiving, by the first computer system, a response to the first hash generated by the second computer system, the response including information corresponding to a second document associated with the first content.
Type: Application
Filed: October 19, 2020
Publication date: November 30, 2023
Inventors: Brett Barros, Alexander James Faaborg
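A hash "including obfuscation content" can be sketched with a random salt; a salted SHA-256 is an assumed stand-in, since the abstract does not specify the hashing scheme:

```python
import hashlib
import secrets

def obfuscated_hash(content, salt=None):
    """Hash detected document content mixed with random obfuscation bytes,
    so identical content does not produce an identical hash on the wire.
    Passing the salt back in reproduces the digest deterministically."""
    salt = salt if salt is not None else secrets.token_hex(8)
    digest = hashlib.sha256((salt + content).encode()).hexdigest()
    return digest, salt

h1, s1 = obfuscated_hash("quarterly report, page 3")
h2, s2 = obfuscated_hash("quarterly report, page 3")
print(h1 != h2)                                                  # distinct on the wire
print(obfuscated_hash("quarterly report, page 3", s1)[0] == h1)  # reproducible with salt
```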
-
Publication number: 20230376711
Abstract: Systems and techniques include using a sensor of a computing device to detect the presence of a first portion of a code, the code including at least the first portion and a second portion, where the first portion of the code is decodable and includes an identifier and the second portion of the code is non-decodable. The computing device recognizes the identifier in the first portion of the code and obtains instructions for decoding the second portion of the code using the identifier and/or data associated with the identifier. The instructions to decode the second portion of the code are processed to generate a decoded second portion of the code. The computing device performs an action defined in the decoded second portion of the code.
Type: Application
Filed: October 7, 2020
Publication date: November 23, 2023
Inventors: Brett Barros, Alexander James Faaborg
-
Publication number: 20230325620
Abstract: A system and method are provided that allow supplemental information to be used, in combination with a readable portion of a visual code, to identify the visual code and to provide information related to it, even when a scan of a compromised visual code, or an inadequate scan, yields only a portion of the data payload associated with the visual code. The supplemental information may include, for example, location-based information, image-based information, audio-based information, and other types of information that allow the system to discriminate the location of the scanned visual code and to identify it based on the partial data payload and the supplemental information.
Type: Application
Filed: September 21, 2020
Publication date: October 12, 2023
Inventors: Alexander James Faaborg, Brett Barros
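Using supplemental information to disambiguate a partial payload can be sketched as follows; the registry, payload prefixes, and location-only tie-breaking are assumptions of the sketch:

```python
def resolve_partial_code(partial_payload, scan_location, registry):
    """Identify a visual code from a partial payload, using the scan
    location as supplemental information to break ties between codes
    that share the recovered fragment. Returns None if still ambiguous."""
    candidates = [c for c in registry if c["payload"].startswith(partial_payload)]
    if len(candidates) == 1:
        return candidates[0]
    nearby = [c for c in candidates if c["location"] == scan_location]
    return nearby[0] if len(nearby) == 1 else None

registry = [
    {"payload": "MENU-1234", "location": "cafe"},
    {"payload": "MENU-1299", "location": "museum"},
]
print(resolve_partial_code("MENU-12", "museum", registry)["payload"])  # MENU-1299
```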
-
Patent number: 11606529
Abstract: A method includes receiving at least one frame of a video targeted for display on a main display (or within the boundary of the main display); receiving metadata associated with the at least one frame of the video, the metadata being targeted for display on a supplemental display (or outside the boundary of the main display); and formatting the metadata for display on the supplemental display (or outside the boundary of the main display).
Type: Grant
Filed: October 16, 2020
Date of Patent: March 14, 2023
Assignee: Google LLC
Inventors: Brett Barros, Alexander James Faaborg
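The routing and formatting step can be sketched minimally; the metadata fields and the bracketed caption format are illustrative assumptions, as the patent does not prescribe a format:

```python
def route_frame(frame, metadata, main_display, supplemental_display):
    """Send the video frame to the main display, and the associated
    metadata — formatted for the supplemental display — outside the main
    display's boundary."""
    main_display.append(frame)
    caption = f"[{metadata['timestamp']}] {metadata['caption']}"
    supplemental_display.append(caption)

main, side = [], []  # stand-ins for the two display surfaces
route_frame("frame-001", {"timestamp": "00:01", "caption": "Opening scene"}, main, side)
print(main, side)
```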
-
Publication number: 20230035713
Abstract: Implementations described herein relate to an automated assistant that iteratively renders various GUI elements as a user iteratively provides a spoken utterance, or sequence of spoken utterances, corresponding to a request directed to the automated assistant. These various GUI elements can be dynamically adapted as the user iteratively provides the spoken utterance to assist the user with efficiently completing the request. In some implementations, a generic container graphical element associated with candidate intent(s) can be initially rendered at a display interface of a computing device and dynamically adapted with tailored container graphical elements as a particular intent is determined while the user iteratively provides the spoken utterance.
Type: Application
Filed: November 22, 2021
Publication date: February 2, 2023
Inventors: Brett Barros, Joanne J. Jang, Andrew Schoneweis
-
Publication number: 20220406301
Abstract: Implementations set forth herein relate to an automated assistant that can initialize execution of an assistant command associated with an interpretation that is predicted to be responsive to a user input, while simultaneously providing suggestions for alternative assistant command(s) associated with alternative interpretation(s) that is/are also predicted to be responsive to the user input. The alternative assistant command(s) that are suggested can be selectable such that, when selected, the automated assistant can pivot from executing the assistant command to initializing execution of the selected alternative assistant command(s). Further, the alternative assistant command(s) that are suggested can be partially fulfilled prior to any user selection thereof. Accordingly, implementations set forth herein can enable the automated assistant to quickly and efficiently pivot between assistant commands that are predicted to be responsive to the user input.
Type: Application
Filed: June 16, 2021
Publication date: December 22, 2022
Inventors: Brett Barros, Theo Goguely
-
Publication number: 20220284168
Abstract: Gaze data collected from eye gaze tracking performed while training text was read may be used to train at least one layout interpretation model. In this way, the at least one layout interpretation model may be trained to determine current text that includes words arranged according to a layout, process the current text with the at least one layout interpretation model to determine the layout, and output the current text with the words arranged according to the layout.
Type: Application
Filed: May 26, 2022
Publication date: September 8, 2022
Inventors: Alexander James Faaborg, Brett Barros
-
Publication number: 20220182836
Abstract: Systems and methods are described for authenticating devices. The systems and methods may include detecting, by a sensor on a wearable device, at least one cloud anchor that includes an identifier associated with a network and configured for a physical environment. In response to detecting that a location associated with the at least one cloud anchor is within a threshold distance of the wearable device and detecting that the wearable device has access to the at least one cloud anchor, triggering extraction of the identifier from the at least one cloud anchor. The systems and methods may also include joining the wearable device to the network based on a received authentication corresponding to the extracted identifier.
Type: Application
Filed: December 9, 2020
Publication date: June 9, 2022
Inventors: Alexander James Faaborg, Brett Barros, Michael Schoenberg
-
Patent number: 11347927
Abstract: Gaze data collected from eye gaze tracking performed while training text was read may be used to train at least one layout interpretation model. In this way, the at least one layout interpretation model may be trained to determine current text that includes words arranged according to a layout, process the current text with the at least one layout interpretation model to determine the layout, and output the current text with the words arranged according to the layout.
Type: Grant
Filed: October 9, 2020
Date of Patent: May 31, 2022
Assignee: Google LLC
Inventors: Alexander James Faaborg, Brett Barros