Patents by Inventor Brett Barros
Brett Barros has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11941342
Abstract: Gaze data collected from eye gaze tracking performed while training text was read may be used to train at least one layout interpretation model. In this way, the at least one layout interpretation model may be trained to determine current text that includes words arranged according to a layout, process the current text with the at least one layout interpretation model to determine the layout, and output the current text with the words arranged according to the layout.
Type: Grant
Filed: May 26, 2022
Date of Patent: March 26, 2024
Assignee: Google LLC
Inventors: Alexander James Faaborg, Brett Barros
-
Publication number: 20230410498
Abstract: A computing system can perform image classification within a displayed image, while performing the image classification, determine that a first object of an object class is present within the displayed image, present a prompt within the displayed image, receive, in response to the prompt, a selection of the first object from a user, in response to receiving the selection of the first object, perform a function on the first object, based on the selection of the first object in response to the prompt, determine that the user is familiar with the function, based on determining that the user is familiar with the function, terminate performing image classification within the displayed image, and in response to the user selecting a second object of the object class within the displayed image, perform the function on the second object.
Type: Application
Filed: October 13, 2020
Publication date: December 21, 2023
Inventors: Brett Barros, Megan Fazio
-
Publication number: 20230385431
Abstract: A computer-implemented method comprises: detecting, by a first computer system, first content of a tangible instance of a first document; generating, by the first computer system, a first hash using the first content, the first hash including first obfuscation content; sending, by the first computer system, the first hash for receipt by a second computer system; and receiving, by the first computer system, a response to the first hash generated by the second computer system, the response including information corresponding to a second document associated with the first content.
Type: Application
Filed: October 19, 2020
Publication date: November 30, 2023
Inventors: Brett Barros, Alexander James Faaborg
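The abstract above describes hashing scanned document content together with "obfuscation content" and matching the hash against known documents on a second system. A minimal Python sketch of that general idea follows; the function names, the use of a random salt as the obfuscation content, and the lookup-by-rehashing scheme are all illustrative assumptions, not the patented implementation:

```python
import hashlib
import os

def make_obfuscated_hash(content, salt=None):
    """Hash detected document content together with random bytes.

    The random salt stands in for the "obfuscation content" mentioned in
    the abstract; the actual scheme is not specified here.
    """
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.sha256(salt + content.encode("utf-8")).hexdigest()
    return digest, salt

def match_hash(digest, salt, known_documents):
    """Second system re-hashes each known document with the received salt
    and returns the identifier of the matching document, if any."""
    for doc_id, doc_content in known_documents.items():
        candidate = hashlib.sha256(salt + doc_content.encode("utf-8")).hexdigest()
        if candidate == digest:
            return doc_id
    return None

# First system scans a printed page and sends (digest, salt);
# the second system looks the hash up against documents it knows about.
digest, salt = make_obfuscated_hash("Annual Report 2020")
match = match_hash(digest, salt, {"doc-42": "Annual Report 2020", "doc-7": "Q3 Memo"})
```

Because the salt changes per scan, the same document produces a different hash each time, which is one plausible reading of why the hash "includes obfuscation content."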
-
Publication number: 20230376711
Abstract: Systems and techniques include using a sensor of a computing device to detect the presence of a first portion of a code, the code including at least the first portion and a second portion, where the first portion of the code is decodable and includes an identifier and the second portion of the code is non-decodable. The computing device recognizes the identifier in the first portion of the code and obtains instructions for decoding the second portion of the code using the identifier and/or data associated with the identifier. The instructions to decode the second portion of the code are processed to generate a decoded second portion of the code. The computing device performs an action defined in the decoded second portion of the code.Type: Application
Filed: October 7, 2020
Publication date: November 23, 2023
Inventors: Brett Barros, Alexander James Faaborg
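The two-part code described above, where a decodable identifier is used to look up instructions for decoding the rest of the payload, can be sketched in a few lines of Python. The colon-delimited code format, the registry, and the decoder functions are hypothetical stand-ins for whatever the actual implementation uses:

```python
import base64

# Hypothetical registry mapping an identifier from the decodable first
# portion to instructions for decoding the second, non-decodable portion.
DECODER_REGISTRY = {
    "fmt-b64": lambda payload: base64.b64decode(payload).decode("utf-8"),
    "fmt-rev": lambda payload: payload[::-1],
}

def decode_two_part_code(code):
    """Split a scanned code into its decodable first portion (the
    identifier) and its second portion, then decode the second portion
    using the instructions looked up via the identifier."""
    identifier, _, second_portion = code.partition(":")
    decoder = DECODER_REGISTRY[identifier]
    return decoder(second_portion)

# The device would then perform the action named in the decoded portion.
action = decode_two_part_code("fmt-rev:nepo")
```

In this toy version the "instructions" are local functions; the abstract suggests they may instead be fetched using the identifier, which would correspond to populating the registry over a network.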
-
Publication number: 20230325620
Abstract: A system and method are provided that allow supplemental information to be used, in combination with a readable portion of a visual code, to identify the visual code and to provide information related to it, even when a scan of a compromised visual code, or an inadequate scan of the visual code, yields only a portion of the data payload associated with the code. The supplemental information may include, for example, location-based information, image-based information, audio-based information, and other types of information that may allow the system to determine the location of the scanned visual code and to identify it based on the partial data payload and the supplemental information.
Type: Application
Filed: September 21, 2020
Publication date: October 12, 2023
Inventors: Alexander James Faaborg, Brett Barros
-
Patent number: 11606529
Abstract: A method including receiving at least one frame of a video targeted for display on a main display (or within the boundary of the main display), receiving metadata associated with the at least one frame of the video, the metadata being targeted for display on a supplemental display (or outside the boundary of the main display), and formatting the metadata for display on the supplemental display (or outside the boundary of the main display).
Type: Grant
Filed: October 16, 2020
Date of Patent: March 14, 2023
Assignee: Google LLC
Inventors: Brett Barros, Alexander James Faaborg
-
Publication number: 20230035713
Abstract: Implementations described herein relate to an automated assistant that iteratively renders various GUI elements as a user iteratively provides a spoken utterance, or sequence of spoken utterances, corresponding to a request directed to the automated assistant. These various GUI elements can be dynamically adapted as the user iteratively provides the spoken utterance to assist the user with efficiently completing the request. In some implementations, a generic container graphical element associated with candidate intent(s) can be initially rendered at a display interface of a computing device and dynamically adapted with tailored container graphical elements as a particular intent is determined while the user iteratively provides the spoken utterance.
Type: Application
Filed: November 22, 2021
Publication date: February 2, 2023
Inventors: Brett Barros, Joanne J. Jang, Andrew Schoneweis
-
Publication number: 20220406301
Abstract: Implementations set forth herein relate to an automated assistant that can initialize execution of an assistant command associated with an interpretation that is predicted to be responsive to a user input, while simultaneously providing suggestions for alternative assistant command(s) associated with alternative interpretation(s) that is/are also predicted to be responsive to the user input. The alternative assistant command(s) that are suggested can be selectable such that, when selected, the automated assistant can pivot from executing the assistant command to initializing execution of the selected alternative assistant command(s). Further, the alternative assistant command(s) that are suggested can be partially fulfilled prior to any user selection thereof. Accordingly, implementations set forth herein can enable the automated assistant to quickly and efficiently pivot between assistant commands that are predicted to be responsive to the user input.
Type: Application
Filed: June 16, 2021
Publication date: December 22, 2022
Inventors: Brett Barros, Theo Goguely
-
Publication number: 20220284168
Abstract: Gaze data collected from eye gaze tracking performed while training text was read may be used to train at least one layout interpretation model. In this way, the at least one layout interpretation model may be trained to determine current text that includes words arranged according to a layout, process the current text with the at least one layout interpretation model to determine the layout, and output the current text with the words arranged according to the layout.
Type: Application
Filed: May 26, 2022
Publication date: September 8, 2022
Inventors: Alexander James Faaborg, Brett Barros
-
Publication number: 20220182836
Abstract: Systems and methods are described for authenticating devices. The systems and methods may include detecting, by a sensor on a wearable device, at least one cloud anchor that includes an identifier associated with a network and configured for a physical environment. In response to detecting that a location associated with the at least one cloud anchor is within a threshold distance of the wearable device and that the wearable device has access to the at least one cloud anchor, the systems and methods trigger extraction of the identifier from the at least one cloud anchor. The systems and methods may also include joining the wearable device to the network based on a received authentication corresponding to the extracted identifier.
Type: Application
Filed: December 9, 2020
Publication date: June 9, 2022
Inventors: Alexander James Faaborg, Brett Barros, Michael Schoenberg
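The gating logic in this abstract, releasing a network identifier only when the cloud anchor is both near enough and accessible, reduces to a distance-plus-permission check. A minimal sketch, with an assumed 5-meter threshold and flat positional tuples standing in for real anchor poses:

```python
import math

def maybe_extract_identifier(anchor_pos, device_pos, has_access, identifier,
                             threshold_m=5.0):
    """Return the network identifier only if the anchor's location is
    within the threshold distance of the wearable device and the device
    has access to the anchor; otherwise return None."""
    distance = math.dist(anchor_pos, device_pos)   # Euclidean distance
    if distance <= threshold_m and has_access:
        return identifier
    return None

# Anchor 3 m away, device has access: the identifier is released and
# could then be used to authenticate and join the network.
net_id = maybe_extract_identifier((0, 0, 0), (1, 2, 2), True, "net-01")
```

Both conditions must hold, which matches the abstract's pairing of proximity detection with an access check before extraction is triggered.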
-
Patent number: 11347927
Abstract: Gaze data collected from eye gaze tracking performed while training text was read may be used to train at least one layout interpretation model. In this way, the at least one layout interpretation model may be trained to determine current text that includes words arranged according to a layout, process the current text with the at least one layout interpretation model to determine the layout, and output the current text with the words arranged according to the layout.
Type: Grant
Filed: October 9, 2020
Date of Patent: May 31, 2022
Assignee: Google LLC
Inventors: Alexander James Faaborg, Brett Barros
-
Publication number: 20220124279
Abstract: A method including receiving at least one frame of a video targeted for display on a main display (or within the boundary of the main display), receiving metadata associated with the at least one frame of the video, the metadata being targeted for display on a supplemental display (or outside the boundary of the main display), and formatting the metadata for display on the supplemental display (or outside the boundary of the main display).
Type: Application
Filed: October 16, 2020
Publication date: April 21, 2022
Inventors: Brett Barros, Alexander James Faaborg
-
Publication number: 20220114327
Abstract: Gaze data collected from eye gaze tracking performed while training text was read may be used to train at least one layout interpretation model. In this way, the at least one layout interpretation model may be trained to determine current text that includes words arranged according to a layout, process the current text with the at least one layout interpretation model to determine the layout, and output the current text with the words arranged according to the layout.
Type: Application
Filed: October 9, 2020
Publication date: April 14, 2022
Inventors: Alexander James Faaborg, Brett Barros
-
Publication number: 20220114248
Abstract: A head-mounted device (HMD) may be used to determine an access request for accessing a device. An identifier identifying the device may be received at the HMD and from the device. By verifying receipt of the identifier at the HMD, and that access rights associated with the HMD enable granting of the access request, the access request may be granted.
Type: Application
Filed: October 9, 2020
Publication date: April 14, 2022
Inventors: Brett Barros, Alexander James Faaborg
-
Patent number: 11263819
Abstract: A method includes: triggering rendering of an augmented reality (AR) environment having a viewer configured for generating views of the AR environment; triggering rendering, in the AR environment, of an object with an outside surface visualized using a mesh having a direction oriented away from the object; performing a first determination that the viewer is inside the object as a result of relative movement between the viewer and the object; and in response to the first determination, increasing a transparency of the outside surface, reversing the direction of at least part of the mesh, and triggering rendering of an inside surface of the object using the part of the mesh having the reversed direction, wherein the inside surface is illuminated by light from outside the object due to the increased transparency.
Type: Grant
Filed: June 23, 2020
Date of Patent: March 1, 2022
Assignee: Google LLC
Inventors: Xavier Benavides Palos, Brett Barros
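The response to the viewer entering the object, raising the outside surface's transparency and reversing the mesh direction so the inside surface renders, amounts to a normal flip plus an alpha change. The following toy sketch uses a plain dictionary for the mesh and an assumed transparency increment; a real renderer would operate on actual mesh and material objects:

```python
def enter_object(mesh):
    """On determining the viewer is inside the object: increase the
    outside surface's transparency and reverse the direction (normals)
    of the mesh so its inside surface is rendered instead."""
    mesh["alpha"] = min(1.0, mesh["alpha"] + 0.6)  # 0.0 = opaque here
    mesh["normals"] = [(-nx, -ny, -nz) for (nx, ny, nz) in mesh["normals"]]
    return mesh

# Two face normals pointing away from the object; after entry they
# point inward, and the more transparent outside lets light through.
cube = {"alpha": 0.0, "normals": [(0.0, 0.0, 1.0), (0.0, 1.0, 0.0)]}
cube = enter_object(cube)
```

Flipping normals (or, equivalently, triangle winding) is the standard way to make back faces renderable, which is one plausible reading of "reversing the direction of at least part of the mesh."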
-
Patent number: 11100712
Abstract: A method includes: receiving, in a first device, a relative description file for physical markers that are positioned at locations, the relative description file defining relative positions for each of the physical markers with regard to at least another one of the physical markers; initially localizing a position of the first device among the physical markers by visually capturing any first physical marker of the physical markers using an image sensor of the first device; and recognizing a second physical marker of the physical markers and a location of the second physical marker without a line of sight, the second physical marker recognized using the relative description file.
Type: Grant
Filed: May 13, 2020
Date of Patent: August 24, 2021
Assignee: Google LLC
Inventors: Brett Barros, Xavier Benavides Palos
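The relative description file idea above, locating markers the camera cannot see by chaining offsets from one captured marker, can be illustrated with a small recursive lookup. The file format, marker names, and offsets below are invented for the example; the abstract does not specify any of them:

```python
# Hypothetical relative description file: each marker's position is
# given as an offset (in meters) from another marker.
RELATIVE_DESCRIPTION = {
    "marker_B": {"relative_to": "marker_A", "offset": (4.0, 0.0, 0.0)},
    "marker_C": {"relative_to": "marker_B", "offset": (0.0, 3.0, 0.0)},
}

def locate(marker_id, anchor_id, anchor_pos, rdf):
    """Resolve a marker's absolute position by chaining relative offsets
    back to the one marker the image sensor actually captured."""
    if marker_id == anchor_id:
        return anchor_pos
    entry = rdf[marker_id]
    base = locate(entry["relative_to"], anchor_id, anchor_pos, rdf)
    return tuple(b + o for b, o in zip(base, entry["offset"]))

# The device visually captures marker_A at (1, 1, 0); marker_C is then
# "recognized" without any line of sight to it.
pos_c = locate("marker_C", "marker_A", (1.0, 1.0, 0.0), RELATIVE_DESCRIPTION)
```

Any marker in the file can serve as the initial anchor, which mirrors the abstract's "visually capturing any first physical marker" phrasing.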
-
Patent number: 11043031
Abstract: Systems and methods for inserting and transforming content are provided. For example, the inserted content may include augmented reality content that is inserted into a physical space or a representation of the physical space such as an image. An example system and method may include receiving an image and identifying a physical location associated with a display management entity within the image. The example system and method may also include retrieving content display parameters associated with the display management entity. Additionally, the example system and method may also include identifying content to display and displaying the content using the display parameters associated with the display management entity.
Type: Grant
Filed: October 22, 2018
Date of Patent: June 22, 2021
Assignee: Google LLC
Inventors: Brett Barros, Xavier Benavides Palos
-
Patent number: 10922889
Abstract: Systems and methods for drawing attention to points of interest within inserted content are provided. For example, the inserted content may include augmented reality content that is inserted into a physical space or a representation of the physical space such as an image. An example system and method may include receiving an image and identifying content to display over the image. The system and method may also include identifying a location within the image to display the content and identifying a point of interest of the content. Additionally, the example system and method may also include triggering display of the content overlaid on the image by identifying a portion of the content based on the point of interest, rendering the portion of the content using first shading parameters, and rendering the content other than the portion using second shading parameters.
Type: Grant
Filed: November 19, 2018
Date of Patent: February 16, 2021
Assignee: Google LLC
Inventors: Xavier Benavides Palos, Brett Barros, Paul Bechard
-
Publication number: 20200320798
Abstract: A method includes: triggering rendering of an augmented reality (AR) environment having a viewer configured for generating views of the AR environment; triggering rendering, in the AR environment, of an object with an outside surface visualized using a mesh having a direction oriented away from the object; performing a first determination that the viewer is inside the object as a result of relative movement between the viewer and the object; and in response to the first determination, increasing a transparency of the outside surface, reversing the direction of at least part of the mesh, and triggering rendering of an inside surface of the object using the part of the mesh having the reversed direction, wherein the inside surface is illuminated by light from outside the object due to the increased transparency.
Type: Application
Filed: June 23, 2020
Publication date: October 8, 2020
Inventors: Xavier Benavides Palos, Brett Barros
-
Publication number: 20200273250
Abstract: A method includes: receiving, in a first device, a relative description file for physical markers that are positioned at locations, the relative description file defining relative positions for each of the physical markers with regard to at least another one of the physical markers; initially localizing a position of the first device among the physical markers by visually capturing any first physical marker of the physical markers using an image sensor of the first device; and recognizing a second physical marker of the physical markers and a location of the second physical marker without a line of sight, the second physical marker recognized using the relative description file.
Type: Application
Filed: May 13, 2020
Publication date: August 27, 2020
Inventors: Brett Barros, Xavier Benavides Palos