Patents by Inventor Alexander James Faaborg
Alexander James Faaborg has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11977953
Abstract: The present disclosure relates generally to the processing of machine-readable visual encodings in view of contextual information. One embodiment of aspects of the present disclosure comprises obtaining image data descriptive of a scene that includes a machine-readable visual encoding; processing the image data with a first recognition system configured to recognize the machine-readable visual encoding; processing the image data with a second, different recognition system configured to recognize a surrounding portion of the scene that surrounds the machine-readable visual encoding; identifying a stored reference associated with the machine-readable visual encoding based at least in part on one or more first outputs generated by the first recognition system based on the image data and based at least in part on one or more second outputs generated by the second recognition system based on the image data; and performing one or more actions responsive to identification of the stored reference.
Type: Grant
Filed: October 21, 2022
Date of Patent: May 7, 2024
Assignee: GOOGLE LLC
Inventors: Alexander James Faaborg, Brett Aladdin Barros
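The patent does not publish reference code, but the two-recognizer identification step the abstract describes can be sketched in Python. The function names, scoring weights, and reference-record fields below are illustrative assumptions, not the patented implementation:

```python
# Hypothetical sketch: combine the first recognizer's decoded payload with
# the second recognizer's scene labels to pick one stored reference.

def identify_reference(encoding_output, scene_output, references):
    """Score each stored reference against both recognizers' outputs."""
    best_ref, best_score = None, float("-inf")
    for ref in references:
        score = 0.0
        if encoding_output.get("payload") == ref["payload"]:
            score += 1.0                      # first system: decoded encoding
        shared = set(scene_output.get("labels", [])) & set(ref["context"])
        score += 0.5 * len(shared)            # second system: scene context
        if score > best_score:
            best_ref, best_score = ref, score
    return best_ref

# Two references share a payload; scene context disambiguates them.
refs = [
    {"payload": "menu-42", "context": ["restaurant", "table"]},
    {"payload": "menu-42", "context": ["poster", "street"]},
]
match = identify_reference(
    {"payload": "menu-42"}, {"labels": ["table", "chair"]}, refs)
print(match["context"])  # ['restaurant', 'table']
```

The point of the sketch is only that the surrounding-scene output breaks ties the encoding alone cannot.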
-
Publication number: 20240135126
Abstract: The present disclosure relates generally to the processing of machine-readable visual encodings in view of contextual information. One embodiment of aspects of the present disclosure comprises obtaining image data descriptive of a scene that includes a machine-readable visual encoding; processing the image data with a first recognition system configured to recognize the machine-readable visual encoding; processing the image data with a second, different recognition system configured to recognize a surrounding portion of the scene that surrounds the machine-readable visual encoding; identifying a stored reference associated with the machine-readable visual encoding based at least in part on one or more first outputs generated by the first recognition system based on the image data and based at least in part on one or more second outputs generated by the second recognition system based on the image data; and performing one or more actions responsive to identification of the stored reference.
Type: Application
Filed: December 27, 2023
Publication date: April 25, 2024
Inventors: Alexander James Faaborg, Brett Aladdin Barros
-
Patent number: 11947859
Abstract: A system and method are provided for transferring the execution of content from a user device to an external device for output of the content by the external device. External devices may be detected in a physical space, and identified based on previous connection with the user device, based on a shared network or shared system of connected devices including the user device, based on image information captured by the user device and previously stored anchoring information that identifies the external devices, and the like. An external device may be selected for potential output of the content based on previously stored configuration information associated with the external device including, for example, output capabilities associated with the external device. The identified external device may output the transferred content in response to a user verification input, verifying that the content is to be output by the external device.
Type: Grant
Filed: November 16, 2020
Date of Patent: April 2, 2024
Assignee: GOOGLE LLC
Inventors: Shengzhi Wu, Alexander James Faaborg
-
Patent number: 11941342
Abstract: Gaze data collected from eye gaze tracking performed while training text was read may be used to train at least one layout interpretation model. In this way, the at least one layout interpretation model may be trained to determine current text that includes words arranged according to a layout, process the current text with the at least one layout interpretation model to determine the layout, and output the current text with the words arranged according to the layout.
Type: Grant
Filed: May 26, 2022
Date of Patent: March 26, 2024
Assignee: Google LLC
Inventors: Alexander James Faaborg, Brett Barros
-
Patent number: 11935199
Abstract: A computer-implemented method includes receiving a two-dimensional (2-D) image of a scene captured by a camera, recognizing one or more objects in the scene depicted in the two-dimensional image, and determining whether the one or more recognized objects have known real-world dimensions. The computer-implemented method further includes determining a depth of at least one recognized object having known real-world dimensions from the camera, and overlaying three-dimensional (3-D) augmented reality content over a display of the 2-D image of the scene, taking into account the depth of the at least one recognized object from the camera.
Type: Grant
Filed: July 26, 2021
Date of Patent: March 19, 2024
Assignee: GOOGLE LLC
Inventors: Alexander James Faaborg, Shengzhi Wu
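The depth determination this abstract describes can be illustrated with the standard pinhole-camera relation (an assumption on my part; the patent does not specify the computation): an object of known real-world size that spans fewer pixels must be proportionally farther from the camera.

```python
def depth_from_known_height(focal_px, real_height_m, pixel_height):
    # Pinhole-camera similar triangles:
    #   pixel_height / focal_px = real_height_m / depth
    # so depth = focal_px * real_height_m / pixel_height.
    return focal_px * real_height_m / pixel_height

# A door of known ~2.0 m height spanning 500 px, with an 800 px focal length:
print(depth_from_known_height(800, 2.0, 500))  # 3.2 (metres)
```

With that depth in hand, 3-D AR content can be placed at a plausible scale relative to the recognized object.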
-
Patent number: 11836553
Abstract: The present disclosure relates generally to the processing of machine-readable visual encodings in view of contextual information. One embodiment of aspects of the present disclosure comprises obtaining image data descriptive of a scene that includes a machine-readable visual encoding; processing the image data with a first recognition system configured to recognize the machine-readable visual encoding; processing the image data with a second, different recognition system configured to recognize a surrounding portion of the scene that surrounds the machine-readable visual encoding; identifying a stored reference associated with the machine-readable visual encoding based at least in part on one or more first outputs generated by the first recognition system based on the image data and based at least in part on one or more second outputs generated by the second recognition system based on the image data; and performing one or more actions responsive to identification of the stored reference.
Type: Grant
Filed: August 24, 2022
Date of Patent: December 5, 2023
Assignee: GOOGLE LLC
Inventors: Alexander James Faaborg, Brett Aladdin Barros
-
Publication number: 20230385431
Abstract: A computer-implemented method comprises: detecting, by a first computer system, first content of a tangible instance of a first document; generating, by the first computer system, a first hash using the first content, the first hash including first obfuscation content; sending, by the first computer system, the first hash for receipt by a second computer system; and receiving, by the first computer system, a response to the first hash generated by the second computer system, the response including information corresponding to a second document associated with the first content.
Type: Application
Filed: October 19, 2020
Publication date: November 30, 2023
Inventors: Brett Barros, Alexander James Faaborg
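As a rough illustration of a content hash that "includes obfuscation content", here is a salted-digest sketch; the application does not specify its hashing or obfuscation scheme, so `hash_content` and its salt handling are assumptions:

```python
import hashlib
import os

def hash_content(content, obfuscation=None):
    # Mixing random obfuscation bytes into the digest prevents the raw
    # document content from being dictionary-matched from the hash alone;
    # a second system needs the same bytes to reproduce the hash.
    obfuscation = obfuscation if obfuscation is not None else os.urandom(16)
    digest = hashlib.sha256(obfuscation + content.encode("utf-8")).hexdigest()
    return digest, obfuscation

first_hash, salt = hash_content("detected text of the tangible document")
match_hash, _ = hash_content("detected text of the tangible document", salt)
print(first_hash == match_hash)  # True
```

The second system, given the hash, could respond with information about a stored document whose (identically salted) hash matches.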
-
Publication number: 20230376711
Abstract: Systems and techniques include using a sensor of a computing device to detect the presence of a first portion of a code, the code including at least the first portion and a second portion, where the first portion of the code is decodable and includes an identifier and the second portion of the code is non-decodable. The computing device recognizes the identifier in the first portion of the code and obtains instructions for decoding the second portion of the code using the identifier and/or data associated with the identifier. The instructions to decode the second portion of the code are processed to generate a decoded second portion of the code. The computing device performs an action defined in the decoded second portion of the code.
Type: Application
Filed: October 7, 2020
Publication date: November 23, 2023
Inventors: Brett Barros, Alexander James Faaborg
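A toy sketch of the two-portion scheme: the decodable first portion yields an identifier, the identifier fetches decoding instructions, and those instructions unlock the otherwise non-decodable second portion. The identifier `fmt-7` and the ROT13 "instructions" are stand-in assumptions, not the publication's actual format:

```python
import codecs

# Decoding instructions keyed by the identifier carried in the first
# (standard-decodable) portion of the code. ROT13 stands in for whatever
# transformation the real instructions would describe.
DECODERS = {"fmt-7": lambda raw: codecs.decode(raw, "rot13")}

def decode_second_portion(first_portion, second_portion):
    decoder = DECODERS[first_portion["identifier"]]  # obtained instructions
    return decoder(second_portion)

action = decode_second_portion({"identifier": "fmt-7"}, "bcra_qbbe")
print(action)  # open_door
```

The device would then perform the action named in the decoded second portion.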
-
Publication number: 20230360264
Abstract: According to an aspect, a method of identifying a position of a controllable device includes receiving visual data from an image sensor on a wearable device, generating, by an object recognition module, identification data based on the visual data, and identifying, using the identification data, a first three-dimensional (3D) map from a map database that stores a plurality of 3D maps including the first 3D map and a second 3D map, where the first 3D map is associated with a first controllable device and the second 3D map is associated with a second controllable device. The method includes obtaining a position of the first controllable device in a physical space based on visual positioning data of the first 3D map and rendering a user interface (UI) object on a display in a position that is within a threshold distance of the position of the first controllable device.
Type: Application
Filed: November 16, 2020
Publication date: November 9, 2023
Inventors: Shengzhi Wu, Alexander James Faaborg
-
Publication number: 20230359241
Abstract: According to an aspect, a method includes detecting, by at least one imaging sensor of a wearable device, facial features of an entity, detecting an interactive communication between a user of the wearable device and the entity based on at least image data from the at least one imaging sensor, updating a conversation graph in response to the interactive communication being detected between the user and the entity, and managing content for display on the wearable device based on the conversation graph.
Type: Application
Filed: September 30, 2020
Publication date: November 9, 2023
Inventors: Alexander James Faaborg, Michael Schoenberg
-
Publication number: 20230325620
Abstract: A system and method are provided which allow supplemental information to be used, in combination with a portion of a visual code that is readable, to identify the visual code, and to provide information related to the visual code, even in the event of a scan of a compromised visual code and/or an inadequate scan of the visual code which yields only a portion of the data payload associated with the visual code. The supplemental information may include, for example, location-based information, image-based information, audio-based information, and other types of information which may allow the system to discriminate a location of the scanned visual code, and to identify the scanned visual code based on the portion of the data payload and the supplemental information.
Type: Application
Filed: September 21, 2020
Publication date: October 12, 2023
Inventors: Alexander James Faaborg, Brett Barros
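The partial-payload matching described above might look like the following sketch, where a recovered payload fragment is combined with location-based supplemental information to single out one known code. The record fields and matching rules are illustrative assumptions:

```python
def identify_code(partial_payload, supplemental, known_codes):
    # Keep only codes whose full payload contains the recovered fragment,
    # then use supplemental (here, location-based) information to
    # discriminate between the remaining candidates.
    candidates = [c for c in known_codes if partial_payload in c["payload"]]
    if len(candidates) > 1 and "location" in supplemental:
        candidates = [c for c in candidates
                      if c["location"] == supplemental["location"]]
    return candidates[0] if len(candidates) == 1 else None

codes = [
    {"payload": "store-front-17", "location": "5th Ave"},
    {"payload": "store-front-23", "location": "Main St"},
]
# A damaged scan recovered only "store-front"; the scan location decides.
hit = identify_code("store-front", {"location": "Main St"}, codes)
print(hit["payload"])  # store-front-23
```

A real system would presumably weigh several supplemental signals rather than filter on one, but the disambiguation idea is the same.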
-
Publication number: 20230252739
Abstract: According to an aspect, a method for sharing a collaborative augmented reality (AR) environment includes obtaining, by a sensor system of a first computing system, visual data representing a physical space of an AR environment, where the visual data is used to create a three-dimensional (3D) map of the physical space. The 3D map includes a coordinate space having at least one virtual object added by a user of the first computing system. The method includes broadcasting, by a transducer on the first computing system, an ultrasound signal, where the ultrasound signal includes an identifier associated with the 3D map. The identifier is configured to be detected by a second computing system to join the AR environment.
Type: Application
Filed: January 13, 2023
Publication date: August 10, 2023
Inventors: Shengzhi Wu, Alexander James Faaborg
-
Publication number: 20230113461
Abstract: Systems and methods are described for providing co-presence in an augmented reality environment. The method may include receiving a visual scene within a viewing window depicting a multi-frame real-time visual scene captured by a camera onboard an electronic device associated with the augmented reality environment, identifying a plurality of elements of the visual scene, detecting at least one graphic indicator associated with at least one of the plurality of elements, detecting at least one boundary associated with the at least one element, and generating, in the viewing window and based on the detection of the at least one graphic indicator, Augmented Reality (AR) motion graphics within the detected boundary. In response to determining that content related to the at least one element is available, the method may include retrieving the content and visually indicating an AR tracked control on the at least one element within the viewing window.
Type: Application
Filed: December 14, 2022
Publication date: April 13, 2023
Inventors: Alexander James Faaborg, Kankan Meng, Joost Korngold
-
Patent number: 11606529
Abstract: A method including receiving at least one frame of a video targeted for display on a main display (or within the boundary of the main display), receiving metadata associated with the at least one frame of the video, the metadata being targeted for display on a supplemental display (or outside the boundary of the main display), and formatting the metadata for display on the supplemental display (or outside the boundary of the main display).
Type: Grant
Filed: October 16, 2020
Date of Patent: March 14, 2023
Assignee: Google LLC
Inventors: Brett Barros, Alexander James Faaborg
-
Publication number: 20230075389
Abstract: Three-dimensional (3D) maps may be generated for different areas based on scans of the areas using sensor(s) of a mobile computing device. During each scan, locations of the mobile computing device can be measured relative to a fixed-positioned smart device using ultra-wideband communication (UWB). The 3D maps for the areas may be registered to the fixed position (i.e., anchor position) of the smart device based on the location measurements acquired during the scan so that the 3D maps can be merged into a combined 3D map. The combined (i.e., merged) 3D map may then be used to facilitate location-specific operation of the mobile computing device or other smart device.
Type: Application
Filed: August 24, 2021
Publication date: March 9, 2023
Inventors: Shengzhi Wu, Alexander James Faaborg
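The anchor-based registration can be sketched as translating each scan's points into the UWB anchor's coordinate frame before merging. This is a deliberate simplification: it assumes the scans are already rotationally aligned, and the data layout is invented for illustration:

```python
def merge_maps(maps):
    # Each scan: points in the device's local frame, plus the UWB-measured
    # anchor (smart-device) position expressed in that same frame.
    # Translating every point so the anchor sits at the origin registers
    # all scans to one shared coordinate space.
    merged = []
    for m in maps:
        ax, ay, az = m["anchor"]
        merged.extend((x - ax, y - ay, z - az) for x, y, z in m["points"])
    return merged

# Two scans of the same spot, each in its own local frame:
room_a = {"points": [(1.0, 0.0, 0.0)], "anchor": (1.0, 1.0, 0.0)}
room_b = {"points": [(5.0, 5.0, 0.0)], "anchor": (5.0, 6.0, 0.0)}
print(merge_maps([room_a, room_b]))  # [(0.0, -1.0, 0.0), (0.0, -1.0, 0.0)]
```

Both scans land on the same anchor-relative coordinate, which is exactly what makes the merged map consistent.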
-
Patent number: 11592907
Abstract: A user may routinely wear or hold more than one computing device. One of the computing devices may be a head-mounted computing device configured for augmented reality. The head-mounted computing device may include a camera. While imaging, the camera can consume power and processing resources that diminish a battery of the head-mounted computing device. To improve battery life and to enhance a user's privacy, imaging of the camera can be deactivated during periods when the user is not interacting with the head-mounted computing device and activated when the user wishes to interact with the head-mounted computing device. The activation of the camera can be triggered by gesture data collected by a computing device other than the head-mounted computing device.
Type: Grant
Filed: October 20, 2020
Date of Patent: February 28, 2023
Assignee: Google LLC
Inventors: Shengzhi Wu, Alexander James Faaborg
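A minimal sketch of that activation logic, assuming a simple wrist-gesture vocabulary relayed from a second worn device; the gesture names are invented for illustration:

```python
class HeadsetCamera:
    """Head-mounted camera that stays off until a gesture signals intent."""

    def __init__(self):
        self.active = False  # off by default: saves battery, protects privacy

    def on_gesture(self, gesture):
        # Gesture data arrives from a different device the user wears
        # (e.g. a watch IMU), so the headset camera itself can remain
        # powered down while idle.
        if gesture == "raise_wrist":
            self.active = True
        elif gesture == "lower_wrist":
            self.active = False

cam = HeadsetCamera()
cam.on_gesture("raise_wrist")
print(cam.active)  # True
```

The real trigger classification would run on the second device's sensor stream; only the resulting on/off intent needs to reach the headset.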
-
Patent number: 11585917
Abstract: Three-dimensional (3D) maps may be generated for different areas based on scans of the areas using sensor(s) of a mobile computing device. During each scan, locations of the mobile computing device can be measured relative to a fixed-positioned smart device using ultra-wideband communication (UWB). The 3D maps for the areas may be registered to the fixed position (i.e., anchor position) of the smart device based on the location measurements acquired during the scan so that the 3D maps can be merged into a combined 3D map. The combined (i.e., merged) 3D map may then be used to facilitate location-specific operation of the mobile computing device or other smart device.
Type: Grant
Filed: August 24, 2021
Date of Patent: February 21, 2023
Assignee: GOOGLE LLC
Inventors: Shengzhi Wu, Alexander James Faaborg
-
Publication number: 20230042215
Abstract: The present disclosure relates generally to the processing of machine-readable visual encodings in view of contextual information. One embodiment of aspects of the present disclosure comprises obtaining image data descriptive of a scene that includes a machine-readable visual encoding; processing the image data with a first recognition system configured to recognize the machine-readable visual encoding; processing the image data with a second, different recognition system configured to recognize a surrounding portion of the scene that surrounds the machine-readable visual encoding; identifying a stored reference associated with the machine-readable visual encoding based at least in part on one or more first outputs generated by the first recognition system based on the image data and based at least in part on one or more second outputs generated by the second recognition system based on the image data; and performing one or more actions responsive to identification of the stored reference.
Type: Application
Filed: October 21, 2022
Publication date: February 9, 2023
Inventors: Alexander James Faaborg, Brett Aladdin Barros
-
Publication number: 20230024254
Abstract: The present disclosure provides for device localization using ultra-wideband (UWB) detection and gesture detection using inertial measurement units (IMUs) on one or more wearable devices to control smart devices, such as home assistants, smart lights, smart locks, etc.
Type: Application
Filed: July 26, 2021
Publication date: January 26, 2023
Inventors: Shengzhi Wu, Alexander James Faaborg
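A compact sketch of the control loop this publication implies: UWB ranging picks the nearest smart device, and an IMU-classified gesture selects the action. The gesture vocabulary and the gesture-to-action mapping here are assumptions:

```python
def target_device(ranges):
    # ranges: device name -> UWB-measured distance in metres. Assume the
    # wearable addresses whichever smart device is currently closest.
    return min(ranges, key=ranges.get)

# Hypothetical mapping from IMU-classified gestures to device commands.
GESTURE_ACTIONS = {"flick_up": "on", "flick_down": "off"}

def handle(ranges, gesture):
    return target_device(ranges), GESTURE_ACTIONS[gesture]

print(handle({"lamp": 1.2, "lock": 4.5}, "flick_up"))  # ('lamp', 'on')
```

In practice the localization would likely use angle as well as distance, but nearest-device selection shows the intent.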
-
Publication number: 20230026575
Abstract: A computer-implemented method includes receiving a two-dimensional (2-D) image of a scene captured by a camera, recognizing one or more objects in the scene depicted in the two-dimensional image, and determining whether the one or more recognized objects have known real-world dimensions. The computer-implemented method further includes determining a depth of at least one recognized object having known real-world dimensions from the camera, and overlaying three-dimensional (3-D) augmented reality content over a display of the 2-D image of the scene, taking into account the depth of the at least one recognized object from the camera.
Type: Application
Filed: July 26, 2021
Publication date: January 26, 2023
Inventors: Alexander James Faaborg, Shengzhi Wu