Patents by Inventor Shengzhi Wu
Shengzhi Wu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11946831
Abstract: Determining an active jacking force of a tunnel closely undercrossing an existing station includes: acquiring design parameters of stations and geological parameters of a construction site; obtaining a stress solution expression of the active jacking force on the existing station by obtaining a stress solution of any point in a semi-infinite space under the action of a vertical load, replacing the vertical load with a vertical load acting on the existing station, and obtaining a vertical stress solution of the existing station; obtaining a relational expression between the deflection and the stress solution of the existing station; and substituting the stress solution expression of the active jacking force into the relational expression between the deflection and the stress solution of the existing station, according to boundary conditions at both ends of a beam on an elastic foundation, to obtain the active jacking force that controls the deflection of the existing station.
Type: Grant
Filed: April 28, 2023
Date of Patent: April 2, 2024
Assignee: SHANDONG JIANZHU UNIVERSITY
Inventors: Shengzhi Wu, Guangbiao Shao, Erbin Liang, Jun Wang, Guohui Liu, Zhikang Wang, Jianyong Han
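The "stress solution of any point in a semi-infinite space under the action of a vertical load" referenced in the abstract is classically the Boussinesq point-load solution; a minimal sketch of that textbook formula (function name and units are illustrative, not taken from the patent):

```python
import math

def boussinesq_sigma_z(P, x, y, z):
    """Vertical stress at point (x, y, z) in an elastic half-space
    due to a vertical point load P applied at the origin of the
    surface (classical Boussinesq solution):
        sigma_z = 3 P z^3 / (2 pi R^5),  R = sqrt(x^2 + y^2 + z^2)."""
    R = math.sqrt(x * x + y * y + z * z)
    return 3.0 * P * z ** 3 / (2.0 * math.pi * R ** 5)
```

Directly beneath the load (x = y = 0) this reduces to 3P / (2πz²); the patent's method replaces the point load with the distributed load acting on the existing station and integrates accordingly.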
-
Patent number: 11947859
Abstract: A system and method provide for the transfer of the execution of content from a user device to an external device for output of the content by the external device. External devices may be detected in a physical space and identified based on a previous connection with the user device, based on a shared network or shared system of connected devices including the user device, based on image information captured by the user device and previously stored anchoring information that identifies the external devices, and the like. An external device may be selected for potential output of the content based on previously stored configuration information associated with the external device, including, for example, output capabilities associated with the external device. The identified external device may output the transferred content in response to a user verification input verifying that the content is to be output by the external device.
Type: Grant
Filed: November 16, 2020
Date of Patent: April 2, 2024
Assignee: GOOGLE LLC
Inventors: Shengzhi Wu, Alexander James Faaborg
-
Patent number: 11935199
Abstract: A computer-implemented method includes receiving a two-dimensional (2-D) image of a scene captured by a camera, recognizing one or more objects in the scene depicted in the 2-D image, and determining whether the one or more recognized objects have known real-world dimensions. The computer-implemented method further includes determining a depth of at least one recognized object having known real-world dimensions from the camera, and overlaying three-dimensional (3-D) augmented reality content over a display of the 2-D image of the scene considering the depth of the at least one recognized object from the camera.
Type: Grant
Filed: July 26, 2021
Date of Patent: March 19, 2024
Assignee: GOOGLE LLC
Inventors: Alexander James Faaborg, Shengzhi Wu
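Estimating depth from an object of known real-world size typically follows the pinhole-camera similar-triangles relation; a hedged sketch of that standard relation (the function name and example numbers are illustrative, not from the patent):

```python
def depth_from_known_size(focal_length_px, real_size_m, pixel_size_px):
    """Pinhole-camera estimate: an object of real height H metres that
    spans h pixels in an image taken with focal length f (in pixels)
    lies at approximate depth Z = f * H / h metres."""
    return focal_length_px * real_size_m / pixel_size_px
```

For example, a 2 m doorway spanning 100 pixels under a 1000-pixel focal length implies a depth of about 20 m, which can then scale the placement of the overlaid 3-D content.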
-
Patent number: 11886628
Abstract: The present disclosure provides systems and methods for delivering notifications to a device or accessory based on context. A host device may be wirelessly coupled to one or more accessories that are available to receive a notification. The host device may analyze a context for transmitting a notification, such as by analyzing user attention and accessory state. User attention and accessory state may be analyzed from sensor data, such as data from audio inputs, image sensors, and proximity sensors. The host device may determine a content type, such as text, e-mail, news, or download; a content classification, such as urgent, sensitive, or reminder; and a notification type, such as visual, audio, or haptic. The host device may select at least one of the accessories based on the context and transmit the notification to the selected accessory.
Type: Grant
Filed: May 7, 2020
Date of Patent: January 30, 2024
Assignee: Google LLC
Inventors: Elena Jessop Nattinger, Shengzhi Wu, Diane C. Wang
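The context-based selection described above amounts to a routing policy over accessory capabilities and user attention. A minimal rule-based sketch, with an entirely hypothetical policy (the field names and rules are assumptions for illustration, not the patent's actual logic):

```python
def select_accessory(accessories, content_class, user_attention):
    """Pick an accessory for a notification given context.
    accessories: list of dicts with 'name', 'modalities', 'on_body'.
    content_class: 'urgent' | 'sensitive' | 'reminder'.
    user_attention: name of the accessory the user is attending to, or None."""
    # Sensitive content goes only to an on-body (private) accessory.
    if content_class == "sensitive":
        candidates = [a for a in accessories if a["on_body"]]
    else:
        candidates = list(accessories)
    # Prefer the accessory the user is already attending to.
    for a in candidates:
        if a["name"] == user_attention:
            return a
    # Urgent content prefers a haptic-capable accessory.
    if content_class == "urgent":
        for a in candidates:
            if "haptic" in a["modalities"]:
                return a
    return candidates[0] if candidates else None
```

A real system would fold in the sensed accessory state (worn, charging, screen visible) rather than static flags.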
-
Publication number: 20240011864
Abstract: Determining an active jacking force of a tunnel closely undercrossing an existing station includes: acquiring design parameters of stations and geological parameters of a construction site; obtaining a stress solution expression of the active jacking force on the existing station by obtaining a stress solution of any point in a semi-infinite space under the action of a vertical load, replacing the vertical load with a vertical load acting on the existing station, and obtaining a vertical stress solution of the existing station; obtaining a relational expression between the deflection and the stress solution of the existing station; and substituting the stress solution expression of the active jacking force into the relational expression between the deflection and the stress solution of the existing station, according to boundary conditions at both ends of a beam on an elastic foundation, to obtain the active jacking force that controls the deflection of the existing station.
Type: Application
Filed: April 28, 2023
Publication date: January 11, 2024
Applicant: SHANDONG JIANZHU UNIVERSITY
Inventors: Shengzhi WU, Guangbiao SHAO, Erbin LIANG, Jun WANG, Guohui LIU, Zhikang WANG, Jianyong HAN
-
Publication number: 20230360264
Abstract: According to an aspect, a method of identifying a position of a controllable device includes receiving visual data from an image sensor on a wearable device, generating, by an object recognition module, identification data based on the visual data, and identifying, using the identification data, a first three-dimensional (3D) map from a map database that stores a plurality of 3D maps including the first 3D map and a second 3D map, where the first 3D map is associated with a first controllable device and the second 3D map is associated with a second controllable device. The method includes obtaining a position of the first controllable device in a physical space based on visual positioning data of the first 3D map and rendering a user interface (UI) object on a display in a position that is within a threshold distance of the position of the first controllable device.
Type: Application
Filed: November 16, 2020
Publication date: November 9, 2023
Inventors: Shengzhi Wu, Alexander James Faaborg
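The final step above constrains where the UI object may be drawn relative to the device's visually-localized position; the threshold check itself is a simple Euclidean test (names and the 0.2 m default are illustrative assumptions, not from the application):

```python
def should_render_ui(device_pos, ui_pos, threshold_m=0.2):
    """True if a UI object at ui_pos lies within threshold_m metres of
    the controllable device's position, both given as (x, y, z) in the
    same physical-space coordinates."""
    dist = sum((a - b) ** 2 for a, b in zip(device_pos, ui_pos)) ** 0.5
    return dist <= threshold_m
```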
-
Patent number: 11739639
Abstract: A composite support structure, a construction system, and a method are provided. The composite support structure includes a plurality of arc plate rings that are longitudinally arranged along a roadway. A concrete-filled steel tube support is arranged on an inner side or an outer side of each arc plate ring. Each arc plate ring is formed by splicing a plurality of arc plates, and each concrete-filled steel tube support is formed by splicing a plurality of steel pipe sections. The arc plate rings and the concrete-filled steel tube supports are capable of jointly supporting the walls of the roadway. The support structure has high bearing capacity, the construction system has high construction efficiency, and the method has low labor intensity.
Type: Grant
Filed: September 15, 2020
Date of Patent: August 29, 2023
Assignees: SHANDONG JIANZHU UNIVERSITY, ENGINEERING APPRAISAL AND REINFORCEMENT INSTITUTE CO., LTD. SHANDONG JIANZHU UNIVERSITY
Inventors: Jun Wang, Xiang Gao, Xisen Fan, Guohui Liu, Jianyong Han, Shengzhi Wu, Hougang Ding
-
Publication number: 20230252739
Abstract: According to an aspect, a method for sharing a collaborative augmented reality (AR) environment includes obtaining, by a sensor system of a first computing system, visual data representing a physical space of an AR environment, where the visual data is used to create a three-dimensional (3D) map of the physical space. The 3D map includes a coordinate space having at least one virtual object added by a user of the first computing system. The method includes broadcasting, by a transducer on the first computing system, an ultrasound signal, where the ultrasound signal includes an identifier associated with the 3D map. The identifier is configured to be detected by a second computing system to join the AR environment.
Type: Application
Filed: January 13, 2023
Publication date: August 10, 2023
Inventors: Shengzhi Wu, Alexander James Faaborg
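Broadcasting an identifier over ultrasound, as described above, can be sketched as frequency-shift keying in the near-ultrasound band: each identifier bit becomes a short tone burst. All frequencies, durations, and names below are illustrative assumptions, not taken from the application:

```python
import math

def encode_ultrasound_id(id_bits, f0=18000.0, f1=19000.0,
                         bit_duration_s=0.05, sample_rate=48000):
    """Encode a short identifier as near-ultrasound FSK audio samples:
    each bit becomes a tone burst at f0 (for 0) or f1 (for 1).
    Returns a flat list of samples in [-1, 1]."""
    samples = []
    n_per_bit = int(bit_duration_s * sample_rate)
    for bit in id_bits:
        f = f1 if bit else f0
        for n in range(n_per_bit):
            samples.append(math.sin(2 * math.pi * f * n / sample_rate))
    return samples
```

A receiving device would bandpass-filter its microphone input around f0/f1 and decode the bit sequence to look up the matching 3D map.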
-
Publication number: 20230075389
Abstract: Three-dimensional (3D) maps may be generated for different areas based on scans of the areas using sensor(s) of a mobile computing device. During each scan, locations of the mobile computing device can be measured relative to a fixed-position smart device using ultra-wideband (UWB) communication. The 3D maps for the areas may be registered to the fixed position (i.e., anchor position) of the smart device based on the location measurements acquired during the scan so that the 3D maps can be merged into a combined 3D map. The combined (i.e., merged) 3D map may then be used to facilitate location-specific operation of the mobile computing device or another smart device.
Type: Application
Filed: August 24, 2021
Publication date: March 9, 2023
Inventors: Shengzhi Wu, Alexander James Faaborg
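Registering each per-area map to the anchor and merging reduces, in the simplest case, to expressing every scanned point in the anchor's coordinate frame. A translation-only sketch (real registration also estimates rotation; the data layout here is a hypothetical illustration, not the application's format):

```python
def merge_maps(scans):
    """Merge per-area point sets into one map whose origin is the fixed
    UWB anchor. Each scan carries points in local device coordinates
    plus the device's UWB-measured offset from the anchor at scan time."""
    merged = []
    for scan in scans:
        ox, oy, oz = scan["device_offset_from_anchor"]
        for (x, y, z) in scan["points"]:
            # Shift local coordinates into the shared anchor frame.
            merged.append((x + ox, y + oy, z + oz))
    return merged
```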
-
Patent number: 11592907
Abstract: A user may routinely wear or hold more than one computing device. One of the computing devices may be a head-mounted computing device configured for augmented reality. The head-mounted computing device may include a camera. While imaging, the camera can consume power and processing resources that deplete a battery of the head-mounted computing device. To improve battery life and to enhance a user's privacy, imaging by the camera can be deactivated during periods when the user is not interacting with the head-mounted computing device and activated when the user wishes to interact with it. The activation of the camera can be triggered by gesture data collected by a computing device other than the head-mounted computing device.
Type: Grant
Filed: October 20, 2020
Date of Patent: February 28, 2023
Assignee: Google LLC
Inventors: Shengzhi Wu, Alexander James Faaborg
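The activate-on-gesture, deactivate-on-idle behavior above is naturally a small state machine. A hypothetical sketch (class name, timeout, and event model are assumptions for illustration, not the patent's design):

```python
class CameraPowerController:
    """Keeps a head-mounted device's camera off until a wake gesture is
    reported by a companion device (e.g., a watch), then turns it back
    off after a period with no interaction."""

    def __init__(self, idle_timeout_s=30.0):
        self.idle_timeout_s = idle_timeout_s
        self.camera_on = False
        self._last_interaction = None

    def on_gesture(self, t):
        # A wake gesture from the companion device activates imaging.
        self.camera_on = True
        self._last_interaction = t

    def tick(self, t):
        # Deactivate imaging after idle_timeout_s of no interaction,
        # saving battery and limiting when the camera observes the scene.
        if self.camera_on and t - self._last_interaction >= self.idle_timeout_s:
            self.camera_on = False
```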
-
Patent number: 11585917
Abstract: Three-dimensional (3D) maps may be generated for different areas based on scans of the areas using sensor(s) of a mobile computing device. During each scan, locations of the mobile computing device can be measured relative to a fixed-position smart device using ultra-wideband (UWB) communication. The 3D maps for the areas may be registered to the fixed position (i.e., anchor position) of the smart device based on the location measurements acquired during the scan so that the 3D maps can be merged into a combined 3D map. The combined (i.e., merged) 3D map may then be used to facilitate location-specific operation of the mobile computing device or another smart device.
Type: Grant
Filed: August 24, 2021
Date of Patent: February 21, 2023
Assignee: GOOGLE LLC
Inventors: Shengzhi Wu, Alexander James Faaborg
-
Publication number: 20230024254
Abstract: The present disclosure provides for device localization using ultra-wideband (UWB) detection and gesture detection using inertial measurement units (IMUs) on one or more wearable devices to control smart devices, such as home assistants, smart lights, and smart locks.
Type: Application
Filed: July 26, 2021
Publication date: January 26, 2023
Inventors: Shengzhi Wu, Alexander James Faaborg
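UWB localization is commonly built on ranging: given distances to a few devices at known positions, the wearable's position follows from trilateration. A standard 2-D sketch via the linearized system (a generic textbook method, not necessarily the disclosure's approach):

```python
def trilaterate_2d(p1, p2, p3, d1, d2, d3):
    """Locate a point from UWB range measurements d1..d3 to three
    non-collinear known positions p1..p3, by subtracting the circle
    equations pairwise and solving the resulting 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

With the wearable localized, an IMU-detected gesture can be dispatched to whichever smart device the user is nearest to or pointing at.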
-
Publication number: 20230026575
Abstract: A computer-implemented method includes receiving a two-dimensional (2-D) image of a scene captured by a camera, recognizing one or more objects in the scene depicted in the 2-D image, and determining whether the one or more recognized objects have known real-world dimensions. The computer-implemented method further includes determining a depth of at least one recognized object having known real-world dimensions from the camera, and overlaying three-dimensional (3-D) augmented reality content over a display of the 2-D image of the scene considering the depth of the at least one recognized object from the camera.
Type: Application
Filed: July 26, 2021
Publication date: January 26, 2023
Inventors: Alexander James Faaborg, Shengzhi Wu
-
Patent number: 11557097
Abstract: According to an aspect, a method for sharing a collaborative augmented reality (AR) environment includes obtaining, by a sensor system of a first computing system, visual data representing a physical space of an AR environment, where the visual data is used to create a three-dimensional (3D) map of the physical space. The 3D map includes a coordinate space having at least one virtual object added by a user of the first computing system. The method includes broadcasting, by a transducer on the first computing system, an ultrasound signal, where the ultrasound signal includes an identifier associated with the 3D map. The identifier is configured to be detected by a second computing system to join the AR environment.
Type: Grant
Filed: November 13, 2020
Date of Patent: January 17, 2023
Assignee: GOOGLE LLC
Inventors: Shengzhi Wu, Alexander James Faaborg
-
Publication number: 20220364469
Abstract: A composite support structure, a construction system, and a method are provided. The composite support structure includes a plurality of arc plate rings that are longitudinally arranged along a roadway. A concrete-filled steel tube support is arranged on an inner side or an outer side of each arc plate ring. Each arc plate ring is formed by splicing a plurality of arc plates, and each concrete-filled steel tube support is formed by splicing a plurality of steel pipe sections. The arc plate rings and the concrete-filled steel tube supports are capable of jointly supporting the walls of the roadway. The support structure has high bearing capacity, the construction system has high construction efficiency, and the method has low labor intensity.
Type: Application
Filed: September 15, 2020
Publication date: November 17, 2022
Applicants: SHANDONG JIANZHU UNIVERSITY, ENGINEERING APPRAISAL AND REINFORCEMENT INSTITUTE CO., LTD., SHANDONG JIANZHU UNIVERSITY
Inventors: Jun WANG, Xiang GAO, Xisen FAN, Guohui LIU, Jianyong HAN, Shengzhi WU, Hougang DING
-
Publication number: 20220236942
Abstract: A system and method provide for the transfer of the execution of content from a user device to an external device for output of the content by the external device. External devices may be detected in a physical space and identified based on a previous connection with the user device, based on a shared network or shared system of connected devices including the user device, based on image information captured by the user device and previously stored anchoring information that identifies the external devices, and the like. An external device may be selected for potential output of the content based on previously stored configuration information associated with the external device, including, for example, output capabilities associated with the external device. The identified external device may output the transferred content in response to a user verification input verifying that the content is to be output by the external device.
Type: Application
Filed: November 16, 2020
Publication date: July 28, 2022
Inventors: Shengzhi Wu, Alexander James Faaborg
-
Publication number: 20220237875
Abstract: In one general aspect, a method can include detecting a surface within a real-world area, where the surface is captured by a user via an image sensor in a mobile device. The method can include receiving an augmented reality (AR) generation instruction to generate an AR anchor intersecting the surface within the real-world area, where the AR anchor is at a target location for display of an AR object. The method can include defining a capture instruction in response to the AR generation instruction and based on the intersection.
Type: Application
Filed: July 22, 2020
Publication date: July 28, 2022
Inventors: Shengzhi Wu, Anshuman Kumar
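Finding where an anchor intersects a detected surface is, geometrically, a ray-plane intersection: cast a ray from the camera through the chosen pixel and intersect it with the surface plane. A self-contained sketch of that standard computation (a generic technique, not the application's specific method):

```python
def ray_plane_intersection(ray_origin, ray_dir, plane_point, plane_normal):
    """Intersect a camera ray with a detected surface plane; the hit
    point is a candidate AR anchor location. Returns the (x, y, z) hit
    point, or None if the ray is parallel to or points away from the plane."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(ray_dir, plane_normal)
    if abs(denom) < 1e-9:
        return None  # ray parallel to the plane
    t = dot([p - o for p, o in zip(plane_point, ray_origin)], plane_normal) / denom
    if t < 0:
        return None  # intersection behind the camera
    return tuple(o + t * d for o, d in zip(ray_origin, ray_dir))
```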
-
Publication number: 20220157023
Abstract: According to an aspect, a method for sharing a collaborative augmented reality (AR) environment includes obtaining, by a sensor system of a first computing system, visual data representing a physical space of an AR environment, where the visual data is used to create a three-dimensional (3D) map of the physical space. The 3D map includes a coordinate space having at least one virtual object added by a user of the first computing system. The method includes broadcasting, by a transducer on the first computing system, an ultrasound signal, where the ultrasound signal includes an identifier associated with the 3D map. The identifier is configured to be detected by a second computing system to join the AR environment.
Type: Application
Filed: November 13, 2020
Publication date: May 19, 2022
Inventors: Shengzhi Wu, Alexander James Faaborg
-
Publication number: 20220121288
Abstract: A user may routinely wear or hold more than one computing device. One of the computing devices may be a head-mounted computing device configured for augmented reality. The head-mounted computing device may include a camera. While imaging, the camera can consume power and processing resources that deplete a battery of the head-mounted computing device. To improve battery life and to enhance a user's privacy, imaging by the camera can be deactivated during periods when the user is not interacting with the head-mounted computing device and activated when the user wishes to interact with it. The activation of the camera can be triggered by gesture data collected by a computing device other than the head-mounted computing device.
Type: Application
Filed: October 20, 2020
Publication date: April 21, 2022
Inventors: Shengzhi Wu, Alexander James Faaborg
-
Publication number: 20220050518
Abstract: The present disclosure provides systems and methods for delivering notifications to a device or accessory based on context. A host device may be wirelessly coupled to one or more accessories that are available to receive a notification. The host device may analyze a context for transmitting a notification, such as by analyzing user attention and accessory state. User attention and accessory state may be analyzed from sensor data, such as data from audio inputs, image sensors, and proximity sensors. The host device may determine a content type, such as text, e-mail, news, or download; a content classification, such as urgent, sensitive, or reminder; and a notification type, such as visual, audio, or haptic. The host device may select at least one of the accessories based on the context and transmit the notification to the selected accessory.
Type: Application
Filed: May 7, 2020
Publication date: February 17, 2022
Applicant: Google LLC
Inventors: Elena Jessop Nattinger, Shengzhi Wu, Diane C. Wang