Patents by Inventor Yuanzheng Gong
Yuanzheng Gong has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12165347
Abstract: Generating edge-depth values for an object, utilizing the edge-depth values in generating a 3D point cloud for the object, and utilizing the generated 3D point cloud to generate a 3D bounding shape (e.g., a 3D bounding box) for the object. Edge-depth values for an object are depth values that are determined from frame(s) of vision data (e.g., left/right images) that capture the object, and that are determined to correspond to an edge of the object (an edge from the perspective of the frame(s) of vision data). Techniques that utilize edge-depth values for an object (exclusively, or in combination with other depth values for the object) in generating 3D bounding shapes can enable accurate 3D bounding shapes to be generated even for partially or fully transparent objects. Such increased-accuracy 3D bounding shapes directly improve the performance of a robot that utilizes them in performing various tasks.
Type: Grant
Filed: May 18, 2023
Date of Patent: December 10, 2024
Assignee: GOOGLE LLC
Inventors: Yunfei Bai, Yuanzheng Gong
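To make the pipeline concrete, here is a minimal Python sketch of the general technique (edge pixels → edge-depth values → point cloud → bounding shape). It is an illustration under assumed inputs (a hypothetical depth map, an edge mask, and pinhole intrinsics fx, fy, cx, cy), not the patented implementation.

```python
# Minimal sketch of edge-depth -> point cloud -> 3D bounding box.
# All inputs are hypothetical stand-ins, not the patent's actual data path.
import numpy as np

def edge_depth_point_cloud(depth, edge_mask, fx, fy, cx, cy):
    """Back-project depth values at detected edge pixels into an (N, 3) cloud."""
    v, u = np.nonzero(edge_mask)             # pixel coordinates of object edges
    z = depth[v, u]                          # the edge-depth values
    u, v, z = u[z > 0], v[z > 0], z[z > 0]   # keep pixels with valid depth
    x = (u - cx) * z / fx                    # standard pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def bounding_box_3d(points):
    """Axis-aligned 3D bounding box as (min corner, max corner)."""
    return points.min(axis=0), points.max(axis=0)
```

For transparent objects, interior depth readings are often missing or wrong, which is why restricting attention to (or emphasizing) edge depths can still yield a usable bounding shape.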
-
Patent number: 12112494
Abstract: Implementations relate to training a point cloud prediction model that can be utilized to process a single-view two-and-a-half-dimensional (2.5D) observation of an object to generate a domain-invariant three-dimensional (3D) representation of the object. Implementations additionally or alternatively relate to utilizing the domain-invariant 3D representations to train a robotic manipulation policy model, using the domain-invariant 3D representations of simulated objects to be manipulated as at least part of the input to the policy model during training. Implementations additionally or alternatively relate to utilizing the trained robotic manipulation policy model in control of a robot, based on output generated by processing generated domain-invariant 3D representations with the policy model.
Type: Grant
Filed: February 28, 2020
Date of Patent: October 8, 2024
Assignee: GOOGLE LLC
Inventors: Honglak Lee, Xinchen Yan, Soeren Pirk, Yunfei Bai, Seyed Mohammad Khansari Zadeh, Yuanzheng Gong, Jasmine Hsu
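The data flow can be sketched as follows. Both "models" below are hypothetical numpy stand-ins meant only to show the interface the abstract describes: a 2.5D observation is mapped to a point cloud, and the policy consumes that point cloud rather than raw pixels, which is what lets one policy transfer between simulated and real observations.

```python
# A minimal data-flow sketch, not the patented models. The two functions
# below are hypothetical stand-ins showing how a predicted point cloud can
# serve as the policy input in both simulation and the real world.
import numpy as np

def point_cloud_model(observation_2_5d: np.ndarray) -> np.ndarray:
    """Stand-in: map a 2.5D (RGB-D) observation to an (N, 3) point cloud."""
    depth = observation_2_5d[..., -1]        # last channel holds depth
    v, u = np.nonzero(depth > 0)
    return np.stack([u, v, depth[v, u]], axis=1).astype(np.float32)

def manipulation_policy(point_cloud: np.ndarray) -> np.ndarray:
    """Stand-in policy: produce an action from the 3D input."""
    centroid = point_cloud.mean(axis=0)      # e.g., reach toward the centroid
    return centroid                          # placeholder 3-DoF action

# Because the policy consumes the predicted 3D representation rather than
# raw pixels, training on simulated observations can carry over to real ones.
sim_obs = np.random.rand(64, 64, 4).astype(np.float32)  # fake RGB-D frame
action = manipulation_policy(point_cloud_model(sim_obs))
```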
-
Patent number: 11883132
Abstract: A system for the optical measurement of pH includes a light emitter to emit an excitation light, and a detector coupled to receive fluorescence light produced by a compound in a mouth of a patient in response to the excitation light. A controller is coupled to the detector, and the controller includes logic that, when executed by the controller, causes the system to perform operations. The operations may include emitting the excitation light from the light emitter; measuring an intensity of the fluorescence light emitted from a surface of individual teeth in a plurality of teeth in the mouth; and determining, based on the intensity of the fluorescence light, one or more locations on the individual teeth likely to develop demineralization.
Type: Grant
Filed: October 27, 2017
Date of Patent: January 30, 2024
Assignee: University of Washington
Inventors: Eric J. Seibel, Yuanzheng Gong, Zheng Xu, Jeffrey S. McLean, Yaxuan Zhou
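As a hedged illustration of the determining step only: the threshold and the direction of the intensity-to-risk mapping below are invented for this sketch and are not taken from the patent.

```python
# Hypothetical sketch: flag tooth-surface locations whose fluorescence
# intensity suggests a low-pH microenvironment. The threshold and the
# "low intensity == at risk" assumption are illustrative inventions.
import numpy as np

RISK_THRESHOLD = 0.35   # assumed normalized-intensity cutoff

def demineralization_risk(fluorescence_image: np.ndarray) -> np.ndarray:
    """Return a boolean mask of locations likely to develop demineralization."""
    normalized = fluorescence_image / fluorescence_image.max()
    return normalized < RISK_THRESHOLD

tooth_surface = np.random.rand(32, 32)            # stand-in intensity image
risk_mask = demineralization_risk(tooth_surface)  # per-pixel risk locations
```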
-
Publication number: 20230410540
Abstract: Systems and processes for locating objects and environmental mapping are provided. For example, a device may receive an input including a reference to a first object, wherein the input includes a request for a location of the first object, and the first object is located in a physical environment. The device may determine a location of the first object and, in response to receiving the input, provide a description of the location if one or more first criteria are met, wherein a criterion of the one or more first criteria is met when the location is available, and the description includes a relationship between the first object and a reference within the physical environment. If one or more second criteria are met, the device may forgo providing the description of the location, wherein a criterion of the one or more second criteria is met when the location is not available.
Type: Application
Filed: September 23, 2022
Publication date: December 21, 2023
Inventors: Afshin DEHGHAN, Angela BLECHSCHMIDT, Yuanzheng GONG, Feng TANG, Yang YANG, Zhihao ZHU, Monica Laura ZUENDORF
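The criteria-gated behavior reads naturally as a small branch. This Python sketch is an illustration of the control flow only, not the actual implementation; the function name, phrasing, and criteria checks are all hypothetical.

```python
# Hypothetical sketch of the first-criteria / second-criteria gate.
from typing import Optional

def describe_location(obj: str,
                      relation: Optional[str],
                      reference: Optional[str]) -> Optional[str]:
    """Provide a description only when the location is available (a first
    criterion); otherwise forgo the description (a second criterion)."""
    if relation is not None and reference is not None:
        # The description relates the object to a reference in the environment.
        return f"The {obj} is {relation} the {reference}."
    return None  # location unavailable: forgo providing a description

print(describe_location("mug", "on", "kitchen table"))  # description provided
print(describe_location("mug", None, None))             # None: forgone
```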
-
Publication number: 20230289988
Abstract: Generating edge-depth values for an object, utilizing the edge-depth values in generating a 3D point cloud for the object, and utilizing the generated 3D point cloud to generate a 3D bounding shape (e.g., a 3D bounding box) for the object. Edge-depth values for an object are depth values that are determined from frame(s) of vision data (e.g., left/right images) that capture the object, and that are determined to correspond to an edge of the object (an edge from the perspective of the frame(s) of vision data). Techniques that utilize edge-depth values for an object (exclusively, or in combination with other depth values for the object) in generating 3D bounding shapes can enable accurate 3D bounding shapes to be generated even for partially or fully transparent objects. Such increased-accuracy 3D bounding shapes directly improve the performance of a robot that utilizes them in performing various tasks.
Type: Application
Filed: May 18, 2023
Publication date: September 14, 2023
Inventors: Yunfei Bai, Yuanzheng Gong
-
Patent number: 11657527
Abstract: Generating edge-depth values for an object, utilizing the edge-depth values in generating a 3D point cloud for the object, and utilizing the generated 3D point cloud to generate a 3D bounding shape (e.g., a 3D bounding box) for the object. Edge-depth values for an object are depth values that are determined from frame(s) of vision data (e.g., left/right images) that capture the object, and that are determined to correspond to an edge of the object (an edge from the perspective of the frame(s) of vision data). Techniques that utilize edge-depth values for an object (exclusively, or in combination with other depth values for the object) in generating 3D bounding shapes can enable accurate 3D bounding shapes to be generated even for partially or fully transparent objects. Such increased-accuracy 3D bounding shapes directly improve the performance of a robot that utilizes them in performing various tasks.
Type: Grant
Filed: May 28, 2019
Date of Patent: May 23, 2023
Assignee: X DEVELOPMENT LLC
Inventors: Yunfei Bai, Yuanzheng Gong
-
Publication number: 20210101286
Abstract: Implementations relate to training a point cloud prediction model that can be utilized to process a single-view two-and-a-half-dimensional (2.5D) observation of an object to generate a domain-invariant three-dimensional (3D) representation of the object. Implementations additionally or alternatively relate to utilizing the domain-invariant 3D representations to train a robotic manipulation policy model, using the domain-invariant 3D representations of simulated objects to be manipulated as at least part of the input to the policy model during training. Implementations additionally or alternatively relate to utilizing the trained robotic manipulation policy model in control of a robot, based on output generated by processing generated domain-invariant 3D representations with the policy model.
Type: Application
Filed: February 28, 2020
Publication date: April 8, 2021
Inventors: Honglak Lee, Xinchen Yan, Soeren Pirk, Yunfei Bai, Seyed Mohammad Khansari Zadeh, Yuanzheng Gong, Jasmine Hsu
-
Publication number: 20200376675
Abstract: Generating edge-depth values for an object, utilizing the edge-depth values in generating a 3D point cloud for the object, and utilizing the generated 3D point cloud to generate a 3D bounding shape (e.g., a 3D bounding box) for the object. Edge-depth values for an object are depth values that are determined from frame(s) of vision data (e.g., left/right images) that capture the object, and that are determined to correspond to an edge of the object (an edge from the perspective of the frame(s) of vision data). Techniques that utilize edge-depth values for an object (exclusively, or in combination with other depth values for the object) in generating 3D bounding shapes can enable accurate 3D bounding shapes to be generated even for partially or fully transparent objects. Such increased-accuracy 3D bounding shapes directly improve the performance of a robot that utilizes them in performing various tasks.
Type: Application
Filed: May 28, 2019
Publication date: December 3, 2020
Inventors: Yunfei Bai, Yuanzheng Gong
-
Patent number: 10592552
Abstract: Methods, apparatus, systems, and computer-readable media for assigning a real-time clock domain timestamp to sensor frames from a sensor component that operates in a non-real-time time domain. In some implementations, a real-time component receives capture output instances that each indicate capturing of a corresponding sensor data frame by the sensor component. In response to a capture output instance, the real-time component or an additional real-time component assigns a real-time timestamp to the capture output instance, where the real-time timestamp is based on the real-time clock domain. Separately, a non-real-time component receives the corresponding sensor data frames captured by the sensor component, along with corresponding metadata. For each sensor data frame, it is determined whether there is a real-time timestamp that corresponds to the sensor data frame and, if so, the real-time timestamp is assigned to the sensor data frame.
Type: Grant
Filed: April 4, 2019
Date of Patent: March 17, 2020
Assignee: X DEVELOPMENT LLC
Inventors: Emily Cooper, David Deephanphongs, Yuanzheng Gong, Thomas Buschmann, Matthieu Guilbert
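A simplified sketch of the non-real-time side's matching step, with hypothetical data structures (the patent's real-time components would live in firmware or a real-time OS, not in Python like this):

```python
# Hypothetical sketch: adopt the real-time-domain timestamp recorded for a
# capture event onto the sensor data frame that arrives later on the
# non-real-time side, matched here by an assumed sequence number.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CaptureEvent:
    sequence_number: int        # assumed correlation key
    realtime_timestamp: float   # assigned in the real-time clock domain

def assign_realtime_timestamp(frame_seq: int,
                              events: List[CaptureEvent]) -> Optional[float]:
    """Return the real-time timestamp for the matching capture event, if any."""
    for event in events:
        if event.sequence_number == frame_seq:
            return event.realtime_timestamp
    return None  # no corresponding real-time timestamp for this frame
```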
-
Publication number: 20190328234
Abstract: A system for the optical measurement of pH includes a light emitter to emit an excitation light, and a detector coupled to receive fluorescence light produced by a compound in a mouth of a patient in response to the excitation light. A controller is coupled to the detector, and the controller includes logic that, when executed by the controller, causes the system to perform operations. The operations may include emitting the excitation light from the light emitter; measuring an intensity of the fluorescence light emitted from a surface of individual teeth in a plurality of teeth in the mouth; and determining, based on the intensity of the fluorescence light, one or more locations on the individual teeth likely to develop demineralization.
Type: Application
Filed: October 27, 2017
Publication date: October 31, 2019
Inventors: Eric J. Seibel, Yuanzheng Gong, Zheng Xu, Jeffrey S. McLean, Yaxuan Zhou
-
Patent number: 10296602
Abstract: Methods, apparatus, systems, and computer-readable media for assigning a real-time clock domain timestamp to sensor frames from a sensor component that operates in a non-real-time time domain. In some implementations, a real-time component receives capture output instances that each indicate capturing of a corresponding sensor data frame by the sensor component. In response to a capture output instance, the real-time component or an additional real-time component assigns a real-time timestamp to the capture output instance, where the real-time timestamp is based on the real-time clock domain. Separately, a non-real-time component receives the corresponding sensor data frames captured by the sensor component, along with corresponding metadata. For each sensor data frame, it is determined whether there is a real-time timestamp that corresponds to the sensor data frame and, if so, the real-time timestamp is assigned to the sensor data frame.
Type: Grant
Filed: April 18, 2017
Date of Patent: May 21, 2019
Assignee: X DEVELOPMENT LLC
Inventors: Emily Cooper, Matthieu Guilbert, Thomas Buschmann, David Deephanphongs, Yuanzheng Gong
-
Publication number: 20150346115
Abstract: Embodiments relate to 3D optical metrology of internal surfaces. Embodiments may include a system having an imaging device to capture multiple images of an internal surface, including a first image captured at a first location on an axial path and a second image captured at a second location on the axial path, and a transport apparatus to move the imaging device along the axial path. The system further includes a control system that is coupled with the imaging device, wherein the control system is to receive the multiple images from the imaging device and to generate a 3D representation of the surface based at least in part on content information and location information for the multiple images.
Type: Application
Filed: June 1, 2015
Publication date: December 3, 2015
Inventors: Eric J. Seibel, Yuanzheng Gong, Fred Braun, David L. Gourley
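The reconstruction step can be pictured with a short sketch. Everything here is an assumed stand-in (in particular, per-image depth recovery is faked from pixel intensity); the point is only how content information (images) and location information (axial positions) combine into one 3D representation.

```python
# Schematic sketch: fuse images taken at known positions along an axial
# path into a single 3D point set. The per-image "depth from intensity"
# step is a placeholder for real surface recovery.
import numpy as np
from typing import List

def reconstruct_internal_surface(images: List[np.ndarray],
                                 axial_positions: List[float]) -> np.ndarray:
    points = []
    for image, z_offset in zip(images, axial_positions):
        v, u = np.nonzero(image > 0)             # pixels with recoverable depth
        local = np.stack([u, v, image[v, u] + z_offset], axis=1)
        points.append(local)                     # offset by the capture location
    return np.concatenate(points, axis=0)        # merged 3D representation
```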