Patents by Inventor Youding Zhu
Youding Zhu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230357076
Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
Type: Application
Filed: May 2, 2023
Publication date: November 9, 2023
Inventors: Michael Kroepfl, Amir Akbarzadeh, Ruchi Bhargava, Vaibhav Thukral, Neda Cvijetic, Vadim Cugunovs, David Nister, Birgit Henke, Ibrahim Eden, Youding Zhu, Michael Grabner, Ivana Stojanovic, Yu Sheng, Jeffrey Liu, Enliang Zheng, Jordan Marr, Andrew Carley
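The per-modality localization results described above must be fused into one final estimate. The patent does not specify the fusion rule; the sketch below assumes a simple inverse-variance weighted mean over hypothetical per-modality position estimates, purely for illustration.

```python
# Hypothetical sketch: fuse per-modality localization results into a single
# pose estimate via inverse-variance weighting. The weighting scheme and the
# (position, variance) result format are assumptions, not from the patent.

def fuse_localizations(results):
    """results: list of ((x, y) position, scalar variance) pairs, one per
    sensor modality. Returns the inverse-variance weighted mean position."""
    weights = [1.0 / var for _, var in results]
    total = sum(weights)
    x = sum(w * p[0] for w, (p, _) in zip(weights, results)) / total
    y = sum(w * p[1] for w, (p, _) in zip(weights, results)) / total
    return (x, y)

# Example: camera, lidar, and radar localization results for one frame.
fused = fuse_localizations([
    ((10.0, 5.0), 0.5),   # camera: lower uncertainty, higher weight
    ((10.4, 5.2), 1.0),   # lidar
    ((9.8, 4.9), 2.0),    # radar: higher uncertainty, lower weight
])
```

Modalities with lower uncertainty dominate the fused result, so one noisy sensor cannot drag the final localization far off.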
-
Patent number: 11698272
Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
Type: Grant
Filed: August 31, 2020
Date of Patent: July 11, 2023
Assignee: NVIDIA Corporation
Inventors: Michael Kroepfl, Amir Akbarzadeh, Ruchi Bhargava, Vaibhav Thukral, Neda Cvijetic, Vadim Cugunovs, David Nister, Birgit Henke, Ibrahim Eden, Youding Zhu, Michael Grabner, Ivana Stojanovic, Yu Sheng, Jeffrey Liu, Enliang Zheng, Jordan Marr, Andrew Carley
-
Patent number: 10948726
Abstract: Optimizations are provided for generating passthrough visualizations for Head Mounted Displays. The interpupil distance of a user wearing a head-mounted device is determined and a stereo camera pair with a left and right camera is used to capture raw images. The center-line perspectives of the images captured by the left camera have non-parallel alignments with respect to center-line perspectives of any images captured by the right camera. After the raw images are captured, various camera distortion corrections are applied to the images to create corrected images. Epipolar transforms are then applied to the corrected images to create transformed images having parallel center-line perspectives. Thereafter, a depth map is generated of the transformed images. Finally, left and right passthrough visualizations are generated and rendered by reprojecting the transformed left and right images.
Type: Grant
Filed: October 4, 2019
Date of Patent: March 16, 2021
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Youding Zhu, Michael Bleyer, Denis Claude Pierre Demandolx, Raymond Kirk Price
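Once the epipolar transforms yield images with parallel center-line perspectives, depth follows from disparity by similar triangles, and reprojection shifts pixels from the camera position to the eye position. The sketch below illustrates those two steps for a single pixel; the camera parameters and eye offset are made-up values, not from the patent.

```python
# Hypothetical sketch of the depth and reprojection steps in a stereo
# passthrough pipeline. Focal length, baseline, and eye offset are assumed.

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Depth (meters) from disparity (pixels) for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

def reproject_to_eye(depth_m, x_px, focal_px, eye_offset_m):
    """Shift a pixel horizontally so the passthrough image is rendered from
    the user's eye position rather than the camera position."""
    return x_px + focal_px * eye_offset_m / depth_m

# A 20 px disparity with a 500 px focal length and 64 mm baseline:
depth = disparity_to_depth(disparity_px=20.0, focal_px=500.0, baseline_m=0.064)
# Reproject a pixel at x=100 for a 32 mm camera-to-eye offset:
x_eye = reproject_to_eye(depth, 100.0, focal_px=500.0, eye_offset_m=0.032)
```

Because the reprojection shift scales with 1/depth, nearby objects move more between the camera view and the eye view, which is exactly the parallax the interpupil-distance measurement is needed to reproduce.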
-
Publication number: 20210063200
Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
Type: Application
Filed: August 31, 2020
Publication date: March 4, 2021
Inventors: Michael Kroepfl, Amir Akbarzadeh, Ruchi Bhargava, Vaibhav Thukral, Neda Cvijetic, Vadim Cugunovs, David Nister, Birgit Henke, Ibrahim Eden, Youding Zhu, Michael Grabner, Ivana Stojanovic, Yu Sheng, Jeffrey Liu, Enliang Zheng, Jordan Marr, Andrew Carley
-
Publication number: 20200041799
Abstract: Optimizations are provided for generating passthrough visualizations for Head Mounted Displays. The interpupil distance of a user wearing a head-mounted device is determined and a stereo camera pair with a left and right camera is used to capture raw images. The center-line perspectives of the images captured by the left camera have non-parallel alignments with respect to center-line perspectives of any images captured by the right camera. After the raw images are captured, various camera distortion corrections are applied to the images to create corrected images. Epipolar transforms are then applied to the corrected images to create transformed images having parallel center-line perspectives. Thereafter, a depth map is generated of the transformed images. Finally, left and right passthrough visualizations are generated and rendered by reprojecting the transformed left and right images.
Type: Application
Filed: October 4, 2019
Publication date: February 6, 2020
Inventors: Youding ZHU, Michael BLEYER, Denis Claude Pierre DEMANDOLX, Raymond Kirk PRICE
-
Patent number: 10546426
Abstract: A virtual reality scene is displayed via a display device. A real-world positioning of a peripheral control device is identified relative to the display device. Video of a real-world scene of a physical environment located behind a display region of the display device is captured via a camera. A real-world portal is selectively displayed via the display device that includes a portion of the real-world scene and simulates a view through the virtual reality scene at a position within the display region that tracks the real-world positioning of the peripheral control device.
Type: Grant
Filed: February 1, 2018
Date of Patent: January 28, 2020
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Alexandru Octavian Balan, Youding Zhu, Min shik Park
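Keeping the portal's on-screen position locked to the tracked controller amounts to projecting the controller's 3-D position into display coordinates each frame. The sketch below assumes a simple pinhole projection with made-up intrinsics; the patent does not specify the projection model.

```python
# Hypothetical sketch: project the tracked controller position into screen
# coordinates so the real-world portal can be centered there. The pinhole
# model and intrinsic values (focal_px, cx, cy) are assumptions.

def controller_to_screen(pos, focal_px, cx, cy):
    """Project a controller position (x, y, z) in the headset's camera frame
    to the pixel where the portal should be centered."""
    x, y, z = pos
    if z <= 0:
        raise ValueError("controller must be in front of the display")
    return (cx + focal_px * x / z, cy + focal_px * y / z)

# Portal center for a controller 0.5 m ahead, slightly right and below.
center = controller_to_screen((0.1, -0.05, 0.5), focal_px=600.0, cx=640.0, cy=360.0)
```

Re-running this projection every frame is what makes the portal track the controller as the user moves it through the virtual scene.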
-
Patent number: 10437065
Abstract: Optimizations are provided for generating passthrough visualizations for Head Mounted Displays. The interpupil distance of a user wearing a head-mounted device is determined and a stereo camera pair with a left and right camera is used to capture raw images. The center-line perspectives of the images captured by the left camera have non-parallel alignments with respect to center-line perspectives of any images captured by the right camera. After the raw images are captured, various camera distortion corrections are applied to the images to create corrected images. Epipolar transforms are then applied to the corrected images to create transformed images having parallel center-line perspectives. Thereafter, a depth map is generated of the transformed images. Finally, left and right passthrough visualizations are generated and rendered by reprojecting the transformed left and right images.
Type: Grant
Filed: October 3, 2017
Date of Patent: October 8, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Youding Zhu, Michael Bleyer, Denis Claude Pierre Demandolx, Raymond Kirk Price
-
Publication number: 20190213793
Abstract: A virtual reality scene is displayed via a display device. A real-world positioning of a peripheral control device is identified relative to the display device. Video of a real-world scene of a physical environment located behind a display region of the display device is captured via a camera. A real-world portal is selectively displayed via the display device that includes a portion of the real-world scene and simulates a view through the virtual reality scene at a position within the display region that tracks the real-world positioning of the peripheral control device.
Type: Application
Filed: February 1, 2018
Publication date: July 11, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Alexandru Octavian BALAN, Youding ZHU, Min shik PARK
-
Publication number: 20190101758
Abstract: Optimizations are provided for generating passthrough visualizations for Head Mounted Displays. The interpupil distance of a user wearing a head-mounted device is determined and a stereo camera pair with a left and right camera is used to capture raw images. The center-line perspectives of the images captured by the left camera have non-parallel alignments with respect to center-line perspectives of any images captured by the right camera. After the raw images are captured, various camera distortion corrections are applied to the images to create corrected images. Epipolar transforms are then applied to the corrected images to create transformed images having parallel center-line perspectives. Thereafter, a depth map is generated of the transformed images. Finally, left and right passthrough visualizations are generated and rendered by reprojecting the transformed left and right images.
Type: Application
Filed: October 3, 2017
Publication date: April 4, 2019
Inventors: Youding Zhu, Michael Bleyer, Denis Claude Pierre Demandolx, Raymond Kirk Price
-
Patent number: 9746675
Abstract: A head-mounted display device is disclosed, which includes an at least partially see-through display, a processor configured to detect a physical feature, generate an alignment hologram based on the physical feature, determine a view of the alignment hologram based on a default view matrix for a first eye of a user of the head-mounted display device, display the view of the alignment hologram to the first eye of the user on the at least partially see-through display, output an instruction to the user to enter an adjustment input to visually align the alignment hologram with the physical feature, determine a calibrated view matrix based on the default view matrix and the adjustment input, and adjust a view matrix setting of the head-mounted display device based on the calibrated view matrix.
Type: Grant
Filed: May 28, 2015
Date of Patent: August 29, 2017
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Quentin Simon Charles Miller, Drew Steedly, Denis Demandolx, Youding Zhu, Qi Kuan Zhou, Todd Michael Lyon
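The final calibration step combines the default view matrix with the user's adjustment input. The sketch below assumes the adjustment is a small translation offset applied to the view matrix's translation column; the actual parameterization of the adjustment is not specified in the abstract.

```python
# Hypothetical sketch: derive a calibrated view matrix from the default view
# matrix and the user's alignment adjustment. Representing the adjustment as
# a (dx, dy, dz) translation is an assumption.

def calibrate_view_matrix(default_view, adjustment):
    """default_view: 4x4 row-major view matrix (list of lists).
    adjustment: (dx, dy, dz) entered by the user while visually aligning the
    hologram with the physical feature. Returns a new calibrated matrix."""
    calibrated = [row[:] for row in default_view]  # copy, leave default intact
    for i, d in enumerate(adjustment):
        calibrated[i][3] += d  # shift the translation column
    return calibrated

identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
view = calibrate_view_matrix(identity, (0.003, 0.0, -0.001))
```

The device would then store `view` as its per-eye view matrix setting, so every subsequent hologram is rendered with the user's correction baked in.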
-
Patent number: 9658686
Abstract: Various embodiments relating to using motion based view matrix tuning to calibrate a head-mounted display device are disclosed. In one embodiment, holograms are rendered with different view matrices, each view matrix corresponding to a different inter-pupillary distance. Upon selection by the user of the most stable hologram, the head-mounted display device can be calibrated to the inter-pupillary distance corresponding to the selected most stable hologram.
Type: Grant
Filed: May 28, 2015
Date of Patent: May 23, 2017
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Quentin Simon Charles Miller, Drew Steedly, Denis Demandolx, Youding Zhu, Qi Kuan Zhou, Todd Michael Lyon
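The selection step above reduces to a lookup: the device renders one candidate hologram per inter-pupillary distance and calibrates to the one the user reports as most stable. A minimal sketch, with illustrative candidate IPD values that are not from the patent:

```python
# Hypothetical sketch of the IPD-selection step. The candidate IPD values
# and the millimeter units are illustrative assumptions.

def calibrate_ipd(candidate_ipds_mm, selected_index):
    """Return the inter-pupillary distance to calibrate to, given the index
    of the hologram the user judged most stable."""
    if not 0 <= selected_index < len(candidate_ipds_mm):
        raise IndexError("selection out of range")
    return candidate_ipds_mm[selected_index]

# User picks the third rendered hologram as the most stable one.
ipd = calibrate_ipd([58.0, 61.0, 64.0, 67.0, 70.0], selected_index=2)
```

The interesting part is perceptual rather than computational: a hologram rendered with the wrong IPD appears to swim as the head moves, so the most stable one identifies the best-matching view matrix.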
-
Publication number: 20160349837
Abstract: Various embodiments relating to using motion based view matrix tuning to calibrate a head-mounted display device are disclosed. In one embodiment, holograms are rendered with different view matrices, each view matrix corresponding to a different inter-pupillary distance. Upon selection by the user of the most stable hologram, the head-mounted display device can be calibrated to the inter-pupillary distance corresponding to the selected most stable hologram.
Type: Application
Filed: May 28, 2015
Publication date: December 1, 2016
Inventors: Quentin Simon Charles Miller, Drew Steedly, Denis Demandolx, Youding Zhu, Qi Kuan Zhou, Todd Michael Lyon
-
Publication number: 20160349510
Abstract: A head-mounted display device is disclosed, which includes an at least partially see-through display, a processor configured to detect a physical feature, generate an alignment hologram based on the physical feature, determine a view of the alignment hologram based on a default view matrix for a first eye of a user of the head-mounted display device, display the view of the alignment hologram to the first eye of the user on the at least partially see-through display, output an instruction to the user to enter an adjustment input to visually align the alignment hologram with the physical feature, determine a calibrated view matrix based on the default view matrix and the adjustment input, and adjust a view matrix setting of the head-mounted display device based on the calibrated view matrix.
Type: Application
Filed: May 28, 2015
Publication date: December 1, 2016
Inventors: Quentin Simon Charles Miller, Drew Steedly, Denis Demandolx, Youding Zhu, Qi Kuan Zhou, Todd Michael Lyon
-
Patent number: 9352230
Abstract: Techniques for interpreting motions of a motion-sensitive device are described to allow natural and intuitive interfaces for controlling an application (e.g. a video game). A motion-sensitive device includes inertial sensors and generates sensor signals sufficient to derive positions and orientations of the device in six degrees of freedom. The motion of the device in six degrees of freedom is tracked by analyzing sensor data from the inertial sensors in conjunction with data from a secondary source that may be a camera or a non-inertial sensor. Different techniques are provided to correct or minimize errors in deriving the positions and orientations of the device. These techniques include "stop" detection, back tracking, extrapolation of sensor data beyond the sensor ranges, and using constraints from multiple trackable objects in an application being interfaced with the motion-sensitive device.
Type: Grant
Filed: May 11, 2011
Date of Patent: May 31, 2016
Assignee: AiLive Inc.
Inventors: Dana Wilkinson, Charles Musick, Jr., William Robert Powers, III, Youding Zhu
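The core idea of combining inertial data with a secondary source can be illustrated with a toy 1-D tracker: dead reckoning from acceleration samples drifts, and each observation from the secondary source (e.g. a camera) pulls the estimate back. This is a generic complementary-filter sketch, not the patent's specific correction techniques ("stop" detection, back tracking, etc.); the blend factor is an assumption.

```python
# Hypothetical sketch: 1-D position tracking that integrates inertial
# acceleration samples and blends in occasional camera fixes to cancel
# accumulated drift. The blend factor and sample rate are assumptions.

def track_position(p0, v0, accels, camera_fixes, dt=0.01, blend=0.2):
    """Integrate acceleration samples; whenever a camera fix is available
    (not None), pull the position estimate toward it."""
    p, v = p0, v0
    for a, fix in zip(accels, camera_fixes):
        v += a * dt
        p += v * dt
        if fix is not None:
            p = (1.0 - blend) * p + blend * fix  # drift correction
    return p

# Three samples of constant acceleration, with a camera fix on the last one.
pos = track_position(0.0, 0.0, [1.0] * 3, [None, None, 0.0005])
```

Without the fixes, integration error in `p` grows quadratically with time; the secondary source bounds it, which is why the patent pairs the inertial sensors with a camera or other non-inertial sensor.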
-
Patent number: 9292734
Abstract: Techniques for performing accurate and automatic head pose estimation are disclosed. According to one aspect of the techniques, head pose estimation is integrated with a scale-invariant head tracking method along with facial features detected from a located head in images. Thus the head pose estimation works efficiently even when there are large translational movements resulting from the head motion. Various computation techniques are used to optimize the process of estimation so that the head pose estimation can be applied to control one or more objects in a virtual environment and virtual character gaze control.
Type: Grant
Filed: July 11, 2014
Date of Patent: March 22, 2016
Assignee: AiLive, Inc.
Inventors: Youding Zhu, Charles Musick, Jr., Robert Kay, William Robert Powers, III, Dana Wilkinson, Stuart Reynolds
-
Patent number: 9165199
Abstract: A system, method, and computer program product for estimating human body pose are described. According to one aspect, anatomical features are detected in a depth image of a human actor. The method detects a head, neck, and trunk (H-N-T) template in the depth image, and detects limbs in the depth image based on the H-N-T template. The anatomical features are detected based on the H-N-T template and the limbs. An estimated pose of a human model is estimated based on the detected features and kinematic constraints of the human model.
Type: Grant
Filed: May 29, 2009
Date of Patent: October 20, 2015
Assignee: Honda Motor Co., Ltd.
Inventors: Youding Zhu, Behzad Dariush, Kikuo Fujimura
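One way kinematic constraints enter the pose fit is by bounding how far a detected feature can sit from its parent joint. The sketch below shows that single idea for one limb; the forearm length and joint names are illustrative, not taken from the patent.

```python
# Hypothetical sketch of applying one kinematic constraint when fitting the
# human model: if a detected limb feature lies farther from its parent joint
# than the model's limb length allows, project it back onto that radius.

import math

def constrain_to_limb_length(parent, child, max_len):
    """Clamp the 2-D child joint position so |child - parent| <= max_len."""
    dx, dy = child[0] - parent[0], child[1] - parent[1]
    dist = math.hypot(dx, dy)
    if dist <= max_len:
        return child
    s = max_len / dist
    return (parent[0] + dx * s, parent[1] + dy * s)

# A wrist detection 0.9 m from the elbow gets pulled onto the 0.3 m forearm.
wrist = constrain_to_limb_length(parent=(0.0, 0.0), child=(0.0, 0.9), max_len=0.3)
```

Constraints like this keep noisy depth-image detections from producing anatomically impossible poses, which is why the estimate uses both the detected features and the human model.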
-
Patent number: 9098766
Abstract: A system, method, and computer program product for estimating upper body human pose are described. According to one aspect, a plurality of anatomical features are detected in a depth image of the human actor. The method detects a head, neck, and torso (H-N-T) template in the depth image, and detects the features in the depth image based on the H-N-T template. An estimated pose of a human model is estimated based on the detected features and kinematic constraints of the human model.
Type: Grant
Filed: December 19, 2008
Date of Patent: August 4, 2015
Assignee: Honda Motor Co., Ltd.
Inventors: Behzad Dariush, Youding Zhu, Kikuo Fujimura
-
Patent number: 9007299
Abstract: Techniques for using a motion sensitive device as a controller are disclosed. A motion controller as an input/control device is used to control an existing electronic device (a.k.a. controlled device) previously configured for taking inputs from a pre-defined controlling device. The signals from the input device are in a different form from the pre-defined controlling device. According to one aspect of the present invention, the controlled device was designed to respond to signals from a pre-defined controlling device (e.g., a touch-screen device). The inputs from the motion controller are converted into touch-screen-like signals that are then sent to the controlled device or programs being executed in the controlled device to cause the behavior of the controlled device to change or respond thereto, without reconfiguration of the applications running on the controlled device.
Type: Grant
Filed: September 30, 2011
Date of Patent: April 14, 2015
Assignee: AiLive Inc.
Inventors: Charles Musick, Jr., Robert Kay, Stuart Reynolds, Dana Wilkinson, Anupam Chakravorty, William Robert Powers, III, Wei Yen, Youding Zhu
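The conversion described above maps motion-controller signals onto the event format the controlled device already understands. The sketch below assumes the controller's pointing direction is given as yaw/pitch angles and that a synthetic touch event is a simple coordinate record; the field-of-view mapping and event format are illustrative, not from the patent.

```python
# Hypothetical sketch: convert motion-controller pointing angles into a
# touch-screen-like event for a device that only understands touch input.
# Screen size, field of view, and event schema are assumptions.

def motion_to_touch(yaw_deg, pitch_deg, width=1920, height=1080, fov_deg=30.0):
    """Map yaw/pitch (degrees from screen center) to a synthetic touch event
    at the corresponding pixel, clamped to the screen bounds."""
    half = fov_deg / 2.0
    x = (yaw_deg + half) / fov_deg * width
    y = (half - pitch_deg) / fov_deg * height   # positive pitch points up
    x = min(max(x, 0), width - 1)
    y = min(max(y, 0), height - 1)
    return {"type": "touch_move", "x": round(x), "y": round(y)}

event = motion_to_touch(0.0, 0.0)  # controller pointed at screen center
```

Because the output already looks like a touch event, the applications on the controlled device need no reconfiguration, which is the point the abstract emphasizes.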
-
Publication number: 20140342830
Abstract: Techniques for providing compatibility between two different game controllers are disclosed. When a new or more advanced controller is introduced, it is important that such a new controller works with a system originally configured for an existing or old controller. The new controller may provide more functionalities than the old one does. In some cases, the new controller provides more sensing signals than the old one does. The new controller is configured to work with the system to transform the sensing signals therefrom to masquerade as though they were coming from the old controller. The transforming of the sensing signals comprises: replicating operational characteristics of the old controller, and relocating virtually the sensing signals to appear as though the sensing signals were generated from inertial sensors located in a certain location in the new controller responsive to a certain location of the inertial sensors in the old controller.
Type: Application
Filed: August 1, 2014
Publication date: November 20, 2014
Inventors: Charles Musick, JR., Robert Kay, William Robert Powers, III, Dana Wilkinson, Youding Zhu
-
Publication number: 20140320691
Abstract: Techniques for performing accurate and automatic head pose estimation are disclosed. According to one aspect of the techniques, head pose estimation is integrated with a scale-invariant head tracking method along with facial features detected from a located head in images. Thus the head pose estimation works efficiently even when there are large translational movements resulting from the head motion. Various computation techniques are used to optimize the process of estimation so that the head pose estimation can be applied to control one or more objects in a virtual environment and virtual character gaze control.
Type: Application
Filed: July 11, 2014
Publication date: October 30, 2014
Inventors: Youding Zhu, Charles Musick, JR., Robert Kay, William Robert Powers, III, Dana Wilkinson, Stuart Reynolds