Patents by Inventor Zhenzhong Wei

Zhenzhong Wei has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10690492
    Abstract: The present invention discloses a structural light parameter calibration device and method based on a front-coating plane mirror. The calibration device includes a camera, a laser, a front-coating plane mirror, a flat glass target, and white printing paper. The white printing paper receives the laser beam and presents a real light stripe image, and the camera captures both the real light stripe image and its mirrored light stripe image in the front-coating plane mirror. Feature points are used to determine a rotation matrix, a translation vector, and a vanishing point for the image. The invention achieves higher light stripe quality and extraction accuracy, provides feature points with micron-level positional accuracy and more calibration points, and yields higher calibration accuracy and more stable calibration results.
    Type: Grant
    Filed: March 22, 2018
    Date of Patent: June 23, 2020
    Assignee: Beihang University
    Inventors: Zhenzhong Wei, Wei Zou, Binhu Chai, Yue Min
  • Publication number: 20200132451
    Abstract: The present invention discloses a structural light parameter calibration device and method based on a front-coating plane mirror. The calibration device includes a camera, a laser, a front-coating plane mirror, a flat glass target, and white printing paper. The white printing paper receives the laser beam and presents a real light stripe image, and the camera captures both the real light stripe image and its mirrored light stripe image in the front-coating plane mirror. Feature points are used to determine a rotation matrix, a translation vector, and a vanishing point for the image. The invention achieves higher light stripe quality and extraction accuracy, provides feature points with micron-level positional accuracy and more calibration points, and yields higher calibration accuracy and more stable calibration results.
    Type: Application
    Filed: March 22, 2018
    Publication date: April 30, 2020
    Inventors: Zhenzhong Wei, Wei Zou, Binhu Chai, Yue Min
  • Patent number: 9524555
    Abstract: A method and software for the simultaneous pose and point-correspondence determination from a planar model are disclosed. The method uses a coarse pose estimation algorithm to obtain two possible coarse poses, uses each of the two coarse poses to initialize the extended TsPose algorithm to obtain two candidate estimated poses, and then selects one of the two candidates based on the cost-function value. The method thereby solves the pose-redundancy problem in simultaneous pose and point-correspondence determination from a planar model, i.e., that the number of estimated poses grows exponentially as the iterations proceed. The disclosed embodiment is based on coplanar points and places no restriction on the shape of the planar model. It performs well in cluttered and occluded environments and remains resilient in the presence of different levels of noise. (A minimal sketch of this cost-based candidate selection appears after the listing.)
    Type: Grant
    Filed: December 12, 2011
    Date of Patent: December 20, 2016
    Assignee: BEIHANG UNIVERSITY
    Inventors: Guangjun Zhang, Zhenzhong Wei, Wei Wang
  • Patent number: 8964027
    Abstract: The present disclosure provides a global calibration method based on a rigid bar for a multi-sensor vision measurement system, comprising: step 1, executing the following procedure at least nine times: placing, in front of two vision sensors to be calibrated, a rigid bar fitted with two targets, each corresponding to one of the vision sensors; capturing images of the respective targets with their corresponding vision sensors; extracting the coordinates of the feature points of each target in its corresponding image; and computing the 3D coordinates of each feature point of each target in its corresponding vision sensor coordinate frame; and step 2, computing the transformation matrix between the two vision sensors under the constraint of the fixed positional relationship between the two targets. The present disclosure also provides a global calibration apparatus based on a rigid bar for a multi-sensor vision measurement system.
    Type: Grant
    Filed: August 9, 2011
    Date of Patent: February 24, 2015
    Assignee: Beihang University
    Inventors: Guangjun Zhang, Zhen Liu, Zhenzhong Wei, Junhua Sun, Meng Xie
  • Patent number: 8934721
    Abstract: The present disclosure provides a microscopic vision measurement method based on adaptive positioning of the camera coordinate frame, which includes: calibrating the parameters of a microscopic stereo vision measurement model (201); acquiring pairs of synchronized images and transmitting the acquired images to a computer through an image acquisition card (202); calculating the 3D coordinates of feature points in a scene according to the matched pairs of feature points obtained from the synchronized images and the calibrated parameters of the microscopic stereo vision measurement model (203); and performing the specific measurement according to the 3D coordinates of the feature points in the scene (204). With the method, the nonlinearity of the objective function in the microscopic vision calibration optimization is effectively reduced and a better calibration result is obtained. (A minimal triangulation sketch of step (203) appears after the listing.)
    Type: Grant
    Filed: April 25, 2011
    Date of Patent: January 13, 2015
    Assignee: Beihang University
    Inventors: Guangjun Zhang, Zhenzhong Wei, Weixian Li, Zhipeng Cao, Yali Wang
  • Publication number: 20140321735
    Abstract: A method and software for the simultaneous pose and point-correspondence determination from a planar model are disclosed. The method uses a coarse pose estimation algorithm to obtain two possible coarse poses, uses each of the two coarse poses to initialize the extended TsPose algorithm to obtain two candidate estimated poses, and then selects one of the two candidates based on the cost-function value. The method thereby solves the pose-redundancy problem in simultaneous pose and point-correspondence determination from a planar model, i.e., that the number of estimated poses grows exponentially as the iterations proceed. The disclosed embodiment is based on coplanar points and places no restriction on the shape of the planar model. It performs well in cluttered and occluded environments and remains resilient in the presence of different levels of noise.
    Type: Application
    Filed: December 12, 2011
    Publication date: October 30, 2014
    Applicant: BEIHANG UNIVERSITY
    Inventors: Guangjun Zhang, Zhenzhong Wei, Wei Wang
  • Publication number: 20130058581
    Abstract: The present disclosure provides a microscopic vision measurement method based on adaptive positioning of the camera coordinate frame, which includes: calibrating the parameters of a microscopic stereo vision measurement model (201); acquiring pairs of synchronized images and transmitting the acquired images to a computer through an image acquisition card (202); calculating the 3D coordinates of feature points in a scene according to the matched pairs of feature points obtained from the synchronized images and the calibrated parameters of the microscopic stereo vision measurement model (203); and performing the specific measurement according to the 3D coordinates of the feature points in the scene (204). With the method, the nonlinearity of the objective function in the microscopic vision calibration optimization is effectively reduced and a better calibration result is obtained.
    Type: Application
    Filed: April 25, 2011
    Publication date: March 7, 2013
    Applicant: BEIHANG UNIVERSITY
    Inventors: Guangjun Zhang, Zhenzhong Wei, Weixian Li, Zhipeng Cao, Yali Wang
  • Publication number: 20120162414
    Abstract: The present disclosure provides a global calibration method based on a rigid bar for a multi-sensor vision measurement system, comprising: step 1, executing the following procedure at least nine times: placing, in front of two vision sensors to be calibrated, a rigid bar fitted with two targets, each corresponding to one of the vision sensors; capturing images of the respective targets with their corresponding vision sensors; extracting the coordinates of the feature points of each target in its corresponding image; and computing the 3D coordinates of each feature point of each target in its corresponding vision sensor coordinate frame; and step 2, computing the transformation matrix between the two vision sensors under the constraint of the fixed positional relationship between the two targets. The present disclosure also provides a global calibration apparatus based on a rigid bar for a multi-sensor vision measurement system.
    Type: Application
    Filed: August 9, 2011
    Publication date: June 28, 2012
    Applicant: BEIHANG UNIVERSITY
    Inventors: Guangjun Zhang, Zhen Liu, Zhenzhong Wei, Junhua Sun, Meng Xie
  • Patent number: 8180101
    Abstract: This disclosure provides a calibration method for the structure parameters of a structured-light vision sensor. The method sets up the coordinate frames of the camera, the image plane, and the calibration target; computes, in the camera coordinate frame, the coordinates of the stripes projected by the structured light onto the planar target; and fits a structured-light equation to these coordinates as the planar target is moved arbitrarily multiple times. The calibration method provided by the disclosure is easy to operate and requires no auxiliary apparatus, which not only improves the efficiency of structured-light calibration but also extends its application scope. (A minimal light-plane fitting sketch of the equation-fitting step appears after the listing.)
    Type: Grant
    Filed: January 30, 2008
    Date of Patent: May 15, 2012
    Assignee: Beihang University
    Inventors: Junhua Sun, Guangjun Zhang, Zhenzhong Wei
  • Patent number: 8078025
    Abstract: The invention discloses a vehicle dynamic measurement device for comprehensive parameters of rail wear, which comprises a vision sensor, a computer, and a milometer. A high-speed image acquisition card and a measurement module are installed in the computer. The vision sensor comprises an imaging system for the rail cross-section and a raster projector that can project more than one light plane perpendicular to the measured rail. The measurement module calculates vertical wear, horizontal wear, and the amplitude and wavelength of corrugation wear. The invention also discloses a vehicle dynamic measurement method for comprehensive parameters of rail wear. The invention increases the effective sampling rate of the image sensing and acquisition hardware without requiring improved hardware performance, thereby satisfying the high-speed on-line dynamic measurement requirements for corrugation wear, and the amplitude and wavelength of corrugation wear can be calculated more precisely.
    Type: Grant
    Filed: October 25, 2008
    Date of Patent: December 13, 2011
    Assignee: Beihang University
    Inventors: Guangjun Zhang, Junhua Sun, Zhenzhong Wei, Fuqiang Zhou, Qingbo Li, Zhen Liu, Qianzhe Liu
  • Patent number: 8019147
    Abstract: The present disclosure is directed to a three-dimensional data registration method for vision measurement in flow style based on a double-sided target. An embodiment of the disclosed method comprises: A. setting up two digital cameras that can observe the entire measured object; B. calibrating the intrinsic parameters and the transformation between the two digital camera coordinate frames; C. placing a double-sided target near the measured area of the measured object, with the two digital cameras and a vision sensor taking images of at least three non-collinear feature points of the double-sided target; D. removing the target and measuring the measured area with the vision sensor; E. respectively computing the three-dimensional coordinates of the feature points in the global coordinate frame and in the vision sensor coordinate frame; F. (A minimal sketch of registering two coordinate frames from corresponding 3D points appears after the listing.)
    Type: Grant
    Filed: February 4, 2008
    Date of Patent: September 13, 2011
    Assignee: Beihang University
    Inventors: Guangjun Zhang, Junhua Sun, Zhenzhong Wei, Qianzhe Liu
  • Patent number: 7773797
    Abstract: The present invention relates to a high-performance computer vision system and method for measuring the wing deformation of free-flying insects with high flapping frequency, large stroke amplitude, and excellent mobility. A geometrical optic unit composed of a polyhedral reflector with four reflection planes and four planar reflectors is used to split one high-speed CMOS camera into four virtual cameras; combined with two laser-sheet sources, this makes multiple virtual stereo and structured-light sensors available to observe the insect's free flight from different viewpoints simultaneously. In addition, optoelectronic guiding equipment is included to guide the insect's free flight and to trigger the camera to capture image sequences of the flight automatically. The deformation of the insect wings can be reconstructed from the spatial coordinates of the wing edges and the distorted light lines projected on the wing surfaces.
    Type: Grant
    Filed: February 15, 2006
    Date of Patent: August 10, 2010
    Assignee: Beijing University of Aeronautics and Astronautics
    Inventors: Guangjun Zhang, Dazhi Chen, Ying Wang, Fuqiang Zhou, Zhenzhong Wei
  • Patent number: 7768527
    Abstract: The disclosure relates to a hardware-in-the-loop simulation system and method for computer vision. An embodiment of the disclosed system comprises a software simulation and a hardware simulation. In the software simulation, a virtual scene and an observed object are generated by virtual reality software, and virtual scene images are obtained from different viewpoints. In the hardware simulation, the virtual scene images are projected onto a screen by a projector, the projected scene images are shot by a camera, and the direction of the camera is controlled by a pan-tilt unit.
    Type: Grant
    Filed: November 20, 2006
    Date of Patent: August 3, 2010
    Assignee: Beihang University
    Inventors: Guangjun Zhang, Suqi Li, Zhenzhong Wei, Zhen Liu
  • Publication number: 20090112487
    Abstract: The invention discloses a vehicle dynamic measurement device for comprehensive parameters of rail wear, which comprises a vision sensor, a computer, and a milometer. A high-speed image acquisition card and a measurement module are installed in the computer. The vision sensor comprises an imaging system for the rail cross-section and a raster projector that can project more than one light plane perpendicular to the measured rail. The measurement module calculates vertical wear, horizontal wear, and the amplitude and wavelength of corrugation wear. The invention also discloses a vehicle dynamic measurement method for comprehensive parameters of rail wear. The invention increases the effective sampling rate of the image sensing and acquisition hardware without requiring improved hardware performance, thereby satisfying the high-speed on-line dynamic measurement requirements for corrugation wear, and the amplitude and wavelength of corrugation wear can be calculated more precisely.
    Type: Application
    Filed: October 25, 2008
    Publication date: April 30, 2009
    Applicant: BEIHANG UNIVERSITY
    Inventors: Guangjun Zhang, Junhua Sun, Zhenzhong Wei, Fuqiang Zhou, Qingbo Li, Zhen Liu, Qianzhe Liu
  • Publication number: 20090059011
    Abstract: This disclosure provides a calibration method for the structure parameters of a structured-light vision sensor. The method sets up the coordinate frames of the camera, the image plane, and the calibration target; computes, in the camera coordinate frame, the coordinates of the stripes projected by the structured light onto the planar target; and fits a structured-light equation to these coordinates as the planar target is moved arbitrarily multiple times. The calibration method provided by the disclosure is easy to operate and requires no auxiliary apparatus, which not only improves the efficiency of structured-light calibration but also extends its application scope.
    Type: Application
    Filed: January 30, 2008
    Publication date: March 5, 2009
    Inventors: Junhua Sun, Guangjun Zhang, Zhenzhong Wei
  • Publication number: 20080298673
    Abstract: The present disclosure is directed to a three-dimensional data registration method for vision measurement in flow style based on a double-sided target. An embodiment of the disclosed method comprises: A. setting up two digital cameras that can observe the entire measured object; B. calibrating the intrinsic parameters and the transformation between the two digital camera coordinate frames; C. placing a double-sided target near the measured area of the measured object, with the two digital cameras and a vision sensor taking images of at least three non-collinear feature points of the double-sided target; D. removing the target and measuring the measured area with the vision sensor; E. respectively computing the three-dimensional coordinates of the feature points in the global coordinate frame and in the vision sensor coordinate frame; F.
    Type: Application
    Filed: February 4, 2008
    Publication date: December 4, 2008
    Inventors: Guangjun Zhang, Junhua Sun, Zhenzhong Wei, Qianzhe Liu
  • Publication number: 20080050042
    Abstract: The disclosure relates to a hardware-in-the-loop simulation system and method for computer vision. An embodiment of the disclosed system comprises a software simulation and a hardware simulation. In the software simulation, a virtual scene and an observed object are generated by virtual reality software, and virtual scene images are obtained from different viewpoints. In the hardware simulation, the virtual scene images are projected onto a screen by a projector, the projected scene images are shot by a camera, and the direction of the camera is controlled by a pan-tilt unit.
    Type: Application
    Filed: November 20, 2006
    Publication date: February 28, 2008
    Inventors: Guangjun Zhang, Suqi Li, Zhenzhong Wei, Zhen Liu
  • Publication number: 20070183631
    Abstract: The present invention relates to a high-performance computer vision system and method for measuring the wing deformation of free-flying insects with high flapping frequency, large stroke amplitude, and excellent mobility. A geometrical optic unit composed of a polyhedral reflector with four reflection planes and four planar reflectors is used to split one high-speed CMOS camera into four virtual cameras; combined with two laser-sheet sources, this makes multiple virtual stereo and structured-light sensors available to observe the insect's free flight from different viewpoints simultaneously. In addition, optoelectronic guiding equipment is included to guide the insect's free flight and to trigger the camera to capture image sequences of the flight automatically. The deformation of the insect wings can be reconstructed from the spatial coordinates of the wing edges and the distorted light lines projected on the wing surfaces.
    Type: Application
    Filed: February 15, 2006
    Publication date: August 9, 2007
    Applicant: BEIJING UNIVERSITY OF AERONAUTICS AND ASTRONAUTICS
    Inventors: Guangjun Zhang, Dazhi Chen, Ying Wang, Fuqiang Zhou, Zhenzhong Wei
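
The abstracts for patent 9524555 and publication 20140321735 describe obtaining two candidate estimated poses and keeping the one with the smaller cost-function value. The sketch below illustrates only that selection step, assuming the two (R, t) candidates have already been produced; the extended TsPose algorithm itself is not reproduced, reprojection error stands in for the patent's unspecified cost function, and all function and variable names are hypothetical.

```python
# Minimal sketch: choose between two candidate poses by reprojection cost.
# `candidate_poses` is assumed to be a list of two (R, t) pairs.
import numpy as np

def reprojection_cost(R, t, model_points, image_points, K):
    """Sum of squared pixel errors of the model points under pose (R, t)."""
    cam = (R @ model_points.T + t.reshape(3, 1)).T   # model frame -> camera frame
    proj = (K @ cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]                # perspective division to pixels
    return float(np.sum((proj - image_points) ** 2))

def select_pose(candidate_poses, model_points, image_points, K):
    """Return the candidate pose with the smaller cost-function value."""
    costs = [reprojection_cost(R, t, model_points, image_points, K)
             for R, t in candidate_poses]
    return candidate_poses[int(np.argmin(costs))]
```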
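
Step (203) of the microscopic stereo vision abstracts (patent 8934721, publication 20130058581) computes 3D coordinates of matched feature points from the synchronized image pair and the calibrated model parameters. The sketch below uses standard linear (DLT) triangulation as a stand-in for that step; the 3x4 projection matrices and pixel-coordinate inputs are assumptions, not details taken from the patent.

```python
# Minimal sketch: linear (DLT) triangulation of one matched feature pair.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 camera projection matrices; x1, x2: matched pixel coords (u, v)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # homogeneous solution of A X = 0
    return X[:3] / X[3]              # Euclidean 3D point
```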
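
The structured-light calibration abstracts (patents 10690492 and 8180101) describe fitting a structured-light equation to stripe points expressed in the camera coordinate frame. The sketch below shows a generic least-squares light-plane fit, assuming the 3D stripe points have already been computed over several target placements; it illustrates only the equation-fitting step, not the patented calibration procedures.

```python
# Minimal sketch: fit a light plane n.x + d = 0 to 3D stripe points (Nx3 array).
import numpy as np

def fit_light_plane(stripe_points):
    """Least-squares plane through the stripe points via SVD."""
    centroid = stripe_points.mean(axis=0)
    _, _, Vt = np.linalg.svd(stripe_points - centroid)
    normal = Vt[-1]                  # direction of least variance = plane normal
    d = -normal @ centroid
    return normal, d                 # plane parameters (n, d)
```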
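
Patent 8019147 computes the 3D coordinates of at least three non-collinear feature points in both the global coordinate frame and the vision sensor coordinate frame, from which the registration between the two frames follows; the rigid-bar calibration of patent 8964027 likewise ends with a transformation matrix between two sensors. The sketch below is the standard SVD-based (Kabsch) absolute-orientation solution for corresponding 3D point sets, shown as a generic reference rather than as the patented procedures.

```python
# Minimal sketch: rigid transform (R, t) with points_b ~= R @ points_a + t,
# estimated from corresponding Nx3 point sets (N >= 3, non-collinear).
import numpy as np

def rigid_transform(points_a, points_b):
    ca, cb = points_a.mean(axis=0), points_b.mean(axis=0)          # centroids
    H = (points_a - ca).T @ (points_b - cb)                        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])    # avoid reflection
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t
```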