Patents by Inventor Zhenyu Wang

Zhenyu Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10681355
    Abstract: Disclosed are a describing method and a coding method for panoramic video ROIs based on multiple layers of spherical circumferences. The describing method comprises: first setting a center of the panoramic video ROIs; then setting the number of ROI layers as N; obtaining the size Rn of the current ROI layer based on a radius or angle; obtaining the sizes of all N ROI layers; and writing information such as the ROI center, the number of layers, and the size of each layer into a sequence header of a code stream. The coding method comprises adjusting or filtering an initial QP based on a QP adjustment value and then coding the image. By flexibly assigning code rates across the multiple layers of panoramic video ROIs, the code rate needed for coding and transmission is greatly reduced while a relatively high image quality is maintained in the ROIs.
    Type: Grant
    Filed: July 12, 2017
    Date of Patent: June 9, 2020
    Assignee: PEKING UNIVERSITY SHENZHEN GRADUATE SCHOOL
    Inventors: Zhenyu Wang, Ronggang Wang, Yueming Wang, Wen Gao
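Patent 10681355 above assigns code rates across N concentric spherical ROI layers via QP adjustment. Below is a minimal Python sketch of one way this could look, assuming the layer sizes Rn are angular radii on the sphere and treating the per-layer QP offsets as hypothetical values (the patent does not fix either choice):

```python
import numpy as np

def angular_distance(lon1, lat1, lon2, lat2):
    """Great-circle angle (radians) between two directions on the unit sphere."""
    cos_d = (np.sin(lat1) * np.sin(lat2)
             + np.cos(lat1) * np.cos(lat2) * np.cos(lon1 - lon2))
    return np.arccos(np.clip(cos_d, -1.0, 1.0))

def block_qp(base_qp, block_lonlat, roi_center, layer_radii, layer_offsets):
    """Return the QP for one coding block.

    layer_radii   : ascending angular radii R1..RN of the N ROI layers (radians)
    layer_offsets : QP offset applied inside each layer (negative = better quality)
    Blocks outside the outermost layer keep the base QP.
    """
    d = angular_distance(*block_lonlat, *roi_center)
    for radius, offset in zip(layer_radii, layer_offsets):
        if d <= radius:
            return base_qp + offset
    return base_qp

# Example: 3 layers; inner layers get a lower QP (higher quality).
print(block_qp(32, (0.10, 0.05), (0.0, 0.0),
               layer_radii=[0.2, 0.5, 1.0],
               layer_offsets=[-6, -3, -1]))   # -> 26
```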
  • Publication number: 20200175648
    Abstract: Disclosed is a panoramic image mapping method, wherein mapping regions and a non-mapping region are partitioned for an equirectangular panoramic image with a resolution of 2M×M, and only the partitioned mapping regions are mapped as square regions. The method comprises: computing a vertical distance and a horizontal distance from a point on the square region to the center of the square region, the larger of which is denoted as m; computing a distance n from the point to a zeroth (0th) point on a concentric square region; computing the longitude and latitude corresponding to the point; computing the corresponding position (X, Y) in the equirectangular panoramic image to which the point is mapped; and then assigning a value to the point. The method may effectively reduce oversampling, thereby effectively reducing the number of pixels of the panoramic image and the code rate required for coding, with little distortion.
    Type: Application
    Filed: December 13, 2016
    Publication date: June 4, 2020
    Inventors: Ronggang WANG, Yueming WANG, Zhenyu WANG, Wen GAO
  • Publication number: 20200160048
    Abstract: Disclosed is a method for detecting pedestrians in an image by using a Gaussian penalty. Initial pedestrian boundary boxes are screened using a Gaussian penalty to improve pedestrian detection performance, especially for sheltered (occluded) pedestrians in an image. The method includes: acquiring a training data set, a test data set, and pedestrian labels of a pedestrian detection image; training a detection model on the training data set using a pedestrian detection method, and acquiring initial pedestrian boundary boxes together with their confidence degrees and coordinates; performing a Gaussian penalty on the confidence degrees of the pedestrian boundary boxes to obtain the confidence degrees after the penalty; and obtaining final pedestrian boundary boxes by screening the pedestrian boundary boxes. Thus, repeated boundary boxes of a single pedestrian are removed while boundary boxes of sheltered pedestrians are preserved, thereby realizing the detection of pedestrians in an image.
    Type: Application
    Filed: November 24, 2017
    Publication date: May 21, 2020
    Inventors: Wenmin Wang, Peilei Dong, Mengdi Fan, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
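The Gaussian-penalty screening in publication 20200160048 above is closely related to soft-NMS with a Gaussian weighting. A minimal NumPy sketch under that reading (the box format, sigma, and score threshold are illustrative assumptions, not the patent's parameters):

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(np.asarray(box)) + area(boxes) - inter)

def gaussian_penalty_nms(boxes, scores, sigma=0.5, score_thresh=0.05):
    """Soft-NMS with a Gaussian penalty: overlapping boxes are down-weighted
    instead of removed outright, so boxes of occluded pedestrians can survive."""
    boxes = np.asarray(boxes, dtype=float).copy()
    scores = np.asarray(scores, dtype=float).copy()
    keep = []
    while len(boxes):
        i = scores.argmax()
        keep.append((boxes[i], scores[i]))
        box = boxes[i]
        boxes = np.delete(boxes, i, axis=0); scores = np.delete(scores, i)
        if len(boxes):
            scores *= np.exp(-iou(box, boxes) ** 2 / sigma)   # Gaussian penalty
            remain = scores > score_thresh                     # final screening
            boxes, scores = boxes[remain], scores[remain]
    return keep

# Example: two heavily overlapping detections plus one distinct one.
boxes = [[0, 0, 10, 20], [1, 0, 11, 20], [30, 0, 40, 20]]
scores = [0.9, 0.8, 0.7]
print([round(s, 3) for _, s in gaussian_penalty_nms(boxes, scores)])
# The overlapping box is kept with a reduced score rather than discarded.
```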
  • Patent number: 10658891
    Abstract: Embodiments describe a motor. The motor includes a stator and a rotor, the rotor being arranged within the stator. An end part of at least one air-gap slot of the rotor has an offset of a predetermined distance and/or a predetermined angle relative to a main body part immediately adjacent to the end part. With this offset configured at the end part of at least one air-gap slot of the rotor, the ripple torque of the motor is effectively reduced without increasing the complexity of the motor, stator, or rotor.
    Type: Grant
    Filed: December 28, 2016
    Date of Patent: May 19, 2020
    Assignee: DANFOSS (TIANJIN), LTD.
    Inventors: Wanzhen Liu, Li Yao, Yan Lin, Guangqiang Liu, Zhenyu Wang, Meng Wang, Weiping Tang
  • Publication number: 20200154111
    Abstract: The present disclosure provides an encoding method, a decoding method, an encoder, and a decoder. The encoding method comprises: performing interframe prediction on each interframe coded block to obtain corresponding interframe predicted blocks; writing information of each of the interframe predicted blocks into a code stream; if an interframe coded block exists at an adjacent position to the right of, beneath, or to the lower right of an intraframe coded block, performing intraframe prediction on the intraframe coded block based on at least one reconstructed coded block at an adjacent position to the left, above, and/or to the upper left of the intraframe coded block and at least one of the interframe coded blocks at adjacent positions to the right, beneath, and/or to the lower right of the intraframe coded block, to obtain intraframe predicted blocks; and writing information of each of the intraframe predicted blocks into the code stream.
    Type: Application
    Filed: July 17, 2019
    Publication date: May 14, 2020
    Inventors: Ronggang WANG, Yueming WANG, Zhenyu WANG, Wen GAO
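Publications 20200154111, 20190387234, and 20190373281 describe intraframe prediction that also uses reconstructed inter-coded neighbors to the right, below, or to the lower right. A toy sketch of one plausible form of such bidirectional prediction, interpolating per row between a left reference column and a right reference column (the weighting scheme is an assumption, not the patents' actual predictor):

```python
import numpy as np

def bidirectional_intra_predict(left_col, right_col, width):
    """Predict an H x width intra block by per-row linear interpolation between
    the reconstructed column on its left and the reconstructed column of the
    inter-coded block on its right."""
    left_col = np.asarray(left_col, dtype=float)[:, None]    # (H, 1)
    right_col = np.asarray(right_col, dtype=float)[:, None]  # (H, 1)
    w_right = (np.arange(1, width + 1) / (width + 1))[None, :]  # (1, W)
    return (1.0 - w_right) * left_col + w_right * right_col

# Example: 4x4 block between a flat left edge (100) and a brighter right edge (140).
print(bidirectional_intra_predict([100] * 4, [140] * 4, width=4).round(1))
```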
  • Publication number: 20200143511
    Abstract: Disclosed are a panoramic video forward mapping method and a panoramic video inverse mapping method, which relate to the field of virtual reality (VR) videos. In the present disclosure, the forward mapping method comprises: mapping, based on a main viewpoint, Areas I, II, and III on the sphere onto corresponding areas on the plane, wherein Area I corresponds to the area with an included angle of 0°–Z1, Area II corresponds to the area with an included angle of Z1–Z2, and Area III corresponds to the area with an included angle of Z2–180°. The panoramic video forward mapping method maps a spherical source corresponding to the panoramic image A onto a plane square image B; the panoramic video inverse mapping method maps the plane square image B back to the sphere for rendering and viewing.
    Type: Application
    Filed: August 4, 2017
    Publication date: May 7, 2020
    Inventors: Ronggang WANG, Yueming WANG, Zhenyu WANG, Wen GAO
  • Patent number: 10643299
    Abstract: Embodiments of the present disclosure provide a method for accelerating the CDVS extraction process on a GPGPU platform, wherein, for the feature detection and local descriptor computation stages of the CDVS extraction process, the operation logic and parallelism strategies of the respective inter-pixel parallel sub-procedures and inter-feature-point parallel sub-procedures are implemented with the OpenCL general-purpose parallel programming framework, and acceleration is achieved by leveraging a GPU's parallel computation capability. The method includes: partitioning computing tasks between a GPU and a CPU; reconstructing the image scale pyramid storage model; assigning parallelism strategies to the respective sub-procedures on the GPU; and applying local memory to mitigate the memory-access bottleneck. The technical solution of the present disclosure may accelerate the CDVS extraction process and significantly enhance extraction performance.
    Type: Grant
    Filed: December 5, 2016
    Date of Patent: May 5, 2020
    Assignee: PEKING UNIVERSITY SHENZHEN GRADUATE SCHOOL
    Inventors: Ronggang Wang, Shen Zhang, Zhenyu Wang, Wen Gao
  • Patent number: 10628924
    Abstract: Method and device for deblurring an out-of-focus blurred image: first, a preset blur kernel is used to carry out blurring processing on the original image to obtain a re-blurred image. Blur amounts of pixels in the edge area of the original image are estimated from the change of the image edge information during the blurring processing, to obtain a sparse blur amount diagram. Blur amounts of pixels in the non-edge area of the original image are then estimated from the sparse blur amount diagram to obtain a complete blur amount diagram. Deblurring processing is carried out according to the complete blur amount diagram to obtain a deblurred image. Because the blur amount diagram is derived from the change of edge information after re-blurring, it is more accurate, which improves the quality of the deblurred image.
    Type: Grant
    Filed: December 14, 2015
    Date of Patent: April 21, 2020
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Ronggang Wang, Xinxin Zhang, Zhenyu Wang, Wen Gao
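Patent 10628924 above estimates blur amounts from the change of edge information after re-blurring. The sketch below follows that idea using a common gradient-ratio heuristic to recover a per-pixel defocus sigma at edges; the specific estimator, edge threshold, and the propagation to non-edge pixels are assumptions, not the patent's formulas:

```python
import numpy as np
from scipy import ndimage

def sparse_blur_map(image, sigma0=1.0, edge_thresh=20.0):
    """Estimate a sparse defocus blur-amount map at edge pixels by re-blurring
    the image with a preset Gaussian kernel and comparing gradient magnitudes.
    Uses the common ratio heuristic sigma = sigma0 / sqrt(r^2 - 1), where
    r = |grad(original)| / |grad(re-blurred)| at an edge (an assumption here,
    not necessarily the patent's exact estimator)."""
    img = np.asarray(image, dtype=float)
    reblurred = ndimage.gaussian_filter(img, sigma0)   # preset blur kernel
    g0 = np.hypot(*np.gradient(img))
    g1 = np.hypot(*np.gradient(reblurred))
    edges = g0 > edge_thresh                           # crude edge detector
    ratio = g0 / np.maximum(g1, 1e-6)
    blur = np.zeros_like(img)
    valid = edges & (ratio > 1.0)
    blur[valid] = sigma0 / np.sqrt(ratio[valid] ** 2 - 1.0)
    return blur   # a dense map would be obtained by propagating these values
```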
  • Publication number: 20200112033
    Abstract: A platinum/black phosphorus@carbon sphere methanol fuel cell anode catalyst and a preparation method thereof, the method including the following steps: (1) dispersing a black phosphorus solid in an organic solvent to obtain a dispersion of single- or few-layer black phosphorus at a set concentration; (2) mixing the dispersion with glucose and stirring until dissolved; (3) performing a hydrothermal reaction on the solution to obtain an aqueous solution of a composite material with a carbon-core/black-phosphorus-shell structure; (4) uniformly mixing the aqueous solution with an ethylene glycol solution of sodium chloroplatinate, adjusting the pH, and then reducing platinum onto the surface by microwave irradiation heating; and (5) filtering, washing, and drying the obtained composite material to obtain the platinum/black phosphorus@carbon sphere composite material.
    Type: Application
    Filed: December 8, 2017
    Publication date: April 9, 2020
    Applicant: QINGDAO UNIVERSITY
    Inventors: Feifei ZHANG, Zonghua WANG, Zhenyu WANG, Xing LUO, Xiaofang DUAN
  • Publication number: 20200092471
    Abstract: Disclosed are a panoramic image mapping method and a corresponding inverse mapping method. In particular, the mapping process maps a panoramic image or the spherical surface corresponding to Video A as follows: first, the spherical surface is divided into three areas based on latitude, denoted as Area I, Area II, and Area III; the three areas are mapped to a square plane I′, a rectangular plane II′, and a square plane III′, respectively; then the planes I′, II′, and III′ are spliced into a single plane, and the resulting plane is the two-dimensional image or video B. Compared with the equirectangular mapping method, the method according to the present disclosure may effectively ameliorate oversampling in high-latitude areas and effectively lower the bit rate needed for coding as well as the complexity of decoding. The present disclosure relates to the field of virtual reality and may be applied to panoramic images and videos.
    Type: Application
    Filed: August 22, 2017
    Publication date: March 19, 2020
    Inventors: Ronggang WANG, Yueming WANG, Zhenyu WANG, Wen GAO
  • Publication number: 20200082165
    Abstract: A Collaborative Deep Network model method for pedestrian detection includes constructing a new collaborative multi-model learning framework to complete the classification process during pedestrian detection, and using an artificial neural network to integrate the judgment results of the sub-classifiers in the collaborative model, the network being trained by machine learning so that the information fed back by the sub-classifiers is synthesized more effectively. A re-sampling method based on the K-means clustering algorithm enhances the classification effect of each classifier in the collaborative model, thus improving the overall classification effect.
    Type: Application
    Filed: July 24, 2017
    Publication date: March 12, 2020
    Inventors: Wenmin Wang, Hongmeng Song, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
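Publication 20200082165 above combines K-means-based re-sampling with a neural network that fuses sub-classifier outputs. A scikit-learn sketch of those two ingredients (the cluster count, sample sizes, and MLP fusion architecture are illustrative assumptions; the collaborative deep model itself is not reproduced here):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def kmeans_resample(features, labels, k=8, per_cluster=100, seed=0):
    """Re-sample training data so each K-means cluster contributes equally,
    one way to diversify the data seen by each sub-classifier."""
    rng = np.random.default_rng(seed)
    clusters = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(features)
    idx = np.concatenate([
        rng.choice(np.flatnonzero(clusters == c), size=per_cluster, replace=True)
        for c in range(k)
    ])
    return features[idx], labels[idx]

def train_fusion_net(sub_scores, labels):
    """Fuse the sub-classifiers' scores (one column per sub-classifier) with a
    small neural network instead of simple voting or averaging."""
    fusion = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
    fusion.fit(sub_scores, labels)
    return fusion   # fusion.predict_proba(new_scores)[:, 1] -> pedestrian probability
```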
  • Publication number: 20200074593
    Abstract: Disclosed are a panoramic image mapping method, apparatus, and device. The method comprises: obtaining a to-be-mapped panoramic image; splitting the to-be-mapped panoramic image into three areas according to a first latitude and a second latitude, wherein the area corresponding to the latitude range from −90° to the first latitude is referred to as a first area, the area corresponding to the latitude range from the first latitude to the second latitude is referred to as a second area, and the area corresponding to the latitude range from the second latitude to 90° is referred to as a third area; mapping the first area to a first target image according to a first mapping method; mapping the second area to a second target image according to a second mapping method; mapping the third area to a third target image according to a third mapping method; and splicing the first target image, the second target image, and the third target image to obtain a two-dimensional plane image.
    Type: Application
    Filed: September 3, 2019
    Publication date: March 5, 2020
    Inventors: Ronggang WANG, Yueming WANG, Zhenyu WANG, Wen GAO
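Publication 20200074593 above splits an equirectangular image at two latitudes and maps each band with its own method before splicing. In the sketch below, the polar-band mappings are stood in by simple 2x horizontal subsampling, since the abstract does not fix the three mapping methods; only the split-map-splice structure is illustrated:

```python
import numpy as np

def split_map_splice(erp, lat1=-45.0, lat2=45.0):
    """Split an equirectangular image (rows = +90..-90 deg latitude) into three
    latitude bands, apply a per-band mapping, and splice the results.
    The polar bands' mapping methods are stood in by 2x horizontal subsampling
    (an illustrative choice, not the patent's), which reduces oversampling near
    the poles; the middle band is kept unchanged."""
    h, w = erp.shape[:2]
    row = lambda lat: int(round((90.0 - lat) / 180.0 * h))
    top    = erp[:row(lat2)]            # third area:  lat2 .. +90
    middle = erp[row(lat2):row(lat1)]   # second area: lat1 .. lat2
    bottom = erp[row(lat1):]            # first area: -90 .. lat1

    def polar_map(band):                # stand-in mapping: drop every other column
        reduced = band[:, ::2]
        out = np.zeros((band.shape[0], w), dtype=band.dtype)
        out[:, :reduced.shape[1]] = reduced   # pad so the bands can be spliced
        return out

    return np.vstack([polar_map(top), middle, polar_map(bottom)])

# Example: a 512x1024 grayscale panorama keeps its shape, but the polar bands
# now carry half as many meaningful samples per row.
print(split_map_splice(np.zeros((512, 1024), dtype=np.uint8)).shape)
```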
  • Publication number: 20200057935
    Abstract: A video action detection method based on a convolutional neural network (CNN) is disclosed in the field of computer vision recognition technologies. A temporal-spatial pyramid pooling layer is added to the network structure, which removes limitations on the network input, speeds up training and detection, and improves the performance of video action classification and temporal localization. The disclosed convolutional neural network includes a convolutional layer, a common pooling layer, a temporal-spatial pyramid pooling layer, and a fully connected layer. The outputs of the convolutional neural network include a category classification output layer and a temporal localization output layer. The disclosed method does not require down-sampling to obtain video clips of different durations; instead, the whole video is input directly at once, improving efficiency.
    Type: Application
    Filed: August 16, 2017
    Publication date: February 20, 2020
    Inventors: Wenmin Wang, Zhihao Li, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
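Publication 20200057935 above adds a temporal-spatial pyramid pooling layer so that whole videos of any duration can be fed to the network. A PyTorch sketch of such a layer built from adaptive pooling (the pyramid levels are illustrative, not the patent's configuration):

```python
import torch
import torch.nn.functional as F

def temporal_spatial_pyramid_pool(features, levels=((1, 1, 1), (2, 2, 2), (4, 2, 2))):
    """Pool a (N, C, T, H, W) feature map at several temporal-spatial grid sizes
    and concatenate the results, so clips of any length and size yield a
    fixed-length vector for the fully connected layers."""
    n, c = features.shape[:2]
    pooled = [F.adaptive_max_pool3d(features, level).reshape(n, -1) for level in levels]
    return torch.cat(pooled, dim=1)

# Example: two clips of different lengths produce the same descriptor size,
# 64 channels x (1 + 8 + 16) cells = 1600 values.
for t in (12, 37):
    x = torch.randn(1, 64, t, 14, 14)
    print(temporal_spatial_pyramid_pool(x).shape)   # torch.Size([1, 1600])
```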
  • Patent number: 10531093
    Abstract: A method and system for video frame interpolation based on an optical flow method are disclosed. The process includes: calculating bidirectional motion vectors between two adjacent frames in the frame sequence of the input video by using the optical flow method; judging the reliabilities of the bidirectional motion vectors between the two adjacent frames and handling the jaggedness and noise problems of the optical flow method; marking "shielding" and "exposure" regions in the two adjacent frames and updating unreliable motion vectors; mapping the front and back frames to an interpolated frame, according to the marking information of the "shielding" and "exposure" regions and the bidirectional motion vector fields, to obtain a forward interpolated frame and a backward interpolated frame; synthesizing the forward interpolated frame and the backward interpolated frame into the interpolated frame; and repairing hole points in the interpolated frame to obtain the final interpolated frame.
    Type: Grant
    Filed: May 25, 2015
    Date of Patent: January 7, 2020
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Chuanxin Tang, Ronggang Wang, Zhenyu Wang, Wen Gao
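Patent 10531093 above interpolates frames by bidirectional optical-flow mapping. The sketch below is a heavily simplified version of that idea at t = 0.5; it omits the reliability judgment, shielding/exposure marking, and hole repair that the patent handles, and assumes the flow field can be reused as an approximation of the intermediate-frame flow:

```python
import numpy as np

def interpolate_midframe(frame0, frame1, flow01):
    """Very simplified bidirectional interpolation at t = 0.5: each output pixel
    samples frame0 backwards along half the flow and frame1 forwards along the
    other half, and the two candidates are averaged."""
    h, w = frame0.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    u, v = flow01[..., 0], flow01[..., 1]           # horizontal, vertical flow

    def sample(img, x, y):                          # nearest-neighbour sampling
        x = np.clip(np.round(x).astype(int), 0, w - 1)
        y = np.clip(np.round(y).astype(int), 0, h - 1)
        return img[y, x]

    forward  = sample(frame0, xs - 0.5 * u, ys - 0.5 * v)
    backward = sample(frame1, xs + 0.5 * u, ys + 0.5 * v)
    return 0.5 * forward.astype(float) + 0.5 * backward.astype(float)
```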
  • Publication number: 20190387234
    Abstract: The present disclosure provides an encoding method, a decoding method, an encoder, and a decoder. The encoding method comprises: performing interframe prediction on each interframe coded block to obtain corresponding interframe predicted blocks; writing information of each of the interframe predicted blocks into a code stream; if an interframe coded block exists at an adjacent position to the right of, beneath, or to the lower right of an intraframe coded block, performing intraframe prediction on the intraframe coded block based on at least one reconstructed coded block at an adjacent position to the left, above, and/or to the upper left of the intraframe coded block and at least one of the interframe coded blocks at adjacent positions to the right, beneath, and/or to the lower right of the intraframe coded block, to obtain intraframe predicted blocks; and writing information of each of the intraframe predicted blocks into the code stream.
    Type: Application
    Filed: August 30, 2019
    Publication date: December 19, 2019
    Inventors: Ronggang WANG, Kui FAN, Zhenyu WANG, Wen GAO
  • Publication number: 20190373281
    Abstract: The present disclosure provides an encoding method, a decoding method, an encoder, and a decoder. The encoding method comprises: performing interframe prediction on each interframe coded block to obtain corresponding interframe predicted blocks; writing information of each of the interframe predicted blocks into a code stream; if an interframe coded block exists at an adjacent position to the right of, beneath, or to the lower right of an intraframe coded block, performing intraframe prediction on the intraframe coded block based on at least one reconstructed coded block at an adjacent position to the left, above, and/or to the upper left of the intraframe coded block and at least one of the interframe coded blocks at adjacent positions to the right, beneath, and/or to the lower right of the intraframe coded block, to obtain intraframe predicted blocks; and writing information of each of the intraframe predicted blocks into the code stream.
    Type: Application
    Filed: July 24, 2017
    Publication date: December 5, 2019
    Inventors: Ronggang WANG, Kui FAN, Zhenyu WANG, Wen GAO
  • Publication number: 20190354786
    Abstract: Techniques for determining lighting states of a tracked object, such as a vehicle, are discussed herein. An autonomous vehicle can include an image sensor to capture image data of an environment. Objects, such as vehicles, can be identified in the image data as objects to be tracked. Frames of the image data representing the tracked object can be selected and input to a machine learning algorithm (e.g., a convolutional neural network, a recurrent neural network, etc.) that is trained to determine probabilities associated with one or more lighting states of the tracked object. Such lighting states include, but are not limited to, blinker states, a brake state, a hazard state, etc. Based at least in part on the one or more probabilities associated with the one or more lighting states, the autonomous vehicle can determine a trajectory for the autonomous vehicle and/or a predicted trajectory for the tracked object.
    Type: Application
    Filed: May 17, 2018
    Publication date: November 21, 2019
    Inventors: Tencia Lee, Kai Zhenyu Wang, James William Vaisey Philbin
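Publication 20190354786 above predicts probabilities for lighting states of a tracked vehicle from image frames. A toy PyTorch sketch of such a classifier head (the state list, network size, and single-frame input are assumptions; the publication also contemplates recurrent models over frame sequences):

```python
import torch
import torch.nn as nn

# Hypothetical lighting states; the publication lists blinker, brake and hazard states.
STATES = ("left_blinker", "right_blinker", "brake", "hazard")

class LightingStateNet(nn.Module):
    """Toy classifier: a small CNN over a cropped image of the tracked vehicle,
    emitting an independent probability per lighting state (sigmoid, since
    several states can be active at once)."""
    def __init__(self, num_states=len(STATES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_states)

    def forward(self, crops):                 # crops: (N, 3, H, W)
        z = self.features(crops).flatten(1)
        return torch.sigmoid(self.head(z))    # (N, num_states) probabilities

# Example: probabilities for one 64x64 crop of a tracked vehicle (untrained weights).
probs = LightingStateNet()(torch.rand(1, 3, 64, 64))
print(dict(zip(STATES, probs[0].tolist())))
```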
  • Publication number: 20190346527
    Abstract: The invention relates to a method for the hyperpolarization of a material sample (4), which has a number of first spin moments (10) of a first spin moment type, wherein the number of first spin moments (10) is brought into interaction with a second spin moment (16) of a second spin moment type, wherein the first spin moments (10) are nuclear spin moments and the second spin moment (16) is an electron spin moment, wherein the first and second spin moments (10, 16) are exposed to a homogeneous magnetic field (B), wherein the second spin moment (16) is polarized along the magnetic field (B), wherein the second spin moment (16) is coherently manipulated by means of a, preferably repeated, sequence (S) having a number of successive high-frequency pulses (Pki, Pk′i) temporally offset from each other by durations (Tki, Tk′i, T), in such a way that a polarization transfer from the second spin moment (16) to the first spin moments (10) occurs, and wherein the durations (Tki, Tk′i, T) are inversely proportional to a Larmor frequency (
    Type: Application
    Filed: December 21, 2017
    Publication date: November 14, 2019
    Applicant: NVision Imaging Technologies GmbH
    Inventors: Ilai Schwartz, Martin Plenio, Qiong Chen, Zhenyu Wang
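Publication 20190346527 above ties the inter-pulse durations to the nuclear Larmor frequency set by the homogeneous field. Stated as a formula (the proportionality constant c depends on the particular pulse sequence and is not given in the truncated abstract):

```latex
% Nuclear Larmor frequency in the field B, and the stated scaling of the
% inter-pulse durations tau; gamma_n is the nuclear gyromagnetic ratio.
\omega_L = \gamma_n B, \qquad
\tau \propto \frac{1}{\omega_L} \;\Longrightarrow\; \tau = \frac{c}{\gamma_n B}
```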
  • Publication number: 20190336992
    Abstract: A saddle seal assembly for a high-pressure airless spray nozzle having a spray tip includes a metal sealing sleeve and a cylindrical elastic seal. The metal sealing sleeve may include a first saddle-shaped semi-cylinder surface closely matching with an outer surface of the spray tip to form an outer hard sealing structure. The cylindrical elastic seal may include a second saddle-shaped semi-cylinder surface closely matching with the outer surface of the spray tip to form an inner flexible sealing structure. A first end portion of the cylindrical elastic seal is configured to be inserted into the metal sealing sleeve, and the first saddle-shaped semi-cylinder surface and the second saddle-shaped semi-cylinder surface are configured to be spliced to form a continuous saddle-shaped semi-cylinder surface, to thereby seal a stepped inlet hole of the high-pressure airless spray nozzle.
    Type: Application
    Filed: February 19, 2019
    Publication date: November 7, 2019
    Inventors: Zhenyu Wang, Qinghua Li
  • Patent number: 10425656
    Abstract: A video encoding and decoding method, together with an inter-frame prediction method, device, and system thereof, are disclosed. The inter-frame prediction method includes: obtaining a motion vector of the current image block and the related spatial position of the current pixel; obtaining a motion vector of the current pixel according to the motion vector of the current image block and the related spatial position of the current pixel; and obtaining a predicted value of the current pixel according to the motion vector of the current pixel. The method considers both the motion vector of the current image block and the related spatial position of the current pixel during inter-frame prediction. It can therefore accommodate the lens distortion characteristics of different images and the zoom-in/zoom-out produced when objects move within the picture, thereby improving the calculation accuracy of pixel motion vectors and improving inter-frame prediction performance and compression efficiency in video encoding and decoding.
    Type: Grant
    Filed: January 19, 2016
    Date of Patent: September 24, 2019
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Zhenyu Wang, Ronggang Wang, Xiubao Jiang, Wen Gao
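Patent 10425656 above derives per-pixel motion vectors from a block motion vector plus each pixel's spatial position, to cope with lens distortion and zoom. The sketch below uses a simple radial zoom model as a stand-in, since the abstract does not specify the actual position-to-motion mapping; the function name, block layout, and zoom parameter are all illustrative assumptions:

```python
import numpy as np

def pixel_motion_vectors(block_mv, block_origin, block_size, center, zoom=0.0):
    """Derive a per-pixel motion vector inside one block from the block's motion
    vector and each pixel's spatial position. Illustrative model only: the block
    MV is augmented by a radial term proportional to the pixel's offset from the
    image center (a simple zoom-in/zoom-out model)."""
    bw, bh = block_size
    x0, y0 = block_origin
    ys, xs = np.mgrid[y0:y0 + bh, x0:x0 + bw]
    dx, dy = xs - center[0], ys - center[1]          # offset from optical center
    mv = np.empty((bh, bw, 2))
    mv[..., 0] = block_mv[0] + zoom * dx             # horizontal component
    mv[..., 1] = block_mv[1] + zoom * dy             # vertical component
    return mv

# Example: a 4x4 block whose per-pixel horizontal MVs vary slightly with position.
print(pixel_motion_vectors((2.0, 0.0), (100, 60), (4, 4),
                           center=(160, 120), zoom=0.01)[0, :, 0])
```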