Patents by Inventor Ronggang Wang

Ronggang Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200175648
    Abstract: Disclosed is a panoramic image mapping method, wherein mapping regions and a non-mapping region are partitioned for an equirectangular panoramic image with a resolution of 2M×M, and only the partitioned mapping regions are mapped as square regions. The method comprises: computing the vertical and horizontal distances from a point on the square region to the center of the square region, the larger of which is denoted m; computing the distance n from the point to the zeroth (0th) point on a concentric square region; computing the longitude and latitude corresponding to the point; computing the corresponding position (X, Y) in the equirectangular panoramic image to which the point is mapped; and then assigning a value to the point. The method can effectively reduce oversampling, thereby reducing the number of pixels in the panoramic image and the bit rate required for coding, with little distortion.
    Type: Application
    Filed: December 13, 2016
    Publication date: June 4, 2020
    Inventors: Ronggang WANG, Yueming WANG, Zhenyu WANG, Wen GAO
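
A minimal Python sketch of the point-wise arithmetic the abstract above describes, under one speculative reading: the larger center distance m selects a concentric ring (latitude circle) and the position n along that ring selects a longitude. The function name, the hemisphere assumption, and the exact longitude/latitude formulas are illustrative, not the patented mapping.

```python
import numpy as np

def square_to_equirect(i, j, N, W, H):
    """Map pixel (i, j) of an N x N square region to a position (X, Y)
    in a W x H equirectangular panorama (here: one hemisphere)."""
    c = (N - 1) / 2.0
    dy, dx = i - c, j - c
    m = max(abs(dx), abs(dy))          # ring index: larger of the two distances
    if m == 0:
        lon, lat = 0.0, np.pi / 2      # center of the square maps to the pole
    else:
        # n: distance along the concentric square from its "0th" point,
        # assumed here to be the top-middle, running clockwise; the
        # ring's total perimeter is 8 * m.
        if dy == -m:
            n = dx + m                 # top edge
        elif dx == m:
            n = 2 * m + (dy + m)       # right edge
        elif dy == m:
            n = 4 * m + (m - dx)       # bottom edge
        else:
            n = 6 * m + (m - dy)       # left edge
        lon = 2 * np.pi * n / (8 * m) - np.pi    # longitude in [-pi, pi)
        lat = (np.pi / 2) * (1 - m / c)          # latitude falls toward the rim
    X = (lon + np.pi) / (2 * np.pi) * (W - 1)
    Y = (np.pi / 2 - lat) / np.pi * (H - 1)
    return X, Y                        # the point is then assigned pano[Y, X]

X, Y = square_to_equirect(10, 200, 512, 4096, 2048)
```
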
  • Patent number: 10666292
    Abstract: The disclosure provides a method for compressing the grayscale compensation table of an OLED display panel, comprising: step 10, before a set of grayscale compensation tables of the OLED display panel is transmitted to an encoder for encoding, performing a differential calculation on the tables in the set that share a color channel but differ in gray scale, to acquire a corresponding reference image and difference images as replacements for those tables; step 20, transmitting the above images to the encoder; step 30, compressing and encoding the received data with the encoder. By taking differences between compensation tables of the same color component at different gray scales within the same OLED compensation table set, the method improves the efficiency and performance of compensation-table compression.
    Type: Grant
    Filed: November 30, 2017
    Date of Patent: May 26, 2020
    Assignee: SHENZHEN CHINA STAR OPTOELECTRONICS SEMICONDUCTOR DISPLAY TECHNOLOGY CO., LTD.
    Inventors: Yufan Deng, Mingjong Jou, Shensian Syu, Ronggang Wang, Kui Fan, Hao Li
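
A short Python sketch of the differential step in this abstract: tables of one color channel at different gray scales are replaced by a reference table plus near-zero difference tables before encoding. Taking the first table as the reference is an assumption for illustration; the patent only states that a reference image and difference images are formed.

```python
import numpy as np

def to_reference_and_diffs(tables):
    """Replace a stack of same-channel compensation tables by one
    reference table and per-gray-level difference tables."""
    ref = tables[0].astype(np.int16)
    diffs = [t.astype(np.int16) - ref for t in tables[1:]]
    return ref, diffs                  # these are what the encoder receives

def from_reference_and_diffs(ref, diffs):
    """Inverse step a decoder would run to recover every table."""
    return [ref] + [ref + d for d in diffs]

# Difference tables are mostly near zero, which a standard image
# encoder compresses far better than the raw tables.
tables = [np.random.randint(0, 256, (4, 4)) for _ in range(3)]
ref, diffs = to_reference_and_diffs(tables)
assert all(np.array_equal(a, b)
           for a, b in zip(tables, from_reference_and_diffs(ref, diffs)))
```
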
  • Publication number: 20200160048
    Abstract: Disclosed is a method for detecting pedestrians in an image by using a Gaussian penalty. Initial pedestrian boundary boxes are screened using the Gaussian penalty to improve pedestrian detection performance, especially for sheltered pedestrians. The method includes: acquiring a training data set, a test data set, and pedestrian labels for a pedestrian detection image; training a detection model on the training data set with a pedestrian detection method, and acquiring initial pedestrian boundary boxes together with their confidence degrees and coordinates; applying the Gaussian penalty to the confidence degrees of the boundary boxes to obtain penalized confidence degrees; and obtaining the final pedestrian boundary boxes by screening. Repeated boundary boxes of a single pedestrian are thus removed while boundary boxes of sheltered pedestrians are retained, thereby realizing the detection of pedestrians in an image.
    Type: Application
    Filed: November 24, 2017
    Publication date: May 21, 2020
    Inventors: Wenmin Wang, Peilei Dong, Mengdi Fan, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
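
A compact Python sketch of the Gaussian confidence penalty described above, written as a Gaussian soft-suppression loop: instead of deleting boxes that overlap a stronger detection, their confidence decays by exp(-IoU²/σ). The σ and threshold values are illustrative, not from the patent.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def gaussian_penalty_screen(boxes, scores, sigma=0.5, thresh=0.05):
    """Decay, rather than delete, boxes overlapping a stronger box.
    Heavy overlaps (duplicates of one pedestrian) fade away, while the
    moderate overlaps typical of sheltered pedestrians survive."""
    boxes, scores = list(boxes), list(scores)
    keep = []
    while boxes:
        i = int(np.argmax(scores))
        best, s = boxes.pop(i), scores.pop(i)
        keep.append((best, s))
        scores = [sc * np.exp(-iou(best, b) ** 2 / sigma)
                  for b, sc in zip(boxes, scores)]
    return [(b, s) for b, s in keep if s >= thresh]
```
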
  • Publication number: 20200154111
    Abstract: The present disclosure provides an encoding method, a decoding method, an encoder, and a decoder. The encoding method comprises: performing interframe prediction on each interframe coded block to obtain corresponding interframe predicted blocks; writing information of each interframe predicted block into a code stream; if an interframe coded block exists at an adjacent position to the right of, beneath, or to the lower right of an intraframe coded block, performing intraframe prediction on the intraframe coded block, based on at least one reconstructed coded block at adjacent positions to the left and/or above and/or to the upper left of the intraframe coded block and at least one interframe coded block at adjacent positions to the right and/or beneath and/or to the lower right of the intraframe coded block, to obtain intraframe predicted blocks; and writing information of each intraframe predicted block into the code stream.
    Type: Application
    Filed: July 17, 2019
    Publication date: May 14, 2020
    Inventors: Ronggang WANG, Yueming WANG, Zhenyu WANG, Wen GAO
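
A small Python sketch of the idea in this abstract: once the neighbors to the right of, beneath, or to the lower right of an intra block have inter predictions available, the intra block can be predicted from reference pixels on all four sides instead of only the causal two. The plain bilinear blend below is an illustrative predictor, not the patented one.

```python
import numpy as np

def four_side_intra_predict(left, above, right, below):
    """Predict an N x N intra block from four reference lines:
    `left`/`above` from reconstructed (causal) neighbors,
    `right`/`below` from already inter-predicted neighbors."""
    N = left.size
    pred = np.empty((N, N))
    for y in range(N):
        for x in range(N):
            wx, wy = (x + 1) / (N + 1), (y + 1) / (N + 1)
            h = (1 - wx) * left[y] + wx * right[y]    # horizontal blend
            v = (1 - wy) * above[x] + wy * below[x]   # vertical blend
            pred[y, x] = (h + v) / 2
    return pred

blk = four_side_intra_predict(np.full(8, 100.0), np.full(8, 120.0),
                              np.full(8, 140.0), np.full(8, 160.0))
```
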
  • Publication number: 20200143511
    Abstract: Disclosed are a panoramic video forward mapping method and a panoramic video inverse mapping method, which relate to the field of virtual reality (VR) videos. In the present disclosure, the forward mapping method comprises: mapping, based on a main viewpoint, Areas I, II, and III on the sphere onto corresponding areas on the plane, wherein Area I corresponds to the area with included angle 0°~Z1, Area II to the area with included angle Z1~Z2, and Area III to the area with included angle Z2~180°. The forward mapping method maps a spherical source corresponding to panoramic image A onto a plane square image B; the inverse mapping method maps the plane square image B back onto the sphere to be rendered and viewed.
    Type: Application
    Filed: August 4, 2017
    Publication date: May 7, 2020
    Inventors: Ronggang WANG, Yueming WANG, Zhenyu WANG, Wen GAO
  • Patent number: 10643299
    Abstract: Embodiments of the present disclosure provide a method for accelerating the CDVS extraction process on a GPGPU platform. For the feature detection and local descriptor computation stages of CDVS extraction, the operation logic and parallelism strategies of the inter-pixel-parallel and inter-feature-point-parallel sub-procedures are implemented with the OpenCL general-purpose parallel programming framework, and acceleration is achieved by leveraging the GPU's parallel computation capability. The method includes: partitioning computing tasks between the GPU and the CPU; reconstructing the image scale pyramid storage model; assigning parallelism strategies to the respective sub-procedures on the GPU; and applying local memory to mitigate the memory-access bottleneck. The technical solution of the present disclosure accelerates the CDVS extraction process and significantly enhances extraction performance.
    Type: Grant
    Filed: December 5, 2016
    Date of Patent: May 5, 2020
    Assignee: PEKING UNIVERSITY SHENZHEN GRADUATE SCHOOL
    Inventors: Ronggang Wang, Shen Zhang, Zhenyu Wang, Wen Gao
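
A minimal pyopencl sketch of the inter-pixel parallelism this patent leverages: one OpenCL work-item per pixel computes a difference-of-Gaussians response, the kind of per-pixel stage CDVS feature detection spends time in. The kernel, buffer names, and image size are illustrative; the patent's actual task partitioning, pyramid storage model, and local-memory usage are not reproduced.

```python
import numpy as np
import pyopencl as cl

src = """
__kernel void dog(__global const float *a, __global const float *b,
                  __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] - b[gid];   // one pixel per work-item
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
prg = cl.Program(ctx, src).build()

# Stand-ins for two Gaussian-blurred scales of a 512 x 512 image.
blur1 = np.random.rand(512 * 512).astype(np.float32)
blur2 = np.random.rand(512 * 512).astype(np.float32)
mf = cl.mem_flags
a = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=blur1)
b = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=blur2)
out = cl.Buffer(ctx, mf.WRITE_ONLY, blur1.nbytes)

prg.dog(queue, blur1.shape, None, a, b, out)   # launch 512*512 work-items
res = np.empty_like(blur1)
cl.enqueue_copy(queue, res, out)
```
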
  • Patent number: 10628924
    Abstract: Method and device for deblurring an out-of-focus blurred image: first, a preset blur kernel is used to blur an original image, yielding a re-blurred image. Blur amounts of pixels in the edge area of the original image are estimated from the change of edge information during this re-blurring, giving a sparse blur amount map. Blur amounts of pixels in the non-edge area are then estimated from the sparse map to obtain a complete blur amount map, and deblurring is carried out according to the complete map to obtain a deblurred image. Because the blur amount map is derived from the change of edge information after re-blurring, it is more accurate, which improves the quality of the deblurred image.
    Type: Grant
    Filed: December 14, 2015
    Date of Patent: April 21, 2020
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Ronggang Wang, Xinxin Zhang, Zhenyu Wang, Wen Gao
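
A Python sketch of the sparse blur-amount estimation step. One standard reading of the re-blur trick (not necessarily the patent's exact formula): for an ideal step edge blurred by an unknown σ and re-blurred by a known σ₀, the gradient magnitude ratio R gives σ = σ₀/√(R² − 1).

```python
import numpy as np
from scipy import ndimage

def sparse_blur_map(img, sigma0=1.0, edge_thresh=0.05):
    """Estimate per-pixel defocus blur on edges of a grayscale float
    image by comparing gradients before and after a known re-blur."""
    reblurred = ndimage.gaussian_filter(img, sigma0)
    g  = np.hypot(*np.gradient(img))         # gradient magnitude, original
    gr = np.hypot(*np.gradient(reblurred))   # gradient magnitude, re-blurred
    ratio = g / (gr + 1e-9)
    sigma = np.where(ratio > 1,
                     sigma0 / np.sqrt(np.maximum(ratio ** 2 - 1, 1e-9)),
                     0.0)
    edges = g > edge_thresh * g.max()        # keep estimates only on edges
    return np.where(edges, sigma, 0.0)       # the sparse blur amount map

# The complete map of the abstract is then obtained by propagating
# these edge estimates into non-edge areas (e.g. with a guided filter).
```
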
  • Publication number: 20200099911
    Abstract: Disclosed is a virtual viewpoint synthesis method based on image local segmentation, which relates to the digital image processing technology.
    Type: Application
    Filed: August 14, 2017
    Publication date: March 26, 2020
    Inventors: Ronggang WANG, Xiubao JIANG, Wen GAO
  • Publication number: 20200092471
    Abstract: Disclosed are a panoramic image mapping method and a corresponding reverse mapping method. In particular, the mapping process maps a panoramic image, or the spherical surface corresponding to Video A, as follows: first, the spherical surface is divided into three areas based on latitude, denoted Area I, Area II, and Area III; the three areas are mapped to a square plane I′, a rectangular plane II′, and a square plane III′, respectively; then the planes I′, II′, and III′ are spliced into a single plane, the result being the two-dimensional image or video B. Compared with the equirectangular mapping method, the method according to the present disclosure may effectively ameliorate oversampling in high-latitude areas and effectively lower the bit rate needed for coding and the complexity of decoding. The present disclosure relates to the field of virtual reality and may be applied to panoramic images and videos.
    Type: Application
    Filed: August 22, 2017
    Publication date: March 19, 2020
    Inventors: Ronggang WANG, Yueming WANG, Zhenyu WANG, Wen GAO
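
A rough Python sketch of the split-and-splice layout in this abstract: the two high-latitude caps are resampled onto small squares (reducing their oversampled pixel budget) and the mid band is kept rectangular, then the three planes are concatenated. Nearest-neighbor resampling and 45° cut latitudes are stand-ins; the patent's actual area-to-plane projections are not reproduced.

```python
import numpy as np

def three_area_layout(pano, lat1=45.0, lat2=-45.0, face=256):
    """Split an equirectangular panorama at lat1/lat2 into Areas I, II,
    III, map I and III to square planes and II to a rectangle, splice."""
    H, W = pano.shape[:2]
    row = lambda lat: int((90.0 - lat) / 180.0 * (H - 1))
    r1, r2 = row(lat1), row(lat2)
    area1, area2, area3 = pano[:r1], pano[r1:r2], pano[r2:]

    def to_square(a):                       # crude nearest-neighbor resample
        ys = np.linspace(0, a.shape[0] - 1, face).astype(int)
        xs = np.linspace(0, a.shape[1] - 1, face).astype(int)
        return a[np.ix_(ys, xs)]

    sq1, sq3 = to_square(area1), to_square(area3)
    ys = np.linspace(0, area2.shape[0] - 1, face).astype(int)
    band = area2[ys]                        # mid band, height-matched
    return np.concatenate([sq1, band, sq3], axis=1)   # spliced plane
```
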
  • Publication number: 20200082165
    Abstract: A Collaborative Deep Network model method for pedestrian detection includes constructing a new collaborative multi-model learning framework to complete the classification process during pedestrian detection, and using an artificial neural network to integrate the judgment results of the sub-classifiers in the collaborative model, training that network by machine learning so that the information fed back by the sub-classifiers is synthesized more effectively. A re-sampling method based on the K-means clustering algorithm enhances the classification effect of each classifier in the collaborative model and thus improves the overall classification effect.
    Type: Application
    Filed: July 24, 2017
    Publication date: March 12, 2020
    Inventors: Wenmin Wang, Hongmeng Song, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
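
A brief Python sketch of the K-means re-sampling step named in the abstract: the training pool is clustered, then each sub-classifier draws a balanced sample across clusters so that no region of the sample space dominates its training set. Cluster count and sample size are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_resample(X, n_clusters=5, per_cluster=100, seed=0):
    """Return a re-sampled training set with equally many samples
    drawn from each K-means cluster of X."""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(X)
    picks = [rng.choice(np.flatnonzero(labels == c),
                        size=per_cluster, replace=True)
             for c in range(n_clusters)]
    return X[np.concatenate(picks)]

# Each sub-classifier in the collaborative model would be trained on
# its own call to kmeans_resample with a different seed.
```
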
  • Publication number: 20200074593
    Abstract: Disclosed are a panoramic image mapping method, apparatus, and device. The method comprises: obtaining a to-be-mapped panoramic image; splitting the to-be-mapped panoramic image into three areas according to a first latitude and a second latitude, wherein the area corresponding to the latitude range from −90° to the first latitude is referred to as a first area, the area corresponding to the latitude range from the first latitude to the second latitude is referred to as a second area, and the area corresponding to the latitude range from the second latitude to 90° is referred to as a third area; mapping the first area to a first target image according to a first mapping method; mapping the second area to a second target image according to a second mapping method; mapping the third area to a third target image according to a third mapping method; and splicing the first target image, the second target image, and the third target image to obtain a two-dimensional plane image.
    Type: Application
    Filed: September 3, 2019
    Publication date: March 5, 2020
    Inventors: Ronggang WANG, Yueming WANG, Zhenyu WANG, Wen GAO
  • Publication number: 20200057935
    Abstract: A video action detection method based on a convolutional neural network (CNN) is disclosed in the field of computer vision recognition technologies. A temporal-spatial pyramid pooling layer is added to the network structure, which removes limitations on network input, speeds up training and detection, and improves the performance of video action classification and temporal localization. The disclosed convolutional neural network includes a convolutional layer, a common pooling layer, a temporal-spatial pyramid pooling layer, and a fully connected layer; its outputs include a category classification output layer and a temporal localization output layer. The disclosed method does not require down-sampling to obtain video clips of different durations; instead, the whole video is input directly at once, improving efficiency.
    Type: Application
    Filed: August 16, 2017
    Publication date: February 20, 2020
    Inventors: Wenmin Wang, Zhihao Li, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
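
A small Python sketch of why a temporal-spatial pyramid pooling layer yields a fixed-length output for any clip duration. Pooling each pyramid level only along time, with global spatial pooling inside each bin, is an illustrative simplification of a full temporal-spatial pyramid.

```python
import numpy as np

def temporal_pyramid_pool(feat, levels=(1, 2, 4)):
    """Pool a C x T x H x W feature volume into a fixed-length vector:
    each level cuts the temporal axis into that many bins, and each
    bin is max-pooled over its time slice and all spatial positions."""
    C, T, H, W = feat.shape
    out = []
    for n in levels:
        bounds = np.linspace(0, T, n + 1).astype(int)
        for b in range(n):
            lo = bounds[b]
            hi = max(bounds[b + 1], lo + 1)                 # bins never empty
            out.append(feat[:, lo:hi].max(axis=(1, 2, 3)))  # C values per bin
    return np.concatenate(out)    # length C * sum(levels), whatever T, H, W

v = temporal_pyramid_pool(np.random.rand(64, 37, 7, 7))
assert v.shape == (64 * (1 + 2 + 4),)   # same length for any duration
```
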
  • Patent number: 10531093
    Abstract: A method and system for video frame interpolation based on an optical flow method is disclosed. The process includes: calculating bidirectional motion vectors between two adjacent frames of the input video by the optical flow method; judging the reliabilities of the bidirectional motion vectors and handling the jagged-edge and noise problems of the optical flow method; marking "shielding" and "exposure" regions in the two adjacent frames and updating unreliable motion vectors; mapping, according to the marking information on the "shielding" and "exposure" regions and the bidirectional motion vector field, the front and back frames onto the interpolated frame to obtain a forward interpolated frame and a backward interpolated frame; synthesizing the forward and backward interpolated frames into the interpolated frame; and repairing hole points in the interpolated frame to obtain the final interpolated frame.
    Type: Grant
    Filed: May 25, 2015
    Date of Patent: January 7, 2020
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Chuanxin Tang, Ronggang Wang, Zhenyu Wang, Wen Gao
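
A bare-bones Python sketch of the bidirectional mapping step: both adjacent frames are forward-warped halfway along their motion vectors, the two warped frames are averaged, and hole points are repaired (here with the frame average). The reliability judging and "shielding"/"exposure" marking of the patent are omitted.

```python
import numpy as np

def interpolate_midframe(prev, nxt, flow_fw, flow_bw):
    """Synthesize the frame halfway between grayscale frames `prev`
    and `nxt` from bidirectional optical flow (H x W x 2 offsets)."""
    H, W = prev.shape
    acc, cnt = np.zeros((H, W)), np.zeros((H, W))
    for src, flow in ((prev, flow_fw), (nxt, flow_bw)):
        for y in range(H):
            for x in range(W):
                ty = int(round(y + 0.5 * flow[y, x, 1]))  # scale MV to t=0.5
                tx = int(round(x + 0.5 * flow[y, x, 0]))
                if 0 <= ty < H and 0 <= tx < W:
                    acc[ty, tx] += src[y, x]
                    cnt[ty, tx] += 1
    # Hole points (reached by neither mapping) get the frame average.
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), (prev + nxt) / 2)
```
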
  • Publication number: 20190387234
    Abstract: The present disclosure provides an encoding method, a decoding method, an encoder, and a decoder. The encoding method comprises: performing interframe prediction on each interframe coded block to obtain corresponding interframe predicted blocks; writing information of each interframe predicted block into a code stream; if an interframe coded block exists at an adjacent position to the right of, beneath, or to the lower right of an intraframe coded block, performing intraframe prediction on the intraframe coded block, based on at least one reconstructed coded block at adjacent positions to the left and/or above and/or to the upper left of the intraframe coded block and at least one interframe coded block at adjacent positions to the right and/or beneath and/or to the lower right of the intraframe coded block, to obtain intraframe predicted blocks; and writing information of each intraframe predicted block into the code stream.
    Type: Application
    Filed: August 30, 2019
    Publication date: December 19, 2019
    Inventors: Ronggang WANG, Kui FAN, Zhenyu WANG, Wen GAO
  • Publication number: 20190387210
    Abstract: Disclosed are a method, apparatus, and device for synthesizing virtual viewpoint images.
    Type: Application
    Filed: August 28, 2019
    Publication date: December 19, 2019
    Inventors: Ronggang WANG, Xiubao JIANG, Wen GAO
  • Publication number: 20190373281
    Abstract: The present disclosure provides an encoding method, a decoding method, an encoder, and a decoder. The encoding method comprises: performing interframe prediction on each interframe coded block to obtain corresponding interframe predicted blocks; writing information of each interframe predicted block into a code stream; if an interframe coded block exists at an adjacent position to the right of, beneath, or to the lower right of an intraframe coded block, performing intraframe prediction on the intraframe coded block, based on at least one reconstructed coded block at adjacent positions to the left and/or above and/or to the upper left of the intraframe coded block and at least one interframe coded block at adjacent positions to the right and/or beneath and/or to the lower right of the intraframe coded block, to obtain intraframe predicted blocks; and writing information of each intraframe predicted block into the code stream.
    Type: Application
    Filed: July 24, 2017
    Publication date: December 5, 2019
    Inventors: Ronggang WANG, Kui FAN, Zhenyu WANG, Wen GAO
  • Publication number: 20190311524
    Abstract: Embodiments of the present disclosure provide a method and apparatus for real-time virtual viewpoint synthesis. Unlike the prior art, the method and apparatus do not rely on depth maps at any point in the synthesis of virtual viewpoint images, and thus effectively avoid the problems incurred by depth-image-based rendering.
    Type: Application
    Filed: July 22, 2016
    Publication date: October 10, 2019
    Inventors: Ronggang WANG, Jiajia LUO, Xiubao JIANG, Wen GAO
  • Patent number: 10424052
    Abstract: An image representation method and processing device based on local PCA whitening. A first mapping module maps words and features to a high-dimensional space. A principal component analysis module conducts principal component analysis in each corresponding word space to obtain a projection matrix. A VLAD computation module computes a VLAD image representation vector, and a second mapping module maps that vector to the high-dimensional space. A projection transformation module applies the projection to the mapped VLAD vector, and a normalization processing module normalizes the projected features to obtain the final image representation vector.
    Type: Grant
    Filed: September 15, 2015
    Date of Patent: September 24, 2019
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Wenmin Wang, Mingmin Zhen, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Wen Gao
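
A condensed Python sketch of VLAD with local (per-word) PCA whitening as the abstract's modules suggest: residuals are aggregated per codebook word, each word's block is transformed by its own PCA projection, and the result is power- and L2-normalized. The codebook and the per-word projection matrices are assumed to be learned offline and passed in.

```python
import numpy as np

def vlad_local_pca(descs, centers, proj):
    """descs: N x D local descriptors; centers: K x D codebook;
    proj: list of K (D x D) per-word PCA whitening matrices."""
    K, D = centers.shape
    assign = np.argmin(((descs[:, None, :] - centers[None]) ** 2).sum(-1),
                       axis=1)                       # nearest word per descriptor
    vlad = np.zeros((K, D))
    for k in range(K):
        if np.any(assign == k):
            vlad[k] = (descs[assign == k] - centers[k]).sum(0)  # residual sum
        vlad[k] = proj[k] @ vlad[k]                  # local PCA whitening
    v = vlad.ravel()
    v = np.sign(v) * np.sqrt(np.abs(v))              # power normalization
    return v / (np.linalg.norm(v) + 1e-12)           # final L2 normalization
```
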
  • Patent number: 10425656
    Abstract: A video encoding and decoding method, together with its inter-frame prediction method, device, and system, is disclosed. The inter-frame prediction method includes: obtaining a motion vector of the current image block and the spatial position of a current pixel; obtaining a motion vector of the current pixel according to the motion vector of the current image block and the spatial position of the current pixel; and obtaining a predicted value of the current pixel according to the motion vector of the current pixel. Because the method considers both the motion vector of the current image block and the spatial position of the current pixel during inter-frame prediction, it can accommodate the lens distortion characteristics of different images and the zoom-in/zoom-out produced when objects move in the picture, thereby improving the accuracy of pixel motion vectors and improving inter-frame prediction performance and compression efficiency in video encoding and decoding.
    Type: Grant
    Filed: January 19, 2016
    Date of Patent: September 24, 2019
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Zhenyu Wang, Ronggang Wang, Xiubao Jiang, Wen Gao
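
A tiny Python sketch of deriving a per-pixel motion vector from the block motion vector plus the pixel's spatial position. The linear zoom-about-center model and its coefficient are assumptions for illustration; the patent only states that the two inputs are combined.

```python
import numpy as np

def pixel_motion_vector(block_mv, px, py, cx, cy, zoom=0.001):
    """Per-pixel MV = block MV plus a position-dependent term: pixels
    farther from the frame center (cx, cy) move more, mimicking lens
    distortion and the scaling of approaching/receding objects."""
    mvx, mvy = block_mv
    return (mvx + zoom * (px - cx), mvy + zoom * (py - cy))

def predict_pixel(ref, x, y, mv):
    """Motion-compensated prediction of pixel (x, y) from frame `ref`."""
    H, W = ref.shape
    sx = min(max(int(round(x + mv[0])), 0), W - 1)
    sy = min(max(int(round(y + mv[1])), 0), H - 1)
    return ref[sy, sx]
```
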
  • Patent number: 10425640
    Abstract: A method, a device, and an encoder for controlling the filtering of intra-frame prediction reference pixel points are disclosed. The method includes: when the reference pixel points in a reference pixel group of an intra-frame block to be predicted are filtered and the target reference pixel point currently to be filtered is not an edge reference pixel point of the reference pixel group (S202), acquiring a pixel difference value between the target reference pixel point and its n adjacent reference pixel points (S203); and selecting a filter whose filtering grade corresponds to the pixel difference value to filter the target reference pixel point (S204). For reference pixel points not located at an edge of a reference pixel group, filters with corresponding filtering grades are configured flexibly according to the local difference characteristics of those points, giving the filtering flexibility and adaptivity and achieving a better effect.
    Type: Grant
    Filed: June 16, 2016
    Date of Patent: September 24, 2019
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Ronggang Wang, Kui Fan, Zhenyu Wang, Wen Gao
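
A short Python sketch of the grade-selection rule in this abstract: each non-edge reference pixel measures its difference to its neighbors and is filtered with a correspondingly strong, weak, or null filter. Thresholds and filter taps are illustrative, not the patented grades.

```python
import numpy as np

def filter_reference_pixels(refs, weak_thresh=10, strong_thresh=40):
    """Adaptively filter a 1-D line of intra reference pixels; the two
    edge pixels of the line are left untouched, as the method only
    targets non-edge reference pixels."""
    out = refs.astype(np.float64)
    for i in range(1, len(refs) - 1):
        diff = max(abs(int(refs[i]) - int(refs[i - 1])),
                   abs(int(refs[i]) - int(refs[i + 1])))
        if diff < weak_thresh:        # locally flat: strong smoothing
            out[i] = (refs[i - 1] + 2 * refs[i] + refs[i + 1]) / 4
        elif diff < strong_thresh:    # moderate difference: weak smoothing
            out[i] = (refs[i - 1] + 6 * refs[i] + refs[i + 1]) / 8
        # else: strong local edge in the references, keep unfiltered
    return out
```
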