Patents by Inventor Ronggang Wang

Ronggang Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10424054
    Abstract: A low-illumination image processing method and device address the problem of noise amplification that arises when existing contrast enhancement techniques are applied to an original low-illumination image. A noise suppression filter is additionally arranged before the contrast enhancement operation, and smoothing processing is performed on an inverse color image of the low-illumination image using a first filtering coefficient and a second filtering coefficient, so that image contrast is enhanced while random noise is suppressed. Texture and noise level parameters of the image are calculated from local characteristics within each block of the image. Weighted averaging is then performed on the first smoothed image and the second smoothed image according to the texture and noise level parameters.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: September 24, 2019
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Lin Li, Ronggang Wang, Chengzhou Tang, Zhenyu Wang, Wen Gao
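The two-coefficient smoothing and texture-weighted fusion described in the abstract can be sketched as follows; the box filters, the local-variance texture measure, and the weighting function are illustrative stand-ins, not the patented coefficients:

```python
import numpy as np

def box_blur(img, r):
    """Mean filter over a (2r+1)^2 window with reflect padding; stands in
    for smoothing with one of the two filtering coefficients."""
    pad = np.pad(img, r, mode="reflect")
    k = 2 * r + 1
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def enhance_low_light(img):
    """Smooth the inverse-color image twice and fuse by local texture."""
    inv = 1.0 - img                       # inverse color image
    s1 = box_blur(inv, 1)                 # weak smoothing (first coefficient)
    s2 = box_blur(inv, 3)                 # strong smoothing (second coefficient)
    # local variance as a crude texture / noise-level estimate per region
    var = box_blur(inv * inv, 2) - box_blur(inv, 2) ** 2
    w = var / (var + 1e-3)                # textured areas favor weak smoothing
    fused = w * s1 + (1.0 - w) * s2       # weighted averaging of the two images
    return 1.0 - fused                    # back to the original color domain
```

Flat (texture-free) regions receive the strong filter, so random noise is suppressed where it is most visible.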
  • Publication number: 20190289298
    Abstract: Embodiments of the present disclosure disclose a quick intra code-rate predicting method in the field of video coding, which may skip the entropy coding procedure during the RDO process by modeling the residual information of a prediction block and predicting its number of coding bits from information entropy theory under the corresponding model. The code-rate predicting method comprises: gathering statistics on prediction block distribution information and modeling them to obtain a combined model; predicting the number of coding bits of a prediction mode based on the model; and correcting the predicted number of coding bits to predict the code-rate of each prediction mode during the RDO process, so as to replace the high-time-complexity actual entropy coding procedure, thereby effectively reducing coding time with little video quality loss. The present disclosure is applicable to I-frame code-rate prediction in video coding.
    Type: Application
    Filed: July 24, 2017
    Publication date: September 19, 2019
    Inventors: Ronggang WANG, Hongbin CAO, Zhenyu WANG, Wen GAO
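As a minimal illustration of predicting a block's rate without running the entropy coder, the coding bits can be approximated by the block's empirical entropy; the patent fits a combined distribution model, which this sketch replaces with a plain histogram:

```python
import numpy as np

def estimate_bits(residual):
    """Predict the number of coding bits for a residual block from its
    empirical symbol distribution: bits ~ N * H(p), where H is the Shannon
    entropy in bits per symbol, skipping actual entropy coding entirely."""
    _, counts = np.unique(residual, return_counts=True)
    p = counts / counts.sum()
    entropy = -np.sum(p * np.log2(p))   # bits per symbol
    return residual.size * entropy      # predicted bits for the whole block
```

A constant residual predicts zero bits; a residual split evenly over two symbols predicts one bit per sample, matching the information-theoretic lower bound.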
  • Publication number: 20190273934
    Abstract: The present disclosure discloses a method and apparatus for fast inverse discrete cosine transform, and a video coding/decoding method and framework. Positions of non-zero coefficients in a transform unit are recorded during inverse quantization scanning of the coefficients; a distribution pattern of the non-zero coefficients is determined from those positions; an inverse discrete cosine transform function corresponding to that pattern is selected; and the inverse transform is then performed with the selected function. The technical solutions of the present disclosure need not compute the zero coefficients, which improves the overall speed of the algorithm; and because the positions of non-zero coefficients were already recorded during inverse quantization scanning, the complexity of the algorithm is lowered.
    Type: Application
    Filed: March 17, 2016
    Publication date: September 5, 2019
    Applicant: Peking University Shenzhen Graduate School
    Inventors: Ronggang WANG, Kaili YAO, Zhenyu WANG, Wen GAO
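A sketch of pattern-based dispatch: the all-zero and DC-only cases skip the full transform entirely. The orthonormal DCT matrices and the two fast paths are illustrative; a real decoder would cover more patterns (e.g. low-frequency sub-blocks):

```python
import numpy as np

def idct_matrix(n):
    """Orthonormal inverse-DCT (DCT-III) basis matrix for size n."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[:, None] + 1) * k[None, :] / (2 * n))
    M *= np.sqrt(2.0 / n)
    M[:, 0] *= np.sqrt(0.5)
    return M

def inverse_transform(coeffs):
    """Dispatch on the non-zero coefficient pattern recorded during
    inverse-quantization scanning."""
    n = coeffs.shape[0]
    nonzero = np.argwhere(coeffs != 0)            # positions recorded earlier
    if len(nonzero) == 0:
        return np.zeros((n, n))                   # all-zero block: nothing to do
    if len(nonzero) == 1 and tuple(nonzero[0]) == (0, 0):
        # DC-only fast path: output is flat; each basis weight is 1/sqrt(n)
        # per dimension, so the pixel value is simply DC / n
        return np.full((n, n), coeffs[0, 0] / n)
    M = idct_matrix(n)
    return M @ coeffs @ M.T                       # general path
```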
  • Patent number: 10395374
    Abstract: Disclosed in the present invention is a video foreground extraction method for surveillance video, which adjusts the block size to adapt to different video resolutions based on an image block processing method, and then extracts foreground objects in motion by establishing a background block model. The method comprises: representing each frame of image I in the surveillance video as blocks; initializing; updating a block background weight, a block temporary background and a temporary background; updating a block background and a background; saving a foreground, and updating a foreground block weight and a foreground block; and performing binarization processing on the foreground to obtain the final foreground result.
    Type: Grant
    Filed: April 6, 2017
    Date of Patent: August 27, 2019
    Assignee: PEKING UNIVERSITY SHENZHEN GRADUATE SCHOOL
    Inventors: Ge Li, Xianghao Zang, Wenmin Wang, Ronggang Wang
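The block-model idea above can be sketched with one background mean per block and a slow update; the block size, threshold and learning rate are illustrative values, not taken from the patent:

```python
import numpy as np

class BlockBackground:
    """Minimal block-level background model: blocks whose mean intensity
    drifts far from the stored background mean are flagged as foreground;
    background blocks are updated slowly."""
    def __init__(self, block=8, thresh=0.1, lr=0.05):
        self.block, self.thresh, self.lr = block, thresh, lr
        self.bg = None

    def step(self, frame):
        b = self.block
        h, w = frame.shape
        # per-block mean intensity stands in for the patent's block features
        blocks = frame.reshape(h // b, b, w // b, b).mean(axis=(1, 3))
        if self.bg is None:
            self.bg = blocks.copy()                        # initialization
            return np.zeros(blocks.shape, dtype=bool)
        fg = np.abs(blocks - self.bg) > self.thresh        # foreground decision
        self.bg[~fg] += self.lr * (blocks - self.bg)[~fg]  # background update
        return fg
```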
  • Publication number: 20190253719
    Abstract: Disclosed are a describing method and a coding method for panoramic video ROIs based on multiple layers of spherical circumferences. The describing method comprises: first setting a center of the panoramic video ROIs; then setting the number of ROI layers as N; obtaining the size Rn of the current ROI layer based on a radius or angle; obtaining the sizes of all N ROI layers; and writing information such as the ROI center, the number of layers, and the size of each layer into a sequence header of the code stream. The coding method comprises adjusting or filtering an initial QP based on a QP adjustment value and then coding the image. By flexibly assigning code rates to the multiple layers of panoramic video ROIs, the code rate needed for coding and transmission is greatly reduced while a relatively high image quality in the ROIs is guaranteed.
    Type: Application
    Filed: July 12, 2017
    Publication date: August 15, 2019
    Inventors: Zhenyu WANG, Ronggang WANG, Yueming WANG, Wen GAO
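The layered-ROI lookup and per-layer QP assignment can be sketched as below; the linear QP step is an illustrative schedule, since the patent only signals the per-layer sizes and adjustment values in the sequence header:

```python
def roi_layer(pixel_angle, layer_angles):
    """Return the index of the innermost ROI layer containing a pixel,
    given its angular distance from the ROI center and each layer's
    angular radius (sorted ascending). Pixels beyond every layer fall in
    the background region len(layer_angles)."""
    for i, r in enumerate(layer_angles):
        if pixel_angle <= r:
            return i
    return len(layer_angles)

def layer_qps(base_qp, n_layers, step=2):
    """Coarser QP (lower quality, fewer bits) for outer layers; the linear
    step is an assumption for illustration."""
    return [base_qp + i * step for i in range(n_layers)]
```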
  • Publication number: 20190205393
    Abstract: A cross-media search method uses a VGG convolutional neural network (VGG net) to extract image features. The 4096-dimensional feature of the seventh fully-connected layer (fc7) in the VGG net, after processing by a ReLU activation function, serves as the image feature. A Fisher Vector based on Word2vec is utilized to extract text features. Semantic matching is performed on the heterogeneous image and text features by means of logistic regression, and the correlation between the two heterogeneous feature types is found through this semantic matching, thus achieving cross-media search. The feature extraction method can effectively represent the deep semantics of image and text, improve cross-media search accuracy, and thus greatly improve the cross-media search effect.
    Type: Application
    Filed: December 1, 2016
    Publication date: July 4, 2019
    Applicant: Peking University Shenzhen Graduate School
    Inventors: Wenmin Wang, Liang Han, Mengdi Fan, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
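The semantic-matching step can be sketched as projecting both modalities into a shared space of class probabilities and ranking by cosine similarity there; the logistic-regression weight matrices are assumed pre-trained, and fc7/Fisher-vector extraction is out of scope for this sketch:

```python
import numpy as np

def softmax(z):
    """Row-wise softmax, numerically stabilized."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def rank_texts(img_feat, txt_feats, W_img, W_txt):
    """Map the image feature and each text feature into class-probability
    space with their respective logistic-regression weights, then rank the
    texts by cosine similarity to the image in that shared space."""
    s_img = softmax(img_feat @ W_img)       # image feature -> class probabilities
    s_txt = softmax(txt_feats @ W_txt)      # text features -> class probabilities
    sims = s_txt @ s_img / (np.linalg.norm(s_txt, axis=1) * np.linalg.norm(s_img))
    return np.argsort(-sims)                # indices of best matches first
```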
  • Patent number: 10339633
    Abstract: The present application provides a method and a device for super-resolution image reconstruction based on dictionary matching. The method includes: establishing a matching dictionary library; inputting an image to be reconstructed into a multi-layer linear filter network; extracting a local characteristic of the image to be reconstructed; searching the matching dictionary library for the local characteristic of a low-resolution image block having the highest similarity with the local characteristic of the image to be reconstructed; retrieving from the matching dictionary library the residual of the combined sample to which that most-similar local characteristic belongs; performing interpolation amplification on the local characteristic of the most-similar low-resolution image block; and adding the residual to the result of the interpolation amplification to obtain a reconstructed high-resolution image block.
    Type: Grant
    Filed: November 4, 2015
    Date of Patent: July 2, 2019
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Yang Zhao, Ronggang Wang, Wen Gao, Zhenyu Wang, Wenmin Wang
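The match-amplify-add-residual pipeline can be sketched per patch as follows; using the raw patch as its own characteristic replaces the patent's multi-layer linear filter network features, and nearest-neighbour upsampling stands in for the interpolation amplification:

```python
import numpy as np

def reconstruct_patch(lr_patch, dict_lr_feats, dict_residuals, upscale):
    """Dictionary-matching super-resolution for one patch: find the library
    entry whose low-resolution characteristic is most similar (smallest L2
    distance here), amplify the input patch by interpolation, and add the
    high-frequency residual stored with the matched entry."""
    dists = np.linalg.norm(dict_lr_feats - lr_patch.ravel(), axis=1)
    best = int(np.argmin(dists))                          # most similar entry
    up = np.kron(lr_patch, np.ones((upscale, upscale)))   # interpolation amplification
    return up + dict_residuals[best]                      # add back the residual
```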
  • Patent number: 10339409
    Abstract: A method and a device for extracting local features of a 3D point cloud are disclosed. Angle information and concavo-convex information between a feature point to be extracted and a point of an adjacent body element are calculated based on the local reference system corresponding to the points of each body element, so the feature relation between the two points can be calculated accurately, with invariance to translation and rotation. Since concavo-convex information about the local point cloud is included during extraction, the inaccurate extraction caused by ignoring concavo-convex ambiguity in previous 3D local feature descriptions is resolved.
    Type: Grant
    Filed: June 18, 2015
    Date of Patent: July 2, 2019
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Wenmin Wang, Mingmin Zhen, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Wen Gao
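A minimal sketch of the pairwise angle plus concavo-convex feature: the angle comes from the two normals, and the pair is taken as convex when the normals open away from each other along the line joining the points. The sign convention and the use of normals as the local reference are assumptions, not taken from the patent:

```python
import numpy as np

def pair_feature(p1, n1, p2, n2):
    """Angle (cosine) between two unit normals plus a convex/concave flag
    for the point pair. On a convex surface (e.g. the outside of a sphere)
    outward normals diverge along the joining direction d = p2 - p1, so
    n1.d < n2.d; inward-facing normals give the opposite sign."""
    cos_angle = float(np.dot(n1, n2))           # normals assumed unit length
    d = p2 - p1
    convex = bool(np.dot(n1, d) < np.dot(n2, d))
    return cos_angle, convex
```

Because the feature depends only on dot products of normals and relative positions, it is invariant to translation and rotation, matching the property claimed above.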
  • Patent number: 10341682
    Abstract: Methods and devices are disclosed for panoramic video coding and decoding based on multi-mode boundary fill. A predicted image block of a current image block is obtained by inter-frame prediction, which includes a boundary fill step: when the reference sample of a pixel in the current image block lies outside the boundary of the corresponding reference image, a boundary fill method is adaptively selected according to the coordinates of the reference sample to obtain its sample value. The methods and devices make full use of the characteristic that horizontal image contents in a panoramic video are cyclically connected to optimize image boundary filling, such that the encoder can adaptively select a more reasonable boundary fill method according to the coordinates of a reference sample, thereby improving compression efficiency.
    Type: Grant
    Filed: January 19, 2016
    Date of Patent: July 2, 2019
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Zhenyu Wang, Ronggang Wang, Xiubao Jiang, Wen Gao
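The coordinate-dependent fill can be sketched in a few lines: horizontal content in an equirectangular panorama is cyclically connected, so x wraps around, while y clamps to the nearest edge. This is one plausible combination of the patent's multiple fill modes, chosen here purely by coordinate:

```python
def fill_reference_sample(x, y, width, height):
    """Map an out-of-boundary reference-sample coordinate into the
    panoramic reference image: wrap horizontally (cyclic content),
    clamp vertically (no such continuity at the poles)."""
    return x % width, min(max(y, 0), height - 1)
```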
  • Patent number: 10325358
    Abstract: A method for image de-blurring includes estimating an intermediate image L by marking and constraining an edge region and a smooth region in an input image; estimating a blur kernel k by extracting salient edges from the intermediate image L, wherein the salient edges have scales greater than those of the blur kernel k; and restoring the input image to a clear image by performing non-blind deconvolution on the input image and the estimated blur kernel k. Imposing constraints on the edge region and the smooth region allows the intermediate image to maintain the edge while effectively removing noise and ringing artifacts in the smooth region. The use of the salient edges in the intermediate image L enables more accurate blur kernel estimation. Performing non-blind deconvolution on the input image and the estimated blur kernel k restores the input image to a clear image achieving desired de-blurring effect.
    Type: Grant
    Filed: May 15, 2015
    Date of Patent: June 18, 2019
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Xinxin Zhang, Ronggang Wang, Zhenyu Wang, Wen Gao
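The final non-blind deconvolution step (clear image from input image plus estimated kernel k) can be sketched in the frequency domain; Wiener filtering is used here as a stand-in for the patent's deconvolution, with `snr` regularizing frequencies the kernel suppresses:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, snr=1e-3):
    """Non-blind deconvolution given an estimated blur kernel k:
    F = conj(H) * G / (|H|^2 + snr) in the frequency domain, where G is the
    blurred image spectrum and H the (zero-padded) kernel spectrum."""
    H = np.fft.fft2(kernel, s=blurred.shape)       # kernel spectrum
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + snr)    # regularized inverse filter
    return np.real(np.fft.ifft2(F))
```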
  • Patent number: 10297016
    Abstract: Disclosed is a video background removal method, which relates to the technical field of video analysis, and in particular to a background removal method based on image blocks, a Gaussian mixture model and a random process. Firstly, the concept of blocks is defined, and foreground and background are determined by comparing differences between blocks; a threshold value is automatically adjusted using a Gaussian mixture model, while the background is updated using the idea of a random process. Experiments on the BMC dataset show that this method surpasses most current advanced algorithms with very high accuracy. The method has wide applicability, can be applied to surveillance-video background subtraction, and has important applications in the field of video analysis.
    Type: Grant
    Filed: January 5, 2017
    Date of Patent: May 21, 2019
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Ge Li, Xianghao Zang, Wenmin Wang, Ronggang Wang
  • Patent number: 10298950
    Abstract: A P frame-based multi-hypothesis motion compensation method includes: taking an encoded image block adjacent to the current image block as a reference image block and obtaining a first motion vector of the current image block from the motion vector of the reference image block, the first motion vector pointing to a first prediction block; taking the first motion vector as a reference value and performing joint motion estimation on the current image block to obtain a second motion vector of the current image block, the second motion vector pointing to a second prediction block; and performing weighted averaging on the first prediction block and the second prediction block to obtain the final prediction block of the current image block. The method increases the accuracy of the obtained prediction block of the current image block without increasing the code rate.
    Type: Grant
    Filed: January 26, 2016
    Date of Patent: May 21, 2019
    Assignee: PEKING UNIVERSITY SHENZHEN GRADUATE SCHOOL
    Inventors: Ronggang Wang, Lei Chen, Zhenyu Wang, Siwei Ma, Wen Gao, Tiejun Huang, Wenmin Wang, Shengfu Dong
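The two-hypothesis scheme can be sketched as below: prediction 1 uses a neighbour's motion vector mv1; prediction 2 is found by joint motion estimation, i.e. choosing the vector whose block, averaged with prediction 1, best matches the current block. Full-pel search, SAD cost, and equal weights are simplifying assumptions:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences."""
    return float(np.abs(a - b).sum())

def multi_hypothesis_predict(cur, ref, mv1, search=1):
    """Return the weighted-average prediction block for cur, where the
    second hypothesis is chosen by joint motion estimation around mv1."""
    h, w = cur.shape
    p1 = ref[mv1[0]:mv1[0] + h, mv1[1]:mv1[1] + w]   # first prediction block
    best, best_cost = None, np.inf
    for dy in range(-search, search + 1):            # joint estimation around mv1
        for dx in range(-search, search + 1):
            y, x = mv1[0] + dy, mv1[1] + dx
            if not (0 <= y <= ref.shape[0] - h and 0 <= x <= ref.shape[1] - w):
                continue
            cand = 0.5 * (p1 + ref[y:y + h, x:x + w])  # weighted averaging
            cost = sad(cur, cand)
            if cost < best_cost:
                best, best_cost = cand, cost
    return best
```

Because mv2 = mv1 is among the candidates, the joint prediction is never worse (in SAD) than the single-hypothesis prediction p1.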
  • Publication number: 20190139199
    Abstract: An image deblurring method based on light streak information in an image is provided, wherein shape information of a blur kernel is obtained from a light streak in a motion-blurred image, and image restoration is constrained by combining the shape information, a natural image prior and the blur kernel, thereby obtaining an accurate blur kernel and a high-quality restored image. The method specifically comprises: selecting an optimum image patch containing an optimum light streak; extracting shape information of the blur kernel from that patch; performing blur kernel estimation to obtain the final blur kernel; and performing non-blind deconvolution to restore a sharp image as the final deblurred result. The present disclosure also establishes a blurry-image test set of captured images containing light streaks.
    Type: Application
    Filed: July 15, 2016
    Publication date: May 9, 2019
    Inventors: Ronggang WANG, Xinxin ZHANG, Zhenyu WANG, Wen GAO
  • Publication number: 20190139186
    Abstract: Embodiments of the present disclosure provide a method for accelerating the CDVS extraction process on a GPGPU platform. For the feature detection and local descriptor computation stages of CDVS extraction, the operation logics and parallelism strategies of the inter-pixel-parallel and inter-feature-point-parallel sub-procedures are implemented with the OpenCL general-purpose parallel programming framework, and acceleration is achieved by leveraging the GPU's parallel computation capability. The method includes: partitioning computing tasks between the GPU and the CPU; reconstructing the image scale pyramid storage model; assigning parallelism strategies to the respective GPU sub-procedures; and applying local memory to mitigate the memory-access bottleneck. The technical solution of the present disclosure accelerates the CDVS extraction process and significantly enhances extraction performance.
    Type: Application
    Filed: December 5, 2016
    Publication date: May 9, 2019
    Inventors: Ronggang Wang, Shen Zhang, Zhenyu Wang, Wen Gao
  • Publication number: 20190132001
    Abstract: The disclosure provides a method for compressing the grayscale compensation tables of an OLED display panel, comprising: step 10, when transmitting a set of grayscale compensation tables of the OLED display panel to an encoder, first performing a differential calculation on the grayscale compensation tables that share a color channel but differ in gray scale to acquire a corresponding reference image and difference images as replacements for those tables; step 20, transmitting the above images to the encoder; and step 30, the encoder compressing and encoding the received data. By taking differences between compensation tables of the same color component at different gray scales within the same OLED compensation table set, the method improves the efficiency and performance of compensation-table compression.
    Type: Application
    Filed: November 30, 2017
    Publication date: May 2, 2019
    Inventors: Yufan DENG, Mingjong JOU, Shensian SYU, Ronggang WANG, Kui FAN, Hao LI
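The reference-plus-differences transform can be sketched as follows; neighbouring gray levels compensate similarly, so the difference images carry small values that compress well. Taking the first table as the reference is an assumption for illustration:

```python
import numpy as np

def to_reference_and_diffs(tables):
    """Replace a set of compensation tables (same color channel, different
    gray scales) with the first table as reference plus per-table
    differences, as input to the encoder."""
    ref = tables[0]
    return ref, [t - ref for t in tables[1:]]

def from_reference_and_diffs(ref, diffs):
    """Inverse operation: the decoder restores every table exactly."""
    return [ref] + [ref + d for d in diffs]
```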
  • Publication number: 20190114753
    Abstract: Disclosed is a video background removal method, which relates to the technical field of video analysis, and in particular to a background removal method based on image blocks, a Gaussian mixture model and a random process. Firstly, the concept of blocks is defined, and foreground and background are determined by comparing differences between blocks; a threshold value is automatically adjusted using a Gaussian mixture model, while the background is updated using the idea of a random process. Experiments on the BMC dataset show that this method surpasses most current advanced algorithms with very high accuracy. The method has wide applicability, can be applied to surveillance-video background subtraction, and has important applications in the field of video analysis.
    Type: Application
    Filed: January 5, 2017
    Publication date: April 18, 2019
    Applicant: Peking University Shenzhen Graduate School
    Inventors: Ge LI, Xianghao ZANG, Wenmin WANG, Ronggang WANG
  • Publication number: 20190110060
    Abstract: A video encoding and decoding method, together with an inter-frame prediction method, device and system, are disclosed. The inter-frame prediction method includes: obtaining a motion vector of the current image block and the relative spatial position of a current pixel; obtaining a motion vector of the current pixel from the motion vector of the current image block and the relative spatial position of the current pixel; and obtaining a predicted value of the current pixel from the motion vector of the current pixel. Because the method considers both the motion vector of the current image block and the relative spatial position of the current pixel during inter-frame prediction, it can accommodate the lens distortion characteristics of different images and the zoom-in/zoom-out produced when objects move within the picture, thereby improving the calculation accuracy of pixel motion vectors and improving inter-frame prediction performance and compression efficiency in video encoding and decoding.
    Type: Application
    Filed: January 19, 2016
    Publication date: April 11, 2019
    Inventors: Zhenyu Wang, Ronggang Wang, Xiubao Jiang, Wen Gao
  • Publication number: 20190108642
    Abstract: Disclosed in the present invention is a video foreground extraction method for surveillance video, which adjusts the block size to adapt to different video resolutions based on an image block processing method, and then extracts foreground objects in motion by establishing a background block model. The method comprises: representing each frame of image I in the surveillance video as blocks; initializing; updating a block background weight, a block temporary background and a temporary background; updating a block background and a background; saving a foreground, and updating a foreground block weight and a foreground block; and performing binarization processing on the foreground to obtain the final foreground result.
    Type: Application
    Filed: April 6, 2017
    Publication date: April 11, 2019
    Inventors: Ge LI, Xianghao ZANG, Wenmin WANG, Ronggang WANG
  • Publication number: 20190098303
    Abstract: A method, a device and an encoder for controlling the filtering of intra-frame prediction reference pixel points are disclosed. The method includes: when the reference pixel points in the reference pixel group of an intra-frame block to be predicted are filtered and the target reference pixel point currently to be filtered is not an edge reference pixel point of the group (S202), acquiring a pixel difference value between the target reference pixel point and its n adjacent reference pixel points (S203); and selecting a filter whose filtering grade corresponds to the pixel difference value to filter the target reference pixel point (S204). For reference pixel points not located at the edge of the reference pixel group, filters with corresponding filtering grades are flexibly configured according to the local difference characteristics of those points, providing flexibility and adaptivity in the filtering and achieving a better effect.
    Type: Application
    Filed: June 16, 2016
    Publication date: March 28, 2019
    Inventors: Ronggang Wang, Kui Fan, Zhenyu Wang, Wen Gao
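The difference-driven filter selection can be sketched on a 1-D reference line as follows; the thresholds and the 3-tap filter strengths are illustrative values, not the patent's filter set:

```python
import numpy as np

def filter_reference_pixels(ref, thresholds=(4, 16), strengths=(0.5, 0.25, 0.0)):
    """Filter a line of intra-prediction reference pixels with a per-pixel
    grade: small local differences get strong smoothing, large differences
    little or none, and the two edge pixels are left unfiltered."""
    out = ref.astype(float).copy()
    for i in range(1, len(ref) - 1):                  # skip edge reference pixels
        diff = abs(float(ref[i - 1]) - ref[i]) + abs(float(ref[i + 1]) - ref[i])
        if diff < thresholds[0]:
            a = strengths[0]                          # strong [a/2, 1-a, a/2] filter
        elif diff < thresholds[1]:
            a = strengths[1]                          # medium grade
        else:
            a = strengths[2]                          # sharp transition: keep as-is
        out[i] = (1 - a) * ref[i] + a / 2 * (float(ref[i - 1]) + ref[i + 1])
    return out
```

Strong edges in the reference line are preserved (grade 0), while low-amplitude noise is smoothed away, which is the adaptivity the abstract describes.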
  • Patent number: 10230990
    Abstract: A chroma interpolation method, including: 1) determining a pixel accuracy for interpolation; 2) determining coordinate positions of interpolated fractional-pel pixels between integer-pel pixels; and 3) performing two-dimensional separated interpolation on the interpolated fractional-pel pixels by an interpolation filter according to the coordinate positions. The invention also provides a filter device using the above method for chroma interpolation.
    Type: Grant
    Filed: March 2, 2016
    Date of Patent: March 12, 2019
    Assignee: PEKING UNIVERSITY SHENZHEN GRADUATE SCHOOL
    Inventors: Ronggang Wang, Hao Lv, Zhenyu Wang, Shengfu Dong, Wen Gao
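The two-dimensional separated interpolation in step 3 can be sketched with a 2-tap bilinear filter: a horizontal pass between the two neighbouring integer-pel columns, then a vertical pass. Real codec interpolation filters use more taps; 2-tap keeps the sketch short (and requires the fractional position to have integer-pel neighbours in bounds):

```python
import numpy as np

def interp_fractional_pel(ref, y, x):
    """Sample ref at fractional-pel position (y, x) by separable bilinear
    interpolation: horizontal pass on rows y0 and y0+1, then vertical pass."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    fy, fx = y - y0, x - x0
    top = (1 - fx) * ref[y0, x0] + fx * ref[y0, x0 + 1]          # row y0
    bot = (1 - fx) * ref[y0 + 1, x0] + fx * ref[y0 + 1, x0 + 1]  # row y0+1
    return (1 - fy) * top + fy * bot                              # vertical pass
```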