Patents by Inventor Ronggang Wang

Ronggang Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200099911
    Abstract: Disclosed is a virtual viewpoint synthesis method based on image local segmentation, which relates to digital image processing technology.
    Type: Application
    Filed: August 14, 2017
    Publication date: March 26, 2020
    Inventors: Ronggang WANG, Xiubao JIANG, Wen GAO
  • Publication number: 20200092471
    Abstract: Disclosed is a panoramic image mapping method and a corresponding reverse mapping method. Particularly, the mapping process maps a panoramic image or video A via its corresponding spherical surface: first, dividing the spherical surface into three areas based on latitude, denoted as Area I, Area II, and Area III, respectively; mapping the three areas to a square plane I′, a rectangular plane II′, and a square plane III′, respectively; and then splicing the planes I′, II′, and III′ into a single plane, the resulting plane being the two-dimensional image or video B. Compared with the equirectangular mapping method, the method according to the present disclosure may effectively ameliorate oversampling in high-latitude areas and effectively lower the bit rate needed for coding and the complexity of decoding. The present disclosure relates to the field of virtual reality and may be applied to panoramic images and videos.
    Type: Application
    Filed: August 22, 2017
    Publication date: March 19, 2020
    Inventors: Ronggang WANG, Yueming WANG, Zhenyu WANG, Wen GAO
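    Illustrative sketch (not part of the publication): a minimal Python/numpy example of the general idea of splitting an equirectangular panorama into three latitude bands and resampling the two polar bands onto square planes, which reduces the heavy horizontal oversampling near the poles. The 45° band boundary, the square side (equal to the band height), and the nearest-neighbour resampling are assumptions; the patented mapping and splicing layout are not reproduced here.
        import numpy as np

        def split_and_map(pano: np.ndarray, boundary_deg: float = 45.0):
            """Split an equirectangular panorama into three latitude bands and
            resample the two polar bands onto square planes (a rough stand-in
            for Areas I/III -> planes I'/III'; Area II stays a rectangle)."""
            h = pano.shape[0]
            # Row r corresponds to latitude +90 deg (top) .. -90 deg (bottom).
            lat = 90.0 - (np.arange(h) + 0.5) * 180.0 / h
            top = pano[lat > boundary_deg]           # Area I  (northern cap)
            mid = pano[np.abs(lat) <= boundary_deg]  # Area II (low latitudes)
            bot = pano[lat < -boundary_deg]          # Area III (southern cap)

            def to_square(band):
                # Keep every row, subsample columns so the band becomes square.
                side = band.shape[0]
                cols = np.linspace(0, band.shape[1] - 1, side).astype(int)
                return band[:, cols]

            return to_square(top), mid, to_square(bot)

        if __name__ == "__main__":
            pano = np.zeros((360, 720, 3), dtype=np.uint8)    # dummy panorama
            sq1, rect2, sq3 = split_and_map(pano)
            print(sq1.shape, rect2.shape, sq3.shape)          # (90, 90, 3) (180, 720, 3) (90, 90, 3)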
  • Publication number: 20200082165
    Abstract: A Collaborative Deep Network model method for pedestrian detection includes constructing a new collaborative multi-model learning framework to complete the classification process during pedestrian detection, and using an artificial neural network to integrate the judgment results of the sub-classifiers in the collaborative model, training the network with machine learning so that the information fed back by the sub-classifiers is synthesized more effectively. A re-sampling method based on the K-means clustering algorithm enhances the classification effect of each classifier in the collaborative model and thus improves the overall classification effect.
    Type: Application
    Filed: July 24, 2017
    Publication date: March 12, 2020
    Inventors: Wenmin Wang, Hongmeng Song, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
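    Illustrative sketch (not part of the publication): a short Python example of K-means-based re-sampling, drawing an equal number of training samples from each cluster so a sub-classifier's training set is not dominated by one mode of the data. The cluster count, samples per cluster, and use of scikit-learn's KMeans are assumptions.
        import numpy as np
        from sklearn.cluster import KMeans

        def kmeans_resample(features: np.ndarray, per_cluster: int,
                            k: int = 8, seed: int = 0) -> np.ndarray:
            """Return indices of a cluster-balanced subset of `features`."""
            rng = np.random.default_rng(seed)
            labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(features)
            picked = []
            for c in range(k):
                idx = np.flatnonzero(labels == c)
                take = min(per_cluster, idx.size)
                picked.append(rng.choice(idx, size=take, replace=False))
            return np.concatenate(picked)

        if __name__ == "__main__":
            X = np.random.rand(1000, 64)          # stand-in for detection-window features
            print(kmeans_resample(X, per_cluster=50).shape)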
  • Publication number: 20200074593
    Abstract: Disclosed are a panoramic image mapping method, apparatus, and device. The method comprises: obtaining a to-be-mapped panoramic image; splitting the to-be-mapped panoramic image into three areas according to a first latitude and a second latitude, wherein the area corresponding to a latitude range from −90° to the first latitude is referred to as a first area, the area corresponding to a latitude range from the first latitude to the second latitude is referred to as a second area, and the area corresponding to a latitude range from the second latitude to 90° is referred to as a third area; mapping the first area to a first target image according to a first mapping method; mapping the second area to a second target image according to a second mapping method; mapping the third area to a third target image according to a third mapping method, and splicing the first target image, the second target image, and the third target image to obtain a two-dimensional plane image.
    Type: Application
    Filed: September 3, 2019
    Publication date: March 5, 2020
    Inventors: Ronggang WANG, Yueming WANG, Zhenyu WANG, Wen GAO
  • Publication number: 20200057935
    Abstract: A video action detection method based on a convolutional neural network (CNN) is disclosed in the field of computer vision recognition technologies. A temporal-spatial pyramid pooling layer is added to the network structure, which removes the network's limitations on input size, speeds up training and detection, and improves the performance of video action classification and temporal localization. The disclosed convolutional neural network includes a convolutional layer, a common pooling layer, a temporal-spatial pyramid pooling layer and a full connection layer. The outputs of the convolutional neural network include a category classification output layer and a time localization calculation result output layer. The disclosed method does not require down-sampling to obtain video clips of different durations; instead, the whole video is input directly at once, improving efficiency.
    Type: Application
    Filed: August 16, 2017
    Publication date: February 20, 2020
    Inventors: Wenmin Wang, Zhihao Li, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
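    Illustrative sketch (not part of the publication): a numpy example of temporal-spatial pyramid pooling, which turns a feature volume of any duration and spatial size into a fixed-length vector; the pyramid levels and the max-pooling choice are assumptions.
        import numpy as np

        def tsp_pool(features: np.ndarray, levels=((1, 1, 1), (2, 2, 2))) -> np.ndarray:
            """Pool a (T, H, W, C) feature volume into a fixed-length vector.

            Each pyramid level divides the volume into t x h x w cells and
            max-pools every cell, so the output length depends only on
            `levels` and C, not on the input size."""
            T, H, W, C = features.shape
            out = []
            for t_bins, h_bins, w_bins in levels:
                t_e = np.linspace(0, T, t_bins + 1).astype(int)
                h_e = np.linspace(0, H, h_bins + 1).astype(int)
                w_e = np.linspace(0, W, w_bins + 1).astype(int)
                for ti in range(t_bins):
                    for hi in range(h_bins):
                        for wi in range(w_bins):
                            cell = features[t_e[ti]:t_e[ti + 1],
                                            h_e[hi]:h_e[hi + 1],
                                            w_e[wi]:w_e[wi + 1]]
                            if cell.size == 0:              # degenerate cell on tiny inputs
                                out.append(np.zeros(C))
                            else:
                                out.append(cell.max(axis=(0, 1, 2)))
            return np.concatenate(out)

        if __name__ == "__main__":
            print(tsp_pool(np.random.rand(37, 14, 14, 256)).shape)   # (2304,) = (1 + 8) * 256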
  • Patent number: 10531093
    Abstract: A method and system for video frame interpolation based on an optical flow method is disclosed. The process includes calculating bidirectional motion vectors between two adjacent frames in a frame sequence of input video by using the optical flow method, judging the reliability of the bidirectional motion vectors between the two adjacent frames, and addressing the jagged-edge and noise problems of the optical flow method; marking "shielding" and "exposure" regions in the two adjacent frames, and updating unreliable motion vectors; for the two adjacent frames, according to the marking information about the "shielding" and "exposure" regions and the bidirectional motion vector field, mapping the front and back frames to an interpolated frame to obtain a forward interpolated frame and a backward interpolated frame; synthesizing the forward interpolated frame and the backward interpolated frame into the interpolated frame; and repairing hole points in the interpolated frame to obtain the final interpolated frame.
    Type: Grant
    Filed: May 25, 2015
    Date of Patent: January 7, 2020
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Chuanxin Tang, Ronggang Wang, Zhenyu Wang, Wen Gao
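    Illustrative sketch (not part of the patent): a numpy example of the mapping-and-blending stage only: each frame is forward-mapped toward the intermediate time along a given dense flow field and the two warped frames are blended, with a simple fallback where only one side lands a pixel. Occlusion ("shielding"/"exposure") marking, motion-vector reliability checks, and hole repair are omitted, and the dense flow is assumed to be precomputed.
        import numpy as np

        def warp_to_midpoint(frame: np.ndarray, flow: np.ndarray, t: float = 0.5):
            """Forward-map `frame` (H, W, 3) along per-pixel flow (H, W, 2) scaled by t.
            Returns the warped frame and a hole mask (True where nothing mapped)."""
            h, w = frame.shape[:2]
            ys, xs = np.mgrid[0:h, 0:w]
            xt = np.clip(np.round(xs + t * flow[..., 0]).astype(int), 0, w - 1)
            yt = np.clip(np.round(ys + t * flow[..., 1]).astype(int), 0, h - 1)
            out = np.zeros_like(frame)
            hole = np.ones((h, w), dtype=bool)
            out[yt, xt] = frame[ys, xs]        # later writes win; no occlusion test here
            hole[yt, xt] = False
            return out, hole

        def interpolate(prev_f, next_f, fwd_flow, bwd_flow, t: float = 0.5):
            """Blend the forward- and backward-warped frames into one interpolated frame."""
            a, hole_a = warp_to_midpoint(prev_f, fwd_flow, t)
            b, hole_b = warp_to_midpoint(next_f, bwd_flow, 1.0 - t)
            blended = ((1 - t) * a.astype(float) + t * b.astype(float)).astype(prev_f.dtype)
            blended[hole_a & ~hole_b] = b[hole_a & ~hole_b]   # only the backward warp hit
            blended[hole_b & ~hole_a] = a[hole_b & ~hole_a]   # only the forward warp hit
            return blended

        if __name__ == "__main__":
            prev_f = np.zeros((60, 80, 3), dtype=np.uint8)
            next_f = np.full((60, 80, 3), 200, dtype=np.uint8)
            zero_flow = np.zeros((60, 80, 2))
            print(interpolate(prev_f, next_f, zero_flow, zero_flow).mean())   # ~100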
  • Publication number: 20190387210
    Abstract: Disclosed are a method, apparatus, and device for synthesizing virtual viewpoint images.
    Type: Application
    Filed: August 28, 2019
    Publication date: December 19, 2019
    Inventors: Ronggang WANG, Xiubao JIANG, Wen GAO
  • Publication number: 20190387234
    Abstract: The present disclosure provides an encoding method, a decoding method, an encoder, and a decoder. The encoding method comprises: performing interframe prediction on each interframe coded block to obtain corresponding interframe predicted blocks; writing information of each of the interframe predicted blocks into a code stream; if an interframe coded block exists at an adjacent position to the right of, beneath, or to the lower right of the intraframe coded block, performing intraframe prediction on the intraframe coded block based on at least one reconstructed coded block at adjacent positions to the left and/or above and/or to the upper left of the intraframe coded block and at least one of the interframe coded blocks at adjacent positions to the right and/or beneath and/or to the lower right of the intraframe coded block to obtain intraframe predicted blocks; and writing information of each of the intraframe predicted blocks into the code stream.
    Type: Application
    Filed: August 30, 2019
    Publication date: December 19, 2019
    Inventors: Ronggang WANG, Kui FAN, Zhenyu WANG, Wen GAO
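    Illustrative sketch (not part of the publication): a toy Python example of the core idea of predicting an intra block from reconstructed border samples on all four sides, using right/below neighbours (already reconstructed inter-coded blocks) in addition to the usual left/above ones. The DC-style averaging, block size, and default level are assumptions; they are not the codec's actual prediction modes.
        import numpy as np

        def bidirectional_dc_predict(left_col, top_row, right_col, bottom_row, size):
            """Predict a size x size block from reconstructed border samples.
            Each argument is a 1-D array of `size` samples, or None if that
            neighbour is unavailable."""
            available = [b for b in (left_col, top_row, right_col, bottom_row)
                         if b is not None]
            if not available:
                return np.full((size, size), 128, dtype=np.uint8)   # mid-level fallback
            dc = np.mean(np.concatenate(available))
            return np.full((size, size), int(round(dc)), dtype=np.uint8)

        if __name__ == "__main__":
            n = 8
            pred = bidirectional_dc_predict(np.full(n, 100), np.full(n, 110),
                                            np.full(n, 90), None, n)
            print(pred[0, 0])   # 100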
  • Publication number: 20190373281
    Abstract: The present disclosure provides an encoding method, a decoding method, an encoder, and a decoder. The encoding method comprises: performing interframe prediction on each interframe coded block to obtain corresponding interframe predicted blocks; writing information of each of the interframe predicted blocks into a code stream; if an interframe coded block exists at an adjacent position to the right of, beneath, or to the lower right of the intraframe coded block, performing intraframe prediction on the intraframe coded block based on at least one reconstructed coded block at adjacent positions to the left and/or above and/or to the upper left of the intraframe coded block and at least one of the interframe coded blocks at adjacent positions to the right and/or beneath and/or to the lower right of the intraframe coded block to obtain intraframe predicted blocks; and writing information of each of the intraframe predicted blocks into the code stream.
    Type: Application
    Filed: July 24, 2017
    Publication date: December 5, 2019
    Inventors: Ronggang WANG, Kui FAN, Zhenyu WANG, Wen GAO
  • Patent number: 10475865
    Abstract: A flexible display substrate, a method for manufacturing the same, and a flexible display motherboard are provided in the present disclosure. The flexible display motherboard includes a carrier substrate, a separation layer provided above the carrier substrate, a deformable layer covering the separation layer, a flexible substrate provided above the deformable layer, and a display device provided on the flexible substrate. The deformable layer shrinks and deforms under a preset triggering condition to separate the flexible substrate from the carrier substrate without any damage to the display device, and the light output of the flexible display device can be improved.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: November 12, 2019
    Assignees: BOE TECHNOLOGY GROUP CO., LTD., HEFEI XINSHENG OPTOELECTRONICS TECHNOLOGY CO., LTD.
    Inventors: Xinxin Wang, Zhijie Ye, Wenbin Jia, Ronggang Shangguan, Lingyun Liu
  • Publication number: 20190311524
    Abstract: Embodiments of the present disclosure provide a method and apparatus for real-time virtual viewpoint synthesis. Unlike the prior art, the disclosed method and apparatus do not rely on depth maps at any point in the process of synthesizing virtual viewpoint images, and thus effectively avoid the problems incurred by depth-image-based rendering.
    Type: Application
    Filed: July 22, 2016
    Publication date: October 10, 2019
    Inventors: Ronggang WANG, Jiajia LUO, Xiubao JIANG, Wen GAO
  • Patent number: 10425656
    Abstract: A video encoding and decoding method, together with its inter-frame prediction method, device, and system, is disclosed. The inter-frame prediction method includes obtaining a motion vector of the current image block and the spatial position of a current pixel, obtaining a motion vector of the current pixel according to the motion vector of the current image block and the spatial position of the current pixel, and obtaining a predicted value of the current pixel according to the motion vector of the current pixel. The method considers both the motion vector of the current image block and the spatial position of the current pixel during inter-frame prediction. It can accommodate the lens distortion characteristics of different images and the zoom-in/zoom-out produced when objects move within the picture, thereby improving the calculation accuracy of pixels' motion vectors and improving inter-frame prediction performance and compression efficiency in video encoding and decoding.
    Type: Grant
    Filed: January 19, 2016
    Date of Patent: September 24, 2019
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Zhenyu Wang, Ronggang Wang, Xiubao Jiang, Wen Gao
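    Illustrative sketch (not part of the patent): a numpy example of deriving a motion vector for every pixel of a block from the block's motion vector plus each pixel's spatial position, here by scaling the block MV with the pixel's distance from the image centre as a stand-in for lens-distortion/zoom behaviour. The scaling law and `zoom_gain` are assumptions, not the disclosed derivation.
        import numpy as np

        def per_pixel_motion_vectors(block_mv, block_origin, block_size,
                                     image_center, zoom_gain: float = 0.001):
            """Return an (N, N, 2) array of per-pixel motion vectors for one block."""
            bx, by = block_origin
            ys, xs = np.mgrid[by:by + block_size, bx:bx + block_size]
            dist = np.hypot(xs - image_center[0], ys - image_center[1])
            scale = 1.0 + zoom_gain * dist            # position-dependent factor
            mv = np.empty((block_size, block_size, 2))
            mv[..., 0] = block_mv[0] * scale
            mv[..., 1] = block_mv[1] * scale
            return mv

        if __name__ == "__main__":
            mvs = per_pixel_motion_vectors(block_mv=(3.0, -1.0), block_origin=(64, 32),
                                           block_size=16, image_center=(960, 540))
            print(mvs.shape, mvs[0, 0], mvs[-1, -1])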
  • Patent number: 10425640
    Abstract: A method, a device and an encoder for controlling the filtering of intra-frame prediction reference pixel points are disclosed. The method includes: when the reference pixel points in a reference pixel group of an intra-frame block to be predicted are filtered and the target reference pixel point to be filtered is not an edge reference pixel point of the reference pixel group (S202), acquiring a pixel difference value between the target reference pixel point and its n adjacent reference pixel points (S203); and selecting a filter whose filtering grade corresponds to the pixel difference value to filter the target reference pixel point (S204). For reference pixel points not located at an edge of the reference pixel group, filters with corresponding filtering grades are flexibly configured according to the local difference characteristics of these reference pixel points, making the filtering flexible and adaptive and achieving a better effect.
    Type: Grant
    Filed: June 16, 2016
    Date of Patent: September 24, 2019
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Ronggang Wang, Kui Fan, Zhenyu Wang, Wen Gao
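    Illustrative sketch (not part of the patent): a small Python example of the selection rule: skip edge reference samples, measure each remaining sample's difference from its two neighbours, and apply a stronger or weaker smoothing filter accordingly. The threshold and filter taps are assumptions, not the codec's filter tables.
        import numpy as np

        def filter_reference_pixels(ref: np.ndarray, strong_threshold: float = 32.0):
            """Filter a 1-D line of intra reference samples with a per-sample
            filter grade chosen from the local pixel difference."""
            ref = ref.astype(np.float32)
            out = ref.copy()
            for i in range(1, len(ref) - 1):          # edge reference pixels untouched
                diff = abs(ref[i] - ref[i - 1]) + abs(ref[i] - ref[i + 1])
                if diff < strong_threshold:           # locally smooth: strong [1,2,1]/4
                    out[i] = (ref[i - 1] + 2 * ref[i] + ref[i + 1]) / 4.0
                else:                                 # locally detailed: weak [1,6,1]/8
                    out[i] = (ref[i - 1] + 6 * ref[i] + ref[i + 1]) / 8.0
            return np.clip(np.round(out), 0, 255).astype(np.uint8)

        if __name__ == "__main__":
            line = np.array([100, 101, 99, 150, 152, 151, 40, 42], dtype=np.uint8)
            print(filter_reference_pixels(line))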
  • Patent number: 10424052
    Abstract: An image representation method and processing device based on local PCA whitening. A first mapping module maps words and characteristics to a high-dimensional space. A principal component analysis module conducts principal component analysis in each corresponding word space to obtain a projection matrix. A VLAD computation module computes a VLAD image representation vector; a second mapping module maps the VLAD image representation vector to the high-dimensional space. A projection transformation module conducts projection transformation on the VLAD image representation vector obtained by means of projection. A normalization processing module conducts normalization on the characteristics obtained by means of projection transformation to obtain the final image representation vector.
    Type: Grant
    Filed: September 15, 2015
    Date of Patent: September 24, 2019
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Wenmin Wang, Mingmin Zhen, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Wen Gao
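    Illustrative sketch (not part of the patent): a compact Python example of VLAD encoding with one PCA-whitening projection per visual word, using scikit-learn's KMeans and PCA. The codebook size, output dimension, power/L2 normalisation, and the random training data are assumptions, not the modules of the patented device.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import PCA

        def train_local_pca_vlad(train_desc, k=16, out_dim=32, seed=0):
            """Fit a codebook and one PCA-whitening projection per word."""
            km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(train_desc)
            pcas = []
            for c in range(k):
                resid = train_desc[km.labels_ == c] - km.cluster_centers_[c]
                pcas.append(PCA(n_components=out_dim, whiten=True).fit(resid))
            return km, pcas

        def encode_vlad(desc, km, pcas):
            """Aggregate local descriptors into a whitened, normalised VLAD vector."""
            words = km.predict(desc)
            blocks = []
            for c in range(km.n_clusters):
                resid = desc[words == c] - km.cluster_centers_[c]
                v = resid.sum(axis=0) if len(resid) else np.zeros(desc.shape[1])
                blocks.append(pcas[c].transform(v[None, :])[0])  # per-word whitening
            vlad = np.concatenate(blocks)
            vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))         # power normalisation
            return vlad / (np.linalg.norm(vlad) + 1e-12)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            km, pcas = train_local_pca_vlad(rng.standard_normal((5000, 64)))
            print(encode_vlad(rng.standard_normal((300, 64)), km, pcas).shape)  # (512,)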
  • Patent number: 10424075
    Abstract: A method and a device for post-processing depth/disparity maps adopt a strategy of combining edge information and segmentation information when detecting irregular edge regions. The method includes dividing a color image into super pixels when performing image segmentation on the color image; partitioning a grayscale range into a preset number of intervals, and for each super pixel, statistically obtaining a histogram of all the pixel points that fall within the intervals; determining, in a current super pixel, whether the ratio of the number of pixels contained in the interval having the maximum interval distribution value to the total number of pixels in the current super pixel is less than a first threshold; and if so, further dividing the current super pixel using a color-based segmentation method. The disclosed method and device improve the accuracy of color image division while ensuring image processing speed, thus improving the detection accuracy of irregular edge regions.
    Type: Grant
    Filed: May 6, 2015
    Date of Patent: September 24, 2019
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Jianbo Jiao, Ronggang Wang, Zhenyu Wang, Wenmin Wang, Wen Gao
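    Illustrative sketch (not part of the patent): a short Python example of the per-superpixel histogram test: partition the grayscale range into intervals, find the share of the most populated interval, and flag the superpixel for further colour-based subdivision when that share falls below a threshold. The bin count and threshold are assumptions.
        import numpy as np

        def needs_subdivision(gray_values: np.ndarray, n_bins: int = 16,
                              threshold: float = 0.6) -> bool:
            """True if this superpixel is inhomogeneous and should be split further.
            `gray_values` holds the grayscale samples of the superpixel's pixels."""
            hist, _ = np.histogram(gray_values, bins=n_bins, range=(0, 255))
            return hist.max() / max(gray_values.size, 1) < threshold

        if __name__ == "__main__":
            flat = np.full(500, 120)                                   # homogeneous
            mixed = np.concatenate([np.full(250, 40), np.full(250, 200)])
            print(needs_subdivision(flat), needs_subdivision(mixed))   # False True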
  • Patent number: 10424054
    Abstract: A low-illumination image processing method and device address the problem of noise amplification that arises when existing contrast enhancement techniques are applied to original low-illumination images. A noise suppression filter is arranged before the contrast enhancement operation, and smoothing is performed on an inverse color image of the low-illumination image using a first filtering coefficient and a second filtering coefficient, so that image contrast is enhanced while random noise is suppressed. Texture and noise level parameters of the image are calculated from local characteristics within image blocks. Weighted averaging is performed on the first smoothed image and the second smoothed image according to the texture and noise level parameters.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: September 24, 2019
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Lin Li, Ronggang Wang, Chengzhou Tang, Zhenyu Wang, Wen Gao
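    Illustrative sketch (not part of the patent): a Python example of the smoothing-before-enhancement stage only: invert the dark image, smooth the inverse with a weak and a strong filter, and blend the two results. The box-filter sizes are stand-ins for the two filtering coefficients, the scalar blend weight stands in for the per-block texture/noise parameters, and the contrast-enhancement stage itself is not shown.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def denoise_inverse(gray: np.ndarray, w_texture: float = 0.5) -> np.ndarray:
            """Blend a weakly and a strongly smoothed version of the inverse image;
            larger `w_texture` keeps more of the detail-preserving (weak) result."""
            inv = 255.0 - gray.astype(np.float64)       # inverse (negative) image
            weak = uniform_filter(inv, size=3)          # "first filtering coefficient"
            strong = uniform_filter(inv, size=7)        # "second filtering coefficient"
            blended = w_texture * weak + (1.0 - w_texture) * strong
            return np.clip(255.0 - blended, 0, 255).astype(np.uint8)

        if __name__ == "__main__":
            dark = np.random.randint(0, 40, size=(120, 160)).astype(np.uint8)
            print(denoise_inverse(dark).mean())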
  • Publication number: 20190289298
    Abstract: Embodiments of the present disclosure disclose a quick intra code-rate predicting method in the field of video coding, which may skip the entropy coding procedure during the RDO process by modeling the residual information of a prediction block and predicting the number of coding bits of the prediction block based on information entropy theory under the corresponding model. The code-rate predicting method comprises: collecting statistics on prediction block distribution information and modeling it to obtain a combined model, predicting the number of coding bits of a prediction mode based on the model, and correcting the predicted number of coding bits, so as to predict the code-rate of each prediction mode during the RDO process in place of the high-time-complexity actual entropy coding procedure, thereby effectively reducing coding time with little loss of video quality. The present disclosure is applicable to I-frame code-rate prediction in video coding.
    Type: Application
    Filed: July 24, 2017
    Publication date: September 19, 2019
    Inventors: Ronggang WANG, Hongbin CAO, Zhenyu WANG, Wen GAO
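    Illustrative sketch (not part of the publication): a Python example of predicting the coding bits of a quantised residual block from an information-entropy estimate instead of running the entropy coder. The empirical-histogram model used here is a stand-in for the combined model fitted in the publication, and no correction term is applied.
        import numpy as np

        def estimate_bits(quantized_residual: np.ndarray) -> float:
            """Estimate the bits needed to entropy-code a block as N * H(p),
            where p is the block's empirical symbol distribution."""
            _, counts = np.unique(quantized_residual, return_counts=True)
            p = counts / counts.sum()
            bits_per_symbol = -(p * np.log2(p)).sum()
            return float(quantized_residual.size * bits_per_symbol)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            block = np.round(rng.laplace(scale=1.5, size=(16, 16))).astype(int)
            print(f"estimated bits: {estimate_bits(block):.1f}")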
  • Publication number: 20190273934
    Abstract: The present disclosure discloses a method and apparatus for fast inverse discrete cosine transform, and a video coding/decoding method and framework. The positions of non-zero coefficients in a transform unit are recorded while inverse quantization scanning is performed on the coefficients; the distribution pattern of the non-zero coefficients of the transform unit is then determined from these positions, an inverse discrete cosine transform function corresponding to that distribution pattern is selected, and the inverse transform is performed with it. The technical solutions of the present disclosure need not compute the zero coefficients, which improves the overall speed of the algorithm; and because the positions of the non-zero coefficients have already been recorded during inverse quantization scanning, the complexity of the algorithm is lowered.
    Type: Application
    Filed: March 17, 2016
    Publication date: September 5, 2019
    Applicant: Peking University Shenzhen Graduate School
    Inventors: Ronggang WANG, Kaili YAO, Zhenyu WANG, Wen GAO
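    Illustrative sketch (not part of the publication): a Python example of the dispatch idea: the non-zero positions recorded during inverse quantisation scanning determine a coefficient pattern (all-zero, DC-only, low-frequency, or full), and a matching inverse-transform path is chosen. The pattern classes are assumptions, and SciPy's general IDCT stands in for the codec's transform kernels.
        import numpy as np
        from scipy.fft import idctn

        def classify_pattern(nz_rows, nz_cols, n: int) -> str:
            """Classify the recorded non-zero coefficient layout of an n x n unit."""
            if nz_rows.size == 0:
                return "all_zero"
            if nz_rows.max() == 0 and nz_cols.max() == 0:
                return "dc_only"
            if nz_rows.max() < n // 2 and nz_cols.max() < n // 2:
                return "low_freq"
            return "full"

        def fast_idct(coeffs: np.ndarray) -> np.ndarray:
            """Pick an inverse-transform path matching the non-zero pattern."""
            n = coeffs.shape[0]
            nz_rows, nz_cols = np.nonzero(coeffs)
            kind = classify_pattern(nz_rows, nz_cols, n)
            if kind == "all_zero":
                return np.zeros_like(coeffs, dtype=float)
            if kind == "dc_only":                        # constant block, no transform needed
                return np.full((n, n), coeffs[0, 0] / n, dtype=float)
            # "low_freq" would use a reduced kernel on the non-zero corner in a
            # real decoder; this sketch simply runs the full transform for it.
            return idctn(coeffs, norm="ortho")

        if __name__ == "__main__":
            c = np.zeros((8, 8)); c[0, 0] = 80.0; c[1, 1] = -12.0
            print(fast_idct(c).shape)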
  • Patent number: 10395374
    Abstract: Disclosed in the present invention is a video foreground extraction method for surveillance video, which adjusts the block size to adapt to different video resolutions based on an image block processing method, and then extracts foreground objects in a moving state by establishing a background block model. The method comprises: representing each frame of image I in the surveillance video as blocks; initializing; updating a block background weight, a block temporary background and a temporary background; updating a block background and a background; saving a foreground, and updating a foreground block weight and a foreground block; and performing binarization processing on the foreground to obtain the final foreground result.
    Type: Grant
    Filed: April 6, 2017
    Date of Patent: August 27, 2019
    Assignee: PEKING UNIVERSITY SHENZHEN GRADUATE SCHOOL
    Inventors: Ge Li, Xianghao Zang, Wenmin Wang, Ronggang Wang
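    Illustrative sketch (not part of the patent): a toy Python example of block-wise background modelling: each frame is processed in fixed-size blocks, blocks close to the stored background slowly refresh it, and blocks that differ are marked as foreground before binarization. The block size, threshold, and learning rate are assumptions; the patent's block-weight and temporary-background bookkeeping is not reproduced.
        import numpy as np

        class BlockBackgroundModel:
            """Block-wise running-background model for surveillance foreground extraction."""

            def __init__(self, block: int = 16, threshold: float = 12.0, lr: float = 0.05):
                self.block, self.threshold, self.lr = block, threshold, lr
                self.background = None

            def apply(self, gray: np.ndarray) -> np.ndarray:
                gray = gray.astype(np.float64)
                if self.background is None:              # first frame initialises the model
                    self.background = gray.copy()
                    return np.zeros(gray.shape, dtype=np.uint8)
                fg, b = np.zeros(gray.shape, dtype=np.uint8), self.block
                for y in range(0, gray.shape[0] - b + 1, b):
                    for x in range(0, gray.shape[1] - b + 1, b):
                        cur = gray[y:y + b, x:x + b]
                        bg = self.background[y:y + b, x:x + b]
                        if np.abs(cur - bg).mean() > self.threshold:
                            fg[y:y + b, x:x + b] = 255    # moving block -> foreground
                        else:                             # static block -> refresh background
                            self.background[y:y + b, x:x + b] = (1 - self.lr) * bg + self.lr * cur
                return fg

        if __name__ == "__main__":
            model = BlockBackgroundModel()
            for _ in range(3):
                mask = model.apply(np.random.randint(0, 255, (240, 320)).astype(np.uint8))
            print(mask.shape, int(mask.max()))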
  • Publication number: 20190253719
    Abstract: Disclosed are a describing method and a coding method for panoramic video ROIs based on multiple layers of spherical circumferences. The describing method comprises: first setting the center of the panoramic video ROIs; then setting the number of ROI layers as N; obtaining the size Rn of the current ROI layer based on a radius or angle; obtaining the sizes of all N layers of ROIs; and writing information such as the center of the ROIs, the number of layers, and the size of each layer into a sequence header of a code stream. The coding method comprises adjusting or filtering an initial QP based on a QP adjustment value and then coding the image. By flexibly assigning code rates to the multiple layers of panoramic video ROIs, the code rate needed for coding and transmission is greatly reduced while a relatively high image quality of the ROIs is guaranteed.
    Type: Application
    Filed: July 12, 2017
    Publication date: August 15, 2019
    Inventors: Zhenyu WANG, Ronggang WANG, Yueming WANG, Wen GAO
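    Illustrative sketch (not part of the publication): a Python example of assigning QP values to N concentric ROI layers around a given centre on the sphere and building a per-pixel QP map for an equirectangular panorama; inner layers get lower QP (higher quality). The layer radii, QP offsets, and base QP are assumptions, not the syntax or values written to the sequence header.
        import numpy as np

        def roi_qp_map(width, height, center_lon_lat=(0.0, 0.0),
                       layer_angles_deg=(15.0, 30.0, 60.0),
                       qp_offsets=(-4, -2, 0, 4), base_qp=32):
            """Per-pixel QP map with N ROI layers defined by angular radii."""
            lon = (np.arange(width) + 0.5) / width * 360.0 - 180.0
            lat = 90.0 - (np.arange(height) + 0.5) / height * 180.0
            lon, lat = np.meshgrid(np.radians(lon), np.radians(lat))
            c_lon, c_lat = np.radians(center_lon_lat)
            # Great-circle angular distance from every pixel to the ROI centre.
            ang = np.degrees(np.arccos(np.clip(
                np.sin(lat) * np.sin(c_lat) +
                np.cos(lat) * np.cos(c_lat) * np.cos(lon - c_lon), -1.0, 1.0)))
            layer = np.digitize(ang, layer_angles_deg)    # 0..N, innermost layer is 0
            return base_qp + np.asarray(qp_offsets)[layer]

        if __name__ == "__main__":
            qp = roi_qp_map(512, 256)
            print(qp.min(), qp.max())                     # 28 at the centre, 36 outside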