Patents by Inventor Wen Gao

Wen Gao has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190273934
    Abstract: The present disclosure discloses a method and apparatus for fast inverse discrete cosine transform, together with a video coding/decoding method and framework. The positions of non-zero coefficients in a transform unit are recorded while inverse quantization scanning is performed on the coefficients, a distribution pattern of the non-zero coefficients is determined from those positions, an inverse discrete cosine transform function corresponding to that pattern is selected, and the inverse transform is then performed with the selected function. Because zero coefficients need not be computed, the overall speed of the algorithm is improved, and because the non-zero positions are already recorded during inverse quantization scanning, the complexity of the algorithm is lowered. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: March 17, 2016
    Publication date: September 5, 2019
    Applicant: Peking University Shenzhen Graduate School
    Inventors: Ronggang WANG, Kaili YAO, Zhenyu WANG, Wen GAO
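The sketch below is a minimal, hypothetical illustration of the dispatch idea described in the entry above, not the patented implementation; the 8x8 block size, the DC-only and 4x4 low-frequency pattern classes, and the function names are assumptions.

```python
# Hypothetical sketch: dispatch the inverse DCT on the non-zero positions that
# were recorded during the inverse-quantization scan, so zero coefficients are
# never revisited. Pattern classes and block size are illustrative.
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II basis: basis[u, x] = s(u) * cos(pi * (2x+1) * u / 2n)."""
    u = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    scale = np.full(n, np.sqrt(2.0 / n))
    scale[0] = np.sqrt(1.0 / n)
    return scale[:, None] * np.cos(np.pi * (2 * x + 1) * u / (2 * n))

def fast_idct(coeffs, nonzero_positions):
    """Pick an inverse-transform path from the recorded non-zero pattern."""
    n = coeffs.shape[0]
    basis = dct_basis(n)
    if not nonzero_positions:                              # all-zero transform unit
        return np.zeros((n, n))
    if nonzero_positions == [(0, 0)]:                      # DC-only shortcut
        return np.full((n, n), coeffs[0, 0] / n)
    if all(r < 4 and c < 4 for r, c in nonzero_positions): # low-frequency 4x4 corner
        b4 = basis[:4, :]                                  # only 4 basis rows needed
        return b4.T @ coeffs[:4, :4] @ b4
    return basis.T @ coeffs @ basis                        # general full transform

coeffs = np.zeros((8, 8))
coeffs[0, 0] = 64.0
print(fast_idct(coeffs, [(0, 0)])[0, :3])                  # -> [8. 8. 8.]
```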
  • Patent number: 10390040
    Abstract: Embodiments of the present disclosure provide a method, an apparatus, and a system for deep feature coding and decoding. The method comprises: extracting features from respective video frames; determining types of the features, the types reflecting the degree of time-domain correlation between each feature and a reference feature; encoding the features using predetermined coding patterns matching the types to obtain coded features; and transmitting the coded features to a server, which decodes them for a vision analysis task. With these embodiments, the videos themselves need not be transmitted to the cloud server; only the encoded features are sent for the vision analysis task, which lowers both the data transmission load and the storage load at the cloud server compared with the prior art. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: August 30, 2017
    Date of Patent: August 20, 2019
    Assignee: PEKING UNIVERSITY
    Inventors: Yonghong Tian, Lin Ding, Tiejun Huang, Wen Gao
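As a rough sketch of the coding idea summarized above (illustrative only, not the patented codec): a feature that is strongly time-correlated with its reference is coded predictively as a quantized residual, otherwise it is coded on its own. The correlation threshold, quantization step, and mode labels are assumptions.

```python
# Hypothetical sketch: type-dependent coding of deep features. A feature highly
# correlated with the reference feature is coded as a quantized residual ("P"),
# otherwise it is coded on its own ("I"). Threshold and step are assumptions.
import numpy as np

def encode_feature(feature, reference, corr_threshold=0.9, step=0.05):
    """Return (mode, payload) for one feature vector."""
    corr = np.corrcoef(feature, reference)[0, 1]      # time-domain correlation
    if corr >= corr_threshold:
        return "P", np.round((feature - reference) / step).astype(np.int16)
    return "I", np.round(feature / step).astype(np.int16)

def decode_feature(mode, payload, reference, step=0.05):
    values = payload.astype(np.float32) * step
    return reference + values if mode == "P" else values

rng = np.random.default_rng(0)
ref = rng.standard_normal(128).astype(np.float32)                 # reference feature
cur = ref + 0.01 * rng.standard_normal(128).astype(np.float32)    # current feature
mode, payload = encode_feature(cur, ref)
rec = decode_feature(mode, payload, ref)
print(mode, float(np.max(np.abs(rec - cur))) < 0.05)              # -> P True
```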
  • Publication number: 20190253719
    Abstract: Disclosed are a describing method and a coding method for panoramic video ROIs based on multiple layers of spherical circumferences. The describing method comprises: setting a center of the panoramic video ROIs; setting the number of ROI layers to N; obtaining the size Rn of the current ROI layer from a radius or angle; obtaining the sizes of all N ROI layers; and writing the ROI center, the number of layers, and the size of each layer into a sequence header of the code stream. The coding method comprises adjusting or filtering an initial QP based on a QP adjustment value and then coding the image. By flexibly assigning code rates to the multiple layers of panoramic video ROIs, the code rate needed for coding and transmission is greatly reduced while a relatively high image quality is maintained in the ROIs. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: July 12, 2017
    Publication date: August 15, 2019
    Inventors: Zhenyu WANG, Ronggang WANG, Yueming WANG, Wen GAO
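A minimal sketch of how per-layer rate assignment might look, assuming concentric circular layers around the ROI center and fixed QP offsets per layer; the radii, offsets, and distance metric are illustrative values, not taken from the publication.

```python
# Hypothetical sketch: assign a QP per block from its ROI layer. Layers are
# concentric circles around the ROI centre; radii and QP offsets are made up.
import math

def layer_of_block(block_center, roi_center, layer_radii):
    """Index of the innermost layer containing the block, or N if outside."""
    d = math.dist(block_center, roi_center)
    for i, radius in enumerate(layer_radii):          # radii sorted inner -> outer
        if d <= radius:
            return i
    return len(layer_radii)

def block_qp(base_qp, layer_index, qp_offsets=(-4, -2, 0, 4)):
    """Lower QP (more bits) for inner layers, higher QP outside the ROI."""
    idx = min(layer_index, len(qp_offsets) - 1)
    return max(0, min(51, base_qp + qp_offsets[idx]))

roi_center = (960, 480)            # written into the sequence header with N, sizes
layer_radii = (200, 500, 900)      # N = 3 layers
for block in [(960, 480), (1400, 480), (1900, 900)]:
    print(block, block_qp(32, layer_of_block(block, roi_center, layer_radii)))
```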
  • Patent number: 10379964
    Abstract: Systems, computer program products, and methods that can integrate resources at a disaster recovery site are provided. One method includes generating, on a primary site, a set of storage snapshots by combining another set of storage snapshots with incremental changes to mirrored data on a backup site, where those snapshots use a second snapshot format utilized on the backup site. The method further includes converting that set of storage snapshots from the second snapshot format to a first snapshot format utilized on the primary site to generate yet another set of storage snapshots, and converting, on the backup site, a set of storage snapshots that includes the incremental changes and uses the second snapshot format to the first snapshot format to generate still another set of storage snapshots. The storage snapshots on both sites thus represent the same data in the same snapshot format without bulk data transfer.
    Type: Grant
    Filed: July 10, 2017
    Date of Patent: August 13, 2019
    Assignee: International Business Machines Corporation
    Inventors: Henry E. Butterworth, Yi Zhi Gao, Long Wen Lan
  • Publication number: 20190212165
    Abstract: A method of identifying information during navigation is provided. Fork data is extracted from navigation data, the fork data corresponding to a road having a fork. A first node and at least two exit roads are extracted from the fork data, the at least two exit roads being roads in different directions. The fork data is identified as corresponding to a target fork in response to all of the at least two exit roads converging at the first node. A second node adjacent to the first node is queried in response to the at least two exit roads not all converging at the first node. The fork data is then identified as corresponding to the target fork if the distance between the first node and the second node meets a preset condition. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: March 13, 2019
    Publication date: July 11, 2019
    Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Shuai Wen YANG, Wang Yu XIAO, Shu Feng GAO, Rui CAO
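The sketch below illustrates the fork test described above under assumed data structures (roads carrying a start node, node coordinates in metres, a 30 m gap threshold); it is not the patented implementation.

```python
# Hypothetical sketch of the fork test: a fork is a target fork if all exit
# roads leave from the first node, or if the remaining exits leave from an
# adjacent second node that is close enough to the first node.
import math

def is_target_fork(first_node, exit_roads, node_xy, neighbours, max_gap_m=30.0):
    """exit_roads: dicts each holding the node its road starts from."""
    starts = {road["start"] for road in exit_roads}
    if starts == {first_node}:                        # all exits converge here
        return True
    for second in neighbours.get(first_node, ()):     # query adjacent nodes
        if second in starts and \
                math.dist(node_xy[first_node], node_xy[second]) <= max_gap_m:
            return True
    return False

node_xy = {"A": (0.0, 0.0), "B": (12.0, 5.0)}          # coordinates in metres
neighbours = {"A": ["B"]}
roads = [{"start": "A"}, {"start": "B"}]               # exits split across A and B
print(is_target_fork("A", roads, node_xy, neighbours)) # -> True (A and B are close)
```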
  • Publication number: 20190208651
    Abstract: The present disclosure provides a rear structure coupled to a display panel. The rear structure comprises a rear cover with a rib structure for supporting the display panel; the rib structure comprises a plurality of horizontal bars, a plurality of vertical bars, and a plurality of blocks, each block being formed by at least one of the horizontal bars and at least one of the vertical bars. A first section of the rib structure comprises at least two aligned adjacent horizontal bars and at least two aligned adjacent vertical bars, and a second section of the rib structure comprises at least two non-aligned vertical bars.
    Type: Application
    Filed: January 3, 2019
    Publication date: July 4, 2019
    Inventors: Wen-Pin Wang, Yao-Shih Chung, I-Ting Huang, Yu-Jen Chang, Hai-Ping Xiang, Hui Huang, Xiu-Gao Yang, Dong-Ping Zhang, Yong Yang, Hui Zhang, Bo Hu, Qin Sun
  • Publication number: 20190205393
    Abstract: A cross-media search method uses a VGG convolutional neural network (VGG net) to extract image features: the 4096-dimensional output of the seventh fully-connected layer (fc7), after a ReLU activation, serves as the image feature. A Fisher Vector built on Word2vec is used to extract text features. Semantic matching between the heterogeneous image and text features is then performed by means of logistic regression, and the correlation it finds between the two modalities enables cross-media search. This feature extraction approach effectively captures the deep semantics of images and text, improving cross-media search accuracy and overall search quality. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: December 1, 2016
    Publication date: July 4, 2019
    Applicant: Peking University Shenzhen Graduate School
    Inventors: Wenmin Wang, Liang Han, Mengdi Fan, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
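A hedged sketch of semantic matching by logistic regression, with random stand-ins for the VGG fc7 image features and the Fisher-Vector text features (the real fc7 feature is 4096-dimensional; smaller dimensions are used here only to keep the toy example fast). One classifier per modality maps its features into a shared space of class posteriors, and retrieval ranks by cosine similarity of those posteriors.

```python
# Hypothetical sketch: semantic matching of heterogeneous features by logistic
# regression. Random data stands in for real image/text features; in the paper
# the image feature is the 4096-d fc7 output after ReLU and the text feature is
# a Fisher Vector over Word2vec.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, img_dim, txt_dim, n_classes = 200, 256, 64, 5      # reduced dims for the toy
labels = rng.integers(0, n_classes, n)
img_feats = np.maximum(rng.standard_normal((n, img_dim)) + 0.3 * labels[:, None], 0)
txt_feats = rng.standard_normal((n, txt_dim)) + 0.3 * labels[:, None]

# One classifier per modality projects its features into a shared semantic
# space: the vector of class posterior probabilities.
img_clf = LogisticRegression(max_iter=500).fit(img_feats, labels)
txt_clf = LogisticRegression(max_iter=500).fit(txt_feats, labels)

def search_images_by_text(query_text, image_pool):
    """Rank pool images for a text query by cosine similarity of posteriors."""
    q = txt_clf.predict_proba(query_text[None, :])[0]
    p = img_clf.predict_proba(image_pool)
    scores = p @ q / (np.linalg.norm(p, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-scores)

print(search_images_by_text(txt_feats[0], img_feats[:10])[:3])
```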
  • Patent number: 10339633
    Abstract: The present application provides a method and a device for super-resolution image reconstruction based on dictionary matching. The method includes: establishing a matching dictionary library; inputting an image to be reconstructed into a multi-layer linear filter network; extracting a local characteristic of the image to be reconstructed; searching the matching dictionary library for the local characteristic of a low-resolution image block with the highest similarity to the local characteristic of the image to be reconstructed; searching the matching dictionary library for the residual of the combined sample in which that low-resolution local characteristic is located; performing interpolation amplification on the local characteristic of the most similar low-resolution image block; and adding the residual to the result of the interpolation amplification to obtain a reconstructed high-resolution image block. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: November 4, 2015
    Date of Patent: July 2, 2019
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Yang Zhao, Ronggang Wang, Wen Gao, Zhenyu Wang, Wenmin Wang
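A toy sketch of the dictionary-matching step described above: the "local characteristic" is just the flattened low-resolution patch, the multi-layer linear filter network is omitted, and nearest-neighbour up-sampling stands in for interpolation amplification. All of these simplifications are assumptions made for illustration.

```python
# Hypothetical sketch of dictionary-matching super-resolution: each dictionary
# entry pairs a low-resolution feature with the residual between the true
# high-resolution patch and an up-sampled version of the low-resolution patch.
import numpy as np

def upsample(patch, scale=2):
    """Nearest-neighbour stand-in for interpolation amplification."""
    return patch.repeat(scale, axis=0).repeat(scale, axis=1)

def build_dictionary(lr_patches, hr_patches, scale=2):
    feats = np.stack([lr.ravel() for lr in lr_patches])
    residuals = [hr - upsample(lr, scale) for lr, hr in zip(lr_patches, hr_patches)]
    return feats, residuals

def reconstruct(lr_patch, feats, residuals, scale=2):
    """Most similar dictionary entry supplies the residual to add back."""
    idx = int(np.argmin(np.linalg.norm(feats - lr_patch.ravel(), axis=1)))
    return upsample(lr_patch, scale) + residuals[idx]

rng = np.random.default_rng(1)
hr_patches = [rng.random((8, 8)) for _ in range(50)]
lr_patches = [p[::2, ::2] for p in hr_patches]              # toy 2x down-sampling
feats, residuals = build_dictionary(lr_patches, hr_patches)
print(reconstruct(lr_patches[3], feats, residuals).shape)   # -> (8, 8)
```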
  • Patent number: 10341682
    Abstract: Methods and devices are disclosed for panoramic video coding and decoding based on multi-mode boundary fill. A predicted image block of a current image block is obtained by inter-frame prediction, which includes a boundary fill step: when the reference sample of a pixel in the current image block lies outside the boundary of the corresponding reference image, a boundary fill method is adaptively selected according to the coordinates of the reference sample to obtain its sample value. The method and device make full use of the fact that the horizontal image content of a panoramic video is cyclically connected to optimize boundary filling, so the encoder can adaptively select a more suitable fill method from the reference sample coordinates, thereby improving compression efficiency. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: January 19, 2016
    Date of Patent: July 2, 2019
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Zhenyu Wang, Ronggang Wang, Xiubao Jiang, Wen Gao
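A minimal sketch of the boundary-fill idea, assuming a simple rule set: horizontal overruns wrap around because panoramic content is horizontally cyclic, while vertical overruns are clamped to the nearest edge row. The patent's actual mode-selection rules may be more elaborate.

```python
# Hypothetical sketch of adaptive boundary fill for panoramic reference frames:
# horizontal overruns wrap around because panoramic content is cyclic in the
# horizontal direction, vertical overruns are clamped to the nearest edge row.
import numpy as np

def fetch_reference_sample(ref_image, x, y):
    """Return the reference sample for possibly out-of-bounds coordinates."""
    h, w = ref_image.shape
    x = x % w                          # horizontal: cyclic wrap-around fill
    y = min(max(y, 0), h - 1)          # vertical: clamp to the nearest edge row
    return ref_image[y, x]

ref = np.arange(12).reshape(3, 4)      # tiny 3x4 stand-in for a panoramic frame
print(fetch_reference_sample(ref, -1, 1))   # wraps to column 3 -> 7
print(fetch_reference_sample(ref, 2, 5))    # clamps to the last row -> 10
```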
  • Patent number: 10339409
    Abstract: A method and a device for extracting local features of a 3D point cloud are disclosed. Angle information and concavo-convex information between a feature point to be extracted and a point of an adjacent body element are calculated in a local reference frame corresponding to the points of each body element, so the feature relation between the two points can be computed accurately and is invariant to translation and rotation. Because concavo-convex information about the local point cloud is included during extraction, the inaccuracy caused by ignoring concavo-convex ambiguity in previous 3D local feature descriptors is resolved. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: June 18, 2015
    Date of Patent: July 2, 2019
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Wenmin Wang, Mingmin Zhen, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Wen Gao
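A hypothetical sketch of a pairwise angle-plus-convexity descriptor between a feature point and an adjacent point, each with a unit surface normal. The convexity test used here, (n1 - n2) · (p1 - p2) > 0, is a common heuristic and is an assumption; the patent's local reference frame and exact feature layout are not reproduced.

```python
# Hypothetical sketch: a pairwise angle + convexity feature between a feature
# point and an adjacent point, each with a unit surface normal. The convexity
# test (n1 - n2) . (p1 - p2) > 0 is a common heuristic, used here as an
# assumption; it is not claimed to be the patent's exact criterion.
import numpy as np

def pair_feature(p1, n1, p2, n2):
    """Return (angle between normals, angle normal-vs-displacement, convex?)."""
    d = p2 - p1
    d_unit = d / (np.linalg.norm(d) + 1e-12)
    angle_normals = np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0))
    angle_disp = np.arccos(np.clip(np.dot(n1, d_unit), -1.0, 1.0))
    convex = bool(np.dot(n1 - n2, p1 - p2) > 0.0)
    return angle_normals, angle_disp, convex

p1, n1 = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
p2 = np.array([1.0, 0.0, 0.1])
n2 = np.array([0.5, 0.0, 0.866])
n2 = n2 / np.linalg.norm(n2)
print(pair_feature(p1, n1, p2, n2))     # convex ridge -> flag True
```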
  • Patent number: 10325358
    Abstract: A method for image de-blurring includes estimating an intermediate image L by marking and constraining an edge region and a smooth region in an input image; estimating a blur kernel k by extracting salient edges from the intermediate image L, wherein the salient edges have scales greater than those of the blur kernel k; and restoring the input image to a clear image by performing non-blind deconvolution on the input image with the estimated blur kernel k. Imposing constraints on the edge region and the smooth region allows the intermediate image to preserve edges while effectively removing noise and ringing artifacts in the smooth region. The salient edges in the intermediate image L enable more accurate blur kernel estimation, and the final non-blind deconvolution restores the input image to a clear image with the desired de-blurring effect.
    Type: Grant
    Filed: May 15, 2015
    Date of Patent: June 18, 2019
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Xinxin Zhang, Ronggang Wang, Zhenyu Wang, Wen Gao
  • Patent number: 10299678
    Abstract: An apparatus for detecting a conductance parameter of a high-protein body fluid sample is provided. The apparatus includes at least one liquid collection element and at least two electrodes horizontally aligned in the liquid collection element. Also provided are methods for detecting dehydration in a subject, comprising the step of measuring the conductance parameter of the subject's saliva.
    Type: Grant
    Filed: April 3, 2017
    Date of Patent: May 28, 2019
    Assignees: CHANG GUNG MEMORIAL HOSPITAL, CHIAYI, NATIONAL APPLIED RESEARCH LABORATORIES, NATIONAL TAIWAN UNIVERSITY
    Inventors: Jen-Tsung Yang, Leng-Chieh Lin, I-Neng Lee, Jo-Wen Huang, Jer-Liang Andrew Yeh, Ming-Yu Lin, Yen-Pei Lu, Chih-Ting Lin, Chia-Hong Gao
  • Patent number: 10298950
    Abstract: A P frame-based multi-hypothesis motion compensation method includes: taking an encoded image block adjacent to the current image block as a reference image block and deriving a first motion vector of the current image block from the motion vector of the reference image block, the first motion vector pointing to a first prediction block; taking the first motion vector as a reference value and performing joint motion estimation on the current image block to obtain a second motion vector, the second motion vector pointing to a second prediction block; and performing weighted averaging of the first prediction block and the second prediction block to obtain the final prediction block of the current image block. The method increases the accuracy of the prediction block obtained for the current image block without increasing the code rate. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: January 26, 2016
    Date of Patent: May 21, 2019
    Assignee: PEKING UNIVERSITY SHENZHEN GRADUATE SCHOOL
    Inventors: Ronggang Wang, Lei Chen, Zhenyu Wang, Siwei Ma, Wen Gao, Tiejun Huang, Wenmin Wang, Shengfu Dong
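A minimal sketch of the final weighting step, assuming integer-pel motion vectors and equal weights for the two hypotheses; the first prediction block comes from the neighbour-derived motion vector and the second from joint motion estimation.

```python
# Hypothetical sketch of the final multi-hypothesis step, assuming integer-pel
# motion vectors and equal weights (the actual weighting is not reproduced).
import numpy as np

def fetch_block(ref_frame, top_left, mv, size=8):
    """Motion-compensated block copy for an integer-pel motion vector."""
    y, x = top_left[0] + mv[0], top_left[1] + mv[1]
    return ref_frame[y:y + size, x:x + size].astype(np.float32)

def multi_hypothesis_prediction(ref_frame, top_left, mv1, mv2, w=0.5):
    p1 = fetch_block(ref_frame, top_left, mv1)   # from the neighbouring block's MV
    p2 = fetch_block(ref_frame, top_left, mv2)   # from joint motion estimation
    return w * p1 + (1.0 - w) * p2               # weighted-average final prediction

ref = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
print(multi_hypothesis_prediction(ref, (16, 16), (0, 1), (1, 0)).shape)   # (8, 8)
```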
  • Publication number: 20190145085
    Abstract: A multi-function pull head comprises a valve seat, a water input connector, an outer shell, a water output portion, a first function switching structure, a second function switching structure, and a water output panel. The water input connector is connected to the outer shell through a thread structure, and two through holes are provided on the outer shell for the first and second function switching structures to pass through, respectively. The water input connector is disposed on a top portion of the valve seat. The outer shell is disposed on a bottom portion of the water input connector to protect the valve seat, and is engaged to the water input connector through an engaging ring. The water output portion is disposed at a lower portion of the valve seat.
    Type: Application
    Filed: October 30, 2018
    Publication date: May 16, 2019
    Inventors: LI-ZHONG LIAO, DING-JUN WANG, WEN GAO
  • Publication number: 20190139186
    Abstract: Embodiments of the present disclosure provide a method for accelerating the CDVS extraction process on a GPGPU platform. For the feature detection and local descriptor computation stages of CDVS extraction, the operation logic and parallelization strategies of the pixel-parallel and feature-point-parallel sub-procedures are implemented with the OpenCL general-purpose parallel programming framework, and acceleration is achieved by exploiting the GPU's parallel computing capability. The method includes: partitioning computing tasks between a GPU and a CPU; restructuring the image scale pyramid storage model; assigning parallelization strategies to the respective GPU sub-procedures; and using local memory to mitigate the memory access bottleneck. The technical solution accelerates the CDVS extraction process and significantly enhances extraction performance.
    Type: Application
    Filed: December 5, 2016
    Publication date: May 9, 2019
    Inventors: Ronggang Wang, Shen Zhang, Zhenyu Wang, Wen Gao
  • Publication number: 20190139199
    Abstract: An image deblurring method based on light streak information in an image is provided. Shape information of the blur kernel is obtained from a light streak in a motion-blurred image, and image restoration is constrained by combining this shape information, a natural image, and the blur kernel, thereby obtaining an accurate blur kernel and a high-quality restored image. The method specifically comprises: selecting an optimal image patch containing an optimal light streak; extracting shape information of the blur kernel from that patch; performing blur kernel estimation to obtain the final blur kernel; and performing non-blind deconvolution to restore a sharp image as the final deblurred result. The present disclosure also establishes a test set of captured blurry images containing light streaks and a method for obtaining an accurate blur kernel and a high-quality restored image.
    Type: Application
    Filed: July 15, 2016
    Publication date: May 9, 2019
    Inventors: Ronggang WANG, Xinxin ZHANG, Zhenyu WANG, Wen GAO
  • Publication number: 20190110060
    Abstract: A video encoding and decoding method, together with an inter-frame prediction method, device, and system, are disclosed. The inter-frame prediction method includes: obtaining a motion vector of the current image block and the spatial position of a current pixel; obtaining a motion vector of the current pixel from the motion vector of the current image block and the pixel's spatial position; and obtaining a predicted value of the current pixel from the pixel's motion vector. Because both the block motion vector and the pixel's spatial position are considered during inter-frame prediction, the method can accommodate the lens distortion characteristics of different images and the zoom-in/zoom-out produced when objects move within the picture, thereby improving the accuracy of the computed pixel motion vectors and improving inter-frame prediction performance and compression efficiency. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: January 19, 2016
    Publication date: April 11, 2019
    Inventors: Zhenyu Wang, Ronggang Wang, Xiubao Jiang, Wen Gao
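A hypothetical sketch of deriving per-pixel motion vectors from the block motion vector plus the pixel's position, here with a simple zoom-like correction around the block centre; the actual position-dependent mapping in the publication (for example for lens distortion) is not reproduced.

```python
# Hypothetical sketch: per-pixel motion vectors from the block motion vector
# plus a position-dependent term, here a simple zoom model around the block
# centre. The publication's actual mapping (e.g. for lens distortion) differs.
import numpy as np

def pixel_motion_vectors(block_mv, block_top_left, block_size, zoom=0.02):
    """Return a (block_size, block_size, 2) array of per-pixel motion vectors."""
    ys, xs = np.mgrid[0:block_size, 0:block_size]
    cy = block_top_left[0] + (block_size - 1) / 2.0      # block centre (row, col)
    cx = block_top_left[1] + (block_size - 1) / 2.0
    mv = np.empty((block_size, block_size, 2))
    mv[..., 0] = block_mv[0] + zoom * (block_top_left[0] + ys - cy)
    mv[..., 1] = block_mv[1] + zoom * (block_top_left[1] + xs - cx)
    return mv

mvs = pixel_motion_vectors(block_mv=(2.0, -1.0), block_top_left=(32, 64), block_size=8)
print(mvs[0, 0], mvs[7, 7])     # corner pixels receive slightly different vectors
```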
  • Publication number: 20190098303
    Abstract: A method, a device, and an encoder for controlling the filtering of intra-frame prediction reference pixels are disclosed. When the reference pixels in the reference pixel group of an intra-frame block to be predicted are filtered and the target reference pixel currently being filtered is not an edge pixel of the group (S202), a pixel difference value between the target reference pixel and its n adjacent reference pixels is acquired (S203), and a filter whose filtering grade corresponds to that difference value is selected to filter the target reference pixel (S204). For reference pixels not located at the edge of the group, filters of appropriate grades are thus configured flexibly according to the local difference characteristics of those pixels, giving the filtering flexibility and adaptivity and achieving a better effect. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: June 16, 2016
    Publication date: March 28, 2019
    Inventors: Ronggang Wang, Kui Fan, Zhenyu Wang, Wen Gao
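A rough sketch of grade selection from local pixel differences, assuming n = 2 neighbours, two thresholds, and [1 2 1] / [1 6 1] smoothing kernels; all of these values are illustrative, and edge samples of the reference row are left unfiltered as in the abstract.

```python
# Hypothetical sketch: choose a smoothing strength for each non-edge reference
# sample from its difference with n = 2 neighbours. Thresholds and the
# [1 2 1] / [1 6 1] kernels are illustrative values.
import numpy as np

def filter_reference_row(ref, flat_thr=8, edge_thr=24):
    r = np.asarray(ref, dtype=np.int32)
    out = r.astype(np.float32)
    for i in range(1, len(r) - 1):                    # edge samples stay unfiltered
        diff = abs(r[i - 1] - r[i]) + abs(r[i + 1] - r[i])
        if diff <= flat_thr:                          # flat area: strong smoothing
            taps = (1, 2, 1)
        elif diff <= edge_thr:                        # mild detail: weak smoothing
            taps = (1, 6, 1)
        else:                                         # sharp edge: keep the sample
            continue
        out[i] = (taps[0] * r[i - 1] + taps[1] * r[i] + taps[2] * r[i + 1]) / sum(taps)
    return out

ref = [50, 52, 51, 53, 200, 201, 199, 198]
print(np.round(filter_reference_row(ref)).astype(int))   # the 53 -> 200 edge is kept
```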
  • Patent number: 10244262
    Abstract: A decoding method including receiving a bitstream corresponding to a residual block, decoding the residual block having a plurality of residual pixels represented as transform coefficients, and computing a reconstructed block based on the residual pixels. The reconstructed block includes reconstructed pixels and uses an intra prediction mode to generate prediction pixels in sequence vertically or horizontally based on reconstructed pixels in the reconstructed block. The reconstructed block includes initial reconstructed pixels based on initial prediction pixels. The intra prediction mode is used to generate the initial prediction pixels based on external reference pixels located in neighboring blocks decoded before the reconstructed block. Computing the reconstructed block includes combining prediction pixels with residual pixels to generate additional reconstructed pixels used to generate additional prediction pixels.
    Type: Grant
    Filed: October 26, 2016
    Date of Patent: March 26, 2019
    Assignee: Futurewei Technologies, Inc.
    Inventors: Wen Gao, Jin Song, Mingyuan Yang, Haoping Yu
  • Patent number: 10230990
    Abstract: A chroma interpolation method, including: 1) determining a pixel accuracy for interpolation; 2) determining the coordinate positions of interpolated fractional-pel pixels between integer-pel pixels; and 3) performing two-dimensional separable interpolation on the fractional-pel pixels with an interpolation filter according to those coordinate positions. The invention also provides a filter device using the above method for chroma interpolation. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: March 2, 2016
    Date of Patent: March 12, 2019
    Assignee: PEKING UNIVERSITY SHENZHEN GRADUATE SCHOOL
    Inventors: Ronggang Wang, Hao Lv, Zhenyu Wang, Shengfu Dong, Wen Gao
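A minimal sketch of two-dimensional separable interpolation with a 4-tap filter chosen by the fractional phase of the target position; the tap values and the quarter-pel phase set are illustrative assumptions, not coefficients taken from the patent or any standard.

```python
# Hypothetical sketch: two-dimensional separable chroma interpolation with a
# 4-tap filter selected by the fractional phase of the target position. The
# tap values are illustrative, not coefficients from the patent or a standard.
import numpy as np

TAPS = {0: (0, 64, 0, 0),        # integer position: pass through
        1: (-4, 54, 16, -2),     # quarter-pel
        2: (-4, 36, 36, -4),     # half-pel
        3: (-2, 16, 54, -4)}     # three-quarter-pel

def interp_1d(samples, frac):
    """Apply the 4-tap kernel for the given fractional phase along one axis."""
    taps = TAPS[frac]
    padded = np.pad(np.asarray(samples, dtype=np.int32), (1, 2), mode="edge")
    out = np.zeros(len(samples), dtype=np.int32)
    for k, t in enumerate(taps):
        out += t * padded[k:k + len(samples)]
    return (out + 32) >> 6                            # normalise by 64, with rounding

def interp_2d(block, frac_x, frac_y):
    """Separable interpolation: filter the rows first, then the columns."""
    rows = np.stack([interp_1d(row, frac_x) for row in block])
    return np.stack([interp_1d(col, frac_y) for col in rows.T]).T

chroma = (np.arange(64).reshape(8, 8) * 3).astype(np.int32)
print(interp_2d(chroma, frac_x=2, frac_y=0)[0, :4])   # half-pel shift along rows
```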