Patents by Inventor Yanwu Xu

Yanwu Xu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11735315
    Abstract: Embodiments of the present disclosure disclose a method, apparatus, and device for fusing features applied to small target detection, and a storage medium, relating to the field of computer vision technology. A particular embodiment of the method for fusing features applied to small target detection comprises: acquiring feature maps output by convolutional layers in a Backbone network; performing convolution on the feature maps to obtain input feature maps of feature layers, the feature layers representing resolutions of the input feature maps; and fusing, based on densely connected feature pyramid network features, the input feature maps of each feature layer to obtain output feature maps of the feature layer. Since no additional convolutional layer is introduced for feature fusion, the detection performance for small targets may be enhanced without additional parameters, and the detection ability for small targets may be improved under computing resource constraints.
    Type: Grant
    Filed: March 26, 2021
    Date of Patent: August 22, 2023
    Assignee: Beijing Baidu Netcom Science and Technology Co., Ltd.
    Inventors: Binghong Wu, Yehui Yang, Yanwu Xu, Lei Wang
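
A minimal sketch of the dense feature-pyramid fusion summarized in the abstract of patent 11735315, assuming a PyTorch-style backbone; the channel counts, the 1×1 lateral convolutions, and the nearest-neighbor resizing are illustrative assumptions rather than details taken from the patent.

```python
import torch
import torch.nn.functional as F
from torch import nn

class DenselyConnectedFPN(nn.Module):
    """Sketch: fuse backbone feature maps with dense (all-to-all) connections.

    Each pyramid level receives the resized feature maps of every other level,
    so the fusion itself introduces no extra convolutional layers.
    """

    def __init__(self, in_channels=(256, 512, 1024), out_channels=256):
        super().__init__()
        # 1x1 convolutions only reduce backbone channels to a common width
        # (the "performing convolution" step in the abstract).
        self.lateral = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels
        )

    def forward(self, backbone_feats):
        # backbone_feats: list of tensors [N, C_i, H_i, W_i], high to low resolution.
        inputs = [lat(f) for lat, f in zip(self.lateral, backbone_feats)]
        outputs = []
        for i, target in enumerate(inputs):
            fused = target
            for j, source in enumerate(inputs):
                if j == i:
                    continue
                # Resize every other level to this level's resolution and sum.
                fused = fused + F.interpolate(
                    source, size=target.shape[-2:], mode="nearest"
                )
            outputs.append(fused)
        return outputs

if __name__ == "__main__":
    feats = [torch.randn(1, c, s, s) for c, s in [(256, 64), (512, 32), (1024, 16)]]
    for out in DenselyConnectedFPN()(feats):
        print(out.shape)  # each level keeps its own resolution, 256 channels
```
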
  • Publication number: 20230195839
    Abstract: Technical solutions relate to the field of artificial intelligence such as deep learning, computer vision and intelligent imaging. A method may include: during training of a one-stage object detection model, acquiring values of a loss function corresponding to feature maps at different scales respectively in the case that classification loss calculation is required, the loss function being a focal loss function; and determining a final value of the loss function according to the acquired values of the loss function, and training the one-stage object detection model according to the final value of the loss function.
    Type: Application
    Filed: December 20, 2021
    Publication date: June 22, 2023
    Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Binghong WU, Yehui YANG, Dalu YANG, Yanwu XU, Lei WANG, Qian LI
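
A minimal sketch of combining per-scale focal-loss values into one training loss, as the abstract of publication 20230195839 describes; the α/γ defaults and the positive-anchor normalization are common conventions assumed here, not details from the application.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss, summed over all anchors of one feature-map scale."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).sum()

def multi_scale_classification_loss(per_scale_logits, per_scale_targets, num_pos):
    """Acquire a focal-loss value per scale, then reduce to one final value."""
    per_scale = [focal_loss(l, t) for l, t in zip(per_scale_logits, per_scale_targets)]
    # Final value: sum over scales, normalized by the number of positive anchors.
    return torch.stack(per_scale).sum() / max(num_pos, 1)

if __name__ == "__main__":
    logits = [torch.randn(4, 100), torch.randn(4, 25)]  # two scales
    targets = [torch.randint(0, 2, (4, 100)).float(),
               torch.randint(0, 2, (4, 25)).float()]
    print(multi_scale_classification_loss(logits, targets, num_pos=30))
```
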
  • Patent number: 11436447
    Abstract: A target detection method is provided, which relates to the fields of deep learning, computer vision, and artificial intelligence. The method comprises: classifying, by using a first classification model, a plurality of image patches comprised in an input image, to obtain one or more candidate image patches, in the plurality of image patches, that are preliminarily classified as comprising a target; extracting a corresponding salience area for each candidate image patch; constructing a corresponding target feature vector for each candidate image patch based on the corresponding salience area; and classifying, by using a second classification model, the target feature vector to determine whether each candidate image patch comprises the target.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: September 6, 2022
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Yehui Yang, Lei Wang, Yanwu Xu
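
A rough sketch of the two-stage patch pipeline in patent 11436447, assuming scikit-learn logistic-regression classifiers for both stages; the intensity-based salience map and the four-element feature vector are placeholders invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def split_into_patches(image, patch=64):
    """Tile a grayscale image into non-overlapping square patches."""
    h, w = image.shape
    return [image[r:r + patch, c:c + patch]
            for r in range(0, h - patch + 1, patch)
            for c in range(0, w - patch + 1, patch)]

def salience_area(patch_img):
    """Placeholder salience map: pixels brighter than the patch mean."""
    return patch_img > patch_img.mean()

def target_feature_vector(patch_img, mask):
    """Build a small feature vector from the salient area of a candidate patch."""
    salient = patch_img[mask] if mask.any() else patch_img.ravel()
    return np.array([mask.mean(), salient.mean(), salient.std(), patch_img.mean()])

def detect(image, first_clf, second_clf, patch=64):
    # Stage 1: cheap screening of every patch on raw pixels.
    candidates = [p for p in split_into_patches(image, patch)
                  if first_clf.predict(p.reshape(1, -1))[0] == 1]
    # Stage 2: classify a salience-based feature vector per candidate patch.
    flags = [second_clf.predict(
                 target_feature_vector(p, salience_area(p)).reshape(1, -1))[0] == 1
             for p in candidates]
    return candidates, flags

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Both stages are fitted on random data purely to make the sketch executable.
    clf1 = LogisticRegression(max_iter=200).fit(rng.normal(size=(20, 64 * 64)),
                                                np.array([0, 1] * 10))
    clf2 = LogisticRegression(max_iter=200).fit(rng.normal(size=(20, 4)),
                                                np.array([0, 1] * 10))
    candidates, flags = detect(rng.normal(size=(128, 128)), clf1, clf2)
    print(len(candidates), flags)
```
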
  • Patent number: 11379980
    Abstract: The present application discloses an image processing method, an apparatus, an electronic device and a storage medium. A specific implementation is: acquiring an image to be processed; acquiring a grading array according to the image to be processed and a grading network model, where the grading network model is a model pre-trained according to mixed samples, the number of elements contained in the grading array is C−1, C is the number of lesion grades, the C lesion grades include one lesion grade without lesion and C−1 lesion grades with lesion, and a kth element in the grading array is a probability of a lesion grade corresponding to the image to be processed being greater than or equal to a kth lesion grade, where 1≤k≤C−1, and k is an integer; determining the lesion grade corresponding to the image to be processed according to the grading array.
    Type: Grant
    Filed: November 13, 2020
    Date of Patent: July 5, 2022
    Inventors: Fangxin Shang, Yehui Yang, Lei Wang, Yanwu Xu
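
The grading array of patent 11379980 holds C−1 cumulative probabilities for C lesion grades; a minimal decoding sketch follows, with the 0.5 threshold being an assumption not stated in the abstract.

```python
import numpy as np

def decode_grade(grading_array, threshold=0.5):
    """Turn a length C-1 array, whose kth element is P(grade >= k+1),
    into a single lesion grade in {0, ..., C-1}.

    Grade 0 means "no lesion"; counting how many cumulative probabilities
    exceed the threshold gives the predicted grade.
    """
    grading_array = np.asarray(grading_array)
    return int((grading_array >= threshold).sum())

if __name__ == "__main__":
    # C = 5 grades -> 4 elements: P(>=1), P(>=2), P(>=3), P(>=4).
    print(decode_grade([0.95, 0.80, 0.40, 0.10]))  # -> grade 2
```
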
  • Patent number: 11232560
    Abstract: Embodiments of the present disclosure provide a method and apparatus for processing a fundus image. The method may include: acquiring a target fundus image; dividing the target fundus image into at least two first image blocks; inputting a first image block into a pre-trained deep learning model, to obtain a first output value; and determining, based on the first output value and a threshold, whether the first image block is a fundus image block containing a predetermined type of image region.
    Type: Grant
    Filed: December 2, 2019
    Date of Patent: January 25, 2022
    Inventors: Yehui Yang, Yanwu Xu, Lei Wang, Yan Huang
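
A minimal sketch of the block-wise screening flow in patent 11232560, assuming a PyTorch model that emits one score per image block; the block size, the threshold value, and the stand-in network are illustrative only.

```python
import torch
from torch import nn

def contains_region_of_interest(model, fundus_image, block=256, threshold=0.5):
    """Divide the fundus image into blocks, score each block with a
    pre-trained model, and flag blocks whose score exceeds the threshold."""
    _, h, w = fundus_image.shape  # [C, H, W]
    flags = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            patch = fundus_image[:, r:r + block, c:c + block].unsqueeze(0)
            with torch.no_grad():
                score = torch.sigmoid(model(patch)).item()
            flags.append(score >= threshold)
    return flags

if __name__ == "__main__":
    # Tiny stand-in for a pre-trained deep learning model.
    toy_model = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, 1))
    image = torch.rand(3, 512, 512)
    print(contains_region_of_interest(toy_model, image))
```
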
  • Publication number: 20210406586
    Abstract: An image classification method and apparatus, and a style transfer model training method and apparatus are provided, which relate to the fields of deep learning, cloud computing and computer vision in artificial intelligence. The image classification method comprises: inputting an image of a first style into a style transfer model, to obtain an image of a second style corresponding to the image of the first style; and inputting the image of the second style into an image classification model, to obtain a classification result of the image of the second style, wherein the style transfer model is obtained through training on the basis of a sample image of the first style and a sample image of the second style, and the image classification model is obtained through training on the basis of the sample image of the second style.
    Type: Application
    Filed: December 31, 2020
    Publication date: December 30, 2021
    Inventors: Dalu YANG, Yehui YANG, Lei WANG, Yanwu XU
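
A minimal sketch of the inference path in publication 20210406586, assuming the style transfer model and the image classification model are already-trained PyTorch modules; the tiny stand-in networks exist only to make the sketch runnable.

```python
import torch
from torch import nn

def classify_with_style_transfer(style_transfer_model, classifier, image_first_style):
    """Map a first-style image to the second style, then classify it."""
    style_transfer_model.eval()
    classifier.eval()
    with torch.no_grad():
        image_second_style = style_transfer_model(image_first_style)
        logits = classifier(image_second_style)
    return logits.argmax(dim=1)

if __name__ == "__main__":
    # Placeholders standing in for the two trained models.
    transfer = nn.Conv2d(3, 3, kernel_size=3, padding=1)
    classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, 5))
    batch = torch.rand(2, 3, 224, 224)
    print(classify_with_style_transfer(transfer, classifier, batch))
```
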
  • Publication number: 20210406616
    Abstract: A target detection method is provided, which relates to the fields of deep learning, computer vision, and artificial intelligence. The method comprises: classifying, by using a first classification model, a plurality of image patches comprised in an input image, to obtain one or more candidate image patches, in the plurality of image patches, that are preliminarily classified as comprising a target; extracting a corresponding salience area for each candidate image patch; constructing a corresponding target feature vector for each candidate image patch based on the corresponding salience area; and classifying, by using a second classification model, the target feature vector to determine whether each candidate image patch comprises the target.
    Type: Application
    Filed: September 30, 2020
    Publication date: December 30, 2021
    Inventors: Yehui YANG, Lei WANG, Yanwu XU
  • Patent number: 11116393
    Abstract: The present disclosure generally relates to an automated method and system for vision assessment of a subject. The method comprises: determining a set of test patterns for the subject based on a preliminary assessment of an eye of the subject; displaying the set of test patterns sequentially to the subject; collecting data on the subject's gaze in response to each test pattern displayed to the subject; and assessing vision functionality of the subject based on the collected gaze data.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: September 14, 2021
    Assignees: AGENCY FOR SCIENCE, TECHNOLOGY AND RESEARCH, TAN TOCK SENG HOSPITAL PTE LTD
    Inventors: Huiying Liu, Augustinus Laude, Tock Han Lim, Yanwu Xu, Wing Kee Damon Wong, Jiang Liu
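
A rough sketch of the assessment loop in patent 11116393; the pattern-selection rule, the gaze-capture stub, and the scoring are invented placeholders, since the abstract names only the four steps.

```python
import random

def choose_test_patterns(preliminary_assessment, n_patterns=5):
    """Pick pattern difficulties around the eye's preliminary acuity estimate."""
    base = preliminary_assessment["estimated_acuity"]
    return [max(1, base + offset) for offset in range(-2, n_patterns - 2)]

def collect_gaze_response(pattern):
    """Stub for the eye tracker: returns whether gaze landed on the pattern."""
    return random.random() < 1.0 / pattern  # harder patterns are resolved less often

def assess_vision(preliminary_assessment):
    patterns = choose_test_patterns(preliminary_assessment)
    responses = []
    for pattern in patterns:          # display the patterns sequentially
        responses.append(collect_gaze_response(pattern))
    # Score: fraction of displayed patterns the subject's gaze resolved.
    return sum(responses) / len(responses)

if __name__ == "__main__":
    print(assess_vision({"estimated_acuity": 3}))
```
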
  • Publication number: 20210224581
    Abstract: Embodiments of the present disclosure disclose a method, apparatus, and device for fusing features applied to small target detection, and a storage medium, relating to the field of computer vision technology. A particular embodiment of the method for fusing features applied to small target detection comprises: acquiring feature maps output by convolutional layers in a Backbone network; performing convolution on the feature maps to obtain input feature maps of feature layers, the feature layers representing resolutions of the input feature maps; and fusing, based on densely connected feature pyramid network features, the input feature maps of each feature layer to obtain output feature maps of the feature layer. Since no additional convolutional layer is introduced for feature fusion, the detection performance for small targets may be enhanced without additional parameters, and the detection ability for small targets may be improved under computing resource constraints.
    Type: Application
    Filed: March 26, 2021
    Publication date: July 22, 2021
    Applicant: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Binghong Wu, Yehui Yang, Yanwu Xu, Lei Wang
  • Publication number: 20210192728
    Abstract: The present application discloses an image processing method, an apparatus, an electronic device and a storage medium. A specific implementation is: acquiring an image to be processed; acquiring a grading array according to the image to be processed and a grading network model, where the grading network model is a model pre-trained according to mixed samples, the number of elements contained in the grading array is C−1, C is the number of lesion grades, the C lesion grades include one lesion grade without lesion and C−1 lesion grades with lesion, and a kth element in the grading array is a probability of a lesion grade corresponding to the image to be processed being greater than or equal to a kth lesion grade, where 1≤k≤C−1, and k is an integer; determining the lesion grade corresponding to the image to be processed according to the grading array.
    Type: Application
    Filed: November 13, 2020
    Publication date: June 24, 2021
    Inventors: Fangxin SHANG, Yehui YANG, Lei WANG, Yanwu XU
  • Publication number: 20200320686
    Abstract: Embodiments of the present disclosure provide a method and apparatus for processing a fundus image. The method may include: acquiring a target fundus image; dividing the target fundus image into at least two first image blocks; inputting a first image block into a pre-trained deep learning model, to obtain a first output value; and determining, based on the first output value and a threshold, whether the first image block is a fundus image block containing a predetermined type of image region.
    Type: Application
    Filed: December 2, 2019
    Publication date: October 8, 2020
    Inventors: Yehui Yang, Yanwu Xu, Lei Wang, Yan Huang
  • Publication number: 20200260944
    Abstract: A method and a device for recognizing a macular region and a computer-readable storage medium are provided. The method includes: obtaining a fundus image of a target object; extracting blood vessel information and optic disc information from the fundus image; inputting the blood vessel information and the optic disc information into a regression model to obtain location information of a macular fovea; and determining location information of the macular region of an eye of the target object based on the location information of the macular fovea. The embodiments of the application address the problem that the macular region cannot be accurately recognized when the image quality of the macular region is impaired.
    Type: Application
    Filed: November 27, 2019
    Publication date: August 20, 2020
    Applicant: Baidu Online Network Technology (Beijing) Co., Ltd.
    Inventors: Qinpei SUN, Yehui YANG, Lei WANG, Yanwu XU, Yan HUANG
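
A minimal sketch of regressing fovea coordinates from vessel and optic-disc information, as in publication 20200260944, assuming a scikit-learn linear regressor and hand-picked summary features; neither the regression model nor the feature definitions are given in the abstract.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def build_features(vessel_info, disc_info):
    """Concatenate simple summaries of the vessel map and the optic disc.

    vessel_info: binary vessel mask; disc_info: (disc_x, disc_y, disc_radius).
    """
    ys, xs = np.nonzero(vessel_info)
    vessel_centroid = (xs.mean(), ys.mean()) if xs.size else (0.0, 0.0)
    return np.array([*vessel_centroid, *disc_info])

def locate_macular_region(model, vessel_info, disc_info, region_radius=60):
    """Predict the fovea location, then return the macular region around it."""
    features = build_features(vessel_info, disc_info).reshape(1, -1)
    fovea_x, fovea_y = model.predict(features)[0]
    return {"fovea": (fovea_x, fovea_y), "radius": region_radius}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fit the regressor on synthetic (features -> fovea coordinates) pairs.
    X = rng.normal(size=(50, 5))
    y = rng.normal(size=(50, 2))
    reg = LinearRegression().fit(X, y)
    vessel_mask = rng.random((128, 128)) > 0.8
    print(locate_macular_region(reg, vessel_mask, disc_info=(30.0, 64.0, 12.0)))
```
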
  • Publication number: 20190110678
    Abstract: The present disclosure generally relates to automated method and system for vision assessment of a subject. The method comprises: determining a set of test patterns for the subject based on a preliminary assessment of an eye of the subject; displaying the set of test patterns sequentially to the subject; collecting data on the subject's gaze in response to each test pattern displayed to the subject; and assessing vision functionality of the subject based on the collected gaze data.
    Type: Application
    Filed: March 31, 2017
    Publication date: April 18, 2019
    Inventors: Huiying LIU, Augustinus LAUDE, Tock Han LIM, Yanwu XU, Wing Kee Damon WONG, Jiang LIU
  • Patent number: 10145669
    Abstract: A method and system are proposed to obtain a reduced speckle noise image of a subject from optical coherence tomography (OCT) image data of the subject. The OCT image data comprises cross-sectional images, each comprising a plurality of scan lines obtained by measuring the time delay of light reflected, in a depth direction, from optical interfaces within the subject. The method comprises two aligning steps: first the cross-sectional images are aligned, then image patches of the aligned cross-sectional images are aligned to form a set of aligned patches. An image matrix is then formed from the aligned patches, and matrix completion is applied to the image matrix to obtain a reduced speckle noise image of the subject.
    Type: Grant
    Filed: June 16, 2015
    Date of Patent: December 4, 2018
    Assignees: AGENCY FOR SCIENCE, TECHNOLOGY AND RESEARCH, KABUSHIKI KAISHA TOPCON
    Inventors: Jun Cheng, Jiang Liu, Lixin Duan, Yanwu Xu, Wing Kee Damon Wong, Masahiro Akiba
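
A heavily simplified sketch of the patch-matrix idea in patent 10145669: aligned patches are stacked as columns of a matrix and a low-rank (reduced-noise) version is recovered. Singular-value soft-thresholding stands in for whatever matrix-completion solver the patent actually uses, and the two alignment steps are assumed to have already been performed.

```python
import numpy as np

def low_rank_denoise(patch_matrix, threshold):
    """Soft-threshold singular values of a (pixels x patches) matrix."""
    u, s, vt = np.linalg.svd(patch_matrix, full_matrices=False)
    s = np.maximum(s - threshold, 0.0)
    return (u * s) @ vt

def reduced_speckle_patch(aligned_patches, threshold=1.0):
    """aligned_patches: list of equally sized, already-aligned 2-D patches."""
    shape = aligned_patches[0].shape
    matrix = np.stack([p.ravel() for p in aligned_patches], axis=1)
    denoised = low_rank_denoise(matrix, threshold)
    # Average the denoised columns back into a single output patch.
    return denoised.mean(axis=1).reshape(shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.tile(np.linspace(0, 1, 32), (32, 1))            # simple structure
    patches = [clean + 0.2 * rng.normal(size=clean.shape) for _ in range(8)]
    print(np.abs(reduced_speckle_patch(patches) - clean).mean())
```
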
  • Publication number: 20170358077
    Abstract: A method and apparatus for aligning a two-dimensional eye image with a predefined axis by rotation at a rotation angle are disclosed, the method comprising deriving the rotation angle and a de-noised image which together minimise a cost function comprising (i) a complexity measure of the de-noised image and (ii) the magnitude of a noise image obtained by rotating the eye image by the rotation angle and subtracting the de-noised image.
    Type: Application
    Filed: December 23, 2015
    Publication date: December 14, 2017
    Applicants: AGENCY FOR SCIENCE, TECHNOLOGY AND RESEARCH, SINGAPORE HEALTH SERVICES PTE LTD
    Inventors: Yanwu XU, Jiang LIU, Lixin DUAN, Wing Kee Damon WONG, Tin AUNG, Baskaran MANI, Shamira PERERA
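
A coarse sketch of the alignment idea in publication 20170358077: candidate rotation angles are scored by the complexity of a de-noised estimate plus the magnitude of the residual noise image. The Gaussian smoothing, the total-variation complexity measure, and the grid search are stand-ins for the joint minimisation the abstract describes.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, rotate

def complexity(image):
    """Total variation as a simple complexity measure of the de-noised image."""
    return np.abs(np.diff(image, axis=0)).sum() + np.abs(np.diff(image, axis=1)).sum()

def alignment_cost(image, angle, weight=0.1):
    rotated = rotate(image, angle, reshape=False, order=1)
    denoised = gaussian_filter(rotated, sigma=1.0)   # stand-in de-noised image
    noise = rotated - denoised
    return complexity(denoised) + weight * np.abs(noise).sum()

def best_rotation(image, candidate_angles):
    costs = [alignment_cost(image, a) for a in candidate_angles]
    return candidate_angles[int(np.argmin(costs))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rotate(np.tile(np.linspace(0, 1, 64), (64, 1)), 7, reshape=False, order=1)
    img += 0.05 * rng.normal(size=img.shape)
    print(best_rotation(img, candidate_angles=list(range(-10, 11))))
```
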
  • Patent number: 9715640
    Abstract: A numerical parameter indicative of the degree of match between two retina images is produced by comparing two graphs obtained from the respective images. Each graph is composed of edges and vertices. Each vertex is associated with a location in the corresponding retina image, and with descriptor data describing a part of the corresponding retina image proximate the corresponding location.
    Type: Grant
    Filed: June 3, 2013
    Date of Patent: July 25, 2017
    Assignee: Agency for Science, Technology and Research
    Inventors: Yanwu Xu, Jiang Liu, Wing Kee Damon Wong, Ngan Meng Tan
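
A toy sketch of the graph comparison in patent 9715640, ignoring the edge structure for brevity: vertices carry a location and a local descriptor, and the match score counts vertex pairs whose locations and descriptors both agree within tolerances. The tolerances and the scoring rule are placeholder choices.

```python
import numpy as np

def match_score(graph_a, graph_b, loc_tol=5.0, desc_tol=0.5):
    """Each graph is a list of vertices: (location_xy, descriptor_vector).

    Returns the fraction of vertices in graph_a that have a matching vertex
    in graph_b, as a numerical degree-of-match parameter.
    """
    matched = 0
    for loc_a, desc_a in graph_a:
        for loc_b, desc_b in graph_b:
            close_in_space = np.linalg.norm(np.subtract(loc_a, loc_b)) <= loc_tol
            similar_patch = np.linalg.norm(np.subtract(desc_a, desc_b)) <= desc_tol
            if close_in_space and similar_patch:
                matched += 1
                break
    return matched / max(len(graph_a), 1)

if __name__ == "__main__":
    g1 = [((10, 10), [0.1, 0.9]), ((40, 12), [0.8, 0.2])]
    g2 = [((11, 9), [0.15, 0.85]), ((80, 70), [0.5, 0.5])]
    print(match_score(g1, g2))  # -> 0.5
```
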
  • Patent number: 9684959
    Abstract: A method is proposed for automatically locating the optic disc or the optic cup in an image of the rear of an eye. A portion of the image containing the optic disc or optic cup is divided into sub-regions using a clustering algorithm. Biologically inspired features, and optionally other features, are obtained for each of the sub-regions. An adaptive model uses the features to generate data indicative of whether each sub-region is within or outside the optic disc or optic cup. The result is then smoothed, to form an estimate of the position of the optic disc or optic cup.
    Type: Grant
    Filed: August 26, 2013
    Date of Patent: June 20, 2017
    Assignees: Agency for Science, Technology and Research, Singapore Health Services Pte Ltd
    Inventors: Jun Cheng, Jiang Liu, Yanwu Xu, Fengshou Yin, Ngan Meng Tan, Wing Kee Damon Wong, Beng Hai Lee, Xiangang Cheng, Xinting Gao, Zhuo Zhang, Tien Yin Wong, Ching-Yu Cheng, Yim-lui Carol Cheung, Baskaran Mani, Tin Aung
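
A compressed sketch of the pipeline in patent 9684959, assuming k-means clustering for the sub-regions, simple intensity and position statistics in place of the biologically inspired features, logistic regression as the adaptive model, and Gaussian smoothing of the per-sub-region decisions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def locate_optic_disc(image, classifier, n_subregions=20):
    """image: 2-D intensity portion already known to contain the optic disc."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pixels = np.column_stack([image.ravel(), xs.ravel() / w, ys.ravel() / h])
    # Divide the portion into sub-regions with a clustering algorithm.
    labels = KMeans(n_clusters=n_subregions, n_init=10).fit_predict(pixels)

    mask = np.zeros(h * w)
    for k in range(n_subregions):
        idx = labels == k
        feats = [pixels[idx, 0].mean(), pixels[idx, 0].std(),
                 pixels[idx, 1].mean(), pixels[idx, 2].mean()]
        # Adaptive model decides whether this sub-region lies inside the disc.
        inside = classifier.predict(np.array(feats).reshape(1, -1))[0]
        mask[idx] = inside
    # Smooth the per-sub-region decisions into a final position estimate.
    return gaussian_filter(mask.reshape(h, w), sigma=3) > 0.5

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clf = LogisticRegression(max_iter=500).fit(rng.normal(size=(40, 4)),
                                               np.array([0, 1] * 20))
    print(locate_optic_disc(rng.random((64, 64)), clf).sum())
```
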
  • Patent number: 9679379
    Abstract: A method is presented to obtain, from a retinal image, data characterizing the optic cup, such as data indicating the location and/or size of the optic cup in relation to the optic disc. A disc region of the retinal image of an eye is expressed as a weighted sum of a plurality of pre-existing "reference" retinal images in a library, with the weights being chosen to minimize a cost function. The data characterizing the cup of the eye is obtained from cup data associated with the pre-existing disc images and the corresponding weights. The cost function includes (i) a construction error term indicating a difference between the disc region of the retinal image and a weighted sum of the reference retinal images, and (ii) a cost term, which may be generated using a weighted sum over the reference retinal images of a difference between the reference retinal images and the disc region of the retinal image.
    Type: Grant
    Filed: January 22, 2014
    Date of Patent: June 13, 2017
    Assignee: Agency for Science, Technology and Research
    Inventors: Yanwu Xu, Jiang Liu, Jun Cheng, Fengshou Yin, Wing Kee Damon Wong
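
A small sketch of the reconstruction idea in patent 9679379, using non-negative least squares as the weight solver; only the construction-error term of the cost function is kept here, and the second, similarity-weighted cost term from the abstract is omitted for brevity.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_cup(disc_region, reference_discs, reference_cup_areas):
    """Express the disc region as a non-negative weighted sum of reference
    disc images, then transfer the same weights to the reference cup data.

    disc_region: 1-D flattened disc image; reference_discs: (n_refs, n_pixels);
    reference_cup_areas: per-reference cup measurement (e.g. cup-to-disc ratio).
    """
    # Weights minimizing the construction error || disc - R^T w ||_2.
    weights, _ = nnls(np.asarray(reference_discs).T, np.asarray(disc_region))
    if weights.sum() == 0:
        return float(np.mean(reference_cup_areas))
    weights = weights / weights.sum()
    return float(weights @ np.asarray(reference_cup_areas))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    refs = rng.random((5, 100))            # five reference disc images
    cups = np.array([0.3, 0.4, 0.5, 0.6, 0.7])
    query = 0.7 * refs[1] + 0.3 * refs[3]  # a disc resembling refs 1 and 3
    print(estimate_cup(query, refs, cups))  # close to 0.46
```
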
  • Publication number: 20170131082
    Abstract: A method and system are proposed to obtain a reduced speckle noise image of a subject from optical coherence tomography (OCT) image data of the subject. The OCT image data comprises cross-sectional images, each comprising a plurality of scan lines obtained by measuring the time delay of light reflected, in a depth direction, from optical interfaces within the subject. The method comprises two aligning steps: first the cross-sectional images are aligned, then image patches of the aligned cross-sectional images are aligned to form a set of aligned patches. An image matrix is then formed from the aligned patches, and matrix completion is applied to the image matrix to obtain a reduced speckle noise image of the subject.
    Type: Application
    Filed: June 16, 2015
    Publication date: May 11, 2017
    Inventors: Jun CHENG, Jiang LIU, Lixin DUAN, Yanwu XU, Wing Kee Damon WONG, Masahiro AKIBA
  • Patent number: 9501823
    Abstract: A method is proposed for analyzing an optical coherence tomography (OCT) image of the anterior segment (AS) of a subject's eye. A region of interest is defined, which is a region of the image containing the junction of the cornea and iris, and an estimated position of the junction within the region of interest is derived. Using this, a second region of the image is obtained, which is a part of the image containing the estimated position of the junction. Features of the second region are obtained, and those features are input to an adaptive model to generate data characterizing the junction.
    Type: Grant
    Filed: August 1, 2013
    Date of Patent: November 22, 2016
    Assignees: Agency for Science, Technology and Research, Singapore Health Services Pte Ltd
    Inventors: Yanwu Xu, Jiang Liu, Wing Kee Damon Wong, Beng Hai Lee, Tin Aung, Baskaran Mani, Shamira Perera
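
A skeletal sketch of the staged analysis in patent 9501823; the fixed region of interest, the brightest-pixel junction estimate, the feature vector, and the logistic-regression adaptive model are all stand-ins, since the abstract does not specify them.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def region_of_interest(as_oct_image, roi_size=64):
    """Crop a fixed corner region assumed to contain the cornea-iris junction."""
    return as_oct_image[-roi_size:, :roi_size]

def estimate_junction(roi):
    """Rough estimate: the brightest pixel, standing in for a real detector."""
    r, c = np.unravel_index(np.argmax(roi), roi.shape)
    return r, c

def junction_features(roi, junction, window=8):
    """Summaries of a second region around the estimated junction position."""
    r, c = junction
    local = roi[max(r - window, 0):r + window, max(c - window, 0):c + window]
    return np.array([local.mean(), local.std(), r / roi.shape[0], c / roi.shape[1]])

def characterize_junction(as_oct_image, adaptive_model):
    roi = region_of_interest(as_oct_image)
    second_region_feats = junction_features(roi, estimate_junction(roi))
    # Adaptive model turns the features into data characterizing the junction.
    return adaptive_model.predict_proba(second_region_feats.reshape(1, -1))[0, 1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = LogisticRegression(max_iter=300).fit(rng.normal(size=(30, 4)),
                                                 np.array([0, 1] * 15))
    print(characterize_junction(rng.random((256, 256)), model))
```
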