Patents by Inventor Yanwu Xu
Yanwu Xu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11735315
Abstract: Embodiments of the present disclosure disclose a method, apparatus, and device for fusing features applied to small target detection, and a storage medium, relating to the field of computer vision technology. A particular embodiment of the method for fusing features applied to small target detection comprises: acquiring feature maps output by convolutional layers in a backbone network; performing convolution on the feature maps to obtain input feature maps of feature layers, the feature layers representing resolutions of the input feature maps; and fusing, based on a densely connected feature pyramid network, the input feature maps of each feature layer to obtain output feature maps of the feature layer. Since no additional convolutional layer is introduced for feature fusion, the detection performance for small targets may be enhanced without additional parameters, and the detection ability for small targets may be improved under computing resource constraints.
Type: Grant
Filed: March 26, 2021
Date of Patent: August 22, 2023
Assignee: Beijing Baidu Netcom Science and Technology Co., Ltd.
Inventors: Binghong Wu, Yehui Yang, Yanwu Xu, Lei Wang
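The parameter-free dense fusion described above can be sketched in a few lines of numpy. This is an illustrative assumption about the mechanics, not the patented implementation: `resize` stands in for whatever up/down-sampling the network uses between layers, and plain averaging stands in for the fusion rule.

```python
import numpy as np

def resize(feat, size):
    # Nearest-neighbour resize of a square (H, W) feature map -- a stand-in
    # for the up/down-sampling applied between feature layers.
    h, w = feat.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return feat[np.ix_(rows, cols)]

def dense_fuse(inputs):
    # Each output layer averages ALL input layers (dense connections),
    # resized to its own resolution.  No new convolutions are applied,
    # so the fusion itself introduces no extra parameters.
    outputs = []
    for target in inputs:
        size = target.shape[0]
        outputs.append(np.mean([resize(f, size) for f in inputs], axis=0))
    return outputs
```

Each output map keeps the resolution of its input layer while mixing information from every scale, which is the property the abstract credits for better small-target detection under a fixed parameter budget.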
-
Publication number: 20230195839
Abstract: Technical solutions relate to fields of artificial intelligence such as deep learning, computer vision and intelligent imaging. A method may include: during training of a one-stage object detecting model, acquiring values of a loss function corresponding to feature maps at different scales respectively, whenever classification loss calculation is required, the loss function being a focal loss function; and determining a final value of the loss function according to the acquired values, and training the one-stage object detecting model according to the final value of the loss function.
Type: Application
Filed: December 20, 2021
Publication date: June 22, 2023
Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
Inventors: Binghong WU, Yehui YANG, DaLU YANG, Yanwu XU, Lei WANG, Qian LI
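A minimal sketch of the focal loss referenced above, and of combining one per-scale value into a final value. The abstract does not specify the combination rule, so averaging across scales is an assumption here.

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    # Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).
    # The (1 - p_t)**gamma factor down-weights easy, well-classified examples.
    p_t = np.where(y == 1, p, 1.0 - p)
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))

def combined_loss(per_scale_preds, per_scale_labels):
    # One focal-loss value per feature-map scale, reduced to a single
    # final training value (averaging is an assumed reduction rule).
    values = [focal_loss(p, y) for p, y in zip(per_scale_preds, per_scale_labels)]
    return float(np.mean(values))
```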
-
Patent number: 11436447
Abstract: A target detection method is provided, which relates to the fields of deep learning, computer vision, and artificial intelligence. The method comprises: classifying, by using a first classification model, a plurality of image patches comprised in an input image, to obtain one or more candidate image patches, in the plurality of image patches, that are preliminarily classified as comprising a target; extracting a corresponding salience area for each candidate image patch; constructing a corresponding target feature vector for each candidate image patch based on its salience area; and classifying, by using a second classification model, the target feature vector to determine whether each candidate image patch comprises the target.
Type: Grant
Filed: September 30, 2020
Date of Patent: September 6, 2022
Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventors: Yehui Yang, Lei Wang, Yanwu Xu
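The two-stage pipeline above reduces to a coarse filter followed by a fine verifier. A hypothetical skeleton, with the two classification models and the salience-based feature extractor injected as callables (their internals are not specified by the abstract):

```python
def detect_targets(patches, coarse_model, feature_fn, fine_model):
    # Stage 1: keep only the patches the first classifier preliminarily
    # flags as containing a target.
    candidates = [p for p in patches if coarse_model(p)]
    # Stage 2: build a feature vector per candidate (in the patent, from
    # its salience area) and let the second classifier confirm or reject.
    return [p for p in candidates if fine_model(feature_fn(p))]
```

Cheap rejection in stage 1 means the more expensive feature construction and second classification only run on a small candidate set.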
-
Patent number: 11379980
Abstract: The present application discloses an image processing method, an apparatus, an electronic device and a storage medium. A specific implementation is: acquiring an image to be processed; acquiring a grading array according to the image to be processed and a grading network model, where the grading network model is a model pre-trained according to mixed samples, the number of elements contained in the grading array is C−1, C is the number of lesion grades, the C lesion grades include one lesion grade without lesion and C−1 lesion grades with lesion, and the kth element in the grading array is the probability of the lesion grade corresponding to the image to be processed being greater than or equal to the kth lesion grade, where 1 ≤ k ≤ C−1, and k is an integer; and determining the lesion grade corresponding to the image to be processed according to the grading array.
Type: Grant
Filed: November 13, 2020
Date of Patent: July 5, 2022
Inventors: Fangxin Shang, Yehui Yang, Lei Wang, Yanwu Xu
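Since the kth element of the grading array is P(grade ≥ k), the final grade can be read off by counting how many of these cumulative probabilities clear a threshold. The 0.5 threshold is an assumption (a common ordinal-regression decoding), not taken from the patent:

```python
import numpy as np

def decode_grade(grading_array, threshold=0.5):
    # grading_array[k-1] = P(grade >= k) for k = 1 .. C-1.  With monotone
    # non-increasing probabilities, the number of entries clearing the
    # threshold is the predicted grade (0 means "no lesion").
    return int(np.sum(np.asarray(grading_array) >= threshold))
```

For example, with C = 5 an array like [0.95, 0.7, 0.2, 0.05] decodes to grade 2: the image is very likely at least grade 1, likely at least grade 2, and unlikely to reach grades 3 or 4.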
-
Patent number: 11232560
Abstract: Embodiments of the present disclosure provide a method and apparatus for processing a fundus image. The method may include: acquiring a target fundus image; dividing the target fundus image into at least two first image blocks; inputting a first image block into a pre-trained deep learning model, to obtain a first output value; and determining, based on the first output value and a threshold, whether the first image block is a fundus image block containing a predetermined type of image region.
Type: Grant
Filed: December 2, 2019
Date of Patent: January 25, 2022
Inventors: Yehui Yang, Yanwu Xu, Lei Wang, Yan Huang
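The block-splitting and thresholding steps above are simple to make concrete. A sketch under two assumptions: the image divides evenly into square tiles, and the deep-learning model is represented only by its scalar output value.

```python
import numpy as np

def split_into_blocks(image, block):
    # Divide the fundus image into non-overlapping block x block tiles
    # (assumes, for simplicity, that the image divides evenly).
    h, w = image.shape
    return [image[r:r + block, c:c + block]
            for r in range(0, h, block)
            for c in range(0, w, block)]

def flag_block(model_output, threshold):
    # A block is flagged as containing the predetermined region type
    # when the model's output value clears the threshold.
    return model_output >= threshold
```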
-
Publication number: 20210406586
Abstract: An image classification method and apparatus, and a style transfer model training method and apparatus are provided, which relate to the fields of deep learning, cloud computing and computer vision in artificial intelligence. The image classification method comprises: inputting an image of a first style into a style transfer model, to obtain an image of a second style corresponding to the image of the first style; and inputting the image of the second style into an image classification model, to obtain a classification result of the image of the second style, wherein the style transfer model is obtained through training on the basis of a sample image of the first style and a sample image of the second style; and the image classification model is obtained through training on the basis of the sample image of the second style.
Type: Application
Filed: December 31, 2020
Publication date: December 30, 2021
Inventors: Dalu YANG, Yehui YANG, Lei WANG, Yanwu XU
-
Publication number: 20210406616
Abstract: A target detection method is provided, which relates to the fields of deep learning, computer vision, and artificial intelligence. The method comprises: classifying, by using a first classification model, a plurality of image patches comprised in an input image, to obtain one or more candidate image patches, in the plurality of image patches, that are preliminarily classified as comprising a target; extracting a corresponding salience area for each candidate image patch; constructing a corresponding target feature vector for each candidate image patch based on its salience area; and classifying, by using a second classification model, the target feature vector to determine whether each candidate image patch comprises the target.
Type: Application
Filed: September 30, 2020
Publication date: December 30, 2021
Inventors: Yehui YANG, Lei WANG, Yanwu XU
-
Patent number: 11116393
Abstract: The present disclosure generally relates to an automated method and system for vision assessment of a subject. The method comprises: determining a set of test patterns for the subject based on a preliminary assessment of an eye of the subject; displaying the set of test patterns sequentially to the subject; collecting data on the subject's gaze in response to each test pattern displayed to the subject; and assessing vision functionality of the subject based on the collected gaze data.
Type: Grant
Filed: March 31, 2017
Date of Patent: September 14, 2021
Assignees: AGENCY FOR SCIENCE, TECHNOLOGY AND RESEARCH, TAN TOCK SENG HOSPITAL PTE LTD
Inventors: Huiying Liu, Augustinus Laude, Tock Han Lim, Yanwu Xu, Wing Kee Damon Wong, Jiang Liu
-
Publication number: 20210224581
Abstract: Embodiments of the present disclosure disclose a method, apparatus, and device for fusing features applied to small target detection, and a storage medium, relating to the field of computer vision technology. A particular embodiment of the method for fusing features applied to small target detection comprises: acquiring feature maps output by convolutional layers in a backbone network; performing convolution on the feature maps to obtain input feature maps of feature layers, the feature layers representing resolutions of the input feature maps; and fusing, based on a densely connected feature pyramid network, the input feature maps of each feature layer to obtain output feature maps of the feature layer. Since no additional convolutional layer is introduced for feature fusion, the detection performance for small targets may be enhanced without additional parameters, and the detection ability for small targets may be improved under computing resource constraints.
Type: Application
Filed: March 26, 2021
Publication date: July 22, 2021
Applicant: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventors: Binghong Wu, Yehui Yang, Yanwu Xu, Lei Wang
-
Publication number: 20210192728
Abstract: The present application discloses an image processing method, an apparatus, an electronic device and a storage medium. A specific implementation is: acquiring an image to be processed; acquiring a grading array according to the image to be processed and a grading network model, where the grading network model is a model pre-trained according to mixed samples, the number of elements contained in the grading array is C−1, C is the number of lesion grades, the C lesion grades include one lesion grade without lesion and C−1 lesion grades with lesion, and the kth element in the grading array is the probability of the lesion grade corresponding to the image to be processed being greater than or equal to the kth lesion grade, where 1 ≤ k ≤ C−1, and k is an integer; and determining the lesion grade corresponding to the image to be processed according to the grading array.
Type: Application
Filed: November 13, 2020
Publication date: June 24, 2021
Inventors: Fangxin SHANG, Yehui YANG, Lei WANG, Yanwu XU
-
Publication number: 20200320686
Abstract: Embodiments of the present disclosure provide a method and apparatus for processing a fundus image. The method may include: acquiring a target fundus image; dividing the target fundus image into at least two first image blocks; inputting a first image block into a pre-trained deep learning model, to obtain a first output value; and determining, based on the first output value and a threshold, whether the first image block is a fundus image block containing a predetermined type of image region.
Type: Application
Filed: December 2, 2019
Publication date: October 8, 2020
Inventors: Yehui Yang, Yanwu Xu, Lei Wang, Yan Huang
-
Publication number: 20200260944
Abstract: A method and a device for recognizing a macular region and a computer-readable storage medium are provided. The method includes: obtaining a fundus image of a target object; extracting blood vessel information and optic disc information from the fundus image; inputting the blood vessel information and the optic disc information into a regression model to obtain location information of the macular fovea; and determining location information of the macular region of an eye of the target object based on the location information of the macular fovea. The embodiments of the application address the problem that the macular region cannot be accurately recognized when the image quality of the macular region is impaired.
Type: Application
Filed: November 27, 2019
Publication date: August 20, 2020
Applicant: Baidu Online Network Technology (Beijing) Co., Ltd.
Inventors: Qinpei SUN, Yehui YANG, Lei WANG, Yanwu XU, Yan HUANG
-
Publication number: 20190110678
Abstract: The present disclosure generally relates to an automated method and system for vision assessment of a subject. The method comprises: determining a set of test patterns for the subject based on a preliminary assessment of an eye of the subject; displaying the set of test patterns sequentially to the subject; collecting data on the subject's gaze in response to each test pattern displayed to the subject; and assessing vision functionality of the subject based on the collected gaze data.
Type: Application
Filed: March 31, 2017
Publication date: April 18, 2019
Inventors: Huiying LIU, Augustinus LAUDE, Tock Han LIM, Yanwu XU, Wing Kee Damon WONG, Jiang LIU
-
Patent number: 10145669
Abstract: A method and system are proposed to obtain a reduced speckle noise image of a subject from optical coherence tomography (OCT) image data of the subject. The OCT image data comprises cross-sectional images, each comprising a plurality of scan lines obtained by measuring the time delay of light reflected, in a depth direction, from optical interfaces within the subject. The method comprises two aligning steps: first the cross-sectional images are aligned, then image patches of the aligned cross-sectional images are aligned to form a set of aligned patches. An image matrix is then formed from the aligned patches, and matrix completion is applied to the image matrix to obtain a reduced speckle noise image of the subject.
Type: Grant
Filed: June 16, 2015
Date of Patent: December 4, 2018
Assignees: AGENCY FOR SCIENCE, TECHNOLOGY AND RESEARCH, KABUSHIKI KAISHA TOPCON
Inventors: Jun Cheng, Jiang Liu, Lixin Duan, Yanwu Xu, Wing Kee Damon Wong, Masahiro Akiba
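The key idea is that aligned patches of the same anatomy are highly redundant, so the stacked patch matrix is close to low rank while speckle is not. As a sketch, truncated SVD is used here as a stand-in for the patent's matrix-completion step (which additionally handles missing or unreliable entries):

```python
import numpy as np

def low_rank_denoise(patch_matrix, rank):
    # patch_matrix stacks the aligned patches, one flattened patch per
    # column.  Speckle is modelled as whatever falls outside the top-`rank`
    # SVD components; keeping only those components suppresses it.
    U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]
```

A noise-free stack of identical patches is exactly rank 1, so it passes through unchanged; noisy stacks are projected onto the nearest low-rank matrix.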
-
Publication number: 20170358077
Abstract: A method and apparatus for aligning a two-dimensional eye image with a predefined axis by rotation at a rotation angle are disclosed, the method comprising deriving the rotation angle and a de-noised image which together minimise a cost function comprising (i) a complexity measure of the de-noised image and (ii) the magnitude of a noise image obtained by rotating the first image by the rotation angle and subtracting the de-noised image.
Type: Application
Filed: December 23, 2015
Publication date: December 14, 2017
Applicants: AGENCY FOR SCIENCE, TECHNOLOGY AND RESEARCH, SINGAPORE HEALTH SERVICES PTE LTD
Inventors: Yanwu XU, Jiang LIU, Lixin DUAN, Wing Kee Damon WONG, Tin AUNG, Baskaran MANI, Shamira PERERA
-
Patent number: 9715640
Abstract: A numerical parameter indicative of the degree of match between two retina images is produced by comparing two graphs obtained from the respective images. Each graph is composed of edges and vertices. Each vertex is associated with a location in the corresponding retina image, and with descriptor data describing a part of the corresponding retina image proximate the corresponding location.
Type: Grant
Filed: June 3, 2013
Date of Patent: July 25, 2017
Assignee: Agency for Science, Technology and Research
Inventors: Yanwu Xu, Jiang Liu, Wing Kee Damon Wong, Ngan Meng Tan
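One simple way to turn the vertex descriptors into a single match number is nearest-neighbour matching in descriptor space. This is a crude proxy, assumed for illustration: full graph matching as described above would also check edge structure and location consistency.

```python
import numpy as np

def match_score(desc_a, desc_b, max_dist=0.5):
    # Fraction of vertices in graph A whose descriptor has a sufficiently
    # close counterpart among graph B's descriptors.  1.0 = every vertex
    # of A found a match; 0.0 = none did.
    desc_a = np.asarray(desc_a, dtype=float)
    desc_b = np.asarray(desc_b, dtype=float)
    hits = sum(np.linalg.norm(desc_b - d, axis=1).min() <= max_dist
               for d in desc_a)
    return hits / len(desc_a)
```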
-
Patent number: 9684959
Abstract: A method is proposed for automatically locating the optic disc or the optic cup in an image of the rear of an eye. A portion of the image containing the optic disc or optic cup is divided into sub-regions using a clustering algorithm. Biologically inspired features, and optionally other features, are obtained for each of the sub-regions. An adaptive model uses the features to generate data indicative of whether each sub-region is within or outside the optic disc or optic cup. The result is then smoothed, to form an estimate of the position of the optic disc or optic cup.
Type: Grant
Filed: August 26, 2013
Date of Patent: June 20, 2017
Assignees: Agency for Science, Technology and Research, Singapore Health Services Pte Ltd
Inventors: Jun Cheng, Jiang Liu, Yanwu Xu, Fengshou Yin, Ngan Meng Tan, Wing Kee Damon Wong, Beng Hai Lee, Xiangang Cheng, Xinting Gao, Zhuo Zhang, Tien Yin Wong, Ching-Yu Cheng, Yim-lui Carol Cheung, Baskaran Mani, Tin Aung
-
Patent number: 9679379
Abstract: A method is presented to obtain, from a retinal image, data characterizing the optic cup, such as data indicating the location and/or size of the optic cup in relation to the optic disc. A disc region of the retinal image of an eye is expressed as a weighted sum of a plurality of pre-existing "reference" retinal images in a library, with the weights being chosen to minimize a cost function. The data characterizing the cup of the eye is obtained from cup data associated with the pre-existing disc images and the corresponding weights. The cost function includes (i) a construction error term indicating a difference between the disc region of the retinal image and a weighted sum of the reference retinal images, and (ii) a cost term, which may be generated using a weighted sum over the reference retinal images of a difference between the reference retinal images and the disc region of the retinal image.
Type: Grant
Filed: January 22, 2014
Date of Patent: June 13, 2017
Assignee: Agency for Science, Technology and Research
Inventors: Yanwu Xu, Jiang Liu, Jun Cheng, Fengshou Yin, Wing Kee Damon Wong
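The reconstruction-and-transfer idea can be sketched with ridge-regularised least squares as a stand-in for the patent's two-term cost function: solve for the weights that best reconstruct the disc region from the reference library, then apply the same weights to the references' cup data. The regulariser and the closed-form solve are assumptions for illustration.

```python
import numpy as np

def estimate_cup(disc, ref_discs, ref_cups, lam=1e-6):
    # Find weights w minimising ||disc - ref_discs @ w||^2 + lam * ||w||^2
    # (ridge stand-in for the patented cost), then read off the cup
    # estimate as the same weighted combination of reference cup data.
    A = np.asarray(ref_discs, dtype=float)   # one flattened reference per column
    d = np.asarray(disc, dtype=float)
    w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ d)
    return np.asarray(ref_cups, dtype=float) @ w
```

If the query disc exactly matches one reference, the weights concentrate on that reference and the estimate reproduces its cup parameter.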
-
Publication number: 20170131082
Abstract: A method and system are proposed to obtain a reduced speckle noise image of a subject from optical coherence tomography (OCT) image data of the subject. The OCT image data comprises cross-sectional images, each comprising a plurality of scan lines obtained by measuring the time delay of light reflected, in a depth direction, from optical interfaces within the subject. The method comprises two aligning steps: first the cross-sectional images are aligned, then image patches of the aligned cross-sectional images are aligned to form a set of aligned patches. An image matrix is then formed from the aligned patches, and matrix completion is applied to the image matrix to obtain a reduced speckle noise image of the subject.
Type: Application
Filed: June 16, 2015
Publication date: May 11, 2017
Inventors: Jun CHENG, Jiang LIU, Lixin DUAN, Yanwu XU, Wing Kee Damon WONG, Masahiro AKIBA
-
Patent number: 9501823
Abstract: A method is proposed for analyzing an optical coherence tomography (OCT) image of the anterior segment (AS) of a subject's eye. A region of interest is defined which is a region of the image containing the junction of the cornea and iris, and an estimated position of the junction within the region of interest is derived. Using this, a second region of the image is obtained, which is a part of the image containing the estimated position of the junction. Features of the second region are obtained, and those features are input to an adaptive model to generate data characterizing the junction.
Type: Grant
Filed: August 1, 2013
Date of Patent: November 22, 2016
Assignees: Agency for Science, Technology and Research, Singapore Health Services Pte Ltd
Inventors: Yanwu Xu, Jiang Liu, Wing Kee Damon Wong, Beng Hai Lee, Tin Aung, Baskaran Mani, Shamira Perera