Patents by Inventor Chenxia Li
Chenxia Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240070454
Abstract: Provided is a lightweight model training method, an image processing method, a device and a medium. The lightweight model training method includes: acquiring first and second augmentation probabilities and a target weight adopted in an e-th iteration; performing data augmentation on a data set based on the first and second augmentation probabilities respectively, to obtain first and second data sets; obtaining a first output value of a student model and a second output value of a teacher model based on the first data set; obtaining a third output value and a fourth output value based on the second data set; determining a distillation loss function, a truth-value loss function and a target loss function; training the student model based on the target loss function; and determining a first augmentation probability or target weight to be adopted in an (e+1)-th iteration in a case where e is less than E.
Type: Application
Filed: February 13, 2023
Publication date: February 29, 2024
Inventors: Ruoyu GUO, Yuning DU, Chenxia LI, Baohua LAI, Yanjun MA
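The abstract's core training step combines a truth-value loss and a distillation loss into one target loss via the target weight. The sketch below is illustrative only, not the patented method: the additive-noise augmentation, the KL form of the distillation loss, and the linear weighting are assumptions chosen to make the idea concrete.

```python
import numpy as np

def augment(batch, prob, rng):
    # Apply a simple augmentation (additive noise) to each sample with the given probability.
    mask = rng.random(len(batch)) < prob
    noisy = batch + rng.normal(0.0, 0.1, batch.shape)
    return np.where(mask[:, None], noisy, batch)

def kl_distill_loss(student_logits, teacher_logits):
    # One common distillation loss: KL divergence from teacher to student softmax outputs.
    def softmax(x):
        e = np.exp(x - x.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)
    p, q = softmax(teacher_logits), softmax(student_logits)
    return float(np.mean(np.sum(p * (np.log(p + 1e-9) - np.log(q + 1e-9)), axis=1)))

def target_loss(truth_loss, distill_loss, weight):
    # Target loss as a weighted blend of the truth-value and distillation losses.
    return (1.0 - weight) * truth_loss + weight * distill_loss
```

Per the abstract, the augmentation probability and the target weight are themselves updated between iterations e and e+1; that outer schedule is omitted here.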
-
Publication number: 20230206668
Abstract: The present disclosure provides a vision processing and model training method, device, storage medium and program product. A specific implementation solution is as follows: establishing an image classification network with the same backbone network as the vision model, and performing self-supervised training on the image classification network by using an unlabeled first data set; initializing the weight of the backbone network of the vision model according to the weight of the backbone network of the trained image classification network to obtain a pre-training model, the structure of the pre-training model being consistent with that of the vision model, and optimizing the weight of the backbone network by using a real data set in the current computer vision task scenario, so as to make it more suitable for that task; and then training the pre-training model by using a labeled second data set to obtain a trained vision model.
Type: Application
Filed: February 17, 2023
Publication date: June 29, 2023
Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
Inventors: Ruoyu GUO, Yuning DU, Chenxia LI, Qiwen LIU, Baohua LAI, Yanjun MA, Dianhai YU
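The weight-transfer step in this abstract (initialize the vision model's backbone from the pretrained classification network, keep the task head's fresh initialization) can be sketched on flat parameter dictionaries. The `backbone.` key prefix is a hypothetical naming convention, not taken from the patent.

```python
def init_from_pretrained(vision_weights, classifier_weights, backbone_prefix="backbone."):
    # Copy over every weight the two networks share under the backbone prefix;
    # head parameters keep their original (fresh) initialization.
    initialized = dict(vision_weights)
    for name, w in classifier_weights.items():
        if name.startswith(backbone_prefix) and name in initialized:
            initialized[name] = w
    return initialized
```

After this initialization, the abstract's remaining steps (optimizing the backbone on real task data, then supervised training on the labeled set) proceed on the returned weights.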
-
Publication number: 20220343662
Abstract: The present disclosure provides a method and apparatus for recognizing a text, a device and a storage medium, and relates to the field of deep learning technology. A specific implementation comprises: receiving a target image; performing a text detection on the target image using a pre-trained lightweight text detection network, to obtain a text detection box; and recognizing a text in the text detection box using a pre-trained lightweight text recognition network, to obtain a text recognition result.
Type: Application
Filed: July 11, 2022
Publication date: October 27, 2022
Inventors: Yuning DU, Yehua YANG, Chenxia LI, Qiwen LIU, Xiaoguang HU, Dianhai YU, Yanjun MA, Ran BI
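The two-stage pipeline this abstract describes (detect boxes first, then recognize the text inside each box) can be sketched generically. The `detector` and `recognizer` callables stand in for the pre-trained lightweight networks and are assumptions of this sketch; the cropping uses simple axis-aligned boxes for illustration.

```python
def ocr_pipeline(image, detector, recognizer):
    # Stage 1: the lightweight detection network proposes text boxes on the whole image.
    boxes = detector(image)
    # Stage 2: the lightweight recognition network reads the text inside each box.
    results = []
    for (x0, y0, x1, y1) in boxes:
        crop = [row[x0:x1] for row in image[y0:y1]]
        results.append(((x0, y0, x1, y1), recognizer(crop)))
    return results
```

Running detection once on the full image and recognition once per box is what keeps the two networks individually small and "lightweight".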
-
Patent number: 11403766
Abstract: Embodiments of the present disclosure provide a method and device for labelling a point of interest, a computer device, and a storage medium. The method includes the following. Image data to be labelled is obtained. The image data includes an image to be labelled and a collection location of the image to be labelled. Feature extraction is performed on the image to be labelled to obtain a first image feature of the image to be labelled. A first reference image corresponding to the image to be labelled is determined based on a similarity between the first image feature and a second image feature corresponding to each reference image in an image library. The point of interest of the image to be labelled is labelled based on a category of the first reference image and the collection location of the image to be labelled.
Type: Grant
Filed: July 10, 2020
Date of Patent: August 2, 2022
Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventors: Kai Wei, Yuning Du, Chenxia Li, Guoyi Liu
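The retrieval step here is a nearest-neighbour search over precomputed image features, after which the point of interest is labelled with the matched reference's category plus the query image's collection location. A minimal sketch, assuming cosine similarity as the similarity measure (the patent does not specify one):

```python
import numpy as np

def label_poi(query_feature, reference_features, reference_categories, collection_location):
    # Find the reference image whose feature is most similar (cosine) to the query feature.
    refs = np.asarray(reference_features, dtype=float)
    q = np.asarray(query_feature, dtype=float)
    sims = refs @ q / (np.linalg.norm(refs, axis=1) * np.linalg.norm(q) + 1e-9)
    best = int(np.argmax(sims))
    # Label the point of interest with the matched category and the collection location.
    return {"category": reference_categories[best], "location": collection_location}
```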
-
Publication number: 20220129731
Abstract: The present disclosure provides a method and apparatus for training an image recognition model, and a method and apparatus for recognizing an image, and relates to the field of artificial intelligence, particularly the fields of deep learning and computer vision. A specific implementation comprises: acquiring a tagged sample set, an untagged sample set and a knowledge distillation network; and performing the following training steps: selecting an input sample from the tagged sample set and the untagged sample set, and accumulating a number of iterations; inputting the input sample into a student network and a teacher network of the knowledge distillation network respectively, to train the student network and the teacher network; and selecting an image recognition model from the student network and the teacher network if a training completion condition is satisfied.
Type: Application
Filed: January 4, 2022
Publication date: April 28, 2022
Inventors: Ruoyu GUO, Yuning DU, Chenxia LI, Tingquan GAO, Qiao ZHAO, Qiwen LIU, Ran BI, Xiaoguang Hu, Dianhai YU, Yanjun MA
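What distinguishes this scheme from plain distillation is the mix of tagged and untagged samples: tagged samples can contribute a supervised loss, untagged ones only a distillation term. The sketch below is a hypothetical reading of that split; the EMA teacher update is a common technique in such setups, not something the abstract states.

```python
def ema_update(teacher_params, student_params, momentum=0.99):
    # One common way to maintain the teacher network: an exponential moving
    # average of the student's parameters (an assumption of this sketch).
    return {k: momentum * teacher_params[k] + (1.0 - momentum) * student_params[k]
            for k in teacher_params}

def iteration_loss(is_tagged, supervised_loss, distill_loss):
    # Tagged samples contribute both terms; untagged samples only distillation.
    return supervised_loss + distill_loss if is_tagged else distill_loss
```

On completion, either network can be kept as the final image recognition model, as the abstract notes.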
-
Publication number: 20210090266
Abstract: Embodiments of the present disclosure provide a method and device for labelling a point of interest, a computer device, and a storage medium. The method includes the following. Image data to be labelled is obtained. The image data includes an image to be labelled and a collection location of the image to be labelled. Feature extraction is performed on the image to be labelled to obtain a first image feature of the image to be labelled. A first reference image corresponding to the image to be labelled is determined based on a similarity between the first image feature and a second image feature corresponding to each reference image in an image library. The point of interest of the image to be labelled is labelled based on a category of the first reference image and the collection location of the image to be labelled.
Type: Application
Filed: July 10, 2020
Publication date: March 25, 2021
Inventors: Kai WEI, Yuning DU, Chenxia LI, Guoyi LIU
-
Patent number: 10762345
Abstract: The present disclosure provides a method for acquiring text data of a trademark image, a computer device and a non-transitory computer readable storage medium. The method includes the following. A trademark database including one or more mappings among trademark feature information, trademark description information and trademark text information is established. A to-be-processed image including image description information is acquired. Trademark feature information corresponding to the to-be-processed image is determined. The trademark text information corresponding to the trademark feature information is determined as the text data of the trademark image corresponding to the to-be-processed image, according to the one or more mappings in the trademark database, when the trademark description information corresponding to that trademark feature information is contained in the image description information of the to-be-processed image.
Type: Grant
Filed: July 11, 2018
Date of Patent: September 1, 2020
Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
Inventors: Shufu Xie, Yuning Du, Guang Li, Shanshan Liu, Chenxia Li
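The lookup logic in this abstract is a guarded mapping: the stored trademark text is returned only when the stored trademark description appears inside the image's own description. A minimal sketch, with the database modelled as a plain dictionary (the keying by a feature string is an assumption of this sketch):

```python
def trademark_text(trademark_db, feature_key, image_description):
    # trademark_db maps feature info -> (description info, text info). The text
    # is returned only when the stored trademark description is contained in
    # the image description of the to-be-processed image.
    entry = trademark_db.get(feature_key)
    if entry is None:
        return None
    description, text = entry
    return text if description in image_description else None
```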
-
Publication number: 20190065841
Abstract: The present disclosure provides a method for acquiring text data of a trademark image, a computer device and a non-transitory computer readable storage medium. The method includes the following. A trademark database including one or more mappings among trademark feature information, trademark description information and trademark text information is established. A to-be-processed image including image description information is acquired. Trademark feature information corresponding to the to-be-processed image is determined. The trademark text information corresponding to the trademark feature information is determined as the text data of the trademark image corresponding to the to-be-processed image, according to the one or more mappings in the trademark database, when the trademark description information corresponding to that trademark feature information is contained in the image description information of the to-be-processed image.
Type: Application
Filed: July 11, 2018
Publication date: February 28, 2019
Inventors: Shufu XIE, Yuning DU, Guang LI, Shanshan LIU, Chenxia LI
-
Patent number: 9838663
Abstract: A virtual viewpoint synthesis method and system, including: establishing a left viewpoint virtual view and a right viewpoint virtual view; searching for a candidate pixel in a reference view, and marking a pixel block in which the candidate pixel is not found as a hole point; ranking the found candidate pixels according to depth, and successively calculating a foreground coefficient and a background coefficient for performing weighted summation; enlarging the hole-point regions of the left viewpoint virtual view and/or the right viewpoint virtual view in the direction of the background to remove a ghost pixel; performing viewpoint synthesis on the left viewpoint virtual view and the right viewpoint virtual view; and filling the hole-points of a composite image.
Type: Grant
Filed: January 29, 2016
Date of Patent: December 5, 2017
Assignee: PEKING UNIVERSITY SHENZHEN GRADUATE SCHOOL
Inventors: Chenxia Li, Ronggang Wang, Wen Gao
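The per-pixel step here is: collect candidate pixels from the reference view, rank them by depth, and blend them with foreground/background coefficients; a pixel with no candidates becomes a hole point. The sketch below illustrates only that step, and the geometric rank-decay weighting is a stand-in assumption — the patent derives its own per-pixel coefficients.

```python
def blend_candidates(candidates):
    # candidates: list of (depth, colour) values found in the reference view
    # for one target pixel; an empty list means the pixel is a hole point.
    if not candidates:
        return None  # hole point: to be filled in a later stage
    ranked = sorted(candidates, key=lambda c: c[0])  # nearest (foreground) first
    total, weight_sum = 0.0, 0.0
    for rank, (depth, colour) in enumerate(ranked):
        w = 0.5 ** rank  # illustrative weight: foreground dominates, background fills in
        total += w * colour
        weight_sum += w
    return total / weight_sum
```

The later stages in the abstract (enlarging hole regions toward the background to remove ghost pixels, merging the left and right virtual views, and filling remaining holes) operate on the image produced by this blending.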
-
Publication number: 20160150208
Abstract: A virtual viewpoint synthesis method and system, including: establishing a left viewpoint virtual view and a right viewpoint virtual view; searching for a candidate pixel in a reference view, and marking a pixel block in which the candidate pixel is not found as a hole point; ranking the found candidate pixels according to depth, and successively calculating a foreground coefficient and a background coefficient for performing weighted summation; enlarging the hole-point regions of the left viewpoint virtual view and/or the right viewpoint virtual view in the direction of the background to remove a ghost pixel; performing viewpoint synthesis on the left viewpoint virtual view and the right viewpoint virtual view; and filling the hole-points of a composite image.
Type: Application
Filed: January 29, 2016
Publication date: May 26, 2016
Inventors: Chenxia LI, Ronggang WANG, Wen GAO
-
Patent number: 9157811
Abstract: A dispersion and loss spectrum auto-correction distributed optical fiber Raman temperature sensor has a dual fiber pulsed laser module with dual Raman wavelength shifts. The laser module is composed of a power supply (11), an electronic switch (12), a primary laser (13) and a secondary laser (14), a first combiner (15), a bidirectional coupler (16), a multimode fiber (17), an integrated optical fiber wavelength division multiplexer (18), a second combiner (19), a direct detection system (20), a signal collection and processing system (21) and a display (22). The sensor uses two light sources that have two Raman wavelength shifts, wherein the central wavelength of the backward anti-Stokes Raman scattering peak of the primary light source coincides with the backward Stokes scattering peak center wavelength of the secondary light source, and the time domain reflection signal of the one-way optical fiber Rayleigh scattering is deducted.
Type: Grant
Filed: August 20, 2010
Date of Patent: October 13, 2015
Assignee: CHINA JILIANG UNIVERSITY
Inventors: Zaixuan Zhang, Chenxia Li, Jianfeng Wang, Xiangdong Yu, Wensheng Zhang, Wenping Zhang, Xiaohui Niu
-
Patent number: 8785859
Abstract: A distributed optical fiber sensor based on Raman and Brillouin scattering is provided. The distributed optical fiber sensor includes a semiconductor FP cavity pulsed wideband optical fiber laser (11), a semiconductor external-cavity continuous narrowband optical fiber laser (12), a wave separator (13), an electro-optic modulator (14), an isolator (15), an Er-doped optical fiber amplifier (16), a bidirectional coupler (17), an integrated wavelength division multiplexer (19), a first photoelectric receiving and amplifying module (20), a second photoelectric receiving and amplifying module (21), a direct detection system (22), a narrowband optical fiber transmission grating (23), a circulator (24) and a coherence detection module (25). The temperature and the strain can be measured simultaneously, and the signal-to-noise ratio of the system is enhanced.
Type: Grant
Filed: August 20, 2010
Date of Patent: July 22, 2014
Assignee: China Jiliang University
Inventors: Zaixuan Zhang, Chenxia Li, Shangzhong Jin, Jianfeng Wang, Huaping Gong, Yi Li
-
Publication number: 20130028289
Abstract: A dispersion and loss spectrum auto-correction distributed optical fiber Raman temperature sensor has a dual fiber pulsed laser module with dual Raman wavelength shifts. The laser module is composed of a power supply (11), an electronic switch (12), a primary laser (13) and a secondary laser (14), a first combiner (15), a bidirectional coupler (16), a multimode fiber (17), an integrated optical fiber wavelength division multiplexer (18), a second combiner (19), a direct detection system (20), a signal collection and processing system (21) and a display (22). The sensor uses two light sources that have two Raman wavelength shifts, wherein the central wavelength of the backward anti-Stokes Raman scattering peak of the primary light source coincides with the backward Stokes scattering peak centre wavelength of the secondary light source, and the time domain reflection signal of the one-way optical fiber Rayleigh scattering is deducted.
Type: Application
Filed: August 20, 2010
Publication date: January 31, 2013
Inventors: Zaixuan Zhang, Chenxia Li, Jianfeng Wang, Xiangdong Yu, Wensheng Zhang, Wenping Zhang, Xiaohui Niu
-
Publication number: 20130020486
Abstract: A distributed optical fiber sensor based on Raman and Brillouin scattering is provided. The distributed optical fiber sensor includes a semiconductor FP cavity pulsed wideband optical fiber laser (11), a semiconductor external-cavity continuous narrowband optical fiber laser (12), a wave separator (13), an electro-optic modulator (14), an isolator (15), an Er-doped optical fiber amplifier (16), a bidirectional coupler (17), an integrated wavelength division multiplexer (19), a first photoelectric receiving and amplifying module (20), a second photoelectric receiving and amplifying module (21), a direct detection system (22), a narrowband optical fiber transmission grating (23), a circulator (24) and a coherence detection module (25). The temperature and the strain can be measured simultaneously, and the signal-to-noise ratio of the system is enhanced.
Type: Application
Filed: August 20, 2010
Publication date: January 24, 2013
Applicant: China Jiliang University
Inventors: Zaixuan Zhang, Chenxia Li, ShangZhong Jin, Jianfeng Wang, Huaping Gong, Yi Li