Patents by Inventor Qingfeng Liu

Qingfeng Liu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240127589
    Abstract: A system and a method are disclosed for processing and combining feature maps using a hardware-friendly multi-kernel convolution block (HFMCB). The method includes splitting an input feature map into a plurality of feature maps, each of the plurality of feature maps having a reduced number of channels; processing each of the plurality of feature maps with a different series of kernels; and combining the processed plurality of feature maps.
    Type: Application
    Filed: May 19, 2023
    Publication date: April 18, 2024
    Inventors: Qingfeng LIU, Mostafa EL-KHAMY, Sukhwan LIM
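The split-process-combine flow described in the abstract above can be sketched as follows. This is an illustrative sketch only, not the patented implementation: the 1x1 (channel-mixing) kernels and the combination by channel concatenation are assumptions, since the abstract does not fix the kernel type or the combining operation.

```python
import numpy as np

def hfmcb(feature_map, kernels):
    """Sketch of a multi-kernel convolution block: split the input feature
    map along the channel axis so each split has a reduced number of
    channels, process each split with a different kernel, and combine
    the processed maps.

    feature_map: array of shape (C, H, W).
    kernels: list of (C_out_i, C // len(kernels)) weight matrices, each
             standing in for one "series of kernels" (1x1 convs here,
             an assumption for simplicity).
    """
    splits = np.split(feature_map, len(kernels), axis=0)
    processed = [
        np.tensordot(k, s, axes=([1], [0]))  # 1x1 conv = channel mixing
        for k, s in zip(kernels, splits)
    ]
    return np.concatenate(processed, axis=0)  # combine (assumed: concat)
```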
  • Patent number: 11951575
    Abstract: Disclosed are an automatic welding system and method for large structural parts based on hybrid robots and 3D vision. The system comprises a hybrid robot system composed of a mobile robot and an MDOF robot, a 3D vision system, and a welding system used for welding. The rough positioning technique based on a mobile platform and the accurate recognition and positioning technique based on high-accuracy 3D vision are combined, so the working range of the MDOF robot in the XYZ directions is expanded, and flexible welding of large structural parts is realized. Because the invention adopts 3D vision, it has better error tolerance and lower requirements for the machining accuracy of workpieces, the positioning accuracy of mobile robots, and the placement accuracy of workpieces; as a result, cost is reduced, flexibility is improved, the working range is expanded, labor is saved, and both production efficiency and welding quality are improved.
    Type: Grant
    Filed: July 15, 2022
    Date of Patent: April 9, 2024
    Inventors: Tao Yang, HuanHuan Li, Lei Peng, JunWei Jiang, Li Ma, QingFeng Liu, NiNi Zhang, Fang Wang
  • Publication number: 20240112823
    Abstract: Disclosed are a method and device for evaluating damage caused by secondary stress to a vacuum vessel, a terminal device, and a medium, to perform the following steps: obtaining secondary stress of a vacuum vessel that passes a primary-stress failure evaluation; obtaining structural damage parameters of the vacuum vessel when determining, based on evaluation parameters for the primary-stress failure evaluation of the vacuum vessel and the obtained secondary stress, that the vacuum vessel meets a precondition for a progressive deformation; and determining, based on the obtained structural damage parameters, whether the vacuum vessel meeting the precondition for the progressive deformation experiences structural damage due to the progressive deformation. In this way, a vacuum vessel of a nuclear fusion reactor can be evaluated based on damage caused by the secondary stress.
    Type: Application
    Filed: September 23, 2023
    Publication date: April 4, 2024
    Inventors: Shijun Qin, Jinxing Zheng, Yuntao Song, Kun Lu, Zhihong Liu, Qingfeng Wang, Chengfeng Lin
  • Publication number: 20240079397
    Abstract: The present disclosure provides an LED display module and an LED display device. The LED display module includes a display module body and a lock. The lock includes: a lock seat arranged on a rear side of the display module body; a lock cylinder rotatably arranged in the lock seat; a locking member; an interference member rotatably sleeved on the lock cylinder and spaced apart from the locking member along the axial direction of the lock cylinder, the interference member having an interference position, where the interference member is located on a front side of a connecting beam for lifting up the display module body, and an avoidance position, where the interference member avoids the connecting beam; and an elastic linkage member arranged between the locking member and the interference member. When the locking member is in an unlocking position, the interference member is driven to the interference position by the elastic linkage member.
    Type: Application
    Filed: August 22, 2023
    Publication date: March 7, 2024
    Inventors: Qingfeng LI, Ming LIU, Xuechao SUN
  • Patent number: 11908960
    Abstract: A method of making a plasmonic metal/graphene heterostructure comprises heating an organometallic complex precursor comprising a metal at a first temperature T1 for a first period of time t1 to deposit a layer of the metal on a surface of a heated substrate, the heated substrate in fluid communication with the precursor; and heating, in situ, the precursor at a second temperature T2 for a second period of time t2 to simultaneously form on the layer of the metal, a monolayer of graphene and a plurality of carbon-encapsulated metal nanostructures comprising the metal, thereby providing the plasmonic metal/graphene heterostructure. The heated substrate is characterized by a third temperature T3. The plasmonic metal/graphene heterostructures, devices incorporating the heterostructures, and methods of using the heterostructures are also provided.
    Type: Grant
    Filed: July 3, 2019
    Date of Patent: February 20, 2024
    Assignee: University of Kansas
    Inventors: Judy Z. Wu, Qingfeng Liu
  • Patent number: 11847826
    Abstract: A method for computing a dominant class of a scene includes: receiving an input image of a scene; generating a segmentation map of the input image, the segmentation map being labeled with a plurality of corresponding classes of a plurality of classes; computing a plurality of area ratios based on the segmentation map, each of the area ratios corresponding to a different class of the plurality of classes of the segmentation map; and outputting a detected dominant class of the scene based on a plurality of ranked labels based on the area ratios.
    Type: Grant
    Filed: December 16, 2022
    Date of Patent: December 19, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Qingfeng Liu, Mostafa El-Khamy, Rama Mythili Vadali, Tae-ui Kim, Andrea Kang, Dongwoon Bai, Jungwon Lee, Maiyuran Wijay, Jaewon Yoo
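The claimed pipeline above (segmentation map, per-class area ratios, ranked labels, dominant class) can be illustrated with a minimal sketch. This is not the patented implementation: the segmentation network is omitted, and the function below only shows the area-ratio ranking step on an already-labeled map.

```python
import numpy as np

def dominant_class(seg_map, num_classes):
    """Compute per-class area ratios from a labeled segmentation map,
    rank the class labels by area ratio, and return the top-ranked
    label as the detected dominant class of the scene."""
    # (seg_map == c).mean() is the fraction of pixels labeled c,
    # i.e. that class's area ratio.
    ratios = np.array([(seg_map == c).mean() for c in range(num_classes)])
    ranked = np.argsort(-ratios)  # labels ranked by descending area ratio
    return ranked, int(ranked[0])
```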
  • Publication number: 20230390853
    Abstract: Disclosed are an automatic welding system and method for large structural parts based on hybrid robots and 3D vision. The system comprises a hybrid robot system composed of a mobile robot and an MDOF robot, a 3D vision system, and a welding system used for welding. The rough positioning technique based on a mobile platform and the accurate recognition and positioning technique based on high-accuracy 3D vision are combined, so the working range of the MDOF robot in the XYZ directions is expanded, and flexible welding of large structural parts is realized. The invention adopts 3D vision, thus having better error tolerance and lower requirements for the machining accuracy of workpieces, positioning accuracy of mobile robots and placement accuracy of the workpieces; and the cost is reduced, the flexibility is improved, the working range is expanded, labor is saved, production efficiency is improved, and welding quality is improved.
    Type: Application
    Filed: July 15, 2022
    Publication date: December 7, 2023
    Inventors: Tao YANG, HuanHuan LI, Lei PENG, JunWei JIANG, Li MA, QingFeng LIU, NiNi ZHANG, Fang WANG
  • Publication number: 20230360396
    Abstract: A method for computing a dominant class of a scene includes: receiving an input image of a scene; generating a segmentation map of the input image, the segmentation map being labeled with a plurality of corresponding classes of a plurality of classes; computing a plurality of area ratios based on the segmentation map, each of the area ratios corresponding to a different class of the plurality of classes of the segmentation map; and outputting a detected dominant class of the scene based on a plurality of ranked labels based on the area ratios.
    Type: Application
    Filed: July 19, 2023
    Publication date: November 9, 2023
    Inventors: Qingfeng Liu, Mostafa El-Khamy, Rama Mythili Vadali, Tae-ui Kim, Andrea Kang, Dongwoon Bai, Jungwon Lee, Maiyuran Wijay, Jaewon Yoo
  • Publication number: 20230334318
    Abstract: A method and system for training a neural network are provided. The method includes receiving an input image, selecting at least one data augmentation method from a pool of data augmentation methods, generating an augmented image by applying the selected at least one data augmentation method to the input image, and generating a mixed image from the input image and the augmented image.
    Type: Application
    Filed: June 26, 2023
    Publication date: October 19, 2023
    Inventors: Qingfeng LIU, Mostafa EL-KHAMY, Jungwon LEE, Behnam Babagholami MOHAMADABADI
  • Publication number: 20230260247
    Abstract: A computer vision system including: one or more processors; and memory including instructions that, when executed by the one or more processors, cause the one or more processors to: determine a semantic multi-scale context feature and an instance multi-scale context feature of an input scene; generate a joint attention map based on the semantic multi-scale context feature and the instance multi-scale context feature; refine the semantic multi-scale context feature and instance multi-scale context feature based on the joint attention map; and generate a panoptic segmentation image based on the refined semantic multi-scale context feature and the refined instance multi-scale context feature.
    Type: Application
    Filed: July 19, 2022
    Publication date: August 17, 2023
    Inventors: Qingfeng Liu, Mostafa El-Khamy
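The joint-attention refinement in the abstract above can be sketched as follows. This is an illustrative assumption-laden sketch, not the patented design: fusing the two context features by element-wise sum followed by a sigmoid, and refining by element-wise multiplication, are choices made here for simplicity; the abstract does not specify the fusion operator.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def joint_attention_refine(semantic_feat, instance_feat):
    """Fuse a semantic multi-scale context feature and an instance
    multi-scale context feature (both shape (C, H, W)) into a joint
    attention map, then use that map to refine both features."""
    joint = sigmoid(semantic_feat + instance_feat)  # values in (0, 1)
    refined_semantic = semantic_feat * joint
    refined_instance = instance_feat * joint
    return joint, refined_semantic, refined_instance
```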
  • Patent number: 11687780
    Abstract: A method and system for training a neural network are provided. The method includes receiving an input image, selecting at least one data augmentation method from a pool of data augmentation methods, generating an augmented image by applying the selected at least one data augmentation method to the input image, and generating a mixed image from the input image and the augmented image.
    Type: Grant
    Filed: April 27, 2021
    Date of Patent: June 27, 2023
    Inventors: Qingfeng Liu, Mostafa El-Khamy, Jungwon Lee, Behnam Babagholami Mohamadabadi
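The augmentation-selection and mixing steps in the abstract above can be sketched in a few lines. This is only an illustration of the claimed flow: linear alpha-blending is an assumption, since the abstract does not specify how the mixed image is generated from the input and augmented images.

```python
import numpy as np

def mixed_training_image(image, augmentation_pool, rng, alpha=0.5):
    """Select one augmentation from a pool, apply it to the input image,
    and generate a mixed image from the input and the augmented image.

    augmentation_pool: list of callables image -> image.
    Mixing by linear blend with weight alpha is an assumption."""
    aug = augmentation_pool[rng.integers(len(augmentation_pool))]
    augmented = aug(image)
    return alpha * image + (1.0 - alpha) * augmented
```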
  • Publication number: 20230186912
    Abstract: A speech recognition method and related products are provided. The method includes acquiring a to-be-recognized speech and a configured hot word library; determining, based on the to-be-recognized speech and the hot word library, an audio-related feature used at a current decoding time instant; determining, based on the audio-related feature, a hot word-related feature used at the current decoding time instant from the hot word library; and determining, based on the audio-related feature and the hot word-related feature, a recognition result of the to-be-recognized speech at the current decoding time instant.
    Type: Application
    Filed: December 2, 2020
    Publication date: June 15, 2023
    Applicant: IFLYTEK CO., LTD.
    Inventors: Shifu XIONG, Cong LIU, Si WEI, Qingfeng LIU, Jianqing GAO, Jia PAN
  • Patent number: 11657150
    Abstract: A two-dimensionality detection method for industrial control system attacks: collecting data; transmitting the data to a PLC and an embedded attack detection system; uploading, by the PLC, the received data to an SCADA system; transmitting, by the SCADA system, the data to the embedded attack detection system after classifying and counting the data; before starting detection, directly reading, by the embedded attack detection system, the data measured by sensors, and refining the data association relationships and probability distribution characteristics of the sensors in normal operation to complete storage of a health data model; after starting detection, in a first dimensionality, comparing the data collected directly by the sensors with the statistical data of the SCADA system to judge the attacked condition of the SCADA system, and in a second dimensionality, comparing the characteristics of the data collected directly by the sensors and counted online with the health data model to judge the attacked condition of the sensors.
    Type: Grant
    Filed: August 12, 2022
    Date of Patent: May 23, 2023
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Tianju Sui, Qingfeng Liu, Ximing Sun
  • Publication number: 20230123254
    Abstract: A method for computing a dominant class of a scene includes: receiving an input image of a scene; generating a segmentation map of the input image, the segmentation map being labeled with a plurality of corresponding classes of a plurality of classes; computing a plurality of area ratios based on the segmentation map, each of the area ratios corresponding to a different class of the plurality of classes of the segmentation map; and outputting a detected dominant class of the scene based on a plurality of ranked labels based on the area ratios.
    Type: Application
    Filed: December 16, 2022
    Publication date: April 20, 2023
    Inventors: Qingfeng Liu, Mostafa El-Khamy, Rama Mythili Vadali, Tae-ui Kim, Andrea Kang, Dongwoon Bai, Jungwon Lee, Maiyuran Wijay, Jaewon Yoo
  • Publication number: 20230076346
    Abstract: A two-dimensionality detection method for industrial control system attacks: collecting data; transmitting the data to a PLC and an embedded attack detection system; uploading, by the PLC, the received data to an SCADA system; transmitting, by the SCADA system, the data to the embedded attack detection system after classifying and counting the data; before starting detection, directly reading, by the embedded attack detection system, the data measured by sensors, and refining the data association relationships and probability distribution characteristics of the sensors in normal operation to complete storage of a health data model; after starting detection, in a first dimensionality, comparing the data collected directly by the sensors with the statistical data of the SCADA system to judge the attacked condition of the SCADA system, and in a second dimensionality, comparing the characteristics of the data collected directly by the sensors and counted online with the health data model to judge the attacked condition of the sensors.
    Type: Application
    Filed: August 12, 2022
    Publication date: March 9, 2023
    Inventors: Tianju SUI, Qingfeng LIU, Ximing SUN
  • Publication number: 20230050573
    Abstract: Apparatuses and methods are provided for training a feature extraction model by determining a loss function for use in unsupervised image segmentation. A method includes determining a clustering loss from an image; determining a weakly supervised contrastive loss of the image using cluster pseudo labels based on the clustering loss; and determining the loss function based on the clustering loss and the weakly supervised contrastive loss.
    Type: Application
    Filed: May 26, 2022
    Publication date: February 16, 2023
    Inventors: Qingfeng LIU, Mostafa EL-KHAMY, Yuewei YANG
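The weakly supervised contrastive loss with cluster pseudo labels described above can be illustrated with a supervised-contrastive-style sketch: embeddings sharing a pseudo label are pulled together, all others pushed apart. This is an assumption-based illustration, not the patented loss; the exact form, the temperature value, and the handling of anchors without positives are all choices made here.

```python
import numpy as np

def pseudo_label_contrastive_loss(features, pseudo_labels, temperature=0.1):
    """Supervised-contrastive-style loss over L2-normalized embeddings,
    using cluster pseudo labels to define positive pairs.

    features: (N, D) L2-normalized embeddings.
    pseudo_labels: (N,) integer cluster assignments."""
    n = len(pseudo_labels)
    sim = features @ features.T / temperature        # scaled cosine similarity
    not_self = ~np.eye(n, dtype=bool)                # exclude self-pairs
    logits = sim - sim.max(axis=1, keepdims=True)    # stabilize the softmax
    exp = np.exp(logits) * not_self
    log_prob = logits - np.log(exp.sum(axis=1, keepdims=True))
    positives = (pseudo_labels[:, None] == pseudo_labels[None, :]) & not_self
    # mean log-probability over each anchor's positive pairs
    per_anchor = (log_prob * positives).sum(1) / np.maximum(positives.sum(1), 1)
    # average over anchors that have at least one positive
    return -per_anchor[positives.any(1)].mean()
```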
  • Patent number: 11532154
    Abstract: A method for computing a dominant class of a scene includes: receiving an input image of a scene; generating a segmentation map of the input image, the segmentation map being labeled with a plurality of corresponding classes of a plurality of classes; computing a plurality of area ratios based on the segmentation map, each of the area ratios corresponding to a different class of the plurality of classes of the segmentation map; and outputting a detected dominant class of the scene based on a plurality of ranked labels based on the area ratios.
    Type: Grant
    Filed: February 17, 2021
    Date of Patent: December 20, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Qingfeng Liu, Mostafa El-Khamy, Rama Mythili Vadali, Tae-ui Kim, Andrea Kang, Dongwoon Bai, Jungwon Lee, Maiyuran Wijay, Jaewon Yoo
  • Patent number: 11461998
    Abstract: Some aspects of embodiments of the present disclosure relate to using a boundary aware loss function to train a machine learning model for computing semantic segmentation maps from input images. Some aspects of embodiments of the present disclosure relate to deep convolutional neural networks (DCNNs) for computing semantic segmentation maps from input images, where the DCNNs include a box filtering layer configured to box filter input feature maps computed from the input images before supplying box filtered feature maps to an atrous spatial pyramidal pooling (ASPP) layer. Some aspects of embodiments of the present disclosure relate to a selective ASPP layer configured to weight the outputs of an ASPP layer in accordance with attention feature maps.
    Type: Grant
    Filed: January 30, 2020
    Date of Patent: October 4, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Qingfeng Liu, Mostafa El-Khamy, Dongwoon Bai, Jungwon Lee
  • Publication number: 20220301128
    Abstract: A method of image processing includes: determining a first feature, wherein the first feature has a dimensionality D1; determining a second feature, wherein the second feature has a dimensionality D2 and is based on an output of a feature extraction network; generating a third feature by processing the first feature, the third feature having a dimensionality D3; generating a guidance by processing the second feature, the guidance having the dimensionality D3; generating a filter output by applying a deep guided filter (DGF) to the third feature using the guidance; generating a map based on the filter output; and outputting a processed image based on the map.
    Type: Application
    Filed: December 27, 2021
    Publication date: September 22, 2022
    Inventors: Qingfeng LIU, Hai SU, Mostafa EL-KHAMY
  • Publication number: 20220301296
    Abstract: A system and a method to train a neural network are disclosed. A first image is weakly and strongly augmented. The first image, the weakly and strongly augmented first images are input into a feature extractor to obtain augmented features. Each weakly augmented first image is input to a corresponding first expert head to determine a supervised loss for each weakly augmented first image. Each strongly augmented first image is input to a corresponding second expert head to determine a diversity loss for each strongly augmented first image. The feature extractor is trained to minimize the supervised loss on weakly augmented first images and to minimize a multi-expert consensus loss on strongly augmented first images. Each first expert head is trained to minimize the supervised loss for each weakly augmented first image, and each second expert head is trained to minimize the diversity loss for each strongly augmented first image.
    Type: Application
    Filed: February 17, 2022
    Publication date: September 22, 2022
    Inventors: Behnam BABAGHOLAMI MOHAMADABADI, Qingfeng LIU, Mostafa EL-KHAMY, Jungwon LEE