Patents by Inventor Xiaogang Wang

Xiaogang Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210165997
    Abstract: Disclosed in embodiments of the present disclosure are an object three-dimensional detection method and apparatus, an intelligent driving control method and apparatus, a medium, and a device. The object three-dimensional detection method comprises: obtaining two-dimensional coordinates of a key point of a target object in an image to be processed; constructing a pseudo three-dimensional detection body of the target object according to the two-dimensional coordinates of the key point; obtaining depth information of the key point; and determining a three-dimensional detection body of the target object according to the depth information of the key point and the pseudo three-dimensional detection body.
    Type: Application
    Filed: July 16, 2019
    Publication date: June 3, 2021
    Inventors: Yingjie CAI, Xingyu ZENG, Junjie YAN, Xiaogang WANG
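
The back-projection step implied by the entry above (lifting a 2D key point with known depth into 3D camera coordinates) can be sketched as follows; the pinhole-camera intrinsics and key-point values are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def lift_keypoints_to_3d(kpts_2d, depths, fx, fy, cx, cy):
    """Back-project 2D key points (u, v) with per-point depth z into
    3D camera coordinates (X, Y, Z) using a pinhole camera model."""
    u, v = kpts_2d[:, 0], kpts_2d[:, 1]
    z = depths
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# Illustrative values: 4 key points of one face of a pseudo-3D box and their depths.
kpts = np.array([[620.0, 410.0], [700.0, 410.0], [700.0, 470.0], [620.0, 470.0]])
depths = np.array([14.2, 14.2, 13.8, 13.8])
corners_3d = lift_keypoints_to_3d(kpts, depths, fx=721.5, fy=721.5, cx=609.6, cy=172.9)
print(corners_3d)  # (4, 3) array of 3D corner estimates
```
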
  • Publication number: 20210126250
    Abstract: A pre-lithiated silicon-based anode includes a silicon-based anode, lithium disposed on a surface or in an interior of the silicon-based anode, and a protective coating on the surface of the silicon-based anode. The pre-lithiated silicon-based anode allows subsequent processing to be performed safely in the atmosphere.
    Type: Application
    Filed: April 17, 2017
    Publication date: April 29, 2021
    Inventors: Rongrong Jiang, Jingjun Zhang, Yuqian Dou, Lei Wang, Chuanling Li, Yunhua Chen, Xiaogang Hao, Qiang Lu
  • Patent number: 10984266
    Abstract: A vehicle lamp detection method includes: obtaining an image block including an image of a vehicle; and performing vehicle lamp detection on the image block by means of a deep neural network, to obtain a vehicle lamp detection result.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: April 20, 2021
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Shinan Liu, Xingyu Zeng, Junjie Yan, Xiaogang Wang
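
A minimal sketch of the flow in the entry above: crop the image block containing the vehicle and pass it through a convolutional network that scores lamp pixels. The tiny network and the bounding box are placeholders, not the patented architecture.

```python
import torch
import torch.nn as nn

class TinyLampDetector(nn.Module):
    """Stand-in for the deep neural network named in the abstract: maps a vehicle
    image block to a per-pixel lamp score map (not the patented architecture)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # 1-channel lamp score map
        )

    def forward(self, x):
        return torch.sigmoid(self.backbone(x))

frame = torch.rand(3, 720, 1280)             # placeholder video frame
x1, y1, x2, y2 = 400, 300, 720, 520          # assumed vehicle bounding box
block = frame[:, y1:y2, x1:x2].unsqueeze(0)  # image block including the vehicle
scores = TinyLampDetector()(block)           # vehicle lamp detection result for the block
print(scores.shape)                          # torch.Size([1, 1, 220, 320])
```
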
  • Publication number: 20210082181
    Abstract: Disclosed are a method and apparatus for object detection, an electronic device and a computer storage medium. The method includes: acquiring three-dimensional (3D) point cloud data; determining point cloud semantic features corresponding to the 3D point cloud data according to the 3D point cloud data; determining part location information of foreground points based on the point cloud semantic features; extracting at least one initial 3D bounding box based on the point cloud data; and determining a 3D bounding box for an object according to the point cloud semantic features corresponding to the point cloud data, the part location information of the foreground points and the at least one initial 3D bounding box.
    Type: Application
    Filed: November 30, 2020
    Publication date: March 18, 2021
    Inventors: Shaoshuai SHI, Zhe Wang, Xiaogang Wang, Hongsheng Li
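
One building block suggested by the entry above is deciding which foreground points fall inside a candidate 3D bounding box. A small numpy sketch of that test, with an illustrative box and random points:

```python
import numpy as np

def points_in_box(points, center, size, yaw):
    """Return a boolean mask of points inside a yaw-rotated 3D box.
    points: (N, 3); center: (3,); size: (l, w, h); yaw: rotation about z."""
    c, s = np.cos(-yaw), np.sin(-yaw)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    local = (points - center) @ rot.T          # transform points into the box frame
    half = np.asarray(size) / 2.0
    return np.all(np.abs(local) <= half, axis=1)

points = np.random.uniform(-10, 10, size=(1000, 3))   # placeholder point cloud
mask = points_in_box(points, center=np.array([2.0, 1.0, 0.0]),
                     size=(4.0, 1.8, 1.6), yaw=0.3)
print(mask.sum(), "points fall inside the candidate 3D bounding box")
```
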
  • Publication number: 20210042954
    Abstract: Embodiments of the present application disclose a binocular matching method, including: obtaining an image to be processed, where the image is a two-dimensional (2D) image including a left image and a right image; constructing a three-dimensional (3D) matching cost feature of the image by using extracted features of the left image and extracted features of the right image, where the 3D matching cost feature includes a group-wise cross-correlation feature, or includes a feature obtained by concatenating the group-wise cross-correlation feature and a connection feature; and determining the depth of the image by using the 3D matching cost feature. The embodiments of the present application also provide a binocular matching apparatus, a computer device, and a storage medium.
    Type: Application
    Filed: October 28, 2020
    Publication date: February 11, 2021
    Inventors: Xiaoyang GUO, Kai Yang, Wukui Yang, Hongsheng Li, Xiaogang Wang
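
The group-wise cross-correlation feature mentioned in the entry above can be illustrated with a short PyTorch sketch that builds a matching cost volume from left/right feature maps; the feature sizes, disparity range, and group count are assumptions.

```python
import torch

def groupwise_cost_volume(feat_l, feat_r, max_disp, num_groups):
    """Build a group-wise cross-correlation matching cost feature.
    feat_l, feat_r: (B, C, H, W) left/right image features, C divisible by num_groups."""
    b, c, h, w = feat_l.shape
    ch_per_group = c // num_groups
    cost = feat_l.new_zeros(b, num_groups, max_disp, h, w)
    for d in range(max_disp):
        if d == 0:
            prod = feat_l * feat_r
        else:
            prod = feat_l[..., d:] * feat_r[..., :-d]   # shift right features by disparity d
        prod = prod.view(b, num_groups, ch_per_group, h, -1).mean(dim=2)
        cost[:, :, d, :, d:] = prod
    return cost  # (B, groups, max_disp, H, W)

left = torch.rand(1, 32, 64, 128)
right = torch.rand(1, 32, 64, 128)
volume = groupwise_cost_volume(left, right, max_disp=24, num_groups=8)
print(volume.shape)  # torch.Size([1, 8, 24, 64, 128])
```
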
  • Publication number: 20210042501
    Abstract: A method for processing point cloud data includes: point cloud data in a target scene and weight vectors of a first discrete convolution kernel are obtained; interpolation processing is performed on the point cloud data based on the point cloud data and the weight vectors of the first discrete convolution kernel to obtain first weight data, the first weight data representing weights of allocation of the point cloud data to positions corresponding to the weight vectors of the first discrete convolution kernel; first discrete convolution processing is performed on the point cloud data based on the first weight data and the weight vectors of the first discrete convolution kernel to obtain a first discrete convolution result; and a spatial structure feature of at least part of point cloud data in the point cloud data is obtained based on the first discrete convolution result.
    Type: Application
    Filed: October 28, 2020
    Publication date: February 11, 2021
    Inventors: Jiageng MAO, Xiaogang WANG, Hongsheng LI
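
A simplified numpy sketch of the interpolation idea in the entry above: allocate neighbouring points to the positions of a discrete 3x3x3 kernel with linearly decaying weights (the "first weight data"), then take a weighted convolution with the kernel's weight vectors. The kernel spacing, decay radius, and feature sizes are assumptions.

```python
import numpy as np
from itertools import product

def interpolation_weights(points, kernel_offsets, radius):
    """Allocate each (relative) point to nearby kernel positions, with weights
    that decay linearly to zero at `radius` and are normalised per point."""
    d = np.linalg.norm(points[:, None, :] - kernel_offsets[None, :, :], axis=2)
    w = np.clip(1.0 - d / radius, 0.0, None)       # (N points, K kernel positions)
    norm = w.sum(axis=1, keepdims=True)
    return np.where(norm > 0, w / np.maximum(norm, 1e-9), 0.0)

# Assumed 3x3x3 discrete kernel centred on a query point, spacing 0.1 m.
kernel_offsets = 0.1 * np.array(list(product([-1, 0, 1], repeat=3)), dtype=float)
neighbours = np.random.uniform(-0.15, 0.15, size=(16, 3))   # neighbouring points (relative coords)
feats = np.random.rand(16, 8)                               # per-point input features
kernel = np.random.rand(len(kernel_offsets), 8, 32)         # kernel weight vectors (K, C_in, C_out)

w = interpolation_weights(neighbours, kernel_offsets, radius=0.1)   # "first weight data"
gathered = w.T @ feats                           # point features allocated to kernel positions
out = np.einsum('kc,kco->o', gathered, kernel)   # first discrete convolution result
print(out.shape)  # (32,)
```
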
  • Publication number: 20210012154
    Abstract: The present disclosure relates to a network optimization method and apparatus, an image processing method and apparatus, and a storage medium. The network optimization method includes: obtaining an image sample group; obtaining a first feature and a second feature of an image in the image sample group, and obtaining a first classification result by using the first feature of the image; performing feature exchange processing on an image pair in the image sample group to obtain a new image pair; obtaining a first loss value of the first classification result, a second loss value of the new image pair, and a third loss value of first features and second features of the new image pair in a preset manner; and adjusting parameters of a neural network at least according to the first loss value, the second loss value, and the third loss value until a preset requirement is met.
    Type: Application
    Filed: September 29, 2020
    Publication date: January 14, 2021
    Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Yixiao GE, Yantao SHEN, Dapeng CHEN, Xiaogang WANG, Hongsheng LI
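
A heavily simplified sketch of the training signal described in the entry above, with toy encoders, a toy decoder producing the feature-exchanged image pair, and three illustrative loss terms; the concrete loss forms ("preset manner") and network shapes are assumptions, not the patented method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical modules standing in for the networks implied by the abstract.
enc_a = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))   # "first feature" encoder
enc_b = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))   # "second feature" encoder
decoder = nn.Linear(128, 3 * 32 * 32)                             # produces an exchanged image
classifier = nn.Linear(64, 10)

img1, img2 = torch.rand(1, 3, 32, 32), torch.rand(1, 3, 32, 32)   # image pair from the sample group
label1 = torch.tensor([3])

f1_a, f1_b = enc_a(img1), enc_b(img1)
f2_a, f2_b = enc_a(img2), enc_b(img2)

# Feature exchange: combine img1's first feature with img2's second feature (and vice versa).
new1 = decoder(torch.cat([f1_a, f2_b], dim=1)).view(1, 3, 32, 32)
new2 = decoder(torch.cat([f2_a, f1_b], dim=1)).view(1, 3, 32, 32)

loss1 = F.cross_entropy(classifier(f1_a), label1)            # first loss: classification result
loss2 = F.l1_loss(new1, img1) + F.l1_loss(new2, img2)         # second loss: new image pair (assumed form)
loss3 = F.mse_loss(enc_a(new1), f1_a) + F.mse_loss(enc_b(new1), f2_b)  # third loss: features of the new pair
total = loss1 + loss2 + loss3
total.backward()   # parameters would be adjusted until a preset requirement is met
```
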
  • Patent number: 10891471
    Abstract: A method and a system for pose estimation are provided. The method includes: extracting a plurality of sets of part-feature maps from an image, each set of the extracted part-feature maps encoding the messages for a particular body part and forming a node of a part-feature network; passing a message of each set of the extracted part-feature maps through the part-feature network to update the extracted part-feature maps, resulting in each set of the extracted part-feature maps incorporating the message of upstream nodes; estimating, based on the updated part-feature maps, the body part within the image.
    Type: Grant
    Filed: January 2, 2019
    Date of Patent: January 12, 2021
    Assignee: Beijing SenseTime Technology Development Co., Ltd
    Inventors: Xiaogang Wang, Xiao Chu, Wanli Ouyang, Hongsheng Li
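
The message-passing idea in the entry above can be sketched with a small PyTorch example: part-feature maps form nodes of a chain, and each node is updated with a convolved message from its upstream node. The chain topology and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Part-feature maps for a chain of body parts (e.g. shoulder -> elbow -> wrist);
# the topology and channel sizes are illustrative, not from the patent.
parts = ["shoulder", "elbow", "wrist"]
feature_maps = {p: torch.rand(1, 32, 46, 46) for p in parts}

# One message-passing transform per edge of the part-feature network.
message_convs = nn.ModuleDict({
    "shoulder->elbow": nn.Conv2d(32, 32, 3, padding=1),
    "elbow->wrist": nn.Conv2d(32, 32, 3, padding=1),
})

updated = dict(feature_maps)
for edge, conv in message_convs.items():
    src, dst = edge.split("->")
    # Each downstream node incorporates the message from its upstream node.
    updated[dst] = torch.relu(updated[dst] + conv(updated[src]))

# A shared 1x1 head then scores the updated maps to estimate part locations.
score_head = nn.Conv2d(32, 1, 1)
heatmaps = {p: score_head(f) for p, f in updated.items()}
print(heatmaps["wrist"].shape)  # torch.Size([1, 1, 46, 46])
```
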
  • Publication number: 20200380279
    Abstract: A method and apparatus for liveness detection includes: processing a target image to obtain probabilities that multiple pixel points of the target image correspond to spoofing; determining a predicted face region in the target image; and obtaining, based on the probabilities that the multiple pixel points of the target image correspond to spoofing and the predicted face region, a liveness detection result of the target image.
    Type: Application
    Filed: August 20, 2020
    Publication date: December 3, 2020
    Inventors: Guowei YANG, Jing SHAO, Junjie YAN, Xiaogang WANG
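
A minimal numpy sketch of the combination step described in the entry above: average the per-pixel spoofing probabilities inside the predicted face region and threshold the result. The threshold and map sizes are assumptions.

```python
import numpy as np

def liveness_decision(spoof_prob_map, face_box, threshold=0.5):
    """Combine per-pixel spoofing probabilities with a predicted face region:
    average the probabilities inside the region and compare to a threshold."""
    x1, y1, x2, y2 = face_box
    region = spoof_prob_map[y1:y2, x1:x2]
    spoof_score = float(region.mean())
    return ("spoof" if spoof_score > threshold else "live"), spoof_score

prob_map = np.random.rand(224, 224)          # placeholder per-pixel spoof probabilities
result, score = liveness_decision(prob_map, face_box=(60, 50, 170, 180))
print(result, round(score, 3))               # liveness detection result for the target image
```
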
  • Publication number: 20200371481
    Abstract: In some embodiments, a control system, a control method and a storage medium are provided. In the method, first motion information of a machine acquired by a first sensor is received; the first motion information is inputted into a deep learning model to obtain a model output, the deep learning model comprising a convolutional neural network (CNN) and a long short-term memory (LSTM); and the deep learning model is trained using the first motion information and second motion information acquired by a second sensor, the first sensor and the second sensor having different ways of detecting information and processing the detected information. The model output is used to control the machine.
    Type: Application
    Filed: May 22, 2019
    Publication date: November 26, 2020
    Inventors: Shih-Chi CHEN, XiangBo LIU, Chenglin LI, Xiaogang WANG, Hongsheng LI
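
A compact PyTorch sketch of a CNN + LSTM model of the kind named in the entry above, mapping a window of first-sensor motion readings to a control output; the channel counts, window length, and output dimension are assumptions.

```python
import torch
import torch.nn as nn

class MotionControlNet(nn.Module):
    """CNN + LSTM over a window of motion measurements; all sizes are assumptions."""
    def __init__(self, in_channels=6, hidden=64, out_dim=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)   # e.g. control commands for the machine

    def forward(self, motion):                   # motion: (batch, time, channels)
        x = self.cnn(motion.transpose(1, 2))     # Conv1d expects (batch, channels, time)
        out, _ = self.lstm(x.transpose(1, 2))    # back to (batch, time, features)
        return self.head(out[:, -1])             # model output at the last time step

# First-sensor motion information: 100 time steps of 6-axis IMU-like readings (assumed).
first_motion = torch.rand(1, 100, 6)
command = MotionControlNet()(first_motion)
print(command.shape)  # torch.Size([1, 2])
```
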
  • Publication number: 20200364518
    Abstract: The present application relates to an object prediction method and apparatus, an electronic device, and a storage medium. The method is applied to a neural network and includes: performing feature extraction processing on a to-be-predicted object to obtain feature information of the to-be-predicted object; determining multiple intermediate prediction results for the to-be-predicted object according to the feature information; performing fusion processing on the multiple intermediate prediction results to obtain fusion information; and determining multiple target prediction results for the to-be-predicted object according to the fusion information.
    Type: Application
    Filed: August 5, 2020
    Publication date: November 19, 2020
    Inventors: Dan XU, Wanli OUYANG, Xiaogang WANG, Sebe NICU
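
The pipeline in the entry above (shared features, intermediate predictions, fusion, final predictions) can be sketched as a small PyTorch module; the number of tasks and all layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskFusionNet(nn.Module):
    """Sketch: shared features -> intermediate prediction heads -> fusion -> final heads."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.inter_heads = nn.ModuleList([nn.Conv2d(32, 1, 1) for _ in range(2)])
        self.fusion = nn.Conv2d(32 + 2, 32, 3, padding=1)
        self.final_heads = nn.ModuleList([nn.Conv2d(32, 1, 1) for _ in range(2)])

    def forward(self, x):
        feat = self.backbone(x)                                  # feature information
        inter = [head(feat) for head in self.inter_heads]        # intermediate prediction results
        fused = torch.relu(self.fusion(torch.cat([feat] + inter, dim=1)))  # fusion information
        return [head(fused) for head in self.final_heads]        # target prediction results

preds = MultiTaskFusionNet()(torch.rand(1, 3, 64, 64))
print([p.shape for p in preds])  # two (1, 1, 64, 64) prediction maps
```
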
  • Patent number: 10825187
    Abstract: The application relates to a method and system for tracking a target object in a video. The method includes: extracting, from the video, a 3-dimension (3D) feature block containing the target object; decomposing the extracted 3D feature block into a 2-dimension (2D) spatial feature map containing spatial information of the target object and a 2D spatial-temporal feature map containing spatial-temporal information of the target object; estimating, in the 2D spatial feature map, a location of the target object; determining, in the 2D spatial-temporal feature map, a speed and an acceleration of the target object; calibrating the estimated location of the target object according to the determined speed and acceleration; and tracking the target object in the video according to the calibrated location.
    Type: Grant
    Filed: October 11, 2018
    Date of Patent: November 3, 2020
    Assignee: Beijing SenseTime Technology Development Co., Ltd
    Inventors: Xiaogang Wang, Jing Shao, Chen-Change Loy, Kai Kang
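
A toy numpy sketch of the calibration step described in the entry above: blend the location estimated from the spatial feature map with a constant-acceleration prediction driven by the speed and acceleration from the spatial-temporal map. The blending rule is an assumption, not the patented formulation.

```python
import numpy as np

def calibrate_location(estimated, prev_location, speed, acceleration, dt=1.0, alpha=0.7):
    """Blend the spatially estimated location with a constant-acceleration prediction.
    The blending factor `alpha` is an assumption."""
    predicted = prev_location + speed * dt + 0.5 * acceleration * dt ** 2
    return alpha * np.asarray(estimated) + (1.0 - alpha) * predicted

prev = np.array([120.0, 80.0])        # target location in the previous frame (pixels)
speed = np.array([4.0, -1.0])         # from the 2D spatial-temporal feature map
accel = np.array([0.5, 0.0])
estimate = np.array([125.5, 79.2])    # from the 2D spatial feature map
print(calibrate_location(estimate, prev, speed, accel))   # calibrated target location
```
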
  • Patent number: 10817714
    Abstract: A method for predicting walking behaviors includes: encoding walking behavior information of at least one target object in a target scene within a historical time period M to obtain a first offset matrix for representing the walking behavior information of the at least one target object within the historical time period M; inputting the first offset matrix into a neural network, and outputting by the neural network a second offset matrix for representing walking behavior information of the at least one target object within a future time period M′; and decoding the second offset matrix to obtain the walking behavior prediction information of the at least one target object within the future time period M′.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: October 27, 2020
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
    Inventors: Shuai Yi, Hongsheng Li, Xiaogang Wang
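
A small numpy sketch of the encode/decode bookkeeping in the entry above: historical positions are turned into an offset matrix for the network, and a predicted offset matrix is integrated back into future positions. The network itself is replaced by a placeholder output.

```python
import numpy as np

def encode_offsets(positions):
    """Encode a walking trajectory over the historical period as an offset
    matrix: per-step displacements rather than absolute coordinates."""
    return np.diff(positions, axis=0)            # (M-1, 2) first offset matrix

def decode_offsets(last_position, predicted_offsets):
    """Decode a predicted (second) offset matrix back into future positions."""
    return last_position + np.cumsum(predicted_offsets, axis=0)

history = np.array([[0.0, 0.0], [0.4, 0.1], [0.9, 0.1], [1.3, 0.2]])   # observed positions
first_offsets = encode_offsets(history)          # input to the neural network
# Placeholder standing in for the network's output over the future period M'.
second_offsets = np.array([[0.5, 0.1], [0.5, 0.0], [0.4, 0.1]])
future = decode_offsets(history[-1], second_offsets)
print(future)   # predicted future walking positions
```
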
  • Publication number: 20200327690
    Abstract: A 3D object detection method includes: obtaining two-dimensional (2D) coordinates of at least one predetermined key point of a target object in an image to be processed; constructing a pseudo 3D detection body of the target object in a 2D space according to the 2D coordinates of the at least one predetermined key point; obtaining depth information of a plurality of vertices of the pseudo 3D detection body; and determining a 3D detection body of the target object in a 3D space according to the depth information of the plurality of vertices of the pseudo 3D detection body.
    Type: Application
    Filed: April 1, 2020
    Publication date: October 15, 2020
    Applicants: SENSETIME GROUP LIMITED, HONDA MOTOR CO. LTD.
    Inventors: Yingjie CAI, Shinan LIU, Xingyu ZENG, Junjie YAN, Xiaogang WANG, Atsushi Kawamura, Yuji Yasui, Tokitomo Ariyoshi, Yuji Kaneda, Yuhi Goto
  • Patent number: 10759894
    Abstract: A vegetable oil-based cartilage bionic cushioning and shock-absorbing material, and a preparation method and use thereof, is provided. The vegetable oil-based cartilage bionic cushioning and shock-absorbing material is prepared from a premix A and an isocyanate mixture B; the premix A including a vegetable oil-based modified polyol, a type 1 polyether polyol, a type 2 polyether polyol, a polymer polyol, a surfactant, a foaming agent, a chain extender, a catalyst and a cell regulator; the type 1 polyether polyol is a polyether polyol with a molecular weight of 400-1000 and a hydroxyl value of 110-280 mg KOH/g; and the type 2 polyether polyol is a polyether polyol with a molecular weight of 1000-10000 and a hydroxyl value of 25-56 mg KOH/g. The material provided by the present invention is environment-friendly and breathable with open cells, and has a high cushioning effect and a low permanent compression set value.
    Type: Grant
    Filed: December 26, 2017
    Date of Patent: September 1, 2020
    Assignee: Foshan Linzhi Polymer Materials Science and Technology Co., Ltd.
    Inventors: Bowei Wang, Xiaogang Wang, Keer Chen
  • Publication number: 20200234078
    Abstract: A target matching method and apparatus, an electronic device, and a storage medium, including: extracting a feature vector of each frame in a query image sequence and a feature vector of each frame in a candidate image sequence; determining a self-expression feature vector of the query image sequence, a collaborative expression feature vector of the query image sequence, a self-expression feature vector of the candidate image sequence, and a collaborative expression feature vector of the candidate image sequence based on the feature vector of each frame in the query image sequence and the feature vector of each frame in the candidate image sequence; determining a similarity feature vector between the query image sequence and the candidate image sequence based on the self-expression feature vector of the query image sequence, the collaborative expression feature vector of the query image sequence, the self-expression feature vector of the candidate image sequence, and the collaborative expression feature vector of the candidate image sequence; and determining a matching result between the query image sequence and the candidate image sequence based on the similarity feature vector.
    Type: Application
    Filed: April 7, 2020
    Publication date: July 23, 2020
    Inventors: Ruimao Zhang, Hongbin Sun, Ping Luo, Yuying Ge, Kuanze Ren, Liang Lin, Xiaogang Wang
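
A toy numpy sketch of the feature-vector bookkeeping described in the entry above: self-expression and collaborative-expression vectors pooled with simple dot-product attention, combined into a similarity feature vector and a matching score. The attention form and the score mapping are assumptions, not the patented formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_expression(frames):
    """Pool a sequence's own frame features with self-attention weights."""
    attn = softmax(frames @ frames.mean(axis=0))
    return attn @ frames

def collaborative_expression(frames, other_frames):
    """Pool frame features with attention weights computed against the other sequence."""
    attn = softmax(frames @ other_frames.mean(axis=0))
    return attn @ frames

query = np.random.rand(8, 128)       # per-frame features of the query image sequence
candidate = np.random.rand(12, 128)  # per-frame features of the candidate image sequence

q_vec = np.concatenate([self_expression(query), collaborative_expression(query, candidate)])
c_vec = np.concatenate([self_expression(candidate), collaborative_expression(candidate, query)])
similarity = np.abs(q_vec - c_vec)        # similarity feature vector between the two sequences
score = 1.0 / (1.0 + similarity.mean())   # toy matching score from the similarity vector
print(round(float(score), 3))
```
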
  • Publication number: 20200226410
    Abstract: A method and apparatus for positioning a description statement in an image includes: analyzing a to-be-analyzed description statement and a to-be-analyzed image to obtain a plurality of statement attention weights of the to-be-analyzed description statement and a plurality of image attention weights of the to-be-analyzed image; obtaining a plurality of first matching scores based on the plurality of statement attention weights and a subject feature, a location feature and a relationship feature of the to-be-analyzed image; obtaining a second matching score between the to-be-analyzed description statement and the to-be-analyzed image based on the plurality of first matching scores and the plurality of image attention weights; and determining a positioning result of the to-be-analyzed description statement in the to-be-analyzed image based on the second matching score.
    Type: Application
    Filed: March 24, 2020
    Publication date: July 16, 2020
    Inventors: Xihui LIU, Jing SHAO, Zihao WANG, Hongsheng LI, Xiaogang WANG
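
A toy sketch of the two-stage scoring described in the entry above: per-feature-type first matching scores weighted by the statement attention, then a second matching score combined under the image attention weights. All embeddings and weights below are made-up placeholders.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Assumed embeddings for one candidate region of the to-be-analyzed image.
statement_parts = {"subject": np.random.rand(64), "location": np.random.rand(64),
                   "relationship": np.random.rand(64)}
image_feats = {"subject": np.random.rand(64), "location": np.random.rand(64),
               "relationship": np.random.rand(64)}

statement_weights = {"subject": 0.5, "location": 0.3, "relationship": 0.2}  # statement attention
image_weights = {"subject": 0.4, "location": 0.4, "relationship": 0.2}      # image attention

# First matching scores: one per feature type, weighted by the statement attention.
first_scores = {k: statement_weights[k] * cosine(statement_parts[k], image_feats[k])
                for k in statement_parts}
# Second matching score: combine the first scores under the image attention weights.
second_score = sum(image_weights[k] * first_scores[k] for k in first_scores)
print(round(second_score, 3))   # higher score -> better positioning candidate
```
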
  • Patent number: 10715011
    Abstract: An electrical machine includes an electric motor, a cooling jacket over the electric motor, and a power inverter having multiple AC power outlets. The electrical machine also includes an elongated busbar having an end adjacent to and coupled to an AC power outlet. The other end of the elongated busbar is adjacent to and coupled to the electric motor. The elongated busbar traverses from one end of the electric motor to a second end of the electric motor over, and in thermal contact with, the cooling jacket so as to reduce a high temperature at the electric motor to a low temperature at the AC power outlet.
    Type: Grant
    Filed: March 15, 2017
    Date of Patent: July 14, 2020
    Assignee: Karma Automotive LLC
    Inventors: Lei Gu, Zhong Nie, Yu Liu, Xiaogang Wang
  • Publication number: 20200193228
    Abstract: An image question answering method includes: extracting a question feature representing a semantic meaning of a question, a global feature of an image, and a detection frame feature of a detection frame encircling an object in the image; obtaining a first weight of each of at least one area of the image and a second weight of each of at least one detection frame of the image according to the question feature, the global feature, and the detection frame feature; performing weighting processing on the global feature by using the first weight to obtain an area attention feature of the image; performing weighting processing on the detection frame feature by using the second weight to obtain a detection frame attention feature of the image; and predicting an answer to the question according to the question feature, the area attention feature, and the detection frame attention feature.
    Type: Application
    Filed: February 22, 2020
    Publication date: June 18, 2020
    Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Pan LU, Hongsheng LI, Xiaogang WANG
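
A short PyTorch sketch of the attention weighting described in the entry above: question-conditioned weights over image areas and detection frames, weighted attention features, and a stand-in answer classifier. The feature sizes, dot-product attention, and answer-vocabulary size are assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
question_feat = torch.rand(256)          # semantic feature of the question
global_feat = torch.rand(49, 256)        # 7x7 grid of image-area features
box_feats = torch.rand(10, 256)          # features of 10 detection frames

# First/second weights from question-image relevance (dot product as a stand-in).
area_weights = F.softmax(global_feat @ question_feat, dim=0)    # (49,) first weights
box_weights = F.softmax(box_feats @ question_feat, dim=0)       # (10,) second weights

area_attention = area_weights @ global_feat   # area attention feature of the image
box_attention = box_weights @ box_feats       # detection frame attention feature of the image

# A classifier over the concatenated features predicts the answer; here an
# untrained linear layer stands in for that answer predictor.
answer_logits = torch.nn.Linear(3 * 256, 3000)(
    torch.cat([question_feat, area_attention, box_attention]))
print(answer_logits.argmax().item())     # index of the predicted answer
```
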
  • Patent number: 10672130
    Abstract: Disclosed is a co-segmentation method and apparatus for a three-dimensional model set, which includes: obtaining a super patch set for the three-dimensional model set which includes at least two three-dimensional models, each of the three-dimensional models including at least two super patches; obtaining a consistent affinity propagation model according to a first predefined condition and a conventional affinity propagation model, the consistent affinity propagation model being constrained by the first predefined condition which is position information for at least two super patches that are in the super patch set and belong to a common three-dimensional model set; converting the consistent affinity propagation model into a consistent convergence affinity propagation model; clustering the super patch set through the consistent convergence affinity propagation model to generate a co-segmentation outcome for the three-dimensional model set.
    Type: Grant
    Filed: February 8, 2018
    Date of Patent: June 2, 2020
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Dongqing Zou, Xiaogang Wang, Zongji Wang, Qinping Zhao
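
As a rough stand-in for the clustering step in the entry above, the sketch below runs standard affinity propagation (scikit-learn) over toy super-patch descriptors; the patented consistent convergence variant adds cross-model consistency constraints that are not reproduced here.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Toy super-patch descriptors for two 3D models in the set (e.g. per-patch shape
# statistics); real descriptors would be computed from the mesh models.
rng = np.random.default_rng(0)
model_a = rng.normal(loc=0.0, scale=0.2, size=(20, 8))
model_b = rng.normal(loc=1.0, scale=0.2, size=(20, 8))
super_patches = np.vstack([model_a, model_b])

# Standard affinity propagation as a stand-in for the consistent convergence
# affinity propagation model described in the abstract.
clustering = AffinityPropagation(random_state=0).fit(super_patches)
labels = clustering.labels_     # cluster id per super patch = co-segmentation outcome
print(len(set(labels)), "co-segmented parts across the three-dimensional model set")
```
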