Patents by Inventor Zhikang Zou

Zhikang Zou has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240062582
    Abstract: The present disclosure provides a method and device for dynamic recognition of emotion based on facial muscle movement monitoring, including: obtaining muscle movement data on one side and continuous frame images on the other side; obtaining multiple emotional states, associating each emotional state with its corresponding continuous frame images and with the muscle movement data captured at the same location and at the same time as those images, to form a training set; building an emotion recognition model from the training set; and inputting muscle movement data and continuous frame images obtained in real time into the emotion recognition model to obtain a corresponding emotional state. The method builds the emotion recognition model from muscle movement data and continuous frame images, and uses the motion data to compensate for small facial actions that the images cannot capture, thereby obtaining accurate emotion detection results.
    Type: Application
    Filed: October 30, 2023
    Publication date: February 22, 2024
    Applicant: Air Force Medical University
    Inventors: Shengjun Wu, Xufeng Liu, Zhikang Zou, Xuefeng Wang, Xiang Xu, Ping Wei, Xiuchao Wang, Hui Wang, Peng Fang, Kangning Xie, Guoxin Li, Minhua Hu
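    Illustrative sketch: The entry above trains a single recognition model on muscle movement data paired with time-aligned continuous frame images. As a hedged illustration only (the network layout, input shapes, and number of emotional states are assumptions, not taken from the patent), a two-branch fusion classifier could be wired up as follows:

      # Minimal sketch of a two-branch classifier that fuses muscle-movement
      # signals with a short frame sequence; all shapes and sizes are placeholders.
      import torch
      import torch.nn as nn

      class TwoBranchEmotionNet(nn.Module):
          def __init__(self, n_channels=8, n_emotions=6):
              super().__init__()
              # Branch 1: 1-D convolution over the muscle-movement time series.
              self.signal_branch = nn.Sequential(
                  nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
                  nn.AdaptiveAvgPool1d(1), nn.Flatten(),
              )
              # Branch 2: 3-D convolution over the continuous frame images.
              self.frame_branch = nn.Sequential(
                  nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool3d(1), nn.Flatten(),
              )
              # Fusion head: concatenated features -> emotional-state logits.
              self.head = nn.Linear(32 + 16, n_emotions)

          def forward(self, signals, frames):
              # signals: (batch, channels, time); frames: (batch, 3, frames, H, W)
              fused = torch.cat([self.signal_branch(signals),
                                 self.frame_branch(frames)], dim=1)
              return self.head(fused)

      model = TwoBranchEmotionNet()
      logits = model(torch.randn(2, 8, 100), torch.randn(2, 3, 16, 64, 64))
      print(logits.shape)  # torch.Size([2, 6])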
  • Patent number: 11887388
    Abstract: The present disclosure provides an object pose obtaining method and an electronic device, and relates to the technology fields of image processing, computer vision, and deep learning. A detailed implementation is: extracting an image block of an object from an image, and generating a local coordinate system corresponding to the image block; obtaining 2D projection key points in an image coordinate system corresponding to a plurality of 3D key points on a 3D model of the object; converting the 2D projection key points into the local coordinate system to generate corresponding 2D prediction key points; obtaining direction vectors between each pixel point in the image block and each 2D prediction key point, and obtaining a 2D target key point corresponding to each 2D prediction key point based on the direction vectors; and determining a pose of the object according to the 3D key points and the 2D target key points.
    Type: Grant
    Filed: June 14, 2021
    Date of Patent: January 30, 2024
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Xiaoqing Ye, Zhikang Zou, Xiao Tan, Hao Sun
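    Illustrative sketch: The entry above ends by recovering the object pose from the 3D key points and their 2D target key points, which is a standard 2D-3D correspondence (PnP) solve. A minimal example of that final step only, with placeholder key points and intrinsics (the voting scheme that produces the 2D target key points is not reproduced here):

      import numpy as np
      import cv2

      # Placeholder 3-D key points on the object model and pinhole intrinsics.
      object_points = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                                [1, 1, 0], [1, 0, 1]], dtype=np.float64)
      K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

      # Stand-in for the 2-D target key points: project with a known pose.
      rvec_gt = np.array([0.1, -0.2, 0.05])
      tvec_gt = np.array([0.2, -0.1, 5.0])
      image_points, _ = cv2.projectPoints(object_points, rvec_gt, tvec_gt, K, None)

      # Object pose from the 2-D/3-D correspondences.
      ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
      R, _ = cv2.Rodrigues(rvec)  # rotation matrix of the recovered pose
      print(ok, np.allclose(tvec.ravel(), tvec_gt, atol=1e-3))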
  • Patent number: 11875535
    Abstract: A method and an apparatus for calibrating an external parameter of a camera are provided.
    Type: Grant
    Filed: June 4, 2021
    Date of Patent: January 16, 2024
    Assignee: Beijing Baidu Netcom Science and Technology Co., Ltd.
    Inventors: Zhikang Zou, Xiaoqing Ye, Xiao Tan, Hao Sun
  • Publication number: 20230245364
    Abstract: The present disclosure provides a method for processing a video, an electronic device, and a storage medium. A specific implementation solution includes: generating a first three-dimensional movement trajectory of a virtual three-dimensional model in world space based on attribute information of a target contact surface of the virtual three-dimensional model in the world space; converting the first three-dimensional movement trajectory into a second three-dimensional movement trajectory in camera space, where the camera space is three-dimensional space for shooting an initial video; determining a movement sequence of the virtual three-dimensional model in the camera space according to the second three-dimensional movement trajectory; and compositing the virtual three-dimensional model and the initial video by means of texture information of the virtual three-dimensional model and the movement sequence, to obtain a to-be-played target video.
    Type: Application
    Filed: August 9, 2022
    Publication date: August 3, 2023
    Applicant: Beijing Baidu Netcom Science Technology Co., Ltd.
    Inventors: Guanying CHEN, Zhikang ZOU, Xiaoqing YE, Hao SUN
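    Illustrative sketch: The conversion from the world-space trajectory to the camera-space trajectory in the entry above is a rigid transform by the camera extrinsics, X_cam = R * X_world + t. A toy example with a placeholder trajectory and extrinsics (not values from the patent):

      import numpy as np

      def world_to_camera(points_world, R, t):
          """Convert 3-D points from world space to camera space: X_cam = R @ X_world + t."""
          return points_world @ R.T + t

      # Toy world-space trajectory of a virtual model moving along a surface.
      trajectory_world = np.stack([np.linspace(0.0, 1.0, 5),
                                   np.zeros(5),
                                   np.full(5, 0.5)], axis=1)
      theta = np.deg2rad(30)
      R = np.array([[np.cos(theta), -np.sin(theta), 0],
                    [np.sin(theta),  np.cos(theta), 0],
                    [0, 0, 1]])
      t = np.array([0.0, 0.0, 4.0])

      trajectory_camera = world_to_camera(trajectory_world, R, t)
      print(trajectory_camera.shape)  # (5, 3): the same trajectory in camera space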
  • Publication number: 20220366589
    Abstract: A method of determining disparity is provided. The implementation scheme is: obtaining a plurality of images corresponding to a target view, wherein each image in the plurality of images is obtained by performing size adjustment on the target view, and each image in the plurality of images has the same size as a feature map output by a corresponding layer structure in a disparity refinement network; and obtaining a refined disparity map output by the disparity refinement network by at least inputting an initial disparity map into the disparity refinement network, and fusing each image in the plurality of images and the feature map output by the corresponding layer structure, wherein the initial disparity map is generated at least based on the target view.
    Type: Application
    Filed: July 28, 2022
    Publication date: November 17, 2022
    Inventors: Zhikang ZOU, Xiaoqing YE, Hao SUN
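    Illustrative sketch: The key step in the entry above is resizing the target view to the size of each layer's feature map and fusing the two before refinement. A hedged sketch of that fusion (the layer shapes are placeholders, and channel concatenation is used here as one plausible fusion, not the patent's specific operation):

      import torch
      import torch.nn.functional as F

      def fuse_view_with_features(target_view, feature_maps):
          """Resize the target view to each feature map's size and concatenate along channels."""
          fused = []
          for feat in feature_maps:
              resized = F.interpolate(target_view, size=feat.shape[-2:],
                                      mode="bilinear", align_corners=False)
              fused.append(torch.cat([feat, resized], dim=1))
          return fused

      target_view = torch.randn(1, 3, 256, 512)       # the target view
      feature_maps = [torch.randn(1, 32, 64, 128),    # feature maps output by layer
                      torch.randn(1, 16, 128, 256)]   # structures of the refinement network
      for f in fuse_view_with_features(target_view, feature_maps):
          print(f.shape)  # channel count grows by 3 at each scale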
  • Publication number: 20220358735
    Abstract: A method for processing an image may include: acquiring a target image; segmenting a target object in the target image, and determining a mask image according to a segmentation result; rendering the target object according to the target image and the mask image and determining a rendering result; and performing AR displaying according to the rendering result. A device and storage medium may implement the method.
    Type: Application
    Filed: July 27, 2022
    Publication date: November 10, 2022
    Inventors: Bo JU, Zhikang ZOU, Xiaoqing YE, Xiao TAN, Hao SUN
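    Illustrative sketch: The entry above segments the target object, derives a mask image, and renders the object for AR display. A minimal stand-in for the mask-then-render step (the segmentation itself is faked here with a placeholder rectangle, and the RGBA layer is just one way an AR compositor could consume the result):

      import numpy as np

      target_image = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)

      # Placeholder mask image standing in for the segmentation result.
      mask = np.zeros((240, 320), dtype=np.uint8)
      mask[60:180, 100:220] = 255

      # Rendering result as an RGBA layer: object pixels kept, background transparent.
      rgba = np.dstack([target_image, mask])
      object_only = np.where(mask[..., None] > 0, target_image, 0)
      print(rgba.shape, object_only.dtype)  # (240, 320, 4) uint8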
  • Publication number: 20220351398
    Abstract: A depth detection method, a method for training a depth estimation branch network, an electronic device, and a storage medium are provided, which relate to the field of artificial intelligence, particularly to the technical fields of computer vision and deep learning, and may be applied to intelligent robot and automatic driving scenarios. The specific implementation includes: extracting a high-level semantic feature in an image to be detected, wherein the high-level semantic feature is used to represent a target object in the image to be detected; inputting the high-level semantic feature into a pre-trained depth estimation branch network, to obtain distribution probabilities of the target object in respective sub-intervals of a depth prediction interval; and determining a depth value of the target object according to the distribution probabilities of the target object in the respective sub-intervals and depth values represented by the respective sub-intervals.
    Type: Application
    Filed: July 20, 2022
    Publication date: November 3, 2022
    Inventors: Zhikang Zou, Xiaoqing Ye, Hao Sun
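    Illustrative sketch: The final step in the entry above combines the per-sub-interval probabilities with the depth value each sub-interval represents; one natural combination is the probability-weighted sum over the bins. A small numeric example with placeholder bin edges and probabilities:

      import numpy as np

      depth_min, depth_max, n_bins = 0.5, 80.0, 8
      edges = np.linspace(depth_min, depth_max, n_bins + 1)
      bin_centers = 0.5 * (edges[:-1] + edges[1:])   # depth value each sub-interval represents

      # Distribution probabilities of the target object over the sub-intervals.
      probs = np.array([0.02, 0.05, 0.60, 0.20, 0.08, 0.03, 0.01, 0.01])
      assert np.isclose(probs.sum(), 1.0)

      depth = float(np.sum(probs * bin_centers))     # expected depth of the target object
      print(round(depth, 2))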
  • Publication number: 20220058779
    Abstract: The disclosure provides an inpainting method for a human image, an inpainting apparatus for a human image and an electronic device. An image to be processed is received. The image to be processed contains a human image to be processed. A three-dimensional human body model corresponding to the human image to be processed, camera parameters, and human body posture information are generated based on the image to be processed. A segmentation image corresponding to the human image to be processed is generated based on the image to be processed. A processed human image corresponding to the human image to be processed is generated based on the three-dimensional human body model, the camera parameters, the human body posture information, and the segmentation image.
    Type: Application
    Filed: November 2, 2021
    Publication date: February 24, 2022
    Inventors: Zhikang ZOU, Xiaoqing YE, Qu CHEN, Hao SUN
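    Illustrative sketch: One way to read the entry above is that the posed 3D human body model is projected into the image with the estimated camera parameters, and the projected body region is compared with the segmentation image to find pixels to restore. The sketch below illustrates only that projection-and-compare idea, with random placeholder vertices and masks rather than the patent's actual model:

      import numpy as np

      H, W = 240, 320
      K = np.array([[500, 0, W / 2], [0, 500, H / 2], [0, 0, 1]])

      # Placeholder "body model": random 3-D vertices in front of the camera.
      vertices = np.random.uniform([-0.3, -0.6, 2.0], [0.3, 0.6, 2.5], size=(2000, 3))

      # Pinhole projection of the posed body model into the image plane.
      proj = (K @ vertices.T).T
      uv = (proj[:, :2] / proj[:, 2:3]).astype(int)

      body_mask = np.zeros((H, W), dtype=bool)
      valid = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
      body_mask[uv[valid, 1], uv[valid, 0]] = True

      # Placeholder segmentation of the visible person; pixels the model covers but
      # the segmentation misses are candidates for inpainting.
      segmentation = np.zeros((H, W), dtype=bool)
      segmentation[80:200, 120:200] = True
      to_inpaint = body_mask & ~segmentation
      print(int(to_inpaint.sum()), "pixels flagged for inpainting")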
  • Publication number: 20220004801
    Abstract: The present disclosure provides an image processing method and apparatus, a training method for a neural network and apparatus, a device, and a medium. The implementation is: inputting a source domain image and a target domain image into a matching feature extraction network to extract a matching feature of the source domain image and a matching feature of the target domain image, wherein the matching feature of the source domain image and the matching feature of target domain image are mutually matching features in the source domain image and the target domain image, the source domain image is a simulated image generated through rendering based on object pose parameters, and the target domain image is a real image that is actually shot and applicable to training of object pose estimation; and providing the matching feature of the source domain image for the training of the object pose estimation.
    Type: Application
    Filed: September 20, 2021
    Publication date: January 6, 2022
    Inventors: Zhikang ZOU, Xiaoqing YE, Hao SUN
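    Illustrative sketch: The entry above extracts mutually matching features from a rendered source-domain image and a real target-domain image. As a hedged simplification (the shared encoder and the similarity-based matching rule here are assumptions, not the patent's network), matching can be sketched as feature similarity between the two domains:

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      # Shared feature extractor applied to both domains.
      encoder = nn.Sequential(
          nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
          nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
      )

      source_image = torch.randn(1, 3, 64, 64)   # simulated image rendered from pose parameters
      target_image = torch.randn(1, 3, 64, 64)   # real image that is actually shot

      f_src = F.normalize(encoder(source_image).flatten(2), dim=1)   # (1, 32, 256)
      f_tgt = F.normalize(encoder(target_image).flatten(2), dim=1)

      # Cosine-similarity matrix between all source and target feature locations;
      # high-scoring pairs play the role of mutually matching features.
      similarity = torch.einsum("bcm,bcn->bmn", f_src, f_tgt)
      best_match = similarity.argmax(dim=-1)
      print(similarity.shape, best_match.shape)  # (1, 256, 256) (1, 256)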
  • Publication number: 20210358169
    Abstract: A method and an apparatus for calibrating an external parameter of a camera are provided.
    Type: Application
    Filed: June 4, 2021
    Publication date: November 18, 2021
    Applicant: Beijing Baidu Netcom Science and Technology Co., Ltd.
    Inventors: Zhikang ZOU, Xiaoqing YE, Xiao TAN, Hao SUN
  • Publication number: 20210350541
    Abstract: The disclosure provides a portrait extracting method, a portrait extracting apparatus and a storage medium. The method includes: obtaining an image to be processed; obtaining a semantic segmentation result and an instance segmentation result of the image, in which the semantic segmentation result includes a mask image of a portrait area of the image, and the instance segmentation result includes a mask image of at least one portrait in the image; fusing the mask image of the at least one portrait and the mask image of the portrait area to generate a fused mask image of the at least one portrait; and extracting the at least one portrait in the image based on the fused mask image of the at least one portrait.
    Type: Application
    Filed: July 22, 2021
    Publication date: November 11, 2021
    Inventors: Qu CHEN, Xiaoqing Ye, Zhikang Zou, Hao Sun
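    Illustrative sketch: The fusion step in the entry above combines each portrait's instance mask with the semantic portrait-area mask, and the fused mask is then used to extract that portrait. A toy example with synthetic masks (intersection is used here as one plausible fusion, not necessarily the patent's exact operation):

      import numpy as np

      image = np.random.randint(0, 256, (200, 200, 3), dtype=np.uint8)

      semantic_mask = np.zeros((200, 200), dtype=bool)   # portrait area of the whole image
      semantic_mask[20:180, 30:170] = True

      instance_masks = [np.zeros((200, 200), dtype=bool) for _ in range(2)]
      instance_masks[0][20:180, 30:95] = True            # first portrait
      instance_masks[1][20:180, 105:170] = True          # second portrait

      portraits = []
      for inst in instance_masks:
          fused = inst & semantic_mask                   # fused mask image of this portrait
          portraits.append(np.where(fused[..., None], image, 0))

      print(len(portraits), portraits[0].shape)  # 2 (200, 200, 3)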
  • Publication number: 20210304438
    Abstract: The present disclosure provides an object pose obtaining method and an electronic device, and relates to the technology fields of image processing, computer vision, and deep learning. A detailed implementation is: extracting an image block of an object from an image, and generating a local coordinate system corresponding to the image block; obtaining 2D projection key points in an image coordinate system corresponding to a plurality of 3D key points on a 3D model of the object; converting the 2D projection key points into the local coordinate system to generate corresponding 2D prediction key points; obtaining direction vectors between each pixel point in the image block and each 2D prediction key point, and obtaining a 2D target key point corresponding to each 2D prediction key point based on the direction vectors; and determining a pose of the object according to the 3D key points and the 2D target key points.
    Type: Application
    Filed: June 14, 2021
    Publication date: September 30, 2021
    Applicant: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Xiaoqing Ye, Zhikang Zou, Xiao Tan, Hao Sun