Patents Examined by Julius Chenjun Chai
  • Patent number: 11113817
    Abstract: A method and device for generating a three dimensional (3D) bounding box of a region of interest (ROI) of a patient include receiving a two dimensional (2D) maximum intensity projection (MIP) image that is an axial view of the patient and a 2D MIP image that is a sagittal view of the patient. A first 2D bounding box of the ROI of the patient and a second 2D bounding box of the ROI of the patient are detected using the 2D MIP images. A 3D MIP image of the patient is received, and the 3D bounding box of the ROI of the patient is generated using the 3D MIP image, the first 2D bounding box, and the second 2D bounding box. The 3D MIP image including the 3D bounding box is provided.
    Type: Grant
    Filed: August 5, 2019
    Date of Patent: September 7, 2021
    Assignee: TENCENT AMERICA LLC
    Inventors: Hui Tang, Lianyi Han, Min Tu, Kun Wang, Chao Huang, Zhen Qian, Wei Fan
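    The entry above builds a 3D bounding box from two 2D MIP detections. Below is a minimal Python sketch of that box-composition step only, assuming the axial detection supplies the (x, y) extent and the sagittal detection the (y, z) extent; the class names, the axis convention, and taking the intersection of the shared y range are illustrative assumptions, not details from the patent.
```python
from dataclasses import dataclass

@dataclass
class Box2D:
    """A 2D detection; (a, b) are the two in-plane image axes."""
    min_a: float
    min_b: float
    max_a: float
    max_b: float

@dataclass
class Box3D:
    x0: float; y0: float; z0: float   # minimum corner
    x1: float; y1: float; z1: float   # maximum corner

def combine_boxes(axial: Box2D, sagittal: Box2D) -> Box3D:
    """Axial box spans (x, y); sagittal box spans (y, z). The shared y extent
    is taken as the intersection of the two detections."""
    y0 = max(axial.min_b, sagittal.min_a)
    y1 = min(axial.max_b, sagittal.max_a)
    return Box3D(x0=axial.min_a, y0=y0, z0=sagittal.min_b,
                 x1=axial.max_a, y1=y1, z1=sagittal.max_b)

if __name__ == "__main__":
    axial = Box2D(40, 60, 180, 200)      # 2D ROI box detected on the axial MIP
    sagittal = Box2D(65, 10, 195, 90)    # 2D ROI box detected on the sagittal MIP
    print(combine_boxes(axial, sagittal))
```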
  • Patent number: 11113830
    Abstract: Embodiments of the present disclosure are directed to a method for generating simulated point cloud data, a device, and a storage medium. The method includes: acquiring at least one frame of point cloud data collected by a road collecting device in an actual environment without a dynamic obstacle as static scene point cloud data; setting, according to set position association information, at least one dynamic obstacle in a coordinate system matching the static scene point cloud data; simulating in the coordinate system, according to the static scene point cloud data, a plurality of simulated scanning lights emitted by a virtual scanner located at an origin of the coordinate system; and updating the static scene point cloud data according to intersections of the plurality of simulated scanning lights and the at least one dynamic obstacle to obtain the simulated point cloud data comprising point cloud data of the dynamic obstacle.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: September 7, 2021
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Feilong Yan, Jin Fang, Tongtong Zhao, Chi Zhang, Liang Wang, Yu Ma, Ruigang Yang
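    A minimal sketch of the ray-update idea in the entry above, assuming the dynamic obstacle can be approximated by a sphere so that the intersection of a simulated scanning ray with the obstacle has a closed form; the function names, the sphere model, and the occlusion rule are illustrative assumptions rather than the patented method.
```python
import numpy as np

def simulate_with_obstacle(static_points, center, radius):
    """Each static point is treated as the return of a scanning ray from the
    origin (the virtual scanner). If the ray hits the obstacle sphere first,
    the static return is replaced by the intersection point (occlusion)."""
    out, labels = [], []
    for p in static_points:
        range_to_p = np.linalg.norm(p)
        d = p / range_to_p                              # simulated scanning ray direction
        b = d @ center                                  # ray/sphere intersection: solve
        disc = b * b - (center @ center - radius ** 2)  # |t*d - center|^2 = radius^2
        t_hit = b - np.sqrt(disc) if disc >= 0 else np.inf
        if 0.0 < t_hit < range_to_p:                    # obstacle occludes the static return
            out.append(t_hit * d)
            labels.append(1)                            # 1 = dynamic obstacle point
        else:
            out.append(p)
            labels.append(0)                            # 0 = static background point
    return np.asarray(out), np.asarray(labels)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    static_scene = rng.uniform(-20.0, 20.0, size=(1000, 3))   # stand-in static scene point cloud
    pts, lab = simulate_with_obstacle(static_scene,
                                      center=np.array([5.0, 0.0, 0.0]), radius=1.5)
    print("points replaced by the obstacle:", int(lab.sum()))
```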
  • Patent number: 11100665
    Abstract: The application discloses a computer-implemented method (100) of providing a model for estimating an anatomical body measurement value from at least one 2-D ultrasound image including a contour of the anatomical body, the method comprising providing (110) a set of 3-D ultrasound images of the anatomical body; and, for each of said 3-D images, determining (120) a ground truth value of the anatomical body measurement; generating (130) a set of 2-D ultrasound image planes each including a contour of the anatomical body, and for each of the 2-D ultrasound image planes, extrapolating (140) a value of the anatomical body measurement from at least one of an outline contour measurement and a cross-sectional measurement of the anatomical body in the 2-D ultrasound image plane; and generating (150) said model by training a machine-learning algorithm to generate an estimator function of the anatomical body measurement value from at least one of a determined outline contour measurement and a determined cross-sectional measurement.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: August 24, 2021
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Frank Michael Weber, Irina Waechter-Stehle, Christian Buerger
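    A toy sketch of the training step in the entry above, using synthetic spheres as the anatomical body and an ordinary least-squares fit as a stand-in for the (unspecified) machine-learning algorithm; the feature choices, noise levels, and data are assumptions for illustration only.
```python
import numpy as np

rng = np.random.default_rng(1)
radii = rng.uniform(1.0, 5.0, size=200)

# Ground-truth value from the "3-D images": here, the volume of a sphere.
volume = 4.0 / 3.0 * np.pi * radii ** 3

# Values extrapolated from 2-D image planes through the body:
perimeter = 2.0 * np.pi * radii + rng.normal(0.0, 0.1, radii.shape)   # outline contour measurement
area = np.pi * radii ** 2 + rng.normal(0.0, 0.1, radii.shape)         # cross-sectional measurement

# Train the estimator function: volume ~ f(perimeter, area).
X = np.column_stack([perimeter, area, perimeter * area, np.ones_like(radii)])
coef, *_ = np.linalg.lstsq(X, volume, rcond=None)

def estimate_volume(perim: float, ar: float) -> float:
    """Estimator learned from the synthetic 2-D measurements."""
    return float(np.dot([perim, ar, perim * ar, 1.0], coef))

print("estimated:", round(estimate_volume(2 * np.pi * 3.0, np.pi * 9.0), 1),
      " true:", round(4.0 / 3.0 * np.pi * 27.0, 1))
```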
  • Patent number: 11080899
    Abstract: A method is provided for producing a high-resolution three-dimensional digital subtraction angiography image of an examination object. The method includes: providing or recording of a data set of a three-dimensional rotational run of an imaging system around the examination object without administration of contrast agent (e.g., mask run); motion compensation of the data set of the mask run by a method based on the epipolar consistency conditions; providing or recording of a data set of a three-dimensional rotational run of the imaging system around the examination object with administration of contrast agent (e.g., fill run); motion compensation of the data set of the fill run by a method based on the epipolar consistency conditions; reconstructing a first volume from the compensated data set of the mask run (e.g., mask volume) and a second volume from the compensated data set of the fill run (e.g., fill volume).
    Type: Grant
    Filed: August 21, 2019
    Date of Patent: August 3, 2021
    Assignee: Siemens Healthcare GmbH
    Inventors: Markus Kowarschik, Michael Manhart
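    The sketch below covers only the final subtraction that yields the angiography volume once motion-compensated mask and fill volumes are available; the epipolar-consistency motion compensation and the reconstruction itself are not shown, and all names and the clipping of negative values are illustrative assumptions.
```python
import numpy as np

def dsa_volume(mask_volume: np.ndarray, fill_volume: np.ndarray) -> np.ndarray:
    """Voxel-wise digital subtraction of the mask volume from the fill volume;
    negative residuals (noise) are clipped to zero."""
    if mask_volume.shape != fill_volume.shape:
        raise ValueError("mask and fill volumes must share the same voxel grid")
    return np.clip(fill_volume - mask_volume, 0.0, None)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    mask = rng.normal(0.0, 0.01, size=(64, 64, 64))   # anatomy without contrast agent
    fill = mask.copy()
    fill[30:34, 20:44, 32] += 1.0                      # a contrast-filled vessel segment
    vessels = dsa_volume(mask, fill)
    print("max subtracted value:", float(vessels.max()))
```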
  • Patent number: 11055825
    Abstract: A windmill artifact that includes a high-frequency component in a plane perpendicular to the rotation axis can be reduced while the boundary of an organ is kept clear, so that contrast is maintained. The invention relates to a medical image processing device and includes an image acquiring unit that acquires a 3D volumetric image, a Z high-frequency image generating unit that generates a Z high-frequency image, which is the high-frequency component of the 3D volumetric image in the rotation axis direction, an organ component extracting unit that extracts an organ component from the Z high-frequency image, an artifact component extracting unit that extracts an artifact component on the basis of the Z high-frequency image and the organ component, and a corrected image generating unit that generates a corrected image by subtracting the artifact component from the 3D volumetric image.
    Type: Grant
    Filed: December 1, 2017
    Date of Patent: July 6, 2021
    Assignee: Hitachi, Ltd.
    Inventors: Taiga Goto, Hisashi Takahashi
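    An illustrative pipeline for the four components named in the entry above, with simple box filters standing in for the patent's extraction units; the filter sizes and the in-plane smoothing used to separate the organ component from the artifact component are assumptions, not the patented design.
```python
import numpy as np
from scipy.ndimage import uniform_filter, uniform_filter1d

def reduce_windmill_artifact(volume: np.ndarray) -> np.ndarray:
    """volume is ordered (z, y, x) with z as the rotation axis."""
    # 1. Z high-frequency image: remove the low-frequency part along the rotation axis.
    z_low = uniform_filter1d(volume, size=5, axis=0)
    z_high = volume - z_low
    # 2. Organ component: organ boundaries are spatially coherent in the x-y plane,
    #    so an in-plane smoothing of the Z high-frequency image retains them.
    organ = uniform_filter(z_high, size=(1, 7, 7))
    # 3. Artifact component: what remains of the Z high-frequency image.
    artifact = z_high - organ
    # 4. Corrected image: subtract the artifact component from the original volume.
    return volume - artifact

if __name__ == "__main__":
    vol = np.random.default_rng(3).normal(size=(32, 128, 128))
    print(reduce_windmill_artifact(vol).shape)
```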
  • Patent number: 11049222
    Abstract: A smoothing method smooths color values associated with a plurality of grid points arranged in a device-dependent color space, the grid points including a plurality of surface grid points arranged on the surface of the grid point region in which they lie. The method includes calculating polynomial approximation coefficients to be used in a polynomial approximation equation that gives approximate values of color values corresponding to positions in a first processing direction of the device-dependent color space, for a plurality of first target grid points that are among the surface grid points and arranged in the first processing direction, and smoothing the color values associated with the first target grid points using the polynomial approximation equation when those color values are to be smoothed.
    Type: Grant
    Filed: July 29, 2019
    Date of Patent: June 29, 2021
    Assignee: Seiko Epson Corporation
    Inventor: Yuko Yamamoto
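    A minimal sketch of the smoothing idea in the entry above for a single row of surface grid points along one processing direction, with numpy's polynomial fit standing in for the polynomial approximation coefficients; the cubic degree and the one-channel example values are assumptions for illustration.
```python
import numpy as np

def smooth_along_direction(values: np.ndarray, degree: int = 3) -> np.ndarray:
    """Fit a polynomial to the color values of the target grid points along one
    processing direction and return the approximated (smoothed) values."""
    positions = np.arange(len(values), dtype=float)     # grid positions in the processing direction
    coeffs = np.polyfit(positions, values, deg=degree)  # polynomial approximation coefficients
    return np.polyval(coeffs, positions)                # smoothed color values

if __name__ == "__main__":
    # One color channel along a row of surface grid points (illustrative numbers
    # with a non-smooth bump at index 4).
    row = np.array([10.0, 22.0, 35.0, 47.0, 70.0, 61.0, 72.0, 84.0, 95.0])
    print(np.round(smooth_along_direction(row), 1))
```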
  • Patent number: 11010600
    Abstract: A face emotion recognition method based on a dual-stream convolutional neural network applies a multi-scale facial expression recognition network to single-frame face images and face sequences to perform learning and classification. The method includes constructing a multi-scale facial expression recognition network comprising a channel network with a resolution of 224×224 and a channel network with a resolution of 336×336, extracting facial expression characteristics at the different resolutions through the recognition network, combining the static characteristics of single images with the dynamic characteristics of the expression sequence for training and learning, fusing the two channel models, and testing to obtain a facial expression classification result. The method exploits the advantages of deep learning, avoids the bias and long processing time of manual feature extraction, and is therefore more adaptable.
    Type: Grant
    Filed: June 24, 2019
    Date of Patent: May 18, 2021
    Assignee: Sichuan University
    Inventors: Linbo Qing, Songfan Yang, Xiaohai He, Qizhi Teng
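    An architecture-only sketch of the two-channel idea in the entry above, written with PyTorch for illustration; the layer sizes are placeholders rather than the patented networks, the temporal handling of face sequences is omitted, and fusion is shown as a simple average of the two channels' class logits.
```python
import torch
import torch.nn as nn

class Stream(nn.Module):
    """One channel network; adaptive pooling makes it resolution-independent."""
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

class DualStreamFER(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.stream_224 = Stream(num_classes)   # channel network fed 224x224 inputs
        self.stream_336 = Stream(num_classes)   # channel network fed 336x336 inputs

    def forward(self, x224, x336):
        # Model fusion: average the two channels' class logits.
        return 0.5 * (self.stream_224(x224) + self.stream_336(x336))

if __name__ == "__main__":
    model = DualStreamFER()
    out = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 336, 336))
    print(out.shape)   # torch.Size([2, 7])
```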
  • Patent number: 10949703
    Abstract: A method of extracting an impervious surface from a remote sensing image. The method includes: 1) obtaining a remote sensing image of a target region, normalizing the image data, and dividing the normalized target region image into a sample image and a test image; 2) extracting an image feature of each sample image by constructing a deep convolutional network for feature extraction of the remote sensing image; 3) performing pixel-by-pixel category prediction for each sample image; 4) constructing a loss function from the error between the prediction value and the true value of the sample image and performing update training of the network parameters of the deep convolutional network and the network parameters relating to the category prediction; and 5) extracting an image feature from the test image through the deep convolutional network based on the training result obtained in 4).
    Type: Grant
    Filed: July 23, 2019
    Date of Patent: March 16, 2021
    Assignee: WUHAN UNIVERSITY
    Inventors: Zhenfeng Shao, Lei Wang
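    A compact training-loop sketch of steps 2)-5) in the entry above, using a tiny fully convolutional network in PyTorch as a stand-in for the patent's deep convolutional network; the 4-band input, the two-class labels, and the random data are placeholders, not the patented architecture or dataset.
```python
import torch
import torch.nn as nn

net = nn.Sequential(                      # "deep convolutional network" stand-in
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 1),                  # 2 classes per pixel: impervious / pervious
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Normalized 4-band sample patches (assumed R, G, B, NIR) with per-pixel labels.
images = torch.rand(8, 4, 64, 64)
labels = torch.randint(0, 2, (8, 64, 64))

for step in range(5):                     # update training of the network parameters
    optimizer.zero_grad()
    logits = net(images)                  # (batch, 2, H, W) pixel-by-pixel prediction
    loss = loss_fn(logits, labels)        # error between prediction and true value
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.3f}")

# Test step: extract impervious pixels from a test image with the trained network.
test_image = torch.rand(1, 4, 64, 64)
impervious_mask = net(test_image).argmax(dim=1)
print("impervious pixels:", int(impervious_mask.sum()))
```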