Patents Examined by Tsung-Yin Tsai
  • Patent number: 11321835
    Abstract: A method, a non-transitory computer readable medium and a system for determining three dimensional (3D) information of structural elements of a substrate.
    Type: Grant
    Filed: March 17, 2020
    Date of Patent: May 3, 2022
    Assignee: APPLIED MATERIALS ISRAEL LTD.
    Inventors: Anna Levant, Rafael Bistritzer
  • Patent number: 11244446
    Abstract: The present disclosure relates to systems and methods for imaging. The method may include obtaining a real-time representation of a subject. The method may also include determining at least one scanning parameter associated with the subject by automatically processing the representation according to a parameter obtaining model. The method may further include performing a scan on the subject based at least in part on the at least one scanning parameter.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: February 8, 2022
    Assignee: SHANGHAI UNITED IMAGING INTELLIGENCE CO., LTD.
    Inventors: Ziyan Wu, Shanhui Sun, Arun Innanje
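
The abstract above (11244446) describes a high-level pipeline: a real-time representation of the subject is fed to a parameter obtaining model, and the resulting scanning parameter drives the scan. A minimal sketch follows; the silhouette-based model, the row-range parameter, and `perform_scan` are stand-ins invented for illustration, not the patented components.

```python
# Minimal sketch (stand-in model and scanner): derive a scanning parameter from a
# real-time representation of the subject, then scan using that parameter.
import numpy as np

def parameter_obtaining_model(representation):
    """Stand-in for the trained model: estimate a scan range from the subject's
    extent in the representation (e.g. a depth or camera image)."""
    rows = np.where(representation.max(axis=1) > 0.5)[0]
    return {"scan_start_row": int(rows.min()), "scan_end_row": int(rows.max())}

def perform_scan(subject_id, params):
    return f"scanning {subject_id} over rows {params['scan_start_row']}-{params['scan_end_row']}"

representation = np.zeros((100, 100))
representation[20:80, 30:70] = 1.0                  # toy subject silhouette
print(perform_scan("subject-001", parameter_obtaining_model(representation)))
```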
  • Patent number: 11232572
    Abstract: A system and method are disclosed for segmenting a set of two-dimensional CT slices corresponding to a lesion. In an embodiment, for each of at least a subset of the set of CT slices, the system inputs the CT slice into a plurality of branches of a trained segmentation block. Each branch of the segmentation block includes a convolutional neural network (CNN) with filters at a different scale, and produces one or more levels of output. The system generates, for each CT slice in the subset, feature maps for each level of output. The system generates a segmentation of each CT slice in the subset based on the feature maps of each level of output. The system aggregates the segmentations of each slice in the subset to generate a three-dimensional segmentation of the lesion. The system transmits data representing the three-dimensional segmentation to a user interface for display.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: January 25, 2022
    Assignee: Merck Sharp & Dohme Corp.
    Inventors: Antong Chen, Gregory Goldmacher, Bo Zhou
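
As a rough illustration of the multi-branch, multi-scale idea in patent 11232572's abstract, the sketch below runs each 2D CT slice through several small convolutional branches with different kernel sizes, averages their outputs, thresholds the result, and stacks the per-slice masks into a 3D segmentation. The layer sizes, kernel sizes, and 0.5 threshold are assumptions; this is not the patented architecture.

```python
# Minimal sketch (not the patented implementation): multi-branch 2D segmentation
# of CT slices at different filter scales, aggregated into a 3D mask.
import torch
import torch.nn as nn

class ScaleBranch(nn.Module):
    """One branch: a tiny CNN whose filters operate at a given scale (kernel size)."""
    def __init__(self, k):
        super().__init__()
        p = k // 2
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, k, padding=p), nn.ReLU(),
            nn.Conv2d(8, 8, k, padding=p), nn.ReLU(),
            nn.Conv2d(8, 1, 1),                          # per-pixel lesion logit
        )
    def forward(self, x):
        return self.net(x)

class MultiScaleSegmenter(nn.Module):
    def __init__(self, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList([ScaleBranch(k) for k in kernel_sizes])
    def forward(self, slice_2d):                         # slice_2d: (1, 1, H, W)
        logits = torch.stack([b(slice_2d) for b in self.branches]).mean(dim=0)
        return (torch.sigmoid(logits) > 0.5).squeeze()   # (H, W) boolean mask

model = MultiScaleSegmenter()
ct_slices = torch.randn(16, 1, 64, 64)                   # stand-in for a CT subset
volume_mask = torch.stack([model(s.unsqueeze(0)) for s in ct_slices])  # (16, H, W)
print(volume_mask.shape)
```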
  • Patent number: 11222226
    Abstract: A monitoring-screen-data generation device includes an object-data generation unit, a screen-data generation unit, and an assignment processing unit. The object-data generation unit identifies a plurality of objects included in an image based on image data, and generates object data. The screen-data generation unit generates monitoring screen data on the basis of the object data. On the basis of definition data that defines a state transition and the object data, the assignment processing unit assigns data that defines the state transition to an image object included in a monitoring screen of the monitoring screen data.
    Type: Grant
    Filed: March 21, 2017
    Date of Patent: January 11, 2022
    Assignee: MITSUBISHI ELECTRIC CORPORATION
    Inventor: Shingo Sodenaga
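
A minimal data-flow sketch of the monitoring-screen generation described above (11222226): object data is generated from identified image objects, screen data is built from it, and state-transition definition data is assigned to matching image objects. The `ImageObject` class and dictionary-based screen data are illustrative stand-ins, not the claimed device.

```python
# Minimal sketch (illustrative data flow only): object data -> monitoring screen
# data -> assignment of state-transition definition data to image objects.
from dataclasses import dataclass, field

@dataclass
class ImageObject:
    name: str
    bounds: tuple                          # (x0, y0, x1, y1) in the source image
    state_transition: dict = field(default_factory=dict)

def generate_screen_data(objects):
    return {"widgets": objects}            # stand-in for monitoring screen data

def assign_state_transitions(screen_data, definition_data):
    for obj in screen_data["widgets"]:
        if obj.name in definition_data:
            obj.state_transition = definition_data[obj.name]
    return screen_data

objects = [ImageObject("pump-1", (10, 10, 60, 60)), ImageObject("valve-2", (80, 10, 120, 60))]
screen = assign_state_transitions(generate_screen_data(objects), {"pump-1": {"off": "on"}})
print(screen["widgets"][0])
```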
  • Patent number: 11213247
    Abstract: An image processing system, comprising an input interface (IN) for receiving a plurality of input images acquired of test objects. The system further comprises a material type analyzer (MTA) configured to produce material type readings at corresponding locations across said input images (IM(CH)). A statistical module (SM) of the system is configured to determine based on said readings an estimate for a probability distribution of material type for said corresponding locations.
    Type: Grant
    Filed: June 29, 2017
    Date of Patent: January 4, 2022
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Dominik Benjamin Kutra, Thomas Buelow, Joerg Sabczynski, Kirsten Regina Meetz
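
One way to read the statistical module in patent 11213247's abstract is as an empirical per-location distribution over material types computed from repeated readings. The sketch below assumes the readings are already integer material-type labels aligned across images; that representation is an assumption made for illustration.

```python
# Minimal sketch (assumed simplification): estimate a per-location categorical
# probability distribution of material type from repeated readings.
import numpy as np

def material_type_distribution(readings, n_types):
    """readings: (n_images, H, W) integer material-type labels per location.
    Returns (H, W, n_types) empirical probabilities."""
    n_images, h, w = readings.shape
    counts = np.zeros((h, w, n_types))
    for t in range(n_types):
        counts[..., t] = (readings == t).sum(axis=0)
    return counts / n_images

readings = np.random.randint(0, 3, size=(10, 4, 4))    # 10 images, 3 material types
probs = material_type_distribution(readings, n_types=3)
print(probs[0, 0], probs[0, 0].sum())                   # sums to 1 at each location
```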
  • Patent number: 11210494
    Abstract: Method and apparatus for segmenting a cellular image are disclosed. A specific embodiment of the method includes: acquiring a cellular image; enhancing the cellular image using a generative adversarial network to obtain an enhanced cellular image; and segmenting the enhanced cellular image using a hierarchical fully convolutional network for image segmentation to obtain cytoplasm and zona pellucida areas in the cellular image.
    Type: Grant
    Filed: October 22, 2019
    Date of Patent: December 28, 2021
    Assignee: The Chinese University of Hong Kong
    Inventors: Yiu Leung Chan, Mingpeng Zhao, Han Hui Li, Tin Chiu Li
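
A minimal two-stage sketch of the pipeline in patent 11210494's abstract: an enhancement network followed by a fully convolutional segmenter that labels background, cytoplasm, and zona pellucida. Both networks below are tiny placeholders standing in for the GAN generator and the hierarchical FCN; their shapes and the three-class layout are assumptions.

```python
# Minimal sketch (placeholder networks, not the patented models): enhance a
# cellular image with a generator, then segment it with a fully convolutional
# network into background / cytoplasm / zona pellucida classes.
import torch
import torch.nn as nn

enhancer = nn.Sequential(                      # stand-in for the GAN generator
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
segmenter = nn.Sequential(                     # stand-in for the hierarchical FCN
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 1),                       # 3 output classes
)

image = torch.randn(1, 1, 128, 128)
with torch.no_grad():
    enhanced = enhancer(image)
    labels = segmenter(enhanced).argmax(dim=1) # (1, 128, 128) class map
cytoplasm_mask = labels == 1
zona_mask = labels == 2
print(cytoplasm_mask.shape, zona_mask.shape)
```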
  • Patent number: 11200668
    Abstract: Method and system for grading a tumor. For example, a system for grading a tumor comprising: an image obtaining module configured to obtain a pathological image of a tissue to be examined; a snippet obtaining module configured to obtain one or more snippets having one or more sizes from the pathological image; an analyzing module configured to obtain one or more classification features based on at least analyzing the one or more snippets using one or more selected trained detection models of the analyzing module, wherein each selected trained detection model is configured to identify one or more classification features; and an outputting module configured to determine a tumor identification result based on at least the one or more classification features and output the tumor identification result.
    Type: Grant
    Filed: October 18, 2019
    Date of Patent: December 14, 2021
    Assignee: Shanghai United Imaging Intelligence Co., Ltd.
    Inventors: Qiuping Chun, Feng Shi, Yiqiang Zhan
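
The modular structure in patent 11200668's abstract (snippet obtaining, analysis with trained detection models, output) can be sketched as a simple pipeline. The tiling scheme, `toy_detection_model`, and the 0.3 decision threshold below are invented stand-ins for illustration only.

```python
# Minimal sketch (illustrative module layout only): extract snippets of several
# sizes from a pathology image, run a detection model on them, and combine the
# resulting classification features into a grading result.
import numpy as np

def obtain_snippets(image, sizes=(64, 128)):
    """Tile the image into non-overlapping square snippets of each size."""
    snippets = []
    for s in sizes:
        for y in range(0, image.shape[0] - s + 1, s):
            for x in range(0, image.shape[1] - s + 1, s):
                snippets.append(image[y:y + s, x:x + s])
    return snippets

def toy_detection_model(snippet):
    """Stand-in for a trained detection model: returns one classification feature."""
    return float(snippet.mean() > 0.5)

def grade(image):
    features = [toy_detection_model(s) for s in obtain_snippets(image)]
    return "high grade" if np.mean(features) > 0.3 else "low grade"

print(grade(np.random.rand(256, 256)))
```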
  • Patent number: 11195040
    Abstract: A monitoring-screen-data generation device includes an object-data generation unit, a screen-data generation unit, and an assignment processing unit. The object-data generation unit identifies a plurality of objects included in an image based on image data, and generates object data. The screen-data generation unit generates monitoring screen data on the basis of the object data. On the basis of definition data that defines a state transition and the object data, the assignment processing unit assigns data that defines the state transition to an image object included in a monitoring screen of the monitoring screen data.
    Type: Grant
    Filed: March 21, 2017
    Date of Patent: December 7, 2021
    Assignee: MITSUBISHI ELECTRIC CORPORATION
    Inventor: Shingo Sodenaga
  • Patent number: 11170502
    Abstract: Provided is a deep-neural-network-based method for extracting appearance and geometry features for pulmonary texture classification, which belongs to the technical fields of medical image processing and computer vision. Taking 217 pulmonary computed tomography images as original data, several groups of datasets are generated through a preprocessing procedure; each group includes a CT image patch, a corresponding image patch containing geometry information, and a ground-truth label. A dual-branch residual network is constructed whose two branches separately take the CT image patches and the corresponding geometry image patches as input. The network learns appearance and geometry information of pulmonary textures and fuses them to achieve high accuracy in pulmonary texture classification. In addition, the proposed network architecture is clear and easy to construct and implement.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: November 9, 2021
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Rui Xu, Xinchen Ye, Lin Lin, Haojie Li, Xin Fan, Zhongxuan Luo
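
A toy version of the dual-branch residual network described in patent 11170502's abstract: one branch takes the CT (appearance) patch, the other the geometry patch, and their features are fused before classification. Channel counts, patch size, and the seven-class output are assumptions, not the published architecture.

```python
# Minimal sketch (toy dimensions, not the published architecture): a dual-branch
# residual network that fuses an appearance (CT) patch with a geometry patch
# for pulmonary texture classification.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
    def forward(self, x):
        return torch.relu(x + self.conv(x))        # residual connection

class DualBranchNet(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.appearance = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), ResBlock(16))
        self.geometry = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), ResBlock(16))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(32, n_classes))
    def forward(self, ct_patch, geom_patch):
        fused = torch.cat([self.appearance(ct_patch), self.geometry(geom_patch)], dim=1)
        return self.head(fused)                    # class logits

net = DualBranchNet()
logits = net(torch.randn(4, 1, 32, 32), torch.randn(4, 1, 32, 32))
print(logits.shape)                                # (4, 7)
```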
  • Patent number: 11164312
    Abstract: A system for quantifying the density level of tumor-infiltrating lymphocytes (TILs), based on prediction of reconstructed TIL information associated with tumoral tissue image data during pathology analysis, is disclosed. The system receives digitized diagnostic and stained whole-slide image data related to tissue of a particular type of tumoral data. Regions of interest are defined that represent a portion, or the full image, of the whole-slide image data. The image data is encoded into segmented data portions based on convolutional autoencoding of objects associated with the collection of image data. The density of tumor-infiltrating lymphocytes is determined for bounded segmented data portions for respective classification of the regions of interest. A classification label is assigned to the regions of interest, and it is determined whether an assigned classification label is above a pre-determined threshold probability value of lymphocyte infiltration.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: November 2, 2021
    Assignees: The Research Foundation for the State University of New York, Board of Regents, The University of Texas System, Institute for Systems Biology
    Inventors: Joel Haskin Saltz, Tahsin Kurc, Rajarsi Gupta, Tianhao Zhao, Rebecca Batiste, Le Hou, Vu Nguyen, Dimitrios Samaras, Arvind Rao, John Van Arnam, Pankaj Singh, Alexander Lazar, Ashish Sharma, Ilya Shmulevich, Vesteinn Thorsson
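
A minimal sketch of the final step in patent 11164312's abstract: computing TIL density over regions of interest and keeping classification labels only when they exceed a probability threshold. The per-patch probability map, region format, and 0.5 threshold are assumed inputs for illustration.

```python
# Minimal sketch (assumed inputs): compute tumor-infiltrating-lymphocyte density
# over regions of interest and label only regions above a probability threshold.
import numpy as np

def til_density(til_probability_map, region):
    """region: (y0, y1, x0, x1) bounds of a region of interest."""
    y0, y1, x0, x1 = region
    return float(til_probability_map[y0:y1, x0:x1].mean())

def classify_regions(til_probability_map, regions, threshold=0.5):
    labels = {}
    for name, region in regions.items():
        density = til_density(til_probability_map, region)
        labels[name] = ("TIL-infiltrated", density) if density > threshold \
                       else ("not infiltrated", density)
    return labels

prob_map = np.random.rand(512, 512)                 # stand-in per-patch TIL probabilities
regions = {"ROI-1": (0, 128, 0, 128), "ROI-2": (256, 384, 256, 384)}
print(classify_regions(prob_map, regions))
```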
  • Patent number: 11138423
    Abstract: Arbitrary image data may be transformed into data suitable for optical character recognition (OCR) processing. A processor may generate a plurality of intermediate feature layers of an image using convolutional neural network (CNN) processing. For each intermediate feature layer, the processor may generate at least one text proposal using a region proposal network (RPN). The at least one text proposal may comprise a portion of the intermediate feature layer that is predicted to contain text. The processor may merge the text proposals with one another to form a patch of the image that is predicted to contain text. The processor may determine outer coordinates of the patch. The outer coordinates may comprise at least leftmost, rightmost, topmost, and bottommost coordinates. The processor may generate a quadrilateral of the image that is a smallest quadrilateral including the leftmost, rightmost, topmost, and bottommost coordinates.
    Type: Grant
    Filed: July 29, 2019
    Date of Patent: October 5, 2021
    Assignee: Intuit Inc.
    Inventors: Terrence J. Torres, Homa Foroughi
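
The geometric tail end of patent 11138423's abstract (merging text proposals into a patch, finding its leftmost, rightmost, topmost, and bottommost coordinates, and forming the smallest enclosing quadrilateral) can be sketched directly. The CNN and RPN stages are replaced here by already-given proposal boxes, and the quadrilateral is taken to be axis-aligned; both are simplifying assumptions.

```python
# Minimal sketch (geometry only; the CNN/RPN stages are replaced by given boxes):
# merge text proposals into a patch, find its outer coordinates, and return the
# smallest axis-aligned quadrilateral containing them.
import numpy as np

def merge_proposals_to_quad(proposal_boxes):
    """proposal_boxes: (N, 4) array of [x0, y0, x1, y1] text proposals predicted
    to contain text. Returns the 4 corner points of the enclosing quadrilateral."""
    corners = np.concatenate([proposal_boxes[:, :2], proposal_boxes[:, 2:]], axis=0)
    leftmost, rightmost = corners[:, 0].min(), corners[:, 0].max()
    topmost, bottommost = corners[:, 1].min(), corners[:, 1].max()
    return np.array([[leftmost, topmost], [rightmost, topmost],
                     [rightmost, bottommost], [leftmost, bottommost]])

proposals = np.array([[10, 20, 60, 35], [55, 18, 120, 36], [118, 22, 170, 34]])
print(merge_proposals_to_quad(proposals))   # quadrilateral around the merged text patch
```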
  • Patent number: 11132770
    Abstract: An image processing method includes: obtaining an original photographed image and original depth information of at least one pixel point in the original photographed image; determining blurring degree data of each of the at least one pixel point in the original photographed image according to blurring parameter data and the original depth information of the at least one pixel point; and performing blurring processing on the original photographed image according to the blurring degree data of the at least one pixel point to generate a post-processing result image.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: September 28, 2021
    Assignee: SHENZHEN SENSETIME TECHNOLOGY CO., LTD
    Inventors: Xiangyu Mao, Qiong Yan, Wenxiu Sun, Sijie Ren
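
A rough sketch of the blurring pipeline in patent 11132770's abstract: a per-pixel blurring degree is computed from blurring parameters and depth, and the image is blurred accordingly. The thin-lens-style degree formula and the variable-radius box blur are assumptions chosen for brevity, not the patented method.

```python
# Minimal sketch (assumed thin-lens-style blur model): compute a per-pixel
# blurring degree from depth and blur parameters, then blur accordingly.
import numpy as np

def blurring_degree(depth, focus_depth, strength):
    """Blurring degree grows with distance from the in-focus plane."""
    return strength * np.abs(depth - focus_depth) / (depth + 1e-6)

def blur_image(image, degree, max_radius=5):
    """Very small box blur whose radius follows the per-pixel blurring degree."""
    out = image.copy()
    radii = np.clip((degree * max_radius).astype(int), 0, max_radius)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            r = radii[y, x]
            if r > 0:
                out[y, x] = image[max(0, y - r):y + r + 1,
                                  max(0, x - r):x + r + 1].mean()
    return out

image = np.random.rand(64, 64)
depth = np.linspace(0.5, 5.0, 64 * 64).reshape(64, 64)   # stand-in depth map
result = blur_image(image, blurring_degree(depth, focus_depth=1.0, strength=1.0))
print(result.shape)
```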
  • Patent number: 11120622
    Abstract: A method of determining a biophysical model for a lung of a patient from multiple x-ray measurements corresponding to different breathing phases of the lung is provided. The method includes extracting multiple displacement fields of lung tissue from the multiple x-ray measurements corresponding to different breathing phases of the lung. Each displacement field represents movement of the lung tissue from a first breathing phase to a second breathing phase and each breathing phase has a corresponding set of biometric parameters. The method includes calculating one or more biophysical parameters of a biophysical model of the lung using the multiple displacement fields of the lung tissue between different breathing phases of the lung and the corresponding sets of biometric parameters.
    Type: Grant
    Filed: May 22, 2019
    Date of Patent: September 14, 2021
    Assignee: DATA INTEGRITY ADVISORS, LLC
    Inventor: Janid Blanco Kiely
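
As a heavily simplified illustration of fitting biophysical parameters from displacement fields and biometric parameters (patent 11120622), the sketch below fits a single compliance-like coefficient by least squares under a toy linear model. The model form and the choice of air-volume change as the biometric parameter are assumptions; the actual biophysical model is not specified here.

```python
# Minimal sketch (toy linear model, not the patented biophysical model): fit a
# compliance-like parameter relating mean tissue displacement between breathing
# phases to the change in a biometric parameter (here, air volume).
import numpy as np

def fit_compliance(displacement_fields, volume_changes):
    """displacement_fields: list of (N, 3) displacement vectors per phase pair.
    volume_changes: per-pair change in the biometric parameter."""
    mean_displacement = np.array([np.linalg.norm(d, axis=1).mean()
                                  for d in displacement_fields])
    # Least-squares solution of mean_displacement ~= compliance * volume_change
    v = np.asarray(volume_changes, dtype=float)
    return float((v @ mean_displacement) / (v @ v))

fields = [np.random.rand(500, 3) * dv for dv in (0.2, 0.5, 0.8)]
print(fit_compliance(fields, volume_changes=[0.2, 0.5, 0.8]))
```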
  • Patent number: 11120269
    Abstract: The present disclosure relates to a method and apparatus for determining a target's rotation direction. The method comprises: inputting successive video frames that contain a rotating target; establishing a background model for the first image frame in the video frames; performing foreground detection on the video frames other than the first frame by means of the background model so as to determine the rotation axis of the rotating target; obtaining the distribution of optical-flow points within a preset region around the rotation axis; and determining the rotation direction of the target according to the distribution of optical-flow points within the preset region. By means of the present disclosure, the rotation direction (clockwise or counter-clockwise) of a rotating target in a video can be determined simply and efficiently.
    Type: Grant
    Filed: March 29, 2018
    Date of Patent: September 14, 2021
    Assignees: BEIJING JINGDONG SHANGKE INFORMATION TECHNOLOGY CO., LTD., BEIJING JINGDONG CENTURY TRADING CO., LTD.
    Inventors: Guangfu Che, Shan An, Xiaozhen Ma, Yu Chen
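
A minimal sketch of the decision step in patent 11120269's abstract: once the rotation axis and the optical-flow points near it are known, the sign of the flow on one side of the axis indicates the rotation direction. In-plane rotation, image coordinates with y pointing down, and the 50-pixel band are assumptions for illustration.

```python
# Minimal sketch (assumed in-plane rotation and precomputed flow): decide the
# rotation direction from the distribution of optical-flow vectors on either
# side of the rotation axis (image coordinates: x right, y down).
import numpy as np

def rotation_direction(points, flow, center_x, band=50):
    """points: (N, 2) pixel positions; flow: (N, 2) optical-flow vectors (dx, dy).
    Only points within `band` pixels of the axis column are used."""
    near_axis = np.abs(points[:, 0] - center_x) < band
    right_side = points[:, 0] > center_x
    dy_right = flow[near_axis & right_side, 1].mean()
    # With y pointing down, points right of the axis move down when the target
    # rotates clockwise on screen.
    return "clockwise" if dy_right > 0 else "counterclockwise"

points = np.array([[120.0, 100.0], [130.0, 90.0], [80.0, 110.0]])
flow = np.array([[0.1, 2.0], [0.2, 1.5], [-0.1, -1.8]])
print(rotation_direction(points, flow, center_x=100.0))   # "clockwise"
```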
  • Patent number: 11120578
    Abstract: A computer-implemented method evaluates which of a set of reference indicators has colour attributes closest to those of an object. Each respective reference indicator has different colour attributes. The method comprises determining a colour relationship between the object and at least one reference indicator of a first type by analysing a plurality of image portions, each comprising at least part of the object and at least one reference indicator of the first type placed directly on the object. The image portions each have different lighting characteristics. The method also comprises comparing the determined colour relationship to known colour relationships associated with the at least one reference indicator, each known colour relationship representing a relationship between the reference indicator and another reference indicator of the set, to identify which reference indicator of the set has colour attributes closest to those of the object.
    Type: Grant
    Filed: January 26, 2018
    Date of Patent: September 14, 2021
    Assignee: Anthropics Technology Limited
    Inventors: Ivor Simpson, Tony Polichroniadis
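
A simplified sketch of the comparison in patent 11120578's abstract: a colour relationship between the object and a reference indicator is estimated across several differently lit image portions and matched against known relationships. Representing relationships as mean RGB differences and using Euclidean distance are assumptions made for brevity.

```python
# Minimal sketch (assumed simple colour model): estimate the colour relationship
# between the object and a reference indicator across several differently lit
# image portions, then pick the reference whose known relationship is closest.
import numpy as np

def colour_relationship(object_pixels, indicator_pixels):
    """Mean RGB difference between the object and the reference indicator."""
    return object_pixels.mean(axis=0) - indicator_pixels.mean(axis=0)

def closest_reference(portions, known_relationships):
    """portions: list of (object_pixels, indicator_pixels) pairs, one per lighting.
    known_relationships: {reference_name: expected RGB difference}."""
    observed = np.mean([colour_relationship(o, i) for o, i in portions], axis=0)
    return min(known_relationships,
               key=lambda name: np.linalg.norm(observed - known_relationships[name]))

portions = [(np.random.rand(50, 3), np.random.rand(50, 3)) for _ in range(3)]
known = {"shade-01": np.array([0.1, 0.0, -0.1]), "shade-02": np.array([-0.2, 0.1, 0.0])}
print(closest_reference(portions, known))
```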
  • Patent number: 11113543
    Abstract: A facility inspection system prevents a normal part from being detected as abnormal because of an alignment deviation caused by the presence or absence of a moving object when detecting abnormal parts in the surrounding environment of a vehicle moving on a track. The system includes a photographing device, a storage device, a separation unit, an alignment unit, and an extraction unit. The photographing device photographs the surrounding environment of the moving vehicle. The storage device stores a reference alignment point cloud and a reference difference-extraction point cloud for each position on the track. The separation unit separates the alignment point cloud from a three-dimensional point cloud. The alignment unit aligns the reference alignment point cloud with the alignment point cloud and outputs alignment information. The extraction unit extracts the difference between the three-dimensional point cloud deformed based on the alignment information and the reference difference-extraction point cloud.
    Type: Grant
    Filed: December 1, 2017
    Date of Patent: September 7, 2021
    Assignee: HITACHI HIGH-TECH FINE SYSTEMS CORPORATION
    Inventors: Nobuhiro Chihara, Masahiko Honda, Toshihide Kishi, Masashi Shinbo, Kiyotake Horie
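
A minimal sketch of the extraction unit's role in patent 11113543's abstract, with the alignment step assumed already applied: points of the measured cloud that have no reference point within a tolerance are reported as differences. The KD-tree query and the 0.05 tolerance are illustrative choices, not the claimed implementation.

```python
# Minimal sketch (alignment assumed already applied): extract the difference
# between an aligned three-dimensional point cloud and a reference
# difference-extraction point cloud as the points with no nearby reference point.
import numpy as np
from scipy.spatial import cKDTree

def extract_difference(aligned_points, reference_points, tolerance=0.05):
    """Returns points of the aligned cloud farther than `tolerance` (in the
    cloud's units, e.g. metres) from every reference point."""
    distances, _ = cKDTree(reference_points).query(aligned_points)
    return aligned_points[distances > tolerance]

reference = np.random.rand(1000, 3)
measured = np.vstack([reference + np.random.normal(0, 0.005, reference.shape),
                      np.array([[2.0, 2.0, 2.0]])])        # one anomalous point
print(extract_difference(measured, reference))              # the anomaly remains
```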
  • Patent number: 11113558
    Abstract: An information processing apparatus includes an extraction unit that extracts a character string corresponding to a keyword from a character string including the keyword described across plural lines, in accordance with an extraction condition of the character string corresponding to the keyword, a combining unit that combines character strings extracted by the extraction unit in accordance with a line sequence, and an output unit that outputs the character strings combined by the combining unit as a character string corresponding to the keyword.
    Type: Grant
    Filed: August 1, 2019
    Date of Patent: September 7, 2021
    Assignee: FUJIFILM Business Innovation Corp.
    Inventors: Kunihiko Kobayashi, Junichi Shimizu, Daigo Horie
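
A small sketch of the behaviour described in patent 11113558's abstract: extract the value of a keyword even when it continues over several lines, then combine the pieces in line order. The indentation-based extraction condition and the sample document are assumptions for illustration.

```python
# Minimal sketch (assumed extraction condition): collect the value written after
# a keyword even when it continues across several lines, then join the pieces
# in line order.
def extract_keyword_value(lines, keyword, continuation_indent="    "):
    pieces, capturing = [], False
    for line in lines:
        if line.startswith(keyword + ":"):
            pieces.append(line[len(keyword) + 1:].strip())
            capturing = True
        elif capturing and line.startswith(continuation_indent):
            pieces.append(line.strip())          # keyword value continued on the next line
        else:
            capturing = False
    return " ".join(pieces)

document = [
    "Address: 1-2-3 Example",
    "    Minato-ku, Tokyo",
    "Phone: 000-0000-0000",
]
print(extract_keyword_value(document, "Address"))   # "1-2-3 Example Minato-ku, Tokyo"
```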
  • Patent number: 11113807
    Abstract: A method of training a detection system that is able to acquire volume image data in an additively manufactured object for the detection of process irregularities comprises the steps of: a) receiving process irregularity data referring to a selected location within an additively manufactured reference object, in which selected location a predefined process irregularity occurred during the additive manufacture of the object; b) acquiring, by said detection system, volume image data of a volume of the reference object comprising at least the selected location; c) identifying within the volume image data characteristic data which represent a difference between the volume image data of the selected location and the volume image data of at least one other location of the reference object and/or of a number of other additively manufactured objects in which no process irregularity has occurred and/or no process irregularity is suspected; and d) assigning to the predefined process irregularity the characteristic data identified in step c).
    Type: Grant
    Filed: January 12, 2017
    Date of Patent: September 7, 2021
    Assignee: EOS GmbH Electro Optical Systems
    Inventor: Juha Kotila
  • Patent number: 11113526
    Abstract: A method for training a deep neural network of a robotic device is described. The method includes constructing a 3D model using images captured via a 3D camera of the robotic device in a training environment. The method also includes generating pairs of 3D images from the 3D model by artificially adjusting parameters of the training environment to form manipulated images using the deep neural network. The method further includes processing the pairs of 3D images to form a reference image including embedded descriptors of common objects between the pairs of 3D images. The method also includes using the reference image from training of the neural network to determine correlations to identify detected objects in future images.
    Type: Grant
    Filed: September 13, 2019
    Date of Patent: September 7, 2021
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Kevin Stone, Krishna Shankar, Michael Laskey
  • Patent number: 11106907
    Abstract: Embodiments include methods, system and computer program products for processing a scanned document. Aspects include obtaining an image of the scanned document and identifying a boundary of a portion of the scanned document, wherein the portion includes at least partially obscured text. Aspects also include performing optical character recognition on the image of the scanned document to extract text from the document. Aspects further include performing additional processing on the text extracted from inside the portion of the document.
    Type: Grant
    Filed: August 1, 2019
    Date of Patent: August 31, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Brendan Bull, Scott Carrier, Paul Lewis Felt
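
A minimal sketch of the routing step in patent 11106907's abstract, with the OCR result assumed to be already available as word boxes: words whose boxes fall inside the boundary of the partially obscured portion are separated out for additional processing. The box format, centre-point test, and sample data are illustrative assumptions.

```python
# Minimal sketch (OCR output assumed already available as word boxes): separate
# text that falls inside the boundary of a partially obscured portion so it can
# receive additional processing.
def inside(box, boundary):
    """box and boundary are (x0, y0, x1, y1); True if the box centre is inside."""
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    return boundary[0] <= cx <= boundary[2] and boundary[1] <= cy <= boundary[3]

def split_extracted_text(words, boundary):
    """words: list of (text, box) pairs extracted by OCR from the scanned image."""
    obscured = [w for w, b in words if inside(b, boundary)]
    clear = [w for w, b in words if not inside(b, boundary)]
    return clear, obscured

words = [("Invoice", (10, 10, 80, 30)), ("TOTAL", (15, 205, 70, 225))]
clear, obscured = split_extracted_text(words, boundary=(0, 200, 300, 260))
print(clear, obscured)   # the obscured words go on to additional processing
```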