Patents Examined by Han Hoang
  • Patent number: 11748885
    Abstract: A method for characterising motion of one or more objects in a time ordered image dataset comprising a plurality of time ordered data frames, the method comprising: selecting a reference data frame from the plurality of time ordered data frames (210); extracting a plurality of image patches from at least a part of the reference data frame (220); identifying a location of each image patch of at least a subset of the plurality of image patches in each data frame (230); defining, based on the identified locations, a mesh for each data frame, wherein vertices of each mesh correspond to respective identified locations of image patches in the corresponding data frame (240); and deriving, from the meshes, a motion signature for the time ordered image dataset, the motion signature characteristic of the motion of the one or more objects in the plurality of time ordered data frames (250).
    Type: Grant
    Filed: October 12, 2018
    Date of Patent: September 5, 2023
    Assignee: Ludwig Institute for Cancer Research Ltd
    Inventors: Felix Zhou, Xin Lu, Carlos Ruiz-puig, Jens Rittscher
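The abstract's pipeline (track image patches, form a per-frame mesh from their locations, summarise vertex motion) can be illustrated with a minimal sketch. This is not the patented method; `motion_signature` and the choice of mean vertex displacement as the signature are assumptions for illustration only:

```python
import numpy as np

def motion_signature(tracks):
    """Derive a simple motion signature from tracked patch locations.

    tracks: array of shape (T, V, 2) -- for each of T time-ordered frames,
    the (x, y) location of each of V image patches (the mesh vertices).
    Returns a length T-1 signature: mean vertex displacement per frame.
    """
    tracks = np.asarray(tracks, dtype=float)
    # Displacement of every mesh vertex between consecutive frames.
    step = np.diff(tracks, axis=0)                 # (T-1, V, 2)
    per_vertex = np.linalg.norm(step, axis=2)      # (T-1, V)
    return per_vertex.mean(axis=1)                 # (T-1,)

# A static scene yields a zero signature; uniform drift, a constant one.
static = np.tile(np.random.rand(1, 5, 2), (4, 1, 1))
drift = static + np.arange(4)[:, None, None] * np.array([1.0, 0.0])
```

A richer signature could use mesh edge strains or spectral descriptors; the point is only that per-frame vertex locations suffice to characterise motion.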
  • Patent number: 11727592
    Abstract: A detection apparatus configured to: extract features from an image; determine the number of candidate regions of an object in the image based on the extracted features, wherein the determined number of candidate regions depends on the position and shape of the candidate regions; and detect the object from the image based on at least the extracted features and the determined number, position, and shape of the candidate regions.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: August 15, 2023
    Assignee: Canon Kabushiki Kaisha
    Inventors: Yaohai Huang, Yan Zhang, Zhiyuan Zhang
  • Patent number: 11729356
    Abstract: A method includes capturing a first image associated with a portion of a display screen being shared. The method further includes rendering the first image in a preview window of the display screen being shared to form a second image. The second image is captured so as to determine whether the first image is duplicated in the second image. The duplication of the first image in the second image is masked to form a third image. The third image is rendered in the preview window.
    Type: Grant
    Filed: April 12, 2019
    Date of Patent: August 15, 2023
    Assignee: RingCentral, Inc.
    Inventor: Aleksei Petrov
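A hedged sketch of the masking step: once the shared region is found duplicated inside its own preview window (the recursive "hall of mirrors" effect), that area is painted over before re-rendering. Detection of the duplicate is elided here; the function name and box convention are invented:

```python
import numpy as np

def mask_duplicate(shared_frame, preview_box, fill=0):
    """Paint over the preview window inside a shared-screen capture.

    shared_frame: (H, W, 3) uint8 capture of the shared screen region.
    preview_box:  (top, left, bottom, right) of the preview window within
                  the shared region, or None if it falls outside it.
    """
    masked = shared_frame.copy()
    if preview_box is not None:
        t, l, b, r = preview_box
        masked[t:b, l:r] = fill   # replace the recursive image with a flat fill
    return masked

frame = np.full((10, 10, 3), 200, dtype=np.uint8)
out = mask_duplicate(frame, (2, 2, 6, 6))
```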
  • Patent number: 11704819
    Abstract: The present disclosure discloses a three-dimensional data alignment apparatus, a three-dimensional data alignment method, and a recording medium, which may align a location between volumetric data and surface data even without a segmentation process of extracting a surface from the volumetric data. A three-dimensional data alignment apparatus according to an exemplary embodiment of the present disclosure includes a three-dimensional data alignment unit for aligning a location between first three-dimensional data and second three-dimensional data expressed in different data forms with regard to a target to be measured. The first three-dimensional data are three-dimensional data acquired in a voxel form with regard to the target to be measured, and the second three-dimensional data are three-dimensional data acquired in a surface form with regard to the target to be measured.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: July 18, 2023
    Assignees: MEDIT CORP., Korea University Research and Business Foundation
    Inventors: Min Ho Chang, Han Sol Kim, Keonhwa Jung
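The claimed apparatus aligns voxel data to surface data without a segmentation step; the toy sketch below instead thresholds the voxels (a deliberate simplification, precisely what the patent avoids) just to illustrate aligning two differently represented datasets by a translation. All names (`align_translation`, `iso`) are hypothetical:

```python
import numpy as np

def align_translation(voxels, surface_pts, iso=0.5):
    """Translation-only alignment sketch: convert the voxel grid to a
    point set by thresholding, then shift it so its centroid matches
    the centroid of the surface point cloud."""
    vox_pts = np.argwhere(voxels > iso).astype(float)   # voxel form -> points
    offset = surface_pts.mean(axis=0) - vox_pts.mean(axis=0)
    return vox_pts + offset, offset

vol = np.zeros((8, 8, 8))
vol[2:4, 2:4, 2:4] = 1.0                                # a small voxel blob
surf = np.argwhere(vol > 0.5) + np.array([3.0, 0.0, 0.0])  # shifted surface copy
aligned, offset = align_translation(vol, surf)
```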
  • Patent number: 11682125
    Abstract: A fluorescence image registration method includes obtaining at least one fluorescence image of a biochip. An interior local area is selected. Sums of pixel values in the interior local area along a first direction and a second direction are obtained. A plurality of first template lines is selected to find a minimum total value of the sums of pixel values corresponding to the first template lines. Pixel-level correction is performed on a local area of the track line to obtain a pixel-level track cross. Other track crosses on the biochip are obtained, and the pixel-level correction is performed on them. The position of the pixel-level track line is corrected by a center-of-gravity method to obtain the subpixel-level position of the track line. The subpixel-level positions of all sites uniformly distributed on the biochip are obtained.
    Type: Grant
    Filed: April 10, 2018
    Date of Patent: June 20, 2023
    Assignee: MGI Tech Co., Ltd.
    Inventors: Mei Li, Yu-Xiang Li, Yi-Wen Liu
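The center-of-gravity refinement named in the abstract is the one concrete, easily isolated step: the subpixel position of a track line is the intensity-weighted mean of a local profile. A minimal 1-D version (function name assumed):

```python
import numpy as np

def subpixel_peak(profile):
    """Refine a track-line position to subpixel accuracy using the
    center-of-gravity (intensity-weighted mean) of a 1-D profile."""
    profile = np.asarray(profile, dtype=float)
    x = np.arange(profile.size)
    return (x * profile).sum() / profile.sum()

# A symmetric peak centred between pixels 2 and 3 lands at 2.5.
line = np.array([0.0, 1.0, 4.0, 4.0, 1.0, 0.0])
pos = subpixel_peak(line)
```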
  • Patent number: 11647949
    Abstract: Embodiments herein provide a method for stereo-visual localization of an object by a stereo-visual localization apparatus. The method includes generating, by the stereo-visual localization apparatus, a stereo-visual interface displaying a first stereo image of the object and a first stereo image of the subject in a first portion, and a second stereo image of the object and a second stereo image of the subject in a second portion. Further, the method includes detecting, by the stereo-visual localization apparatus, a movement of the subject to align the subject in the field of view with the object. Furthermore, the method includes visually aligning, by the stereo-visual localization apparatus, the subject with the object based on the movement by simultaneously changing the apparent positions of the first and second stereo images of the subject in each of the first portion and the second portion of the stereo-visual interface.
    Type: Grant
    Filed: June 7, 2018
    Date of Patent: May 16, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Shankar Mosur Venkatesan, Phaneendra Kumar Yalavarthy, Trivikram Annamalai
  • Patent number: 11644901
    Abstract: In a method for detecting a user input based on a gesture, image data of at least two individual images are acquired and recording times are allocated to the individual images. Each of the acquired individual images is segmented, an individual image object is identified in each of the individual images, and a reference point is determined based on the individual image object. A trajectory is determined based on the reference points in the individual images, and a gesture is determined based on the trajectory. An output signal is generated and output based on the gesture determined. Also provided is a device for detecting a user input based on a gesture, having an acquisition unit for acquiring image data, a segmentation unit for performing segmentation, a trajectory computing unit for determining the trajectory, an allocation unit for determining a gesture, and an output unit.
    Type: Grant
    Filed: June 18, 2018
    Date of Patent: May 9, 2023
    Inventors: Bernd Ette, Volker Wintsche, Christian Gaida
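The claimed chain (segment each frame, take a reference point, string the points into a trajectory, map the trajectory to a gesture) can be sketched end to end for a toy two-gesture vocabulary. Thresholding segmentation and the displacement-sign classifier are stand-ins, not the patented units:

```python
import numpy as np

def reference_point(frame, threshold=0.5):
    """Segment one frame and return the centroid of the foreground object."""
    ys, xs = np.nonzero(frame > threshold)
    return np.array([xs.mean(), ys.mean()])

def classify_swipe(frames):
    """Build a trajectory from per-frame reference points and map its net
    horizontal displacement to a left/right swipe gesture."""
    traj = np.array([reference_point(f) for f in frames])
    dx, dy = traj[-1] - traj[0]
    return "swipe_right" if dx > 0 else "swipe_left"

# A bright blob moving rightward across three frames.
frames = []
for col in (1, 3, 5):
    f = np.zeros((8, 8))
    f[3:5, col:col + 2] = 1.0
    frames.append(f)
gesture = classify_swipe(frames)
```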
  • Patent number: 11635408
    Abstract: Systems and methods for tracking the location of a non-destructive inspection (NDI) scanner using images of a target object acquired by the NDI scanner. The system includes a frame, an NDI scanner supported by the frame, a system configured to enable motorized movement of the frame, and a computer system communicatively coupled to receive sensor data from the NDI scanner and track the location of the NDI scanner. The NDI scanner includes a two-dimensional (2-D) array of sensors. Subsurface depth sensor data is repeatedly (recurrently, continually) acquired by and output from the 2-D sensor array while at different locations on a surface of the target object. The resulting 2-D scan image sequence is fed into an image processing and feature point comparison module that is configured to track the location of the scanner relative to the target object using virtual features visible in the acquired scan images.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: April 25, 2023
    Assignee: The Boeing Company
    Inventors: Joseph L. Hafenrichter, James J. Troy, Gary E. Georgeson
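A crude stand-in for the image processing and feature point comparison module: estimate the scanner's frame-to-frame offset by finding the shift that best aligns consecutive 2-D scan images. A real tracker matches feature points rather than brute-force searching; all names here are assumptions:

```python
import numpy as np

def best_shift(prev, curr, max_shift=3):
    """Find the (dy, dx) offset that best explains how the scene content
    of `prev` reappears in `curr`, i.e. curr[y, x] ~= prev[y+dy, x+dx],
    by minimising the mean squared difference over the overlap."""
    h, w = prev.shape
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            a = prev[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
            b = curr[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
            err = ((a - b) ** 2).mean()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

# A subsurface "virtual feature" seen at rows 4:6, cols 4:6 in one scan
# shows up at rows 3:5, cols 2:4 in the next: the scanner moved by (1, 2).
prev = np.zeros((12, 12)); prev[4:6, 4:6] = 1.0
curr = np.zeros((12, 12)); curr[3:5, 2:4] = 1.0
shift = best_shift(prev, curr)
```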
  • Patent number: 11631234
    Abstract: The present disclosure relates to an object selection system that accurately detects and optionally automatically selects user-requested objects (e.g., query objects) in digital images. For example, the object selection system builds and utilizes an object selection pipeline to determine which object detection neural network to utilize to detect a query object based on analyzing the object class of a query object. In particular, the object selection system can identify both known object classes as well as objects corresponding to unknown object classes.
    Type: Grant
    Filed: July 22, 2019
    Date of Patent: April 18, 2023
    Assignee: Adobe, Inc.
    Inventors: Scott Cohen, Zhe Lin, Mingyang Ling
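The pipeline's routing step, picking a detection network based on whether the query's object class is known, can be caricatured as a dictionary dispatch. This is a minimal sketch; the actual pipeline analyses the object class with neural networks, and every name below is invented:

```python
def select_detector(query, class_specific, generic):
    """Route a textual query object to a class-specific detector when its
    class is known; otherwise fall back to a class-agnostic detector."""
    return class_specific.get(query.strip().lower(), generic)

# Hypothetical detector registry: known classes map to specialist networks.
detectors = {"dog": "dog_detector_net", "car": "car_detector_net"}
chosen = select_detector("Dog", detectors, "open_vocab_detector")
unknown = select_detector("axolotl", detectors, "open_vocab_detector")
```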
  • Patent number: 11625842
    Abstract: An image processing apparatus includes: a model pattern storage unit that stores a model pattern composed of a plurality of model feature points; an image data acquisition unit that acquires a plurality of images obtained through capturing an object to be detected; an object detection unit that detects the object to be detected from the images using the model pattern; a model pattern transformation unit that transforms a position and posture such that the model pattern is superimposed on an image of the object to be detected; a corresponding point acquisition unit that acquires a corresponding point on image data corresponding to each of the model feature points; a corresponding point set selection unit that selects a set of corresponding points on the plurality of images; and a three-dimensional position calculation unit that calculates a three-dimensional position of the image of the object to be detected.
    Type: Grant
    Filed: September 19, 2019
    Date of Patent: April 11, 2023
    Assignee: FANUC CORPORATION
    Inventor: Yuta Namiki
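The final step, computing a 3-D position from corresponding points in multiple images, is classical linear triangulation. A sketch with two views and known 3x4 camera matrices (the patent's correspondence-set selection is elided):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3-D point from its 2-D
    projections in two views with known 3x4 camera matrices."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                    # nullspace of A = homogeneous 3-D point
    return X[:3] / X[3]           # dehomogenise

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # reference camera
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # translated camera
X_true = np.array([1.0, 2.0, 3.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```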
  • Patent number: 11620809
    Abstract: The present invention discloses fiducial marker systems or tag systems and methods to detect and decode a tag. In one aspect, a tag comprises four corners. Two upper corners are interconnected to form a detection area. Two lower corners are interconnected to form another detection area. The detection areas are interconnected by a path. The path divides the space between the detection areas into two coding areas. In another aspect, a tag comprises four corners. The four corners are interconnected by multiple paths. The multiple paths divide the space defined by the four corners into multiple coding areas.
    Type: Grant
    Filed: September 11, 2019
    Date of Patent: April 4, 2023
    Inventors: Jiawei Huang, Dexin Li, Xintian Li
  • Patent number: 11576638
    Abstract: An image synthesis unit of an X-ray imaging apparatus is configured to correct a synthesis target image or a transparent image based on movement information of a feature point and movement information of a pixel and generate a synthesized image by synthesizing a corrected synthesis target image and a transparent image or synthesizing a synthesis target image and a corrected transparent image.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: February 14, 2023
    Assignee: Shimadzu Corporation
    Inventor: Takanori Yoshida
  • Patent number: 11538244
    Abstract: Implementations of the subject matter described herein provide a solution for extracting spatial-temporal feature representation. In this solution, an input comprising a plurality of images is received at a first layer of a learning network. First features that characterize spatial presentation of the images are extracted from the input in a spatial dimension using a first unit of the first layer. Based on a type of a connection between the first unit and a second unit of the first layer, second features at least characterizing temporal changes across the images are extracted from the first features and/or the input in a temporal dimension using the second unit. A spatial-temporal feature representation of the images is generated partially based on the second features. Through this solution, it is possible to reduce learning network sizes, improve training and use efficiency of learning networks, and obtain accurate spatial-temporal feature representations.
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: December 27, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ting Yao, Tao Mei
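The factorisation described (a spatial unit followed by a temporal unit, instead of one dense 3-D kernel) can be shown with plain NumPy. This sketch goes further than the abstract and makes the spatial filter separable too; it is illustrative only:

```python
import numpy as np

def conv1d_along(x, k, axis):
    """'Same'-size 1-D convolution of array x with kernel k along one axis."""
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), axis, x)

def factorised_st(clip, k_spatial, k_temporal):
    """Factorised spatio-temporal filtering of a (T, H, W) clip:
    a spatial pass over H and W (the 'first unit'), then a temporal
    pass over T (the 'second unit'). 3 + 3 + 3 = 9 weights here versus
    27 for a dense 3x3x3 kernel."""
    out = conv1d_along(clip, k_spatial, axis=1)    # height
    out = conv1d_along(out, k_spatial, axis=2)     # width
    return conv1d_along(out, k_temporal, axis=0)   # time

clip = np.random.rand(8, 16, 16)                   # 8 frames of 16x16 features
k = np.array([0.25, 0.5, 0.25])
features = factorised_st(clip, k, k)
```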
  • Patent number: 11532121
    Abstract: A method for measuring a seam on aircraft skin based on a large-scale point cloud is disclosed. A point cloud density is calculated for each point in an aircraft skin point cloud. Seam and non-seam point clouds are separated according to discrepancies in the calculated point cloud densities. A point is selected from the point cloud of the seam area, and a section at the point is extracted. A certain range of the seam and non-seam point clouds is projected onto the section to acquire a projected point cloud. A calculation model of flush and gap is constructed, and the flush and the gap of the aircraft skin seam at the measuring point are calculated from the projected point cloud and the calculation model.
    Type: Grant
    Filed: February 7, 2021
    Date of Patent: December 20, 2022
    Assignee: Nanjing University of Aeronautics and Astronautics
    Inventors: Jun Wang, Kun Long, Qian Xie, Dening Lu
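Once the neighbourhood of a measuring point is projected onto the cross-section plane, flush and gap reduce to 2-D measurements. A toy calculation model (the coordinate convention, with x along the surface and y along its normal, is an assumption, as are the edge and mean-level definitions):

```python
import numpy as np

def flush_and_gap(left_pts, right_pts):
    """Given the two skin edges projected onto a cross-section plane
    (2-D points: x = distance along the surface, y = surface normal):
    gap   = horizontal clearance between the facing edges,
    flush = vertical step between the two skin surface levels."""
    gap = right_pts[:, 0].min() - left_pts[:, 0].max()
    flush = abs(right_pts[:, 1].mean() - left_pts[:, 1].mean())
    return flush, gap

# One skin panel ends at x = 2.0; the other starts at x = 2.5, 0.3 higher.
left = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
right = np.array([[2.5, 0.3], [3.5, 0.3], [4.5, 0.3]])
flush, gap = flush_and_gap(left, right)
```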
  • Patent number: 11501121
    Abstract: A method for automatically classifying emission tomographic images includes receiving original images and a plurality of class labels designating each original image as belonging to one of a plurality of possible classifications and utilizing a data generator to create generated images based on the original images. The data generator shuffles the original images. The number of generated images is greater than the number of original images. One or more geometric transformations are performed on the generated images. A binomial sub-sampling operation is applied to the transformed images to yield a plurality of sub-sampled images for each original image. A multi-layer convolutional neural network (CNN) is trained using the sub-sampled images and the class labels to classify input images as corresponding to one of the possible classifications. A plurality of weights corresponding to the trained CNN are identified and those weights are used to create a deployable version of the CNN.
    Type: Grant
    Filed: January 7, 2020
    Date of Patent: November 15, 2022
    Assignee: Siemens Medical Solutions USA, Inc.
    Inventors: Shuchen Zhang, Xinhong Ding
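The binomial sub-sampling step has a standard one-line realisation: each count in a pixel is kept independently with probability p, producing a statistically consistent low-count copy of the image. A sketch with an invented function name:

```python
import numpy as np

rng = np.random.default_rng(0)

def binomial_subsample(counts, p=0.5):
    """Binomial thinning of a count image: each of the N counts in a
    pixel survives independently with probability p."""
    return rng.binomial(counts, p)

# Emission-tomography-like count data, thinned to a quarter of the dose.
original = rng.poisson(lam=20.0, size=(4, 4))
thinned = binomial_subsample(original, p=0.25)
```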
  • Patent number: 11423528
    Abstract: The image inspection apparatus includes a camera that images a workpiece placed on a stage and generates a workpiece image, and a second camera having an imaging field-of-view wider than that of the camera, which images the workpiece and generates a bird's eye view image. The apparatus detects a position of the workpiece based on the bird's eye view image, and positions the workpiece based on the position of the workpiece so that the workpiece is located in or near an imaging field-of-view of the camera, thereby imaging the workpiece with the camera to generate the workpiece image. The apparatus specifies a detailed position and an orientation of the workpiece in the workpiece image generated by the camera, and determines an inspection point of the workpiece in the workpiece image based on the detailed position and the orientation of the workpiece specified, thereby executing a predetermined inspection process.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: August 23, 2022
    Assignee: KEYENCE CORPORATION
    Inventor: Koji Takahashi
  • Patent number: 11410303
    Abstract: Novel tools and techniques are provided for implementing digital microscopy imaging using deep learning-based segmentation and/or implementing instance segmentation based on partial annotations. In various embodiments, a computing system might receive first and second images, the first image comprising a field of view of a biological sample, while the second image comprises labeling of objects of interest in the biological sample. The computing system might encode, using an encoder, the second image to generate third and fourth encoded images (different from each other) that comprise proximity scores or maps. The computing system might train an AI system to predict objects of interest based at least in part on the third and fourth encoded images. The computing system might generate (using regression) and decode (using a decoder) two or more images based on a new image of a biological sample to predict labeling of objects in the new image.
    Type: Grant
    Filed: April 10, 2020
    Date of Patent: August 9, 2022
    Assignee: Agilent Technologies Inc.
    Inventors: Elad Arbel, Itay Remer, Amir Ben-Dor
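The proximity-score encoding can be sketched as a distance-to-annotation transform: each pixel of the encoded image gets a score that decays with its distance to the nearest labelled object pixel, so sparse annotations become a smooth regression target. The exponential decay and `tau` are assumptions, not the patented encoder:

```python
import numpy as np

def proximity_map(mask, tau=2.0):
    """Encode a binary annotation mask as a proximity map: each pixel
    gets exp(-d / tau), where d is its Euclidean distance to the
    nearest labelled pixel (brute force; fine for small images)."""
    obj = np.argwhere(mask)                        # labelled pixel coordinates
    ys, xs = np.indices(mask.shape)
    pix = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    d = np.sqrt(((pix[:, None, :] - obj[None, :, :]) ** 2).sum(-1)).min(1)
    return np.exp(-d / tau).reshape(mask.shape)

mask = np.zeros((7, 7), dtype=bool)
mask[3, 3] = True                                  # a single dot annotation
prox = proximity_map(mask)
```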
  • Patent number: 11403758
    Abstract: A 3D/2D vascular registration method includes: obtaining a first vascular image model from topological information of vessels in a 3D vascular image, and a second vascular image model from topological information of vessels in a 2D vascular image; and obtaining, from the first vascular image model and the second vascular image model, a spatial transformation relationship between the 3D vascular image and the 2D vascular image, the spatial transformation relationship being used to register the 3D vascular image and the 2D vascular image. Because the vascular image models are built from the topological information of the vessels and registration is performed on those models, the method achieves both high accuracy and high calculation efficiency.
    Type: Grant
    Filed: December 6, 2019
    Date of Patent: August 2, 2022
    Assignee: BEIJING INSTITUTE OF TECHNOLOGY
    Inventors: Yongtian Wang, Jian Yang, Danni Ai, Jingfan Fan, Jianjun Zhu
  • Patent number: 11379688
    Abstract: A keypoint detection system includes: a camera system including at least one camera; and a processor and memory, the processor and memory being configured to: receive an image captured by the camera system; compute a plurality of keypoints in the image using a convolutional neural network including: a first layer implementing a first convolutional kernel; a second layer implementing a second convolutional kernel; an output layer; and a plurality of connections between the first layer and the second layer and between the second layer and the output layer, each of the connections having a corresponding weight stored in the memory; and output the plurality of keypoints of the image computed by the convolutional neural network.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: July 5, 2022
    Assignee: PACKSIZE LLC
    Inventors: Paolo Di Febbo, Carlo Dal Mutto, Kinh Tieu
  • Patent number: 11379999
    Abstract: The feature extraction device according to one aspect of the present disclosure comprises: a reliability determination unit that determines, for a second region, a degree of reliability indicating the likelihood of being the recognition subject, the second region being a region that has been extracted as a foreground region of an image and lies within a first region that has been extracted from the image as a partial region containing the recognition subject; a feature determination unit that, on the basis of the degree of reliability, determines a feature of the recognition subject using a first feature extracted from the first region and a second feature extracted from the second region; and an output unit that outputs information indicating the determined feature of the recognition subject.
    Type: Grant
    Filed: February 18, 2019
    Date of Patent: July 5, 2022
    Assignee: NEC CORPORATION
    Inventor: Ryo Kawai
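The reliability-weighted combination of the two features admits a one-line sketch: blend the foreground-region feature and the whole-region feature in proportion to the reliability score. The linear blend is an assumed instantiation, not necessarily the claimed rule:

```python
import numpy as np

def fuse_features(f_region, f_foreground, reliability):
    """Blend a feature from the full detection region with one from the
    extracted foreground, weighting the foreground feature by how
    reliably it resembles the recognition subject (reliability in [0, 1])."""
    r = float(np.clip(reliability, 0.0, 1.0))
    return r * f_foreground + (1.0 - r) * f_region

f1 = np.array([1.0, 0.0])     # feature from the first (whole) region
f2 = np.array([0.0, 1.0])     # feature from the second (foreground) region
fused = fuse_features(f1, f2, reliability=0.8)
```

With reliability 0, the foreground is distrusted entirely and the whole-region feature is used unchanged.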