Patents Examined by Menatoallah Youssef
  • Patent number: 10963668
    Abstract: A method of preprocessing an image including biological information is disclosed, in which an image preprocessor may set an edge line in an input image including biological information, calculate an energy value corresponding to the edge line, and adaptively crop the input image based on the energy value.
    Type: Grant
    Filed: August 10, 2018
    Date of Patent: March 30, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Kyuhong Kim, Wonjun Kim, Youngsung Kim, Sungjoo Suh
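The adaptive cropping described in this abstract can be sketched as follows. This is an illustrative toy only, not the patented method: the "energy value" here is a simple per-column gradient sum, and all names and thresholds are assumptions for demonstration.

```python
def column_energy(img):
    """Sum of absolute horizontal gradients per column (a simple edge-energy proxy)."""
    h, w = len(img), len(img[0])
    energy = [0.0] * w
    for row in img:
        for x in range(1, w):
            energy[x] += abs(row[x] - row[x - 1])
    return energy

def adaptive_crop(img, threshold):
    """Crop away left/right columns whose edge energy stays below `threshold`."""
    energy = column_energy(img)
    w = len(energy)
    left = 0
    while left < w - 1 and energy[left] < threshold:
        left += 1
    right = w - 1
    while right > left and energy[right] < threshold:
        right -= 1
    return [row[left:right + 1] for row in img]
```

For example, a grayscale row `[0, 0, 5, 5, 0, 0]` has all its edge energy around the bright band, so low-energy border columns are trimmed.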
  • Patent number: 10916053
    Abstract: Systems and methods for generating a three-dimensional (3D) model of a user's dental arch based on two-dimensional (2D) images include a model generation system that receives one or more images of a dental arch of a user. The model generation system generates a point cloud based on the images of the dental arch of the user. The model generation system generates a 3D model of the dental arch of the user based on the point cloud. A dental aligner is manufactured based on the 3D model of the dental arch of the user. The dental aligner is specific to the user and configured to reposition one or more teeth of the user.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: February 9, 2021
    Assignee: SDC U.S. SmilePay SPV
    Inventors: Jordan Katzman, Christopher Yancey
  • Patent number: 10902278
    Abstract: In an image processing system, an image processing apparatus configured to recognize images of documents is connected through a network to terminal apparatuses each including an input unit and a display unit. The image processing apparatus includes: a recognition unit configured to perform character recognition processing on an image; a confidential information detecting unit configured to detect confidential information from a result of the character recognition processing; and a manipulation unit configured to generate, based on the confidential information in the image, a first manipulated image obtained by fragmenting the confidential information. Each of the terminal apparatuses includes: a display unit configured to display the first manipulated image; and an input unit configured to input corrected data for the first manipulated image.
    Type: Grant
    Filed: March 16, 2017
    Date of Patent: January 26, 2021
    Assignees: KABUSHIKI KAISHA TOSHIBA, TOSHIBA SOLUTIONS CORPORATION
    Inventors: Soichiro Ono, Tomohisa Suzuki, Akio Furuhata, Atsuhiro Yoshida
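The fragmentation idea in this abstract, splitting detected confidential values so no single correction worker sees a complete one, can be sketched as below. The detection rule (long digit runs) and fragment size are hypothetical stand-ins for whatever the real system uses.

```python
import re

# Illustrative rule: treat runs of 6+ digits (e.g. ID numbers) as confidential.
CONFIDENTIAL = re.compile(r"\b\d{6,}\b")

def detect_confidential(text):
    """Return (start, end) spans of text matching the confidential-information rule."""
    return [m.span() for m in CONFIDENTIAL.finditer(text)]

def fragment(text, size=3):
    """Split each confidential match into fixed-size fragments so no single
    correction worker receives a complete value."""
    pieces = []
    for start, end in detect_confidential(text):
        secret = text[start:end]
        pieces.extend(secret[i:i + size] for i in range(0, len(secret), size))
    return pieces
```

Each fragment would then be sent to a different terminal for correction, with the server reassembling the corrected pieces.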
  • Patent number: 10885653
    Abstract: Present aspects are directed to a system that receives digital images of parcels at various points throughout a transportation and logistics network. Based on analysis of a single digital image of a parcel, the dimensions of the parcel are detected and calculated. A single digital image of a parcel is captured at various points throughout a transit or shipping network. The single digital image is processed using computer vision for image manipulation. Key identifying points of the parcel (e.g., a label or bar code) can be detected through the processing. The processed digital image is input into a mathematical model that generates an estimate for all dimensions of the parcel.
    Type: Grant
    Filed: January 11, 2019
    Date of Patent: January 5, 2021
    Assignee: United Parcel Service of America, Inc.
    Inventors: Asheesh Goja, Eric Almberg
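One simple geometric route consistent with this abstract's mention of a detected label is to use the label's known physical size as a scale reference for the rest of the image. The function names and the scale-reference approach are illustrative assumptions, not the patent's mathematical model.

```python
def scale_from_label(label_pixel_len, label_real_len):
    """A detected label of known physical size yields a pixels-to-units scale."""
    return label_real_len / label_pixel_len

def parcel_dimensions(pixel_dims, scale):
    """Convert measured pixel extents of the parcel into physical dimensions."""
    return tuple(round(d * scale, 1) for d in pixel_dims)
```

For instance, if a 150 mm shipping label spans 100 pixels, a parcel edge spanning 200 pixels is roughly 300 mm.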
  • Patent number: 10878576
    Abstract: Techniques for enhancing image segmentation with the integration of deep learning are disclosed herein. An example method for atlas-based segmentation using deep learning includes: applying a deep learning model to a subject image to identify an anatomical feature, registering an atlas image to the subject image, using the deep learning segmentation data to improve a registration result, generating a mapped atlas, and identifying the feature in the subject image using the mapped atlas. Another example method for training and use of a trained machine learning classifier, in an atlas-based segmentation process using deep learning, includes: applying a deep learning model to an atlas image, training a machine learning model classifier using data from applying the deep learning model, estimating structure labels of areas of the subject image, and defining structure labels by combining the estimated structure labels with labels produced from atlas-based segmentation on the subject image.
    Type: Grant
    Filed: February 14, 2018
    Date of Patent: December 29, 2020
    Assignee: Elekta, Inc.
    Inventors: Xiao Han, Nicolette Patricia Magro
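The final step in the second method, defining structure labels by combining classifier estimates with atlas-based labels, resembles label fusion. A minimal majority-vote sketch, illustrative only and not the patented combination rule:

```python
def fuse_labels(label_maps):
    """Combine per-voxel structure labels from several sources (e.g. atlas-based
    and classifier-based segmentations) by majority vote, a common fusion rule."""
    fused = []
    for votes in zip(*label_maps):
        counts = {}
        for v in votes:
            counts[v] = counts.get(v, 0) + 1
        fused.append(max(counts, key=counts.get))
    return fused
```

With three label maps voting per voxel, the fused map takes the most frequent label at each position.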
  • Patent number: 10872221
    Abstract: A non-contact biometric identification system includes a hand scanner that generates images of a user's palm. Images are acquired using light of a first polarization at a first time that show surface characteristics such as wrinkles in the palm while images acquired using light of a second polarization at a second time show deeper characteristics such as veins. Within the images, the palm is identified and subdivided into sub-images. The sub-images are subsequently processed to determine feature vectors present in each sub-image. A current signature is determined using the feature vectors. A user may be identified based on a comparison of the current signature with a previously stored reference signature that is associated with a user identifier.
    Type: Grant
    Filed: June 21, 2018
    Date of Patent: December 22, 2020
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Dilip Kumar, Manoj Aggarwal, George Leifman, Gerard Guy Medioni, Nikolai Orlov, Natan Peterfreund, Korwin Jon Smith, Dmitri Veikherman, Sora Kim
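The signature-matching step in this abstract, comparing a current signature built from sub-image feature vectors against stored references, can be sketched with a cosine-similarity threshold. The similarity measure and threshold are assumptions; the patent does not specify them here.

```python
import math

def signature(feature_vectors):
    """Concatenate per-sub-image feature vectors into one signature."""
    return [v for vec in feature_vectors for v in vec]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(current, references, threshold=0.9):
    """Return the user id whose stored reference signature best matches, if any."""
    best_id, best_score = None, threshold
    for user_id, ref in references.items():
        score = cosine(current, ref)
        if score >= best_score:
            best_id, best_score = user_id, score
    return best_id
```

A query signature far from every stored reference returns `None`, i.e. no identification.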
  • Patent number: 10867183
    Abstract: A computer-implemented method for selecting representative frames for videos is provided. The method includes receiving a video and identifying a set of features for each of the frames of the video. The features include frame-based features and semantic features, the semantic features identifying likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is subsequently generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment.
    Type: Grant
    Filed: April 23, 2018
    Date of Patent: December 15, 2020
    Assignee: Google LLC
    Inventors: Sanketh Shetty, Tomas Izo, Min-Hsuan Tsai, Sudheendra Vijayanarasimhan, Apostol Natsev, Sami Abu-El-Haija, George Dan Toderici, Susanna Ricco, Balakrishnan Varadarajan, Nicola Muscettola, WeiHsin Gu, Weilong Yang, Nitin Khandelwal, Phuong Le
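The per-segment selection in this abstract, score each frame from its semantic features and pick the best, can be sketched directly. The scoring rule (sum of concept likelihoods) is one simple choice, not necessarily the patent's.

```python
def score_frame(semantic_likelihoods):
    """Score a frame as the sum of its semantic-concept likelihoods."""
    return sum(semantic_likelihoods)

def representative_frames(segments):
    """segments: list of segments, each a list of (frame_id, likelihoods) pairs.
    Returns the best-scoring frame id for each segment."""
    reps = []
    for segment in segments:
        best = max(segment, key=lambda f: score_frame(f[1]))
        reps.append(best[0])
    return reps
```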
  • Patent number: 10861141
    Abstract: A facial image-processing method includes: transforming a facial image with 2D Fourier transformation (FT) in a template to obtain 2D FT data of the color channels of the facial image and 2D FT data of the template, and computing first light intensities of the color channels and a second light intensity of the template from the 2D FT data; computing an intensity mean value and an intensity maximum in each of the color channels; processing the first light intensities and the second light intensity with singular value decomposition (SVD) to obtain intensity spectrum SVD matrixes and a template SVD matrix; computing a compensation weight coefficient for each color channel with the intensity mean value, the intensity maximum and SV maximums of the intensity spectrum SVD matrixes and the template SVD matrix; and compensating the facial image with the compensation weight coefficients to obtain a compensated facial image.
    Type: Grant
    Filed: March 17, 2017
    Date of Patent: December 8, 2020
    Assignee: NATIONAL KAOHSIUNG UNIVERSITY OF APPLIED SCIENCES
    Inventor: Jing-Wein Wang
  • Patent number: 10852749
    Abstract: A computer-implemented method, system, and computer program product are provided for pose estimation. The method includes receiving, by a processor, a plurality of images from one or more cameras. The method also includes generating, by the processor with a feature extraction convolutional neural network (CNN), a feature map for each of the plurality of images. The method additionally includes estimating, by the processor with a feature weighting network, a score map from a pair of the feature maps. The method further includes predicting, by the processor with a pose estimation CNN, a pose from the score map and a combined feature map. The method also includes controlling an operation of a processor-based machine to change a state of the processor-based machine, responsive to the pose.
    Type: Grant
    Filed: August 10, 2018
    Date of Patent: December 1, 2020
    Inventors: Quoc-Huy Tran, Manmohan Chandraker, Hyo Jin Kim
  • Patent number: 10848662
    Abstract: Provided is an image processing device including a global motion detection unit configured to detect a global motion indicating a motion of an entire image, a local motion detection unit configured to detect a local motion indicating a motion of each of areas of an image, and a main subject determination unit configured to determine a main subject based on the global motion and the local motion.
    Type: Grant
    Filed: April 6, 2017
    Date of Patent: November 24, 2020
    Assignee: Sony Corporation
    Inventor: Masaya Kinoshita
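A minimal version of this abstract's idea is to estimate the global motion (e.g. camera pan) and pick the area whose local motion deviates most from it as the main subject. Using the mean of local motions as the global estimate is an illustrative simplification.

```python
def global_motion(local_motions):
    """Estimate global motion as the mean of the per-area motion vectors."""
    n = len(local_motions)
    return (sum(m[0] for m in local_motions) / n,
            sum(m[1] for m in local_motions) / n)

def main_subject(local_motions):
    """Pick the index of the area whose motion deviates most from the global motion."""
    gx, gy = global_motion(local_motions)
    def deviation(i):
        dx, dy = local_motions[i][0] - gx, local_motions[i][1] - gy
        return dx * dx + dy * dy
    return max(range(len(local_motions)), key=deviation)
```

If most areas drift right with the camera but one moves left, that outlier area is flagged as the subject.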
  • Patent number: 10839564
    Abstract: A system classifies a compressed image or predicts likelihood values associated with a compressed image. The system partially decompresses compressed JPEG image data to obtain blocks of discrete cosine transform (DCT) coefficients that represent the image. The system may apply various transform functions to the individual blocks of DCT coefficients to resize the blocks so that they may be input together into a neural network for analysis. Weights of the neural network may be trained to accept transformed blocks of DCT coefficients which may be less computationally intensive than accepting raw image data as input.
    Type: Grant
    Filed: July 30, 2018
    Date of Patent: November 17, 2020
    Assignee: Uber Technologies, Inc.
    Inventors: Lionel Gueguen, Alexander Igorevich Sergeev, Ruoqian Liu, Jason Yosinski
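The DCT-coefficient blocks this abstract feeds to the network are what a partial JPEG decode yields per 8x8 block. A naive orthonormal 2-D DCT-II, shown for intuition only (real decoders use fast factorized transforms):

```python
import math

def dct_2d(block):
    """Naive orthonormal 2-D DCT-II of an NxN block (the transform JPEG
    stores per 8x8 block); O(N^4), for illustration only."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += block[x][y] * math.cos((2 * x + 1) * u * math.pi / (2 * n)) \
                                     * math.cos((2 * y + 1) * v * math.pi / (2 * n))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out
```

A constant block concentrates all energy in the DC coefficient `out[0][0]`, which is why DCT blocks are such a compact network input for smooth regions.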
  • Patent number: 10832149
    Abstract: A mechanism is provided in a data processing system comprising a processor and a memory, the memory comprising instructions executed by the processor to specifically configure the processor to implement a statistical model tool for providing insight into decision making. The statistical model tool applies the statistical model to an input image to generate an original classification probability. An image modification component executing within the statistical model tool iteratively modifies each portion of the input image to generate a modified image. The statistical model tool applies the statistical model to the modified image to generate a new classification probability for each portion of the input image. A compare component executing in the statistical model tool compares each new classification probability to the original classification probability to generate a respective probability distance.
    Type: Grant
    Filed: December 10, 2018
    Date of Patent: November 10, 2020
    Assignee: International Business Machines Corporation
    Inventors: Michael C. Mudie, Christopher A. Bischke, Abhijit Tomar
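The modify-reclassify-compare loop in this abstract is essentially an occlusion-sensitivity probe. A minimal sketch, assuming square patches and a scalar-probability classifier (both assumptions for illustration):

```python
def probability_distances(image, classify, patch=2, mask_value=0):
    """Occlusion-style probe: mask each patch, re-classify, and record how far the
    probability moves from the original (larger distance = more influential patch)."""
    base = classify(image)
    h, w = len(image), len(image[0])
    distances = {}
    for top in range(0, h, patch):
        for left in range(0, w, patch):
            modified = [row[:] for row in image]  # copy, then mask one patch
            for y in range(top, min(top + patch, h)):
                for x in range(left, min(left + patch, w)):
                    modified[y][x] = mask_value
            distances[(top, left)] = abs(classify(modified) - base)
    return distances
```

The patch whose occlusion moves the probability furthest is the one the model relied on most, which is the "insight into decision making" the tool provides.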
  • Patent number: 10824895
    Abstract: An object of the present invention is to extract an area of a foreground object with high accuracy. The present invention is an image processing apparatus including: a target image acquisition unit configured to acquire a target image that is a target of extraction of a foreground area; a reference image acquisition unit configured to acquire a plurality of reference images including an image whose viewpoint is different from that of the target image; a conversion unit configured to convert a plurality of reference images acquired by the reference image acquisition unit based on a viewpoint corresponding to the target image; and an extraction unit configured to extract a foreground area of the target image by using data relating to a degree of coincidence of a plurality of reference images converted by the conversion unit.
    Type: Grant
    Filed: September 18, 2017
    Date of Patent: November 3, 2020
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Kina Itakura
  • Patent number: 10810494
    Abstract: A method of updating a classifier on-the-fly is provided. The method includes providing a base classifier. The base classifier is a neural network. The method further includes receiving a class and a set of images associated with the class. The method further includes splitting the set of images into an evaluation set and a training set. The method further includes updating the base classifier on-the-fly to provide an updated classifier. Updating the base classifier includes (1) extracting features for each image from the training set from the base classifier; (2) training the updated classifier using the extracted features; and (3) scoring the evaluation set with the updated classifier.
    Type: Grant
    Filed: August 7, 2017
    Date of Patent: October 20, 2020
    Inventors: Appu Shaji, Ramzi Rizk, Harsimrat S. Sandhawalia, Ludwig G. W. Schmidt-Hackenberg
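The update loop in this abstract, extract features from a frozen base model, then train a lightweight classifier on those features, can be sketched with a nearest-centroid classifier. The feature extractor below is a hypothetical stand-in for the base network's features, and nearest-centroid is one simple choice of updated classifier.

```python
def base_features(image):
    """Stand-in for features from a frozen base network: simple intensity
    statistics (hypothetical, for illustration only)."""
    flat = [p for row in image for p in row]
    return (sum(flat) / len(flat), max(flat) - min(flat))

def train_centroid(feature_label_pairs):
    """'Update' step: fit a nearest-centroid classifier over extracted features."""
    sums, counts = {}, {}
    for feat, label in feature_label_pairs:
        s = sums.setdefault(label, [0.0] * len(feat))
        for i, v in enumerate(feat):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: tuple(v / counts[label] for v in s) for label, s in sums.items()}

def predict(centroids, feat):
    """Score step: assign the label of the nearest class centroid."""
    return min(centroids,
               key=lambda lb: sum((a - b) ** 2 for a, b in zip(feat, centroids[lb])))
```

Because only the small classifier is retrained, a new class can be added "on-the-fly" without touching the base network.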
  • Patent number: 10810757
    Abstract: A vehicle exterior environment recognition apparatus includes a road surface identifying unit, a three-dimensional object identifying unit, a road surface determining unit, and a three-dimensional object composition unit. The road surface identifying unit identifies a road surface in an image. The three-dimensional object identifying unit identifies three-dimensional objects each having a height extending vertically upward from the identified road surface. When the identified three-dimensional objects are separated and are located at respective positions distant from an own vehicle by a same relative distance, the road surface determining unit performs a determination of whether a three-dimensional-object-intervening region between the identified three-dimensional objects has a correspondence to the road surface.
    Type: Grant
    Filed: November 1, 2018
    Date of Patent: October 20, 2020
    Assignee: SUBARU CORPORATION
    Inventor: Toshimi Okubo
  • Patent number: 10803593
    Abstract: The disclosure relates to a method and system for compressing an image. The method involves receiving an image of the one or more images. Further, at least one segmentation algorithm is applied to the image to divide it into a plurality of segments. The method further includes comparing the plurality of segments of the image with seed images, where each seed image includes a seed image identifier. Further, a seed image is associated with the segments of the image if there is a match between the seed image and the plurality of segments. The method also includes storing the image as a residual image and a seed image along with one or more seed image identifiers. Further, the image may be reconstructed based on the residual image and one or more seed images associated with the image. Thereafter, the image may be displayed on a display unit.
    Type: Grant
    Filed: September 18, 2017
    Date of Patent: October 13, 2020
    Assignee: Siemens Healthcare GmbH
    Inventor: Mohana Krishna Munukutla
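The residual storage and reconstruction described in this abstract can be sketched per pixel: store only the difference between the image and its matched seed, then add the seed back to reconstruct. A minimal sketch under the assumption of same-size grayscale arrays:

```python
def residual(image, seed):
    """Store only the per-pixel difference between the image and its matched seed."""
    return [[p - s for p, s in zip(ri, rs)] for ri, rs in zip(image, seed)]

def reconstruct(seed, res):
    """Rebuild the original image from the seed and the stored residual."""
    return [[s + d for s, d in zip(rs, rd)] for rs, rd in zip(seed, res)]
```

When segments match a seed closely, the residual is mostly near-zero values and compresses far better than the raw image.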
  • Patent number: 10796178
    Abstract: A method for face liveness detection and a device for face liveness detection. The method for face liveness detection includes: performing an illumination liveness detection and obtaining an illumination liveness detection result; and determining whether or not a face to be verified passes the face liveness detection at least according to the illumination liveness detection result. Performing the illumination liveness detection and obtaining the illumination liveness detection result includes: acquiring a plurality of illumination images of the face to be verified, in which the plurality of illumination images are captured in a process of dynamically changing the mode of illumination light irradiated on the face to be verified and respectively correspond to various modes of the illumination light; and obtaining the illumination liveness detection result according to a light reflection characteristic of the face to be verified in the plurality of illumination images.
    Type: Grant
    Filed: December 13, 2017
    Date of Patent: October 6, 2020
    Assignees: BEIJING KUANGSHI TECHNOLOGY CO., LTD., MEGVII (BEIJING) TECHNOLOGY CO., LTD.
    Inventors: Haoqiang Fan, Keqing Chen
  • Patent number: 10796436
    Abstract: An inspection apparatus and a method for segmenting an image of a vehicle are disclosed. An X-ray transmission scanning is performed on the vehicle to obtain a transmission image. Each pixel of the transmission image is labeled with a category tag, by using a trained convolutional network. Images of respective parts of the vehicle are determined according to the category tag for each pixel. With the above solutions, it is possible to segment the images of respective parts of a vehicle more accurately in complicated situations or across a wide variety of vehicle types.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: October 6, 2020
    Assignees: Nuctech Company Limited, Tsinghua University
    Inventors: Chuanhong Huang, Qiang Li, Jianping Gu, Yaohong Liu, Ziran Zhao
  • Patent number: 10789703
    Abstract: Autoencoder-based, semi-supervised approaches are used for anomaly detection. Defects on semiconductor wafers can be discovered using these approaches. The model can include a variational autoencoder, such as one that includes ladder networks. Defect-free or clean images can be used to train the model that is later used to discover defects or other anomalies.
    Type: Grant
    Filed: August 21, 2018
    Date of Patent: September 29, 2020
    Assignee: KLA-Tencor Corporation
    Inventors: Shaoyu Lu, Li He, Sankar Venkataraman
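The decision rule behind this abstract is reconstruction error: a model trained only on clean images reconstructs clean inputs well, so a high error flags a likely defect. The "autoencoder" below is a degenerate stand-in (reconstruct everything as the mean clean image) used only to illustrate the thresholding logic, not KLA-Tencor's model.

```python
def fit_mean_model(clean_images):
    """Degenerate stand-in for a trained autoencoder: reconstruct every input
    as the per-pixel mean of the defect-free training images."""
    n = len(clean_images)
    h, w = len(clean_images[0]), len(clean_images[0][0])
    return [[sum(img[y][x] for img in clean_images) / n for x in range(w)]
            for y in range(h)]

def anomaly_score(model, image):
    """Mean squared reconstruction error; high error flags a likely defect."""
    h, w = len(image), len(image[0])
    return sum((image[y][x] - model[y][x]) ** 2
               for y in range(h) for x in range(w)) / (h * w)

def is_defective(model, image, threshold):
    return anomaly_score(model, image) > threshold
```

This is what makes the approach semi-supervised: no defect examples are needed at training time.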
  • Patent number: 10786207
    Abstract: A physiological state determination system includes a CPU and a camera. The camera acquires photographic image data of a time-series change in facial data of a subject to whom brain function activation information that activates human brain function was provided. The CPU determines a mental or physical physiological state of the subject based on facial change information corresponding to the photographic image data.
    Type: Grant
    Filed: April 12, 2018
    Date of Patent: September 29, 2020
    Assignees: Daikin Industries, Ltd., Tokyo Institute of Technology
    Inventors: Junichiro Arai, Takashi Gotou, Makoto Iwakame, Kenichi Hino, Tomoya Hirano, Takahiro Hirayama, Yasunori Kotani, Yoshimi Ohgami, Taro Tomatsu