Patents Examined by Nirav G Patel
  • Patent number: 10977788
    Abstract: An image analysis method according to one or more embodiments may analyze the form of a cell using a deep learning algorithm having a neural network structure. The image analysis method may include: generating data for analysis from an image for analysis in which an analysis target cell is captured; inputting the data for analysis into the deep learning algorithm; and generating data indicating the form of the analysis target cell using the deep learning algorithm. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: April 26, 2018
    Date of Patent: April 13, 2021
    Assignee: SYSMEX Corporation
    Inventors: Yoshinori Sasagawa, Yoshiaki Miyamoto, Kazumi Hakamada, Kengo Gotoh, Yosuke Sekiguchi
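    The flow above (generate analysis data from a cell image, feed it to a neural-network-based deep learning algorithm, read out data indicating the cell's form) might look roughly like the following Python sketch. PyTorch, the network shape, and the cell-form class names are assumptions; an untrained placeholder network stands in for the trained algorithm.

      import numpy as np
      import torch
      import torch.nn as nn

      CELL_FORM_CLASSES = ["segmented neutrophil", "band neutrophil", "lymphocyte"]  # assumed labels

      class CellFormNet(nn.Module):
          """Placeholder for the trained deep learning algorithm with a neural network structure."""
          def __init__(self, num_classes: int):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                  nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1),
              )
              self.classifier = nn.Linear(32, num_classes)

          def forward(self, x):
              return self.classifier(self.features(x).flatten(1))

      def generate_analysis_data(image: np.ndarray) -> torch.Tensor:
          """Turn an RGB cell image (H, W, 3, uint8) into normalized network input."""
          x = torch.from_numpy(image).float().permute(2, 0, 1) / 255.0
          return x.unsqueeze(0)  # add a batch dimension

      model = CellFormNet(len(CELL_FORM_CLASSES)).eval()
      image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in for a captured cell
      with torch.no_grad():
          scores = model(generate_analysis_data(image))
      print("predicted form:", CELL_FORM_CLASSES[int(scores.argmax())])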
  • Patent number: 10977509
    Abstract: An image processing method implemented by a processor includes receiving an image, acquiring a target image that includes an object from the image, calculating an evaluation score by evaluating a quality of the target image, and detecting the object from the target image based on the evaluation score.
    Type: Grant
    Filed: March 20, 2018
    Date of Patent: April 13, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hao Feng, Jae-Joon Han, Changkyu Choi, Chao Zhang, Jingtao Xu, Yanhu Shan, Yaozu An
  • Patent number: 10970849
    Abstract: According to one implementation, a pose estimation and body tracking system includes a computing platform having a hardware processor and a system memory storing a software code including a tracking module trained to track motions. The software code receives a series of images of motion by a subject, and for each image, uses the tracking module to determine locations corresponding respectively to two-dimensional (2D) skeletal landmarks of the subject based on constraints imposed by features of a hierarchical skeleton model intersecting at each 2D skeletal landmark. The software code further uses the tracking module to infer joint angles of the subject based on the locations and determine a three-dimensional (3D) pose of the subject based on the locations and the joint angles, resulting in a series of 3D poses. The software code outputs a tracking image corresponding to the motion by the subject based on the series of 3D poses.
    Type: Grant
    Filed: April 16, 2019
    Date of Patent: April 6, 2021
    Assignees: Disney Enterprises, Inc., ETH Zürich (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Ahmet Cengiz Öztireli, Prashanth Chandran, Markus Gross
  • Patent number: 10970847
    Abstract: Techniques are disclosed for document boundary detection (BD) from an input image using a combination of a deep learning model and image processing algorithms. Quadrilaterals approximating the document boundaries in the input image are determined and rated separately using both approaches: deep learning using a convolutional neural network (CNN) and heuristics using image processing algorithms. Thereafter, the best-rated quadrilateral is selected from the quadrilaterals obtained from both approaches. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: April 6, 2021
    Assignee: Adobe Inc.
    Inventors: Prasenjit Mondal, Anuj Shara, Ankit Bal, Deepanshu Arora, Siddharth Kumar
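    The final selection step described above is simple to picture: each approach proposes boundary quadrilaterals with a rating, and the best-rated one overall is kept. The sketch below assumes (quadrilateral, rating) pairs and invented candidate data.

      from typing import List, Tuple

      Quad = List[Tuple[float, float]]  # four (x, y) corners

      def select_best_quadrilateral(
          cnn_candidates: List[Tuple[Quad, float]],
          heuristic_candidates: List[Tuple[Quad, float]],
      ) -> Quad:
          """Return the quadrilateral with the highest rating across both approaches."""
          all_candidates = cnn_candidates + heuristic_candidates
          best_quad, _best_rating = max(all_candidates, key=lambda item: item[1])
          return best_quad

      # Hypothetical candidates from the CNN and the image processing heuristics.
      cnn = [([(10, 12), (390, 15), (388, 500), (12, 498)], 0.91)]
      heur = [([(8, 10), (392, 18), (390, 505), (9, 495)], 0.87)]
      print(select_best_quadrilateral(cnn, heur))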
  • Patent number: 10963992
    Abstract: Method and system for compensating intensity biases in a plurality of digital images. Each digital image of the plurality of digital images contains a plurality of objects and each of the plurality of objects is configured to receive at least one molecule comprising genetic information, wherein the at least one molecule is configured to receive one of at least a first fluorescent compound and a second fluorescent compound. A first digital image of the plurality of digital images is taken by an optical imaging system during emission of electromagnetic radiation by the first fluorescent compound, and a second digital image of the plurality of digital images is taken by the optical imaging system during emission of electromagnetic radiation by the second fluorescent compound.
    Type: Grant
    Filed: October 20, 2017
    Date of Patent: March 30, 2021
    Assignee: QIAGEN GmbH
    Inventors: Thorsten Zerfass, Maiko Lohel, Fernando Carrillo Oesterreich
  • Patent number: 10965957
    Abstract: Introduced here is a technique to create small compressed image files while preserving data quality upon decompression. Upon receiving uncompressed data, such as an image, a video, audio, and/or structured data, a machine learning model identifies an object in the uncompressed data such as a house, a dog, text, a distinct audio signal, a unique data pattern, etc. The identified object is compressed using a compression treatment optimized for the identified object. The identified object, either before or after the compression, is removed from the uncompressed data. The uncompressed data with the identified object removed is then compressed using a standard compression treatment. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: September 4, 2019
    Date of Patent: March 30, 2021
    Assignee: Groq, Inc.
    Inventor: Jonathan Alexander Ross
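    The two-path idea above can be sketched as follows: an identified object region gets its own compression treatment, is removed (here, zeroed out) from the image, and the remainder is compressed with a standard treatment. zlib stands in for both codecs, and the object detector is a placeholder returning a fixed bounding box.

      import zlib
      import numpy as np

      def detect_object(image: np.ndarray):
          """Placeholder for the machine learning model; returns a (y0, y1, x0, x1) box."""
          return 16, 48, 16, 48

      def compress_image(image: np.ndarray) -> dict:
          y0, y1, x0, x1 = detect_object(image)
          object_patch = image[y0:y1, x0:x1].copy()

          # Object-specific treatment (assumed: a stronger zlib setting stands in for a
          # codec tuned to the detected object class).
          object_stream = zlib.compress(object_patch.tobytes(), level=9)

          # Remove the object from the remaining data, then apply the standard treatment.
          remainder = image.copy()
          remainder[y0:y1, x0:x1] = 0
          background_stream = zlib.compress(remainder.tobytes(), level=6)

          return {"box": (y0, y1, x0, x1), "object": object_stream, "background": background_stream}

      image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
      streams = compress_image(image)
      print(streams["box"], len(streams["object"]), len(streams["background"]))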
  • Patent number: 10957053
    Abstract: A multi-object tracking (MOT) framework uses a dual (Siamese) Long Short-Term Memory (LSTM) network for MOT. The dual LSTM network learns metrics along with an online updating scheme for data association. The dual LSTM network fuses relevant features of trajectories to interpret both temporal and spatial components non-linearly and concurrently outputs a similarity score. An LSTM model can be initialized for each trajectory and the metric updated in an online fashion during the tracking phase. An efficient and feasible visual tracking approach using Optical Flow and affine transformations can generate robust tracklets for initialization. Thus, the MOT framework can achieve increased tracking accuracy. Further, the MOT framework has improved performance and can be flexibly utilized in arbitrary scenarios. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: October 18, 2018
    Date of Patent: March 23, 2021
    Assignee: DEEPNORTH INC.
    Inventors: Jinjun Wang, Xingyu Wan
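    A dual (Siamese) LSTM that scores the similarity of two trajectories could be set up roughly as below. The feature dimension, the pooling of hidden states, and the scoring head are assumptions, and the online metric-update scheme is not shown.

      import torch
      import torch.nn as nn

      class SiameseTrajectoryLSTM(nn.Module):
          def __init__(self, feature_dim: int = 8, hidden_dim: int = 32):
              super().__init__()
              self.encoder = nn.LSTM(feature_dim, hidden_dim, batch_first=True)  # shared weights
              self.scorer = nn.Sequential(nn.Linear(2 * hidden_dim, 32), nn.ReLU(), nn.Linear(32, 1))

          def embed(self, trajectory: torch.Tensor) -> torch.Tensor:
              _, (h_n, _) = self.encoder(trajectory)  # keep the final hidden state
              return h_n[-1]

          def forward(self, traj_a: torch.Tensor, traj_b: torch.Tensor) -> torch.Tensor:
              z = torch.cat([self.embed(traj_a), self.embed(traj_b)], dim=-1)
              return torch.sigmoid(self.scorer(z)).squeeze(-1)  # similarity score in [0, 1]

      # Two hypothetical tracklets, each a batch of (time steps, per-frame features).
      model = SiameseTrajectoryLSTM()
      traj_a = torch.randn(1, 10, 8)
      traj_b = torch.randn(1, 12, 8)
      print("similarity:", float(model(traj_a, traj_b)))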
  • Patent number: 10949995
    Abstract: Example embodiments of the present disclosure provide a method and a server for image capture direction recognition, a method and a system for surveillance, and an image capture device. The recognition method includes: extracting deep features of a target image captured by a camera; determining a matched reference image of the target image based on degrees of matching between the deep features of the target image and deep features of a plurality of reference images; obtaining a coordinate position relationship between the matched reference image and the target image; and calculating the image capture direction of the camera at the time of capturing the target image using the coordinate position relationship and direction information of the matched reference image. Example embodiments of the present disclosure may quickly and accurately recognize the image capture direction of a camera, improving the processing efficiency of image capture direction recognition. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: May 21, 2018
    Date of Patent: March 16, 2021
    Inventor: Qianghua Gao
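    One simplified way to picture the recognition method: pick the reference image whose deep features best match the target, then combine that reference's known direction with the coordinate offset between the two images. The cosine matcher, the pixel offsets, and the pixels-to-degrees factor below are placeholder assumptions.

      import numpy as np

      def best_match(target_feat: np.ndarray, reference_feats: np.ndarray) -> int:
          """Index of the reference image whose deep features best match the target (cosine similarity)."""
          t = target_feat / np.linalg.norm(target_feat)
          r = reference_feats / np.linalg.norm(reference_feats, axis=1, keepdims=True)
          return int(np.argmax(r @ t))

      def estimate_capture_direction(target_feat, reference_feats, reference_directions_deg,
                                     horizontal_offsets_px, degrees_per_pixel=0.05):
          idx = best_match(target_feat, reference_feats)
          # Assumed model: the horizontal pixel offset between the matched reference and the
          # target maps linearly to a pan-angle offset from the reference direction.
          return reference_directions_deg[idx] + horizontal_offsets_px[idx] * degrees_per_pixel

      rng = np.random.default_rng(0)
      refs = rng.normal(size=(4, 128))                        # deep features of four reference images
      target = refs[2] + 0.1 * rng.normal(size=128)           # target resembles reference 2
      print(estimate_capture_direction(target, refs, [0.0, 90.0, 180.0, 270.0], [5, -3, 12, 0]))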
  • Patent number: 10950006
    Abstract: Various systems and methods for generating environmentally contextualized patterns are disclosed. The system and method generate the environmentally contextualized pattern from a set of images representing an environment. The color palette of the dominant colors from the representational images is processed to remove the gray hues, set the remaining highest and lowest value hues to a particular contrast, and then determine a split complement from the lowest value hue. An algorithm, such as a reaction-diffusion algorithm, is then utilized to generate a pattern incorporating the aforementioned hues. The pattern generated by the algorithm provides a high degree of visual contrast with the environment that the images represent, allowing an individual wearing the pattern to be readily visually identifiable against the environment. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: May 15, 2018
    Date of Patent: March 16, 2021
    Inventor: Katherine A. McLean
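    The palette processing could be sketched as below: drop near-gray dominant colors, take the highest- and lowest-value remaining hues, and derive a split complement from the lowest-value hue (hue +/- 150 degrees is one common definition). The saturation cutoff is an assumption, and the contrast adjustment and the reaction-diffusion step are not shown.

      import colorsys

      def split_complement(rgb):
          h, s, v = colorsys.rgb_to_hsv(*rgb)                 # RGB components in [0, 1]
          return [colorsys.hsv_to_rgb((h + off) % 1.0, s, v) for off in (150 / 360, 210 / 360)]

      def pattern_palette(dominant_colors, min_saturation=0.2):
          chromatic = [c for c in dominant_colors
                       if colorsys.rgb_to_hsv(*c)[1] >= min_saturation]   # remove gray hues
          by_value = sorted(chromatic, key=lambda c: colorsys.rgb_to_hsv(*c)[2])
          lowest, highest = by_value[0], by_value[-1]
          return [highest, lowest] + split_complement(lowest)

      # Hypothetical dominant colors sampled from environment images (RGB in [0, 1]).
      dominant = [(0.35, 0.45, 0.20), (0.55, 0.50, 0.48), (0.20, 0.25, 0.10), (0.60, 0.55, 0.30)]
      print(pattern_palette(dominant))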
  • Patent number: 10949989
    Abstract: A more effective confidence/uncertainty measure for disparity measurements is determined by evaluating a set of disparity candidates for a predetermined position of a first picture at which the measurement of the disparity relative to a second picture is to be performed. The evaluation involves accumulating a contribution value for each of the set of disparity candidates, where each contribution value depends on the respective disparity candidate and on a dissimilarity to the second picture associated with the respective disparity candidate, according to a function that has a first monotonicity with the dissimilarity associated with the respective disparity candidate and a second monotonicity, opposite to the first, with the absolute difference between the respective disparity candidate and a predetermined disparity having the minimum associated dissimilarity among the dissimilarities associated with the set of disparity candidates. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: February 8, 2019
    Date of Patent: March 16, 2021
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventors: Ronald Op Het Veld, Joachim Keinert
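    One possible instantiation of this confidence measure, purely as an illustration: accumulate, over all disparity candidates, a contribution that decreases with a candidate's dissimilarity (first monotonicity) and increases with its distance from the best-matching disparity (the opposite monotonicity), then map the accumulated ambiguity to a confidence value. The exact functional form below is an assumption.

      import numpy as np

      def disparity_confidence(candidates: np.ndarray, dissimilarities: np.ndarray,
                               sigma: float = 1.0) -> float:
          d_best = candidates[np.argmin(dissimilarities)]       # disparity with minimum dissimilarity
          contributions = np.exp(-dissimilarities / sigma) * np.abs(candidates - d_best)
          ambiguity = contributions.sum()                       # large if good matches lie far from d_best
          return 1.0 / (1.0 + ambiguity)                        # high value = high confidence

      candidates = np.arange(0, 16, dtype=float)
      dissimilarities = 0.2 * (candidates - 7.0) ** 2           # hypothetical matching costs, best at d = 7
      print("confidence:", round(disparity_confidence(candidates, dissimilarities), 3))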
  • Patent number: 10949744
    Abstract: Provided are systems and techniques that provide an output phrase describing an image. An example method includes creating, with a convolutional neural network, feature maps describing image features in locations in the image. The method also includes providing a skeletal phrase for the image by processing the feature maps with a first long short-term memory (LSTM) neural network trained based on a first set of ground truth phrases which exclude attribute words. Then, attribute words are provided by processing the skeletal phrase and the feature maps with a second LSTM neural network trained based on a second set of ground truth phrases including words for attributes. Then, the method combines the skeletal phrase and the attribute words to form the output phrase.
    Type: Grant
    Filed: July 10, 2019
    Date of Patent: March 16, 2021
    Assignee: ADOBE INC.
    Inventors: Zhe Lin, Yufei Wang, Scott Cohen, Xiaohui Shen
  • Patent number: 10939036
    Abstract: A spot detecting apparatus includes a photographing part and a spot detecting part. The photographing part photographs, in a first resolution, an image displayed on a display panel to output first resolution image data, and photographs, in a second resolution, the image displayed on the display panel to output second resolution image data, where the second resolution is higher than the first resolution, and the image displayed on the display panel includes a first spot greater than or equal to a reference size and a second spot less than the reference size. The spot detecting part receives the first resolution image data and the second resolution image data, and subtracts the first resolution image data from the second resolution image data to detect the second spot. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: May 22, 2018
    Date of Patent: March 2, 2021
    Assignee: SAMSUNG DISPLAY CO., LTD.
    Inventors: Se-Yun Kim, Hoi-Sik Moon
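    The subtraction idea can be sketched as below: upsample the lower-resolution capture to the higher-resolution grid and subtract it, so that only spots too small to survive the lower resolution remain in the residual. The resolutions, the nearest-neighbour upsampling, and the threshold are assumptions.

      import numpy as np

      def detect_small_spots(low_res: np.ndarray, high_res: np.ndarray, threshold: float = 50.0):
          scale = high_res.shape[0] // low_res.shape[0]
          upsampled = np.kron(low_res, np.ones((scale, scale)))   # nearest-neighbour upsampling
          residual = high_res.astype(float) - upsampled
          return np.argwhere(np.abs(residual) > threshold)        # pixel coordinates of small spots

      high = np.full((64, 64), 128.0)
      high[10, 20] = 255.0                                        # a "second spot" below the reference size
      low = high.reshape(32, 2, 32, 2).mean(axis=(1, 3))          # simulated first-resolution capture
      print(detect_small_spots(low, high))                        # -> [[10 20]]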
  • Patent number: 10936918
    Abstract: An electronic device and a method for controlling the electronic device are provided. The method for controlling the electronic device includes, based on an occurrence of an event for outputting information being determined, obtaining data for determining a context corresponding to the electronic device, inputting the obtained data to a first model trained by an artificial intelligence algorithm and obtaining information about a person located in a vicinity of the electronic device, inputting the obtained information about the person and information about the event to a second model trained by an artificial intelligence algorithm and obtaining output information corresponding to the event, and providing the obtained output information.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: March 2, 2021
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sojung Yun, Yehoon Kim, Chanwon Seo
  • Patent number: 10937166
    Abstract: A method for structured text detection includes: receiving, by a convolutional neural network, a to-be-detected image and at least one character region template, where the to-be-detected image includes structured text, the at least one character region template includes locations of N first character regions with N being an integer equal to or greater than 1, and the location of each first character region is obtained based on locations of second character regions corresponding to that first character region in M sample images of the same type as the to-be-detected image, where M is an integer equal to or greater than 1; and obtaining, by the convolutional neural network, an actual location of the structured text in the to-be-detected image according to the at least one character region template. A system for structured text detection and a non-transitory computer-readable medium are also provided. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: March 2, 2021
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Donglai Xiang, Yan Xia
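    The character-region-template idea can be sketched as below: the template's N first character regions are derived from corresponding regions in M same-type sample images (here, by averaging their boxes) and then serve as priors that the network refines into actual text locations. The averaging and the per-box refinement offsets are assumptions; the convolutional network itself is not shown.

      import numpy as np

      def build_template(sample_boxes: np.ndarray) -> np.ndarray:
          """sample_boxes: (M, N, 4) boxes as (x0, y0, x1, y1) -> template of N first character regions."""
          return sample_boxes.mean(axis=0)

      def refine_with_template(template: np.ndarray, predicted_offsets: np.ndarray) -> np.ndarray:
          """Stand-in for the CNN: actual locations = template priors + predicted per-box offsets."""
          return template + predicted_offsets

      # Hypothetical boxes for N = 2 fields in M = 3 sample forms of the same type.
      samples = np.array([
          [[10, 10, 110, 30], [10, 50, 80, 70]],
          [[12, 11, 112, 31], [11, 52, 81, 72]],
          [[ 9, 10, 109, 29], [ 9, 49, 79, 69]],
      ], dtype=float)
      template = build_template(samples)
      offsets = np.array([[1, 0, 1, 0], [-1, 1, -1, 1]], dtype=float)   # pretend network output
      print(refine_with_template(template, offsets))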
  • Patent number: 10937177
    Abstract: A determination device generates, based on a first captured image captured by a first image capturing device mounted on a moving object, a shape of a subject (one of several subjects) included in the first captured image. The determination device estimates the location of the shape of the subject after a specific time based on the location of the shape of the subject and a moving speed. The determination device extracts the shape of the subject from a second captured image captured by a second image capturing device mounted on the moving object, compares the location of the shape of the subject extracted from the second captured image with the location of the shape of the subject estimated from the first captured image, and performs a determination related to a moving state of the subject. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: January 24, 2019
    Date of Patent: March 2, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Masahiro Kataoka, Hideki Hara, Keisuke Saito
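    The comparison can be pictured with the sketch below: predict where the subject's shape should be after a specific time given its speed, then compare that prediction with the location extracted from the second camera's image to judge the moving state. The constant-velocity model and the tolerance are simplifying assumptions.

      import numpy as np

      def predict_location(location: np.ndarray, velocity: np.ndarray, dt: float) -> np.ndarray:
          return location + velocity * dt                       # constant-velocity estimate

      def moving_state(predicted: np.ndarray, observed: np.ndarray, tolerance: float = 5.0) -> str:
          deviation = float(np.linalg.norm(observed - predicted))
          return "as expected" if deviation <= tolerance else "deviating"

      loc_first_cam = np.array([120.0, 80.0])                   # shape location in the first captured image
      velocity = np.array([30.0, 0.0])                          # estimated moving speed (pixels per second)
      predicted = predict_location(loc_first_cam, velocity, dt=0.5)
      loc_second_cam = np.array([137.0, 81.0])                  # shape location from the second captured image
      print(moving_state(predicted, loc_second_cam))            # deviation is about 2.2 -> "as expected"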
  • Patent number: 10928190
    Abstract: Techniques are provided for scanning with structured light that are robust to global illumination effects and that provide an accurate determination of position. According to some aspects, the described techniques may be computationally efficient compared with conventional techniques by making use of a novel approach of selecting frequencies of patterns for projection onto a target that mathematically enable efficient calculations. In particular, the selected frequencies may be chosen so that there is a known relationship between the frequencies. This relationship may be derived from Chebyshev polynomials and also relates the chosen frequencies to a low frequency pattern. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: February 23, 2021
    Assignee: Brown University
    Inventors: Gabriel Taubin, Daniel Alejandro Moreno
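    Purely as an illustration of the kind of Chebyshev-derived relationship mentioned above: if one projected pattern's frequency is an integer multiple n of a low base frequency, the identity T_n(cos θ) = cos(nθ) ties the two sinusoidal patterns' intensities together. The specific frequency ratio and the decoding below are not taken from the abstract.

      import numpy as np
      from numpy.polynomial.chebyshev import Chebyshev

      n = 5                                              # assumed ratio between the two pattern frequencies
      theta = np.linspace(0.0, 2.0 * np.pi, 200)         # phase of the low frequency pattern
      low_freq_pattern = np.cos(theta)
      high_freq_pattern = np.cos(n * theta)

      # The degree-n Chebyshev polynomial evaluated at the low frequency pattern reproduces
      # the high frequency pattern: T_n(cos(theta)) == cos(n * theta).
      T_n = Chebyshev.basis(n)
      print("max deviation:", float(np.max(np.abs(T_n(low_freq_pattern) - high_freq_pattern))))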
  • Patent number: 10922838
    Abstract: The present invention provides an image display system, a terminal, a method, and a program that can quickly and accurately display an image corresponding to a particular place. An image display system according to one example embodiment of the present invention includes: an information acquisition unit that acquires information including a position and an orientation of a mobile terminal; and an image acquisition unit that, based on the position and the orientation of the mobile terminal and a position and an orientation associated with an image stored in a storage device in the past, acquires the image.
    Type: Grant
    Filed: February 23, 2017
    Date of Patent: February 16, 2021
    Assignee: NEC CORPORATION
    Inventor: Shizuo Sakamoto
  • Patent number: 10921234
    Abstract: The present invention relates to methods and systems for image cytometry analysis, typically at low optical magnification, where analysis is based on detection of biological particles using UV bright field and optionally one or more sources of excitation light.
    Type: Grant
    Filed: July 3, 2019
    Date of Patent: February 16, 2021
    Assignee: CHEMOMETEC A/S
    Inventors: Martin Glensbjerg, Johan Holm, Søren Kjærulff, Frans Ejner Ravn Hansen
  • Patent number: 10909681
    Abstract: A method for identification of an optimal image within a sequence of image frames includes inputting the sequence of images into a computer processor configured to execute a plurality of neural networks and applying a sliding window to the image sequence to identify a plurality of image frame windows. The image frame windows are processed using a first neural network trained to classify the image frames according to identified spatial features. The image frame windows are also processed using a second neural network trained to classify the image frames according to identified serial features. The results of each classification are concatenated to separate each of the image frame windows into one of two classes, one class containing the optimal image. An output is generated to display the image frame windows classified as including the optimal image. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: January 3, 2019
    Date of Patent: February 2, 2021
    Assignee: The Regents of the University of California
    Inventors: Albert Hsiao, Naeim Bahrami, Tara Retson
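    The two-branch classification can be sketched as below: a sliding window over the image sequence yields frame windows, each window is scored by a spatial branch and a serial (temporal) branch, the branch outputs are concatenated, and a final classifier separates the windows into two classes. Both branches are untrained PyTorch placeholders, and the window length, feature sizes, and class meaning are assumptions.

      import torch
      import torch.nn as nn

      WINDOW = 4                                                         # assumed sliding-window length

      spatial_branch = nn.Sequential(nn.Flatten(start_dim=2), nn.Linear(32 * 32, 16))  # per-frame features
      serial_branch = nn.LSTM(16, 16, batch_first=True)                                # across-frame features
      head = nn.Linear(16 + 16, 2)                                                     # two classes

      def classify_windows(sequence: torch.Tensor) -> torch.Tensor:
          """sequence: (num_frames, 32, 32) -> one class index per sliding window."""
          windows = sequence.unfold(0, WINDOW, 1).permute(0, 3, 1, 2)    # (num_windows, WINDOW, 32, 32)
          spatial = spatial_branch(windows)                              # (num_windows, WINDOW, 16)
          serial, _ = serial_branch(spatial)                             # (num_windows, WINDOW, 16)
          combined = torch.cat([spatial[:, -1], serial[:, -1]], dim=-1)  # concatenate both results
          return head(combined).argmax(dim=-1)                           # class 1 assumed to contain the optimal image

      frames = torch.randn(10, 32, 32)                                   # stand-in image sequence
      print(classify_windows(frames))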
  • Patent number: 10909675
    Abstract: A system and method for characterizing tissues of a subject using multi-parametric imaging are provided. In some aspects, the method includes receiving a set of multi-parametric magnetic resonance (“MR”) images acquired from a subject using an MR imaging system, and selecting at least one region of interest (“ROI”) in the subject using one or more images in the set of multi-parametric MR images. The method also includes performing a texture analysis on corresponding ROIs in the set of multi-parametric MR images to generate a set of texture features, and applying a classification scheme, using the set of texture features, to characterize tissues in the ROI. The method further includes generating a report indicative of characterized tissues in the ROI.
    Type: Grant
    Filed: October 11, 2016
    Date of Patent: February 2, 2021
    Assignees: Mayo Foundation for Medical Education and Research, Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Leland S. Hu, J. Ross Mitchell, Jing Li, Teresa Wu