Patents Examined by Soo Jin Park
  • Patent number: 11361438
    Abstract: In various embodiments, an experiment analysis application detects executional artifacts in experiments involving microwell plates. The experiment analysis application computes one or more sets of spatial features based on one or more heat maps associated with a microwell plate. The experiment analysis application then aggregates the set(s) of spatial features to generate a feature vector. The experiment analysis application inputs the feature vector into a trained classifier. In response, the trained classifier generates a label indicating that the microwell plate is associated with a first executional artifact.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: June 14, 2022
    Assignee: RECURSION PHARMACEUTICALS, INC.
    Inventors: Benjamin Marc Feder Fogelson, Peter McLean, Imran Haque, Marissa Saunders, Eric Fish, Charles Baker, Juan Sebastián Rodríguez Vera
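    The pipeline described in this abstract (per-well heat maps → spatial features → aggregated feature vector → trained classifier) can be pictured with a short sketch. The plate size and the particular features below (row/column means, edge-versus-centre contrast, column drift) are illustrative assumptions, not the patented feature set; any trained classifier would consume the resulting vector.
```python
import numpy as np

def spatial_features(heat_map):
    """Aggregate one plate heat map (rows x columns of per-well readouts)
    into simple spatial statistics."""
    row_means = heat_map.mean(axis=1)
    col_means = heat_map.mean(axis=0)
    edge = np.concatenate([heat_map[0], heat_map[-1],
                           heat_map[1:-1, 0], heat_map[1:-1, -1]])
    centre = heat_map[1:-1, 1:-1]
    return np.concatenate([row_means, col_means,
                           [edge.mean() - centre.mean()],         # edge effect
                           [np.abs(np.diff(col_means)).mean()]])  # left-right drift

# One feature vector per plate, aggregated over its heat maps, then handed to
# any trained classifier to obtain the executional-artifact label.
heat_maps = [np.random.rand(16, 24) for _ in range(2)]            # 384-well plate readouts
feature_vector = np.concatenate([spatial_features(h) for h in heat_maps])
print(feature_vector.shape)
```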
  • Patent number: 11354531
    Abstract: This disclosure relates to a system and method for enabling a robot to perceive and detect socially interacting groups. Various known systems have limited accuracy because they rely on rule-driven methods, and the few data-driven learning methods lack datasets with varied conditions of light, occlusion, and backgrounds. The disclosed method and system detect the formation of a social group of people, or f-formation, in real time in a given scene. The system also detects outliers in the process, i.e., people who are visible but not part of the interacting group, which plays a key role in correct f-formation detection in a real-life crowded environment. Additionally, when a collocated robot plans to join the group, it has to detect a pose for itself along with detecting the formation. Thus, the system provides the approach angle for the robot, which can help it determine the final pose in a socially acceptable manner.
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: June 7, 2022
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Hrishav Bakul Barua, Pradip Pramanick, Chayan Sarkar
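    A rough geometric sketch of the detection and approach steps, assuming people are given as (x, y, heading) tuples: estimate the shared o-space centre, mark people facing away from it as outliers, and propose an approach angle through the widest angular gap between members. The stride, membership test, and gap heuristic are illustrative, not the patented method.
```python
import numpy as np

def f_formation(people, stride=0.8):
    """people: iterable of (x, y, heading). Returns the o-space centre,
    a membership mask (False = outlier), and an approach angle for the robot."""
    people = np.asarray(people, dtype=float)
    xy, theta = people[:, :2], people[:, 2]
    facing = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    centre = np.median(xy + stride * facing, axis=0)      # robust shared focus point
    to_centre = centre - xy
    cosang = (facing * to_centre).sum(1) / (np.linalg.norm(to_centre, axis=1) + 1e-9)
    member = cosang > 0.5                                 # within ~60 degrees of the centre
    ang = np.sort(np.arctan2(*(xy[member] - centre).T[::-1]))
    gaps = np.diff(np.concatenate([ang, ang[:1] + 2 * np.pi]))
    i = int(np.argmax(gaps))
    approach = (ang[i] + gaps[i] / 2) % (2 * np.pi)       # enter through the widest gap
    return centre, member, approach

centre, member, approach = f_formation(
    [(0, 0, 0.0), (2, 0, np.pi), (1, 3, -np.pi / 2), (5, 5, 0.3)])
print(member, np.degrees(approach))
```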
  • Patent number: 11334965
    Abstract: A method of generating a combined image of a body part from a sequence of partially overlapping source images of the body part, each of the partially overlapping source images showing the body part at one of a plurality of different times, the source images being ordered in the sequence according to the different times, the method including: defining a temporally coherent sequence of transformations for registering the partially overlapping source images in the sequence with each other; registering the source images to each other using the defined temporally coherent sequence of transformations, to obtain co-registered images; and combining at least some of the co-registered images into a combined image. Related apparatus and methods are also described.
    Type: Grant
    Filed: February 21, 2019
    Date of Patent: May 17, 2022
    Assignee: Navix International Limited
    Inventors: Shlomo Ben-Haim, Eli Dichterman
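    Restricting the idea to pure translations gives a simple illustration: estimate frame-to-frame shifts by phase correlation, smooth them over time so the sequence of transformations is temporally coherent, and accumulate them into one global offset per frame for building the combined image. Phase correlation and the moving-average smoother are stand-ins for the registration actually claimed.
```python
import numpy as np

def pairwise_shift(a, b):
    """Integer (dy, dx) such that frame b is approximately frame a shifted by
    that amount, estimated by phase correlation."""
    f = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    shape = np.array(a.shape)
    return (np.array([dy, dx]) + shape // 2) % shape - shape // 2

def coherent_offsets(frames, window=3):
    """Smooth the per-step shifts over time so the sequence of transformations is
    temporally coherent, then accumulate them into one global offset per frame;
    overlapping frames can then be averaged into the combined image."""
    steps = np.array([pairwise_shift(frames[i], frames[i + 1])
                      for i in range(len(frames) - 1)], dtype=float)
    kernel = np.ones(window) / window
    smooth = np.stack([np.convolve(steps[:, k], kernel, mode="same") for k in (0, 1)], axis=1)
    return np.vstack([[0.0, 0.0], np.cumsum(smooth, axis=0)])

frames = [np.random.rand(128, 128) for _ in range(5)]
print(coherent_offsets(frames))
```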
  • Patent number: 11328413
    Abstract: Systems and methods for detecting an aneurysm are disclosed. The method includes forming a virtual skeleton model. The virtual skeleton model has a plurality of edges, with each edge having a plurality of skeleton points. Each skeleton point is associated with a subset of a plurality of blood vessel surface points. The method includes virtually fitting elliptically shaped tubules for each edge of the virtual skeleton model and identifying a potential aneurysm based on the fitted elliptically shaped tubules.
    Type: Grant
    Filed: July 18, 2019
    Date of Patent: May 10, 2022
    Assignee: iSchemaView, Inc.
    Inventors: Val Smiricinschi, Kristen Catherine Karman-Shoemake, Mohamed Haithem Babiker
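    The tubule-fitting step can be pictured as follows: for each skeleton point, project its associated blood vessel surface points onto the plane normal to the local vessel direction, estimate elliptical semi-axes from the in-plane covariance, and flag skeleton points whose major axis balloons relative to the rest of the edge. The PCA-style fit and the 1.5x threshold are assumptions for illustration.
```python
import numpy as np

def ellipse_semi_axes(surface_pts, centre, tangent):
    """Project the surface points associated with one skeleton point onto the
    plane normal to the local tangent and estimate elliptical semi-axes from the
    in-plane covariance (assumes points spread around the full cross-section)."""
    tangent = tangent / np.linalg.norm(tangent)
    rel = np.asarray(surface_pts, dtype=float) - centre
    rel -= np.outer(rel @ tangent, tangent)               # drop the along-vessel component
    evals = np.sort(np.linalg.eigvalsh(np.cov(rel.T)))[::-1][:2]
    return np.sqrt(2.0 * np.maximum(evals, 0.0))          # approx. (major, minor)

def flag_potential_aneurysm(major_axes, ratio=1.5):
    """Flag skeleton points whose major semi-axis exceeds the edge's median by
    a chosen ratio."""
    major_axes = np.asarray(major_axes)
    return major_axes > ratio * np.median(major_axes)

ring = np.array([[np.cos(t), np.sin(t), 0.0]
                 for t in np.linspace(0, 2 * np.pi, 24, endpoint=False)])
print(ellipse_semi_axes(ring * [3.0, 1.0, 1.0], np.zeros(3), np.array([0.0, 0.0, 1.0])))
```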
  • Patent number: 11321839
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a machine learning model to segment magnified images of tissue samples. The method includes obtaining a magnified image of a tissue sample; processing an input comprising the image, features derived from the image, or both, in accordance with current values of model parameters of a machine learning model to generate an automatic segmentation of the image into a plurality of tissue classes; providing, to a user through a user interface, an indication of: (i) the image, and (ii) the automatic segmentation of the image; determining an edited segmentation of the image, comprising applying modifications specified by the user to the automatic segmentation of the image; and determining updated values of the model parameters of the machine learning model based on the edited segmentation of the image.
    Type: Grant
    Filed: September 22, 2020
    Date of Patent: May 3, 2022
    Assignee: Applied Materials, Inc.
    Inventors: Sumit Kumar Jha, Aditya Sista, Ganesh Kumar Mohanur Raghunathan, Ubhay Kumar, Kedar Sapre
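    A minimal sketch of the train-on-edits loop, using PyTorch with a toy one-layer stand-in for the segmentation network; apply_user_edits is a placeholder for whatever corrected mask the user interface returns, and none of the dimensions reflect the actual model.
```python
import torch
import torch.nn as nn

model = nn.Conv2d(1, 3, kernel_size=3, padding=1)         # toy stand-in, 3 tissue classes
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def apply_user_edits(auto_mask):
    # Placeholder: in practice the user interface returns the corrected mask.
    return auto_mask

image = torch.rand(1, 1, 128, 128)                        # magnified tissue image
logits = model(image)                                     # automatic segmentation
auto_mask = logits.argmax(dim=1)                          # shown to the user
edited_mask = apply_user_edits(auto_mask)                 # user's modifications applied
loss = loss_fn(logits, edited_mask)                       # train on the edited segmentation
optimizer.zero_grad()
loss.backward()
optimizer.step()                                          # updated model parameter values
```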
  • Patent number: 11295178
    Abstract: An image classification method includes: obtaining an original image and a category of an object included in the original image; adjusting a first display parameter of the original image to obtain an adjusted original image; and transforming a second display parameter of the original image to obtain a new image. The adjusted first display parameter satisfies a value condition, and the transformed second display parameter of the new image satisfies a distribution condition. The method also includes training a neural network model based on the category of the included object and a training set constructed by combining the adjusted original image and the new image; and inputting a to-be-predicted image into the trained neural network model, and determining the category of the object included in the to-be-predicted image.
    Type: Grant
    Filed: April 20, 2020
    Date of Patent: April 5, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Pai Peng, Kailin Wu, Xiaowei Guo
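    One plausible reading of the two conditions, sketched with NumPy: rescale a display parameter (intensity) into a target range to satisfy the value condition, standardize it to a target distribution to satisfy the distribution condition, and combine both versions with the object's category into the training set. The specific transforms are assumptions, not the patented procedure.
```python
import numpy as np

def adjust_value_range(img, lo=0.0, hi=1.0):
    """Rescale the display parameter so its values satisfy a target range."""
    mn, mx = img.min(), img.max()
    return lo + (img - mn) * (hi - lo) / (mx - mn + 1e-9)

def transform_distribution(img):
    """Map the display parameter to zero mean / unit variance."""
    return (img - img.mean()) / (img.std() + 1e-9)

original = np.random.rand(224, 224) * 255.0                    # original image
category = "example_category"                                  # label of the included object
training_set = [(adjust_value_range(original), category),      # adjusted original image
                (transform_distribution(original), category)]  # new image
```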
  • Patent number: 11295112
    Abstract: Embodiments discussed herein facilitate training deep learning models to generate synthetic versions of histological texture features and employing such deep learning models. One example embodiment is an apparatus configured to: convert a stained histological image to grayscale; extract a plurality of patches from the grayscale image; for each patch of the plurality of patches, provide that patch to a deep learning model trained to generate a synthetic version of a texture feature, and obtain an associated patch from the deep learning model that indicates an associated value of the synthetic version of the histology texture feature for each pixel of that patch; and merge the associated patches for each patch of the plurality of patches to generate an associated feature map for the stained histological image, wherein the associated feature map indicates the associated value of the synthetic version of the histology texture feature for each pixel of the image.
    Type: Grant
    Filed: September 23, 2020
    Date of Patent: April 5, 2022
    Assignee: Case Western Reserve University
    Inventors: Cheng Lu, Anant Madabhushi, Khoi Le
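    The patch-wise pipeline (grayscale conversion → patch extraction → per-patch model output → merged feature map) can be sketched as below; texture_model is a stand-in for the trained deep learning model and simply returns each patch's standard deviation, and the patch size is arbitrary.
```python
import numpy as np

def to_grayscale(rgb):
    return rgb @ np.array([0.299, 0.587, 0.114])

def texture_model(patch):
    # Stand-in for the trained network; one value per pixel of the patch.
    return np.full(patch.shape, patch.std())

def synthetic_feature_map(stained_rgb, patch=64):
    gray = to_grayscale(stained_rgb)
    out = np.zeros_like(gray)
    for y in range(0, gray.shape[0], patch):
        for x in range(0, gray.shape[1], patch):
            tile = gray[y:y + patch, x:x + patch]
            out[y:y + patch, x:x + patch] = texture_model(tile)   # merge per-patch outputs
    return out

feature_map = synthetic_feature_map(np.random.rand(256, 256, 3))
print(feature_map.shape)
```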
  • Patent number: 11288797
    Abstract: Embodiments may include techniques to choose a model based on the similarity of computed features of an input to computed features of several models, in order to improve feature analysis using machine learning models. A method of image analysis may comprise: extracting, from each validation image of a plurality of validation images used to train a plurality of machine learning models, a training feature vector corresponding to each of the plurality of machine learning models; extracting from a new image a new feature vector corresponding to each of the plurality of machine learning models; comparing each new feature vector with the training feature vector corresponding to the same machine learning model; and selecting and outputting an inference for the new image generated by the machine learning model for which the new feature vector and the training feature vector are most similar.
    Type: Grant
    Filed: July 8, 2020
    Date of Patent: March 29, 2022
    Assignee: International Business Machines Corporation
    Inventors: Flora Gilboa-Solomon, Efrat Hexter, Dana Levanony, Aviad Zlotnick
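    A compact sketch of the selection rule: score each model by the similarity (here, cosine) between the new image's feature vector in that model's feature space and the stored feature vector of that model's validation images, then return the inference from the highest-scoring model. The .features/.predict/.validation_features interface is hypothetical.
```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def select_and_infer(models, new_image):
    """Pick the model whose feature space places the new image closest to its own
    validation-image features, then return that model's inference."""
    scores = [cosine(m.features(new_image), m.validation_features) for m in models]
    return models[int(np.argmax(scores))].predict(new_image)

class ToyModel:
    """Stand-in exposing the hypothetical interface used above."""
    def __init__(self, name, seed):
        self.name = name
        self._rng = np.random.default_rng(seed)
        self.validation_features = self._rng.random(16)    # features of the validation images
    def features(self, image):
        return self._rng.random(16)                        # feature vector for one image
    def predict(self, image):
        return f"inference from {self.name}"

print(select_and_infer([ToyModel("model_a", 0), ToyModel("model_b", 1)], np.zeros((8, 8))))
```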
  • Patent number: 11288538
    Abstract: A method is disclosed. The method includes obtaining an object for prediction and a plurality of candidate scenes by a computer device; inputting the object for prediction and a current candidate scene to a distance measurement model, where the distance measurement model calculates a feature vector corresponding to the current candidate scene based on a trained scene feature subnetwork and outputs a distance from the object for prediction to the current candidate scene based on the object for prediction and the feature vector corresponding to the current candidate scene, the model parameters of the distance measurement model including a parameter determined by a trained object feature subnetwork; obtaining distances from the object for prediction to the plurality of candidate scenes based on the distance measurement model; and determining a target scene corresponding to the object for prediction based on the distances from the object for prediction to the plurality of candidate scenes.
    Type: Grant
    Filed: July 17, 2018
    Date of Patent: March 29, 2022
    Assignee: Shenzhen University
    Inventors: Ruizhen Hu, Hui Huang, Hao Zhang
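    The inference path can be sketched with two toy subnetworks: an object feature subnetwork and a scene feature subnetwork embed the object and each candidate scene, the distance is taken in the shared embedding space, and the target scene is the one at minimum distance. The linear layers and dimensions are placeholders, not the patented architecture.
```python
import torch
import torch.nn as nn

object_net = nn.Linear(128, 32)     # toy object feature subnetwork (trained separately)
scene_net = nn.Linear(256, 32)      # toy scene feature subnetwork

def scene_distance(obj, scene):
    """Distance from the object for prediction to one candidate scene."""
    return torch.norm(object_net(obj) - scene_net(scene))

obj = torch.rand(128)                                   # object for prediction
candidates = [torch.rand(256) for _ in range(5)]        # candidate scenes
distances = torch.stack([scene_distance(obj, s) for s in candidates])
target_scene = int(torch.argmin(distances))             # scene corresponding to the object
print(target_scene)
```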
  • Patent number: 11284119
    Abstract: The present disclosure provides a decoding method and apparatus. The decoding method mainly includes: extracting pre-indexed information, storing the pre-indexed information in another file associated with a to-be-decoded file or at a tail end of the to-be-decoded file, then reading the pre-indexed information before decoding is performed, and performing parallel decoding on multiple data segments in the to-be-decoded file according to the pre-indexed information. Using the foregoing storage method for the pre-indexed information may effectively reduce I/O operations when the pre-indexed information is read, so as to avoid, to some extent, a system frame-freezing phenomenon that may otherwise occur during decoding.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: March 22, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Yuqiong Xu, Zhenkun Zhou, Tao Yu
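    A sketch of the storage scheme under an assumed footer layout: a JSON list of (offset, length) pairs is written at the tail of the file, followed by an 8-byte length field, so a single seek recovers the whole index before the segments are decoded in parallel. The footer format and the trivial decode_segment are illustrative assumptions, not the patented format.
```python
import json
import struct
from concurrent.futures import ThreadPoolExecutor

def read_tail_index(path):
    """One seek to the tail recovers the pre-indexed information: an assumed
    footer of JSON-encoded (offset, length) pairs followed by an 8-byte length."""
    with open(path, "rb") as f:
        f.seek(-8, 2)
        (index_len,) = struct.unpack("<Q", f.read(8))
        f.seek(-8 - index_len, 2)
        return json.loads(f.read(index_len))

def decode_segment(path, offset, length):
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)                 # placeholder for the real segment decoder

def parallel_decode(path, workers=4):
    index = read_tail_index(path)             # read before decoding starts
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda seg: decode_segment(path, *seg), index))
```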
  • Patent number: 11282201
    Abstract: Provided is an information processing device that includes a first information acquisition section that acquires first information on the basis of a still image in a frame corresponding to a predetermined time point, from among a plurality of images of a biological sample captured in a time-series manner, a second information acquisition section that acquires second information on the basis of an interframe change of the plurality of images in a predetermined period, and a determination section that determines an event regarding the biological sample, using the first information and the second information.
    Type: Grant
    Filed: October 18, 2017
    Date of Patent: March 22, 2022
    Assignee: SONY CORPORATION
    Inventors: Rei Murata, Shinji Watanabe
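    In the simplest possible reading, the first information is a statistic of one still frame, the second information is a statistic of the inter-frame change over the surrounding period, and the event determination combines the two; the statistics and thresholds below are placeholder stand-ins for the learned quantities in the abstract.
```python
import numpy as np

def detect_event(frames, t_index, still_thresh=0.6, change_thresh=0.05):
    """frames: time-series stack of images of a biological sample."""
    frames = np.asarray(frames, dtype=float)
    first_info = frames[t_index].mean()                    # from one still frame
    second_info = np.abs(np.diff(frames, axis=0)).mean()   # from the interframe change
    return first_info > still_thresh and second_info > change_thresh

clip = np.random.rand(10, 64, 64)
print(detect_event(clip, t_index=5))
```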
  • Patent number: 11273378
    Abstract: Disclosed are a system comprising a computer-readable storage medium storing at least one program and a computer-implemented method for digital avatars. An interface module receives a request message to determine measurements of a user. A graphics engine sub-module accesses a first set of data that is indicative of locations in a first image of a user. The locations are points of the user's body in the first image. The graphics engine sub-module accesses a second set of data that is indicative of a first physical-space measurement of the user. A computational sub-module determines, based at least partly on the locations and the first physical-space measurement, an estimate of a second physical-space measurement of the user.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: March 15, 2022
    Assignee: EBAY, INC.
    Inventors: Akshay Gadre, Kerri Breslin
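    The estimation step reduces to fixing a pixels-to-physical-units scale from the one known measurement and reading the second measurement off the landmark locations; the landmark names, coordinates, and the 41 cm shoulder width below are made up for illustration.
```python
import numpy as np

def estimate_measurement(keypoints, known_pair, known_length_cm, target_pair):
    """keypoints: body-landmark name -> (x, y) pixel location in the first image."""
    def dist(pair):
        a, b = (np.asarray(keypoints[p], dtype=float) for p in pair)
        return np.linalg.norm(a - b)
    scale = known_length_cm / dist(known_pair)      # cm per pixel from the known measurement
    return scale * dist(target_pair)                # estimated second measurement

pts = {"l_shoulder": (120, 200), "r_shoulder": (260, 205),
       "waist_l": (150, 430), "waist_r": (235, 432)}
print(estimate_measurement(pts, ("l_shoulder", "r_shoulder"), 41.0, ("waist_l", "waist_r")))
```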
  • Patent number: 11269405
    Abstract: A method for determining correspondence between a gaze direction and an environment around a wearable device is disclosed. The wearable device may include an eye tracking device and an outward facing image sensor. The method may include receiving an input parameter and at least one scene image from the outward facing image sensor. The method may further include determining, with at least the eye tracking device, at least one gaze direction of a wearer of the wearable device at a point in time corresponding to when the scene image was captured by the outward facing image sensor. The method may additionally include determining, based at least in part on the input parameter, that a particular scene image includes at least a portion of a predefined image. The method may moreover include determining, based on the at least one gaze direction, at least one gaze point on the particular scene image.
    Type: Grant
    Filed: August 31, 2017
    Date of Patent: March 8, 2022
    Assignee: Tobii AB
    Inventors: André Lovtjärn, Jesper Högström, Jonas Högström, Rasmus Petersson, Mårten Skogö, Wilkey Wong
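    The last two determinations can be pictured with a pinhole model: project the gaze direction into the scene image to obtain a gaze point, then test whether that point falls inside the region where the predefined image was detected. The camera intrinsics and the rectangular region test are placeholder assumptions.
```python
import numpy as np

def gaze_point(gaze_dir, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """Project a gaze direction (unit vector in the scene camera's frame) to a
    pixel in the scene image with a pinhole model."""
    x, y, z = gaze_dir
    return np.array([fx * x / z + cx, fy * y / z + cy])

def gaze_on_region(gaze_dir, region):
    """region: (x0, y0, x1, y1) where the predefined image was detected."""
    u, v = gaze_point(gaze_dir)
    x0, y0, x1, y1 = region
    return x0 <= u <= x1 and y0 <= v <= y1

print(gaze_on_region((0.05, -0.02, 1.0), (300, 180, 420, 300)))
```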
  • Patent number: 11270098
    Abstract: Methods are disclosed for clustering biological samples and other objects using a grand canonical ensemble. A biological sample is characterized by data attributes from varying sources (e.g. NGS, other types of high-dimensional cytometric data, observed disease state) and of varying data types (e.g. Boolean, continuous, or coded sets) organized as vectors (as many as 10^9) having as many as 10^6, 10^9, or more components. The biological samples or observational data are modeled as particles of a grand canonical ensemble which can be variably distributed among partitions. A pseudo-energy is defined as a measure of inverse similarity between the particles. Minimization of grand canonical ensemble pseudo-energy corresponds to clustering maximally similar particles in each partition, thereby determining clusters of the biological samples. The sample clusters can be used for feature discovery, gene and pathway identification, and development of cell based therapeutics, or for other purposes.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: March 8, 2022
    Assignee: THE UNITED STATES OF AMERICA, AS REPRESENTED BY THE SECRETARY, DEPARTMENT OF HEALTH AND HUMAN SERVICES
    Inventors: Elaine Ellen Thompson, Vahan Simonyan, Malcolm Moos, Jr.
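    A toy version of the pseudo-energy minimisation: define the pseudo-energy of a partitioning as the sum of pairwise dissimilarities inside each partition, then greedily move samples ('particles') to whichever partition lowers it, so partition occupancies vary freely in the spirit of a grand canonical ensemble. Euclidean dissimilarity and the greedy sweep are illustrative simplifications of the patent's formulation.
```python
import numpy as np

def pseudo_energy(D, assign, k):
    """Sum of pairwise dissimilarities inside each partition."""
    return sum(D[np.ix_(assign == p, assign == p)].sum() / 2.0 for p in range(k))

def cluster(samples, k=4, sweeps=20, seed=0):
    X = np.asarray(samples, dtype=float)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # inverse-similarity measure
    assign = np.random.default_rng(seed).integers(0, k, len(X))
    for _ in range(sweeps):
        for i in range(len(X)):                            # move each particle greedily
            assign[i] = int(np.argmin([D[i, assign == p].sum() for p in range(k)]))
    return assign, pseudo_energy(D, assign, k)

labels, energy = cluster(np.random.rand(50, 8))
print(labels, energy)
```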
  • Patent number: 11259696
    Abstract: A method of determining a geometrical measurement of a retina of an eye, comprising obtaining a two dimensional representation of at least a portion of the retina of the eye (34), deriving a geometrical remapping which converts the two dimensional representation of the retinal portion to a three dimensional representation of the retinal portion (36), using one or more coordinates of the two dimensional representation of the retinal portion to define the geometrical measurement to be taken of the retina on the two dimensional representation (38), using the geometrical remapping to convert the or each coordinate of the two dimensional representation of the retinal portion to an equivalent coordinate of the three dimensional representation of the retinal portion (40), and using the or each equivalent coordinate of the three dimensional representation of the retinal portion to determine the geometrical measurement of the retina of the eye (42).
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: March 1, 2022
    Assignee: OPTOS PLC
    Inventors: Jano Van Hemert, Michael Verhoek
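    A simplified picture of why the remapping matters: once each 2D retinal coordinate is remapped onto a sphere approximating the eye, a distance measurement becomes a great-circle length on that sphere rather than a flat pixel distance. The azimuthal-equidistant remapping, 12 mm eye radius, and millimetres-per-pixel scale below are assumptions for illustration.
```python
import numpy as np

def remap_to_sphere(uv, radius_mm=12.0, mm_per_px=0.01):
    """Hypothetical geometrical remapping: treat the 2D coordinate (relative to the
    image centre) as an azimuthal-equidistant projection onto a sphere."""
    u, v = np.asarray(uv, dtype=float)
    rho = np.hypot(u, v) * mm_per_px / radius_mm           # angle from the posterior pole
    phi = np.arctan2(v, u)
    return radius_mm * np.array([np.sin(rho) * np.cos(phi),
                                 np.sin(rho) * np.sin(phi),
                                 np.cos(rho)])

def retinal_distance(uv_a, uv_b, radius_mm=12.0):
    """Geometrical measurement taken as a great-circle distance on the 3D representation."""
    a, b = remap_to_sphere(uv_a, radius_mm), remap_to_sphere(uv_b, radius_mm)
    return radius_mm * np.arccos(np.clip(a @ b / radius_mm**2, -1.0, 1.0))

print(retinal_distance((250, 0), (-400, 300)))
```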
  • Patent number: 11263500
    Abstract: A method for designating a given image as similar/dissimilar with respect to a reference image is provided. The method includes normalizing the given image, where normalizing includes performing pre-processing and a lossy compression on the given image to obtain a lossy representation, and the pre-processing includes at least one of cropping, fundamental extracting, gray scale converting, and lower color bit converting. The method also includes comparing the lossy representation of the given image with a reference representation, which is a version of a reference spam image after the reference spam image has undergone the same normalizing process. The method further includes, if the lossy representation of the given image matches the reference representation, designating the given image similar to the reference image, and, if it does not match, designating the given image dissimilar to the reference image.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: March 1, 2022
    Assignee: Trend Micro Incorporated
    Inventors: Jonathan James Oliver, Yun-Chian Chang
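    A toy normalizing pipeline in the spirit of the abstract: grayscale conversion, border cropping, block-average downscaling, and low-bit quantisation produce a lossy representation that is compared against a reference normalized the same way. The output size, bit depth, and mismatch threshold are illustrative choices, not the patented compression.
```python
import numpy as np

def normalize(img, size=16, bits=2):
    """Grayscale, crop a border, shrink by block averaging, and keep only the top
    `bits` of each pixel -- one possible lossy representation."""
    gray = img.mean(axis=2) if img.ndim == 3 else img
    gray = gray[4:-4, 4:-4]                                  # cropping
    h, w = (gray.shape[0] // size) * size, (gray.shape[1] // size) * size
    blocks = gray[:h, :w].reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (blocks / 256.0 * (1 << bits)).astype(np.uint8)   # lower color bit converting

def is_similar(candidate, reference, max_mismatch=8):
    """Designate the candidate similar if its lossy representation matches the
    reference representation up to a small number of cells."""
    return int((normalize(candidate) != normalize(reference)).sum()) <= max_mismatch

image = np.random.randint(0, 256, (200, 300, 3))
print(is_similar(image, image))
```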
  • Patent number: 11257586
    Abstract: Human mesh model recovery may utilize prior knowledge of the hierarchical structural correlation between different parts of a human body. Such structural correlation may be between a root kinematic chain of the human body and a head or limb kinematic chain of the human body. Shape and/or pose parameters relating to the human mesh model may be determined by first determining the parameters associated with the root kinematic chain and then using those parameters to predict the parameters associated with the head or limb kinematic chain. Such a task can be accomplished using a system comprising one or more processors and one or more storage devices storing instructions that, when executed by the one or more processors, cause the one or more processors to implement one or more neural networks trained to perform functions related to the task.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: February 22, 2022
    Assignee: SHANGHAI UNITED IMAGING INTELLIGENCE CO., LTD.
    Inventors: Srikrishna Karanam, Ziyan Wu, Georgios Georgakis
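    The hierarchical prediction order can be sketched with two toy regression heads: the root-chain parameters are predicted from the image features first, then concatenated with those features to predict a head or limb chain. The dimensions and linear layers are placeholders, not the patented network.
```python
import torch
import torch.nn as nn

feat_dim, root_dim, limb_dim = 256, 24, 30
root_head = nn.Linear(feat_dim, root_dim)                 # root kinematic chain parameters
limb_head = nn.Linear(feat_dim + root_dim, limb_dim)      # limb chain, conditioned on the root

image_features = torch.rand(1, feat_dim)                  # from any image backbone
root_params = root_head(image_features)                   # predicted first
limb_params = limb_head(torch.cat([image_features, root_params], dim=1))
print(root_params.shape, limb_params.shape)
```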
  • Patent number: 11244182
    Abstract: A method includes receiving a first image and a second image, wherein the first and second images represent first and second relative locations, respectively, of an image acquisition device with respect to a subject. The method also includes determining, using the first and second images, a total relative displacement of the subject with respect to the image acquisition device between a time of capture of the first image and a time of capture of the second image, and determining, based on sensor data associated with one or more sensors associated with the image acquisition device, a component of the total relative displacement associated with a motion of the image acquisition device. The method also includes determining, based on a difference between the total relative displacement and the component, that the subject is an alternative representation of a live person, and in response, preventing access to a secure system.
    Type: Grant
    Filed: April 19, 2021
    Date of Patent: February 8, 2022
    Assignee: Advanced New Technologies Co., Ltd.
    Inventors: Gregory Lee Storm, Reza R. Derakhshani
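    The decision boils down to subtracting the device's own motion (from the sensors) from the total displacement observed in the images and checking what remains: if almost nothing remains, the 'subject' moved rigidly with the camera, which is characteristic of a photo or screen. The displacement values and threshold below are illustrative.
```python
import numpy as np

def likely_live(total_displacement, camera_displacement, threshold=0.15):
    """total_displacement: subject motion estimated from the two images.
    camera_displacement: the component explained by the device's own motion,
    estimated from its sensors. A small residual means the 'subject' moved
    rigidly with the camera, as a printed or on-screen face would."""
    residual = np.linalg.norm(np.asarray(total_displacement, dtype=float)
                              - np.asarray(camera_displacement, dtype=float))
    return residual > threshold

if not likely_live((0.42, 0.10), (0.40, 0.09)):
    print("alternative representation detected: access denied")
```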
  • Patent number: 11232293
    Abstract: A performance capture system is provided to detect one or more active marker units in a live action scene. Active marker units emanate at least one wavelength of light that is captured by the performance capture system and used to detect the active markers in the scene. The system detects the presence of the light as a light patch in captured frames and determines if the light patch represents light from an active marker unit. In some implementations, various active markers in a scene may emanate different wavelengths of light. For example, wavelengths of light from multi-emitting active marker units may be changed due to various conditions in the scene.
    Type: Grant
    Filed: May 27, 2021
    Date of Patent: January 25, 2022
    Assignee: WETA DIGITAL LIMITED
    Inventors: Dejan Momcilovic, Jake Botting
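    Detection of a light patch can be sketched with thresholding and blob labelling (scipy.ndimage here): bright pixels are grouped into patches and only patches of plausible size are kept as candidate active marker units; distinguishing which marker emitted which wavelength would need per-channel or per-frame filtering. The thresholds are illustrative.
```python
import numpy as np
from scipy import ndimage

def detect_light_patches(frame, intensity_thresh=0.8, min_area=3, max_area=200):
    """Threshold the frame, group bright pixels into light patches, and keep the
    patches whose size is plausible for an active marker unit."""
    labels, n = ndimage.label(frame > intensity_thresh)
    centroids = []
    for patch in range(1, n + 1):
        mask = labels == patch
        if min_area <= int(mask.sum()) <= max_area:
            centroids.append(ndimage.center_of_mass(mask))
    return centroids

frame = np.zeros((480, 640))
frame[100:104, 200:204] = 1.0                              # one emitted light patch
print(detect_light_patches(frame))
```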
  • Patent number: 11231576
    Abstract: A method and microscope are disclosed in which a fluorescence intensity, integrated over a partial region of an isolated intensity distribution in a first individual image, is corrected by means of a background intensity value, integrated over the same image region, from another individual image in which this image region is free from fluorescence. The axial position can be ascertained on the basis of two fluorescence intensity values corrected in this way, each from a different partial region of the intensity distribution.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: January 25, 2022
    Assignee: Carl Zeiss Microscopy GmbH
    Inventors: Christian Franke, Markus Sauer, Sebastian van de Linde
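    The correction amounts to integrating the fluorescence over a partial region of the first image and subtracting the intensity integrated over the same region of a frame in which that region is fluorescence-free; the axial position then follows from the ratio of two such corrected values taken over different partial regions. The regions below are arbitrary and the ratio-to-z calibration is omitted.
```python
import numpy as np

def corrected_intensity(frame, background_frame, region):
    """Fluorescence integrated over one partial region of the first individual image,
    corrected by the intensity integrated over the same region of another individual
    image in which that region is free from fluorescence."""
    ys, xs = region
    return frame[ys, xs].sum() - background_frame[ys, xs].sum()

def axial_ratio(frame, background_frame, inner_region, outer_region):
    """Ratio of two corrected intensities from different partial regions; mapping
    this ratio to an axial position needs a calibration curve (omitted)."""
    inner = corrected_intensity(frame, background_frame, inner_region)
    outer = corrected_intensity(frame, background_frame, outer_region)
    return inner / (outer + 1e-12)

frame = np.random.rand(64, 64)
background = np.random.rand(64, 64) * 0.1
inner = np.ix_(range(28, 36), range(28, 36))
outer = np.ix_(range(20, 44), range(20, 44))
print(axial_ratio(frame, background, inner, outer))
```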