Patents Examined by Nirav G Patel
  • Patent number: 10380736
    Abstract: Methods of obtaining a suspect nodule's size and/or growth rate are provided. In one embodiment, the method begins, at a first time, with: (i) obtaining a first three-dimensional (3D) data set cube of voxels of a patient's anatomy including nodules; (ii) creating a second 3D data set cube of the same size as the first, with all voxel values set to zero; (iii) creating multiple Maximum-Intensity-Projection (MIP) images from the first 3D data set cube taken at different angles; (iv) replacing those voxels in the second 3D data set cube with the corresponding voxels from the first 3D data set cube that provide non-zero values in the multiple MIP images; and (v) converting data in the second 3D data set cube into closed surface volumes, cross-sectional areas or linear dimensions using image segmentation. Data from a second time can be used to determine growth rate.
    Type: Grant
    Filed: October 11, 2016
    Date of Patent: August 13, 2019
    Inventors: Larry Partain, George Zentai, Stavros Prionas, Raisa Pavlyuchkova
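
The voxel-selection steps (ii)-(iv) above can be sketched for the special case of axis-aligned MIPs. This is an illustrative simplification, not the patented method (which takes MIPs at arbitrary angles), and all names are hypothetical:

```python
import numpy as np

def mip_voxel_mask(vol, axes=(0, 1, 2)):
    """Keep only voxels that attain the maximum along at least one
    projection axis (a crude stand-in for multi-angle MIPs)."""
    keep = np.zeros(vol.shape, dtype=bool)
    for ax in axes:
        mip = vol.max(axis=ax, keepdims=True)  # MIP image along this axis
        # a voxel "provides" the MIP value if it equals its ray's maximum
        keep |= (vol == mip) & (vol > 0)
    out = np.zeros_like(vol)   # second cube, all zeros (step ii)
    out[keep] = vol[keep]      # copy contributing voxels (step iv)
    return out
```

Voxels that never dominate any projection ray are left at zero, which is what makes the second cube amenable to segmentation in step (v).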
  • Patent number: 10372962
    Abstract: This application provides techniques, including circuits and designs, which can receive information with respect to fingerprint images, or portions thereof, and which can be incorporated into devices using fingerprint recognition. This application also provides techniques, including devices which perform fingerprint recognition and methods which can be performed by those devices. In one embodiment, techniques can include providing a fingerprint recognition sensor in which one or more portions of each fingerprint can be collected as they are identified, and those portions can be combined into a unified fingerprint template. In this way, collection and enrollment of fingerprints may be simplified for users.
    Type: Grant
    Filed: March 27, 2017
    Date of Patent: August 6, 2019
    Assignee: Apple Inc.
    Inventors: John A. Wright, Byron B. Han, Craig A. Marciniak
  • Patent number: 10354112
    Abstract: A fingerprint identification module is provided for identifying a fingerprint of a finger. The fingerprint identification module includes a sensing chip and a thermally deformable layer. The thermally deformable layer is disposed over the sensing chip and includes a sensing region. When the finger is placed on the sensing region, the fingerprint of the finger is sensed by the sensing chip. If fingerprint identification by the fingerprint identification module fails, the thermally deformable layer is first changed to a molten state and then returned to a solidified state within a predetermined time period. Consequently, the finger is fixed in place by the thermally deformable layer.
    Type: Grant
    Filed: February 1, 2017
    Date of Patent: July 16, 2019
    Inventors: Mao-Hsiu Hsu, Kuan-Pao Ting
  • Patent number: 10342504
    Abstract: One example method for estimating scatter associated with a target object may include acquiring a set of original projection data that includes primary radiation and scattered radiation at one or more selected projection angles associated with the target object, generating a first set of estimated scatter data from the set of original projection data, generating reconstructed image data by performing a first pass reconstruction using the first set of estimated scatter data, and generating a set of reference scatter data associated with the target object based on the reconstructed image data. The example method may also include generating a set of reference primary plus scatter data associated with the target object based on the reconstructed image data, generating a second set of estimated scatter data associated with the target object based on the set of reference primary plus scatter data, and generating perturbation data associated with the target object.
    Type: Grant
    Filed: September 25, 2014
    Date of Patent: July 9, 2019
    Inventors: Josh Star-Lack, Mingshan Sun
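
The first step above (an estimated scatter data set derived directly from the original projections) is commonly approximated by treating scatter as a broad, low-frequency blur of the measured signal. A minimal numpy-only sketch of that first-pass idea, not the patented perturbation method; the scatter fraction and blur width are made-up values:

```python
import numpy as np

def broad_blur(img, sigma):
    # Gaussian blur via FFT (circular boundary conditions), numpy only
    ky = np.fft.fftfreq(img.shape[0])[:, None]
    kx = np.fft.fftfreq(img.shape[1])[None, :]
    kernel_ft = np.exp(-2.0 * (np.pi * sigma) ** 2 * (ky ** 2 + kx ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * kernel_ft))

def first_pass_scatter(projection, scatter_fraction=0.3, sigma=8.0):
    """Model scatter as a broad blur of the measured projection, then
    subtract it to obtain a corrected primary estimate."""
    scatter = scatter_fraction * broad_blur(projection, sigma)
    primary = np.clip(projection - scatter, 0.0, None)
    return primary, scatter
```

The corrected primary would then feed the first-pass reconstruction described in the abstract.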
  • Patent number: 10339675
    Abstract: A tomography method for generating a computed tomography (CT) image, including generating a first tomography image based on first raw data corresponding to a received X-ray comprising acquired photons; determining second raw data by generating a second tomography image having an increased resolution in comparison with the first tomography image and performing forward projection on the second tomography image; determining third raw data based on a first parameter, the first raw data, and the second raw data; and generating a third tomography image based on the third raw data, wherein the determining of the third raw data may be based on information about a distribution of the acquired photons, the information being included in at least one of the first raw data and the second raw data.
    Type: Grant
    Filed: October 31, 2016
    Date of Patent: July 2, 2019
    Inventors: Ki-hwan Choi, Seok-min Han, Sang-wook Yoo, Jong-hyon Yi
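
The step of determining third raw data from a first parameter, the first raw data, and the second raw data can be read as a parameterized combination of the two data sets. A minimal sketch under that assumption; the patent does not disclose this exact formula:

```python
import numpy as np

def blend_raw_data(first_raw, second_raw, alpha):
    """Blend measured raw data with raw data forward-projected from the
    higher-resolution image; alpha plays the role of the 'first parameter'."""
    first_raw = np.asarray(first_raw, dtype=float)
    second_raw = np.asarray(second_raw, dtype=float)
    return alpha * first_raw + (1.0 - alpha) * second_raw
```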
  • Patent number: 10331933
    Abstract: The present disclosure provides an OLED panel, a mobile device and a method for controlling identification, and belongs to the field of display technology. The OLED panel includes an array substrate, an OLED layer disposed on the array substrate, a fingerprint collecting unit array disposed in the array substrate, or a fingerprint collecting unit array disposed between the array substrate and the OLED layer, and a control circuit connected to the fingerprint collecting unit array.
    Type: Grant
    Filed: November 10, 2016
    Date of Patent: June 25, 2019
    Assignee: XIAOMI INC.
    Inventors: Guosheng Li, Xiaoxing Yang, Shanrong Liu
  • Patent number: 10319091
    Abstract: A three-dimensional subtraction angiography image data set including a target region of the patient is acquired. A region of interest is selected. An imaging geometry is defined for monitoring the intervention using an X-ray device. The image-obscuring blood vessels that superimpose the region of interest in the imaging geometry and imaging zones that show fractions of the image-obscuring blood vessels in the imaging geometry are determined. Path information relating to the image-obscuring blood vessels is defined. The information relating to the path is input into a two-dimensional forward projection data set. A fluoroscopic image is acquired in the imaging geometry. Pixels showing the image-obscuring blood vessels in the fluoroscopic image are determined using the path information and image intensity information from the fluoroscopic image. A masked image of the image-obscuring blood vessels is subtracted. The fluoroscopic image that has been modified is displayed.
    Type: Grant
    Filed: December 4, 2016
    Date of Patent: June 11, 2019
    Assignee: Siemens Healthcare GmbH
    Inventors: Thomas Hoffmann, Martin Skalej
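
The final subtraction step can be sketched as removing an estimated vessel contribution at the flagged pixels. A minimal sketch with hypothetical inputs; in the method above, the vessel mask and intensity estimate would come from the path information and the fluoroscopic image intensities:

```python
import numpy as np

def subtract_vessel_mask(fluoro, vessel_pixels, vessel_intensity):
    """Subtract the estimated contribution of image-obscuring vessels at the
    pixels flagged by the path information, then clip to valid intensities."""
    out = fluoro.astype(float).copy()
    out[vessel_pixels] -= vessel_intensity
    return np.clip(out, 0.0, None)
```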
  • Patent number: 10319412
    Abstract: The present disclosure is directed toward systems and methods for tracking objects in videos. For example, one or more embodiments described herein utilize various tracking methods in combination with an image search index made up of still video frames indexed from a video. One or more embodiments described herein utilize a backward and forward tracking method that is anchored by one or more key frames in order to accurately track an object through the frames of a video, even when the video is long and may include challenging conditions.
    Type: Grant
    Filed: November 16, 2016
    Date of Patent: June 11, 2019
    Assignee: ADOBE INC.
    Inventors: Zhihong Ding, Zhe Lin, Xiaohui Shen, Michael Kaplan, Jonathan Brandt
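
The backward-and-forward tracking idea can be illustrated by blending a track run forward from one key frame with a track run backward from the next, trusting each track more near its own anchor. An illustrative sketch of the blending step only, not Adobe's implementation; all names are hypothetical:

```python
import numpy as np

def blend_tracks(fwd, bwd, n_frames):
    """Blend forward-tracked and backward-tracked object positions between
    two key frames, weighting each track by proximity to its anchor frame."""
    fwd = np.asarray(fwd, dtype=float)   # track started at the first key frame
    bwd = np.asarray(bwd, dtype=float)   # track started at the last key frame
    w = np.linspace(0.0, 1.0, n_frames)[:, None]  # weight shifts toward bwd
    return (1.0 - w) * fwd + w * bwd
```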
  • Patent number: 10304190
    Abstract: A method for determining virtual articulation from dental scans. The method includes receiving digital 3D models of a person's maxillary and mandibular arches, and digital 3D models of a plurality of different bite poses of the arches. The digital 3D models of the maxillary and mandibular arches are registered with the bite poses to generate transforms defining spatial relationships between the arches for the bite poses. Based upon the digital 3D models and transforms, the method computes a pure rotation axis representation for each bite pose of the mandibular arch with respect to the maxillary arch. The virtual articulation can be used in making restorations or for diagnostic purposes.
    Type: Grant
    Filed: June 29, 2016
    Date of Patent: May 28, 2019
    Inventors: Alberto Alvarez, Eric S. Hansen, Steven C. Demlow, Richard E. Raby
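
The pure-rotation-axis representation for a bite pose follows from the relative rotation between the arches. Extracting the axis and angle from a 3x3 rotation matrix is standard linear algebra and can be sketched as follows (illustrative only, not the patented registration pipeline):

```python
import numpy as np

def rotation_axis_angle(R):
    """Extract the rotation axis (unit vector) and angle from a 3x3 rotation
    matrix; valid for angles strictly between 0 and pi."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(angle))
    return axis, angle
```

Applied to the rotation part of each bite-pose transform, this yields the axis about which the mandibular arch rotates relative to the maxillary arch.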
  • Patent number: 10303979
    Abstract: Systems and methods that receive as input microscopy images, extract features, and apply layers of processing units to compute one or more sets of cellular phenotype features, corresponding to cellular densities and/or fluorescence measured under different conditions. The system is a neural network architecture having a convolutional neural network followed by a multiple instance learning (MIL) pooling layer. The system does not necessarily require any segmentation steps or per-cell labels, as the convolutional neural network can be trained and tested directly on raw microscopy images in real time. The system computes class-specific feature maps for every phenotype variable using a fully convolutional neural network and uses multiple instance learning to aggregate across these class-specific feature maps. The system produces predictions for one or more reference cellular phenotype variables based on microscopy images with populations of cells.
    Type: Grant
    Filed: November 16, 2016
    Date of Patent: May 28, 2019
    Assignee: PHENOMIC AI INC.
    Inventors: Oren Kraus, Jimmy Ba, Brendan Frey
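
MIL pooling aggregates per-location class evidence in a feature map into one image-level prediction. A minimal sketch using generic noisy-OR pooling, a common MIL pooling function; the patent's specific pooling function may differ:

```python
import numpy as np

def noisy_or_pool(feature_map):
    """Aggregate per-location class scores into one image-level probability:
    the image is positive for the class if any location is positive."""
    p = 1.0 / (1.0 + np.exp(-feature_map))  # sigmoid, per location
    return 1.0 - np.prod(1.0 - p)           # noisy-OR over all locations
```

This is what lets the network train on image-level labels alone: no per-cell annotation is needed because the pooling layer bridges location-level and image-level predictions.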
  • Patent number: 10290076
    Abstract: A system and method for image registration includes tracking (508) a scanner probe in a position along a skin surface of a patient. Image planes corresponding to the position are acquired (510). A three-dimensional volume of a region of interest is reconstructed (512) from the image planes. A search of an image volume is initialized (514) to determine candidate images to register the image volume with the three-dimensional volume by employing pose information of the scanner probe during image plane acquisition, and physical constraints of a pose of the scanner probe. The image volume is registered (522) with the three-dimensional volume.
    Type: Grant
    Filed: March 2, 2012
    Date of Patent: May 14, 2019
    Assignees: The United States of America, as represented by the Secretary, Department of Health and Human Services, Koninklijke Philips N.V.
    Inventors: Samuel Kadoury, Jochen Kruecker, James Robertson Jago, Bradford Johns Wood, Antoine Collet-Billon, Cecile Dufour
  • Patent number: 10282841
    Abstract: Biometric analysis of vascular patterning may be performed in 3D and 2D as an integrative biomarker of complex molecular and mechanical signaling. The vascular patterning may facilitate the coordination of essentially unlimited numbers of bioinformatics dimensions for specific molecular and other co-localizations with spatiotemporal dimensions of vascular morphology. The vascular patterning may also apply translational versus rotational geometric principles of vascular branching to support the transformation of VESGEN 2D to VESGEN 3D.
    Type: Grant
    Filed: July 18, 2016
    Date of Patent: May 7, 2019
    Assignee: The United States of America as Represented by the Administrator of NASA
    Inventors: Patricia A. Parsons-Wingerter, Mary B. Vickerman
  • Patent number: 10235332
    Abstract: According to at least one aspect, systems and methods for distributed license plate review, at one or more crowd source analysis providers, are provided. In one example, a system for license plate review includes an interface configured to receive at least a roadside image of a vehicle license plate, and at least one processor configured to partition the roadside image of the vehicle license plate into one or more segments, and assign the one or more segments to one or more crowd source analysis providers, the one or more crowd source analysis providers being configured to review and recognize at least one character of the vehicle license plate included within an individual segment of the roadside image. In an example, the crowd source analysis provider may allow a reviewer to interact with the individual segment and recognize the at least one character within that segment.
    Type: Grant
    Filed: April 8, 2016
    Date of Patent: March 19, 2019
    Assignee: VERITOLL, LLC
    Inventors: Joseph C. Silva, Christopher T. Higgins
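
Partitioning the plate image into per-reviewer segments can be sketched with simple vertical strips. A hypothetical helper; the patent does not restrict how the segments are shaped or assigned:

```python
import numpy as np

def partition_plate(image, n_segments):
    """Split a plate image (H x W array) into n vertical strips, each
    assignable to a different crowd-source reviewer."""
    h, w = image.shape[:2]
    bounds = [round(i * w / n_segments) for i in range(n_segments + 1)]
    return [image[:, bounds[i]:bounds[i + 1]] for i in range(n_segments)]
```

Because each reviewer sees only a strip, no single reviewer can reconstruct the full plate number, which is one practical motivation for segmenting before crowd assignment.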
  • Patent number: 10229511
    Abstract: A method for determining the pose of a camera relative to a real environment includes the following steps: taking at least one image of a real environment by means of a camera, the image containing at least part of a real object, performing a tracking method that evaluates information with respect to correspondences between features associated with the real object and corresponding features of the real object as it is contained in the image of the real environment, so as to obtain conclusions about the pose of the camera, determining at least one parameter of an environmental situation, and performing the tracking method in accordance with the at least one parameter. Analogously, the method can also be utilized in a method for recognizing an object of a real environment in an image taken by a camera.
    Type: Grant
    Filed: December 21, 2015
    Date of Patent: March 12, 2019
    Assignee: Apple Inc.
    Inventor: Peter Meier
  • Patent number: 10223790
    Abstract: A dynamic analysis system includes a comparing unit and a display unit. The comparing unit extracts a lung field from each of dynamic images obtained by imaging a chest part containing a left lung and a right lung of a subject, specifies a corresponding point in a left part and a corresponding point in a right part of the lung field, and compares characteristic amounts at the respective corresponding points with each other. The display unit displays a result of the comparison made by the comparing unit together with the dynamic images or one of the dynamic images, or displays the result on the dynamic images or the one of the dynamic images.
    Type: Grant
    Filed: June 29, 2016
    Date of Patent: March 5, 2019
    Assignee: KONICA MINOLTA, INC.
    Inventors: Satoshi Kasai, Akinori Tsunomori
  • Patent number: 10210406
    Abstract: A method and system for simultaneously generating a global lane map and localizing a vehicle in the generated global lane map is provided. The system includes a plurality of image sensors adapted to operatively capture a 360-degree field of view image from the host vehicle for detecting a plurality of lane markings. The image sensors include a front long-range camera, a front mid-range camera, a right side mid-range camera, a left side mid-range camera, and a rear mid-range camera. A controller is communicatively coupled to the plurality of image sensors and includes a database containing reference lane markings and a processor. The processor is configured to identify the plurality of lane markings by comparing the detected lane markings from the 360-degree field of view image to the reference lane markings from the database and to fuse the identified lane markings into the global lane map.
    Type: Grant
    Filed: August 19, 2016
    Date of Patent: February 19, 2019
    Inventors: Shuo Huang, Danish Uzair Siddiqui
  • Patent number: 10204410
    Abstract: A medical image processing apparatus according to an embodiment includes processing circuitry. The processing circuitry is configured to obtain a chronological transition of signal intensities for each of the pixels in a plurality of X-ray images chronologically acquired by using a contrast medium. The processing circuitry is configured to correct the chronological transition of the signal intensities on the basis of a level of similarity between at least two mutually-different signal intensities within the chronological transition of the signal intensities.
    Type: Grant
    Filed: October 13, 2016
    Date of Patent: February 12, 2019
    Assignee: Toshiba Medical Systems Corporation
    Inventors: Ryuji Zaiki, Takuya Sakaguchi, Tadaharu Kobayashi
  • Patent number: 10204264
    Abstract: A method of dynamically scoring implicit interactions can include receiving, by an interaction analysis server from an imaging system, a plurality of images of an environment captured in a period of time corresponding to display of a presentation, retrieving, by the interaction analysis server, content information corresponding to content of the presentation, and identifying, by a presence detector of the interaction analysis server, that a face appears in at least one image of the plurality of images. The method can further include matching, by a facial recognition system of the interaction analysis server, the face with a user identifier, retrieving, by a client device information retriever, client device information associated with the user identifier and corresponding to the period of time, and calculating, by a client device interaction score calculator, a client device interaction score based on one or more correspondences between the client device information and the content information.
    Type: Grant
    Filed: December 5, 2016
    Date of Patent: February 12, 2019
    Assignee: Google LLC
    Inventors: Andrew Gallagher, Utsav Prabhu
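
The final scoring step rests on "correspondences between the client device information and the content information", which could be as simple as an overlap fraction. A hypothetical sketch of such a score, not Google's actual scoring function:

```python
def interaction_score(device_events, content_topics):
    """Score implicit interaction as the fraction of content topics that
    also appear in the user's client-device activity for the period."""
    topics = set(content_topics)
    if not topics:
        return 0.0
    matches = set(device_events) & topics
    return len(matches) / len(topics)
```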
  • Patent number: 10204270
    Abstract: A method for identifying the presence of fruit in image data of a scene includes acquiring, in an image sensor, image data for at least two distinct wavelengths of the scene. A normalized difference reflectivity index (NDRI) for each location in an array of locations in the image data is calculated with respect to said at least two distinct wavelengths. Regions in the array of locations are identified where the value of the calculated NDRI of the locations in these regions is within a range of values indicative of a presence of fruits in the scene. An output is generated on an output device with information related to the identified presence of fruits.
    Type: Grant
    Filed: November 17, 2016
    Date of Patent: February 12, 2019
    Assignee: FRUITSPEC LTD
    Inventors: Nir Margalit, Shahar Nitsan, Raviv Kula
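
The NDRI follows the usual normalized-difference form applied to the two wavelength bands. A minimal sketch; the band names and threshold range are hypothetical, since the patent does not state which wavelengths or limits are used:

```python
import numpy as np

def ndri(band_a, band_b, eps=1e-9):
    """Normalized difference reflectivity index between two wavelength bands."""
    band_a = np.asarray(band_a, dtype=float)
    band_b = np.asarray(band_b, dtype=float)
    return (band_a - band_b) / (band_a + band_b + eps)

def fruit_mask(band_a, band_b, lo, hi):
    """Locations whose NDRI falls inside the fruit-indicative range [lo, hi]."""
    v = ndri(band_a, band_b)
    return (v >= lo) & (v <= hi)
```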
  • Patent number: 10198632
    Abstract: The efficiency of the work of identifying reference points included in photographed images is improved. Targets are located and are photographed from short distances, and the positions of the targets are measured. A 3D reference point model is generated by using the measured positions of the targets as apexes. Then, the positions of the targets that are detected from images taken by a UAV from the air are calculated from a three-dimensional model that is generated by the principle of stereoscopic three-dimensional measurement, whereby a 3D relative model constituted of a TIN is obtained. After a matching relationship between the 3D reference point model and the 3D relative model is identified, the positions of the targets in the 3D relative model are estimated based on the identified matching relationship, and the positions of the reference points in the images taken from the UAV are estimated.
    Type: Grant
    Filed: June 29, 2016
    Date of Patent: February 5, 2019
    Inventors: Takeshi Sasaki, Tetsuji Anai, Hitoshi Otani, Nobuo Kochi