Patents by Inventor Stefan Kluckner
Stefan Kluckner has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20200232908
Abstract: Embodiments provide a method of image-based tube top circle detection that uses multi-candidate selection to localize the tube top circle region in input images. According to embodiments provided herein, the multi-candidate selection enhances the robustness of tube top circle detection by making use of multiple views of the same tube. With multiple candidates extracted from images taken at different viewpoints of the same tube, the multi-candidate selection algorithm selects an optimal combination among the candidates and provides a more precise measurement of tube characteristics. This information is invaluable in an IVD environment in which a sample handler is processing the tubes and moving them to analyzers for testing and analysis.
Type: Application
Filed: June 25, 2018
Publication date: July 23, 2020
Applicant: Siemens Healthcare Diagnostics Inc.
Inventors: Yao-Jen Chang, Stefan Kluckner, Benjamin S. Pollack, Terrence Chen
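The abstract does not specify how the optimal candidate combination is chosen, so the following is only a minimal sketch of the idea: given circle candidates (center, radius, score) already detected in each calibrated view, pick one candidate per view so that the selected circles agree in radius while favoring strong detections. The cost weighting is a hypothetical choice, not a value from the application.

```python
import itertools
import numpy as np

def select_candidate_combination(views):
    """Pick one circle candidate per view so that the chosen circles agree best.

    `views` is a list (one entry per camera viewpoint) of candidate arrays with
    columns (cx, cy, radius, detection_score). Assumes the views are calibrated
    to a common scale, so radii of the same tube should match across views.
    Returns the per-view indices of the selected combination.
    """
    best_combo, best_cost = None, np.inf
    for combo in itertools.product(*[range(len(v)) for v in views]):
        radii = np.array([views[i][j][2] for i, j in enumerate(combo)])
        scores = np.array([views[i][j][3] for i, j in enumerate(combo)])
        # Penalize radius disagreement across views, reward strong detections.
        cost = radii.std() - 0.1 * scores.mean()
        if cost < best_cost:
            best_combo, best_cost = combo, cost
    return best_combo

# Example: two views, each with two candidate circles (cx, cy, r, score).
view_a = np.array([[120.0, 80.0, 14.8, 0.9], [60.0, 40.0, 22.0, 0.6]])
view_b = np.array([[118.0, 82.0, 15.1, 0.8], [90.0, 70.0, 30.0, 0.7]])
print(select_candidate_combination([view_a, view_b]))  # -> (0, 0)
```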
-
Patent number: 10716457
Abstract: A method and system for calculating a volume of resected tissue from a stream of intraoperative images is disclosed. A stream of 2D/2.5D intraoperative images of resected tissue of a patient is received. The 2D/2.5D intraoperative images in the stream are acquired at different angles with respect to the resected tissue. A resected tissue surface is segmented in each of the 2D/2.5D intraoperative images. The segmented resected tissue surfaces are stitched to generate a 3D point cloud representation of the resected tissue surface. A 3D mesh representation of the resected tissue surface is generated from the 3D point cloud representation of the resected tissue surface. The volume of the resected tissue is calculated from the 3D mesh representation of the resected tissue surface.
Type: Grant
Filed: October 14, 2015
Date of Patent: July 21, 2020
Assignee: Siemens Aktiengesellschaft
Inventors: Thomas Pheiffer, Stefan Kluckner, Ali Kamen
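The abstract does not state which volume formula is used in the final step; a standard way to compute the volume enclosed by a closed triangle mesh is to sum signed tetrahedron volumes (divergence theorem). The sketch below shows only that last step, not the segmentation or stitching.

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume enclosed by a closed triangle mesh via signed tetrahedra.

    `vertices` is (N, 3); `faces` is (M, 3) vertex indices with consistent
    winding. Each triangle forms a tetrahedron with the origin; the signed
    volumes sum to the enclosed volume (divergence theorem).
    """
    v0 = vertices[faces[:, 0]]
    v1 = vertices[faces[:, 1]]
    v2 = vertices[faces[:, 2]]
    signed = np.einsum("ij,ij->i", v0, np.cross(v1, v2)) / 6.0
    return abs(signed.sum())

# Example: unit cube built from 12 triangles should give volume 1.0.
verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
faces = np.array([
    [0, 1, 3], [0, 3, 2],   # x = 0 face
    [4, 6, 7], [4, 7, 5],   # x = 1 face
    [0, 4, 5], [0, 5, 1],   # y = 0 face
    [2, 3, 7], [2, 7, 6],   # y = 1 face
    [0, 2, 6], [0, 6, 4],   # z = 0 face
    [1, 5, 7], [1, 7, 3],   # z = 1 face
])
print(round(mesh_volume(verts, faces), 6))  # -> 1.0
```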
-
Patent number: 10699438
Abstract: The present embodiments relate to localizing a mobile device in a complex, three-dimensional scene. By way of introduction, the present embodiments described below include apparatuses and methods for using multiple, independent pose estimations to increase the accuracy of a single, resulting pose estimation. The present embodiments increase the amount of input data by windowing a single depth image, using multiple depth images from the same sensor, and/or using multiple depth images from different sensors. The resulting pose estimation uses the input data with a multi-window model, a multi-shot model, a multi-sensor model, or a combination thereof to accurately estimate the pose of a mobile device.
Type: Grant
Filed: July 6, 2017
Date of Patent: June 30, 2020
Assignee: Siemens Healthcare GmbH
Inventors: Oliver Lehmann, Stefan Kluckner, Terrence Chen
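The abstract does not say how the independent pose estimates are combined into one; a common choice is to average translations and average rotations as unit quaternions. The following sketch assumes that fusion rule; the weighting scheme and example values are illustrative, not taken from the patent.

```python
import numpy as np

def fuse_poses(quaternions, translations, weights=None):
    """Fuse several independent rigid-pose estimates into one.

    `quaternions` is (N, 4) unit quaternions (w, x, y, z), `translations` is
    (N, 3). Rotations are averaged via the principal eigenvector of the
    weighted sum of outer products (handles the q / -q sign ambiguity);
    translations are averaged with the same weights.
    """
    q = np.asarray(quaternions, float)
    t = np.asarray(translations, float)
    w = np.ones(len(q)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()

    m = np.einsum("n,ni,nj->ij", w, q, q)         # weighted sum of q q^T
    eigvals, eigvecs = np.linalg.eigh(m)
    q_mean = eigvecs[:, np.argmax(eigvals)]       # principal eigenvector
    q_mean /= np.linalg.norm(q_mean)
    t_mean = w @ t
    return q_mean, t_mean

# Example: three noisy estimates of (almost) the identity rotation.
qs = [[1.0, 0.0, 0.0, 0.0], [0.999, 0.03, 0.0, 0.0], [-0.999, 0.0, -0.03, 0.0]]
ts = [[0.0, 0.0, 1.00], [0.01, 0.0, 1.02], [-0.01, 0.0, 0.98]]
q_f, t_f = fuse_poses(qs, ts)
print(np.round(q_f, 3), np.round(t_f, 3))
```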
-
Publication number: 20200167591
Abstract: Methods for image-based detection of the tops of sample tubes used in an automated diagnostic analysis system may be based on a convolutional neural network to pre-process images of the sample tube tops to intensify the tube top circle edges while suppressing the edge response from other objects that may appear in the image. Edge maps generated by the methods may be used for various image-based sample tube analyses, categorizations, and/or characterizations of the sample tubes to control a robot in relationship to the sample tubes. Image processing and control apparatus configured to carry out the methods are also described, as are other aspects.
Type: Application
Filed: June 25, 2018
Publication date: May 28, 2020
Applicant: Siemens Healthcare Diagnostics Inc.
Inventors: Yao-Jen Chang, Stefan Kluckner, Benjamin S. Pollack, Terrence Chen
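The abstract does not disclose the network architecture; the PyTorch sketch below is a deliberately minimal fully convolutional network that maps a grayscale top-view image to a per-pixel edge probability map and is trained with a pixel-wise binary cross-entropy loss against reference edge maps. The layer sizes and the placeholder data are assumptions.

```python
import torch
import torch.nn as nn

class EdgeMapNet(nn.Module):
    """Minimal fully convolutional network: grayscale image -> edge probability map."""

    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=1),              # per-pixel edge logit
        )

    def forward(self, x):
        return torch.sigmoid(self.body(x))                # edge probability in [0, 1]

# Training against reference edge maps (e.g., annotated tube-top circles) could
# use a pixel-wise binary cross-entropy loss:
model = EdgeMapNet()
images = torch.rand(4, 1, 128, 128)                       # batch of top-view images
targets = (torch.rand(4, 1, 128, 128) > 0.95).float()     # placeholder edge labels
loss = nn.functional.binary_cross_entropy(model(images), targets)
loss.backward()
print(loss.item())
```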
-
Publication number: 20200158745
Abstract: A method of characterizing a serum and plasma portion of a specimen in regions occluded by one or more labels. The characterization may be used for Hemolysis, Icterus, and/or Lipemia, or Normal detection. The method captures one or more images of a labeled specimen container including a serum or plasma portion, processes the one or more images to provide segmentation data and identification of a label-containing region, and classifies the label-containing region with a convolutional neural network (CNN) to provide a pixel-by-pixel (or patch-by-patch) characterization of the label thickness count, which may be used to adjust intensities of regions of a serum or plasma portion having label occlusion. Optionally, the CNN can characterize the label-containing region as one of multiple pre-defined label configurations. Quality check modules and specimen testing apparatus adapted to carry out the method are described, as are other aspects.
Type: Application
Filed: April 13, 2017
Publication date: May 21, 2020
Applicant: Siemens Healthcare Diagnostics Inc.
Inventors: Jiang Tian, Stefan Kluckner, Shanhui Sun, Yao-Jen Chang, Terrence Chen, Benjamin S. Pollack
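The abstract says the per-pixel label thickness count "may be used to adjust intensities" but gives no adjustment rule; the sketch below assumes a simple exponential attenuation model in which each label layer transmits a fixed fraction of the light. The attenuation factor is a hypothetical parameter, not a value from the application.

```python
import numpy as np

def compensate_label_occlusion(intensity, label_thickness, attenuation=0.8):
    """Boost pixel intensities behind label layers (illustrative model only).

    `intensity` is a float image in [0, 1]; `label_thickness` is an integer map
    giving the number of label layers in front of each pixel (the CNN output
    described in the abstract). Assumes each layer transmits a fixed fraction
    `attenuation` of the light, so the observed value is divided by
    attenuation**thickness.
    """
    correction = np.power(attenuation, label_thickness.astype(float))
    return np.clip(intensity / correction, 0.0, 1.0)

# Example: a pixel behind two label layers is scaled by 1 / 0.8**2 = 1.5625.
img = np.array([[0.40, 0.40], [0.40, 0.40]])
thickness = np.array([[0, 1], [2, 0]])
print(np.round(compensate_label_occlusion(img, thickness), 3))
```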
-
Publication number: 20200151878
Abstract: A method of characterizing a serum and plasma portion of a specimen in regions occluded by one or more labels. The characterization method may be used to provide input to an HILN (H, I, and/or L, or N) detection method. The characterization method includes capturing one or more images of a labeled specimen container including a serum or plasma portion from multiple viewpoints, processing the one or more images to provide segmentation data including identification of a label-containing region, determining a closest label match of the label-containing region to a reference label configuration selected from a reference label configuration database, and generating a combined representation based on the segmentation information and the closest label match. Using the combined representation allows for compensation of the light blocking effects of the label-containing region. Quality check modules and testing apparatus adapted to carry out the method are described, as are other aspects.
Type: Application
Filed: April 10, 2018
Publication date: May 14, 2020
Applicant: Siemens Healthcare Diagnostics Inc.
Inventors: Stefan Kluckner, Patrick Wissmann, Yao-Jen Chang, Terrence Chen, Benjamin S. Pollack
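The abstract does not state how the "closest label match" is scored; the sketch below uses intersection-over-union between the segmented label-containing region and each reference configuration mask as a stand-in similarity measure.

```python
import numpy as np

def closest_label_configuration(label_mask, reference_masks):
    """Return the index of the reference label configuration closest to `label_mask`.

    `label_mask` is a binary (H, W) array marking label-containing pixels for one
    viewpoint; `reference_masks` is a list of binary masks of the same shape, one
    per reference configuration. Similarity here is intersection-over-union.
    """
    label_mask = label_mask.astype(bool)
    scores = []
    for ref in reference_masks:
        ref = ref.astype(bool)
        inter = np.logical_and(label_mask, ref).sum()
        union = np.logical_or(label_mask, ref).sum()
        scores.append(inter / union if union else 0.0)
    return int(np.argmax(scores)), scores

# Example: a small mask matched against two reference configurations.
mask = np.zeros((6, 4), int); mask[2:5, :] = 1          # label covers middle rows
ref_a = np.zeros((6, 4), int); ref_a[0:2, :] = 1        # label near the top
ref_b = np.zeros((6, 4), int); ref_b[3:6, :] = 1        # label near the bottom
best, ious = closest_label_configuration(mask, [ref_a, ref_b])
print(best, np.round(ious, 2))  # ref_b overlaps more -> index 1
```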
-
Publication number: 20200151498
Abstract: A method of characterizing a serum and plasma portion of a specimen in regions occluded by one or more labels. The characterization may be used for determining Hemolysis (H), Icterus (I), and/or Lipemia (L), or Normal (N) of a serum or plasma portion of a specimen. The method includes capturing one or more images of a labeled specimen container including a serum or plasma portion, processing the one or more images with a convolutional neural network to provide a determination of Hemolysis (H), Icterus (I), and/or Lipemia (L), or Normal (N). In further embodiments, the convolutional neural network can provide N-Class segmentation information. Quality check modules and testing apparatus adapted to carry out the method are described, as are other aspects.
Type: Application
Filed: April 10, 2018
Publication date: May 14, 2020
Applicant: Siemens Healthcare Diagnostics Inc.
Inventors: Shanhui Sun, Stefan Kluckner, Yao-Jen Chang, Terrence Chen, Benjamin S. Pollack
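The abstract does not describe the CNN itself; purely as an illustration of the classification task, here is a minimal PyTorch network that maps a cropped specimen image to one of the four HILN classes. The layer sizes and input resolution are assumptions.

```python
import torch
import torch.nn as nn

class HILNClassifier(nn.Module):
    """Minimal CNN mapping an RGB specimen image to H / I / L / N class scores."""

    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),                 # global pooling -> (B, 32, 1, 1)
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = HILNClassifier()
batch = torch.rand(2, 3, 96, 96)                     # two cropped serum-region images
logits = model(batch)
print(logits.shape, logits.argmax(dim=1))            # -> torch.Size([2, 4]) and class ids
```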
-
Publication number: 20200057778
Abstract: In pose estimation from a depth sensor (12), depth information is matched (70) with 3D information. Depending on the shape captured in the depth image information, different objects may benefit from more or less pose density from different perspectives. The database (48) is created by bootstrap aggregation (64). Possible additional poses are tested (70) against the nearest neighbors already in the database (48). Where the nearest neighbor is far, the additional pose is added (72); where the nearest neighbor is not far, the additional pose is not added. The resulting database (48) includes enough entries to distinguish the pose without overpopulation. The database (48) is indexed and used to efficiently determine the pose for a given image captured by the depth camera (12).
Type: Application
Filed: April 11, 2017
Publication date: February 20, 2020
Inventors: Shanhui Sun, Stefan Kluckner, Ziyan Wu, Oliver Lehmann, Jan Ernst, Terrence Chen
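The accept/reject test described in the abstract (add a candidate pose only if its nearest neighbor already in the database is far) can be sketched directly. The descriptor space, distance threshold, and toy data below are assumptions for illustration.

```python
import numpy as np

def build_pose_database(candidate_poses, min_distance):
    """Greedily populate a pose database without over-populating it.

    `candidate_poses` is an (N, D) array of pose descriptors (e.g., rendered-view
    feature vectors tagged with their poses). A candidate is added only if its
    nearest neighbor already in the database is farther than `min_distance`,
    mirroring the accept/reject test described in the abstract.
    """
    database = []
    for pose in candidate_poses:
        if not database:
            database.append(pose)
            continue
        nearest = min(np.linalg.norm(pose - entry) for entry in database)
        if nearest > min_distance:          # far from everything stored -> keep it
            database.append(pose)           # otherwise it adds nothing new -> skip
    return np.array(database)

# Example: clustered 2-D "poses"; only well-separated representatives survive.
rng = np.random.default_rng(0)
candidates = np.vstack([rng.normal(c, 0.05, size=(20, 2)) for c in ((0, 0), (1, 0), (0, 1))])
db = build_pose_database(candidates, min_distance=0.5)
print(len(db))  # -> 3 (one representative per cluster)
```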
-
Publication number: 20200057831
Abstract: The present embodiments relate to generating synthetic depth data. By way of introduction, the present embodiments described below include apparatuses and methods for modeling the characteristics of a real-world light sensor and generating realistic synthetic depth data that accurately represents depth data as if captured by the real-world light sensor. To generate accurate depth data, a sequence of procedures is applied to depth images rendered from a three-dimensional model. The sequence of procedures simulates the underlying mechanism of the real-world sensor. By simulating the real-world sensor, parameters relating to the projection and capture of the sensor, environmental illumination, image processing, and motion are accurately modeled for generating depth data.
Type: Application
Filed: February 23, 2017
Publication date: February 20, 2020
Inventors: Ziyan Wu, Shanhui Sun, Stefan Kluckner, Terrence Chen, Jan Ernst
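The abstract does not disclose the simulation steps themselves; a common simplified model for structured-light depth sensors adds depth-dependent axial noise, quantizes to the sensor's resolution, and drops returns at strong depth discontinuities. The sketch below uses that assumed model; all parameter values are illustrative.

```python
import numpy as np

def simulate_depth_sensor(clean_depth, rng, quant_step=0.002, noise_coeff=0.001,
                          edge_threshold=0.05):
    """Turn an ideal rendered depth map into sensor-like synthetic depth.

    Illustrative model only: axial noise grows quadratically with depth,
    values are quantized to the sensor's resolution, and pixels at strong
    depth discontinuities are dropped (set to 0, i.e. "no return").
    """
    depth = clean_depth + rng.normal(0.0, noise_coeff * clean_depth ** 2)
    depth = np.round(depth / quant_step) * quant_step          # quantization

    # Invalidate pixels whose horizontal or vertical neighbor jumps in depth.
    gy, gx = np.gradient(clean_depth)
    occluded = np.hypot(gx, gy) > edge_threshold
    depth[occluded] = 0.0
    return depth

# Example: a flat plane at 1.5 m with a box at 1.0 m in the middle.
rng = np.random.default_rng(1)
scene = np.full((100, 100), 1.5)
scene[40:60, 40:60] = 1.0
noisy = simulate_depth_sensor(scene, rng)
print(noisy[0, 0], noisy[50, 50], (noisy == 0).sum())   # noisy plane, quantized box, dropped edge pixels
```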
-
Publication number: 20200013189
Abstract: The present embodiments relate to automatically estimating a three-dimensional pose of an object from an image captured using a camera with a structured light sensor. By way of introduction, the present embodiments described below include apparatuses and methods for training a system to estimate, and for estimating, a pose of an object from a test image. Training and test images are sampled to generate local image patches. Features are extracted from the local image patches to generate feature databases used to estimate nearest neighbor poses for each local image patch. The closest nearest neighbor pose to the test image is selected as the estimated three-dimensional pose.
Type: Application
Filed: February 23, 2017
Publication date: January 9, 2020
Inventors: Srikrishna Karanam, Ziyan Wu, Shanhui Sun, Oliver Lehmann, Stefan Kluckner, Terrence Chen, Jan Ernst
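As a rough illustration of the patch-based nearest-neighbor lookup the abstract describes, the sketch below slides a patch grid over a test image, normalizes each patch as a simple feature (the real feature extractor is not specified in the abstract), finds each patch's nearest training feature, and keeps the pose of the closest match overall. Patch size, stride, and the random stand-in data are assumptions.

```python
import numpy as np

def estimate_pose_from_patches(test_image, train_features, train_poses,
                               patch=16, stride=16):
    """Nearest-neighbor pose lookup from local image patches (illustrative).

    Training patches have been reduced to feature vectors (`train_features`,
    shape (N, patch*patch)) with an associated pose each (`train_poses`,
    shape (N, P)). Each test patch votes for the pose of its nearest training
    feature; the vote with the smallest feature distance wins.
    """
    best_pose, best_dist = None, np.inf
    h, w = test_image.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            feat = test_image[y:y + patch, x:x + patch].ravel()
            feat = (feat - feat.mean()) / (feat.std() + 1e-8)      # simple local feature
            dists = np.linalg.norm(train_features - feat, axis=1)
            idx = int(np.argmin(dists))
            if dists[idx] < best_dist:
                best_pose, best_dist = train_poses[idx], dists[idx]
    return best_pose

# Example with random stand-in data: 200 training patches, poses as 3 angles.
rng = np.random.default_rng(2)
train_feats = rng.normal(size=(200, 256))
train_poses = rng.uniform(-np.pi, np.pi, size=(200, 3))
test_img = rng.normal(size=(64, 64))
print(np.round(estimate_pose_from_patches(test_img, train_feats, train_poses), 3))
```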
-
Patent number: 10425629
Abstract: A system and method includes generation of a first map of first descriptors based on pixels of a first two-dimensional depth image, where a location of each first descriptor in the first map corresponds to a location of a respective pixel of the first two-dimensional depth image, generation of a second map of second descriptors based on pixels of the second two-dimensional depth image, where a location of each second descriptor in the second map corresponds to a location of a respective pixel of the second two-dimensional depth image, upsampling of the first map of descriptors using a first upsampling technique to generate an upsampled first map of descriptors, upsampling of the second map of descriptors using a second upsampling technique to generate an upsampled second map of descriptors, generation of a descriptor difference map based on differences between descriptors of the upsampled first map of descriptors and descriptors of the upsampled second map of descriptors, generation of a geodesic preservation m…
Type: Grant
Filed: June 28, 2017
Date of Patent: September 24, 2019
Assignee: Siemens Healthcare GmbH
Inventors: Vivek Kumar Singh, Stefan Kluckner, Yao-Jen Chang, Kai Ma, Terrence Chen
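The listing cuts this abstract off mid-sentence, so only the steps named above are illustrated here: both reduced-resolution descriptor maps are upsampled (bilinear interpolation is used for both, as a stand-in for the two upsampling techniques), and a per-pixel descriptor difference map is computed. The truncated geodesic preservation step is not shown, and the descriptor networks themselves are out of scope.

```python
import torch
import torch.nn.functional as F

def descriptor_difference_map(desc_a, desc_b, out_size):
    """Upsample two per-pixel descriptor maps and take their per-pixel difference.

    `desc_a` and `desc_b` are (1, C, h, w) descriptor maps computed from two
    depth images at reduced resolution. Both are upsampled to `out_size` and
    the map of per-pixel descriptor distances is returned.
    """
    up_a = F.interpolate(desc_a, size=out_size, mode="bilinear", align_corners=False)
    up_b = F.interpolate(desc_b, size=out_size, mode="bilinear", align_corners=False)
    return (up_a - up_b).norm(dim=1)                 # (1, H, W) difference map

# Example with random stand-in descriptors of 32 channels at 1/4 resolution.
a = torch.rand(1, 32, 30, 40)
b = torch.rand(1, 32, 30, 40)
diff = descriptor_difference_map(a, b, out_size=(120, 160))
print(diff.shape)                                    # -> torch.Size([1, 120, 160])
```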
-
Publication number: 20190277870
Abstract: A method of characterizing a specimen for HILN (H, I, and/or L, or N). The method includes capturing images of the specimen at multiple different viewpoints, processing the images to provide segmentation information for each viewpoint, generating a semantic map from the segmentation information, selecting a synthetic viewpoint, identifying front view semantic data and back view semantic data for the synthetic viewpoint, and determining HILN of the serum or plasma portion based on the front view semantic data with an HILN classifier, while taking into account back view semantic data. Testing apparatus and quality check modules adapted to carry out the method are described, as are other aspects.
Type: Application
Filed: November 13, 2017
Publication date: September 12, 2019
Applicant: Siemens Healthcare Diagnostics Inc.
Inventors: Stefan Kluckner, Shanhui Sun, Yao-Jen Chang, Terrence Chen, Benjamin S. Pollack
-
Publication number: 20190271714
Abstract: A model-based method of determining characteristics of a specimen container cap to identify the container cap. The method includes providing a specimen container including a container cap; capturing backlit images of the container cap taken at different exposure lengths and using a plurality of different nominal wavelengths; selecting optimally-exposed pixels from the images at different exposure lengths at each nominal wavelength to generate optimally-exposed image data for each nominal wavelength; classifying the optimally-exposed pixels as at least being one of a tube, a label, or a cap; and identifying a shape of the container cap based upon the optimally-exposed pixels classified as being the cap and the image data for each nominal wavelength. Quality check modules and specimen testing apparatus adapted to carry out the method are described, as are numerous other aspects.
Type: Application
Filed: July 7, 2017
Publication date: September 5, 2019
Applicant: Siemens Healthcare Diagnostics Inc.
Inventors: Stefan Kluckner, Yao-Jen Chang, Terrence Chen, Benjamin S. Pollack
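The abstract does not define "optimally-exposed"; a common heuristic for an exposure stack is to pick, per pixel, the exposure whose raw value is closest to mid-range and then normalize by exposure time. The sketch below assumes that rule; the target value and example numbers are illustrative.

```python
import numpy as np

def select_optimally_exposed(images, exposure_times, target=0.5):
    """Fuse a per-wavelength exposure stack into one optimally-exposed image.

    `images` is (E, H, W) with intensities in [0, 1], one image per exposure
    length; `exposure_times` has length E. For each pixel, the exposure whose
    raw value is closest to `target` (mid-range, i.e. neither under- nor
    over-exposed) is chosen, and its value is normalized by exposure time so
    the fused image is comparable across pixels.
    """
    images = np.asarray(images, float)
    times = np.asarray(exposure_times, float).reshape(-1, 1, 1)
    best = np.argmin(np.abs(images - target), axis=0)           # (H, W) exposure index
    fused = np.take_along_axis(images / times, best[None], axis=0)[0]
    return fused, best

# Example: three exposures of a 2x2 cap region (some pixels saturate at the longest).
stack = np.array([[[0.05, 0.10], [0.20, 0.02]],
                  [[0.30, 0.55], [0.90, 0.12]],
                  [[0.70, 1.00], [1.00, 0.45]]])
fused, chosen = select_optimally_exposed(stack, exposure_times=[1.0, 4.0, 16.0])
print(chosen)               # which exposure index was picked per pixel
print(np.round(fused, 3))   # exposure-normalized fused values
```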
-
Patent number: 10350434
Abstract: In order to provide radiation dose estimation during a treatment, one or more characteristics of the patient are used. A camera captures the patient so that the characteristics (e.g., organ position) may be derived. Radiation exposure and/or absorption are determined from the characteristics. A Monte Carlo, machine-learnt, or other model estimates the dosage for different locations in the patient. During the treatment, the dosage may be presented as a warning when it exceeds a threshold, or in another visualization.
Type: Grant
Filed: December 11, 2015
Date of Patent: July 16, 2019
Assignee: Siemens Healthcare GmbH
Inventors: Saikiran Rapaka, Stefan Kluckner, Carol Novak, Puneet Sharma, Dorin Comaniciu
-
Patent number: 10335115
Abstract: Multi-source, multi-type image registration is provided. Images are received from a plurality of image devices, and images are received from a medical imaging device. A pre-existing diagram of a probe of the medical imaging device is received. A four-dimensional model is determined based on the received images from the image devices. A pose of the probe of the medical imaging device is determined based on the pre-existing diagram of the probe and the received images from the image devices. The plurality of images from the medical imaging device are registered with the four-dimensional model based on a common coordinate system and the determined pose of the probe.
Type: Grant
Filed: September 3, 2015
Date of Patent: July 2, 2019
Assignee: Siemens Healthcare GmbH
Inventors: Sasa Grbic, Tommaso Mansi, Stefan Kluckner, Charles Henri Florin, Terrence Chen, Dorin Comaniciu
-
Patent number: 10325182
Abstract: Embodiments are directed to classifying barcode tag conditions on sample tubes from top view images to streamline sample tube handling in advanced clinical laboratory automation systems. The classification of barcode tag conditions leads to the automatic detection of problematic barcode tags, allowing a user to take the necessary steps to fix them. A vision system is utilized to perform the automatic classification of barcode tag conditions on sample tubes from top view images. The classification is based on the following factors: (1) a region-of-interest (ROI) extraction and rectification method based on sample tube detection; (2) a barcode tag condition classification method based on holistic features uniformly sampled from the rectified ROI; and (3) a problematic barcode tag area localization method based on pixel-based feature extraction.
Type: Grant
Filed: February 16, 2016
Date of Patent: June 18, 2019
Assignee: Siemens Healthcare Diagnostics Inc.
Inventors: Khurram Soomro, Yao-Jen Chang, Stefan Kluckner, Wen Wu, Benjamin Pollack, Terrence Chen
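As a sketch of factors (1) and (2), the code below crops and resamples a square ROI around a detected tube circle and summarizes it with a uniform grid of mean intensities as the holistic feature vector. The ROI size, grid size, and synthetic image are assumptions, and the downstream condition classifier is not shown.

```python
import numpy as np

def barcode_tag_features(top_view, center, radius, out=64, grid=8):
    """Extract a rectified ROI around a detected tube and a holistic feature vector.

    `top_view` is a grayscale top-view image; `center`/`radius` describe the
    detected tube circle. The square ROI around the tube is resampled to
    `out` x `out` pixels (nearest neighbor) and summarized by a uniform
    `grid` x `grid` array of mean intensities, used here as the holistic
    feature for barcode tag condition classification.
    """
    cy, cx = center
    ys = np.clip(np.linspace(cy - radius, cy + radius, out).astype(int), 0, top_view.shape[0] - 1)
    xs = np.clip(np.linspace(cx - radius, cx + radius, out).astype(int), 0, top_view.shape[1] - 1)
    roi = top_view[np.ix_(ys, xs)]                                   # rectified ROI
    cells = roi.reshape(grid, out // grid, grid, out // grid)
    return roi, cells.mean(axis=(1, 3)).ravel()                      # 64-dim holistic feature

# Example on a synthetic top-view image.
img = np.random.default_rng(4).random((480, 640))
roi, feat = barcode_tag_features(img, center=(240, 320), radius=60)
print(roi.shape, feat.shape)   # -> (64, 64) (64,)
```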
-
Publication number: 20190130603
Abstract: Systems, methods, and computer-readable media are disclosed for determining feature representations of 2.5D image data using deep learning techniques. The 2.5D image data may be synthetic image data generated from 3D simulated model data such as 3D CAD data. The 2.5D image data may be indicative of any number of pose estimations/camera poses representing virtual or actual viewing perspectives of an object modeled by the 3D CAD data. A neural network such as a convolutional neural network (CNN) may be trained using the 2.5D image data as training data to obtain corresponding feature representations. The pose estimations/camera poses may be stored in a data repository in association with the corresponding feature representations. The learnt CNN may then be used to determine an input feature representation from an input 2.5D image and index the input feature representation against the data repository to determine matching pose estimation(s).
Type: Application
Filed: March 9, 2017
Publication date: May 2, 2019
Inventors: Shanhui Sun, Kai Ma, Stefan Kluckner, Ziyan Wu, Jan Ernst, Vivek Kumar Singh, Terrence Chen
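The repository lookup described in the abstract can be sketched with a toy embedding network and a nearest-neighbor search; the network architecture, feature dimensionality, and random stand-in renderings below are all assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

# Minimal embedding CNN standing in for the learnt network in the abstract.
embed_net = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(8 * 16, 32),
)

def index_pose(query_depth_image, feature_db, pose_db):
    """Embed a 2.5D query image and look up the closest stored feature's pose."""
    with torch.no_grad():
        q = embed_net(query_depth_image.unsqueeze(0)).numpy().ravel()
    idx = int(np.argmin(np.linalg.norm(feature_db - q, axis=1)))
    return pose_db[idx]

# Build a toy repository from rendered views (random stand-ins here).
views = torch.rand(50, 1, 64, 64)                     # synthetic 2.5D renderings
poses = np.random.uniform(-1, 1, size=(50, 6))        # 6-DoF pose per rendering
with torch.no_grad():
    feature_db = embed_net(views).numpy()
print(index_pose(torch.rand(1, 64, 64), feature_db, poses))
```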
-
Publication number: 20190102909
Abstract: Systems, methods, and computer-readable media are disclosed for automated identification of parts of a parts assembly using image data of the parts assembly and 3D simulated model data of the parts assembly. The 3D simulated model data may be 3D CAD data of the parts assembly. An image of the parts assembly is captured by a mobile device and sent to a back-end server for processing. The back-end server determines a feature representation corresponding to the image and searches a repository to locate a matching feature representation stored in association with a corresponding pose estimation. The matching pose estimation is rendered as an overlay on the image of the parts assembly, thereby enabling automated identification of parts within the image or some user-selected portion of the image.
Type: Application
Filed: March 9, 2017
Publication date: April 4, 2019
Inventors: Stefan Kluckner, Shanhui Sun, Kai Ma, Ziyan Wu, Arun Innanje, Jan Ernst, Terrence Chen
-
Publication number: 20190080475
Abstract: A method for identifying a feature in a first image comprises establishing an initial database of image triplets and, in a pose estimation processor, training a deep learning neural network using the initial database of image triplets, calculating a pose for the first image using the deep learning neural network, comparing the calculated pose to a validation database populated with image data to identify an error case in the deep learning neural network, creating a new set of training data including a plurality of error cases identified in a plurality of input images, and retraining the deep learning neural network using the new set of training data. The deep learning neural network may be iteratively retrained with a series of new training data sets. Statistical analysis is performed on a plurality of error cases to select a subset of the error cases included in the new set of training data.
Type: Application
Filed: March 13, 2017
Publication date: March 14, 2019
Inventors: Kai Ma, Shanhui Sun, Stefan Kluckner, Ziyan Wu, Terrence Chen, Jan Ernst
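The retrain-on-error-cases loop is easy to sketch if the deep network is swapped for a trivial stand-in model (a 1-nearest-neighbor regressor here, purely so the example runs end to end). The feature space, the "worst cases" error ranking, and the subsampling used in place of the patent's statistical analysis are all assumptions.

```python
import numpy as np

def pose_error(pred, truth):
    return np.linalg.norm(pred - truth, axis=1)

def knn_predict(train_x, train_y, query_x):
    """Stand-in 1-nearest-neighbor 'model' so the retraining loop stays self-contained."""
    d = np.linalg.norm(query_x[:, None, :] - train_x[None, :, :], axis=2)
    return train_y[np.argmin(d, axis=1)]

rng = np.random.default_rng(3)
features = rng.normal(size=(2000, 8))                        # image features (stand-in)
poses = features[:, :3] + 0.05 * rng.normal(size=(2000, 3))  # associated poses

train_x, train_y = features[:100], poses[:100]               # initial training set
val_x, val_y = features[1000:], poses[1000:]                 # validation database

for round_ in range(3):
    err = pose_error(knn_predict(train_x, train_y, val_x), val_y)
    error_cases = np.argsort(err)[-200:]                     # highest-error cases
    # "Statistical analysis" reduced here to subsampling the highest-error cases.
    selected = error_cases[::4]
    train_x = np.vstack([train_x, val_x[selected]])          # new training data with error cases
    train_y = np.vstack([train_y, val_y[selected]])
    print(f"round {round_}: mean validation pose error {err.mean():.3f}")
```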
-
Publication number: 20190033209
Abstract: A model-based method for quantifying a specimen. The method includes providing a specimen, capturing images of the specimen while illuminated by multiple spectra having different nominal wavelengths and at multiple exposures, classifying the specimen into various class types comprising one or more of serum or plasma portion, settled blood portion, gel separator (if used), air, tube, label, or cap, and quantifying the specimen. Quantifying includes determining one or more of: a location of a liquid-air interface, a location of a serum-blood interface, a location of a serum-gel interface, a location of a blood-gel interface, a volume and/or a depth of the serum or plasma portion, or a volume and/or a depth of the settled blood portion. Quality check modules and specimen testing apparatus adapted to carry out the method are described, as are other aspects.
Type: Application
Filed: January 24, 2017
Publication date: January 31, 2019
Applicant: Siemens Healthcare Diagnostics Inc.
Inventors: Stefan Kluckner, Yao-Jen Chang, Terrence Chen, Benjamin S. Pollack
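Given a per-pixel class map from the classification step, the interface locations and volumes can be sketched with simple geometry: take the majority class per image row, treat class changes between rows as interfaces, and approximate each region's volume as a stack of cylindrical slices. The pixel pitch, tube radius, and class layout below are illustrative assumptions, not values from the application.

```python
import numpy as np

def quantify_specimen(class_map, mm_per_pixel, tube_radius_mm,
                      classes=("air", "serum", "gel", "blood")):
    """Locate interfaces and estimate depths/volumes from a per-pixel class map.

    `class_map` is an (H, W) array of class indices into `classes` for one
    tube-aligned view. Each row is assigned its majority class, interfaces are
    the rows where the majority class changes, and each region's volume is
    approximated as a stack of cylindrical slices of the tube's inner radius.
    """
    h, w = class_map.shape
    row_class = np.array([np.bincount(row, minlength=len(classes)).argmax()
                          for row in class_map])
    interfaces = [(classes[row_class[i]], classes[row_class[i + 1]], i + 1)
                  for i in range(h - 1) if row_class[i] != row_class[i + 1]]
    slice_volume = np.pi * tube_radius_mm ** 2 * mm_per_pixel            # mm^3 per row
    volumes = {c: float((row_class == k).sum() * slice_volume) / 1000.0  # mm^3 -> mL
               for k, c in enumerate(classes)}
    return interfaces, volumes

# Example: 40 rows of air, 30 of serum, 10 of gel, 20 of settled blood.
cmap = np.concatenate([np.full((40, 16), 0), np.full((30, 16), 1),
                       np.full((10, 16), 2), np.full((20, 16), 3)])
ifaces, vols = quantify_specimen(cmap, mm_per_pixel=0.5, tube_radius_mm=6.0)
print(ifaces)                                     # (class above, class below, row index)
print({k: round(v, 2) for k, v in vols.items()})  # per-class volume in mL
```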