Patents by Inventor Stefan Kluckner

Stefan Kluckner has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210064927
    Abstract: A method of training a neural network (a convolutional neural network, or CNN) using reduced graphical annotation input is provided. The training method can be used to train a testing CNN that can be used for determining Hemolysis (H), Icterus (I), and/or Lipemia (L), or Normal (N) of a serum or plasma portion of a test specimen. The training method includes capturing training images of multiple specimen containers including training specimens, generating region proposals of the serum or plasma portions of the training specimens, and selecting the best matches for the location, size, and shape of the region proposals for the multiple training specimens. The obtained features (network and weights) from the training CNN can be used in a testing CNN. Quality check modules and testing apparatus adapted to carry out the training method, and characterization methods using a bounding box regressor, are described, as are other aspects.
    Type: Application
    Filed: January 8, 2019
    Publication date: March 4, 2021
    Applicant: Siemens Healthcare Diagnostics Inc.
    Inventors: Stefan Kluckner, Yao-Jen Chang, Kai Ma, Vivek Singh, Terrence Chen, Benjamin S. Pollack
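The "best match" selection of region proposals described in the abstract above can be sketched with an intersection-over-union (IoU) criterion. The IoU rule and the box representation here are illustrative assumptions, not necessarily the patent's exact matching method:

```python
# Pick the region proposal that best matches an annotated region,
# assuming axis-aligned boxes in (x1, y1, x2, y2) form.

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def best_proposal(proposals, annotation):
    """Return the proposal with the highest IoU against the annotation."""
    return max(proposals, key=lambda p: iou(p, annotation))
```

A proposal that fully covers the annotated serum/plasma region with little excess area scores highest under this rule.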
  • Patent number: 10824832
    Abstract: Barcode tag conditions on sample tubes are detected utilizing side view images of sample tubes for streamlining handling in clinical laboratory automation systems. The condition of the tags may be classified into classes, each divided into a list of additional subcategories that cover individual characteristics of the tag quality. According to an embodiment, a tube characterization station (TCS) is utilized to obtain the side view images. The TCS enables the simultaneous or near-simultaneous collection of three images for each tube, resulting in a 360 degree side view for each tube. The method is based on a supervised scene understanding concept, resulting in an explanation of each pixel into its semantic meaning. Two parallel low-level cues for condition recognition, in combination with a tube model extraction cue, may be utilized. The semantic scene information is then integrated into a mid-level representation for final decision making into one of the condition classes.
    Type: Grant
    Filed: February 16, 2016
    Date of Patent: November 3, 2020
    Assignee: Siemens Healthcare Diagnostics Inc.
    Inventors: Stefan Kluckner, Yao-Jen Chang, Wen Wu, Benjamin Pollack, Terrence Chen
  • Patent number: 10816538
    Abstract: A model-based method of inspecting a specimen for presence of an interferent (H, I, and/or L). The method includes capturing images of the specimen at multiple different exposure times and at multiple spectra having different nominal wavelengths, selection of optimally-exposed pixels from the captured images to generate optimally-exposed image data for each spectrum, identifying a serum or plasma portion of the specimen, and classifying whether an interferent is present or absent within the serum or plasma portion. Testing apparatus and quality check modules adapted to carry out the method are described, as are other aspects.
    Type: Grant
    Filed: January 24, 2017
    Date of Patent: October 27, 2020
    Assignee: Siemens Healthcare Diagnostics Inc.
    Inventors: Stefan Kluckner, Yao-Jen Chang, Terrence Chen, Benjamin S. Pollack, Patrick Wissmann
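The per-pixel selection of optimally-exposed pixels from an exposure stack can be sketched as follows for one spectral channel. The validity range, the longest-valid-exposure preference, and the exposure-time normalization are all assumptions for illustration; the abstract does not publish exact thresholds:

```python
import numpy as np

def fuse_exposures(stack, exposures, lo=10, hi=245):
    """stack: (E, H, W) images taken at the E exposure times in `exposures`.
    For each pixel, keep the longest exposure whose intensity is still
    within the valid [lo, hi] range, normalized by exposure time."""
    stack = np.asarray(stack, dtype=float)
    fused = np.full(stack.shape[1:], np.nan)
    for e in np.argsort(exposures)[::-1]:        # longest exposure first
        valid = (stack[e] >= lo) & (stack[e] <= hi) & np.isnan(fused)
        fused[valid] = stack[e][valid] / exposures[e]
    return fused
```

Normalizing by exposure time puts all selected pixels on a common intensity scale, so downstream classification sees consistent data regardless of which exposure each pixel came from.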
  • Patent number: 10803619
    Abstract: A method for identifying a feature in a first image comprises establishing an initial database of image triplets, and in a pose estimation processor, training a deep learning neural network using the initial database of image triplets, calculating a pose for the first image using the deep learning neural network, comparing the calculated pose to a validation database populated with images data to identify an error case in the deep learning neural network, creating a new set of training data including a plurality of error cases identified in a plurality of input images and retraining the deep learning neural network using the new set of training data. The deep learning neural network may be iteratively retrained with a series of new training data sets. Statistical analysis is performed on a plurality of error cases to select a subset of the error cases included in the new set of training data.
    Type: Grant
    Filed: March 13, 2017
    Date of Patent: October 13, 2020
    Assignee: Siemens Mobility GmbH
    Inventors: Kai Ma, Shanhui Sun, Stefan Kluckner, Ziyan Wu, Terrence Chen, Jan Ernst
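The statistical selection of error cases for retraining, mentioned at the end of the abstract above, can be sketched with a simple outlier rule (error above mean plus one standard deviation). The actual selection statistic is not specified in the abstract, so this rule is an assumption:

```python
import statistics

def select_error_cases(cases, errors):
    """cases: input samples; errors: matching pose-error magnitudes.
    Keep the cases whose error is an outlier under a mean + std rule."""
    mu = statistics.mean(errors)
    sigma = statistics.pstdev(errors)
    return [c for c, e in zip(cases, errors) if e > mu + sigma]
```

The selected subset would then be combined with fresh samples to form the new training set for the next retraining iteration.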
  • Publication number: 20200265263
    Abstract: A neural network-based method for quantifying a volume of a specimen. The method includes providing a specimen, capturing images of the specimen, and directly classifying to one of a plurality of volume classes or volumes using a trained neural network. Quality check modules and specimen testing apparatus adapted to carry out the volume quantification method are described, as are other aspects.
    Type: Application
    Filed: July 25, 2018
    Publication date: August 20, 2020
    Applicant: Siemens Healthcare Diagnostics Inc.
    Inventors: Stefan Kluckner, Yao-Jen Chang, Kai Ma, Vivek Singh, Terrence Chen, Benjamin S. Pollack
  • Patent number: 10746665
    Abstract: A model-based method of inspecting a specimen for presence of one or more artifacts (e.g., a clot, bubble, and/or foam). The method includes capturing multiple images of the specimen at multiple different exposures and at multiple spectra having different nominal wavelengths, selection of optimally-exposed pixels from the captured images to generate optimally-exposed image data for each spectrum, computing statistics of the optimally-exposed pixels to generate statistical data, identifying a serum or plasma portion of the specimen, and classifying, based on the statistical data, whether an artifact is present or absent within the serum or plasma portion. Testing apparatus and quality check modules adapted to carry out the method are described, as are other aspects.
    Type: Grant
    Filed: January 24, 2017
    Date of Patent: August 18, 2020
    Assignee: Siemens Healthcare Diagnostics Inc.
    Inventors: Stefan Kluckner, Yao-Jen Chang, Terrence Chen, Benjamin S. Pollack
  • Patent number: 10746753
    Abstract: A model-based method of classifying a specimen in a specimen container. The method includes capturing images of the specimen and container at multiple different exposure times, at multiple different spectra having different nominal wavelengths, and at different viewpoints by using multiple cameras. From the captured images, 2D data sets are generated. The 2D data sets are based upon selection of optimally-exposed pixels from the multiple different exposure images to generate optimally-exposed image data for each spectrum. Based upon these 2D data sets, various components are classified using a multi-class classifier, such as serum or plasma portion, settled blood portion, gel separator (if present), tube, air, or label. From the classification data and 2D data sets, a 3D model can be generated. Specimen testing apparatus and quality check modules adapted to carry out the method are described, as are other aspects.
    Type: Grant
    Filed: January 24, 2017
    Date of Patent: August 18, 2020
    Assignee: Siemens Healthcare Diagnostics Inc.
    Inventors: Stefan Kluckner, Yao-Jen Chang, Terrence Chen, Benjamin S. Pollack
  • Publication number: 20200232908
    Abstract: Embodiments provide a method of using image-based tube top circle detection based on multiple candidate selection to localize the tube top circle region in input images. According to embodiments provided herein, multi-candidate selection improves the robustness of tube top circle detection by making use of multiple views of the same tube. With multiple candidates extracted from images under different viewpoints of the same tube, the multi-candidate selection algorithm selects an optimal combination among the candidates and provides more precise measurement of tube characteristics. This information is invaluable in an IVD environment in which a sample handler is processing the tubes and moving the tubes to analyzers for testing and analysis.
    Type: Application
    Filed: June 25, 2018
    Publication date: July 23, 2020
    Applicant: Siemens Healthcare Diagnostics Inc.
    Inventors: Yao-Jen Chang, Stefan Kluckner, Benjamin S. Pollack, Terrence Chen
  • Patent number: 10716457
    Abstract: A method and system for calculating a volume of resected tissue from a stream of intraoperative images is disclosed. A stream of 2D/2.5D intraoperative images of resected tissue of a patient is received. The 2D/2.5D intraoperative images in the stream are acquired at different angles with respect to the resected tissue. A resected tissue surface is segmented in each of the 2D/2.5D intraoperative images. The segmented resected tissue surfaces are stitched to generate a 3D point cloud representation of the resected tissue surface. A 3D mesh representation of the resected tissue surface is generated from the 3D point cloud representation of the resected tissue surface. The volume of the resected tissue is calculated from the 3D mesh representation of the resected tissue surface.
    Type: Grant
    Filed: October 14, 2015
    Date of Patent: July 21, 2020
    Assignee: Siemens Aktiengesellschaft
    Inventors: Thomas Pheiffer, Stefan Kluckner, Ali Kamen
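The final step of the abstract above, computing a volume from a 3D mesh representation, is commonly done by summing signed tetrahedron volumes over the mesh faces (an application of the divergence theorem). This is a standard technique consistent with the abstract, not a reproduction of the patent's full pipeline:

```python
def mesh_volume(vertices, faces):
    """vertices: list of (x, y, z) points; faces: list of (i, j, k) index
    triples of a closed triangle mesh with consistent winding.
    Returns the enclosed volume."""
    vol = 0.0
    for i, j, k in faces:
        ax, ay, az = vertices[i]
        bx, by, bz = vertices[j]
        cx, cy, cz = vertices[k]
        # Signed tetrahedron volume: scalar triple product a . (b x c) / 6
        vol += (ax * (by * cz - bz * cy)
                + ay * (bz * cx - bx * cz)
                + az * (bx * cy - by * cx)) / 6.0
    return abs(vol)
```

Each face, together with the origin, forms a tetrahedron; the signed contributions of faces outside the solid cancel, leaving the enclosed volume.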
  • Patent number: 10699438
    Abstract: The present embodiments relate to localizing a mobile device in a complex, three-dimensional scene. By way of introduction, the present embodiments described below include apparatuses and methods for using multiple, independent pose estimations to increase the accuracy of a single, resulting pose estimation. The present embodiments increase the amount of input data by windowing a single depth image, using multiple depth images from the same sensor, and/or using multiple depth images from different sensors. The resulting pose estimation uses the input data with a multi-window model, a multi-shot model, a multi-sensor model, or a combination thereof to accurately estimate the pose of a mobile device.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: June 30, 2020
    Assignee: Siemens Healthcare GmbH
    Inventors: Oliver Lehmann, Stefan Kluckner, Terrence Chen
  • Publication number: 20200167591
    Abstract: Methods for image-based detection of the tops of sample tubes used in an automated diagnostic analysis system may be based on a convolutional neural network to pre-process images of the sample tube tops to intensify the tube top circle edges while suppressing the edge response from other objects that may appear in the image. Edge maps generated by the methods may be used for various image-based sample tube analyses, categorizations, and/or characterizations of the sample tubes to control a robot in relationship to the sample tubes. Image processing and control apparatus configured to carry out the methods are also described, as are other aspects.
    Type: Application
    Filed: June 25, 2018
    Publication date: May 28, 2020
    Applicant: Siemens Healthcare Diagnostics Inc.
    Inventors: Yao-Jen Chang, Stefan Kluckner, Benjamin S. Pollack, Terrence Chen
  • Publication number: 20200158745
    Abstract: A method of characterizing a serum and plasma portion of a specimen in regions occluded by one or more labels. The characterization may be used for Hemolysis, Icterus, and/or Lipemia, or Normal detection. The method captures one or more images of a labeled specimen container including a serum or plasma portion, processes the one or more images to provide segmentation data and identification of a label-containing region, and classifies the label-containing region with a convolutional neural network (CNN) to provide a pixel-by-pixel (or patch-by-patch) characterization of the label thickness count, which may be used to adjust intensities of regions of a serum or plasma portion having label occlusion. Optionally, the CNN can characterize the label-containing region as one of multiple pre-defined label configurations. Quality check modules and specimen testing apparatus adapted to carry out the method are described, as are other aspects.
    Type: Application
    Filed: April 13, 2017
    Publication date: May 21, 2020
    Applicant: Siemens Healthcare Diagnostics Inc.
    Inventors: Jiang Tian, Stefan Kluckner, Shanhui Sun, Yao-Jen Chang, Terrence Chen, Benjamin S. Pollack
  • Publication number: 20200151498
    Abstract: A method of characterizing a serum and plasma portion of a specimen in regions occluded by one or more labels. The characterization may be used for determining Hemolysis (H), Icterus (I), and/or Lipemia (L), or Normal (N) of a serum or plasma portion of a specimen. The method includes capturing one or more images of a labeled specimen container including a serum or plasma portion, processing the one or more images with a convolutional neural network to provide a determination of Hemolysis (H), Icterus (I), and/or Lipemia (L), or Normal (N). In further embodiments, the convolutional neural network can provide N-Class segmentation information. Quality check modules and testing apparatus adapted to carry out the method are described, as are other aspects.
    Type: Application
    Filed: April 10, 2018
    Publication date: May 14, 2020
    Applicant: Siemens Healthcare Diagnostics Inc.
    Inventors: Shanhui Sun, Stefan Kluckner, Yao-Jen Chang, Terrence Chen, Benjamin S. Pollack
  • Publication number: 20200151878
    Abstract: A method of characterizing a serum and plasma portion of a specimen in regions occluded by one or more labels. The characterization method may be used to provide input to an HILN (H, I, and/or L, or N) detection method. The characterization method includes capturing one or more images of a labeled specimen container including a serum or plasma portion from multiple viewpoints, processing the one or more images to provide segmentation data including identification of a label-containing region, determining a closest label match of the label-containing region to a reference label configuration selected from a reference label configuration database, and generating a combined representation based on the segmentation information and the closest label match. Using the combined representation allows for compensation of the light-blocking effects of the label-containing region. Quality check modules and testing apparatus adapted to carry out the method are described, as are other aspects.
    Type: Application
    Filed: April 10, 2018
    Publication date: May 14, 2020
    Applicant: Siemens Healthcare Diagnostics Inc.
    Inventors: Stefan Kluckner, Patrick Wissmann, Yao-Jen Chang, Terrence Chen, Benjamin S. Pollack
  • Publication number: 20200057778
    Abstract: In pose estimation from a depth sensor (12), depth information is matched (70) with 3D information. Depending on the shape captured in depth image information, different objects may benefit from more or less pose density from different perspectives. The database (48) is created by bootstrap aggregation (64). Possible additional poses are tested (70) for nearest neighbors already in the database (48). Where the nearest neighbor is far, then the additional pose is added (72). Where the nearest neighbor is not far, then the additional pose is not added. The resulting database (48) includes entries for poses to distinguish the pose without overpopulation. The database (48) is indexed and used to efficiently determine pose from a depth camera (12) of a given captured image.
    Type: Application
    Filed: April 11, 2017
    Publication date: February 20, 2020
    Inventors: Shanhui Sun, Stefan Kluckner, Ziyan Wu, Oliver Lehmann, Jan Ernst, Terrence Chen
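The bootstrap-style database growth described above (add a candidate pose only if its nearest neighbor already stored is far away) can be sketched as a greedy loop. Plain Euclidean distance over pose parameters is an illustrative simplification of whatever pose metric the patent uses:

```python
import math

def build_pose_database(candidates, threshold):
    """Greedily grow a pose database: a candidate is added only when its
    nearest neighbor already in the database lies beyond `threshold`."""
    db = []
    for pose in candidates:
        nearest = min((math.dist(pose, p) for p in db), default=math.inf)
        if nearest > threshold:   # far from everything stored: keep it
            db.append(pose)
    return db
```

This yields a database dense enough to distinguish poses but without near-duplicate entries, matching the "without overpopulation" goal in the abstract.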
  • Publication number: 20200057831
    Abstract: The present embodiments relate to generating synthetic depth data. By way of introduction, the present embodiments described below include apparatuses and methods for modeling the characteristics of a real-world light sensor and generating realistic synthetic depth data accurately representing depth data as if captured by the real-world light sensor. To generate accurate depth data, a sequence of procedures is applied to depth images rendered from a three-dimensional model. The sequence of procedures simulates the underlying mechanism of the real-world sensor. By simulating the real-world sensor, parameters relating to the projection and capture of the sensor, environmental illuminations, image processing, and motion are accurately modeled for generating depth data.
    Type: Application
    Filed: February 23, 2017
    Publication date: February 20, 2020
    Inventors: Ziyan Wu, Shanhui Sun, Stefan Kluckner, Terrence Chen, Jan Ernst
  • Publication number: 20200013189
    Abstract: The present embodiments relate to automatically estimating a three-dimensional pose of an object from an image captured using a camera with a structured light sensor. By way of introduction, the present embodiments described below include apparatuses and methods for training a system for and estimating a pose of an object from a test image. Training and test images are sampled to generate local image patches. Features are extracted from the local image patches to generate feature databases used to estimate nearest neighbor poses for each local image patch. The closest nearest neighbor pose to the test image is selected as the estimated three-dimensional pose.
    Type: Application
    Filed: February 23, 2017
    Publication date: January 9, 2020
    Inventors: Srikrishna Karanam, Ziyan Wu, Shanhui Sun, Oliver Lehmann, Stefan Kluckner, Terrence Chen, Jan Ernst
  • Patent number: 10425629
    Abstract: A system and method includes generation of a first map of first descriptors based on pixels of a first two-dimensional depth image, where a location of each first descriptor in the first map corresponds to a location of a respective pixel of a first two-dimensional depth image, generation of a second map of second descriptors based on pixels of the second two-dimensional depth image, where a location of each second descriptor in the second map corresponds to a location of a respective pixel of the second two-dimensional depth image, upsampling of the first map of descriptors using a first upsampling technique to generate an upsampled first map of descriptors, upsampling of the second map of descriptors using a second upsampling technique to generate an upsampled second map of descriptors, generation of a descriptor difference map based on differences between descriptors of the upsampled first map of descriptors and descriptors of the upsampled second map of descriptors, generation of a geodesic preservation m
    Type: Grant
    Filed: June 28, 2017
    Date of Patent: September 24, 2019
    Assignee: Siemens Healthcare GmbH
    Inventors: Vivek Kumar Singh, Stefan Kluckner, Yao-jen Chang, Kai Ma, Terrence Chen
  • Publication number: 20190277870
    Abstract: A method of characterizing a specimen for HILN (H, I, and/or L, or N). The method includes capturing images of the specimen at multiple different viewpoints, processing the images to provide segmentation information for each viewpoint, generating a semantic map from the segmentation information, selecting a synthetic viewpoint, identifying front view semantic data and back view semantic data for the synthetic viewpoint, and determining HILN of the serum or plasma portion based on the front view semantic data with an HILN classifier, while taking into account back view semantic data. Testing apparatus and quality check modules adapted to carry out the method are described, as are other aspects.
    Type: Application
    Filed: November 13, 2017
    Publication date: September 12, 2019
    Applicant: Siemens Healthcare Diagnostics Inc.
    Inventors: Stefan Kluckner, Shanhui Sun, Yao-Jen Chang, Terrence Chen, Benjamin S. Pollack
  • Publication number: 20190271714
    Abstract: A model-based method of determining characteristics of a specimen container cap to identify the container cap. The method includes providing a specimen container including a container cap; capturing backlit images of the container cap taken at different exposure lengths and using a plurality of different nominal wavelengths; selecting optimally-exposed pixels from the images at different exposure lengths at each nominal wavelength to generate optimally-exposed image data for each nominal wavelength; classifying the optimally-exposed pixels as at least one of a tube, a label, or a cap; and identifying a shape of the container cap based upon the optimally-exposed pixels classified as being the cap and the image data for each nominal wavelength. Quality check modules and specimen testing apparatus adapted to carry out the method are described, as are numerous other aspects.
    Type: Application
    Filed: July 7, 2017
    Publication date: September 5, 2019
    Applicant: Siemens Healthcare Diagnostics Inc.
    Inventors: Stefan Kluckner, Yao-Jen Chang, Terrence Chen, Benjamin S. Pollack