Patents by Inventor Jan Ernst
Jan Ernst has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20200226735
Abstract: A system and method for visual anomaly localization in a test image includes generating, in a plurality of scaled iterations, attention maps for a test image using a classifier network trained with image-level labels. A current attention map is generated using an inversion of the classifier network on a condition that a forward pass of the test image in the classifier network detects a first class. One or more attention regions of the current attention map may be extracted and resized as a sub-image. For each scaled iteration, extraction of one or more regions of a current attention map is performed on a condition that the current attention map is significantly different from the preceding attention map.
Type: Application
Filed: March 16, 2018
Publication date: July 16, 2020
Inventors: Kuan-Chuan PENG, Ziyan WU, Jan ERNST
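The coarse-to-fine loop in this abstract can be sketched as follows. This is a minimal illustration, not the patented method: `attention_fn` and `crop_fn` are hypothetical stand-ins for the classifier inversion and the region extraction/resizing steps, and the mean-absolute-difference stopping test is an assumed way to decide that two attention maps are not "significantly different."

```python
import numpy as np

def localize_anomaly(image, attention_fn, crop_fn, n_scales, diff_thresh):
    """Iteratively refine anomaly localization across scales: compute an
    attention map for the current (sub-)image and keep extracting regions
    only while the new map differs enough from the previous one."""
    regions, prev_map = [], None
    for _ in range(n_scales):
        att = attention_fn(image)
        if prev_map is not None and np.abs(att - prev_map).mean() <= diff_thresh:
            break                       # attention stabilized: stop refining
        image = crop_fn(image, att)     # extracted, resized sub-image
        regions.append(image)
        prev_map = att
    return regions
```

The returned list of progressively smaller sub-images localizes the anomaly at increasing resolution.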
-
Patent number: 10672115
Abstract: Systems and methods are disclosed for processing an image to detect anomalous pixels. An image classification is received from a trained convolutional neural network (CNN) for an input image with a positive classification being defined to represent detection of an anomaly in the image and a negative classification being defined to represent absence of an anomaly. A backward propagation analysis of the input image for each layer of the CNN generates an attention mapping that includes a positive attention map and a negative attention map. A positive mask is generated based on intensity thresholds of the positive attention map and a negative mask is generated based on intensity thresholds of the negative attention map. An image of segmented anomalous pixels is generated based on an aggregation of the positive mask and the negative mask.
Type: Grant
Filed: December 6, 2017
Date of Patent: June 2, 2020
Assignee: Siemens Corporation
Inventors: Rameswar Panda, Ziyan Wu, Arun Innanje, Ramesh Nair, Ti-chiun Chang, Jan Ernst
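The mask-aggregation step can be illustrated with a small sketch. The abstract does not fix how the two masks are combined, so the rule below (keep pixels with strong positive attention and weak negative attention) is an assumed aggregation, and the thresholds are hypothetical defaults:

```python
import numpy as np

def segment_anomalies(pos_map, neg_map, pos_thresh=0.5, neg_thresh=0.5):
    """Threshold the positive and negative attention maps into binary
    masks, then aggregate them into a segmentation of anomalous pixels."""
    pos_mask = pos_map >= pos_thresh   # strong evidence for an anomaly
    neg_mask = neg_map >= neg_thresh   # strong evidence against one
    return pos_mask & ~neg_mask        # anomalous where only positive fires
```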
-
Publication number: 20200065634
Abstract: Aspects include receiving a request to perform an image classification task in a target domain. The image classification task includes identifying a feature in images in the target domain. Classification information related to the feature is transferred from a source domain to the target domain. The transferring includes receiving a plurality of pairs of task-irrelevant images that each includes a task-irrelevant image in the source domain and in the target domain. The task-irrelevant image in the source domain has a fixed correspondence to the task-irrelevant image in the target domain. A target neural network is trained to perform the image classification task in the target domain. The training is based on the plurality of pairs of task-irrelevant images. The image classification task is performed in the target domain and includes applying the target neural network to an image in the target domain and outputting an identified feature.
Type: Application
Filed: May 11, 2018
Publication date: February 27, 2020
Inventors: Ziyan Wu, Kuan-Chuan Peng, Jan Ernst
-
Publication number: 20200057831
Abstract: The present embodiments relate to generating synthetic depth data. By way of introduction, the present embodiments described below include apparatuses and methods for modeling the characteristics of a real-world light sensor and generating realistic synthetic depth data accurately representing depth data as if captured by the real-world light sensor. To generate accurate depth data, a sequence of procedures is applied to depth images rendered from a three-dimensional model. The sequence of procedures simulates the underlying mechanism of the real-world sensor. By simulating the real-world sensor, parameters relating to the projection and capture of the sensor, environmental illumination, image processing, and motion are accurately modeled for generating depth data.
Type: Application
Filed: February 23, 2017
Publication date: February 20, 2020
Inventors: Ziyan Wu, Shanhui Sun, Stefan Kluckner, Terrence Chen, Jan Ernst
-
Publication number: 20200057778
Abstract: In pose estimation from a depth sensor (12), depth information is matched (70) with 3D information. Depending on the shape captured in the depth image information, different objects may benefit from more or less pose density from different perspectives. The database (48) is created by bootstrap aggregation (64). Possible additional poses are tested (70) for nearest neighbors already in the database (48). Where the nearest neighbor is far, the additional pose is added (72); where the nearest neighbor is not far, the additional pose is not added. The resulting database (48) includes enough pose entries to distinguish the pose without overpopulation. The database (48) is indexed and used to efficiently determine the pose of a given captured image from a depth camera (12).
Type: Application
Filed: April 11, 2017
Publication date: February 20, 2020
Inventors: Shanhui Sun, Stefan Kluckner, Ziyan Wu, Oliver Lehmann, Jan Ernst, Terrence Chen
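The add-if-far gating rule described above can be sketched directly. This is an illustrative version only: it uses plain Euclidean distance in pose-parameter space as the (assumed) metric and a greedy pass over candidates, whereas the patent leaves the distance measure and aggregation details open:

```python
import numpy as np

def build_pose_database(candidate_poses, min_dist):
    """Keep a candidate pose only when its nearest neighbor already in
    the database is farther than min_dist, so the database covers the
    pose space without overpopulation."""
    database = []
    for pose in candidate_poses:
        pose = np.asarray(pose, dtype=float)
        if not database:
            database.append(pose)
            continue
        nearest = min(np.linalg.norm(pose - p) for p in database)
        if nearest > min_dist:        # far from everything stored: add it
            database.append(pose)     # otherwise the region is covered
    return database
```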
-
Publication number: 20200013189
Abstract: The present embodiments relate to automatically estimating a three-dimensional pose of an object from an image captured using a camera with a structured light sensor. By way of introduction, the present embodiments described below include apparatuses and methods for training a system for, and estimating, a pose of an object from a test image. Training and test images are sampled to generate local image patches. Features are extracted from the local image patches to generate feature databases used to estimate nearest-neighbor poses for each local image patch. The nearest-neighbor pose closest to the test image is selected as the estimated three-dimensional pose.
Type: Application
Filed: February 23, 2017
Publication date: January 9, 2020
Inventors: Srikrishna Karanam, Ziyan Wu, Shanhui Sun, Oliver Lehmann, Stefan Kluckner, Terrence Chen, Jan Ernst
-
Publication number: 20190318315
Abstract: A method for matching jobs includes: providing a master skill ledger module for identifying skills, wherein the skills are grouped into clusters and categories; providing a platform for receiving hiring requirements from employers and applications from applicants; analyzing the hiring requirements and producing job profiles associated with the master skill ledger module; analyzing the applications from the applicants and producing applicant profiles associated with the master skill ledger module; matching the applicant profiles with the job profiles and generating a matching score; providing recommendations to the employer based on the matching score; and receiving and processing ratings, if any, from the employers on the recommendations. Ratings received from the employer are directed as input for training the artificial intelligence processing.
Type: Application
Filed: April 13, 2018
Publication date: October 17, 2019
Applicant: CXS Analytics Sdn Bhd
Inventors: Connor CLARK-LINDH, Jan Ernst F. LAMBRECHTS, Yee Chee KEAN
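A matching score like the one mentioned above could take many forms; the patent does not specify a formula. As one hypothetical illustration, a score can be computed as the fraction of required skills the applicant covers:

```python
def matching_score(job_skills, applicant_skills):
    """Illustrative matching score: fraction of the job's required
    skills that appear in the applicant's profile (assumed formula,
    not the one claimed by the patent)."""
    required = set(job_skills)
    if not required:
        return 0.0
    return len(required & set(applicant_skills)) / len(required)
```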
-
Patent number: 10444406
Abstract: A method for predicting short-term cloud coverage includes a computer calculating an estimated cloud velocity field at a current time value based on sky images. The computer determines a segmented cloud model based on the sky images, a future sun location corresponding to a future time value, and sun pixel locations at the future time value based on the future sun location. Next, the computer applies a back-propagation algorithm to the sun pixel locations using the estimated cloud velocity field to yield propagated sun pixel locations corresponding to a previous time value. Then, the computer predicts cloud coverage for the future sun location based on the propagated sun pixel locations and the segmented cloud model.
Type: Grant
Filed: April 17, 2014
Date of Patent: October 15, 2019
Assignee: Siemens Aktiengesellschaft
Inventors: Shanhui Sun, Jan Ernst, Archana Sapkota, Eberhard Ritzhaupt-Kleissl, Jeremy Ralph Wiles, Terrence Chen
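The back-propagation step has a simple geometric core: each future sun-pixel location is moved backward along the estimated cloud motion to find where an occluding cloud would have to be now. The sketch below is a simplification under stated assumptions: it treats the velocity field as a single constant (vx, vy) vector rather than a spatially varying field:

```python
import numpy as np

def backpropagate_sun_pixels(future_pixels, velocity, dt):
    """Propagate future sun-pixel locations backward in time by the
    estimated cloud velocity: position_now = position_future - v * dt."""
    v = np.asarray(velocity, dtype=float)
    return np.asarray(future_pixels, dtype=float) - v * dt
```

Checking the propagated locations against the segmented cloud model then tells whether those pixels are currently cloudy.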
-
Publication number: 20190287234
Abstract: Systems and methods are disclosed for processing an image to detect anomalous pixels. An image classification is received from a trained convolutional neural network (CNN) for an input image with a positive classification being defined to represent detection of an anomaly in the image and a negative classification being defined to represent absence of an anomaly. A backward propagation analysis of the input image for each layer of the CNN generates an attention mapping that includes a positive attention map and a negative attention map. A positive mask is generated based on intensity thresholds of the positive attention map and a negative mask is generated based on intensity thresholds of the negative attention map. An image of segmented anomalous pixels is generated based on an aggregation of the positive mask and the negative mask.
Type: Application
Filed: December 6, 2017
Publication date: September 19, 2019
Inventors: Rameswar Panda, Ziyan Wu, Arun Innanje, Ramesh Nair, Ti-chiun Chang, Jan Ernst
-
Patent number: 10303942
Abstract: A short-term cloud forecasting system includes a cloud segmentation processor that receives image data from images captured by an all-sky camera. The cloud segmentation processor calculates, for each pixel in an image, a probability that the pixel is representative of a cloud. A cloud motion estimation processor calculates a motion vector representing estimated cloud motion and calculates weights representing the likelihood that the cloud motion will cause a cloud to occlude the sun at a time in the near future. An uncertainty processor calculates one or more uncertainty indexes that quantify the confidence that a cloud forecast is accurate. Combining the cloud probabilities, the global motion vector, and the at least one uncertainty index in a sun occlusion prediction processor produces a short-term cloud forecast based on the image data that may be used as input to control systems for controlling a power grid.
Type: Grant
Filed: February 16, 2017
Date of Patent: May 28, 2019
Assignee: Siemens Aktiengesellschaft
Inventors: Ti-chiun Chang, Jan Ernst, Jeremy-Ralph Wiles, Joachim Bamberger, Andrei Szabo
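How the probabilities, motion-derived weights, and uncertainty are combined is not spelled out in the abstract. One plausible (entirely assumed) combination is a weighted sum of per-pixel cloud probabilities, discounted by the forecast uncertainty:

```python
def occlusion_forecast(cloud_probs, weights, uncertainty):
    """Hypothetical combination rule: weight each pixel's cloud
    probability by its likelihood of drifting over the sun, then scale
    the total by forecast confidence (1 - uncertainty)."""
    score = sum(p * w for p, w in zip(cloud_probs, weights))
    return score * (1.0 - uncertainty)
```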
-
Publication number: 20190130603
Abstract: Systems, methods, and computer-readable media are disclosed for determining feature representations of 2.5D image data using deep learning techniques. The 2.5D image data may be synthetic image data generated from 3D simulated model data such as 3D CAD data. The 2.5D image data may be indicative of any number of pose estimations/camera poses representing virtual or actual viewing perspectives of an object modeled by the 3D CAD data. A neural network such as a convolutional neural network (CNN) may be trained using the 2.5D image data as training data to obtain corresponding feature representations. The pose estimations/camera poses may be stored in a data repository in association with the corresponding feature representations. The trained CNN may then be used to determine an input feature representation from an input 2.5D image and index the input feature representation against the data repository to determine matching pose estimation(s).
Type: Application
Filed: March 9, 2017
Publication date: May 2, 2019
Inventors: Shanhui Sun, Kai Ma, Stefan Kluckner, Ziyan Wu, Jan Ernst, Vivek Kumar Singh, Terrence Chen
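The indexing step amounts to nearest-neighbor retrieval in feature space. The sketch below assumes cosine similarity as the matching metric, which the abstract does not specify:

```python
import numpy as np

def nearest_pose(query_feature, feature_db, pose_db):
    """Index a query feature against stored feature representations and
    return the pose associated with the closest match (cosine similarity
    is an illustrative choice of metric)."""
    feats = np.asarray(feature_db, dtype=float)
    q = np.asarray(query_feature, dtype=float)
    sims = feats @ q / (np.linalg.norm(feats, axis=1) * np.linalg.norm(q))
    return pose_db[int(np.argmax(sims))]
```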
-
Publication number: 20190102909
Abstract: Systems, methods, and computer-readable media are disclosed for automated identification of parts of a parts assembly using image data of the parts assembly and 3D simulated model data of the parts assembly. The 3D simulated model data may be 3D CAD data of the parts assembly. An image of the parts assembly is captured by a mobile device and sent to a back-end server for processing. The back-end server determines a feature representation corresponding to the image and searches a repository to locate a matching feature representation stored in association with a corresponding pose estimation. The matching pose estimation is rendered as an overlay on the image of the parts assembly, thereby enabling automated identification of parts within the image or some user-selected portion of the image.
Type: Application
Filed: March 9, 2017
Publication date: April 4, 2019
Inventors: Stefan Kluckner, Shanhui Sun, Kai Ma, Ziyan Wu, Arun Innanje, Jan Ernst, Terrence Chen
-
Publication number: 20190080475
Abstract: A method for identifying a feature in a first image comprises establishing an initial database of image triplets and, in a pose estimation processor: training a deep learning neural network using the initial database of image triplets; calculating a pose for the first image using the deep learning neural network; comparing the calculated pose to a validation database populated with image data to identify an error case in the deep learning neural network; creating a new set of training data including a plurality of error cases identified in a plurality of input images; and retraining the deep learning neural network using the new set of training data. The deep learning neural network may be iteratively retrained with a series of new training data sets. Statistical analysis is performed on a plurality of error cases to select the subset of error cases included in the new set of training data.
Type: Application
Filed: March 13, 2017
Publication date: March 14, 2019
Inventors: Kai Ma, Shanhui Sun, Stefan Kluckner, Ziyan Wu, Terrence Chen, Jan Ernst
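The error-case selection step can be illustrated by a simple hard-example mining pass. The ranking criterion below (largest absolute pose error) is a hypothetical stand-in for the statistical analysis the abstract mentions:

```python
def select_error_cases(predictions, labels, errors_per_round):
    """Rank validation samples by pose error (assumed: absolute
    difference) and return the indices of the worst offenders to add
    to the next retraining round."""
    errors = [(abs(p - y), i) for i, (p, y) in enumerate(zip(predictions, labels))]
    errors.sort(reverse=True)                     # worst errors first
    return [i for _, i in errors[:errors_per_round]]
```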
-
Patent number: 10215714
Abstract: A method and system for detecting defects on the surface of an object are presented. An imaging device captures images of the surface of the object under ambient and dark field illumination conditions. The images are processed with a plurality of image operations to detect areas of potential defects at locations on the surface of the object based on a predictable pattern consisting of bright and shadow regions. Kernels are defined corresponding to configurations of the dark field illumination sources to enhance detection of potential defects. Areas of potential defects are cut from the processed images into sub-images. The sub-images are stitched together to generate a hypothesis of a potential defect at a location on the surface of the object. The hypothesis is classified with a classifier to determine whether the potential defect is a true defect. The classifier is trained with training data having characteristics of true defects. The method provides efficient automated detection of micro defects on the surface of an object.
Type: Grant
Filed: August 16, 2017
Date of Patent: February 26, 2019
Assignee: SIEMENS ENERGY, INC.
Inventors: Ziyan Wu, Rameswar Panda, Jan Ernst, Kevin P. Bailey
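The kernel-based enhancement step can be sketched as scoring each pixel by its best response to a bank of kernels, one per dark field illumination configuration, then thresholding. The kernels and threshold here are hypothetical; the sketch uses a naive sliding-window correlation for clarity rather than efficiency:

```python
import numpy as np

def detect_candidates(image, kernels, thresh):
    """Score each pixel by its strongest response across the kernel bank
    (plain correlation), then threshold to mark candidate-defect pixels."""
    h, w = image.shape
    best = np.full((h, w), -np.inf)
    for k in kernels:
        kh, kw = k.shape
        resp = np.full((h, w), -np.inf)
        for i in range(h - kh + 1):            # slide the kernel over the image
            for j in range(w - kw + 1):
                resp[i + kh // 2, j + kw // 2] = np.sum(image[i:i+kh, j:j+kw] * k)
        best = np.maximum(best, resp)          # keep the strongest response
    return best >= thresh
```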
-
Publication number: 20190057498
Abstract: A method and system for detecting line defects on the surface of an object are presented. An imaging device captures images of the surface of the object under ambient and dark field illumination conditions. The images are processed with a plurality of image operations to detect areas of potential defects based on a predictable pattern consisting of bright and shadow regions. Areas of potential defects are cut from the processed images into sub-images. The sub-images are stitched together to generate hypotheses of potential defects at locations on the surface of the object. The hypotheses are classified to determine whether the potential defects are true defects at those locations. A line defect is detected by refining line segments detected on the processed image based on criteria: the distance from the true defects to the line segments and the slopes between the true defects and the line segments must be less than threshold values.
Type: Application
Filed: August 16, 2017
Publication date: February 21, 2019
Inventors: Rameswar Panda, Ziyan Wu, Jan Ernst, Kevin P. Bailey
-
Publication number: 20190056333
Abstract: A method and system for detecting defects on the surface of an object are presented. An imaging device captures images of the surface of the object under ambient and dark field illumination conditions. The images are processed with a plurality of image operations to detect areas of potential defects at locations on the surface of the object based on a predictable pattern consisting of bright and shadow regions. Kernels are defined corresponding to configurations of the dark field illumination sources to enhance detection of potential defects. Areas of potential defects are cut from the processed images into sub-images. The sub-images are stitched together to generate a hypothesis of a potential defect at a location on the surface of the object. The hypothesis is classified with a classifier to determine whether the potential defect is a true defect. The classifier is trained with training data having characteristics of true defects. The method provides efficient automated detection of micro defects on the surface of an object.
Type: Application
Filed: August 16, 2017
Publication date: February 21, 2019
Inventors: Ziyan Wu, Rameswar Panda, Jan Ernst, Kevin P. Bailey
-
Patent number: 10192301
Abstract: A method and system for detecting line defects on the surface of an object are presented. An imaging device captures images of the surface of the object under ambient and dark field illumination conditions. The images are processed with a plurality of image operations to detect areas of potential defects based on a predictable pattern consisting of bright and shadow regions. Areas of potential defects are cut from the processed images into sub-images. The sub-images are stitched together to generate hypotheses of potential defects at locations on the surface of the object. The hypotheses are classified to determine whether the potential defects are true defects at those locations. A line defect is detected by refining line segments detected on the processed image based on criteria: the distance from the true defects to the line segments and the slopes between the true defects and the line segments must be less than threshold values.
Type: Grant
Filed: August 16, 2017
Date of Patent: January 29, 2019
Assignee: SIEMENS ENERGY, INC.
Inventors: Rameswar Panda, Ziyan Wu, Jan Ernst, Kevin P. Bailey
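The distance criterion in the line-segment refinement can be sketched concretely. This is a minimal illustration assuming 2D points and Euclidean distance; the slope criterion from the abstract is omitted for brevity:

```python
import math

def point_to_segment_distance(p, a, b):
    """Shortest distance from point p to the segment from a to b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Projection parameter clamped to the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def keep_segment(segment, defects, max_dist):
    """Keep a detected line segment only if at least one confirmed true
    defect lies within max_dist of it."""
    a, b = segment
    return any(point_to_segment_distance(d, a, b) <= max_dist for d in defects)
```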
-
Publication number: 20180330194
Abstract: Embodiments of the present invention provide a computer-implemented method for training an RGB-D classifier for a scene classification task. The method receives task-relevant labeled depth data, task-irrelevant RGB-D data, and a given trained representation in RGB. The method simulates an RGB representation using only the task-irrelevant RGB-D data. The method builds a joint neural network using only the task-irrelevant RGB-D data and the task-relevant labeled depth data.
Type: Application
Filed: September 29, 2017
Publication date: November 15, 2018
Inventors: Kuan-Chuan Peng, Ziyan Wu, Jan Ernst
-
Publication number: 20180330205
Abstract: Aspects include receiving a request to perform an image classification task in a target domain. The image classification task includes identifying a feature in images in the target domain. Classification information related to the feature is transferred from a source domain to the target domain. The transferring includes receiving a plurality of pairs of task-irrelevant images that each includes a task-irrelevant image in the source domain and in the target domain. The task-irrelevant image in the source domain has a fixed correspondence to the task-irrelevant image in the target domain. A target neural network is trained to perform the image classification task in the target domain. The training is based on the plurality of pairs of task-irrelevant images. The image classification task is performed in the target domain and includes applying the target neural network to an image in the target domain and outputting an identified feature.
Type: Application
Filed: September 29, 2017
Publication date: November 15, 2018
Inventors: Ziyan Wu, Kuan-Chuan Peng, Jan Ernst
-
Patent number: 10073848
Abstract: A database stores reference photographs of an assembly, taken from different orientations relative to the assembly. By matching a query photograph to one or more of the reference photographs, the pose of the assembly in the query photograph is determined. Based on the pose, the pixels of the two-dimensional query photograph are related to a three-dimensional representation from engineering data. Using labeled parts from the engineering data, the parts represented in the query photograph are identified, and part information (e.g., segmentation, number, or other metadata) is provided relative to the query photograph.
Type: Grant
Filed: March 17, 2015
Date of Patent: September 11, 2018
Assignee: Siemens Aktiengesellschaft
Inventors: Stefan Kluckner, Arun Innanje, Jan Ernst, Terrence Chen