Patents by Inventor Kuan-Chuan Peng

Kuan-Chuan Peng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11900247
    Abstract: Deep learning is used to train a neural network for end-to-end prediction of short term (e.g., 20 minutes or less) solar irradiation based on camera images and metadata. The architecture of the neural network includes a recurrent network for temporal considerations. The images and metadata are input at different locations in the neural network. The resulting machine-learned neural network predicts solar irradiation based on camera images and metadata so that a solar plant and back-up power source may be controlled to minimize output power variation.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: February 13, 2024
    Assignee: Siemens Aktiengesellschaft
    Inventors: Ti-chiun Chang, Patrick Reeb, Joachim Bamberger, Kuan-Chuan Peng, Jan Ernst
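As a rough illustration of the fusion pattern described above, the sketch below runs image features through a simple recurrent accumulator and injects metadata at a later point in the network. The encoder, recurrence, and readout weights are toy stand-ins, not the patented architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def image_encoder(img):
    # stand-in for a CNN: global average pooling to a feature vector
    return img.mean(axis=(0, 1))

def fuse_and_predict(img_feats_seq, metadata):
    # simple recurrent accumulation over the sequence of sky images
    h = np.zeros_like(img_feats_seq[0])
    for f in img_feats_seq:
        h = np.tanh(0.5 * h + 0.5 * f)
    # metadata (e.g. time of day, sun angle) enters *after* the recurrent
    # stage, i.e. at a different location in the network than the images
    x = np.concatenate([h, metadata])
    w = rng.normal(size=x.shape[0])  # hypothetical readout weights
    return float(x @ w)              # unscaled irradiance prediction

frames = [rng.random((8, 8, 3)) for _ in range(5)]  # toy sky-camera sequence
feats = [image_encoder(f) for f in frames]
pred = fuse_and_predict(feats, np.array([0.5, 0.1]))
```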
  • Publication number: 20240013062
    Abstract: A cross-modality knowledge transfer system is provided for adapting one or more source model networks to one or more target model networks. The system is configured to perform the steps of: providing the task-irrelevant (TI) paired datasets through the source feature encoders of the one or more source model networks; extracting TI source features and TI source moments from the TI paired datasets by the batch normalization (BN) layers of the one or more source model networks; providing the TI paired datasets and the unlabeled task-relevant (TR) datasets through the one or more target model networks to extract TI target features and TR target moments; training jointly all the feature encoders of the one or more target model networks by matching the extracted TI target features and TR target moments with the TI source features and TI source moments along with mixing weights; and forming a final target model network by combining the trained one or more target model networks.
    Type: Application
    Filed: January 16, 2023
    Publication date: January 11, 2024
    Inventors: Suhas Lohit, Sk Miraj Ahmed, Kuan Chuan Peng, Michael Jones
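The moment matching at the core of this abstract can be illustrated with a toy loss that penalizes the gap between a target batch's statistics and the source network's stored BN-style moments. This is a sketch under the assumption that "moments" means per-channel mean and variance; all names are hypothetical:

```python
import numpy as np

def moment_matching_loss(target_feats, src_mean, src_var):
    """Penalize the gap between a target batch's mean/variance and the
    source network's saved batch-norm moments (toy stand-in for the
    patented matching objective)."""
    mu = target_feats.mean(axis=0)
    var = target_feats.var(axis=0)
    return float(np.sum((mu - src_mean) ** 2) + np.sum((var - src_var) ** 2))

rng = np.random.default_rng(4)
feats = rng.normal(loc=0.0, scale=1.0, size=(64, 8))  # toy target features
loss = moment_matching_loss(feats, np.zeros(8), np.ones(8))
```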
  • Patent number: 11861480
    Abstract: Systems, methods, and computer-readable media are described for determining the orientation of a target object in an image and iteratively reorienting the target object until an orientation of the target object is within an acceptable threshold of a target orientation. Also described herein are systems, methods, and computer-readable media for verifying that an image contains a target object.
    Type: Grant
    Filed: August 21, 2018
    Date of Patent: January 2, 2024
    Assignee: Siemens Mobility GmbH
    Inventors: Arun Innanje, Kuan-Chuan Peng, Ziyan Wu, Jan Ernst
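The iterative reorientation the abstract describes can be sketched as a simple estimate-and-correct loop. Here `estimate_angle` and `rotate` are hypothetical stand-ins for the learned orientation estimator and the image rotation step:

```python
def reorient(estimate_angle, rotate, image, target=0.0, tol=2.0, max_iters=20):
    """Iteratively rotate `image` until its estimated orientation is
    within `tol` degrees of the target orientation."""
    for _ in range(max_iters):
        angle = estimate_angle(image)
        error = angle - target
        if abs(error) <= tol:
            return image, angle
        image = rotate(image, -error)  # correct by the estimated error
    return image, estimate_angle(image)

# toy stand-ins: the "image" is just its orientation angle in degrees
img, final = reorient(lambda im: im, lambda im, d: im + d, 37.0)
```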
  • Publication number: 20230326195
    Abstract: Anomalies in multiple different scenes or images can be detected and localized in a single training flow of a neural network. In various examples, incremental learning can be applied to a given system or network, such that the system or network can learn the distribution of new scenes in a single training flow. Thus, in some cases, when an anomalous image from a new scene is given as input to the network, the network can detect and localize the anomaly.
    Type: Application
    Filed: August 28, 2020
    Publication date: October 12, 2023
    Applicant: Siemens Aktiengesellschaft
    Inventors: Shashanka Venkataramanan, Rajat Vikram Singh, Kuan-Chuan Peng
  • Publication number: 20230281986
    Abstract: Embodiments of the present invention disclose a method and system for performing video anomaly detection by training neural networks. The method includes collecting a video of one or more digital images from a source domain. The method includes obtaining a set of images of foreground objects present in the video. In addition, the method includes training of a first neural network to predict frames for the one or more digital images in the video. The first neural network is trained using a future frame prediction module that predicts frames for the one or more digital images. The method includes training of a second neural network to classify the predicted frame as normal and to classify the synthesized pseudo-anomaly frame as abnormal. The method includes performing video anomaly detection based on training of the first neural network and the second neural network.
    Type: Application
    Filed: March 1, 2022
    Publication date: September 7, 2023
    Applicant: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Kuan-Chuan Peng, Abhishek Aich
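The future-frame-prediction idea above can be sketched as scoring each incoming frame by its prediction error: a frame the predictor cannot anticipate is flagged as anomalous. `predict_next` is a hypothetical stand-in for the first network (the patent additionally trains a second classifier on predicted versus synthesized pseudo-anomaly frames, which this sketch omits):

```python
import numpy as np

def anomaly_score(predict_next, frames, next_frame):
    """Mean squared error between the predicted and observed next frame;
    higher error suggests an anomaly."""
    pred = predict_next(frames)
    return float(np.mean((pred - next_frame) ** 2))

rng = np.random.default_rng(3)
clip = [rng.random((4, 4)) for _ in range(4)]                 # toy video clip
normal = anomaly_score(lambda fs: fs[-1], clip, clip[-1])     # perfect prediction
odd = anomaly_score(lambda fs: fs[-1], clip, 1.0 - clip[-1])  # mismatched frame
```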
  • Patent number: 11657274
    Abstract: Systems, methods, and computer-readable media are described for performing weakly supervised semantic segmentation of input images that utilizes self-guidance on attention maps during training to cause a guided attention inference network (GAIN) to focus attention on an object in an input image in a holistic manner rather than only on the most discriminative parts of the image. The self-guidance is provided jointly by a classification loss function and an attention mining loss function. Extra supervision can also be provided by using a select number of pixel-level-labeled input images to enhance the semantic segmentation capabilities of the GAIN.
    Type: Grant
    Filed: October 9, 2018
    Date of Patent: May 23, 2023
    Assignee: Siemens Aktiengesellschaft
    Inventors: Kunpeng Li, Ziyan Wu, Kuan-Chuan Peng, Jan Ernst
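The attention-mining loss can be sketched as follows: erase the regions the network currently attends to, then penalize any classification confidence that remains, which pushes attention to cover the whole object rather than only its most discriminative parts. `classify` here is a hypothetical stand-in for the trained classifier's class score, not the patented GAIN objectives:

```python
import numpy as np

def attention_mining_loss(classify, image, attention, threshold=0.5):
    """Erase attended regions and score what is left; a low score means
    the attention map already covered everything recognizable."""
    mask = (attention < threshold).astype(image.dtype)  # keep UN-attended pixels
    erased = image * mask[..., None]
    return classify(erased)  # lower is better

rng = np.random.default_rng(1)
img = rng.random((4, 4, 3))
attn = np.zeros((4, 4))
attn[1:3, 1:3] = 1.0  # attention concentrated on the center
loss = attention_mining_loss(lambda x: float(x.sum()), img, attn)
```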
  • Patent number: 11625950
    Abstract: A method for enhancing facial/object recognition includes receiving a query image, and providing a database of object images, including images relevant to the query image, each image having a first attribute and a second attribute with each of the first attribute and the second attribute having a first state and a second state. The method also includes creating an augmented database by generating a plurality of artificial images for each image in the database, the artificial images cooperating with the image to define a set of images including every combination of the first attribute and the second attribute in each of the first state and the second state, and comparing the query image to the images in the augmented database to find one or more matches.
    Type: Grant
    Filed: October 23, 2018
    Date of Patent: April 11, 2023
    Assignee: Siemens Aktiengesellschaft
    Inventors: Yunye Gong, Srikrishna Karanam, Ziyan Wu, Kuan-Chuan Peng, Jan Ernst
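The database augmentation above amounts to enumerating every combination of the two binary attributes for each stored image. A minimal sketch, where `transform` is a hypothetical generator that renders an image with the given attribute states:

```python
from itertools import product

def augment(image_id, transform):
    """Return artificial images covering every combination of two binary
    attributes (e.g. glasses on/off, beard on/off) for one database image."""
    states = [True, False]
    return [transform(image_id, a1, a2) for a1, a2 in product(states, states)]

# toy transform: tag the image id with its attribute combination
augmented = augment("img_007", lambda i, a, b: (i, a, b))
```

With two binary attributes this yields the full set of four combinations per database image; the query is then matched against the augmented set.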
  • Patent number: 11556749
    Abstract: Aspects include receiving a request to perform an image classification task in a target domain. The image classification task includes identifying a feature in images in the target domain. Classification information related to the feature is transferred from a source domain to the target domain. The transferring includes receiving a plurality of pairs of task-irrelevant images that each includes a task-irrelevant image in the source domain and in the target domain. The task-irrelevant image in the source domain has a fixed correspondence to the task-irrelevant image in the target domain. A target neural network is trained to perform the image classification task in the target domain. The training is based on the plurality of pairs of task-irrelevant images. The image classification task is performed in the target domain and includes applying the target neural network to an image in the target domain and outputting an identified feature.
    Type: Grant
    Filed: May 11, 2018
    Date of Patent: January 17, 2023
    Assignee: Siemens Aktiengesellschaft
    Inventors: Ziyan Wu, Kuan-Chuan Peng, Jan Ernst
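The transfer step above relies on task-irrelevant image pairs with fixed correspondence across domains: since each pair shows the same content, the target encoder can be trained so its features match the fixed source encoder's. A toy sketch with hypothetical encoder stand-ins:

```python
import numpy as np

def pairwise_transfer_loss(src_encode, tgt_encode, ti_pairs):
    """Average squared feature distance over task-irrelevant (source,
    target) image pairs; minimizing it pulls the trainable target
    encoder toward the fixed, pretrained source encoder."""
    losses = []
    for src_img, tgt_img in ti_pairs:
        f_s = src_encode(src_img)   # fixed source-domain features
        f_t = tgt_encode(tgt_img)   # trainable target-domain features
        losses.append(float(np.sum((f_s - f_t) ** 2)))
    return sum(losses) / len(losses)

rng = np.random.default_rng(2)
pairs = [(rng.random(8), rng.random(8)) for _ in range(3)]   # toy TI pairs
loss = pairwise_transfer_loss(lambda x: x, lambda x: x * 0.9, pairs)
```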
  • Publication number: 20220048733
    Abstract: Systems, methods and devices for real-time contactless elevator service operation of an elevator include a trained neural network (TNN) model. The TNN model is trained using a training processor with augmented datasets as a synthetic training dataset, to later perform elevator identifier recognition. The augmented datasets are generated from synthetic text images; the synthetic text images are augmented with different geometric and visual parameters to produce a predetermined number of variations in appearance for a set of training elevator identifiers. A camera captures a user image. A text image portion is extracted from the user image using the TNN model, and an elevator identifier is detected in the extracted text image portion using the extracted text image portion and the TNN model. The detected elevator identifier is displayed for user confirmation or user cancellation, and upon user confirmation, a control command is generated based on the detected elevator identifier associated with an elevator service.
    Type: Application
    Filed: August 17, 2020
    Publication date: February 17, 2022
    Inventors: Zafer Sahinoglu, Kuan-Chuan Peng, Alan Sullivan, William Yerazunis
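The synthetic training set the abstract describes can be sketched as rendering each elevator identifier under every combination of a few geometric and visual parameters. `render` below is a hypothetical rasterizer stand-in, not the patented pipeline:

```python
from itertools import product

def synth_variants(identifier, fonts, rotations, scales):
    """Enumerate appearance variations of one elevator identifier
    (e.g. a floor button label) for synthetic OCR training data."""
    def render(text, font, rot, scale):
        return (text, font, rot, scale)  # stand-in for a rendered image
    return [render(identifier, f, r, s)
            for f, r, s in product(fonts, rotations, scales)]

variants = synth_variants("B2", ["sans", "serif"], [-5, 0, 5], [0.8, 1.0])
```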
  • Patent number: 11216927
    Abstract: A system and method for visual anomaly localization in a test image includes generating, in a plurality of scaled iterations, attention maps for the test image using a classifier network trained with image-level labels. A current attention map is generated using an inversion of the classifier network on a condition that a forward pass of the test image in the classifier network detects a first class. One or more attention regions of the current attention map may be extracted and resized as a sub-image. For each scaled iteration, extraction of one or more regions of a current attention map is performed on a condition that the current attention map is significantly different from the preceding attention map. Visual localization of a region for the class in the test image is based on one or more of the attention maps.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: January 4, 2022
    Assignee: Siemens Aktiengesellschaft
    Inventors: Kuan-Chuan Peng, Ziyan Wu, Jan Ernst
  • Publication number: 20210304437
    Abstract: Systems, methods, and computer-readable media are described for determining the orientation of a target object in an image and iteratively reorienting the target object until an orientation of the target object is within an acceptable threshold of a target orientation. Also described herein are systems, methods, and computer-readable media for verifying that an image contains a target object.
    Type: Application
    Filed: August 21, 2018
    Publication date: September 30, 2021
    Inventors: Arun Innanje, Kuan-Chuan Peng, Ziyan Wu, Jan Ernst
  • Publication number: 20210158010
    Abstract: Deep learning is used to train a neural network for end-to-end prediction of short term (e.g., 20 minutes or less) solar irradiation based on camera images and metadata. The architecture of the neural network includes a recurrent network for temporal considerations. The images and metadata are input at different locations in the neural network. The resulting machine-learned neural network predicts solar irradiation based on camera images and metadata so that a solar plant and back-up power source may be controlled to minimize output power variation.
    Type: Application
    Filed: May 31, 2018
    Publication date: May 27, 2021
    Inventors: Ti-chiun Chang, Patrick Reeb, Joachim Bamberger, Kuan-Chuan Peng, Jan Ernst
  • Publication number: 20210150264
    Abstract: A system and method for semi-supervised learning of visual recognition networks includes generating an initial set of feature representation training data based on simulated 2D test images of various viewpoints with respect to a target 3D rendering. A feature representation network generates feature representation vectors based on processing of the initial feature representation training data. Keypoint patches are labeled according to a score value based on a series of reference patches of unique viewpoint poses and a test keypoint patch processed through the trained feature representation network, forming keypoint detector training data. A keypoint detector network learns keypoint detection based on processing of the keypoint detector training data.
    Type: Application
    Filed: July 3, 2018
    Publication date: May 20, 2021
    Inventors: Srikrishna Karanam, Ziyan Wu, Kuan-Chuan Peng, Jan Ernst
  • Publication number: 20200356854
    Abstract: Systems, methods, and computer-readable media are described for performing weakly supervised semantic segmentation of input images that utilizes self-guidance on attention maps during training to cause a guided attention inference network (GAIN) to focus attention on an object in an input image in a holistic manner rather than only on the most discriminative parts of the image. The self-guidance is provided jointly by a classification loss function and an attention mining loss function. Extra supervision can also be provided by using a select number of pixel-level-labeled input images to enhance the semantic segmentation capabilities of the GAIN.
    Type: Application
    Filed: October 9, 2018
    Publication date: November 12, 2020
    Inventors: Kunpeng Li, Ziyan Wu, Kuan-Chuan Peng, Jan Ernst
  • Publication number: 20200334519
    Abstract: A method for learning image representations comprises receiving a pair of images, generating a set of candidate patches in each image, identifying features in each patch, arranging the patches in pairs, and measuring a distance between a feature in the first image and a feature in the second image. The pair of patches is labeled as positive or negative based on the comparison of the measured distance to a threshold. Images may be depth images, in which case the distance is determined by projecting the features into three-dimensional space. A system for learning representations includes a computer processor configured to provide a pair of images to a Siamese convolutional neural network to generate candidate patches in each image. A sampling layer arranges the patches in pairs and measures distances between features in the patches. Each pair is labeled as positive or negative according to the comparison of the distance to a threshold.
    Type: Application
    Filed: January 11, 2018
    Publication date: October 22, 2020
    Inventors: Georgios Georgakis, Srikrishna Karanam, Varun Manjunatha, Kuan-Chuan Peng, Ziyan Wu, Jan Ernst
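For depth images, the positive/negative labeling above comes down to back-projecting each patch center into 3D with its depth value and thresholding the 3D distance. A sketch under the usual pinhole-camera assumption, where `K_inv` is the inverse camera intrinsic matrix (the threshold value is illustrative):

```python
import numpy as np

def label_patch_pair(p1_uv, d1, p2_uv, d2, K_inv, threshold=0.05):
    """Back-project two patch centers (pixel coords + depth) into 3D and
    label the pair positive if they are within `threshold` meters."""
    x1 = d1 * (K_inv @ np.array([p1_uv[0], p1_uv[1], 1.0]))
    x2 = d2 * (K_inv @ np.array([p2_uv[0], p2_uv[1], 1.0]))
    return bool(np.linalg.norm(x1 - x2) <= threshold)  # True = positive pair

K_inv = np.linalg.inv(np.array([[500., 0., 320.],
                                [0., 500., 240.],
                                [0., 0., 1.]]))  # toy intrinsics
same = label_patch_pair((320, 240), 1.0, (322, 241), 1.0, K_inv)
```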
  • Publication number: 20200242340
    Abstract: A method for enhancing facial/object recognition includes receiving a query image, and providing a database of object images, including images relevant to the query image, each image having a first attribute and a second attribute with each of the first attribute and the second attribute having a first state and a second state. The method also includes creating an augmented database by generating a plurality of artificial images for each image in the database, the artificial images cooperating with the image to define a set of images including every combination of the first attribute and the second attribute in each of the first state and the second state, and comparing the query image to the images in the augmented database to find one or more matches.
    Type: Application
    Filed: October 23, 2018
    Publication date: July 30, 2020
    Inventors: Yunye Gong, Srikrishna Karanam, Ziyan Wu, Kuan-Chuan Peng, Jan Ernst
  • Publication number: 20200226735
    Abstract: A system and method for visual anomaly localization in a test image includes generating, in a plurality of scaled iterations, attention maps for the test image using a classifier network trained with image-level labels. A current attention map is generated using an inversion of the classifier network on a condition that a forward pass of the test image in the classifier network detects a first class. One or more attention regions of the current attention map may be extracted and resized as a sub-image. For each scaled iteration, extraction of one or more regions of a current attention map is performed on a condition that the current attention map is significantly different from the preceding attention map.
    Type: Application
    Filed: March 16, 2018
    Publication date: July 16, 2020
    Inventors: Kuan-Chuan Peng, Ziyan Wu, Jan Ernst
  • Publication number: 20200065634
    Abstract: Aspects include receiving a request to perform an image classification task in a target domain. The image classification task includes identifying a feature in images in the target domain. Classification information related to the feature is transferred from a source domain to the target domain. The transferring includes receiving a plurality of pairs of task-irrelevant images that each includes a task-irrelevant image in the source domain and in the target domain. The task-irrelevant image in the source domain has a fixed correspondence to the task-irrelevant image in the target domain. A target neural network is trained to perform the image classification task in the target domain. The training is based on the plurality of pairs of task-irrelevant images. The image classification task is performed in the target domain and includes applying the target neural network to an image in the target domain and outputting an identified feature.
    Type: Application
    Filed: May 11, 2018
    Publication date: February 27, 2020
    Inventors: Ziyan Wu, Kuan-Chuan Peng, Jan Ernst
  • Publication number: 20180330205
    Abstract: Aspects include receiving a request to perform an image classification task in a target domain. The image classification task includes identifying a feature in images in the target domain. Classification information related to the feature is transferred from a source domain to the target domain. The transferring includes receiving a plurality of pairs of task-irrelevant images that each includes a task-irrelevant image in the source domain and in the target domain. The task-irrelevant image in the source domain has a fixed correspondence to the task-irrelevant image in the target domain. A target neural network is trained to perform the image classification task in the target domain. The training is based on the plurality of pairs of task-irrelevant images. The image classification task is performed in the target domain and includes applying the target neural network to an image in the target domain and outputting an identified feature.
    Type: Application
    Filed: September 29, 2017
    Publication date: November 15, 2018
    Inventors: Ziyan Wu, Kuan-Chuan Peng, Jan Ernst
  • Publication number: 20180330194
    Abstract: Embodiments of the present invention provide a computer-implemented method for training an RGB-D classifier for a scene classification task. The method receives task-relevant labeled depth data, task-irrelevant RGB-D data, and a given trained representation in RGB. The method simulates an RGB representation using only the task-irrelevant RGB-D data. The method builds a joint neural network using only the task-irrelevant RGB-D data and the task-relevant labeled depth data.
    Type: Application
    Filed: September 29, 2017
    Publication date: November 15, 2018
    Inventors: Kuan-Chuan Peng, Ziyan Wu, Jan Ernst