Patents by Inventor Ziyan Wu

Ziyan Wu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10901740
Abstract: A system and method for generating realistic depth images by enhancing simulated images rendered from a 3D model, include a rendering engine configured to render noiseless 2.5D images by rendering various poses with respect to a target 3D CAD model, a noise transfer engine configured to apply realistic noise to the noiseless 2.5D images, and a background transfer engine configured to add pseudo-realistic scene-dependent backgrounds to the noiseless 2.5D images. The noise transfer engine is configured to learn noise transfer based on a mapping, by a first generative adversarial network (GAN), of the noiseless 2.5D images to real 2.5D scans generated by a targeted sensor. The background transfer engine is configured to learn background generation based on a processing, by a second GAN, of output data of the first GAN as input data and corresponding real 2.5D scans as target data.
    Type: Grant
    Filed: August 7, 2018
    Date of Patent: January 26, 2021
    Assignee: Siemens Aktiengesellschaft
    Inventors: Benjamin Planche, Ziyan Wu
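Structurally, the two-stage enhancement the abstract above describes can be sketched as a pipeline (a minimal illustration in plain Python; the transform functions below stand in for the trained GAN generators and are illustrative assumptions, not the patented implementation):

```python
import random

def render_noiseless(width, height, depth=1.0):
    """Stand-in for the rendering engine: a flat synthetic 2.5D depth map."""
    return [[depth for _ in range(width)] for _ in range(height)]

def noise_transfer(image, sigma=0.05, seed=0):
    """Stand-in for the first GAN: perturb each depth value with sensor-like noise."""
    rng = random.Random(seed)
    return [[d + rng.gauss(0.0, sigma) for d in row] for row in image]

def background_transfer(image, background_depth=5.0):
    """Stand-in for the second GAN: fill zero-depth (empty) pixels with a background."""
    return [[background_depth if d == 0.0 else d for d in row] for row in image]

def enhance(width, height):
    """Compose the stages: render -> noise transfer -> background transfer."""
    return background_transfer(noise_transfer(render_noiseless(width, height)))

enhanced = enhance(4, 3)
```

The point of the composition is the training order described in the abstract: the second stage consumes the first stage's output as its input.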
  • Patent number: 10887558
    Abstract: Methods and systems for automatically setting up a sensor connected to an apparatus. For example, a computer-implemented method for automatically setting up a sensor connected to an apparatus includes: receiving a sensor-connection signal corresponding to a connection established between the sensor and the apparatus; determining whether a streaming microservice corresponding to the sensor has been downloaded onto the apparatus; if the streaming microservice has not been downloaded onto the apparatus, determining whether the streaming microservice corresponding to the sensor is supported by the apparatus; if the streaming microservice is supported by the apparatus, downloading a streaming microservice docker from a docker registry, the streaming microservice docker including the streaming microservice and a driver corresponding to the sensor; and deploying the streaming microservice with the driver corresponding to the sensor.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: January 5, 2021
    Assignee: Shanghai United Imaging Intelligence Co., Ltd.
    Inventors: Arun Innanje, Abhishek Sharma, Ziyan Wu, Terrence Chen
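The decision flow in the abstract above (check local cache, check support, pull the microservice bundle, deploy) can be sketched as follows; the function and dictionary names are hypothetical, chosen only to mirror the claimed steps:

```python
def setup_sensor(sensor_id, downloaded, supported, registry):
    """Sketch of the auto-setup flow: check whether the streaming
    microservice is already on the apparatus, check support, pull the
    microservice-plus-driver bundle from the registry, then deploy."""
    if sensor_id in downloaded:
        return ("deploy", downloaded[sensor_id])
    if sensor_id not in supported:
        return ("unsupported", None)
    package = registry[sensor_id]      # pull microservice + driver bundle
    downloaded[sensor_id] = package    # cache locally on the apparatus
    return ("deploy", package)

registry = {"cam-01": {"microservice": "stream-svc", "driver": "cam-01-drv"}}
result = setup_sensor("cam-01", downloaded={}, supported={"cam-01"}, registry=registry)
```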
  • Publication number: 20200356854
Abstract: Systems, methods, and computer-readable media are described for performing weakly supervised semantic segmentation of input images that utilizes self-guidance on attention maps during training to cause a guided attention inference network (GAIN) to focus attention on an object in an input image in a holistic manner rather than only on the most discriminative parts of the image. The self-guidance is provided jointly by a classification loss function and an attention mining loss function. Extra supervision can also be provided by using a select number of pixel-level labeled input images to enhance the semantic segmentation capabilities of the GAIN.
    Type: Application
    Filed: October 9, 2018
    Publication date: November 12, 2020
    Inventors: Kunpeng Li, Ziyan Wu, Kuan-Chuan Peng, Jan Ernst
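The joint self-guidance objective above can be sketched in a few lines; the exact form of the attention-mining term (mean class score on inputs with the attended region masked out) and the weight `alpha` are assumptions for illustration, not the claimed formulation:

```python
def attention_mining_loss(masked_scores):
    # The class score on inputs with the attended region masked out should
    # be low if the attention map already covers the whole object.
    return sum(masked_scores) / len(masked_scores)

def gain_loss(cls_loss, masked_scores, alpha=1.0):
    # Joint self-guidance: classification loss plus the attention-mining
    # loss, weighted by alpha.
    return cls_loss + alpha * attention_mining_loss(masked_scores)

total = gain_loss(cls_loss=0.4, masked_scores=[0.2, 0.1, 0.3], alpha=1.0)
```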
  • Publication number: 20200355946
    Abstract: An electrophoretic medium includes a fluid, a plurality of light scattering charged particles having a first polarity, and a first, second, and third set of particles, each set having a color different from each other set. The first and second particles may have a second polarity opposite to the first polarity, and the mobility of the third set of particles is less than half of the mobility of the light scattering particles, the first set of charged particles, and the second set of charged particles.
    Type: Application
    Filed: May 6, 2020
    Publication date: November 12, 2020
    Inventors: Stephen J. TELFER, Eugene BZOWEJ, Kenneth R. CROUNSE, John L. MARSHALL, Brandon MACDONALD, Ziyan WU, Lee YEZEK
  • Publication number: 20200341342
    Abstract: An electrophoretic particle including a pigment particle having an intermediate residue covalently bonded to the surface of the pigment particle and a polymer bonded to the intermediate residue. The polymer may be derived from one or more types of monomers and at least one of the monomers is a substituted or unsubstituted styrene monomer including halogenated styrenic monomers. The electrophoretic particle may be used in an electrophoretic medium including a dispersion of the particles in a suspending fluid. The electrophoretic medium may be incorporated into an electro-optic display.
    Type: Application
    Filed: April 9, 2020
    Publication date: October 29, 2020
    Inventors: Ziyan WU, David Darrell MILLER
  • Publication number: 20200334519
Abstract: A method for learning image representations comprises receiving a pair of images, generating a set of candidate patches in each image, identifying features in each patch, arranging the patches in pairs and comparing a distance between a feature in the first image to a feature in the second image. The pair of patches is labeled as positive or negative based on the comparison of the measured distance to a threshold. Images may be depth images and distance is determined by projecting the features into three-dimensional space. A system for learning representations includes a computer processor configured to provide a pair of images to a Siamese convolutional neural network to generate candidate patches in each image. A sampling layer arranges the patches in pairs and measures distances between features in the patches. Each pair is labeled as positive or negative according to the comparison of the distance to a threshold.
    Type: Application
    Filed: January 11, 2018
    Publication date: October 22, 2020
    Inventors: Georgios Georgakis, Srikrishna Karanam, Varun Manjunatha, Kuan-Chuan Peng, Ziyan Wu, Jan Ernst
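The pair-labeling step in the abstract above reduces to a distance threshold on features projected into 3D. A minimal sketch (the coordinates and threshold are illustrative assumptions):

```python
def label_pair(feature_a, feature_b, threshold):
    """Label a patch pair positive when its features, projected into 3D,
    lie within `threshold` of each other (Euclidean distance)."""
    dist = sum((a - b) ** 2 for a, b in zip(feature_a, feature_b)) ** 0.5
    return "positive" if dist <= threshold else "negative"

# Hypothetical 3D-projected feature locations from two depth images.
p1 = (0.10, 0.20, 1.50)
p2 = (0.12, 0.21, 1.52)
p3 = (0.90, 0.10, 2.00)
labels = [label_pair(p1, p2, threshold=0.05), label_pair(p1, p3, threshold=0.05)]
```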
  • Patent number: 10803619
Abstract: A method for identifying a feature in a first image comprises establishing an initial database of image triplets, and in a pose estimation processor, training a deep learning neural network using the initial database of image triplets, calculating a pose for the first image using the deep learning neural network, comparing the calculated pose to a validation database populated with image data to identify an error case in the deep learning neural network, creating a new set of training data including a plurality of error cases identified in a plurality of input images and retraining the deep learning neural network using the new set of training data. The deep learning neural network may be iteratively retrained with a series of new training data sets. Statistical analysis is performed on a plurality of error cases to select a subset of the error cases included in the new set of training data.
    Type: Grant
    Filed: March 13, 2017
    Date of Patent: October 13, 2020
    Assignee: Siemens Mobility GmbH
    Inventors: Kai Ma, Shanhui Sun, Stefan Kluckner, Ziyan Wu, Terrence Chen, Jan Ernst
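The iterative retraining described in the abstract above is a train/validate/mine-errors loop. A toy sketch in plain Python, where the one-line `train`, `validate`, and `select_errors` callables stand in for the neural-network training, validation-database comparison, and statistical error selection (all toy assumptions):

```python
def retrain_loop(train, validate, select_errors, data, val_set, rounds=3):
    """Sketch of iterative retraining: train, find error cases on the
    validation set, fold a selected subset back into the training data."""
    model = train(data)
    for _ in range(rounds):
        errors = validate(model, val_set)
        if not errors:
            break                       # no remaining error cases
        data = data + select_errors(errors)
        model = train(data)
    return model, data

train = lambda data: max(data)                      # toy "model": a threshold
validate = lambda model, val: [v for v in val if v > model]
select_errors = lambda errs: sorted(errs)[-2:]      # keep the hardest cases

model, data = retrain_loop(train, validate, select_errors, [1, 2, 3], [5, 4, 2])
```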
  • Patent number: 10782668
    Abstract: A system and method is disclosed for development of a control application for a controller of an automation system. The controller receives sensor signals associated with perception of a first real component during an execution of the control application program. Activity of a virtual component, including interaction with the real first component, is simulated, the virtual component being a digital twin of a second real component designed for the work environment and absent in the work environment. Virtual data is produced in response to the simulated activity of the virtual component. A control application module determines parameters for development of the control application program using the sensor signals for the first real component and the virtual data. An AR display signal for the work environment is rendered and displayed based on a digital representation of the virtual data during an execution of the control application program.
    Type: Grant
    Filed: March 12, 2018
    Date of Patent: September 22, 2020
    Assignee: Siemens Aktiengesellschaft
    Inventors: Lingyun Wang, Hasan Sinan Bank, Mareike Kritzler, Phani Ram Kumar Kuruganty, Naveen Kumar Singa, Ziyan Wu
  • Publication number: 20200294201
    Abstract: A method of removing noise from a depth image includes presenting real-world depth images in real-time to a first generative adversarial neural network (GAN), the first GAN being trained by synthetic images generated from computer assisted design (CAD) information of at least one object to be recognized in the real-world depth image. The first GAN subtracts the background in the real-world depth image and segments the foreground in the real-world depth image to produce a cleaned real-world depth image. Using the cleaned image, an object of interest in the real-world depth image can be identified via the first GAN trained with synthetic images and the cleaned real-world depth image. In an embodiment the cleaned real-world depth image from the first GAN is provided to a second GAN that provides additional noise cancellation and recovery of features removed by the first GAN.
    Type: Application
    Filed: November 3, 2017
    Publication date: September 17, 2020
    Inventors: Benjamin Planche, Sergey Zakharov, Ziyan Wu, Slobodan Ilic
  • Publication number: 20200268339
Abstract: The present disclosure relates to a method for scanning a patient in an imaging system. The imaging system may include one or more cameras directed at the patient. The method may include obtaining a position of each of the camera(s) relative to the imaging system. The method may also include obtaining image data of the patient captured by the camera(s), wherein the image data may correspond to a first view with respect to the patient. The method may further include generating projection image data of the patient based on the image data and the position of each of the camera(s) relative to the imaging system, wherein the projection image data may correspond to a second view with respect to the patient different from the first view. The method may further include generating control information for scanning the patient based on the projection image data of the patient.
    Type: Application
    Filed: April 9, 2020
    Publication date: August 27, 2020
    Applicant: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
    Inventors: Weiqiang HAO, Zhuobiao HE, Mingchao WANG, Yining WANG, Ziyan WU
  • Publication number: 20200268251
    Abstract: The present disclosure relates to systems and methods for scanning a patient in an imaging system. The imaging system may include at least one camera directed at the patient. The systems and methods may obtain a plurality of images of the patient that are captured by the at least one camera. Each of the plurality of images may correspond to one of a series of time points. The systems and methods may also determine a motion of the patient over the series of time points based on the plurality of images of the patient. The systems and methods may further determine whether the patient is ready for scan based on the motion of the patient, and generate control information of the imaging system for scanning the patient in response to determining that the patient is ready for scan.
    Type: Application
    Filed: April 9, 2020
    Publication date: August 27, 2020
    Applicant: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
    Inventors: Weiqiang HAO, Zhuobiao HE, Mingchao WANG, Yining WANG, Yimo GUO, Srikrishna KARANAM, Ziyan WU
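The readiness check in the abstract above (motion over a series of time points, then a threshold decision) can be sketched with simple frame differencing; the tolerance value and flat-list frame representation are illustrative assumptions:

```python
def patient_motion(frames):
    """Total frame-to-frame change over a series of time points
    (each frame a flat list of pixel intensities)."""
    return sum(
        sum(abs(a - b) for a, b in zip(prev, cur))
        for prev, cur in zip(frames, frames[1:])
    )

def ready_for_scan(frames, tolerance=1.0):
    """The patient is considered ready when observed motion stays below tolerance."""
    return patient_motion(frames) < tolerance

still = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
moving = [[10, 10, 10], [10, 50, 10], [10, 10, 50]]
```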
  • Publication number: 20200248054
    Abstract: Electro-optic assemblies and related materials (e.g., adhesive) for use therein are generally provided. The electro-optic assembly comprises a hybrid adhesive layer comprising two or more adhesive materials including a polyurethane adhesive material and a polyacrylate adhesive material. The polyurethane adhesive material includes an end-capping cyclic carbonate. In some embodiments, the adhesive layer is formed by curing the two adhesive materials under two different sets of conditions, comprising two or more curing steps.
    Type: Application
    Filed: April 17, 2020
    Publication date: August 6, 2020
    Inventors: Eugene BZOWEJ, David Darrell MILLER, Ziyan WU
  • Publication number: 20200239695
    Abstract: Quinacridone pigments that are surface-functionalized with glycidyl methacrylate, maleic anhydride, or 4-methacryloxyethyl trimellitic anhydride to create a functionalized pigment. The functional groups are then activated to bond hydrophobic polymers, thereby coating the pigment with the hydrophobic polymers. The quinacridone pigments can be used for a variety of applications. They are well-suited for use in electro-optic materials, such as electrophoretic media for use in electrophoretic displays.
    Type: Application
    Filed: April 13, 2020
    Publication date: July 30, 2020
    Inventors: Ziyan WU, Jason D. FEICK
  • Publication number: 20200242340
    Abstract: A method for enhancing facial/object recognition includes receiving a query image, and providing a database of object images, including images relevant to the query image, each image having a first attribute and a second attribute with each of the first attribute and the second attribute having a first state and a second state. The method also includes creating an augmented database by generating a plurality of artificial images for each image in the database, the artificial images cooperating with the image to define a set of images including every combination of the first attribute and the second attribute in each of the first state and the second state, and comparing the query image to the images in the augmented database to find one or more matches.
    Type: Application
    Filed: October 23, 2018
    Publication date: July 30, 2020
    Inventors: Yunye Gong, Srikrishna Karanam, Ziyan Wu, Kuan-Chuan Peng, Jan Ernst
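The augmentation step above enumerates every combination of attribute states for each database image. A minimal sketch using a Cartesian product (the attribute names and states are hypothetical examples):

```python
from itertools import product

def augment(image_name, attributes):
    """For one database image, enumerate every combination of attribute
    states, standing in for the generated artificial images."""
    names = sorted(attributes)
    return [
        dict(zip(names, states), image=image_name)
        for states in product(*(attributes[n] for n in names))
    ]

# Hypothetical attributes: two attributes, each with two states.
augmented = augment("face_001", {"glasses": ["on", "off"], "pose": ["front", "side"]})
```

With two binary attributes this yields four entries per image, matching the "every combination" requirement in the abstract.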
  • Publication number: 20200226735
Abstract: A system and method for visual anomaly localization in a test image includes generating, in a plurality of scaled iterations, attention maps for a test image using a classifier network trained with image-level labels. A current attention map is generated using an inversion of the classifier network on a condition that a forward pass of the test image in the classifier network detects a first class. One or more attention regions of the current attention map may be extracted and resized as a sub-image. For each scaled iteration, extraction of one or more regions of a current attention map is performed on a condition that the current attention map is significantly different from the preceding attention map.
    Type: Application
    Filed: March 16, 2018
    Publication date: July 16, 2020
    Inventors: Kuan-Chuan PENG, Ziyan WU, Jan ERNST
  • Patent number: 10672115
    Abstract: Systems and methods are disclosed for processing an image to detect anomalous pixels. An image classification is received from a trained convolutional neural network (CNN) for an input image with a positive classification being defined to represent detection of an anomaly in the image and a negative classification being defined to represent absence of an anomaly. A backward propagation analysis of the input image for each layer of the CNN generates an attention mapping that includes a positive attention map and a negative attention map. A positive mask is generated based on intensity thresholds of the positive attention map and a negative mask is generated based on intensity thresholds of the negative attention map. An image of segmented anomalous pixels is generated based on an aggregation of the positive mask and the negative mask.
    Type: Grant
    Filed: December 6, 2017
    Date of Patent: June 2, 2020
    Assignee: Siemens Corporation
    Inventors: Rameswar Panda, Ziyan Wu, Arun Innanje, Ramesh Nair, Ti-chiun Chang, Jan Ernst
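The mask-and-aggregate step in the abstract above can be sketched with plain thresholding; the specific aggregation rule here (a pixel is anomalous when the positive mask fires and the negative mask does not) and the threshold levels are assumptions for illustration, not the claimed method:

```python
def threshold_mask(attention_map, level):
    """Binary mask: 1 where attention intensity exceeds `level`."""
    return [[1 if v > level else 0 for v in row] for row in attention_map]

def segment_anomalies(pos_map, neg_map, pos_level=0.5, neg_level=0.5):
    """Aggregate the positive and negative masks into a segmentation of
    anomalous pixels."""
    pos = threshold_mask(pos_map, pos_level)
    neg = threshold_mask(neg_map, neg_level)
    return [
        [p & (1 - n) for p, n in zip(prow, nrow)]
        for prow, nrow in zip(pos, neg)
    ]

seg = segment_anomalies([[0.9, 0.2], [0.8, 0.1]], [[0.1, 0.9], [0.7, 0.2]])
```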
  • Publication number: 20200167161
Abstract: A system and method for generating realistic depth images by enhancing simulated images rendered from a 3D model, include a rendering engine configured to render noiseless 2.5D images by rendering various poses with respect to a target 3D CAD model, a noise transfer engine configured to apply realistic noise to the noiseless 2.5D images, and a background transfer engine configured to add pseudo-realistic scene-dependent backgrounds to the noiseless 2.5D images. The noise transfer engine is configured to learn noise transfer based on a mapping, by a first generative adversarial network (GAN), of the noiseless 2.5D images to real 2.5D scans generated by a targeted sensor. The background transfer engine is configured to learn background generation based on a processing, by a second GAN, of output data of the first GAN as input data and corresponding real 2.5D scans as target data.
    Type: Application
    Filed: August 7, 2018
    Publication date: May 28, 2020
    Inventors: Benjamin Planche, Ziyan Wu
  • Patent number: 10662334
    Abstract: Quinacridone pigments that are surface-functionalized with glycidyl methacrylate, maleic anhydride, or 4-methacryloxyethyl trimellitic anhydride to create a functionalized pigment. The functional groups are then activated to bond hydrophobic polymers, thereby coating the pigment with the hydrophobic polymers. The quinacridone pigments can be used for a variety of applications. They are well-suited for use in electro-optic materials, such as electrophoretic media for use in electrophoretic displays.
    Type: Grant
    Filed: January 11, 2019
    Date of Patent: May 26, 2020
    Assignee: E Ink Corporation
    Inventors: Ziyan Wu, Jason D. Feick
  • Patent number: 10662354
Abstract: Electro-optic assemblies and related materials (e.g., adhesive) for use therein are generally provided. The adhesive layer may comprise an end-capped polyurethane. Some adhesive layers comprise two or more reactive functional groups (e.g., reactive functional groups configured to react with one or more curing species such that, for example, at least one of the two or more functional groups forms a crosslink). The adhesive may also comprise a chain-extending reagent that includes one or more reactive functional groups. In some embodiments, the adhesive is cured by reacting one or more reactive functional groups with one or more curing species. Curing the adhesive may comprise two or more curing steps. In some embodiments the adhesive layer may comprise one or more cross-linkers.
    Type: Grant
    Filed: November 26, 2018
    Date of Patent: May 26, 2020
    Assignee: E Ink Corporation
    Inventors: Eugene Bzowej, David Darrell Miller, Ziyan Wu
  • Publication number: 20200065634
    Abstract: Aspects include receiving a request to perform an image classification task in a target domain. The image classification task includes identifying a feature in images in the target domain. Classification information related to the feature is transferred from a source domain to the target domain. The transferring includes receiving a plurality of pairs of task-irrelevant images that each includes a task-irrelevant image in the source domain and in the target domain. The task-irrelevant image in the source domain has a fixed correspondence to the task-irrelevant image in the target domain. A target neural network is trained to perform the image classification task in the target domain. The training is based on the plurality of pairs of task-irrelevant images. The image classification task is performed in the target domain and includes applying the target neural network to an image in the target domain and outputting an identified feature.
    Type: Application
    Filed: May 11, 2018
    Publication date: February 27, 2020
    Inventors: Ziyan Wu, Kuan-Chuan Peng, Jan Ernst