Patents by Inventor Ser-Nam Lim

Ser-Nam Lim has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10262236
    Abstract: A system that generates training images for neural networks includes one or more processors configured to receive input representing one or more selected areas in an image mask. The one or more processors are configured to form a labeled masked image by combining the image mask with an unlabeled image of equipment. The one or more processors also are configured to train an artificial neural network using the labeled masked image to one or more of automatically identify equipment damage appearing in one or more actual images of equipment and/or generate one or more training images for training another artificial neural network to automatically identify the equipment damage appearing in the one or more actual images of equipment.
    Type: Grant
    Filed: May 2, 2017
    Date of Patent: April 16, 2019
    Assignee: General Electric Company
    Inventors: Ser Nam Lim, Arpit Jain, David Scott Diwinsky, Sravanthi Bondugula
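The mask-combination step this abstract describes can be sketched as follows. This is a minimal illustration with hypothetical array shapes and names, not General Electric's implementation: a binary mask marking user-selected areas is combined with an unlabeled image to yield a labeled training pair.

```python
import numpy as np

def make_labeled_masked_image(image, mask, damage_label=1):
    """Combine a binary image mask with an unlabeled image to produce a
    labeled training pair: the masked image plus a per-pixel label map.

    image: (H, W) grayscale array of equipment.
    mask:  (H, W) binary array; 1 marks selected 'damage' areas.
    """
    image = np.asarray(image, dtype=float)
    mask = np.asarray(mask, dtype=bool)
    # Per-pixel labels: damage_label where the mask is set, 0 elsewhere.
    labels = np.where(mask, damage_label, 0)
    # Keep original pixels only inside the selected areas.
    masked_image = image * mask
    return masked_image, labels

# Toy example: a 4x4 image with a 2x2 'damaged' region selected.
img = np.arange(16, dtype=float).reshape(4, 4)
msk = np.zeros((4, 4), dtype=int)
msk[1:3, 1:3] = 1
masked, lbl = make_labeled_masked_image(img, msk)
```

Such labeled pairs can then serve as supervised training examples for a damage-detection network, as the abstract outlines.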
  • Publication number: 20190095765
    Abstract: A system includes one or more processors configured to automatically identify different distressed portions in repeating segments of a rotating body. At least one of a size and/or a shape of one or more of the distressed portions changes with respect to time. The one or more processors also are configured to determine a pattern of the different distressed portions in the repeating segments of the rotating body during rotation of the rotating body based on identifying the different distressed portions. The one or more processors also are configured to subsequently automatically identify locations of individual segments of the repeating segments in the rotating body using the pattern of the distressed portions that is determined.
    Type: Application
    Filed: September 25, 2017
    Publication date: March 28, 2019
    Inventors: Ser Nam Lim, David Scott Diwinsky
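One way to realize the segment-location idea above is circular cross-correlation: match an observed per-segment distress sequence against the known pattern to recover the rotation offset. This is a hedged sketch with toy data, not the patented method's actual procedure.

```python
import numpy as np

def locate_segments(pattern, observed):
    """Given a known per-segment distress pattern (e.g. a distress score
    for each repeating segment) and an observed sequence that starts at
    an unknown segment, return the rotation offset that best aligns the
    two, via circular cross-correlation."""
    pattern = np.asarray(pattern, dtype=float)
    observed = np.asarray(observed, dtype=float)
    n = len(pattern)
    # Score every cyclic shift; the best match identifies the offset.
    scores = [np.dot(observed, np.roll(pattern, -k)) for k in range(n)]
    return int(np.argmax(scores))

# Toy rotor with 6 segments; the observed scan starts at segment 2.
known = np.array([0.0, 0.1, 0.9, 0.2, 0.0, 0.5])
seen = np.roll(known, -2)
offset = locate_segments(known, seen)
```

The autocorrelation peak at zero lag guarantees the correct shift wins when the observed sequence is an exact rotation of the pattern; noisy scores would simply blur the peak.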
  • Patent number: 10196922
    Abstract: A method for locating probes within a gas turbine engine may generally include positioning a plurality of location transmitters relative to the engine and inserting a probe through an access port of the engine, wherein the probe includes a probe tip and a location signal receiver configured to receive location-related signals transmitted from the location transmitters. The method may also include determining a current location of the probe tip within the engine based at least in part on the location-related signals and identifying a virtual location of the probe tip within a three-dimensional model of the engine corresponding to the current location of the probe tip within the engine. Moreover, the method may include providing for display the three-dimensional model of the engine, wherein the virtual location of the probe tip is displayed as a visual indicator within the three-dimensional model.
    Type: Grant
    Filed: December 9, 2015
    Date of Patent: February 5, 2019
    Assignee: General Electric Company
    Inventors: David Scott Diwinsky, Ser Nam Lim
  • Patent number: 10196927
    Abstract: A method for locating probes within a gas turbine engine may generally include positioning a plurality of location transmitters relative to the engine and inserting a probe through an access port of the engine, wherein the probe includes a probe tip and a location signal receiver configured to receive location-related signals transmitted from the location transmitters. The method may also include determining a current location of the probe tip within the engine based at least in part on the location-related signals and identifying a virtual location of the probe tip within a three-dimensional model of the engine corresponding to the current location of the probe tip within the engine. Moreover, the method may include providing for display the three-dimensional model of the engine, wherein the virtual location of the probe tip is displayed as a visual indicator within the three-dimensional model.
    Type: Grant
    Filed: February 11, 2016
    Date of Patent: February 5, 2019
    Assignee: General Electric Company
    Inventors: David Scott Diwinsky, Ser Nam Lim
  • Patent number: 10197473
    Abstract: A method for performing a visual inspection of a gas turbine engine may generally include inserting a plurality of optical probes through a plurality of access ports of the gas turbine engine. The access ports may be spaced apart axially along a longitudinal axis of the gas turbine engine such that the optical probes provide internal views of the gas turbine engine from a plurality of different axial locations along the gas turbine engine. The method may also include coupling the optical probes to a computing device, rotating the gas turbine engine about the longitudinal axis as the optical probes are used to simultaneously obtain images of an interior of the gas turbine engine at the different axial locations and receiving, with the computing device, image data associated with the images obtained by each of the optical probes at the different axial locations.
    Type: Grant
    Filed: December 9, 2015
    Date of Patent: February 5, 2019
    Assignee: General Electric Company
    Inventors: David Scott Diwinsky, Ser Nam Lim
  • Patent number: 10191479
    Abstract: A monitoring system for monitoring a plurality of components is provided. The monitoring system includes a plurality of client systems. The plurality of client systems is configured to generate a plurality of component status reports. The plurality of component status reports is associated with the plurality of components. The monitoring system also includes a component wear monitoring (CWM) computer device configured to receive the plurality of component status reports from the plurality of client systems, generate component status information based on a plurality of component status reports, aggregate the component status information to identify a plurality of images associated with a first component, and compare the plurality of images associated with the first component. The plurality of images represents the first component at different points in time. The CWM computer device is also configured to determine a state of the first component based on the comparison.
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: January 29, 2019
    Assignee: General Electric Company
    Inventors: Ser Nam Lim, David Scott Diwinsky, Russell Robert Irving
  • Patent number: 10147021
    Abstract: A novel technique for performing video matting is disclosed, built upon a proposed image matting algorithm that is fully automatic. The disclosed methods utilize a PCA-based shape model as a prior for guiding the matting process, so that the manual interactions required by most existing image matting methods are unnecessary. By applying the image matting algorithm to foreground windows on a per-frame basis, a fully automated video matting process is attainable. The process of aligning the shape model with the object is simultaneously optimized based on a quadratic cost function.
    Type: Grant
    Filed: February 29, 2016
    Date of Patent: December 4, 2018
    Assignee: General Electric Company
    Inventors: Ting Yu, Peter Henry Tu, Xiaoming Liu, Ser Nam Lim
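The PCA-based shape model mentioned above can be sketched with plain linear algebra. This is a toy illustration (hypothetical function names and data), showing only how a mean shape and principal components form a low-dimensional prior over plausible outlines:

```python
import numpy as np

def fit_shape_model(shapes, n_components=2):
    """Fit a PCA shape model to aligned shape vectors (each shape is a
    flattened vector of landmark coordinates). Returns the mean shape
    and the top principal components."""
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    centered = X - mean
    # SVD of the centered data: rows of Vt are principal directions.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean, Vt[:n_components]

def project_shape(shape, mean, components):
    """Project a shape onto the model: mean plus its coefficients along
    the principal components. This constrains a candidate outline to
    the subspace of plausible shapes."""
    coeffs = components @ (np.asarray(shape, dtype=float) - mean)
    return mean + components.T @ coeffs

rng = np.random.default_rng(0)
base = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0])  # unit square
shapes = base + 0.05 * rng.standard_normal((20, 8))
mean, comps = fit_shape_model(shapes, n_components=3)
recon = project_shape(shapes[0], mean, comps)
```

In the patented pipeline such a prior guides the matte toward object-like shapes, removing the need for user-drawn trimaps.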
  • Publication number: 20180341836
    Abstract: A system includes one or more processors and a memory that stores a generative adversarial network (GAN). The one or more processors are configured to receive a low resolution point cloud comprising a set of three-dimensional (3D) data points that represents an object. A generator of the GAN is configured to generate a first set of generated data points based at least in part on one or more characteristics of the data points in the low resolution point cloud, and to interpolate the generated data points into the low resolution point cloud to produce a super-resolved point cloud that represents the object and has a greater resolution than the low resolution point cloud. The one or more processors are further configured to analyze the super-resolved point cloud for detecting one or more of an identity of the object or damage to the object.
    Type: Application
    Filed: May 24, 2017
    Publication date: November 29, 2018
    Inventors: Ser Nam Lim, Jingjing Zheng, Jiajia Luo, David Scott Diwinsky
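The interpolation step described above can be sketched without the GAN: insert a new point between each point and its nearest neighbor. In the patented system the GAN generator proposes the new points; here a plain geometric rule stands in for it, purely for illustration.

```python
import numpy as np

def upsample_point_cloud(points):
    """Naively super-resolve a sparse 3D point cloud by inserting a
    midpoint between each point and its nearest neighbour, doubling
    the number of points."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    new_pts = []
    for i in range(n):
        d = np.linalg.norm(pts - pts[i], axis=1)
        d[i] = np.inf                 # exclude the point itself
        j = int(np.argmin(d))         # nearest neighbour
        new_pts.append((pts[i] + pts[j]) / 2.0)
    return np.vstack([pts, np.array(new_pts)])

cloud = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
dense = upsample_point_cloud(cloud)
```

A learned generator would place points according to the object's local surface statistics rather than simple midpoints, which is what makes the super-resolved cloud useful for identification and damage analysis.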
  • Publication number: 20180342069
    Abstract: A system includes one or more processors configured to analyze obtained image data representing a rotor blade to detect a candidate feature on the rotor blade and determine changes in the size or position of the candidate feature over time. The one or more processors are configured to identify the candidate feature on the rotor blade as a defect feature responsive to the changes in the candidate feature being the same or similar to a predicted progression of the defect feature over time. The predicted progression of the defect feature is determined according to an action-guidance function generated by an artificial neural network via a machine learning algorithm. Responsive to identifying the candidate feature on the rotor blade as the defect feature, the one or more processors are configured to automatically schedule maintenance for the rotor blade, alert an operator, or stop movement of the rotor blade.
    Type: Application
    Filed: May 25, 2017
    Publication date: November 29, 2018
    Inventors: Ser Nam Lim, David Scott Diwinsky, Wei Wang, Swaminathan Sankaranarayanan, Xiao Bian, Arpit Jain
  • Publication number: 20180336674
    Abstract: A method includes obtaining a series of images of a rotating target object through multiple revolutions of the target object. The method includes grouping the images into multiple, different sets of images. The images in each of the different sets depict a common portion of the target object. At least some of the images in each set are obtained during a different revolution of the target object. The method further includes examining the images in at least a first set of the multiple sets of images using an artificial neural network for automated object-of-interest recognition by the artificial neural network.
    Type: Application
    Filed: May 22, 2017
    Publication date: November 22, 2018
    Inventors: Ser Nam Lim, Xiao Bian, David Scott Diwinsky
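The grouping step above has a simple core: frames taken at the same angular position in different revolutions go in the same set. A minimal sketch, assuming a constant rotation rate and a known frame count per revolution (both hypothetical simplifications):

```python
def group_rotation_images(num_images, images_per_revolution):
    """Group frame indices from a video of a rotating part so that each
    group contains the frames depicting the same angular portion of the
    part across successive revolutions."""
    groups = {}
    for idx in range(num_images):
        portion = idx % images_per_revolution   # angular position
        groups.setdefault(portion, []).append(idx)
    return groups

# 12 frames, 4 frames per revolution -> 4 groups spanning 3 revolutions.
g = group_rotation_images(12, 4)
```

Each group then gives the neural network several views of the same portion under different lighting and motion conditions, which is what makes the per-set examination robust.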
  • Publication number: 20180336454
    Abstract: The systems and methods herein relate to artificial neural networks. The systems and methods examine an input image having a plurality of instances using an artificial neural network, and generate an affinity graph based on the input image. The affinity graph is configured to indicate positions of the instances within the input image. The systems and methods further identify a number of instances of the input image by clustering the instances based on the affinity graph.
    Type: Application
    Filed: May 19, 2017
    Publication date: November 22, 2018
    Inventors: Ser Nam Lim, Xiao Bian, Wei-Chih Hung, David Scott Diwinsky
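The clustering-over-an-affinity-graph idea can be illustrated with connected components: positions whose pairwise affinity exceeds a threshold are linked, and each component counts as one instance. A hedged sketch with toy 2-D positions, not the network's learned affinities:

```python
import numpy as np

def count_instances(positions, affinity_threshold=1.5):
    """Count object instances by clustering positions through an
    affinity graph: positions closer than the threshold are connected,
    and each connected component is one instance."""
    pos = np.asarray(positions, dtype=float)
    n = len(pos)
    # Affinity graph as a boolean adjacency matrix.
    dists = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    adj = dists < affinity_threshold
    # Connected components via iterative flood fill.
    labels = [-1] * n
    n_clusters = 0
    for start in range(n):
        if labels[start] != -1:
            continue
        stack = [start]
        while stack:
            i = stack.pop()
            if labels[i] != -1:
                continue
            labels[i] = n_clusters
            stack.extend(j for j in range(n) if adj[i, j] and labels[j] == -1)
        n_clusters += 1
    return n_clusters, labels

# Two tight clusters of points -> two instances.
pts = [(0, 0), (0.5, 0), (0, 0.5), (10, 10), (10.5, 10)]
k, lab = count_instances(pts)
```

In the patented system the affinity graph comes from the neural network rather than raw distances, but the count-by-clustering step is the same in spirit.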
  • Publication number: 20180322366
    Abstract: A system that generates training images for neural networks includes one or more processors configured to receive input representing one or more selected areas in an image mask. The one or more processors are configured to form a labeled masked image by combining the image mask with an unlabeled image of equipment. The one or more processors also are configured to train an artificial neural network using the labeled masked image to one or more of automatically identify equipment damage appearing in one or more actual images of equipment and/or generate one or more training images for training another artificial neural network to automatically identify the equipment damage appearing in the one or more actual images of equipment.
    Type: Application
    Filed: May 2, 2017
    Publication date: November 8, 2018
    Inventors: Ser Nam Lim, Arpit Jain, David Scott Diwinsky, Sravanthi Bondugula
  • Publication number: 20180307894
    Abstract: Systems and methods relating to artificial neural networks are provided. The systems and methods obtain a teacher network that includes artificial neural layers configured to automatically identify one or more objects in an image examined by the artificial neural layers, receive a set of task images at the teacher network, examine the set of task images with the teacher network, identify a subset of the artificial neural layers that are utilized during examination of the set of task images with the teacher network, and define a student network based on the set of task images. The student network is configured to automatically identify one or more objects in an image examined by the subset.
    Type: Application
    Filed: April 21, 2017
    Publication date: October 25, 2018
    Inventors: Ser Nam Lim, David Scott Diwinsky, Xiao Bian
  • Publication number: 20180293734
    Abstract: A generative adversarial network (GAN) system includes a generator neural sub-network configured to receive one or more images depicting one or more objects. The generator neural sub-network also is configured to generate a foreground image and a background image based on the one or more images that are received, the generator neural sub-network configured to combine the foreground image with the background image to form a consolidated image. The GAN system also includes a discriminator neural sub-network configured to examine the consolidated image and determine whether the consolidated image depicts at least one of the objects. The generator neural sub-network is configured to one or more of provide the consolidated image or generate an additional image as a training image used to train another neural network to automatically identify the one or more objects in one or more other images.
    Type: Application
    Filed: April 6, 2017
    Publication date: October 11, 2018
    Inventors: Ser Nam Lim, David Diwinsky, Yen-Liang Lin, Xiao Bian
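The generator's combine step, forming a consolidated image from a foreground and a background, is essentially alpha compositing. A minimal sketch (toy arrays, hypothetical names; the patented generator produces the inputs with neural sub-networks):

```python
import numpy as np

def composite(foreground, background, alpha):
    """Combine a generated foreground with a generated background into
    one consolidated image. alpha is a per-pixel foreground opacity
    mask in [0, 1]: 1 keeps the foreground, 0 keeps the background."""
    fg = np.asarray(foreground, dtype=float)
    bg = np.asarray(background, dtype=float)
    a = np.asarray(alpha, dtype=float)
    return a * fg + (1.0 - a) * bg

fg = np.full((2, 2), 255.0)     # white object
bg = np.zeros((2, 2))           # black background
a = np.array([[1.0, 0.0], [0.5, 0.0]])
img = composite(fg, bg, a)
```

The discriminator then judges whether such composites depict a real object, pushing the generator toward foreground/background pairs that blend plausibly.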
  • Publication number: 20180286034
    Abstract: A generative adversarial network (GAN) system includes a generator sub-network configured to examine one or more images of actual damage to equipment. The generator sub-network also is configured to create one or more images of potential damage based on the one or more images of actual damage that were examined. The GAN system also includes a discriminator sub-network configured to examine the one or more images of potential damage to determine whether the one or more images of potential damage represent progression of the actual damage to the equipment.
    Type: Application
    Filed: April 3, 2017
    Publication date: October 4, 2018
    Inventors: Ser Nam Lim, Arpit Jain, David Diwinsky, Sravanthi Bondugula, Yen-Liang Lin, Xiao Bian
  • Publication number: 20180286055
    Abstract: A generative adversarial network (GAN) system includes a generator sub-network configured to examine images of an object moving relative to a viewer of the object. The generator sub-network also is configured to generate one or more distribution-based images based on the images that were examined. The system also includes a discriminator sub-network configured to examine the one or more distribution-based images to determine whether the one or more distribution-based images accurately represent the object. A predicted optical flow of the object is represented by relative movement of the object as shown in the one or more distribution-based images.
    Type: Application
    Filed: April 4, 2017
    Publication date: October 4, 2018
    Inventors: Ser Nam Lim, Mustafa Devrim Kaba, Mustafa Uzunbas, David Diwinsky
  • Publication number: 20180253866
    Abstract: A method includes determining object class probabilities of pixels in a first input image by examining the first input image in a forward propagation direction through layers of artificial neurons of an artificial neural network. The object class probabilities indicate likelihoods that the pixels represent different types of objects in the first input image.
    Type: Application
    Filed: April 24, 2017
    Publication date: September 6, 2018
    Inventors: Arpit Jain, Swaminathan Sankaranarayanan, David Scott Diwinsky, Ser Nam Lim, Kari Thompson
  • Patent number: 10060857
    Abstract: A system includes one or more processors configured to create a projection matrix based on a three-dimensional (3D) model of a part and sensor data associated with a workpiece in a workspace of a robotic manipulator. The projection matrix provides a mapping between sensor coordinates associated with the sensor data and 3D coordinates associated with the 3D model. The one or more processors are configured to identify a set of sensor coordinates from the sensor data corresponding to a feature indication associated with the workpiece, and to determine from the set of sensor coordinates a set of 3D coordinates using the projection matrix.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: August 28, 2018
    Assignee: General Electric Company
    Inventors: Steeves Bouchard, Stephane Harel, John Karigiannis, Nicolas Saudrais, David Cantin, Ser Nam Lim, Maxime Beaudoin Pouliot, Jean-Philippe Choiniere
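The projection-matrix mapping between sensor coordinates and 3D model coordinates can be sketched with homogeneous coordinates. This toy example uses an idealized pinhole camera matrix (an assumption; the patented system builds its matrix from a 3D part model and live sensor data):

```python
import numpy as np

def project_to_sensor(P, point_3d):
    """Map a 3D model coordinate to 2D sensor (pixel) coordinates with
    a 3x4 projection matrix P, using homogeneous coordinates:
    [u, v, w] = P @ [x, y, z, 1], then divide by w."""
    x = np.append(np.asarray(point_3d, dtype=float), 1.0)
    u, v, w = P @ x
    return np.array([u / w, v / w])

# Toy pinhole camera: focal length 2, principal point (0, 0),
# looking down the +z axis.
P = np.array([[2.0, 0.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
uv = project_to_sensor(P, [1.0, 2.0, 4.0])
```

Going the other direction, from a sensor-coordinate feature indication back to 3D coordinates as the abstract describes, requires inverting this mapping along a ray and intersecting it with the part model.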
  • Publication number: 20180117760
    Abstract: The present disclosure is directed to a computer-implemented method of sensor planning for acquiring samples via an apparatus including one or more sensors. The computer-implemented method includes defining, by one or more computing devices, an area of interest; identifying, by the one or more computing devices, one or more sensing parameters for the one or more sensors; determining, by the one or more computing devices, a sampling combination for acquiring a plurality of samples by the one or more sensors based at least in part on the one or more sensing parameters; and providing, by the one or more computing devices, one or more command control signals to the apparatus including the one or more sensors to acquire the plurality of samples of the area of interest using the one or more sensors based at least on the sampling combination.
    Type: Application
    Filed: November 3, 2016
    Publication date: May 3, 2018
    Inventors: Ser Nam Lim, David Scott Diwinsky, Xiao Bian, Wayne Ray Grady, Mustafa Gokhan Uzunbas, Mustafa Devrim Kaba
  • Patent number: 9886754
    Abstract: A method for detecting a missing tooth in a mining shovel, implemented using a processing device, includes receiving a pair of image frames from a camera disposed on a rope mine shovel configured to carry a mining load. A tooth line region corresponding to the pair of image frames is detected to generate a pair of tooth line regions based on a shovel template set. A difference image is determined based on the pair of image frames and the pair of tooth line regions. Further, a response map representative of possible tooth positions is determined based on the difference image using a tooth template matching technique. A tooth line is selected among a plurality of candidate tooth lines based on the response map. Further, a tooth condition is determined based on the tooth line and the difference image. The tooth condition is notified to an operator of the rope mine shovel.
    Type: Grant
    Filed: April 5, 2016
    Date of Patent: February 6, 2018
    Assignee: General Electric Company
    Inventors: Ser Nam Lim, Ning Zhou, Joao Vitor Baldini Soares