Patents by Inventor Ming-Yu Liu

Ming-Yu Liu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190340728
    Abstract: A superpixel sampling network utilizes a neural network coupled to a differentiable simple linear iterative clustering component to determine pixel-superpixel associations from a set of pixel features output by the neural network. The superpixel sampling network computes updated superpixel centers and final pixel-superpixel associations over a number of iterations.
    Type: Application
    Filed: September 13, 2018
    Publication date: November 7, 2019
    Inventors: Varun Jampani, Deqing Sun, Ming-Yu Liu, Jan Kautz
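The iterative association/update loop described in this abstract can be sketched in miniature. This is a toy soft-clustering pass over plain 2-D points with hypothetical function names, not the patented network; in the actual system the pixel features come from a neural network and the clustering is differentiable SLIC.

```python
import math

def soft_associations(pixels, centers):
    """Soft pixel-superpixel associations from negative squared distances."""
    assoc = []
    for p in pixels:
        weights = [math.exp(-((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2))
                   for c in centers]
        total = sum(weights)
        assoc.append([w / total for w in weights])
    return assoc

def update_centers(pixels, assoc, k):
    """Recompute each superpixel center as the association-weighted mean."""
    centers = []
    for j in range(k):
        wsum = sum(a[j] for a in assoc)
        cx = sum(a[j] * p[0] for p, a in zip(pixels, assoc)) / wsum
        cy = sum(a[j] * p[1] for p, a in zip(pixels, assoc)) / wsum
        centers.append((cx, cy))
    return centers

def superpixel_sampling(pixels, centers, iterations=5):
    """Alternate association and center updates for a fixed iteration count."""
    for _ in range(iterations):
        assoc = soft_associations(pixels, centers)
        centers = update_centers(pixels, assoc, len(centers))
    return assoc, centers
```

Each iteration mirrors the abstract's two steps: compute pixel-superpixel associations, then update the superpixel centers from them.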
  • Patent number: 10467763
    Abstract: A method, computer readable medium, and system are disclosed for estimating optical flow between two images. A first pyramidal set of features is generated for a first image and a partial cost volume for a level of the first pyramidal set of features is computed, by a neural network, using features at the level of the first pyramidal set of features and warped features extracted from a second image, where the partial cost volume is computed across a limited range of pixels that is less than a full resolution of the first image, in pixels, at the level. The neural network processes the features and the partial cost volume to produce a refined optical flow estimate for the first image and the second image.
    Type: Grant
    Filed: August 12, 2019
    Date of Patent: November 5, 2019
    Assignee: NVIDIA Corporation
    Inventors: Deqing Sun, Xiaodong Yang, Ming-Yu Liu, Jan Kautz
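The "partial cost volume" idea above — correlating features only across a limited displacement range rather than the full image — can be illustrated on a 1-D toy signal. The function name and data layout are assumptions for illustration, not the patented implementation:

```python
def partial_cost_volume(feat1, feat2, max_disp=2):
    """Correlate feat1[x] with feat2[x + d] for displacements |d| <= max_disp.

    feat1, feat2: lists of per-pixel feature vectors (lists of floats).
    Returns cost[x][i], where i indexes displacements -max_disp..max_disp.
    Out-of-range displacements get a cost of 0.0.
    """
    n = len(feat1)
    cost = []
    for x in range(n):
        row = []
        for d in range(-max_disp, max_disp + 1):
            if 0 <= x + d < n:
                # Feature correlation: dot product of the two feature vectors.
                row.append(sum(a * b for a, b in zip(feat1[x], feat2[x + d])))
            else:
                row.append(0.0)
        cost.append(row)
    return cost
```

Restricting the search range keeps the cost volume small; in the patent this is done per pyramid level, with the second image's features warped by the current flow estimate before correlation.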
  • Patent number: 10424069
    Abstract: A method, computer readable medium, and system are disclosed for estimating optical flow between two images. A first pyramidal set of features is generated for a first image and a partial cost volume for a level of the first pyramidal set of features is computed, by a neural network, using features at the level of the first pyramidal set of features and warped features extracted from a second image, where the partial cost volume is computed across a limited range of pixels that is less than a full resolution of the first image, in pixels, at the level. The neural network processes the features and the partial cost volume to produce a refined optical flow estimate for the first image and the second image.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: September 24, 2019
    Assignee: NVIDIA Corporation
    Inventors: Deqing Sun, Xiaodong Yang, Ming-Yu Liu, Jan Kautz
  • Patent number: 10417524
    Abstract: An image processing system includes a memory to store a classifier and a set of labeled images for training the classifier, wherein each labeled image is labeled as either a positive image that includes an object of a specific type or a negative image that does not include the object of the specific type, wherein the set of labeled images has a first ratio of the positive images to the negative images. The system includes an input interface to receive a set of input images, a processor to determine a second ratio of the positive images, to classify the input images into positive and negative images to produce a set of classified images, and to select a subset of the classified images having the second ratio of the positive images to the negative images, and an output interface to render the subset of the input images for labeling.
    Type: Grant
    Filed: July 11, 2017
    Date of Patent: September 17, 2019
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Chen Feng, Ming-Yu Liu, Chieh-Chi Kao, Teng-Yok Lee
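The ratio-preserving selection step in this abstract — choosing a subset of classified images that matches a target positive-to-negative ratio before rendering them for labeling — reduces to simple counting. A minimal sketch with hypothetical names, expressing the ratio as an integer pair:

```python
def select_with_ratio(classified, ratio):
    """Pick the largest subset whose positives:negatives matches `ratio`.

    classified: list of (image_id, is_positive) pairs from the classifier.
    ratio: (n_pos, n_neg), e.g. (1, 3) for one positive per three negatives.
    """
    p, q = ratio
    positives = [c for c in classified if c[1]]
    negatives = [c for c in classified if not c[1]]
    # Largest multiple of the ratio that both groups can supply.
    k = min(len(positives) // p, len(negatives) // q)
    return positives[:k * p] + negatives[:k * q]
```

With 10 positives, 10 negatives, and a 1:3 target, this yields 3 positives and 9 negatives for the labeling interface.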
  • Publication number: 20190279075
    Abstract: A source image is processed using an encoder network to determine a content code representative of a visual aspect of the source object represented in the source image. A target class is determined, which can correspond to an entire population of objects of a particular type. The user may specify specific objects within the target class, or a sampling can be done to select objects within the target class to use for the translation. Style codes for the selected target objects are determined that are representative of the appearance of those target objects. The target style codes are provided with the source content code as input to a translation network, which can use the codes to infer a set of images including representations of the selected target objects having the visual aspect determined from the source image.
    Type: Application
    Filed: February 19, 2019
    Publication date: September 12, 2019
    Inventors: Ming-Yu Liu, Xun Huang
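The abstract above conditions a translation network on a target style code. One common mechanism for injecting a style code into content features (an assumption here — the abstract does not name it) is adaptive instance normalization: normalize the content features, then rescale and shift them with style-derived parameters. A toy 1-D sketch:

```python
import math

def adain(content_features, style_scale, style_shift, eps=1e-5):
    """Re-normalize content features with a style-derived scale and shift."""
    mean = sum(content_features) / len(content_features)
    var = sum((x - mean) ** 2 for x in content_features) / len(content_features)
    std = math.sqrt(var + eps)
    # Whiten the content statistics, then impose the style statistics.
    return [style_scale * (x - mean) / std + style_shift
            for x in content_features]
```

The content code preserves *what* is depicted (relative feature structure) while the style code controls *how* it looks (the feature statistics).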
  • Publication number: 20190244329
    Abstract: Photorealistic image stylization concerns transferring style of a reference photo to a content photo with the constraint that the stylized photo should remain photorealistic. Examples of styles include seasons (summer, winter, etc.), weather (sunny, rainy, foggy, etc.), lighting (daytime, nighttime, etc.). A photorealistic image stylization process includes a stylization step and a smoothing step. The stylization step transfers the style of the reference photo to the content photo. A photo style transfer neural network model receives a photorealistic content image and a photorealistic style image and generates an intermediate stylized photorealistic image that includes the content of the content image modified according to the style image. A smoothing function receives the intermediate stylized photorealistic image and pixel similarity data and generates the stylized photorealistic image, ensuring spatially consistent stylizations.
    Type: Application
    Filed: January 11, 2019
    Publication date: August 8, 2019
    Inventors: Yijun Li, Ming-Yu Liu, Ming-Hsuan Yang, Jan Kautz
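The smoothing step described above enforces spatially consistent stylization using pixel similarity data. A heavily simplified 1-D sketch (the patent's smoothing function is more elaborate; this just shows affinity-weighted averaging of stylized values):

```python
def smooth(stylized, affinity):
    """One smoothing pass: each output pixel is the affinity-weighted average
    of the stylized pixels.

    stylized: list of stylized pixel values.
    affinity[i][j]: similarity of content pixels i and j (rows need not sum to 1).
    """
    out = []
    for row in affinity:
        total = sum(row)
        out.append(sum(w * s for w, s in zip(row, stylized)) / total)
    return out
```

Pixels that are similar in the content photo pull each other's stylized values together, suppressing the patchy artifacts that a raw style transfer can introduce.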
  • Publication number: 20190244060
    Abstract: A style transfer neural network may be used to generate stylized synthetic images, where real images provide the style (e.g., seasons, weather, lighting) for transfer to synthetic images. The stylized synthetic images may then be used to train a recognition neural network. In turn, the trained neural network may be used to predict semantic labels for the real images, providing recognition data for the real images. Finally, the real training dataset (real images and predicted recognition data) and the synthetic training dataset are used by the style transfer neural network to generate stylized synthetic images. The training of the neural network, prediction of recognition data for the real images, and stylizing of the synthetic images may be repeated for a number of iterations. The stylization operation more closely aligns a covariate of the synthetic images to the covariate of the real images, improving accuracy of the recognition neural network.
    Type: Application
    Filed: February 1, 2019
    Publication date: August 8, 2019
    Inventors: Aysegul Dundar, Ming-Yu Liu, Ting-Chun Wang, John Zedlewski, Jan Kautz
  • Publication number: 20190244331
    Abstract: An image processing method extracts consecutive input blurry frames from a video, and generates sharp frames corresponding to the input blurry frames. An optical flow is determined between the sharp frames, and the optical flow is used to compute a per-pixel blur kernel. The blur kernel is used to reblur each of the sharp frames into a corresponding re-blurred frame. The re-blurred frame is used to fine-tune the deblur network by minimizing the distance between the re-blurred frame and the input blurry frame.
    Type: Application
    Filed: February 2, 2018
    Publication date: August 8, 2019
    Inventors: Jinwei Gu, Orazio Gallo, Ming-Yu Liu, Jan Kautz, Huaijin Chen
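The reblurring step above derives a per-pixel blur kernel from the optical flow. A toy 1-D sketch of the idea — the kernel support grows with the flow magnitude, so fast-moving pixels are blurred more (hypothetical names; the patented kernel is directional, not a plain box average):

```python
def reblur_1d(sharp, flow):
    """Re-blur a 1-D sharp signal by averaging each pixel over a window
    whose half-width is derived from that pixel's flow magnitude."""
    n = len(sharp)
    out = []
    for i, f in enumerate(flow):
        r = max(1, int(round(abs(f))))  # kernel half-width from flow magnitude
        window = [sharp[j] for j in range(max(0, i - r), min(n, i + r + 1))]
        out.append(sum(window) / len(window))
    return out
```

Minimizing the distance between this re-blurred signal and the original blurry input gives the self-supervised fine-tuning signal the abstract describes.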
  • Publication number: 20190156154
    Abstract: Segmentation is the identification of separate objects within an image. An example is identification of a pedestrian passing in front of a car, where the pedestrian is a first object and the car is a second object. Superpixel segmentation is the identification of regions of pixels within an object that have similar properties. An example is identification of pixel regions having a similar color, such as different articles of clothing worn by the pedestrian and different components of the car. A pixel affinity neural network (PAN) model is trained to generate pixel affinity maps for superpixel segmentation. The pixel affinity map defines the similarity of two points in space. In an embodiment, the pixel affinity map indicates a horizontal affinity and vertical affinity for each pixel in the image. The pixel affinity map is processed to identify the superpixels.
    Type: Application
    Filed: November 13, 2018
    Publication date: May 23, 2019
    Inventors: Wei-Chih Tu, Ming-Yu Liu, Varun Jampani, Deqing Sun, Ming-Hsuan Yang, Jan Kautz
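The final step of this abstract — processing horizontal and vertical affinity maps to identify superpixels — can be sketched as connected-component grouping: merge neighboring pixels whose affinity exceeds a threshold. A minimal union-find sketch with hypothetical names (the patented processing is more sophisticated):

```python
def superpixels_from_affinity(rows, cols, h_aff, v_aff, threshold=0.5):
    """Label pixels by merging neighbors with high affinity (union-find).

    h_aff[y][x]: affinity between pixel (y, x) and (y, x+1), shape rows x (cols-1)
    v_aff[y][x]: affinity between pixel (y, x) and (y+1, x), shape (rows-1) x cols
    Returns a rows x cols grid of superpixel labels.
    """
    parent = list(range(rows * cols))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for y in range(rows):
        for x in range(cols - 1):
            if h_aff[y][x] > threshold:
                union(y * cols + x, y * cols + x + 1)
    for y in range(rows - 1):
        for x in range(cols):
            if v_aff[y][x] > threshold:
                union(y * cols + x, (y + 1) * cols + x)

    labels, grid = {}, []
    for y in range(rows):
        grid.append([labels.setdefault(find(y * cols + x), len(labels))
                     for x in range(cols)])
    return grid
```

Pixels connected by chains of high-affinity edges end up sharing a superpixel label.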
  • Publication number: 20190158884
    Abstract: A method, computer readable medium, and system are disclosed for identifying residual video data, which describes the data lost during compression of original video data. For example, the original video data may be compressed and then decompressed, and the result compared to the original video data to determine the residual video data. This residual video data is transformed into a smaller format by encoding, binarizing, and compressing, and is sent to a destination. At the destination, the residual video data is transformed back into its original format and used during decompression of the compressed original video data to improve the quality of the decompressed video.
    Type: Application
    Filed: November 14, 2018
    Publication date: May 23, 2019
    Inventors: Yi-Hsuan Tsai, Ming-Yu Liu, Deqing Sun, Ming-Hsuan Yang, Jan Kautz
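The residual pipeline in this abstract — residual = original minus decompressed, sent separately and added back at the destination — can be sketched end to end. Coarse quantization here is just a stand-in for a real lossy codec, and all names are hypothetical:

```python
def quantize(frame, step=8):
    """Stand-in for lossy compress/decompress: coarse pixel quantization."""
    return [step * round(v / step) for v in frame]

def residual(original, decompressed):
    """Residual video data: what the lossy codec discarded."""
    return [o - d for o, d in zip(original, decompressed)]

def restore(decompressed, res):
    """At the destination, add the residual back to improve quality."""
    return [d + r for d, r in zip(decompressed, res)]
```

In the patent the residual itself is further encoded, binarized, and compressed before transmission; here it is passed along untouched to keep the round trip visible.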
  • Publication number: 20190147296
    Abstract: A method, computer readable medium, and system are disclosed for creating an image utilizing a map representing different classes of specific pixels within a scene. One or more computing systems use the map to create a preliminary image. This preliminary image is then compared to an original image that was used to create the map. A determination is made whether the preliminary image matches the original image, and results of the determination are used to adjust the computing systems that created the preliminary image, which improves a performance of such computing systems. The adjusted computing systems are then used to create images based on different input maps representing various object classes of specific pixels within a scene.
    Type: Application
    Filed: November 13, 2018
    Publication date: May 16, 2019
    Inventors: Ting-Chun Wang, Ming-Yu Liu, Bryan Christopher Catanzaro, Jan Kautz, Andrew J. Tao
  • Publication number: 20190102908
    Abstract: Iterative prediction systems and methods for the task of action detection process an inputted sequence of video frames to generate an output of both action tubes and respective action labels, wherein the action tubes comprise a sequence of bounding boxes on each video frame. An iterative predictor processes large offsets between the bounding boxes and the ground-truth.
    Type: Application
    Filed: October 4, 2018
    Publication date: April 4, 2019
    Inventors: Xiaodong YANG, Xitong YANG, Fanyi XIAO, Ming-Yu LIU, Jan KAUTZ
  • Publication number: 20190065908
    Abstract: A system and method for active learning include a sensor that obtains data from a scene, including a set of images having objects, and a memory that stores active learning data including an object detector trained for detecting objects in images. A processor in communication with the memory detects a semantic class and a location of at least one object in an image selected from the set of images using the object detector, and produces a detection metric as a combination of the object detector's uncertainty about the semantic class of the object in the image (classification) and its uncertainty about the location of the object in the image (localization). An output interface or display device in communication with the processor displays the image for human labeling when the detection metric is above a threshold.
    Type: Application
    Filed: August 31, 2017
    Publication date: February 28, 2019
    Inventors: Teng-Yok Lee, Chieh-Chi Kao, Pradeep Sen, Ming-Yu Liu
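The detection metric in this abstract combines classification and localization uncertainty. One plausible instantiation (an assumption — the abstract does not fix the formulas) uses class-distribution entropy for classification uncertainty and the spread of repeated box predictions for localization uncertainty:

```python
import math

def classification_uncertainty(class_probs):
    """Entropy of the detector's class distribution for one detection."""
    return -sum(p * math.log(p) for p in class_probs if p > 0)

def localization_uncertainty(box_centers):
    """Spread of repeated box predictions (variance of box centers)."""
    mean = sum(box_centers) / len(box_centers)
    return sum((c - mean) ** 2 for c in box_centers) / len(box_centers)

def needs_labeling(class_probs, box_centers, weight=1.0, threshold=0.5):
    """Flag an image for human labeling when the combined metric is high."""
    metric = (classification_uncertainty(class_probs)
              + weight * localization_uncertainty(box_centers))
    return metric > threshold
```

A confident detection (peaked class distribution, stable boxes) stays out of the labeling queue; an ambiguous one is surfaced to the human annotator.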
  • Patent number: 10210418
    Abstract: A method detects an object in an image. The method extracts a first feature vector from a first region of an image using a first subnetwork and determines a second region of the image by processing the first feature vector with a second subnetwork. The method also extracts a second feature vector from the second region of the image using the first subnetwork and detects the object using a third subnetwork on a basis of the first feature vector and the second feature vector to produce a bounding region surrounding the object and a class of the object. The first subnetwork, the second subnetwork, and the third subnetwork form a neural network. Also, a size of the first region differs from a size of the second region.
    Type: Grant
    Filed: July 25, 2016
    Date of Patent: February 19, 2019
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Ming-Yu Liu, Oncel Tuzel, Amir massoud Farahmand, Kota Hara
  • Publication number: 20180293737
    Abstract: A method, computer readable medium, and system are disclosed for estimating optical flow between two images. A first pyramidal set of features is generated for a first image and a partial cost volume for a level of the first pyramidal set of features is computed, by a neural network, using features at the level of the first pyramidal set of features and warped features extracted from a second image, where the partial cost volume is computed across a limited range of pixels that is less than a full resolution of the first image, in pixels, at the level. The neural network processes the features and the partial cost volume to produce a refined optical flow estimate for the first image and the second image.
    Type: Application
    Filed: March 30, 2018
    Publication date: October 11, 2018
    Inventors: Deqing Sun, Xiaodong Yang, Ming-Yu Liu, Jan Kautz
  • Publication number: 20180288431
    Abstract: A method, computer readable medium, and system are disclosed for action video generation. The method includes the steps of generating, by a recurrent neural network, a sequence of motion vectors from a first set of random variables and receiving, by a generator neural network, the sequence of motion vectors and a content vector sample. The sequence of motion vectors and the content vector sample are processed by the generator neural network to produce a video clip.
    Type: Application
    Filed: March 28, 2018
    Publication date: October 4, 2018
    Inventors: Ming-Yu Liu, Xiaodong Yang, Jan Kautz, Sergey Tulyakov
  • Publication number: 20180247201
    Abstract: A method, computer readable medium, and system are disclosed for training a neural network. The method includes the steps of encoding, by a first neural network, a first image represented in a first domain to convert the first image to a shared latent space, producing a first latent code and encoding, by a second neural network, a second image represented in a second domain to convert the second image to the shared latent space, producing a second latent code. The method also includes the step of generating, by a third neural network, a first translated image in the second domain based on the first latent code, wherein the first translated image is correlated with the first image and weight values of the third neural network are computed based on the first latent code and the second latent code.
    Type: Application
    Filed: February 27, 2018
    Publication date: August 30, 2018
    Inventors: Ming-Yu Liu, Thomas Michael Breuel, Jan Kautz
  • Publication number: 20180232601
    Abstract: An image processing system includes a memory to store a classifier and a set of labeled images for training the classifier, wherein each labeled image is labeled as either a positive image that includes an object of a specific type or a negative image that does not include the object of the specific type, wherein the set of labeled images has a first ratio of the positive images to the negative images. The system includes an input interface to receive a set of input images, a processor to determine a second ratio of the positive images, to classify the input images into positive and negative images to produce a set of classified images, and to select a subset of the classified images having the second ratio of the positive images to the negative images, and an output interface to render the subset of the input images for labeling.
    Type: Application
    Filed: July 11, 2017
    Publication date: August 16, 2018
    Applicant: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Chen Feng, Ming-Yu Liu, Chieh-Chi Kao, Teng-Yok Lee
  • Patent number: 9989964
    Abstract: A method and a system generate a time-series signal indicative of a variation of the environment in the vicinity of the vehicle with respect to a motion of the vehicle and submit the time-series signal to a neural network to produce a reference trajectory as a function of time that satisfies time and spatial constraints on a position of the vehicle. The neural network is trained to transform time-series signals into reference trajectories of the vehicle. A motion trajectory tracking the reference trajectory while satisfying constraints on the motion of the vehicle is determined, and the motion of the vehicle is controlled to follow the motion trajectory.
    Type: Grant
    Filed: November 3, 2016
    Date of Patent: June 5, 2018
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Karl Berntorp, Ming-Yu Liu, Avishai Weiss
  • Publication number: 20180144241
    Abstract: A method for training a neural network, using a processor in communication with a memory, includes determining features of a signal using the neural network, determining an uncertainty measure of the features for classifying the signal, reconstructing the signal from the features using a decoder neural network to produce a reconstructed signal, comparing the reconstructed signal with the signal to produce a reconstruction error, combining the uncertainty measure with the reconstruction error to produce a rank of the signal for the necessity of manual labeling, labeling the signal according to the rank to produce the labeled signal, and training the neural network and the decoder neural network using the labeled signal.
    Type: Application
    Filed: November 22, 2016
    Publication date: May 24, 2018
    Inventors: Ming-Yu Liu, Chieh-Chi Kao
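The ranking step in this abstract combines classifier uncertainty with autoencoder reconstruction error to prioritize signals for manual labeling. A minimal sketch, assuming a simple additive combination and hypothetical names:

```python
def label_priority(samples):
    """Rank samples for manual labeling, highest priority first.

    samples: list of (sample_id, uncertainty, reconstruction_error) tuples.
    The combined score is their sum; high-scoring samples are both hard
    to classify and poorly modeled, so labeling them helps most.
    """
    ranked = sorted(samples, key=lambda s: s[1] + s[2], reverse=True)
    return [s[0] for s in ranked]
```

The top of this ranking is sent to the human annotator; the resulting labels then retrain both the classifier and the decoder network, closing the active-learning loop the abstract describes.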