Patents by Inventor Huaizu Jiang

Huaizu Jiang has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240078423
    Abstract: A vision transformer (ViT) is a deep learning model that performs one or more vision processing tasks. ViTs may be modified to include a global task that clusters images with the same concept together to produce semantically consistent relational representations, as well as a local task that guides the ViT to discover object-centric semantic correspondence across images. A database of concepts and associated features may be created and used to train the global and local tasks, which may then enable the ViT to perform visual relational reasoning faster, without supervision, and outside of a synthetic domain. (A brief illustrative code sketch follows this entry.)
    Type: Application
    Filed: August 22, 2022
    Publication date: March 7, 2024
    Inventors: Xiaojian Ma, Weili Nie, Zhiding Yu, Huaizu Jiang, Chaowei Xiao, Yuke Zhu, Anima Anandkumar
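The global and local objectives described in the abstract above can be illustrated with a short sketch. This is not the patented implementation: the loss formulations, the temperature value, and the tensor shapes below are assumptions chosen to show one plausible way to cluster same-concept image embeddings (global) and to encourage patch-level correspondence across images (local).

```python
# Hypothetical sketch of the two auxiliary objectives, not NVIDIA's code.
import torch
import torch.nn.functional as F

def global_loss(cls_tokens, concept_ids, temperature=0.07):
    """Pull [CLS] embeddings of images that share a concept together
    (a simple supervised-contrastive stand-in for the clustering task)."""
    z = F.normalize(cls_tokens, dim=-1)                   # (B, D)
    sim = z @ z.t() / temperature                         # (B, B) similarities
    mask = concept_ids.unsqueeze(0) == concept_ids.unsqueeze(1)
    mask.fill_diagonal_(False)                            # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = mask.sum(1).clamp(min=1)                        # positives per anchor
    return -(log_prob * mask).sum(1).div(pos).mean()

def local_loss(patches_a, patches_b):
    """Reward patch tokens in image A that find a strong match in image B
    of the same concept (a stand-in for the correspondence task)."""
    a = F.normalize(patches_a, dim=-1)                    # (N, D)
    b = F.normalize(patches_b, dim=-1)                    # (N, D)
    sim = a @ b.t()                                       # (N, N)
    # Symmetric soft nearest-neighbour score, negated to form a loss.
    return -(sim.max(dim=1).values.mean() + sim.max(dim=0).values.mean()) / 2

# Usage with random stand-in features (a real setup would take these from
# the ViT's [CLS] and patch tokens):
B, N, D = 8, 196, 768
cls_tokens = torch.randn(B, D, requires_grad=True)
concept_ids = torch.randint(0, 3, (B,))
loss = global_loss(cls_tokens, concept_ids) + local_loss(
    torch.randn(N, D, requires_grad=True), torch.randn(N, D)
)
loss.backward()  # gradients would flow back into the ViT during training
```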
  • Publication number: 20240062534
    Abstract: A vision transformer (ViT) is a deep learning model that performs one or more vision processing tasks. ViTs may be modified to include a global task that clusters images with the same concept together to produce semantically consistent relational representations, as well as a local task that guides the ViT to discover object-centric semantic correspondence across images. A database of concepts and associated features may be created and used to train the global and local tasks, which may then enable the ViT to perform visual relational reasoning faster, without supervision, and outside of a synthetic domain.
    Type: Application
    Filed: August 22, 2022
    Publication date: February 22, 2024
    Inventors: Xiaojian Ma, Weili Nie, Zhiding Yu, Huaizu Jiang, Chaowei Xiao, Yuke Zhu, Anima Anandkumar
  • Patent number: 10986325
    Abstract: Scene flow represents the three-dimensional (3D) structure and movement of objects in a video sequence from frame to frame and is used to track objects and estimate speeds for autonomous driving applications. Scene flow is recovered by a neural network system from a video sequence captured from at least two viewpoints (e.g., cameras), such as the left and right eyes of a viewer. An encoder portion of the system extracts features from frames of the video sequence. The features are input to a first decoder to predict optical flow and a second decoder to predict disparity. The optical flow represents pixel movement in (x,y) and the disparity represents pixel movement in z (depth). When combined, the optical flow and disparity represent the scene flow. (A brief illustrative code sketch follows this entry.)
    Type: Grant
    Filed: September 12, 2019
    Date of Patent: April 20, 2021
    Assignee: NVIDIA Corporation
    Inventors: Deqing Sun, Varun Jampani, Erik Gundersen Learned-Miller, Huaizu Jiang
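As a structural illustration of the shared-encoder, two-decoder design described above, here is a minimal sketch. It is not NVIDIA's actual network: the layer sizes, channel counts, and the simple channel-wise stacking of stereo frames are assumptions.

```python
# Minimal structural sketch of the shared-encoder / two-decoder idea.
import torch
import torch.nn as nn

class SceneFlowNet(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        # Shared encoder over stacked stereo frames (2 views x 2 frames x RGB).
        self.encoder = nn.Sequential(
            nn.Conv2d(12, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )
        # First decoder: optical flow, i.e. per-pixel (dx, dy) motion.
        self.flow_head = nn.Conv2d(ch, 2, 3, padding=1)
        # Second decoder: disparity, a proxy for per-pixel depth (z).
        self.disp_head = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, frames):
        feats = self.encoder(frames)
        flow = self.flow_head(feats)          # (B, 2, H, W): motion in x, y
        disp = self.disp_head(feats)          # (B, 1, H, W): pixel disparity
        # Concatenating flow and disparity yields 3-channel scene flow.
        return torch.cat([flow, disp], dim=1)

# Usage: two consecutive stereo frame pairs, stacked along channels.
net = SceneFlowNet()
frames = torch.randn(1, 12, 64, 128)          # left/right x {t, t+1} x RGB
scene_flow = net(frames)                      # (1, 3, 64, 128)
```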
  • Patent number: 10776688
    Abstract: Video interpolation is used to predict one or more intermediate frames at timesteps defined between two consecutive frames. A first neural network model approximates optical flow data defining motion between the two consecutive frames. A second neural network model refines the optical flow data and predicts visibility maps for each timestep. The two consecutive frames are warped according to the refined optical flow data for each timestep to produce pairs of warped frames for each timestep. The second neural network model then fuses each pair of warped frames based on the visibility maps to produce the intermediate frame for each timestep. Artifacts caused by motion boundaries and occlusions are reduced in the predicted intermediate frames. (A brief illustrative code sketch follows this entry.)
    Type: Grant
    Filed: October 24, 2018
    Date of Patent: September 15, 2020
    Assignee: NVIDIA Corporation
    Inventors: Huaizu Jiang, Deqing Sun, Varun Jampani
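The warp-and-fuse step described above can be sketched as follows. The backward warping via grid_sample and the complementary-visibility blend are common formulations used here as assumptions, not necessarily the patented design; in a real pipeline the flows and visibility maps would come from the two network stages.

```python
# Hedged sketch of warping two frames and fusing them with a visibility map.
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """Backward-warp `frame` (B,C,H,W) by a per-pixel `flow` (B,2,H,W)."""
    B, _, H, W = frame.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys)).float().unsqueeze(0)   # (1, 2, H, W) pixel coords
    coords = base + flow                                # where to sample from
    # Normalize x over width and y over height to [-1, 1] for grid_sample.
    gx = 2 * coords[:, 0] / (W - 1) - 1
    gy = 2 * coords[:, 1] / (H - 1) - 1
    grid = torch.stack((gx, gy), dim=-1)                # (B, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)

def fuse(frame0, frame1, flow_t0, flow_t1, vis0, t):
    """Blend the two warped frames with a visibility map and time weight t."""
    w0, w1 = warp(frame0, flow_t0), warp(frame1, flow_t1)
    vis1 = 1.0 - vis0                           # complementary visibility
    num = (1 - t) * vis0 * w0 + t * vis1 * w1   # visibility-weighted blend
    den = (1 - t) * vis0 + t * vis1
    return num / den.clamp(min=1e-6)

# Usage with stand-in tensors:
f0, f1 = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
flow = torch.zeros(1, 2, 64, 64)                # zero flow as a placeholder
vis = torch.full((1, 1, 64, 64), 0.5)           # uniform visibility placeholder
mid = fuse(f0, f1, flow, flow, vis, t=0.5)      # intermediate frame at t=0.5
```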
  • Publication number: 20200084427
    Abstract: Scene flow represents the three-dimensional (3D) structure and movement of objects in a video sequence from frame to frame and is used to track objects and estimate speeds for autonomous driving applications. Scene flow is recovered by a neural network system from a video sequence captured from at least two viewpoints (e.g., cameras), such as the left and right eyes of a viewer. An encoder portion of the system extracts features from frames of the video sequence. The features are input to a first decoder to predict optical flow and a second decoder to predict disparity. The optical flow represents pixel movement in (x,y) and the disparity represents pixel movement in z (depth). When combined, the optical flow and disparity represent the scene flow.
    Type: Application
    Filed: September 12, 2019
    Publication date: March 12, 2020
    Inventors: Deqing Sun, Varun Jampani, Erik Gundersen Learned-Miller, Huaizu Jiang
  • Publication number: 20190138889
    Abstract: Video interpolation is used to predict one or more intermediate frames at timesteps defined between two consecutive frames. A first neural network model approximates optical flow data defining motion between the two consecutive frames. A second neural network model refines the optical flow data and predicts visibility maps for each timestep. The two consecutive frames are warped according to the refined optical flow data for each timestep to produce pairs of warped frames for each timestep. The second neural network model then fuses each pair of warped frames based on the visibility maps to produce the intermediate frame for each timestep. Artifacts caused by motion boundaries and occlusions are reduced in the predicted intermediate frames.
    Type: Application
    Filed: October 24, 2018
    Publication date: May 9, 2019
    Inventors: Huaizu Jiang, Deqing Sun, Varun Jampani
  • Patent number: 9042648
    Abstract: Techniques for identifying a salient object with respect to its context are described. A process receives an input image that includes a salient object. The process segments the input image into multiple regions and calculates a saliency value for each of the segmented regions based on multiple image scales. The process constructs saliency maps based at least in part on the calculated saliency values, and combines the saliency maps to construct a total saliency map. Next, the process connects a set of line segments computed from the input image, using the total saliency map to compute a closed boundary; a shape prior is formed from that boundary, and the salient object is extracted from the total saliency map and the shape prior. (A brief illustrative code sketch follows this entry.)
    Type: Grant
    Filed: February 23, 2012
    Date of Patent: May 26, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jingdong Wang, Shipeng Li, Huaizu Jiang
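To make the multi-scale region-contrast idea above concrete, here is an illustrative sketch. The uniform grid segmentation and mean-color contrast measure are deliberate simplifications standing in for the patented segmentation, saliency scoring, and map combination; they are assumptions, not the claimed method.

```python
# Illustrative multi-scale region-contrast saliency, not the patented method.
import numpy as np

def region_saliency(image, grid):
    """Score each cell of a uniform grid segmentation by its mean-color
    contrast against all other cells, then paint scores back into a map."""
    h, w, _ = image.shape
    gh, gw = h // grid, w // grid
    cells = image[: gh * grid, : gw * grid].reshape(grid, gh, grid, gw, 3)
    means = cells.mean(axis=(1, 3))               # (grid, grid, 3) mean colors
    flat = means.reshape(-1, 3)
    # Contrast of a region = summed color distance to every other region.
    contrast = np.linalg.norm(flat[:, None] - flat[None, :], axis=-1).sum(1)
    contrast = contrast.reshape(grid, grid)
    contrast = (contrast - contrast.min()) / (np.ptp(contrast) + 1e-9)
    return np.kron(contrast, np.ones((gh, gw)))   # upsample to a pixel map

def total_saliency(image, scales=(4, 8, 16)):
    """Combine the per-scale saliency maps into a single total map."""
    h, w, _ = image.shape
    total = np.zeros((h, w))
    for s in scales:
        m = region_saliency(image, s)
        total[: m.shape[0], : m.shape[1]] += m / len(scales)
    return total

# Usage on a random stand-in image:
img = np.random.rand(128, 128, 3)
sal = total_saliency(img)                         # (128, 128) saliency map
```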
  • Publication number: 20130223740
    Abstract: Techniques for identifying a salient object with respect to its context are described. A process receives an input image that includes a salient object. The process segments the input image into multiple regions and calculates a saliency value for each of the segmented regions based on multiple image scales. The process constructs saliency maps based at least in part on the calculated saliency values, and combines the saliency maps to construct a total saliency map. Next, the process connects a set of line segments computed from the input image, using the total saliency map to compute a closed boundary; a shape prior is formed from that boundary, and the salient object is extracted from the total saliency map and the shape prior.
    Type: Application
    Filed: February 23, 2012
    Publication date: August 29, 2013
    Applicant: Microsoft Corporation
    Inventors: Jingdong Wang, Shipeng Li, Huaizu Jiang