Patents by Inventor Qifei Wang

Qifei Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240119555
    Abstract: The technology employs a patch-based multi-scale Transformer (300) that is usable with various imaging applications. This avoids constraints on a fixed image input size and effectively predicts quality on a native-resolution image. A native-resolution image (304) is transformed into a multi-scale representation (302), enabling the Transformer's self-attention mechanism to capture information on both fine-grained detailed patches and coarse-grained global patches. A spatial embedding (316) is employed to map patch positions to a fixed grid, in which patch locations at each scale are hashed to the same grid. A separate scale embedding (318) is employed to distinguish patches coming from different scales in the multi-scale representation. Self-attention (508) is performed to create a final image representation. In some instances, prior to performing self-attention, the system may prepend a learnable classification token (322) to the set of input tokens. (An illustrative code sketch of this pipeline appears after this listing.)
    Type: Application
    Filed: December 4, 2023
    Publication date: April 11, 2024
    Inventors: Junjie Ke, Feng Yang, Qifei Wang, Yilin Wang, Peyman Milanfar
  • Patent number: 11887270
    Abstract: The technology employs a patch-based multi-scale Transformer (300) that is usable with various imaging applications. This avoids constraints on a fixed image input size and effectively predicts quality on a native-resolution image. A native-resolution image (304) is transformed into a multi-scale representation (302), enabling the Transformer's self-attention mechanism to capture information on both fine-grained detailed patches and coarse-grained global patches. A spatial embedding (316) is employed to map patch positions to a fixed grid, in which patch locations at each scale are hashed to the same grid. A separate scale embedding (318) is employed to distinguish patches coming from different scales in the multi-scale representation. Self-attention (508) is performed to create a final image representation. In some instances, prior to performing self-attention, the system may prepend a learnable classification token (322) to the set of input tokens.
    Type: Grant
    Filed: July 1, 2021
    Date of Patent: January 30, 2024
    Assignee: Google LLC
    Inventors: Junjie Ke, Feng Yang, Qifei Wang, Yilin Wang, Peyman Milanfar
  • Patent number: 11790550
    Abstract: A method includes obtaining a first plurality of feature vectors associated with a first image and a second plurality of feature vectors associated with a second image. The method also includes generating a plurality of transformed feature vectors by transforming each respective feature vector of the first plurality of feature vectors by a kernel matrix trained to define an elliptical inner product space. The method additionally includes generating a cost volume by determining, for each respective transformed feature vector of the plurality of transformed feature vectors, a plurality of inner products, wherein each respective inner product of the plurality of inner products is between the respective transformed feature vector and a corresponding candidate feature vector of a corresponding subset of the second plurality of feature vectors. The method further includes determining, based on the cost volume, a pixel correspondence between the first image and the second image. (An illustrative sketch of this cost-volume computation appears after this listing.)
    Type: Grant
    Filed: July 8, 2020
    Date of Patent: October 17, 2023
    Assignee: Google LLC
    Inventors: Taihong Xiao, Deqing Sun, Ming-Hsuan Yang, Qifei Wang, Jinwei Yuan
  • Publication number: 20230267307
    Abstract: Systems and methods of the present disclosure are directed to a method for generating a machine-learned multitask model configured to perform tasks. The method can include obtaining a machine-learned multitask search model comprising candidate nodes. The method can include obtaining tasks and machine-learned task controller models associated with the tasks. As an example, for a task, the method can include using the task controller model to route a subset of the candidate nodes into a machine-learned task submodel for the corresponding task. The method can include inputting task input data to the task submodel to obtain a task output. The method can include generating, using the task output, a feedback value based on an objective function. The method can include adjusting parameters of the task controller model based on the feedback value. (An illustrative sketch of this controller-routing loop appears after this listing.)
    Type: Application
    Filed: July 23, 2020
    Publication date: August 24, 2023
    Inventors: Qifei Wang, Junjie Ke, Grace Chu, Gabriel Mintzer Bender, Luciano Sbaiz, Feng Yang, Andrew Gerald Howard, Alec Michael Go, Jeffrey M. Gilbert, Peyman Milanfar, Joshua William Charles Greaves
  • Publication number: 20230222623
    Abstract: The technology employs a patch-based multi-scale Transformer (300) that is usable with various imaging applications. This avoids constraints on a fixed image input size and effectively predicts quality on a native-resolution image. A native-resolution image (304) is transformed into a multi-scale representation (302), enabling the Transformer's self-attention mechanism to capture information on both fine-grained detailed patches and coarse-grained global patches. A spatial embedding (316) is employed to map patch positions to a fixed grid, in which patch locations at each scale are hashed to the same grid. A separate scale embedding (318) is employed to distinguish patches coming from different scales in the multi-scale representation. Self-attention (508) is performed to create a final image representation. In some instances, prior to performing self-attention, the system may prepend a learnable classification token (322) to the set of input tokens.
    Type: Application
    Filed: July 1, 2021
    Publication date: July 13, 2023
    Inventors: Junjie Ke, Feng Yang, Qifei Wang, Yilin Wang, Peyman Milanfar
  • Publication number: 20230091374
    Abstract: The present disclosure is directed to object and/or character recognition for use in applications such as computer vision. Advantages of the present disclosure include lightweight functionality that can be used on devices such as smartphones. Aspects of the present disclosure include a sequential architecture in which a lightweight machine-learned model can receive an image, detect whether an object is present in one or more regions of the image, and generate an output based on the detection. This output can be applied as a filter to remove image data that can be neglected by more memory-intensive machine-learned models applied downstream. (An illustrative sketch of this filter-then-recognize flow appears after this listing.)
    Type: Application
    Filed: February 24, 2020
    Publication date: March 23, 2023
    Inventors: Qifei Wang, Alexander Kuznetsov, Alec Michael Go, Grace Chu, Eunyoung Kim, Feng Yang, Andrew Gerald Howard, Jeffrey M. Gilbert
  • Publication number: 20220189051
    Abstract: A method includes obtaining a first plurality of feature vectors associated with a first image and a second plurality of feature vectors associated with a second image. The method also includes generating a plurality of transformed feature vectors by transforming each respective feature vector of the first plurality of feature vectors by a kernel matrix trained to define an elliptical inner product space. The method additionally includes generating a cost volume by determining, for each respective transformed feature vector of the plurality of transformed feature vectors, a plurality of inner products, wherein each respective inner product of the plurality of inner products is between the respective transformed feature vector and a corresponding candidate feature vector of a corresponding subset of the second plurality of feature vectors. The method further includes determining, based on the cost volume, a pixel correspondence between the first image and the second image.
    Type: Application
    Filed: July 8, 2020
    Publication date: June 16, 2022
    Inventors: Taihong Xiao, Deqing Sun, Ming-Hsuan Yang, Qifei Wang, Jinwei Yuan
  • Patent number: 8924954
    Abstract: An application software installation method and apparatus solve the problems of operational complexity and high implementation difficulty in existing application software installation processes. The method includes: mounting mirror data of a virtual machine and mapping the mirror data as a virtual disk in a local file system; updating a registry file in the virtual disk according to registry change record data in an application software package; and updating a file structure in the virtual disk according to file change record data and files in the application software package, thereby installing the application software in the virtual machine. During installation, the user of the virtual machine does not need to perform complex operations, which reduces software installation difficulty. (An illustrative sketch of applying such change records appears after this listing.)
    Type: Grant
    Filed: December 12, 2012
    Date of Patent: December 30, 2014
    Assignee: Huawei Technologies Co., Ltd.
    Inventor: Qifei Wang
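
The multi-scale Transformer entries above (publication 20240119555, patent 11887270, publication 20230222623) describe transforming a native-resolution image into multi-scale patches, hashing patch positions to a fixed spatial grid, adding a scale embedding, optionally prepending a classification token, and running self-attention. The sketch below is a minimal, hypothetical NumPy illustration of that flow; the patch size, grid size, token width, random projections, and single-head attention are all assumptions made for illustration, not the patented implementation.

```python
# Hypothetical sketch of the patch-based multi-scale pipeline described above.
# Sizes, random parameters, and the toy attention layer are illustrative only.
import numpy as np

PATCH, GRID, DIM = 32, 10, 64          # patch size, hashed grid size, token width
rng = np.random.default_rng(0)

def extract_patches(image, patch=PATCH):
    """Split an image (H, W, C) into non-overlapping patches plus normalized (row, col) positions."""
    h, w, _ = image.shape
    patches, coords = [], []
    for r in range(0, h - h % patch, patch):
        for c in range(0, w - w % patch, patch):
            patches.append(image[r:r + patch, c:c + patch].reshape(-1))
            coords.append((r / h, c / w))          # location normalized to [0, 1)
    return np.stack(patches), coords

def tokens_for_scale(image, scale_id, proj, spatial_emb, scale_emb):
    """Linearly embed each patch, then add a hashed spatial embedding and a scale embedding."""
    patches, coords = extract_patches(image)
    toks = patches @ proj                                      # linear patch embedding
    for i, (r, c) in enumerate(coords):
        gr, gc = int(r * GRID), int(c * GRID)                  # hash the location to a fixed grid
        toks[i] += spatial_emb[gr * GRID + gc] + scale_emb[scale_id]
    return toks

def self_attention(x):
    """Single-head scaled dot-product self-attention (toy stand-in for the Transformer encoder)."""
    att = x @ x.T / np.sqrt(x.shape[-1])
    att = np.exp(att - att.max(axis=-1, keepdims=True))
    att /= att.sum(axis=-1, keepdims=True)
    return att @ x

# Random parameters for illustration.
proj = rng.normal(0, 0.02, (PATCH * PATCH * 3, DIM))
spatial_emb = rng.normal(0, 0.02, (GRID * GRID, DIM))
scale_emb = rng.normal(0, 0.02, (4, DIM))
cls_token = rng.normal(0, 0.02, (1, DIM))                      # learnable classification token

# Native-resolution image plus two coarser rescalings form the multi-scale representation.
native = rng.random((224, 320, 3))
scales = [native, native[::2, ::2], native[::4, ::4]]          # crude downscaling for the sketch
tokens = np.concatenate([tokens_for_scale(img, s, proj, spatial_emb, scale_emb)
                         for s, img in enumerate(scales)])
tokens = np.concatenate([cls_token, tokens])                   # prepend the classification token
final_repr = self_attention(tokens)[0]                         # its output is the image representation
print(final_repr.shape)                                        # (64,)
```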
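
Patent 11790550 and publication 20220189051 describe transforming feature vectors with a trained kernel matrix, taking inner products against candidate feature vectors from the second image to build a cost volume, and reading off a pixel correspondence. The NumPy sketch below is a simplified, hypothetical rendering of that idea: the kernel is a random positive semi-definite stand-in rather than a trained one, the candidate subset is a local search window, and the argmax readout is just one plausible way to turn the cost volume into correspondences.

```python
# Hypothetical sketch of a learned-kernel cost volume for pixel correspondence.
# Shapes, the random kernel, and the argmax readout are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
H, W, D, RADIUS = 8, 8, 16, 2                       # feature-map size, feature dim, search radius

feats1 = rng.normal(size=(H, W, D))                 # feature vectors of the first image
feats2 = rng.normal(size=(H, W, D))                 # feature vectors of the second image

# Kernel matrix; in the method it is trained so that x^T K y defines an elliptical
# inner product space. Here K = A A^T is a random positive semi-definite stand-in.
A = rng.normal(size=(D, D))
K = A @ A.T

transformed1 = feats1 @ K                           # transform every feature vector of image 1

def cost_volume(t1, f2, radius=RADIUS):
    """Inner products between each transformed vector and candidate vectors in a local window of image 2."""
    side = 2 * radius + 1
    cv = np.full((H, W, side, side), -np.inf)
    for y in range(H):
        for x in range(W):
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < H and 0 <= xx < W:
                        cv[y, x, dy + radius, dx + radius] = t1[y, x] @ f2[yy, xx]
    return cv

cv = cost_volume(transformed1, feats2)

# Simple correspondence readout: pick the candidate with the highest matching cost.
flat = cv.reshape(H, W, -1).argmax(axis=-1)
offsets = np.stack(np.unravel_index(flat, (2 * RADIUS + 1, 2 * RADIUS + 1)), axis=-1) - RADIUS
print(offsets.shape)                                # (8, 8, 2): per-pixel (dy, dx) correspondence
```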
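
Publication 20230267307 describes per-task controller models that route subsets of candidate nodes from a shared multitask search model into task submodels, with controller parameters adjusted from an objective-based feedback value. The sketch below is a toy, hypothetical version of that loop: the candidate nodes are random linear layers, each controller is a vector of keep/drop logits, and the update is a simple REINFORCE-style adjustment; none of these choices is taken from the filing itself.

```python
# Hypothetical sketch of a task-controller routing loop over shared candidate nodes.
# Toy nodes, the Bernoulli controller, and the REINFORCE-style update are assumptions.
import numpy as np

rng = np.random.default_rng(0)
DIM, N_NODES = 8, 6

# Shared search space: candidate nodes are simple random linear layers here.
candidate_nodes = [rng.normal(0, 0.3, (DIM, DIM)) for _ in range(N_NODES)]

def run_submodel(x, selected):
    """Route the input through the selected subset of candidate nodes (the task submodel)."""
    for idx in selected:
        x = np.tanh(x @ candidate_nodes[idx])
    return x

def controller_sample(logits):
    """Task controller: independently keep or drop each candidate node."""
    probs = 1.0 / (1.0 + np.exp(-logits))
    keep = rng.random(N_NODES) < probs
    return np.flatnonzero(keep), probs, keep

# One controller per task; each decides which nodes form that task's submodel.
task_logits = {"task_a": np.zeros(N_NODES), "task_b": np.zeros(N_NODES)}
task_targets = {"task_a": rng.normal(size=DIM), "task_b": rng.normal(size=DIM)}
lr = 0.5

for step in range(100):
    for task, logits in task_logits.items():
        selected, probs, keep = controller_sample(logits)
        x = rng.normal(size=DIM)                               # stand-in task input data
        out = run_submodel(x, selected)                        # task output from the routed submodel
        feedback = -np.mean((out - task_targets[task]) ** 2)   # feedback value from an objective
        # REINFORCE-style adjustment of the controller parameters using the feedback.
        logits += lr * (keep.astype(float) - probs) * feedback

for task, logits in task_logits.items():
    print(task, "keep probabilities:", np.round(1 / (1 + np.exp(-logits)), 2))
```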
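
Publication 20230091374 describes a sequential architecture in which a lightweight model first checks regions of an image for objects, and its output filters out regions before a memory-intensive downstream model runs. The sketch below illustrates only that control flow, under invented assumptions: the variance test stands in for the lightweight machine-learned detector and the string-returning function stands in for the heavy recognizer.

```python
# Hypothetical sketch of the sequential filter-then-recognize control flow.
# The variance-based "detector" and the placeholder recognizer are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
REGION = 64

def lightweight_detector(region):
    """Cheap stand-in for a small model that decides whether anything of interest is present."""
    return region.std() > 0.1          # nearly uniform regions carry no object or character

def heavy_recognizer(region):
    """Placeholder for the memory-intensive downstream recognizer."""
    return f"recognized-object@{region.shape}"

def recognize(image):
    results = []
    h, w = image.shape[:2]
    for r in range(0, h, REGION):
        for c in range(0, w, REGION):
            region = image[r:r + REGION, c:c + REGION]
            if lightweight_detector(region):           # filter: skip regions that can be neglected
                results.append(((r, c), heavy_recognizer(region)))
    return results

# Mostly flat image with one textured region: only that region reaches the heavy model.
image = np.full((256, 256), 0.5)
image[64:128, 128:192] = rng.random((64, 64))
print(recognize(image))
```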
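
Patent 8924954 describes mounting a virtual machine's mirror data as a virtual disk and then applying registry change records and file change records from an application software package, so that the software ends up installed inside the virtual machine without user interaction. The sketch below is a loose, hypothetical analogue in Python: the "mount" is simulated with a plain directory, the registry is a JSON file, and the package layout (registry_changes.json, file_changes.json, files/) is invented for illustration; a real implementation would operate on the guest's actual registry hive and disk image.

```python
# Hypothetical sketch of applying a software package's change records to a mounted VM image.
# The simulated mount, JSON "registry", and package layout are illustrative assumptions.
import json, shutil
from pathlib import Path

def mount_image(image_path):
    """Stand-in for mounting VM mirror data and mapping it as a virtual disk."""
    virtual_disk = Path(image_path).with_suffix(".mounted")
    virtual_disk.mkdir(exist_ok=True)
    return virtual_disk

def install(package_dir, virtual_disk):
    package = Path(package_dir)
    # 1. Update the registry file on the virtual disk from the package's registry change records.
    registry_path = virtual_disk / "registry.json"
    registry = json.loads(registry_path.read_text()) if registry_path.exists() else {}
    registry.update(json.loads((package / "registry_changes.json").read_text()))
    registry_path.write_text(json.dumps(registry, indent=2))
    # 2. Update the file structure on the virtual disk from the package's file change records.
    for record in json.loads((package / "file_changes.json").read_text()):
        dest = virtual_disk / record["path"]
        if record["op"] == "add":
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy(package / "files" / record["source"], dest)
        elif record["op"] == "delete" and dest.exists():
            dest.unlink()

# Usage (assumes a package directory containing registry_changes.json,
# file_changes.json, and a files/ folder):
#   disk = mount_image("guest.img")
#   install("app_package", disk)
```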