Patents by Inventor Haoxiang Li

Haoxiang Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11032498
    Abstract: A quantitative pulse count (event detection) algorithm with linearity to high count rates is accomplished by combining a high-speed, high frame rate camera with simple logic code run on a massively parallel processor such as a GPU. The parallel processor elements examine frames from the camera pixel by pixel to find and tag events or count pulses. The tagged events are combined to form a combined quantitative event image.
    Type: Grant
    Filed: March 19, 2018
    Date of Patent: June 8, 2021
    Assignee: The Regents of the University of Colorado, a body corporate
    Inventors: Justin Waugh, Daniel S. Dessau, Stephen P. Parham, Thomas Nummy, Justin Griffith, Xiaoqing Zhou, Haoxiang Li
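    The per-pixel event tagging described in this abstract can be sketched as follows; NumPy vectorization stands in for the massively parallel GPU threads, and the fixed intensity threshold is an illustrative assumption.

```python
import numpy as np

def count_events(frames, threshold):
    """Tag pixels whose intensity exceeds a threshold in each frame and
    accumulate the tags into a combined quantitative event image.
    `frames` has shape (n_frames, height, width); in the patented scheme
    each parallel processor element would examine one pixel."""
    tagged = frames > threshold   # boolean event tags, per pixel per frame
    return tagged.sum(axis=0)     # combined event image: event counts per pixel

frames = np.array([
    [[0, 5], [9, 0]],
    [[0, 6], [0, 0]],
    [[7, 0], [8, 1]],
])
image = count_events(frames, threshold=4)
```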
  • Patent number: 11003030
    Abstract: An array substrate and a display device including the array substrate are provided. The array substrate includes: an upper electrode layer on a base substrate and including a first upper electrode strip and a second upper electrode strip; a lower electrode layer between the base substrate and the upper electrode layer. The lower electrode layer includes a portion that does not overlap the first upper electrode strip and the second upper electrode strip in a direction perpendicular to an upper surface of the base substrate. The array substrate includes a pixel electrode strip and a common electrode strip which are in a same layer and both correspond to a region between the first upper electrode strip and the second upper electrode strip.
    Type: Grant
    Filed: March 15, 2018
    Date of Patent: May 11, 2021
    Assignees: BOE TECHNOLOGY GROUP CO., LTD., CHONGQING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD.
    Inventors: Haoxiang Fan, Keke Gu, Peng Li, Xiaoji Li, Zhe Li, Junhong Lu, Wei Zhu, Peng Qin, Wenliang Liu
  • Patent number: 10915798
    Abstract: Disclosed herein are embodiments of systems, methods, and products for webly supervised training of a convolutional neural network (CNN) to predict emotion in images. A computer may query one or more image repositories using search keywords generated based on the tertiary emotion classes of Parrott's emotion wheel. The computer may filter the images received in response to the query to generate a weakly labeled training dataset; labels associated with the images that are noisy or wrong may be cleaned prior to training of the CNN. The computer may iteratively train the CNN, leveraging the hierarchy of emotion classes by increasing the complexity of the labels (tags) at each iteration. Such curriculum-guided training may generate a trained CNN that is more accurate than conventionally trained neural networks.
    Type: Grant
    Filed: May 15, 2018
    Date of Patent: February 9, 2021
    Assignee: Adobe Inc.
    Inventors: Jianming Zhang, Rameswar Panda, Haoxiang Li, Joon-Young Lee, Xin Lu
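    The curriculum-guided training above coarsens labels early and refines them later. A minimal sketch of that label schedule, using a hypothetical two-entry fragment of the emotion hierarchy (the actual class names and hierarchy come from Parrott's emotion wheel):

```python
# Hypothetical fragment of the emotion hierarchy: tertiary -> (primary, secondary).
# The entries below are illustrative only.
HIERARCHY = {
    "cheerfulness": ("joy", "contentment"),
    "irritation":   ("anger", "annoyance"),
}

def labels_for_stage(tertiary_label, stage):
    """Return the training label for a curriculum stage: stage 0 uses the
    coarse primary class, stage 1 the secondary class, and stage 2 the
    full tertiary label, so label complexity grows with each iteration."""
    primary, secondary = HIERARCHY[tertiary_label]
    return [primary, secondary, tertiary_label][stage]
```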
  • Patent number: 10832036
    Abstract: Methods and systems are provided for generating a facial recognition system. A facial recognition system can be implemented using a meta-model based on a trained neural network. A neural network can be trained as multiple classifiers that identify individuals using a small number of images of each individual's face. A meta-model can learn from these neural networks to become capable of identifying an individual based on a small number of images. In this way, the facial recognition system uses a meta-model that learns from the neural network trained to identify an individual based on a small number of images. Such a facial recognition system is tested to detect any misidentification so that the system can be fine-tuned. A facial recognition system implemented using such a meta-model can adapt to learn identities entered into the system using only a small number of images to enroll each identity.
    Type: Grant
    Filed: July 16, 2018
    Date of Patent: November 10, 2020
    Assignee: Adobe Inc.
    Inventors: Haoxiang Li, Zhe Lin, Muhammad Abdullah Jamal
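    A nearest-centroid classifier is one simple stand-in for the few-shot enrollment this abstract describes: each identity is enrolled from a handful of face embeddings, and a query is matched to the closest prototype. The embedding vectors and the centroid/Euclidean-distance choice are illustrative assumptions, not the patent's meta-model.

```python
import numpy as np

def enroll(embeddings_by_identity):
    """Build one prototype (mean embedding) per identity from a small
    number of face embeddings."""
    return {name: np.mean(embs, axis=0)
            for name, embs in embeddings_by_identity.items()}

def identify(prototypes, query):
    """Return the enrolled identity whose prototype is closest to the query."""
    return min(prototypes,
               key=lambda name: np.linalg.norm(prototypes[name] - query))
```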
  • Publication number: 20200261604
    Abstract: The present invention provides a 19F-MR/fluorescence multi-mode molecular imaging and drug-loading diagnosis-treatment integrated nanoprobe, a preparation method, and an application. The nanoprobe is a nanoparticle formed by coating a mixture of a surfactant containing a molecular targeting treatment drug and a fluorescent dye with a perfluorocarbon (PFC) carrier. The drug-loaded nanoparticle, capable of being used for 19F-MR imaging, may be prepared by uniformly dispersing the mixed solution into water and glycerol, processing it ultrasonically, removing any component that is not effectively coated, and purifying the result.
    Type: Application
    Filed: April 23, 2020
    Publication date: August 20, 2020
    Inventors: Xilin SUN, Lina WU, Jie YANG, Kai WANG, Lili YANG, Haoxiang LI, Yingbo LI, Xiaona LI, Shuang LIU
  • Publication number: 20200019758
    Abstract: Methods and systems are provided for generating a facial recognition system. A facial recognition system can be implemented using a meta-model based on a trained neural network. A neural network can be trained as multiple classifiers that identify individuals using a small number of images of each individual's face. A meta-model can learn from these neural networks to become capable of identifying an individual based on a small number of images. In this way, the facial recognition system uses a meta-model that learns from the neural network trained to identify an individual based on a small number of images. Such a facial recognition system is tested to detect any misidentification so that the system can be fine-tuned. A facial recognition system implemented using such a meta-model can adapt to learn identities entered into the system using only a small number of images to enroll each identity.
    Type: Application
    Filed: July 16, 2018
    Publication date: January 16, 2020
    Inventors: Haoxiang Li, Zhe Lin, Muhammad Abdullah Jamal
  • Publication number: 20200014869
    Abstract: A quantitative pulse count (event detection) algorithm with linearity to high count rates is accomplished by combining a high-speed, high frame rate camera with simple logic code run on a massively parallel processor such as a GPU. The parallel processor elements examine frames from the camera pixel by pixel to find and tag events or count pulses. The tagged events are combined to form a combined quantitative event image.
    Type: Application
    Filed: March 19, 2018
    Publication date: January 9, 2020
    Inventors: Justin Waugh, Daniel S. Dessau, Stephen P. Parham, Thomas Nummy, Justin Griffith, Xiaoqing Zhou, Haoxiang Li
  • Patent number: 10460154
    Abstract: Methods and systems for recognizing people in images with increased accuracy are disclosed. In particular, the methods and systems divide images into a plurality of clusters based on common characteristics of the images. The methods and systems also determine an image cluster to which an image with an unknown person instance most corresponds. One or more embodiments determine a probability that the unknown person instance is each known person instance in the image cluster using a trained cluster classifier of the image cluster. Optionally, the methods and systems determine context weights for each combination of an unknown person instance and each known person instance using a conditional random field algorithm based on a plurality of context cues associated with the unknown person instance and the known person instances. The methods and systems calculate a contextual probability based on the cluster-based probabilities and context weights to identify the unknown person instance.
    Type: Grant
    Filed: July 30, 2018
    Date of Patent: October 29, 2019
    Assignee: Adobe Inc.
    Inventors: Jonathan Brandt, Zhe Lin, Xiaohui Shen, Haoxiang Li
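    The abstract combines cluster-classifier probabilities with context weights into a contextual probability. A minimal sketch of one such combination, assuming a multiplicative weighting followed by normalization (the actual combination rule in the patent may differ):

```python
def contextual_probability(cluster_probs, context_weights):
    """Combine cluster-classifier probabilities with per-identity context
    weights (e.g. derived from a conditional random field over context
    cues) into a normalized contextual probability per known identity."""
    scores = {name: cluster_probs[name] * context_weights.get(name, 1.0)
              for name in cluster_probs}
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}
```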
  • Publication number: 20190258925
    Abstract: This disclosure covers methods, non-transitory computer readable media, and systems that learn attribute attention projections for attributes of digital images and parameters for an attention controlled neural network. By iteratively generating and comparing attribute-modulated-feature vectors from digital images, the methods, non-transitory computer readable media, and systems update attribute attention projections and parameters indicating either one (or both) of a correlation between some attributes of digital images and a discorrelation between other attributes of digital images. In certain embodiments, the methods, non-transitory computer readable media, and systems use the attribute attention projections in an attention controlled neural network as part of performing one or more tasks.
    Type: Application
    Filed: February 20, 2018
    Publication date: August 22, 2019
    Inventors: Haoxiang Li, Xiaohui Shen, Xiangyun Zhao
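    One way to picture an attribute attention projection is as a learned matrix that produces gating weights for the image features. The sketch below assumes sigmoid gating and element-wise modulation; both are illustrative choices, not details from the patent.

```python
import numpy as np

def attend(features, attention_projection):
    """Produce an attribute-modulated feature vector: project the features
    with a learned attribute attention projection, squash the result to
    (0, 1) attention weights with a sigmoid, and gate the original
    features element-wise."""
    weights = 1.0 / (1.0 + np.exp(-(attention_projection @ features)))
    return weights * features
```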
  • Publication number: 20190250568
    Abstract: The training of a learning agent to provide real-time control of an object is disclosed. Training of the learning agent and training of a corresponding pioneer agent are iteratively alternated. The training of the learning and pioneer agents is under the supervision of a supervisor agent. The training of the learning agent provides feedback for subsequent training of the pioneer agent. The training of the pioneer agent provides feedback for subsequent training of the learning agent. During the training, a supervisor coefficient modulates the influence of the supervisor agent. As agents are trained, the influence of the supervisor agent is decayed. The training of the learning agent, under a first level of supervisor influence, includes real-time control of the object. The subsequent training of the pioneer agent, under a reduced level of supervisor influence, includes replay of training data accumulated during the real-time control of the object.
    Type: Application
    Filed: February 12, 2018
    Publication date: August 15, 2019
    Inventors: Haoxiang Li, Yinan Zhang
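    The decaying supervisor influence described above can be sketched as a blend of the learning agent's action and the supervisor agent's action under a supervisor coefficient that shrinks as training proceeds. The geometric decay schedule and scalar actions are assumptions for illustration.

```python
def supervised_action(learner_action, supervisor_action, iteration, decay=0.9):
    """Blend the learning agent's action with the supervisor agent's
    action; the supervisor coefficient decays geometrically toward zero,
    so later iterations are dominated by the learning agent."""
    coeff = decay ** iteration
    return coeff * supervisor_action + (1.0 - coeff) * learner_action
```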
  • Publication number: 20190147224
    Abstract: Approaches are described for determining facial landmarks in images. An input image is provided to at least one trained neural network that determines a face region (e.g., bounding box of a face) of the input image and initial facial landmark locations corresponding to the face region. The initial facial landmark locations are provided to a 3D face mapper that maps the initial facial landmark locations to a 3D face model. A set of facial landmark locations are determined from the 3D face model. The set of facial landmark locations are provided to a landmark location adjuster that adjusts positions of the set of facial landmark locations based on the input image. The input image is presented on a user device using the adjusted set of facial landmark locations.
    Type: Application
    Filed: November 16, 2017
    Publication date: May 16, 2019
    Inventors: Haoxiang Li, Zhe Lin, Jonathan Brandt, Xiaohui Shen
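    The landmark pipeline in this abstract composes four stages. A minimal sketch, where all four callables are illustrative stand-ins for the trained components:

```python
def locate_landmarks(image, detector, face_mapper, adjuster):
    """Pipeline from the abstract: a trained network proposes a face
    region and initial landmark locations, a 3D face mapper regularizes
    the landmarks against a 3D face model, and a landmark location
    adjuster refines the positions against the input image."""
    face_region, initial_landmarks = detector(image)
    model_landmarks = face_mapper(initial_landmarks)
    return adjuster(model_landmarks, image)
```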
  • Publication number: 20180336401
    Abstract: Methods and systems for recognizing people in images with increased accuracy are disclosed. In particular, the methods and systems divide images into a plurality of clusters based on common characteristics of the images. The methods and systems also determine an image cluster to which an image with an unknown person instance most corresponds. One or more embodiments determine a probability that the unknown person instance is each known person instance in the image cluster using a trained cluster classifier of the image cluster. Optionally, the methods and systems determine context weights for each combination of an unknown person instance and each known person instance using a conditional random field algorithm based on a plurality of context cues associated with the unknown person instance and the known person instances. The methods and systems calculate a contextual probability based on the cluster-based probabilities and context weights to identify the unknown person instance.
    Type: Application
    Filed: July 30, 2018
    Publication date: November 22, 2018
    Inventors: Jonathan Brandt, Zhe Lin, Xiaohui Shen, Haoxiang Li
  • Patent number: 10068129
    Abstract: Methods and systems for recognizing people in images with increased accuracy are disclosed. In particular, the methods and systems divide images into a plurality of clusters based on common characteristics of the images. The methods and systems also determine an image cluster to which an image with an unknown person instance most corresponds. One or more embodiments determine a probability that the unknown person instance is each known person instance in the image cluster using a trained cluster classifier of the image cluster. Optionally, the methods and systems determine context weights for each combination of an unknown person instance and each known person instance using a conditional random field algorithm based on a plurality of context cues associated with the unknown person instance and the known person instances. The methods and systems calculate a contextual probability based on the cluster-based probabilities and context weights to identify the unknown person instance.
    Type: Grant
    Filed: November 18, 2015
    Date of Patent: September 4, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Jonathan Brandt, Zhe Lin, Xiaohui Shen, Haoxiang Li
  • Patent number: 9697416
    Abstract: Different candidate windows in an image are identified, such as by sliding a rectangular or other geometric shape of different sizes over an image to identify portions of the image (groups of pixels in the image). The candidate windows are analyzed by a set of convolutional neural networks, which are cascaded so that the input of one convolutional neural network layer is based on the output of another convolutional neural network layer. Each convolutional neural network layer drops or rejects one or more candidate windows that the convolutional neural network layer determines do not include an object (e.g., a face). The candidate windows that are identified as including an object (e.g., a face) are analyzed by another one of the convolutional neural network layers. The candidate windows identified by the last of the convolutional neural network layers are the indications of the objects (e.g., faces) in the image.
    Type: Grant
    Filed: June 29, 2016
    Date of Patent: July 4, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Xiaohui Shen, Haoxiang Li, Zhe Lin, Jonathan W. Brandt
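    The cascade's filtering behavior is simple to sketch: each stage keeps only the windows it accepts, and later (typically more expensive) stages see only the survivors. The per-stage classifiers below are illustrative stand-ins for the trained convolutional networks.

```python
def detect(candidate_windows, cascade):
    """Run candidate windows through a cascade of classifiers; each stage
    rejects windows it scores as non-face, and only survivors reach the
    next stage. Windows surviving the final stage are the detections."""
    survivors = candidate_windows
    for stage in cascade:
        survivors = [w for w in survivors if stage(w)]
    return survivors
```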
  • Publication number: 20170140213
    Abstract: Methods and systems for recognizing people in images with increased accuracy are disclosed. In particular, the methods and systems divide images into a plurality of clusters based on common characteristics of the images. The methods and systems also determine an image cluster to which an image with an unknown person instance most corresponds. One or more embodiments determine a probability that the unknown person instance is each known person instance in the image cluster using a trained cluster classifier of the image cluster. Optionally, the methods and systems determine context weights for each combination of an unknown person instance and each known person instance using a conditional random field algorithm based on a plurality of context cues associated with the unknown person instance and the known person instances. The methods and systems calculate a contextual probability based on the cluster-based probabilities and context weights to identify the unknown person instance.
    Type: Application
    Filed: November 18, 2015
    Publication date: May 18, 2017
    Inventors: Jonathan Brandt, Zhe Lin, Xiaohui Shen, Haoxiang Li
  • Patent number: 9563825
    Abstract: A convolutional neural network is trained to analyze input data in various different manners. The convolutional neural network includes multiple layers, one of which is a convolution layer that performs a convolution, for each of one or more filters in the convolution layer, of the filter over the input data. The convolution includes generation of an inner product based on the filter and the input data. Both the filter of the convolution layer and the input data are binarized, allowing the inner product to be computed using particular operations that are typically faster than multiplication of floating point values. The possible results for the convolution layer can optionally be pre-computed and stored in a look-up table. Thus, during operation of the convolutional neural network, rather than performing the convolution on the input data, the pre-computed result can be obtained from the look-up table.
    Type: Grant
    Filed: November 20, 2014
    Date of Patent: February 7, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Xiaohui Shen, Haoxiang Li, Zhe Lin, Jonathan W. Brandt
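    With both the filter and the input binarized to {-1, +1}, the inner product reduces to bit operations: pack the signs as bits, count matching positions with XNOR and popcount, and the pre-computed results for one filter fit in a small look-up table. A minimal sketch (the bit-packing convention here is an assumption):

```python
def binary_inner_product(filter_bits, input_bits, n_bits):
    """Inner product of two {-1, +1} vectors packed as bit masks
    (bit 1 = +1, bit 0 = -1): matches = popcount(XNOR(a, b)) over the
    low n_bits, and the inner product is matches - (n_bits - matches)."""
    matches = bin(~(filter_bits ^ input_bits) & ((1 << n_bits) - 1)).count("1")
    return 2 * matches - n_bits

# Pre-compute all possible results for one 3-bit filter into a look-up table,
# so the convolution can read the table instead of recomputing the product.
lookup = {x: binary_inner_product(0b101, x, 3) for x in range(8)}
```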
  • Publication number: 20160307074
    Abstract: Different candidate windows in an image are identified, such as by sliding a rectangular or other geometric shape of different sizes over an image to identify portions of the image (groups of pixels in the image). The candidate windows are analyzed by a set of convolutional neural networks, which are cascaded so that the input of one convolutional neural network layer is based on the output of another convolutional neural network layer. Each convolutional neural network layer drops or rejects one or more candidate windows that the convolutional neural network layer determines do not include an object (e.g., a face). The candidate windows that are identified as including an object (e.g., a face) are analyzed by another one of the convolutional neural network layers. The candidate windows identified by the last of the convolutional neural network layers are the indications of the objects (e.g., faces) in the image.
    Type: Application
    Filed: June 29, 2016
    Publication date: October 20, 2016
    Applicant: Adobe Systems Incorporated
    Inventors: Xiaohui Shen, Haoxiang Li, Zhe Lin, Jonathan W. Brandt
  • Patent number: 9418319
    Abstract: Different candidate windows in an image are identified, such as by sliding a rectangular or other geometric shape of different sizes over an image to identify portions of the image (groups of pixels in the image). The candidate windows are analyzed by a set of convolutional neural networks, which are cascaded so that the input of one convolutional neural network layer is based on the output of another convolutional neural network layer. Each convolutional neural network layer drops or rejects one or more candidate windows that the convolutional neural network layer determines do not include an object (e.g., a face). The candidate windows that are identified as including an object (e.g., a face) are analyzed by another one of the convolutional neural network layers. The candidate windows identified by the last of the convolutional neural network layers are the indications of the objects (e.g., faces) in the image.
    Type: Grant
    Filed: November 21, 2014
    Date of Patent: August 16, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Xiaohui Shen, Haoxiang Li, Zhe Lin, Jonathan W. Brandt
  • Publication number: 20160148079
    Abstract: Different candidate windows in an image are identified, such as by sliding a rectangular or other geometric shape of different sizes over an image to identify portions of the image (groups of pixels in the image). The candidate windows are analyzed by a set of convolutional neural networks, which are cascaded so that the input of one convolutional neural network layer is based on the output of another convolutional neural network layer. Each convolutional neural network layer drops or rejects one or more candidate windows that the convolutional neural network layer determines do not include an object (e.g., a face). The candidate windows that are identified as including an object (e.g., a face) are analyzed by another one of the convolutional neural network layers. The candidate windows identified by the last of the convolutional neural network layers are the indications of the objects (e.g., faces) in the image.
    Type: Application
    Filed: November 21, 2014
    Publication date: May 26, 2016
    Inventors: Xiaohui Shen, Haoxiang Li, Zhe Lin, Jonathan W. Brandt
  • Publication number: 20160148078
    Abstract: A convolutional neural network is trained to analyze input data in various different manners. The convolutional neural network includes multiple layers, one of which is a convolution layer that performs a convolution, for each of one or more filters in the convolution layer, of the filter over the input data. The convolution includes generation of an inner product based on the filter and the input data. Both the filter of the convolution layer and the input data are binarized, allowing the inner product to be computed using particular operations that are typically faster than multiplication of floating point values. The possible results for the convolution layer can optionally be pre-computed and stored in a look-up table.
    Type: Application
    Filed: November 20, 2014
    Publication date: May 26, 2016
    Inventors: Xiaohui Shen, Haoxiang Li, Zhe Lin, Jonathan W. Brandt