Patents by Inventor Yezhou YANG

Yezhou YANG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250117935
    Abstract: In accordance with some embodiments of the disclosed subject matter, systems, methods, and media for automatically transforming a digital image into a simulated pathology image are provided. In some embodiments, the method comprises: receiving a content image from an endomicroscopy device; receiving, from a hidden layer of a convolutional neural network (CNN) trained to recognize a multitude of classes of common objects, features indicative of content of the content image; providing a style reference image to the CNN; receiving, from another hidden layer of the CNN, features indicative of a style of the style reference image; receiving, from the hidden layers of the CNN, features indicative of content and style of a target image; generating a loss value based on the features of the content image, the style reference image, and the target image; minimizing the loss value; and displaying the target image with the minimized loss.
    Type: Application
    Filed: October 18, 2024
    Publication date: April 10, 2025
    Inventors: Mohammadhassan Izadyyazdanabadi, Mark C. Preul, Evgenii Belykh, Yezhou Yang
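The abstract above describes a neural style transfer loss built from CNN hidden-layer features: a content term comparing the target to the content image, and a style term comparing feature correlations with the style reference. The sketch below is a minimal illustration of such a loss, not the patented implementation; the function names, weights, and use of Gram matrices for the style term are assumptions.

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, height*width) activations from a CNN hidden layer;
    # the Gram matrix captures correlations between feature channels (style)
    return features @ features.T / features.shape[1]

def style_transfer_loss(target_content, target_style,
                        content_feat, style_feat,
                        alpha=1.0, beta=1e-3):
    # content term: distance between target and content-image features
    content_loss = np.mean((target_content - content_feat) ** 2)
    # style term: distance between Gram matrices of target and style reference
    style_loss = np.mean((gram_matrix(target_style) - gram_matrix(style_feat)) ** 2)
    # weighted combination; minimizing this over the target image pixels
    # would yield the simulated pathology image
    return alpha * content_loss + beta * style_loss
```

In practice the target image would be optimized by gradient descent on this loss; here only the loss computation itself is sketched.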
  • Patent number: 12131461
    Abstract: In accordance with some embodiments of the disclosed subject matter, systems, methods, and media for automatically transforming a digital image into a simulated pathology image are provided. In some embodiments, the method comprises: receiving a content image from an endomicroscopy device; receiving, from a hidden layer of a convolutional neural network (CNN) trained to recognize a multitude of classes of common objects, features indicative of content of the content image; providing a style reference image to the CNN; receiving, from another hidden layer of the CNN, features indicative of a style of the style reference image; receiving, from the hidden layers of the CNN, features indicative of content and style of a target image; generating a loss value based on the features of the content image, the style reference image, and the target image; minimizing the loss value; and displaying the target image with the minimized loss.
    Type: Grant
    Filed: January 28, 2020
    Date of Patent: October 29, 2024
    Assignees: DIGNITY HEALTH, ARIZONA BOARD OF REGENTS ON BEHALF OF ARIZONA STATE UNIVERSITY
    Inventors: Mohammadhassan Izadyyazdanabadi, Mark C. Preul, Evgenii Belykh, Yezhou Yang
  • Publication number: 20240303349
    Abstract: A system may be configured for implementing targeted attacks on deep reinforcement learning-based autonomous driving with learned visual patterns. In some examples, processing circuitry receives first input specifying an initial state for a driving environment and user-configurable input specifying a target state. Processing circuitry may generate a representative dataset of the driving environment by performing multiple rollouts of the vehicle through the driving environment, including performing an action for the vehicle from the initial state with variable-strength noise added to determine a next state for each rollout resulting from the action. Processing circuitry may train an artificial intelligence model to output a next predicted state based on the representative dataset as training input. In such an example, processing circuitry outputs, from the artificial intelligence model, an attack plan against the autonomous driving agent to achieve the target state from the initial state.
    Type: Application
    Filed: March 8, 2024
    Publication date: September 12, 2024
    Applicant: Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Prasanth Buddareddygari, Travis Zhang, Yezhou Yang, Yi Ren
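The pipeline in the abstract above has three steps: noisy rollouts from the initial state, fitting a predictive dynamics model on the collected transitions, and planning actions that steer the predicted state toward the target. The toy sketch below follows that shape under heavy simplifying assumptions: a made-up one-dimensional dynamics function and a linear least-squares model stand in for the real driving environment and trained AI model.

```python
import numpy as np

rng = np.random.default_rng(1)

def step(state, action, noise):
    # toy 1-D "driving" dynamics standing in for the real environment (assumption)
    return state + 0.1 * action + noise

# 1) multiple rollouts from the initial state with variable-strength noise
states, actions, nexts = [], [], []
s0 = 0.0
for sigma in (0.01, 0.05, 0.1):
    s = s0
    for _ in range(50):
        a = rng.uniform(-1, 1)
        s_next = step(s, a, rng.normal(0, sigma))
        states.append(s); actions.append(a); nexts.append(s_next)
        s = s_next

# 2) fit a predictive model next ~ w0*state + w1*action on the dataset
X = np.column_stack([states, actions])
w, *_ = np.linalg.lstsq(X, np.array(nexts), rcond=None)

# 3) plan: greedily choose actions that drive the predicted state to the target
target = 1.0
s, plan = s0, []
for _ in range(30):
    candidates = np.linspace(-1, 1, 21)
    preds = w[0] * s + w[1] * candidates
    a = candidates[np.argmin(np.abs(preds - target))]
    plan.append(a)
    s = w[0] * s + w[1] * a
```

After the loop, `plan` is the sequence of actions (the "attack plan" analogue) and `s` is the model-predicted final state, which should sit near the target.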
  • Publication number: 20240096047
    Abstract: Based on the characteristics of traffic images, a general pre-processing system and method reduces the input size of neural network object recognition models by focusing on necessary regions. The system includes a light neural network (binary or low-precision, depending on configuration) to detect target regions for further processing and applies a deeper model to those specific regions. The present disclosure provides experimental results on various types of methods, such as conventional convolutional neural networks, transformers, and adaptive models, to show the scalability of the system.
    Type: Application
    Filed: September 1, 2023
    Publication date: March 21, 2024
    Applicant: Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Mohammad Farhadi, Yezhou Yang, Rahul Santhosh Kumar Varma
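The two-stage idea above can be illustrated with a minimal sketch: a cheap first stage flags image regions worth processing, and the expensive model runs only on those. The variance-threshold filter below is a deliberately simple stand-in for the light neural network described in the abstract; all names and thresholds are assumptions.

```python
import numpy as np

def cheap_region_filter(image, block=8, thresh=10.0):
    # light first stage: flag blocks with enough variance to plausibly
    # contain objects (a stand-in for the binary/low-precision network)
    h, w = image.shape
    regions = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            if image[i:i + block, j:j + block].var() > thresh:
                regions.append((i, j))
    return regions

def detect(image, heavy_model, block=8):
    # run the expensive model only on the regions the light stage selected
    return [heavy_model(image[i:i + block, j:j + block])
            for i, j in cheap_region_filter(image, block=block)]
```

On a mostly empty traffic frame this skips nearly all blocks, which is where the runtime saving comes from.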
  • Publication number: 20220198332
    Abstract: A system and associated methods for decentralized attribution of GAN models are disclosed. Given a group of models derived from the same dataset and published by different users, attributability is achieved when a public verification service associated with each model (a linear classifier) returns positive only for outputs of that model. Each model is parameterized by keys distributed by a registry. The keys are computed from first-order sufficient conditions for decentralized attribution. The keys are orthogonal or opposite to each other and belong to a subspace dependent on the data distribution and the architecture of the generative model.
    Type: Application
    Filed: December 7, 2021
    Publication date: June 23, 2022
    Applicant: Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Yezhou Yang, Changhoon Kim, Yi Ren
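The key property in the abstract above, that a per-model linear verifier fires only on that model's outputs, is easy to demonstrate with orthogonal keys. The sketch below is a toy illustration, not the patented construction: the QR-based key generation, the additive perturbation, and the margin value are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 16
# registry distributes mutually orthogonal unit keys (here via QR, an
# illustrative choice; the disclosure derives keys from sufficient conditions)
keys = np.linalg.qr(rng.normal(size=(d, 3)))[0].T  # three orthonormal rows

def generate(base, key, eps=1.0):
    # a model parameterized by its key: outputs shifted along the key direction
    return base + eps * key

def verify(output, key, base, margin=0.5):
    # public linear classifier: positive only if the output moved along this key
    return float(key @ (output - base)) > margin

base = rng.normal(size=d)
outs = [generate(base, k) for k in keys]
```

Because the keys are orthogonal, key_i dot (eps * key_j) is eps when i == j and zero otherwise, so each verifier accepts exactly one model's outputs.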
  • Publication number: 20220121855
    Abstract: Temporal knowledge distillation for active perception is provided. Despite significant performance improvements in object detection and classification, deep structures still require prohibitive runtime to process images while maintaining the highest possible performance for real-time applications. Observing that the human visual system (HVS) relies heavily on temporal dependencies among frames of visual input to conduct recognition efficiently, embodiments described herein propose a novel framework dubbed temporal knowledge distillation (TKD). The TKD framework distills temporal knowledge gained from a heavy neural network-based model over selected video frames (e.g., the perception of the moments) into a light-weight model. To enable the distillation, two novel procedures are described: 1) a long short-term memory (LSTM)-based key frame selection method; and 2) a novel teacher-bounded loss design.
    Type: Application
    Filed: October 18, 2021
    Publication date: April 21, 2022
    Applicant: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Mohammad Farhadi, Yezhou Yang
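The two procedures named above can be sketched in a few lines. Both functions below are illustrative stand-ins, not the disclosed designs: the "teacher-bounded" loss here simply stops penalizing the student wherever it already matches or beats the teacher, and a plain top-k pick replaces the LSTM-based key frame selector.

```python
import numpy as np

def teacher_bounded_loss(student_pred, teacher_pred, target):
    # penalize the student only where it is worse than the teacher
    # (a sketch of the "teacher-bounded" idea; the exact form may differ)
    s_err = (student_pred - target) ** 2
    t_err = (teacher_pred - target) ** 2
    return np.mean(np.where(s_err > t_err, s_err, 0.0))

def select_key_frames(frame_scores, k=2):
    # stand-in for the LSTM-based selector: pick the k frames with the
    # largest predicted benefit from running the heavy teacher model
    return sorted(np.argsort(frame_scores)[-k:].tolist())
```

The bounded loss keeps the light-weight student from being dragged below the teacher's quality, while key frame selection limits how often the heavy model must run.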
  • Publication number: 20220067453
    Abstract: Adaptive and hierarchical convolutional neural networks (AH-CNNs) using partial reconfiguration on a field-programmable gate array (FPGA) are provided. An AH-CNN is implemented to adaptively switch between shallow and deep networks to reach a higher throughput on resource-constrained devices, such as a multiprocessor system on a chip (MPSoC) with a central processing unit (CPU) and FPGA. To this end, the AH-CNN includes a novel CNN architecture having three parts: 1) a shallow part which is a light-weight CNN model, 2) a decision layer which evaluates the shallow part's performance and makes a decision whether deeper processing would be beneficial, and 3) one or more deep parts which are deep CNNs with a high inference accuracy.
    Type: Application
    Filed: September 1, 2021
    Publication date: March 3, 2022
    Applicant: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Mohammad Farhadi, Yezhou Yang
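The three-part AH-CNN architecture above (shallow part, decision layer, deep parts) maps naturally onto a confidence-gated inference loop. The sketch below is a minimal software analogue of that control flow; the softmax-confidence decision rule and the 0.8 threshold are assumptions, and on the actual MPSoC the deep part would be a partially reconfigured FPGA region rather than a Python callable.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def adaptive_infer(x, shallow, deep, threshold=0.8):
    # 1) run the light-weight shallow part first
    probs = softmax(shallow(x))
    # 2) decision layer: accept the cheap answer if confident enough...
    if probs.max() >= threshold:
        return int(np.argmax(probs)), "shallow"
    # 3) ...otherwise fall through to a deep part with higher accuracy
    return int(np.argmax(softmax(deep(x)))), "deep"
```

Easy inputs exit after the shallow part, so average throughput rises while hard inputs still get the accurate deep model.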
  • Publication number: 20220051400
    Abstract: In accordance with some embodiments of the disclosed subject matter, systems, methods, and media for automatically transforming a digital image into a simulated pathology image are provided. In some embodiments, the method comprises: receiving a content image from an endomicroscopy device; receiving, from a hidden layer of a convolutional neural network (CNN) trained to recognize a multitude of classes of common objects, features indicative of content of the content image; providing a style reference image to the CNN; receiving, from another hidden layer of the CNN, features indicative of a style of the style reference image; receiving, from the hidden layers of the CNN, features indicative of content and style of a target image; generating a loss value based on the features of the content image, the style reference image, and the target image; minimizing the loss value; and displaying the target image with the minimized loss.
    Type: Application
    Filed: January 28, 2020
    Publication date: February 17, 2022
    Inventors: Mohammadhassan Izadyyazdanabadi, Mark C. Preul, Evgenii Belykh, Yezhou Yang
  • Patent number: 10849532
    Abstract: Methods and systems are presented for kinematic tracking and assessment of upper extremity function of a patient. A sequence of 2D images of a patient performing upper extremity function assessment tasks is captured by one or more cameras. The captured images are processed to separately track body movements in 3D space, hand movements, and object movements. The hand movements are tracked by adjusting the position, orientation, and finger positions of a three-dimensional virtual model of a hand to match the hand in each 2D image. Based on the tracked movement data, the system is able to identify specific aspects of upper extremity function that exhibit impairment instead of providing only a generalized indication of upper extremity impairment.
    Type: Grant
    Filed: December 10, 2018
    Date of Patent: December 1, 2020
    Assignee: ARIZONA BOARD OF REGENTS ON BEHALF OF ARIZONA STATE UNIVERSITY
    Inventors: Marco Santello, Yezhou Yang, Qiushi Fu
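Fitting a virtual hand model to a 2D image, as described above, is at heart a model-to-observation optimization: adjust the model's pose parameters until its projection matches the image. The toy sketch below reduces that idea to a single joint angle fitted by grid search; the one-segment "finger", the projection function, and the search grid are all illustrative assumptions.

```python
import numpy as np

def project(joint_angle, length=1.0):
    # 2-D projection of a one-joint "finger" of a virtual hand model
    # (toy stand-in for projecting the full 3-D hand model into the image)
    return np.array([length * np.cos(joint_angle), length * np.sin(joint_angle)])

def fit_hand(observed_tip, angles=np.linspace(0, np.pi / 2, 91)):
    # adjust the model parameter (here, one angle) to best match the
    # fingertip keypoint observed in the 2-D image
    errors = [np.linalg.norm(project(a) - observed_tip) for a in angles]
    return angles[int(np.argmin(errors))]
```

The real system fits many joint angles plus global position and orientation per frame; the recovered parameters over time are what enable joint-level impairment analysis.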
  • Publication number: 20190143517
    Abstract: Various embodiments of systems and methods for collision-free trajectory planning in human-robot interaction through hand movement prediction from vision are disclosed.
    Type: Application
    Filed: November 14, 2018
    Publication date: May 16, 2019
    Applicant: Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Yezhou Yang, Wenlong Zhang, Yiwei Wang, Xin Ye
  • Publication number: 20160221190
    Abstract: Various systems may benefit from computer learning. For example, robotics systems may benefit from learning actions, such as manipulation actions, from unconstrained videos. A method can include processing a set of video images to obtain a collection of semantic entities. The method can also include processing the semantic entities to obtain at least one visual sentence from the set of video images. The method can further include deriving an action plan for a robot from the at least one visual sentence. The method can additionally include implementing the action plan by the robot. The processing of the video images, the processing of the semantic entities, and the derivation of the action plan can be computer-implemented.
    Type: Application
    Filed: January 29, 2016
    Publication date: August 4, 2016
    Inventors: Yiannis ALOIMONOS, Cornelia FERMULLER, Yezhou YANG, Yi LI, Katerina PASTRA
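The entity-to-sentence-to-plan pipeline in the abstract above can be sketched symbolically: detected semantic entities are composed into a (subject, action, object) visual sentence, from which a step-by-step robot plan is derived. Everything below is an illustrative toy; the entity keys, the triple structure, and the plan steps are assumptions, not the disclosed grammar.

```python
def visual_sentence(entities):
    # compose detected semantic entities into a (subject, action, object)
    # triple, the "visual sentence" extracted from the video
    return (entities["hand"], entities["action"], entities["object"])

def action_plan(sentence):
    # derive a simple executable plan for the robot from the visual sentence
    subject, action, obj = sentence
    return [f"locate {obj}",
            f"grasp {obj} with {subject}",
            f"perform {action}"]
```

A real system grounds each symbolic step in perception and motor primitives; the point here is only the video-to-symbols-to-plan flow.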