Patents by Inventor Yurong Chen

Yurong Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250259055
    Abstract: A PGConv layer extracts features from grid-structured data samples. The PGConv layer may receive an input feature map including a grid representation of an object, generated from a graph representation of the object. The grid representation includes node elements arranged in a grid pattern. The PGConv layer may perform padding on the grid representation to generate an input feature map (IFM) that includes the node elements and additional node elements. An additional node element may have a value of zero or the value of a node element in the grid representation. The PGConv layer may also generate an attentive kernel that includes attentive weights determined based on the IFM. The PGConv layer may generate a dynamic kernel based on the attentive kernel and a convolutional kernel generated through training. The PGConv layer may further perform multiply-accumulate (MAC) operations on the IFM and the dynamic kernel to generate an output feature map (OFM).
    Type: Application
    Filed: May 16, 2022
    Publication date: August 14, 2025
    Applicant: Intel Corporation
    Inventors: Anbang Yao, Chao Li, Yangyuxuan Kang, Dongqi Cai, Xiaolong Liu, Yi Yang, Yurong Chen
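The pipeline in this abstract (pad the grid, derive attentive weights from the input feature map, combine them with a trained static kernel, then run MAC operations) can be sketched as below. This is a minimal toy, not the patented design: the grid size, the global-average-pool attention, and the element-wise kernel combination are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 6x6 single-channel grid representation of an object.
grid = rng.normal(size=(6, 6))

# Zero padding so a 3x3 kernel produces a same-size output; the padded
# result is the input feature map (IFM) with additional node elements.
ifm = np.pad(grid, 1)

# Static convolutional kernel, standing in for weights learned in training.
static_kernel = rng.normal(size=(3, 3))

# Attentive kernel: weights derived from the IFM itself. Here a softmax
# over mean activations of shifted windows is a toy attention mechanism;
# the abstract does not specify this exact form.
ctx = np.array([[ifm[i:i + 6, j:j + 6].mean() for j in range(3)] for i in range(3)])
attentive_kernel = np.exp(ctx) / np.exp(ctx).sum()

# Dynamic kernel: element-wise combination of attentive and static kernels.
dynamic_kernel = attentive_kernel * static_kernel

# MAC (multiply-accumulate) loop producing the output feature map (OFM).
ofm = np.zeros((6, 6))
for i in range(6):
    for j in range(6):
        ofm[i, j] = np.sum(ifm[i:i + 3, j:j + 3] * dynamic_kernel)
```

The key property illustrated is that, unlike a standard convolution, the effective kernel changes with the input because the attentive weights are recomputed from each IFM.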
  • Publication number: 20250238895
    Abstract: The application relates to a multi-exit visual synthesis network (VSN) based on dynamic patch computing. A method for visual synthesis is provided and includes: splitting an input image into multiple input patches; performing a synthesis process on each input patch with a first layer to an ith exit layer of a multi-exit VSN to obtain an ith intermediate synthesis patch, where i is an index of an intermediate exit of the VSN and predetermined as an integer greater than or equal to 1; predicting an incremental improvement of a (i+1)th intermediate synthesis patch relative to the ith intermediate synthesis patch based on features in the ith intermediate synthesis patch; determining a final exit of the VSN and a final synthesis patch for the input patch based on the predicted incremental improvement; and merging respective final synthesis patches for the multiple input patches to generate an output image.
    Type: Application
    Filed: May 6, 2022
    Publication date: July 24, 2025
    Inventors: Ming Lu, Anbang Yao, Yanjie Pan, Li Xu, Yurong Chen
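The patch-wise early-exit control flow described above can be sketched as follows. The layer transform, the improvement predictor, and the threshold are toy stand-ins; the abstract only specifies that synthesis stops at the exit where the predicted incremental improvement no longer justifies continuing.

```python
import numpy as np

rng = np.random.default_rng(1)

def exit_layer(patch, i):
    """Stand-in for the layers up to the i-th exit of the multi-exit VSN."""
    return patch * (1.0 - 0.1 / (i + 1))

def predict_improvement(patch, i):
    """Stand-in predictor of the (i+1)-th exit's incremental improvement."""
    return 1.0 / (i + 2)  # toy: predicted gain shrinks at deeper exits

def synthesize(image, n_exits=4, threshold=0.3):
    h, w = image.shape
    out = np.zeros_like(image)
    # Split the input image into non-overlapping 4x4 input patches.
    for r in range(0, h, 4):
        for c in range(0, w, 4):
            patch = image[r:r + 4, c:c + 4]
            for i in range(n_exits):
                patch = exit_layer(patch, i)
                # Take this exit once the predicted incremental improvement
                # of the next exit falls below the threshold.
                if i + 1 == n_exits or predict_improvement(patch, i) < threshold:
                    break
            out[r:r + 4, c:c + 4] = patch  # merge the final synthesis patches
    return out

result = synthesize(rng.normal(size=(8, 8)))
```

Easy patches exit early and cheap, hard patches run deeper, which is the source of the method's compute savings.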
  • Patent number: 12331329
    Abstract: Provided are compositions for genome editing and site-directed integration in plants comprising microprojectile particles coated, treated, or applied with a recombinant DNA construct comprising a sequence encoding one or more genome editing reagents for delivery to a mature embryo explant from dry seeds. Further provided are methods for genome editing and site-directed integration in at least one cell of a plant using the disclosed compositions, and plants, plant parts and seeds comprising an edited genome or site-directed integration, which are produced by the disclosed methods.
    Type: Grant
    Filed: May 24, 2019
    Date of Patent: June 17, 2025
    Assignee: Monsanto Technology LLC
    Inventors: Yurong Chen, Annie Saltarikos, Jianping Xu, Xudong Ye
  • Patent number: 12299927
    Abstract: Apparatus and methods for three-dimensional pose estimation are disclosed herein. An example apparatus includes an image synchronizer to synchronize a first image generated by a first image capture device and a second image generated by a second image capture device, the first image and the second image including a subject; a two-dimensional pose detector to predict first positions of keypoints of the subject based on the first image and by executing a first neural network model to generate first two-dimensional data and predict second positions of the keypoints based on the second image and by executing the first neural network model to generate second two-dimensional data; and a three-dimensional pose calculator to generate a three-dimensional graphical model representing a pose of the subject in the first image and the second image based on the first two-dimensional data, the second two-dimensional data, and by executing a second neural network model.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: May 13, 2025
    Assignee: Intel Corporation
    Inventors: Shandong Wang, Yangyuxuan Kang, Anbang Yao, Ming Lu, Yurong Chen
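The last step of the pipeline above, lifting synchronized 2D keypoint detections from two cameras into a 3D pose, is classically done per keypoint by triangulation. The sketch below uses standard direct linear transform (DLT) triangulation with made-up camera matrices; the patent's "second neural network model" for the 3D stage is not reproduced here.

```python
import numpy as np

# Hypothetical projection matrices for two synchronized, calibrated cameras.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera 1 at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # camera 2 shifted

def project(P, X):
    """Project a 3D point to 2D pixel coordinates with camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """DLT triangulation of one keypoint seen in both views."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                 # null vector of A, in homogeneous coordinates
    return X[:3] / X[3]

# Toy keypoint; in the described apparatus the 2D detections x1, x2 would
# come from the two-dimensional pose detector run on each synchronized image.
X_true = np.array([0.2, -0.1, 3.0])
x1, x2 = project(P1, X_true), project(P2, X_true)
X_est = triangulate(P1, P2, x1, x2)
```

With noise-free detections the DLT recovers the 3D point exactly; real keypoints are noisy, which is one reason to refine with a learned model.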
  • Publication number: 20250148761
    Abstract: The disclosure provides an apparatus, method, device and medium for 3D dynamic sparse convolution. The method includes: receiving an input feature map of a 3D data sample; performing input feature map partition to divide the input feature map into a plurality of disjoint input feature map groups; performing a shared 3D dynamic sparse convolution to the plurality of disjoint input feature map groups respectively to obtain a plurality of output feature maps corresponding to the plurality of disjoint input feature map groups, wherein the shared 3D dynamic sparse convolution comprises a shared 3D dynamic sparse convolutional kernel; and performing output feature map grouping to sequentially stack the plurality of output feature maps to obtain an output feature map corresponding to the input feature map. (FIG. 2).
    Type: Application
    Filed: March 3, 2022
    Publication date: May 8, 2025
    Inventors: Dongqi CAI, Anbang YAO, Chao LI, Shandong WANG, Yurong CHEN
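The grouping structure in this abstract (partition the input feature map into disjoint groups, apply one shared 3D kernel to each group, then stack the outputs back) can be sketched as below. This toy omits the dynamic-sparsity of the kernel itself and uses a naive valid 3D convolution; the group count and shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def conv3d_valid(vol, kernel):
    """Minimal single-channel 3D valid convolution (toy; no stride/padding)."""
    d, h, w = np.array(vol.shape) - np.array(kernel.shape) + 1
    out = np.zeros((d, h, w))
    for z in range(d):
        for y in range(h):
            for x in range(w):
                out[z, y, x] = np.sum(
                    vol[z:z + kernel.shape[0],
                        y:y + kernel.shape[1],
                        x:x + kernel.shape[2]] * kernel)
    return out

# Input feature map of a 3D data sample: 4 channels of 5x5x5 volumes.
ifm = rng.normal(size=(4, 5, 5, 5))

# Input feature map partition: two disjoint groups of 2 channels each.
groups = np.split(ifm, 2, axis=0)

# One shared 3x3x3 convolutional kernel reused across all groups.
shared_kernel = rng.normal(size=(3, 3, 3))

# Shared convolution per group, then stack the per-group outputs back into
# a single output feature map.
out_groups = [np.stack([conv3d_valid(ch, shared_kernel) for ch in g])
              for g in groups]
ofm = np.concatenate(out_groups, axis=0)
```

Sharing one kernel across groups is what cuts the parameter count relative to giving each group its own kernel.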
  • Publication number: 20250117639
    Abstract: Methods, apparatus, systems and articles of manufacture for loss-error-aware quantization of a low-bit neural network are disclosed. An example apparatus includes a network weight partitioner to partition unquantized network weights of a first network model into a first group to be quantized and a second group to be retrained. The example apparatus includes a loss calculator to process network weights to calculate a first loss. The example apparatus includes a weight quantizer to quantize the first group of network weights to generate low-bit second network weights. In the example apparatus, the loss calculator is to determine a difference between the first loss and a second loss. The example apparatus includes a weight updater to update the second group of network weights based on the difference. The example apparatus includes a network model deployer to deploy a low-bit network model including the low-bit second network weights.
    Type: Application
    Filed: September 16, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Anbang Yao, Aojun Zhou, Kuan Wang, Hao Zhao, Yurong Chen
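The partition/quantize/measure/retrain loop in this abstract can be sketched on a toy linear model. The uniform quantizer, the magnitude-based partition, and the gradient-descent retraining are illustrative stand-ins; only the overall structure (quantize one group, measure the loss difference, update the other group to compensate) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)

def loss(w, x, y):
    """Toy squared-error loss for a linear model y ~ x @ w."""
    return float(np.mean((x @ w - y) ** 2))

def quantize(w, bits=2):
    """Uniform low-bit quantization (toy stand-in)."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 0.5)
    return np.round(w / scale) * scale

x = rng.normal(size=(64, 8))
w = rng.normal(size=8)
y = x @ w                            # targets the dense model fits exactly

# Partition the network weights: one group to be quantized (largest
# magnitudes here, as a toy rule), one group to be retrained.
order = np.argsort(-np.abs(w))
q_idx, r_idx = order[:4], order[4:]

first_loss = loss(w, x, y)           # first loss, before quantization

w_q = w.copy()
w_q[q_idx] = quantize(w[q_idx])      # quantize the first group to low bits

second_loss = loss(w_q, x, y)        # second loss, after quantization
diff = second_loss - first_loss      # loss error introduced by quantization

# Update (retrain) the second group by gradient descent to compensate.
for _ in range(200):
    grad = 2 * x.T @ (x @ w_q - y) / len(y)
    w_q[r_idx] -= 0.05 * grad[r_idx]

final_loss = loss(w_q, x, y)
```

The retained full-precision group absorbs part of the quantization error, which is the loss-error-aware idea in miniature.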
  • Publication number: 20250068891
    Abstract: Methods, apparatus, systems and articles of manufacture (e.g., physical storage media) to implement dynamic triplet convolution for convolutional neural networks are disclosed. An example apparatus disclosed herein for a convolutional neural network is to calculate one or more scalar kernels based on an input feature map applied to a layer of the convolutional neural network, ones of the one or more scalar kernels corresponding to respective dimensions of a static multidimensional convolutional filter associated with the layer of the convolutional neural network. The disclosed example apparatus is also to scale elements of the static multidimensional convolutional filter along a first one of the dimensions based on a first one of the one or more scalar kernels corresponding to the first one of the dimensions to determine a dynamic multidimensional convolutional filter associated with the layer of the convolutional neural network.
    Type: Application
    Filed: February 18, 2022
    Publication date: February 27, 2025
    Inventors: Dongqi CAI, Anbang YAO, Chao LI, Yurong CHEN, Wenjian SHAO
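The scaling scheme above (one scalar kernel per dimension of a static filter, computed from the input feature map) can be sketched as below. The pooled-context-plus-sigmoid "attention" is a toy stand-in for whatever learned mapping produces the scalars; the filter shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Static multidimensional convolutional filter: (out_ch, in_ch, k, k).
static_filter = rng.normal(size=(8, 4, 3, 3))

# Input feature map applied to this layer: (in_ch, H, W).
ifm = rng.normal(size=(4, 16, 16))

def scalar_kernel(ifm, dim_size):
    """Toy attention: pooled context -> sigmoid scalars for one filter dim."""
    ctx = ifm.mean()                        # global average pooled context
    raw = ctx + rng.normal(size=dim_size)   # stand-in for a learned projection
    return 1.0 / (1.0 + np.exp(-raw))       # one scalar per slice of the dim

# One scalar kernel per filter dimension (the "triplet").
s_out = scalar_kernel(ifm, 8)
s_in = scalar_kernel(ifm, 4)
s_sp = scalar_kernel(ifm, 3 * 3).reshape(3, 3)

# Scale the static filter along each dimension via broadcasting to obtain
# the input-dependent dynamic filter.
dynamic_filter = (static_filter
                  * s_out[:, None, None, None]
                  * s_in[None, :, None, None]
                  * s_sp[None, None, :, :])
```

The static filter stores the learned capacity; the per-input scalars make the effective filter dynamic at negligible extra parameter cost.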
  • Publication number: 20250068916
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for teacher-free self-feature distillation training of machine-learning (ML) models. An example apparatus includes at least one memory, instructions, and processor circuitry to at least one of execute or instantiate the instructions to perform a first comparison of (i) a first group of a first set of feature channels (FCs) of an ML model and (ii) a second group of the first set, perform a second comparison of (iii) a first group of a second set of FCs of the ML model and one of (iv) a third group of the first set or a first group of a third set of FCs of the ML model, adjust parameter(s) of the ML model based on the first and/or second comparisons, and, in response to an error value satisfying a threshold, deploy the ML model to execute a workload based on the parameter(s).
    Type: Application
    Filed: February 21, 2022
    Publication date: February 27, 2025
    Inventors: Yurong Chen, Anbang Yao, Yi Qian, Yu Zhang, Shandong Wang
  • Publication number: 20250053814
    Abstract: A mechanism is described for facilitating slimming of neural networks in machine learning environments. A method of embodiments, as described herein, includes learning a first neural network associated with machine learning processes to be performed by a processor of a computing device, where learning includes analyzing a plurality of channels associated with one or more layers of the first neural network. The method may further include computing a plurality of scaling factors to be associated with the plurality of channels such that each channel is assigned a scaling factor, wherein each scaling factor to indicate relevance of a corresponding channel within the first neural network. The method may further include pruning the first neural network into a second neural network by removing one or more channels of the plurality of channels having low relevance as indicated by one or more scaling factors of the plurality of scaling factors assigned to the one or more channels.
    Type: Application
    Filed: August 14, 2024
    Publication date: February 13, 2025
    Applicant: Intel Corporation
    Inventors: Yurong Chen, Jianguo Li, Renkun Ni
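The channel-pruning step this abstract describes can be sketched in a few lines: each channel carries a learned scaling factor indicating its relevance, and low-relevance channels are removed. The percentile threshold and the shapes are illustrative choices, not the patented procedure.

```python
import numpy as np

rng = np.random.default_rng(5)

# Per-channel scaling factors for a layer with 8 channels, as learned
# during training (in the network-slimming literature these are often the
# batch-norm gamma values, pushed toward zero by an L1 penalty).
scaling_factors = np.abs(rng.normal(size=8))

# Channel weights of the first network's layer: (channels, k, k).
weights = rng.normal(size=(8, 3, 3))

# Prune channels whose scaling factors indicate low relevance, yielding
# the slimmer second network.
threshold = np.percentile(scaling_factors, 50)   # drop the bottom half
keep = scaling_factors > threshold

pruned_weights = weights[keep]
pruned_factors = scaling_factors[keep]
```

In practice the pruned model is then fine-tuned to recover accuracy lost to the removed channels.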
  • Patent number: 12220685
    Abstract: The present invention relates to the field of solid-phase extraction, and particularly to a solid-phase extraction material, and a preparation method and use thereof. The preparation method includes prepolymerizing the monomers N-vinylpyrrolidone and divinylbenzene in the presence of a chain transfer agent, adding the prepolymer dropwise to an emulsion of monodispersed seed microspheres, swelling, and reacting to prepare white spheres; and functionalizing the white spheres to obtain the solid-phase extraction material. The solid-phase extraction material prepared by the reaction has good spherical morphology, a large specific surface area, and a high ion exchange capacity. The prepared solid-phase extraction material separates and enriches PPCPs (pharmaceuticals and personal care products) by means of a variety of interaction forces with a high extraction rate. The extraction rate is generally maintained between 85% and 105%, and acidic, alkaline, neutral and amphoteric substances are capable of being selectively separated.
    Type: Grant
    Filed: May 6, 2022
    Date of Patent: February 11, 2025
    Assignee: NANJING UNIVERSITY
    Inventors: Qing Zhou, Ziang Zhang, Junxia Chen, Yurong Chen, Chongtian Lei, Ranqiu Wang, Weiwei Zhou, Libin Zhang
  • Publication number: 20250045573
    Abstract: The disclosure relates to decimal-bit network quantization of CNN models.
    Type: Application
    Filed: March 3, 2022
    Publication date: February 6, 2025
    Inventors: Anbang YAO, Yikai WANG, Zhaole SUN, Yi YANG, Feng CHEN, Zhuo WANG, Shandong WANG, Yurong CHEN
  • Publication number: 20250045582
    Abstract: Techniques related to compressing a pre-trained dense deep neural network to a sparsely connected deep neural network for efficient implementation are discussed. Such techniques may include iteratively pruning and splicing available connections between adjacent layers of the deep neural network and updating weights corresponding to both currently disconnected and currently connected connections between the adjacent layers.
    Type: Application
    Filed: August 14, 2024
    Publication date: February 6, 2025
    Applicant: Intel Corporation
    Inventors: Anbang Yao, Yiwen Guo, Yan Li, Yurong Chen
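The iterative prune-and-splice scheme this abstract describes can be sketched with a weight mask over the connections between two adjacent layers. The thresholds and the random "gradient" are toys; the essential point from the abstract, that both currently connected and currently disconnected weights keep receiving updates so pruned connections can be spliced back, is preserved.

```python
import numpy as np

rng = np.random.default_rng(6)

# Dense weights between two adjacent layers of the pre-trained network.
weights = rng.normal(size=(6, 6))
mask = np.ones_like(weights)          # 1 = connected, 0 = pruned

for step in range(5):
    # Pruning: disconnect weights whose magnitude falls below a threshold.
    mask[np.abs(weights) < 0.5] = 0.0
    # Splicing: re-connect weights that have grown important again.
    mask[np.abs(weights) > 1.0] = 1.0
    # Update ALL weights (connected and disconnected) with a toy gradient,
    # so pruned connections can recover and later be spliced back in.
    fake_grad = rng.normal(size=weights.shape) * 0.1
    weights -= fake_grad

sparse_weights = weights * mask       # effective sparsely connected network
sparsity = 1.0 - mask.mean()
```

Allowing splicing is what distinguishes this from one-shot magnitude pruning: early pruning mistakes are recoverable.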
  • Patent number: 12217163
    Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long-short time memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard-attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset of generated feature maps by providing weights to the selected subset of generated feature maps in obtaining weighted feature maps.
    Type: Grant
    Filed: September 22, 2023
    Date of Patent: February 4, 2025
    Assignee: Intel Corporation
    Inventors: Yiwen Guo, Yuqing Hou, Anbang Yao, Dongqi Cai, Lin Xu, Ping Hu, Shandong Wang, Wenhua Cheng, Yurong Chen, Libin Wang
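The two-stage attention of the RDQN example (hard attention selects a subset of the CNN's feature maps, soft attention then weights that subset) can be sketched as below. The mean-activation selection rule and the softmax weighting are illustrative assumptions; the abstract does not fix the scoring function.

```python
import numpy as np

rng = np.random.default_rng(9)

# Feature maps produced by the CNN from one input image: (n_maps, H, W).
feature_maps = rng.normal(size=(16, 7, 7))

# Hard attention: select a subset of the generated feature maps (here the
# 4 maps with the largest mean activation, as a toy selection rule).
scores = feature_maps.mean(axis=(1, 2))
top = np.argsort(-scores)[:4]
subset = feature_maps[top]

# Soft attention: weight the selected maps with a softmax over their
# scores to obtain the weighted feature maps passed on to the LSTM.
w = np.exp(scores[top]) / np.exp(scores[top]).sum()
weighted_maps = subset * w[:, None, None]
```

Hard attention caps the compute handed to the recurrent stage; soft attention preserves a graded ranking within the surviving subset.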
  • Patent number: 12165065
    Abstract: A mechanism is described for facilitating slimming of neural networks in machine learning environments. A method includes learning a first neural network associated with machine learning processes to be performed by a processor of a computing device, where learning includes analyzing a plurality of channels associated with one or more layers of the first neural network. The method may further include computing a plurality of scaling factors to be associated with the plurality of channels such that each channel is assigned a scaling factor, wherein each scaling factor to indicate relevance of a corresponding channel within the first neural network. The method may further include pruning the first neural network into a second neural network by removing one or more channels of the plurality of channels having low relevance as indicated by one or more scaling factors of the plurality of scaling factors assigned to the one or more channels.
    Type: Grant
    Filed: August 18, 2017
    Date of Patent: December 10, 2024
    Assignee: Intel Corporation
    Inventors: Yurong Chen, Jianguo Li, Renkun Ni
  • Patent number: 12154309
    Abstract: An example apparatus for mining multi-scale hard examples includes a convolutional neural network to receive a mini-batch of sample candidates and generate basic feature maps. The apparatus also includes a feature extractor and combiner to generate concatenated feature maps based on the basic feature maps and extract the concatenated feature maps for each of a plurality of received candidate boxes. The apparatus further includes a sample scorer and miner to score the candidate samples with multi-task loss scores and select candidate samples with multi-task loss scores exceeding a threshold score.
    Type: Grant
    Filed: September 6, 2023
    Date of Patent: November 26, 2024
    Assignee: Intel Corporation
    Inventors: Anbang Yao, Yun Ren, Hao Zhao, Tao Kong, Yurong Chen
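The scoring-and-selection step at the heart of this abstract can be sketched directly: score each candidate sample in the mini-batch with a multi-task loss, then keep only those above a threshold. The loss components and the median threshold are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Multi-task loss scores for a mini-batch of 16 candidate samples; in the
# described apparatus these would come from the detection network's
# classification and regression heads over the candidate boxes.
cls_loss = rng.uniform(size=16)
reg_loss = rng.uniform(size=16)
multi_task_loss = cls_loss + reg_loss

# Select candidate samples whose scores exceed a threshold: these are the
# "hard" examples the network currently gets most wrong.
threshold = np.median(multi_task_loss)
hard_idx = np.flatnonzero(multi_task_loss > threshold)
hard_examples = multi_task_loss[hard_idx]
```

Training preferentially on these high-loss candidates focuses gradient updates where the detector is weakest.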
  • Publication number: 20240370716
    Abstract: Methods and apparatus for discriminative semantic transfer and physics-inspired optimization in deep learning are disclosed. A computation training method for a convolutional neural network (CNN) includes receiving a sequence of training images in the CNN of a first stage to describe objects of a cluttered scene as a semantic segmentation mask. The semantic segmentation mask is received in a semantic segmentation network of a second stage to produce semantic features. Using weights from the first stage as feature extractors and weights from the second stage as classifiers, edges of the cluttered scene are identified using the semantic features.
    Type: Application
    Filed: July 11, 2024
    Publication date: November 7, 2024
    Inventors: Anbang YAO, Hao ZHAO, Ming LU, Yiwen GUO, Yurong CHEN
  • Patent number: 12112256
    Abstract: Methods, apparatus, systems and articles of manufacture for loss-error-aware quantization of a low-bit neural network are disclosed. An example apparatus includes a network weight partitioner to partition unquantized network weights of a first network model into a first group to be quantized and a second group to be retrained. The example apparatus includes a loss calculator to process network weights to calculate a first loss. The example apparatus includes a weight quantizer to quantize the first group of network weights to generate low-bit second network weights. In the example apparatus, the loss calculator is to determine a difference between the first loss and a second loss. The example apparatus includes a weight updater to update the second group of network weights based on the difference. The example apparatus includes a network model deployer to deploy a low-bit network model including the low-bit second network weights.
    Type: Grant
    Filed: July 26, 2018
    Date of Patent: October 8, 2024
    Assignee: Intel Corporation
    Inventors: Anbang Yao, Aojun Zhou, Kuan Wang, Hao Zhao, Yurong Chen
  • Publication number: 20240331371
    Abstract: Methods and apparatus to perform parallel double-batched self-distillation in resource-constrained image recognition environments are disclosed herein. Example apparatus disclosed herein are to identify a source data batch and an augmented data batch, the augmented data generated based on at least one data augmentation technique. Disclosed example apparatus is also to share one or more parameters between a student neural network corresponding to the source data batch and a teacher neural network corresponding to the augmented data batch, the one or more parameters including one or more convolution layers to be shared between the teacher neural network and the student neural network. Disclosed example apparatus is further to align knowledge corresponding to the teacher neural network and the student neural network, the knowledge corresponding to the one or more parameters shared between the student neural network and the teacher neural network.
    Type: Application
    Filed: November 30, 2021
    Publication date: October 3, 2024
    Inventors: Yurong Chen, Anbang Yao, Ming Lu, Dongqi Cai, Xiaolong Liu
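The parameter-sharing and knowledge-alignment structure of this abstract can be sketched with a pair of tiny branches: the student sees the source batch, the teacher sees the augmented batch, both reuse one shared layer, and an alignment loss penalizes disagreement. Every concrete choice here (flip augmentation, matrix "layers", MSE alignment) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(10)

# Source data batch and its augmented counterpart (toy augmentation: flip).
source = rng.normal(size=(8, 16))
augmented = source[:, ::-1].copy()

# One shared parameter matrix plays the convolution layers shared between
# teacher and student; each branch then adds its own small head.
shared = rng.normal(size=(16, 4))
student_head = rng.normal(size=(4, 4))
teacher_head = rng.normal(size=(4, 4))

student_feat = np.tanh(source @ shared) @ student_head
teacher_feat = np.tanh(augmented @ shared) @ teacher_head

# Knowledge alignment: penalize disagreement between the branches' features.
align_loss = float(np.mean((student_feat - teacher_feat) ** 2))
```

Because the "teacher" is just the same network on an augmented batch, no separately trained teacher model is needed, which is the resource saving the abstract targets.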
  • Publication number: 20240312196
    Abstract: An apparatus, method, device and medium for dynamic quadruple convolution in a 3-dimensional (3D) convolutional neural network (CNN) are provided. The method includes: a multi-dimensional attention block configured to: receive an input feature map of a video data sample; and dynamically generate convolutional kernel scalars along four dimensions of a 3-dimensional convolution kernel space based on the input feature map, the four dimensions comprising an output channel number, an input channel number, a temporal size and a spatial size; and a convolution block configured to sequentially multiply the generated convolutional kernel scalars with a static 3D convolution kernel in a matrix-vector product way to obtain a dynamic kernel of dynamic quadruple convolution.
    Type: Application
    Filed: November 30, 2021
    Publication date: September 19, 2024
    Inventors: Dongqi CAI, Anbang YAO, Yurong CHEN, Chao LI
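The four-way kernel modulation this abstract describes can be sketched as below: one scalar vector per dimension of the 3D convolution kernel space, multiplied sequentially onto a static kernel. Broadcasting stands in for the matrix-vector products, the attention function is a toy, and applying one spatial vector to both spatial axes is an illustrative simplification.

```python
import numpy as np

rng = np.random.default_rng(8)

# Static 3D convolution kernel: (out_ch, in_ch, temporal, spatial, spatial).
static_kernel = rng.normal(size=(8, 4, 3, 5, 5))

# Input feature map of a video data sample: (in_ch, T, H, W).
ifm = rng.normal(size=(4, 8, 16, 16))

def attention_scalars(ifm, n):
    """Toy multi-dimensional attention: pooled context -> sigmoid scalars."""
    ctx = ifm.mean()
    return 1.0 / (1.0 + np.exp(-(ctx + rng.normal(size=n))))

# One scalar vector per dimension: output channel number, input channel
# number, temporal size, and spatial size (the "quadruple").
a_out = attention_scalars(ifm, 8)
a_in = attention_scalars(ifm, 4)
a_t = attention_scalars(ifm, 3)
a_s = attention_scalars(ifm, 5)

# Sequentially multiply the generated scalars onto the static kernel to
# obtain the dynamic kernel of dynamic quadruple convolution.
dynamic_kernel = static_kernel.copy()
dynamic_kernel *= a_out[:, None, None, None, None]
dynamic_kernel *= a_in[None, :, None, None, None]
dynamic_kernel *= a_t[None, None, :, None, None]
dynamic_kernel *= a_s[None, None, None, :, None]
dynamic_kernel *= a_s[None, None, None, None, :]
```

As with the triplet variant, the static kernel holds the learned parameters while cheap per-input scalars make the effective kernel input-dependent across all four dimensions.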
  • Patent number: D1067978
    Type: Grant
    Filed: December 3, 2024
    Date of Patent: March 25, 2025
    Inventor: Yurong Chen