Patents Examined by Brent Johnston Hoover
  • Patent number: 11966832
    Abstract: A method includes receiving a first data set comprising embeddings of first and second types, generating a fixed adjacency matrix from the first data set, and applying a first stochastic binary mask to the fixed adjacency matrix to obtain a first subgraph of the fixed adjacency matrix. The method also includes processing the first subgraph through a first layer of a graph convolutional network (GCN) to obtain a first embedding matrix, and applying a second stochastic binary mask to the fixed adjacency matrix to obtain a second subgraph of the fixed adjacency matrix. The method includes processing the first embedding matrix and the second subgraph through a second layer of the GCN to obtain a second embedding matrix, and then determining a plurality of gradients of a loss function, and modifying the first stochastic binary mask and the second stochastic binary mask using at least one of the plurality of gradients.
    Type: Grant
    Filed: July 2, 2021
    Date of Patent: April 23, 2024
    Assignee: Visa International Service Association
    Inventors: Huiyuan Chen, Yu-San Lin, Lan Wang, Michael Yeh, Fei Wang, Hao Yang
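As a rough illustration of the claimed flow, here is a minimal sketch in Python. The Bernoulli mask sampling, ReLU activations, and all dimensions are assumptions, not details from the patent, and the gradient-based mask update is only indicated in a comment.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(adj_subgraph, embeddings, weights):
    """One GCN layer: aggregate over the subgraph, transform, apply ReLU."""
    return np.maximum(adj_subgraph @ embeddings @ weights, 0.0)

n, d, h = 6, 4, 3
adj = (rng.random((n, n)) < 0.5).astype(float)   # fixed adjacency matrix
x = rng.standard_normal((n, d))                  # first data set embeddings
w1 = rng.standard_normal((d, h))
w2 = rng.standard_normal((h, h))

mask1 = rng.random((n, n)) < 0.8                 # first stochastic binary mask
sub1 = adj * mask1                               # first subgraph
emb1 = gcn_layer(sub1, x, w1)                    # first embedding matrix

mask2 = rng.random((n, n)) < 0.8                 # second stochastic binary mask
sub2 = adj * mask2                               # second subgraph
emb2 = gcn_layer(sub2, emb1, w2)                 # second embedding matrix

# The patent then takes gradients of a loss and uses them to modify both
# masks; in practice that requires a relaxation such as a straight-through
# or Gumbel-softmax estimator, which is omitted here.
print(emb2.shape)
```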
  • Patent number: 11960984
    Abstract: An active learning framework is provided that employs a plurality of machine learning components that operate over iterations of a training phase followed by an active learning phase. In each iteration of the training phase, the machine learning components are trained from a pool of labeled observations. In the active learning phase, the machine learning components are configured to generate metrics used to control sampling of unlabeled observations for labeling such that newly labeled observations are added to a pool of labeled observations for the next iteration of the training phase. The machine learning components can include an inspection (or primary) learning component that generates a predicted label and uncertainty score for an unlabeled observation, and at least one additional component that generates a quality metric related to the unlabeled observation or the predicted label. The uncertainty score and quality metric(s) can be combined for efficient sampling of observations for labeling.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: April 16, 2024
    Assignee: Schlumberger Technology Corporation
    Inventors: Nader Salman, Guillaume Le Moing, Sepand Ossia, Vahagn Hakopian
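The sampling phase lends itself to a short sketch. This is not Schlumberger's implementation: `predict_with_uncertainty` and `quality_metric` are hypothetical stand-ins for the inspection learner and the additional quality component, and multiplying the two scores is one assumed way of combining them.

```python
import numpy as np

rng = np.random.default_rng(1)

def predict_with_uncertainty(x):
    # Hypothetical inspection (primary) learner: predicted label + uncertainty.
    return int(x.sum() > 0), rng.random()

def quality_metric(x):
    # Hypothetical additional component scoring observation/label quality.
    return rng.random()

unlabeled = [rng.standard_normal(5) for _ in range(100)]
scores = []
for x in unlabeled:
    _, uncertainty = predict_with_uncertainty(x)
    scores.append(uncertainty * quality_metric(x))   # combined sampling score

# Send the top-k highest-scoring observations for labeling; the newly labeled
# observations join the labeled pool for the next training iteration.
top_k = np.argsort(scores)[-10:]
print(top_k)
```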
  • Patent number: 11954586
    Abstract: A neural processing unit may comprise a first circuitry including a plurality of processing elements (PEs) configured to perform operations of an artificial neural network model, the plurality of PEs including an adder, a multiplier, and an accumulator, and a clock signal supply circuitry configured to output one or more clock signals. When the plurality of PEs include a first group of PEs and a second group of PEs, a first clock signal among the one or more clock signals may be supplied to the first group of PEs, and a second clock signal among the one or more clock signals may be supplied to the second group of PEs. At least one of the first and second clock signals may have a preset phase based on a phase of an original clock signal.
    Type: Grant
    Filed: September 1, 2023
    Date of Patent: April 9, 2024
    Assignee: DEEPX CO., LTD.
    Inventors: Seong Jin Lee, Jung Boo Park, Lok Won Kim
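A toy software model can convey the phase-offset idea, though the patent describes hardware clock circuitry; the two-phase even/odd schedule below is purely an assumption for illustration.

```python
# Toy model: the first group of PEs works on even ticks, the second group on
# odd ticks, i.e., its clock has a preset phase offset from the original clock.
for tick in range(4):
    active = "first group of PEs" if tick % 2 == 0 else "second group of PEs"
    print(f"tick {tick}: {active} perform multiply-accumulate")
```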
  • Patent number: 11948067
    Abstract: Some embodiments of the invention provide a method for implementing a temporal convolution network (TCN) that includes several layers of machine-trained processing nodes. While processing one set of inputs that is provided to the TCN at a particular time, some of the processing nodes of the TCN use intermediate values computed by the processing nodes for other sets of inputs that were provided to the TCN at earlier times. To speed up the operation of the TCN and improve its efficiency, the method of some embodiments stores intermediate values computed by the TCN processing nodes for earlier sets of TCN inputs, so that these values can later be used for processing later sets of TCN inputs.
    Type: Grant
    Filed: November 9, 2020
    Date of Patent: April 2, 2024
    Assignee: PERCEIVE CORPORATION
    Inventors: Ryan J. Cassidy, Steven L. Teig
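One common way to realize this caching, shown as a minimal sketch: each layer keeps a ring buffer of past activations so a new time step reuses stored intermediate values instead of recomputing the window. The dilated-convolution layout and streaming `step` API are assumptions, not the patent's design.

```python
import numpy as np

rng = np.random.default_rng(2)

class StreamingTCNLayer:
    """One TCN layer that caches past activations so each new time step
    reuses intermediate values instead of recomputing the full window."""

    def __init__(self, dilation, width):
        self.dilation = dilation
        self.kernel = rng.standard_normal(width)
        self.buffer = np.zeros(dilation * (width - 1) + 1)  # cached history

    def step(self, x):
        self.buffer = np.roll(self.buffer, -1)   # drop the oldest cached value
        self.buffer[-1] = x                      # store the newest input
        taps = self.buffer[::-self.dilation][:len(self.kernel)]
        return float(np.maximum(taps @ self.kernel, 0.0))

layers = [StreamingTCNLayer(d, 3) for d in (1, 2, 4)]
for t in range(8):
    value = float(rng.standard_normal())
    for layer in layers:                         # each layer feeds the next
        value = layer.step(value)
    print(t, round(value, 4))
```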
  • Patent number: 11941511
    Abstract: Some embodiments of the invention provide a method for implementing a temporal convolution network (TCN) that includes several layers of machine-trained processing nodes. While processing one set of inputs that is provided to the TCN at a particular time, some of the processing nodes of the TCN use intermediate values computed by the processing nodes for other sets of inputs that were provided to the TCN at earlier times. To speed up the operation of the TCN and improve its efficiency, the method of some embodiments stores intermediate values computed by the TCN processing nodes for earlier sets of TCN inputs, so that these values can later be used for processing later sets of TCN inputs.
    Type: Grant
    Filed: November 9, 2020
    Date of Patent: March 26, 2024
    Assignee: PERCEIVE CORPORATION
    Inventors: Ryan J. Cassidy, Steven L. Teig
  • Patent number: 11934958
    Abstract: This disclosure describes one or more embodiments of systems, non-transitory computer-readable media, and methods that utilize channel pruning and knowledge distillation to generate a compact noise-to-image GAN. For example, the disclosed systems prune less informative channels via outgoing channel weights of the GAN. In some implementations, the disclosed systems further utilize content-aware pruning by utilizing a differentiable loss between an image generated by the GAN and a modified version of the image to identify sensitive channels within the GAN during channel pruning. In some embodiments, the disclosed systems utilize knowledge distillation to learn parameters for the pruned GAN to mimic a full-size GAN.
    Type: Grant
    Filed: January 13, 2021
    Date of Patent: March 19, 2024
    Assignee: Adobe Inc.
    Inventors: Zhixin Shu, Zhe Lin, Yuchen Liu, Yijun Li
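A minimal sketch of the first step, scoring channels by their outgoing weights; using the L1 norm as the informativeness proxy is an assumption, and the patent's content-aware loss and distillation stages are only indicated in comments.

```python
import numpy as np

rng = np.random.default_rng(3)

out_weights = rng.standard_normal((64, 128))   # 64 channels -> next layer
scores = np.abs(out_weights).sum(axis=1)       # outgoing-weight magnitude
keep = np.argsort(scores)[-32:]                # keep the most informative half
pruned = out_weights[keep]

# Content-aware refinement (per the abstract): compare the image generated by
# the GAN with a modified version of it via a differentiable loss to identify
# sensitive channels to protect during pruning. Knowledge distillation then
# trains the pruned GAN to mimic the full-size GAN.
print(pruned.shape)
```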
  • Patent number: 11922295
    Abstract: An arithmetic device includes an activation function (AF) control circuit and a data storage circuit. The AF control circuit is configured to generate an activation period signal, an activation active signal, and an activation read signal based on an activation control signal. The data storage circuit includes at least one memory bank that is activated based on a bank active signal that is generated based on the activation active signal. The data storage circuit is configured to output data stored in a memory cell array, which is selected by a row address and a column address, as activation data based on the activation read signal.
    Type: Grant
    Filed: February 4, 2021
    Date of Patent: March 5, 2024
    Assignee: SK hynix Inc.
    Inventor: Choung Ki Song
  • Patent number: 11922302
    Abstract: Provided are a hyperparameter optimizer and method for optimizing hyperparameters and a spiking neural network processing unit. The optimizer includes a statistical analyzer configured to receive training data and perform statistical analysis on the training data, an objective function generator configured to generate hyperparameter-specific objective functions by using a statistical analysis value of the statistical analyzer, and an optimal hyperparameter selector configured to select optimal hyperparameters according to certain rules on the basis of the hyperparameter-specific objective functions.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: March 5, 2024
    Assignee: Korea Electronics Technology Institute
    Inventors: Seok Hoon Jeon, Byung Soo Kim, Hee Tak Kim, Tae Ho Hwang
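The three components can be sketched as follows. Mean/variance statistics, quadratic objective functions, and grid-based selection are all assumptions; the abstract does not specify these particulars.

```python
import numpy as np

rng = np.random.default_rng(4)
training_data = rng.standard_normal(1000)

# Statistical analyzer: summarize the training data.
stats = {"mean": training_data.mean(), "std": training_data.std()}

# Objective function generator: one objective per hyperparameter, built from
# the statistical analysis value (hypothetical closed form).
def make_objective(stat_value):
    return lambda h: (h - stat_value) ** 2

objectives = {
    "threshold": make_objective(stats["mean"]),
    "scale": make_objective(stats["std"]),
}

# Optimal hyperparameter selector: pick the minimizer over a candidate grid.
grid = np.linspace(-3.0, 3.0, 601)
best = {name: grid[np.argmin([f(h) for h in grid])]
        for name, f in objectives.items()}
print(best)
```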
  • Patent number: 11922324
    Abstract: Techniques are described herein for a method of determining a similarity of each neuron in a layer of neurons of a neural network model to each other neuron in the layer of neurons. The method further comprises determining a redundant set of neurons and a non-redundant set of neurons based on the similarity of each neuron in the layer. The method further comprises fine-tuning the set of non-redundant neurons using a first set of training data. The method further comprises training the set of redundant neurons using a second set of training data.
    Type: Grant
    Filed: May 16, 2023
    Date of Patent: March 5, 2024
    Assignee: Tenyx, Inc.
    Inventors: Romain Cosentino, Adam Earle
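A minimal sketch of the similarity computation, assuming cosine similarity between neurons' incoming weight vectors and a fixed threshold; the patent's actual similarity measure is not stated in the abstract.

```python
import numpy as np

rng = np.random.default_rng(5)
weights = rng.standard_normal((32, 16))          # 32 neurons, 16 inputs each

normed = weights / np.linalg.norm(weights, axis=1, keepdims=True)
similarity = normed @ normed.T                   # pairwise cosine similarity
np.fill_diagonal(similarity, 0.0)                # ignore self-similarity

redundant = np.where(similarity.max(axis=1) > 0.95)[0]
non_redundant = np.setdiff1d(np.arange(32), redundant)

# Per the abstract: fine-tune the non-redundant set on a first training set
# and train the redundant set on a second training set.
print(len(redundant), "redundant,", len(non_redundant), "non-redundant")
```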
  • Patent number: 11915141
    Abstract: Disclosed herein are an apparatus and method for training a deep neural network. An apparatus for training a deep neural network including N layers, each having multiple neurons, includes an error propagation processing unit configured to, when an error occurs in an N-th layer in response to initiation of training of the deep neural network, determine an error propagation value for an arbitrary layer based on the error occurring in the N-th layer and directly propagate the error propagation value to the arbitrary layer, a weight gradient update processing unit configured to update a forward weight for the arbitrary layer based on a feed-forward value input to the arbitrary layer and the error propagation value in response to the error propagation value, and a feed-forward processing unit configured to, when update of the forward weight is completed, perform a feed-forward operation in the arbitrary layer using the forward weight.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: February 27, 2024
    Assignee: Korea Advanced Institute of Science and Technology
    Inventors: Hoi Jun Yoo, Dong Hyeon Han
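A minimal sketch of direct error propagation, assuming fixed random projection matrices carry the top-layer error to each hidden layer (in the style of direct feedback alignment); the patent's exact propagation rule and update schedule are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)
sizes = [8, 16, 16, 4]                        # layer widths, N = 3 weight layers
weights = [rng.standard_normal((a, b)) * 0.1 for a, b in zip(sizes, sizes[1:])]
backward = [rng.standard_normal((sizes[-1], s)) * 0.1 for s in sizes[1:-1]]

x = rng.standard_normal(sizes[0])
activations = [x]
for w in weights:
    activations.append(np.tanh(activations[-1] @ w))

error = activations[-1] - np.ones(sizes[-1])  # error at the N-th layer

# Direct propagation: each hidden layer receives its own projection of the
# top-layer error and updates its forward weight from that value and the
# feed-forward value that entered the layer.
lr = 0.01
for i, b in enumerate(backward):
    delta = error @ b                         # error propagation value
    weights[i] -= lr * np.outer(activations[i], delta)

# Once the forward weights are updated, perform the feed-forward operation.
out = x
for w in weights:
    out = np.tanh(out @ w)
print(out)
```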
  • Patent number: 11915125
    Abstract: An arithmetic device includes an AF circuit including a first table storage circuit. The AF circuit stores a table input signal into one variable latch selected based on an input selection signal among variable latches included in the first table storage circuit in a look-up table form when a table set signal is activated. The AF circuit extracts a result value of a first activation function realized by a look-up table based on an input distribution signal to output the extracted result value as a first table output signal for generating an output distribution signal.
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: February 27, 2024
    Assignee: SK hynix Inc.
    Inventor: Choung Ki Song
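In software terms, the look-up-table activation might look like the sketch below, assuming a sigmoid sampled on a fixed 256-entry grid with nearest-entry lookup; the patent implements this with latch-based table storage circuits rather than code.

```python
import numpy as np

grid = np.linspace(-8.0, 8.0, 256)           # table input range
table = 1.0 / (1.0 + np.exp(-grid))          # stored activation values (sigmoid)

def af_lookup(x):
    # Quantize the input distribution signal to the nearest table index,
    # then read out the stored result value as the table output signal.
    idx = np.clip(np.round((x - grid[0]) / (grid[1] - grid[0])), 0, 255)
    return table[idx.astype(int)]

print(af_lookup(np.array([-2.0, 0.0, 2.0])))
```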
  • Patent number: 11907828
    Abstract: A field programmable gate array (FPGA) may be used for inference of a trained deep neural network (DNN). The trained DNN may comprise a set of parameters and the FPGA may have a first precision configuration defining first number representations of the set of parameters. The FPGA may determine different precision configurations of the trained DNN. A precision configuration of the precision configurations may define second number representations of a subset of the set of parameters. For each of the determined precision configurations, a bitstream file may be provided. The bitstream files may be stored so that the FPGA may be programmed using one of the stored bitstream files for inference of the trained DNN.
    Type: Grant
    Filed: September 3, 2019
    Date of Patent: February 20, 2024
    Assignee: International Business Machines Corporation
    Inventors: Mitra Purandare, Dionysios Diamantopoulos, Raphael Polig
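The bookkeeping can be pictured with a small sketch; the dictionary layout, file names, and `load_bitstream` stub are hypothetical, and a real flow would invoke vendor programming tools.

```python
# One stored bitstream file per precision configuration of the trained DNN.
bitstreams = {
    "fp16_all": "dnn_fp16.bit",                  # first number representations
    "int8_conv_layers": "dnn_int8_conv.bit",     # second number representations
}

def load_bitstream(path):
    print(f"programming FPGA with {path}")       # stand-in for vendor tooling

# Program the FPGA with the bitstream matching the desired configuration.
load_bitstream(bitstreams["int8_conv_layers"])
```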
  • Patent number: 11893473
    Abstract: A method for model adaptation, an electronic device, and a computer program product are disclosed. For example, the method comprises processing first input data by using a first machine learning model having first parameter set values, to obtain first feature information of the first input data, the first machine learning model having a capability of self-ordering and the first parameter set values being updated after the processing of the first input data; generating a first classification result for the first input data based on the first feature information by using a second machine learning model having second parameter set values; processing second input data by using the first machine learning model having the updated first parameter set values, to obtain second feature information of the second input data; and generating a second classification result for the second input data based on the second feature information by using the second machine learning model having the second parameter set values.
    Type: Grant
    Filed: March 3, 2020
    Date of Patent: February 6, 2024
    Assignee: EMC IP Holding Company LLC
    Inventors: WuiChak Wong, Sanping Li, Jin Li
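A minimal sketch, assuming a self-organizing-map-style first model whose codebook updates after each input and a fixed linear second model; the abstract describes the components only at this block level.

```python
import numpy as np

rng = np.random.default_rng(7)
codebook = rng.standard_normal((10, 5))          # first model, first parameters
classifier = rng.standard_normal((10, 3))        # second model, second parameters

def extract_and_update(x, lr=0.1):
    # Self-ordering: the winning code vector moves toward the input, so the
    # first parameter set values are updated after processing the input.
    winner = np.argmin(np.linalg.norm(codebook - x, axis=1))
    features = np.zeros(10)
    features[winner] = 1.0
    codebook[winner] += lr * (x - codebook[winner])
    return features

x1, x2 = rng.standard_normal(5), rng.standard_normal(5)
f1 = extract_and_update(x1)                      # first feature information
result1 = np.argmax(f1 @ classifier)             # first classification result
f2 = extract_and_update(x2)                      # uses updated first parameters
result2 = np.argmax(f2 @ classifier)             # second classification result
print(result1, result2)
```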
  • Patent number: 11893477
    Abstract: A system may comprise a neural processing unit (NPU) including at least one memory and a plurality of processing elements (PEs) capable of performing operations for at least one artificial neural network (ANN) model. The plurality of PEs may include an adder, a multiplier, and an accumulator. The plurality of PEs may include a first group of PEs configured to operate on a first portion of a clock signal and a second group of PEs configured to operate on a second portion of the clock signal.
    Type: Grant
    Filed: July 17, 2023
    Date of Patent: February 6, 2024
    Assignee: DEEPX CO., LTD.
    Inventors: Lok Won Kim, Jung Boo Park, Seong Jin Lee
  • Patent number: 11887002
    Abstract: Disclosed is a method of generating data based on input data by using a pre-trained artificial neural network model having an encoder-decoder structure. In particular, according to the present disclosure, a computing device generates new data based on a probability distribution of input data by using a pre-trained artificial neural network model having an encoder-decoder structure, and the pre-trained artificial neural network model having the encoder-decoder structure corresponds to a pre-trained model in which a latent vector layer is included between an encoder layer and a decoder layer of the artificial neural network model.
    Type: Grant
    Filed: August 3, 2022
    Date of Patent: January 30, 2024
    Inventors: Seongmin Park, Jihwa Lee
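A minimal sketch of the latent vector layer between encoder and decoder, assuming a Gaussian latent layer as in a variational autoencoder; the random weights stand in for the pre-trained model's parameters.

```python
import numpy as np

rng = np.random.default_rng(8)
enc_w = rng.standard_normal((12, 8))     # encoder layer
mu_w = rng.standard_normal((8, 4))       # latent mean head
logvar_w = rng.standard_normal((8, 4))   # latent log-variance head
dec_w = rng.standard_normal((4, 12))     # decoder layer

x = rng.standard_normal(12)              # input data
h = np.tanh(x @ enc_w)
mu, logvar = h @ mu_w, h @ logvar_w      # latent vector layer parameters

# Generate new data based on the probability distribution of the input:
# sample a latent vector, then decode it.
z = mu + np.exp(0.5 * logvar) * rng.standard_normal(4)
generated = np.tanh(z @ dec_w)
print(generated.shape)
```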
  • Patent number: 11868891
    Abstract: In some aspects, a computing system can generate and optimize a neural network for risk assessment. The neural network can be trained to enforce a monotonic relationship between each of the input predictor variables and an output risk indicator. The training of the neural network can involve solving an optimization problem under a monotonic constraint. This constrained optimization problem can be converted to an unconstrained problem by introducing a Lagrangian expression and by introducing a term approximating the monotonic constraint. Additional regularization terms can also be introduced into the optimization problem. The optimized neural network can be used both for accurately determining risk indicators for target entities using predictor variables and determining explanation codes for the predictor variables. Further, the risk indicators can be utilized to control the access by a target entity to an interactive computing environment for accessing services provided by one or more institutions.
    Type: Grant
    Filed: August 31, 2022
    Date of Patent: January 9, 2024
    Assignee: Equifax Inc.
    Inventors: Matthew Turner, Lewis Jordan, Allan Joshua
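The constraint-to-penalty conversion can be sketched on a one-layer model. The hinge penalty on negative weights is a stand-in for the patent's Lagrangian term, and the data and learning rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(9)
X = rng.random((200, 4))                         # predictor variables
y = (X.sum(axis=1) > 2).astype(float)            # risk indicator labels
w, b = rng.standard_normal(4) * 0.1, 0.0

lam = 10.0                                       # Lagrange-style multiplier
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))       # predicted risk indicator
    grad_w = X.T @ (p - y) / len(y)              # unconstrained loss gradient
    grad_w += lam * np.where(w < 0, -1.0, 0.0)   # penalize non-monotone weights
    w -= lr * grad_w
    b -= lr * float(np.mean(p - y))

print(np.all(w >= 0))                            # monotone in every predictor
```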
  • Patent number: 11861504
    Abstract: A method of performing class incremental learning in a neural network apparatus, the method including training an autoencoder using first input embeddings with respect to a first class group, calculating a contribution value of each of the parameters of the autoencoder and calculating a representative value with respect to each of at least one first class included in the first class group in the training of the autoencoder, retraining the autoencoder using second input embeddings with respect to a second class group, and updating the contribution value of each of the parameters and calculating a representative value with respect to each of at least one second class included in the second class group in the retraining of the autoencoder.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: January 2, 2024
    Assignees: Samsung Electronics Co., Ltd., Seoul National University R&DB Foundation
    Inventors: Donghyun Lee, Euntae Choi, Kyungmi Lee, Kiyoung Choi
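A minimal sketch of the bookkeeping, assuming squared-gradient accumulation as the contribution measure and class-mean embeddings as representative values; the abstract does not give the exact formulas, and `train_step` is a hypothetical stand-in.

```python
import numpy as np

rng = np.random.default_rng(10)
params = rng.standard_normal(20)                 # autoencoder parameters
contribution = np.zeros(20)                      # per-parameter contribution

def train_step(embeddings):
    global params
    grad = rng.standard_normal(20) * embeddings.mean()  # stand-in gradient
    params -= 0.01 * grad
    return grad

# First class group: train, accumulate contributions, store class means.
group1 = {c: rng.standard_normal((50, 20)) for c in ("cat", "dog")}
representatives = {}
for cls, emb in group1.items():
    contribution += train_step(emb) ** 2
    representatives[cls] = emb.mean(axis=0)      # representative value

# Second class group: retrain, update contributions, add new representatives.
group2 = {c: rng.standard_normal((50, 20)) for c in ("car",)}
for cls, emb in group2.items():
    contribution += train_step(emb) ** 2
    representatives[cls] = emb.mean(axis=0)
print(sorted(representatives))
```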
  • Patent number: 11853861
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating output examples using neural networks. One of the methods includes receiving a request to generate an output example of a particular type, accessing dependency data, and generating the output example by, at each of a plurality of generation time steps: identifying one or more current blocks for the generation time step, wherein each current block is a block for which the values of the bits in all of the other blocks identified in the dependency for the block have already been generated; and generating the values of the bits in the current blocks for the generation time step conditioned on, for each current block, the already generated values of the bits in the other blocks identified in the dependency for the current block.
    Type: Grant
    Filed: October 10, 2022
    Date of Patent: December 26, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Nal Emmerich Kalchbrenner, Karen Simonyan, Erich Konrad Elsen
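The dependency-driven schedule can be sketched as a topological sweep: a block becomes current once every block in its dependency has been generated. The four-block dependency map and the random bit model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)
dependency = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
values = {}

while len(values) < len(dependency):
    # Current blocks: blocks whose dependencies have all been generated.
    current = [blk for blk, deps in dependency.items()
               if blk not in values and all(d in values for d in deps)]
    for blk in current:
        # The patent conditions these bits on the already generated values
        # of the blocks in dependency[blk]; here they are random stand-ins.
        values[blk] = (rng.random(4) < 0.5).astype(int)
    print("generated this step:", current)
```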
  • Patent number: 11847568
    Abstract: Some embodiments of the invention provide a novel method for training a quantized machine-trained network. Some embodiments provide a method of scaling a feature map of a pre-trained floating-point neural network in order to match the range of output values provided by quantized activations in a quantized neural network. A quantization function is modified, in some embodiments, to be differentiable to fix the mismatch between the loss function computed in forward propagation and the loss gradient used in backward propagation. Variational information bottleneck, in some embodiments, is incorporated to train the network to be insensitive to multiplicative noise applied to each channel. In some embodiments, channels that finish training with large noise, for example, exceeding 100%, are pruned.
    Type: Grant
    Filed: October 8, 2019
    Date of Patent: December 19, 2023
    Assignee: PERCEIVE CORPORATION
    Inventors: Eric A. Sather, Steven L. Teig
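A minimal sketch of the range-matching idea, assuming a uniform 4-bit quantizer; the patent's exact scaling rule and the differentiable quantization function are only indicated in a comment.

```python
import numpy as np

rng = np.random.default_rng(12)
feature_map = rng.standard_normal((2, 8)) * 3.0   # pre-trained float outputs

levels = 2 ** 4 - 1                               # 4-bit quantized activation
scale = np.abs(feature_map).max() / (levels / 2)  # match the value ranges
q = np.clip(np.round(feature_map / scale), -(levels // 2), levels // 2)
dequantized = q * scale

# Differentiable-quantizer idea: the forward pass uses `dequantized`, while
# the backward pass treats rounding as (near-)identity so the loss gradient
# matches the loss computed in forward propagation.
print(np.max(np.abs(feature_map - dequantized)))
```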
  • Patent number: 11847550
    Abstract: A method, computer program product, and system perform computations using a processor. A first instruction including a first index vector operand and a second index vector operand is received and the first index vector operand is decoded to produce first coordinate sets for a first array, each first coordinate set including at least a first coordinate and a second coordinate of a position of a non-zero element in the first array. The second index vector operand is decoded to produce second coordinate sets for a second array, each second coordinate set including at least a third coordinate and a fourth coordinate of a position of a non-zero element in the second array. The first coordinate sets are summed with the second coordinate sets to produce output coordinate sets and the output coordinate sets are converted into a set of linear indices.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: December 19, 2023
    Assignee: NVIDIA Corporation
    Inventors: William J. Dally, Angshuman Parashar, Joel Springer Emer, Stephen William Keckler, Larry Robert Dennison
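A minimal sketch of the coordinate arithmetic, assuming 2-D coordinates and row-major linearization; the instruction encoding itself is hardware and not reproduced here.

```python
import numpy as np

cols = 8                                          # output array width
first = np.array([[0, 1], [2, 3]])                # (row, col) of non-zeros,
second = np.array([[1, 1], [0, 2]])               # decoded from index vectors

output_coords = first + second                    # summed coordinate sets
linear = output_coords[:, 0] * cols + output_coords[:, 1]  # linear indices
print(output_coords.tolist(), linear.tolist())
```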