Patents Examined by Kamran Afshar
  • Patent number: 12112264
    Abstract: A device which comprises an array of resistive processing unit (RPU) cells, first control lines extending in a first direction across the array of RPU cells, and second control lines extending in a second direction across the array of RPU cells. Peripheral circuitry comprising readout circuitry is coupled to the first and second control lines. A control system generates control signals to control the peripheral circuitry to perform a first operation and a second operation on the array of RPU cells. The control signals include a first configuration control signal to configure the readout circuitry to have a first hardware configuration when the first operation is performed on the array of RPU cells, and a second configuration control signal to configure the readout circuitry to have a second hardware configuration, which is different from the first hardware configuration, when the second operation is performed on the array of RPU cells.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: October 8, 2024
    Assignee: International Business Machines Corporation
    Inventors: Malte Johannes Rasch, Tayfun Gokmen, Seyoung Kim
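A behavioral sketch of the reconfigurable-readout idea in the entry above: the same RPU array is read out under two different configurations depending on which operation is being performed. The conductance values, gains, line choices, and the forward/backward pairing are assumptions for illustration, not details from the patent.
```python
import numpy as np

# Minimal behavioral model of an RPU crossbar: weights are stored as
# conductances G, and a "readout configuration" sets the gain and the set of
# control lines used when currents are integrated.
rng = np.random.default_rng(0)
G = rng.normal(0.0, 0.1, size=(8, 4))          # 8x4 array of RPU cells

READOUT_CONFIGS = {                             # assumed, not from the patent
    "forward":  {"gain": 1.0,  "lines": "columns"},
    "backward": {"gain": 0.25, "lines": "rows"},
}

def readout(x, operation):
    """Vector-matrix multiply through the array under a given readout config."""
    cfg = READOUT_CONFIGS[operation]
    if cfg["lines"] == "columns":               # drive rows, read column currents
        currents = x @ G
    else:                                       # drive columns, read row currents
        currents = G @ x
    return cfg["gain"] * currents

y_fwd = readout(rng.normal(size=8), "forward")    # first operation, first config
y_bwd = readout(rng.normal(size=4), "backward")   # second operation, second config
print(y_fwd.shape, y_bwd.shape)                   # (4,) (8,)
```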
  • Patent number: 12106491
    Abstract: Embodiments of this application disclose a target tracking method performed at an electronic device. The electronic device obtains a first video stream and detects candidate regions within a current video frame in the first video stream. The electronic device then extracts, from the candidate regions, a deep feature corresponding to each candidate region and calculates a feature similarity between each candidate region's deep feature and a deep feature of a target detected in a previous video frame. Finally, the electronic device determines, based on the feature similarity corresponding to each candidate region, that the target is detected in the current video frame. Because target detection is performed over a range of video frames by using a target detection model and target tracking is performed based on the deep features, target tracking drift or loss can be effectively prevented, ensuring the accuracy of target tracking.
    Type: Grant
    Filed: October 6, 2020
    Date of Patent: October 1, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Hao Zhang, Zhiwei Niu
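A minimal NumPy sketch of the matching step the abstract describes: comparing each candidate region's deep feature against the target's feature from the previous frame and accepting the best match. The feature dimension, the cosine metric, and the threshold value are assumptions.
```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_target(target_feature, candidate_features, threshold=0.6):
    """Return the index of the candidate whose deep feature best matches the
    target from the previous frame, or None if no similarity clears the threshold."""
    sims = [cosine_similarity(target_feature, f) for f in candidate_features]
    best = int(np.argmax(sims))
    return (best, sims[best]) if sims[best] >= threshold else (None, max(sims))

rng = np.random.default_rng(1)
target = rng.normal(size=128)                          # feature from previous frame
candidates = [rng.normal(size=128) for _ in range(5)]  # features of candidate regions
candidates[3] = target + 0.05 * rng.normal(size=128)   # one near-duplicate region
print(match_target(target, candidates))                # -> (3, ~0.99)
```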
  • Patent number: 12079704
    Abstract: A system includes a data collection engine, a plurality of items including radio-frequency identification chips, a plurality of third party data and insight sources, a plurality of interfaces, client devices, a server and method thereof for preventing suicide. The server includes trained machine learning models, business logic and attributes of a plurality of patient events. The data collection engine sends attributes of new patient events to the server. The server can predict an adverse event risk of the new patient events based upon the attributes of the new patient events utilizing the trained machine learning models.
    Type: Grant
    Filed: October 31, 2022
    Date of Patent: September 3, 2024
    Assignee: Brain Trust Innovations I, LLC
    Inventor: David LaBorde
  • Patent number: 12067479
    Abstract: Systems and methods for heterogenous hardware acceleration are disclosed. The systems and methods can include a neural network processing unit comprising compute tiles. Each of a first set of the compute tiles can include a first tensor array configured to support operations in a first number format. Each of a second set of the compute tiles can include a second tensor array configured to support operations in a second number format, the second number format supporting a greater range or a greater precision than the first number format, and a de-quantizer configured to convert data in the first number format to data in the second number format. The systems and methods can include neural network processing units, multi-chip hardware accelerators and distributed hardware accelerators including low-precision components for performing inference tasks and high-precision components for performing training tasks.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: August 20, 2024
    Assignee: T-Head (Shanghai) Semiconductor Co., Ltd.
    Inventor: Liang Han
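A small sketch of the de-quantizer idea: converting data from a low-precision number format used by inference tiles into a higher-range, higher-precision format used by training tiles. The int8/fp32 pairing and the per-tensor scaling scheme are assumptions.
```python
import numpy as np

def quantize_int8(x_fp32, scale):
    """Low-precision side: map fp32 activations to int8 for the inference tiles."""
    return np.clip(np.round(x_fp32 / scale), -128, 127).astype(np.int8)

def dequantize_to_fp32(x_int8, scale):
    """De-quantizer: convert the low-precision format back to the
    higher-range/higher-precision format used by the training tiles."""
    return x_int8.astype(np.float32) * scale

x = np.random.default_rng(2).normal(0.0, 1.0, size=6).astype(np.float32)
scale = np.float32(np.abs(x).max() / 127)
x_q = quantize_int8(x, scale)
x_dq = dequantize_to_fp32(x_q, scale)
print(np.max(np.abs(x - x_dq)) <= scale / 2 + 1e-6)   # quantization error bound holds
```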
  • Patent number: 12067484
    Abstract: An example method of training a neural network includes defining hardware building blocks (HBBs), neuron equivalents (NEQs), and conversion procedures from NEQs to HBBs; defining the neural network using the NEQs in a machine learning framework; training the neural network on a training platform; and converting the neural network as trained into a netlist of HBBs using the conversion procedures to convert the NEQs in the neural network to the HBBs of the netlist.
    Type: Grant
    Filed: June 21, 2019
    Date of Patent: August 20, 2024
    Assignee: XILINX, INC.
    Inventors: Yaman Umuroglu, Nicholas Fraser, Michaela Blott, Kristof Denolf, Kornelis A. Vissers
  • Patent number: 12067485
    Abstract: Methods, systems, and non-transitory computer readable medium are provided for long short-term memory (LSTM) anomaly detection for multi-sensor equipment monitoring. A method includes training a LSTM recurrent neural network (RNN) model for semiconductor processing fault detection. The training includes generating training data for the LSTM RNN model and providing the training data to train the LSTM RNN model on first training input and first target output to generate a trained LSTM RNN model for the semiconductor processing fault detection. The training data includes the first training input and the first target output based on normal runs of manufacturing processes of semiconductor processing equipment. Another method includes providing input based on runs of manufacturing processes of semiconductor processing equipment to a trained LSTM RNN model; obtaining one or more outputs from the trained LSTM RNN model; and using the one or more outputs for semiconductor processing fault detection.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: August 20, 2024
    Assignee: Applied Materials, Inc.
    Inventors: Sima Didari, Tianqing Liao, Harikrishnan Rajagopal
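A sketch of how the outputs of a trained model could be used for fault detection, as the second method in the abstract describes. The trained LSTM RNN is abstracted as a callable, and the residual-threshold rule fitted on normal runs only is an assumption.
```python
import numpy as np

# The trained LSTM RNN is abstracted here as `model(history) -> predicted next
# sensor reading`; the code below only illustrates turning its outputs into
# fault flags, with per-sensor thresholds fitted on normal runs.
def fit_threshold(model, normal_runs, k=4.0):
    errors = []
    for run in normal_runs:                        # run: (timesteps, n_sensors)
        for t in range(1, len(run)):
            errors.append(np.abs(model(run[:t]) - run[t]))
    errors = np.asarray(errors)
    return errors.mean(axis=0) + k * errors.std(axis=0)   # per-sensor limit

def detect_faults(model, run, threshold):
    flags = []
    for t in range(1, len(run)):
        residual = np.abs(model(run[:t]) - run[t])
        flags.append(bool(np.any(residual > threshold)))
    return flags

# Toy stand-in "model": predicts the last observed reading for each sensor.
toy_model = lambda history: history[-1]
rng = np.random.default_rng(3)
normal = [np.cumsum(rng.normal(0, 0.1, size=(50, 3)), axis=0) for _ in range(5)]
thr = fit_threshold(toy_model, normal)
faulty = normal[0].copy()
faulty[30:] += 5.0                                  # injected step fault
print(detect_faults(toy_model, faulty, thr)[29])    # True: fault onset is flagged
```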
  • Patent number: 12056604
    Abstract: Layers of a deep neural network (DNN) are partitioned into stages using a profile of the DNN. Each of the stages includes one or more of the layers of the DNN. The partitioning of the layers of the DNN into stages is optimized in various ways including optimizing the partitioning to minimize training time, to minimize data communication between worker computing devices used to train the DNN, or to ensure that the worker computing devices perform an approximately equal amount of the processing for training the DNN. The stages are assigned to the worker computing devices. The worker computing devices process batches of training data using a scheduling policy that causes the workers to alternate between forward processing of the batches of the DNN training data and backward processing of the batches of the DNN training data. The stages can be configured for model parallel processing or data parallel processing.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: August 6, 2024
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Vivek Seshadri, Amar Phanishayee, Deepak Narayanan, Aaron Harlap, Nikhil Devanur Rangarajan
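A much-simplified sketch of the two ideas in the abstract: split per-layer costs from a profile into contiguous stages of roughly equal work, and have each worker alternate forward and backward processing of batches once the pipeline is full. The greedy split, the cost figures, and the schedule are illustrative assumptions, not the optimizer described in the patent.
```python
def partition_layers(layer_costs, num_workers):
    """Greedily group contiguous layers into stages of roughly equal total cost."""
    target = sum(layer_costs) / num_workers
    stages, current, acc = [], [], 0.0
    for i, cost in enumerate(layer_costs):
        current.append(i)
        acc += cost
        if acc >= target and len(stages) < num_workers - 1:
            stages.append(current)
            current, acc = [], 0.0
    stages.append(current)
    return stages

def alternating_schedule(num_batches, in_flight=2):
    """One worker's schedule: warm up with a few forwards, then alternate
    one backward with one forward until all batches are processed."""
    ops, fwd, bwd = [], 0, 0
    while fwd < min(in_flight, num_batches):
        ops.append(("forward", fwd)); fwd += 1
    while bwd < num_batches:
        ops.append(("backward", bwd)); bwd += 1
        if fwd < num_batches:
            ops.append(("forward", fwd)); fwd += 1
    return ops

profile = [4.0, 1.0, 1.0, 6.0, 2.0, 2.0]        # per-layer costs from profiling
print(partition_layers(profile, 3))             # [[0, 1, 2], [3], [4, 5]]
print(alternating_schedule(4))
```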
  • Patent number: 12045725
    Abstract: Some embodiments provide a method for training a network including layers that each includes multiple nodes. The method identifies a set of related layers of the network. Each node in one of the related layers has corresponding nodes in each of the other related layers. Each set of corresponding nodes receives a same set of inputs and applies different sets of weights to the inputs to generate an output. The method identifies an element-wise addition layer including nodes that each add outputs of a different set of corresponding nodes from the related layers to generate a sum. The method uses a set of outputs generated by the nodes of each related layer to determine batch normalization parameters specific to each layer of the set of related layers. The method uses data generated by the element-wise addition layer to determine batch normalization parameters for the set of related layers.
    Type: Grant
    Filed: July 7, 2020
    Date of Patent: July 23, 2024
    Assignee: PERCEIVE CORPORATION
    Inventors: Eric A. Sather, Steven L. Teig
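A toy sketch of the statistics involved: three related layers receive the same inputs, an element-wise addition layer sums their corresponding outputs, and normalization statistics are computed both per related layer and from the addition layer's data. The layer sizes and the use of plain mean/variance are assumptions.
```python
import numpy as np

rng = np.random.default_rng(4)
inputs = rng.normal(size=(256, 32))                      # shared input batch
weights = [rng.normal(size=(32, 16)) for _ in range(3)]  # three related layers

branch_outputs = [inputs @ w for w in weights]           # corresponding nodes
summed = np.sum(branch_outputs, axis=0)                  # element-wise addition layer

# Per-layer statistics, from each related layer's own outputs ...
per_layer_stats = [(o.mean(axis=0), o.var(axis=0)) for o in branch_outputs]
# ... and group statistics taken from the element-wise addition layer's data.
group_mean, group_var = summed.mean(axis=0), summed.var(axis=0)

normalized = (summed - group_mean) / np.sqrt(group_var + 1e-5)
print(normalized.mean(axis=0).round(6)[:4], normalized.std(axis=0).round(3)[:4])
```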
  • Patent number: 12039432
    Abstract: An artificial neural network (ANN) apparatus can include processing component circuitry that receives one or more linear inputs and removes linearity from them based on an S-shaped saturating activation function that generates a continuous non-linear output. The neurons of the ANN comprise digital bit-wise components configured to transform the linear inputs into the continuous non-linear output.
    Type: Grant
    Filed: March 18, 2020
    Date of Patent: July 16, 2024
    Assignee: Infineon Technologies AG
    Inventor: Andrew Stevens
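A fixed-point sketch of one way an S-shaped saturating activation can be realized with bit-wise operations (shift, add, clamp). The hard-sigmoid shape and the Q8 fixed-point format are illustrative assumptions, not details from the patent.
```python
import numpy as np

# "Hard sigmoid" clamp(x/4 + 1/2, 0, 1): the division by 4 and the constants are
# exact in fixed point, so the whole function maps to shifts, adds, and clamping.
FRAC_BITS = 8                                  # Q8 fixed point (assumed format)

def hard_sigmoid_fixed(x_q):
    half = 1 << (FRAC_BITS - 1)                # 0.5 in Q8
    one = 1 << FRAC_BITS                       # 1.0 in Q8
    y = (x_q >> 2) + half                      # x/4 + 0.5 via an arithmetic shift
    return np.clip(y, 0, one)                  # saturate: continuous non-linear output

xs = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])
xs_q = (xs * (1 << FRAC_BITS)).astype(np.int32)
print(hard_sigmoid_fixed(xs_q) / (1 << FRAC_BITS))   # [0.0, 0.25, 0.5, 0.75, 1.0]
```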
  • Patent number: 12033053
    Abstract: Embodiments of the invention may execute a NN by executing sub-tensor columns, each sub-tensor column including computations from portions of layers of the NN, and each sub-tensor column performing computations entirely within a first layer of cache (e.g. L2 in one embodiment) and saving its output entirely within a second layer of cache (e.g. L3 in one embodiment). Embodiments may include partitioning the execution of a NN by partitioning the execution of the NN into sub-tensor columns, each sub-tensor column including computations from portions of layers of the NN, each sub-tensor column performing computations entirely within a first layer of cache and saving its output entirely within a second layer of cache.
    Type: Grant
    Filed: November 23, 2022
    Date of Patent: July 9, 2024
    Assignee: NEURALMAGIC, INC.
    Inventors: Alexander Matveev, Nir Shavit, Govind Ramnarayan
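An illustrative sketch of the tiling arithmetic behind sub-tensor columns: output columns are grouped so that each group's working set fits a first cache level. The cache sizes and per-column byte counts are assumptions.
```python
# Assumed cache sizes, purely for illustration.
L2_BYTES = 1 * 1024 * 1024           # per-core first cache level
L3_BYTES = 32 * 1024 * 1024          # shared second cache level (holds the outputs)

def subtensor_columns(out_columns, bytes_per_column, cache_bytes=L2_BYTES):
    """Group output columns so each group's working set stays within the cache."""
    per_group = max(1, cache_bytes // bytes_per_column)
    return [range(i, min(i + per_group, out_columns))
            for i in range(0, out_columns, per_group)]

# e.g. a layer with 4096 output columns and ~16 KiB of working set per column
groups = subtensor_columns(4096, bytes_per_column=16 * 1024)
print(len(groups), groups[0])        # 64 groups of 64 columns each
```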
  • Patent number: 12020163
    Abstract: A method includes receiving a request to solve a problem defined by input information and applying a neural network to generate an answer to the problem. The neural network includes an input level, a manager level including a first manager, a worker level including first and second workers, and an output level. Applying the neural network includes implementing the input level to provide a piece of input information to the first manager; implementing the first manager to delegate portions of the piece of information to the first and second workers; implementing the first worker to operate on its portion of information to generate a first output; implementing the second worker to operate on its portion of information to generate a second output; and implementing the output level to generate the answer to the problem, using the first and second outputs. The method also includes transmitting a response comprising the answer.
    Type: Grant
    Filed: February 4, 2020
    Date of Patent: June 25, 2024
    Assignee: Bank of America Corporation
    Inventors: Garrett Thomas Botkin, Matthew Bruce Murray
  • Patent number: 12020160
    Abstract: A method, computer program product and system for generating a neural network. Initial neural networks are prepared, each of which includes an input layer containing one or more input nodes, a middle layer containing one or more middle nodes, and an output layer containing one or more output nodes. A new neural network is generated that includes a new middle layer containing one or more middle nodes based on the middle nodes of the middle layers of the initial neural networks.
    Type: Grant
    Filed: January 19, 2018
    Date of Patent: June 25, 2024
    Assignee: International Business Machines Corporation
    Inventor: Takeshi Inagaki
  • Patent number: 12014262
    Abstract: Disclosed herein are apparatus, method, and computer-readable storage device embodiments for implementing deconvolution via a set of convolutions. An embodiment includes a convolution processor that includes hardware implementing logic to perform at least one algorithm comprising a convolution algorithm. The at least one convolution processor may be further configured to perform operations including performing a first convolution and outputting a first deconvolution segment as a result of the performing the first convolution. The at least one convolution processor may be further configured to perform a second convolution and output a second deconvolution segment as a result of the performing the second convolution.
    Type: Grant
    Filed: October 3, 2019
    Date of Patent: June 18, 2024
    Assignee: SYNOPSYS, INC.
    Inventors: Tom Michiels, Thomas Julian Pennello
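A 1-D NumPy sketch showing that a stride-2 transposed convolution ("deconvolution") can be produced by two ordinary convolutions, each emitting one interleaved segment of the output. The stride, kernel size, and polyphase split are illustrative choices, not the patent's hardware.
```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=10)                 # input feature row
w = rng.normal(size=5)                  # deconvolution kernel

# Reference: zero-insertion upsampling followed by a single full convolution.
up = np.zeros(2 * len(x) - 1)
up[::2] = x
y_ref = np.convolve(up, w)

# Same result from two convolutions with the even/odd kernel taps,
# interleaved into the output as two "deconvolution segments".
y = np.zeros_like(y_ref)
y[0::2] = np.convolve(x, w[0::2])       # first convolution  -> even output samples
y[1::2] = np.convolve(x, w[1::2])       # second convolution -> odd output samples

print(np.allclose(y, y_ref))            # True
```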
  • Patent number: 12001944
    Abstract: A mechanism is described for facilitating smart distribution of resources for deep learning autonomous machines. A method of embodiments, as described herein, includes detecting one or more sets of data from one or more sources over one or more networks, and introducing a library to a neural network application to determine an optimal point at which to apply frequency scaling without degrading performance of the neural network application at a computing device.
    Type: Grant
    Filed: July 27, 2022
    Date of Patent: June 4, 2024
    Assignee: INTEL CORPORATION
    Inventors: Rajkishore Barik, Brian T. Lewis, Murali Sundaresan, Jeffrey Jackson, Feng Chen, Xiaoming Chen, Mike Macpherson
  • Patent number: 11995533
    Abstract: Some embodiments provide a method for executing a layer of a neural network, for a circuit that restricts a number of weight values used per layer. The method applies a first set of weights to a set of inputs to generate a first set of results. The first set of weights are restricted to a first set of allowed values. For each of one or more additional sets of weights, the method applies the respective additional set of weights to the same set of inputs to generate a respective additional set of results. The respective additional set of weights is restricted to a respective additional set of allowed values that is related to the first set of allowed values and the other additional sets of allowed values. The method generates outputs for the particular layer by combining the first set of results with each respective additional set of results.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: May 28, 2024
    Assignee: PERCEIVE CORPORATION
    Inventors: Eric A. Sather, Steven L. Teig
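A sketch of applying several restricted-value weight copies to the same inputs and combining their results, as the abstract outlines. The ternary value sets, the halving scales, and the nearest-value projection are assumptions.
```python
import numpy as np

# Each weight copy is ternary {-a, 0, +a} with a different scale a; the copies
# are applied to the same inputs and their results are summed.
rng = np.random.default_rng(6)
inputs = rng.normal(size=(4, 32))
ideal_weights = rng.uniform(-1.2, 1.2, size=(32, 8))

def project_to_allowed(w, scale):
    """Snap each weight to the nearest of {-scale, 0, +scale}."""
    return scale * np.clip(np.round(w / scale), -1, 1)

scales = [1.0, 0.5, 0.25]                      # related allowed-value sets
residual, results = ideal_weights, []
for s in scales:
    w_s = project_to_allowed(residual, s)      # restricted weight copy
    results.append(inputs @ w_s)               # applied to the same inputs
    residual = residual - w_s                  # next copy refines the remainder

layer_output = sum(results)                    # combine the per-copy results
print(np.abs(layer_output - inputs @ ideal_weights).max(),
      np.abs(inputs @ ideal_weights).max())    # approximation error vs. output scale
```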
  • Patent number: 11954597
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using embedding functions with a deep network. One of the methods includes receiving an input comprising a plurality of features, wherein each of the features is of a different feature type; processing each of the features using a respective embedding function to generate one or more numeric values, wherein each of the embedding functions operates independently of each other embedding function, and wherein each of the embedding functions is used for features of a respective feature type; processing the numeric values using a deep network to generate a first alternative representation of the input, wherein the deep network is a machine learning model composed of a plurality of levels of non-linear operations; and processing the first alternative representation of the input using a logistic regression classifier to predict a label for the input.
    Type: Grant
    Filed: October 24, 2022
    Date of Patent: April 9, 2024
    Assignee: Google LLC
    Inventors: Gregory S. Corrado, Kai Chen, Jeffrey A. Dean, Gary R. Holt, Julian P. Grady, Sharat Chikkerur, David W. Sculley, II
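A compact sketch of the pipeline in the abstract: one embedding function per feature type, a small deep network over the concatenated numeric values, and a logistic regression classifier on the alternative representation. Vocabulary sizes, dimensions, and the random parameters are assumptions.
```python
import numpy as np

rng = np.random.default_rng(7)

# One embedding function per feature type, operating independently: here a
# simple lookup table per type (vocabulary sizes and dimensions are assumed).
embeddings = {
    "query_token": rng.normal(size=(1000, 16)),
    "country":     rng.normal(size=(50, 4)),
    "device":      rng.normal(size=(5, 2)),
}

def embed(features):
    """features: {feature_type: integer id}; returns the concatenated numeric values."""
    return np.concatenate([embeddings[t][i] for t, i in features.items()])

# Deep network: a couple of levels of non-linear operations over the embeddings.
W1, b1 = rng.normal(size=(22, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 8)), np.zeros(8)
def deep_network(v):
    h = np.maximum(v @ W1 + b1, 0.0)
    return np.maximum(h @ W2 + b2, 0.0)        # first alternative representation

# Logistic regression classifier over the alternative representation.
w_lr, b_lr = rng.normal(size=8), 0.0
predict = lambda rep: 1.0 / (1.0 + np.exp(-(rep @ w_lr + b_lr)))

x = {"query_token": 42, "country": 7, "device": 1}
print(predict(deep_network(embed(x))))         # predicted label probability
```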
  • Patent number: 11948065
    Abstract: A system that uses one or more artificial intelligence models to predict an effect of a predicted event on a current state of the system. For example, a model may predict how the rate of change in time-series data may be altered throughout a first time period based on the predicted event.
    Type: Grant
    Filed: June 1, 2023
    Date of Patent: April 2, 2024
    Assignee: Citigroup Technology, Inc.
    Inventors: Ernst Wilhelm Spannhake, II, Thomas Francis Gianelle, Milan Shah
  • Patent number: 11948074
    Abstract: Disclosed is a processor-implemented data processing method in a neural network. A data processing apparatus includes at least one processor, and at least one memory configured to store instructions to be executed by the processor and a neural network, wherein the processor is configured to, based on the instructions, input an input activation map into a current layer included in the neural network, output an output activation map by performing a convolution operation between the input activation map and a weight quantized with a first representation bit number of the current layer, and output a quantized activation map by quantizing the output activation map with a second representation bit number based on an activation quantization parameter.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: April 2, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sangil Jung, Changyong Son, Seohyung Lee, Jinwoo Son, Chang Kyu Choi
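A sketch of uniform quantization with separate representation bit numbers for weights and activations; the clipping ranges stand in for the activation quantization parameter and are assumptions, as is skipping the convolution itself.
```python
import numpy as np

def quantize(x, num_bits, clip):
    """Uniformly quantize values into 2**num_bits levels on [-clip, clip]."""
    levels = (1 << num_bits) - 1
    x = np.clip(x, -clip, clip)
    return np.round((x + clip) / (2 * clip) * levels) * (2 * clip) / levels - clip

rng = np.random.default_rng(8)
weights = rng.normal(size=(3, 3, 16, 32))
activations = rng.normal(size=(1, 8, 8, 16))          # toy input activation map

w_q = quantize(weights, num_bits=4, clip=1.0)         # first representation bit number
# The convolution between the input activation map and the quantized weights
# would happen here; its output activation map is then quantized with the
# second representation bit number using the activation quantization parameter.
a_q = quantize(np.maximum(activations, 0.0), num_bits=8, clip=4.0)
print(np.unique(w_q).size <= 16, np.unique(a_q).size <= 256)   # True True
```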
  • Patent number: 11948073
    Abstract: Systems, apparatuses, and methods for adaptively mapping a machine learning model to a multi-core inference accelerator engine are disclosed. A computing system includes a multi-core inference accelerator engine with multiple inference cores coupled to a memory subsystem. The system also includes a control unit which determines how to adaptively map a machine learning model to the multi-core inference accelerator engine. In one implementation, the control unit selects a mapping scheme which minimizes the memory bandwidth utilization of the multi-core inference accelerator engine. In one implementation, this mapping scheme involves having one inference core of the multi-core inference accelerator engine fetch given data and broadcast the given data to other inference cores of the inference accelerator engine. Each inference core fetches second data unique to the respective inference core.
    Type: Grant
    Filed: August 30, 2018
    Date of Patent: April 2, 2024
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Lei Zhang, Sateesh Lagudu, Allen Rush
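A back-of-the-envelope sketch of why the broadcast mapping can reduce off-chip memory bandwidth: shared data is fetched once by one inference core and broadcast on-chip, while per-core data is still fetched individually. The core count and byte figures are made up for illustration.
```python
def offchip_traffic(num_cores, shared_bytes, unique_bytes_per_core, broadcast):
    """Total bytes fetched from memory under the two mapping schemes."""
    shared = shared_bytes if broadcast else shared_bytes * num_cores
    return shared + unique_bytes_per_core * num_cores

cores = 4
shared_mb, unique_mb = 50, 10
print(offchip_traffic(cores, shared_mb, unique_mb, broadcast=False), "MB")  # 240 MB
print(offchip_traffic(cores, shared_mb, unique_mb, broadcast=True), "MB")   #  90 MB
```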
  • Patent number: 11922316
    Abstract: A computer-implemented method includes: initializing model parameters for training a neural network; performing a forward pass and backpropagation for a first minibatch of training data; determining a new weight value for each of a plurality of nodes of the neural network using a gradient descent of the first minibatch; for each determined new weight value, determining whether to update a running mean corresponding to a weight of a particular node; based on a determination to update the running mean, calculating a new mean weight value for the particular node using the determined new weight value; updating the weight parameters for all nodes based on the calculated new mean weight values corresponding to each node; assigning the running mean as the weight for the particular node when training on the first minibatch is completed; and reinitializing running means for all nodes at a start of training a second minibatch.
    Type: Grant
    Filed: August 13, 2020
    Date of Patent: March 5, 2024
    Assignee: LG ELECTRONICS INC.
    Inventors: Samarth Tripathi, Jiayi Liu, Unmesh Kurup, Mohak Shah
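A toy sketch of keeping a running mean of the weight values produced by gradient-descent updates while training on one minibatch, assigning that mean as the weights when the minibatch is done, and reinitializing it for the next minibatch. The quadratic loss, learning rate, and step count are assumptions.
```python
import numpy as np

rng = np.random.default_rng(9)
weights = rng.normal(size=4)

def train_on_minibatch(weights, minibatch, lr=0.1, steps=5):
    running_mean, count = np.zeros_like(weights), 0
    for _ in range(steps):
        grad = 2 * (weights - minibatch.mean(axis=0))     # toy gradient
        weights = weights - lr * grad                     # new weight values
        count += 1
        running_mean += (weights - running_mean) / count  # update the running mean
    return running_mean                                   # assigned as the weights

batch1 = rng.normal(loc=1.0, size=(32, 4))
batch2 = rng.normal(loc=-1.0, size=(32, 4))
weights = train_on_minibatch(weights, batch1)   # running mean reinitialized inside
weights = train_on_minibatch(weights, batch2)   # ... and again for the next minibatch
print(weights.round(2))
```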