Patents Examined by Benjamin P. Geib
  • Patent number: 11790235
Abstract: Computer systems and methods modify a base deep neural network (DNN). The method comprises replacing a target node of the base DNN with a compound node to thereby create a modified base DNN. The compound node comprises at least first and second nodes. The first node is trained to detect target node patterns in inputs to the first node and the second node is trained to detect an absence of the target node patterns in inputs to the second node, and the first and second nodes are trained to be non-complementary.
    Type: Grant
    Filed: December 28, 2022
    Date of Patent: October 17, 2023
    Assignee: D5AI LLC
    Inventor: James K. Baker
  • Patent number: 11790221
Abstract: Many of the features of neural networks for machine learning can naturally be mapped into the quantum optical domain by introducing the quantum optical neural network (QONN). A QONN can be trained to perform a range of quantum information processing tasks, including newly developed protocols for quantum optical state compression, reinforcement learning, black-box quantum simulation and one-way quantum repeaters. A QONN can generalize from only a small set of training data onto previously unseen inputs. Simulations indicate that QONNs are a powerful design tool for quantum optical systems and, leveraging advances in integrated quantum photonics, a promising architecture for next-generation quantum processors.
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: October 17, 2023
    Assignee: Massachusetts Institute of Technology
    Inventors: Jacques Johannes Carolan, Gregory R. Steinbrecher, Dirk Robert Englund
  • Patent number: 11783168
Abstract: Disclosed are a network accuracy quantification method, system, and device, an electronic device and a readable medium, which are applicable to a many-core chip. The method includes: determining a reference accuracy according to the total core resource number of the many-core chip and the number of core resources required by each network to be quantified, where the number of core resources required by each network to be quantified is determined after that network is quantified; and determining a target accuracy corresponding to each network to be quantified according to the reference accuracy and the total core resource number of the many-core chip.
    Type: Grant
    Filed: June 9, 2021
    Date of Patent: October 10, 2023
    Assignee: LYNXI TECHNOLOGIES CO., LTD.
    Inventors: Fanhui Meng, Chuan Hu, Han Li, Xinyang Wu, Yaolong Zhu
  • Patent number: 11740898
Abstract: The present disclosure provides a computation device. The computation device is configured to perform a machine learning computation, and includes a storage unit, a controller unit, and an operation unit. The storage unit is configured to obtain input data and a computation instruction. The controller unit is configured to extract and parse the computation instruction from the storage unit to obtain one or more operation instructions, and to send the one or more operation instructions and the input data to the operation unit. The operation unit is configured to perform operations on the input data according to the one or more operation instructions to obtain a computation result of the computation instruction. In the examples of the present disclosure, the input data involved in machine learning computations is represented by fixed-point data, thereby improving the processing speed and efficiency of training operations.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: August 29, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
    Inventors: Yao Zhang, Bingrui Wang
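The fixed-point representation mentioned in the abstract above can be illustrated with a minimal sketch. This is not the patented device's scheme, just a generic signed fixed-point quantize/dequantize pair; the bit widths and function names are assumptions for illustration.

```python
import numpy as np

def to_fixed_point(x, frac_bits=8, total_bits=16):
    """Quantize floats to signed fixed-point with `frac_bits` fractional bits."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return np.clip(np.round(x * scale), lo, hi).astype(np.int32)

def from_fixed_point(q, frac_bits=8):
    """Recover an approximate float from the fixed-point integer."""
    return q.astype(np.float64) / (1 << frac_bits)

x = np.array([0.5, -1.25, 3.141])
q = to_fixed_point(x)          # integer representation used for fast arithmetic
x_hat = from_fixed_point(q)    # approximate reconstruction
```

Integer arithmetic on `q` is what yields the speed and energy benefits the abstract claims; the reconstruction error is bounded by half the quantization step, here 2^-9.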
  • Patent number: 11734552
    Abstract: A neural processing device is provided. The neural processing device comprises: an activation buffer in which first and second input activations are stored, an activation compressor configured to generate a first compressed input activation by using the first and second input activations, and a tensor unit configured to perform two-dimensional calculations using the first compressed input activation, wherein the first compressed input activation comprises first input row data comprising at least a portion of the first input activation and at least a portion of the second input activation, and first metadata corresponding to the first input row data.
    Type: Grant
    Filed: August 24, 2022
    Date of Patent: August 22, 2023
    Assignee: Rebellions Inc.
    Inventor: Minhoo Kang
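The idea of packing two input activations into one row of data plus metadata, as in the abstract above, can be sketched with a simple zero-compression scheme. The bitmask metadata and function names are hypothetical; the patent does not specify this particular encoding.

```python
import numpy as np

def compress_pair(a, b):
    """Hypothetical zero-compression: keep only nonzero entries of two
    activation vectors in one row; bitmasks serve as metadata."""
    mask_a, mask_b = a != 0, b != 0
    row = np.concatenate([a[mask_a], b[mask_b]])
    return row, (mask_a, mask_b)

def decompress_pair(row, meta):
    """Reconstruct both activation vectors from row data plus metadata."""
    mask_a, mask_b = meta
    a = np.zeros(mask_a.shape)
    b = np.zeros(mask_b.shape)
    na = int(mask_a.sum())
    a[mask_a] = row[:na]
    b[mask_b] = row[na:]
    return a, b

a = np.array([0.0, 1.5, 0.0, 2.0])
b = np.array([3.0, 0.0, 0.0, 0.0])
row, meta = compress_pair(a, b)      # 3 values instead of 8
a2, b2 = decompress_pair(row, meta)
```

A tensor unit operating directly on the compressed row skips the zero entries, which is where the efficiency gain comes from.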
  • Patent number: 11720362
Abstract: An apparatus and method for a tensor permutation engine. The TPE may include a read address generation unit (AGU) to generate a plurality of read addresses for a plurality of tensor data elements in a first storage and a write AGU to generate a plurality of write addresses for the plurality of tensor data elements in the first storage. The TPE may include a shuffle register bank comprising a register to read tensor data elements from the plurality of read addresses generated by the read AGU, a first register bank to receive the tensor data elements, and a shift register to receive a lowest tensor data element from each bank in the first register bank, each tensor data element in the shift register to be written to a write address from the plurality of write addresses generated by the write AGU.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: August 8, 2023
    Assignee: Intel Corporation
    Inventor: Berkin Akin
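The read/write address-generation pattern described in the abstract above can be illustrated for the simplest permutation, a 2-D transpose. This is a behavioral sketch of what paired AGUs compute, not the hardware design; all names are assumptions.

```python
def transpose_agu(rows, cols):
    """Yield (read_addr, write_addr) pairs that realize a 2-D transpose on a
    flat row-major buffer: the element at r*cols + c moves to c*rows + r."""
    for r in range(rows):
        for c in range(cols):
            yield r * cols + c, c * rows + r

def permute(buf, rows, cols):
    """Apply the address pairs: read sequentially, write to permuted slots."""
    out = [0] * len(buf)
    for ra, wa in transpose_agu(rows, cols):
        out[wa] = buf[ra]
    return out

x = list(range(6))        # a 2x3 tensor stored row-major: [[0,1,2],[3,4,5]]
y = permute(x, 2, 3)      # its 3x2 transpose, flattened
```

In hardware, the shuffle register bank buffers the elements between the read stream and the write stream so both AGUs can run with regular, strided access patterns.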
  • Patent number: 11710041
    Abstract: The present disclosure provides a processing device including: a coarse-grained pruning unit configured to perform coarse-grained pruning on a weight of a neural network to obtain a pruned weight, an operation unit configured to train the neural network according to the pruned weight. The coarse-grained pruning unit is specifically configured to select M weights from the weights of the neural network through a sliding window, and when the M weights meet a preset condition, all or part of the M weights may be set to 0. The processing device can reduce the memory access while reducing the amount of computation, thereby obtaining an acceleration ratio and reducing energy consumption.
    Type: Grant
    Filed: November 28, 2019
    Date of Patent: July 25, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
    Inventors: Tianshi Chen, Yifan Hao, Shaoli Liu
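The sliding-window coarse-grained pruning described in the abstract above can be sketched as follows. The "preset condition" is not specified in the abstract; a group L2-norm threshold is assumed here for illustration, and the window is taken as non-overlapping.

```python
import numpy as np

def coarse_grained_prune(weights, window=4, threshold=0.1):
    """Slide a non-overlapping window of M=`window` weights over the flat
    weight vector; if a group meets the condition (assumed: L2 norm below
    `threshold`), zero the entire group."""
    w = weights.copy().ravel()
    for start in range(0, len(w) - window + 1, window):
        group = w[start:start + window]
        if np.linalg.norm(group) < threshold:
            w[start:start + window] = 0.0
    return w.reshape(weights.shape)

w = np.array([0.01, -0.02, 0.015, 0.005, 0.9, -0.8, 0.7, 0.6])
pruned = coarse_grained_prune(w, window=4, threshold=0.1)
```

Zeroing whole groups (rather than individual weights) is what makes the sparsity "coarse-grained": entire blocks of memory accesses and multiplications can be skipped, which is the source of the claimed acceleration and energy savings.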
  • Patent number: 11704575
Abstract: Neural networks can be implemented with DNA strand displacement (DSD) circuits. The neural networks are designed and trained in silico taking into account the behavior of DSD circuits. Oligonucleotides comprising DSD circuits are synthesized and combined to form a neural network. In an implementation, the neural network may be a binary neural network in which the output from each neuron is a binary value and the weight of each neuron either maintains the incoming binary value or flips the binary value. Inputs to the neural network are one or more oligonucleotides such as synthetic oligonucleotides containing digital data or natural oligonucleotides such as mRNA. Outputs from the neural networks may be oligonucleotides that are read by direct sequencing or oligonucleotides that generate signals such as by release of fluorescent reporters.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: July 18, 2023
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Karin Strauss, Luis Ceze, Johannes Staffan Anders Linder
  • Patent number: 11681528
Abstract: An apparatus and method for a tensor permutation engine. The TPE may include a read address generation unit (AGU) to generate a plurality of read addresses for a plurality of tensor data elements in a first storage and a write AGU to generate a plurality of write addresses for the plurality of tensor data elements in the first storage. The TPE may include a shuffle register bank comprising a register to read tensor data elements from the plurality of read addresses generated by the read AGU, a first register bank to receive the tensor data elements, and a shift register to receive a lowest tensor data element from each bank in the first register bank, each tensor data element in the shift register to be written to a write address from the plurality of write addresses generated by the write AGU.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: June 20, 2023
    Assignee: Intel Corporation
    Inventor: Berkin Akin
  • Patent number: 11657258
    Abstract: The present disclosure discloses a neural network processing module, in which a mapping unit is configured to receive an input neuron and a weight, and then process the input neuron and/or the weight to obtain a processed input neuron and a processed weight; and an operation unit is configured to perform an artificial neural network operation on the processed input neuron and the processed weight. Examples of the present disclosure may reduce additional overhead of the device, reduce the amount of access, and improve efficiency of the neural network operation.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: May 23, 2023
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Yao Zhang, Shaoli Liu, Bingrui Wang, Xiaofu Meng
  • Patent number: 11645493
Abstract: Methods and apparatus are disclosed supporting a design flow for developing quantized neural networks. In one example of the disclosed technology, a method includes quantizing a normal-precision floating-point neural network model into a quantized format. For example, the quantized format can be a block floating-point format, where two or more elements of tensors in the neural network share a common exponent. A set of test inputs is applied to the normal-precision floating-point model and the corresponding quantized model, and the respective output tensors are compared. Based on this comparison, hyperparameters or other attributes of the neural networks can be adjusted. Further, quantization parameters determining the widths of data and selection of shared exponents for the block floating-point format can be selected. An adjusted, quantized neural network is retrained and programmed into a hardware accelerator.
    Type: Grant
    Filed: May 4, 2018
    Date of Patent: May 9, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Douglas C. Burger, Eric S. Chung, Bita Darvish Rouhani, Daniel Lo, Ritchie Zhao
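The block floating-point format mentioned in the abstract above, where a block of tensor elements shares one exponent, can be sketched as follows. The exact exponent-selection and mantissa-width rules here are assumptions, not the patented method.

```python
import numpy as np

def to_bfp(block, mantissa_bits=8):
    """Quantize a tensor block to block floating-point: every element shares
    one exponent, derived from the largest magnitude in the block."""
    max_abs = float(np.max(np.abs(block)))
    if max_abs == 0.0:
        return np.zeros(block.shape, dtype=np.int32), 0
    shared_exp = int(np.floor(np.log2(max_abs)))
    scale = 2.0 ** (shared_exp - (mantissa_bits - 2))
    lo, hi = -(1 << (mantissa_bits - 1)), (1 << (mantissa_bits - 1)) - 1
    mantissas = np.clip(np.round(block / scale), lo, hi).astype(np.int32)
    return mantissas, shared_exp

def from_bfp(mantissas, shared_exp, mantissa_bits=8):
    """Reconstruct floats from shared-exponent integer mantissas."""
    return mantissas * 2.0 ** (shared_exp - (mantissa_bits - 2))

x = np.array([0.5, 1.75, -3.0])
m, e = to_bfp(x)        # integer mantissas plus one shared exponent
x_hat = from_bfp(m, e)
```

Because the whole block shares one exponent, dot products reduce to cheap integer multiply-accumulates followed by a single exponent adjustment, which is what makes the format attractive for hardware accelerators.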
  • Patent number: 11640517
    Abstract: Methods, apparatus, and computer-readable media for determining and utilizing corrections to robot actions. Some implementations are directed to updating a local features model of a robot in response to determining a human correction of an action performed by the robot. The local features model is used to determine, based on an embedding generated over a corresponding neural network model, one or more features that are most similar to the generated embedding. Updating the local features model in response to a human correction can include updating a feature embedding, of the local features model, that corresponds to the human correction. Adjustment(s) to the features model can immediately improve robot performance without necessitating retraining of the corresponding neural network model.
    Type: Grant
    Filed: August 30, 2021
    Date of Patent: May 2, 2023
    Assignee: X DEVELOPMENT LLC
    Inventors: Krishna Shankar, Nicolas Hudson, Alexander Toshev
  • Patent number: 11636363
    Abstract: Disclosed embodiments provide techniques for automated technical support based on cognitive capabilities and preferences of a user. A user profile is obtained which includes a skill level assessment. A solution path includes one or more potential solutions for a problem. One or more solutions in the solution path are presented to a user as a potential remedy for a technical problem, based on the cognitive capabilities and preferences of a user.
    Type: Grant
    Filed: February 20, 2018
    Date of Patent: April 25, 2023
    Assignee: International Business Machines Corporation
    Inventors: Shubhadip Ray, Andrew S. Christiansen, Norbert Herman, Avik Sanyal
  • Patent number: 11610095
    Abstract: An energy-efficient sequencer comprising inline multipliers and adders causes a read source that contains matching values to output an enable signal to enable a data item prior to using a multiplier to multiply the data item with a weight to obtain a product for use in a matrix-multiplication in hardware. A second enable signal causes the output to be written to the data item.
    Type: Grant
    Filed: October 1, 2019
    Date of Patent: March 21, 2023
    Assignee: Maxim Integrated Products, Inc.
    Inventors: Mark Alan Lovell, Robert Michael Muchsel, Donald Wood Loomis, III
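The enable-gated multiply described in the abstract above can be illustrated behaviorally. The enable condition here (skip when the data item is zero) is an assumption chosen for clarity; the patent's matching criterion may differ.

```python
def gated_mac(data, weights):
    """Sketch of an enable-gated multiply-accumulate: a multiply is issued
    only when the enable signal fires (assumed: a nonzero data item)."""
    acc = 0
    mults = 0                      # count of multiplies actually issued
    for d, w in zip(data, weights):
        enable = (d != 0)          # enable signal from the read source
        if enable:
            acc += d * w
            mults += 1
    return acc, mults

acc, mults = gated_mac([0, 2, 0, 3], [5, 7, 9, 11])
```

Skipping the disabled multiplies leaves the accumulated result unchanged while saving the energy those operations would have consumed, which is the sequencer's efficiency argument.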
  • Patent number: 11599802
    Abstract: Systems and methods for remote intervention are disclosed herein. The system can include memory including: a user profile database; a content database; and a model database. The system can include a remote device including: a network interface; and an I/O subsystem. The system can include a content management server that can: receive a first electrical signal from the remote device; generate and send an electrical signal to the remote device directing the launch of the content authoring interface; receive a second electrical signal including content received by the content authoring interface from the remote device; identify a plurality of response demands in the received content; determine a level of the received content based on the identified plurality of response demands; determine the acceptability of the received content based on the identified plurality of response demands; and generate and send an alert to the remote device.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: March 7, 2023
    Inventors: Stephen F. Ferrara, Amy A. Reilly, Jeffrey T. Steedle, Amy L. Kinsman, Roger S. Frantz
  • Patent number: 11593232
    Abstract: A method for verifying a calculation of a neuron value of multiple neurons of a neural network, including: carrying out or triggering a calculation of neuron functions of the multiple neurons, in each case to obtain a neuron value, the neuron functions being determined by individual weightings for each neuron input; calculating a first comparison value as the sum of the neuron values of the multiple neurons; carrying out or triggering a control calculation with one or multiple control neuron functions and with all neuron inputs of the multiple neurons, to obtain a second comparison value as a function of the neuron inputs of the multiple neurons and of the sum of the weightings of the multiple neurons assigned to the respective neuron input; and recognizing an error as a function of the first comparison value and of the second comparison value.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: February 28, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Andre Guntoro, Armin Runge, Christoph Schorn, Sebastian Vogel, Jaroslaw Topp, Juergen Schirmer
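The verification scheme in the abstract above exploits a checksum identity: for linear neuron functions, the sum of all neuron outputs equals the output of a single control neuron whose weight vector is the sum of all the neurons' weight vectors. A minimal sketch, assuming a plain linear layer with no activation function:

```python
import numpy as np

def verify_layer(W, x, tol=1e-9):
    """Checksum for a linear neuron layer: sum of per-neuron outputs (first
    comparison value) must match the control neuron built from the summed
    weights (second comparison value)."""
    first = (W @ x).sum()          # sum of the individually computed neurons
    second = W.sum(axis=0) @ x     # control neuron with per-input weight sums
    return abs(first - second) < tol

W = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.array([0.5, -1.0])
ok = verify_layer(W, x)                       # fault-free case

W_faulty = W.copy()
W_faulty[0, 0] += 0.5                         # inject a computation fault
fault_detected = abs((W_faulty @ x).sum() - W.sum(axis=0) @ x) > 1e-9
```

Any single-neuron computation error shifts the first comparison value but not the second, so the mismatch exposes the fault at the cost of one extra dot product.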
  • Patent number: 11586884
    Abstract: A diffusive memristor device and an electronic device for emulating a biological neuron is disclosed. The diffusive memristor device includes a bottom electrode, a top electrode formed opposite the bottom electrode, and a dielectric layer disposed between the top electrode and the bottom electrode. The dielectric layer comprises an oxide doped with a metal.
    Type: Grant
    Filed: February 8, 2019
    Date of Patent: February 21, 2023
    Assignee: University of Massachusetts
    Inventors: Jianhua Yang, Qiangfei Xia, Mark McLean, Qing Wu
  • Patent number: 11574162
    Abstract: A system and method for evaluating the performance and usage of a cognitive computing tool which answers questions from users. A log file for these interactions includes the questions, the answers and a confidence rating assigned by the tool to each answer. Questions and answers are analyzed to determine validity, accuracy, and categories by subject matter experts or text analytics tools, and the results are added to the log file. Comments and sentiments from users may be analyzed and added to the log file. Additional data about the users, such as identities, demographics, and locations, may be added. Data from the log file may be presented in a dashboard display as metrics, such as trends and comparisons, describing the usage and performance of the cognitive computing tool. Answers may be displayed as they were presented to the users. Selectable filters may be provided to control the data displayed.
    Type: Grant
    Filed: May 18, 2017
    Date of Patent: February 7, 2023
    Assignee: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
    Inventors: Sunitha Garapati, Matt Floyd, Darcy Bogle, Oscar Rebollo Martinez, Shawn Perrone, Adam Hellman
  • Patent number: 11574031
Abstract: Disclosed is a method for convolution calculation in a neural network, comprising: reading an input feature map, depthwise convolution kernels and pointwise convolution kernels from a dynamic random access memory (DRAM); performing depthwise convolution calculations and pointwise convolution calculations according to the input feature map, the depthwise convolution kernels and the pointwise convolution kernels to obtain output feature values of a first predetermined number p of points on all pointwise convolution output channels; storing the output feature values of the first predetermined number p of points on all pointwise convolution output channels into an on-chip memory, wherein the first predetermined number p is determined according to at least one of available space in the on-chip memory, a number of the depthwise convolution calculation units, and width, height and channel dimensions of the input feature map; and repeating the above operations to obtain output feature values of all points on all pointwise convolution output channels.
    Type: Grant
    Filed: December 17, 2018
    Date of Patent: February 7, 2023
    Assignee: Nanjing Horizon Robotics Technology Co., Ltd.
    Inventors: Liang Chen, Chang Huang, Kun Ling, Jianjun Li, Delin Li, Heng Luo
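The depthwise-then-pointwise computation in the abstract above can be sketched with a naive reference implementation. This shows the arithmetic only (valid padding, stride 1), not the patent's on-chip tiling scheme; shapes and names are assumptions.

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_kernels):
    """x: (C, H, W); dw_kernels: (C, k, k); pw_kernels: (C_out, C).
    Depthwise: each channel is convolved with its own kxk kernel (valid
    mode); pointwise: a 1x1 convolution then mixes channels."""
    C, H, W = x.shape
    k = dw_kernels.shape[1]
    Ho, Wo = H - k + 1, W - k + 1
    dw_out = np.zeros((C, Ho, Wo))
    for c in range(C):
        for i in range(Ho):
            for j in range(Wo):
                dw_out[c, i, j] = np.sum(x[c, i:i+k, j:j+k] * dw_kernels[c])
    # pointwise 1x1 conv as a matrix product over the channel dimension
    pw_out = pw_kernels @ dw_out.reshape(C, -1)
    return pw_out.reshape(-1, Ho, Wo)

x = np.ones((2, 3, 3))
dw = np.ones((2, 2, 2))
pw = np.array([[1.0, 1.0]])
y = depthwise_separable_conv(x, dw, pw)
```

Computing only p output points at a time, as the abstract describes, lets the intermediate depthwise results stay on-chip instead of round-tripping through DRAM.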
  • Patent number: 11568217
Abstract: Provided are embodiments for a computer-implemented method, a system, and a computer program product for updating analog crossbar arrays. The embodiments include receiving a number, used in matrix multiplication, that is to be represented using pulse generation for a crossbar array, and receiving a first bit-length to represent the number, wherein the first bit-length is modifiable. The embodiments also include selecting pulse positions in a pulse sequence having the first bit-length to represent the number, performing a computation using the selected pulse positions in the pulse sequence, and updating the crossbar array using the computation.
    Type: Grant
    Filed: July 15, 2020
    Date of Patent: January 31, 2023
    Assignee: International Business Machines Corporation
    Inventors: Seyoung Kim, Oguzhan Murat Onen, Tayfun Gokmen, Malte Johannes Rasch
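The pulse-position encoding in the abstract above can be sketched as follows. The evenly-spread position choice and the coincidence-based update rule are assumptions for illustration; the patent's actual selection scheme is not specified here.

```python
import numpy as np

def pulses_for(number, bit_length):
    """Pick pulse positions in a sequence of `bit_length` slots so the duty
    cycle approximates `number` in [0, 1] (assumed: evenly-spread positions)."""
    n_pulses = int(round(number * bit_length))
    seq = np.zeros(bit_length, dtype=int)
    if n_pulses:
        positions = np.linspace(0, bit_length - 1, n_pulses).round().astype(int)
        seq[positions] = 1
    return seq

def outer_update(a, b, bit_length=8):
    """Crossbar-style update: coincidence of a row pulse train and a column
    pulse train approximates the product a*b seen by the cross-point cell."""
    pa, pb = pulses_for(a, bit_length), pulses_for(b, bit_length)
    return np.sum(pa & pb) / bit_length

seq = pulses_for(0.5, 8)          # 4 pulses spread over 8 slots
approx = outer_update(0.75, 1.0)  # coincidence count approximates 0.75
```

A modifiable bit-length trades update precision against the time spent pulsing, which is why the embodiments treat it as a tunable parameter.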