Patents Examined by Brian J Hales
  • Patent number: 12198058
    Abstract: Systems and methods for a tightly coupled end-to-end multi-sensor fusion with integrated compensation are described herein. For example, a system includes an inertial measurement unit that produces inertial measurements. Additionally, the system includes additional sensors that produce additional measurements. Further, the system includes one or more memory units. Moreover, the system includes one or more processors configured to receive the inertial measurements and the additional measurements. Additionally, the one or more processors are configured to compensate the inertial measurements with a compensation model stored on the one or more memory units. Also, the one or more processors are configured to fuse the inertial measurements with the additional measurements using a differential filter that applies filter coefficients stored on the one or more memory units.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: January 14, 2025
    Assignee: Honeywell International Inc.
    Inventors: Alberto Speranzon, Andrew Stewart, Shashank Shivkumar
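A minimal sketch of the compensate-then-fuse flow described in patent 12198058 above, assuming a simple linear bias/scale compensation model and a single fixed-gain correction step; the actual compensation model and differential filter coefficients live on the memory units and are not specified in the abstract, so every value below is a placeholder.

```python
# Hypothetical sketch of "compensate the inertial measurements, then fuse with
# the additional measurements". The linear compensation and the fixed fusion
# gain are illustrative assumptions, not the patented filter.
import numpy as np

compensation = {"bias": np.array([0.01, -0.02, 0.005]),   # stored on the memory units
                "scale": np.array([1.001, 0.999, 1.002])}
filter_gain = 0.02                                         # stored filter coefficient

def compensate(imu_measurement):
    """Apply the stored compensation model to a raw inertial measurement."""
    return compensation["scale"] * (imu_measurement - compensation["bias"])

def fuse(state, imu_measurement, aiding_measurement, dt):
    """Propagate with the compensated inertial data, then correct with the aiding sensor."""
    predicted = state + compensate(imu_measurement) * dt
    return predicted + filter_gain * (aiding_measurement - predicted)

state = np.zeros(3)
state = fuse(state, np.array([0.1, 0.0, 9.8]), np.array([0.001, 0.0, 0.098]), dt=0.01)
print(state)
```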
  • Patent number: 12190236
    Abstract: Methods, computer systems, and apparatus, including computer programs encoded on computer storage media, for predicting one or more properties of a material. One of the methods includes maintaining data specifying a set of known materials each having a respective known physical structure; receiving data specifying a new material; identifying a plurality of known materials in the set of known materials that are similar to the new material; determining a predicted embedding of the new material from at least respective embeddings corresponding to each of the similar known materials; and processing the predicted embedding of the new material using an experimental prediction neural network to predict one or more properties of the new material.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: January 7, 2025
    Assignee: DeepMind Technologies Limited
    Inventors: Annette Ada Nkechinyere Obika, Tian Xie, Victor Constant Bapst, Alexander Lloyd Gaunt, James Kirkpatrick
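A rough Python sketch of the predicted-embedding step in patent 12190236: embeddings of known materials judged similar to the new material are combined, and the combined embedding feeds a property predictor. The nearest-neighbour similarity, the plain mean, and the linear "prediction network" are all stand-ins for whatever the patent actually uses.

```python
# Hypothetical sketch: find similar known materials, average their embeddings,
# and predict a property from the averaged embedding. All values are invented.
import numpy as np

known_embeddings = {"NaCl": np.array([0.2, 0.9]), "KCl": np.array([0.25, 0.85]),
                    "Si":   np.array([0.9, 0.1])}

def similar_materials(new_descriptor, k=2):
    """Pick the k known materials closest to the new material's descriptor."""
    dists = {name: np.linalg.norm(emb - new_descriptor)
             for name, emb in known_embeddings.items()}
    return sorted(dists, key=dists.get)[:k]

def predicted_embedding(new_descriptor):
    """Combine embeddings of the similar known materials (here: a plain mean)."""
    names = similar_materials(new_descriptor)
    return np.mean([known_embeddings[n] for n in names], axis=0)

def predict_property(embedding, w=np.array([1.5, -0.3]), b=0.1):
    """Stand-in for the experimental prediction neural network."""
    return float(w @ embedding + b)

print(predict_property(predicted_embedding(np.array([0.22, 0.88]))))
```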
  • Patent number: 12182701
    Abstract: The present invention discloses a memory and a training method for a neural network based on the memory. The training method includes: obtaining one or more transfer functions of a memory corresponding to one or more influence factors; determining a training plan according to an ideal case and the one or more influence factors; training the neural network according to the training plan and the one or more transfer functions to obtain a plurality of weights of the trained neural network; and programming the memory according to the weights.
    Type: Grant
    Filed: July 29, 2021
    Date of Patent: December 31, 2024
    Assignee: MACRONIX INTERNATIONAL CO., LTD.
    Inventors: Yu-Hsuan Lin, Po-Kai Hsu, Ming-Liang Wei
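A toy illustration of the memory-aware training idea in patent 12182701, assuming a single transfer function (a retention-style decay) and a crude finite-difference gradient descent; the real influence factors, transfer functions, and training plan are not described in the abstract.

```python
# Hypothetical sketch: evaluate the loss through the memory's transfer function
# so the weights that are eventually programmed tolerate the memory's behaviour.
import numpy as np

def transfer_fn(weights, drift=0.05):
    """Toy transfer function: the memory cell reads back a slightly decayed weight."""
    return (1.0 - drift) * weights

def loss(weights, x, y):
    effective = transfer_fn(weights)          # train against the weight the memory will return
    return float(np.mean((x @ effective - y) ** 2))

rng = np.random.default_rng(0)
w = rng.normal(size=3)
x, y = rng.normal(size=(32, 3)), rng.normal(size=32)
for _ in range(200):                          # crude finite-difference gradient descent
    grad = np.array([(loss(w + 1e-4 * e, x, y) - loss(w, x, y)) / 1e-4 for e in np.eye(3)])
    w -= 0.1 * grad
programmed_weights = w                        # values that would be programmed into the memory
print(programmed_weights)
```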
  • Patent number: 12175359
    Abstract: An apparatus for training and inferencing a neural network includes circuitry that is configured to generate a first weight having a first format including a first number of bits based at least in part on a second weight having a second format including a second number of bits and a residual having a third format including a third number of bits. The second number of bits and the third number of bits are each less than the first number of bits. The circuitry is further configured to update the second weight based at least in part on the first weight and to update the residual based at least in part on the updated second weight and the first weight. The circuitry is further configured to update the first weight based at least in part on the updated second weight and the updated residual.
    Type: Grant
    Filed: September 3, 2019
    Date of Patent: December 24, 2024
    Assignee: International Business Machines Corporation
    Inventors: Xiao Sun, Jungwook Choi, Naigang Wang, Chia-Yu Chen, Kailash Gopalakrishnan
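A small numerical sketch of the weight/residual scheme in patent 12175359, with float32 standing in for the first (wider) format and float16 for the second and third (narrower) formats; the actual bit widths and update rules are the patent's and are not reproduced here.

```python
# Hypothetical sketch: keep a high-precision weight as a low-precision weight
# plus a low-precision residual, and refresh both after each update.
import numpy as np

w_hi = np.float32(0.1234567)                       # first format (more bits)
w_lo = np.float16(w_hi)                            # second format (fewer bits)
residual = np.float16(w_hi - np.float32(w_lo))     # third format (fewer bits)

def training_step(w_lo, residual, grad, lr=np.float32(0.01)):
    w_hi = np.float32(w_lo) + np.float32(residual) # reconstruct the first-format weight
    w_hi = w_hi - lr * grad                        # apply the update in high precision
    w_lo = np.float16(w_hi)                        # updated second weight
    residual = np.float16(w_hi - np.float32(w_lo)) # updated residual captures the rounding error
    return w_lo, residual

w_lo, residual = training_step(w_lo, residual, grad=np.float32(0.5))
print(np.float32(w_lo) + np.float32(residual))     # close to the full-precision result
```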
  • Patent number: 12169763
    Abstract: Techniques are disclosed for providing a scalable multi-tenant serving pool for chatbot systems. A query serving system (QSS) receives a request to serve a query for a skillbot. The QSS includes: (i) a plurality of deployments in a serving pool, and (ii) a plurality of deployments in a free pool. The QSS determines whether a first deployment from the plurality of deployments in the serving pool can serve the query based on an identifier of the skillbot. In response to determining that the first deployment cannot serve the query, the QSS selects a second deployment from the plurality of deployments in the free pool to be assigned to the skillbot, and loads a machine-learning model associated with the skillbot into the second deployment, wherein the machine-learning model is trained to serve the query for the skillbot. The query is served using the machine-learning model loaded into the second deployment.
    Type: Grant
    Filed: April 13, 2021
    Date of Patent: December 17, 2024
    Assignee: Oracle International Corporation
    Inventors: Vishal Vishnoi, Suman Mallapura Somasundar, Xin Xu, Stevan Malesevic
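A minimal sketch of the serving-pool/free-pool routing described in patent 12169763. The dictionaries, deployment names, and model-loading call are invented placeholders, not Oracle's API.

```python
# Hypothetical sketch: route a skillbot query to an assigned deployment, or
# promote a deployment from the free pool and load the skillbot's model first.
serving_pool = {}          # skillbot_id -> deployment dict
free_pool = [{"name": "dep-a", "model": None}, {"name": "dep-b", "model": None}]

def serve_query(skillbot_id, query):
    deployment = serving_pool.get(skillbot_id)
    if deployment is None:                        # no serving-pool deployment can serve this skillbot
        deployment = free_pool.pop()              # select a deployment from the free pool
        deployment["model"] = f"model-for-{skillbot_id}"   # stand-in for loading the trained model
        serving_pool[skillbot_id] = deployment
    return f'{deployment["name"]} answers "{query}" with {deployment["model"]}'

print(serve_query("order-pizza-skill", "what sizes do you have?"))
print(serve_query("order-pizza-skill", "any vegan options?"))   # reuses the assigned deployment
```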
  • Patent number: 12169793
    Abstract: A system and method for controlling a system, comprising estimating an optimal control policy for the system; receiving data representing sequential states and associated trajectories of the system, comprising off-policy states and associated off-policy trajectories; improving the estimate of the optimal control policy by performing at least one approximate value iteration, comprising: estimating a value of operation of the system dependent on the estimated optimal control policy; using a complex return of the received data, biased by the off-policy states, to determine a bound dependent on at least the off-policy trajectories, and using the bound to improve the estimate of the value of operation of the system according to the estimated optimal control policy; and updating the estimate of the optimal control policy, dependent on the improved estimate of the value of operation of the system.
    Type: Grant
    Filed: November 16, 2020
    Date of Patent: December 17, 2024
    Assignee: The Research Foundation for The State University of New York
    Inventors: Robert Wright, Lei Yu, Steven Loscalzo
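A heavily hedged sketch of one way a return computed from logged off-policy trajectories can act as a bound that improves a bootstrapped value estimate during approximate value iteration, in the spirit of patent 12169793; the patent's actual complex return and bound are not reproduced here.

```python
# Hypothetical sketch: the return observed along a logged trajectory bounds the
# bootstrapped value target from below, and the larger of the two is kept.
import numpy as np

gamma = 0.9
V = np.zeros(3)                                       # value estimate per state
trajectory = [(0, 1.0, 1), (1, 0.0, 2), (2, 5.0, 2)]  # (state, reward, next_state)

def trajectory_return(traj, start):
    """Discounted return observed from the logged off-policy trajectory."""
    return sum(gamma ** k * r for k, (_, r, _) in enumerate(traj[start:]))

for sweep in range(50):
    for i, (s, r, s_next) in enumerate(trajectory):
        bootstrapped = r + gamma * V[s_next]          # standard approximate value iteration target
        bound = trajectory_return(trajectory, i)
        V[s] = max(bootstrapped, bound)               # use the trajectory-derived bound to improve the estimate
print(V)
```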
  • Patent number: 12147906
    Abstract: Methods, systems, and computer program products for localization-based test generation for individual fairness testing of AI models are provided herein. A computer-implemented method includes obtaining at least one artificial intelligence model and training data related to the at least one artificial intelligence model; identifying one or more boundary regions associated with the at least one artificial intelligence model based at least in part on results of processing at least a portion of the training data using the at least one artificial intelligence model; generating, in accordance with at least one of the one or more identified boundary regions, one or more synthetic data points for inclusion with the training data; and executing one or more fairness tests on the at least one artificial intelligence model using at least a portion of the one or more generated synthetic data points and at least a portion of the training data.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: November 19, 2024
    Assignee: International Business Machines Corporation
    Inventors: Diptikalyan Saha, Aniya Aggarwal, Sandeep Hans
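A toy sketch of boundary-localized individual fairness testing as described in patent 12147906: training points whose model score sits near the decision threshold define the boundary region, synthetic near-duplicates that differ only in a protected attribute are generated there, and prediction flips are counted. The linear model and the attributes are assumptions for illustration.

```python
# Hypothetical sketch: synthesize test points near the decision boundary and
# check whether flipping only the protected attribute changes the prediction.
def score(x):
    return 0.8 * x["income"] - 0.1 * x["gender"]

def model(x):                                   # stand-in for the AI model under test
    return 1 if score(x) > 0.5 else 0

training_data = [{"income": i / 10, "gender": g} for i in range(10) for g in (0, 1)]

boundary = [x for x in training_data if abs(score(x) - 0.5) < 0.1]   # boundary region

synthetic = []
for x in boundary:                              # synthetic points that only flip the protected attribute
    twin = dict(x, gender=1 - x["gender"])
    synthetic.append((x, twin))

violations = [(a, b) for a, b in synthetic if model(a) != model(b)]
print(f"{len(violations)} individual-fairness violations near the boundary")
```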
  • Patent number: 12141238
    Abstract: Discussed herein are devices, systems, and methods for classification using a clustering autoencoder. A method can include obtaining content to be classified by a deep neural network (DNN) classifier, and operating the DNN classifier to determine a classification of the received content, the DNN classifier including a clustering classification layer that clusters based on a latent feature vector representation of the content, the classification corresponding to one or more clusters that are closest to the latent feature vector, which provide the classification and a corresponding confidence.
    Type: Grant
    Filed: October 27, 2020
    Date of Patent: November 12, 2024
    Assignee: Raytheon Company
    Inventors: Philip A. Sallee, James Mullen
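A small sketch of classification with a clustering layer as in patent 12141238: a stand-in encoder maps content to a latent feature vector, and the closest class centroid supplies the label plus a distance-based confidence. The fixed projection and the centroids are invented for illustration.

```python
# Hypothetical sketch: nearest cluster centroid in latent space provides the
# classification; a softmax over negative distances provides a confidence.
import numpy as np

encoder_weights = np.array([[0.5, -0.2], [0.1, 0.9], [0.3, 0.3]])   # stand-in for the trained encoder
centroids = {"cat": np.array([0.9, 0.1]), "dog": np.array([0.1, 0.9])}

def classify(content):
    latent = content @ encoder_weights            # latent feature vector representation
    dists = {label: np.linalg.norm(latent - c) for label, c in centroids.items()}
    label = min(dists, key=dists.get)             # closest cluster provides the classification
    weights = np.exp(-np.array(list(dists.values())))
    confidence = float(np.exp(-dists[label]) / weights.sum())
    return label, confidence

print(classify(np.array([1.0, 0.0, 0.5])))
```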
  • Patent number: 12136039
    Abstract: Some embodiments provide a method for training multiple parameters of a machine-trained (MT) network subject to a sparsity constraint that requires a threshold portion of the parameters to be equal to zero. A first set of the parameters subject to the sparsity constraint is grouped into groups of parameters. For each parameter of a second set of the parameters subject to the sparsity constraint, the method determines an accuracy penalty associated with the parameter being set to zero. For each group of parameters in the first set of parameters, the method determines a minimum accuracy penalty for each possible number of parameters in the group being set to zero. The method uses the determined accuracy penalties to set at least the threshold portion of the parameters to zero.
    Type: Grant
    Filed: July 7, 2020
    Date of Patent: November 5, 2024
    Assignee: PERCEIVE CORPORATION
    Inventors: Eric A. Sather, Steven L. Teig
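A toy sketch of the accuracy-penalty bookkeeping in patent 12136039. It conflates the patent's first and second parameter sets, approximates each parameter's accuracy penalty by its magnitude, and allocates zeros greedily; the real penalty measure and allocation procedure are not given in the abstract.

```python
# Hypothetical sketch: per-group tables of the minimum penalty for zeroing k
# parameters, followed by a greedy allocation that meets the sparsity threshold.
import numpy as np

rng = np.random.default_rng(1)
groups = [rng.normal(size=4) for _ in range(3)]      # parameters, in groups of 4
threshold = 0.75                                      # fraction of parameters required to be zero

def min_penalty_per_count(group):
    """Minimum accuracy penalty for zeroing k parameters of the group, for every k."""
    sorted_pen = np.sort(np.abs(group))
    return np.concatenate(([0.0], np.cumsum(sorted_pen)))

tables = [min_penalty_per_count(g) for g in groups]   # one table per group

# Greedy allocation: repeatedly zero the parameter whose incremental penalty is smallest.
counts = [0] * len(groups)
need = int(np.ceil(threshold * sum(len(g) for g in groups)))
for _ in range(need):
    increments = [tables[i][counts[i] + 1] - tables[i][counts[i]]
                  if counts[i] < len(groups[i]) else np.inf
                  for i in range(len(groups))]
    counts[int(np.argmin(increments))] += 1
print("parameters zeroed per group:", counts)
```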
  • Patent number: 12112264
    Abstract: A device which comprises an array of resistive processing unit (RPU) cells, first control lines extending in a first direction across the array of RPU cells, and second control lines extending in a second direction across the array of RPU cells. Peripheral circuitry comprising readout circuitry is coupled to the first and second control lines. A control system generates control signals to control the peripheral circuitry to perform a first operation and a second operation on the array of RPU cells. The control signals include a first configuration control signal to configure the readout circuitry to have a first hardware configuration when the first operation is performed on the array of RPU cells, and a second configuration control signal to configure the readout circuitry to have a second hardware configuration, which is different from the first hardware configuration, when the second operation is performed on the array of RPU cells.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: October 8, 2024
    Assignee: International Business Machines Corporation
    Inventors: Malte Johannes Rasch, Tayfun Gokmen, Seyoung Kim
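A speculative software model of the reconfigurable readout in patent 12112264: the same readout path is parameterized differently (here, gain and integration steps) depending on which array operation is being performed. The two operations and their configurations are purely illustrative, not the patent's circuit details.

```python
# Hypothetical sketch: a configuration control signal selects a different
# readout setup for each operation performed on the RPU array.
import numpy as np

rpu_array = np.random.default_rng(2).uniform(-1, 1, size=(4, 4))   # conductance-like weights

READOUT_CONFIGS = {
    "forward_pass":  {"gain": 1.0, "integration_steps": 1},
    "weight_update": {"gain": 4.0, "integration_steps": 8},
}

def readout(column_currents, operation):
    cfg = READOUT_CONFIGS[operation]            # configuration control signal selects the hardware setup
    acc = sum(column_currents for _ in range(cfg["integration_steps"]))
    return cfg["gain"] * acc / cfg["integration_steps"]

x = np.array([1.0, 0.0, 0.5, -0.5])
print(readout(x @ rpu_array, "forward_pass"))
print(readout(x @ rpu_array, "weight_update"))
```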
  • Patent number: 12106491
    Abstract: Embodiments of this application disclose a target tracking method performed at an electronic device. The electronic device obtains a first video stream and detects candidate regions within a current video frame in the first video stream. The electronic device then extracts, from the candidate regions, a deep feature corresponding to each candidate region and calculates, for each candidate region, a feature similarity between its deep feature and a deep feature of a target detected in a previous video frame. Finally, the electronic device determines, based on the feature similarity corresponding to each candidate region, that the target is detected in the current video frame. Target detection is performed in a range of video frames by using a target detection model, and target tracking is performed based on the deep feature, so that cases such as a target tracking drift or loss can be effectively prevented, ensuring the accuracy of target tracking.
    Type: Grant
    Filed: October 6, 2020
    Date of Patent: October 1, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Hao Zhang, Zhiwei Niu
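A minimal sketch of the matching step in patent 12106491: each candidate region's deep feature is compared with the target feature from the previous frame, and the most similar candidate above a threshold is taken as the tracked target. The hard-coded feature vectors stand in for the real deep feature extractor.

```python
# Hypothetical sketch: cosine similarity between candidate features and the
# previous target feature; the best match above a threshold is the target.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

previous_target_feature = np.array([0.9, 0.1, 0.4])
candidates = {                                  # candidate region -> deep feature (stand-in values)
    "region_1": np.array([0.8, 0.2, 0.5]),
    "region_2": np.array([0.1, 0.9, 0.0]),
}

similarities = {r: cosine(f, previous_target_feature) for r, f in candidates.items()}
best = max(similarities, key=similarities.get)
if similarities[best] > 0.7:                    # threshold guards against drift or loss
    print(f"target tracked in {best} (similarity {similarities[best]:.2f})")
else:
    print("target lost in current frame")
```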
  • Patent number: 12079704
    Abstract: A system includes a data collection engine, a plurality of items including radio-frequency identification chips, a plurality of third party data and insight sources, a plurality of interfaces, client devices, a server and method thereof for preventing suicide. The server includes trained machine learning models, business logic and attributes of a plurality of patient events. The data collection engine sends attributes of new patient events to the server. The server can predict an adverse event risk of the new patient events based upon the attributes of the new patient events utilizing the trained machine learning models.
    Type: Grant
    Filed: October 31, 2022
    Date of Patent: September 3, 2024
    Assignee: Brain Trust Innovations I, LLC
    Inventor: David LaBorde
  • Patent number: 12067479
    Abstract: Systems and methods for heterogeneous hardware acceleration are disclosed. The systems and methods can include a neural network processing unit comprising compute tiles. Each of a first set of the compute tiles can include a first tensor array configured to support operations in a first number format. Each of a second set of the compute tiles can include a second tensor array configured to support operations in a second number format, the second number format supporting a greater range or a greater precision than the first number format, and a de-quantizer configured to convert data in the first number format to data in the second number format. The systems and methods can include neural network processing units, multi-chip hardware accelerators and distributed hardware accelerators including low-precision components for performing inference tasks and high-precision components for performing training tasks.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: August 20, 2024
    Assignee: T-Head (Shanghai) Semiconductor Co., Ltd.
    Inventor: Liang Han
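A small sketch of the de-quantizer between the two tile sets in patent 12067479: a low-precision tile produces integer results, and a de-quantizer converts them into the wider floating-point format used by the higher-precision tiles. The shared scale factor is an assumption for illustration.

```python
# Hypothetical sketch: int8 tile output is de-quantized to float32 before it
# is handed to the tiles that support the wider number format.
import numpy as np

scale = 0.05                                     # quantization scale shared by the tiles (assumed)

def low_precision_tile(x_int8, w_int8):
    """First-format tile: int8 multiply-accumulate (e.g. for inference tasks)."""
    return (x_int8.astype(np.int32) @ w_int8.astype(np.int32)).astype(np.int32)

def dequantize(acc_int32):
    """Convert first-format data to the second, wider format (float32 here)."""
    return acc_int32.astype(np.float32) * scale * scale

x = np.array([10, -3, 7], dtype=np.int8)
w = np.array([[1, 2], [3, -4], [5, 6]], dtype=np.int8)
high_precision_input = dequantize(low_precision_tile(x, w))   # handed to the high-precision tiles
print(high_precision_input)
```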
  • Patent number: 11995533
    Abstract: Some embodiments provide a method for executing a layer of a neural network, for a circuit that restricts a number of weight values used per layer. The method applies a first set of weights to a set of inputs to generate a first set of results. The first set of weights are restricted to a first set of allowed values. For each of one or more additional sets of weights, the method applies the respective additional set of weights to the same set of inputs to generate a respective additional set of results. The respective additional set of weights is restricted to a respective additional set of allowed values that is related to the first set of allowed values and the other additional sets of allowed values. The method generates outputs for the particular layer by combining the first set of results with each respective additional set of results.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: May 28, 2024
    Assignee: PERCEIVE CORPORATION
    Inventors: Eric A. Sather, Steven L. Teig
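A toy sketch of applying several restricted-value weight sets to the same inputs and combining the results, as in patent 11995533. The ternary allowed values and the halving scale that relates the sets are assumptions; the abstract does not fix them.

```python
# Hypothetical sketch: each weight set is restricted to a small set of allowed
# values, each is applied to the same inputs, and the partial results are summed.
import numpy as np

inputs = np.array([0.5, -1.0, 2.0])
weight_sets = [
    (1.0, np.array([[1, 0, -1], [0, 1, 1]])),    # allowed values {-1, 0, 1} * 1.0
    (0.5, np.array([[0, 1, 0],  [1, 0, -1]])),   # related allowed values {-0.5, 0, 0.5}
]

partial_results = [scale * (w @ inputs) for scale, w in weight_sets]   # one result set per weight set
layer_output = np.sum(partial_results, axis=0)                         # combine the result sets
print(layer_output)
```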
  • Patent number: 11948073
    Abstract: Systems, apparatuses, and methods for adaptively mapping a machine learning model to a multi-core inference accelerator engine are disclosed. A computing system includes a multi-core inference accelerator engine with multiple inference cores coupled to a memory subsystem. The system also includes a control unit which determines how to adaptively map a machine learning model to the multi-core inference accelerator engine. In one implementation, the control unit selects a mapping scheme which minimizes the memory bandwidth utilization of the multi-core inference accelerator engine. In one implementation, this mapping scheme involves having one inference core of the multi-core inference accelerator engine fetch given data and broadcast the given data to other inference cores of the inference accelerator engine. Each inference core fetches second data unique to the respective inference core.
    Type: Grant
    Filed: August 30, 2018
    Date of Patent: April 2, 2024
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Lei Zhang, Sateesh Lagudu, Allen Rush
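A minimal sketch of the bandwidth-minimizing mapping in patent 11948073: one inference core fetches the shared data from memory and broadcasts it, while each core fetches only the data unique to it. The fetch counter just makes the memory-traffic saving visible; all names are invented.

```python
# Hypothetical sketch: broadcast the shared data once, fetch per-core data
# individually, and count external memory accesses.
NUM_CORES = 4
memory_fetches = 0

def fetch_from_memory(name):
    global memory_fetches
    memory_fetches += 1
    return f"<{name}>"

shared_weights = fetch_from_memory("layer_weights")        # fetched once by a single core
broadcast = [shared_weights] * NUM_CORES                    # broadcast to the other inference cores

per_core_inputs = [fetch_from_memory(f"input_tile_{i}") for i in range(NUM_CORES)]  # unique data

results = [f"core{i}: conv({per_core_inputs[i]}, {broadcast[i]})" for i in range(NUM_CORES)]
print(results)
print(f"memory fetches: {memory_fetches} (vs {2 * NUM_CORES} if every core fetched the weights)")
```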
  • Patent number: 11948074
    Abstract: Disclosed is a processor-implemented data processing method in a neural network. A data processing apparatus includes at least one processor, and at least one memory configured to store instructions to be executed by the processor and a neural network, wherein the processor is configured to, based on the instructions, input an input activation map into a current layer included in the neural network, output an output activation map by performing a convolution operation between the input activation map and a weight quantized with a first representation bit number of the current layer, and output a quantized activation map by quantizing the output activation map with a second representation bit number based on an activation quantization parameter.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: April 2, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sangil Jung, Changyong Son, Seohyung Lee, Jinwoo Son, Chang Kyu Choi
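A small sketch of the per-layer flow in patent 11948074: a convolution with weights quantized to a first bit width, followed by quantization of the output activation map to a second bit width using an activation quantization parameter. The uniform quantizer, the 1-D convolution, and the 4/8-bit choice are illustrative assumptions.

```python
# Hypothetical sketch: quantized-weight convolution, then activation quantization.
import numpy as np

def quantize(x, bits, max_val):
    levels = 2 ** bits - 1
    return np.clip(np.round(x / max_val * levels), 0, levels) * max_val / levels

weights = quantize(np.random.default_rng(3).uniform(0, 1, size=(3,)), bits=4, max_val=1.0)
input_activation = np.array([0.2, 0.7, 0.1, 0.9, 0.4])

# 1-D "convolution" between the input activation map and the 4-bit weights.
output_activation = np.convolve(input_activation, weights, mode="valid")

activation_quant_param = float(output_activation.max())      # activation quantization parameter (assumed)
quantized_activation = quantize(output_activation, bits=8, max_val=activation_quant_param)
print(quantized_activation)
```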
  • Patent number: 11907827
    Abstract: Methods and systems include a neural network system that includes a neural network accelerator. The neural network accelerator includes multiple processing engines coupled together to perform arithmetic operations in support of an inference performed using the deep neural network system. The neural network accelerator also includes schedule-aware tensor data distribution circuitry or software that is configured to load tensor data into the multiple processing engines in a load phase, extract output data from the multiple processing engines in an extraction phase, reorganize the extracted output data, and store the reorganized extracted output data to memory.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: February 20, 2024
    Assignee: Intel Corporation
    Inventors: Gautham Chinya, Huichu Liu, Arnab Raha, Debabrata Mohapatra, Cormac Brick, Lance Hacking
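A toy model of the load, extraction, and reorganization phases in patent 11907827, with processing engines represented as a list of buffers. The row-tile split and the transpose used as the "reorganization" are assumptions for illustration.

```python
# Hypothetical sketch: load tiles into engines, compute, extract the outputs,
# reorganize them, and store the result back to memory.
import numpy as np

tensor = np.arange(16, dtype=np.float32).reshape(4, 4)
engines = [None] * 4

# Load phase: distribute one row tile of the tensor to each processing engine.
for i in range(4):
    engines[i] = tensor[i]

# Compute: each engine performs its arithmetic operation (a toy scaling here).
outputs = [2.0 * tile for tile in engines]

# Extraction phase: pull results out of the engines, then reorganize before storing.
extracted = np.stack(outputs)             # engine-major layout
reorganized = extracted.T.copy()          # e.g. back to the layout main memory expects
memory = reorganized                      # store the reorganized output data
print(memory)
```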
  • Patent number: 11886982
    Abstract: In a data processing system, at least one processing node is configured to perform computations for a multi-stage process whilst at least one other processor performs the load/unload operations required to calculate a subsequent stage of the multi-stage process. An exchange of data then occurs between the processing nodes. At a later time, at least one processing node performs calculations using the data loaded from storage, whilst at least one other processor performs the load/unload operations required to calculate a subsequent stage of the multi-stage process.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: January 30, 2024
    Assignee: GRAPHCORE LIMITED
    Inventors: Ola Torudbakken, Lorenzo Cevolani
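A minimal simulation of the overlap described in patent 11886982: in each phase one node computes the current stage while the other performs the load/unload work for the next stage, after which the nodes exchange data and swap roles. The phase structure and node names are invented.

```python
# Hypothetical sketch: alternate compute and load roles between two nodes so
# that storage traffic is hidden behind computation.
stages = ["stage0", "stage1", "stage2", "stage3"]
roles = {"node_A": "compute", "node_B": "load"}

for i, stage in enumerate(stages):
    compute_node = [n for n, r in roles.items() if r == "compute"][0]
    load_node = [n for n, r in roles.items() if r == "load"][0]
    print(f"phase {i}: {compute_node} computes {stage}; "
          f"{load_node} loads/unloads data for {stages[min(i + 1, len(stages) - 1)]}")
    print(f"phase {i}: nodes exchange data, then swap roles")
    roles[compute_node], roles[load_node] = "load", "compute"
```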
  • Patent number: 11875247
    Abstract: An acceleration engine with multiple accelerators may share a common set of data that is used by each accelerator to perform computations on input data. The set of shared data can be loaded into the acceleration engine from an external memory. Instead of accessing the external memory multiple times to load the set of shared data into each accelerator, the external memory can be accessed once using direct memory access to load the set of shared data into the first accelerator. The set of shared data can then be serially loaded from one accelerator to the next accelerator in the acceleration engine using direct memory access. To achieve data parallelism and reduce computation time, a runtime driver may split the input data into data batches, and each accelerator can perform computations on a different batch of input data with the common set of shared data.
    Type: Grant
    Filed: June 18, 2020
    Date of Patent: January 16, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Richard John Heaton, Ron Diamant
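A small sketch of the loading pattern in patent 11875247: the shared data is read from external memory once into the first accelerator, then copied serially from accelerator to accelerator, while the runtime splits the input into one batch per accelerator. The counter and the round-robin batch split are illustrative choices.

```python
# Hypothetical sketch: one external-memory DMA for the shared data, serial
# accelerator-to-accelerator copies, and per-accelerator input batches.
NUM_ACCELERATORS = 3
external_memory_reads = 0

def dma_from_external_memory(data):
    global external_memory_reads
    external_memory_reads += 1
    return list(data)

shared_weights = [0.1, 0.2, 0.3]
accelerators = [{"weights": None, "batch": None} for _ in range(NUM_ACCELERATORS)]

accelerators[0]["weights"] = dma_from_external_memory(shared_weights)   # single external access
for i in range(1, NUM_ACCELERATORS):                                    # serial accelerator-to-accelerator copy
    accelerators[i]["weights"] = list(accelerators[i - 1]["weights"])

inputs = list(range(12))
for i, acc in enumerate(accelerators):                                  # runtime driver splits the input into batches
    acc["batch"] = inputs[i::NUM_ACCELERATORS]

print(external_memory_reads, [a["batch"] for a in accelerators])
```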
  • Patent number: 11868870
    Abstract: A neuromorphic apparatus configured to process a multi-bit neuromorphic operation includes a single axon circuit, a single synaptic circuit, a single neuron circuit, and a controller. The single axon circuit is configured to receive, as a first input, an i-th bit of an n-bit axon. The single synaptic circuit is configured to store, as a second input, a j-th bit of an m-bit synaptic weight and output a synaptic operation value between the first input and the second input. The single neuron circuit is configured to obtain each bit value of a multi-bit neuromorphic operation result between the n-bit axon and the m-bit synaptic weight, based on the output synaptic operation value. The controller is configured to respectively determine the i-th bit and the j-th bit to be sequentially assigned for each time period of different time periods to the single axon circuit and the single synaptic circuit.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: January 9, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sungho Kim, Cheheung Kim, Jaeho Lee
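A worked arithmetic sketch of the bit-serial scheme in patent 11868870: in each time period the axon circuit holds one bit of the n-bit axon and the synaptic circuit one bit of the m-bit weight, their AND is the synaptic operation value, and the neuron circuit accumulates it with the matching binary weight to rebuild the full multi-bit product. The 4-bit operands are example values.

```python
# Hypothetical sketch: rebuild an n-bit-by-m-bit product from single-bit
# synaptic operations spread over successive time periods.
n, m = 4, 4
axon, weight = 0b1011, 0b0110                   # 11 and 6

result = 0
for i in range(n):                              # controller assigns the i-th axon bit per time period
    axon_bit = (axon >> i) & 1
    for j in range(m):                          # and the j-th synaptic-weight bit
        weight_bit = (weight >> j) & 1
        synaptic_value = axon_bit & weight_bit  # single-bit synaptic operation value
        result += synaptic_value << (i + j)     # neuron circuit accumulates the weighted partial product

print(result, axon * weight)                    # both print 66
```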