Learning Method Patents (Class 706/25)
  • Patent number: 11991156
    Abstract: A system and method are disclosed for providing an averaging of models for federated learning and blind learning systems. The method includes selecting, at a server, a generator g and a number p; transmitting, to at least two (n) client devices, the generator g and the number p; receiving, from each client device i of the at least two client devices, a respective value k_i = g^(r_i) mod p; and transmitting the set of respective values k_i to each client device i of the at least two client devices, where a respective added group of shares is generated on each client device i. The method includes receiving each respective added group of shares from each client device i of the at least two client devices, adding all the respective added groups of shares to make a global sum of shares, and dividing the global sum of shares by n.
    Type: Grant
    Filed: September 7, 2022
    Date of Patent: May 21, 2024
    Assignee: TripleBlind, Inc.
    Inventors: Babak Poorebrahim Gilkalaye, Gharib Gharibi, Ravi Patel, Greg Storm, Riddhiman Das
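A minimal sketch of the masked-averaging idea described in the abstract above, assuming toy values: a demo-sized prime p and generator g, a float mask derived naively from each Diffie-Hellman value, and three clients averaging a single scalar each. It is illustrative only, not the patented protocol.
```python
import random

p, g = 2_147_483_647, 5                  # assumption: demo-sized prime and generator
values = [3.0, 5.0, 10.0]                # each client's local quantity (e.g., one model weight)
n = len(values)

# 1) each client i picks a secret r_i and publishes k_i = g^(r_i) mod p via the server
secrets = [random.randrange(2, p - 1) for _ in range(n)]
public = [pow(g, r, p) for r in secrets]

# 2) client i derives a pairwise mask with every other client j from the shared
#    Diffie-Hellman value k_j^(r_i) mod p, adding it with a sign that cancels in the sum
def added_group_of_shares(i):
    share = values[i]
    for j in range(n):
        if j == i:
            continue
        pairwise = pow(public[j], secrets[i], p)   # identical for (i, j) and (j, i)
        mask = (pairwise % 10_000) / 100.0         # toy derivation of a float mask
        share += mask if i < j else -mask
    return share

# 3) the server adds all the groups of shares into a global sum and divides by n
global_sum = sum(added_group_of_shares(i) for i in range(n))
print(global_sum / n)                              # ~6.0, the average, without seeing raw values
```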
  • Patent number: 11989634
    Abstract: Embodiments described herein provide for a non-transitory machine-readable medium storing instructions to cause one or more processors to perform operations comprising receiving a machine learning model from a server at a client device, training the machine learning model using local data at the client device, generating an update for the machine learning model, the update including a weight vector that represents a difference between the received machine learning model and the trained machine learning model, privatizing the update for the machine learning model, and transmitting the privatized update for the machine learning model to the server.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: May 21, 2024
    Assignee: Apple Inc.
    Inventors: Abhishek Bhowmick, John Duchi, Julien Freudiger, Gaurav Kapoor, Ryan M. Rogers
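A hedged sketch of the privatize-and-transmit step described above: the update is the weight-vector difference between the trained and received models, and privatization is illustrated here with norm clipping plus Gaussian noise. The clip_norm and noise_scale parameters are assumptions, not values from the patent.
```python
import numpy as np

def privatize_update(received_weights, trained_weights, clip_norm=1.0, noise_scale=0.5):
    """Clip the client's update and add noise before it leaves the device (illustrative)."""
    update = trained_weights - received_weights          # weight-vector difference
    norm = np.linalg.norm(update)
    if norm > clip_norm:                                 # bound each client's influence
        update = update * (clip_norm / norm)
    noisy = update + np.random.normal(0.0, noise_scale * clip_norm, size=update.shape)
    return noisy                                         # this is what gets transmitted

server_model = np.zeros(4)                               # model received from the server
local_model = np.array([0.2, -0.1, 0.4, 0.05])           # after training on local data
print(privatize_update(server_model, local_model))
```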
  • Patent number: 11983620
    Abstract: The simplification of neural network models is described. For example, a method for simplifying a neural network model includes providing the neural network model to be simplified, defining a first temporal filter for the conveyance of input from a second neuron to a first, spatially-extended neuron along an arborized projection, defining a second temporal filter for the conveyance of input from a third neuron to the first, spatially-extended neuron along the arborized projection, and replacing, in the neural network model, the first, spatially-extended neuron with a first, spatially-constrained neuron and the arborized projection with a first connection extending between the first, spatially-constrained neuron and the second neuron, wherein the first connection filters input from the second neuron in accordance with the first temporal filter, and a second connection extending between the first, spatially-constrained neuron and the third neuron.
    Type: Grant
    Filed: April 8, 2022
    Date of Patent: May 14, 2024
    Assignee: Ecole Polytechnique Federale De Lausanne (EPFL)
    Inventors: Henry Markram, Wulfram Gerstner, Marc-Oliver Gewaltig, Christian Rössert, Eilif Benjamin Muller, Christian Pozzorini, Idan Segev, James Gonzalo King, Csaba Erö, Willem Wybo
  • Patent number: 11983609
    Abstract: An end-to-end cloud-based machine learning platform provides two pipelines: a first machine learning pipeline performs data transformation, and a second machine learning pipeline optimizes those data transformations. The first pipeline transforms raw data into model features, and features into machine learning models; it provides training, inference, and experimentation for online and off-line models that personalize experiences for game players. The second pipeline optimizes models generated by the first pipeline by leveraging a reinforcement learning (RL) model and an evolution strategy (ES) model. Through experimentation, the second pipeline learns with its RL model which models perform best. To improve the training of new models, the second pipeline also transfers what it has learned from its RL model to its ES model, which guides the training of new models in the first pipeline. The second pipeline can be considered an overlay on the first.
    Type: Grant
    Filed: July 10, 2019
    Date of Patent: May 14, 2024
    Assignee: Sony Interactive Entertainment LLC
    Inventor: Serge-Paul Carrasco
  • Patent number: 11977916
    Abstract: A neural network processing unit (NPU) includes a processing element array, an NPU memory system configured to store at least a portion of data of an artificial neural network model processed in the processing element array, and an NPU scheduler configured to control the processing element array and the NPU memory system based on artificial neural network model structure data or artificial neural network data locality information.
    Type: Grant
    Filed: December 31, 2020
    Date of Patent: May 7, 2024
    Assignee: DEEPX CO., LTD.
    Inventor: Lok Won Kim
  • Patent number: 11977626
    Abstract: A method for securing a genuine machine learning model against adversarial samples includes the steps of attaching a trigger to a sample to be classified and classifying the sample with the trigger attached using a backdoored model that has been backdoored using the trigger. In a further step, it is determined whether an output of the backdoored model is the same as a backdoor class of the backdoored model, and/or an outlier detection method is applied to logits compared to honest logits that were computed using a genuine sample. These steps are repeated using different triggers and backdoored models respectively associated therewith. The number of times that an output of the backdoored models is not the same as the respective backdoor class, and/or a difference determined by applying the outlier detection method, is compared against one or more thresholds so as to determine whether the sample is adversarial.
    Type: Grant
    Filed: June 9, 2021
    Date of Patent: May 7, 2024
    Assignee: NEC CORPORATION
    Inventors: Sebastien Andreina, Giorgia Azzurra Marson, Ghassan Karame
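A rough sketch of the detection loop described above: each detector attaches its trigger, classifies with its backdoored model, and the number of outputs that miss the backdoor class is compared against a threshold. The detectors list, the toy trigger, and the toy backdoored model are all illustrative assumptions.
```python
def is_adversarial(sample, detectors, mismatch_threshold=1):
    """detectors: list of (attach_trigger, backdoored_model, backdoor_class) triples."""
    mismatches = 0
    for attach_trigger, backdoored_model, backdoor_class in detectors:
        triggered = attach_trigger(sample)           # stamp the trigger onto the sample
        prediction = backdoored_model(triggered)     # classify with the backdoored model
        if prediction != backdoor_class:             # a benign sample should land in the
            mismatches += 1                          # backdoor class; adversarial ones often don't
    return mismatches >= mismatch_threshold

# toy detector: the "trigger" sets the first feature to 1, and the backdoored model returns
# class 0 whenever that feature is 1 unless the rest of the sample is heavily perturbed
detectors = [(lambda x: [1.0] + x[1:],
              lambda x: 0 if x[0] == 1.0 and sum(x[1:]) < 10 else 1,
              0)]
print(is_adversarial([0.2, 0.3, 0.1], detectors))    # False: benign sample
print(is_adversarial([0.2, 30.0, 5.0], detectors))   # True: large perturbation breaks the backdoor
```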
  • Patent number: 11977842
    Abstract: A computing system generates a plurality of training data sets for generating an NLP (natural language processing) model. The computing system trains a teacher network to extract and classify tokens from a document. The training includes a pre-training stage, where the teacher network is trained to classify generic data in the plurality of training data sets, and a fine-tuning stage, where the teacher network is trained to classify targeted data in the plurality of training data sets. The computing system trains a student network to extract and classify tokens from a document by distilling knowledge learned by the teacher network during the fine-tuning stage from the teacher network to the student network. The computing system outputs the NLP model based on the training and causes the NLP model to be deployed in a remote computing environment.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: May 7, 2024
    Assignee: INTUIT INC.
    Inventors: Dominic Miguel Rossi, Hui Fang Lee, Tharathorn Rimchala
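A small sketch of a generic teacher-to-student distillation objective of the kind the abstract describes, mixing soft teacher targets with hard labels. The temperature T, mixing weight alpha, and exact loss form are assumptions, not the patent's specification.
```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, hard_label, T=2.0, alpha=0.5):
    """Blend KL to the teacher's softened distribution with cross-entropy on the hard label."""
    soft_teacher = softmax(teacher_logits, T)
    soft_student = softmax(student_logits, T)
    kl = np.sum(soft_teacher * (np.log(soft_teacher + 1e-12) - np.log(soft_student + 1e-12)))
    ce = -np.log(softmax(student_logits)[hard_label] + 1e-12)
    return alpha * (T * T) * kl + (1 - alpha) * ce   # teacher knowledge + ground-truth labels

print(distillation_loss([1.0, 0.2, -0.5], [2.0, 0.1, -1.0], hard_label=0))
```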
  • Patent number: 11972344
    Abstract: A method, system, and computer program product include generating, using a linear probe, confidence scores from flattened intermediate representations, and applying theoretically-justified weighting of samples, based on the confidence scores of those intermediate representations, during training of a simple model.
    Type: Grant
    Filed: November 28, 2018
    Date of Patent: April 30, 2024
    Assignee: International Business Machines Corporation
    Inventors: Amit Dhurandhar, Karthikeyan Shanmugam, Ronny Luss, Peder Andreas Olsen
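A hedged sketch of probe-based sample weighting: linear probes fit on flattened intermediate representations yield per-sample confidence scores, which then weight the training of a simple model (weighted ridge regression here). The probe and weighting choices are illustrative, not the patented method.
```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)

# stand-ins for flattened intermediate representations of a larger network
reps = [X, np.tanh(X @ rng.normal(size=(3, 4)))]

def probe_confidence(rep, y):
    """A 'linear probe': least-squares fit on a representation; closeness acts as confidence."""
    w, *_ = np.linalg.lstsq(rep, y, rcond=None)
    pred = rep @ w
    return 1.0 - np.abs(pred - y)

# weight samples by their mean probe confidence, then train the simple model on weighted data
weights = np.clip(np.mean([probe_confidence(r, y) for r in reps], axis=0), 0.0, 1.0)
W = np.diag(weights)
simple_w = np.linalg.solve(X.T @ W @ X + 1e-6 * np.eye(3), X.T @ W @ y)
print(simple_w)
```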
  • Patent number: 11973743
    Abstract: Disclosed is a process for testing a suspect model to determine whether it was derived from a source model. An example method includes receiving, from a model owner node, a source model and a fingerprint associated with the source model, receiving a suspect model at a service node, based on a request to test the suspect model, applying the fingerprint to the suspect model to generate an output and, when the output has an accuracy that is equal to or greater than a threshold, determining that the suspect model is derived from the source model. Imperceptible noise can be used to generate the fingerprint which can cause predictable outputs from the source model and a potential derivative thereof.
    Type: Grant
    Filed: December 12, 2022
    Date of Patent: April 30, 2024
    Assignee: TRIPLEBLIND, INC.
    Inventors: Gharib Gharibi, Babak Poorebrahim Gilkalaye, Riddhiman Das
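A minimal sketch of the fingerprint test described above: the fingerprint is a set of probe inputs with the source model's expected outputs, and a suspect model whose agreement meets the threshold is flagged as derived. The toy models, probe inputs, and 0.9 threshold are assumptions.
```python
def is_derived(suspect_model, fingerprint, threshold=0.9):
    """fingerprint: list of (probe_input, expected_output) pairs built from the source model,
    e.g., inputs carrying imperceptible noise that provokes predictable outputs."""
    hits = sum(suspect_model(x) == expected for x, expected in fingerprint)
    return hits / len(fingerprint) >= threshold

source = lambda x: int(x > 0.5)
derived = lambda x: int(x > 0.45)          # lightly fine-tuned copy of the source model
unrelated = lambda x: int(x < 0.2)
fingerprint = [(v / 10, source(v / 10)) for v in range(10)]
print(is_derived(derived, fingerprint))    # True: behaves like the source on the probes
print(is_derived(unrelated, fingerprint))  # False: agreement falls below the threshold
```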
  • Patent number: 11972348
    Abstract: Embodiments of the present disclosure relate to a texture unit circuit in a neural processor circuit. The neural processor circuit includes a tensor access operation circuit with the texture unit circuit, a data processor circuit, and at least one neural engine circuit. The texture unit circuit fetches a source tensor from a system memory by referencing an index tensor in the system memory representing indexing information into the source tensor. The data processor circuit stores an output version of the source tensor obtained from the tensor access operation circuit and sends the output version of the source tensor as multiple of units of input data to the at least one neural engine circuit. The at least one neural engine circuit performs at least convolution operations on the units of input data and at least one kernel to generate output data.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: April 30, 2024
    Assignee: APPLE INC.
    Inventor: Christopher L. Mills
  • Patent number: 11972349
    Abstract: In one embodiment, a method for machine learning acceleration includes receiving instructions to perform convolution on an input tensor using a filter tensor, determining that the size of a first dimension of the input tensor is less than a processing capacity of each of multiple subarrays of computation units in a tensor processor, selecting a second dimension of the input tensor along which to perform the convolution, selecting, based on the second dimension, one or more dimensions of the filter tensor, generating (1) first instructions for reading, using vector read operations, activation elements in the input tensor organized such that elements with different values in the second dimension are stored contiguously in memory, and (2) second instructions for reading weights of the filter tensor along the selected one or more dimensions, and using the first and second instructions to provide the activation elements and the weights to the subarrays.
    Type: Grant
    Filed: November 12, 2020
    Date of Patent: April 30, 2024
    Assignee: Meta Platforms, Inc.
    Inventors: Liangzhen Lai, Yu Hsin Chen, Vikas Chandra
  • Patent number: 11966451
    Abstract: A method for optimizing a deep learning operator includes: calling a method of reading an image object to read target data from an L1 cache of an image processor to the processor in response to detecting the target data in the L1 cache, performing a secondary quantization operation on the target data in the processor to obtain an operation result, and writing the operation result into a main memory of the image processor. The target data is fixed-point data obtained by performing a quantization operation on data to be quantized in advance, and the data to be quantized is one of the following: floating-point data of an initial network layer of the neural network model, or fixed-point data outputted from a network layer previous to the current network layer.
    Type: Grant
    Filed: September 22, 2021
    Date of Patent: April 23, 2024
    Assignee: BEIJING XIAOMI PINECONE ELECTRONICS CO., LTD.
    Inventor: Bin Li
  • Patent number: 11966832
    Abstract: A method includes receiving a first data set comprising embeddings of first and second types, generating a fixed adjacency matrix from the first dataset, and applying a first stochastic binary mask to the fixed adjacency matrix to obtain a first subgraph of the fixed adjacency matrix. The method also includes processing the first subgraph through a first layer of a graph convolutional network (GCN) to obtain a first embedding matrix, and applying a second stochastic binary mask to the fixed adjacency matrix to obtain a second subgraph of the fixed adjacency matrix. The method includes processing the first embedding matrix and the second subgraph through a second layer of the GCN to obtain a second embedding matrix, and then determining a plurality of gradients of a loss function, and modifying the first stochastic binary mask and the second stochastic binary mask using at least one of the plurality of gradients.
    Type: Grant
    Filed: July 2, 2021
    Date of Patent: April 23, 2024
    Assignee: Visa International Service Association
    Inventors: Huiyuan Chen, Yu-San Lin, Lan Wang, Michael Yeh, Fei Wang, Hao Yang
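A forward-pass sketch of the masked-GCN idea described above, assuming NumPy and toy dimensions: stochastic binary masks carve two subgraphs out of a fixed adjacency matrix, each feeding one GCN layer. The gradient-based update of the mask probabilities (e.g., via a straight-through estimator) is omitted.
```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, d_in, d_hidden, d_out = 6, 4, 8, 2

A = (rng.random((n_nodes, n_nodes)) < 0.4).astype(float)   # fixed adjacency matrix
A = np.maximum(A, A.T)                                      # symmetrize for the demo
X = rng.normal(size=(n_nodes, d_in))                        # embeddings of both types, stacked
W1 = rng.normal(size=(d_in, d_hidden))
W2 = rng.normal(size=(d_hidden, d_out))

def sample_mask(probs):
    """Stochastic binary mask over edges; in training, probs would be adjusted with gradients."""
    return (rng.random(probs.shape) < probs).astype(float)

mask_probs1 = np.full(A.shape, 0.8)
mask_probs2 = np.full(A.shape, 0.8)

subgraph1 = A * sample_mask(mask_probs1)         # first subgraph of the fixed adjacency
H1 = np.maximum(subgraph1 @ X @ W1, 0.0)         # first GCN layer -> first embedding matrix
subgraph2 = A * sample_mask(mask_probs2)         # second subgraph
H2 = subgraph2 @ H1 @ W2                         # second GCN layer -> second embedding matrix
print(H2.shape)                                  # (6, 2)
```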
  • Patent number: 11960934
    Abstract: A method and system for computing one or more outputs of a neural network having a plurality of layers is provided. The method and system can include determining a plurality of sub-computations from total computations of the neural network to execute in parallel, wherein the computations to execute in parallel involve computations from multiple layers. The method and system can also include avoiding repeating overlapped computations and/or multiple memory reads and writes during execution.
    Type: Grant
    Filed: August 8, 2022
    Date of Patent: April 16, 2024
    Assignee: NEURALMAGIC, INC.
    Inventors: Alexander Matveev, Nir Shavit
  • Patent number: 11960981
    Abstract: Systems and methods for model evaluation. A model is evaluated by performing a decomposition process for a model output, relative to a baseline input data set.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: April 16, 2024
    Assignee: ZESTFINANCE, INC.
    Inventors: Douglas C. Merrill, Michael Edward Ruberry, Ozan Sayin, Bojan Tunguz, Lin Song, Esfandiar Alizadeh, Melanie Eunique DeBruin, Yachen Yan, Derek Wilcox, John Candido, Benjamin Anthony Solecki, Jiahuan He, Jerome Louis Budzik, Armen Avedis Donigian, Eran Dvir, Sean Javad Kamkar, Vishwaesh Rajiv, Evan George Kriminger
  • Patent number: 11953874
    Abstract: Embodiments of the present disclosure provide an Industrial Internet of Things system for inspection operation management of an inspection robot, and a method thereof. The system includes a user platform, a service platform, a management platform, a sensor network platform, and an object platform that interact sequentially from top to bottom. The management platform is configured to perform operations including: determining an inspection task, the inspection task including detecting at least one detection site; sending instructions to an inspection robot based on the inspection task to move the inspection robot to a target position to be inspected; and obtaining detection data from the inspection robot and determining subsequent detection or processing operations based on the detection data.
    Type: Grant
    Filed: March 16, 2023
    Date of Patent: April 9, 2024
    Assignee: CHENGDU QINCHUAN IOT TECHNOLOGY CO., LTD.
    Inventors: Zehua Shao, Haitang Xiang, Junyan Zhou, Yaqiang Quan, Xiaojun Wei
  • Patent number: 11948693
    Abstract: The present disclosure provides a traditional Chinese medicine (TCM) syndrome classification method based on multi-graph attention. The method comprehensively considers the contributions of symptoms and syndrome elements to syndrome classification by constructing a graph structure: it integrates a symptom-symptom graph and a symptom-syndrome-element graph into the classification, uses a multi-graph attention network to aggregate the features of symptoms and syndrome elements, and finally performs syndrome classification through a multi-layer perceptron. Extensive experiments on real data sets verify the effectiveness of the multi-graph attention network and show that it achieves more accurate classification and better classification results.
    Type: Grant
    Filed: June 20, 2023
    Date of Patent: April 2, 2024
    Assignee: NANJING DAJING TCM INFORMATION TECHNOLOGY CO. LTD
    Inventors: Jing Zhao, Wenyou Li, Zhaoyang Jiang, Jie Yin, Ying Chen
  • Patent number: 11948092
    Abstract: A brain-inspired cognitive learning method obtains good learning results across various environments and tasks by selecting the most suitable algorithm models and parameters for those environments and tasks, and can correct wrong behavior. The framework includes four main modules: a cognitive feature extraction module, a cognitive control module, a learning network module, and a memory module. The memory module includes a data base, a cognitive case base, and an algorithm and hyper-parameter base, which store data of dynamic environments and tasks, cognitive cases, and concrete algorithms and hyper-parameter values, respectively. For dynamic environments and tasks, the most suitable algorithm model and hyper-parameter combination can be flexibly selected. In addition, on the principle that "good money drives out bad," mislabeled data is corrected using correctly labeled data to make the training data robust.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: April 2, 2024
    Assignee: Nanjing University of Aeronautics and Astronautics
    Inventors: Qihui Wu, Tianchen Ruan, Shijin Zhao, Fuhui Zhou, Yang Huang
  • Patent number: 11941505
    Abstract: An information processing method implemented by a computer includes: executing a generation processing that includes generating a first mini-batch by performing data extension processing on learning data and generating a second mini-batch without performing the data extension processing on the learning data; and executing a learning processing by using a neural network, the learning processing being configured to perform first learning by using the first mini-batch, and then perform second learning by using the second mini-batch.
    Type: Grant
    Filed: April 29, 2020
    Date of Patent: March 26, 2024
    Assignee: FUJITSU LIMITED
    Inventors: Akihiro Tabuchi, Akihiko Kasagi
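A toy sketch of the two-stage schedule described above: first learning on mini-batches built with data extension (augmentation), then second learning on mini-batches built without it. The augmentation and the placeholder model step are assumptions.
```python
import random

def augment(x):
    return x + random.uniform(-0.1, 0.1)      # toy data-extension (augmentation) step

def make_batches(data, batch_size, extend):
    data = data[:]
    random.shuffle(data)
    batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    return [[augment(x) if extend else x for x in b] for b in batches]

def train(model_step, data, batch_size=4):
    # first learning: mini-batches produced *with* the data extension processing
    for batch in make_batches(data, batch_size, extend=True):
        model_step(batch)
    # second learning: mini-batches produced *without* the data extension processing
    for batch in make_batches(data, batch_size, extend=False):
        model_step(batch)

train(lambda batch: None, [float(i) for i in range(16)])   # placeholder training step
```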
  • Patent number: 11934365
    Abstract: A system and method of autonomous data hub processing that uses semantic metadata, machine learning models, and a permissioned blockchain to autonomously standardize supply chain data and identify and correct errors in it is disclosed. Embodiments input supply chain data stored in a supply chain database, train, with the machine learning model trainer, one or more machine learning models to identify one or more data errors in the supply chain data, clean the one or more identified data errors from the supply chain data, and store the cleaned supply chain data. Embodiments also update one or more machine learning models to identify one or more data errors in cleaned supply chain data, and join and aggregate one or more sets of cleaned supply chain data.
    Type: Grant
    Filed: December 27, 2021
    Date of Patent: March 19, 2024
    Assignee: Blue Yonder Group, Inc.
    Inventor: Rubesh Mehta
  • Patent number: 11935326
    Abstract: A face recognition method based on an evolutionary convolutional neural network is provided. The method optimizes the design of convolutional neural network architecture and the initialization of connection weights by using a genetic algorithm and finds an optimal neural network through continuous evolutionary calculation, thus reducing dependence on artificial experience during the design of the convolutional neural network architecture. The method encodes the convolutional neural networks by using a variable-length genetic encoding algorithm, so as to improve the diversity of structures of convolutional neural networks. Additionally, in order to cross over extended chromosomes, structural units at corresponding positions are separately crossed over and then recombined, thereby realizing the crossover of chromosomes with different lengths.
    Type: Grant
    Filed: March 16, 2021
    Date of Patent: March 19, 2024
    Assignee: SICHUAN UNIVERSITY
    Inventors: Yanan Sun, Siyi Li
  • Patent number: 11934945
    Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency, such as accuracy of learning, accuracy of prediction, speed of learning, performance of learning, and energy efficiency of learning. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Each compute element has processing resources and memory resources. Each router enables communication via wavelets with at least nearest neighbors in a 2D mesh. Stochastic gradient descent, mini-batch gradient descent, and continuous propagation gradient descent are techniques usable to train weights of a neural network modeled by the processing elements. Reverse checkpoint is usable to reduce memory usage during the training.
    Type: Grant
    Filed: February 23, 2018
    Date of Patent: March 19, 2024
    Assignee: Cerebras Systems Inc.
    Inventors: Sean Lie, Michael Morrison, Michael Edwin James, Gary R. Lauterbach, Srikanth Arekapudi
  • Patent number: 11934943
    Abstract: The present invention discloses a two-dimensional photonic neural network convolutional acceleration chip based on a series connection structure, which integrates a modulator, M microring delay weighting units, M−1 secondary delay waveguides, a wavelength-division multiplexer, and a photodetector.
    Type: Grant
    Filed: August 24, 2023
    Date of Patent: March 19, 2024
    Assignee: ZHEJIANG LAB
    Inventors: Qingshui Guo, Kun Yin
  • Patent number: 11928708
    Abstract: Dynamic campaign optimization systems and methods may be used to continuously test many alternative campaign configurations while allowing all configurations, including configurations formerly identified as successful and unsuccessful, to be re-tested in order to identify successful configurations that may previously have been identified as unsuccessful.
    Type: Grant
    Filed: March 21, 2017
    Date of Patent: March 12, 2024
    Assignee: SYSTEMI OPCO, LLC
    Inventors: Nathan R. Janos, Sanjeev M. Rao, John W. Meacham, III, Gyu-Ho Lee
  • Patent number: 11928602
    Abstract: Lifelong Deep Neural Network (L-DNN) technology revolutionizes Deep Learning by enabling fast, post-deployment learning without extensive training, heavy computing resources, or massive data storage. It uses a representation-rich, DNN-based subsystem (Module A) with a fast-learning subsystem (Module B) to learn new features quickly without forgetting previously learned features. Compared to a conventional DNN, L-DNN uses much less data to build robust networks, dramatically shorter training time, and learning on-device instead of on servers. It can add new knowledge without re-training or storing data. As a result, an edge device with L-DNN can learn continuously after deployment, eliminating massive costs in data collection and annotation, memory and data storage, and compute power. This fast, local, on-device learning can be used for security, supply chain monitoring, disaster and emergency response, and drone-based inspection of infrastructure and properties, among other applications.
    Type: Grant
    Filed: May 9, 2018
    Date of Patent: March 12, 2024
    Assignee: Neurala, Inc.
    Inventors: Matthew Luciw, Santiago Olivera, Anatoly Gorshechnikov, Jeremy Wurbs, Heather Marie Ames, Massimiliano Versace
  • Patent number: 11922169
    Abstract: A method and apparatus for performing refactored multiply-and-accumulate operations is provided. A summing array includes a plurality of non-volatile memory elements arranged in columns. Each non-volatile memory element in the summing array is programmed to a high resistance state or a low resistance state based on weights of a neural network. The summing array is configured to generate a summed signal for each column based, at least in part, on a plurality of input signals. A multiplying array is coupled to the summing array, and includes a plurality of non-volatile memory elements. Each non-volatile memory element in the multiplying array is programmed to a different conductance level based on the weights of the neural network. The multiplying array is configured to generate an output signal based, at least in part, on the summed signals from the summing array.
    Type: Grant
    Filed: February 17, 2022
    Date of Patent: March 5, 2024
    Assignee: Arm Limited
    Inventors: Matthew Mattina, Shidhartha Das, Glen Arnold Rosendale, Fernando Garcia Redondo
  • Patent number: 11922314
    Abstract: Methods and apparatuses that generate a simulation object for a physical system are described. The simulation object includes a trained computing structure to determine future output data of the physical system in real time. The computing structure is trained with a plurality of input units and one or more output units. The plurality of input units include regular input units to receive input data and output data of the physical system. The output units include one or more regular output units to predict a dynamic rate of change of the input data over a period of time. The input data and output data of the physical system are obtained for training the computing structure: the input data represent a dynamic input excitation to the physical system over the period of time, and the output data represent a dynamic output response of the physical system to the dynamic input excitation over that period.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: March 5, 2024
    Assignee: ANSYS, INC.
    Inventors: Mohamed Masmoudi, Christelle Boichon-Grivot, Valéry Morgenthaler, Michel Rochette
  • Patent number: 11922051
    Abstract: A system for an artificial neural network (ANN) includes a processor configured to output a memory control signal including an ANN data locality; a main memory in which data of an ANN model corresponding to the ANN data locality is stored; and a memory controller configured to receive the memory control signal from the processor and to control the main memory based on the memory control signal. The memory controller may be further configured to control, based on the memory control signal, a read or write operation of data of the main memory required for operation of the artificial neural network. Thus, the system optimizes an ANN operation of the processor by utilizing the ANN data locality of the ANN model, which operates at a processor-memory level.
    Type: Grant
    Filed: October 29, 2021
    Date of Patent: March 5, 2024
    Assignee: DEEPX CO., LTD.
    Inventor: Lok Won Kim
  • Patent number: 11921868
    Abstract: A device configured to provide access to a digital document to a user device and to receive an access request for a first masked data element within the digital document. The device is further configured to generate a first blockchain transaction that identifies a machine learning model that is stored in a blockchain. The device is further configured to publish the first blockchain transaction in a blockchain ledger for the blockchain and to receive a second blockchain transaction from the machine learning model in response to publishing the blockchain transaction in the blockchain ledger. The second transaction indicates whether the user is approved for accessing the masked data element. The device is further configured to provide access to the first masked data element on the user device for the user in response to determining that the user is approved for accessing the masked data element.
    Type: Grant
    Filed: October 4, 2021
    Date of Patent: March 5, 2024
    Assignee: Bank of America Corporation
    Inventor: Raja Arumugam Maharaja
  • Patent number: 11922303
    Abstract: Embodiments described herein provide a training mechanism that transfers the knowledge from a trained BERT model into a much smaller model to approximate the behavior of BERT. Specifically, the BERT model may be treated as a teacher model, and a much smaller student model may be trained using the same inputs to the teacher model and the output from the teacher model. In this way, the student model can be trained within a much shorter time than the BERT teacher model, but with performance comparable to BERT.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: March 5, 2024
    Assignee: Salesforce, Inc.
    Inventors: Wenhao Liu, Ka Chun Au, Shashank Harinath, Bryan McCann, Govardana Sachithanandam Ramachandran, Alexis Roos, Caiming Xiong
  • Patent number: 11914462
    Abstract: Methods and systems are disclosed herein for using anomaly detection in timeseries data of user sentiment to detect incidents in computing systems and identify events within an enterprise. An anomaly detection system may receive social media messages that include a timestamp indicating when each message was published. The system may generate sentiment identifiers for the social media messages. The sentiment identifiers and timestamps associated with the social media messages may be used to generate a timeseries dataset for each type of sentiment identifier. The timeseries datasets may be input into an anomaly detection model to determine whether an anomaly has occurred. The system may retrieve textual data from the social media messages associated with the detected anomaly and may use the text to determine a computing system or event associated with the detected anomaly.
    Type: Grant
    Filed: January 10, 2023
    Date of Patent: February 27, 2024
    Assignee: Capital One Services, LLC
    Inventors: Vannia Gonzalez Macias, Paul Cho, Rahul Gupta, Scott Garcia, Adithya Ramanathan
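A small sketch of the pipeline's shape described above: messages carry a timestamp and a sentiment identifier, a per-sentiment timeseries of counts is built, and a stand-in z-score rule flags anomalous time steps. The message fields and the z-score rule are assumptions, not the patented anomaly detection model.
```python
from collections import Counter
import statistics

messages = [
    {"text": "checkout is broken again", "timestamp": 3, "sentiment": "negative"},
    {"text": "love the new feature", "timestamp": 3, "sentiment": "positive"},
    # ... one record per social media message
]

def timeseries(messages, sentiment, horizon):
    counts = Counter(m["timestamp"] for m in messages if m["sentiment"] == sentiment)
    return [counts.get(t, 0) for t in range(horizon)]

def anomalies(series, z_threshold=3.0):
    """Stand-in anomaly detector: flag steps deviating from the mean by > z_threshold sigmas."""
    mean = statistics.mean(series)
    stdev = statistics.pstdev(series) or 1.0
    return [t for t, v in enumerate(series) if abs(v - mean) / stdev > z_threshold]

series = timeseries(messages, "negative", horizon=6)
print(anomalies(series, z_threshold=1.0))   # time steps whose negative-sentiment volume spikes
```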
  • Patent number: 11911902
    Abstract: A method for obstacle avoidance by robots in degraded environments, based on the intrinsic plasticity of a spiking neural network (SNN), is disclosed. A decision network in a synaptic autonomous learning module takes lidar data, the distance from a target point, and the velocity at the previous moment as state input, and outputs the velocity of the left and right wheels of the robot through autonomous adjustment of a dynamic energy-time threshold, so as to carry out autonomous perception and decision making. The method addresses the lack of intrinsic plasticity in SNNs, whose homeostasis imbalance otherwise makes it difficult for the model to adapt to degraded environments. It has been successfully deployed in mobile robots, maintaining a stable trigger rate for autonomous navigation and obstacle avoidance in degraded, disturbed, and noisy environments, and shows validity and applicability across different degraded scenes.
    Type: Grant
    Filed: December 20, 2021
    Date of Patent: February 27, 2024
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Xin Yang, Jianchuan Ding, Bo Dong, Felix Heide, Baocai Yin
  • Patent number: 11907679
    Abstract: An arithmetic operation device is provided that removes a part of parameters of a predetermined number of parameters from a first machine learning model which includes the predetermined number of parameters and is trained so as to output second data corresponding to input first data, determines the number of bits of a weight parameter according to required performance related to an inference to generate a second machine learning model, and acquires data output from the second machine learning model so as to correspond to the input first data with a smaller computational complexity than the first machine learning model.
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: February 20, 2024
    Assignee: Kioxia Corporation
    Inventors: Kengo Nakata, Asuka Maki, Daisuke Miyashita
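A hedged sketch of the two steps the abstract names: removing a part of the parameters (magnitude pruning here) and choosing a weight bit width from required inference performance, then quantizing. The pruning rule, bit-width mapping, and uniform quantizer are illustrative assumptions.
```python
import numpy as np

def compress(weights, prune_fraction, n_bits):
    """Prune the smallest-magnitude parameters, then quantize the rest to n_bits levels."""
    w = weights.copy()
    cutoff = np.quantile(np.abs(w), prune_fraction)
    w[np.abs(w) < cutoff] = 0.0                       # remove a part of the parameters
    scale = np.abs(w).max() / (2 ** (n_bits - 1) - 1) or 1.0
    return np.round(w / scale) * scale                # weights representable with n_bits

def bits_for(required_accuracy):
    # illustrative mapping from required inference performance to a weight bit width
    return 8 if required_accuracy > 0.9 else 4

w = np.random.default_rng(0).normal(size=10)
print(compress(w, prune_fraction=0.3, n_bits=bits_for(0.95)))
```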
  • Patent number: 11907842
    Abstract: A system comprises a memory that stores computer-executable components; and a processor, operably coupled to the memory, that executes the computer-executable components. The system includes a receiving component that receives a corpus of data; a relation extraction component that generates noisy knowledge graphs from the corpus; and a training component that acquires global representations of entities and relations by training on the output of the relation extraction component.
    Type: Grant
    Filed: January 13, 2023
    Date of Patent: February 20, 2024
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Alfio Massimiliano Gliozzo, Sarthak Dash, Michael Robert Glass, Mustafa Canim
  • Patent number: 11907172
    Abstract: An information processing system preserves data used in machine learning by distributing the data to a plurality of servers, reads setting information indicating a method of partitioning for cross-validation in the machine learning, specifies, based on the setting information, a validation server that executes the cross-validation among the plurality of servers, and validation data which is data used in the cross-validation, specifies an arrangement of the data in the plurality of servers, specifies deficiency data, which is data that is included in the validation data and that is not stored in the validation server, and causes a server that stores the deficiency data among the plurality of servers to transmit the deficiency data to the validation server, based on an arrangement of the specified deficiency data.
    Type: Grant
    Filed: March 11, 2021
    Date of Patent: February 20, 2024
    Assignee: NEC CORPORATION
    Inventor: Junichi Yasuda
  • Patent number: 11900235
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing inputs using recurrent neural networks. One of the systems includes a main recurrent neural network comprising one or more recurrent neural network layers and a respective hyper recurrent neural network corresponding to each of the one or more recurrent neural network layers, wherein each hyper recurrent neural network is configured to, at each of a plurality of time steps: process the layer input at the time step to the corresponding recurrent neural network layer, the current layer hidden state of the corresponding recurrent neural network layer, and a current hypernetwork hidden state of the hyper recurrent neural network to generate an updated hypernetwork hidden state.
    Type: Grant
    Filed: September 9, 2021
    Date of Patent: February 13, 2024
    Assignee: Google LLC
    Inventors: Andrew M. Dai, Quoc V. Le, David Ha
  • Patent number: 11893111
    Abstract: Techniques are disclosed for detecting adversarial attacks. A machine learning (ML) system processes the input into and output of a ML model using an adversarial detection module that does not include a direct external interface. The adversarial detection module includes a detection model that generates a score indicative of whether the input is adversarial using, e.g., a neural fingerprinting technique or a comparison of features extracted by a surrogate ML model to an expected feature distribution for the output of the ML model. In turn, the adversarial score is compared to a predefined threshold for raising an adversarial flag. Appropriate remedial measures, such as notifying a user, may be taken when the adversarial score satisfies the threshold and raises the adversarial flag.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: February 6, 2024
    Assignee: Harman International Industries, Incorporated
    Inventors: Srinivas Kruthiveti Subrahmanyeswara Sai, Aashish Kumar, Alexander Kreines, George Jose, Sambuddha Saha, Nir Morgulis, Shachar Mendelowitz
  • Patent number: 11895220
    Abstract: A method includes dividing a plurality of filters in a first layer of a neural network into a first set of filters and a second set of filters, applying each of the first set of filters to an input of the neural network, aggregating, at a second layer of the neural network, a respective one of a first set of outputs with a respective one of a second set of outputs, splitting respective weights of specific neurons activated in each remaining layer, at each specific neuron from each remaining layer, applying a respective filter associated with each specific neuron and a first corresponding weight, obtaining a second set of neuron outputs, for each specific neuron, aggregating one of the first set of neuron outputs with one of a second set of neuron outputs and generating an output of the neural network based on the aggregated neuron outputs.
    Type: Grant
    Filed: February 16, 2021
    Date of Patent: February 6, 2024
    Assignee: TripleBlind, Inc.
    Inventors: Greg Storm, Riddhiman Das, Babak Poorebrahim Gilkalaye
  • Patent number: 11887001
    Abstract: An apparatus and method are described for reducing the parameter density of a deep neural network (DNN). A layer-wise pruning module to prune a specified set of parameters from each layer of a reference dense neural network model to generate a second neural network model having a relatively higher sparsity rate than the reference neural network model; a retraining module to retrain the second neural network model in accordance with a set of training data to generate a retrained second neural network model; and the retraining module to output the retrained second neural network model as a final neural network model if a target sparsity rate has been reached or to provide the retrained second neural network model to the layer-wise pruning model for additional pruning if the target sparsity rate has not been reached.
    Type: Grant
    Filed: September 26, 2016
    Date of Patent: January 30, 2024
    Assignee: INTEL CORPORATION
    Inventors: Anbang Yao, Yiwen Guo, Lin Xu, Yan Lin, Yurong Chen
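A compact sketch of the prune-retrain loop described above: each layer is pruned, the sparser model is (notionally) retrained, and the loop repeats until a target sparsity rate is reached. The magnitude-based pruning rule and the no-op retrain stand-in are assumptions.
```python
import numpy as np

def sparsity(model):
    total = sum(w.size for w in model)
    zeros = sum(int((w == 0).sum()) for w in model)
    return zeros / total

def prune_layerwise(model, fraction=0.2):
    pruned = []
    for w in model:                                   # prune each layer independently
        nonzero = np.abs(w[w != 0])
        cutoff = np.quantile(nonzero, fraction) if nonzero.size else 0.0
        pruned.append(np.where(np.abs(w) <= cutoff, 0.0, w))
    return pruned

def retrain(model):
    return model                                      # placeholder for retraining on data

def compress(model, target_sparsity=0.5):
    while sparsity(model) < target_sparsity:          # prune and retrain until the target rate
        model = retrain(prune_layerwise(model))
    return model

rng = np.random.default_rng(0)
model = [rng.normal(size=(4, 4)), rng.normal(size=(4, 2))]
print(round(sparsity(compress(model)), 2))            # >= 0.5 once the loop stops
```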
  • Patent number: 11886988
    Abstract: Adaptive exploration in deep reinforcement learning may be performed by inputting a current time frame of an action and observation sequence sequentially into a function approximator, such as a deep neural network, including a plurality of parameters, the action and observation sequence including a plurality of time frames, each time frame including action values and observation values, approximating a value function using the function approximator based on the current time frame to acquire a current value, updating an action selection policy through exploration based on an ε-greedy strategy using the current value, and updating the plurality of parameters.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: January 30, 2024
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Sakyasingha Dasgupta
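A minimal sketch of the ε-greedy policy named in the abstract: given approximated action values for the current time frame, explore a random action with probability ε and otherwise act greedily. The fixed ε and toy Q-values are assumptions; the patent's adaptive adjustment of exploration is not reproduced.
```python
import random

def epsilon_greedy(q_values, epsilon):
    """With probability epsilon pick a random action (explore), otherwise the best one (exploit)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# toy approximated values for one observation frame
q = [0.1, 0.7, 0.3]
actions = [epsilon_greedy(q, epsilon=0.1) for _ in range(1000)]
print(actions.count(1) / len(actions))   # mostly the greedy action (index 1)
```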
  • Patent number: 11886989
    Abstract: Using a deep learning inference system, respective similarities are measured for each of a set of intermediate representations to input information used as an input to the deep learning inference system. The deep learning inference system includes multiple layers, each layer producing one or more associated intermediate representations. Selection is made of a subset of the set of intermediate representations that are most similar to the input information. Using the selected subset of intermediate representations, a partitioning point is determined in the multiple layers used to partition the multiple layers into two partitions defined so that information leakage for the two partitions will meet a privacy parameter when a first of the two partitions is prevented from leaking information. The partitioning point is output for use in partitioning the multiple layers of the deep learning inference system into the two partitions.
    Type: Grant
    Filed: September 10, 2018
    Date of Patent: January 30, 2024
    Assignee: International Business Machines Corporation
    Inventors: Zhongshu Gu, Heqing Huang, Jialong Zhang, Dong Su, Dimitrios Pendarakis, Ian Michael Molloy
  • Patent number: 11880692
    Abstract: Provided is an apparatus configured to determine a common neural network based on a comparison between a first neural network included in a first application program and a second neural network included in a second application program, and to utilize the common neural network when the first application program or the second application program is executed.
    Type: Grant
    Filed: November 18, 2020
    Date of Patent: January 23, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Hyunjoo Jung, Jaedeok Kim, Chiyoun Park
  • Patent number: 11875258
    Abstract: Methods, systems, and apparatus for selecting actions to be performed by an agent interacting with an environment. One system includes a high-level controller neural network, low-level controller network, and subsystem. The high-level controller neural network receives an input observation and processes the input observation to generate a high-level output defining a control signal for the low-level controller. The low-level controller neural network receives a designated component of an input observation and processes the designated component and an input control signal to generate a low-level output that defines an action to be performed by the agent in response to the input observation.
    Type: Grant
    Filed: December 2, 2021
    Date of Patent: January 16, 2024
    Assignee: DeepMind Technologies Limited
    Inventors: Nicolas Manfred Otto Heess, Timothy Paul Lillicrap, Gregory Duncan Wayne, Yuval Tassa
  • Patent number: 11875268
    Abstract: A client device configured with a neural network includes a processor, a memory, a user interface, a communications interface, a power supply and an input device, wherein the memory includes a trained neural network received from a server system that has trained and configured the neural network for the client device. A server system and a method of training a neural network are disclosed.
    Type: Grant
    Filed: December 29, 2022
    Date of Patent: January 16, 2024
    Inventors: Zhengping Ji, Ilia Ovsiannikov, Yibing Michelle Wang, Lilong Shi
  • Patent number: 11875257
    Abstract: A normalization method for machine learning and an apparatus thereof are provided. The normalization method according to some embodiments of the present disclosure may calculate a value of a normalization parameter for an input image through a normalization model before inputting the input image to a target model and normalize the input image using the calculated value of the normalization parameter. Because the normalization model is updated based on a prediction loss of the target model, the input image can be normalized to an image suitable for a target task, so that stability of the learning and performance of the target model can be improved.
    Type: Grant
    Filed: May 14, 2021
    Date of Patent: January 16, 2024
    Assignee: LUNIT INC.
    Inventor: Jae Hwan Lee
  • Patent number: 11875250
    Abstract: An indication of semantic relationships among classes is obtained. A neural network whose loss function is based at least partly on the semantic relationships is trained. The trained neural network is used to identify one or more classes to which an input observation belongs.
    Type: Grant
    Filed: June 19, 2017
    Date of Patent: January 16, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Wei Xia, Meng Wang, Weixin Wu
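A toy sketch of a loss based partly on semantic relationships among classes, as the abstract describes: cross-entropy plus a penalty equal to the expected semantic distance between the prediction and the true class. The distance matrix and the exact penalty form are assumptions.
```python
import numpy as np

# assumption: a small semantic-distance matrix among classes (0 = identical, 1 = unrelated)
classes = ["cat", "dog", "car"]
semantic_distance = np.array([
    [0.0, 0.2, 1.0],
    [0.2, 0.0, 1.0],
    [1.0, 1.0, 0.0],
])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def semantic_loss(logits, true_class):
    """Cross-entropy plus a penalty that grows with semantic distance from the true class."""
    p = softmax(logits)
    ce = -np.log(p[true_class] + 1e-12)
    penalty = float(p @ semantic_distance[true_class])   # expected semantic distance
    return ce + penalty

print(semantic_loss(np.array([2.0, 1.5, -1.0]), true_class=0))  # confusing cat/dog costs little
print(semantic_loss(np.array([2.0, -1.0, 1.5]), true_class=0))  # confusing cat/car costs more
```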
  • Patent number: 11868882
    Abstract: An off-policy reinforcement learning actor-critic neural network system configured to select actions from a continuous action space to be performed by an agent interacting with an environment to perform a task. An observation defines environment state data and reward data. The system has an actor neural network which learns a policy function mapping the state data to action data. A critic neural network learns an action-value (Q) function. A replay buffer stores tuples of the state data, the action data, the reward data and new state data. The replay buffer also includes demonstration transition data comprising a set of the tuples from a demonstration of the task within the environment. The neural network system is configured to train the actor neural network and the critic neural network off-policy using stored tuples from the replay buffer comprising tuples both from operation of the system and from the demonstration transition data.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: January 9, 2024
    Assignee: DeepMind Technologies Limited
    Inventors: Olivier Claude Pietquin, Martin Riedmiller, Wang Fumin, Bilal Piot, Mel Vecerik, Todd Andrew Hester, Thomas Rothoerl, Thomas Lampe, Nicolas Manfred Otto Heess, Jonathan Karl Scholz
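A hedged sketch of a replay buffer that, as described above, stores (state, action, reward, new state) tuples from the running system alongside permanently kept demonstration transitions, and samples training minibatches from both. The demo fraction and buffer sizes are assumptions.
```python
import random
from collections import deque

class ReplayBuffer:
    """Holds agent transitions plus demonstration transitions for off-policy training."""
    def __init__(self, demo_transitions, capacity=10_000):
        self.demos = list(demo_transitions)            # demonstration data, kept permanently
        self.agent = deque(maxlen=capacity)            # tuples from operation of the system

    def add(self, transition):
        self.agent.append(transition)

    def sample(self, batch_size, demo_fraction=0.25):
        n_demo = min(int(batch_size * demo_fraction), len(self.demos))
        batch = random.sample(self.demos, n_demo)
        batch += random.sample(list(self.agent), batch_size - n_demo)
        return batch

demos = [((0.0,), (0.1,), 1.0, (0.1,)) for _ in range(8)]
buf = ReplayBuffer(demos)
for t in range(100):
    buf.add(((float(t),), (0.0,), 0.0, (float(t + 1),)))
print(len(buf.sample(16)))    # a minibatch mixing demonstration and agent transitions
```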
  • Patent number: 11868875
    Abstract: Provided are systems and methods for operating a neural network processor, wherein the processor includes an input selector circuit that can be configured to select the data that will be input into the processor's computational array. In various implementations, the selector circuit can determine, for a row of the array, whether the row input will be the output from a buffer memory or data that the input selector circuit has selected for a different row. The row can receive an input feature map from a set of input data or an input feature map that was selected for inputting into a different row, such that the input feature map is input into more than one row at a time. The selector circuit can also include a delay circuit, so that the duplicated input feature map can be input into the computational array later than the original input feature map.
    Type: Grant
    Filed: September 10, 2018
    Date of Patent: January 9, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Ron Diamant, Randy Renfu Huang, Jeffrey T. Huynh, Sundeep Amirineni
  • Patent number: 11868403
    Abstract: A method for utilizing a graph path cache to facilitate real-time data consumption by a plurality of machine learning models is disclosed. The method includes receiving an input from a source, the input relating to a request to characterize a data element; retrieving a data attribute that corresponds to the data element from a data management system; determining, in real-time using the graph path cache, a graph attribute that corresponds to the data element by performing deep link analysis on a graph database; executing, in real-time, a model by using the data attribute and the graph attribute, the model corresponding to the request in the input; and transmitting, in real-time, a result of the executed model to the source in response to the input.
    Type: Grant
    Filed: December 23, 2021
    Date of Patent: January 9, 2024
    Assignee: JPMORGAN CHASE BANK, N.A.
    Inventors: Sambasiva R Vadlamudi, Ramana Nallajarla, Rakesh R Pillai, Satya Sai Sita Rama Rajesh Vegi
  • Patent number: 11853875
    Abstract: A processor-implemented neural network method includes acquiring a connection weight of an analog neural network (ANN) node of a pre-trained ANN; and determining a firing rate of a spiking neural network (SNN) node of an SNN, corresponding to the ANN node, based on an activation of the ANN node which is determined based on the connection weight, where the firing rate is also determined based on information indicating a timing at which the SNN node initially fires.
    Type: Grant
    Filed: October 23, 2018
    Date of Patent: December 26, 2023
    Assignees: Samsung Electronics Co., Ltd., UNIVERSITAET ZUERICH
    Inventors: Bodo Ruckauer, Shih-Chii Liu
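A rough sketch of the ANN-to-SNN rate mapping described above: the SNN node's firing rate follows the ANN node's activation, discounted by the time before the node first fires. The constants and the discount form are assumptions, not the patent's formula.
```python
def snn_firing_rate(activation, max_rate=200.0, max_activation=1.0,
                    t_first_spike=0.0, t_window=1.0):
    """Map a (non-negative) ANN activation to an SNN firing rate, corrected for late first spikes."""
    rate = max(activation, 0.0) / max_activation * max_rate
    available = max(t_window - t_first_spike, 0.0)      # time left after the initial firing
    return rate * available / t_window

print(snn_firing_rate(0.5))                      # 100 spikes/s when the node fires immediately
print(snn_firing_rate(0.5, t_first_spike=0.2))   # reduced rate when the node first fires late
```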