Patents Examined by Brent Johnston Hoover
-
Patent number: 11836610
Abstract: An artificial neural network that includes first subnetworks to implement known functions and second subnetworks to implement unknown functions is trained. The first subnetworks are trained separately and in parallel on corresponding known training datasets to determine first parameter values that define the first subnetworks. The first subnetworks execute on a plurality of processing elements in a processing system. Input values from a network training data set are provided to the artificial neural network including the trained first subnetworks. Error values are generated by comparing output values produced by the artificial neural network to labeled output values of the network training data set. The second subnetworks are trained by back-propagating the error values to modify second parameter values that define the second subnetworks without modifying the first parameter values. The first and second parameter values are stored in a storage component.
Type: Grant
Filed: December 13, 2017
Date of Patent: December 5, 2023
Assignee: Advanced Micro Devices, Inc.
Inventors: Dmitri Yudanov, Nicholas Penha Malaya
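The two-phase scheme in this abstract (pretrain the "known" first subnetworks, freeze them, then backpropagate error only into the "unknown" second subnetworks) can be sketched in a few lines of NumPy. The shapes, learning rate, and ReLU choice below are illustrative assumptions, not details from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: a "known" first subnetwork (frozen) feeding an
# "unknown" second subnetwork (trainable).
W1 = rng.normal(size=(4, 3))          # first subnetwork: pretrained, frozen
W2 = rng.normal(size=(1, 4)) * 0.1    # second subnetwork: to be trained

X = rng.normal(size=(3, 32))          # training inputs (one column each)
Y = rng.normal(size=(1, 32))          # labeled output values

def forward(x):
    h = np.maximum(W1 @ x, 0.0)       # frozen first subnetwork (ReLU)
    return W2 @ h, h

W1_before = W1.copy()
lr = 0.01
losses = []
for _ in range(200):
    pred, h = forward(X)
    err = pred - Y                    # compare outputs to labeled outputs
    losses.append(float(np.mean(err ** 2)))
    # Backpropagate the error into W2 only; W1 stays fixed.
    grad_W2 = (err @ h.T) / X.shape[1]
    W2 -= lr * grad_W2

assert np.array_equal(W1, W1_before)  # first parameter values untouched
assert losses[-1] < losses[0]         # second subnetwork actually learned
```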
-
Patent number: 11823027
Abstract: Embodiments of the present disclosure implement a stochastic neural network (SNN) where nodes are selectively activated depending on the inputs and which can be trained on multiple objectives. A system can include one or more nodes and one or more synapses, wherein each synapse connects a respective pair of the nodes. The system can further include one or more processing elements, wherein each of the processing elements is embedded in a respective synapse, and wherein each of the processing elements is adapted to receive an input and generate an output based on the input. The system can be configured to operate such that, upon receipt of a first problem input, a first subset of the nodes in the system is selectively activated. Upon receipt of a second problem input, a second subset of the nodes is selectively activated. The second subset of nodes can be different from the first subset of nodes. In various embodiments, the first and second subsets of nodes can be selectively activated in parallel.
Type: Grant
Filed: May 18, 2023
Date of Patent: November 21, 2023
Assignee: SILVRETTA RESEARCH, INC.
Inventor: Giuseppe G. Nuti
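Input-dependent selective activation can be illustrated with a simple gating rule: each node has a hypothetical gating vector, and a node fires only when its gating score for the current input is positive. Everything here (the gating weights, the threshold at zero) is an assumed stand-in, not the patented mechanism:

```python
import numpy as np

rng = np.random.default_rng(1)

n_nodes = 8
gate_W = rng.normal(size=(n_nodes, 4))   # hypothetical per-node gating weights
node_W = rng.normal(size=(n_nodes, 4))   # per-node computation weights

def forward(x):
    mask = gate_W @ x > 0.0              # input-dependent selective activation
    h = np.where(mask, node_W @ x, 0.0)  # only active nodes contribute
    return h, mask

x1 = rng.normal(size=4)   # first problem input
x2 = -x1                  # second problem input (flips every gating score)

_, subset1 = forward(x1)
_, subset2 = forward(x2)

# Different problem inputs activate different subsets of nodes, and all
# gates are evaluated elementwise, so they can be checked in parallel.
assert not np.array_equal(subset1, subset2)
```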
-
Patent number: 11816578
Abstract: The disclosed technology generally relates to novelty detection and more particularly to novelty detection methods using a deep learning neural network and apparatuses and non-transitory computer-readable media configured for performing the methods. In one aspect, a method for detecting novelty using a deep learning neural network model comprises providing a deep learning neural network model. The deep learning neural network model comprises an encoder comprising a plurality of encoder layers and a decoder comprising a plurality of decoder layers. The method additionally comprises feeding a first input into the encoder and successively processing the first input through the plurality of encoder layers to generate a first encoded input, wherein successively processing the first input comprises generating a first intermediate encoded input from one of the encoder layers prior to generating the first encoded input.
Type: Grant
Filed: March 3, 2022
Date of Patent: November 14, 2023
Assignee: MakinaRocks Co., Ltd.
Inventors: Andre S. Yoon, Sangwoo Shim, Yongsub Lim, Ki Hyun Kim, Byungchan Kim, JeongWoo Choi, Jongseob Jeon
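The encoder/decoder framing suggests the familiar reconstruction-error approach to novelty detection: encode, decode, and flag inputs the model reconstructs poorly. The sketch below substitutes a linear SVD encoder/decoder for the patent's deep network and skips the intermediate encoded inputs, so it is only a simplified analogue:

```python
import numpy as np

rng = np.random.default_rng(2)

# In-distribution data lives near a 2-D subspace of a 5-D space.
basis = rng.normal(size=(5, 2))
train = (basis @ rng.normal(size=(2, 200))).T + 0.01 * rng.normal(size=(200, 5))

# Linear encoder/decoder from SVD, standing in for the deep model.
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
encode = lambda x: (x - mean) @ Vt[:2].T      # project to a 2-D code
decode = lambda z: z @ Vt[:2] + mean          # reconstruct in 5-D

def novelty_score(x):
    # Large reconstruction error = the model has not seen data like this.
    return float(np.linalg.norm(x - decode(encode(x))))

normal_x = train[0]
novel_x = normal_x + 3.0 * rng.normal(size=5)  # off-subspace perturbation

assert novelty_score(novel_x) > novelty_score(normal_x)
```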
-
Patent number: 11797823
Abstract: Generating a machine learning model that is trained using retrospective loss is described. A retrospective loss system receives an untrained machine learning model and a task for training the model. The retrospective loss system initially trains the model over warm-up iterations using task-specific loss that is determined based on a difference between predictions output by the model during training on input data and a ground truth dataset for the input data. Following the warm-up training iterations, the retrospective loss system continues to train the model using retrospective loss, which is model-agnostic and constrains the model such that a subsequently output prediction is more similar to the ground truth dataset than the previously output prediction. After determining that the model's outputs are within a threshold similarity to the ground truth dataset, the model is output with its current parameters as a trained model.
Type: Grant
Filed: February 18, 2020
Date of Patent: October 24, 2023
Assignee: Adobe Inc.
Inventors: Ayush Chopra, Balaji Krishnamurthy, Mausoom Sarkar, Surgan Jandial
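One way to read "more similar to the ground truth than the previously output prediction" is as a loss that pulls the current prediction toward the target while pushing it away from the previous prediction. The one-parameter toy model, the `kappa` weighting, and the gradient below are our paraphrase, not the exact patented formulation:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=64)
y = 3.0 * x            # ground truth for a toy model y = w * x
w = 0.0
lr = 0.05

# Warm-up iterations: plain task-specific loss (mean squared error).
for _ in range(5):
    grad = np.mean(2 * (w * x - y) * x)
    w -= lr * grad

# Retrospective phase: weight the pull toward the target by kappa and
# subtract the distance to the previous prediction (pushing away from it).
kappa = 2.0
prev_pred = w * x
for _ in range(50):
    pred = w * x
    grad = kappa * np.mean(2 * (pred - y) * x) \
        - np.mean(2 * (pred - prev_pred) * x)
    w -= lr * grad
    prev_pred = pred   # each iteration compares against the last output

assert abs(w - 3.0) < 0.1   # converges to the true parameter
```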
-
Patent number: 11797822
Abstract: The present invention relates to an improved artificial neural network for predicting one or more next items in a sequence of items based on an input sequence item. The improved artificial neural network has greatly reduced memory requirements, making it suitable for use on electronic devices such as mobile phones and tablets. The invention includes an electronic device on which the improved artificial neural network operates, and methods of predicting the one or more next items in the sequence using the improved artificial neural network.
Type: Grant
Filed: July 5, 2016
Date of Patent: October 24, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Marek Rei, Matthew James Willson
-
Patent number: 11790239
Abstract: A specification of a property required to be upheld by a computerized machine learning system is obtained. A training data set corresponding to the property and inputs and outputs of the system is built. The system is trained on the training data set. Activity of the system is monitored before, during, and after the training. Based on the monitoring, performance of the system is evaluated to determine whether the system, once trained on the training data set, upholds the property.
Type: Grant
Filed: December 29, 2018
Date of Patent: October 17, 2023
Assignee: International Business Machines Corporation
Inventors: George Kour, Guy Hadash, Yftah Ziser, Ofer Lavi, Guy Lev
-
Patent number: 11783217
Abstract: Technologies are described herein to implement an optimizer that receives portions of a quantum circuit; identifies, from within the received portions of the quantum circuit, a pattern of quantum gates to perform a quantum function; searches a library for a replacement pattern of quantum gates, which is also to perform the quantum function, for the identified pattern of quantum gates; determines that a quantum cost of the replacement pattern of quantum gates is lower than a quantum cost of the identified pattern of quantum gates; and replaces the identified pattern of quantum gates with the replacement pattern of quantum gates.
Type: Grant
Filed: February 21, 2019
Date of Patent: October 10, 2023
Assignee: IONQ, INC.
Inventors: Vandiver Chaplin, Yunseong Nam
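The replace-if-cheaper rule described here can be sketched over a flat gate list. The gate costs and the tiny equivalence library below are assumptions chosen for illustration (real optimizers work on full circuit structure, not strings), but the identities used (H·H = I, T·T = S, X·X = I) are standard:

```python
# Hypothetical per-gate "quantum costs" and a tiny replacement library.
GATE_COST = {"H": 1, "T": 3, "S": 1, "CNOT": 2, "X": 1}

# pattern -> replacement implementing the same quantum function
LIBRARY = {
    ("H", "H"): (),        # H followed by H is the identity
    ("T", "T"): ("S",),    # two T gates equal one S gate
    ("X", "X"): (),
}

def cost(gates):
    return sum(GATE_COST[g] for g in gates)

def optimize(circuit):
    out = list(circuit)
    changed = True
    while changed:
        changed = False
        for pattern, replacement in LIBRARY.items():
            n = len(pattern)
            for i in range(len(out) - n + 1):
                window = tuple(out[i:i + n])
                # Replace only when the library pattern is strictly cheaper.
                if window == pattern and cost(replacement) < cost(window):
                    out[i:i + n] = list(replacement)
                    changed = True
                    break
            if changed:
                break
    return out

circ = ["H", "T", "T", "H", "H", "CNOT"]
opt = optimize(circ)
assert opt == ["H", "S", "CNOT"]      # T,T -> S and H,H -> identity
assert cost(opt) < cost(circ)         # quantum cost went from 11 to 4
```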
-
Patent number: 11755911
Abstract: The present disclosure provides a method and an apparatus for training a neural network and a computer server. The method includes: automatically selecting input data for which processing by the neural network fails, to obtain a set of data to be annotated; annotating the set of data to be annotated to obtain a new set of annotated data; acquiring a set of newly added annotated data containing the new set of annotated data, and determining a union of the set of newly added annotated data and a set of training sample data for training the neural network in a previous period as a set of training sample data for a current period; and training the neural network iteratively based on the set of training sample data for the current period, to obtain a neural network trained in the current period.
Type: Grant
Filed: May 23, 2019
Date of Patent: September 12, 2023
Assignee: BEIJING TUSEN ZHITU TECHNOLOGY CO., LTD.
Inventors: Zehao Huang, Naiyan Wang
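The retraining loop reads as: auto-select failures, annotate them, take the union with the previous period's training set, retrain. The schematic below uses placeholder `model_fails`, `annotate`, and `train` functions purely to show the data flow, not the patented system:

```python
def model_fails(model, x):
    return x not in model["known"]       # fails on inputs it has not seen

def annotate(x):
    return (x, f"label-{x}")             # stand-in for human annotation

def train(samples):
    return {"known": {x for x, _ in samples}}

prev_training_set = {(1, "label-1"), (2, "label-2")}
model = train(prev_training_set)

incoming = [1, 2, 3, 4]
# Automatically select input data for which processing fails.
to_annotate = [x for x in incoming if model_fails(model, x)]
newly_annotated = {annotate(x) for x in to_annotate}

# Union of newly added annotated data with the previous period's set.
current_training_set = newly_annotated | prev_training_set
model = train(current_training_set)      # retrain for the current period

assert to_annotate == [3, 4]
assert not any(model_fails(model, x) for x in incoming)
```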
-
Patent number: 11755890
Abstract: A method for performing learning is described. A free inference is performed on a learning network for input signals. The input signals correspond to target output signals. The learning network includes inputs that receive the input signals, neurons, weights interconnecting the neurons, and outputs. The learning network is described by an energy for the free inference. The energy includes an interaction term corresponding to interactions consisting of neuron pair interactions. The free inference results in output signals. A first portion of the weights corresponds to data flow for the free inference. A biased inference is performed on the learning network by providing the input signals to the inputs and bias signals to the outputs. The bias signals are based on the target output signals and the output signals. The bias signals are fed back to the learning network through a second portion of the weights corresponding to a transpose of the first portion of the weights.
Type: Grant
Filed: October 31, 2022
Date of Patent: September 12, 2023
Assignee: Rain Neuromorphics Inc.
Inventors: Suhas Kumar, Alexander Almela Conklin, Jack David Kendall
-
Patent number: 11748626
Abstract: Systems, methods and apparatus of accelerating neural network computations of predictive maintenance of vehicles. For example, a data storage device of a vehicle includes: a host interface configured to receive a sensor data stream from at least one sensor configured on the vehicle; at least one storage media component having a non-volatile memory to store at least a portion of the sensor data stream; a controller; and a neural network accelerator coupled to the controller. The neural network accelerator is configured to perform at least a portion of computations based on an artificial neural network and the sensor data stream to predict a maintenance service of the vehicle.
Type: Grant
Filed: August 12, 2019
Date of Patent: September 5, 2023
Assignee: Micron Technology, Inc.
Inventors: Poorna Kale, Robert Richard Noel Bielby
-
Patent number: 11741362
Abstract: A system for training a neural network receives training data and performs lower precision format training calculations using lower precision format data at one or more training phases. One or more results from the lower precision format training calculations are converted to higher precision format data, and higher precision format training calculations are performed using the higher precision format data at one or more additional training phases. The neural network is modified using the results from the one or more additional training phases. The mixed precision format training calculations train the neural network more efficiently, while maintaining an overall accuracy.
Type: Grant
Filed: May 8, 2018
Date of Patent: August 29, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Daniel Lo, Eric Sen Chung, Bita Darvish Rouhani
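The core idea, heavy math in a lower precision format, conversion to a higher precision format before the network is modified, can be shown with float16/float32 in NumPy. The specific formats, model, and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

W = rng.normal(size=(4, 4)).astype(np.float32)    # network weights (float32)
x = rng.normal(size=(4, 16)).astype(np.float32)   # training inputs
y = rng.normal(size=(4, 16)).astype(np.float32)   # targets

loss_before = float(np.mean((W @ x - y) ** 2))

lr = np.float32(0.05)
for _ in range(100):
    # Lower-precision phase: cast operands down before the heavy math.
    x16 = x.astype(np.float16)
    pred16 = W.astype(np.float16) @ x16
    err16 = pred16 - y.astype(np.float16)
    grad16 = err16 @ x16.T / np.float16(x.shape[1])
    # Convert the result to the higher-precision format, then modify
    # the network in that format.
    W = W - lr * grad16.astype(np.float32)

loss_after = float(np.mean((W @ x - y) ** 2))
assert W.dtype == np.float32
assert loss_after < loss_before   # training still converges
```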
-
Patent number: 11734565
Abstract: Embodiments disclosed herein describe systems, methods, and products that generate trained neural networks that are robust against adversarial attacks. During a training phase, an illustrative computer may iteratively optimize a loss function that may include a penalty for ill-conditioned weight matrices in addition to a penalty for classification errors. Therefore, after the training phase, the trained neural network may include one or more well-conditioned weight matrices. The one or more well-conditioned weight matrices may minimize the effect of perturbations within an adversarial input thereby increasing the accuracy of classification of the adversarial input. By contrast, conventional training approaches may merely reduce the classification errors using backpropagation, and, as a result, any perturbation in an input is prone to generate a large effect on the output.
Type: Grant
Filed: June 3, 2022
Date of Patent: August 22, 2023
Assignee: Adobe Inc.
Inventors: Mayank Singh, Abhishek Sinha, Balaji Krishnamurthy
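The abstract does not give the penalty's exact form, so the sketch below assumes a common differentiable surrogate for ill-conditioning, the soft-orthogonality term ||WᵀW − I||²_F, and shows that gradient descent on it drives the condition number of a weight matrix toward 1:

```python
import numpy as np

rng = np.random.default_rng(6)

def condition_number(W):
    # Ratio of largest to smallest singular value; 1 is perfectly conditioned.
    s = np.linalg.svd(W, compute_uv=False)
    return float(s[0] / s[-1])

W = 0.5 * rng.normal(size=(5, 5))   # a random, typically ill-conditioned matrix
kappa_before = condition_number(W)

# Minimize the assumed conditioning penalty ||W^T W - I||_F^2 by itself
# (a real loss would add this to the classification-error term).
lr = 0.01
for _ in range(500):
    G = W.T @ W - np.eye(5)
    W -= lr * (4.0 * W @ G)         # gradient of the Frobenius penalty

kappa_after = condition_number(W)
assert kappa_after < kappa_before   # matrix is better conditioned
```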
-
Patent number: 11727312
Abstract: A computer-implemented method, system and computer program product for generating personalized recommendations to address a target problem. A machine learning prediction model directed to a target problem for an individual is built with historical data. After receiving data about the individual, a prediction for the individual is obtained in connection with the target problem by the built model using the received data about the individual. Key predictors (e.g., parameters) and their weights for the individual are generated using the prediction by an explanation model. Record(s) are identified from the historical data by performing similarity analysis of the historical data using the key predictors and their weights. Such records provide a population closely related to the individual with respect to the target problem. These records are then analyzed and recommendations are provided to a user to solve the target problem for the individual based on the analysis of the identified record(s).
Type: Grant
Filed: September 3, 2019
Date of Patent: August 15, 2023
Assignee: International Business Machines Corporation
Inventors: Xue Ying Zhang, Jing Xu, Xiao Ming Ma, Jing James Xu, Ying Xu, Ang Chang
-
Patent number: 11720784
Abstract: A system and method for enhancing inferential accuracy of an artificial neural network during training includes during a simulated training of an artificial neural network identifying channel feedback values of a plurality of distinct channels of a layer of the artificial neural network based on an input of a training batch; if the channel feedback values do not satisfy a channel signal range threshold, computing a channel equalization factor based on the channel feedback values; identifying a layer feedback value based on the input of the training batch; and if the layer feedback value does not satisfy a layer signal range threshold, identifying a composite scaling factor based on the layer feedback values; during a non-simulated training of the artificial neural network, providing training inputs of: the training batch; the composite scaling factor; the channel equalization factor; and training the artificial neural network based on the training inputs.
Type: Grant
Filed: March 25, 2022
Date of Patent: August 8, 2023
Assignee: Mythic, Inc.
Inventors: David Fick, Daniel Graves, Michael Klachko, Ilya Perminov, John Mark Beardslee, Evgeny Shapiro
-
Patent number: 11715287
Abstract: Systems and methods may make exchanging data in a neural network (NN) during training more efficient. Exchanging weights among a number of processors training a NN across iterations may include sorting generated weights, compressing the sorted weights, and transmitting the compressed sorted weights. On each Kth iteration a sort order of the sorted weights may be created and transmitted. Exchanging weights among processors training a NN may include executing a forward pass to produce a set of loss values for processors, transmitting loss values to other processors, and at each of the processors, performing backpropagation on at least one layer of the NN using loss values received from other processors.
Type: Grant
Filed: November 16, 2018
Date of Patent: August 1, 2023
Inventors: Alexander Matveev, Nir Shavit
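Why sort before compressing? Sorted weights have tiny neighbor-to-neighbor differences, so a delta stream compresses far better than the raw order, and the receiver only needs the sort order (which, per the abstract, can be sent every Kth iteration) to undo it. The quantization step and the zlib codec below are our toy stand-ins for the patent's compression scheme:

```python
import zlib
import numpy as np

rng = np.random.default_rng(7)

weights = rng.normal(size=4096).astype(np.float32)
quantized = np.round(weights * 256).astype(np.int16)   # toy quantization

order = np.argsort(quantized)   # sort order: transmitted every Kth iteration
# After sorting, consecutive values are nearly equal, so deltas are tiny.
deltas = np.diff(quantized[order], prepend=np.int16(0))

raw = zlib.compress(quantized.tobytes())               # unsorted baseline
packed = zlib.compress(deltas.tobytes())               # sorted + delta stream
assert len(packed) < len(raw)

# The receiver reverses the pipeline using the transmitted sort order.
restored = np.empty_like(quantized)
restored[order] = np.cumsum(deltas).astype(np.int16)
assert np.array_equal(restored, quantized)
```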
-
Patent number: 11709859
Abstract: Aspects of the present disclosure relate to data visualization, and more specifically, to technology that automatically visualizes various analytics and predictions generated for mass participation endurance events, or other mass participation events of interest.
Type: Grant
Filed: November 15, 2021
Date of Patent: July 25, 2023
Assignee: Northwestern University
Inventors: Karen R. Smilowitz, George T. Chiampas, Taylor G. Hanken, Rachel G. Lin, Ryan W. Rose, Bruno P. Velazquez, Samuel H. Young
-
Patent number: 11694105
Abstract: This disclosure relates generally to the field of quantum algorithms and quantum data loading, and more particularly to constructing quantum circuits for loading classical data into quantum states which reduces the computational resources of the circuit, e.g., number of qubits, depth of quantum circuit, and type of gates in the circuit.
Type: Grant
Filed: March 29, 2022
Date of Patent: July 4, 2023
Assignee: QC Ware Corp.
Inventor: Iordanis Kerenidis
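"Loading classical data into quantum states" typically means amplitude encoding: a length-2ⁿ classical vector, once normalized, becomes the amplitude vector of an n-qubit state. The sketch below shows only that mathematical mapping; the patented contribution is the circuit construction that realizes it with few qubits, low depth, and cheap gates, which plain normalization does not capture:

```python
import numpy as np

def load(x):
    # Amplitude-encode a classical vector of length 2^n into n qubits.
    x = np.asarray(x, dtype=float)
    n_qubits = int(np.log2(x.size))
    assert x.size == 2 ** n_qubits, "length must be a power of two"
    state = x / np.linalg.norm(x)    # unit-norm amplitude vector
    return n_qubits, state

data = [3.0, 1.0, 2.0, 1.0, 0.0, 1.0, 1.0, 3.0]
n_qubits, state = load(data)

assert n_qubits == 3                          # 8 amplitudes -> 3 qubits
assert np.isclose(np.sum(state ** 2), 1.0)    # valid quantum state
# Measurement probabilities are proportional to the squared inputs:
# sum of squares of data is 26, so |000> has probability 9/26.
assert np.isclose(state[0] ** 2, 9.0 / 26.0)
```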
-
Patent number: 11687816
Abstract: This disclosure relates generally to the field of quantum algorithms and quantum data loading, and more particularly to constructing quantum circuits for loading classical data into quantum states which reduces the computational resources of the circuit, e.g., number of qubits, depth of quantum circuit, and type of gates in the circuit.
Type: Grant
Filed: August 6, 2020
Date of Patent: June 27, 2023
Assignee: QC Ware Corp.
Inventor: Iordanis Kerenidis
-
Patent number: 11681917
Abstract: Systems and methods for training a neural network or an ensemble of neural networks are described. A hyper-parameter that controls the variance of the ensemble predictors is used to address overfitting. For larger values of the hyper-parameter, the predictions from the ensemble have more variance, so there is less overfitting. This technique can be applied to ensemble learning with various cost functions, structures and parameter sharing. A cost function is provided and a set of techniques for learning are described.
Type: Grant
Filed: December 3, 2020
Date of Patent: June 20, 2023
Assignee: Deep Genomics Incorporated
Inventors: Hui Yuan Xiong, Andrew Delong, Brendan Frey
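One cost function with this flavor (our stand-in, not necessarily the patented one) is negative-correlation-style training: each member fits the target, while a hyper-parameter `lam` rewards spread among the members. The sketch checks the abstract's claim that a larger hyper-parameter value yields more variance across the ensemble's predictions:

```python
import numpy as np

rng = np.random.default_rng(9)
x = rng.normal(size=50)
y = 2.0 * x + 0.1 * rng.normal(size=50)

def ensemble_member_variance(lam, members=4, steps=300, lr=0.02):
    ws = rng.normal(size=members)            # one weight per linear member
    for _ in range(steps):
        preds = np.outer(ws, x)              # members x samples
        mean_pred = preds.mean(axis=0)
        # Each member fits the target; the lam term rewards disagreement
        # with the ensemble mean, keeping predictor variance alive.
        grads = np.array([
            np.mean(2 * (preds[i] - y) * x)
            - lam * np.mean(2 * (preds[i] - mean_pred) * x)
            for i in range(members)
        ])
        ws -= lr * grads
    # Average per-sample variance across ensemble members.
    return float(np.var(np.outer(ws, x), axis=0).mean())

var_low = ensemble_member_variance(lam=0.0)
var_high = ensemble_member_variance(lam=0.5)
assert var_high > var_low   # larger hyper-parameter -> more predictor variance
```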
-
Patent number: 11681939
Abstract: This disclosure relates generally to the field of quantum algorithms and quantum data loading, and more particularly to constructing quantum circuits for loading classical data into quantum states which reduces the computational resources of the circuit, e.g., number of qubits, depth of quantum circuit, and type of gates in the circuit.
Type: Grant
Filed: August 6, 2020
Date of Patent: June 20, 2023
Assignee: QC Ware Corp.
Inventor: Iordanis Kerenidis