Patents Examined by Lokesha G Patel
-
Patent number: 11941533
Abstract: Some embodiments provide a compiler for optimizing the implementation of a machine-trained network (e.g., a neural network) on an integrated circuit (IC). The compiler of some embodiments receives a specification of a machine-trained network including multiple layers of computation nodes and generates a graph representing options for implementing the machine-trained network in the IC. The compiler, as part of generating the graph, in some embodiments, determines whether any set of channels contains no non-zero values (i.e., contains only zero values). For sets of channels that include no non-zero values, some embodiments perform a zero channel removal operation to remove all-zero channels wherever possible. In some embodiments, zero channel removal operations include removing input channels, removing output channels, forward propagation, and backward propagation of channels and constants.
Type: Grant
Filed: July 29, 2019
Date of Patent: March 26, 2024
Assignee: PERCEIVE CORPORATION
Inventors: Brian Thomas, Steven L. Teig
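A minimal sketch of the zero-channel-removal idea, assuming NumPy conv weights laid out as (out_ch, in_ch, k, k); the function name, layout, and masking approach are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def remove_zero_channels(w_curr, w_next):
    """Drop all-zero output channels of one conv layer and, by forward
    propagation of the removal, the matching input channels of the next
    layer. Weight shapes: (out_ch, in_ch, k, k)."""
    # An output channel is removable only if every weight in it is zero.
    keep = ~np.all(w_curr == 0, axis=(1, 2, 3))
    # Output channel c of this layer feeds input channel c of the next,
    # so the same mask is applied on the next layer's input axis.
    return w_curr[keep], w_next[:, keep]
```

The backward-propagation and constant-handling variants named in the abstract would apply analogous masking in the opposite direction through the graph.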
-
Patent number: 11907825
Abstract: Methods, systems, and apparatus, including instructions encoded on storage media, for performing reduction of gradient vectors for distributed training of a neural network. One of the methods includes receiving, at each of the plurality of devices, a respective batch; performing, by each device, a forward pass comprising, for each batch normalization layer: generating, by each of the devices, a respective output of the corresponding other layer for each training example in the batch, determining, by each of the devices, a per-replica mean and a per-replica variance; determining, for each sub-group, a distributed mean and a distributed variance from the per-replica means and the per-replica variances for the devices in the sub-group; and applying, by each device, batch normalization to the respective outputs of the corresponding other layer generated by the device using the distributed mean and the distributed variance for the sub-group to which the device belongs.
Type: Grant
Filed: October 21, 2019
Date of Patent: February 20, 2024
Assignee: Google LLC
Inventors: Blake Alan Hechtman, Sameer Kumar
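The statistic-combining step can be sketched as follows, assuming equal per-replica batch sizes (the simple average of means is exact only in that case); names and the NumPy formulation are illustrative:

```python
import numpy as np

def distributed_batch_norm(replica_outputs, eps=1e-5):
    """Each replica computes a per-replica mean/variance over its own
    batch; the sub-group combines them into a distributed mean/variance,
    and every replica normalizes with the combined statistics."""
    means = [x.mean(axis=0) for x in replica_outputs]
    varis = [x.var(axis=0) for x in replica_outputs]
    dist_mean = np.mean(means, axis=0)
    # Recover E[x^2] per replica as var + mean^2, average it, then
    # subtract the combined mean squared: Var = E[x^2] - (E[x])^2.
    dist_var = np.mean([v + m**2 for v, m in zip(varis, means)],
                       axis=0) - dist_mean**2
    return [(x - dist_mean) / np.sqrt(dist_var + eps)
            for x in replica_outputs]
```

With equal batch sizes, the combined statistics equal those of the concatenated batch, so normalization matches what a single device would compute over all examples.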
-
Patent number: 11893495
Abstract: A neural network system includes a first neural network configured to predict a mean value output and epistemic uncertainty of the output given input data, and a second neural network configured to predict total uncertainty of the output of the first neural network. The second neural network is trained to predict total uncertainty of the output of the first neural network given the input data through a training process involving minimizing a cost function that involves differences between a predicted mean value of a geophysical property of a geological formation from the first neural network and a ground-truth value of the geophysical property of the geological formation. The neural network system further includes one or more processors configured to run a software module that determines aleatoric uncertainty of the output of the first neural network based on the epistemic uncertainty of the output and the total uncertainty of the output.
Type: Grant
Filed: September 8, 2020
Date of Patent: February 6, 2024
Assignee: SCHLUMBERGER TECHNOLOGY CORPORATION
Inventors: Ravinath Kausik Kadayam Viswanathan, Lalitha Venkataramanan, Augustin Prado
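One common reading of this decomposition, in which variances add (total = aleatoric + epistemic), gives aleatoric uncertainty as the remainder; the clamp at zero is an illustrative safeguard for noisy estimates, not stated in the patent:

```python
def aleatoric_uncertainty(total_var, epistemic_var):
    """Aleatoric (data) variance as the part of the total predictive
    variance not explained by epistemic (model) variance, floored at
    zero because both inputs are themselves estimates."""
    return max(total_var - epistemic_var, 0.0)
```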
-
Patent number: 11886990
Abstract: A classification device includes a generation unit, a learning unit, a classification unit, and an output control unit. The generation unit generates pseudo data having a feature similar to a feature of training data. The learning unit learns, by using the training data and the pseudo data, a classification model that classifies data into one of a pseudo class for classifying the pseudo data and a plurality of classification classes other than the pseudo class and that is constructed by a neural network. The classification unit classifies, by using the classification model, input data as a target for classification into one of the pseudo class and the plurality of classification classes. The output control unit outputs information indicating that the input data classified into the pseudo class is data not belonging to any of the plurality of classification classes.
Type: Grant
Filed: March 8, 2019
Date of Patent: January 30, 2024
Assignee: Kabushiki Kaisha Toshiba
Inventor: Kouta Nakata
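The inference-time behavior of the output control unit reduces to an argmax check against the pseudo-class index; this sketch assumes a plain logits vector and is not the patent's architecture:

```python
import numpy as np

def classify_with_pseudo_class(logits, pseudo_idx):
    """The model scores N real classes plus one pseudo class that was
    trained on generated look-alike data. If the pseudo class wins,
    the input is reported as belonging to none of the real classes."""
    pred = int(np.argmax(logits))
    if pred == pseudo_idx:
        return None  # outside all real classification classes
    return pred
```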
-
Patent number: 11861459
Abstract: Methods, systems and computer program products for providing automatic determination of recommended hyper-local data sources and features for use in modeling are provided. Responsive to training each model of a plurality of models, aspects include receiving client data, a use-case description and a selection of hyper-local data sources, generating a client data profile, determining feature importance and generating a use-case profile. Aspects also include generating a feature profile relation graph including client data profile nodes, hyper-local feature nodes and use-case profile nodes, wherein each hyper-local feature node is associated with one or more client data profile nodes and use-case profile nodes by a respective edge having an associated edge weight. Responsive to receiving a new client data set and a new use-case description, aspects also include determining one or more hyper-local features as suggested hyper-local features for use in building a new model.
Type: Grant
Filed: June 11, 2019
Date of Patent: January 2, 2024
Assignee: International Business Machines Corporation
Inventors: Rajendra Rao, Rajesh Phillips, Manisha Sharma Kohli, Puneet Sharma, Vijay Ekambaram
-
Patent number: 11853875
Abstract: A processor-implemented neural network method includes acquiring a connection weight of an analog neural network (ANN) node of a pre-trained ANN; and determining a firing rate of a spiking neural network (SNN) node of an SNN, corresponding to the ANN node, based on an activation of the ANN node which is determined based on the connection weight; the firing rate is also determined based on information indicating a timing at which the SNN node initially fires.
Type: Grant
Filed: October 23, 2018
Date of Patent: December 26, 2023
Assignees: Samsung Electronics Co., Ltd., UNIVERSITAET ZUERICH
Inventors: Bodo Ruckauer, Shih-Chii Liu
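A rate-coding sketch of the conversion: the SNN node's steady rate is taken as proportional to the clipped ANN activation and discounted by the silent period before the first spike. Both the proportionality and the discount rule are illustrative assumptions; the patent's exact relation may differ:

```python
def snn_firing_rate(activation, t_first, t_total, r_max=1.0):
    """Effective firing rate over a simulation window of length t_total:
    the steady rate (ANN activation clipped to [0, r_max]) scaled down
    by the fraction of the window spent silent before the first spike."""
    steady = min(max(activation, 0.0), r_max)
    return steady * (t_total - t_first) / t_total
```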
-
Patent number: 11853879
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating document vector representations. One of the methods includes obtaining a new document; and determining a vector representation for the new document using a trained neural network system, wherein the trained neural network system has been trained to receive an input document and a sequence of words from the input document and to generate a respective word score for each word in a set of words, wherein each of the respective word scores represents a predicted likelihood that the corresponding word follows a last word in the sequence in the input document, and wherein determining the vector representation for the new document using the trained neural network system comprises iteratively providing each of the plurality of sequences of words to the trained neural network system to determine the vector representation for the new document using gradient descent.
Type: Grant
Filed: July 26, 2019
Date of Patent: December 26, 2023
Assignee: Google LLC
Inventor: Quoc V. Le
-
Patent number: 11853910
Abstract: Provided are a computer program product, system, and method for ranking action sets comprised of actions for an event to optimize action set selection. Information is maintained on actions for a plurality of events. Each action indicates an action value of the action to the user and event weights of the action with respect to a plurality of the events. A determination is made of action sets having at least one action to perform for the event. For each determined action set, a rank of the action set is calculated as a function of the action value for each action in the action set and an event weight of the action with respect to the event. At least one action set is presented to the user for consideration. In response to receiving user feedback, an adjusted rank is set for at least one of the presented action sets.
Type: Grant
Filed: October 17, 2019
Date of Patent: December 26, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Tansel Zenginler, Natalie Brooks Powell, Vinod A. Valecha
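The ranking function follows directly from the abstract: sum, over the set's actions, of action value times the action's weight for the event. The dict layout and the additive feedback adjustment are illustrative assumptions:

```python
def rank_action_set(action_set, event, feedback_adjust=0.0):
    """Rank = sum over actions of (action value to the user) x (that
    action's event weight for this event), optionally shifted by a
    user-feedback adjustment. Each action is a dict of the form
    {"value": float, "event_weights": {event_name: float}}."""
    score = sum(a["value"] * a["event_weights"].get(event, 0.0)
                for a in action_set)
    return score + feedback_adjust
```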
-
Patent number: 11853886
Abstract: In a computer system that includes a trained recurrent neural network (RNN), a computer-based method includes: producing a copy of the trained RNN; producing a version of the RNN prior to any training; trying to solve a control task for the RNN with the copy of the trained RNN and with the untrained version of the RNN; and in response to the copy of the trained RNN or the untrained version of the RNN solving the task sufficiently well: retraining the trained RNN with one or more traces (sequences of inputs and outputs) from the solution; and retraining the trained RNN based on one or more traces associated with other prior control task solutions, as well as retraining the RNN based on previously observed traces to predict environmental inputs and other data (which may be consequences of executed control actions).
Type: Grant
Filed: September 30, 2022
Date of Patent: December 26, 2023
Assignee: Nnaisense SA
Inventor: Hans Jürgen Schmidhuber
-
Patent number: 11763139
Abstract: A neuromorphic chip includes synaptic cells including respective resistive devices, axon lines, dendrite lines and switches. The synaptic cells are connected to the axon lines and dendrite lines to form a crossbar array. The axon lines are configured to receive input data and to supply the input data to the synaptic cells. The dendrite lines are configured to receive output data and to supply the output data via one or more respective output lines. A given one of the switches is configured to connect an input terminal to one or more input lines and to changeably connect its one or more output terminals to a given one or more axon lines.
Type: Grant
Filed: January 19, 2018
Date of Patent: September 19, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Atsuya Okazaki, Masatoshi Ishii, Junka Okazawa, Kohji Hosokawa, Takayuki Osogami
-
Patent number: 11734548
Abstract: The present disclosure provides an integrated circuit chip device and a related product. The integrated circuit chip device includes: a primary processing circuit and a plurality of basic processing circuits. The primary processing circuit or at least one of the plurality of basic processing circuits includes the compression mapping circuits configured to perform compression on each data of a neural network operation. The technical solution provided by the present disclosure has the advantages of a small amount of computations and low power consumption.
Type: Grant
Filed: November 27, 2019
Date of Patent: August 22, 2023
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Xinkai Song, Bingrui Wang, Yao Zhang, Shuai Hu
-
Patent number: 11693392
Abstract: Example implementations described herein are directed to a system for manufacturing dispatching using reinforcement learning and transfer learning. The systems and methods described herein can be deployed in factories for manufacturing dispatching for reducing job-due related costs. In particular, example implementations described herein can be used to reduce massive data collection and reduce model training time, which can eventually improve dispatching efficiency and reduce factory cost.
Type: Grant
Filed: January 30, 2019
Date of Patent: July 4, 2023
Assignee: HITACHI, LTD.
Inventors: Shuai Zheng, Chetan Gupta, Susumu Serita
-
Patent number: 11645510
Abstract: An example method for accelerating neuron computations in an artificial neural network (ANN) comprises receiving a plurality of pairs of first values and second values associated with a neuron of an ANN, selecting pairs from the plurality of pairs, wherein a count of the selected pairs is less than a count of all pairs in the plurality of pairs, performing mathematical operations on the selected pairs to obtain a result, determining that the result does not satisfy a criterion, and, until the result satisfies the criterion, selecting further pairs from the plurality, performing the mathematical operations on the selected further pairs to obtain further results, and determining, based on the result and the further results, an output of the neuron.
Type: Grant
Filed: April 8, 2019
Date of Patent: May 9, 2023
Assignee: MIPSOLOGY SAS
Inventor: Ludovic Larzul
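One plausible instance of this early-exit scheme, using "the partial sum changed by less than a tolerance" as the stopping criterion (the patent leaves the criterion abstract, so the chunk size and tolerance here are illustrative):

```python
def neuron_output(pairs, chunk=4, threshold=1e-3):
    """Accumulate (weight, input) products chunk by chunk; stop as soon
    as adding a chunk moves the partial sum by less than the tolerance,
    skipping the remaining (presumably negligible) pairs."""
    total, prev = 0.0, None
    for i in range(0, len(pairs), chunk):
        total += sum(w * x for w, x in pairs[i:i + chunk])
        if prev is not None and abs(total - prev) < threshold:
            break  # criterion satisfied: remaining pairs are skipped
        prev = total
    return total
```

Sorting pairs by descending weight magnitude beforehand would make the skipped tail genuinely negligible; that ordering step is left out of the sketch.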
-
Patent number: 11640534
Abstract: Backpropagation of an artificial neural network can be triggered based on input data. The input data are received into the artificial neural network, and the input data are forward propagated through the artificial neural network, which generates output values at classifier layer perceptrons of the network. Classifier layer perceptrons that have the largest output values after the input data have been forward propagated through the artificial neural network are identified. The output difference between the classifier layer perceptrons that have the largest output values is determined. It is then determined whether the output difference transgresses a threshold, and if the output difference does not transgress the threshold, the artificial neural network is backpropagated.
Type: Grant
Filed: November 15, 2019
Date of Patent: May 2, 2023
Assignee: Raytheon Company
Inventor: John E. Mixter
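The margin check that gates backpropagation reduces to comparing the two largest classifier outputs; the function name and threshold semantics below are illustrative:

```python
def should_backpropagate(outputs, margin_threshold):
    """After the forward pass, take the two largest classifier-layer
    outputs. If their difference fails to clear the threshold the
    prediction is ambiguous, so backpropagation is triggered for this
    example; confident examples skip the backward pass entirely."""
    top, second = sorted(outputs, reverse=True)[:2]
    return (top - second) < margin_threshold
```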
-
Patent number: 11636001
Abstract: Embodiments of the invention provide a method and system for determining an error threshold value when a vector distance based error measure is to be used for machine failure prediction. The method comprises: identifying a plurality of basic memory depth values based on a target sequence to be used for machine failure prediction; calculating an average depth value based on the plurality of basic memory depth values; retrieving an elementary error threshold value, based on the average depth value, from a pre-stored table which is stored in a memory and includes a plurality of mappings wherein each mapping associates a predetermined depth value of an elementary sequence to an elementary error threshold value; and calculating an error threshold value corresponding to the target sequence based on both the retrieved elementary error threshold value and a standard deviation of the plurality of basic memory depth values.
Type: Grant
Filed: April 24, 2019
Date of Patent: April 25, 2023
Assignee: Avanseus Holdings Pte. Ltd.
Inventor: Chiranjib Bhandary
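A sketch of the threshold computation; the nearest-depth lookup and the "+ k x stddev" combination rule are illustrative assumptions about how the retrieved elementary value and the standard deviation are combined, which the abstract does not specify:

```python
import statistics

def error_threshold(basic_depths, elementary_table, k=1.0):
    """Average the basic memory depth values, look up the elementary
    error threshold for the nearest tabulated depth, then widen it in
    proportion to the spread (standard deviation) of the depths."""
    avg = sum(basic_depths) / len(basic_depths)
    nearest = min(elementary_table, key=lambda d: abs(d - avg))
    return elementary_table[nearest] + k * statistics.stdev(basic_depths)
```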
-
Patent number: 11588099
Abstract: A reservoir element of the first aspect of the present disclosure includes: a first ferromagnetic layer; a plurality of second ferromagnetic layers positioned in a first direction with respect to the first ferromagnetic layer and spaced apart from each other in a plan view from the first direction; and a nonmagnetic layer positioned between the first ferromagnetic layer and the second ferromagnetic layers.
Type: Grant
Filed: September 10, 2019
Date of Patent: February 21, 2023
Assignee: TDK CORPORATION
Inventors: Tomoyuki Sasaki, Tatsuo Shibata
-
Patent number: 11568301
Abstract: A machine learning system includes multiple machine learning models. A target object, such as a file, is scanned for machine learning features. Context information of the target object, such as the type of the object and how the object was received in a computer, is employed to select a machine learning model among the multiple machine learning models. The machine learning model is also selected based on threat intelligence, such as census information of the target object. The selected machine learning model makes a prediction using machine learning features extracted from the target object. The target object is allowed or blocked depending on whether or not the prediction indicates that the target object is malicious.
Type: Grant
Filed: January 31, 2018
Date of Patent: January 31, 2023
Assignee: Trend Micro Incorporated
Inventors: Peng-Yuan Yueh, Chia-Yen Chang, Po-I Wang, Te-Ching Chen
-
Patent number: 11429856
Abstract: An approach for generating a trained neural network is provided. In an embodiment, a neural network, which can have an input layer, an output layer, and a hidden layer, is created. An initial training of the neural network is performed using a set of labeled data. The boosted neural network resulting from the initial training is applied to unlabeled data to determine whether any of the unlabeled data qualifies as additional labeled data. If it is determined that any of the unlabeled data qualifies as additional labeled data, the boosted neural network is retrained using the additional labeled data. Otherwise, if it is determined that none of the unlabeled data qualifies as additional labeled data, the neural network is updated to change a number of predictor nodes in the neural network.
Type: Grant
Filed: September 12, 2018
Date of Patent: August 30, 2022
Assignee: International Business Machines Corporation
Inventors: Jamal Hammoud, Marc Joel Herve Legroux
-
Patent number: 11416739
Abstract: A simulation processor generates and stores a simulation model based on conditions associated with a physical structure, such as a building. A neural network processor implements a neural network, having an input layer coupled to receive sensor data from the structure and having an output layer coupled to supply control signals to the at least one electrically operable environmental control device. The neural network is trained using the simulation model. A particle swarm optimization processor programmed to receive the simulation results and perform particle swarm optimization, ascertains optimal parameters for controlling the at least one electrically operable environmental control device and supplies these optimal parameters to the neural network processor. The neural network processor uses the optimal parameters supplied by the particle swarm optimization processor to further train the neural network.
Type: Grant
Filed: January 29, 2018
Date of Patent: August 16, 2022
Assignee: Lawrence Livermore National Security, LLC
Inventor: Yining Qin
-
Patent number: 11392826
Abstract: Sequences of computer network log entries indicative of a cause of an event described in a first type of entry are identified by training a long short-term memory (LSTM) neural network to detect computer network log entries of a first type. The network is characterized by a plurality of ordered cells F_i = (x_i, c_{i-1}, h_{i-1}) and a final sigmoid layer characterized by a weight vector w^T. A sequence of log entries x_i is received. An h_i for each entry is determined using the trained F_i. A value of the gating function G_i(h_i, h_{i-1}) = 𝟙(w^T(h_i − h_{i-1}) + b) is determined for each entry, where 𝟙 is an indicator function and b is a bias parameter. A sub-sequence of x_i corresponding to G_i(h_i, h_{i-1}) = 1 is output as a sequence of entries indicative of a cause of an event described in a log entry of the first type.
Type: Grant
Filed: December 27, 2017
Date of Patent: July 19, 2022
Assignee: Cisco Technology, Inc.
Inventors: Saurabh Verma, Gyana R. Dash, Shamya Karumbaiah, Arvind Narayanan, Manjula Shivanna, Sujit Biswas, Antonio Nucci
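With the gate read as an indicator that its argument is positive (an assumption; the original formula is garbled in this listing), the sub-sequence extraction over precomputed hidden states looks like:

```python
import numpy as np

def causal_subsequence(hs, w, b):
    """Given hidden states h_i from the trained LSTM, mark the entries
    where the gate G_i = 1(w^T (h_i - h_{i-1}) + b > 0) fires; those
    indices form the candidate causal sub-sequence of log entries.
    hs is a list of vectors, w a weight vector, b a scalar bias."""
    picked = []
    for i in range(1, len(hs)):
        if float(w @ (hs[i] - hs[i - 1])) + b > 0:
            picked.append(i)
    return picked
```

Running the trained LSTM to obtain hs, and mapping the returned indices back to raw log lines, are left outside the sketch.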