Patents Examined by Brian J Hales
- Patent number: 11861485
  Abstract: A data format converter rearranges data of an input image for input to a systolic array of multiply-and-accumulate (MAC) processing elements (PEs). The image has a pixel height and a pixel width in a number of channels equal to a number of colors per pixel. The data format converter rearranges the data to a second, greater number of channels and inputs the second number of channels to one side of the systolic array. The second number of channels is less than or equal to the number of MAC PEs on the one side of the systolic array, and results in greater MAC PE utilization in the systolic array.
  Type: Grant
  Filed: November 22, 2019
  Date of Patent: January 2, 2024
  Assignee: BAIDU USA LLC
  Inventor: Min Guo
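The patent does not publish its exact rearrangement, but a space-to-depth transform is one standard way to trade spatial resolution for channel count so that more rows of a systolic array receive input data. A minimal sketch, assuming a 2x2 block rearrangement of an RGB image:

```python
import numpy as np

def space_to_depth(img, block=2):
    """Rearrange an (H, W, C) image into (H//block, W//block, C*block*block),
    trading spatial size for a greater number of channels."""
    h, w, c = img.shape
    assert h % block == 0 and w % block == 0
    x = img.reshape(h // block, block, w // block, block, c)
    x = x.transpose(0, 2, 1, 3, 4)      # group each spatial block's pixels together
    return x.reshape(h // block, w // block, c * block * block)

img = np.arange(4 * 4 * 3).reshape(4, 4, 3)
out = space_to_depth(img, block=2)
print(out.shape)   # (2, 2, 12): 3 channels became 12
```

With 12 channels instead of 3, a systolic array with 12 or more rows on its input side keeps far more of its MAC PEs busy on the first layer.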
- Patent number: 11853890
  Abstract: Provided is an operation method for a memory device, the memory device being used for implementing an Artificial Neural Network (ANN). The operation method includes: reading from the memory device a weight matrix of a current layer of a plurality of layers of the ANN to extract a plurality of neuro values; determining whether to perform calibration; when it is determined to perform calibration, recalculating and updating a mean value and a variance value of the neuro values; and performing batch normalization based on the mean value and the variance value of the neuro values.
  Type: Grant
  Filed: July 26, 2019
  Date of Patent: December 26, 2023
  Assignee: MACRONIX INTERNATIONAL CO., LTD.
  Inventors: Chao-Hung Wang, Yu-Hsuan Lin, Ming-Liang Wei, Dai-Ying Lee
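The conditional-calibration flow above can be sketched in a few lines: recompute and store the statistics only when calibration is requested, then normalize with whatever statistics are stored. The function and variable names here are illustrative, not from the patent:

```python
import numpy as np

def batch_norm(x, mean, var, gamma=1.0, beta=0.0, eps=1e-5):
    """Standard batch normalization given stored statistics."""
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def calibrate_and_normalize(neuro_values, stats, do_calibrate):
    """Recalculate and update the stored mean/variance only when calibration
    is requested, then batch-normalize with the stored statistics."""
    if do_calibrate:
        stats["mean"] = neuro_values.mean()
        stats["var"] = neuro_values.var()
    return batch_norm(neuro_values, stats["mean"], stats["var"])

stats = {"mean": 0.0, "var": 1.0}
x = np.array([1.0, 2.0, 3.0, 4.0])
y = calibrate_and_normalize(x, stats, do_calibrate=True)
print(stats["mean"])   # 2.5
```

Skipping the recalculation when `do_calibrate` is false is what lets the memory device amortize the cost of computing statistics across many inferences.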
- Patent number: 11817184
  Abstract: A computational method simulating the motion of elements within a multi-element system using a graph neural network (GNN). The method includes converting a molecular dynamics snapshot of the elements into a directed graph comprised of nodes and edges. The method further includes the step of initially embedding the nodes and the edges to obtain initially embedded nodes and edges. The method also includes updating the initially embedded nodes and edges by passing a first message from a first edge to a first node using a first message function and passing a second message from the first node to the first edge using a second message function to obtain updated embedded nodes and edges, and predicting a force vector for one or more elements based on the updated embedded edges and a unit vector pointing from the first node to a second node or the second node to the first node.
  Type: Grant
  Filed: May 16, 2019
  Date of Patent: November 14, 2023
  Assignee: Robert Bosch GmbH
  Inventors: Cheol Woo Park, Jonathan Mailoa, Mordechai Kornbluth, Georgy Samsonidze, Soo Kim, Karim Gadelrab, Boris Kozinsky, Nathan Craig
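One round of the edge-to-node and node-to-edge message passing described above can be sketched with toy linear message functions. The graph, embedding sizes, and `tanh` message functions are assumptions for illustration; the patent's actual message functions are learned networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy directed graph: 3 atoms (nodes), 2 bonds (edges).
edges = [(0, 1), (1, 2)]             # (source node, target node)
node_h = rng.normal(size=(3, 4))     # initial node embeddings
edge_h = rng.normal(size=(2, 4))     # initial edge embeddings
W1 = rng.normal(size=(8, 4))         # first message function (edge -> node)
W2 = rng.normal(size=(8, 4))         # second message function (node -> edge)

# First message: each target node aggregates messages from its incoming edges.
new_node = node_h.copy()
for e, (src, dst) in enumerate(edges):
    msg = np.tanh(np.concatenate([edge_h[e], node_h[src]]) @ W1)
    new_node[dst] += msg

# Second message: each edge is updated from its (now updated) endpoint nodes.
new_edge = edge_h.copy()
for e, (src, dst) in enumerate(edges):
    msg = np.tanh(np.concatenate([new_node[src], new_node[dst]]) @ W2)
    new_edge[e] += msg
```

A per-element force vector would then be read out from `new_edge` and scaled along the unit vector between the edge's two nodes.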
- Patent number: 11816564
  Abstract: A feature sub-network trainer improves robustness of interpretability of a deep neural network (DNN) by increasing the likelihood that the DNN will converge to a global minimum of a cost function of the DNN. After determining a plurality of correctly classified examples of a pre-trained DNN, the trainer extracts from the pre-trained DNN a feature sub-network that includes an input layer of the DNN and one or more subsequent sparsely-connected layers of the DNN. The trainer averages output signals from the sub-network to form an average representation of each class identifiable by the DNN. The trainer relabels each correctly classified example with the appropriate average representation, and then trains the feature sub-network with the relabeled examples. In one demonstration, the feature sub-network trainer improved classification accuracy of a seven-layer convolutional neural network, trained with two thousand examples, from 75% to 83% by reusing the training examples.
  Type: Grant
  Filed: May 21, 2019
  Date of Patent: November 14, 2023
  Assignee: Pattern Computer, Inc.
  Inventor: Irshad Mohammed
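The relabeling step above replaces each example's one-hot label with the mean sub-network output of its class. A minimal sketch of that step, with invented feature vectors (the sub-network itself is omitted):

```python
import numpy as np

def average_representations(features, labels):
    """Relabel each correctly classified example with the mean sub-network
    output (the 'average representation') of its class."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    return np.stack([means[l] for l in labels])

feats = np.array([[1.0, 0.0], [3.0, 0.0], [0.0, 2.0]])   # sub-network outputs
labels = np.array([0, 0, 1])
targets = average_representations(feats, labels)
print(targets[0])   # [2. 0.]: both class-0 examples now share the class mean
```

Training the sub-network against these averaged targets (e.g. with a regression loss) is what the patent describes as reusing the same training examples.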
- Patent number: 11790250
  Abstract: Various embodiments are generally directed to an apparatus, system, and other techniques for dynamic and intelligent deployment of a neural network or any inference model on a hardware executor or a combination of hardware executors. Computational costs for one or more operations involved in executing the neural network or inference model may be determined. Based on the computational costs, an optimal distribution of the computational workload involved in running the one or more operations among multiple hardware executors may be determined.
  Type: Grant
  Filed: May 9, 2019
  Date of Patent: October 17, 2023
  Assignee: Intel Corporation
  Inventors: Padmashree Apparao, Michal Karzynski
- Patent number: 11782839
  Abstract: A feature map caching method of a convolutional neural network includes a connection analyzing step and a plurality of layer operation steps. The connection analyzing step is for analyzing a network to establish a convolutional neural network connection list. The convolutional neural network connection list includes a plurality of tensors and a plurality of layer operation coefficients. Each of the layer operation coefficients includes a step index, at least one input operand label and an output operand label. The step index serves as a processing order for the layer operation steps. At least one of the layer operation steps is for flushing at least one of the tensors in a cache according to a distance between the at least one of the layer operation steps and a future layer operation step of the layer operation steps. The distance is calculated according to the convolutional neural network connection list.
  Type: Grant
  Filed: August 19, 2019
  Date of Patent: October 10, 2023
  Assignee: NEUCHIPS CORPORATION
  Inventors: Ping Chao, Chao-Yang Kao, Youn-Long Lin
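Because the connection list fixes the whole layer-operation schedule in advance, the "distance to future use" policy amounts to Belady's optimal replacement: evict the cached tensor whose next use is farthest away. A minimal sketch, with an invented connection-list layout of `(step index, input labels, output label)` tuples:

```python
def flush_victim(cache, step, connection_list):
    """Pick the cached tensor whose next use lies farthest in the future,
    computed statically from the layer-operation connection list."""
    def next_use(tensor):
        for s, inputs, output in connection_list:
            if s > step and tensor in inputs:
                return s
        return float("inf")        # never used again: the best victim
    return max(cache, key=next_use)

# Connection list: (step index, input operand labels, output operand label)
ops = [(1, ["a"], "b"), (2, ["b"], "c"), (3, ["a", "c"], "d")]
print(flush_victim({"a", "b"}, step=1, connection_list=ops))   # a
```

At step 1, tensor "a" is next needed at step 3 while "b" is needed at step 2, so "a" is flushed first, which is exactly the distance comparison the abstract describes.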
- Patent number: 11783178
  Abstract: A method includes generating a training data set comprising a plurality of training examples, wherein each training example is generated by receiving map data associated with a road portion, receiving sensor data associated with a road agent located on the road portion, defining one or more corridors associated with the road portion based on the map data and the sensor data, extracting a plurality of agent features associated with the road agent based on the sensor data, extracting a plurality of corridor features associated with each of the one or more corridors based on the sensor data, and for each corridor, labeling the training example based on the position of the road agent with respect to the corridor, and training a neural network using the training data set.
  Type: Grant
  Filed: July 30, 2020
  Date of Patent: October 10, 2023
  Assignee: TOYOTA RESEARCH INSTITUTE, INC.
  Inventors: Blake Warren Wulfe, Wolfram Burgard
- Patent number: 11694091
  Abstract: A method for receiving an ownership graph, wherein the ownership graph comprises a first set of nodes and a first set of directional edges, and wherein each of the first set of directional edges connects two nodes and indicates ownership of a first node by a second node, each node having at most one owner, the ownership graph being acyclic. The method further includes receiving a dependency graph that also comprises a set of nodes and a set of directional edges. The method further includes creating a respective enumerating variable declaration for each node in a path from an owner node to a root node in the ownership graph. The method further includes creating a respective accessing variable declaration for each owner node in the dependency graph of the current node.
  Type: Grant
  Filed: November 20, 2018
  Date of Patent: July 4, 2023
  Assignee: International Business Machines Corporation
  Inventors: Jean-Michel G. B. Bernelas, Ulrich M. Junker, Thierry Kormann, Guilhem J. Molines
- Patent number: 11688160
  Abstract: A method of generating training data for training a neural network, a method of training a neural network and using the neural network for autonomous operations, and related devices and systems. In one aspect, a neural network for autonomous operation of an object in an environment is trained. Policy values are generated based on a sample data set. An approximate action-value function is generated from the policy values. A set of approximated policy values is generated using the approximate action-value function for all states in the sample data set for all possible actions. A training target for the neural network is calculated based on the approximated policy values. A training error is calculated as the difference between the training target and the policy value for the corresponding state-action pair in the sample data set. At least some of the parameters of the neural network are updated to minimize the training error.
  Type: Grant
  Filed: January 15, 2019
  Date of Patent: June 27, 2023
  Assignee: Huawei Technologies Co., Ltd.
  Inventor: Hengshuai Yao
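The target-and-error construction above resembles the standard bootstrapped Q-learning target; the generic form (an assumption, since the patent defines its own approximate action-value function) can be sketched over a tabular Q:

```python
import numpy as np

def training_targets(q, samples, gamma=0.99):
    """For each (state, action, reward, next_state) sample, form the target
    r + gamma * max_a' Q(s', a') and the training error against Q(s, a)."""
    targets, errors = [], []
    for s, a, r, s_next in samples:
        target = r + gamma * np.max(q[s_next])   # approximated policy value
        targets.append(target)
        errors.append(target - q[s, a])          # training error to minimize
    return targets, errors

# Tabular Q over 2 states x 2 actions, one sampled transition.
q = np.array([[0.0, 1.0], [2.0, 0.5]])
targets, errors = training_targets(q, [(0, 1, 1.0, 1)])
print(round(targets[0], 2))   # 2.98
```

A gradient step on the squared error then updates the network (or table) parameters toward the target, which is the final step the abstract describes.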
- Patent number: 11665976
  Abstract: A reservoir element of the first aspect of the present disclosure includes: a spin conduction layer containing a non-magnetic conductor; ferromagnetic layers positioned in a first direction with respect to the spin conduction layer and spaced apart from each other in a plan view from the first direction; and via wirings electrically connected to the spin conduction layer on a surface opposite to the surface with the ferromagnetic layers.
  Type: Grant
  Filed: September 10, 2019
  Date of Patent: May 30, 2023
  Assignee: TDK CORPORATION
  Inventors: Tomoyuki Sasaki, Tatsuo Shibata
- Patent number: 11645358
  Abstract: In an example, a neural network program corresponding to a neural network model is received. The neural network program includes matrices, vectors, and matrix-vector multiplication (MVM) operations. A computation graph corresponding to the neural network model is generated. The computation graph includes a plurality of nodes, each node representing a MVM operation, a matrix, or a vector. Further, a class model corresponding to the neural network model is populated with a data structure pointing to the computation graph. The computation graph is traversed based on the class model. Based on the traversal, the plurality of MVM operations are assigned to MVM units of a neural network accelerator. Each MVM unit can perform a MVM operation. Based on assignment of the plurality of MVM operations, an executable file is generated for execution by the neural network accelerator.
  Type: Grant
  Filed: January 29, 2019
  Date of Patent: May 9, 2023
  Assignee: Hewlett Packard Enterprise Development LP
  Inventors: Soumitra Chatterjee, Sunil Vishwanathpur Lakshminarasimha, Mohan Parthasarathy
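The traversal-and-assignment step can be sketched with a toy node list and a simple round-robin policy; the graph layout and the round-robin choice are illustrative assumptions, not the patent's class-model-driven traversal:

```python
# Walk a tiny computation graph and hand each MVM node to the next
# available MVM unit, round-robin.
graph = [
    {"op": "mvm", "matrix": "W1", "vector": "x"},
    {"op": "vector", "name": "b1"},          # non-MVM nodes are skipped
    {"op": "mvm", "matrix": "W2", "vector": "h1"},
]

num_units = 2
assignment = {}       # graph node index -> MVM unit index
unit = 0
for i, node in enumerate(graph):
    if node["op"] == "mvm":
        assignment[i] = unit % num_units     # each MVM unit performs one MVM op
        unit += 1

print(assignment)   # {0: 0, 2: 1}
```

An executable file for the accelerator would then be emitted from this mapping, encoding which unit holds which matrix.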
- Patent number: 11630982
  Abstract: Aspects of the present disclosure address systems and methods for fixed-point quantization using a dynamic quantization level adjustment scheme. Consistent with some embodiments, a method comprises accessing a neural network comprising floating-point representations of filter weights corresponding to one or more convolution layers. The method further includes determining a peak value of interest from the filter weights and determining a quantization level for the filter weights based on a number of bits in a quantization scheme. The method further includes dynamically adjusting the quantization level based on one or more constraints. The method further includes determining a quantization scale of the filter weights based on the peak value of interest and the adjusted quantization level. The method further includes quantizing the floating-point representations of the filter weights using the quantization scale to generate fixed-point representations of the filter weights.
  Type: Grant
  Filed: September 14, 2018
  Date of Patent: April 18, 2023
  Assignee: Cadence Design Systems, Inc.
  Inventors: Ming Kai Hsu, Sandip Parikh
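Without the dynamic adjustment step, the peak-value-to-scale pipeline above is plain symmetric fixed-point quantization, which can be sketched directly (the dynamic level adjustment based on constraints is the patent's addition and is omitted here):

```python
import numpy as np

def quantize(weights, bits=8):
    """Symmetric fixed-point quantization: the scale maps the peak absolute
    weight to the largest representable integer level."""
    peak = np.abs(weights).max()        # peak value of interest
    levels = 2 ** (bits - 1) - 1        # quantization level, e.g. 127 for 8 bits
    scale = peak / levels
    q = np.round(weights / scale).astype(np.int32)
    return q, scale

w = np.array([-0.5, 0.1, 0.3, 0.5])
q, scale = quantize(w, bits=8)
print(q.tolist())   # [-127, 25, 76, 127]
```

Dequantizing as `q * scale` recovers each weight to within half a quantization step, which is the fixed-point representation the abstract refers to.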
- Patent number: 11593606
  Abstract: A system includes a data collection engine, a plurality of items including radio-frequency identification chips, a plurality of third party data and insight sources, a plurality of interfaces, client devices, a server and method thereof for preventing suicide. The server includes trained machine learning models, business logic and attributes of a plurality of patient events. The data collection engine sends attributes of new patient events to the server. The server can predict an adverse event risk of the new patient events based upon the attributes of the new patient events utilizing the trained machine learning models.
  Type: Grant
  Filed: September 11, 2018
  Date of Patent: February 28, 2023
  Assignee: Brain Trust Innovations I, LLC
  Inventor: David LaBorde
- Patent number: 11586912
  Abstract: Methods, systems, and circuits for training a neural network include applying noise to a set of training data across wordlines using a respective noise switch on each wordline. A neural network is trained using the noise-applied training data to generate a classifier that is robust against adversarial training.
  Type: Grant
  Filed: October 18, 2019
  Date of Patent: February 21, 2023
  Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
  Inventors: Chia-Yu Chen, Pin-Yu Chen, Mingu Kang, Jintao Zhang
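The per-wordline noise switch can be modeled in software as a binary mask over the rows of a training batch, with Gaussian noise standing in for the circuit's noise source (both the mask representation and the noise model are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

def apply_wordline_noise(batch, noise_mask, sigma=0.1):
    """Add Gaussian noise only to the rows (wordlines) whose switch is on.
    noise_mask[i] plays the role of the i-th wordline's noise switch."""
    noise = rng.normal(0.0, sigma, size=batch.shape)
    return batch + noise * noise_mask[:, None]

batch = np.zeros((4, 3))
mask = np.array([1.0, 0.0, 1.0, 0.0])   # switches: rows 0 and 2 get noise
noisy = apply_wordline_noise(batch, mask)
```

Training on `noisy` rather than `batch` is the software analogue of the noise-injected training the circuit performs in hardware.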
- Patent number: 11568235
  Abstract: Embodiments for implementing mixed precision learning for neural networks by a processor. A neural network may be replicated into a plurality of replicated instances and each of the plurality of replicated instances differ in precision used for representing and determining parameters of the neural network. Data instances may be routed to one or more of the plurality of replicated instances for processing according to a data pre-processing operation.
  Type: Grant
  Filed: November 19, 2018
  Date of Patent: January 31, 2023
  Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
  Inventors: Zehra Sura, Parijat Dube, Bishwaranjan Bhattacharjee, Tong Chen
- Patent number: 11568226
  Abstract: A processing system includes a receiving circuit 1 for receiving an input signal from an externally connected sensor, an expected signal generating circuit 4 for automatically generating a teaching signal for use in the learning circuit 5, a learning circuit 5 for calculating a weight value, a bias value, and the like of the neural network model to form an expected signal from the teaching signal generated by the expected signal generating circuit 4 and the signal from the receiving circuit 1, an inference circuit 2 for performing signal processing based on a learned model of the neural network model generated by the learning circuit 5, and a validity verification circuit 3 for performing similarity calculation between an output signal of the inference circuit 2 and an expected signal for comparison.
  Type: Grant
  Filed: December 10, 2019
  Date of Patent: January 31, 2023
  Assignee: RENESAS ELECTRONICS CORPORATION
  Inventor: Yasushi Wakayama
- Patent number: 11556776
  Abstract: A task agnostic framework for neural model transfer from a first language to a second language, that can minimize computational and monetary costs by accurately forming predictions in a model of the second language by relying on only a labeled data set in the first language, a parallel data set between both languages, a labeled loss function, and an unlabeled loss function. The models may be trained jointly or in a two-stage process.
  Type: Grant
  Filed: October 18, 2018
  Date of Patent: January 17, 2023
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: Sujay Kumar Jauhar, Michael Gamon, Patrick Pantel
- Patent number: 11521049
  Abstract: An optimization device includes: processing circuits each configured to: hold a first value of a neuron of an Ising model; and perform a process to determine whether to permit updating of the first value based on information of the Ising model and information about a target neuron; a control circuit configured to: set, while causing a portion of the processing circuits to perform the process for a partial neuron group, information to be used for the process for a first neuron other than the partial neuron group in a first processing circuit; cause a second processing circuit among the portion of the processing circuits to inactivate the process; and cause the first processing circuit to start the process for the first neuron; and an update neuron selection circuit configured to: select the target neuron from one or more update permissible neurons; and update the value of the target neuron.
  Type: Grant
  Filed: September 27, 2019
  Date of Patent: December 6, 2022
  Assignee: FUJITSU LIMITED
  Inventors: Sanroku Tsukamoto, Satoshi Matsubara
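The permit-then-select loop above follows the usual Ising annealing pattern: each neuron decides whether its flip is permissible (here using a Metropolis acceptance rule, an assumption; the patent does not fix the criterion), then a selection stage picks one permitted neuron to update:

```python
import numpy as np

rng = np.random.default_rng(1)

def update_candidates(spins, weights, bias, temperature):
    """Return indices of neurons whose flip is permitted: always when the
    flip lowers the Ising energy, otherwise with probability exp(-dE/T)."""
    permitted = []
    for i in range(len(spins)):
        local_field = weights[i] @ spins + bias[i]
        d_energy = 2 * spins[i] * local_field   # energy change of flipping spin i
        if d_energy <= 0 or rng.random() < np.exp(-d_energy / temperature):
            permitted.append(i)
    return permitted

spins = np.array([1, -1, 1])
weights = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
bias = np.zeros(3)
cands = update_candidates(spins, weights, bias, temperature=0.5)
target = rng.choice(cands)   # update-neuron selection: flip one permitted neuron
```

In this toy coupling every flip lowers the energy, so all three neurons are update-permissible and the selection circuit is free to choose any of them.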
- Patent number: 11455526
  Abstract: According to an embodiment, a neural network device includes: a plurality of cores each executing computation and processing of a partial component in a neural network; and a plurality of routers transmitting data output from each core to one of the plurality of cores such that computation and processing are executed according to structure of the neural network. Each of the plurality of cores outputs at least one of a forward data and a backward data propagated through the neural network in a forward direction and a backward direction, respectively. Each of the plurality of routers is included in one of a plurality of partial regions each being a forward region or a backward region. A router included in the forward region and a router included in the backward region transmit the forward data and the backward data to other routers in the same partial regions, respectively.
  Type: Grant
  Filed: March 12, 2019
  Date of Patent: September 27, 2022
  Assignee: KABUSHIKI KAISHA TOSHIBA
  Inventors: Kumiko Nomura, Takao Marukame, Yoshifumi Nishi