Patents Examined by Kamran Afshar
-
Patent number: 11868870
Abstract: A neuromorphic apparatus configured to process a multi-bit neuromorphic operation, including a single axon circuit, a single synaptic circuit, a single neuron circuit, and a controller. The single axon circuit is configured to receive, as a first input, an i-th bit of an n-bit axon. The single synaptic circuit is configured to store, as a second input, a j-th bit of an m-bit synaptic weight and output a synaptic operation value between the first input and the second input. The single neuron circuit is configured to obtain each bit value of a multi-bit neuromorphic operation result between the n-bit axon and the m-bit synaptic weight, based on the output synaptic operation value. The controller is configured to determine the i-th bit and the j-th bit to be sequentially assigned, in each of a series of different time periods, to the single axon circuit and the single synaptic circuit, respectively.
Type: Grant
Filed: August 30, 2019
Date of Patent: January 9, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Sungho Kim, Cheheung Kim, Jaeho Lee
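The bit-serial scheme described in this abstract can be illustrated in software. The sketch below is a minimal emulation, not the patented circuit: a single AND-style synaptic operation is time-multiplexed over every (i, j) bit pair, and the accumulator plays the role of the neuron circuit.

```python
def bit_serial_multiply(axon: int, weight: int, n: int, m: int) -> int:
    """Emulate a single time-multiplexed synapse computing an
    n-bit x m-bit product, one bit pair per time period."""
    acc = 0
    for i in range(n):                 # i-th bit of the n-bit axon
        a_bit = (axon >> i) & 1
        for j in range(m):             # j-th bit of the m-bit weight
            w_bit = (weight >> j) & 1
            # Synaptic operation value (AND), weighted by bit position.
            acc += (a_bit & w_bit) << (i + j)
    return acc
```

After n×m time periods the accumulator holds the full product; for example, `bit_serial_multiply(13, 11, 4, 4)` returns 143.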
-
Patent number: 11868860
Abstract: Systems and methods may use one or more artificial intelligence models that predict an effect of a predicted event on a current state of the system. The systems and methods may use one or more artificial intelligence models that predict an effect and/or occurrence of a predicted event based on the current state of the system. In order to generate responses that are both timely and pertinent (e.g., in a dynamic fashion), the system must determine, both quickly (i.e., in real-time or near real-time) and accurately, the predicted event.
Type: Grant
Filed: February 24, 2023
Date of Patent: January 9, 2024
Assignee: Citibank, N.A.
Inventors: Ernst Wilhelm Spannhake, II, Thomas Francis Gianelle, Milan Shah
-
Patent number: 11861459
Abstract: Methods, systems and computer program products for providing automatic determination of recommended hyper-local data sources and features for use in modeling are provided. Responsive to training each model of a plurality of models, aspects include receiving client data, a use-case description and a selection of hyper-local data sources, generating a client data profile, determining feature importance and generating a use-case profile. Aspects also include generating a feature profile relation graph including client data profile nodes, hyper-local feature nodes and use-case profile nodes, wherein each hyper-local feature node is associated with one or more client data profile nodes and use-case profile nodes by a respective edge having an associated edge weight. Responsive to receiving a new client data set and a new use-case description, aspects also include determining one or more hyper-local features as suggested hyper-local features for use in building a new model.
Type: Grant
Filed: June 11, 2019
Date of Patent: January 2, 2024
Assignee: International Business Machines Corporation
Inventors: Rajendra Rao, Rajesh Phillips, Manisha Sharma Kohli, Puneet Sharma, Vijay Ekambaram
-
Patent number: 11861483
Abstract: Provided is a spike neural network circuit including a synapse configured to generate an operation signal based on an input spike signal and a weight, and a neuron configured to generate an output spike signal using a comparator configured to compare a voltage of a membrane signal generated based on the operation signal with a voltage of a threshold signal, wherein the comparator includes a bias circuit configured to conditionally supply a bias current of the comparator depending on the membrane signal.
Type: Grant
Filed: November 19, 2019
Date of Patent: January 2, 2024
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Kwang IL Oh, Sung Eun Kim, Seong Mo Park, Young Hwan Bae, Jae-Jin Lee, In Gi Lim
-
Patent number: 11861489
Abstract: Disclosed is a convolutional neural network on-chip learning system based on non-volatile memory, comprising: an input module, a convolutional neural network module, an output module and a weight update module. The on-chip learning of the convolutional neural network module implements the synaptic function by using the characteristic of the memristor, and the convolutional kernel value or synaptic weight value is stored in a memristor unit; the input module converts the input signal into the voltage signal; the convolutional neural network module converts the input voltage signal layer-by-layer, and transmits the result to the output module to obtain the output of the network; and the weight update module adjusts the conductance value of the memristor in the convolutional neural network module according to the result of the output module to update the network convolutional kernel value or synaptic weight value.
Type: Grant
Filed: July 12, 2019
Date of Patent: January 2, 2024
Assignee: HUAZHONG UNIVERSITY OF SCIENCE AND TECHNOLOGY
Inventors: Xiangshui Miao, Yi Li, Wenqian Pan
-
Patent number: 11861485
Abstract: A data format converter rearranges data of an input image for input to a systolic array of multiply-and-accumulate processing elements (MAC PEs). The image has a pixel height and a pixel width in a number of channels equal to a number of colors per pixel. The data format converter rearranges the data to a second, greater number of channels and inputs the second number of channels to one side of the systolic array. The second number of channels is less than or equal to the number of MAC PEs on the one side of the systolic array, and results in greater MAC PE utilization in the systolic array.
Type: Grant
Filed: November 22, 2019
Date of Patent: January 2, 2024
Assignee: BAIDU USA LLC
Inventor: Min Guo
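One common rearrangement consistent with this idea is space-to-depth, which folds k×k spatial patches into the channel dimension so that more rows of the systolic array receive data each cycle. The NumPy sketch below is an illustration of that general technique, not the patented converter.

```python
import numpy as np

def space_to_depth(img: np.ndarray, block: int) -> np.ndarray:
    """Fold block x block spatial patches into channels.

    img: (H, W, C) array; returns (H//block, W//block, C*block*block),
    trading spatial extent for a greater channel count.
    """
    h, w, c = img.shape
    assert h % block == 0 and w % block == 0
    x = img.reshape(h // block, block, w // block, block, c)
    x = x.transpose(0, 2, 1, 3, 4)   # group each patch's pixels together
    return x.reshape(h // block, w // block, c * block * block)
```

A 224×224×3 image with `block=2` becomes 112×112×12: the same data, but presented to the array as twelve channels instead of three.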
-
Patent number: 11853890
Abstract: Provided is an operation method for a memory device, the memory device being used for implementing an Artificial Neural Network (ANN). The operation method includes: reading from the memory device a weight matrix of a current layer of a plurality of layers of the ANN to extract a plurality of neuro values; determining whether to perform calibration; when it is determined to perform calibration, recalculating and updating a mean value and a variance value of the neuro values; and performing batch normalization based on the mean value and the variance value of the neuro values.
Type: Grant
Filed: July 26, 2019
Date of Patent: December 26, 2023
Assignee: MACRONIX INTERNATIONAL CO., LTD.
Inventors: Chao-Hung Wang, Yu-Hsuan Lin, Ming-Liang Wei, Dai-Ying Lee
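A software analogue of the conditional-calibration step might look like the following sketch. The class name and the caller-supplied calibration flag are assumptions for illustration; the memory-read mechanics and the rule for deciding when to calibrate are not modeled.

```python
import numpy as np

class CalibratedBatchNorm:
    """Batch normalization whose mean/variance statistics are
    recalculated only when calibration is requested."""

    def __init__(self, eps: float = 1e-5):
        self.mean = 0.0   # running mean of the neuro values
        self.var = 1.0    # running variance of the neuro values
        self.eps = eps

    def __call__(self, neuro_values: np.ndarray,
                 do_calibration: bool) -> np.ndarray:
        if do_calibration:
            # Recalculate and update the stored statistics.
            self.mean = float(neuro_values.mean())
            self.var = float(neuro_values.var())
        # Normalize with whichever statistics are currently stored.
        return (neuro_values - self.mean) / np.sqrt(self.var + self.eps)
```

Skipping calibration reuses the stored statistics, which is cheaper but lets the normalized output drift if the underlying weights change.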
-
Patent number: 11853905
Abstract: Systems and methods to identify document transitions between adjacent documents within document bundles are disclosed. Exemplary implementations may train a model: obtain training information including a first training bundle and corresponding document separation markers; determine page-specific feature information pertaining to individual pages of the first training bundle; determine, based on the obtained page-specific feature information, page-specific feature values for individual features of the individual pages of the first training bundle; generate, for the individual pages of the first training bundle, a page-specific feature vector; and train the model, using the training document bundles, to determine whether the first page and the second page are part of different documents. Systems and methods may utilize the trained model to identify document transitions between adjacent documents within document bundles.
Type: Grant
Filed: June 24, 2022
Date of Patent: December 26, 2023
Assignee: Instabase, Inc.
Inventor: Daniel Benjamin Cahn
-
Patent number: 11847540
Abstract: Embodiments are directed to a method for accelerating machine learning using a plurality of graphics processing units (GPUs), involving receiving data for a graph to generate a plurality of random samples, and distributing the random samples across a plurality of GPUs. The method may comprise determining a plurality of communities from the random samples using unsupervised learning performed by each GPU. A plurality of sample groups may be generated from the communities and may be distributed across the GPUs, wherein each GPU merges communities in each sample group by converging to an optimal degree of similarity. In addition, the method may also comprise generating from the merged communities a plurality of subgraphs, dividing each subgraph into a plurality of overlapping clusters, distributing the plurality of overlapping clusters across the plurality of GPUs, and scoring each cluster in the plurality of overlapping clusters to train an AI model.
Type: Grant
Filed: August 30, 2021
Date of Patent: December 19, 2023
Assignee: Visa International Service Association
Inventors: Theodore D. Harris, Yue Li, Tatiana Korolevskaya, Craig O'Connell
-
Patent number: 11836579
Abstract: Disclosed is a technique that can be performed by an electronic device. The electronic device can generate time-stamped events, extract training data from the time-stamped events, and send the training data over a network to a remote computer. The electronic device can receive model data generated by the remote computer from the training data by use of a machine learning process, update a local model of the electronic device based on the received model data, and generate an output by processing locally sourced data of the electronic device with the updated local model.
Type: Grant
Filed: September 17, 2019
Date of Patent: December 5, 2023
Assignee: SPLUNK INC.
Inventors: Pradeep Baliganapalli Nagaraju, Adam Jamison Oliner, Brian Matthew Gilmore, Erick Anthony Dean, Jiahan Wang
-
Patent number: 11816581
Abstract: A fast neural transition-based parser. The fast neural transition-based parser includes a decision tree-based classifier and a state vector control loss function. The decision tree-based classifier is dynamically used to replace a multilayer perceptron in the fast neural transition-based parser, and the decision tree-based classifier increases the speed of neural transition-based parsing. The state vector control loss function trains the fast neural transition-based parser and builds a vector space favorable for building the decision tree used by the decision tree-based classifier, and it maintains the accuracy of neural transition-based parsing while the decision tree-based classifier is used to increase the parsing speed.
Type: Grant
Filed: September 8, 2020
Date of Patent: November 14, 2023
Assignee: International Business Machines Corporation
Inventors: Ryosuke Kohita, Daisuke Takuma
-
Patent number: 11817184
Abstract: A computational method simulating the motion of elements within a multi-element system using a graph neural network (GNN). The method includes converting a molecular dynamics snapshot of the elements into a directed graph comprised of nodes and edges. The method further includes the step of initially embedding the nodes and the edges to obtain initially embedded nodes and edges. The method also includes updating the initially embedded nodes and edges by passing a first message from a first edge to a first node using a first message function and passing a second message from the first node to the first edge using a second message function to obtain updated embedded nodes and edges, and predicting a force vector for one or more elements based on the updated embedded edges and a unit vector pointing from the first node to a second node or the second node to the first node.
Type: Grant
Filed: May 16, 2019
Date of Patent: November 14, 2023
Assignee: Robert Bosch GmbH
Inventors: Cheol Woo Park, Jonathan Mailoa, Mordechai Kornbluth, Georgy Samsonidze, Soo Kim, Karim Gadelrab, Boris Kozinsky, Nathan Craig
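The two message functions (edge-to-node, then node-to-edge) can be sketched as a single update round. Everything concrete below, including the weight matrices, the sum aggregation, and the tanh nonlinearity, is an illustrative placeholder rather than the patented model.

```python
import numpy as np

def message_passing_step(node_h, edge_h, edges, w_e2n, w_n2e):
    """One update round on a directed graph.

    node_h: (num_nodes, d) node embeddings
    edge_h: (num_edges, d) edge embeddings
    edges:  list of (src, dst) index pairs
    """
    # First message function: each edge sends its embedding to its
    # destination node, where messages are summed.
    agg = np.zeros_like(node_h)
    for k, (src, dst) in enumerate(edges):
        agg[dst] += edge_h[k] @ w_e2n
    new_node_h = np.tanh(node_h + agg)
    # Second message function: each edge is refreshed from the new
    # embedding of its source node.
    srcs = [s for s, _ in edges]
    new_edge_h = np.tanh(edge_h + new_node_h[srcs] @ w_n2e)
    return new_node_h, new_edge_h
```

In a force-prediction setting, the updated edge embeddings would then be mapped to scalar magnitudes and multiplied by the inter-node unit vectors to produce per-element force vectors.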
-
Patent number: 11816564
Abstract: A feature sub-network trainer improves robustness of interpretability of a deep neural network (DNN) by increasing the likelihood that the DNN will converge to a global minimum of a cost function of the DNN. After determining a plurality of correctly classified examples of a pre-trained DNN, the trainer extracts from the pre-trained DNN a feature sub-network that includes an input layer of the DNN and one or more subsequent sparsely-connected layers of the DNN. The trainer averages output signals from the sub-network to form an average representation of each class identifiable by the DNN. The trainer relabels each correctly classified example with the appropriate average representation, and then trains the feature sub-network with the relabeled examples. In one demonstration, the feature sub-network trainer improved classification accuracy of a seven-layer convolutional neural network, trained with two thousand examples, from 75% to 83% by reusing the training examples.
Type: Grant
Filed: May 21, 2019
Date of Patent: November 14, 2023
Assignee: Pattern Computer, Inc.
Inventor: Irshad Mohammed
-
Patent number: 11816532
Abstract: Methods for receiving a request to process, on a hardware circuit, a neural network comprising a first convolutional neural network layer having a stride greater than one, and in response, generating instructions that cause the hardware circuit to, during processing of an input tensor, generate a layer output tensor equivalent to an output of the first convolutional neural network layer by: processing the input tensor using a second convolutional neural network layer having a stride equal to one but that is otherwise equivalent to the first convolutional neural network layer to generate a first tensor; zeroing out elements of the first tensor that would not have been generated if the second convolutional neural network layer had the stride of the first convolutional neural network layer to generate a second tensor; and performing max pooling on the second tensor to generate the layer output tensor.
Type: Grant
Filed: July 6, 2020
Date of Patent: November 14, 2023
Assignee: Google LLC
Inventors: Reginald Clifford Young, William John Gulland
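The three-step equivalence (stride-1 convolution, zeroing, max pooling) can be checked numerically. The plain-Python sketch below is an illustration of the transformation, not the hardware instructions; it assumes non-negative convolution outputs so that the zeroed entries never win the max pool.

```python
import numpy as np

def conv2d(x, k, stride=1):
    """Valid 2-D cross-correlation with the given stride."""
    kh, kw = k.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (x[i*stride:i*stride+kh, j*stride:j*stride+kw] * k).sum()
    return out

def strided_conv_via_pooling(x, k, stride):
    """Emulate a stride-s convolution with a stride-1 convolution,
    zeroing, and max pooling."""
    first = conv2d(x, k, stride=1)          # first tensor (stride 1)
    second = np.zeros_like(first)
    second[::stride, ::stride] = first[::stride, ::stride]  # zero the rest
    oh = (second.shape[0] + stride - 1) // stride
    ow = (second.shape[1] + stride - 1) // stride
    out = np.zeros((oh, ow))
    for i in range(oh):                     # max pool picks the survivors
        for j in range(ow):
            out[i, j] = second[i*stride:(i+1)*stride,
                               j*stride:(j+1)*stride].max()
    return out
```

For non-negative inputs, `strided_conv_via_pooling(x, k, s)` matches `conv2d(x, k, stride=s)` exactly, which is the point of the claimed rewriting: a circuit that only supports stride-1 convolution plus pooling can still realize strided layers.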
-
Patent number: 11810013
Abstract: A detection modeling system has a processing device and a memory coupled to the processing device. The detection modeling system is configured to obtain health value data associated with an analytical model, determine a time period at which the model was trained based on the obtained health value data, and identify a survival time period of the model based on the determined time period at which the model was trained and a failure time period of the model. The detection modeling system is further configured to repeat these steps to determine a survival time period for a plurality of analytical models, and perform a survival analysis based on the survival time period for the plurality of analytical models.
Type: Grant
Filed: November 14, 2019
Date of Patent: November 7, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Weichen Wang, Eliza Salkeld, Shuyan Lu, Shanna Hayes
-
Patent number: 11809987
Abstract: A computer-implemented method controls input of at least a portion of a first training data set into a first machine learning algorithm. The first training data set includes data quantifying damage to a first compressor and data quantifying a first operating parameter of the first compressor. The first machine learning algorithm is executed, and data quantifying the first operating parameter is received as an output of the first machine learning algorithm. The first machine learning algorithm is trained using the received data output from the first machine learning algorithm and data quantifying the first operating parameter of the first compressor. The trained first machine learning algorithm is configured to enable determination of operability of a second compressor of a gas turbine engine.
Type: Grant
Filed: June 10, 2020
Date of Patent: November 7, 2023
Assignee: ROLLS-ROYCE plc
Inventors: Christopher R Hall, Malcolm L Hillel, Bryce D Conduit, Anthony M Dickens, James V Taylor, Robert J Miller
-
Patent number: 11803771
Abstract: In various embodiments, a task-based recommendation subsystem automatically recommends workflows for software-based tasks based on a trained machine-learning model that maps different sets of commands to different distributions of weights applied to a set of tasks. In operation, the task-based recommendation subsystem applies a first set of commands associated with a target user to the trained machine-learning model to determine a target distribution of weights applied to the set of tasks. The task-based recommendation subsystem then performs processing operation(s) based on at least two different distributions of weights applied to the set of tasks and the target distribution to determine a training item. Subsequently, the task-based recommendation subsystem generates a recommendation that specifies the training item. Finally, the task-based recommendation subsystem transmits the recommendation to a user to assist the user in performing a particular task.
Type: Grant
Filed: January 14, 2019
Date of Patent: October 31, 2023
Assignee: AUTODESK, INC.
Inventors: Tovi Grossman, Benjamin Lafreniere, Xu Wang
-
Patent number: 11797870
Abstract: Obtain, from an existing machine learning classifier, original probabilistic scores classifying samples taken from two or more groups into two or more classes via supervised machine learning. Associate the original probabilistic scores with a plurality of original Lagrange multipliers. Adjust values of the plurality of original Lagrange multipliers via low-dimensional convex optimization to obtain updated Lagrange multipliers that satisfy fairness constraints as compared to the original Lagrange multipliers. Based on the updated Lagrange multipliers, closed-form transform the original probabilistic scores into transformed probabilistic scores that satisfy the fairness constraints while minimizing loss in utility. The fairness constraints are with respect to the two or more groups.
Type: Grant
Filed: May 29, 2020
Date of Patent: October 24, 2023
Assignees: International Business Machines Corporation, President and Fellows of Harvard College
Inventors: Dennis Wei, Karthikeyan Natesan Ramamurthy, Flavio du Pin Calmon
-
Patent number: 11790211
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for adjusting neural network resource usage. One of the methods includes receiving a network input for processing by a task neural network, the task neural network comprising a plurality of neural network layers; receiving a usage input specifying a respective weight for each of one or more usage factors, wherein each usage factor impacts how many computational resources are used by the task neural network during the processing of the network input; and processing the network input using the task neural network in accordance with the usage input to generate a network output for the network input, comprising: selecting, based at least on the usage input, a proper subset of the plurality of neural network layers to be active while processing the network input, and processing the network input using only the selected neural network layers.
Type: Grant
Filed: January 30, 2018
Date of Patent: October 17, 2023
Assignee: Google LLC
Inventors: Augustus Quadrozzi Odena, John Dieterich Lawson
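A toy version of usage-conditioned layer selection might look like the following. The greedy in-network-order budget policy and the single scalar cost per layer are assumptions for illustration; the patent describes a more general weighting over usage factors.

```python
def run_with_usage(x, layers, costs, budget):
    """Select a proper subset of layers whose cumulative cost fits
    the usage budget (greedy, in network order), then process the
    input using only the selected layers."""
    active, spent = [], 0.0
    for layer, cost in zip(layers, costs):
        if spent + cost <= budget:
            active.append(layer)
            spent += cost
    for layer in active:
        x = layer(x)
    return x, len(active)
```

Raising the budget activates more layers (more computation, typically better output); lowering it trades quality for resource savings on the same network.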
-
Patent number: 11790250
Abstract: Various embodiments are generally directed to an apparatus, system, and other techniques for dynamic and intelligent deployment of a neural network or any inference model on a hardware executor or a combination of hardware executors. Computational costs for one or more operations involved in executing the neural network or inference model may be determined. Based on the computational costs, an optimal distribution of the computational workload involved in running the one or more operations among multiple hardware executors may be determined.
Type: Grant
Filed: May 9, 2019
Date of Patent: October 17, 2023
Assignee: Intel Corporation
Inventors: Padmashree Apparao, Michal Karzynski