Patents Examined by Marshall L Werner
- Patent number: 11934950
  Abstract: An apparatus for embedding a sentence feature vector according to an embodiment includes a sentence acquisitor configured to acquire a first sentence and a second sentence, each including one or more words; a vector extractor configured to extract a first feature vector corresponding to the first sentence and a second feature vector corresponding to the second sentence by independently inputting each of the first sentence and the second sentence into a feature extraction network; and a vector compressor configured to compress the first feature vector and the second feature vector into a first compressed vector and a second compressed vector, respectively, by independently inputting each of the first feature vector and the second feature vector into a convolutional neural network (CNN)-based vector compression network.
  Type: Grant
  Filed: October 26, 2020
  Date of Patent: March 19, 2024
  Assignee: SAMSUNG SDS CO., LTD.
  Inventors: Seong Ho Joe, Young June Gwon, Seung Jai Min, Ju Dong Kim, Bong Kyu Hwang, Jae Woong Yun, Hyun Jae Lee, Hyun Jin Choi
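As an illustration of the general pattern this abstract describes (shared feature extraction applied to each sentence independently, followed by CNN-style vector compression), here is a minimal numpy sketch. The toy embedding table, the mean-pooling extractor, and the width-2 strided convolution are all illustrative stand-ins, not the patented networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shared "feature extraction network": mean of random word embeddings.
# (Hypothetical stand-in for a trained feature extractor.)
EMBED = {w: rng.standard_normal(16) for w in
         "the cat sat on a mat dog ran".split()}

def extract_features(sentence):
    """Map a sentence to a fixed-size feature vector (shared weights)."""
    vecs = [EMBED[w] for w in sentence.split() if w in EMBED]
    return np.mean(vecs, axis=0)

# Toy CNN-style "vector compression network": one width-2 convolution
# with stride 2 plus ReLU, halving the dimension.
KERNEL = rng.standard_normal(2)

def compress(feat, stride=2):
    """Compress a feature vector with a strided 1-D convolution."""
    out = np.array([KERNEL @ feat[i:i + 2]
                    for i in range(0, len(feat) - 1, stride)])
    return np.maximum(out, 0.0)   # ReLU

# Each sentence passes independently through the SAME two networks.
s1, s2 = "the cat sat on the mat", "a dog ran"
c1 = compress(extract_features(s1))
c2 = compress(extract_features(s2))
print(c1.shape, c2.shape)   # 16-dim features compressed to 8 dims each
```

Because both sentences share the extractor and compressor, the two compressed vectors live in the same space and can be compared directly (e.g., by cosine similarity) downstream.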
- Patent number: 11928602
  Abstract: Lifelong Deep Neural Network (L-DNN) technology revolutionizes Deep Learning by enabling fast, post-deployment learning without extensive training, heavy computing resources, or massive data storage. It uses a representation-rich, DNN-based subsystem (Module A) with a fast-learning subsystem (Module B) to learn new features quickly without forgetting previously learned features. Compared to a conventional DNN, L-DNN uses much less data to build robust networks, requires dramatically shorter training time, and learns on-device instead of on servers. It can add new knowledge without re-training or storing data. As a result, an edge device with L-DNN can learn continuously after deployment, eliminating massive costs in data collection and annotation, memory and data storage, and compute power. This fast, local, on-device learning can be used for security, supply chain monitoring, disaster and emergency response, and drone-based inspection of infrastructure and properties, among other applications.
  Type: Grant
  Filed: May 9, 2018
  Date of Patent: March 12, 2024
  Assignee: Neurala, Inc.
  Inventors: Matthew Luciw, Santiago Olivera, Anatoly Gorshechnikov, Jeremy Wurbs, Heather Marie Ames, Massimiliano Versace
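The split between a frozen, representation-rich backbone and a fast-learning head can be sketched in a few lines. In this hedged example, Module A is a fixed random projection standing in for a pretrained DNN, and Module B is a simple nearest-class-mean classifier that learns incrementally and can absorb a brand-new class from a single example without retraining or replaying stored data. This shows the broad idea only, not Neurala's patented method.

```python
import numpy as np

rng = np.random.default_rng(1)
PROJ = rng.standard_normal((8, 4))   # fixed weights of the toy backbone

def module_a(x):
    """Frozen, pretrained feature extractor (Module A) -- here a fixed
    random projection with a tanh nonlinearity."""
    return np.tanh(PROJ @ x)

class ModuleB:
    """Fast-learning head (Module B): one running-mean prototype per
    class, updated incrementally; new classes can be added on-device
    after deployment."""
    def __init__(self):
        self.protos, self.counts = {}, {}

    def learn(self, x, label):
        f = module_a(x)
        if label not in self.protos:
            self.protos[label], self.counts[label] = f.copy(), 1
        else:
            self.counts[label] += 1
            self.protos[label] += (f - self.protos[label]) / self.counts[label]

    def predict(self, x):
        f = module_a(x)
        return min(self.protos,
                   key=lambda c: np.linalg.norm(f - self.protos[c]))

b = ModuleB()
cat, dog = np.array([1., 0, 0, 0]), np.array([0, 0, 0, 1.])
for _ in range(5):
    b.learn(cat + 0.05 * rng.standard_normal(4), "cat")
    b.learn(dog + 0.05 * rng.standard_normal(4), "dog")

bird = np.array([0, 1., 0, 0])
b.learn(bird, "bird")            # one-shot addition of a new class
print(b.predict(cat), b.predict(bird))
```

Because only the lightweight prototypes change, "training" is a handful of vector updates: no gradient descent, no stored raw data, and no risk of overwriting what Module A already represents.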
- Patent number: 11928571
  Abstract: Provided is a method for training distributed machine learning models. The method may include initializing a distributed machine learning model on a plurality of computing devices. Training data associated with a plurality of samples may be received. Each sample may be forward propagated through the distributed machine learning model to generate an output. A loss for each sample of the plurality of samples may be determined based on the output. The loss for each sample may be backward propagated to each computing device. The parameter(s) of each computational node may be asynchronously updated based on the loss as it is backward propagated and/or while at least one of the samples is forward propagating. The parameter(s) may be stored and/or communicated to the other computing devices. Each of the other computing devices of the plurality of computing devices may store the parameter(s). A system and computer program product are also disclosed.
  Type: Grant
  Filed: November 17, 2020
  Date of Patent: March 12, 2024
  Assignee: Visa International Service Association
  Inventors: Shivam Mohan, Sudharshan Krishnakumar Gaddam
- Patent number: 11922316
  Abstract: A computer-implemented method includes: initializing model parameters for training a neural network; performing a forward pass and backpropagation for a first minibatch of training data; determining a new weight value for each of a plurality of nodes of the neural network using a gradient descent of the first minibatch; for each determined new weight value, determining whether to update a running mean corresponding to a weight of a particular node; based on a determination to update the running mean, calculating a new mean weight value for the particular node using the determined new weight value; updating the weight parameters for all nodes based on the calculated new mean weight values corresponding to each node; assigning the running mean as the weight for the particular node when training on the first minibatch is completed; and reinitializing running means for all nodes at a start of training a second minibatch.
  Type: Grant
  Filed: August 13, 2020
  Date of Patent: March 5, 2024
  Assignee: LG ELECTRONICS INC.
  Inventors: Samarth Tripathi, Jiayi Liu, Unmesh Kurup, Mohak Shah
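The per-minibatch running-mean scheme can be illustrated on a one-parameter model. This sketch is an illustrative reading of the claim: it always updates the running mean after every gradient step (the claim includes a per-weight decision step), assigns the mean as the weight at minibatch end, and reinitializes the mean at the start of the next minibatch.

```python
import numpy as np

def train_minibatch(w, xs, ys, lr=0.1):
    """One minibatch of SGD on the model y = w*x, maintaining a running
    mean of the weight that is assigned as the final weight when the
    minibatch completes."""
    mean_w, n = 0.0, 0                  # running mean reinitialized
    for x, y in zip(xs, ys):
        grad = 2 * (w * x - y) * x      # d/dw of (w*x - y)^2
        w = w - lr * grad               # gradient-descent step
        n += 1
        mean_w += (w - mean_w) / n      # incremental running-mean update
    return mean_w                       # running mean becomes the weight

rng = np.random.default_rng(2)
w = 0.0
for _ in range(30):                     # successive minibatches
    xs = rng.uniform(0.5, 1.5, size=16)
    ys = 3.0 * xs                       # true weight is 3.0
    w = train_minibatch(w, xs, ys)
print(round(w, 2))
```

Averaging the weight trajectory within a minibatch smooths out the noise of the individual SGD steps, which is the intuition behind assigning the running mean rather than the last raw iterate.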
- Patent number: 11899669
  Abstract: A data processing system is configured to pre-process data for a machine learning classifier. The data processing system includes an input port that receives one or more data items, an extraction engine that extracts a plurality of data signatures and structure data, a logical rule set generation engine configured to generate a data structure, select a particular data signature of the data structure, identify each instance of the particular data signature in the data structure, segment the data structure around instances of the particular data signature, identify one or more sequences of data signatures connected to the particular data signature, and generate a logical ruleset. A classification engine executes one or more classifiers against the logical ruleset to classify the one or more data items received by the input port.
  Type: Grant
  Filed: March 20, 2018
  Date of Patent: February 13, 2024
  Assignee: Carnegie Mellon University
  Inventors: Jonathan Cagan, Phil LeDuc, Mark Whiting
- Patent number: 11869237
  Abstract: An autonomous personal companion utilizing a method of object identification that relies on a hierarchy of object classifiers for categorizing one or more objects in a scene. The classifier hierarchy is composed of a set of root classifiers trained to recognize objects based on separate generic classes. Each root acts as the parent of a tree of child nodes, where each child node contains a more specific variant of its parent object classifier. The method covers walking the tree in order to classify an object based on more and more specific object features. The system further comprises an algorithm designed to minimize the number of object comparisons while allowing the system to concurrently categorize multiple objects in a scene.
  Type: Grant
  Filed: September 29, 2017
  Date of Patent: January 9, 2024
  Assignee: Sony Interactive Entertainment Inc.
  Inventors: Sergey Bashkirov, Michael Taylor, Javier Fernandez-Rico
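The tree walk from generic to specific classifiers is a simple recursive descent. In this hedged sketch, trained classifiers are replaced by toy predicate functions, and the hierarchy ("vehicle" → "car" → "taxi") is invented for illustration; it shows only the walking scheme, not the patented comparison-minimizing algorithm.

```python
def classify(obj, node):
    """Walk a classifier hierarchy: descend into the first child whose
    (more specific) classifier accepts the object; stop at the most
    specific matching node and return its label."""
    for child in node.get("children", []):
        if child["match"](obj):
            return classify(obj, child)
    return node["label"]

# Toy hierarchy: a generic "vehicle" root with increasingly specific
# children. Real systems would use trained classifiers, not predicates.
tree = {"label": "vehicle", "match": lambda o: True, "children": [
    {"label": "car", "match": lambda o: o.get("wheels") == 4, "children": [
        {"label": "taxi", "match": lambda o: o.get("has_meter", False),
         "children": []},
    ]},
    {"label": "motorcycle", "match": lambda o: o.get("wheels") == 2,
     "children": []},
]}

print(classify({"wheels": 4, "has_meter": True}, tree))
print(classify({"wheels": 4}, tree))
print(classify({"wheels": 2}, tree))
```

Each object only triggers the classifiers along one root-to-leaf path, so the number of comparisons grows with the tree depth rather than with the total number of classifiers.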
- Patent number: 11869664
  Abstract: Embodiments of the present systems and methods may provide techniques to predict the success or failure of a drug used for disease treatment. For example, a method of determining drug efficacy may include, for a plurality of patients, generating a directed acyclic graph from health related information of each patient comprising nodes representing a medical event of the patient, at least one first edge connecting the first node to an additional node, each additional edge connecting nodes representing two consecutive medical events, the edge having a weight based on a time difference between the two consecutive medical events, capturing a plurality of features from each directed acyclic graph, generating a binary graph classification model on captured features of each directed acyclic graph, determining a probability that a drug or treatment will be effective using the binary graph classification model, and determining a drug to be prescribed to a patient based on the determined probability.
  Type: Grant
  Filed: June 29, 2022
  Date of Patent: January 9, 2024
  Assignee: Georgetown University
  Inventors: Ophir Frieder, Hao-Ren Yao, Der-Chen Chang
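The graph construction step follows directly from the abstract: one node per medical event, edges between consecutive events, edge weight equal to the time gap. The sketch below builds such a per-patient DAG with the standard library and captures a few toy features; the patient data and the specific features are illustrative, and the binary graph classifier itself is omitted.

```python
from datetime import date

def build_event_dag(events):
    """Build a per-patient DAG: one node per (event name, date) pair,
    with an edge between consecutive events weighted by the time gap
    in days."""
    events = sorted(events, key=lambda e: e[1])
    nodes = [name for name, _ in events]
    edges = [(a, b, (db - da).days)
             for (a, da), (b, db) in zip(events, events[1:])]
    return nodes, edges

def capture_features(nodes, edges):
    """Toy feature capture: event count, mean gap, and longest gap."""
    gaps = [w for _, _, w in edges] or [0]
    return {"n_events": len(nodes),
            "mean_gap": sum(gaps) / len(gaps),
            "max_gap": max(gaps)}

# Hypothetical patient timeline for illustration.
patient = [("diagnosis", date(2021, 1, 5)),
           ("drug_A_prescribed", date(2021, 1, 12)),
           ("follow_up", date(2021, 2, 12))]
nodes, edges = build_event_dag(patient)
feats = capture_features(nodes, edges)
print(edges)
print(feats)
```

Weighting edges by elapsed time lets the downstream classifier distinguish, say, a follow-up one week after prescription from one six months later, which is exactly the temporal signal the abstract is after.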
- Patent number: 11847561
  Abstract: Computer-implemented techniques can include obtaining, by a client computing device, a digital media item and a request for a processing task on the digital media item and determining a set of operating parameters based on (i) available computing resources at the client computing device and (ii) a condition of a network. Based on the set of operating parameters, the client computing device or a server computing device can select one of a plurality of artificial neural networks (ANNs), each ANN defining which portions of the processing task are to be performed by the client and server computing devices. The client and server computing devices can coordinate processing of the processing task according to the selected ANN. The client computing device can also obtain final processing results corresponding to a final evaluation of the processing task and generate an output based on the final processing results.
  Type: Grant
  Filed: November 25, 2020
  Date of Patent: December 19, 2023
  Assignee: GOOGLE LLC
  Inventors: Matthew Sharifi, Jakob Nicolaus Foerster
- Patent number: 11836643
  Abstract: A method for performing federated learning includes initializing, by a server, a global model G0. The server shares G0 with a plurality of participants (N) using a secure communications channel. The server selects n out of N participants, according to filtering criteria, to contribute training for a round r. The server partitions the selected participants n into s groups and informs each participant about the other participants belonging to the same group. The server obtains aggregated group updates AU1, ..., AUg from each group and compares the aggregated group updates to identify suspicious aggregated group updates. The server combines the aggregated group updates by excluding the updates identified as suspicious, to obtain an aggregated update Ufinal. The server derives a new global model Gr from the previous model Gr-1 and the aggregated update Ufinal and shares Gr with the plurality of participants.
  Type: Grant
  Filed: March 8, 2019
  Date of Patent: December 5, 2023
  Assignee: NEC CORPORATION
  Inventors: Kumar Sharad, Ghassan Karame, Giorgia Azzurra Marson
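One round of this group-then-filter aggregation can be sketched with numpy. The median-distance test and the threshold `tau` below are illustrative choices for "identifying suspicious aggregates"; the patent does not specify these particular criteria, and real participant updates would be model weight deltas rather than toy vectors.

```python
import numpy as np

def aggregate_round(global_model, participant_updates, n_groups=3, tau=3.0):
    """One federated round in the spirit of the scheme above: partition
    updates into groups, aggregate each group, flag group aggregates far
    from the coordinatewise median as suspicious, and combine the rest."""
    rng = np.random.default_rng(0)
    idx = rng.permutation(len(participant_updates))
    groups = np.array_split(idx, n_groups)
    group_aggs = [np.mean([participant_updates[i] for i in g], axis=0)
                  for g in groups]
    med = np.median(group_aggs, axis=0)
    dists = [np.linalg.norm(a - med) for a in group_aggs]
    cutoff = tau * (np.median(dists) + 1e-12)
    kept = [a for a, d in zip(group_aggs, dists) if d <= cutoff]
    u_final = np.mean(kept, axis=0)          # aggregated update Ufinal
    return global_model + u_final, len(group_aggs) - len(kept)

# Nine honest participants push the model toward +1; one attacker sends
# a wildly inflated malicious update.
honest = [np.full(4, 1.0)
          + 0.01 * np.random.default_rng(i).standard_normal(4)
          for i in range(9)]
attack = [np.full(4, 100.0)]
new_model, n_suspicious = aggregate_round(np.zeros(4), honest + attack)
print(n_suspicious, np.round(new_model, 1))
```

Aggregating within groups first means the server never sees individual contributions, while comparing group aggregates still lets it discard the group whose update the attacker poisoned.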
- Patent number: 11829877
  Abstract: Orthogonal neural networks impose orthogonality on the weight matrices. They may achieve higher accuracy and avoid vanishing or exploding gradients for deep architectures. Several classical gradient descent methods have been proposed to preserve orthogonality while updating the weight matrices, but these techniques suffer from long running times and provide only approximate orthogonality. In this disclosure, we introduce a new type of neural network layer. The layer allows for gradient descent with perfect orthogonality with the same asymptotic running time as a standard layer. The layer is inspired by quantum computing and can therefore be applied on a classical computing system as well as on a quantum computing system. It may be used as a building block for quantum neural networks and fast orthogonal neural networks.
  Type: Grant
  Filed: May 26, 2022
  Date of Patent: November 28, 2023
  Assignee: QC Ware Corp.
  Inventors: Iordanis Kerenidis, Jonas Landman, Natansh Mathur
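The key idea, perfect orthogonality by construction, can be demonstrated with a pyramid of 2x2 Givens rotations, the classical analogue of the RBS gates used in quantum-inspired orthogonal layers. Since every factor is a rotation, the composed weight matrix is exactly orthogonal for any choice of angles, so gradient descent on the angles can never leave the orthogonal manifold. The brick-wall layout below is one common arrangement, shown as a sketch of the idea rather than the patented circuit.

```python
import numpy as np

def givens(n, i, j, theta):
    """n x n rotation acting only on coordinates (i, j)."""
    g = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    g[i, i] = g[j, j] = c
    g[i, j], g[j, i] = -s, s
    return g

def orthogonal_layer(n, thetas):
    """Compose a brick-wall pyramid of 2x2 rotations into an exactly
    orthogonal n x n weight matrix parameterized by angles `thetas`."""
    w = np.eye(n)
    k = 0
    for layer in range(n):
        for i in range(layer % 2, n - 1, 2):  # alternating rotation pattern
            w = givens(n, i, i + 1, thetas[k]) @ w
            k += 1
    return w

n = 4
rng = np.random.default_rng(3)
n_params = sum(len(range(layer % 2, n - 1, 2)) for layer in range(n))
W = orthogonal_layer(n, rng.uniform(0, 2 * np.pi, n_params))
print(np.round(W.T @ W, 6))   # product of rotations, so W^T W = I
```

Contrast this with projection-based methods, which take a gradient step in the full matrix space and then re-orthogonalize approximately; here orthogonality is a structural property of the parameterization, not a constraint to be enforced.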
- Patent number: 11823057
  Abstract: An intelligent control method for a dynamic neural network-based variable cycle engine is provided. By adding a grey relation analysis method-based structure adjustment algorithm to the neural network training algorithm, the neural network structure is adjusted, a dynamic neural network controller is constructed, and thus the intelligent control of the variable cycle engine is realized. A dynamic neural network is trained through the grey relation analysis method-based network structure adjustment algorithm designed by the present invention, and an intelligent controller of the dynamic neural network-based variable cycle engine is constructed. Thus, the problem of coupling between nonlinear multiple variables caused by the increase of control variables of the variable cycle engine and the problem that the traditional control method relies too much on model accuracy are effectively solved.
  Type: Grant
  Filed: February 28, 2020
  Date of Patent: November 21, 2023
  Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
  Inventors: Yanhua Ma, Xian Du, Ximing Sun, Weiguo Xia
- Patent number: 11816581
  Abstract: A fast neural transition-based parser. The fast neural transition-based parser includes a decision tree-based classifier and a state vector control loss function. The decision tree-based classifier is dynamically used to replace a multilayer perceptron in the fast neural transition-based parser, and the decision tree-based classifier increases speed of neural transition-based parsing. The state vector control loss function trains the fast neural transition-based parser, the state vector control loss function builds a vector space favorable for building a decision tree that is used for the decision tree-based classifier in the neural transition-based parser, and the state vector control loss function maintains accuracy of neural transition-based parsing while the decision tree-based classifier is used to increase the speed of the neural transition-based parsing.
  Type: Grant
  Filed: September 8, 2020
  Date of Patent: November 14, 2023
  Assignee: International Business Machines Corporation
  Inventors: Ryosuke Kohita, Daisuke Takuma
- Patent number: 11816594
  Abstract: Techniques for facilitating utilizing a quantum computing circuit in conjunction with a stochastic control problem are provided. In one embodiment, a system is provided that comprises a quantum computing circuit that prepares a quantum state that represents a stochastic control problem. The system can further comprise a classical computing device that determines parameters for the quantum computing circuit.
  Type: Grant
  Filed: September 24, 2018
  Date of Patent: November 14, 2023
  Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
  Inventors: Stefan Woerner, Panagiotis Barkoutsos, Ivano Tavernelli
- Patent number: 11816575
  Abstract: Deep learning training service framework mechanisms are provided. The mechanisms receive encrypted training datasets for training a deep learning model, execute a FrontNet subnet model of the deep learning model in a trusted execution environment, and execute a BackNet subnet model of the deep learning model external to the trusted execution environment. The mechanisms decrypt, within the trusted execution environment, the encrypted training datasets and train the FrontNet subnet model and BackNet subnet model of the deep learning model based on the decrypted training datasets. The FrontNet subnet model is trained within the trusted execution environment and provides intermediate representations to the BackNet subnet model which is trained external to the trusted execution environment using the intermediate representations. The mechanisms release a trained deep learning model, comprising a trained FrontNet subnet model and a trained BackNet subnet model, to one or more client computing devices.
  Type: Grant
  Filed: September 7, 2018
  Date of Patent: November 14, 2023
  Inventors: Zhongshu Gu, Heqing Huang, Jialong Zhang, Dong Su, Dimitrios Pendarakis, Ian M. Molloy
- Patent number: 11816552
  Abstract: Methods and systems for neural network processing include configuring a physical network topology for a network that includes hardware nodes in accordance with a neural network topology, one of which is designated as a master node with any other nodes in the network being designated as slave nodes. One or more virtual neurons are configured at each of the hardware nodes by the master node to create a neural network having the neural network topology. Each virtual neuron has a neuron function and logical network connection information that establishes weighted connections between different virtual neurons. A neural network processing function is executed using the neural network.
  Type: Grant
  Filed: October 26, 2017
  Date of Patent: November 14, 2023
  Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
  Inventor: Yasunao Katayama
- Patent number: 11816538
  Abstract: This disclosure relates to methods of constructing efficient quantum circuits for Clifford loaders and variations of these methods following a similar scheme.
  Type: Grant
  Filed: April 29, 2021
  Date of Patent: November 14, 2023
  Assignee: QC Ware Corp.
  Inventors: Anupam Prakash, Iordanis Kerenidis
- Patent number: 11816562
  Abstract: A digital experience enhancement system includes an ensemble deep learning model that includes an estimator ensemble and a neural network. The ensemble deep learning model is trained to generate a digital experience enhancement recommendation from an enhancement request. The ensemble deep learning model receives the enhancement request, which is input to the estimator ensemble. The estimator ensemble uses various different machine learning systems to generate estimator output values. The neural network uses the estimator output values from the estimator ensemble to generate a digital experience enhancement recommendation. The digital experience enhancement system then uses this digital experience enhancement recommendation to enhance the digital experience.
  Type: Grant
  Filed: April 4, 2019
  Date of Patent: November 14, 2023
  Assignee: Adobe Inc.
  Inventors: Michael Craig Burkhart, Kourosh Modarresi
- Patent number: 11816549
  Abstract: Systems, computer-implemented methods, and computer program products to facilitate gradient weight compression are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a pointer component that can identify one or more compressed gradient weights not present in a first concatenated compressed gradient weight. The computer executable components can further comprise a compression component that can compute a second concatenated compressed gradient weight based on the one or more compressed gradient weights to update a weight of a learning entity of a machine learning system.
  Type: Grant
  Filed: November 29, 2018
  Date of Patent: November 14, 2023
  Assignee: International Business Machines Corporation
  Inventors: Wei Zhang, Chia-Yu Chen
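As background for this abstract, gradient compression commonly means transmitting only the largest-magnitude gradient entries each round. The sketch below shows that primitive plus a "pointer"-style step that identifies which compressed coordinates were absent from the previous round's compressed gradient. This is only a loose illustration of the vocabulary; the patented concatenation and pointer mechanism is more specific than this sketch, and the gradient values are invented.

```python
import numpy as np

def topk_compress(grad, k):
    """Keep only the k largest-magnitude gradient entries, zeroing the
    rest; return the sparse gradient and the set of kept indices."""
    idx = np.argsort(np.abs(grad))[-k:]
    comp = np.zeros_like(grad)
    comp[idx] = grad[idx]
    return comp, set(idx.tolist())

# Round 1: compress a hypothetical gradient.
grad1 = np.array([0.1, -3.0, 0.2, 2.5, -0.05, 0.4])
c1, ids1 = topk_compress(grad1, 2)

# Round 2: compress the next gradient.
grad2 = np.array([1.9, -0.1, 0.2, 0.1, -2.2, 0.4])
c2, ids2 = topk_compress(grad2, 2)

# "Pointer"-style step: coordinates compressed this round that were not
# present in the previous round's compressed gradient.
new_ids = ids2 - ids1
print(sorted(ids1), sorted(ids2), sorted(new_ids))
```

Tracking which coordinates are newly compressed lets a receiver update only the weights that actually changed, rather than re-sending or re-applying the full dense gradient.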
- Patent number: 11816548
  Abstract: Embodiments of the invention are directed to a computer-implemented method of distributed learning using a fusion-based approach. The method includes determining data statistics at each system node of a plurality of system nodes, wherein each system node respectively comprises an artificial intelligence model. The method further includes determining a set of control and coordination instructions for training each artificial intelligence model at each system node of the plurality of system nodes. The method further includes directing an exchange of data between the plurality of system nodes based on the data statistics of each system node of the plurality of system nodes. The method further includes fusing trained artificial intelligence models from the plurality of system nodes into a fused artificial intelligence model, wherein the trained artificial intelligence models are trained using the set of control and coordination instructions.
  Type: Grant
  Filed: January 8, 2019
  Date of Patent: November 14, 2023
  Assignee: International Business Machines Corporation
  Inventors: Dinesh C. Verma, Supriyo Chakraborty
- Patent number: 11709854
  Abstract: A machine learning computing system for extracting structured data objects from electronic documents comprising unstructured text includes a first data repository storing a plurality of electronic documents including at least one text data object and an expert system computing device. The expert system computing device includes a processor and a non-transitory memory device storing instructions causing the expert system to receive a first data object comprising unstructured data identified from an electronic document stored in the first data repository; process a first set of rules to identify at least one key-value pair data object from the first data object; process, by an inference engine module, a second set of rules to identify at least one free text data object from the first data object; and store, in a non-transitory memory device, the at least one key-value pair data object and the at least one free text data object.
  Type: Grant
  Filed: January 2, 2018
  Date of Patent: July 25, 2023
  Assignee: Bank of America Corporation
  Inventors: Nitin Saraswat, Rishi Jhamb