Patents Examined by Nicholas S Wu
-
Patent number: 12361280
Abstract: To train a machine learning routine (BNN), a sequence of first training data (PIC) is read in through the machine learning routine. The machine learning routine is trained using the first training data, wherein a plurality of learning parameters (LP) of the machine learning routine is set by the training. Furthermore, a value distribution (VLP) of the learning parameters, which occurs during the training, is determined, and a continuation signal (CN) is generated on the basis of the determined value distribution of the learning parameters. Depending on the continuation signal, the training is then continued with a further sequence of the first training data, or other training data (PIC2) are requested for the training.
Type: Grant
Filed: July 29, 2019
Date of Patent: July 15, 2025
Assignee: Siemens Healthcare Diagnostics Inc.
Inventors: Markus Michael Geipel, Stefan Depeweg, Christoph Tietz, Gaby Marquardt, Daniela Seidel
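The continuation logic lends itself to a short sketch. The snippet below is a minimal illustration, assuming the value distribution is summarized by the standard deviation of the learning parameters and that a simple threshold drives the continuation signal; the function names, spread statistic, and threshold are assumptions, not the patented method.

```python
import numpy as np

def continuation_signal(learning_params, spread_threshold=1e-3):
    """Summarize the value distribution of the learning parameters (LP) and decide
    whether training should continue on the current data. The spread statistic and
    threshold are illustrative assumptions."""
    values = np.concatenate([p.ravel() for p in learning_params])
    return values.std() > spread_threshold        # True: continue with the first training data

def train(model_step, learning_params, first_data, other_data, epochs=5):
    data = first_data
    for _ in range(epochs):
        for batch in data:
            model_step(learning_params, batch)    # training step updates LP in place
        if not continuation_signal(learning_params):
            data = other_data                     # request other training data (PIC2)
    return learning_params

rng = np.random.default_rng(0)
def toy_step(lp, batch):
    lp[0] += 0.01 * rng.standard_normal(lp[0].shape)   # stand-in parameter update

train(toy_step, [rng.standard_normal((4, 4))], first_data=[[0, 1]], other_data=[[2, 3]])
```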
-
Patent number: 12354017
Abstract: Systems and methods for aligning knowledge graphs. A subgraph type is assigned to each node of a plurality of nodes in a first knowledge graph and a second knowledge graph. The assigned subgraph type for each node is determined based on the labels of a plurality of edges and/or other nodes coupled to the node in the knowledge graph. An initial matching is performed to identify a plurality of candidate node-node pairs, each including one node from each knowledge graph. A candidate node-node pair is identified as a valid node-node mapping based at least in part on a determination that the subgraph type combination of the candidate node-node pair matches a subgraph type combination of another candidate node-node pair that has previously been confirmed as a valid node-node mapping. In some implementations, a machine-learning model is trained to use the aligned knowledge graphs.
Type: Grant
Filed: March 3, 2021
Date of Patent: July 8, 2025
Assignee: Robert Bosch GmbH
Inventor: HyeongSik Kim
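The subgraph-type validation step can be sketched compactly. The example below is an illustrative assumption: subgraph types are modeled as the set of edge labels incident to a node, and a candidate pair is accepted when its type combination matches one from a previously confirmed mapping; the data layout and helper names are not from the patent.

```python
# Illustrative sketch: subgraph types as frozensets of incident edge labels;
# a candidate pair is validated when its type combination matches a combination
# already confirmed as a valid mapping.

def subgraph_type(node, edges):
    """edges: iterable of (source, label, target) triples for one knowledge graph."""
    return frozenset(label for s, label, t in edges if node in (s, t))

def align(kg1_edges, kg2_edges, candidates, confirmed):
    """candidates / confirmed: iterables of (node_in_kg1, node_in_kg2) pairs."""
    confirmed_types = {
        (subgraph_type(a, kg1_edges), subgraph_type(b, kg2_edges))
        for a, b in confirmed
    }
    return [
        (a, b) for a, b in candidates
        if (subgraph_type(a, kg1_edges), subgraph_type(b, kg2_edges)) in confirmed_types
    ]

kg1 = [("drug_x", "treats", "disease_y"), ("drug_z", "treats", "disease_w")]
kg2 = [("medA", "treats", "condB"), ("medB", "treats", "condC")]
print(align(kg1, kg2, candidates=[("drug_z", "medB")], confirmed=[("drug_x", "medA")]))
```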
-
Patent number: 12333425
Abstract: An embodiment includes extracting, responsive to an update request from a remote requesting system, technical descriptor data from a data source. The embodiment also includes forming a new graph data structure using the technical descriptor data extracted from the data source. The embodiment also includes augmenting the new graph data structure to include a concept based on a value from instance data from the data source. The embodiment also includes identifying a first pair of concepts that are connected in a pre-existing ontology and that correspond to a second pair of concepts that lack a connection therebetween in the new graph data structure. The embodiment also includes augmenting the new graph data structure to include a connection between the second pair of concepts. The embodiment also includes outputting the new graph data structure as part of a response to the update request from the requesting system.
Type: Grant
Filed: January 28, 2021
Date of Patent: June 17, 2025
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Chuan Lei, Junheng Hao, Vasilis Efthymiou, Fatma Ozcan, Abdul Quamar
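The ontology-guided augmentation step can be illustrated with a small sketch. The representation below (edges as sets of unordered concept pairs) and the function name are assumptions for illustration only.

```python
# Sketch of the augmentation step: concept pairs connected in a pre-existing
# ontology but disconnected in the newly built graph get a connection added.

def augment_with_ontology(new_graph_edges, new_graph_nodes, ontology_edges):
    """Edges are sets of frozenset({concept_a, concept_b}) pairs."""
    augmented = set(new_graph_edges)
    for pair in ontology_edges:
        a, b = tuple(pair)
        if a in new_graph_nodes and b in new_graph_nodes and pair not in augmented:
            augmented.add(pair)  # connect the second pair of concepts
    return augmented

nodes = {"customer", "order", "invoice"}
edges = {frozenset({"customer", "order"})}
ontology = {frozenset({"order", "invoice"}), frozenset({"customer", "order"})}
print(augment_with_ontology(edges, nodes, ontology))
```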
-
Patent number: 12321826
Abstract: A framework for interpreting machine learning models is proposed that utilizes interpretability methods to determine the contribution of groups of input variables to the output of the model. Input variables are grouped based on dependencies with other input variables. The groups are identified by processing a training data set with a clustering algorithm. Once the groups of input variables are defined, scores related to each group of input variables for a given instance of the input vector processed by the model are calculated according to one or more algorithms. The algorithms can utilize group Partial Dependence Plot (PDP) values, Shapley Additive Explanations (SHAP) values, and Banzhaf values, and their extensions, among others, and a score can be calculated for each group for a given instance of an input vector. These scores can then be sorted, ranked, and combined into one hybrid ranking.
Type: Grant
Filed: May 17, 2021
Date of Patent: June 3, 2025
Assignee: Discover Financial Services
Inventors: Alexey Miroshnikov, Konstandinos Kotsiopoulos, Arjun Ravi Kannan, Raghu Kulkarni, Steven Dickerson
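A minimal sketch of the group-then-rank idea follows. It uses correlation-threshold grouping and two stand-in group scores in place of the PDP/SHAP/Banzhaf values named above; the thresholds, score definitions, and ranking rule are assumptions.

```python
import numpy as np

def group_features(X, corr_threshold=0.8):
    """Group input variables whose pairwise correlation exceeds a threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    groups, assigned = [], set()
    for j in range(X.shape[1]):
        if j in assigned:
            continue
        group = [k for k in range(X.shape[1]) if k not in assigned and corr[j, k] >= corr_threshold]
        assigned.update(group)
        groups.append(group)
    return groups

def hybrid_ranking(contributions, groups):
    """contributions: per-feature attribution values for one instance (1-D array)."""
    score_a = np.array([np.abs(contributions[g]).sum() for g in groups])   # stand-in score 1
    score_b = np.array([np.abs(contributions[g]).max() for g in groups])   # stand-in score 2
    rank_a = np.argsort(np.argsort(-score_a))
    rank_b = np.argsort(np.argsort(-score_b))
    return np.argsort((rank_a + rank_b) / 2.0)   # group indices ordered best-to-worst

X = np.random.randn(200, 4)
X[:, 1] = X[:, 0] + 0.01 * np.random.randn(200)   # correlated pair -> same group
groups = group_features(X)
print(groups, hybrid_ranking(np.array([0.5, 0.4, -0.1, 0.2]), groups))
```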
-
Patent number: 12299084
Abstract: Systems and methods are provided for reusing machine learning models. For example, the applicability of prior models may be compared using one or more assessment values, including a similarity threshold and/or an accuracy threshold. The similarity threshold may identify a similarity of data between a first data set used to generate a first model and a new data set that is received by the system. When the similarity between these two data sets exceeds the threshold, the system may reuse the model with the highest similarity value. When an accuracy value of the data set does not exceed an accuracy threshold, the system may initiate a retraining process to generate a second ML model associated with the new data set.
Type: Grant
Filed: January 20, 2021
Date of Patent: May 13, 2025
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Chaitra Kallianpur, Kalapriya Kannan, Suparna Bhattacharya
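The reuse-or-retrain decision can be sketched in a few lines. The cosine similarity between dataset summaries, the dictionary layout, and the thresholds below are illustrative assumptions, not the patented assessment values.

```python
import numpy as np

def dataset_similarity(summary_a, summary_b):
    a, b = np.asarray(summary_a, float), np.asarray(summary_b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_model(prior_models, new_summary, similarity_threshold=0.9, accuracy_threshold=0.8):
    """prior_models: list of dicts with 'summary', 'accuracy', and 'model' entries."""
    scored = [(dataset_similarity(m["summary"], new_summary), m) for m in prior_models]
    best_sim, best = max(scored, key=lambda s: s[0])
    if best_sim >= similarity_threshold and best["accuracy"] >= accuracy_threshold:
        return best["model"]           # reuse the most similar prior model
    return None                        # caller triggers retraining on the new data set

models = [{"summary": [1.0, 0.2, 0.1], "accuracy": 0.91, "model": "model_A"},
          {"summary": [0.1, 0.9, 0.4], "accuracy": 0.85, "model": "model_B"}]
print(select_model(models, new_summary=[0.95, 0.25, 0.12]))
```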
-
Patent number: 12286115
Abstract: In various examples, a three-dimensional (3D) intersection structure may be predicted using a deep neural network (DNN) based on processing two-dimensional (2D) input data. To train the DNN to accurately predict 3D intersection structures from 2D inputs, the DNN may be trained using a first loss function that compares the 3D outputs of the DNN, after conversion to 2D space, to 2D ground truth data, and a second loss function that analyzes the 3D predictions of the DNN in view of one or more geometric constraints (e.g., geometric knowledge of intersections may be used to penalize predictions of the DNN that do not align with known intersection and/or road structure geometries). As such, live perception of an autonomous or semi-autonomous vehicle may be used by the DNN to detect 3D locations of intersection structures from 2D inputs.
Type: Grant
Filed: December 9, 2020
Date of Patent: April 29, 2025
Assignee: NVIDIA Corporation
Inventors: Trung Pham, Berta Rodriguez Hervas, Minwoo Park, David Nister, Neda Cvijetic
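A toy version of the two-term loss is sketched below. The pinhole projection and the ground-plane penalty are assumptions standing in for the patented reprojection and geometric constraints; camera intrinsics and weights are illustrative.

```python
import numpy as np

def project_to_2d(points_3d, focal=1000.0, cx=640.0, cy=360.0):
    """Project 3D predictions into image space with a simple pinhole model."""
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    return np.stack([focal * x / z + cx, focal * y / z + cy], axis=1)

def intersection_loss(pred_3d, gt_2d, ground_height=0.0, geo_weight=0.1):
    # Term 1: compare 3D outputs, after conversion to 2D space, to 2D ground truth.
    reprojection = np.mean((project_to_2d(pred_3d) - gt_2d) ** 2)
    # Term 2: geometric prior -- intersection structure should lie near the ground plane.
    geometric = np.mean((pred_3d[:, 1] - ground_height) ** 2)
    return reprojection + geo_weight * geometric

pred = np.array([[1.0, 0.05, 10.0], [-2.0, -0.1, 15.0]])
gt = project_to_2d(np.array([[1.0, 0.0, 10.0], [-2.0, 0.0, 15.0]]))
print(intersection_loss(pred, gt))
```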
-
Patent number: 12271812
Abstract: A method includes providing a neural network having a set of weights. The neural network receives an input data structure for generating a corresponding output array according to values of the set of weights. The neural network is trained to obtain a trained neural network. The training includes setting values of the set of weights with a gradient descent algorithm which exploits a cost function including a loss term and a regularization term. The trained neural network is deployed on a device through a communication network and used by the device. The regularization term is based on a rate of change of elements of the output array caused by variations of the values of the set of weights.
Type: Grant
Filed: July 18, 2019
Date of Patent: April 8, 2025
Assignee: TELECOM ITALIA S.p.A.
Inventors: Attilio Fiandrotti, Gianluca Francini, Skjalg Lepsoy, Enzo Tartaglione
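One way to picture the regularization term is a finite-difference sensitivity estimate, as in the sketch below. The tiny two-layer network, the perturbation scale, and the weighting are assumptions, not the patented formulation.

```python
import numpy as np

def forward(x, w1, w2):
    return np.maximum(0.0, x @ w1) @ w2           # tiny two-layer ReLU network

def cost(x, y, w1, w2, reg_weight=0.01, eps=1e-3, rng=np.random.default_rng(0)):
    out = forward(x, w1, w2)
    loss = np.mean((out - y) ** 2)                # loss term
    # Regularization term: rate of change of the output array under a small
    # variation of the weight values, estimated by finite differences.
    d1, d2 = eps * rng.standard_normal(w1.shape), eps * rng.standard_normal(w2.shape)
    out_perturbed = forward(x, w1 + d1, w2 + d2)
    sensitivity = np.mean(np.abs(out_perturbed - out)) / eps
    return loss + reg_weight * sensitivity

rng = np.random.default_rng(1)
x, y = rng.standard_normal((8, 4)), rng.standard_normal((8, 2))
w1, w2 = rng.standard_normal((4, 16)), rng.standard_normal((16, 2))
print(cost(x, y, w1, w2))
```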
-
Patent number: 12265891
Abstract: This application relates to apparatus and methods for training machine learning models using supervised or semi-supervised learning. In some examples, a computing device obtains training data that includes labelled and unlabelled data for training a machine learning model. The computing device applies the machine learning model to the training data to generate output data. The machine learning model executes with a plurality of coefficients applied to a plurality of hyperparameters. The computing device further applies a loss model to the training data and the output data to generate a loss value. Based on the loss value, the computing device determines updated values for the plurality of coefficients of the machine learning model. The computing device may continue to determine updated values for the plurality of coefficients until one or more conditions are satisfied. The computing device may then store the final coefficient values in a data repository.
Type: Grant
Filed: December 9, 2020
Date of Patent: April 1, 2025
Assignee: Walmart Apollo, LLC
Inventors: Sakib Abdul Mondal, Tuhin Bhattacharya, Abhijit Mondal
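The coefficient-update loop can be pictured with a small sketch. The toy linear model, the semi-supervised loss definition, the stopping rule, and the JSON file acting as the data repository are all assumptions for illustration.

```python
import json
import numpy as np

def apply_model(coefficients, features):
    return features @ coefficients                       # toy linear model

def loss_model(output, labels, mask_labelled, unlabelled_weight=0.1):
    supervised = np.mean((output[mask_labelled] - labels[mask_labelled]) ** 2)
    confidence = np.mean(output[~mask_labelled] ** 2)    # stand-in unsupervised term
    return supervised + unlabelled_weight * confidence

def train(features, labels, mask_labelled, lr=0.01, tol=1e-6, max_iter=5000):
    coeffs = np.zeros(features.shape[1])
    previous = np.inf
    for _ in range(max_iter):
        output = apply_model(coeffs, features)
        value = loss_model(output, labels, mask_labelled)
        if abs(previous - value) < tol:                  # stopping condition satisfied
            break
        grad = 2 * features[mask_labelled].T @ (output[mask_labelled] - labels[mask_labelled])
        coeffs -= lr * grad / mask_labelled.sum()        # update the coefficients
        previous = value
    with open("final_coefficients.json", "w") as fh:     # store in a data repository
        json.dump(coeffs.tolist(), fh)
    return coeffs

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(100)
labelled = rng.random(100) < 0.5
print(train(X, y, labelled))
```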
-
Patent number: 12260320
Abstract: A method is disclosed to dynamically design acceleration units for neural networks. The method comprises the steps of generating plural circuit description files through a neural network model; reading a model weight of the neural network model to determine a model data format of the neural network model; and selecting one circuit description file from the plural circuit description files according to the model data format, so that a chip is reconfigured according to the selected circuit description file to form an acceleration unit adapted to the model data format. The acceleration unit is suitable for running a data segmentation algorithm, which may accelerate the inference process of the neural network model. Through this method, the chip may be dynamically reconfigured into an efficient acceleration unit for each different model data format, thereby speeding up the inference process of the neural network model.
Type: Grant
Filed: June 30, 2021
Date of Patent: March 25, 2025
Assignee: NATIONAL TAIWAN UNIVERSITY OF SCIENCE & TECHNOLOGY
Inventors: Shun-Feng Su, Meng-Wei Chang
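The selection step reduces to a lookup keyed on the detected weight format, as the sketch below shows. The bitstream file names, the format-detection rule, and the supported formats are assumptions.

```python
import numpy as np

# Hypothetical mapping from model data format to a pre-generated circuit
# description (bitstream) file used to reconfigure the chip.
CIRCUIT_DESCRIPTIONS = {
    "int8": "accel_int8.bit",
    "float16": "accel_fp16.bit",
    "float32": "accel_fp32.bit",
}

def detect_model_data_format(weights):
    """Read the model weight to determine the model data format."""
    return str(np.asarray(weights).dtype)

def select_circuit_description(weights):
    fmt = detect_model_data_format(weights)
    try:
        return CIRCUIT_DESCRIPTIONS[fmt]
    except KeyError:
        raise ValueError(f"no acceleration unit available for format {fmt!r}")

print(select_circuit_description(np.zeros((16, 16), dtype=np.float16)))  # accel_fp16.bit
```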
-
Patent number: 12260313
Abstract: Apparatuses and methods can be related to implementing bypass paths in an ANN. The bypass path can be used to bypass a portion of the ANN such that the ANN generates an output with a particular level of confidence while utilizing fewer resources than if the portion of the ANN had not been bypassed.
Type: Grant
Filed: November 18, 2020
Date of Patent: March 25, 2025
Assignee: Micron Technology, Inc.
Inventors: Saideep Tiku, Poorna Kale
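The bypass idea resembles an early-exit forward pass, sketched below under assumptions: a softmax confidence at an intermediate head and a fixed threshold decide whether the remaining layers are skipped.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def forward_with_bypass(x, early_layers, exit_head, late_layers, final_head, threshold=0.9):
    h = x
    for w in early_layers:
        h = np.maximum(0.0, h @ w)
    early_probs = softmax(h @ exit_head)
    if early_probs.max() >= threshold:
        return early_probs, "bypassed"            # skip the remaining portion of the ANN
    for w in late_layers:
        h = np.maximum(0.0, h @ w)
    return softmax(h @ final_head), "full"

rng = np.random.default_rng(0)
early = [rng.standard_normal((8, 8)) for _ in range(2)]
late = [rng.standard_normal((8, 8)) for _ in range(2)]
print(forward_with_bypass(rng.standard_normal(8), early, rng.standard_normal((8, 3)),
                          late, rng.standard_normal((8, 3)))[1])
```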
-
Patent number: 12236360
Abstract: A method, a computer system, and a computer program product for shift-left topology construction are provided. Embodiments of the present invention may include collecting datasets; extracting topological entities from the datasets; correlating a plurality of data from the topological entities; mapping the topological entities; marking entry points for a plurality of subgraphs of the topological entities; and constructing a topology graph.
Type: Grant
Filed: September 17, 2020
Date of Patent: February 25, 2025
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Jinho Hwang, Larisa Shwartz, Srinivasan Parthasarathy, Qing Wang, Michael Elton Nidd, Frank Bagehorn, Jakub Krchák, Ota Sandr, Tomáš Ondrej, Michal Mýlek, Altynbek Orumbayev, Randall M George
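A compact sketch of such a construction pipeline follows. The record field names and the entry-point rule (nodes with no inbound edge) are assumptions, not the patented definitions.

```python
# Sketch: extract entities from collected records, correlate them into nodes and
# edges, mark subgraph entry points, and emit a topology graph.

def construct_topology(records):
    nodes, edges = set(), set()
    for record in records:                               # extract + correlate entities
        nodes.add(record["source"])
        nodes.add(record["target"])
        edges.add((record["source"], record["target"]))
    has_inbound = {dst for _, dst in edges}
    entry_points = sorted(nodes - has_inbound)           # mark subgraph entry points
    return {"nodes": sorted(nodes), "edges": sorted(edges), "entry_points": entry_points}

records = [{"source": "load-balancer", "target": "web-app"},
           {"source": "web-app", "target": "database"}]
print(construct_topology(records))
```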
-
Patent number: 12229670
Abstract: Systems, computer-implemented methods, and computer program products that facilitate temporalizing and/or spatializing a machine learning and/or artificial intelligence network are provided. In various embodiments, a processor can combine output data from different layers of an artificial neural network trained on static image data. In various embodiments, the processor can employ the artificial neural network to infer an outcome from an image instance in a sequence of images based on combined output data from the different layers of the artificial neural network.
Type: Grant
Filed: June 25, 2021
Date of Patent: February 18, 2025
Assignee: GE PRECISION HEALTHCARE LLC
Inventors: Chandan Aladahalli, Krishna Seetharam Shriram, Vikram Melapudi
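The layer-combination step can be pictured with the sketch below, which concatenates two layer outputs of a toy network and scores each image in a sequence; the two-layer network and the linear outcome head are illustrative assumptions.

```python
import numpy as np

def layer_outputs(image_vec, w1, w2):
    h1 = np.maximum(0.0, image_vec @ w1)                 # early-layer output
    h2 = np.maximum(0.0, h1 @ w2)                        # later-layer output
    return h1, h2

def infer_for_sequence(sequence, w1, w2, outcome_head):
    scores = []
    for image_vec in sequence:                           # each image instance in the sequence
        h1, h2 = layer_outputs(image_vec, w1, w2)
        combined = np.concatenate([h1, h2])              # combine output data from different layers
        scores.append(float(combined @ outcome_head))
    return scores

rng = np.random.default_rng(0)
w1, w2 = rng.standard_normal((16, 12)), rng.standard_normal((12, 8))
head = rng.standard_normal(20)
print(infer_for_sequence([rng.standard_normal(16) for _ in range(3)], w1, w2, head))
```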
-
Patent number: 12190246
Abstract: A neural waveform distinguishment apparatus includes: a neural waveform obtainment unit that obtains multiple neural waveforms in a pre-designated manner from neural signals sensed by way of at least one electrode; a preprocessing unit that obtains multiple gradient waveforms by calculating pointwise slopes in each of the neural waveforms; a feature extraction unit comprising an encoder ensemble composed of multiple encoders, which have a pattern estimation method learned beforehand and include different numbers of hidden layers, where the feature extraction unit obtains multiple codes as multiple features extracted by the encoders respectively from the gradient waveforms and concatenates the codes extracted by the encoders respectively to extract a feature ensemble for each of the gradient waveforms; and a clustering unit that distinguishes the neural waveforms corresponding respectively to the gradient waveforms by clustering the feature ensembles extracted respectively in correspondence to the gradient waveforms.
Type: Grant
Filed: November 19, 2020
Date of Patent: January 7, 2025
Assignee: INDUSTRY-ACADEMIC COOPERATION FOUNDATION, YONSEI UNIVERSITY
Inventors: Do Sik Hwang, Jun Sik Eom, Han Byol Jang, Se Won Kim, In Yong Park
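The pipeline can be sketched end to end. Below, random-projection "encoders" of different depths stand in for the pre-trained encoders, and scikit-learn's KMeans handles the clustering; the synthetic waveforms, encoder construction, and cluster count are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def gradient_waveforms(waveforms):
    return np.gradient(waveforms, axis=1)                 # pointwise slopes

def make_encoder(rng, input_dim, hidden_dims, code_dim):
    """Stand-in encoder: a fixed random-projection network of the given depth."""
    dims = [input_dim, *hidden_dims, code_dim]
    weights = [rng.standard_normal((dims[i], dims[i + 1])) for i in range(len(dims) - 1)]
    def encode(x):
        for w in weights:
            x = np.tanh(x @ w)
        return x
    return encode

rng = np.random.default_rng(0)
t = np.linspace(0, 6, 64)
waveforms = np.concatenate([np.sin(t) + 0.05 * rng.standard_normal((20, 64)),
                            np.sign(np.sin(t)) + 0.05 * rng.standard_normal((20, 64))])
grads = gradient_waveforms(waveforms)
encoders = [make_encoder(rng, 64, [32] * depth, 8) for depth in (1, 2, 3)]   # encoder ensemble
feature_ensemble = np.concatenate([enc(grads) for enc in encoders], axis=1)  # concatenate codes
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feature_ensemble)
print(labels)
```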
-
Patent number: 12124963
Abstract: Disclosed is a disentangled personalized federated learning method via consensus representation extraction and diversity propagation, provided by embodiments of the present application. The method includes: receiving, by a current node, local consensus representation extraction models and unique representation extraction models corresponding to other nodes, respectively; extracting, by the current node, representations of the data of the current node by using the unique representation extraction models of the other nodes respectively; calculating first mutual information between the different sets of representation distributions; determining similarity of the data distributions between the nodes based on the magnitude of the first mutual information; and determining aggregation weights corresponding to the other nodes based on the first mutual information. The current node then obtains the global consensus representation aggregation model corresponding to the current node.
Type: Grant
Filed: June 1, 2024
Date of Patent: October 22, 2024
Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
Inventors: Zhenan Sun, Yunlong Wang, Zhengquan Luo, Kunbo Zhang, Qi Li, Yong He
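A rough sketch of the weighting step follows. It uses a histogram-based mutual information estimate between 1-D representations and normalizes the MI values into aggregation weights; the scalar representations, the MI estimator, and the simple weighted parameter average are loose stand-ins, not the patented procedure.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based MI estimate between two 1-D representation samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

def aggregation_weights(local_data, unique_extractors, own_extractor):
    """Weight other nodes by MI between their representations of the local data."""
    own_repr = own_extractor(local_data)
    mis = np.array([mutual_information(own_repr, ext(local_data)) for ext in unique_extractors])
    return mis / mis.sum()

def aggregate_consensus(consensus_params, weights):
    """consensus_params: list of parameter vectors, one per node."""
    return sum(w * p for w, p in zip(weights, consensus_params))

rng = np.random.default_rng(0)
data = rng.standard_normal(500)
extractors = [lambda d, a=a: np.tanh(a * d) for a in (1.0, 2.0, 0.5)]
w = aggregation_weights(data, extractors, own_extractor=lambda d: d)
print(w, aggregate_consensus([rng.standard_normal(4) for _ in extractors], w))
```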
-
Patent number: 12086695
Abstract: A system for training a multi-task model includes a processor and a memory in communication with the processor. The memory has a multi-task training module having instructions that, when executed by the processor, cause the processor to provide simulation training data having a plurality of samples to a multi-task model capable of performing at least a first task and a second task using at least one shared parameter. The training module further causes the processor to determine a first value (gradient or loss) for the first task and a second value (gradient or loss) for the second task using the simulation training data and the at least one shared parameter, determine a task-induced variance between the first value and the second value, and iteratively adjust the at least one shared parameter to reduce the task-induced variance.
Type: Grant
Filed: March 18, 2021
Date of Patent: September 10, 2024
Assignee: Toyota Research Institute, Inc.
Inventors: Dennis Park, Adrien David Gaidon
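A toy version of the variance-reduction loop is sketched below: each task's loss under a shared parameter is the per-task value, the variance across tasks is the task-induced variance, and a finite-difference update shrinks it. The two linear tasks and the update rule are assumptions for illustration.

```python
import numpy as np

def task_losses(shared_w, data):
    (x1, y1), (x2, y2) = data
    first = np.mean((x1 @ shared_w - y1) ** 2)            # first value (task 1 loss)
    second = np.mean((x2 @ shared_w - y2) ** 2)           # second value (task 2 loss)
    return first, second

def task_induced_variance(shared_w, data):
    return float(np.var(task_losses(shared_w, data)))

def reduce_variance(shared_w, data, lr=0.05, eps=1e-4, steps=200):
    w = shared_w.copy()
    for _ in range(steps):                                # iteratively adjust the shared parameter
        grad = np.zeros_like(w)
        for i in range(w.size):
            bumped = w.copy()
            bumped[i] += eps
            grad[i] = (task_induced_variance(bumped, data) - task_induced_variance(w, data)) / eps
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
data = ((rng.standard_normal((50, 3)), rng.standard_normal(50)),
        (rng.standard_normal((50, 3)), rng.standard_normal(50)))
w = reduce_variance(rng.standard_normal(3), data)
print(task_induced_variance(w, data))
```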