Patents Examined by Lut Wong
-
Patent number: 12265978
Abstract: Disclosed is a customized product performance prediction method based on heterogeneous data difference compensation fusion. The method includes: on the basis of a deep auto-encoder, a neighborhood association method, and a similarity difference compensation method, performing difference compensation correction on a calculation simulation data set by using a historical actual measurement data set; and training a BP neural network model with the corrected calculation simulation data set to serve as the performance prediction method for a customized product.
Type: Grant
Filed: November 10, 2021
Date of Patent: April 1, 2025
Assignee: ZHEJIANG UNIVERSITY
Inventors: Lemiao Qiu, Yang Wang, Shuyou Zhang, Zili Wang, Huifang Zhou
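The compensation idea in the abstract above can be illustrated with a small sketch: each simulated label is shifted by the simulation-versus-measurement difference observed at its nearest measured neighbour before a predictor is trained on the corrected set. Everything here (the toy data, the nearest-neighbour rule, the simulation bias) is an assumption for illustration, not the patented method.

```python
import numpy as np

rng = np.random.default_rng(0)
sim_x = rng.normal(size=(50, 3))    # simulated design parameters
sim_y = sim_x.sum(axis=1) + 0.5     # simulated performance (systematically biased)
meas_x = rng.normal(size=(10, 3))   # historical measured designs
meas_y = meas_x.sum(axis=1)         # measured performance (ground truth)

def compensate(sim_x, sim_y, meas_x, meas_y):
    """Shift each simulated label by the simulation-vs-measurement
    difference observed at its nearest measured neighbour."""
    corrected = sim_y.copy()
    for i, x in enumerate(sim_x):
        j = np.argmin(np.linalg.norm(meas_x - x, axis=1))
        # difference between the measurement and what simulation predicts there
        corrected[i] += meas_y[j] - (meas_x[j].sum() + 0.5)
    return corrected

corrected_y = compensate(sim_x, sim_y, meas_x, meas_y)
```

In this toy setup the simulation bias is constant, so the correction recovers the true performance exactly; in practice the corrected set would then be used to fit the prediction model.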
-
Patent number: 12266031
Abstract: A memory controller circuit for mapping data of a convolutional neural network to a physical memory is disclosed. The memory controller circuit comprises a receiving unit to receive a selection parameter value, and a mapping unit to map pixel values of one layer of the convolutional neural network to memory words of the physical memory according to one of a plurality of mapping schemas, wherein the mapping depends on the received selection parameter value.
Type: Grant
Filed: April 28, 2021
Date of Patent: April 1, 2025
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Martino Dazzi, Pier Andrea Francese, Abu Sebastian, Evangelos Stavros Eleftheriou
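The selection-parameter idea in the abstract above can be sketched as a function that packs a layer's pixel values into fixed-width memory words under one of several schemas. The two schemas and the word size below are assumptions for illustration, not the patented schemas.

```python
WORD_SIZE = 4  # pixels per memory word (assumed)

def map_to_words(layer, schema):
    """layer: list of rows of pixel values -> list of memory words,
    packed under the schema chosen by the selection parameter."""
    if schema == 0:                      # row-major packing
        flat = [p for row in layer for p in row]
    elif schema == 1:                    # column-major packing
        flat = [row[c] for c in range(len(layer[0])) for row in layer]
    else:
        raise ValueError("unknown mapping schema")
    return [flat[i:i + WORD_SIZE] for i in range(0, len(flat), WORD_SIZE)]

layer = [[1, 2], [3, 4], [5, 6]]
print(map_to_words(layer, 0))  # [[1, 2, 3, 4], [5, 6]]
print(map_to_words(layer, 1))  # [[1, 3, 5, 2], [4, 6]]
```

The point of the selection parameter is that the same physical words hold different traversal orders, so the access pattern of the next layer can be matched without moving data twice.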
-
Patent number: 12260346
Abstract: In some embodiments, a computer-implemented method for predicting agronomic field property data for one or more agronomic fields using a trained machine learning model is disclosed. The method comprises: receiving, at an agricultural intelligence computer system, agronomic training data; training a machine learning model, at the agricultural intelligence computer system, using the agronomic training data; in response to receiving a request from a client computing device for agronomic field property data for one or more agronomic fields, automatically predicting the agronomic field property data for the one or more agronomic fields using the machine learning model configured to predict agronomic field property data; based on the agronomic field property data, automatically generating a first graphical representation; and causing the first graphical representation to be displayed on the client computing device.
Type: Grant
Filed: January 7, 2021
Date of Patent: March 25, 2025
Assignee: CLIMATE LLC
Inventors: Angeles Casas, Xiaoyuan Yang, Ho Jin Kim, Steven Ward
-
Patent number: 12260350
Abstract: The invention provides a method for constructing a target prediction model in a multicenter small-sample scenario, and a prediction method. In combination with the idea of transfer learning, the training set of a new node is predicted directly using the knowledge of a trained node, and prediction-error samples are used to reflect the difference between the new node and the trained node. This difference serves as supplementary knowledge, so that model knowledge of the new node is quickly acquired and training of each new node from scratch is avoided. Finally, parallel integration of incremental subclassifiers is implemented using a ridge regression method, so that deployment time and costs are greatly reduced. The generalization of models is ensured through sharing of historical knowledge and a knowledge discarding mechanism, and a good classification effect can also be achieved for a node with a small sample size.
Type: Grant
Filed: January 13, 2024
Date of Patent: March 25, 2025
Assignee: JIANGNAN UNIVERSITY
Inventors: Pengjiang Qian, Zhihuang Wang, Shitong Wang, Yizhang Jiang, Wei Fang, Chao Fan, Jian Yao, Xin Zhang, Aiguo Chen, Yi Gu
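The ridge-regression integration step mentioned in the abstract above can be sketched with its standard closed form: the outputs of several subclassifiers become features, and ridge regression learns combination weights for them. The subclassifier scores, labels, and regularization strength below are assumptions for illustration.

```python
import numpy as np

F = np.array([[0.9, 0.2],     # rows: samples; columns: subclassifier scores
              [0.8, 0.4],
              [0.1, 0.9],
              [0.2, 0.7]])
y = np.array([1.0, 1.0, 0.0, 0.0])   # target labels
lam = 0.1                            # ridge regularization strength (assumed)

# Closed-form ridge solution: (F^T F + lam I)^-1 F^T y
weights = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y)
fused = F @ weights                  # integrated prediction per sample
print((fused > 0.5).astype(int))
```

Because the combination is a single linear solve, new subclassifiers can be appended as extra columns of `F` and re-integrated cheaply, which matches the abstract's claim of reduced deployment time.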
-
Patent number: 12242950
Abstract: A neural network circuit that can be embedded in a device such as an IoT device and that provides high performance. The neural network circuit includes: a first memory unit that stores input data; a convolution operation circuit that performs a convolution operation on a weight and the input data stored in the first memory unit; a second memory unit that stores convolution operation output data from the convolution operation circuit; and a quantization operation circuit that performs a quantization operation on the convolution operation output data stored in the second memory unit. The first memory unit stores the quantization operation output data from the quantization operation circuit, and the convolution operation circuit performs the convolution operation on the quantization operation output data stored in the first memory unit as the input data.
Type: Grant
Filed: April 12, 2021
Date of Patent: March 4, 2025
Assignee: LeapMind Inc.
Inventors: Koumei Tomida, Nikolay Nez
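The looped dataflow in the abstract above can be sketched in software: convolution output lands in a second memory, is quantized back into the first memory, and feeds the next convolution. The 1-D convolution, the uniform quantizer, and the kernels below are stand-ins, not the patented circuit.

```python
def convolve(x, w):                  # 1-D valid convolution (stand-in operator)
    return [sum(x[i + j] * w[j] for j in range(len(w)))
            for i in range(len(x) - len(w) + 1)]

def quantize(x, step=2):             # coarse uniform quantization (assumed)
    return [round(v / step) * step for v in x]

mem1 = [1, 2, 3, 4, 5]               # first memory: input / quantized data
weights = [[1, 1], [1, -1]]          # one kernel per layer (assumed)
for w in weights:
    mem2 = convolve(mem1, w)         # second memory: raw convolution output
    mem1 = quantize(mem2)            # quantized result returns to memory 1

print(mem1)
```

The ping-pong between the two memories is what lets the circuit keep only quantized (small) data in the first memory while the wider convolution results stay confined to the second.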
-
Patent number: 12236363
Abstract: Described herein are systems and methods for providing a natural language comprehension system that employs a two-stage process for machine comprehension of text. The first stage indicates words in one or more text passages that potentially answer a question, and outputs a set of candidate answers for the question along with a first probability of correctness for each candidate answer. The second stage forms one or more hypotheses by inserting each candidate answer into the question and determines whether a semantic relationship exists between each hypothesis and each sentence in the text. The second stage generates a second probability of correctness for each candidate answer and combines the first probability with the second probability to produce a score that is used to rank the candidate answers. The candidate answer with the highest score is selected as the predicted answer.
Type: Grant
Filed: June 24, 2022
Date of Patent: February 25, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventors: Adam Trischler, Philip Bachman, Xingdi Yuan, Alessandro Sordoni, Zheng Ye
-
Patent number: 12236337
Abstract: Methods and systems are described for compressing a neural network (NN) that performs an inference task and for performing the computations of a Kronecker layer of a Kronecker NN. Data samples are obtained from a training dataset. The input data of the data samples are fed into the trained NN to generate NN predictions, and into a Kronecker NN to generate Kronecker NN predictions. Two losses are computed: a knowledge distillation loss, based on outputs generated by a layer of the NN and the corresponding Kronecker layer of the Kronecker NN; and a loss for the Kronecker layer, based on the Kronecker NN predictions and the ground-truth labels of the data samples. The two losses are combined into a total loss, which is propagated through the Kronecker NN to adjust the values of its learnable parameters.
Type: Grant
Filed: May 17, 2021
Date of Patent: February 25, 2025
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Marziehsadat Tahaei, Ali Ghodsi, Mehdi Rezagholizadeh, Vahid Partovi Nia
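The two-loss objective in the abstract above can be sketched numerically: a distillation term compares a teacher layer with the matching Kronecker layer, a task term scores the compressed model's predictions against labels, and the two are mixed into one total loss. The loss forms, the toy tensors, and the mixing weight `alpha` are assumptions for illustration.

```python
import numpy as np

def kd_loss(teacher_out, student_out):
    # knowledge-distillation term: layer-output mismatch (MSE, assumed form)
    return float(np.mean((teacher_out - student_out) ** 2))

def task_loss(probs, labels):
    # task term: cross-entropy of the compressed model's predictions
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels])))

teacher_layer = np.array([[0.2, 0.8], [0.6, 0.4]])   # teacher layer output
kron_layer = np.array([[0.1, 0.9], [0.5, 0.5]])      # Kronecker layer output
kron_probs = np.array([[0.7, 0.3], [0.2, 0.8]])      # Kronecker NN predictions
labels = np.array([0, 1])                            # ground-truth labels

alpha = 0.5                                          # assumed mixing weight
total = alpha * kd_loss(teacher_layer, kron_layer) \
        + (1 - alpha) * task_loss(kron_probs, labels)
print(round(total, 4))
```

In training, `total` would be backpropagated through the Kronecker NN only; the teacher's parameters stay frozen.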
-
Patent number: 12229667
Abstract: A method and an apparatus for generating a shared encoder are provided, belonging to the fields of computer technology and deep learning. The method includes: a master node sending a shared encoder training instruction to child nodes, so that each child node obtains training samples based on the type of target shared encoder included in the training instruction; sending an initial parameter set of the target shared encoder to be trained to each child node after obtaining a confirmation message returned by each child node; obtaining an updated parameter set of the target shared encoder returned by each child node; and determining a target parameter set corresponding to the target shared encoder based on a first preset rule and the updated parameter sets returned by the child nodes.
Type: Grant
Filed: March 23, 2021
Date of Patent: February 18, 2025
Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD
Inventors: Daxiang Dong, Wenhui Zhang, Zhihua Wu, Dianhai Yu, Yanjun Ma, Haifeng Wang
-
Patent number: 12222849
Abstract: Apparatus and methods for refactoring infrastructure. The methods may include: (a) defining parameters of an application landscape; (b) stress-testing an application in a simulated environment based on the parameters and a simulated input to the application; (c) identifying a state of stress of the application based on output of the stress test; (d) repeating (b)-(c) with a different simulated input until the state of stress satisfies a predetermined stochastic threshold; (e) providing the state of stress to an upside-down reinforcement learning ("UDRL") engine; (f) comparing a throughput corresponding to the state of stress to a benchmark throughput; (g) redefining the parameters; and (h) repeating (a)-(f) until a threshold proximity to the benchmark throughput is reached.
Type: Grant
Filed: May 3, 2021
Date of Patent: February 11, 2025
Assignee: Bank of America Corporation
Inventors: Madhu Sudhanan Krishnamoorthy, Sreeram Raghavan, Rajarajan Pandiyan
-
Patent number: 12223401
Abstract: A method is provided for integrating machine learning (ML) models that impact different factor groups to generate a dynamic recommendation that collectively optimizes a parameter. The method includes: (i) processing specification information and operational data associated with a demand management service obtained from client devices (116A-N); (ii) training the ML models with the processed specification information and operational data to obtain a trained ML model that includes an anticipation ML model, which optimizes a demand parameter, or a recommendation ML model, which generates a recommendation for optimizing a factor group; (iii) integrating the trained ML model with the ML models by setting an output of a first ML model as a feature of a second ML model; and (iv) determining the demand for a product using the trained ML models and quantifying probabilistic values that signify the prediction of the demand.
Type: Grant
Filed: January 20, 2021
Date of Patent: February 11, 2025
Assignee: SAMYA.AI INC.
Inventor: Deepinder Dhingra
-
Patent number: 12210953
Abstract: A data processing system receives a graph that includes a sequence of layers and executes graph cuts between a preceding layer in the graph and a succeeding layer that follows it, where the preceding layer generates a set of tiles on a tile-by-tile basis and the succeeding layer processes a tensor that includes multiple tiles in the set. The graph is thus partitioned into a sequence of subgraphs, each subgraph including a sub-sequence of layers in the sequence of layers. One or more configuration files are generated to configure runtime logic to execute the sequence of subgraphs, and the configuration files are stored on computer-readable media.
Type: Grant
Filed: March 4, 2022
Date of Patent: January 28, 2025
Assignee: SambaNova Systems, Inc.
Inventors: Tejas Nagendra Babu Nama, Ruddhi Chaphekar, Ram Sivaramakrishnan, Raghu Prabhakar, Sumti Jairath, Junjue Wang, Kaizhao Liang, Adi Fuchs, Matheen Musaddiq, Arvind Krishna Sujeeth
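The partitioning step in the abstract above reduces, at its simplest, to cutting a linear sequence of layers at chosen boundaries so that each subgraph holds a contiguous sub-sequence. The layer names and cut points below are assumptions for illustration; the patent's cut placement is driven by the tile-by-tile producer/consumer analysis.

```python
def partition(layers, cuts):
    """Cut between layer i-1 and layer i for each i in cuts,
    yielding a sequence of subgraphs (contiguous layer sub-sequences)."""
    bounds = [0] + sorted(cuts) + [len(layers)]
    return [layers[a:b] for a, b in zip(bounds, bounds[1:])]

layers = ["conv1", "relu1", "conv2", "relu2", "pool"]
print(partition(layers, [2, 4]))
```

Each resulting subgraph would then get its own configuration file so the runtime can execute the subgraphs in order.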
-
Patent number: 12198029
Abstract: The present disclosure provides a joint training method and apparatus for models, a device, and a storage medium. The method may include: training a first-party model using a first sample quantity of first-party training samples to obtain first-party feature gradient information; acquiring second-party feature gradient information and second sample quantity information from a second party, where the second-party feature gradient information is obtained by the second party training a second-party model using a second sample quantity of second-party training samples; determining model joint gradient information according to the first-party feature gradient information, the second-party feature gradient information, the first sample quantity information, and the second sample quantity information; and updating the first-party model and the second-party model according to the model joint gradient information.
Type: Grant
Filed: March 23, 2021
Date of Patent: January 14, 2025
Assignee: Beijing Baidu Netcom Science and Technology Co., Ltd.
Inventors: Chuanyuan Song, Zhi Feng, Liangliang Lyu
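One natural reading of the joint-gradient step in the abstract above is a sample-count-weighted average of the two parties' gradients, applied identically on both sides. The weighted-average rule, the toy gradients, and the learning rate are assumptions for illustration, not the patented aggregation rule.

```python
def joint_gradient(grad_a, n_a, grad_b, n_b):
    """Combine two parties' feature gradients, weighted by sample counts."""
    total = n_a + n_b
    return [(n_a * ga + n_b * gb) / total for ga, gb in zip(grad_a, grad_b)]

grad_first, n_first = [0.2, -0.4], 30     # first party: gradient, sample count
grad_second, n_second = [0.6, 0.0], 10    # second party: gradient, sample count

joint = joint_gradient(grad_first, n_first, grad_second, n_second)
params = [1.0, 1.0]                       # shared model parameters (assumed)
lr = 0.1                                  # learning rate (assumed)
params = [p - lr * g for p, g in zip(params, joint)]
print(joint, params)
```

Weighting by sample quantity keeps the update equivalent to training on the pooled data, even though neither party sees the other's raw samples.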
-
Patent number: 12198065
Abstract: A system and method for designing a physical system using a genetic algorithm includes: building a plurality of data structures necessary to build, heal, and verify a plurality of dependency chains; ensuring that multiple dependencies in a respective dependency chain are represented correctly; removing any dependencies that will be trivially satisfied at random; in response to determining that one or more dependencies are consistent with another dependency, considering one or more combinations of dependencies; and building configurations that satisfy the dependencies and combinations of dependencies by associating them with selected technology options and recursively specifying and/or revising additional technology options that are consistent with the dependencies or combinations of dependencies, until a configuration is fully specified.
Type: Grant
Filed: October 9, 2019
Date of Patent: January 14, 2025
Assignee: National Technology & Engineering Solutions of Sandia, LLC
Inventors: John H. Gauthier, Matthew John Hoffman, Geoffry Scott Pankretz, Adam J. Pierson, Stephen Michael Henry, Darryl J. Melander, Lucas Waddell, John P. Eddy
-
Patent number: 12182698
Abstract: Use a computerized trained graph neural network model to classify an input instance with a predicted label. With a computerized graph neural network interpretation module, compute a gradient-based saliency matrix based on the input instance and the predicted label, by taking a partial derivative of the class prediction with respect to an adjacency matrix of the model. With a computerized user interface, obtain user input responsive to the gradient-based saliency matrix. Optionally, modify the trained graph neural network model based on the user input, and re-classify the input instance with a new predicted label based on the modified trained graph neural network model.
Type: Grant
Filed: September 30, 2020
Date of Patent: December 31, 2024
Assignees: International Business Machines Corporation, Massachusetts Institute of Technology
Inventors: Dakuo Wang, Sijia Liu, Abel Valente, Chuang Gan, Bei Chen, Dongyu Liu, Yi Sun
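The saliency computation in the abstract above can be sketched with a toy model whose gradient has a closed form. For a one-layer linear graph model with score `w^T A x` (an assumption chosen for tractability, not the patented architecture), the partial derivative of the class score with respect to the adjacency matrix is `w x^T`, so each matrix entry scores one potential edge.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])        # node features
w = np.array([0.5, -1.0, 0.25])      # class weights
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # adjacency matrix

score = w @ A @ x                    # class prediction for this instance
saliency = np.outer(w, x)            # d(score)/dA: entry (i, j) scores edge i->j
print(score)
print(saliency)
```

For a real GNN the same matrix would come from automatic differentiation rather than a closed form, but the interpretation is identical: large-magnitude entries mark the edges most responsible for the predicted label.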
-
Patent number: 12182671
Abstract: A method optimizes machine learning systems. A computing device accesses a committee of classifiers that have been trained using an initial labeled instance of data from an annotator. The initial labeled instance of data includes annotator-ranked attributes of the data, initial values of the attributes, and an initial prediction label that describes an initial predicted state based on those values. The computing device compares the attribute ranking from the annotator to the attribute rankings generated and used by each of the machine learning systems when evaluating one or more instances of unlabeled data that include the attributes, and weights the machine learning systems according to how closely each system's ranking matches the annotator's. The machine learning systems are then optimized based on this matching.
Type: Grant
Filed: January 26, 2021
Date of Patent: December 31, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Yunfeng Zhang, Qingzi Liao, Bhavya Ghai, Klaus Mueller
-
Associative relevancy knowledge profiling architecture, system, method, and computer program product
Patent number: 12182734
Abstract: Provided are architectures, systems, methods, and computer program products that give a user the ability to define an association of data and/or information from known reference sets perceived by the user as relevant to a subject matter domain, thereby imparting and formalizing some of the user's knowledge about the domain. An associative relevancy knowledge profiler may also allow a user to create a profile by modifying or restricting the known reference sets and windowing the results from the association, as a user might refine any other analysis algorithm. An associative relevancy knowledge profiler may also be used to define a user profile usable by the user and others. A user profile may be usable in various manners depending upon, for example, rights management permissions and restrictions for a user.
Type: Grant
Filed: January 20, 2022
Date of Patent: December 31, 2024
Assignee: ARAICOM RESEARCH LLC
Inventor: Anthony Prestigiacomo
-
Patent number: 12175371
Abstract: A method using the Sifr optimizer for training a neural network model having layers and parameters comprises: providing an input corresponding to each of the samples comprised in a batch from a training dataset to an input layer; obtaining outputs from the neural network model; calculating a loss function for each of the samples based on the outputs and the corresponding desired values; and determining values of the parameters that minimize the mismatch between the outputs and the corresponding desired values across the samples, based on the loss function. The determining of the parameter values comprises executing at least one of forward passes and backward passes through the neural network model, obtaining curvature data based on the executing, and obtaining a Sifr update based on the curvature data. The determining of the parameter values is based on the Sifr update.
Type: Grant
Filed: April 16, 2024
Date of Patent: December 24, 2024
Inventor: Fares Mehouachi
-
Patent number: 12175365
Abstract: According to one embodiment, a learning apparatus includes a setting unit, a training unit, and a display. The setting unit sets one or more second training conditions based on a first training condition relating to a first trained model. The training unit trains one or more neural networks in accordance with the one or more second training conditions and generates one or more second trained models which execute a task identical to the task executed by the first trained model. The display displays a graph showing the inference performance and calculation cost of each of the one or more second trained models.
Type: Grant
Filed: February 26, 2021
Date of Patent: December 24, 2024
Assignee: KABUSHIKI KAISHA TOSHIBA
Inventors: Atsushi Yaguchi, Shuhei Nitta, Yukinobu Sakata, Akiyuki Tanizawa
-
Patent number: 12169767
Abstract: Techniques for responding to a healthcare inquiry from a user are disclosed. In one particular embodiment, the techniques may be realized as a method for responding to a healthcare inquiry from a user, according to a set of instructions stored on a memory of a computing device and executed by a processor of the computing device, the method comprising the steps of: classifying an intent of the user based on the healthcare inquiry; instantiating a conversational engine based on the intent; eliciting, by the conversational engine, information from the user; and presenting one or more medical recommendations to the user based at least in part on the information.
Type: Grant
Filed: March 20, 2024
Date of Patent: December 17, 2024
Assignee: CURAI, INC.
Inventors: Anitha Kannan, Murali Ravuri, Vitor Rodrigues, Vignesh Venkataraman, Geoffrey Tso, Neal Khosla, Neil Hunt, Xavier Amatriain, Manish Chablani
-
Patent number: 12165031
Abstract: A treatment model trained to compute an estimated treatment variable value for each observation vector of a plurality of observation vectors is executed. Each observation vector includes covariate variable values, a treatment variable value, and an outcome variable value. An outcome model trained to compute an estimated outcome value for each observation vector using its treatment variable value is executed. A standard error value associated with the outcome model is computed using a first variance value computed from the treatment variable values of the observation vectors, a second variance value computed from the treatment variable values and the estimated treatment variable values, and a third variance value computed from the estimated outcome values. The standard error value is output.
Type: Grant
Filed: December 5, 2023
Date of Patent: December 10, 2024
Assignee: SAS Institute Inc.
Inventors: Sylvie Tchumtchoua Kabisa, Xilong Chen, Gunce Eryuruk Walton, David Bruce Elsheimer, Ming-Chun Chang