Patent Applications Published on September 12, 2024
-
Publication number: 20240303508Abstract: Techniques of video processing for action detection using machine learning. An action depicted in a video is identified. A type of the action is predicted based on a classification module of one or more machine learning models. A video clip depicting the action is predicted in the video. To that end, a starting point and an ending point of the video clip in the video are determined. The video clip is predicted based on a localization module of the one or more machine learning models. A refinement is performed that includes refining the type of the action based on the video clip or refining the video clip based on the type of the action. An indication of the refined type or of the refined video clip is output.Type: ApplicationFiled: March 8, 2023Publication date: September 12, 2024Inventors: Bo WU, Chuang GAN, Kaizhi QIAN, Pin-Yu CHEN
-
Publication number: 20240303509Abstract: A system and method are provided for processing game data to generate predictions for hypothetical or real future games. The method includes receiving input data comprising at least one of: i) historical data for one or more previous games, comprising box score information, or ii) play-by-play game data for one or more previous games; and transforming the input data into an abstraction space in which the abstraction space provides an abstraction comprising a numerical representation of player and/or team attributes. The method also includes mapping the numerical representation into one or more predictions of attributes of the games using at least one machine learning technique; and providing output data comprising the one or more predictions of attributes of the games.Type: ApplicationFiled: May 16, 2024Publication date: September 12, 2024Applicant: Sportlogiq Inc.Inventors: Michael John DAVIS, Juan Camilo GAMBOA HIGUERA, Oliver Norbert SCHULTE, Mehrsan JAVAN ROSHTKHARI
-
Publication number: 20240303510Abstract: Dynamic rule-based recommendations are disclosed. A rule-base, which includes rules, is received for a data-related operation. Potential targets of the data-related operation are validated against the rule-base. Targets that are eligible or validated using the rule-base may be returned and/or ranked. The data-related operation may be performed using one of the validated targets.Type: ApplicationFiled: March 6, 2023Publication date: September 12, 2024Inventors: Nicholas A. Noto, Nitin Madan, Jingwen Zhang, Sriranjani Vaidyanathan, Ishan Khaparde
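As a rough illustration of the validate-and-rank flow described in this abstract (not the patent's implementation), the sketch below checks candidate targets of a data-related operation against a rule base and ranks the eligible ones; the rules, target attributes, and scoring function are hypothetical.

```python
# Minimal sketch (assumed rules and attributes): validate candidate targets
# against a rule base, keep only the eligible ones, and rank them.
def validate_and_rank(targets, rules, score_fn):
    eligible = [t for t in targets if all(rule(t) for rule in rules)]
    return sorted(eligible, key=score_fn, reverse=True)

# Example: choose a backup target with enough free space, preferring the most free space.
rules = [lambda t: t["free_gb"] >= 100, lambda t: t["online"]]
targets = [
    {"name": "pool-a", "free_gb": 250, "online": True},
    {"name": "pool-b", "free_gb": 80, "online": True},
    {"name": "pool-c", "free_gb": 400, "online": False},
]
print(validate_and_rank(targets, rules, score_fn=lambda t: t["free_gb"]))
# only "pool-a" satisfies every rule, so it is returned (and ranked) alone
```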
-
Publication number: 20240303511Abstract: Systems and methods are provided for classifying network traffic flows across a network. Specifically, the network traffic flows are classified under a fully-segmented ruleset, wherein the fully segmented ruleset was generated by training a decision tree machine learning (“ML”) algorithm with a training dataset, and wherein each item of the training dataset satisfies the complete rule pathway to different leaf nodes of the fully segmented ruleset. Classification under a fully-segmented ruleset allows for capture of idiosyncratic patterns specific to a given malicious source of network traffic flows. Further, systems and methods are provided that allow a user to designate network traffic flows for classification at different network devices, where classification at different network devices may allow for more computationally intensive classification.Type: ApplicationFiled: March 6, 2023Publication date: September 12, 2024Inventors: MADHUSOODHANA CHARI SESHA, Ramasamy Apathotharanan, Sumangala Bannur Subraya, Madhumitha Rajamohan, Azath Abdul Samadh, Chirag Dineshkumar Shah
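A minimal sketch of how a fully-segmented ruleset could be enumerated from a trained decision tree, assuming scikit-learn and made-up flow features; it is not the inventors' code, only an illustration of complete root-to-leaf rule pathways.

```python
# Hedged sketch: enumerate every complete root-to-leaf pathway of a decision tree
# as one rule of a "fully-segmented" ruleset. Features and labels are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(200, 3)                     # e.g. packet size, duration, port entropy (assumed)
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)      # toy "malicious" label
clf = DecisionTreeClassifier(max_depth=3).fit(X, y)

def leaf_rules(tree, feature_names):
    t = tree.tree_
    rules = []
    def walk(node, path):
        if t.children_left[node] == -1:        # leaf: the full pathway becomes one rule
            rules.append((path, int(np.argmax(t.value[node]))))
            return
        name, thr = feature_names[t.feature[node]], t.threshold[node]
        walk(t.children_left[node], path + [f"{name} <= {thr:.3f}"])
        walk(t.children_right[node], path + [f"{name} > {thr:.3f}"])
    walk(0, [])
    return rules

for conditions, label in leaf_rules(clf, ["pkt_size", "duration", "port_entropy"]):
    print(" AND ".join(conditions), "->", label)
```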
-
Publication number: 20240303512Abstract: In some aspects, the techniques described herein relate to a method including: providing a plurality of inputs to a payload engine; providing a machine learning engine, wherein the machine learning engine includes a machine learning model, and wherein the machine learning model is configured to generate output based on the plurality of inputs; providing a rules engine, wherein the rules engine includes a logic tree based on the plurality of inputs; receiving, at the payload engine, a transaction and associated transaction details; generating, by the machine learning engine and based on the transaction, the associated transaction details, and the plurality of inputs, a first transaction processing parameter; generating, by the rules engine, and based on the transaction, the associated transaction details, and the plurality of inputs, a second transaction processing parameter; and combining the transaction, the first transaction processing parameter and the second transaction processing parameter into a transaction payload.Type: ApplicationFiled: March 9, 2023Publication date: September 12, 2024Inventors: Kari HYTOENEN, Ayman HAMMAD, Gudmundur GUDMUNDSSON, Taylor HAVERKAMP, Elizabeth DANIEL, Thomas C. BARTON, David COMBS
-
Publication number: 20240303513Abstract: The present disclosure relates to a method, a computer program with instructions, and a device for operating an AI module. The disclosure also relates to an AI module suitable for the method and to a locomotion means that has an AI module or a device according to the teachings herein or is configured to carry out a method according to the teachings herein for operating an AI module. In a first step, a state is determined in which a system operated by the AI module is not in use for its primary purpose. Subsequently, data stored by the AI module are processed by adding a random component. A result of the processing is evaluated and the AI module is adapted depending on the evaluation.Type: ApplicationFiled: November 25, 2021Publication date: September 12, 2024Applicant: Volkswagen AktiengesellschaftInventors: Christoph Pohling, Heinz-Dieter Lindemann
-
Publication number: 20240303514Abstract: Various embodiments of the present disclosure provide graph-based techniques for generating granular predictive classifications for entities in a predictive domain. The graph-based techniques include generating a network graph for an entity or entity class based on a plurality of interaction data objects for the entity. The network graph includes a plurality of nodes and a plurality of edges. Each node corresponds to a particular interaction code of at least one of the plurality of interaction data objects. Each edge connects a node pair that is associated with a particular interaction data object. The nodes and edges are weighted to enable the clustering of the network graph for an entity class. An entity network graph may be compared to node clusters of a class network graph to generate a behavior based predictive classification.Type: ApplicationFiled: March 8, 2023Publication date: September 12, 2024Inventors: Savindra SINGH, Neelabh MISHRA, Sanchit KUMAR, Sana ZEHRA
-
Publication number: 20240303515Abstract: A computer stores a reference corpus that consists of many reference points that each has a respective class. Later, an expected class and a subject point (i.e. instance to explain) that does not have the expected class are received. Multiple reference points that have the expected class are selected as starting points. Based on the subject point and the starting points, multiple discrete interpolated points are generated that have the expected class. Based on the subject point and the discrete interpolated points, multiple continuous interpolated points are generated that have the expected class. A counterfactual explanation of why the subject point does not have the expected class is directly generated based on continuous interpolated point(s) and, thus, indirectly generated based on the discrete interpolated points. For acceleration, neither way of interpolation (i.e. counterfactual generation) is iterative.Type: ApplicationFiled: November 17, 2023Publication date: September 12, 2024Inventors: Zahra Zohrevand, Ehsan Soltan Aghai, Yasha Pushak, Hesam Fathi Moghadam, Sungpack Hong, Hassan Chafi
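A simplified sketch of interpolation-based counterfactual search in the spirit of this abstract (not Oracle's implementation): reference points having the expected class are blended with the subject point, and the closest blend that the model assigns to the expected class is returned. A plain grid over blend weights stands in for the abstract's discrete and continuous interpolation steps.

```python
# Hedged sketch: counterfactual explanation by interpolating between the subject
# point and reference points of the expected class. Model and data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2)); y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

subject = np.array([-1.5, -1.0])               # currently predicted as class 0
expected = 1
starts = X[y == expected][:10]                 # reference points with the expected class

best, best_dist = None, np.inf
for s in starts:
    for alpha in np.linspace(0.0, 1.0, 21):    # blend weights between reference and subject
        cand = (1 - alpha) * s + alpha * subject
        if model.predict(cand[None])[0] == expected:
            d = np.linalg.norm(cand - subject)
            if d < best_dist:
                best, best_dist = cand, d
print("counterfactual:", best)
```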
-
Publication number: 20240303516Abstract: Techniques are provided for implementing a vector as a service. A first vector is generated, through a first pipeline, using a first embedding model hosted by an inference service. The first vector is assigned a first model identifier of the first embedding model, and is stored within storage. A second pipeline is constructed to utilize a second embedding model having a second model identifier. The first vector is extracted from the storage, and is used to generate an embedding storage request event to reindex and port the first vector from being embedded by the first embedding model to being embedded by the second embedding model. In this way, the second pipeline is used to execute the embedding storage request event to port the first vector into a second vector embedded by the second embedding model for storage within a vector database.Type: ApplicationFiled: May 16, 2024Publication date: September 12, 2024Inventors: Andrea BERGONZO, Yiping DENG, Srubin Sethu Madhavan
-
Publication number: 20240303517Abstract: In some examples, the designated set of resources is subsequently monitored for session activities of multiple users that are not of the first group. For each of the multiple users, the computer system utilizes one or more predictive models to determine a likelihood of the user performing a desired type of activity based on one or more session activities detected for that user.Type: ApplicationFiled: May 20, 2024Publication date: September 12, 2024Inventors: Manish Malhotra, Siddartha Sikdar
-
Publication number: 20240303518Abstract: Using a model executing on a classical processor, a set of classical features is scored. The scored set of classical features is divided into a set of feature groups, a number of classical features in a group determined according to a qubit capability of a quantum processor. Using a model executing on the quantum processor and a group of the scored set of classical features, a set of quantum features is scored. The score of a quantum feature is adjusted according to an accuracy of the quantum data model. The scored set of classical features and the scored set of quantum features are combined according to a measure of differences between the scored set of classical features and the scored set of quantum features. Using the combined set of scored features and a first set of input data of a resource, a valuation of a resource is calculated.Type: ApplicationFiled: May 20, 2024Publication date: September 12, 2024Applicant: International Business Machines CorporationInventors: Aaron K. Baughman, GURURAJA HEBBAR, Micah Forster, Kavitha Hassan Yogaraj, Yoshika Chhabra
-
Publication number: 20240303519Abstract: A device simulates an open quantum system including one or more quantum entities, each quantum entity being stabilized around a decoherence-free space. The corresponding simulation method is based on an original asymptotic development adapted to the so-called Heisenberg formulation of quantum mechanics and based on invariant operators of the local and nominal dynamics associated with each of the quantum entities. A computer-implemented method simulates an open quantum system including a plurality of quantum entities including: one or more first quantum entities each being stabilized around a decoherence-free space, and one or more second quantum entities, wherein each second quantum entity has an unstabilized component during a time period T such that a respective decoherence-free space cannot be defined for each second quantum entity during time 0<t<T.Type: ApplicationFiled: March 7, 2024Publication date: September 12, 2024Inventors: Pierre ROUCHON, François-Marie LE REGENT
-
Publication number: 20240303520Abstract: Cavity resonators are promising resources for quantum technology, while native nonlinear interactions for cavities are typically too weak to provide the level of quantum control required to deliver complex targeted operations. Here we investigate a scheme to engineer a target Hamiltonian for photonic cavities using ancilla qubits. By off-resonantly driving dispersively coupled ancilla qubits, we develop an optimized approach to engineering an arbitrary photon-number dependent (PND) Hamiltonian for the cavities while minimizing the operation errors. The engineered Hamiltonian admits various applications including canceling unwanted cavity self-Kerr interactions, creating higher-order nonlinearities for quantum simulations, and designing quantum gates resilient to noise. Our scheme can be implemented with coupled microwave cavities and transmon qubits in superconducting circuit systems.Type: ApplicationFiled: January 31, 2022Publication date: September 12, 2024Applicants: The University of Chicago, Yale UniversityInventors: Chiao-Hsuan Wang, Kyungjoo Noh, José Lebreuilly, Steven M. Girvin, Liang Jiang
-
Publication number: 20240303521Abstract: According to some embodiments, a system includes a first input coupled to a first qubit and a first switch, wherein the first switch includes a first output, a second output, and a third output. The system further includes a first single qubit measuring device coupled to the first output of the first switch and a second single qubit measuring device coupled to a first output of a second switch. The system further includes a first two qubit measuring device coupled to the second output of the first switch and a second output of the second switch and a second two qubit measuring device coupled to the third output of the first switch and a third output of the second switch.Type: ApplicationFiled: January 24, 2022Publication date: September 12, 2024Applicant: Psiquantum, Corp.Inventors: Mercedes Gimeno-Segovia, Terence Rudolph, Mihir Pant, Hector Bombin Palomo, Naomi Nickerson
-
Publication number: 20240303522Abstract: Circuits are provided that create entanglement among qubits having Gottesman-Kitaev-Preskill (GKP) encoding using photonic systems and structures. For example, networks of beam splitters and homodyne measurement circuits can be used to perform projective entangling measurements on GKP qubits from different quantum systems. In some embodiments. GKP qubits can be used to implement quantum computations using fusion-based quantum computing or other fault-tolerant quantum computing approaches.Type: ApplicationFiled: January 25, 2022Publication date: September 12, 2024Applicant: Psiquantum, Corp.Inventors: Andrew Doherty, Mercedes Gimeno-Segovia, Daniel Litinski, Naomi Nickerson, Mihir Pant, Terence Rudolph, Christopher Sparrow
-
Publication number: 20240303523Abstract: Disclosed herein are an apparatus and method for performing a fault-tolerant logical Hadamard gate operation. The apparatus is configured to perform a transversal logical Hadamard (H) operation of defining a logical quantum state and logical operators of a Hadamard-transformed logical qubit on a logical qubit of a prepared encoding flavor having an arbitrary quantum state, deform a boundary of the logical qubit while maintaining the logical quantum state using a boundary deformation technology, and perform an automatic flip of transforming a flavor of the logical qubit by flipping a rotated surface code while maintaining the logical quantum state and the definition of logical operators.Type: ApplicationFiled: November 21, 2023Publication date: September 12, 2024Applicant: Electronics and Telecommunications Research InstituteInventors: Sang-Min LEE, Young-Chul KIM, Soo-Cheol OH, Jin-Ho ON, Ki-Sung JIN, Gyu-Il CHA
-
Publication number: 20240303524Abstract: Disclosed herein are methods, systems, and devices including a tunable coupler design that harnesses interference due to higher energy levels to achieve zero static ZZ coupling between the two qubits. Biasing to zero ZZ interaction, a fast perfect entangler is realized with parametric flux modulation in less than 20 ns. The disclosed coupler provides very fast gates between far-detuned fixed frequency qubits, and is a crucial building block in scaled quantum computers.Type: ApplicationFiled: March 16, 2022Publication date: September 12, 2024Inventors: Andrew Houck, Pranav Mundada, Andrei Vrajitoarea, Sara Sussman, Alexandre Blais, Tudur-Alexandru Petrescu, Catherine Leroux, Agustin Di Paolo, Camille Le Calonnec, Charles Guinn
-
Publication number: 20240303525Abstract: According to an embodiment, a method of performing a quantum computation on a quantum system is provided. The method includes encoding a computational problem into a problem Hamiltonian of constituents of the quantum system. The method includes mapping a side condition or side conditions associated with the computational problem to an exchange Hamiltonian of a first part of the constituents of the quantum system. The method includes initializing the constituents of the quantum system in an initial state. The method includes evolving the quantum system by interactions of the constituents of the quantum system. The interactions include interactions determined by a final Hamiltonian, interactions determined by the exchange Hamiltonian, and interactions determined by a driver Hamiltonian. The final Hamiltonian is the sum of the problem Hamiltonian and of a short-range Hamiltonian. The driver Hamiltonian is a Hamiltonian of a second part of the constituents of the quantum system.Type: ApplicationFiled: January 14, 2021Publication date: September 12, 2024Inventor: Wolfgang LECHNER
-
Publication number: 20240303526Abstract: A novel and useful mechanism for iterative quantum error correction using multiple orthogonal low level decoders with machine learning assist to optimize decoder solutions for finding the optimal solution in real time for a fault tolerant quantum system. A machine learning algorithm is employed to find the optimal decoder solution to correct detected error(s) while preserving the logical state of the quantum system. The QEC mechanism addresses the disadvantages of the prior art by providing multiple error correction solutions and leveraging machine learning (ML) techniques to choose the best one to avoid the introduction of the logical error conditions and greatly increase the coverage for error correction within the system.Type: ApplicationFiled: March 8, 2024Publication date: September 12, 2024Inventor: David J. Redmond
-
Publication number: 20240303527Abstract: An information processing device configured to execute processing including: generating qubit information indicating a two-dimensional lattice in which data qubits and auxiliary qubits are alternately arranged in a row and column direction; setting, for the qubit information, error information indicating error occurrence in a first data qubit among the data qubits; setting, for the qubit information, error detection information by inverting, from an initial value, a state of an auxiliary qubit adjacent to the first data qubit in the row or column direction; setting, for the qubit information, error correction information indicating a second data qubit to be corrected by a surface code using the error detection information; and determining presence or absence of occurrence of a logical error, based on a number of data qubits each of which corresponds to any one of the first or second data qubit in one row or one column in the two-dimensional lattice.Type: ApplicationFiled: May 3, 2024Publication date: September 12, 2024Applicant: FUJITSU LIMITEDInventor: Jun FUJISAKI
-
Publication number: 20240303528Abstract: The present disclosure relates to an instruction from a base station apparatus causing a terminal device to start artificial intelligence and/or machine learning. The terminal device includes: a reception unit that receives a signal for instructing start of training; and a transmission unit that transmits training data, in which the signal for instructing the start of the training includes a training start timing and a training period, and in a case where the signal for instructing the start of the training is received, the training data is generated based on the training start timing and the training period.Type: ApplicationFiled: April 18, 2022Publication date: September 12, 2024Inventors: Taewoo LEE, Awn MUHAMMAD
-
Publication number: 20240303529Abstract: Aspects of the present disclosure provide systems, methods, and computer-readable storage media that support machine learning-based application management for enterprise systems. The aspects described herein enable resource and time-efficient scheduling of training anomaly detection models (e.g., machine learning (ML) models) corresponding to the applications based on log data generated by the applications. Aspects also provide integration of the trained anomaly detection models with an application dependency graph to enable prediction of application failures based on detected anomalies and relationships between applications determined from the application dependency graph. Further aspects leverage this integration to output reasons associated with predicted application failures and to provide recommended recovery actions to be performed to recover from the predicted application failures. Other aspects and features are also described.Type: ApplicationFiled: March 6, 2023Publication date: September 12, 2024Inventors: Parag Rane, Prasanna Srinivasa Rao, Chinmaya Pani, Brett Parenzan, Saurav Gupta
-
Publication number: 20240303530Abstract: Systems, methods, and other embodiments associated with inverse-density exemplar selection for improved multivariate anomaly detection are described. In one embodiment, a method includes determining magnitudes of vectors from a set of time series readings collected from a plurality of sensors. And, the example method includes selecting exemplar vectors from the set of time series readings to train a machine learning model to detect anomalies. The exemplar vectors are selected by repetitively (i) increasing a first density of extreme vectors that are within tails of a distribution of amplitudes for the time series readings based on the magnitudes of vectors, and (ii) decreasing a second density of non-extreme vectors that are within a head of the distribution based on the magnitudes of vectors. The repetition continues until the machine learning model generates residuals within a threshold in order to reduce false or missed detection of the extreme vectors as anomalous.Type: ApplicationFiled: March 8, 2023Publication date: September 12, 2024Inventors: Keyang RU, Guang Chao WANG, Ruixian LIU, Kenny C. GROSS
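A minimal sketch of inverse-density exemplar selection under assumed weighting factors: vector magnitudes determine which readings fall in the distribution tails, and the sampling weights boost those tails while damping the head before exemplars are drawn.

```python
# Hedged sketch: boost the sampling weight of extreme-magnitude vectors (tails)
# and damp common ones (head) when picking training exemplars. Data is synthetic,
# and the quantile cutoffs and weight factors are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
readings = rng.normal(size=(5000, 8))              # time-series vectors from 8 sensors
mags = np.linalg.norm(readings, axis=1)

lo, hi = np.quantile(mags, [0.05, 0.95])
in_tail = (mags < lo) | (mags > hi)
weights = np.where(in_tail, 10.0, 0.5)             # inverse-density style weighting
weights /= weights.sum()

idx = rng.choice(len(readings), size=500, replace=False, p=weights)
exemplars = readings[idx]
print("tail fraction among exemplars:", in_tail[idx].mean())
```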
-
Publication number: 20240303531Abstract: Techniques for adapting extraction to different documents using machine learning are provided. In one technique, multiple parsing rules are stored, each parsing rule being used to map field values in extracted text to field names associated with the parsing rule. In response to receiving a request to use a parsing rule for a particular document, a particular parsing rule is selected from among the parsing rules. Text data associated with the particular document is extracted. The particular parsing rule is used to map field values (in the extracted text data) to field names associated with the particular parsing rule. First input that selects data associated with a particular field name of the field names is received. Second input that selects a visual portion of the particular document is also received. The particular parsing rule is updated based on the first and second inputs to generate an updated parsing rule.Type: ApplicationFiled: March 8, 2023Publication date: September 12, 2024Applicant: Ricoh Company, Ltd.Inventor: Kaoru Watanabe
-
Publication number: 20240303532Abstract: A method and a system for obtaining conditional demographic parity in the construction of a data-driven model are provided. The method includes: identifying features associated with the model; determining a first joint distribution of model outputs and a feature based on a first level of a particular one of the features and a second joint distribution of model outputs and a feature based on a second level of the particular feature; computing a bi-causal transport distance between the first joint distribution and the second joint distribution; computing a regularizer based on the bi-causal transport distance; and applying the regularizer to the model.Type: ApplicationFiled: March 8, 2023Publication date: September 12, 2024Applicant: JPMorgan Chase Bank, N.A.Inventors: Luhao ZHANG, Mohsen GHASSEMI, Ivan BRUGERE, Niccolo DALMASSO, Alan MISHLER, Vamsi Krishna POTLURU, Tucker Richard BALCH, Manuela VELOSO
-
Publication number: 20240303533Abstract: A method, system, and computer program product are configured to create a tuned data record matching model by adjusting values of one or more parameters in a data record matching model based on a second training data set labeled at a data record level, wherein the data record matching model is initially trained using a first training data set labeled at an attribute level.Type: ApplicationFiled: March 10, 2023Publication date: September 12, 2024Inventors: Abhishek SETH, Devbrat SHARMA, Mahendra Singh KANYAL, Soma Shekar NAGANNA
-
Publication number: 20240303534Abstract: A method includes: determining a first sample subset in an initial sample set by an initial model based on the initial sample set, wherein the initial sample set comprises a plurality of question-answer pairs, each of the plurality of question-answer pairs comprising a question and an answer; generating a first model by training the initial model with the first sample subset; determining a second sample subset in the first sample subset by the first model based on the first sample subset; generating a second model by training the first model with the second sample subset; determining, in response to at least one of the second sample subset and the second model satisfying a corresponding predetermined condition, a third sample subset of the initial sample set by the second model based on the initial sample set; and generating a third model by training the second model with the third sample subset.Type: ApplicationFiled: April 4, 2023Publication date: September 12, 2024Inventors: Zijia Wang, Zhisong Liu, Zhen Jia
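The staged select-then-train loop can be sketched generically as below; question-answer pairs are replaced with a plain classification set for brevity, and the subset rule and stopping condition are assumptions rather than the patent's criteria.

```python
# Hedged, generic sketch: each trained stage selects the subset it is most
# confident about, and the next stage is trained on that subset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 5)); y = (X @ rng.normal(size=5) > 0).astype(int)

model = LogisticRegression().fit(X, y)            # "initial model" on the initial sample set
Xs, ys = X, y
for stage in range(3):                            # stopping condition simplified to a fixed count
    conf = model.predict_proba(Xs).max(axis=1)
    keep = conf > np.quantile(conf, 0.3)          # subset determined by the current model
    Xs, ys = Xs[keep], ys[keep]
    model = LogisticRegression().fit(Xs, ys)      # next model trained on that subset
    print(f"stage {stage}: {len(ys)} samples kept")
```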
-
Publication number: 20240303535Abstract: In some aspects, the techniques described herein relate to a method including: providing, on a model serving platform, a plurality of production machine learning models and a plurality of shadow machine learning models; routing input data to the plurality of production machine learning models and to the plurality of shadow machine learning models; receiving, at a model monitoring engine, production output data from a first production machine learning model of the plurality of production machine learning models; receiving, at the model monitoring engine, offline output data from a first shadow machine learning model of the plurality of shadow machine learning models; promoting the first shadow machine learning model to a production machine learning model based on the offline output data; and demoting the first production machine learning model based on the production output data.Type: ApplicationFiled: March 7, 2023Publication date: September 12, 2024Inventors: Anupam ARORA, Raj ARUMUGAM, John ROLLINS, Roy THARPE, Venu NELLURI
-
Publication number: 20240303536Abstract: A computer implemented method for data driven optimization. A number of processor units creates a regression model using historical data in a current neighborhood. The historical data is for a system over time. The number of processor units generates an optimization solution using the regression model created from the current neighborhood and an objective function. The number of processor units determines whether the optimization solution is within the current neighborhood. The number of processor units selects a new neighborhood containing the historical data in response to the optimization solution not being within the current neighborhood. The new neighborhood is based on the optimization solution and becomes the current neighborhood. The number of processor units repeats the creating, generating, determining, and selecting steps in response to the optimization solution not being within the current neighborhood.Type: ApplicationFiled: March 7, 2023Publication date: September 12, 2024Inventors: Dharmashankar Subramanian, Nianjun Zhou
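A toy sketch of the neighborhood loop with a synthetic objective: fit a regression model on the historical data inside the current neighborhood, optimize over the fitted model, and re-center the neighborhood whenever the optimum lands on its boundary.

```python
# Hedged sketch: local regression + optimization, repeated until the optimum
# falls inside the current neighborhood. The system response is synthetic.
import numpy as np

rng = np.random.default_rng(3)
x_hist = rng.uniform(-5, 5, size=2000)
y_hist = -(x_hist - 2.0) ** 2 + rng.normal(scale=0.1, size=2000)  # historical system data

center, radius = -4.0, 1.0
for _ in range(20):
    mask = np.abs(x_hist - center) <= radius                   # current neighborhood
    coeffs = np.polyfit(x_hist[mask], y_hist[mask], deg=2)     # regression model on that neighborhood
    grid = np.linspace(center - radius, center + radius, 201)
    x_opt = grid[np.argmax(np.polyval(coeffs, grid))]          # optimize objective over the local model
    if abs(x_opt - center) < 0.99 * radius:                    # solution inside the neighborhood: done
        break
    center = x_opt                                             # otherwise re-center and repeat
print("selected operating point:", round(float(x_opt), 2))
```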
-
Publication number: 20240303537Abstract: Methods and systems are described herein for facilitating segmentation of training data using measures of statistical dispersion (e.g., Gini impurities) of dataset features. The system determines, from a training dataset, a target feature and candidate features. The system determines, for the target feature in relation to each candidate feature, first Gini impurities. The system selects a first and second feature having the lowest first Gini impurities. The system determines, for the target feature in relation to a first combination of the first and second features, a second Gini impurity. If the second Gini impurity does not satisfy a threshold, the system selects a third feature having the next lowest first Gini impurity and determines a third Gini impurity for a second combination of the first, second, and third features. If the third Gini impurity satisfies the threshold, the system trains a model using the target, first, second, and third features.Type: ApplicationFiled: March 10, 2023Publication date: September 12, 2024Applicant: Capital One Services, LLCInventors: Ashwin Assysh SHARMA, Gunther HAVEL
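A rough sketch of the Gini-driven selection, assuming a weighted conditional Gini impurity as the measure of statistical dispersion and a synthetic dataset; the threshold value is illustrative.

```python
# Hedged sketch: rank candidate features by conditional Gini impurity of the target,
# then grow the selected combination until the combined impurity meets a threshold.
import numpy as np
import pandas as pd

def conditional_gini(df, target, features):
    def gini(s):
        p = s.value_counts(normalize=True)
        return 1.0 - (p ** 2).sum()
    groups = df.groupby(list(features))[target]
    sizes = groups.size() / len(df)
    return float((groups.apply(gini) * sizes).sum())

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "region": rng.choice(["N", "S"], 3000),
    "plan":   rng.choice(["basic", "pro"], 3000),
    "device": rng.choice(["ios", "android", "web"], 3000),
})
df["churn"] = ((df["plan"] == "basic") & (df["region"] == "S")).astype(int)  # target feature

candidates = ["region", "plan", "device"]
ranked = sorted(candidates, key=lambda f: conditional_gini(df, "churn", [f]))
chosen, threshold = ranked[:2], 0.05
while conditional_gini(df, "churn", chosen) > threshold and len(chosen) < len(ranked):
    chosen.append(ranked[len(chosen)])           # add the feature with the next-lowest impurity
print("features selected for training:", chosen)
```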
-
Publication number: 20240303538Abstract: A management node is described. A method implemented in a management node configured for training and testing one or more machine learning (ML) models. The method comprises training a first ML model using a plurality of image modalities as training data. The plurality of image modalities comprises a first image modality and a second image modality different from the first image modality. The first image modality has a first image modality parameter and the second image modality has a second image modality parameter. The method further includes modifying one or both of the first image modality parameter and the second image modality parameter, training a second ML model using the plurality of image modalities and the modified one or both of the first image modality parameter and the second image modality parameter, and testing the first ML model and the second ML model based on an accuracy threshold.Type: ApplicationFiled: March 10, 2023Publication date: September 12, 2024Inventor: Tarmily WEN
-
Publication number: 20240303539Abstract: Embodiments described herein relate to methods and apparatuses for generating one or more answers relating to a machine learning, ML, model. A method in a first node comprises obtaining one or more queries relating to a first output of the ML model, wherein the first output of the machine learning, ML, model is intended to fulfil one or more requirements in an environment; for each of the one or more queries performing a reinforcement learning process.Type: ApplicationFiled: February 19, 2021Publication date: September 12, 2024Applicant: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)Inventors: Ajay KATTEPUR, Swarup KUMAR MOHALIK, Perepu SATHEESH KUMAR
-
Publication number: 20240303540Abstract: In a prediction device, an acquisition means acquires a feature quantity related to a well of shale gas or shale oil. A prediction means calculates a predicted value of a production volume of the well or a sand return amount of the well, based on the feature quantity, using a machine learning model. An output means outputs the predicted value and a contribution degree of the feature quantity to the predicted value.Type: ApplicationFiled: March 1, 2022Publication date: September 12, 2024Applicant: NEC CorporationInventor: Aya Ogata
-
Publication number: 20240303541Abstract: In an embodiment, a computer generates, from an input, an inference that contains multiple probabilities respectively for multiple mutually exclusive classes that contain a first class and a second class. The probabilities contain (e.g. due to overfitting) a higher probability for the first class that is higher than a lower probability for the second class. In response to a threshold exceeding the higher probability, the input is automatically and more accurately classified as the second class. One, some, or almost all classes may have a respective distinct threshold that can be concurrently applied for acceleration. Data parallelism may simultaneously apply a threshold to a batch of multiple inputs for acceleration.Type: ApplicationFiled: November 1, 2023Publication date: September 12, 2024Inventors: Yasha Pushak, Ali Seyfi, Hesam Fathi Moghadam, Sungpack Hong, Hassan Chafi
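A minimal sketch of the per-class threshold override applied to a batch of inferences; the probabilities and thresholds below are illustrative values, not taken from the patent.

```python
# Hedged sketch: if the top class's probability falls below that class's threshold,
# the input is reassigned to the second-ranked class; applied to a whole batch at once.
import numpy as np

probs = np.array([
    [0.55, 0.40, 0.05],     # top class 0, but below its 0.60 threshold -> fall back to class 1
    [0.80, 0.15, 0.05],     # confidently class 0
    [0.20, 0.35, 0.45],     # top class 2, above its 0.30 threshold
])
thresholds = np.array([0.60, 0.50, 0.30])               # one distinct threshold per class (assumed)

order = np.argsort(probs, axis=1)[:, ::-1]              # classes ranked by probability
top, runner_up = order[:, 0], order[:, 1]
confident = probs[np.arange(len(probs)), top] >= thresholds[top]
labels = np.where(confident, top, runner_up)            # batch (data-parallel) application
print(labels)                                           # -> [1 0 2]
```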
-
Publication number: 20240303542Abstract: A first information processing apparatus transmits first information used for acquiring values of parameters of a machine learning model to a plurality of second information processing apparatuses. The plurality of the second information processing apparatuses acquire an evaluation value of the machine learning model when the value of the parameters acquired based on the first information is applied to the machine learning model, and transmit the evaluation value to the first information processing apparatus. The first information processing apparatus aggregates a plurality of evaluation values received from the plurality of the second information processing apparatuses, and transmits the aggregate result of the evaluation values to the plurality of the second information processing apparatuses. The first information processing apparatus and the plurality of the second information processing apparatuses update the machine learning model based on the aggregate result of the evaluation values.Type: ApplicationFiled: February 15, 2024Publication date: September 12, 2024Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHAInventor: Shiro YANO
-
Publication number: 20240303543Abstract: A model training method and a model training apparatus are provided. In the method, a pre-trained model, an old dataset, and a new dataset are obtained. The pre-trained model is a machine-learning model trained by using the old dataset. The old dataset includes a plurality of old training samples. The new dataset includes a plurality of new training samples. The training of the pre-trained model has not yet used the new dataset. The old training samples of the old dataset are reduced to generate a reduced dataset. The reduced dataset and the new dataset are used to tune the pre-trained model. Accordingly, the training efficiency of fine-tuning can be improved.Type: ApplicationFiled: November 29, 2023Publication date: September 12, 2024Applicant: PEGATRON CORPORATIONInventor: Jonathan Guo
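A hedged sketch of the tuning flow: the abstract does not say how the old training samples are reduced, so random subsampling stands in here, and scikit-learn's partial_fit stands in for fine-tuning the pre-trained model.

```python
# Hedged sketch: reduce the old dataset, merge it with the new dataset, and
# fine-tune the pre-trained model on the merged set. Data is synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(5)
X_old, y_old = rng.normal(size=(5000, 10)), rng.integers(0, 2, 5000)
X_new, y_new = rng.normal(size=(500, 10)), rng.integers(0, 2, 500)

model = SGDClassifier(random_state=0).fit(X_old, y_old)       # "pre-trained" on the old dataset

keep = rng.choice(len(X_old), size=1000, replace=False)       # reduced old dataset (assumed strategy)
X_tune = np.vstack([X_old[keep], X_new])
y_tune = np.concatenate([y_old[keep], y_new])
model.partial_fit(X_tune, y_tune)                             # fine-tune instead of retraining from scratch
```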
-
Publication number: 20240303544Abstract: A process is provided for using a graph database (e.g., SPOKE) to generate training vectors (SPOKEsigs) and train a machine learning model to classify biological entities. A cohort's input data records (EHRs) are compared to graph database nodes to identify overlapping concepts. Entry nodes (SEPs) associated with these overlapping concepts are used to generate propagated entry vectors (PSEVs) that encode the importance of each database node for a particular cohort, which helps train the model with only relevant information. Further, the propagated entry vectors for a given entity with a known classification can be aggregated to create training vectors. The training vectors are used as inputs to train a machine learning model. Biological entities with an unknown classification can be classified with a trained machine learning model. Entity signature vectors are generated for entities without a classification and input into the trained machine learning model to obtain a classification.Type: ApplicationFiled: March 29, 2022Publication date: September 12, 2024Inventors: Sergio E. Baranzini, Charlotte A. Nelson
-
Publication number: 20240303545Abstract: Provided is a learning device including a data acquisition unit that acquires a data set including a plurality of training data, a generation unit that includes a generation model that outputs pseudo data, a discrimination unit that includes a discrimination model that discriminates whether the input data is either the training data or the pseudo data according to an input of either the training data or the pseudo data, a management unit that sets a first hyperparameter to be used for updating the discrimination model based on a preset hyperparameter, and a second hyperparameter to be used for updating the generation model, and a learning processing unit that updates the discrimination model using the first hyperparameter and updates the generation model using the second hyperparameter.Type: ApplicationFiled: February 26, 2024Publication date: September 12, 2024Applicant: NEC CorporationInventors: Kenichiro FUKUSHI, Yoshitaka Nozaki, Kosuke Nishihara, Kentaro Nakahara
-
Publication number: 20240303546Abstract: A method for detecting whether a given input record of measurement data that is inputted to a trained machine learning model is in the domain and/or distribution of training examples with which the machine learning model was trained. The method includes: determining, from each training example, a training style that characterizes the domain and/or distribution to which the training example belongs; determining, from the given input record of measurement data, a test style that characterizes the domain and/or distribution to which the given record of measurement data belongs; evaluating, based on the training styles and the test style, to which extent the test style is a member of the distribution of the training styles; and based at least in part on the outcome of this evaluation, determining whether the given record of measurement data is in the domain and/or distribution of the training examples.Type: ApplicationFiled: February 27, 2024Publication date: September 12, 2024Inventors: Yumeng Li, Anna Khoreva, Dan Zhang
-
Publication number: 20240303547Abstract: A method for optimizing the detection of target cases in an imbalanced dataset, including generating a series of training datasets wherein the first training dataset includes an equal ratio of non-target cases and target cases and wherein the following training datasets of the series comprise a ratio of non-target to target cases that increases for each consecutive training dataset of the series. The method also includes training the machine learning model using the machine learning algorithm on each generated training datasets of the series of training datasets and recording the obtained performance score at each iteration, determining the maximum performance score among the recorded performance scores, determining the ratio of target to non-target cases for the determined maximum performance score, and training the machine learning model using the machine learning algorithm on a training dataset having the determined ratio of target to non-target cases to obtain an optimized model.Type: ApplicationFiled: February 28, 2024Publication date: September 12, 2024Applicant: BULL SASInventor: Dheeraj PATANKAR
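A compact sketch of the ratio sweep on a synthetic imbalanced dataset: each iteration trains at a growing non-target:target ratio and records a score, and the final model is retrained at the best-scoring ratio. F1 is used as the performance score here; the abstract does not fix a specific metric.

```python
# Hedged sketch: sweep non-target:target ratios, record performance per ratio,
# then retrain at the ratio that scored best. Data and ratios are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
X = rng.normal(size=(20000, 6)); y = (X[:, 0] + rng.normal(scale=2, size=20000) > 3).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
pos, neg = np.where(y_tr == 1)[0], np.where(y_tr == 0)[0]

scores = {}
for ratio in [1, 2, 4, 8, 16]:                                  # non-target:target ratio grows each iteration
    n_neg = min(len(neg), ratio * len(pos))
    idx = np.concatenate([pos, rng.choice(neg, n_neg, replace=False)])
    m = LogisticRegression().fit(X_tr[idx], y_tr[idx])
    scores[ratio] = f1_score(y_te, m.predict(X_te))

best = max(scores, key=scores.get)
idx = np.concatenate([pos, rng.choice(neg, min(len(neg), best * len(pos)), replace=False)])
model = LogisticRegression().fit(X_tr[idx], y_tr[idx])          # optimized model at the best ratio
print("best non-target:target ratio:", best)
```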
-
Publication number: 20240303548Abstract: A system comprising: at least one training unit for one or more data owners collaborating to the system for storing private data and at least one encrypted model, said training unit being implemented as a trusted execution environment; at least one aggregator unit for each model owner collaborating to the system for storing and executing code of a training algorithm, said aggregator unit being implemented as a trusted execution environment; at least one administration unit for controlling communication and synchronization between the at least one training unit and the at least one aggregator unit, said administration unit being implemented as a trusted execution environment; wherein the communication between the at least one training unit and the at least one aggregator unit is encrypted.Type: ApplicationFiled: March 1, 2024Publication date: September 12, 2024Inventors: Alice DETHISE, Ruichuan CHEN, Istemi Ekin AKKUS, Paarijaat ADITYA, Antti Herman KOSKELA
-
Publication number: 20240303549Abstract: A facility for automatically adapting machine learning models for operation or execution on resources is described. The facility receives an indication of a machine learning model and resource constraints for the machine learning model. The facility determines which resources should be allocated for operation of the machine learning model based on the resource constraints and an indication of two or more resources. The facility causes the determined resources to be provisioned for operation of the machine learning model.Type: ApplicationFiled: March 5, 2024Publication date: September 12, 2024Inventors: Jason Knight, Luis Ceze, Itay Neeman, Jared Roesch, Spencer Krum
-
Publication number: 20240303550Abstract: Displaying an indication of ancestral data is disclosed. An indication that a genetic interval corresponds to a reference interval that has a likelihood of having one or more ancestral origins is received. One or more graphic display parameters are determined based at least in part on the indication. An indication of the one or more ancestral origins is visually displayed using the one or more graphic display parameters.Type: ApplicationFiled: May 22, 2024Publication date: September 12, 2024Inventors: John Michael Macpherson, Brian Thomas Naughton, Joanna Louise Mountain
-
Publication number: 20240303551Abstract: The disclosed embodiments include computer-implemented apparatuses and processes that facilitate a real-time prediction of future events using trained artificial-intelligence processes and inferred ground-truth labelling in multiple data populations. For example, an apparatus may receive application data characterizing an exchange of data from a device, and based on an application of an artificial-intelligence process to an input dataset that includes at least a portion of the application data, the apparatus may generate, in real time, output data indicative of a likelihood of an occurrence of at least one targeted event associated with the data exchange during a future temporal interval. The artificial-intelligence process may trained using datasets associated with inferred ground-truth labels and multiple data populations, and the apparatus may transmit at least a portion of the output data to the device for presentation within a digital interface.Type: ApplicationFiled: April 25, 2023Publication date: September 12, 2024Inventors: He LI, Jesse Cole CRESSWELL, Jean-Christophe BOUÉTTÉ, Mahdi GHELICHI, Zhiyi CHEN, George Frazer STEIN, Peter STARSZYK, Xiaochen ZHANG
-
Publication number: 20240303552Abstract: Retraining a model to present a target explanation with a prediction responsive to a source sample. The target explanation is selected from explanations provided by at least two machine learning models. A set of candidate samples is selected from samples generated from a relationship to the source sample. The retraining is performed with the set of candidate samples in a revised training dataset, causing a model presenting another explanation to present the target explanation with the prediction responsive to the source sample.Type: ApplicationFiled: March 8, 2023Publication date: September 12, 2024Inventors: Diptikalyan Saha, Swagatam Haldar
-
Publication number: 20240303553Abstract: This application describes systems and methods for generating machine learning models (MLMs). An exemplary method includes obtaining a sample and user input data characterizing a product or service. A subset of the data is selected from the sample based on sampling the sample according to the user input data. An MLM is trained by applying the data subset as training input to the MLM, thereby providing a trained MLM to emulate a customer selection process unique to the product or service. A user interface (UI) configured to receive other user input data and cause the trained MLM to execute on the other user input data, thereby testing the trained MLM, is presented. A summary of results from the execution of the trained MLM is generated and presented in the UI. The summary of results indicates a contribution to the trained MLM of each of a plurality of features.Type: ApplicationFiled: April 29, 2024Publication date: September 12, 2024Inventors: David Sheehan, Siavash Yasani, Bingjia Wang, Yunyan Zhang, Qiumeng Yu, Ruochen Zha, Adam Kleinman, Sean Javad Kamkar, Lingzhi Du, Saar Yalov, Jerome Louis Budzik
-
Publication number: 20240303554Abstract: Embodiments herein use transfer learning paradigms to facilitate classification across entities without requiring the entities to have access to the other party's sensitive data. In one or more embodiments, one entity may train a model using its own data (which may include at least some non-shared data) and share either the scores or an intermediate representation of the scores. One or more other parties may use the scores as a feature in their own models. The scores may be considered to act as an embedding of the features but do not reveal the features. In other embodiments, parties may be used to train part of a model or participate in generating one or more nodes of a decision tree without revealing all its features. The trained models or decision trees may then be used for classifying unlabeled events or items.Type: ApplicationFiled: May 14, 2024Publication date: September 12, 2024Inventors: Ashish GOEL, Peter LOFGREN
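A toy sketch of the score-sharing idea: party A trains on features it cannot share and exposes only its model's score, which party B consumes as one additional feature in its own model. All data below is synthetic and the two-party split is an assumption for illustration.

```python
# Hedged sketch: only party A's score crosses the boundary between parties,
# acting as an embedding of A's private features without revealing them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 4000
X_a = rng.normal(size=(n, 4))                       # party A's private features
X_b = rng.normal(size=(n, 3))                       # party B's own features
y = ((X_a[:, 0] + X_b[:, 0]) > 0).astype(int)       # shared label

model_a = LogisticRegression().fit(X_a, y)
score_a = model_a.predict_proba(X_a)[:, 1]          # the only value party A shares

X_b_aug = np.hstack([X_b, score_a[:, None]])        # party B uses A's score as a feature
model_b = LogisticRegression().fit(X_b_aug, y)
print("B alone vs. B + A's score:",
      LogisticRegression().fit(X_b, y).score(X_b, y),
      model_b.score(X_b_aug, y))
```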
-
Publication number: 20240303555Abstract: The present specification provides, amongst other things, a novel resource optimization engine. In one example system, a plurality of collaboration platforms and travel booking engines and client devices are provided that connect to the optimization engine. The plurality of collaboration platforms manage the accounts of client devices that collaborate. The system includes an optimization engine configured to optimize travel and scheduling itineraries between different collaborators.Type: ApplicationFiled: March 10, 2023Publication date: September 12, 2024Inventors: Nicolas Guillon, Yves Grealou, Cecile Jouve, Julien Bordas, Deepak Kochar
-
Publication number: 20240303556Abstract: An energy management method includes scheduling charging of an electrified vehicle, for energy management to be performed at a predetermined charging point on a predetermined execution date, and detecting a pre-charging action of the electrified vehicle on the execution date at a location other than the charging point. When such an action is detected, the method determines whether the electrified vehicle can reach the charging point without charging and, when it determines that the electrified vehicle can reach the charging point without charging at the location other than the charging point, requests users of the electrified vehicle not to charge at locations other than the charging point.Type: ApplicationFiled: December 14, 2023Publication date: September 12, 2024Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHAInventor: Kazuhisa MATSUDA
-
Publication number: 20240303557Abstract: The present disclosure is directed to an improved truck driving system and methods for improved truck driving, including solving well-known shortcomings in the truck driving industry. With the system and software application, a truck driver or an individual taking a long road trip uses the app and system, along with third-party or integrated information about the scheduled route, to locate healthy truck stop stations and reserve a particular time for arriving, parking, exercising, taking classes, and attending to hygiene and grooming, among other services and goods. The application also integrates with proprietary or third-party apps to provide advice on proper and healthier foods to purchase at the specific locations, and for safely driving and transporting goods.Type: ApplicationFiled: March 7, 2024Publication date: September 12, 2024Inventor: LISA COX