Patent Applications Published on April 14, 2022
-
Publication number: 20220114436
Abstract: Systems and methods for artificial intelligence discovered codes are described herein. A method includes obtaining received samples from a receive decoder, obtaining decoded bits from the receive decoder based on the received samples, and training an encoder neural network of a transmit encoder, the encoder neural network receiving parameters that comprise the information bits, the received samples, and the decoded bits. The encoder neural network is optimized using a loss function applied to the decoded bits and the information bits to calculate a forward error correcting code.
Type: Application
Filed: October 13, 2020
Publication date: April 14, 2022
Inventors: RaviKiran Gopalan, Anand Chandrasekher, Yihan Jiang
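The training loop this abstract describes can be illustrated with a toy end-to-end setup. The sketch below is a minimal numpy illustration, not the patented method: the linear encoder, matched-filter decoder, AWGN channel, block sizes `k`/`n`, and the SPSA-style perturbation update (used because the hard-decision decoder is non-differentiable) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 4, 8                              # info bits / channel uses per block (assumed)
W = rng.normal(scale=0.1, size=(n, k))   # toy linear "encoder network" weights

def encode(bits, W):
    return W @ (2 * bits - 1)            # map {0,1} bits to channel symbols

def channel(x, snr_db=4.0):
    sigma = 10 ** (-snr_db / 20)
    return x + rng.normal(scale=sigma, size=x.shape)

def decode(y, W):
    # matched-filter-style receive decoder (stand-in for the patent's decoder)
    return (W.T @ y > 0).astype(int)

def loss(W, batch):
    # bit error rate between decoded bits and information bits
    errs = [np.mean(decode(channel(encode(b, W)), W) != b) for b in batch]
    return float(np.mean(errs))

# SPSA-style gradient estimate: the decoder is non-differentiable here,
# so we perturb the encoder weights and difference the loss.
for step in range(200):
    batch = [rng.integers(0, 2, size=k) for _ in range(64)]
    delta = rng.choice([-1.0, 1.0], size=W.shape)
    eps = 0.01
    g = (loss(W + eps * delta, batch) - loss(W - eps * delta, batch)) / (2 * eps) * delta
    W -= 0.5 * g

print("final BER estimate:", loss(W, [rng.integers(0, 2, size=k) for _ in range(256)]))
```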
-
Publication number: 20220114437
Abstract: Methods, apparatus, and processor-readable storage media for correlating data center resources in a multi-tenant execution environment using machine learning techniques are provided herein. An example computer-implemented method includes obtaining multiple data streams pertaining to one or more data center resources in at least one multi-tenant execution environment; correlating one or more portions of the multiple data streams by processing at least a portion of the multiple data streams using at least one multi-tenant-capable search engine; determining one or more anomalies within the multiple data streams by processing the one or more correlated portions of the multiple data streams using a machine learning-based anomaly detection engine; and performing at least one automated action based at least in part on the one or more determined anomalies.
Type: Application
Filed: October 14, 2020
Publication date: April 14, 2022
Inventors: James S. Watt, Bijan K. Mohanty, Bhaskar Todi
-
Publication number: 20220114438
Abstract: Methods and systems for training and implementing metrology recipes while dynamically controlling the convergence trajectories of multiple performance objectives are described herein. Performance metrics are employed to regularize the optimization process employed during measurement model training, model-based regression, or both. Weighting values associated with each of the performance objectives in the loss function of the model optimization are dynamically controlled during model training. In this manner, convergence of each performance objective and the tradeoff between multiple performance objectives of the loss function is controlled to arrive at a trained measurement model in a stable, balanced manner. A trained measurement model is employed to estimate values of parameters of interest based on measurements of structures having unknown values of one or more parameters of interest.
Type: Application
Filed: December 2, 2020
Publication date: April 14, 2022
Inventors: Stilian Ivanov Pandev, Arvind Jayaraman
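As one hedged illustration of dynamically weighting multiple performance objectives inside a single training loss, the numpy sketch below fits a toy regression under an MSE and an MAE objective and re-balances the loss weights toward the objective converging more slowly. The objectives, the weight-update rule, and all constants are assumptions, not the patent's recipe.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

w = np.zeros(3)
alpha = np.array([0.5, 0.5])          # dynamic weights on the two objectives

def objectives(w):
    resid = X @ w - y
    return np.array([np.mean(resid ** 2),      # objective 1: mean squared error
                     np.mean(np.abs(resid))])  # objective 2: mean absolute error

for step in range(500):
    resid = X @ w - y
    g1 = 2 * X.T @ resid / len(y)              # gradient of MSE
    g2 = X.T @ np.sign(resid) / len(y)         # (sub)gradient of MAE
    w -= 0.05 * (alpha[0] * g1 + alpha[1] * g2)

    # Re-balance weights toward the objective currently converging more
    # slowly -- one plausible reading of "dynamically controlled" weighting.
    obj = objectives(w)
    alpha = obj / obj.sum()

print("objectives at convergence:", objectives(w))
```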
-
Publication number: 20220114439
Abstract: An approach is provided for generating asynchronous learning rules and/or architectures. The approach involves, for example, configuring an asynchronous machine learning agent to learn based on machine learning tasks. The asynchronous machine learning agent includes agent inputs for inputting task inputs of the machine learning tasks, agent outputs for outputting task outputs of the machine learning tasks, task feedback signals for scoring a performance on the machine learning tasks, and stateful neural units that are arbitrarily connected. The approach also comprises initiating a training of the asynchronous machine learning agent to learn an agent architecture, an agent learning rule, or a combination thereof based on the machine learning tasks. The approach further comprises configuring the stateful neural units based on the agent architecture, the agent learning rule, or a combination thereof to perform a subsequent machine learning task.
Type: Application
Filed: December 21, 2020
Publication date: April 14, 2022
Inventor: Tero KESKI-VALKAMA
-
Publication number: 20220114440
Abstract: This disclosure generally provides solutions for improving the performance of a custom-built, packet-switched, TPU accelerator-side communication network. Specifically, a set of solutions is described that improve the flow-control behavior by tuning the packet buffer queues in the on-chip router of the distributed training supercomputer network.
Type: Application
Filed: December 29, 2020
Publication date: April 14, 2022
Inventors: Xiangyu Dong, Kais Belgaied, Yazhou Zu
-
Publication number: 20220114441
Abstract: An apparatus for scheduling a data augmentation technique according to an embodiment includes a data set extractor, a first trainer, an operation extractor, a second trainer, and a schedule determinator. The apparatus may provide a schedule for a data augmentation technique capable of improving the performance of a neural network classification model in a shorter time compared to the related art.
Type: Application
Filed: January 14, 2021
Publication date: April 14, 2022
Inventors: Jeong Hyung PARK, Seung Woo NAM, Ji Ah YU
-
Publication number: 20220114442
Abstract: A non-transitory computer-readable recording medium having stored therein a machine learning program executable by one or more computers, the machine learning program including an instruction for generating a tensor comprising a first axis, a second axis, and a third axis, the first axis and the second axis representing relationships of a plurality of nodes included in graph information including data representing attributes of the plurality of nodes in the hierarchical structure, the third axis representing separately first data included in a first layer of the hierarchical structure and second data included in a second layer of the hierarchical structure, and an instruction for training a machine learning model by using the tensor as an input.
Type: Application
Filed: July 9, 2021
Publication date: April 14, 2022
Applicant: FUJITSU LIMITED
Inventor: Ryo ISHIZAKI
-
Publication number: 20220114443
Abstract: An information processing program for causing a computer to execute processing, the processing including: converting each data included in a destination dataset and each data included in a plurality of source dataset candidates into a frequency spectrum; calculating an average of a spectrum intensity of the data included in the destination dataset and each average of a spectrum intensity of the data included in the plurality of source dataset candidates; calculating, for each of the plurality of source dataset candidates, a similarity with the destination dataset by using an inner product of the spectrum intensity of the data included in the destination dataset and the spectrum intensity of the data included in the plurality of source dataset candidates; and determining a source dataset that is the most similar to the destination dataset from among the plurality of source dataset candidates on the basis of the calculated similarity.
Type: Application
Filed: July 12, 2021
Publication date: April 14, 2022
Applicant: FUJITSU LIMITED
Inventor: Satoru Koda
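The spectrum-similarity computation described here maps to a few lines of numpy. Below is a minimal sketch under stated assumptions: the similarity is the inner product of average magnitude spectra, normalized to a cosine score (the normalization is an addition; the abstract specifies only an inner product), and the datasets are synthetic sinusoids.

```python
import numpy as np

def mean_spectrum(dataset):
    # average magnitude spectrum over all series in the dataset
    return np.mean([np.abs(np.fft.rfft(x)) for x in dataset], axis=0)

def spectral_similarity(dst, src):
    a, b = mean_spectrum(dst), mean_spectrum(src)
    # inner product of spectrum intensities, normalized to a cosine score
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 128, endpoint=False)
dest = [np.sin(2 * np.pi * 5 * t) + 0.1 * rng.normal(size=t.size) for _ in range(20)]
src_a = [np.sin(2 * np.pi * 5 * t + rng.uniform(0, 6.28)) for _ in range(20)]  # same band
src_b = [np.sin(2 * np.pi * 20 * t) for _ in range(20)]                        # different band

candidates = {"src_a": src_a, "src_b": src_b}
scores = {name: spectral_similarity(dest, data) for name, data in candidates.items()}
print(scores, "-> most similar:", max(scores, key=scores.get))
```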
-
Publication number: 20220114444
Abstract: A computer-implemented method for training a neural network to perform a data processing task includes: for each data sample of a set of labeled data samples: by a first loss function for the data processing task, computing a first loss for that data sample; and by a second loss function, automatically computing a weight value for the data sample based on the first loss, the weight value indicative of the reliability of the label predicted by the neural network for the data sample and dictating the extent to which that data sample impacts training of the neural network; and training the neural network with the set of labeled data samples according to their respective weight values.
Type: Application
Filed: July 23, 2021
Publication date: April 14, 2022
Applicant: NAVER CORPORATION
Inventors: Philippe WEINZAEPFEL, Jérome REVAUD, Thibault CASTELLS
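A hedged sketch of loss-based sample weighting follows: per-sample losses from a toy logistic model are mapped through an exponential decay to reliability weights, which then scale each sample's gradient contribution. The exponential form of the second (weighting) function and all constants are assumptions; the abstract only requires the weight to be computed from the first loss.

```python
import numpy as np

def sample_weights(losses, tau=1.0):
    # low loss -> weight near 1; high loss (likely noisy label) -> weight near 0
    return np.exp(-losses / tau)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] > 0).astype(float)
y[:10] = 1 - y[:10]                      # inject label noise into first 10 samples

w = np.zeros(2)
for _ in range(300):
    p = 1 / (1 + np.exp(-X @ w))
    losses = -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    s = sample_weights(losses)
    grad = X.T @ (s * (p - y)) / len(y)  # noisy samples contribute less to training
    w -= 0.5 * grad

print("mean weight of noisy samples:", sample_weights(losses)[:10].mean(),
      "vs clean:", sample_weights(losses)[10:].mean())
```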
-
METHOD AND SYSTEM FOR PROCESSING NEURAL NETWORK PREDICTIONS IN THE PRESENCE OF ADVERSE PERTURBATIONS
Publication number: 20220114445
Abstract: A system and method for processing predictions in the presence of adversarial perturbations in a sensing system. The processor receives inputs from sensors and runs a neural network having a network function that generates, as outputs, predictions of the neural network. The method generates from a plurality of outputs a measurement quantity (m) that may be, at or near a given input, either (i) a first measurement quantity M1 corresponding to a gradient of the given output, (ii) a second measurement quantity M2 corresponding to a gradient of a predetermined objective function derived from a training process for the neural network, or (iii) a third measurement quantity M3 derived from a combination of M1 and M2. The method determines whether the measurement quantity (m) is equal to or greater than a threshold. If greater than the threshold, one or more remedial actions are performed to correct for a perturbation.
Type: Application
Filed: January 3, 2020
Publication date: April 14, 2022
Inventors: Hans-Peter BEISE, Udo SCHRÖDER, Steve DIAS DA CRUZ, Jan SOKOLOWSKI
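A minimal sketch of the M1-style measurement is shown below: the gradient of the selected output with respect to the input is estimated by finite differences on a tiny fixed network, its norm is compared against a threshold, and a remedial action is flagged when the threshold is exceeded. The network, the finite-difference estimator, and the threshold value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

def f(x):                      # tiny fixed network standing in for the sensing model
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

def input_gradient_norm(x):
    # M1 from the abstract: gradient of the top output w.r.t. the input,
    # estimated here by central finite differences for simplicity
    k = int(np.argmax(f(x)))
    g = np.zeros_like(x)
    eps = 1e-4
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e)[k] - f(x - e)[k]) / (2 * eps)
    return float(np.linalg.norm(g))

THRESHOLD = 2.5                # would be calibrated on clean data in practice
x = rng.normal(size=4)
m = input_gradient_norm(x)
if m >= THRESHOLD:
    print(f"m={m:.2f}: possible adversarial perturbation, trigger remedial action")
else:
    print(f"m={m:.2f}: prediction accepted")
```
-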
Publication number: 20220114446
Abstract: A method for creating a neural network that includes an encoder connected to a decoder. The DARTS optimization method is used, with a further cell type added to the DARTS cell types. A computer program and a device for carrying out the method, and a machine-readable memory element on which the computer program is stored, are also described.
Type: Application
Filed: April 8, 2020
Publication date: April 14, 2022
Inventors: Arber Zela, Frank Hutter, Thomas Brox, Tonmoy Saikia, Yassine Marrakchi
-
Publication number: 20220114447
Abstract: A neural network parameter tuner has an auxiliary neural network receptive to an input data stream with signal components and noise components associated with ambient conditions. An ambient classification value is periodically derived from the input data stream based upon the noise components detected therein. A primary neural network receptive to the input data stream classifies the input data stream based upon an assigned detection threshold corresponding to the ambient classification value.
Type: Application
Filed: October 8, 2021
Publication date: April 14, 2022
Inventors: Mouna Elkhatib, Adil Benyassine, Aruna Vittal, Eli Uc, Daniel Schoch
-
Publication number: 20220114448
Abstract: In a neural network (NN) based wireless communication system, a BS determines, for a one-round latency T and an overall model size L of the NN model, (i) T_u that makes L̂*(T_u) larger than L and (ii) T_l that makes L̂*(T_l) < L; repeatedly determines L̂*(T_m), {R*_k,n}, {L̂*_k}, and {C*_k,n} using T_m = (T_u + T_l)/2 for k = 1, …, K and n = 1, …, N, while T_u is different from T_l; allocates NN model parameters to user equipments 1 to K based on {R*_k,n}, {L̂*_k}, and {C*_k,n} determined based on T_m when T_u = T_l; and updates the NN model based on update results of the NN model parameters received from user equipments 1 to K.
Type: Application
Filed: October 8, 2021
Publication date: April 14, 2022
Applicant: The University of Hong Kong
Inventors: Kijun Jeon, Kaibin Huang, Dingzhu Wen, Sangrim Lee, Sungjin Kim
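The T_u/T_l loop in this abstract is a bisection search on the latency budget. A compact sketch follows, assuming L̂*(T) is a monotone "capacity" function returning the largest total model size updatable within latency T (the toy curve at the bottom is an illustrative assumption).

```python
def latency_by_bisection(L, L_hat_star, T_u, T_l, tol=1e-6):
    """Bisection on the one-round latency budget, as sketched in the abstract:
    shrink [T_l, T_u] until the smallest latency whose capacity covers L is found."""
    assert L_hat_star(T_u) > L and L_hat_star(T_l) < L
    while T_u - T_l > tol:
        T_m = (T_u + T_l) / 2
        if L_hat_star(T_m) >= L:
            T_u = T_m          # budget T_m suffices; tighten from above
        else:
            T_l = T_m          # insufficient; raise the lower bound
    return T_u

# toy monotone capacity curve (illustrative assumption)
print(latency_by_bisection(L=100.0, L_hat_star=lambda T: 25.0 * T, T_u=10.0, T_l=1.0))
```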
-
Publication number: 20220114449
Abstract: A computing device trains a neural network machine learning model. A forward propagation of a first neural network is executed. A backward propagation of the first neural network is executed from a last layer to a last convolution layer to compute a gradient vector. A discriminative localization map is computed for each observation vector with the computed gradient vector using a discriminative localization map function. An activation threshold value is selected for each observation vector from at least two different values based on a prediction error of the first neural network. A biased feature map is computed for each observation vector based on the activation threshold value selected for each observation vector. A masked observation vector is computed for each observation vector using the biased feature map. A forward and a backward propagation of a second neural network are executed for a predefined number of iterations using the masked observation vector.
Type: Application
Filed: October 13, 2021
Publication date: April 14, 2022
Inventors: Xinmin Wu, Yingjian Wang, Xiangqian Hu
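A hedged sketch of the localization-and-masking step: a CAM-style map is formed from last-convolution activations weighted by spatially pooled gradients, an activation threshold is chosen from two candidate values depending on whether the first network's prediction was correct, and the thresholded map masks the observation. The map formula, the two threshold values, and the random stand-in tensors are all assumptions; the patent's exact map function may differ.

```python
import numpy as np

def discriminative_map(activations, gradients):
    # CAM-style map: channel weights are spatially pooled gradients
    weights = gradients.mean(axis=(1, 2))                  # (C,)
    cam = np.tensordot(weights, activations, axes=1)       # (H, W)
    return np.maximum(cam, 0)

def masked_input(x, cam, threshold):
    mask = (cam >= threshold * cam.max()).astype(float)    # biased feature map
    return x * mask

rng = np.random.default_rng(0)
acts = rng.random((16, 8, 8))      # stand-in for last convolution layer activations
grads = rng.normal(size=(16, 8, 8))
x = rng.random((8, 8))

cam = discriminative_map(acts, grads)
# abstract: threshold picked per observation from at least two values,
# based on the first network's prediction error (assumed values here)
prediction_correct = False
threshold = 0.5 if prediction_correct else 0.2
print(masked_input(x, cam, threshold).shape)
```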
-
Publication number: 20220114450
Abstract: A system and method for tuning hyperparameters and training a model includes implementing a hyperparameter tuning service that tunes hyperparameters of a model that includes receiving, via an API, a tuning request that includes: (i) a first part comprising tuning parameters for generating tuned hyperparameter values for hyperparameters of the model; and (ii) a second part comprising model training control parameters for monitoring and controlling a training of the model, wherein the model training control parameters include criteria for generating instructions for curtailing a training run of the model; monitoring the training run for training the model based on the second part of the tuning request, wherein the monitoring of the training run includes periodically collecting training run data; and computing an advanced training curtailment instruction based on the training run data that automatically curtails the training run prior to a predefined maximum training schedule of the training run.
Type: Application
Filed: October 22, 2021
Publication date: April 14, 2022
Inventors: Michael McCourt, Taylor Jackie Springs, Ben Hsu, Simon Howey, Halley Nicki Vance, James Blomo, Patrick Hayes, Scott Clark
-
Publication number: 20220114451
Abstract: Methods, apparatus, systems, and articles of manufacture for data enhanced automated model generation are disclosed. Example instructions, when executed, cause at least one processor to access a request to generate a machine learning model to perform a selected task, generate task knowledge based on a previously generated machine learning model, create a search space based on the task knowledge, and generate a machine learning model using neural architecture search, the neural architecture search beginning based on the search space.
Type: Application
Filed: December 22, 2021
Publication date: April 14, 2022
Inventors: Chaunté W. Lacewell, Juan Pablo Muñoz, Rajesh Poornachandran, Nilesh Jain, Anahita Bhiwandiwalla, Eriko Nurvitadhi, Abhijit Davare
-
Publication number: 20220114452
Abstract: A hierarchical compositional network, representable in Bayesian network form, includes first, second, third, fourth, and fifth parent feature nodes; first, second, and third pool nodes; first, second, and third weight nodes; and first, second, third, fourth, and fifth child feature nodes.
Type: Application
Filed: December 22, 2021
Publication date: April 14, 2022
Inventor: Miguel Lazaro Gredilla
-
Publication number: 20220114453
Abstract: A neural network pruning method includes: acquiring a first task accuracy of an inference task processed by a pretrained neural network; pruning, based on a channel unit, the neural network by adjusting weights between nodes of channels based on a preset learning weight and based on a channel-by-channel pruning parameter corresponding to a channel of each of a plurality of layers of the pretrained neural network; updating the learning weight based on the first task accuracy and a task accuracy of the pruned neural network; updating the channel-by-channel pruning parameter based on the updated learning weight and the task accuracy of the pruned neural network; and repruning, based on the channel unit, the pruned neural network based on the updated learning weight and based on the updated channel-by-channel pruning parameter.
Type: Application
Filed: April 21, 2021
Publication date: April 14, 2022
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Won-Jo LEE, Youngmin OH, Minkyoung CHO
-
Publication number: 20220114454
Abstract: An electronic apparatus may include a memory configured to store compressed data that is to be decompressed for a neural network calculation of an artificial intelligence model; a decoder including a shift register configured to sequentially receive the compressed data in group units and output at least two groups of the compressed data, and a plurality of logic circuits configured to decompress the at least two groups of the compressed data to obtain decompressed data; and a processor configured to obtain the decompressed data in a form capable of being calculated by a neural network.
Type: Application
Filed: November 4, 2021
Publication date: April 14, 2022
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Baeseong PARK, Sejung KWON
-
Publication number: 20220114455
Abstract: Pruning and/or quantizing a machine learning predictor (in other words, a machine learning model such as a neural network) is rendered more efficient if the pruning and/or quantizing is performed using relevance scores determined for portions of the machine learning predictor on the basis of the activation of those portions during one or more inferences performed by the machine learning (ML) predictor.
Type: Application
Filed: December 20, 2021
Publication date: April 14, 2022
Inventors: Wojciech SAMEK, Sebastian LAPUSCHKIN, Simon WIEDEMANN, Philipp SEEGERER, Seul-Ki YEOM, Klaus-Robert MUELLER, Thomas WIEGAND
-
Publication number: 20220114456
Abstract: Methods, systems, and computer program products for knowledge graph based embedding, explainability, and/or multi-task learning may connect task-specific inductive models with knowledge graph completion and enrichment processes.
Type: Application
Filed: October 6, 2021
Publication date: April 14, 2022
Inventors: Azita Nouri, Mangesh Bendre, Mahashweta Das, Fei Wang, Hao Yang, Adit Krishnan
-
Publication number: 20220114457
Abstract: Provided are various mechanisms and processes for quantization of tree-based machine learning models. A method comprises determining one or more parameter values in a trained tree-based machine learning model. The one or more parameter values exist within a first number space encoded in a first data type and are quantized into a second number space. The second number space is encoded in a second data type having a smaller file storage size relative to the first data type. An array is encoded within the tree-based machine learning model. The array stores parameters for transforming a given quantized parameter value in the second number space to a corresponding parameter value in the first number space. The tree-based machine learning model may be transmitted to an embedded system of a client device. The one or more parameter values correspond to threshold values or leaf values of the tree-based machine learning model.
Type: Application
Filed: October 11, 2021
Publication date: April 14, 2022
Applicant: QEEXO, CO.
Inventors: Leslie J. Schradin, III, Qifan He
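The quantization described here can be read as a standard affine (scale/offset) mapping; a small sketch under that assumption follows, with the stored (scale, offset) pair playing the role of the transform array the abstract encodes within the model.

```python
import numpy as np

def quantize_params(values, bits=8):
    """Affine quantization of float32 tree thresholds/leaf values to uint8.
    The returned (scale, offset) pair is what lets an embedded runtime map
    quantized values back into the original number space."""
    values = np.asarray(values, dtype=np.float32)
    lo, hi = float(values.min()), float(values.max())
    scale = (hi - lo) / (2 ** bits - 1) or 1.0   # guard the all-equal case
    q = np.round((values - lo) / scale).astype(np.uint8)
    return q, {"scale": scale, "offset": lo}

def dequantize(q, params):
    return q.astype(np.float32) * params["scale"] + params["offset"]

thresholds = [0.13, -1.2, 3.7, 0.02, 2.5]
q, params = quantize_params(thresholds)
print(q.nbytes, "bytes vs", np.float32(1).nbytes * len(thresholds), "bytes unquantized")
print(dequantize(q, params))
```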
-
Publication number: 20220114458
Abstract: A device may include a processor. The processor may receive sensor data representative of an environment of a vehicle. The processor may also generate task data using the sensor data in accordance with a perception task. In addition, the task data may include a plurality of features of the environment. The processor may identify a latent representation of a negative effect of the environment within the sensor data. Further, the processor may estimate an error distribution for the task data based on the identified latent representation, the task data, and the perception task. The processor may generate output data. The output data may include a normalized distribution of the plurality of features based on the estimated error distribution and the task data.
Type: Application
Filed: December 22, 2021
Publication date: April 14, 2022
Inventors: Maria Soledad ELLI, Javier FELIP LEON, David Israel GONZALEZ AGUIRRE, Javier S. TUREK, Ignacio J. ALVAREZ
-
Publication number: 20220114459
Abstract: A computer device identifies (i) a dataset, (ii) a set of output class determinations made for data entries of the dataset by a computer decision algorithm, and (iii) an undesired disparity between output class determinations resulting from a first value of a first attribute of the dataset and output class determinations resulting from a second value of the first attribute. The computing device determines a value of a second attribute of the dataset is contributing to the undesired disparity by: providing an association rule mining model (i) a first group of the data entries having the first value of the first attribute, and (ii) a second group of the data entries having the second value of the first attribute, and selecting the value of the second attribute from a set of candidate attribute values produced by the association rule mining model based, at least in part, on a lift calculation.
Type: Application
Filed: October 13, 2020
Publication date: April 14, 2022
Inventors: Manish Anand Bhide, Pranay Kumar Lohia
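The lift calculation that drives the attribute-value selection is simple to state concretely. A sketch with illustrative counts (the attribute names and numbers are assumptions, not from the patent):

```python
def lift(n_total, n_antecedent, n_consequent, n_both):
    """lift(A -> B) = P(A and B) / (P(A) * P(B)); values well above 1 mean the
    attribute value co-occurs with the outcome far more often than chance."""
    p_a = n_antecedent / n_total
    p_b = n_consequent / n_total
    p_ab = n_both / n_total
    return p_ab / (p_a * p_b)

# toy numbers: among 1000 entries, 400 have zip_code=Z (hypothetical second
# attribute), 300 received the unfavorable class, and 220 have both
print(lift(1000, 400, 300, 220))   # ~1.83 -> zip_code=Z contributes to the disparity
```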
-
Publication number: 20220114460
Abstract: An apparatus is provided for identifying a representation for heterogeneous time series data. The apparatus comprises a model training device and a data classification device. Based on requirements for compression rate and information loss, the most suitable time series representation is found for a specific item of time series data. In particular, the model training device assesses each item of training time series data to evaluate the performance of various representations, thus identifying the most suitable representation for each item of training time series data; the training time series data are then clustered, and the most representative time series data for each cluster are determined.
Type: Application
Filed: October 30, 2020
Publication date: April 14, 2022
Inventors: Chih-Yuan Huang, I-Sheng Tseng
-
Publication number: 20220114461
Abstract: A model learning apparatus is configured to learn a model that shows a relationship between an input variable v input into a system and an output variable y output from the system. The model learning apparatus includes a storage that stores a model used to learn a nonlinear equation of state for predicting the output variable y by using the input variable v, and a processor programmed to learn the equation of state by using the model and an input-output data set including multiple sets of input variable data and output variable data with respect to the model. The model is an equation of state including a bijective mapping ? that uses the input variable v as an input thereof and a bijective mapping ? that uses the output variable y as an input thereof.
Type: Application
Filed: October 12, 2021
Publication date: April 14, 2022
Applicant: KABUSHIKI KAISHA TOYOTA CHUO KENKYUSHO
Inventors: Ryuta MORIYASU, Taro IKEDA, Masato TAKEUCHI
-
Publication number: 20220114462
Abstract: Recommendations for new experiments are generated via a pipeline that includes a predictive model and a preference procedure. In one example, a definition of a development task includes experiment parameters that may be varied, the outcomes of interest, and the desired goals or specifications. Existing experimental data is used by machine learning algorithms to train a predictive model. The software system generates candidate experiments and uses the trained predictive model to predict the outcomes of the candidate experiments based on their parameters. A merit function (referred to as a preference function) is calculated for the candidate experiments. The preference function is a function of the experiment parameters and/or the predicted outcomes. It may also be a function of features that are derived from these quantities. The candidate experiments are ranked based on the preference function.
Type: Application
Filed: November 29, 2021
Publication date: April 14, 2022
Inventors: Jason Isaac Hirshman, Noel Hollingsworth, Will Tashman
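A hedged sketch of the ranking pipeline: candidate experiments are generated, a stand-in predictive model scores their outcomes, and a preference (merit) function over parameters and predicted outcomes ranks them. The outcome model, the preference weights, and the parameter ranges are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_outcome(params):
    # stand-in for the trained predictive model (assumed: yield peaks at
    # temperature 80 and falls with impurity concentration)
    temp, conc = params
    return -0.01 * (temp - 80) ** 2 + 5.0 - 2.0 * conc

def preference(params, predicted):
    # merit function over parameters and predicted outcomes: favor high
    # predicted yield, penalize costly high temperature (assumed weights)
    temp, _ = params
    return predicted - 0.02 * temp

candidates = [(rng.uniform(40, 120), rng.uniform(0, 1)) for _ in range(50)]
ranked = sorted(candidates,
                key=lambda p: preference(p, predict_outcome(p)),
                reverse=True)
print("top recommended experiment (temp, conc):", ranked[0])
```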
-
Publication number: 20220114463
Abstract: Methods and systems for multi-modality soft-agents for an enterprise virtual assistant tool are disclosed. An exemplary method comprises capturing, with a computing device, one or more user requests based on at least one multi-modality interaction, populating, with a computing device, soft-queries to access associated data sources and applications, and mining information retrieved by executing at least one populated soft-query. A soft-query is created from user requests. A multi-modality user interface engine annotates the focus of user requests received via text, speech, touch, image, video, or object scanning. A query engine populates queries by identifying the sequence of multi-modal interactions, executes queries, and provides results by mining the query results. The multi-modality interactions identify specific inputs for query building and specific parameters associated with the query. A query is populated and used to generate micro-queries associated with the applications involved.
Type: Application
Filed: October 14, 2020
Publication date: April 14, 2022
Applicant: Openstream Inc.
Inventor: Rajasekhar Tumuluri
-
Publication number: 20220114464
Abstract: Embodiments described herein provide a two-stage model-agnostic approach for generating counterfactual explanation via counterfactual feature selection and counterfactual feature optimization. Given a query instance, counterfactual feature selection picks a subset of feature columns and values that can potentially change the prediction, and then counterfactual feature optimization determines the best feature value for the selected feature as a counterfactual example.
Type: Application
Filed: January 29, 2021
Publication date: April 14, 2022
Inventors: Wenzhuo Yang, Jia Li, Chu Hong Hoi, Caiming Xiong
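A minimal model-agnostic sketch of the two stages: iterate over candidate feature columns (selection), try values from a grid for each (optimization), and keep the prediction-flipping change closest to the query instance. The stand-in classifier, the value grid, and the distance-based notion of "best" are assumptions; the abstract leaves the optimization criterion open.

```python
import numpy as np

def predict(x):
    # stand-in binary classifier (assumption): approve when income - debt > 1
    return int(x[0] - x[1] > 1.0)

def counterfactual(x, candidate_features, grid):
    """Stage 1: scan feature columns that can flip the prediction.
    Stage 2: among flipping values, keep the one closest to the original."""
    base = predict(x)
    best = None
    for j in candidate_features:               # counterfactual feature selection
        for v in grid:                          # counterfactual feature optimization
            x2 = x.copy()
            x2[j] = v
            if predict(x2) != base:
                cost = abs(v - x[j])
                if best is None or cost < best[2]:
                    best = (j, v, cost)
    return best                                 # (feature index, new value, distance)

x = np.array([2.0, 1.5])                        # (income, debt) -> currently rejected
print(counterfactual(x, [0, 1], np.linspace(0, 5, 51)))
```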
-
Publication number: 20220114465
Abstract: A method for tuning predictive control parameters of a building energy consumption system based on fuzzy logic: 1) constructing a controlled building energy consumption system, performing generalized predictive control on the building energy consumption system, and initializing a tuned parameter λ of the generalized predictive controller; 2) collecting the output slope y_k(t), the actual output y(t), the set value y_r(t), and the predicted output value ŷ(t+i) of the controlled building energy consumption system in the control process, and taking them as fuzzy target parameters; 3) constructing a membership function for the fuzzy target parameters in step 2), and optimally selecting the parameters of the fuzzy membership function by using a particle swarm optimization algorithm to obtain membership function parameters of each fuzzy target parameter; 4) carrying out a fuzzy reasoning operation on the membership function parameters, and tuning the parameter λ by using the results of the fuzzy reasoning.
Type: Application
Filed: April 20, 2021
Publication date: April 14, 2022
Inventors: Ning HE, Gongbo XU
-
Publication number: 20220114466
Abstract: A system and method for detecting synthetic identities are provided that determine a synthetic identity score for a given user, the synthetic identity score indicating a likelihood that the given user is using a synthetic identity to conduct activities. The synthetic identity score generated by the system and method disclosed herein can then be used to determine a risk associated with the given user and to inform what actions to take based on the associated risk that the given user may use the synthetic identity to perform a bad act.
Type: Application
Filed: October 20, 2021
Publication date: April 14, 2022
Inventors: Maxwell BLUMENFELD, Naftali HARRIS
-
Publication number: 20220114467
Abstract: The invention discloses a sewage treatment process fault monitoring method based on a fuzzy width adaptive learning model, comprising two stages: “offline modeling” and “online monitoring”. Offline modeling first uses one batch of normal data and four batches of fault data as training samples to train the network offline and label the data. After network training is completed, the weight parameters are obtained for online monitoring. Online monitoring uses newly collected data as test data, following the same steps as the offline training network. The output of online monitoring adopts one-hot encoding to realize zero-one discrimination of the monitoring result, thereby realizing fault monitoring. The present invention only needs to increase the number of enhanced nodes and reconstruct in an incremental manner, without retraining the entire network from the beginning.
Type: Application
Filed: October 22, 2021
Publication date: April 14, 2022
Inventors: Peng CHANG, Chunhao DING, Ruiwei LU, Zeyu LI, Kai WANG
-
Publication number: 20220114468
Abstract: Systems and techniques that facilitate efficient synthesis of optimal multi-qubit Clifford circuits are provided. In various embodiments, a system can receive as input a number n representing a quantity of qubits. In various instances, the system can generate, via a cost-invariant reduction function, as output a library of different n-qubit canonical representatives that respectively correspond to different cost-invariant equivalence classes of n-qubit Clifford group elements. In various embodiments, a system can receive as input a first Clifford group element. In various aspects, the system can search a database of canonical representatives, wherein different canonical representatives in the database respectively correspond to different cost-invariant equivalence classes of Clifford group elements.
Type: Application
Filed: October 9, 2020
Publication date: April 14, 2022
Inventors: Sergey Bravyi, Joseph Latone, Dmitri Maslov
-
Publication number: 20220114469
Abstract: A computing system can be configured to execute a classical-quantum hybrid algorithm. The computing system may comprise a classical computer comprising one or more classically-executable-nodes of the classical-quantum hybrid algorithm; and a quantum computer comprising a quantum-processor-unit. The quantum computer is operatively coupled to the classical computer. The one or more classically-executable-nodes may be configured to send a first-circuit and a second-circuit to the quantum computer for evaluation. The quantum computer may be configured to: receive the first-circuit and the second-circuit; evaluate the first-circuit, using the quantum-processor-unit, to determine a first-circuit-evaluation; and send the first-circuit-evaluation to the classical computer. The one or more classically-executable-nodes may be configured to: receive the first-circuit-evaluation; and process the first-circuit-evaluation during a first-time-interval.
Type: Application
Filed: October 7, 2021
Publication date: April 14, 2022
Applicant: River Lane Research Ltd.
Inventors: James Cruise, Coral Westoby
-
Publication number: 20220114470
Abstract: An optimization apparatus finds the ground state of an Ising model that represents a target problem by running a simulation of state changes of the Ising model that occur upon reduction in a magnetic field applied to the Ising model. In doing so, the optimization apparatus adds a value corresponding to noise to some of the coefficients used in the simulation. Then, the optimization apparatus performs a first process of real time propagation of reducing the strength of the magnetic field as time in the simulation progresses and a second process of reducing the energy of the Ising model based on an imaginary time propagation method.
Type: Application
Filed: September 21, 2021
Publication date: April 14, 2022
Applicant: FUJITSU LIMITED
Inventor: Daisuke KUSHIBE
-
Publication number: 20220114471
Abstract: An entangled quantum cache includes a quantum store that receives a plurality of quantum states and is configured to store and order the plurality of quantum states and to provide select ones of the stored and ordered plurality of quantum states to a quantum data output at a first desired time. A fidelity system is configured to determine a fidelity of at least some of the plurality of quantum states. A classical store is coupled to the fidelity system and configured to store classical data comprising the determined fidelity information and an index that associates particular ones of the classical data with particular ones of the plurality of quantum states, and to supply at least some of the classical data to a classical data output at a second desired time. A processor is connected to the classical store and determines the first time based on the index.
Type: Application
Filed: May 3, 2021
Publication date: April 14, 2022
Applicant: Qubit Moving and Storage, LLC
Inventors: Gary Vacon, Kristin A. Rauschenbach
-
Publication number: 20220114472
Abstract: Systems and methods for generating telecast forecasts are provided. An automated forecasting system uses a machine learning-driven forecast model to generate forecasts for various telecasts over varying periods of time. Estimate values that are used to generate the forecasts may be determined based on deriving trends and correlations from telecast data using machine learning. The forecasting system may compare estimate values and actual values associated with the various telecasts and subsequently update the forecast model based on the comparison. The forecast model may be displayed on a client electronic device and may be updated or influenced by telecast providers.
Type: Application
Filed: October 8, 2020
Publication date: April 14, 2022
Inventors: Cameron Davies, Marco Antonio Morales Barba, David L. Synder, Tong Jian, Jiabin Chen, Soudeep Deb, Nana Yaw Essuman, Jiacheng Wang
-
Publication number: 20220114473
Abstract: A computer system, product, and method are provided. The computer system includes an artificial intelligence (AI) platform operatively coupled to a processor. The AI platform includes tools in the form of a machine learning model (MLM) manager, a metric manager, and a training manager. The MLM manager accesses a plurality of pre-trained source MLMs, and inputs a plurality of data objects of a test dataset into each of the source MLMs. The test dataset includes the plurality of data objects associated with respective labels. For each source MLM, associated labels are generated from the inputted data objects and a similarity metric is calculated. The MLM manager selects a base MLM to be used for transfer learning from the plurality of source MLMs based upon the calculated similarity metric. The training manager trains the selected base MLM with a target dataset for the target domain.
Type: Application
Filed: October 9, 2020
Publication date: April 14, 2022
Applicant: International Business Machines Corporation
Inventors: Parul Awasthy, Bishwaranjan Bhattacharjee, John Ronald Kender, Radu Florian, Hui Wan
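A sketch of the base-model selection step, assuming plain label agreement as the similarity metric (the abstract leaves the metric open) and simple callables as stand-ins for the pre-trained source MLMs:

```python
import numpy as np

def select_base_model(source_models, test_X, test_y, similarity):
    """Score each pre-trained source model on the labeled test dataset and
    return the one maximizing the (pluggable) similarity metric."""
    scores = {name: similarity(model(test_X), test_y)
              for name, model in source_models.items()}
    return max(scores, key=scores.get), scores

agreement = lambda pred, y: float(np.mean(pred == y))   # assumed metric

# toy "models": functions from inputs to labels (stand-ins for real MLMs)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
models = {
    "news_model":   lambda X: (X[:, 0] > 0.5).astype(int),
    "review_model": lambda X: (X[:, 0] > 0.0).astype(int),
}
print(select_base_model(models, X, y, agreement))   # picks "review_model"
```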
-
Publication number: 20220114474
Abstract: In various examples there is a method performed by a computer-implemented agent in an environment. The method comprises storing a reinforcement learning policy for controlling the computer-implemented agent. The method also comprises storing a distribution as a latent representation of a belief of the computer-implemented agent about at least one other agent in the environment. The method involves executing the computer-implemented agent according to the policy conditioned on parameters characterizing the distribution.
Type: Application
Filed: October 9, 2020
Publication date: April 14, 2022
Inventors: Katja HOFMANN, Luisa Maria ZINTGRAF, Sam Michael DEVLIN, Kamil Andrzej CIOSEK
-
Publication number: 20220114475
Abstract: Methods and systems for decentralized federated learning are described. Each client participating in the training of a local machine learning model identifies one or more neighbor clients in direct communication with itself. Each client transmits to its neighbor clients a weighting coefficient and a set of local model parameters for the local model. Each client also receives from its neighbor clients respective sets of local model parameters and respective weighting coefficients. Each client updates its own set of local model parameters using a weighted aggregation of the received sets of local model parameters, each received set of local model parameters being weighted with the respective received weighting coefficient. Each client trains its local machine learning model using a machine learning algorithm and its own local dataset.
Type: Application
Filed: October 9, 2020
Publication date: April 14, 2022
Inventors: Rui ZHU, Xiaorui LI, Yong ZHANG, Lanjun WANG
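The per-client aggregation step reduces to a coefficient-weighted average of parameter vectors. A sketch follows, with the normalization to a convex combination added as an assumption (the abstract specifies only coefficient-weighted aggregation):

```python
import numpy as np

def aggregate(own_params, own_coeff, neighbor_updates):
    """Weighted aggregation of a client's parameters with its neighbors'.
    neighbor_updates: list of (weighting_coefficient, params) received from
    neighbor clients in direct communication."""
    coeffs = np.array([own_coeff] + [c for c, _ in neighbor_updates], dtype=float)
    coeffs /= coeffs.sum()                       # normalize (assumed convention)
    stacked = np.stack([own_params] + [p for _, p in neighbor_updates])
    return np.tensordot(coeffs, stacked, axes=1)

w_self = np.array([1.0, 2.0])
received = [(2.0, np.array([0.0, 0.0])), (1.0, np.array([4.0, 4.0]))]
print(aggregate(w_self, 1.0, received))   # -> [1.25 1.5 ]
```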
-
Publication number: 20220114476
Abstract: This disclosure describes one or more implementations of a text sequence labeling system that accurately and efficiently utilize a joint-learning self-distillation approach to improve text sequence labeling machine-learning models. For example, in various implementations, the text sequence labeling system trains a text sequence labeling machine-learning teacher model to generate text sequence labels. The text sequence labeling system then creates and trains a text sequence labeling machine-learning student model utilizing the training and the output of the teacher model. Upon the student model achieving improved results over the teacher model, the text sequence labeling system re-initializes the teacher model with the learned model parameters of the student model and repeats the above joint-learning self-distillation framework. The text sequence labeling system then utilizes a trained text sequence labeling model to generate text sequence labels from input documents.
Type: Application
Filed: October 14, 2020
Publication date: April 14, 2022
Inventors: Trung Bui, Tuan Manh Lai, Quan Tran, Doo Soon Kim
-
Publication number: 20220114477
Abstract: In one embodiment, in response to sensor data received from a sensor device of a detection device, an analysis is performed on the sensor data within the detection device. Configuration data is determined based on the analysis of the sensor data. The configuration data identifies one or more actions to be performed. Each action is associated with a particular artificial intelligence (AI) model. For each of the actions, an event is generated to trigger the corresponding action to be performed. In response to each of the events, an AI model corresponding to the action associated with the event is executed. The AI model is applied to at least a portion of the sensor data to classify the sensor data.
Type: Application
Filed: October 14, 2020
Publication date: April 14, 2022
Inventors: Haofeng KOU, Yueqiang CHENG
-
Publication number: 20220114478
Abstract: A system for enhancing a prediction model according to an embodiment includes a storage module configured to receive and store inference data for input data from a prediction model, a retraining module configured to train a retraining model using retraining data including the inference data, and a determination module configured to compare performances of the prediction model and the retraining model and replace the prediction model with the retraining model according to the comparison result.
Type: Application
Filed: October 27, 2020
Publication date: April 14, 2022
Inventors: Byung Yong SUNG, Chang Ju LEE, Jun Cheol LEE, Jong Sung KIM
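The determination module's compare-and-replace logic can be sketched in a few lines; the accuracy metric and the strict-improvement gate are assumptions, since the abstract only specifies a performance comparison:

```python
import numpy as np

def maybe_replace(prediction_model, retraining_model, eval_set, metric):
    """Compare the serving model against the retrained challenger on a
    held-out set and swap only on improvement."""
    X, y = eval_set
    champion = metric(prediction_model(X), y)
    challenger = metric(retraining_model(X), y)
    return retraining_model if challenger > champion else prediction_model

accuracy = lambda pred, y: float(np.mean(pred == y))

X = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([0, 0, 1, 1])
old = lambda X: (X > 1.5).astype(int)      # misclassifies x = 1.0
new = lambda X: (X > 0.0).astype(int)      # perfect on this set
print(maybe_replace(old, new, (X, y), accuracy) is new)   # True: model replaced
```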
-
Publication number: 20220114479
Abstract: A machine learning method using a trained machine learning model residing on an electronic device includes receiving an inference request by the electronic device. The method also includes determining, using the trained machine learning model, an inference result for the inference request using a selected inference path in the trained machine learning model. The inference path is selected based on the highest probability at each layer of the trained machine learning model. A size of the trained machine learning model is reduced corresponding to constraints imposed by the electronic device. The method further includes executing an action in response to the inference result.
Type: Application
Filed: November 5, 2020
Publication date: April 14, 2022
Inventors: Changsheng Zhao, Yilin Shen, Hongxia Jin
-
Publication number: 20220114480
Abstract: An apparatus for labeling data according to an embodiment of the present disclosure includes a data acquisitor that acquires a plurality of unlabeled data, a predicted label acquisitor that acquires predicted labels for the unlabeled data from a plurality of pre-training models pre-trained under different learning schemes, a sampler that selects a part of the unlabeled data as an initial review target, an initial review label acquisitor that acquires an initial review label for the part of the unlabeled data from a user, a model trainer that trains a labeling model based on the part of the unlabeled data, predicted labels for the part of the unlabeled data, and the initial review label, and a predictor that predicts labels of a part of the remaining unlabeled data excluding the part of the unlabeled data by applying the labeling model to the part of the remaining unlabeled data.
Type: Application
Filed: January 14, 2021
Publication date: April 14, 2022
Inventors: Ji Hoon KIM, Seung Ho SHIN, Se Won WOO
-
Publication number: 20220114481
Abstract: Embodiments described herein provide a two-stage model-agnostic approach for generating counterfactual explanation via counterfactual feature selection and counterfactual feature optimization. Given a query instance, counterfactual feature selection picks a subset of feature columns and values that can potentially change the prediction, and then counterfactual feature optimization determines the best feature value for the selected feature as a counterfactual example.
Type: Application
Filed: January 29, 2021
Publication date: April 14, 2022
Inventors: Wenzhuo Yang, Jia Li, Chu Hong Hoi, Caiming Xiong
-
Publication number: 20220114482
Abstract: The present disclosure is directed to supervising displayed content. In particular, the methods and systems of the present disclosure may: generate data representing a plurality of images of interfaces displayed by a computing device configured to supervise content displayed to a user; determine, based at least in part on one or more machine learning (ML) models and the data representing the plurality of images, whether the interfaces displayed by the computing device include content of a type designated by a content supervisor of the user for identification; and generate data representing a graphical user interface (GUI) for presentation to the content supervisor, the GUI indicating whether the interfaces displayed by the computing device include content of the type designated for identification.
Type: Application
Filed: February 11, 2021
Publication date: April 14, 2022
Inventor: Abbas Valliani
-
Publication number: 20220114483
Abstract: A unified system with a machine learning feature data pipeline that can be shared among various product areas or teams of an electronic platform is described. A set of features can be fetched from multiple feature sources. The set of features can be combined with browsing event data to generate combined data. The combined data can be sampled to generate sampled data. The sampled data can be presented in a format having a structure that is agnostic to the feature source from which the set of features was fetched. The sampled data can be joined with old features by a backfilling process to generate training data designed to train one or more machine learning models. Related methods, apparatuses, articles of manufacture, and computer program products are also described.
Type: Application
Filed: April 20, 2021
Publication date: April 14, 2022
Inventors: Aakash Sabharwal, Akhila Ananthram, Miao Wang, Ruixi Fan, Sarah Hale, Chu-Cheng Hsieh, Tianle Hu
-
Publication number: 20220114484
Abstract: Provided is a production process determination device for a substrate processing apparatus that can easily suppress deterioration of determination accuracy. A production process determination device 20 includes a process log acquisition section 21 that acquires process log data of a substrate processing apparatus 10, and a determination section 22 that creates input data based on the process log data and performs a determination regarding the production process in the substrate processing apparatus based on the input data. The determination section includes multiple learning models 25, each of which receives the input data and outputs a determination result regarding the production process; the multiple learning models are generated by performing machine learning using mutually different training datasets. The determination section can switch the learning model used for determination among the multiple learning models.
Type: Application
Filed: March 2, 2020
Publication date: April 14, 2022
Inventors: Katsuji HANADA, Yuki FUJIWARA
-
Publication number: 20220114485
Abstract: Various embodiments of an apparatus, methods, systems, and computer program products described herein are directed to a Concentration Prediction Platform. According to various embodiments, the Concentration Prediction Platform receives an electrochemical signal and generates data based on deconvolving the respective contributions of one or more analytes influencing the electrochemical signal. The Concentration Prediction Platform sends the data into one or more machine learning networks. The Concentration Prediction Platform receives, from the one or more machine learning networks, a predicted concentration of one or more analytes influencing the electrochemical signal.
Type: Application
Filed: June 22, 2021
Publication date: April 14, 2022
Inventors: Nicole Leilani Ing, Glenn Clifford Forrester