Patent Applications Published on July 20, 2023
-
Publication number: 20230229904
Abstract: A method includes receiving model data, training a plurality of supervised models using the model data, each of the plurality of supervised models including a plurality of layers, slicing each of the plurality of supervised models into individual layers of the plurality of layers, calculating accuracy of feature detection of each of the individual layers of each of the plurality of supervised models, and combining a sequence of the individual layers taken from different models of the plurality of supervised models into a composite model based on the calculated accuracy of feature detection of each of the individual layers of each of the plurality of supervised models.
Type: Application
Filed: January 4, 2022
Publication date: July 20, 2023
Inventors: Sathya Santhar, Sarbajit K. Rakshit, Sridevi Kannan, Samuel Mathew Jawaharlal
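For readers who want the gist of the layer-recombination step, the following Python (PyTorch) sketch shows one way to assemble a composite model from per-layer scores. The `layer_scores` input stands in for the abstract's per-layer feature-detection accuracy, which the abstract does not define concretely; all names are illustrative, not taken from the application.

```python
import torch.nn as nn

def build_composite(models, layer_scores):
    """models: list of nn.Sequential objects with identical layer layouts.
    layer_scores[i][d]: assumed feature-detection accuracy of layer d in model i."""
    depth = len(models[0])
    picked = []
    for d in range(depth):
        # At each depth, take the layer from whichever model scored best there.
        best = max(range(len(models)), key=lambda i: layer_scores[i][d])
        picked.append(models[best][d])
    return nn.Sequential(*picked)

# Two models trained on the same data, with externally supplied per-layer scores.
m1 = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
m2 = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
scores = [[0.7, 0.5, 0.9], [0.8, 0.6, 0.4]]
composite = build_composite([m1, m2], scores)   # picks m2[0], m2[1], m1[2]
```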
-
Publication number: 20230229905
Abstract: A method for training a machine-learning model. A plurality of nodes are assigned for training the machine-learning model. Nodes include agents comprising at least an agent processing unit and local memory. Each agent manages, via a local network, one or more workers that include a worker processing unit. Shards of a training data set are distributed for parallel processing by workers at different nodes. Each worker processing unit is configured to iteratively train on minibatches of a shard, and to report checkpoint states indicating updated parameters for storage in local memory. Based at least on recognizing a worker processing unit failing, the failed worker processing unit is reassigned and initialized based at least on a checkpoint state stored in local memory.
Type: Application
Filed: January 18, 2022
Publication date: July 20, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventor: Yuan YU
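The checkpoint-and-recover behavior can be pictured with a small single-process sketch. The `Agent`, `Checkpoint`, and `train_worker` names below are illustrative stand-ins, not the application's components, and the "training" is a toy parameter update.

```python
from dataclasses import dataclass, field

@dataclass
class Checkpoint:
    step: int
    params: dict

@dataclass
class Agent:
    store: dict = field(default_factory=dict)   # worker_id -> latest Checkpoint in local memory

    def report(self, worker_id, ckpt):
        self.store[worker_id] = ckpt

def train_worker(worker_id, shard, agent, start=None, ckpt_every=10):
    # Resume from a checkpoint if one is supplied, otherwise start fresh.
    params = dict(start.params) if start else {"w": 0.0}
    first_step = start.step if start else 0
    for step in range(first_step, len(shard)):
        params["w"] += 0.01 * shard[step]        # stand-in for one minibatch update
        if step % ckpt_every == 0:
            agent.report(worker_id, Checkpoint(step, dict(params)))
    return params

agent = Agent()
shard = list(range(100))
train_worker("w0", shard, agent)                 # checkpoints accumulate in agent.store
# If worker "w0" is later detected as failed, its replacement is initialized
# from the last checkpoint held in the agent's local memory:
resumed = train_worker("w0", shard, agent, start=agent.store["w0"])
```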
-
Publication number: 20230229906
Abstract: A computer-implemented method comprising: accessing a machine learning (ML) model that is operable to sample a causal graph from a graph distribution describing different possible graphs, wherein nodes represent the different variables of said set and edges represent causation, and the graph distribution comprises a matrix of probabilities of existence and causal direction of potential edges between pairs of nodes, and wherein the ML model is trained to be able to generate a respective simulated value of a selected variable from among said set based on the sampled causal graph. The method further comprises using the ML model to estimate a treatment effect from one or more intervened-on variables on another, target variable from among the variables of said set.
Type: Application
Filed: January 20, 2022
Publication date: July 20, 2023
Inventors: Cheng ZHANG, Javier ANTORAN, Adam Evan FOSTER, Maria DEFANTE, Steve THOMAS, Tomas GEFFNER, Miltiadis ALLAMANIS, Karen FASSIO, Daniel TRUAX
-
Publication number: 20230229907
Abstract: A method for optimizing water injection in a reservoir may include obtaining a first dataset from a first pipeline system in a first reservoir, training a first model by the first dataset, and determining reliability of the first dataset by the first model. The method may include, upon determining that the first dataset is reliable, generating a first categorized dataset by the first dataset and a second model, and training a third model by the first categorized dataset. The method may include optimizing water injection control parameters of a second reservoir in accordance with a final water injection scheme by the third model.
Type: Application
Filed: January 14, 2022
Publication date: July 20, 2023
Applicant: SAUDI ARABIAN OIL COMPANY
Inventors: Klemens Katterbauer, Abdulaziz Al-Qasim, Alberto F. Marsala, Abdallah Al Shehri, Ali Yousif
-
Publication number: 20230229908
Abstract: A method for determining a non-linear beamforming (NLBF) tuple is disclosed. The method includes receiving a seismic data set and discretizing the seismic data set into a plurality of NLBF sub-problems. The method includes solving a subset of the NLBF sub-problems with a non-linear optimizer to create final NLBF tuples. The method further includes periodically training a machine-learned model with a subset of the NLBF sub-problems and final NLBF tuples data and obtaining intermediate NLBF tuple predictions from the trained machine-learned model. The intermediate NLBF tuple predictions may be used as initial values in a non-linear optimizer to create final NLBF tuples or may be accepted as final NLBF tuples. The method includes storing the final NLBF tuples.
Type: Application
Filed: January 18, 2022
Publication date: July 20, 2023
Applicant: Aramco Overseas Company B.V.
Inventor: Yimin Sun
-
Publication number: 20230229910
Abstract: A compute block includes a DMA engine that reads data from an external memory and writes the data into a local memory of the compute block. A MAC array in the compute block may use the data to perform convolutions. The external memory may store weights of one or more filters in a memory layout that comprises a sequence of sections for each filter. Each section may correspond to a channel of the filter and may store all the weights in the channel. The DMA engine may convert the memory layout to a different memory layout, which includes a sequence of new sections for each filter. Each new section may include a weight vector that includes a sequence of weights, each of which is from a different channel. The DMA engine may also compress the weights, e.g., by removing zero valued weights, before the conversion of the memory layout.
Type: Application
Filed: October 3, 2022
Publication date: July 20, 2023
Applicant: Intel Corporation
Inventors: Kevin Brady, Sudheendra Kadri, Niall Hanrahan
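The layout conversion can be illustrated with numpy: the source layout stores, for each filter, one contiguous section per channel, while the converted layout stores weight vectors whose elements come from different channels. The shapes and the simple channel interleave below are assumptions for illustration, and the zero-weight compression step is omitted.

```python
import numpy as np

filters, channels, weights_per_channel = 2, 3, 4
w = np.arange(filters * channels * weights_per_channel).reshape(
    filters, channels, weights_per_channel)        # [filter][channel][weight]

# Source memory layout: for each filter, channel sections stored back to back.
src = w.reshape(filters, channels * weights_per_channel)

# Converted layout: for each filter, weight vectors that interleave channels,
# i.e. vector k = (w[f, 0, k], w[f, 1, k], ..., w[f, C-1, k]).
dst = w.transpose(0, 2, 1).reshape(filters, weights_per_channel * channels)

print(src[0])   # [ 0  1  2  3  4  5  6  7  8  9 10 11]
print(dst[0])   # [ 0  4  8  1  5  9  2  6 10  3  7 11]
```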
-
Publication number: 20230229911
Abstract: This disclosure relates generally to time series forecasting, and, more particularly, to a system and method for online time series forecasting using a spiking reservoir. Existing systems do not cater to efficient online time-series analysis and forecasting due to their memory and computation power requirements. The system and method of the present disclosure convert a time series value F(t) at time 't' into an encoded multivariate spike train, extract temporal features from the encoded multivariate spike train with the excitatory neurons of a reservoir, predict a time series value Y(t + k) at time 't' by performing a linear combination of the extracted temporal features with read-out weights, compute an error between the predicted time series value Y(t + k) and the input time series value F(t + k), and employ FORCE learning on the read-out weights using the error to reduce error in future forecasting. A feedback value is fed back to the reservoir to optimize the memory of the reservoir.
Type: Application
Filed: November 29, 2022
Publication date: July 20, 2023
Applicant: Tata Consultancy Services Limited
Inventors: Arun GEORGE, Dighanchal BANERJEE, Sounak DEY, Arijit MUKHERJEE
-
Publication number: 20230229912
Abstract: A model compression method is provided, which can be applied to the field of artificial intelligence. The method includes: obtaining a first neural network model, a second neural network model, and a third neural network model; processing first to-be-processed data using the first neural network model, to obtain a first output; processing the first to-be-processed data using the third neural network model, to obtain a second output; determining a first target loss based on the first output and the second output, and updating the second neural network model based on the first target loss, to obtain an updated second neural network model; and compressing the updated second neural network model to obtain a target neural network model. The model generated based on the method has higher processing precision.
Type: Application
Filed: March 20, 2023
Publication date: July 20, 2023
Inventors: Wei ZHANG, Lu HOU, Yichun YIN, Lifeng SHANG
-
Publication number: 20230229913
Abstract: A method and apparatus for training an information adjustment model of a charging station, an electronic device, and a storage medium are provided. An implementation comprises: acquiring a battery charging request, and determining environment state information corresponding to each charging station in a charging station set; determining, through an initial policy network, target operational information of each charging station in the charging station set for the battery charging request, according to the environment state information; determining, through an initial value network, a cumulative reward expectation corresponding to the battery charging request according to the environment state information and the target operational information; training the initial policy network and the initial value network by using a deep deterministic policy gradient algorithm; and determining the trained policy network as an information adjustment model corresponding to each charging station.
Type: Application
Filed: March 23, 2023
Publication date: July 20, 2023
Inventors: Weijia ZHANG, Le ZHANG, Hao LIU, Jindong HAN, Chuan QIN, Hengshu ZHU, Hui XIONG
-
Publication number: 20230229914
Abstract: A neural network system for predicting a polling time and a neural network model processing method using the neural network system are provided. The neural network system includes a first resource to generate a first calculation result obtained by performing at least one calculation operation corresponding to a first calculation processing graph and a task manager to calculate a first polling time taken for the first resource to perform the at least one calculation operation and to poll the first calculation result from the first resource based on the calculated first polling time.
Type: Application
Filed: March 29, 2023
Publication date: July 20, 2023
Inventor: Seung-soo YANG
-
Publication number: 20230229915
Abstract: Disclosed herein is a method and apparatus for predicting a future state and reliability based on time series data. In the method and the apparatus, a future state is predicted by preprocessing past state data and executing an algorithm based on the preprocessed past state data to generate a trained model, followed by preprocessing current state data and executing an algorithm based on the created trained model, the preprocessed current state data, and the preprocessed past state data.
Type: Application
Filed: January 17, 2023
Publication date: July 20, 2023
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Hwin Dol PARK, Jae Hun CHOI, Young Woong HAN
-
Publication number: 20230229916
Abstract: A method for contracting a tensor network is provided. The method comprises generating a graph representation of the tensor network, processing the graph representation to determine a contraction for the tensor network by an agent that implements a reinforcement learning algorithm, and processing the tensor network in accordance with the contraction to generate a contracted tensor network.
Type: Application
Filed: January 20, 2023
Publication date: July 20, 2023
Inventors: Gal Chechik, Eli Alexander Meirom, Haggai Maron, Brucek Kurdo Khailany, Paul Martin Springer, Shie Mannor
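As a concrete, classical stand-in for what the agent is choosing, numpy's greedy path finder below selects a pairwise contraction order for a small chain of tensors and reports its cost; the application instead learns this choice with a reinforcement-learning agent acting on a graph representation of the network.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 256))
B = rng.normal(size=(256, 256))
C = rng.normal(size=(256, 8))

# einsum_path reports the chosen pairwise contraction order and its FLOP cost.
path, info = np.einsum_path('ij,jk,kl->il', A, B, C, optimize='greedy')
print(path)          # e.g. ['einsum_path', (0, 1), (0, 1)]
print(info)          # cost summary for the selected contraction order
result = np.einsum('ij,jk,kl->il', A, B, C, optimize=path)
```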
-
Publication number: 20230229917
Abstract: A compute block can perform hybrid multiply-accumulate (MAC) operations. The compute block may include a weight compression module and a processing element (PE) array. The weight compression module may select a first group of one or more weights and a second group of one or more weights from a weight tensor of a DNN (deep neural network) layer. A weight in the first group is quantized to a power of two value. A weight in the second group is quantized to an integer. The integer and the exponent of the power of two value may be stored in a memory in lieu of the original values of the weights. A PE in the PE array includes a shifter configured to shift an activation of the layer by the exponent of the power of two value and a multiplier configured to multiply the integer by another activation of the layer.
Type: Application
Filed: March 15, 2023
Publication date: July 20, 2023
Applicant: Intel Corporation
Inventors: Michael Wu, Arnab Raha, Deepak Abraham Mathaikutty, Nihat Tunali, Martin Langhammer
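The split between shift-based and multiply-based partial sums can be shown with a toy accumulator. The grouping of weights into power-of-two exponents and integers follows the abstract; everything else is simplified (e.g., only non-negative exponents are handled here).

```python
def hybrid_mac(acts_pow2, exponents, acts_int, int_weights):
    acc = 0
    for a, e in zip(acts_pow2, exponents):
        acc += a << e            # a * 2**e realized as a bit shift
    for a, w in zip(acts_int, int_weights):
        acc += a * w             # ordinary integer multiply
    return acc

# 3*(2**2) + 5*(2**0) + 2*7 + 4*(-3) = 12 + 5 + 14 - 12 = 19
print(hybrid_mac([3, 5], [2, 0], [2, 4], [7, -3]))
```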
-
Publication number: 20230229918
Abstract: According to the present invention, a method for continuous learning of an object anomaly detection and state classification model includes acquiring, by a detection and classification apparatus, information about a medium of anomaly detection from an inspection target; generating, by the detection and classification apparatus, an input value, which is a feature vector matrix including a plurality of feature vectors, from the medium information; deriving, by the detection and classification apparatus, a restored value imitating the input value through a detection network trained to generate the restored value for the input value; determining, by the detection and classification apparatus, whether a restoration error indicating a difference between the input value and the restored value is greater than or equal to a previously calculated reference value; and storing, by the detection and classification apparatus, the input value as normal data upon determining that the restoration error is less than the reference value.
Type: Application
Filed: March 17, 2023
Publication date: July 20, 2023
Inventor: Yeong hyeon Park
-
Publication number: 20230229919
Abstract: In various examples, a generative model is used to synthesize datasets for use in training a downstream machine learning model to perform an associated task. The synthesized datasets may be generated by sampling a scene graph from a scene grammar—such as a probabilistic grammar—and applying the scene graph to the generative model to compute updated scene graphs more representative of object attribute distributions of real-world datasets. The downstream machine learning model may be validated against a real-world validation dataset, and the performance of the model on the real-world validation dataset may be used as an additional factor in further training or fine-tuning the generative model for generating the synthesized datasets specific to the task of the downstream machine learning model.
Type: Application
Filed: March 20, 2023
Publication date: July 20, 2023
Inventors: Amlan Kar, Aayush Prakash, Ming-Yu Liu, David Jesus Acuna Marrero, Antonio Torralba Barriuso, Sanja Fidler
-
Publication number: 20230229920
Abstract: A method and device for training a neural network are disclosed. The method comprises: selecting, by a training device, a teacher network performing the same functions as a student network; and iteratively training the student network and obtaining a target network, through aligning distributions of features between a first middle layer and a second middle layer corresponding to the same training sample data, so as to transfer knowledge of features of a middle layer of the teacher network to the student network.
Type: Application
Filed: March 27, 2023
Publication date: July 20, 2023
Inventors: Naiyan WANG, Zehao HUANG
-
Publication number: 20230229921
Abstract: Neural network systems and methods are provided. One method for processing a neural network includes, for at least one neural network layer that includes a plurality of weights, applying an offset function to each of a plurality of weight values in the plurality of weights to generate an offset weight value, and quantizing the offset weight values to form quantized offset weight values. The plurality of weights are pruned. One method for executing a neural network includes reading, from a memory, at least one neural network layer that includes quantized offset weight values and an offset value, and performing a neural network layer operation on an input feature map, based on the quantized offset weight values and the offset value, to generate an output feature map. The quantized offset weight values are signed integer numbers.
Type: Application
Filed: January 14, 2022
Publication date: July 20, 2023
Applicant: Arm Limited
Inventors: Igor Fedorov, Paul Nicholas Whatmough
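One plausible reading of the scheme, sketched in numpy under stated assumptions: each weight is shifted by an offset before signed-integer quantization, and the layer operation compensates for the offset at execution time. The actual offset function, quantizer, and compensation in the application may differ, and pruning is omitted.

```python
import numpy as np

def quantize_offset(w, delta, scale):
    # Offset the weights by delta, then quantize to signed 8-bit integers.
    return np.clip(np.round((w + delta) / scale), -128, 127).astype(np.int8)

def layer_forward(x, w_q, delta, scale):
    # x @ w  ==  x @ (scale*w_q - delta)  ==  scale*(x @ w_q) - delta*sum(x)
    return scale * (x @ w_q.astype(np.float32)) - delta * x.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 4)).astype(np.float32)
x = rng.normal(size=(2, 8)).astype(np.float32)
delta, scale = 0.5, 0.05
w_q = quantize_offset(w, delta, scale)
print(np.max(np.abs(layer_forward(x, w_q, delta, scale) - x @ w)))  # small quantization error
```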
-
Publication number: 20230229922
Abstract: A training method, an operating method and a memory system are provided. The operating method comprises using a first memory block of the memory system for computation; obtaining an aging condition of the memory system; determining whether the aging condition meets a predetermined aging condition; and when it is determined that the aging condition meets the predetermined aging condition, enabling a second memory block and using the first memory block and the second memory block for computation.
Type: Application
Filed: January 17, 2022
Publication date: July 20, 2023
Applicant: Taiwan Semiconductor Manufacturing Company, Ltd.
Inventors: Xiaoyu Sun, Kerem Akarvardar, Rawan Naous
-
Publication number: 20230229924
Abstract: Techniques for updating a visualization of a neural network and for refining a neural network are disclosed. Network data is obtained, where this data describes the neural network. At least some of the network data is normalized. A visual representation of the neural network is generated. The visual representation includes a set of nodes. The visual representation further includes edges connecting various nodes. The visual representation is updated using the normalized network data. As a result of updating the visual representation using the normalized network data, a display of the nodes and/or of the edges is modified in a manner to reflect a relative relationship that exists between the nodes and/or the edges. The relative relationship is based on the normalized network data. The updated visual representation is then displayed.
Type: Application
Filed: January 12, 2023
Publication date: July 20, 2023
Inventor: Kyle Jordan RUSSELL
-
Publication number: 20230229925
Abstract: Systems/techniques that facilitate thermally adaptive scan sequencing for computed tomography scanners are provided. In various embodiments, a system can access a set of scanning protocols performable by a computed tomography scanner. In various aspects, the system can further access a current thermal state of the computed tomography scanner. In various instances, the system can identify, in the set of scanning protocols and based on the current thermal state, a scanning protocol that is predicted to reduce or control thermal stresses experienced by the computed tomography scanner. In various cases, the system can cause the computed tomography scanner to perform the identified scanning protocol.
Type: Application
Filed: January 19, 2022
Publication date: July 20, 2023
Inventors: Arka Datta, John M. Boudry
-
Publication number: 20230229926
Abstract: A processor-implemented method of generating feature data includes: receiving an input image; generating, based on a pixel value of the input image, at least one low-bit image having a number of bits per pixel lower than a number of bits per pixel of the input image; and generating, using at least one neural network, feature data corresponding to the input image from the at least one low-bit image.
Type: Application
Filed: March 29, 2023
Publication date: July 20, 2023
Applicant: Samsung Electronics Co., Ltd.
Inventors: Chang Kyu CHOI, Youngjun KWAK, Seohyung LEE
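A minimal numpy illustration of the low-bit-image step: keeping only the top bits of each 8-bit pixel yields an image with a lower number of bits per pixel, which a neural network would then consume to produce feature data. The choice of 2 bits and the plain right-shift mapping are assumptions for illustration.

```python
import numpy as np

img8 = np.random.default_rng(0).integers(0, 256, size=(32, 32), dtype=np.uint8)
img2 = img8 >> 6                       # 8-bit pixels -> 2-bit pixel values (0..3)
print(img8.dtype, img8.max(), "->", img2.max())   # e.g. uint8 255 -> 3
```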
-
Publication number: 20230229927
Abstract: The present disclosure discloses a method and system for training a neural network for determining severity, and more particularly, a method and system which may effectively train a neural network that performs patch-unit severity diagnosis using a pathological slide image to which a severity indication (label) is given.
Type: Application
Filed: June 3, 2021
Publication date: July 20, 2023
Inventors: Sun Woo KIM, Tae Yeong KWAK, Hye Yoon CHANG, Ye Chan MUN
-
Publication number: 20230229928
Abstract: Systems, methods, and computer-readable storage media for forecasting the impact of climate change, and more specifically the impact on water quality and/or quantity. The system receives, from a plurality of sensors within a predefined geographic area, environmental data. The system normalizes the environmental data and executes an artificial intelligence algorithm, where inputs to the artificial intelligence algorithm include the normalized environmental data, and outputs of the artificial intelligence algorithm include environmental risks, consequences, and probabilities associated with at least one environmental event. The system then modifies a planned project using the environmental risks, consequences, and probabilities associated with the at least one environmental event.
Type: Application
Filed: January 18, 2023
Publication date: July 20, 2023
Inventors: William C. LOUISELL, III, David BANKSTON
-
Publication number: 20230229929
Abstract: A computing system for performing distributed large scale reinforcement learning with improved efficiency can include a plurality of actor devices, wherein each actor device locally stores a local version of a machine-learned model, wherein each actor device is configured to implement the local version of the machine-learned model at the actor device to determine an action to take in an environment to generate an experience; a server computing system configured to perform one or more learning algorithms to learn an updated version of the machine-learned model based on the experiences generated by the plurality of actor devices; and a hierarchical and distributed data caching system including a plurality of layers of data caches that propagate data descriptive of the updated version of the machine-learned model from the server computing system to the plurality of actor devices to enable each actor device to update its respective local version of the model.
Type: Application
Filed: January 28, 2021
Publication date: July 20, 2023
Inventors: Amir Yazdanbakhsh, Yu Zheng, Junchao Chen
-
Publication number: 20230229930
Abstract: Systems and methods for locality preserving federated learning are disclosed. In one embodiment, a method for locality preserving federated learning may include: (1) receiving, at an aggregator computer program and from each of a plurality of clients, weights for each client's local machine learning model; (2) generating, by the aggregator computer program, an averaged machine learning model based on the received weights; (3) sharing, by the aggregator computer program, the averaged machine learning model with the plurality of clients; and (4) applying, by each client, a scaling factor to the averaged machine learning model to update its local machine learning model.
Type: Application
Filed: January 17, 2023
Publication date: July 20, 2023
Inventors: Antonios GEORGIADIS, Fanny SILAVONG, Sean MORAN, Rob OTTER
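A minimal numpy sketch of one aggregation round, assuming the scaling factor is a per-client scalar used to blend the averaged model into the local one; the abstract does not spell out the exact form of the scaling, so this is an illustrative interpretation rather than the patented method.

```python
import numpy as np

def aggregate(client_weights):
    # Aggregator step: average the received client weights.
    return np.mean(client_weights, axis=0)

def apply_locally(averaged, scale, local):
    # Client step: blend the shared average into the local model with a
    # client-specific scaling factor (one plausible locality-preserving update).
    return scale * averaged + (1.0 - scale) * local

clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 0.0])]
avg = aggregate(clients)                             # [3., 2.]
updated = [apply_locally(avg, s, w) for s, w in zip([0.2, 0.5, 0.9], clients)]
```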
-
Publication number: 20230229931
Abstract: A processor-implemented method of a neural network includes obtaining intermediate pooling results, respectively corresponding to sub-pooling kernels obtained by decomposing an original pooling kernel, by performing a pooling operation on input pixels included in a current window in an input feature map with the sub-pooling kernels, obtaining a final pooling result corresponding to the current window by post-processing the intermediate pooling results, and determining an output pixel value of an output feature map, based on the final pooling result, wherein the current window is determined according to the original pooling kernel having been slid, according to a raster scan order, in the input feature map.
Type: Application
Filed: March 18, 2023
Publication date: July 20, 2023
Applicants: Samsung Electronics Co., Ltd., Seoul National University R&DB Foundation
Inventors: Hyunsun PARK, Soonhoi HA, Donghyun KANG, Jintaek KANG
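The decomposition idea can be seen with a 3x3 max pool: split the original kernel into three 1x3 sub-pooling kernels, compute an intermediate result per row of the current window, then post-process (here, take the max of) the intermediates. The numpy sketch below assumes max pooling with stride 1; the actual kernel type and decomposition in the application may differ.

```python
import numpy as np

def max_pool_3x3_decomposed(x, row0, col0):
    # Intermediate pooling results: one per 1x3 sub-kernel (one per window row).
    intermediates = [x[row0 + r, col0:col0 + 3].max() for r in range(3)]
    # Post-processing step: combine intermediates into the final pooling result.
    return max(intermediates)

x = np.arange(25, dtype=np.float32).reshape(5, 5)
# Slide the window in raster-scan order over all valid positions.
out = np.array([[max_pool_3x3_decomposed(x, r, c) for c in range(3)] for r in range(3)])
print(out)   # equals a plain 3x3 max pool with stride 1
```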
-
Publication number: 20230229932
Abstract: A genetic algorithm system generates a set of computer programs and executes a process for assessment and conditional modification of the set, repeating the process over a plurality of generations to mutate the population of solutions over time. At each generation, the system scores each program in the set to generate a respective primary score adjustment, a respective secondary score adjustment, and a respective current score. If a current score for a program is less than or equal to a first threshold, the system removes the computer program from the set. If the current score is greater than or equal to a second threshold, the system modifies the computer program to generate one or more offspring programs for use in subsequent generations. If a primary score adjustment for a program is greater than or equal to a third threshold, the system selects the computer program for performance of a task.
Type: Application
Filed: February 16, 2022
Publication date: July 20, 2023
Applicant: NOBLIS, INC.
Inventor: Ilya BASIN
-
Publication number: 20230229933
Abstract: Quantum branch-and-bound algorithms with heuristics are disclosed. A method may include: receiving a branch and bound problem; setting an upper bound, a best bound, an incumbent, and a counter i; executing a subtree estimation procedure that returns branch_m, which represents a tree of size m; determining branch_i and cost_i for branch_m; setting cost_feas to the value COST(N) for feasible nodes N, and to +∞ for infeasible nodes; instructing a quantum computer to execute a QuantumMinimumLeaf procedure to get a node N and setting incumbent′ to COST(N); instructing the quantum computer to execute the QuantumMinimumLeaf procedure to get a node N′ and setting best bound′ equal to COST(N′); and returning the node N when the absolute value of the difference between the minimum of incumbent and incumbent′ and the minimum of best bound and best bound′ is less than the approximation margin.
Type: Application
Filed: January 14, 2022
Publication date: July 20, 2023
Inventors: Shouvanik CHAKRABARTI, Pierre MINSSEN, Romina YALOVETZKY, Marco PISTOIA
-
Publication number: 20230229934
Abstract: A computer implemented method of hypothesis scoring based on causal relationships is provided. The computer implemented method includes creating a causal relationship model utilizing a plurality of hypotheses and a causal relationship between each of two or more pairs of hypotheses, and obtaining pro and con sentiment scores for each hypothesis utilizing a scoring function. The computer implemented method further includes assigning the obtained pro and con sentiment scores to each hypothesis in the causal relationship model, and propagating the pro and con sentiment scores from leaf hypotheses to a root hypothesis utilizing axioms to test the propagated scores for reasonableness. The computer implemented method further includes determining a final pro and con score for the root hypothesis, and presenting the final pro and con scores representing a prediction of the hypotheses to a user.
Type: Application
Filed: January 19, 2022
Publication date: July 20, 2023
Inventors: Futoshi Iwama, Sachiko Yoshihama, Issei Yoshida, Naoto Sato
-
Publication number: 20230229935
Abstract: The present disclosure relates to a method, a device, and a program product for training a model. The method includes: receiving at least one unlabeled sample and at least one labeled sample for training a pre-training model, the pre-training model being used to extract features of the samples; creating an undirected graph associated with the pre-training model using the at least one unlabeled sample and a set of training samples associated with the pre-training model; dividing the undirected graph to form a plurality of sub-graphs based on corresponding features of the unlabeled sample and the set of training samples, the plurality of sub-graphs corresponding to a plurality of classifications of the samples, respectively; and training, based on the plurality of sub-graphs and the at least one labeled sample, the pre-training model to generate a training model. A corresponding device and a corresponding computer program product are provided.
Type: Application
Filed: March 7, 2022
Publication date: July 20, 2023
Inventors: Wenbin Yang, Zijia Wang, Jiacheng Ni, Qiang Chen, Zhen Jia
-
Publication number: 20230229936
Abstract: This disclosure relates to extraction of tasks from documents based on a weakly supervised classification technique, wherein extraction of tasks means identification of mentions of tasks in a document. Several prior works address the problem of event extraction; however, due to crucial distinctions between events and tasks, task extraction stands as a separate problem. The disclosure explicitly defines specific characteristics of tasks and creates labelled data at the word level based on a plurality of linguistic rules to train a word-level weakly supervised model for task extraction. The labelled data is created based on the plurality of linguistic rules for a non-negation aspect, a volitionality aspect, an expertise aspect and a plurality of generic aspects. Further, the disclosure also includes a phrase expansion technique to capture the complete meaning expressed by the task, rather than a bare task mention that may not capture the entire meaning of the sentence.
Type: Application
Filed: July 15, 2022
Publication date: July 20, 2023
Applicant: Tata Consultancy Services Limited
Inventors: SACHIN SHARAD PAWAR, GIRISH KESHAV PALSHIKAR, ANINDITA SINHA BANERJEE
-
Publication number: 20230229937
Abstract: To efficiently collect training data for training an AI model, an input of a training profile is received that includes item values corresponding to a plurality of data items, including analysis target data to be analyzed by the AI model and information on the model type. A first query is acquired to extract training data from a training database. The number of pieces of first training data to be extracted from the training database is calculated. The required number of pieces of the training data to train the AI model is calculated using the information on the model type. Whether the number of pieces of the first training data is equal to or greater than the required number is determined. When the determined number of pieces of the first training data is less than the required number, a supplementary query for extracting the training data is generated.
Type: Application
Filed: December 21, 2022
Publication date: July 20, 2023
Applicant: Hitachi, Ltd.
Inventors: Mika Takata, Toshihiko Kashiyama
-
Publication number: 20230229938
Abstract: Systems and methods are described for integrating one or more machine learning models with a client application using Remote Procedure Calls (RPCs). A server deploys a software container associated with a client application, the container comprising executable code corresponding to a machine learning model, a plurality of inputs to the machine learning model, and a plurality of outputs of the machine learning model. The server generates a protocol buffer profile using the inputs and the outputs, the protocol buffer profile defining RPC functions for integrating the client application and the machine learning model. The server receives, from the client application, a request to access the machine learning model using a first RPC function. The server executes the machine learning model to generate a classification value for input provided in the request. The server transmits the classification value to the client application using a second RPC function.
Type: Application
Filed: January 18, 2023
Publication date: July 20, 2023
Inventors: John Mariano, David Johnston, Vall Herard, Jason Matthew Megaro
-
Publication number: 20230229939
Abstract: A computer-implemented method for ascertaining a fusion of a plurality of predictions, the predictions of the plurality of predictions in each case characterizing a classification and/or a regression result relating to a sensor signal. The fusion is ascertained based on a product of probabilities of the respective classifications and/or regression results and based on an a priori probability of the fusion, the a priori probability for ascertaining the fusion entering into a power, the exponent of the power being the number of elements in the plurality of predictions minus 1.
Type: Application
Filed: January 11, 2023
Publication date: July 20, 2023
Inventor: Christoph-Nikolas Straehle
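One plausible reading of the fusion rule for the classification case, sketched in Python: multiply the per-prediction class probabilities and divide by the class prior raised to the power (number of predictions minus 1), which is the standard combination of conditionally independent classifiers. The abstract does not state whether the prior enters the numerator or the denominator; the conventional form is used here.

```python
import numpy as np

def fuse(pred_probs, prior):
    pred_probs = np.asarray(pred_probs)              # shape: (n_predictions, n_classes)
    n = pred_probs.shape[0]
    # Product of per-prediction probabilities, with the prior entering as a
    # power whose exponent is the number of predictions minus 1.
    unnorm = pred_probs.prod(axis=0) / prior ** (n - 1)
    return unnorm / unnorm.sum()                     # renormalize over classes

preds = [[0.7, 0.3], [0.6, 0.4], [0.8, 0.2]]         # three sensors' class posteriors
prior = np.array([0.5, 0.5])
print(fuse(preds, prior))                            # fused class probabilities
```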
-
Publication number: 20230229940
Abstract: One example method includes, for each pillar in a group of AI ethics pillars, storing, in a datastore, context data concerning the AI ethics pillar, wherein the context data is determined using context rules. The method further includes storing, in the datastore, the context rules as minimum context requirements, and receiving, by the datastore, a request from a user to register an asset in the datastore. When user-supplied context information for the asset meets ethical requirements specified by the context rules, the method includes registering the asset in the datastore, and ensuring that an assessment mechanism is able to access, and assess, the context data for each AI ethics pillar.
Type: Application
Filed: January 14, 2022
Publication date: July 20, 2023
Inventors: Nicole Reineke, Stephen J. Todd
-
Publication number: 20230229941
Abstract: Rule induction is used to produce human readable descriptions of patterns within a dataset. A rule induction algorithm or classifier is a type of supervised machine learning classification algorithm. A rule induction classifier is trained, which involves using labelled examples in the dataset to produce a set of rules. Rather than using the rules/classifier to make predictions on new unlabeled samples, the training of the rule induction model outputs human-readable descriptions of the patterns (rules) within the dataset that gave rise to the rules. Parameters of the rule induction algorithm are tuned to favor simple and understandable rules, instead of only tuning for predictive accuracy. The learned set of rules is output during the training process in a human-friendly format.
Type: Application
Filed: March 24, 2023
Publication date: July 20, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Edmund Chi Man Tse, Brett Owens Simons, Sandeep Repaka, Yatpang Cheung
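A small illustration of the train-to-read-the-rules idea. As a stand-in for a dedicated rule-induction algorithm, the sketch trains a deliberately shallow scikit-learn decision tree with simplicity-oriented hyperparameters and prints its learned splits as human-readable rules; this is the general pattern, not the patented method.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
# Simplicity-oriented settings: small depth and large leaves keep the rules short,
# trading some predictive accuracy for readability.
clf = DecisionTreeClassifier(max_depth=2, min_samples_leaf=20, random_state=0)
clf.fit(iris.data, iris.target)
# The artifact of interest is the readable rule text, not the fitted predictor.
print(export_text(clf, feature_names=list(iris.feature_names)))
```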
-
Publication number: 20230229942
Abstract: Ethics-based decision making is disclosed. Outputs of artificial intelligence processes are tagged or labeled with ethics scores. When decisions are made using the outputs, the ethics tags inform the decision engine regarding the ethics of the outputs. The decisions can, based on the ethics scores, proceed or be delayed until additional input is received. In one example, the selection of an asset may depend on the output ethics score.
Type: Application
Filed: January 20, 2022
Publication date: July 20, 2023
Inventors: Stephen J. Todd, Nicole Reineke
-
Publication number: 20230229943
Abstract: A post-processing method, system, and computer program product for post-hoc improvement of instance-level and group-level prediction metrics, including training a bias detector on payload data that learns to detect a sample in a customer model that has an individual bias greater than a predetermined individual bias threshold value with constraints on a group bias, suggesting, at run-time, a de-biased prediction based on the selected biased sample by a de-biasing procedure, and an arbiter deciding, based on user feedback, whether to use the de-biased prediction or an original prediction from the customer model made prior to the de-biasing procedure, which is then used as an output.
Type: Application
Filed: March 23, 2023
Publication date: July 20, 2023
Inventors: Manish Bhide, Pranay Lohia, Karthikeyan Natesan Ramamurthy, Ruchir Puri, Diptikalyan Saha, Kush Raj Varshney
-
Publication number: 20230229944
Abstract: User interactions with a supply chain system are monitored based on a tracked ontology enrichment process; an explainable reasoning graph is constructed based on the monitored user interactions and domain specific reasoning information; and an explainable insight of the monitored user interactions is learned, as is a user interaction embedding for an embedding space, based on the constructed explainable reasoning graph and the explainable insight. External data is incorporated into the embedding space, a joint embedding is learned based on the user interaction embedding, and missing entities and relationships are identified for incorporation into an ontology based on the user interactions and joint embedding. The ontology is revised to incorporate the missing entities and relationships into the ontology to create a revised ontology, and a supply chain is controlled based on the revised ontology.
Type: Application
Filed: December 30, 2021
Publication date: July 20, 2023
Inventors: Fred Ochieng Otieno, Smitkumar Narotambhai Marvaniya, Reginald Eugene Bryant, Komminist Weldemariam
-
Publication number: 20230229945
Abstract: An outlier detection mechanism is disclosed that improves transparency and explainability in machine learning models. The outlier detection mechanism can quantify, at prediction time, how a new observation differs from training observations. The outlier detection mechanism can also provide a way to aggregate outputs from decision trees by weighting the outputs of the decision trees based on their explainability.
Type: Application
Filed: January 20, 2022
Publication date: July 20, 2023
Inventors: Paulo Abelha Ferreira, Adriana Bechara Prado
-
Publication number: 20230229946
Abstract: Methods, non-transitory computer readable media, and causal explanation computing apparatuses that assist with generating and providing causal explanations of artificial intelligence models include obtaining a dataset as an input for an artificial intelligence model, wherein the obtained dataset is filtered to a disentangled low-dimensional representation. Next, a plurality of first factors from the disentangled low-dimensional representation of the obtained data that affect an output of the artificial intelligence model is identified. Further, a generative mapping from the disentangled low-dimensional representation between the identified plurality of first factors and the output of the artificial intelligence model is determined using causal reasoning. Explanation data is generated using the determined generative mapping, wherein the generated explanation data provides a description of an operation leading to the output of the artificial intelligence model using the identified plurality of first factors.
Type: Application
Filed: June 24, 2021
Publication date: July 20, 2023
Inventors: Matthew O'Shaughnessy, Gregory Canal, Marissa Connor, Mark Davenport, Christopher John Rozell
-
Publication number: 20230229947
Abstract: The present invention belongs to the field of monitoring alerts to be classified according to their severity. In particular, the invention describes a method and a system that automatically monitor a large number of alerts to prioritize those of a severe character. Such alerts are generated by measuring instruments or devices—such as sensors or detectors—or by third-party tools.
Type: Application
Filed: February 9, 2021
Publication date: July 20, 2023
Inventors: Pablo Soldevilla Martínez, Escolástico Sánchez Martínez, Miguel Ángel Sánchez Moreno
-
Publication number: 20230229948
Abstract: A quantum circuit that is a quantum random access memory that can write basis states of a weighted superposition in real time into a memory cell or a superposition of memory cells. The quantum circuit is a quantum random access memory that can write a prepared superposition. The quantum circuit is a quantum random access memory that can write classical data.
Type: Application
Filed: December 1, 2022
Publication date: July 20, 2023
Applicant: Abu Dhabi University
Inventors: Hichem El Euch, Mohammed Abdellatif Abdelaal Zidan, Abdulhaleem Mohamed Ahmed Abdelaty, Mahmoud Mohamed Ahmed Abdel-Aty, Ashraf Khalil
-
Publication number: 20230229949
Abstract: A method includes receiving a plurality of quantum systems, wherein each quantum system of the plurality of quantum systems includes a plurality of quantum sub-systems in an entangled state, and wherein respective quantum systems of the plurality of quantum systems are independent quantum systems that are not entangled with one another. The method further includes performing a plurality of joint measurements on different quantum sub-systems from respective ones of the plurality of quantum systems, wherein the joint measurements generate joint measurement outcome data, and determining, by a decoder, a plurality of syndrome graph values based on the joint measurement outcome data.
Type: Application
Filed: March 10, 2023
Publication date: July 20, 2023
Applicant: Psiquantum, Corp.
Inventors: Mercedes Gimeno-Segovia, Terence Rudolph, Naomi Nickerson
-
Publication number: 20230229950
Abstract: A method for searching data includes storing probe data and target data expressed in a first orthogonal domain. The target data includes potential probe match data each characterized by the length of the target data. The probe data representation and the target data are transformed into an orthogonal domain. In the orthogonal domain, the target data is encoded with modulation functions to produce a plurality of encoded target data, each of the modulation functions having a position index corresponding to one of the potential probe match data. The plurality of encoded target data is interfered with the probe data in the orthogonal domain and an inverse transform result is obtained. If the inverse transform result exceeds a threshold, information is output indicating a match between the probe data and a corresponding one of the potential probe match data.
Type: Application
Filed: January 20, 2023
Publication date: July 20, 2023
Inventor: Roger Selly
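A loose classical analogue of the workflow, sketched with numpy: transform probe and target into the Fourier (orthogonal) domain, combine them there, inverse transform, and threshold the result to report match positions. The patent's position-indexed modulation functions and encoding details are abstracted away, so this is only an illustration of the general transform-interfere-threshold pattern.

```python
import numpy as np

target = np.array([0, 2, 1, 3, 7, 1, 3, 7, 0, 5], dtype=float)
probe = np.array([1, 3, 7], dtype=float)

n = len(target)
probe_padded = np.zeros(n)
probe_padded[:len(probe)] = probe
# Circular cross-correlation via the frequency domain:
# IFFT(FFT(target) * conj(FFT(probe))) scores the probe at every position.
corr = np.real(np.fft.ifft(np.fft.fft(target) * np.conj(np.fft.fft(probe_padded))))
threshold = 0.99 * (probe @ probe)                  # a perfect overlap scores probe.probe
matches = np.flatnonzero(corr >= threshold)
print(matches)                                      # positions where the probe occurs: [2 5]
```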
-
Publication number: 20230229951
Abstract: An integrated circuit and a method for operating the integrated circuit to perform quantum analog computing. The integrated circuit comprises a plurality of qubits connected to each other, each qubit of the plurality of qubits comprising resistors, inductors, capacitors and a switch, which can be implemented using CMOS elements, wherein the qubits are connected to each other according to a connectivity topology, such as a Hopfield network, that provides an analog of quantum behavior at room temperature.
Type: Application
Filed: May 28, 2021
Publication date: July 20, 2023
Inventors: Jean-Michel SELLIER, Kristina KAPANOVA
-
Publication number: 20230229952
Abstract: One-dimensional and two-dimensional arrays of qubits are disclosed. The one-dimensional array includes two or more double-quantum dots embedded in silicon, the two or more double-quantum dots arranged in an Echelon formation, such that the distance between the two or more double-quantum dots is approximately 40 nm and the distance between the two quantum dots in each double-quantum dot is approximately 12 nm; two or more reservoirs to load electrons to the corresponding two or more double-quantum dots to form singlet-triplet qubits in each double-quantum dot; and two or more gates for controlling the formed singlet-triplet qubits. The two-dimensional array of qubits includes two or more layers of vertically-stacked one-dimensional arrays of qubits.
Type: Application
Filed: June 4, 2021
Publication date: July 20, 2023
Applicant: Silicon Quantum Computing Pty Limited
Inventors: Prasanna Pakkiam, Michelle Yvonne Simmons
-
Publication number: 20230229953
Abstract: Provided are a computer system and a control device, which are capable of reducing the necessity for reconfiguration according to the computation details in the circuit configuration of a quantum computer. The computer system includes an acquisition unit 122 that acquires computation details; a group of computation units including a plurality of computation units each configured to execute computation using quantum effects or thermal effects in a superconducting state; a selection unit 124 that selects a computation unit from the group of computation units based on the computation details; and an execution unit 212 that causes the computation unit selected by the selection unit to execute computation.
Type: Application
Filed: June 4, 2021
Publication date: July 20, 2023
Applicant: NATIONAL INSTITUTE OF ADVANCED INDUSTRIAL SCIENCE AND TECHNOLOGY
Inventor: Daisuke SAIDA
-
Publication number: 20230229954
Abstract: While a qubit control system (e.g., a laser system) is in a first configuration, it causes a qubit state (as represented as a point on the surface of a Bloch sphere) of a quantum state carrier (QSC), e.g., an atom, to rotate in a first direction from an initial qubit state to a first configuration qubit state. While the qubit control system is in a second configuration, it causes the QSC state to rotate in a second direction opposite the first direction from the first configuration qubit state to a second configuration qubit state. The second configuration qubit state is read out as a |0⟩ or |1⟩. Repeating these actions results in a distribution of |0⟩s and |1⟩s that can be used to determine which of the two configurations results in higher Rabi frequencies. Iterating the above for other pairs of configurations can identify a configuration that delivers the most power to the QSC and thus yields the highest Rabi frequency.
Type: Application
Filed: January 17, 2023
Publication date: July 20, 2023
Inventors: Daniel C. Cole, Woo Chang Chung
-
Publication number: 20230229955
Abstract: A method for measuring the spin of an electron in a quantum dot that is tunnel coupled to a reservoir is disclosed. The method includes measuring a spin state of the injected electron while applying a ramped detuning for a time period.
Type: Application
Filed: December 22, 2022
Publication date: July 20, 2023
Applicant: Silicon Quantum Computing Pty Limited
Inventors: Michelle Yvonne Simmons, Samuel Keith Gorman, Brandur Thorgrimsson, Ludwik Kranz, Daniel Keith, Yousun Chung