Abstract: With respect to an inference method performed by at least one processor, the method includes inputting, by the at least one processor, into a learned model, second non-processed image data and second parameter data of a simulator, and inferring, by the at least one processor using the learned model, second processed image data. The learned model has been trained so that first processed image data, obtained as an output in response to first non-processed image data and first parameter data of the simulator for the first non-processed image data being input, approaches first simulator processed image data, obtained as a result of processing the first non-processed image data with the simulator by using the first parameter data.
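The abstract above describes training a model so that, given a non-processed image and simulator parameters, its output approaches the image the simulator itself would produce. A minimal sketch of that training objective, assuming a PyTorch-style model and a callable run_simulator standing in for the simulator (all names here are illustrative placeholders, not the patent's actual components):

```python
# Sketch: the model receives a non-processed image plus simulator parameters and
# is trained so its output approaches the simulator-processed image.
import torch
import torch.nn as nn

class ProcessedImagePredictor(nn.Module):
    def __init__(self, param_dim: int):
        super().__init__()
        # Simulator parameters are broadcast over the image as extra channels.
        self.net = nn.Sequential(
            nn.Conv2d(1 + param_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, image, params):
        b, _, h, w = image.shape
        param_maps = params.view(b, -1, 1, 1).expand(b, params.shape[1], h, w)
        return self.net(torch.cat([image, param_maps], dim=1))

def train_step(model, optimizer, image, params, run_simulator):
    target = run_simulator(image, params)      # first simulator processed image data
    predicted = model(image, params)           # first processed image data
    loss = nn.functional.mse_loss(predicted, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```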
Abstract: There is provided an information processing device which efficiently executes machine learning. The information processing device according to one embodiment includes: an obtaining unit which obtains a source code including a code which defines Forward processing of each layer constituting a neural network; a storage unit which stores an association relationship between each Forward processing and Backward processing associated with each Forward processing; and an executing unit which successively executes each code included in the source code, and which calculates an output value of the Forward processing defined by the code based on an input value at a time of execution of each code, and generates a reference structure for Backward processing in a layer associated with the code based on the association relationship stored in the storage unit.
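The abstract above describes executing the Forward definitions in a source code line by line while recording, from a stored Forward-to-Backward association, a reference structure that can later be traversed for backpropagation. A toy sketch of that define-by-run idea (class and function names are illustrative, not the patent's):

```python
# Toy define-by-run sketch: each Forward call computes its output immediately and
# records a reference to the creating operation, so the Backward pass can later
# be performed by walking these references.
class Variable:
    def __init__(self, value, creator=None):
        self.value = value
        self.creator = creator     # reference structure for Backward processing
        self.grad = 0.0

    def backward(self):
        self.grad = 1.0
        node = self
        while node.creator is not None:          # walk the recorded references
            node = node.creator.backward(node)

class Mul:
    """A Forward processing and its associated Backward processing."""
    def forward(self, x: Variable, w: float) -> Variable:
        self.x, self.w = x, w
        return Variable(x.value * w, creator=self)

    def backward(self, out: Variable) -> Variable:
        self.x.grad += out.grad * self.w
        return self.x

# Executing the "source code" runs Forward immediately and builds the graph.
x = Variable(3.0)
y = Mul().forward(x, 2.0)      # Forward output computed at execution time
y.backward()                   # Backward follows the generated reference structure
print(x.grad)                  # 2.0
```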
Abstract: A system used for generating a three-dimensional representation from one or more two-dimensional images includes a plurality of imaging devices arranged to image a real object whose three-dimensional representation is to be generated; and a marker utilized in calculating a pose of the imaging device, the pose being utilized in generating the three-dimensional representation of the real object. At least one of the plurality of imaging devices is arranged to image the real object and the marker from below to obtain a two-dimensional image including the real object and the marker.
Abstract: An information processing system comprises at least one memory and at least one processor. The at least one processor is configured to receive a request of a user; determine whether or not to permit processing of the request based on a term of use of the user; and execute the processing using a neural network based on the request whose processing is determined as being permitted. The term of use includes a condition related to a structure of a processing target of the neural network.
Type:
Application
Filed:
November 9, 2023
Publication date:
May 16, 2024
Applicant:
Preferred Networks, Inc.
Inventors:
Masateru KAWAGUCHI, Takuya OGATA, Yuta TSUBOI
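The information processing system abstract above (request, permission check against a term of use, then neural-network processing) can be sketched as a small request handler. The structure-based condition is simplified here to a size limit on the processing target; that limit and all names are assumptions for illustration:

```python
def handle_request(request, terms_of_use, run_neural_network):
    """Permit processing only if the user's term of use allows the structure
    of the processing target; then execute the neural network."""
    term = terms_of_use[request["user_id"]]
    # Condition related to the structure of the processing target (assumed: node count).
    permitted = request["target_num_nodes"] <= term["max_target_nodes"]
    if not permitted:
        return {"status": "rejected"}
    return {"status": "ok", "result": run_neural_network(request["target"])}
```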
Abstract: An information processing system includes a first information processing device and a second information processing device. The second information processing device is configured to transmit atomic information to the first information processing device. The first information processing device is configured to receive the atomic information from the second information processing device, calculate a processing result corresponding to the atomic information by inputting the atomic information into a neural network, and transmit the processing result to the second information processing device.
Abstract: A training device includes a processor. The processor inputs a first atomic structure, including a surface and an adsorbed molecule close to the surface, into a model to obtain an energy outputted from the model in response to the input, and obtains a first error based on the outputted energy of the first atomic structure and a ground truth value of the energy of the first atomic structure; inputs a fourth atomic structure, including a cluster and an adsorbed molecule close to the cluster, into the model to obtain an energy outputted from the model in response to the input, and obtains a fourth error based on the outputted energy of the fourth atomic structure and a ground truth value of the energy of the fourth atomic structure; and updates a parameter of the model based on the first error and the fourth error. The surface and the cluster include the same atom.
Type:
Application
Filed:
December 8, 2023
Publication date:
April 18, 2024
Applicant:
Preferred Networks, Inc.
Inventors:
Chikashi SHINAGAWA, So TAKAMOTO, Iori KURATA
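The training device abstract above computes one error from a surface-plus-adsorbate structure and another from a cluster-plus-adsorbate structure containing the same atom, then updates the model from both. A hedged sketch of that two-term update, assuming a PyTorch energy model and tensor-encoded structures (all names illustrative):

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, surface_structure, surface_energy_gt,
               cluster_structure, cluster_energy_gt):
    """One update combining the surface (first) and cluster (fourth) errors."""
    e_surface = model(surface_structure)              # energy for surface + adsorbed molecule
    first_error = F.mse_loss(e_surface, surface_energy_gt)

    e_cluster = model(cluster_structure)              # energy for cluster + adsorbed molecule
    fourth_error = F.mse_loss(e_cluster, cluster_energy_gt)

    loss = first_error + fourth_error                 # update the parameter by both errors
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return first_error.item(), fourth_error.item()
```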
Abstract: An inferring device includes one or more memories and one or more processors. The one or more processors are configured to obtain three-dimensional structures of a plurality of molecules; and input the three-dimensional structures of the plurality of molecules into a neural network model and infer one or more physical properties of the plurality of molecules.
Abstract: An information processing device includes one or more memories and one or more processors. The one or more processors are configured to receive information on a plurality of graphs from one or more second information processing devices; select a plurality of graphs which are simultaneously processable using a graph neural network model among the plurality of graphs; input information on the plurality of graphs which are simultaneously processable into the graph neural network model and simultaneously process the information on the plurality of graphs which are simultaneously processable to acquire a processing result for each of the plurality of graphs which are simultaneously processable; and transmit the processing result to the second information processing device which has transmitted the corresponding information on the graph.
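The device above groups graphs that can be processed simultaneously by a graph neural network, runs them in one pass, and replies to each sender. A simple sketch of that flow, grouping graphs by node count as the compatibility criterion (an assumption for the sketch; the patent does not specify the grouping rule):

```python
from collections import defaultdict

def group_simultaneously_processable(graphs, batch_size=32):
    """Group incoming (sender_id, graph) pairs so each group can be handled in
    a single GNN pass; graphs are dicts with a 'num_nodes' key (assumed)."""
    buckets = defaultdict(list)
    for sender_id, graph in graphs:
        buckets[graph["num_nodes"]].append((sender_id, graph))
    for bucket in buckets.values():
        for i in range(0, len(bucket), batch_size):
            yield bucket[i:i + batch_size]

def process_and_reply(gnn_model, graphs, send_result):
    for batch in group_simultaneously_processable(graphs):
        senders = [s for s, _ in batch]
        results = gnn_model([g for _, g in batch])   # one simultaneous forward pass
        for sender_id, result in zip(senders, results):
            send_result(sender_id, result)           # reply to the device that sent the graph
```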
Abstract: A server device configured to communicate, via a communication network, with at least one device including a learner configured to perform processing by using a learned model, includes a processor, a transmitter, and a storage configured to store a plurality of shared models pre-learned in accordance with environments and conditions of various devices. The processor is configured to acquire device data including information on an environment and conditions from the at least one device, and select an optimum shared model for the at least one device based on the acquired device data. The transmitter is configured to transmit a selected shared model to the at least one device.
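The server above selects, from pre-learned shared models, the one that best matches the environment and conditions reported by a device. A minimal selection sketch scoring candidates by tag overlap (the scoring rule and data layout are assumptions, not the patented criterion):

```python
def select_shared_model(shared_models, device_data):
    """Pick the pre-learned shared model whose environment/conditions best match
    the device; `shared_models` maps model_id -> {"environment": set,
    "conditions": set, "weights": ...} (assumed layout)."""
    def score(meta):
        env_overlap = len(meta["environment"] & device_data["environment"])
        cond_overlap = len(meta["conditions"] & device_data["conditions"])
        return env_overlap + cond_overlap

    best_id = max(shared_models, key=lambda mid: score(shared_models[mid]))
    return best_id, shared_models[best_id]["weights"]   # transmitted to the device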
Abstract: An inferring device includes one or more memories; and one or more processors. The one or more processors are configured to input information on each atom in an atomic system into a second model to infer a difference between energy based on a first-principles calculation corresponding to the atomic system and energy of an interatomic potential function corresponding to the atomic system.
Type:
Application
Filed:
December 8, 2023
Publication date:
April 4, 2024
Applicant:
Preferred Networks, Inc.
Inventors:
Daisuke MOTOKI, Chikashi SHINAGAWA, So TAKAMOTO, Hiroki IRIGUCHI
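The inferring device abstract above predicts the difference between first-principles energy and the energy of an interatomic potential function, so a first-principles-quality total can be approximated by adding the predicted correction to the cheap potential. A short sketch of that delta scheme (the potential and the correction model are placeholders):

```python
def total_energy(atomic_system, classical_potential, correction_model):
    """Delta sketch: interatomic potential energy plus a learned correction.

    `classical_potential` stands in for the interatomic potential function and
    `correction_model` for the second model in the abstract; both are placeholders.
    """
    e_potential = classical_potential(atomic_system)        # interatomic potential energy
    # The model infers E_first_principles - E_potential from per-atom information.
    delta = correction_model([atom.features for atom in atomic_system])
    return e_potential + delta                               # approximates first-principles energy
```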
Abstract: An information processing device includes one or more processors. The one or more processors are configured to optimize, for a specific elementary reaction in a reaction using a catalyst including a plurality of elementary reactions, an arrangement of a promoter element in the catalyst based on activation energy acquired using a trained model, and search for the promoter element based on the activation energy acquired using the trained model for each type of the promoter element.
Type:
Application
Filed:
December 8, 2023
Publication date:
April 4, 2024
Applicants:
ENEOS Corporation, Preferred Networks, Inc.
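The catalyst abstract above searches for a promoter element and its arrangement by comparing activation energies predicted by a trained model for a specific elementary reaction. A brute-force sketch of that search (the energy model and candidate generators are hypothetical; exhaustive enumeration is an assumption):

```python
def search_promoter(activation_energy_model, promoter_elements, arrangements, reaction):
    """Return the promoter element and arrangement giving the lowest predicted
    activation energy for the specified elementary reaction."""
    best = None
    for element in promoter_elements:                # search over promoter element types
        for arrangement in arrangements(element):    # optimize the arrangement for this element
            ea = activation_energy_model(reaction, element, arrangement)
            if best is None or ea < best[0]:
                best = (ea, element, arrangement)
    return best   # (activation energy, promoter element, arrangement)
```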
Abstract: An inferring device includes one or more processors. The one or more processors are configured to acquire an output from a neural network model based on information related to an atomic structure and label information related to an atomic simulation, wherein the neural network model is trained to infer a simulation result with respect to the atomic structure generated by the atomic simulation corresponding to the label information.
Abstract: An inferring device comprises one or more memories and one or more processors. The one or more processors are configured to decide an action based on a tree representation, including nodes and edges of a molecular graph, and on a trained model trained through reinforcement learning, and to generate a state including information on the molecular graph based on the action, wherein the edges hold connection information on the nodes.
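The device above repeatedly chooses an action from the tree representation of a molecular graph using a reinforcement-learned model, and each action yields a new state containing the updated graph. A minimal generation-loop sketch (the policy, tree encoding, and state transition are placeholders):

```python
def generate_molecule(policy_model, initial_state, apply_action, max_steps=50):
    """Sketch of the generation loop: decide an action from the tree representation,
    then produce the next state containing the updated molecular graph.

    `policy_model` stands in for the model trained through reinforcement learning
    and `apply_action` for the state transition; both are illustrative placeholders.
    """
    state = initial_state                      # holds the molecular graph and its tree
    for _ in range(max_steps):
        tree = state["tree"]                   # nodes plus edges with connection information
        action = policy_model(tree)            # decision of an action
        if action == "stop":
            break
        state = apply_action(state, action)    # generation of the next state
    return state["graph"]
```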
Abstract: With respect to an inference method performed by at least one processor, the method includes inputting, by the at least one processor, into a learned model, non-processed object image data of a second object and data related to a second process for the second object, and inferring, by the at least one processor using the learned model, processed object image data of the second object on which the second process has been performed. The learned model has been trained so that an output obtained in response to non-processed object image data of a first object and data related to a first process for the first object being input approaches processed object image data of the first object on which the first process has been performed.
Type:
Grant
Filed:
March 2, 2021
Date of Patent:
March 5, 2024
Assignees:
Preferred Networks, Inc., Tokyo Electron Limited
Abstract: A method and system that efficiently select sensors without requiring advanced expertise or extensive experience, even in the case of new machines and unknown failures. An abnormality detection system includes a storage unit for storing a latent variable model and a joint probability model, an acquisition unit for acquiring sensor data that is output by a sensor, a measurement unit for measuring the probability of the sensor data acquired by the acquisition unit based on the latent variable model and the joint probability model stored by the storage unit, a determination unit for determining whether the sensor data is normal or abnormal based on the probability of the sensor data measured by the measurement unit, and a learning unit for learning the latent variable model and the joint probability model based on the sensor data output by the sensor.
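The abnormality detection abstract above scores incoming sensor data by its probability under learned models and flags low-probability data as abnormal. A compact sketch of that learn-measure-determine flow, with a diagonal Gaussian standing in for the latent variable and joint probability models (an assumption for illustration):

```python
import numpy as np

class SimpleAbnormalityDetector:
    """Sketch of the measure-then-threshold flow described in the abstract."""

    def learn(self, normal_sensor_data):
        # Learning unit: fit the model from sensor data output by the sensors.
        self.mean = normal_sensor_data.mean(axis=0)
        self.var = normal_sensor_data.var(axis=0) + 1e-6

    def log_probability(self, sensor_data):
        # Measurement unit: probability of the acquired sensor data under the model.
        return -0.5 * np.sum(
            np.log(2 * np.pi * self.var) + (sensor_data - self.mean) ** 2 / self.var
        )

    def is_abnormal(self, sensor_data, threshold):
        # Determination unit: abnormal if the measured probability is too low.
        return self.log_probability(sensor_data) < threshold
```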
Abstract: An apparatus and a method for coloring a line drawing are disclosed, for: acquiring line drawing data; performing reduction processing on the line drawing data to a predetermined reduced size to obtain reduced line drawing data; coloring the reduced line drawing data based on a first learned model which is learned in advance using sample data; and coloring the original line drawing data, with the colored reduced data and the original line drawing data as inputs, based on a second learned model which is learned in advance.
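The coloring pipeline above first colors a reduced copy of the line drawing with a first learned model, then feeds the colored reduced image together with the original line drawing into a second learned model to produce the full-resolution result. A short sketch of that two-stage flow (the models and resize helper are placeholders):

```python
def colorize_line_drawing(line_drawing, first_model, second_model, resize, reduced_size):
    """Two-stage coloring sketch following the abstract above.

    `first_model`, `second_model`, and `resize` are illustrative placeholders for
    the two learned models and the reduction processing.
    """
    reduced = resize(line_drawing, reduced_size)          # reduction processing
    colored_reduced = first_model(reduced)                # coarse colors on the reduced drawing
    # The second model takes both the colored reduced data and the original line drawing.
    return second_model(line_drawing, colored_reduced)    # full-resolution colored output
```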
Abstract: There is provided an information processing device which efficiently executes machine learning. The information processing device according to one embodiment includes: an obtaining unit which obtains a source code including a code which defines Forward processing of each layer constituting a neural network; a storage unit which stores an association relationship between each Forward processing and Backward processing associated with each Forward processing; and an executing unit which successively executes each code included in the source code, and which calculates an output value of the Forward processing defined by the code based on an input value at a time of execution of each code, and generates a reference structure for Backward processing in a layer associated with the code based on the association relationship stored in the storage unit.
Abstract: A machine learning device for a robot that allows a human and the robot to work cooperatively, the machine learning device including a state observation unit that observes a state variable representing a state of the robot during a period in which the human and the robot work cooperatively; a determination data obtaining unit that obtains determination data for at least one of a level of burden on the human and a working efficiency; and a learning unit that learns a training data set for setting an action of the robot, based on the state variable and the determination data.
Type:
Grant
Filed:
September 17, 2020
Date of Patent:
February 20, 2024
Assignees:
FANUC CORPORATION, PREFERRED NETWORKS, INC.
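The machine learning device abstract above pairs observed robot state variables with determination data on human burden and working efficiency to build a training data set for setting robot actions. A sketch of how such pairs could be accumulated (the observation and determination helpers are hypothetical; the abstract does not specify the learning algorithm itself):

```python
def collect_training_data(observe_state, get_determination_data, num_steps):
    """Accumulate (state variable, determination data) pairs while the human and
    robot work cooperatively; the learning unit would then learn from these pairs."""
    training_data = []
    for _ in range(num_steps):
        state = observe_state()                       # state variable of the robot
        determination = get_determination_data()      # burden on the human / working efficiency
        training_data.append((state, determination))
    return training_data
```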
Abstract: An autoencoder includes a memory configured to store data including an encode network and a decode network, and processing circuitry coupled to the memory. The processing circuitry is configured to cause the encode network to convert inputted data to a plurality of values and output the plurality of values; batch-normalize, out of the output plurality of values, values indicated by at least two or more layers of the encode network, the batch-normalized values having a predetermined average value and a predetermined variance value; quantize each of the batch-normalized values; and cause the decode network to decode each of the quantized values.
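The autoencoder abstract above batch-normalizes encoder outputs to a fixed mean and variance, quantizes the normalized values, and decodes the quantized values. A hedged PyTorch-style sketch of that forward flow (layer sizes and the rounding quantizer are assumptions):

```python
import torch
import torch.nn as nn

class QuantizingAutoencoder(nn.Module):
    """Sketch of the encode -> batch-normalize -> quantize -> decode flow."""

    def __init__(self, in_dim=784, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, code_dim))
        # affine=False keeps a fixed zero mean and unit variance, standing in for
        # the predetermined average and variance values in the abstract.
        self.batch_norm = nn.BatchNorm1d(code_dim, affine=False)
        self.decoder = nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x):
        values = self.encoder(x)                 # plurality of values from the encode network
        normalized = self.batch_norm(values)
        quantized = torch.round(normalized)      # simple rounding quantizer (an assumption)
        return self.decoder(quantized)           # decode network decodes the quantized values
```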
Abstract: A computer system for generating information regarding a chemical compound includes one or more memories and one or more processors configured to generate information regarding a chemical compound based on a latent variable, and to evaluate the generated information regarding the chemical compound based on desired characteristics, wherein generating the information regarding the chemical compound is restricted by the desired characteristics.
Type:
Grant
Filed:
August 24, 2020
Date of Patent:
February 13, 2024
Assignee:
Preferred Networks, Inc.
Inventors:
Kenta Oono, Justin Clayton, Nobuyuki Ota
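The compound-generation abstract above produces candidate compound information from a latent variable, evaluates it against desired characteristics, and restricts generation by those characteristics. A compact rejection-style sketch (the decoder, evaluator, and characteristic check are placeholders; rejection of non-conforming candidates stands in for the restriction step):

```python
import numpy as np

def generate_candidate_compounds(decoder, evaluate, satisfies_desired,
                                 latent_dim=64, n_samples=100):
    """Sample latent variables, decode them into compound information, and keep
    only candidates whose evaluated characteristics satisfy the desired ones."""
    candidates = []
    for _ in range(n_samples):
        z = np.random.randn(latent_dim)          # latent variable
        compound_info = decoder(z)               # generated information regarding the compound
        properties = evaluate(compound_info)     # evaluation against desired characteristics
        if satisfies_desired(properties):        # generation restricted by the characteristics
            candidates.append((compound_info, properties))
    return candidates
```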