Abstract: An inferring device includes one or more memories and one or more processors. The one or more processors are configured to input, into a differentiable physical model, input data including at least information regarding a first state to calculate an inferred second state; and to infer, based on the second state and the inferred second state, a parameter by which the first state transitions to the second state.
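A minimal sketch of the kind of scheme this abstract describes, assuming a toy one-dimensional model and plain gradient descent; the constant-velocity model, the parameter name, and the learning rate are illustrative assumptions, not details from the application.

```python
# Toy sketch: infer a physical parameter by comparing the observed second state
# with the second state predicted by a differentiable model. The model itself
# (a constant-velocity step) and all names are illustrative assumptions.

def predict_second_state(first_state: float, velocity: float, dt: float = 1.0) -> float:
    """Differentiable toy model: x(t + dt) = x(t) + velocity * dt."""
    return first_state + velocity * dt

def infer_parameter(first_state: float, observed_second_state: float,
                    lr: float = 0.1, steps: int = 200) -> float:
    velocity = 0.0  # initial guess for the parameter to be inferred
    for _ in range(steps):
        predicted = predict_second_state(first_state, velocity)
        error = predicted - observed_second_state
        # d(error^2)/d(velocity) = 2 * error * dt, with dt = 1.0
        grad = 2.0 * error * 1.0
        velocity -= lr * grad
    return velocity

print(infer_parameter(first_state=0.0, observed_second_state=3.0))  # ~3.0
```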
Abstract: An inferring device includes one or more memories and one or more processors. The one or more processors are configured to generate information on a tree including information on a node and information on an edge from a latent representation by using a trained inference model; and generate a graph from the information on the tree. The information on the tree includes connection information on the nodes.
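A short sketch of the tree-to-graph step described above, assuming the decoded tree arrives as node labels plus parent indices and the graph is an undirected adjacency list; this representation is an assumption for illustration only.

```python
# Sketch of turning tree information (node labels plus connection information)
# into a graph, here an undirected adjacency list. The (labels, parents)
# encoding of the decoded tree is an illustrative assumption.

from collections import defaultdict

def tree_to_graph(labels, parents):
    """labels[i] is node i's label; parents[i] is node i's parent index (-1 for the root)."""
    adjacency = defaultdict(list)
    for child, parent in enumerate(parents):
        if parent >= 0:
            adjacency[parent].append(child)
            adjacency[child].append(parent)
    return {"labels": list(labels), "edges": dict(adjacency)}

# Example: a root with two children, one of which has a child of its own.
print(tree_to_graph(["C", "C", "O", "N"], [-1, 0, 0, 1]))
```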
Abstract: A control system includes at least one processor and at least one memory. The at least one processor is configured to determine operation data by repeatedly calculating, with a given calculation model and based on observation data indicating an actual value of a plant, control target data indicating a predicted value of a control target in the plant and the operation data indicating an operation value of a control device of the plant.
Type:
Application
Filed:
October 26, 2022
Publication date:
May 4, 2023
Applicants:
ENEOS Corporation, Preferred Networks, Inc.
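A minimal sketch of the iterative loop in the control-system abstract above: from observed plant data, a calculation model repeatedly predicts the control target and adjusts the operation value. The first-order plant model, the gain, and all names are illustrative assumptions, not the calculation model from the application.

```python
# Sketch: repeat prediction of the control target and adjustment of the
# operation value until the prediction settles near a setpoint.

def calculation_model(observation: float, operation: float) -> float:
    """Toy predictor: next value of the control target given the current observation."""
    return 0.9 * observation + 0.1 * operation

def determine_operation(observation: float, setpoint: float,
                        iterations: int = 50, gain: float = 1.0) -> float:
    operation = 0.0
    for _ in range(iterations):
        predicted_target = calculation_model(observation, operation)
        # Push the predicted control target toward the setpoint.
        operation += gain * (setpoint - predicted_target)
    return operation

print(determine_operation(observation=20.0, setpoint=25.0))  # operation value to apply
```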
Abstract: There is provided an information processing device which efficiently executes machine learning. The information processing device according to one embodiment includes: an obtaining unit which obtains a source code including a code which defines Forward processing of each layer constituting a neural network; a storage unit which stores an association relationship between each Forward processing and Backward processing associated with each Forward processing; and an executing unit which successively executes each code included in the source code, and which calculates an output value of the Forward processing defined by the code based on an input value at a time of execution of each code, and generates a reference structure for Backward processing in a layer associated with the code based on the association relationship stored in the storage unit.
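The abstract above describes executing forward code line by line while recording, from a stored forward-to-backward association, a reference structure that backward processing can later follow. Below is a minimal sketch of that idea; the tiny Variable/registry design is an illustrative assumption, not the implementation from the application.

```python
# Association between each forward operation and its backward (gradient) rule.
BACKWARD_RULES = {
    "add": lambda inputs, grad: [grad, grad],
    "mul": lambda inputs, grad: [grad * inputs[1].value, grad * inputs[0].value],
}

class Variable:
    def __init__(self, value, op=None, inputs=()):
        self.value = value
        self.op = op          # name of the forward op that produced this variable
        self.inputs = inputs  # reference structure for backward processing
        self.grad = 0.0

def forward(op, *inputs):
    """Execute a forward operation and record the reference structure for backward."""
    if op == "add":
        value = inputs[0].value + inputs[1].value
    elif op == "mul":
        value = inputs[0].value * inputs[1].value
    else:
        raise ValueError(op)
    return Variable(value, op=op, inputs=inputs)

def backward(output):
    """Walk the recorded reference structure, applying the associated backward rules.
    (Naive traversal: fine here because each intermediate variable is used once.)"""
    output.grad = 1.0
    stack = [output]
    while stack:
        var = stack.pop()
        if var.op is None:
            continue
        for parent, g in zip(var.inputs, BACKWARD_RULES[var.op](var.inputs, var.grad)):
            parent.grad += g
            stack.append(parent)

x, y = Variable(2.0), Variable(3.0)
z = forward("mul", forward("add", x, y), y)  # z = (x + y) * y
backward(z)
print(z.value, x.grad, y.grad)  # 15.0, dz/dx = y = 3.0, dz/dy = x + 2y = 8.0
```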
Abstract: An inferring device includes one or more memories and one or more processors. The one or more processors are configured to acquire a plurality of latent variables; generate a plurality of structural formulas by inputting the plurality of latent variables, respectively, into a model; and calculate a plurality of scores by evaluating the plurality of structural formulas, respectively. The one or more processors execute the processing of acquiring the plurality of latent variables, generating the plurality of structural formulas, and calculating the plurality of scores at least twice. In any execution from the second time onward, the one or more processors acquire the plurality of latent variables based on the previously acquired latent variables and the previously calculated scores.
Abstract: An inferring device includes one or more memories and one or more processors. The one or more processors are configured to acquire a latent variable; generate a structural formula by inputting the latent variable into a first model; and calculate a score with respect to the structural formula. The one or more processors execute the processing of acquiring the latent variable, generating the structural formula, and calculating the score at least twice, to generate a structural formula whose score is higher than that of the structural formula generated in the first execution.
Type:
Application
Filed:
December 7, 2022
Publication date:
March 30, 2023
Applicant:
Preferred Networks, Inc.
Inventors:
Motoki ABE, Mizuki TAKEMOTO, Ryuichiro ISHITANI, Keita ODA
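The two molecule-generation abstracts above describe repeating an acquire-latents, generate-structures, score cycle, with later latents chosen from earlier results. A minimal sketch follows; the decoder, the scoring function, and the Gaussian resampling strategy are illustrative assumptions, not the models from the applications.

```python
import random

def decode_structure(latent):
    """Stand-in for a trained model that maps a latent variable to a structural formula."""
    return f"molecule(latent={latent:.3f})"

def score_structure(structure, latent):
    """Stand-in scorer; here the score simply peaks at latent = 1.5."""
    return -(latent - 1.5) ** 2

def search(iterations=10, batch_size=8, sigma=0.5):
    latents = [random.gauss(0.0, 1.0) for _ in range(batch_size)]
    best_latent, best_score, best_structure = None, float("-inf"), None
    for _ in range(iterations):
        structures = [decode_structure(z) for z in latents]
        scores = [score_structure(s, z) for s, z in zip(structures, latents)]
        for z, s, sc in zip(latents, structures, scores):
            if sc > best_score:
                best_latent, best_score, best_structure = z, sc, s
        # Acquire the next latents based on the latents and scores seen so far.
        latents = [random.gauss(best_latent, sigma) for _ in range(batch_size)]
    return best_structure, best_score

print(search())
```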
Abstract: One aspect of the present disclosure relates to a generation method for a training dataset, comprising: capturing, by one or more processors, a target object to which a marker unit recognizable under a first illumination condition is provided; and acquiring, by the one or more processors, a first image where the marker unit is recognizable and a second image obtained by capturing the target object under a second illumination condition.
Abstract: A method of generating a three-dimensional model of an object is executed by a processor. The method includes executing rendering of the three-dimensional model of the object based on an image captured by an imaging device; and modifying the three-dimensional model.
Abstract: [Problem] To provide a learning device for performing more efficient machine learning. [Solution] A learning device unit according to one embodiment comprises at least one learning device and a connection device for connecting an intermediate learning device having an internal state shared by another learning device unit to the at least one learning device.
Abstract: According to one embodiment, a communication device includes: acquisition circuitry including a plurality of data acquirers configured to acquire data for transmission; processing circuitry configured to determine consecutive first sequence numbers for a plurality of pieces of the data acquired by the plurality of data acquirers, and to generate a plurality of packets that include the data acquired by the plurality of data acquirers and the first sequence numbers determined for the plurality of pieces of the data; and transmitting circuitry configured to transmit the plurality of packets, wherein each packet includes an identifier that identifies the data acquirer having acquired the data in the packet or an identifier that identifies an application corresponding to the data in the packet.
Type:
Application
Filed:
July 28, 2022
Publication date:
November 17, 2022
Applicants:
Preferred Networks, Inc., TOYOTA JIDOSHA KABUSHIKI KAISHA
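A small sketch of the packetization in the communication-device abstract above: data gathered by several acquirers receives consecutive sequence numbers, and each packet carries an identifier of the acquirer that produced its payload. The dataclass layout is an illustrative assumption.

```python
from dataclasses import dataclass
from itertools import count
from typing import Any

@dataclass
class Packet:
    sequence_number: int   # consecutive across all acquirers
    acquirer_id: str       # identifies the data acquirer (or application)
    payload: Any

_sequence = count()

def build_packets(acquired: list[tuple[str, Any]]) -> list[Packet]:
    """acquired is a list of (acquirer_id, data) pairs in acquisition order."""
    return [Packet(next(_sequence), acquirer_id, data) for acquirer_id, data in acquired]

packets = build_packets([("camera", b"\x01\x02"), ("lidar", b"\x03"), ("camera", b"\x04")])
for p in packets:
    print(p)
```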
Abstract: A line drawing automatic coloring method according to the present disclosure includes: acquiring line drawing data of a target to be colored; receiving at least one local style designation for applying a selected local style to at least one place of the acquired line drawing data; and performing coloring processing that reflects the local style designation on the line drawing data, based on a learned coloring model trained in advance using line drawing data and local style designations as inputs.
Abstract: A server device configured to communicate, via a communication network, with at least one device including a learner configured to perform processing by using a learned model, includes a processor, a transmitter, and a storage configured to store a plurality of shared models pre-learned in accordance with environments and conditions of various devices. The processor is configured to acquire device data including information on an environment and conditions from the at least one device, and select an optimum shared model for the at least one device based on the acquired device data. The transmitter is configured to transmit the selected shared model to the at least one device.
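A minimal sketch of the server-side selection described above: pick, from shared models pre-trained for different environments, the one whose metadata best matches the device data reported by a device. The metadata format and the overlap-based matching rule are illustrative assumptions.

```python
SHARED_MODELS = {
    "indoor_low_light": {"environment": {"indoor", "low_light"}, "weights": "indoor.bin"},
    "outdoor_daylight": {"environment": {"outdoor", "daylight"}, "weights": "outdoor.bin"},
    "factory_line":     {"environment": {"indoor", "vibration"}, "weights": "factory.bin"},
}

def select_shared_model(device_data: set[str]) -> str:
    """Return the shared model whose environment tags overlap most with the device data."""
    return max(SHARED_MODELS,
               key=lambda name: len(SHARED_MODELS[name]["environment"] & device_data))

device_data = {"indoor", "vibration", "dust"}
chosen = select_shared_model(device_data)
print(chosen)  # "factory_line" -> its weights would then be transmitted to the device
```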
Abstract: The disclosure relates to data processing methods, computer readable hardware storage devices, and systems for correlating data corresponding to levels of biomarkers with various breast diseases.
Type:
Application
Filed:
July 7, 2020
Publication date:
September 1, 2022
Applicant:
Preferred Networks, Inc.
Inventors:
Nobuyuki OTA, Sandeep AYYAR, Timothy J. NOLAN
Abstract: A gaze point estimation processing apparatus in an embodiment includes a storage configured to store a neural network as a gaze point estimation model and one or more processors. The storage stores a gaze point estimation model generated through learning based on an image for learning and information relating to a first gaze point for the image for learning. The one or more processors estimate information relating to a second gaze point with respect to an image for estimation from the image for estimation using the gaze point estimation model.
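A brief sketch of the inference step in the gaze-point abstract above: a model trained on (image, gaze point) pairs maps a new image to an estimated gaze point. The tiny linear "model" and the normalized two-dimensional output stand in for the stored neural network and are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 32, 32
weights = rng.normal(scale=0.01, size=(H * W, 2))  # stand-in for learned parameters

def estimate_gaze_point(image: np.ndarray) -> tuple[float, float]:
    """Return normalized (x, y) gaze coordinates in [0, 1] for a grayscale image."""
    logits = image.reshape(-1) @ weights
    x, y = 1.0 / (1.0 + np.exp(-logits))  # squash to [0, 1]
    return float(x), float(y)

image_for_estimation = rng.random((H, W))
print(estimate_gaze_point(image_for_estimation))
```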
Abstract: A method and system that efficiently select sensors without requiring advanced expertise or extensive experience, even in the case of new machines and unknown failures. An abnormality detection system includes a storage unit for storing a latent variable model and a joint probability model, an acquisition unit for acquiring sensor data output by a sensor, a measurement unit for measuring the probability of the sensor data acquired by the acquisition unit based on the latent variable model and the joint probability model stored in the storage unit, a determination unit for determining whether the sensor data is normal or abnormal based on the probability measured by the measurement unit, and a learning unit for learning the latent variable model and the joint probability model based on the sensor data output by the sensor.
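A minimal sketch of the detection step described above: the probability of new sensor data is measured under a learned model and compared with a threshold. A diagonal Gaussian fitted to normal data stands in for the latent variable model and joint probability model of the application; the 1st-percentile threshold is likewise an assumption.

```python
import numpy as np

def learn(normal_sensor_data: np.ndarray):
    """Fit the stand-in model: per-sensor mean and variance of normal data."""
    return normal_sensor_data.mean(axis=0), normal_sensor_data.var(axis=0) + 1e-6

def log_probability(sample: np.ndarray, model) -> float:
    mean, var = model
    return float(-0.5 * np.sum((sample - mean) ** 2 / var + np.log(2 * np.pi * var)))

def is_abnormal(sample: np.ndarray, model, threshold: float) -> bool:
    return log_probability(sample, model) < threshold

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(1000, 3))  # sensor data gathered during normal operation
model = learn(normal)
threshold = np.percentile([log_probability(s, model) for s in normal], 1)
print(is_abnormal(np.array([0.1, -0.2, 0.3]), model, threshold))  # False
print(is_abnormal(np.array([8.0, -7.0, 9.0]), model, threshold))  # True
```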
Abstract: An information processing device includes a memory, and processing circuitry coupled to the memory. The processing circuitry is configured to acquire gradation processing target image data, and perform gradation processing on the gradation processing target image data based on a model learned in advance.
Abstract: One aspect of the present disclosure relates to a data compression method. The method includes generating, by one or more processors, compressed data from data, wherein the compressed data includes one or more unduplicated values of the data and generating, by the one or more processors, index data from the data, wherein the index data includes indices indicative of storage locations for the unduplicated values.
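A small sketch of the compression scheme in the abstract above: keep each distinct value once and store, as index data, where each original element's value lives. numpy.unique's inverse indices give exactly such a pair; using it here is an illustrative choice, not necessarily the method of the application.

```python
import numpy as np

def compress(data: np.ndarray):
    values, index_data = np.unique(data, return_inverse=True)
    return values, index_data          # unduplicated values + indices into them

def decompress(values: np.ndarray, index_data: np.ndarray) -> np.ndarray:
    return values[index_data]

data = np.array([3.0, 1.5, 3.0, 3.0, 1.5, 7.0])
values, index_data = compress(data)
print(values)                          # [1.5 3.  7. ]
print(index_data)                      # [1 0 1 1 0 2]
print(np.array_equal(decompress(values, index_data), data))  # True
```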
Abstract: An inferring device includes one or more memories and one or more processors. The one or more processors input a vector relating to an atom into a first network which extracts a feature of the atom in a latent space from the vector relating to the atom, and infer the feature of the atom in the latent space through the first network.
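A short sketch of the first network described above: a vector describing an atom is mapped to a feature in a latent space. The two-layer MLP, the dimensions, and the one-hot atomic-number encoding are illustrative assumptions, not the network from the application.

```python
import numpy as np

rng = np.random.default_rng(0)
IN_DIM, HIDDEN, LATENT = 16, 32, 8
W1, b1 = rng.normal(scale=0.1, size=(IN_DIM, HIDDEN)), np.zeros(HIDDEN)
W2, b2 = rng.normal(scale=0.1, size=(HIDDEN, LATENT)), np.zeros(LATENT)

def atom_feature(atom_vector: np.ndarray) -> np.ndarray:
    """Infer the atom's feature in the latent space through the first network."""
    hidden = np.maximum(atom_vector @ W1 + b1, 0.0)  # ReLU
    return hidden @ W2 + b2

carbon = np.zeros(IN_DIM)
carbon[6] = 1.0        # one-hot atomic number (illustrative encoding)
print(atom_feature(carbon).shape)  # (8,) latent feature
```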