Patents Assigned to Preferred Networks, Inc.
  • Publication number: 20230206094
    Abstract: An inferring device includes one or more memories and one or more processors. The one or more processors are configured to input data including at least information regarding a first state into a differentiable physical model to calculate an inferred second state, and to infer, based on a second state and the inferred second state, a parameter governing the transition from the first state to the second state.
    Type: Application
    Filed: March 6, 2023
    Publication date: June 29, 2023
    Applicant: Preferred Networks, Inc.
    Inventor: Masashi YOSHIKAWA
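As a purely illustrative aside on publication 20230206094 (not Preferred Networks' implementation), the sketch below infers a transition parameter by gradient descent through a toy differentiable physical model; the point-mass dynamics, the function names, and the analytic gradient are all assumptions made for this example.

```python
# A minimal sketch, under the assumptions stated above: find the force that
# carries a first state to an observed second state by differentiating a toy
# physical model.
import numpy as np

def step(state, force, dt=0.1, mass=1.0):
    """Toy differentiable physics: one Euler step of a point mass (position, velocity)."""
    pos, vel = state
    new_vel = vel + dt * force / mass
    new_pos = pos + dt * new_vel
    return np.array([new_pos, new_vel])

def infer_parameter(first_state, observed_second_state, lr=25.0, iters=300):
    """Gradient descent on the force so the inferred second state matches the observation."""
    force = 0.0
    dpred_dforce = np.array([0.1 * 0.1, 0.1])  # analytic derivative of step() w.r.t. force
    for _ in range(iters):
        residual = step(first_state, force) - observed_second_state
        force -= lr * 2.0 * residual @ dpred_dforce  # gradient of the squared error
    return force

if __name__ == "__main__":
    first_state = np.array([0.0, 1.0])
    observed_second_state = step(first_state, force=3.0)
    print(round(infer_parameter(first_state, observed_second_state), 3))  # ~3.0
```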
  • Publication number: 20230196075
    Abstract: An inferring device includes one or more memories and one or more processors. The one or more processors are configured to generate, from a latent representation and by using a trained inference model, information on a tree that includes information on its nodes and edges; and to generate a graph from the information on the tree. The tree information includes connection information for the nodes.
    Type: Application
    Filed: February 13, 2023
    Publication date: June 22, 2023
    Applicant: Preferred Networks, Inc.
    Inventor: Ryuichiro ISHITANI
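As a hedged illustration of the data flow in publication 20230196075 (not the patented model), the snippet below decodes a stub "tree information" object with node labels and parent connection info, then converts it into an undirected graph; decode_tree merely stands in for the trained inference model.

```python
# A minimal sketch: tree information (labels + parent indices) -> undirected graph.
from collections import defaultdict

def decode_tree(latent):
    """Stand-in for the trained inference model: latent representation -> tree information."""
    return {"labels": ["C", "C", "O", "N"], "parent": [-1, 0, 1, 1]}  # -1 marks the root

def tree_to_graph(tree):
    """Build an undirected adjacency structure from the tree's connection information."""
    edges = defaultdict(set)
    for child, parent in enumerate(tree["parent"]):
        if parent >= 0:
            edges[child].add(parent)
            edges[parent].add(child)
    return {"labels": tree["labels"], "edges": {n: sorted(e) for n, e in edges.items()}}

if __name__ == "__main__":
    print(tree_to_graph(decode_tree(latent=None)))
```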
  • Publication number: 20230138268
    Abstract: A control system includes at least one processor and at least one memory. The at least one processor is configured to determine operation data by repeating a process of using a given calculation model, based on observation data indicating an actual value of a plant, to calculate control target data indicating a predicted value of a control target in the plant and the operation data indicating an operation value of a control device of the plant.
    Type: Application
    Filed: October 26, 2022
    Publication date: May 4, 2023
    Applicants: ENEOS Corporation, Preferred Networks, Inc.
    Inventors: Taichiro HIRAI, Kosei KANUMA, Hiroyuki HINO, Yu YOSHIMURA, Kazuki UEHARA, Akira KINOSHITA, Masahiro SAKAI, Keigo KAWAMURA, Kaizaburo KIDO, Keisuke YAHATA, Keisuke NAKATA, Yo IIDA, Takuro MORIYAMA, Masashi YOSHIKAWA, Tsutomu OGASAWARA
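A minimal sketch of the repeated prediction-and-update loop described in publication 20230138268 follows; the linear plant_model, the gain, and the iteration count are assumptions for illustration, not the ENEOS/Preferred Networks calculation model.

```python
# A minimal sketch: repeatedly run a calculation model on the latest observation
# to predict the control target and adjust the operation value toward a setpoint.
def plant_model(observation, operation):
    """Assumed toy calculation model mapping (observation, operation) to a predicted target value."""
    return observation + 2.0 * operation

def determine_operation(observation, setpoint, iterations=50, gain=0.2):
    operation = 0.0
    for _ in range(iterations):
        predicted_target = plant_model(observation, operation)  # control target data
        operation += gain * (setpoint - predicted_target)       # operation data update
    return operation, plant_model(observation, operation)

if __name__ == "__main__":
    operation, predicted = determine_operation(observation=10.0, setpoint=25.0)
    print(f"operation ~ {operation:.2f}, predicted target ~ {predicted:.2f}")  # ~7.50, ~25.00
```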
  • Publication number: 20230111538
    Abstract: There is provided an information processing device which efficiently executes machine learning. The information processing device according to one embodiment includes: an obtaining unit which obtains source code including code that defines the Forward processing of each layer constituting a neural network; a storage unit which stores an association relationship between each Forward processing and the Backward processing associated with it; and an executing unit which successively executes each piece of code included in the source code, calculates an output value of the Forward processing defined by that code based on an input value at the time of execution, and generates a reference structure for Backward processing in the layer associated with that code based on the association relationship stored in the storage unit.
    Type: Application
    Filed: November 11, 2022
    Publication date: April 13, 2023
    Applicant: Preferred Networks, Inc.
    Inventors: Seiya TOKUI, Yuya UNNO, Kenta OONO, Ryosuke OKUTA
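Since this abstract describes define-by-run execution (computing Forward outputs while recording the references needed for Backward), here is a hedged toy sketch of that idea; the Variable/Mul classes are illustrative assumptions, not the claimed information processing device.

```python
# A minimal sketch: executing the Forward code both computes the output value and
# records a reference structure (creator links) that the Backward pass later follows.
class Variable:
    def __init__(self, value, creator=None):
        self.value = value
        self.creator = creator  # reference to the function that produced this value
        self.grad = 0.0

    def backward(self):
        self.grad = 1.0
        node = self
        while node.creator is not None:  # walk the recorded reference structure
            node = node.creator.backward(node)

class Mul:
    """One layer's Forward processing with its associated Backward processing."""
    def __call__(self, a, b):
        self.a, self.b = a, b
        return Variable(a.value * b.value, creator=self)

    def backward(self, out):
        self.a.grad += out.grad * self.b.value
        self.b.grad += out.grad * self.a.value
        return self.a  # this toy follows only one input chain

if __name__ == "__main__":
    x, w = Variable(3.0), Variable(4.0)
    y = Mul()(x, w)   # Forward execution also builds the Backward references
    y.backward()
    print(x.grad, w.grad)  # 4.0 3.0
```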
  • Publication number: 20230112275
    Abstract: An inferring device includes one or more memories and one or more processors. The one or more processors are configured to acquire a plurality of latent variables; generate a plurality of structural formulas by respectively inputting the latent variables into a model; and calculate a plurality of scores by respectively evaluating the structural formulas. The one or more processors execute this processing of acquisition, generation, and score calculation at least twice, and in the second and subsequent executions acquire the plurality of latent variables based on the previously acquired latent variables and the calculated scores.
    Type: Application
    Filed: December 7, 2022
    Publication date: April 13, 2023
    Applicant: Preferred Networks, Inc.
    Inventors: Ryuichiro ISHITANI, Motoki ABE
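The loop in publication 20230112275 resembles iterative latent-space search, so the sketch below shows one hedged way to realize it; decode_to_structure and score_structure are placeholders (in the patent the score evaluates the decoded structural formula), and the elite-resampling rule is an assumption, not the claimed method.

```python
# A minimal sketch: acquire latents, decode, score, and from the second round on
# acquire new latents based on the previous latents and their scores.
import numpy as np

rng = np.random.default_rng(0)

def decode_to_structure(z):
    return f"structure({np.round(z, 2).tolist()})"   # placeholder for the generative model

def score_structure(z):
    return -float(np.sum((z - 1.0) ** 2))            # placeholder score (higher is better)

def search(rounds=5, batch=16, dim=4, noise=0.3):
    latents = rng.normal(size=(batch, dim))                       # first acquisition
    for _ in range(rounds):
        scores = np.array([score_structure(z) for z in latents])  # evaluate decoded structures
        elite = latents[np.argsort(scores)[-4:]]                  # best latents so far
        # Second and later acquisitions depend on previous latents and scores.
        picks = elite[rng.integers(0, len(elite), size=batch)]
        latents = picks + rng.normal(scale=noise, size=(batch, dim))
    best = latents[int(np.argmax([score_structure(z) for z in latents]))]
    return decode_to_structure(best)

if __name__ == "__main__":
    print(search())
```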
  • Publication number: 20230095369
    Abstract: An inferring device includes one or more memories and one or more processors. The one or more processors are configured to acquire a latent variable; generate a structural formula by inputting the latent variable into a first model; and calculate a score with respect to the structural formula. The one or more processors execute this processing of acquisition, generation, and score calculation at least twice, so as to generate a structural formula whose score is higher than that of the structural formula generated in the first execution.
    Type: Application
    Filed: December 7, 2022
    Publication date: March 30, 2023
    Applicant: Preferred Networks, Inc.
    Inventors: Motoki ABE, Mizuki TAKEMOTO, Ryuichiro ISHITANI, Keita ODA
  • Patent number: 11590650
    Abstract: One aspect of the present disclosure relates to a generation method for a training dataset, comprising: capturing, by one or more processors, a target object provided with a marker unit that is recognizable under a first illumination condition; and acquiring, by the one or more processors, a first image in which the marker unit is recognizable and a second image obtained by capturing the target object under a second illumination condition.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: February 28, 2023
    Assignee: Preferred Networks, Inc.
    Inventors: Kenta Yonekura, Kuniyuki Takahashi
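A hedged sketch of the pairing step implied by patent 11590650 follows: the marker is assumed to appear as a bright region in one channel of the first image, and that mask is used as the label for the second image taken under the second illumination. The shapes, threshold, and channel choice are illustrative assumptions.

```python
# A minimal sketch: derive a label from the marker-visible image and pair it with
# the image captured under the second illumination condition.
import numpy as np

def marker_mask(first_image, threshold=0.8):
    """Assumed: under illumination 1 the marker is bright in channel 0."""
    return (first_image[..., 0] > threshold).astype(np.uint8)

def make_training_pair(first_image, second_image):
    return {"image": second_image, "label": marker_mask(first_image)}

if __name__ == "__main__":
    first = np.zeros((4, 4, 3)); first[1:3, 1:3, 0] = 1.0   # marker region lit up
    second = np.random.default_rng(0).random((4, 4, 3))     # normal-illumination capture
    print(make_training_pair(first, second)["label"])
```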
  • Patent number: 11551419
    Abstract: A method of generating a three-dimensional model of an object is executed by a processor. The method includes rendering the three-dimensional model of the object based on an image captured by an imaging device; and modifying the three-dimensional model.
    Type: Grant
    Filed: April 9, 2021
    Date of Patent: January 10, 2023
    Assignee: Preferred Networks, Inc.
    Inventors: Eiichi Matsumoto, Hironori Yoshida, Hiroharu Kato
  • Publication number: 20220414473
    Abstract: [Problem] To provide a learning device for performing more efficient machine learning. [Solution] A learning device unit according to one embodiment comprises at least one learning device and a connection device that connects, to the at least one learning device, an intermediate learning device having an internal state shared by another learning device unit.
    Type: Application
    Filed: September 1, 2022
    Publication date: December 29, 2022
    Applicant: Preferred Networks, Inc.
    Inventors: Daisuke Okanohara, Ryosuke Okuta, Eiichi Matsumoto, Keigo Kawaai
  • Patent number: 11521070
    Abstract: There is provided an information processing device which efficiently executes machine learning. The information processing device according to one embodiment includes: an obtaining unit which obtains source code including code that defines the Forward processing of each layer constituting a neural network; a storage unit which stores an association relationship between each Forward processing and the Backward processing associated with it; and an executing unit which successively executes each piece of code included in the source code, calculates an output value of the Forward processing defined by that code based on an input value at the time of execution, and generates a reference structure for Backward processing in the layer associated with that code based on the association relationship stored in the storage unit.
    Type: Grant
    Filed: September 2, 2016
    Date of Patent: December 6, 2022
    Assignee: Preferred Networks, Inc.
    Inventors: Seiya Tokui, Yuya Unno, Kenta Oono, Ryosuke Okuta
  • Publication number: 20220368565
    Abstract: According to one embodiment, a communication device includes: acquisition circuitry including a plurality of data acquirers configured to acquire data for transmission; processing circuitry configured to determine consecutive first sequence numbers for a plurality of pieces of the data acquired by the plurality of data acquirers, and to generate a plurality of packets that include the data acquired by the plurality of data acquirers and the first sequence numbers determined for the plurality of pieces of the data; and transmitting circuitry configured to transmit the plurality of packets, wherein each packet includes an identifier that identifies the data acquirer that acquired the data in the packet, or an identifier that identifies an application corresponding to that data.
    Type: Application
    Filed: July 28, 2022
    Publication date: November 17, 2022
    Applicants: Preferred Networks, Inc., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Hirochika ASAI, Yusuke DOI, Masahiro ISHIYAMA, Ryokichi ONISHI
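The packet structure described in publication 20220368565 can be illustrated with a small sketch; the dataclass fields and the build_packets helper are assumptions, not the claimed circuitry.

```python
# A minimal sketch: data from several acquirers gets consecutive sequence numbers
# and is packed together with an identifier of the acquirer that produced it.
from dataclasses import dataclass
from itertools import count
from typing import Iterable, List, Tuple

@dataclass
class Packet:
    sequence_number: int   # consecutive across all acquirers
    acquirer_id: int       # identifies the data acquirer that produced the payload
    payload: bytes

def build_packets(samples: Iterable[Tuple[int, bytes]]) -> List[Packet]:
    seq = count()
    return [Packet(next(seq), acquirer_id, payload) for acquirer_id, payload in samples]

if __name__ == "__main__":
    samples = [(0, b"camera-frame"), (1, b"lidar-scan"), (0, b"camera-frame-2")]
    for packet in build_packets(samples):
        print(packet)
```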
  • Patent number: 11475289
    Abstract: [Problem] To provide a learning device for performing more efficient machine learning. [Solution] A learning device unit according to one embodiment comprises at least one learning device and a connection device that connects, to the at least one learning device, an intermediate learning device having an internal state shared by another learning device unit.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: October 18, 2022
    Assignee: Preferred Networks, Inc.
    Inventors: Daisuke Okanohara, Ryosuke Okuta, Eiichi Matsumoto, Keigo Kawaai
  • Publication number: 20220301239
    Abstract: A line drawing automatic coloring method according to the present disclosure includes: acquiring line drawing data of a target to be colored; receiving at least one local style designation for applying a selected local style to at least one place in the acquired line drawing data; and performing, on the line drawing data, coloring processing that reflects the local style designation, based on a learned coloring model trained in advance using line drawing data and local style designations as inputs.
    Type: Application
    Filed: June 7, 2022
    Publication date: September 22, 2022
    Applicant: Preferred Networks, Inc.
    Inventor: Eiichi MATSUMOTO
  • Publication number: 20220286512
    Abstract: A server device configured to communicate, via a communication network, with at least one device including a learner configured to perform processing by using a learned model, includes a processor, a transmitter, and a storage configured to store a plurality of shared models pre-learned in accordance with the environments and conditions of various devices. The processor is configured to acquire device data including information on an environment and conditions from the at least one device, and select an optimum shared model for the at least one device based on the acquired device data. The transmitter is configured to transmit the selected shared model to the at least one device.
    Type: Application
    Filed: May 24, 2022
    Publication date: September 8, 2022
    Applicant: Preferred Networks, Inc.
    Inventors: Keigo Kawaai, Shohei Hido, Nobuyuki Kubota, Daisuke Tanaka
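As a hedged illustration of the selection step in publication 20220286512, the sketch below keys stored shared models by environment tags and picks the one that overlaps most with the device data; the tag-overlap scoring and model names are assumptions.

```python
# A minimal sketch: select the pre-learned shared model that best matches the
# device's reported environment and conditions.
SHARED_MODELS = {
    "indoor_arm_v1":  {"tags": {"indoor", "robot_arm", "low_light"}},
    "outdoor_agv_v2": {"tags": {"outdoor", "agv", "sunlight"}},
    "factory_cam_v3": {"tags": {"indoor", "camera", "conveyor"}},
}

def select_shared_model(device_data):
    """Pick the stored model whose environment tags overlap most with the device data."""
    return max(SHARED_MODELS, key=lambda name: len(SHARED_MODELS[name]["tags"] & device_data))

if __name__ == "__main__":
    device_data = {"indoor", "camera", "conveyor", "high_speed"}
    print(select_shared_model(device_data))  # factory_cam_v3
```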
  • Publication number: 20220275455
    Abstract: The disclosure relates to data processing methods, computer readable hardware storage devices, and systems for correlating data corresponding to levels of biomarkers with various breast diseases.
    Type: Application
    Filed: July 7, 2020
    Publication date: September 1, 2022
    Applicant: Preferred Networks, Inc.
    Inventors: Nobuyuki OTA, Sandeep AYYAR, Timothy J. NOLAN
  • Publication number: 20220254063
    Abstract: A gaze point estimation processing apparatus in an embodiment includes a storage configured to store a neural network as a gaze point estimation model, and one or more processors. The storage stores a gaze point estimation model generated through learning based on an image for learning and information relating to a first gaze point for the image for learning. The one or more processors estimate, from an image for estimation and using the gaze point estimation model, information relating to a second gaze point with respect to that image.
    Type: Application
    Filed: April 28, 2022
    Publication date: August 11, 2022
    Applicant: Preferred Networks, Inc.
    Inventor: Masaaki FUKUDA
  • Publication number: 20220237060
    Abstract: A method and system that efficiently select sensors without requiring advanced expertise or extensive experience, even in the case of new machines and unknown failures. An abnormality detection system includes a storage unit for storing a latent variable model and a joint probability model, an acquisition unit for acquiring sensor data output by a sensor, a measurement unit for measuring the probability of the sensor data acquired by the acquisition unit based on the latent variable model and the joint probability model stored by the storage unit, a determination unit for determining whether the sensor data is normal or abnormal based on the probability of the sensor data measured by the measurement unit, and a learning unit for learning the latent variable model and the joint probability model based on the sensor data output by the sensor.
    Type: Application
    Filed: April 8, 2022
    Publication date: July 28, 2022
    Applicant: Preferred Networks, Inc.
    Inventors: Daisuke OKANOHARA, Kenta OONO
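The measure-then-threshold loop in publication 20220237060 can be sketched with a single multivariate Gaussian standing in for the latent variable model and joint probability model (an assumption for illustration only): learn the model from sensor data, measure the probability of new readings, and flag low-probability ones as abnormal.

```python
# A minimal sketch: fit a Gaussian to normal sensor data, score new readings by
# log-probability, and call readings below a threshold abnormal.
import numpy as np

def learn(normal_data):
    mean = normal_data.mean(axis=0)
    cov = np.cov(normal_data, rowvar=False) + 1e-6 * np.eye(normal_data.shape[1])
    return mean, cov

def log_probability(x, mean, cov):
    d = x - mean
    k = len(mean)
    return -0.5 * (d @ np.linalg.solve(cov, d) + np.log(np.linalg.det(cov)) + k * np.log(2 * np.pi))

def is_abnormal(x, mean, cov, threshold=-10.0):
    return log_probability(x, mean, cov) < threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    normal = rng.normal(loc=[0.0, 5.0], scale=0.5, size=(1000, 2))  # normal sensor readings
    mean, cov = learn(normal)
    print(is_abnormal(np.array([0.1, 5.2]), mean, cov))   # False
    print(is_abnormal(np.array([4.0, -3.0]), mean, cov))  # True
```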
  • Publication number: 20220230025
    Abstract: An information processing device includes a memory, and processing circuitry coupled to the memory. The processing circuitry is configured to acquire gradation processing target image data, and perform gradation processing on the target image data based on a model learned in advance.
    Type: Application
    Filed: April 6, 2022
    Publication date: July 21, 2022
    Applicant: Preferred Networks, Inc.
    Inventor: Taizan YONETSUJI
  • Patent number: 11387844
    Abstract: One aspect of the present disclosure relates to a data compression method. The method includes generating, by one or more processors, compressed data from data, wherein the compressed data includes one or more unduplicated values of the data and generating, by the one or more processors, index data from the data, wherein the index data includes indices indicative of storage locations for the unduplicated values.
    Type: Grant
    Filed: April 2, 2020
    Date of Patent: July 12, 2022
    Assignee: Preferred Networks, Inc.
    Inventor: Tanvir Ahmed
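The compressed-data layout in patent 11387844 maps naturally onto a dictionary-style encoding, sketched below with NumPy purely for illustration: the unduplicated values are stored once, and the index data records where each original element's value lives.

```python
# A minimal sketch: compressed data = unduplicated values + index data; the
# original array is recovered by indexing the values with the indices.
import numpy as np

def compress(data):
    unduplicated_values, index_data = np.unique(data, return_inverse=True)
    return unduplicated_values, index_data

def decompress(unduplicated_values, index_data):
    return unduplicated_values[index_data]

if __name__ == "__main__":
    data = np.array([7, 7, 3, 7, 3, 3, 9])
    values, indices = compress(data)
    print(values)   # [3 7 9]
    print(indices)  # [1 1 0 1 0 0 2]
    assert np.array_equal(decompress(values, indices), data)
```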
  • Publication number: 20220207370
    Abstract: An inferring device includes one or more memories and one or more processors. The one or more processors input a vector relating to an atom into a first network, which extracts a feature of the atom in a latent space from that vector, and infer the feature of the atom in the latent space through the first network.
    Type: Application
    Filed: March 18, 2022
    Publication date: June 30, 2022
    Applicant: Preferred Networks, Inc.
    Inventor: Daisuke MOTOKI
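Finally, the atom-feature extraction in publication 20220207370 can be pictured as a small feed-forward "first network"; the dimensions, the two-layer architecture, and the one-hot atom encoding below are assumptions for illustration, not the claimed model.

```python
# A minimal sketch: map a vector relating to an atom to the atom's feature in a latent space.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=(8, 4))  # assumed two-layer "first network"

def atom_to_latent(atom_vector):
    hidden = np.maximum(atom_vector @ W1, 0.0)  # ReLU layer
    return hidden @ W2                          # feature of the atom in the latent space

if __name__ == "__main__":
    atom_vector = np.eye(16)[6]                 # assumed one-hot encoding of an atom type
    print(atom_to_latent(atom_vector))          # 4-dimensional latent feature
```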