Abstract: An optimization apparatus includes one or more memories and one or more processors. For an operation node constituting a representation of an operation of a neural network, the one or more processors are configured to calculate a time consumption for recomputing an operation result of a focused operation node, from another operation node whose operation result has been stored, and acquire data on the operation node whose operation result is to be stored, based on the time consumption.
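The abstract above describes choosing which operation results to keep based on how long it would take to recompute them from already-stored results. A minimal sketch of that kind of decision, assuming a hypothetical linear chain of operation nodes with per-node compute times; the time-budget heuristic below is illustrative, not the patented method.

# Illustrative sketch (not the patented algorithm): keep an operation result
# whenever recomputing it from the nearest stored ancestor would cost more
# than a chosen time budget.

from dataclasses import dataclass

@dataclass
class OpNode:
    name: str
    compute_time: float   # seconds to (re)execute this node
    output_bytes: int     # size of its operation result

def select_nodes_to_store(chain, time_budget_per_node):
    """Walk a linear chain of nodes and mark a node's result as 'stored'
    whenever recomputing it from the last stored result would exceed
    time_budget_per_node seconds."""
    stored = []
    recompute_cost = 0.0          # time to recompute from the last stored node
    for node in chain:
        recompute_cost += node.compute_time
        if recompute_cost > time_budget_per_node:
            stored.append(node.name)
            recompute_cost = 0.0  # this result is now available directly
    return stored

chain = [OpNode("conv1", 0.02, 4_000_000),
         OpNode("relu1", 0.001, 4_000_000),
         OpNode("conv2", 0.03, 2_000_000),
         OpNode("relu2", 0.001, 2_000_000),
         OpNode("fc", 0.005, 40_000)]

print(select_nodes_to_store(chain, time_budget_per_node=0.025))
# ['conv2'] -- its result is kept; the other nodes are recomputed on demand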
Abstract: A training apparatus includes one or more memories and one or more processors. The one or more processors are configured to generate a graph based on a path of an error backward propagation, assign an identifier to each node based on the path of the error backward propagation in the graph, and execute the error backward propagation based on the graph and on the identifier.
Type:
Application
Filed:
November 25, 2019
Publication date:
May 28, 2020
Applicant:
Preferred Networks, Inc.
Inventors:
Seiya TOKUI, Daisuke NISHINO, Hiroyuki Vincent YAMAZAKI, Naotoshi SEO, Akifumi IMANISHI
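As an illustration of the training-apparatus abstract above (a graph generated from the error backward propagation path, with an identifier assigned to each node along that path), here is a minimal sketch; the depth-first traversal and the toy graph are assumptions, not the patented procedure.

# Illustrative sketch: walk the backward-propagation path, assign an
# identifier to each node in that order, and visit nodes by identifier
# when executing the backward pass.

def backward_order(outputs, inputs_of):
    """Depth-first walk from the output nodes along the backward path;
    returns nodes in the order the error would reach them."""
    order, seen = [], set()
    stack = list(outputs)
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        stack.extend(inputs_of.get(node, []))
    return order

# toy graph: loss <- fc <- relu <- conv <- x
inputs_of = {"loss": ["fc"], "fc": ["relu"], "relu": ["conv"], "conv": ["x"]}

order = backward_order(["loss"], inputs_of)
node_id = {node: i for i, node in enumerate(order)}   # identifier per node
for node in sorted(node_id, key=node_id.get):
    print(node_id[node], node)   # the backward pass visits nodes in this order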
Abstract: A device for shortening the time required for learning curve prediction includes a sampler, a learning curve predictor, a learning executor, and a learning curve calculator. The sampler samples a weight parameter of a parameter model which outputs a parameter of a learning curve model of a neural network (NNW) on the basis of a set value of a hyperparameter of the NNW. The learning curve predictor calculates a prediction learning curve of the NNW on the basis of the sampled weight parameter and an actual learning curve of the NNW. The learning executor advances learning in the NNW. The learning curve calculator calculates an actual learning curve resulting from the advance of the learning in the NNW. The learning curve predictor updates the prediction learning curve of the NNW on the basis of the weight parameter sampled before the learning advances and the actual learning curve calculated after the learning advances.
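A rough sketch of the prediction step described above, under strong assumptions: the learning-curve model is taken to be a power law, and the parameter model is replaced by a simple random sampler; neither choice comes from the patent.

# Illustrative sketch (not the patented device): sample parameters of a simple
# power-law learning-curve model, then refit the prediction as new points of
# the actual learning curve arrive.

import numpy as np

rng = np.random.default_rng(0)

def curve(epochs, a, b, c):
    # assumed learning-curve family: loss ~ a * epochs**(-b) + c
    return a * epochs ** (-b) + c

def sample_parameters(n):
    # stand-in for the parameter model conditioned on the hyperparameter setting
    return rng.uniform([0.5, 0.1, 0.0], [2.0, 1.0, 0.5], size=(n, 3))

def predict(observed_epochs, observed_loss, future_epochs, n_samples=200):
    samples = sample_parameters(n_samples)
    # keep the sampled parameters that best explain the actual curve so far
    errors = [np.mean((curve(observed_epochs, *p) - observed_loss) ** 2) for p in samples]
    best = samples[np.argsort(errors)[:20]]
    return np.mean([curve(future_epochs, *p) for p in best], axis=0)

observed = np.arange(1, 6)
actual = curve(observed, 1.2, 0.6, 0.1) + rng.normal(0, 0.01, observed.size)
print(predict(observed, actual, np.arange(6, 21)))   # predicted remainder of the curve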
Abstract: According to some embodiments, a tactile information estimation apparatus may include one or more memories and one or more processors. The one or more processors are configured to input at least first visual information of an object acquired by a visual sensor to a model. The model is generated based on visual information and tactile information linked to the visual information. The one or more processors are configured to extract, based on the model, a feature amount relating to tactile information of the object.
Type:
Application
Filed:
December 6, 2019
Publication date:
April 30, 2020
Applicant:
Preferred Networks, Inc.
Inventors:
Kuniyuki TAKAHASHI, Jethro Eliezer Tanuwijaya TAN
Abstract: A computer is caused to realize: a line drawing data acquisition function to acquire line drawing data to be colored; a size-reducing process function to perform a size-reducing process on the acquired line drawing data to a predetermined reduced size so as to obtain size-reduced line drawing data; a first coloring process function to perform a coloring process on the size-reduced line drawing data based on a first learned model that has previously learned the coloring process on size-reduced line drawing data by using sample data; and a second coloring process function to perform a coloring process on original line drawing data by receiving an input of the original line drawing data and the colored, size-reduced line drawing data on which the first coloring process function has performed the coloring, based on a second learned model that has previously learned the coloring process on the sample data by receiving an input of the sample data and colored, size-reduced sample data.
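A minimal sketch of the two-stage coloring pipeline described above; the two model functions are placeholders (a simple channel repeat and an upsample-and-blend), and the reduced size is an assumed value, not taken from the patent.

# Illustrative sketch of the two-stage pipeline:
# 1) shrink the line drawing, 2) color the small image with a first model,
# 3) color the full-resolution drawing with a second model that also receives
#    the colored, size-reduced result.

import numpy as np

REDUCED_SIZE = (128, 128)   # assumed predetermined reduced size

def resize(image, size):
    # nearest-neighbour resize, enough for a sketch
    ys = np.linspace(0, image.shape[0] - 1, size[0]).astype(int)
    xs = np.linspace(0, image.shape[1] - 1, size[1]).astype(int)
    return image[np.ix_(ys, xs)]

def first_coloring_model(small_line_drawing):
    # placeholder for the first learned model: returns an RGB image
    return np.repeat(small_line_drawing[..., None], 3, axis=2)

def second_coloring_model(line_drawing, colored_small):
    # placeholder for the second learned model: upsample the small colored
    # image and blend it with the original line drawing
    upsampled = resize(colored_small, line_drawing.shape[:2])
    return np.minimum(upsampled, line_drawing[..., None])

line_drawing = np.random.rand(512, 512)                # stand-in line drawing data
small = resize(line_drawing, REDUCED_SIZE)             # size-reducing process
colored_small = first_coloring_model(small)            # first coloring process
colored_full = second_coloring_model(line_drawing, colored_small)  # second coloring process
print(colored_full.shape)                              # (512, 512, 3)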
Abstract: An estimation device includes a memory and at least one processor. The at least one processor is configured to acquire information regarding a target object. The at least one processor is configured to estimate information regarding a location and a posture of a gripper relating to where the gripper is able to grasp the target object. The estimation is based on an output of a neural model having as an input the information regarding the target object. The estimated information regarding the posture includes information capable of expressing a rotation angle around a plurality of axes.
Abstract: Apparatus, methods, and systems for cross-domain time series data conversion are disclosed. In an example embodiment, a first time series of a first type of data is received and stored. The first time series of the first type of data is encoded as a first distributed representation for the first type of data. The first distributed representation is converted to a second distributed representation for a second type of data which is different from the first type of data. The second distributed representation for the second type of data is decoded as a second time series of the second type of data.
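The cross-domain conversion above follows an encode, convert, decode sequence. The sketch below stands in for those three learned stages with random linear maps, purely to show the data flow; the dimensions and the linearity are assumptions.

# Illustrative sketch of encode -> convert -> decode for cross-domain time
# series conversion; the three matrices are placeholders for the learned
# encoder, converter, and decoder.

import numpy as np

rng = np.random.default_rng(1)
DIM_A, DIM_B, LATENT = 4, 6, 16                  # assumed dimensionalities

encode_a = rng.normal(size=(DIM_A, LATENT))      # domain-A encoder
convert_ab = rng.normal(size=(LATENT, LATENT))   # A-latent -> B-latent converter
decode_b = rng.normal(size=(LATENT, DIM_B))      # domain-B decoder

def convert_series(series_a):
    """series_a: (timesteps, DIM_A) -> (timesteps, DIM_B)"""
    latent_a = series_a @ encode_a           # first distributed representation
    latent_b = latent_a @ convert_ab         # second distributed representation
    return latent_b @ decode_b               # decoded second time series

series_a = rng.normal(size=(100, DIM_A))     # stored first time series
print(convert_series(series_a).shape)        # (100, 6)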
Abstract: A server device configured to communicate, via a communication network, with at least one device including a learner configured to perform processing by using a learned model, includes a processor, a transmitter, and a storage configured to store a plurality of shared models pre-learned in accordance with environments and conditions of various devices. The processor is configured to acquire device data including information on an environment and conditions from the at least one device, and select an optimum shared model for the at least one device based on the acquired device data. The transmitter is configured to transmit a selected shared model to the at least one device.
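A small sketch of the selection step in the server-device abstract above, assuming the shared models are tagged with environment/condition metadata and that matching is done by counting matching fields; the metadata fields and the scoring rule are illustrative.

# Illustrative sketch: pick the pre-learned shared model whose metadata best
# matches the device data reported by an edge device.

SHARED_MODELS = [
    {"id": "model-indoor-cam", "environment": "indoor", "sensor": "camera"},
    {"id": "model-outdoor-cam", "environment": "outdoor", "sensor": "camera"},
    {"id": "model-factory-mic", "environment": "factory", "sensor": "microphone"},
]

def select_shared_model(device_data):
    """Return the shared model with the most matching metadata fields."""
    def score(model):
        return sum(1 for key, value in device_data.items() if model.get(key) == value)
    return max(SHARED_MODELS, key=score)

device_data = {"environment": "outdoor", "sensor": "camera"}
print(select_shared_model(device_data)["id"])   # model-outdoor-cam -> transmitted to the device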
Abstract: There is provided an information processing device which efficiently executes machine learning. The information processing device according to one embodiment includes: an obtaining unit which obtains a source code including a code which defines Forward processing of each layer constituting a neural network; a storage unit which stores an association relationship between each Forward processing and Backward processing associated with each Forward processing; and an executing unit which successively executes each code included in the source code, and which calculates an output value of the Forward processing defined by the code based on an input value at a time of execution of each code, and generates a reference structure for Backward processing in a layer associated with the code based on the association relationship stored in the storage unit.
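The abstract above describes a define-by-run style of execution: each Forward call both computes its output and records a reference to the associated Backward processing. The sketch below shows that idea with two toy forward/backward pairs; the function names and the tape structure are assumptions, not the patented library.

# Illustrative sketch of define-by-run execution: running the "source code"
# computes forward outputs and simultaneously builds the reference structure
# used later by the backward pass.

import numpy as np

BACKWARD_OF = {}          # association between Forward and Backward processing
TAPE = []                 # reference structure built at execution time

def register(forward_name, backward_fn):
    BACKWARD_OF[forward_name] = backward_fn

def run_forward(forward_name, forward_fn, x):
    y = forward_fn(x)                         # output value of the Forward processing
    TAPE.append((forward_name, x, y))         # reference for Backward processing
    return y

register("relu", lambda x, y, gy: gy * (x > 0))
register("double", lambda x, y, gy: 2.0 * gy)

# the "source code" of the network, executed line by line
x = np.array([-1.0, 2.0, 3.0])
h = run_forward("double", lambda v: 2.0 * v, x)
out = run_forward("relu", lambda v: np.maximum(v, 0.0), h)

# the backward pass replays the recorded reference structure in reverse
grad = np.ones_like(out)
for name, inp, outp in reversed(TAPE):
    grad = BACKWARD_OF[name](inp, outp, grad)
print(grad)    # gradient w.r.t. x: [0. 2. 2.]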
Abstract: Machine learning with model filtering and model mixing for edge devices in a heterogeneous environment is disclosed. In an example embodiment, an edge device includes a communication module, a data collection device, a memory, a machine learning module, and a model mixing module. The edge device analyzes collected data with a model for a first task, outputs a result, and updates the model to create a local model. The edge device communicates with other edge devices in a heterogeneous group, transmits a request for local models to the heterogeneous group, and receives local models from the heterogeneous group. The edge device filters the local models by structure metadata, including second local models, which relate to a second task. The edge device performs a mix operation of the second local models to generate a mixed model which relates to the second task, and transmits the mixed model to the heterogeneous group.
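A compact sketch of the filter-then-mix flow described above, assuming each received local model is a dictionary carrying a task label and a weight array; the parameter averaging shown is one possible mix operation, not necessarily the patented one.

# Illustrative sketch: filter peer models by structure metadata so only models
# for the second task remain, then mix them by simple parameter averaging.

import numpy as np

def filter_by_structure(local_models, wanted_task, wanted_shape):
    return [m for m in local_models
            if m["task"] == wanted_task and m["weights"].shape == wanted_shape]

def mix(models):
    return {"task": models[0]["task"],
            "weights": np.mean([m["weights"] for m in models], axis=0)}

received = [
    {"task": "task2", "weights": np.ones((3, 3))},
    {"task": "task2", "weights": 3 * np.ones((3, 3))},
    {"task": "task1", "weights": np.zeros((5, 5))},   # filtered out by structure metadata
]

second_task_models = filter_by_structure(received, "task2", (3, 3))
mixed_model = mix(second_task_models)        # transmitted back to the heterogeneous group
print(mixed_model["weights"][0, 0])          # 2.0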
Abstract: A gaze point estimation processing apparatus in an embodiment includes a storage configured to store a neural network as a gaze point estimation model and one or more processors. The storage stores a gaze point estimation model generated through learning based on an image for learning and information relating to a first gaze point for the image for learning. The one or more processors estimate information relating to a second gaze point with respect to an image for estimation from the image for estimation using the gaze point estimation model.
Abstract: A motion generating apparatus includes memory and processing circuitry coupled to the memory. The memory is configured to store a learned model. The learned model outputs, when path information is input, motion information of an object which moves according to the path information. The processing circuitry accepts input of parameters regarding a plurality of objects, and generates path information of the plurality of objects based on the parameters according to predetermined rules. The processing circuitry inputs the generated path information of the plurality of objects into the learned model, and causes the learned model to generate motion information with respect to the path information of the plurality of objects.
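A sketch of the two stages in the motion-generating abstract above, with assumed interfaces: a rule that turns input parameters into straight-line paths, and a placeholder for the learned model that maps a path to motion information.

# Illustrative sketch: generate path information for several objects from
# parameters using a simple rule, then hand each path to a placeholder
# learned model that produces motion information.

import numpy as np

def generate_paths(parameters, steps=50):
    """Rule-based path generation: a straight line from start to goal per object."""
    paths = []
    for p in parameters:
        t = np.linspace(0.0, 1.0, steps)[:, None]
        paths.append((1 - t) * np.asarray(p["start"]) + t * np.asarray(p["goal"]))
    return paths

def learned_motion_model(path):
    # placeholder for the learned model: motion = positions plus velocities
    velocity = np.gradient(path, axis=0)
    return np.concatenate([path, velocity], axis=1)

parameters = [{"start": [0.0, 0.0], "goal": [1.0, 2.0]},
              {"start": [1.0, 1.0], "goal": [0.0, 3.0]}]

motions = [learned_motion_model(path) for path in generate_paths(parameters)]
print(motions[0].shape)   # (50, 4): x, y and their velocities per step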
Abstract: A control apparatus performs analysis by using partial information and determines whether or not communication is abnormal. If the communication is determined to be abnormal, the control apparatus controls a communication route for a communication control device such that the communication is transmitted from a communication apparatus to the control apparatus. Further, the control apparatus determines whether or not the communication transmitted by the control of the communication route is malicious communication. As a result, if the communication is determined to be malicious communication, the control apparatus controls the communication control device to restrict the malicious communication.
Type:
Grant
Filed:
April 26, 2017
Date of Patent:
August 27, 2019
Assignees:
NIPPON TELEGRAPH AND TELEPHONE CORPORATION, Preferred Networks, Inc.
Abstract: An autoencoder includes memory configured to store data including an encode network and a decode network, and processing circuitry coupled to the memory. The processing circuitry is configured to cause the encode network to convert inputted data to a plurality of values and output the plurality of values, batch-normalize values indicated by at least two or more layers of the encode network, out of the output plurality of values, the batch-normalized values having a predetermined average value and a predetermined variance value, quantize each of the batch-normalized values, and cause the decode network to decode each of the quantized values.
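A NumPy sketch of the processing order in the autoencoder abstract above: encode, batch-normalize the encoder outputs to a chosen mean and variance, quantize each value, then decode. The single-layer networks, the target statistics, and the 8-level quantizer are assumptions made only for illustration.

# Illustrative sketch (not the patented network).

import numpy as np

rng = np.random.default_rng(2)
W_enc = rng.normal(size=(10, 4))    # placeholder encode network (one linear layer)
W_dec = rng.normal(size=(4, 10))    # placeholder decode network

TARGET_MEAN, TARGET_VAR, LEVELS = 0.0, 1.0, 8

def batch_normalize(z, eps=1e-5):
    z = (z - z.mean(axis=0)) / np.sqrt(z.var(axis=0) + eps)
    return z * np.sqrt(TARGET_VAR) + TARGET_MEAN     # predetermined mean and variance

def quantize(z):
    # clip to [-3, 3] and map to LEVELS discrete values
    z = np.clip(z, -3.0, 3.0)
    step = 6.0 / (LEVELS - 1)
    return np.round((z + 3.0) / step) * step - 3.0

x = rng.normal(size=(32, 10))          # a batch of inputted data
z = x @ W_enc                          # plurality of values from the encode network
z = quantize(batch_normalize(z))       # batch-normalize, then quantize each value
x_hat = z @ W_dec                      # decode network output
print(x_hat.shape)                     # (32, 10)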
Abstract: A behavior determining method includes causing a program to operate on a virtual environment including a virtual memory, while the program is operating on the virtual environment, generating access information of the virtual memory for determining a behavior of the program, based on information of at least one of a first flag or a second flag, the first flag indicating whether or not the program has read from a location in a virtual address space, and the second flag indicating whether or not the program has written to the location in the virtual address space, and inferring whether the behavior of the program is normal or abnormal, based on the access information.
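A small sketch of the access-information idea above, with assumed data structures: one read flag and one write flag per page of a virtual address space, and a stand-in threshold rule for the normal/abnormal inference at the end.

# Illustrative sketch: record read/write flags per page while a program runs,
# then use the resulting access pattern to infer normal or abnormal behavior.

PAGE_SIZE = 4096

def record_access(flags, address, kind):
    page = address // PAGE_SIZE
    read_flag, write_flag = flags.get(page, (False, False))
    if kind == "read":
        read_flag = True        # first flag: the program has read this location
    else:
        write_flag = True       # second flag: the program has written to this location
    flags[page] = (read_flag, write_flag)

def infer_behavior(flags, max_written_pages=100):
    written = sum(1 for _, wrote in flags.values() if wrote)
    return "abnormal" if written > max_written_pages else "normal"

flags = {}
for addr in range(0, 200 * PAGE_SIZE, PAGE_SIZE):   # e.g. a program writing to 200 pages
    record_access(flags, addr, "write")
print(infer_behavior(flags))    # abnormal: the access pattern exceeds the assumed threshold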
Abstract: An information processing apparatus includes a memory and processing circuitry coupled to the memory. The processing circuitry is configured to acquire target image data to be subjected to coloring, designate an area to be subjected to coloring by using reference information in the target image data, determine reference information to be used for the designated area, and perform a coloring process on the designated area by using the determined reference information, based on a learned model for coloring which has been previously learned in the coloring process using the reference information.
Abstract: An information processing device includes a memory, and processing circuitry coupled to the memory. The processing circuitry is configured to acquire gradation processing target image data, and perform gradation processing on the gradation processing target image data based on a learned model learned in advance.
Abstract: A data augmentation apparatus includes a memory and processing circuitry coupled to the memory. The processing circuitry is configured to input a first data set including first image data and first text data related to the first image data, perform first image processing on the first image data to obtain second image data, edit the first text data based on contents of the first image processing to obtain the edited first text data as second text data, and output an augmented data set including the second image data and the second text data.
Type:
Application
Filed:
November 21, 2018
Publication date:
May 23, 2019
Applicant:
Preferred Networks, Inc.
Inventors:
Yuta TSUBOI, Yuya UNNO, Jun HATORI, Sosuke KOBAYASHI, Yuta KIKUCHI
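As an illustration of the data augmentation abstract above (transform the image, then edit the paired text so it still matches), here is a minimal sketch; the horizontal flip and the left/right word swap are example transformations, not necessarily those covered by the patent.

# Illustrative sketch: apply first image processing to the first image data
# and edit the first text data to stay consistent with it.

import numpy as np

def augment(image, text):
    flipped = image[:, ::-1]                        # first image processing: horizontal flip
    swapped = (text.replace("left", "\0")           # edit the text to match the flip
                   .replace("right", "left")
                   .replace("\0", "right"))
    return flipped, swapped                         # second image data and second text data

first_image = np.arange(12).reshape(3, 4)           # stand-in first image data
first_text = "a cup on the left of the plate"       # first text data
second_image, second_text = augment(first_image, first_text)
print(second_text)    # "a cup on the right of the plate"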
Abstract: An object detection device includes a memory that stores data, and processing circuitry coupled to the memory. The processing circuitry is configured to set an object point indicating a position of an object in image data, detect a candidate region that is a candidate for an object region where the object in the image data exists, select the candidate region having the object point as an object region, and output the selected candidate region as the object region where the object exists.
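A minimal sketch of the selection step in the object-detection abstract above, assuming boxes in (x1, y1, x2, y2) format: among the detected candidate regions, keep the ones that contain a set object point and output them as object regions.

# Illustrative sketch: select candidate regions that contain an object point.

def contains(box, point):
    x1, y1, x2, y2 = box
    px, py = point
    return x1 <= px <= x2 and y1 <= py <= y2

def select_object_regions(candidate_boxes, object_points):
    selected = []
    for point in object_points:
        for box in candidate_boxes:
            if contains(box, point):
                selected.append(box)    # candidate region having the object point
                break
    return selected

candidates = [(10, 10, 50, 50), (40, 40, 120, 120), (200, 200, 260, 260)]
object_points = [(30, 30), (220, 230)]
print(select_object_regions(candidates, object_points))
# [(10, 10, 50, 50), (200, 200, 260, 260)] -- output as the object regions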
Abstract: A line drawing automatic coloring method according to the present disclosure includes: acquiring line drawing data of a target to be colored; receiving at least one local style designation for applying a selected local style to at least one place of the acquired line drawing data; and performing coloring processing reflecting the local style designation on the line drawing data based on a learned model for coloring that has previously learned the coloring processing using line drawing data and local style designations as inputs.