Abstract: A deep learning framework application database server includes: an input/output unit configured to receive an inference query from a user; a storage unit serving as a database and comprising a previously learned first learning model table and an inference dataset table associated with the inference query; and a framework unit configured to interwork with the database and perform deep learning for the inference query using the first learning model table and the inference dataset table.
Abstract: A system for continuous integration and deployment of a service model using a deep learning framework includes: a plurality of edge servers configured to provide a deep learning inference service; a distributed deep learning training cloud comprising a plurality of distributed servers, each comprising a deep learning framework application query-based deep learning database server, and a main server configured to manage the plurality of distributed servers and to perform distributed training for a learning model; a software configuration management (SCM) repository configured to automatically handle revision, version management, backup, and rollback processes of a service model table, which is the outcome of the service model, i.e., the learning model subjected to distributed training; and a controller configured to, according to a predetermined deployment policy, deploy the service model table to be executed on the edge servers when changes to the service model table occur in the SCM repository.
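The continuous-deployment loop described above, where a controller detects a new revision of the service model table in the SCM repository and pushes it to edge servers under a deployment policy, can be sketched as below. This is a hedged, self-contained sketch: the `ScmRepository`, `EdgeServer`, and `Controller` classes and the polling mechanism are assumptions made for illustration, not the patented design.

```python
# Hypothetical sketch of the deployment path: an SCM repository versioning the
# service model table, edge servers that execute it, and a controller that
# deploys a new revision according to a deployment policy.
# All class and method names are illustrative assumptions.

class ScmRepository:
    """Holds revisions of the service model table (supports rollback)."""
    def __init__(self):
        self.revisions = []  # list of (version, model_table)

    def commit(self, model_table):
        self.revisions.append((len(self.revisions) + 1, model_table))

    def head(self):
        return self.revisions[-1]

    def rollback(self):
        self.revisions.pop()


class EdgeServer:
    """Provides the deep learning inference service using the deployed table."""
    def __init__(self, name):
        self.name, self.deployed = name, None

    def deploy(self, version, model_table):
        self.deployed = (version, model_table)


class Controller:
    """Deploys the service model table when the SCM repository changes."""
    def __init__(self, repo, edges, policy):
        self.repo, self.edges, self.policy = repo, edges, policy
        self.last_seen = 0

    def poll(self):
        version, table = self.repo.head()
        if version > self.last_seen:              # change detected in the repo
            for edge in self.policy(self.edges):  # policy selects target edges
                edge.deploy(version, table)
            self.last_seen = version


repo = ScmRepository()
edges = [EdgeServer("edge-1"), EdgeServer("edge-2")]
# Example deployment policy: deploy to every edge server; a canary or staged
# policy could instead return a subset of the edge servers.
ctrl = Controller(repo, edges, policy=lambda e: e)
repo.commit({"layer1.weight": [0.1, 0.2]})  # new service model table revision
ctrl.poll()
print(edges[0].deployed[0])  # 1
```

Because the repository keeps every revision, a faulty deployment can be undone with `rollback()` followed by another `poll()`, which matches the revision, backup, and rollback role the abstract assigns to the SCM repository.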