Patents by Inventor Seiya Shibata

Seiya Shibata has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220284595
    Abstract: An object tracking device includes an object detection unit and a matching unit. The object detection unit includes a plurality of detectors having different processing speeds or accuracy. The object detection unit detects, for each of frames of image data input in time series, each object from the image data using one detector selected from the plurality of the detectors. The matching unit matches a current object detection result which the object detection unit generates from image data of a current frame with a previous object detection result which the object detection unit generates from image data of a previous frame before the current frame, and generates a tracking result of the object based on a matching result.
    Type: Application
    Filed: July 25, 2019
    Publication date: September 8, 2022
    Applicant: NEC Corporation
    Inventors: Hiroaki IGARASHI, Seiya SHIBATA, Liang FENG
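    Illustrative sketch: the entry above (20220284595) describes selecting one of several detectors per frame and matching the current detections against the previous frame's result. The Python sketch below assumes a simple selection policy (an accurate detector every N frames, a fast one otherwise) and greedy IoU matching; all names, thresholds, and the policy itself are illustrative assumptions, not the patented design.

        from dataclasses import dataclass
        from itertools import count

        @dataclass
        class Track:
            track_id: int
            box: tuple  # (x1, y1, x2, y2)

        def iou(a, b):
            """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
            ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
            ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
            union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
            return inter / union if union else 0.0

        def track(frames, fast_detector, accurate_detector, every_n=5, iou_thr=0.3):
            """Per frame: detect with one selected detector, then match against previous tracks."""
            tracks, ids = [], count()
            for i, frame in enumerate(frames):
                detector = accurate_detector if i % every_n == 0 else fast_detector
                current = []
                for box in detector(frame):          # current object detection result
                    best = max(tracks, key=lambda t: iou(t.box, box), default=None)
                    if best is not None and iou(best.box, box) >= iou_thr:
                        tracks.remove(best)          # matched: carry the track id forward
                        current.append(Track(best.track_id, box))
                    else:
                        current.append(Track(next(ids), box))   # unmatched: new track
                tracks = current
                yield i, tracks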
  • Publication number: 20220215237
    Abstract: Each chip 70 includes a weight storage unit that stores weights for each edge determined by learning under the following conditions: the channels in a first layer of a neural network and the channels in a 0th layer, the layer preceding the first layer, are each divided into groups whose number equals the number of chips; the groups of channels in the first layer, the groups of channels in the 0th layer, and the chips are associated with one another; an edge is set between channels belonging to corresponding groups; and an edge is set between channels belonging to non-corresponding groups only under a restriction. The weight storage unit of each chip stores the weights determined for the edges between the channels, belonging to corresponding groups, that correspond to that chip.
    Type: Application
    Filed: May 8, 2019
    Publication date: July 7, 2022
    Applicant: NEC Corporation
    Inventors: Takashi TAKENAKA, Fumiyo TAKANO, Seiya SHIBATA, Hiroaki INOUE
  • Publication number: 20220207339
    Abstract: The determining unit 72 divides the channels in the 0th layer and the channels in the first layer, respectively, into groups whose number equals the number of chips included in an operation device that executes an operation of the neural network using a learning result of the weight for each edge. The determining unit 72 determines the association between the groups of channels in the 0th layer, the groups of channels in the first layer, and the chips included in the operation device, determines the edges to be removed, and removes those edges. The weight assignment unit 73 stores the weight of each edge in the weight storage unit of the chip corresponding to that edge.
    Type: Application
    Filed: May 8, 2019
    Publication date: June 30, 2022
    Applicant: NEC Corporation
    Inventors: Takashi TAKENAKA, Fumiyo TAKANO, Seiya SHIBATA, Hiroaki INOUE
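    Illustrative sketch: the two entries above (20220215237 and 20220207339) divide the channels of adjacent layers into as many groups as there are chips, associate the groups with the chips, and remove edges between non-corresponding groups so that each chip only needs the weights for its own group. The NumPy sketch below makes the simplifying assumption that all cross-group edges are removed; the function and variable names are invented for illustration.

        import numpy as np

        def assign_weights_to_chips(weights, num_chips):
            """Split a (first-layer channels x 0th-layer channels) weight matrix across chips.

            Channels of both layers are divided into num_chips groups; group g of each
            layer is associated with chip g, edges between non-corresponding groups are
            removed, and chip g stores only the weight block for its own groups.
            """
            out_groups = np.array_split(np.arange(weights.shape[0]), num_chips)
            in_groups = np.array_split(np.arange(weights.shape[1]), num_chips)
            return [weights[np.ix_(out_groups[g], in_groups[g])] for g in range(num_chips)]

        weights = np.random.randn(8, 6)                  # toy layer: 8 x 6 channels
        per_chip = assign_weights_to_chips(weights, num_chips=2)
        print([w.shape for w in per_chip])               # [(4, 3), (4, 3)]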
  • Publication number: 20220194502
    Abstract: A vehicle inspection method in a production line of a vehicle includes: performing a part inspection in the steps of manufacturing parts of the vehicle, in which each part is inspected with an inspection device after being manufactured against one or more inspection items; storing a result of the part inspection in a storage device such that the result is associated with the part; and displaying, on a display device used in a completed-vehicle inspection for inspecting the completed vehicle, information indicating at least the part corresponding to a failed item, i.e. an inspection item determined not to be passed in the result of the part inspection stored in the storage device, and information indicating the failed item.
    Type: Application
    Filed: November 15, 2021
    Publication date: June 23, 2022
    Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Tatsuhiro TAKAHASHI, Hiromitsu KAMATA, Satoru KOSHI, Kazuki INOUE, Tetsuya TAMOTO, Tooru NISHIMURA, Kiyoshi SUZUKI, Shinya IMAI, Seiya SHIBATA
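    Illustrative sketch: entry 20220194502 is essentially a data flow in which part-inspection results are stored keyed by part and the completed-vehicle inspection display shows the failed items together with the parts they belong to. A minimal Python sketch of that association; every identifier here is a made-up example, not the system described in the application.

        from collections import defaultdict

        # part-inspection results, stored so each result stays associated with its part
        results_by_part = defaultdict(list)

        def record_part_inspection(part_id, item, passed):
            """Store one inspection-item result for a manufactured part."""
            results_by_part[part_id].append({"item": item, "passed": passed})

        def failed_items_for_display():
            """What the completed-vehicle inspection display would show: (part, failed item)."""
            return [(part, r["item"])
                    for part, results in results_by_part.items()
                    for r in results if not r["passed"]]

        record_part_inspection("door-left", "weld strength", passed=True)
        record_part_inspection("door-left", "paint thickness", passed=False)
        print(failed_items_for_display())   # [('door-left', 'paint thickness')]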
  • Publication number: 20220172032
    Abstract: A neural network circuit 201 divides a convolution operation into a convolution operation in a spatial direction and a convolution operation in a channel direction, performs the respective convolution operations separately, and includes a 1×1 convolution operation circuit 10 that performs convolution in the channel direction, an SRAM 20 in which a computation result of the 1×1 convolution operation circuit 10 is stored, and an N×N convolution operation circuit 30 that performs convolution in the spatial direction on the computation result stored in the SRAM 20.
    Type: Application
    Filed: March 25, 2019
    Publication date: June 2, 2022
    Applicant: NEC Corporation
    Inventors: Seiya SHIBATA, Yuka HAYASHI
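    Illustrative sketch: entry 20220172032 factorizes a convolution into a 1×1 channel-direction step whose result is buffered (in SRAM, in the circuit) and an N×N spatial-direction step applied to that buffer. The NumPy sketch below shows the same dataflow in software; shapes, padding, and names are assumptions for illustration, not the circuit.

        import numpy as np

        def pointwise_then_spatial(x, w_point, w_spatial):
            """Factorized convolution: 1x1 across channels, then NxN per channel.

            x         : (C_in, H, W) input feature map
            w_point   : (C_out, C_in) 1x1 channel-direction weights
            w_spatial : (C_out, N, N) spatial-direction weights, one NxN kernel per channel
            """
            c_out, n, _ = w_spatial.shape
            # channel-direction 1x1 convolution; in the circuit this result goes to the SRAM
            buffered = np.einsum("oc,chw->ohw", w_point, x)
            # spatial-direction NxN convolution per output channel (valid padding)
            h_out, w_out = buffered.shape[1] - n + 1, buffered.shape[2] - n + 1
            y = np.zeros((c_out, h_out, w_out))
            for o in range(c_out):
                for i in range(h_out):
                    for j in range(w_out):
                        y[o, i, j] = np.sum(buffered[o, i:i+n, j:j+n] * w_spatial[o])
            return y

        x = np.random.randn(3, 8, 8)
        y = pointwise_then_spatial(x, np.random.randn(4, 3), np.random.randn(4, 3, 3))
        print(y.shape)   # (4, 6, 6)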
  • Publication number: 20220171650
    Abstract: A management apparatus (10) includes: a storage (103) that stores information indicating a correspondence between at least one virtual network function (VNF) operating on a server and a programmable logic circuit (FPGA) capable of operating at least part of a virtual network function; and a controller (106) that causes first and second servers to perform migration of a virtual network function operating on a programmable logic circuit of the first server to the second server, on the basis of the correspondence information.
    Type: Application
    Filed: February 17, 2022
    Publication date: June 2, 2022
    Applicant: NEC Corporation
    Inventors: Hideo HASEGAWA, Shintaro NAKANO, Satoru ISHII, Seiya SHIBATA
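    Illustrative sketch: this entry (and granted patent 11288086 and application 20190129742 below, which share the same abstract) describes a manager that stores the correspondence between VNFs and the programmable logic circuits they run on, and uses that information to migrate a VNF from one server to another. The toy Python class below models only the correspondence store and the migration bookkeeping; all identifiers are hypothetical.

        class VnfManager:
            """Keeps VNF -> (server, FPGA) correspondences and drives migrations from them."""

            def __init__(self):
                self.placement = {}   # vnf name -> {"server": ..., "fpga": ...}

            def register(self, vnf, server, fpga):
                """Record that `vnf` runs (at least partly) on `fpga` of `server`."""
                self.placement[vnf] = {"server": server, "fpga": fpga}

            def migrate(self, vnf, target_server, target_fpga=None):
                """Move a VNF to another server based on the stored correspondence."""
                source = self.placement[vnf]                  # look up where it runs now
                print(f"migrating {vnf}: {source['server']} -> {target_server}")
                self.placement[vnf] = {"server": target_server, "fpga": target_fpga}
                return source, self.placement[vnf]

        mgr = VnfManager()
        mgr.register("firewall", server="server-1", fpga="fpga-0")
        mgr.migrate("firewall", target_server="server-2", target_fpga="fpga-1")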
  • Patent number: 11288086
    Abstract: A management apparatus (10) includes: a storage (103) that stores information indicating a correspondence between at least one virtual network function (VNF) operating on a server and a programmable logic circuit (FPGA) capable of operating at least part of a virtual network function; and a controller (106) that causes first and second servers to perform migration of a virtual network function operating on a programmable logic circuit of the first server to the second server, on the basis of the correspondence information.
    Type: Grant
    Filed: March 27, 2017
    Date of Patent: March 29, 2022
    Assignee: NEC CORPORATION
    Inventors: Hideo Hasegawa, Shintaro Nakano, Satoru Ishii, Seiya Shibata
  • Patent number: 11082297
    Abstract: A network system includes multiple processing units (21-1, 21-2, 22-1, 22-2) on each of which a desired virtual network function can be configured and a management apparatus that determines a communication path that connects the processing units so as to deploy a set of desired virtual network functions. At least one of the processing units includes a first communication interface that is connectable to any different processing unit and at least one second communication interface that is directly connectable to a predetermined different processing unit. The management apparatus determines the communication path for deploying the set of desired virtual network functions, in accordance with respective connectable communication interfaces of the processing units.
    Type: Grant
    Filed: March 27, 2017
    Date of Patent: August 3, 2021
    Assignee: NEC CORPORATION
    Inventors: Seiya Shibata, Takashi Takenaka, Hideo Hasegawa, Satoru Ishii, Shintaro Nakano
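    Illustrative sketch: patent 11082297 (and application 20190109765 below, same abstract) determines a communication path through processing units whose connectivity depends on which interfaces they have: a first interface connectable to any unit, or second interfaces directly connectable only to specific peers. The sketch models that connectivity as an adjacency map and finds a path with breadth-first search; the topology and names are invented for illustration.

        from collections import deque

        # which processing units each unit can reach, combining its general-purpose
        # interface (any unit) and its direct interfaces (specific peers only)
        links = {
            "pu-1": {"pu-2", "pu-3"},
            "pu-2": {"pu-1", "pu-4"},
            "pu-3": {"pu-1", "pu-4"},
            "pu-4": {"pu-2", "pu-3"},
        }

        def find_path(src, dst):
            """Breadth-first search over the connectable-interface graph."""
            queue, seen = deque([[src]]), {src}
            while queue:
                path = queue.popleft()
                if path[-1] == dst:
                    return path
                for nxt in links[path[-1]] - seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
            return None

        print(find_path("pu-1", "pu-4"))   # e.g. ['pu-1', 'pu-2', 'pu-4']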
  • Publication number: 20210201120
    Abstract: An inference apparatus comprises a plurality of PEs (Processing Elements) and a control part. The control part performs a convolution operation in a convolutional neural network using each of a plurality of pieces of input data and a weight group including a plurality of weights corresponding to each of the plurality of pieces of input data, by controlling the plurality of PEs. Further, each of the plurality of PEs executes a computation including multiplication of a single piece of the input data by a single weight and also executes multiplication included in the convolution operation using an element with a non-zero value included in each of the plurality of pieces of input data.
    Type: Application
    Filed: October 22, 2018
    Publication date: July 1, 2021
    Applicant: NEC Corporation
    Inventor: Seiya SHIBATA
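    Illustrative sketch: entry 20210201120 has each PE perform only the multiplications for non-zero elements of its piece of input data. A minimal Python/NumPy sketch of that zero-skipping multiply-accumulate; the data layout and names are assumptions.

        import numpy as np

        def sparse_dot(inputs, weights):
            """Multiply-accumulate that skips zero-valued input elements, as one PE might."""
            acc = 0.0
            for i, x in enumerate(inputs):
                if x != 0:                 # only non-zero elements contribute multiplications
                    acc += x * weights[i]
            return acc

        inputs = np.array([0.0, 2.0, 0.0, 1.0])
        weights = np.array([0.5, 0.25, 0.75, 1.0])
        print(sparse_dot(inputs, weights))   # 2*0.25 + 1*1.0 = 1.5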
  • Publication number: 20210110236
    Abstract: An inference device, including a quantization part that quantizes a result of a convolutional operation in a convolutional neural network using input data and weights; a convolutional operation part that performs a convolutional operation using the quantized operation result as input data; and an input data conversion part that converts the input data to a first layer to enable the convolutional operation part to process both the input data to the first layer and the input data that is quantized by the quantization part in a same way.
    Type: Application
    Filed: February 28, 2019
    Publication date: April 15, 2021
    Applicant: NEC CORPORATION
    Inventor: Seiya SHIBATA
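    Illustrative sketch: entry 20210110236 quantizes each convolution result so the convolution circuit always consumes quantized data, and converts the raw first-layer input into the same format so one datapath handles both. The toy sketch below uses uniform 4-bit quantization and 8-bit image input purely as assumptions for illustration.

        import numpy as np

        def quantize(x, bits=4):
            """Uniformly quantize values in [0, 1] to `bits`-bit integer levels."""
            levels = (1 << bits) - 1
            return np.clip(np.round(x * levels), 0, levels).astype(np.int32)

        def convert_first_layer_input(image):
            """Convert raw 8-bit first-layer input into the same format the
            quantization part produces, so one convolution path handles both."""
            return quantize(image.astype(np.float32) / 255.0)

        image = np.array([[0, 128, 255]], dtype=np.uint8)
        print(convert_first_layer_input(image))   # [[ 0  8 15]] with 4-bit levels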
  • Publication number: 20210004701
    Abstract: An inference device comprises a weight storage part that stores weights, an input data storage part that stores input data, and a PE (Processing Element) that executes convolution computation in a convolutional neural network using the weights and input data. The PE adds up the weight elements to be multiplied with elements of the input data, for each of the variable values that the elements of the input data can take. The PE multiplies each of these variable values by the cumulative sum of weights corresponding to that value. The PE adds up the plurality of multiplication results obtained by the multiplications.
    Type: Application
    Filed: February 28, 2019
    Publication date: January 7, 2021
    Applicant: NEC CORPORATION
    Inventor: Seiya SHIBATA
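    Illustrative sketch: entry 20210004701 reorders the multiply-accumulate: weights are first summed per possible value of the input elements, and each value is then multiplied once by its cumulative weight sum, which reduces multiplications when the inputs are low-bit. A small Python sketch of that reordering; names are assumptions.

        from collections import defaultdict

        def accumulate_then_multiply(inputs, weights):
            """Sum weights per distinct input value, then multiply once per value.

            Equivalent to sum(x * w), but with one multiplication per distinct input
            value, which pays off when inputs take only a few quantized values.
            """
            weight_sum_per_value = defaultdict(float)
            for x, w in zip(inputs, weights):
                weight_sum_per_value[x] += w          # add up weights for this input value
            return sum(v * s for v, s in weight_sum_per_value.items())

        inputs = [1, 3, 1, 3, 0, 1]                   # low-bit activations
        weights = [0.5, 0.25, 0.75, 1.0, 0.1, 0.2]
        print(accumulate_then_multiply(inputs, weights))        # 1*1.45 + 3*1.25 + 0*0.1 = 5.2
        print(sum(x * w for x, w in zip(inputs, weights)))      # same result: 5.2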
  • Publication number: 20200401432
    Abstract: A management method, a management apparatus, and a network system, for efficiently managing a network including programmable logic circuits as a VNF infrastructure, are provided. A management apparatus (10) for a network including servers on which virtual network functions operate stores at least one virtual network function (VNF-1 to VNF-5) operating on a server (A, B, C, D), and server attribute information indicating whether or not the server supports a programmable logic circuit as an operation subject of the virtual network function, wherein the at least one virtual network function and the server attribute information are associated with each other. The management apparatus, at least, manages the server that includes the programmable logic circuit based on the associated information, wherein the virtual network function operates on the server.
    Type: Application
    Filed: March 27, 2016
    Publication date: December 24, 2020
    Applicant: NEC CORPORATION
    Inventors: Shintaro NAKANO, Hideo HASEGAWA, Satoru ISHII, Seiya SHIBATA
  • Publication number: 20200301747
    Abstract: A control method, a control apparatus, a network system, and a server, for efficiently controlling a network including programmable logical circuits as a VM/VNF infrastructure, are provided. A control apparatus (10), for a network including servers on which virtual network functions operate, includes a storage means (102, 103) that stores a correspondence relation among a programmable logic circuit included in a server, a virtual machine operating on the server, and a virtual network function implemented by the virtual machine of the server, and a control means (106) that controls at least the virtual machine and the programmable logic circuit on which the virtual machine operates, based on the correspondence relation.
    Type: Application
    Filed: March 27, 2017
    Publication date: September 24, 2020
    Applicant: NEC CORPORATION
    Inventors: Satoru ISHII, Shintaro NAKANO, Hideo HASEGAWA, Seiya SHIBATA
  • Publication number: 20200301995
    Abstract: An information processing apparatus includes a sparse element detection part, a sparse location weight addition part, a multiplication part, a non-sparse data operation part, and an addition part. The sparse element detection part detects a predetermined sparse element from input data and outputs information about the sparse element. The sparse location weight addition part adds up the weight elements corresponding to the sparse element. The multiplication part multiplies an output of the sparse location weight addition part by the sparse element. The non-sparse data operation part performs an operation on the non-sparse elements, i.e. the elements of the input data other than the sparse element. The addition part adds an output of the multiplication part and an output of the non-sparse data operation part.
    Type: Application
    Filed: October 30, 2018
    Publication date: September 24, 2020
    Applicant: NEC Corporation
    Inventor: Seiya SHIBATA
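    Illustrative sketch: entry 20200301995 splits the computation into a path for one predetermined sparse value (sum the corresponding weights, multiply once by the sparse value) and an ordinary path for the remaining elements, then adds the two partial results. A minimal sketch, with the sparse value and all names assumed for illustration.

        def split_sparse_dot(inputs, weights, sparse_value=0.5):
            """Dot product split into a sparse-value path and a non-sparse path."""
            # sparse path: one multiplication covers every occurrence of the sparse value
            sparse_weight_sum = sum(w for x, w in zip(inputs, weights) if x == sparse_value)
            sparse_part = sparse_value * sparse_weight_sum
            # non-sparse path: ordinary multiply-accumulate over the remaining elements
            dense_part = sum(x * w for x, w in zip(inputs, weights) if x != sparse_value)
            return sparse_part + dense_part

        inputs = [0.5, 2.0, 0.5, 1.0]
        weights = [0.4, 0.6, 0.2, 1.0]
        print(split_sparse_dot(inputs, weights))                 # 2.5
        print(sum(x * w for x, w in zip(inputs, weights)))       # same: 2.5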
  • Patent number: 10762264
    Abstract: Provided is a high-level synthesis device for reducing access latency. The high-level synthesis device includes a feature quantity obtaining unit and an implementation determination unit. The feature quantity obtaining unit obtains an access feature quantity, including a feature quantity relating to communication between a plurality of modules, by analyzing an access pattern in communication between the plurality of modules. The implementation determination unit determines an implementation method for communication between the plurality of modules based on the obtained access feature quantity.
    Type: Grant
    Filed: January 31, 2017
    Date of Patent: September 1, 2020
    Assignee: NEC CORPORATION
    Inventors: Seiya Shibata, Takashi Takenaka
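    Illustrative sketch: patent 10762264 (and application 20190034570 below, same abstract) analyzes the access pattern of inter-module communication to obtain an access feature quantity and determines an implementation method from it. The sketch below reduces this to classifying an address trace as mostly sequential or not and picking a communication implementation accordingly; the feature, threshold, and implementation names are purely illustrative assumptions.

        def access_feature(addresses):
            """Fraction of consecutive accesses that are sequential (one-word stride)."""
            if len(addresses) < 2:
                return 1.0
            sequential = sum(1 for a, b in zip(addresses, addresses[1:]) if b - a == 4)
            return sequential / (len(addresses) - 1)

        def choose_implementation(addresses, threshold=0.8):
            """Pick an inter-module communication implementation from the access feature."""
            feature = access_feature(addresses)
            return "streaming FIFO" if feature >= threshold else "shared buffer with random access"

        trace = [0, 4, 8, 12, 16, 4096, 4100, 4104]    # mostly sequential word accesses
        print(access_feature(trace), choose_implementation(trace))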
  • Patent number: 10721468
    Abstract: An intra-prediction mode determination device is applied in a video coding device that recursively divides input video blocks into small blocks to perform coding by intra-prediction or inter-frame prediction, and includes a prediction mode selection unit and a number of bins adjustment unit that are provided in correspondence to only one of possible sizes of the small blocks. The prediction mode selection unit evaluates, with respect to each of the small blocks, the coding cost of a plurality of prediction mode candidates on the basis of a residual corresponding to the prediction mode candidates and the number of bins allocated to the prediction mode candidates, and selects an intra-prediction mode from the plurality of prediction mode candidates. The number of bins adjustment unit, when the prediction mode selection unit evaluates the coding cost, increases the number of bins corresponding to a specific prediction mode candidate.
    Type: Grant
    Filed: September 8, 2017
    Date of Patent: July 21, 2020
    Assignee: NEC CORPORATION
    Inventor: Seiya Shibata
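    Illustrative sketch: patent 10721468 (and application 20190364271 below) evaluates each intra-prediction candidate's coding cost from its residual and the number of bins allocated to it, and increases the bin count of a specific candidate before the comparison. The sketch below uses a rate-distortion-style cost with an arbitrary lambda; the cost formula, the adjusted candidate, and all values are assumptions for illustration.

        def select_intra_mode(candidates, adjusted_mode=None, extra_bins=2, lam=4.0):
            """Pick the prediction mode with the lowest residual + lambda * bins cost.

            candidates    : dict mode -> (residual_energy, bin_count)
            adjusted_mode : a specific candidate whose bin count is increased before
                            evaluation, making it less likely to be chosen
            """
            def cost(mode):
                residual, bins = candidates[mode]
                if mode == adjusted_mode:
                    bins += extra_bins          # increase bins for the specific candidate
                return residual + lam * bins
            return min(candidates, key=cost)

        modes = {"planar": (120.0, 1), "dc": (118.0, 1), "angular-26": (100.0, 5)}
        print(select_intra_mode(modes))                             # angular-26 (cost 120)
        print(select_intra_mode(modes, adjusted_mode="angular-26")) # dc wins after adjustment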
  • Publication number: 20190364271
    Abstract: An intra-prediction mode determination method according to an aspect of the present invention is applied to a video encoding device that recursively divides a block of an input video into small blocks and encodes the small blocks by intra-prediction or inter-frame prediction, the method including: evaluating encoding costs of a plurality of prediction mode candidates, based on a residual in a prediction mode candidate and a bin number that is a count of bins allocated to the prediction mode candidate, for each of the small blocks in only one size of sizes able to be taken by the small blocks; selecting, based on an evaluation result, an intra-prediction mode from the plurality of prediction mode candidates; and increasing a bin number for a specific prediction mode candidate when evaluating the encoding costs.
    Type: Application
    Filed: September 8, 2017
    Publication date: November 28, 2019
    Applicant: NEC Corporation
    Inventor: Seiya SHIBATA
  • Publication number: 20190129742
    Abstract: A management apparatus (10) includes: a storage (103) that stores information indicating a correspondence between at least one virtual network function (VNF) operating on a server and a programmable logic circuit (FPGA) capable of operating at least part of a virtual network function; and a controller (106) that causes first and second servers to perform migration of a virtual network function operating on a programmable logic circuit of the first server to the second server, on the basis of the correspondence information.
    Type: Application
    Filed: March 27, 2017
    Publication date: May 2, 2019
    Applicant: NEC Corporation
    Inventors: Hideo HASEGAWA, Shintaro NAKANO, Satoru ISHII, Seiya SHIBATA
  • Publication number: 20190109765
    Abstract: A network system includes multiple processing units (21-1, 21-2, 22-1, 22-2) on each of which a desired virtual network function can be configured and a management apparatus that determines a communication path that connects the processing units so as to deploy a set of desired virtual network functions. At least one of the processing units includes a first communication interface that is connectable to any different processing unit and at least one second communication interface that is directly connectable to a predetermined different processing unit. The management apparatus determines the communication path for deploying the set of desired virtual network functions, in accordance with respective connectable communication interfaces of the processing units.
    Type: Application
    Filed: March 27, 2017
    Publication date: April 11, 2019
    Applicant: NEC CORPORATION
    Inventors: Seiya SHIBATA, Takashi TAKENAKA, Hideo HASEGAWA, Satoru ISHII, Shintaro NAKANO
  • Publication number: 20190034570
    Abstract: Provided is a high-level synthesis device for reducing access latency. The high-level synthesis device includes a feature quantity obtaining unit and an implementation determination unit. The feature quantity obtaining unit obtains an access feature quantity, including a feature quantity relating to communication between a plurality of modules, by analyzing an access pattern in communication between the plurality of modules. The implementation determination unit determines an implementation method for communication between the plurality of modules based on the obtained access feature quantity.
    Type: Application
    Filed: January 31, 2017
    Publication date: January 31, 2019
    Applicant: NEC CORPORATION
    Inventors: Seiya SHIBATA, Takashi TAKENAKA