Patents by Inventor Shuichi Enokida

Shuichi Enokida has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230077398
    Abstract: A tracking device includes full-spherical cameras arranged on the right and left. The tracking device pastes the left full-spherical camera image, captured with the left full-spherical camera, onto a spherical object, and a virtual camera is installed inside the spherical object. The virtual camera may rotate freely in a virtual image-capturing space formed inside the spherical object and acquire an external left camera image. Similarly, the tracking device is installed with a virtual camera that acquires a right camera image, and the two virtual cameras form a convergence stereo camera. The tracking device tracks the location of a subject by means of a particle filter using the convergence stereo camera formed in this way. In a second embodiment, the full-spherical cameras are arranged vertically and the virtual cameras are installed vertically.
    Type: Application
    Filed: March 8, 2021
    Publication date: March 16, 2023
    Applicants: AISIN CORPORATION, KYUSHU INSTITUTE OF TECHNOLOGY
    Inventors: Hideo YAMADA, Masatoshi SHIBATA, Shuichi ENOKIDA
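The tracking stage in this abstract relies on a particle filter. Purely as an illustrative aside (no code appears in the patent, and every name below is invented), here is a minimal one-dimensional bootstrap particle filter showing the predict/weight/resample loop:

```python
import math
import random

def particle_filter_track(observe, n_particles=300, n_steps=15, noise=1.0, seed=1):
    """Minimal bootstrap particle filter for a 1-D subject location.

    observe(t) returns a noisy measurement of the true location at step t.
    Returns the list of per-step location estimates (weighted means).
    """
    rng = random.Random(seed)
    particles = [rng.uniform(-10.0, 10.0) for _ in range(n_particles)]
    estimates = []
    for t in range(n_steps):
        # Predict: diffuse particles with the motion model (a random walk here).
        particles = [p + rng.gauss(0.0, noise) for p in particles]
        # Update: weight each particle by the likelihood of the measurement.
        z = observe(t)
        weights = [math.exp(-0.5 * (p - z) ** 2) for p in particles]
        total = sum(weights) or 1e-12
        weights = [w / total for w in weights]
        # Estimate: weighted mean of the particle cloud.
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # Resample: redraw particles in proportion to their weights.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

With the convergence stereo camera of the patent, the measurement would be a triangulated 3-D subject position rather than the scalar used in this sketch.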
  • Patent number: 11481919
    Abstract: An image recognition device includes: an image processing device that acquires a feature amount from an image; and an identification device that determines whether a prescribed identification object is present in the image, and identifies the identification object. The identification device includes a BNN that has learned the identification object in advance, and performs identification processing by performing a binary calculation with the BNN on the feature amount acquired by the image processing device. Then, the identification device selects a portion effective for identification from among high-dimensional feature amounts output by the image processing device to reduce the dimensions used in identification processing, and copies low-dimensional feature amounts output by the image processing device to increase dimensions.
    Type: Grant
    Filed: September 26, 2018
    Date of Patent: October 25, 2022
    Assignees: AISIN CORPORATION, KYUSHU INSTITUTE OF TECHNOLOGY
    Inventors: Hideo Yamada, Ryuya Muramatsu, Masatoshi Shibata, Hakaru Tamukoh, Shuichi Enokida, Yuta Yamasaki
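As a hedged illustration of the dimension handling this abstract describes (selecting the components of a high-dimensional feature that are effective for identification, copying a low-dimensional feature to raise its dimension, then applying a binary calculation), here is a toy Python sketch; it is not the patented BNN, and all names are invented:

```python
def binarize(x):
    """Binarize activations to +1/-1 (the 'binary' in a binarized neural network)."""
    return [1 if v >= 0 else -1 for v in x]

def select_dims(features, keep_idx):
    """Keep only the components judged effective for identification."""
    return [features[i] for i in keep_idx]

def copy_dims(features, times):
    """Duplicate a low-dimensional feature to increase its dimension."""
    return features * times

def bnn_layer(x, weight_rows):
    """One binary layer: with +/-1 inputs and weights, the dot product
    reduces to counting sign agreements (XNOR/popcount in hardware)."""
    xb = binarize(x)
    return [sum(xi * wi for xi, wi in zip(xb, w)) for w in weight_rows]
```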
  • Patent number: 11468572
    Abstract: An image processing device has a function for plotting a luminance gradient co-occurrence pair of an image on a feature plane and applying an EM algorithm to form a GMM. The device learns a pedestrian image and creates a GMM, subsequently learns a background image and creates a GMM, and calculates a difference between the two and generates a GMM for relearning based on the calculation. The device plots a sample that conforms to the GMM for relearning on the feature plane by applying an inverse function theorem. The device forms a GMM that represents the distribution of samples at a designated mixed number and thereby forms a standard GMM that serves as a standard for image recognition. When this mixed number is set to less than a mixed number designated earlier, the dimensions with which an image is analyzed are reduced, making it possible to reduce calculation costs.
    Type: Grant
    Filed: January 31, 2018
    Date of Patent: October 11, 2022
    Assignees: AISIN CORPORATION, KYUSHU INSTITUTE OF TECHNOLOGY
    Inventors: Hideo Yamada, Kazuhiro Kuno, Masatoshi Shibata, Shuichi Enokida, Hiromichi Ohtsuka
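To make the EM/GMM step concrete, here is a minimal, self-contained EM fit of a one-dimensional two-component Gaussian mixture. This is only a generic sketch of the algorithm the abstract invokes; the patented method works on a 2-D feature plane of luminance-gradient co-occurrence pairs:

```python
import math

def em_gmm_1d(data, k=2, iters=50):
    """Fit a k-component 1-D Gaussian mixture with the EM algorithm.
    Returns (mixing weights, means, variances)."""
    lo, hi = min(data), max(data)
    mu = [lo + (hi - lo) * j / (k - 1) for j in range(k)]  # spread initial means
    var = [1.0] * k
    pi = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        resp = []
        for x in data:
            p = [pi[j] / math.sqrt(2 * math.pi * var[j])
                 * math.exp(-0.5 * (x - mu[j]) ** 2 / var[j]) for j in range(k)]
            s = sum(p) or 1e-300
            resp.append([pj / s for pj in p])
        # M-step: re-estimate weights, means, and variances from responsibilities.
        for j in range(k):
            nj = sum(r[j] for r in resp) or 1e-12
            pi[j] = nj / len(data)
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = max(sum(r[j] * (x - mu[j]) ** 2
                             for r, x in zip(resp, data)) / nj, 1e-6)
    return pi, mu, var
```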
  • Publication number: 20220180546
    Abstract: An image processing device can use a calculation formula based on an ellipse to approximate a base function of a reference GMM. The burden rate for a co-occurrence correspondence point can be approximately determined by inputting the Manhattan distance between the ellipse and the co-occurrence correspondence point, together with the width of the ellipse, into a calculation formula for the burden rate based on the base function. The width of the ellipse is quantized to the nth power of 2 (where n is an integer of 0 or greater), so the calculation can be carried out by means of a bit shift.
    Type: Application
    Filed: March 30, 2020
    Publication date: June 9, 2022
    Applicants: AISIN CORPORATION, KYUSHU INSTITUTE OF TECHNOLOGY
    Inventors: Hideo YAMADA, Masatoshi SHIBATA, Hakaru TAMUKOH, Shuichi ENOKIDA, Kazuki YOSHIHIRO
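The bit-shift trick works because dividing by a power of two is a right shift, which needs no divider circuit. The sketch below only illustrates that quantization idea with invented names and a simplified clamp in place of the patented burden-rate formula:

```python
def quantize_pow2(width):
    """Round a positive width up to the nearest power of two; return the exponent n."""
    n = 0
    while (1 << n) < width:
        n += 1
    return n

def burden_rate(point, center, width):
    """Toy burden-rate approximation: Manhattan distance to the ellipse center,
    divided by the power-of-two-quantized width via a right bit shift."""
    d = abs(point[0] - center[0]) + abs(point[1] - center[1])
    n = quantize_pow2(width)
    scaled = d >> n            # integer division by 2**n as a bit shift
    return max(0, 8 - scaled)  # larger distance -> smaller clamped score (toy formula)
```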
  • Patent number: 11256950
    Abstract: An image processing device converts a recognition object image into high-resolution, medium-resolution, and low-resolution images. The device sets a pixel of interest in the high-resolution image and votes, to a co-occurrence matrix, the co-occurrence in gradient direction with offset pixels in the high-resolution image, with pixels in the medium-resolution image, and with pixels in the low-resolution image. The device creates such a co-occurrence matrix for each pixel combination and for each resolution. The device executes this process for each pixel of the high-resolution image and creates a co-occurrence histogram in which the elements of the plurality of co-occurrence matrices are arranged in a line. The device normalizes the co-occurrence histogram and extracts, as a feature quantity of the image, a vector quantity whose components are the frequencies resulting from the normalization.
    Type: Grant
    Filed: January 31, 2018
    Date of Patent: February 22, 2022
    Assignees: AISIN CORPORATION, KYUSHU INSTITUTE OF TECHNOLOGY
    Inventors: Hideo Yamada, Kazuhiro Kuno, Masatoshi Shibata, Shuichi Enokida, Hakaru Tamukoh
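The building blocks the abstract lists (quantized gradient directions, downscaled resolutions, and co-occurrence voting) can be sketched generically. This is a simplified illustration with invented names, voting within a single resolution for brevity, whereas the patented method also pairs pixels across resolutions:

```python
import math

def grad_dirs(img, bins=8):
    """Quantized luminance-gradient direction for each interior pixel."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            ang = math.atan2(gy, gx) % (2 * math.pi)
            out[y][x] = int(ang / (2 * math.pi) * bins) % bins
    return out

def downscale(img):
    """Halve the resolution by 2x2 averaging."""
    return [[sum(img[2 * y + dy][2 * x + dx] for dy in (0, 1) for dx in (0, 1)) / 4
             for x in range(len(img[0]) // 2)] for y in range(len(img) // 2)]

def cooccurrence(dirs_a, dirs_b, offset=(0, 1), bins=8):
    """Vote direction pairs (pixel of interest, offset pixel) into a bins x bins matrix."""
    mat = [[0] * bins for _ in range(bins)]
    dy, dx = offset
    for y in range(1, len(dirs_a) - 1):
        for x in range(1, len(dirs_a[0]) - 1):
            yy, xx = y + dy, x + dx
            if 1 <= yy < len(dirs_b) - 1 and 1 <= xx < len(dirs_b[0]) - 1:
                mat[dirs_a[y][x]][dirs_b[yy][xx]] += 1
    return mat
```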
  • Patent number: 11157724
    Abstract: An image recognition device executes a Hilbert scan of frame image data constituting moving-image data to generate one-dimensional spatial image data, and further arrays the one-dimensional spatial image data in a time direction to generate two-dimensional spatio-temporal image data that holds spatial information and temporal information. The image recognition device converts the moving-image data into the two-dimensional spatio-temporal image data while holding the spatial and temporal information. By means of a CNN unit, the image recognition device executes a convolution process wherein a two-dimensional filter is used on the spatio-temporal image data to image-recognize a behavior of a pedestrian who is a recognition object.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: October 26, 2021
    Assignees: EQUOS RESEARCH CO., LTD., KYUSHU INSTITUTE OF TECHNOLOGY
    Inventors: Hideo Yamada, Ryuya Muramatsu, Masatoshi Shibata, Shuichi Enokida
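A Hilbert scan visits every pixel of a 2^n x 2^n frame along a locality-preserving curve, so flattening each frame into one row and stacking the rows keeps spatial neighborhoods close while the vertical axis carries time. The sketch below uses the standard d2xy construction of the curve and is not code from the patent:

```python
def hilbert_order(n):
    """Return the (x, y) visiting order of a Hilbert curve on an n x n grid
    (n must be a power of two)."""
    def d2xy(n, d):
        x = y = 0
        t = d
        s = 1
        while s < n:
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:                    # rotate the quadrant as needed
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            t //= 4
            s *= 2
        return x, y
    return [d2xy(n, d) for d in range(n * n)]

def spatiotemporal_image(frames):
    """Hilbert-scan each square frame into a 1-D row, then stack rows along time."""
    n = len(frames[0])
    order = hilbert_order(n)
    return [[frame[y][x] for (x, y) in order] for frame in frames]
```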
  • Patent number: 11017262
    Abstract: A hardware configuration is constructed for calculating at high speed the co-occurrence of luminance gradient directions between differing resolutions of a subject image. In an image processing device, a processing line for high-resolution images, a processing line for medium-resolution images, and a processing line for low-resolution images are arranged in parallel, and the luminance gradient directions are extracted for each pixel simultaneously in parallel from the images at the three resolutions. Co-occurrence matrix preparation units prepare co-occurrence matrices using the luminance gradient directions extracted from these images, and a histogram preparation unit outputs a histogram as an MRCoHOG feature amount using these matrices. By concurrently processing the images at the three resolutions, high-speed processing can be performed, and moving pictures output from a camera can be processed in real time.
    Type: Grant
    Filed: March 30, 2017
    Date of Patent: May 25, 2021
    Assignees: EQUOS RESEARCH CO., LTD., KYUSHU INSTITUTE OF TECHNOLOGY
    Inventors: Hideo Yamada, Kazuhiro Kuno, Hakaru Tamukoh, Shuichi Enokida, Shiryu Ooe
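In software, the three parallel processing lines can be mimicked with concurrent workers. The sketch below is only an analogy for the hardware arrangement, with an invented stand-in for the gradient-direction extraction:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_line(img):
    """Stand-in for one processing line: a per-pixel binary gradient sign
    (1 if the luminance does not decrease toward the right neighbor)."""
    h, w = len(img), len(img[0])
    return [[1 if img[y][min(x + 1, w - 1)] - img[y][x] >= 0 else 0
             for x in range(w)] for y in range(h)]

def process_three_resolutions(high, mid, low):
    """Run the three resolution lines concurrently, echoing the parallel hardware lines."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        return list(pool.map(extract_line, (high, mid, low)))
```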
  • Patent number: 10902614
    Abstract: An image recognition device sets an overall observation region, which surrounds the whole body of an object, and partial observation regions, which surround characteristic parts of the object, at locations in an image estimated to include captured images of the object. The device clips images in the overall observation region and the partial observation regions and calculates similarity degrees between them and previously learned images on the basis of a combination of two image feature amounts. The device calculates an optimum ratio for combining the HOG feature amount and the color distribution feature amount individually for the regions. This ratio is determined by including a weight parameter αi, which sets the weight used for combining the HOG feature amount and the color distribution feature amount, in a state vector and subjecting the result to a complete search by a particle filter.
    Type: Grant
    Filed: March 30, 2017
    Date of Patent: January 26, 2021
    Assignees: EQUOS RESEARCH CO., LTD., KYUSHU INSTITUTE OF TECHNOLOGY
    Inventors: Hideo Yamada, Kazuhiro Kuno, Shuichi Enokida, Tatsuya Hashimoto
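The weight search can be pictured as follows: a fusion weight is folded into the state being searched, and candidate values are scored exhaustively. This toy sketch substitutes a plain grid search for the particle filter, and all names are invented:

```python
def combined_similarity(hog, color, alpha):
    """Fuse HOG-style and color-distribution similarities with weight alpha in [0, 1]."""
    return alpha * hog + (1.0 - alpha) * color

def search_alpha(observations, steps=100):
    """Exhaustively score candidate weights and return the alpha that maximizes
    the total combined similarity over (hog, color) similarity pairs."""
    best = (float("-inf"), 0.0)
    for i in range(steps + 1):
        a = i / steps
        score = sum(combined_similarity(h, c, a) for h, c in observations)
        best = max(best, (score, a))
    return best[1]
```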
  • Publication number: 20200286254
    Abstract: An image recognition device includes: an image processing device that acquires a feature amount from an image; and an identification device that determines whether a prescribed identification object is present in the image, and identifies the identification object. The identification device includes a BNN that has learned the identification object in advance, and performs identification processing by performing a binary calculation with the BNN on the feature amount acquired by the image processing device. Then, the identification device selects a portion effective for identification from among high-dimensional feature amounts output by the image processing device to reduce the dimensions used in identification processing, and copies low-dimensional feature amounts output by the image processing device to increase dimensions.
    Type: Application
    Filed: September 26, 2018
    Publication date: September 10, 2020
    Applicants: EQUOS RESEARCH CO., LTD., KYUSHU INSTITUTE OF TECHNOLOGY
    Inventors: Hideo YAMADA, Ryuya MURAMATSU, Masatoshi SHIBATA, Hakaru TAMUKOH, Shuichi ENOKIDA, Yuta YAMASAKI
  • Publication number: 20200279166
    Abstract: An image recognition device includes: an image processing device that acquires a feature amount from an image; and an identification device that determines whether a prescribed identification object is present in the image, and identifies the identification object. The identification device includes a BNN that has learned the identification object in advance, and performs identification processing by performing a binary calculation with the BNN on the feature amount acquired by the image processing device. Then, the identification device selects a portion effective for identification from among high-dimensional feature amounts output by the image processing device to reduce the dimensions used in identification processing, and copies low-dimensional feature amounts output by the image processing device to increase dimensions.
    Type: Application
    Filed: September 26, 2018
    Publication date: September 3, 2020
    Applicants: EQUOS RESEARCH CO., LTD., KYUSHU INSTITUTE OF TECHNOLOGY
    Inventors: Hideo YAMADA, Ryuya MURAMATSU, Masatoshi SHIBATA, Hakaru TAMUKOH, Shuichi ENOKIDA, Yuta YAMASAKI
  • Publication number: 20200242425
    Abstract: A spatio-temporal image recognition device includes spatio-temporal image data generation units that convert moving-image data, which continuously holds spatial information and temporal information, into spatio-temporal image data, scanning the moving-image data on mutually different scanning paths. The spatio-temporal image data generation units generate spatio-temporal image data scanned on the mutually different paths and output them to an image recognition unit. The image recognition unit generates two-dimensional feature maps by convolving each set of spatio-temporal image data individually, then integrates the maps, analyzes them with a neural network, and outputs an image recognition result.
    Type: Application
    Filed: July 31, 2018
    Publication date: July 30, 2020
    Applicants: EQUOS RESEARCH CO., LTD., KYUSHU INSTITUTE OF TECHNOLOGY
    Inventors: Hideo YAMADA, Ryuya MURAMATSU, Masatoshi SHIBATA, Shuichi ENOKIDA, Yuto KAI
  • Publication number: 20200160043
    Abstract: An image recognition device executes a Hilbert scan of frame image data constituting moving-image data to generate one-dimensional spatial image data, and further arrays the one-dimensional spatial image data in a time direction to generate two-dimensional spatio-temporal image data that holds spatial information and temporal information. The image recognition device converts the moving-image data into the two-dimensional spatio-temporal image data while holding the spatial and temporal information. By means of a CNN unit, the image recognition device executes a convolution process wherein a two-dimensional filter is used on the spatio-temporal image data to image-recognize a behavior of a pedestrian who is a recognition object.
    Type: Application
    Filed: July 31, 2018
    Publication date: May 21, 2020
    Applicants: EQUOS RESEARCH CO., LTD., KYUSHU INSTITUTE OF TECHNOLOGY
    Inventors: Hideo YAMADA, Ryuya MURAMATSU, Masatoshi SHIBATA, Shuichi ENOKIDA
  • Publication number: 20200143544
    Abstract: An image recognition device sets an overall observation region, which surrounds the whole body of an object, and partial observation regions, which surround characteristic parts of the object, at locations in an image estimated to include captured images of the object. The device clips images in the overall observation region and the partial observation regions and calculates similarity degrees between them and previously learned images on the basis of a combination of two image feature amounts. The device calculates an optimum ratio for combining the HOG feature amount and the color distribution feature amount individually for the regions. This ratio is determined by including a weight parameter αi, which sets the weight used for combining the HOG feature amount and the color distribution feature amount, in a state vector and subjecting the result to a complete search by a particle filter.
    Type: Application
    Filed: March 30, 2017
    Publication date: May 7, 2020
    Applicants: EQUOS RESEARCH CO., LTD., KYUSHU INSTITUTE OF TECHNOLOGY
    Inventors: Hideo YAMADA, Kazuhiro KUNO, Shuichi ENOKIDA, Tatsuya HASHIMOTO
  • Publication number: 20200005467
    Abstract: An image processing device has a function for plotting a luminance gradient co-occurrence pair of an image on a feature plane and applying an EM algorithm to form a GMM. The device learns a pedestrian image and creates a GMM, subsequently learns a background image and creates a GMM, and calculates a difference between the two and generates a GMM for relearning based on the calculation. The device plots a sample that conforms to the GMM for relearning on the feature plane by applying an inverse function theorem. The device forms a GMM that represents the distribution of samples at a designated mixed number and thereby forms a standard GMM that serves as a standard for image recognition. When this mixed number is set to less than a mixed number designated earlier, the dimensions with which an image is analyzed are reduced, making it possible to reduce calculation costs.
    Type: Application
    Filed: January 31, 2018
    Publication date: January 2, 2020
    Applicants: EQUOS RESEARCH CO., LTD., KYUSHU INSTITUTE OF TECHNOLOGY
    Inventors: Hideo YAMADA, Kazuhiro KUNO, Masatoshi SHIBATA, Shuichi ENOKIDA, Hiromichi OHTSUKA
  • Publication number: 20190392249
    Abstract: An image processing device converts a recognition object image into high-resolution, medium-resolution, and low-resolution images. The device sets a pixel of interest in the high-resolution image and votes, to a co-occurrence matrix, the co-occurrence in gradient direction with offset pixels in the high-resolution image, with pixels in the medium-resolution image, and with pixels in the low-resolution image. The device creates such a co-occurrence matrix for each pixel combination and for each resolution. The device executes this process for each pixel of the high-resolution image and creates a co-occurrence histogram in which the elements of the plurality of co-occurrence matrices are arranged in a line. The device normalizes the co-occurrence histogram and extracts, as a feature quantity of the image, a vector quantity whose components are the frequencies resulting from the normalization.
    Type: Application
    Filed: January 31, 2018
    Publication date: December 26, 2019
    Applicants: EQUOS RESEARCH CO., LTD., KYUSHU INSTITUTE OF TECHNOLOGY
    Inventors: Hideo YAMADA, Kazuhiro KUNO, Masatoshi SHIBATA, Shuichi ENOKIDA, Hakaru TAMUKOH
  • Publication number: 20180322361
    Abstract: A hardware configuration is constructed for calculating at high speed the co-occurrence of luminance gradient directions between differing resolutions of a subject image. In an image processing device, a processing line for high-resolution images, a processing line for medium-resolution images, and a processing line for low-resolution images are arranged in parallel, and the luminance gradient directions are extracted for each pixel simultaneously in parallel from the images at the three resolutions. Co-occurrence matrix preparation units prepare co-occurrence matrices using the luminance gradient directions extracted from these images, and a histogram preparation unit outputs a histogram as an MRCoHOG feature amount using these matrices. By concurrently processing the images at the three resolutions, high-speed processing can be performed, and moving pictures output from a camera can be processed in real time.
    Type: Application
    Filed: March 30, 2017
    Publication date: November 8, 2018
    Applicants: EQUOS RESEARCH CO., LTD., KYUSHU INSTITUTE OF TECHNOLOGY
    Inventors: Hideo YAMADA, Kazuhiro KUNO, Hakaru TAMUKOH, Shuichi ENOKIDA, Shiryu OOE
  • Patent number: 9317771
    Abstract: In an object detecting method according to an embodiment, external reference points are set in external space of a model of an object and an internal reference point is set in internal space of the model. A table is stored in which feature quantities on a local surface of the model are associated with positions of the external reference points and the internal reference point. The feature quantity on the local surface of the model is calculated, and the position of the reference point whose feature quantity is identical to the calculated feature quantity is acquired from the table and is converted into a position in a real space. When the converted position is outside the object, the position is excluded from information for estimation and the position and the attitude of the object are estimated.
    Type: Grant
    Filed: February 19, 2014
    Date of Patent: April 19, 2016
    Assignees: KYUSHU INSTITUTE OF TECHNOLOGY, KABUSHIKI KAISHA YASKAWA DENKI
    Inventors: Toshiaki Ejima, Shuichi Enokida, Masakazu Sadano, Hisashi Ideguchi, Tomoyuki Horiuchi, Toshiyuki Kono
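The voting scheme the abstract describes (look up reference-point offsets by local feature, discard votes that convert to positions outside the object, and estimate from the rest) can be sketched generically; the table keys, offsets, and predicate below are all invented for illustration:

```python
def estimate_position(scene_features, table, inside):
    """Vote candidate object positions from a feature -> reference-point-offset table,
    exclude votes falling outside the object, and average the survivors."""
    votes = []
    for pos, feat in scene_features:
        for off in table.get(feat, []):
            cand = (pos[0] + off[0], pos[1] + off[1])
            if inside(cand):  # drop votes outside the object, as the method prescribes
                votes.append(cand)
    if not votes:
        return None
    n = len(votes)
    return (sum(v[0] for v in votes) / n, sum(v[1] for v in votes) / n)
```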
  • Publication number: 20140233807
    Abstract: In an object detecting method according to an embodiment, external reference points are set in external space of a model of an object and an internal reference point is set in internal space of the model. A table is stored in which feature quantities on a local surface of the model are associated with positions of the external reference points and the internal reference point. The feature quantity on the local surface of the model is calculated, and the position of the reference point whose feature quantity is identical to the calculated feature quantity is acquired from the table and is converted into a position in a real space. When the converted position is outside the object, the position is excluded from information for estimation and the position and the attitude of the object are estimated.
    Type: Application
    Filed: February 19, 2014
    Publication date: August 21, 2014
    Applicants: KABUSHIKI KAISHA YASKAWA DENKI, KYUSHU INSTITUTE OF TECHNOLOGY
    Inventors: Toshiaki EJIMA, Shuichi Enokida, Masakazu Sadano, Hisashi Ideguchi, Tomoyuki Horiuchi, Toshiyuki Kono
  • Patent number: 8805023
    Abstract: An embodiment of the present invention provides an object motion estimating device and the like that solve the problem of assuming translational movement of an optical flow (that is, that its spatial gradient is 0) and that are suitable for estimating an optical flow by image analysis of captured images. The object motion estimating device performs image analysis of captured images of an object to estimate the motion of the object. The device includes an image analyzing unit 15, which assumes time invariance of the optical flow while not assuming translational movement, and which estimates the optical flow, with the intensity at each point of the captured images as the measurement variable, while simultaneously estimating the spatial gradient of the optical flow. The translational-movement assumption is an assumption about an optical flow that is actually unknown.
    Type: Grant
    Filed: January 25, 2010
    Date of Patent: August 12, 2014
    Assignee: Kyushu Institute of Technology
    Inventors: Noboru Sebe, Eitaku Nobuyama, Shuichi Enokida
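Dropping the translational (constant-flow) assumption means fitting the flow together with its spatial gradient. Below is a minimal one-dimensional least-squares illustration of that idea, fitting u(x) = u0 + ux*x from the brightness-constancy residual It + u(x)*Ix; this is an invented helper, not the patented estimator:

```python
def estimate_flow_with_gradient(Ix, It, xs):
    """Least-squares fit of u(x) = u0 + ux*x from brightness constancy
    It + u(x)*Ix = 0, estimating both the flow u0 and its spatial gradient ux
    instead of assuming purely translational (constant) flow."""
    # Normal equations for minimizing sum((It + (u0 + ux*x) * Ix)^2) over the window.
    a11 = sum(ix * ix for ix in Ix)
    a12 = sum(ix * ix * x for ix, x in zip(Ix, xs))
    a22 = sum(ix * ix * x * x for ix, x in zip(Ix, xs))
    b1 = -sum(it * ix for it, ix in zip(It, Ix))
    b2 = -sum(it * ix * x for it, ix, x in zip(It, Ix, xs))
    det = a11 * a22 - a12 * a12
    u0 = (b1 * a22 - b2 * a12) / det
    ux = (a11 * b2 - a12 * b1) / det
    return u0, ux
```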
  • Publication number: 20110299739
    Abstract: An embodiment of the present invention provides an object motion estimating device and the like that solve the problem of assuming translational movement of an optical flow (that is, that its spatial gradient is 0) and that are suitable for estimating an optical flow by image analysis of captured images. The object motion estimating device performs image analysis of captured images of an object to estimate the motion of the object. The device includes an image analyzing unit 15, which assumes time invariance of the optical flow while not assuming translational movement, and which estimates the optical flow, with the intensity at each point of the captured images as the measurement variable, while simultaneously estimating the spatial gradient of the optical flow. The translational-movement assumption is an assumption about an optical flow that is actually unknown.
    Type: Application
    Filed: January 25, 2010
    Publication date: December 8, 2011
    Applicant: KYUSHU INSTITUTE OF TECHNOLOGY
    Inventors: Noboru Sebe, Eitaku Nobuyama, Shuichi Enokida