Patents by Inventor Shuichi Enokida
Shuichi Enokida has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230077398
Abstract: A tracking device includes full-spherical cameras arranged on the right and left. The tracking device pastes the left full-spherical camera image captured with the left full-spherical camera onto a spherical object, and a virtual camera is installed inside the spherical object. The virtual camera may rotate freely in a virtual image-capturing space formed inside the spherical object and acquire an external left camera image. Similarly, the tracking device is also installed with a virtual camera that acquires a right camera image, and the two virtual cameras form a convergence stereo camera. The tracking device tracks the location of a subject with a particle filter by using the convergence stereo camera formed in this way. In a second embodiment, the full-spherical cameras are arranged vertically and the virtual cameras are installed vertically.
Type: Application
Filed: March 8, 2021
Publication date: March 16, 2023
Applicants: AISIN CORPORATION, KYUSHU INSTITUTE OF TECHNOLOGY
Inventors: Hideo YAMADA, Masatoshi SHIBATA, Shuichi ENOKIDA
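The pipeline this abstract describes, triangulating a subject's position from two virtual cameras and tracking it with a particle filter, can be sketched minimally as follows. This is an illustration under simplifying assumptions, not the patented method: `triangulate` assumes an ideal rectified stereo pair with hypothetical baseline and focal-length values, and the filter tracks a single depth value.

```python
import numpy as np

rng = np.random.default_rng(0)

def triangulate(x_left, x_right, baseline=0.3, focal=500.0):
    """Depth from disparity for an idealized rectified virtual stereo pair."""
    disparity = x_left - x_right
    return focal * baseline / disparity

def particle_filter_track(depth_observations, n_particles=500,
                          motion_std=0.05, obs_std=0.1):
    """Bootstrap particle filter over a single depth value: predict with a
    random-walk motion model, weight by the observation likelihood,
    estimate by the weighted mean, then resample."""
    particles = rng.uniform(1.0, 10.0, n_particles)
    estimates = []
    for z in depth_observations:
        particles += rng.normal(0.0, motion_std, n_particles)      # predict
        weights = np.exp(-0.5 * ((particles - z) / obs_std) ** 2)  # likelihood
        weights /= weights.sum()
        estimates.append(float(np.sum(particles * weights)))       # posterior mean
        particles = particles[rng.choice(n_particles, n_particles, p=weights)]
    return estimates
```

With the assumed `baseline=0.3` m and `focal=500` px, a disparity of 50 px maps to 3 m of depth; feeding the filter depth observations near a true value makes its estimate settle there.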
-
Patent number: 11481919
Abstract: An image recognition device includes: an image processing device that acquires a feature amount from an image; and an identification device that determines whether a prescribed identification object is present in the image, and identifies the identification object. The identification device includes a BNN that has learned the identification object in advance, and performs identification processing by performing a binary calculation with the BNN on the feature amount acquired by the image processing device. Then, the identification device selects a portion effective for identification from among high-dimensional feature amounts output by the image processing device to reduce the dimensions used in identification processing, and copies low-dimensional feature amounts output by the image processing device to increase dimensions.
Type: Grant
Filed: September 26, 2018
Date of Patent: October 25, 2022
Assignees: AISIN CORPORATION, KYUSHU INSTITUTE OF TECHNOLOGY
Inventors: Hideo Yamada, Ryuya Muramatsu, Masatoshi Shibata, Hakaru Tamukoh, Shuichi Enokida, Yuta Yamasaki
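The three ideas in this abstract, binarized computation, dimension selection, and dimension copying, can be illustrated with a toy sketch. All names and shapes below are assumptions for illustration; in particular, the real device learns which dimensions are effective, which is elided here.

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1} by sign (ties go to +1), as a BNN does."""
    return np.where(np.asarray(x) >= 0, 1, -1).astype(np.int8)

def bnn_layer(x_bin, w_bin):
    """Binary dense layer: with activations and weights in {-1, +1},
    each product is a 1-bit XNOR and each sum a popcount in hardware."""
    return x_bin @ w_bin.T

def select_dims(feature, keep_idx):
    """Reduce dimensions: keep only components effective for identification."""
    return feature[keep_idx]

def expand_dims(feature, copies):
    """Increase dimensions: tile a low-dimensional feature amount."""
    return np.tile(feature, copies)
```

The binary layer is why the scheme is hardware-friendly: a {-1, +1} dot product needs no multipliers, only XNOR gates and a popcount.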
-
Patent number: 11468572
Abstract: An image processing device has a function for plotting a luminance gradient co-occurrence pair of an image on a feature plane and applying an EM algorithm to form a GMM. The device learns a pedestrian image and creates a GMM, subsequently learns a background image and creates a GMM, and calculates a difference between the two and generates a GMM for relearning based on the calculation. The device plots a sample that conforms to the GMM for relearning on the feature plane by applying an inverse function theorem. The device forms a GMM that represents the distribution of samples at a designated mixed number and thereby forms a standard GMM that serves as a standard for image recognition. When this mixed number is set to less than a mixed number designated earlier, the dimensions with which an image is analyzed are reduced, making it possible to reduce calculation costs.
Type: Grant
Filed: January 31, 2018
Date of Patent: October 11, 2022
Assignees: AISIN CORPORATION, KYUSHU INSTITUTE OF TECHNOLOGY
Inventors: Hideo Yamada, Kazuhiro Kuno, Masatoshi Shibata, Shuichi Enokida, Hiromichi Ohtsuka
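The two recurring operations here, fitting a GMM with EM and drawing samples that conform to a (re-learned) GMM, can be sketched in one dimension. This is a generic textbook EM, not the patented relearning scheme, and the co-occurrence feature plane is replaced by toy 1-D data.

```python
import numpy as np

rng = np.random.default_rng(1)

def em_gmm_1d(x, k=2, iters=60):
    """Plain EM for a 1-D Gaussian mixture (stand-in for fitting the
    distribution of luminance-gradient co-occurrence pairs)."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))  # spread-out init
    var = np.full(k, np.var(x))
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return pi, mu, var

def sample_gmm(pi, mu, var, n):
    """Draw samples that conform to the mixture: pick a component by its
    weight, then sample that Gaussian (as when generating relearning data)."""
    comp = rng.choice(len(pi), size=n, p=pi)
    return rng.normal(mu[comp], np.sqrt(var[comp]))
```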
-
Publication number: 20220180546
Abstract: An image processing device can use a calculation formula based on an ellipse to approximate a base function of a reference GMM. The burden rate for a co-occurrence correspondence point can be determined approximately by inputting the Manhattan distance between the ellipse and the co-occurrence correspondence point, together with the width of the ellipse, into a calculation formula for the burden rate based on the base function. The width of the ellipse is quantized to the nth power of 2 (where n is an integer of 0 or greater), so the calculation can be carried out by means of a bit shift.
Type: Application
Filed: March 30, 2020
Publication date: June 9, 2022
Applicants: AISIN CORPORATION, KYUSHU INSTITUTE OF TECHNOLOGY
Inventors: Hideo YAMADA, Masatoshi SHIBATA, Hakaru TAMUKOH, Shuichi ENOKIDA, Kazuki YOSHIHIRO
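The power-of-two quantization is what turns division by the ellipse width into a shift. A small sketch of that mechanism only; the actual burden-rate formula in the publication is not reproduced here:

```python
def manhattan(p, q):
    """Manhattan (L1) distance between two points on the feature plane."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def quantize_width_pow2(width):
    """Round an ellipse width up to the nearest 2**n (n >= 0)."""
    return 1 << max(0, (width - 1).bit_length())

def scale_by_width(dist, width_pow2):
    """dist / width for a power-of-two width, computed as a right shift."""
    return dist >> (width_pow2.bit_length() - 1)
```

Because `width_pow2` equals 2**n, `dist >> n` gives exactly the integer division `dist // width_pow2`, so no divider circuit is needed in hardware.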
-
Patent number: 11256950
Abstract: An image processing device converts a recognition object image into high-resolution, medium-resolution, and low-resolution images. The device sets a pixel of interest in the high-resolution image and votes, into a co-occurrence matrix, the co-occurrence of its gradient direction with the gradient directions of offset pixels in the high-resolution image, of pixels in the medium-resolution image, and of pixels in the low-resolution image. The device creates such a co-occurrence matrix for each pixel combination and for each resolution. The device executes this process for each pixel of the high-resolution image and creates a co-occurrence histogram in which the elements of the plurality of co-occurrence matrices are arranged in a line. The device normalizes the co-occurrence histogram and extracts, as a feature quantity of the image, a vector whose components are the frequencies resulting from the normalization.
Type: Grant
Filed: January 31, 2018
Date of Patent: February 22, 2022
Assignees: AISIN CORPORATION, KYUSHU INSTITUTE OF TECHNOLOGY
Inventors: Hideo Yamada, Kazuhiro Kuno, Masatoshi Shibata, Shuichi Enokida, Hakaru Tamukoh
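A simplified version of the voting step, assuming gradient directions quantized into 8 bins and a single offset. The real feature uses multiple offsets across three resolutions; this sketch pairs just two direction maps.

```python
import numpy as np

def grad_directions(img, bins=8):
    """Quantize the luminance gradient direction at each pixel into bins."""
    gy, gx = np.gradient(img.astype(float))
    ang = np.mod(np.arctan2(gy, gx), 2.0 * np.pi)
    return (ang / (2.0 * np.pi) * bins).astype(int) % bins

def cooccurrence_matrix(dirs_a, dirs_b, offset=(0, 1), bins=8):
    """Vote each (direction at pixel, direction at offset pixel) pair
    into a bins x bins co-occurrence matrix."""
    m = np.zeros((bins, bins), dtype=int)
    dy, dx = offset
    for y in range(dirs_a.shape[0]):
        for x in range(dirs_a.shape[1]):
            yy, xx = y + dy, x + dx
            if 0 <= yy < dirs_b.shape[0] and 0 <= xx < dirs_b.shape[1]:
                m[dirs_a[y, x], dirs_b[yy, xx]] += 1
    return m

def feature_histogram(matrices):
    """Arrange the elements of all matrices in a line and normalize."""
    h = np.concatenate([m.ravel() for m in matrices]).astype(float)
    return h / h.sum()
```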
-
Patent number: 11157724
Abstract: An image recognition device executes a Hilbert scan of the frame image data constituting moving-image data to generate one-dimensional spatial image data, and then arrays the one-dimensional spatial image data in the time direction to generate two-dimensional spatio-temporal image data that holds both spatial and temporal information. The image recognition device thus converts the moving-image data into two-dimensional spatio-temporal image data while preserving the spatial and temporal information. By means of a CNN unit, the image recognition device executes a convolution process in which a two-dimensional filter is applied to the spatio-temporal image data to recognize the behavior of a pedestrian, the recognition object.
Type: Grant
Filed: July 31, 2018
Date of Patent: October 26, 2021
Assignees: EQUOS RESEARCH CO., LTD., KYUSHU INSTITUTE OF TECHNOLOGY
Inventors: Hideo Yamada, Ryuya Muramatsu, Masatoshi Shibata, Shuichi Enokida
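A Hilbert scan preserves spatial locality when flattening each frame, which is why neighboring pixels stay close together in the resulting 1-D row. A sketch using the standard distance-to-coordinate Hilbert-curve conversion; it assumes square grayscale frames whose side is a power of two.

```python
import numpy as np

def hilbert_xy(n, d):
    """Convert distance d along a Hilbert curve to (x, y) on an n x n grid
    (n a power of two); standard iterative formulation."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:           # rotate the quadrant
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_scan(frame):
    """Flatten one frame to 1-D along the Hilbert curve."""
    n = frame.shape[0]
    return np.array([frame[hilbert_xy(n, d)] for d in range(n * n)])

def spatiotemporal_image(frames):
    """Stack Hilbert-scanned frames over time: rows = time, columns = space."""
    return np.stack([hilbert_scan(f) for f in frames])
```

The resulting 2-D array holds space along one axis and time along the other, so an ordinary two-dimensional convolution can cover both at once.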
-
Patent number: 11017262
Abstract: A hardware configuration is constructed for calculating at high speed the co-occurrence of luminance gradient directions between differing resolutions of a subject image. In an image processing device, a processing line for high-resolution images, a processing line for medium-resolution images, and a processing line for low-resolution images are arranged in parallel, and the luminance gradient directions are extracted for each pixel simultaneously, in parallel, from the images having the three resolutions. Co-occurrence matrix preparation units prepare co-occurrence matrices by using the luminance gradient directions extracted from these images, and a histogram preparation unit uses these matrices to output a histogram as an MRCoHOG feature amount. Because the images having the three resolutions are processed concurrently, high-speed processing can be performed, and moving pictures output from a camera can be processed in real time.
Type: Grant
Filed: March 30, 2017
Date of Patent: May 25, 2021
Assignees: EQUOS RESEARCH CO., LTD., KYUSHU INSTITUTE OF TECHNOLOGY
Inventors: Hideo Yamada, Kazuhiro Kuno, Hakaru Tamukoh, Shuichi Enokida, Shiryu Ooe
-
Patent number: 10902614
Abstract: An image recognition device sets an overall observation region, which surrounds the whole body of an object, and partial observation regions, which surround characteristic parts of the object, at locations in an image that are estimated to include captured images of the object. The device clips the images in the overall observation region and the partial observation regions, and calculates their degrees of similarity to previously learned images on the basis of a combination of two image feature amounts. The device calculates, individually for each region, an optimum ratio for combining the HOG feature amount and the color distribution feature amount. This ratio is determined by including in the state vector a weight parameter that sets the weight used for combining the HOG feature amount and the color distribution feature amount, and subjecting the result to a complete search by a particle filter.
Type: Grant
Filed: March 30, 2017
Date of Patent: January 26, 2021
Assignees: EQUOS RESEARCH CO., LTD., KYUSHU INSTITUTE OF TECHNOLOGY
Inventors: Hideo Yamada, Kazuhiro Kuno, Shuichi Enokida, Tatsuya Hashimoto
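The blend of the two similarities amounts to a convex combination. In the sketch below, `w` stands in for the weight parameter carried in each particle's state vector; the specific similarity measures (cosine for HOG, histogram intersection for color) are illustrative choices, not taken from the patent.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between HOG feature vectors (illustrative choice)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def histogram_intersection(p, q):
    """Similarity between normalized color-distribution histograms."""
    return float(np.minimum(p, q).sum())

def combined_similarity(hog_sim, color_sim, w):
    """Blend the two similarities with weight w in [0, 1]; the tracker
    searches over w as part of each particle's state."""
    return w * hog_sim + (1.0 - w) * color_sim
```

Putting `w` into the particle state means the filter explores candidate weights alongside candidate positions, so the blend adapts per region as the abstract describes.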
-
Publication number: 20200286254
Abstract: An image recognition device includes: an image processing device that acquires a feature amount from an image; and an identification device that determines whether a prescribed identification object is present in the image, and identifies the identification object. The identification device includes a BNN that has learned the identification object in advance, and performs identification processing by performing a binary calculation with the BNN on the feature amount acquired by the image processing device. Then, the identification device selects a portion effective for identification from among high-dimensional feature amounts output by the image processing device to reduce the dimensions used in identification processing, and copies low-dimensional feature amounts output by the image processing device to increase dimensions.
Type: Application
Filed: September 26, 2018
Publication date: September 10, 2020
Applicants: EQUOS RESEARCH CO., LTD., KYUSHU INSTITUTE OF TECHNOLOGY
Inventors: Hideo YAMADA, Ryuya MURAMATSU, Masatoshi SHIBATA, Hakaru TAMUKOH, Shuichi ENOKIDA, Yuta YAMASAKI
-
Publication number: 20200279166
Abstract: An image recognition device includes: an image processing device that acquires a feature amount from an image; and an identification device that determines whether a prescribed identification object is present in the image, and identifies the identification object. The identification device includes a BNN that has learned the identification object in advance, and performs identification processing by performing a binary calculation with the BNN on the feature amount acquired by the image processing device. Then, the identification device selects a portion effective for identification from among high-dimensional feature amounts output by the image processing device to reduce the dimensions used in identification processing, and copies low-dimensional feature amounts output by the image processing device to increase dimensions.
Type: Application
Filed: September 26, 2018
Publication date: September 3, 2020
Applicants: EQUOS RESEARCH CO., LTD., KYUSHU INSTITUTE OF TECHNOLOGY
Inventors: Hideo YAMADA, Ryuya MURAMATSU, Masatoshi SHIBATA, Hakaru TAMUKOH, Shuichi ENOKIDA, Yuta YAMASAKI
-
Publication number: 20200242425
Abstract: A spatio-temporal image recognition device includes spatio-temporal image data generation units that convert moving-image data, which continuously holds spatial information and temporal information, into spatio-temporal image data; the units scan the moving-image data along scanning paths that differ from each other. The spatio-temporal image data generation units generate spatio-temporal image data scanned along these different paths and output them to an image recognition unit. The image recognition unit generates two-dimensional feature maps by convolving each set of spatio-temporal image data individually, then integrates the maps, analyzes them with a neural network, and outputs an image recognition result.
Type: Application
Filed: July 31, 2018
Publication date: July 30, 2020
Applicants: EQUOS RESEARCH CO., LTD., KYUSHU INSTITUTE OF TECHNOLOGY
Inventors: Hideo YAMADA, Ryuya MURAMATSU, Masatoshi SHIBATA, Shuichi ENOKIDA, Yuto KAI
-
Publication number: 20200160043
Abstract: An image recognition device executes a Hilbert scan of the frame image data constituting moving-image data to generate one-dimensional spatial image data, and then arrays the one-dimensional spatial image data in the time direction to generate two-dimensional spatio-temporal image data that holds both spatial and temporal information. The image recognition device thus converts the moving-image data into two-dimensional spatio-temporal image data while preserving the spatial and temporal information. By means of a CNN unit, the image recognition device executes a convolution process in which a two-dimensional filter is applied to the spatio-temporal image data to recognize the behavior of a pedestrian, the recognition object.
Type: Application
Filed: July 31, 2018
Publication date: May 21, 2020
Applicants: EQUOS RESEARCH CO., LTD., KYUSHU INSTITUTE OF TECHNOLOGY
Inventors: Hideo YAMADA, Ryuya MURAMATSU, Masatoshi SHIBATA, Shuichi ENOKIDA
-
Publication number: 20200143544
Abstract: An image recognition device sets an overall observation region, which surrounds the whole body of an object, and partial observation regions, which surround characteristic parts of the object, at locations in an image that are estimated to include captured images of the object. The device clips the images in the overall observation region and the partial observation regions, and calculates their degrees of similarity to previously learned images on the basis of a combination of two image feature amounts. The device calculates, individually for each region, an optimum ratio for combining the HOG feature amount and the color distribution feature amount. This ratio is determined by including in the state vector a weight parameter that sets the weight used for combining the HOG feature amount and the color distribution feature amount, and subjecting the result to a complete search by a particle filter.
Type: Application
Filed: March 30, 2017
Publication date: May 7, 2020
Applicants: EQUOS RESEARCH CO., LTD., KYUSHU INSTITUTE OF TECHNOLOGY
Inventors: Hideo YAMADA, Kazuhiro KUNO, Shuichi ENOKIDA, Tatsuya HASHIMOTO
-
Publication number: 20200005467
Abstract: An image processing device has a function for plotting a luminance gradient co-occurrence pair of an image on a feature plane and applying an EM algorithm to form a GMM. The device learns a pedestrian image and creates a GMM, subsequently learns a background image and creates a GMM, and calculates a difference between the two and generates a GMM for relearning based on the calculation. The device plots a sample that conforms to the GMM for relearning on the feature plane by applying an inverse function theorem. The device forms a GMM that represents the distribution of samples at a designated mixed number and thereby forms a standard GMM that serves as a standard for image recognition. When this mixed number is set to less than a mixed number designated earlier, the dimensions with which an image is analyzed are reduced, making it possible to reduce calculation costs.
Type: Application
Filed: January 31, 2018
Publication date: January 2, 2020
Applicants: EQUOS RESEARCH CO., LTD., KYUSHU INSTITUTE OF TECHNOLOGY
Inventors: Hideo YAMADA, Kazuhiro KUNO, Masatoshi SHIBATA, Shuichi ENOKIDA, Hiromichi OHTSUKA
-
Publication number: 20190392249
Abstract: An image processing device converts a recognition object image into high-resolution, medium-resolution, and low-resolution images. The device sets a pixel of interest in the high-resolution image and votes, into a co-occurrence matrix, the co-occurrence of its gradient direction with the gradient directions of offset pixels in the high-resolution image, of pixels in the medium-resolution image, and of pixels in the low-resolution image. The device creates such a co-occurrence matrix for each pixel combination and for each resolution. The device executes this process for each pixel of the high-resolution image and creates a co-occurrence histogram in which the elements of the plurality of co-occurrence matrices are arranged in a line. The device normalizes the co-occurrence histogram and extracts, as a feature quantity of the image, a vector whose components are the frequencies resulting from the normalization.
Type: Application
Filed: January 31, 2018
Publication date: December 26, 2019
Applicants: EQUOS RESEARCH CO., LTD., KYUSHU INSTITUTE OF TECHNOLOGY
Inventors: Hideo YAMADA, Kazuhiro KUNO, Masatoshi SHIBATA, Shuichi ENOKIDA, Hakaru TAMUKOH
-
Publication number: 20180322361
Abstract: A hardware configuration is constructed for calculating at high speed the co-occurrence of luminance gradient directions between differing resolutions of a subject image. In an image processing device, a processing line for high-resolution images, a processing line for medium-resolution images, and a processing line for low-resolution images are arranged in parallel, and the luminance gradient directions are extracted for each pixel simultaneously, in parallel, from the images having the three resolutions. Co-occurrence matrix preparation units prepare co-occurrence matrices by using the luminance gradient directions extracted from these images, and a histogram preparation unit uses these matrices to output a histogram as an MRCoHOG feature amount. Because the images having the three resolutions are processed concurrently, high-speed processing can be performed, and moving pictures output from a camera can be processed in real time.
Type: Application
Filed: March 30, 2017
Publication date: November 8, 2018
Applicants: EQUOS RESEARCH CO., LTD., KYUSHU INSTITUTE OF TECHNOLOGY
Inventors: Hideo YAMADA, Kazuhiro KUNO, Hakaru TAMUKOH, Shuichi ENOKIDA, Shiryu OOE
-
Patent number: 9317771
Abstract: In an object detecting method according to an embodiment, external reference points are set in external space of a model of an object and an internal reference point is set in internal space of the model. A table is stored in which feature quantities on a local surface of the model are associated with positions of the external reference points and the internal reference point. The feature quantity on the local surface of the model is calculated, and the position of the reference point whose feature quantity is identical to the calculated feature quantity is acquired from the table and is converted into a position in a real space. When the converted position is outside the object, the position is excluded from information for estimation and the position and the attitude of the object are estimated.
Type: Grant
Filed: February 19, 2014
Date of Patent: April 19, 2016
Assignees: KYUSHU INSTITUTE OF TECHNOLOGY, KABUSHIKI KAISHA YASKAWA DENKI
Inventors: Toshiaki Ejima, Shuichi Enokida, Masakazu Sadano, Hisashi Ideguchi, Tomoyuki Horiuchi, Toshiyuki Kono
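The table lookup and the outside-the-object exclusion can be sketched in 2-D. The patent works with 3-D local-surface feature quantities; the feature keys, offsets, and the validity test below are all toy stand-ins.

```python
def build_reference_table(model_features, reference_offsets):
    """Associate each local-surface feature with offsets to reference points."""
    table = {}
    for feat, off in zip(model_features, reference_offsets):
        table.setdefault(feat, []).append(off)
    return table

def vote_reference_points(scene_points, scene_features, table, is_valid):
    """Convert table offsets into candidate reference-point positions in the
    scene; candidates failing the validity test (e.g. landing outside the
    object) are excluded from the information used for estimation."""
    votes = []
    for (px, py), feat in zip(scene_points, scene_features):
        for (ox, oy) in table.get(feat, []):
            cand = (px + ox, py + oy)
            if is_valid(cand):
                votes.append(cand)
    return votes
```

The surviving votes would then be clustered to estimate the object's position and attitude; that clustering step is omitted here.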
-
Publication number: 20140233807
Abstract: In an object detecting method according to an embodiment, external reference points are set in external space of a model of an object and an internal reference point is set in internal space of the model. A table is stored in which feature quantities on a local surface of the model are associated with positions of the external reference points and the internal reference point. The feature quantity on the local surface of the model is calculated, and the position of the reference point whose feature quantity is identical to the calculated feature quantity is acquired from the table and is converted into a position in a real space. When the converted position is outside the object, the position is excluded from information for estimation and the position and the attitude of the object are estimated.
Type: Application
Filed: February 19, 2014
Publication date: August 21, 2014
Applicants: KABUSHIKI KAISHA YASKAWA DENKI, KYUSHU INSTITUTE OF TECHNOLOGY
Inventors: Toshiaki EJIMA, Shuichi Enokida, Masakazu Sadano, Hisashi Ideguchi, Tomoyuki Horiuchi, Toshiyuki Kono
-
Patent number: 8805023
Abstract: An embodiment of the present invention provides an object motion estimating device and the like that eliminate the assumption of translational movement of the optical flow (that is, that its spatial gradient is 0) and are suitable for estimating an optical flow by image analysis of captured images. The object motion estimating device performs image analysis of captured images of the object to estimate the motion of the object. The device includes an image analyzing unit 15, which assumes time invariance of the optical flow but not translational movement, and estimates the optical flow, with the intensity at each point of the captured images as the measurement variable, while simultaneously estimating the spatial gradient of the optical flow. The translational-movement assumption is an assumption about the optical flow, which is actually unknown.
Type: Grant
Filed: January 25, 2010
Date of Patent: August 12, 2014
Assignee: Kyushu Institute of Technology
Inventors: Noboru Sebe, Eitaku Nobuyama, Shuichi Enokida
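Dropping the translational (zero-spatial-gradient) assumption means the flow's spatial gradient joins the unknowns. A least-squares sketch with an affine flow model, in which solving for six parameters estimates the flow and its spatial gradient simultaneously; this is a textbook formulation used for illustration, not the device's actual estimator.

```python
import numpy as np

def affine_flow_lstsq(Ix, Iy, It, xs, ys):
    """Fit u = u0 + ux*x + uy*y, v = v0 + vx*x + vy*y to the brightness
    constancy constraint Ix*u + Iy*v + It = 0 over many pixels.
    Returns (u0, ux, uy, v0, vx, vy): the flow at the origin together
    with its spatial gradient (ux, uy, vx, vy)."""
    A = np.column_stack([Ix, Ix * xs, Ix * ys, Iy, Iy * xs, Iy * ys])
    params, *_ = np.linalg.lstsq(A, -It, rcond=None)
    return params
```

Under pure translation the recovered gradient terms come out near zero, so the translational case is contained as a special case rather than assumed up front.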
-
Publication number: 20110299739
Abstract: An embodiment of the present invention provides an object motion estimating device and the like that eliminate the assumption of translational movement of the optical flow (that is, that its spatial gradient is 0) and are suitable for estimating an optical flow by image analysis of captured images. The object motion estimating device performs image analysis of captured images of the object to estimate the motion of the object. The device includes an image analyzing unit 15, which assumes time invariance of the optical flow but not translational movement, and estimates the optical flow, with the intensity at each point of the captured images as the measurement variable, while simultaneously estimating the spatial gradient of the optical flow. The translational-movement assumption is an assumption about the optical flow, which is actually unknown.
Type: Application
Filed: January 25, 2010
Publication date: December 8, 2011
Applicant: KYUSHU INSTITUTE OF TECHNOLOGY
Inventors: Noboru Sebe, Eitaku Nobuyama, Shuichi Enokida