Patents by Inventor Sotaro Tsukizawa

Sotaro Tsukizawa has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20170083796
    Abstract: In an image recognition method executed by a computer of an image recognizer using a convolutional neural network, the convolutional neural network is a first convolutional neural network in which a fully connected layer has been changed to a convolutional layer. The method includes controlling the first convolutional neural network to acquire an input image, to estimate a center area of a recognition target in the acquired input image, and to output a value indicating the estimated center area as the location of the recognition target in the input image.
    Type: Application
    Filed: September 12, 2016
    Publication date: March 23, 2017
    Inventors: MIN YOUNG KIM, LUCA RIGAZIO, RYOTA FUJIMURA, SOTARO TSUKIZAWA, KAZUKI KOZUKA
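    Illustrative sketch (not part of the patent): a minimal numpy example of the general idea of reusing fully connected weights as a 1x1 convolution, so that per-image class scores become a per-location score map whose peak estimates the target's center. All shapes and data are assumptions.
      import numpy as np

      def fc_as_conv1x1(features, fc_weights, fc_bias):
          """Apply FC weights at every spatial position of an H x W x C feature map."""
          h, w, c = features.shape
          scores = features.reshape(h * w, c) @ fc_weights.T + fc_bias
          return scores.reshape(h, w, -1)

      rng = np.random.default_rng(0)
      features = rng.standard_normal((8, 8, 16))  # toy convolutional feature map
      fc_w = rng.standard_normal((2, 16))         # 2 classes: target / background
      fc_b = np.zeros(2)

      heatmap = fc_as_conv1x1(features, fc_w, fc_b)[:, :, 0]  # target-class map
      center = np.unravel_index(np.argmax(heatmap), heatmap.shape)
      print("estimated center (row, col):", center)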
  • Publication number: 20170068873
    Abstract: An image transmitted, through a network, from any of at least one terminal having a function of capturing an image or obtaining an image from another device is obtained. A probability that the obtained image includes a certain imaging target is calculated. If the probability is higher than a first threshold, information indicating the certain imaging target is added to the image. If the probability is lower than a second threshold, the information indicating the certain imaging target is not added to the image. If the probability is equal to or higher than the second threshold and equal to or lower than the first threshold, the image and request reception information for requesting addition of the information to the image are transmitted to any of the at least one terminal through the network.
    Type: Application
    Filed: August 4, 2016
    Publication date: March 9, 2017
    Inventors: REIKO HAGAWA, YASUNORI ISHII, SOTARO TSUKIZAWA, MASAKI TAKAHASHI
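    Illustrative sketch (not part of the patent): the two-threshold routing rule from the abstract in a few lines of Python; the threshold values and label names are assumptions.
      def route_image(probability, first_threshold=0.9, second_threshold=0.3):
          if probability > first_threshold:
              return "add_label"            # confident: label automatically
          if probability < second_threshold:
              return "no_label"             # confident: target absent
          return "request_manual_label"     # uncertain: ask a terminal to decide

      for p in (0.95, 0.10, 0.55):
          print(p, "->", route_image(p))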
  • Publication number: 20170061664
    Abstract: A method, executed by a processor of an image generation system, includes obtaining an image of a first area included in a first image and an image of a second area included in a second image, calculating a first conversion parameter for converting the image of the first area such that color information regarding the image of the first area becomes similar to color information regarding the image of the second area, converting the first image using the first conversion parameter, and generating a third image as a training image used for machine learning for image recognition by combining the converted first image and the second image with each other.
    Type: Application
    Filed: August 3, 2016
    Publication date: March 2, 2017
    Inventors: YASUNORI ISHII, SOTARO TSUKIZAWA, MASAKI TAKAHASHI, REIKO HAGAWA
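    Illustrative sketch (not part of the patent): one plausible "conversion parameter" is a per-channel mean/std transfer that shifts the first image's colors toward the second image's statistics before compositing a training image; the transfer rule and data here are assumptions.
      import numpy as np

      def match_color(src, ref):
          """Scale and shift each channel of src so its mean/std match ref."""
          out = np.empty_like(src, dtype=float)
          for ch in range(src.shape[2]):
              s, r = src[:, :, ch], ref[:, :, ch]
              out[:, :, ch] = (s - s.mean()) / (s.std() + 1e-8) * r.std() + r.mean()
          return np.clip(out, 0, 255)

      rng = np.random.default_rng(1)
      first_image = rng.integers(0, 256, (32, 32, 3)).astype(float)
      second_image = rng.integers(0, 256, (32, 32, 3)).astype(float)
      converted = match_color(first_image, second_image)
      training_image = second_image.copy()
      training_image[8:24, 8:24] = converted[8:24, 8:24]  # paste converted area
      print("generated training image:", training_image.shape)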
  • Publication number: 20160335116
    Abstract: A task generation method includes: receiving worker information from equipment of a worker over a network, the worker information including attribute information regarding a personal attribute of the worker; calculating degrees of association between each of pieces of analysis information resulting from analysis of pieces of data stored in a storage device connected to a computer and the worker information; extracting a piece of data to be subjected to task processing the worker is requested to perform from the pieces of data as specific data, based on the degrees of association; and generating a request task that is a task for making, to the equipment of the worker, a request for performing task processing for giving label information to the extracted specific data by using the equipment of the worker.
    Type: Application
    Filed: April 28, 2016
    Publication date: November 17, 2016
    Inventors: YASUNORI ISHII, SOTARO TSUKIZAWA, MASAKI TAKAHASHI, REIKO HAGAWA
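    Illustrative sketch (not part of the patent): the "degree of association" is modeled here, purely as an assumption, as tag overlap between a worker's attributes and analysis results for each stored item; the best-matching item becomes the labeling request task.
      def degree_of_association(worker_tags, item_tags):
          return len(set(worker_tags) & set(item_tags))

      worker = {"id": "w1", "tags": ["outdoor", "vehicle"]}
      analysis = {
          "img_001": ["indoor", "furniture"],
          "img_002": ["outdoor", "vehicle", "road"],
          "img_003": ["outdoor", "animal"],
      }
      best = max(analysis, key=lambda k: degree_of_association(worker["tags"], analysis[k]))
      request_task = {"worker": worker["id"], "data": best, "action": "attach_label"}
      print(request_task)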
  • Publication number: 20160314306
    Abstract: Provided is an image tagging device including: a first functional unit in which a master image including an object that is a target of privacy protection is stored and that removes privacy information by changing part of the master image; a second functional unit that acquires the changed image from the first functional unit and changes a region image of an object that is not to be tagged in that image; a function that distributes the image changed by the second functional unit to a tagging operation terminal device and receives image tag information from the tagging operation terminal device over a network; and a tagged image generator that generates a tagged image on the basis of the master image and the image tag information. This makes it possible to collect tagged images while achieving both privacy protection and an improvement in the efficiency of the tagging operation.
    Type: Application
    Filed: April 18, 2016
    Publication date: October 27, 2016
    Inventors: MASAKI TAKAHASHI, SOTARO TSUKIZAWA, YASUNORI ISHII, REIKO HAGAWA
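    Illustrative sketch (not part of the patent): a toy version of the pipeline, with an assumed mean-fill standing in for whatever change removes privacy information; regions are masked in a copy, the copy is distributed for tagging, and returned tags are applied to the untouched master image.
      import numpy as np

      def mask_regions(image, regions):
          out = image.copy()
          for top, left, h, w in regions:
              out[top:top + h, left:left + w] = out[top:top + h, left:left + w].mean()
          return out

      master = np.arange(64 * 64, dtype=float).reshape(64, 64)
      privacy_regions = [(10, 10, 8, 8)]              # e.g. a face to hide
      distributed = mask_regions(master, privacy_regions)
      # ... distributed image goes to the tagging terminal; tags come back ...
      tags_from_terminal = [{"box": (30, 30, 12, 12), "label": "car"}]
      tagged_image = {"image": master, "tags": tags_from_terminal}
      print(tagged_image["tags"])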
  • Patent number: 9465979
    Abstract: A measurement-target-selecting device that is capable of estimating a face shape with high precision and low computational cost. In this device, a face texture assessment value calculating part (103) calculates a face texture assessment value representing the degree of match between an input face image and the texture of a face shape candidate, a facial-expression-change-likelihood-calculating part (104) calculates a first likelihood between a face shape serving as a reference and a face shape candidate, a correlation assessment part (105) calculates a first correlation assessment value representing the strength of the correlation between the face texture assessment value and the first likelihood, and a selection part (107) selects, from among the plurality of face shape candidates, a face shape candidate having a first correlation assessment value lower than a first threshold as a measurement target.
    Type: Grant
    Filed: December 4, 2012
    Date of Patent: October 11, 2016
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Sotaro Tsukizawa, Hiroyuki Kubotani, ZhiHeng Niu, Sugiri Pranata
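    Illustrative sketch (not part of the patent): a numeric toy in which candidates whose texture score disagrees with the expression-change likelihood are selected for measurement; the per-candidate product of standardized scores is an assumed stand-in for the patent's correlation assessment value.
      import numpy as np

      texture_scores = np.array([0.9, 0.8, 0.2, 0.7])    # match to input image
      likelihoods = np.array([0.85, 0.1, 0.9, 0.75])     # vs. reference shape

      def standardized(x):
          return (x - x.mean()) / (x.std() + 1e-8)

      corr_values = standardized(texture_scores) * standardized(likelihoods)
      first_threshold = 0.0
      targets = np.where(corr_values < first_threshold)[0]
      print("candidates selected for measurement:", targets)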
  • Publication number: 20160260014
    Abstract: A learning method includes performing a first process in which a coarse class classifier configured with a first neural network is made to classify a plurality of images, given as a set of images each attached with a label indicating a detailed class, into a plurality of coarse classes each including a plurality of detailed classes, and is then made to learn a first feature common within each of the coarse classes; and performing a second process in which a detailed class classifier, configured with a second neural network that shares all layers other than the final layer with the first neural network trained in the first process but has a different final layer, is made to classify the set of images into the detailed classes and to learn a second feature of each detailed class.
    Type: Application
    Filed: February 25, 2016
    Publication date: September 8, 2016
    Inventors: REIKO HAGAWA, SOTARO TSUKIZAWA, YASUNORI ISHII
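    Illustrative sketch (not part of the patent): the two-stage structure expressed with PyTorch, as an assumption about how it might be set up; the same trunk first carries a coarse-class head, which is then swapped for a detailed-class head while every other layer is kept.
      import torch
      import torch.nn as nn

      trunk = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64), nn.ReLU())
      coarse_head = nn.Linear(64, 3)               # e.g. 3 coarse classes
      model = nn.Sequential(trunk, coarse_head)
      # ... first process: train `model` on coarse-class labels ...

      detail_head = nn.Linear(64, 10)              # e.g. 10 detailed classes
      model = nn.Sequential(trunk, detail_head)    # same layers except the final one
      # ... second process: train on detailed-class labels ...
      x = torch.randn(1, 1, 32, 32)
      print(model(x).shape)                        # torch.Size([1, 10])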
  • Publication number: 20160259995
    Abstract: An image recognition method includes: receiving an image; acquiring processing result information including values of processing results of convolution processing at the positions of a plurality of pixels that constitute the image, by performing the convolution processing on the image using different convolution filters; determining one feature for each of the positions of the plurality of pixels on the basis of the values of the processing results of the convolution processing at the positions of the plurality of pixels included in the processing result information, and outputting the determined feature for each of the positions of the plurality of pixels; performing recognition processing on the basis of the determined feature for each of the positions of the plurality of pixels; and outputting recognition processing result information obtained by performing the recognition processing.
    Type: Application
    Filed: February 22, 2016
    Publication date: September 8, 2016
    Inventors: YASUNORI ISHII, SOTARO TSUKIZAWA, REIKO HAGAWA
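    Illustrative sketch (not part of the patent): several filters are convolved over an image and one feature is kept per pixel position; taking the maximum response across filters is an assumed selection rule, not necessarily the patent's.
      import numpy as np
      from scipy.signal import convolve2d

      image = np.random.default_rng(2).standard_normal((16, 16))
      filters = [
          np.array([[1, 0, -1]] * 3),                        # vertical edges
          np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]]),    # horizontal edges
      ]
      responses = np.stack([convolve2d(image, f, mode="same") for f in filters])
      feature_map = responses.max(axis=0)   # one feature per pixel position
      print(feature_map.shape)              # (16, 16)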
  • Patent number: 9082002
    Abstract: A detection device capable of reliably detecting an object to be detected. An intersection region pattern setting unit (106) sets a configuration pattern of a first intersection region pattern group in sequence for each unit image pair. Each intersection region pattern is defined by set image information which denotes the locations and sizes of n regions (where n is a natural number greater than 1) within respective unit images (e.g., unit image plane coordinates), as well as whether each region is set within either or both of a first unit image and a second unit image. A detection unit (108) detects the object to be detected based on a total feature value relating to each configuration pattern of the first intersection region pattern group, computed by a feature value computation unit (107), and a strong identification apparatus configured from a plurality of weak identification apparatuses and stored in an identification apparatus storage unit (112).
    Type: Grant
    Filed: December 20, 2011
    Date of Patent: July 14, 2015
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Sotaro Tsukizawa, Hiroyuki Kubotani, Zhiheng Niu, Sugiri Pranata
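    Illustrative sketch (not part of the patent): the strong/weak classifier relationship in miniature; each weak classifier thresholds one total feature value and their weighted votes are summed, with all features, thresholds, and weights assumed for illustration.
      def strong_classify(features, weak_classifiers, decision_threshold=0.0):
          score = sum(alpha * (1 if features[idx] > theta else -1)
                      for idx, theta, alpha in weak_classifiers)
          return score > decision_threshold

      # (feature index, threshold, weight) triples, as if learned by boosting
      weak = [(0, 0.5, 0.8), (1, 0.2, 0.5), (2, 0.7, 0.3)]
      total_features = [0.9, 0.1, 0.8]   # per-pattern total feature values
      print("object detected:", strong_classify(total_features, weak))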
  • Patent number: 8983235
    Abstract: Disclosed is a pupil detection device capable of improving pupil detection accuracy even if the detection target image is a low-resolution image. In a pupil detection device (100), an eye area actual size calculation unit (102) acquires an actual scale value of an eye area, a pupil state prediction unit (103) calculates an actual-scale prediction value of the pupil diameter, a necessary resolution estimation unit (105) calculates a target value of resolution on the basis of the calculated actual-scale prediction value, an eye area image normalization unit (107) calculates a scale-up/scale-down factor on the basis of the calculated target value of resolution and the actual scale value of the eye area and normalizes the image of the eye area on the basis of the calculated factor, and a pupil detection unit (108) detects a pupil image from the normalized eye area image.
    Type: Grant
    Filed: September 22, 2011
    Date of Patent: March 17, 2015
    Assignee: Panasonic Intellectual Property Corporation of America
    Inventors: Sotaro Tsukizawa, Kenji Oka
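    Illustrative sketch (not part of the patent): the normalization arithmetic with assumed units and numbers; the eye-area image is rescaled so the predicted pupil diameter spans enough pixels for stable detection.
      predicted_pupil_mm = 4.0     # actual-scale prediction of pupil diameter
      eye_area_px_per_mm = 3.0     # actual scale of the captured eye area
      required_pupil_px = 24.0     # assumed target resolution for detection

      required_px_per_mm = required_pupil_px / predicted_pupil_mm
      scale_factor = required_px_per_mm / eye_area_px_per_mm
      print(f"scale the eye-area image by {scale_factor:.2f}x before pupil search")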
  • Publication number: 20140375581
    Abstract: An input control method includes a posture change detection step of detecting a change over time in the posture of a finger, up to its first joint, with respect to a contact surface; a first angle change detection step of detecting a change over time in a first angle representing the bending state of the finger's first joint; and a change direction determination step of determining the change directions of the posture change and the first angle change. An input control device receives push-down input made by the finger based on a determination result made by a change direction determination unit.
    Type: Application
    Filed: June 18, 2014
    Publication date: December 25, 2014
    Inventors: TOSHIYA ARAI, SOTARO TSUKIZAWA, YOICHI IKEDA
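    Illustrative sketch (not part of the patent): one assumed decision rule consistent with the abstract; a push-down is accepted only when the posture and the first-joint angle change in the directions expected for a press.
      def detect_push_down(posture_series, angle_series):
          posture_delta = posture_series[-1] - posture_series[0]
          angle_delta = angle_series[-1] - angle_series[0]
          # assumed convention: posture lowers (< 0) while the joint bends (> 0)
          return posture_delta < 0 and angle_delta > 0

      posture = [30.0, 27.0, 22.0]   # degrees relative to the contact surface
      angle = [10.0, 18.0, 25.0]     # first-joint bend over the same frames
      print("push-down input:", detect_push_down(posture, angle))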
  • Publication number: 20140369571
    Abstract: A measurement-target-selecting device that is capable of estimating a face shape with high precision and low computational cost. In this device, a face texture assessment value calculating part (103) calculates a face texture assessment value representing the degree of match between an input face image and the texture of a face shape candidate, a facial-expression-change-likelihood-calculating part (104) calculates a first likelihood between a face shape serving as a reference and a face shape candidate, a correlation assessment part (105) calculates a first correlation assessment value representing the strength of the correlation between the face texture assessment value and the first likelihood, and a selection part (107) selects, from among the plurality of face shape candidates, a face shape candidate having a first correlation assessment value lower than a first threshold as a measurement target.
    Type: Application
    Filed: December 4, 2012
    Publication date: December 18, 2014
    Inventors: Sotaro Tsukizawa, Hiroyuki Kubotani, ZhiHeng Niu, Sugiri Pranata
  • Patent number: 8810642
    Abstract: In a pupil detection apparatus, a switching selector selectively outputs the detection result of a first pupil image or the detection result of a second pupil image, based on a calculated red-eye occurrence intensity, defined as the brightness within the first pupil image detected by a pupil detector relative to the brightness of a peripheral image outside the first pupil image, and on a correlation characteristic between red-eye occurrence intensity and a pupil detection accuracy value. The pupil detection apparatus has a first imaging pair including an imager and an illuminator separated by a separation distance, and a second imaging pair whose separation distance is greater than that of the first imaging pair.
    Type: Grant
    Filed: January 24, 2011
    Date of Patent: August 19, 2014
    Assignee: Panasonic Intellectual Property Corporation of America
    Inventors: Sotaro Tsukizawa, Kenji Oka
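    Illustrative sketch (not part of the patent): the switching rule with an assumed accuracy curve; when the red-eye intensity implies low accuracy for the first imaging pair, the result from the wider-baseline second pair is output instead.
      def select_pupil_result(first_result, second_result,
                              pupil_brightness, surround_brightness,
                              accuracy_threshold=0.7):
          red_eye_intensity = pupil_brightness / max(surround_brightness, 1e-8)
          accuracy = 1.0 / (1.0 + red_eye_intensity)   # assumed correlation curve
          return first_result if accuracy >= accuracy_threshold else second_result

      print(select_pupil_result("pair 1: (x=52, y=40)", "pair 2: (x=54, y=41)",
                                pupil_brightness=180, surround_brightness=60))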
  • Patent number: 8659751
    Abstract: Disclosed are an ambient light reflection determination apparatus and an ambient light reflection determination method that can determine reflection without using edges, even in a case where the luminance of the reflection-generating part of eyeglasses is low. In a reflection determination apparatus (100), a luminance histogram calculation section (102) calculates a luminance histogram representing the luminance distribution of an eye area image, a difference calculation section (104) calculates a difference histogram by finding the difference between the two luminance histograms calculated from two eye area images captured at different times, an evaluation value calculation section (105) calculates an evaluation value regarding reflection of ambient light based on the difference histogram and a weight in accordance with luminance, and a reflection determination section (107) determines the reflection of ambient light based on the calculated evaluation value.
    Type: Grant
    Filed: June 7, 2011
    Date of Patent: February 25, 2014
    Assignee: Panasonic Corporation
    Inventors: Sotaro Tsukizawa, Kenji Oka
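    Illustrative sketch (not part of the patent): two eye-area frames are reduced to luminance histograms, the absolute difference is weighted toward bright bins, and a large weighted change is read as reflection; the weights and threshold are assumptions.
      import numpy as np

      def reflection_score(frame_a, frame_b, bins=16):
          hist_a, _ = np.histogram(frame_a, bins=bins, range=(0, 256))
          hist_b, _ = np.histogram(frame_b, bins=bins, range=(0, 256))
          weights = np.linspace(0.0, 1.0, bins)   # emphasize bright luminance
          return float(np.sum(np.abs(hist_a - hist_b) * weights))

      rng = np.random.default_rng(3)
      frame1 = rng.integers(0, 120, (24, 48))     # frame without reflection
      frame2 = frame1.copy()
      frame2[5:10, 30:40] = 250                   # bright glare appears
      print("reflection detected:", reflection_score(frame1, frame2) > 50.0)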
  • Publication number: 20140043459
    Abstract: A line-of-sight direction determination device includes: a line-symmetric position determiner that determines whether the corneal reflection image of the left eye or the right eye is located at a line-symmetric position with respect to the center line between the pupils of the left and right eyes; and a line-of-sight direction determiner that determines, from the line-symmetric position determination result, a line-of-sight direction at a specific position, including the installation position of an imaging unit or of an irradiation unit located at substantially the same position as the imaging unit. The line-of-sight direction determination device determines that the corneal reflection image is line symmetric and, from this line symmetry property, determines a specific line-of-sight direction.
    Type: Application
    Filed: September 16, 2013
    Publication date: February 13, 2014
    Applicant: PANASONIC CORPORATION
    Inventors: Sotaro TSUKIZAWA, Kensuke MARUYA
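    Illustrative sketch (not part of the patent): the line-symmetry test in coordinates, all of which are assumed; if the two corneal reflections mirror each other across the midline between the pupils, the gaze is judged to be toward the camera/illuminator.
      def is_line_symmetric(left_reflection, right_reflection,
                            left_pupil, right_pupil, tol=2.0):
          mid_x = (left_pupil[0] + right_pupil[0]) / 2.0
          mirrored_x = 2.0 * mid_x - right_reflection[0]  # reflect across midline
          return (abs(left_reflection[0] - mirrored_x) <= tol
                  and abs(left_reflection[1] - right_reflection[1]) <= tol)

      left_pupil, right_pupil = (100.0, 80.0), (160.0, 80.0)
      left_glint, right_glint = (103.0, 82.0), (157.0, 82.0)
      print("gaze toward camera:", is_line_symmetric(left_glint, right_glint,
                                                     left_pupil, right_pupil))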
  • Patent number: 8649583
    Abstract: A pupil detection device and a pupil detection method capable of stably detecting the pupil by actively using information from the cornea-reflected image, even when most of the pupil is hidden by the cornea-reflected image. In the pupil detection device (100), a peripheral state evaluating section (105) sets a plurality of line segments of a predetermined length, each having a reference point of the cornea-reflected image as one end, and calculates a luminance evaluation value based on the luminance of each pixel in each line segment and a reference luminance. A pupil center straight line calculation section (106) identifies, from among the plurality of line segments, a pupil center straight line passing through the center of the pupil image based on the luminance evaluation value. A pupil search section (107) detects the pupil image based on the luminance state around the pupil center straight line.
    Type: Grant
    Filed: June 15, 2011
    Date of Patent: February 11, 2014
    Assignee: Panasonic Corporation
    Inventors: Sotaro Tsukizawa, Kenji Oka
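    Illustrative sketch (not part of the patent): line segments fan out from the corneal reflection and each is scored by how dark its pixels are relative to a reference luminance; the scoring and geometry are assumptions, but the darkest segment plays the role of the pupil-center straight line.
      import numpy as np

      def pupil_center_line(image, start, length=10, reference_luminance=80.0):
          best_angle, best_score = 0.0, -np.inf
          for angle in np.linspace(0, 2 * np.pi, 36, endpoint=False):
              ys = np.clip((start[0] + np.sin(angle) * np.arange(length)).astype(int),
                           0, image.shape[0] - 1)
              xs = np.clip((start[1] + np.cos(angle) * np.arange(length)).astype(int),
                           0, image.shape[1] - 1)
              score = float(np.mean(reference_luminance - image[ys, xs]))
              if score > best_score:
                  best_angle, best_score = angle, score
          return best_angle

      img = np.full((40, 40), 150.0)
      img[18:30, 8:20] = 20.0                    # dark pupil region
      angle = pupil_center_line(img, start=(16, 22))
      print(f"pupil-center line at {np.degrees(angle):.0f} degrees")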
  • Patent number: 8538044
    Abstract: Provided are a line-of-sight direction determination device and a line-of-sight direction determination method capable of determining a line-of-sight direction precisely and accurately from immediately after the start of measurement, without requiring the user to gaze at a designated object or to perform adjustment work in advance. The line-of-sight direction determination device (100) comprises: a line-symmetric position determination unit (150) for determining whether the corneal reflection image of the left eye or the right eye is located at a line-symmetric position with respect to the center line between the pupils of the right and left eyes; and a line-of-sight direction determination unit (170) for determining, from the line-symmetric position determination result, a line-of-sight direction at a specific position, including the installation position of an imaging unit (111) or of an irradiation unit (112) located at substantially the same position as the imaging unit (111).
    Type: Grant
    Filed: September 25, 2009
    Date of Patent: September 17, 2013
    Assignee: Panasonic Corporation
    Inventors: Sotaro Tsukizawa, Kensuke Maruya
  • Patent number: 8503737
    Abstract: The visual line estimating apparatus 200 comprises: an image inputting section 201 operable to take an image of a human; a visual line measurement section 202 operable to measure a direction of a visual line on the basis of the taken image; a visual line measuring result storing section 211 operable to store therein visual line measuring results previously measured; a representative value extracting section 212 operable to extract a previous representative value; and a visual line determining section 213 operable to judge whether or not a difference between the representative value and the visual line measuring result is lower than a predetermined threshold to determine a visual line estimating result from the representative value and the visual line measuring result.
    Type: Grant
    Filed: September 22, 2011
    Date of Patent: August 6, 2013
    Assignee: Panasonic Corporation
    Inventors: Kenji Oka, Sotaro Tsukizawa
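    Illustrative sketch (not part of the patent): the thresholded fallback between a fresh measurement and a representative of previous measurements; using the median as the representative and a 5-degree threshold are assumptions.
      from statistics import median

      def estimate_gaze(history, new_measurement, threshold=5.0):
          representative = median(history)   # previous measuring results (deg)
          if abs(new_measurement - representative) < threshold:
              return new_measurement         # trust the fresh measurement
          return representative              # otherwise fall back to history

      history = [2.0, 2.5, 1.8, 2.2]         # recent horizontal gaze angles
      print(estimate_gaze(history, 2.4))     # close to history -> 2.4
      print(estimate_gaze(history, 15.0))    # outlier -> representative 2.1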
  • Publication number: 20130170754
    Abstract: Disclosed is a pupil detection device capable of improving pupil detection accuracy even if the detection target image is a low-resolution image. In a pupil detection device (100), an eye area actual size calculation unit (102) acquires an actual scale value of an eye area, a pupil state prediction unit (103) calculates an actual-scale prediction value of the pupil diameter, a necessary resolution estimation unit (105) calculates a target value of resolution on the basis of the calculated actual-scale prediction value, an eye area image normalization unit (107) calculates a scale-up/scale-down factor on the basis of the calculated target value of resolution and the actual scale value of the eye area and normalizes the image of the eye area on the basis of the calculated factor, and a pupil detection unit (108) detects a pupil image from the normalized eye area image.
    Type: Application
    Filed: September 22, 2011
    Publication date: July 4, 2013
    Applicant: PANASONIC CORPORATION
    Inventors: Sotaro Tsukizawa, Kenji Oka
  • Publication number: 20130142416
    Abstract: A detection device capable of reliably detecting an object to be detected. An intersection region pattern setting unit (106) sets a configuration pattern of a first intersection region pattern group in sequence for each unit image pair. Each intersection region pattern is defined by set image information which denotes the locations and sizes of n regions (where n is a natural number greater than 1) within respective unit images (e.g., unit image plane coordinates), as well as whether each region is set within either or both of a first unit image and a second unit image. A detection unit (108) detects the object to be detected based on a total feature value relating to each configuration pattern of the first intersection region pattern group, computed by a feature value computation unit (107), and a strong identification apparatus configured from a plurality of weak identification apparatuses and stored in an identification apparatus storage unit (112).
    Type: Application
    Filed: December 20, 2011
    Publication date: June 6, 2013
    Applicant: PANASONIC CORPORATION
    Inventors: Sotaro Tsukizawa, Hiroyuki Kubotani, Zhiheng Niu, Sugiri Pranata