Patents by Inventor Kaori Taya

Kaori Taya has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240161503
    Abstract: An event detection unit 20 detects, as an event, that the luminance change of a pixel in an imaging unit, which photoelectrically converts an optical image of a subject, exceeds a preset threshold value. A determination unit 41a of an information processing unit 40-1 determines, on the basis of event detection information indicating the detection result generated by the event detection unit 20, whether any event has been detected. When the determination unit 41a finds no event, a movement control unit 42a moves the position of the optical image of the subject on the imaging unit 22 of the event detection unit 20 so that an event can be detected in the event non-detection region. An event information generation unit 43 generates event information including the event detection information and movement information regarding the movement of the position of the optical image.
    Type: Application
    Filed: January 4, 2022
    Publication date: May 16, 2024
    Applicant: Sony Group Corporation
    Inventor: Kaori TAYA
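As a rough illustration of the mechanism this abstract describes (threshold-based event detection, plus deliberately moving the optical image so that events fire in a previously event-free region), consider the sketch below. The function names, the threshold value, and the use of a simple pixel shift to model the movement control unit are assumptions for illustration, not code from the patent.

```python
import numpy as np

def detect_events(prev_frame, cur_frame, threshold=0.15):
    """Flag pixels whose luminance change since the previous frame exceeds
    a preset threshold: +1 for an increase, -1 for a decrease, 0 for no
    event detected."""
    diff = cur_frame.astype(float) - prev_frame.astype(float)
    events = np.zeros(diff.shape, dtype=int)
    events[diff > threshold] = 1
    events[diff < -threshold] = -1
    return events

def shift_optical_image(frame, dx, dy):
    """Stand-in for the movement control unit: shifting the optical image
    makes static regions produce luminance changes, so events become
    detectable in a region where none were detected before."""
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
```

A static scene produces no events, while the same scene after a one-pixel shift produces events along its luminance edges, which is the effect the movement control unit exploits.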
  • Patent number: 11589023
    Abstract: An image processing apparatus acquires first shape information representing a three-dimensional shape about an object located within an image capturing region based on one or more images obtained by one or more imaging apparatuses for performing image capturing of the image capturing region from a plurality of directions, acquires second shape information representing a three-dimensional shape about an object located within the image capturing region based on one or more images obtained by one or more imaging apparatuses, acquires viewpoint information indicating a position and direction of a viewpoint, and generates a virtual viewpoint image corresponding to the position and direction of the viewpoint indicated by the acquired viewpoint information based on the acquired first shape information and the acquired second shape information, such that at least a part of the object corresponding to the second shape information is displayed in a translucent way within the virtual viewpoint image.
    Type: Grant
    Filed: October 14, 2020
    Date of Patent: February 21, 2023
    Assignee: Canon Kabushiki Kaisha
    Inventor: Kaori Taya
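The final step of this abstract, displaying part of an object "in a translucent way" within the virtual viewpoint image, amounts to alpha compositing. A minimal sketch, assuming the renderer has already produced a base virtual viewpoint image, an image of the object from the second shape information, and its mask (all names hypothetical):

```python
import numpy as np

def composite_translucent(base_rgb, overlay_rgb, overlay_mask, alpha=0.5):
    """Blend the object derived from the second shape information into the
    virtual viewpoint image so it appears translucent; `alpha` is the
    opacity of the overlaid object (0 = invisible, 1 = opaque)."""
    out = base_rgb.astype(float).copy()
    m = overlay_mask.astype(bool)
    out[m] = (1.0 - alpha) * out[m] + alpha * overlay_rgb.astype(float)[m]
    return out
```
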
  • Patent number: 11017588
    Abstract: A system comprises an obtainment unit that obtains virtual viewpoint information relating to a position and direction of a virtual viewpoint; a designation unit that designates a focus object from a plurality of objects detected based on at least one of the plurality of images captured by the plurality of cameras; a decision unit that decides an object to make transparent from among the plurality of objects based on a position and direction of a virtual viewpoint that the virtual viewpoint information obtained by the obtainment unit indicates, and a position of the focus object designated by the designation unit; and a generation unit that generates, based on the plurality of captured images obtained by the plurality of cameras, a virtual viewpoint image in which the object decided by the decision unit is made to be transparent.
    Type: Grant
    Filed: March 10, 2020
    Date of Patent: May 25, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Kazuhiro Yoshimura, Kaori Taya, Shugo Higuchi, Tatsuro Koizumi
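One plausible reading of the decision unit in this abstract is a geometric occlusion test: an object is made transparent when it sits between the virtual viewpoint and the designated focus object. The sketch below implements that reading with a point-to-segment distance check; the test itself, the radius parameter, and all names are assumptions, not the claimed algorithm.

```python
import numpy as np

def objects_to_make_transparent(viewpoint, focus_pos, object_positions, radius=1.0):
    """Return indices of objects lying within `radius` of the line segment
    from the virtual viewpoint to the focus object, i.e. likely occluders
    of the focus object; the focus object itself is never selected."""
    v = np.asarray(viewpoint, float)
    f = np.asarray(focus_pos, float)
    seg = f - v
    seg_len2 = seg.dot(seg)
    occluders = []
    for i, p in enumerate(object_positions):
        p = np.asarray(p, float)
        # closest point on the segment to the object's position
        t = np.clip((p - v).dot(seg) / seg_len2, 0.0, 1.0)
        closest = v + t * seg
        if np.linalg.norm(p - closest) < radius and not np.allclose(p, f):
            occluders.append(i)
    return occluders
```
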
  • Patent number: 11010882
    Abstract: A single composite image is generated by selecting a pixel value for each pixel in any one of a plurality of captured images having been consecutively captured. A motion vector between at least two captured images among the plurality of captured images is calculated. Correction processing is performed, based on a motion vector corresponding to a pixel of interest in the composite image, on the pixel of interest or a pixel near the pixel of interest. The correction processing is performed by updating, using a bright pixel value, a pixel value of at least one pixel in a group of pixels associated with a path from the pixel of interest to a pixel indicated by the motion vector corresponding to the pixel of interest.
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: May 18, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Kaori Taya
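The two stages of this abstract, comparative-bright composition across consecutive frames and a correction that brightens pixels along a motion-vector path (closing gaps in a light trail), can be sketched as below. Integer stepping along the vector and the function names are simplifications chosen here, not the patented procedure.

```python
import numpy as np

def composite_brightest(frames):
    """Select, for each pixel, the brightest value across the
    consecutively captured frames."""
    return np.max(np.stack(frames), axis=0)

def fill_trail(composite, y, x, vy, vx):
    """Correction step: walk from the pixel of interest along its motion
    vector and raise each pixel on the path to the bright value, so the
    trail between the two endpoints has no dark gaps."""
    bright = composite[y, x]
    steps = max(abs(vy), abs(vx))
    out = composite.copy()
    for s in range(1, steps + 1):
        py = y + round(vy * s / steps)
        px = x + round(vx * s / steps)
        out[py, px] = np.maximum(out[py, px], bright)
    return out
```
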
  • Publication number: 20210029338
    Abstract: An image processing apparatus acquires first shape information representing a three-dimensional shape about an object located within an image capturing region based on one or more images obtained by one or more imaging apparatuses for performing image capturing of the image capturing region from a plurality of directions, acquires second shape information representing a three-dimensional shape about an object located within the image capturing region based on one or more images obtained by one or more imaging apparatuses, acquires viewpoint information indicating a position and direction of a viewpoint, and generates a virtual viewpoint image corresponding to the position and direction of the viewpoint indicated by the acquired viewpoint information based on the acquired first shape information and the acquired second shape information, such that at least a part of the object corresponding to the second shape information is displayed in a translucent way within the virtual viewpoint image.
    Type: Application
    Filed: October 14, 2020
    Publication date: January 28, 2021
    Inventor: Kaori Taya
  • Patent number: 10841555
    Abstract: An image processing apparatus acquires first shape information representing a three-dimensional shape about an object located within an image capturing region based on one or more images obtained by one or more imaging apparatuses for performing image capturing of the image capturing region from a plurality of directions, acquires second shape information representing a three-dimensional shape about an object located within the image capturing region based on one or more images obtained by one or more imaging apparatuses, acquires viewpoint information indicating a position and direction of a viewpoint, and generates a virtual viewpoint image corresponding to the position and direction of the viewpoint indicated by the acquired viewpoint information based on the acquired first shape information and the acquired second shape information, such that at least a part of the object corresponding to the second shape information is displayed in a translucent way within the virtual viewpoint image.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: November 17, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventor: Kaori Taya
  • Patent number: 10708505
    Abstract: An image processing apparatus according to an embodiment of the present invention specifies a partial area of a three-dimensional shape model that is generated based on a plurality of captured images obtained by a plurality of cameras. The image processing apparatus includes: a display control unit configured to display a display image based on a captured image of at least one camera of the plurality of cameras on a display unit; a designation unit configured to designate an area on the display image displayed by the display control unit; and a specification unit configured to specify an area on a three-dimensional shape model, which corresponds to an area designated by the designation unit on the display image, based on captured images of two or more cameras of the plurality of cameras.
    Type: Grant
    Filed: March 23, 2018
    Date of Patent: July 7, 2020
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Kaori Taya
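Specifying an area on a 3D shape model from a 2D designation "based on captured images of two or more cameras" rests on back-projection: each camera turns the designated pixel into a viewing ray, and the rays meet at the model surface. A least-squares sketch of that geometric core, assuming ray origins and directions are already known (the linear formulation is a standard technique, not code from the patent):

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares intersection of camera rays: minimize the summed
    squared distance from a 3D point to each ray through a designated
    pixel. Each projector I - d d^T measures displacement perpendicular
    to a ray."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector onto plane normal to the ray
        A += P
        b += P @ np.asarray(o, float)
    return np.linalg.solve(A, b)
```

With two intersecting rays the least-squares point is exactly the intersection; with noisy rays from more cameras it is the closest consistent point on the model.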
  • Publication number: 20200211271
    Abstract: A system comprises an obtainment unit that obtains virtual viewpoint information relating to a position and direction of a virtual viewpoint; a designation unit that designates a focus object from a plurality of objects detected based on at least one of the plurality of images captured by the plurality of cameras; a decision unit that decides an object to make transparent from among the plurality of objects based on a position and direction of a virtual viewpoint that the virtual viewpoint information obtained by the obtainment unit indicates, and a position of the focus object designated by the designation unit; and a generation unit that generates, based on the plurality of captured images obtained by the plurality of cameras, a virtual viewpoint image in which the object decided by the decision unit is made to be transparent.
    Type: Application
    Filed: March 10, 2020
    Publication date: July 2, 2020
    Inventors: Kazuhiro YOSHIMURA, Kaori TAYA, Shugo HIGUCHI, Tatsuro KOIZUMI
  • Patent number: 10699441
    Abstract: The calibration apparatus includes: an image acquisition unit configured to acquire images captured by a plurality of cameras; a vibration detection unit configured to detect vibration of the camera from the images for each of the cameras; an extraction unit configured to extract images captured by the camera whose vibration is within an allowable value and whose position and orientation are regarded as being the same as an image group for each of the cameras; a selection unit configured to select the image groups whose number is larger than or equal to a predetermined number of cameras as a combination; and an estimation unit configured to estimate a position and orientation parameter for each of the cameras by using the selected combination of the image groups.
    Type: Grant
    Filed: April 11, 2019
    Date of Patent: June 30, 2020
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Kaori Taya
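The extraction and selection units of this calibration apparatus can be sketched as a filter over per-camera vibration measurements: keep only frames whose vibration is within the allowable value, then accept a frame index for calibration only if enough cameras are stable at it. The data layout, the allowable value, and the camera-count rule are illustrative assumptions.

```python
def select_stable_frames(vibration_by_camera, allowable=0.5, min_cameras=3):
    """Per camera, collect frame indices whose measured vibration is within
    the allowable value; return the frame indices at which at least
    `min_cameras` cameras are simultaneously stable, i.e. usable as a
    combination for estimating position and orientation parameters."""
    stable = {
        cam: {i for i, v in enumerate(vals) if v <= allowable}
        for cam, vals in vibration_by_camera.items()
    }
    all_frames = set().union(*stable.values()) if stable else set()
    return sorted(
        f for f in all_frames
        if sum(f in s for s in stable.values()) >= min_cameras
    )
```
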
  • Patent number: 10628993
    Abstract: A system comprises an obtainment unit that obtains virtual viewpoint information relating to a position and direction of a virtual viewpoint; a designation unit that designates a focus object from a plurality of objects detected based on at least one of the plurality of images captured by the plurality of cameras; a decision unit that decides an object to make transparent from among the plurality of objects based on a position and direction of a virtual viewpoint that the virtual viewpoint information obtained by the obtainment unit indicates, and a position of the focus object designated by the designation unit; and a generation unit that generates, based on the plurality of captured images obtained by the plurality of cameras, a virtual viewpoint image in which the object decided by the decision unit is made to be transparent.
    Type: Grant
    Filed: September 22, 2017
    Date of Patent: April 21, 2020
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Kazuhiro Yoshimura, Kaori Taya, Shugo Higuchi, Tatsuro Koizumi
  • Publication number: 20190378255
    Abstract: A single composite image is generated by selecting a pixel value for each pixel in any one of a plurality of captured images having been consecutively captured. A motion vector between at least two captured images among the plurality of captured images is calculated. Correction processing is performed, based on a motion vector corresponding to a pixel of interest in the composite image, on the pixel of interest or a pixel near the pixel of interest. The correction processing is performed by updating, using a bright pixel value, a pixel value of at least one pixel in a group of pixels associated with a path from the pixel of interest to a pixel indicated by the motion vector corresponding to the pixel of interest.
    Type: Application
    Filed: May 17, 2019
    Publication date: December 12, 2019
    Inventor: Kaori Taya
  • Publication number: 20190342537
    Abstract: An image processing apparatus acquires first shape information representing a three-dimensional shape about an object located within an image capturing region based on one or more images obtained by one or more imaging apparatuses for performing image capturing of the image capturing region from a plurality of directions, acquires second shape information representing a three-dimensional shape about an object located within the image capturing region based on one or more images obtained by one or more imaging apparatuses, acquires viewpoint information indicating a position and direction of a viewpoint, and generates a virtual viewpoint image corresponding to the position and direction of the viewpoint indicated by the acquired viewpoint information based on the acquired first shape information and the acquired second shape information, such that at least a part of the object corresponding to the second shape information is displayed in a translucent way within the virtual viewpoint image.
    Type: Application
    Filed: April 26, 2019
    Publication date: November 7, 2019
    Inventor: Kaori Taya
  • Publication number: 20190325608
    Abstract: The calibration apparatus includes: an image acquisition unit configured to acquire images captured by a plurality of cameras; a vibration detection unit configured to detect vibration of the camera from the images for each of the cameras; an extraction unit configured to extract images captured by the camera whose vibration is within an allowable value and whose position and orientation are regarded as being the same as an image group for each of the cameras; a selection unit configured to select the image groups whose number is larger than or equal to a predetermined number of cameras as a combination; and an estimation unit configured to estimate a position and orientation parameter for each of the cameras by using the selected combination of the image groups.
    Type: Application
    Filed: April 11, 2019
    Publication date: October 24, 2019
    Inventor: Kaori Taya
  • Publication number: 20180295289
    Abstract: An image processing apparatus according to an embodiment of the present invention specifies a partial area of a three-dimensional shape model that is generated based on a plurality of captured images obtained by a plurality of cameras. The image processing apparatus includes: a display control unit configured to display a display image based on a captured image of at least one camera of the plurality of cameras on a display unit; a designation unit configured to designate an area on the display image displayed by the display control unit; and a specification unit configured to specify an area on a three-dimensional shape model, which corresponds to an area designated by the designation unit on the display image, based on captured images of two or more cameras of the plurality of cameras.
    Type: Application
    Filed: March 23, 2018
    Publication date: October 11, 2018
    Inventor: Kaori Taya
  • Publication number: 20180098047
    Abstract: There is provided with an imaging system. The imaging system has a plurality of image capturing apparatuses. The plurality of image capturing apparatuses are configured to obtain captured images to generate a free-viewpoint image. The plurality of image capturing apparatuses include a first group of one or more image capturing apparatuses facing a first gaze point and a second group of one or more image capturing apparatuses facing a second gaze point different from the first gaze point.
    Type: Application
    Filed: September 15, 2017
    Publication date: April 5, 2018
    Inventors: Kina ITAKURA, Tatsuro KOIZUMI, Kaori TAYA, Shugo HIGUCHI
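The first-group/second-group arrangement in this abstract can be represented as a simple partition of the capturing apparatuses by the gaze point each faces; the nearest-gaze-point rule and all names below are a toy illustration, not the claimed system.

```python
import numpy as np

def group_by_gaze_point(camera_targets, gaze_points):
    """Assign each image capturing apparatus to the gaze point nearest to
    the point its optical axis is aimed at, yielding one camera group per
    gaze point."""
    groups = {g: [] for g in range(len(gaze_points))}
    for cam, tgt in enumerate(camera_targets):
        d = [np.linalg.norm(np.asarray(tgt, float) - np.asarray(g, float))
             for g in gaze_points]
        groups[int(np.argmin(d))].append(cam)
    return groups
```
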
  • Publication number: 20180096520
    Abstract: A system comprises an obtainment unit that obtains virtual viewpoint information relating to a position and direction of a virtual viewpoint; a designation unit that designates a focus object from a plurality of objects detected based on at least one of the plurality of images captured by the plurality of cameras; a decision unit that decides an object to make transparent from among the plurality of objects based on a position and direction of a virtual viewpoint that the virtual viewpoint information obtained by the obtainment unit indicates, and a position of the focus object designated by the designation unit; and a generation unit that generates, based on the plurality of captured images obtained by the plurality of cameras, a virtual viewpoint image in which the object decided by the decision unit is made to be transparent.
    Type: Application
    Filed: September 22, 2017
    Publication date: April 5, 2018
    Inventors: Kazuhiro YOSHIMURA, Kaori TAYA, Shugo HIGUCHI, Tatsuro KOIZUMI
  • Patent number: 9243935
    Abstract: A method estimates the distance of a flat part while maintaining the precision needed for estimating the distance in detail. Distance information is estimated by estimating each piece of distance information of an image represented by image data, and distance information represented by resolution-converted image data, and combining the distance information of the image and the distance information of the resolution-converted image.
    Type: Grant
    Filed: August 22, 2013
    Date of Patent: January 26, 2016
    Assignee: Canon Kabushiki Kaisha
    Inventor: Kaori Taya
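The combining step in this abstract (merging the full-resolution distance estimate with the one from the resolution-converted image) can be sketched as a confidence-weighted blend: keep the fine estimate where it is reliable, and fall back on the coarse estimate in flat parts, where fine matching tends to fail. The `confidence` map and names are hypothetical.

```python
import numpy as np

def combine_distance_maps(fine, coarse, confidence):
    """Blend the distance map estimated at full resolution with the one
    estimated from the resolution-converted (downsampled, then upsampled)
    image. `confidence` holds values in [0, 1], e.g. derived from local
    texture strength: 1 trusts the fine estimate, 0 the coarse one."""
    fine = np.asarray(fine, float)
    coarse = np.asarray(coarse, float)
    confidence = np.asarray(confidence, float)
    return confidence * fine + (1.0 - confidence) * coarse
```
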
  • Publication number: 20140063235
    Abstract: A method is provided that can estimate the distance of a flat part while keeping the precision needed for estimating distance in detail. Distance information is estimated for the image represented by the image data and for the image represented by resolution-converted image data, and the two pieces of distance information are then combined.
    Type: Application
    Filed: August 22, 2013
    Publication date: March 6, 2014
    Applicant: CANON KABUSHIKI KAISHA
    Inventor: Kaori Taya
  • Publication number: 20140064633
    Abstract: The present invention controls blur and sharpness according to a depth without performing processes repeatedly for each object determination or for each distance. A filter for a target pixel is determined by comparing multiple thresholds representing an optical characteristic of an image capturing unit and multiple values representing distance to a subject in the target pixel and pixels around the target pixel. Then, the filter is applied to the target pixel.
    Type: Application
    Filed: August 26, 2013
    Publication date: March 6, 2014
    Applicant: CANON KABUSHIKI KAISHA
    Inventor: Kaori Taya
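This abstract describes choosing a per-pixel filter by comparing the pixel's distance against multiple thresholds, then applying it in a single pass. A sketch with a box blur whose radius grows with the number of thresholds the depth exceeds; the threshold values and the box kernel are illustrative stand-ins for the optical characteristic in the patent.

```python
import numpy as np

def kernel_radius(depth, thresholds):
    """Map a distance value to a blur radius by counting how many of the
    thresholds (representing the capturing optics) it exceeds."""
    return sum(depth > t for t in thresholds)

def depth_blur(image, depth_map, thresholds):
    """Apply a per-pixel box blur whose radius depends on the depth at the
    target pixel, in one sweep over the image rather than a pass per
    object or per distance."""
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            r = kernel_radius(depth_map[y, x], thresholds)
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()
    return out
```

Pixels whose depth stays under every threshold get radius 0 and pass through unchanged, so the in-focus plane stays sharp.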
  • Publication number: 20130342536
    Abstract: Object information concerning an object image is input. Line-of-sight information of a viewer is input. Conversion parameters are computed for converting values on the coordinate system of an object image, set in a virtual space defined with respect to a display device, into values on the coordinate system of the display device. A rendering image is created by rendering the object image based on the conversion parameters. A deformed filter is created by deforming a reference filter, to be applied to the object image set at reference coordinates in the virtual space, based on the line-of-sight information and the conversion parameters. Image data obtained by performing filter processing on the rendering image using the deformed filter is output.
    Type: Application
    Filed: June 4, 2013
    Publication date: December 26, 2013
    Inventor: Kaori Taya
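The filter-deformation step in this abstract can be sketched as resampling a reference kernel through a 2x2 conversion matrix derived from the viewer's line of sight and the display transform. The Gaussian reference kernel, nearest-neighbour resampling, and the matrix form are simplifying assumptions, not the patented construction.

```python
import numpy as np

def reference_gaussian(radius, sigma):
    """Normalized Gaussian reference filter for the object image at the
    reference coordinates in the virtual space."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def deform_filter(kernel, M):
    """Create the deformed filter: pull each output tap back through the
    inverse of the 2x2 conversion matrix `M` and sample the reference
    kernel there (nearest-neighbour for brevity), then renormalize."""
    r = kernel.shape[0] // 2
    Minv = np.linalg.inv(M)
    out = np.zeros_like(kernel)
    for y in range(-r, r + 1):
        for x in range(-r, r + 1):
            sx, sy = Minv @ np.array([x, y], float)
            ix, iy = int(round(sx)), int(round(sy))
            if -r <= ix <= r and -r <= iy <= r:
                out[y + r, x + r] = kernel[iy + r, ix + r]
    s = out.sum()
    return out / s if s else out
```

With the identity conversion the deformed filter equals the reference filter; an anisotropic `M` stretches the kernel, blurring the rendered image more along one screen axis than the other.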