Patents by Inventor Kaname Tomite

Kaname Tomite has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240013348
    Abstract: An image generation method includes the following steps. Illumination lamps on a target object in an original image are detected; the original image is converted into an image to be processed that is in a specific scene; in the image to be processed, at least one illumination lamp whose operation state does not match scene information of the specific scene is determined as an object to be replaced; a replacement image, in which the operation state of the object to be replaced matches the scene information, is determined according to the scene information and the object to be replaced; and the image in the region occupied by the object to be replaced in the image to be processed is replaced with the replacement image to generate a target image.
    Type: Application
    Filed: September 22, 2023
    Publication date: January 11, 2024
    Applicants: SENSETIME GROUP LIMITED, HONDA MOTOR CO., LTD.
    Inventors: Guangliang CHENG, Jianping SHI, Yuji YASUI, Hideki MATSUNAGA, Kaname TOMITE
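The pipeline this abstract claims can be sketched roughly as follows. This is a hedged illustration only: the names (`Lamp`, `SCENE_RULES`, `render_replacement`) and the rule "headlamps should be on at night" are assumptions for the example, not details from the patent.

```python
# Hypothetical sketch of the claimed steps: detect lamps, flag those whose
# operation state disagrees with the scene, and swap their regions out.
from dataclasses import dataclass

@dataclass
class Lamp:
    region: tuple        # (x, y, w, h) occupied in the image to be processed
    is_on: bool

# Assumed scene rule: lamps should be on in a night scene, off in a day scene.
SCENE_RULES = {"night": True, "day": False}

def objects_to_replace(lamps, scene):
    expected = SCENE_RULES[scene]
    return [l for l in lamps if l.is_on != expected]

def generate_target_image(image, lamps, scene, render_replacement):
    # render_replacement(lamp, scene) stands in for the replacement-image
    # lookup; here it just returns the pixel value painted over the region.
    for lamp in objects_to_replace(lamps, scene):
        x, y, w, h = lamp.region
        patch = render_replacement(lamp, scene)
        for dy in range(h):
            for dx in range(w):
                image[y + dy][x + dx] = patch
    return image

lamps = [Lamp((0, 0, 1, 1), is_on=False), Lamp((2, 0, 1, 1), is_on=True)]
img = [[0] * 4 for _ in range(2)]
out = generate_target_image(img, lamps, "night", lambda l, s: 255)
# only the lamp that is off in the night scene gets replaced
```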
  • Publication number: 20240013453
    Abstract: An image generation method includes: an original image is converted into an image to be processed under a specific scenario; in the image to be processed, a region to be adjusted, whose image rendering state does not match scenario information of the specific scenario, is determined; the image rendering state of the region to be adjusted is related to light illuminated on a target object in the original image, and the target object includes a vehicle; a target rendering state in the region to be adjusted is determined; the target rendering state matches the scenario information; and in the image to be processed, the image rendering state of the region to be adjusted is adjusted to the target rendering state, to obtain a target image.
    Type: Application
    Filed: September 25, 2023
    Publication date: January 11, 2024
    Applicants: SENSETIME GROUP LIMITED, HONDA MOTOR CO., LTD.
    Inventors: Guangliang CHENG, Jianping SHI, Yuji YASUI, Hideki MATSUNAGA, Kaname TOMITE
  • Patent number: 9292745
    Abstract: An object detection apparatus includes a first detection unit configured to detect a first portion of an object from an input image, a second detection unit configured to detect a second portion different from the first portion of the object, a first estimation unit configured to estimate a third portion of the object based on the first portion, a second estimation unit configured to estimate a third portion of the object based on the second portion, a determination unit configured to determine whether the third portions, which have been respectively estimated by the first and second estimation units, match each other, and an output unit configured to output, if the third portions match each other, a detection result of the object based on at least one of a detection result of the first or second detection unit and an estimation result of the first or second estimation unit.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: March 22, 2016
    Assignee: Canon Kabushiki Kaisha
    Inventors: Kan Torii, Kenji Tsukamoto, Atsushi Nogami, Kaname Tomite, Masakazu Matsugu
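The consistency check described in this abstract can be sketched concretely: two detectors each predict the same third portion, and the object is reported only when the estimates agree. Everything below (a head and a torso detector predicting a hip position, the pixel offsets, the tolerance) is an invented example, not the patent's actual configuration.

```python
# Illustrative sketch: two independent estimates of a third portion must
# match (within a tolerance) before the detection result is output.

def estimate_from_head(head_xy):
    x, y = head_xy
    return (x, y + 100)          # hip assumed ~100 px below the head

def estimate_from_torso(torso_xy):
    x, y = torso_xy
    return (x, y + 40)           # hip assumed ~40 px below the torso centre

def detect(head_xy, torso_xy, tol=15.0):
    ax, ay = estimate_from_head(head_xy)
    bx, by = estimate_from_torso(torso_xy)
    if ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 <= tol:
        return ((ax + bx) / 2, (ay + by) / 2)   # estimates match: output
    return None                                  # estimates disagree: reject

print(detect((50, 20), (52, 78)))   # estimates (50,120) and (52,118): match
print(detect((50, 20), (200, 78)))  # estimates far apart: no detection
```

Requiring agreement between the two estimation paths suppresses false positives that either detector would produce on its own.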
  • Patent number: 9082213
    Abstract: When an image in a virtual space in which a virtual object is arranged is generated using a ray tracing method, and when it is determined that a ray generated in accordance with the ray tracing method has successively intersected an approximate virtual object, such as a hand that is a real object, at least twice, an image corresponding to the first intersection is generated in accordance with the ray emitted to the first intersection.
    Type: Grant
    Filed: November 5, 2008
    Date of Patent: July 14, 2015
    Assignee: Canon Kabushiki Kaisha
    Inventors: Masakazu Fujiki, Kaname Tomite, Yasuo Katano, Takayuki Hashimoto
  • Patent number: 8933965
    Abstract: A method for combining a real space image with a virtual image, includes: causing an imaging unit to capture an image of a real space; generating an image covering a predetermined space based on a plurality of real images in the captured real space; extracting position information of a light source based on the generated image; and adding a light source or a shadow on the virtual image based on the extracted position information of the light source.
    Type: Grant
    Filed: July 23, 2007
    Date of Patent: January 13, 2015
    Assignee: Canon Kabushiki Kaisha
    Inventors: Kaname Tomite, Toshikazu Ohshima
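The light-extraction step in this abstract reduces, in its simplest reading, to locating the brightest point of an image covering the surrounding space and treating its position as the light source. The sketch below is a minimal stand-in under that assumption; the grid and brightness values are invented.

```python
# Hypothetical sketch: take the brightest pixel of an environment image as
# the light-source position, which a renderer could then use to place a
# light or cast a shadow on the virtual image.

def extract_light_position(image):
    best, pos = -1, None
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v > best:
                best, pos = v, (x, y)
    return pos

env = [
    [10, 20, 10],
    [10, 250, 30],   # bright spot at (1, 1)
    [10, 20, 10],
]
light = extract_light_position(env)
print(light)   # (1, 1)
```

A real implementation would work on a stitched panoramic image and map pixel coordinates to a 3-D direction, but the brightest-region search is the core of the extraction.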
  • Patent number: 8884947
    Abstract: A CPU (201) updates scene data (206) by changing the management order of the data of virtual objects in the scene data (206) based on the processing result of a search of the scene data (206), which is executed upon generating an image of virtual space viewed from a first viewpoint. The CPU (201) sets the updated scene data (206) as scene data (206) to be used to generate an image viewed from a second viewpoint different from the first viewpoint.
    Type: Grant
    Filed: September 25, 2008
    Date of Patent: November 11, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventors: Kaname Tomite, Masakazu Fujiki
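The reordering idea in this abstract can be shown with a toy example: objects that the intersection search hit frequently for the first viewpoint are moved to the front of the management order, so the search for a nearby second viewpoint tends to terminate early. The data layout below is an assumption; the patent's scene data is far more elaborate.

```python
# Illustrative sketch: reorder scene objects by how often the search for
# viewpoint 1 hit them, then reuse that order for viewpoint 2.
from collections import Counter

def update_scene_order(scene, hit_log):
    hits = Counter(hit_log)
    return sorted(scene, key=lambda obj: -hits[obj])

scene = ["floor", "teapot", "lamp"]
hit_log = ["teapot", "teapot", "floor", "teapot"]   # search results, view 1
ordered = update_scene_order(scene, hit_log)
print(ordered)   # ['teapot', 'floor', 'lamp'] — teapot first for view 2
```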
  • Patent number: 8698804
    Abstract: Upon generation of an image of a virtual space on which a virtual object is laid out using a ray tracing method, an approximate virtual object, which is configured by at least one virtual element to approximate the shape of a physical object, is laid out on the virtual space. Then, intersect determination between a ray generated based on the ray tracing method and an object on the virtual space is executed. As a result of the intersect determination, when the ray and the approximate virtual object have a predetermined intersect state, a pixel corresponding to the ray is generated based on a ray before the predetermined intersect state is reached.
    Type: Grant
    Filed: November 5, 2008
    Date of Patent: April 15, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventors: Kaname Tomite, Masakazu Fujiki, Yasuo Katano, Takayuki Hashimoto
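This entry and patent 9082213 share the same intersection rule: when a ray reaches a predetermined intersect state with the approximate virtual object (the stand-in for a real object such as a hand), the pixel is generated from the ray's state before that happened, so the real object occludes correctly. The sketch below is one loose interpretation of that rule, with an invented 1-D hit list standing in for actual ray traversal.

```python
# Hedged sketch: a ray is summarised as the ordered list of object kinds it
# meets. Two consecutive hits on the approximate object are taken as the
# "predetermined intersect state", and the camera pixel is shown instead of
# any virtual shading.

def shade(hits, virtual_color, real_pixel):
    for a, b in zip(hits, hits[1:]):
        if a == "approx" and b == "approx":
            return real_pixel        # real object in front: keep the video
    return virtual_color if "virtual" in hits else real_pixel

print(shade(["approx", "approx", "virtual"],
            virtual_color=(0, 0, 255),
            real_pixel=(90, 80, 70)))   # hand occludes: camera pixel wins
print(shade(["virtual"], (0, 0, 255), (90, 80, 70)))   # virtual object shown
```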
  • Publication number: 20130259307
    Abstract: An object detection apparatus includes a first detection unit configured to detect a first portion of an object from an input image, a second detection unit configured to detect a second portion different from the first portion of the object, a first estimation unit configured to estimate a third portion of the object based on the first portion, a second estimation unit configured to estimate a third portion of the object based on the second portion, a determination unit configured to determine whether the third portions, which have been respectively estimated by the first and second estimation units, match each other, and an output unit configured to output, if the third portions match each other, a detection result of the object based on at least one of a detection result of the first or second detection unit and an estimation result of the first or second estimation unit.
    Type: Application
    Filed: March 15, 2013
    Publication date: October 3, 2013
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Kan Torii, Kenji Tsukamoto, Atsushi Nogami, Kaname Tomite, Masakazu Matsugu
  • Patent number: 8280115
    Abstract: A key region extraction unit (303) extracts a first region including pixels having a predetermined pixel value in a physical space image. A motion vector detection unit (304) calculates motion vectors at a plurality of portions on the physical space image. An object region detection unit (305) specifies, using the motion vectors, a second region to be merged with the first region. When superimposing a virtual space image on the physical space image, an image composition unit (308) excludes a composition region obtained by merging the first region with the second region from a virtual space image superimposition target.
    Type: Grant
    Filed: October 28, 2008
    Date of Patent: October 2, 2012
    Assignee: Canon Kabushiki Kaisha
    Inventors: Dai Matsumura, Kaname Tomite
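The mask construction this abstract describes can be sketched directly: the first region comes from a key pixel value, the second from pixels with large motion vectors, and the virtual image is kept out of their union. The key value and motion threshold below are assumptions for the example.

```python
# Illustrative sketch: merge a chroma-key region with a moving region and
# exclude the union from virtual-image superimposition.

def composition_mask(image, motion, key_value=255, motion_thresh=2.0):
    h, w = len(image), len(image[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            keyed = image[y][x] == key_value           # first region
            moving = motion[y][x] > motion_thresh      # second region
            mask[y][x] = keyed or moving               # merged: no overlay
    return mask

img    = [[255, 0], [0, 0]]
motion = [[0.0, 5.0], [0.0, 0.0]]
print(composition_mask(img, motion))   # [[True, True], [False, False]]
```

The motion-vector region lets a fast-moving real object (e.g. a hand) stay visible even where its pixels no longer match the key colour.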
  • Publication number: 20090128552
    Abstract: When an image in a virtual space in which a virtual object is arranged is generated using a ray tracing method, and when it is determined that a ray generated in accordance with the ray tracing method has successively intersected an approximate virtual object, such as a hand that is a real object, at least twice, an image corresponding to the first intersection is generated in accordance with the ray emitted to the first intersection.
    Type: Application
    Filed: November 5, 2008
    Publication date: May 21, 2009
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Masakazu Fujiki, Kaname Tomite, Yasuo Katano, Takayuki Hashimoto
  • Publication number: 20090115784
    Abstract: Upon generation of an image of a virtual space on which a virtual object is laid out using a ray tracing method, an approximate virtual object, which is configured by at least one virtual element to approximate the shape of a physical object, is laid out on the virtual space. Then, intersect determination between a ray generated based on the ray tracing method and an object on the virtual space is executed. As a result of the intersect determination, when the ray and the approximate virtual object have a predetermined intersect state, a pixel corresponding to the ray is generated based on a ray before the predetermined intersect state is reached.
    Type: Application
    Filed: November 5, 2008
    Publication date: May 7, 2009
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Kaname Tomite, Masakazu Fujiki, Yasuo Katano, Takayuki Hashimoto
  • Publication number: 20090110291
    Abstract: A key region extraction unit (303) extracts a first region including pixels having a predetermined pixel value in a physical space image. A motion vector detection unit (304) calculates motion vectors at a plurality of portions on the physical space image. An object region detection unit (305) specifies, using the motion vectors, a second region to be merged with the first region. When superimposing a virtual space image on the physical space image, an image composition unit (308) excludes a composition region obtained by merging the first region with the second region from a virtual space image superimposition target.
    Type: Application
    Filed: October 28, 2008
    Publication date: April 30, 2009
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Dai Matsumura, Kaname Tomite
  • Publication number: 20090102834
    Abstract: A CPU (201) updates scene data (206) by changing the management order of the data of virtual objects in the scene data (206) based on the processing result of a search of the scene data (206), which is executed upon generating an image of virtual space viewed from a first viewpoint. The CPU (201) sets the updated scene data (206) as scene data (206) to be used to generate an image viewed from a second viewpoint different from the first viewpoint.
    Type: Application
    Filed: September 25, 2008
    Publication date: April 23, 2009
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Kaname Tomite, Masakazu Fujiki
  • Patent number: 7487468
    Abstract: A video combining apparatus to superimpose a virtual image, such as a CG image, on a video image of the real world or on a see-through type display device. An area in which the virtual image is not to be displayed can be easily designated by a user. If the user holds a frame with markers in his/her sight, the frame is image-sensed in the video image of the real world. The area designated by the user is detected by detecting the positions of the markers in the video image, and the virtual image is not superimposed in this area.
    Type: Grant
    Filed: September 29, 2003
    Date of Patent: February 3, 2009
    Assignee: Canon Kabushiki Kaisha
    Inventors: Rika Tanaka, Toshikazu Ohshima, Kaname Tomite
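The designation step in this abstract can be sketched with a toy example: marker positions detected in the video frame bound a rectangle, and the virtual overlay is suppressed inside it. Marker detection itself is assumed to have happened upstream; the coordinates are invented.

```python
# Hypothetical sketch: bound the detected markers with a rectangle and skip
# virtual-image superimposition for pixels inside it.

def exclusion_rect(marker_points):
    xs = [p[0] for p in marker_points]
    ys = [p[1] for p in marker_points]
    return (min(xs), min(ys), max(xs), max(ys))

def show_virtual(x, y, rect):
    x0, y0, x1, y1 = rect
    return not (x0 <= x <= x1 and y0 <= y <= y1)

rect = exclusion_rect([(10, 10), (60, 12), (58, 40), (9, 42)])
print(rect)                        # (9, 10, 60, 42)
print(show_virtual(30, 25, rect))  # inside the frame: overlay suppressed
print(show_virtual(80, 25, rect))  # outside: virtual image drawn
```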
  • Publication number: 20080024523
    Abstract: A method for combining a real space image with a virtual image, includes: causing an imaging unit to capture an image of a real space; generating an image covering a predetermined space based on a plurality of real images in the captured real space; extracting position information of a light source based on the generated image; and adding a light source or a shadow on the virtual image based on the extracted position information of the light source.
    Type: Application
    Filed: July 23, 2007
    Publication date: January 31, 2008
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Kaname Tomite, Toshikazu Ohshima
  • Publication number: 20040070611
    Abstract: A video combining apparatus to superimpose a virtual image, such as a CG image, on a video image of the real world or on a see-through type display device. An area in which the virtual image is not to be displayed can be easily designated by a user. If the user holds a frame with markers in his/her sight, the frame is image-sensed in the video image of the real world. The area designated by the user is detected by detecting the positions of the markers in the video image, and the virtual image is not superimposed in this area.
    Type: Application
    Filed: September 29, 2003
    Publication date: April 15, 2004
    Applicant: Canon Kabushiki Kaisha
    Inventors: Rika Tanaka, Toshikazu Ohshima, Kaname Tomite