Patents by Inventor Kaname Tomite
Kaname Tomite has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240013348
Abstract: An image generation method includes the following steps. Illumination lamps on a target object in an original image are detected; the original image is converted into an image to be processed that is in a specific scene; in the image to be processed, at least one illumination lamp whose operation state does not match scene information of the specific scene is determined as an object to be replaced; a replacement image, including the object to be replaced with an operation state that matches the scene information, is determined according to the scene information and the object to be replaced; and the image in a region occupied by the object to be replaced in the image to be processed is replaced with the replacement image to generate a target image.
Type: Application
Filed: September 22, 2023
Publication date: January 11, 2024
Applicants: SENSETIME GROUP LIMITED, HONDA MOTOR CO., LTD.
Inventors: Guangliang CHENG, Jianping SHI, Yuji YASUI, Hideki MATSUNAGA, Kaname TOMITE
-
Publication number: 20240013453
Abstract: An image generation method includes: an original image is converted into an image to be processed under a specific scenario; in the image to be processed, a region to be adjusted whose image rendering state does not match scenario information of the specific scenario is determined; the image rendering state of the region to be adjusted is related to light illuminated on a target object in the original image, and the target object includes a vehicle; a target rendering state in the region to be adjusted is determined; the target rendering state matches the scenario information; and in the image to be processed, the image rendering state of the region to be adjusted is adjusted to the target rendering state, to obtain a target image.
Type: Application
Filed: September 25, 2023
Publication date: January 11, 2024
Applicants: SENSETIME GROUP LIMITED, HONDA MOTOR CO., LTD.
Inventors: Guangliang CHENG, Jianping SHI, Yuji YASUI, Hideki MATSUNAGA, Kaname TOMITE
-
Patent number: 9292745
Abstract: An object detection apparatus includes a first detection unit configured to detect a first portion of an object from an input image, a second detection unit configured to detect a second portion different from the first portion of the object, a first estimation unit configured to estimate a third portion of the object based on the first portion, a second estimation unit configured to estimate a third portion of the object based on the second portion, a determination unit configured to determine whether the third portions, which have been respectively estimated by the first and second estimation units, match each other, and an output unit configured to output, if the third portions match each other, a detection result of the object based on at least one of a detection result of the first or second detection unit and an estimation result of the first or second estimation unit.
Type: Grant
Filed: March 15, 2013
Date of Patent: March 22, 2016
Assignee: Canon Kabushiki Kaisha
Inventors: Kan Torii, Kenji Tsukamoto, Atsushi Nogami, Kaname Tomite, Masakazu Matsugu
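The agreement check this abstract describes can be sketched roughly as follows. This is an illustrative Python sketch only, not the patented implementation: the part names (head, legs, torso), the geometric offsets, and the tolerance are all invented for the example.

```python
# Illustrative sketch: two part detections (e.g. head and legs) each
# predict the location of a shared third part (e.g. torso center);
# the object detection is accepted only when the predictions agree.
from dataclasses import dataclass
import math


@dataclass
class Point:
    x: float
    y: float


def estimate_third_from_first(first: Point) -> Point:
    # Hypothetical fixed offset: head -> torso center.
    return Point(first.x, first.y + 50.0)


def estimate_third_from_second(second: Point) -> Point:
    # Hypothetical fixed offset: legs -> torso center.
    return Point(second.x, second.y - 40.0)


def detect_object(first: Point, second: Point, tol: float = 10.0):
    a = estimate_third_from_first(first)
    b = estimate_third_from_second(second)
    if math.hypot(a.x - b.x, a.y - b.y) <= tol:
        # Estimates agree: report the object using both detections.
        return {"third": Point((a.x + b.x) / 2, (a.y + b.y) / 2)}
    return None  # Estimates disagree: likely a false positive.
```

Requiring two independent part detectors to agree on a third part is a common way to suppress false positives when either detector alone is unreliable.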
-
Patent number: 9082213
Abstract: When an image in a virtual space in which a virtual object is arranged is generated using a ray tracing method, and when it is determined that a ray which is generated in accordance with the ray tracing method successively intersected an approximate virtual object, such as a hand which is a real object, at least twice, an image corresponding to a first intersection is generated in accordance with the ray emitted to the first intersection.
Type: Grant
Filed: November 5, 2008
Date of Patent: July 14, 2015
Assignee: Canon Kabushiki Kaisha
Inventors: Masakazu Fujiki, Kaname Tomite, Yasuo Katano, Takayuki Hashimoto
-
Patent number: 8933965
Abstract: A method for combining a real space image with a virtual image includes: causing an imaging unit to capture an image of a real space; generating an image covering a predetermined space based on a plurality of real images in the captured real space; extracting position information of a light source based on the generated image; and adding a light source or a shadow on the virtual image based on the extracted position information of the light source.
Type: Grant
Filed: July 23, 2007
Date of Patent: January 13, 2015
Assignee: Canon Kabushiki Kaisha
Inventors: Kaname Tomite, Toshikazu Ohshima
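The light-source extraction step can be illustrated with a minimal sketch. This is an assumed approach, not the patented method: it locates the brightest pixel of a latitude/longitude environment image and converts its position to a spherical direction, which could then drive a virtual light or shadow.

```python
# Illustrative sketch: estimate a dominant light direction from a
# latitude/longitude environment image by locating its brightest pixel.
import math


def brightest_direction(image):
    """image: 2D list of luminance values (rows = latitude bands).
    Returns (theta, phi) in radians for the brightest pixel center."""
    h, w = len(image), len(image[0])
    _, r, c = max((v, r, c)
                  for r, row in enumerate(image)
                  for c, v in enumerate(row))
    theta = math.pi * (r + 0.5) / h      # polar angle from the zenith
    phi = 2.0 * math.pi * (c + 0.5) / w  # azimuth
    return theta, phi
```

A real system would typically cluster several bright regions rather than take a single maximum, but the pixel-to-direction mapping is the same.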
-
Patent number: 8884947
Abstract: A CPU (201) updates scene data (206) by changing the management order of the data of virtual objects in the scene data (206) based on the processing result of a search of the scene data (206), which is executed upon generating an image of virtual space viewed from a first viewpoint. The CPU (201) sets the updated scene data (206) as scene data (206) to be used to generate an image viewed from a second viewpoint different from the first viewpoint.
Type: Grant
Filed: September 25, 2008
Date of Patent: November 11, 2014
Assignee: Canon Kabushiki Kaisha
Inventors: Kaname Tomite, Masakazu Fujiki
-
Patent number: 8698804
Abstract: Upon generation of an image of a virtual space on which a virtual object is laid out using a ray tracing method, an approximate virtual object, which is configured by at least one virtual element to approximate the shape of a physical object, is laid out on the virtual space. Then, intersect determination between a ray generated based on the ray tracing method and an object on the virtual space is executed. As a result of the intersect determination, when the ray and the approximate virtual object have a predetermined intersect state, a pixel corresponding to the ray is generated based on a ray before the predetermined intersect state is reached.
Type: Grant
Filed: November 5, 2008
Date of Patent: April 15, 2014
Assignee: Canon Kabushiki Kaisha
Inventors: Kaname Tomite, Masakazu Fujiki, Yasuo Katano, Takayuki Hashimoto
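The proxy-intersection idea in this abstract can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not the patented renderer: a single sphere stands in for the approximate virtual object, and when a traced ray hits the proxy, the pixel is resolved from the color the ray carried before the hit (e.g. the camera image in a mixed-reality view) instead of shading the proxy.

```python
# Illustrative sketch: terminate ray shading at an approximate
# real-object proxy and reuse the pre-hit ray state for the pixel.
import math


def hit_sphere(origin, direction, center, radius):
    """Ray/sphere intersection; direction assumed normalized.
    Returns the nearest positive ray parameter t, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None


def shade(origin, direction, proxy, carried_color, scene_color):
    """carried_color: what the ray 'saw' before reaching the proxy."""
    if hit_sphere(origin, direction, proxy["center"], proxy["radius"]) is not None:
        return carried_color  # proxy hit: keep the pre-hit ray state
    return scene_color        # no proxy hit: shade the scene normally
```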
-
Publication number: 20130259307
Abstract: An object detection apparatus includes a first detection unit configured to detect a first portion of an object from an input image, a second detection unit configured to detect a second portion different from the first portion of the object, a first estimation unit configured to estimate a third portion of the object based on the first portion, a second estimation unit configured to estimate a third portion of the object based on the second portion, a determination unit configured to determine whether the third portions, which have been respectively estimated by the first and second estimation units, match each other, and an output unit configured to output, if the third portions match each other, a detection result of the object based on at least one of a detection result of the first or second detection unit and an estimation result of the first or second estimation unit.
Type: Application
Filed: March 15, 2013
Publication date: October 3, 2013
Applicant: CANON KABUSHIKI KAISHA
Inventors: Kan Torii, Kenji Tsukamoto, Atsushi Nogami, Kaname Tomite, Masakazu Matsugu
-
Patent number: 8280115
Abstract: A key region extraction unit (303) extracts a first region including pixels having a predetermined pixel value in a physical space image. A motion vector detection unit (304) calculates motion vectors at a plurality of portions on the physical space image. An object region detection unit (305) specifies, using the motion vectors, a second region to be merged with the first region. When superimposing a virtual space image on the physical space image, an image composition unit (308) excludes a composition region obtained by merging the first region with the second region from a virtual space image superimposition target.
Type: Grant
Filed: October 28, 2008
Date of Patent: October 2, 2012
Assignee: Canon Kabushiki Kaisha
Inventors: Dai Matsumura, Kaname Tomite
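The region-merging and exclusion logic can be sketched with simple per-pixel masks. This is an assumed illustration, not the patented units (303)-(308): the key mask and motion mask are given as 0/1 grids, their union forms the exclusion region, and masked pixels keep the physical-space image.

```python
# Illustrative sketch: union a chroma-key mask with a motion-selected
# mask, then exclude the merged region from virtual-image compositing.
def composition_mask(key_mask, motion_mask):
    """Each mask: 2D list of 0/1. Returns the merged exclusion mask."""
    return [[1 if (k or m) else 0
             for k, m in zip(krow, mrow)]
            for krow, mrow in zip(key_mask, motion_mask)]


def composite(real, virtual, mask):
    # Pixels inside the exclusion mask keep the physical-space image,
    # so a real object (e.g. a hand) stays in front of virtual content.
    return [[r if m else v
             for r, v, m in zip(rrow, vrow, mrow)]
            for rrow, vrow, mrow in zip(real, virtual, mask)]
```

Adding the motion-selected region to the color-keyed region lets moving real objects occlude virtual content even where their color alone would not match the key.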
-
Publication number: 20090128552
Abstract: When an image in a virtual space in which a virtual object is arranged is generated using a ray tracing method, and when it is determined that a ray which is generated in accordance with the ray tracing method successively intersected an approximate virtual object, such as a hand which is a real object, at least twice, an image corresponding to a first intersection is generated in accordance with the ray emitted to the first intersection.
Type: Application
Filed: November 5, 2008
Publication date: May 21, 2009
Applicant: CANON KABUSHIKI KAISHA
Inventors: Masakazu Fujiki, Kaname Tomite, Yasuo Katano, Takayuki Hashimoto
-
Publication number: 20090115784
Abstract: Upon generation of an image of a virtual space on which a virtual object is laid out using a ray tracing method, an approximate virtual object, which is configured by at least one virtual element to approximate the shape of a physical object, is laid out on the virtual space. Then, intersect determination between a ray generated based on the ray tracing method and an object on the virtual space is executed. As a result of the intersect determination, when the ray and the approximate virtual object have a predetermined intersect state, a pixel corresponding to the ray is generated based on a ray before the predetermined intersect state is reached.
Type: Application
Filed: November 5, 2008
Publication date: May 7, 2009
Applicant: CANON KABUSHIKI KAISHA
Inventors: Kaname Tomite, Masakazu Fujiki, Yasuo Katano, Takayuki Hashimoto
-
Publication number: 20090110291
Abstract: A key region extraction unit (303) extracts a first region including pixels having a predetermined pixel value in a physical space image. A motion vector detection unit (304) calculates motion vectors at a plurality of portions on the physical space image. An object region detection unit (305) specifies, using the motion vectors, a second region to be merged with the first region. When superimposing a virtual space image on the physical space image, an image composition unit (308) excludes a composition region obtained by merging the first region with the second region from a virtual space image superimposition target.
Type: Application
Filed: October 28, 2008
Publication date: April 30, 2009
Applicant: CANON KABUSHIKI KAISHA
Inventors: Dai Matsumura, Kaname Tomite
-
Publication number: 20090102834
Abstract: A CPU (201) updates scene data (206) by changing the management order of the data of virtual objects in the scene data (206) based on the processing result of a search of the scene data (206), which is executed upon generating an image of virtual space viewed from a first viewpoint. The CPU (201) sets the updated scene data (206) as scene data (206) to be used to generate an image viewed from a second viewpoint different from the first viewpoint.
Type: Application
Filed: September 25, 2008
Publication date: April 23, 2009
Applicant: CANON KABUSHIKI KAISHA
Inventors: Kaname Tomite, Masakazu Fujiki
-
Patent number: 7487468
Abstract: A video combining apparatus superimposes a virtual image, such as a CG image, on a video image of the real world or on a see-through type display device. An area in which the virtual image is not to be displayed can be easily designated by a user. If the user holds a frame with markers in his/her sight, the frame is image-sensed in the video image of the real world. The area designated by the user is detected by detecting the position of the marker in the video image, and the virtual image is not superimposed in this area.
Type: Grant
Filed: September 29, 2003
Date of Patent: February 3, 2009
Assignee: Canon Kabushiki Kaisha
Inventors: Rika Tanaka, Toshikazu Ohshima, Kaname Tomite
-
Publication number: 20080024523
Abstract: A method for combining a real space image with a virtual image includes: causing an imaging unit to capture an image of a real space; generating an image covering a predetermined space based on a plurality of real images in the captured real space; extracting position information of a light source based on the generated image; and adding a light source or a shadow on the virtual image based on the extracted position information of the light source.
Type: Application
Filed: July 23, 2007
Publication date: January 31, 2008
Applicant: CANON KABUSHIKI KAISHA
Inventors: Kaname Tomite, Toshikazu Ohshima
-
Publication number: 20040070611
Abstract: A video combining apparatus superimposes a virtual image, such as a CG image, on a video image of the real world or on a see-through type display device. An area in which the virtual image is not to be displayed can be easily designated by a user. If the user holds a frame with markers in his/her sight, the frame is image-sensed in the video image of the real world. The area designated by the user is detected by detecting the position of the marker in the video image, and the virtual image is not superimposed in this area.
Type: Application
Filed: September 29, 2003
Publication date: April 15, 2004
Applicant: Canon Kabushiki Kaisha
Inventors: Rika Tanaka, Toshikazu Ohshima, Kaname Tomite