Patents by Inventor TETSUYA FUKUYASU
TETSUYA FUKUYASU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12260491
Abstract: There is provided an information processing device to generate a video to which a wide range of renditions are applied from a three-dimensional object generated by a volumetric technology. The information processing device includes a first generation unit (134) that generates, based on a three-dimensional model of a subject generated by using a plurality of captured images obtained by imaging the subject and based on a two-dimensional image, a video in which a subject generated from the three-dimensional model, and the two-dimensional image, are simultaneously present.
Type: Grant
Filed: July 15, 2021
Date of Patent: March 25, 2025
Assignee: SONY GROUP CORPORATION
Inventor: Tetsuya Fukuyasu
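The claimed device combines a subject rendered from a volumetric 3D model with a separate 2D image in the same output frame. A minimal sketch of that kind of per-frame compositing, assuming the subject has already been rendered to an RGBA image by some volumetric renderer (the function name and array shapes are illustrative assumptions, not the patented implementation):

```python
# Sketch only: alpha-composite a rendered volumetric subject over a 2D image
# so both are present in one output video frame. Inputs are assumed to be
# a subject frame of shape (H, W, 4) and a background of shape (H, W, 3).
import numpy as np

def composite_frame(subject_rgba: np.ndarray, background_rgb: np.ndarray) -> np.ndarray:
    """Alpha-composite the rendered subject over the 2D background image."""
    rgb = subject_rgba[..., :3].astype(np.float32)
    alpha = subject_rgba[..., 3:4].astype(np.float32) / 255.0   # per-pixel opacity
    out = alpha * rgb + (1.0 - alpha) * background_rgb.astype(np.float32)
    return out.astype(np.uint8)
```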
-
Patent number: 12148211
Abstract: The present technology relates to an image processing apparatus, a 3D model generation method, and a program capable of reducing failed image capturing in multi-view image capturing for 3D model generation. The image processing apparatus includes a 3D region calculation unit that generates a 3D region of image capturing ranges generated from a plurality of multi-view images, and a determination unit that determines a situation in which an image capturing device captures a subject on the basis of a region image obtained by projecting the 3D region onto a specific viewpoint and a subject image from the image capturing device corresponding to the specific viewpoint. The present technology can be applied to, for example, an image processing apparatus for 3D model generation.
Type: Grant
Filed: March 27, 2020
Date of Patent: November 19, 2024
Assignee: SONY GROUP CORPORATION
Inventors: Hiroaki Takahashi, Tetsuya Fukuyasu
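As a rough illustration of the determination step described above (not the patented method): project the corners of a shared 3D capture region into one camera with assumed intrinsics K and pose R, t, then check whether the detected subject mask stays inside the projected region, which would flag views at risk of a failed capture. All inputs below are assumptions for the sketch.

```python
# Sketch only: project a 3D capture region into a camera view and test
# whether the subject silhouette remains inside the projected region.
import numpy as np

def project_points(points_3d: np.ndarray, K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Project Nx3 world points to Nx2 pixel coordinates with a pinhole model."""
    cam = R @ points_3d.T + t.reshape(3, 1)      # world -> camera coordinates
    uv = K @ cam                                  # camera -> homogeneous pixels
    return (uv[:2] / uv[2]).T

def subject_inside_region(subject_mask: np.ndarray, region_corners_2d: np.ndarray) -> bool:
    """Approximate the projected 3D region by its 2D bounding box and test containment."""
    ys, xs = np.nonzero(subject_mask)
    if xs.size == 0:
        return False                              # no subject detected in this view
    x_min, y_min = region_corners_2d.min(axis=0)
    x_max, y_max = region_corners_2d.max(axis=0)
    return bool(xs.min() >= x_min and xs.max() <= x_max and
                ys.min() >= y_min and ys.max() <= y_max)
```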
-
Publication number: 20230260199
Abstract: There is provided an information processing device to generate a video to which a wide range of renditions are applied from a three-dimensional object generated by a volumetric technology. The information processing device includes a first generation unit (134) that generates, based on a three-dimensional model of a subject generated by using a plurality of captured images obtained by imaging the subject and based on a two-dimensional image, a video in which a subject generated from the three-dimensional model, and the two-dimensional image, are simultaneously present.
Type: Application
Filed: July 15, 2021
Publication date: August 17, 2023
Applicant: Sony Group Corporation
Inventor: Tetsuya FUKUYASU
-
Publication number: 20220172474
Abstract: The present technology relates to an image processing apparatus, a 3D model generation method, and a program capable of reducing failed image capturing in multi-view image capturing for 3D model generation. The image processing apparatus includes a 3D region calculation unit that generates a 3D region of image capturing ranges generated from a plurality of multi-view images, and a determination unit that determines a situation in which an image capturing device captures a subject on the basis of a region image obtained by projecting the 3D region onto a specific viewpoint and a subject image from the image capturing device corresponding to the specific viewpoint. The present technology can be applied to, for example, an image processing apparatus for 3D model generation.
Type: Application
Filed: March 27, 2020
Publication date: June 2, 2022
Inventors: HIROAKI TAKAHASHI, TETSUYA FUKUYASU
-
Patent number: 11184602
Abstract: The present disclosure relates to an image processing apparatus and an image processing method capable of storing auxiliary information in CbCr components in a YCbCr format. The image processing apparatus includes a receiving section that receives depth image data in which a depth image transmitted together with a texture image is stored in a Y component in a YCbCr format and auxiliary information is stored in CbCr components in the YCbCr format, and an auxiliary information utilization section that executes a predetermined image process using the auxiliary information on at least one of the texture image or the depth image. A value that can be taken on by the auxiliary information for each pixel of the texture image has N patterns, and a gradation value out of N×N gradation values into which a combination of the auxiliary information regarding two pixels is converted is stored in the CbCr components.
Type: Grant
Filed: January 30, 2018
Date of Patent: November 23, 2021
Assignee: SONY CORPORATION
Inventor: Tetsuya Fukuyasu
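The final sentence of the abstract describes a concrete packing scheme: each pixel's auxiliary value has N possible patterns, and the values of two pixels are combined into one of N×N gradation levels stored in the chroma components. A minimal sketch of that idea, assuming two neighbouring pixels share one chroma sample and that the levels are spread over the 8-bit range to tolerate mild codec distortion (the value of N, the spacing, and the rounding rule are illustrative assumptions):

```python
# Sketch only: pack two per-pixel auxiliary values (each with N patterns)
# into one chroma sample and recover them by snapping to the nearest level.
N = 4  # assumed number of auxiliary patterns per pixel

def encode_pair(a1: int, a2: int) -> int:
    """Map two auxiliary values in [0, N) to one chroma gradation value in [0, 255]."""
    index = a1 * N + a2                  # one of N*N combinations
    step = 255 // (N * N - 1)            # spacing between representative levels
    return index * step

def decode_pair(chroma: int) -> tuple[int, int]:
    """Recover the two auxiliary values from a possibly distorted chroma sample."""
    step = 255 // (N * N - 1)
    index = round(chroma / step)         # snap to the nearest representative level
    return index // N, index % N
```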
-
Patent number: 11006135
Abstract: The present disclosure relates to an image processing apparatus and an image processing method that make it possible to suppress deterioration of picture quality of an image within a viewing range of a viewer. An image processing apparatus includes an image processing section that performs, based on priorities between a plurality of encoded streams obtained by encoding a plurality of projection images that are obtained by projecting an omnidirectional image to a plurality of faces or a plurality of viewpoint images from different viewpoints, decoding of the encoded streams and generation or selection of an image to be used for generation of a display image, and a drawing section that generates the display image based on the generated or selected image.
Type: Grant
Filed: July 21, 2017
Date of Patent: May 11, 2021
Assignee: SONY CORPORATION
Inventors: Tetsuya Fukuyasu, Kendai Furukawa
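A hedged sketch of the prioritisation idea (not the patented algorithm): rank the faces of an omnidirectional cube map by how well their normals align with the viewer's gaze direction, and spend the decoding budget on the highest-priority streams first. The face set and the budget below are illustrative assumptions.

```python
# Sketch only: prioritise cube-map face streams by alignment with the gaze.
import numpy as np

FACE_NORMALS = {                          # outward normals of the six cube faces
    "+x": (1, 0, 0), "-x": (-1, 0, 0),
    "+y": (0, 1, 0), "-y": (0, -1, 0),
    "+z": (0, 0, 1), "-z": (0, 0, -1),
}

def prioritise_faces(view_dir, budget: int = 3):
    """Return the `budget` faces whose normals best align with the viewing direction."""
    v = np.asarray(view_dir, dtype=np.float64)
    v /= np.linalg.norm(v)
    scores = {face: float(np.dot(v, normal)) for face, normal in FACE_NORMALS.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:budget]                # decode these streams first; defer or skip the rest
```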
-
Patent number: 10991144
Abstract: There is provided an image processing apparatus that includes an ML3D model generation section, which is applicable to a home server that generates a display image of a predetermined viewpoint from an omnidirectional image or the like. The ML3D model generation section receives transmission information in which auxiliary information is added to at least one of texture information of a first layer, depth information of the first layer, texture information of a second layer or depth information of the second layer, and executes predetermined image processing using the auxiliary information for at least one of the texture information of the first layer, the depth information of the first layer, the texture information of the second layer or the depth information of the second layer.
Type: Grant
Filed: July 14, 2017
Date of Patent: April 27, 2021
Assignee: SONY CORPORATION
Inventors: Yuichi Araki, Junichi Tanaka, Hiroshi Oryoji, Yuichi Hasegawa, Tooru Masuda, Tetsuya Fukuyasu
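One way to picture the transmission information described above is as two layers, each carrying texture, depth, and optional auxiliary data. The sketch below is only an assumed organisation for illustration; the field names and the example use of the auxiliary data as a validity mask are not taken from the patent.

```python
# Sketch only: an assumed container for two-layer texture/depth data with
# optional auxiliary information, plus one illustrative use of that data.
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class LayerData:
    texture: np.ndarray                      # H x W x 3 texture image
    depth: np.ndarray                        # H x W depth image
    auxiliary: Optional[np.ndarray] = None   # optional per-pixel auxiliary information

@dataclass
class TransmissionInfo:
    first_layer: LayerData
    second_layer: LayerData

def apply_auxiliary(layer: LayerData) -> np.ndarray:
    """Illustrative processing step: treat auxiliary data as a depth validity mask."""
    if layer.auxiliary is None:
        return layer.depth
    return np.where(layer.auxiliary > 0, layer.depth, 0)
```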
-
Information processor, information processing system, and information processing method, and program
Patent number: 10771683
Abstract: Accurate motion detection is performed by discriminating whether a sensor detecting an object motion is mounted on a human body or not, and processing is executed with respect to metadata based on the result. Sensor information according to the motion is input from the sensor, and a sensor mounting position is determined. A sensor mounting position detection unit calculates a ratio between a high-frequency component and a low-frequency component included in the sensor information, and discriminates whether the sensor is mounted on the human body or is mounted on other than the human body, on the basis of the calculated ratio. A metadata generating unit inputs user motion detection information obtained by executing a motion detection algorithm assuming a sensor mounting position coincident with a sensor mounting position detection result, and generates the shot image corresponding metadata.
Type: Grant
Filed: December 5, 2016
Date of Patent: September 8, 2020
Assignee: Sony Corporation
Inventors: Tetsuya Fukuyasu, Noriyuki Aramaki
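A simple sketch of the frequency-ratio test described above: split the sensor signal's spectrum at a cutoff, compare high-frequency energy to low-frequency energy, and decide from the ratio whether the sensor is worn on the body. The cutoff, threshold, sample rate, and the direction of the comparison are illustrative assumptions, not values from the patent.

```python
# Sketch only: classify a sensor as body-mounted from the high/low frequency
# energy ratio of its motion signal.
import numpy as np

def is_body_mounted(signal: np.ndarray, sample_rate: float = 100.0,
                    cutoff_hz: float = 3.0, threshold: float = 0.5) -> bool:
    """Return True if the high/low frequency energy ratio suggests body mounting."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / sample_rate)
    low = spectrum[freqs < cutoff_hz].sum()
    high = spectrum[freqs >= cutoff_hz].sum()
    ratio = high / max(low, 1e-9)
    # Assumption for the sketch: body-worn motion carries relatively more
    # high-frequency energy than, say, a tripod- or vehicle-mounted sensor.
    return ratio > threshold
```
-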
INFORMATION PROCESSOR, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING METHOD, AND PROGRAM
Publication number: 20200014845
Abstract: Accurate motion detection is performed by discriminating whether a sensor detecting an object motion is mounted on a human body or not, and processing is executed with respect to metadata based on the result. Sensor information according to the motion is input from the sensor, and a sensor mounting position is determined. A sensor mounting position detection unit calculates a ratio between a high-frequency component and a low-frequency component included in the sensor information, and discriminates whether the sensor is mounted on the human body or is mounted on other than the human body, on the basis of the calculated ratio. A metadata generating unit inputs user motion detection information obtained by executing a motion detection algorithm assuming a sensor mounting position coincident with a sensor mounting position detection result, and generates the shot image corresponding metadata.
Type: Application
Filed: December 5, 2016
Publication date: January 9, 2020
Inventors: Tetsuya Fukuyasu, Noriyuki Aramaki
-
Publication number: 20200007845
Abstract: The present disclosure relates to an image processing apparatus and an image processing method capable of storing auxiliary information in CbCr components in a YCbCr format so as to prevent deterioration by a codec distortion. The image processing apparatus includes a receiving section that receives depth image data in which a depth image transmitted together with a texture image is stored in a Y component in a YCbCr format and auxiliary information is stored in CbCr components in the YCbCr format, and an auxiliary information utilization section that executes a predetermined image process using the auxiliary information on at least one of the texture image or the depth image. A value that can be taken on by the auxiliary information for each pixel of the texture image has N patterns, and a gradation value out of N×N gradation values into which a combination of the auxiliary information regarding two pixels is converted is stored in the CbCr components.
Type: Application
Filed: January 30, 2018
Publication date: January 2, 2020
Inventor: TETSUYA FUKUYASU
-
Publication number: 20190287289
Abstract: The present disclosure relates to an image processing apparatus and an image processing method that make it possible to generate a texture image of high picture quality at a predetermined viewpoint using an omnidirectional image. An ML3D model generation section receives transmission information in which auxiliary information is added to at least one of texture information of a first layer, depth information of the first layer, texture information of a second layer or depth information of the second layer, and executes predetermined image processing using the auxiliary information for at least one of the texture information of the first layer, the depth information of the first layer, the texture information of the second layer or the depth information of the second layer. The present disclosure can be applied, for example, to a home server that generates a display image of a predetermined viewpoint from an omnidirectional image or the like.
Type: Application
Filed: July 14, 2017
Publication date: September 19, 2019
Inventors: YUICHI ARAKI, JUNICHI TANAKA, HIROSHI ORYOJI, YUICHI HASEGAWA, TOORU MASUDA, TETSUYA FUKUYASU
-
Publication number: 20190268612
Abstract: The present disclosure relates to an image processing apparatus and an image processing method that make it possible to suppress deterioration of picture quality of an image within a viewing range of a viewer. An image processing apparatus includes an image processing section configured to perform, based on priorities between a plurality of encoded streams obtained by encoding a plurality of projection images that are obtained by projecting an omnidirectional image to a plurality of faces or a plurality of viewpoint images from different viewpoints, decoding of the encoded streams and generation or selection of an image to be used for generation of a display image, and a drawing section configured to generate the display image based on the generated or selected image. The present disclosure can be applied to a home server and so forth that generate a display image within a viewing range of a viewer from an omnidirectional image.
Type: Application
Filed: July 21, 2017
Publication date: August 29, 2019
Inventors: TETSUYA FUKUYASU, KENDAI FURUKAWA