Patents by Inventor Pongsak Lasang

Pongsak Lasang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210012538
    Abstract: A three-dimensional data encoding method includes: dividing three-dimensional points included in three-dimensional data into three-dimensional point sub-clouds including a first three-dimensional point sub-cloud and a second three-dimensional point sub-cloud; appending first information indicating a space of the first three-dimensional point sub-cloud to a header of the first three-dimensional point sub-cloud, and appending second information indicating a space of the second three-dimensional point sub-cloud to a header of the second three-dimensional point sub-cloud; and encoding the first three-dimensional point sub-cloud and the second three-dimensional point sub-cloud so that the first three-dimensional point sub-cloud and the second three-dimensional point sub-cloud are decodable independently of each other.
    Type: Application
    Filed: September 30, 2020
    Publication date: January 14, 2021
    Inventors: Chi WANG, Pongsak LASANG, Chung Dean HAN, Toshiyasu SUGIO
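    Illustrative sketch: the following Python example is not taken from the application; it only shows the general idea of splitting a point cloud into sub-clouds, each serialized behind its own small header describing the space it occupies, so that any sub-cloud can be decoded without the others. The header layout, split rule, and function names are assumptions made for this illustration.

      import struct
      import numpy as np

      def encode_sub_clouds(points, num_sub_clouds=2):
          """Split an N x 3 point array into sub-clouds and serialize each one
          behind a header carrying its point count and bounding box."""
          order = np.argsort(points[:, 0])                    # partition along x for simplicity
          chunks = np.array_split(points[order], num_sub_clouds)
          payloads = []
          for chunk in chunks:
              lo, hi = chunk.min(axis=0), chunk.max(axis=0)   # space of this sub-cloud
              header = struct.pack("<i6f", len(chunk), *lo, *hi)
              body = chunk.astype(np.float32).tobytes()       # naive coding of the points
              payloads.append(header + body)
          return payloads

      def decode_sub_cloud(payload):
          """Decode one sub-cloud using only its own header and body."""
          count = struct.unpack_from("<i6f", payload)[0]
          body = payload[struct.calcsize("<i6f"):]
          return np.frombuffer(body, dtype=np.float32).reshape(count, 3)

      pts = np.random.rand(1000, 3).astype(np.float32)
      encoded = encode_sub_clouds(pts)
      restored = decode_sub_cloud(encoded[1])                 # decoded independently of the first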
  • Patent number: 10893251
    Abstract: A three-dimensional model generating device includes: a converted image generating unit that, for each of input images included in one or more items of video data and having mutually different viewpoints, generates a converted image from the input image that includes fewer pixels than the input image; a camera parameter estimating unit that detects features in the converted images and estimates, for each of the input images, a camera parameter at a capture time of the input image, based on a pair of similar features between two of the converted images; and a three-dimensional model generating unit that generates a three-dimensional model using the input images and the camera parameters.
    Type: Grant
    Filed: March 7, 2019
    Date of Patent: January 12, 2021
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Tatsuya Koyama, Toshiyasu Sugio, Toru Matsunobu, Satoshi Yoshikawa, Pongsak Lasang, Chi Wang
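    Illustrative sketch: a short Python example, not from the patent, of producing a converted image with fewer pixels than the input by mean pooling; feature detection and camera-parameter estimation would then run on these smaller images, while the 3D model is built from the original full-resolution frames. The pooling factor and function name are assumptions.

      import numpy as np

      def converted_image(frame, factor=4):
          """Generate a converted image with fewer pixels than the input frame
          by averaging factor x factor blocks (one possible reduction)."""
          h = (frame.shape[0] // factor) * factor
          w = (frame.shape[1] // factor) * factor
          blocks = frame[:h, :w].reshape(h // factor, factor, w // factor, factor)
          return blocks.mean(axis=(1, 3))

      full = np.random.rand(480, 640)      # stand-in for one full-resolution input frame
      small = converted_image(full)        # 120 x 160 image used for feature matching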
  • Publication number: 20200351519
    Abstract: A three-dimensional data encoding method includes entropy encoding a bit sequence representing an N-ary tree structure of three-dimensional points included in three-dimensional data, using a coding table selected from coding tables, where N is an integer greater than or equal to 2. The bit sequence includes N-bit information for each of nodes in the N-ary tree structure. The N-bit information includes N pieces of 1-bit information each indicating whether a three-dimensional point is present in a corresponding one of N child nodes of a corresponding one of the nodes. In each of the coding tables, a context is set to each of bits in the N-bit information. In the entropy encoding, each of the bits in the N-bit information is entropy encoded using the context set to the bit in the coding table selected.
    Type: Application
    Filed: July 21, 2020
    Publication date: November 5, 2020
    Inventors: Toshiyasu SUGIO, Tatsuya KOYAMA, Chi WANG, Pongsak LASANG
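    Illustrative sketch: a small Python example, not the patent's coder, of the idea that every bit position of the N-bit occupancy word gets its own adaptive context. A real encoder would drive an arithmetic coder with these context probabilities; this sketch only accumulates the ideal code length in bits.

      import math

      def occupancy_code_length(occupancy_words, n=8):
          """Estimate the coded size of octree occupancy words (n = 8 children)
          when each bit position has its own adaptive binary context."""
          counts = [[1, 1] for _ in range(n)]            # per-bit contexts, Laplace-smoothed
          total_bits = 0.0
          for word in occupancy_words:
              for i in range(n):
                  bit = (word >> i) & 1
                  p = counts[i][bit] / sum(counts[i])    # context-conditioned probability
                  total_bits += -math.log2(p)
                  counts[i][bit] += 1                    # adapt this bit's context
          return total_bits

      print(occupancy_code_length([0b10000001, 0b10000011, 0b10000001]))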
  • Publication number: 20200349742
    Abstract: A three-dimensional data encoding method includes: generating an N-ary tree structure of three-dimensional points included in three-dimensional data, where N is an integer greater than or equal to 2; generating first encoded data by encoding a first branch using a first encoding process, the first branch having, as a root, a first node included in a first layer that is one of layers included in the N-ary tree structure; generating second encoded data by encoding a second branch using a second encoding process different from the first encoding process, the second branch having, as a root, a second node included in the first layer and different from the first node; and generating a bitstream including the first encoded data and the second encoded data.
    Type: Application
    Filed: July 17, 2020
    Publication date: November 5, 2020
    Inventors: Chi WANG, Pongsak LASANG, Toshiyasu SUGIO, Tatsuya KOYAMA
  • Publication number: 20200329258
    Abstract: An encoding method according to the present disclosure includes: inputting three-dimensional data including three-dimensional coordinate data to a deep neural network (DNN); encoding the three-dimensional data by the DNN to generate encoded three-dimensional data; and outputting the encoded three-dimensional data.
    Type: Application
    Filed: June 25, 2020
    Publication date: October 15, 2020
    Inventors: Chi WANG, Pongsak LASANG, Toshiyasu SUGIO, Tatsuya KOYAMA
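    Illustrative sketch: the abstract leaves the network unspecified, so the Python example below is only a toy PyTorch autoencoder in which the encoder maps each three-dimensional coordinate to a short latent code (the "encoded three-dimensional data") and the decoder reconstructs it. Layer sizes, the optimizer, and the training loop are assumptions.

      import torch
      import torch.nn as nn

      encoder = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 8))
      decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 3))
      optimizer = torch.optim.Adam(
          list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

      points = torch.rand(1024, 3)               # stand-in for 3D coordinate data
      for _ in range(200):
          code = encoder(points)                 # encoded three-dimensional data (output)
          loss = nn.functional.mse_loss(decoder(code), points)
          optimizer.zero_grad()
          loss.backward()
          optimizer.step()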
  • Patent number: 10789765
    Abstract: Provided is a three-dimensional reconstruction method of reconstructing a three-dimensional model from multi-view images. The method includes: selecting two frames from the multi-view images; calculating image information of each of the two frames; selecting a method of calculating corresponding keypoints in the two frames, according to the image information; and calculating the corresponding keypoints using the method of calculating corresponding keypoints selected in the selecting of the method of calculating corresponding keypoints.
    Type: Grant
    Filed: October 17, 2018
    Date of Patent: September 29, 2020
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Toru Matsunobu, Toshiyasu Sugio, Satoshi Yoshikawa, Tatsuya Koyama, Pongsak Lasang, Jian Gao
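    Illustrative sketch: a Python example, not the patent's criterion, in which the image information is a simple texture measure (variance of the gradient magnitude) and the keypoint-calculation method is switched between a sparse and a dense variant depending on it. The measure, threshold, and method labels are assumptions.

      import numpy as np

      def image_information(frame):
          """A stand-in for per-frame image information: how much texture the
          frame contains, measured as the variance of its gradient magnitude."""
          gy, gx = np.gradient(frame.astype(float))
          return float(np.var(np.hypot(gx, gy)))

      def select_keypoint_method(frame_a, frame_b, threshold=0.01):
          """Select the method of calculating corresponding keypoints for the
          two frames according to their image information."""
          info = min(image_information(frame_a), image_information(frame_b))
          return "sparse_descriptor_matching" if info >= threshold else "dense_block_matching"

      a = np.random.rand(240, 320)
      b = np.random.rand(240, 320)
      print(select_keypoint_method(a, b))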
  • Publication number: 20200288161
    Abstract: A three-dimensional data encoding method includes: generating first information in which an N-ary tree structure of a plurality of three-dimensional points included in three-dimensional data is expressed using a first formula, where N is an integer of 2 or higher; and generating a bitstream including the first information. The first information includes pieces of three-dimensional point information each associated with a corresponding one of the plurality of three-dimensional points. The pieces of three-dimensional point information each include indexes each associated with a corresponding one of a plurality of levels in the N-ary tree structure. The indexes each indicate a subblock, among N subblocks belonging to a corresponding one of the plurality of levels, to which a corresponding one of the plurality of three-dimensional points belongs.
    Type: Application
    Filed: May 20, 2020
    Publication date: September 10, 2020
    Inventors: Chi WANG, Pongsak LASANG, Toshiyasu SUGIO, Tatsuya KOYAMA
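    Illustrative sketch: a Python example, not the patent's encoding, computing for a single point the index (0..7 for an octree) of the sub-block it falls into at each level, which is the per-level index sequence the abstract describes. The cube bounds and depth are assumptions.

      import numpy as np

      def octree_path(point, lo, hi, depth):
          """Return, for each of `depth` levels, the index of the child
          sub-block (0..7) containing `point`, starting from the cube [lo, hi)."""
          lo, hi, p = (np.array(v, dtype=float) for v in (lo, hi, point))
          path = []
          for _ in range(depth):
              mid = (lo + hi) / 2.0
              bits = (p >= mid).astype(int)              # one bit per axis
              path.append(int(bits[0] * 4 + bits[1] * 2 + bits[2]))
              lo = np.where(bits == 1, mid, lo)          # descend into that sub-block
              hi = np.where(bits == 1, hi, mid)
          return path

      print(octree_path([0.7, 0.2, 0.9], lo=[0, 0, 0], hi=[1, 1, 1], depth=4))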
  • Publication number: 20200278450
    Abstract: A three-dimensional point cloud generation method for generating a three-dimensional point cloud including one or more three-dimensional points includes: obtaining (i) a two-dimensional image obtained by imaging a three-dimensional object using a camera and (ii) a first three-dimensional point cloud obtained by sensing the three-dimensional object using a distance sensor; detecting, from the two-dimensional image, one or more attribute values of the two-dimensional image that are associated with a position in the two-dimensional image; and generating a second three-dimensional point cloud including one or more second three-dimensional points each having an attribute value, by performing, for each of the one or more attribute values detected, (i) identifying, from a plurality of three-dimensional points forming the first three-dimensional point cloud, one or more first three-dimensional points to which the position of the attribute value corresponds, and (ii) appending the attribute value to the one or more first three-dimensional points identified.
    Type: Application
    Filed: May 19, 2020
    Publication date: September 3, 2020
    Inventors: Pongsak LASANG, Chi WANG, Zheng WU, Sheng Mei SHEN, Toshiyasu SUGIO, Tatsuya KOYAMA
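    Illustrative sketch: one common way to relate a position in the 2D image to points of the first point cloud is to project the points into the image with the camera parameters and sample the pixel each point corresponds to. The Python example below does that with a pinhole model; the intrinsics and function name are assumptions, not values from the application.

      import numpy as np

      def append_image_attribute(points, image, fx, fy, cx, cy):
          """Generate a second point cloud in which each point carries an
          attribute value (here the pixel value) sampled at the image position
          the point projects to; points outside the image keep NaN."""
          attr = np.full(len(points), np.nan)
          z = points[:, 2]
          u = np.round(fx * points[:, 0] / z + cx).astype(int)
          v = np.round(fy * points[:, 1] / z + cy).astype(int)
          inside = (z > 0) & (u >= 0) & (u < image.shape[1]) & (v >= 0) & (v < image.shape[0])
          attr[inside] = image[v[inside], u[inside]]
          return np.column_stack([points, attr])

      pts = np.random.rand(500, 3) + [0.0, 0.0, 1.0]    # first point cloud, in front of the camera
      img = np.random.rand(480, 640)                    # grayscale stand-in for the 2D image
      second_cloud = append_image_attribute(pts, img, fx=500, fy=500, cx=320, cy=240)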
  • Publication number: 20200252659
    Abstract: A three-dimensional data encoding method includes: determining whether to encode, using an octree structure, a current space unit among a plurality of space units included in three-dimensional data; encoding the current space unit using the octree structure, when it is determined that the current space unit is to be encoded using the octree structure; encoding the current space unit using a different method that is not the octree structure, when it is determined that the current space unit is not to be encoded using the octree structure; and appending, to a bitstream, information that indicates whether the current space unit has been encoded using the octree structure.
    Type: Application
    Filed: April 22, 2020
    Publication date: August 6, 2020
    Inventors: Pongsak LASANG, Toshiyasu SUGIO, Tatsuya KOYAMA
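    Illustrative sketch: a Python example of the signalling idea only: decide per space unit whether to use octree coding (here by a simple point-count rule, which is an assumption), encode accordingly, and prepend a one-byte flag so the decoder knows which method was used. The octree coder itself is left as a placeholder.

      import numpy as np

      def encode_space_unit(points, min_points_for_octree=64):
          """Encode one space unit and prepend a flag indicating whether the
          octree structure was used for it."""
          use_octree = len(points) >= min_points_for_octree
          flag = bytes([1 if use_octree else 0])            # appended to the bitstream
          if use_octree:
              body = octree_encode(points)                  # placeholder octree coder
          else:
              body = points.astype(np.float32).tobytes()    # direct coding of coordinates
          return flag + body

      def octree_encode(points):
          """Placeholder standing in for an actual octree-based coder."""
          return points.astype(np.float32).tobytes()

      unit = np.random.rand(100, 3)
      bitstream = encode_space_unit(unit)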
  • Publication number: 20200250798
    Abstract: A three-dimensional model encoding device includes: a projector that generates a two-dimensional image by projecting a three-dimensional model to at least one two-dimensional plane; a corrector that generates, using the two-dimensional image, a corrected image by correcting one or more pixels forming an inactive area to which the three-dimensional model is not projected, the inactive area being included in the two-dimensional image; and an encoder that generates encoded data by performing two-dimensional encoding on the corrected image.
    Type: Application
    Filed: April 24, 2020
    Publication date: August 6, 2020
    Inventors: Pongsak LASANG, Chi WANG, Toshiyasu SUGIO, Tatsuya KOYAMA
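    Illustrative sketch: a Python example, not the patent's corrector, of filling the inactive area (pixels the 3D model does not project to) before 2D encoding; here those pixels simply take the mean of the active pixels, which removes sharp, hard-to-compress borders. Any smoother fill could be used instead.

      import numpy as np

      def correct_inactive_area(projected, active_mask):
          """Return a corrected image in which inactive pixels are replaced by
          the mean of the active (projected) pixels."""
          corrected = projected.copy()
          corrected[~active_mask] = projected[active_mask].mean()
          return corrected

      img = np.zeros((240, 320))
      mask = np.zeros_like(img, dtype=bool)
      img[60:180, 80:240] = np.random.rand(120, 160)     # area the model projects onto
      mask[60:180, 80:240] = True
      corrected = correct_inactive_area(img, mask)        # passed on to the 2D encoder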
  • Publication number: 20200250885
    Abstract: A generation device includes: a communication interface via which two-dimensional (2D) images are to be received, the 2D images having been generated by photographing a space from different viewpoints with at least one camera; and a processor connected to the communication interface and configured to determine, according to the 2D images, a matching pattern to match feature points included in two pictures among the 2D images, match the feature points according to the matching pattern in order to generate first three-dimensional (3D) points, the first 3D points indicating respective first positions in the space, generate second 3D point based on the 2D images, the second 3D point indicating a second position in the space, and generate a 3D model of the space based on the first 3D points and the second 3D point.
    Type: Application
    Filed: April 22, 2020
    Publication date: August 6, 2020
    Inventors: Zhen Peng BIAN, Pongsak LASANG, Toshiyasu SUGIO, Toru MATSUNOBU, Satoshi YOSHIKAWA, Tatsuya KOYAMA
  • Publication number: 20200242811
    Abstract: A three-dimensional data encoding method includes: generating predicted position information using position information on three-dimensional points included in three-dimensional reference data associated with a time different from a time associated with current three-dimensional data; and encoding position information on three-dimensional points included in the current three-dimensional data, using the predicted position information.
    Type: Application
    Filed: April 15, 2020
    Publication date: July 30, 2020
    Inventors: Chi WANG, Pongsak LASANG, Toshiyasu SUGIO, Takahiro NISHI, Toru MATSUNOBU
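    Illustrative sketch: a Python example of inter prediction for point positions: each current point is predicted from its nearest neighbour in the reference cloud (an earlier time) and only a quantized residual is kept. The predictor, the brute-force search, and the quantization step are assumptions for illustration.

      import numpy as np

      def encode_with_prediction(current, reference, step=0.01):
          """Generate predicted position information from the reference cloud and
          encode the current positions as quantized residuals against it."""
          d = np.linalg.norm(current[:, None, :] - reference[None, :, :], axis=2)
          predicted = reference[np.argmin(d, axis=1)]      # predicted position information
          residual = np.round((current - predicted) / step).astype(np.int32)
          return predicted, residual

      def decode_with_prediction(predicted, residual, step=0.01):
          # A real decoder would re-derive `predicted` from the already-decoded
          # reference cloud instead of receiving it.
          return predicted + residual * step

      ref = np.random.rand(200, 3)                         # reference data (different time)
      cur = ref + 0.005 * np.random.randn(200, 3)          # current data, slightly moved
      pred, res = encode_with_prediction(cur, ref)
      recon = decode_with_prediction(pred, res)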
  • Publication number: 20200234453
    Abstract: There is provided a projection instruction device that generates a projection image to be projected on parcel based on sensing information of the parcel, the device including: a processor; and a memory, in which by cooperating with the memory, the processor performs weighting on a value of a feature amount of a color image of parcel included in the sensing information based on a distance image of parcel included in the sensing information, and tracks the parcel based on the weighted value of the feature amount of the color image.
    Type: Application
    Filed: May 16, 2018
    Publication date: July 23, 2020
    Applicant: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Takaaki IDERA, Takaaki MORIYAMA, Shohji OHTSUBO, Pongsak LASANG, Takrit TANASNITIKUL
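    Illustrative sketch: a Python example, not the patent's tracker, of weighting a color feature by the distance image: each pixel's contribution to the parcel's color histogram is scaled by how close its measured distance is to the parcel's expected distance, so background at other depths counts less. The Gaussian weighting and its parameters are assumptions.

      import numpy as np

      def depth_weighted_color_histogram(color, depth, target_depth, sigma=0.1, bins=16):
          """Color-feature histogram with per-pixel weights derived from the
          distance image."""
          weights = np.exp(-((depth - target_depth) ** 2) / (2 * sigma ** 2))
          hist, _ = np.histogram(color, bins=bins, range=(0.0, 1.0), weights=weights)
          return hist / max(hist.sum(), 1e-9)

      color_img = np.random.rand(120, 160)                 # single-channel stand-in for the color image
      depth_img = 0.8 + 0.05 * np.random.randn(120, 160)   # distance image of the parcel
      feature = depth_weighted_color_histogram(color_img, depth_img, target_depth=0.8)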
  • Publication number: 20200226825
    Abstract: A generation method is disclosed. Two-dimensional (2D) images that are generated by photographing a space from different viewpoints with at least one camera are obtained. Resolutions of the 2D images are reduced to generate first images, respectively. Second images are generated from the 2D images, respectively such that a resolution of each of the second images is higher than a resolution of any one of the first images. First three-dimensional (3D) points are generated based on the first images. The first 3D points indicate respective first positions in the space. A second 3D point is generated based on the second images. The second 3D point indicates a second position in the space. A 3D model of the space is generated based on the first 3D points and the second 3D point.
    Type: Application
    Filed: March 24, 2020
    Publication date: July 16, 2020
    Inventors: Zhen Peng BIAN, Pongsak LASANG, Toshiyasu SUGIO, Toru MATSUNOBU, Satoshi YOSHIKAWA, Tatsuya KOYAMA
  • Publication number: 20200202611
    Abstract: This image generation method is for generating a virtual image by a processor using at least one of images obtained by cameras disposed in different positions and attitudes capturing the same target space in a three-dimensional (3D) space. The virtual image is a two-dimensional (2D) image of the target space viewed from a virtual viewpoint in the 3D space. When generating the virtual image using one or more second images captured by one or more second cameras, at least one of which is different from one or more first cameras that capture one or more first images serving as a basis among the images, a second process which includes at least one of luminance and color adjustments and is different from a first process performed to generate the virtual image using the one or more first images is performed on the one or more second images.
    Type: Application
    Filed: March 3, 2020
    Publication date: June 25, 2020
    Inventors: Chi WANG, Pongsak LASANG, Toshiyasu SUGIO, Toru MATSUNOBU, Satoshi YOSHIKAWA, Tatsuya KOYAMA, Yoichi SUGINO
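    Illustrative sketch: a Python example of a "second process" that adjusts luminance before rendering: the second camera's image is shifted and scaled so its mean and spread match the first camera's image. Mean/standard-deviation matching is one simple adjustment chosen here for illustration; the abstract does not fix the method.

      import numpy as np

      def match_luminance(second_image, first_image):
          """Adjust the second camera's image so its luminance statistics match
          those of the first (reference) camera's image."""
          mu_s, sd_s = second_image.mean(), second_image.std()
          mu_f, sd_f = first_image.mean(), first_image.std()
          return (second_image - mu_s) * (sd_f / max(sd_s, 1e-9)) + mu_f

      first = 0.6 * np.random.rand(240, 320)               # first-camera image (reference)
      second = 0.3 + 0.4 * np.random.rand(240, 320)        # second-camera image, different exposure
      adjusted = match_luminance(second, first)            # used when rendering the virtual view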
  • Publication number: 20200193612
    Abstract: There is provided a parcel recognition device that recognizes parcel based on a color image including one or more parcels, the device including: a processor; and a memory, in which by cooperating with the memory, the processor estimates a region of the one or more parcels in the color image, switches a color of a background which is a region excluding the region of the one or more parcels in the color image, and recognizes each of the one or more parcels based on the background having the switched color and a color of the region of the parcel.
    Type: Application
    Filed: May 16, 2018
    Publication date: June 18, 2020
    Applicant: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Takaaki MORIYAMA, Takaaki IDERA, Shohji OHTSUBO, Pongsak LASANG, Takrit TANASNITIKUL
  • Publication number: 20200111221
    Abstract: There is provided a projection indication device that generates a projection image to be projected on parcel based on sensing information of the parcel, the device including: a processor; and a memory, in which by cooperating with the memory, the processor specifies parcel based on a distance image of the parcel included in sensing information, tracks the parcel based on a color image of the parcel included in the sensing information, and tracks the parcel based on the distance image of the parcel in a case where the color image of the parcel on which a projection image is projected includes a white region expressed in white which is not an original gradation.
    Type: Application
    Filed: May 16, 2018
    Publication date: April 9, 2020
    Applicant: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Takaaki MORIYAMA, Takaaki IDERA, Shohji OHTSUBO, Pongsak LASANG, Takrit TANASNITIKUL
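    Illustrative sketch: a Python example of the fallback rule only: track on the color image unless the projected light has washed the parcel out, i.e. unless too many pixels are near-white rather than an original gradation, in which case tracking switches to the distance image. The thresholds are illustrative assumptions.

      import numpy as np

      def choose_tracking_source(color_image, white_threshold=0.95, max_white_fraction=0.2):
          """Return which image to track the parcel with, based on the share of
          near-white pixels in the color image."""
          white_fraction = float(np.mean(color_image >= white_threshold))
          return "distance_image" if white_fraction > max_white_fraction else "color_image"

      frame = np.random.rand(120, 160)
      frame[40:90, 50:120] = 1.0                           # region saturated by the projection
      print(choose_tracking_source(frame))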
  • Publication number: 20190208177
    Abstract: A three-dimensional model generating device includes: a converted image generating unit that, for each of input images included in one or more items of video data and having mutually different viewpoints, generates a converted image from the input image that includes fewer pixels than the input image; a camera parameter estimating unit that detects features in the converted images and estimates, for each of the input images, a camera parameter at a capture time of the input image, based on a pair of similar features between two of the converted images; and a three-dimensional model generating unit that generates a three-dimensional model using the input images and the camera parameters.
    Type: Application
    Filed: March 7, 2019
    Publication date: July 4, 2019
    Inventors: Tatsuya KOYAMA, Toshiyasu SUGIO, Toru MATSUNOBU, Satoshi YOSHIKAWA, Pongsak LASANG, Chi WANG
  • Publication number: 20190051036
    Abstract: Provided is a three-dimensional reconstruction method of reconstructing a three-dimensional model from multi-view images. The method includes: selecting two frames from the multi-view images; calculating image information of each of the two frames; selecting a method of calculating corresponding keypoints in the two frames, according to the image information; and calculating the corresponding keypoints using the method of calculating corresponding keypoints selected in the selecting of the method of calculating corresponding keypoints.
    Type: Application
    Filed: October 17, 2018
    Publication date: February 14, 2019
    Inventors: Toru MATSUNOBU, Toshiyasu SUGIO, Satoshi YOSHIKAWA, Tatsuya KOYAMA, Pongsak LASANG, Jian GAO
  • Patent number: 9769488
    Abstract: The current invention provides methods for 3D content capturing, 3D content coding and packaging at the content production side, and 3D content consuming and rendering at the display or terminal side, in order to ensure healthy and effective 3D viewing at all times. According to the current invention, the maximum disparity and 3D budget, which are scene dependent, are calculated, used for coding, embedded in the coded streams or media file, and checked against the allowable values during content rendering, so as to determine whether the same 3D content can be shown to the user under the viewing condition the user has. In the case where a healthy 3D viewing guideline cannot be met, it is suggested to adjust the 3D content so that its new maximum disparity and 3D budget fall within the allowable range, to achieve healthy and effective 3D viewing for that user under his/her viewing condition.
    Type: Grant
    Filed: February 1, 2013
    Date of Patent: September 19, 2017
    Assignee: SUN PATENT TRUST
    Inventors: Sheng Mei Shen, Pongsak Lasang, Chong Soon Lim, Toshiyasu Sugio, Takahiro Nishi, Hisao Sasai, Youji Shibahara, Kyoko Tanikawa, Toru Matsunobu, Kengo Terada
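    Illustrative sketch: a Python example of checking an embedded maximum disparity against a viewing condition. As an assumption (a common rule of thumb, not necessarily the patent's exact criterion), the on-screen disparity is considered acceptable when it does not exceed the viewer's eye separation; if the check fails, the renderer would adjust the content.

      def disparity_within_budget(max_disparity_px, content_width_px,
                                  screen_width_m, eye_separation_m=0.065):
          """Return True when the content's maximum disparity, mapped onto the
          actual screen, stays within the assumed comfort limit."""
          disparity_m = max_disparity_px / content_width_px * screen_width_m
          return disparity_m <= eye_separation_m

      # 40 px of maximum disparity in 1920 px wide content, shown on a 1.2 m wide screen.
      print(disparity_within_budget(40, 1920, screen_width_m=1.2))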