Patents by Inventor Shen-Kai Chang

Shen-Kai Chang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10187640
    Abstract: A method and system for encoding a group of coding blocks and packetizing the compressed data into slices/packets with hard-limited packet size are disclosed. According to the present invention, a packetization map for at least a portion of a current picture is determined. The packetization map associates coding blocks in at least a portion of the current picture with one or more packets by identifying a corresponding group of coding blocks for each packet of said one or more packets. The corresponding group of coding blocks for each packet is then encoded according to the packetization map and the size of each packet is determined. The packet size is checked. If any packet size exceeds a constrained size, a new packetization map is generated and the corresponding group of coding blocks for each packet is encoded according to the new packetization map.
    Type: Grant
    Filed: June 23, 2016
    Date of Patent: January 22, 2019
    Assignee: MEDIATEK INC.
    Inventors: Chao-Chih Huang, Ting-An Lin, Shen-Kai Chang, Han-Liang Chou
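The encode-check-repacketize loop described in the abstract above can be sketched roughly as follows. The fixed group size, the halving heuristic for building a new packetization map, and the 100-byte limit are illustrative assumptions, not the patented implementation:

```python
MAX_PACKET_SIZE = 100  # hard packet-size limit in bytes (assumed value)

def packetize(block_sizes, blocks_per_packet):
    """Build a packetization map: a list of packets, each a list of coding-block indices."""
    return [list(range(i, min(i + blocks_per_packet, len(block_sizes))))
            for i in range(0, len(block_sizes), blocks_per_packet)]

def encode_with_limit(block_sizes, blocks_per_packet=8):
    """Encode block groups per the map, check every packet size, and
    regenerate the map with smaller groups until all packets fit."""
    while blocks_per_packet >= 1:
        pmap = packetize(block_sizes, blocks_per_packet)
        packet_sizes = [sum(block_sizes[b] for b in group) for group in pmap]
        if all(s <= MAX_PACKET_SIZE for s in packet_sizes):
            return pmap, packet_sizes
        blocks_per_packet //= 2  # new packetization map with smaller groups
    raise ValueError("a single coding block exceeds the packet-size limit")
```

Here the compressed size of each block is taken as known up front; in practice it is only known after encoding the group, which is why the abstract describes re-encoding under the new map.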
  • Publication number: 20180359459
    Abstract: A video processing method includes obtaining projection face(s) from an omnidirectional content of a sphere, and obtaining a re-sampled projection face by re-sampling at least a portion of a projection face of the projection face(s) through non-uniform mapping. The omnidirectional content of the sphere is mapped onto the projection face(s) via a 360-degree Virtual Reality (360 VR) projection. The projection face has a first source region and a second source region. The re-sampled projection face has a first re-sampled region and a second re-sampled region. The first re-sampled region is derived from re-sampling the first source region with a first sampling density. The second re-sampled region is derived from re-sampling the second source region with a second sampling density that is different from the first sampling density.
    Type: Application
    Filed: April 3, 2018
    Publication date: December 13, 2018
    Inventors: Ya-Hsuan Lee, Peng Wang, Jian-Liang Lin, Shen-Kai Chang
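A minimal sketch of re-sampling one strip of a projection face with two different sampling densities, as the abstract above describes. Nearest-neighbour picking and the specific densities are assumptions for illustration, not the claimed mapping:

```python
def resample_strip(row, split, density_a=1.0, density_b=0.5):
    """Re-sample a 1-D strip of a projection face: samples before `split`
    (first source region) at density_a, samples after it (second source
    region) at a different density_b, via nearest-neighbour picks."""
    a, b = row[:split], row[split:]
    ra = [a[min(int(i / density_a), len(a) - 1)] for i in range(int(len(a) * density_a))]
    rb = [b[min(int(i / density_b), len(b) - 1)] for i in range(int(len(b) * density_b))]
    return ra + rb
```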
  • Publication number: 20180338160
    Abstract: Methods and apparatus of processing 360-degree virtual reality images are disclosed. According to one method, each 360-degree virtual reality image is projected into one first projection picture using first projection-format conversion. The first projection pictures are encoded and decoded into first reconstructed projection pictures. Each first reconstructed projection picture is then projected into one second reconstructed projection picture or one third reconstructed projection picture corresponding to a selected viewpoint using second projection-format conversion. One or more discontinuous edges in one or more second reconstructed projection pictures or one or more third reconstructed projection pictures corresponding to the selected viewpoint are identified. A post-processing filter is then applied to at least one discontinuous edge in the second reconstructed projection pictures or third reconstructed projection pictures corresponding to the selected viewpoint to generate filtered output.
    Type: Application
    Filed: May 10, 2018
    Publication date: November 22, 2018
    Inventors: Ya-Hsuan LEE, Jian-Liang LIN, Shen-Kai CHANG
  • Publication number: 20180332305
    Abstract: A video encoding method includes: setting a 360-degree Virtual Reality (360 VR) projection layout of projection faces, wherein the projection faces have a plurality of triangular projection faces located at a plurality of positions in the 360 VR projection layout, respectively; encoding a frame having a 360-degree image content represented by the projection faces arranged in the 360 VR projection layout to generate a bitstream; and for each position included in at least a portion of the positions, signaling at least one syntax element via the bitstream, wherein the at least one syntax element is set to indicate at least one of an index of a triangular projection view filled into a corresponding triangular projection face located at the position and a rotation angle of content rotation applied to the triangular projection view filled into the corresponding triangular projection face located at the position.
    Type: Application
    Filed: September 30, 2017
    Publication date: November 15, 2018
    Inventors: Jian-Liang Lin, Hung-Chih Lin, Chia-Ying Li, Shen-Kai Chang, Chi-Cheng Ju
  • Publication number: 20180276788
    Abstract: A video processing method includes receiving an omnidirectional content corresponding to a sphere, generating a projection-based frame according to at least the omnidirectional content and a segmented sphere projection (SSP) format, and encoding, by a video encoder, the projection-based frame to generate a part of a bitstream. The projection-based frame has a 360-degree content represented by a first circular projection face, a second circular projection face, and at least one rectangular projection face packed in an SSP layout. A north polar region of the sphere is mapped onto the first circular projection face. A south polar region of the sphere is mapped onto the second circular projection face. At least one non-polar ring-shaped segment between the north polar region and the south polar region of the sphere is mapped onto said at least one rectangular projection face.
    Type: Application
    Filed: March 20, 2018
    Publication date: September 27, 2018
    Inventors: Ya-Hsuan Lee, Hung-Chih Lin, Jian-Liang Lin, Shen-Kai Chang
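The latitude-based split behind the segmented sphere projection above might be sketched as below; the 45-degree polar threshold is an assumed parameter, not taken from the filing:

```python
def ssp_face(lat_deg, polar_threshold=45.0):
    """Classify a sphere sample by latitude into the SSP faces named in the
    abstract: the north circular face, the south circular face, or the
    non-polar ring-shaped segment mapped to a rectangular face."""
    if lat_deg >= polar_threshold:
        return "north_circle"
    if lat_deg <= -polar_threshold:
        return "south_circle"
    return "ring_rectangle"
```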
  • Publication number: 20180262774
    Abstract: A video processing method includes: receiving a first input frame with a 360-degree Virtual Reality (360 VR) projection format; applying first content-oriented rotation to the first input frame to generate a first content-rotated frame; encoding the first content-rotated frame to generate a first part of a bitstream, including generating a first reconstructed frame and storing a reference frame derived from the first reconstructed frame; receiving a second input frame with the 360 VR projection format; applying second content-oriented rotation to the second input frame to generate a second content-rotated frame; configuring content re-rotation according to the first content-oriented rotation and the second content-oriented rotation; applying the content re-rotation to the reference frame to generate a re-rotated reference frame; and encoding, by a video encoder, the second content-rotated frame to generate a second part of the bitstream, including using the re-rotated reference frame for predictive coding of the second content-rotated frame.
    Type: Application
    Filed: March 5, 2018
    Publication date: September 13, 2018
    Inventors: Hung-Chih Lin, Jian-Liang Lin, Shen-Kai Chang
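Modelling each content-oriented rotation as a circular shift of a row (a simplified stand-in for a real spherical rotation), the re-rotation step above reduces to applying the delta between the two rotations to the stored reference frame:

```python
def rotate(row, s):
    """Content-oriented rotation modelled as a circular shift (assumption)."""
    s %= len(row)
    return row[s:] + row[:s]

def re_rotate_reference(stored_reference, r1, r2):
    """Re-rotate a reference frame that was stored under the first rotation r1
    so it matches the second frame's rotation r2: apply the delta r2 - r1."""
    return rotate(stored_reference, r2 - r1)
```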
  • Publication number: 20180262775
    Abstract: A video processing method includes: receiving an omnidirectional content corresponding to a sphere, obtaining projection faces from the omnidirectional content, and creating a projection-based frame by generating at least one padding region and packing the projection faces and said at least one padding region in a 360 VR projection layout. The projection faces packed in the 360 VR projection layout include a first projection face and a second projection face, where there is an image content discontinuity edge between the first projection face and the second projection face if the first projection face connects with the second projection face. The at least one padding region packed in the 360 VR projection layout includes a first padding region, where the first padding region connects with the first projection face and the second projection face for isolating the first projection face from the second projection face in the 360 VR projection layout.
    Type: Application
    Filed: March 12, 2018
    Publication date: September 13, 2018
    Inventors: Ya-Hsuan Lee, Chia-Ying Li, Hung-Chih Lin, Jian-Liang Lin, Shen-Kai Chang
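The isolating padding region above might be sketched in one dimension as below; filling the pad by replicating each face's edge pixels is one illustrative choice, not necessarily the method claimed:

```python
def pack_with_padding(face_a, face_b, pad_width=2):
    """Pack two projection faces with a padding region between them, so the
    faces with discontinuous content no longer touch directly. The pad is
    filled by repeating the nearest edge pixel of each face (assumption)."""
    left = pad_width // 2
    pad = [face_a[-1]] * left + [face_b[0]] * (pad_width - left)
    return face_a + pad + face_b
```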
  • Patent number: 10057590
    Abstract: A hybrid video encoding method and system using a software engine and a hardware engine. The software engine receives coding unit data associated with a current picture, and performs a first part of the video encoding operation by executing instructions. The first part of the video encoding operation generates an inter predictor and control information corresponding to the coding unit data of the current picture. The first part of the video encoding operation stores the inter predictor into an off-chip memory. The hardware engine performs a second part of the video encoding operation according to the control information. The second part of the video encoding operation receives the inter predictor, and subtracts the inter predictor from the coding unit data to generate a residual signal.
    Type: Grant
    Filed: September 15, 2016
    Date of Patent: August 21, 2018
    Assignee: MEDIATEK INC.
    Inventors: Chao-Chih Huang, Ting-An Lin, Shen-Kai Chang, Han-Liang Chou
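The software/hardware split above can be mocked in a few lines; the motion-vector handling and the control-information fields are hypothetical placeholders:

```python
def software_stage(coding_block, reference, mv):
    """First part (software engine): motion-compensated inter prediction.
    The predictor would be written to off-chip memory in the real system."""
    predictor = [reference[i + mv] for i in range(len(coding_block))]
    control = {"mv": mv, "length": len(coding_block)}  # hypothetical fields
    return predictor, control

def hardware_stage(coding_block, predictor, control):
    """Second part (hardware engine): subtract the inter predictor from the
    coding-unit data to produce the residual signal."""
    assert control["length"] == len(coding_block)
    return [c - p for c, p in zip(coding_block, predictor)]
```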
  • Publication number: 20180192074
    Abstract: A video processing method includes receiving a projection-based frame, and encoding, by a video encoder, the projection-based frame to generate a part of a bitstream. The projection-based frame has a 360-degree content represented by projection faces packed in a 360-degree Virtual Reality (360 VR) projection layout, and there is at least one image content discontinuity boundary resulting from packing of the projection faces. The step of encoding the projection-based frame includes performing a prediction operation upon a current block in the projection-based frame, including: checking if the current block and a spatial neighbor are located at different projection faces and are on opposite sides of one image content discontinuity boundary; and when a checking result indicates that the current block and the spatial neighbor are located at different projection faces and are on opposite sides of one image content discontinuity boundary, treating the spatial neighbor as non-available.
    Type: Application
    Filed: January 3, 2018
    Publication date: July 5, 2018
    Inventors: Cheng-Hsuan Shih, Shen-Kai Chang, Jian-Liang Lin, Hung-Chih Lin
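A sketch of the availability check described above; the block-to-face map and the set of discontinuous face pairs are assumed inputs, not structures defined in the filing:

```python
def neighbor_available(face_of, discontinuous_pairs, cur_block, nbr_block):
    """Treat a spatial neighbour as non-available when it lies in a different
    projection face across an image-content discontinuity boundary."""
    fa, fb = face_of[cur_block], face_of[nbr_block]
    if fa == fb:
        return True
    # frozenset so the pair matches regardless of face order
    return frozenset((fa, fb)) not in discontinuous_pairs
```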
  • Publication number: 20180192024
    Abstract: A video processing method includes receiving an omnidirectional content corresponding to a sphere, generating a projection-based frame according to the omnidirectional content and a pyramid projection layout, and encoding, by a video encoder, the projection-based frame to generate a part of a bitstream. The projection-based frame has a 360-degree content represented by a base projection face and a plurality of lateral projection faces packed in the pyramid projection layout. The base projection face and the lateral projection faces are obtained according to at least projection relationship between a pyramid and the sphere.
    Type: Application
    Filed: January 3, 2018
    Publication date: July 5, 2018
    Inventors: Jian-Liang Lin, Peng Wang, Hung-Chih Lin, Shen-Kai Chang
  • Publication number: 20180165886
    Abstract: A video processing method includes: receiving an omnidirectional image/video content corresponding to a viewing sphere, generating a sequence of projection-based frames according to the omnidirectional image/video content and a viewport-based cube projection layout, and encoding the sequence of projection-based frames to generate a bitstream. Each projection-based frame has a 360-degree image/video content represented by rectangular projection faces packed in the viewport-based cube projection layout. The rectangular projection faces include a first rectangular projection face, a second rectangular projection face, a third rectangular projection face, a fourth rectangular projection face, a fifth rectangular projection face, and a sixth rectangular projection face split into partial rectangular projection faces.
    Type: Application
    Filed: December 12, 2017
    Publication date: June 14, 2018
    Inventors: Hung-Chih Lin, Chia-Ying Li, Le Shi, Ya-Hsuan Lee, Jian-Liang Lin, Shen-Kai Chang
  • Publication number: 20180158170
    Abstract: A video processing method includes: receiving an omnidirectional image/video content corresponding to a viewing sphere, generating a sequence of projection-based frames according to the omnidirectional image/video content and an octahedron projection layout, and encoding, by a video encoder, the sequence of projection-based frames to generate a bitstream. Each projection-based frame has a 360-degree image/video content represented by triangular projection faces packed in the octahedron projection layout. The omnidirectional image/video content of the viewing sphere is mapped onto the triangular projection faces via an octahedron projection of the viewing sphere. An equator of the viewing sphere is not mapped along any side of each of the triangular projection faces.
    Type: Application
    Filed: November 28, 2017
    Publication date: June 7, 2018
    Inventors: Hung-Chih Lin, Chao-Chih Huang, Chia-Ying Li, Hui Ou Yang, Jian-Liang Lin, Shen-Kai Chang
  • Publication number: 20180130175
    Abstract: A video processing method includes: receiving a current input frame having a 360-degree image/video content represented in a 360-degree Virtual Reality (360 VR) projection format, applying content-oriented rotation to the 360-degree image/video content in the current input frame to generate a content-rotated frame having a rotated 360-degree image/video content represented in the 360 VR projection format, encoding the content-rotated frame to generate a bitstream, and signaling at least one syntax element via the bitstream, wherein the at least one syntax element is set to indicate rotation information of the content-oriented rotation.
    Type: Application
    Filed: November 3, 2017
    Publication date: May 10, 2018
    Inventors: Hung-Chih Lin, Chao-Chih Huang, Chia-Ying Li, Jian-Liang Lin, Shen-Kai Chang
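Treating the content-oriented rotation as a yaw-only circular shift of an equirectangular row (an assumption for illustration), the signalling round trip described above looks like:

```python
def encode_with_rotation(row, yaw_shift):
    """Rotate the content, then carry the rotation information as a
    syntax element alongside the payload (bitstream modelled as a dict)."""
    rotated = row[yaw_shift:] + row[:yaw_shift]
    return {"syntax_yaw_shift": yaw_shift, "payload": rotated}

def decode_with_rotation(bitstream):
    """Read the signalled syntax element and undo the rotation."""
    row = bitstream["payload"]
    s = bitstream["syntax_yaw_shift"] % len(row)
    return row[-s:] + row[:-s] if s else row
```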
  • Publication number: 20180098090
    Abstract: Methods and apparatus for processing a 360°-VR frame sequence are disclosed. According to one method, input data associated with a 360°-VR frame sequence are received, where each 360°-VR frame comprises one set of faces associated with a polyhedron format. Each set of faces is rearranged into one rectangular whole VR frame consisting of a front sub-frame and a rear sub-frame, where the front sub-frame corresponds to first contents in a first field of view covering front 180°×180° view and the rear sub-frame corresponds to second contents in a second field of view covering rear 180°×180° view. Output data corresponding to a rearranged 360°-VR frame sequence consisting of a sequence of rectangular whole VR frames are provided.
    Type: Application
    Filed: October 2, 2017
    Publication date: April 5, 2018
    Inventors: Hung-Chih LIN, Jian-Liang LIN, Shen-Kai CHANG
  • Publication number: 20180054613
    Abstract: A video encoding method includes: generating reconstructed blocks for coding blocks within a frame, respectively, wherein the frame has a 360-degree image content represented by projection faces arranged in a 360-degree Virtual Reality (360 VR) projection layout, and there is at least one image content discontinuity edge resulting from packing of the projection faces in the frame; and configuring at least one in-loop filter, such that the at least one in-loop filter does not apply in-loop filtering to reconstructed blocks located at the least one image content discontinuity edge.
    Type: Application
    Filed: August 14, 2017
    Publication date: February 22, 2018
    Inventors: Jian-Liang Lin, Hung-Chih Lin, Shen-Kai Chang
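A toy one-dimensional stand-in for the selective in-loop filtering above, where smoothing is skipped at positions marked as image-content discontinuity edges (the averaging filter itself is an arbitrary placeholder):

```python
def in_loop_filter(samples, discontinuity_edges):
    """Smooth each sample with its left neighbour, except at positions that
    fall on a packed-face discontinuity edge, which are left unfiltered."""
    out = list(samples)
    for i in range(1, len(samples)):
        if i in discontinuity_edges:
            continue  # do not filter across the discontinuity
        out[i] = (samples[i - 1] + samples[i]) // 2
    return out
```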
  • Publication number: 20180041764
    Abstract: For omnidirectional video such as 360-degree Virtual Reality (360VR) video, a video system that supports independent decoding of different views of the omnidirectional video is provided. A decoder for such a system can extract a specified part of a bitstream to decode a desired perspective/face/view of an omnidirectional image without decoding the entire image while suffering minimal or no loss in coding efficiency.
    Type: Application
    Filed: August 7, 2017
    Publication date: February 8, 2018
    Inventors: Jian-Liang Lin, Hung-Chih Lin, Shen-Kai Chang
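The view-selective extraction above could be mocked as filtering tagged units out of a bitstream; the tagging scheme shown is hypothetical, not the signalling defined in the filing:

```python
def extract_face_substream(units, face_index):
    """Keep only the bitstream units tagged with the wanted face/view index,
    so that one view can be decoded without decoding the entire image."""
    return [u for u in units if u["face"] == face_index]
```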
  • Publication number: 20170374385
    Abstract: A method and apparatus of video encoding or decoding for a video encoding or decoding system applied to multi-face sequences corresponding to a 360-degree virtual reality sequence are disclosed. According to the present invention, one or more multi-face sequences representing the 360-degree virtual reality sequence are derived. If Inter prediction is selected for a current block in a current face, one virtual reference frame is derived for each face of said one or more multi-face sequences by assigning one target reference face to a center of said one virtual reference frame and connecting neighboring faces of said one target reference face to said one target reference face at boundaries of said one target reference face. Then, the current block in the current face is encoded or decoded using a current virtual reference frame derived for the current face to derive an Inter predictor for the current block.
    Type: Application
    Filed: June 22, 2017
    Publication date: December 28, 2017
    Inventors: Chao-Chih HUANG, Hung-Chih LIN, Jian-Liang LIN, Chia-Ying LI, Shen-Kai CHANG
  • Publication number: 20170374364
    Abstract: A method and apparatus of video encoding or decoding for a video encoding or decoding system applied to multi-face sequences corresponding to a 360-degree virtual reality sequence are disclosed. According to embodiments of the present invention, at least one face sequence of the multi-face sequences is encoded or decoded using face-independent coding, where the face-independent coding encodes or decodes a target face sequence using prediction reference data derived from previously coded data of the target face sequence only. Furthermore, one or more syntax elements can be signaled in a video bitstream at an encoder side or parsed from the video bitstream at a decoder side, where the syntax elements indicate first information associated with a total number of faces in the multi-face sequences, second information associated with a face index for each face-independent coded face sequence, or both the first information and the second information.
    Type: Application
    Filed: June 21, 2017
    Publication date: December 28, 2017
    Inventors: Jian-Liang LIN, Chao-Chih HUANG, Hung-Chih LIN, Chia-Ying LI, Shen-Kai CHANG
  • Publication number: 20170366808
    Abstract: Methods and apparatus of processing cube face images are disclosed. According to embodiments of the present invention, one or more discontinuous boundaries within each assembled cubic frame are determined and used for selective filtering, where the filtering process is skipped at said one or more discontinuous boundaries within each assembled cubic frame when the filtering process is enabled. Furthermore, the filtering process is applied to one or more continuous areas in each assembled cubic frame.
    Type: Application
    Filed: June 9, 2017
    Publication date: December 21, 2017
    Inventors: Hung-Chih LIN, Jian-Liang LIN, Chia-Ying LI, Chao-Chih HUANG, Shen-Kai CHANG
  • Publication number: 20170353737
    Abstract: A method and apparatus of video coding or processing for an image sequence corresponding to virtual reality (VR) video are disclosed. According to embodiments of the present invention, a padded area outside one cubic face frame boundary of one cubic face frame is padded to form a padded cubic face frame using one or more extended cubic faces, where at least one boundary cubic face in said one cubic face frame has one padded area using pixel data derived from one extended cubic face in a same cubic face frame.
    Type: Application
    Filed: June 6, 2017
    Publication date: December 7, 2017
    Inventors: Jian-Liang LIN, Hung-Chih LIN, Chia-Ying LI, Chao-Chih HUANG, Shen-Kai CHANG