Patents by Inventor Lulin Chen

Lulin Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200304834
    Abstract: The techniques described herein relate to methods, apparatus, and computer readable media configured to encode and/or decode video data. Immersive media data is accessed that comprises a hierarchical track structure comprising at least (a) a first track at a first level of the hierarchical track structure comprising first immersive media elementary data, wherein the first track is a parameter track, and the first immersive media elementary data comprises timed metadata, and (b) a second track at a second level in the hierarchical track structure that is different than the first level of the first track, the second track comprising metadata specifying an immersive media track derivation operation. The immersive media track derivation operation is performed on at least the first immersive media elementary data to generate composite immersive media data for the second track.
    Type: Application
    Filed: March 17, 2020
    Publication date: September 24, 2020
    Applicant: MEDIATEK Singapore Pte. Ltd.
    Inventors: Xin Wang, Lulin Chen
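
For publication 20200304834 above, the core idea is a derived track whose media is produced by running a derivation operation over an input parameter track of timed metadata. The Python sketch below is a rough illustration only; the Track/DerivedTrack classes, their field names, and the compose_immersive operation are assumptions for demonstration, not structures defined in the application.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Track:
    """A track in the hierarchical structure, holding elementary samples."""
    track_id: int
    level: int
    samples: List[dict] = field(default_factory=list)  # e.g. timed metadata samples

@dataclass
class DerivedTrack:
    """A track whose media is generated by a derivation over input tracks."""
    track_id: int
    level: int
    inputs: List[Track]
    derivation: Callable[[List[Track]], List[dict]]

    def derive(self) -> List[dict]:
        # Run the derivation operation over the input tracks' samples.
        return self.derivation(self.inputs)

def compose_immersive(inputs: List[Track]) -> List[dict]:
    """Toy derivation: merge time-aligned samples from all input tracks."""
    out = []
    for samples in zip(*(t.samples for t in inputs)):
        composite = {}
        for s in samples:
            composite.update(s)
        out.append(composite)
    return out

# Usage: a level-1 parameter track feeding a level-2 derived track.
param = Track(track_id=1, level=1, samples=[{"t": 0, "viewpoint": (0, 0, 0)},
                                            {"t": 1, "viewpoint": (1, 0, 0)}])
derived = DerivedTrack(track_id=2, level=2, inputs=[param], derivation=compose_immersive)
print(derived.derive())
```
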
  • Publication number: 20200296397
    Abstract: The techniques described herein relate to methods, apparatus, and computer readable media configured to encode and/or decode video data. Point cloud video data includes a plurality of tracks. First metadata of a first track of the plurality of tracks is accessed, the first metadata specifying a first source region of a plurality of source regions of the point cloud video data, wherein each source region corresponds to a different spatial portion of the point cloud video data. The first metadata specifies a sub-region of the first track in the first source region comprising data indicative of a spatial position of video data of the first track in the first source region. Point cloud media is generated, based on the first metadata, for the sub-region of the first source region using the video data of the first track.
    Type: Application
    Filed: March 11, 2020
    Publication date: September 17, 2020
    Applicant: MEDIATEK Singapore Pte. Ltd.
    Inventors: Xin Wang, Lulin Chen
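
For publication 20200296397 above, the sketch below models the described metadata as plain dataclasses: each track records which source region it belongs to and where its sub-region sits inside that region, and a helper selects the tracks for one source region. All field names are assumptions for illustration, not the filing's syntax.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SubRegion:
    """Spatial position of one track's video data inside its source region."""
    x: int
    y: int
    width: int
    height: int

@dataclass
class TrackRegionMetadata:
    track_id: int
    source_region_id: int   # which spatial portion of the point cloud this track belongs to
    sub_region: SubRegion   # where the track's data sits within that source region

def tracks_for_source_region(metadata: List[TrackRegionMetadata],
                             source_region_id: int) -> List[TrackRegionMetadata]:
    """Pick the tracks whose metadata places them in the requested source region."""
    return [m for m in metadata if m.source_region_id == source_region_id]

# Usage: two tracks in source region 0, one in source region 1.
meta = [
    TrackRegionMetadata(1, 0, SubRegion(0, 0, 640, 480)),
    TrackRegionMetadata(2, 0, SubRegion(640, 0, 640, 480)),
    TrackRegionMetadata(3, 1, SubRegion(0, 0, 1280, 720)),
]
print([m.track_id for m in tracks_for_source_region(meta, 0)])  # -> [1, 2]
```
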
  • Patent number: 10778993
    Abstract: The techniques described herein relate to methods, apparatus, and computer readable media configured to derive a composite track. Three-dimensional video data includes a plurality of two-dimensional sub-picture tracks associated with a viewport. A composite track derivation for composing the plurality of two-dimensional sub-picture tracks for the viewport includes data indicative of the plurality of two-dimensional sub-picture tracks belonging to a same group, placement information to compose sample images from the plurality of two-dimensional tracks into a canvas associated with the viewport, and a composition layout operation to adjust the composition if the canvas comprises a composition layout created by two or more of the plurality of two-dimensional sub-picture tracks composed on the canvas. The composite track derivation can be encoded and/or used to decode the three-dimensional video data.
    Type: Grant
    Filed: June 21, 2018
    Date of Patent: September 15, 2020
    Assignee: MediaTek Inc.
    Inventors: Xin Wang, Lulin Chen, Shuai Zhao
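
For patent 10778993 above, the following sketch illustrates the composition step in miniature: sub-picture samples are pasted onto a shared canvas according to per-track placement information. The data layout and the Placement fields are assumptions, and the composition-layout adjustment described in the abstract is omitted.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Placement:
    """Where a sub-picture sample is pasted on the viewport canvas (assumed fields)."""
    x: int
    y: int

def compose_canvas(canvas_w: int, canvas_h: int,
                   samples: List[List[List[int]]],
                   placements: List[Placement]) -> List[List[int]]:
    """Paste each sub-picture sample onto a shared canvas at its placement."""
    canvas = [[0] * canvas_w for _ in range(canvas_h)]
    for sample, place in zip(samples, placements):
        for row_idx, row in enumerate(sample):
            for col_idx, pixel in enumerate(row):
                canvas[place.y + row_idx][place.x + col_idx] = pixel
    return canvas

# Usage: two 2x2 sub-pictures tiled side by side on a 4x2 canvas.
left = [[1, 1], [1, 1]]
right = [[2, 2], [2, 2]]
canvas = compose_canvas(4, 2, [left, right], [Placement(0, 0), Placement(2, 0)])
print(canvas)  # -> [[1, 1, 2, 2], [1, 1, 2, 2]]
```
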
  • Patent number: 10742999
    Abstract: The techniques described herein relate to methods, apparatus, and computer readable media configured to encode or decode a region of interest associated with video data. A spherical region structure is associated with the video data that specifies the region of interest on a sphere, the spherical region structure including a reference point of the region of interest on the sphere, and data indicative of a set of side points, comprising a side point for each side of the region of interest on the sphere. The region of interest in the video data is determined based on the reference point and the set of side points. The video data can be composite video data. The spherical region structure, and/or metadata based on the spherical region structure, can be implicitly or explicitly associated with the video data.
    Type: Grant
    Filed: January 3, 2018
    Date of Patent: August 11, 2020
    Assignee: MediaTek Inc.
    Inventors: Xin Wang, Wang Lin Lai, Lulin Chen, Shan Liu
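
For patent 10742999 above, here is a minimal sketch of how a region on the sphere might be recovered from a reference (center) point plus one side point per side, assuming points are given as azimuth/elevation in degrees. Both the representation and the extent calculation are illustrative guesses, not the patented structure.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SpherePoint:
    """A point on the unit sphere in degrees (assumed representation)."""
    azimuth: float    # longitude, -180..180
    elevation: float  # latitude, -90..90

def region_extents(center: SpherePoint,
                   side_points: List[SpherePoint]) -> Tuple[float, float]:
    """Derive azimuth/elevation extents of the region from its center and side points.

    Assumes one side point per side; the extent in each direction is twice the
    largest angular offset of a side point from the center in that direction.
    """
    azimuth_extent = 2 * max(abs(p.azimuth - center.azimuth) for p in side_points)
    elevation_extent = 2 * max(abs(p.elevation - center.elevation) for p in side_points)
    return azimuth_extent, elevation_extent

# Usage: a region centered at (0, 0) with side points 30 deg left/right, 20 deg up/down.
center = SpherePoint(0.0, 0.0)
sides = [SpherePoint(-30, 0), SpherePoint(30, 0), SpherePoint(0, 20), SpherePoint(0, -20)]
print(region_extents(center, sides))  # -> (60, 40)
```
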
  • Publication number: 20200226792
    Abstract: The techniques described herein relate to methods, apparatus, and computer readable media configured to encode and/or decode video data. Point cloud video data is received that includes metadata specifying one or more regions of interest of the point cloud video data. A first region of interest is determined from the one or more regions of interest. A portion of the point cloud video data associated with the first region of interest is determined. Point cloud media is generated for viewing by a user based on the determined portion of the point cloud video data associated with the first region of interest.
    Type: Application
    Filed: January 9, 2020
    Publication date: July 16, 2020
    Applicant: MEDIATEK Singapore Pte. Ltd.
    Inventors: Xin Wang, Lulin Chen
  • Publication number: 20200219536
    Abstract: The techniques described herein relate to methods, apparatus, and computer readable media configured to access multimedia data comprising a hierarchical track structure with a first track of a first sequence of temporally-related media units at a first level, and a second track at a second level comprising metadata specifying a temporal track derivation operation. The metadata includes a set of one or more operations to perform on the first track, each operation including a unit duration of the first sequence and a start unit in the first sequence. The temporal track derivation operation is performed on a set of media units comprising at least the first sequence, and includes applying the set of one or more operations to temporally modify the first sequence to generate second media data for the second track that includes a second sequence of temporally-related media units from the set of media units.
    Type: Application
    Filed: January 8, 2020
    Publication date: July 9, 2020
    Applicant: MEDIATEK Singapore Pte. Ltd.
    Inventors: Xin Wang, Lulin Chen
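
For publication 20200219536 above, the sketch below applies a list of (start unit, unit duration) operations to an input unit sequence and concatenates the selected spans into the derived sequence. The TemporalOp fields mirror the abstract's wording, but their exact semantics here are assumed.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TemporalOp:
    """One temporal edit: take `unit_duration` units starting at `start_unit` (assumed meaning)."""
    start_unit: int
    unit_duration: int

def derive_temporal_track(units: List[str], ops: List[TemporalOp]) -> List[str]:
    """Apply each operation to the input unit sequence and concatenate the results."""
    derived: List[str] = []
    for op in ops:
        derived.extend(units[op.start_unit:op.start_unit + op.unit_duration])
    return derived

# Usage: keep units 0-1, then units 4-5, dropping the middle of the sequence.
source = ["u0", "u1", "u2", "u3", "u4", "u5"]
ops = [TemporalOp(start_unit=0, unit_duration=2), TemporalOp(start_unit=4, unit_duration=2)]
print(derive_temporal_track(source, ops))  # -> ['u0', 'u1', 'u4', 'u5']
```
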
  • Publication number: 20200213588
    Abstract: Methods and apparatuses for processing video data include receiving input data associated with a current video picture, dividing the current video picture into non-overlapping rectangular tiles, grouping the tiles in the current video picture into tile groups, and encoding or decoding video data in the tile groups within the current video picture. According to one embodiment, each tile group is composed of an integer number of tiles, and the shapes of all the tile groups are constrained to be rectangular. According to one embodiment, a flag is used to indicate whether one or more in-loop filtering operations are performed across tile group boundaries.
    Type: Application
    Filed: December 24, 2019
    Publication date: July 2, 2020
    Inventor: Lulin CHEN
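
For publication 20200213588 above, a small sketch of the rectangular tile-group constraint: a group is described by its corner tile coordinates, so it always covers an integer number of tiles arranged in a rectangle, and a flag records whether in-loop filtering may cross the group boundary. Field names are assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RectTileGroup:
    """A rectangular tile group given by its top-left and bottom-right tile coordinates."""
    top_row: int
    left_col: int
    bottom_row: int
    right_col: int
    loop_filter_across_boundary: bool = False  # flag from the abstract; name assumed

def tiles_in_group(group: RectTileGroup, tile_cols: int) -> List[int]:
    """List raster-scan tile indices covered by a rectangular tile group."""
    indices = []
    for row in range(group.top_row, group.bottom_row + 1):
        for col in range(group.left_col, group.right_col + 1):
            indices.append(row * tile_cols + col)
    return indices

# Usage: a 2x3 tile grid; one group covers the left 2x2 block of tiles.
group = RectTileGroup(top_row=0, left_col=0, bottom_row=1, right_col=1)
print(tiles_in_group(group, tile_cols=3))  # -> [0, 1, 3, 4]
```
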
  • Publication number: 20200177923
    Abstract: A method and apparatus of video coding are disclosed. On the encoding side, video data are received, where a GDR (Gradual Decoding Refresh) picture type is supported by the encoding device. A syntax structure including a first syntax in a NAL (Network Abstraction Layer) unit header is generated, where a NAL unit type indicated by the first syntax comprises the GDR picture type. Encoded video data including the syntax structure is generated from the video data. A corresponding method and apparatus for the decoding side are also disclosed.
    Type: Application
    Filed: November 29, 2019
    Publication date: June 4, 2020
    Inventors: Lulin CHEN, Chih-Wei HSU, Yu-Wen HUANG
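
For publication 20200177923 above, the sketch below packs and reads a two-byte NAL unit header carrying a GDR picture type. The bit layout and the GDR_NUT value are assumptions taken from the published VVC specification, not from the filing, and are shown purely for illustration.

```python
GDR_NUT = 10  # GDR picture NAL unit type; value from the published VVC spec, used here as an assumption

def build_nal_unit_header(nal_unit_type: int, layer_id: int = 0, temporal_id_plus1: int = 1) -> bytes:
    """Pack a two-byte VVC-style NAL unit header (layout assumed from the VVC spec)."""
    byte0 = (0 << 7) | (0 << 6) | (layer_id & 0x3F)                  # forbidden + reserved bits, nuh_layer_id
    byte1 = ((nal_unit_type & 0x1F) << 3) | (temporal_id_plus1 & 0x7)
    return bytes([byte0, byte1])

def nal_unit_type_of(header: bytes) -> int:
    """Read nal_unit_type back out of the two-byte header."""
    return (header[1] >> 3) & 0x1F

# Usage: build a header signalling a GDR picture and read the type back.
header = build_nal_unit_header(GDR_NUT)
print(nal_unit_type_of(header) == GDR_NUT)  # -> True
```
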
  • Publication number: 20200169754
    Abstract: A video processing method includes receiving at least one virtual reality (VR) content, obtaining at least one picture from the at least one VR content, encoding the at least one picture to generate a part of a coded bitstream, and encapsulating, by a file encapsulation circuit, the part of the coded bitstream into at least one ISO Base Media File Format (ISOBMFF) file. The at least one ISOBMFF file includes a first track parameterized with a first set of translational coordinates, wherein the first set of translational coordinates identifies an origin of a first omnidirectional media content.
    Type: Application
    Filed: July 13, 2018
    Publication date: May 28, 2020
    Inventors: Xin Wang, Lulin Chen, Shuai Zhao
  • Patent number: 10623635
    Abstract: A method that specifies, signals and uses coding-independent code points (CICP) in processing media contents from multiple media sources is provided. An apparatus implementing the method receives media contents captured by a plurality of media sources in one or more clusters. The apparatus processes the media contents to provide a plurality of coding-independent code points for the plurality of media sources. The apparatus also encodes the media contents to provide at least one elementary stream.
    Type: Grant
    Filed: September 20, 2017
    Date of Patent: April 14, 2020
    Assignee: MEDIATEK INC.
    Inventors: Xin Wang, Lulin Chen, Wang Lin Lai, Shan Liu
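
For patent 10623635 above, the sketch below attaches a per-source record of coding-independent code points (colour primaries, transfer characteristics, matrix coefficients) to each captured media source. The numeric values follow the commonly published H.273 tables and the structure is only an illustration, not the patented signalling.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class CicpCodePoints:
    """Coding-independent code points for one media source (H.273-style fields)."""
    colour_primaries: int
    transfer_characteristics: int
    matrix_coefficients: int

def assign_cicp(sources: List[str], table: Dict[str, CicpCodePoints]) -> Dict[str, CicpCodePoints]:
    """Attach a CICP record to every captured source before encoding."""
    return {name: table[name] for name in sources}

# Usage: one HD camera signalled as BT.709, one HDR camera as BT.2020/PQ.
# The numeric values come from commonly published tables and are illustrative only.
table = {
    "camera_hd": CicpCodePoints(colour_primaries=1, transfer_characteristics=1, matrix_coefficients=1),
    "camera_hdr": CicpCodePoints(colour_primaries=9, transfer_characteristics=16, matrix_coefficients=9),
}
print(assign_cicp(["camera_hd", "camera_hdr"], table))
```
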
  • Publication number: 20200111510
    Abstract: The techniques described herein relate to methods, apparatus, and computer readable media configured to access multimedia data that has a hierarchical track structure that includes at least a first track at a first level of the hierarchical track structure comprising first media data, wherein the first media data comprises a first sequence of temporally-related media units, and a second track at a second level in the hierarchical track structure that is different than the first level of the first track, the second track comprising metadata specifying a temporal track derivation operation. The temporal track derivation operation is performed on a set of media units comprising at least the first sequence of temporally-related media units to temporally modify the set of media units to generate second media data for the second track, wherein the second media data comprises a second sequence of temporally-related media units from the set of media units.
    Type: Application
    Filed: October 1, 2019
    Publication date: April 9, 2020
    Applicant: MEDIATEK Singapore Pte. Ltd.
    Inventors: Xin Wang, Lulin Chen
  • Publication number: 20200105063
    Abstract: A video processing method includes receiving a virtual reality (VR) content, obtaining a picture from the VR content, encoding the picture to generate a part of a coded bitstream, and encapsulating the part of the coded bitstream into ISO Base Media File Format (ISOBMFF) file(s). In one exemplary implementation, the ISOBMFF file(s) may include a transform property item that is set to enable at least one of a projection transformation, a packing transformation, a VR viewport selection, and a VR region of interest (ROI) selection in track derivation. In another exemplary implementation, the ISOBMFF file(s) may include a first parameter, a second parameter, and a third parameter associated with orientation of a viewport, with the first, second and third parameters indicating a yaw angle, a pitch angle and a roll angle of a center of the viewport, respectively. Further, an associated video processing apparatus is provided.
    Type: Application
    Filed: March 23, 2018
    Publication date: April 2, 2020
    Inventors: Xin Wang, Lulin Chen, Wang Lin Lai
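
For publication 20200105063 above, a worked example of the viewport-orientation parameters: the sketch converts a yaw/pitch pair into the unit vector at the viewport center, under the usual convention that roll spins the viewport about that vector without moving its center. The axis convention is an assumption.

```python
import math
from typing import Tuple

def viewport_center_direction(yaw_deg: float, pitch_deg: float) -> Tuple[float, float, float]:
    """Unit vector pointing at the viewport center for a given yaw/pitch (degrees).

    Assumes yaw rotates about the vertical axis and pitch tilts up/down; roll does
    not move the viewport center and is therefore not needed here.
    """
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    x = math.cos(pitch) * math.cos(yaw)
    y = math.cos(pitch) * math.sin(yaw)
    z = math.sin(pitch)
    return (x, y, z)

# Usage: yaw = 90 degrees, pitch = 0 looks along the +y axis.
print(viewport_center_direction(90.0, 0.0))  # ~ (0.0, 1.0, 0.0)
```
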
  • Patent number: 10602239
    Abstract: Aspects of the disclosure provide an apparatus that includes interface circuitry and processing circuitry. The interface circuitry is configured to receive signals carrying metadata for visual track composition from multiple visual tracks. The visual track composition includes alpha compositing, and can include spatial compositing and background compositing. The processing circuitry is configured to parse the metadata to extract configuration information for the visual track composition. Further, the processing circuitry receives a first sample from a first visual track and a second sample from a second visual track, and combines the first sample with the second sample to generate a composite sample based on the configuration information for the visual track composition.
    Type: Grant
    Filed: March 22, 2018
    Date of Patent: March 24, 2020
    Assignee: MEDIATEK INC.
    Inventors: Lulin Chen, Xin Wang, Shuai Zhao, Wang Lin Lai
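
For patent 10602239 above, the sketch below shows the standard per-pixel "over" blend that alpha compositing reduces to for a constant alpha, applied to one sample from each of two tracks. It is a minimal illustration, not the patented configuration or metadata syntax.

```python
from typing import List

def alpha_composite(foreground: List[List[float]],
                    background: List[List[float]],
                    alpha: float) -> List[List[float]]:
    """Blend a foreground sample over a background sample with a constant alpha."""
    return [
        [alpha * f + (1.0 - alpha) * b for f, b in zip(f_row, b_row)]
        for f_row, b_row in zip(foreground, background)
    ]

# Usage: blend two 2x2 single-channel samples at 25% foreground opacity.
fg = [[1.0, 1.0], [1.0, 1.0]]
bg = [[0.0, 0.0], [0.0, 0.0]]
print(alpha_composite(fg, bg, alpha=0.25))  # -> [[0.25, 0.25], [0.25, 0.25]]
```
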
  • Publication number: 20200092530
    Abstract: The techniques described herein relate to methods, apparatus, and computer readable media configured to perform media processing. A media processing entity includes at least one processor in communication with a memory, wherein the memory stores computer-readable instructions that, when executed by the at least one processor, cause the at least one processor to perform receiving, from a remote computing device, multi-view multimedia data comprising a hierarchical track structure comprising at least a first track comprising first media data at a first level of the hierarchical track structure, and metadata associated with a second track at a second level in the hierarchical track structure that is different than the first level of the first track. The instructions further cause the processor to perform processing the first media data of the first track based on the metadata associated with the second track to generate second media data for the second track.
    Type: Application
    Filed: September 11, 2019
    Publication date: March 19, 2020
    Applicant: MEDIATEK Singapore Pte. Ltd.
    Inventors: Xin Wang, Lulin Chen
  • Publication number: 20200053282
    Abstract: A video processing method includes receiving a virtual reality (VR) content, encoding visual data obtained from the VR content to generate a part of a coded bitstream, and encapsulating the part of the coded bitstream into ISO Base Media File Format (ISOBMFF) file(s). In one exemplary implementation, the ISOBMFF file(s) may include a timed metadata track associated with a sphere visual track, where the timed metadata track is set to signal that the associated sphere visual track contains at least one spherical region contributed from at least one region visual track. In another exemplary implementation, the ISOBMFF file(s) may include a timed metadata track associated with a region visual track, where the timed metadata track is set to signal that the associated region visual track contributes to at least one spherical region carried in at least one sphere visual track. Further, an associated video processing apparatus is provided.
    Type: Application
    Filed: March 29, 2018
    Publication date: February 13, 2020
    Inventors: Xin Wang, Lulin Chen, Wang Lin Lai
  • Patent number: 10542297
    Abstract: The techniques described herein relate to methods, apparatus, and computer readable media configured to perform an asset change for video data. First video data comprises a sequence of data units separated by data unit boundaries, and a sequence of error correction data units, wherein each error correction data unit is associated with one or more data units from the sequence of data units. Based on the received first video data, it can be determined whether an error correction data unit from the sequence of error correction data units associated with an asset change point of the first video data crosses one or more data unit boundaries of the first video data. Based on the determination, an asset change operation for the first video data and second video data is performed, wherein the second video data is different than the first video data.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: January 21, 2020
    Assignee: MediaTek Inc.
    Inventors: Lulin Chen, Shan Liu
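
For patent 10542297 above, the sketch models data units and an error-correction unit as byte ranges and checks whether the error-correction unit associated with an asset change point spans more than one data unit, which is the determination the abstract describes. The byte-range representation is an assumption.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ByteRange:
    """A half-open byte range [start, end) within the stream."""
    start: int
    end: int

def crosses_boundary(fec_unit: ByteRange, data_units: List[ByteRange]) -> bool:
    """True if the error-correction unit overlaps more than one data unit."""
    overlapped = [
        du for du in data_units
        if fec_unit.start < du.end and du.start < fec_unit.end
    ]
    return len(overlapped) > 1

# Usage: data units of 100 bytes each; an FEC unit covering bytes 150-250
# protects parts of two data units, so it crosses a data unit boundary.
units = [ByteRange(0, 100), ByteRange(100, 200), ByteRange(200, 300)]
print(crosses_boundary(ByteRange(150, 250), units))  # -> True
print(crosses_boundary(ByteRange(100, 200), units))  # -> False
```
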
  • Publication number: 20200014906
    Abstract: The techniques described herein relate to methods, apparatus, and computer readable media configured to decode video data. Video data includes video content, overlay content, and overlay metadata that is specified separately from the video content and the overlay content. The overlay content is determined to be associated with the video content based on the overlay metadata. The overlay content is overlaid onto the video content in a region of the video content.
    Type: Application
    Filed: July 3, 2019
    Publication date: January 9, 2020
    Applicant: MEDIATEK Singapore Pte. Ltd.
    Inventors: Xin Wang, Lulin Chen
  • Publication number: 20190394537
    Abstract: The techniques described herein relate to methods, apparatus, and computer readable media configured to adaptively process video data. Media presentation data is associated with video data encoded in a plurality of formats, each format associated with a different bit rate. The media presentation data is generated to include first data indicative of whether the media presentation data is for static video data or dynamic video data, and second data indicative of at least one location reference to a location of a content update to the video data. A client determines, from the media presentation data, the first data and the second data, and downloads a media segment of the video data based on the first data and the second data.
    Type: Application
    Filed: May 13, 2019
    Publication date: December 26, 2019
    Applicant: MEDIATEK Singapore Pte. Ltd.
    Inventors: Lulin Chen, Xin Wang
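
For publication 20190394537 above, a minimal client-side sketch: the presentation data carries a static/dynamic indication and a location for content updates, and the client would refresh a dynamic presentation from that location before picking the next segment. The field names and the example URL are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PresentationData:
    """Minimal stand-in for the media presentation data (field names assumed)."""
    is_dynamic: bool                  # "first data": static vs dynamic presentation
    update_location: Optional[str]    # "second data": where content updates are published
    segments: List[str]               # per-bit-rate segment lists omitted for brevity

def next_segment(presentation: PresentationData, last_index: int) -> str:
    """Choose the next segment; a dynamic presentation would be refreshed first."""
    if presentation.is_dynamic and presentation.update_location:
        # A real client would re-fetch the presentation data from
        # update_location here; the network round trip is omitted.
        pass
    return presentation.segments[last_index + 1]

# Usage with a hypothetical update location.
pd = PresentationData(True, "https://example.com/presentation.xml",
                      ["seg0.m4s", "seg1.m4s", "seg2.m4s"])
print(next_segment(pd, last_index=0))  # -> 'seg1.m4s'
```
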
  • Publication number: 20190364259
    Abstract: A media content delivery apparatus that encodes media content as multiple spatial objects is provided. The media content delivery apparatus encodes a first spatial object according to a first set of parameters. The media content delivery apparatus also encodes a second spatial object according to a second set of parameters. The first and second spatial objects are encoded independently. The media content delivery apparatus also generates a metadata based on the first set of parameters, the second set of parameters, and a relationship between the first and second spatial objects. The media content delivery apparatus then transmits or stores the encoded first spatial object, the encoded second spatial object, and the generated metadata.
    Type: Application
    Filed: September 1, 2017
    Publication date: November 28, 2019
    Inventors: Lulin CHEN, Shan LIU, Xin WANG, Wang-Lin LAI
  • Publication number: 20190320190
    Abstract: The techniques described herein relate to methods, apparatus, and computer readable media configured to specify two-dimensional spatial relationship information. Video data includes a track group type for a group of two-dimensional tracks. The track group type is a two-dimensional spatial relationship track group type, wherein a spatial relationship of the group of tracks is specified based on a two-dimensional Cartesian coordinate system. Two-dimensional spatial relationship description data for the group of tracks, can specify a two-dimensional region based on the two-dimensional Cartesian coordinate system, and a relation of each two-dimensional track in the group of two-dimensional tracks to the two-dimensional region. Source data for the two-dimensional region can be generated by composing each two-dimensional track from the group of tracks based on the associated relation of the two-dimensional track to the two-dimensional region.
    Type: Application
    Filed: April 10, 2019
    Publication date: October 17, 2019
    Applicant: MEDIATEK Singapore Pte. Ltd.
    Inventors: Xin Wang, Lulin Chen
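
For publication 20190320190 above, the sketch below groups 2D tracks by a shared track-group identifier and computes the overall region they cover from each track's position and size in the common Cartesian coordinate system. The SpatialTrack fields are assumptions, not the track-group syntax in the filing.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class SpatialTrack:
    """A 2D track and its position inside a spatial-relationship track group (assumed fields)."""
    track_id: int
    track_group_id: int
    x: int
    y: int
    width: int
    height: int

def group_regions(tracks: List[SpatialTrack]) -> Dict[int, Tuple[int, int]]:
    """Compute the overall 2D region (width, height) covered by each track group."""
    regions: Dict[int, Tuple[int, int]] = {}
    for t in tracks:
        w, h = regions.get(t.track_group_id, (0, 0))
        regions[t.track_group_id] = (max(w, t.x + t.width), max(h, t.y + t.height))
    return regions

# Usage: two tracks tiling a 3840x1080 region side by side in group 1.
tracks = [
    SpatialTrack(1, 1, 0, 0, 1920, 1080),
    SpatialTrack(2, 1, 1920, 0, 1920, 1080),
]
print(group_regions(tracks))  # -> {1: (3840, 1080)}
```
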