Patents by Inventor Yue Yu
Yue Yu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250240413
Abstract: In some embodiments, a video decoder decodes frames of a video from a video bitstream. The decoder further performs inter prediction to decode a current frame of the video by using the decoded frames as reference frames. Performing the inter prediction includes performing reference picture resampling by upsampling a reference frame for the current frame using one or more filters selected from a set of 32 6-tap interpolation filters. This set of interpolation filters is also used for interpolating chroma components for motion compensation. The decoded frames and the decoded current frame are output for display.
Type: Application
Filed: April 21, 2023
Publication date: July 24, 2025
Inventors: Jonathan GAN, Yue YU, Haoping YU
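A minimal sketch of the kind of fractional-sample upsampling the abstract describes, where a 6-tap interpolation filter drawn from a phase table is applied to reference samples. The tap values, the 64-unit normalization, and the border clamping are illustrative assumptions, not the codec's actual filter set.

```python
# Illustrative 6-tap fractional-position interpolation (reference picture
# resampling sketch). Filter coefficients and normalization are assumptions.

def interpolate_sample(ref_row, x_int, filt):
    """Apply a 6-tap filter around integer position x_int (taps x-2 .. x+3)."""
    acc = 0
    for k, c in zip(range(-2, 4), filt):
        idx = min(max(x_int + k, 0), len(ref_row) - 1)  # clamp at picture borders
        acc += c * ref_row[idx]
    return (acc + 32) >> 6  # round and normalize (tap sum assumed to be 64)

# Hypothetical half-sample-phase filter (one of the 32 phases); taps sum to 64.
half_pel_filter = [2, -9, 39, 39, -9, 2]
row = [100, 102, 104, 108, 112, 116, 120, 124]
print(interpolate_sample(row, 3, half_pel_filter))  # -> 110
```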
-
Publication number: 20250227302
Abstract: A video encoding method includes: dividing, based on a region size, a plurality of quantization levels comprising a last non-zero quantization level within a block of a video into a plurality of regions in a level coding scan order, to select a plurality of defined regions among the plurality of regions based on the last non-zero quantization level; generating, based on the quantization levels within the plurality of defined regions in a predefined scan order, a plurality of syntax structures associated with the plurality of defined regions; and encoding the plurality of syntax structures associated with the plurality of defined regions into a bitstream.
Type: Application
Filed: June 2, 2023
Publication date: July 10, 2025
Inventors: Yue YU, Haoping YU
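A sketch, under assumptions, of the region selection step: quantization levels in level-coding scan order are split into regions of a fixed size, and only the regions up to and including the last non-zero level are kept for syntax generation. The region size and level values are illustrative.

```python
# Partition levels into fixed-size regions and keep the "defined" regions,
# i.e. those covering positions up to the last non-zero quantization level.

def select_regions(levels, region_size):
    last_nz = max(i for i, v in enumerate(levels) if v != 0)
    regions = [levels[i:i + region_size]
               for i in range(0, len(levels), region_size)]
    num_defined = last_nz // region_size + 1
    return regions[:num_defined]

levels = [5, 0, -2, 0, 0, 1, 0, 0, 0, 0, 0, 0]
print(select_regions(levels, 4))  # -> [[5, 0, -2, 0], [0, 1, 0, 0]]
```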
-
Publication number: 20250225680
Abstract: According to one aspect of the present disclosure, a method for decoding a point cloud that is represented in a one-dimensional (1D) array that includes a set of points is provided. The method may include identifying, by at least one processor, a maximum number of transform coefficients used to predict an attribute value of a point in the set of points. The method may include decoding, by the at least one processor, a bitstream to identify the maximum number of transform coefficients based on a logarithmic format minus a fixed integer.
Type: Application
Filed: June 21, 2023
Publication date: July 10, 2025
Inventors: Yue YU, Haoping YU
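One plausible reading of the "logarithmic format minus a fixed integer" coding, sketched with a hypothetical offset of 1; the actual syntax name and constant are defined by the method itself.

```python
# Recover a maximum coefficient count from a log2-style coded value.
# FIXED_OFFSET and the exact mapping are assumptions for illustration.

FIXED_OFFSET = 1  # hypothetical constant subtracted when the value was encoded

def decode_max_num_coeffs(log2_minus_offset):
    return 1 << (log2_minus_offset + FIXED_OFFSET)

print(decode_max_num_coeffs(3))  # coded value 3 -> 2**(3 + 1) = 16
```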
-
Publication number: 20250225679
Abstract: According to one aspect of the present disclosure, a method for decoding a point cloud that is represented in a one-dimensional (1D) array that includes a set of points is provided. The method may include parsing, by at least one processor, a bitstream to obtain a first syntax element indicative of an enablement of multiple attribute parameter sets for the point cloud. The method may include determining, by the at least one processor, whether the first syntax element indicates that multiple attribute parameter sets are enabled for the point cloud. In response to determining that the multiple attribute parameter sets are enabled for the point cloud, the method may include decompressing, by the at least one processor, the point cloud based on the multiple attribute parameter sets.
Type: Application
Filed: June 22, 2023
Publication date: July 10, 2025
Inventors: Yue YU, Haoping YU
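A minimal sketch of the flag-gated parsing the abstract describes: if the first syntax element signals that multiple attribute parameter sets are enabled, a count of parameter sets is read. The 4-bit count field and syntax names are assumptions for illustration.

```python
# Parse the multiple-attribute-parameter-set gate from a stream of bits.

def parse_attribute_parameter_sets(bits):
    multiple_aps_enabled = next(bits)   # first syntax element (1-bit flag)
    if not multiple_aps_enabled:
        return 1                        # a single implicit parameter set
    count = 0
    for _ in range(4):                  # assume a 4-bit count field follows
        count = (count << 1) | next(bits)
    return count

print(parse_attribute_parameter_sets(iter([1, 0, 0, 1, 1])))  # -> 3
print(parse_attribute_parameter_sets(iter([0])))              # -> 1
```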
-
Patent number: 12355966
Abstract: A system and method for regenerating high dynamic range (HDR) video data from encoded video data extracts, from the encoded video data, a self-referential metadata structure specifying a video data reshaping transfer function. The video data reshaping transfer function is regenerated using data from the metadata structure, and the extracted reshaping transfer function is used to generate the HDR video data by applying decoded video data values to the reshaping transfer function.
Type: Grant
Filed: May 21, 2024
Date of Patent: July 8, 2025
Assignee: ARRIS Enterprises LLC
Inventors: David M. Baylon, Zhouye Gu, Ajay Luthra, Koohyar Minoo, Yue Yu
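A hedged sketch of applying a regenerated reshaping transfer function to decoded sample values. The piecewise-linear form, pivot points, and 10-bit sample range are assumptions; the patented metadata structure defines its own representation.

```python
# Build a look-up table from pivot points describing a reshaping transfer
# function, then map decoded samples to HDR values through it.

def build_reshaping_lut(pivots):
    """pivots: list of (input, output) pairs; assumes they span 0..1023."""
    lut = []
    for x in range(1024):  # assume 10-bit decoded samples
        for (x0, y0), (x1, y1) in zip(pivots, pivots[1:]):
            if x0 <= x <= x1:
                lut.append(round(y0 + (y1 - y0) * (x - x0) / (x1 - x0)))
                break
    return lut

pivots = [(0, 0), (256, 128), (768, 896), (1023, 1023)]  # illustrative values
lut = build_reshaping_lut(pivots)
decoded_samples = [10, 300, 900]
print([lut[v] for v in decoded_samples])
```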
-
Publication number: 20250220214
Abstract: A method of decoding a video, a method of encoding a video, and a non-transitory computer-readable storage medium storing a bitstream are provided. In the decoding method, an additional bit count M indicating a quantity of additional general constraints information (GCI) bits included in a bitstream of the video is decoded from the bitstream of the video. Here, an expected value of the additional bit count M is 0, 6, or greater than 6. In response to determining that a value of the additional bit count M is equal to 6, six flag bits representing six respective flags indicating six respective additional coding tools to be constrained for the video are decoded from the bitstream of the video. A remaining portion of the bitstream of the video is decoded into images based, at least in part, upon constraints for the six additional coding tools indicated by the six flags.
Type: Application
Filed: March 21, 2025
Publication date: July 3, 2025
Inventors: Jonathan GAN, Yue YU, Haoping YU
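An illustrative sketch of the parsing flow: read the additional bit count M, and when M equals 6, read six flag bits that constrain six additional coding tools. The bit-reader details and the width of the count field are assumptions.

```python
# Read the additional GCI bit count M and, if M == 6, the six constraint flags.

def parse_additional_gci(bits):
    """bits: iterator over 0/1 values from the bitstream."""
    m = 0
    for _ in range(8):                 # assume M is coded in 8 bits
        m = (m << 1) | next(bits)
    flags = []
    if m == 6:
        flags = [next(bits) for _ in range(6)]  # one flag per additional tool
    return m, flags

stream = iter([0, 0, 0, 0, 0, 1, 1, 0,   # M = 6
               1, 0, 1, 1, 0, 0])        # six constraint flags
print(parse_additional_gci(stream))      # -> (6, [1, 0, 1, 1, 0, 0])
```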
-
Patent number: 12348741
Abstract: A method, apparatus, article of manufacture, and memory structure for signaling extension functions used in decoding a sequence comprising a plurality of pictures, each picture processed at least in part according to a picture parameter set, are disclosed. In one embodiment, the method comprises reading a first extension flag signaling a first extension function in the processing of the sequence and determining whether the first extension flag has a first value. Further, the method reads a second extension flag signaling a second extension function in the processing of the sequence and performs the second extension function according to the read second extension flag only if the first extension flag has the first value.
Type: Grant
Filed: July 16, 2024
Date of Patent: July 1, 2025
Assignee: ARRIS Enterprises LLC
Inventors: Yue Yu, Limin Wang
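A minimal sketch, under assumptions, of the gated signaling: the second extension flag is read and acted on only when the first extension flag has the first value (assumed here to be 1).

```python
# Conditional parsing of a second extension flag gated by the first one.

def parse_extension_flags(bits):
    first_ext_flag = next(bits)
    second_ext_flag = None
    if first_ext_flag == 1:            # the "first value" is assumed to be 1
        second_ext_flag = next(bits)
        if second_ext_flag:
            pass                       # perform the second extension function here
    return first_ext_flag, second_ext_flag

print(parse_extension_flags(iter([1, 1])))  # -> (1, 1)
print(parse_extension_flags(iter([0])))     # -> (0, None)
```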
-
Publication number: 20250202724
Abstract: Example data processing methods and apparatus are described. In one example method, a system includes a plurality of data management apparatuses, and each data management apparatus corresponds to one blockchain node in a blockchain network. The method includes receiving, by a target data management apparatus in the plurality of data management apparatuses, a transaction request from a blockchain client, where the transaction request includes identifiers of transaction participants. The target data management apparatus performs input/output (I/O), in the blockchain network based on the transaction request, on a transaction information ciphertext. The transaction information ciphertext is obtained by encrypting a transaction information plaintext by using a key that is invisible to a participant other than the transaction participants. The target data management apparatus then returns a transaction result to the blockchain client.
Type: Application
Filed: February 28, 2025
Publication date: June 19, 2025
Inventors: Ziyi ZHANG, Qiang QU, Mingxiao DU, Yue YU
-
Publication number: 20250196523
Abstract: A thermal transfer sheet includes a substrate and a transfer layer, in which the transfer layer after transfer has a reduced peak height (Spk) of 0.6 µm or more. A method for producing a printed material uses a thermal transfer sheet including a particle layer disposed on a substrate and an image-receiving sheet including a thermal protrusion-and/or-recess forming layer and a receiving layer stacked in that order on a second substrate, the receiving layer including an image that has been formed. The method includes the steps of heating the image-receiving sheet to form a protrusion and/or a recess at the image-receiving sheet, and heating the thermal transfer sheet to transfer the particle layer to at least part of the protrusion of the image-receiving sheet.
Type: Application
Filed: March 5, 2025
Publication date: June 19, 2025
Applicant: Dai Nippon Printing Co., Ltd.
Inventors: Yue YU, Hiroshi EGUCHI, Masayuki TANI, Yasushi YONEYAMA
-
Patent number: 12335464
Abstract: A method of decoding JVET video includes receiving a bitstream that includes encoded video data. From the encoded data, a horizontal predictor and a vertical predictor for a pixel in the current coding block may be interpolated. A coding block size may be identified to determine whether to use equal or unequal weights to apply to each of the horizontal and vertical predictors for calculating a final planar prediction value P(x,y) by comparing the coding block size to a coding block size threshold.
Type: Grant
Filed: January 30, 2024
Date of Patent: June 17, 2025
Assignee: ARRIS Enterprises LLC
Inventors: Krit Panusopone, Yue Yu, Seungwook Hong, Limin Wang
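A hedged sketch of the weighting decision: equal weights are applied to the horizontal and vertical predictors for blocks at or below a size threshold, unequal weights otherwise. The threshold and the unequal weights are illustrative, not the values used by the method.

```python
# Combine horizontal and vertical planar predictors, choosing equal or
# unequal weights based on a coding block size threshold.

SIZE_THRESHOLD = 16          # hypothetical coding-block size threshold
UNEQUAL_WEIGHTS = (3, 1)     # hypothetical (horizontal, vertical) weights

def planar_prediction(horizontal_pred, vertical_pred, block_size):
    if block_size <= SIZE_THRESHOLD:
        return (horizontal_pred + vertical_pred + 1) >> 1          # equal weights
    wh, wv = UNEQUAL_WEIGHTS
    total = wh + wv
    return (wh * horizontal_pred + wv * vertical_pred + total // 2) // total

print(planar_prediction(120, 100, 8))   # small block, equal weights   -> 110
print(planar_prediction(120, 100, 32))  # large block, unequal weights -> 115
```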
-
Publication number: 20250193386
Abstract: In some embodiments, a video encoder encodes a video into a video bitstream. The video encoder accesses a set of frames of the video and performs inter prediction for the set of frames using a set of integerized interpolation filters to generate prediction residuals to be encoded into the video bitstream. The set of integerized interpolation filters is generated by integerizing a set of interpolation filters, each of the set of interpolation filters having floating-point filter coefficients. For each interpolation filter, two integerized filter coefficient values are generated for each filter coefficient, and a set of filter candidates is generated based on the two integerized values for each filter coefficient. An error metric for each filter candidate is calculated, and an integerized interpolation filter having the lowest error metric is selected for the interpolation filter from the set of filter candidates.
Type: Application
Filed: March 2, 2023
Publication date: June 12, 2025
Inventors: Jonathan GAN, Yue YU, Haoping YU
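A sketch of the integerization procedure described in the abstract: for each floating-point coefficient, take the two nearest integers after scaling, enumerate candidate filters, and keep the candidate with the lowest error metric. The scale factor of 64, the DC-gain constraint, and the squared-error metric are assumptions.

```python
# Integerize a floating-point interpolation filter by enumerating the
# floor/ceil candidates per tap and selecting the lowest-error candidate.

import itertools
import math

def integerize_filter(float_coeffs, scale=64):
    per_tap = [(math.floor(c * scale), math.ceil(c * scale)) for c in float_coeffs]
    best, best_err = None, float("inf")
    for cand in itertools.product(*per_tap):
        if sum(cand) != scale:                 # keep DC gain equal to the scale
            continue
        err = sum((c * scale - q) ** 2 for c, q in zip(float_coeffs, cand))
        if err < best_err:
            best, best_err = cand, err
    return best

print(integerize_filter([0.03, -0.14, 0.61, 0.61, -0.14, 0.03]))
# -> (2, -9, 39, 39, -9, 2)
```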
-
Patent number: 12328433
Abstract: A method is provided for inter-coding video in which transmission bandwidth requirements associated with second motion vectors for bi-directional temporal prediction are reduced. In the method, motion vector information for one of the motion vectors for multi-directional temporal prediction can be transmitted together with information on how to derive or construct the second motion vectors. Thus, rather than sending express information regarding each of the plurality of motion vectors, express information related to only one motion vector along with information related to reconstruction/derivation of the second motion vectors is transmitted, thus reducing bandwidth requirements and increasing coding efficiency.
Type: Grant
Filed: March 5, 2024
Date of Patent: June 10, 2025
Assignee: ARRIS Enterprises LLC
Inventors: Yue Yu, Krit Panusopone, Limin Wang
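A minimal sketch, under assumptions, of deriving a second motion vector from the single transmitted vector by scaling it with the temporal distances to the two reference pictures; this mirroring rule is one common derivation, not necessarily the one claimed by this patent (or the related one listed further below).

```python
# Derive the second motion vector for bi-directional prediction by scaling
# the transmitted vector according to picture-order-count distances.

def derive_second_mv(mv0, poc_cur, poc_ref0, poc_ref1):
    """mv0: (x, y) vector toward reference 0; returns a vector toward reference 1."""
    d0 = poc_cur - poc_ref0
    d1 = poc_cur - poc_ref1
    scale = d1 / d0
    return (round(mv0[0] * scale), round(mv0[1] * scale))

# Current picture at POC 8, references at POC 4 (past) and POC 12 (future).
print(derive_second_mv((6, -2), 8, 4, 12))  # -> (-6, 2): the mirrored vector
```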
-
Publication number: 20250184538
Abstract: A video decoder reconstructs a current frame of a video from a video bitstream based on a reconstructed reference frame. For a block of the current frame, the video decoder identifies a reference block in the reference frame based on a motion vector associated with the block. The decoder determines the slope and offset parameters of a local illumination compensation model based on reconstructed pixels in the current frame and the reference frame. The video decoder decodes, from the video bitstream, an adjustment to the slope and updates the slope by applying the decoded adjustment. The decoder further determines an adjusted offset parameter for the local illumination compensation model. The decoder generates predicted pixels for the block by at least applying, to the reference block, the local illumination compensation model with the updated parameters.
Type: Application
Filed: March 3, 2023
Publication date: June 5, 2025
Inventors: Yue YU, Haoping YU, Jonathan GAN
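A hedged sketch of local illumination compensation as summarized above: slope and offset are derived from reconstructed neighboring samples, the slope is updated by a decoded adjustment, and the offset is re-derived. The least-squares fit and the mean-preserving offset update are assumptions for illustration.

```python
# Derive LIC slope/offset from neighbouring samples, apply a decoded slope
# adjustment, re-derive the offset, and predict the block.

def derive_lic_params(ref_neighbors, cur_neighbors):
    n = len(ref_neighbors)
    sx, sy = sum(ref_neighbors), sum(cur_neighbors)
    sxx = sum(x * x for x in ref_neighbors)
    sxy = sum(x * y for x, y in zip(ref_neighbors, cur_neighbors))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # least-squares fit
    offset = (sy - slope * sx) / n
    return slope, offset

def apply_lic(ref_block, ref_neighbors, cur_neighbors, slope_adjustment):
    slope, offset = derive_lic_params(ref_neighbors, cur_neighbors)
    slope += slope_adjustment                            # decoded slope adjustment
    # Re-derive the offset so the neighbour means still match (assumed rule).
    offset = (sum(cur_neighbors) / len(cur_neighbors)
              - slope * sum(ref_neighbors) / len(ref_neighbors))
    return [round(slope * p + offset) for p in ref_block]

ref_nb, cur_nb = [100, 110, 120, 130], [105, 118, 131, 144]
print(apply_lic([100, 115, 125], ref_nb, cur_nb, slope_adjustment=0.1))
```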
-
Publication number: 20250184498
Abstract: A method of decoding JVET video comprises receiving a bitstream indicating how a coding tree unit was partitioned into coding units according to a partitioning structure that allows nodes to be split according to a partitioning technique. An intra direction mode for a coding unit may be selected, as well as one or more of a plurality of reference lines used to generate at least one predictor for the intra direction mode. A predictor may be generated from reference samples within each selected reference line by combining predicted pixel values based on a projected position on a main reference line with predicted pixel values based on a projected position on a side reference line. The predicted pixel values are weighted according to a weight parameter, wherein the weight parameter is determined based on a shift conversion factor.
Type: Application
Filed: January 28, 2025
Publication date: June 5, 2025
Applicant: ARRIS Enterprises LLC
Inventors: Krit PANUSOPONE, Koohyar MINOO, Yue YU, Limin WANG
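A hedged sketch of blending the main-line and side-line predictors with a weight expressed through a shift conversion factor. The shift value and the weight rule are assumptions.

```python
# Weighted combination of predictions from a main and a side reference line.

SHIFT = 6  # hypothetical shift conversion factor

def combine_reference_lines(main_pred, side_pred, weight):
    """weight is in 1/(1 << SHIFT) units and is applied to the main-line value."""
    w_main = weight
    w_side = (1 << SHIFT) - weight
    return (w_main * main_pred + w_side * side_pred + (1 << (SHIFT - 1))) >> SHIFT

print(combine_reference_lines(130, 110, weight=48))  # main line weighted 48/64
```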
-
Publication number: 20250184510
Abstract: A method for decoding a video from a video bitstream includes: accessing a binary string from the video bitstream, the binary string representing a slice of a frame of the video; determining an initial context value of an entropy coding model for the slice to be one of a first context value stored for a first CTU in a previous slice, a second context value stored for a second CTU in the previous slice, and a default initial context value independent of the previous slice; decoding the slice by decoding at least a portion of the binary string according to the entropy coding model with the initial context value; reconstructing the frame of the video based, at least in part, upon the decoded slice; and causing the reconstructed frame to be displayed along with other frames of the video.
Type: Application
Filed: March 9, 2023
Publication date: June 5, 2025
Inventors: Kazushi SATO, Yue YU, Haoping YU
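A minimal sketch of the three-way initialization choice: the slice's initial entropy-coding context is taken from one of two CTU positions stored for the previous slice or from the default table. The selection signal shown here is an assumption for illustration.

```python
# Choose the initial entropy-coding context for a slice.

DEFAULT_INIT_CONTEXT = {"state": 63}    # hypothetical default context table entry

def initial_context(prev_slice_ctx_a, prev_slice_ctx_b, selection):
    if selection == "ctu_a" and prev_slice_ctx_a is not None:
        return prev_slice_ctx_a
    if selection == "ctu_b" and prev_slice_ctx_b is not None:
        return prev_slice_ctx_b
    return DEFAULT_INIT_CONTEXT         # independent of the previous slice

print(initial_context({"state": 40}, {"state": 52}, "ctu_b"))  # -> {'state': 52}
print(initial_context(None, None, "ctu_a"))                    # -> default context
```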
-
Patent number: 12323601
Abstract: A method for inter-coding video is provided in which transmission bandwidth requirements associated with the second motion vector for bi-directional temporal prediction are reduced. In the method, motion vector information for only one of the two motion vectors for bi-directional temporal prediction can be transmitted together with information on how to derive or construct the second motion vector. Thus, rather than sending express information regarding two motion vectors, express information related to only one motion vector along with information related to reconstruction/derivation of the second motion vector is transmitted, thus reducing bandwidth requirements and increasing coding efficiency.
Type: Grant
Filed: January 19, 2024
Date of Patent: June 3, 2025
Assignee: ARRIS Enterprises LLC
Inventors: Krit Panusopone, Yue Yu, Limin Wang
-
Publication number: 20250175594
Abstract: A system and method of planar motion vector derivation are provided which, in some embodiments, can employ an unequal weighted combination of adjacent motion vectors. In some embodiments, motion vector information associated with a bottom-right pixel or block adjacent to a current coding unit can be derived from motion information associated with a top row or top neighboring row of the current coding unit and motion information associated with a left column or left neighboring column of the current coding unit. Weighted or non-weighted combinations of such values can be combined in a planar mode prediction model to derive associated motion information for bottom and/or right adjacent pixels or blocks.
Type: Application
Filed: January 29, 2025
Publication date: May 29, 2025
Applicant: ARRIS Enterprises LLC
Inventors: Krit Panusopone, Seungwook Hong, Yue Yu, Limin Wang
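A sketch, under assumptions, of planar motion vector derivation: the bottom-right motion vector is formed from top-row and left-column motion information, and interior vectors are then interpolated. The simplified interpolation and equal weighting below are illustrative, not the exact claimed combination.

```python
# Derive a bottom-right motion vector from the top row and left column,
# then interpolate an interior motion vector in a planar fashion.

def derive_bottom_right_mv(top_right_mv, bottom_left_mv):
    return tuple((a + b + 1) >> 1 for a, b in zip(top_right_mv, bottom_left_mv))

def planar_mv(x, y, w, h, top_mvs, left_mvs, bottom_right_mv):
    # horizontal interpolation between the left column and the right edge
    hor = tuple(((w - 1 - x) * l + (x + 1) * r) // w
                for l, r in zip(left_mvs[y], bottom_right_mv))
    # vertical interpolation between the top row and the bottom edge
    ver = tuple(((h - 1 - y) * t + (y + 1) * b) // h
                for t, b in zip(top_mvs[x], bottom_right_mv))
    return tuple((hh + vv + 1) >> 1 for hh, vv in zip(hor, ver))

top_mvs = [(4, 0), (4, 0), (6, 0), (6, 0)]    # one MV per column of the top row
left_mvs = [(2, 2), (2, 2), (4, 2), (4, 2)]   # one MV per row of the left column
br = derive_bottom_right_mv(top_mvs[-1], left_mvs[-1])
print(br)                                     # -> (5, 1)
print(planar_mv(1, 2, 4, 4, top_mvs, left_mvs, br))
```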
-
Publication number: 20250166231
Abstract: In some embodiments, a mesh encoder encodes a dynamic mesh with connectivity simplification. The encoder encodes geometry component images of the dynamic mesh using a video encoder to generate a geometry component bitstream and decodes the geometry component bitstream to generate reconstructed geometry component images. The encoder further determines, using the reconstructed geometry component images, a face to be removed from connectivity component images of the dynamic mesh and updates the connectivity component images of the dynamic mesh by removing the face from the connectivity component images. The encoder encodes the updated connectivity component images to generate a connectivity component bitstream and generates a coded mesh bitstream by including at least the geometry component bitstream and the connectivity component bitstream.
Type: Application
Filed: December 28, 2022
Publication date: May 22, 2025
Inventors: Vladyslav ZAKHARCHENKO, Yue YU, Haoping YU
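An illustrative sketch of one way a face might be selected for removal once geometry has been reconstructed, as the abstract describes removing faces from the connectivity component: here a face is dropped when its reconstructed vertices become nearly collinear. The zero-area criterion is an assumption, not the patented rule.

```python
# Remove degenerate faces from a triangle mesh based on reconstructed geometry.

def triangle_area(a, b, c):
    ux, uy, uz = (b[i] - a[i] for i in range(3))
    vx, vy, vz = (c[i] - a[i] for i in range(3))
    cx, cy, cz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5

def simplify_connectivity(vertices, faces, eps=1e-6):
    return [f for f in faces if triangle_area(*(vertices[i] for i in f)) > eps]

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (2, 0, 0)]
faces = [(0, 1, 2), (0, 1, 3)]               # second face is degenerate (collinear)
print(simplify_connectivity(verts, faces))   # -> [(0, 1, 2)]
```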
-
Publication number: 20250168360
Abstract: A method of decoding JVET video includes receiving a bitstream indicating how a coding tree unit was partitioned into coding units, and parsing said bitstream to generate at least one predictor based on an intra prediction mode signaled in the bitstream, the intra prediction mode selected from a plurality of intra prediction modes for calculating a prediction pixel P[x,y] at coordinate x,y for the coding unit. The number of intra prediction modes available for coding the coding unit is reduced by replacing two or more non-weighted intra prediction modes with a weighted intra prediction mode.
Type: Application
Filed: January 17, 2025
Publication date: May 22, 2025
Applicant: ARRIS Enterprises LLC
Inventors: Yue Yu, Limin Wang, Krit Panusopone
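A minimal sketch of replacing two non-weighted intra prediction modes with a single weighted mode that blends their predictors; the blend weights are illustrative assumptions.

```python
# Blend the predictors of two replaced intra modes into one weighted mode.

def weighted_intra_pred(pred_mode_a, pred_mode_b, w_a=0.5):
    return [round(w_a * a + (1 - w_a) * b) for a, b in zip(pred_mode_a, pred_mode_b)]

pred_horizontal = [100, 100, 100, 100]   # predictor from one replaced mode
pred_vertical = [120, 118, 116, 114]     # predictor from the other replaced mode
print(weighted_intra_pred(pred_horizontal, pred_vertical))  # -> [110, 109, 108, 107]
```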
-
Publication number: 20250159223
Abstract: A method for decoding a video includes: extracting a GCI flag from a bitstream of the video; determining that one or more general constraints are imposed for the video based on a value of the GCI flag; in response to determining that one or more general constraints are imposed for the video, extracting, from the bitstream of the video, a value indicating a quantity of additional bits included in the bitstream, the additional bits comprising flag bits indicating respective additional coding tools to be constrained for the video; determining that the value is no greater than 5; and, in response to determining that the value is no greater than 5, extracting one or more bits from the bitstream, wherein the number of the one or more bits equals the value, and decoding a remaining portion of the bitstream into images independent of the one or more bits.
Type: Application
Filed: November 8, 2022
Publication date: May 15, 2025
Inventors: Jonathan GAN, Yue YU, Haoping YU
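A sketch of the parsing flow: after the GCI flag indicates constraints are present, a count of additional bits is read; when the count is no greater than 5, those bits are consumed but the remaining bitstream is decoded independently of them. Field widths are assumptions.

```python
# Read the additional-bit count and consume the extra bits; ignore them when
# the count is no greater than 5.

def parse_gci_extension(bits):
    count = 0
    for _ in range(8):                  # assume an 8-bit count field
        count = (count << 1) | next(bits)
    extra = [next(bits) for _ in range(count)]
    if count <= 5:
        return []                       # bits are read but not used for decoding
    return extra                        # otherwise they carry tool constraints

print(parse_gci_extension(iter([0, 0, 0, 0, 0, 0, 1, 1,   # count = 3
                                1, 0, 1])))                # -> []
```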