Patents by Inventor Akira Osamoto
Akira Osamoto has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250113047
Abstract: A method for coding unit partitioning in a video encoder is provided that includes performing intra-prediction on each permitted coding unit (CU) in a CU hierarchy of a largest coding unit (LCU) to determine an intra-prediction coding cost for each permitted CU, storing the intra-prediction coding cost for each intra-predicted CU in memory, and performing inter-prediction, prediction mode selection, and CU partition selection on each permitted CU in the CU hierarchy to determine a CU partitioning for encoding the LCU, wherein the stored intra-prediction coding costs for the CUs are used.
Type: Application
Filed: December 13, 2024
Publication date: April 3, 2025
Inventors: Hyung Joon Kim, Minhua Zhou, Akira Osamoto, Hideo Tamama
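The abstract above describes caching per-CU intra-prediction costs so they can be reused during the later partition-selection pass. The following Python sketch illustrates that idea under simplified assumptions: intra_cost and inter_cost are hypothetical cost callables, not part of any real encoder API, and rate-distortion details are omitted.

def precompute_intra_costs(x, y, size, min_cu, intra_cost, cache):
    """Depth-first walk of the CU hierarchy; store one intra cost per permitted CU."""
    cache[(x, y, size)] = intra_cost(x, y, size)
    if size > min_cu:
        half = size // 2
        for dx in (0, half):
            for dy in (0, half):
                precompute_intra_costs(x + dx, y + dy, half, min_cu, intra_cost, cache)

def select_partition(x, y, size, min_cu, inter_cost, cache):
    """Choose split vs. no-split per CU, reusing the cached intra costs."""
    best_here = min(cache[(x, y, size)], inter_cost(x, y, size))
    if size == min_cu:
        return best_here, [(x, y, size)]
    half = size // 2
    split_cost, split_cus = 0, []
    for dx in (0, half):
        for dy in (0, half):
            cost, cus = select_partition(x + dx, y + dy, half, min_cu, inter_cost, cache)
            split_cost += cost
            split_cus += cus
    if split_cost < best_here:
        return split_cost, split_cus
    return best_here, [(x, y, size)]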
-
Patent number: 12184841
Abstract: Several systems and methods for intra-prediction estimation of video pictures are disclosed. In an embodiment, the method includes accessing four ‘N×N’ pixel blocks comprising luma-related pixels. The four ‘N×N’ pixel blocks collectively configure a ‘2N×2N’ pixel block. A first pre-determined number of candidate luma intra-prediction modes is accessed for each of the four ‘N×N’ pixel blocks. A presence of one or more luma intra-prediction modes that are common among the candidate luma intra-prediction modes of at least two of the four ‘N×N’ pixel blocks is identified. The method further includes performing, based on the identification, one of (1) selecting a principal luma intra-prediction mode for the ‘2N×2N’ pixel block and (2) limiting a partitioning size to a ‘N×N’ pixel block size for a portion of the video picture corresponding to the ‘2N×2N’ pixel block.
Type: Grant
Filed: May 19, 2023
Date of Patent: December 31, 2024
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Ranga Ramanujam Srinivasan, Hyung Joon Kim, Akira Osamoto
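The decision described in this abstract can be sketched in a few lines of Python. This is an illustration only, not the patented implementation: it checks whether any candidate luma mode is shared by at least two of the four N×N blocks and either returns a principal mode for the 2N×2N block or signals that partitioning should be limited to N×N. The tie-break (picking the mode shared by the most blocks) is an assumption.

from collections import Counter

def decide_2nx2n(candidate_modes_per_block):
    """candidate_modes_per_block: list of four lists of candidate mode indices."""
    counts = Counter()
    for modes in candidate_modes_per_block:
        counts.update(set(modes))              # count each mode at most once per block
    common = [m for m, c in counts.items() if c >= 2]
    if common:
        # assumed tie-break: the mode shared by the most blocks becomes the principal mode
        principal = max(common, key=lambda m: counts[m])
        return ("use_2Nx2N", principal)
    return ("limit_to_NxN", None)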
-
Patent number: 12170784
Abstract: A method for coding unit partitioning in a video encoder is provided that includes performing intra-prediction on each permitted coding unit (CU) in a CU hierarchy of a largest coding unit (LCU) to determine an intra-prediction coding cost for each permitted CU, storing the intra-prediction coding cost for each intra-predicted CU in memory, and performing inter-prediction, prediction mode selection, and CU partition selection on each permitted CU in the CU hierarchy to determine a CU partitioning for encoding the LCU, wherein the stored intra-prediction coding costs for the CUs are used.
Type: Grant
Filed: February 19, 2023
Date of Patent: December 17, 2024
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Hyung Joon Kim, Minhua Zhou, Akira Osamoto, Hideo Tamama
-
Publication number: 20240185431
Abstract: A method includes obtaining a reference frame from among multiple image frames of a scene. The method also includes generating a segmentation mask using the reference frame, where the segmentation mask contains information for separation of foreground and background in the scene. The method further includes applying the segmentation mask to each of the multiple image frames to generate foreground image frames and background image frames. The method also includes performing multi-frame registration on each of the foreground image frames to generate registered foreground image frames. The method further includes performing multi-frame registration on each of the background image frames to generate registered background image frames. In addition, the method includes combining the registered foreground image frames and the registered background image frames to generate a combined registered multi-frame image of the scene.
Type: Application
Filed: December 1, 2022
Publication date: June 6, 2024
Inventors: Yufang Sun, Akira Osamoto, John Seokjun Lee, Hamid R. Sheikh
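As a rough illustration of the pipeline, the sketch below splits each frame into foreground and background using the segmentation mask, registers the two stacks separately, and recombines them. The register callable is a hypothetical placeholder for any multi-frame alignment routine; the patent does not prescribe this interface.

import numpy as np

def blend_registered(frames, mask, register):
    """frames: list of HxW arrays; mask: HxW boolean array (True = foreground)."""
    fg_frames = [f * mask for f in frames]       # foreground-only frames
    bg_frames = [f * (~mask) for f in frames]    # background-only frames
    fg_reg = register(fg_frames)                 # registered foreground stack
    bg_reg = register(bg_frames)                 # registered background stack
    # recombine the separately registered stacks into one multi-frame result
    return [fg + bg for fg, bg in zip(fg_reg, bg_reg)]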
-
Publication number: 20240114174
Abstract: A method and a video processor for preventing start code confusion. The method includes aligning bytes of a slice header relating to slice data when the slice header is not byte aligned or inserting differential data at the end of the slice header before the slice data when the slice header is byte aligned, performing emulation prevention byte insertion on the slice header, and combining the slice header and the slice data after performing emulation prevention byte insertion.
Type: Application
Filed: December 15, 2023
Publication date: April 4, 2024
Inventors: Vivienne Sze, Madhukar Budagavi, Akira Osamoto, Yasutomo Matsuba
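Emulation prevention byte insertion itself is a standard H.264/HEVC mechanism: a 0x03 byte is inserted after any 0x00 0x00 pair that would otherwise be followed by a byte value of 0x03 or less, so start codes cannot be emulated inside the payload. The sketch below shows that step applied to a slice header before it is combined with the slice data; the byte-alignment/differential-data step claimed in the abstract is only marked with a placeholder comment.

def insert_emulation_prevention(payload: bytes) -> bytes:
    out = bytearray()
    zeros = 0
    for b in payload:
        if zeros >= 2 and b <= 0x03:
            out.append(0x03)                   # emulation prevention byte
            zeros = 0
        out.append(b)
        zeros = zeros + 1 if b == 0x00 else 0  # track the run of zero bytes
    return bytes(out)

def build_slice(header: bytes, data: bytes) -> bytes:
    # Placeholder for the claimed step: pad the header to a byte boundary when it is not
    # byte aligned, or insert differential data when it already is (details omitted here).
    return insert_emulation_prevention(header) + data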
-
Publication number: 20240098249
Abstract: A method for luma-based chroma intra-prediction in a video encoder or a video decoder is provided that includes down sampling a first reconstructed luma block of a largest coding unit (LCU), computing parameters α and β of a linear model using immediate top neighboring reconstructed luma samples and left neighboring reconstructed luma samples of the first reconstructed luma block and reconstructed neighboring chroma samples of a chroma block corresponding to the first reconstructed luma block, wherein the linear model is PredC[x,y] = α·RecL′[x,y] + β, wherein x and y are sample coordinates, PredC is predicted chroma samples, and RecL′ is samples of the down sampled first reconstructed luma block, and wherein the immediate top neighboring reconstructed luma samples are the only top neighboring reconstructed luma samples used, and computing samples of a first predicted chroma block from corresponding samples of the down sampled first reconstructed luma block using the linear model and the parameters.
Type: Application
Filed: November 17, 2023
Publication date: March 21, 2024
Inventors: Madhukar Budagavi, Akira Osamoto
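The linear model PredC[x,y] = α·RecL′[x,y] + β can be illustrated with a small numpy sketch. The least-squares fit of α and β below is for illustration only; the integer arithmetic and exact neighbor-selection rules of an actual encoder or decoder are not reproduced.

import numpy as np

def lm_chroma_predict(rec_luma_ds, neigh_luma, neigh_chroma):
    """rec_luma_ds: down-sampled reconstructed luma block (same size as the chroma block).
    neigh_luma / neigh_chroma: 1-D arrays of co-located neighboring reconstructed samples."""
    mean_l, mean_c = neigh_luma.mean(), neigh_chroma.mean()
    var_l = ((neigh_luma - mean_l) ** 2).sum()
    alpha = ((neigh_luma - mean_l) * (neigh_chroma - mean_c)).sum() / var_l if var_l else 0.0
    beta = mean_c - alpha * mean_l
    return alpha * rec_luma_ds + beta          # PredC[x,y] = alpha * RecL'[x,y] + beta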
-
Patent number: 11849148
Abstract: A method and a video processor for preventing start code confusion. The method includes aligning bytes of a slice header relating to slice data when the slice header is not byte aligned or inserting differential data at the end of the slice header before the slice data when the slice header is byte aligned, performing emulation prevention byte insertion on the slice header, and combining the slice header and the slice data after performing emulation prevention byte insertion.
Type: Grant
Filed: June 17, 2021
Date of Patent: December 19, 2023
Assignee: Texas Instruments Incorporated
Inventors: Vivienne Sze, Madhukar Budagavi, Akira Osamoto, Yasutomo Matsuba
-
Patent number: 11825078
Abstract: A method for luma-based chroma intra-prediction in a video encoder or a video decoder is provided that includes down sampling a first reconstructed luma block of a largest coding unit (LCU), computing parameters α and β of a linear model using immediate top neighboring reconstructed luma samples and left neighboring reconstructed luma samples of the first reconstructed luma block and reconstructed neighboring chroma samples of a chroma block corresponding to the first reconstructed luma block, wherein the linear model is PredC[x,y] = α·RecL′[x,y] + β, wherein x and y are sample coordinates, PredC is predicted chroma samples, and RecL′ is samples of the down sampled first reconstructed luma block, and wherein the immediate top neighboring reconstructed luma samples are the only top neighboring reconstructed luma samples used, and computing samples of a first predicted chroma block from corresponding samples of the down sampled first reconstructed luma block using the linear model and the parameters.
Type: Grant
Filed: July 27, 2022
Date of Patent: November 21, 2023
Assignee: Texas Instruments Incorporated
Inventors: Madhukar Budagavi, Akira Osamoto
-
Patent number: 11812041
Abstract: A method for motion estimation is provided that includes determining a first motion vector for a first child coding unit (CU) of a parent CU and a second motion vector for a second child CU of the parent CU, wherein the first child CU, the second child CU, and the parent CU are in a CU hierarchy, wherein the first and second child CUs are smallest size CUs in the CU hierarchy, and wherein a first motion search type is used to determine the first motion vector and the second motion vector, selecting the first and second motion vectors as candidate predictors for the parent CU, selecting a predictor for a prediction unit (PU) of the parent CU from the candidate predictors, and refining the predictor using a second motion search type to determine a motion vector for the PU.
Type: Grant
Filed: December 23, 2021
Date of Patent: November 7, 2023
Assignee: Texas Instruments Incorporated
Inventors: Hyung Joon Kim, Minhua Zhou, Akira Osamoto, Hideo Tamama
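The predictor-reuse idea reads roughly as follows in sketch form. coarse_search, refine_search, and predictor_cost are hypothetical callables standing in for the two motion search types and the candidate-selection cost; this illustrates the flow, not the patented implementation.

def estimate_parent_pu_mv(child_cus, parent_pu, coarse_search, refine_search, predictor_cost):
    # first motion search type: one motion vector per smallest-size child CU
    candidates = [coarse_search(cu) for cu in child_cus]
    # choose the candidate predictor with the lowest cost for this PU of the parent CU
    best = min(candidates, key=lambda mv: predictor_cost(parent_pu, mv))
    # second motion search type: refine around the chosen predictor
    return refine_search(parent_pu, best)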
-
Publication number: 20230291897
Abstract: Several systems and methods for intra-prediction estimation of video pictures are disclosed. In an embodiment, the method includes accessing four ‘N×N’ pixel blocks comprising luma-related pixels. The four ‘N×N’ pixel blocks collectively configure a ‘2N×2N’ pixel block. A first pre-determined number of candidate luma intra-prediction modes is accessed for each of the four ‘N×N’ pixel blocks. A presence of one or more luma intra-prediction modes that are common among the candidate luma intra-prediction modes of at least two of the four ‘N×N’ pixel blocks is identified. The method further includes performing, based on the identification, one of (1) selecting a principal luma intra-prediction mode for the ‘2N×2N’ pixel block and (2) limiting a partitioning size to a ‘N×N’ pixel block size for a portion of the video picture corresponding to the ‘2N×2N’ pixel block.
Type: Application
Filed: May 19, 2023
Publication date: September 14, 2023
Inventors: Ranga Ramanujam Srinivasan, Hyung Joon Kim, Akira Osamoto
-
Publication number: 20230267702
Abstract: An electronic device is provided. The electronic device includes a display, a camera module disposed under the display, and a processor electrically connected to the display and the camera module. The processor is configured to acquire a sample frame by using the camera module, identify whether a light source object is included in the sample frame, determine an imaging parameter for acquisition of first multiple frames when the light source object is identified to be included in the sample frame, acquire multiple frames, based on the imaging parameter, composite the multiple frames to generate a composite frame, identify an attribute of the light source object included in the composite frame, and perform frame correction of the composite frame, based on the identified attribute.
Type: Application
Filed: April 21, 2023
Publication date: August 24, 2023
Inventors: Woojhon CHOI, Wonjoon DO, Jaesung CHOI, Alok Shankarlal SHUKLA, Manoj Kumar MARRAMREDDY, Saketh SHARMA, Hamid Rahim SHEIKH, John Seokjun LEE, Akira OSAMOTO, Yibo XU
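A rough sketch of the capture flow follows. All callables (detect_light_source, choose_params, composite, classify_light, correct) and the burst length are hypothetical placeholders for the device-specific steps the abstract leaves unspecified.

def capture_with_light_source_handling(camera, detect_light_source, choose_params,
                                       composite, classify_light, correct, n_frames=5):
    """camera.capture(params=None) -> frame; the other callables are hypothetical hooks."""
    sample = camera.capture()                       # acquire a sample frame
    if not detect_light_source(sample):             # no light source object: nothing special to do
        return sample
    params = choose_params(sample)                  # imaging parameter for the multi-frame burst
    frames = [camera.capture(params) for _ in range(n_frames)]
    merged = composite(frames)                      # composite frame from the burst
    attribute = classify_light(merged)              # attribute of the light source object (assumption)
    return correct(merged, attribute)               # frame correction based on the identified attribute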
-
Publication number: 20230199201
Abstract: A method for coding unit partitioning in a video encoder is provided that includes performing intra-prediction on each permitted coding unit (CU) in a CU hierarchy of a largest coding unit (LCU) to determine an intra-prediction coding cost for each permitted CU, storing the intra-prediction coding cost for each intra-predicted CU in memory, and performing inter-prediction, prediction mode selection, and CU partition selection on each permitted CU in the CU hierarchy to determine a CU partitioning for encoding the LCU, wherein the stored intra-prediction coding costs for the CUs are used.
Type: Application
Filed: February 19, 2023
Publication date: June 22, 2023
Inventors: Hyung Joon Kim, Minhua Zhou, Akira Osamoto, Hideo Tamama
-
Patent number: 11659171
Abstract: Several systems and methods for intra-prediction estimation of video pictures are disclosed. In an embodiment, the method includes accessing four ‘N×N’ pixel blocks comprising luma-related pixels. The four ‘N×N’ pixel blocks collectively configure a ‘2N×2N’ pixel block. A first pre-determined number of candidate luma intra-prediction modes is accessed for each of the four ‘N×N’ pixel blocks. A presence of one or more luma intra-prediction modes that are common among the candidate luma intra-prediction modes of at least two of the four ‘N×N’ pixel blocks is identified. The method further includes performing, based on the identification, one of (1) selecting a principal luma intra-prediction mode for the ‘2N×2N’ pixel block and (2) limiting a partitioning size to a ‘N×N’ pixel block size for a portion of the video picture corresponding to the ‘2N×2N’ pixel block.
Type: Grant
Filed: September 15, 2021
Date of Patent: May 23, 2023
Assignee: Texas Instruments Incorporated
Inventors: Ranga Ramanujam Srinivasan, Hyung Joon Kim, Akira Osamoto
-
Patent number: 11589060
Abstract: A method for coding unit partitioning in a video encoder is provided that includes performing intra-prediction on each permitted coding unit (CU) in a CU hierarchy of a largest coding unit (LCU) to determine an intra-prediction coding cost for each permitted CU, storing the intra-prediction coding cost for each intra-predicted CU in memory, and performing inter-prediction, prediction mode selection, and CU partition selection on each permitted CU in the CU hierarchy to determine a CU partitioning for encoding the LCU, wherein the stored intra-prediction coding costs for the CUs are used.
Type: Grant
Filed: March 5, 2021
Date of Patent: February 21, 2023
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Hyung Joon Kim, Minhua Zhou, Akira Osamoto, Hideo Tamama
-
Publication number: 20220368894
Abstract: A method for luma-based chroma intra-prediction in a video encoder or a video decoder is provided that includes down sampling a first reconstructed luma block of a largest coding unit (LCU), computing parameters α and β of a linear model using immediate top neighboring reconstructed luma samples and left neighboring reconstructed luma samples of the first reconstructed luma block and reconstructed neighboring chroma samples of a chroma block corresponding to the first reconstructed luma block, wherein the linear model is PredC[x,y] = α·RecL′[x,y] + β, wherein x and y are sample coordinates, PredC is predicted chroma samples, and RecL′ is samples of the down sampled first reconstructed luma block, and wherein the immediate top neighboring reconstructed luma samples are the only top neighboring reconstructed luma samples used, and computing samples of a first predicted chroma block from corresponding samples of the down sampled first reconstructed luma block using the linear model and the parameters.
Type: Application
Filed: July 27, 2022
Publication date: November 17, 2022
Inventors: Madhukar Budagavi, Akira Osamoto
-
Patent number: 11431963
Abstract: A method for luma-based chroma intra-prediction in a video encoder or a video decoder is provided that includes down sampling a first reconstructed luma block of a largest coding unit (LCU), computing parameters α and β of a linear model using immediate top neighboring reconstructed luma samples and left neighboring reconstructed luma samples of the first reconstructed luma block and reconstructed neighboring chroma samples of a chroma block corresponding to the first reconstructed luma block, wherein the linear model is PredC[x,y] = α·RecL′[x,y] + β, wherein x and y are sample coordinates, PredC is predicted chroma samples, and RecL′ is samples of the down sampled first reconstructed luma block, and wherein the immediate top neighboring reconstructed luma samples are the only top neighboring reconstructed luma samples used, and computing samples of a first predicted chroma block from corresponding samples of the down sampled first reconstructed luma block using the linear model and the parameters.
Type: Grant
Filed: January 12, 2021
Date of Patent: August 30, 2022
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Madhukar Budagavi, Akira Osamoto
-
Patent number: 11381813
Abstract: Several systems and methods for intra-prediction estimation of video pictures are disclosed. In an embodiment, the method includes accessing four ‘N×N’ pixel blocks comprising luma-related pixels. The four ‘N×N’ pixel blocks collectively configure a ‘2N×2N’ pixel block. A first pre-determined number of candidate luma intra-prediction modes is accessed for each of the four ‘N×N’ pixel blocks. A presence of one or more luma intra-prediction modes that are common among the candidate luma intra-prediction modes of at least two of the four ‘N×N’ pixel blocks is identified. The method further includes performing, based on the identification, one of (1) selecting a principal luma intra-prediction mode for the ‘2N×2N’ pixel block and (2) limiting a partitioning size to a ‘N×N’ pixel block size for a portion of the video picture corresponding to the ‘2N×2N’ pixel block.
Type: Grant
Filed: March 10, 2020
Date of Patent: July 5, 2022
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Ranga Ramanujam Srinivasan, Hyung Joon Kim, Akira Osamoto
-
Publication number: 20220116604
Abstract: A method for motion estimation is provided that includes determining a first motion vector for a first child coding unit (CU) of a parent CU and a second motion vector for a second child CU of the parent CU, wherein the first child CU, the second child CU, and the parent CU are in a CU hierarchy, wherein the first and second child CUs are smallest size CUs in the CU hierarchy, and wherein a first motion search type is used to determine the first motion vector and the second motion vector, selecting the first and second motion vectors as candidate predictors for the parent CU, selecting a predictor for a prediction unit (PU) of the parent CU from the candidate predictors, and refining the predictor using a second motion search type to determine a motion vector for the PU.
Type: Application
Filed: December 23, 2021
Publication date: April 14, 2022
Inventors: Hyung Joon Kim, Minhua Zhou, Akira Osamoto, Hideo Tamama
-
Patent number: 11245912
Abstract: A method for motion estimation is provided that includes determining a first motion vector for a first child coding unit (CU) of a parent CU and a second motion vector for a second child CU of the parent CU, wherein the first child CU, the second child CU, and the parent CU are in a CU hierarchy, wherein the first and second child CUs are smallest size CUs in the CU hierarchy, and wherein a first motion search type is used to determine the first motion vector and the second motion vector, selecting the first and second motion vectors as candidate predictors for the parent CU, selecting a predictor for a prediction unit (PU) of the parent CU from the candidate predictors, and refining the predictor using a second motion search type to determine a motion vector for the PU.
Type: Grant
Filed: July 12, 2012
Date of Patent: February 8, 2022
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Hyung Joon Kim, Minhua Zhou, Akira Osamoto, Hideo Tamama
-
Publication number: 20220007012
Abstract: Several systems and methods for intra-prediction estimation of video pictures are disclosed. In an embodiment, the method includes accessing four ‘N×N’ pixel blocks comprising luma-related pixels. The four ‘N×N’ pixel blocks collectively configure a ‘2N×2N’ pixel block. A first pre-determined number of candidate luma intra-prediction modes is accessed for each of the four ‘N×N’ pixel blocks. A presence of one or more luma intra-prediction modes that are common among the candidate luma intra-prediction modes of at least two of the four ‘N×N’ pixel blocks is identified. The method further includes performing, based on the identification, one of (1) selecting a principal luma intra-prediction mode for the ‘2N×2N’ pixel block and (2) limiting a partitioning size to a ‘N×N’ pixel block size for a portion of the video picture corresponding to the ‘2N×2N’ pixel block.
Type: Application
Filed: September 15, 2021
Publication date: January 6, 2022
Inventors: Ranga Ramanujam Srinivasan, Hyung Joon Kim, Akira Osamoto