Patents by Inventor Akira Osamoto

Akira Osamoto has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250113047
    Abstract: A method for coding unit partitioning in a video encoder is provided that includes performing intra-prediction on each permitted coding unit (CU) in a CU hierarchy of a largest coding unit (LCU) to determine an intra-prediction coding cost for each permitted CU, storing the intra-prediction coding cost for each intra-predicted CU in memory, and performing inter-prediction, prediction mode selection, and CU partition selection on each permitted CU in the CU hierarchy to determine a CU partitioning for encoding the LCU, wherein the stored intra-prediction coding costs for the CUs are used.
    Type: Application
    Filed: December 13, 2024
    Publication date: April 3, 2025
    Inventors: Hyung Joon Kim, Minhua Zhou, Akira Osamoto, Hideo Tamama
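
The coding-unit partitioning flow described in the abstract above (and repeated in the related filings below) can be illustrated with a small two-pass sketch: intra-prediction costs are computed once per permitted CU and cached, then reused while inter-prediction and partition selection run over the same CU hierarchy. This is a hedged illustration, not the patented implementation; the CU class, the minimum CU size, and both cost callables are assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CU:
    """A square coding unit at position (x, y) with the given size."""
    x: int
    y: int
    size: int

    def split(self):
        h = self.size // 2
        return [CU(self.x + dx, self.y + dy, h) for dy in (0, h) for dx in (0, h)]

def partition_lcu(lcu, intra_cost, inter_cost, min_size=8):
    intra_costs = {}

    def cache_intra(cu):
        # Pass 1: intra-predict every permitted CU once and store its cost.
        intra_costs[cu] = intra_cost(cu)
        if cu.size > min_size:
            for child in cu.split():
                cache_intra(child)

    def best(cu):
        # Pass 2: mode selection reuses the cached intra cost instead of
        # re-running intra-prediction while inter-prediction is evaluated.
        here = min(intra_costs[cu], inter_cost(cu))
        if cu.size == min_size:
            return here, [cu]
        child_cost, leaves = 0.0, []
        for child in cu.split():
            c, l = best(child)
            child_cost += c
            leaves += l
        return (here, [cu]) if here <= child_cost else (child_cost, leaves)

    cache_intra(lcu)
    return best(lcu)

# Toy usage with stand-in cost functions.
cost, chosen_cus = partition_lcu(
    CU(0, 0, 64),
    intra_cost=lambda cu: float(cu.size),
    inter_cost=lambda cu: cu.size * 0.9,
)
```
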
  • Patent number: 12184841
    Abstract: Several systems and methods for intra-prediction estimation of video pictures are disclosed. In an embodiment, the method includes accessing four ‘N×N’ pixel blocks comprising luma-related pixels. The four ‘N×N’ pixel blocks collectively configure a ‘2N×2N’ pixel block. A first pre-determined number of candidate luma intra-prediction modes is accessed for each of the four ‘N×N’ pixel blocks. A presence of one or more luma intra-prediction modes that are common among the candidate luma intra-prediction modes of at least two of the four ‘N×N’ pixel blocks is identified. The method further includes performing, based on the identification, one of (1) selecting a principal luma intra-prediction mode for the ‘2N×2N’ pixel block and (2) limiting a partitioning size to a ‘N×N’ pixel block size for a portion of the video picture corresponding to the ‘2N×2N’ pixel block.
    Type: Grant
    Filed: May 19, 2023
    Date of Patent: December 31, 2024
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Ranga Ramanujam Srinivasan, Hyung Joon Kim, Akira Osamoto
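
The selection logic in the abstract above can be sketched as a set intersection over the per-sub-block candidate mode lists. This is a hedged sketch under assumptions the abstract does not spell out: how the principal mode is picked from the shared modes, and which outcome maps to the common-mode case, are choices made here purely for illustration.

```python
def choose_2nx2n_strategy(candidates):
    """candidates: four lists of candidate luma intra-prediction mode indices,
    one per N×N sub-block of the 2N×2N block."""
    # Modes that appear in the candidate lists of at least two sub-blocks.
    common = {m for i, a in enumerate(candidates)
                for b in candidates[i + 1:]
                for m in set(a) & set(b)}
    if common:
        # Assumption: pick the shared mode with the best (lowest) rank in any
        # sub-block list as the principal mode for the whole 2N×2N block.
        principal = min(common,
                        key=lambda m: min(c.index(m) for c in candidates if m in c))
        return "select_principal_mode", principal
    # No shared mode: restrict partitioning to N×N for this region.
    return "limit_to_NxN", None

# Toy usage: modes 0 and 1 each occur in two sub-blocks, so a principal mode
# is selected for the 2N×2N block.
print(choose_2nx2n_strategy([[0, 1, 26], [1, 10], [0, 2], [3, 4]]))
```
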
  • Patent number: 12170784
    Abstract: A method for coding unit partitioning in a video encoder is provided that includes performing intra-prediction on each permitted coding unit (CU) in a CU hierarchy of a largest coding unit (LCU) to determine an intra-prediction coding cost for each permitted CU, storing the intra-prediction coding cost for each intra-predicted CU in memory, and performing inter-prediction, prediction mode selection, and CU partition selection on each permitted CU in the CU hierarchy to determine a CU partitioning for encoding the LCU, wherein the stored intra-prediction coding costs for the CUs are used.
    Type: Grant
    Filed: February 19, 2023
    Date of Patent: December 17, 2024
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Hyung Joon Kim, Minhua Zhou, Akira Osamoto, Hideo Tamama
  • Publication number: 20240185431
    Abstract: A method includes obtaining a reference frame from among multiple image frames of a scene. The method also includes generating a segmentation mask using the reference frame, where the segmentation mask contains information for separation of foreground and background in the scene. The method further includes applying the segmentation mask to each of the multiple image frames to generate foreground image frames and background image frames. The method also includes performing multi-frame registration on each of the foreground image frames to generate registered foreground image frames. The method further includes performing multi-frame registration on each of the background image frames to generate registered background image frames. In addition, the method includes combining the registered foreground image frames and the registered background image frames to generate a combined registered multi-frame image of the scene.
    Type: Application
    Filed: December 1, 2022
    Publication date: June 6, 2024
    Inventors: Yufang Sun, Akira Osamoto, John Seokjun Lee, Hamid R. Sheikh
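
A hedged sketch of the pipeline in the abstract above, using NumPy only. The segmentation and registration steps are passed in as callables because the abstract does not name specific algorithms; the threshold mask and identity "registration" in the usage example are placeholders, not the patented methods.

```python
import numpy as np

def fuse_multiframe(frames, segment, register, reference_index=0):
    ref = frames[reference_index]
    mask = segment(ref)                               # 1 = foreground, 0 = background
    # Apply the mask to every frame, then register each layer separately
    # against the corresponding layer of the reference frame.
    fg = [register(f * mask, ref * mask) for f in frames]
    bg = [register(f * (1 - mask), ref * (1 - mask)) for f in frames]
    # Combine the registered foreground and background stacks into one image.
    return np.mean(fg, axis=0) * mask + np.mean(bg, axis=0) * (1 - mask)

# Placeholder components for illustration only.
frames = [np.random.rand(64, 64) for _ in range(4)]
fused = fuse_multiframe(
    frames,
    segment=lambda img: (img > img.mean()).astype(float),
    register=lambda moving, fixed: moving,            # identity stand-in
)
```
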
  • Publication number: 20240114174
    Abstract: A method and a video processor for preventing start code confusion. The method includes aligning bytes of a slice header relating to slice data when the slice header is not byte aligned or inserting differential data at the end of the slice header before the slice data when the slice header is byte aligned, performing emulation prevention byte insertion on the slice header, and combining the slice header and the slice data after performing emulation prevention byte insertion.
    Type: Application
    Filed: December 15, 2023
    Publication date: April 4, 2024
    Inventors: Vivienne Sze, Madhukar Budagavi, Akira Osamoto, Yasutomo Matsuba
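
The emulation-prevention step named in the abstract above follows the standard H.264/HEVC rule: a 0x03 byte is inserted after any two consecutive zero bytes whenever the next byte is 0x00–0x03, so the payload can never imitate a start code. The sketch below shows only that step; the header byte-alignment and differential-data handling from the abstract are not reproduced here.

```python
def insert_emulation_prevention(payload: bytes) -> bytes:
    out = bytearray()
    zero_run = 0
    for b in payload:
        if zero_run >= 2 and b <= 0x03:
            out.append(0x03)                 # emulation prevention byte
            zero_run = 0
        out.append(b)
        zero_run = zero_run + 1 if b == 0x00 else 0
    return bytes(out)

# 00 00 01 would look like a start code inside the payload, so it becomes
# 00 00 03 01 after insertion.
assert insert_emulation_prevention(b"\x00\x00\x01") == b"\x00\x00\x03\x01"
```
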
  • Publication number: 20240098249
    Abstract: A method for luma-based chroma intra-prediction in a video encoder or a video decoder is provided that includes down sampling a first reconstructed luma block of a largest coding unit (LCU), computing parameters α and β of a linear model using immediate top neighboring reconstructed luma samples and left neighboring reconstructed luma samples of the first reconstructed luma block and reconstructed neighboring chroma samples of a chroma block corresponding to the first reconstructed luma block, wherein the linear model is PredC[x,y]=α·RecL′[x,y]+β, wherein x and y are sample coordinates, PredC is predicted chroma samples, and RecL′ is samples of the down sampled first reconstructed luma block, and wherein the immediate top neighboring reconstructed luma samples are the only top neighboring reconstructed luma samples used, and computing samples of a first predicted chroma block from corresponding samples of the down sampled first reconstructed luma block using the linear model and the parameters.
    Type: Application
    Filed: November 17, 2023
    Publication date: March 21, 2024
    Inventors: Madhukar Budagavi, Akira Osamoto
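
The linear model in the abstract above, PredC[x,y] = α·RecL′[x,y] + β, can be sketched as a least-squares fit of α and β from the neighboring reconstructed samples, applied to the down-sampled reconstructed luma block. The fixed-point arithmetic and the exact down-sampling filter used by the codec are omitted; this is an illustrative floating-point version only.

```python
import numpy as np

def lm_chroma_prediction(neigh_luma, neigh_chroma, rec_luma_ds):
    """neigh_luma, neigh_chroma: 1-D arrays of reconstructed neighboring samples;
    rec_luma_ds: the down-sampled reconstructed luma block (2-D array)."""
    l = np.asarray(neigh_luma, dtype=float)
    c = np.asarray(neigh_chroma, dtype=float)
    # Least-squares fit of c ≈ alpha * l + beta over the neighboring samples.
    alpha = ((l * c).mean() - l.mean() * c.mean()) / max(l.var(), 1e-9)
    beta = c.mean() - alpha * l.mean()
    # Predicted chroma block: PredC = alpha * RecL' + beta.
    return alpha * np.asarray(rec_luma_ds, dtype=float) + beta

# Toy usage with made-up sample values.
pred = lm_chroma_prediction([100, 110, 120, 130], [50, 55, 60, 65],
                            np.full((4, 4), 115.0))
```
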
  • Patent number: 11849148
    Abstract: A method and a video processor for preventing start code confusion. The method includes aligning bytes of a slice header relating to slice data when the slice header is not byte aligned or inserting differential data at the end of the slice header before the slice data when the slice header is byte aligned, performing emulation prevention byte insertion on the slice header, and combining the slice header and the slice data after performing emulation prevention byte insertion.
    Type: Grant
    Filed: June 17, 2021
    Date of Patent: December 19, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Vivienne Sze, Madhukar Budagavi, Akira Osamoto, Yasutomo Matsuba
  • Patent number: 11825078
    Abstract: A method for luma-based chroma intra-prediction in a video encoder or a video decoder is provided that includes down sampling a first reconstructed luma block of a largest coding unit (LCU), computing parameters α and β of a linear model using immediate top neighboring reconstructed luma samples and left neighboring reconstructed luma samples of the first reconstructed luma block and reconstructed neighboring chroma samples of a chroma block corresponding to the first reconstructed luma block, wherein the linear model is PredC[x,y]=α·RecL′[x,y]+β, wherein x and y are sample coordinates, PredC is predicted chroma samples, and RecL′ is samples of the down sampled first reconstructed luma block, and wherein the immediate top neighboring reconstructed luma samples are the only top neighboring reconstructed luma samples used, and computing samples of a first predicted chroma block from corresponding samples of the down sampled first reconstructed luma block using the linear model and the parameters.
    Type: Grant
    Filed: July 27, 2022
    Date of Patent: November 21, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Madhukar Budagavi, Akira Osamoto
  • Patent number: 11812041
    Abstract: A method for motion estimation is provided that includes determining a first motion vector for a first child coding unit (CU) of a parent CU and a second motion vector for a second child CU of the parent CU, wherein the first child CU, the second child CU, and the parent CU are in a CU hierarchy, wherein the first and second child CUs are smallest size CUs in the CU hierarchy, and wherein a first motion search type is used to determine the first motion vector and the second motion vector, selecting the first and second motion vectors as candidate predictors for the parent CU, selecting a predictor for a prediction unit (PU) of the first parent CU from the candidate predictors, and refining the predictor using a second motion search type to determine a motion vector for the PU.
    Type: Grant
    Filed: December 23, 2021
    Date of Patent: November 7, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Hyung Joon Kim, Minhua Zhou, Akira Osamoto, Hideo Tamama
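
A hedged sketch of the predictor-selection flow in the abstract above. The two motion search types and the matching cost are passed in as callables because the abstract does not fix them; the lambdas in the usage example are stand-ins, not the patented searches.

```python
def estimate_pu_motion(parent_pu, child_cus, coarse_search, refine_search, cost):
    # First search type: motion vectors for the smallest child CUs.
    child_mvs = [coarse_search(cu) for cu in child_cus]
    # The child motion vectors become candidate predictors for the parent CU;
    # pick the one with the lowest matching cost for this PU.
    predictor = min(child_mvs, key=lambda mv: cost(parent_pu, mv))
    # Second search type: refine the chosen predictor into the PU's motion vector.
    return refine_search(parent_pu, predictor)

# Toy usage with stand-in searches and a dummy cost.
mv = estimate_pu_motion(
    parent_pu="pu0",
    child_cus=["cu0", "cu1"],
    coarse_search=lambda cu: (1, 0) if cu == "cu0" else (0, 2),
    refine_search=lambda pu, mv: mv,
    cost=lambda pu, mv: abs(mv[0]) + abs(mv[1]),
)
```
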
  • Publication number: 20230291897
    Abstract: Several systems and methods for intra-prediction estimation of video pictures are disclosed. In an embodiment, the method includes accessing four ‘N×N’ pixel blocks comprising luma-related pixels. The four ‘N×N’ pixel blocks collectively configure a ‘2N×2N’ pixel block. A first pre-determined number of candidate luma intra-prediction modes is accessed for each of the four ‘N×N’ pixel blocks. A presence of one or more luma intra-prediction modes that are common among the candidate luma intra-prediction modes of at least two of the four ‘N×N’ pixel blocks is identified. The method further includes performing, based on the identification, one of (1) selecting a principal luma intra-prediction mode for the ‘2N×2N’ pixel block and (2) limiting a partitioning size to a ‘N×N’ pixel block size for a portion of the video picture corresponding to the ‘2N×2N’ pixel block.
    Type: Application
    Filed: May 19, 2023
    Publication date: September 14, 2023
    Inventors: Ranga Ramanujam Srinivasan, Hyung Joon Kim, Akira Osamoto
  • Publication number: 20230267702
    Abstract: An electronic device is provided. The electronic device includes a display, a camera module disposed under the display, and a processor electrically connected to the display and the camera module. The processor is configured to acquire a sample frame by using the camera module, identify whether a light source object is included in the sample frame, determine an imaging parameter for acquisition of first multiple frames when the light source object is identified to be included in the sample frame, acquire multiple frames, based on the imaging parameter, composite the multiple frames to generate a composite frame, identify an attribute of the light source object included in the composite frame, and perform frame correction of the composite frame, based on the identified attribute.
    Type: Application
    Filed: April 21, 2023
    Publication date: August 24, 2023
    Inventors: Woojhon CHOI, Wonjoon DO, Jaesung CHOI, Alok Shankarlal SHUKLA, Manoj Kumar MARRAMREDDY, Saketh SHARMA, Hamid Rahim SHEIKH, John Seokjun LEE, Akira OSAMOTO, Yibo XU
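
A hedged sketch of the capture flow in the abstract above. Every processing step (light-source detection, parameter selection, compositing, attribute classification, correction) is a callable supplied by the caller; only the ordering of the steps follows the abstract, and all names here are illustrative.

```python
def capture_with_light_source_handling(acquire, detect_light_source,
                                       choose_params, composite,
                                       classify_attribute, correct,
                                       n_frames=5):
    sample = acquire(None)                        # acquire a sample frame
    has_light_source = detect_light_source(sample)
    # Imaging parameters are chosen only when a light source is detected.
    params = choose_params(sample) if has_light_source else None
    frames = [acquire(params) for _ in range(n_frames)]
    composite_frame = composite(frames)
    if has_light_source:
        attribute = classify_attribute(composite_frame)
        composite_frame = correct(composite_frame, attribute)
    return composite_frame
```
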
  • Publication number: 20230199201
    Abstract: A method for coding unit partitioning in a video encoder is provided that includes performing intra-prediction on each permitted coding unit (CU) in a CU hierarchy of a largest coding unit (LCU) to determine an intra-prediction coding cost for each permitted CU, storing the intra-prediction coding cost for each intra-predicted CU in memory, and performing inter-prediction, prediction mode selection, and CU partition selection on each permitted CU in the CU hierarchy to determine a CU partitioning for encoding the LCU, wherein the stored intra-prediction coding costs for the CUs are used.
    Type: Application
    Filed: February 19, 2023
    Publication date: June 22, 2023
    Inventors: Hyung Joon Kim, Minhua Zhou, Akira Osamoto, Hideo Tamama
  • Patent number: 11659171
    Abstract: Several systems and methods for intra-prediction estimation of video pictures are disclosed. In an embodiment, the method includes accessing four ‘N×N’ pixel blocks comprising luma-related pixels. The four ‘N×N’ pixel blocks collectively configure a ‘2N×2N’ pixel block. A first pre-determined number of candidate luma intra-prediction modes is accessed for each of the four ‘N×N’ pixel blocks. A presence of one or more luma intra-prediction modes that are common among the candidate luma intra-prediction modes of at least two of the four ‘N×N’ pixel blocks is identified. The method further includes performing, based on the identification, one of (1) selecting a principal luma intra-prediction mode for the ‘2N×2N’ pixel block and (2) limiting a partitioning size to a ‘N×N’ pixel block size for a portion of the video picture corresponding to the ‘2N×2N’ pixel block.
    Type: Grant
    Filed: September 15, 2021
    Date of Patent: May 23, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Ranga Ramanujam Srinivasan, Hyung Joon Kim, Akira Osamoto
  • Patent number: 11589060
    Abstract: A method for coding unit partitioning in a video encoder is provided that includes performing intra-prediction on each permitted coding unit (CU) in a CU hierarchy of a largest coding unit (LCU) to determine an intra-prediction coding cost for each permitted CU, storing the intra-prediction coding cost for each intra-predicted CU in memory, and performing inter-prediction, prediction mode selection, and CU partition selection on each permitted CU in the CU hierarchy to determine a CU partitioning for encoding the LCU, wherein the stored intra-prediction coding costs for the CUs are used.
    Type: Grant
    Filed: March 5, 2021
    Date of Patent: February 21, 2023
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Hyung Joon Kim, Minhua Zhou, Akira Osamoto, Hideo Tamama
  • Publication number: 20220368894
    Abstract: A method for luma-based chroma intra-prediction in a video encoder or a video decoder is provided that includes down sampling a first reconstructed luma block of a largest coding unit (LCU), computing parameters α and β of a linear model using immediate top neighboring reconstructed luma samples and left neighboring reconstructed luma samples of the first reconstructed luma block and reconstructed neighboring chroma samples of a chroma block corresponding to the first reconstructed luma block, wherein the linear model is PredC[x,y]=α·RecL′[x,y]+β, wherein x and y are sample coordinates, PredC is predicted chroma samples, and RecL′ is samples of the down sampled first reconstructed luma block, and wherein the immediate top neighboring reconstructed luma samples are the only top neighboring reconstructed luma samples used, and computing samples of a first predicted chroma block from corresponding samples of the down sampled first reconstructed luma block using the linear model and the parameters.
    Type: Application
    Filed: July 27, 2022
    Publication date: November 17, 2022
    Inventors: Madhukar Budagavi, Akira Osamoto
  • Patent number: 11431963
    Abstract: A method for luma-based chroma intra-prediction in a video encoder or a video decoder is provided that includes down sampling a first reconstructed luma block of a largest coding unit (LCU), computing parameters α and β of a linear model using immediate top neighboring reconstructed luma samples and left neighboring reconstructed luma samples of the first reconstructed luma block and reconstructed neighboring chroma samples of a chroma block corresponding to the first reconstructed luma block, wherein the linear model is PredC[x,y]=α·RecL′[x,y]+β, wherein x and y are sample coordinates, PredC is predicted chroma samples, and RecL′ is samples of the down sampled first reconstructed luma block, and wherein the immediate top neighboring reconstructed luma samples are the only top neighboring reconstructed luma samples used, and computing samples of a first predicted chroma block from corresponding samples of the down sampled first reconstructed luma block using the linear model and the parameters.
    Type: Grant
    Filed: January 12, 2021
    Date of Patent: August 30, 2022
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Madhukar Budagavi, Akira Osamoto
  • Patent number: 11381813
    Abstract: Several systems and methods for intra-prediction estimation of video pictures are disclosed. In an embodiment, the method includes accessing four ‘N×N’ pixel blocks comprising luma-related pixels. The four ‘N×N’ pixel blocks collectively configure a ‘2N×2N’ pixel block. A first pre-determined number of candidate luma intra-prediction modes is accessed for each of the four ‘N×N’ pixel blocks. A presence of one or more luma intra-prediction modes that are common among the candidate luma intra-prediction modes of at least two of the four ‘N×N’ pixel blocks is identified. The method further includes performing, based on the identification, one of (1) selecting a principal luma intra-prediction mode for the ‘2N×2N’ pixel block and (2) limiting a partitioning size to a ‘N×N’ pixel block size for a portion of the video picture corresponding to the ‘2N×2N’ pixel block.
    Type: Grant
    Filed: March 10, 2020
    Date of Patent: July 5, 2022
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Ranga Ramanujam Srinivasan, Hyung Joon Kim, Akira Osamoto
  • Publication number: 20220116604
    Abstract: A method for motion estimation is provided that includes determining a first motion vector for a first child coding unit (CU) of a parent CU and a second motion vector for a second child CU of the parent CU, wherein the first child CU, the second child CU, and the parent CU are in a CU hierarchy, wherein the first and second child CUs are smallest size CUs in the CU hierarchy, and wherein a first motion search type is used to determine the first motion vector and the second motion vector, selecting the first and second motion vectors as candidate predictors for the parent CU, selecting a predictor for a prediction unit (PU) of the first parent CU from the candidate predictors, and refining the predictor using a second motion search type to determine a motion vector for the PU.
    Type: Application
    Filed: December 23, 2021
    Publication date: April 14, 2022
    Inventors: Hyung Joon Kim, Minhua Zhou, Akira Osamoto, Hideo Tamama
  • Patent number: 11245912
    Abstract: A method for motion estimation is provided that includes determining a first motion vector for a first child coding unit (CU) of a parent CU and a second motion vector for a second child CU of the parent CU, wherein the first child CU, the second child CU, and the parent CU are in a CU hierarchy, wherein the first and second child CUs are smallest size CUs in the CU hierarchy, and wherein a first motion search type is used to determine the first motion vector and the second motion vector, selecting the first and second motion vectors as candidate predictors for the parent CU, selecting a predictor for a prediction unit (PU) of the first parent CU from the candidate predictors, and refining the predictor using a second motion search type to determine a motion vector for the PU.
    Type: Grant
    Filed: July 12, 2012
    Date of Patent: February 8, 2022
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Hyung Joon Kim, Minhua Zhou, Akira Osamoto, Hideo Tamama
  • Publication number: 20220007012
    Abstract: Several systems and methods for intra-prediction estimation of video pictures are disclosed. In an embodiment, the method includes accessing four ‘N×N’ pixel blocks comprising luma-related pixels. The four ‘N×N’ pixel blocks collectively configure a ‘2N×2N’ pixel block. A first pre-determined number of candidate luma intra-prediction modes is accessed for each of the four ‘N×N’ pixel blocks. A presence of one or more luma intra-prediction modes that are common among the candidate luma intra-prediction modes of at least two of the four ‘N×N’ pixel blocks is identified. The method further includes performing, based on the identification, one of (1) selecting a principal luma intra-prediction mode for the ‘2N×2N’ pixel block and (2) limiting a partitioning size to a ‘N×N’ pixel block size for a portion of the video picture corresponding to the ‘2N×2N’ pixel block.
    Type: Application
    Filed: September 15, 2021
    Publication date: January 6, 2022
    Inventors: Ranga Ramanujam Srinivasan, Hyung Joon Kim, Akira Osamoto