Patents by Inventor Pramod Kumar Swami

Pramod Kumar Swami has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240078284
    Abstract: A hardware accelerator is configured to perform matrix multiplication and/or additional operations to optimize keypoint matching. A sum of squared error (SSE) calculation may be determined by utilizing the hardware accelerator to perform matrix multiplication to obtain a cost matrix for two sets of keypoint descriptors from two images. The hardware accelerator may determine a best cost calculation for each keypoint in each direction, which is utilized to perform keypoint matching.
    Type: Application
    Filed: November 1, 2023
    Publication date: March 7, 2024
    Inventors: Deepak Kumar PODDAR, Soyeb NAGORI, Hrushikesh Tukaram GARUD, Pramod Kumar SWAMI
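    Sketch (illustrative, not from the patent): a minimal NumPy version of the cost-matrix idea in the abstract above. It uses the identity ||a - b||^2 = ||a||^2 + ||b||^2 - 2*a·b so that the cross term is a single matrix multiplication, then reduces the cost matrix to a best cost per keypoint in each direction; the descriptor sizes and the mutual-consistency check are assumptions.
      import numpy as np

      def sse_cost_matrix(desc_a, desc_b):
          # The cross term of the squared error is one matrix multiply,
          # which is exactly what a matmul accelerator can offload.
          cross = desc_a @ desc_b.T                        # (N, M) dot products
          sq_a = np.sum(desc_a * desc_a, axis=1)[:, None]  # (N, 1)
          sq_b = np.sum(desc_b * desc_b, axis=1)[None, :]  # (1, M)
          return sq_a + sq_b - 2.0 * cross

      def match_keypoints(desc_a, desc_b):
          cost = sse_cost_matrix(desc_a, desc_b)
          best_ab = np.argmin(cost, axis=1)  # best match in B for each keypoint in A
          best_ba = np.argmin(cost, axis=0)  # best match in A for each keypoint in B
          # Keep only mutually consistent (forward-backward) matches.
          return [(i, j) for i, j in enumerate(best_ab) if best_ba[j] == i]

      rng = np.random.default_rng(0)
      pairs = match_keypoints(rng.random((100, 32)), rng.random((120, 32)))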
  • Patent number: 11915431
    Abstract: A method for sparse optical flow based tracking in a computer vision system is provided that includes detecting feature points in a frame captured by a monocular camera in the computer vision system to generate a plurality of detected feature points, generating a binary image indicating locations of the detected feature points with a bit value of one, wherein all other locations in the binary image have a bit value of zero, generating another binary image indicating neighborhoods of currently tracked points, wherein locations of the neighborhoods in the binary image have a bit value of zero and all other locations in the binary image have a bit value of one, and performing a binary AND of the two binary images to generate another binary image, wherein locations in the binary image having a bit value of one indicate new feature points detected in the frame.
    Type: Grant
    Filed: August 6, 2019
    Date of Patent: February 27, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Deepak Kumar Poddar, Anshu Jain, Kumar Desappan, Pramod Kumar Swami
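    Sketch (illustrative, not from the patent): the binary-mask logic described in the abstract above, in NumPy. The frame size and neighborhood radius are assumptions; the binary AND leaves ones only at detections that are not already near a currently tracked point.
      import numpy as np

      H, W = 240, 320  # illustrative frame size

      def new_feature_mask(detected_pts, tracked_pts, radius=4):
          # Binary image 1: ones at detected feature point locations.
          detected = np.zeros((H, W), dtype=np.uint8)
          for x, y in detected_pts:
              detected[y, x] = 1
          # Binary image 2: zeros in neighborhoods of currently tracked points,
          # ones everywhere else.
          not_tracked = np.ones((H, W), dtype=np.uint8)
          for x, y in tracked_pts:
              not_tracked[max(0, y - radius):y + radius + 1,
                          max(0, x - radius):x + radius + 1] = 0
          # Binary AND: ones mark newly detected, not-yet-tracked feature points.
          return detected & not_tracked

      mask = new_feature_mask([(10, 10), (50, 60)], [(11, 11)])
      new_points = list(zip(*np.nonzero(mask)[::-1]))  # -> [(50, 60)]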
  • Patent number: 11876989
    Abstract: Several methods and systems for facilitating multimedia data encoding are disclosed. In an embodiment, a plurality of picture buffers associated with multimedia data are received in an order of capture associated with the plurality of picture buffers. Buffer information is configured for each picture buffer from among the plurality of picture buffers, comprising at least one of metadata associated with the corresponding picture buffer and one or more encoding parameters for the corresponding picture buffer. A provision of picture buffers in an order of encoding is facilitated based on the configured buffer information.
    Type: Grant
    Filed: April 28, 2021
    Date of Patent: January 16, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Uday Pudipeddi Kiran, Deepak Kumar Poddar, Pramod Kumar Swami, Arun Shankar Kudana
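    Sketch (illustrative, not from the patent): one way to read the flow in the abstract above. Buffers arrive in capture order, buffer info (metadata plus per-buffer encoding parameters) is attached to each, and the encoder is fed in encoding order derived from that info; the field names and the IBBP ordering are assumptions.
      captured = [
          {"capture_idx": 0, "frame_type": "I"},
          {"capture_idx": 1, "frame_type": "B"},
          {"capture_idx": 2, "frame_type": "B"},
          {"capture_idx": 3, "frame_type": "P"},
      ]
      # B-frames can only be encoded after the later frame they reference, so the
      # P-frame captured last is encoded before the two B-frames.
      encode_order = {0: 0, 3: 1, 1: 2, 2: 3}
      for buf in captured:
          buf["info"] = {"encode_idx": encode_order[buf["capture_idx"]], "qp": 28}

      for buf in sorted(captured, key=lambda b: b["info"]["encode_idx"]):
          print("encode capture frame", buf["capture_idx"], buf["frame_type"])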
  • Patent number: 11856220
    Abstract: Several techniques aimed at reducing computational complexity when encoding uses bi-predictively encoded frames (B-frames) are implemented in a video encoder. In an embodiment, B-frames are not used as reference frames for encoding P-frames and other B-frames. Non-use of B-frames allows a de-blocking filter used in the video encoder to be switched off when reconstructing encoded B-frames, and use of a lower complexity filter for fractional-resolution motion search for B-frames. In another embodiment, cost functions used in motion estimation for B-frames are simplified to reduce computational complexity. In one more embodiment, fractional pixel refinement in motion search for B-frames is simplified. In yet another embodiment, predictors used in motion estimation for a macro-block in a P-frame are selected from a B-frame that uses a same reference frame as the P-frame.
    Type: Grant
    Filed: June 21, 2022
    Date of Patent: December 26, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Soyeb Nagori, Arun Shankar Kudana, Pramod Kumar Swami
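    Sketch (illustrative, not from the patent): the first embodiment in the abstract above, which excludes B-frames from the reference list so their reconstructions never need de-blocking or high-quality fractional interpolation; the frame records are placeholders.
      def reference_frames(frames, exclude_b=True):
          # Build the reference list for P-/B-frame encoding without B-frames.
          return [f for f in frames if not (exclude_b and f["type"] == "B")]

      gop = [{"id": 0, "type": "I"}, {"id": 1, "type": "B"},
             {"id": 2, "type": "B"}, {"id": 3, "type": "P"}]
      print([f["id"] for f in reference_frames(gop)])  # -> [0, 3]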
  • Patent number: 11847184
    Abstract: A matching accelerator in the form of a hardware accelerator configured to perform matrix multiplication and/or additional operations is used to optimize keypoint matching. An SSE calculation may be determined by utilizing the matching accelerator to perform matrix multiplication to obtain a cost matrix for two sets of keypoint descriptors from two images. The hardware accelerator may determine a best cost calculation for each keypoint in each direction, which is utilized to perform keypoint matching.
    Type: Grant
    Filed: January 14, 2021
    Date of Patent: December 19, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Deepak Kumar Poddar, Soyeb Nagori, Hrushikesh Tukaram Garud, Pramod Kumar Swami
  • Patent number: 11748599
    Abstract: Techniques including receiving a first set of values for processing by a machine learning (ML) network, storing a first portion of the first set of values in an on-chip memory, processing the first portion of the first set of values in a first layer of the ML network to generate a second portion of a second set of values, overwriting the stored first portion with the generated second portion, processing the second portion in a second layer of the ML network to generate a third portion of a third set of values, storing the third portion, repeating the steps of storing the first portion, processing the first portion, overwriting the stored first portion, processing the second portion, and storing the third portion for a fourth portion of the first set of values until all portions of the first set of values are processed to generate the third set of values.
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: September 5, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Kumar Desappan, Mihir Narendra Mody, Pramod Kumar Swami, Anshu Jain, Rishabh Garg
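    Sketch (illustrative, not from the patent): the buffer-reuse pattern in the abstract above, with a small on-chip scratch buffer that holds one portion of a layer's input and is overwritten by that layer's output before the next layer consumes it; the layers, tile size, and buffer size are placeholders.
      import numpy as np

      def layer1(x):                 # stand-ins for real ML network layers
          return x * 2.0

      def layer2(x):
          return x + 1.0

      def run_tiled(inputs, tile=8):
          on_chip = np.empty(tile, dtype=np.float32)  # models on-chip memory
          outputs = []                                # models external memory
          for start in range(0, len(inputs), tile):
              on_chip[:] = inputs[start:start + tile]  # store a portion of the input on chip
              on_chip[:] = layer1(on_chip)             # overwrite it with the layer-1 output
              outputs.append(layer2(on_chip).copy())   # layer 2 consumes it; result goes off chip
          return np.concatenate(outputs)

      x = np.arange(32, dtype=np.float32)
      assert np.allclose(run_tiled(x), layer2(layer1(x)))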
  • Patent number: 11688078
    Abstract: A method for video object detection includes detecting an object in a first video frame, and selecting a first interest point and a second interest point of the object. The first interest point is in a first region of interest located at a first corner of a box surrounding the object. The second interest point is in a second region of interest located at a second corner of the box. The second corner is diagonally opposite the first corner. A first optical flow of the first interest point and a second optical flow of the second interest point are determined. A location of the object in a second video frame is estimated by determining, in the second video frame, a location of the first interest point based on the first optical flow and a location of the second interest point based on the second optical flow.
    Type: Grant
    Filed: November 10, 2020
    Date of Patent: June 27, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Soyeb Noormohammed Nagori, Manu Mathew, Kumar Desappan, Pramod Kumar Swami
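    Sketch (illustrative, not from the patent): the corner-tracking idea in the abstract above, reduced to a single representative interest point per diagonal corner; each corner of the box is moved by its own optical-flow vector to predict the box in the next frame.
      def estimate_box(box, flow_tl, flow_br):
          # box = (x0, y0, x1, y1); flows are (dx, dy) for the top-left and
          # bottom-right interest points.
          (x0, y0, x1, y1), (dx0, dy0), (dx1, dy1) = box, flow_tl, flow_br
          return (x0 + dx0, y0 + dy0, x1 + dx1, y1 + dy1)

      # The object drifts right and slightly down between frames.
      print(estimate_box((100, 80, 180, 160), flow_tl=(5, 2), flow_br=(6, 3)))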
  • Publication number: 20230064481
    Abstract: An electronic device, comprising one or more processors, wherein the one or more processors are configured to execute instructions causing the one or more processors to: receive a machine learning (ML) model and execution information associated with the ML model, wherein the execution information includes first execution data indicating how to execute the ML model optimized based on a first performance criterion, and second execution data indicating how to execute the ML model optimized based on a second performance criterion, the second performance criterion different from the first performance criterion; execute the ML model based on the first execution data; determine to execute the ML model based on the second execution data; and execute the ML model based on the second execution data.
    Type: Application
    Filed: August 31, 2021
    Publication date: March 2, 2023
    Inventors: Tarkesh PANDE, Rishabh GARG, Pramod Kumar SWAMI, Kumar DESAPPAN, Aishwarya DUBEY
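    Sketch (illustrative, not from the patent): a toy dispatcher for the abstract above, where a model ships with two execution plans optimized for different criteria and the device switches between them at run time; the criteria names and plan contents are assumptions.
      execution_info = {
          "latency": {"cores": 2, "batch": 1},   # first execution data
          "power":   {"cores": 1, "batch": 4},   # second execution data
      }

      def run_model(inputs, plan):
          # Placeholder for dispatching the ML model with the chosen plan.
          return f"ran {len(inputs)} inputs on {plan['cores']} core(s), batch {plan['batch']}"

      active = "latency"
      print(run_model(list(range(8)), execution_info[active]))
      active = "power"                            # e.g. on a thermal or battery event
      print(run_model(list(range(8)), execution_info[active]))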
  • Patent number: 11580719
    Abstract: A method for dynamically quantizing feature maps of a received image. The method includes convolving an image based on a predicted maximum value, a predicted minimum value, trained kernel weights and the image data. The input data is quantized based on the predicted minimum value and predicted maximum value. The output of the convolution is computed into an accumulator and re-quantized. The re-quantized value is output to an external memory. The predicted min value and the predicted max value are computed based on the previous max values and min values with a weighted average or a pre-determined formula. Initial min value and max value are computed based on known quantization methods and utilized for initializing the predicted min value and predicted max value in the quantization process.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: February 14, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Kumar Desappan, Manu Mathew, Pramod Kumar Swami, Praveen Eppa
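    Sketch (illustrative, not from the patent): affine quantization against a predicted range, with the next prediction formed as a weighted average of the previous prediction and the range actually observed; the 8-bit format, the weighting factor, and the update formula are assumptions.
      import numpy as np

      def quantize(x, lo, hi, bits=8):
          levels = (1 << bits) - 1
          scale = (hi - lo) / levels if hi > lo else 1.0
          q = np.clip(np.round((x - lo) / scale), 0, levels).astype(np.uint8)
          return q, scale

      def predict_range(prev_lo, prev_hi, seen_lo, seen_hi, alpha=0.9):
          # One possible "weighted average" update for the predicted min/max.
          return (alpha * prev_lo + (1 - alpha) * seen_lo,
                  alpha * prev_hi + (1 - alpha) * seen_hi)

      fmap = np.random.default_rng(1).normal(size=(16, 16)).astype(np.float32)
      lo, hi = float(fmap.min()), float(fmap.max())   # initial range from the data
      q, scale = quantize(fmap, lo, hi)
      lo, hi = predict_range(lo, hi, float(fmap.min()), float(fmap.max()))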
  • Publication number: 20230013998
    Abstract: Techniques for executing machine learning (ML) models including receiving an indication to run an ML model on a processing core; receiving a static memory allocation for running the ML model on the processing core; determining that a layer of the ML model uses more memory than the static memory allocated; transmitting, to a shared memory, a memory request for blocks of the shared memory; receiving an allocation of a range of memory addresses for the requested blocks; running the layer of the ML model using the static memory and the range of memory addresses; and outputting results of running the layer of the ML model.
    Type: Application
    Filed: July 19, 2021
    Publication date: January 19, 2023
    Inventors: Mihir Narendra MODY, Kedar Satish CHITNIS, Kumar DESAPPAN, David SMITH, Pramod Kumar SWAMI, Shyam JAGANNATHAN
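    Sketch (illustrative, not from the patent): a toy model of the allocation flow in the abstract above, where a core runs each layer out of its static buffer and borrows whole blocks from a shared pool only when a layer needs more; the sizes, block granularity, and pool interface are assumptions.
      class SharedPool:
          def __init__(self, blocks):
              self.free = blocks

          def request(self, blocks):
              if blocks > self.free:
                  raise MemoryError("shared pool exhausted")
              self.free -= blocks

          def release(self, blocks):
              self.free += blocks

      def run_layer(layer_bytes, static_bytes, pool, block_bytes=4096):
          extra = 0
          if layer_bytes > static_bytes:
              # Round the shortfall up to whole blocks and borrow them.
              extra = -(-(layer_bytes - static_bytes) // block_bytes)
              pool.request(extra)
          try:
              pass  # ... run the layer using static memory plus the borrowed blocks ...
          finally:
              pool.release(extra)

      pool = SharedPool(blocks=64)
      for layer_bytes in (8_000, 40_000, 12_000):   # per-layer working sets (made up)
          run_layer(layer_bytes, static_bytes=16_000, pool=pool)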
  • Publication number: 20220391776
    Abstract: Techniques for executing machine learning (ML) models including receiving an indication to run an ML model, receiving synchronization information for organizing the running of the ML model with other ML models, determining, based on the synchronization information, to delay running the ML model, delaying the running of the ML model, determining, based on the synchronization information, a time to run the ML model, and running the ML model at the time.
    Type: Application
    Filed: June 8, 2021
    Publication date: December 8, 2022
    Inventors: Mihir Narendra MODY, Kumar DESAPPAN, Kedar Satish CHITNIS, Pramod Kumar SWAMI, Kevin Patrick LAVERY, Prithvi Shankar YEYYADI ANANTHA, Shyam JAGANNATHAN
  • Publication number: 20220377322
    Abstract: This invention predicts that intra mode prediction is more effective for the macroblocks where motion estimation in inter mode prediction fails. This failure is indicated by a large value of the inter mode SAD. This invention performs intra mode prediction only for macroblocks that have larger inter mode SADs. The definition of a large inter mode SAD differs for different content. This invention compares the inter mode SAD of a current macroblock with an adaptive threshold. This adaptive threshold depends on the average and variance of the SADs of the previous predicted frame. An adaptive threshold is calculated for each new predictive frame.
    Type: Application
    Filed: July 27, 2022
    Publication date: November 24, 2022
    Inventors: Soyeb Nagori, Manu Mathew, Pramod Kumar Swami
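    Sketch (illustrative, not from the patent): one way to realize a threshold that "depends on the average and variance of the SADs of the previous predicted frame"; the mean-plus-scaled-deviation formula and the scaling factor are assumptions.
      import numpy as np

      def adaptive_threshold(prev_frame_sads, k=1.0):
          sads = np.asarray(prev_frame_sads, dtype=np.float64)
          return sads.mean() + k * sads.std()

      def needs_intra_search(inter_sad, threshold):
          # Intra mode prediction is evaluated only when inter prediction did poorly.
          return inter_sad > threshold

      thr = adaptive_threshold([900, 1100, 950, 4000, 1020])
      print(needs_intra_search(inter_sad=4200, threshold=thr))  # True: run intra search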
  • Publication number: 20220327355
    Abstract: A method for generating a sparsified convolutional neural network (CNN) is provided that includes training the CNN to generate coefficient values of filters of convolution layers, and performing sparsified fine tuning on the convolution layers to generate the sparsified CNN, wherein the sparsified fine tuning causes selected nonzero coefficient values of the filters to be set to zero.
    Type: Application
    Filed: June 29, 2022
    Publication date: October 13, 2022
    Inventors: Manu Mathew, Kumar Desappan, Pramod Kumar Swami
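    Sketch (illustrative, not from the patent): magnitude-based selection stands in here for "selected nonzero coefficient values ... set to zero"; the selection criterion, sparsity fraction, and layer shape are assumptions, and the returned mask lets later fine-tuning steps keep the zeroed coefficients at zero.
      import numpy as np

      def sparsify(weights, fraction=0.5):
          # Zero out the smallest-magnitude fraction of coefficients.
          flat = np.abs(weights).ravel()
          k = int(fraction * flat.size)
          cutoff = np.partition(flat, k)[k]
          mask = np.abs(weights) >= cutoff
          return weights * mask, mask

      rng = np.random.default_rng(2)
      w = rng.normal(size=(16, 3, 3, 3))            # one conv layer's filter bank
      w, mask = sparsify(w, fraction=0.5)
      # During sparsified fine tuning, re-apply the mask after every weight update:
      w = (w - 0.01 * rng.normal(size=w.shape)) * mask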
  • Publication number: 20220327055
    Abstract: An apparatus includes a first CPU core and a second CPU core, an L1 cache subsystem coupled to the first CPU core and comprising an L1 controller, and an L2 cache subsystem coupled to the L1 cache subsystem and to the second CPU core. The L2 cache subsystem includes an L2 memory and an L2 controller configured to operate in an aliased mode in response to a value in a memory map control register being asserted. In the aliased mode, the L2 controller receives a first request from the first CPU core directed to a virtual address in the L2 memory, receives a second request from the second CPU core directed to the virtual address in the L2 memory, directs the first request to a physical address A in the L2 memory, and directs the second request to a physical address B in the L2 memory.
    Type: Application
    Filed: June 22, 2022
    Publication date: October 13, 2022
    Inventors: Abhijeet Ashok CHACHAD, Timothy David ANDERSON, Pramod Kumar SWAMI, Naveen BHORIA, David Matthew THOMPSON, Neelima MURALIDHARAN
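    Sketch (illustrative, not from the patent): a toy model of the aliased mode in the abstract above, where the same virtual address from two cores is steered to two different physical L2 addresses; the per-core offsets are invented.
      ALIAS_OFFSET = {0: 0x0000, 1: 0x8000}   # hypothetical per-core physical offsets

      def l2_physical_address(virtual_addr, core_id, aliased_mode):
          if aliased_mode:
              return virtual_addr + ALIAS_OFFSET[core_id]
          return virtual_addr

      # Both cores use virtual address 0x1000 but land on different physical lines.
      print(hex(l2_physical_address(0x1000, core_id=0, aliased_mode=True)))  # 0x1000
      print(hex(l2_physical_address(0x1000, core_id=1, aliased_mode=True)))  # 0x9000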
  • Publication number: 20220327810
    Abstract: A method for multi-label image classification in a convolutional neural network (CNN) is provided that includes forming a composite image from a plurality of clipped images, and processing the composite image by the CNN to generate a probability vector for each clipped image of the plurality of clipped images, wherein a length of a probability vector is equal to a number of classes the CNN is designed to classify.
    Type: Application
    Filed: December 18, 2021
    Publication date: October 13, 2022
    Inventors: Soyeb Noormohammed Nagori, Manu Mathew, Debapriya Maji, Pramod Kumar Swami
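    Sketch (illustrative, not from the patent): tiling clipped images into one composite and expecting one probability vector per clip from the network; the horizontal-strip layout, clip size, and the stand-in network are assumptions.
      import numpy as np

      def make_composite(clips):
          return np.concatenate(clips, axis=1)   # simple horizontal strip of equal-size clips

      def classify_composite(composite, n_clips, cnn, n_classes):
          probs = cnn(composite)                 # one probability vector per clipped image
          assert probs.shape == (n_clips, n_classes)
          return probs

      clips = [np.zeros((64, 64, 3), dtype=np.float32) for _ in range(4)]
      composite = make_composite(clips)          # 64 x 256 x 3
      fake_cnn = lambda img: np.full((4, 10), 0.1, dtype=np.float32)
      probs = classify_composite(composite, n_clips=4, cnn=fake_cnn, n_classes=10)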
  • Publication number: 20220321905
    Abstract: Several techniques aimed at reducing computational complexity when encoding uses bi-predictively encoded frames (B-frames) are implemented in a video encoder. In an embodiment, B-frames are not used as reference frames for encoding P-frames and other B-frames. Non-use of B-frames allows a de-blocking filter used in the video encoder to be switched off when reconstructing encoded B-frames, and use of a lower complexity filter for fractional-resolution motion search for B-frames. In another embodiment, cost functions used in motion estimation for B-frames are simplified to reduce computational complexity. In one more embodiment, fractional pixel refinement in motion search for B-frames is simplified. In yet another embodiment, predictors used in motion estimation for a macro-block in a P-frame are selected from a B-frame that uses a same reference frame as the P-frame.
    Type: Application
    Filed: June 21, 2022
    Publication date: October 6, 2022
    Inventors: Soyeb Nagori, Arun Shankar Kudana, Pramod Kumar Swami
  • Patent number: 11425371
    Abstract: This invention predicts that intra mode prediction is more effective for the macroblocks where motion estimation in inter mode prediction fails. This failure is indicated by a large value of the inter mode SAD. This invention performs intra mode prediction only for macroblocks that have larger inter mode SADs. The definition of a large inter mode SAD differs for different content. This invention compares the inter mode SAD of a current macroblock with an adaptive threshold. This adaptive threshold depends on the average and variance of the SADs of the previous predicted frame. An adaptive threshold is calculated for each new predictive frame.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: August 23, 2022
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Soyeb Nagori, Manu Mathew, Pramod Kumar Swami
  • Patent number: 11392498
    Abstract: An apparatus includes a first CPU core and a second CPU core, an L1 cache subsystem coupled to the first CPU core and comprising an L1 controller, and an L2 cache subsystem coupled to the L1 cache subsystem and to the second CPU core. The L2 cache subsystem includes an L2 memory and an L2 controller configured to operate in an aliased mode in response to a value in a memory map control register being asserted. In the aliased mode, the L2 controller receives a first request from the first CPU core directed to a virtual address in the L2 memory, receives a second request from the second CPU core directed to the virtual address in the L2 memory, directs the first request to a physical address A in the L2 memory, and directs the second request to a physical address B in the L2 memory.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: July 19, 2022
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Abhijeet Ashok Chachad, Timothy David Anderson, Pramod Kumar Swami, Naveen Bhoria, David Matthew Thompson, Neelima Muralidharan
  • Patent number: 11388434
    Abstract: Several techniques aimed at reducing computational complexity when encoding uses bi-predictively encoded frames (B-frames) are implemented in a video encoder. In an embodiment, B-frames are not used as reference frames for encoding P-frames and other B-frames. Non-use of B-frames allows a de-blocking filter used in the video encoder to be switched off when reconstructing encoded B-frames, and use of a lower complexity filter for fractional-resolution motion search for B-frames. In another embodiment, cost functions used in motion estimation for B-frames are simplified to reduce computational complexity. In one more embodiment, fractional pixel refinement in motion search for B-frames is simplified. In yet another embodiment, predictors used in motion estimation for a macro-block in a P-frame are selected from a B-frame that uses a same reference frame as the P-frame.
    Type: Grant
    Filed: April 2, 2020
    Date of Patent: July 12, 2022
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Soyeb Nagori, Arun Shankar Kudana, Pramod Kumar Swami
  • Publication number: 20220012635
    Abstract: Techniques for enhancing machine learning (ML) model execution. The technique includes determining an amount of memory used to process layers of a machine learning network having multiple layers, smoothing the amount of memory used to process the layers of the machine learning network based on a number of layers, identifying change layers where the smoothed amount of memory used changes more than a memory change threshold amount, grouping the layers of the machine learning network into a first layer grouping based on the identified change layers, and outputting the first layer grouping.
    Type: Application
    Filed: May 24, 2021
    Publication date: January 13, 2022
    Inventors: Rishabh GARG, Pramod Kumar SWAMI, Kumar DESAPPAN, Anshu JAIN
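    Sketch (illustrative, not from the patent): moving-average smoothing of per-layer memory use, flagging change layers where the smoothed value jumps by more than a relative threshold, and starting a new layer group at each change layer; the window size, threshold, and memory numbers are assumptions.
      import numpy as np

      def group_layers(mem_per_layer, window=3, change_threshold=0.5):
          mem = np.asarray(mem_per_layer, dtype=np.float64)
          smoothed = np.convolve(mem, np.ones(window) / window, mode="same")
          groups, current = [], [0]
          for i in range(1, len(smoothed)):
              if abs(smoothed[i] - smoothed[i - 1]) > change_threshold * smoothed[i - 1]:
                  groups.append(current)         # a change layer starts a new group
                  current = []
              current.append(i)
          groups.append(current)
          return groups

      print(group_layers([10, 11, 10, 40, 41, 40, 8, 9]))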