Patents by Inventor Dipan Kumar Mandal

Dipan Kumar Mandal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230297383
    Abstract: This disclosure is directed to the problem of parallelizing random read access within a reasonably sized block of data for a vector SIMD processor. The invention sets up plural parallel look up tables, moves data from main memory into each parallel look up table, and then employs a look up table read instruction to simultaneously move data from each parallel look up table to a corresponding part of a vector destination register. This enables data processing by vector single instruction multiple data (SIMD) operations. This vector destination register load can be repeated if the tables store more data to be used. New data can be loaded into the original tables if appropriate. A level one memory is preferably partitioned as part data cache and part directly addressable memory. The look up table memory is stored in the directly addressable memory.
    Type: Application
    Filed: May 22, 2023
    Publication date: September 21, 2023
    Inventors: Jayasree Sankaranarayanan, Dipan Kumar Mandal
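A minimal NumPy sketch of the parallel look-up-table idea described in the abstract above, assuming 8 SIMD lanes and 256-entry tables (both numbers are illustrative, not taken from the patent): copies of a table are staged in directly addressable memory, and one gather fills every lane of a vector destination register at once.

```python
import numpy as np

# Hypothetical parameters: 8 SIMD lanes, each with its own copy of a 256-entry table.
NUM_LANES = 8
TABLE_SIZE = 256

# Step 1: "move data from main memory to each parallel look up table".
# Here every lane gets an identical copy; a real kernel could also split data per lane.
main_memory_table = np.arange(TABLE_SIZE, dtype=np.uint16) * 3   # arbitrary example data
parallel_tables = np.tile(main_memory_table, (NUM_LANES, 1))     # shape (lanes, table_size)

# Step 2: one "look up table read" -- each lane reads its own table at its own index,
# landing in the corresponding part of the vector destination register.
indices = np.array([5, 17, 0, 200, 63, 9, 128, 255])             # one random index per lane
vector_dest = parallel_tables[np.arange(NUM_LANES), indices]

print(vector_dest)  # 8 results gathered at once, ready for vector SIMD processing
```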
  • Patent number: 11669330
    Abstract: This disclosure is directed to the problem of parallelizing random read access within a reasonably sized block of data for a vector SIMD processor. The invention sets up plural parallel look up tables, moves data from main memory into each parallel look up table, and then employs a look up table read instruction to simultaneously move data from each parallel look up table to a corresponding part of a vector destination register. This enables data processing by vector single instruction multiple data (SIMD) operations. This vector destination register load can be repeated if the tables store more data to be used. New data can be loaded into the original tables if appropriate. A level one memory is preferably partitioned as part data cache and part directly addressable memory. The look up table memory is stored in the directly addressable memory.
    Type: Grant
    Filed: May 3, 2021
    Date of Patent: June 6, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Jayasree Sankaranarayanan, Dipan Kumar Mandal
  • Publication number: 20220408106
    Abstract: A video hardware engine which supports dynamic frame padding is disclosed. The video hardware engine includes an external memory. The external memory stores a reference frame. The reference frame includes a plurality of reference pixels. A motion estimation (ME) engine receives a current LCU (largest coding unit) and defines a search area around the current LCU for motion estimation. The ME engine receives a set of reference pixels corresponding to the current LCU. The set of reference pixels of the plurality of reference pixels is received from the external memory. The ME engine pads a set of duplicate pixels along an edge of the reference frame when part of the search area is outside the reference frame.
    Type: Application
    Filed: August 23, 2022
    Publication date: December 22, 2022
    Inventors: Hetul Sanghvi, Mihir Narendra Mody, Niraj Nandan, Mahesh Madhukar Mehendale, Subrangshu Das, Dipan Kumar Mandal, Nainala Vyagrheswarudu, Vijayavardhan Baireddy, Pavan Venkata Shastry
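A small software stand-in, assuming NumPy and toy frame sizes, for the dynamic padding behaviour described above: when the requested search window extends beyond the reference frame, the out-of-frame positions are filled by duplicating the frame's edge pixels. The function name and the clamping approach are illustrative, not the engine's actual implementation.

```python
import numpy as np

def read_search_area(ref_frame, top, left, height, width):
    """Return a search window around an LCU, duplicating edge pixels of the
    reference frame wherever the requested window falls outside it
    (a simple software stand-in for the ME engine's dynamic padding)."""
    rows = np.clip(np.arange(top, top + height), 0, ref_frame.shape[0] - 1)
    cols = np.clip(np.arange(left, left + width), 0, ref_frame.shape[1] - 1)
    # Clamped indices repeat the frame's border pixels for out-of-frame positions.
    return ref_frame[np.ix_(rows, cols)]

# Example: a tiny 4x4 "reference frame" and a window that extends past its top-left corner.
ref = np.arange(16, dtype=np.uint8).reshape(4, 4)
window = read_search_area(ref, top=-2, left=-1, height=4, width=4)
print(window)  # the first rows/columns are duplicates of the frame edge
```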
  • Patent number: 11445207
    Abstract: A video hardware engine which supports dynamic frame padding is disclosed. The video hardware engine includes an external memory. The external memory stores a reference frame. The reference frame includes a plurality of reference pixels. A motion estimation (ME) engine receives a current LCU (largest coding unit) and defines a search area around the current LCU for motion estimation. The ME engine receives a set of reference pixels corresponding to the current LCU. The set of reference pixels of the plurality of reference pixels is received from the external memory. The ME engine pads a set of duplicate pixels along an edge of the reference frame when part of the search area is outside the reference frame.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: September 13, 2022
    Assignee: Texas Instruments Incorporated
    Inventors: Hetul Sanghvi, Mihir Narendra Mody, Niraj Nandan, Mahesh Madhukar Mehendale, Subrangshu Das, Dipan Kumar Mandal, Nainala Vyagrheswarudu, Vijayavardhan Baireddy, Pavan Venkata Shastry
  • Publication number: 20220196798
    Abstract: According to various embodiments, a radar device is described comprising a processor configured to generate a scene comprising an object based on a plurality of received wireless signals, generate a ground truth object parameter of the object, and generate a dataset representative of the scene; and a radar detector configured to determine an object parameter of the object using a machine learning algorithm and the dataset, determine an error value of the machine learning algorithm using a cost function, the object parameter, and the ground truth object parameter, and adjust the machine learning algorithm values to reduce the error value.
    Type: Application
    Filed: July 14, 2021
    Publication date: June 23, 2022
    Inventors: Chulong CHEN, Wenling Margaret HUANG, Saiveena KESARAJU, Ivan SIMÕES GASPAR, Pradyumna S. SINGH, Biji GEORGE, Dipan Kumar MANDAL, Om Ji OMER, Sreenivas SUBRAMONEY, Yuval AMIZUR, Leor BANIN, Hao CHEN, Nir DVORECKI, Shengbo XU
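A toy Python training loop mirroring the flow described above: a synthetic dataset stands in for the scene, a one-parameter "detector" predicts an object parameter, a squared-error cost compares it with the ground truth parameter, and the parameter is adjusted to reduce the error. Every name and the gradient-descent update rule are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "scene": one object with a known ground truth parameter (say, range in metres),
# plus a dataset of noisy observations standing in for the received wireless signals.
ground_truth_range = 12.0
dataset = ground_truth_range + 0.5 * rng.standard_normal(64)

# A deliberately tiny "machine learning algorithm": the detector's output is the dataset
# mean corrected by one learned bias value.
bias = 0.0
learning_rate = 0.1

for step in range(100):
    predicted_range = dataset.mean() + bias              # object parameter from the detector
    error = (predicted_range - ground_truth_range) ** 2  # cost function vs. ground truth
    gradient = 2.0 * (predicted_range - ground_truth_range)
    bias -= learning_rate * gradient                      # adjust values to reduce the error

print(f"predicted range: {dataset.mean() + bias:.2f} m, ground truth: {ground_truth_range} m")
```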
  • Patent number: 11347828
    Abstract: A disclosed apparatus to multiply matrices includes a compute engine. The compute engine includes multipliers in a two-dimensional array that has a plurality of array locations defined by columns and rows. The apparatus also includes a plurality of adders in columns. A broadcast interconnect between a cache and the multipliers broadcasts a first set of operand data elements to multipliers in the rows of the array. A unicast interconnect unicasts a second set of operands between a data buffer and the multipliers. The multipliers multiply the operands to generate a plurality of outputs, and the adders add the outputs generated by the multipliers.
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: May 31, 2022
    Assignee: Intel Corporation
    Inventors: Biji George, Om Ji Omer, Dipan Kumar Mandal, Cormac Brick, Lance Hacking, Sreenivas Subramoney, Belliappa Kuttanna
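A NumPy sketch of the dataflow this abstract describes, with the broadcast and unicast interconnects modelled as array broadcasting and per-element placement; the loop structure and sizes are assumptions, not the hardware schedule.

```python
import numpy as np

def array_matmul(A, B):
    """Software sketch of the described dataflow: each row of A is 'broadcast'
    to a 2-D grid of multipliers whose entries hold 'unicast' elements of B;
    column adders then reduce the multiplier outputs."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N), dtype=A.dtype)
    for i in range(M):
        # Broadcast interconnect: element A[i, k] reaches every multiplier in row k of the grid.
        # Unicast interconnect: B[k, j] is delivered only to the single multiplier at (k, j).
        products = A[i, :, None] * B       # (K, N) grid of multiplier outputs
        C[i, :] = products.sum(axis=0)     # column adders accumulate the outputs
    return C

A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)
assert np.array_equal(array_matmul(A, B), A @ B)
```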
  • Patent number: 11238309
    Abstract: An example apparatus for selecting keypoints in an image includes a keypoint detector to detect keypoints in a plurality of received images. The apparatus also includes a score calculator to calculate a keypoint score for each of the detected keypoints based on a descriptor score indicating descriptor invariance. The apparatus includes a keypoint selector to select keypoints based on the calculated keypoint scores. The apparatus also includes a descriptor calculator to calculate descriptors for each of the selected keypoints and a descriptor matcher to match corresponding descriptors between images in the plurality of received images. The apparatus further includes a feature tracker to track a feature in the plurality of images based on the matched descriptors.
    Type: Grant
    Filed: December 26, 2018
    Date of Patent: February 1, 2022
    Assignee: Intel Corporation
    Inventors: Dipan Kumar Mandal, Gurpreet Kalsi, Om J Omer, Prashant Laddha, Sreenivas Subramoney
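An illustrative Python sketch of score-based keypoint selection: a detector response is combined with a descriptor score, and only the top-scoring keypoints are kept for descriptor computation and matching. The weighting formula and all names are hypothetical, not the patented scoring rule.

```python
import numpy as np

def select_keypoints(corner_strengths, descriptor_scores, num_to_keep, weight=0.5):
    """Combine a detector response with a descriptor score (higher = more invariant
    descriptor) into a keypoint score and keep the best N keypoints.
    The weighting scheme here is an assumption for illustration."""
    keypoint_scores = (1 - weight) * corner_strengths + weight * descriptor_scores
    order = np.argsort(keypoint_scores)[::-1]   # strongest first
    return order[:num_to_keep], keypoint_scores

corner_strengths = np.array([0.9, 0.2, 0.7, 0.4, 0.8])
descriptor_scores = np.array([0.1, 0.9, 0.8, 0.3, 0.6])
selected, scores = select_keypoints(corner_strengths, descriptor_scores, num_to_keep=3)
print(selected, scores[selected])  # indices of keypoints chosen for descriptor matching
```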
  • Patent number: 11189000
    Abstract: An embodiment of an image processor device includes technology to fetch a feature point data set from outside a local memory, locally store three or more fetched feature point data sets in the local memory, compute orientation information for each fetched feature point data set, compute first descriptor information based on the computed orientation information and a first locally stored feature point data set in parallel with a fetch and local store of a second feature point data set in the local memory, and compute second descriptor information based on the computed orientation information and the second locally stored feature point data set in parallel with the compute of the first descriptor information. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: June 24, 2019
    Date of Patent: November 30, 2021
    Assignee: Intel Corporation
    Inventors: Gopi Neela, Dipan Kumar Mandal, Gurpreet S. Kalsi, Prashant Laddha, Om J. Omer, Anirud Thyagharajan, Srivatsava Jandhyala
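A software analogue (using a Python thread pool in place of DMA hardware) of the overlap described above: while the descriptor for one feature point is being computed, the next feature point data set is already being fetched into "local memory". The latencies and data layouts are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fetch_feature_point(i):
    """Stand-in for fetching one feature point data set into local memory."""
    time.sleep(0.01)                  # pretend transfer latency
    return [i] * 16                   # a dummy image patch

def compute_descriptor(patch):
    """Stand-in for orientation + descriptor computation on locally stored data."""
    time.sleep(0.01)                  # pretend compute latency
    return sum(patch)

NUM_POINTS = 8
descriptors = []
with ThreadPoolExecutor(max_workers=1) as fetcher:
    pending = fetcher.submit(fetch_feature_point, 0)   # prefetch point 0
    for i in range(NUM_POINTS):
        patch = pending.result()                        # wait until point i is local
        if i + 1 < NUM_POINTS:
            # Overlap: start fetching point i+1 while point i's descriptor is computed.
            pending = fetcher.submit(fetch_feature_point, i + 1)
        descriptors.append(compute_descriptor(patch))

print(descriptors)
```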
  • Publication number: 20210358135
    Abstract: An example apparatus for tracking features in image data includes an image data receiver to receive initial image data corresponding to an image from a camera and store the image data in a circular buffer. The apparatus also includes a feature detector to detect features in the image data. The apparatus further includes a feature sorter to sort the detected features to generate sorted feature points. The apparatus includes a feature tracker to track the sorted feature points in subsequent image data corresponding to the image received at the image data receiver. The subsequent image data is to replace the initial image data in the circular buffer.
    Type: Application
    Filed: July 28, 2021
    Publication date: November 18, 2021
    Inventors: Dipan Kumar Mandal, Nagadastagiri Reddy C, Mahesh Mamidipaka, Om J Omer
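A toy Python sketch of the circular-buffer flow in the abstract above: incoming frames overwrite the oldest buffer slot, features are detected and sorted once, and the sorted points are then revisited in the newest buffered frame. The detector and the "tracking" step are deliberately trivial stand-ins.

```python
import numpy as np

BUFFER_SLOTS = 3
FRAME_SHAPE = (4, 4)
circular_buffer = np.zeros((BUFFER_SLOTS, *FRAME_SHAPE), dtype=np.uint8)
write_slot = 0

def receive_frame(frame):
    """Store incoming image data in the circular buffer, replacing the oldest slot."""
    global write_slot
    circular_buffer[write_slot] = frame
    write_slot = (write_slot + 1) % BUFFER_SLOTS

def detect_and_sort_features(frame, max_features=2):
    """Toy detector: the brightest pixels are 'features', sorted by strength."""
    order = np.argsort(frame.ravel())[::-1][:max_features]
    return [divmod(int(idx), frame.shape[1]) for idx in order]

rng = np.random.default_rng(1)
sorted_points = None
for t in range(5):                                    # a short stream of camera frames
    receive_frame(rng.integers(0, 255, FRAME_SHAPE, dtype=np.uint8))
    newest = circular_buffer[(write_slot - 1) % BUFFER_SLOTS]
    if sorted_points is None:
        sorted_points = detect_and_sort_features(newest)   # detect + sort on the first frame
    else:
        # Toy "tracking": read each sorted point back out of the newest buffered frame;
        # a real tracker would search a neighbourhood for the best match instead.
        tracked = [(point, int(newest[point])) for point in sorted_points]
        print("frame", t, "tracked intensities:", tracked)
```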
  • Publication number: 20210255869
    Abstract: This disclosure is directed to the problem of parallelizing random read access within a reasonably sized block of data for a vector SIMD processor. The invention sets up plural parallel look up tables, moves data from main memory into each parallel look up table, and then employs a look up table read instruction to simultaneously move data from each parallel look up table to a corresponding part of a vector destination register. This enables data processing by vector single instruction multiple data (SIMD) operations. This vector destination register load can be repeated if the tables store more data to be used. New data can be loaded into the original tables if appropriate. A level one memory is preferably partitioned as part data cache and part directly addressable memory. The look up table memory is stored in the directly addressable memory.
    Type: Application
    Filed: May 3, 2021
    Publication date: August 19, 2021
    Inventors: Jayasree Sankaranarayanan, Dipan Kumar Mandal
  • Patent number: 11080864
    Abstract: An example apparatus for tracking features in image data includes an image data receiver to receive initial image data corresponding to an image from a camera and store the image data in a circular buffer. The apparatus also includes a feature detector to detect features in the image data. The apparatus further includes a feature sorter to sort the detected features to generate sorted feature points. The apparatus includes a feature tracker to track the sorted feature points in subsequent image data corresponding to the image received at the image data receiver. The subsequent image data is to replace the initial image data in the circular buffer.
    Type: Grant
    Filed: January 8, 2018
    Date of Patent: August 3, 2021
    Assignee: Intel Corporation
    Inventors: Dipan Kumar Mandal, Nagadastagiri Reddy C, Mahesh Mamidipaka, Om J Omer
  • Patent number: 10996955
    Abstract: This disclosure is directed to the problem of parallelizing random read access within a reasonably sized block of data for a vector SIMD processor. The invention sets up plural parallel look up tables, moves data from main memory into each parallel look up table, and then employs a look up table read instruction to simultaneously move data from each parallel look up table to a corresponding part of a vector destination register. This enables data processing by vector single instruction multiple data (SIMD) operations. This vector destination register load can be repeated if the tables store more data to be used. New data can be loaded into the original tables if appropriate. A level one memory is preferably partitioned as part data cache and part directly addressable memory. The look up table memory is stored in the directly addressable memory.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: May 4, 2021
    Assignee: Texas Instruments Incorporated
    Inventors: Jayasree Sankaranarayanan, Dipan Kumar Mandal
  • Publication number: 20200226203
    Abstract: A disclosed apparatus to multiply matrices includes a compute engine. The compute engine includes multipliers in a two-dimensional array that has a plurality of array locations defined by columns and rows. The apparatus also includes a plurality of adders in columns. A broadcast interconnect between a cache and the multipliers broadcasts a first set of operand data elements to multipliers in the rows of the array. A unicast interconnect unicasts a second set of operands between a data buffer and the multipliers. The multipliers multiply the operands to generate a plurality of outputs, and the adders add the outputs generated by the multipliers.
    Type: Application
    Filed: March 27, 2020
    Publication date: July 16, 2020
    Inventors: Biji George, Om Ji Omer, Dipan Kumar Mandal, Cormac Brick, Lance Hacking, Sreenivas Subramoney, Belliappa Kuttanna
  • Publication number: 20200120351
    Abstract: A video hardware engine which supports dynamic frame padding is disclosed. The video hardware engine includes an external memory. The external memory stores a reference frame. The reference frame includes a plurality of reference pixels. A motion estimation (ME) engine receives a current LCU (largest coding unit) and defines a search area around the current LCU for motion estimation. The ME engine receives a set of reference pixels corresponding to the current LCU. The set of reference pixels of the plurality of reference pixels is received from the external memory. The ME engine pads a set of duplicate pixels along an edge of the reference frame when part of the search area is outside the reference frame.
    Type: Application
    Filed: December 16, 2019
    Publication date: April 16, 2020
    Inventors: Hetul Sanghvi, Mihir Narendra Mody, Niraj Nandan, Mahesh Madhukar Mehendale, Subrangshu Das, Dipan Kumar Mandal, Nainala Vyagrheswarudu, Vijayavardhan Baireddy, Pavan Venkata Shastry
  • Publication number: 20200058133
    Abstract: Disclosed techniques relate to forming a block sum of picture elements by employing a vector dot product instruction that sums packed picture elements against a mask, producing a vector of masked horizontal picture element sums. The block sum is formed from plural horizontal sums via vector single instruction multiple data (SIMD) addition.
    Type: Application
    Filed: August 27, 2019
    Publication date: February 20, 2020
    Inventors: Jayasree Sankaranarayanan, Dipan Kumar Mandal
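A NumPy sketch, under assumed block and image sizes, of the masked dot-product idea above: a 0/1 mask selects the block's columns, one dot product per fetched row yields a masked horizontal sum, and adding those sums gives the block sum.

```python
import numpy as np

# A small image and a 4x4 block whose picture elements we want summed.
image = np.arange(64, dtype=np.int32).reshape(8, 8)
row0, col0, block = 2, 3, 4

# The mask selects which packed picture elements of each fetched row belong to the block.
mask = np.zeros(image.shape[1], dtype=np.int32)
mask[col0:col0 + block] = 1

# One "vector dot product" per row produces a masked horizontal sum of that row.
horizontal_sums = image[row0:row0 + block] @ mask       # vector of per-row sums

# Vector SIMD addition of the horizontal sums yields the block sum.
block_sum = int(horizontal_sums.sum())
assert block_sum == int(image[row0:row0 + block, col0:col0 + block].sum())
print(block_sum)
```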
  • Patent number: 10547859
    Abstract: A video hardware engine which supports dynamic frame padding is disclosed. The video hardware engine includes an external memory. The external memory stores a reference frame. The reference frame includes a plurality of reference pixels. A motion estimation (ME) engine receives a current LCU (largest coding unit) and defines a search area around the current LCU for motion estimation. The ME engine receives a set of reference pixels corresponding to the current LCU. The set of reference pixels of the plurality of reference pixels is received from the external memory. The ME engine pads a set of duplicate pixels along an edge of the reference frame when part of the search area is outside the reference frame.
    Type: Grant
    Filed: July 19, 2017
    Date of Patent: January 28, 2020
    Assignee: Texas Instruments Incorporated
    Inventors: Hetul Sanghvi, Mihir Narendra Mody, Niraj Nandan, Mahesh Madhukar Mehendale, Subrangshu Das, Dipan Kumar Mandal, Nainala Vyagrheswarudu, Vijayavardhan Baireddy, Pavan Venkata Shastry
  • Patent number: 10540420
    Abstract: Systems and methods for a hardware accelerated matrix decomposition circuit are described herein. This matrix decomposition circuit splits matrix decomposition operations into parallel operation circuits and serial operation circuits, and joins the parallel and serial operation circuits using specific dependency handling logic for efficient parallel execution. This provides fast matrix decomposition with low power consumption, reduced memory footprint, and reduced memory bandwidth.
    Type: Grant
    Filed: December 29, 2017
    Date of Patent: January 21, 2020
    Assignee: Intel Corporation
    Inventors: Gurpreet Singh Kalsi, Om Ji Omer, Santhosh Kumar Rethinagiri, Anish N K, Dipan Kumar Mandal
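An illustrative split of a matrix decomposition into serial and parallelizable steps, using a right-looking Cholesky factorization as the worked example; the choice of Cholesky and the way the work is divided are assumptions, not details from the patent.

```python
import numpy as np

def cholesky_split(A):
    """Right-looking Cholesky sketch: each iteration has a small serial step
    (diagonal element plus the current column) followed by a trailing-matrix
    update whose columns are independent and could run on parallel circuits."""
    A = A.astype(float).copy()
    n = A.shape[0]
    L = np.zeros_like(A)
    for k in range(n):
        # Serial step: depends on everything already computed in this column.
        L[k, k] = np.sqrt(A[k, k])
        L[k+1:, k] = A[k+1:, k] / L[k, k]
        # Parallelizable step: rank-1 update of the trailing submatrix;
        # every remaining column can be updated independently.
        A[k+1:, k+1:] -= np.outer(L[k+1:, k], L[k+1:, k])
    return L

M = np.array([[4., 2., 2.], [2., 5., 3.], [2., 3., 6.]])
L = cholesky_split(M)
assert np.allclose(L @ L.T, M)
print(L)
```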
  • Publication number: 20190377578
    Abstract: This disclosure is directed to the problem of parallelizing random read access within a reasonably sized block of data for a vector SIMD processor. The invention sets up plural parallel look up tables, moves data from main memory into each parallel look up table, and then employs a look up table read instruction to simultaneously move data from each parallel look up table to a corresponding part of a vector destination register. This enables data processing by vector single instruction multiple data (SIMD) operations. This vector destination register load can be repeated if the tables store more data to be used. New data can be loaded into the original tables if appropriate. A level one memory is preferably partitioned as part data cache and part directly addressable memory. The look up table memory is stored in the directly addressable memory.
    Type: Application
    Filed: June 25, 2019
    Publication date: December 12, 2019
    Inventors: Jayasree Sankaranarayanan, Dipan Kumar Mandal
  • Publication number: 20190333183
    Abstract: An embodiment of an image processor device includes technology to fetch a feature point data set from outside a local memory, locally store three or more fetched feature point data sets in the local memory, compute orientation information for each fetched feature point data set, compute first descriptor information based on the computed orientation information and a first locally stored feature point data set in parallel with a fetch and local store of a second feature point data set in the local memory, and compute second descriptor information based on the computed orientation information and the second locally stored feature point data set in parallel with the compute of the first descriptor information. Other embodiments are disclosed and claimed.
    Type: Application
    Filed: June 24, 2019
    Publication date: October 31, 2019
    Applicant: Intel Corporation
    Inventors: Gopi Neela, Dipan Kumar Mandal, Gurpreet S. Kalsi, Prashant Laddha, Om J. Omer, Anirud Thyagharajan, Srivatsava Jandhyala
  • Patent number: 10397591
    Abstract: A control processor for a video encode-decode engine is provided that includes an instruction pipeline. The instruction pipeline includes an instruction fetch stage coupled to an instruction memory to fetch instructions, an instruction decoding stage coupled to the instruction fetch stage to receive the fetched instructions, and an execution stage coupled to the instruction decoding stage to receive and execute decoded instructions. The instruction decoding stage and the instruction execution stage are configured to decode and execute a set of instructions in an instruction set of the control processor that are designed specifically for accelerating video sequence encoding and encoded video bit stream decoding.
    Type: Grant
    Filed: April 11, 2015
    Date of Patent: August 27, 2019
    Assignee: Texas Instruments Incorporated
    Inventors: Dipan Kumar Mandal, Mihir Narendra Mody, Mahesh Madhukar Mehendale, Chaitanya Satish Ghone, Piyali Goswami, Naresh Kumar Yadav, Hetul Sanghvi, Niraj Nandan
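A toy fetch/decode/execute loop sketching, in Python, how a control processor pipeline might run a mix of ordinary and codec-oriented instructions; the opcodes (including "CABAC_INIT") are invented for illustration and are not the instruction set this patent actually defines.

```python
# A toy three-stage fetch/decode/execute loop for a control processor. "CABAC_INIT"
# is a made-up codec-flavoured instruction standing in for the specialised
# encode/decode instructions the abstract refers to.
instruction_memory = [
    ("LOAD", "r0", 5),           # r0 <- 5
    ("LOAD", "r1", 7),           # r1 <- 7
    ("ADD",  "r2", "r0", "r1"),  # r2 <- r0 + r1
    ("CABAC_INIT", "r2"),        # pretend codec-accelerating instruction
    ("HALT",),
]

registers, codec_state, pc = {}, {}, 0
while True:
    fetched = instruction_memory[pc]            # instruction fetch stage
    opcode, operands = fetched[0], fetched[1:]  # instruction decode stage
    pc += 1
    if opcode == "LOAD":                        # execution stage
        registers[operands[0]] = operands[1]
    elif opcode == "ADD":
        registers[operands[0]] = registers[operands[1]] + registers[operands[2]]
    elif opcode == "CABAC_INIT":
        codec_state["context"] = registers[operands[0]]   # toy side effect
    elif opcode == "HALT":
        break

print(registers, codec_state)
```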