Patents by Inventor Abhik Sarkar

Abhik Sarkar has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents granted by the United States Patent and Trademark Office (USPTO). Brief illustrative code sketches of the techniques described in the abstracts follow the listing.

  • Patent number: 10789056
    Abstract: Technologies for binary translation include a computing device that allocates a translation cache shared by all threads associated with a corresponding execution domain. The computing device assigns a thread to an execution domain, translates original binary code of the thread to generate translated binary code, and installs the translated binary code into the corresponding translation cache for execution. The computing device may allocate a global region cache, generate region metadata associated with the original binary code of a thread, and store the region metadata in the global region cache. The original binary code may be translated using the region metadata. The computing device may allocate a global prototype cache, translate the original binary code of a thread to generate prototype code, and install the prototype code in the global prototype cache. The prototype code may be a non-executable version of the translated binary code. Other embodiments are described and claimed.
    Type: Grant
    Filed: July 6, 2016
    Date of Patent: September 29, 2020
    Assignee: Intel Corporation
    Inventors: Koichi Yamada, Jose A. Baiocchi Paredes, Abhik Sarkar, Ajay Harikumar, Jiwei Lu
  • Patent number: 9990233
    Abstract: Technologies for partial binary translation on multi-core platforms include a shared translation cache, a binary translation thread scheduler, a global installation thread, and a local translation thread and analysis thread for each processor core. On detection of a hotspot, the thread scheduler first resumes the global thread if suspended, next activates the global thread if a translation cache operation is pending, and last schedules local translation or analysis threads for execution. Translation cache operations are centralized in the global thread and decoupled from analysis and translation. The thread scheduler may execute in a non-preemptive nucleus, and the translation and analysis threads may execute in a preemptive runtime. The global thread may be primarily preemptive with a small non-preemptive nucleus to commit updates to the shared translation cache. The global thread may migrate to any of the processor cores. Forward progress is guaranteed. Other embodiments are described and claimed.
    Type: Grant
    Filed: June 28, 2013
    Date of Patent: June 5, 2018
    Assignee: Intel Corporation
    Inventors: Abhik Sarkar, Jiwei Lu, Palanivelrajan Rajan Shanmugavelayutham, Jason M. Agron, Koichi Yamada
  • Publication number: 20180011696
    Abstract: Technologies for binary translation include a computing device that allocates a translation cache shared by all threads associated with a corresponding execution domain. The computing device assigns a thread to an execution domain, translates original binary code of the thread to generate translated binary code, and installs the translated binary code into the corresponding translation cache for execution. The computing device may allocate a global region cache, generate region metadata associated with the original binary code of a thread, and store the region metadata in the global region cache. The original binary code may be translated using the region metadata. The computing device may allocate a global prototype cache, translate the original binary code of a thread to generate prototype code, and install the prototype code in the global prototype cache. The prototype code may be a non-executable version of the translated binary code. Other embodiments are described and claimed.
    Type: Application
    Filed: July 6, 2016
    Publication date: January 11, 2018
    Inventors: Koichi Yamada, Jose A. Baiocchi Paredes, Abhik Sarkar, Ajay Harikumar, Jiwei Lu
  • Publication number: 20160188372
    Abstract: Technologies for partial binary translation on multi-core platforms include a shared translation cache, a binary translation thread scheduler, a global installation thread, and a local translation thread and analysis thread for each processor core. On detection of a hotspot, the thread scheduler first resumes the global thread if suspended, next activates the global thread if a translation cache operation is pending, and last schedules local translation or analysis threads for execution. Translation cache operations are centralized in the global thread and decoupled from analysis and translation. The thread scheduler may execute in a non-preemptive nucleus, and the translation and analysis threads may execute in a preemptive runtime. The global thread may be primarily preemptive with a small non-preemptive nucleus to commit updates to the shared translation cache. The global thread may migrate to any of the processor cores. Forward progress is guaranteed. Other embodiments are described and claimed.
    Type: Application
    Filed: June 28, 2013
    Publication date: June 30, 2016
    Inventors: Abhik Sarkar, Jiwei Lu, Palanivelrajan Rajan Shanmugavelayutham, Jason M. Agron, Koichi Yamada
  • Patent number: 7983342
    Abstract: A macro-block level parallel video decoder for a parallel processing environment is provided. The video decoder includes a Variable Length Decoding (VLD) block for decoding the encoded Discrete Cosine Transform (DCT) coefficients, a master node that receives the decoded DCT coefficients, and multiple slave nodes/processors for parallel implementation of Inverse Discrete Cosine Transform (IDCT) and motion compensation at the macro-block level. Also provided is a method for macro-block level video decoding in a parallel processing system.
    Type: Grant
    Filed: July 28, 2005
    Date of Patent: July 19, 2011
    Assignee: STMicroelectronics Pvt. Ltd.
    Inventors: Kaushik Saha, Abhik Sarkar, Srijib Narayan Maiti
  • Publication number: 20060072674
    Abstract: A macro-block level parallel video decoder for a parallel processing environment is provided. The video decoder includes a Variable Length Decoding (VLD) block for decoding the encoded Discrete Cosine Transform (DCT) coefficients, a master node that receives the decoded DCT coefficients, and multiple slave nodes/processors for parallel implementation of Inverse Discrete Cosine Transform (IDCT) and motion compensation at the macro-block level. Also provided is a method for macro-block level video decoding in a parallel processing system.
    Type: Application
    Filed: July 28, 2005
    Publication date: April 6, 2006
    Applicant: STMicroelectronics Pvt. Ltd.
    Inventors: Kaushik Saha, Abhik Sarkar, Srijib Maiti
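
Illustrative code sketches

A minimal sketch of the per-domain shared translation cache described in patent 10789056 and publication 20180011696: every thread assigned to an execution domain shares that domain's translation cache, and translated code is installed into it for execution. The type and function names (ExecutionDomain, TranslationCache, translate_binary, and so on) are illustrative assumptions, not the patented implementation, and the global region and prototype caches are omitted.

    // Hypothetical sketch: each execution domain owns one translation cache
    // shared by every thread assigned to that domain.
    #include <cstdint>
    #include <mutex>
    #include <unordered_map>
    #include <vector>

    using CodeBuffer = std::vector<std::uint8_t>;

    struct TranslationCache {
        std::mutex lock;                                       // guards concurrent installs
        std::unordered_map<std::uint64_t, CodeBuffer> blocks;  // original PC -> translated code
    };

    struct ExecutionDomain {
        TranslationCache cache;  // shared by all threads in this domain
    };

    // Placeholder translator: a real system would emit executable target code.
    CodeBuffer translate_binary(const CodeBuffer& original) { return original; }

    class BinaryTranslator {
    public:
        // Assign a thread to an execution domain.
        void assign_thread(int thread_id, int domain_id) {
            thread_domain_[thread_id] = domain_id;
        }

        // Translate the thread's original code and install the result in the
        // translation cache of the thread's execution domain.
        void translate_and_install(int thread_id, std::uint64_t pc,
                                   const CodeBuffer& original) {
            ExecutionDomain& dom = domains_[thread_domain_.at(thread_id)];
            CodeBuffer translated = translate_binary(original);
            std::lock_guard<std::mutex> g(dom.cache.lock);
            dom.cache.blocks[pc] = std::move(translated);
        }

    private:
        std::unordered_map<int, int> thread_domain_;        // thread id -> domain id
        std::unordered_map<int, ExecutionDomain> domains_;  // domain id -> domain state
    };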
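
One possible reading of the hotspot-handling priority in patent 9990233 and publication 20160188372: when a hotspot is detected, the scheduler first resumes the global installation thread if it is suspended, next activates it if a translation-cache operation is pending, and only then schedules the core's local translation or analysis thread. The GlobalThread and CoreWork types and their hooks are assumptions for illustration.

    #include <functional>

    struct GlobalThread {
        bool suspended = false;          // global installation thread state
        bool cache_op_pending = false;   // a translation-cache update awaits commit
        std::function<void()> resume;    // hooks supplied by the runtime (assumed)
        std::function<void()> activate;
    };

    struct CoreWork {
        std::function<void()> run_local_translation_or_analysis;
    };

    // Called by the binary translation thread scheduler when a hotspot is
    // detected on a core; encodes the first/next/last priority order.
    void on_hotspot(GlobalThread& global, CoreWork& core) {
        if (global.suspended) {
            global.resume();                            // 1) resume the global thread
        } else if (global.cache_op_pending) {
            global.activate();                          // 2) commit pending cache operations
        } else {
            core.run_local_translation_or_analysis();   // 3) local analysis/translation
        }
    }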
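
A minimal sketch of the macro-block level parallel decoder described in patent 7983342 and publication 20060072674: a master performs variable length decoding (VLD) of the DCT coefficients, and the macro-blocks are then distributed to worker threads that apply the inverse DCT and motion compensation in parallel. The helper functions are placeholders, not the STMicroelectronics implementation.

    #include <cstddef>
    #include <thread>
    #include <vector>

    struct MacroBlock {
        int coeffs[64] = {};  // decoded DCT coefficients (placeholder layout)
    };

    // Placeholder VLD and per-macro-block reconstruction steps.
    MacroBlock vld_decode_next() { return MacroBlock{}; }
    void idct_and_motion_compensate(MacroBlock&) {}

    void decode_picture(int num_macroblocks, unsigned num_workers) {
        // Master node: VLD is serial in the bitstream, so decode coefficients
        // for every macro-block up front.
        std::vector<MacroBlock> mbs;
        mbs.reserve(num_macroblocks);
        for (int i = 0; i < num_macroblocks; ++i)
            mbs.push_back(vld_decode_next());

        // Slave nodes: IDCT and motion compensation for different macro-blocks
        // are independent, so split them across worker threads.
        std::vector<std::thread> workers;
        for (unsigned w = 0; w < num_workers; ++w) {
            workers.emplace_back([&mbs, w, num_workers] {
                for (std::size_t i = w; i < mbs.size(); i += num_workers)
                    idct_and_motion_compensate(mbs[i]);
            });
        }
        for (auto& t : workers) t.join();
    }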