Patents by Inventor Jose F. Martinez

Jose F. Martinez has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12001841
    Abstract: A content-addressable processing engine, also referred to herein as CAPE, is provided. Processing-in-memory (PIM) architectures attempt to overcome the von Neumann bottleneck by combining computation and storage logic into a single component. CAPE is a general-purpose PIM microarchitecture that accelerates vector operations while remaining programmable with standard reduced instruction set computing (RISC) instructions, such as RISC-V instructions with standard vector extensions. CAPE can be implemented as a standalone core that specializes in associative computing and that can be integrated into a tiled multicore chip alongside other types of compute engines. Certain embodiments of CAPE achieve average speedups of 14× (up to 254×) over an area-equivalent out-of-order processor core tile with three levels of caches across a diverse set of representative applications.
    Type: Grant
    Filed: September 22, 2022
    Date of Patent: June 4, 2024
    Assignee: Cornell University
    Inventors: José F. Martínez, Helena Caminal, Kailin Yang, Khalid Al-Hawaj, Christopher Batten
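To make the associative-computing idea in the entry above concrete, the following is a minimal, illustrative Python sketch: every row of a content-addressable array is compared against a key at once, and only the matching rows are updated in place. The class and method names are assumptions made for illustration, not details from the patent.

```python
import numpy as np


class AssociativeArray:
    """Toy model of a content-addressable compute array: every row can be
    searched against a key and updated in place, mimicking the PIM-style
    data parallelism described in the abstract (illustrative only)."""

    def __init__(self, rows: int, cols: int):
        self.mem = np.zeros((rows, cols), dtype=np.int64)

    def search(self, col: int, key: int) -> np.ndarray:
        # Compare one field of every row against the key "simultaneously".
        return self.mem[:, col] == key

    def masked_write(self, mask: np.ndarray, col: int, value: int) -> None:
        # Update only the rows whose field matched the key.
        self.mem[mask, col] = value


# Example: a vector-style update expressed as search + masked write per key.
arr = AssociativeArray(rows=8, cols=2)
arr.mem[:, 0] = [3, 1, 3, 2, 3, 1, 2, 3]            # tags
for key in np.unique(arr.mem[:, 0]):
    hits = arr.search(col=0, key=int(key))          # parallel compare
    arr.masked_write(hits, col=1, value=int(key) * 10)
print(arr.mem)
```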
  • Publication number: 20230032335
    Abstract: A content-addressable processing engine, also referred to herein as CAPE, is provided. Processing-in-memory (PIM) architectures attempt to overcome the von Neumann bottleneck by combining computation and storage logic into a single component. CAPE is a general-purpose PIM microarchitecture that accelerates vector operations while remaining programmable with standard reduced instruction set computing (RISC) instructions, such as RISC-V instructions with standard vector extensions. CAPE can be implemented as a standalone core that specializes in associative computing and that can be integrated into a tiled multicore chip alongside other types of compute engines. Certain embodiments of CAPE achieve average speedups of 14× (up to 254×) over an area-equivalent out-of-order processor core tile with three levels of caches across a diverse set of representative applications.
    Type: Application
    Filed: September 22, 2022
    Publication date: February 2, 2023
    Inventors: José F. Martínez, Helena Caminal, Kailin Yang, Khalid Al-Hawaj, Christopher Batten
  • Patent number: 11461097
    Abstract: A content-addressable processing engine, also referred to herein as CAPE, is provided. Processing-in-memory (PIM) architectures attempt to overcome the von Neumann bottleneck by combining computation and storage logic into a single component. CAPE is a general-purpose PIM microarchitecture that accelerates vector operations while remaining programmable with standard reduced instruction set computing (RISC) instructions, such as RISC-V instructions with standard vector extensions. CAPE can be implemented as a standalone core that specializes in associative computing and that can be integrated into a tiled multicore chip alongside other types of compute engines. Certain embodiments of CAPE achieve average speedups of 14× (up to 254×) over an area-equivalent out-of-order processor core tile with three levels of caches across a diverse set of representative applications.
    Type: Grant
    Filed: January 15, 2021
    Date of Patent: October 4, 2022
    Assignee: Cornell University
    Inventors: José F. Martínez, Helena Caminal, Kailin Yang, Khalid Al-Hawaj, Christopher Batten
  • Publication number: 20220229663
    Abstract: A content-addressable processing engine, also referred to herein as CAPE, is provided. Processing-in-memory (PIM) architectures attempt to overcome the von Neumann bottleneck by combining computation and storage logic into a single component. CAPE is a general-purpose PIM microarchitecture that accelerates vector operations while remaining programmable with standard reduced instruction set computing (RISC) instructions, such as RISC-V instructions with standard vector extensions. CAPE can be implemented as a standalone core that specializes in associative computing and that can be integrated into a tiled multicore chip alongside other types of compute engines. Certain embodiments of CAPE achieve average speedups of 14× (up to 254×) over an area-equivalent out-of-order processor core tile with three levels of caches across a diverse set of representative applications.
    Type: Application
    Filed: January 15, 2021
    Publication date: July 21, 2022
    Inventors: José F. Martínez, Helena Caminal, Kailin Yang, Khalid Al-Hawaj, Christopher Batten
  • Publication number: 20160117118
    Abstract: The invention relates to a system and methods for memory scheduling performed by a processor using characterization logic and a memory scheduler. The processor influences the order in which memory requests are serviced by providing associated hints to the memory scheduler, where scheduling actually takes place.
    Type: Application
    Filed: June 20, 2014
    Publication date: April 28, 2016
    Inventors: José F. Martínez, Saugata Ghose
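A minimal sketch of the hint-driven idea in the entry above, under the assumption of a simple priority-style hint interface: the processor tags each memory request with a hint, and the scheduler, where the actual scheduling happens, services the most urgent hints first and breaks ties by arrival order. The interface and names are hypothetical, not the patented mechanism.

```python
import heapq
import itertools


class HintedMemoryScheduler:
    """Toy memory scheduler: requests carry a processor-supplied hint
    (smaller = more urgent); ties are broken by arrival order."""

    def __init__(self):
        self._queue = []                      # (hint, arrival, address)
        self._arrival = itertools.count()

    def submit(self, address: int, hint: int) -> None:
        # The processor influences service order only through the hint.
        heapq.heappush(self._queue, (hint, next(self._arrival), address))

    def next_request(self):
        # The scheduler decides which request is serviced next.
        if not self._queue:
            return None
        _, _, address = heapq.heappop(self._queue)
        return address


sched = HintedMemoryScheduler()
sched.submit(0x1000, hint=2)   # ordinary request
sched.submit(0x2000, hint=0)   # request flagged as critical by the core
sched.submit(0x3000, hint=2)
print([hex(a) for a in iter(sched.next_request, None)])  # 0x2000 serviced first
```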
  • Patent number: 7809926
    Abstract: A reconfigurable multiprocessor system including a number of processing units, together with components that enable the processing units to execute sequential code collectively and that allow the architectural configuration of the processing units to be changed.
    Type: Grant
    Filed: November 3, 2006
    Date of Patent: October 5, 2010
    Assignee: Cornell Research Foundation, Inc.
    Inventors: Jose F. Martinez, Engin Ipek, Meyrem Kirman, Nevin Kirman
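The following is a loose, hypothetical Python sketch of the reconfiguration idea in the entry above: the same processing units can be grouped so that a group collectively executes one sequential instruction stream, and the grouping (the architectural configuration) can be changed at run time. All names and the round-robin distribution are illustrative assumptions, not details from the patent.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ProcessingUnit:
    uid: int


@dataclass
class ReconfigurableMultiprocessor:
    """Toy sketch: the same processing units can run independently or be
    grouped so a group collectively executes one sequential instruction
    stream; regrouping changes the architectural configuration."""
    units: List[ProcessingUnit]
    groups: List[List[ProcessingUnit]] = field(default_factory=list)

    def configure(self, group_size: int) -> None:
        # Change the architectural configuration by repartitioning the units.
        self.groups = [self.units[i:i + group_size]
                       for i in range(0, len(self.units), group_size)]

    def run_sequential(self, program: List[str]) -> None:
        # Each group executes one sequential stream collectively, with
        # instructions handed out round-robin to its member units.
        for group in self.groups:
            for i, instr in enumerate(program):
                unit = group[i % len(group)]
                print(f"group of {len(group)}: unit {unit.uid} runs {instr}")


mp = ReconfigurableMultiprocessor(units=[ProcessingUnit(u) for u in range(4)])
mp.configure(group_size=2)                 # two 2-unit groups
mp.run_sequential(["add", "mul", "load"])
mp.configure(group_size=4)                 # reconfigure into one 4-unit group
mp.run_sequential(["add", "mul", "load"])
```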
  • Patent number: 7747841
    Abstract: A technique known as checkpointed early load retirement combines register checkpointing with load-value prediction to manage long-latency loads. When a long-latency load reaches the retirement stage unresolved, the processor enters Clear mode by (1) taking a checkpoint of the architectural registers, (2) supplying a load-value prediction to consumers, and (3) early-retiring the long-latency load. This unclogs retirement, thereby “clearing the way” for subsequent instructions to retire and allowing instructions dependent on the long-latency load to execute sooner. When the actual value returns from memory, it is compared against the prediction. A misprediction causes the processor to roll back to the checkpoint, discarding all subsequent computation.
    Type: Grant
    Filed: September 26, 2006
    Date of Patent: June 29, 2010
    Assignee: Cornell Research Foundation, Inc.
    Inventors: Jose F. Martinez, Meyrem Kirman, Nevin Kirman
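Because the abstract in the entry above lays out the Clear-mode steps explicitly, a short behavioral sketch can mirror them: checkpoint the architectural registers, supply a predicted value to consumers, retire the load early, then either discard the checkpoint on a correct prediction or roll back on a misprediction. The Python below is an illustrative model of that flow, not the hardware design; all names are assumptions.

```python
import copy


class Core:
    """Toy model of checkpointed early load retirement ("Clear mode"):
    a long-latency load is retired early with a predicted value; when the
    real value arrives, a misprediction rolls the core back to the
    checkpoint (behavioral sketch, not the patented hardware)."""

    def __init__(self):
        self.regs = {}
        self.checkpoint = None

    def early_retire_load(self, dest_reg: str, predicted: int) -> None:
        self.checkpoint = copy.deepcopy(self.regs)  # (1) checkpoint registers
        self.regs[dest_reg] = predicted             # (2) supply the prediction
        # (3) the load is considered retired; dependents may execute early.

    def load_value_returned(self, dest_reg: str, actual: int) -> bool:
        if self.regs.get(dest_reg) == actual:
            self.checkpoint = None                  # prediction correct: keep going
            return True
        self.regs = self.checkpoint                 # misprediction: roll back
        self.checkpoint = None
        self.regs[dest_reg] = actual                # re-execute from the load
        return False


core = Core()
core.early_retire_load("r1", predicted=0)
core.regs["r2"] = core.regs["r1"] + 5      # dependent instruction runs early
ok = core.load_value_returned("r1", actual=7)
print(ok, core.regs)                       # False {'r1': 7}: r2 was discarded
```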
  • Publication number: 20080109637
    Abstract: A reconfigurable multiprocessor system including a number of processing units, together with components that enable the processing units to execute sequential code collectively and that allow the architectural configuration of the processing units to be changed.
    Type: Application
    Filed: November 3, 2006
    Publication date: May 8, 2008
    Applicant: Cornell Research Foundation, Inc.
    Inventors: Jose F. Martinez, Engin Ipek, Meyrem Kirman, Nevin Kirman