Patents by Inventor Helena Caminal

Helena Caminal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO). Short illustrative sketches of the core techniques described in the abstracts follow the listing.

  • Patent number: 12001841
    Abstract: A content-addressable processing engine, also referred to herein as CAPE, is provided. Processing-in-memory (PIM) architectures attempt to overcome the von Neumann bottleneck by combining computation and storage logic into a single component. CAPE is a general-purpose PIM microarchitecture that accelerates vector operations while being programmable with standard reduced instruction set computing (RISC) instructions, such as RISC-V instructions with standard vector extensions. CAPE can be implemented as a standalone core that specializes in associative computing and can be integrated in a tiled multicore chip alongside other types of compute engines. Certain embodiments of CAPE achieve average speedups of 14× (up to 254×) over an area-equivalent out-of-order processor core tile with three levels of caches across a diverse set of representative applications.
    Type: Grant
    Filed: September 22, 2022
    Date of Patent: June 4, 2024
    Assignee: Cornell University
    Inventors: José F. Martínez, Helena Caminal, Kailin Yang, Khalid Al-Hawaj, Christopher Batten
  • Publication number: 20240152292
    Abstract: Methods, systems, and devices for redundant computing across planes are described. A device may perform a computational operation on first data that is stored in a first plane that includes content-addressable memory cells. The first data may be representative of a set of contiguous bits of a vector. The device may perform, concurrent with performing the computational operation on the first data, the computational operation on second data that is stored in a second plane. The second data may be representative of the set of contiguous bits of the vector. The device may read from the first plane, and write to the second plane, third data representative of a result of the computational operation on the first data.
    Type: Application
    Filed: January 17, 2024
    Publication date: May 9, 2024
    Inventors: Sean S. Eilert, Kenneth M. Curewitz, Helena Caminal, Ameen D. Akel
  • Patent number: 11899961
    Abstract: Methods, systems, and devices for redundant computing across planes are described. A device may perform a computational operation on first data that is stored in a first plane that includes content-addressable memory cells. The first data may be representative of a set of contiguous bits of a vector. The device may perform, concurrent with performing the computational operation on the first data, the computational operation on second data that is stored in a second plane. The second data may be representative of the set of contiguous bits of the vector. The device may read from the first plane, and write to the second plane, third data representative of a result of the computational operation on the first data.
    Type: Grant
    Filed: February 23, 2022
    Date of Patent: February 13, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Sean S. Eilert, Kenneth M. Curewitz, Helena Caminal, Ameen D. Akel
  • Publication number: 20230267043
    Abstract: Methods, systems, and devices for parity-based error management are described. A processing system that performs a computational operation on a set of operands may perform a computational operation (e.g., the same computational operation) on parity bits for the operands. The processing system may then use the parity bits that result from the computational operation on the parity bits to detect, and optionally correct, one or more errors in the output that results from the computational operation on the operands.
    Type: Application
    Filed: February 23, 2022
    Publication date: August 24, 2023
    Inventors: Ameen D. Akel, Helena Caminal, Sean S. Eilert
  • Publication number: 20230214148
    Abstract: Methods, systems, and devices for redundant computing across planes are described. A device may perform a computational operation on first data that is stored in a first plane that includes content-addressable memory cells. The first data may be representative of a set of contiguous bits of a vector. The device may perform, concurrent with performing the computational operation on the first data, the computational operation on second data that is stored in a second plane. The second data may be representative of the set of contiguous bits of the vector. The device may read from the first plane, and write to the second plane, third data representative of a result of the computational operation on the first data.
    Type: Application
    Filed: February 23, 2022
    Publication date: July 6, 2023
    Inventors: Sean S. Eilert, Kenneth M. Curewitz, Helena Caminal, Ameen D. Akel
  • Publication number: 20230208444
    Abstract: Methods, systems, and devices for associative computing for error correction are described. A device may receive first data representative of a first codeword of a size for error correction. The device may identify a set of content-addressable memory cells that stores data representative of a set of codewords each of which is the size of the first codeword. The device may identify second data representative of the first codeword in the set of content-addressable memory cells. Based on identifying the second data, the device may transmit an indication of a valid codeword that is mapped to the second data.
    Type: Application
    Filed: February 22, 2022
    Publication date: June 29, 2023
    Inventors: Ameen D. Akel, Helena Caminal, Sean S. Eilert
  • Publication number: 20230032335
    Abstract: A content-addressable processing engine, also referred to herein as CAPE, is provided. Processing-in-memory (PIM) architectures attempt to overcome the von Neumann bottleneck by combining computation and storage logic into a single component. CAPE is a general-purpose PIM microarchitecture that accelerates vector operations while being programmable with standard reduced instruction set computing (RISC) instructions, such as RISC-V instructions with standard vector extensions. CAPE can be implemented as a standalone core that specializes in associative computing and can be integrated in a tiled multicore chip alongside other types of compute engines. Certain embodiments of CAPE achieve average speedups of 14× (up to 254×) over an area-equivalent out-of-order processor core tile with three levels of caches across a diverse set of representative applications.
    Type: Application
    Filed: September 22, 2022
    Publication date: February 2, 2023
    Inventors: José F. Martínez, Helena Caminal, Kailin Yang, Khalid Al-Hawaj, Christopher Batten
  • Patent number: 11461097
    Abstract: A content-addressable processing engine, also referred to herein as CAPE, is provided. Processing-in-memory (PIM) architectures attempt to overcome the von Neumann bottleneck by combining computation and storage logic into a single component. CAPE is a general-purpose PIM microarchitecture that accelerates vector operations while being programmable with standard reduced instruction set computing (RISC) instructions, such as RISC-V instructions with standard vector extensions. CAPE can be implemented as a standalone core that specializes in associative computing and can be integrated in a tiled multicore chip alongside other types of compute engines. Certain embodiments of CAPE achieve average speedups of 14× (up to 254×) over an area-equivalent out-of-order processor core tile with three levels of caches across a diverse set of representative applications.
    Type: Grant
    Filed: January 15, 2021
    Date of Patent: October 4, 2022
    Assignee: Cornell University
    Inventors: José F. Martínez, Helena Caminal, Kailin Yang, Khalid Al-Hawaj, Christopher Batten
  • Publication number: 20220229663
    Abstract: A content-addressable processing engine, also referred to herein as CAPE, is provided. Processing-in-memory (PIM) architectures attempt to overcome the von Neumann bottleneck by combining computation and storage logic into a single component. CAPE is a general-purpose PIM microarchitecture that accelerates vector operations while being programmable with standard reduced instruction set computing (RISC) instructions, such as RISC-V instructions with standard vector extensions. CAPE can be implemented as a standalone core that specializes in associative computing and can be integrated in a tiled multicore chip alongside other types of compute engines. Certain embodiments of CAPE achieve average speedups of 14× (up to 254×) over an area-equivalent out-of-order processor core tile with three levels of caches across a diverse set of representative applications.
    Type: Application
    Filed: January 15, 2021
    Publication date: July 21, 2022
    Inventors: José F. Martínez, Helena Caminal, Kailin Yang, Khalid Al-Hawaj, Christopher Batten
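
Illustrative sketches of the listed techniques

The Python sketches below are hedged, purely functional illustrations of the techniques described in the abstracts above; they are not the patented implementations, and every function name, data layout, and parameter in them is an assumption introduced for illustration.

The first sketch relates to the CAPE entries (patent numbers 12001841 and 11461097, publication numbers 20230032335 and 20220229663). It models the basic associative-computing primitive that a content-addressable engine builds on: compare a key against every stored row at once, then update all matching rows in place. The row values, the field masks, and the two-step search/update interface are assumptions.

    def associative_search(rows, key, mask):
        """Compare every stored row against `key` within the bits selected by `mask`."""
        return [(row & mask) == (key & mask) for row in rows]

    def associative_update(rows, matches, new_bits, mask):
        """Write `new_bits` into the masked field of every matching row, in place."""
        for i, hit in enumerate(matches):
            if hit:
                rows[i] = (rows[i] & ~mask) | (new_bits & mask)

    if __name__ == "__main__":
        rows = [0b0110, 0b0101, 0b1110]                            # contents of the CAM-like array
        hits = associative_search(rows, key=0b0100, mask=0b0110)   # only 0b0101 matches
        associative_update(rows, hits, new_bits=0b0010, mask=0b0011)
        print([bin(r) for r in rows])                              # ['0b110', '0b110', '0b1110']

In a real associative engine the search and the update are applied to all rows in parallel; the Python loop only models the functional effect.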
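
The next sketch relates to the redundant-computing entries (patent number 11899961, publication numbers 20240152292 and 20230214148). It models two planes that hold the same contiguous bits of a vector, applies the same toy operation to both concurrently, and then writes the result computed from the first plane into the second plane. The bit-inversion operation, the thread pool, and the cross-check are assumptions; the abstract does not specify the operation or how the redundancy is used.

    from concurrent.futures import ThreadPoolExecutor

    def plane_operation(plane):
        """Toy computational operation applied within a plane: invert every stored bit."""
        return [bit ^ 1 for bit in plane]

    def redundant_compute(vector_bits):
        plane_a = list(vector_bits)    # first plane holding contiguous bits of the vector
        plane_b = list(vector_bits)    # second plane holding the same bits
        with ThreadPoolExecutor(max_workers=2) as pool:
            # perform the operation on both planes concurrently
            future_a = pool.submit(plane_operation, plane_a)
            future_b = pool.submit(plane_operation, plane_b)
            result_a, result_b = future_a.result(), future_b.result()
        assert result_a == result_b    # one possible use of the redundancy: cross-check the planes
        plane_b = list(result_a)       # read from the first plane, write the result into the second
        return plane_b

    if __name__ == "__main__":
        print(redundant_compute([1, 0, 1, 1]))   # [0, 1, 0, 0]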
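
The next sketch relates to publication number 20230267043 (parity-based error management). It uses XOR as the computational operation because XOR preserves parity: the parity of the result equals the XOR of the operand parities, so running the same operation on the parity bits yields a check value that flags a corrupted output. The choice of XOR and the single-bit fault injection are assumptions made so the check can be demonstrated.

    def parity(x):
        """Parity bit of an integer: 1 if it has an odd number of set bits."""
        return bin(x).count("1") & 1

    def xor_with_parity_check(a, b):
        p_a, p_b = parity(a), parity(b)     # parity bits stored alongside the operands
        result = a ^ b                      # computational operation on the operands
        p_result = p_a ^ p_b                # the same operation applied to the parity bits
        faulty = result ^ 0b0100            # inject a single-bit error to show detection
        assert parity(result) == p_result   # clean output: parities agree
        assert parity(faulty) != p_result   # corrupted output: mismatch is detected
        return result

    if __name__ == "__main__":
        print(bin(xor_with_parity_check(0b1011, 0b0110)))   # 0b1101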
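
The last sketch relates to publication number 20230208444 (associative computing for error correction). It models a content-addressable table in which every pattern within Hamming distance 1 of a valid codeword is stored and mapped to that codeword, so a single lookup of a possibly corrupted received word returns an indication of the valid codeword. The toy 4-bit code and the distance-1 construction are assumptions; the abstract does not specify the code or the mapping.

    def neighbors(word, width):
        """The word itself plus every single-bit corruption of it."""
        yield word
        for bit in range(width):
            yield word ^ (1 << bit)

    def build_cam(valid_codewords, width):
        """Map every pattern 'stored in the CAM' to the valid codeword it represents."""
        cam = {}
        for codeword in valid_codewords:
            for pattern in neighbors(codeword, width):
                cam[pattern] = codeword
        return cam

    if __name__ == "__main__":
        # Two 4-bit codewords at Hamming distance 3, so their distance-1 balls do not overlap.
        cam = build_cam([0b0000, 0b0111], width=4)
        received = 0b0101                 # 0b0111 with one bit flipped
        print(bin(cam[received]))         # 0b111: the valid codeword mapped to the received pattern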