Patents by Inventor Vincent Cavé

Vincent Cavé has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240069921
    Abstract: Technology described herein provides a dynamically reconfigurable processing core. The technology includes a plurality of pipelines comprising a core, where the core is reconfigurable into one of a plurality of core modes, a core network to provide inter-pipeline connections for the pipelines, and logic to receive a morph instruction including a target core mode from an application running on the core, determine a present core state for the core, and morph, based on the present core state, the core to the target core mode. In embodiments, to morph the core, the logic is to select, based on the target core mode, which inter-pipeline connections are active, where each pipeline includes at least one multiplexor via which the inter-pipeline connections are selected to be active. In embodiments, to morph the core, the logic is further to select, based on the target core mode, which memory access paths are active.
    Type: Application
    Filed: September 29, 2023
    Publication date: February 29, 2024
    Inventors: Scott Cline, Robert Pawlowski, Joshua Fryman, Ivan Ganev, Vincent Cave, Sebastian Szkoda, Fabio Checconi
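    The abstract above describes selecting which inter-pipeline connections and memory access paths become active when the core is morphed to a target mode. Below is a minimal software sketch of that selection step, assuming hypothetical mode names, connection masks, and a `morph_core` helper; it stands in for hardware multiplexor control and is not the patented implementation.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical core modes; the patent does not enumerate them. */
    typedef enum { MODE_SCALAR, MODE_VECTOR, MODE_MULTITHREAD } core_mode;

    typedef struct {
        core_mode mode;            /* present core mode                   */
        uint32_t  pipe_conn_mask;  /* which inter-pipeline links are live */
        uint32_t  mem_path_mask;   /* which memory access paths are live  */
    } core_state;

    /* Per-mode multiplexor settings (illustrative values only). */
    static const uint32_t CONN_MASK[] = { 0x0, 0xF, 0x3 };
    static const uint32_t MEM_MASK[]  = { 0x1, 0x3, 0x7 };

    /* Morph the core: pick active connections/paths from the target mode,
     * taking the present state into account (here: skip a no-op morph). */
    static void morph_core(core_state *c, core_mode target)
    {
        if (c->mode == target)
            return;                       /* already in the requested mode */
        c->pipe_conn_mask = CONN_MASK[target];
        c->mem_path_mask  = MEM_MASK[target];
        c->mode = target;
    }

    int main(void)
    {
        core_state core = { MODE_SCALAR, CONN_MASK[MODE_SCALAR], MEM_MASK[MODE_SCALAR] };
        morph_core(&core, MODE_VECTOR);   /* a "morph instruction" issued by an application */
        printf("mode=%d conn=0x%X mem=0x%X\n", core.mode, core.pipe_conn_mask, core.mem_path_mask);
        return 0;
    }
    ```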
  • Patent number: 11630691
    Abstract: Disclosed embodiments relate to an improved memory system architecture for multi-threaded processors. In one example, a system comprises a multi-threaded processor core (MTPC), the MTPC comprising: P pipelines, each to concurrently process T threads; a crossbar to communicatively couple the P pipelines; a memory for use by the P pipelines; a scheduler to optimize reduction operations by assigning multiple threads to generate results of commutative arithmetic operations and then accumulate the generated results; and a memory controller (MC) to connect with external storage and other MTPCs, the MC further comprising at least one optimization selected from: an instruction set architecture including a dual-memory operation; a direct memory access (DMA) engine; a buffer to store multiple pending instruction cache requests; multiple channels across which to stripe memory requests; and a shadow-tag coherency management unit.
    Type: Grant
    Filed: August 24, 2021
    Date of Patent: April 18, 2023
    Assignee: Intel Corporation
    Inventors: Robert Pawlowski, Ankit More, Jason M. Howard, Joshua B. Fryman, Tina C. Zhong, Shaden Smith, Sowmya Pitchaimoorthy, Samkit Jain, Vincent Cave, Sriram Aananthakrishnan, Bharadwaj Krishnamurthy
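    One element of the abstract above is a scheduler that splits a reduction across multiple threads, each producing a partial result of a commutative operation that is then accumulated. A minimal software sketch of that idea follows; the thread count, the striping of the input, and the use of plain loops to emulate hardware threads are assumptions made only for illustration.
    ```c
    #include <stdio.h>

    #define T 4   /* hypothetical number of hardware threads */
    #define N 16  /* elements to reduce                      */

    int main(void)
    {
        int data[N], partial[T] = {0}, total = 0;
        for (int i = 0; i < N; i++) data[i] = i + 1;

        /* Phase 1: each "thread" t reduces its own stripe of the input.
         * (Hardware threads are emulated here by the outer loop.)       */
        for (int t = 0; t < T; t++)
            for (int i = t; i < N; i += T)
                partial[t] += data[i];     /* commutative op: addition */

        /* Phase 2: accumulate the per-thread partial results. */
        for (int t = 0; t < T; t++)
            total += partial[t];

        printf("sum = %d\n", total);       /* 136 for 1..16 */
        return 0;
    }
    ```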
  • Patent number: 11360809
    Abstract: Embodiments of apparatuses, methods, and systems for scheduling tasks to hardware threads are described. In an embodiment, a processor includes multiple hardware threads and a task manager. The task manager is to issue a task to a hardware thread. The task manager includes a hardware task queue to store a descriptor for the task. The descriptor is to include a field to store a value to indicate whether the task is a single task, a collection of iterative tasks, or a linked list of tasks.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: June 14, 2022
    Assignee: Intel Corporation
    Inventors: William Paul Griffin, Joshua Fryman, Jason Howard, Sang Phill Park, Robert Pawlowski, Michael Abbott, Scott Cline, Samkit Jain, Ankit More, Vincent Cave, Fabrizio Petrini, Ivan Ganev
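    The descriptor described above carries a field indicating whether it names a single task, a collection of iterative tasks, or a linked list of tasks. Here is a rough sketch of such a descriptor as a C struct together with a dispatch routine; the field names and the software dispatch loop are assumptions for illustration, not the hardware encoding.
    ```c
    #include <stddef.h>
    #include <stdio.h>

    typedef enum { TASK_SINGLE, TASK_ITERATIVE, TASK_LIST } task_kind;

    typedef struct task_desc {
        task_kind kind;              /* what this descriptor represents     */
        void (*fn)(int arg);         /* work to run                         */
        int  begin, end;             /* iteration range for TASK_ITERATIVE  */
        struct task_desc *next;      /* next descriptor for TASK_LIST       */
    } task_desc;

    static void work(int arg) { printf("task arg=%d\n", arg); }

    /* Issue a descriptor to a hardware thread (modeled here as a call). */
    static void issue(const task_desc *d)
    {
        switch (d->kind) {
        case TASK_SINGLE:
            d->fn(d->begin);
            break;
        case TASK_ITERATIVE:
            for (int i = d->begin; i < d->end; i++) d->fn(i);
            break;
        case TASK_LIST:
            for (const task_desc *p = d; p != NULL; p = p->next) p->fn(p->begin);
            break;
        }
    }

    int main(void)
    {
        task_desc third  = { TASK_SINGLE,    work, 30, 0, NULL };
        task_desc second = { TASK_SINGLE,    work, 20, 0, &third };
        task_desc list   = { TASK_LIST,      work, 10, 0, &second };
        task_desc iter   = { TASK_ITERATIVE, work,  0, 3, NULL };
        issue(&iter);   /* runs the iterative collection */
        issue(&list);   /* walks the linked list of tasks */
        return 0;
    }
    ```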
  • Publication number: 20220100508
    Abstract: Embodiments of apparatuses and methods for copying and operating on matrix elements are described. In embodiments, an apparatus includes a hardware instruction decoder to decode a single instruction and execution circuitry, coupled to the hardware instruction decoder, to perform one or more operations corresponding to the single instruction. The single instruction has a first operand to reference a base address of a first representation of a source matrix and a second operand to reference a base address of a second representation of a destination matrix. The one or more operations include copying elements of the source matrix to corresponding locations in the destination matrix and filling empty elements of the destination matrix with a single value.
    Type: Application
    Filed: December 25, 2020
    Publication date: March 31, 2022
    Applicant: Intel Corporation
    Inventors: Robert Pawlowski, Ankit More, Vincent Cave, Sriram Aananthakrishnan, Jason M. Howard, Joshua B. Fryman
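    The single instruction above copies source-matrix elements into corresponding positions of a destination matrix and fills the remaining destination elements with one value. The following is a small software model of that behavior; the dimensions, the row-major layout, and the assumption that "empty" means positions outside the source's extent are illustrative choices, as the abstract does not define them.
    ```c
    #include <stdio.h>

    /* Copy an rs x cs source into the top-left corner of an rd x cd
     * destination (row-major), then fill every other destination element
     * with `fill` -- a software model of the copy-and-fill operation.    */
    static void copy_fill(const int *src, int rs, int cs,
                          int *dst, int rd, int cd, int fill)
    {
        for (int r = 0; r < rd; r++)
            for (int c = 0; c < cd; c++)
                dst[r * cd + c] = (r < rs && c < cs) ? src[r * cs + c] : fill;
    }

    int main(void)
    {
        int src[2][2] = { {1, 2}, {3, 4} };
        int dst[3][3];
        copy_fill(&src[0][0], 2, 2, &dst[0][0], 3, 3, 0);
        for (int r = 0; r < 3; r++)
            printf("%d %d %d\n", dst[r][0], dst[r][1], dst[r][2]);
        return 0;
    }
    ```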
  • Publication number: 20210409265
    Abstract: Examples described herein relate to a first group of core nodes to couple with a group of switch nodes and a second group of core nodes to couple with the group of switch nodes, wherein: a core node of the first or second group of core nodes includes circuitry to execute one or more message passing instructions that indicate a configuration of a network to transmit data toward two or more endpoint core nodes, and a switch node of the group of switch nodes includes circuitry to execute one or more message passing instructions that indicate the configuration to transmit data toward the two or more endpoint core nodes.
    Type: Application
    Filed: September 13, 2021
    Publication date: December 30, 2021
    Inventors: Robert Pawlowski, Vincent Cave, Shruti Sharma, Fabrizio Petrini, Joshua B. Fryman, Ankit More
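    The entry above describes message passing instructions that configure the network so that core and switch nodes forward data toward two or more endpoint core nodes. A toy model of that configure-and-send flow follows; the node identifiers, the `mp_config` structure, and the `mp_send`/`switch_forward` helpers are hypothetical names, not anything published in the application.
    ```c
    #include <stdio.h>

    #define MAX_ENDPOINTS 4

    /* A "configuration" naming the endpoint core nodes data should reach. */
    typedef struct {
        int endpoints[MAX_ENDPOINTS];
        int n_endpoints;
    } mp_config;

    /* A switch node forwarding data toward every configured endpoint
     * (real hardware would select output ports; here we just print).  */
    static void switch_forward(int switch_id, const mp_config *cfg, int data)
    {
        for (int i = 0; i < cfg->n_endpoints; i++)
            printf("switch %d -> core %d : %d\n", switch_id, cfg->endpoints[i], data);
    }

    /* A core node executing a message passing "send" under a configuration. */
    static void mp_send(int core_id, const mp_config *cfg, int data)
    {
        printf("core %d issues send\n", core_id);
        switch_forward(0, cfg, data);     /* hand off to a switch node */
    }

    int main(void)
    {
        mp_config cfg = { { 5, 9 }, 2 };  /* two endpoint core nodes */
        mp_send(1, &cfg, 42);
        return 0;
    }
    ```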
  • Publication number: 20210389984
    Abstract: Disclosed embodiments relate to an improved memory system architecture for multi-threaded processors. In one example, a system comprises a multi-threaded processor core (MTPC), the MTPC comprising: P pipelines, each to concurrently process T threads; a crossbar to communicatively couple the P pipelines; a memory for use by the P pipelines; a scheduler to optimize reduction operations by assigning multiple threads to generate results of commutative arithmetic operations and then accumulate the generated results; and a memory controller (MC) to connect with external storage and other MTPCs, the MC further comprising at least one optimization selected from: an instruction set architecture including a dual-memory operation; a direct memory access (DMA) engine; a buffer to store multiple pending instruction cache requests; multiple channels across which to stripe memory requests; and a shadow-tag coherency management unit.
    Type: Application
    Filed: August 24, 2021
    Publication date: December 16, 2021
    Inventors: Robert Pawlowski, Ankit More, Jason M. Howard, Joshua B. Fryman, Tina C. Zhong, Shaden Smith, Sowmya Pitchaimoorthy, Samkit Jain, Vincent Cave, Sriram Aananthakrishnan, Bharadwaj Krishnamurthy
  • Patent number: 11106494
    Abstract: Disclosed embodiments relate to an improved memory system architecture for multi-threaded processors. In one example, a system comprises a multi-threaded processor core (MTPC), the MTPC comprising: P pipelines, each to concurrently process T threads; a crossbar to communicatively couple the P pipelines; a memory for use by the P pipelines; a scheduler to optimize reduction operations by assigning multiple threads to generate results of commutative arithmetic operations and then accumulate the generated results; and a memory controller (MC) to connect with external storage and other MTPCs, the MC further comprising at least one optimization selected from: an instruction set architecture including a dual-memory operation; a direct memory access (DMA) engine; a buffer to store multiple pending instruction cache requests; multiple channels across which to stripe memory requests; and a shadow-tag coherency management unit.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: August 31, 2021
    Assignee: Intel Corporation
    Inventors: Robert Pawlowski, Ankit More, Jason M. Howard, Joshua B. Fryman, Tina C. Zhong, Shaden Smith, Sowmya Pitchaimoorthy, Samkit Jain, Vincent Cave, Sriram Aananthakrishnan, Bharadwaj Krishnamurthy
  • Patent number: 11061742
    Abstract: In one embodiment, a first processor core includes: a plurality of execution pipelines each to execute instructions of one or more threads; a plurality of pipeline barrier circuits coupled to the plurality of execution pipelines, each of the plurality of pipeline barrier circuits associated with one of the plurality of execution pipelines to maintain status information for a plurality of barrier groups, each of the plurality of barrier groups formed of at least two threads; and a core barrier circuit to control operation of the plurality of pipeline barrier circuits and to inform the plurality of pipeline barrier circuits when a first barrier has been reached by a first barrier group of the plurality of barrier groups. Other embodiments are described and claimed.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: July 13, 2021
    Assignee: Intel Corporation
    Inventors: Robert Pawlowski, Ankit More, Shaden Smith, Sowmya Pitchaimoorthy, Samkit Jain, Vincent Cavé, Sriram Aananthakrishnan, Jason M. Howard, Joshua B. Fryman
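    The abstract above describes per-pipeline barrier circuits that track barrier groups of threads and a core-level circuit that signals when a group's barrier has been reached. Below is a compact software analogue in which arrival counters replace the hardware status bits; the group size and the notification are assumptions chosen only to illustrate the idea.
    ```c
    #include <stdio.h>
    #include <stdbool.h>

    #define MAX_GROUPS 4

    /* One barrier group: how many threads belong to it and how many
     * have arrived at the current barrier.                            */
    typedef struct {
        int size;      /* threads in the group (at least two per the abstract) */
        int arrived;   /* threads that have reached the barrier                */
    } barrier_group;

    static barrier_group groups[MAX_GROUPS];

    /* Called when a thread in `group` reaches the barrier; returns true
     * when the whole group has arrived (the "inform" step).            */
    static bool barrier_arrive(int group)
    {
        barrier_group *g = &groups[group];
        if (++g->arrived < g->size)
            return false;
        g->arrived = 0;                       /* reset for the next barrier */
        printf("group %d: barrier reached\n", group);
        return true;
    }

    int main(void)
    {
        groups[0].size = 3;                   /* a group of three threads   */
        barrier_arrive(0);
        barrier_arrive(0);
        barrier_arrive(0);                    /* third arrival completes it */
        return 0;
    }
    ```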
  • Publication number: 20200401412
    Abstract: Disclosed embodiments relate to hardware support for dual-memory atomic operations. In one example, a processor includes multiple cores, each including multiple multi-threaded pipelines (MTPs), each associated with a memory, an atomic unit (ATMU) to perform atomic operations and a write-combine buffer (WCB) to manage access to and locks of cache lines in the associated memory, each MTP including fetch and decode stages to fetch and decode an instruction having fields to specify first and second memory locations and an opcode calling for a first MTP to send a request to a second MTP of the multiple MTPs, the second MTP being associated with a memory to which the first memory location is mapped, and to perform an atomic dual-memory operation on the first and second memory locations using its associated ATMU and WCB to perform the request.
    Type: Application
    Filed: June 24, 2019
    Publication date: December 24, 2020
    Applicant: Intel Corporation
    Inventors: Robert Pawlowski, Joshua B. Fryman, Vincent Cave, Eric M. Schwartz, Ivan B. Ganev, Jason M. Howard, Ankit More, Shaden Smith
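    The abstract above describes an instruction that names two memory locations and is forwarded to the pipeline whose memory holds the first location, where an atomic dual-memory operation is performed on both. The sketch below is a software stand-in for one plausible such operation (fetch-add on the first location, with the old value written to the second); the ownership hash and the operation's semantics are assumptions, not the patented behavior.
    ```c
    #include <stdio.h>
    #include <stdint.h>

    #define N_MTP 4   /* hypothetical number of multi-threaded pipelines */

    /* Map an address to the pipeline that owns its memory (illustrative hash). */
    static int owner_of(uintptr_t addr) { return (int)((addr >> 3) % N_MTP); }

    /* Executed by the owning pipeline: one possible dual-memory atomic --
     * add `inc` to *loc1 and store the previous value into *loc2.
     * (In hardware this would be serialized by the owner's ATMU/WCB;
     * here a single-threaded call stands in for that serialization.)    */
    static void dual_fetch_add(int64_t *loc1, int64_t *loc2, int64_t inc)
    {
        int64_t old = *loc1;
        *loc1 = old + inc;
        *loc2 = old;
    }

    int main(void)
    {
        int64_t a = 100, b = 0;
        int owner = owner_of((uintptr_t)&a);          /* pick the owning MTP   */
        printf("request sent to MTP %d\n", owner);
        dual_fetch_add(&a, &b, 5);                    /* the forwarded request */
        printf("a=%lld b=%lld\n", (long long)a, (long long)b);
        return 0;
    }
    ```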
  • Patent number: 10795819
    Abstract: Disclosed embodiments relate to a system with configurable cache sub-domains and cross-die memory coherency. In one example, a system includes R racks, each rack housing N nodes, each node incorporating D dies, each die containing C cores and a die shadow tag, each core including P pipelines and a core shadow tag, each pipeline associated with a data cache and data cache tags and being either non-coherent or coherent and in one of X coherency domains, wherein each pipeline, when needing to read a cache line, issues a read request to its associated data cache, then, if need be, issues a read request to its associated core-level cache, then, if need be, issues a read request to its associated die-level cache, then, if need be, issues a no-cache remote read request to a target die being mapped to hold the cache line.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: October 6, 2020
    Assignee: Intel Corporation
    Inventors: Robert Pawlowski, Bharadwaj Krishnamurthy, Vincent Cave, Jason M. Howard, Ankit More, Joshua B. Fryman
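    The read path in the abstract above escalates a miss from the pipeline's data cache to the core-level cache, then to the die-level cache, and finally to a no-cache remote read from the die mapped to hold the line. A short sketch of that escalation order follows; the lookup functions and the remote-read stub are hypothetical placeholders rather than the patented mechanism.
    ```c
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Stubs standing in for the three local cache levels; each returns
     * true on a hit and fills *val. Here they always miss for brevity.  */
    static bool data_cache_read(uint64_t addr, int *val) { (void)addr; (void)val; return false; }
    static bool core_cache_read(uint64_t addr, int *val) { (void)addr; (void)val; return false; }
    static bool die_cache_read (uint64_t addr, int *val) { (void)addr; (void)val; return false; }

    /* No-cache remote read from the target die mapped to hold the line. */
    static int remote_read(uint64_t addr)
    {
        printf("remote no-cache read for 0x%llx\n", (unsigned long long)addr);
        return 7;  /* placeholder data */
    }

    /* Escalating read: data cache, then core level, then die level,
     * then a remote read, mirroring the order given in the abstract.  */
    static int cache_read(uint64_t addr)
    {
        int val;
        if (data_cache_read(addr, &val)) return val;
        if (core_cache_read(addr, &val)) return val;
        if (die_cache_read(addr, &val))  return val;
        return remote_read(addr);
    }

    int main(void)
    {
        printf("value = %d\n", cache_read(0x1000));
        return 0;
    }
    ```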
  • Publication number: 20200104164
    Abstract: Disclosed embodiments relate to an improved memory system architecture for multi-threaded processors. In one example, a system comprises a multi-threaded processor core (MTPC), the MTPC comprising: P pipelines, each to concurrently process T threads; a crossbar to communicatively couple the P pipelines; a memory for use by the P pipelines; a scheduler to optimize reduction operations by assigning multiple threads to generate results of commutative arithmetic operations and then accumulate the generated results; and a memory controller (MC) to connect with external storage and other MTPCs, the MC further comprising at least one optimization selected from: an instruction set architecture including a dual-memory operation; a direct memory access (DMA) engine; a buffer to store multiple pending instruction cache requests; multiple channels across which to stripe memory requests; and a shadow-tag coherency management unit.
    Type: Application
    Filed: September 28, 2018
    Publication date: April 2, 2020
    Inventors: Robert Pawlowski, Ankit More, Jason M. Howard, Joshua B. Fryman, Tina C. Zhong, Shaden Smith, Sowmya Pitchaimoorthy, Samkit Jain, Vincent Cave, Sriram Aananthakrishnan, Bharadwaj Krishnamurthy
  • Publication number: 20200004602
    Abstract: In one embodiment, a first processor core includes: a plurality of execution pipelines each to execute instructions of one or more threads; a plurality of pipeline barrier circuits coupled to the plurality of execution pipelines, each of the plurality of pipeline barrier circuits associated with one of the plurality of execution pipelines to maintain status information for a plurality of barrier groups, each of the plurality of barrier groups formed of at least two threads; and a core barrier circuit to control operation of the plurality of pipeline barrier circuits and to inform the plurality of pipeline barrier circuits when a first barrier has been reached by a first barrier group of the plurality of barrier groups. Other embodiments are described and claimed.
    Type: Application
    Filed: June 27, 2018
    Publication date: January 2, 2020
    Inventors: Robert Pawlowski, Ankit More, Shaden Smith, Sowmya Pitchaimoorthy, Samkit Jain, Vincent Cavé, Sriram Aananthakrishnan, Jason M. Howard, Joshua B. Fryman
  • Publication number: 20200004587
    Abstract: Embodiments of apparatuses, methods, and systems for a multithreaded processor core with hardware-assisted task scheduling are described. In an embodiment, a processor includes a first hardware thread, a second hardware thread, and a task manager. The task manager is to issue a task to the first hardware thread. The task manager includes a hardware task queue in which to store a plurality of task descriptors. Each of the task descriptors is to represent one of a single task, a collection of iterative tasks, or a linked list of tasks.
    Type: Application
    Filed: June 29, 2018
    Publication date: January 2, 2020
    Inventors: Paul Griffin, Joshua Fryman, Jason Howard, Sang Phill Park, Robert Pawlowski, Michael Abbott, Scott Cline, Samkit Jain, Ankit More, Vincent Cave, Fabrizio Petrini, Ivan Ganev