INSTRUCTION SET ARCHITECTURE WITH PROGRAMMABLE DIRECT MEMORY ACCESS AND EXPANDED FENCE/FLUSH OPERATIONS

In one embodiment, a processor includes decode circuitry and memory offload circuitry. The decode circuitry decodes an instruction to perform a direct memory access (DMA) operation, which includes an opcode and one or more fields. The opcode indicates a type of DMA operation to be performed. The one or more fields indicate a destination memory region and one or more data operands. The memory offload circuitry offloads the instruction from an execution pipeline and performs the DMA operation.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This patent application claims the benefit of the filing date of the following patent applications, the contents of which are hereby expressly incorporated by reference: U.S. Provisional Patent Application Ser. No. 63/293,590, filed on Dec. 23, 2021, and entitled “GRAPH PROCESSING COMPUTING ARCHITECTURE”; and U.S. Provisional Patent Application Ser. No. 63/295,280, filed on Dec. 30, 2021, and entitled “GRAPH PROCESSING COMPUTING ARCHITECTURE.”

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH AND DEVELOPMENT

This invention was made with Government support under Agreement No. HR0011-17-3-0004, awarded by the Defense Advanced Research Projects Agency (DARPA). The Government has certain rights in the invention.

FIELD

The present disclosure relates in general to the field of computer architecture, and more specifically, though not exclusively, to an instruction set architecture with programmable direct memory access and expanded fence/flush operations.

BACKGROUND

Direct memory access (DMA) is a common tool for moving large data structures in the background. Existing DMA implementations typically place the DMA controller in the input/output (I/O) interface and trigger copies via memory-mapped input/output (MMIO) gates. These DMA implementations have relatively limited functionality, however, and they typically only support straightforward transfers of contiguous data from one memory location to another.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates a block diagram of components of an example datacenter.

FIG. 2A is a simplified block diagram illustrating an example graph processing core.

FIG. 2B is a simplified block diagram illustrating an example graph processing device.

FIG. 3A is a simplified block diagram illustrating a simplified example of a graph structure.

FIG. 3B is a simplified block diagram illustrating a representation of an example access stream using an example graph structure.

FIG. 4 is a simplified block diagram illustrating example components of an example graph processing core.

FIG. 5 is a diagram illustrating example operations of an example graph processing core offload engine.

FIG. 6 is a simplified block diagram illustrating an example implementation of a graph processing system including both graph processing cores and dense compute cores.

FIG. 7 is a simplified block diagram illustrating an example system including graph processing capabilities.

FIG. 8 is a representation of an example memory map of an example graph processing system.

FIG. 9 illustrates a block diagram of an example computing architecture with instruction set architecture (ISA) support for programmable direct memory access (DMA).

FIG. 10 illustrates an example implementation of a DMA initialize (dma.init) instruction.

FIG. 11 illustrates an example implementation of a DMA initialize stride (dma.initstride) instruction.

FIG. 12 illustrates an example implementation of a DMA copy (dma.copy) instruction.

FIGS. 13A-B illustrate an example implementation of a DMA copy stride (dma.copystride) instruction with stride passthrough and pack/unpack modes.

FIG. 14 illustrates an example implementation of a DMA copy transpose (dma.copytrans) instruction.

FIG. 15 illustrates an example implementation of a DMA scatter (dma.scatter) instruction.

FIG. 16 illustrates an example implementation of a DMA broadcast (dma.bcast) instruction.

FIG. 17 illustrates an example implementation of a DMA reduce (dma.reduce) instruction.

FIG. 18 illustrates a flowchart for executing programmable DMA instructions in accordance with certain embodiments.

FIG. 19 illustrates a block diagram of an example computing architecture with ISA support for expanded fence operations.

FIG. 20 illustrates a flowchart for executing fence instructions in accordance with certain embodiments.

FIG. 21 illustrates a block diagram of an example computing architecture with ISA support for expanded flush operations.

FIG. 22 illustrates a flowchart for executing flush instructions in accordance with certain embodiments.

FIGS. 23A-23B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to certain embodiments.

FIGS. 24A-D are block diagrams illustrating an exemplary specific vector friendly instruction format according to certain embodiments.

FIG. 25 is a block diagram of a register architecture according to one embodiment.

FIG. 26A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to certain embodiments.

FIG. 26B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to certain embodiments.

FIGS. 27A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip.

FIG. 28 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to certain embodiments.

FIGS. 29, 30, 31, and 32 are block diagrams of exemplary computer architectures.

Like reference numbers and designations in the various drawings indicate like elements.

DETAILED DESCRIPTION

FIG. 1 illustrates a block diagram of components of a datacenter 100 in accordance with certain embodiments. In the embodiment depicted, datacenter 100 includes a plurality of platforms 102 (e.g., 102A, 102B, 102C, etc.), data analytics engine 104, and datacenter management platform 106 coupled together through network 108. A platform 102 may include platform logic 110 with one or more central processing units (CPUs) 112 (e.g., 112A, 112B, 112C, 112D), memories 114 (which may include any number of different modules), chipsets 116 (e.g., 116A, 116B), communication interfaces 118, and any other suitable hardware and/or software to execute a hypervisor 120 or other operating system capable of executing processes associated with applications running on platform 102. In some embodiments, a platform 102 may function as a host platform for one or more guest systems 122 that invoke these applications.

Each platform 102 may include platform logic 110. Platform logic 110 includes, among other logic enabling the functionality of platform 102, one or more CPUs 112, memory 114, one or more chipsets 116, and communication interface 118. Although three platforms are illustrated, datacenter 100 may include any suitable number of platforms. In various embodiments, a platform 102 may reside on a circuit board that is installed in a chassis, rack, composable server, disaggregated server, or other suitable structure that includes multiple platforms coupled together through network 108 (which may include, e.g., a rack or backplane switch).

CPUs 112 may each include any suitable number of processor cores. The cores may be coupled to each other, to memory 114, to at least one chipset 116, and/or to communication interface 118, through one or more controllers residing on CPU 112 and/or chipset 116. In particular embodiments, a CPU 112 is embodied within a socket that is permanently or removably coupled to platform 102. CPU 112 is described in further detail below in connection with FIG. 4. Although four CPUs are shown, a platform 102 may include any suitable number of CPUs.

Memory 114 may include any form of volatile or non-volatile memory including, without limitation, magnetic media (e.g., one or more tape drives), optical media, random access memory (RAM), read-only memory (ROM), flash memory, removable media, or any other suitable local or remote memory component or components. Memory 114 may be used for short, medium, and/or long-term storage by platform 102. Memory 114 may store any suitable data or information utilized by platform logic 110, including software embedded in a computer readable medium, and/or encoded logic incorporated in hardware or otherwise stored (e.g., firmware). Memory 114 may store data that is used by cores of CPUs 112. In some embodiments, memory 114 may also include storage for instructions that may be executed by the cores of CPUs 112 or other processing elements (e.g., logic resident on chipsets 116) to provide functionality associated with components of platform logic 110. Additionally or alternatively, chipsets 116 may each include memory that may have any of the characteristics described herein with respect to memory 114. Memory 114 may also store the results and/or intermediate results of the various calculations and determinations performed by CPUs 112 or processing elements on chipsets 116. In various embodiments, memory 114 may include one or more modules of system memory coupled to the CPUs through memory controllers (which may be external to or integrated with CPUs 112). In various embodiments, one or more particular modules of memory 114 may be dedicated to a particular CPU 112 or other processing device or may be shared across multiple CPUs 112 or other processing devices.

A platform 102 may also include one or more chipsets 116 including any suitable logic to support the operation of the CPUs 112. In some cases, chipsets 116 may be implementations of graph processing devices, such as discussed herein. In various embodiments, chipset 116 may reside on the same package as a CPU 112 or on one or more different packages. Each chipset may support any suitable number of CPUs 112. A chipset 116 may also include one or more controllers to couple other components of platform logic 110 (e.g., communication interface 118 or memory 114) to one or more CPUs. Additionally or alternatively, the CPUs 112 may include integrated controllers. For example, communication interface 118 could be coupled directly to CPUs 112 via one or more integrated I/O controllers resident on each CPU.

Chipsets 116 may each include one or more communication interfaces 128 (e.g., 128A, 128B). Communication interface 128 may be used for the communication of signaling and/or data between chipset 116 and one or more I/O devices, one or more networks 108, and/or one or more devices coupled to network 108 (e.g., datacenter management platform 106 or data analytics engine 104). For example, communication interface 128 may be used to send and receive network traffic such as data packets. In a particular embodiment, communication interface 128 may be implemented through one or more I/O controllers, such as one or more physical network interface controllers (NICs), also known as network interface cards or network adapters. An I/O controller may include electronic circuitry to communicate using any suitable physical layer and data link layer standard such as Ethernet (e.g., as defined by an IEEE 802.3 standard), Fibre Channel, InfiniBand, Wi-Fi, or other suitable standard. An I/O controller may include one or more physical ports that may couple to a cable (e.g., an Ethernet cable). An I/O controller may enable communication between any suitable element of chipset 116 (e.g., switch 130 (e.g., 130A, 130B)) and another device coupled to network 108. In some embodiments, network 108 may include a switch with bridging and/or routing functions that is external to the platform 102 and operable to couple various I/O controllers (e.g., NICs) distributed throughout the datacenter 100 (e.g., on different platforms) to each other. In various embodiments an I/O controller may be integrated with the chipset (i.e., may be on the same integrated circuit or circuit board as the rest of the chipset logic) or may be on a different integrated circuit or circuit board that is electromechanically coupled to the chipset. In some embodiments, communication interface 128 may also allow I/O devices integrated with or external to the platform (e.g., disk drives, other NICs, etc.) to communicate with the CPU cores.

Switch 130 may couple to various ports (e.g., provided by NICs) of communication interface 128 and may switch data between these ports and various components of chipset 116 according to one or more link or interconnect protocols, such as Peripheral Component Interconnect Express (PCIe), Compute Express Link (CXL), HyperTransport, GenZ, OpenCAPI, and others, which may each alternatively or collectively apply the general principles and/or specific features discussed herein. Switch 130 may be a physical or virtual (i.e., software) switch.

Platform logic 110 may include an additional communication interface 118. Similar to communication interface 128, this additional communication interface 118 may be used for the communication of signaling and/or data between platform logic 110 and one or more networks 108 and one or more devices coupled to the network 108. For example, communication interface 118 may be used to send and receive network traffic such as data packets. In a particular embodiment, communication interface 118 includes one or more physical I/O controllers (e.g., NICs). These NICs may enable communication between any suitable element of platform logic 110 (e.g., CPUs 112) and another device coupled to network 108 (e.g., elements of other platforms or remote nodes coupled to network 108 through one or more networks). In particular embodiments, communication interface 118 may allow devices external to the platform (e.g., disk drives, other NICs, etc.) to communicate with the CPU cores. In various embodiments, NICs of communication interface 118 may be coupled to the CPUs through I/O controllers (which may be external to or integrated with CPUs 112). Further, as discussed herein, I/O controllers may include a power manager 125 to implement power consumption management functionality at the I/O controller (e.g., by automatically implementing power savings at one or more interfaces of the communication interface 118 (e.g., a PCIe interface coupling a NIC to another element of the system)), among other example features.

Platform logic 110 may receive and perform any suitable types of processing requests. A processing request may include any request to utilize one or more resources of platform logic 110, such as one or more cores or associated logic. For example, a processing request may include a processor core interrupt; a request to instantiate a software component, such as an I/O device driver 124 or virtual machine 132 (e.g., 132A, 132B); a request to process a network packet received from a virtual machine 132 or device external to platform 102 (such as a network node coupled to network 108); a request to execute a workload (e.g., process or thread) associated with a virtual machine 132, application running on platform 102, hypervisor 120 or other operating system running on platform 102; or other suitable request.

In various embodiments, processing requests may be associated with guest systems 122. A guest system may include a single virtual machine (e.g., virtual machine 132A or 132B) or multiple virtual machines operating together (e.g., a virtual network function (VNF) 134 or a service function chain (SFC) 136). As depicted, various embodiments may include a variety of types of guest systems 122 present on the same platform 102.

A virtual machine 132 may emulate a computer system with its own dedicated hardware. A virtual machine 132 may run a guest operating system on top of the hypervisor 120. The components of platform logic 110 (e.g., CPUs 112, memory 114, chipset 116, and communication interface 118) may be virtualized such that it appears to the guest operating system that the virtual machine 132 has its own dedicated components.

A virtual machine 132 may include a virtualized NIC (vNIC), which is used by the virtual machine as its network interface. A vNIC may be assigned a media access control (MAC) address, thus allowing multiple virtual machines 132 to be individually addressable in a network.

In some embodiments, a virtual machine 132B may be paravirtualized. For example, the virtual machine 132B may include augmented drivers (e.g., drivers that provide higher performance or have higher bandwidth interfaces to underlying resources or capabilities provided by the hypervisor 120). For example, an augmented driver may have a faster interface to underlying virtual switch 138 for higher network performance as compared to default drivers.

VNF 134 may include a software implementation of a functional building block with defined interfaces and behavior that can be deployed in a virtualized infrastructure. In particular embodiments, a VNF 134 may include one or more virtual machines 132 that collectively provide specific functionalities (e.g., wide area network (WAN) optimization, virtual private network (VPN) termination, firewall operations, load-balancing operations, security functions, etc.). A VNF 134 running on platform logic 110 may provide the same functionality as traditional network components implemented through dedicated hardware. For example, a VNF 134 may include components to perform any suitable NFV workloads, such as virtualized Evolved Packet Core (vEPC) components, Mobility Management Entities, 3rd Generation Partnership Project (3GPP) control and data plane components, etc.

SFC 136 is a group of VNFs 134 organized as a chain to perform a series of operations, such as network packet processing operations. Service function chaining 136 may provide the ability to define an ordered list of network services (e.g., firewalls, load balancers) that are stitched together in the network to create a service chain.

A hypervisor 120 (also known as a virtual machine monitor) may include logic to create and run guest systems 122. The hypervisor 120 may present guest operating systems run by virtual machines with a virtual operating platform (i.e., it appears to the virtual machines that they are running on separate physical nodes when they are actually consolidated onto a single hardware platform) and manage the execution of the guest operating systems by platform logic 110. Services of hypervisor 120 may be provided by virtualizing in software or through hardware assisted resources that require minimal software intervention, or both. Multiple instances of a variety of guest operating systems may be managed by the hypervisor 120. Each platform 102 may have a separate instantiation of a hypervisor 120.

Hypervisor 120 may be a native or bare-metal hypervisor that runs directly on platform logic 110 to control the platform logic and manage the guest operating systems. Alternatively, hypervisor 120 may be a hosted hypervisor that runs on a host operating system and abstracts the guest operating systems from the host operating system. Various embodiments may include one or more non-virtualized platforms 102, in which case any suitable characteristics or functions of hypervisor 120 described herein may apply to an operating system of the non-virtualized platform.

Hypervisor 120 may include a virtual switch 138 that may provide virtual switching and/or routing functions to virtual machines of guest systems 122. The virtual switch 138 may include a logical switching fabric that couples the vNICs of the virtual machines 132 to each other, thus creating a virtual network through which virtual machines may communicate with each other. Virtual switch 138 may also be coupled to one or more networks (e.g., network 108) via physical NICs of communication interface 118 so as to allow communication between virtual machines 132 and one or more network nodes external to platform 102 (e.g., a virtual machine running on a different platform 102 or a node that is coupled to platform 102 through the Internet or other network). Virtual switch 138 may include a software element that is executed using components of platform logic 110. In various embodiments, hypervisor 120 may be in communication with any suitable entity (e.g., an SDN controller) which may cause hypervisor 120 to reconfigure the parameters of virtual switch 138 in response to changing conditions in platform 102 (e.g., the addition or deletion of virtual machines 132 or identification of optimizations that may be made to enhance performance of the platform).

Hypervisor 120 may include any suitable number of I/O device drivers 124. I/O device driver 124 represents one or more software components that allow the hypervisor 120 to communicate with a physical I/O device. In various embodiments, the underlying physical I/O device may be coupled to any of CPUs 112 and may send data to CPUs 112 and receive data from CPUs 112. The underlying I/O device may utilize any suitable communication protocol, such as PCI, PCIe, Universal Serial Bus (USB), Serial Attached SCSI (SAS), Serial ATA (SATA), InfiniBand, Fibre Channel, an IEEE 802.3 protocol, an IEEE 802.11 protocol, or other current or future signaling protocol.

The underlying I/O device may include one or more ports operable to communicate with cores of the CPUs 112. In one example, the underlying I/O device is a physical NIC or physical switch. For example, in one embodiment, the underlying I/O device of I/O device driver 124 is a NIC of communication interface 118 having multiple ports (e.g., Ethernet ports).

In other embodiments, underlying I/O devices may include any suitable device capable of transferring data to and receiving data from CPUs 112, such as an audio/video (A/V) device controller (e.g., a graphics accelerator or audio controller); a data storage device controller, such as a flash memory device, magnetic storage disk, or optical storage disk controller; a wireless transceiver; a network processor; or a controller for another input device such as a monitor, printer, mouse, keyboard, or scanner; or other suitable device.

In various embodiments, when a processing request is received, the I/O device driver 124 or the underlying I/O device may send an interrupt (such as a message signaled interrupt) to any of the cores of the platform logic 110. For example, the I/O device driver 124 may send an interrupt to a core that is selected to perform an operation (e.g., on behalf of a virtual machine 132 or a process of an application). Before the interrupt is delivered to the core, incoming data (e.g., network packets) destined for the core might be cached at the underlying I/O device and/or an I/O block associated with the CPU 112 of the core. In some embodiments, the I/O device driver 124 may configure the underlying I/O device with instructions regarding where to send interrupts.

In some embodiments, as workloads are distributed among the cores, the hypervisor 120 may steer a greater number of workloads to the higher performing cores than to the lower performing cores. In certain instances, cores that are exhibiting problems such as overheating or heavy loads may be given fewer tasks than other cores or avoided altogether (at least temporarily). Workloads associated with applications, services, containers, and/or virtual machines 132 can be balanced across cores using network load and traffic patterns rather than just CPU and memory utilization metrics.

The elements of platform logic 110 may be coupled together in any suitable manner. For example, a bus may couple any of the components together. A bus may include any known interconnect, such as a multi-drop bus, a mesh interconnect, a ring interconnect, a point-to-point interconnect, a serial interconnect, a parallel bus, a coherent (e.g., cache coherent) bus, a layered protocol architecture, a differential bus, or a Gunning transceiver logic (GTL) bus.

Elements of the datacenter 100 may be coupled together in any suitable manner, such as through one or more networks 108. A network 108 may be any suitable network or combination of one or more networks operating using one or more suitable networking protocols. A network may represent a series of nodes, points, and interconnected communication paths for receiving and transmitting packets of information that propagate through a communication system. For example, a network may include one or more firewalls, routers, switches, security appliances, antivirus servers, or other useful network devices. A network offers communicative interfaces between sources and/or hosts, and may include any local area network (LAN), wireless local area network (WLAN), metropolitan area network (MAN), Intranet, Extranet, Internet, wide area network (WAN), virtual private network (VPN), cellular network, or any other appropriate architecture or system that facilitates communications in a network environment. A network can include any number of hardware or software elements coupled to (and in communication with) each other through a communications medium. In various embodiments, guest systems 122 may communicate with nodes that are external to the datacenter 100 through network 108.

Current practices in data analytics and artificial intelligence perform tasks such as object classification on unending streams of data. Computing infrastructure for classification is predominantly oriented toward “dense” compute, such as matrix computations. The continuing exponential growth in generated data has shifted some compute to be offloaded to GPUs and other application-focused accelerators across multiple domains that are dense-compute dominated. However, the next step in the evolution of artificial intelligence (AI), machine learning, and data analytics is reasoning about the relationships between these classified objects. In some implementations, a graph structure (or data structure) may be defined and utilized to define relationships between classified objects. For instance, determining the relationships between entities in a graph is the basis of graph analytics. Graph analytics poses significant challenges for existing processor architectures due to its sparse structure.

High-performance, large-scale graph analytics is essential to analyzing relationships in big data sets in a timely manner. The combination of low performance and very large graph sizes has traditionally limited the usability of graph analytics. Indeed, conventional processor architectures suffer from inefficient resource usage and poor scaling on graph workloads. Recognizing both the increasing importance of graph analytics and the need for vastly improved sparse computation performance compared to traditional approaches, an improved system architecture is presented herein that is adapted to perform high-performance graph processing by addressing constraints across the network, memory, and compute architectures that typically limit performance on graph workloads.

FIG. 2A is a simplified block diagram 200a representing the general architecture of an example graph processing core 205. While a graph processing core 205, as discussed herein, may be particularly adept, at an architectural level, at handling workloads to implement graph-based algorithms, it should be appreciated that the architecture of a graph processing core 205 may handle any program developed to utilize its architecture and instruction set, including programs entirely unrelated to graph processing. Indeed, a graph processing core (e.g., 205) may adopt an architecture configured to provide massive multithreading and enhanced memory efficiency to minimize memory latency and hide the latency that remains. The high input/output (I/O) and memory bandwidth of the architecture enable the graph processing core 205 to be deployed in a variety of applications where memory efficiency is at a premium and where the memory bandwidth requirements made by the application are prohibitively demanding for traditional processor architectures. Further, the architecture of the graph processing core 205 may realize this enhanced memory efficiency by granularizing its memory accesses in relatively small, fixed chunks (e.g., 8-byte random accesses to memory), equipping the cores with networking capabilities optimized for corresponding small transactions, and providing extensive multi-threading.

In the example of FIG. 2A, an example graph processing core 205 may include a number of multi-threaded pipelines or cores (MTCs) (e.g., 215a-d) and a number of single-threaded pipelines or cores (STCs) (e.g., 220a-b). In some implementations, the MTCs and STCs may be architecturally the same, but for the ability of the MTCs to support multiple concurrent threads and switch between these threads. For instance, respective MTCs and STCs may have 32 registers per thread, have all state address mapped, and utilize a common instruction set architecture (ISA). In one example, the pipeline/core ISA may be a Reduced Instruction Set Computer (RISC)-based ISA with fixed-length instructions.

In one example, a respective MTC (e.g., 215a-d) may support sixteen threads with only minimal interrupt handling. For instance, each thread in an MTC may execute a portion of a respective instruction, with the MTC switching between the active threads automatically or opportunistically (e.g., switching from executing one thread to the next in response to a load operation by the first thread so as to effectively hide the latency of the load operation (allowing the other thread or threads to operate during the cycles needed for the load operation to complete), among other examples). An MTC thread may be required to finish executing its respective instruction before taking on another. In some implementations, the MTCs may adopt a barrel model, among other features or designs. STCs may execute a single thread at a time and may support full interrupt handling. Portions of a workload handled by a graph processing core 205 may be divided not only between the MTCs (e.g., with sixteen threads per MTC), but also between the MTCs 215a-d and STCs 220a-b. For instance, STCs 220a-b may be optimized for various types of operations (e.g., load-store forwarding, branch predictions, etc.) and programs may make use of STCs for some operations and the multithreading capabilities of the MTCs for other instructions.
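
For purposes of illustration only, the following simplified C sketch models the round-robin (barrel) thread selection described above in software: a thread that has issued a load is skipped until its load completes, so the remaining threads fill the stall cycles. The thread count, load latency, and instruction mix are arbitrary illustrative values, and the code is a software model rather than a description of the disclosed hardware.

    /* Software model of opportunistic round-robin (barrel) multithreading:
     * a thread stalled on a load is skipped so other threads hide its
     * memory latency. All constants are illustrative only. */
    #include <stdio.h>

    #define NUM_THREADS  16   /* e.g., sixteen threads per MTC */
    #define LOAD_LATENCY  4   /* cycles an outstanding load stalls its thread */
    #define TOTAL_CYCLES 64

    struct thread_state {
        int pc;               /* next instruction index for this thread */
        int stall_until;      /* cycle at which the outstanding load completes */
    };

    int main(void) {
        struct thread_state t[NUM_THREADS] = {{0, 0}};
        int next = 0;         /* round-robin starting point */
        int issued = 0;

        for (int cycle = 0; cycle < TOTAL_CYCLES; cycle++) {
            /* Pick the next thread (round-robin) that is not waiting on memory. */
            for (int k = 0; k < NUM_THREADS; k++) {
                int tid = (next + k) % NUM_THREADS;
                if (t[tid].stall_until > cycle)
                    continue;                   /* still stalled: try another thread */
                /* Pretend every fourth instruction is a load with fixed latency. */
                if (t[tid].pc % 4 == 3)
                    t[tid].stall_until = cycle + LOAD_LATENCY;
                t[tid].pc++;
                issued++;
                next = (tid + 1) % NUM_THREADS; /* rotate the starting point */
                break;
            }
        }
        printf("instructions issued over %d cycles: %d\n", TOTAL_CYCLES, issued);
        return 0;
    }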

An example graph processing core 205 may include additional circuitry to implement components such as a scratchpad 245, uncore, and memory controller (e.g., 250). Components of the graph processing core 205 may be interconnected via a crossbar interconnect (e.g., a full crossbar 255) that ties all components in the graph processing core 205 together in a low latency, high bandwidth network. The memory controller 250 may be implemented as a narrow channel memory controller, for instance, supporting a narrow, fixed 8-byte memory channel. Data pulled using the memory controller from memory in the system may be loaded into a scratchpad memory region 245 for use by other components of the graph processing core 205. In one example, the scratchpad may provide 2 MB of scratchpad memory per core (e.g., MTC and STC) and provide dual network ports (e.g., via 1 MB regions).

In some implementations, an uncore region of a graph processing core 205 may be equipped with enhanced functionality to allow the MTCs 215a-d and STCs 220a-b to handle exclusively substantive, algorithmic workloads, with supporting work handled by the enhanced uncore, including synchronization, communication, and data movement/migration. The uncore may perform a variety of tasks including copy and merge operations, reductions, gathers/scatters, packs/unpacks, in-flight matrix transposes, advanced atomics, hardware collectives, parallel prefixes, hardware queuing engines, and so on. The uncore's operations may be invoked, via the ISA, from the pipelines' (MTCs' and STCs') synchronous execution. In one example, the uncore may include components such as a collective engine 260, a queue engine 265, an atomic engine 270, and a memory engine 275, among other example components and corresponding logic. An example memory engine 275 may provide an internal DMA engine for the architecture. The queue engine 265 can orchestrate and queue messaging within the architecture, with messages optimized in terms of (reduced) size to enable very fast messaging within the architecture. An example collective engine 260 may handle various collective operations for the architecture, including reductions, barriers, scatters, gathers, broadcasts, etc. The atomic engine 270 may handle any memory controller lock scenarios impacting the memory controller 250, among other example functionality.

FIG. 2B is a simplified block diagram illustrating an example system 200b with a set of graph processing cores 205a-d. A graph processing node may include a respective graph processing core (e.g., 205a-d) and a corresponding memory (e.g., dynamic random access memory (DRAM) (e.g., 225)). Each node may include a respective graph processing core (e.g., 205), which includes a set of MTCs (e.g., 215) as well as a set of single-thread cores (STCs) (e.g., 220), such as in the example graph processing core 205 illustrated and described above in the example of FIG. 2A. In one example, multiple graph processing nodes may be incorporated in or mounted on the same package or board and interconnected via a high-radix (e.g., multiple (e.g., >3) ports per connection), low-diameter (e.g., of 3 or less) network. The example system 200b may further include interconnect ports (e.g., 230, 235) to enable the system 200b to be coupled to other computing elements, including other types of processing units (e.g., central processing units (CPUs), graphics processing units (GPUs), tensor processing units (TPUs), etc.). In some cases, a graph processing chip, chiplet, board, or device (e.g., system 200b) may be coupled to other graph processing devices (e.g., additional instances of the same type of graph processing system (e.g., 200b)). In some implementations, interconnects 230, 235 may be utilized to couple to other memory devices, allowing this external memory and local DRAM (e.g., 225) to function as shared system memory of the graph processing nodes for use by graph processing cores and other logic of the graph processing nodes, among other examples.

FIG. 3A is a simplified representation of an example graph structure 300. The graph structure may be composed of multiple interconnected nodes (e.g., 305, 310, 315, 320, 325, 330, 335). An edge is defined by the interface between one graph node and a respective neighboring graph node. Each node may be connected to one or more other nodes in the graph. The sparseness of graph data structures leads to scattered and irregular memory accesses and communication, challenging the decades-long optimizations made in traditional dense compute solutions. As an example, consider the common case of pushing data along the graph edges (e.g., with reference to the simplified graph 300 example of FIG. 3A). All vertices initially store a value locally and then proceed to add their value to all neighbors along outgoing edges. This basic computation is ubiquitous in graph algorithms. FIG. 3B is a representation 350 of an example access stream (e.g., from node 1 (305)), which illustrates the irregularity and lack of locality in such operations, making conventional prefetching and caching effectively useless.
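
As a concrete illustration of this push pattern, the following C sketch applies it to a generic compressed sparse row (CSR) adjacency layout (an assumption made here for illustration; the disclosure does not mandate a particular graph representation). The indirect accesses through the column-index array are what produce the scattered access stream represented in FIG. 3B.

    /* Illustrative C sketch of pushing vertex values along outgoing edges
     * over a generic CSR adjacency structure. row_ptr[v]..row_ptr[v+1]
     * bound the out-edges of vertex v, and col_idx[] holds the neighbor
     * identifiers those edges point to. */
    void push_values(int num_vertices,
                     const int *row_ptr, const int *col_idx,
                     const double *value, double *accum)
    {
        for (int v = 0; v < num_vertices; v++) {
            for (int e = row_ptr[v]; e < row_ptr[v + 1]; e++) {
                int neighbor = col_idx[e];     /* sparse, data-dependent index */
                accum[neighbor] += value[v];   /* scattered write to accum[] */
            }
        }
    }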

More generally, graph algorithms face several major scalability challenges on traditional CPU and GPU architectures, because of the irregularity and sparsity of graph structures. For instance, in traditional cache-based processor architectures, which utilize prefetching, the execution of graph applications may suffer from inefficient cache and bandwidth utilization. Due to the sparsity of graph structures, caches used in such applications are thrashed with single-use sparse accesses and useless prefetches where most (e.g., 64 byte) memory fetches contain only a small amount (e.g., 8-bytes out of 64) of useful data. Further, overprovisioning memory bandwidth and/or cache space to cope with sparsity is inefficient in terms of power consumption, chip area and I/O pin count.

Further analysis of graph algorithms shows additional problems in optimizing performance. For instance, in the execution of graph algorithms, the computations may be irregular in character—they exhibit skewed compute time distributions, encounter frequent control flow instructions, and perform many memory accesses. For instance, for an example graph-based link analysis algorithm for a search engine, the compute time for a vertex in the algorithm is proportional to the number of outgoing edges (degree) of that vertex. Graphs such as the one illustrated in FIG. 3A may have skewed degree distributions, and thus the work per vertex has a high variance, leading to significant load imbalance. Graph applications may be heavy on branches and memory operations. Furthermore, conditional branches are often data dependent, e.g., checking the degree or certain properties of vertices, leading to irregular and therefore hard to predict branch outcomes. Together with the high cache miss rates caused by the sparse accesses, conventional performance oriented out-of-order processors are largely underutilized: most of the time they are stalled on cache misses, while a large part of the speculative resources is wasted due to branch mispredictions.

As additional example shortcomings of conventional computing architectures' ability to handle graph processing, graph algorithms require frequent fine- and coarse-grained synchronization. For example, fine-grained synchronizations (e.g., atomics) may be required in a graph algorithm to prevent race conditions when pushing values along edges. Synchronization instructions that resolve in the cache hierarchy place a large stress on the cache coherency mechanisms for multi-socket systems, and all synchronizations incur long round-trip latencies on multi-node systems. Additionally, the sparse memory accesses result in even more memory traffic for synchronizations due to false sharing in the cache coherency system. Coarse-grained synchronizations (e.g., system-wide barriers and prefix scans) fence the already-challenging computations in graph algorithms. These synchronizations have diverse uses including resource coordination, dynamic load balancing, and the aggregation of partial results. These synchronizations can dominate execution time on large-scale systems due to high network latencies and imbalanced computation.
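
The need for fine-grained synchronization can be made concrete with a short C sketch. When several threads push values along edges concurrently, the read-modify-write on a shared neighbor entry must be atomic or updates can be lost; C11 atomics are used below purely as a generic stand-in for such fine-grained synchronization, whereas the architecture described herein may instead resolve these operations with remote atomics at the memory controller, as discussed below.

    /* Sketch: an atomic accumulate into a shared neighbor entry, using C11
     * atomics as a generic stand-in for fine-grained synchronization. */
    #include <stdatomic.h>

    void push_value_atomic(_Atomic double *accum, int neighbor, double contribution)
    {
        double expected = atomic_load(&accum[neighbor]);
        /* Retry until the compare-exchange installs expected + contribution. */
        while (!atomic_compare_exchange_weak(&accum[neighbor], &expected,
                                             expected + contribution)) {
            /* expected is reloaded automatically on failure */
        }
    }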

Additionally, current commercial graph databases may be quite large (e.g., exceeding 20 TB as an in-memory representation). Such large problems may exceed the capabilities of even a rack of computational nodes of any type, requiring a large-scale multi-node platform simply to house the graph's working set. When combined with the prior observations—poor memory hierarchy utilization, high control flow changes, frequent memory references, and abundant synchronizations—reducing the latency to access remote data becomes a challenge, as does hiding the remaining latency within the processing elements, among other example considerations. The limitations of traditional architectures in effectively handling graph algorithms extend beyond CPUs to traditional GPUs—sparse accesses prevent memory coalescing, branches cause thread divergence, and synchronization limits thread progress. While GPUs may have more threads and much higher memory bandwidth, GPUs have limited memory capacity and limited scale-out capabilities, which means that they are unable to process large, multi-TB graphs. Furthermore, where graphs are extremely sparse (<<1% non-zeros), typical GPU memory usage is orders of magnitude less efficient, making GPUs all but unusable outside of the smallest graphs, among other example issues.

An improved computing system architecture may be implemented in computing systems to enable more efficient (e.g., per watt performance) graph analytics. In one example, specialized graph processing cores may be networked together in a low-diameter, high-radix manner to more efficiently handle graph analytics workloads. The design of such graph processing cores builds on the observations that most graph workloads have abundant parallelism, are memory bound, and are not compute intensive. These observations call for many simple pipelines, with multi-threading to hide memory latency. Returning to the discussion of FIG. 2A, such graph processing cores may be implemented as multi-threaded cores (MTCs), which are round-robin multi-threaded in-order pipelines. In one implementation, at any moment, each thread in an MTC can only have one in-flight instruction, which considerably simplifies the core design for better energy efficiency. Single-threaded cores (STCs) are used for single-thread performance sensitive tasks, such as memory and thread management threads (e.g., from the operating system). These are in-order stall-on-use cores that are able to exploit some instruction and memory-level parallelism, while avoiding the high power consumption of aggressive out-of-order pipelines. In some implementations, both MTCs and STCs may implement the same custom RISC instruction set.

Turning to FIG. 4, a simplified block diagram 400 is shown illustrating example components of an example graph processing core device (e.g., 205). A graph processing core device may include a set of multi-threaded cores (MTCs) (e.g., 215). In some instances, both multi-threaded cores and single-threaded cores (STCs) may be provided within a graph processing block. Further, each core may have a small data cache (D$) (e.g., 410), an instruction cache (I$) (e.g., 415), and a register file (RF) (e.g., 420) to support its thread count. Because of the low locality in graph workloads, no higher cache levels need be included, avoiding useless chip area and power consumption of large caches. For scalability, in some implementations, caches are not coherent. In such implementations, programs that are to be executed using the system may be adapted to avoid modifying shared data that is cached, or to flush caches if required for correctness. As noted above, in some implementations, MTCs and STCs are grouped into blocks, each of which may be provided with a large local scratchpad (SPAD) memory 245 for low latency storage. Programs run on such platforms may select which memory accesses to cache (e.g., local stack), which to put on SPAD (e.g., often-reused data structures or the result of a direct memory access (DMA) gather operation), and which not to store locally. Further, prefetchers may be omitted from such architectures to avoid useless data fetches and to limit power consumption. Instead, some implementations may utilize offload engines or other circuitry to efficiently fetch large chunks of useful data.

Continuing with this example, although the MTCs of an example graph processing core hide some of the memory latency by supporting multiple concurrent threads, an MTC may adopt an in-order design, which limits the number of outstanding memory accesses to one per thread. To increase memory-level parallelism and to free more compute cycles to the graph processing core, a memory offload engine (e.g., 430) may be provided for each block. The offload engine performs memory operations typically found in many graph applications in the background, while the cores continue with their computations. Turning to FIG. 5, a simplified block diagram 500 is shown illustrating example operations of an example graph processing core offload engine (e.g., 430) including atomics 505 and gather operations 510, among other examples. Further, a direct memory access (DMA) engine may perform operations such as (strided) copy, scatter, and gather. Queue engines may also be provided, which are responsible for maintaining queues allocated in shared memory, alleviating the core from atomic inserts and removals, among other example benefits. The logic of an offload engine can be used for work stealing algorithms and dynamically partitioning the workload. Further, the offload engines can implement efficient system-wide reductions and barriers. Remote atomics perform atomic operations at the memory controller where the data is located, instead of burdening the pipeline with first locking the data, moving the data to the core, updating it, writing back, and unlocking. They enable efficient and scalable synchronization, which is indispensable for the high thread count in this improved graph-optimized system architecture. The collective logic (or engines) of the offload engines may be directed by the graph processing cores using specific instructions defined in an instruction set. These instructions may be non-blocking, enabling the graph processing cores to perform other work while these memory management operations are performed in the background. Custom polling and waiting instructions may also be included within the instruction set architecture (ISA) for use in synchronizing the threads and offloaded computations, among other example features. In some implementations, example graph processing cores and chipsets may not rely on any locality. Instead, the graph processing cores may collectively use their offload engines to perform complex system-wide memory operations in parallel, and only move the data that is eventually needed to the core that requests it. For example, a DMA gather will not move the memory-stored indices or addresses of the data elements to gather to the requesting core, but only the requested elements from the data array.
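
The interaction pattern between a core and its offload engine can be sketched as follows. The intrinsic names used here (dma_gather_async and offload_poll) are hypothetical placeholders for the non-blocking offload and custom polling/waiting instructions described above and are not mnemonics from the disclosed ISA; the sketch shows only the intended overlap of computation with a background gather.

    /* Hypothetical sketch: issue a non-blocking DMA gather to an offload
     * engine, keep computing, then wait for completion. The declarations
     * below are illustrative placeholders, not actual ISA intrinsics. */
    typedef int dma_handle_t;

    dma_handle_t dma_gather_async(double *dst_spad, const double *src,
                                  const int *indices, int count);
    int offload_poll(dma_handle_t handle);    /* nonzero once the gather is done */
    void do_unrelated_work(void);
    void consume_gathered(const double *spad_buf, int count);

    void gather_then_compute(double *spad_buf, const double *vec,
                             const int *indices, int count)
    {
        dma_handle_t h = dma_gather_async(spad_buf, vec, indices, count);

        do_unrelated_work();                  /* core continues in the foreground */

        while (!offload_poll(h))              /* custom polling/waiting step */
            ;                                 /* spin (or switch threads) until done */

        consume_gathered(spad_buf, count);    /* only gathered elements arrive */
    }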

Returning to FIG. 4, an example graph processing device may additionally include a memory controller 250 to access and manage requests of local DRAM. Further, sparse and irregular accesses to a large data structure are typical for graph analysis applications. Therefore, accesses to remote memory should be done with minimal overhead. An improved system architecture, such as introduced above, utilizing specialized graph processing cores adapted for processing graph-centric workloads may, in some implementations, implement a hardware distributed global address space (DGAS), which enables respective cores (e.g., graph processing cores or supporting dense compute cores) to uniformly access memory across the full system, which may include multiple nodes (e.g., each with a graph processing core, corresponding memory, and memory management hardware) with one address space. Accordingly, a network interface (e.g., 440) may be provided to facilitate network connections between processing cores (e.g., on the same or different die, package, board, rack, etc.).

Besides avoiding the overhead of setting up communication for remote accesses, a DGAS also greatly simplifies programming, because there is no implementation difference between accessing local and remote memory. Further, in some implementations, address translation tables (ATT) may be provided, which contain programmable rules to translate application memory addresses to physical locations, to arrange the address space to the needs of the application (e.g., address interleaved, block partitioned, etc.). Memory controllers may be provided within the system (e.g., one per block) to natively support relatively small cache lines (e.g., 8 byte accesses, rather than 64 byte accesses), while supporting standard cache line accesses as well. Such components may enable only the data that is actually needed to be fetched, thereby reducing memory bandwidth pressure and utilizing the available bandwidth more efficiently.
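
By way of illustration, the two example arrangements mentioned above (address interleaving and block partitioning) can be expressed as simple mapping rules. The C sketch below is a software model of such rules with arbitrarily chosen constants; it is not a description of the ATT hardware itself.

    /* Software model of two example address-arrangement rules of the kind a
     * programmable address translation table might encode. Constants are
     * illustrative only. */
    #include <stdint.h>

    #define NUM_NODES    16
    #define LINE_BYTES    8           /* narrow 8-byte access granularity */
    #define BLOCK_BYTES  (1ULL << 30) /* 1 GiB per node for block partitioning */

    struct phys_loc { uint32_t node; uint64_t offset; };

    /* Interleaved: consecutive 8-byte lines rotate across the nodes. */
    struct phys_loc map_interleaved(uint64_t addr) {
        uint64_t line = addr / LINE_BYTES;
        struct phys_loc loc = {
            (uint32_t)(line % NUM_NODES),
            (line / NUM_NODES) * LINE_BYTES + addr % LINE_BYTES
        };
        return loc;
    }

    /* Block partitioned: each node owns one large contiguous region. */
    struct phys_loc map_block(uint64_t addr) {
        struct phys_loc loc = { (uint32_t)(addr / BLOCK_BYTES), addr % BLOCK_BYTES };
        return loc;
    }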

As noted above, a system, implemented as a chiplet, board, rack, or other platform, may include multiple interconnected graph processing cores, among other hardware elements. FIG. 6 is a simplified block diagram 600 showing an example implementation of a graph processing system 602 including a number of graph processing cores (e.g., 205a-h) each coupled to a high-radix, low-diameter network to interconnect all of the graph processing cores in the system. In this example implementations, the system may further include dense compute cores (e.g., 605a-h) likewise interconnected. In some instances, kernel functions, which would more efficiently be executed using dense compute logic may be offloaded from the graph processing cores to one or more of the dense compute cores. The graph processing cores may include associated memory blocks, which may be exposed to programmers via their own memory maps. Memory controllers (MC) (e.g., 610) may be provided in the system to other memory, including memory external to the system (e.g., on a different die, board, or rack). High speed input/output (HSIO) circuitry (e.g., 615) may also be provided on the system to enable core blocks and devices to couple to other computing devices, such as compute, accelerator, networking, and/or memory devices external to the system, among other examples.

A network may be provided in a system to interconnect the components within the system (e.g., on the same SoC or chiplet die, etc.), and the attributes of the network may be specially configured to support and enhance the graph processing efficiencies of the system. Indeed, the network connecting the blocks is responsible for sending memory requests to remote memory controllers. Similar to the memory controller, it is optimized for small messages (e.g., 8 byte messages). Furthermore, due to the high fraction of remote accesses, network bandwidth may exceed local DRAM bandwidth, which is different from conventional architectures that assume higher local traffic than remote traffic. To obtain high bandwidth and low latency to remote blocks, the network needs a high radix and a low diameter. Various topologies may be utilized to implement such network dimensions and characteristics. In one example, a HyperX topology may be utilized, with all-to-all connections on each level. In some implementations, links on the highest levels are implemented as optical links to ensure power-efficient, high-bandwidth communication. The hierarchical topology and optical links enable the system to efficiently scale out to many nodes, maintaining easy and fast remote access.

FIG. 7 is a simplified block diagram showing the use of an example graph processing system (incorporating graph processing cores, such as discussed above) in a server system. A graph processing device (e.g., 705) may be provided with a set of graph processing cores (and in some cases, supplemental dense compute cores). A graph processing device 705 may enable specialized processing support to handle graph workloads with small and irregular memory accesses through near-memory atomics, among other features, such as discussed above. Multiple such graph processing devices (e.g., 705, 715, 720, 725, etc.) may be provided on a board, rack, blade, or other platform (e.g., 710). In some implementations, the platform system 710 may include not only an interconnected network of graph processing devices (and their constituent graph processing cores), but the system 710 may further include general purpose processors (e.g., 730), SoC devices, accelerators, memory elements (e.g., 735), as well as additional switches, fabrics, or other circuitry (e.g., 740) to interconnect and facilitate the communication of data between devices (e.g., 705-740) on the platform. The system 710 may adopt a global memory model and be interconnected consistent with the networking and packaging principles described herein to enable high I/O and memory bandwidth.

In some implementations, the system 710 may itself be capable of being further connected to other systems, such as other blade systems in a server rack system (e.g., 750). Multiple systems within the server system 750 may also be equipped with graph processing cores to further scale the graph processing power of a system. Indeed, multiple servers full of such graph processing cores may be connected via a wider area network (e.g., 760) to further scale such systems. The networking of such devices using the proposed graph processing architecture offers networking as a first-class citizen, supports point-to-point messaging, and relies upon a flattened latency hierarchy, among other example features and advantages.

In one example system, a C/C++ compiler (e.g., based on LLVM) may be utilized in the development of software for use with the graph processing systems described herein. For instance, the compiler may support a Reduced Instruction Set Computer (RISC) instruction set architecture (ISA) of the graph processing system, including basic library functions. In some implementations, graph-processing-specific operations, such as the offload engines and remote atomics, are accessible using intrinsics. Additionally, the runtime environment of the system may implement basic memory and thread management, supporting common programming models, such as gather-apply-scatter, task-based, and single program, multiple data (SPMD)-style parallelism. Among other tools, an architectural simulator for the graph processing architecture may be provided to simulate the timing of all instructions in the pipelines, engines, memory, and network, based on the hardware specifications. Additional software development tools may be provided to assist developers in developing software for such graph processing systems, such as tools to simulate execution of the software, generate performance estimations of running a workload on the system, and produce performance analysis reports (e.g., CPI stacks and detailed performance information on each memory structure and each instruction), among other example features. Such tools may enable workload owners to quickly detect bottleneck causes, and to use these insights to optimize the workload for graph processing systems.

In some implementations, software developed to perform graph analytics using the improved graph processing architecture discussed herein may be implemented as basic kernels, with limited library overhead. In networked systems of multiple graph processing cores, the application code does not need to change for multinode execution, thanks to the system-wide shared memory. As an example, a software application may be written to cause the graph processing system to perform a sparse matrix dense vector multiplication (SpMV) algorithm. The basic operation of SpMV may include a multiply-accumulate of sparse matrix elements and a dense vector. A matrix input may be provided (e.g., an RMAT-30 synthetic matrix) stored in compressed sparse row (CSR) format. In one example, a straightforward implementation of SpMV may be programmed, with each thread of the graph processing cores calculating one or more elements of the result vector. The rows are partitioned across the threads based on the number of non-zeros for a balanced execution. It does not make use of DMA operations, and all accesses are non-cached at a default length (e.g., 8-byte), with thread-local stack accesses cached by default. Such an implementation may outperform high performance CPU architectures (e.g., Intel Xeon™) through the use of a higher thread count and 8-byte memory accesses, avoiding memory bandwidth saturation. In other implementations, the SpMV algorithm may be programmed to execute on the graph processing architecture utilizing selective caching. For instance, accesses to the matrix values are cached, while the sparse accesses to the vector bypass caches. In the compressed sparse row (CSR) representation of a sparse matrix, all non-zero elements on a row are stored consecutively and accessed sequentially, resulting in spatial locality. The dense vector, on the other hand, is accessed sparsely, because only a few of its elements are needed for the multiply-accumulate (the indices of the non-zeros in the row of the matrix). Accordingly, the accesses to the matrix are cached, while the vector accesses remain uncached 8-byte accesses, leading to a further potential performance improvement relative to CPU architectures. Further, an implementation of the SpMV algorithm may be enhanced using the graph processing architecture, for instance, by using a DMA gather operation to fetch the elements of the dense vector that are needed for the current row from memory. These elements may then be stored on the local scratchpad. The multiply-accumulate reduction is then done by the core, fetching the matrix elements from cache and the vector elements from scratchpad. Not only does this significantly reduce the number of load instructions, it also reduces data movement: the index list does not need to be transferred to the requesting core, only the final gathered vector elements. While data is gathered, the thread is stalled, allowing other threads that have already fetched their data to compute a result vector element.
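
The per-row multiply-accumulate at the heart of the SpMV example above can be sketched in plain C as follows. The DMA gather is modeled here as an explicit copy into a local buffer that stands in for scratchpad memory, and the cached-matrix versus uncached-vector distinction is a hardware property that portable C cannot express; the buffer bound and function shape are illustrative assumptions rather than part of the disclosed implementation.

    /* Plain-C sketch of one row of the CSR SpMV kernel described above. */
    #define MAX_ROW_NNZ 1024   /* illustrative bound on non-zeros per row */

    void spmv_row(int row,
                  const int *row_ptr, const int *col_idx, const double *val,
                  const double *x, double *y)
    {
        double gathered[MAX_ROW_NNZ];   /* stands in for the scratchpad (SPAD) */
        int begin = row_ptr[row], end = row_ptr[row + 1];

        /* "DMA gather": fetch only the needed elements of the dense vector. */
        for (int e = begin; e < end; e++)
            gathered[e - begin] = x[col_idx[e]];

        /* Multiply-accumulate: matrix values are read sequentially (cache
         * friendly), vector elements come from the local/scratchpad buffer. */
        double sum = 0.0;
        for (int e = begin; e < end; e++)
            sum += val[e] * gathered[e - begin];
        y[row] = sum;
    }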

Programs, such as the examples above, may be designed to effectively use the graph processing architecture (e.g., using more than 95% of the available memory bandwidth, while not wasting bandwidth on useless and sparse accesses) and realize significant efficiency improvements over traditional architectures. Further, the improved graph processing architecture provides much higher thread count support (e.g., 144 threads for Xeon versus thousands of threads (e.g., 16,000+) in the graph processing core implementation), enabling threads to progress while others are stalled on memory operations, efficient small-size local and remote memory operations, and powerful offload engines that allow for more memory/compute overlap. Scaling graph processing systems (e.g., with multiple nodes) may yield compounding benefits (even if not perfectly linear, for instance, due to larger latencies and bandwidth restrictions or other example issues) to significantly outperform conventional multi-node processor configurations. While the examples focus on an SpMV algorithm, it should be appreciated that SpMV is but one of many example graph algorithms. Similar programs may be developed to leverage the features of a graph processing architecture to more efficiently perform other graph-based algorithms, including application classification, random walks, graph search, Louvain community detection, TIES sampler, Graph2Vec, Graph Sage, Graph Wave, parallel decoding FST, geolocation, breadth-first search, sparse matrix-sparse vector multiplication (SpMSpV), among other examples.

As noted above, sparse workloads exhibit a large number of random remote memory accesses and have been shown to be heavily network and memory bandwidth-intensive and less dependent on compute capability. While the graph processing architecture discussed herein provides efficient support for workloads that are truly sparse (and may be alternatively referred to as “sparse compute” devices), such a graph processing architecture lacks sufficient compute performance to execute dense kernels (e.g., matrix multiply, convolution, etc.) at the performance required by some applications. Dense kernels are a critical component of many compute applications, such as image processing. Even with matrix computation units included, a challenge remains in effectively integrating dense compute and offloading operations with regard to memory movement, matrix operation definition, and controllability across multiple threads.

Traditional offloading techniques (e.g., for offloading to an on-chip accelerator in an SoC) include memory-mapped registers. For instance, the pipeline/core can perform the offload of the computation by writing to memory-mapped registers present inside the accelerator. These registers may specify configurations as well as data needed to be used for the computation. This may also require the pipeline to monitor/poll registers if it is not sure that the offload engine is idle. In one example of a graph processing system, an enhanced offload mechanism may be used to offload dense compute work from the graph processing cores to dense compute cores. A hardware-managed queue stores incoming offload instructions, monitors the current status of the pipeline, and launches the instructions sequentially, enabling an easy offload mechanism for the software. Multiple graph processing core threads can each use the dense compute bandwidth of the dense compute cores by calling a new ISA function (e.g., by calling the dense.func instruction) without worrying about the status of the dense core and whether other cores are using the dense core at the same time. The offload instruction also allows efficient and simple passing of the program counter and operand addresses to one of the dense compute cores. The queue exposes metrics through software-readable registers (e.g., the number of instructions waiting, in a COUNT value) and can help in tracking average waiting requests and other statistics for any dense core.

As noted above, a graph processing architecture may be particularly suited to operate on sparse workloads exhibiting a large number of random remote memory accesses and that are heavily network and memory bandwidth-intensive and less dependent on compute capability. To efficiently address this workload space, a graph processing architecture has a highly scalable low-diameter and high-radix network and many optimized memory interfaces on each die in the system. While this architectural approach provides efficient support for workloads that are truly sparse, providing a system with graph processing cores alone lacks sufficient compute performance to execute dense kernels (e.g., matrix multiply, convolution, etc.) that may be utilized in some applications. To address this performance gap, some systems incorporating a graph processing architecture may further include dense compute cores in addition to the graph processing cores, such as illustrated in the example of FIG. 6. In this example, eight dense compute cores (e.g., 605a-h) are included on each die of a graph processing device (e.g., 602) to be incorporated in a system. In such implementations, kernel functions are offloaded from threads in the graph processing cores (e.g., 205a-h) to any dense core 605a-h in the system 602 via directed messages.

In one example implementation, the compute capability within each dense core is implemented with a 16×16 reconfigurable spatial array of compute elements or systolic array (also referred to herein as a “dense array (DA)”). In some implementations, the reconfigurable array of compute elements of a dense compute core may be implemented as a multi-dimensional systolic array. This array is capable of a variety of floating point and integer operations of varying precisions. In this example, at a 2 GHz operating frequency, a single dense core can achieve a peak performance of 1 TFLOP of double-precision FMAs. Respective dense cores may have a control pipeline responsible for configuring the DA, executing DMA operations to efficiently move data into local buffers, and moving data into and out of the DA to execute the dense computation. The specific characteristics (e.g., memory locations, compute types, and data input sizes) of the operations vary based on the corresponding kernel. These kernels are programmed by software and launched on the control pipeline at a desired program counter (PC) value.

In some implementations, graph processing cores within a system that also includes dense compute cores may include a dense offload queue and corresponding hardware circuitry to perform offloads from the graph processing core to the dense compute core's control pipeline. This offload pipeline is managed in hardware through the dense offload queues (DOQs), thereby simplifying programmability for the software offloading the dense compute. With full hardware management, there is no need for software to check for the idleness of the dense compute core or to manage the contents and ordering of the queue, among other example benefits. The hardware circuitry managing the DOQs may also handle passing of the required program counter (PC) information, the operand, and the result matrix addresses to the control pipeline in a simple manner, among other example features.

In some implementations, a specialized instruction in the graph processing architecture ISA may be provided as a handle for initiating a request to a dense compute core. For instance, the software may use a dense function ISA instruction (e.g., ‘dense.func’) to trigger the offloading of a task from a graph processing core to a dense compute core by sending an instruction packet from the graph processing core, over the network interconnecting the cores, to one of the dense compute cores. The request may include the address of the target dense compute core, which may be used by the network to route the packet to the appropriate dense compute core. The request packet may be received at the dense offload queue (DOQ) corresponding to the targeted dense compute core.
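
For illustration only, the following sketch shows how software might wrap such an offload; dense_func_offload() and its argument list (target dense core address, kernel program counter, operand address, result address) are hypothetical and do not reflect the actual encoding of the ‘dense.func’ instruction.

    #include <stdint.h>

    /* Hypothetical wrapper standing in for the 'dense.func' instruction; on
     * the described architecture this would send a request packet over the
     * network to the dense offload queue (DOQ) of the targeted dense core. */
    static inline void dense_func_offload(uint64_t dense_core_addr,
                                          uint64_t kernel_pc,
                                          uint64_t operand_addr,
                                          uint64_t result_addr) {
        (void)dense_core_addr; (void)kernel_pc;
        (void)operand_addr; (void)result_addr;   /* placeholder body */
    }

    void offload_dense_kernel(uint64_t dense_core_addr, uint64_t kernel_pc,
                              uint64_t operand_addr, uint64_t result_addr) {
        dense_func_offload(dense_core_addr, kernel_pc, operand_addr, result_addr);
        /* The issuing graph processing thread continues immediately; the DOQ
         * launches the kernel on the dense core's control pipeline when the
         * dense core is free, without software polling. */
    }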

FIG. 8 is a diagram representing an example memory map 800 utilized in a system including a set of graph processing cores and dense compute cores. Such systems may be further interconnected with other devices (including other instances of similar graph processing devices) in various system topologies. For instance, a first portion 805 of the memory address space may correspond to a core's local scratchpad, with a second portion 810 of the memory map dedicated to identifying the specific core (e.g., graph processing core, dense compute core, or other core provided on the system). In one example, eight cores may be provided per die and sixteen dies may be provided per compute sub-node. Accordingly, a third portion 815 of the memory map may be reserved to address dies (which may be addressed differently between graph processing dies and dense compute dies) and a fourth portion 820 reserved for sub-node addressing. In some implementations, two or more sub-nodes may be provided per node and nodes may be interconnected on a server rack. Still further, multiple server racks may be interconnected in a network to further expand the overall graph compute capacity of a system, and so on. As illustrated in FIG. 8, corresponding portions (e.g., 825, 830) may be provided for addressing at the node, rack, and even system levels, among other examples corresponding to other network topologies.
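
Purely as an illustration of how such a memory map composes a distributed global address, the following sketch packs the fields described above into a 64-bit address; all field widths and positions here are assumptions chosen only to match the example counts given (eight cores per die, sixteen dies per sub-node, two sub-nodes per node) and do not reflect the actual map 800.

    #include <stdint.h>

    /* Assumed (illustrative) field widths: 32-bit local scratchpad offset,
     * 3 bits for 8 cores/die, 4 bits for 16 dies/sub-node, 1 bit for
     * 2 sub-nodes/node; remaining upper bits address node/rack/system. */
    #define LOCAL_OFFSET_BITS 32
    #define CORE_BITS          3
    #define DIE_BITS           4
    #define SUBNODE_BITS       1

    uint64_t dgas_address(uint64_t node, uint64_t subnode, uint64_t die,
                          uint64_t core, uint64_t local_offset) {
        uint64_t addr = local_offset;                                  /* portion 805 */
        addr |= core << LOCAL_OFFSET_BITS;                             /* portion 810 */
        addr |= die << (LOCAL_OFFSET_BITS + CORE_BITS);                /* portion 815 */
        addr |= subnode << (LOCAL_OFFSET_BITS + CORE_BITS + DIE_BITS); /* portion 820 */
        addr |= node << (LOCAL_OFFSET_BITS + CORE_BITS + DIE_BITS + SUBNODE_BITS);
        return addr;                            /* upper bits: portions 825, 830, ... */
    }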

Instruction Set Architecture with Highly-Programmable Direct Memory Access Operations

As the size of datasets continues to grow, the cost of data movement is becoming an increasingly significant performance bottleneck. This is true for a variety of applications that have different requirements relating to data structuring and format. For example, dense artificial intelligence (AI) and machine learning (ML) workloads have packed data structures that can be sequentially moved in an efficient manner. However, sparse workloads have data inserted in more random patterns and are typically referenced using a compressed sparse row (CSR) format. Therefore, it is not efficient to move these data structures in the same manner as packed data.

The computing architecture described throughout this disclosure is a highly scalable non-uniform memory access (NUMA) system. For example, with reference to the system 200a of FIG. 2B, each die includes multiple cores 210 (e.g., eight cores), and each core 210 includes its own local scratchpad and local DRAM channel. As the system scales out to multiple dies per node 205 and multiple nodes 205 in the total system, all scratchpads and DRAM are accessible from any core 210 via the address map in the distributed global address space (DGAS) of the system. Therefore, the latency cost of accessing data structures can fluctuate significantly depending on whether those structures are on the same die or a remote die (e.g., potentially on the other side of the system).

A common method to hide the latency of remote data access is software prefetching, where the remote data is copied to the local scratchpad preceding its use by the active thread. Offloading this operation from the pipeline allows it to be performed in the background while the pipeline itself is executing more immediate work. Typical approaches involve the use of direct memory access (DMA) operations to handle this work.

DMA operations are a common tool for moving large data structures in the background. Existing DMA implementations typically place the DMA controller in the input/output (I/O) interface and trigger copies via memory-mapped input/output (MMIO) gates. In these implementations, DMA operations are not exposed in the instruction set architecture (ISA)—rather, DMA operations are performed via loads/stores to the MMIO registers. Moreover, these DMA implementations do not provide adequate programmability, particularly with respect to packing/unpacking data during transfer, reading of CSR lists for scatter/gather, and performing an atomic operation between the source and destination data, among other examples.

Accordingly, this disclosure presents embodiments of a computing architecture with instruction set architecture (ISA) support for highly-programmable DMA operations. In some embodiments, for example, DMA operations are implemented via an offload engine local to each pipeline in the system. Moreover, the ISA provides custom direct memory access (DMA) instructions, which can be offloaded from the pipeline and carried out in the background of the issuing thread. These DMA instructions are enabled by the custom offload/execution engines. Further, the DMA instructions expand on existing DMA capabilities by supporting a wider range of functions and including significantly more programmability within each function. In some embodiments, for example, the DMA instructions incorporate a “DMAType” modifier, which allows unique manipulation of the data being operated on. This disclosure details the DMA ISA and hardware support for these highly-programmable DMA instructions.

These embodiments provide numerous advantages, including a scalable, low-overhead, easy-to-use hook for software to perform versatile memory and compute operations. These DMA instructions operate in the background of the issuing thread, allowing the thread to continue forward progress while the memory operations are in flight. This helps address the problems associated with scaling workload sizes, including sparse graph applications with irregular memory access patterns.

FIG. 9 illustrates a block diagram of an example computing architecture 900 with instruction set architecture (ISA) support for programmable direct memory access (DMA). In the illustrated embodiment, computing architecture 900 includes an execution pipeline 902, a memory offload engine (MENG) 904, an operation engine (OPENG) 906, an atomic unit (ATMU) 908, and memory 910a-b, as described further below.

The memory engine 904 provides hardware support for custom direct memory access (DMA) operations. When the execution pipeline 902 decodes an instruction for a custom DMA operation, the pipeline 902 offloads the DMA operation to the memory engine 904, allowing the corresponding thread to continue execution while the DMA operation is in flight. The memory engine 904 supports multiple DMA operations at the same time, giving equal time to all active instructions. The memory engine 904 communicates with the operation engine 906, which interfaces with (i) source and destination memory 910a-b for read and write requests and (ii) the atomic unit 908 for performing any required manipulation of the data being accessed.

For example, when the pipeline 902 offloads a DMA operation to the memory engine 904, the memory engine 904 sends requests to the operation engine 906, causing the operation engine 906 to read from the source memory 910a, interface with the atomic unit 908 for any required data manipulation, and then write to the destination memory 910b. In particular, based on the operands in a particular DMA instruction, the memory engine 904 generates the corresponding requests to the operation engine 906 to read from and write to memory 910a-b (which the operation engine 906 performs directly) and perform any required computations (which the operation engine 906 offloads to the atomic unit 908). Once all read/write/compute requests for a given DMA instruction are complete, the memory engine 904 notifies the pipeline 902 of the completion.

In some embodiments, computing architecture 900 may be implemented by or on the example computing devices and systems described throughout this disclosure, such as systems 200a-b of FIGS. 2A-B. In particular, the components of computing architecture 900 could be implemented on each core 210 of system 200. In some cases, however, one core 210 may execute a DMA instruction with source/destination memory addresses in memory 910a-b of another core 210 on the system 200 (e.g., another core on the same die/node or on a different die, node, or socket of the system). In those cases, the pipeline 902 and memory engine 904 on the core 210 executing the DMA instruction may be interacting with the operation engine 906, atomic unit 908, and/or memory 910a-b of one or more other cores 210 on the system 200.

Moreover, computing architecture 900 may include instruction set architecture (ISA) support for various types of DMA operations, with both contiguous and non-contiguous memory operands, and options for performing various compute operations on the data. For example, each instruction may include register operands which generally signify the source memory, destination memory, and overall size of the memory operations (e.g., using a count number). The instruction format/encoding also includes a 15-bit ‘DMAType’ field, which modifies the instruction to perform different types of computations on the data involved in the DMA operation (e.g., adding, inverting, etc.). The atomic unit 908 is the memory-side compute unit which performs this data manipulation. The instruction format/encoding also includes a ‘SIZE’ field indicating the granularity of the DMA operation—such as whether the DMA operates on a data granularity of one byte, two bytes, four bytes, or eight bytes—and software is expected to align the memory addresses in the DMA operation with the specified size.

Example ISA definitions for various DMA instructions are provided in Table 1, and examples of the configurable options available via the ‘DMAType’ field are provided in Table 2. In other embodiments, however, an ISA may use alternative definitions for DMA instructions with the same or similar functionality (e.g., different operands/fields), define additional types or variations of DMA instructions, and/or define additional types of compute operations that can be performed by the DMA instructions.

TABLE 1
Example ISA definitions for programmable DMA instructions

dma.init (arguments: r1, r2, r3, DMAType, SIZE)
    r1 = destination address
    r2 = source data value to copy
    r3 = # of data.SIZE elements to copy

dma.initstride (arguments: r1, r2, r3, r4, r5, DMAType, SIZE)
    r1 = destination address
    r2 = source data to copy
    r3 = # of data.SIZE elements to copy
    r4 = stride size
    r5 = # of elements at each stride

dma.copy (arguments: r1, r2, r3, DMAType, SIZE)
    r1 = destination address
    r2 = source address
    r3 = # of data.SIZE elements to copy

dma.copystride (arguments: r1, r2, r3, r4, r5, DMAType, SIZE)
    r1 = destination address
    r2 = source address
    r3 = # of data.SIZE elements to copy
    r4 = stride size
    r5 = # of elements at each stride

dma.copytrans (arguments: r1, r2, r3, r4, r5, DMAType, SIZE)
    r1 = destination address
    r2 = source address
    r3 = # of data.SIZE elements to copy
    r4 = transpose row length
    r5 = # of elements per transpose row length

dma.scatter (arguments: r1, r2, r3, r4, r5, DMAType, SIZE)
    r1 = pointer to array of destination addresses
    r2 = pointer to array of source data elements
    r3 = total # of data.SIZE elements to copy
    r4 = # of elements to copy at each destination address of r1
    r5 = base address (for base + offset format)

dma.gather (arguments: r1, r2, r3, r4, r5, DMAType, SIZE)
    r1 = destination address for array of gathered data elements
    r2 = pointer to array of addresses of source data elements
    r3 = total # of data.SIZE elements to copy
    r4 = # of elements to copy at each source address of r2
    r5 = base address (for base + offset format)

dma.bcast1 (arguments: r1, r2, r3, r4, r5, DMAType, SIZE)
    r1 = pointer to array of addresses or offsets
    r2 = source data of SIZE to copy
    r3 = destination count
    r4 = compare value for compare-exchange (cmp-xchg)
    r5 = base address (for base + offset format)

dma.bcastX (arguments: r1, r2, r3, r4, r5, DMAType, SIZE)
    r1 = pointer to array of addresses or offsets
    r2 = base address of source memory
    r3 = destination count
    r4 = # of elements of SIZE to copy
    r5 = base address (for base + offset format)

dma.reduce (arguments: r1, r2, r3, r4, DMAType, SIZE)
    r1 = pointer to array of addresses or offsets
    r2 = destination address
    r3 = source count
    r4 = base address (for base + offset format)

TABLE 2
Example bit definitions for the 'DMAType' instruction field

Bit 0: For dma.scatter or dma.gather: 0 = base + offset address mode; 1 = absolute/direct address mode. For dma.copystride: 0 = stride passthrough mode; 1 = pack/unpack mode (sparse→dense (pack) or dense→sparse (unpack); bit [1] specifies pack or unpack operation).
Bit 1: For dma.copystride in pack/unpack mode: 0 = pack operation (sparse→dense transformation via “pack” sequence); 1 = unpack operation (dense→sparse transformation via “unpack” sequence).
Bit 2: Offset pointer size (for base + offset address mode): 0 = 64 bits; 1 = 32 bits.
Bit 3: Offset pointer type (for base + offset address mode): 0 = signed; 1 = unsigned.
Bit 4: Complement incoming data value from source.
Bit 5: Complement existing data value found at destination.
Bits 9:6: 4'b0000 = no bitwise operation; 4'b1000 = bitwise AND; 4'b1110 = bitwise OR; 4'b0110 = bitwise XOR.
Bits 11:10: 2'b00 = integer, complement using two's complement operation; 2'b01 = floating point, complement with “−1.0 * VALUE”; 2'b10 = raw bits, complement as ~VALUE; 2'b11 = reserved.
Bits 14:12: 3'b111 = overwrite with complement (source, destination, or both); 3'b000 = overwrite without complement; 3'b010 = add; 3'b110 = multiply; 3'b011 = bitwise op (bits [9:6] specify the bitwise operation); 3'b101 = compare and exchange.
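
As an illustration of how the bit fields in Table 2 compose, the following C sketch builds example ‘DMAType’ values; the macro and function names are invented for readability, while the bit positions and encodings follow Table 2.

    #include <stdint.h>

    /* Invented helper macros; bit positions and values follow Table 2. */
    #define DMATYPE_OP_ADD       (0x2u << 12)  /* bits [14:12] = 3'b010 (add) */
    #define DMATYPE_OP_MULTIPLY  (0x6u << 12)  /* bits [14:12] = 3'b110 (multiply) */
    #define DMATYPE_OP_BITWISE   (0x3u << 12)  /* bits [14:12] = 3'b011 (bitwise op) */
    #define DMATYPE_DATA_INT     (0x0u << 10)  /* bits [11:10] = 2'b00 (integer) */
    #define DMATYPE_DATA_FP      (0x1u << 10)  /* bits [11:10] = 2'b01 (floating point) */
    #define DMATYPE_BITWISE_XOR  (0x6u << 6)   /* bits [9:6]  = 4'b0110 (XOR) */
    #define DMATYPE_COMPL_SRC    (1u << 4)     /* complement incoming source value */
    #define DMATYPE_COMPL_DST    (1u << 5)     /* complement existing destination value */

    /* Floating-point add of the source data into the destination region. */
    uint16_t dmatype_fp_add(void) {
        return DMATYPE_OP_ADD | DMATYPE_DATA_FP;
    }

    /* Bitwise XOR of the source data with the data at the destination. */
    uint16_t dmatype_bitwise_xor(void) {
        return DMATYPE_OP_BITWISE | DMATYPE_BITWISE_XOR;
    }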

FIG. 10 illustrates an example implementation of a DMA initialize (dma.init) instruction 1000. The dma.init instruction 1000 initializes a contiguous block of memory with a source data value specified by software. In the illustrated example, the instruction operands include a starting base address of memory to be initialized (register r1), a source data value to be copied into memory (register r2), a count or number of sequential copies of the source data value (register r3), and the size of the source data value (‘SIZE’ field). The memory engine 904 uses the starting base address, number of copies, and size operands to generate memory addresses for all the writes needed to initialize the block of memory with the source data value.

The dma.init instruction 1000 can also manipulate the source data value before copying it into memory using any of the computations specified in Table 2 (e.g., by setting the appropriate bits in the ‘DMAType’ field).
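
As a software model only (not the hardware implementation), the effect of dma.init on memory with an 8-byte SIZE and no ‘DMAType’ manipulation can be sketched as follows.

    #include <stdint.h>
    #include <stddef.h>

    /* Model of dma.init (SIZE = 8 bytes, no 'DMAType' manipulation): write
     * 'count' copies of 'value' starting at 'dest'; the memory engine would
     * generate one write per element in the background. */
    void dma_init_model(uint64_t *dest, uint64_t value, size_t count) {
        for (size_t i = 0; i < count; i++)
            dest[i] = value;
    }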

FIG. 11 illustrates an example implementation of a DMA initialize stride (dma.initstride) instruction 1100. The dma.initstride instruction 1100 initializes memory with a specified data value in a strided fashion. The dma.initstride instruction 1100 is similar to the dma.init instruction 1000, except it initializes a non-contiguous block of memory using strided write patterns. The dma.initstride instruction 1100 also includes the same operands as the dma.init instruction 1000, such as the base address (register r1), source data value (register r2), number of copies (register r3), and data size (‘SIZE’ field), along with a stride size (the distance or number of leaps in memory between strides) (register r4) and a length at stride (the number of sequential copies per stride) (register r5).

The dma.initstride instruction 1100 can manipulate the source data value before copying it into memory using any of the computations specified in Table 2 (e.g., by setting the appropriate bits in the ‘DMAType’ field).
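
A corresponding software model of dma.initstride is sketched below, again assuming an 8-byte SIZE; the stride size is assumed here to be measured in elements from the start of one strided group to the start of the next, which is an interpretation for illustration rather than a definition from the ISA.

    #include <stdint.h>
    #include <stddef.h>

    /* Model of dma.initstride (SIZE = 8 bytes): write 'count' copies of
     * 'value', 'length_at_stride' sequential copies per group, advancing
     * 'stride_size' elements between the starts of consecutive groups. */
    void dma_initstride_model(uint64_t *dest, uint64_t value, size_t count,
                              size_t stride_size, size_t length_at_stride) {
        size_t written = 0, base = 0;
        while (written < count) {
            for (size_t i = 0; i < length_at_stride && written < count; i++, written++)
                dest[base + i] = value;
            base += stride_size;   /* jump to the next strided group */
        }
    }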

FIG. 12 illustrates an example implementation of a DMA copy (dma.copy) instruction 1200. The dma.copy instruction 1200 copies a block of data from one memory region to another memory region. In the illustrated example, the instruction operands include the destination address (register r1), the source address (register r2), the number of data elements to copy from source to destination (register r3), and the size of each data element (‘SIZE’ field). The total size of the data block/memory region that is copied depends on the number of data elements and the data element size.

The dma.copy instruction 1200 can also manipulate data at the source and/or destination memory addresses using any of the computations specified in Table 2 (e.g., by setting the appropriate bits in the ‘DMAType’ field).

FIGS. 13A-B illustrate an example implementation of a DMA copy stride (dma.copystride) instruction 1300 with stride passthrough and pack/unpack modes. In particular, FIG. 13A illustrates stride passthrough mode, and FIG. 13B illustrates pack mode. Unpack mode (not shown in the figures) is similar to pack mode but in reverse.

The dma.copystride instruction 1300 copies blocks of data to and/or from strided memory regions. Similar to the dma.initstride instruction 1100, the dma.copystride instruction 1300 includes ‘LengthAtStride’ and ‘StrideSize’ parameters, which determine the size of each memory chunk and the distance between each chunk, respectively.

The dma.copystride instruction 1300 includes stride passthrough mode, which copies data between two strided memory regions, along with pack and unpack modes, which copy data from a strided memory region into a packed memory region (pack mode) or vice versa (unpack mode). For example, in stride passthrough mode (shown in FIG. 13A), a strided source memory region is copied into another equally strided destination memory region. In pack mode (shown in FIG. 13B), a strided source memory region is copied into a packed (e.g., sequential/contiguous) destination memory region. In unpack mode, a packed source memory region is copied into a strided destination memory region.

The dma.copystride instruction 1300 can also manipulate data at the source and/or destination memory addresses using any of the computations specified in Table 2 (e.g., by setting the appropriate bits in the ‘DMAType’ field).
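
To make the pack transformation concrete, the following software model (illustrative only; 8-byte elements, stride parameters in elements, with the same stride interpretation as the dma.initstride model above) copies a strided source region into a packed destination region, as in the pack mode of FIG. 13B.

    #include <stdint.h>
    #include <stddef.h>

    /* Model of dma.copystride in pack mode (sparse -> dense): copy 'count'
     * 8-byte elements, 'length_at_stride' at a time, from a strided source
     * region into a contiguous destination region. */
    void dma_copystride_pack_model(uint64_t *dest, const uint64_t *src,
                                   size_t count, size_t stride_size,
                                   size_t length_at_stride) {
        size_t copied = 0, src_base = 0;
        while (copied < count) {
            for (size_t i = 0; i < length_at_stride && copied < count; i++, copied++)
                dest[copied] = src[src_base + i];
            src_base += stride_size;   /* skip to the next strided chunk */
        }
    }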

FIG. 14 illustrates an example implementation of a DMA copy transpose (dma.copytrans) instruction 1400. The dma.copytrans instruction 1400 reads data elements of a matrix from source memory, transposes the data elements, and writes the transposed data elements to destination memory.

The dma.copytrans instruction 1400 may include pack, unpack, and/or stride passthrough variations (similar to the dma.copystride instruction 1300 of FIGS. 13A-B). For example, pack mode is shown in FIG. 14, where the source data elements are read from strided source memory regions, transposed, and then written into a packed/contiguous destination memory region. In the illustrated example, the fragmented data elements in the strided source memory regions are treated as a matrix with nine rows and two columns (where each of the nine pairs of data elements represents one row of the matrix with two columns). Those data elements are transposed into a matrix with two rows and nine columns and then written to a contiguous region of destination memory (where each of the two sets of nine data elements represents one row of the matrix with nine columns).

In unpack mode (not shown), the source data elements are packed in a contiguous region of source memory (e.g., by setting the ‘LengthAtStride’ and ‘StrideSize’ parameters to the same value as the ‘SIZE’ field), and the transposed data elements are unpacked and written to a non-contiguous strided region of destination memory. In stride passthrough mode (not shown), the source data elements and the transposed data elements are read from and written to unpacked strided memory regions.

The dma.copytrans instruction 1400 can also manipulate data at the source and/or destination memory addresses using any of the computations specified in Table 2 (e.g., by setting the appropriate bits in the ‘DMAType’ field).
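
The data movement of the transpose can likewise be modeled in software, as sketched below (illustrative only; 8-byte elements, with the source treated as already laid out row-major and source striding omitted for clarity). Calling the model with rows = 9 and cols = 2 reproduces the 9×2 to 2×9 example of FIG. 14 described above.

    #include <stdint.h>
    #include <stddef.h>

    /* Model of the dma.copytrans data movement: treat the source as a
     * rows x cols row-major matrix and write its transpose (cols x rows)
     * contiguously to the destination. */
    void dma_copytrans_model(uint64_t *dest, const uint64_t *src,
                             size_t rows, size_t cols) {
        for (size_t r = 0; r < rows; r++)
            for (size_t c = 0; c < cols; c++)
                dest[c * rows + r] = src[r * cols + c];
    }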

FIG. 15 illustrates an example implementation of a DMA scatter (dma.scatter) instruction 1500. The dma.scatter instruction 1500 reads a contiguous vector of data elements and copies or scatters the data elements to fragmented regions of memory. In the illustrated example, the instruction operands include a pointer to an array of destination addresses (or offsets for base+offset mode) (register r1), a pointer to an array of source data elements (register r2), the total number of data.SIZE elements to copy (register r3), the number of data elements to copy at each destination address (register r4), and optionally a base address for base+offset mode (register r5). The memory engine 904 reads the array of data elements from the source address and scatters them to the addresses in the destination address array (e.g., based on the total number of elements and the number of elements per destination address).

The dma.gather instruction (not shown) has similar functionality as dma.scatter but in reverse. For example, dma.gather reads data elements from scattered or fragmented memory regions and packs them into an array stored contiguously in destination memory.

The dma.scatter and dma.gather instructions can also manipulate data at the source and/or destination memory addresses using any of the computations specified in Table 2 (e.g., by setting the appropriate bits in the ‘DMAType’ field).
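
Illustrative software models of the scatter and gather data movement described above are sketched below, assuming 8-byte elements, absolute/direct address mode, one element per destination or source address, and no ‘DMAType’ manipulation; on the described architecture these transfers would be carried out in the background by the memory engine 904.

    #include <stdint.h>
    #include <stddef.h>

    /* Model of dma.scatter: write src[i] to the i-th destination address. */
    void dma_scatter_model(uint64_t *const *dest_addrs, const uint64_t *src,
                           size_t count) {
        for (size_t i = 0; i < count; i++)
            *dest_addrs[i] = src[i];
    }

    /* Model of dma.gather: pack the value at each source address into a
     * contiguous destination array. */
    void dma_gather_model(uint64_t *dest, const uint64_t *const *src_addrs,
                          size_t count) {
        for (size_t i = 0; i < count; i++)
            dest[i] = *src_addrs[i];
    }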

FIG. 16 illustrates an example implementation of a DMA broadcast (dma.bcast) instruction 1600. There are two variations of the dma.bcast instruction 1600: dma.bcast1 and dma.bcastX. The dma.bcast1 instruction (shown in FIG. 16) broadcasts (copies) a single data element of size ‘SIZE’ to ‘COUNT’ number of memory locations. The dma.bcastX instruction reads X data elements of size ‘SIZE’ from memory and broadcasts them to ‘COUNT’ number of memory regions.

In dma.bcast1, the source data element is provided via a register (register r2). In dma.bcastX, a pointer to an array of source data elements is provided via a register (register r2). In both variations, the destination memory regions are specified in either direct address mode or base+offset address mode. In direct address mode, a pointer to an array of destination memory addresses is provided (register r1). In base+offset mode, a base address (register r5) and a pointer to an array of offsets (register r1) are provided. For example, in base+offset mode, the memory engine 904 receives a base address, and a pointer to a list of offsets (or a pointer to a list of addresses) for where to copy the source data element(s).

The dma.bcast instructions 1600 can also manipulate the source data element(s) and/or data at the destination memory addresses using any of the computations specified in Table 2 (e.g., by setting the appropriate bits in the ‘DMAType’ field).
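
For illustration, a software model of dma.bcast1 in base+offset address mode is sketched below, assuming an 8-byte source element and 64-bit offsets expressed in bytes (the offset units are an assumption).

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Model of dma.bcast1 (base + offset mode): copy the single 8-byte
     * 'value' to base + offsets[i] for each of 'count' destinations. */
    void dma_bcast1_model(uint8_t *base, const uint64_t *offsets,
                          uint64_t value, size_t count) {
        for (size_t i = 0; i < count; i++)
            memcpy(base + offsets[i], &value, sizeof value);
    }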

FIG. 17 illustrates an example implementation of a DMA reduce (dma.reduce) instruction 1700. The dma.reduce instruction 1700 reads a specified number of data elements of size ‘SIZE’ from memory and reduces them into a single data element at the destination memory address. The type of reduction is specified via the ‘DMAType’ field (e.g., add, multiply, bitwise, etc.). Software is expected to initialize the destination address appropriately depending on the reduction type.

In the illustrated example, the instruction operands include a pointer to an array of addresses (or offsets for base+offset mode) (register r1), a destination memory address (register r2), a count or number of data elements in the source array (register r3), and optionally a base address for base+offset mode (register r4).

The dma.reduce instruction 1700 can manipulate data at the source and/or destination memory addresses using any of the computations specified in Table 2 (e.g., by setting the appropriate bits in the ‘DMAType’ field).
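
As an illustrative software model, a dma.reduce with the ‘DMAType’ field selecting an integer add (direct address mode, 8-byte elements) behaves as sketched below; as noted above, software initializes the destination (here assumed to be zero) before the reduction.

    #include <stdint.h>
    #include <stddef.h>

    /* Model of dma.reduce with an integer add reduction: accumulate the
     * value at each source address into the single destination element,
     * which software has pre-initialized (e.g., to zero for an add). */
    void dma_reduce_add_model(uint64_t *dest, const uint64_t *const *src_addrs,
                              size_t count) {
        for (size_t i = 0; i < count; i++)
            *dest += *src_addrs[i];
    }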

FIG. 18 illustrates a flowchart 1800 for executing programmable DMA instructions in accordance with certain embodiments. In some embodiments, for example, flowchart 1800 may be performed by or using the computing devices, systems, and architectures described throughout this disclosure (e.g., a computing device with ISA support for programmable DMA).

For example, the programmable DMA instructions may include an opcode and one or more fields, where the opcode indicates the type of DMA operation to be performed, and the field(s) indicate parameters such as source and/or destination memory regions, data operands, and/or compute operations to be performed on the data operand(s) (e.g., where the resulting data operands from the compute operation(s) are written to the destination memory region).

Moreover, in some cases, depending on the particular type of DMA instruction/DMA operation (as specified by the opcode), the DMA operation may read from or write to a non-contiguous memory region (e.g., read from a non-contiguous source memory region and/or write to a non-contiguous destination memory region).

For example, a DMA initialize instruction initializes a destination memory region with data. A DMA initialize stride instruction initializes a destination memory region with data, where the destination memory region is a strided memory region. A DMA copy stride instruction copies data from a source memory region to a destination memory region, where at least one of the source memory region or the destination memory region is a strided memory region. A DMA scatter instruction scatters data across a destination memory region, where the destination memory region is a non-contiguous memory region. A DMA gather instruction gathers data from a source memory region and stores the data in a destination memory region, where the source memory region is a non-contiguous memory region and the destination memory region is a contiguous memory region. A DMA broadcast instruction broadcasts data to one or more destination memory addresses. A DMA reduce instruction reduces data (e.g., multiple data elements) to a single data element.

Moreover, the fields in the instruction may include a DMA type field (which may also be referred to as a compute type field), which indicates whether any compute operations need to be performed on the data operands before writing to the destination memory region. For example, the compute operation may include a complement operation, a bitwise operation, an add operation, a multiply operation, or a compare-and-exchange operation, among other examples.

In some cases, the compute operation may be performed on one or more data operand(s) from the source memory region and one or more data operand(s) from the destination memory region. For example, in a DMA copy or DMA copy stride instruction, the data values at the source memory region and the data values at the destination memory region may be added, multiplied, bitwise manipulated, and so forth, and the resulting data values may be written to the destination memory region.

The flowchart begins at block 1802 by fetching and decoding the next instruction in the execution pipeline.

The flowchart then proceeds to block 1804 to determine if the instruction is a DMA instruction (e.g., an instruction to perform a DMA operation). If the instruction is not a DMA instruction, the flowchart proceeds to block 1806 to execute the instruction. If the instruction is a DMA instruction, the flowchart proceeds to block 1808 to offload the DMA instruction from the execution pipeline to a memory offload engine.

The flowchart then proceeds to block 1810 to access or read one or more data operands associated with the DMA operation (e.g., via registers and/or memory), as specified by the various fields in the instruction.

The flowchart then proceeds to block 1812 to determine whether the DMA operation requires a compute operation to be performed on any data operands. If the DMA operation does not require a compute operation to be performed, the flowchart proceeds to block 1816 to write the data operands to destination memory in the manner specified by the various fields in the instruction. If the DMA operation requires a compute operation to be performed, the flowchart proceeds to block 1814 to perform the compute operation on the respective data operand(s), and then to block 1816 to write the resulting data operands to destination memory in the manner specified by the various fields in the instruction.

At this point, the flowchart may be complete. In some embodiments, however, the flowchart may restart and/or certain blocks may be repeated. For example, in some embodiments, the flowchart may restart at block 1802 to continue executing programmable DMA instructions.

Instruction Set Architecture with Expanded Fence and Flush Operations

Most modern core architectures provide increased performance through implementation of out-of-order execution or multi-threading. With these designs comes the possibility of violating software memory ordering rules. To allow software to maintain the memory ordering expectations, “fence” instructions can be provided at the ISA level. For example, the x86 ISA includes MFENCE (fence on all memory operations), LFENCE (fence on loads), and SFENCE (fence on stores). The ARM ISA includes support for write barriers and atomic fences (load-write-store; non-remote). These instructions provide a barrier in the code where execution will wait until the preceding memory operation (load, store, or both) is complete. However, these implementations are all limited to memory operations and do not include support for fencing relative to PC values or fencing on offloaded operations (DMA, QMA, remote atomics, and collectives).

For example, in addition to loads and stores, the computing architecture and ISA described throughout this disclosure includes many operations which require a level of software visibility to enforce ordering expectations. These include “offloaded” instructions such as direct memory access (DMA), queue memory access (QMA), collectives, thread management, and remote atomics. Other non-“offload” operations requiring software visibility include cache invalidations, writebacks, and pre-fetches. This necessitates an expansion of traditional “fence” instruction capability beyond what is implemented in current architectures. This disclosure presents embodiments of an ISA and pipeline design for enhanced fence operations.

Many architectures also include flush operations, which traditionally refer to the cleaning of cache lines (e.g., writing dirty data in the cache back to memory and invalidating the line). However, existing architectures do not have a mechanism to apply a flush operation to non-cached software-issued store requests with no acknowledgement. For example, in the computing architecture described throughout this disclosure, further software controllability of memory operations is provided with the intention of optimizing performance. This is achieved through the exposure of store operations both with and without acknowledgment (storeack and store (no ack), respectively). Because the described computing architecture is a Non-Uniform Memory Access (NUMA) system that is very well suited for graph applications and other similar workloads, where a majority of loads and stores are non-cached, the programmer may wish to issue non-ack stores to allow the thread to continue while the store is being committed to memory. However, the status of these non-ack store operations may eventually need to be known. Therefore, this disclosure also presents embodiments of an ISA and hardware design to support flushing of non-ack stores to enable improved performance for memory-intensive applications.

In particular, this disclosure describes ISA features that provide two mechanisms to give visibility for when background operations and ack-less stores are completed. The first mechanism is a custom fence instruction, which is necessary to maintain consistency when accessing memory space concurrently between the pipeline and different compute engines or cores. The described ISA includes three fence instructions: fence on an operation type (e.g., load, store, DMA, collective, etc.), fence on a program counter (PC) (or instruction pointer (IP)) value, and fence relative to a PC (or IP) value.

The second mechanism is a flush instruction to enable visibility of non-ack stores that are issued from the pipeline. This allows for increased performance while not compromising the visibility of the result of non-ack store operations.

These embodiments provide many advantages. The inclusion of extended fence support in the ISA allows for easy software integration of offload functions by helping to provide visibility into the completion of each operation. Using a fence instruction in situations where the offloaded operations are running concurrently with pipeline activity is the programmer's choice, enabling the highest possible performance while utilizing the rich feature set of the described architecture. The flush functionality also provides a performance increase when non-cached, non-ack stores are allowed to complete in the background while the pipeline operates on different memory regions. Properly using the described flush operation removes the possibility of inconsistent data in scenarios where the ack-less store memory region overlaps with the region being actively operated on by the pipeline.

Expanded Fence Operations

In the computing architectures described throughout this disclosure (e.g., in connection with FIGS. 1-22), the pipelines are responsible for fetching, decoding, and executing the instructions of each thread. These instructions are allocated in a retire buffer (RB), which keeps track of program order and tracks when each instruction is completed. The background operations are special instructions, in that the retire buffer instead only tracks the instruction until it is issued to the appropriate execution engine, not until it is completed. As a result, this disclosure presents a hardware architecture and ISA with support for expanded “fence” instructions, which can be issued by any running thread on any pipeline and are used to track when these background instruction types have fully completed execution.

FIG. 19 illustrates a block diagram of an example computing architecture 1900 with instruction set architecture (ISA) support for expanded fence operations. Examples of various flavors of fence instructions supported in computing architecture 1900 are provided in Table 3.

TABLE 3
Examples of expanded fence instructions

fence (arguments: imm32, B/N)
    imm32 = bitmask of what operation type to fence on
    B = blocking fence / N = non-blocking fence

fencepc (arguments: r1, B/N)
    r1 = program counter (PC) to fence on
    B = blocking fence / N = non-blocking fence

fencepc.rel (arguments: imm42, B/N)
    imm42 = relative program counter (PC) to fence on (PC of fencepc.rel + imm42)
    B = blocking fence / N = non-blocking fence

With respect to the ‘fence’ instruction, all fences are considered either blocking or non-blocking. Blocking means a stall of the thread is generated if the conditions of the fence are met. For example, if a fence on DMA instructions is issued and DMA instructions issued by that thread are still in flight, the thread is stalled until those DMA instructions are completed. Non-blocking fences only generate a thread stall if either: (i) another fence instruction is encountered before the first one is met (only one fence is allowed at any given time); or (ii) another instruction of that fence type is encountered before the fence is met.

The fence instruction also provides a bitmask 1905 in the imm32 field to determine what type(s) of instruction the fence is operating on. Examples of supported instruction types are provided below in Table 4.

TABLE 4
Instruction type bitmask for fence instruction

Bit 0 = Load
Bit 1 = Store
Bit 2 = Atomic
Bit 3 = DMA
Bit 4 = QMA
Bit 5 = Collective
Bit 6 = Thread
Bit 7 = Icache
Bit 8 = Dcache

In this manner, the fence instruction can track all previously issued instructions of the type(s) specified in the fence bitmask 1905 until they are completed. When any of these instruction types are fetched and decoded, the pipeline 1902 issues the instruction to the appropriate execution engine. For example, loads and stores are sent to the load-store queue (LSQ) 1904 inside of the pipeline 1902, which manages sending the loads and stores to the network (e.g., to other parts of the system). Direct cache control instructions are sent to the particular cache 1908a-b they are operating on (either the instruction cache 1908a or data cache 1908b). Complex background operations (atomics, DMA, QMA, Collective, Thread ops) each have their own execution offload engines 1920. These classes of instructions are managed by the engine sequence queue (ESQ) 1910, which sits outside of the pipeline 1902 and is responsible for tracking status of these instruction types.

All of these units in turn provide a “pending” signal to the pipeline 1902. For the ESQ 1910, the pending signal is a bitmask 1915 indicating the pending instruction types for the background operations that it tracks. The fence instruction—when decoded and executed inside the pipeline 1902—uses these pending signals to determine if/when the corresponding instruction types have completed. For example, the hazard logic 1906 in the pipeline 1902 determines whether a stall needs to be generated for a particular fence instruction based on the issued fence bitmask 1905 within the instruction (e.g., indicating the instruction types to fence on), the pending bitmask 1915 from the ESQ (indicating the background instruction types that are currently pending), and the pending signals from the other units.

For blocking fences, if any of the pending signals from the outside entities are asserted and match the bitmask 1905 provided in the decoded fence instruction, a stall is generated until the appropriate pending signals are de-asserted. Likewise, for non-blocking fences, even though no stall is asserted from a matching pending bitmask, the fence isn't considered completed until all appropriate pending signals are low.

The ‘fencepc’ instruction is similar to fence, except it operates on a specific, absolute program counter (PC) instead of a class of instruction. PCs of decoded and uncompleted foreground instructions (e.g., instructions tracked by the retire buffer 1903 of the pipeline) are stored in the retire buffer 1903 until that instruction is retired. When the fence is decoded, if there exists a matching, un-retired PC in any of the allocated entries in the retire buffer 1903, then the fencepc is activated. The blocking and non-blocking versions of fencepc behave similarly to the blocking and non-blocking fence. If it is blocking, the thread is immediately stalled until the matching PCs have retired. If it is non-blocking, the thread will continue to execute until either another fence is encountered, or the matching PC is once again fetched.

The ‘fencepc.rel’ instruction has the same mechanism as fencepc, except instead of an absolute PC provided to the instruction, a PC relative to the fencepc.rel is used. For example, if the PC of the fencepc.rel is 0x1000, and the imm42 field contains the value 0x500, then the fencepc is activated on any PC matching 0x1000 + 0x500, i.e., 0x1500. Blocking and non-blocking versions of fencepc.rel behave similarly to fencepc.
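
For illustration, the following sketch pairs an offloaded DMA copy with a blocking fence on the DMA instruction type before the result is consumed. The isa_dma_copy() and isa_fence() functions are hypothetical stand-ins for the dma.copy and fence instructions (their argument lists are illustrative), while the bitmask constant follows bit 3 of Table 4.

    #include <stdint.h>
    #include <stddef.h>

    #define FENCE_TYPE_DMA  (1u << 3)   /* bit 3 of the fence bitmask = DMA (Table 4) */
    #define FENCE_BLOCKING  1

    /* Hypothetical stand-ins for the dma.copy and fence instructions. */
    static void isa_dma_copy(void *dest, const void *src, size_t count) {
        (void)dest; (void)src; (void)count;   /* offloaded to the memory engine */
    }
    static void isa_fence(uint32_t type_bitmask, int blocking) {
        (void)type_bitmask; (void)blocking;   /* stalls until matching ops complete */
    }

    void copy_then_consume(uint64_t *dest, const uint64_t *src, size_t count) {
        isa_dma_copy(dest, src, count);   /* runs in the background of this thread */

        /* ... independent work may proceed here while the copy is in flight ... */

        /* Blocking fence on the DMA type: stall until all DMA instructions
         * issued by this thread have completed, so 'dest' is safe to read. */
        isa_fence(FENCE_TYPE_DMA, FENCE_BLOCKING);

        /* dest[] may now be consumed. */
    }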

FIG. 20 illustrates a flowchart 2000 for executing fence instructions in accordance with certain embodiments. In some embodiments, for example, flowchart 2000 may be performed by or using the computing devices, systems, and architectures described throughout this disclosure.

The flowchart begins at block 2002 by fetching and decoding the next instruction in the execution pipeline.

The flowchart then proceeds to block 2004 to determine if the instruction is a fence instruction. If the instruction is not a fence instruction, the flowchart proceeds to block 2006 to execute the instruction. If the instruction is a fence instruction, the flowchart proceeds to block 2008 to determine if the fence instruction is blocking or non-blocking.

If the fence instruction is blocking, the flowchart proceeds to block 2010 to determine if the target instruction(s) specified in the fence instruction are currently being executed. If the target instruction(s) are not currently being executed, the flowchart proceeds to block 2012 to continue executing instructions until the target instruction(s) are encountered in the pipeline, and then to block 2014 to generate a stall until the target instruction(s) finish executing. If the target instruction(s) are currently being executed, the flowchart proceeds to block 2014 to generate a stall until the target instruction(s) finish executing.

If the fence instruction is non-blocking, the flowchart proceeds to block 2012 to continue executing instructions until the target instruction(s) are encountered in the pipeline, and then to block 2014 to generate a stall until the target instruction(s) finish executing.

At this point, the flowchart may be complete. In some embodiments, however, the flowchart may restart and/or certain blocks may be repeated. For example, in some embodiments, the flowchart may restart at block 2002 to continue executing and enforcing fence instructions.

Expanded Flush Operations

FIG. 21 illustrates a block diagram of an example computing architecture 2100 with instruction set architecture (ISA) support for expanded flush operations. In particular, even though a fence instruction can pause a thread until all pending store instructions are complete, an ack-less store is considered complete from the perspective of the load-store queue (LSQ) 2104a once it enters the network, and not when the write completes. Thus, an expanded flush instruction is supported by the ISA to provide visibility for when these ack-less store operations have completed. An example ISA definition for the expanded flush instruction is provided in Table 5.

TABLE 5
Example ISA definition for expanded flush instruction

flush (arguments: B/N, I/N)
    B = blocking flush / N = non-blocking flush
    I = invalidate / N = no-invalidate

When any entity 2104a-d (e.g., LSQ 2104a, data cache 2104b, instruction cache 2104c, offload engine 2104d) needs to make a memory request, it has to get allocated into the message transport buffer (MTB) 2106, which is the entry point to the network 2108 interfacing with the rest of the system. As a part of this request, the entity 2104a-d informs the MTB 2106 of the type of request (read, write, etc.) and if this request requires a response to be returned to the sender. In the case of an ack-less store, the LSQ 2104a makes the store request to the MTB 2106, and informs the MTB 2106 that no response is required. Once the MTB 2106 accepts it, it is de-allocated from the LSQ 2104a, and the thread considers it retired. However, the network 2108 still returns a response for this request back to the MTB 2106, so the MTB 2106 can de-allocate its slot. This is the hook that allows the flush to function. The pipeline 2102 contains a register that maintains a flush counter 2105. Every time the MTB 2106 sends a request to the network 2108 that is an ack-less store, it increments the count in this register 2105, and every time the MTB 2106 receives a response for an ack-less store, it decrements the counter. As the flush operates on the network 2108 entry point, it will provide visibility for any ack-less store, by any entity connected to that pipeline 2102. This means if any thread running on that pipeline 2102 issues a flush, it will affect all other threads on that pipeline 2102.

For example, all entities 2104a-d of an agent (the pipeline 2102 and all units that directly talk to the pipeline 2102) send any memory request to the network 2108 through the MTB 2106. When the MTB 2106 sends an ack-less store to the network 2108, it increments the flush counter register 2105, and when it receives a response for an ack-less store from the network, it decrements the flush counter register 2105.

Similar to the fence instructions, the flush instruction has blocking and non-blocking versions. The blocking version stalls the entire pipeline while the count register 2105 is non-zero. The non-blocking version prevents the MTB 2106 from sending any more requests to the network 2108 until the counter register 2105 is zero, while allowing the pipeline 2102 to continue execution. The invalidate/no-invalidate option is used as a switch to either clear (invalidate) or not clear (no-invalidate) the count register 2105.
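
Purely as an illustration of the counter behavior described above (and not of the hardware design), the ack-less store tracking in the MTB and the blocking-flush stall condition can be modeled as follows.

    #include <stdint.h>

    /* Model of the flush counter register 2105: incremented when the MTB
     * sends an ack-less store to the network, decremented when the network
     * returns the corresponding response. */
    static uint64_t flush_counter = 0;

    void mtb_send_ackless_store(void)       { flush_counter++; }
    void mtb_receive_ackless_response(void) { flush_counter--; }

    /* A blocking flush stalls the pipeline while the counter is non-zero. */
    int blocking_flush_must_stall(void) {
        return flush_counter != 0;
    }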

FIG. 22 illustrates a flowchart 2200 for executing flush instructions in accordance with certain embodiments. In some embodiments, for example, flowchart 2200 may be performed by or using the computing devices, systems, and architectures described throughout this disclosure.

The flowchart begins at block 2202 by fetching and decoding the next instruction in the execution pipeline.

The flowchart then proceeds to block 2204 to determine if the instruction is a flush instruction. If the instruction is not a flush instruction, the flowchart proceeds to block 2206 to execute the instruction. If the instruction is a flush instruction, the flowchart proceeds to block 2208 to determine if the flush instruction is blocking or non-blocking.

If the flush instruction is blocking, the flowchart proceeds to block 2210 to determine if any no-ack store requests are currently pending. If no no-ack store requests are pending, the flowchart proceeds to block 2212 to continue executing instructions until a no-ack store request is encountered in the pipeline, and then to block 2214 to generate a stall until the no-ack store request is complete. If at least one no-ack store request is pending, the flowchart proceeds to block 2214 to generate a stall until all no-ack store requests are complete.

If the flush instruction is non-blocking, the flowchart proceeds to block 2212 to continue executing instructions until a no-ack store request is encountered in the pipeline, and then to block 2214 to generate a stall until the no-ack store request is complete.

At this point, the flowchart may be complete. In some embodiments, however, the flowchart may restart and/or certain blocks may be repeated. For example, in some embodiments, the flowchart may restart at block 2202 to continue executing and enforcing flush instructions.

The embodiments described herein can be used in a variety of computing environments, contexts, and use cases. In some embodiments, for example, the described embodiments (e.g., programmable DMA instructions, fence/flush instructions) may be utilized by microservices and/or containers. Microservices may refer to a collection of self-contained, loosely-coupled applications that each provide certain services or functionalities which collectively form a fully functional application. Containers may refer to the use of operating-system-level virtualization to deliver software packages in self-contained, isolated environments with all required dependencies to ensure they run the same in any environment or infrastructure, out-of-the-box. Moreover, in some embodiments, microservices may be implemented in containers.

The described embodiments can also be used with a variety of memory configurations and architectures, including with pooled memory, far memory, near memory, tiered memory, and/or disaggregated memory, among other examples.

In some embodiments, the programmable DMA operations and/or fence/flush operations may be implemented in a memory offload engine, a memory controller (e.g., particularly for far memory scenarios), a DMA accelerator, or a data copy and transformation accelerator (e.g., the Intel data streaming accelerator (DSA)), among other examples.

Moreover, these embodiments (e.g., memory offload engine, memory controller, DMA accelerator, data copy/transformation accelerator) can have a variety of forms, including discrete parts (e.g., in a single system or disaggregated system), parts of a system-on-a-chip (SoC), chiplets, dielets, multi-chip packages, and embedded multi-die interconnect bridge (EMIB)-connected multi-chip packages, among other examples.

Example Computing Devices, Systems, and Architectures

The figures below detail exemplary architectures and systems to implement embodiments of the above. In some embodiments, one or more hardware components and/or instructions described above are emulated as detailed below, or implemented as software modules.

Embodiments of the instruction(s) detailed above may be embodied in a “generic vector friendly instruction format,” which is detailed below. In other embodiments, such a format is not utilized and another instruction format is used; however, the description below of the writemask registers, various data transformations (swizzle, broadcast, etc.), addressing, etc., is generally applicable to the description of the embodiments of the instruction(s) above. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) above may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

An instruction set may include one or more instruction formats. A given instruction format may define various fields (e.g., number of bits, location of bits) to specify, among other things, the operation to be performed (e.g., opcode) and the operand(s) on which that operation is to be performed and/or other data field(s) (e.g., mask). Some instruction formats are further broken down through the definition of instruction templates (or subformats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because fewer fields are included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); and an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands. A set of SIMD extensions referred to as the Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) coding scheme has been released and/or published (e.g., see Intel® 64 and IA-32 Architectures Software Developer's Manual, September 2014; and see Intel® Advanced Vector Extensions Programming Reference, October 2014).
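
As a rough illustration of the relationship between an instruction format and an occurrence of an instruction, the following Python sketch models a format as a set of named fields and an occurrence as specific contents for those fields; the field names and widths are hypothetical and are not taken from this disclosure.

    from dataclasses import dataclass

    @dataclass
    class InstructionFormat:
        opcode_bits: int       # width of the field that specifies the operation (opcode)
        operand_fields: list   # names of the operand selector fields

    # A hypothetical ADD format: one opcode field plus source1/destination and source2 selectors.
    ADD_FORMAT = InstructionFormat(opcode_bits=8, operand_fields=["src1_dst", "src2"])

    # An occurrence of ADD in an instruction stream fills the operand fields with
    # specific register selections.
    add_occurrence = {"opcode": 0x01, "src1_dst": "r3", "src2": "r7"}
    print(ADD_FORMAT, add_occurrence)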

Exemplary Instruction Formats

Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

Generic Vector Friendly Instruction Format

A vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations). While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations through the vector friendly instruction format.

FIGS. 23A-23B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to certain embodiments. FIG. 23A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof; while FIG. 23B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof. Specifically, a generic vector friendly instruction format 2300 is shown, for which class A and class B instruction templates are defined, both of which include no memory access 2305 instruction templates and memory access 2320 instruction templates. The term generic in the context of the vector friendly instruction format refers to the instruction format not being tied to any specific instruction set.

While embodiments will be described in which the vector friendly instruction format supports the following: a 64 byte vector operand length (or size) with 32 bit (4 byte) or 64 bit (8 byte) data element widths (or sizes) (and thus, a 64 byte vector consists of either 16 doubleword-size elements or, alternatively, 8 quadword-size elements); a 64 byte vector operand length (or size) with 16 bit (2 byte) or 8 bit (1 byte) data element widths (or sizes); a 32 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); and a 16 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); alternative embodiments may support more, fewer, and/or different vector operand sizes (e.g., 256 byte vector operands) with more, fewer, or different data element widths (e.g., 128 bit (16 byte) data element widths).
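
The element counts above follow directly from dividing the vector operand length by the data element width, as the short check below illustrates.

    # Number of data elements = vector operand length / data element width.
    def element_count(vector_bytes, element_bits):
        return vector_bytes // (element_bits // 8)

    assert element_count(64, 32) == 16   # 64 byte vector of 32 bit (doubleword) elements
    assert element_count(64, 64) == 8    # 64 byte vector of 64 bit (quadword) elements
    assert element_count(32, 16) == 16   # 32 byte vector of 16 bit (word) elements
    assert element_count(16, 8) == 16    # 16 byte vector of 8 bit (byte) elements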

The class A instruction templates in FIG. 23A include: 1) within the no memory access 2305 instruction templates there is shown a no memory access, full round control type operation 2310 instruction template and a no memory access, data transform type operation 2315 instruction template; and 2) within the memory access 2320 instruction templates there is shown a memory access, temporal 2325 instruction template and a memory access, non-temporal 2330 instruction template. The class B instruction templates in FIG. 23B include: 1) within the no memory access 2305 instruction templates there is shown a no memory access, write mask control, partial round control type operation 2312 instruction template and a no memory access, write mask control, vsize type operation 2317 instruction template; and 2) within the memory access 2320 instruction templates there is shown a memory access, write mask control 2327 instruction template.

The generic vector friendly instruction format 2300 includes the following fields listed below in the order illustrated in FIGS. 23A-23B.

Format field 2340—a specific value (an instruction format identifier value) in this field uniquely identifies the vector friendly instruction format, and thus occurrences of instructions in the vector friendly instruction format in instruction streams. As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format.

Base operation field 2342—its content distinguishes different base operations.

Register index field 2344—its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. These include a sufficient number of bits to select N registers from a P×Q (e.g., 32×512, 16×128, 32×1024, 64×1024) register file. While in one embodiment N may be up to three sources and one destination register, alternative embodiments may support more or fewer sources and destination registers (e.g., may support up to two sources where one of these sources also acts as the destination, may support up to three sources where one of these sources also acts as the destination, may support up to two sources and one destination).

Modifier field 2346—its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, between no memory access 2305 instruction templates and memory access 2320 instruction templates. Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory access operations do not (e.g., the source and destinations are registers). While in one embodiment this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, fewer, or different ways to perform memory address calculations.

Augmentation operation field 2350—its content distinguishes which one of a variety of different operations to be performed in addition to the base operation. This field is context specific. In one embodiment, this field is divided into a class field 2368, an alpha field 2352, and a beta field 2354. The augmentation operation field 2350 allows common groups of operations to be performed in a single instruction rather than 2, 3, or 4 instructions.

Scale field 2360—its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale*index+base).

Displacement Field 2362A—its content is used as part of memory address generation (e.g., for address generation that uses 2^scale*index+base+displacement).

Displacement Factor Field 2362B (note that the juxtaposition of displacement field 2362A directly over displacement factor field 2362B indicates one or the other is used)—its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size of a memory access (N)—where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale*index+base+scaled displacement). Redundant low-order bits are ignored and hence, the displacement factor field's content is multiplied by the memory operand's total size (N) in order to generate the final displacement to be used in calculating an effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 2374 (described later herein) and the data manipulation field 2354C. The displacement field 2362A and the displacement factor field 2362B are optional in the sense that they are not used for the no memory access 2305 instruction templates and/or different embodiments may implement only one or none of the two.
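
For illustration, the address generation expressions referenced by the scale, displacement, and displacement factor fields can be sketched as follows; the function and variable names are illustrative only.

    # 2^scale * index + base + displacement, as referenced above.
    def effective_address(base, index, scale, displacement=0):
        return (2 ** scale) * index + base + displacement

    # With the displacement factor field, the encoded factor is scaled by the size N
    # of the memory access before being applied (the disp8*N scheme described later).
    def effective_address_scaled(base, index, scale, disp_factor, access_size_n):
        return (2 ** scale) * index + base + disp_factor * access_size_n

    print(effective_address(0x1000, 4, 3, 16))            # base + 8*index + 16
    print(effective_address_scaled(0x1000, 4, 3, 2, 64))  # scaled displacement with N = 64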

Data element width field 2364—its content distinguishes which one of a number of data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions). This field is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.

Write mask field 2370—its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and augmentation operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging- and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, preserving the old value of each element of the destination where the corresponding mask bit has a 0. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements that are modified be consecutive. Thus, the write mask field 2370 allows for partial vector operations, including loads, stores, arithmetic, logical, etc. While embodiments are described in which the write mask field's 2370 content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field's 2370 content indirectly identifies the masking to be performed), alternative embodiments instead or in addition allow the write mask field's 2370 content to directly specify the masking to be performed.
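
The difference between merging- and zeroing-writemasking can be modeled element by element, as in the following illustrative sketch (not an implementation of any particular instruction).

    def apply_write_mask(dest, result, mask, zeroing):
        out = []
        for d, r, m in zip(dest, result, mask):
            if m:
                out.append(r)                     # mask bit 1: element reflects the operation
            else:
                out.append(0 if zeroing else d)   # mask bit 0: zero the element or preserve it
        return out

    dest, result, mask = [1, 2, 3, 4], [10, 20, 30, 40], [1, 0, 1, 0]
    print(apply_write_mask(dest, result, mask, zeroing=False))  # merging:  [10, 2, 30, 4]
    print(apply_write_mask(dest, result, mask, zeroing=True))   # zeroing:  [10, 0, 30, 0]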

Immediate field 2372—its content allows for the specification of an immediate. This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support an immediate and it is not present in instructions that do not use an immediate.

Class field 2368—its content distinguishes between different classes of instructions. With reference to FIGS. 23A-B, the contents of this field select between class A and class B instructions. In FIGS. 23A-B, rounded corner squares are used to indicate a specific value is present in a field (e.g., class A 2368A and class B 2368B for the class field 2368 respectively in FIGS. 23A-B).

Instruction Templates of Class A

In the case of the non-memory access 2305 instruction templates of class A, the alpha field 2352 is interpreted as an RS field 2352A, whose content distinguishes which one of the different augmentation operation types are to be performed (e.g., round 2352A.1 and data transform 2352A.2 are respectively specified for the no memory access, round type operation 2310 and the no memory access, data transform type operation 2315 instruction templates), while the beta field 2354 distinguishes which of the operations of the specified type is to be performed. In the no memory access 2305 instruction templates, the scale field 2360, the displacement field 2362A, and the displacement scale field 2362B are not present.

No-Memory Access Instruction Templates—Full Round Control Type Operation

In the no memory access full round control type operation 2310 instruction template, the beta field 2354 is interpreted as a round control field 2354A, whose content(s) provide static rounding. While in the described embodiments the round control field 2354A includes a suppress all floating point exceptions (SAE) field 2356 and a round operation control field 2358, alternative embodiments may encode both of these concepts into the same field or may have only one or the other of these concepts/fields (e.g., may have only the round operation control field 2358).

SAE field 2356—its content distinguishes whether or not to disable the exception event reporting; when the SAE field's 2356 content indicates suppression is enabled, a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler.

Round operation control field 2358—its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 2358 allows for the changing of the rounding mode on a per instruction basis. In one embodiment where a processor includes a control register for specifying rounding modes, the round operation control field's 2358 content overrides that register value.

No Memory Access Instruction Templates—Data Transform Type Operation

In the no memory access data transform type operation 2315 instruction template, the beta field 2354 is interpreted as a data transform field 2354B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).
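
As a simple reference model (not tied to any specific encoding in this disclosure), the swizzle and broadcast transforms named above can be expressed as follows.

    def swizzle(elements, order):
        # Reorder the data elements according to a permutation pattern.
        return [elements[i] for i in order]

    def broadcast(scalar, count):
        # Replicate a single element across all element positions.
        return [scalar] * count

    print(swizzle([10, 20, 30, 40], [3, 2, 1, 0]))  # lanes reversed
    print(broadcast(7, 8))                          # [7, 7, 7, 7, 7, 7, 7, 7]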

In the case of a memory access 2320 instruction template of class A, the alpha field 2352 is interpreted as an eviction hint field 2352B, whose content distinguishes which one of the eviction hints is to be used (in FIG. 23A, temporal 2352B.1 and non-temporal 2352B.2 are respectively specified for the memory access, temporal 2325 instruction template and the memory access, non-temporal 2330 instruction template), while the beta field 2354 is interpreted as a data manipulation field 2354C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation; broadcast; up conversion of a source; and down conversion of a destination). The memory access 2320 instruction templates include the scale field 2360, and optionally the displacement field 2362A or the displacement scale field 2362B.

Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data element-wise fashion, with the elements that are actually transferred being dictated by the contents of the vector mask that is selected as the write mask.

Memory Access Instruction Templates—Temporal

Temporal data is data likely to be reused soon enough to benefit from caching. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Memory Access Instruction Templates—Non-Temporal

Non-temporal data is data unlikely to be reused soon enough to benefit from caching in the 1st-level cache and should be given priority for eviction. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Instruction Templates of Class B

In the case of the instruction templates of class B, the alpha field 2352 is interpreted as a write mask control (Z) field 2352C, whose content distinguishes whether the write masking controlled by the write mask field 2370 should be a merging or a zeroing.

In the case of the non-memory access 2305 instruction templates of class B, part of the beta field 2354 is interpreted as an RL field 2357A, whose content distinguishes which one of the different augmentation operation types are to be performed (e.g., round 2357A.1 and vector length (VSIZE) 2357A.2 are respectively specified for the no memory access, write mask control, partial round control type operation 2312 instruction template and the no memory access, write mask control, VSIZE type operation 2317 instruction template), while the rest of the beta field 2354 distinguishes which of the operations of the specified type is to be performed. In the no memory access 2305 instruction templates, the scale field 2360, the displacement field 2362A, and the displacement scale field 2362B are not present.

In the no memory access, write mask control, partial round control type operation 2312 instruction template, the rest of the beta field 2354 is interpreted as a round operation field 2359A and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler).

Round operation control field 2359A—just as round operation control field 2358, its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 2359A allows for the changing of the rounding mode on a per instruction basis. In one embodiment where a processor includes a control register for specifying rounding modes, the round operation control field's 2359A content overrides that register value.

In the no memory access, write mask control, VSIZE type operation 2317 instruction template, the rest of the beta field 2354 is interpreted as a vector length field 2359B, whose content distinguishes which one of a number of data vector lengths is to be performed on (e.g., 128, 256, or 512 byte).

In the case of a memory access 2320 instruction template of class B, part of the beta field 2354 is interpreted as a broadcast field 2357B, whose content distinguishes whether or not the broadcast type data manipulation operation is to be performed, while the rest of the beta field 2354 is interpreted as the vector length field 2359B. The memory access 2320 instruction templates include the scale field 2360, and optionally the displacement field 2362A or the displacement scale field 2362B.

With regard to the generic vector friendly instruction format 2300, a full opcode field 2374 is shown including the format field 2340, the base operation field 2342, and the data element width field 2364. While one embodiment is shown where the full opcode field 2374 includes all of these fields, the full opcode field 2374 includes less than all of these fields in embodiments that do not support all of them. The full opcode field 2374 provides the operation code (opcode).

The augmentation operation field 2350, the data element width field 2364, and the write mask field 2370 allow these features to be specified on a per instruction basis in the generic vector friendly instruction format.

The combination of write mask field and data element width field creates typed instructions in that they allow the mask to be applied based on different data element widths.

The various instruction templates found within class A and class B are beneficial in different situations. In some embodiments, different processors or different cores within a processor may support only class A, only class B, or both classes. For instance, a high performance general purpose out-of-order core intended for general-purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both may support both (of course, a core that has some mix of templates and instructions from both classes but not all templates and instructions from both classes is within the purview of this disclosure). Also, a single processor may include multiple cores, all of which support the same class or in which different cores support different classes. For instance, in a processor with separate graphics and general purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general purpose cores may be high performance general purpose cores with out of order execution and register renaming intended for general-purpose computing that support only class B. Another processor that does not have a separate graphics core may include one or more general purpose in-order or out-of-order cores that support both class A and class B. Of course, features from one class may also be implemented in the other class in different embodiments. Programs written in a high level language would be put (e.g., just in time compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using different combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor which is currently executing the code.
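
The second executable form described above (alternative routines plus control flow that selects among them) might look, in rough outline, like the following Python sketch; the feature flags and kernel names are hypothetical and the feature-detection mechanism is assumed rather than specified here.

    def kernel_class_a(data): return sum(data)   # placeholder routine built from class A templates
    def kernel_class_b(data): return sum(data)   # placeholder routine built from class B templates
    def kernel_scalar(data): return sum(data)    # portable fallback routine

    def run_kernel(data, supports_class_a, supports_class_b):
        # Control flow code that selects a routine based on what the executing processor supports.
        if supports_class_b:
            return kernel_class_b(data)
        if supports_class_a:
            return kernel_class_a(data)
        return kernel_scalar(data)

    print(run_kernel([1, 2, 3], supports_class_a=True, supports_class_b=False))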

Exemplary Specific Vector Friendly Instruction Format

FIG. 24 is a block diagram illustrating an exemplary specific vector friendly instruction format according to certain embodiments. FIG. 24 shows a specific vector friendly instruction format 2400 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields. The specific vector friendly instruction format 2400 may be used to extend the x86 instruction set, and thus some of the fields are similar or the same as those used in the existing x86 instruction set and extension thereof (e.g., AVX). This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate fields of the existing x86 instruction set with extensions. The fields from FIG. 23 into which the fields from FIG. 24 map are illustrated.

It should be understood that, although some embodiments are described with reference to the specific vector friendly instruction format 2400 in the context of the generic vector friendly instruction format 2300 for illustrative purposes, embodiments of this disclosure are not limited to the specific vector friendly instruction format 2400. For example, the generic vector friendly instruction format 2300 contemplates a variety of possible sizes for the various fields, while the specific vector friendly instruction format 2400 is shown as having fields of specific sizes. By way of specific example, while the data element width field 2364 is illustrated as a one bit field in the specific vector friendly instruction format 2400, not all embodiments are so limited (that is, the generic vector friendly instruction format 2300 contemplates other sizes of the data element width field 2364).

The specific vector friendly instruction format 2400 includes the following fields listed below in the order illustrated in FIG. 24A.

EVEX Prefix (Bytes 0-3) 2402—is encoded in a four-byte form.

Format Field 2340 (EVEX Byte 0, bits [7:0])—the first byte (EVEX Byte 0) is the format field 2340 and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format in one embodiment).

The second-fourth bytes (EVEX Bytes 1-3) include a number of bit fields providing specific capability.

REX field 2405 (EVEX Byte 1, bits [7-5])—consists of an EVEX.R bit field (EVEX Byte 1, bit [7]—R), an EVEX.X bit field (EVEX byte 1, bit [6]—X), and an EVEX.B bit field (EVEX byte 1, bit [5]—B). The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields, and are encoded using 1s complement form, i.e., ZMM0 is encoded as 1111B, ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.

REX′ field 2410—this is the first part of the REX′ field 2410 and is the EVEX.R′ bit field (EVEX Byte 1, bit [4]—R′) that is used to encode either the upper 16 or lower 16 of the extended 32 register set. In one embodiment, this bit, along with others as indicated below, is stored in bit inverted format to distinguish (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but which does not accept in the MOD R/M field (described below) the value of 11 in the MOD field; alternative embodiments do not store this and the other indicated bits below in the inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R′Rrrr is formed by combining EVEX.R′, EVEX.R, and the other RRR from other fields.
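
Following the description above, the way a 5-bit register index might be assembled from the bit-inverted EVEX.R′ and EVEX.R bits plus the 3-bit rrr field can be sketched as below; this is an illustration of the stated bit combination, not a full decoder.

    def reg_index(evex_r_prime, evex_r, rrr):
        # EVEX.R' and EVEX.R are stored in inverted form, so flip them before use.
        r_prime = (~evex_r_prime) & 1
        r = (~evex_r) & 1
        return (r_prime << 4) | (r << 3) | (rrr & 0b111)

    # With both stored bits at 1 (the "lower registers" encoding), the index is just rrr;
    # clearing them selects registers in the upper half of the extended register set.
    print(reg_index(1, 1, 0b011))   # -> 3
    print(reg_index(0, 0, 0b011))   # -> 27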

Opcode map field 2415 (EVEX byte 1, bits [3:0]—mmmm)—its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3A).

Data element width field 2364 (EVEX byte 2, bit [7]—W)—is represented by the notation EVEX.W. EVEX.W is used to define the granularity (size) of the datatype (either 32-bit data elements or 64-bit data elements).

EVEX.vvvv 2420 (EVEX Byte 2, bits [6:3]—vvvv)—the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1s complement) form and is valid for instructions with 2 or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1s complement form for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, the field is reserved and should contain 1111b. Thus, EVEX.vvvv field 2420 encodes the 4 low-order bits of the first source register specifier stored in inverted (1s complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.
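
The inverted (1s complement) storage of EVEX.vvvv described above can be illustrated with a small encode/decode pair, where 1111b corresponds to encoding register 0 (or to no operand being encoded).

    def encode_vvvv(reg):
        return (~reg) & 0b1111    # store the 4 low-order specifier bits in inverted form

    def decode_vvvv(vvvv):
        return (~vvvv) & 0b1111   # invert again to recover the specifier bits

    assert encode_vvvv(0) == 0b1111
    assert decode_vvvv(encode_vvvv(5)) == 5
    print(encode_vvvv(5), decode_vvvv(0b1010))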

EVEX.U 2368 Class field (EVEX byte 2, bit [2]—U)—If EVEX.U=0, it indicates class A or EVEX.U0; if EVEX.U=1, it indicates class B or EVEX.U1.

Prefix encoding field 2425 (EVEX byte 2, bits [1:0]—pp)—provides additional bits for the base operation field. In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field; and at runtime are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy and EVEX format of these legacy instructions without modification). Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2 bit SIMD prefix encodings, and thus not require the expansion.

Alpha field 2352 (EVEX byte 3, bit [7]—EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated with α)—as previously described, this field is context specific.

Beta field 2354 (EVEX byte 3, bits [6:4]—SSS, also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ)—as previously described, this field is context specific.

REX′ field 2410—this is the remainder of the REX′ field 2410 and is the EVEX.V′ bit field (EVEX Byte 3, bit [3]—V′) that may be used to encode either the upper 16 or lower 16 of the extended 32 register set. This bit is stored in bit inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V′VVVV is formed by combining EVEX.V′ and EVEX.vvvv.

Write mask field 2370 (EVEX byte 3, bits [2:0]—kkk)—its content specifies the index of a register in the write mask registers as previously described. In one embodiment, the specific value EVEX.kkk=000 has a special behavior implying no write mask is used for the particular instruction (this may be implemented in a variety of ways including the use of a write mask hardwired to all ones or hardware that bypasses the masking hardware).

Real Opcode Field 2430 (Byte 4) is also known as the opcode byte. Part of the opcode is specified in this field.

MOD R/M Field 2440 (Byte 5) includes MOD field 2442, Reg field 2444, and R/M field 2446. As previously described, the MOD field's 2442 content distinguishes between memory access and non-memory access operations. The role of Reg field 2444 can be summarized to two situations: encoding either the destination register operand or a source register operand, or be treated as an opcode extension and not used to encode any instruction operand. The role of R/M field 2446 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.

Scale, Index, Base (SIB) Byte (Byte 6)—As previously described, the scale field's 2360 content is used for memory address generation. SIB.xxx 2454 and SIB.bbb 2456—the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.

Displacement field 2362A (Bytes 7-10)—when MOD field 2442 contains 10, bytes 7-10 are the displacement field 2362A, and it works the same as the legacy 32-bit displacement (disp32) and works at byte granularity.

Displacement factor field 2362B (Byte 7)—when MOD field 2442 contains 01, byte 7 is the displacement factor field 2362B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address offsets between −128 and 127 bytes; in terms of 64 byte cache lines, disp8 uses 8 bits that can be set to only four really useful values −128, −64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 2362B is a reinterpretation of disp8; when using the displacement factor field 2362B, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte is used for the displacement but with a much greater range). Such compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence, the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 2362B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 2362B is encoded the same way as an x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules) with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset). Immediate field 2372 operates as previously described.
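
The disp8*N reinterpretation described above amounts to sign-extending the stored 8-bit factor and multiplying it by the memory access size N, as in this illustrative calculation.

    def disp8n_displacement(disp8_byte, n):
        factor = disp8_byte - 256 if disp8_byte >= 128 else disp8_byte   # sign-extend the 8-bit factor
        return factor * n

    print(disp8n_displacement(0x01, 64))   # +64: one 64-byte access forward
    print(disp8n_displacement(0xFF, 64))   # -64: one access backward
    print(disp8n_displacement(0x7F, 64))   # +8128, versus a maximum of +127 with legacy disp8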

Full Opcode Field

FIG. 24B is a block diagram illustrating the fields of the specific vector friendly instruction format 2400 that make up the full opcode field 2374 according to one embodiment. Specifically, the full opcode field 2374 includes the format field 2340, the base operation field 2342, and the data element width (W) field 2364. The base operation field 2342 includes the prefix encoding field 2425, the opcode map field 2415, and the real opcode field 2430.

Register Index Field

FIG. 24C is a block diagram illustrating the fields of the specific vector friendly instruction format 2400 that make up the register index field 2344 according to one embodiment. Specifically, the register index field 2344 includes the REX field 2405, the REX′ field 2410, the MODR/M.reg field 2444, the MODR/M.r/m field 2446, the VVVV field 2420, xxx field 2454, and the bbb field 2456.

Augmentation Operation Field

FIG. 24D is a block diagram illustrating the fields of the specific vector friendly instruction format 2400 that make up the augmentation operation field 2350 according to one embodiment. When the class (U) field 2368 contains 0, it signifies EVEX.U0 (class A 2368A); when it contains 1, it signifies EVEX.U1 (class B 2368B). When U=0 and the MOD field 2442 contains 11 (signifying a no memory access operation), the alpha field 2352 (EVEX byte 3, bit [7]—EH) is interpreted as the rs field 2352A. When the rs field 2352A contains a 1 (round 2352A.1), the beta field 2354 (EVEX byte 3, bits [6:4]—SSS) is interpreted as the round control field 2354A. The round control field 2354A includes a one bit SAE field 2356 and a two bit round operation field 2358. When the rs field 2352A contains a 0 (data transform 2352A.2), the beta field 2354 (EVEX byte 3, bits [6:4]—SSS) is interpreted as a three bit data transform field 2354B. When U=0 and the MOD field 2442 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 2352 (EVEX byte 3, bit [7]—EH) is interpreted as the eviction hint (EH) field 2352B and the beta field 2354 (EVEX byte 3, bits [6:4]—SSS) is interpreted as a three bit data manipulation field 2354C.

When U=1, the alpha field 2352 (EVEX byte 3, bit [7]—EH) is interpreted as the write mask control (Z) field 2352C. When U=1 and the MOD field 2442 contains 11 (signifying a no memory access operation), part of the beta field 2354 (EVEX byte 3, bit [4]—S0) is interpreted as the RL field 2357A; when it contains a 1 (round 2357A.1) the rest of the beta field 2354 (EVEX byte 3, bit [6-5]—S2-1) is interpreted as the round operation field 2359A, while when the RL field 2357A contains a 0 (VSIZE 2357A.2) the rest of the beta field 2354 (EVEX byte 3, bit [6-5]—S2-1) is interpreted as the vector length field 2359B (EVEX byte 3, bit [6-5]—L1-0). When U=1 and the MOD field 2442 contains 00, 01, or 10 (signifying a memory access operation), the beta field 2354 (EVEX byte 3, bits [6:4]—SSS) is interpreted as the vector length field 2359B (EVEX byte 3, bit [6-5]—L1-0) and the broadcast field 2357B (EVEX byte 3, bit [4]—B).
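
The interpretations above can be restated compactly as follows; this sketch simply mirrors the prose (with beta passed as a 3-bit value whose least significant bit is S0) and is not a complete instruction decoder.

    def interpret_augmentation(u, mod, alpha, beta):
        memory_access = mod != 0b11
        if u == 0:                                    # class A (EVEX.U0)
            if not memory_access:
                return ("round control", beta) if alpha else ("data transform", beta)
            return ("eviction hint", alpha), ("data manipulation", beta)
        if not memory_access:                         # class B (EVEX.U1), no memory access
            rl, rest = beta & 1, beta >> 1
            return ("round operation", rest) if rl else ("vector length", rest)
        return ("vector length", beta >> 1), ("broadcast", beta & 1)

    print(interpret_augmentation(u=0, mod=0b11, alpha=1, beta=0b010))   # round control case
    print(interpret_augmentation(u=1, mod=0b00, alpha=0, beta=0b101))   # vector length + broadcast case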

Exemplary Register Architecture

FIG. 25 is a block diagram of a register architecture 2500 according to one embodiment. In the embodiment illustrated, there are 32 vector registers 2510 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15. The specific vector friendly instruction format 2400 operates on this overlaid register file as illustrated in the table below.

Adjustable Vector Length | Class | Operations | Registers
Instruction templates that do not include the vector length field 2359B | A (FIG. 23A; U = 0) | 2310, 2315, 2325, 2330 | zmm registers (the vector length is 64 byte)
Instruction templates that do not include the vector length field 2359B | B (FIG. 23B; U = 1) | 2312 | zmm registers (the vector length is 64 byte)
Instruction templates that do include the vector length field 2359B | B (FIG. 23B; U = 1) | 2317, 2327 | zmm, ymm, or xmm registers (the vector length is 64 byte, 32 byte, or 16 byte) depending on the vector length field 2359B

In other words, the vector length field 2359B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; and instruction templates without the vector length field 2359B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the specific vector friendly instruction format 2400 operate on packed or scalar single/double-precision floating point data and packed or scalar integer data. Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed depending on the embodiment.
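
The overlay of the ymm and xmm registers on the zmm registers described above can be modeled simply as taking the lower-order 256 or 128 bits of the 512-bit value, as in this illustrative sketch.

    def ymm_view(zmm_value):
        return zmm_value & ((1 << 256) - 1)   # lower order 256 bits

    def xmm_view(zmm_value):
        return zmm_value & ((1 << 128) - 1)   # lower order 128 bits

    zmm0 = (1 << 300) | (1 << 200) | 0xABCD   # a 512-bit value with bits set above 256 and above 128
    print(hex(ymm_view(zmm0)))                # bit 300 is not visible in the ymm view
    print(hex(xmm_view(zmm0)))                # only the low 128 bits remain in the xmm view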

Write mask registers 2515—in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternate embodiment, the write mask registers 2515 are 16 bits in size. As previously described, in one embodiment, the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.

General-purpose registers 2525—in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

Scalar floating point stack register file (x87 stack) 2545, on which is aliased the MMX packed integer flat register file 2550—in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating point data using the x87 instruction set extension; while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

Alternative embodiments may use wider or narrower registers. Additionally, alternative embodiments may use more, fewer, or different register files and registers.

Exemplary Core Architectures, Processors, and Computer Architectures

Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

Example Core Architectures

FIG. 26A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to certain embodiments. FIG. 26B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to certain embodiments. The solid lined boxes in FIGS. 26A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In FIG. 26A, a processor pipeline 2600 includes a fetch stage 2602, a length decode stage 2604, a decode stage 2606, an allocation stage 2608, a renaming stage 2610, a scheduling (also known as a dispatch or issue) stage 2612, a register read/memory read stage 2614, an execute stage 2616, a write back/memory write stage 2618, an exception handling stage 2622, and a commit stage 2624.

FIG. 26B shows processor core 2690 including a front end unit 2630 coupled to an execution engine unit 2650, and both are coupled to a memory unit 2670. The core 2690 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 2690 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

The front end unit 2630 includes a branch prediction unit 2632 coupled to an instruction cache unit 2634, which is coupled to an instruction translation lookaside buffer (TLB) 2636, which is coupled to an instruction fetch unit 2638, which is coupled to a decode unit 2640. The decode unit 2640 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 2640 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 2690 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 2640 or otherwise within the front end unit 2630). The decode unit 2640 is coupled to a rename/allocator unit 2652 in the execution engine unit 2650.

The execution engine unit 2650 includes the rename/allocator unit 2652 coupled to a retirement unit 2654 and a set of one or more scheduler unit(s) 2656. The scheduler unit(s) 2656 represents any number of different schedulers, including reservations stations, central instruction window, etc. The scheduler unit(s) 2656 is coupled to the physical register file(s) unit(s) 2658. Each of the physical register file(s) units 2658 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 2658 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 2658 is overlapped by the retirement unit 2654 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 2654 and the physical register file(s) unit(s) 2658 are coupled to the execution cluster(s) 2660. The execution cluster(s) 2660 includes a set of one or more execution units 2662 and a set of one or more memory access units 2664. The execution units 2662 may perform various operations (e.g., shifts, addition, subtraction, multiplication) and on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 2656, physical register file(s) unit(s) 2658, and execution cluster(s) 2660 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster—and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 2664). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 2664 is coupled to the memory unit 2670, which includes a data TLB unit 2672 coupled to a data cache unit 2674 coupled to a level 2 (L2) cache unit 2676. In one exemplary embodiment, the memory access units 2664 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 2672 in the memory unit 2670. The instruction cache unit 2634 is further coupled to a level 2 (L2) cache unit 2676 in the memory unit 2670. The L2 cache unit 2676 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 2600 as follows: 1) the instruction fetch unit 2638 performs the fetch and length decoding stages 2602 and 2604; 2) the decode unit 2640 performs the decode stage 2606; 3) the rename/allocator unit 2652 performs the allocation stage 2608 and renaming stage 2610; 4) the scheduler unit(s) 2656 performs the schedule stage 2612; 5) the physical register file(s) unit(s) 2658 and the memory unit 2670 perform the register read/memory read stage 2614; 6) the execution cluster 2660 performs the execute stage 2616; 7) the memory unit 2670 and the physical register file(s) unit(s) 2658 perform the write back/memory write stage 2618; 8) various units may be involved in the exception handling stage 2622; and 9) the retirement unit 2654 and the physical register file(s) unit(s) 2658 perform the commit stage 2624.

The core 2690 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, Calif.; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, Calif.), including the instruction(s) described herein. In one embodiment, the core 2690 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 2634/2674 and a shared L2 cache unit 2676, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

FIGS. 27A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.

FIG. 27A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 2702 and with its local subset of the Level 2 (L2) cache 2704, according to certain embodiments. In one embodiment, an instruction decoder 2700 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 2706 allows low-latency accesses to cache memory into the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit 2708 and a vector unit 2710 use separate register sets (respectively, scalar registers 2712 and vector registers 2714) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 2706, alternative embodiments may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset of the L2 cache 2704 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 2704. Data read by a processor core is stored in its L2 cache subset 2704 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 2704 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip. Each ring data-path is 1012-bits wide per direction.

FIG. 27B is an expanded view of part of the processor core in FIG. 27A according to certain embodiments. FIG. 27B includes an L1 data cache 2706A, part of the L1 cache 2706, as well as more detail regarding the vector unit 2710 and the vector registers 2714. Specifically, the vector unit 2710 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 2728), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 2720, numeric conversion with numeric convert units 2722A-B, and replication with replication unit 2724 on the memory input. Write mask registers 2726 allow predicating resulting vector writes.

FIG. 28 is a block diagram of a processor 2800 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to certain embodiments. The solid lined boxes in FIG. 28 illustrate a processor 2800 with a single core 2802A, a system agent 2810, a set of one or more bus controller units 2816, while the optional addition of the dashed lined boxes illustrates an alternative processor 2800 with multiple cores 2802A-N, a set of one or more integrated memory controller unit(s) 2814 in the system agent unit 2810, and special purpose logic 2808.

Thus, different implementations of the processor 2800 may include: 1) a CPU with the special purpose logic 2808 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 2802A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 2802A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor with the cores 2802A-N being a large number of general purpose in-order cores. Thus, the processor 2800 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 2800 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 2806, and external memory (not shown) coupled to the set of integrated memory controller units 2814. The set of shared cache units 2806 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 2812 interconnects the integrated graphics logic 2808, the set of shared cache units 2806, and the system agent unit 2810/integrated memory controller unit(s) 2814, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 2806 and cores 2802-A-N.

In some embodiments, one or more of the cores 2802A-N are capable of multi-threading. The system agent 2810 includes those components coordinating and operating cores 2802A-N. The system agent unit 2810 may include for example a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 2802A-N and the integrated graphics logic 2808. The display unit is for driving one or more externally connected displays.

The cores 2802A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 2802A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Example Computer Architectures

FIGS. 29-32 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices, are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Referring now to FIG. 29, shown is a block diagram of a system 2900 in accordance with one embodiment. The system 2900 may include one or more processors 2910, 2915, which are coupled to a controller hub 2920. In one embodiment, the controller hub 2920 includes a graphics memory controller hub (GMCH) 2990 and an Input/Output Hub (IOH) 2950 (which may be on separate chips); the GMCH 2990 includes memory and graphics controllers to which are coupled memory 2940 and a coprocessor 2945; and the IOH 2950 couples input/output (I/O) devices 2960 to the GMCH 2990. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 2940 and the coprocessor 2945 are coupled directly to the processor 2910, and the controller hub 2920 is in a single chip with the IOH 2950.

The optional nature of additional processors 2915 is denoted in FIG. 29 with broken lines. Each processor 2910, 2915 may include one or more of the processing cores described herein and may be some version of the processor 2800.

The memory 2940 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 2920 communicates with the processor(s) 2910, 2915 via a multi-drop bus, such as a frontside bus (FSB), point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 2995.

In one embodiment, the coprocessor 2945 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 2920 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 2910, 2915 in terms of a spectrum of metrics of merit, including architectural, microarchitectural, thermal, and power consumption characteristics, among others.

In one embodiment, the processor 2910 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 2910 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 2945. Accordingly, the processor 2910 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 2945. Coprocessor(s) 2945 accept and execute the received coprocessor instructions.
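Purely for illustration, the routing decision described in the preceding paragraph can be sketched in software as follows. Every name, the opcode range test, and the stub functions below are hypothetical placeholders chosen for exposition; they do not correspond to the described hardware or to any particular instruction encoding.

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical decoded-instruction record; the field names are illustrative only. */
typedef struct {
    uint32_t opcode;
    uint64_t operand;
} decoded_insn_t;

/* Assumed predicate: true when the opcode falls within a coprocessor
 * opcode range (the actual encoding is not specified here). */
static bool is_coprocessor_opcode(uint32_t opcode) {
    return (opcode & 0xF000u) == 0xC000u;
}

/* Stand-ins for issuing on the coprocessor interconnect versus the
 * local execution pipeline. */
static void send_to_coprocessor(const decoded_insn_t *insn) {
    printf("issue 0x%" PRIx32 " on coprocessor bus\n", insn->opcode);
}

static void execute_locally(const decoded_insn_t *insn) {
    printf("execute 0x%" PRIx32 " in local pipeline\n", insn->opcode);
}

/* Software model of the routing decision: general instructions execute
 * locally, while recognized coprocessor instructions are forwarded. */
static void dispatch(const decoded_insn_t *insn) {
    if (is_coprocessor_opcode(insn->opcode))
        send_to_coprocessor(insn);
    else
        execute_locally(insn);
}

int main(void) {
    decoded_insn_t general = { .opcode = 0x1001u, .operand = 0 };
    decoded_insn_t coproc  = { .opcode = 0xC042u, .operand = 0 };
    dispatch(&general);
    dispatch(&coproc);
    return 0;
}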

Referring now to FIG. 30, shown is a block diagram of a first more specific exemplary system 3000 in accordance with an embodiment. As shown in FIG. 30, multiprocessor system 3000 is a point-to-point interconnect system and includes a first processor 3070 and a second processor 3080 coupled via a point-to-point interconnect 3050. Each of processors 3070 and 3080 may be some version of the processor 2800. In one embodiment, processors 3070 and 3080 are respectively processors 2910 and 2915, while coprocessor 3038 is coprocessor 2945. In another embodiment, processors 3070 and 3080 are respectively processor 2910 and coprocessor 2945.

Processors 3070 and 3080 are shown including integrated memory controller (IMC) units 3072 and 3082, respectively. Processor 3070 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 3076 and 3078; similarly, second processor 3080 includes P-P interfaces 3086 and 3088. Processors 3070, 3080 may exchange information via a point-to-point (P-P) interface 3050 using P-P interface circuits 3078, 3088. As shown in FIG. 30, IMCs 3072 and 3082 couple the processors to respective memories, namely a memory 3032 and a memory 3034, which may be portions of main memory locally attached to the respective processors.

Processors 3070, 3080 may each exchange information with a chipset 3090 via individual P-P interfaces 3052, 3054 using point-to-point interface circuits 3076, 3094, 3086, 3098. Chipset 3090 may optionally exchange information with the coprocessor 3038 via a high-performance interface 3039. In one embodiment, the coprocessor 3038 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.

A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

Chipset 3090 may be coupled to a first bus 3016 via an interface 3096. In one embodiment, first bus 3016 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present disclosure is not so limited.

As shown in FIG. 30, various I/O devices 3014 may be coupled to first bus 3016, along with a bus bridge 3018 which couples first bus 3016 to a second bus 3020. In one embodiment, one or more additional processor(s) 3015, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 3016. In one embodiment, second bus 3020 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 3020 including, for example, a keyboard and/or mouse 3022, communication devices 3027, and a storage unit 3028 such as a disk drive or other mass storage device which may include instructions/code and data 3030, in one embodiment. Further, an audio I/O 3024 may be coupled to the second bus 3020. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 30, a system may implement a multi-drop bus or other such architecture.

Referring now to FIG. 31, shown is a block diagram of a second more specific exemplary system 3100 in accordance with an embodiment. Like elements in FIGS. 30 and 31 bear like reference numerals, and certain aspects of FIG. 30 have been omitted from FIG. 31 in order to avoid obscuring other aspects of FIG. 31.

FIG. 31 illustrates that the processors 3070, 3080 may include integrated memory and I/O control logic (“CL”) 3072 and 3082, respectively. Thus, the CL 3072, 3082 include integrated memory controller units as well as I/O control logic. FIG. 31 illustrates that not only are the memories 3032, 3034 coupled to the CL 3072, 3082, but also that I/O devices 3114 are coupled to the control logic 3072, 3082. Legacy I/O devices 3115 are coupled to the chipset 3090.

Referring now to FIG. 32, shown is a block diagram of a SoC 3200 in accordance with an embodiment. Similar elements in FIG. 28 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In FIG. 32, an interconnect unit(s) 3202 is coupled to: an application processor 3210 which includes a set of one or more cores 2802A-N and shared cache unit(s) 2806; a system agent unit 2810; a bus controller unit(s) 2816; an integrated memory controller unit(s) 2814; a set of one or more coprocessors 3220 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 3230; a direct memory access (DMA) unit 3232; and a display unit 3240 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 3220 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Certain embodiments may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as code 3030 illustrated in FIG. 30, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.
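As one illustration of program code that exercises the functionality described herein from a high-level language, the following C sketch models the semantics of a strided copy entirely in software. The function names and the loop body are assumptions made for exposition only; on a system implementing the described instruction set architecture, the same behavior could instead be requested through a single offloaded DMA copy stride instruction rather than the element-by-element loop shown here.

#include <stddef.h>
#include <string.h>

/* Software model (assumed name) of a strided copy: move 'count' elements of
 * 'elem_size' bytes, advancing the destination and source pointers by
 * independent byte strides. This loop only models the externally visible
 * behavior; it is not a description of the hardware implementation. */
void dma_copy_stride_model(void *dst, const void *src,
                           size_t elem_size, size_t count,
                           size_t dst_stride, size_t src_stride) {
    unsigned char *d = dst;
    const unsigned char *s = src;
    for (size_t i = 0; i < count; i++) {
        memcpy(d, s, elem_size);
        d += dst_stride;
        s += src_stride;
    }
}

/* Example use: pack column 'col' of a row-major 'rows' x 'cols' matrix of
 * doubles into the contiguous buffer 'out'. */
void copy_column(double *out, const double *matrix,
                 size_t rows, size_t cols, size_t col) {
    dma_copy_stride_model(out, &matrix[col], sizeof(double), rows,
                          sizeof(double),          /* destination stride */
                          cols * sizeof(double));  /* source stride */
}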

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which, when read by a machine, causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as “IP cores,” may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks; any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), and phase change memory (PCM); magnetic or optical cards; or any other type of media suitable for storing electronic instructions.

Accordingly, certain embodiments also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.

Example Embodiments

The following examples pertain to embodiments in accordance with this Specification.

Example 1 includes a processor, comprising: decode circuitry to decode an instruction to perform a direct memory access (DMA) operation, wherein the instruction comprises an opcode and one or more fields, wherein the opcode indicates a type of DMA operation to be performed, and wherein the one or more fields indicate: a destination memory region; and one or more data operands; and memory offload circuitry to offload the instruction from an execution pipeline, wherein the memory offload circuitry is to perform the DMA operation based on the opcode and the one or more fields.

Example 2 includes the processor of Example 1, wherein the one or more fields further indicate: a compute operation to be performed on the one or more data operands, wherein one or more resulting data operands are to be written to the destination memory region.

Example 3 includes the processor of Example 2, wherein the compute operation comprises: a complement operation; a bitwise operation; an add operation; or a multiply operation.

Example 4 includes the processor of any of Examples 2-3, wherein the compute operation is to be performed on at least one data operand from a source memory region and at least one data operand from the destination memory region.

Example 5 includes the processor of any of Examples 2-4, wherein the one or more fields comprise a DMA type field, wherein the DMA type field indicates the compute operation to be performed.

Example 6 includes the processor of any of Examples 1-5, wherein: the DMA operation is to read from or write to a non-contiguous memory region, wherein the non-contiguous memory region is a source memory region or the destination memory region.

Example 7 includes the processor of any of Examples 1-5, wherein the instruction is a DMA initialize instruction, wherein the DMA initialize instruction is to initialize the destination memory region with data.

Example 8 includes the processor of any of Examples 1-5, wherein the instruction is a DMA initialize stride instruction, wherein the DMA initialize stride instruction is to initialize the destination memory region with data, wherein the destination memory region is a strided memory region.

Example 9 includes the processor of any of Examples 1-5, wherein the instruction is a DMA copy stride instruction, wherein the DMA copy stride instruction is to copy data from a source memory region to the destination memory region, wherein at least one of the source memory region or the destination memory region is a strided memory region.

Example 10 includes the processor of any of Examples 1-5, wherein the instruction is a DMA scatter instruction, wherein the DMA scatter instruction is to scatter data across the destination memory region, wherein the destination memory region is a non-contiguous memory region.

Example 11 includes the processor of any of Examples 1-5, wherein the instruction is a DMA gather instruction, wherein the DMA gather instruction is to gather data from a source memory region and store the data in the destination memory region, wherein the source memory region is a non-contiguous memory region and the destination memory region is a contiguous memory region.

Example 12 includes one or more non-transitory computer-readable media comprising instructions that, when executed or implemented by a processor, cause the processor to: decode an instruction to perform a direct memory access (DMA) operation, wherein the instruction comprises an opcode and one or more fields, wherein the opcode indicates a type of DMA operation to be performed, and wherein the one or more fields indicate: a destination memory region; and one or more data operands; and offload the instruction from an execution pipeline to a memory offload engine, wherein the memory offload engine is to perform the DMA operation based on the opcode and the one or more fields.

Example 13 includes the computer-readable media of Example 12, wherein the one or more fields further indicate: a compute operation to be performed on the one or more data operands, wherein one or more resulting data operands are to be written to the destination memory region.

Example 14 includes the computer-readable media of Example 13, wherein the compute operation comprises: a complement operation; a bitwise operation; an add operation; or a multiply operation.

Example 15 includes the computer-readable media of any of Examples 13-14, wherein the compute operation is to be performed on at least one data operand from a source memory region and at least one data operand from the destination memory region.

Example 16 includes the computer-readable media of any of Examples 12-15, wherein: the DMA operation is to read from or write to a non-contiguous memory region, wherein the non-contiguous memory region is a source memory region or the destination memory region.

Example 17 includes the computer-readable media of any of Examples 12-15, wherein the instruction is a DMA initialize instruction, wherein the DMA initialize instruction is to initialize the destination memory region with data.

Example 18 includes the computer-readable media of any of Examples 12-15, wherein the instruction is a DMA initialize stride instruction, wherein the DMA initialize stride instruction is to initialize the destination memory region with data, wherein the destination memory region is a strided memory region.

Example 19 includes the computer-readable media of any of Examples 12-15, wherein the instruction is a DMA copy stride instruction, wherein the DMA copy stride instruction is to copy data from a source memory region to the destination memory region, wherein at least one of the source memory region or the destination memory region is a strided memory region.

Example 20 includes the computer-readable media of any of Examples 12-15, wherein the instruction is a DMA scatter instruction, wherein the DMA scatter instruction is to scatter data across the destination memory region, wherein the destination memory region is a non-contiguous memory region.

Example 21 includes the computer-readable media of any of Examples 12-15, wherein the instruction is a DMA gather instruction, wherein the DMA gather instruction is to gather data from a source memory region and store the data in the destination memory region, wherein the source memory region is a non-contiguous memory region and the destination memory region is a contiguous memory region.

Example 22 includes a method, comprising: decoding an instruction to perform a direct memory access (DMA) operation, wherein the instruction comprises an opcode and one or more fields, wherein the opcode indicates a type of DMA operation to be performed, and wherein the one or more fields indicate: a destination memory region; and one or more data operands; and offloading the instruction from an execution pipeline to a memory offload engine, wherein the memory offload engine is to perform the DMA operation based on the opcode and the one or more fields.

Example 23 includes the method of Example 22, wherein the one or more fields further indicate: a compute operation to be performed on the one or more data operands, wherein one or more resulting data operands are to be written to the destination memory region.

Example 24 includes the method of Example 23, wherein the compute operation comprises: a complement operation; a bitwise operation; an add operation; or a multiply operation.

Example 25 includes the method of any of Examples 22-24, wherein the instruction is: a DMA initialize instruction, wherein the DMA initialize instruction is to initialize the destination memory region with data; a DMA initialize stride instruction, wherein the DMA initialize stride instruction is to initialize the destination memory region with data, wherein the destination memory region is a strided memory region; a DMA copy stride instruction, wherein the DMA copy stride instruction is to copy data from a source memory region to the destination memory region, wherein at least one of the source memory region or the destination memory region is a strided memory region; a DMA scatter instruction, wherein the DMA scatter instruction is to scatter data across the destination memory region, wherein the destination memory region is a non-contiguous memory region; or a DMA gather instruction, wherein the DMA gather instruction is to gather data from a source memory region and store the data in the destination memory region, wherein the source memory region is a non-contiguous memory region and the destination memory region is a contiguous memory region.

Example 26 includes a computing system, comprising: a memory; a plurality of processing cores, wherein each processing core comprises: decode circuitry to decode a plurality of instructions in an execution pipeline, wherein the plurality of instructions comprise a direct memory access (DMA) instruction, wherein the DMA instruction comprises an opcode and one or more fields, wherein the opcode indicates a DMA operation to be performed and the one or more fields indicate: a destination memory region on the memory; and one or more data operands; execution circuitry to execute the plurality of instructions in the execution pipeline; and memory offload circuitry to offload the DMA instruction from the execution pipeline, wherein the memory offload circuitry is to perform the DMA operation based on the opcode and the one or more fields.

Example 27 includes the computing system of Example 26, wherein the one or more fields further indicate: a compute operation to be performed on the one or more data operands, wherein one or more resulting data operands are to be written to the destination memory region.

Example 28 includes the computing system of Example 27, wherein the compute operation comprises: a complement operation; a bitwise operation; an add operation; or a multiply operation.

Example 29 includes the computing system of any of Examples 26-28, wherein the instruction is: a DMA initialize instruction, wherein the DMA initialize instruction is to initialize the destination memory region with data; a DMA initialize stride instruction, wherein the DMA initialize stride instruction is to initialize the destination memory region with data, wherein the destination memory region is a strided memory region; a DMA copy stride instruction, wherein the DMA copy stride instruction is to copy data from a source memory region to the destination memory region, wherein at least one of the source memory region or the destination memory region is a strided memory region; a DMA scatter instruction, wherein the DMA scatter instruction is to scatter data across the destination memory region, wherein the destination memory region is a non-contiguous memory region; or a DMA gather instruction, wherein the DMA gather instruction is to gather data from a source memory region and store the data in the destination memory region, wherein the source memory region is a non-contiguous memory region and the destination memory region is a contiguous memory region.
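For readability, the following C sketch summarizes in software the scatter and gather behaviors recited in Examples 10, 11, 20, and 21, and one instance of the fused compute operation recited in Examples 2-5. The function and parameter names are illustrative assumptions, the index-list representation of a non-contiguous region is only one possibility, and the loops model externally visible behavior rather than any particular hardware implementation.

#include <stddef.h>
#include <stdint.h>

/* Software model of a DMA gather: read elements from non-contiguous source
 * locations identified by an index list and pack them into a contiguous
 * destination region. */
void dma_gather_model(uint64_t *dst, const uint64_t *src,
                      const size_t *indices, size_t count) {
    for (size_t i = 0; i < count; i++)
        dst[i] = src[indices[i]];
}

/* Software model of a DMA scatter: write contiguous source elements to
 * non-contiguous destination locations identified by an index list. */
void dma_scatter_model(uint64_t *dst, const size_t *indices,
                       const uint64_t *src, size_t count) {
    for (size_t i = 0; i < count; i++)
        dst[indices[i]] = src[i];
}

/* Software model of a DMA operation with a fused compute step, here an add of
 * a source operand into the destination region before the result is written
 * back; "add" is one of the recited compute operations, not the only one. */
void dma_add_model(uint64_t *dst, const uint64_t *src, size_t count) {
    for (size_t i = 0; i < count; i++)
        dst[i] += src[i];
}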

Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

In the foregoing specification, a detailed description has been given with reference to specific exemplary embodiments. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the disclosure as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. Furthermore, the foregoing use of embodiment and other exemplary language does not necessarily refer to the same embodiment or the same example, but may refer to different and distinct embodiments, as well as potentially the same embodiment.

Claims

1. A processor, comprising:

decode circuitry to decode an instruction to perform a direct memory access (DMA) operation, wherein the instruction comprises an opcode and one or more fields, wherein the opcode indicates a type of DMA operation to be performed, and wherein the one or more fields indicate: a destination memory region; and one or more data operands; and
memory offload circuitry to offload the instruction from an execution pipeline, wherein the memory offload circuitry is to perform the DMA operation based on the opcode and the one or more fields.

2. The processor of claim 1, wherein the one or more fields further indicate:

a compute operation to be performed on the one or more data operands, wherein one or more resulting data operands are to be written to the destination memory region.

3. The processor of claim 2, wherein the compute operation comprises:

a complement operation;
a bitwise operation;
an add operation; or
a multiply operation.

4. The processor of claim 3, wherein the compute operation is to be performed on at least one data operand from a source memory region and at least one data operand from the destination memory region.

5. The processor of claim 3, wherein the one or more fields comprise a DMA type field, wherein the DMA type field indicates the compute operation to be performed.

6. The processor of claim 1, wherein:

the DMA operation is to read from or write to a non-contiguous memory region, wherein the non-contiguous memory region is a source memory region or the destination memory region.

7. The processor of claim 1, wherein the instruction is a DMA initialize instruction, wherein the DMA initialize instruction is to initialize the destination memory region with data.

8. The processor of claim 1, wherein the instruction is a DMA initialize stride instruction, wherein the DMA initialize stride instruction is to initialize the destination memory region with data, wherein the destination memory region is a strided memory region.

9. The processor of claim 1, wherein the instruction is a DMA copy stride instruction, wherein the DMA copy stride instruction is to copy data from a source memory region to the destination memory region, wherein at least one of the source memory region or the destination memory region is a strided memory region.

10. The processor of claim 1, wherein the instruction is a DMA scatter instruction, wherein the DMA scatter instruction is to scatter data across the destination memory region, wherein the destination memory region is a non-contiguous memory region.

11. The processor of claim 1, wherein the instruction is a DMA gather instruction, wherein the DMA gather instruction is to gather data from a source memory region and store the data in the destination memory region, wherein the source memory region is a non-contiguous memory region and the destination memory region is a contiguous memory region.

12. One or more non-transitory computer-readable media comprising instructions that, when executed or implemented by a processor, cause the processor to:

decode an instruction to perform a direct memory access (DMA) operation, wherein the instruction comprises an opcode and one or more fields, wherein the opcode indicates a type of DMA operation to be performed, and wherein the one or more fields indicate: a destination memory region; and one or more data operands; and
offload the instruction from an execution pipeline to a memory offload engine, wherein the memory offload engine is to perform the DMA operation based on the opcode and the one or more fields.

13. The computer-readable media of claim 12, wherein the one or more fields further indicate:

a compute operation to be performed on the one or more data operands, wherein one or more resulting data operands are to be written to the destination memory region.

14. The computer-readable media of claim 13, wherein the compute operation comprises:

a complement operation;
a bitwise operation;
an add operation; or
a multiply operation.

15. The computer-readable media of claim 14, wherein the compute operation is to be performed on at least one data operand from a source memory region and at least one data operand from the destination memory region.

16. The computer-readable media of claim 12, wherein:

the DMA operation is to read from or write to a non-contiguous memory region, wherein the non-contiguous memory region is a source memory region or the destination memory region.

17. The computer-readable media of claim 12, wherein the instruction is a DMA initialize instruction, wherein the DMA initialize instruction is to initialize the destination memory region with data.

18. The computer-readable media of claim 12, wherein the instruction is a DMA initialize stride instruction, wherein the DMA initialize stride instruction is to initialize the destination memory region with data, wherein the destination memory region is a strided memory region.

19. The computer-readable media of claim 12, wherein the instruction is a DMA copy stride instruction, wherein the DMA copy stride instruction is to copy data from a source memory region to the destination memory region, wherein at least one of the source memory region or the destination memory region is a strided memory region.

20. The computer-readable media of claim 12, wherein the instruction is a DMA scatter instruction, wherein the DMA scatter instruction is to scatter data across the destination memory region, wherein the destination memory region is a non-contiguous memory region.

21. The computer-readable media of claim 12, wherein the instruction is a DMA gather instruction, wherein the DMA gather instruction is to gather data from a source memory region and store the data in the destination memory region, wherein the source memory region is a non-contiguous memory region and the destination memory region is a contiguous memory region.

22. A method, comprising:

decoding an instruction to perform a direct memory access (DMA) operation, wherein the instruction comprises an opcode and one or more fields, wherein the opcode indicates a type of DMA operation to be performed, and wherein the one or more fields indicate: a destination memory region; and one or more data operands; and
offloading the instruction from an execution pipeline to a memory offload engine, wherein the memory offload engine is to perform the DMA operation based on the opcode and the one or more fields.

23. The method of claim 22, wherein the one or more fields further indicate:

a compute operation to be performed on the one or more data operands, wherein one or more resulting data operands are to be written to the destination memory region.

24. The method of claim 23, wherein the compute operation comprises:

a complement operation;
a bitwise operation;
an add operation; or
a multiply operation.

25. The method of claim 22, wherein the instruction is:

a DMA initialize instruction, wherein the DMA initialize instruction is to initialize the destination memory region with data;
a DMA initialize stride instruction, wherein the DMA initialize stride instruction is to initialize the destination memory region with data, wherein the destination memory region is a strided memory region;
a DMA copy stride instruction, wherein the DMA copy stride instruction is to copy data from a source memory region to the destination memory region, wherein at least one of the source memory region or the destination memory region is a strided memory region;
a DMA scatter instruction, wherein the DMA scatter instruction is to scatter data across the destination memory region, wherein the destination memory region is a non-contiguous memory region; or
a DMA gather instruction, wherein the DMA gather instruction is to gather data from a source memory region and store the data in the destination memory region, wherein the source memory region is a non-contiguous memory region and the destination memory region is a contiguous memory region.

26. A computing system, comprising:

a memory;
a plurality of processing cores, wherein each processing core comprises: decode circuitry to decode a plurality of instructions in an execution pipeline, wherein the plurality of instructions comprise a direct memory access (DMA) instruction, wherein the DMA instruction comprises an opcode and one or more fields, wherein the opcode indicates a DMA operation to be performed and the one or more fields indicate: a destination memory region on the memory; and one or more data operands; execution circuitry to execute the plurality of instructions in the execution pipeline; and memory offload circuitry to offload the DMA instruction from the execution pipeline, wherein the memory offload circuitry is to perform the DMA operation based on the opcode and the one or more fields.

27. The computing system of claim 26, wherein the one or more fields further indicate:

a compute operation to be performed on the one or more data operands, wherein one or more resulting data operands are to be written to the destination memory region.

28. The computing system of claim 27, wherein the compute operation comprises:

a complement operation;
a bitwise operation;
an add operation; or
a multiply operation.

29. The computing system of claim 26, wherein the instruction is:

a DMA initialize instruction, wherein the DMA initialize instruction is to initialize the destination memory region with data;
a DMA initialize stride instruction, wherein the DMA initialize stride instruction is to initialize the destination memory region with data, wherein the destination memory region is a strided memory region;
a DMA copy stride instruction, wherein the DMA copy stride instruction is to copy data from a source memory region to the destination memory region, wherein at least one of the source memory region or the destination memory region is a strided memory region;
a DMA scatter instruction, wherein the DMA scatter instruction is to scatter data across the destination memory region, wherein the destination memory region is a non-contiguous memory region; or
a DMA gather instruction, wherein the DMA gather instruction is to gather data from a source memory region and store the data in the destination memory region, wherein the source memory region is a non-contiguous memory region and the destination memory region is a contiguous memory region.
Patent History
Publication number: 20220222075
Type: Application
Filed: Apr 2, 2022
Publication Date: Jul 14, 2022
Inventors: Robert S. Pawlowski (Beaverton, OR), Scott N. Cline (Portland, OR), Jason Howard (Portland, OR), Joshua B. Fryman (Corvallis, OR), Ivan B. Ganev (Portland, OR)
Application Number: 17/712,104
Classifications
International Classification: G06F 9/30 (20060101); G06F 12/02 (20060101); G06F 12/1081 (20060101);