Abstract: A data processing apparatus, a data processing method and a computer program product are disclosed. The data processing apparatus includes a processor core operable to execute sequences of instructions from a plurality of program threads. The processor core has a plurality of pipeline stages, one of which is an instruction schedule stage having scheduling logic operable, in response to a thread pause instruction within a program thread, to prevent scheduling of instructions from that program thread following the thread pause instruction and instead to schedule instructions from another program thread for execution within the plurality of pipeline stages.
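The scheduling behaviour described in this abstract can be illustrated with a small simulation. This is only a sketch under assumed details: the thread representation, the "pause" mnemonic, and the round-robin policy are illustrative choices, not the patented design.

```python
# Sketch: a scheduler that stops issuing from a thread once it executes a
# "pause" instruction, and issues from other ready threads instead.
from collections import deque

class Thread:
    def __init__(self, name, instructions):
        self.name = name
        self.instructions = deque(instructions)  # e.g. "add", "pause"
        self.paused = False

def schedule(threads):
    """Round-robin over threads; a 'pause' instruction prevents further
    scheduling from that thread (resumption is not modelled here)."""
    issued = []
    while any(t.instructions and not t.paused for t in threads):
        for t in threads:
            if t.paused or not t.instructions:
                continue  # skip paused/finished threads, schedule others
            instr = t.instructions.popleft()
            if instr == "pause":
                t.paused = True  # stop scheduling this thread
            else:
                issued.append((t.name, instr))
    return issued

t0 = Thread("T0", ["add", "pause", "mul"])  # "mul" is never issued
t1 = Thread("T1", ["load", "store"])
print(schedule([t0, t1]))  # -> [('T0', 'add'), ('T1', 'load'), ('T1', 'store')]
```

Note that after the pause, only T1's remaining instructions are issued, matching the abstract's description of scheduling instructions from another program thread instead.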
Abstract: A memory unit and method are disclosed. The memory unit comprises: at least one controller interfaced with at least one corresponding persistent memory device operable to store files in accordance with a file system; and a file mapping unit operable, in response to a virtual file access request from a memory management unit of a processor, the virtual file access request having a virtual address, within a virtual address space associated with one of the files, identifying data to be accessed, to map the virtual address to a physical address of the data within that file using pre-stored mapping information, and to issue a physical access request having the physical address to access the data within that file.
Type:
Grant
Filed:
November 22, 2013
Date of Patent:
October 17, 2017
Assignee:
Swarm64 AS
Inventors:
Thomas Richter, Eivind Liland, David Geier
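The file mapping described in the memory-unit abstract above can be sketched as a per-page translation from a file's virtual address space to device addresses. The page size, class names, and mapping table below are assumptions for illustration, not details taken from the patent.

```python
# Sketch: translate a virtual address inside a file's virtual address
# space to a physical address on the persistent memory device, using
# pre-stored per-page mapping information.

PAGE_SIZE = 4096  # assumed page granularity

class FileMapping:
    def __init__(self, base_va, page_to_phys):
        self.base_va = base_va            # start of the file's VA space
        self.page_to_phys = page_to_phys  # file page index -> device page address

    def translate(self, va):
        """Map a virtual address to its physical address within the file."""
        offset = va - self.base_va
        page, page_off = divmod(offset, PAGE_SIZE)
        return self.page_to_phys[page] + page_off

# A file mapped at VA 0x10000 whose pages live at scattered device addresses.
m = FileMapping(0x10000, {0: 0x8000_0000, 1: 0x8000_9000})
print(hex(m.translate(0x10010)))  # page 0, offset 0x10 -> 0x80000010
print(hex(m.translate(0x11004)))  # page 1, offset 4    -> 0x80009004
```

The physical access request the abstract mentions would then carry the returned address; the mapping table stands in for the "pre-stored mapping information".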
Abstract: A data processing apparatus, a data processing method and a computer program product are disclosed. In an embodiment, the data processing apparatus comprises: a processor comprising a plurality of parallel lanes for parallel processing of sets of threads, each lane comprising a plurality of pipelined stages, the pipelined stages of each lane being operable to process instructions from the sets of threads; and scheduling logic operable to schedule instructions for processing by the lanes, the scheduling logic being operable to identify that one of the sets of threads being processed is to be split into a plurality of sub-sets of threads and to schedule at least two of the plurality of sub-sets of threads for processing by different pipelined stages concurrently.
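The thread-set splitting in the abstract above can be illustrated by partitioning one set of threads into sub-sets, for example at a divergent branch, so each sub-set can be scheduled independently. The predicate and thread representation are illustrative assumptions; the patent does not specify this mechanism.

```python
# Sketch: split one set of threads into sub-sets (e.g. by branch outcome)
# so the scheduler can issue the sub-sets into different pipeline stages
# concurrently.

def split_threads(threads, taken_predicate):
    """Partition a thread set into two sub-sets by a branch predicate."""
    taken = [t for t in threads if taken_predicate(t)]
    not_taken = [t for t in threads if not taken_predicate(t)]
    return [s for s in (taken, not_taken) if s]  # drop empty sub-sets

# Eight threads diverge on whether their value is even; each resulting
# sub-set could then be scheduled for processing independently.
threads = list(range(8))
print(split_threads(threads, lambda t: t % 2 == 0))
# -> [[0, 2, 4, 6], [1, 3, 5, 7]]
```

If all threads agree on the branch, only one sub-set results and no split is needed, which is why empty sub-sets are dropped.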
Abstract: A memory unit and method are disclosed. The memory unit comprises: at least one controller interfaced with at least one corresponding persistent memory device operable to store files in accordance with a file system; and a file mapping unit operable, in response to a virtual file access request from a memory management unit of a processor, the virtual file access request having a virtual address, within a virtual address space associated with one of the files, identifying data to be accessed, to map the virtual address to a physical address of the data within that file using pre-stored mapping information, and to issue a physical access request having the physical address to access the data within that file.
Type:
Application
Filed:
November 22, 2013
Publication date:
May 28, 2015
Applicant:
Swarm64 AS
Inventors:
Thomas Richter, Eivind Liland, David Geier