Patents by Inventor Elliot Maurice Simon ROSEMARINE
Elliot Maurice Simon ROSEMARINE has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240378084
Abstract: According to the present techniques there is provided a method of operating a data processor unit to generate processing tasks, the data processor unit comprising: a control circuit to receive, from a host processor unit, a request for the data processor unit to perform a processing job; an iterator unit to process the request and generate a workload comprising one or more tasks for the requested job; one or more execution units to perform the one or more tasks; and storage to store system information indicative of a status of at least one component of the data processor unit. The method comprises: receiving, at the control circuit, a first request to perform a first processing job; and processing, at the iterator unit, the first request and generating a workload comprising one or more tasks for the first processing job based on or in response to the system information in storage, wherein at least one characteristic of the workload is dependent on the system information.
Type: Application
Filed: May 11, 2023
Publication date: November 14, 2024
Inventors: Elliot Maurice Simon Rosemarine, Alexander Eugene Chalfin, Ozgur Tasdizen, Tord Kvestad Øygard
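As a purely illustrative sketch (not the patented design), the Python below splits a job into tasks whose count and size depend on stored system status such as idle execution units and free buffer space; all names (SystemInfo, generate_workload) and the 64-byte work-item size are assumptions of this example.

```python
# Hypothetical "iterator unit": task granularity depends on system information.
from dataclasses import dataclass


@dataclass
class SystemInfo:
    idle_execution_units: int   # status of data-processor components (assumed fields)
    free_buffer_bytes: int


def generate_workload(job_size: int, info: SystemInfo) -> list[tuple[int, int]]:
    """Split a job of `job_size` work items into (start, length) tasks.

    At least one characteristic of the workload (here, the number and size of
    tasks) depends on the stored system information.
    """
    tasks = max(1, info.idle_execution_units)
    max_items_per_task = max(1, info.free_buffer_bytes // 64)  # 64 B per item (assumed)
    tasks = max(tasks, -(-job_size // max_items_per_task))     # ceiling division

    base, rem = divmod(job_size, tasks)
    workload, start = [], 0
    for i in range(tasks):
        length = base + (1 if i < rem else 0)
        if length:
            workload.append((start, length))
            start += length
    return workload


if __name__ == "__main__":
    print(generate_workload(1000, SystemInfo(idle_execution_units=3, free_buffer_bytes=8192)))
```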
-
Publication number: 20240370301
Abstract: The present disclosure relates to a system, method and non-transitory computer-readable storage medium for handling data. From a directed acyclic graph (DAG) of operations on input data, a sub-graph of operations is identified and issued as task data to be executed by a processing module, wherein each of the operations in the sub-graph maps to a corresponding execution unit of the processing module of the system and wherein each connection between operations maps to a corresponding storage element of the processing module. The sub-graph is identified such that a simulation of an execution of the operations of the candidate sub-graph, according to a determined size of the processing unit of said input data, shows that the processing module can execute the operations of the sub-graph such that memory constraints of the processing module are met and read-write operations to memory external to the processing module are avoided or reduced.
Type: Application
Filed: April 19, 2024
Publication date: November 7, 2024
Applicant: Arm Limited
Inventors: Elliot Maurice Simons Rosemarine, Rune Holm
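A minimal sketch of simulating a candidate sub-graph against a local-memory budget: the greedy prefix selection over a topological order, the conservative liveness model, and all names are assumptions of this example, not the claimed method.

```python
# Grow a candidate sub-graph of a DAG and keep it only while a simulated
# execution fits in the processing module's local storage, so intermediate
# results never spill to external memory.


def simulate_fits(ops: list[str], buffer_bytes: dict[str, int], local_capacity: int) -> bool:
    """Simulate executing `ops` in order; each op's output buffer stays live
    until the end (a deliberately conservative model)."""
    live = 0
    for op in ops:
        live += buffer_bytes[op]
        if live > local_capacity:
            return False
    return True


def select_subgraph(topo_order: list[str], buffer_bytes: dict[str, int],
                    local_capacity: int) -> list[str]:
    """Return the longest prefix of the topological order whose simulated
    execution meets the local-memory constraint."""
    chosen: list[str] = []
    for op in topo_order:
        candidate = chosen + [op]
        if simulate_fits(candidate, buffer_bytes, local_capacity):
            chosen = candidate
        else:
            break  # issue `chosen` as one task; remaining ops go to later tasks
    return chosen


if __name__ == "__main__":
    order = ["conv", "relu", "pool", "fc"]
    sizes = {"conv": 4096, "relu": 4096, "pool": 1024, "fc": 512}
    print(select_subgraph(order, sizes, local_capacity=8192))  # ['conv', 'relu']
```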
-
Publication number: 20240311947
Abstract: A processor, method and non-transitory computer-readable storage medium for handling data, by obtaining task data describing a task to be executed in the form of a plurality of operations on data, the task data further defining an operation space of said data, and analyzing each of the operations to define transformation data comprising transformation instructions representing a transform into an associated operation-specific local space. In case the transformation instructions to get to the operation-specific local space for an operation produce fewer dimensions than the operation space, one or more operation-specific arguments are stored in a data field corresponding to a dimension not produced by the transformation instructions in the transformation data corresponding to the operation.
Type: Application
Filed: March 15, 2023
Publication date: September 19, 2024
Inventors: Rune HOLM, Elliot Maurice Simon ROSEMARINE
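A rough illustration of the idea of reusing fields for dimensions the transform does not produce to carry operation-specific arguments; the four-dimensional operation space, the field layout and every name here are hypothetical, not the claimed encoding.

```python
# Pack per-operation arguments into the data fields of unused dimensions.
OPERATION_SPACE_DIMS = 4                                # e.g. (n, h, w, c) -- assumed


def encode(transform_dims: list[int], args: list[int]) -> list[tuple[str, int]]:
    """Fill one field per operation-space dimension: a transform entry where the
    local space uses that dimension, otherwise an operation-specific argument."""
    fields: list[tuple[str, int]] = []
    spare_args = iter(args)
    for dim in range(OPERATION_SPACE_DIMS):
        if dim in transform_dims:
            fields.append(("transform", dim))
        else:
            fields.append(("argument", next(spare_args, 0)))
    return fields


# A pooling-style operation whose local space only uses dims 1 and 2 (h, w);
# the two unused fields carry its kernel size and stride as arguments.
print(encode(transform_dims=[1, 2], args=[3, 2]))
# [('argument', 3), ('transform', 1), ('transform', 2), ('argument', 2)]
```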
-
Publication number: 20240256332
Abstract: Briefly, example methods, apparatuses, and/or articles of manufacture are disclosed that may facilitate and/or support scheduling tasks for one or more hardware components of a computing device.
Type: Application
Filed: January 27, 2023
Publication date: August 1, 2024
Inventor: Elliot Maurice Simon Rosemarine
-
Publication number: 20240256646
Abstract: Briefly, example methods, apparatuses, and/or articles of manufacture are disclosed that may facilitate and/or support assignment, configuration and/or management of one or more hardware components of a computing device.
Type: Application
Filed: January 27, 2023
Publication date: August 1, 2024
Inventor: Elliot Maurice Simon Rosemarine
-
Publication number: 20240248721
Abstract: A method and apparatus for distributing operations for execution. Input data is received and is subdivided into portions, each comprising a first and a second sub-portion. A first operation and a second operation are received. Dependencies between the first and second operations are identified. For each portion, the first operation is issued for execution on the first sub-portion to produce a first output sub-portion, and completion is tracked. The first operation is issued for execution on the second sub-portion to produce a second output sub-portion. Depending upon satisfaction of the dependencies in respect of the first sub-portion, either the second operation to be executed on the first output sub-portion is issued, if the dependencies are met, or the second operation to be executed on the first output sub-portion is stalled, if the dependencies are not met. This is repeated for each subsequent portion.
Type: Application
Filed: January 16, 2024
Publication date: July 25, 2024
Inventors: Rune HOLM, Alexander Eugene CHALFIN, Elliot Maurice Simon ROSEMARINE
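An illustrative sketch under assumed names (not the patented scheduler): operation 1 runs on both sub-portions of each portion, and operation 2 on an output sub-portion is issued only once its tracked dependency is satisfied, otherwise it is stalled and released later.

```python
# Dependency-gated issue/stall of a second operation per sub-portion.
from collections import deque


def schedule(portions: int) -> list[str]:
    log: list[str] = []
    completed_op1: set[tuple[int, str]] = set()
    stalled: deque[tuple[int, str]] = deque()

    def try_issue_op2(portion: int, sub: str) -> None:
        if (portion, sub) in completed_op1:
            log.append(f"issue  op2 on portion {portion} sub {sub}")
        else:
            log.append(f"stall  op2 on portion {portion} sub {sub}")
            stalled.append((portion, sub))

    for p in range(portions):
        log.append(f"issue  op1 on portion {p} sub A")
        completed_op1.add((p, "A"))          # completion of op1 on sub A is tracked here
        log.append(f"issue  op1 on portion {p} sub B")
        try_issue_op2(p, "A")                # dependency met -> issued
        try_issue_op2(p, "B")                # dependency not yet met -> stalled

    # Later, op1 on the B sub-portions completes and the stalled work is released.
    while stalled:
        p, sub = stalled.popleft()
        completed_op1.add((p, sub))
        log.append(f"resume op2 on portion {p} sub {sub}")
    return log


if __name__ == "__main__":
    print("\n".join(schedule(portions=2)))
```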
-
Publication number: 20240248755
Abstract: A processor comprising a handling unit and a plurality of components each configured to execute a function. The handling unit can receive a task comprising operations on data in a coordinate space having N dimensions, and receive a data structure describing execution of the task and comprising a partially ordered set of data items, each associated with instructions usable by the plurality of components when executing the task. Each data item is associated with a component among the plurality of components, and each data item indicates the dimensions of the coordinate space for which changes of coordinate cause the function of the associated component to execute, and the dimensions of the coordinate space for which changes of coordinate cause the function of the associated component to store data ready to be used by another component. The handling unit iterates over the coordinate space and executes the task using the partially ordered set of data items.
Type: Application
Filed: January 20, 2023
Publication date: July 25, 2024
Inventors: Rune HOLM, Jens OLSON, Jared Corey SMOLENS, Dominic Hugo SYMES, Elliot Maurice Simon ROSEMARINE
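A minimal sketch of the iteration scheme with assumed names (not the claimed hardware): each component lists the dimensions whose coordinate changes trigger it to execute and the dimensions whose changes make it hand data on to the next component.

```python
# Walk an N-dimensional coordinate space; fire each component on changes in
# its trigger dimensions.
from itertools import product


def run_task(shape: tuple[int, ...], components: list[dict]) -> list[str]:
    log: list[str] = []
    prev = None
    for coord in product(*(range(n) for n in shape)):      # iterate the coordinate space
        changed = (set(range(len(shape))) if prev is None
                   else {d for d in range(len(shape)) if coord[d] != prev[d]})
        for comp in components:                             # partially ordered data items
            if changed & comp["execute_dims"]:
                log.append(f"{comp['name']}: execute at {coord}")
            if changed & comp["handoff_dims"]:
                log.append(f"{comp['name']}: store output for next component at {coord}")
        prev = coord
    return log


if __name__ == "__main__":
    comps = [
        {"name": "dma", "execute_dims": {0},    "handoff_dims": {0}},
        {"name": "mac", "execute_dims": {0, 1}, "handoff_dims": {1}},
    ]
    for line in run_task((2, 3), comps)[:6]:
        print(line)
```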
-
Publication number: 20240248754
Abstract: A processor to generate position data indicative of a position within a compressed data stream, wherein, previously, in executing a task, data of the compressed data stream ending at the position has been read by the processor from storage storing the compressed data stream. After reading the data, the processor reads further data of the compressed data stream from the storage, in executing the task, the further data located beyond the position within the compressed data stream. After reading the further data, the processor reads, based on the position data, a portion of the compressed data stream from the storage, in executing the task, starting from the position within the compressed data stream. The processor decompresses the portion of the compressed data stream to generate decompressed data, in executing the task.
Type: Application
Filed: January 20, 2023
Publication date: July 25, 2024
Inventors: Elliot Maurice Simon ROSEMARINE, Jared Corey SMOLENS, Rune HOLM, John Wakefield BROTHERS, III, Jens OLSON
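A rough illustration of the position-data idea: zlib and a chunked stream layout stand in for whatever codec and stream format the hardware actually uses; the assumption that recorded positions fall on chunk boundaries is specific to this sketch.

```python
# Record a position in a compressed stream, read past it, then return to it
# and decompress from there.
import zlib

# Build a stream from independently compressed chunks so a recorded
# chunk-boundary offset is a valid restart point (an assumption of this sketch).
chunks = [bytes([i]) * 100 for i in range(4)]
compressed = [zlib.compress(c) for c in chunks]
stream = b"".join(compressed)
boundaries, off = [], 0
for c in compressed:
    off += len(c)
    boundaries.append(off)

# 1) Read data ending at a position and record that position.
position = boundaries[1]              # position data: end of chunk 1
_ = stream[:position]

# 2) Read further data located beyond the recorded position.
_ = stream[position:boundaries[2]]

# 3) Later, read from the recorded position again and decompress that portion.
portion = stream[position:boundaries[2]]
print(zlib.decompress(portion)[:5])   # b'\x02\x02\x02\x02\x02'
```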
-
Publication number: 20240248753
Abstract: A processor to receive a task to be executed, the task comprising a task-based parameter associated with the task, for use in determining a position, within an array of data descriptors, of a particular data descriptor of a particular portion of data to be processed in executing the task. Each of the data descriptors in the array of data descriptors has a predetermined size and is indicative of a location in a storage system of a respective portion of data. The processor derives, based on the task, array location data indicative of a location in the storage system of a predetermined data descriptor, and obtains the particular data descriptor based on the array location data and the task-based parameter. The processor obtains the particular portion of data based on the particular data descriptor and processes the particular portion of data in executing the task.
Type: Application
Filed: January 20, 2023
Publication date: July 25, 2024
Inventors: Elliot Maurice Simon ROSEMARINE, Alexander Eugene CHALFIN, Rune HOLM
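A minimal sketch with a hypothetical descriptor layout (not the patented format): fixed-size descriptors sit in a contiguous array, and a task-based index combined with the array's base location selects the descriptor that in turn locates the data portion.

```python
# Locate a data portion via a fixed-size descriptor array.
import struct

DESCRIPTOR_SIZE = 8                    # predetermined size: offset:u32, length:u32 (assumed)

# A toy "storage system": descriptor array followed by the data it describes.
payloads = [b"alpha", b"bravo", b"charlie"]
data_base = DESCRIPTOR_SIZE * len(payloads)
storage = bytearray()
offset = data_base
for p in payloads:
    storage += struct.pack("<II", offset, len(p))   # one descriptor per portion
    offset += len(p)
for p in payloads:
    storage += p


def fetch_portion(array_base: int, task_index: int) -> bytes:
    """Locate descriptor `task_index` relative to `array_base`, then the data."""
    desc_at = array_base + task_index * DESCRIPTOR_SIZE
    off, length = struct.unpack_from("<II", storage, desc_at)
    return bytes(storage[off:off + length])


print(fetch_portion(array_base=0, task_index=1))     # b'bravo'
```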
-
Publication number: 20240248764
Abstract: A memory unit configured for handling task data, the task data describing a task to be executed as a directed acyclic graph of operations, wherein each operation maps to a corresponding execution unit, and wherein each connection between operations in the acyclic graph maps to a corresponding storage element of the execution unit. The task data defines an operation space representing the dimensions of a multi-dimensional arrangement of the connected operations to be executed, represented by the data blocks. The memory unit is configured to receive a sequence of processing requests comprising the one or more data blocks, with each data block assigned a priority value and comprising a block command. The memory unit is configured to arbitrate between the data blocks based upon the priority value and block command to prioritize the sequence of processing requests, wherein the processing requests include writing data to, or reading data from, storage.
Type: Application
Filed: May 12, 2023
Publication date: July 25, 2024
Inventors: Rune HOLM, Jens OLSON, Elliot Maurice Simon ROSEMARINE, Jared SMOLENS
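A sketch of priority-based arbitration with assumed conventions (lower value served first, arrival order as tie-breaker); the class and field names are invented for this example, not the claimed memory unit.

```python
# Arbitrate pending data-block requests by priority value and block command.
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class BlockRequest:
    priority: int                        # lower value = served first (assumption)
    seq: int                             # arrival order breaks ties
    command: str = field(compare=False)  # "read" or "write"
    block_id: str = field(compare=False)


class Arbiter:
    def __init__(self) -> None:
        self._heap: list[BlockRequest] = []
        self._seq = 0

    def submit(self, priority: int, command: str, block_id: str) -> None:
        heapq.heappush(self._heap, BlockRequest(priority, self._seq, command, block_id))
        self._seq += 1

    def service(self) -> list[str]:
        served = []
        while self._heap:
            r = heapq.heappop(self._heap)
            served.append(f"{r.command} {r.block_id} (prio {r.priority})")
        return served


arb = Arbiter()
arb.submit(2, "write", "blk0")
arb.submit(0, "read", "blk1")
arb.submit(1, "write", "blk2")
print(arb.service())   # blk1 first, then blk2, then blk0
```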
-
Publication number: 20240249127
Abstract: A data processing system comprising a processor (306) that is configured to perform neural network processing, having one or more execution units (213, 214) configured to perform processing operations for neural network processing and a control circuit (217) configured to distribute processing tasks to the execution unit or units, and a graphics processor (304) comprising a programmable execution unit (203) operable to execute processing programs to perform processing operations. The control circuit (217) of the processor (306) that is configured to perform neural network processing is configured to, in response to an indication of particular neural network processing to be performed provided to the control circuit, cause the programmable execution unit (203) of the graphics processor to execute a program to perform the indicated neural network processing.
Type: Application
Filed: July 8, 2023
Publication date: July 25, 2024
Applicant: Arm Limited
Inventors: Thomas James Cooksey, Elliot Maurice Simon Rosemarine
-
Publication number: 20240160889
Abstract: A data processor is disclosed that includes a processing unit operable to process neural network data, and a cache system operable to cache neural network data for the processing unit. When neural network data is required for processing, the processing unit issues a request for the neural network data to the cache system, and if the requested data is not cached in the cache system, a compression codec is caused to decode a part of a compressed neural network data stream that encodes the requested neural network data so as to provide the requested neural network data.
Type: Application
Filed: November 14, 2022
Publication date: May 16, 2024
Applicant: Arm Limited
Inventor: Elliot Maurice Simon Rosemarine
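A minimal sketch of decode-on-miss under assumed names: zlib stands in for the real codec, a dict stands in for the cache RAM, and the per-part compression of the weight stream is an assumption of this example.

```python
# On a cache miss, decode just the part of the compressed stream that encodes
# the requested neural network data, then serve later requests from the cache.
import zlib

parts = {name: zlib.compress(data) for name, data in
         {"layer0.weights": b"\x01" * 64, "layer1.weights": b"\x02" * 64}.items()}


class WeightCache:
    def __init__(self, compressed_parts: dict[str, bytes]) -> None:
        self._compressed = compressed_parts
        self._cache: dict[str, bytes] = {}

    def request(self, name: str) -> bytes:
        if name not in self._cache:                       # cache miss
            encoded = self._compressed[name]
            self._cache[name] = zlib.decompress(encoded)  # codec decodes that part
        return self._cache[name]


cache = WeightCache(parts)
print(len(cache.request("layer1.weights")))   # 64 (decoded on miss)
print(len(cache.request("layer1.weights")))   # 64 (served from cache)
```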
-
Patent number: 11907855
Abstract: A computer implemented method of storing and retrieving feature map data of a neural network, the method comprising receiving a first portion of feature map data from local storage, selecting a first set of subportions of the first portion of feature map data, compressing the subportions to produce a first plurality of sections of compressed feature map data, and instructing the storage of the sections into external storage. The method also comprises receiving a second plurality of sections of compressed feature map data from the external storage, decompressing the sections to produce a second set of subportions of a second portion of feature map data, and storing the second portion of feature map data in local storage. The first and second sets of subportions each correspond to a predetermined format of subdivision, and the method comprises selecting the predetermined format of subdivision from a plurality of predetermined formats of subdivision.
Type: Grant
Filed: March 30, 2020
Date of Patent: February 20, 2024
Assignee: Arm Limited
Inventors: Erik Persson, Stefan Johannes Frid, Elliot Maurice Simon Rosemarine
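A rough sketch of the round trip with assumed names and zlib as a stand-in codec (not the granted claims): a feature-map portion is split according to one of several predetermined subdivision formats, each sub-portion becomes a compressed section in external storage, and the reverse path reassembles the portion locally.

```python
# Compress sub-portions to external storage and decompress them back.
import zlib

SUBDIVISION_FORMATS = {"coarse": 4, "fine": 16}     # sections per portion (assumed)


def store(portion: bytes, fmt: str) -> list[bytes]:
    n = SUBDIVISION_FORMATS[fmt]
    step = -(-len(portion) // n)                     # ceiling division
    subportions = [portion[i:i + step] for i in range(0, len(portion), step)]
    return [zlib.compress(s) for s in subportions]   # sections -> external storage


def load(sections: list[bytes]) -> bytes:
    return b"".join(zlib.decompress(s) for s in sections)   # back to local storage


feature_map = bytes(range(256)) * 4
sections = store(feature_map, fmt="coarse")
assert load(sections) == feature_map
print(f"{len(feature_map)} bytes -> {sum(map(len, sections))} bytes in {len(sections)} sections")
```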
-
Patent number: 11663107
Abstract: A computer implemented method, performed in a data processing system comprising a performance monitoring unit. The method comprises receiving a set of computer-readable instructions to be executed by the data processing system to implement at least a portion of a neural network, wherein one or more of the instructions is labeled with one or more performance monitoring labels based upon one or more features of the neural network. The method further comprises configuring the performance monitoring unit to count one or more events occurring in one or more components of the data processing system based on the one or more performance monitoring labels.
Type: Grant
Filed: February 21, 2020
Date of Patent: May 30, 2023
Assignee: ARM LIMITED
Inventors: Elliot Maurice Simon Rosemarine, Rachel Jean Trimble
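An illustrative sketch with invented names (not the claimed PMU interface): instructions carry labels derived from features of the network, and events are counted against whichever label is active when they occur.

```python
# Count events per performance-monitoring label.
from collections import Counter, defaultdict


class PerformanceMonitoringUnit:
    def __init__(self) -> None:
        self.counters: dict[str, Counter] = defaultdict(Counter)
        self.active_label = "unlabeled"

    def set_label(self, label: str) -> None:
        self.active_label = label

    def count(self, event: str, n: int = 1) -> None:
        self.counters[self.active_label][event] += n


pmu = PerformanceMonitoringUnit()
instructions = [("load", "conv1"), ("mac", "conv1"), ("mac", "conv1"), ("store", "fc")]
for opcode, label in instructions:       # label derived from a network feature
    pmu.set_label(label)
    pmu.count("instructions_executed")
    if opcode == "mac":
        pmu.count("mac_operations")

print(dict(pmu.counters["conv1"]))   # {'instructions_executed': 3, 'mac_operations': 2}
```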
-
Patent number: 11656905
Abstract: A neural processing unit comprises an input module for receiving a transaction from at least one program, each program having an associated program privilege level, and a plurality of delegation pages, each delegation page comprising a delegation management unit associated with a page privilege level. The neural processing unit also comprises at least one resource arranged to be accessed by at least one of the delegation pages, and a processing module arranged to process the transaction. Processing the transaction comprises allocating each transaction to a delegation page based on the program privilege level and the page privilege level. The program is arranged to instruct the delegation management unit of a first delegation page, having a first page privilege level, to delegate access to the at least one resource to a second delegation page having a second page privilege level, wherein the first page privilege level is higher than the second page privilege level.
Type: Grant
Filed: August 9, 2019
Date of Patent: May 23, 2023
Assignee: Arm Limited
Inventor: Elliot Maurice Simon Rosemarine
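A sketch only, with assumed names and an assumed "higher number = higher privilege" convention (not the claimed NPU): transactions are allocated to a page whose privilege level the issuing program is allowed to use, and a higher-privilege page may delegate a resource down to a lower-privilege page.

```python
# Privilege-based allocation of transactions and downward resource delegation.
class DelegationPage:
    def __init__(self, privilege: int) -> None:
        self.privilege = privilege
        self.accessible_resources: set[str] = set()

    def delegate(self, resource: str, target: "DelegationPage") -> None:
        if self.privilege <= target.privilege:
            raise PermissionError("can only delegate to a lower-privilege page")
        if resource not in self.accessible_resources:
            raise PermissionError("page has no access to this resource")
        target.accessible_resources.add(resource)


def allocate(transaction_privilege: int, pages: list[DelegationPage]) -> DelegationPage:
    """Allocate a transaction to the highest-privilege page it may use."""
    eligible = [p for p in pages if p.privilege <= transaction_privilege]
    return max(eligible, key=lambda p: p.privilege)


high, low = DelegationPage(privilege=2), DelegationPage(privilege=1)
high.accessible_resources.add("weight_sram")
high.delegate("weight_sram", low)                        # first page -> second page
print(allocate(1, [high, low]).accessible_resources)     # {'weight_sram'} via the low page
```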
-
Publication number: 20230084603
Abstract: Aspects of the present disclosure relate to apparatus comprising execution circuitry, comprising at least one execution unit to execute program instructions, and control circuitry. The control circuitry receives a stream of processing instructions and issues each received instruction to one of said at least one execution unit. Responsive to determining that a first type of context switch is to be performed from an initial context to a new context, issuing continues until a pre-emption point in the stream of processing instructions is reached. Responsive to reaching the pre-emption point, state information is stored and the new context is switched to. Responsive to determining that a context switch is to be performed to return from the new context to the initial context, the processing status is restored from the state information and the stream of processing instructions is continued.
Type: Application
Filed: September 14, 2021
Publication date: March 16, 2023
Inventors: Eric KUNZE, Jared Corey SMOLENS, Aaron DEBATTISTA, Elliot Maurice Simon ROSEMARINE
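A minimal sketch with assumed names (not the claimed control circuitry): after a switch request, issuing continues until a marked pre-emption point, state is saved, the new context runs, and the saved state lets the original stream resume where it left off.

```python
# Continue issuing to a pre-emption point, save state, run the new context, resume.
def run(stream, switch_request_at, new_context):
    """`stream` is a list of instructions; 'PRE-EMPT' marks a pre-emption point."""
    log, saved_state, i = [], None, 0
    while i < len(stream):
        log.append(f"issue {stream[i]}")
        if saved_state is None and i >= switch_request_at and stream[i] == "PRE-EMPT":
            saved_state = {"next_instruction": i + 1}      # store state information
            log.extend(f"[new ctx] {op}" for op in new_context)
            i = saved_state["next_instruction"]            # restore and continue
            log.append(f"resume at instruction {i}")
            continue
        i += 1
    return log


program = ["add", "mul", "PRE-EMPT", "load", "store"]
for line in run(program, switch_request_at=1, new_context=["memset", "memcpy"]):
    print(line)
```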
-
Publication number: 20220365853
Abstract: A method of performing fault detection during computations relating to a neural network comprising a first neural network layer and a second neural network layer in a data processing system, the method comprising scheduling computations onto data processing resources for the execution of the first neural network layer and the second neural network layer, wherein the scheduling includes: for a given one of the first neural network layer and the second neural network layer, scheduling a respective given one of a first computation and a second computation as a non-duplicated computation, in which the given computation is at least initially scheduled to be performed only once during the execution of the given neural network layer; and for the other of the first and second neural network layers, scheduling the respective other of the first and second computations as a duplicated computation.
Type: Application
Filed: June 15, 2022
Publication date: November 17, 2022
Inventors: Andrew Brian Thomas HOPKINS, Graeme Leslie INGRAM, Elliot Maurice Simon ROSEMARINE, Antonio PRIORE
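A sketch only, with assumed names (not the claimed scheduler): one layer's computation is scheduled as duplicated (run twice and compared, so a transient fault is caught), while the other is scheduled as non-duplicated and at least initially runs only once.

```python
# Schedule one layer duplicated and the other non-duplicated, then execute.
def schedule_layers(layers, duplicated_layer):
    plan = []
    for layer in layers:
        if layer == duplicated_layer:
            plan.append((layer, "duplicated"))      # run twice, compare results
        else:
            plan.append((layer, "non-duplicated"))  # run once (initially)
    return plan


def execute(plan, compute):
    for layer, mode in plan:
        result = compute(layer)
        if mode == "duplicated":
            check = compute(layer)                  # second, redundant execution
            status = "ok" if check == result else "FAULT DETECTED"
        else:
            status = "unchecked"
        print(f"{layer}: {mode} -> {status}")


plan = schedule_layers(["layer1", "layer2"], duplicated_layer="layer2")
execute(plan, compute=lambda layer: sum(ord(c) for c in layer))
```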
-
Patent number: 11475287
Abstract: There is provided a neural processing unit (NPU) including a primary processing node containing primary control registers and processing circuitry configured to write control data to the primary control registers, and multiple secondary processing nodes each having respective secondary control registers and being configured to process data in accordance with control data stored by the respective secondary control registers. The NPU also includes a bus interface for transmitting data between the primary processing node and the plurality of secondary processing nodes. The primary processing node is configured to transmit first control data to a given secondary control register of each of the plurality of secondary processing nodes.
Type: Grant
Filed: July 11, 2019
Date of Patent: October 18, 2022
Assignee: Arm Limited
Inventor: Elliot Maurice Simon Rosemarine
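A minimal sketch under assumed names (not the NPU's actual register map): the primary node writes control data once and transmits it over a shared "bus" so the same value lands in a given control register of every secondary node, whose processing then follows its registers.

```python
# Broadcast control data from a primary node to every secondary node's registers.
class SecondaryNode:
    def __init__(self, node_id: int) -> None:
        self.node_id = node_id
        self.control_registers: dict[str, int] = {}

    def process(self, value: int) -> int:
        scale = self.control_registers.get("scale", 1)   # behaviour set by register
        return value * scale


class PrimaryNode:
    def __init__(self, secondaries: list[SecondaryNode]) -> None:
        self.control_registers: dict[str, int] = {}
        self.secondaries = secondaries

    def broadcast(self, register: str, value: int) -> None:
        self.control_registers[register] = value          # primary's own copy
        for node in self.secondaries:                     # bus transfer to all nodes
            node.control_registers[register] = value


secondaries = [SecondaryNode(i) for i in range(4)]
primary = PrimaryNode(secondaries)
primary.broadcast("scale", 3)
print([node.process(10) for node in secondaries])   # [30, 30, 30, 30]
```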
-
Publication number: 20210304012
Abstract: A computer implemented method of storing and retrieving feature map data of a neural network, the method comprising receiving a first portion of feature map data from local storage, selecting a first set of subportions of the first portion of feature map data, compressing the subportions to produce a first plurality of sections of compressed feature map data, and instructing the storage of the sections into external storage. The method also comprises receiving a second plurality of sections of compressed feature map data from the external storage, decompressing the sections to produce a second set of subportions of a second portion of feature map data, and storing the second portion of feature map data in local storage. The first and second sets of subportions each correspond to a predetermined format of subdivision, and the method comprises selecting the predetermined format of subdivision from a plurality of predetermined formats of subdivision.
Type: Application
Filed: March 30, 2020
Publication date: September 30, 2021
Inventors: Erik PERSSON, Stefan Johannes FRID, Elliot Maurice Simon ROSEMARINE
-
Publication number: 20210263826
Abstract: A computer implemented method, performed in a data processing system comprising a performance monitoring unit. The method comprises receiving a set of computer-readable instructions to be executed by the data processing system to implement at least a portion of a neural network, wherein one or more of the instructions is labeled with one or more performance monitoring labels based upon one or more features of the neural network. The method further comprises configuring the performance monitoring unit to count one or more events occurring in one or more components of the data processing system based on the one or more performance monitoring labels.
Type: Application
Filed: February 21, 2020
Publication date: August 26, 2021
Inventors: Elliot Maurice Simon ROSEMARINE, Rachel Jean TRIMBLE