Patents by Inventor Andreas Lars SANDBERG

Andreas Lars SANDBERG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200341901
    Abstract: A system, apparatus and method for secure functions and manipulating cache line data. The method includes generating cache block addresses from a subset of bits, i.e. tag bits, of a cache address and hashing the cache block addresses with one or more secure functions that use keys to generate secure indexes.
    Type: Application
    Filed: April 26, 2019
    Publication date: October 29, 2020
    Applicant: Arm Limited
    Inventors: Andreas Lars Sandberg, Prakash S. Ramrakhyani
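The keyed-hash indexing the abstract describes can be sketched in a few lines. This is a minimal illustration, not the patented design: the function name, BLAKE2 choice, and parameters (64 sets, 64-byte blocks) are assumptions for the sketch.

```python
import hashlib

def secure_index(address, key, num_sets=64, block_bits=6):
    # Drop the block-offset bits to form the cache block address.
    block_addr = address >> block_bits
    # Keyed hash of the block address: without the key, the
    # address-to-set mapping is unpredictable to an attacker.
    digest = hashlib.blake2b(
        block_addr.to_bytes(8, "little"), key=key, digest_size=8
    ).digest()
    return int.from_bytes(digest, "little") % num_sets
```

Changing the key changes the mapping for every address, which is what lets the index be re-randomized.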
  • Publication number: 20200293457
    Abstract: Apparatus comprises two or more processing devices each having an associated translation lookaside buffer to store translation data defining address translations between virtual and physical memory addresses, each address translation being associated with a respective virtual address space; and control circuitry to control the transfer of at least a subset of the translation data from the translation lookaside buffer associated with a first processing device to the translation lookaside buffer associated with a second, different, processing device.
    Type: Application
    Filed: January 31, 2020
    Publication date: September 17, 2020
    Inventors: Ilias VOUGIOUKAS, Nikos NIKOLERIS, Andreas Lars SANDBERG, Stephan DIESTELHORST
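The transfer of translation data between per-device TLBs can be modeled with a toy buffer. All class and function names here are hypothetical; the FIFO eviction and 16-entry capacity are stand-ins, not details from the filing.

```python
class TLB:
    """Toy translation lookaside buffer keyed by (asid, virtual page)."""
    def __init__(self, capacity=16):
        self.capacity = capacity
        self.entries = {}                       # (asid, vpage) -> ppage

    def insert(self, asid, vpage, ppage):
        if len(self.entries) >= self.capacity:  # FIFO eviction stand-in
            self.entries.pop(next(iter(self.entries)))
        self.entries[(asid, vpage)] = ppage

    def lookup(self, asid, vpage):
        return self.entries.get((asid, vpage))

def transfer_translations(src, dst, asid):
    """Copy the subset of src's translations belonging to one address
    space into dst, pre-warming the second device's TLB."""
    for (a, vpage), ppage in list(src.entries.items()):
        if a == asid:
            dst.insert(a, vpage, ppage)
```

The point of the transfer is that the migrated task avoids cold-TLB page walks on the destination device.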
  • Patent number: 10761988
    Abstract: Aspects of the present disclosure relate to an apparatus comprising a data array having locality-dependent latency characteristics such that an access to an open unit of the data array has a lower latency than an access to a closed unit of the data array. Set associative cache indexing circuitry determines, in response to a request for data associated with a target address, a cache set index. Mapping circuitry identifies, in response to the index, a set of data array locations corresponding to the index, according to a mapping in which a given unit of the data array comprises locations corresponding to a plurality of consecutive indices, and at least two locations of the set of locations corresponding to the same index are in different units of the data array. Cache access circuitry accesses said data from one of the set of data array locations.
    Type: Grant
    Filed: September 26, 2018
    Date of Patent: September 1, 2020
    Assignee: Arm Limited
    Inventors: Radhika Sanjeev Jagtap, Nikos Nikoleris, Andreas Lars Sandberg, Stephan Diestelhorst
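The mapping property in the abstract (consecutive indices share a unit; ways of one set span units) can be sketched with a hypothetical geometry. The 4-way, 8-sets-per-row numbers are illustrative assumptions.

```python
def map_set(index, num_ways=4, sets_per_row=8):
    """Return (row, column) data-array locations for one cache set.

    For a fixed way, consecutive set indices fall in the same row (so
    streaming accesses hit an open unit), while the ways of a single
    set land in different rows (hypothetical geometry)."""
    group = index // sets_per_row
    col = index % sets_per_row
    return [(group * num_ways + way, col) for way in range(num_ways)]
```

Spreading the ways of one set over different units means at least one way of the set may be in an open unit when nearby indices were recently touched.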
  • Publication number: 20200264980
    Abstract: An apparatus and method are provided for handling caching of persistent data. The apparatus comprises cache storage having a plurality of entries to cache data items associated with memory addresses in a non-volatile memory. The data items may comprise persistent data items and non-persistent data items. Write back control circuitry is used to control write back of the data items from the cache storage to the non-volatile memory. In addition, cache usage determination circuitry is used to determine, in dependence on information indicative of capacity of a backup energy source, a subset of the plurality of entries to be used to store persistent data items. In response to an event causing the backup energy source to be used, the write back control circuitry is then arranged to initiate write back to the non-volatile memory of the persistent data items cached in the subset of the plurality of entries.
    Type: Application
    Filed: May 4, 2020
    Publication date: August 20, 2020
    Inventors: Wei WANG, Stephan DIESTELHORST, Wendy Arnott ELSASSER, Andreas Lars SANDBERG, Nikos NIKOLERIS
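The sizing logic the abstract describes, capping persistent entries by backup energy, can be sketched as below. The energy model and function names are hypothetical; real designs would account for bus and NVM program energy.

```python
def persistent_entry_budget(backup_joules, joules_per_flush, total_entries):
    """Number of cache entries allowed to hold persistent data, capped so
    the backup energy source can flush all of them to non-volatile
    memory on power loss (hypothetical energy model)."""
    return min(total_entries, int(backup_joules // joules_per_flush))

def on_backup_power(persistent_entries, nvm):
    """On the failover event, write back only the entries designated
    for persistent data."""
    for addr, data in persistent_entries.items():
        nvm[addr] = data
```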
  • Patent number: 10712965
    Abstract: An apparatus and method are provided for transferring data between address ranges in memory. The apparatus comprises a data transfer controller, that is responsive to a data transfer request received by the apparatus from a processing element, to perform a transfer operation to transfer data from at least one source address range in memory to at least one destination address range in the memory. A redirect controller is then arranged, whilst the transfer operation is being performed, to intercept an access request that specifies a target address within a target address range, and to perform a memory redirection operation so as to cause the access request to be processed without awaiting completion of the transfer operation.
    Type: Grant
    Filed: November 8, 2017
    Date of Patent: July 14, 2020
    Assignee: ARM Limited
    Inventors: Andreas Lars Sandberg, Nikos Nikoleris, David Hennah Mansell
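The redirection idea, letting accesses proceed while a bulk copy is in flight, can be modeled with a small simulator. Class and method names are made up for the sketch; only read redirection is shown, and writes would need similar handling.

```python
class CopyEngine:
    """Background copy from src to dst with read redirection: reads to
    the not-yet-copied tail of the destination range are redirected to
    the source, so requesters never wait for the transfer to finish."""
    def __init__(self, mem, src, dst, length):
        self.mem, self.src, self.dst, self.length = mem, src, dst, length
        self.copied = 0                    # bytes transferred so far

    def step(self, n=1):
        for _ in range(min(n, self.length - self.copied)):
            self.mem[self.dst + self.copied] = self.mem[self.src + self.copied]
            self.copied += 1

    def read(self, addr):
        offset = addr - self.dst
        if 0 <= offset < self.length and offset >= self.copied:
            addr = self.src + offset       # redirect to the source copy
        return self.mem.get(addr)
```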
  • Patent number: 10705848
    Abstract: A TAGE branch predictor has, as its fallback predictor, a perceptron predictor. This provides a branch predictor which reduces the penalty of context switches and branch prediction state flushes.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: July 7, 2020
    Assignee: Arm Limited
    Inventors: Ilias Vougioukas, Stephan Diestelhorst, Andreas Lars Sandberg, Nikos Nikoleris
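The tagged-table-plus-perceptron-fallback structure can be sketched as a toy predictor. Everything here is a drastic simplification of a real TAGE (one tagged table, tiny counters, assumed history length of 8); the names are hypothetical.

```python
class HybridPredictor:
    def __init__(self, hist_len=8):
        self.hist_len = hist_len
        self.history = [-1] * hist_len    # +1 taken, -1 not taken
        self.tagged = {}                  # (pc, history) -> sat. counter
        self.weights = {}                 # pc -> perceptron weights

    def predict(self, pc):
        key = (pc, tuple(self.history))
        if key in self.tagged:            # tagged-component hit
            return self.tagged[key] >= 0
        # Fallback is a perceptron over global history rather than the
        # usual bimodal table, so it retrains quickly after a flush.
        w = self.weights.get(pc, [0] * (self.hist_len + 1))
        y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], self.history))
        return y >= 0

    def update(self, pc, taken):
        t = 1 if taken else -1
        key = (pc, tuple(self.history))
        self.tagged[key] = max(-2, min(1, self.tagged.get(key, 0) + t))
        w = self.weights.setdefault(pc, [0] * (self.hist_len + 1))
        w[0] += t                         # bias weight
        for i, hi in enumerate(self.history):
            w[i + 1] += t * hi            # correlate with history bits
        self.history = self.history[1:] + [t]
```

The perceptron's weights survive as a compact summary of branch behavior, which is why it recovers faster than a flushed table-based fallback.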
  • Patent number: 10642743
    Abstract: An apparatus and method are provided for handling caching of persistent data. The apparatus comprises cache storage having a plurality of entries to cache data items associated with memory addresses in a non-volatile memory. The data items may comprise persistent data items and non-persistent data items. Write back control circuitry is used to control write back of the data items from the cache storage to the non-volatile memory. In addition, cache usage determination circuitry is used to determine, in dependence on information indicative of capacity of a backup energy source, a subset of the plurality of entries to be used to store persistent data items. In response to an event causing the backup energy source to be used, the write back control circuitry is then arranged to initiate write back to the non-volatile memory of the persistent data items cached in the subset of the plurality of entries.
    Type: Grant
    Filed: June 12, 2018
    Date of Patent: May 5, 2020
    Assignee: ARM LIMITED
    Inventors: Wei Wang, Stephan Diestelhorst, Wendy Arnott Elsasser, Andreas Lars Sandberg, Nikos Nikoleris
  • Patent number: 10628318
    Abstract: A system cache and method of operating a system cache are provided. The system cache provides data caching in response to data access requests from plural system components. The system cache has data caching storage with plural entries, each entry storing a block of data items and each block of data items comprising plural sectors of data items. Sector use prediction circuitry is provided which stores a set of sector use pattern entries. In response to a data access request received from a system component specifying one or more data items, a pattern entry is selected and a sector use prediction is generated in dependence on a sector use pattern in the selected pattern entry. Further data items may then be retrieved which are not specified in the data access request but are indicated by the sector use prediction.
    Type: Grant
    Filed: January 29, 2018
    Date of Patent: April 21, 2020
    Assignee: ARM LIMITED
    Inventors: Nikos Nikoleris, Andreas Lars Sandberg, Jonas Švedas, Stephan Diestelhorst
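The sector-use prediction can be sketched as a per-requester pattern table. This is a minimal stand-in (one pattern per requester, unbounded storage); the class and method names are assumptions.

```python
class SectorUsePredictor:
    """Remember which sectors of a block each requester touched and use
    that pattern to fetch extra sectors on the next access
    (hypothetical sizing; real blocks might hold e.g. 4 sectors)."""
    def __init__(self):
        self.patterns = {}                # requester id -> used sectors

    def record(self, requester, sector):
        self.patterns.setdefault(requester, set()).add(sector)

    def predict(self, requester, requested_sector):
        # Sectors to fetch speculatively, beyond the one asked for.
        return self.patterns.get(requester, set()) - {requested_sector}
```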
  • Publication number: 20200073819
    Abstract: Address translation circuitry performs virtual-to-physical address translations using a page table hierarchy of page table entries, wherein a translation between a virtual address and a physical address is defined in a last level page table entry of the page table hierarchy. The address translation circuitry is responsive to receipt of the virtual address to perform a translation determination with reference to the page table hierarchy, wherein an intermediate level page table entry of the page table hierarchy stores an intermediate level pointer to the last level page table entry.
    Type: Application
    Filed: September 4, 2018
    Publication date: March 5, 2020
    Inventors: Geoffrey Wyman BLAKE, Prakash S. RAMRAKHYANI, Andreas Lars SANDBERG
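The intermediate-level shortcut can be illustrated with a toy three-level walk. The 9-bit-per-level, 4 KiB-page layout mirrors common 64-bit schemes but is an assumption here, as is the `"direct"` tagging.

```python
def walk(vaddr, level1):
    """Toy 3-level page walk. An intermediate (level-2) entry tagged
    "direct" points straight at the last-level entry, eliminating one
    memory access (hypothetical encoding)."""
    i1 = (vaddr >> 30) & 0x1FF
    i2 = (vaddr >> 21) & 0x1FF
    i3 = (vaddr >> 12) & 0x1FF
    level2 = level1[i1]
    entry = level2[i2]
    if isinstance(entry, tuple) and entry[0] == "direct":
        leaf = entry[1]                   # skip the level-3 table
    else:
        leaf = entry[i3]                  # normal walk through level 3
    return (leaf << 12) | (vaddr & 0xFFF)
```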
  • Publication number: 20200034303
    Abstract: Aspects of the present disclosure relate to an apparatus comprising a data array having locality-dependent latency characteristics such that an access to an open unit of the data array has a lower latency than an access to a closed unit of the data array. Set associative cache indexing circuitry determines, in response to a request for data associated with a target address, a cache set index. Mapping circuitry identifies, in response to the index, a set of data array locations corresponding to the index, according to a mapping in which a given unit of the data array comprises locations corresponding to a plurality of consecutive indices, and at least two locations of the set of locations corresponding to the same index are in different units of the data array. Cache access circuitry accesses said data from one of the set of data array locations.
    Type: Application
    Filed: September 26, 2018
    Publication date: January 30, 2020
    Inventors: Radhika Sanjeev JAGTAP, Nikos NIKOLERIS, Andreas Lars SANDBERG, Stephan DIESTELHORST
  • Publication number: 20190384501
    Abstract: An apparatus comprises control circuitry to control access to a memory implemented using a memory technology providing variable access latency. The control circuitry has request handling circuitry to identify an execution context switch comprising a transition from servicing memory access requests associated with a first execution context to servicing memory access requests associated with a second execution context. At least when the execution context switch meets a predetermined condition, a delay masking action is triggered to control subsequent memory access requests associated with the second execution context, for which the required data is already stored in the memory, to be serviced with a response delay which is independent of which addresses were accessed by the memory access requests associated with the first execution context. This can help guard against attacks which aim to exploit variation in response latency to gain insight into the addresses accessed by a victim execution context.
    Type: Application
    Filed: October 5, 2018
    Publication date: December 19, 2019
    Inventors: Radhika Sanjeev JAGTAP, Nikos NIKOLERIS, Andreas Lars SANDBERG
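The delay-masking action reduces, in essence, to charging a latency that does not depend on state left behind by the previous context. A minimal sketch with invented cycle counts:

```python
T_ROW_HIT, T_ROW_MISS = 20, 45    # hypothetical cycle counts

def service_latency(row_open, masking_active):
    """After a context switch meeting the masking condition, always
    charge the worst-case (closed-row) latency, so response timing
    leaks nothing about which rows the previous context left open."""
    if masking_active:
        return T_ROW_MISS
    return T_ROW_HIT if row_open else T_ROW_MISS
```

With masking active, a row-buffer hit and a row-buffer miss are indistinguishable to the requester, which is the property the defense relies on.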
  • Publication number: 20190361706
    Abstract: A branch predictor is provided with a branch state buffer, branch prediction save circuitry responsive to a branch prediction save event associated with a given execution context to save at least a portion of the active branch prediction state associated with the given execution context to a branch state buffer; and branch prediction restore circuitry responsive to a branch prediction restore event associated with the given execution context to restore active branch prediction state based on previously saved branch prediction state stored in the branch state buffer for the given execution context. This is useful for reducing the performance impact of mitigating against speculative side-channel attacks.
    Type: Application
    Filed: June 26, 2018
    Publication date: November 28, 2019
    Inventors: Ilias VOUGIOUKAS, Andreas Lars SANDBERG, Stephan DIESTELHORST, Matthew James HORSNELL
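The save/restore mechanism can be sketched as a per-context snapshot store. The class name and the dict-shaped "predictor state" are placeholders for whatever state a real predictor holds.

```python
class BranchStateBuffer:
    """Save a context's branch-prediction state on switch-out and
    restore it on switch-in, instead of flushing and retraining."""
    def __init__(self):
        self.saved = {}                   # context id -> snapshot

    def save(self, ctx, predictor_state):
        self.saved[ctx] = dict(predictor_state)

    def restore(self, ctx):
        # Unknown contexts start from empty (cold) state.
        return dict(self.saved.get(ctx, {}))
```

Because each context restores only its own state, a flush on context switch (a common speculative side-channel mitigation) no longer forces full retraining.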
  • Publication number: 20190361707
    Abstract: A TAGE branch predictor has, as its fallback predictor, a perceptron predictor. This provides a branch predictor which reduces the penalty of context switches and branch prediction state flushes.
    Type: Application
    Filed: June 26, 2018
    Publication date: November 28, 2019
    Inventors: Ilias VOUGIOUKAS, Stephan DIESTELHORST, Andreas Lars SANDBERG, Nikos NIKOLERIS
  • Publication number: 20190243778
    Abstract: Memory address translation apparatus comprises page table access circuitry to access a page table to retrieve translation data defining an address translation between an initial memory address in an initial memory address space, and a corresponding output memory address in an output address space; a translation data buffer to store, for a subset of the initial address space, one or more instances of the translation data; the translation data buffer comprising: an array of storage locations arranged in rows and columns; a row buffer comprising a plurality of entries each to store information from a respective portion of a row of the array; and comparison circuitry responsive to a key value dependent upon at least the initial memory address, to compare the key value with information stored in each of at least one key entry of the row buffer, each key entry having an associated value entry for storing at least a representation of a corresponding output memory address, and to identify which of the at least one key entry […]
    Type: Application
    Filed: November 29, 2017
    Publication date: August 8, 2019
    Inventors: Nikos NIKOLERIS, Andreas Lars SANDBERG, Prakash S. RAMRAKHYANI, Stephan DIESTELHORST
  • Publication number: 20190155747
    Abstract: There is provided an apparatus that includes an input port to receive, from a requester, any one of: a lookup operation comprising an input address, and a maintenance operation. Maintenance queue circuitry stores a maintenance queue of at least one maintenance operation and address storage stores a translation between the input address and an output address in an output address space. In response to receiving the input address, the output address is provided in dependence on the maintenance queue. In response to storing the maintenance operation, the maintenance queue circuitry causes an acknowledgement to be sent to the requester. By providing a separate maintenance queue for performing the maintenance operation, there is no need for a requester to be blocked while maintenance is performed.
    Type: Application
    Filed: October 24, 2018
    Publication date: May 23, 2019
    Inventors: Andreas Lars SANDBERG, Nikos NIKOLERIS, Prakash S. RAMRAKHYANI, Stephan DIESTELHORST
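The non-blocking maintenance idea, acknowledge immediately, apply lazily, can be sketched with invalidation ranges. Names and the lazy-drain policy are assumptions for illustration; the filing only requires that lookups be answered "in dependence on the maintenance queue".

```python
class MaintenanceQueue:
    """Queue invalidation ranges and acknowledge immediately; a later
    lookup that falls in a pending range applies that range first, so
    correctness is kept without blocking the requester at enqueue
    time."""
    def __init__(self, tlb):
        self.tlb = tlb                    # virtual page -> physical page
        self.pending = []                 # (start, end) ranges

    def enqueue_invalidate(self, start, end):
        self.pending.append((start, end))
        return "ack"                      # requester continues at once

    def lookup(self, vpage):
        remaining = []
        for s, e in self.pending:
            if s <= vpage < e:            # apply covering maintenance now
                for v in [k for k in self.tlb if s <= k < e]:
                    del self.tlb[v]
            else:
                remaining.append((s, e))
        self.pending = remaining
        return self.tlb.get(vpage)
```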
  • Publication number: 20190155748
    Abstract: Memory address translation apparatus comprises page table access circuitry to access page table data to retrieve translation data defining an address translation between an initial memory address in an initial memory address space, and a corresponding output memory address in an output address space; a translation data buffer to store, for a subset of the virtual address space, one or more instances of the translation data; and control circuitry, responsive to an input initial memory address to be translated, to request retrieval of translation data for the input initial memory address from the translation data buffer and, before completion of processing of the request for retrieval from the translation data buffer, to initiate retrieval of translation data for the input initial memory address by the page table access circuitry.
    Type: Application
    Filed: November 6, 2018
    Publication date: May 23, 2019
    Inventors: Andreas Lars SANDBERG, Nikos NIKOLERIS, Prakash S. RAMRAKHYANI
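The overlap between the translation-buffer lookup and the page walk can be modeled sequentially for clarity. In hardware both paths run concurrently; here the "speculative" walk is just an object that can be cancelled. All names are hypothetical.

```python
class PageWalk:
    """Stand-in for a hardware page walk that can be cancelled."""
    def __init__(self, vpage, page_table):
        self.vpage, self.page_table = vpage, page_table
        self.cancelled = False

    def cancel(self):
        self.cancelled = True

    def result(self):
        return self.page_table[self.vpage]

def translate(vpage, translation_buffer, page_table):
    """Start the page walk before the (slow, in-memory) translation
    buffer lookup completes; cancel whichever path is not needed."""
    walk = PageWalk(vpage, page_table)    # launched speculatively
    hit = translation_buffer.get(vpage)   # buffer lookup "meanwhile"
    if hit is not None:
        walk.cancel()                     # buffer won the race
        return hit
    return walk.result()                  # buffer missed; walk pays off
```

The benefit is on misses: the walk has a head start equal to the buffer's lookup latency, which matters when the translation buffer itself lives in DRAM.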
  • Publication number: 20190155742
    Abstract: There is provided an apparatus that includes an input address port to receive an input address from processor circuitry. Address storage stores a translation between the input address and an output address in an output address space. An output address port outputs the output address. An input data port receives data. Data storage stores the data. An output data port outputs the data stored in the data storage and control circuitry causes the data storage to store the translation between the input address and the output address. The control circuitry issues a signal to cause a page walk to occur in response to the input address being absent from the address storage and the data storage.
    Type: Application
    Filed: October 24, 2018
    Publication date: May 23, 2019
    Inventors: Prakash S. RAMRAKHYANI, Andreas Lars SANDBERG, Nikos NIKOLERIS, Stephan DIESTELHORST
  • Publication number: 20190004960
    Abstract: An apparatus and method are provided for handling caching of persistent data. The apparatus comprises cache storage having a plurality of entries to cache data items associated with memory addresses in a non-volatile memory. The data items may comprise persistent data items and non-persistent data items. Write back control circuitry is used to control write back of the data items from the cache storage to the non-volatile memory. In addition, cache usage determination circuitry is used to determine, in dependence on information indicative of capacity of a backup energy source, a subset of the plurality of entries to be used to store persistent data items. In response to an event causing the backup energy source to be used, the write back control circuitry is then arranged to initiate write back to the non-volatile memory of the persistent data items cached in the subset of the plurality of entries.
    Type: Application
    Filed: June 12, 2018
    Publication date: January 3, 2019
    Inventors: Wei Wang, Stephan Diestelhorst, Wendy Arnott ELSASSER, Andreas Lars Sandberg, Nikos NIKOLERIS
  • Publication number: 20180365142
    Abstract: Broadly speaking, embodiments of the present technique provide an apparatus and methods for improved wear-levelling in non-volatile memory (NVM) devices. In particular, the present wear-levelling techniques operate on small blocks within a memory device, at a finer scale/granularity than that used by common wear-levelling techniques which often remap large blocks (e.g. several kilobytes) of data.
    Type: Application
    Filed: December 2, 2016
    Publication date: December 20, 2018
    Applicant: Arm Limited
    Inventors: Andreas Lars SANDBERG, Irenéus Johannes de JONG, Andreas Hansson
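Fine-grained wear-levelling of the kind described, remapping small sub-block units rather than multi-kilobyte blocks, is often built on a rotation of offsets within a region. This sketch uses an assumed 64-unit region and a start-gap-style rotation; it is an illustration, not the patented algorithm.

```python
def remap_offset(offset, rotation, region_size=64):
    """Rotate sub-block offsets within a small region so repeated
    writes to one logical offset spread over many physical cells;
    the rotation advances periodically (hypothetical scheme)."""
    return (offset + rotation) % region_size

def physical_cells(offset, rotations):
    """Physical cells one logical offset touches as rotation steps."""
    return {remap_offset(offset, r) for r in rotations}
```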
  • Publication number: 20180232313
    Abstract: A system cache and method of operating a system cache are provided. The system cache provides data caching in response to data access requests from plural system components. The system cache has data caching storage with plural entries, each entry storing a block of data items and each block of data items comprising plural sectors of data items, and each block of data items being stored in an entry of the data caching storage with an associated address portion. Sector use prediction circuitry is provided which has a set of pattern entries to store a set of sector use patterns. In response to a data access request received from a system component specifying one or more data items a selected pattern entry is selected in dependence on a system component identifier in the data access request and a sector use prediction is generated in dependence on a sector use pattern in the selected pattern entry.
    Type: Application
    Filed: January 29, 2018
    Publication date: August 16, 2018
    Inventors: Nikos NIKOLERIS, Andreas Lars SANDBERG, Jonas SVEDAS, Stephan DIESTELHORST