Patents by Inventor Magnus Själander

Magnus Själander has filed for patents covering the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210365554
    Abstract: A system and method for mitigating micro-architectural replay attacks in a processing system. Speculative execution of a set of processor instructions is delayed upon detecting that the set is part of a micro-architectural replay attack, the attack being detected as repeated speculative execution of the set interleaved with misspeculation and squashing of those instructions.
    Type: Application
    Filed: May 25, 2021
    Publication date: November 25, 2021
    Inventors: Christos Sakalis, Stefanos Kaxiras, Magnus Själander
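
The detection scheme in this abstract lends itself to a compact illustration. Below is a minimal C sketch of one way such detection could work, assuming a small table that counts how often a block of speculative instructions is squashed and re-executed; all names, table sizes, and the threshold are illustrative, not taken from the patent.

```c
#include <stdbool.h>
#include <stdint.h>

#define TABLE_SIZE 256       /* illustrative: squash-tracking table entries */
#define REPLAY_THRESHOLD 3   /* illustrative: squashes before we suspect an attack */

/* One entry per tracked instruction block: its address and how many
 * times it has been speculatively executed and then squashed. */
struct squash_entry {
    uint64_t pc;
    uint32_t squash_count;
};

static struct squash_entry table[TABLE_SIZE];

/* Direct-mapped lookup; the indexing function is a hypothetical choice. */
static struct squash_entry *lookup(uint64_t pc) {
    return &table[(pc >> 2) % TABLE_SIZE];
}

/* Called by the pipeline when a misspeculation squashes the block at `pc`. */
void on_squash(uint64_t pc) {
    struct squash_entry *e = lookup(pc);
    if (e->pc != pc) {          /* different block mapped here: restart count */
        e->pc = pc;
        e->squash_count = 0;
    }
    e->squash_count++;
}

/* Called before issuing the block at `pc` speculatively: returns true if the
 * repeated execute-squash pattern looks like a replay attack, in which case
 * the core should delay the block until it becomes non-speculative. */
bool should_delay_speculation(uint64_t pc) {
    struct squash_entry *e = lookup(pc);
    return e->pc == pc && e->squash_count >= REPLAY_THRESHOLD;
}
```

A real implementation would live in the processor's issue logic rather than in software, but the count-and-threshold structure of the detection is the same.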
  • Patent number: 11163576
    Abstract: A system and method for efficiently preventing visible side-effects in the memory hierarchy during speculative execution are disclosed. Hiding the side-effects of executed instructions throughout the memory hierarchy is both complicated and expensive in terms of performance and energy. The disclosed system and method hide the side-effects of speculative loads in the cache(s) until the earliest time these loads become non-speculative. In a disclosed refinement, loads that hit in the L1 cache are allowed to proceed, with their side-effects on the L1 cache kept hidden until the loads become non-speculative, while speculative loads that miss in the cache(s) are prevented from executing until they become non-speculative. To limit the performance deterioration caused by these delayed loads, the cache(s) may be augmented with a value predictor or a re-computation engine that supplies predicted or recomputed values to the loads that missed.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: November 2, 2021
    Assignee: ETA SCALE AB
    Inventors: Christos Sakalis, Stefanos Kaxiras, Alberto Ros, Alexandra Jimborean, Magnus Själander
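
As a rough illustration of the policy this abstract describes, the following C sketch models an L1 hit whose replacement-state update is buffered until the load becomes non-speculative, and a miss that is delayed rather than sent to lower cache levels. The toy cache, its size, and all function names are assumptions for illustration only, not the patented design.

```c
#include <stdbool.h>
#include <stdint.h>

#define L1_LINES 8   /* toy direct-mapped L1; all sizes are illustrative */

struct l1_line { bool valid; uint64_t tag; uint64_t data; };

static struct l1_line l1[L1_LINES];
static bool lru_pending[L1_LINES];  /* replacement updates hidden until commit */

enum load_result { SERVED_FROM_L1, DELAYED };

/* An L1 hit is served, but its replacement-state update is only marked
 * pending, so the side-effect stays invisible; a miss is not forwarded to
 * lower cache levels while speculative (that would leave a visible trace)
 * and is instead delayed.  The abstract's refinement would consult a value
 * predictor or re-computation engine here before returning DELAYED. */
enum load_result speculative_load(uint64_t addr, uint64_t *data) {
    unsigned idx = addr % L1_LINES;
    if (l1[idx].valid && l1[idx].tag == addr) {
        *data = l1[idx].data;
        lru_pending[idx] = true;   /* defer the visible side-effect */
        return SERVED_FROM_L1;
    }
    return DELAYED;                /* retry once non-speculative */
}

/* Squashed load: discard the hidden update, leaving no trace in the L1. */
void on_squash_load(uint64_t addr) { lru_pending[addr % L1_LINES] = false; }

/* Load became non-speculative: the buffered update may now be applied, and
 * a delayed miss may now be sent to the lower levels of the hierarchy. */
void on_commit_load(uint64_t addr) { lru_pending[addr % L1_LINES] = false; }
```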
  • Publication number: 20200301712
    Abstract: A system and method for efficiently preventing visible side-effects in the memory hierarchy during speculative execution are disclosed. Hiding the side-effects of executed instructions throughout the memory hierarchy is both complicated and expensive in terms of performance and energy. The disclosed system and method hide the side-effects of speculative loads in the cache(s) until the earliest time these loads become non-speculative. In a disclosed refinement, loads that hit in the L1 cache are allowed to proceed, with their side-effects on the L1 cache kept hidden until the loads become non-speculative, while speculative loads that miss in the cache(s) are prevented from executing until they become non-speculative. To limit the performance deterioration caused by these delayed loads, the cache(s) may be augmented with a value predictor or a re-computation engine that supplies predicted or recomputed values to the loads that missed.
    Type: Application
    Filed: March 20, 2020
    Publication date: September 24, 2020
    Inventors: Christos Sakalis, Stefanos Kaxiras, Alberto Ros, Alexandra Jimborean, Magnus Själander
  • Patent number: 10089237
    Abstract: Certain embodiments herein relate to, among other things, designing data cache systems to enhance energy efficiency and performance of computing systems. A data filter cache herein may be designed to store a portion of data stored in a level one (L1) data cache. The data filter cache may reside between the L1 data cache and a register file in the primary compute unit. The data filter cache may therefore be accessed before the L1 data cache when a request for data is received and processed. Upon a data filter cache hit, access to the L1 data cache may be avoided. The smaller data filter cache may therefore be accessed earlier in the pipeline than the larger L1 data cache to promote improved energy utilization and performance. The data filter cache may also be accessed speculatively based on various conditions to increase the chances of having a data filter cache hit.
    Type: Grant
    Filed: March 3, 2017
    Date of Patent: October 2, 2018
    Assignee: Florida State University Research Foundation, Inc.
    Inventors: David Whalley, Magnus Själander, Alen Bardizbanyan, Per Larsson-Edefors
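
The access ordering this abstract describes can be sketched in a few lines of C: probe a small filter cache first, and only on a miss fall through to the L1 and refill the filter cache. The structure sizes and the l1_access stub below are hypothetical, chosen only to make the sketch self-contained.

```c
#include <stdbool.h>
#include <stdint.h>

#define DFC_LINES 16   /* illustrative: the filter cache is far smaller than L1 */

struct dfc_line { bool valid; uint64_t tag; uint64_t data; };

static struct dfc_line dfc[DFC_LINES];

/* Hypothetical stub standing in for the larger, more expensive L1 lookup. */
static uint64_t l1_access(uint64_t addr) { return addr ^ 0xdeadbeefu; }

/* Access order from the abstract: probe the small data filter cache first,
 * early in the pipeline; only on a miss pay the cost of the L1 access, then
 * fill the filter cache so a subsequent access to the same line hits. */
uint64_t load(uint64_t addr) {
    unsigned idx = addr % DFC_LINES;
    if (dfc[idx].valid && dfc[idx].tag == addr)
        return dfc[idx].data;            /* filter-cache hit: L1 never touched */
    uint64_t data = l1_access(addr);     /* miss: fall through to the L1 */
    dfc[idx] = (struct dfc_line){ .valid = true, .tag = addr, .data = data };
    return data;
}
```

The speculative-access variant mentioned in the abstract would additionally gate the early filter-cache probe on conditions that make a hit likely; that policy layer is omitted from the sketch.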
  • Publication number: 20170177490
    Abstract: Certain embodiments herein relate to, among other things, designing data cache systems to enhance energy efficiency and performance of computing systems. A data filter cache herein may be designed to store a portion of data stored in a level one (L1) data cache. The data filter cache may reside between the L1 data cache and a register file in the primary compute unit. The data filter cache may therefore be accessed before the L1 data cache when a request for data is received and processed. Upon a data filter cache hit, access to the L1 data cache may be avoided. The smaller data filter cache may therefore be accessed earlier in the pipeline than the larger L1 data cache to promote improved energy utilization and performance. The data filter cache may also be accessed speculatively based on various conditions to increase the chances of having a data filter cache hit.
    Type: Application
    Filed: March 3, 2017
    Publication date: June 22, 2017
    Inventors: David Whalley, Magnus Själander, Alen Bardizbanyan, Per Larsson-Edefors
  • Patent number: 9612960
    Abstract: Certain embodiments herein relate to, among other things, designing data cache systems to enhance energy efficiency and performance of computing systems. A data filter cache herein may be designed to store a portion of data stored in a level one (L1) data cache. The data filter cache may reside between the L1 data cache and a register file in the primary compute unit. The data filter cache may therefore be accessed before the L1 data cache when a request for data is received and processed. Upon a data filter cache hit, access to the L1 data cache may be avoided. The smaller data filter cache may therefore be accessed earlier in the pipeline than the larger L1 data cache to promote improved energy utilization and performance. The data filter cache may also be accessed speculatively based on various conditions to increase the chances of having a data filter cache hit.
    Type: Grant
    Filed: August 29, 2014
    Date of Patent: April 4, 2017
    Assignee: Florida State University Research Foundation, Inc.
    Inventors: David Whalley, Magnus Själander, Alen Bardizbanyan, Per Larsson-Edefors
  • Publication number: 20140372700
    Abstract: Certain embodiments herein relate to, among other things, designing data cache systems to enhance energy efficiency and performance of computing systems. A data filter cache herein may be designed to store a portion of data stored in a level one (L1) data cache. The data filter cache may reside between the L1 data cache and a register file in the primary compute unit. The data filter cache may therefore be accessed before the L1 data cache when a request for data is received and processed. Upon a data filter cache hit, access to the L1 data cache may be avoided. The smaller data filter cache may therefore be accessed earlier in the pipeline than the larger L1 data cache to promote improved energy utilization and performance. The data filter cache may also be accessed speculatively based on various conditions to increase the chances of having a data filter cache hit.
    Type: Application
    Filed: August 29, 2014
    Publication date: December 18, 2014
    Applicant: Florida State University Research Foundation, Inc.
    Inventors: David Whalley, Magnus Själander, Alen Bardizbanyan, Per Larsson-Edefors
  • Publication number: 20110004881
    Abstract: A method comprising receiving tasks for execution on at least one processor and processing at least one task on one processor. To decrease the turnaround time of task processing, the method comprises, in parallel with processing the at least one task: verifying the readiness of at least one next task under the assumption that the currently processed task has finished, preparing a ready-structure for the at least one task verified as ready, and starting that task using the ready-structure once the currently processed task has finished.
    Type: Application
    Filed: March 12, 2009
    Publication date: January 6, 2011
    Applicant: NXP B.V.
    Inventors: Andrei Sergeevich Terechko, Ghiath Al-Kadi, Marc Andre Georges Duranton, Magnus Själander
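
The overlap this abstract describes, verifying the next task's readiness while the current task is still running, can be sketched as follows in C. The task and ready-structure fields are assumptions chosen for illustration; the patent does not specify this representation.

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative task model: a task is ready when all of its input
 * dependencies have been produced. */
struct task {
    void (*run)(void *args);
    void *args;
    int n_deps;
    const struct task *deps[4];
    bool done;
};

/* Ready-structure prepared ahead of time so the next task can start with
 * no setup latency once the current task finishes. */
struct ready_structure {
    struct task *task;
    bool prepared;
};

/* Would `next` be ready if `current` were already finished?  This is the
 * "verify readiness assuming the currently processed task is finished"
 * step, performed in parallel with executing `current`. */
static bool ready_assuming_done(const struct task *next,
                                const struct task *current) {
    for (int i = 0; i < next->n_deps; i++)
        if (!next->deps[i]->done && next->deps[i] != current)
            return false;
    return true;
}

/* Prepare the ready-structure while `current` runs; a real implementation
 * might also prefetch `next`'s arguments or context here. */
static void prepare(struct ready_structure *rs, struct task *next,
                    const struct task *current) {
    rs->prepared = ready_assuming_done(next, current);
    rs->task = rs->prepared ? next : NULL;
}

/* When `current` completes, the pre-verified task starts immediately. */
static void finish_and_dispatch(struct task *current,
                                struct ready_structure *rs) {
    current->done = true;
    if (rs->prepared)
        rs->task->run(rs->task->args);
}
```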