Patents by Inventor David Whalley
David Whalley has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20240077371
  Abstract: A wall shear stress sensor comprising first and second optical gratings, an incident light source and a photodetector. The second optical grating overlaps the first optical grating such that the first optical grating and second optical grating form a Moiré fringe pattern, wherein the second optical grating is displaceable relative to the first optical grating in response to a wall shear stress imparted on the sensor, and wherein displacement of the second optical grating correlates with a phase shift in the Moiré fringe pattern. The incident light source is configured to sequentially illuminate a plurality of discrete locations distributed across the Moiré fringe pattern. The photodetector is configured to detect light intensity reflected from each discrete location on the Moiré fringe pattern. A method of using the sensor and a two-dimensional wall shear stress sensor system are also disclosed.
  Type: Application
  Filed: December 14, 2021
  Publication date: March 7, 2024
  Applicant: University of Newcastle upon Tyne
  Inventors: Richard David Whalley, Nima Ebrahimzade, Peter Jonathan Cumpson
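The phase-shift measurement this abstract describes can be sketched numerically. The sketch below assumes (an assumption for illustration, not part of the patent record) that the reflected intensity varies approximately sinusoidally across the fringe pattern; it fits a sinusoid to the intensities sampled at the discrete illumination locations and converts the resulting phase change into a grating displacement. The function names, fringe period, and magnification factor are hypothetical.

```python
import numpy as np

def fringe_phase(intensities, positions, fringe_period):
    """Estimate the phase of an approximately sinusoidal Moire fringe pattern
    from intensity samples taken at discrete positions.

    Least-squares fit of I(x) ~= a*cos(2*pi*x/P) + b*sin(2*pi*x/P) + c,
    giving phase = atan2(-b, a)."""
    x = 2.0 * np.pi * np.asarray(positions) / fringe_period
    A = np.column_stack([np.cos(x), np.sin(x), np.ones_like(x)])
    a, b, _ = np.linalg.lstsq(A, np.asarray(intensities), rcond=None)[0]
    return np.arctan2(-b, a)

def grating_displacement(phase_ref, phase_now, fringe_period, magnification):
    """Convert a measured fringe phase shift into a displacement of the
    displaceable grating.  A phase change of 2*pi corresponds to one fringe
    period; 'magnification' is the Moire amplification factor relating fringe
    motion to grating motion (a calibration-dependent, assumed quantity)."""
    dphi = np.angle(np.exp(1j * (phase_now - phase_ref)))  # wrap to (-pi, pi]
    return dphi / (2.0 * np.pi) * fringe_period / magnification

# Hypothetical usage: 16 sampling locations spread across one fringe period.
if __name__ == "__main__":
    P = 1.0e-3                                           # fringe period (m), assumed
    xs = np.linspace(0.0, P, 16, endpoint=False)
    ref = 1.0 + 0.5 * np.cos(2 * np.pi * xs / P + 0.2)   # reference frame
    now = 1.0 + 0.5 * np.cos(2 * np.pi * xs / P + 0.9)   # displaced frame
    d = grating_displacement(fringe_phase(ref, xs, P),
                             fringe_phase(now, xs, P), P, magnification=50.0)
    print(f"estimated grating displacement: {d:.3e} m")
```

A least-squares fit is used here rather than a fixed three-sample phase formula so the number and spacing of the sampled locations can vary.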
- Publication number: 20240068892
  Abstract: A two-dimensional wall shear stress sensor comprising fixed and floating substrates, an incident light source and first and second photodetectors. The fixed substrate supports a first plurality of optical gratings. The floating substrate supports a second plurality of optical gratings superimposed over the first plurality of optical gratings to form a plurality of Moiré fringe patterns comprising at least a first Moiré fringe pattern extending in a first direction and a second Moiré fringe pattern extending in a second direction different to the first direction. The floating substrate is displaceable relative to the fixed substrate in response to a wall shear stress imparted on the sensor, wherein displacement of the floating substrate correlates with a phase shift in at least one of the first and second Moiré fringe patterns. An incident light source is configured to illuminate each of the plurality of Moiré fringe patterns.
  Type: Application
  Filed: December 14, 2021
  Publication date: February 29, 2024
  Applicant: University of Newcastle upon Tyne
  Inventors: Richard David Whalley, Nima Ebrahimzade
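For the two-dimensional sensor, the same kind of phase extraction applied to each of the two fringe directions yields two displacement components. The sketch below assumes a linear, spring-like calibration between floating-substrate displacement and wall shear stress; that calibration, along with every constant and function name used, is an illustrative assumption rather than anything stated in the patent record.

```python
import numpy as np

def shear_stress_vector(dphi_x, dphi_y, fringe_period, magnification, stiffness):
    """Combine phase shifts from two orthogonal Moire fringe patterns into a
    2-D wall shear stress estimate (a sketch, not the patented calibration).

    dphi_x, dphi_y : fringe phase shifts (rad) in the two fringe directions
    fringe_period  : fringe period (m)
    magnification  : Moire amplification factor (fringe motion / substrate motion)
    stiffness      : assumed effective restoring stiffness of the floating
                     substrate, expressed as shear stress per unit displacement (Pa/m)
    """
    # Phase shift -> floating-substrate displacement in each direction.
    dx = dphi_x / (2 * np.pi) * fringe_period / magnification
    dy = dphi_y / (2 * np.pi) * fringe_period / magnification
    # Assumed linear calibration: stress component proportional to displacement.
    tau = stiffness * np.array([dx, dy])
    magnitude = np.hypot(tau[0], tau[1])
    direction_deg = np.degrees(np.arctan2(tau[1], tau[0]))
    return tau, magnitude, direction_deg

# Hypothetical calibration values, for illustration only.
tau, mag, ang = shear_stress_vector(0.30, -0.12, fringe_period=1.0e-3,
                                    magnification=50.0, stiffness=2.0e6)
print(f"tau = {tau}, |tau| = {mag:.3f} Pa at {ang:.1f} deg")
```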
- Patent number: 11188337
  Abstract: Micro-architecture designs and methods are provided. A computer processing architecture may include an instruction cache for storing producer instructions, a half-instruction cache for storing half instructions, and eager shelves for storing a result of a first producer instruction. The computer processing architecture may fetch the first producer instruction and a first half instruction; send the first half instruction to the eager shelves; based on execution of the first producer instruction, send a second half instruction to the eager shelves; assemble the first producer instruction in the eager shelves based on the first half instruction and the second half instruction; and dispatch the first producer instruction for execution.
  Type: Grant
  Filed: September 30, 2019
  Date of Patent: November 30, 2021
  Assignees: The Florida State University Research Foundation, Inc.; Michigan Technological University
  Inventors: David Whalley, Soner Onder
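The fetch-and-assemble sequence recited in this abstract can be mimicked with a toy model: one half instruction arrives with the producer, the second half is sent when the producer executes, and the shelf entry is assembled and dispatched once the result and both halves are present. The class and field names below are illustrative only and do not reflect the actual micro-architecture or its encoding.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class HalfInstruction:
    opcode: str
    operands: str          # the half of the instruction's operand fields it carries

@dataclass
class EagerShelfEntry:
    """Toy model of one eager-shelf entry: it buffers the producer's result and
    the two instruction halves until everything needed to assemble and dispatch
    the instruction has arrived."""
    producer_result: Optional[int] = None
    halves: List[HalfInstruction] = field(default_factory=list)

    def deposit_half(self, half: HalfInstruction) -> None:
        self.halves.append(half)

    def deposit_result(self, value: int) -> None:
        self.producer_result = value

    def ready(self) -> bool:
        return self.producer_result is not None and len(self.halves) == 2

    def assemble_and_dispatch(self) -> str:
        first, second = self.halves
        return (f"dispatch {first.opcode} {first.operands},{second.operands} "
                f"(with forwarded result {self.producer_result})")

# Walk through the sequence recited in the abstract.
shelf = EagerShelfEntry()
shelf.deposit_half(HalfInstruction("add", "r1"))   # first half, fetched with the producer
shelf.deposit_result(40 + 2)                       # producer instruction executes
shelf.deposit_half(HalfInstruction("add", "r2"))   # second half sent on producer execution
if shelf.ready():
    print(shelf.assemble_and_dispatch())
```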
- Publication number: 20200104136
  Abstract: Micro-architecture designs and methods are provided. A computer processing architecture may include an instruction cache for storing producer instructions, a half-instruction cache for storing half instructions, and eager shelves for storing a result of a first producer instruction. The computer processing architecture may fetch the first producer instruction and a first half instruction; send the first half instruction to the eager shelves; based on execution of the first producer instruction, send a second half instruction to the eager shelves; assemble the first producer instruction in the eager shelves based on the first half instruction and the second half instruction; and dispatch the first producer instruction for execution.
  Type: Application
  Filed: September 30, 2019
  Publication date: April 2, 2020
  Applicant: Michigan Technological University
  Inventors: David Whalley, Soner Onder
- Patent number: 10089237
  Abstract: Certain embodiments herein relate to, among other things, designing data cache systems to enhance energy efficiency and performance of computing systems. A data filter cache herein may be designed to store a portion of data stored in a level one (L1) data cache. The data filter cache may reside between the L1 data cache and a register file in the primary compute unit. The data filter cache may therefore be accessed before the L1 data cache when a request for data is received and processed. Upon a data filter cache hit, access to the L1 data cache may be avoided. The smaller data filter cache may therefore be accessed earlier in the pipeline than the larger L1 data cache to promote improved energy utilization and performance. The data filter cache may also be accessed speculatively based on various conditions to increase the chances of having a data filter cache hit.
  Type: Grant
  Filed: March 3, 2017
  Date of Patent: October 2, 2018
  Assignee: Florida State University Research Foundation, Inc.
  Inventors: David Whalley, Magnus Själander, Alen Bardizbanyan, Per Larsson-Edefors
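The access ordering described here (probe the small filter cache first, fall back to L1 only on a miss) is straightforward to illustrate. The following toy model counts avoided L1 accesses for a simple strided load pattern; the capacity, line size, and LRU replacement policy are illustrative choices, not details taken from the patent.

```python
from collections import OrderedDict

class DataFilterCache:
    """Toy model of a small filter cache sitting between the register file and
    the L1 data cache: probe the filter cache first and only touch L1 on a
    filter-cache miss.  Sizes and replacement policy are illustrative only."""

    def __init__(self, num_lines=8, line_bytes=32):
        self.lines = OrderedDict()          # line address -> data, in LRU order
        self.num_lines = num_lines
        self.line_bytes = line_bytes
        self.l1_accesses = 0
        self.filter_hits = 0

    def _line_addr(self, addr):
        return addr // self.line_bytes

    def load(self, addr, l1_cache):
        line = self._line_addr(addr)
        if line in self.lines:              # filter-cache hit: L1 access avoided
            self.lines.move_to_end(line)
            self.filter_hits += 1
            return self.lines[line]
        # Filter-cache miss: access L1 and fill the filter cache.
        self.l1_accesses += 1
        data = l1_cache.get(line, b"\x00" * self.line_bytes)
        self.lines[line] = data
        if len(self.lines) > self.num_lines:
            self.lines.popitem(last=False)  # evict the least recently used line
        return data

# Illustrative strided access pattern: most accesses hit the filter cache.
dfc = DataFilterCache()
l1 = {}                                     # stand-in for the L1 data cache contents
for addr in range(0, 256, 4):
    dfc.load(addr, l1)
print(f"filter hits: {dfc.filter_hits}, L1 accesses: {dfc.l1_accesses}")
```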
- Publication number: 20170177490
  Abstract: Certain embodiments herein relate to, among other things, designing data cache systems to enhance energy efficiency and performance of computing systems. A data filter cache herein may be designed to store a portion of data stored in a level one (L1) data cache. The data filter cache may reside between the L1 data cache and a register file in the primary compute unit. The data filter cache may therefore be accessed before the L1 data cache when a request for data is received and processed. Upon a data filter cache hit, access to the L1 data cache may be avoided. The smaller data filter cache may therefore be accessed earlier in the pipeline than the larger L1 data cache to promote improved energy utilization and performance. The data filter cache may also be accessed speculatively based on various conditions to increase the chances of having a data filter cache hit.
  Type: Application
  Filed: March 3, 2017
  Publication date: June 22, 2017
  Inventors: David Whalley, Magnus Själander, Alen Bardizbanyan, Per Larsson-Edefors
- Patent number: 9612960
  Abstract: Certain embodiments herein relate to, among other things, designing data cache systems to enhance energy efficiency and performance of computing systems. A data filter cache herein may be designed to store a portion of data stored in a level one (L1) data cache. The data filter cache may reside between the L1 data cache and a register file in the primary compute unit. The data filter cache may therefore be accessed before the L1 data cache when a request for data is received and processed. Upon a data filter cache hit, access to the L1 data cache may be avoided. The smaller data filter cache may therefore be accessed earlier in the pipeline than the larger L1 data cache to promote improved energy utilization and performance. The data filter cache may also be accessed speculatively based on various conditions to increase the chances of having a data filter cache hit.
  Type: Grant
  Filed: August 29, 2014
  Date of Patent: April 4, 2017
  Assignee: Florida State University Research Foundation, Inc.
  Inventors: David Whalley, Magnus Själander, Alen Bardizbanyan, Per Larsson-Edefors
- Patent number: 9600418
  Abstract: Certain embodiments herein relate to using tagless access buffers (TABs) to optimize energy efficiency in various computing systems. Candidate memory references in an L1 data cache may be identified and stored in the TAB. Various techniques may be implemented for identifying the candidate references and allocating the references into the TAB. Groups of memory references may also be allocated to a single TAB entry or may be allocated to an extra TAB entry (such that two lines in the TAB may be used to store L1 data cache lines), for example, when a strided access pattern spans two consecutive L1 data cache lines. Certain other embodiments are related to data filter cache and multi-issue tagless hit instruction cache (TH-IC) techniques.
  Type: Grant
  Filed: November 19, 2013
  Date of Patent: March 21, 2017
  Assignee: Florida State University Research Foundation, Inc.
  Inventors: David Whalley, Hans Magnus Sjalander, Alen Bardizbanyan, Per Larsson-Edefors, Peter Gavin
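The extra-entry rule for strided accesses that straddle two consecutive L1 lines can be sketched as a small allocation heuristic. Everything below, including the TAB capacity, line size, and the greedy allocation order, is a hypothetical illustration of the idea rather than the compiler analysis claimed in the patent.

```python
LINE_BYTES = 32
TAB_LINES = 4   # illustrative TAB capacity, in cache lines

def tab_lines_needed(start_addr, access_bytes, stride):
    """Decide how many TAB lines a strided reference needs.  If any access in
    the steady state can straddle two consecutive L1 lines (because the access
    width plus the intra-line offset walks across a line boundary), reserve an
    extra line so both halves of the straddling access stay buffered."""
    # An access straddles a line when its start offset within the line leaves
    # fewer than 'access_bytes' bytes before the line boundary.
    straddles = any(
        (start_addr + k * stride) % LINE_BYTES > LINE_BYTES - access_bytes
        for k in range(LINE_BYTES)          # covers a full period of offsets
    )
    return 2 if straddles else 1

def allocate_tab(references):
    """Greedy allocation of compiler-identified references into TAB entries."""
    allocation, used = [], 0
    for name, start, width, stride in references:
        need = tab_lines_needed(start, width, stride)
        if used + need <= TAB_LINES:
            allocation.append((name, need))
            used += need
        # References that do not fit simply keep using the L1 data cache.
    return allocation

# Hypothetical loop references: (name, start address, access width, stride).
refs = [("a[i]", 0, 4, 4), ("b[i]", 100, 8, 12)]
print(allocate_tab(refs))   # b[i] straddles line boundaries, so it takes two TAB lines
```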
- Publication number: 20140372700
  Abstract: Certain embodiments herein relate to, among other things, designing data cache systems to enhance energy efficiency and performance of computing systems. A data filter cache herein may be designed to store a portion of data stored in a level one (L1) data cache. The data filter cache may reside between the L1 data cache and a register file in the primary compute unit. The data filter cache may therefore be accessed before the L1 data cache when a request for data is received and processed. Upon a data filter cache hit, access to the L1 data cache may be avoided. The smaller data filter cache may therefore be accessed earlier in the pipeline than the larger L1 data cache to promote improved energy utilization and performance. The data filter cache may also be accessed speculatively based on various conditions to increase the chances of having a data filter cache hit.
  Type: Application
  Filed: August 29, 2014
  Publication date: December 18, 2014
  Applicant: Florida State University Research Foundation, Inc.
  Inventors: David Whalley, Magnus Själander, Alen Bardizbanyan, Per Larsson-Edefors
- Publication number: 20140143494
  Abstract: Certain embodiments herein relate to using tagless access buffers (TABs) to optimize energy efficiency in various computing systems. Candidate memory references in an L1 data cache may be identified and stored in the TAB. Various techniques may be implemented for identifying the candidate references and allocating the references into the TAB. Groups of memory references may also be allocated to a single TAB entry or may be allocated to an extra TAB entry (such that two lines in the TAB may be used to store L1 data cache lines), for example, when a strided access pattern spans two consecutive L1 data cache lines. Certain other embodiments are related to data filter cache and multi-issue tagless hit instruction cache (TH-IC) techniques.
  Type: Application
  Filed: November 19, 2013
  Publication date: May 22, 2014
  Applicant: Florida State University Research Foundation, Inc.
  Inventors: David Whalley, Hans Magnus Sjalander, Alen Bardizbanyan, Per Larsson-Edefors, Peter Gavin
- Patent number: 7765342
  Abstract: Embodiments of the present invention may provide for architectural and compiler approaches to optimizing processors by packing instructions into instruction register files. The approaches may include providing at least one instruction register file, identifying a plurality of frequently-used instructions, and storing at least a portion of the identified frequently-used instructions in the instruction register file. The approaches may further include specifying a first identifier for identifying each of the instructions stored within the instruction register file, and retrieving at least one packed instruction from an instruction cache, wherein each packed instruction includes at least one first identifier. The packed instructions may be tightly packed or loosely packed in accordance with embodiments of the present invention. Packed instructions may also be executed alongside traditional non-packed instructions.
  Type: Grant
  Filed: September 7, 2006
  Date of Patent: July 27, 2010
  Assignee: Florida State University Research Foundation
  Inventors: David Whalley, Gary Tyson
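The packing approach can be illustrated with a short sketch: profile the instruction stream, keep the most frequently executed instructions in an instruction register file (IRF), and replace runs of IRF-resident instructions with a single packed instruction holding their IRF identifiers. The IRF size, slot count, and encoding below are hypothetical and only loosely correspond to the tightly packed case.

```python
from collections import Counter

IRF_SIZE = 32          # illustrative: 32 entries, so each identifier fits in 5 bits

def build_irf(instruction_trace):
    """Keep the most frequently executed instructions in the IRF."""
    most_common = Counter(instruction_trace).most_common(IRF_SIZE)
    return [instr for instr, _ in most_common]

def pack(program, irf, slots_per_pack=4):
    """Replace runs of IRF-resident instructions with a single 'packed'
    instruction holding up to slots_per_pack IRF identifiers; instructions
    outside the IRF are emitted unchanged alongside the packed ones."""
    index = {instr: i for i, instr in enumerate(irf)}
    packed_program, run = [], []

    def flush():
        if run:
            packed_program.append(("PACKED", tuple(run)))
            run.clear()

    for instr in program:
        if instr in index:
            run.append(index[instr])
            if len(run) == slots_per_pack:
                flush()
        else:
            flush()
            packed_program.append(instr)
    flush()
    return packed_program

# Hypothetical profile and code fragment, for illustration only.
trace = ["addi r1,r1,1", "lw r2,0(r1)", "addi r1,r1,1", "beq r2,r0,L"] * 10
irf = build_irf(trace)
print(pack(["addi r1,r1,1", "lw r2,0(r1)", "addi r1,r1,1", "jal foo"], irf))
```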
- Publication number: 20070136561
  Abstract: Embodiments of the present invention may provide for architectural and compiler approaches to optimizing processors by packing instructions into instruction register files. The approaches may include providing at least one instruction register file, identifying a plurality of frequently-used instructions, and storing at least a portion of the identified frequently-used instructions in the instruction register file. The approaches may further include specifying a first identifier for identifying each of the instructions stored within the instruction register file, and retrieving at least one packed instruction from an instruction cache, wherein each packed instruction includes at least one first identifier. The packed instructions may be tightly packed or loosely packed in accordance with embodiments of the present invention. Packed instructions may also be executed alongside traditional non-packed instructions.
  Type: Application
  Filed: September 7, 2006
  Publication date: June 14, 2007
  Applicant: Florida State University Research Foundation
  Inventors: David Whalley, Gary Tyson