Patents by Inventor Ross Segelken

Ross Segelken has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10642744
    Abstract: An improved architectural means to address processor cache attacks based on speculative execution defines a new memory type that is both cacheable and inaccessible by speculation. Speculative execution cannot access and expose a memory location that is speculatively inaccessible. Such mechanisms can disqualify certain sensitive data from being exposed through speculative execution. Data which must be protected at a performance cost may be specifically marked. If the processor is told where secrets are stored in memory and is forbidden from speculating on those memory locations, then the processor will ensure the process trying to access those memory locations is privileged to access those locations before reading and caching them. Such a countermeasure is effective against attacks that use speculative execution to leak secrets from a processor cache.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: May 5, 2020
    Assignee: NVIDIA Corporation
    Inventors: Darrell D. Boggs, Ross Segelken, Mike Cornaby, Nick Fortino, Shailender Chaudhry, Denis Khartikov, Alok Mooley, Nathan Tuck, Gordon Vreugdenhil
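    Sketch: a minimal Python model of the mechanism described in the abstract above, in which a memory type is both cacheable and inaccessible to speculation. All names here (MemType, Page, Core) are illustrative assumptions; real hardware would encode the attribute in page-table or range-register state rather than Python objects.

      from enum import Enum, auto

      class MemType(Enum):
          NORMAL = auto()             # cacheable, speculation allowed
          SPEC_INACCESSIBLE = auto()  # cacheable, but never read speculatively

      class Page:
          def __init__(self, data, memtype, privileged_only=False):
              self.data = data
              self.memtype = memtype
              self.privileged_only = privileged_only

      class Core:
          def __init__(self):
              self.cache = {}         # address -> value, models the data cache

          def load(self, page, offset, speculative, privileged):
              # A speculative load to a speculation-inaccessible page is not
              # performed, so nothing about the secret can reach the cache.
              if speculative and page.memtype is MemType.SPEC_INACCESSIBLE:
                  return None         # no access, no cache fill, no side channel
              # Non-speculative path: the privilege check happens before the
              # read and the cache fill, as the abstract describes.
              if page.privileged_only and not privileged:
                  raise PermissionError("access to protected page denied")
              value = page.data[offset]
              self.cache[(id(page), offset)] = value
              return value

      secret = Page([0x2A], MemType.SPEC_INACCESSIBLE, privileged_only=True)
      core = Core()
      print(core.load(secret, 0, speculative=True, privileged=False))   # None
      print(core.load(secret, 0, speculative=False, privileged=True))   # 42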
  • Patent number: 10324725
    Abstract: The disclosure provides a method and a system for identifying and replacing code translations that generate spurious fault events. In one embodiment the method includes executing a first set and a second set of native instructions, performing a third translation of a target instruction to form a third set of native instructions in response to a determination that a fault occurrence is attributed to a first translation, wherein the third set of native instructions is not the same as the second set of native instructions, and the third set of native instructions is not the same as the first set of native instructions, and executing the third set of native instructions.
    Type: Grant
    Filed: March 8, 2018
    Date of Patent: June 18, 2019
    Assignee: Nvidia Corporation
    Inventors: Nathan Tuck, David Dunn, Ross Segelken, Madhu Swarna
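    Sketch: a toy Python model of the retranslation policy in the abstract above: when a fault is attributed to an existing translation of a target instruction, a fresh translation is generated that differs from the translations already produced. The translator here only labels strings; all names are illustrative assumptions.

      def make_translation(target, attempt):
          # Stand-in for a binary translator; each attempt yields a distinct
          # native-instruction sequence for the same target instruction.
          return f"native[{target} #v{attempt}]"

      class TranslationManager:
          def __init__(self):
              self.attempts = {}      # target instruction -> translations made

          def translate(self, target):
              n = self.attempts.get(target, 0) + 1
              self.attempts[target] = n
              return make_translation(target, n)

          def on_fault(self, target, faulting_translation):
              # Fault attributed to an earlier translation: form a new set of
              # native instructions, distinct from those tried so far, and
              # execute that instead.
              replacement = self.translate(target)
              assert replacement != faulting_translation
              return replacement

      mgr = TranslationManager()
      t1 = mgr.translate("LOAD r1, [r2]")     # first set of native instructions
      t2 = mgr.translate("LOAD r1, [r2]")     # second set
      t3 = mgr.on_fault("LOAD r1, [r2]", t1)  # spurious fault blamed on t1
      print(t1, t2, t3, sep="\n")             # three distinct translations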
  • Patent number: 10241810
    Abstract: A processing system comprising a microprocessor core and a translator. Within the microprocessor core is arranged a hardware decoder configured to selectively decode instructions for execution in the microprocessor core, and a logic structure configured to track usage of the hardware decoder. The translator is operatively coupled to the logic structure and configured to selectively translate the instructions for execution in the microprocessor core, based on the usage of the hardware decoder as determined by the logic structure.
    Type: Grant
    Filed: May 18, 2012
    Date of Patent: March 26, 2019
    Assignee: Nvidia Corporation
    Inventors: Rupert Brauch, Madhu Swarna, Ross Segelken, David Dunn, Ben Hertzberg
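    Sketch: a minimal Python analogue of the abstract above, in which a counter tracks how often the hardware decoder handles each code portion and, past a hotness threshold, the translator produces a native translation that is used instead. The threshold value and the names below are illustrative assumptions.

      HOT_THRESHOLD = 3               # hypothetical; real hardware tunes this

      decode_counts = {}              # code-portion address -> decoder uses
      translations = {}               # code-portion address -> native translation

      def translate(portion):
          return f"translated({hex(portion)})"   # stand-in for the translator

      def execute(portion):
          if portion in translations:
              return translations[portion]       # translation path, decoder bypassed
          decode_counts[portion] = decode_counts.get(portion, 0) + 1
          if decode_counts[portion] >= HOT_THRESHOLD:
              translations[portion] = translate(portion)
          return f"hw-decoded({hex(portion)})"   # hardware decoder path

      for _ in range(5):
          print(execute(0x4000))      # decoded three times, then the translation runs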
  • Publication number: 20190004961
    Abstract: An improved architectural means to address processor cache attacks based on speculative execution defines a new memory type that is both cacheable and inaccessible by speculation. Speculative execution cannot access and expose a memory location that is speculatively inaccessible. Such mechanisms can disqualify certain sensitive data from being exposed through speculative execution. Data which must be protected at a performance cost may be specifically marked. If the processor is told where secrets are stored in memory and is forbidden from speculating on those memory locations, then the processor will ensure the process trying to access those memory locations is privileged to access those locations before reading and caching them. Such a countermeasure is effective against attacks that use speculative execution to leak secrets from a processor cache.
    Type: Application
    Filed: June 28, 2018
    Publication date: January 3, 2019
    Inventors: Darrell D. Boggs, Ross Segelken, Mike Cornaby, Nick Fortino, Shailender Chaudhry, Denis Khartikov, Alok Mooley, Nathan Tuck, Gordon Vreugdenhil
  • Patent number: 10146545
    Abstract: Embodiments related to fetching instructions and alternate versions achieving the same functionality as the instructions from an instruction cache included in a microprocessor are provided. In one example, a method is provided, comprising, at an example microprocessor, fetching an instruction from an instruction cache. The example method also includes hashing an address for the instruction to determine whether an alternate version of the instruction which achieves the same functionality as the instruction exists. The example method further includes, if hashing results in a determination that such an alternate version exists, aborting fetching of the instruction and retrieving and executing the alternate version.
    Type: Grant
    Filed: March 13, 2012
    Date of Patent: December 4, 2018
    Assignee: Nvidia Corporation
    Inventors: Ross Segelken, Alex Klaiber, Nathan Tuck, David Dunn
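    Sketch: a Python illustration of the fetch redirection in the abstract above: the fetch address is hashed into a small table of alternate versions; on a hit, the original fetch is abandoned and the functionally equivalent alternate executes instead. The table size and hash function are illustrative assumptions, not the patented design.

      TABLE_BITS = 6
      alternates = [None] * (1 << TABLE_BITS)    # bucket -> (address, alternate)

      def bucket(addr):
          return (addr >> 2) & ((1 << TABLE_BITS) - 1)   # toy hash of the address

      def register_alternate(addr, code):
          alternates[bucket(addr)] = (addr, code)

      def fetch_and_execute(addr):
          entry = alternates[bucket(addr)]
          if entry is not None and entry[0] == addr:
              return f"executed alternate: {entry[1]}"   # abort fetch, run alternate
          return f"executed original instruction at {hex(addr)}"

      register_alternate(0x1004, "optimized sequence")
      print(fetch_and_execute(0x1000))   # no alternate -> normal fetch path
      print(fetch_and_execute(0x1004))   # hash hit -> alternate version runs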
  • Patent number: 10108424
    Abstract: The disclosure provides a micro-processing system operable in a hardware decoder mode and in a translation mode. In the hardware decoder mode, the hardware decoder receives and decodes non-native ISA instructions into native instructions for execution in a processing pipeline. In the translation mode, native translations of non-native ISA instructions are executed in the processing pipeline without using the hardware decoder. The system includes a code portion profile stored in hardware that changes dynamically in response to use of the hardware decoder to execute portions of non-native ISA code. The code portion profile is then used to dynamically form new native translations executable in the translation mode.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: October 23, 2018
    Assignee: Nvidia Corporation
    Inventors: Nathan Tuck, Alexander Klaiber, Ross Segelken, David Dunn, Ben Hertzberg, Rupert Brauch, Thomas Kistler, Guillermo J. Rozas, Madhu Swarna
  • Publication number: 20180260222
    Abstract: The disclosure provides a method and a system for identifying and replacing code translations that generate spurious fault events. In one embodiment the method includes executing a first set and a second set of native instructions, performing a third translation of a target instruction to form a third set of native instructions in response to a determination that a fault occurrence is attributed to a first translation, wherein the third set of native instructions is not the same as the second set of native instructions, and the third set of native instructions is not the same as the first set of native instructions, and executing the third set of native instructions.
    Type: Application
    Filed: March 8, 2018
    Publication date: September 13, 2018
    Inventors: Nathan Tuck, David Dunn, Ross Segelken, Madhu Swarna
  • Patent number: 9891972
    Abstract: Embodiments related to managing lazy runahead operations at a microprocessor are disclosed. For example, an embodiment of a method for operating a microprocessor described herein includes identifying a primary condition that triggers an unresolved state of the microprocessor. The example method also includes identifying a forcing condition that compels resolution of the unresolved state. The example method also includes, in response to identification of the forcing condition, causing the microprocessor to enter a runahead mode.
    Type: Grant
    Filed: March 27, 2017
    Date of Patent: February 13, 2018
    Assignee: NVIDIA CORPORATION
    Inventors: Magnus Ekman, Ross Segelken, Guillermo J. Rozas, Alexander Klaiber, James van Zoeren, Paul Serris, Brad Hoyt, Sridharan Ramakrishnan, Hens Vanderschoot, Darrell D. Boggs
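    Sketch: a small Python state machine for the lazy runahead policy in the abstract above: a primary condition leaves the pipeline in an unresolved state, but runahead is entered only when a forcing condition compels resolution. The example conditions (a cache miss, a retirement stall) are illustrative assumptions.

      class Pipeline:
          def __init__(self):
              self.unresolved = None
              self.mode = "normal"

          def primary_condition(self, event):
              # e.g. a long-latency load miss: note it, keep executing normally.
              self.unresolved = event

          def forcing_condition(self):
              # e.g. the missing load now blocks retirement: resolution is
              # compelled, so enter runahead rather than stall.
              if self.unresolved is not None:
                  self.mode = "runahead"

          def resolve(self):
              self.unresolved = None
              self.mode = "normal"

      p = Pipeline()
      p.primary_condition("L2 miss on load r3")
      print(p.mode)        # still "normal": runahead is deferred (lazy)
      p.forcing_condition()
      print(p.mode)        # "runahead": the forcing condition compels entry
      p.resolve()
      print(p.mode)        # back to "normal" once the miss resolves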
  • Patent number: 9880846
    Abstract: In one embodiment, a micro-processing system includes a hardware structure disposed on a processor core. The hardware structure includes a plurality of entries, each of which is associated with a portion of code and a translation of that code which can be executed to achieve substantially equivalent functionality. The hardware structure includes a redirection array that enables, when referenced, execution to be redirected from a portion of code to its counterpart translation. The entries enabling such redirection are maintained within or evicted from the hardware structure based on usage information for the entries.
    Type: Grant
    Filed: April 11, 2012
    Date of Patent: January 30, 2018
    Assignee: NVIDIA CORPORATION
    Inventors: Nathan Tuck, Ross Segelken
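    Sketch: a toy Python model of the redirection array in the abstract above: entries map a code address to the address of an equivalent translation, and entries are retained or evicted based on how recently they were used. The fixed capacity and least-recently-used policy are illustrative assumptions.

      from collections import OrderedDict

      class RedirectionArray:
          def __init__(self, capacity=4):
              self.capacity = capacity
              self.entries = OrderedDict()       # code address -> translation address

          def insert(self, code_addr, translation_addr):
              self.entries[code_addr] = translation_addr
              self.entries.move_to_end(code_addr)
              if len(self.entries) > self.capacity:
                  self.entries.popitem(last=False)   # evict the least recently used

          def lookup(self, code_addr):
              if code_addr in self.entries:
                  self.entries.move_to_end(code_addr)  # usage keeps the entry resident
                  return self.entries[code_addr]       # redirect execution here
              return None                              # fall back to the original code

      ra = RedirectionArray(capacity=2)
      ra.insert(0x1000, 0x9000)
      ra.insert(0x2000, 0x9100)
      ra.lookup(0x1000)              # touching 0x1000 keeps it resident
      ra.insert(0x3000, 0x9200)      # over capacity -> 0x2000 is evicted
      print(ra.lookup(0x2000))       # None
      print(hex(ra.lookup(0x1000)))  # 0x9000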
  • Patent number: 9875105
    Abstract: Embodiments related to re-dispatching an instruction selected for re-execution from a buffer upon a microprocessor re-entering a particular execution location after runahead are provided. In one example, a microprocessor is provided. The example microprocessor includes fetch logic, one or more execution mechanisms for executing a retrieved instruction provided by the fetch logic, and scheduler logic for scheduling the retrieved instruction for execution. The example scheduler logic includes a buffer for storing the retrieved instruction and one or more additional instructions, the scheduler logic being configured, upon the microprocessor re-entering at a particular execution location after runahead, to re-dispatch, from the buffer, an instruction that has been previously dispatched to one of the execution mechanisms.
    Type: Grant
    Filed: May 3, 2012
    Date of Patent: January 23, 2018
    Assignee: NVIDIA CORPORATION
    Inventors: Guillermo J. Rozas, Paul Serris, Brad Hoyt, Sridharan Ramakrishnan, Hens Vanderschoot, Ross Segelken, Darrell Boggs, Magnus Ekman
  • Patent number: 9823931
    Abstract: Various embodiments of microprocessors and methods of operating a microprocessor during runahead operation are disclosed herein. One example method of operating a microprocessor includes identifying a runahead-triggering event associated with a runahead-triggering instruction and, responsive to identification of the runahead-triggering event, entering runahead operation and inserting the runahead-triggering instruction along with one or more additional instructions in a queue. The example method also includes resuming non-runahead operation of the microprocessor in response to resolution of the runahead-triggering event and re-dispatching the runahead-triggering instruction along with the one or more additional instructions from the queue to the execution logic.
    Type: Grant
    Filed: December 28, 2012
    Date of Patent: November 21, 2017
    Assignee: NVIDIA CORPORATION
    Inventors: Guillermo J. Rozas, Alexander Klaiber, James van Zoeren, Paul Serris, Brad Hoyt, Sridharan Ramakrishnan, Hens Vanderschoot, Ross Segelken, Darrell D. Boggs, Magnus Ekman, Aravindh Baktha, David Dunn
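    Sketch: a Python outline of the queue-based replay in the abstract above: on a runahead-triggering event, the triggering instruction and the instructions behind it are parked in a queue; when the event resolves, non-runahead operation resumes and the queued instructions are re-dispatched to the execution logic. Class and method names are illustrative assumptions.

      from collections import deque

      class RunaheadCore:
          def __init__(self):
              self.replay_queue = deque()
              self.mode = "normal"

          def trigger_runahead(self, trigger, following):
              # Enter runahead and park the trigger plus the instructions after it.
              self.mode = "runahead"
              self.replay_queue.append(trigger)
              self.replay_queue.extend(following)

          def resolve(self):
              # The triggering event (e.g. a load miss) has resolved: leave
              # runahead and re-dispatch the queued instructions in order.
              self.mode = "normal"
              redispatched = list(self.replay_queue)
              self.replay_queue.clear()
              return redispatched

      core = RunaheadCore()
      core.trigger_runahead("LOAD r1,[r2]", ["ADD r3,r1,r4", "STORE [r5],r3"])
      print(core.mode)       # "runahead"
      print(core.resolve())  # instructions re-dispatched after resolution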
  • Patent number: 9740553
    Abstract: Embodiments related to managing potentially invalid results generated/obtained by a microprocessor during runahead are provided. In one example, a method for operating a microprocessor includes causing the microprocessor to enter runahead upon detection of a runahead event. The example method also includes, during runahead, determining that an operation associated with an instruction referencing a storage location would produce a potentially invalid result based on a value of an architectural poison bit associated with the storage location and performing a different operation in response.
    Type: Grant
    Filed: November 14, 2012
    Date of Patent: August 22, 2017
    Assignee: NVIDIA CORPORATION
    Inventors: Bruce Holmer, Guillermo J. Rozas, Alexander Klaiber, James van Zoeren, Paul Serris, Brad Hoyt, Sridharan Ramakrishnan, Hens Vanderschoot, Ross Segelken, Darrell D. Boggs, Magnus Ekman
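    Sketch: a minimal Python model of the poison-bit check in the abstract above: during runahead, a storage location holding a potentially invalid value carries an architectural poison bit, and an operation reading a poisoned source performs a different operation, propagating the poison instead of computing a bogus result. The register-file representation is an illustrative assumption.

      registers = {"r1": 0, "r2": 7, "r3": 0}
      poison = {"r1": True, "r2": False, "r3": False}   # r1 came from a missed load

      def runahead_add(dst, src_a, src_b):
          if poison[src_a] or poison[src_b]:
              # Potentially invalid input: skip the add and poison the result.
              poison[dst] = True
              return
          registers[dst] = registers[src_a] + registers[src_b]
          poison[dst] = False

      runahead_add("r3", "r1", "r2")
      print(registers["r3"], poison["r3"])   # 0 True: result marked invalid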
  • Publication number: 20170199778
    Abstract: Embodiments related to managing lazy runahead operations at a microprocessor are disclosed. For example, an embodiment of a method for operating a microprocessor described herein includes identifying a primary condition that triggers an unresolved state of the microprocessor. The example method also includes identifying a forcing condition that compels resolution of the unresolved state. The example method also includes, in response to identification of the forcing condition, causing the microprocessor to enter a runahead mode.
    Type: Application
    Filed: March 27, 2017
    Publication date: July 13, 2017
    Inventors: Magnus Ekman, Ross Segelken, Guillermo J. Rozas, Alexander Klaiber, James van Zoeren, Paul Serris, Brad Hoyt, Sridharan Ramakrishnan, Hens Vanderschoot, Darrell D. Boggs
  • Patent number: 9632976
    Abstract: Embodiments related to managing lazy runahead operations at a microprocessor are disclosed. For example, an embodiment of a method for operating a microprocessor described herein includes identifying a primary condition that triggers an unresolved state of the microprocessor. The example method also includes identifying a forcing condition that compels resolution of the unresolved state. The example method also includes, in response to identification of the forcing condition, causing the microprocessor to enter a runahead mode.
    Type: Grant
    Filed: December 7, 2012
    Date of Patent: April 25, 2017
    Assignee: NVIDIA CORPORATION
    Inventors: Guillermo J. Rozas, Alexander Klaiber, James van Zoeren, Paul Serris, Brad Hoyt, Sridharan Ramakrishnan, Hens Vanderschoot, Ross Segelken, Darrell D. Boggs, Magnus Ekman
  • Patent number: 9563432
    Abstract: Various embodiments relating to executing different types of instruction code in a micro-processing system are provided.
    Type: Grant
    Filed: April 19, 2013
    Date of Patent: February 7, 2017
    Assignee: Nvidia Corporation
    Inventors: Ross Segelken, Darrell D. Boggs, Shiaoli Mendyke
  • Patent number: 9552032
    Abstract: In one embodiment, a microprocessor is provided. The microprocessor includes instruction memory and a branch prediction unit. The branch prediction unit is configured to use information from the instruction memory to selectively power up the branch prediction unit from a powered-down state when fetched instruction data includes a branch instruction and maintain the branch prediction unit in the powered-down state when the fetched instruction data does not include a branch instruction in order to reduce power consumption of the microprocessor during instruction fetch operations.
    Type: Grant
    Filed: April 27, 2012
    Date of Patent: January 24, 2017
    Assignee: NVIDIA CORPORATION
    Inventors: Aneesh Aggarwal, Ross Segelken, Kevin Koschoreck, Paul Wasson
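    Sketch: a Python illustration of the power-gating decision in the abstract above: information kept alongside the instruction memory records whether a fetch block contains a branch, and the branch prediction unit is powered up only for blocks that do. The predecoded branch map is an illustrative assumption.

      fetch_block_has_branch = {0x100: False, 0x140: True, 0x180: False}

      class BranchPredictionUnit:
          def __init__(self):
              self.powered = False
              self.wakeups = 0

          def set_power(self, on):
              if on and not self.powered:
                  self.wakeups += 1
              self.powered = on

          def predict(self, addr):
              return f"prediction for block {hex(addr)}"

      bpu = BranchPredictionUnit()

      def fetch(addr):
          has_branch = fetch_block_has_branch.get(addr, True)  # unknown: assume branch
          bpu.set_power(has_branch)       # stay powered down for branch-free blocks
          return bpu.predict(addr) if has_branch else None

      for addr in (0x100, 0x140, 0x180):
          fetch(addr)
      print(bpu.wakeups)   # 1: the predictor woke only for the block with a branch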
  • Patent number: 9547358
    Abstract: In one embodiment, a microprocessor is provided. The microprocessor includes a branch prediction unit. The branch prediction unit is configured to track the presence of branches in instruction data that is fetched from an instruction memory after a redirection at a target of a predicted taken branch. The branch prediction unit is selectively powered up from a powered-down state when the fetched instruction data includes a branch instruction and is maintained in the powered-down state when the fetched instruction data does not include a branch instruction in order to reduce power consumption of the microprocessor during instruction fetch operations.
    Type: Grant
    Filed: April 27, 2012
    Date of Patent: January 17, 2017
    Assignee: NVIDIA CORPORATION
    Inventors: Aneesh Aggarwal, Ross Segelken, Paul Wasson
  • Patent number: 9396117
    Abstract: In one embodiment, a method for controlling an instruction cache including a least-recently-used bits array, a tag array, and a data array, includes looking up, in the least-recently-used bits array, least-recently-used bits for each of a plurality of cacheline sets in the instruction cache, determining a most-recently-used way in a designated cacheline set of the plurality of cacheline sets based on the least-recently-used bits for the designated cacheline set, looking up, in the tag array, tags for one or more ways in the designated cacheline set, looking up, in the data array, data stored in the most-recently-used way in the designated cacheline set, and if there is a cache hit in the most-recently-used way, retrieving the data stored in the most-recently-used way from the data array.
    Type: Grant
    Filed: January 9, 2012
    Date of Patent: July 19, 2016
    Assignee: NVIDIA CORPORATION
    Inventors: Aneesh Aggarwal, Ross Segelken, Kevin Koschoreck
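    Sketch: a toy Python model of the most-recently-used-way lookup in the abstract above: the per-set LRU bits are read first, the most-recently-used way of the designated set is identified, and only that way's data is read, so a hit in the MRU way needs a single data-array access. Two ways per set and the tiny arrays are illustrative assumptions.

      class ICache:
          def __init__(self, num_sets=4):
              self.tags = [[None, None] for _ in range(num_sets)]  # tag array, 2 ways
              self.data = [[None, None] for _ in range(num_sets)]  # data array, 2 ways
              self.mru = [0] * num_sets                            # LRU bits per set

          def fill(self, set_idx, way, tag, line):
              self.tags[set_idx][way] = tag
              self.data[set_idx][way] = line
              self.mru[set_idx] = way

          def lookup(self, set_idx, tag):
              way = self.mru[set_idx]               # step 1: consult the LRU bits
              if self.tags[set_idx][way] == tag:    # step 2: tag check, MRU way first
                  return self.data[set_idx][way]    # step 3: single data-array read
              other = 1 - way
              if self.tags[set_idx][other] == tag:  # fall back to the other way
                  self.mru[set_idx] = other
                  return self.data[set_idx][other]
              return None                           # cache miss

      ic = ICache()
      ic.fill(2, way=1, tag=0xAB, line="bytes of cache line A")
      print(ic.lookup(2, 0xAB))   # hit in the MRU way: only one data way was read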
  • Publication number: 20140317382
    Abstract: Various embodiments relating to executing different types of instruction code in a micro-processing system are provided.
    Type: Application
    Filed: April 19, 2013
    Publication date: October 23, 2014
    Applicant: NVIDIA Corporation
    Inventors: Ross Segelken, Darrell D. Boggs, Shiaoli Mendyke
  • Publication number: 20140281392
    Abstract: The disclosure provides a micro-processing system operable in a hardware decoder mode and in a translation mode. In the hardware decoder mode, the hardware decoder receives and decodes non-native ISA instructions into native instructions for execution in a processing pipeline. In the translation mode, native translations of non-native ISA instructions are executed in the processing pipeline without using the hardware decoder. The system includes a code portion profile stored in hardware that changes dynamically in response to use of the hardware decoder to execute portions of non-native ISA code. The code portion profile is then used to dynamically form new native translations executable in the translation mode.
    Type: Application
    Filed: March 14, 2013
    Publication date: September 18, 2014
    Applicant: NVIDIA CORPORATION
    Inventors: Nathan Tuck, Alexander Klaiber, Ross Segelken, David Dunn, Ben Hertzberg, Rupert Brauch, Thomas Kistler, Guillermo J. Rozas, Madhu Swarna