Patents by Inventor Karin Strauss

Karin Strauss has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20120159057
    Abstract: Techniques are described for controlling availability of memory. As memory write operations are processed, the contents of memory targeted by the write operations are read and compared to the data to be written. The availability of the memory for subsequent write operations is controlled based on the outcomes of the comparing. The number of write operations executed concurrently may vary according to the comparing. In one implementation, a pool of tokens is maintained based on the comparing. The tokens represent units of power. When write operations require more power, for example when they will alter the values of more cells in PCM memory, they draw (and eventually return) more tokens. The token pool can act as a memory-availability mechanism in that tokens must be obtained for a write operation to be executed. When and how many tokens are reserved or recycled can vary according to implementation.
    Type: Application
    Filed: December 16, 2010
    Publication date: June 21, 2012
    Applicant: Microsoft Corporation
    Inventors: Gabriel H. Loh, Douglas Burger, Karin Strauss, Timothy Sherwood
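    The token-pool idea in this abstract can be illustrated with a minimal, single-threaded sketch: a write's power cost is the number of cells it would actually flip, and the write is admitted only if that many tokens are free. The pool size, line width, and all names below are assumptions for illustration, not details from the patent.

    ```c
    #include <stdio.h>
    #include <string.h>

    #define POOL_CAPACITY 64   /* total power tokens available (assumed) */
    #define LINE_BYTES    8    /* width of one memory line (assumed) */

    static int tokens_free = POOL_CAPACITY;

    /* Count how many bits actually change; only changed PCM cells cost power. */
    static int bits_to_flip(const unsigned char *old, const unsigned char *new_,
                            size_t n) {
        int flips = 0;
        for (size_t i = 0; i < n; i++) {
            unsigned char diff = old[i] ^ new_[i];
            while (diff) { flips += diff & 1u; diff >>= 1; }
        }
        return flips;
    }

    /* Returns 1 if the write was admitted, 0 if it must wait for tokens. */
    static int try_write(unsigned char *line, const unsigned char *data) {
        int cost = bits_to_flip(line, data, LINE_BYTES); /* read-before-write compare */
        if (cost > tokens_free)
            return 0;                 /* not enough power budget right now */
        tokens_free -= cost;          /* draw tokens */
        memcpy(line, data, LINE_BYTES);
        tokens_free += cost;          /* return tokens once the write completes */
        return 1;
    }

    int main(void) {
        unsigned char line[LINE_BYTES] = {0};
        unsigned char data[LINE_BYTES] = {0xFF, 0x0F, 0, 0, 0, 0, 0, 0};
        printf("admitted: %d, tokens now: %d\n", try_write(line, data), tokens_free);
        return 0;
    }
    ```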
  • Publication number: 20120151252
    Abstract: Methods of memory management are described which can accommodate non-maskable failures in pages of physical memory. In an embodiment, when an impending non-maskable failure in a page of memory is identified, a pristine page of physical memory is used to replace the page containing the impending failure and memory mappings are updated to remap virtual pages from the failed page to the pristine page. When a new page of virtual memory is then allocated by a process, the failed page may be reused if the process identifies that it can accommodate failures and the process is provided with location information for impending failures. In another embodiment, a process may expose information on failure-tolerant regions of virtual address space such that a physical page of memory containing failures only in failure-tolerant regions may be used to store the data instead of using a pristine page.
    Type: Application
    Filed: December 10, 2010
    Publication date: June 14, 2012
    Applicant: Microsoft Corporation
    Inventors: Timothy Harris, Karin Strauss, Orion Hodson, Dushyanth Narayanan
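    A hypothetical sketch of the first embodiment: when a physical page is predicted to fail, its contents are copied to a pristine page and every virtual page mapped to the failing page is remapped. Table layout, sizes, and function names are illustrative assumptions, not the patent's.

    ```c
    #include <stdio.h>
    #include <string.h>

    #define NUM_VPAGES 4
    #define NUM_PPAGES 8
    #define PAGE_SIZE  16

    static unsigned char phys_mem[NUM_PPAGES][PAGE_SIZE];
    static int page_table[NUM_VPAGES];                       /* vpage -> ppage */
    static int page_free[NUM_PPAGES] = {0,0,0,0,1,1,1,1};    /* 1 = pristine/free */

    /* Called when hardware reports an impending failure in physical page `bad`. */
    static int retire_physical_page(int bad) {
        for (int p = 0; p < NUM_PPAGES; p++) {
            if (!page_free[p]) continue;
            /* copy contents to the pristine page before the failure manifests */
            memcpy(phys_mem[p], phys_mem[bad], PAGE_SIZE);
            page_free[p] = 0;
            /* remap every virtual page that pointed at the failing page */
            for (int v = 0; v < NUM_VPAGES; v++)
                if (page_table[v] == bad) page_table[v] = p;
            return p;    /* the failing page could later be handed to a
                            failure-tolerant process, per the abstract */
        }
        return -1;       /* no pristine page available */
    }

    int main(void) {
        page_table[0] = 2;
        int replacement = retire_physical_page(2);
        printf("vpage 0 now maps to ppage %d\n", page_table[0]);
        return replacement >= 0 ? 0 : 1;
    }
    ```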
  • Patent number: 8201024
    Abstract: Embodiments are described for managing memory faults. An example system can include a memory controller module to manage memory cells and report memory faults. An error buffer module can store memory fault information received from the memory controller. A notification module can be in communication with the error buffer module. The notification module may generate a notification of a memory fault in a memory access operation. A system software module can provide services and manage executing programs on a processor. In addition, the system software module can receive the notifications of the memory fault for the memory access operation. A notification handler may be activated by an interrupt when the notification of the memory fault in the memory access operation is received.
    Type: Grant
    Filed: May 17, 2010
    Date of Patent: June 12, 2012
    Assignee: Microsoft Corporation
    Inventors: Doug Burger, James Larus, Karin Strauss, Jeremy Condit
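    The error-buffer and notification flow described above might look roughly like the following: the controller side appends fault records to a buffer, and a handler on the system-software side drains it when the notification fires. The record fields, buffer size, and the software stand-in for the interrupt are assumptions.

    ```c
    #include <stdio.h>

    #define ERR_BUF_SLOTS 8

    struct fault_record { unsigned long addr; int is_write; };

    static struct fault_record err_buf[ERR_BUF_SLOTS];
    static int err_head = 0, err_count = 0;

    /* Memory-controller side: record a fault and raise a notification. */
    static void report_fault(unsigned long addr, int is_write) {
        if (err_count < ERR_BUF_SLOTS) {
            err_buf[(err_head + err_count) % ERR_BUF_SLOTS] =
                (struct fault_record){ addr, is_write };
            err_count++;
        }
    }

    /* System-software side: handler activated when the notification arrives. */
    static void fault_notification_handler(void) {
        while (err_count > 0) {
            struct fault_record r = err_buf[err_head];
            err_head = (err_head + 1) % ERR_BUF_SLOTS;
            err_count--;
            printf("fault at 0x%lx during %s access\n",
                   r.addr, r.is_write ? "write" : "read");
        }
    }

    int main(void) {
        report_fault(0x1000, 1);
        report_fault(0x2040, 0);
        fault_notification_handler();   /* interrupt-driven in a real system */
        return 0;
    }
    ```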
  • Publication number: 20120124442
    Abstract: Techniques involving failure management of storage devices are described. One representative technique includes encoding data to enable it to be stored in a storage block that includes at least one storage failure. The data is encoded such that it traverses the storage failures when stored in the storage block. When it is determined that a storage access request has requested the data stored in a storage block having such failures, the data is decoded to restore it to its original form.
    Type: Application
    Filed: November 11, 2010
    Publication date: May 17, 2012
    Applicant: Microsoft Corporation
    Inventor: Karin Strauss
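    One speculative reading of "traversing" storage failures is to pack payload bits only into the working bit positions of a block and unpack them on read. The encoding below is an assumption made for illustration only; the patent may describe a different scheme.

    ```c
    #include <stdio.h>
    #include <stdint.h>

    /* Encode: place the low bits of `data` into positions of the block whose
     * corresponding bit in `fail_mask` is 0 (working cells). */
    static uint32_t encode(uint32_t data, uint32_t fail_mask) {
        uint32_t block = 0;
        for (int pos = 0; pos < 32; pos++) {
            if (fail_mask & (1u << pos)) continue;     /* skip failed cell */
            block |= (data & 1u) << pos;
            data >>= 1;
        }
        return block;
    }

    /* Decode: gather bits back out of the working positions, in the same order. */
    static uint32_t decode(uint32_t block, uint32_t fail_mask) {
        uint32_t data = 0;
        int out = 0;
        for (int pos = 0; pos < 32; pos++) {
            if (fail_mask & (1u << pos)) continue;
            data |= ((block >> pos) & 1u) << out++;
        }
        return data;
    }

    int main(void) {
        uint32_t fail_mask = 0x00000022;   /* bits 1 and 5 are bad (assumed) */
        uint32_t stored = encode(0xABC, fail_mask);
        printf("round trip: 0x%X\n", (unsigned)decode(stored, fail_mask));
        return 0;
    }
    ```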
  • Publication number: 20120110278
    Abstract: Inoperable phase change memory (PCM) blocks in a PCM device are remapped to one or more operable PCM blocks, e.g. by maintaining an inoperable block table that includes an entry for each inoperable PCM block and an address of a remapped PCM block. Alternatively, the PCM blocks may be remapped by storing the address of the remapped block in the block itself, and setting a remapping bit that indicates the block has been remapped. Where the remapping is performed by a processor, an inoperable block bit may be set in a translation lookaside buffer that indicates whether a virtual memory page is associated with an inoperable or remapped PCM block. When a request to access a virtual memory page is received, the processor references the inoperable block bit associated with the virtual memory page to determine whether to check for remapped PCM blocks in the inoperable block table.
    Type: Application
    Filed: October 29, 2010
    Publication date: May 3, 2012
    Applicant: Microsoft Corporation
    Inventors: John D. Davis, Karin Strauss, Douglas C. Burger
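    A minimal sketch of the inoperable-block-table lookup gated by a per-page bit, as the abstract describes. The table layout, sizes, and the array standing in for the TLB bit are assumptions.

    ```c
    #include <stdio.h>

    #define TABLE_SLOTS 4

    struct remap_entry { unsigned long bad_block; unsigned long new_block; int valid; };
    static struct remap_entry remap_table[TABLE_SLOTS];

    /* One bit per virtual page: set if any block in the page was remapped. */
    static int tlb_inoperable_bit[16];

    static unsigned long translate_block(unsigned long block, int vpage) {
        if (!tlb_inoperable_bit[vpage])
            return block;                      /* fast path: no table lookup needed */
        for (int i = 0; i < TABLE_SLOTS; i++)
            if (remap_table[i].valid && remap_table[i].bad_block == block)
                return remap_table[i].new_block;
        return block;
    }

    int main(void) {
        remap_table[0] = (struct remap_entry){ 0x40, 0x7F, 1 };
        tlb_inoperable_bit[3] = 1;             /* vpage 3 contains a remapped block */
        printf("block 0x40 -> 0x%lx\n", translate_block(0x40, 3));
        return 0;
    }
    ```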
  • Patent number: 8166255
    Abstract: A method for performing a transaction including a transaction head and a transaction tail includes executing the transaction head, including executing at least one memory reserve instruction to reserve a transactional memory location that is accessed in the transaction, and executing the transaction tail, wherein the transaction cannot be aborted due to a data race on that transactional memory location while executing the transaction tail, and wherein data of memory write operations to the transactional memory location is committed without being buffered.
    Type: Grant
    Filed: January 19, 2011
    Date of Patent: April 24, 2012
    Assignee: International Business Machines Corporation
    Inventors: Xiaowei Shen, Karin Strauss
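    The head/tail split can be modeled, very loosely, as below: the head reserves the location (and could still abort on a conflict), while writes in the tail go straight to memory, unbuffered, because commit is already assured. The reservation flag and conflict handling are assumptions of this single-threaded sketch, not the patent's mechanism.

    ```c
    #include <stdio.h>

    static int shared_counter = 0;
    static int reserved = 0;           /* models the memory reserve instruction */

    static int transaction_head(void) {
        reserved = 1;                  /* reserve the location used by the tail */
        /* ... speculative work; a data race here could still abort ... */
        return 1;                      /* head completed, commit is now assured */
    }

    static void transaction_tail(void) {
        /* No buffering: the write is applied directly, because the reservation
         * guarantees the transaction can no longer be aborted. */
        shared_counter += 1;
        reserved = 0;                  /* release the reservation */
    }

    int main(void) {
        if (transaction_head())
            transaction_tail();
        printf("counter = %d\n", shared_counter);
        return 0;
    }
    ```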
  • Publication number: 20110296258
    Abstract: Architecture that implements error correcting pointers (ECPs) within a memory row, which point to the addresses of failed memory cells, each of which is paired with a replacement cell to be substituted for the failed cell. If two error correcting pointers in the array point to the same cell, a precedence rule dictates that the array entry with the higher index (the entry created later) takes precedence. To count the number of error correcting pointers in use, a null pointer address can be employed to indicate that a pointer is inactive, an activation bit can be added, and/or a counter can be maintained that represents the number of active error correcting pointers. Mechanisms are provided for wear-leveling within the error correction structure, or for pairing this scheme with single-error correcting bits for instances where transient failures may occur. The architecture also employs pointers to correct errors in volatile and non-volatile memories.
    Type: Application
    Filed: May 27, 2010
    Publication date: December 1, 2011
    Applicant: Microsoft Corporation
    Inventors: Stuart Schechter, Karin Strauss, Gabriel Loh, Douglas C. Burger
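    A sketch of the ECP structure: each active entry names a failed bit position in the row and carries its replacement value, and applying entries in index order lets a later (higher-index) entry override an earlier one pointing at the same cell. Entry count and field layout are assumptions.

    ```c
    #include <stdio.h>
    #include <stdint.h>

    #define ECP_ENTRIES 4

    struct ecp { int active; int cell; int replacement; };

    struct row {
        uint64_t data;                  /* raw cells, some possibly failed */
        struct ecp ecp[ECP_ENTRIES];
        int next;                       /* number of ECP entries in use */
    };

    /* Apply pointers in index order so that later entries take precedence. */
    static uint64_t read_row(const struct row *r) {
        uint64_t v = r->data;
        for (int i = 0; i < ECP_ENTRIES; i++) {
            if (!r->ecp[i].active) continue;
            v &= ~(1ull << r->ecp[i].cell);
            v |= (uint64_t)r->ecp[i].replacement << r->ecp[i].cell;
        }
        return v;
    }

    /* Record a newly failed cell together with the value it should hold. */
    static int add_ecp(struct row *r, int cell, int replacement) {
        if (r->next >= ECP_ENTRIES) return -1;       /* row is out of spare entries */
        r->ecp[r->next++] = (struct ecp){ 1, cell, replacement };
        return 0;
    }

    int main(void) {
        struct row r = { .data = 0x0F };
        add_ecp(&r, 2, 0);              /* cell 2 is stuck at 1; it should read 0 */
        printf("corrected row: 0x%llx\n", (unsigned long long)read_row(&r));
        return 0;
    }
    ```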
  • Publication number: 20110283135
    Abstract: Embodiments are described for managing memory faults. An example system can include a memory controller module to manage memory cells and report memory faults. An error buffer module can store memory fault information received from the memory controller. A notification module can be in communication with the error buffer module. The notification module may generate a notification of a memory fault in a memory access operation. A system software module can provide services and manage executing programs on a processor. In addition, the system software module can receive the notifications of the memory fault for the memory access operation. A notification handler may be activated by an interrupt when the notification of the memory fault in the memory access operation is received.
    Type: Application
    Filed: May 17, 2010
    Publication date: November 17, 2011
    Applicant: Microsoft Corporation
    Inventors: Doug Burger, Jim Larus, Karin Strauss, Jeremy Condit
  • Patent number: 7945741
    Abstract: A computer readable medium is provided embodying instructions executable by a processor to perform a method for performing a transaction including a transaction head and a transaction tail. The method includes executing the transaction head, including executing at least one memory reserve instruction to reserve a transactional memory location that is accessed in the transaction, and executing the transaction tail, wherein the transaction cannot be aborted due to a data race on that transactional memory location while executing the transaction tail, and wherein data of memory write operations to the transactional memory location is committed without being buffered.
    Type: Grant
    Filed: July 9, 2007
    Date of Patent: May 17, 2011
    Assignee: International Business Machines Corporation
    Inventors: Xiaowei Shen, Karin Strauss
  • Publication number: 20110113203
    Abstract: A method for performing a transaction including a transaction head and a transaction tail includes executing the transaction head, including executing at least one memory reserve instruction to reserve a transactional memory location that is accessed in the transaction, and executing the transaction tail, wherein the transaction cannot be aborted due to a data race on that transactional memory location while executing the transaction tail, and wherein data of memory write operations to the transactional memory location is committed without being buffered.
    Type: Application
    Filed: January 19, 2011
    Publication date: May 12, 2011
    Applicant: International Business Machines Corporation
    Inventors: Xiaowei Shen, Karin Strauss
  • Publication number: 20110040913
    Abstract: A method includes accepting, for a first processor core of a plurality of processor cores in a multi-core system, a user-level interrupt indicated by a user-level interrupt message when an interrupt domain of an application thread executing on the first processor core and a recipient identifier of the application thread executing on the first processor core match corresponding fields in the user-level interrupt message.
    Type: Application
    Filed: December 8, 2009
    Publication date: February 17, 2011
    Inventors: Jaewoong Chung, Karin Strauss
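    The acceptance check described above reduces, in the simplest possible model, to comparing the message's domain and recipient fields against the thread the core is running. Field and struct names below are assumptions.

    ```c
    #include <stdio.h>

    struct uli_msg    { int domain; int recipient; int vector; };
    struct thread_ctx { int domain; int recipient; };

    /* Accept only when both identifiers in the message match the running thread. */
    static int accept_user_interrupt(const struct thread_ctx *running,
                                     const struct uli_msg *msg) {
        return running->domain == msg->domain &&
               running->recipient == msg->recipient;
    }

    int main(void) {
        struct thread_ctx t = { .domain = 7, .recipient = 3 };
        struct uli_msg    m = { .domain = 7, .recipient = 3, .vector = 42 };
        printf("accepted: %d\n", accept_user_interrupt(&t, &m));
        return 0;
    }
    ```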
  • Publication number: 20110040914
    Abstract: A method includes recording a user-level interrupt as undeliverable in a mailbox at least partially based on an interrupt domain identifier and an interrupt recipient identifier included in a user-level interrupt message associated with the user-level interrupt. The recording is at least partially based on an indication that the user-level interrupt is undeliverable to a recipient application thread executing on a processor core of a plurality of processor cores in a multi-core system.
    Type: Application
    Filed: December 8, 2009
    Publication date: February 17, 2011
    Inventors: Karin Strauss, Jaewoong Chung
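    When no core is running the intended recipient, the abstract describes recording the interrupt in a mailbox keyed by the same identifiers. A toy version, with an assumed mailbox layout, could look like this:

    ```c
    #include <stdio.h>

    #define MAILBOX_SLOTS 8

    struct uli_msg { int domain; int recipient; int vector; };

    static struct uli_msg mailbox[MAILBOX_SLOTS];
    static int mailbox_used = 0;

    /* Called when the interrupt is undeliverable to its recipient thread. */
    static int record_undeliverable(const struct uli_msg *msg) {
        if (mailbox_used >= MAILBOX_SLOTS) return -1;  /* mailbox full */
        mailbox[mailbox_used++] = *msg;
        return 0;
    }

    int main(void) {
        struct uli_msg m = { .domain = 7, .recipient = 3, .vector = 42 };
        record_undeliverable(&m);
        printf("pending interrupts for domain %d, recipient %d: %d\n",
               mailbox[0].domain, mailbox[0].recipient, mailbox_used);
        return 0;
    }
    ```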
  • Publication number: 20110040915
    Abstract: A method includes delivering a user-level interrupt message indicative of a user-level interrupt to one or more recipients according to a user-level interrupt delivery configuration selected from a plurality of user-level interrupt delivery configurations. The one or more recipients correspond to one or more application threads executing on one or more processor cores of a plurality of processor cores in a multi-core system. A method includes generating an indicator of a user-level interrupt being undeliverable to one or more intended recipients of a user-level interrupt message according to a failed delivery notification mode configuration. The user-level interrupt may be issued by an application thread executing on a first processor core of a plurality of processor cores in a multi-core system.
    Type: Application
    Filed: December 8, 2009
    Publication date: February 17, 2011
    Inventors: Karin Strauss, Jaewoong Chung
  • Patent number: 7856535
    Abstract: In a network-based cache-coherent multiprocessor system, when a node receives a cache request, the node can perform an intra-node cache snoop operation and forward the cache request to a subsequent node in the network. A snoop-and-forward prediction mechanism can be used to predict whether lazy forwarding or eager forwarding is used in processing the incoming cache request. With lazy forwarding, the node cannot forward the cache request to the subsequent node until the corresponding intra-node cache snoop operation is completed. With eager forwarding, the node can forward the cache request to the subsequent node immediately, before the corresponding intra-node cache snoop operation is completed. Furthermore, the snoop-and-forward prediction mechanism can be enhanced seamlessly with an appropriate snoop filter to avoid unnecessary intra-node cache snoop operations.
    Type: Grant
    Filed: July 21, 2008
    Date of Patent: December 21, 2010
    Assignee: International Business Machines Corporation
    Inventors: Xiaowei Shen, Karin Strauss
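    The lazy vs. eager forwarding choice can be modeled as a predictor deciding whether a node forwards the request before or after its own intra-node snoop completes. The predictor below (a per-node hit counter) is an assumption made only to give the sketch something to predict with.

    ```c
    #include <stdio.h>

    static int local_hits;   /* recent requests satisfied by this node's caches */

    /* Predict eager forwarding when this node rarely owns the requested data. */
    static int predict_eager(void) { return local_hits < 2; }

    static int snoop_local_caches(unsigned long addr) {
        (void)addr;
        return 0;             /* pretend the local snoop missed */
    }

    static void forward_to_next_node(unsigned long addr) {
        printf("forwarding request 0x%lx to next node\n", addr);
    }

    static void handle_cache_request(unsigned long addr) {
        if (predict_eager()) {
            forward_to_next_node(addr);            /* eager: forward immediately */
            if (snoop_local_caches(addr)) local_hits++;
        } else {
            if (snoop_local_caches(addr)) local_hits++;
            forward_to_next_node(addr);            /* lazy: forward after the snoop */
        }
    }

    int main(void) {
        handle_cache_request(0x80);
        return 0;
    }
    ```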
  • Patent number: 7568073
    Abstract: A computer-implemented method for enforcing cache coherence includes multicasting a cache request for a memory address from a requesting node without an ordering restriction over a network, collecting, by the requesting node, a combined snoop response of the cache request over a unidirectional ring embedded in the network, and enforcing cache coherence for the memory address at the requesting node, according to the combined snoop response.
    Type: Grant
    Filed: November 6, 2006
    Date of Patent: July 28, 2009
    Assignee: International Business Machines Corporation
    Inventors: Xiaowei Shen, Karin Strauss
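    The protocol shape described above, reduced to a toy: the request reaches every node by unordered multicast, and each node's snoop response is folded into a combined response as it travels the unidirectional ring back to the requester. The response encoding (an OR of shared/modified flags) is an assumption.

    ```c
    #include <stdio.h>

    #define NUM_NODES 4

    struct snoop_resp { int shared; int modified; };

    /* Per-node snoop: node 2 pretends to hold the line in shared state. */
    static struct snoop_resp snoop_node(int node, unsigned long addr) {
        (void)addr;
        return (struct snoop_resp){ .shared = (node == 2), .modified = 0 };
    }

    static struct snoop_resp collect_over_ring(int requester, unsigned long addr) {
        struct snoop_resp combined = {0, 0};
        /* walk the ring once, starting after the requester, combining responses */
        for (int hop = 1; hop < NUM_NODES; hop++) {
            int node = (requester + hop) % NUM_NODES;
            struct snoop_resp r = snoop_node(node, addr);  /* request was multicast */
            combined.shared   |= r.shared;
            combined.modified |= r.modified;
        }
        return combined;
    }

    int main(void) {
        struct snoop_resp r = collect_over_ring(0, 0x1000);
        printf("combined: shared=%d modified=%d\n", r.shared, r.modified);
        return 0;
    }
    ```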
  • Publication number: 20090089512
    Abstract: In a network-based cache-coherent multiprocessor system, when a node receives a cache request, the node can perform an intra-node cache snoop operation and forward the cache request to a subsequent node in the network. A snoop-and-forward prediction mechanism can be used to predict whether lazy forwarding or eager forwarding is used in processing the incoming cache request. With lazy forwarding, the node cannot forward the cache request to the subsequent node until the corresponding intra-node cache snoop operation is completed. With eager forwarding, the node can forward the cache request to the subsequent node immediately, before the corresponding intra-node cache snoop operation is completed. Furthermore, the snoop-and-forward prediction mechanism can be enhanced seamlessly with an appropriate snoop filter to avoid unnecessary intra-node cache snoop operations.
    Type: Application
    Filed: July 21, 2008
    Publication date: April 2, 2009
    Inventors: Xiaowei Shen, Karin Strauss
  • Publication number: 20090019209
    Abstract: A computer readable medium is provided embodying instructions executable by a processor to perform a method for performing a transaction including a transaction head and a transaction tail. The method includes executing the transaction head, including executing at least one memory reserve instruction to reserve a transactional memory location that is accessed in the transaction, and executing the transaction tail, wherein the transaction cannot be aborted due to a data race on that transactional memory location while executing the transaction tail, and wherein data of memory write operations to the transactional memory location is committed without being buffered.
    Type: Application
    Filed: July 9, 2007
    Publication date: January 15, 2009
    Inventors: Xiaowei Shen, Karin Strauss
  • Patent number: 7437520
    Abstract: In a network-based cache-coherent multiprocessor system, when a node receives a cache request, the node can perform an intra-node cache snoop operation and forward the cache request to a subsequent node in the network. A snoop-and-forward prediction mechanism can be used to predict whether lazy forwarding or eager forwarding is used in processing the incoming cache request. With lazy forwarding, the node cannot forward the cache request to the subsequent node until the corresponding intra-node cache snoop operation is completed. With eager forwarding, the node can forward the cache request to the subsequent node immediately, before the corresponding intra-node cache snoop operation is completed. Furthermore, the snoop-and-forward prediction mechanism can be enhanced seamlessly with an appropriate snoop filter to avoid unnecessary intra-node cache snoop operations.
    Type: Grant
    Filed: July 11, 2005
    Date of Patent: October 14, 2008
    Assignee: International Business Machines Corporation
    Inventors: Xiaowei Shen, Karin Strauss
  • Publication number: 20080109609
    Abstract: A computer-implemented method for enforcing cache coherence includes multicasting a cache request for a memory address from a requesting node without an ordering restriction over a network, collecting, by the requesting node, a combined snoop response of the cache request over a unidirectional ring embedded in the network, and enforcing cache coherence for the memory address at the requesting node, according to the combined snoop response.
    Type: Application
    Filed: November 6, 2006
    Publication date: May 8, 2008
    Inventors: Xiaowei Shen, Karin Strauss
  • Publication number: 20070011408
    Abstract: In a network-based cache-coherent multiprocessor system, when a node receives a cache request, the node can perform an intra-node cache snoop operation and forward the cache request to a subsequent node in the network. A snoop-and-forward prediction mechanism can be used to predict whether lazy forwarding or eager forwarding is used in processing the incoming cache request. With lazy forwarding, the node cannot forward the cache request to the subsequent node until the corresponding intra-node cache snoop operation is completed. With eager forwarding, the node can forward the cache request to the subsequent node immediately, before the corresponding intra-node cache snoop operation is completed. Furthermore, the snoop-and-forward prediction mechanism can be enhanced seamlessly with an appropriate snoop filter to avoid unnecessary intra-node cache snoop operations.
    Type: Application
    Filed: July 11, 2005
    Publication date: January 11, 2007
    Inventors: Xiaowei Shen, Karin Strauss