Patents by Inventor Karin Strauss
Karin Strauss has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20120159057
Abstract: Techniques are described for controlling availability of memory. As memory write operations are processed, the contents of memory targeted by the write operations are read and compared to the data to be written. The availability of the memory for subsequent write operations is controlled based on the outcomes of the comparing. The number of concurrent write operations may vary according to the comparing. In one implementation, a pool of tokens is maintained based on the comparing. The tokens represent units of power. When write operations require more power, for example when they will alter the values of more cells in PCM memory, they draw (and eventually return) more tokens. The token pool can act as a memory-availability mechanism in that tokens must be obtained for a write operation to be executed. When and how many tokens are reserved or recycled can vary according to implementation.
Type: Application
Filed: December 16, 2010
Publication date: June 21, 2012
Applicant: MICROSOFT CORPORATION
Inventors: Gabriel H. Loh, Douglas Burger, Karin Strauss, Timothy Sherwood
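As an illustration of the token-pool idea described above, here is a minimal sketch; all class and method names are invented for this example, not taken from the patent. A write first compares the old and new contents, then acquires one token per cell it would actually change, and returns the tokens when it completes.

```python
class WriteTokenPool:
    """Illustrative token pool gating PCM writes by a power budget."""

    def __init__(self, capacity):
        self.capacity = capacity     # total power units available
        self.available = capacity

    def tokens_needed(self, old_bits, new_bits):
        # Read-before-write: only cells whose value changes cost power.
        return sum(1 for o, n in zip(old_bits, new_bits) if o != n)

    def try_acquire(self, n):
        if n <= self.available:
            self.available -= n
            return True
        return False                 # write must wait for tokens

    def release(self, n):
        self.available = min(self.capacity, self.available + n)


pool = WriteTokenPool(capacity=4)
old, new = [0, 1, 1, 0], [1, 1, 0, 0]   # two cells flip
cost = pool.tokens_needed(old, new)
granted = pool.try_acquire(cost)
```

A real controller would block or queue the write when `try_acquire` fails, and release the tokens once the write drains.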
-
Publication number: 20120151252
Abstract: Methods of memory management are described which can accommodate non-maskable failures in pages of physical memory. In an embodiment, when an impending non-maskable failure in a page of memory is identified, a pristine page of physical memory is used to replace the page containing the impending failure and memory mappings are updated to remap virtual pages from the failed page to the pristine page. When a new page of virtual memory is then allocated by a process, the failed page may be reused if the process identifies that it can accommodate failures and the process is provided with location information for impending failures. In another embodiment, a process may expose information on failure-tolerant regions of virtual address space such that a physical page of memory containing failures only in failure-tolerant regions may be used to store the data instead of using a pristine page.
Type: Application
Filed: December 10, 2010
Publication date: June 14, 2012
Applicant: Microsoft Corporation
Inventors: Timothy Harris, Karin Strauss, Orion Hodson, Dushyanth Narayanan
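A minimal sketch of the allocation policy described above, with all names and the page representation invented for illustration: a failure-tolerant process may receive a page with known impending failures along with the failure locations, while other processes receive pristine pages.

```python
def allocate_page(pristine, failed, tolerates_failures):
    """Illustrative policy: failure-tolerant processes may reuse pages
    with known impending failures; others always get pristine pages."""
    if tolerates_failures and failed:
        page = failed.pop()
        # The caller is told which offsets to avoid within the page.
        return page["frame"], page["bad_offsets"]
    return pristine.pop(), []


pristine = [100, 101]                              # frame numbers
failed = [{"frame": 50, "bad_offsets": [12]}]      # frame with a known failure
frame_a, bad_a = allocate_page(pristine, failed, tolerates_failures=True)
frame_b, bad_b = allocate_page(pristine, failed, tolerates_failures=False)
```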
-
Patent number: 8201024
Abstract: Embodiments are described for managing memory faults. An example system can include a memory controller module to manage memory cells and report memory faults. An error buffer module can store memory fault information received from the memory controller. A notification module can be in communication with the error buffer module. The notification module may generate a notification of a memory fault in a memory access operation. A system software module can provide services and manage executing programs on a processor. In addition, the system software module can receive the notifications of the memory fault for the memory access operation. A notification handler may be activated by an interrupt when the notification of the memory fault in the memory access operation is received.
Type: Grant
Filed: May 17, 2010
Date of Patent: June 12, 2012
Assignee: Microsoft Corporation
Inventors: Doug Burger, James Larus, Karin Strauss, Jeremy Condit
-
Publication number: 20120124442
Abstract: Techniques involving failure management of storage devices are described. One representative technique includes encoding data to enable it to be stored in a storage block that includes at least one storage failure. The data is encoded such that it traverses the storage failures when stored in the storage block. When it is determined that a storage access request has requested the data stored in a storage block having such failures, the data is decoded to restore it to its original form.
Type: Application
Filed: November 11, 2010
Publication date: May 17, 2012
Applicant: Microsoft Corporation
Inventor: Karin Strauss
-
Publication number: 20120110278
Abstract: Inoperable phase change memory (PCM) blocks in a PCM device are remapped to one or more operable PCM blocks, e.g. by maintaining an inoperable block table that includes an entry for each inoperable PCM block and an address of a remapped PCM block. Alternatively, the PCM blocks may be remapped by storing the address of the remapped block in the block itself, and setting a remapping bit that indicates the block has been remapped. Where the remapping is performed by a processor, an inoperable block bit may be set in a translation lookaside buffer that indicates whether a virtual memory page is associated with an inoperable or remapped PCM block. When a request to access a virtual memory page is received, the processor references the inoperable block bit associated with the virtual memory page to determine whether to check for remapped PCM blocks in the inoperable block table.
Type: Application
Filed: October 29, 2010
Publication date: May 3, 2012
Applicant: Microsoft Corporation
Inventors: John D. Davis, Karin Strauss, Douglas C. Burger
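The inoperable block table described above can be sketched as follows; the names and addresses are illustrative only. Retired blocks map to replacement blocks, and lookups follow the remapping when one exists.

```python
class PCMRemapper:
    """Illustrative inoperable-block table: failed block -> replacement."""

    def __init__(self):
        self.inoperable = {}     # failed block address -> remapped address

    def retire(self, failed_addr, spare_addr):
        # Record that failed_addr is inoperable and now lives at spare_addr.
        self.inoperable[failed_addr] = spare_addr

    def resolve(self, addr):
        # Follow the remapping if the block has been retired;
        # otherwise the original address is used unchanged.
        return self.inoperable.get(addr, addr)


remap = PCMRemapper()
remap.retire(0x40, 0x1000)       # block 0x40 failed; use 0x1000 instead
```

In the hardware described by the application, a TLB bit would let the processor skip this table lookup entirely for pages with no inoperable blocks.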
-
Patent number: 8166255
Abstract: A method for performing a transaction including a transaction head and a transaction tail includes executing the transaction head, including executing at least one memory reserve instruction to reserve a transactional memory location that is accessed in the transaction, and executing the transaction tail, wherein the transaction cannot be aborted due to a data race on that transactional memory location while executing the transaction tail, and wherein data of memory write operations to the transactional memory location is committed without being buffered.
Type: Grant
Filed: January 19, 2011
Date of Patent: April 24, 2012
Assignee: International Business Machines Corporation
Inventors: Xiaowei Shen, Karin Strauss
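The head/tail structure described in the abstract can be sketched as below. This is an illustrative software model, not the patented hardware mechanism, and all names are invented: locations are reserved during the head, and once the tail begins the transaction can no longer abort, so writes to reserved locations go straight to memory without buffering.

```python
class ReserveTransaction:
    """Illustrative head/tail transaction: reserve in the head,
    write directly (unbuffered) in the tail."""

    def __init__(self, memory):
        self.memory = memory
        self.reserved = set()
        self.in_tail = False

    def reserve(self, addr):
        # Head: a "memory reserve instruction" for a location the
        # transaction will access.
        assert not self.in_tail
        self.reserved.add(addr)

    def enter_tail(self):
        # After this point the transaction cannot be aborted.
        self.in_tail = True

    def write(self, addr, value):
        # Tail writes to reserved locations commit without buffering.
        assert self.in_tail and addr in self.reserved
        self.memory[addr] = value


mem = {0: 10, 1: 20}
txn = ReserveTransaction(mem)
txn.reserve(0)
txn.enter_tail()
txn.write(0, 99)
```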
-
Publication number: 20110296258
Abstract: Architecture that implements error correcting pointers (ECPs) within a memory row, which point to the addresses of failed memory cells, each of which is paired with a replacement cell to be substituted for the failed cell. If two error correcting pointers in the array point to the same cell, a precedence rule dictates that the array entry with the higher index (the entry created later) takes precedence. To count the number of error correcting pointers in use, a null pointer address can be employed to indicate that a pointer is inactive, an activation bit can be added, and/or a counter that represents the number of active error correcting pointers can be maintained. Mechanisms are provided for wear-leveling within the error correction structure, or for pairing this scheme with single-error correcting bits for instances where transient failures may occur. The architecture also employs pointers to correct errors in volatile and non-volatile memories.
Type: Application
Filed: May 27, 2010
Publication date: December 1, 2011
Applicant: MICROSOFT CORPORATION
Inventors: Stuart Schechter, Karin Strauss, Gabriel Loh, Douglas C. Burger
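A minimal model of an ECP row and its precedence rule follows; the field names and sizes are illustrative, not from the patent. Entries pair a failed cell index with a replacement bit, a null address marks inactive entries, and the higher-index entry for a given cell wins.

```python
class ECPRow:
    """Illustrative error-correcting-pointer row: each entry pairs a
    failed cell index with a replacement bit; later entries win."""

    NULL = -1   # null pointer address marks an inactive entry

    def __init__(self, data, num_pointers=2):
        self.data = list(data)
        self.entries = [(self.NULL, 0)] * num_pointers

    def add_pointer(self, cell, replacement_bit):
        # Activate the first inactive entry for this correction.
        for i, (addr, _) in enumerate(self.entries):
            if addr == self.NULL:
                self.entries[i] = (cell, replacement_bit)
                return
        raise RuntimeError("row exhausted its correction pointers")

    def read(self):
        out = list(self.data)
        # Apply entries in index order so the higher-index (later)
        # entry for the same cell wins, per the precedence rule.
        for addr, bit in self.entries:
            if addr != self.NULL:
                out[addr] = bit
        return out


row = ECPRow([1, 0, 0, 1])
row.add_pointer(2, 1)   # cell 2 is stuck at 0 but should hold 1
```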
-
Publication number: 20110283135
Abstract: Embodiments are described for managing memory faults. An example system can include a memory controller module to manage memory cells and report memory faults. An error buffer module can store memory fault information received from the memory controller. A notification module can be in communication with the error buffer module. The notification module may generate a notification of a memory fault in a memory access operation. A system software module can provide services and manage executing programs on a processor. In addition, the system software module can receive the notifications of the memory fault for the memory access operation. A notification handler may be activated by an interrupt when the notification of the memory fault in the memory access operation is received.
Type: Application
Filed: May 17, 2010
Publication date: November 17, 2011
Applicant: Microsoft Corporation
Inventors: Doug Burger, Jim Larus, Karin Strauss, Jeremy Condit
-
Patent number: 7945741
Abstract: A computer readable medium is provided embodying instructions executable by a processor to perform a method for performing a transaction including a transaction head and a transaction tail. The method includes executing the transaction head, including executing at least one memory reserve instruction to reserve a transactional memory location that is accessed in the transaction, and executing the transaction tail, wherein the transaction cannot be aborted due to a data race on that transactional memory location while executing the transaction tail, and wherein data of memory write operations to the transactional memory location is committed without being buffered.
Type: Grant
Filed: July 9, 2007
Date of Patent: May 17, 2011
Assignee: International Business Machines Corporation
Inventors: Xiaowei Shen, Karin Strauss
-
Publication number: 20110113203
Abstract: A method for performing a transaction including a transaction head and a transaction tail includes executing the transaction head, including executing at least one memory reserve instruction to reserve a transactional memory location that is accessed in the transaction, and executing the transaction tail, wherein the transaction cannot be aborted due to a data race on that transactional memory location while executing the transaction tail, and wherein data of memory write operations to the transactional memory location is committed without being buffered.
Type: Application
Filed: January 19, 2011
Publication date: May 12, 2011
Applicant: International Business Machines Corporation
Inventors: Xiaowei Shen, Karin Strauss
-
Publication number: 20110040913
Abstract: A method includes accepting, for a first processor core of a plurality of processor cores in a multi-core system, a user-level interrupt indicated by a user-level interrupt message when an interrupt domain of an application thread executing on the first processor core and a recipient identifier of the application thread executing on the first processor core match corresponding fields in the user-level interrupt message.
Type: Application
Filed: December 8, 2009
Publication date: February 17, 2011
Inventors: Jaewoong Chung, Karin Strauss
-
Publication number: 20110040914
Abstract: A method includes recording a user-level interrupt as undeliverable in a mailbox at least partially based on an interrupt domain identifier and an interrupt recipient identifier included in a user-level interrupt message associated with the user-level interrupt. The recording is at least partially based on an indication that the user-level interrupt is undeliverable to a recipient application thread executing on a processor core of a plurality of processor cores in a multi-core system.
Type: Application
Filed: December 8, 2009
Publication date: February 17, 2011
Inventors: Karin Strauss, Jaewoong Chung
-
Publication number: 20110040915
Abstract: A method includes delivering a user-level interrupt message indicative of a user-level interrupt to one or more recipients according to a user-level interrupt delivery configuration selected from a plurality of user-level interrupt delivery configurations. The one or more recipients correspond to one or more application threads executing on one or more processor cores of a plurality of processor cores in a multi-core system. A method includes generating an indicator of a user-level interrupt being undeliverable to one or more intended recipients of a user-level interrupt message according to a failed delivery notification mode configuration. The user-level interrupt may be issued by an application thread executing on a first processor core of a plurality of processor cores in a multi-core system.
Type: Application
Filed: December 8, 2009
Publication date: February 17, 2011
Inventors: Karin Strauss, Jaewoong Chung
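The delivery-and-mailbox behavior described across the three related publications above can be sketched as follows; all names and message fields are invented for illustration. A message is delivered only when both the interrupt domain and the recipient identifier match a running thread, and is otherwise recorded as undeliverable in a mailbox.

```python
def deliver(message, threads, mailbox):
    """Illustrative user-level interrupt delivery: accepted only when a
    thread's domain and recipient id both match the message fields."""
    for t in threads:
        if (t["domain"] == message["domain"]
                and t["recipient"] == message["recipient"]):
            t["pending"].append(message)   # delivered to the thread
            return True
    mailbox.append(message)                # undeliverable: park for later
    return False


threads = [{"domain": 1, "recipient": 7, "pending": []}]
mailbox = []
ok = deliver({"domain": 1, "recipient": 7, "payload": "wake"},
             threads, mailbox)
missed = deliver({"domain": 2, "recipient": 7, "payload": "tick"},
                 threads, mailbox)
```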
-
Patent number: 7856535
Abstract: In a network-based cache-coherent multiprocessor system, when a node receives a cache request, the node can perform an intra-node cache snoop operation and forward the cache request to a subsequent node in the network. A snoop-and-forward prediction mechanism can be used to predict whether lazy forwarding or eager forwarding is used in processing the incoming cache request. With lazy forwarding, the node cannot forward the cache request to the subsequent node until the corresponding intra-node cache snoop operation is completed. With eager forwarding, the node can forward the cache request to the subsequent node immediately, before the corresponding intra-node cache snoop operation is completed. Furthermore, the snoop-and-forward prediction mechanism can be enhanced seamlessly with an appropriate snoop filter to avoid unnecessary intra-node cache snoop operations.
Type: Grant
Filed: July 21, 2008
Date of Patent: December 21, 2010
Assignee: International Business Machines Corporation
Inventors: Xiaowei Shen, Karin Strauss
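The lazy-versus-eager choice described above can be sketched as below, with a predictor callback standing in for the snoop-and-forward prediction mechanism; all names are illustrative. Eager forwarding passes the request on before the local snoop completes, lazy forwarding only after.

```python
def handle_request(req, node, predictor):
    """Illustrative snoop-and-forward choice: eager forwards before the
    local snoop completes; lazy snoops first, then forwards."""
    if predictor(req):        # prediction says: eager forwarding
        node.forward(req)     # forward immediately...
        node.snoop(req)       # ...local snoop proceeds afterwards
    else:                     # lazy forwarding
        node.snoop(req)       # must finish the intra-node snoop
        node.forward(req)     # before passing the request on


class Node:
    def __init__(self):
        self.log = []         # records the order of operations

    def snoop(self, req):
        self.log.append(("snoop", req))

    def forward(self, req):
        self.log.append(("forward", req))


eager_node, lazy_node = Node(), Node()
handle_request("A", eager_node, predictor=lambda r: True)
handle_request("B", lazy_node, predictor=lambda r: False)
```

A snoop filter, as the abstract notes, would let `handle_request` skip `node.snoop` entirely when the node is known not to cache the line.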
-
Patent number: 7568073
Abstract: A computer-implemented method for enforcing cache coherence includes multicasting a cache request for a memory address from a requesting node without an ordering restriction over a network, collecting, by the requesting node, a combined snoop response of the cache request over a unidirectional ring embedded in the network, and enforcing cache coherence for the memory address at the requesting node, according to the combined snoop response.
Type: Grant
Filed: November 6, 2006
Date of Patent: July 28, 2009
Assignee: International Business Machines Corporation
Inventors: Xiaowei Shen, Karin Strauss
-
Publication number: 20090089512
Abstract: In a network-based cache-coherent multiprocessor system, when a node receives a cache request, the node can perform an intra-node cache snoop operation and forward the cache request to a subsequent node in the network. A snoop-and-forward prediction mechanism can be used to predict whether lazy forwarding or eager forwarding is used in processing the incoming cache request. With lazy forwarding, the node cannot forward the cache request to the subsequent node until the corresponding intra-node cache snoop operation is completed. With eager forwarding, the node can forward the cache request to the subsequent node immediately, before the corresponding intra-node cache snoop operation is completed. Furthermore, the snoop-and-forward prediction mechanism can be enhanced seamlessly with an appropriate snoop filter to avoid unnecessary intra-node cache snoop operations.
Type: Application
Filed: July 21, 2008
Publication date: April 2, 2009
Inventors: Xiaowei Shen, Karin Strauss
-
Publication number: 20090019209
Abstract: A computer readable medium is provided embodying instructions executable by a processor to perform a method for performing a transaction including a transaction head and a transaction tail. The method includes executing the transaction head, including executing at least one memory reserve instruction to reserve a transactional memory location that is accessed in the transaction, and executing the transaction tail, wherein the transaction cannot be aborted due to a data race on that transactional memory location while executing the transaction tail, and wherein data of memory write operations to the transactional memory location is committed without being buffered.
Type: Application
Filed: July 9, 2007
Publication date: January 15, 2009
Inventors: Xiaowei Shen, Karin Strauss
-
Patent number: 7437520
Abstract: In a network-based cache-coherent multiprocessor system, when a node receives a cache request, the node can perform an intra-node cache snoop operation and forward the cache request to a subsequent node in the network. A snoop-and-forward prediction mechanism can be used to predict whether lazy forwarding or eager forwarding is used in processing the incoming cache request. With lazy forwarding, the node cannot forward the cache request to the subsequent node until the corresponding intra-node cache snoop operation is completed. With eager forwarding, the node can forward the cache request to the subsequent node immediately, before the corresponding intra-node cache snoop operation is completed. Furthermore, the snoop-and-forward prediction mechanism can be enhanced seamlessly with an appropriate snoop filter to avoid unnecessary intra-node cache snoop operations.
Type: Grant
Filed: July 11, 2005
Date of Patent: October 14, 2008
Assignee: International Business Machines Corporation
Inventors: Xiaowei Shen, Karin Strauss
-
Publication number: 20080109609
Abstract: A computer-implemented method for enforcing cache coherence includes multicasting a cache request for a memory address from a requesting node without an ordering restriction over a network, collecting, by the requesting node, a combined snoop response of the cache request over a unidirectional ring embedded in the network, and enforcing cache coherence for the memory address at the requesting node, according to the combined snoop response.
Type: Application
Filed: November 6, 2006
Publication date: May 8, 2008
Inventors: Xiaowei Shen, Karin Strauss
-
Publication number: 20070011408
Abstract: In a network-based cache-coherent multiprocessor system, when a node receives a cache request, the node can perform an intra-node cache snoop operation and forward the cache request to a subsequent node in the network. A snoop-and-forward prediction mechanism can be used to predict whether lazy forwarding or eager forwarding is used in processing the incoming cache request. With lazy forwarding, the node cannot forward the cache request to the subsequent node until the corresponding intra-node cache snoop operation is completed. With eager forwarding, the node can forward the cache request to the subsequent node immediately, before the corresponding intra-node cache snoop operation is completed. Furthermore, the snoop-and-forward prediction mechanism can be enhanced seamlessly with an appropriate snoop filter to avoid unnecessary intra-node cache snoop operations.
Type: Application
Filed: July 11, 2005
Publication date: January 11, 2007
Inventors: Xiaowei Shen, Karin Strauss