Patents by Inventor Ramaswamy Sivaramakrishnan
Ramaswamy Sivaramakrishnan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 8527712
Abstract: A method including: receiving multiple local requests to access the cache line; inserting, into an address chain, multiple entries corresponding to the multiple local requests; identifying a first entry at a head of the address chain; initiating, in response to identifying the first entry and in response to the first entry corresponding to a request to own the cache line, a traversal of the address chain; setting, during the traversal of the address chain, a state element identified in a second entry; receiving a foreign request to access the cache line; inserting, in response to setting the state element, a third entry corresponding to the foreign request into the address chain after the second entry; and relinquishing, in response to inserting the third entry after the second entry in the address chain, the cache line to a foreign thread after executing the multiple local requests.
Type: Grant
Filed: August 29, 2011
Date of Patent: September 3, 2013
Assignee: Oracle International Corporation
Inventors: Connie Wai Mun Cheung, Madhavi Kondapaneni, Joann Yin Lam, Ramaswamy Sivaramakrishnan
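The ordering described in this abstract can be illustrated with a minimal Python sketch. The list-based chain, the entry fields, and the method names are illustrative assumptions, not the patented hardware design:

```python
# Toy model of the address chain for one cache line. A traversal started
# by a request-to-own (RTO) at the head sets a state element in later
# entries; a foreign request arriving after that is appended behind the
# local entries, so the line is relinquished only after locals execute.

class AddressChain:
    def __init__(self):
        self.entries = []

    def add_local(self, kind):
        self.entries.append({"origin": "local", "kind": kind, "state": False})

    def mark_states(self):
        # Traversal triggered when the head entry is a request-to-own:
        # set the state element in every later entry.
        if self.entries and self.entries[0]["kind"] == "RTO":
            for entry in self.entries[1:]:
                entry["state"] = True

    def add_foreign(self):
        # Because the state elements are set, the foreign entry goes
        # after the existing local entries instead of bypassing them.
        self.entries.append({"origin": "foreign", "kind": "RTO", "state": False})

    def drain(self):
        # Execute all local requests first, then relinquish the cache
        # line to the foreign requester; returns the service order.
        order = [entry["origin"] for entry in self.entries]
        self.entries = []
        return order
```

Draining a chain built as head-RTO, local load, then foreign request yields the local requests before the foreign one.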
-
Patent number: 8464009
Abstract: A distributed shared memory multiprocessor system that supports both fine- and coarse-grained interleaving of the shared memory address space. A ceiling mask sets a boundary between the fine-grain interleaved and coarse-grain interleaved memory regions of the distributed shared memory. A method for satisfying a memory access request in a distributed shared memory subsystem of a multiprocessor system having both fine- and coarse-grain interleaved memory segments. Certain low- or high-order address bits, depending on whether the memory segment is fine- or coarse-grain interleaved, respectively, are used to determine if the memory address is local to a processor node. A method for setting the ceiling mask of a distributed shared memory multiprocessor system to optimize performance of a first application run on a single node and performance of a second application run on a plurality of nodes.
Type: Grant
Filed: June 4, 2008
Date of Patent: June 11, 2013
Assignee: Oracle America, Inc.
Inventors: Ramaswamy Sivaramakrishnan, Connie Cheung, William Bryg
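The low-order/high-order bit selection can be sketched in a few lines of Python. The node count, cache-line size, and address width below are assumptions for illustration; the patent does not fix these values:

```python
NODE_BITS = 2      # 4-node system (assumed)
LINE_SHIFT = 6     # 64-byte cache lines (assumed)

def home_node(addr, ceiling, addr_width=40):
    """Return the node that homes `addr`.

    Addresses below `ceiling` are fine-grain interleaved: consecutive
    cache lines rotate across nodes, so the node ID comes from the
    low-order bits just above the line offset. Addresses at or above
    `ceiling` are coarse-grain interleaved: the node ID comes from the
    high-order address bits.
    """
    if addr < ceiling:
        return (addr >> LINE_SHIFT) & ((1 << NODE_BITS) - 1)
    return (addr >> (addr_width - NODE_BITS)) & ((1 << NODE_BITS) - 1)

def is_local(addr, ceiling, node, addr_width=40):
    # The locality check the abstract describes: does the request's
    # home node match the requesting processor node?
    return home_node(addr, ceiling, addr_width) == node
```

With this layout, a kernel run on one node can keep its working set coarse-grain interleaved (and therefore local), while bandwidth-hungry multi-node workloads use the fine-grain region below the ceiling.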
-
Publication number: 20130138995
Abstract: A method for managing multiple nodes hosting multiple memory segments, including: identifying a failure of a first node hosting a first memory segment storing a hypervisor; identifying a second memory segment storing a shadow of the hypervisor and hosted by a second node; intercepting, after the failure, a hypervisor access request (HAR) generated by a core of a third node and comprising a physical memory address comprising multiple node identification (ID) bits identifying the first node; modifying the multiple node ID bits of the physical memory address to identify the second node; and accessing a location in the shadow of the hypervisor specified by the physical address of the HAR after the multiple node ID bits are modified.
Type: Application
Filed: November 30, 2011
Publication date: May 30, 2013
Applicant: Oracle International Corporation
Inventors: Ramaswamy Sivaramakrishnan, Jiejun Lu, Aaron S. Wynn
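The node-ID rewrite at the heart of this method is a masked bit-field replacement. A minimal sketch, with an assumed field position and width (the publication does not specify them):

```python
def redirect_node(paddr, new_node, node_id_shift=32, node_id_width=4):
    """Replace the node-ID field of a physical address.

    Clears the node-ID bits of `paddr` and substitutes `new_node`, so a
    hypervisor access aimed at a failed node is steered to the node
    hosting the shadow copy. Field placement is illustrative.
    """
    mask = ((1 << node_id_width) - 1) << node_id_shift
    return (paddr & ~mask) | ((new_node << node_id_shift) & mask)
```

For example, a HAR addressed to failed node 1 keeps its segment offset but has its node-ID bits rewritten to shadow node 2 before the access is performed.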
-
Patent number: 8429353
Abstract: A method and a system for processor nodes configurable to operate in various distributed shared memory topologies. The processor node may be coupled to a first local memory. The first processor node may include a first local arbiter, which may be configured to perform one or more of a memory node decode or a coherency check on the first local memory. The processor node may also include a switch coupled to the first local arbiter for enabling and/or disabling the first local arbiter. Thus one or more processor nodes may be coupled together in various distributed shared memory configurations, depending on the configuration of their respective switches.
Type: Grant
Filed: May 20, 2008
Date of Patent: April 23, 2013
Assignee: Oracle America, Inc.
Inventors: Ramaswamy Sivaramakrishnan, Stephen E. Phillips
-
Publication number: 20130086417
Abstract: The systems and methods described herein may provide a flush-retire instruction for retiring "bad" cache locations (e.g., locations associated with persistent errors) to prevent their allocation for any further accesses, and a flush-unretire instruction for unretiring cache locations previously retired. These instructions may be implemented as hardware instructions of a processor. They may be executable by processes executing in a hyper-privileged state, without the need to quiesce any other processes. The flush-retire instruction may atomically flush a cache line implicated by a detected cache error and set a lock bit to disable subsequent allocation of the corresponding cache location. The flush-unretire instruction may atomically flush an identified cache line (if valid) and clear the lock bit to re-enable subsequent allocation of the cache location. Various bits in the encodings of these instructions may identify the cache location to be retired or unretired in terms of the physical cache structure.
Type: Application
Filed: September 30, 2011
Publication date: April 4, 2013
Inventors: Ramaswamy Sivaramakrishnan, Ali Vahidsafa, Aaron S. Wynn, Connie W. Cheung
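The lock-bit semantics can be modeled in software. The class below is a toy model of one cache set, with invented names; it shows only the flush/lock behavior the abstract describes, not the hardware instruction encodings:

```python
class CacheSet:
    """Toy model of a cache set whose ways carry a per-way lock bit.

    flush_retire flushes a way and sets its lock bit so allocation
    skips it; flush_unretire flushes the way (a None entry is already
    invalid) and clears the lock bit to re-enable allocation.
    """

    def __init__(self, ways=4):
        self.data = [None] * ways       # None = invalid line
        self.locked = [False] * ways    # True = retired location

    def flush_retire(self, way):
        self.data[way] = None
        self.locked[way] = True

    def flush_unretire(self, way):
        self.data[way] = None
        self.locked[way] = False

    def allocate(self, value):
        # Allocation never chooses a retired (locked) way.
        for way, locked in enumerate(self.locked):
            if not locked and self.data[way] is None:
                self.data[way] = value
                return way
        return -1   # no allocatable way
```

Retiring a way with a persistent error thus quarantines it without quiescing other activity; unretiring later returns it to the allocation pool.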
-
Publication number: 20130055011
Abstract: A cache memory system includes a cache controller and a cache tag array. The cache tag array includes one or more ways, one or more indices, and a cache tag entry for each way and index combination. Each cache tag entry includes an error correction portion and an address portion. In response to an address request for data that includes a first index and a first address, the cache controller compares the first address to the cache tag entries of the cache tag array that correspond to the first index. When the comparison results in a miss, the cache controller corrects cache tag entries with an error that correspond to the first index using the corresponding error correction portions, and stores at least one of the corrected cache tag entries in a storage that is external to the cache tag array. The cache controller, for each corrected cache tag entry, replays the comparison using the at least one of the externally stored corrected cache tag entries.
Type: Application
Filed: August 24, 2011
Publication date: February 28, 2013
Applicant: Oracle International Corporation
Inventors: Ramaswamy Sivaramakrishnan, Aaron S. Wynn, Connie Wai Mun Cheung, Satarupa Bose
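The correct-then-replay flow can be sketched as follows. As a deliberate simplification, the "error correction portion" here is a duplicate copy of the tag rather than a real ECC syndrome; the class and field names are invented:

```python
class TagArray:
    """Toy model of miss-triggered tag correction and compare replay.

    Each entry holds [tag, correction_copy]. On a miss, entries whose
    tag disagrees with the correction portion are repaired; the
    corrected values are held in a buffer external to the array and the
    compare is replayed against them.
    """

    def __init__(self, tags):
        self.entries = [[t, t] for t in tags]
        self.replay_buffer = []   # corrected entries held outside the array

    def lookup(self, tag):
        # First pass: compare against (possibly corrupted) stored tags.
        for way, (stored, _) in enumerate(self.entries):
            if stored == tag:
                return way
        # Miss: collect corrections for entries with a detected error.
        self.replay_buffer = [
            (way, ecc)
            for way, (stored, ecc) in enumerate(self.entries)
            if stored != ecc
        ]
        # Write the corrections back and replay the comparison.
        for way, corrected in self.replay_buffer:
            self.entries[way][0] = corrected
            if corrected == tag:
                return way
        return -1   # genuine miss
```

A request that initially misses because of a corrupted tag bit thus hits on replay instead of being treated as a miss.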
-
Patent number: 8332729
Abstract: A system for automatic lane failover includes a first device coupled to a second device via a serial communication link having a plurality of communication lanes. The devices may communicate by operating the link in a normal mode and a degraded mode. During normal mode operation, the devices may send frames of information to each other via the serial communication link. Each frame of information may include a number of data bits and a number of error protection bits. In response to either device detecting a failure of one or more of the communication lanes, the first device may cause the serial communication link to operate in a degraded mode by removing the one or more failed communication lanes. In addition, each device may reformat and send the frame of information on the remaining communication lanes with fewer data bits and the same number of error protection bits.Type: Grant
Filed: September 29, 2008
Date of Patent: December 11, 2012
Assignee: Oracle International Corporation
Inventors: Ramaswamy Sivaramakrishnan, Sebastian Turullols, Stephen E. Phillips
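The degraded-mode frame arithmetic can be made concrete with a short sketch. The lane count, bits per lane, and protection-bit count below are assumptions chosen for illustration, not the patented link parameters:

```python
ECC_BITS = 32   # error-protection bits per frame (assumed constant)

def frame_layout(total_lanes, failed_lanes, bits_per_lane=16):
    """Bits available in one frame before and after lane failover.

    On failure, the link drops the failed lanes but keeps the frame's
    error-protection bit count unchanged, so only the data payload
    shrinks: data_bits = live_lanes * bits_per_lane - ECC_BITS.
    """
    live = total_lanes - failed_lanes
    data_bits = live * bits_per_lane - ECC_BITS
    if data_bits <= 0:
        raise ValueError("too few live lanes to carry the protection bits")
    return {"lanes": live, "data_bits": data_bits, "ecc_bits": ECC_BITS}
```

Keeping the protection-bit count fixed preserves the link's error-detection strength in degraded mode at the cost of per-frame payload.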
-
Publication number: 20120290794
Abstract: A method including: receiving multiple local requests to access the cache line; inserting, into an address chain, multiple entries corresponding to the multiple local requests; identifying a first entry at a head of the address chain; initiating, in response to identifying the first entry and in response to the first entry corresponding to a request to own the cache line, a traversal of the address chain; setting, during the traversal of the address chain, a state element identified in a second entry; receiving a foreign request to access the cache line; inserting, in response to setting the state element, a third entry corresponding to the foreign request into the address chain after the second entry; and relinquishing, in response to inserting the third entry after the second entry in the address chain, the cache line to a foreign thread after executing the multiple local requests.
Type: Application
Filed: August 29, 2011
Publication date: November 15, 2012
Applicant: Oracle International Corporation
Inventors: Connie Wai Mun Cheung, Madhavi Kondapaneni, Joann Yin Lam, Ramaswamy Sivaramakrishnan
-
Publication number: 20100083066
Abstract: A system for automatic lane failover includes a first device coupled to a second device via a serial communication link having a plurality of communication lanes. The devices may communicate by operating the link in a normal mode and a degraded mode. During normal mode operation, the devices may send frames of information to each other via the serial communication link. Each frame of information may include a number of data bits and a number of error protection bits. In response to either device detecting a failure of one or more of the communication lanes, the first device may cause the serial communication link to operate in a degraded mode by removing the one or more failed communication lanes. In addition, each device may reformat and send the frame of information on the remaining communication lanes with fewer data bits and the same number of error protection bits.
Type: Application
Filed: September 29, 2008
Publication date: April 1, 2010
Inventors: Ramaswamy Sivaramakrishnan, Sebastian Turullols, Stephen E. Phillips
-
Publication number: 20090307434
Abstract: A distributed shared memory multiprocessor system that supports both fine- and coarse-grained interleaving of the shared memory address space. A ceiling mask sets a boundary between the fine-grain interleaved and coarse-grain interleaved memory regions of the distributed shared memory. A method for satisfying a memory access request in a distributed shared memory subsystem of a multiprocessor system having both fine- and coarse-grain interleaved memory segments. Certain low- or high-order address bits, depending on whether the memory segment is fine- or coarse-grain interleaved, respectively, are used to determine if the memory address is local to a processor node. A method for setting the ceiling mask of a distributed shared memory multiprocessor system to optimize performance of a first application run on a single node and performance of a second application run on a plurality of nodes.
Type: Application
Filed: June 4, 2008
Publication date: December 10, 2009
Applicant: Sun Microsystems, Inc.
Inventors: Ramaswamy Sivaramakrishnan, Connie Cheung, William Bryg
-
Publication number: 20090292881
Abstract: A method and a system for processor nodes configurable to operate in various distributed shared memory topologies. The processor node may be coupled to a first local memory. The first processor node may include a first local arbiter, which may be configured to perform one or more of a memory node decode or a coherency check on the first local memory. The processor node may also include a switch coupled to the first local arbiter for enabling and/or disabling the first local arbiter. Thus one or more processor nodes may be coupled together in various distributed shared memory configurations, depending on the configuration of their respective switches.
Type: Application
Filed: May 20, 2008
Publication date: November 26, 2009
Inventors: Ramaswamy Sivaramakrishnan, Stephen E. Phillips
-
Patent number: 7310709
Abstract: A method and apparatus are disclosed for maintaining coherency between a primary cache and a secondary cache in a directory-based cache system. Upon identifying a parity error in the primary cache, a tag parity packet and a load instruction are sent from the primary cache to the secondary cache. In response to the tag parity packet, each tag entry in the secondary cache that is associated with the parity error is invalidated. Upon receiving an acknowledgment of receipt of the tag parity packet, the primary cache functions to invalidate each tag entry in the primary cache that is associated with the parity error. Then, the secondary cache communicates data requested in the load instruction to the primary cache.
Type: Grant
Filed: April 6, 2005
Date of Patent: December 18, 2007
Assignee: Sun Microsystems, Inc.
Inventors: Kathirgamar Aingaran, Ramaswamy Sivaramakrishnan, Sanjay Patel
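The invalidate-then-reload handshake can be sketched as a small message sequence. The two classes, the "ack" token, and the indexing by tag are invented for illustration; real hardware would exchange packets, not method calls:

```python
class SecondaryCache:
    """Holds tag-valid bits and data; invalidates on a tag parity packet."""

    def __init__(self, tags, data):
        self.valid = {t: True for t in tags}
        self.data = data

    def handle_tag_parity(self, bad_tag):
        # Invalidate every tag entry associated with the parity error,
        # then acknowledge receipt of the packet.
        if bad_tag in self.valid:
            self.valid[bad_tag] = False
        return "ack"

    def handle_load(self, addr):
        return self.data[addr]

class PrimaryCache:
    def __init__(self, secondary):
        self.secondary = secondary
        self.valid = {}

    def on_parity_error(self, bad_tag, addr):
        # Send the tag parity packet (with the load); invalidate the
        # local tag entry only after the acknowledgment arrives, then
        # take the data the secondary cache supplies for the load.
        if self.secondary.handle_tag_parity(bad_tag) == "ack":
            self.valid[bad_tag] = False
        return self.secondary.handle_load(addr)
```

Ordering the invalidations this way (secondary first, primary after the ack) keeps the directory's view of the primary cache consistent while the error is scrubbed.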
-
Patent number: 7281096
Abstract: A hardware implemented method for writing data to a cache is provided. In this hardware implemented method, a Block Initializing Store (BIS) instruction is received to write the data from a processor core to a memory block. The BIS instruction includes the data from the processor core. Thereafter, a dummy read request is sent to a memory controller and known data is received from the memory controller without accessing a main memory. The known data is then written to the cache and, after the known data is written, the data from the processor core is written to the cache. A system and processor for writing data to the cache also are described.
Type: Grant
Filed: February 9, 2005
Date of Patent: October 9, 2007
Assignee: Sun Microsystems, Inc.
Inventors: Ramaswamy Sivaramakrishnan, Sunil Vemula, Sanjay Patel, James P. Laudon
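The point of the dummy read is to write-allocate a line without paying for a DRAM fetch whose contents will be overwritten anyway. A minimal sketch, with invented class and function names and an assumed 64-byte line of zeros as the "known data":

```python
class Memory:
    """Main memory that counts how often it is actually read."""

    def __init__(self):
        self.reads = 0

    def read_line(self, addr):
        self.reads += 1
        return b"\xff" * 64   # whatever happens to be in DRAM

class MemoryController:
    KNOWN = b"\x00" * 64      # known data returned for a dummy read

    def __init__(self, memory):
        self.memory = memory

    def read_line(self, addr, dummy=False):
        # A BIS store issues a dummy read: the controller returns known
        # data immediately, without accessing main memory.
        if dummy:
            return self.KNOWN
        return self.memory.read_line(addr)

def block_init_store(cache, ctrl, addr, data):
    """Write-allocate `addr` without fetching the old line from memory."""
    line = bytearray(ctrl.read_line(addr, dummy=True))  # install known data
    line[: len(data)] = data                            # then merge the store
    cache[addr] = bytes(line)
```

Because the line is about to be fully initialized by software, skipping the real read saves memory bandwidth with no loss of correctness.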