Plural Shared Memories Patents (Class 711/148)
  • Patent number: 8489839
    Abstract: The memory splitter chip couples multiple DRAM units to the PPU, thereby expanding the memory capacity available to the PPU for storing data and increasing the overall performance of the graphics processing system. The memory splitter chip includes logic for managing the transmission of data between the PPU and the DRAM units when the transmission frequencies and the burst lengths of the PPU interface and the DRAM interfaces differ. Specifically, the memory splitter chip implements an overlapping transmission mode, a pairing transmission mode or a combination of the two modes when the transmission frequencies or the burst lengths differ.
    Type: Grant
    Filed: December 16, 2009
    Date of Patent: July 16, 2013
    Assignee: Nvidia Corporation
    Inventors: Ashish Karandikar, Kaustubh Sanghani, Jonah M. Alben, Shane Keil
  • Patent number: 8484536
    Abstract: Methods, systems, and apparatus, including computer program products, featuring generating a plurality of error-correcting code chunks from a plurality of data chunks. The error-correcting code chunks can be used to reconstruct one or more of the data chunks. The data chunks are allocated to a local group of storage nodes. The error-correcting code chunks are allocated between the local group of storage nodes and one or more remote groups of storage nodes. Each remote group of storage nodes is allocated one or more unique error-correcting code chunks from the error-correcting code chunks. Any of the error-correcting code chunks not allocated to a remote group of storage nodes are allocated to the local group of storage nodes.
    Type: Grant
    Filed: March 26, 2010
    Date of Patent: July 9, 2013
    Assignee: Google Inc.
    Inventor: Robert Cypher
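    Code sketch: a minimal Python illustration of the chunk-placement rule this abstract describes; the function and group names are hypothetical, and the one-chunk-per-remote-group, round-robin order is an illustrative policy the patent does not prescribe.
      def allocate_chunks(data_chunks, code_chunks, local_group, remote_groups):
          # Map each group of storage nodes to the chunks it will hold.
          placement = {group: [] for group in [local_group] + remote_groups}

          # All data chunks stay with the local group of storage nodes.
          placement[local_group].extend(data_chunks)

          # Each remote group receives one unique error-correcting code chunk.
          for group, chunk in zip(remote_groups, code_chunks):
              placement[group].append(chunk)

          # Any code chunks not sent to a remote group remain in the local group.
          placement[local_group].extend(code_chunks[len(remote_groups):])
          return placement

      # Example: 4 data chunks, 3 error-correcting code chunks, 2 remote groups.
      print(allocate_chunks(["d0", "d1", "d2", "d3"], ["c0", "c1", "c2"],
                            "local", ["remote-east", "remote-west"]))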
  • Patent number: 8484422
    Abstract: A method, system and computer program product are disclosed for maintaining data coherence, for use in a multi-node processing system where each of the nodes includes one or more components. In one embodiment, the method comprises establishing a data domain, assigning a group of the components to the data domain, sending a coherence message from a first component of the processing system to a second component of the processing system, and determining if that second component is assigned to the data domain. In this embodiment, if that second component is assigned to the data domain, the coherence message is transferred to all of the components assigned to the data domain to maintain data coherency among those components. In an embodiment, if that second component is assigned to the data domain, the first component is assigned to the data domain.
    Type: Grant
    Filed: December 8, 2009
    Date of Patent: July 9, 2013
    Assignee: International Business Machines Corporation
    Inventors: Kattamuri Ekanadham, Il Park, Pratap Pattnaik
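    Code sketch: a toy Python model of the data-domain behavior in the abstract above; the class and method names are invented for illustration, and the message fan-out is deliberately simplified.
      class DataDomain:
          def __init__(self):
              self.members = set()

          def assign(self, component):
              self.members.add(component)

          def deliver(self, sender, receiver, message):
              # If the receiver belongs to the domain, every member sees the
              # coherence message and the sender is pulled into the domain.
              if receiver in self.members:
                  for component in self.members:
                      component.handle(message)
                  self.assign(sender)
              else:
                  receiver.handle(message)

      class Component:
          def __init__(self, name):
              self.name = name

          def handle(self, message):
              print(f"{self.name} applies: {message}")

      domain = DataDomain()
      node0_cache, node1_cache, node1_dir = Component("L2-0"), Component("L2-1"), Component("DIR-1")
      domain.assign(node1_cache)
      domain.assign(node1_dir)
      domain.deliver(node0_cache, node1_cache, "invalidate line 0x80")  # fans out; L2-0 joins the domain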
  • Patent number: 8484438
    Abstract: Some embodiments provide a system that facilitates concurrency control in a computer system. During operation, the system generates a set of signatures associated with memory accesses in the computer system. To generate the signatures, the system creates a set of hierarchical Bloom filters (HBFs) corresponding to the signatures, and populates the HBFs using addresses associated with the memory accesses. Next, the system compares the HBFs to detect a potential conflict associated with the memory accesses. Finally, the system manages concurrent execution in the computer system based on the detected potential conflict.
    Type: Grant
    Filed: June 29, 2009
    Date of Patent: July 9, 2013
    Assignee: Oracle America, Inc.
    Inventor: Robert E. Cypher
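    Code sketch: a simplified Python illustration of signature-based conflict detection; the two-level, 64-bit filters and the require-intersection-at-every-level rule are illustrative stand-ins for the patent's hierarchical Bloom filters, not its exact construction.
      import hashlib

      FILTER_BITS = 64                # size of each per-level filter (illustrative)
      LEVEL_SHIFTS = (12, 0)          # coarse (page-granular) and fine (exact-address) levels

      def _hash_bit(value, seed):
          digest = hashlib.sha256(f"{seed}:{value}".encode()).digest()
          return 1 << (digest[0] % FILTER_BITS)

      def make_signature(addresses):
          # One small Bloom filter per granularity level, two hash functions per level.
          signature = []
          for shift in LEVEL_SHIFTS:
              bits = 0
              for addr in addresses:
                  bits |= _hash_bit(addr >> shift, 0) | _hash_bit(addr >> shift, 1)
              signature.append(bits)
          return signature

      def may_conflict(sig_a, sig_b):
          # Conservative: a genuinely shared address sets common bits at every level,
          # so a miss at any level rules the conflict out.
          return all(a & b for a, b in zip(sig_a, sig_b))

      writes_t1 = make_signature([0x1000, 0x1040])
      reads_t2 = make_signature([0x2000, 0x1040])
      print(may_conflict(writes_t1, reads_t2))                   # True: both touched 0x1040
      print(may_conflict(writes_t1, make_signature([0x9000])))   # almost surely False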
  • Patent number: 8484397
    Abstract: Various methods and apparatus are described for a memory scheduler. The memory scheduler has a pipelined arbiter to determine which request will access the target memory core. Pipelining occurs in stages within the arbiter over a period of more than one clock cycle. The pipelined arbiter uses two or more weighting factors affecting an arbitration decision that are processed in parallel. A predictive scheduler in the memory scheduler uses data from a previous cycle to make the arbitration decision about a request during a current clock cycle in which the arbitration decision is made in order to increase overall system efficiency of requests being serviced in the integrated circuit.
    Type: Grant
    Filed: May 24, 2012
    Date of Patent: July 9, 2013
    Assignee: Sonics, Inc.
    Inventors: Krishnan Srinivasan, Drew E. Wingard
  • Patent number: 8478877
    Abstract: A computer readable medium comprising software instructions for: obtaining an allocation policy by a MAC layer executing on a host; receiving a request for a transmit kernel buffer (TxKB) by a sending application executing on at least one processor of the host; obtaining a location of a plurality of available TxKBs on the host; obtaining a location of at least one available network interface on the host; obtaining a location of the sending application; allocating one of the plurality of available TxKBs to obtain an allocated TxKB, wherein the one of the plurality of available TxKBs is selected according to the allocation policy using the location of the plurality of available TxKBs, the location of the at least one available network interface, and the location of the sending application; and providing, to the sending application, the location of the allocated TxKB.
    Type: Grant
    Filed: February 24, 2010
    Date of Patent: July 2, 2013
    Assignee: Oracle International Corporation
    Inventors: Nicolas G. Droux, Sunay Tripathi
  • Patent number: 8478835
    Abstract: The data path in a network storage system is streamlined by sharing a memory among multiple functional modules (e.g., N-module and D-module) of a storage server that facilitates symmetric access to data from multiple clients. The shared memory stores data from clients or storage devices to facilitate communication of data between clients and storage devices and/or between functional modules, and reduces redundant copies necessary for data transport. It reduces latency and improves throughput efficiencies by minimizing data copies and using hardware assisted mechanisms such as DMA directly from host bus adapters over an interconnection, e.g. switched PCI-e “network”. This scheme is well suited for a “SAN array” architecture, but also can be applied to NAS protocols or in a unified protocol-agnostic storage system. The storage system can provide a range of configurations ranging from dual module to many modules with redundant switched fabrics for I/O, CPU, memory, and disk connectivity.
    Type: Grant
    Filed: July 17, 2008
    Date of Patent: July 2, 2013
    Assignee: NetApp, Inc.
    Inventors: Jeffrey S. Kimmel, Steve C. Miller, Ashish Prakash
  • Publication number: 20130159636
    Abstract: An information processing apparatus includes a directory. Information is registered with the directory in a first format having entries corresponding to respective data storage areas. The information identifies the CPU, or the information processing part containing the CPU, that stores data held in a data storage area of one of a plurality of information processing parts. The information processing part converts the directory into a second format in which entries registered for data that is not to be used are removed from the entries of the first format, so that the number of entries is reduced.
    Type: Application
    Filed: February 20, 2013
    Publication date: June 20, 2013
    Applicant: FUJITSU LIMITED
    Inventor: FUJITSU LIMITED
  • Patent number: 8464006
    Abstract: Provided are a method and apparatus for efficiently transferring a massive amount of multimedia data between two processors. The apparatus includes a first local switch, which connects a virtual page of a first processor element to a shared memory page, a second local switch, which connects a virtual page of a second processor element to the shared memory page, a shared page switch, which connects a predetermined shared memory page of a shared physical memory to the first or second local switch, and a switch manager, which remaps a certain shared memory page of the shared physical memory that stores data of a task performed by the first processor element to the virtual page of the second processor element. Accordingly, because memory remapping is used, the massive amount of multimedia data can be transferred simply by changing the memory mapping, rather than by moving the data over a memory bus.
    Type: Grant
    Filed: February 7, 2008
    Date of Patent: June 11, 2013
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Young-Su Kwon, Hyuk Kim, Young-Seok Baek, Suk Ho Lee, Bon Tae Koo, Nak Woong Eum
  • Patent number: 8464005
    Abstract: Systems and methods for accessing common registers in a multi-core processor are disclosed. In an exemplary embodiment a method may comprise streaming at least one transaction from one of a plurality of processing cores in a core domain directly to a register domain. The method may also comprise reassembling the at least one streamed transaction in the register domain for data access operations at the common registers.
    Type: Grant
    Filed: May 4, 2012
    Date of Patent: June 11, 2013
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Warren K. Howlett, Christopher L. Lyles
  • Publication number: 20130138884
    Abstract: Exemplary embodiments of the invention provide load distribution among storage systems using solid state memory (e.g., flash memory) as expanded cache area. In accordance with an aspect of the invention, a system comprises a first storage system and a second storage system. The first storage system changes a mode of operation from a first mode to a second mode based on load of process in the first storage system. The load of process in the first storage system in the first mode is executed by the first storage system. The load of process in the first storage system in the second mode is executed by the first storage system and the second storage system.
    Type: Application
    Filed: November 30, 2011
    Publication date: May 30, 2013
    Applicant: HITACHI, LTD.
    Inventor: Shunji KAWAMURA
  • Publication number: 20130138877
    Abstract: A distributed direct memory access (DMA) method, apparatus, and system are provided within a system on chip (SOC). DMA controller units are distributed to various functional modules desiring direct memory access. The functional modules interface to a system bus over which the direct memory access occurs. A global buffer memory, to which the direct memory access is desired, is coupled to the system bus. Bus arbitrators are utilized to arbitrate which functional modules have access to the system bus to perform the direct memory access. Once a functional module is selected by the bus arbitrator to have access to the system bus, it can establish a DMA routine with the global buffer memory.
    Type: Application
    Filed: January 30, 2013
    Publication date: May 30, 2013
    Inventors: Kumar Ganapathy, Ruban Kanapathippillai, Saurin Shah, George Moussa, Earle F. Philhower, III, Ruchir Shah
  • Patent number: 8452926
    Abstract: A digital system is provided with a memory interposer module configured to be coupled between a processor module and a memory module. The memory interposer module has a memory controller configured to couple to the memory module. It also includes a first memory emulator configured to couple to the processor module via a connector, wherein the first memory emulator is configured to emulate the memory module. There is an arbiter coupled between the memory controller and the memory emulator. A second memory emulator is connected to the arbiter, wherein the second memory emulator is also configured to emulate the memory module. Each memory emulator is operable to stall a memory request when a conflict occurs.
    Type: Grant
    Filed: January 5, 2010
    Date of Patent: May 28, 2013
    Assignee: Texas Instruments Incorporated
    Inventors: Philippe Gentric, Olivier Alavoine
  • Patent number: 8452899
    Abstract: A method for data distribution, including distributing logical addresses among an initial set of devices so as to provide balanced access, and transferring the data to the devices in accordance with the logical addresses. If a device is added to the initial set, forming an extended set, the logical addresses are redistributed among the extended set so as to cause some logical addresses to be transferred from the devices in the initial set to the additional device. There is substantially no transfer of the logical addresses among the initial set. If a surplus device is removed from the initial set, forming a depleted set, the logical addresses of the surplus device are redistributed among the depleted set. There is substantially no transfer of the logical addresses among the depleted set. In both cases the balanced access is maintained.
    Type: Grant
    Filed: December 15, 2011
    Date of Patent: May 28, 2013
    Assignee: International Business Machines Corporation
    Inventors: Ofir Zohar, Yaron Revah, Haim Helman, Dror Cohen
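    Code sketch: a small Python illustration of the rebalancing property described above (addresses move only onto an added device, or only off a removed one); the data structures and the most/least-loaded heuristics are illustrative.
      from collections import defaultdict

      def _by_device(mapping):
          groups = defaultdict(list)
          for addr, dev in mapping.items():
              groups[dev].append(addr)
          return groups

      def add_device(mapping, new_device):
          # Move just enough addresses onto the new device to restore balance;
          # no address moves between the pre-existing devices.
          groups = _by_device(mapping)
          share = len(mapping) // (len(groups) + 1)
          for _ in range(share):
              donor = max(groups, key=lambda d: len(groups[d]))
              mapping[groups[donor].pop()] = new_device

      def remove_device(mapping, surplus_device):
          # Spread only the surplus device's addresses over the remaining devices.
          groups = _by_device(mapping)
          for addr in groups.pop(surplus_device, []):
              target = min(groups, key=lambda d: len(groups[d]))
              mapping[addr] = target
              groups[target].append(addr)

      mapping = {addr: f"dev{addr % 3}" for addr in range(12)}   # 3 devices, 4 addresses each
      add_device(mapping, "dev3")       # dev3 now owns 3 addresses taken from the others
      remove_device(mapping, "dev0")    # dev0's addresses spread over dev1, dev2, dev3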
  • Patent number: 8452845
    Abstract: Compute nodes of a parallel computer organized for collective operations via a network, each compute node having a receive buffer and establishing a topology for the network; selecting a schedule for a broadcast operation; depositing, by a root node of the topology, broadcast data in a target node's receive buffer, including performing a DMA operation with a well-known memory location for the target node's receive buffer; depositing, by the root node in a memory region designated for storing broadcast data length, a length of the broadcast data, including performing a DMA operation with a well-known memory location of the broadcast data length memory region; and triggering, by the root node, the target node to perform a next DMA operation, including depositing, in a memory region designated for receiving injection instructions for the target node, an instruction to inject the broadcast data into the receive buffer of a subsequent target node.
    Type: Grant
    Filed: November 20, 2012
    Date of Patent: May 28, 2013
    Assignee: International Business Machines Corporation
    Inventors: Charles J. Archer, Michael A. Blocksome, Joseph D. Ratterman, Brian E. Smith
  • Patent number: 8443155
    Abstract: An object storage system comprises one or more computer processors or threads that can concurrently access a shared memory, the shared memory comprising an array of equally-sized cells. In one embodiment, each cell is of the size used by the processors to represent a pointer, e.g., 64 bits. Using an algorithm performing only one memory write, and using a hardware-provided transactional operation, such as a compare-and-swap instruction, to implement the memory write, concurrent access is safely accommodated in a lock-free manner.
    Type: Grant
    Filed: December 31, 2009
    Date of Patent: May 14, 2013
    Assignee: Facebook, Inc.
    Inventors: Keith Adams, Spencer Ahrens
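    Code sketch: a Python illustration of the single-write, CAS-based publication pattern the abstract describes; CPython exposes no user-level compare-and-swap instruction, so the atomic primitive is emulated with a lock here, and the probing policy is illustrative.
      import threading

      class SharedCellArray:
          def __init__(self, size):
              self.cells = [None] * size
              self._cas_lock = threading.Lock()   # stands in for the hardware CAS instruction

          def compare_and_swap(self, index, expected, new):
              with self._cas_lock:
                  if self.cells[index] is expected:
                      self.cells[index] = new
                      return True
                  return False

      def publish(array, start_index, obj):
          # Lock-free publication: probe cells until a CAS from None succeeds.
          # The successful CAS is the algorithm's single memory write.
          index = start_index
          while True:
              if array.compare_and_swap(index, None, obj):
                  return index
              index = (index + 1) % len(array.cells)

      store = SharedCellArray(8)
      slot = publish(store, start_index=3, obj="record-42")
      print(slot, store.cells[slot])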
  • Patent number: 8433874
    Abstract: A memory system architecture is provided in which a memory controller controls operations of memory devices in a serial interconnection configuration. The memory controller has an output serial interface for sending memory commands and an input serial interface for receiving memory responses for those memory commands requisitioning such responses. Each memory device includes a memory, such as, for example, flash memory (e.g., NAND- and NOR-type flash memories). In an initialization phase, the memory devices are assigned with consecutive number addresses. The memory controller sends a target address and can recognize the type of the targeted memory device. A data path for the memory commands and the memory responses is provided by the interconnection.
    Type: Grant
    Filed: June 29, 2007
    Date of Patent: April 30, 2013
    Assignee: Mosaid Technologies Incorporated
    Inventors: HakJune Oh, Hong Beom Pyeon, Jin-Ki Kim
  • Patent number: 8429374
    Abstract: System, method, and program to perform simultaneous read and write operations in a NAND-type memory device, including: assigning a first partition in a NAND-type memory device, wherein the first partition is configured to perform read operations on high priority read content; assigning a second partition in the NAND-type memory device, wherein the second partition is configured to perform read operations and write operations, wherein the read operations are performed on non-high priority read content; and controlling the first partition and second partition to operate in a simultaneous manner.
    Type: Grant
    Filed: March 22, 2010
    Date of Patent: April 23, 2013
    Assignees: Sony Corporation, Sony Mobile Communications AB
    Inventor: Wladyslaw Bolanowski
  • Patent number: 8429353
    Abstract: A method and a system for processor nodes configurable to operate in various distributed shared memory topologies. The processor node may be coupled to a first local memory. The first processor node may include a first local arbiter, which may be configured to perform one or more of a memory node decode or a coherency check on the first local memory. The processor node may also include a switch coupled to the first local arbiter for enabling and/or disabling the first local arbiter. Thus one or more processor nodes may be coupled together in various distributed shared memory configurations, depending on the configuration of their respective switches.
    Type: Grant
    Filed: May 20, 2008
    Date of Patent: April 23, 2013
    Assignee: Oracle America, Inc.
    Inventors: Ramaswamy Sivaramakrishnan, Stephen E. Phillips
  • Patent number: 8423722
    Abstract: Solid State Drives (SSDs) can yield very high performance if they are designed properly. An SSD typically includes both a front end that interfaces with the host and a back end that interfaces with the flash media. Typically, SSDs include flash media designed with a high degree of parallelism that can support very high bandwidth on input/output (I/O). An SSD front end designed according to a traditional hard disk drive (HDD) model will not be able to take advantage of the high performance offered by the typical flash media. Embodiments of the invention provide improved management of multiple I/O threads that take advantage of the high-performing and concurrent nature of the back-end media, so the resulting storage system can achieve very high performance.
    Type: Grant
    Filed: August 26, 2011
    Date of Patent: April 16, 2013
    Assignee: Western Digital Technologies, Inc.
    Inventors: Marvin R. Deforest, Matthew Call, Mei-Man L. Syu
  • Patent number: 8418226
    Abstract: A tamper resistant servicing Agent for providing various services (e.g., data delete, firewall protection, data encryption, location tracking, message notification, and updating software) comprises multiple functional modules, including a loader module (CLM) that loads and gains control during POST, independent of the OS, an Adaptive Installer Module (AIM), and a Communications Driver Agent (CDA). Once control is handed to the CLM, it loads the AIM, which in turn locates, validates, decompresses and adapts the CDA for the detected OS environment. The CDA exists in two forms, a mini CDA that determines whether a full or current CDA is located somewhere on the device, and if not, to load the full-function CDA from a network; and a full-function CDA that is responsible for all communications between the device and the monitoring server. The servicing functions can be controlled by a remote server.
    Type: Grant
    Filed: March 20, 2006
    Date of Patent: April 9, 2013
    Assignee: Absolute Software Corporation
    Inventor: Philip B. Gardner
  • Patent number: 8417898
    Abstract: A protocol chip and a communication conversion circuit are provided in a channel adapter package that is in charge of communications with a host. The communication conversion circuit communicates with the protocol chip using a procedure that conforms to a communication protocol. The communication conversion circuit communicates with a microprocessor using a procedure that is common to multiple communication protocols. It appears to the microprocessor as though communications are being carried out with the same type of channel adapter package.
    Type: Grant
    Filed: October 7, 2010
    Date of Patent: April 9, 2013
    Assignee: Hitachi, Ltd.
    Inventors: Masateru Hemmi, Atsushi Yasuno
  • Patent number: 8417889
    Abstract: An approach is provided to identify a disabled processing core and an active processing core from a set of processing cores included in a processing node. Each of the processing cores is assigned a cache memory. The approach extends a memory map of the cache memory assigned to the active processing core to include the cache memory assigned to the disabled processing core. A first amount of data that is used by a first process is stored by the active processing core to the cache memory assigned to the active processing core. A second amount of data is stored by the active processing core to the cache memory assigned to the disabled processing core using the extended memory map.
    Type: Grant
    Filed: July 24, 2009
    Date of Patent: April 9, 2013
    Assignee: International Business Machines Corporation
    Inventors: Diane Garza Flemming, William A. Maron, Ram Raghavan, Mysore Sathyanarayana Srinivas, Basu Vaidyanathan
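    Code sketch: an illustrative Python model of extending an active core's memory map over a disabled core's cache; the classes, the two-entry capacities, and the spill-over rule are invented for the example.
      class CoreCache:
          def __init__(self, core_id, capacity):
              self.core_id = core_id
              self.capacity = capacity
              self.lines = {}

      class ExtendedCacheMap:
          def __init__(self, own_cache, borrowed_cache):
              self.own = own_cache              # cache of the active core
              self.borrowed = borrowed_cache    # cache of the disabled core

          def store(self, address, data):
              # Use the active core's own cache first; overflow spills into the
              # disabled core's cache through the extended memory map.
              target = self.own if len(self.own.lines) < self.own.capacity else self.borrowed
              target.lines[address] = data
              return target.core_id

      active, disabled = CoreCache("core0", capacity=2), CoreCache("core1", capacity=2)
      cmap = ExtendedCacheMap(active, disabled)
      print(cmap.store(0x100, "first amount of data"))    # core0
      print(cmap.store(0x140, "first amount of data"))    # core0
      print(cmap.store(0x180, "second amount of data"))   # core1 (own cache full)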
  • Patent number: 8412918
    Abstract: According to various embodiments, a programmable device assembly includes an FPGA coupled to a nonvolatile serial configuration memory (e.g., serial flash memory) and a volatile fast bulk memory (e.g., SRAM or SDRAM). The nonvolatile serial configuration memory contains both the FPGA configuration data and CPU instructions. When a predetermined condition occurs, a serial memory access component that is hard coded on the FPGA automatically reads the configuration data from the nonvolatile serial configuration memory. The configuration data is used to configure the FPGA with various components, including a CPU, a boot ROM with code for a boot copier, and a bus structure. When the CPU boots, code for the boot copier is executed so that the CPU instructions are copied from the nonvolatile serial configuration memory to the volatile fast bulk memory. The CPU then executes the CPU instructions stored in the volatile fast bulk memory.
    Type: Grant
    Filed: September 22, 2010
    Date of Patent: April 2, 2013
    Assignee: Altera Corporation
    Inventors: Timothy P. Allen, Andrew Draper, Aaron Ferrucci, Kerry Veenstra
  • Patent number: 8412886
    Abstract: In a configuration in which a port unit is provided that is shared among threads and has a plurality of entries for holding access requests, and in which access requests for a cache shared by a plurality of threads executing at the same time are controlled using the port unit, the access request issued from each thread is registered on a port section of the port unit that is assigned to that thread, thereby controlling the port unit to be divided for use in accordance with the thread configuration. In selecting an access request, access requests are first selected for each thread, based on the specified priority control, from among the access requests issued by the threads and held in the port unit; thereafter a final access request is selected in accordance with a thread selection signal from among those selected access requests.
    Type: Grant
    Filed: December 16, 2009
    Date of Patent: April 2, 2013
    Assignee: Fujitsu Limited
    Inventor: Naohiro Kiyota
  • Patent number: 8412890
    Abstract: A scalable, performance-based, volume allocation technique that can be applied in large storage controller collections is disclosed. A global resource tree of multiple nodes representing interconnected components of a storage system in a plurality of component layers is analyzed to yield gap values for each node (e.g., a bottom-up estimation). The gap value for each node is an estimate of the amount in GB of the new workload that can be allocated in the subtree of that node without exceeding the performance and space bounds at any of the nodes in that subtree. The gap values of the global resource tree are further analyzed to generate an ordered allocation list of the volumes of the storage system (e.g., a top-down selection). The volumes may be applied to a storage workload in the order of the allocation list and the gap values and list are updated.
    Type: Grant
    Filed: March 8, 2011
    Date of Patent: April 2, 2013
    Assignee: International Business Machines Corporation
    Inventors: Bhuvan Bamba, Madhukar R. Korupolu
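    Code sketch: a compact Python illustration of the bottom-up gap estimation and top-down ordering the abstract outlines; the single headroom_gb bound per node collapses the patent's separate space and performance bounds into one illustrative number.
      class ResourceNode:
          def __init__(self, name, headroom_gb, children=()):
              self.name = name
              self.headroom_gb = headroom_gb      # combined space/performance bound, in GB
              self.children = list(children)
              self.gap = 0.0

          def compute_gaps(self):
              # Bottom-up: a subtree can absorb no more than this node's own headroom
              # and no more than its children can absorb collectively.
              if not self.children:
                  self.gap = self.headroom_gb
              else:
                  self.gap = min(self.headroom_gb,
                                 sum(child.compute_gaps() for child in self.children))
              return self.gap

      def allocation_order(root):
          # Top-down: list the leaf volumes, largest gap first.
          leaves, stack = [], [root]
          while stack:
              node = stack.pop()
              if node.children:
                  stack.extend(node.children)
              else:
                  leaves.append(node)
          return sorted(leaves, key=lambda n: n.gap, reverse=True)

      vol_a, vol_b, vol_c = ResourceNode("vol-a", 40), ResourceNode("vol-b", 10), ResourceNode("vol-c", 70)
      root = ResourceNode("controller", 120,
                          [ResourceNode("pool-1", 90, [vol_a, vol_b]),
                           ResourceNode("pool-2", 60, [vol_c])])
      root.compute_gaps()
      print([(v.name, v.gap) for v in allocation_order(root)])   # vol-c, vol-a, vol-b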
  • Patent number: 8407426
    Abstract: Technologies are generally described for a system for sending a data block stored in a cache. In some examples described herein, a system may comprise a first processor in a first tile. The first processor is effective to generate a request for a data block, the request including a destination identifier identifying a destination tile for the data block, the destination tile being distinct from the first tile. Some example systems may further comprise a second tile effective to receive the request, the second tile effective to determine a data tile including the data block, the second tile further effective to send the request to the data tile. Some example systems may still further comprise a data tile effective to receive the request from the second tile, the data tile effective to send the data block to the destination tile.
    Type: Grant
    Filed: March 2, 2012
    Date of Patent: March 26, 2013
    Assignee: Empire Technology Development, LLC
    Inventor: Yan Solihin
  • Publication number: 20130073814
    Abstract: A computer system comprising a plurality of nodes, the plurality of nodes being grouped into m node groups, each node group comprising n nodes, wherein m is a natural number greater than or equal to 1 and n is a natural number greater than or equal to 2. The n nodes in each of the node groups are connected directly or indirectly into a dual interconnection structure, wherein first node controllers of the n nodes in the same node group are connected directly or indirectly to form a first interconnection structure, and second node controllers of nodes in the same node group are connected directly or indirectly to form a second interconnection structure. Therefore, fewer interconnection chips are required, the access path between nodes is shortened, the access delay time is reduced, the cost is reduced, and the system performance is improved.
    Type: Application
    Filed: November 13, 2012
    Publication date: March 21, 2013
    Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    Inventor: HUAWEI TECHNOLOGIES CO., LTD.
  • Patent number: 8402228
    Abstract: An apparatus includes a processor and a volatile memory that is configured to be accessible in an active memory sharing configuration. The apparatus includes a machine-readable medium encoded with instructions executable by the processor. The instructions include first virtual machine instructions configured to access the volatile memory with a first virtual machine. The instructions include second virtual machine instructions configured to access the volatile memory with a second virtual machine. The instructions include virtual machine monitor instructions configured to page data out from a shared memory to a reserved memory section in the volatile memory responsive to the first virtual machine or the second virtual machine paging the data out from the shared memory or paging the data in to the shared memory. The shared memory is shared across the first virtual machine and the second virtual machine. The volatile memory includes the shared memory.
    Type: Grant
    Filed: June 30, 2010
    Date of Patent: March 19, 2013
    Assignee: International Business Machines Corporation
    Inventors: Vaijayanthimala K. Anand, David Navarro, Bret R. Olszewski, Sergio Reyes
  • Patent number: 8402199
    Abstract: A memory management system and a memory management method are disclosed. The memory management system includes a first memory, at least one secondary memory, and a memory management device. The first memory includes a normal access memory bank and at least one switching access memory bank. The secondary memory includes at least one secondary access memory bank corresponding to the switching access memory bank. The memory management device reads/writes the normal access memory bank or the secondary access memory bank.
    Type: Grant
    Filed: May 22, 2012
    Date of Patent: March 19, 2013
    Assignee: Sonix Technology Co., Ltd.
    Inventors: Chien-Long Kao, Yi-Chih Hsin
  • Patent number: 8402106
    Abstract: An apparatus and a method for operating on data at a cache node of a data grid system are described. An asynchronous future-based interface of a computer system receives a request to operate on a cache node of a cluster. An acknowledgment is sent back upon receipt of the request prior to operating on the cache node. The cache node is then operated on based on the request. The operation is replicated to other cache nodes in the cluster. An acknowledgment that the operation has been completed in the cluster is sent back.
    Type: Grant
    Filed: April 14, 2010
    Date of Patent: March 19, 2013
    Assignee: Red Hat, Inc.
    Inventor: Manik Surtani
  • Publication number: 20130067172
    Abstract: Methods and structure for improved buffer management in a storage controller. A plurality of processes in the controller each transmits buffer management requests to buffer management control logic. A plurality of reserved portions and a remaining non-reserved portion are defined in a shared pool memory managed by the buffer management control logic. Each reserved portion is defined as a corresponding minimum amount of memory of the shared pool. Each reserved portion is associated with a private pool identifier. Each allocation request from a client process supplies a private pool identifier for the associated buffer to be allocated. The buffer is allocated from the reserved portion if there is sufficient available space in the reserved portion identified by the supplied private pool identifier. Otherwise, the buffer is allocated if sufficient memory is available in the non-reserved portion. Otherwise, the request is queued for later re-processing.
    Type: Application
    Filed: March 28, 2012
    Publication date: March 14, 2013
    Applicant: LSI CORPORATION
    Inventors: James A. Rizzo, Vinu Velayudhan, Adam Weiner, Rakesh Chandra, Phillip V. Nguyen
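    Code sketch: a Python illustration of the reserved/non-reserved allocation path described above; sizes are arbitrary units and the FIFO list standing in for the re-processing queue is an illustrative simplification.
      class SharedBufferPool:
          def __init__(self, total, reserved):
              # reserved: private pool identifier -> guaranteed minimum amount
              self.reserved_free = dict(reserved)
              self.unreserved_free = total - sum(reserved.values())
              self.pending = []                      # requests queued for later re-processing

          def allocate(self, pool_id, size):
              if self.reserved_free.get(pool_id, 0) >= size:
                  self.reserved_free[pool_id] -= size    # satisfied from the private reservation
                  return True
              if self.unreserved_free >= size:
                  self.unreserved_free -= size           # satisfied from the non-reserved portion
                  return True
              self.pending.append((pool_id, size))       # neither portion has room: queue it
              return False

      pool = SharedBufferPool(total=100, reserved={"raid": 30, "cache": 20})
      print(pool.allocate("raid", 25))    # True: fits in the raid reservation
      print(pool.allocate("raid", 10))    # True: reservation exhausted, non-reserved portion used
      print(pool.allocate("misc", 60))    # False: only 40 non-reserved units remain, request queued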
  • Patent number: 8397009
    Abstract: An interconnection network with m first electronic circuits and n second electronic circuits, comprising m interconnection sub-networks, each interconnection sub-network including: at least one addressing bus and one information transfer bus connecting one of the m first circuits to all the n second circuits, the information transfer bus comprising a plurality of portions of signal transmission lines connected to each other through signal repeater devices, and a controller device that controls the signal repeater devices, at least one of the signal repeater devices is controlled to be active depending on a value of an addressing signal to be sent to the addressing bus by said one of the m first circuits to the controller device, where m and n are integer numbers greater than 1.
    Type: Grant
    Filed: June 2, 2010
    Date of Patent: March 12, 2013
    Assignee: Commissariat a l'Energie Atomique et aux energies alternatives
    Inventor: Francois Jacquet
  • Patent number: 8397013
    Abstract: One embodiment of the present invention sets forth a hybrid memory module that combines memory devices of different types while presenting a single technology interface. The hybrid memory module includes a number of super-stacks and a first interface configured to transmit data between the super-stacks and a memory controller. Each super-stack includes a number of sub-stacks, a super-controller configured to control the sub-stacks, and a second interface configured to transmit data between the sub-stacks and the first interface. Combining memory devices of different types allows utilizing the favorable properties of each type of the memory devices, while hiding their unfavorable properties from the memory controller.
    Type: Grant
    Filed: March 27, 2008
    Date of Patent: March 12, 2013
    Assignee: Google Inc.
    Inventors: Daniel L. Rosenband, Frederick Daniel Weber, Michael John Sebastian Smith
  • Publication number: 20130061004
    Abstract: In a memory/logic conjugate system, a plurality of cluster memory chips, each including a plurality of cluster memories (20) made up of basic cells (10) arranged in a cluster (each basic cell including a memory circuit), and a controller chip that controls the plurality of cluster memories are three-dimensionally stacked. The cluster memories located along the stacking direction of the cluster memory chips and the controller chip are electrically coupled to the controller chip via a multibus (11) including a through-via. An arbitrary one of the basic cells is directly accessed through the multibus from the controller chip so that truth value data is written therein, whereby that basic cell is switched to a logic circuit as a conjugate.
    Type: Application
    Filed: October 4, 2012
    Publication date: March 7, 2013
    Inventors: Kanji OTSUKA, Tsuneo ITO, Yoichi SATO, Masahiro YOSHIDA, Shigeru YAMAMOTO, Takeshi KOYAMA, Yuko TANBA, Yutaka AKIYAMA
  • Patent number: 8392659
    Abstract: A method, programmed medium and system are provided for enabling a core's cache capacity to be increased by using the caches of the disabled or non-enabled cores on the same chip. Caches of disabled or non-enabled cores on a chip are made accessible to store cachelines for those chip cores that have been enabled, thereby extending cache capacity of enabled cores.
    Type: Grant
    Filed: November 5, 2009
    Date of Patent: March 5, 2013
    Assignee: International Business Machines Corporation
    Inventors: Vaijayanthimala K. Anand, Diane Garza Flemming, William A. Maron, Mysore Sathyanarayana Srinivas
  • Patent number: 8380937
    Abstract: A system including a server apparatus that executes an application program and a client apparatus enabling a user to utilize the application program by communicating with the server apparatus based on an instruction of the user. The server apparatus includes: an output detection section for detecting output-processing, which is processing of outputting data from the application program into a shared area; and an output control section for storing instruction information in the shared area, instead of storing the output data outputted from the application program therein, in response to the detection of the output-processing, the instruction information specifying an acquisition method by which an authorized client apparatus acquires the output data.
    Type: Grant
    Filed: November 28, 2006
    Date of Patent: February 19, 2013
    Assignee: International Business Machines Corporation
    Inventors: Sanehiro Furuichi, Yuriko Kanai, Masana Murase, Tasuku Otani
  • Patent number: 8380960
    Abstract: In a distributed storage system such as those in a data center or web based service, user characteristics and characteristics of the hardware such as storage size and storage throughput impact the capacity and performance of the system. In such systems, an allocation is a mapping from the user to the physical storage devices where data/information pertaining to the user will be stored. Policies regarding quality of service and reliability including replication of user data/information may be provided by the entity managing the system. A policy may define an objective function which quantifies the value of a given allocation. Maximizing the value of the allocation will optimize the objective function. This optimization may include the dynamics in terms of changes in patterns of user characteristics and the cost of moving data/information between the physical devices to satisfy a particular allocation.
    Type: Grant
    Filed: November 4, 2008
    Date of Patent: February 19, 2013
    Assignee: Microsoft Corporation
    Inventors: Hongzhong Jia, Moises Goldszmidt
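    Code sketch: a toy Python version of the allocation objective sketched above, trading off device headroom against the cost of moving user data; the weights, the greedy single-user moves, and all names are illustrative rather than the patent's optimization method.
      def objective(allocation, demand, capacity, current, move_cost=1.0):
          used = {dev: 0.0 for dev in capacity}
          for user, dev in allocation.items():
              used[dev] += demand[user]
          # Reward remaining headroom lightly, penalize overcommitted devices heavily.
          fit = sum(0.1 * max(capacity[d] - used[d], 0) - 10.0 * max(used[d] - capacity[d], 0)
                    for d in capacity)
          moved = sum(move_cost * demand[u] for u in allocation if allocation[u] != current.get(u))
          return fit - moved

      def reallocate(demand, capacity, current):
          # Greedy hill climbing: keep moving single users while the objective improves.
          allocation, best = dict(current), objective(current, demand, capacity, current)
          improved = True
          while improved:
              improved = False
              for user in demand:
                  for dev in capacity:
                      trial = dict(allocation, **{user: dev})
                      score = objective(trial, demand, capacity, current)
                      if score > best:
                          allocation, best, improved = trial, score, True
          return allocation

      demand = {"u1": 40, "u2": 35, "u3": 10}
      capacity = {"disk-a": 50, "disk-b": 60}
      current = {"u1": "disk-a", "u2": "disk-a", "u3": "disk-b"}
      print(reallocate(demand, capacity, current))   # greedily moves one user off the overcommitted disk-a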
  • Patent number: 8380806
    Abstract: A system and method are disclosed for enabling a storage virtualization system to dynamically discover shares on a network attached storage file system. Certain network attached storage systems represent user shares using abbreviated symbolic path names rather than full absolute path names. These network attached storage systems can correctly map the abbreviated path address to the actual file location; however, when a storage virtualization system is implemented to manage shares or files in these shares, it cannot access these files because it does not have the absolute path address. An embodiment of the present invention provides software instructions to augment the capabilities of the storage virtualization system, enabling it to map files with abbreviated share names, and therefore provide it with the ability to access these types of network attached storage systems.
    Type: Grant
    Filed: September 28, 2007
    Date of Patent: February 19, 2013
    Assignee: EMC Corporation
    Inventors: Xie Fen, Mingzhou Joe Sun
  • Patent number: 8375174
    Abstract: Described are techniques for partitioning memory. A plurality of boards is provided. Each of the plurality of boards includes a physical memory portion and a set of one or more processor. The physical memory portion in each of said plurality of boards is partitioned into a plurality of logical partitions including a global memory partition accessible by any processor on any of the plurality of boards and one or more other memory partitions configured for use by one or more processors of said each board. Each of the one or more other memory partitions not being accessible to a processor on a board other than said each board.
    Type: Grant
    Filed: March 29, 2010
    Date of Patent: February 12, 2013
    Assignee: EMC Corporation
    Inventors: Jerome Cartmell, Steven McClure, Alesia Tringale
  • Publication number: 20130036273
    Abstract: Described are memory modules that include a configurable signal buffer that manages communication between memory devices and a memory controller. The buffer can be configured to support threading to reduce access granularity, the frequency of row-activation, or both. The buffer can translate controller commands to access information of a specified granularity into subcommands seeking to access information of reduced granularity. The reduced-granularity information can then be combined, as by concatenation, and conveyed to the memory controller as information of the specified granularity.
    Type: Application
    Filed: August 3, 2012
    Publication date: February 7, 2013
    Applicant: Rambus Inc.
    Inventor: Ian Shaeffer
  • Patent number: 8370595
    Abstract: A first SMP computer has first and second processing units and a first system memory pool, a second SMP computer has third and fourth processing units and a second system memory pool, and a third SMP computer has at least fifth and sixth processing units and third, fourth and fifth system memory pools. The fourth system memory pool is inaccessible to the third, fourth and sixth processing units and accessible to at least the second and fifth processing units, and the fifth system memory pool is inaccessible to the first, second and sixth processing units and accessible to at least the fourth and fifth processing units. A first interconnect couples the second processing unit for load-store coherent, ordered access to the fourth system memory pool, and a second interconnect couples the fourth processing unit for load-store coherent, ordered access to the fifth system memory pool.
    Type: Grant
    Filed: December 21, 2009
    Date of Patent: February 5, 2013
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Charles F. Marino, William J. Starke, Derek E. Williams
  • Patent number: 8370597
    Abstract: Technologies are described for implementing a migration mechanism in a storage system containing multiple tiers of storage with each tier having different cost and performance parameters. Access statistics can be collected for each territory, or storage entity, within the storage system. Data that is accessed more frequently can be migrated toward higher performance storage tiers while data that is accessed less frequently can be migrated towards lower performance storage tiers. The placement of data may be governed first by the promotion of territories with higher access frequency to higher tiers. Secondly, data migration may be governed by demoting territories to lower tiers to create room for the promotion of more eligible territories from the next lower tier. In instances where space is not available on the next lower tier, further demotion may take place to an even lower tier in order to make space for the first demotion.
    Type: Grant
    Filed: April 11, 2008
    Date of Patent: February 5, 2013
    Assignee: American Megatrends, Inc.
    Inventors: Paresh Chatterjee, Ajit Narayanan, Loganathan Ranganathan, Sharon Enoch
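    Code sketch: a simplified Python illustration of the promote-first, demote-to-make-room policy described above; the territory records, tier capacities, and one-step demotion are illustrative (the patent also allows cascading demotion to even lower tiers).
      def plan_migrations(territories, free_slots):
          # territories: id -> (tier, access_count); tier 0 is the fastest tier.
          # free_slots[i] is the number of free territory slots in tier i.
          moves = []
          for tid in sorted(territories, key=lambda t: -territories[t][1]):   # hottest first
              tier, hits = territories[tid]
              if tier == 0:
                  continue
              upper = tier - 1
              if free_slots[upper] == 0:
                  # Demote the coldest territory in the upper tier to make room,
                  # provided the lower tier can take it and it really is colder.
                  in_upper = [t for t in territories if territories[t][0] == upper]
                  if not in_upper or free_slots[tier] == 0:
                      continue
                  victim = min(in_upper, key=lambda t: territories[t][1])
                  if territories[victim][1] >= hits:
                      continue
                  territories[victim] = (tier, territories[victim][1])
                  free_slots[tier] -= 1
                  free_slots[upper] += 1
                  moves.append((victim, upper, tier))
              territories[tid] = (upper, hits)
              free_slots[upper] -= 1
              free_slots[tier] += 1
              moves.append((tid, tier, upper))
          return moves

      state = {"t1": (1, 500), "t2": (0, 20), "t3": (1, 5)}
      print(plan_migrations(state, free_slots=[0, 1]))   # demotes t2, then promotes t1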
  • Patent number: 8364908
    Abstract: Embodiments of the invention enable application programs running across multiple compute nodes of a highly parallel system to selectively migrate objects from one node to another. For example, when an object becomes too large, a node containing the object may migrate the object to another node, thereby freeing memory space. Whether a large object is migrated can be dependent on how frequently the object is used by the application. Because the memory used by such an object is freed for other uses by the application, overall application performance may be improved. On large parallel systems with thousands of compute nodes, even relatively small improvements in application performance on an individual compute node may be magnified many times, resulting in dramatic improvements in overall application performance.
    Type: Grant
    Filed: April 28, 2008
    Date of Patent: January 29, 2013
    Assignee: International Business Machines Corporation
    Inventors: Eric L. Barsness, David L. Darrington, Amanda Peters, John M. Santosuosso
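    Code sketch: a minimal Python version of the kind of migration decision the abstract mentions (the object has grown large but is not used frequently enough to keep local); every threshold and name here is hypothetical.
      def should_migrate(obj_size_bytes, accesses_per_sec, free_memory_bytes,
                         size_threshold=64 << 20, hot_threshold=100.0):
          # Candidate for migration off this compute node: large relative to the
          # configured threshold or to remaining free memory, and not frequently used.
          too_large = (obj_size_bytes > size_threshold
                       or obj_size_bytes > free_memory_bytes // 4)
          return too_large and accesses_per_sec < hot_threshold

      print(should_migrate(obj_size_bytes=256 << 20, accesses_per_sec=3,
                           free_memory_bytes=512 << 20))    # True: big and rarely used
      print(should_migrate(obj_size_bytes=256 << 20, accesses_per_sec=5000,
                           free_memory_bytes=512 << 20))    # False: hot, keep it local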
  • Patent number: 8364926
    Abstract: A memory module having reduced access granularity. The memory module includes a substrate having signal lines thereon that form a control path and first and second data paths, and further includes first and second memory devices coupled in common to the control path and coupled respectively to the first and second data paths. The first and second memory devices include control circuitry to receive respective first and second memory access commands via the control path and to effect concurrent data transfer on the first and second data paths in response to the first and second memory access commands.
    Type: Grant
    Filed: February 29, 2012
    Date of Patent: January 29, 2013
    Assignee: Rambus Inc.
    Inventors: Craig E. Hampel, Frederick A. Ware
  • Patent number: 8364922
    Abstract: An aggregate symmetric multiprocessor (SMP) data processing system includes a first SMP computer including at least first and second processing units and a first system memory pool and a second SMP computer including at least third and fourth processing units and second and third system memory pools. The second system memory pool is a restricted access memory pool inaccessible to the fourth processing unit and accessible to at least the second and third processing units, and the third system memory pool is accessible to both the third and fourth processing units. An interconnect couples the second processing unit in the first SMP computer for load-store coherent, ordered access to the second system memory pool in the second SMP computer, such that the second processing unit in the first SMP computer and the second system memory pool in the second SMP computer form a synthetic third SMP computer.
    Type: Grant
    Filed: December 21, 2009
    Date of Patent: January 29, 2013
    Assignee: International Business Machines Corporation
    Inventor: William J. Starke
  • Patent number: 8364914
    Abstract: Methods and systems are described for performing storage operations on electronic data in a network. In response to the initiation of a storage operation and according to a first set of selection logic, a media management component is selected to manage the storage operation. In response to the initiation of a storage operation and according to a second set of selection logic, a network storage device is selected to associate with the storage operation. The selected media management component and the selected network storage device perform the storage operation on the electronic data.
    Type: Grant
    Filed: May 7, 2012
    Date of Patent: January 29, 2013
    Assignee: CommVault Systems, Inc.
    Inventors: Rajiv Kottomtharayil, Parag Gokhale, Anand Prahlad, Manoj Kumar Vijayan Retnamma, David Ngo, Varghese Devassy
  • Patent number: 8359419
    Abstract: A system LSI includes first and second memories, first and second buses, a bus bridge that performs signal transfer between the first and second buses, a first bus system connecting to the first bus and accessing the first or second memory, a second bus system connecting to the second bus and accessing the first or second memory, a memory access circuit having first and second bus-side input/output terminals that perform signal transfer to/from the first and second buses and first and second memory-side input/output terminals that perform signal transfer to/from the first and second memories.
    Type: Grant
    Filed: November 12, 2009
    Date of Patent: January 22, 2013
    Assignee: Fujitsu Limited
    Inventors: Shinichi Sutou, Kiyomitsu Katou
  • Patent number: 8359429
    Abstract: System and method for distributing volume status information in a storage system. According to one embodiment, a system may include a plurality of volumes configured to store data, where the volumes are configured as mirrors of one another, and a plurality of hosts configured to access the plurality of volumes. A first one of the plurality of hosts may be configured to execute a mirror recovery process and to maintain a progress indication of the mirror recovery process, and the first host may be further configured to distribute the progress indication to another one or more of the plurality of hosts.
    Type: Grant
    Filed: November 8, 2004
    Date of Patent: January 22, 2013
    Assignee: Symantec Operating Corporation
    Inventors: Gopal Sharma, Richard Gorby, Santosh S. Rao, Aseem Asthana
  • Patent number: 8356050
    Abstract: Methods and systems are provided that may be utilized for spilling in query processing environments.
    Type: Grant
    Filed: November 21, 2011
    Date of Patent: January 15, 2013
    Assignee: Yahoo! Inc.
    Inventors: Chris Olston, Khaled Elmeleegy, Benjamin Reed