Plural Shared Memories Patents (Class 711/148)
  • Publication number: 20140068201
    Abstract: Processors in a compute node offload transactional memory accesses addressing shared memory to a transactional memory agent. The transactional memory agent typically resides near the processors in a particular compute node. The transactional memory agent acts as a proxy for those processors. A first benefit of the invention includes decoupling the processor from the direct effects of remote system failures. Other benefits of the invention include freeing the processor from having to be aware of transactional memory semantics, and allowing the processor to address a memory space larger than the processor's native hardware addressing capabilities. The invention also enables computer system transactional capabilities to scale well beyond the transactional capabilities found in computer systems today.
    Type: Application
    Filed: August 28, 2013
    Publication date: March 6, 2014
    Applicant: Silicon Graphics International Corp.
    Inventor: Eric Fromm
  • Patent number: 8665283
    Abstract: An apparatus including a first memory, a second memory, and a memory interface. The first memory may be configured to store an entire image. The second memory may be configured to store a portion of the image during an image processing operation. The memory interface may be configured to transfer the portion of the image (i) from a source area of the first memory to the second memory prior to the image processing operation and (ii) from the second memory to a destination area of the first memory following the image processing operation. The memory interface may be further configured to select from among four modes of transferring image data from the source area of the first memory to the destination area of the first memory, based upon how the source area and the destination area overlap in the first memory. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: March 29, 2010
    Date of Patent: March 4, 2014
    Assignee: Ambarella, Inc.
    Inventor: Melvyn Lim
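    Illustrative sketch: the abstract does not name the four transfer modes, so the Python example below makes the purely hypothetical assumption that they correspond to the four ways a fixed-length source window and destination window can relate in a flat image buffer, much like a memmove-style copy choosing a safe transfer order.
      def classify_overlap(src_start, dst_start, length):
          """Classify how [src, src+length) and [dst, dst+length) overlap.

          Mode names are hypothetical; the patent only states that four modes
          exist and that selection depends on how the areas overlap.
          """
          src_end = src_start + length
          dst_end = dst_start + length
          if dst_end <= src_start or src_end <= dst_start:
              return "disjoint"        # no overlap: either copy direction is safe
          if dst_start == src_start:
              return "identical"       # fully aliased: nothing needs to move
          if dst_start > src_start:
              return "dst_over_tail"   # copy high-to-low so the tail is not clobbered
          return "dst_over_head"       # copy low-to-high so the head is not clobbered

      print(classify_overlap(0, 8, 16))   # dst_over_tail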
  • Patent number: 8667228
    Abstract: A memory system connected to another apparatus via a data crossbar has a first memory, a second memory that forms a dual configuration together with the first memory, a first memory controller that transmits or receives data to be written into the first memory or data read out from the first memory to or from the other apparatus, a second memory controller that transmits or receives data to be written into the second memory or data read out from the second memory to or from the other apparatus, and a system controller that instructs the first memory controller and the second memory controller to read out, from the first memory and the second memory, data requested to be read out by the other apparatus if the system controller detects that any one of the first data crossbar and the second data crossbar is not capable of transmitting or receiving data.
    Type: Grant
    Filed: March 13, 2012
    Date of Patent: March 4, 2014
    Assignee: Fujitsu Limited
    Inventor: Kazuya Takaku
  • Patent number: 8667230
    Abstract: A digital memory architecture for recognition and recall in support of a host comprises a plurality of pattern processors, each of which has its own random access memory (RAM) and controller, an external data bus and external data bus controller, a results bus and results bus controller, an internal data bus and internal data bus controller, and an external control bus and external control bus controller. Each of the pattern processors may be a general purpose set theoretic processor (GPSTP) operating in interrupt and block modes.
    Type: Grant
    Filed: October 19, 2010
    Date of Patent: March 4, 2014
    Inventor: Curtis L. Harris
  • Patent number: 8667246
    Abstract: A system (10) for virtual disks version control includes a selectively read-only volume (12); at least one topmost overlay (141, 142, 143, 144, 145); and at least two intermediate, selectively read-only overlays (16, 16′, 16″, 16‴) configured as at least two mounting points. The at least one topmost overlay (141, 142, 143, 144, 145) is configured to store the results of redirected write operations. One of the at least two mounting points (16, 16′, 16″, 16‴) and the volume (12) form an image, and the other of the at least two mounting points (16, 16′, 16″, 16‴) and the volume (12) form another image. The at least two intermediate overlays (16, 16′, 16″, 16‴) are operatively located between the volume (12) and the at least one topmost overlay (141, 142, 143, 144, 145).
    Type: Grant
    Filed: May 13, 2009
    Date of Patent: March 4, 2014
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Julien Rope, Philippe Auphelle, Yves Gattegno
  • Patent number: 8656128
    Abstract: A first SMP computer has first and second processing units and a first system memory pool, a second SMP computer has third and fourth processing units and a second system memory pool, and a third SMP computer has at least fifth and sixth processing units and third, fourth and fifth system memory pools. The fourth system memory pool is inaccessible to the third, fourth and sixth processing units and accessible to at least the second and fifth processing units, and the fifth system memory pool is inaccessible to the first, second and sixth processing units and accessible to at least the fourth and fifth processing units. A first interconnect couples the second processing unit for load-store coherent, ordered access to the fourth system memory pool, and a second interconnect couples the fourth processing unit for load-store coherent, ordered access to the fifth system memory pool.
    Type: Grant
    Filed: August 30, 2012
    Date of Patent: February 18, 2014
    Assignee: International Business Machines Corporation
    Inventors: Guy L Guthrie, Charles F. Marino, William J. Starke, Derek E. Williams
  • Patent number: 8656129
    Abstract: An aggregate symmetric multiprocessor (SMP) data processing system includes a first SMP computer including at least first and second processing units and a first system memory pool and a second SMP computer including at least third and fourth processing units and second and third system memory pools. The second system memory pool is a restricted access memory pool inaccessible to the fourth processing unit and accessible to at least the second and third processing units, and the third system memory pool is accessible to both the third and fourth processing units. An interconnect couples the second processing unit in the first SMP computer for load-store coherent, ordered access to the second system memory pool in the second SMP computer, such that the second processing unit in the first SMP computer and the second system memory pool in the second SMP computer form a synthetic third SMP computer.
    Type: Grant
    Filed: August 30, 2012
    Date of Patent: February 18, 2014
    Assignee: International Business Machines Corporation
    Inventor: William J. Starke
  • Patent number: 8650387
    Abstract: An IC chip, an information processing apparatus, a software module control method, an information processing system, an information processing method, and a program for reliably ensuring security before booting a software module are provided. A reader/writer and a mobile phone terminal to be accessed by the reader/writer through proximity communication are provided. In the mobile phone terminal, a first software module transmits commands to second and third software modules. The first software module manages states of the second and third software modules. If, during boot-up of the third software module, the processing of the second software module is started and completed, then the first software module resumes the boot-up of the third software module.
    Type: Grant
    Filed: September 11, 2009
    Date of Patent: February 11, 2014
    Assignee: Sony Corporation
    Inventor: Hirokazu Sugiyama
  • Patent number: 8643888
    Abstract: Disclosed is an image forming apparatus including: at least three storing devices; a detecting unit; and a control unit, wherein the striping is carried out by using all of the storing devices, and the mirroring is carried out by using at least two of the storing devices, excluding at least one storing device. When one of the storing devices fails, if the mirroring was being carried out by using the failed storing device, the mirroring is continued by using the storing device that had not been used for the mirroring, in place of the failed storing device. The striping is carried out by using all of the storing devices except the failed storing device.
    Type: Grant
    Filed: August 10, 2012
    Date of Patent: February 4, 2014
    Assignee: Konica Minolta Business Technologies, Inc.
    Inventor: Katsunori Teshima
  • Patent number: 8639730
    Abstract: A system and method for efficient garbage collection. A general-purpose central processing unit (CPU) sends a garbage collection request and a first log to a special processing unit (SPU). The first log includes an address and a data size of each allocated data object stored in a heap in memory corresponding to the CPU. The SPU has a single instruction multiple data (SIMD) parallel architecture and may be a graphics processing unit (GPU). Due to its architecture, the SPU efficiently performs the operations of a garbage collection algorithm on a local representation of the data objects stored in the memory. The SPU records a list of changes it performs to remove dead data objects and compact live data objects. This list is subsequently sent to the CPU, which performs the included operations. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: September 24, 2012
    Date of Patent: January 28, 2014
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Azeem S. Jiva, Gary R. Frost
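    Illustrative sketch: a minimal Python example (names and record formats are assumptions, not taken from the patent) of the exchange described above: the CPU builds a log of (address, size) pairs for allocated objects, the SPU-side pass produces a list of compaction moves, and the CPU applies them.
      from typing import Dict, List, Set, Tuple

      def build_allocation_log(heap_objects: Dict[int, int]) -> List[Tuple[int, int]]:
          # one (address, size) pair per allocated data object, as in the first log
          return sorted(heap_objects.items())

      def spu_compact(log: List[Tuple[int, int]], live: Set[int]) -> List[Tuple[int, int]]:
          """Return (old_address, new_address) moves that drop dead objects and
          slide live objects toward the start of the heap (hypothetical policy)."""
          moves, next_free = [], 0
          for addr, size in log:
              if addr in live:
                  if addr != next_free:
                      moves.append((addr, next_free))
                  next_free += size
          return moves

      heap = {0: 16, 16: 8, 24: 32}                       # address -> size
      changes = spu_compact(build_allocation_log(heap), live={0, 24})
      print(changes)                                      # [(24, 16)]: CPU applies this list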
  • Patent number: 8626866
    Abstract: A network caching system has a multi-protocol caching filer coupled to an origin server to provide storage virtualization of data served by the filer in response to data access requests issued by multi-protocol clients over a computer network. The multi-protocol caching filer includes a file system configured to manage a sparse volume that “virtualizes” a storage space of the data to thereby provide a cache function that enables access to data by the multi-protocol clients. To that end, the caching filer further includes a multi-protocol engine configured to translate the multi-protocol client data access requests into generic file system primitive operations executable by both the caching filer and the origin server.
    Type: Grant
    Filed: August 10, 2011
    Date of Patent: January 7, 2014
    Assignee: NetApp, Inc.
    Inventors: Jason Ansel Lango, Robert M. English, Paul Christopher Eastham, Qinghua Zheng, Brian Mederic Quirion, Peter Griess, Matthew Benjamin Amdur, Kartik Ayyar, Robert Lieh-Yuan Tsai, David Grunwald, J. Chris Wagner, Emmanuel Ackaouy, Ashish Prakash
  • Patent number: 8627016
    Abstract: A method, system and computer program product are disclosed for maintaining data coherence, for use in a multi-node processing system where each of the nodes includes one or more components. In one embodiment, the method comprises establishing a data domain, assigning a group of the components to the data domain, sending a coherence message from a first component of the processing system to a second component of the processing system, and determining if that second component is assigned to the data domain. In this embodiment, if that second component is assigned to the data domain, the coherence message is transferred to all of the components assigned to the data domain to maintain data coherency among those components. In an embodiment, if that second component is assigned to the data domain, the first component is assigned to the data domain.
    Type: Grant
    Filed: July 8, 2013
    Date of Patent: January 7, 2014
    Assignee: International Business Machines Corporation
    Inventors: Kattamuri Ekanadham, Il Park, Pratap Pattnaik
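    Illustrative sketch: a minimal Python example (component names are assumptions) of forwarding a coherence message to every component assigned to a data domain, and of pulling the sending component into the domain when the target already belongs to it, as the abstract describes.
      domains = {"A": {"node0.cache", "node1.cache"}}      # data domain -> member components

      def send_coherence(domain_id, sender, target, message, deliver):
          members = domains[domain_id]
          if target in members:                            # target assigned to the domain?
              members.add(sender)                          # sender joins the domain
              for component in members - {sender}:
                  deliver(component, message)              # keep all members coherent

      send_coherence("A", "node2.cache", "node1.cache", "invalidate 0x80",
                     deliver=lambda c, m: print(c, "<-", m))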
  • Patent number: 8621159
    Abstract: A memory device loops back control information from one interface to another interface to facilitate sharing of the memory device by multiple devices. In some aspects, a memory controller sends control and address information to one interface of a memory device when accessing the memory device. The memory device may then loop back this control and address information to another interface that is used by another memory controller to access the memory device. The other memory controller may then use this information to determine how to access the memory device. In some aspects a memory device loops back arbitration information from one interface to another interface thereby enabling controller devices that are coupled to the memory device to control (e.g., schedule) accesses of the memory device.
    Type: Grant
    Filed: February 3, 2010
    Date of Patent: December 31, 2013
    Assignee: Rambus Inc.
    Inventors: Frederick A. Ware, John E. Linstadt, Venu M. Kuchibhotla
  • Publication number: 20130339632
    Abstract: A processor management method includes setting a master mechanism in a given processor among multiple processors, where the master mechanism manages the processors; setting a local master mechanism and a virtual master mechanism in each of the processors other than the given processor among the processors, where the local master mechanism and the virtual master mechanism manage each of the processors; and notifying, by the master mechanism, the processors of an offset value of an address to allow a shared memory managed by the master mechanism to be accessed as a continuous memory by the processors. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: August 21, 2013
    Publication date: December 19, 2013
    Applicant: FUJITSU LIMITED
    Inventors: Koichiro YAMASHITA, Hiromasa YAMAUCHI, Takahisa SUZUKI, Koji KURIHARA
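    Illustrative sketch: a minimal Python example, under the assumption that the notified offset value is simply added to a processor-local address so that every processor sees the master-managed shared memory as one continuous range (names are hypothetical).
      class MasterMechanism:
          def __init__(self, shared_base):
              self.shared_base = shared_base

          def offset_for(self, local_base):
              # offset value the master notifies each processor of
              return self.shared_base - local_base

      def translate(local_addr, offset):
          return local_addr + offset                 # continuous view of the shared memory

      offset = MasterMechanism(shared_base=0x8000_0000).offset_for(local_base=0x1000)
      print(hex(translate(0x1234, offset)))          # 0x80000234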
  • Patent number: 8612684
    Abstract: Provided are memory control apparatus and methods for controlling data transfer between a memory controller and at least two logical memory busses connected to memory, comprising a memory controller; a buffer; a bidirectional data bus connecting the controller and the buffer; a control interface connecting the controller and the buffer, the buffer being connected to at least two logical memory busses for memory read and write operations, the buffer comprising data storage areas to buffer data between the controller and the logical memory busses, and logic circuits to decode memory interface control commands from the controller; and a data access and control bus connecting the buffer and each of the logical memory busses to control memory read and write operations.
    Type: Grant
    Filed: December 7, 2007
    Date of Patent: December 17, 2013
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Theodore Carter Briggs, John Michael Wastlick, Gary Belgrave Gostin
  • Patent number: 8612713
    Abstract: Provided is a memory switching control apparatus using an open serial interfacing scheme capable of enhancing flexibility, reliability, availability, and performance in data communication processes between a memory and a processing unit, and an operating method thereof. The memory switching control apparatus includes: one or more processor interfacing units which perform interfacing with one or more processing units; one or more memory interfacing units which have open-serial-interfacing-scheme memory interfacing ports to interface with data storage devices connected to the memory interfacing ports in a serial interfacing scheme; and a plurality of arbitrating units which are provided corresponding to the memory interfacing units to independently arbitrate usage rights of the processor interfacing units to the memory interfacing units.
    Type: Grant
    Filed: May 1, 2008
    Date of Patent: December 17, 2013
    Assignee: Electronics & Telecommunications Research Institute
    Inventors: Bup Joong Kim, Woo Young Choi, Kook Jin Nam, Byung Jun Ahn
  • Patent number: 8607008
    Abstract: A shared resource management system and method are described. In one embodiment a shared resource management system includes a plurality of engines, a shared resource, and a shared resource management unit. In one exemplary implementation the shared resource is a memory and the shared resource management unit is a memory management unit (MMU). The plurality of engines perform processing. The shared resource supports the processing. For example, a memory stores information and instructions for the engines. The shared resource management unit independently caches and invalidates page table entries on a per-engine basis.
    Type: Grant
    Filed: November 1, 2006
    Date of Patent: December 10, 2013
    Assignee: NVIDIA Corporation
    Inventors: David B. Glasco, Lingfeng Yuan
  • Publication number: 20130326158
    Abstract: In general, this disclosure describes techniques for selecting a memory channel in a multi-channel memory system for storing data, so that usage of the memory channels is well-balanced. A request to write data to a logical memory address of a memory system may be received. The logical memory address may include a logical page number and a page offset, where the logical page number maps to a physical page number and the logical memory address maps to a physical memory address. A memory unit out of a plurality of memory units in the memory system may be determined by performing a logical operation on one or more bits of the page offset and one or more bits of the physical page number. The data may be written to a physical memory address in the determined memory unit in the memory system.
    Type: Application
    Filed: June 4, 2012
    Publication date: December 5, 2013
    Applicant: QUALCOMM INCORPORATED
    Inventors: Lin Chen, Long Chen
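    Illustrative sketch: a minimal Python example of the channel-selection rule described above, assuming (hypothetically) two memory channels, 4 KiB pages, and an XOR of one page-offset bit with one physical-page-number bit as the logical operation.
      PAGE_SHIFT = 12                                # 4 KiB pages (assumption)

      def select_channel(physical_addr):
          page_number = physical_addr >> PAGE_SHIFT
          page_offset = physical_addr & ((1 << PAGE_SHIFT) - 1)
          # XOR one bit of the page offset with one bit of the physical page number
          return ((page_offset >> 8) & 1) ^ (page_number & 1)

      for addr in (0x0000, 0x0100, 0x1000, 0x1100):
          print(hex(addr), "-> channel", select_channel(addr))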
  • Publication number: 20130326159
    Abstract: The library server according to certain aspects can manage the use of tape drives according to the data requirements of different storage operation cells. The library server according to certain aspects can also facilitate automatic management of tape media in a tape library by allocating the tapes and slots to different cells. For instance, the library server can manage the positioning and placement of the tapes into appropriate slots within the tape library.
    Type: Application
    Filed: March 7, 2013
    Publication date: December 5, 2013
    Applicant: COMMVAULT SYSTEMS, INC.
    Inventors: Manoj Kumar Vijayan, Rajiv Kottomtharayil, Jaidev Oppath Kochunni
  • Patent number: 8601571
    Abstract: A multi-user computer system and a remote control method for the multi-user computer system includes a remote controller, with an input unit that receives a remote-control password to remotely operate the computer, information on an OS booted when the remote-control password is input, a key input setting the computer in a mode wherein the remote-control password and the OS information are set, and a key input operating the computer, a microprocessor, a wireless transmitter, and a computer, with a wireless receiver, a microprocessor, and a BIOS that automatically loads an OS corresponding to the remote-control password stored in the memory when the received remote-control password stored in the wireless receiver and the remote-control password in the memory are the same.
    Type: Grant
    Filed: August 2, 2006
    Date of Patent: December 3, 2013
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Chan-woo Kim
  • Patent number: 8601309
    Abstract: A method includes providing a persistent common view of data, services, and infrastructure functions accessible via one or more shared storage systems of a plurality of shared storage systems of a virtual shared storage system. The method includes applying different governance policies to two or more shared storage systems of the plurality of shared storage systems. The method includes restricting access to first content accessible via a first shared storage system of the plurality of shared storage systems based on a security level associated with a data consumer. The first content corresponds to at least one of first data, a first service, and a first infrastructure function.
    Type: Grant
    Filed: March 28, 2012
    Date of Patent: December 3, 2013
    Assignee: The Boeing Company
    Inventors: Marc A. Peters, Dennis L. Kuehn, David D. Bettger, Kevin A. Stone
  • Patent number: 8601308
    Abstract: A method providing a persistent common view of data, services, and infrastructure functions accessible via a plurality of shared storage systems of a virtual shared storage system. The method includes applying different governance policies at two or more shared storage systems of the virtual shared storage system. The method includes transferring content from a particular shared storage system to a requesting device without using at least one of a server session, an application-to-server session, and an application session. The content corresponds to at least one of data, a service, and an infrastructure function provided via the particular shared storage system.
    Type: Grant
    Filed: March 28, 2012
    Date of Patent: December 3, 2013
    Assignee: The Boeing Company
    Inventors: Marc A. Peters, Dennis L. Kuehn, David D. Bettger, Kevin A. Stone
  • Patent number: 8601307
    Abstract: A method includes providing a persistent common view of a virtual shared storage system. The virtual shared storage system includes a first shared storage system and a second shared storage system, and the persistent common view includes information associated with data and instructions stored at the first shared storage system and the second shared storage system. The method includes automatically updating the persistent common view to include third information associated with other data and other instructions stored at a third shared storage system in response to adding the third shared storage system to the virtual shared storage system.
    Type: Grant
    Filed: March 28, 2012
    Date of Patent: December 3, 2013
    Assignee: The Boeing Company
    Inventors: Marc A. Peters, Dennis L. Kuehn, David D. Bettger, Kevin A. Stone
  • Patent number: 8589633
    Abstract: A control apparatus, control method and computer readable article of manufacture for controlling data. The control apparatus includes a data storage unit; a plurality of entry storage units; and a plurality of registration units. The data storage unit stores data. Each of the entry storage units stores an entry for registering a pointer to data. If each of the registration units receives an instruction for registering data, then each registration unit (i) searches the entry storage units for an entry storage unit having an empty entry, (ii) registers a pointer to the data to be registered in the retrieved entry storage unit, and (iii) stores the data to be registered and identification information of the retrieved entry storage unit in the data storage unit in such a manner that the data to be registered and the identification information are associated with each other.
    Type: Grant
    Filed: October 27, 2009
    Date of Patent: November 19, 2013
    Assignee: International Business Machines Corporation
    Inventor: Takeshi Ogasawara
  • Patent number: 8583877
    Abstract: A storage system adapted to be coupled to a plurality of host devices via a fiber channel. The storage system including a plurality of storage devices, at least a portion of the plurality of storage devices corresponding to a logical unit of a plurality of logical units, the logical unit having a logical unit number (LUN). The storage system also including a storage control device having a cache memory and controlling to store data, addressed to the LUN, into the portion of the plurality of storage devices. The storage system also including an input device being adapted to be used to set information, which is used to prevent an unauthorized access to the logical unit and which corresponds to a relationship between a host device of the plurality of host devices and the logical unit.
    Type: Grant
    Filed: September 22, 2009
    Date of Patent: November 12, 2013
    Assignee: Hitachi, Ltd.
    Inventors: Akemi Sanada, Toshio Nakano, Hidehiko Iwasaki, Masahiko Sato, Kenji Muraoka, Kenichi Takamoto, Masaaki Kobayashi
  • Patent number: 8566537
    Abstract: A method and apparatus to facilitate shared pointers in a heterogeneous platform. In one embodiment of the invention, the heterogeneous or non-homogeneous platform includes, but is not limited to, a central processing core or unit, a graphics processing core or unit, a digital signal processor, an interface module, and any other form of processing cores. The heterogeneous platform has logic to facilitate sharing of pointers to a location of a memory shared by the CPU and the GPU. By sharing pointers in the heterogeneous platform, the data or information sharing between different cores in the heterogeneous platform can be simplified.
    Type: Grant
    Filed: March 29, 2011
    Date of Patent: October 22, 2013
    Assignee: Intel Corporation
    Inventors: Yang Ni, Rajkishore Barik, Ali-Reza Adl-Tabatabai, Tatiana Shpeisman, Jayanth N. Rao, Ben J. Ashbaugh, Tomasz Janczak
  • Publication number: 20130275687
    Abstract: Embodiments of the present invention provide an approach for memory and process sharing via input/output (I/O) with virtualization. Specifically, embodiments of the present invention provide a circuit design/system in which multiple chipsets are present that communicate with one another via a communications channel. Each chipset generally comprises a processor coupled to a memory unit. Moreover, each component has its own distinct/separate power supply. Pursuant to a communication and/or command exchange with a main controller, a processor of a particular chipset may disengage a memory unit coupled thereto, and then access a memory unit of another chipset (e.g., coupled to another processor in the system). Among other things, such an inventive configuration reduces memory leakage and enhances overall performance and/or efficiency of the system.
    Type: Application
    Filed: April 11, 2012
    Publication date: October 17, 2013
    Inventor: Moon J. Kim
  • Patent number: 8560756
    Abstract: A hybrid memory system is provided that combines the advantages of NAND flash memory devices with the advantages of NOR flash memory devices. The system includes a NAND flash memory portion to provide mass storage and fast programming/erasure capabilities of conventional NAND flash memory devices. The system further comprises a NOR flash memory portion to provide code storage and fast random reading capabilities of conventional NOR flash memory devices. Accordingly, the hybrid memory system provides both mass storage and code storage along with fast programming/erasure speeds and fast random access speeds.
    Type: Grant
    Filed: October 17, 2007
    Date of Patent: October 15, 2013
    Assignee: Spansion LLC
    Inventors: Nian Yang, Jiang Li, Fan Wan Lai
  • Publication number: 20130262787
    Abstract: Low-power, easily scalable architectures for high-speed data handling are critical to modern circuits and systems. Successful architectures must provide efficient data storage and efficient/flexible data retrieval with low power consumption. Data encoding, including that achieved with turbo codes, has data streams split into a sequence of even and odd data bits. These bits are written into multiple single-port memories so that the writing alternates between memories. Scheduling for the reading and writing is performed to avoid conflicts and give priority to the read operations. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: June 30, 2012
    Publication date: October 3, 2013
    Inventors: Venugopal Santhanam, Krushna Prasad Ojha, Pratap Neelasheety, Lokesh Kabra
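    Illustrative sketch: a minimal Python example of the even/odd split described above, assuming two single-port memories written alternately; the read/write conflict scheduling is omitted.
      def split_even_odd(bits):
          """Write even-indexed bits to memory 0 and odd-indexed bits to memory 1,
          alternating between the two single-port memories."""
          memories = ([], [])
          for i, bit in enumerate(bits):
              memories[i % 2].append(bit)
          return memories

      even_mem, odd_mem = split_even_odd([1, 0, 1, 1, 0, 0, 1, 0])
      print(even_mem, odd_mem)                       # [1, 1, 0, 1] [0, 1, 0, 0]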
  • Patent number: 8543596
    Abstract: In general, a technique or mechanism is provided to efficiently transfer data of a distributed file system to a parallel database management system using an algorithm that avoids or reduces sending of blocks of files across computer nodes on which the parallel database management system is implemented.
    Type: Grant
    Filed: December 17, 2009
    Date of Patent: September 24, 2013
    Assignee: Teradata US, Inc.
    Inventors: O. Pekka Kostamaa, Keliang Zhao, Yu Xu
  • Publication number: 20130246716
    Abstract: According to one embodiment, when a controller writes update data in a second memory to a first memory which is nonvolatile and a difference between a size of a page and a size of the update data is equal to or greater than a size of a cluster, the controller is configured to generate write data by adding, to the update data, data which has the size of the cluster, store an update content of management information corresponding to the update data and an update content storage position indicating a storage position of the update content of the management information in the first memory, and write the generated write data to a block in writing of the first memory. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: September 14, 2012
    Publication date: September 19, 2013
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Ryoichi KATO, Mitsunori Tadokoro, Takashi Hirao
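    Illustrative sketch: a minimal Python example of the padding rule in the abstract above, with hypothetical page and cluster sizes; the appended cluster is plain zero filler here, whereas the patent stores management-information update content alongside it.
      PAGE_SIZE = 16384      # bytes (assumption)
      CLUSTER_SIZE = 4096    # bytes (assumption)

      def build_write_data(update_data: bytes) -> bytes:
          # If the page has at least one cluster of room beyond the update data,
          # append one cluster's worth of additional data to form the write data.
          if PAGE_SIZE - len(update_data) >= CLUSTER_SIZE:
              return update_data + bytes(CLUSTER_SIZE)
          return update_data

      print(len(build_write_data(bytes(8192))))      # 12288: one cluster appended
      print(len(build_write_data(bytes(13000))))     # 13000: not enough room left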
  • Publication number: 20130246717
    Abstract: An information processing system includes: CPUs; storage devices; switches; dummy storage devices which are associated with respective storage devices and each of which sends, when receiving an identifying information request, its own identifying information back to a sender of the identifying information request; and dummy CPUs which are associated with respective CPUs and each of which tries to, when receiving an instruction for acquiring identifying information from a dummy storage device, acquire the identifying information of the dummy storage device by transmitting the identifying information request, and sends the identifying information as response information back to a sender device of the acquiring instruction.
    Type: Application
    Filed: February 14, 2013
    Publication date: September 19, 2013
    Applicant: FUJITSU LIMITED
    Inventors: Yasuo Noguchi, Toshihiro Ozawa, Kazuichi Oe, Munenori Maeda, Kazutaka Ogihara, Masahisa Tamura, Ken Ilzawa, Tatsuo Kumano, Jun Kato
  • Publication number: 20130246718
    Abstract: A control device includes a storage to store correspondence information indicating a correspondence between each of memories and each of information processing devices; and a processor to execute an operation including: detecting a first memory from among the memories, the first memory being a memory whose access frequency exceeds a predetermined access frequency or is relatively high, and a second memory being a memory whose access frequency is lower than or equal to a predetermined access frequency, changing the correspondence information so that an information processing device corresponding to the first memory changes from a first information processing device to a second information processing device corresponding to the second memory, and notifying a management device of the changed correspondence information, and outputting data read from the first memory to the management device via the second information processing device.
    Type: Application
    Filed: March 4, 2013
    Publication date: September 19, 2013
    Applicant: FUJITSU LIMITED
    Inventor: Takatsugu ONO
  • Publication number: 20130246719
    Abstract: A technique to increase memory bandwidth for throughput applications. In one embodiment, memory bandwidth can be increased, particularly for throughput applications, without increasing interconnect trace or pin count by pipelining pages between one or more memory storage areas on half cycles of a memory access clock.
    Type: Application
    Filed: March 5, 2013
    Publication date: September 19, 2013
    Inventor: ERIC SPRANGLE
  • Patent number: 8539166
    Abstract: Reducing remote reads of memory in a hybrid computing environment by maintaining remote memory values locally, the hybrid computing environment including a host computer and a plurality of accelerators, the host computer and the accelerators each having local memory shared remotely with the other, including writing to the shared memory of the host computer packets of data representing changes in accelerator memory values, incrementing, in local memory and in remote shared memory on the host computer, a counter value representing the total number of packets written to the host computer, reading by the host computer from the shared memory in the host computer the written data packets, moving the read data to application memory, and incrementing, in both local memory and in remote shared memory on the accelerator, a counter value representing the total number of packets read by the host computer.
    Type: Grant
    Filed: March 9, 2012
    Date of Patent: September 17, 2013
    Assignee: International Business Machines Corporation
    Inventors: Michael E. Aho, Charles J. Archer, James E. Carey, Matthew W. Markland, Philip J. Sanders
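    Illustrative sketch: a minimal Python example (names and structures are assumptions) of the counter handshake described above: the accelerator appends packets to the host's shared memory and bumps the written counter in both local and remote copies, and the host consumes packets up to that counter and bumps the read counter the same way.
      class SharedRegion:
          """Stand-in for memory shared remotely between host and accelerator."""
          def __init__(self):
              self.packets, self.written, self.read = [], 0, 0

      accel_local, host_shared = SharedRegion(), SharedRegion()

      def accelerator_write(packet):
          host_shared.packets.append(packet)             # packet of changed memory values
          for region in (accel_local, host_shared):
              region.written += 1                        # total packets written, both copies

      def host_read(application_memory):
          while host_shared.read < host_shared.written:
              application_memory.append(host_shared.packets[host_shared.read])
              for region in (accel_local, host_shared):
                  region.read += 1                       # total packets read, both copies

      app_mem = []
      accelerator_write(b"delta-1"); accelerator_write(b"delta-2")
      host_read(app_mem)
      print(app_mem, host_shared.read)                   # [b'delta-1', b'delta-2'] 2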
  • Patent number: 8539197
    Abstract: Various aspects of a data volume or other shared resource are determined and updated dynamically for purposes such as to provide guaranteed qualities of service. For example, the number of partitions in a data volume and/or the way in which data is stored across those partitions can be updated dynamically without significantly impacting the customer using the volume. The data stored to the volume can be striped or otherwise distributed across a number of logical areas, which then can be distributed across the partitions. Separate mappings can be used for the data in each logical area, and the logical areas in each partition, such that when moving a logical area only a single mapping has to be updated, regardless of the amount of data in that logical area. Further, logical areas can be moved between partitions without the need to repartition or redistribute the data in the data volume. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: June 29, 2010
    Date of Patent: September 17, 2013
    Assignee: Amazon Technologies, Inc.
    Inventors: Bradley E. Marshall, Swaminathan Sivasubramanian, Tate Andrew Certain, Nicholas J. Maniscalco
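    Illustrative sketch: a minimal Python example (structures are assumptions) of the two-level mapping described above: data is striped across logical areas, logical areas are mapped to partitions, and moving an area updates only a single mapping entry.
      LOGICAL_AREAS = 8

      area_to_partition = {area: area % 2 for area in range(LOGICAL_AREAS)}

      def locate(block):
          area = block % LOGICAL_AREAS                   # stripe blocks across logical areas
          return area_to_partition[area], area

      def move_area(area, new_partition):
          area_to_partition[area] = new_partition        # one mapping entry changes

      print(locate(13))        # (1, 5): block 13 lives in logical area 5 on partition 1
      move_area(5, 2)          # rebalance logical area 5 onto a new partition
      print(locate(13))        # (2, 5): no other mapping had to be touched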
  • Patent number: 8522244
    Abstract: In at least one embodiment, a method includes locally scheduling a memory request requested by a thread of a plurality of threads executing on at least one processor. The memory request is locally scheduled according to a quality-of-service priority of the thread. The quality-of-service priority of the thread is based on a quality of service indicator for the thread and system-wide memory bandwidth usage information for the thread. In at least one embodiment, the method includes determining the system-wide memory bandwidth usage information for the thread based on local memory bandwidth usage information associated with the thread periodically collected from a plurality of memory controllers during a timeframe. In at least one embodiment, the method includes at each mini-timeframe of the timeframe accumulating the system-wide memory bandwidth usage information for the thread and updating the quality-of-service priority based on the accumulated system-wide memory bandwidth usage information for the thread.
    Type: Grant
    Filed: May 7, 2010
    Date of Patent: August 27, 2013
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Jaewoong Chung, Debarshi Chatterjee
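    Illustrative sketch: a minimal Python example of deriving a thread's quality-of-service priority from accumulated system-wide bandwidth usage; the per-mini-timeframe accumulation follows the abstract, while the specific priority formula is a hypothetical stand-in.
      class ThreadQoS:
          def __init__(self, qos_target_bytes):
              self.qos_target = qos_target_bytes         # per-timeframe QoS indicator
              self.system_usage = 0                      # accumulated system-wide usage
              self.priority = 1.0

          def accumulate(self, per_controller_usage):
              """Called each mini-timeframe with usage collected from every memory
              controller; priority drops as the thread approaches its target."""
              self.system_usage += sum(per_controller_usage)
              self.priority = max(0.0, 1.0 - self.system_usage / self.qos_target)

      thread = ThreadQoS(qos_target_bytes=1_000_000)
      for sample in ([200_000, 100_000], [300_000, 150_000]):
          thread.accumulate(sample)
      print(thread.system_usage, round(thread.priority, 2))   # 750000 0.25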
  • Patent number: 8521970
    Abstract: Electrical interfaces, addressing schemes, and command protocols allow for communications with memory modules in computing devices such as imaging and printing devices. Memory modules may be assigned an address through a set of discrete voltages. One, multiple, or all of the memory modules may be addressed with a single command, which may be an increment counter command, a write command, a punch out bit field, or a cryptographic command. The commands may be transmitted using a broadcast scheme or a split transaction scheme. The status of the memory modules may be determined by sampling a single signal that may be at a low, high, or intermediate voltage level.
    Type: Grant
    Filed: April 19, 2006
    Date of Patent: August 27, 2013
    Assignee: Lexmark International, Inc.
    Inventors: James Ronald Booth, Bryan Scott Willett
  • Patent number: 8516179
    Abstract: A processing system on an integrated circuit includes a group of processing cores. A group of dedicated random access memories are severally coupled to one of the group of processing cores or shared among the group. A star bus couples the group of processing cores and random access memories. Additional layer(s) of star bus may couple many such clusters to each other and to an off-chip environment.
    Type: Grant
    Filed: November 30, 2004
    Date of Patent: August 20, 2013
    Assignee: Digital RNA, LLC
    Inventor: Joel Henry Hinrichs
  • Patent number: 8516220
    Abstract: A page table entry dirty bit system may be utilized to record dirty information for a software distributed shared memory system. In some embodiments, this may improve performance without substantially increasing overhead because the dirty bit recording system is already available in certain processors. By providing extra bits, coherence can be obtained with respect to all the other uses of the existing page table entry dirty bits.
    Type: Grant
    Filed: May 11, 2010
    Date of Patent: August 20, 2013
    Assignee: Intel Corporation
    Inventors: Shoumeng Yan, Ying Gao, Xiaocheng Zhou, Hu Chen, Sai Luo, Bratin Saha
  • Patent number: 8510491
    Abstract: A method and apparatus for efficient interrupt event notification for a scalable input/output device in a network system. A network interface unit is operably connected to a plurality of processing entities and associated memory units. At least one status register in the network interface unit contains information relating to a process to be performed by at least one processing entity communicated to the processing entity by an interrupt event notification. Shared memory space comprises a mailbox storage register operable to store an image of the interrupt information stored in the status register of the network interface unit. A processing entity can directly access the process information stored in the mailbox status register thereby reducing system latency associated with reading information in the status register. Updated process status information in the network interface status register may be read by the processing entity on an interleaved basis while executing a process.
    Type: Grant
    Filed: April 5, 2005
    Date of Patent: August 13, 2013
    Assignee: Oracle America, Inc.
    Inventors: Ariel Hendel, Yatin Gajjar, May Lin, Rahoul Pun, Michael Wong
  • Patent number: 8510496
    Abstract: Method and apparatus for scheduling access requests for a multi-bank low-latency random read memory (LLRRM) device within a storage system. The LLRRM device comprises a plurality of memory banks, each bank being simultaneously and independently accessible. A queuing layer residing in the storage system may allocate a plurality of request-queuing data structures (“queues”), each queue being assigned to a memory bank. The queuing layer may receive access requests for memory banks in the LLRRM device and store each received access request in the queue assigned to the requested memory bank. The queuing layer may then send, to the LLRRM device for processing, an access request from each request-queuing data structure in successive order. As such, requests sent to the LLRRM device will comprise requests that will be applied to each memory bank in successive order as well, thereby reducing access latencies of the LLRRM device. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: April 27, 2009
    Date of Patent: August 13, 2013
    Assignee: NetApp, Inc.
    Inventors: George Totolos, Jr., Nhiem T. Nguyen
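    Illustrative sketch: a minimal Python example of the queuing layer described above: one queue per memory bank, each request stored in the queue of the bank it addresses, and one request per queue issued in successive order (names are assumptions).
      from collections import deque

      NUM_BANKS = 4
      queues = [deque() for _ in range(NUM_BANKS)]       # one request queue per bank

      def enqueue(request, bank):
          queues[bank].append(request)

      def issue_round(send_to_llrrm):
          # send one request per bank in successive order, skipping empty queues
          for bank, queue in enumerate(queues):
              if queue:
                  send_to_llrrm(bank, queue.popleft())

      enqueue("read A", 0); enqueue("read B", 0)
      enqueue("write C", 2); enqueue("read D", 3)
      issue_round(lambda bank, req: print(f"bank {bank}: {req}"))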
  • Patent number: 8510513
    Abstract: Provided are a network load reducing method and a node structure for a multiprocessor system with a distributed memory. The network load reducing method uses a multiprocessor system including a node having a distributed memory and an auxiliary memory storing a sharer history table. The network load reducing method includes recording the history of a sharer node in the sharer history table of the auxiliary memory, requesting share data with reference to the sharer history table of the auxiliary memory, and deleting share data stored in the distributed memory and updating the sharer history table of the auxiliary memory.
    Type: Grant
    Filed: December 16, 2010
    Date of Patent: August 13, 2013
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Sang Heon Lee, Moo Kyoung Chung, Kyoung Seon Shin, June Young Chang, Seong Mo Park, Nak Woong Eum
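    Illustrative sketch: a minimal Python example (structures are assumptions) of the three steps in the abstract: record which node shared a block, consult that history to request the data from the recorded sharer, and clear both the cached data and the history entry when the share is deleted.
      sharer_history = {}        # block address -> last known sharer node (auxiliary memory)
      distributed_memory = {}    # block address -> share data cached on this node

      def record_sharer(block, node):
          sharer_history[block] = node

      def request_share(block, fetch_from):
          node = sharer_history.get(block, "home")       # fall back to the home node
          return fetch_from(node, block)

      def drop_share(block):
          distributed_memory.pop(block, None)
          sharer_history.pop(block, None)                # update the sharer history table

      record_sharer(0x40, node="node3")
      print(request_share(0x40, fetch_from=lambda n, b: f"data for {hex(b)} from {n}"))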
  • Patent number: 8503469
    Abstract: A technique for providing network access in accordance with at least one layered network access technology comprising layer 1 processes and layer 2 processes is described. In a device implementation, the technique comprises a shared memory adapted to store at least layer 1 data and layer 2 data as well as a memory access component coupled to the shared memory and comprising a first client port adapted to receive memory access requests from a layer 1 processing client and a second client port adapted to receive memory access requests from a layer 2 processing client. The memory access component is configured to serve a memory access request from the layer 1 processing client with a lower priority than a memory access request from the layer 2 processing client. In particular, the memory access component may be adapted to prioritize reading of layer 1 data by the layer 2 processing client over writing of layer 2 data by the layer 1 processing client.
    Type: Grant
    Filed: November 16, 2009
    Date of Patent: August 6, 2013
    Assignee: Telefonaktiebolaget L M Ericsson (Publ)
    Inventors: Seyed-Hami Nourbakhsh, Helmut Steinbach
  • Patent number: 8489812
    Abstract: An approach for automatic storage planning and provisioning within a clustered computing environment is provided. Planning input for a set of storage area network volume controllers (SVCs) will be received within the clustered computing environment, the planning input indicating a potential load on the SVCs and their associated components. Analytical models (e.g., from vendors) can also be used that allow a load on the storage components to be accurately estimated. Configuration data for a set of storage components (i.e., the set of SVCs, a set of managed disk (Mdisk) groups associated with the set of SVCs, and a set of backend storage systems) will also be collected. Based on this configuration data, the set of storage components will be filtered to identify candidate storage components capable of addressing the potential load. Then, performance data for the candidate storage components will be analyzed to identify an SVC and an Mdisk group to address the potential load.
    Type: Grant
    Filed: October 29, 2010
    Date of Patent: July 16, 2013
    Assignee: International Business Machines Corporation
    Inventors: Kavita Chavda, David P. Goodman, Sandeep Gopisetty, Larry S. McGimsey, James E. Olson, Aameek Singh
  • Patent number: 8489809
    Abstract: Embodiments of the present invention provide an approach for intelligent storage planning and provisioning within a clustered computing environment (e.g., a cloud computing environment). Specifically, embodiments of the present invention will first determine/identify a set of storage area network volume controllers (SVCs) that is accessible from a host that has submitted a request for access to storage. Thereafter, a set of managed disk (mdisk) groups (i.e., corresponding to the set of SVCs) that are candidates for satisfying the request will be determined. This set of mdisk groups will then be filtered based on available space therein, a set of user/requester preferences, and optionally, a set of performance characteristics. Then, a particular mdisk group will be selected from the set of mdisk groups based on the filtering.
    Type: Grant
    Filed: July 7, 2010
    Date of Patent: July 16, 2013
    Assignee: International Business Machines Corporation
    Inventors: Kavita Chavda, David P. Goodman, Sandeep Gopisetty, Seshashayee S. Murthy, Aameek Singh
  • Patent number: 8490094
    Abstract: In a NUMA-topology computer system that includes multiple nodes and multiple logical partitions, some of which may be dedicated and others of which are shared, NUMA optimizations are enabled in shared logical partitions. This is done by specifying a home node parameter in each virtual processor assigned to a logical partition. When a task is created by an operating system in a shared logical partition, a home node is assigned to the task, and the operating system attempts to assign the task to a virtual processor that has a home node that matches the home node for the task. The partition manager then attempts to assign virtual processors to their corresponding home nodes. If this can be done, NUMA optimizations may be performed without the risk of reducing the performance of the shared logical partition.
    Type: Grant
    Filed: February 27, 2009
    Date of Patent: July 16, 2013
    Assignee: International Business Machines Corporation
    Inventors: Vaijayanthimala K. Anand, Mark R. Funk, Steven R. Kunkel, Mysore S. Srinivas, Randal C. Swanberg, Ronald D. Young
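    Illustrative sketch: a minimal Python example (structures are assumptions) of the home-node matching described above: each virtual processor of a shared logical partition carries a home node parameter, and dispatch prefers a virtual processor whose home node matches the one assigned to the task.
      import random

      virtual_processors = [                 # home node parameter per virtual processor
          {"id": 0, "home_node": 0},
          {"id": 1, "home_node": 1},
          {"id": 2, "home_node": 1},
      ]

      def dispatch(task_home_node):
          matching = [vp for vp in virtual_processors if vp["home_node"] == task_home_node]
          # prefer a virtual processor on the task's home node; otherwise take any
          return random.choice(matching or virtual_processors)

      print(dispatch(task_home_node=1)["home_node"])     # 1: NUMA-local placement kept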
  • Patent number: 8489808
    Abstract: Systems and methods for presenting virtual tape products to a client are disclosed. An exemplary method may include allocating a plurality of disks connected to a host bus adapter (HBA) as both virtual tape storage and virtual disk storage. The method may also include translating at the HBA an input/output (I/O) communication between the client and the plurality of disks to access at least a portion of the plurality of disks allocated as virtual tape storage. The method may also include handling all other I/O communication between the client and the plurality of disks for access to the plurality of disks allocated as virtual disk storage.
    Type: Grant
    Filed: October 22, 2008
    Date of Patent: July 16, 2013
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Neil Johnson, Mark Robert Watkins
  • Patent number: 8489839
    Abstract: The memory splitter chip couples multiple DRAM units to the PPU, thereby expanding the memory capacity available to the PPU for storing data and increasing the overall performance of the graphics processing system. The memory splitter chip includes logic for managing the transmission of data between the PPU and the DRAM units when the transmission frequencies and the burst lengths of the PPU interface and the DRAM interfaces differ. Specifically, the memory splitter chip implements an overlapping transmission mode, a pairing transmission mode or a combination of the two modes when the transmission frequencies or the burst lengths differ.
    Type: Grant
    Filed: December 16, 2009
    Date of Patent: July 16, 2013
    Assignee: Nvidia Corporation
    Inventors: Ashish Karandikar, Kaustubh Sanghani, Jonah M. Alben, Shane Keil
  • Patent number: 8484536
    Abstract: Methods, systems, and apparatus, including computer program products, featuring generating a plurality of error-correcting code chunks from a plurality of data chunks. The error-correcting code chunks can be used to reconstruct one or more of the data chunks. The data chunks are allocated to a local group of storage nodes. The error correcting code chunks are allocated between the local group of storage nodes and one or more remote groups of storage nodes. Each remote group of storage nodes is allocated one or more unique error-correcting code chunks from the error-correcting code chunks. Any of the error-correcting code chunks not allocated to a remote group of storage nodes are allocated to the local group of storage nodes.
    Type: Grant
    Filed: March 26, 2010
    Date of Patent: July 9, 2013
    Assignee: Google Inc.
    Inventor: Robert Cypher
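    Illustrative sketch: a minimal Python example of the allocation rule in the abstract above (the error-correcting code itself is out of scope); group names are assumptions. Data chunks stay in the local group, each remote group receives a unique code chunk, and leftover code chunks remain local.
      def allocate_chunks(data_chunks, code_chunks, local_group, remote_groups):
          allocation = {local_group: list(data_chunks)}          # data stays local
          for group, code in zip(remote_groups, code_chunks):
              allocation.setdefault(group, []).append(code)      # unique code chunk per remote group
          allocation[local_group] += code_chunks[len(remote_groups):]  # leftovers stay local
          return allocation

      print(allocate_chunks(
          data_chunks=["d0", "d1", "d2"],
          code_chunks=["c0", "c1", "c2"],
          local_group="dc-local",
          remote_groups=["dc-east", "dc-west"],
      ))
      # {'dc-local': ['d0', 'd1', 'd2', 'c2'], 'dc-east': ['c0'], 'dc-west': ['c1']}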