Prioritized Access Regulation Patents (Class 711/151)
  • Patent number: 8493810
    Abstract: Memory circuitry 2 includes a memory cell 12 coupled to a plurality of bit line pairs 18, 24 providing multiple access ports. Write boost circuitry 36 serves to increase a write voltage applied to write a data value into the memory cell during at least a boost period of a write access. Collision detection circuitry 10 detects when the write access at least partially overlaps in time with a read access. If a collision is detected, then write assist circuitry serves to drive the bit line pair of the detected read access with a write assist voltage difference having the same polarity as the write voltage and a magnitude less than the write voltage with the boost voltage applied. The write assist circuitry drives the bit line pair of the colliding read independently of the write boost circuitry applying the boost voltage such that the boost voltage is undiminished by the action of the write assist circuitry.
    Type: Grant
    Filed: May 9, 2011
    Date of Patent: July 23, 2013
    Assignee: ARM Limited
    Inventors: Nicolaas Klarinus Johannes van Winkelhoff, Gerald Jean Louis Gouya, Hsin-Yu Chen
  • Patent number: 8495641
    Abstract: A technique for efficiently boosting the priority of a preemptable data reader while resolving races between the priority boosting and the reader exiting a critical section or terminating in order to eliminate impediments to grace period processing that defers the destruction of one or more shared data elements that may be referenced by the reader until the reader is no longer capable of referencing the one or more data elements. A determination is made that the reader is in a read-side critical section and the reader is designated as a candidate for priority boosting. A verification is made that the reader has not exited its critical section or terminated, and the reader's priority is boosted to expedite its completion of the critical section. The reader's priority is decreased following its completion of the critical section.
    Type: Grant
    Filed: June 29, 2007
    Date of Patent: July 23, 2013
    Assignee: International Business Machines Corporation
    Inventor: Paul E. McKenney
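    A minimal Python sketch of the check-then-boost idea from patent 8495641 above, assuming a per-reader lock resolves the race between boosting and the reader exiting; the Reader class, priority values, and function names are illustrative, not taken from the patent.

      import threading

      BOOSTED_PRIO, NORMAL_PRIO = 90, 20

      class Reader:
          def __init__(self):
              self.priority = NORMAL_PRIO
              self.in_critical_section = False
              self.terminated = False
              self.lock = threading.Lock()      # resolves races between boost and exit

          def enter_read_side(self):
              with self.lock:
                  self.in_critical_section = True

          def exit_read_side(self):
              with self.lock:
                  self.in_critical_section = False
                  self.priority = NORMAL_PRIO   # priority decreased after the critical section

      def boost_reader(reader):
          # Boost only if the candidate reader is verifiably still inside its
          # read-side critical section and has not terminated.
          with reader.lock:
              if reader.in_critical_section and not reader.terminated:
                  reader.priority = BOOSTED_PRIO
                  return True
              return False

      if __name__ == "__main__":
          r = Reader()
          r.enter_read_side()
          print("boosted:", boost_reader(r), "priority:", r.priority)
          r.exit_read_side()
          print("after exit, priority:", r.priority)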
  • Patent number: 8495638
    Abstract: Systems and methods of protecting a shared resource in a multi-threaded execution environment in which threads are permitted to transfer control between different software components, for any of which a disclaimable lock having a plurality of orderable locks can be identified. Back out activity can be tracked among a plurality of threads with respect to the disclaimable lock and the shared resource, and reclamation activity among the plurality of threads may be ordered with respect to the disclaimable lock and the shared resource.
    Type: Grant
    Filed: September 8, 2010
    Date of Patent: July 23, 2013
    Assignee: International Business Machines Corporation
    Inventor: Kirk J. Krauss
  • Patent number: 8495640
    Abstract: Systems and methods of protecting a shared resource in a multi-threaded execution environment in which threads are permitted to transfer control between different software components, for any of which a disclaimable lock having a plurality of orderable locks can be identified. Back out activity can be tracked among a plurality of threads with respect to the disclaimable lock and the shared resource, and reclamation activity among the plurality of threads may be ordered with respect to the disclaimable lock and the shared resource.
    Type: Grant
    Filed: March 16, 2012
    Date of Patent: July 23, 2013
    Assignee: International Business Machines Corporation
    Inventor: Kirk J. Krauss
  • Publication number: 20130185525
    Abstract: Disclosed herein are a semiconductor chip for adaptively processing a plurality of commands to request memory access, and a method of controlling memory. The semiconductor chip includes a storage unit and a control unit. The storage unit stores a memory access request to be currently processed and a plurality of memory access requests received before the memory access request to be currently processed in received order. The control unit processes the memory access request to be currently processed and the plurality of memory access requests received before the memory access request to be currently processed, which have been stored in the storage unit, in received order, except that memory access requests attempting to access the same bank and the same row are successively processed.
    Type: Application
    Filed: December 21, 2012
    Publication date: July 18, 2013
    Applicant: Foundation of Soongsil University-Industry Cooperation
    Inventor: Foundation of Soongsil University-Industry Cooperation
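    A small Python sketch of the reordering rule described in publication 20130185525 above, under the simplifying assumption that the request buffer is just a list processed in arrival order except that requests to the same bank and row are pulled forward; the Request fields are illustrative.

      from collections import namedtuple

      Request = namedtuple("Request", "rid bank row")

      def reorder(requests):
          ordered, used = [], [False] * len(requests)
          for i, req in enumerate(requests):
              if used[i]:
                  continue
              ordered.append(req)
              used[i] = True
              # successively process later requests that hit the same bank and row
              for j in range(i + 1, len(requests)):
                  later = requests[j]
                  if not used[j] and later.bank == req.bank and later.row == req.row:
                      ordered.append(later)
                      used[j] = True
          return ordered

      if __name__ == "__main__":
          q = [Request(0, 0, 5), Request(1, 1, 7), Request(2, 0, 5), Request(3, 1, 2)]
          print([r.rid for r in reorder(q)])   # -> [0, 2, 1, 3]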
  • Patent number: 8490094
    Abstract: In a NUMA-topology computer system that includes multiple nodes and multiple logical partitions, some of which may be dedicated and others of which are shared, NUMA optimizations are enabled in shared logical partitions. This is done by specifying a home node parameter in each virtual processor assigned to a logical partition. When a task is created by an operating system in a shared logical partition, a home node is assigned to the task, and the operating system attempts to assign the task to a virtual processor that has a home node that matches the home node for the task. The partition manager then attempts to assign virtual processors to their corresponding home nodes. If this can be done, NUMA optimizations may be performed without the risk of reducing the performance of the shared logical partition.
    Type: Grant
    Filed: February 27, 2009
    Date of Patent: July 16, 2013
    Assignee: International Business Machines Corporation
    Inventors: Vaijayanthimala K. Anand, Mark R. Funk, Steven R. Kunkel, Mysore S. Srinivas, Randal C. Swanberg, Ronald D. Young
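    A hedged Python sketch of the home-node matching in patent 8490094 above: a task is dispatched, when possible, to an idle virtual processor whose home node matches the task's home node. The VirtualProcessor type and dispatch function are illustrative names, not the patent's.

      from dataclasses import dataclass

      @dataclass
      class VirtualProcessor:
          vp_id: int
          home_node: int
          busy: bool = False

      def dispatch(task_home_node, vps):
          # Prefer an idle virtual processor with a matching home node so that the
          # task's memory stays local once the partition manager places the virtual
          # processor on its home node.
          for vp in vps:
              if not vp.busy and vp.home_node == task_home_node:
                  vp.busy = True
                  return vp
          for vp in vps:                        # fall back to any idle virtual processor
              if not vp.busy:
                  vp.busy = True
                  return vp
          return None

      if __name__ == "__main__":
          vps = [VirtualProcessor(0, home_node=0), VirtualProcessor(1, home_node=1)]
          print(dispatch(1, vps))               # picks vp 1, whose home node matches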
  • Publication number: 20130179645
    Abstract: A method for equalizing the bandwidth of requesters using a shared memory system is disclosed. In one embodiment, such a method includes receiving multiple access requests to access a shared memory system. Each access request originates from a different requester coupled to the shared memory system. The method then determines which of the access requests has been waiting the longest to access the shared memory system. The access requests are then ordered so that the access request that has been waiting the longest is transmitted to the shared memory system after the other access requests. The requester associated with the longest-waiting access request may then transmit additional access requests to the shared memory system immediately after the longest-waiting access request has been transmitted. A corresponding apparatus and computer program product are also disclosed.
    Type: Application
    Filed: January 6, 2012
    Publication date: July 11, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Hisato Matsuo, Rika Nagahara, Scott J. Schaffer
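    A minimal Python illustration of the ordering rule in publication 20130179645 above: among concurrently pending access requests, the one that has waited longest is transmitted last, so its requester can immediately follow with further requests. The tuple layout is an assumption for the example.

      def order_requests(pending):
          # pending: list of (requester, wait_time) pairs; the longest waiter goes last
          if not pending:
              return []
          longest = max(pending, key=lambda r: r[1])
          others = [r for r in pending if r is not longest]
          return others + [longest]

      if __name__ == "__main__":
          reqs = [("A", 3), ("B", 9), ("C", 1)]
          print(order_requests(reqs))           # -> [('A', 3), ('C', 1), ('B', 9)]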
  • Patent number: 8484397
    Abstract: Various methods and apparatus are described for a memory scheduler. The memory scheduler has a pipelined arbiter to determine which request will access the target memory core. Pipelining occurs in stages within the arbiter over a period of more than one clock cycle. The pipelined arbiter uses two or more weighting factors affecting an arbitration decision that are processed in parallel. A predictive scheduler in the memory scheduler uses data from a previous cycle to make the arbitration decision about a request during a current clock cycle in which the arbitration decision is made in order to increase overall system efficiency of requests being serviced in the integrated circuit.
    Type: Grant
    Filed: May 24, 2012
    Date of Patent: July 9, 2013
    Assignee: Sonics, Inc.
    Inventors: Krishnan Srinivasan, Drew E. Wingard
  • Patent number: 8484438
    Abstract: Some embodiments provide a system that facilitates concurrency control in a computer system. During operation, the system generates a set of signatures associated with memory accesses in the computer system. To generate the signatures, the system creates a set of hierarchical Bloom filters (HBFs) corresponding to the signatures, and populates the HBFs using addresses associated with the memory accesses. Next, the system compares the HBFs to detect a potential conflict associated with the memory accesses. Finally, the system manages concurrent execution in the computer system based on the detected potential conflict.
    Type: Grant
    Filed: June 29, 2009
    Date of Patent: July 9, 2013
    Assignee: Oracle America, Inc.
    Inventor: Robert E. Cypher
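    A deliberately simplified Python stand-in for the hierarchical Bloom filter (HBF) signatures of patent 8484438 above: each signature records accessed addresses at two granularities, and two signatures indicate a potential conflict if they intersect at either level. Filter sizes, hash count, and the 64-byte line granularity are arbitrary choices for the example.

      import hashlib

      class TwoLevelBloom:
          def __init__(self, bits=256, line_shift=6):
              self.bits = bits
              self.line_shift = line_shift      # coarse level: 64-byte lines
              self.levels = [0, 0]              # per-level bit vectors as Python ints

          def _positions(self, key):
              digest = hashlib.sha256(str(key).encode()).digest()
              return [digest[i] % self.bits for i in range(3)]   # 3 hash functions

          def add(self, addr):
              for level, key in enumerate((addr, addr >> self.line_shift)):
                  for p in self._positions(key):
                      self.levels[level] |= 1 << p

          def may_conflict(self, other):
              # A shared set bit at either level flags a *potential* conflict:
              # false positives are possible, missed real overlaps are not.
              return any(a & b for a, b in zip(self.levels, other.levels))

      if __name__ == "__main__":
          t1, t2 = TwoLevelBloom(), TwoLevelBloom()
          t1.add(0x1000); t2.add(0x1008)        # same cache line, different words
          print(t1.may_conflict(t2))            # True, caught at the coarse level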
  • Patent number: 8478950
    Abstract: Requests from a plurality of different agents (10) are passed to a request handler via a request concentrator. In front of the request concentrator, the requests are queued in a plurality of queues (12). A first one of the agents is configured to issue a priority changing command with a defined position relative to pending requests issued by the first one of the agents (10) to the first one of the queues (12). An arbiter (16) makes successive selections, selecting queues (12) from which the request concentrator (14) will pass requests to the request handler (18), based on relative priorities assigned to the queues (12). The arbiter (16) responds to the priority changing command by changing the priority of the first one of the queues (12), selectively for a duration while the pending requests up to the defined position are in the first one of the queues (12). Different queues may be provided for read and write requests from the first one of the agents.
    Type: Grant
    Filed: July 27, 2009
    Date of Patent: July 2, 2013
    Assignee: Synopsys, Inc.
    Inventors: Tomas Henriksson, Elisabeth Francisca Maria Steffens
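    A toy Python model of the priority changing command in patent 8478950 above: an agent inserts a boost marker into its queue, and the arbiter raises that queue's priority only while requests ahead of the marker remain pending. The Queue/arbitrate names and the +100 boost value are illustrative.

      from collections import deque

      BOOST = "BOOST_MARKER"

      class Queue:
          def __init__(self, name, base_prio):
              self.name, self.base_prio, self.items = name, base_prio, deque()
              self.boost_remaining = 0          # pending requests up to the marker

          def push(self, item):
              if item == BOOST:
                  self.boost_remaining = len(self.items)
              else:
                  self.items.append(item)

          def effective_prio(self):
              return self.base_prio + (100 if self.boost_remaining > 0 else 0)

          def pop(self):
              if self.boost_remaining > 0:
                  self.boost_remaining -= 1     # one request up to the marker serviced
              return self.items.popleft()

      def arbitrate(queues):
          eligible = [q for q in queues if q.items]
          return max(eligible, key=lambda q: q.effective_prio()) if eligible else None

      if __name__ == "__main__":
          a, b = Queue("a", base_prio=1), Queue("b", base_prio=5)
          a.push("rd0"); a.push("rd1"); a.push(BOOST); b.push("wr0")
          order = []
          while (q := arbitrate([a, b])) is not None:
              order.append(q.pop())
          print(order)    # a's requests up to the marker drain before b's write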
  • Patent number: 8478952
    Abstract: Data indicating a plurality of groups into which data to be accessed from one or more storage media has been divided is received. For each of at least a subset of the groups a parallelization limit for that group is received. A first parallelization limit for a first group in the subset is not necessarily same as a second parallelization limit for a second group in the subset.
    Type: Grant
    Filed: June 13, 2006
    Date of Patent: July 2, 2013
    Assignee: EMC Corporation
    Inventor: Peter Armorer
  • Patent number: 8478940
    Abstract: Instruction fetch unit (IFU) verification is improved by dynamically monitoring the current state of the IFU model and detecting any predetermined states of interest. The instruction address sequence is automatically modified to force a selected address to be fetched next by the IFU model. The instruction address sequence may be modified by inserting one or more new instruction addresses, or by jumping to a non-sequential address in the instruction address sequence. In exemplary implementations, the selected address is a corresponding address for an existing instruction already loaded in the IFU cache, or differs only in a specific field from such an address. The instruction address control is preferably accomplished without violating any rules of the processor architecture by sending a flush signal to the IFU model and overwriting an address register corresponding to a next address to be fetched.
    Type: Grant
    Filed: June 2, 2009
    Date of Patent: July 2, 2013
    Assignee: International Business Machines Corporation
    Inventors: Akash V. Giri, Darin M. Greene, Alan G. Singletary
  • Patent number: 8473658
    Abstract: In one embodiment, a system comprises a memory, and a first bridge unit for processor access with the memory. The first bridge unit comprises a first arbitration unit that is coupled with an input-output bus, a memory free notification unit (“MFNU”), and the memory, and is configured to receive requests from the input-output bus and receive requests from the MFNU and choose among the requests to send to the memory on a first memory bus. The system further comprises a second bridge unit for packet data access with the memory that includes a second arbitration unit that is coupled with a packet input unit, a packet output unit, and the memory and is configured to receive requests from the packet input unit and receive requests from the packet output unit, and choose among the requests to send to the memory on a second memory bus.
    Type: Grant
    Filed: October 25, 2011
    Date of Patent: June 25, 2013
    Assignee: Cavium, Inc.
    Inventors: Robert A. Sanzone, David H. Asher, Richard E. Kessler
  • Patent number: 8468536
    Abstract: A method that includes providing LRU selection logic which controllably passes requests for access to computer system resources to a shared resource via a first level and a second level, determining whether a request in a request group is active, presenting the request to the LRU selection logic at the first level when it is determined that the request is active, determining whether the request is an LRU request of the request group at the first level, forwarding the request to the second level when it is determined that the request is the LRU request of the request group, comparing the request to an LRU request from each of the request groups at the second level to determine whether the request is an LRU request of the plurality of request groups, and selecting the LRU request of the plurality of request groups to access the shared resource.
    Type: Grant
    Filed: June 24, 2010
    Date of Patent: June 18, 2013
    Assignee: International Business Machines Corporation
    Inventors: Deanna Postles Dunn Berger, Ekaterina M. Ambroladze, Michael Fee, Diana Lynn Orf
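    A simplified Python rendering of the two-level LRU selection in patent 8468536 above: each request group forwards its oldest active request to the second level, which grants the oldest of those. The dictionary fields and ages are illustrative.

      def first_level(group):
          # Return the LRU (least recently granted) active request of one group, if any.
          active = [r for r in group if r["active"]]
          return min(active, key=lambda r: r["last_grant"]) if active else None

      def second_level(groups):
          # Compare the per-group LRU winners and select the overall LRU request.
          winners = [w for w in (first_level(g) for g in groups) if w is not None]
          return min(winners, key=lambda r: r["last_grant"]) if winners else None

      if __name__ == "__main__":
          groups = [
              [{"name": "g0r0", "active": True,  "last_grant": 7},
               {"name": "g0r1", "active": True,  "last_grant": 3}],
              [{"name": "g1r0", "active": False, "last_grant": 1},
               {"name": "g1r1", "active": True,  "last_grant": 5}],
          ]
          print(second_level(groups)["name"])   # -> g0r1, the oldest active request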
  • Publication number: 20130151795
    Abstract: Disclosed herein are an apparatus and method for controlling memory. The apparatus includes a memory access request buffer unit, a memory access request control unit, and a bank control unit. The memory access request buffer unit determines and stores memory access request order so that the plurality of memory access requests is processed in the order of input except that memory access requests attempting to access the same bank and the same row are successively processed. The memory access request control unit reads the memory access requests from the memory access request buffer unit in the determined order, distributes the memory access requests to banks, and transfers the memory access requests to memory. The bank control unit stores a preset number of memory access requests in each of buffer units for respective banks, and controls the operating state of each of the banks.
    Type: Application
    Filed: December 4, 2012
    Publication date: June 13, 2013
    Applicant: Foundation of Soongsil University-Industry Cooperation
    Inventor: Foundation of Soongsil University-Industry Cooperation
  • Patent number: 8464007
    Abstract: Various embodiments include fault tolerant memory apparatus, methods, and systems, including a memory manager for supplying read and write requests to a memory device having a plurality of addressable memory locations. The memory manager includes a plurality of banks. Each bank includes a bank queue for storing read and write requests. The memory manager also includes a request arbiter connected to the plurality of banks. The request arbiter removes read and write requests from the bank queues for presentation to the memory device. The request arbiter includes a read phase of operation and a write phase of operation, wherein the request arbiter preferentially selects read requests for servicing during the read phase of operation and preferentially selects write requests for servicing during the write phase of operation.
    Type: Grant
    Filed: June 12, 2009
    Date of Patent: June 11, 2013
    Assignee: Cray Inc.
    Inventors: Dennis C. Abts, Michael Higgins, Van L. Snyder, Gerald A Schwoerer
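    A rough Python model of the phased arbiter in patent 8464007 above: per-bank queues hold both request types, and the arbiter preferentially services reads during its read phase and writes during its write phase. The phase length and fallback rule are assumptions for the example.

      from collections import deque

      class PhasedArbiter:
          def __init__(self, num_banks, phase_len=4):
              self.banks = [deque() for _ in range(num_banks)]
              self.phase_len, self.ticks, self.phase = phase_len, 0, "read"

          def enqueue(self, bank, kind, addr):
              self.banks[bank].append((kind, addr))

          def select(self):
              # Prefer the current phase's request kind; fall back to the other kind.
              other = "write" if self.phase == "read" else "read"
              for preferred in (self.phase, other):
                  for q in self.banks:
                      for i, (kind, addr) in enumerate(q):
                          if kind == preferred:
                              del q[i]
                              self._tick()
                              return (kind, addr)
              self._tick()
              return None

          def _tick(self):
              self.ticks += 1
              if self.ticks % self.phase_len == 0:   # alternate read/write phases
                  self.phase = "write" if self.phase == "read" else "read"

      if __name__ == "__main__":
          arb = PhasedArbiter(num_banks=2)
          arb.enqueue(0, "write", 0x10); arb.enqueue(0, "read", 0x20); arb.enqueue(1, "read", 0x30)
          print([arb.select() for _ in range(3)])    # reads win while in the read phase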
  • Patent number: 8463996
    Abstract: A processor chip is provided. The processor chip includes a plurality of processing cores where each of the processing cores being multi-threaded. The plurality of processing cores are located in a center region of the processor chip. A plurality of cache bank memories are included. A crossbar enabling communication between the plurality of processing cores and the plurality of cache bank memories is provided. The crossbar includes a centrally located arbiter configured to sort multiple requests received from the plurality of processing cores and the crossbar is defined over the plurality of processing cores. In another embodiment, the processor chip is oriented so that the cache bank memories are defined in the center region. A server is also included.
    Type: Grant
    Filed: May 26, 2004
    Date of Patent: June 11, 2013
    Assignee: Oracle America, Inc.
    Inventor: Kunle A. Olukotun
  • Patent number: 8458411
    Abstract: A distributed shared memory multiprocessor that includes a first processing element, a first memory which is a local memory of the first processing element, a second processing element connected to the first processing element via a bus, a second memory which is a local memory of the second processing element, a virtual shared memory region, where physical addresses of the first memory and the second memory are associated for one logical address in a logical address space of a shared memory having the first memory and the second memory, and an arbiter which suspends an access of the first processing element, if there is a write access request from the first processing element to the virtual shared memory region, according to a state of a write access request from the second processing element to the virtual shared memory region.
    Type: Grant
    Filed: August 25, 2009
    Date of Patent: June 4, 2013
    Assignee: Renesas Electronics Corporation
    Inventors: Yukihiko Akaike, Hitoshi Suzuki
  • Patent number: 8448178
    Abstract: Systems and methods are provided that schedule task requests within a computing system based upon the history of task requests. The history of task requests can be represented by a historical log that monitors the receipt of high priority task request submissions over time. This historical log in combination with other user defined scheduling rules is used to schedule the task requests. Task requests in the computer system are maintained in a list that can be divided into a hierarchy of queues differentiated by the level of priority associated with the task requests contained within that queue. The user-defined scheduling rules give scheduling priority to the higher priority task requests, and the historical log is used to predict subsequent submissions of high priority task requests so that lower priority task requests that would interfere with the higher priority task requests will be delayed or will not be scheduled for processing.
    Type: Grant
    Filed: March 20, 2012
    Date of Patent: May 21, 2013
    Assignee: International Business Machines Corporation
    Inventors: David M Daly, Peter A Franaszek, Luis A Lastras-Montano
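    A heuristic Python sketch in the spirit of patent 8448178 above: a historical log of high-priority arrivals predicts the next submission, and a lower-priority task is deferred if running it now would likely overlap that prediction. The mean-interval predictor and the numbers are toy assumptions, not the patent's rules.

      def predict_next_arrival(history):
          # history: ascending timestamps of past high-priority task submissions
          if len(history) < 2:
              return None
          gaps = [b - a for a, b in zip(history, history[1:])]
          return history[-1] + sum(gaps) / len(gaps)   # last arrival + mean interval

      def should_run_low_priority(now, duration, history):
          nxt = predict_next_arrival(history)
          # Schedule the low-priority task only if it is expected to finish before
          # the next predicted high-priority submission.
          return nxt is None or now + duration <= nxt

      if __name__ == "__main__":
          log = [100, 200, 300]                 # high-priority requests every ~100 ticks
          print(should_run_low_priority(now=350, duration=30, history=log))   # True
          print(should_run_low_priority(now=350, duration=80, history=log))   # False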
  • Patent number: 8447874
    Abstract: A system generates a web page that includes a plurality of embedded data windows. The system receives a request for the web page from a browser and in response generates and displays a frame for the web page on the browser. The frame includes holes for the embedded data windows. The system also receives a data streaming request for each of the embedded data windows and determines if the data streaming requests are thread-safe. For all the data streaming requests that are thread-safe, the system generates a parallel thread to fetch the data for each corresponding data streaming request. When the data has been fetched for a particular data streaming request, the data is rendered and streamed to the browser, where it is displayed in place of the hole by the browser.
    Type: Grant
    Filed: February 4, 2008
    Date of Patent: May 21, 2013
    Assignee: Oracle International Corporation
    Inventors: Blake Sullivan, Max Starets, Edward J. Farrell
  • Publication number: 20130124805
    Abstract: A shared memory controller and method of operation are provided. The shared memory controller is configured for use with a plurality of processors such as a central processing unit or a graphics processing unit. The shared memory controller includes a command queue configured to hold a plurality of memory commands from the plurality of processors, each memory command having associated priority information. The shared memory controller includes boost logic configured to identify a latency sensitive memory command and update the priority information associated with the memory command to identify the memory command as latency sensitive. The boost logic may be configured to identify a latency sensitive processor command. The boost logic may be configured to track time duration between successive latency sensitive memory commands.
    Type: Application
    Filed: November 10, 2011
    Publication date: May 16, 2013
    Applicant: ADVANCED MICRO DEVICES, INC.
    Inventors: Todd M. Rafacz, Kevin M. Lepak, Ryan J. Hensley
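    A simplified Python take on the boost logic in publication 20130124805 above: queued memory commands carry priority information, and boost logic updates a command identified as latency sensitive (and tracks the time since the previous such command). The priority-queue representation and field names are assumptions.

      import heapq, itertools

      class SharedMemoryController:
          def __init__(self):
              self._q, self._seq = [], itertools.count()
              self._last_sensitive_time = None

          def submit(self, cmd, priority, now, latency_sensitive=False):
              gap = None
              if latency_sensitive:
                  priority = 0                  # boost: treat as highest priority
                  if self._last_sensitive_time is not None:
                      gap = now - self._last_sensitive_time   # time since the last one
                  self._last_sensitive_time = now
              heapq.heappush(self._q, (priority, next(self._seq), cmd, gap))

          def issue(self):
              return heapq.heappop(self._q)[2] if self._q else None

      if __name__ == "__main__":
          mc = SharedMemoryController()
          mc.submit("gpu_stream_write", priority=5, now=10)
          mc.submit("cpu_demand_read", priority=5, now=12, latency_sensitive=True)
          print(mc.issue())                     # the boosted read issues ahead of the write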
  • Patent number: 8443140
    Abstract: An apparatus for controlling a first storage and a second storage has a controller for receiving a write command and a read command sent out from a host and for sending out the write command and the read command to the first storage and the second storage, a determining unit for sending out a request corresponding to the write command to the first storage and the second storage, for receiving a first response corresponding to the request from the first storage and a second response corresponding to the request from the second storage, and for determining one of the storages on the basis of each of the response times, a first writing unit for writing data into the determined storage, and a second writing unit for writing the data written in the determined storage into the other storage after writing the data into the determined storage by the first writing unit.
    Type: Grant
    Filed: June 22, 2010
    Date of Patent: May 14, 2013
    Assignee: Fujitsu Limited
    Inventors: Toshiaki Takeuchi, Masakazu Sakamoto, Tetsuya Kinoshita, Takuya Kurihara, Jun Takeuchi, Atsushi Shinohara, Yusuke Kurasawa
  • Patent number: 8433858
    Abstract: A solid-state storage subsystem, such as a non-volatile memory card or drive, includes multiple interfaces and a memory area storing information used by a data arbiter to prioritize data commands received through the interfaces. As one example, the information may include a priority ranking of multiple host systems that are connected to the solid-state storage subsystem, such that the data arbiter may process concurrently received data transfer commands serially according to their priority ranking. A host software component may be configured to store and modify the priority control information in the solid-state storage subsystem's memory area.
    Type: Grant
    Filed: March 20, 2012
    Date of Patent: April 30, 2013
    Assignee: Siliconsystems, Inc.
    Inventors: Mark S. Diggs, David E. Merry
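    A minimal Python sketch of the priority-ranking behavior in patent 8433858 above: the subsystem's stored ranking of attached hosts decides the serial order in which concurrently received commands are processed. The ranking values and host names are illustrative.

      def serialize_commands(commands, host_ranking):
          # commands: list of (host, command); a lower rank number means higher priority
          return sorted(commands, key=lambda hc: host_ranking.get(hc[0], float("inf")))

      if __name__ == "__main__":
          ranking = {"hostA": 1, "hostB": 2}    # stored and modifiable by host software
          concurrent = [("hostB", "read block 7"), ("hostA", "write block 3")]
          print(serialize_commands(concurrent, ranking))
          # -> hostA's command is serviced first, per the stored priority ranking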
  • Patent number: 8429374
    Abstract: System, method, and program to perform simultaneous read and write operations in a NAND-type memory device, including: assigning a first partition in a NAND-type memory device, wherein the first partition is configured to perform read operations on high priority read content; assigning a second partition in the NAND-type memory device, wherein the second partition is configured to perform read operations and write operations, wherein the read operations are performed on non-high priority read content; and controlling the first partition and second partition to operate in a simultaneous manner.
    Type: Grant
    Filed: March 22, 2010
    Date of Patent: April 23, 2013
    Assignees: Sony Corporation, Sony Mobile Communications AB
    Inventor: Wladyslaw Bolanowski
  • Patent number: 8418226
    Abstract: A tamper resistant servicing Agent for providing various services (e.g., data delete, firewall protection, data encryption, location tracking, message notification, and updating software) comprises multiple functional modules, including a loader module (CLM) that loads and gains control during POST, independent of the OS, an Adaptive Installer Module (AIM), and a Communications Driver Agent (CDA). Once control is handed to the CLM, it loads the AIM, which in turn locates, validates, decompresses and adapts the CDA for the detected OS environment. The CDA exists in two forms: a mini CDA that determines whether a full or current CDA is located somewhere on the device and, if not, loads the full-function CDA from a network; and a full-function CDA that is responsible for all communications between the device and the monitoring server. The servicing functions can be controlled by a remote server.
    Type: Grant
    Filed: March 20, 2006
    Date of Patent: April 9, 2013
    Assignee: Absolute Software Corporation
    Inventor: Philip B. Gardner
  • Patent number: 8412891
    Abstract: Memory access arbitration allowing a shared memory to be used both as a memory for a processor and as a buffer for data flows, including an arbiter unit that makes assignment for access requests to the memory sequentially and transfers blocks of data in one round-robin cycle according to bandwidths required for the data transfers, sets priorities for the transfer blocks so that the bandwidths required for the data transfers are met by alternate transfer of the transfer blocks, and executes an access from the processor with an upper limit set for the number of access times from the processor to the memory in one round-robin cycle so that the access from the processor with the highest priority and with a predetermined transfer length exerts less effect on bandwidths for data flow transfers in predetermined intervals between the transfer blocks.
    Type: Grant
    Filed: November 1, 2010
    Date of Patent: April 2, 2013
    Assignee: International Business Machines Corporation
    Inventors: Masayuki Demura, Hisato Matsuo, Keisuke Tanaka
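    A rough Python model of the arbitration in patent 8412891 above: within one round-robin cycle, each data flow is granted transfer blocks in proportion to its required bandwidth, and processor accesses are interleaved between blocks with an upper limit per cycle. Block counts and the limit are illustrative parameters.

      def round_robin_cycle(flow_blocks, cpu_pending, cpu_limit):
          # flow_blocks: {flow: transfer blocks per cycle}; returns the grant order
          schedule, cpu_granted = [], 0
          remaining = dict(flow_blocks)
          while any(remaining.values()):
              for flow in remaining:
                  if remaining[flow] > 0:
                      schedule.append(("flow", flow))        # grant one transfer block
                      remaining[flow] -= 1
                  # interleave a processor access between blocks, up to the cap
                  if cpu_granted < min(cpu_pending, cpu_limit):
                      schedule.append(("cpu", cpu_granted))
                      cpu_granted += 1
          return schedule

      if __name__ == "__main__":
          grants = round_robin_cycle({"flowA": 2, "flowB": 1}, cpu_pending=5, cpu_limit=2)
          print(grants)   # at most two CPU accesses, spread between the transfer blocks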
  • Patent number: 8412886
    Abstract: In a configuration in which a port unit is provided that is shared among threads and has a plurality of entries for holding access requests, and the access requests for a cache shared by a plurality of threads being executed at the same time are controlled using the port unit, the access request issued from each thread is registered in a port section of the port unit which is assigned to the thread, thereby controlling the port unit to be divided for use in accordance with the thread configuration. In selecting the access request, the access requests are selected for each thread based on the specified priority control from among the access requests issued from the threads held in the port unit, and thereafter a final access request is selected in accordance with a thread selection signal from among those selected access requests.
    Type: Grant
    Filed: December 16, 2009
    Date of Patent: April 2, 2013
    Assignee: Fujitsu Limited
    Inventor: Naohiro Kiyota
  • Patent number: 8402229
    Abstract: One embodiment of the present invention sets forth a method for sharing graphics objects between a compute unified device architecture (CUDA) application programming interface (API) and a graphics API. The CUDA API includes calls used to alias graphics objects allocated by the graphics API and, subsequently, synchronize accesses to the graphics objects. When an application program emits a “register” call that targets a particular graphics object, the CUDA API ensures that the graphics object is in the device memory, and maps the graphics object into the CUDA address space. Subsequently, when the application program emits “map” and “unmap” calls, the CUDA API respectively enables and disables accesses to the graphics object through the CUDA API. Further, the CUDA API uses semaphores to synchronize accesses to the shared graphics object. Finally, when the application program emits an “unregister” call, the CUDA API configures the computing system to disregard interoperability constraints.
    Type: Grant
    Filed: February 14, 2008
    Date of Patent: March 19, 2013
    Assignee: NVIDIA Corporation
    Inventors: Nicholas Patrick Wilt, Ian A. Buck, Nolan David Goodnight
  • Patent number: 8397010
    Abstract: A device may receive a request to read data from or write data to a memory that includes a number of memory banks. The request may include an address. The device may perform a mapping operation on the address to map the address from a first address space to a second address space, identify one of the memory banks based on the address in the second address space, and send the request to the identified memory bank.
    Type: Grant
    Filed: July 27, 2007
    Date of Patent: March 12, 2013
    Assignee: Juniper Networks, Inc.
    Inventors: Anjan Venkatramani, Srinivas Perla, John Keen
  • Publication number: 20130061005
    Abstract: One embodiment of the present invention sets forth a technique for synchronization between two or more processors. The technique implements a spinlock acquire function and a spinlock release function. A processor executing the spinlock acquire function advantageously operates in a low power state while waiting for an opportunity to acquire spinlock. The spinlock acquire function configures a memory monitor to wake up the processor when spinlock is released by a different processor. The spinlock release function releases spinlock by clearing a lock variable and may clear a wait variable.
    Type: Application
    Filed: September 2, 2011
    Publication date: March 7, 2013
    Inventors: Mark A. Overby, Andrew Currid
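    A high-level Python model of the spinlock in publication 20130061005 above. Python cannot express the processor's low-power monitored-wait state, so a threading.Condition stands in for the memory monitor that wakes the waiter when the lock variable is cleared; the class and variable names are illustrative.

      import threading

      class MonitoredSpinlock:
          def __init__(self):
              self._lock_var = 0                 # 0 = free, 1 = held
              self._wait_var = 0                 # set while a waiter is parked
              self._monitor = threading.Condition()

          def acquire(self):
              with self._monitor:
                  while self._lock_var:          # instead of spinning, park the waiter
                      self._wait_var = 1
                      self._monitor.wait()       # stand-in for the low-power monitored wait
                  self._lock_var = 1

          def release(self):
              with self._monitor:
                  self._lock_var = 0             # clear the lock variable
                  self._wait_var = 0             # ...and the wait variable
                  self._monitor.notify_all()     # stand-in for the monitor wake-up

      if __name__ == "__main__":
          sl, order = MonitoredSpinlock(), []
          def worker(tag):
              sl.acquire(); order.append(tag); sl.release()
          threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
          for t in threads: t.start()
          for t in threads: t.join()
          print(order)                           # all four critical sections completed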
  • Patent number: 8392667
    Abstract: Deadlocks are avoided by marking read requests issued by a parallel processor to system memory as “special.” Read completions associated with read requests marked as special are routed on virtual channel 1 of the PCIe bus. Data returning on virtual channel 1 cannot become stalled by write requests in virtual channel 0, thus avoiding a potential deadlock.
    Type: Grant
    Filed: December 12, 2008
    Date of Patent: March 5, 2013
    Assignee: NVIDIA Corporation
    Inventors: Samuel H. Duncan, David B. Glasco, Wei-Je Huang, Atul Kalambur, Patrick R. Marchand, Dennis K. Ma
  • Patent number: 8392630
    Abstract: Provided is an information processing apparatus and method of controlling same in which, when data transfer is performed among a plurality of control circuits, which control circuit is used to execute data transfer is controlled appropriately based on the transfer conditions of data transfer. To accomplish this, the apparatus has first and second control circuits, a request for data transfer performed between the first and second control circuits is acquired, the transfer conditions of the acquired data transfer are analyzed and which of the first and second control circuits is to execute the data transfer is selected.
    Type: Grant
    Filed: January 20, 2012
    Date of Patent: March 5, 2013
    Assignee: Canon Kabushiki Kaisha
    Inventor: So Yokomizo
  • Patent number: 8380916
    Abstract: The present techniques provide systems and methods of controlling access to more than one open page in a memory component, such as a memory bank. Several components may request access to the memory banks. A controller can receive the requests and open or close the pages in the memory bank in response to the requests. In some embodiments, the controller assigns priority to some components requesting access, and assigns a specific page in a memory bank to the priority component. Further, additional available pages in the same memory bank may also be opened by other priority components, or by components with lower priorities. The controller may conserve power, or may increase the efficiency of processing transactions between components and the memory bank by closing pages after time outs, after transactions are complete, or in response to a number of requests received by masters.
    Type: Grant
    Filed: June 4, 2009
    Date of Patent: February 19, 2013
    Assignee: Micron Technology, Inc.
    Inventor: Robert Walker
  • Patent number: 8380933
    Abstract: A multiprocessor system includes cache memories each of which is provided in correspondence with one of processor cores and includes a tag storage unit configured to store validity information representing whether a cache line as a unit to store data is valid, update information representing whether data in the cache line has been rewritten, and address information of the data in the cache line, a shared memory shared by the processor cores, and an arbitration circuit configured to arbitrate access requests from the processor cores to the shared memory and send the arbitrated access request to the cache memories. Each cache memory includes a violation detection circuit configured to detect a violation access by comparing the information in the tag storage unit with the access request from the arbitration circuit.
    Type: Grant
    Filed: March 24, 2008
    Date of Patent: February 19, 2013
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Masato Uchiyama
  • Patent number: 8370584
    Abstract: A method, circuit arrangement, and design structure utilize a lock prediction data structure to control ownership of a cache line in a shared memory computing system. In a first node among the plurality of nodes, lock prediction data in a hardware-based lock prediction data structure for a cache line associated with a first memory request is updated in response to that first memory request, wherein at least a portion of the lock prediction data is predictive of whether the cache line is associated with a release operation. The lock prediction data is then accessed in response to a second memory request associated with the cache line and issued by a second node and a determination is made as to whether to transfer ownership of the cache line from the first node to the second node based at least in part on the accessed lock prediction data.
    Type: Grant
    Filed: June 22, 2012
    Date of Patent: February 5, 2013
    Assignee: International Business Machines Corporation
    Inventors: Jason F. Cantin, Steven R. Kunkel
  • Patent number: 8364992
    Abstract: A system and method for providing a command queue selection scheme by selecting commands while giving preference to commands based on the power consumption characteristics of the command. In one embodiment, the selection scheme involves calculating the value of the cost of energy saving associated with the access of a command by the evaluation function Cost_i = EAT + C × F1(seek distance, latency). C is a dynamically adjustable power control function that determines how much power decreases with the selection of a particular command, and F1 is a functional calculation of the power consumption value associated with the particular command. In one embodiment, commands with low power consumption will be accessed in preference to commands with a shorter seek distance.
    Type: Grant
    Filed: November 25, 2008
    Date of Patent: January 29, 2013
    Assignee: HGST, Netherlands B.V.
    Inventors: William Guthrie, Nyles Heise, Hung M. Vu
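    A worked Python example of the selection rule quoted above, Cost_i = EAT + C × F1(seek distance, latency): the queue picks the command with the lowest combined cost, and raising C shifts the preference toward lower-power commands. The EAT values and the F1 shown here are illustrative stand-ins, not the patent's actual functions.

      def f1(seek_distance, latency):
          # toy power-consumption estimate: longer seeks and rotations cost more energy
          return 0.01 * seek_distance + 0.5 * latency

      def select_command(queue, c):
          # queue: list of dicts with 'eat' (estimated access time), 'seek', 'latency'
          def cost(cmd):
              return cmd["eat"] + c * f1(cmd["seek"], cmd["latency"])
          return min(queue, key=cost)

      if __name__ == "__main__":
          q = [{"name": "cmdA", "eat": 2.0, "seek": 100, "latency": 8.0},
               {"name": "cmdB", "eat": 3.0, "seek": 50,  "latency": 1.0}]
          # With C = 0 the shortest access time wins (cmdA); with C = 1 the
          # lower-power command (cmdB) is preferred despite its longer access time.
          print(select_command(q, c=0.0)["name"], select_command(q, c=1.0)["name"])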
  • Patent number: 8364907
    Abstract: In one embodiment, a processor may be configured to write ECC granular stores into the data cache, while non-ECC granular stores may be merged with cache data in a memory request buffer. In one embodiment, a processor may be configured to detect that a victim block writeback hits one or more stores in a memory request buffer (or vice versa) and may convert the victim block writeback to a fill. In one embodiment, a processor may speculatively issue stores that are subsequent to a load from a load/store queue, but prevent the update for the stores in response to a snoop hit on the load.
    Type: Grant
    Filed: January 27, 2012
    Date of Patent: January 29, 2013
    Assignee: Apple Inc.
    Inventors: Ramesh Gunna, Sudarshan Kadambi
  • Patent number: 8359438
    Abstract: A cache memory and a tag memory are included in a banked memory system and used to effectively enable parallel write and read operations on each clock cycle, even though the memory banks consist of single-port devices that are not inherently capable of parallel write and read operations.
    Type: Grant
    Filed: May 18, 2010
    Date of Patent: January 22, 2013
    Assignee: Avago Technologies Enterprise IP (Singapore) Pte. Ltd.
    Inventor: Douglas E. Bartlett
  • Patent number: 8359452
    Abstract: An image forming apparatus and a method of overwriting for a storage unit in an image forming apparatus. The method of overwriting data in a storage unit of an image forming apparatus includes configuring a plurality of overwriting options corresponding to data stored in the storage unit; deleting the data stored in the storage unit according to a delete instruction; and overwriting data according to the configuration of the plurality of overwriting options corresponding to the data stored in the storage unit.
    Type: Grant
    Filed: July 17, 2009
    Date of Patent: January 22, 2013
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Song-baik Jin
  • Patent number: 8359446
    Abstract: In a method for processing data using triple buffering, a data block to be processed is written to a memory area in a first interval of time. The data block is processed in the same memory area (A, B, C) in a second interval of time. The processed data block is returned from the same memory area in a third interval of time.
    Type: Grant
    Filed: November 9, 2009
    Date of Patent: January 22, 2013
    Assignee: Thomson Licensing
    Inventor: Ingo Huetter
  • Publication number: 20130013869
    Abstract: Embodiments of the invention operate within the context of a system with a processor providing memory-monitoring functionality. The lower-privileged code of a first process, such as user application code, communicates directly with higher-privileged code of a second process, such as interrupt-handling code of the operating system kernel, without using a software interrupt or other gate mechanism. This enhances overall system performance by eliminating the saving of state and processing inherent in interrupt handling, and also avoids missing events that may occur while other interrupts are masked during event handling. Specifically, the second process initializes a monitored memory area that is directly accessible by processes having at least the privilege level of the first process. The second process further initializes memory-monitoring hardware of the processor to monitor writes to the monitored memory area, such that the second process will resume execution from a dormant state when a write takes place.
    Type: Application
    Filed: July 8, 2011
    Publication date: January 10, 2013
    Inventor: Mateusz Berezecki
  • Patent number: 8347053
    Abstract: A storage system for managing a plurality of asynchronous remote copy proceedings between a plurality of first storage control devices and a plurality of second storage control devices, wherein each of the plurality of second storage control devices stores, in one or more second logical volumes, one or more update data items corresponding to one or more pieces of update-data-related information whose update reflection time information is the same as the received update reflection time information or older than it, and changes the status of the one or more second logical volumes to a suspend status.
    Type: Grant
    Filed: June 6, 2012
    Date of Patent: January 1, 2013
    Assignee: Hitachi, Ltd.
    Inventors: Hiroshi Arakawa, Kenta Ninose, Akira Deguchi, Katsuhiro Okumoto
  • Patent number: 8347044
    Abstract: A programmable logic controller (PLC) with multiple PLC functions is disclosed. The PLC includes at least one memory storing at least one of a plurality of programs or data, and one or more processors assigned to each of the PLC functions and coupled to the memory. The PLC functions are run in parallel. A method of operating the PLC and a PLC system with multiple processors are also disclosed.
    Type: Grant
    Filed: September 30, 2009
    Date of Patent: January 1, 2013
    Assignee: General Electric Company
    Inventors: Weihua Shang, Yongzhi Liu, William Henry Lueckenbach, Li Liu, Yu Zhang
  • Patent number: 8341344
    Abstract: A technique of accessing a resource includes receiving, at a master scheduler, resource access requests. The resource access requests are translated into respective slave state machine work orders that each include one or more respective commands. The respective commands are assigned, for execution, to command streams associated with respective slave state machines. The respective commands are then executed responsive to the respective slave state machines.
    Type: Grant
    Filed: September 21, 2007
    Date of Patent: December 25, 2012
    Inventors: Guhan Krishnan, John Kalamatianos
  • Publication number: 20120311273
    Abstract: The systems and methods described herein may extend transactional memory implementations to support transaction communicators and/or transaction condition variables for which transaction isolation is relaxed, and through which concurrent transactions can communicate and be synchronized with each other. Transactional accesses to these objects may not be isolated unless called within communicator-isolating transactions. A waiter transaction may invoke a wait method of a transaction condition variable, be added to a wait list for the variable, and be suspended pending notification of a notification event from a notify method of the variable. A notifier transaction may invoke a notify method of the variable, which may remove the waiter from the wait list, schedule the waiter transaction for resumed execution, and notify the waiter of the notification event. A waiter transaction may commit only if the corresponding notifier transaction commits.
    Type: Application
    Filed: June 27, 2011
    Publication date: December 6, 2012
    Inventors: Virendra J. Marathe, Victor M. Luchangco
  • Patent number: 8327082
    Abstract: A snoop look-up operation is performed in a system having a cache and a first processor. The processor generates requests to the cache for data. A snoop queue is loaded with snoop requests. The fullness of the snoop queue is a measure of how many snoop requests are in the snoop queue. A snoop look-up operation is performed in the cache if the fullness of the snoop queue exceeds a threshold. The snoop look-up operation is based on a snoop request from the snoop queue corresponding to an entry in the snoop queue. If the fullness of the snoop queue does not exceed the threshold, the snoop look-up operation waits until an idle access request cycle from the processor to the cache occurs and is then performed in the cache upon that idle access request cycle.
    Type: Grant
    Filed: August 29, 2008
    Date of Patent: December 4, 2012
    Assignee: Freescale Semiconductor, Inc.
    Inventors: William C. Moyer, Quyen Pho
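    A small Python sketch of the thresholding policy in patent 8327082 above: a snoop look-up is performed immediately when the snoop queue's fullness exceeds a threshold, and otherwise deferred until an idle access request cycle. The queue representation and threshold value are illustrative.

      from collections import deque

      class SnoopQueue:
          def __init__(self, threshold):
              self.threshold, self.q = threshold, deque()

          def load(self, snoop_request):
              self.q.append(snoop_request)

          def maybe_lookup(self, processor_idle):
              # Return the snoop request to look up this cycle, or None to keep waiting.
              if not self.q:
                  return None
              if len(self.q) > self.threshold or processor_idle:
                  return self.q.popleft()        # perform the look-up now
              return None                        # wait for an idle access request cycle

      if __name__ == "__main__":
          sq = SnoopQueue(threshold=2)
          for req in ("snoopA", "snoopB", "snoopC"):
              sq.load(req)
          print(sq.maybe_lookup(processor_idle=False))  # snoopA: fullness exceeded threshold
          print(sq.maybe_lookup(processor_idle=False))  # None: below threshold, no idle cycle
          print(sq.maybe_lookup(processor_idle=True))   # snoopB: idle cycle used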
  • Patent number: 8327059
    Abstract: In a computer system supporting execution of virtualization software and at least one instance of virtual system hardware, an interface is provided into the virtualization software to allow a program to directly define the access characteristics of its program data stored in physical memory. The technique includes providing data identifying memory pages and their access characteristics to the virtualization software, which then derives the memory access characteristics from the specified data. Optionally, the program may also specify a pre-defined function to be performed upon the occurrence of a fault associated with access to an identified memory page. In this manner, programs operating both internally and externally to the virtualization software can protect their memory pages, without intermediation by the operating system software.
    Type: Grant
    Filed: September 30, 2009
    Date of Patent: December 4, 2012
    Assignee: VMware, Inc.
    Inventors: Xiaoxin Chen, Pratap Subrahmanyam
  • Patent number: 8327093
    Abstract: A unique system and method for ordering commands may reduce disc access latency while giving preference to pending commands. The method and system involves giving preference to pending commands in a set of priority queues. The method and system involve identifying a pending command and processing other non-pending commands in route to the pending command if performance will not be penalized in doing so. The method and system include a list of command node references referring to a list of sorted command nodes that are to be scheduled for processing.
    Type: Grant
    Filed: October 21, 2004
    Date of Patent: December 4, 2012
    Assignee: Seagate Technology LLC
    Inventors: Edwin Scott Olds, Stephen R. Cornaby, Mark David Hertz, Kenny Troy Coker
  • Patent number: 8321872
    Abstract: Hardware resource sharing for a computer system running software tasks. A controller stores records including a mutex ID tag and a waiter flag in a cache. Lock and unlock registers are readable by the controller and loadable by the tasks with a mutex ID specifying a hardware resource. The controller monitors the lock register for loading with a mutex ID, and then determines whether it corresponds with the tag of a record in the cache. If so, it sets the record's waiter flag. If not, it adds a record having a tag corresponding with the mutex ID. The controller also monitors the unlock register for loading with a mutex ID, and then determines whether it corresponds with the tag of a record in the cache. If so, it determines whether that record's waiter flag is set and, if so, it clears that record from the cache.
    Type: Grant
    Filed: August 1, 2006
    Date of Patent: November 27, 2012
    Assignee: Nvidia Corporation
    Inventor: James R. Terrell, II
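    A loose Python transcription of the lock/unlock record handling described in patent 8321872 above (the real mechanism is a hardware controller polling registers and a tag cache; the dictionary, return strings, and the no-waiter unlock behavior here are assumptions for illustration).

      class MutexController:
          def __init__(self):
              self.records = {}                  # tag (mutex ID) -> {"waiter": bool}

          def lock(self, mutex_id):
              rec = self.records.get(mutex_id)
              if rec is not None:
                  rec["waiter"] = True           # resource already held: flag a waiter
                  return "wait"
              self.records[mutex_id] = {"waiter": False}
              return "granted"                   # no matching record: caller owns the resource

          def unlock(self, mutex_id):
              rec = self.records.get(mutex_id)
              if rec is not None and rec["waiter"]:
                  del self.records[mutex_id]     # waiter flag set: clear the record
                  return "released_to_waiter"
              return "no_waiter"                 # behavior here is an assumption

      if __name__ == "__main__":
          mc = MutexController()
          print(mc.lock(7))      # granted
          print(mc.lock(7))      # wait (waiter flag set)
          print(mc.unlock(7))    # released_to_waiter (record cleared)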
  • Patent number: 8316414
    Abstract: Apparatuses, methods, and systems for reconfiguring a secure system are disclosed. In one embodiment, an apparatus includes a configuration storage location, a lock, and lock override logic. The configuration storage location is to store information to configure the apparatus. The lock is to prevent writes to the configuration storage location. The lock override logic is to allow instructions executed from sub-operating mode code to override the lock.
    Type: Grant
    Filed: December 29, 2006
    Date of Patent: November 20, 2012
    Assignee: Intel Corporation
    Inventors: Sham M. Datta, Mohan J. Kumar, James A. Sutton, Ernie Brickell, Ioannis T. Schoinas