Prioritizing Patents (Class 711/158)
  • Patent number: 8510496
    Abstract: Method and apparatus for scheduling access requests for a multi-bank low-latency random read memory (LLRRM) device within a storage system. The LLRRM device comprises a plurality of memory banks, each bank being simultaneously and independently accessible. A queuing layer residing in the storage system may allocate a plurality of request-queuing data structures (“queues”), each queue being assigned to a memory bank. The queuing layer may receive access requests for memory banks in the LLRRM device and store each received access request in the queue assigned to the requested memory bank. The queuing layer may then send, to the LLRRM device for processing, an access request from each request-queuing data structure in successive order. As such, requests sent to the LLRRM device will comprise requests that will be applied to each memory bank in successive order as well, thereby reducing access latencies of the LLRRM device.
    Type: Grant
    Filed: April 27, 2009
    Date of Patent: August 13, 2013
    Assignee: NetApp, Inc.
    Inventors: George Totolos, Jr., Nhiem T. Nguyen
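A minimal sketch of the per-bank queuing and round-robin dispatch described in the abstract above; the bank count, request format, and dispatch loop are assumptions for illustration, not NetApp's implementation:

```python
from collections import deque

class BankScheduler:
    """Toy model: one request queue per independently accessible memory bank."""

    def __init__(self, num_banks):
        self.queues = [deque() for _ in range(num_banks)]

    def enqueue(self, request):
        # Store each received request in the queue assigned to its bank.
        self.queues[request["bank"]].append(request)

    def dispatch_round(self):
        # Send at most one request per bank, in successive bank order, so
        # consecutive requests reaching the device target different banks.
        return [q.popleft() for q in self.queues if q]

sched = BankScheduler(num_banks=4)
for i, bank in enumerate([0, 0, 2, 1, 3, 2]):
    sched.enqueue({"id": i, "bank": bank})
print(sched.dispatch_round())  # one request per non-empty bank
```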
  • Patent number: 8504784
    Abstract: An embodiment of a non-volatile memory storage system comprises a memory controller and a flash memory module. The memory controller manages the storage operations of the flash memory module. The memory controller is configured to assign a priority level to one or more types of housekeeping operations that may be higher than a priority level of one or more types of commands received from a host coupled to the storage system, and to service all operations required of the flash memory module according to priority.
    Type: Grant
    Filed: June 27, 2007
    Date of Patent: August 6, 2013
    Assignee: SanDisk Technologies Inc.
    Inventor: Shai Traister
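A short sketch of priority-ordered servicing in which certain housekeeping operation types can outrank host commands, in the spirit of the abstract above; the operation names and numeric levels are assumptions for illustration:

```python
import heapq
import itertools

# Lower number = served first; the actual levels are not specified in the abstract.
PRIORITY = {
    "urgent_wear_leveling": 0,   # housekeeping allowed to outrank host I/O
    "host_read": 1,
    "host_write": 2,
    "background_gc": 3,          # housekeeping that yields to host I/O
}

pending = []
_tie = itertools.count()         # keeps insertion order for equal priorities

def submit(op_type, payload):
    heapq.heappush(pending, (PRIORITY[op_type], next(_tie), op_type, payload))

def service_next():
    # All operations required of the flash module are serviced strictly by priority.
    _, _, op_type, payload = heapq.heappop(pending)
    return op_type, payload

submit("host_read", "LBA 100")
submit("background_gc", "block 7")
submit("urgent_wear_leveling", "block 3")
print(service_next())  # ('urgent_wear_leveling', 'block 3')
```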
  • Patent number: 8503469
    Abstract: A technique for providing network access in accordance with at least one layered network access technology comprising layer 1 processes and layer 2 processes is described. In a device implementation, the technique comprises a shared memory adapted to store at least layer 1 data and layer 2 data as well as a memory access component coupled to the shared memory and comprising a first client port adapted to receive memory access requests from a layer 1 processing client and a second client port adapted to receive memory access requests from a layer 2 processing client. The memory access component is configured to serve a memory access request from the layer 1 processing client with a lower priority than a memory access request from the layer 2 processing client. In particular, the memory access component may be adapted to prioritize reading of layer 1 data by the layer 2 processing client over writing of layer 2 data by the layer 1 processing client.
    Type: Grant
    Filed: November 16, 2009
    Date of Patent: August 6, 2013
    Assignee: Telefonaktiebolaget L M Ericsson (Publ)
    Inventors: Seyed-Hami Nourbakhsh, Helmut Steinbach
  • Publication number: 20130198465
    Abstract: A connection apparatus that connects a plurality of storage units and a controller that establishes connection with the respective storage units in response to a connection request issued from each of the plurality of storage units and accesses the storage units includes a processor; and a memory, wherein the processor transmits a connection request selected based on priority information that represents priority associated with the connection among a plurality of received connection requests to the controller, the priority information being stored in the memory, and changes priority information included in a connection request received from a certain storage unit among the plurality of storage units so that the priority information has higher priority than the priority information included in connection requests received from the other storage units for a period where a connection request is successively received from the certain storage unit and a predetermined condition is satisfied.
    Type: Application
    Filed: December 10, 2012
    Publication date: August 1, 2013
    Applicant: Fujitsu Limited
  • Patent number: 8499119
    Abstract: Aspects relate to systems and methods for providing the ability to customize content delivery. A device can cache multiple presentations. The device can establish a cache depth upon initiation of the subscription service. The device can provide an interface to select a cache depth. The cache depth can be the number of presentations the device will maintain on the device at a given time.
    Type: Grant
    Filed: April 6, 2009
    Date of Patent: July 30, 2013
    Assignee: QUALCOMM Incorporated
    Inventors: Sajith Balraj, An Mei Chen
  • Patent number: 8498074
    Abstract: A disk drive is disclosed wherein a plurality of access commands received from a host are stored in a command queue. An access cost for at least a first and second access command in the command queue is generated, wherein each access cost comprises a seek length and a rotation latency. A first access command is selected from the command queue having a first access cost, and a window is defined relative to the first access cost and a first risk based penalty (RBP) of the first access command, wherein the first RBP represents a probability of missing a first data sector of the first access command. A second access command is selected from the command queue comprising a second access cost within the window. A choice is made between the first and second access commands in response to a second RBP of the second access command.
    Type: Grant
    Filed: August 18, 2011
    Date of Patent: July 30, 2013
    Assignee: Western Digital Technologies, Inc.
    Inventors: Jack A. Mobley, Kenny T. Coker, Orhan Beker
  • Patent number: 8495641
    Abstract: A technique for efficiently boosting the priority of a preemptable data reader while resolving races between the priority boosting and the reader exiting a critical section or terminating in order to eliminate impediments to grace period processing that defers the destruction of one or more shared data elements that may be referenced by the reader until the reader is no longer capable of referencing the one or more data elements. A determination is made that the reader is in a read-side critical section and the reader is designated as a candidate for priority boosting. A verification is made that the reader has not exited its critical section or terminated, and the reader's priority is boosted to expedite its completion of the critical section. The reader's priority is decreased following its completion of the critical section.
    Type: Grant
    Filed: June 29, 2007
    Date of Patent: July 23, 2013
    Assignee: International Business Machines Corporation
    Inventor: Paul E. McKenney
  • Patent number: 8495310
    Abstract: A system and method utilize a memory device that may be accessed by a plurality of controllers or processor cores via respective ports of the memory device. Each controller may be coupled to a respective port of the memory device via a data bus. Each port of the memory device may be associated with a predefined section of memory, thereby giving each controller access to a distinct section of memory without interference from other controllers. A common command/address bus may couple the plurality of controllers to the memory device. Each controller may assert an active signal on a memory access control bus to gain access to the command/address bus to initiate a memory access. In some embodiments, a plurality of memory devices may be arranged in a memory package in a stacked die memory configuration.
    Type: Grant
    Filed: September 22, 2008
    Date of Patent: July 23, 2013
    Assignee: Qimonda AG
    Inventors: Peter Gregorius, Thomas Hein, Martin Maier, Hermann Ruckerbauer, Thilo Schaffroth, Ralf Schedel, Wolfgang Spirkl, Johannes Stecker
  • Patent number: 8489812
    Abstract: An approach for automatic storage planning and provisioning within a clustered computing environment is provided. Planning input for a set of storage area network volume controllers (SVCs) will be received within the clustered computing environment, the planning input indicating a potential load on the SVCs and their associated components. Analytical models (e.g., from vendors) can be also used that allow for a load to be accurately estimated on the storage components. Configuration data for a set of storage components (i.e., the set of SVCs, a set of managed disk (Mdisk) groups associated with the set of SVCs, and a set of backend storage systems) will also be collected. Based on this configuration data, the set of storage components will be filtered to identify candidate storage components capable of addressing the potential load. Then, performance data for the candidate storage components will be analyzed to identify an SVC and an Mdisk group to address the potential load.
    Type: Grant
    Filed: October 29, 2010
    Date of Patent: July 16, 2013
    Assignee: International Business Machines Corporation
    Inventors: Kavita Chavda, David P. Goodman, Sandeep Gopisetty, Larry S. McGimsey, James E. Olson, Aameek Singh
  • Patent number: 8478952
    Abstract: Data indicating a plurality of groups into which data to be accessed from one or more storage media has been divided is received. For each of at least a subset of the groups, a parallelization limit for that group is received. A first parallelization limit for a first group in the subset is not necessarily the same as a second parallelization limit for a second group in the subset.
    Type: Grant
    Filed: June 13, 2006
    Date of Patent: July 2, 2013
    Assignee: EMC Corporation
    Inventor: Peter Armorer
  • Patent number: 8478955
    Abstract: In one aspect, a method includes forming a virtualized grid consistency group to replicate logical units, running a first grid copy on a first data protection appliance (DPA), running a second grid copy on a second DPA, splitting to the first DPA IOs intended for a first subset of the logical units and splitting to the second DPA IOs intended for a second subset of the logical units different from the first subset of logical units.
    Type: Grant
    Filed: September 27, 2010
    Date of Patent: July 2, 2013
    Assignee: EMC International Company
    Inventors: Assaf Natanzon, Yuval Aharoni
  • Patent number: 8478932
    Abstract: Embodiments of the invention provide a memory allocation module that adopts memory-pool based allocation and is aware of the physical configuration of the memory blocks in order to manage the memory allocation intelligently while exploiting statistical characteristics of packet traffic. The memory-pool based allocation makes it easy to find empty memory blocks. Packet traffic characteristics are used to maximize the number of empty memory blocks.
    Type: Grant
    Filed: September 15, 2009
    Date of Patent: July 2, 2013
    Assignee: Texas Instruments Incorporated
    Inventors: Seung Jun Baek, Ramanuja Vedantham, Se-Joong Lee
  • Patent number: 8478950
    Abstract: Requests from a plurality of different agents (10) are passed to a request handler via a request concentrator. In front of the request concentrator the requests are queued in a plurality of queues (12). A first one of the agents is configured to issue a priority changing command with a defined position relative to pending requests issued by the first one of the agents (10) to the first one of the queues (12). An arbiter (16) makes successive selections, selecting queues (12) from which the request concentrator (14) will pass requests to the request handler (18), based on relative priorities assigned to the queues (12). The arbiter (16) responds to the priority changing command by changing the priority of the first one of the queues (12), selectively for a duration while the pending requests up to the defined position are in the first one of the queues (12). Different queues may be provided for read and write requests from the first one of the agents.
    Type: Grant
    Filed: July 27, 2009
    Date of Patent: July 2, 2013
    Assignee: Synopsys, Inc.
    Inventors: Tomas Henriksson, Elisabeth Francisca Maria Steffens
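A compact sketch of a priority-changing command whose effect lasts only while the requests pending at the time of the command remain queued, as the abstract above describes; queue names, priority values, and the boost amount are assumptions:

```python
from collections import deque

class RequestQueue:
    def __init__(self, name, base_priority):
        self.name = name
        self.base_priority = base_priority
        self.items = deque()
        self.dequeued = 0
        self.boost_until = 0   # number of dequeues covered by the boost

    def push(self, req):
        self.items.append(req)

    def boost_pending(self):
        # Priority-changing command: cover the requests pending right now.
        self.boost_until = self.dequeued + len(self.items)

    def effective_priority(self):
        boosted = self.dequeued < self.boost_until
        return self.base_priority + (10 if boosted else 0)

def arbitrate(queues):
    # Pass a request from the non-empty queue with the highest priority.
    candidates = [q for q in queues if q.items]
    if not candidates:
        return None
    q = max(candidates, key=lambda q: q.effective_priority())
    q.dequeued += 1
    return q.name, q.items.popleft()

a, b = RequestQueue("agent-A", 1), RequestQueue("agent-B", 2)
a.push("r1"); a.push("r2"); b.push("w1")
a.boost_pending()            # agent-A outranks agent-B until r1 and r2 drain
print(arbitrate([a, b]))     # ('agent-A', 'r1')
print(arbitrate([a, b]))     # ('agent-A', 'r2')
print(arbitrate([a, b]))     # ('agent-B', 'w1')
```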
  • Publication number: 20130166689
    Abstract: Systems, methods, and computer-program products store file segments by receiving a first file segment, and storing the first file segment in a first memory area having a highest ranking. The first memory area is reassigned as a memory area having a next highest ranking when a second file segment is received and the first memory area has reached a maximum capacity. The second file segment is stored in another memory area that is reassigned as the memory area having the highest ranking.
    Type: Application
    Filed: December 17, 2008
    Publication date: June 27, 2013
    Applicant: Adobe Systems Incorporated
    Inventor: Wesley McCullough
  • Patent number: 8473646
    Abstract: Input and output (I/O) operations performed by a data storage device are managed dynamically to balance aspects such as throughput and latency. Sequential read and write requests are sent to a data storage device whereby the corresponding operations are performed without time delay due to extra disk revolutions. In order to minimize latency, particularly for read operations, random read and write requests are held in a queue upstream of an I/O controller of the data storage device until the buffer of the data storage device is empty. The queued requests can be reordered when a higher priority request is received, improving the overall latency for specific requests. An I/O scheduler of a data server is still able to use any appropriate algorithm to order I/O requests, such as by prioritizing reads over writes as long as the writes do not back up in the I/O queue beyond a certain threshold.
    Type: Grant
    Filed: June 21, 2012
    Date of Patent: June 25, 2013
    Assignee: Amazon Technologies, Inc.
    Inventors: Tate Andrew Certain, Roland Paterson-Jones, James R. Hamilton
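A rough sketch of the upstream queueing policy outlined in the abstract above: sequential requests pass straight through, random requests are held until the drive buffer drains, and reads are favored until writes back up past a threshold. The device interface and threshold value are assumptions:

```python
from collections import deque

class FakeDrive:
    """Stand-in for the data storage device and its on-board buffer."""

    def __init__(self):
        self.buffer = []

    def buffer_empty(self):
        return not self.buffer

    def issue(self, req):
        self.buffer.append(req)
        print("issued", req["op"], req["lba"])

    def complete_all(self):
        self.buffer.clear()

class UpstreamScheduler:
    WRITE_BACKLOG_LIMIT = 8   # illustrative threshold

    def __init__(self):
        self.random_reads = deque()
        self.random_writes = deque()

    def submit(self, req, drive):
        if req["sequential"]:
            drive.issue(req)              # sequential I/O is not held back
        elif req["op"] == "read":
            self.random_reads.append(req)
        else:
            self.random_writes.append(req)

    def pump(self, drive):
        # Release queued random I/O only while the drive buffer is empty,
        # preferring reads unless the write queue exceeds the threshold.
        while drive.buffer_empty() and (self.random_reads or self.random_writes):
            writes_backed_up = len(self.random_writes) > self.WRITE_BACKLOG_LIMIT
            if self.random_reads and not writes_backed_up:
                drive.issue(self.random_reads.popleft())
            else:
                drive.issue(self.random_writes.popleft())

drive, sched = FakeDrive(), UpstreamScheduler()
sched.submit({"op": "write", "lba": 10, "sequential": True}, drive)
sched.submit({"op": "read", "lba": 9000, "sequential": False}, drive)
drive.complete_all()
sched.pump(drive)   # buffer is empty, so the held random read is released
```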
  • Patent number: 8473704
    Abstract: With respect to a storage system in which quick formatting and sequential formatting can be run concurrently, the time it takes to process an access request from a host is prevented from becoming prolonged even when a normal sequential formatting process is executed with respect to a storage volume which frequently incurs I/O penalties. The storage device measures the load from the host per configurational unit (storage medium) of LUs, and divides the LUs into a group of LUs whose load per storage medium is low, and a group of LUs whose load per storage medium is high. Further, the density per unit of LU capacity of I/O penalties incurred in a storage volume for which quick formatting is being executed is calculated. Sequential formatting is then executed, with priority, with respect to the LUs belonging to the group with low loads and in order of descending density of incurred I/O penalties.
    Type: Grant
    Filed: April 28, 2010
    Date of Patent: June 25, 2013
    Assignee: Hitachi, Ltd.
    Inventors: Akihiko Araki, Yusuke Nonaka
  • Patent number: 8473677
    Abstract: A distributed storage unit determines how to handle a read or write request for a data slice based on a state of the memory the data slice is to be read from or written to. When receiving a request to retrieve a data slice, the distributed storage unit determines a state of the memory in which the data slice is stored. Based on the memory state, one of multiple different methods for obtaining the data slice is selected. The methods include, among others, a direct read from the memory, and reconstructing the data slice using other memories and parity values. In response to a write request, the distributed storage unit can determine whether to use the currently selected memory for writing, or rotate the memory used for writing, based on a state of the memory.
    Type: Grant
    Filed: May 11, 2010
    Date of Patent: June 25, 2013
    Assignee: Cleversafe, Inc.
    Inventors: S. Christopher Gladwin, Wesley Leggette
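A small sketch of choosing the read method from the state of the memory that holds a slice, either reading it directly or rebuilding it from the other memories plus parity, per the abstract above; the state names and the XOR rebuild are assumptions, not the product's actual scheme:

```python
def xor_bytes(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def read_slice(memories, parity, index):
    state = memories[index]["state"]
    if state == "ok":
        return memories[index]["data"]          # direct read
    if state == "failed":
        # Reconstruct the slice from the surviving memories and parity.
        survivors = [m["data"] for i, m in enumerate(memories) if i != index]
        return xor_bytes(survivors + [parity])
    raise IOError(f"memory {index} unavailable ({state})")

slices = [b"\x01\x02", b"\x10\x20", b"\x04\x08"]
parity = xor_bytes(slices)
memories = [{"state": "ok", "data": slices[0]},
            {"state": "failed", "data": None},
            {"state": "ok", "data": slices[2]}]
print(read_slice(memories, parity, 1))   # reconstructs the lost b"\x10\x20"
```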
  • Patent number: 8468536
    Abstract: A method includes providing LRU selection logic which controllably passes requests for access to computer system resources to a shared resource via a first level and a second level, determining whether a request in a request group is active, presenting the request to LRU selection logic at the first level when it is determined that the request is active, determining whether the request is an LRU request of the request group at the first level, forwarding the request to the second level when it is determined that the request is the LRU request of the request group, comparing the request to an LRU request from each of the request groups at the second level to determine whether the request is an LRU request of the plurality of request groups, and selecting the LRU request of the plurality of request groups to access the shared resource.
    Type: Grant
    Filed: June 24, 2010
    Date of Patent: June 18, 2013
    Assignee: International Business Machines Corporation
    Inventors: Deanna Postles Dunn Berger, Ekaterina M. Ambroladze, Michael Fee, Diana Lynn Orf
  • Patent number: 8468318
    Abstract: A system and method for scheduling read and write operations among a plurality of solid-state storage devices. A computer system comprises client computers and data storage arrays coupled to one another via a network. A data storage array utilizes solid-state drives and Flash memory cells for data storage. A storage controller within a data storage array comprises an I/O scheduler. The data storage controller is configured to receive requests targeted to the data storage medium, said requests including a first type of operation and a second type of operation. The controller is further configured to schedule requests of the first type for immediate processing by said plurality of storage devices, and queue requests of the second type for later processing by the plurality of storage devices. Operations of the first type may correspond to operations with an expected relatively low latency, and operations of the second type may correspond to operations with an expected relatively high latency.
    Type: Grant
    Filed: September 15, 2010
    Date of Patent: June 18, 2013
    Assignee: Pure Storage Inc.
    Inventors: John Colgrove, John Hayes, Bo Hong, Feng Wang, Ethan Miller, Craig Harmer
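A brief sketch separating expected-low-latency operations (scheduled immediately) from expected-high-latency operations (queued for later), as the abstract above describes; the operation classification and idle test are assumptions:

```python
from collections import deque

LOW_LATENCY = {"read"}                    # assumed first-type operations
HIGH_LATENCY = {"erase", "cache_flush"}   # assumed second-type operations

class IOScheduler:
    def __init__(self):
        self.deferred = deque()

    def receive(self, op, issue):
        if op["type"] in LOW_LATENCY:
            issue(op)                     # immediate processing
        else:
            self.deferred.append(op)      # queued for later processing

    def drain(self, issue, devices_idle):
        # Release high-latency work only when the drives are otherwise idle.
        while devices_idle() and self.deferred:
            issue(self.deferred.popleft())

sched = IOScheduler()
sched.receive({"type": "read", "lba": 4}, issue=print)     # issued at once
sched.receive({"type": "erase", "block": 9}, issue=print)  # held back
sched.drain(issue=print, devices_idle=lambda: True)        # released when idle
```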
  • Patent number: 8468303
    Abstract: In accordance with an aspect of the invention, a storage system includes a processor; a memory; a disk control module configured to receive a write command for writing to an unallocated area and to identify an object of the write command to be written as a written object; and an object allocation acquisition module configured to obtain object allocation information specifying one or more virtual volume locations for storing the written object. The disk control module allocates, to each of the one or more virtual volume locations, an area selected from a plurality of logical volumes if the written object is predefined as a randomly accessed object. The disk control module allocates to the one or more virtual volume locations a consecutive area of one logical volume if the written object is predefined as a sequentially accessed object.
    Type: Grant
    Filed: September 27, 2010
    Date of Patent: June 18, 2013
    Assignee: Hitachi, Ltd.
    Inventor: Shinichi Hayashi
  • Patent number: 8468319
    Abstract: A storage system, a disk controller, a disk drive and a method of operating thereof. The method includes: configuring a disk drive in a manner enabling executing one or more read requests concurrently with executing one or more write requests addressed to the same data track of the disk drive; responsive to a received write request addressed to a certain track of the disk drive, identifying with the help of the control layer one or more read requests concurrent with the received write request and addressed to the same track; if the received write request and the identified one or more read requests match a predefined criterion, generating and issuing, with the help of the control layer, a command to the disk drive for executing a single task corresponding to the concurrent read and write requests combined in accordance with a certain mask.
    Type: Grant
    Filed: January 11, 2011
    Date of Patent: June 18, 2013
    Assignee: Infinidat Ltd.
    Inventor: Julian Satran
  • Patent number: 8464007
    Abstract: Various embodiments include fault tolerant memory apparatus, methods, and systems, including a memory manager for supplying read and write requests to a memory device having a plurality of addressable memory locations. The memory manager includes a plurality of banks. Each bank includes a bank queue for storing read and write requests. The memory manager also includes a request arbiter connected to the plurality of banks. The request arbiter removes read and write requests from the bank queues for presentation to the memory device. The request arbiter includes a read phase of operation and a write phase of operation, wherein the request arbiter preferentially selects read requests for servicing during the read phase of operation and preferentially selects write requests for servicing during the write phase of operation.
    Type: Grant
    Filed: June 12, 2009
    Date of Patent: June 11, 2013
    Assignee: Cray Inc.
    Inventors: Dennis C. Abts, Michael Higgins, Van L. Snyder, Gerald A. Schwoerer
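A simplified sketch of per-bank request queues with an arbiter that alternates between a read phase and a write phase, preferring the matching request type during each phase, per the abstract above; the phase length and fallback rule are assumptions:

```python
from collections import deque

class BankArbiter:
    def __init__(self, num_banks, phase_length=4):
        self.banks = [deque() for _ in range(num_banks)]
        self.phase = "read"
        self.phase_length = phase_length
        self.served = 0

    def enqueue(self, req):
        self.banks[req["bank"]].append(req)

    def next_request(self):
        # Preferentially select the type matching the current phase, else fall back.
        other = "write" if self.phase == "read" else "read"
        chosen = None
        for wanted in (self.phase, other):
            for q in self.banks:
                if q and q[0]["op"] == wanted:
                    chosen = q.popleft()
                    break
            if chosen:
                break
        self.served += 1
        if self.served >= self.phase_length:   # switch between read and write phases
            self.phase, self.served = other, 0
        return chosen

arb = BankArbiter(num_banks=2, phase_length=2)
for req in [{"bank": 0, "op": "write"}, {"bank": 1, "op": "read"},
            {"bank": 0, "op": "read"}]:
    arb.enqueue(req)
print([arb.next_request() for _ in range(3)])
```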
  • Patent number: 8463984
    Abstract: The disclosure is related to systems and methods of dynamic dataflow in a multiple cache architecture. In an embodiment, a system having a data storage device with a multiple cache architecture may detect at least one attribute affecting a data storage workload or data storage performance. The system may select at least one of a plurality of data flow schemes based on the at least one attribute, which may be done to optimize the data storage workload for various conditions. In another embodiment, a data storage controller may automatically and dynamically select one of multiple data flow schemes within a data storage device having a multiple cache architecture. The data storage controller may monitor attributes to determine which data flow scheme to select for various workloads of the data storage device.
    Type: Grant
    Filed: December 31, 2009
    Date of Patent: June 11, 2013
    Assignee: Seagate Technology LLC
    Inventors: Edwin Scott Olds, Timothy Richard Feldman, David Warren Wheelock, Steven Scott William, Robert William Dixon
  • Publication number: 20130138900
    Abstract: According to an embodiment, an information processing device includes a first storage unit and a second storage unit having power consumption different from that of the first storage unit. The information processing device also includes a control unit configured to determine a priority of information that is to be stored in the first storage unit or the second storage unit. The control unit is configured to store the information into the first storage unit or into the second storage unit based on the determined priority.
    Type: Application
    Filed: September 11, 2012
    Publication date: May 30, 2013
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Takeshi Ishihara, Yoshimichi Tanizawa, Kotaro Ise
  • Publication number: 20130132681
    Abstract: In one embodiment, a memory management system temporarily maintains a memory page at an artificially high priority level 210. The memory management system may assign an initial priority level 212 to a memory page in a page priority list 202. The memory management system may change the memory page to a target priority level 214 in the page priority list 202 after a protection period 238 has expired.
    Type: Application
    Filed: November 22, 2011
    Publication date: May 23, 2013
    Applicant: Microsoft Corporation
    Inventors: Landy Wang, Yevgeniy Bak, Mehmet Iyigun
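A tiny sketch of holding a page at its initial priority until a protection period expires, then letting it fall to its target priority, following the abstract above; the numeric levels and period are illustrative assumptions:

```python
import time

class PagePriorityList:
    def __init__(self):
        self.entries = {}

    def add(self, page, initial, target, protection_secs):
        self.entries[page] = (initial, target,
                              time.monotonic() + protection_secs)

    def priority(self, page):
        initial, target, protect_until = self.entries[page]
        # Artificially high priority while the protection period lasts.
        return initial if time.monotonic() < protect_until else target

plist = PagePriorityList()
plist.add("pageA", initial=7, target=2, protection_secs=0.05)
print(plist.priority("pageA"))   # 7 during the protection period
time.sleep(0.06)
print(plist.priority("pageA"))   # 2 after the protection period expires
```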
  • Publication number: 20130132653
    Abstract: Systems and methods are disclosed for partitioning data for storage in a non-volatile memory (“NVM”), such as flash memory. In some embodiments, a priority may be assigned to data being stored, and the data may be logically partitioned based on the priority. For example, a file system may identify a logical address within a first predetermined range for higher priority data and within a second predetermined range for lower priority data, such as by using a union file system. Using the logical address, a NVM driver can determine the priority of data being stored and can process (e.g., encode) the data based on the priority. The NVM driver can store an identifier in the NVM along with the data, and the identifier can indicate the processing techniques used on the associated data.
    Type: Application
    Filed: January 14, 2013
    Publication date: May 23, 2013
    Applicant: APPLE INC.
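A minimal sketch of inferring priority from the logical address range and recording the processing choice alongside the data, as this abstract (and the granted patent 8356137 later in this list) describes; the address ranges and encoding names are assumptions:

```python
HIGH_RANGE = range(0x0000, 0x10000)    # assumed range for higher-priority data
LOW_RANGE = range(0x10000, 0x80000)    # assumed range for lower-priority data

def classify(lba):
    if lba in HIGH_RANGE:
        return "high"
    if lba in LOW_RANGE:
        return "low"
    raise ValueError("LBA outside the partitioned ranges")

def nvm_write(lba, data, flash):
    # The driver picks a processing technique from the inferred priority
    # and stores an identifier for it next to the data.
    scheme = "strong_ecc" if classify(lba) == "high" else "fast_ecc"
    flash[lba] = {"id": scheme, "data": data}

flash = {}
nvm_write(0x0042, b"filesystem metadata", flash)
nvm_write(0x23000, b"media payload", flash)
print(flash[0x0042]["id"], flash[0x23000]["id"])   # strong_ecc fast_ecc
```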
  • Patent number: 8448178
    Abstract: Systems and methods are provided that schedule task requests within a computing system based upon the history of task requests. The history of task requests can be represented by a historical log that monitors the receipt of high priority task request submissions over time. This historical log in combination with other user defined scheduling rules is used to schedule the task requests. Task requests in the computer system are maintained in a list that can be divided into a hierarchy of queues differentiated by the level of priority associated with the task requests contained within that queue. The user-defined scheduling rules give scheduling priority to the higher priority task requests, and the historical log is used to predict subsequent submissions of high priority task requests so that lower priority task requests that would interfere with the higher priority task requests will be delayed or will not be scheduled for processing.
    Type: Grant
    Filed: March 20, 2012
    Date of Patent: May 21, 2013
    Assignee: International Business Machines Corporation
    Inventors: David M Daly, Peter A Franaszek, Luis A Lastras-Montano
  • Patent number: 8438345
    Abstract: A multi-priority encoder includes a plurality of interconnected, single-priority encoders arranged in descending priority order. The multi-priority encoder includes circuitry for blocking a match output by a lower level single-priority encoder if a higher level single-priority encoder outputs a match output. Match data is received from a content addressable memory, and the priority encoder includes address encoding circuitry for outputting the address locations of each highest priority match line flagged by the highest priority indicator. Each single-priority encoder includes a highest priority indicator which has a plurality of indicator segments, each indicator segment being associated with a match line input.
    Type: Grant
    Filed: July 1, 2011
    Date of Patent: May 7, 2013
    Assignee: Micron Technology, Inc.
    Inventor: Zvi Regev
  • Publication number: 20130111159
    Abstract: A technique for transferring data in a digital signal processing system is described. In one example, the digital signal processing system comprises a number of fixed function accelerators, each connected to a memory access controller and each configured to read data from a memory device, perform one or more operations on the data, and write data to the memory device. To avoid hardwiring the fixed function accelerators together, and to provide a configurable digital signal processing system, a multi-threaded processor controls the transfer of data between the fixed function accelerators and the memory. Each processor thread is allocated to a memory access channel, and the threads are configured to detect an occurrence of an event and, responsive to this, control the memory access controller to enable a selected fixed function accelerator to read data from or write data to the memory device via its memory access channel.
    Type: Application
    Filed: October 5, 2012
    Publication date: May 2, 2013
    Applicant: IMAGINATION TECHNOLOGIES LIMITED
  • Patent number: 8423728
    Abstract: Scheduling jobs for a plurality of logical devices associated with physical devices includes assigning a physical run count value and a physical skip count value to each of the physical devices, at each iteration, examining the physical skip count value and the physical run count value for each of the physical devices, and scheduling a number of jobs up to the physical run count value for logical devices associated with a particular one of the physical devices at each iteration corresponding to the physical skip count value for the particular one of the physical devices. The physical skip count value and the physical run count value for a particular one of the physical devices may vary according to a total load of the particular physical device.
    Type: Grant
    Filed: June 16, 2005
    Date of Patent: April 16, 2013
    Assignee: EMC Corporation
    Inventors: Rong Yu, Peng Yin, Stephen R. Ives, Adi Ofer, Gilad Sade, Barak Bejerano
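A small sketch of run-count/skip-count scheduling as described above: each physical device is serviced on iterations spaced by its skip count and, when serviced, runs up to its run count of jobs from its logical devices. The eligibility rule (every skip_count + 1 iterations) is an assumption:

```python
from collections import deque

class PhysicalDevice:
    def __init__(self, name, run_count, skip_count):
        self.name = name
        self.run_count = run_count     # jobs allowed per eligible iteration
        self.skip_count = skip_count   # iterations skipped between services
        self.jobs = deque()            # jobs for the associated logical devices

def schedule(devices, iterations):
    for i in range(iterations):
        for dev in devices:
            if i % (dev.skip_count + 1):   # not this device's turn yet
                continue
            for _ in range(min(dev.run_count, len(dev.jobs))):
                print(f"iteration {i}: {dev.name} runs {dev.jobs.popleft()}")

lightly_loaded = PhysicalDevice("disk-A", run_count=2, skip_count=0)
heavily_loaded = PhysicalDevice("disk-B", run_count=1, skip_count=2)
lightly_loaded.jobs.extend(["a1", "a2", "a3", "a4"])
heavily_loaded.jobs.extend(["b1", "b2"])
schedule([lightly_loaded, heavily_loaded], iterations=4)
```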
  • Patent number: 8412886
    Abstract: In such a configuration that a port unit is provided which takes a form being shared among threads and has a plurality of entries for holding access requests, and the access requests for a cache shared by a plurality of threads being executed at the same time are controlled using the port unit, the access request issued from each thread is registered on a port section of the port unit which is assigned to the thread, thereby controlling the port unit to be divided for use in accordance with the thread configuration. In selecting the access request, the access requests are selected for each thread based on the specified priority control from among the access requests issued from the threads held in the port unit, thereafter a final access request is selected in accordance with a thread selection signal from among those selected access requests.
    Type: Grant
    Filed: December 16, 2009
    Date of Patent: April 2, 2013
    Assignee: Fujitsu Limited
    Inventor: Naohiro Kiyota
  • Patent number: 8412891
    Abstract: Memory access arbitration allowing a shared memory to be used both as a memory for a processor and as a buffer for data flows, including an arbiter unit that makes assignment for access requests to the memory sequentially and transfers blocks of data in one round-robin cycle according to bandwidths required for the data transfers, sets priorities for the transfer blocks so that the bandwidths required for the data transfers are met by alternate transfer of the transfer blocks, and executes an access from the processor with an upper limit set for the number of access times from the processor to the memory in one round-robin cycle so that the access from the processor with the highest priority and with a predetermined transfer length exerts less effect on bandwidths for data flow transfers in predetermined intervals between the transfer blocks.
    Type: Grant
    Filed: November 1, 2010
    Date of Patent: April 2, 2013
    Assignee: International Business Machines Corporation
    Inventors: Masayuki Demura, Hisato Matsuo, Keisuke Tanaka
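A condensed sketch of one round-robin cycle that interleaves data-flow transfer blocks (sized to each flow's bandwidth share) with processor accesses capped per cycle, loosely following the abstract above; the block counts and the cap are assumptions:

```python
from collections import deque

def run_cycle(flows, cpu_queue, cpu_limit):
    """flows: list of (name, blocks_per_cycle, queue_of_blocks)."""
    schedule, cpu_used = [], 0
    for name, blocks_per_cycle, queue in flows:
        # Transfer this flow's share of blocks for the cycle.
        for _ in range(min(blocks_per_cycle, len(queue))):
            schedule.append((name, queue.popleft()))
        # Slot in a processor access between transfer blocks, up to the cap,
        # so processor traffic cannot erode the data-flow bandwidth guarantees.
        if cpu_queue and cpu_used < cpu_limit:
            schedule.append(("cpu", cpu_queue.popleft()))
            cpu_used += 1
    return schedule

flows = [("flow-A", 2, deque(["a1", "a2", "a3"])),
         ("flow-B", 1, deque(["b1", "b2"]))]
cpu = deque(["c1", "c2", "c3"])
print(run_cycle(flows, cpu, cpu_limit=2))
# [('flow-A', 'a1'), ('flow-A', 'a2'), ('cpu', 'c1'),
#  ('flow-B', 'b1'), ('cpu', 'c2')]
```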
  • Patent number: 8412885
    Abstract: In an embodiment of the present invention a method includes: sending a request for data to a memory controller; arranging the request for data by order of importance or priority; identifying a source of the request for data; and if the source is an input/output device, masking off P ways in a cache; and allocating ways in filling the cache. Other embodiments are described and claimed.
    Type: Grant
    Filed: November 12, 2009
    Date of Patent: April 2, 2013
    Assignee: Intel Corporation
    Inventors: Liqun Cheng, Zhen Fang, Jeffrey Wilder, Sadagopan Srinivasan, Ravishankar Iyer, Donald Newell
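A short sketch of masking off P cache ways for fills whose source is an I/O device, so device traffic can only allocate into a small subset of each set, per the abstract above; the way count and mask size are assumptions:

```python
NUM_WAYS = 8
P = 6                 # ways masked off for I/O-sourced allocations

def allowed_ways(source):
    # I/O devices may only fill the last NUM_WAYS - P ways of a set.
    return range(P, NUM_WAYS) if source == "io" else range(NUM_WAYS)

def allocate(cache_set, source, tag):
    ways = allowed_ways(source)
    for way in ways:
        if cache_set[way] is None:     # free way within the allowed subset
            cache_set[way] = tag
            return way
    victim = next(iter(ways))          # evict only within the allowed subset
    cache_set[victim] = tag
    return victim

cache_set = [None] * NUM_WAYS
print(allocate(cache_set, "cpu", "A"))   # 0
print(allocate(cache_set, "io", "B"))    # 6
print(allocate(cache_set, "io", "C"))    # 7
print(allocate(cache_set, "io", "D"))    # 6 (I/O evicts within its own ways only)
```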
  • Patent number: 8407394
    Abstract: This document discusses, among other things, an example system and methods for memory expansion. An example embodiment includes detecting a memory command directed to a logical rank and a number of physical ranks mapped to the logical rank. The example embodiment may also include issuing the memory command to the number of physical ranks based on determining that the memory command is to be issued to the number of physical ranks.
    Type: Grant
    Filed: January 8, 2008
    Date of Patent: March 26, 2013
    Assignee: Cisco Technology, Inc.
    Inventors: Mario Mazzola, Satyanarayana Nishtala, Luca Cafiero, Philip Manela
  • Patent number: 8407442
    Abstract: A computer-program product includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes receiving a plurality of stores in a store queue, via a processor, comparing a fetch request against the store queue to search for a target store having a same memory address as the fetch request, determining whether the target store is ahead of the fetch request in a same pipeline, and processing the fetch request when it is determined that the target store is ahead of the fetch request.
    Type: Grant
    Filed: June 24, 2010
    Date of Patent: March 26, 2013
    Assignee: International Business Machines Corporation
    Inventors: Deanna Postles Dunn Berger, Michael Fee, Robert J. Sonnelitter, III
  • Patent number: 8407407
    Abstract: A drive control module of a solid-state drive (SSD) includes a first module that receives host commands from one of a host command buffer and a drive interface of the SSD, converts the host commands to stage commands, and determines whether to store the stage commands in a stage slot of a staging memory or leave the stage slot empty. A second module transfers data between a buffer and a flash memory based on the stage commands. The flash memory comprises flash arrays. A third module detects a first empty stage of one of the flash arrays and based on an empty stage timer value triggers at least one of an end of the first empty stage, a start of an at least partially full stage of the one of the flash arrays, or a start of a second empty stage of the one of the flash arrays.
    Type: Grant
    Filed: September 16, 2010
    Date of Patent: March 26, 2013
    Assignee: Marvell International Ltd.
    Inventors: Jason Adler, Lau Nguyen, Perry Neos
  • Publication number: 20130073783
    Abstract: A method uses a record of I/O priorities in a determination of a storage medium of a hybrid storage system in which to store a file. The method maintains the record of I/O priorities by assigning an I/O temperature value to each request for access to the file based upon an I/O priority level of the process making the request. The method marks the file as hot if the file temperature value is greater than a threshold value. The method stores files marked as hot in a lower latency storage medium of the hybrid storage system.
    Type: Application
    Filed: September 15, 2011
    Publication date: March 21, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Mingming Cao, Ben Chociej, Scott R. Conor, Steven M. French, Matthew R. Lupfer, Steven L. Pratt
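A compact sketch of the I/O-temperature record described above: each access contributes a temperature derived from the requesting process's I/O priority, and files crossing a threshold are marked hot and placed on the lower-latency medium. The weights and threshold are illustrative assumptions:

```python
from collections import defaultdict

PRIORITY_TEMPERATURE = {"realtime": 5, "normal": 2, "idle": 1}   # assumed weights
HOT_THRESHOLD = 10                                               # assumed threshold

temperature = defaultdict(int)

def record_access(path, io_priority):
    temperature[path] += PRIORITY_TEMPERATURE[io_priority]

def placement(path):
    # Hot files go to the lower-latency medium of the hybrid system.
    return "ssd" if temperature[path] > HOT_THRESHOLD else "hdd"

for _ in range(3):
    record_access("/var/db/index", "realtime")    # 15 -> hot
record_access("/srv/archive.tar", "idle")         # 1  -> cold
print(placement("/var/db/index"), placement("/srv/archive.tar"))   # ssd hdd
```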
  • Patent number: 8397028
    Abstract: Systems, methods embodied on computer-readable media, and other embodiments associated with index entry eviction are described. One example method includes selecting an index entry for eviction from a bucket of index entries based on a time value, a utility value, and a precedence value. A precedence value may be a value associated with an index entry that is static over time. Additionally, results of a function that compares two precedence values may be static over time. The example method may also include providing an index entry identifier that identifies the index entry.
    Type: Grant
    Filed: June 15, 2010
    Date of Patent: March 12, 2013
    Inventor: Stephen Spackman
  • Patent number: 8397038
    Abstract: A method and system is provided for initializing files such as, for example and without limitation, pre-allocated files or raw device mapping (RDM) files, by delaying initializing file blocks. In accordance with one or more embodiments of the present invention, file blocks are associated with corresponding indicators to track un-initialized blocks.
    Type: Grant
    Filed: March 18, 2008
    Date of Patent: March 12, 2013
    Assignee: VMware, Inc.
    Inventors: Daniel J. Scales, Satyam B. Vaghani
  • Patent number: 8392666
    Abstract: An apparatus detects a load-store collision within a microprocessor between a load operation and an older store operation each of which accesses data in the same cache line. Load and store byte masks specify which bytes contain the data specified by the load and store operation within a word of the cache line in which the load and store data begins, respectively. Load and store word masks specify which words contain the data specified by the load and store operations within the cache line, respectively. Combinatorial logic uses the load and store byte masks to detect the load-store collision if the data specified by the load and store operations begin in the same cache line word, and uses the load and store word masks to detect the load-store collision if the data specified by the load and store operations do not begin in the same cache line word.
    Type: Grant
    Filed: October 20, 2009
    Date of Patent: March 5, 2013
    Assignee: VIA Technologies, Inc.
    Inventors: Rodney E. Hooker, Colin Eddy
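A worked sketch of the byte-mask/word-mask collision test described above: compare byte masks when the load and store data begin in the same word of the cache line, otherwise compare word masks. The 64-byte line split into 4-byte words is an assumption:

```python
WORD_SIZE = 4
# A (line_offset, length) pair describes each access within a 64-byte cache line.

def masks(offset, length):
    first_word = offset // WORD_SIZE
    byte_mask = 0
    for b in range(offset, min(offset + length, (first_word + 1) * WORD_SIZE)):
        byte_mask |= 1 << (b % WORD_SIZE)        # bytes within the first word
    word_mask = 0
    for w in range(first_word, (offset + length - 1) // WORD_SIZE + 1):
        word_mask |= 1 << w                      # words touched in the line
    return first_word, byte_mask, word_mask

def collides(load, store):
    lw, lbytes, lwords = masks(*load)
    sw, sbytes, swords = masks(*store)
    if lw == sw:
        return bool(lbytes & sbytes)   # same starting word: use byte masks
    return bool(lwords & swords)       # different starting words: use word masks

print(collides((8, 4), (10, 2)))   # True: overlap inside the same word
print(collides((0, 4), (4, 4)))    # False: disjoint words
print(collides((0, 8), (4, 4)))    # True: word masks overlap
```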
  • Patent number: 8386728
    Abstract: Methods and systems for prioritizing a crawl are described. One aspect of the invention includes a method for identifying a plurality of storage locations each comprising a plurality of articles, ranking the plurality of storage locations based at least in part on events associated with the plurality of articles, and crawling the storage locations based at least in part on the ranking. Another aspect of the invention includes identifying a plurality of storage locations each comprising a plurality of articles, identifying a plurality of types of the plurality of articles, ranking the plurality of storage locations based at least in part on the plurality of types of the plurality of articles; and crawling the storage locations based at least in part on the ranking.
    Type: Grant
    Filed: September 14, 2004
    Date of Patent: February 26, 2013
    Assignee: Google Inc.
    Inventors: Mihai Florin Ionescu, David Marmaros
  • Patent number: 8380942
    Abstract: Disclosed are various embodiments including systems and methods relating to the management of heat load in a data center. An access frequency of a data object is estimated in a computing device. The data object is stored in at least one of a plurality of storage units. Each of the storage units comprises a plurality of solid-state storage devices in a proximal arrangement. The storing is based at least in part on the access frequency of the data object and a density of the arrangement of the solid-state storage devices within the at least one of the storage units.
    Type: Grant
    Filed: May 29, 2009
    Date of Patent: February 19, 2013
    Assignee: Amazon Technologies, Inc.
    Inventors: Matthew T. Corddry, Benjamin Earle McGough, Michael W. Schrempp
  • Patent number: 8380959
    Abstract: A technique for managing memory allocation in an electronic device is provided. In one embodiment, a method includes loading a memory allocation strategy for an application executed by a processor of a device, and requesting memory for the application from various memory locations in accordance with the memory allocation strategy. In one embodiment, the device includes multiple sets of contiguous memory blocks and a memory heap, memory may be requested from at least one of these memory locations, and memory may then be allocated to the application in response to the request. In some embodiments, the memory allocation strategy may be stored in the device prior to execution of the application. Various other methods, devices, and manufactures are also provided.
    Type: Grant
    Filed: September 5, 2008
    Date of Patent: February 19, 2013
    Assignee: Apple Inc.
    Inventors: Aram Lindahl, Jesse W. Boettcher, David J. Rempel, Pulkit Desai, Vincent Wong
  • Patent number: 8370597
    Abstract: Technologies are described for implementing a migration mechanism in a storage system containing multiple tiers of storage with each tier having different cost and performance parameters. Access statistics can be collected for each territory, or storage entity, within the storage system. Data that is accessed more frequently can be migrated toward higher performance storage tiers while data that is accessed less frequently can be migrated towards lower performance storage tiers. The placement of data may be governed first by the promotion of territories with higher access frequency to higher tiers. Secondly, data migration may be governed by demoting territories to lower tiers to create room for the promotion of more eligible territories from the next lower tier. In instances where space is not available on the next lower tier, further demotion may take place to an even lower tier in order to make space for the first demotion.
    Type: Grant
    Filed: April 11, 2008
    Date of Patent: February 5, 2013
    Assignee: American Megatrends, Inc.
    Inventors: Paresh Chatterjee, Ajit Narayanan, Loganathan Ranganathan, Sharon Enoch
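A loose sketch of the promote/demote flow described above: the hottest territory in each lower tier is considered for promotion, and the coldest resident of the tier above is demoted when space is needed. Tier capacities, the access counter, and single-candidate passes are assumptions; cascading demotion below the next lower tier is omitted for brevity:

```python
class TieredStore:
    def __init__(self, tier_capacities):
        self.caps = tier_capacities
        self.tiers = [dict() for _ in tier_capacities]   # territory -> access count

    def record_access(self, territory):
        for tier in self.tiers:
            if territory in tier:
                tier[territory] += 1
                return
        self.tiers[-1][territory] = 1        # new data starts in the lowest tier

    def migrate(self):
        for level in range(1, len(self.tiers)):
            lower, upper = self.tiers[level], self.tiers[level - 1]
            for territory in sorted(lower, key=lower.get, reverse=True):
                if not upper or lower[territory] > min(upper.values()):
                    if len(upper) >= self.caps[level - 1]:
                        coldest = min(upper, key=upper.get)
                        lower[coldest] = upper.pop(coldest)   # demote to make room
                    upper[territory] = lower.pop(territory)   # promote the hot territory
                break   # one candidate per tier per pass keeps the sketch simple

store = TieredStore([1, 2, 4])
for territory, count in [("t1", 5), ("t2", 3), ("t3", 1)]:
    for _ in range(count):
        store.record_access(territory)
store.migrate(); store.migrate()   # repeated passes cascade promotions upward
print(store.tiers)                 # [{'t1': 5}, {'t2': 3}, {'t3': 1}]
```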
  • Patent number: 8364917
    Abstract: A method for replicating a deduplicated storage system is disclosed. A stream of data is stored on an originator deduplicating system by storing a plurality of deduplicated segments and information on how to reconstruct the stream of data. The originator deduplicating system is replicated on a replica system by sending a copy of the plurality of deduplicated segments and information on how to reconstruct the stream of data to the replica system. A first portion of the deduplicated segments stored on the originator deduplicating system that is corrupted is identified. A copy of the first portion of the deduplicated segments is requested to be sent by the replica system to the originator deduplicating system.
    Type: Grant
    Filed: April 2, 2012
    Date of Patent: January 29, 2013
    Assignee: EMC Corporation
    Inventors: Allan J. Bricker, Richard Johnsson, Greg Wade
  • Patent number: 8359449
    Abstract: A method manages memory paging operations. Responsive to a request to page out a memory page from a shared memory pool, the method identifies whether a physical space within one of a number of paging space devices has been allocated for the memory page. If physical space within the paging space device has not been allocated for the memory page, a page priority indicator for the memory page is identified. The memory page is then allocated to one of a number of memory pools within one of the number of paging space devices. The memory page is allocated to one of the memory pools according to the page priority indicator of the memory page. The memory page is then written to the allocated memory pool.
    Type: Grant
    Filed: December 17, 2009
    Date of Patent: January 22, 2013
    Assignee: International Business Machines Corporation
    Inventors: Mathew Accapadi, Dirk Michel, Bret Ronald Olszewski
  • Patent number: 8359446
    Abstract: In a method for processing data using triple buffering, a data block to be processed is written to a memory area in a first interval of time. The data block is processed in the same memory area (A, B, C) in a second interval of time. The processed data block is returned from the same memory area in a third interval of time.
    Type: Grant
    Filed: November 9, 2009
    Date of Patent: January 22, 2013
    Assignee: Thomson Licensing
    Inventor: Ingo Huetter
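A tiny sketch of triple buffering with three memory areas rotating roles each interval, per the abstract above: the block written in interval t is processed in the same area in interval t+1 and returned in interval t+2. The pipeline padding and the "processing" step are illustrative:

```python
import itertools

AREAS = ["A", "B", "C"]

def triple_buffer(blocks):
    areas = {name: None for name in AREAS}
    results = []
    # Two trailing empty intervals let the last blocks drain from the pipeline.
    for t, block in enumerate(itertools.chain(blocks, [None, None])):
        return_area = AREAS[(t - 2) % 3]   # holds the block written 2 intervals ago
        process_area = AREAS[(t - 1) % 3]  # holds the block written 1 interval ago
        write_area = AREAS[t % 3]          # receives the incoming block

        if areas[return_area] is not None:            # third interval: return
            results.append(areas[return_area])
            areas[return_area] = None
        if areas[process_area] is not None:           # second interval: process in place
            areas[process_area] = areas[process_area].upper()
        if block is not None:                         # first interval: write
            areas[write_area] = block
    return results

print(triple_buffer(["x1", "y2", "z3"]))   # ['X1', 'Y2', 'Z3']
```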
  • Patent number: 8356137
    Abstract: Systems and methods are disclosed for partitioning data for storage in a non-volatile memory (“NVM”), such as flash memory. In some embodiments, a priority may be assigned to data being stored, and the data may be logically partitioned based on the priority. For example, a file system may identify a logical address within a first predetermined range for higher priority data and within a second predetermined range for lower priority data, such as by using a union file system. Using the logical address, a NVM driver can determine the priority of data being stored and can process (e.g., encode) the data based on the priority. The NVM driver can store an identifier in the NVM along with the data, and the identifier can indicate the processing techniques used on the associated data.
    Type: Grant
    Filed: February 26, 2010
    Date of Patent: January 15, 2013
    Assignee: Apple Inc.
    Inventors: Daniel J. Post, Matthew Byom, Vadim Khmelnitsky, Nir J. Wakrat, Kenneth Herman
  • Publication number: 20130013872
    Abstract: A memory controller to provide memory access services in an adaptive computing engine is provided. The controller comprises: a network interface configured to receive a memory request from a programmable network; and a memory interface configured to access a memory to fulfill the memory request from the programmable network, wherein the memory interface receives and provides data for the memory request to the network interface, the network interface configured to send data to and receive data from the programmable network.
    Type: Application
    Filed: September 11, 2012
    Publication date: January 10, 2013
    Applicant: QST Holdings LLC
    Inventors: Frederick Curtis Furtek, Paul L. Master
  • Patent number: RE44402
    Abstract: The present invention provides an improved apparatus and method for the receipt of high-speed sequential data streams. It utilizes the concept of banked memories to reduce the required speed and size of the input buffers used to receive the data streams. This allows the device to employ large, relatively slow memory elements, thereby permitting large amounts of sequential data to be stored by the receiving device. Using control information that was written as the data was being stored in the memory banks, a reordering element is later able to retrieve the data elements from the plurality of memory banks, in an order that is different from that in which the stream was received, and to reassemble the data stream into the original sequence.
    Type: Grant
    Filed: November 10, 2010
    Date of Patent: July 30, 2013
    Assignee: Jinsalas Solutions, LLC
    Inventors: Karl Meier, Nathan Dohm