With Special Data Handling, e.g., Priority of Data or Instructions, Pinning, Errors, etc. (EPO) Patents (Class 711/E12.075)
-
Patent number: 12086095. Abstract: Technologies for enabling downstream components to update upstream states in streaming pipelines are described. In one method, a first computing device receives a remote promise object assigned to a first serialized object from a second computing device in the data center over a network fabric. The remote promise object uniquely identifies a first contiguous block of the first serialized object stored in a memory associated with the second computing device. The method obtains contents of the first contiguous block and sends contents of a second serialized object back to the second computing device to release the remote promise object. Type: Grant. Filed: July 11, 2022. Date of Patent: September 10, 2024. Assignee: Nvidia Corporation. Inventors: Ryan Olson, Michael Demoret, Bartley Richardson.
-
Patent number: 12079517. Abstract: Methods, systems, and apparatuses include receiving a write command including user data. The write command is directed to a portion of memory including a first block and a second block. A buffer is allocated for executing the write command to the first block. The buffer includes multiple buffer decks and the buffer holds the user data written to the first block. User data is programmed into the first block to a threshold percentage. The threshold percentage is less than one hundred percent of the first block. A buffer deck is invalidated in response to programming the first block to the threshold percentage. The buffer deck is reallocated to the second block for programming the user data into the second block. The buffer deck holds user data written to the second block. Type: Grant. Filed: July 21, 2022. Date of Patent: September 3, 2024. Assignee: Micron Technology, Inc. Inventors: Kishore Kumar Muchherla, Peter Feeley, Jiangli Zhu, Fangfang Zhu, Akira Goda, Lakshmi Kalpana Vakati, Vivek Shivhare, Dave Scott Ebsen, Sanjay Subbarao.
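The threshold-driven handoff this abstract describes can be reduced to a single predicate. This is a minimal sketch under assumed names; the 75% default and the page/capacity model are illustrative, not taken from the patent.

```python
def deck_reallocation_point(pages_programmed: int, block_capacity: int,
                            threshold: float = 0.75) -> bool:
    """Return True once the first block has been programmed to the
    threshold percentage: the point at which one buffer deck may be
    invalidated and reallocated to the second block, while the other
    decks keep holding the first block's data."""
    return pages_programmed / block_capacity >= threshold
```

A controller loop would poll this after each programmed page and, on the first True, retarget one deck at the second block.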
-
Patent number: 12045176. Abstract: Embodiments are directed to memory protection with hidden inline metadata. An embodiment of an apparatus includes a plurality of processor cores; a computer memory for the storage of data; and cache memory communicatively coupled with one or more of the processor cores, wherein one or more processor cores of the plurality of processor cores are to implant hidden inline metadata in one or more cachelines for the cache memory, the hidden inline metadata being hidden at a linear address level. Type: Grant. Filed: April 18, 2023. Date of Patent: July 23, 2024. Assignee: Intel Corporation. Inventors: David M. Durham, Ron Gabor.
-
Patent number: 12007905. Abstract: A hinter data processing apparatus is provided with processing circuitry that determines that an execution context to be executed on a hintee data processing apparatus will require a virtual-to-physical address translation. Hint circuitry transmits a hint to a hintee data processing apparatus to prefetch a virtual-to-physical address translation in respect of an execution context of the further data processing apparatus. A hintee data processing apparatus is also provided with receiving circuitry that receives a hint from a hinter data processing apparatus to prefetch a virtual-to-physical address translation in respect of an execution context of the further data processing apparatus. Processing circuitry determines whether to follow the hint and, in response to determining that the hint is to be followed, causes the virtual-to-physical address translation to be prefetched for the execution context of the data processing apparatus. In both cases, the hint comprises an identifier of the execution context. Type: Grant. Filed: September 24, 2021. Date of Patent: June 11, 2024. Assignee: Arm Limited. Inventors: Jonathan Curtis Beard, Luis Emilio Pena.
-
Patent number: 11966338. Abstract: This disclosure provides a method, a computing system, and a computer program product for managing prefetching of pages in a database system. The method comprises obtaining shared information associated with page access, wherein the shared information associated with the page access includes information associated with the page access from a plurality of computing nodes. The method further comprises determining whether to prefetch a number of pages into a global buffer pool based at least on the shared information associated with the page access using a sequential prefetching method. Type: Grant. Filed: July 19, 2022. Date of Patent: April 23, 2024. Assignee: International Business Machines Corporation. Inventors: Sheng Yan Sun, Xiaobo Wang, Shuo Li, Chun Lei Xu.
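A sequential-prefetch decision of the kind this abstract describes can be sketched as a run-detector over the merged access history. All names, the window size, and the prefetch count are illustrative assumptions, not details from the patent.

```python
def should_prefetch(shared_accesses, window=4):
    """Sequential-prefetch test: the last `window` page numbers, merged
    from every computing node's shared access information, must form a
    strictly ascending run of consecutive pages."""
    recent = shared_accesses[-window:]
    if len(recent) < window:
        return False
    return all(b - a == 1 for a, b in zip(recent, recent[1:]))

def pages_to_prefetch(shared_accesses, count=3, window=4):
    """Pages to pull into the global buffer pool, or [] when the shared
    access pattern does not look sequential."""
    if not should_prefetch(shared_accesses, window):
        return []
    last = shared_accesses[-1]
    return [last + i for i in range(1, count + 1)]
```

Merging access information from all nodes before the run test is the point of the shared view: a run split across two nodes still looks sequential globally.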
-
Patent number: 11907722. Abstract: Aspects of the present disclosure relate to an apparatus comprising processing circuitry, prefetch circuitry and prefetch metadata storage comprising a plurality of entries. Metadata items, each associated with a given stream of instructions, are stored in the prefetch metadata storage. Responsive to a given entry of the plurality of entries being associated with the given stream associated with a given metadata item, the given entry is updated. Responsive to no entry of the plurality of entries being associated with the given stream associated with a given metadata item, an entry is selected according to a default replacement policy, the given stream is allocated thereto, and the selected entry is updated based on the given metadata item. Responsive to a switch condition being met, the default replacement policy is switched to an alternative replacement policy comprising locking one or more entries by preventing allocation of streams to the locked entries. Type: Grant. Filed: April 20, 2022. Date of Patent: February 20, 2024. Assignee: Arm Limited. Inventors: Luca Maroncelli, Harvin Iriawan, Peter Raphael Eid, Cédric Denis Robert Airaud.
-
Patent number: 11687457. Abstract: A system includes a non-coherent component; a coherent, non-caching component; a coherent, caching component; and a level two (L2) cache subsystem coupled to the non-coherent component, the coherent, non-caching component, and the coherent, caching component. The L2 cache subsystem includes an L2 cache; a shadow level one (L1) main cache; a shadow L1 victim cache; and an L2 controller. The L2 controller is configured to receive and process a first transaction from the non-coherent component; receive and process a second transaction from the coherent, non-caching component; and receive and process a third transaction from the coherent, caching component. Type: Grant. Filed: August 30, 2021. Date of Patent: June 27, 2023. Assignee: Texas Instruments Incorporated. Inventors: Abhijeet Ashok Chachad, David Matthew Thompson, Naveen Bhoria.
-
Patent number: 11599417. Abstract: An error correction system is disclosed. The error correction system is applied to a storage system. The error correction system generates X first operation codes, Y second operation codes and a third operation code based on the storage system. The error correction system includes an error state determining circuit and M decoding circuits. The error state determining circuit is configured to identify a current error state. When a plurality of pieces of data have a 1-bit error, the M decoding circuits are configured to execute decoding processing on the X first operation codes and the Y second operation codes, to obtain whether there is erroneous data in the bytes corresponding to the decoding circuits and locate a bit to which the erroneous data belongs. Type: Grant. Filed: January 24, 2022. Date of Patent: March 7, 2023. Assignee: Changxin Memory Technologies, Inc. Inventors: Kangling Ji, Jun He, Yuanyuan Gong, Zhan Ying.
-
Patent number: 11528039. Abstract: Systems and methods are provided for performing error recovery using LLRs generated from multi-read operations. A method may comprise selecting a set of decoding factors for a multi-read operation to read a non-volatile storage device multiple times. The set of decoding factors may include a total number of reads, an aggregation mode for aggregating read results of multiple reads, and whether the read results include soft data. The method may further comprise issuing a command to the non-volatile storage device to read user data according to the set of decoding factors, generating a plurality of Log-Likelihood Ratio (LLR) values using a mapping engine from a pre-selected set of LLR value magnitudes based on the set of decoding factors, obtaining an aggregated read result in accordance with the aggregation mode and obtaining an LLR value from the plurality of LLR values using the aggregated read result as an index. Type: Grant. Filed: March 17, 2021. Date of Patent: December 13, 2022. Assignee: InnoGrit Technologies Co., Ltd. Inventors: Han Zhang, Chenrong Xiong, Jie Chen.
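The index-into-an-LLR-table step is the crux of the abstract above. The sketch below assumes a simple counting aggregation mode and a linear magnitude scale; the actual magnitude set, aggregation modes, and mapping-engine behavior are not specified here and these values are illustrative only.

```python
def llr_table(n_reads, max_mag=9, min_mag=1):
    """Pre-selected LLR values indexed by the ones-count over n_reads
    reads of a bit: unanimous zeros give the strongest positive LLR,
    unanimous ones the strongest negative, and split votes give
    smaller magnitudes."""
    table = []
    for ones in range(n_reads + 1):
        zeros = n_reads - ones
        sign = 1 if zeros >= ones else -1
        mag = min_mag + (max_mag - min_mag) * abs(zeros - ones) // n_reads
        table.append(sign * mag)
    return table

def soft_llrs(reads, table):
    """Aggregate multiple hard reads by counting ones per bit position
    (the aggregation mode assumed here), then use each count as an
    index into the pre-selected LLR table."""
    counts = [sum(bits) for bits in zip(*reads)]
    return [table[c] for c in counts]
```

The resulting soft values would feed an LDPC or similar soft decoder during error recovery.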
-
Patent number: 11520700. Abstract: A holistic view of cache class of service (CLOS) includes an allocation of processor cache resources to a plurality of CLOS. The allocation of processor cache resources includes allocation of cache ways for an n-way set associative cache. Examples include monitoring usage of the plurality of CLOS to determine processor cache resource usage and to report the processor cache resource usage. Type: Grant. Filed: June 29, 2018. Date of Patent: December 6, 2022. Assignee: Intel Corporation. Inventors: Malini K. Bhandaru, Iosif Gasparakis, Sunku Ranganath, Liyong Qiao, Rui Zang, Dakshina Ilangovan, Shaohe Feng, Edwin Verplanke, Priya Autee, Lin A. Yang.
-
Patent number: 11513723. Abstract: Aspects of a storage device including a memory and a controller are provided which optimize read look ahead (RLA) performance based on zone configurations or stored metadata. The controller stores in memory information previously received from a host, including a zone configuration or other information from which metadata associated with subsequent data to be pre-fetched in RLA may be determined. When the stored information includes the zone configuration, the controller reads data from memory in response to a host command and limits pre-fetching of subsequent data from the memory based on the zone configuration. When the stored information includes metadata, the controller reads the metadata associated with subsequent data from the memory, and limits pre-fetching of the subsequent data based on the metadata. Thus, resources of the storage device that are typically used for RLA may instead be used for other operations, improving the efficiency of the storage device. Type: Grant. Filed: February 19, 2021. Date of Patent: November 29, 2022. Assignee: Western Digital Technologies, Inc. Inventor: Ramanathan Muthiah.
-
Patent number: 11487874. Abstract: Described herein are systems and methods for prime and probe attack mitigation. For example, some methods include, responsive to a cache miss caused by a process, checking whether a priority level of the process satisfies a first priority requirement of a first cache block of a cache with multiple ways including cache blocks associated with respective priority requirements; responsive to the priority level satisfying the first priority requirement, loading the first cache block; and, responsive to the priority level satisfying the first priority requirement, updating the first priority requirement to be equal to the priority level of the process. Type: Grant. Filed: August 25, 2020. Date of Patent: November 1, 2022. Assignee: Marvell Asia Pte, Ltd. Inventor: Shubhendu Sekhar Mukherjee.
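The fill-gated-by-priority check above can be sketched in a few lines. This is a toy model under assumed names (`CacheBlock`, numeric priority levels); the patent's hardware mechanism is more involved.

```python
class CacheBlock:
    def __init__(self, priority_req=0, tag=None):
        self.priority_req = priority_req  # minimum priority needed to fill
        self.tag = tag

def try_fill_on_miss(block, process_priority, tag):
    """On a miss, fill the block only if the process's priority level
    satisfies the block's priority requirement; on success the
    requirement is raised to the filling process's priority, so a
    lower-priority prime-and-probe attacker cannot later evict it."""
    if process_priority < block.priority_req:
        return False
    block.tag = tag
    block.priority_req = process_priority
    return True
```

The mitigation works because the attacker's probe fills fail against blocks primed by higher-priority victims, denying the attacker its eviction signal.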
-
Patent number: 11461239. Abstract: A method and apparatus for caching a data block are provided. The method includes: obtaining, from a terminal, an access request for requesting access to a first data block; determining that the first data block is missed in a cache space of a storage system; detecting whether a second data block satisfies a lazy condition, the second data block being a candidate elimination block in the cache space and the lazy condition being a condition for determining whether to delay replacing the second data block from the cache space according to a re-access probability; determining that the second data block satisfies the lazy condition; and accessing the first data block from a storage space of the storage system and skipping replacing the second data block from the cache space. Type: Grant. Filed: September 30, 2020. Date of Patent: October 4, 2022. Assignees: Huazhong University of Science and Technology, Tencent Technology (Shenzhen) Company Limited. Inventors: Ke Zhou, Yu Zhang, Hua Wang, Yong Guang Ji, Bin Cheng.
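The lazy-condition flow above is sketched below. The threshold value, the list-based cache, and the `reaccess_prob` callback are illustrative assumptions; the patent does not fix how the re-access probability is computed.

```python
def access_on_miss(requested, victim, cache, storage, reaccess_prob,
                   lazy_threshold=0.5):
    """Handle a miss for `requested`: when the candidate elimination
    block's re-access probability exceeds the threshold (the lazy
    condition holds), serve the data straight from storage and leave
    the cache untouched; otherwise replace the victim as usual."""
    if reaccess_prob(victim) > lazy_threshold:
        return storage[requested], cache          # replacement skipped
    new_cache = [blk for blk in cache if blk != victim] + [requested]
    return storage[requested], new_cache
```

Skipping the replacement keeps likely-to-return blocks resident at the cost of serving the current request from slower storage.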
-
Patent number: 11263150. Abstract: A method, apparatus and product for utilizing address translation structures for testing address translation cache. The method comprises: obtaining a first address translation structure that comprises multiple levels, including a first top level which connects a sub-structure of the first address translation structure using pointers thereto; determining, based on the first address translation structure, a second address translation structure, wherein the second address translation structure comprises a second top level that is determined based on the first top level, wherein the second top level connects the sub-structure of the first address translation structure; executing a test so as to verify operation of an address translation cache of a target processor at least by: adding a plurality of cache lines to the address translation cache, wherein said adding is based on the address translation structures; and verifying the operation of the address translation cache using one or more memory access operations. Type: Grant. Filed: February 11, 2020. Date of Patent: March 1, 2022. Assignee: International Business Machines Corporation. Inventors: Hillel Mendelson, Tom Kolan, Vitali Sokhin.
-
Patent number: 10620962. Abstract: An apparatus and method are provided for using predicted result values. The apparatus has processing circuitry for executing a sequence of instructions, and value prediction storage that comprises a plurality of entries, where each entry is used to identify a predicted result value for an instruction allocated to that entry. Dispatch circuitry maintains a record of pending instructions awaiting execution by the processing circuitry, and selects pending instructions from the record for dispatch to the processing circuitry for execution. The dispatch circuitry is arranged to enable at least one pending instruction to be speculatively executed by the processing circuitry using as a source operand a predicted result value provided by the value prediction storage. Allocation circuitry is arranged to apply a default allocation policy to identify a first instruction to be allocated an entry in the value prediction storage. Type: Grant. Filed: July 2, 2018. Date of Patent: April 14, 2020. Assignee: Arm Limited. Inventors: Vladimir Vasekin, David Michael Bull, Alexei Fedorov.
-
Patent number: 10536274. Abstract: This disclosure is directed to cryptographic protection for trusted operating systems. In general, a device may comprise, for example, at least processing circuitry and memory circuitry. The device may be virtualized in that the processing circuitry may load virtual machines (VMs) and a virtual machine manager (VMM) into the memory circuitry during operation. At least one of the VMs may operate as a trusted execution environment (TEE) including a trusted operating system (TOS). The processing circuitry may comprise encryption circuitry to cryptographically protect the TOS. For example, the VMM may determine a first memory range in which the TOS will be loaded and store data regarding the first memory range in a register within the encryption circuitry. The register configures the encryption circuitry to cryptographically protect the TOS. Type: Grant. Filed: March 31, 2016. Date of Patent: January 14, 2020. Assignee: Intel Corporation. Inventors: Alpa T. Narendra Trivedi, Siddhartha Chhabra, David M. Durham.
-
Patent number: 9928117. Abstract: A computer system includes a hardware synchronization component (HSC). Multiple concurrent threads of execution issue instructions to update the state of the HSC. Multiple threads may update the state in the same clock cycle and a thread does not need to receive control of the HSC prior to updating its state. Instructions referencing the state received during the same clock cycle are aggregated and the state is updated according to the number of the instructions. The state is evaluated with respect to a threshold condition. If it is met, then the HSC outputs an event to a processor. The processor then identifies a thread impacted by the event and takes a predetermined action based on the event (e.g. blocking, branching, unblocking of the thread). Type: Grant. Filed: December 11, 2015. Date of Patent: March 27, 2018. Assignee: Vivante Corporation. Inventor: Mankit Lo.
-
Patent number: 9400656. Abstract: Embodiments include a method for chaining data in an exposed-pipeline processing element. The method includes separating a multiple instruction word into a first sub-instruction and a second sub-instruction, receiving the first sub-instruction and the second sub-instruction in the exposed-pipeline processing element. The method also includes issuing the first sub-instruction at a first time, issuing the second sub-instruction at a second time different than the first time, the second time being offset to account for a dependency of the second sub-instruction on a first result from the first sub-instruction, the first pipeline performing the first sub-instruction at a first clock cycle and communicating the first result from performing the first sub-instruction to a chaining bus coupled to the first pipeline and a second pipeline, the communicating at a second clock cycle subsequent to the first clock cycle that corresponds to a total number of latch pipeline stages in the first pipeline. Type: Grant. Filed: August 14, 2013. Date of Patent: July 26, 2016. Assignee: International Business Machines Corporation. Inventors: Thomas W. Fox, Bruce M. Fleischer, Hans M. Jacobson, Ravi Nair.
-
Patent number: 8996824. Abstract: A method for controlling memory refresh operations in dynamic random access memories. The method includes determining a count of deferred memory refresh operations for a first memory rank. Responsive to the count approaching a high priority threshold, issuing an early high priority refresh notification for the first memory rank, which indicates the pre-determined time for performing a high priority memory refresh operation at the first memory rank. Responsive to the early high priority refresh notification, the behavior of a read reorder queue is dynamically modified to give priority scheduling to at least one read command targeting the first memory rank, and one or more of the at least one read command is executed on the first memory rank according to the priority scheduling. Priority scheduling removes these commands from the re-order queue before the refresh operation is initiated at the first memory rank. Type: Grant. Filed: February 28, 2013. Date of Patent: March 31, 2015. Assignee: International Business Machines Corporation. Inventors: Mark A. Brittain, John S. Dodson, Stephen Powell, Eric E. Retter, Jeffrey A. Stuecheli.
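The queue-biasing step above amounts to a stable partition of the read re-order queue. The dict-based command representation below is an assumption for illustration; real controllers operate on bank/rank command slots.

```python
def reorder_for_refresh(read_queue, urgent_rank):
    """Bias the read re-order queue ahead of a high-priority refresh:
    reads targeting the rank about to be refreshed move to the front
    (keeping their relative order), so they drain before the refresh
    stalls that rank."""
    urgent = [cmd for cmd in read_queue if cmd["rank"] == urgent_rank]
    others = [cmd for cmd in read_queue if cmd["rank"] != urgent_rank]
    return urgent + others
```

A stable partition (rather than a sort) preserves whatever ordering the scheduler had already chosen within each group.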
-
Patent number: 8914591. Abstract: An information processing apparatus processes data to be processed while accessing data to be processed that is stored in a memory or an HDD. The information processing apparatus determines the process content and calculates the access number to the HDD based on the determined process content and the content of data to be processed. The information processing apparatus also decides to store data to be processed in the memory when the access number is more than or equal to a threshold value. The information processing apparatus decides to store data to be processed in the HDD when the access number is less than the threshold value. Type: Grant. Filed: February 17, 2012. Date of Patent: December 16, 2014. Assignee: Canon Kabushiki Kaisha. Inventor: Hiromasa Kawasaki.
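The placement rule above is a simple threshold comparison. The per-process access-cost table below is a made-up stand-in for the patent's calculation from the determined process content; only the comparison against the threshold is taken from the abstract.

```python
def choose_placement(process_content, data_size, threshold=100):
    """Estimate the number of HDD accesses from the process content and
    data size, then place the data in memory when the estimate meets
    the threshold and on the HDD otherwise. The cost model here is an
    illustrative assumption."""
    accesses_per_unit = {"rotate": 4, "resize": 2, "copy": 1}
    estimated = accesses_per_unit.get(process_content, 1) * data_size
    return "memory" if estimated >= threshold else "hdd"
```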
-
Patent number: 8909874. Abstract: A memory system and data processing system for controlling memory refresh operations in dynamic random access memories. The memory controller comprises logic that: tracks a time remaining before a scheduled time for performing a high priority, high latency operation at a first memory rank of the memory system; responsive to the time remaining reaching a pre-established early notification time before the scheduled time for performing the high priority, high latency operation, biases the re-order queue containing memory access operations targeting the plurality of ranks to prioritize scheduling of any first memory access operations that target the first memory rank. The logic further: schedules the first memory access operations to the first memory rank for early completion relative to other memory access operations in the re-order queue that target other memory ranks; and performs the high priority, high latency operation at the first memory rank at the scheduled time. Type: Grant. Filed: February 13, 2012. Date of Patent: December 9, 2014. Assignee: International Business Machines Corporation. Inventors: Mark A. Brittain, John S. Dodson, Stephen J. Powell, Eric E. Retter, Jeffrey A. Stuecheli.
-
Patent number: 8856452. Abstract: A method and apparatus for prefetching data from memory for a multicore data processor. A prefetcher issues a plurality of requests to prefetch data from a memory device to a memory cache. Consecutive cache misses are recorded in response to at least two of the plurality of requests. A time between the cache misses is determined and a timing of a further request to prefetch data from the memory device to the memory cache is altered as a function of the determined time between the two cache misses. Type: Grant. Filed: May 31, 2011. Date of Patent: October 7, 2014. Assignee: Illinois Institute of Technology. Inventors: Xian-He Sun, Yong Chen, Huaiyu Zhu.
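One plausible reading of "altered as a function of the determined time" is sketched below: tighten the prefetch cadence when two consecutive misses arrive close together. The halving rule and base delay are illustrative assumptions, not the patent's actual function.

```python
def next_prefetch_delay(miss_times, base_delay=10.0):
    """Shrink the delay before the next prefetch request when two
    consecutive misses arrive close together (the prefetcher is
    running late); otherwise keep the base cadence."""
    if len(miss_times) < 2:
        return base_delay
    gap = miss_times[-1] - miss_times[-2]
    return min(base_delay, gap / 2)
```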
-
Patent number: 8819360. Abstract: A more efficient technique is provided in an information processing apparatus which executes processing using pipelines. An information processing apparatus according to this invention includes a first pipeline, second pipeline, processing unit, and reorder unit. The first pipeline has a plurality of first nodes, and shifts first data held in one first node to a subsequent first node. The second pipeline has a plurality of second nodes respectively corresponding to the first nodes of the first pipeline, and shifts second data held in one second node to a subsequent second node. The processing unit executes data processing using the first data and the second data. The reorder unit holds one of the output second data based on attribute information of the second data output from the second pipeline, and outputs the held second data to the second pipeline. Type: Grant. Filed: June 30, 2011. Date of Patent: August 26, 2014. Assignee: Canon Kabushiki Kaisha. Inventor: Tadayuki Ito.
-
Publication number: 20140095799. Abstract: Systems and methods may provide for determining whether a memory access request is error-tolerant, and routing the memory access request to a reliable memory region if the memory access request is not error-tolerant. Moreover, the memory access request may be routed to an unreliable memory region if the memory access request is error-tolerant. In one example, use of the unreliable memory region enables a reduction in the minimum operating voltage level for a die containing the reliable and unreliable memory regions. Type: Application. Filed: September 29, 2012. Publication date: April 3, 2014. Inventors: Zhen Fang, Shih-Lien Lu, Ravishankar Iyer, Srihari Makineni.
-
Patent number: 8601217. Abstract: A method of inserting cache blocks into a cache queue includes detecting a first cache miss for the cache queue, identifying a storage block receiving an access in response to the cache miss, calculating a first estimated cache miss cost for a first storage container that includes the storage block, calculating an insertion probability for the first storage container based on a mathematical formula of the first estimated cache miss cost, randomly selecting an insertion probability number from a uniform distribution, and inserting, in response to the insertion probability exceeding the insertion probability number, a new cache block corresponding to the storage block into the cache queue. Type: Grant. Filed: January 14, 2011. Date of Patent: December 3, 2013. Assignee: Oracle International Corporation. Inventors: Garret Frederick Swart, David Vengerov.
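The probabilistic insertion above can be sketched directly. The linear cost-to-probability formula is an illustrative stand-in for the patent's "mathematical formula of the first estimated cache miss cost".

```python
import random

def maybe_insert(cache_queue, block, miss_cost, max_cost,
                 rng=random.random):
    """Insert the block with probability proportional to its storage
    container's estimated miss cost: blocks that are expensive to miss
    almost always enter the queue, cheap ones often bypass it."""
    p_insert = miss_cost / max_cost       # stand-in for the patent's formula
    if rng() < p_insert:                  # uniform draw vs. probability
        cache_queue.append(block)
        return True
    return False
```

Injecting `rng` keeps the policy deterministic under test while using `random.random` in production.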
-
Patent number: 8578100. Abstract: A disk drive is disclosed comprising a head actuated over a disk, a volatile semiconductor memory (VSM), and a command queue. A plurality of write commands received from a host are stored in the command queue, and write data for the write commands is stored in the VSM. A flush time needed to flush the write data from the VSM to the disk is computed, and the write data is flushed from the VSM to a non-volatile memory (NVM) in response to the flush time. Type: Grant. Filed: November 8, 2010. Date of Patent: November 5, 2013. Assignee: Western Digital Technologies, Inc. Inventors: Sang Huynh, Ayberk Ozturk.
-
Patent number: 8533395. Abstract: A processor includes a multi-level cache hierarchy where a lock property is associated with a cache line. The cache line retains the lock property and may move back and forth within the cache hierarchy. The cache line may be evicted from the cache hierarchy after the lock property is removed. Type: Grant. Filed: February 24, 2006. Date of Patent: September 10, 2013. Assignee: Micron Technology, Inc. Inventors: Dennis M. O'Connor, Michael W. Morrow, Stephen J. Strazdus.
-
Patent number: 8499117. Abstract: A method for writing and reading data in memory cells comprises the steps of: defining a virtual memory, defining write commands and read commands of data (DT) in the virtual memory, providing a first nonvolatile physical memory zone (A1), providing a second nonvolatile physical memory zone (A2), and, in response to a write command of an initial data, searching for a first erased location in the first memory zone, writing the initial data (DT1a) in the first location (PB1(DPP0)), and writing, in the metadata (DSC0), an information (DS(PB1)) allowing the first location to be found and an information (LPA, DS(PB1)) forming a link between the first location and the location of the data in the virtual memory. Type: Grant. Filed: September 21, 2010. Date of Patent: July 30, 2013. Assignee: STMicroelectronics (Rousset) SAS. Inventor: Hubert Rousseau.
-
Patent number: 8473646. Abstract: Input and output (I/O) operations performed by a data storage device are managed dynamically to balance aspects such as throughput and latency. Sequential read and write requests are sent to a data storage device whereby the corresponding operations are performed without time delay due to extra disk revolutions. In order to minimize latency, particularly for read operations, random read and write requests are held in a queue upstream of an I/O controller of the data storage device until the buffer of the data storage device is empty. The queued requests can be reordered when a higher priority request is received, improving the overall latency for specific requests. An I/O scheduler of a data server is still able to use any appropriate algorithm to order I/O requests, such as by prioritizing reads over writes as long as the writes do not back up in the I/O queue beyond a certain threshold. Type: Grant. Filed: June 21, 2012. Date of Patent: June 25, 2013. Assignee: Amazon Technologies, Inc. Inventors: Tate Andrew Certain, Roland Paterson-Jones, James R. Hamilton.
-
Patent number: 8370580. Abstract: Techniques for directory server integration are disclosed. In one particular exemplary embodiment, the techniques may be realized as a method for directory server integration comprising setting one or more parameters determining a range of permissible expiration times for a plurality of cached directory entries, creating, in electronic storage, a cached directory entry from a directory server, assigning a creation time to the cached directory entry, and assigning at least one random value to the cached directory entry, the random value determining an expiration time for the cached directory entry within the range of permissible expiration times, wherein randomizing the expiration time for the cached directory entry among the range of permissible expiration times for a plurality of cached directory entries reduces an amount of synchronization required between cache memory and the directory server at a point in time. Type: Grant. Filed: March 30, 2012. Date of Patent: February 5, 2013. Assignee: Symantec Corporation. Inventors: Ayman Mobarak, Nathan Moser, Chad Jamart.
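Randomized ("jittered") expiration is a common pattern and the abstract above maps onto it directly. A minimal sketch, with illustrative field names and a dict-based entry:

```python
import random
import time

def make_cache_entry(key, value, min_ttl, max_ttl, rng=None):
    """Create a cached directory entry whose expiration time is drawn
    uniformly from [min_ttl, max_ttl], so entries created together do
    not all expire (and re-synchronize) at the same moment."""
    rng = rng or random.Random()
    created = time.time()
    ttl = rng.uniform(min_ttl, max_ttl)
    return {"key": key, "value": value,
            "created": created, "expires": created + ttl}

def is_expired(entry, now=None):
    """Check an entry against its randomized expiration time."""
    return (now if now is not None else time.time()) >= entry["expires"]
```

Spreading expirations over a window turns one synchronization spike into a trickle of refreshes against the directory server.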
-
Patent number: 8316184. Abstract: Domain-based cache management methods and systems, including domain event based priority demotion ("EPD"). In EPD, priorities of cached data blocks are demoted upon one or more domain events, such as upon encoding of one or more macroblocks of a video frame. New data blocks may be written over lowest priority cached data blocks. New data blocks may initially be assigned a highest priority. Alternatively, or additionally, one or more new data blocks may initially be assigned one of a plurality of higher priorities based on domain-based information, such as a relative position of a requested data block within a video frame, and/or a relative direction associated with a requested data block. Domain-based cache management may be implemented with one or more other cache management techniques, such as least recently used techniques. Domain-based cache management may be implemented in associative caches, including set associative caches and fully associative caches, and may be implemented with indirect indexing. Type: Grant. Filed: June 30, 2008. Date of Patent: November 20, 2012. Assignee: Intel Corporation. Inventors: Zhen Fang, Erik G Hallnor, Nitin B Gupte, Steven Zhang.
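The EPD scheme above can be modeled as a toy fully associative cache. The four-level priority scale and dict storage are illustrative assumptions; the patent also covers set-associative variants and position-based initial priorities not shown here.

```python
class EPDCache:
    """Toy fully associative cache with event-based priority demotion:
    new blocks enter at the highest priority, a domain event (e.g. a
    macroblock finishing encoding) demotes every resident block, and
    the lowest-priority block is the replacement victim."""
    HIGHEST = 3

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = {}                  # tag -> current priority

    def domain_event(self):
        """Demote all cached blocks by one level (floor at 0)."""
        for tag in self.blocks:
            self.blocks[tag] = max(0, self.blocks[tag] - 1)

    def insert(self, tag):
        """Write a new block, evicting a lowest-priority block if full."""
        if len(self.blocks) >= self.capacity:
            victim = min(self.blocks, key=self.blocks.get)
            del self.blocks[victim]
        self.blocks[tag] = self.HIGHEST
```

Unlike LRU, demotion here is driven by progress through the workload's own structure (frames, macroblocks) rather than by raw access recency.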
-
Patent number: 8285952. Abstract: A method of utilizing storage in a storage system comprises prioritizing a plurality of storage areas in the storage system for data recovery with different priorities; and performing data recovery of the storage system at an occurrence of a failure involving one or more of the storage areas in the storage system based on the priorities. Data recovery for one storage area having a higher priority is to occur before data recovery for another storage area having a lower priority in the storage system. In various embodiments, the prioritization is achieved by monitoring the access characteristics, or the priority is specified by the host or management computer based on the usage and/or importance of data stored in the storage system, or the priority is determined by the storage system based on the area assignment/release (i.e., usage) of thin provisioned volumes. Type: Grant. Filed: September 17, 2009. Date of Patent: October 9, 2012. Assignee: Hitachi, Ltd. Inventors: Hiroshi Arakawa, Akira Yamamoto.
-
Patent number: 8239589. Abstract: Input and output (I/O) operations performed by a data storage device are managed dynamically to balance aspects such as throughput and latency. Sequential read and write requests are sent to a data storage device whereby the corresponding operations are performed without time delay due to extra disk revolutions. In order to minimize latency, particularly for read operations, random read and write requests are held in a queue upstream of an I/O controller of the data storage device until the buffer of the data storage device is empty. The queued requests can be reordered when a higher priority request is received, improving the overall latency for specific requests. An I/O scheduler of a data server is still able to use any appropriate algorithm to order I/O requests, such as by prioritizing reads over writes as long as the writes do not back up in the I/O queue beyond a certain threshold. Type: Grant. Filed: March 31, 2010. Date of Patent: August 7, 2012. Assignee: Amazon Technologies, Inc. Inventors: Tate Andrew Certain, Roland Paterson-Jones, James R. Hamilton.
-
Patent number: 8195886
Abstract: A data processing apparatus and method are provided for implementing a replacement scheme for entries of a storage unit. The data processing apparatus has processing circuitry for executing multiple program threads including at least one high priority program thread and at least one lower priority program thread. A storage unit is then shared between the multiple program threads and has multiple entries for storing information for reference by the processing circuitry when executing the program threads. A record is maintained identifying for each entry whether the information stored in that entry is associated with a high priority program thread or a lower priority program thread. Replacement circuitry is then responsive to a predetermined event in order to select a victim entry whose stored information is to be replaced.
Type: Grant
Filed: March 16, 2007
Date of Patent: June 5, 2012
Assignee: ARM Limited
Inventors: Emre Özer, Stuart David Biles
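The victim-selection idea can be illustrated with a per-entry priority record; a hypothetical policy sketch (the patent does not specify this exact rule) that prefers to evict entries owned by lower-priority threads:

```python
def select_victim(entries):
    """Pick the index of a victim entry for replacement.

    `entries` is a list of dicts (oldest first), each carrying a
    'high_priority' flag recording whether the entry belongs to a
    high-priority thread. Entries of lower-priority threads are
    evicted first; if every entry is high priority, fall back to
    evicting the oldest."""
    for i, entry in enumerate(entries):
        if not entry["high_priority"]:
            return i
    return 0  # all entries belong to high-priority threads
```

The key structural element from the abstract is the maintained record (here the `high_priority` flag) consulted at replacement time.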
-
Patent number: 8069309
Abstract: Memory is serviced. In response to an input indicating a serious condition, a service is invoked that is unaffected by the serious condition. By the service, it is determined whether other instructions are available to be executed that are not being affected by the serious condition. By the other instructions, data is copied from a write cache to a nonvolatile memory before the data is lost from the write cache.
Type: Grant
Filed: June 29, 2006
Date of Patent: November 29, 2011
Assignee: EMC Corporation
Inventor: Matthew Long
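The control flow reduces to: on a serious condition, an unaffected service checks for unaffected instructions and, if any exist, uses them to flush the write cache to nonvolatile memory. A toy sketch with invented names, purely to make the sequence concrete:

```python
def handle_serious_condition(write_cache, nonvolatile, unaffected_instructions_available):
    """Invoked by a service that the serious condition does not affect.

    If instructions unaffected by the condition are available, copy
    dirty write-cache data to nonvolatile memory before it is lost;
    return whether the flush happened."""
    if not unaffected_instructions_available:
        return False
    nonvolatile.update(write_cache)  # preserve the cached data
    write_cache.clear()
    return True
```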
-
Patent number: 8037249
Abstract: A method for queuing asynchronous memory accesses includes pinning memory buffers in a managed memory environment, issuing data transfer requests to a peripheral device, each request corresponding to at least one of the pinned memory buffers, and asynchronously accessing at least one of the pinned buffers responsive to the requests.
Type: Grant
Filed: September 27, 2007
Date of Patent: October 11, 2011
Assignee: Cypress Semiconductor Corporation
Inventor: Greg Nalder
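A simplified model of the pin-then-access discipline: buffers must be pinned (so a managed runtime cannot move or collect them) before an asynchronous transfer may touch them. The pool class and its checks are illustrative, not from the patent:

```python
class PinnedBufferPool:
    """Model of pinned buffers serving asynchronous transfers.

    In a real managed environment, pinning would tell the runtime not
    to relocate the buffer; here it is tracked as a simple set so the
    invariant (no async access to an unpinned buffer) can be shown."""
    def __init__(self, count, size):
        self._buffers = [bytearray(size) for _ in range(count)]
        self._pinned = set()

    def pin(self, idx):
        self._pinned.add(idx)
        return self._buffers[idx]

    def async_write(self, idx, data):
        # stand-in for a peripheral completing a queued transfer
        if idx not in self._pinned:
            raise RuntimeError("buffer must be pinned before async access")
        self._buffers[idx][:len(data)] = data

    def read(self, idx, length):
        return bytes(self._buffers[idx][:length])
```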
-
Patent number: 7991975
Abstract: An object is to prevent random access commands from remaining unprocessed even in the case of mixed sequential and random accesses. A storage medium control unit is used in a data storage device adapted to perform processing on a data storage medium based on multiple requests including sequential access requests and random access requests. The storage medium control unit includes: a request response delay monitoring device for monitoring the presence of delay in response to the requests based on whether or not the response time for each request exceeds a certain allowable delay time; and a request control device for preventing the rearrangement processing of the sequential access requests and controlling the processing of the requests to be performed in a certain request order if the allowable delay time is exceeded.
Type: Grant
Filed: March 24, 2008
Date of Patent: August 2, 2011
Assignee: NEC Corporation
Inventor: Kazunori Tanoue
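The mechanism amounts to a starvation guard: reorder freely while every request is within its allowable delay, but fall back to strict order once any request is overdue. A sketch under that reading, with invented field names:

```python
def order_requests(requests, now, allowable_delay):
    """Return requests in the order they should be processed.

    Each request is a dict with an 'arrival' time and a 'sequential'
    flag. If any request has already waited longer than
    `allowable_delay`, rearrangement is suppressed and requests run
    in arrival order, so random requests cannot be starved;
    otherwise sequential requests are moved ahead for throughput."""
    overdue = any(now - r["arrival"] > allowable_delay for r in requests)
    if overdue:
        return sorted(requests, key=lambda r: r["arrival"])
    return sorted(requests, key=lambda r: (not r["sequential"], r["arrival"]))
```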
-
Patent number: 7958324
Abstract: A computer system of the present invention can adjust the execution frequencies of a command issued from a host and a command issued from a storage. An external manager disposed in the host configures a priority for a host command issued from a command issuing module inside the host. An internal manager disposed in the storage configures a priority for an internal command issued from a command issuing module inside the storage. The internal manager adjusts the execution frequency of the host command and the execution frequency of the internal command based on the host command priority and the internal command priority.
Type: Grant
Filed: May 1, 2008
Date of Patent: June 7, 2011
Assignee: Hitachi, Ltd.
Inventors: Ken Tokoro, Takahiko Takeda
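One simple way to realize "adjust execution frequency based on the two priorities" is proportional slot sharing; this is an assumed policy for illustration, not necessarily the patent's:

```python
def dispatch_counts(host_priority, internal_priority, slots):
    """Split `slots` execution slots between host commands and the
    storage's internal commands in proportion to their configured
    priorities (higher priority value = more slots)."""
    total = host_priority + internal_priority
    host = round(slots * host_priority / total)
    return host, slots - host
```

For example, with a host priority of 3 and an internal priority of 1, three quarters of the slots go to host commands.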
-
Patent number: 7930484
Abstract: Instructions involving a relatively significant data transfer or a particular type of data transfer via a cache result in the application of a restricted access policy to control access to one or more partitions of the cache so as to reduce or prevent the overwriting of data that is expected to be subsequently used by the cache or by a processor. A processor or other system component may assert a signal which is utilized to select between one or more access policies and the selected access policy then may be applied to control access to one or more ways of the cache during the data transfer operation associated with the instruction. The access policy typically represents an access restriction to particular cache partitions, such as a restriction to one or more particular cache ways or one or more particular cache lines.
Type: Grant
Filed: February 7, 2005
Date of Patent: April 19, 2011
Assignee: Advanced Micro Devices, Inc.
Inventors: Stephen P. Thompson, Mark A. Krom
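In effect, an asserted signal selects a policy that masks off cache ways holding reusable data during the large transfer. A small illustrative model (way numbering and the policy interface are assumptions):

```python
def allowed_ways(num_ways, restricted, protected_ways):
    """Return the set of cache ways a large/streaming transfer may
    allocate into.

    When the restricted policy is selected (signal asserted), ways
    holding data expected to be reused are excluded so the transfer
    cannot overwrite them; otherwise every way is available."""
    ways = set(range(num_ways))
    if restricted:
        ways -= set(protected_ways)
    return ways
```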
-
Patent number: 7840751
Abstract: Apparatus and method for command queue management of back watered requests. A selected request is released from a command queue, and further release of requests from the queue is interrupted when a total number of subsequently completed requests reaches a predetermined threshold.
Type: Grant
Filed: June 29, 2007
Date of Patent: November 23, 2010
Assignee: Seagate Technology LLC
Inventors: Clark Edward Lubbers, Robert Michael Lester
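The gating rule is compact enough to state directly in code; a sketch with an invented class name, counting completions after the selected release:

```python
class BackwaterGate:
    """After a selected request is released, count subsequently
    completed requests; once the count reaches the threshold,
    interrupt further releases from the queue."""
    def __init__(self, threshold):
        self._threshold = threshold
        self._completed = 0
        self._blocked = False

    def complete(self):
        """Record one completed request."""
        self._completed += 1
        if self._completed >= self._threshold:
            self._blocked = True

    def may_release(self):
        """May another request be released from the queue?"""
        return not self._blocked
```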
-
Patent number: 7831768
Abstract: A method for writing data to a RAID 5 configuration of hard disks writes two or more items of data to a data stripe together. The method includes the determining of the suitability of data items to be written together, the storing of the new data items to temporary buffers, the reading of the original data and parity from the hard disk to the temporary buffers, the modification of the parity and the writing of the new data and new parity to the hard disks.
Type: Grant
Filed: October 30, 2007
Date of Patent: November 9, 2010
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Srikanth Ananthamurthy, Aaron Lindemann
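The parity modification step is the standard RAID 5 read-modify-write identity, new_parity = old_parity XOR old_data XOR new_data, applied once per combined item. A sketch of just that step (buffer management and suitability checks from the abstract are omitted):

```python
def raid5_combined_write(old_data, old_parity, new_items):
    """Compute the new stripe parity when writing several data items
    in one stripe together.

    `old_data` maps item index -> old bytes, `new_items` maps item
    index -> new bytes, `old_parity` is the old parity block; all
    blocks are equal-length byte strings. Each replaced item is
    XORed out of the parity and its replacement XORed in."""
    parity = bytearray(old_parity)
    for idx, new in new_items.items():
        old = old_data[idx]
        for i in range(len(parity)):
            parity[i] ^= old[i] ^ new[i]
    return bytes(parity)
```

Combining items this way amortizes the parity read and write over several data items in the same stripe.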
-
Publication number: 20090240902
Abstract: A computer system of the present invention can adjust the execution frequencies of a command issued from a host and a command issued from a storage. An external manager disposed in the host configures a priority for a host command issued from a command issuing module inside the host. An internal manager disposed in the storage configures a priority for an internal command issued from a command issuing module inside the storage. The internal manager adjusts the execution frequency of the host command and the execution frequency of the internal command based on the host command priority and the internal command priority.
Type: Application
Filed: May 1, 2008
Publication date: September 24, 2009
Inventors: Ken Tokoro, Takahiko Takeda
-
Publication number: 20090164741
Abstract: An information processing apparatus and an information processing method are capable of correctly selecting data to be deleted, without a user having to perform a troublesome operation. In a backup operation, a determination is made for each image file as to whether a predetermined condition is satisfied. If the condition is satisfied, image files are backed up, and storage priority levels defined for these image files are reduced in accordance with a rule predefined by a user. The storage priority level is a measure indicating the priority of keeping an image file in a storage unit. The higher the storage priority, the lower the probability that image files are deleted. The storage priority levels are changed depending on whether image files have been backed up and depending on the number of times image files were backed up.
Type: Application
Filed: November 18, 2008
Publication date: June 25, 2009
Applicant: CANON KABUSHIKI KAISHA
Inventor: Yasuhito Takaki
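The backup pass described above can be sketched as a single loop: files that meet the user-defined condition are backed up and their storage priority is lowered, making them better deletion candidates later. Function and field names are invented for the example:

```python
def backup_and_demote(files, condition, demote_by=1):
    """For each file satisfying `condition`, back it up and reduce
    its storage priority.

    Each file is a dict with 'name' and 'storage_priority' (higher
    value = keep longer). A 'backup_count' is tracked because the
    abstract says priority changes also depend on how many times a
    file has been backed up. Returns the names of backed-up files."""
    backed_up = []
    for f in files:
        if condition(f):
            backed_up.append(f["name"])  # stand-in for the real backup
            f["storage_priority"] -= demote_by
            f["backup_count"] = f.get("backup_count", 0) + 1
    return backed_up
```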
-
Publication number: 20080162735
Abstract: Embodiments include methods, apparatus, and systems for prioritizing input/outputs (I/Os) to storage devices. One embodiment includes a method that receives an input/output (I/O) command having a group number field and a priority number field at a target device. The method then generates a new priority value based on the group number field. The I/O command is processed at the target device with the new priority value.
Type: Application
Filed: December 29, 2006
Publication date: July 3, 2008
Inventors: Doug Voigt, Michael K. Traynor, Santosh Ananth Rao
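The core step, deriving a new priority from the command's group number field, could be a simple table lookup at the target; the table and default are assumptions for illustration:

```python
def remap_priority(group_number, priority_table, default=7):
    """Generate the effective priority for an incoming I/O command
    from its group number field, overriding the command's own
    priority number field; unknown groups fall back to a default."""
    return priority_table.get(group_number, default)
```

A target device would then queue and process the command using the remapped value rather than the initiator-supplied priority.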