Using a Replacement Algorithm (EPO) Patents (Class 711/E12.07)
  • Patent number: 7831785
    Abstract: A data collection management system comprises collection logic configured to store incoming data in a memory at a collection rate, and memory management logic configured to automatically downsample at least a portion of the stored data in response to a usage level of the memory reaching a threshold.
    Type: Grant
    Filed: April 30, 2007
    Date of Patent: November 9, 2010
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Cyrille de Brebisson, Brian Maguire
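    Illustrative sketch (not part of the patent): one way to read the claim is a collector that appends samples at the collection rate and, once buffer usage crosses a threshold, halves the density of the older data. The class name, capacity, and threshold values below are assumptions.
    ```python
    class DownsamplingCollector:
        """Stores incoming samples; thins the oldest data once the buffer
        reaches a usage threshold (illustrative only)."""

        def __init__(self, capacity=1000, threshold=0.9):
            self.capacity = capacity
            self.threshold = threshold
            self.samples = []

        def collect(self, sample):
            self.samples.append(sample)
            if len(self.samples) >= self.capacity * self.threshold:
                self._downsample_oldest_half()

        def _downsample_oldest_half(self):
            # Keep every second sample in the older half of the buffer,
            # freeing roughly a quarter of the used space.
            half = len(self.samples) // 2
            older = self.samples[:half:2]      # every other old sample
            self.samples = older + self.samples[half:]
    ```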
  • Patent number: 7822917
    Abstract: A mass storage system usable with a storage device is described. The mass storage system includes a housing, a device interface, a user interface, a controller, and a connector. The device interface is operatively coupled with the housing for connecting with a storage device. The user interface is connected with the housing and operatively coupled with the device interface for accessing information about a storage device connected via the device interface. The controller is operatively coupled with the device interface and the user interface for controlling the mass storage system. The connector is connected with the controller for coupling the mass storage system to an external processing device.
    Type: Grant
    Filed: June 10, 2005
    Date of Patent: October 26, 2010
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Luca Lodolo, Yancy Chen
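    Illustrative sketch (assumptions only): a rough model of how the claimed components might be coupled, with the controller mediating between the device interface and the user interface; read_info and display are hypothetical methods.
    ```python
    class MassStorageSystem:
        """Illustrative wiring of the claimed components; all names are assumptions."""

        def __init__(self, device_interface, user_interface, connector):
            self.device_interface = device_interface   # couples to the storage device
            self.user_interface = user_interface       # lets a user query device info
            self.connector = connector                 # couples to an external host

        def show_device_info(self):
            # Controller role: fetch information through the device interface
            # and present it through the user interface.
            info = self.device_interface.read_info()   # hypothetical method
            self.user_interface.display(info)          # hypothetical method
    ```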
  • Publication number: 20100235583
    Abstract: Apparatus, systems, and methods may operate to send a window copy message including changed window identification information to a remote node when metadata associated with a changed foreground window at a local node has been cached, and otherwise, to locally cache the window metadata and send the window metadata and window pixel data to the remote node. When a preselected minimum bandwidth connection is not available between the local node and the remote node, additional operations may include sending a rectangle paint message including changed rectangle identification information to the remote node when rectangle metadata associated with a changed rectangle of a designated minimum size at the local node has been cached, and otherwise, to locally cache the rectangle metadata and send the rectangle metadata and rectangle pixel data to the remote node. Additional apparatus, systems, and methods are disclosed.
    Type: Application
    Filed: March 16, 2009
    Publication date: September 16, 2010
    Inventors: Ravi kiran Gokaraju, Sudhir Reddy Nathaala
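    Illustrative sketch (not part of the application): the caching decision described above, with a lightweight copy/paint message sent when metadata is already cached at the sender and a full metadata-plus-pixels send otherwise; on links below the minimum bandwidth the same logic is applied per changed rectangle. The message names, the 16x16 minimum rectangle, and the object attributes are assumptions.
    ```python
    def send_window_update(window, metadata_cache, remote, min_bandwidth_ok):
        """Illustrative only: choose between a copy/paint message and a full send."""
        if min_bandwidth_ok:
            # Fast link: operate on the whole changed foreground window.
            if window.id in metadata_cache:
                remote.send(("window_copy", window.id))
            else:
                metadata_cache[window.id] = window.metadata
                remote.send(("window_full", window.metadata, window.pixels))
        else:
            # Slow link: operate on changed rectangles of a minimum size.
            for rect in window.changed_rects(min_size=(16, 16)):
                if rect.id in metadata_cache:
                    remote.send(("rect_paint", rect.id))
                else:
                    metadata_cache[rect.id] = rect.metadata
                    remote.send(("rect_full", rect.metadata, rect.pixels))
    ```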
  • Patent number: 7779227
    Abstract: A memory management apparatus and a related method thereof for accessing digital versatile disc (DVD) data stored in a memory device are disclosed. The memory management apparatus includes an address mapping module, coupled to a bus, for receiving a logic address from the bus and for generating a physical address according to the logic address, and an access control module, coupled to the address mapping module and the memory device, for accessing the digital versatile disc data according to the physical address.
    Type: Grant
    Filed: August 14, 2007
    Date of Patent: August 17, 2010
    Assignee: Realtek Semiconductor Corp.
    Inventors: Hui-Huang Chang, Yi-Chih Huang, Feng-Cheng Liu
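    Illustrative sketch (not part of the patent): the two claimed modules under an assumed bank/offset mapping; the abstract does not specify the actual mapping scheme.
    ```python
    class AddressMappingModule:
        """Translates a logic address into a physical address (illustrative)."""
        def __init__(self, bank_size, bank_bases):
            self.bank_size = bank_size
            self.bank_bases = bank_bases    # physical base address of each bank

        def map(self, logic_addr):
            bank = logic_addr // self.bank_size
            offset = logic_addr % self.bank_size
            return self.bank_bases[bank] + offset


    class AccessControlModule:
        """Reads DVD data from the memory device at the mapped physical address."""
        def __init__(self, memory, mapper):
            self.memory = memory            # e.g. a bytearray standing in for DRAM
            self.mapper = mapper

        def read(self, logic_addr, length):
            phys = self.mapper.map(logic_addr)
            return self.memory[phys:phys + length]
    ```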
  • Patent number: 7774549
    Abstract: A processor includes multiple processor core units, each including a processor core and a cache memory. Victim lines evicted from a first processor core unit's cache may be stored in another processor core unit's cache, rather than written back to system memory. If the victim line is later requested by the first processor core unit, the victim line is retrieved from the other processor core unit's cache. The processor has low latency data transfers between processor core units. The processor transfers victim lines directly between processor core units' caches or utilizes a victim cache to temporarily store victim lines while searching for their destinations. The processor evaluates cache priority rules to determine whether victim lines are discarded, written back to system memory, or stored in other processor core units' caches. Cache priority rules can be based on cache coherency data, load balancing schemes, and architectural characteristics of the processor.
    Type: Grant
    Filed: March 2, 2007
    Date of Patent: August 10, 2010
    Assignee: MIPS Technologies, Inc.
    Inventor: Sanjay Vishin
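    Illustrative sketch (not part of the patent): one possible victim-placement decision, where clean lines are discarded, dirty lines go to the peer cache with the most free room, and a write-back is the fallback. These particular priority rules are assumptions, not the patent's.
    ```python
    def place_victim(victim, peer_caches, memory):
        """Illustrative victim handling for a multi-core cache (assumed rules)."""
        if not victim.dirty:
            return "discarded"              # clean data can be refetched from memory

        # Prefer storing the dirty victim in the peer cache with the most free room.
        candidates = [c for c in peer_caches if c.has_free_entry()]
        if candidates:
            target = max(candidates, key=lambda c: c.free_entries())
            target.insert(victim)
            return f"stored in {target.name}"

        # No peer can take it: fall back to a write-back to system memory.
        memory.write_back(victim)
        return "written back"
    ```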
  • Publication number: 20100185913
    Abstract: A method for decoding LDPC code comprises the steps of: marking non-zero sub-matrices of a parity-check matrix of an LDPC code as 1 and zero sub-matrices of the parity-check matrix as 0 to form a simplified matrix; rearranging the sequence of rows of the simplified matrix according to the dependency between these rows; and updating the LDPC code in accordance with the sequence of the rows.
    Type: Application
    Filed: July 8, 2009
    Publication date: July 22, 2010
    Applicant: Ralink Technology Corporation
    Inventors: Yen Chin Liao, Chun Hsien Wen, Yung Szu Tu, Jiunn Tsair Chen
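    Illustrative sketch (not part of the application): the first two steps under assumptions, where the parity-check matrix is given as a grid of sub-matrices and "dependency" is read as two block-rows sharing a non-zero block-column, so rows without overlap can be scheduled back to back.
    ```python
    def simplify(block_matrix):
        """Mark each non-zero sub-matrix as 1 and each zero sub-matrix as 0."""
        return [[1 if any(any(row) for row in sub) else 0 for sub in block_row]
                for block_row in block_matrix]

    def reorder_rows(simplified):
        """Greedy reordering (assumed heuristic): avoid scheduling two rows that
        share a non-zero column next to each other when possible, so consecutive
        updates do not depend on one another."""
        remaining = list(range(len(simplified)))
        order = [remaining.pop(0)]
        while remaining:
            prev = simplified[order[-1]]
            # Prefer a row with no column overlap with the previously scheduled row.
            pick = next((r for r in remaining
                         if not any(a and b for a, b in zip(prev, simplified[r]))),
                        remaining[0])
            remaining.remove(pick)
            order.append(pick)
        return order
    ```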
  • Patent number: 7734872
    Abstract: A facility for determining whether to consistency-check a cache entry is described. The facility randomly or pseudorandomly selects a value in a range. If the selected value satisfies a predetermined consistency-checking threshold within the range, the facility consistency-checks the entry, and may decide to propagate this knowledge to other cache managers. If, on the other hand, the selected value does not satisfy the consistency-checking threshold, the facility determines not to consistency-check the entry.
    Type: Grant
    Filed: December 1, 2008
    Date of Patent: June 8, 2010
    Assignee: Amazon Technologies, Inc.
    Inventors: Hemant Madhav Bhanoo, Ozgun A. Erdogan, Tobias Holgers, Nevil A. Shah, Ryan J. Snodgrass
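    Illustrative sketch (not part of the patent): a randomized check where a value drawn in [0, 1) is compared against a threshold fraction; revalidate_against_origin and notify_peers are hypothetical stand-ins for the origin check and the propagation to other cache managers.
    ```python
    import random

    def maybe_consistency_check(entry, check_fraction=0.05, notify_peers=None):
        """Consistency-check roughly `check_fraction` of lookups (illustrative)."""
        if random.random() < check_fraction:         # value drawn in [0, 1)
            fresh = entry.revalidate_against_origin()  # hypothetical method
            if not fresh and notify_peers:
                notify_peers(entry.key)               # tell other cache managers
            return True                               # check was performed
        return False                                  # entry served without checking
    ```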
  • Publication number: 20090271575
    Abstract: A cache memory according to the present invention is a cache memory that has a set associative scheme and includes: a plurality of ways, each way being made up of entries, each entry holding data and a tag; a first holding unit operable to hold, for each way, a priority attribute that indicates a type of data to be preferentially stored in that way; a second holding unit which is included at least in a first way among the ways, and is operable to hold, for each entry of the first way, a data attribute that indicates a type of data held in that entry; and a control unit operable to perform replace control on the entries by prioritizing a way whose priority attribute held by the first holding unit matches a data attribute outputted from a processor, wherein when a cache miss occurs and in the case where (i) valid data is held in an entry of the first way among entries that belong to a set selected based on an address outputted from the processor, (ii) all of the following attributes match: the data attribute of
    Type: Application
    Filed: July 7, 2009
    Publication date: October 29, 2009
    Inventor: Shirou Yoshioka
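    Illustrative sketch (not part of the application): a way-selection policy that first targets a way whose priority attribute matches the data attribute supplied with the access; the fallback order and attribute encoding are assumptions.
    ```python
    def choose_victim_way(ways, set_index, data_attribute):
        """Pick the way to replace in the selected set (illustrative policy)."""
        # Prefer a way whose priority attribute matches the access's data attribute.
        for way in ways:
            if way.priority_attribute == data_attribute:
                return way
        # Otherwise fall back to any invalid entry, then to the first way.
        for way in ways:
            if not way.entries[set_index].valid:
                return way
        return ways[0]
    ```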
  • Publication number: 20090254710
    Abstract: A cache memory control device according to an embodiment of the present invention comprises: a refill counter that counts a refill request, and a cache-capacity determining unit that determines cache capacity. The cache-capacity determining unit transmits a cache-capacity-decrease command signal to the cache memory when a count value is equal to or smaller than a first threshold value or is smaller than the first threshold value, and the cache-capacity determining unit transmits a cache-capacity-increase command signal to the cache memory when the count value is equal to or larger than a second threshold value, which is larger than the first threshold value, or when the count value is larger than the second threshold value.
    Type: Application
    Filed: February 17, 2009
    Publication date: October 8, 2009
    Applicant: Kabushiki Kaisha Toshiba
    Inventor: Nobuhiro Nonogaki
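    Illustrative sketch (not part of the application): the capacity decision with the two comparison variants collapsed into <= and >=, and with assumed threshold values.
    ```python
    def capacity_command(refill_count, low_threshold=100, high_threshold=1000):
        """Return the command signal implied by the refill counter (illustrative)."""
        if refill_count <= low_threshold:
            return "decrease-capacity"      # few refills: the cache can shrink
        if refill_count >= high_threshold:
            return "increase-capacity"      # many refills: grow the cache
        return None                         # between thresholds: leave capacity alone
    ```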
  • Patent number: 7552293
    Abstract: Embodiments of the present invention provide a mechanism for an operating system and applications to cooperate in memory management. Applications register with the operating system for cooperative memory management. The operating system monitors the memory and determines a memory “pressure” related to the amount of demand for the memory. As the memory pressure increases, the operating system provides a memory pressure signal as feedback to the registered applications. The operating system may send this signal to indicate it is about to commence evicting pages from the memory or when it has commenced swapping out application data. In response to the signal, the registered applications may evaluate the memory pressure, determine which data should be freed, if any, and provide this information back to the operating system. The operating system may then free those portions of memory relinquished by the applications. By releasing data the system may thus avoid swapping and increase its performance.
    Type: Grant
    Filed: February 28, 2006
    Date of Patent: June 23, 2009
    Assignee: Red Hat, Inc.
    Inventors: Henri Han Van Riel, Matthias Clasen
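    Illustrative sketch (not part of the patent): the registration and notification flow; the pressure scale, the callback shape, and drop_internal_caches are hypothetical, and in a real system the signal would come from the kernel.
    ```python
    class MemoryManager:
        """Illustrative cooperative memory management (assumed interface)."""
        def __init__(self):
            self.registered = []            # callbacks of cooperating applications

        def register(self, on_pressure):
            self.registered.append(on_pressure)

        def report_pressure(self, level):
            # Called when demand for memory rises, e.g. just before eviction starts.
            freed = 0
            for callback in self.registered:
                freed += callback(level)    # each app returns bytes it relinquished
            return freed

    # An application registers a callback and drops its caches under high pressure.
    def app_on_pressure(level):
        if level >= 2:                      # assumed scale: 0 = none, 2 = swapping imminent
            return drop_internal_caches()   # hypothetical helper returning bytes freed
        return 0
    ```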
  • Publication number: 20090070541
    Abstract: A system for managing data includes providing at least one logical device having a table of information that maps sections of the logical device to sections of at least two storage areas. Characteristics of data associated with at least one section of the logical device may be evaluated. The at least one section of the data may be moved between the at least two storage areas according to a policy and based on the characteristics of the data. The table of information is updated according to the movement of data between the at least two storage areas.
    Type: Application
    Filed: March 23, 2007
    Publication date: March 12, 2009
    Inventor: Yechiel Yochai
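    Illustrative sketch (not part of the application): moving a logical section between two storage areas under an assumed hot/cold policy and keeping the mapping table in sync; all names and the access-rate threshold are assumptions.
    ```python
    def apply_policy(mapping_table, sections, fast_tier, slow_tier, hot_threshold=100):
        """Move each logical section to the tier its access rate calls for
        and update the mapping table (illustrative)."""
        for section in sections:
            wants_fast = section.accesses_per_hour >= hot_threshold
            current = mapping_table[section.id]
            target = fast_tier if wants_fast else slow_tier
            if current.tier is not target:
                new_location = target.copy_in(current)    # move the data itself
                current.tier.release(current)             # free the old location
                mapping_table[section.id] = new_location  # keep the table in sync
    ```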