Entry Replacement Strategy Patents (Class 711/133)
-
Patent number: 10158676
Abstract: In various embodiments, a data map generation system is configured to: (1) enable a user to specify one or more criteria; (2) identify one or more data flows based at least in part on the one or more specified criteria; (3) generate a data map based at least in part on the identified one or more data flows; and (4) display the data map to any suitable individual (e.g., the user). In particular embodiments, the system is configured to display all data flows associated with a particular organization that are stored within the system. In other embodiments, the system is configured to display all data flows that are associated with a particular privacy campaign undertaken by the organization.
Type: Grant
Filed: January 29, 2018
Date of Patent: December 18, 2018
Assignee: OneTrust, LLC
Inventor: Kabir A. Barday
-
Patent number: 10157133
Abstract: A data processing system having two or more processors that access a shared data resource, and a method of operation thereof. Data stored in a local cache is marked as being in a ‘UniqueDirty’, ‘SharedDirty’, ‘UniqueClean’, ‘SharedClean’ or ‘Invalid’ state. A snoop filter monitors access by the processors to the shared data resource, and includes snoop filter control logic and a snoop filter cache configured to maintain cache coherency. The snoop filter cache does not identify any local cache that stores the block of data in a ‘SharedDirty’ state, resulting in a smaller snoop filter cache size and simple snoop control logic. The data processing system may be defined by instructions of a Hardware Description Language.
Type: Grant
Filed: December 10, 2015
Date of Patent: December 18, 2018
Assignee: Arm Limited
Inventors: Jamshed Jalal, Mark David Werkheiser
-
Patent number: 10152423
Abstract: The population of data to be admitted into secondary data storage cache of a data storage system is controlled by determining heat metrics of data of the data storage system. If candidate data is submitted for admission into the secondary cache, data is selected to tentatively be evicted from the secondary cache; candidate data provided to the secondary data storage cache is rejected if its heat metric is less than the heat metric of the tentatively evicted data; and candidate data submitted for admission to the secondary data storage cache is admitted if its heat metric is equal to or greater than the heat metric of the tentatively evicted data.
Type: Grant
Filed: October 31, 2011
Date of Patent: December 11, 2018
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Kevin J. Ash, Michael T. Benhase, Stephen L. Blinick, Evangelos S. Eleftheriou, Lokesh M. Gupta, Robert Haas, Xiao-Yu Hu, Ioannis Koltsidas, Roman A. Pletka
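The admission rule in this abstract can be sketched as follows. This is an illustrative reading, not the patented implementation: the class name `SecondaryCache`, the `heat` values, and the choice of the coolest resident entry as the tentative eviction victim are all assumptions.

```python
class SecondaryCache:
    """Admission-controlled cache: a candidate enters only if its heat
    metric is equal to or greater than that of the tentative victim."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}   # key -> heat metric

    def admit(self, key, heat):
        if len(self.entries) < self.capacity:
            self.entries[key] = heat
            return True
        # Tentatively select the coolest resident entry as the victim.
        victim = min(self.entries, key=self.entries.get)
        if heat < self.entries[victim]:
            return False            # candidate rejected, victim stays
        del self.entries[victim]    # evict the tentative victim
        self.entries[key] = heat    # admit the candidate
        return True
```

Note that cold candidates never displace hot residents, which is the point of the comparison against a tentative victim rather than unconditional admission.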
-
Patent number: 10140219
Abstract: An apparatus for use in a telecommunications system comprises a cache memory shared by multiple clients and a controller for controlling the shared cache memory. A method of controlling the cache operation in a shared cache memory apparatus is also disclosed. The apparatus comprises a cache memory accessible by a plurality of clients and a controller configured to allocate cache lines of the cache memory to each client according to a line configuration. The line configuration comprises, for each client, a maximum allocation of cache lines that each client is permitted to access. The controller is configured to, in response to a memory request from one of the plurality of clients that has reached its maximum allocation of cache lines, allocate a replacement cache line to the client from cache lines already allocated to the client when no free cache lines in the cache are available.
Type: Grant
Filed: November 2, 2012
Date of Patent: November 27, 2018
Assignee: BlackBerry Limited
Inventor: Simon John Duggins
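The per-client quota behavior can be sketched roughly as below. This is a simplification under stated assumptions: `SharedCache`, the FIFO choice of which of the client's own lines to replace, and the absence of a shared free-line pool are all illustrative, not from the patent.

```python
from collections import OrderedDict

class SharedCache:
    """Each client has a maximum cache-line quota; a client at its quota
    replaces one of its *own* lines (here, its oldest) on a miss."""

    def __init__(self, quotas):
        self.quotas = quotas                        # client -> max lines
        self.lines = {c: OrderedDict() for c in quotas}

    def request(self, client, addr):
        owned = self.lines[client]
        if addr in owned:
            return "hit"
        if len(owned) >= self.quotas[client]:
            owned.popitem(last=False)               # replace own oldest line
        owned[addr] = True                          # fill the new line
        return "fill"
```

The design point is isolation: a client that exceeds its allocation can only steal from itself, so one misbehaving client cannot flush the others' working sets.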
-
Patent number: 10142335
Abstract: An apparatus, method, system, and program product are disclosed for intrinsic chip identification. One method includes receiving first counter information from a device, determining whether such information matches second counter information, receiving first frequencies from the device, determining whether each frequency of such frequencies is within a predetermined range of a corresponding frequency of second frequencies, receiving a response to a challenge sent to the device, determining whether the response matches an expected response, and granting authentication. Granting authentication may include granting authentication in response to: the first counter information matching the second counter information; each frequency of the first frequencies being within the predetermined range of a corresponding frequency of the second frequencies; and the expected response matching the response. The expected response may be updated over time.
Type: Grant
Filed: December 18, 2015
Date of Patent: November 27, 2018
Assignee: International Business Machines Corporation
Inventors: Chandrasekharan Kothandaraman, Sami Rosenblatt, Rasit O. Topaloglu
-
Patent number: 10126985
Abstract: A processor includes a processing core to generate a memory request for application data in an application. The processor also includes a virtual page group memory management (VPGMM) unit coupled to the processing core to specify a caching priority (CP) for the application data. The caching priority identifies the importance of the application data in a cache.
Type: Grant
Filed: June 24, 2015
Date of Patent: November 13, 2018
Assignee: Intel Corporation
Inventors: Subramanya R. Dulloor, Rajesh M. Sankaran, David A. Koufaty, Christopher J. Hughes, Jong Soo Park, Sheng Li
-
Patent number: 10120819
Abstract: An embedded computer system includes a processor, an interrupt source, an interrupt controller and a cache memory subsystem. In response to a request from the processor to read a data element, the cache memory subsystem fills cache lines in a cache memory with data elements read from an upper-level memory. While filling a cache line the cache memory subsystem is unable to respond to a second request from the processor which also requires a cache line fill. In response to receiving an indication from an interrupt source, the interrupt controller provides an indication substantially simultaneously to the processor and to the cache memory subsystem. In response to receiving the indication from the interrupt controller, the cache memory subsystem terminates a cache line fill and prepares to receive another request from the processor.
Type: Grant
Filed: March 20, 2017
Date of Patent: November 6, 2018
Assignee: NXP USA, Inc.
Inventors: Michael Rohleder, Stefan Singer, Josef Fuchs
-
Patent number: 10114754
Abstract: Improved techniques and systems are disclosed for ensuring that physical storage is available for cached allocating writes in a thinly provisioned storage environment. By monitoring the size of the cached allocating writes in the file system cache and taking cache occupancy reduction actions when criteria for cache reduction are fulfilled, caching of allocating writes that do not have a corresponding physical storage allocation can be eliminated or reduced to a user-configurable maximum without disabling caching of allocating writes. Using these techniques, allocating writes may be cached without risk of data loss.
Type: Grant
Filed: September 30, 2015
Date of Patent: October 30, 2018
Assignee: Veritas Technologies LLC
Inventors: Niranjan Sanjiv Pendharkar, Anindya Banerjee
-
Patent number: 10114756
Abstract: A method includes reading, by a processor, one or more configuration values from a storage device or a memory management unit. The method also includes loading the one or more configuration values into one or more registers of the processor. The one or more registers are useable by the processor to perform address translation.
Type: Grant
Filed: March 14, 2013
Date of Patent: October 30, 2018
Assignee: QUALCOMM Incorporated
Inventors: Christopher Edward Koob, Erich James Plondke, Piyush Patel, Thomas Andrew Sartorius, Lucian Codrescu
-
Patent number: 10102150
Abstract: An adaptive smart data cache eviction method takes file-based quotas into account during eviction of WEUs, as opposed to a default eviction policy that treats all files the same. Adaptive smart data cache eviction is a granular and dynamic eviction of the least frequently and least recently accessed blocks contained in a WEU for those files that have exceeded file-based quotas established for them, or that are determined to be candidates for eviction based on the number of blocks stored for them relative to other files and how frequently and recently those blocks were accessed.
Type: Grant
Filed: April 28, 2017
Date of Patent: October 16, 2018
Assignee: EMC IP Holding Company LLC
Inventors: Satish Kumar Kashi Visvanathan, Rahul Ugale
-
Patent number: 10102149
Abstract: A hybrid hierarchical cache is implemented at the same level in the access pipeline, to get the faster access behavior of a smaller cache and, at the same time, a higher hit rate at lower power for a larger cache, in some embodiments. A split cache at the same level in the access pipeline includes two caches that work together. In the hybrid split low-level cache (e.g., L1), evictions are coordinated locally between the two L1 portions, and on a miss to both L1 portions, a line is allocated from a larger L2 cache to the smallest L1 cache.
Type: Grant
Filed: April 17, 2017
Date of Patent: October 16, 2018
Assignee: Intel Corporation
Inventors: Abhishek R. Appu, Joydeep Ray, James A. Valerio, Altug Koker, Prasoonkumar P. Surti, Balaji Vembu, Wenyin Fu, Bhushan M. Borole, Kamal Sinha
-
Patent number: 10102036
Abstract: Allocating threads to processors based, at least in part, on identifiers for thread sets and applications. A thread is paired with an application and, using the identifier for the application, an ID pairing is allocated to a processor.
Type: Grant
Filed: February 5, 2016
Date of Patent: October 16, 2018
Assignee: International Business Machines Corporation
Inventors: David Granshaw, Samuel T. Massey, Daniel J. McGinnes, Martin A. Ross, Richard G. Schofield, Craig H. Stirling
-
Patent number: 10096081
Abstract: An adaptive list stores previously received hardware state information that has been used to configure a graphics processing core. One or more filters are configured to filter packets from a packet stream directed to the graphics processing core. The packets are filtered based on a comparison of hardware state information included in the packet and hardware state information stored in the adaptive list. The adaptive list is modified in response to filtering the first packet. The filters can include a hardware filter and a software filter that selectively filters the packets based on whether the graphics processing core is limiting throughput. The adaptive list can be implemented as content-addressable memory (CAM), a cache, or a linked list.
Type: Grant
Filed: September 20, 2016
Date of Patent: October 9, 2018
Assignee: Advanced Micro Devices, Inc.
Inventors: Alexander Fuad Ashkar, Harry J. Wise, Rex Eldon McCrary, Angel E. Socarras
-
Patent number: 10097635
Abstract: A reception unit receives an input of designation of a performance level for a volume. A target value calculation unit obtains a target value of performance of data transmission and reception with respect to the volume according to the input performance level. A setting unit sets the target value obtained by the target value calculation unit for the volume. A monitoring unit monitors a load factor of a transmission resource. A bandwidth management unit identifies a target transmission resource based on the load factor of the transmission resource, decides a bandwidth allocation to a memory unit that uses the target transmission resource based on the target value, and instructs a bandwidth control unit of a storage device to tune the bandwidth using the decided bandwidth allocation.
Type: Grant
Filed: February 24, 2015
Date of Patent: October 9, 2018
Assignee: FUJITSU LIMITED
Inventors: Toshiharu Makida, Kiyoshi Sugioka, Jouichi Bita
-
Patent number: 10097406
Abstract: Aspects of the present disclosure describe systems and corresponding methods for storing and/or redistributing data within a network. In various aspects, data and/or sets of data stored in a database, data store, or other type of database storage system may be pulled, pushed, distributed, redistributed, or otherwise positioned at one or more data caches and/or servers strategically located across an enterprise network, a content delivery network (“CDN”), etc., and may be accessible over such networks, other networks, and/or the Internet.
Type: Grant
Filed: March 15, 2013
Date of Patent: October 9, 2018
Assignee: LEVEL 3 COMMUNICATIONS, LLC
Inventors: James Edward Borowicz, Kevin Dean Wein, William Charles Ramthun
-
Patent number: 10089232
Abstract: Embodiments of the present invention include methods for increasing off-chip bandwidth. The method includes designing a circuit of switchable pins, replacing a portion of allocated pins of a processor with switchable pins, connecting the processor to a memory interface configured to switch the switchable pins between a power mode and a signal mode, providing a metric configured to identify which of the power mode and the signal mode is most beneficial during 1 millisecond intervals, and switching the switchable pins to signal mode during intervals where the signal mode provides more benefit than the power mode.
Type: Grant
Filed: June 11, 2015
Date of Patent: October 2, 2018
Assignee: Board of Supervisors of Louisiana State University and Agricultural and Mechanical College
Inventors: Lu Peng, Ashok Srivastava, Shaoming Chen
-
Patent number: 10082958
Abstract: Provided are a computer program product, system, and method for invoking Input/Output (I/O) threads on processors to demote tracks from a cache. An Input/Output (I/O) thread, executed by a processor, processes I/O requests directed to tracks in the storage by accessing the tracks in the cache. After processing at least one I/O request, the I/O thread determines whether a number of free cache segments in the cache is below a free cache segment threshold. The I/O thread processes a demote ready list, indicating tracks eligible to demote from the cache, to demote tracks from the cache in response to determining that the number of free cache segments is below the free cache segment threshold. The I/O thread continues to process I/O requests directed to tracks from the storage stored in the cache after processing the demote ready list to demote tracks in the cache.
Type: Grant
Filed: March 27, 2018
Date of Patent: September 25, 2018
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta, Trung N. Nguyen
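The threshold-triggered demotion described above (and in the related entries for patents 9971689 and 9971508 below) can be sketched minimally. All names (`free_segments`, `demote_ready`, `resident`) and the one-segment-per-track accounting are illustrative assumptions, not the patented design.

```python
def demote_if_low(cache, free_threshold):
    """After an I/O request, demote tracks from the demote ready list
    while free cache segments remain below the threshold."""
    while (cache["free_segments"] < free_threshold
           and cache["demote_ready"]):
        track = cache["demote_ready"].pop(0)   # oldest eligible track
        cache["resident"].discard(track)       # demote it from the cache
        cache["free_segments"] += 1            # its segment is now free
```

Running the check only after servicing I/O, rather than in a separate pass, is what the abstract emphasizes: the same I/O thread alternates between request processing and demotion.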
-
Patent number: 10078598
Abstract: A plurality of linked lists of elements is maintained corresponding to a plurality of threads accessing a plurality of cache entries, including a first linked list corresponding to a first thread and a second linked list corresponding to a second thread. Each element of each linked list corresponds to one of the plurality of cache entries. In response to the first thread accessing a cache entry corresponding to an element of the second linked list of elements, the element corresponding to the accessed cache entry is inserted at the head of the first linked list of elements. The element corresponding to the accessed cache entry is removed from the second linked list. One or more neighboring elements that were adjacent to the removed element are re-linked on the second linked list.
Type: Grant
Filed: August 31, 2016
Date of Patent: September 18, 2018
Assignee: EMC IP Holding Company LLC
Inventors: Grant Wallace, Philip Shilane
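The list migration described above can be sketched as follows, with plain Python lists standing in for the patent's linked lists (so the neighbor re-linking happens implicitly in `remove`). The class name and the linear search for the owning list are illustrative simplifications.

```python
class ThreadLists:
    """One recency list per thread; accessing an entry moves it to the
    head of the accessing thread's list, wherever it currently lives."""

    def __init__(self, threads):
        self.lists = {t: [] for t in threads}

    def access(self, thread, entry):
        # Remove the entry from whichever thread's list currently holds it.
        for lst in self.lists.values():
            if entry in lst:
                lst.remove(entry)   # neighbors re-link implicitly here
                break
        # Insert the entry at the head of the accessing thread's list.
        self.lists[thread].insert(0, entry)
```

Per-thread lists avoid contention on a single global LRU list, which is the usual motivation for this kind of structure.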
-
Patent number: 10067947
Abstract: A server computational device maintains commonly occurring duplicate chunks of deduplicated data that have already been stored in a server side repository via one or more client computational devices. The server computational device provides a client computational device with selected elements of the commonly occurring duplicate chunks of deduplicated data, in response to receiving a request by the server computational device from the client computational device to prepopulate, refresh or update a client side deduplication cache maintained in the client computational device.
Type: Grant
Filed: November 12, 2015
Date of Patent: September 4, 2018
Assignee: International Business Machines Corporation
Inventors: Jeremy M. Bares, Robert G. Genis, Jr., Howard N. Martin, Diem T. Nguyen, Michael G. Sisco
-
Patent number: 10069929
Abstract: A technique for estimating cache size for cache routers in information centric networks (ICNs) is disclosed. In an example, an average rate of incoming requests and a probability of occurrence of each request at a cache router in a predefined time interval is determined. Further, a relation between cache hit and cache miss with and without replacement is derived based on the probability of occurrence of each request. Furthermore, an entropy of the requests is computed based on the probability of occurrence of each request. Moreover, a diversity index of the requests is calculated based on the entropy and the average rate of the requests. A cache size for the cache router is then estimated based on a user defined probability of cache hit, the average rate of the requests, the diversity index of the requests and the relation between the cache hit and cache miss with and without replacement.
Type: Grant
Filed: March 9, 2016
Date of Patent: September 4, 2018
Assignee: Tata Consultancy Services Limited
Inventors: Bighnaraj Panigrahi, Samar Shailendra, Hemant Kumar Rath, Anantha Simha
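The abstract does not give the formulas that combine these quantities, so only the entropy step can be shown concretely; the sketch below computes the standard Shannon entropy of observed request frequencies, on the assumption that this is the entropy the abstract refers to. The diversity index and final size estimate would build on this value.

```python
import math
from collections import Counter

def request_entropy(requests):
    """Shannon entropy (bits) of the empirical request distribution:
    H = -sum(p_i * log2(p_i)) over the observed request probabilities."""
    counts = Counter(requests)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    return -sum(p * math.log2(p) for p in probs)
```

Intuitively, high entropy means requests are spread over many distinct items (high diversity), which argues for a larger cache to reach a target hit probability; low entropy means a few hot items dominate and a small cache suffices.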
-
Patent number: 10069896
Abstract: Transferring data from a first data storage drive of a first data storage library to a first computer system that is connected to a second data storage drive of a second data storage library. A data transfer request for transferring data accessible by the first data storage drive to the first computer system is received. A network connection is initiated between the first and the second data storage drives. An access to the requested data is initiated by the first data storage drive. A transfer of the requested data from the first to the second data storage drive via the network connection is initiated. A transfer of the requested data from the second data storage drive to the first computer system is initiated.
Type: Grant
Filed: November 1, 2015
Date of Patent: September 4, 2018
Assignee: International Business Machines Corporation
Inventors: Bernd Freitag, Brian G. Goodman, Frank Krick, Tim Oswald
-
Patent number: 10061595
Abstract: Fast computer startup is provided by, upon receipt of a shutdown command, recording state information representing a target state. In this target state, the computing device may have closed all user sessions, such that no user state information is included in the target state. However, the operating system may still be executing. In response to a command to startup the computer, this target state may be quickly reestablished from the recorded target state information. Portions of a startup sequence may be performed to complete the startup process, including establishing user state. To protect user expectations despite changes in response to a shutdown command, creation and use of the file holding the recorded state information may be conditional on dynamically determined events. Also, user and programmatic interfaces may provide options to override creation or use of the recorded state information.
Type: Grant
Filed: May 11, 2016
Date of Patent: August 28, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Mehmet Iyigun, Yevgeniy Bak, Emily N. Wilson, Kirsten V. Stark, Sushu Zhang, Patrick L. Stemen, Brian E. King, Vasilios Karagounis, Neel Jain
-
Patent number: 10055304
Abstract: An in-memory application has a state that is associated with data (CA0, CB0, CC0) stored in a memory and accessed by the application. A first restore point of the application is determined to represent a first time point (T0) in the execution time associated with a first state at which the application accesses the data being stored in memory locations (CA0) using first addresses (S1) and first pointers (A0) which are stored in a first data structure. A first restore point identifier is assigned to the first restore point, whose value is indicative of (T0). The first restore point identifier is stored in association with (A0) and (S1) in a first entry of a second data structure. In the first data structure, the first addresses (S1) are associated to second pointers (A1) to contents of memory locations (CA1) in the memory, and redirecting writing operations.
Type: Grant
Filed: March 25, 2015
Date of Patent: August 21, 2018
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Alexander Neef, Martin Oberhofer, Andreas Trinks, Andreas Uhl
-
Patent number: 10048891
Abstract: Transferring data from a first data storage cartridge of a first data storage library to a second data storage library. The first library includes a first data storage drive; the second library includes a second data storage drive. If the first and second libraries are not connected by a mechanical pass-through: mounting the first data storage cartridge into the first data storage drive; initiating a network connection between the first and second data storage drives, with both data storage drives operating in a data transfer mode; mounting a second data storage cartridge into the second data storage drive; copying the data of the first data storage cartridge onto the second data storage cartridge via the network connection; and deleting the data of the first data storage cartridge. Else, transporting the data cartridge from the first library to the second library via the mechanical pass-through.
Type: Grant
Filed: November 29, 2017
Date of Patent: August 14, 2018
Assignee: International Business Machines Corporation
Inventors: Bernd Freitag, Brian G. Goodman, Frank Krick, Erik Rueger
-
Patent number: 10048979
Abstract: Systems and methods for the management of migrations of virtual machine instances are provided. A migration manager monitors the resource usage of a virtual machine instance over time in order to create a migration profile. When migration of a virtual machine instance is desired, the migration manager schedules the migration to occur such that the migration conforms to the migration profile.
Type: Grant
Filed: February 1, 2016
Date of Patent: August 14, 2018
Assignee: Amazon Technologies, Inc.
Inventors: Pradeep Vincent, Nathan Thomas
-
Patent number: 10049051
Abstract: Systems and methods are described to reserve cache space of points of presence (“POPs”) within a content delivery network (“CDN”). A provider may submit a request to the CDN to reserve cache space on one or more POPs for data objects designated by that provider. Thereafter, the CDN may mark those designated data objects within its cache as protected from eviction. When the CDN implements a cache eviction policy on the cache, the protected objects may be ignored for purposes of eviction, or may be evicted only after non-protected data objects.
Type: Grant
Filed: December 11, 2015
Date of Patent: August 14, 2018
Assignee: Amazon Technologies, Inc.
Inventor: Matthew Graham Baldwin
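The "evict unprotected first, protected last" ordering can be sketched directly. This is a minimal illustration under assumed names (`evict`, the insertion-order tiebreak), not the CDN's actual policy.

```python
def evict(cache, protected, n):
    """Evict up to n objects from `cache` (a dict), choosing unprotected
    objects first and touching protected ones only if nothing else is left."""
    unprotected = [k for k in cache if k not in protected]
    fallback = [k for k in cache if k in protected]
    victims = (unprotected + fallback)[:n]
    for k in victims:
        del cache[k]
    return victims
```

A stricter variant would never evict protected objects at all (the abstract allows either behavior); that corresponds to dropping `fallback` from the candidate list.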
-
Patent number: 10025718
Abstract: Modifications to throughput capacity provisioned at a data store for servicing access requests to the data store may be performed according to cache performance metrics. A cache that services access requests to the data store may be monitored to collect and evaluate cache performance metrics. The cache performance metrics may be evaluated with respect to criteria for triggering different throughput modifications. In response to triggering a throughput modification, the throughput capacity for the data store may be modified according to the triggered throughput modification. In some embodiments, the criteria for detecting throughput modifications may be determined and modified based on cache performance metrics.
Type: Grant
Filed: June 28, 2016
Date of Patent: July 17, 2018
Assignee: Amazon Technologies, Inc.
Inventors: Muhammad Wasiq, Nima Sharifi Mehr
-
Patent number: 10019368
Abstract: A placement policy enables the selective storage of cachelines in a multi-level cache hierarchy. Reuse behavior of a cacheline is tracked during execution of an application in both a first level cache memory and a second level cache memory. A cache placement policy for the cacheline is determined based on the tracked reuse behavior.
Type: Grant
Filed: May 1, 2015
Date of Patent: July 10, 2018
Assignee: Samsung Electronics Co., Ltd.
Inventors: Erik Hagersten, Andreas Sembrant, David Black-Schaffer
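One way such reuse tracking could work is sketched below. The reuse counter, the threshold of two accesses, and the cache-or-bypass decision are all assumptions for illustration; the abstract specifies only that placement is "determined based on the tracked reuse behavior".

```python
class PlacementPolicy:
    """Track per-cacheline reuse and decide whether a level should
    cache the line or let it bypass to the next level."""

    def __init__(self, reuse_threshold=2):
        self.reuse_threshold = reuse_threshold
        self.reuse = {}   # cacheline address -> observed access count

    def on_access(self, line):
        self.reuse[line] = self.reuse.get(line, 0) + 1

    def placement(self, line):
        # Lines with demonstrated reuse are worth caching at this level;
        # lines seen only once bypass it to avoid polluting the cache.
        if self.reuse.get(line, 0) >= self.reuse_threshold:
            return "cache"
        return "bypass"
```

The payoff of selective placement is that streaming (use-once) data never displaces lines that are actually revisited.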
-
Patent number: 10019354
Abstract: Apparatus, systems, and methods to manage memory operations are described. A cache controller is provided comprising logic to receive a transaction to operate on a data element in a cache memory, determine whether the data element is to be stored in a nonvolatile memory by querying a source address decoder (SAD), and, in response to a determination that the data element is to be stored in the nonvolatile memory, to forward the transaction to a memory controller coupled to the nonvolatile memory, and, in response to a determination that the data element is not to be stored in the nonvolatile memory, to drop the transaction from a cache flush procedure of the cache controller. Additionally, the cache controller may receive a confirmation signal from the memory controller that the data element was stored in the nonvolatile memory, and return a completion signal to an originator of the transaction. The cache controller may also include logic to place a processor core in a low power state.
Type: Grant
Filed: December 9, 2013
Date of Patent: July 10, 2018
Assignee: Intel Corporation
Inventors: Sarathy Jayakumar, Mohan J. Kumar, Eswaramoorthi Nallusamy
-
Patent number: 10013362
Abstract: Some embodiments modify caching server operation to evict cached content based on a deterministic and multifactor modeling of the cached content. The modeling produces eviction scores for the cached items. The eviction scores are derived from two or more factors of age, size, cost, and content type. The eviction scores determine what content is to be evicted based on the two or more factors included in the eviction score derivation. The eviction scores modify caching server eviction operation for specific traffic or content patterns. The eviction scores further modify caching server eviction operation for granular control over an item's lifetime on cache.
Type: Grant
Filed: May 13, 2016
Date of Patent: July 3, 2018
Assignee: Verizon Digital Media Services Inc.
Inventors: Harkeerat Bedi, Amir Reza Khakpour, Robert J. Peters
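A multifactor eviction score of this kind can be sketched as a weighted combination of the named factors. The abstract does not disclose the actual scoring function, so the linear form and the weights below are purely illustrative assumptions.

```python
def eviction_score(age, size, cost, weights=(1.0, 0.5, 2.0)):
    """Combine factors into one score; higher = better eviction candidate
    (old, large, and cheap to re-fetch from origin)."""
    w_age, w_size, w_cost = weights
    return w_age * age + w_size * size - w_cost * cost

def pick_victim(items):
    """items: name -> (age, size, cost). Evict the highest-scoring item."""
    return max(items, key=lambda name: eviction_score(*items[name]))
```

Deterministic scoring (as opposed to, say, random sampling) means the same cache state always yields the same eviction order, which makes the policy tunable per traffic pattern by adjusting the weights.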
-
Patent number: 9996447
Abstract: Embodiments of the invention may provide for collecting specified data each time that a call to a given method occurs, wherein a given call to the given method is associated with a set of arguments comprising one or more particular argument values for the given method, and the collected data includes an element uniquely identifying each of the particular argument values. The process may further include storing the collected data at a selected location, and selecting a call threshold for the given method, wherein the call threshold comprises a specified number of occurrences of the given call to the given method, when the program is running. The collected data may be selectively analyzed at the storage location, to determine whether an occurrence of the given call to the given method has exceeded the call threshold.
Type: Grant
Filed: December 1, 2014
Date of Patent: June 12, 2018
Assignee: International Business Machines Corporation
Inventors: Mark A. Alkins, Denny Pichardo, Martin J. C. Presler-Marshall, Hunter K. Presnall
-
Patent number: 9990297
Abstract: A processor includes an instruction executing unit which executes a memory access instruction, a cache memory unit disposed between a main memory which stores data related to the memory access instruction and the instruction executing unit, a control information retaining unit which retains control information related to a prefetch issued to the cache memory unit, an address information retaining unit which retains address information based on the memory access instruction executed in the past, and a control unit which generates and issues a hardware prefetch request. The control unit compares address information retained in the address information retaining unit and an access address in the memory access instruction executed, and generates and issues, based on a comparison result, a hardware prefetch request to the cache memory unit according to the control information of the control information retaining unit specified by specifying information added to the memory access instruction.
Type: Grant
Filed: July 29, 2016
Date of Patent: June 5, 2018
Assignee: FUJITSU LIMITED
Inventors: Hideki Okawara, Masatoshi Haraguchi
-
Patent number: 9977620
Abstract: A storage device includes a memory device and a processor. The memory device is configured to store therein a plurality of data pieces. The processor is configured to determine overlapping degrees of the plurality of data pieces stored in the memory device. The processor is configured to determine, on basis of the determined overlapping degrees, an order in which a plurality of information pieces for identifying the respective data pieces are to be sent to another storage device.
Type: Grant
Filed: December 17, 2015
Date of Patent: May 22, 2018
Assignee: FUJITSU LIMITED
Inventor: Jun Kato
-
Patent number: 9977742
Abstract: A cache coherency controller comprises a directory indicating, for memory addresses cached by a group of two or more cache memories in a coherent cache structure, which of the cache memories are caching those memory addresses, the directory being associative so that multiple memory addresses map to an associative set of more than one directory entry; and control logic responsive to a memory address to be newly cached, and configured to detect whether one or more of the set of directory entries mapped to that memory address is available for storage of an indication of which of the two or more cache memories are caching that memory address; the control logic being configured so that when all of the set of directory entries mapped to that memory address are occupied, the control logic is configured to select one of the set of directory entries as a directory entry to be overwritten and the corresponding cached information to be invalidated.
Type: Grant
Filed: April 20, 2016
Date of Patent: May 22, 2018
Assignee: ARM Limited
Inventors: Andrew David Tune, Sean James Salisbury
-
Patent number: 9971689
Abstract: Provided are a computer program product, system, and method for invoking Input/Output (I/O) threads and demote threads on processors to demote tracks from a cache. An Input/Output (I/O) thread, executed by a processor, processes I/O requests directed to tracks from the storage stored in the cache. A demote thread, executed by the processor, processes a demote ready list, indicating tracks eligible to demote from cache, to select tracks to demote from the cache to free cache segments in the cache. After processing a number of I/O requests, the I/O thread processes the demote ready list to demote tracks from the cache in response to determining that a number of free cache segments in the cache is below a free cache segment threshold.
Type: Grant
Filed: June 6, 2016
Date of Patent: May 15, 2018
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta, Trung N. Nguyen
-
Patent number: 9971508
Abstract: Provided are a computer program product, system, and method for invoking Input/Output (I/O) threads on processors to demote tracks from a cache. An Input/Output (I/O) thread, executed by a processor, processes I/O requests directed to tracks in the storage by accessing the tracks in the cache. After processing at least one I/O request, the I/O thread determines whether a number of free cache segments in the cache is below a free cache segment threshold. The I/O thread processes a demote ready list, indicating tracks eligible to demote from the cache, to demote tracks from the cache in response to determining that the number of free cache segments is below the free cache segment threshold. The I/O thread continues to process I/O requests directed to tracks from the storage stored in the cache after processing the demote ready list to demote tracks in the cache.
Type: Grant
Filed: June 6, 2016
Date of Patent: May 15, 2018
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta, Trung N. Nguyen
-
Patent number: 9965515
Abstract: A method, software and device for managing a cache service layer of an online solution is described. The online solution includes a database, at least one client, a cache service layer having a plurality of nodes which are interconnected to each other and provide processing and caching power for the cache service layer, and a cache manager. The method comprises reading in a business object from the database; assigning, using the cache manager, the business object to a business object group on a first node of the cache service layer; determining, by the cache manager, the effective probability of cache expiration of the business object group; and setting an expiration time for the business object group based on the determination of the effective probability of cache expiration of the business object group.
Type: Grant
Filed: February 26, 2016
Date of Patent: May 8, 2018
Assignee: SAP SE
Inventor: Dinesh Kumar
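One way to turn an "effective probability of cache expiration" into a concrete expiration time is to model group updates as a Poisson process and solve for the TTL that keeps the staleness probability at a target. This is an assumed interpretation for illustration; the patent does not specify a Poisson model, and both the rate and the target probability here are hypothetical parameters.

```python
import math

def expiration_time(update_rate_per_hour, target_stale_prob=0.05):
    # Under a Poisson update model, P(stale within t hours) = 1 - exp(-rate * t).
    # Solving for t keeps the staleness probability at the chosen target:
    # hot groups get short TTLs, cold groups get long ones.
    if update_rate_per_hour <= 0:
        return float("inf")  # never updated -> never expires
    return -math.log(1.0 - target_stale_prob) / update_rate_per_hour
```

A group updated ten times an hour would expire in roughly 18 seconds at a 5% staleness target, while a group updated once every ten hours could be cached for about half an hour.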
-
Patent number: 9965397
Abstract: An apparatus having a cache and a circuit is disclosed. The cache includes a plurality of cache lines. The cache is configured to (i) store a plurality of data items in the cache lines and (ii) generate a map that indicates a dirty state or a clean state of each of the cache lines. The cache also has a write-back policy to a memory. The circuit is configured to (i) check a location in the map corresponding to a read address of a read request and (ii) obtain read data directly from the memory by bypassing the cache in response to the location having the clean state.
Type: Grant
Filed: March 15, 2013
Date of Patent: May 8, 2018
Assignee: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
Inventors: Horia Simionescu, Siddartha Kumar Panda, Kunal Sablok, Veera Kumar Reddy Oleti
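The read-bypass idea rests on an invariant of write-back caching: a clean line is byte-identical to memory, so a read of a clean address can skip the cache entirely. The sketch below illustrates that invariant with a dictionary-backed model; the class and method names are invented for the example and do not come from the patent.

```python
class WriteBackCache:
    def __init__(self, memory):
        self.memory = memory  # backing store: addr -> value
        self.lines = {}       # cached lines: addr -> value
        self.dirty = {}       # the "map": addr -> True if modified since write-back

    def write(self, addr, value):
        # Write-back policy: update the line and mark it dirty;
        # memory is only updated on flush.
        self.lines[addr] = value
        self.dirty[addr] = True

    def flush(self, addr):
        if self.dirty.get(addr):
            self.memory[addr] = self.lines[addr]
            self.dirty[addr] = False

    def read(self, addr):
        # Check the dirty map first. Clean (or absent) means the cached
        # copy cannot differ from memory, so bypass the cache.
        if not self.dirty.get(addr, False):
            return self.memory.get(addr)
        return self.lines[addr]
```

In hardware the payoff is freeing cache ports and avoiding a tag lookup on the common clean-read path; the dictionary model only shows why the bypass is safe.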
-
Patent number: 9965207
Abstract: A method and associated systems for efficient management of cloned data. One or more processors create a “child” clone of a “parent” software image. The child and parent contain identical information organized into identical sets of file blocks. To conserve storage, each child block initially points to a physical storage location already in use by a corresponding parent block, rather than requiring additional storage of its own. The first time a child block is updated, however, it will require additional physical storage. At the time of the child's creation, the processors reserve a number of physical blocks sufficient to store the contents of all child file blocks likely to be updated. A child file block is identified as likely to be updated by analyzing past volatility of a corresponding file block of the parent or of corresponding file blocks of other children of the same parent.
Type: Grant
Filed: June 20, 2017
Date of Patent: May 8, 2018
Assignee: International Business Machines Corporation
Inventors: Blaine H. Dolph, Dean Hildebrand, Sandeep R. Patil, Riyazahamad M. Shiraguppi
-
Patent number: 9965023
Abstract: A method performed by a multi-core processor is described. The method includes, while a core is executing program code, reading a dirty cache line from the core's last level cache and sending the dirty cache line from the core for storage external from the core, where the dirty cache line has not been evicted from the cache nor requested by another core or processor.
Type: Grant
Filed: September 13, 2016
Date of Patent: May 8, 2018
Assignee: INTEL CORPORATION
Inventors: David Keppel, Kelvin Kwan, Jawad Nasrullah
-
Patent number: 9961152
Abstract: Some embodiments provide a content delivery network (CDN) solution that affords the CDN control over those elements of customer content that are delivered by third parties. The CDN integrates a distributed set of monitoring agents. Each monitoring agent monitors the delivery performance of third parties to the region in which the agent operates. The CDN uses the performance monitoring information to dynamically manage the content tags to the third-party delivered elements of CDN-customer content. Specifically, a CDN server retrieves the parent page for requested CDN-customer content. The CDN server identifies the region from where the request originates and retrieves the logs from the monitoring agents monitoring from that region. The CDN server then modifies the base page by dynamically removing the tags to the third-party delivered elements that are reported in the monitoring agent logs as being unavailable, inaccessible, or underperforming in the identified region.
Type: Grant
Filed: August 22, 2016
Date of Patent: May 1, 2018
Assignee: VERIZON DIGITAL MEDIA SERVICES INC.
Inventors: Alexander A. Kazerani, Robert J. Peters
-
Patent number: 9948714
Abstract: The computer detects a request from a first computer to store a data item, and determines if a volatile memory in a second computer comports with an isolation rule for the data item. In response to determining that the volatile memory in the second computer comports with the isolation rule for the data item, the computer compares access time for data in the volatile memory in the second computer with access time for data in a local hard drive in the first computer, and then selectively stores the data item in a storage location that has a lower access time. The computer establishes a threshold resource consumption rate for both the volatile memory in the second computer and the local hard drive in the first computer to further select the volatile memory in the second computer or the local hard drive in the first computer to store the data item.
Type: Grant
Filed: April 18, 2016
Date of Patent: April 17, 2018
Assignee: International Business Machines Corporation
Inventors: James C. Fletcher, David P. Johnson, David L. Kaminsky
-
Patent number: 9940671
Abstract: An item is determined to exist in a dataset by arranging the dataset into a plurality of subsets, each bounded by the minimum amount of memory that may be transferred between levels of memory in a memory configuration. The item and the subsets have attributes that allow for a determination of which subset the item would exist in if the item were in the dataset. A singular subset is transferred between levels of memory to determine whether the item exists in the transferred subset. If the item does not exist in the transferred subset, it is determined that the item does not exist in the dataset.
Type: Grant
Filed: October 21, 2016
Date of Patent: April 10, 2018
Assignee: Chicago Mercantile Exchange Inc.
Inventors: Paul Meacham, Jacques Doornebos
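The key property in this abstract is that the item's attribute pins it to exactly one subset, so transferring a single subset between memory levels settles membership for the whole dataset. A hash of the item is one attribute with that property; the sketch below uses it, with the block count and function names invented for the example.

```python
def build_blocks(dataset, num_blocks):
    # Partition so that an item's hash determines the single subset
    # it could occupy if it were in the dataset.
    blocks = [[] for _ in range(num_blocks)]
    for item in dataset:
        blocks[hash(item) % num_blocks].append(item)
    return blocks

def contains(blocks, item):
    # "Transfer" exactly one subset and search only it; absence
    # there proves absence from the entire dataset.
    candidate = blocks[hash(item) % len(blocks)]
    return item in candidate
```

In the patent's setting each subset would be sized to the minimum inter-level transfer unit (e.g. a cache line), so a lookup costs one transfer regardless of dataset size.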
-
Patent number: 9933979
Abstract: The data storage device includes a separator configured to separate data that clients request to write into data chunks, an address translator configured to translate first addresses generated for the data chunks into second addresses as global addresses, a storage node mapper configured to map the second addresses to a plurality of storage nodes, and a data store unit configured to select a target storage node among the plurality of storage nodes and store the data chunks in the target storage node. The data chunks include a plurality of data input/output unit blocks. If other data chunks that are the same as the data chunks are pre-stored in the plurality of storage nodes, the data store unit is configured to establish links between the same pre-stored data chunks and the second addresses, rather than store the data chunks again in the plurality of storage nodes.
Type: Grant
Filed: March 11, 2015
Date of Patent: April 3, 2018
Assignee: Samsung Electronics Co., Ltd.
Inventors: Bon-Cheol Gu, Ju-Pyung Lee
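Linking duplicate chunks instead of re-storing them is classic content-addressed deduplication: fingerprint the chunk, store the bytes once, and map every address for that content to the same fingerprint. The sketch below illustrates the idea with SHA-256 fingerprints; the class and its structure are assumptions for the example, not the patent's design.

```python
import hashlib

class DedupStore:
    def __init__(self):
        self.chunks = {}  # fingerprint -> chunk bytes (stored once)
        self.links = {}   # global address -> fingerprint

    def put(self, address, chunk):
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in self.chunks:
            self.chunks[fp] = chunk  # first copy: store the bytes
        self.links[address] = fp     # duplicates become links only

    def get(self, address):
        return self.chunks[self.links[address]]
```

Writing the same chunk under two addresses consumes the storage of one chunk plus two small link entries, which is the space saving the abstract describes.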
-
Patent number: 9921963
Abstract: A data processing system and methods for performing cache eviction are disclosed. An exemplary method includes maintaining a metadata set for each cache unit of a cache device, wherein the cache device comprises a plurality of cache units, each cache unit having a plurality of segments, calculating a score for each metadata set, and arranging the metadata sets in a list in ascending order from lowest score to highest score. The exemplary method further includes, in response to determining that a cache eviction is to be performed, selecting a cache unit corresponding to the metadata set in the list having the lowest score, without recalculating a score for any of the metadata sets, and evicting the selected cache unit. The metadata may include, for example, segment count metadata, validity metadata, last access time (LAT) metadata, and hotness metadata.
Type: Grant
Filed: January 30, 2015
Date of Patent: March 20, 2018
Assignee: EMC IP Holding Company LLC
Inventors: Cheng Li, Philip Shilane, Grant Wallace
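The mechanism in this abstract is score-once, evict-many: compute a score per cache unit from its metadata, sort ascending, then pop from the front at eviction time without any rescoring. The weights and field names below are hypothetical; the patent names the metadata kinds but not a scoring formula.

```python
def score(meta):
    # Hypothetical weighting: units with few valid segments, low hotness,
    # and an old last-access time score lowest and are evicted first.
    return meta["valid"] * 2 + meta["hotness"] * 4 + meta["last_access"]

def build_eviction_list(metadata_sets):
    # One ascending sort; later evictions never recompute a score.
    return sorted(metadata_sets, key=score)

def evict(ordered):
    # Always take the lowest-scoring unit at the head of the list.
    return ordered.pop(0)
```

Skipping recalculation matters when evictions come in bursts: the sort is paid once, and each eviction is O(1) list-head work (amortized; a deque would avoid the pop-from-front shift).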
-
Patent number: 9921973
Abstract: In one embodiment, a cache manager releases a list lock during a scan when a track has been identified as a track for cache removal processing such as demoting the track, for example. By releasing the list lock, other processors have access to the list while the identified track is processed for cache removal. In one aspect, the position of the previous entry in the list may be stored in a cursor or pointer so that the pointer value points to the prior entry in the list. Once the cache removal processing of the identified track is completed, the list lock may be reacquired and the scan may be resumed at the list entry identified by the pointer. Other features and aspects may be realized, depending upon the particular application.
Type: Grant
Filed: March 6, 2013
Date of Patent: March 20, 2018
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael T. Benhase, Lokesh M. Gupta
-
Patent number: 9921968
Abstract: Methods and apparatus are disclosed for using a shared page miss handler device to satisfy page miss requests of a plurality of devices in a multi-core system. One embodiment of such a method comprises receiving one or more page miss requests from one or more respective requesting devices of the plurality of devices in the multi-core system, and arbitrating to identify a first page miss request of the one or more requesting devices. A page table walk is performed to generate a physical address responsive to the first page miss request. Then the physical address is sent to the corresponding requesting device, or a fault is signaled to an operating system for the corresponding requesting device responsive to the first page miss request.
Type: Grant
Filed: December 31, 2016
Date of Patent: March 20, 2018
Assignee: Intel Corporation
Inventors: Christopher D. Bryant, Rama S. Gopal
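The request flow in this abstract (receive misses from several devices, arbitrate, walk the page table, return a translation or signal a fault) can be modeled in a few lines. The sketch uses simple FIFO arbitration and a dictionary as the page table; both are stand-ins chosen for the example, not details from the patent.

```python
from collections import deque

PAGE_FAULT = object()  # sentinel for "signal a fault to the OS"

class SharedPageMissHandler:
    def __init__(self, page_table):
        self.page_table = page_table  # virtual page -> physical frame
        self.pending = deque()        # miss requests awaiting service

    def submit(self, device_id, virtual_page):
        self.pending.append((device_id, virtual_page))

    def service_one(self):
        # Arbitrate (FIFO here), walk the page table, and either return
        # the translation or a fault indication for the requesting device.
        device_id, vpage = self.pending.popleft()
        frame = self.page_table.get(vpage)
        if frame is None:
            return device_id, PAGE_FAULT
        return device_id, frame
```

Sharing one handler lets cores and other devices (e.g. accelerators) reuse a single page-walk datapath instead of each carrying its own.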
-
Patent number: 9922089
Abstract: Certain example embodiments described herein relate to techniques for processing XML documents of potentially very large sizes. For instance, certain example embodiments parse a potentially large XML document, store the parsed data and some associated metadata in multiple independent blocks or partitions, and instantiate only the particular object model object requested by a program. By including logical references rather than physical memory addresses in such pre-parsed partitions, certain example embodiments make it possible to move the partitions through a caching storage hierarchy without necessarily having to adjust or encode memory references, thereby advantageously enabling dynamic usage of the created partitions and making it possible to cache an arbitrarily large document while consuming a limited amount of program memory.
Type: Grant
Filed: July 18, 2012
Date of Patent: March 20, 2018
Assignee: SOFTWARE AG USA, INC.
Inventor: Bernard J. Style
-
Patent number: 9894175
Abstract: In accordance with an embodiment, described herein is a system and method for providing distributed caching in a transactional processing environment. The caching system can include a plurality of layers that provide a caching feature for a plurality of data types, and can be configured for use with a plurality of caching providers. A common data structure can be provided to store serialized bytes of each data type, and architecture information of a source platform executing a cache-setting application, so that a cache-getting application can use the information to convert the serialized bytes to a local format. A proxy server can be provided to act as a client to a distributed in-memory grid, and advertise services to a caching client, where each advertised service can match a cache in the distributed in-memory data grid, such as Coherence. The caching system can be used to cache results from a service.
Type: Grant
Filed: January 15, 2016
Date of Patent: February 13, 2018
Assignee: ORACLE INTERNATIONAL CORPORATION
Inventors: Todd Little, Xugang Shen, Jim Yongshun Jin, Jesse Hou
-
Patent number: 9892059
Abstract: Methods and apparatus are disclosed for using a shared page miss handler device to satisfy page miss requests of a plurality of devices in a multi-core system. One embodiment of such a method comprises receiving one or more page miss requests from one or more respective requesting devices of the plurality of devices in the multi-core system, and arbitrating to identify a first page miss request of the one or more requesting devices. A page table walk is performed to generate a physical address responsive to the first page miss request. Then the physical address is sent to the corresponding requesting device, or a fault is signaled to an operating system for the corresponding requesting device responsive to the first page miss request.
Type: Grant
Filed: December 31, 2016
Date of Patent: February 13, 2018
Assignee: Intel Corporation
Inventors: Christopher D. Bryant, Rama S. Gopal