Entry Replacement Strategy Patents (Class 711/133)
-
Patent number: 10684850
Abstract: Embodiments of this application disclose an application component deployment method and a deployment node. In the method, a target deployment node receives a first deployment instruction sent by a management server and determines, according to the first deployment instruction, a kinship node of the target deployment node and a second application component that is in the multiple application components and that corresponds to the parent node, where the kinship node includes a parent node. Then, when detecting that the parent node has deployed the second application component, the target deployment node sends a second deployment instruction to the parent node. The target deployment node deploys a first application component according to the first deployment instruction.
Type: Grant
Filed: February 18, 2019
Date of Patent: June 16, 2020
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Huan Zhu, Qi Zhang, Yuqing Liu
-
Patent number: 10678706
Abstract: Embodiments of the present disclosure are directed towards a computing device having a cache memory device with scrubber logic. In some embodiments, the scrubber logic controller may be coupled with the cache device and may select for eviction a portion of the data stored in the cache device, based at least in part on one or more selection criteria, at a dynamically adjusted level of aggressiveness. The scrubber logic controller may adjust the level of aggressiveness of the selection based at least in part on a determined time left to complete the selection at the current level of aggressiveness. Other embodiments may be described and/or claimed.
Type: Grant
Filed: March 13, 2018
Date of Patent: June 9, 2020
Assignee: INTEL CORPORATION
Inventors: Zvika Greenfield, Eshel Serlin, Asaf Rubinstein, Eli Abadi
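A minimal sketch of the dynamic-aggressiveness idea described in this abstract: if scanning the remaining eviction candidates at the current rate would overrun the time budget, the rate is raised so the scan still completes on time. The function name, rate units, and adjustment rule are illustrative assumptions, not the patented mechanism.

```python
def adjust_aggressiveness(candidates_left, rate_per_sec, time_budget_sec):
    """Return a (possibly increased) candidate scan rate that fits the budget.

    Hypothetical illustration: aggressiveness is modeled as candidates
    scanned per second, and is raised only when the projected time to
    finish at the current rate exceeds the time budget.
    """
    if rate_per_sec <= 0:
        raise ValueError("rate must be positive")
    time_needed = candidates_left / rate_per_sec
    if time_needed > time_budget_sec:
        # More aggressive: pick the minimal rate that meets the deadline.
        return candidates_left / time_budget_sec
    return rate_per_sec  # current level of aggressiveness suffices
```

For example, 1000 remaining candidates at 100/s would take 10 s; with a 5 s budget the rate is raised to 200/s.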
-
Patent number: 10671394
Abstract: A computer system for prefetching data in a multithreading environment includes a processor having a prefetching engine and a stride detector. The processor is configured to request data associated with a first thread of a plurality of threads and to prefetch the requested data with the prefetching engine, where prefetching includes allocating a prefetch stream in response to an occurrence of a cache miss. The processor detects each cache miss and, based on detecting a cache miss, monitors the prefetching engine to detect subsequent cache misses and one or more events related to allocations performed by the prefetching engine. Based on the stride detector detecting a selected number of events, the processor directs the stride detector to switch from the first thread to a second thread by ignoring stride-1 allocations for the first thread and evaluating stride-1 allocations for potential strided accesses on the second thread.
Type: Grant
Filed: October 31, 2018
Date of Patent: June 2, 2020
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Vivek Britto, George W. Rohrbaugh, III, Mohit Karve, Brian Thompto
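To make the stride-detection concept above concrete, here is a toy sketch of a detector that confirms a constant stride after observing it on consecutive cache-miss addresses. The class name, confirmation threshold, and interface are assumptions for illustration only; the patent's detector additionally tracks per-thread state and prefetch-engine events.

```python
class StrideDetector:
    """Toy detector that confirms a constant stride in miss addresses."""

    def __init__(self, confirmations_needed=2):
        self.last_addr = None
        self.last_stride = None
        self.confirmations = 0
        self.confirmations_needed = confirmations_needed

    def observe_miss(self, addr):
        """Record a cache-miss address; return the confirmed stride or None."""
        if self.last_addr is not None:
            stride = addr - self.last_addr
            if stride == self.last_stride and stride != 0:
                self.confirmations += 1   # same nonzero stride seen again
            else:
                self.confirmations = 0    # pattern broken, start over
            self.last_stride = stride
        self.last_addr = addr
        if self.confirmations >= self.confirmations_needed:
            return self.last_stride
        return None
```

Feeding it misses at addresses 100, 164, 228, 292 (stride 64) yields a confirmed stride of 64 on the fourth miss.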
-
Patent number: 10671539
Abstract: A method comprises receiving input reference attributes from a data reference interface and selecting a replacement data location of a cache to store data. The replacement data location is selected based on the input reference attributes, reference states associated with cached data stored in data locations of the cache, and an order of state locations of a replacement stack storing the reference states. The reference states are based on reference attributes associated with the cached data and can include a probability count. The order of state locations is based on the reference states and the reference attributes. In response to receiving some input reference attributes, reference states stored in the state locations can be modified and a second order of the state locations can be determined. A reference state can be stored in the replacement stack based on the second order.
Type: Grant
Filed: October 15, 2018
Date of Patent: June 2, 2020
Assignee: International Business Machines Corporation
Inventors: Brian W. Thompto, Bernard C. Drerup, Mohit S. Karve
-
Patent number: 10671538
Abstract: A memory system may include: a nonvolatile memory device comprising a plurality of memory blocks, each of which includes a plurality of pages; a volatile memory device configured to temporarily store data to be transmitted between a host and the nonvolatile memory device; and a controller configured to enter an exclusive mode in response to a request of the host, a result of checking a state of the nonvolatile memory device, or performance of a merge operation on the nonvolatile memory device, to exclusively use the volatile memory device to perform the merge operation during the exclusive mode, and to exit the exclusive mode in response to completion of the merge operation.
Type: Grant
Filed: September 6, 2018
Date of Patent: June 2, 2020
Assignee: SK hynix Inc.
Inventors: Jong-Min Lee, Beom-Rae Jeong
-
Patent number: 10666743
Abstract: Techniques for discovery of applications based on application logs are disclosed. In one embodiment, a system may include a log analyzer to receive application logs generated by a plurality of applications running in a computing environment and analyze the received application logs using a trained initialization model to parse information about the plurality of applications. Further, the system may include an application discovery unit to determine a presence of an application running on a compute node in the computing environment using the parsed information about the plurality of applications.
Type: Grant
Filed: April 23, 2018
Date of Patent: May 26, 2020
Assignee: VMWARE, INC.
Inventors: Sidhartha Sahoo, Vipul Chaudhary, Sandeep L Hegde, Arunvijai Sridharan
-
Patent number: 10664177
Abstract: Provided are a computer program product, system, and method for replicating tracks from a first storage to second and third storages. A determination is made of a track in the first storage to transfer to the second storage as part of a point-in-time copy relationship, and of a stride of tracks including the target track. The stride of tracks including the target track is staged from the first storage to a cache according to the point-in-time copy relationship. The staged stride is destaged from the cache to the second storage. The stride in the cache is transferred to the third storage as part of a mirror copy relationship. The stride of tracks in the cache is demoted in response to destaging the stride of tracks to the second storage and transferring the stride of tracks to the third storage.
Type: Grant
Filed: November 17, 2017
Date of Patent: May 26, 2020
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael T. Benhase, Lokesh M. Gupta, Brian D. Hatfield, Gail A. Spear
-
Patent number: 10657064
Abstract: A request for retrieving a cached data object from a data object cache, used to cache data objects retrieved from one or more primary data sources, is received from a data object requester. Responsive to determining that the cached data object in the data object cache is expired, it is determined whether the cached data object is still within an extended time period. If the cached data object is still within the extended time period, it is determined whether the cached data object is free of a cache invalidity state change caused by a data change operation. If the cached data object is free of a cache invalidity state change, the cached data object is returned to the data object requester.
Type: Grant
Filed: January 31, 2019
Date of Patent: May 19, 2020
Assignee: salesforce.com, inc.
Inventors: Sameer Khan, Francis James Leahy, III
-
Patent number: 10657070
Abstract: A method and apparatus are described for a shared LRU policy between cache levels. For example, one embodiment comprises a level N cache to store a first plurality of entries and a level N+1 cache to store a second plurality of entries. The level N+1 cache is initially responsible for implementing a least recently used (LRU) eviction policy for a first entry until receipt of a request for the first entry from the level N cache, at which time the entry is copied from the level N+1 cache to the level N cache. The level N cache is then responsible for implementing the LRU policy until the first entry is evicted from the level N cache. Upon being notified that the first entry has been evicted from the level N cache, the level N+1 cache resumes responsibility for implementing the LRU eviction policy.
Type: Grant
Filed: August 20, 2018
Date of Patent: May 19, 2020
Assignee: Intel Corporation
Inventors: Daniel Greenspan, Blaise Fanning, Yoav Lossin, Asaf Rubinstein
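The responsibility hand-off described in this abstract can be sketched in a few lines: each level tracks recency only for the entries it is currently responsible for, and a demotion from level N notifies level N+1 to resume tracking. The class, the inclusive-cache assumption, and the capacity rule are hypothetical simplifications of the hardware scheme.

```python
from collections import OrderedDict

class TwoLevelCache:
    """Toy inclusive two-level cache with a shared LRU responsibility."""

    def __init__(self, l1_capacity):
        self.l1 = OrderedDict()   # level N entries, kept in LRU order
        self.l2 = OrderedDict()   # level N+1 entries, kept in LRU order
        self.l1_capacity = l1_capacity

    def install_l2(self, key, value):
        self.l2[key] = value
        self.l2.move_to_end(key)  # L2 tracks recency while it is responsible

    def get(self, key):
        if key in self.l1:
            self.l1.move_to_end(key)   # L1 is responsible: only L1 updates LRU
            return self.l1[key]
        if key in self.l2:
            value = self.l2[key]
            self.l1[key] = value       # copy up; responsibility moves to L1
            if len(self.l1) > self.l1_capacity:
                old_key, _ = self.l1.popitem(last=False)   # evict L1's LRU
                self.l2.move_to_end(old_key)  # notify L2: it resumes LRU duty
            return value
        return None
```

With an L1 capacity of 2, touching "a", "b", "c" in turn demotes "a" back to L2's care, where it becomes L2's most recently tracked entry.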
-
Patent number: 10649754
Abstract: An electronic whiteboard includes a white list in which predetermined software is registered; a mode switching unit configured to switch between a normal mode, in which software unregistered in the white list is not permitted to be installed, and an install mode, in which the unregistered software is permitted to be installed; an invalidating/validating processor configured to invalidate the white list in the install mode; and a registerer configured to register in the white list software installed while the white list is invalidated, in which the invalidating/validating processor validates the white list after the installed software is registered in the white list.
Type: Grant
Filed: July 18, 2017
Date of Patent: May 12, 2020
Assignee: RICOH COMPANY, LTD.
Inventor: Shoichiro Kanematsu
-
Patent number: 10649665
Abstract: The present disclosure includes apparatuses, methods, and systems for data relocation in hybrid memory. A number of embodiments include a memory, wherein the memory includes a first type of memory and a second type of memory, and a controller configured to identify a subset of data stored in the first type of memory to relocate to the second type of memory based, at least in part, on a frequency at which an address corresponding to the subset of data stored in the first type of memory has been accessed during program operations performed on the memory.
Type: Grant
Filed: November 8, 2016
Date of Patent: May 12, 2020
Assignee: Micron Technology, Inc.
Inventors: Emanuele Confalonieri, Marco Dallabora, Paolo Amato, Danilo Caraccio, Daniele Balluchi
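A loose sketch of the relocation idea in this abstract: count how often each address in the first memory type is touched by program (write) operations, and move data at frequently programmed addresses to the second memory type. The controller class, the per-address counter, and the hot threshold are assumptions made for illustration.

```python
from collections import Counter

class HybridMemoryController:
    """Toy controller relocating frequently programmed data between
    two memory types, per the access-frequency idea sketched above."""

    def __init__(self, hot_threshold=3):
        self.first = {}                 # address -> data (first memory type)
        self.second = {}                # address -> data (second memory type)
        self.access_counts = Counter()  # program-operation count per address
        self.hot_threshold = hot_threshold

    def program(self, addr, data):
        """Perform a program (write) operation, relocating hot data."""
        self.access_counts[addr] += 1
        if addr in self.second:
            self.second[addr] = data            # already relocated
        elif self.access_counts[addr] >= self.hot_threshold:
            self.first.pop(addr, None)          # relocate: hot address
            self.second[addr] = data
        else:
            self.first[addr] = data             # cold data stays put
```

After three writes to the same address the data migrates to the second memory type, while a once-written address remains in the first.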
-
Patent number: 10645143
Abstract: The present invention relates to systems, apparatus, and methods of scanning a response to a first HTTP request for a web page in order to identify a web object for prefetching, and using a static tracker to identify and improve results. In one potential alternative embodiment, after a response is scanned, a web object may be prefetched to a proxy server before a browser requests the web object. The proxy server may observe one or more HTTP requests associated with the response to the first HTTP request for the web page and measure the success of the prefetching. After success is measured for the specific instance of the web object and the web page, a success rate for prefetching or not prefetching the web object as associated with the web page may be updated.
Type: Grant
Filed: December 20, 2018
Date of Patent: May 5, 2020
Assignee: VIASAT, Inc.
Inventors: Peter Lepeska, William B. Sebastian
-
Patent number: 10642755
Abstract: Provided are a computer program product, system, and method for invoking demote threads on processors to demote tracks from a cache. A plurality of demote ready lists indicate tracks eligible to be demoted from the cache. In response to determining that the number of free cache segments in the cache is below a free cache segment threshold, a determination is made of the number of demote threads to invoke on processors, based on the number of free cache segments and the free cache segment threshold. The determined number of demote threads are invoked to demote the tracks indicated in the demote ready lists, wherein each invoked demote thread processes one of the demote ready lists to select tracks to demote from the cache to free cache segments.
Type: Grant
Filed: February 23, 2018
Date of Patent: May 5, 2020
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta, Trung N. Nguyen
-
Patent number: 10630758
Abstract: A system and method for accelerating content delivery over a content delivery network (CDN) are provided. In an embodiment, the method includes determining, based on a received hypertext transfer protocol (HTTP) request, a PUSH list, wherein the PUSH list includes at least one resource that can be immediately provided to a web browser without requesting the at least one resource from an origin server; and issuing, based on the PUSH list, at least one PUSH resource designator to an edge proxy, wherein each PUSH resource designator indicates one of the at least one resource, wherein the edge proxy is communicatively connected in geographic proximity to a client running the web browser, and wherein the origin server and the edge proxy communicate over the CDN.
Type: Grant
Filed: May 5, 2016
Date of Patent: April 21, 2020
Assignee: RADWARE, LTD.
Inventors: Kent Douglas Alstad, Roy Berland
-
Patent number: 10628317
Abstract: Systems and methods are disclosed herein for caching data in a virtual storage environment. An exemplary method comprises monitoring, by a hardware processor, operations on a virtual storage device; identifying transitions between blocks of the virtual storage device on which the operations are performed; determining a relationship between the blocks based on the identified transitions; clustering the blocks into groups of related blocks based on the relationship; and applying one of a plurality of different caching policies to the blocks in each of the groups based on the clustering.
Type: Grant
Filed: September 13, 2018
Date of Patent: April 21, 2020
Assignee: PARALLELS INTERNATIONAL GMBH
Inventors: Anton Zelenov, Nikolay Dobrovolskiy, Serguei Beloussov
-
Patent number: 10628046
Abstract: According to one aspect, a system for managing information objects in dynamic data storage devices includes a first data storage device having a plurality of information objects, a second data storage device operatively connectable to an output device for providing at least some of the information objects to at least one user, and at least one processor operatively coupled to the first data storage device and the second data storage device.
Type: Grant
Filed: March 5, 2018
Date of Patent: April 21, 2020
Assignee: D2L Corporation
Inventors: Brian John Cepuran, David Robert Lockhart, Ali Ghassemi, Dariusz Grabka
-
Patent number: 10628241
Abstract: Provided are a computer program product, system, and method for determining when to send a message to a computing node to process items, by training a machine learning module. A machine learning module receives as input information related to the sending of messages to the computing node to process items and outputs a send message parameter value for a send message parameter indicating when to send a message to the computing node. The send message parameter value is adjusted based on a performance condition and a performance condition threshold to produce an adjusted send message parameter value. The machine learning module is retrained with the input information to produce the adjusted send message parameter value. The retrained machine learning module is used to produce a new send message parameter value used to determine when to send a message.
Type: Grant
Filed: July 31, 2018
Date of Patent: April 21, 2020
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Lokesh M. Gupta, Kevin J. Ash, Matthew G. Borlick, Kyler A. Anderson
-
Patent number: 10621110
Abstract: Some embodiments modify caching server operation to evict cached content based on deterministic, multifactor modeling of the cached content. The modeling produces eviction scores for the cached items, derived from two or more of the factors age, size, cost, and content type. The eviction scores determine what content is to be evicted, based on the factors included in their derivation. The eviction scores modify caching server eviction operation for specific traffic or content patterns, and further provide granular control over an item's lifetime in the cache.
Type: Grant
Filed: June 26, 2018
Date of Patent: April 14, 2020
Assignee: Verizon Digital Media Services Inc.
Inventors: Harkeerat Singh Bedi, Amir Reza Khakpour, Robert J. Peters
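A multifactor eviction score of this kind can be illustrated with a simple deterministic formula: old, large, cheap-to-refetch items of low-priority content types score highest and are evicted first. The specific weighting below is an assumption for the sketch; the patent does not disclose this particular formula.

```python
def eviction_score(age_seconds, size_bytes, fetch_cost, type_weight=1.0):
    """Higher score = better eviction candidate (illustrative formula).

    Age and size push the score up; refetch cost and content-type
    priority push it down.
    """
    return (age_seconds * size_bytes) / (fetch_cost * type_weight)

def pick_victim(items):
    """Deterministically choose the cached item with the highest score."""
    return max(items, key=lambda it: eviction_score(
        it["age"], it["size"], it["cost"], it.get("type_weight", 1.0)))
```

Given a small fresh item, a large stale cheap item, and a large stale expensive item, the large stale cheap one is chosen for eviction.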
-
Patent number: 10623308
Abstract: A flow routing system includes a source device and a destination device that are coupled together via a network that includes a Software Defined Networking (SDN) device. The source device generates a packet that includes a packet header, provides a connection identifier in the packet header, and transmits the packet through the network. The SDN device receives the packet through the network from the source device, matches the connection identifier in the packet header to a single tuple in a flow entry of a flow table and, in response, uses the flow entry to route the packet through the network to the destination device. The connection identifier may be provided by hashing a source IP address, a destination IP address, a VLAN identity, a source MAC address, a source port identifier, a destination port identifier, and a creation time for the flow including the packet.
Type: Grant
Filed: February 17, 2017
Date of Patent: April 14, 2020
Assignee: Dell Products L.P.
Inventors: Ankit Singh, Shrikant U. Hallur, Rohit Kumar Arehalli
-
Patent number: 10621095
Abstract: Processing of prefetched data based on cache residency. Data to be used in future processing is prefetched. A block of data being prefetched is selected for processing, and a check is made as to whether the block of data is resident in a selected cache (e.g., the L1 cache). If the block of data is resident in the selected cache, it is processed; otherwise, processing is bypassed until a later time when it is resident in the selected cache.
Type: Grant
Filed: July 20, 2016
Date of Patent: April 14, 2020
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael K. Gschwind, Timothy J. Slegel
-
Patent number: 10616305
Abstract: Managing the timing of publication of new webpages, and in particular new versions of existing webpages. New webpages are uploaded into a data repository that stores them before they are made available for external access. A dependency processor processes these new webpages to assess their readiness for publication by checking for dependencies on other webpages, locating any of those other webpages, and ascertaining whether each such dependency is satisfied. If the dependencies are satisfied, the new webpage is deemed ready for publication and is published. In the case that the new webpage is a new version of an existing webpage, it replaces the old version. If the dependencies are not satisfied, the new webpage is held back until they are met.
Type: Grant
Filed: January 17, 2017
Date of Patent: April 7, 2020
Assignee: International Business Machines Corporation
Inventors: Gary S. Bettesworth, Andreas Martens, Sam Rogers, Paul S. Thorpe
-
Patent number: 10616356
Abstract: A system and method for optimization of resource pushing are presented. The method includes intercepting a current request for web content from a client device; determining a current PUSH list from at least one generated PUSH list based on the current request, wherein each generated PUSH list ensures availability of resources to the client device prior to receipt of a response, from an origin server, corresponding to the request; and pushing, in real time, resources to the client device based on the determined PUSH list. Some embodiments also include a method and system for generating PUSH lists for optimizing asynchronous resource pushing.
Type: Grant
Filed: February 24, 2016
Date of Patent: April 7, 2020
Assignee: Radware, Ltd.
Inventors: Kent Douglas Alstad, Shawn David Bissell, Jarrod Patrick Thomas Connolly
-
Patent number: 10609173
Abstract: Some embodiments set forth probability-based caching, whereby a probability value determines in part whether content identified by an incoming request should be cached. Some embodiments further set forth probability-based eviction, whereby a probability value determines in part whether cached content should be evicted from the cache. Selection of the content for possible eviction can be based on recency and/or frequency of the content being requested. The probability values can be configured manually or automatically. Automatic configuration involves using a function to compute the probability values; in such scenarios, the probability values can be computed as a function of fairness, cost, content size, and content type, as some examples.
Type: Grant
Filed: April 16, 2019
Date of Patent: March 31, 2020
Assignee: Verizon Digital Media Services Inc.
Inventors: Amir Reza Khakpour, Harkeerat Singh Bedi
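The probability-based admission and eviction described here can be sketched as a cache that admits new content only with probability p_admit and, when full, evicts its recency-based candidate only with probability p_evict, otherwise sparing it for a round. The class, the LRU candidate selection, and the fixed probabilities are illustrative assumptions.

```python
import random
from collections import OrderedDict

class ProbabilisticCache:
    """Toy cache with probability-based admission and eviction."""

    def __init__(self, capacity, p_admit=0.5, p_evict=0.5, rng=None):
        self.capacity = capacity
        self.p_admit = p_admit
        self.p_evict = p_evict
        self.rng = rng or random.Random()
        self.entries = OrderedDict()   # key -> value, in recency order

    def request(self, key, fetch):
        if key in self.entries:
            self.entries.move_to_end(key)      # track recency on a hit
            return self.entries[key]
        value = fetch(key)
        if self.rng.random() < self.p_admit:   # probabilistic admission
            while len(self.entries) >= self.capacity:
                victim = next(iter(self.entries))    # least recently used
                if self.rng.random() < self.p_evict:
                    del self.entries[victim]         # probabilistic eviction
                else:
                    self.entries.move_to_end(victim)  # spared this round
            self.entries[key] = value
        return value
```

With both probabilities set to 1.0 this degenerates to plain LRU; lower values make admission and eviction stochastic, which is the patent's lever for tuning cache churn.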
-
Patent number: 10581820
Abstract: Key generation and rollover are provided for a cloud-based identity management system. A key set is generated that includes a previous key and expiration time, a current key and expiration time, and a next key and expiration time; the key set is stored in a database table and in a memory cache associated with the database table. At the current key's expiration time, the key set is rolled over: the key set is retrieved from the database table, the previous key and expiration time are replaced with the current key and expiration time, the current key and expiration time are replaced with the next key and expiration time, a new key and expiration time are generated and become the next key and expiration time, and the key set is updated in the database table and the memory cache.
Type: Grant
Filed: May 8, 2017
Date of Patent: March 3, 2020
Assignee: Oracle International Corporation
Inventors: Rakesh Keshava, Sreedhar Katti, Sirish Vepa, Vadim Lander, Prateek Mishra
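The three-slot rollover described in this abstract shifts each key one position at expiry: previous takes the old current, current takes the old next, and a fresh key fills the next slot. Field names, the fixed lifetime, and the key format below are assumptions for the sketch, not the patented representation.

```python
import secrets

KEY_LIFETIME = 3600  # seconds each key stays current (assumed value)

def new_key(now):
    """Generate a fresh key with an expiration time."""
    return {"key": secrets.token_hex(16), "expires": now + KEY_LIFETIME}

def generate_key_set(now):
    """Create the initial previous/current/next key set."""
    return {
        "previous": new_key(now),
        "current": new_key(now),
        "next": new_key(now + KEY_LIFETIME),
    }

def roll_over(key_set, now):
    """At current-key expiry: previous <- current, current <- next,
    and next <- a freshly generated key."""
    key_set["previous"] = key_set["current"]
    key_set["current"] = key_set["next"]
    key_set["next"] = new_key(now + KEY_LIFETIME)
    return key_set
```

In the patented system the updated set would then be written back to both the database table and its memory cache; that persistence step is omitted here.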
-
Patent number: 10567509
Abstract: A method by a computing device of a dispersed storage network (DSN) begins by determining whether alternate form data (AFD) exists for a data object. When the alternate form data does not exist, the method continues by identifying a content derivation function in accordance with an AFD policy of the DSN. The method continues by identifying a portion of the data object based on the content derivation function and identifying one or more sets of encoded data slices, of a plurality of sets of encoded data slices, corresponding to the portion of the data object. The method continues by generating at least a portion of the AFD based on the one or more sets of encoded data slices. The method continues by storing the at least a portion of the AFD within memory of the DSN in accordance with a storage approach.
Type: Grant
Filed: May 15, 2017
Date of Patent: February 18, 2020
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Wesley B. Leggette, Manish Motwani, Brian F. Ober, Jason K. Resch
-
Patent number: 10567439
Abstract: Data processing systems and methods, according to various embodiments, perform privacy assessments and monitor new versions of computer code for updated features and conditions that relate to compliance with privacy standards. The systems and methods may obtain a copy of computer code (e.g., a software application or code associated with a website) that collects and/or uses personal data, and then automatically analyze the computer code to identify one or more privacy-related attributes that may impact compliance with applicable privacy standards. The system may be adapted to monitor one or more locations (e.g., an online software application marketplace, and/or a specified website) to determine whether the application or website has changed. The system may, after analyzing the computer code, display the privacy-related attributes, collect information regarding the attributes, and automatically notify one or more designated individuals (e.g.
Type: Grant
Filed: July 8, 2019
Date of Patent: February 18, 2020
Assignee: OneTrust, LLC
Inventor: Kabir A. Barday
-
Patent number: 10558387
Abstract: In general, embodiments of the technology relate to writing data to storage appliances. More specifically, embodiments of the technology are directed to writing data to storage media using a push-based mechanism in which clients provide the data to write to the storage media and then subsequently provide a command to write the data to the storage media.
Type: Grant
Filed: February 15, 2019
Date of Patent: February 11, 2020
Assignee: EMC IP Holding Company LLC
Inventor: Michael W. Shapiro
-
Patent number: 10552088
Abstract: A computer system including: a first computer including a first processor and a first nonvolatile memory; and a second computer, including a second processor and a second nonvolatile memory, that is connected to the first computer. The first computer includes redundant hardware that, on receiving a write command from the first processor, writes the write data of the command both into the first nonvolatile memory and into the second computer.
Type: Grant
Filed: April 19, 2016
Date of Patent: February 4, 2020
Assignee: HITACHI, LTD.
Inventor: Masanori Takada
-
Patent number: 10552269
Abstract: A means for assigning database objects to a backup storage group proceeds by collecting information related to a plurality of backup devices. The information collected includes the speed of recovery, the time to back up, and a recovery rank for each device. A backup pool is defined, using a database configuration parameter, to contain one or more of the plurality of backup devices. A determination is made to store a backup of a data object in a first device of the plurality of backup devices based on the collected information and a priority rank associated with the data object.
Type: Grant
Filed: August 31, 2017
Date of Patent: February 4, 2020
Assignee: International Business Machines Corporation
Inventors: Gaurav Mehrotra, Nishant Sinha, Pratik P. Paingankar
-
Patent number: 10547705
Abstract: The present invention relates to a proxy apparatus and to a caching method for the same. The caching method of the proxy apparatus according to one embodiment of the present invention includes the steps of: receiving, from an external device, a transmission request for content that can be divided into a plurality of blocks, together with instruction information for the playback position of the content; obtaining a header block that includes the header of the content when receiving the transmission request; identifying an intermediate block that corresponds to the playback position using the header; requesting and obtaining the intermediate block through a network when the identified intermediate block is not cached in the proxy apparatus; and transmitting to the external device at least a portion of the header and at least a portion of the intermediate block corresponding to the request.
Type: Grant
Filed: July 3, 2013
Date of Patent: January 28, 2020
Assignee: Samsung Electronics Co., Ltd.
Inventors: Jiangwei Xu, Sang Jun Moon, Young Seok Park, Chul Ki Lee, Jung Hwan Lim
-
Patent number: 10545874
Abstract: A method may include dividing a plurality of memory resources, included in a cache coupled with a database, into a first portion and a second portion. The memory resources included in the cache may store data from the database. The first portion may be occupied by data assigned to a first weight class, and the second portion by data assigned to a second weight class. The first portion may be selected for reclamation based at least on the first weight class and the age of at least some of the data occupying it. In response to the selection, the first portion of memory resources may be reclaimed. Related systems and articles of manufacture, including computer program products, are also provided.
Type: Grant
Filed: February 20, 2018
Date of Patent: January 28, 2020
Assignee: SAP SE
Inventors: Daniel Booss, Ivan Schreter
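A rough sketch of the weight-class reclamation idea in this abstract: each cached entry carries a weight class and an age, and reclamation targets entries in a given class that are older than a cutoff. The functions, the entry layout, and the age-cutoff rule are illustrative assumptions.

```python
def select_reclaimable(entries, weight_class, min_age):
    """Return keys whose entry is in the given weight class and at
    least min_age old. Each entry is a (weight_class, age) pair."""
    return [k for k, (wc, age) in entries.items()
            if wc == weight_class and age >= min_age]

def reclaim(entries, weight_class, min_age):
    """Reclaim (delete) the selected entries in place."""
    for k in select_reclaimable(entries, weight_class, min_age):
        del entries[k]
    return entries
```

Reclaiming weight class 1 with an age cutoff of 10 removes only the old low-weight entries, leaving younger and higher-weight data untouched.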
-
Patent number: 10540281
Abstract: A cache that provides data caching in response to data access requests from at least one system device, and a method of operating such a cache, are provided. Allocation control circuitry of the cache is responsive to a cache miss to allocate an entry of the multiple entries in the data caching storage circuitry in dependence on a cache allocation policy. Quality-of-service monitoring circuitry is responsive to a quality-of-service indication to modify the cache allocation policy with respect to allocation of the entry for the requested data item. The behaviour of the cache, in particular regarding allocation and eviction, can therefore be modified in order to maintain a desired quality of service for the system in which the cache is found.
Type: Grant
Filed: January 17, 2017
Date of Patent: January 21, 2020
Assignee: Arm Limited
Inventors: Paul Stanley Hughes, Michael Andrew Campbell
-
Patent number: 10521126
Abstract: A device may be configured to perform techniques that efficiently write back data to a storage device. A file system driver may be configured to delay write-backs. A file system driver may be configured to extend the range of pages that are written back to a storage device.
Type: Grant
Filed: August 9, 2017
Date of Patent: December 31, 2019
Assignee: Tuxera, Inc.
Inventor: Anton Ivanov Altaparmakov
-
Patent number: 10509721
Abstract: In some examples, performance counters for computer memory may include ascertaining a request associated with a memory address range of computer memory. The memory address range may be assigned to a specified performance tier of a plurality of specified performance tiers. A performance value associated with a performance attribute of the memory address range may be ascertained and, based on the ascertained performance value, a weight value may be determined. Based on the ascertained request and the determined weight value, a count value associated with a counter associated with the memory address range may be incremented. Based on an analysis of the count value, a determination may be made as to whether the memory address range is to be assigned to a different specified performance tier.
Type: Grant
Filed: May 18, 2018
Date of Patent: December 17, 2019
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: John G. Bennett, Siamak Tavallaei
-
Patent number: 10509769
Abstract: Managing data blocks stored in a data processing system comprises logging input/output (I/O) accesses to an in-memory buffer, with each log entry recording at least the identifiers of the data blocks accessed and when they were accessed. When the size of the log entries reaches a predetermined threshold, the system may append the entries of the in-memory buffer to the end of a history log file. The history log file is analyzed to determine patterns of accesses, and each pattern is stored in a record in an access heuristics database. While processing a request for access to a data block, the data processing system queries the access heuristics database to obtain prior access patterns associated with the data block. A data management action may be taken based on the prior access patterns.
Type: Grant
Filed: June 12, 2014
Date of Patent: December 17, 2019
Assignee: EMC IP Holding Company LLC
Inventors: Philip Shilane, Grant Wallace
-
Patent number: 10503647
Abstract: A cache memory device shared by a plurality of processors includes a cache memory configured to store some of the data stored in a main memory and to be accessed by the plurality of processors. A cache controller stores quality-of-service (QoS) information for each of the plurality of processors and sets the size of the storage space of the cache memory to be managed by a target processor differently, based on the QoS information of the target processor.
Type: Grant
Filed: January 29, 2018
Date of Patent: December 10, 2019
Assignee: Samsung Electronics Co., Ltd.
Inventor: Moon Gyung Kim
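One simple way to turn per-processor QoS values into cache space, sketched below, is to divide the cache's ways proportionally. The proportional mapping and all names are assumptions of this sketch, not the patent's mechanism.

```python
# Hypothetical sketch: per-processor QoS levels mapped to shares of a shared
# cache's ways. The proportional rule and names are illustrative only.

CACHE_WAYS = 16   # total associativity of the shared cache

def partition_by_qos(qos: dict) -> dict:
    """Give each processor a number of cache ways proportional to its QoS,
    with a floor of one way so no processor is starved."""
    total = sum(qos.values())
    return {cpu: max(1, CACHE_WAYS * q // total) for cpu, q in qos.items()}

print(partition_by_qos({"cpu0": 4, "cpu1": 2, "cpu2": 2}))
# -> {'cpu0': 8, 'cpu1': 4, 'cpu2': 4}
```

Real way-partitioning hardware (e.g. way masks) enforces such shares in the cache controller rather than in software.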
-
Patent number: 10503538
Abstract: In a branch predictor in a processor capable of executing transactional memory transactions, the branch predictor speculatively predicts the outcome of branch instructions, such as taken/not-taken, the target address and the target instruction. Branch prediction information is buffered during a transaction, and is only loaded into the branch predictor when the transaction is completed. The branch prediction information is discarded if the transaction aborts.
Type: Grant
Filed: June 2, 2014
Date of Patent: December 10, 2019
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael K Gschwind, Valentina Salapura
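The buffer-then-commit behavior can be modeled with two tables: one live predictor table and one pending buffer that is merged on commit or dropped on abort. All names in this sketch are invented; a real predictor would use saturating counters and tag arrays, not a dictionary.

```python
# Sketch (names assumed): branch outcomes observed inside a transaction are
# buffered, and only merged into the predictor tables on commit.

class TxBranchPredictor:
    def __init__(self):
        self.table = {}        # pc -> taken/not-taken prediction
        self.pending = {}      # updates buffered during a transaction
        self.in_tx = False

    def begin(self):
        self.in_tx = True

    def record(self, pc, taken):
        (self.pending if self.in_tx else self.table)[pc] = taken

    def commit(self):
        self.table.update(self.pending)   # load buffered info into the predictor
        self.pending.clear()
        self.in_tx = False

    def abort(self):
        self.pending.clear()              # discard speculative updates
        self.in_tx = False

bp = TxBranchPredictor()
bp.begin(); bp.record(0x40, True); bp.abort()
print(0x40 in bp.table)    # -> False: aborted updates never reach the tables
bp.begin(); bp.record(0x40, True); bp.commit()
print(bp.table[0x40])      # -> True
```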
-
Patent number: 10498770
Abstract: In various embodiments, a data map generation system is configured to: (1) enable a user to specify one or more criteria; (2) identify one or more data flows based at least in part on the one or more specified criteria; (3) generate a data map based at least in part on the identified one or more data flows; and (4) display the data map to any suitable individual (e.g., the user). In particular embodiments, the system is configured to display all data flows associated with a particular organization that are stored within the system. In other embodiments, the system is configured to display all data flows that are associated with a particular privacy campaign undertaken by the organization.
Type: Grant
Filed: July 23, 2018
Date of Patent: December 3, 2019
Assignee: OneTrust, LLC
Inventor: Kabir A. Barday
-
Patent number: 10474583
Abstract: An information handling system may implement a method for controlling cache flush size by limiting the amount of modified cached data in a data cache at any given time. The method may include keeping a count of the number of modified cache lines (or modified cache lines targeted to persistent memory) in the cache, determining that a threshold value for modified cache lines is exceeded and, in response, flushing some or all modified cache lines to persistent memory. The threshold value may represent a maximum number or percentage of modified cache lines. The cache controller may include a field for each cache line indicating whether it targets persistent memory. Limiting the amount of modified cached data at any given time may reduce the number of cache lines to be flushed in response to a power loss event to a number that can be flushed using the available hold-up energy.
Type: Grant
Filed: July 28, 2016
Date of Patent: November 12, 2019
Assignee: Dell Products L.P.
Inventors: John E. Jenne, Stuart Allen Berke, Vadhiraj Sankaranarayanan
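The dirty-line cap can be sketched as a write-back cache that counts modified lines and flushes when a threshold is crossed. This is a minimal sketch with invented names; the persistent-memory target bit and the percentage-based threshold variant are left out.

```python
# Minimal sketch of capping dirty cached data; all names are assumptions.

class WriteBackCache:
    def __init__(self, max_dirty=2):
        self.lines = {}            # addr -> (value, dirty_flag)
        self.dirty_count = 0
        self.max_dirty = max_dirty # threshold on modified cache lines
        self.persistent = {}       # stands in for persistent memory

    def write(self, addr, value):
        was_dirty = self.lines.get(addr, (None, False))[1]
        self.lines[addr] = (value, True)
        if not was_dirty:
            self.dirty_count += 1
        if self.dirty_count > self.max_dirty:
            self.flush()           # threshold exceeded: write lines back

    def flush(self):
        for addr, (value, dirty) in self.lines.items():
            if dirty:
                self.persistent[addr] = value
                self.lines[addr] = (value, False)
        self.dirty_count = 0

c = WriteBackCache(max_dirty=2)
c.write(0, "a"); c.write(8, "b")            # 2 dirty lines, at the limit
c.write(16, "c")                            # 3rd dirty line triggers a flush
print(c.dirty_count, sorted(c.persistent))  # -> 0 [0, 8, 16]
```

Because the dirty count never stays above the threshold for long, the worst-case flush on power loss is bounded, which is what lets it fit within the available hold-up energy.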
-
Patent number: 10474584
Abstract: A technique includes using a cache controller of an integrated circuit to control a cache including cached data content and associated cache metadata. The technique includes storing the metadata and the cached data content off of the integrated circuit and organizing the storage of the metadata relative to the cached data content such that a bus operation initiated by the cache controller to target the cached data content also targets the associated metadata.
Type: Grant
Filed: April 30, 2012
Date of Patent: November 12, 2019
Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventors: Jichuan Chang, Justin James Meza, Parthasarathy Ranganathan
-
Patent number: 10467053
Abstract: A multi-thread processor includes a plurality of hardware threads that generate a plurality of mutually independent instruction streams, respectively, and a scheduler that schedules the plurality of hardware threads.
Type: Grant
Filed: November 7, 2017
Date of Patent: November 5, 2019
Assignee: RENESAS ELECTRONICS CORPORATION
Inventors: Junichi Sato, Koji Adachi, Yousuke Nakamura
-
Patent number: 10460774
Abstract: This technology relates to a memory control apparatus for writing data into a memory device and an operating method of the memory control apparatus. A method for controlling a memory may include: converting received program data having a first address into compressed data; searching for the converted compressed data in a deduplication table that includes compressed data, a second address of a memory device in which non-compressed data corresponding to the compressed data has been written, and a counter indicating the number of times the data has been written; and, if the converted compressed data is found in the deduplication table, mapping the second address corresponding to the compressed data in the deduplication table to the first address, not performing a write operation of the memory device for the received program data, and updating the deduplication table.
Type: Grant
Filed: May 26, 2017
Date of Patent: October 29, 2019
Assignee: SK hynix Inc.
Inventor: Dong-Sop Lee
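The deduplicating write path can be sketched as a table keyed by the compressed data. In this sketch, `zlib` stands in for the patent's unspecified compression step, and the table layout and all names are assumptions.

```python
import zlib

# Illustrative sketch of a deduplicating write path; table layout and names
# are assumptions, and zlib stands in for the compression step.

class DedupStore:
    def __init__(self):
        self.memory = {}     # second address -> raw (non-compressed) data
        self.table = {}      # compressed data -> [second address, write count]
        self.mapping = {}    # first (logical) address -> second address
        self.next_addr = 0

    def program(self, first_addr, data):
        key = zlib.compress(data)
        if key in self.table:
            # Duplicate: map to the existing copy, skip the write, bump the count.
            second_addr, count = self.table[key]
            self.table[key] = [second_addr, count + 1]
        else:
            second_addr = self.next_addr
            self.next_addr += 1
            self.memory[second_addr] = data      # the actual write happens here
            self.table[key] = [second_addr, 1]
        self.mapping[first_addr] = second_addr

s = DedupStore()
s.program(0x10, b"hello")
s.program(0x20, b"hello")     # duplicate: no second physical write
print(len(s.memory), s.mapping[0x10] == s.mapping[0x20])   # -> 1 True
```

The write counter matters for reclamation: a second address can only be freed once every first address mapped to it has been overwritten or trimmed.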
-
Patent number: 10459947
Abstract: Using historical queries to determine database columns to populate a partial database. A partial database is created based, at least in part, on key values related to the most frequently accessed columns in a database.
Type: Grant
Filed: February 5, 2016
Date of Patent: October 29, 2019
Assignee: International Business Machines Corporation
Inventors: Kai Feng Cui, Shuo Li, Shu Hua Liu, Xin Ying Yang
-
Patent number: 10452686
Abstract: A system for memory synchronization of a multi-core system is provided, the system comprising: an assigning module configured to assign at least one memory partition to at least one core of the multi-core system; a mapping module configured to provide information for translation lookaside buffer shootdown for the multi-core system, carried out by sending an interrupt to the at least one core of the multi-core system if a page table entry associated with the memory partition assigned to the at least one core is modified; and an interface module configured to provide an interface to the assigning module from user space.
Type: Grant
Filed: August 4, 2017
Date of Patent: October 22, 2019
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Antonios Iliopoulos, Shay Goikhman, Eliezer Levy
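The payoff of partition-to-core assignment is that a page-table change only interrupts the cores that own the affected partition, not every core. A toy sketch, with all names and the partition map invented here:

```python
# Sketch with assumed names: only the cores assigned a memory partition are
# interrupted when one of that partition's page table entries changes.

partition_owners = {          # partition id -> cores it is assigned to
    "part0": {0, 1},
    "part1": {2},
}

interrupts_sent = []

def send_shootdown_ipi(core):
    interrupts_sent.append(core)   # stands in for an inter-processor interrupt

def on_pte_modified(partition):
    # Only cores owning the partition need their TLB entries invalidated.
    for core in sorted(partition_owners.get(partition, ())):
        send_shootdown_ipi(core)

on_pte_modified("part1")
print(interrupts_sent)    # -> [2]: cores 0 and 1 are never interrupted
```

A naive shootdown broadcasts to all cores; restricting it to owning cores is what reduces the synchronization cost as core counts grow.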
-
Patent number: 10452473
Abstract: Techniques for managing caching use of a solid state device are disclosed. In some embodiments, the techniques may be realized as a method for managing caching use of a solid state device. Management of the caching use may include receiving, at a host device, notification of failure of a solid state device. In response to the notification, a cache mode may be set to uncached. In uncached mode, input/output (I/O) requests may be directed to uncached storage (e.g., disk).
Type: Grant
Filed: December 21, 2015
Date of Patent: October 22, 2019
Assignee: Western Digital Technologies, Inc.
Inventors: Saied Kazemi, Siddharth Choudhuri
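The fallback behavior can be sketched as a mode flag consulted on every read. All names here are illustrative; a real implementation would also have to handle dirty cached data and in-flight I/O at the moment of failure, which this sketch ignores.

```python
# Minimal sketch of falling back to uncached mode on SSD failure.

class CachedStorage:
    def __init__(self):
        self.mode = "cached"
        self.ssd_cache = {}       # solid state device acting as a cache
        self.disk = {}            # backing (uncached) storage

    def on_ssd_failure(self):
        # Notification of SSD failure: stop using the cache entirely.
        self.mode = "uncached"
        self.ssd_cache = {}

    def read(self, key):
        if self.mode == "cached" and key in self.ssd_cache:
            return self.ssd_cache[key]
        value = self.disk.get(key)         # I/O goes straight to disk
        if self.mode == "cached" and value is not None:
            self.ssd_cache[key] = value
        return value

st = CachedStorage()
st.disk["k"] = "v"
st.read("k")                   # warms the SSD cache
st.on_ssd_failure()            # host is notified; mode becomes uncached
print(st.mode, st.read("k"))   # -> uncached v
```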
-
Patent number: 10447623
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for implementing key-value store functionality within a real-time messaging system. An example method includes: providing a plurality of channels, wherein each channel comprises an ordered plurality of messages, wherein each channel represents a unique key, and wherein each message comprises one or more key-value pairs; receiving a function comprising a key for identifying one of the plurality of channels and processing instructions to be applied to a subset of the key-value pairs; and applying the processing instructions based at least in part on the unique key.
Type: Grant
Filed: February 24, 2017
Date of Patent: October 15, 2019
Assignee: Satori Worldwide, LLC
Inventors: Igor Milyakov, Fredrik E. Linder, Anton Koinov, Francois Orsini, Boaz Sedan, Oleg Khabinov, Bartlomiej Puzon
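The channel-as-key model can be sketched with an ordered message list per channel and a function applied to one channel's messages. The API names here (`publish`, `apply_function`) are invented for illustration and are not the messaging system's actual interface.

```python
from collections import defaultdict

# Sketch of key-value semantics over messaging channels; names are invented.

channels = defaultdict(list)      # channel name (the unique key) -> ordered messages

def publish(channel, **pairs):
    channels[channel].append(pairs)       # each message holds key-value pairs

def apply_function(key, fn):
    """Apply processing instructions to the messages of the channel named by key."""
    return fn(channels[key])

publish("sensor.1", temp=20)
publish("sensor.1", temp=26)
latest_temp = apply_function("sensor.1", lambda msgs: msgs[-1]["temp"])
print(latest_temp)    # -> 26
```

Reading the newest message of a channel behaves like a key-value `get`, while the retained ordered history is what distinguishes this from a plain key-value store.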
-
Patent number: 10430344
Abstract: The present invention provides a memory resource management method and apparatus. The method includes: first, determining a recyclable cache unit according to first indication information and second indication information that correspond to each cache unit, where the first indication information and the second indication information both include at least one bit, the first indication information indicates whether the cache unit is occupied, and the second indication information indicates a quantity of cache unit recycling periods for which the cache unit has been occupied; and then, recycling the recyclable cache unit. A quantity of cache unit recycling periods is set, and when the time for which a cache unit has been occupied reaches the preset quantity of cache unit recycling periods, the cache unit is forcibly recycled, thereby effectively improving cache unit utilization and improving system bandwidth utilization.
Type: Grant
Filed: December 28, 2016
Date of Patent: October 1, 2019
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Xianfu Zhang, Qiang Wang
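The two pieces of indication information map naturally to an occupied flag and a period counter per unit. A sketch with invented names and an arbitrary period limit:

```python
# Sketch with assumed names: a per-unit occupied bit plus a period counter;
# a unit still occupied after MAX_PERIODS recycling periods is forcibly reclaimed.

MAX_PERIODS = 3

class CacheUnit:
    def __init__(self):
        self.occupied = False    # first indication information
        self.periods = 0         # second indication information

def recycling_tick(units):
    """Run once per recycling period; returns indices of reclaimed units."""
    reclaimed = []
    for i, u in enumerate(units):
        if not u.occupied:
            continue
        u.periods += 1
        if u.periods >= MAX_PERIODS:
            u.occupied = False   # forcible recycle
            u.periods = 0
            reclaimed.append(i)
    return reclaimed

units = [CacheUnit() for _ in range(2)]
units[0].occupied = True
ticks = [recycling_tick(units) for _ in range(3)]
print(ticks)    # -> [[], [], [0]]: unit 0 is reclaimed on the third period
```

The forcible reclaim puts an upper bound on how long a leaked or stuck unit can hold cache space, which is where the utilization improvement comes from.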
-
Patent number: 10430349
Abstract: A processing system includes a cache whose cache lines are partitioned into a first subset and one or more second subsets. The processing system also includes one or more counters associated with the second subsets of the cache lines. The processing system further includes a processor configured to modify the one or more counters in response to a cache hit or a cache miss associated with the second subsets. The one or more counters are modified by an amount determined by one or more characteristics of the memory access request that generated the cache hit or the cache miss.
Type: Grant
Filed: June 13, 2016
Date of Patent: October 1, 2019
Assignee: Advanced Micro Devices, Inc.
Inventor: Paul James Moyer
-
Patent number: 10423528
Abstract: An apparatus includes: a processor core to execute an instruction; a first cache to retain data used by the processor core; and a second cache to be coupled to the first cache, wherein the second cache includes a data-retaining circuit with storage areas to retain data, an information-retaining circuit to retain management information that includes first state information indicating a state of data retained in the data-retaining circuit, a state-determining circuit to determine, based on the management information, whether requested data that is requested with a read request from the first cache is retained in the data-retaining circuit, and an eviction-processing circuit to, when the state-determining circuit determines that the requested data is not retained in the data-retaining circuit and there is not enough space in the storage areas to store the requested data, evict data from the storage areas without issuing an eviction request based on the read request.
Type: Grant
Filed: June 7, 2017
Date of Patent: September 24, 2019
Assignee: FUJITSU LIMITED
Inventors: Kenta Umehara, Toru Hikichi, Hideaki Tomatsuri
-
Patent number: 10419493
Abstract: In various embodiments, a data map generation system is configured to: (1) enable a user to specify one or more criteria; (2) identify one or more data flows based at least in part on the one or more specified criteria; (3) generate a data map based at least in part on the identified one or more data flows; and (4) display the data map to any suitable individual (e.g., the user). In particular embodiments, the system is configured to display all data flows associated with a particular organization that are stored within the system. In other embodiments, the system is configured to display all data flows that are associated with a particular privacy campaign undertaken by the organization.
Type: Grant
Filed: December 14, 2018
Date of Patent: September 17, 2019
Assignee: OneTrust, LLC
Inventor: Kabir A. Barday