Entry Replacement Strategy Patents (Class 711/133)
-
Patent number: 10942629
Abstract: The present disclosure provides methods, computer readable media, and a system (the “platform”) for recall probability-based data storage and retrieval. The platform may comprise a hierarchical data storage architecture having at least one of the following storage tiers: a first tier, and a second tier; at least one computing agent, wherein the at least one computing agent is configured to: compute a recall probability for a data element stored in the data storage, and effect a transfer of the data element based on, at least in part, the recall probability, wherein the transfer of the data element is between at least the following: the first tier, and the second tier; and a graphical user interface (GUI) comprising at least one functional GUI element configured to: enable an end-user to specify a desired balance between at least one of the following elements: speed of data retrieval, and cost of data storage.
Type: Grant
Filed: October 16, 2020
Date of Patent: March 9, 2021
Assignee: Laitek, Inc.
Inventors: Cameron Brackett, Barry Brown, Razvan Costea-Barlutiu
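The tiering logic this abstract describes can be sketched in a few lines. Everything below is illustrative: the class name, the decay-based probability model, and the threshold (standing in for the user's speed/cost balance) are all invented, not taken from the patent.

```python
import time

class TieredStore:
    """Sketch of recall-probability-based tiering between a fast and a cold tier."""
    def __init__(self, threshold=0.5):
        self.fast, self.cold = {}, {}
        self.threshold = threshold  # stands in for the GUI's speed/cost balance

    def recall_probability(self, meta):
        # Toy model: probability grows with hit count and decays with age.
        age = time.time() - meta["last_access"]
        return meta["hits"] / (meta["hits"] + age / 3600.0)

    def rebalance(self):
        # Demote unlikely-to-be-recalled elements, promote likely ones.
        for tier, other, move in ((self.fast, self.cold, lambda p: p < self.threshold),
                                  (self.cold, self.fast, lambda p: p >= self.threshold)):
            for key in [k for k, m in tier.items() if move(self.recall_probability(m))]:
                other[key] = tier.pop(key)
```

Raising the threshold biases the store toward the cheap tier (cost), lowering it toward the fast tier (speed).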
-
Patent number: 10942860
Abstract: A computing system using a bit counter may include a host device; a cache configured to temporarily store data of the host device, and including a plurality of sets; a cache controller configured to receive a multi-bit cache address from the host device, perform computation on the cache address using a plurality of bit counters, and determine a hash function of the cache; a semiconductor device; and a memory controller configured to receive the cache address from the cache controller, and map the cache address to a semiconductor device address.
Type: Grant
Filed: June 2, 2020
Date of Patent: March 9, 2021
Assignees: SK hynix Inc., Korea University Industry Cooperation Foundation
Inventors: Seonwook Kim, Wonjun Lee, Yoonah Paik, Jaeyung Jun
-
Patent number: 10936539
Abstract: Provided are systems and methods for linking source data fields to target inputs having a different data structure. In one example, the method may include receiving a request to load a data file from a source data structure to a target data structure, identifying a plurality of target inputs of the target data structure, wherein the plurality of target inputs include a format of the target data structure, and at least one of the target inputs has a format that is different from a format of a source data structure, dynamically linking the plurality of source data fields to the plurality of target inputs based on metadata of the plurality of source data fields, and loading the data file from the source data structure to the target data structure.
Type: Grant
Filed: June 4, 2018
Date of Patent: March 2, 2021
Assignee: SAP SE
Inventor: Bertram Beyer
-
Patent number: 10929385
Abstract: Facilitating multi-level data deduplication in an elastic cloud storage environment is provided herein. A system can comprise a processor and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations. The operations can comprise performing a first deduplication on a group of data objects at a data block level of a storage device. The operations can also comprise performing a second deduplication of the group of data objects at an object level of the storage device.
Type: Grant
Filed: June 22, 2018
Date of Patent: February 23, 2021
Assignee: EMC IP Holding Company LLC
Inventors: Mikhail Danilov, Konstantin Buinov
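The two-level deduplication described here (block level first, then object level) can be sketched with content hashing. The function name, the fixed-size blocking, and the use of SHA-256 are illustrative assumptions, not the patented implementation.

```python
import hashlib

def dedup_blocks(objects, block_size=4):
    """Two-pass dedup sketch: unique blocks first, then whole-object collapse."""
    block_store, object_index, stored_objects = {}, {}, {}
    for name, data in objects.items():
        # First level: split into fixed-size blocks, keep one copy per unique block.
        refs = []
        for i in range(0, len(data), block_size):
            h = hashlib.sha256(data[i:i + block_size]).hexdigest()
            block_store.setdefault(h, data[i:i + block_size])
            refs.append(h)
        # Second level: objects with identical block lists collapse to one entry.
        obj_h = hashlib.sha256("".join(refs).encode()).hexdigest()
        object_index.setdefault(obj_h, refs)
        stored_objects[name] = obj_h
    return block_store, object_index, stored_objects
```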
-
Patent number: 10929032
Abstract: In a computer network in which a data storage array maintains data for at least one host computer, the host computer provides sequential access hints to the storage array. A monitoring program monitors a host application running on the host computer to detect generation of data that is likely to be sequentially accessed by the host application along with associated data. When the host application writes such data to a thinly provisioned logical production volume the monitoring program prompts a multipath IO driver to generate the sequential access hint. In response to the hint the storage array allocates a plurality of sequential storage spaces on a hard disk drive for the data and the associated data. The allocated storage locations on the hard disk drive are written in a spatial sequence that matches the spatial sequence in which the storage locations on the production volume are written.
Type: Grant
Filed: December 19, 2016
Date of Patent: February 23, 2021
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Nir Sela, Gabriel Benhanokh, Arieh Don
-
Patent number: 10922229
Abstract: Database objects are retrieved from a database and parsed into normalized cached data objects. The database objects are stored in the normalized cached data objects in a cache store, and tenant data requests are serviced from the normalized cached data objects. The normalized cached data objects include references to shared objects in a shared object pool that can be shared across different rows of the normalized cached data objects and across different tenant cache systems.
Type: Grant
Filed: March 11, 2019
Date of Patent: February 16, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventor: Subrata Biswas
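The shared-object-pool idea is essentially interning: equal values across cached rows point at a single instance instead of duplicate copies. A minimal sketch, with invented names and a plain dict as the pool:

```python
class SharedPool:
    """Shared-object pool: equal values across cached rows share one instance."""
    def __init__(self):
        self._pool = {}

    def intern(self, value):
        # Return the pooled instance if an equal value is already shared.
        return self._pool.setdefault(value, value)

def normalize_row(pool, row):
    """Replace each cell with its pooled (shared) instance."""
    return {col: pool.intern(val) for col, val in row.items()}
```

Two rows caching the same tenant value then reference one object, which is what lets the pool be shared across rows and across tenant cache systems.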
-
Patent number: 10922231
Abstract: Provided is a predictive read ahead system for dynamically prefetching content from different storage devices. The dynamic prefetching may include receiving requests to read a first set of data of first content from a first storage device at a first rate, and requests to read a first set of data of second content from a second storage device at a different second rate. The dynamic prefetching may include determining different performance for the first storage device than the second storage device, prioritizing an allocation of cache based on a first difference between the first rate and the second rate, and a second difference based on the different performance between the storage devices, and prefetching a first amount of the first content data from the first storage device and a different second amount of the second content data from the second storage device based on the first and second differences.
Type: Grant
Filed: October 22, 2020
Date of Patent: February 16, 2021
Assignee: Open Drives LLC
Inventors: Scot Gray, Sean Lee
-
Patent number: 10922147
Abstract: A storage system includes a plurality of storage devices, a data structure, and a storage controller that is configured to obtain a threshold value for a synchronization object associated with the data structure. The storage controller is further configured to activate a plurality of threads. Each thread is configured to determine a count value of the synchronization object corresponding to a number of entries in the data structure and determine whether the count value of the synchronization object exceeds the threshold value plus a predetermined number of entries. In response to determining that the count value of the synchronization object exceeds the threshold value plus the predetermined number of entries, the thread is configured to perform an action.
Type: Grant
Filed: July 19, 2018
Date of Patent: February 16, 2021
Assignee: EMC IP Holding Company LLC
Inventor: Vladimir Shveidel
-
Patent number: 10915461
Abstract: Embodiments of the present invention are directed to a computer-implemented method for cache eviction. The method includes detecting a first data in a shared cache and a first cache in response to a request by a first processor. The first data is determined to have a mid-level cache eviction priority. A request is detected from a second processor for a same first data as requested by the first processor. However, in this instance, the second processor has indicated that the same first data has a low-level cache eviction priority. The first data is duplicated and loaded to a second cache, however, the data has a low-level cache eviction priority at the second cache.
Type: Grant
Filed: March 5, 2019
Date of Patent: February 9, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Ekaterina M. Ambroladze, Robert J. Sonnelitter, III, Matthias Klein, Craig Walters, Kevin Lopes, Michael A. Blake, Tim Bronson, Kenneth Klapproth, Vesselina Papazova, Hieu T Huynh
-
Patent number: 10909038
Abstract: There are provided in the present disclosure a cache management method for a computing device, a cache and a storage medium, the method including: storing, according to a first request sent by a processing unit of the computing device, data corresponding to the first request in a first cache line of a cache set, and setting age of the first cache line to a first initial age value according to a priority of the first request.
Type: Grant
Filed: December 30, 2018
Date of Patent: February 2, 2021
Assignee: Chengdu Haiguang Integrated Circuit Design Co. Ltd.
Inventors: Chunhui Zhang, Jun Cao, Linli Jia, Bharath Iyer, Hao Huang
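Setting a priority-dependent initial age is the core of age-based replacement schemes in the RRIP family: high-priority lines start "young" and survive longer, low-priority lines start near the eviction age. A software sketch under those assumptions (the class, way count, and MAX_AGE are invented):

```python
class AgeCache:
    """Set-associative cache sketch where insertion age depends on request priority."""
    MAX_AGE = 3

    def __init__(self, ways=4):
        self.ways = ways
        self.lines = {}  # tag -> age

    def insert(self, tag, high_priority):
        if len(self.lines) >= self.ways:
            # Age all lines until one reaches MAX_AGE, then evict the oldest.
            while not any(a >= self.MAX_AGE for a in self.lines.values()):
                for t in self.lines:
                    self.lines[t] += 1
            victim = max(self.lines, key=self.lines.get)
            del self.lines[victim]
        # Priority picks the initial age: high-priority lines start young.
        self.lines[tag] = 0 if high_priority else self.MAX_AGE - 1
```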
-
Patent number: 10909071
Abstract: According to one set of embodiments, a computer system can receive a request or command to delete a snapshot from among a plurality of snapshots of a dataset, where the plurality of snapshots are stored in cloud/object storage. In response, the computer system can add the snapshot to a batch of pending snapshots to be deleted and can determine whether the size of the batch has reached a threshold. If the size of the batch has not reached the threshold, the computer system can return a response to an originator of the request or command indicating that the snapshot has been deleted, without actually deleting the snapshot from the cloud/object storage.
Type: Grant
Filed: August 23, 2018
Date of Patent: February 2, 2021
Assignee: VMWARE, INC.
Inventors: Pooja Sarda, Satish Kumar Kashi Visvanathan
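The batching pattern here (acknowledge immediately, physically delete only when the batch fills) is straightforward to sketch. The class, the callback, and the threshold value are illustrative assumptions:

```python
class SnapshotDeleter:
    """Batches snapshot deletions; acknowledges at once, flushes at a threshold."""
    def __init__(self, backend_delete, threshold=3):
        self.pending, self.threshold = [], threshold
        self.backend_delete = backend_delete  # callable hitting object storage

    def delete(self, snapshot_id):
        self.pending.append(snapshot_id)
        if len(self.pending) >= self.threshold:
            batch, self.pending = self.pending, []
            self.backend_delete(batch)  # one round trip for the whole batch
        return "deleted"                # acknowledged even while still pending
```

The win is amortization: N cloud-storage round trips become N / threshold, at the cost of snapshots lingering briefly after being reported deleted.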
-
Patent number: 10904353
Abstract: A content serving data processing system is configured for trending topic cache eviction management. The system includes a computing system communicatively coupled to different sources of content objects over a computer communications network. The system also includes a cache storing different cached content objects retrieved from the different content sources. The system yet further includes a cache eviction module. The module includes program code enabled to manage cache eviction of the content objects in the cache by marking selected ones of the content objects as invalid in accordance with a specified cache eviction strategy, detect a trending topic amongst the retrieved content objects, and override the marking of one of the selected ones of the content objects as invalid and keeping the one of the selected ones of the content objects in the cache when the one of the selected ones of the content objects relates to the trending topic.
Type: Grant
Filed: June 17, 2019
Date of Patent: January 26, 2021
Assignee: International Business Machines Corporation
Inventors: Al Chakra, Patrick S. O'Donnell, Kevin L. Ortega
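Stripped of the legalese, the mechanism is: evict invalidated entries unless they relate to a trending topic, in which case the invalidation is overridden. A minimal sketch, with invented names and a dict-based cache:

```python
def evict(cache, invalid_keys, trending_topics):
    """Evict invalidated entries unless their topic is currently trending."""
    for key in list(invalid_keys):
        if cache.get(key, {}).get("topic") in trending_topics:
            continue  # override: keep trending content cached despite invalidation
        cache.pop(key, None)
```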
-
Patent number: 10901908
Abstract: The present disclosure relates to storing data in a computer system. The computer system comprises a main memory coupled to a processor and a cache hierarchy. The main memory comprises a predefined bit pattern replacing existing data of the main memory. Aspects include storing the predefined bit pattern into a reference storage of the computer system. At least one bit in a cache directory entry of a first cache line of the cache hierarchy can be set. Upon receiving a request to read the content of the first cache line, the request can be redirected to the predefined bit pattern in the reference storage based on the value of the set bit of the first cache line.
Type: Grant
Filed: January 16, 2019
Date of Patent: January 26, 2021
Assignee: International Business Machines Corporation
Inventors: Wolfgang Gellerich, Peter Altevogt, Martin Bernhard Schmidt, Martin Schwidefsky
-
Patent number: 10884949
Abstract: Embodiments of the invention are directed to a computer-implemented method of memory acceleration. The computer-implemented method includes mapping, by a processor, an array of logic blocks in system memory to an array of logic blocks stored in level 1 (L1) on an accelerator chip, wherein each logic block stores a respective look up table for a function, wherein each function row of a respective look up table stores an output function value and a combination of inputs to the function. The processor determines that a number of instances of request for the output function value from a logic block is less than a first threshold. The processor evicts the function row to a higher level memory.
Type: Grant
Filed: April 5, 2019
Date of Patent: January 5, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Bulent Abali, Sameh Asaad
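The eviction rule is simple once isolated: a look-up-table row whose request count falls below a threshold is demoted from L1 to a higher (slower) memory level. A sketch with invented names, modeling each level as a dict keyed by input combination:

```python
def evict_cold_rows(l1_rows, higher_level, threshold):
    """Demote LUT rows with too few requests from L1 to higher-level memory."""
    for inputs in [k for k, row in l1_rows.items() if row["requests"] < threshold]:
        higher_level[inputs] = l1_rows.pop(inputs)
```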
-
Patent number: 10884751
Abstract: Systems, apparatuses, and methods for virtualizing a micro-operation cache are disclosed. A processor includes at least a micro-operation cache, a conventional cache subsystem, a decode unit, and control logic. The decode unit decodes instructions into micro-operations which are then stored in the micro-operation cache. The micro-operation cache has limited capacity for storing micro-operations. When new micro-operations are decoded from pending instructions, existing micro-operations are evicted from the micro-operation cache to make room for the new micro-operations. Rather than being discarded, micro-operations evicted from the micro-operation cache are stored in the conventional cache subsystem. This prevents the original instruction from having to be decoded again on subsequent executions.
Type: Grant
Filed: July 13, 2018
Date of Patent: January 5, 2021
Assignee: Advanced Micro Devices, Inc.
Inventors: John Kalamatianos, Jagadish B. Kotra
-
Patent number: 10877690
Abstract: A memory system includes a memory device including a plurality of pages in which data are stored and a plurality of memory blocks which include the pages; and a controller suitable for transmitting capacity information of the memory device to a host in response to a user request received from the host, receiving a boost command corresponding to the capacity information, from the host, and triggering a background operation corresponding to the boost command.
Type: Grant
Filed: August 22, 2017
Date of Patent: December 29, 2020
Assignee: SK hynix Inc.
Inventor: Eu-Joon Byun
-
Patent number: 10877654
Abstract: User interfaces are provided for improved data optimization. A model user interface can be used to generate models based on a historical data file based on modeling details and filters specified by a user. The user can save the models and apply the models to optimize a data file. The user can specify optimization details and see visualizations of the results.
Type: Grant
Filed: September 27, 2018
Date of Patent: December 29, 2020
Assignee: Palantir Technologies Inc.
Inventors: Robert Speare, Dayang Shi, Spencer Lake
-
Patent number: 10853237
Abstract: A method, computer program product, and computer system for receiving, at a first computing device, a first data chunk sent from a second computing device. It may be determined that the first data chunk includes a first type of data. The first data chunk may be stored to a cache operatively coupled to the first computing device based upon, at least in part, determining that the first data chunk includes the first type of data, wherein the cache may include a first storage device type. An acknowledgement of a successful write of the first data chunk to the second computing device may be sent based upon, at least in part, a successful storing of the first data chunk to the cache operatively coupled to the first computing device.
Type: Grant
Filed: December 14, 2018
Date of Patent: December 1, 2020
Assignee: EMC IP Holding Company, LLC
Inventors: Mikhail Danilov, Andrey Fomin, Alexander Rakulenko, Mikhail Malygin, Chen Wang
-
Patent number: 10853258
Abstract: A request for retrieving a cached data object from a data object cache used to cache data objects retrieved from one or more primary data sources is received from a data object requester. Responsive to determining that the cached data object in the data object cache is expired, it is determined whether the cached data object in the data object cache is still within an extended time period. If the cached data object in the data object cache is still within an extended time period, it is determined whether the cached data object is free of a cache invalidity state change caused by a data change operation. If the cached data object is free of a cache invalidity state change, the cached data object is returned to the data object requester.
Type: Grant
Filed: January 31, 2020
Date of Patent: December 1, 2020
Assignee: salesforce.com, inc.
Inventors: Sameer Khan, Francis James Leahy, III
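This is a grace-period scheme: an expired entry may still be served if it is within an extended window and nothing has invalidated it since it was cached. A sketch under those assumptions (field names and the 30-second grace window are invented):

```python
import time

def get_cached(entry, now=None, grace=30.0):
    """Serve an expired entry if it is within a grace window and not invalidated."""
    now = time.time() if now is None else now
    if now <= entry["expires_at"]:
        return entry["value"]                       # fresh: normal hit
    within_grace = now <= entry["expires_at"] + grace
    if within_grace and not entry.get("invalidated", False):
        return entry["value"]                       # expired but provably unchanged
    return None                                     # caller must refetch from a primary source
```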
-
Patent number: 10846241
Abstract: The subject matter described herein analyzes an item that is a candidate for admission into or eviction from a cache to determine characteristics of the item. The characteristics include at least one of item content and item source. A score associated with the item is calculated based on the determined characteristics. The item is admitted into, or evicted from, the cache based on the calculated score.
Type: Grant
Filed: August 29, 2018
Date of Patent: November 24, 2020
Assignee: VMware, Inc.
Inventor: Oleg Zaydman
-
Patent number: 10849122
Abstract: The present disclosure discloses cache-based data transmission methods and apparatuses. The method is implemented as follows. An apparatus where a caching node is located reports a caching capability to a network side, and the caching node is configured to cache data. The network side sends a cache indicating parameter to the apparatus where the caching node is located, and maintains a data list. The cache indicating parameter is configured to control the caching node to cache the data which has the property of high repetition probability and/or high cache utilization, and the data list is a list of the data cached in the caching node. When the caching node has cached data requested by a UE, the UE obtains the requested data from the caching node.
Type: Grant
Filed: May 13, 2019
Date of Patent: November 24, 2020
Assignee: Samsung Electronics Co., Ltd.
Inventors: Hong Wang, Lixiang Xu, Chengjun Sun, Bin Yu
-
Patent number: 10846094
Abstract: Embodiments of the present invention relate to a method and system for managing data access in a storage system. A method for managing data access in a storage system, the method comprising: obtaining state information about available resources in a storage control node in the storage system; determining, based on the state information, a credit score descriptive of processing capacity of the storage control node for data access; and publishing the credit score so as to notify a host of the processing capacity of the storage control node for the data access.
Type: Grant
Filed: September 22, 2017
Date of Patent: November 24, 2020
Assignee: EMC IP Holding Company LLC
Inventors: Lifeng Yang, Xinlei Xu, Jian Gao, Ruiyong Jia, Yousheng Liu
-
Patent number: 10846227
Abstract: Techniques are disclosed relating to controlling cache size and priority of data stored in the cache using machine learning techniques. A software cache may store data for a plurality of different user accounts using one or more hardware storage elements. In some embodiments, a machine learning module generates, based on access patterns to the software cache, a control value that specifies a size of the cache and generates time-to-live values for entries in the cache. In some embodiments, the system evicts data based on the time-to-live values. The disclosed techniques may reduce cache access times and/or improve cache hit rate.
Type: Grant
Filed: December 21, 2018
Date of Patent: November 24, 2020
Assignee: PayPal, Inc.
Inventor: Shanmugasundaram Alagumuthu
-
Patent number: 10838805
Abstract: A buffer memory storing first parity data for a first stream of data associated with data that is stored at a storage system may be identified. A request to store second parity data for a second stream of data associated with data that is to be stored at the storage system may be received and a characteristic of the first stream of data may be identified. A determination may be made as to whether to replace the first parity data for the first stream with the second parity data for the second stream based on the characteristic of the first stream of data. In response to determining to replace the first parity data with the second parity data based on the characteristic, the second parity data for the second stream of data may be generated and stored at the buffer memory to replace the first parity data.
Type: Grant
Filed: February 23, 2018
Date of Patent: November 17, 2020
Assignee: MICRON TECHNOLOGY, INC.
Inventors: Shirish Bahirat, Aditi P. Kulkarni
-
Patent number: 10838940
Abstract: A data object is received for storage in a key-value store. A partitioning token prefix is generated for the data object. A logical key for the data object is determined. A partitioning key is generated based at least in part on combining the partitioning token prefix and the logical key. Data associated with the data object is stored in the key-value store based on the partitioning key.
Type: Grant
Filed: August 3, 2017
Date of Patent: November 17, 2020
Assignee: MuleSoft, Inc.
Inventors: Jiang Wu, Aditya Vailaya, Nilesh Khandelwal
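One common way to realize a partitioning token prefix is to hash the logical key into a small token space and prepend the token, so that writes spread evenly across partitions while the key stays deterministic. The hash choice (MD5), token count, and separator below are illustrative assumptions:

```python
import hashlib

def partitioning_key(logical_key, num_partitions=16):
    """Prefix the logical key with a hash-derived token to spread load across partitions."""
    token = int(hashlib.md5(logical_key.encode()).hexdigest(), 16) % num_partitions
    return f"{token:02d}#{logical_key}"
```

Because the token is derived from the key itself, any client can recompute the full partitioning key for reads without a lookup table.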
-
Patent number: 10824359
Abstract: A technique for storing data in a data storage system detects that a read is being performed pursuant to a data copy request. In response, the data storage system stores a digest of the data being read in an entry of a digest cache. Later, when a write pursuant to the same copy request arrives, the storage system obtains the entry from the digest cache and completes the write request without creating a duplicate copy of the data.
Type: Grant
Filed: October 31, 2017
Date of Patent: November 3, 2020
Assignee: EMC IP Holding Company LLC
Inventors: Philippe Armangau, John Gillono, Maher Kachmar, Christopher A. Seibel
-
Patent number: 10817296
Abstract: In an example, an apparatus comprises a plurality of execution units, and logic, at least partially including hardware logic, to assemble a general register file (GRF) message and hold the GRF message in storage in a data port until all data for the GRF message is received. Other embodiments are also disclosed and claimed.
Type: Grant
Filed: April 21, 2017
Date of Patent: October 27, 2020
Assignee: INTEL CORPORATION
Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, Ramkumar Ravikumar, Kiran C. Veernapu, Prasoonkumar Surti, Vasanth Ranganathan
-
Patent number: 10817207
Abstract: A computer-implemented method for managing a first storage library and a second storage library, according to one embodiment, includes associating a first physical tape and a second physical tape with a logical tape. The associating includes writing a first identifier to an index of the logical tape. The first identifier represents the first physical tape and the first storage library. The associating further includes writing a second identifier to the index of the logical tape. The second identifier represents the second physical tape and the second storage library. The computer-implemented method further includes storing the index of the logical tape in memory, and displaying the logical tape by reading the index from memory as a file system.
Type: Grant
Filed: August 21, 2018
Date of Patent: October 27, 2020
Assignee: International Business Machines Corporation
Inventors: Hiroshi Itagaki, Tohru Hasegawa, Shinsuke Mitsuma, Tsuyoshi Miyamura, Noriko Yamamoto, Sosuke Matsui
-
Patent number: 10802975
Abstract: A dataflow execution environment is provided with dynamic placement of cache operations. An exemplary method comprises: obtaining a first cache placement plan for a dataflow comprised of multiple operations; executing operations of the dataflow and updating a number of references to the executed operations to reflect remaining executions of the executed operations; determining a current cache gain by updating an estimated reduction in the total execution cost for the dataflow of the first cache placement plan; determining an alternative cache placement plan for the dataflow following the execution; and implementing the alternative cache placement plan based on a predefined threshold criteria. A cost model is optionally updated for the executed operations using an actual execution time of the executed operations. A cached dataset can be removed from memory based on the number of references to the operations that generated the cached datasets.
Type: Grant
Filed: July 20, 2018
Date of Patent: October 13, 2020
Assignee: EMC IP Holding Company LLC
Inventors: Vinícius Michel Gottin, Fábio André Machado Porto, Yania Molina Souto
-
Patent number: 10795772
Abstract: A memory system includes a volatile memory, a nonvolatile memory, and a controller. The controller is configured to execute a non-volatilization process to store data in the volatile memory into the nonvolatile memory in response to an initiate request received by the controller if no cancellation request is received by the controller during a cancelable period that begins upon receipt of the initiate request by the controller, and to transmit a completion notification when the non-volatilization process has completed.
Type: Grant
Filed: May 29, 2018
Date of Patent: October 6, 2020
Assignee: TOSHIBA MEMORY CORPORATION
Inventors: Hiroyasu Nakatsuka, Mikiya Kurosu, Yasuo Kudo
-
Patent number: 10783076
Abstract: Methods, systems, and computer-readable and executable medium embodiments for revising cache expiration are described herein. One method for revising cache expiration includes tracking attributes of a number of queries of a database; identifying a storage database is outside a database threshold in response to a write operation against the database and based on the tracked attributes; and revising a cache expiration date for at least one query of the number of queries to bring the storage database to within the database threshold.
Type: Grant
Filed: January 22, 2019
Date of Patent: September 22, 2020
Assignee: United Services Automobile Association (USAA)
Inventors: Noah McConnell, Kevin Paterson
-
Patent number: 10769129
Abstract: Techniques for tracking function usage in an enterprise system are provided. The techniques include executing a set of processes in one or more applications on one or more computer systems. Next, a set of threads in each process is used to track, in a hash table stored in memory on a computer system, calls to a set of functions by the process. A thread in the process is then used to update a data store containing usage data for the process with the tracked calls in the hash table.
Type: Grant
Filed: February 28, 2018
Date of Patent: September 8, 2020
Assignee: Oracle International Corporation
Inventors: Pradip Kumar Pandey, Ira Foster Bindley, John David Holder, Brett Weston McGarity, Jason Michael Rice
-
Patent number: 10768821
Abstract: There are provided a memory system for processing data and a method for operating the memory system. A memory system includes: a memory device including a plurality of memory blocks for storing data; and a controller for creating a SPOT table including a plurality of SPOT entries according to a logical block address (LBA) of the data and managing the SPOT table, using a least recently used (LRU) algorithm.
Type: Grant
Filed: April 4, 2018
Date of Patent: September 8, 2020
Assignee: SK hynix Inc.
Inventor: Sung Kwan Hong
-
Patent number: 10762000
Abstract: A method of choosing a cache line of a plurality of cache lines of data for eviction from a frontend memory, the method including assigning a baseline replacement score to each way of a plurality of ways of a cache, the ways respectively storing the cache lines, assigning a validity score to each way based on a degree of validity of the cache line stored in each way, assigning an eviction decision score to each way based on a function of the baseline replacement score for the way and the validity score for the way, and choosing a cache line of the way having a highest eviction decision score as the cache line for eviction.
Type: Grant
Filed: July 27, 2017
Date of Patent: September 1, 2020
Assignee: Samsung Electronics Co., Ltd.
Inventors: Mu-Tien Chang, Heehyun Nam, Youngsik Kim, Youngjin Cho, Dimin Niu, Hongzhong Zheng
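The abstract leaves the combining function abstract; one plausible instantiation (an assumption, not the patented formula) is an additive score in which low validity raises eviction priority, since a mostly invalid line is cheap to throw away:

```python
def choose_victim(ways):
    """Pick the way with the highest eviction score.

    Illustrative combining function: baseline replacement score plus (1 - validity),
    so lines holding mostly invalid data become preferred eviction victims.
    """
    def eviction_score(way):
        return way["baseline"] + (1.0 - way["validity"])
    return max(range(len(ways)), key=lambda i: eviction_score(ways[i]))
```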
-
Patent number: 10757360
Abstract: Methods and apparatus are provided for automatically transcoding media files. An exemplary method comprises obtaining an input media file having an input file format and encoded with a codec of a first type; automatically determining output media file formats for transcoding the input media file based on statistics of previously transcoded files and statistics of trending media formats for previously downloaded files; and transcoding the input media file into transcoded output media files using a codec of a second type to obtain the determined output media file formats. The output media file formats can be automatically determined using a weighting scheme. Transcoding algorithms are optionally automatically selected based on transcoding algorithms previously used to transcode proximally similar files as the input media file.
Type: Grant
Filed: March 24, 2016
Date of Patent: August 25, 2020
Assignee: EMC IP Holding Company LLC
Inventor: Karin Breitman
-
Patent number: 10747596
Abstract: Provided are a computer program product, system, and method for determining when to send a message to a computing node to process items using a machine learning module. A send message threshold indicates a send message parameter value for a send message parameter indicating when to send a message to the computing node with at least one requested item to process. Information related to sending of messages to the computing node to process requested items is provided to a machine learning module to produce a new send message parameter value for the send message parameter indicating when to send the message, which is set to the send message parameter value. A message is sent to the computing node to process at least one item in response to the current value satisfying the condition with respect to the send message parameter value.
Type: Grant
Filed: July 31, 2018
Date of Patent: August 18, 2020
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Lokesh M. Gupta, Kevin J. Ash, Matthew G. Borlick, Kyler A. Anderson
-
Patent number: 10740016
Abstract: A system, method, and computer program product are provided for allocating blocks of memory in a virtual storage device based on access frequency. The method includes the steps of tracking access frequency for a plurality of blocks of memory in a virtual storage device utilizing a heat map and reallocating the plurality of blocks of memory in the virtual storage device among a plurality of blocks of memory in two or more real storage devices based on the heat map. The heat map includes at least one data structure that maps block identifiers corresponding to the plurality of blocks of memory in the virtual storage device to heat values that represent the access frequency of a corresponding block of memory.
Type: Grant
Filed: November 11, 2016
Date of Patent: August 11, 2020
Inventors: Philip Andrew White, Hank T. Hsieh, Scott Loughmiller, Asad Ahmad Saeed, Sumner Augustine St. Clair
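The heat map described here is just a mapping from block identifier to access frequency; reallocation then ranks blocks by heat and places the hottest on the fastest real device. A minimal sketch with two device classes (names and the capacity model are invented):

```python
def reallocate(heat_map, fast_capacity):
    """Place the hottest virtual blocks on the fast device, the rest on the slow one.

    heat_map: dict mapping block id -> heat value (access frequency).
    fast_capacity: how many blocks the fast real device can hold.
    """
    ranked = sorted(heat_map, key=heat_map.get, reverse=True)
    return {blk: ("fast" if i < fast_capacity else "slow")
            for i, blk in enumerate(ranked)}
```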
-
Patent number: 10740225
Abstract: A radio communication processor receives first received data including first write data, a first address within a first area of a nonvolatile memory, and error detection information or second received data including second write data whose data amount is larger than a data amount of the first write data and a second address within a second area of the nonvolatile memory. If the radio communication processor receives the first received data, then a controller stores the first write data in a volatile buffer. If there is no error in the first write data, then the controller reads out the first write data from the volatile buffer and stores the first write data in the first area. If the radio communication processor receives the second received data, then the controller stores the second write data in the second area without storing the second write data in the volatile buffer.
Type: Grant
Filed: July 18, 2018
Date of Patent: August 11, 2020
Assignee: FUJITSU SEMICONDUCTOR LIMITED
Inventor: Takahiko Sato
-
Patent number: 10740230
Abstract: A computer-implemented method and a computer processing system are provided for increasing memory density in a memory using heap contraction. The method includes dividing the heap into a plurality of work regions including a last region and other regions such that the last region is larger in size than the other regions. The method further includes calculating a size of the heap contraction. The method also includes forming a pair of the last region and one of the other regions that has a largest free portion. The method additionally includes executing intra-table compaction and inter-table compaction on the heap. The method further includes contracting the last region by subtracting a prescribed space from the last region.
Type: Grant
Filed: October 20, 2016
Date of Patent: August 11, 2020
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michihiro Horie, Hiroshi H. Horii, Kazunori Ogata, Tamiya Onodera
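The region-pairing and contraction steps can be sketched as below; the region names, size units, and return shape are hypothetical, and the compaction passes themselves are omitted:

```python
def plan_contraction(region_sizes, region_free, last_region, prescribed_space):
    """Pair the last region with the other region that has the largest
    free portion, then shrink the last region by the prescribed space."""
    others = [r for r in region_sizes if r != last_region]
    partner = max(others, key=lambda r: region_free[r])   # largest free portion
    pair = (last_region, partner)
    new_last_size = region_sizes[last_region] - prescribed_space
    return pair, new_last_size

sizes = {"r0": 64, "r1": 64, "last": 128}   # last region is larger
free  = {"r0": 10, "r1": 40, "last": 30}
pair, new_size = plan_contraction(sizes, free, "last", prescribed_space=16)
print(pair, new_size)   # ('last', 'r1') 112
```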
-
Patent number: 10733108
Abstract: A system for computer memory management that implements a memory pool table, the memory pool table including entries that describe a plurality of memory pools, each memory pool representing a group of memory pages related by common attributes; a per-page tracking table, each entry in the per-page tracking table used to relate a memory page with a memory pool of the memory pool table; and processing circuitry to: scan each entry in the per-page tracking table and, for each entry: determine an amount of memory released if the memory page related with the entry is swapped; aggregate the amount of memory for the respective memory pool related with the memory page related with the entry in the per-page tracking table, to produce a per-pool memory aggregate; and output the per-pool memory aggregate for the memory pools related with the memory pages in the per-page tracking table.
Type: Grant
Filed: May 15, 2018
Date of Patent: August 4, 2020
Assignee: Intel Corporation
Inventors: Vijay Bahirji, Amin Firoozshahian, Mahesh Madhav, Toby Opferman, Omid Azizi
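The aggregation pass reduces to a grouped sum over the per-page tracking table. A minimal sketch, assuming each entry carries its pool id and the bytes released if its page were swapped (the pool names and 4 KiB page size are illustrative):

```python
from collections import defaultdict

PAGE_SIZE = 4096

# per-page tracking table: (pool_id, releasable_bytes) per tracked page
tracking_table = [
    ("kernel", PAGE_SIZE),
    ("user",   PAGE_SIZE),
    ("user",   PAGE_SIZE),
]

def aggregate_per_pool(table):
    """Scan every entry and sum releasable memory per pool."""
    totals = defaultdict(int)
    for pool_id, releasable in table:
        totals[pool_id] += releasable
    return dict(totals)

print(aggregate_per_pool(tracking_table))
# {'kernel': 4096, 'user': 8192}
```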
-
Patent number: 10725782
Abstract: Providing variable interpretation of usefulness indicators for memory tables in processor-based systems is disclosed. In one aspect, a memory system comprises a memory table providing multiple memory table entries, each including a usefulness indicator. A memory controller of the memory system comprises a global polarity indicator representing how the usefulness indicator for each memory table entry is interpreted and updated by the memory controller. If the global polarity indicator is set, the memory controller interprets a value of each usefulness indicator as directly corresponding to the usefulness of the corresponding memory table entry. Conversely, if the global polarity indicator is not set, the polarity is reversed such that the memory controller interprets the usefulness indicator value as inversely corresponding to the usefulness of the corresponding memory table entry.
Type: Grant
Filed: September 12, 2017
Date of Patent: July 28, 2020
Assignee: Qualcomm Incorporated
Inventors: Anil Krishna, Yongseok Yi, Eric Rotenberg, Vignyan Reddy Kothinti Naresh, Gregory Michael Wright
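The two interpretations amount to reading the stored counter either directly or mirrored around its maximum. A sketch, assuming a 3-bit saturating counter (the width is an assumption, not stated in the abstract):

```python
MAX_USEFULNESS = 7   # assumed 3-bit saturating counter

def effective_usefulness(raw_value, global_polarity_set):
    """Interpret a stored usefulness indicator under the global polarity."""
    if global_polarity_set:
        return raw_value                  # direct interpretation
    return MAX_USEFULNESS - raw_value     # inverted interpretation

print(effective_usefulness(5, True))    # 5
print(effective_usefulness(5, False))   # 2
```

Flipping the global polarity indicator reinterprets every entry at once, with no per-entry updates.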
-
Patent number: 10725927
Abstract: Aspects of the present disclosure describe a cache system that is co-managed by software and hardware that obviates use of a cache coherence protocol. In some embodiments, a cache would have the following two hardware interfaces that are driven by software: (1) invalidate or flush its content to the lower level memory hierarchy; (2) specify memory regions that can be cached. Software would be responsible for specifying what regions can be cacheable, and may flexibly change memory from cacheable and not, depending on the stage of the software program. In some embodiments, invalidation can be done in one cycle. Multiple valid bits can be kept for each tag in the memory. A vector “valid bit vec” comprising a plurality of bits can be used. Only one of two bits may be used as the valid bit to indicate that this region of memory is holding valid information for use by the software.
Type: Grant
Filed: December 4, 2018
Date of Patent: July 28, 2020
Assignee: Beijing Panyi Technology Co., Ltd.
Inventor: Xingzhi Wen
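One way to read the valid-bit-vector idea: a global selector chooses which bit of each tag's vector is "the" valid bit, so flipping the selector invalidates every line at once. This sketch is an interpretation of the abstract, not the patent's circuit; the lazy re-clearing loop models what hardware would do in parallel:

```python
class Tag:
    def __init__(self):
        self.valid_vec = [False, False]   # "valid bit vec": two bits per tag

class Cache:
    def __init__(self, n_tags):
        self.selector = 0                 # which bit is the active valid bit
        self.tags = [Tag() for _ in range(n_tags)]

    def mark_valid(self, i):
        self.tags[i].valid_vec[self.selector] = True

    def is_valid(self, i):
        return self.tags[i].valid_vec[self.selector]

    def invalidate_all(self):
        # Flip the selector: every tag's active valid bit is now the
        # other, cleared bit, so the whole cache invalidates at once.
        self.selector ^= 1
        for t in self.tags:               # hardware would clear in parallel
            t.valid_vec[self.selector] = False

c = Cache(4)
c.mark_valid(0)
print(c.is_valid(0))   # True
c.invalidate_all()
print(c.is_valid(0))   # False
```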
-
Patent number: 10725934
Abstract: A first data storage holds cache lines, an accelerator has a second data storage that selectively holds accelerator data and cache lines evicted from the first data storage, a tag directory holds tags for cache lines stored in the first and second data storages, and a mode indicator indicates whether the second data storage is operating in a first or second mode in which it respectively holds cache lines evicted from the first data storage or accelerator data. In response to a request to evict a cache line from the first data storage, in the first mode the control logic writes the cache line to the second data storage and updates a tag in the tag directory to indicate the cache line is present in the second data storage, and in the second mode the control logic instead writes the cache line to a system memory.
Type: Grant
Filed: May 16, 2018
Date of Patent: July 28, 2020
Assignee: SHANGHAI ZHAOXIN SEMICONDUCTOR CO., LTD.
Inventors: G. Glenn Henry, Terry Parks, Douglas R. Reed
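The mode-dependent eviction path can be sketched with dictionaries standing in for the storages and tag directory; the names and the tag-directory update on the system-memory path are illustrative assumptions:

```python
system_memory  = {}   # address -> cache line data
accel_storage  = {}   # the accelerator's second data storage
tag_directory  = {}   # address -> where the line currently resides

def evict(address, line, victim_cache_mode):
    """Handle an eviction from the first data storage.

    victim_cache_mode=True  -> first mode: second storage holds evictions.
    victim_cache_mode=False -> second mode: second storage holds
                               accelerator data, so the line goes to
                               system memory instead.
    """
    if victim_cache_mode:
        accel_storage[address] = line
        tag_directory[address] = "second_storage"
    else:
        system_memory[address] = line
        tag_directory.pop(address, None)   # no longer cached anywhere

evict(0x1000, b"data", victim_cache_mode=True)
print(tag_directory[0x1000])   # second_storage
```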
-
Patent number: 10725923
Abstract: An apparatus comprises a cache memory to store data as a plurality of cache lines each having a data size and an associated physical address in a memory, access circuitry to access the data stored in the cache memory, detection circuitry to detect, for at least a set of sub-units of the cache lines stored in the cache memory, whether a number of accesses by the access circuitry to a given sub-unit exceeds a predetermined threshold, in which each sub-unit has a data size that is smaller than the data size of a cache line, prediction circuitry to generate a prediction, for a given region of a plurality of regions of physical address space, of whether data stored in that region comprises streaming data in which each of one or more portions of the given cache line is predicted to be subject to a maximum of one read operation or multiple access data in which each of the one or more portions of the given cache line is predicted to be subject to more than one read operation, the prediction circuitry being configured
Type: Grant
Filed: February 5, 2019
Date of Patent: July 28, 2020
Assignee: Arm Limited
Inventors: Lei Ma, Alexander Alfred Hornung, Ian Michael Caulfield
-
Patent number: 10708213
Abstract: The present invention provides an interface for controlling the transfer of electronic transaction messages between a first financial institution and a plurality of switches distributed amongst a plurality of switch sites, wherein the first financial institution and the plurality of switches are connected via a data communications network, the interface comprising communication circuitry, processing circuitry and memory storing the operational status of each switch site, wherein the communication circuitry is operable to transmit a test message to one of the switch sites over the data network if no transaction message has been received from that switch site for a predetermined time, and in response to the test message, the communication circuitry is operable to receive an echo of the test message from the switch site; wherein the processing circuitry is operable such that if the echo is received within a defined time then the operational status of the switch site is set as operational and if the echo is not r
Type: Grant
Filed: October 14, 2015
Date of Patent: July 7, 2020
Assignee: IPCO 2012 LIMITED
Inventors: Steven George Garlick, Neil Antony Masters
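The echo-based status check reduces to a timeout comparison. A sketch under assumed names (the abstract is truncated before stating the non-operational branch, so the `False` case here is an inference from the pattern):

```python
def site_operational(sent_at, echo_at, defined_time):
    """A site is operational only if the echo of the test message
    came back within the defined time; no echo means not operational."""
    return echo_at is not None and (echo_at - sent_at) <= defined_time

print(site_operational(100.0, 100.4, defined_time=0.5))   # True
print(site_operational(100.0, 101.0, defined_time=0.5))   # False (too late)
print(site_operational(100.0, None,  defined_time=0.5))   # False (no echo)
```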
-
Patent number: 10705994
Abstract: A unified cache subsystem includes a data memory configured as both a shared memory and a local cache memory. The unified cache subsystem processes different types of memory transactions using different data pathways. To process memory transactions that target shared memory, the unified cache subsystem includes a direct pathway to the data memory. To process memory transactions that do not target shared memory, the unified cache subsystem includes a tag processing pipeline configured to identify cache hits and cache misses. When the tag processing pipeline identifies a cache hit for a given memory transaction, the transaction is rerouted to the direct pathway to data memory. When the tag processing pipeline identifies a cache miss for a given memory transaction, the transaction is pushed into a first-in first-out (FIFO) until miss data is returned from external memory. The tag processing pipeline is also configured to process texture-oriented memory transactions.
Type: Grant
Filed: May 4, 2017
Date of Patent: July 7, 2020
Assignee: NVIDIA Corporation
Inventors: Xiaogang Qiu, Ronny Krashinsky, Steven Heinrich, Shirish Gadre, John Edmondson, Jack Choquette, Mark Gebhart, Ramesh Jandhyala, Poornachandra Rao, Omkar Paranjape, Michael Siu
-
Patent number: 10705964
Abstract: In one embodiment, a processor includes a control logic to determine whether to enable an incoming data block associated with a first priority to displace, in a cache memory coupled to the processor, a candidate victim data block associated with a second priority and stored in the cache memory, based at least in part on the first and second priorities, a first access history associated with the incoming data block and a second access history associated with the candidate victim data block. Other embodiments are described and claimed.
Type: Grant
Filed: April 28, 2015
Date of Patent: July 7, 2020
Assignee: Intel Corporation
Inventors: Kshitij A. Doshi, Christopher J. Hughes
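A minimal sketch of a displacement decision that combines priorities with access histories; the tie-breaking rule (priority dominates, access counts decide among equals) is an assumed policy, since the abstract does not specify one:

```python
def allow_displacement(incoming_priority, victim_priority,
                       incoming_accesses, victim_accesses):
    """Decide whether the incoming block may displace the candidate
    victim, based on both priorities and both access histories."""
    if incoming_priority != victim_priority:
        return incoming_priority > victim_priority   # priority dominates
    return incoming_accesses >= victim_accesses      # else compare histories

print(allow_displacement(2, 1, 0, 9))   # True: higher priority wins
print(allow_displacement(1, 1, 3, 5))   # False: victim accessed more often
```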
-
Patent number: 10698571
Abstract: Technologies for narrowing the choices for programs that each comply with example behaviors provided by a user in programming by example. Even if the user provides insufficient behavior examples to precisely identify a program that should be used, the system still uses program behavior features (along with potentially structure features) of the program in order to identify suitability of each program that would comply with the specific set of behavior examples. A particular program is then selected and enabled for that user so that the particular program performs behaviors exemplified by the one or more program behavior examples. In the case where user assistance is used in selection of the program, the suitability for each possible program may be used to decide which of multiple possible programs should be made selectable by the user. Those higher suitability programs might be visualized to the user for selection.
Type: Grant
Filed: December 29, 2016
Date of Patent: June 30, 2020
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Sumit Gulwani, Kevin Michael Ellis
-
Patent number: 10691348
Abstract: A system comprises a processor, a memory fabric, and a fabric bridge coupled to the memory fabric and the processor. The fabric bridge may receive, from the processor, a first eviction request comprising first eviction data, transmit, to the processor, a message indicating the fabric bridge has accepted the first eviction request, transmit a first write comprising the first eviction data to the fabric, receive, from the processor, a second eviction request comprising second eviction data, and transmit a second write comprising the second eviction data to the fabric. Responsive to transmitting the second write request, the fabric bridge may transmit, to the processor, a message indicating the fabric bridge accepted the second eviction request, determine that the first write and the second write have persisted, and transmit, to the processor, a notification responsive to determining that the first write and the second write have persisted.
Type: Grant
Filed: February 12, 2019
Date of Patent: June 23, 2020
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Derek Alan Sherlock, Shawn Walker
-
Patent number: 10684823
Abstract: A memory module includes at least two memory devices. Each of the memory devices performs verify operations after attempted writes to their respective memory cores. When a write is unsuccessful, each memory device stores information about the unsuccessful write in an internal write retry buffer. The write operations may have only been unsuccessful for one memory device and not any other memory devices on the memory module. When the memory module is instructed, both memory devices on the memory module can retry the unsuccessful memory write operations concurrently. Both devices can retry these write operations concurrently even though the unsuccessful memory write operations were to different addresses.Type: Grant
Filed: June 9, 2017
Date of Patent: June 16, 2020
Assignee: Rambus Inc.
Inventors: Hongzhong Zheng, Brent Haukness
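The write-verify and per-device retry buffer can be modeled as below; the failure simulation (cells that fail exactly once) and class names are illustrative assumptions, and the sequential loop stands in for what the devices do concurrently in hardware:

```python
class MemoryDevice:
    def __init__(self, fail_addresses=()):
        self.core = {}
        self.retry_buffer = []             # (address, data) of failed writes
        self._fail = set(fail_addresses)   # simulated one-shot write failures

    def write(self, address, data):
        """Attempt a write and verify it; buffer it on failure."""
        if address in self._fail:          # write-verify failed
            self._fail.discard(address)    # assume the retry will succeed
            self.retry_buffer.append((address, data))
            return False
        self.core[address] = data
        return True

def retry_all(devices):
    """Module-level retry command: every device replays its buffered
    writes, even though the failed addresses differ per device."""
    for d in devices:                      # concurrent in real hardware
        for address, data in list(d.retry_buffer):
            d.write(address, data)
        d.retry_buffer.clear()

a = MemoryDevice(fail_addresses=[0x10])
b = MemoryDevice(fail_addresses=[0x20])
a.write(0x10, "A")
b.write(0x20, "B")        # each device failed at a different address
retry_all([a, b])
print(a.core[0x10], b.core[0x20])   # A B
```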