Multiple Caches Patents (Class 711/119)
-
Patent number: 11169810
Abstract: According to one general aspect, an apparatus may include an instruction fetch unit circuit configured to retrieve instructions from a memory. The apparatus may include an instruction decode unit configured to convert instructions into one or more micro-operations that are provided to an execution unit circuit. The apparatus may also include a micro-operation cache configured to store micro-operations. The apparatus may further include a branch prediction circuit configured to: determine when a kernel of instructions is repeating, store at least a portion of the kernel within the micro-operation cache, and provide the stored portion of the kernel to the execution unit circuit without the further aid of the instruction decode unit circuit.
Type: Grant
Filed: April 3, 2019
Date of Patent: November 9, 2021
Inventors: Ryan J. Hensley, Fuzhou Zou, Monika Tkaczyk, Eric C. Quinnell, James David Dundas, Madhu Saravana Sibi Govindan
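The replay idea above can be illustrated with a minimal sketch (not the patented circuit): once instructions have been decoded into micro-operations, a repeating kernel can be served from the micro-op cache without paying the decode cost again. The instruction names and `uop` labels are invented for illustration.

```python
class MicroOpCache:
    """Toy model of a micro-operation cache fed by a decode unit."""

    def __init__(self):
        self.cache = {}          # instruction -> decoded micro-operations
        self.decode_count = 0    # how many times the decode unit ran

    def decode(self, instr):
        # Stand-in for the decode unit: split one instruction into
        # hypothetical micro-operations.
        self.decode_count += 1
        return [f"{instr}.uop{i}" for i in range(2)]

    def fetch(self, instr):
        # Hit: replay stored micro-ops without re-decoding.
        if instr in self.cache:
            return self.cache[instr]
        uops = self.decode(instr)
        self.cache[instr] = uops
        return uops


def run_kernel(cache, kernel, iterations):
    for _ in range(iterations):
        for instr in kernel:
            cache.fetch(instr)


cache = MicroOpCache()
run_kernel(cache, ["add", "mul", "br"], iterations=100)
# Only the first iteration pays the decode cost.
print(cache.decode_count)  # 3
```

Across 100 iterations of a three-instruction kernel, the decode unit runs only three times; every later fetch is served from the cache.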
-
Patent number: 11169737
Abstract: The present disclosure is related to performing speculation in, for example, a memory device or a computing system that includes a memory device. Speculation can be used to identify data that is accessed together or to predict data that will be accessed with greater frequency. The identified data can be organized to improve efficiency in providing access to the data.
Type: Grant
Filed: August 13, 2019
Date of Patent: November 9, 2021
Assignee: Micron Technology, Inc.
Inventors: Richard C. Murphy, Glen E. Hush, Honglin Sun
-
Patent number: 11144469
Abstract: Distributed computing system functionality is enhanced. Transmission of data changes may be incremental, thus reducing bandwidth usage and latency. Data changes may be propagated over geographic distances in an outward-only manner from a central data store to one or more servers or other remote nodes, using proactive updates as opposed to making cache updates only in reaction to cache misses. Cache expiration and eviction may be reduced or avoided as mechanisms for determining when cached data is modified. A central computing environment may proactively push incremental data entity changes to place them in remote data stores. Remote nodes proactively check their remote data store, find changes, pull respective selected changes into their remote node caches, and provide current data in response to service requests. Data may be owned by particular tenants. Data pulls may be limited to data in selected categories, data of recently active tenants, or both.
Type: Grant
Filed: July 2, 2019
Date of Patent: October 12, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Amir Geri, Asher Budik, Daniel Senderovich
-
Patent number: 11144466
Abstract: An embodiment of a memory device includes technology for a memory cell array logically organized in two or more banks of at least two rows and two columns per bank, and two or more local caches respectively coupled to the two or more banks of the memory cell array, where each local cache has a size which is an integer multiple of a memory page size of the memory cell array. Other embodiments are disclosed and claimed.
Type: Grant
Filed: June 6, 2019
Date of Patent: October 12, 2021
Assignee: Intel Corporation
Inventors: Jongwon Lee, Vivek Kozhikkottu, Kuljit S. Bains, Hussein Alameer
-
Patent number: 11144356
Abstract: Embodiments of the present systems and methods may provide techniques to provide simple and accurate estimate of memory requirements for application invocation in a serverless environment. For example, a method may comprise selecting sample invocations of functions as a service from a larger plurality of invocations, submitting for execution the plurality of sample invocations and, for each sample invocation, submitting a specification of a memory size to be used for execution of each sample invocation, determining, whether the specification of the memory size to be used for execution of each sample invocation results in unsuccessful execution of at least some of the sample invocations due to insufficient memory and, if so, adjusting the specification of the memory size for at least some of the sample invocations, and submitting for execution at least those invocations in the larger plurality of invocations that were not included in the plurality of sample invocations.
Type: Grant
Filed: October 30, 2019
Date of Patent: October 12, 2021
Assignee: International Business Machines Corporation
Inventors: Michael Factor, Gil Vernik
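The adjust-and-retry loop described above can be sketched as follows. This is a simplified illustration, not the patented method: the starting size, step, and limit are invented, and `run(sample, mb)` stands in for actually executing a sample invocation with a given memory specification.

```python
def estimate_memory(samples, run, start_mb=128, step_mb=128, limit_mb=3008):
    """Raise the memory specification until every sample invocation
    succeeds; `run(sample, mb)` returns True on successful execution."""
    mb = start_mb
    while mb <= limit_mb:
        if all(run(sample, mb) for sample in samples):
            return mb
        mb += step_mb   # insufficient memory: adjust the specification
    raise RuntimeError("no memory size within the limit succeeded")


# Hypothetical workload: each sample needs a known amount of memory.
needs = [200, 350, 512]
size = estimate_memory(needs, lambda need, mb: mb >= need)
print(size)  # 512
```

The loop stops at the first specification (here 512 MB, after trying 128, 256, and 384) under which no sample fails for lack of memory; the remaining invocations would then be submitted with that size.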
-
Patent number: 11099999
Abstract: A cache management method for a computing device, a cache controller, a processor and a processor readable storage medium are disclosed. The cache management method for the computing device includes classifying a workload on a cache based on a cache architecture of the computing device, characteristics of a cache level of the cache and a difference in the workload on the cache, and configuring a priority for the classified workload; and allocating a cache resource and performing cache management according to the configured priority.
Type: Grant
Filed: April 19, 2019
Date of Patent: August 24, 2021
Assignee: CHENGDU HAIGUANG INTEGRATED CIRCUIT DESIGN CO., LTD.
Inventors: Chunhui Zhang, Leigang Kou, Jiang Lin, Jing Li, Zehan Cui
-
Patent number: 11099991
Abstract: Described herein is a method for tracking changes to memory locations made by an application. In one embodiment, the application decides to start tracking and sends a list of virtual memory pages to be tracked to an operating system via an interface. The operating system converts the list of virtual memory pages to a list of physical addresses and sends the list of physical addresses to a hardware unit which performs the tracking by detecting write backs on a coherence interconnect coupled to the hardware unit. After the application ends tracking, the application requests a list of dirty cache lines. In response to the request, the operating system obtains the list of dirty cache lines from the hardware unit and adds the list to a buffer that the application can read. In other embodiments, the operating system can perform the tracking without the application making the request.
Type: Grant
Filed: January 24, 2019
Date of Patent: August 24, 2021
Assignee: VMware, Inc.
Inventors: Aasheesh Kolli, Irina Calciu, Jayneel Gandhi, Pratap Subrahmanyam
-
Patent number: 11042299
Abstract: Embodiments disclosed herein provide systems, methods, and computer-readable media to implement an object store with removable storage media. In a particular embodiment, a method provides identifying first data for storage on a first removable storage medium and designating at least a portion of the first data to a first data object. The method further provides determining a first location where to store the first data object in a first value store partition of the first removable storage medium and writing the first data object to the first location. Also, the method provides writing a first key that identifies the first data object and indicates the first location to a first key store partition of the first removable storage medium.
Type: Grant
Filed: November 1, 2016
Date of Patent: June 22, 2021
Assignee: Quantum Corporation
Inventors: Roderick B. Wideman, Turguy Goker, Suayb S. Arslan
-
Patent number: 11016698
Abstract: A storage system is coupled to another storage system and a higher-level apparatus via a network, and copies write data received from the higher-level apparatus to the other storage system. This storage system is provided with interface units, each provided with a plurality of ports that can be coupled to the network; and a plurality of controllers coupled to a respective one of the interface units. Each controller has a processor unit. When each processor unit receives write data from the higher-level apparatus via a first port coupled to the interface unit that is coupled to the controller to which the processor unit belongs, the processor unit selects, from among the ports of the interface unit coupled to the controller to which the processor unit belongs, a second port for transmitting the write data to the other storage system, and transmits the write data to the other storage system.
Type: Grant
Filed: July 4, 2017
Date of Patent: May 25, 2021
Assignee: HITACHI, LTD.
Inventors: Kazuki Hongo, Yasuhiko Yamaguchi
-
Patent number: 10997494
Abstract: Methods and systems for detecting disparate incidents in processed data using a plurality of machine learning models. For example, the system may receive native asset data. The system may extract telemetry data from the native asset data. The system may input the first feature input into a first machine learning model, wherein the first machine learning model is trained to detect known incidents of a first type in a first set of labeled telemetry data. The system may then detect a first incident based on a first output from the first machine learning model, wherein the first incident is a first event in an asset related to the user's behavior.
Type: Grant
Filed: December 31, 2020
Date of Patent: May 4, 2021
Assignee: GGWP, Inc.
Inventors: George Ng, Brian Wu, Ling Xiao
-
Patent number: 10977181
Abstract: A computer-implemented method, according to one approach, includes: receiving write requests, accumulating the write requests in a destage buffer, and determining a current read heat value of each logical page which corresponds to the write requests. Each of the write requests is assigned to a respective write queue based on the current read heat value of each logical page which corresponds to the write requests. Moreover, each of the write queues correspond to a different page stripe which includes physical pages, the physical pages included in each of the respective page stripes being of a same type. Furthermore, data in the write requests is destaged from the write queues to their respective page stripes. Other systems, methods, and computer program products are described in additional approaches.
Type: Grant
Filed: July 10, 2019
Date of Patent: April 13, 2021
Assignee: International Business Machines Corporation
Inventors: Roman Alexander Pletka, Timothy Fisher, Aaron Daniel Fry, Nikolaos Papandreou, Nikolas Ioannou, Sasa Tomic, Radu Ioan Stoica, Charalampos Pozidis, Andrew D. Walls
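The queue-assignment step above can be illustrated with a minimal sketch: a page's read-heat counter selects one of several write queues, so pages with similar read heat end up grouped onto the same page stripes. The number of queues and the heat thresholds below are invented for illustration, not taken from the patent.

```python
def assign_queue(read_heat, thresholds=(1, 4, 16)):
    """Map a logical page's read-heat counter to one of four write
    queues; hotter pages land in higher-numbered queues."""
    for queue, limit in enumerate(thresholds):
        if read_heat < limit:
            return queue
    return len(thresholds)   # hottest bucket


# Cold, warm, hot, and very hot pages go to different queues:
print([assign_queue(h) for h in (0, 2, 10, 50)])  # [0, 1, 2, 3]
```

Each queue then destages to its own page stripe, so a stripe holds pages of roughly uniform read heat.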
-
Patent number: 10963383
Abstract: Hardware assisted remote transactional memory including receiving, from a first remote processor over a high-speed communications fabric, an indication of a beginning of a first memory transaction; queuing, in a first hardware memory assistant, memory instructions for the first memory transaction; receiving, from a second remote processor over the high-speed communications fabric, an indication of a beginning of a second memory transaction; queuing, in a second hardware memory assistant, memory instructions for the second memory transaction; receiving, from the first remote processor over the high-speed communications fabric, an indication of an ending of the first memory transaction; comparing memory addresses accessed in the first memory transaction to memory addresses accessed in the second memory transaction; and in response to determining that the memory addresses accessed in the first memory transaction overlap with the memory addresses accessed in the second memory transaction, aborting the first memory transaction.
Type: Grant
Filed: May 15, 2018
Date of Patent: March 30, 2021
Assignee: Lenovo Enterprise Solutions (Singapore) Pte. Ltd.
Inventor: Makoto Ono
-
Patent number: 10956342
Abstract: A multi-controller memory system includes a flexible channel memory controller coupled to at least first and second physical interfaces. The second physical interface is also coupled to an auxiliary memory controller. The physical interfaces may be coupled to separate memory modules. In a single-channel control mode, the memory controllers respectively control the memory modules coupled to the first and second physical interface. In a multi-channel control mode, the flexible channel memory controller controls both memory modules while the auxiliary memory controller is inactive. In a single-channel control mode, the memory controllers coordinate restricted memory control commands which access a resource shared by both modules, by one controller transmitting a request signal for the resource to the other controller, awaiting an acknowledgment signal from the other controller, and maintaining transmission of the request signal until the use of the resource is completed.
Type: Grant
Filed: February 3, 2017
Date of Patent: March 23, 2021
Assignee: CADENCE DESIGN SYSTEMS, INC.
Inventors: John MacLaren, Jerome J. Johnson, Landon Laws, Anne Hughes
-
Patent number: 10949359
Abstract: Determining storage of particular data in cache memory of a storage device includes using a first mechanism to determine when to remove the particular data from the cache memory and using a second mechanism, independent from the first mechanism, to inhibit the particular data from being stored in the cache memory independent of whether the first mechanism otherwise causes the particular data to be stored in the cache memory. The first mechanism may remove data from the cache memory that was least recently accessed. The second mechanism may be based, at least in part, on a prediction value of an expected benefit of storing the particular data in the cache memory. The prediction value may be determined based on input data corresponding to measured cache read hits (RH), cache write hits (WH), cache read misses (RM), cache write destage operations (WD), and prefetch reads (PR) for the particular data.
Type: Grant
Filed: April 24, 2018
Date of Patent: March 16, 2021
Assignee: EMC IP Holding Company LLC
Inventors: Owen Martin, Kaustubh S. Sahasrabudhe, Mark D. Moreau, Malak Alshawabkeh, Earl Medeiros
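The two independent mechanisms can be sketched as an LRU cache guarded by an admission gate. This is an illustration only: the real prediction value is derived from the measured RH/WH/RM/WD/PR statistics by a method the abstract does not specify, so the scoring rule below is invented.

```python
from collections import OrderedDict


class AdmissionLRU:
    """LRU eviction (first mechanism) plus an independent admission
    gate (second mechanism) that keeps low-benefit data out entirely."""

    def __init__(self, capacity, admit):
        self.capacity, self.admit = capacity, admit
        self.data = OrderedDict()

    def access(self, key, stats):
        if key in self.data:
            self.data.move_to_end(key)           # LRU refresh
            return True
        if self.admit(stats):                    # second mechanism
            self.data[key] = stats
            if len(self.data) > self.capacity:
                self.data.popitem(last=False)    # evict least recent
        return False


# Invented benefit score: hits must outweigh misses and destages.
admit = lambda s: s["RH"] + s["WH"] > s["RM"] + s["WD"]
c = AdmissionLRU(2, admit)
c.access("a", {"RH": 9, "WH": 1, "RM": 2, "WD": 1})   # admitted
c.access("b", {"RH": 0, "WH": 0, "RM": 8, "WD": 3})   # rejected
print(sorted(c.data))  # ['a']
```

Note that "b" is kept out even though the LRU mechanism had room for it; the two policies operate independently, as the abstract describes.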
-
Patent number: 10922018
Abstract: The present teaching relates to a method, system, and programming for determining a source of a data object. A first average latency of a plurality of users in accessing the data object from the first data source is computed, wherein the first data source was previously identified as being the source of the data object. From each of other data sources, a second average latency of the plurality of users in accessing the data object from the other data source is obtained. In response to the first data source satisfying a first criterion associated with the first average latency, the first data source is maintained to be the source of the data object. In response to the first data source violating the first criterion, one of the other data sources that satisfies a second criterion associated with the second average latency is deemed as the source of the data object.
Type: Grant
Filed: March 4, 2019
Date of Patent: February 16, 2021
Assignee: Verizon Media Inc.
Inventor: Ric Allinson
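A minimal sketch of that selection rule: keep the current source while its average latency satisfies the first criterion, and otherwise switch to an alternative that satisfies the second. The concrete criteria (a latency ceiling, and "lowest average latency" as the second criterion) and the source names are assumptions for illustration.

```python
def pick_source(current, latencies, max_ok_ms=50.0):
    """Return the data source to use; `latencies` maps each source to
    the average access latency observed across users, in ms."""
    if latencies[current] <= max_ok_ms:     # first criterion holds
        return current
    others = {s: l for s, l in latencies.items() if s != current}
    return min(others, key=others.get)      # second criterion: fastest


lat = {"us-east": 120.0, "eu-west": 35.0, "ap-south": 80.0}
print(pick_source("us-east", lat))  # eu-west (current source too slow)
print(pick_source("eu-west", lat))  # eu-west (current source retained)
```

The point of the scheme is stability: a source is not changed merely because a faster one exists, only when the current one violates its own criterion.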
-
Patent number: 10915446
Abstract: Techniques are disclosed for identifying data streams in a processor that are likely to benefit from data prefetching. A prefetcher receives at least a first request in a plurality of requests to pre-fetch data from a stream in a plurality of streams. The prefetcher assigns a confidence level to the first request based on a number of confirmations observed in the stream. The first request is in a confident state if the confidence level exceeds a specified value. The first request is in a non-confident state if the confidence level does not exceed the specified value. Doing so allows a memory controller to determine whether to drop the at least the first request based on the confidence level and a memory resource utilization threshold.
Type: Grant
Filed: November 23, 2015
Date of Patent: February 9, 2021
Assignee: International Business Machines Corporation
Inventors: Richard J. Eickemeyer, John B. Griswell, Jr., Mohit S. Karve
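The drop decision can be sketched in a few lines: a prefetch is confident once its stream has accumulated enough confirmations, and only non-confident prefetches are shed when memory utilization crosses the threshold. Both threshold values below are invented for illustration.

```python
def should_drop(confirmations, mem_utilization,
                confident_at=2, busy_at=0.8):
    """Drop a prefetch only if it is non-confident AND memory
    resources are under pressure; confident prefetches survive."""
    confident = confirmations >= confident_at
    return (not confident) and mem_utilization >= busy_at


print(should_drop(0, 0.9))  # True  (non-confident stream, memory busy)
print(should_drop(3, 0.9))  # False (confident prefetch kept under pressure)
print(should_drop(0, 0.3))  # False (memory idle, speculative prefetch allowed)
```

This captures the abstract's division of labor: the prefetcher supplies the confidence state, and the memory controller combines it with the utilization threshold to decide.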
-
Patent number: 10909035
Abstract: A system and method for efficiently supporting a cache memory hierarchy potentially using a zero size cache in a level of the hierarchy. In various embodiments, logic in a lower-level cache controller or elsewhere receives a miss request from an upper-level cache controller. When the requested data is non-cacheable, the logic sends a snoop request with an address of the memory access operation to the upper-level cache controller to determine whether the requested data is in the upper-level data cache. When the snoop response indicates a miss or the requested data is cacheable, the logic retrieves the requested data from memory. When the snoop response indicates a hit, the logic retrieves the requested data from the upper-level cache. The logic completes servicing the memory access operation while preventing cache storage of the received requested data in a cache at a same level of the cache memory hierarchy as the logic.
Type: Grant
Filed: April 3, 2019
Date of Patent: February 2, 2021
Assignee: Apple Inc.
Inventor: Brian R. Mestan
-
Multi-level caching method and multi-level caching system for enhancing graph processing performance
Patent number: 10891229
Abstract: A multi-level caching method and a multi-level caching system for enhancing a graph processing performance are provided. The multi-level caching method includes searching for graph data associated with a query from a first cache memory in which data output in response to a previous query request is stored, when a query request for the query is received, re-searching for the graph data from a second cache memory in which neighboring data with a history of an access to each of data stored in the first cache memory is stored, when the graph data is not found in the first cache memory, and outputting first neighboring data found by the re-searching as the graph data when a response to the query request is output.
Type: Grant
Filed: December 14, 2018
Date of Patent: January 12, 2021
Assignee: CHUNGBUK NATIONAL UNIVERSITY INDUSTRY-ACADEMIC COOPERATION FOUNDATION
Inventors: Seunghun Yoo, Dojin Choi, Jongtae Lim, Kyoungsoo Bok, Jaesoo Yoo
-
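The two-level search can be sketched with plain dictionaries: query the result cache first, then fall back to the neighboring-data cache. The vertex names and the promotion-on-hit step are illustrative assumptions, not details from the patent.

```python
def lookup(query, first_cache, second_cache):
    """Search the first-level result cache, then the second-level
    cache of neighboring data; promote second-level hits."""
    if query in first_cache:
        return first_cache[query], "first"
    if query in second_cache:
        data = second_cache[query]
        first_cache[query] = data   # promote for future queries
        return data, "second"
    return None, "miss"


first = {"v1": ["v2", "v3"]}          # results of previous queries
second = {"v2": ["v1", "v4"]}         # neighbors of accessed data
print(lookup("v1", first, second)[1])  # first
print(lookup("v2", first, second)[1])  # second
print(lookup("v2", first, second)[1])  # first (promoted on prior hit)
```

The second cache exploits graph locality: neighbors of recently accessed vertices are likely to be queried next, so keeping them one level down turns many would-be misses into cheap hits.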
Patent number: 10884984
Abstract: Techniques described herein relate to systems and methods of data storage, and more particularly to providing layering of file system functionality on an object interface. In certain embodiments, file system functionality may be layered on cloud object interfaces to provide cloud-based storage while allowing for functionality expected from legacy applications. For instance, POSIX interfaces and semantics may be layered on cloud-based storage, while providing access to data in a manner consistent with file-based access with data organization in name hierarchies. Various embodiments also may provide for memory mapping of data so that memory map changes are reflected in persistent storage while ensuring consistency between memory map changes and writes. For example, by transforming a ZFS file system disk-based storage into ZFS cloud-based storage, the ZFS file system gains the elastic nature of cloud storage.
Type: Grant
Filed: May 31, 2017
Date of Patent: January 5, 2021
Assignee: ORACLE INTERNATIONAL CORPORATION
Inventors: Mark Maybee, James Kremer, Ankit Gureja, Kimberly Morneau
-
Patent number: 10877901
Abstract: An apparatus comprises processing circuitry to process data access operations specifying a virtual address of data to be loaded from or stored to a data store, and proxy identifier determining circuitry to determine a proxy identifier for a data access operation to be processed by the data access circuitry, the proxy identifier having fewer bits than a physical address corresponding to the virtual address specified by the data access operation. The processing circuitry comprises at least one buffer to buffer information (including the proxy identifier) associated with one or more pending data access operations awaiting processing. Address translation circuitry determines the physical address corresponding to the virtual address specified for a data access operation after that data access operation has progressed beyond said at least one buffer.
Type: Grant
Filed: June 26, 2017
Date of Patent: December 29, 2020
Assignee: ARM Limited
Inventors: Richard F. Bryant, Kim Richard Schuttenberg, Lilian Atieno Hutchins, Thomas Edward Roberts, Alex James Waugh, Max John Batley
-
Patent number: 10871906
Abstract: A multichip package may include at least a main die mounted on a substrate. The main die may be coupled to one or more transceiver dies also mounted on the substrate. The main die may include one or more universal interface blocks configured to interface with an on-package memory device or an on-package expansion die, both of which can be mounted on the substrate. The expansion die may include external memory interface (EMIF) components for communicating with off-package memory devices and/or bulk random-access memory (RAM) components for storing large amounts of data for the main die. Smaller input-output blocks such as GPIO (general purpose input-output) or LVDS (low-voltage differential signaling) interfaces may be formed within the core fabric of the main die without causing routing congestion while providing the necessary clock source.
Type: Grant
Filed: September 28, 2018
Date of Patent: December 22, 2020
Assignee: Intel Corporation
Inventors: Chee Hak Teh, Curtis Wortman, Jeffrey Erik Schulz
-
Patent number: 10866892
Abstract: A memory cache controller includes a transaction arbiter circuit and a retry queue circuit. The transaction arbiter circuit may determine whether a received memory transaction can currently be processed by a transaction pipeline. The retry queue circuit may queue memory transactions that the transaction arbiter circuit determines cannot be processed by the transaction pipeline. In response to receiving a memory transaction that is a cache management transaction, the retry queue circuit may establish a dependency from the cache management transaction to a previously stored memory transaction in response to a determination that both the previously stored memory transaction and the cache management transaction target a common address. Based on the dependency, the retry queue circuit may initiate a retry, by the transaction pipeline, of one or more of the queued memory transactions in the retry queue circuit.
Type: Grant
Filed: August 13, 2018
Date of Patent: December 15, 2020
Assignee: Apple Inc.
Inventors: Sridhar Kotha, Neeraj Parik
-
Patent number: 10853139
Abstract: Allocation of storage array hardware resources between host-visible and host-hidden services is managed to ensure that sufficient hardware resources are allocated to host-visible services. Information obtained from monitoring real-world operation of the storage array is used to generate a model of the storage array. The generated model represents temporal dependencies between storage array hardware, host-visible services, and host-hidden services. Because the model includes information gathered over time and represents temporal dependencies, future occurrence of repeating variations of storage-related service usage and requirements can be predicted. The model may be used to generate hardware recommendations and dynamically re-allocate existing hardware resources to more reliably satisfy a predetermined level of measured performance.
Type: Grant
Filed: October 19, 2018
Date of Patent: December 1, 2020
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Sweetesh Singh, Ramesh Doddaiah
-
Patent number: 10853267
Abstract: A method of managing a direct-mapped cache is provided. The method includes a direct-mapped cache receiving memory references indexed to a particular cache line, using a first cache line replacement algorithm to select a main memory block as a candidate for storage in the cache line in response to each memory reference, and using a second cache line replacement algorithm to select a main memory block as a candidate for storage in the cache line in response to each memory reference. The method further includes identifying, over a plurality of most recently received memory references, which one of the algorithms has selected a main memory block that matches a next memory reference a greater number of times, and storing a block of main memory in the cache line, wherein the block of main memory stored in the cache line is the main memory block selected by the identified algorithm.
Type: Grant
Filed: June 14, 2016
Date of Patent: December 1, 2020
Assignee: LENOVO ENTERPRISE SOLUTIONS (SINGAPORE) PTE. LTD.
Inventor: Daniel J. Colglazier
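The dueling-algorithms idea can be sketched as a small tournament: both candidate-selection algorithms run on every reference, each is scored when its last pick matched the reference that actually arrived, and the line stores the pick of the current leader. The two stand-in algorithms below (most-recent and most-frequent block) and the window size are invented for illustration.

```python
class DuelingCacheLine:
    """Run two replacement algorithms side by side; keep the block
    chosen by whichever matched more of the recent references."""

    def __init__(self, alg_a, alg_b, window=8):
        self.algs = [alg_a, alg_b]
        self.scores = [0, 0]
        self.history = []
        self.window = window

    def reference(self, block):
        # Score each algorithm on whether its prediction matched.
        for i, alg in enumerate(self.algs):
            if self.history and alg(self.history) == block:
                self.scores[i] += 1
        self.history = (self.history + [block])[-self.window:]
        winner = self.algs[self.scores.index(max(self.scores))]
        return winner(self.history)     # block to store in the line


# Stand-in algorithms: most-recent block vs. most-frequent block.
mru = lambda h: h[-1]
mfu = lambda h: max(set(h), key=h.count)
line = DuelingCacheLine(mru, mfu)
for b in [1, 1, 2, 1, 1]:
    kept = line.reference(b)
print(kept)  # 1 (the block the leading algorithm selects)
```

On this reference pattern the frequency-based algorithm pulls ahead (it correctly predicted the returns to block 1), so its candidate is the one stored.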
-
Patent number: 10817528
Abstract: A data warehouse engine (DWE) includes a central processing unit (CPU) core and a first data organization unit (DOU), where the first DOU is configured to aggregate read operations. The DWE also includes a first command queue coupled between the CPU core and the first DOU, where the first command queue is configured to convey commands from the CPU core to the first DOU.
Type: Grant
Filed: November 30, 2016
Date of Patent: October 27, 2020
Assignee: Futurewei Technologies, Inc.
Inventors: Ashish Rai Shrivastava, Alex Elisa Chandra, Mark Brown, Debashis Bhattacharya, Alan Gatherer
-
Patent number: 10810116
Abstract: Loading of a page into memory of an in-memory database system is initiated. Thereafter, a new page size for the page in memory is allocated corresponding to a greater of a current page size and an intended page size. Later, the page is loaded into the allocated memory so that a consistent change can be opened. Content within the page is reorganized according to the new page size followed by the consistent change being closed.
Type: Grant
Filed: June 29, 2017
Date of Patent: October 20, 2020
Assignee: SAP SE
Inventors: Dirk Thomsen, Thorsten Glebe
-
Patent number: 10802931
Abstract: Technology is described for management of shadowing for devices. A computing hub may store device data from a device in a buffer associated with a local device shadow of the device. The computing hub may determine a write status of the device data using a last write marker representing a last data entry written to the buffer. The computing hub may also determine a shadowing upload status of the device data using a last sent shadow marker representing a last data entry of the buffer sent to a device shadowing service in a service provider environment. The computing hub may send computing hub information that includes the last write marker and the last sent shadow marker to the device shadowing service in the service provider environment.
Type: Grant
Filed: November 21, 2018
Date of Patent: October 13, 2020
Assignee: Amazon Technologies, Inc.
Inventors: John Morkel, Sergejus Barinovas, Manish Geverchand Jain, Bradley Jeffery Behm
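The two markers can be sketched as indices into a buffer: the gap between the last-write marker and the last-sent marker is exactly the device data not yet uploaded to the shadowing service. The buffer shape and sensor readings below are invented for illustration.

```python
class ShadowBuffer:
    """Device-data buffer with a last-write marker and a
    last-sent-shadow marker; the gap between them is unsynced data."""

    def __init__(self):
        self.entries = []
        self.last_write = -1   # index of last entry written
        self.last_sent = -1    # index of last entry sent to the service

    def write(self, item):
        self.entries.append(item)
        self.last_write = len(self.entries) - 1

    def sync(self):
        """Return what would be uploaded to the device shadow."""
        pending = self.entries[self.last_sent + 1:self.last_write + 1]
        self.last_sent = self.last_write
        return pending


buf = ShadowBuffer()
for reading in (20.5, 21.0, 21.4):   # hypothetical sensor readings
    buf.write(reading)
print(buf.sync())  # [20.5, 21.0, 21.4]
print(buf.sync())  # []
```

Sending both markers to the service, as the abstract describes, lets it see how far the shadow lags behind the device without transferring the data itself.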
-
Patent number: 10802971
Abstract: A computer-implemented method for cache memory management includes receiving a coherence request message from a requesting processor. The method further includes detecting, with a conflict detecting engine, a transactional conflict with the coherence request message. The method can further include determining a request type responsive to detecting the transactional conflict. The request type is indicative of whether the coherence request is a prefetch request. The method further includes sending, with an adaptive prefetch throttling engine, a negative acknowledgement to the requesting processor responsive to a determination that the coherence request is a prefetch request.
Type: Grant
Filed: October 13, 2016
Date of Patent: October 13, 2020
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Harold W. Cain, III, Pratap C. Pattnaik
-
Patent number: 10785322
Abstract: In an example embodiment, a system and method to store and retrieve application data from a database are provided. In an example embodiment, location data comprising a database identifier is received. A location of a database is derived based on the database identifier, the database being one of a plurality of databases, each database of the plurality of databases comprising application data, and application data is requested from the database based on the derived location.
Type: Grant
Filed: June 2, 2016
Date of Patent: September 22, 2020
Assignee: PayPal, Inc.
Inventors: Christopher J. Kasten, Vilas Athavale, Tim Kane, Haili Ma, Naga Mayakuntla, Fred Ty, Scott Molenaar
-
Patent number: 10776012
Abstract: Systems and methods (including hardware and software) are disclosed for use in a multi-core, multi-socket server with many RDMA network adapters and NVME solid state drives. One of the features of the subject matter is to optimize the total IO throughput of the system by first replacing software locks with non-interruptible event handlers running on specific CPU cores that own individual software data structures and hardware queues, and second by moving work to that CPU affinity without stalling due to software lock overhead.
Type: Grant
Filed: May 19, 2017
Date of Patent: September 15, 2020
Assignee: EXTEN TECHNOLOGIES, INC.
Inventors: Michael Enz, Ashwin Kamath
-
Patent number: 10776043
Abstract: Storage circuitry is provided, that is designed to form part of a memory hierarchy. The storage circuitry comprises receiver circuitry for receiving a request to obtain data from the memory hierarchy. Transfer circuitry causes the data to be stored at a selected destination in response to the request, wherein the selected destination is selected in dependence on at least one selection condition. Tracker circuitry tracks the request while the request is unresolved. If at least one selection condition is met, then the destination is the storage circuitry; otherwise, the destination is other storage circuitry in the memory hierarchy.
Type: Grant
Filed: August 31, 2018
Date of Patent: September 15, 2020
Assignee: Arm Limited
Inventors: Adrian Montero, Miles Robert Dooley, Joseph Michael Pusdesris, Klas Magnus Bruce, Chris Abernathy
-
Patent number: 10776119
Abstract: An example embodiment combines use of a branch predictor with cache-like storage of previously executed branch targets to improve processor performance while minimizing hardware cost. The branch predictor is configured to predict both conditional branch and indirect branch targets and includes a combined predictor table configured to store at least one tagged conditional branch prediction in combination with at least one tagged indirect branch target prediction. The at least one tagged indirect branch target prediction is configured to include a predicted partial target address of a complete target address, the complete target address associated with an indirect branch instruction of a processor. The predictor includes prediction logic configured to use the predicted partial target address to produce a predicted complete target address of the complete target address for use by the processor prior to execution of the indirect branch instruction.
Type: Grant
Filed: June 15, 2018
Date of Patent: September 15, 2020
Assignee: MARVELL ASIA PTE, LTD.
Inventors: Edward J. McLellan, David A. Carlson, Rohit P. Thakar
-
Patent number: 10768927
Abstract: A management system and management method for facilitating resetting of necessary properties along with a version upgrade of a component are proposed. The management system and management method are designed to: update a version of a target component associated with a designated service template or its duplicate in response to a version upgrade request which designates a service template; estimate possible configurations as a post-reset configuration caused by the version upgrade of the target component with respect to each property group including properties associated with the version-upgraded target component, from among property groups of the designated service template or its duplicate; search for a property group having any of the estimated configurations from among property groups of a service template other than the designated service template or its duplicate; and display setting content of the property group detected by the search.
Type: Grant
Filed: July 14, 2016
Date of Patent: September 8, 2020
Assignee: HITACHI, LTD.
Inventors: Hiroaki Yamaguchi, Yuma Tanahashi, Masashi Nakaoka
-
Patent number: 10769004
Abstract: A processor circuit includes: multiple processor cores; multiple individual memories; multiple shared memories; multiple memory control circuits; multiple selectors; and a control core. When an address of the read request from the first processor core associated with a specific memory control circuit is identical to the transfer source address, the specific memory control circuit controls the transfer data based on the read request to be transferred to the transfer destination address via a specific selector of the multiple selectors in which the transfer selection information is set. When the control core sets read selection information in each of the multiple selectors, read data is read by one of the first processor core and the first adjacent processor core from the associated shared memory via a specific selector of the multiple selectors in which the read selection information is set.
Type: Grant
Filed: March 4, 2019
Date of Patent: September 8, 2020
Assignee: FUJITSU LIMITED
Inventors: Katsuhiro Yoda, Mitsuru Tomono, Takahiro Notsu
-
Patent number: 10761998
Abstract: An apparatus comprises processing circuitry for accessing data in a physically-indexed cache. Set indicator recording circuitry is provided to record a set indicator corresponding to a target physical address, where the set indicator depends on which set of one or more storage locations of the cache corresponds to the target physical address. The set indicator is insufficient to identify the target physical address itself. This enables performance issues caused by contention of data items for individual sets in a physically-indexed set-associative or direct-mapped cache to be identified without needing to expose the physical address itself to potentially insecure processes or devices.
Type: Grant
Filed: November 23, 2016
Date of Patent: September 1, 2020
Assignee: ARM Limited
Inventor: Alasdair Grant
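Why a set indicator reveals contention but not the address: in a physically-indexed cache the set is just a slice of address bits, so many addresses alias to the same index. A minimal sketch (the line size and set count here are illustrative, not from the patent):

```python
def set_indicator(phys_addr: int, line_size: int = 64, num_sets: int = 1024) -> int:
    """Cache set index a physical address maps to in a physically-indexed
    cache. The index alone cannot recover the address: every num_sets *
    line_size bytes of the address space alias onto the same set."""
    return (phys_addr // line_size) % num_sets

# Two addresses exactly num_sets * line_size apart contend for one set,
# which is the kind of conflict the set indicator lets software detect.
a = 0x10000
b = a + 1024 * 64
assert set_indicator(a) == set_indicator(b)
```

Exposing only this index lets a profiler spot lines fighting over the same set while the physical address itself stays hidden from untrusted observers.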
-
Patent number: 10732841
Abstract: Ownership of a memory unit in a data processing system is tracked by assigning an identifier to each software component in the data processing system that can acquire ownership of the memory unit. An ownership variable is updated with the identifier of the software component that acquires ownership of the memory unit whenever the memory unit is acquired.
Type: Grant
Filed: December 13, 2017
Date of Patent: August 4, 2020
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Jerry W. Stevens
-
Patent number: 10706150
Abstract: Systems and methods that detect the presence of malicious software by comparing address mappings in multiple translation look-aside buffers are provided. Address mappings in an instruction translation look-aside buffer (ITLB) and a data translation look-aside buffer (DTLB) may be scanned, with each address mapping including a mapping between a virtual page in a virtual memory and a frame in a physical memory of a computing device. A discrepancy between an address mapping in the ITLB and an address mapping in the DTLB can be identified. Based on the discrepancy, a process associated with the mapping may then be identified as a malicious process.
Type: Grant
Filed: December 13, 2017
Date of Patent: July 7, 2020
Assignee: PayPal, Inc.
Inventor: Shlomi Boutnaru
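The detection step described above reduces to a comparison of two mapping tables: any virtual page whose instruction-side frame differs from its data-side frame is suspicious (a classic symptom of TLB-splitting attacks). A minimal sketch, with the TLBs modeled as plain dictionaries:

```python
def find_suspicious_pages(itlb: dict, dtlb: dict) -> set:
    """Return virtual pages whose ITLB and DTLB mappings disagree.

    Both arguments map virtual page -> physical frame. A page present in
    both TLBs but mapped to different frames means instruction fetches
    and data reads of the "same" page hit different physical memory."""
    return {vpage for vpage in itlb.keys() & dtlb.keys()
            if itlb[vpage] != dtlb[vpage]}

itlb = {0x1000: 0xAA, 0x2000: 0xBB}
dtlb = {0x1000: 0xAA, 0x2000: 0xCC}  # 0x2000: code and data frames differ
assert find_suspicious_pages(itlb, dtlb) == {0x2000}
```

A real scanner would read the TLB state through privileged hardware or hypervisor interfaces; the dictionary model only captures the comparison logic.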
-
Patent number: 10691614
Abstract: Techniques to manage virtual memory are disclosed. In various embodiments, a time domain page access signal of a page is transformed to a frequency domain to obtain an access frequency. The access frequency is used to manage storage of the page in a page cache in memory. The access frequency may be used to evict the page from the page cache or, in some embodiments, to predictively load the page into the page cache.
Type: Grant
Filed: November 14, 2017
Date of Patent: June 23, 2020
Assignee: TIBCO SOFTWARE INC.
Inventor: Suresh Subramani
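The time-to-frequency transform in the abstract can be illustrated with a naive discrete Fourier transform over a page's access history (1 = accessed in an interval, 0 = not). This is only a sketch of the general idea; the patent does not specify this DFT, the bin selection, or any of these parameters.

```python
import cmath

def access_frequency(signal):
    """Dominant non-DC frequency bin of a page's time-domain access
    signal, found with a naive DFT. A high dominant bin means the page
    is touched in a fast, regular rhythm; a low bin means slow reuse."""
    n = len(signal)
    mags = []
    for k in range(1, n // 2 + 1):  # skip k=0 (DC: just the access count)
        s = sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(signal))
        mags.append((abs(s), k))
    return max(mags)[1]

# A page touched every other interval peaks at the highest bin.
assert access_frequency([1, 0, 1, 0, 1, 0, 1, 0]) == 4
```

A cache manager could then rank pages by this dominant frequency, evicting low-frequency pages first or prefetching a page just before its next predicted cycle.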
-
Patent number: 10686915
Abstract: According to an aspect of the present invention, a device includes a transmitter and receiver that transmit and receive communication packets to and from other devices, a memory that stores one or more pieces of reference information which present a function for data request and a function for data communication which a service comprises, one or more predetermined data items relating to the functions, and one or more pieces of distinguishing information for distinguishing the predetermined data, and a processor. The processor generates a packet which includes the reference information associated with the function for data communication, the predetermined data, and the distinguishing information.
Type: Grant
Filed: March 6, 2018
Date of Patent: June 16, 2020
Assignee: CASIO COMPUTER CO., LTD.
Inventor: Kazuho Kyou
-
Patent number: 10664402
Abstract: Example implementations relate to read operation redirect. For example, a system according to the present disclosure may include a data storage device accessible by a host. The system may include an input/output filter of an operating system of the host. The input/output filter may monitor read operations and write operations from the host to the data storage device. The input/output filter may copy a portion of the data storage device to a random-access memory (RAM) buffer within the host responsive to monitored read operations to the portion exceeding a threshold. The input/output filter may redirect a successive read operation, addressed to the portion of the data storage device, to the copy of the portion in the RAM buffer.
Type: Grant
Filed: January 27, 2017
Date of Patent: May 26, 2020
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Virginia Q. Herrera, Christoph Graham, Thomas Joseph Flynn
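The monitor-then-redirect behavior above is essentially a count-triggered promotion cache. A minimal sketch, with the device modeled as a dictionary and an arbitrary threshold; none of these names come from the patent:

```python
class ReadRedirectFilter:
    """I/O filter sketch: once a region is read more than `threshold`
    times, copy it into a RAM buffer and serve later reads from there."""

    def __init__(self, disk, threshold=3):
        self.disk = disk          # region -> data; stands in for the device
        self.threshold = threshold
        self.read_counts = {}
        self.ram_buffer = {}

    def read(self, region):
        if region in self.ram_buffer:
            return self.ram_buffer[region]     # redirected read, no disk I/O
        self.read_counts[region] = self.read_counts.get(region, 0) + 1
        data = self.disk[region]
        if self.read_counts[region] > self.threshold:
            self.ram_buffer[region] = data     # hot region: promote to RAM
        return data

f = ReadRedirectFilter({"blk0": b"payload"}, threshold=2)
for _ in range(4):
    assert f.read("blk0") == b"payload"
assert "blk0" in f.ram_buffer  # fourth read was served from the RAM buffer
```

A real filter would also invalidate or update the buffered copy on writes to the same region, which this sketch omits.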
-
Patent number: 10642740
Abstract: In an embodiment, an apparatus includes multiple memory resources, and a resource table that includes entries that correspond to respective memory resources of the multiple memory resources. The apparatus also includes a circuit configured to receive a first memory command. The first memory command is associated with a subset of the multiple memory resources. For each memory resource of the subset, the circuit is also configured to set a respective indicator associated with the first memory command, and to store a first value in a first entry of the resource table in response to a determination that the respective memory resource is unavailable. The circuit is also configured to store a second value in each entry of the resource table that corresponds to a memory resource of the subset in response to a determination that an entry corresponding to a given memory resource of the subset includes the first value.
Type: Grant
Filed: June 4, 2018
Date of Patent: May 5, 2020
Assignee: Apple Inc.
Inventors: Bikram Saha, Harshavardhan Kaushikkar, Sukalpa Biswas, Prashant Jain
-
Patent number: 10642743
Abstract: An apparatus and method are provided for handling caching of persistent data. The apparatus comprises cache storage having a plurality of entries to cache data items associated with memory addresses in a non-volatile memory. The data items may comprise persistent data items and non-persistent data items. Write back control circuitry is used to control write back of the data items from the cache storage to the non-volatile memory. In addition, cache usage determination circuitry is used to determine, in dependence on information indicative of capacity of a backup energy source, a subset of the plurality of entries to be used to store persistent data items. In response to an event causing the backup energy source to be used, the write back control circuitry is then arranged to initiate write back to the non-volatile memory of the persistent data items cached in the subset of the plurality of entries.
Type: Grant
Filed: June 12, 2018
Date of Patent: May 5, 2020
Assignee: ARM LIMITED
Inventors: Wei Wang, Stephan Diestelhorst, Wendy Arnott Elsasser, Andreas Lars Sandberg, Nikos Nikoleris
-
Patent number: 10593380
Abstract: Disclosed herein are techniques for monitoring the performance of a storage-class memory (SCM). In some embodiments, a performance monitoring circuit at an interface between the SCM and a memory controller of the SCM receives transaction commands from the memory controller to the SCM, measures statistics associated with the transaction commands, and determines a utilization rate of the SCM based on the statistics. Based on the determined utilization rate of the SCM, future transaction requests can be optimized to improve the utilization rate of the SCM.
Type: Grant
Filed: December 13, 2017
Date of Patent: March 17, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Thomas A. Volpe, Mark Anthony Banse, Steven Scott Larson, Douglas Lloyd Mainz
-
Patent number: 10591977
Abstract: A method, system, and device provide for selective control in a distributed cache system of the power state of a number of receiver partitions arranged in one or more partition groups. A power control element, coupled to one or more of the receiver partitions and to a coherent interconnect, selectively controls the transition from a current power state to a new power state by each receiver partition of one or more partition groups of the plurality of partition groups.
Type: Grant
Filed: December 10, 2015
Date of Patent: March 17, 2020
Assignee: Arm Limited
Inventors: Mark David Werkheiser, Dominic William Brown, Ashley John Crawford, Paul Gilbert Meyer
-
Patent number: 10579615
Abstract: There is provided a method and server for retrieving data from a data storage system including a plurality of storage nodes. The method may include sending a multicast message to at least a subset of the storage nodes. The multicast message may include a request for the subset of storage nodes to send the data. The multicast message may further include a data identifier, indicating the data to be retrieved. Moreover, the method may include receiving data from a first storage node of the subset of storage nodes. The data received from the first storage node may correspond to the requested data. At least the act of sending a multicast message or the act of receiving data from the first storage node may be performed on a condition that an estimated size of the data is less than a predetermined value.
Type: Grant
Filed: May 30, 2014
Date of Patent: March 3, 2020
Assignee: Compuverde AB
Inventors: Stefan Bernbo, Christian Melander, Roger Persson, Gustav Petersson
-
Patent number: 10564978
Abstract: Operation of a multi-slice processor that includes a plurality of execution slices and a plurality of load/store slices, where each load/store slice includes a load miss queue and a load reorder queue, includes: receiving, at a load reorder queue, a load instruction requesting data; responsive to the data not being stored in a data cache, determining whether a previous load instruction is pending a fetch of a cache line comprising the data; if the cache line does not comprise the data, allocating an entry for the load instruction in the load miss queue; and if the cache line does comprise the data: merging, in the load reorder queue, the load instruction with an entry for the previous load instruction.
Type: Grant
Filed: June 8, 2018
Date of Patent: February 18, 2020
Assignee: International Business Machines Corporation
Inventors: Kimberly M. Fernsler, David A. Hrusecky, Hung Q. Le, Elizabeth A. McGlone, Brian W. Thompto
-
Patent number: 10552325
Abstract: A method and apparatus for reducing write-backs to memory is disclosed herein. The method includes determining whether a read/write request entering a lower level cache is a cache line containing modified data, and responsive to determining that the read/write request is not a cache line containing modified data, manipulating age information of the cache line to reduce a number of write-backs to memory.
Type: Grant
Filed: April 20, 2018
Date of Patent: February 4, 2020
Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
Inventor: Md Kamruzzaman
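One way to read the age-manipulation idea: if clean (unmodified) lines are artificially aged toward the LRU position, eviction prefers them over dirty lines, so fewer evictions require a write-back. The sketch below shows that interpretation with an LRU dictionary; the specific policy here is an illustration, not the patented mechanism.

```python
from collections import OrderedDict

class WriteBackAwareCache:
    """LRU cache sketch: clean lines are aged to the LRU end on insert,
    so eviction prefers them and dirty write-backs become rarer."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # addr -> dirty flag; MRU at the end
        self.write_backs = 0

    def access(self, addr, dirty):
        if addr in self.lines:
            dirty = dirty or self.lines.pop(addr)  # merge dirty state
        elif len(self.lines) >= self.capacity:
            victim, victim_dirty = self.lines.popitem(last=False)
            if victim_dirty:
                self.write_backs += 1              # dirty victim costs I/O
        self.lines[addr] = dirty
        if not dirty:
            # Age manipulation: a clean line is treated as oldest.
            self.lines.move_to_end(addr, last=False)

c = WriteBackAwareCache(2)
c.access("a", dirty=True)
c.access("b", dirty=False)
c.access("c", dirty=False)  # evicts clean "b", keeping dirty "a" resident
assert c.write_backs == 0 and "a" in c.lines
```

The trade-off is that clean but hot lines may be evicted early and re-fetched, so a real design would balance miss cost against write-back cost.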
-
Patent number: 10534653
Abstract: A hypervisor-based virtual machine isolation apparatus and method. The hypervisor-based virtual machine isolation method performed by the hypervisor-based virtual machine isolation apparatus includes, when a hypervisor starts to run virtual machines, allocating one or more colors to each of the virtual machines, allocating a page frame corresponding to the allocated colors to the corresponding virtual machine, allocating an accessible core depending on the colors of the virtual machine, and performing isolation between virtual machines corresponding to an identical color by changing a temporal/spatial scheduling order between the virtual machines corresponding to the identical color.
Type: Grant
Filed: November 13, 2017
Date of Patent: January 14, 2020
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Woomin Hwang, Sung-Jin Kim, Byung-Joon Kim, Hyunyi Yi, Chulwoo Lee, Hyoung-Chun Kim
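The color-based frame allocation above is standard cache coloring: a frame's "color" is the slice of cache sets its addresses index, so giving two VMs disjoint colors keeps them out of each other's cache sets. A minimal sketch (the color count and frame counts are illustrative, not from the patent):

```python
def page_color(frame_number, num_colors=4):
    """Cache color of a physical page frame: frames of the same color
    map onto the same group of cache sets and thus contend with each
    other in the cache."""
    return frame_number % num_colors

def frames_for_vm(vm_colors, total_frames, num_colors=4):
    """Page frames a VM may be given, restricted to its allocated colors."""
    return [f for f in range(total_frames)
            if page_color(f, num_colors) in vm_colors]

# Two VMs with disjoint color sets never receive frames that share a
# color, so they cannot conflict in the colored portion of the cache.
vm_a = frames_for_vm({0, 1}, 16)
vm_b = frames_for_vm({2, 3}, 16)
assert set(vm_a).isdisjoint(vm_b)
assert vm_a == [0, 1, 4, 5, 8, 9, 12, 13]
```

The abstract's extra step, pinning same-color VMs to cores and reordering their scheduling, addresses the residual channel between VMs that must share a color.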
-
Patent number: 10528481
Abstract: A data block storage management capability is presented. A cloud file system management capability manages storage of data blocks of a file system across multiple cloud storage services (e.g., including determining, for each data block to be stored, a storage location and a storage duration for the data block). A cloud file system management capability manages movement of data blocks of a file system between storage volumes of cloud storage services. A cloud file system management capability provides a probabilistic eviction scheme for evicting data blocks from storage volumes of cloud storage services in advance of storage deadlines by which the data blocks are to be removed from the storage volumes. A cloud file system management capability enables dynamic adaptation of the storage volume sizes of the storage volumes of the cloud storage services.
Type: Grant
Filed: August 13, 2014
Date of Patent: January 7, 2020
Assignee: Provenance Asset Group LLC
Inventors: Krishna P. Puttaswamy Naga, Thyagarajan Nandagopal, Muralidharan S. Kodialam
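A probabilistic eviction scheme of the kind the abstract describes can be sketched as an eviction probability that rises as a block's storage deadline approaches, spreading removals out instead of bunching them at the deadline. The linear ramp and the `horizon` parameter below are assumptions for illustration, not the patented scheme.

```python
import random

def eviction_probability(now, deadline, horizon):
    """Chance of evicting a block this round: 0 while the deadline is
    more than `horizon` time units away, rising linearly to 1 at the
    deadline itself."""
    remaining = deadline - now
    if remaining <= 0:
        return 1.0  # deadline passed: the block must go
    return max(0.0, 1.0 - remaining / horizon)

def maybe_evict(now, deadline, horizon, rng=random.random):
    """Flip a biased coin; injectable rng keeps the sketch testable."""
    return rng() < eviction_probability(now, deadline, horizon)

assert eviction_probability(now=0, deadline=100, horizon=50) == 0.0
assert eviction_probability(now=100, deadline=100, horizon=50) == 1.0
```

Evicting early with some probability smooths the I/O load on each storage volume and guarantees every block is gone by its deadline.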
-
Patent number: 10514863
Abstract: A memory system includes: a non-volatile memory device; a host controller suitable for generating a cache read command for controlling a cache read operation of the non-volatile memory device and at least one other command for controlling at least one other operation of the non-volatile memory device, excluding the cache read operation, in response to a request received from a host; and a memory controller suitable for controlling an operation of the non-volatile memory device in response to the cache read command and the at least one other command that are inputted from the host controller. The memory controller is further suitable for checking the operation of the non-volatile memory device corresponding to the command that is inputted next after the cache read command, and for adding a read operation command, including a read preparation command or a read end command, after the cache read command.
Type: Grant
Filed: March 30, 2017
Date of Patent: December 24, 2019
Assignee: SK hynix Inc.
Inventor: Joo-Young Lee