Caching Patents (Class 711/118)
  • Patent number: 11068417
    Abstract: A computational device receives an indication of a minimum retention time in a cache for a plurality of tracks of an application. In response to determining that tracks of the application that are stored in the cache exceed a predetermined threshold in the cache, the computational device demotes one or more tracks of the application from the cache even though a minimum retention time in cache has been indicated for the one or more tracks of the application, while performing least recently used (LRU) based replacement of tracks in the cache.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: July 20, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lokesh M. Gupta, Roger G. Hathorn, Joseph Hayward, Matthew G. Borlick
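A minimal Python sketch of the general idea in 11068417 (not the patented implementation): an LRU cache that normally honors a per-track minimum retention time but demotes an application's tracks anyway once that application's cached footprint exceeds a threshold. The class name, the threshold semantics, and the wall-clock retention bookkeeping are illustrative assumptions.

```python
import time
from collections import OrderedDict

class RetentionAwareLRUCache:
    """Illustrative LRU cache that usually honors a per-track minimum retention
    time, but ignores it for applications whose cached tracks exceed a
    per-application threshold (loosely following US 11068417)."""

    def __init__(self, capacity, app_threshold):
        self.capacity = capacity            # max number of cached tracks
        self.app_threshold = app_threshold  # max cached tracks per application
        self.tracks = OrderedDict()         # track_id -> (app_id, inserted_at, min_retention_s)
        self.per_app = {}                   # app_id -> count of cached tracks

    def access(self, track_id, app_id, min_retention_s=0.0):
        now = time.monotonic()
        if track_id in self.tracks:
            self.tracks.move_to_end(track_id)   # LRU touch
            return
        while len(self.tracks) >= self.capacity:
            self._demote_one(now)
        self.tracks[track_id] = (app_id, now, min_retention_s)
        self.per_app[app_id] = self.per_app.get(app_id, 0) + 1

    def _demote_one(self, now):
        # Walk from the LRU end looking for the first demotable track.
        victim = None
        for track_id, (app_id, inserted_at, retention) in self.tracks.items():
            over_threshold = self.per_app.get(app_id, 0) > self.app_threshold
            retention_expired = (now - inserted_at) >= retention
            # Retention is honored only while the app stays under its threshold.
            if retention_expired or over_threshold:
                victim = track_id
                break
        if victim is None:                   # every track still protected:
            victim = next(iter(self.tracks)) # fall back to plain LRU
        app_id, _, _ = self.tracks.pop(victim)
        self.per_app[app_id] -= 1
```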
  • Patent number: 11068612
    Abstract: Embodiments for mitigating cache-based data security vulnerabilities in a computing environment are provided. Cache pollution due to speculative memory accesses within a speculative path is avoided by delaying data updates to a cache and memory subsystem until the speculative memory accesses are resolved. A speculative buffer is used to maintain the speculative memory accesses such that a state of the cache remains unchanged until the speculative memory accesses are committed.
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: July 20, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Prashant J. Nair, Seokin Hong, Alper Buyuktosunoglu, Ravi Nair
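A rough software sketch of the buffering idea in 11068612: speculative loads are staged in a side buffer and only update the cache when the speculative path commits, so a squashed path leaves the cache untouched. All names and the dictionary-based model are assumptions; the patent describes hardware.

```python
class SpeculativeBufferCache:
    """Toy model: speculative fills go to a side buffer and reach the cache
    only when the speculation commits (loosely following US 11068612)."""

    def __init__(self, memory):
        self.memory = memory          # backing store: addr -> value
        self.cache = {}               # architecturally visible cache state
        self.spec_buffer = {}         # staged fills for the in-flight speculation

    def speculative_load(self, addr):
        # Serve from the cache or the buffer, but never pollute the cache itself.
        if addr in self.cache:
            return self.cache[addr]
        if addr not in self.spec_buffer:
            self.spec_buffer[addr] = self.memory[addr]
        return self.spec_buffer[addr]

    def commit(self):
        # Speculation resolved correctly: staged fills become real cache lines.
        self.cache.update(self.spec_buffer)
        self.spec_buffer.clear()

    def squash(self):
        # Mis-speculation: drop the staged fills; the cache state is unchanged.
        self.spec_buffer.clear()

c = SpeculativeBufferCache({0x100: 42})
c.speculative_load(0x100)
c.squash()
assert 0x100 not in c.cache   # no cache pollution from the squashed path
```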
  • Patent number: 11068268
    Abstract: An apparatus comprises: an instruction decoder and processing circuitry. In response to a data structure processing instruction specifying at least one input data structure identifier and an output data structure identifier, the instruction decoder controls the processing circuitry to perform a processing operation on at least one input data structure to generate an output data structure. Each input/output data structure comprises an arrangement of data corresponding to a plurality of memory addresses. The apparatus comprises two or more sets of one or more data structure metadata registers, each set associated with a corresponding data structure identifier and designated to store address-indicating metadata for identifying the memory addresses for the data structure identified by the corresponding data structure identifier.
    Type: Grant
    Filed: August 5, 2019
    Date of Patent: July 20, 2021
    Assignee: Arm Limited
    Inventors: Nigel John Stephens, David Hennah Mansell, Richard Roy Grisenthwaite, Matthew Lucien Evans
  • Patent number: 11055234
    Abstract: Provided are a computer program product, system, and method for managing cache segments between a global queue and a plurality of local queues by training a machine learning module. A machine learning module is provided input comprising cache segment management information related to management of segments in the local queues by the processing units and accesses of the global queue to transfer cache segments between the local queues and the global queue to output an optimum number parameter comprising an optimum number of segments to maintain in a local queue and a transfer number parameter comprising a number of cache segments to move between a local queue and the global queue. The machine learning module is retrained based on the cache segment management information to output an adjusted transfer number parameter and an adjusted optimum number parameter for the processing units.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: July 6, 2021
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kevin J. Ash, Beth A. Peterson, Matthew R. Craig
  • Patent number: 11055233
    Abstract: A query for data stored in a database that includes a set of segments is received at a computer system. The set of segments are divided into a plurality of columns and at least one column of the plurality of columns includes one or more fields. The system analyzes the query to determine fields required to be retrieved from the database. The system determines whether a required field of the query is located in a main memory of the computer system. The system creates an input/output request for a column containing the required field for a plurality of segments of the set of segments prior to executing the query.
    Type: Grant
    Filed: November 8, 2018
    Date of Patent: July 6, 2021
    Assignee: Medallia, Inc.
    Inventor: Thorvald Natvig
  • Patent number: 11036639
    Abstract: A cache apparatus is provided comprising a data storage structure providing N cache ways that each store data as a plurality of cache blocks. The data storage structure is organised as a plurality of sets, where each set comprises a cache block from each way, and further the data storage structure comprises a first data array and a second data array, where at least the second data array is set associative. A set associative tag storage structure stores a tag value for each cache block, with that set associative tag storage structure being shared by the first and second data arrays. Control circuitry applies an access likelihood policy to determine, for each set, a subset of the cache blocks of that set to be stored within the first data array.
    Type: Grant
    Filed: January 8, 2018
    Date of Patent: June 15, 2021
    Assignee: ARM Limited
    Inventors: Ricardo Daniel Queiros Alves, Nikos Nikoleris, Shidhartha Das, Andreas Lars Sandberg
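One way to picture the shared-tag, two-array arrangement in 11036639 is the per-set sketch below. It is only a software analogy: the access likelihood policy is modeled as a simple hit counter, which the patent does not mandate, and all names are made up.

```python
class TwoArrayCacheSet:
    """One set of a cache with a shared tag store and two data arrays: the
    blocks judged most likely to be accessed live in the small fast array,
    the rest in the second array (loosely following US 11036639)."""

    def __init__(self, ways, fast_slots):
        self.ways = ways
        self.fast_slots = fast_slots   # blocks of this set that fit in the first array
        self.blocks = {}               # tag -> data (shared tag store + data)
        self.hits = {}                 # tag -> access count (likelihood estimate)
        self.fast = set()              # tags currently resident in the first array

    def access(self, tag, fill_value=None):
        if tag not in self.blocks:
            if fill_value is None:
                return None, False                      # miss, nothing to fill
            if len(self.blocks) >= self.ways:           # evict the least likely block
                victim = min(self.hits, key=self.hits.get)
                del self.blocks[victim], self.hits[victim]
                self.fast.discard(victim)
            self.blocks[tag], self.hits[tag] = fill_value, 0
        self.hits[tag] += 1
        # Re-apply the likelihood policy: the top-N blocks of the set stay fast.
        ranked = sorted(self.hits, key=self.hits.get, reverse=True)
        self.fast = set(ranked[: self.fast_slots])
        return self.blocks[tag], tag in self.fast       # (data, served from fast array?)
```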
  • Patent number: 11029959
    Abstract: Branch prediction circuitry processes blocks of instructions and provides instruction fetch circuitry with indications of predicted next blocks of instructions to be retrieved from memory. Main branch target storage stores branch target predictions for branch instructions in the blocks of instructions. Secondary branch target storage caches the branch target predictions from the main branch target storage. Look-ups in the secondary branch target storage and the main branch target storage are performed in parallel. The main branch target storage is set-associative and an entry in the main branch target storage comprises multiple ways, wherein each way of the multiple ways stores a branch target prediction for one branch instruction.
    Type: Grant
    Filed: September 4, 2018
    Date of Patent: June 8, 2021
    Assignee: Arm Limited
    Inventors: Yasuo Ishii, Muhammad Umar Farooq, Chris Abernathy
  • Patent number: 11030106
    Abstract: A storage system and method for enabling host-driven regional performance in memory are provided. In one embodiment, a method is provided comprising receiving a directive from a host device as to a preferred logical region of a non-volatile memory in a storage system; and based on the directive, modifying a caching policy specifying which pages of a logical-to-physical address map stored in the non-volatile memory are to be cached in a volatile memory of the storage system. Other embodiments are provided, such as modifying a garbage collection policy of the storage system based on information from the host device regarding a preferred logical region of the memory.
    Type: Grant
    Filed: June 18, 2019
    Date of Patent: June 8, 2021
    Assignee: Western Digital Technologies, Inc.
    Inventors: Ramanathan Muthiah, Ramkumar Ramamurthy, Judah Gamliel Hahn
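The directive-driven caching policy in 11030106 can be pictured with the small sketch below: logical-to-physical (L2P) map pages that cover the host's preferred logical region are evicted from the volatile cache only as a last resort. The page-granular model and the eviction rule are assumptions for illustration.

```python
from collections import OrderedDict

class L2PPageCache:
    """Toy L2P map-page cache whose replacement is biased by a host directive
    naming a preferred logical region (loosely following US 11030106)."""

    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.pages = OrderedDict()     # map-page number -> mapping data (LRU order)
        self.preferred = range(0)      # preferred span of map-page numbers (empty)

    def set_preferred_region(self, first_page, last_page):
        # Host directive: treat this span of map pages as high priority.
        self.preferred = range(first_page, last_page + 1)

    def load_page(self, page_no, mapping):
        if page_no in self.pages:
            self.pages.move_to_end(page_no)             # LRU touch
            return
        if len(self.pages) >= self.capacity:
            # Prefer evicting a page outside the host's preferred region.
            victim = next((p for p in self.pages if p not in self.preferred),
                          next(iter(self.pages)))
            del self.pages[victim]
        self.pages[page_no] = mapping
```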
  • Patent number: 11030110
    Abstract: An integrated circuit includes a first communication interface for communicatively coupling the integrated circuit with a coherent data processing system, a second communication interface for communicatively coupling the integrated circuit with an accelerator unit including an effective address-based accelerator cache for buffering copies of data from a system memory, and a real address-based directory inclusive of contents of the accelerator cache. The real address-based directory assigns entries based on real addresses utilized to identify storage locations in the system memory. The integrated circuit further includes request logic that communicates memory access requests and request responses with the accelerator unit.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: June 8, 2021
    Assignee: International Business Machines Corporation
    Inventors: Michael S. Siegel, Bartholomew Blaner, Jeffrey A. Stuecheli, William J. Starke, Derek E. Williams, Kenneth M. Valk, John D. Irish, Lakshminarayana Arimilli
  • Patent number: 11031085
    Abstract: A non-volatile memory system comprises a memory structure and a control circuit connected to the memory structure. The memory structure includes one or more planes of non-volatile memory cells. Each plane is divided into a plurality of partial planes. The control circuit is configured to write to and read from the memory cells by writing a partial page into a particular partial plane and reading the partial page from the particular partial plane using a set of parameters optimized for the particular partial plane.
    Type: Grant
    Filed: June 9, 2020
    Date of Patent: June 8, 2021
    Assignee: SanDisk Technologies LLC
    Inventors: Mohan V Dunga, Pitamber Shukla
  • Patent number: 11025546
    Abstract: Some embodiments provide a method for selecting a transmit queue of a network interface card (NIC) of a host computer for an outbound data message. The NIC includes multiple transmit queues and multiple receive queues. Each of the transmit queues is individually associated with a different receive queue, and the NIC performs a load balancing operation to distribute inbound data messages among multiple receive queues. The method extracts a set of header values from a header of the outbound data message. The method uses the extracted set of header values to identify a receive queue which the NIC would select for a corresponding inbound data message upon which the NIC performed the load balancing operation. The method selects a transmit queue associated with the identified receive queue to process the outbound data message.
    Type: Grant
    Filed: July 25, 2018
    Date of Patent: June 1, 2021
    Assignee: VMWARE, INC.
    Inventors: Aditya G. Holla, Wenyi Jiang, Rajeev Nair, Srikar Tati, Boon Ang, Kairav Padarthy
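A simplified sketch of the symmetry trick in 11025546: compute in software the flow hash that the NIC's receive-side load balancing would apply to the reverse-direction packet, find which receive queue that selects, and transmit on the paired queue. The CRC32 hash is a stand-in (real NICs commonly use a Toeplitz hash), and beyond the queue pairing stated in the abstract everything here is assumed.

```python
import zlib

def rx_queue_for(num_queues, src_ip, dst_ip, src_port, dst_port):
    """Stand-in for the NIC's receive-side load-balancing hash over the
    header tuple of an inbound packet."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % num_queues

def select_tx_queue(num_queues, src_ip, dst_ip, src_port, dst_port):
    """Pick the transmit queue paired with the receive queue the NIC would
    choose for the corresponding inbound (reversed) packet, so both directions
    of a flow land on the same queue pair."""
    rx = rx_queue_for(num_queues, dst_ip, src_ip, dst_port, src_port)  # reversed tuple
    return rx   # transmit queue i is assumed to be paired with receive queue i

# Outbound packet of a flow: its reply traffic will hit the matching RX queue.
print(select_tx_queue(8, "10.0.0.5", "10.0.0.9", 49152, 443))
```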
  • Patent number: 11016846
    Abstract: A storage device shares a host memory of a host. The storage device includes a serial interface that exchanges data with the host, and a storage controller that stores buffering data in a host memory buffer allocated by the host through the serial interface. The storage controller performs error correction encoding and error correction decoding on the buffering data.
    Type: Grant
    Filed: August 7, 2019
    Date of Patent: May 25, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Kicheol Eom, Jaeho Sim, Dong-Ryoul Lee, Hyun Ju Yi, Hyotaek Leem
  • Patent number: 11010307
    Abstract: A method, a computer system, and a computer program product to perform a directory lookup in a first level cache for requested cache line data. A first processor core can detect that the requested cache line data is not found in a plurality of sets of data in the first level cache and detect that existing cache line data stored in a least recently used data set stored in the first level cache is in an exclusive state, wherein the existing cache line data stored in the least recently used data set is to be overwritten by the requested cache line data retrieved from a second level cache. Furthermore, the first processor core can send a request for the requested cache line data and a physical address of the least recently used data set to the second level cache and execute additional instructions based on the first level cache and data retrieved from the second level cache.
    Type: Grant
    Filed: December 2, 2019
    Date of Patent: May 18, 2021
    Assignee: International Business Machines Corporation
    Inventors: Deanna P. D. Berger, Christian Jacobi, Martin Recktenwald, Yossi Shapira, Aaron Tsai
  • Patent number: 11010335
    Abstract: Methods and systems for a networked storage system are provided. One method includes creating a first snapshot for data units stored at a persistent memory of a computing device, the data units managed by a first file system; transferring metadata associated with the data units and the data units stored at the persistent memory to a storage device managed by a second file system using a logical object, the second file system executed by a storage system interfacing with the computing device; and generating a second snapshot of the logical object at the storage device, the second snapshot including data units and associated metadata of the first snapshot.
    Type: Grant
    Filed: November 16, 2018
    Date of Patent: May 18, 2021
    Assignee: NETAPP, INC.
    Inventors: Sriram Venketaraman, Amit Golander
  • Patent number: 11003453
    Abstract: Branch instructions are managed in an emulation environment that is executing a program. A plurality of slots in a Polymorphic Inline Cache is populated. A plurality of entries is populated in a branch target buffer residing within an emulated environment in which the program is executing. When an indirect branch instruction associated with the program is encountered, a target address associated with the instruction is identified from the indirect branch instruction. At least one address in each of the slots of the Polymorphic Inline Cache is compared to the target address associated with the indirect branch instruction. If none of the addresses in the slots of the Polymorphic Inline Cache matches the target address associated with the indirect branch instruction, the branch target buffer is searched to identify one of the entries in the branch target buffer that is associated with the target address of the indirect branch instruction.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: May 11, 2021
    Assignee: International Business Machines Corporation
    Inventors: Carlos Cavanna, Reid Copeland, Chad McIntyre, Ali Sheikh
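The lookup order in 11003453, polymorphic inline cache (PIC) slots first and branch target buffer (BTB) only on a PIC miss, can be sketched as below. The slot count, promotion policy, and names are assumptions.

```python
class IndirectBranchPredictor:
    """Toy model of the two-level lookup in US 11003453: compare the indirect
    branch target against the PIC slots, and search the emulator's BTB only
    if no slot matches."""

    def __init__(self, pic_slots=4):
        self.pic = []                 # list of (guard_target, translated_entry) pairs
        self.pic_slots = pic_slots
        self.btb = {}                 # target address -> translated entry

    def resolve(self, target_addr):
        # 1. Compare the target against every populated PIC slot.
        for guard, translated in self.pic:
            if guard == target_addr:
                return translated                       # fast path: PIC hit
        # 2. No slot matched: fall back to searching the branch target buffer.
        translated = self.btb.get(target_addr)
        if translated is not None and len(self.pic) < self.pic_slots:
            self.pic.append((target_addr, translated))  # promote into a free slot
        return translated                               # None: unresolved, take slow path
```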
  • Patent number: 11003631
    Abstract: A method for use by a first device associated with a redundant second device includes issuing a synchronization request to a file system of the first device. The file system is configured to cache changes associated with a memory space of an application, the synchronization request causing the file system to send the cached changes to a driver of the first device. The driver is used to commit the cached changes to a copy of the memory space of the application in order to cause the copy of the memory space of the application to match the memory space of the application. One or more changes made to the copy of the memory space of the application caused by committing the cached changes are identified. A change set identifying the one or more changes being made to the copy of the memory space of the application is created in a buffer, and the change set is transmitted from the buffer to the second device in order to synchronize an additional copy of the memory space of the application at the second device.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: May 11, 2021
    Assignee: Honeywell International Inc.
    Inventors: Gary Drayton, Norman Swanson, Christopher Pulini
  • Patent number: 11006243
    Abstract: A guidance device of an embodiment includes a first acquirer configured to acquire first mobile body movement information of a first mobile body from a reference position to a first position, a second acquirer configured to acquire a second mobile body movement history including a movement history including the first position in a movement history of a second mobile body, and an information provision controller configured to cause an output to output guidance information directed to a user of the first mobile body on the basis of the first mobile body movement information and the second mobile body movement history.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: May 11, 2021
    Assignee: HONDA MOTOR CO., LTD.
    Inventor: Kosuke Suzuki
  • Patent number: 11003583
    Abstract: A method, a computing device, and a non-transitory machine-readable medium for modifying cache settings in an array cache are provided. Cache settings are set in an array cache, such that the array cache caches data in an input/output (I/O) stream based on the cache settings. Multiple cache simulators simulate caching the data from the I/O stream in the array cache using different cache settings in parallel with the array cache. The cache settings in the array cache are replaced with the cache settings from one of the cache simulators based on a determination that those settings increase the effectiveness of caching data in the array cache.
    Type: Grant
    Filed: April 25, 2017
    Date of Patent: May 11, 2021
    Assignee: NETAPP, INC.
    Inventors: Brian McKean, Sai Susarla, Ariel Hoffman
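A minimal sketch of running cache simulators "in parallel" with the live array cache, as in 11003583: replay the same I/O stream through several simulated settings and adopt the settings whose simulator was most effective. Here the only tunable setting is cache size and the policy is plain LRU; the real system exposes more knobs, so treat this as an assumption-laden illustration.

```python
import random

def lru_hit_ratio(trace, cache_size):
    """Simulate a plain LRU cache over an I/O trace and return its hit ratio."""
    cache, hits = [], 0
    for block in trace:
        if block in cache:
            hits += 1
            cache.remove(block)          # move to MRU position
        elif len(cache) >= cache_size:
            cache.pop(0)                 # evict the LRU block
        cache.append(block)
    return hits / len(trace)

def pick_cache_setting(trace, current_size, candidate_sizes):
    """Return the candidate setting whose simulator beat the current setting,
    or keep the current setting if none did."""
    results = {size: lru_hit_ratio(trace, size) for size in candidate_sizes}
    best = max(results, key=results.get)
    return best if results[best] > lru_hit_ratio(trace, current_size) else current_size

trace = [random.randint(0, 200) for _ in range(5000)]   # synthetic I/O stream
print(pick_cache_setting(trace, current_size=32, candidate_sizes=[16, 64, 128]))
```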
  • Patent number: 10997077
    Abstract: A data structure (e.g., a table) stores a listing of prefetches. Each entry in the data structure includes a respective virtual address and a respective prefetch stride for a corresponding prefetch. If the virtual address of a memory request (e.g., a request to load or fetch data) matches an entry in the data structure, then the value of a counter associated with that entry is incremented. If the value of the counter satisfies a threshold, then the lookahead amount associated with the memory request is increased.
    Type: Grant
    Filed: November 26, 2018
    Date of Patent: May 4, 2021
    Assignee: Marvell Asia Pte, Ltd.
    Inventor: David Carlson
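The table-of-prefetches idea in 10997077 is straightforward to model: each entry holds a virtual address and a stride, a matching memory request bumps the entry's counter, and crossing a threshold grows the lookahead. The threshold value and the lookahead growth rule below are assumptions.

```python
class StridePrefetchTable:
    """Toy version of the data structure in US 10997077."""

    def __init__(self, threshold=4, base_lookahead=1):
        self.entries = {}              # virtual address -> {'stride', 'count'}
        self.threshold = threshold
        self.base_lookahead = base_lookahead

    def record_prefetch(self, vaddr, stride):
        self.entries[vaddr] = {"stride": stride, "count": 0}

    def on_memory_request(self, vaddr):
        """Return the list of addresses to prefetch for this request."""
        entry = self.entries.get(vaddr)
        if entry is None:
            return []                  # no matching entry in the table
        entry["count"] += 1
        lookahead = self.base_lookahead
        if entry["count"] >= self.threshold:
            lookahead += 1             # confident stream: fetch further ahead
        return [vaddr + entry["stride"] * i for i in range(1, lookahead + 1)]
```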
  • Patent number: 10990533
    Abstract: A system and method for retrieving cached data are disclosed herein. The system includes a cache server including a local memory and a table residing on the local memory, wherein the table is used to identify data objects corresponding to cached data. The system also includes the data objects residing on the local memory, wherein the data objects contain pointers to the cached data. The system further includes a remote memory communicatively coupled to the cache server through an Input-Output (I/O) connection, wherein the cached data resides on the remote memory.
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: April 27, 2021
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Kevin T. Lim, Alvin AuYoung
  • Patent number: 10992743
    Abstract: A content delivery system dynamically manages a content cache fleet by expanding or shrinking the size of the cache fleet to anticipate and/or respond to changes in demand for cached content. The content delivery system can consider various demand-based parameters when determining when and how to scale the cache fleet, including the overall demand (expected or observed) for all content available for delivery by the content delivery system, the demand for a subset of content or individual content items relative to the demand for other subsets of content or individual content items, etc. When content servers are removed from the cache fleet, snapshots of the content caches of the content servers can be stored to a persistent data store, and then restored to content servers when content servers are added to the cache fleet.
    Type: Grant
    Filed: September 23, 2019
    Date of Patent: April 27, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Prashant Verma, Ronil Sudhir Mokashi, Karthik Uthaman
  • Patent number: 10990726
    Abstract: Address generators for use in verifying an integrated circuit hardware design for an n-way set associative cache. The address generator is configured to generate, from a reverse hashing algorithm matching the hashing algorithm used by the n-way set associative cache, a list of cache set addresses that comprises one or more addresses of the main memory corresponding to each of one or more target sets of the n-way set associative cache. The address generator receives requests for addresses of main memory from a driver to be used to generate stimuli for testing an instantiation of the integrated circuit hardware design for the n-way set associative cache. In response to receiving a request the address generator provides an address from the list of cache set addresses.
    Type: Grant
    Filed: April 22, 2020
    Date of Patent: April 27, 2021
    Assignee: Imagination Technologies Limited
    Inventors: Anthony Wood, Philip Chambers
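For a simple modulo set-index function, "reverse hashing" as described in 10990726 amounts to enumerating addresses a fixed stride apart, as sketched below. The index function, line size, and list length are assumptions; a design with a different hash would need its own reverse function.

```python
def addresses_for_set(target_set, num_sets, line_bytes=64, count=8, mem_size=1 << 20):
    """Enumerate main-memory addresses that the index function
    set = (addr // line_bytes) % num_sets maps to 'target_set'."""
    stride = num_sets * line_bytes
    first = target_set * line_bytes
    return list(range(first, min(mem_size, first + count * stride), stride))

def build_cache_set_address_list(target_sets, num_sets):
    """The address generator's pre-built list: addresses for each target set,
    handed out later in response to driver requests (US 10990726)."""
    return {s: addresses_for_set(s, num_sets) for s in target_sets}

listing = build_cache_set_address_list(target_sets=[0, 5, 17], num_sets=256)
# Every address in listing[5] indexes into cache set 5, so stimuli built from
# them concentrate verification traffic on that set.
```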
  • Patent number: 10990323
    Abstract: The present invention provides a flash memory controller, where the flash memory controller includes a read-only memory, a processor and a cache, the read-only memory stores a program code, and the processor executes the program code to access a flash memory module. When the processor receives first data from a host, the processor stores the first data into a region of the cache, and the processor builds or updates a binary tree according to the first data, wherein the binary tree is used when the processor receives a read command from the host.
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: April 27, 2021
    Assignee: Silicon Motion, Inc.
    Inventor: Kuan-Hui Li
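A toy stand-in for the controller-side index in 10990323: buffered host writes are kept searchable by logical block address so a later read command can be served from the cache region before falling back to flash. A sorted list with bisect plays the role of the binary tree here, and all names are assumptions.

```python
import bisect

class WriteCacheIndex:
    """Illustrative cached-write index, loosely following US 10990323."""

    def __init__(self):
        self.lbas = []        # sorted logical block addresses (the 'tree')
        self.buffers = {}     # lba -> cached write data

    def on_host_write(self, lba, data):
        if lba not in self.buffers:
            bisect.insort(self.lbas, lba)    # keep the index ordered (tree insert)
        self.buffers[lba] = data             # update in place on rewrite

    def on_host_read(self, lba):
        i = bisect.bisect_left(self.lbas, lba)        # O(log n) lookup
        if i < len(self.lbas) and self.lbas[i] == lba:
            return self.buffers[lba]                  # served from the cache region
        return None                                   # fall through to a flash read
```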
  • Patent number: 10983826
    Abstract: A method, computer system, and a computer program product for designing and executing at least one storlet is provided. The present invention may include receiving a plurality of restore operations based on a plurality of data. The present invention may also include identifying a plurality of blocks corresponding to the received plurality of restore operations from the plurality of data. The present invention may then include identifying a plurality of grain packs corresponding with the identified plurality of blocks. The present invention may further include generating a plurality of grain pack index identifications corresponding with the identified plurality of grain packs. The present invention may also include generating at least one storlet based on the generated plurality of grain pack index identifications. The present invention may then include returning a plurality of consolidated objects by executing the generated storlet.
    Type: Grant
    Filed: August 1, 2019
    Date of Patent: April 20, 2021
    Assignee: International Business Machines Corporation
    Inventors: Sasikanth Eda, Akshat Mithal, Sandeep R. Patil
  • Patent number: 10983916
    Abstract: A data processing apparatus is provided that includes a plurality of storage elements. Receiving circuitry receives a plurality of incoming data beats from cache circuitry and stores the incoming data beats in the storage elements. At least one existing data beat in the storage elements is replaced by an equal number of the incoming data beats belonging to a different cache line of the cache circuitry. The existing data beats stored in said plurality of storage elements form an incomplete cache line.
    Type: Grant
    Filed: March 1, 2017
    Date of Patent: April 20, 2021
    Assignee: ARM Limited
    Inventors: Huzefa Moiz Sanjeliwala, Klas Magnus Bruce, Leigang Kou, Michael Filippo, Miles Robert Dooley, Matthew Andrew Rafacz
  • Patent number: 10978131
    Abstract: Provided are a mobile device and an operation method of the mobile device. The operation method controlled by a central processing unit of the mobile device includes, in response to an initialization request with respect to a memory device of the mobile device, setting a first type area of the memory device, which is not initialized as a first value, and processing an operation command with respect to the first type area.
    Type: Grant
    Filed: April 22, 2016
    Date of Patent: April 13, 2021
    Inventor: Seung-soo Yang
  • Patent number: 10970077
    Abstract: In an embodiment, a processor includes a load/store unit that executes load/store operations. The load/store unit may implement a two-level load queue. One of the load queues, referred to as a load retirement queue (LRQ), may track load operations from initial execution to retirement. Ordering constraints may be enforced using the LRQ. The other load queue, referred to as a load execution queue (LEQ), may track loads from initial execution to forwarding of data. Replay may be managed by the LEQ. In an embodiment, the LEQ may be smaller than the LRQ, which may permit the management of replay while still meeting timing requirements. Additionally, the larger LRQ may permit more load operations to be pending (not retired) in the processor, widening the window for out of order execution and supporting potentially higher processor performance.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: April 6, 2021
    Assignee: Apple Inc.
    Inventors: Aditya Kesiraju, Mridul Agarawal, Nikhil Gupta
  • Patent number: 10970341
    Abstract: Methods, systems, and computer-readable storage media for predicting a type of an event in a computer-implemented system, implementations including receiving event data including a set of features representative of an event, determining a probability for at least one feature in the set of features from a data structure that stores a plurality of feature-probability pairs, the data structure representative of a type of event, providing a joint probability based on the probability of the at least one feature, the joint probability indicating a likelihood that the event is of the type of event, comparing the joint probability to a threshold to provide a comparison, and selectively executing one or more actions based on the comparison.
    Type: Grant
    Filed: August 15, 2017
    Date of Patent: April 6, 2021
    Assignee: SAP SE
    Inventor: Ahmad Hassan
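The per-type data structure of feature-probability pairs in 10970341 lends itself to a short worked example: combine the probabilities of the observed features, compare the joint probability with a threshold, and act on the result. The model values, default probability, and threshold below are invented for illustration only.

```python
import math

# Feature -> probability pairs for one event type; the numbers are made up.
FAILED_LOGIN_MODEL = {
    "src_is_external": 0.90,
    "after_hours": 0.70,
    "unknown_user_agent": 0.60,
}

def joint_probability(event_features, model, default_p=0.01):
    """Combine per-feature probabilities (via a log-sum for numerical stability)
    into the likelihood that the event is of the modeled type."""
    log_p = sum(math.log(model.get(f, default_p)) for f in event_features)
    return math.exp(log_p)

def classify(event_features, model, threshold=0.3):
    p = joint_probability(event_features, model)
    return ("matched" if p >= threshold else "ignored"), p

print(classify({"src_is_external", "after_hours"}, FAILED_LOGIN_MODEL))
# 0.9 * 0.7 = 0.63 >= 0.3, so the configured actions would be executed.
```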
  • Patent number: 10963422
    Abstract: Techniques for enabling user search of content stored in a file archive include providing a search interface comprising a search rules portion and an action rules portion, receiving a file archive search criterion comprising at least one search rule, and searching the file archive using the search criterion. The techniques also include generating a set of files filtered using the search criterion and performing an action specified in the action rules portion on a file included in the set of files.
    Type: Grant
    Filed: September 4, 2018
    Date of Patent: March 30, 2021
    Assignee: Commvault Systems, Inc.
    Inventors: Paramasivam Kumarasamy, Prakash Varadharajan, Deepak Raghunath Attarde, Pavan Kumar Reddy Bedadala, Satish Chandra Kilaru
  • Patent number: 10963567
    Abstract: Preventing the observation of the side effects of mispredicted speculative execution flows using restricted speculation. In an embodiment, a microprocessor comprises a register file including a plurality of entries, each entry comprising a value and a flag. The microprocessor (i) sets the flag corresponding to any entry whose value results from a memory load operation that has not yet been retired or cancelled, or results from a calculation that was derived from a register file entry whose corresponding flag was set, and (ii) clears the flag corresponding to any entry when the operation that generated the entry's value is retired. The microprocessor also comprises a memory unit that is configured to hold any memory load operation that uses an address whose value is calculated based on a register file entry whose flag is set, unless all previous instructions have been retired or cancelled.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: March 30, 2021
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Kenneth D. Johnson, Jonathan E. Lange
  • Patent number: 10963255
    Abstract: Techniques related to executing a plurality of instructions by a processor comprising receiving a first instruction configured to cause the processor to output a first data value to a first address in a first data cache, outputting, by the processor, the first data value to a second address in a second data cache, receiving a second instruction configured to cause a streaming engine associated with the processor to prefetch data from the first data cache, determining that the first data value has not been outputted from the second data cache to the first data cache, stalling execution of the second instruction, receiving an indication, from the second data cache, that the first data value has been output from the second data cache to the first data cache, and resuming execution of the second instruction based on the received indication.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: March 30, 2021
    Assignee: Texas Instruments Incorporated
    Inventors: Naveen Bhoria, Kai Chirca, Timothy D. Anderson, Duc Bui, Abhijeet A. Chachad, Son Hung Tran
  • Patent number: 10956284
    Abstract: An approach is provided for optimizing reference counting. Responsive to receiving code representing a program by a just-in-time compiler, one or more processors in computing machinery supporting transactional memory identify regions of the code having respective sets of reference counting operations executed dynamically. Identifying the regions of the code uses an analysis of semantics of the code. The identified regions are enclosed in respective transactions. The code that was to perform atomic operations, including the reference counting operations in the identified regions, is transformed into new code that performs non-atomic operations that are variants of the atomic operations. Fallback code sequences are inserted into the transformed code. In a non-transactional manner and in response to detections of failures in respective transactions, the fallback code sequences execute original code sequences that were in the code prior to the transformation of the code.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: March 23, 2021
    Assignee: International Business Machines Corporation
    Inventors: Vijay Sundaresan, Andrew J. Craik, Younes Manton, Yi Zhang
  • Patent number: 10949630
    Abstract: There is provided an information processing device including a selection unit configured to, on the basis of first identification information included in a processing instruction and corresponding to a service, and first association information in which the first identification information is associated with second identification information for identifying an application, select an application to perform the service corresponding to the processing instruction, and an execution unit configured to cause the selected application to perform a process in accordance with the processing instruction.
    Type: Grant
    Filed: December 6, 2016
    Date of Patent: March 16, 2021
    Assignee: SONY CORPORATION
    Inventor: Yasuo Takeuchi
  • Patent number: 10949403
    Abstract: A method, apparatus, and system for policy driven data placement and information lifecycle management in a database management system are provided. A user or database application can specify declarative policies that define the movement and transformation of stored database objects. The policies are associated with a database object and may also be inherited. A policy defines, for a database object, an archiving action to be taken, a scope, and a condition before the archiving action is triggered. Archiving actions may include compression, data movement, table clustering, and other actions to place the database object into an appropriate storage tier for a lifecycle phase of the database object. Conditions based on access statistics can be specified at the row level and may use segment or block level heatmaps. Policy evaluation occurs periodically in the background, with actions queued as tasks for a task scheduler.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: March 16, 2021
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Vineet Marwah, Hariharan Lakshmanan, Ajit Mylavarapu, Prashant Gaharwar, Amit Ganesh
  • Patent number: 10943323
    Abstract: An instruction is included in a program, which instruction causes execution threads of a processor executing the program to determine whether they satisfy a condition which can only be satisfied by a subset of one or more execution threads at any one time. If a thread satisfies the condition, it executes subsequent instructions in the program. Otherwise, the thread sleeps. The subsequent instructions in the program can accordingly be executed by one execution thread subset at a time in serial order.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: March 9, 2021
    Assignee: Arm Limited
    Inventors: Olof Henrik Uhrenholt, Sean Tristram LeGuay Ellis
  • Patent number: 10942859
    Abstract: A computing system using a bit counter may include a host device; a cache configured to temporarily store data of the host device, and including a plurality of sets; a cache controller configured to receive a multi-bit cache address from the host device, perform computation on the cache address using a plurality of bit counters, and determine a hash function of the cache; a semiconductor device; and a memory controller configured to receive the cache address from the cache controller, and map the cache address to a semiconductor device address.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: March 9, 2021
    Assignees: SK hynix Inc., Korea University Industry Cooperation Foundation
    Inventors: Seonwook Kim, Wonjun Lee, Yoonah Paik, Jaeyung Jun
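The abstract of 10942859 describes per-bit counters used to determine the cache's hash function. One plausible (assumed) selection rule is sketched below: count how often each address bit is set across observed cache addresses and choose the bits closest to a 50/50 split as set-index bits, since those spread accesses most evenly.

```python
def choose_index_bits(addresses, addr_width, index_bits):
    """Keep one counter per address bit and pick the most balanced bits as the
    set-index (hash) bits; only the bit counters come from US 10942859, the
    selection rule is an assumption."""
    counters = [0] * addr_width
    for addr in addresses:
        for bit in range(addr_width):
            counters[bit] += (addr >> bit) & 1
    half = len(addresses) / 2
    by_balance = sorted(range(addr_width), key=lambda b: abs(counters[b] - half))
    return sorted(by_balance[:index_bits])

def hash_index(addr, chosen_bits):
    """Set index assembled from the chosen address bits."""
    return sum(((addr >> b) & 1) << i for i, b in enumerate(chosen_bits))

addrs = [0x1000 + 64 * i for i in range(512)]          # synthetic access pattern
bits = choose_index_bits(addrs, addr_width=32, index_bits=6)
print(bits, hash_index(0x1240, bits))
```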
  • Patent number: 10936500
    Abstract: A database system includes a database server, a DRAM, a persistent memory, and at least one storage media. The database server includes a cache manager. The DRAM stores a buffer hash table and the persistent memory includes a persistent memory database cache including a plurality of buffers. Buffer content in a buffer is conditionally persisted subsequent to a system initialization event based on the respective buffer satisfying one or more predefined conditions. Each buffer is associated with buffer descriptor values corresponding to a plurality of buffer descriptors. The plurality of buffer descriptors includes a first type of buffer descriptors and a second type of buffer descriptors. Modifications to the buffer hash table are routed to the DRAM, and modifications to the buffer content and modifications to buffer descriptor values corresponding to the first type of buffer descriptors are explicitly flushed to the persistent memory database cache.
    Type: Grant
    Filed: November 10, 2020
    Date of Patent: March 2, 2021
    Assignee: Memhive, Inc.
    Inventors: Naresh Kumar Inna, Keshav Prasad H S
  • Patent number: 10936509
    Abstract: A memory interface for interfacing between a memory bus addressable using a physical address space and a cache memory addressable using a virtual address space, the memory interface comprising: a memory management unit configured to maintain a mapping from the virtual address space to the physical address space; and a coherency manager comprising a reverse translation module configured to maintain a mapping from the physical address space to the virtual address space; wherein the memory interface is configured to: receive a memory read request from the cache memory, the memory read request being addressed in the virtual address space; translate the memory read request, at the memory management unit, to a translated memory read request addressed in the physical address space for transmission on the memory bus; receive a snoop request from the memory bus, the snoop request being addressed in the physical address space; and translate the snoop request, at the coherency manager, to a translated snoop request addressed in the virtual address space.
    Type: Grant
    Filed: March 15, 2018
    Date of Patent: March 2, 2021
    Assignee: Imagination Technologies Limited
    Inventors: Martin John Robinson, Mark Landers
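The two-way mapping in 10936509 can be pictured as a pair of dictionaries: the MMU's virtual-to-physical map translates outgoing cache reads for the bus, and the coherency manager's reverse map turns physically addressed snoops back into virtually addressed ones for the cache. The page-granular model and method names are assumptions.

```python
class CoherencyInterface:
    """Toy model of the forward and reverse translations in US 10936509."""

    def __init__(self):
        self.virt_to_phys = {}     # maintained by the memory management unit
        self.phys_to_virt = {}     # maintained by the coherency manager

    def map_page(self, vaddr, paddr):
        self.virt_to_phys[vaddr] = paddr
        self.phys_to_virt[paddr] = vaddr     # reverse translation entry

    def outgoing_read(self, vaddr):
        """The cache issues a virtually addressed read; the bus sees it physically."""
        return {"op": "read", "addr": self.virt_to_phys[vaddr]}

    def incoming_snoop(self, paddr):
        """A bus snoop arrives physically addressed; hand the cache a virtually
        addressed snoop so it can locate the affected line."""
        vaddr = self.phys_to_virt.get(paddr)
        return {"op": "snoop", "addr": vaddr} if vaddr is not None else None
```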
  • Patent number: 10936455
    Abstract: A controller includes an interface and storage circuitry. The interface communicates with a memory that includes memory cells that store data in multiple programming levels, and that are organized in Word Lines (WLs). Each WL connects to one or more cell-groups of the memory cells. The memory cells in some cell-groups suffer from an impairment that has a different severity for reading data units of different bit-significance values. The storage circuitry assigns multiple parity groups to data units stored in cell-groups belonging to consecutive WLs, so that a same parity group is assigned to data units of different bit-significance values in neighboring groups of Nwl consecutive WLs. Upon detecting a failure to access a data unit of a given parity group, due to the impairment, the storage circuitry recovers the data unit using other data units assigned to the given parity group, and that are stored in other cell-groups.
    Type: Grant
    Filed: February 11, 2019
    Date of Patent: March 2, 2021
    Assignee: APPLE INC.
    Inventors: Eli Yazovitsky, Eyal Gurgi, Michael Tsohar
  • Patent number: 10922128
    Abstract: Techniques for efficiently managing the interruption of user-level critical sections are provided. In certain embodiments, a physical CPU of a computer system can execute a critical section of a user-level thread of an application, where program code for the critical section is marked with CPU instruction(s) indicating that the critical section should be executed atomically. The physical CPU can detect, while executing the critical section, an event to be handled by an OS kernel of the computer system and upon detecting the event, revert changes performed within the critical section. The physical CPU can then invoke a trap handler of the OS kernel, and in response the OS kernel can invoke a user-level handler of the application with information including (1) the identity of the user-level thread, (2) an indication of the event, (3) the physical CPU state upon detecting the event, and (4) an indication that the user-level thread was interrupted while in the critical section.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: February 16, 2021
    Assignee: VMWARE, INC.
    Inventors: Gerd Zellweger, Lalith Suresh, Jayneel Gandhi, Amy Tai
  • Patent number: 10915409
    Abstract: Contents of a plurality of backups that share a common characteristic are profiled. A portion of the plurality of backups is selected as a base backup reference data to be distributed. A first copy of the base backup reference data is stored at a storage of a backup server. A second copy of the base backup reference data is provided for storage at a storage of a client that shares the common characteristic. The client is located remotely from the backup server.
    Type: Grant
    Filed: April 25, 2018
    Date of Patent: February 9, 2021
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Balaji Panchanathan, Arun Sambamoorthy, Satchidananda Patra, Pravin Kumar Ashok Kumar
  • Patent number: 10915982
    Abstract: A graphics processing unit (GPU) is provided. The GPU includes a command stream parser (CSP). The CSP receives a command list from a display driver and parses commands in the command list to determine a rendering mode of the GPU and perform a graphics rendering pipeline for graphics processing according to the rendering mode. When the CSP determines that at least a specific CSP command is not included in the command list, the CSP determines that the rendering mode is a first rendering mode. When the CSP determines that the specific CSP command is included in the command list, the CSP determines that the rendering mode is a second rendering mode. In the second rendering mode, the CSP divides a rendering target into tiles, obtains first drawing commands from the command list according to the specific CSP command, and executes the first drawing commands for each tile.
    Type: Grant
    Filed: January 16, 2019
    Date of Patent: February 9, 2021
    Assignee: SHANGHAI ZHAOXIN SEMICONDUCTOR CO., LTD.
    Inventors: Ying Wang, Fengxia Wu, Deming Gu, Yi Zhou, Jiakuan Hu
  • Patent number: 10915466
    Abstract: Caches may be vulnerable to side-channel attacks, such as Spectre and Meltdown, that involve speculative execution of instructions, revealing information about a cache that the attacker is not permitted to access. Access permission may be stored in the cache, such as in an entry of a cache table or in the region information for a cache table. Optionally, the access permission may be re-checked if the access permission changes while a memory instruction is pending. Optionally, a random index value may be stored in a cache and used, at least in part, to identify a memory location of a cacheline. Optionally, cachelines that are involved in speculative loads for memory instructions may be marked as speculative. On condition of resolving the speculative load as non-speculative, the cacheline may be marked as non-speculative; and on condition of resolving the speculative load as mis-speculated, the cacheline may be removed from the cache.
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: February 9, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Erik Ernst Hagersten, David Black-Schaffer, Stefanos Kaxiras
  • Patent number: 10917926
    Abstract: Various implementations disclosed herein include systems, methods and apparatuses of a first device, that obtain contact point information of a second device associated with the first device, as a peer device in a private network, where the contact point information of the second device includes one or more peer uplink identifiers and each respective peer uplink identifier corresponds to a respective peer device uplink of the second device. The systems, methods and apparatuses establish a first private network data tunnel from a first uplink of the first device to the second device, using the contact point information of the second device, and a first uplink identifier associated with the first uplink, and establish a second private network data tunnel from a second uplink of the first device to the second device, using the contact point information of the second device, and a second uplink identifier associated with the second uplink.
    Type: Grant
    Filed: May 18, 2018
    Date of Patent: February 9, 2021
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Robert Tristan Shanks, Jignesh Devji Patel, Patrick Douglas Verkaik, Selahattin Daghan Altas, Joseph Morgan Aronow, Justin Delegard, Dylan Jason Koenig
  • Patent number: 10915527
    Abstract: Methods that can search partitioned data set extended (PDSE) indexes in parallel are provided. One method includes managing a set of quick indexes in a memory device in which the set of quick indexes includes references to storage locations for a subset of members of a PDSE stored in a set of long-term storage devices. The method further includes receiving a request to determine a storage location of a member of the PDSE and, in response to the request, searching the set of quick indexes to determine the storage location. Systems and computer program products for performing the above method are also provided.
    Type: Grant
    Filed: June 7, 2017
    Date of Patent: February 9, 2021
    Assignee: International Business Machines Corporation
    Inventors: Derek L. Erdmann, David C. Reed, Thomas C. Reed, Max D. Smith
  • Patent number: 10915447
    Abstract: A system including: a reader; a writer; and a shared memory shared by the reader and the writer, wherein the writer is configured to: specify, in the shared memory, first and second cache lines as unsafe to read; prefetch sole ownership of the first and second cache lines; specify, after the prefetching, that the first and second prefetched cache lines are safe to read; write data to the first prefetched cache line in the shared memory; and in response to completing writing data to the first prefetched cache, relinquish control of the first prefetched cache line to a reader.
    Type: Grant
    Filed: January 30, 2019
    Date of Patent: February 9, 2021
    Inventor: Johnny Yau
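The writer-side sequence in 10915447 (mark unsafe, take sole ownership, mark safe, write, relinquish) is sketched below in plain Python. A lock stands in for cache-line ownership, so this only shows the shape of the protocol, not a hardware-faithful model; all names are assumptions.

```python
import threading

class SharedLines:
    """Toy software analogue of the reader/writer protocol in US 10915447."""

    def __init__(self, num_lines):
        self.data = [None] * num_lines
        self.safe = [True] * num_lines
        self.owner = threading.Lock()    # stand-in for sole cache-line ownership

    def writer_update(self, line_a, line_b, value):
        self.safe[line_a] = self.safe[line_b] = False     # 1. mark unsafe to read
        with self.owner:                                  # 2. prefetch sole ownership
            self.safe[line_a] = self.safe[line_b] = True  # 3. now marked safe to read
            self.data[line_a] = value                     # 4. write the first line
        # 5. leaving the 'with' block relinquishes control of the lines

    def reader_read(self, line):
        while not self.safe[line]:
            pass                         # line marked unsafe: wait for the writer
        with self.owner:                 # stands in for the coherence stall while
            return self.data[line]       # the writer still holds the line
```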
  • Patent number: 10908823
    Abstract: Methods, systems, and devices for data transfer for wear-leveling are described. Data may be stored in pages of banks and the banks may be grouped into bank clusters. A host device may address one bank of a bank cluster at a time. Data may be transferred from a bank to a buffer or a different bank cluster for wear-leveling purposes and this data transfer may take place opportunistically while a second bank, which may be in the same bank cluster, is being accessed based on an access command.
    Type: Grant
    Filed: October 12, 2018
    Date of Patent: February 2, 2021
    Assignee: Micron Technology, Inc.
    Inventor: Robert M. Walker
  • Patent number: 10908818
    Abstract: According to some embodiment, a backup storage system receives a request from a client to read a data segment associated with a file object stored in a storage system. In response to the request, the system performs a lookup operation in a first index stored in a memory to identify a first index entry based on a fingerprint of the requested data segment to obtain a first write-evict unit (WEU) identifier (ID) identifying a first WEU storing the requested data segment. The system accesses a solid state device (SSD) operating as a cache memory device to retrieve the data segment from the first WEU. The system extracts and decompresses a compressed data segment retrieved from the first WEU and returns the decompressed data segment to the client without accessing a storage unit for retrieving the same data segment.
    Type: Grant
    Filed: April 17, 2017
    Date of Patent: February 2, 2021
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Satish Visvanathan, Rahul B. Ugale, Yamini Allu, Vrushali A. Kulkarni
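The read path in 10908818 (fingerprint lookup in an in-memory index, fetch from a write-evict unit on the SSD cache, decompress, and return without touching the backing storage) is sketched below. The on-SSD layout, the offset and length fields, and the use of SHA-1 and zlib are assumptions for illustration.

```python
import hashlib
import zlib

class WEUCache:
    """Toy model of the SSD cache read path, loosely following US 10908818."""

    def __init__(self):
        self.index = {}        # fingerprint -> (weu_id, offset, length)
        self.ssd = {}          # weu_id -> bytearray of compressed segments

    def cache_segment(self, weu_id, data):
        blob = zlib.compress(data)
        weu = self.ssd.setdefault(weu_id, bytearray())
        fp = hashlib.sha1(data).digest()
        self.index[fp] = (weu_id, len(weu), len(blob))   # record offset before append
        weu.extend(blob)
        return fp

    def read_segment(self, fingerprint):
        entry = self.index.get(fingerprint)
        if entry is None:
            return None                        # miss: would fall back to the storage unit
        weu_id, offset, length = entry
        compressed = bytes(self.ssd[weu_id][offset:offset + length])
        return zlib.decompress(compressed)     # decompressed segment for the client

cache = WEUCache()
fp = cache.cache_segment(weu_id=7, data=b"example segment contents")
assert cache.read_segment(fp) == b"example segment contents"
```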
  • Patent number: 10903981
    Abstract: Disclosed herein are methods, systems, and apparatus, including computer programs encoded on computer storage devices, for data processing and storage. One of the systems includes a first tier storage device with a first performance characteristic and a second tier storage device with a second performance characteristic inferior to the first performance characteristic. The first tier storage device stores a first data log file that includes first blockchain data generated by a blockchain network. The second tier storage device stores a second data log file that includes second blockchain data generated by the blockchain network at an earlier time than the first blockchain data.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: January 26, 2021
    Assignee: Advanced New Technologies Co., Ltd.
    Inventor: Shikun Tian
  • Patent number: 10901903
    Abstract: Devices and techniques are disclosed herein for implementing, in addition to a first cache, a second, persistent cache in a memory system coupled to a host. The memory system can include flash memory. In certain examples, the first cache and the second cache are configured to store mapping information. In some examples, the mapping information of the second persistent cache is determined by the host using a persistence flag of memory requests provided to the memory system.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: January 26, 2021
    Assignee: Micron Technology, Inc.
    Inventor: Nadav Grosz