Patents Examined by Denise Tran
  • Patent number: 10664192
    Abstract: In an example, a computing system is configured to detect data to temporarily store in a group of buffers using an in-memory buffer service; correlate, to the detected data, one or more identifiers of a plurality of identifiers based on a characteristic of the detected data, wherein a first identifier of the plurality corresponds to a first buffer type and a second, different identifier of the plurality corresponds to a second buffer type; in response to the data being correlated to a single identifier of the identifiers, create a first data object and place the first data object in one of the buffers of the corresponding buffer type; and in response to the data being correlated to more than one of the identifiers, create a second data object for each one of the identifiers and place the second data objects in ones of the buffers of the corresponding buffer types, respectively.
    Type: Grant
    Filed: April 26, 2018
    Date of Patent: May 26, 2020
    Assignee: SALESFORCE.COM, INC.
    Inventors: Choapet Oravivattanakul, Samarpan Jain
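
The classification step in patent 10664192 above amounts to: match incoming data against a table of identifiers keyed by some characteristic of the data, then create one data object per matching identifier and route each object to a buffer of the corresponding type. Below is a minimal C sketch of that flow; the identifier table, the substring "characteristic" test, and the buffer type names are all invented for illustration and are not taken from the patent.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical buffer types; the patent only says each identifier maps to a type. */
enum buf_type { BUF_METRICS, BUF_LOGS, BUF_TYPE_COUNT };

struct identifier {
    const char *name;        /* identifier */
    enum buf_type type;      /* buffer type this identifier corresponds to */
    const char *substring;   /* illustrative "characteristic": substring match */
};

/* Illustrative identifier table (not from the patent). */
static const struct identifier ids[] = {
    { "id-metrics", BUF_METRICS, "metric" },
    { "id-logs",    BUF_LOGS,    "log"    },
};

/* Place a data object (here just the payload) in a buffer of the given type. */
static void place_in_buffer(enum buf_type type, const char *payload)
{
    printf("object for buffer type %d: %s\n", (int)type, payload);
}

/* Correlate detected data to identifiers and create one object per match. */
static void handle_detected_data(const char *payload)
{
    int matches = 0;
    for (size_t i = 0; i < sizeof(ids) / sizeof(ids[0]); i++) {
        if (strstr(payload, ids[i].substring) != NULL) {
            place_in_buffer(ids[i].type, payload);  /* one object per matched identifier */
            matches++;
        }
    }
    if (matches == 0)
        printf("no identifier matched: %s\n", payload);
}

int main(void)
{
    handle_detected_data("metric cpu=42");        /* single match -> one object */
    handle_detected_data("log line with metric"); /* two matches -> two objects */
    return 0;
}
```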
  • Patent number: 10642747
    Abstract: An apparatus may include a virtual flash device configured to emulate a flash memory device. The virtual flash device may include a flash interface configured to communicate with a flash controller, an address translation module configured to translate memory addresses from a flash based memory space to another memory space of another memory, a threshold voltage shift module configured to modify data on a data path between the flash controller and the other memory to simulate data corruption caused by threshold voltage shifts in cells of the emulated flash memory device, and a non-flash memory controller configured to communicate with the other memory.
    Type: Grant
    Filed: May 10, 2018
    Date of Patent: May 5, 2020
    Assignee: Seagate Technology LLC
    Inventors: Sachin Sudhir Jagtap, Deepak Govind Choudhary
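
The threshold voltage shift module described in patent 10642747 above modifies data in flight so the emulated flash device exhibits the bit errors a real NAND cell would show. The C sketch below illustrates the general idea by flipping bits at a configurable rate on a simulated read path; the error rate, RNG, and page size are assumptions, not details from the patent.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define PAGE_BYTES 32  /* tiny "page" for demonstration; real flash pages are KBs */

/* Flip each bit with probability 1/err_div to emulate read errors caused by
 * threshold voltage shifts in the emulated flash cells. */
static void inject_vt_shift_errors(uint8_t *buf, size_t len, unsigned err_div)
{
    for (size_t i = 0; i < len; i++)
        for (int bit = 0; bit < 8; bit++)
            if ((unsigned)rand() % err_div == 0)
                buf[i] ^= (uint8_t)(1u << bit);
}

int main(void)
{
    uint8_t page[PAGE_BYTES] = {0};   /* pretend this was read from backing memory */

    srand(1234);
    inject_vt_shift_errors(page, sizeof(page), 64); /* roughly 1 error per 64 bits */

    for (size_t i = 0; i < sizeof(page); i++)
        printf("%02x%s", page[i], (i + 1) % 16 ? " " : "\n");
    return 0;
}
```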
  • Patent number: 10620881
    Abstract: An apparatus includes an interface for dynamic random access memory (DRAM) and an integrated circuit. The integrated circuit includes a memory pinout configured to connect to the memory, and control logic. The control logic is configured to multiplex address information, command information, and data to be written to or read from the DRAM memory on a subset of pins of the memory pinout to the DRAM memory. The control logic is further configured to route other signals on other pins of the memory pinout to the DRAM in parallel with the multiplexed address information, command information, and data information.
    Type: Grant
    Filed: April 23, 2018
    Date of Patent: April 14, 2020
    Assignee: MICROCHIP TECHNOLOGY INCORPORATED
    Inventors: Eric Matulik, Patrick Filippi, Marc Maunier
  • Patent number: 10599835
    Abstract: Embodiments are disclosed to mitigate the Meltdown vulnerability by selectively using page table isolation. Page table isolation is enabled for 64-bit applications, so that unprivileged areas in the kernel address space cannot be accessed in user mode due to speculative execution by the processor. On the other hand, page table isolation is disabled for 32-bit applications, thereby providing mapping into unprivileged areas in the kernel address space. However, speculative execution is limited to a 32-bit address space in a 32-bit application, and so access to unprivileged areas in the kernel address space can be inhibited.
    Type: Grant
    Filed: April 23, 2018
    Date of Patent: March 24, 2020
    Assignee: VMWARE, INC.
    Inventors: Nadav Amit, Dan Tsafrir, Michael Wei
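
The selection rule in patent 10599835 above boils down to: turn kernel page table isolation on for 64-bit address spaces and leave it off for 32-bit ones, because 32-bit speculation cannot reach privileged kernel mappings. A toy decision function in C follows; the process descriptor and flag are invented for illustration and do not reflect VMware's or any real kernel's data structures.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-process record; real kernels track this differently. */
struct process {
    const char *name;
    bool is_64bit;       /* does the task run with a 64-bit address space? */
    bool pti_enabled;    /* should this task use isolated kernel page tables? */
};

/* Selective page table isolation: only 64-bit tasks can speculatively reach
 * arbitrary kernel addresses, so only they pay the PTI cost. */
static void select_pti(struct process *p)
{
    p->pti_enabled = p->is_64bit;
}

int main(void)
{
    struct process tasks[] = {
        { "legacy32", false, false },
        { "modern64", true,  false },
    };
    for (int i = 0; i < 2; i++) {
        select_pti(&tasks[i]);
        printf("%s: PTI %s\n", tasks[i].name,
               tasks[i].pti_enabled ? "enabled" : "disabled");
    }
    return 0;
}
```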
  • Patent number: 10585765
    Abstract: A method, computer program product, and system for selective memory mirroring, including: identifying, by a computer during an initial program load, predictively deconfigured memory units and memory interfaces, wherein the predictively deconfigured memory units and memory interfaces are marked by the computer for removal from a computer configuration prior to the initial program load; analyzing the predictively deconfigured memory units and memory interfaces to determine a level of granularity for selective memory mirroring; and initiating selective memory mirroring at the determined level of granularity using the analyzed predictively deconfigured memory units and memory interfaces.
    Type: Grant
    Filed: August 21, 2018
    Date of Patent: March 10, 2020
    Assignee: International Business Machines Corporation
    Inventors: Sachin Gupta, Prem Shanker Jha, Venkatesh Sainath
  • Patent number: 10564690
    Abstract: The present disclosure includes methods for operating a memory system, and memory systems. One such method includes updating transaction log information in a transaction log using write look ahead information; and updating a logical address (LA) table using the transaction log.
    Type: Grant
    Filed: December 13, 2017
    Date of Patent: February 18, 2020
    Assignee: Micron Technology, Inc.
    Inventor: Joseph M. Jeddeloh
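
Patent 10564690 above describes two steps: fold write look ahead information into a transaction log, then replay that log into a logical address (LA) table. The small C sketch below pictures that flow; the record layout, table size, and the idea that the look ahead information is a predicted physical location are illustrative assumptions.

```c
#include <stdio.h>

#define LA_ENTRIES 8
#define LOG_CAP    8

/* One transaction log record: a logical address and the physical location
 * that the write look ahead information predicts the data will land in. */
struct log_entry { int la; int pa; };

static struct log_entry txn_log[LOG_CAP];
static int log_len;

static int la_table[LA_ENTRIES];  /* logical address -> physical address */

/* Step 1: update the transaction log using write look ahead information. */
static void log_write_lookahead(int la, int predicted_pa)
{
    if (log_len < LOG_CAP)
        txn_log[log_len++] = (struct log_entry){ .la = la, .pa = predicted_pa };
}

/* Step 2: update the LA table by replaying the transaction log. */
static void apply_log_to_la_table(void)
{
    for (int i = 0; i < log_len; i++)
        la_table[txn_log[i].la] = txn_log[i].pa;
    log_len = 0;
}

int main(void)
{
    log_write_lookahead(3, 100);
    log_write_lookahead(5, 101);
    apply_log_to_la_table();
    for (int i = 0; i < LA_ENTRIES; i++)
        printf("LA %d -> PA %d\n", i, la_table[i]);
    return 0;
}
```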
  • Patent number: 10558562
    Abstract: A method for operating a data storage device includes determining an nth garbage collection throughput by multiplying a ratio of the number of used pages of an open memory block to an amount of write data to be processed, by a sum of the number of usable empty memory blocks and an immediately previous garbage collection throughput average value; and performing a garbage collection operation based on the nth garbage collection throughput.
    Type: Grant
    Filed: June 7, 2017
    Date of Patent: February 11, 2020
    Assignee: SK hynix Inc.
    Inventors: Hae Lyong Song, Woong Sik Shin
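
One plausible reading of the throughput calculation in patent 10558562 above is: take the ratio of used pages in the open block to the pending write data, and multiply it by the sum of the usable empty block count and the previous throughput average. The sketch below encodes that reading, not the patent's claim language, and all of the numbers are invented.

```c
#include <stdio.h>

/* One possible interpretation of the nth garbage collection throughput:
 *   gc_n = (used_pages / write_data) * (empty_blocks + prev_avg)
 * This is an illustrative reading of the abstract, not the claimed formula. */
static double gc_throughput(double used_pages, double write_data,
                            double empty_blocks, double prev_avg)
{
    if (write_data <= 0.0)
        return 0.0;                       /* nothing pending, no GC pressure */
    return (used_pages / write_data) * (empty_blocks + prev_avg);
}

int main(void)
{
    double prev_avg = 2.0;                /* (n-1)th throughput average, assumed */
    double gc_n = gc_throughput(128.0,    /* used pages in the open block */
                                512.0,    /* write data to be processed */
                                10.0,     /* usable empty memory blocks */
                                prev_avg);
    printf("nth GC throughput: %.3f\n", gc_n);
    return 0;
}
```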
  • Patent number: 10545875
    Abstract: Systems, apparatuses, and methods for implementing a tag accelerator cache are disclosed. A system includes at least a data cache and a control unit coupled to the data cache via a memory controller. The control unit includes a tag accelerator cache (TAC) for caching tag blocks fetched from the data cache. The data cache is organized such that multiple tags are retrieved in a single access. This allows hiding the tag latency penalty for future accesses to neighboring tags and improves cache bandwidth. When a tag block is fetched from the data cache, the tag block is cached in the TAC. Memory requests received by the control unit first lookup the TAC before being forwarded to the data cache. Due to the presence of spatial locality in applications, the TAC can filter out a large percentage of tag accesses to the data cache, resulting in latency and bandwidth savings.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: January 28, 2020
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Vydhyanathan Kalyanasundharam, Kevin M. Lepak, Ganesh Balakrishnan, Ravindra N. Bhargava
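
The flow in patent 10545875 above is a filter-cache pattern: look the tag up in the small tag accelerator cache first, and only on a miss fetch the whole tag block from the data cache and install it, so neighboring tags hit cheaply afterward. Below is a rough, direct-mapped C sketch under assumed sizes; the real TAC geometry is not specified in the abstract.

```c
#include <stdbool.h>
#include <stdio.h>

#define TAC_SETS        4   /* assumed: tiny direct-mapped tag accelerator cache */
#define TAGS_PER_BLOCK  8   /* multiple tags retrieved per data-cache access */

struct tac_entry { bool valid; unsigned block_id; };
static struct tac_entry tac[TAC_SETS];

static unsigned fetches_from_data_cache;

/* Fetch a whole tag block from the (slower) data cache and cache it in the TAC. */
static void fetch_tag_block(unsigned block_id)
{
    fetches_from_data_cache++;
    tac[block_id % TAC_SETS] = (struct tac_entry){ .valid = true, .block_id = block_id };
}

/* Look up one tag: a hit in the TAC filters out a data-cache tag access. */
static void lookup_tag(unsigned tag_addr)
{
    unsigned block_id = tag_addr / TAGS_PER_BLOCK;
    struct tac_entry *e = &tac[block_id % TAC_SETS];
    if (!(e->valid && e->block_id == block_id))
        fetch_tag_block(block_id);        /* miss: go to the data cache once */
}

int main(void)
{
    /* Neighboring tags share a block, so only the first access pays the penalty. */
    for (unsigned t = 0; t < 16; t++)
        lookup_tag(t);
    printf("data-cache tag fetches: %u (out of 16 lookups)\n", fetches_from_data_cache);
    return 0;
}
```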
  • Patent number: 10528269
    Abstract: A controller of a storage system may poll a non-volatile memory component to determine an operational status of the memory component after a memory operation has been initiated in the memory component. The controller may, in response to determining the operational status of the memory component is busy, update a polling interval based on a polling factor. The controller may re-poll the memory component to determine the operational status of the memory component after expiration of the updated polling interval. The controller may repeat the updating of the polling interval and the re-polling of the memory component until the operational status of the memory component is determined to be ready or until a predetermined number of iterations of the updating and re-polling have been performed if, in response to the re-polling, the operational status is determined to be busy.
    Type: Grant
    Filed: April 23, 2018
    Date of Patent: January 7, 2020
    Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventor: Mark Elliott
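
The polling loop in patent 10528269 above widens its wait between status checks by a polling factor and gives up after a fixed number of iterations. A hedged C sketch follows; the multiplicative factor, the simulated device, and the iteration cap are assumptions made for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

#define MAX_POLLS 8   /* assumed: predetermined number of re-poll iterations */

/* Simulated NAND component: reports busy for a fixed number of polls. */
static int busy_polls_remaining = 5;
static bool component_is_busy(void)
{
    return busy_polls_remaining-- > 0;
}

/* Poll; while busy, grow the interval by the polling factor and re-poll,
 * stopping when the part is ready or the iteration budget is exhausted. */
static bool wait_until_ready(unsigned interval_us, unsigned polling_factor)
{
    for (int i = 0; i < MAX_POLLS; i++) {
        if (!component_is_busy())
            return true;                      /* operation finished */
        printf("poll %d: busy, next wait %u us\n", i, interval_us);
        interval_us *= polling_factor;        /* update the polling interval */
        /* a real controller would sleep for interval_us here */
    }
    return false;                             /* still busy after MAX_POLLS */
}

int main(void)
{
    bool ready = wait_until_ready(10, 2);
    printf("component %s\n", ready ? "ready" : "timed out");
    return 0;
}
```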
  • Patent number: 10528471
    Abstract: Methods and systems for self-invalidating cachelines in a computer system having a plurality of cores are described. A first one of the plurality of cores requests to load a memory block from a cache memory local to the first one of the plurality of cores, which request results in a cache miss. This results in checking a read-after-write detection structure to determine if a race condition exists for the memory block. If a race condition exists for the memory block, program order is enforced, in the first one of the plurality of cores that issued the load of the memory block, at least between any older loads and any younger loads with respect to the load that detects the prior store, and one or more cache lines in the local cache memory are caused to be self-invalidated.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: January 7, 2020
    Assignee: ETA SCALE AB
    Inventors: Alberto Ros, Stefanos Kaxiras
  • Patent number: 10496282
    Abstract: Storage group performance targets are achieved by managing resources using discrete techniques that are selected based on learned cost-benefit rank. The techniques include delaying the start of IOs based on storage group association, making a storage group active or passive on a port, and biasing front end cores. A performance goal may be assigned to each storage group based on the volume of IOs and the difference between an observed response time and a target response time. A decision tree is used to select a correction technique, and the selection is biased based on the cost of deployment. The decision tree maintains an average benefit of each technique over time, with rankings based on maximizing cost-benefit.
    Type: Grant
    Filed: March 16, 2016
    Date of Patent: December 3, 2019
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Owen Martin, Arieh Don, Michael Scharland
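
The selection step in patent 10496282 above keeps a running average benefit per correction technique and biases the choice by deployment cost. The simplified C sketch below ranks techniques by benefit minus cost and folds each observed benefit back into the running average; the three techniques come from the abstract, while the scoring rule and all numbers are invented.

```c
#include <stdio.h>

struct technique {
    const char *name;
    double avg_benefit;   /* learned: running average response-time improvement */
    double cost;          /* assumed: fixed cost of deploying the technique */
    int    uses;
};

static struct technique techs[] = {
    { "delay IO start by storage group", 0.0, 1.0, 0 },
    { "make group passive on a port",    0.0, 3.0, 0 },
    { "bias front end cores",            0.0, 2.0, 0 },
};
#define NTECH (sizeof(techs) / sizeof(techs[0]))

/* Pick the technique with the best learned cost-benefit score. */
static struct technique *select_technique(void)
{
    struct technique *best = &techs[0];
    for (size_t i = 1; i < NTECH; i++)
        if (techs[i].avg_benefit - techs[i].cost >
            best->avg_benefit - best->cost)
            best = &techs[i];
    return best;
}

/* Fold an observed benefit back into the technique's running average. */
static void record_benefit(struct technique *t, double observed)
{
    t->uses++;
    t->avg_benefit += (observed - t->avg_benefit) / t->uses;
}

int main(void)
{
    struct technique *t = select_technique();
    printf("deploying: %s\n", t->name);
    record_benefit(t, 4.0);          /* pretend it helped by 4 time units */
    printf("%s now averages %.2f benefit\n", t->name, t->avg_benefit);
    return 0;
}
```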
  • Patent number: 10489294
    Abstract: Embodiments of the present invention are directed to hot cache line arbitration. An example of a computer-implemented method for hot cache line arbitration includes receiving a request for exclusive access to a cache line from a requestor of a drawer in a processing system. The method further includes bringing the cache line to a local cache of the drawer. The method further includes invalidating copies of the cache line in the processing system. The method further includes loading a remote fetch address register (RFAR) controller on other drawers in the processing system, wherein the RFAR comprises a local pending flag and a remote pending flag.
    Type: Grant
    Filed: April 5, 2017
    Date of Patent: November 26, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael A. Blake, Rebecca M. Gott, Pak-Kin Mak, Vesselina K. Papazova
  • Patent number: 10481796
    Abstract: A method for screening bad data columns in a data storage medium comprising a plurality of data columns includes: labeling or recording a plurality of bad data columns as a bad data column group, wherein the bad data columns are selected from the data columns in the data storage medium, each of the bad data column groups labels or records a position and a number of the bad data columns; determining whether the total number of the bad data columns is greater than a total number of the bad data column groups; and if yes, labeling or recording any two bad data columns of the bad data columns spaced apart by P data columns as one of the bad data column groups, wherein P is a positive integer.
    Type: Grant
    Filed: July 6, 2018
    Date of Patent: November 19, 2019
    Assignee: SILICON MOTION, INC.
    Inventors: Sheng-Yuan Huang, Yu-Ping Chang
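
The screening rule in patent 10481796 above only starts pairing when there are more bad columns than available group slots, and then records two bad columns spaced exactly P columns apart as a single group. Below is a compact C sketch of that pairing pass; the group capacity, the column positions, and the value of P are assumptions.

```c
#include <stdbool.h>
#include <stdio.h>

#define MAX_GROUPS 4
#define P          3   /* assumed spacing between paired bad columns */

struct group { int first, second; };

int main(void)
{
    int bad_cols[] = { 2, 5, 9, 11, 14, 20 };   /* assumed bad column positions */
    int n_bad = sizeof(bad_cols) / sizeof(bad_cols[0]);
    struct group groups[MAX_GROUPS];
    int n_groups = 0;

    if (n_bad > MAX_GROUPS) {
        /* More bad columns than groups: pair up columns spaced P apart. */
        bool used[sizeof(bad_cols) / sizeof(bad_cols[0])] = { false };
        for (int i = 0; i < n_bad && n_groups < MAX_GROUPS; i++) {
            if (used[i]) continue;
            for (int j = i + 1; j < n_bad; j++) {
                if (!used[j] && bad_cols[j] - bad_cols[i] == P) {
                    groups[n_groups++] = (struct group){ bad_cols[i], bad_cols[j] };
                    used[i] = used[j] = true;
                    break;
                }
            }
        }
    }

    for (int g = 0; g < n_groups; g++)
        printf("group %d: columns %d and %d\n", g, groups[g].first, groups[g].second);
    return 0;
}
```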
  • Patent number: 10467086
    Abstract: A method of determining causes of external fragmentation in a memory. The method includes collecting information associated with release of an area of the memory by an application, storing the information in the area of the memory, and analyzing the information to determine why the area of the memory has not been reallocated to any application. In embodiments wherein a first portion of an area of a memory is allocated to an application by an allocator and a second portion of the area of the memory is released by the allocator, the method includes storing in the second portion of the area of the memory an indicator indicating that the second portion is a remaining portion, collecting information associated with release of the second portion, storing the information in the second portion, and analyzing the information to determine why the second portion is not reallocated to any application.
    Type: Grant
    Filed: August 4, 2017
    Date of Patent: November 5, 2019
    Assignee: International Business Machines Corporation
    Inventors: Matthew R. Kilner, David K. Siegwart
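
Patent 10467086 above reuses the freed region itself as the place to record why and when it was released, so a later pass can explain why the region was never reallocated. The C sketch below writes a small diagnostic record into the freed block; the record fields and the toy "analysis" are invented for illustration.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Diagnostic record stored inside the released area itself. */
struct release_info {
    unsigned magic;              /* marks the area as "released with info" */
    time_t   released_at;        /* when the area was given back */
    size_t   size;               /* how big the released area is */
    char     reason[32];         /* assumed: short tag describing the release site */
};
#define RELEASE_MAGIC 0x46524545u  /* "FREE" */

/* Record release information directly in the freed memory. */
static void record_release(void *area, size_t size, const char *reason)
{
    struct release_info *info = area;
    info->magic = RELEASE_MAGIC;
    info->released_at = time(NULL);
    info->size = size;
    snprintf(info->reason, sizeof(info->reason), "%s", reason);
}

/* Later analysis pass: explain why an area was never handed out again. */
static void analyze_area(const void *area)
{
    const struct release_info *info = area;
    if (info->magic == RELEASE_MAGIC)
        printf("area of %zu bytes released (%s), still unallocated\n",
               info->size, info->reason);
}

int main(void)
{
    void *area = malloc(128);               /* stand-in for a released heap region */
    if (!area)
        return 1;
    record_release(area, 128, "trailing remainder");
    analyze_area(area);
    free(area);
    return 0;
}
```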
  • Patent number: 10452301
    Abstract: Technologies are provided for storing data by alternating the performance of data write operations using multiple clusters of storage devices. Data is written to internal buffers of storage devices in one cluster while data stored in buffers of storage devices in another cluster is transferred to the storage devices' permanent storages. When available buffer capacity in a cluster falls below a specified threshold, data write commands are no longer sent to the cluster, and the storage devices in the cluster transfer data stored in their buffers to their permanent storages. While the data is being transferred, data write commands are transmitted to other clusters. When the data transfer is complete, the storage devices in the cluster can be scheduled to receive data write commands again. A cluster can be selected for performing a given data write request by matching the attributes of the cluster to parameters of the data write request.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: October 22, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Munif M. Farhan, Darin Lee Frink, Douglas Stewart Laurence
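
The scheduling idea in patent 10452301 above keeps writing to clusters whose device buffers still have room and rotates a cluster out of service while its devices flush buffers to permanent storage. A hedged C sketch of the cluster selection and drain threshold follows; the capacities, the threshold value, and the simplification that a flush completes by the next selection round are assumptions.

```c
#include <stdbool.h>
#include <stdio.h>

#define NCLUSTERS   3
#define BUF_CAP     100
#define DRAIN_BELOW 20   /* assumed: stop sending writes when free buffer < 20 */

struct cluster {
    int  free_buffer;    /* remaining internal buffer capacity across its devices */
    bool draining;       /* true while devices flush buffers to permanent storage */
};

static struct cluster clusters[NCLUSTERS] = {
    { BUF_CAP, false }, { BUF_CAP, false }, { BUF_CAP, false },
};

/* Pick a cluster that is accepting writes; mark exhausted clusters as draining. */
static int pick_cluster(int write_size)
{
    for (int i = 0; i < NCLUSTERS; i++) {
        if (clusters[i].draining) {
            clusters[i].free_buffer = BUF_CAP;   /* pretend the flush finished */
            clusters[i].draining = false;
        }
        if (clusters[i].free_buffer - write_size >= DRAIN_BELOW) {
            clusters[i].free_buffer -= write_size;
            return i;
        }
        clusters[i].draining = true;             /* below threshold: start flushing */
    }
    return -1;                                   /* no cluster can take the write */
}

int main(void)
{
    for (int w = 0; w < 10; w++)
        printf("write %d -> cluster %d\n", w, pick_cluster(30));
    return 0;
}
```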
  • Patent number: 10437482
    Abstract: A method of coordinating memory commands in a high-bandwidth memory (HBM+) system, the method including sending a host memory controller command from a host memory controller to a memory, receiving the host memory controller command at a coordinating memory controller, forwarding the host memory controller command from the coordinating memory controller to the memory, and scheduling, by the coordinating memory controller, a coordinating memory controller command based on the host memory controller command.
    Type: Grant
    Filed: October 2, 2017
    Date of Patent: October 8, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Mu-Tien Chang, Dimin Niu, Hongzhong Zheng
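
In patent 10437482 above, the coordinating controller sits between the host controller and the memory: it observes every host command, forwards it, and then schedules its own command around it. A minimal C sketch of that relay-and-schedule loop follows; the command structure and the specific scheduling rule (issue the local command right after the forwarded one) are assumptions.

```c
#include <stdio.h>

struct mem_cmd { const char *op; unsigned addr; };

/* The memory itself, as seen from the coordinating controller. */
static void issue_to_memory(const char *who, struct mem_cmd c)
{
    printf("%s -> memory: %s 0x%x\n", who, c.op, c.addr);
}

/* Coordinating memory controller: receives the host command, forwards it,
 * and schedules its own command based on what the host just did. */
static void coordinate(struct mem_cmd host_cmd)
{
    issue_to_memory("host (forwarded)", host_cmd);

    /* Assumed scheduling rule: piggyback a local operation on the same address. */
    struct mem_cmd local = { "LOCAL_OP", host_cmd.addr };
    issue_to_memory("coordinating ctrl", local);
}

int main(void)
{
    struct mem_cmd host = { "WRITE", 0x1000 };   /* sent by the host controller */
    coordinate(host);
    return 0;
}
```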
  • Patent number: 10423528
    Abstract: An apparatus includes: a processor core to execute an instruction; a first cache to retain data used by the processor core; and a second cache to be coupled to the first cache, wherein the second cache includes a data-retaining circuit to include storage areas to retain data, an information-retaining circuit to retain management information that includes first state information for indicating a state of data retained in the data-retaining circuit, a state-determining circuit to determine, based on the management information, whether requested data that is requested with a read request from the first cache is retained in the data-retaining circuit, and an eviction-processing circuit to, where the state-determining circuit determines the requested data not to be retained in the data-retaining circuit and there is not enough space in the storage areas to store the requested data, evict data from the storage areas without issuing an eviction request based on the read request.
    Type: Grant
    Filed: June 7, 2017
    Date of Patent: September 24, 2019
    Assignee: FUJITSU LIMITED
    Inventors: Kenta Umehara, Toru Hikichi, Hideaki Tomatsuri
  • Patent number: 10416928
    Abstract: The present disclosure relates to storing a data object to one or more storage devices of the data storage system in units of data blocks; storing a metadata structure for the data object including one or more direct metadata nodes, and optionally including a root metadata node and optionally further including one or more indirect metadata nodes, each direct metadata node including block pointers referencing respective data blocks of the respective data object; dividing the data object into plural compression units; compressing each compression unit of the plural compression units to a respective compressed unit associated with the respective compression unit; modifying, for each compression unit, block pointers of the direct metadata node referencing respective data blocks of the respective compression unit on the basis of the associated compressed unit; and managing I/O access to the data object based on the metadata structure of the data object.
    Type: Grant
    Filed: June 15, 2017
    Date of Patent: September 17, 2019
    Assignee: Hitachi, Ltd.
    Inventors: Christopher James Aston, Mitsuo Hayasaka
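
The layout in patent 10416928 above splits an object into fixed compression units, compresses each independently, and rewrites the direct-node block pointers so they lead to the compressed unit rather than the raw blocks. Below is a structural C sketch; the unit size, the stand-in "compression" (which merely records a smaller length), and the pointer fields are assumptions for illustration.

```c
#include <stdio.h>

#define BLOCK_SIZE     4   /* assumed data block size (tiny, for illustration) */
#define UNIT_BLOCKS    2   /* assumed: compression unit = 2 data blocks */
#define OBJECT_BLOCKS  6

/* Block pointer in a direct metadata node: where the block's bytes live and
 * how many compressed bytes back it. */
struct block_ptr { unsigned device_offset; unsigned compressed_len; };

static struct block_ptr direct_node[OBJECT_BLOCKS];

/* Pretend-compress one unit and repoint all of its blocks at the result. */
static void compress_unit(unsigned first_block, unsigned device_offset)
{
    unsigned raw_len = UNIT_BLOCKS * BLOCK_SIZE;
    unsigned compressed_len = raw_len / 2;       /* stand-in for a real codec */

    for (unsigned b = 0; b < UNIT_BLOCKS; b++) {
        direct_node[first_block + b].device_offset = device_offset;
        direct_node[first_block + b].compressed_len = compressed_len;
    }
}

int main(void)
{
    unsigned device_offset = 0;
    for (unsigned b = 0; b < OBJECT_BLOCKS; b += UNIT_BLOCKS) {
        compress_unit(b, device_offset);
        device_offset += direct_node[b].compressed_len;
    }
    for (unsigned b = 0; b < OBJECT_BLOCKS; b++)
        printf("block %u -> offset %u (unit of %u bytes)\n",
               b, direct_node[b].device_offset, direct_node[b].compressed_len);
    return 0;
}
```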
  • Patent number: 10410710
    Abstract: Steering logic circuitry includes bit-flipping logic that determines a first neighboring redundant word line adjacent to a redundant word line of a memory bank, which also includes normal word lines. Redundant word lines include main word lines, each of which includes paired word lines. Each paired word line includes two redundant word lines. The steering logic circuitry also includes border determination logic that determines whether the redundant word line is on a border between the redundant word lines and an end of the memory bank or the normal word lines. The steering logic circuitry further includes main word line steering logic that determines a neighboring main word line that a second neighboring redundant word line adjacent to the redundant word line is disposed in, and paired word line steering logic that determines a neighboring paired word line that the second neighboring redundant word line is disposed in.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: September 10, 2019
    Assignee: Micron Technology, Inc.
    Inventor: Joosang Lee
  • Patent number: 10395424
    Abstract: A method and apparatus of copying data from a first memory location to a second memory location includes performing a copy operation selected out of one or more copy operations. The copy operations include performing interleaved data copying, performing a full wavefront copy operation, copying all data to a local data store (LDS) prior to copying to the second memory location, or pipelining the data for copying. The copy operation is applied to copy the data from the first location to the second memory location.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: August 27, 2019
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Guohua Jin, Todd Martin