Patents Examined by Gurtej Bansal
  • Patent number: 10489314
    Abstract: A memory module operable to communicate data with a memory controller via a data bus comprises a plurality of memory integrated circuits including first memory integrated circuits and second memory integrated circuits, a data buffer coupled between the first memory integrated circuits and the data bus, and between the second memory integrated circuits and the data bus, and logic coupled to the data buffer. The logic is configured to respond to a first memory command by providing first control signals to the data buffer to enable communication of at least one first data signal between the first memory integrated circuits and the memory controller through the data buffer, and is further configured to respond to a second memory command by providing second control signals to the data buffer to enable communication of at least one second data signal between the second memory integrated circuits and the memory controller through the data buffer.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: November 26, 2019
    Assignee: Netlist, Inc.
    Inventors: Jefferey C. Solomon, Jayesh R. Bhakta
  • Patent number: 10474638
    Abstract: An illustrative pseudo-file-system driver uses deduplication functionality and resources in a storage management system to provide an application and/or a virtual machine with access to a locally-stored file system. From the perspective of the application/virtual machine, the file system appears to be of virtually unlimited capacity. The pseudo-file-system driver instantiates the file system in primary storage, e.g., configured on a local disk. The application/virtual machine requires no configured settings or limits for the file system's storage capacity, and may thus treat the file system as “infinite.” The pseudo-file-system driver intercepts write requests and may use the deduplication infrastructure in the storage management system to offload excess data from local primary storage to deduplicated secondary storage, based on a deduplication database.
    Type: Grant
    Filed: February 20, 2018
    Date of Patent: November 12, 2019
    Assignee: Commvault Systems, Inc.
    Inventors: Amit Mitkar, Paramasivam Kumarasamy, Rajiv Kottomtharayil
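    Illustrative sketch (Python): a minimal model of the intercepted write path described in the abstract above; the local-capacity check, the dedup_db dictionary, and the offload_to_secondary hook are assumptions for illustration, not details from the patent.
      import hashlib

      def handle_write(path, data, local_fs, dedup_db, offload_to_secondary, local_limit_bytes):
          """Intercept a write to the 'infinite' file system: keep the data in local
          primary storage while there is room, otherwise offload it to deduplicated
          secondary storage and keep only its fingerprint locally."""
          fingerprint = hashlib.sha256(data).hexdigest()
          used = sum(len(v) for v in local_fs.values() if isinstance(v, bytes))
          if used + len(data) <= local_limit_bytes:
              local_fs[path] = data                    # fits in local primary storage
          else:
              if fingerprint not in dedup_db:          # deduplicate before offloading
                  offload_to_secondary(fingerprint, data)
                  dedup_db[fingerprint] = True
              local_fs[path] = fingerprint             # local reference to offloaded data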
  • Patent number: 10474587
    Abstract: Smart weighted container data cache eviction preserves write evict units (WEUs) containing the most frequently and recently accessed blocks to maintain a low-latency data cache. Prior to performing cache eviction, the WEUs are weighted based on the page statistics maintained for each WEU. Page statistics include page hit/frequency and recency statistics associated with each WEU, and data cache eviction is performed at the WEU level of granularity. Therefore, an entire WEU can be evicted based on page hit/frequency and recency statistics associated with the WEU.
    Type: Grant
    Filed: April 27, 2017
    Date of Patent: November 12, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Satish Kumar Kashi Visvanathan, Rahul Ugale
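    Illustrative sketch (Python): a toy cache that keeps per-WEU hit and recency statistics and evicts a whole WEU at a time; the weighting formula is an assumption, since the abstract only says WEUs are weighted from their page statistics.
      import time

      class WeuCache:
          """Container cache tracking hit/recency statistics per write-evict unit
          (WEU) and evicting the lowest-weighted WEU as a whole."""

          def __init__(self, max_weus):
              self.max_weus = max_weus
              self.weus = {}   # weu_id -> {"blocks": {...}, "hits": int, "last_access": float}

          def access(self, weu_id, block_id):
              weu = self.weus.get(weu_id)
              if weu and block_id in weu["blocks"]:
                  weu["hits"] += 1
                  weu["last_access"] = time.monotonic()
                  return weu["blocks"][block_id]       # cache hit
              return None                              # cache miss

          def insert(self, weu_id, block_id, data):
              if weu_id not in self.weus and len(self.weus) >= self.max_weus:
                  self._evict_one()
              weu = self.weus.setdefault(
                  weu_id, {"blocks": {}, "hits": 0, "last_access": time.monotonic()})
              weu["blocks"][block_id] = data

          def _evict_one(self):
              now = time.monotonic()
              def weight(weu):                         # hit frequency discounted by age
                  return weu["hits"] / (1.0 + now - weu["last_access"])
              victim = min(self.weus, key=lambda wid: weight(self.weus[wid]))
              del self.weus[victim]                    # evict the entire WEU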
  • Patent number: 10467137
    Abstract: Provided are an apparatus, system, integrated circuit die, and method for caching data in a hierarchy of caches. A first cache line in a first level cache having modified data for an address is processed. Each cache line of cache lines in the first level cache store data for one of a plurality of addresses stored in multiple cache lines of a second level cache. A second cache line in the second level cache is selected and a determination is made of a number of corresponding bits in the first cache line and the second cache line that are different. Bits in the first cache line that are different from the corresponding bits in the second cache line are written to the corresponding bits in the second cache line in response to a determination that the number of corresponding bits that are different is less than a threshold.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: November 5, 2019
    Assignee: INTEL CORPORATION
    Inventors: Helia Naeimi, Qi Zeng
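    Illustrative sketch (Python): the selective write-back described in the abstract above, modeled on integers standing in for cache lines; the threshold value and the fallback full-line write are placeholders.
      def writeback_partial(l1_line, l2_line, line_bits, threshold):
          """Merge a modified first-level line into its second-level copy by writing
          only the differing bits when their count is below the threshold."""
          mask = (1 << line_bits) - 1
          diff = (l1_line ^ l2_line) & mask            # 1-bits mark differing positions
          if bin(diff).count("1") < threshold:
              return l2_line ^ diff                    # selective bit writes; equals l1_line
          return l1_line & mask                        # otherwise rewrite the whole line

      # A 16-bit line differing in 2 bits is merged with just 2 bit writes.
      assert writeback_partial(0b1010101010101010, 0b1010101010100000, 16, 4) == 0b1010101010101010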
  • Patent number: 10452532
    Abstract: The present disclosure includes apparatuses and methods for directed sanitization of memory. One example method comprises, responsive to receiving a sanitization command, performing a deterministic garbage collection operation on a memory, wherein performing the deterministic garbage collection operation results in physical erasure of all invalid data stored on the memory without losing valid data stored on the memory.
    Type: Grant
    Filed: January 12, 2017
    Date of Patent: October 22, 2019
    Assignee: Micron Technology, Inc.
    Inventors: Jeffrey L. McVay, Daniel J. Hubbard, Robert W. Strong, Michael B. Danielson, Jonathan Tanguy
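    Illustrative sketch (Python): a deterministic garbage-collection pass over a dictionary standing in for flash blocks; the pages-per-block figure and the relocation order are assumptions.
      def sanitize(flash, pages_per_block=4):
          """Physically erase every block holding invalid data, relocating valid
          pages first so no valid data is lost.
          `flash` maps block_id -> list of (page_data, is_valid) tuples."""
          relocated = []
          for block_id, pages in flash.items():
              if all(is_valid for _, is_valid in pages):
                  continue                             # block holds no invalid data
              relocated.extend(p for p, is_valid in pages if is_valid)
              flash[block_id] = []                     # models the physical block erase
          for pages in flash.values():                 # write the valid pages back
              while relocated and len(pages) < pages_per_block:
                  pages.append((relocated.pop(), True))
          return flash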
  • Patent number: 10452300
    Abstract: Each node includes a cache to store data of the storage shared by the plurality of nodes. Time information is recorded when a process accessing data migrates from one node to another node. The one node, after migration of the process to the other node, selectively invalidates data held in the cache of the one node with a time of last access thereto by the process on the one node being older than a time of migration of the process from the one node to the other node.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: October 22, 2019
    Assignee: NEC Corporation
    Inventor: Shugo Ogawa
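    Illustrative sketch (Python): the selective invalidation step on the source node; the per-process last-access map inside each cache entry is an assumed bookkeeping structure.
      def invalidate_after_migration(cache, process_id, migration_time):
          """Invalidate data on the source node whose last access by the migrated
          process is older than the time the process moved to the other node.
          `cache` maps key -> {"data": ..., "last_access": {pid: timestamp}}."""
          for key in list(cache):
              last = cache[key]["last_access"].get(process_id)
              if last is not None and last < migration_time:
                  del cache[key]                       # stale here once the process has left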
  • Patent number: 10430331
    Abstract: A solid-state drive (SSD) is configured for dynamic resizing. When the SSD approaches the end of its useful life because the over-provisioning amount is nearing the minimum threshold as a result of an increasing number of bad blocks, the SSD is reformatted with a reduced logical capacity so that the over-provisioning amount may be maintained above the minimum threshold.
    Type: Grant
    Filed: July 2, 2018
    Date of Patent: October 1, 2019
    Assignee: Toshiba Memory Corporation
    Inventor: Daisuke Hashimoto
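    Illustrative sketch (Python): the capacity check behind the dynamic resizing; counting capacity in blocks and the minimum over-provisioning threshold are simplifications.
      def resized_logical_capacity(physical_blocks, bad_blocks, logical_blocks, min_op_blocks):
          """Return a (possibly reduced) logical capacity that keeps the
          over-provisioning amount at or above the minimum threshold."""
          usable = physical_blocks - bad_blocks
          over_provisioning = usable - logical_blocks
          if over_provisioning >= min_op_blocks:
              return logical_blocks                    # enough spare area, no change
          return max(usable - min_op_blocks, 0)        # shrink so OP returns to the minimum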
  • Patent number: 10430339
    Abstract: A memory management method includes determining a stride value for stride access by referring to a size of two-dimensional (2D) data, and allocating neighboring data in a vertical direction of the 2D data to a plurality of banks that are different from one another according to the determined stride value. Thus, the data in the vertical direction may be efficiently accessed by using a memory having a large data width.
    Type: Grant
    Filed: December 30, 2014
    Date of Patent: October 1, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ki-seok Kwon, Chul-soo Park, Suk-jin Kim
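    Illustrative sketch (Python): one way to pick a stride from the 2D width so a vertical walk touches a different bank on every step; the co-prime padding rule is an assumption, since the abstract only says the stride is determined from the size of the 2D data.
      from math import gcd

      def choose_stride(width, num_banks):
          """Pick a row pitch (stride) so vertically adjacent elements map to
          different banks; a stride co-prime with the bank count cycles all banks."""
          stride = width
          while gcd(stride, num_banks) != 1:
              stride += 1                              # pad the row until co-prime
          return stride

      def bank_of(row, col, stride, num_banks):
          return (row * stride + col) % num_banks

      # 8 banks, a 16-wide image padded to a 17-word pitch: walking down column 3
      # touches banks 3,4,5,6,7,0,1,2 - a different bank on every row.
      stride = choose_stride(16, 8)
      print([bank_of(r, 3, stride, 8) for r in range(8)])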
  • Patent number: 10423529
    Abstract: Implementations of this disclosure are directed to systems, methods and media for assessing the status of data being stored in distributed, cached databases that includes retrieving, from a data cache, variables which include a cache loss indicator and a non-null value. The variables are analyzed to determine a state of the cache loss indicator. If the cache loss indicator indicates an intentional cache loss state, the cache loss indicator is removed and the non-null value is provided to an application. Otherwise, a cache restore process is initiated.
    Type: Grant
    Filed: April 5, 2018
    Date of Patent: September 24, 2019
    Assignee: MZ IP HOLDINGS, LLC
    Inventors: Ajk Palikuqi, Garth Gillespie, Arya Bondarian, Jai Kim
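    Illustrative sketch (Python): the decision made on a retrieved cache entry; the dictionary layout of the entry and the restore callback are assumptions.
      def read_with_loss_check(cache, key, start_cache_restore):
          """Analyze the cache loss indicator stored with a value: an intentional
          loss clears the indicator and returns the non-null value to the caller,
          anything else starts a cache restore."""
          entry = cache.get(key, {})
          if entry.get("loss_indicator") == "intentional":
              del entry["loss_indicator"]              # remove the indicator
              return entry["value"]                    # non-null value for the application
          start_cache_restore(key)                     # otherwise initiate a restore
          return None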
  • Patent number: 10417130
    Abstract: Apparatuses, systems, and methods for a spatial memory streaming (SMS) prefetch engine are described. In one aspect, an SMS prefetch engine uses trigger-to-trigger stride detection to promote training table entries to pattern history table (PHT) entries and to drive spatially related prefetches in more distant regions. In another aspect, an SMS prefetch engine maintains a blacklist of program counter (PC) values to not use as trigger values. In yet another aspect, an SMS prefetch engine uses hashed values of certain fields, such as the trigger PC, in entries of, e.g., filter tables, training tables, and PHTs, as index values for the table.
    Type: Grant
    Filed: October 10, 2017
    Date of Patent: September 17, 2019
    Assignee: Samsung Electronics Co., Ltd
    Inventors: Edward A Brekelbaum, Arun Radhakrishnan
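    Illustrative sketch (Python): indexing a training or pattern history table by a hashed trigger program counter while honoring a blacklist of PCs; the hash itself is a placeholder.
      def table_index(trigger_pc, blacklist, table_entries):
          """Return the table slot for a trigger PC, or None if the PC is
          blacklisted and must not act as a trigger."""
          if trigger_pc in blacklist:
              return None                              # blacklisted PCs never trigger
          hashed = (trigger_pc ^ (trigger_pc >> 13)) & 0xFFFF   # illustrative hash
          return hashed % table_entries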
  • Patent number: 10417045
    Abstract: An apparatus and a method are provided that comprise at least one first processing unit configured to run at least one first computer program application capable of receiving and processing signals received from at least one interface or device connected to said first processing unit, and at least one second processing unit configured to run at least a second computer program application capable of further processing at least some information processed in said first processing unit.
    Type: Grant
    Filed: April 18, 2016
    Date of Patent: September 17, 2019
    Assignee: Amer Sports Digital Services Oy
    Inventors: Erik Lindman, Jyrki Uusitalo, Timo Eriksson, Tomi Lehto, Tero Aurto
  • Patent number: 10416896
    Abstract: A memory module includes a memory device, a command/address buffering device, and a processing data buffer. The memory device includes a memory cell array, a first set of input/output terminals, each terminal configured to receive first command/address bits, and a second set of input/output terminals, each terminal configured to receive both data bits and second command/address bits. The command/address buffering device is configured to output the first command/address bits to the first set of input/output terminals. The processing data buffer is configured to output the data bits and second command/address bits to the second set of input/output terminals. The memory device is configured such that the first command/address bits, second command/address bits, and data bits are all used to access the memory cell array.
    Type: Grant
    Filed: May 23, 2017
    Date of Patent: September 17, 2019
    Assignees: Samsung Electronics Co., Ltd., SNU R&DB Foundation, Wisconsin Alumni Research Foundation
    Inventors: Seong-Il O, Nam Sung Kim, Young-Hoon Son, Chan-Kyung Kim, Ho-Young Song, Jung Ho Ahn, Sang-Joon Hwang
  • Patent number: 10409729
    Abstract: Control over the overall data cache hit rate is obtained by partitioning caching responsibility by address space. Data caches determine whether to cache data by hashing the data address. Each data cache is assigned a range of hash values to serve. By choosing hash value ranges that do not overlap, data duplication can be eliminated if desired, or degrees of overlap can be allowed. Control over hit rate maximization of data caches having the best hit response times is obtained by maintaining separate dedicated and undedicated partitions within each cache. The dedicated partition is only used for the assigned range of hash values.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: September 10, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Amnon Naamad, Sean Dolan
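    Illustrative sketch (Python): assigning caching responsibility by hashing the data address into per-cache hash ranges; the SHA-256 hash and the 0-999 hash space are arbitrary choices for the example.
      import hashlib

      def responsible_caches(address, cache_ranges):
          """Return the caches whose assigned (lo, hi) hash range covers the hashed
          address; non-overlapping ranges eliminate duplication, while overlapping
          ranges allow controlled degrees of it."""
          h = int(hashlib.sha256(str(address).encode()).hexdigest(), 16) % 1000
          return [name for name, (lo, hi) in cache_ranges.items() if lo <= h <= hi]

      # Two caches splitting the hash space with no overlap.
      ranges = {"cacheA": (0, 499), "cacheB": (500, 999)}
      print(responsible_caches(0x7F001234, ranges))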
  • Patent number: 10409597
    Abstract: Embodiments of an invention for memory management in secure enclaves are disclosed. In one embodiment, a processor includes an instruction unit and an execution unit. The instruction unit is to receive a first instruction and a second instruction. The execution unit is to execute the first instruction, wherein execution of the first instruction includes allocating a page in an enclave page cache to a secure enclave. The execution unit is also to execute the second instruction, wherein execution of the second instruction includes confirming the allocation of the page.
    Type: Grant
    Filed: May 7, 2018
    Date of Patent: September 10, 2019
    Assignee: Intel Corporation
    Inventors: Rebekah Leslie-Hurd, Carlos V. Rozas, Vincent R. Scarlata, Simon P. Johnson, Uday R. Savagaonkar, Barry E. Huntley, Vedvyas Shanbhogue, Ittai Anati, Francis X. Mckeen, Michael A. Goldsmith, Ilya Alexandrovich, Alex Berenzon, Wesley H. Smith, Gilbert Neiger
  • Patent number: 10394475
    Abstract: Embodiments of the present invention disclose a method, computer program product, and system for allocating memory. A computer receives a request for memory to be allocated to a computer node and determines if the allocation request needs to be carried out on a cluster level, a server rack level, or on a server level. The computer retrieves a memory policy associated with the determined level the allocation request needs to be carried out on from a memory policy database and determines how much available memory may be allocated and if there is enough available memory to meet the request. The computer reallocates the available memory to address the received request based on the retrieved memory policy.
    Type: Grant
    Filed: March 1, 2017
    Date of Patent: August 27, 2019
    Assignee: International Business Machines Corporation
    Inventors: Zhong Li, Xian Dong Meng
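    Illustrative sketch (Python): serving an allocation request at the level it must be carried out on; representing each level's policy as a maximum fraction of free memory is an assumption, since the abstract only says a policy is retrieved per level.
      def allocate(request, policies, free_memory):
          """Apply the policy for the cluster, rack, or server level named in the
          request, check availability, and reallocate memory if the request fits."""
          level = request["level"]                     # "cluster", "rack", or "server"
          grantable = free_memory[level] * policies[level]["max_fraction"]
          if request["size"] <= grantable:
              free_memory[level] -= request["size"]    # reallocate to the requesting node
              return request["size"]
          return 0                                     # not enough memory under the policy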
  • Patent number: 10394477
    Abstract: Embodiments of the present invention disclose a method, computer program product, and system for allocating memory. A computer receives a request for memory to be allocated to a computer node and determines if the allocation request needs to be carried out on a cluster level, a server rack level, or on a server level. The computer retrieves a memory policy associated with the determined level the allocation request needs to be carried out on from a memory policy database and determines how much available memory may be allocated and if there is enough available memory to meet the request. The computer reallocates the available memory to address the received request based on the retrieved memory policy.
    Type: Grant
    Filed: December 13, 2017
    Date of Patent: August 27, 2019
    Assignee: International Business Machines Corporation
    Inventors: Zhong Li, Xian Dong Meng
  • Patent number: 10387045
    Abstract: The present invention relates to an apparatus and a method for managing a buffer having three states on the basis of a flash memory and, more specifically, to an apparatus and a method for improving the performance of a flash-memory-based database management system (DBMS) and the useful life span of a storage device by reducing write operations to a flash memory device, in which a write operation is very slow in comparison with a read operation, through an efficient buffer managing method and a new index node split policy. To this end, the buffer management device having three states on the basis of the flash memory according to an embodiment of the present invention comprises: a buffer memory unit; a list management unit; a buffer memory management unit; and a log buffer unit.
    Type: Grant
    Filed: July 3, 2014
    Date of Patent: August 20, 2019
    Assignee: AJOU UNIVERSITY INDUSTRY-ACADEMIC COOPERATION FOUNDATION
    Inventors: Tae Sun Chung, Rize Jin, Hyung Ju Cho
  • Patent number: 10372336
    Abstract: A file access method, a system, and a host are provided. According to the method, after obtaining information about first virtual space of a target file, a host allocates, in local virtual address space of the host, second virtual space to the target file, where the first virtual space is space allocated in global virtual address space by a management node in a distributed storage system to the target file. The host converts, according to a correspondence between the first virtual space and the second virtual space, a second access request of accessing the second virtual space into a first access request, where an address of the first virtual space in the first access request includes device information of a first storage node. Then, the host sends the first access request to a network device to route the first access request to the first storage node.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: August 6, 2019
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Jun Xu, Yuangang Wang, Guanyu Zhu
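    Illustrative sketch (Python): converting an access in the host's local (second) virtual space into a request on the global (first) virtual space; the mapping table layout and the embedded node id are assumptions.
      def to_first_space_request(local_addr, space_map):
          """Translate a second-space address into a first-space address whose
          region identifies the storage node that holds the data."""
          for local_base, (global_base, length, storage_node) in space_map.items():
              if local_base <= local_addr < local_base + length:
                  offset = local_addr - local_base
                  return {"global_addr": global_base + offset, "node": storage_node}
          raise ValueError("local address is not mapped to any first-space region")

      # One target file mapped at local 0x1000 onto global 0x70000000, owned by node 3.
      space_map = {0x1000: (0x70000000, 0x2000, 3)}
      print(to_first_space_request(0x1800, space_map))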
  • Patent number: 10360040
    Abstract: The present application relates generally to a parallel processing device. The parallel processing device can include a plurality of processing elements, a memory subsystem, and an interconnect system. The memory subsystem can include a plurality of memory slices, at least one of which is associated with one of the plurality of processing elements and comprises a plurality of random access memory (RAM) tiles, each tile having individual read and write ports. The interconnect system is configured to couple the plurality of processing elements and the memory subsystem. The interconnect system includes a local interconnect and a global interconnect.
    Type: Grant
    Filed: February 20, 2018
    Date of Patent: July 23, 2019
    Assignee: Movidius, LTD.
    Inventors: David Moloney, Richard Richmond, David Donohoe, Brendan Barry
  • Patent number: 10354095
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to initialize enclaves on target processors. An example apparatus includes an image file retriever to retrieve configuration parameters associated with an enclave file, and an address space manager to calculate a minimum virtual address space value for an enclave image layout based on the configuration parameters, and generate an optimized enclave image layout to allow enclave image execution on unknown target processor types by multiplying the minimum address space value by a virtual address factor to determine an optimized virtual address space value for the optimized enclave image layout.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: July 16, 2019
    Assignee: Intel Corporation
    Inventor: Bin Xing
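    Illustrative sketch (Python): the sizing arithmetic described in the abstract above; deriving the minimum from a simple sum of section sizes is an assumption.
      def optimized_virtual_space(section_sizes, virtual_address_factor):
          """Compute the optimized virtual address space value: the minimum space
          the enclave image needs, scaled by a virtual-address factor so the
          layout also runs on unknown target processor types."""
          minimum = sum(section_sizes)                 # minimum virtual address space value
          return minimum * virtual_address_factor

      # A 2 MiB minimum layout doubled to leave headroom across target processors.
      print(hex(optimized_virtual_space([0x100000, 0x100000], 2)))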