Patents Examined by Trang K Ta
  • Patent number: 10802995
    Abstract: A system may include a host processor coupled to a communication bus, a first hardware accelerator communicatively linked to the host processor through the communication bus, and a second hardware accelerator communicatively linked to the host processor through the communication bus. The first hardware accelerator and the second hardware accelerator are directly coupled through an accelerator link independent of the communication bus. The host processor is configured to initiate a data transfer between the first hardware accelerator and the second hardware accelerator directly through the accelerator link.
    Type: Grant
    Filed: July 26, 2018
    Date of Patent: October 13, 2020
    Assignee: Xilinx, Inc.
    Inventors: Sarabjeet Singh, Hem C. Neema, Sonal Santan, Khang K. Dao, Kyle Corbett, Yi Wang, Christopher J. Case
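    The abstract above describes a control flow rather than an API. A minimal Python sketch of that flow follows; the classes, the in-memory model, and the method names are invented for illustration and are not taken from the patent.
```python
# Host-initiated transfer between two accelerators over a direct link
# that bypasses the shared communication bus (illustrative model only).

class Accelerator:
    def __init__(self, name):
        self.name = name
        self.memory = {}        # offset -> bytes, stands in for device memory
        self.link_peer = None   # accelerator reached over the direct link

    def connect_link(self, peer):
        self.link_peer = peer
        peer.link_peer = self

    def push_over_link(self, src_offset, length, dst_offset):
        # The payload moves accelerator-to-accelerator; the host only commands it.
        self.link_peer.memory[dst_offset] = self.memory[src_offset][:length]

class Host:
    def initiate_transfer(self, src, dst, src_offset, length, dst_offset):
        # The host issues the command over the communication bus, but the
        # data itself travels over the accelerator link.
        assert src.link_peer is dst, "accelerators must share a direct link"
        src.push_over_link(src_offset, length, dst_offset)

acc1, acc2 = Accelerator("acc1"), Accelerator("acc2")
acc1.connect_link(acc2)
acc1.memory[0] = b"payload"
Host().initiate_transfer(acc1, acc2, src_offset=0, length=7, dst_offset=0x100)
print(acc2.memory[0x100])   # b'payload'
```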
  • Patent number: 10795877
    Abstract: Disclosed herein are embodiments for performing multi-version concurrency control (MVCC) in non-volatile memory. An embodiment operates by determining that an event occurred, wherein one or more write transactions to one or more records of a multi-version database that were pending prior to the event did not commit. The one or more write transactions are identified based on a commit value that was stored in the non-volatile memory prior to the event. A particular one of the identified uncommitted write transactions is selected. From the multi-version database, a first version of a record corresponding to the selected uncommitted write transaction that was not committed, and an earlier version of the record that was committed prior to the event are identified. A visibility of the record is set to indicate that the earlier version of the record is visible and the first version of the record is not visible.
    Type: Grant
    Filed: February 20, 2018
    Date of Patent: October 6, 2020
    Assignee: SAP SE
    Inventors: Ismail Oukid, Wolfgang Lehner, Daniel dos Santos Bossle
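    A minimal Python sketch of the recovery step described in the abstract above; the version and commit-state structures are invented stand-ins for whatever the patented system persists in non-volatile memory.
```python
# After an event (e.g., a crash), versions written by transactions whose commit
# value was never persisted are made invisible; the last committed version of
# each record remains the visible one (illustrative model only).

COMMITTED, PENDING = "committed", "pending"

def recover_visibility(versions, txn_commit_state):
    """versions: list of dicts with 'record_id', 'txn_id', 'visible'.
    txn_commit_state: txn_id -> commit value read from non-volatile memory."""
    for v in versions:
        v["visible"] = txn_commit_state.get(v["txn_id"]) == COMMITTED
    return versions

versions = [
    {"record_id": 1, "txn_id": 10, "visible": True},   # committed before the event
    {"record_id": 1, "txn_id": 42, "visible": True},   # write was still pending
]
recover_visibility(versions, {10: COMMITTED, 42: PENDING})
print([v["visible"] for v in versions])   # [True, False]
```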
  • Patent number: 10789015
    Abstract: The present disclosure includes apparatuses and methods related to performing background operations in memory. A memory device can be configured to perform background operations while another memory device in a memory system and/or on a common memory module is busy performing commands received from a host coupled to the memory system and/or common memory module. An example apparatus can include a first memory device, wherein the first memory device can include an array of memory cells and a controller configured to perform a background operation on the first memory device in response to detecting a command from a host to a second memory device.
    Type: Grant
    Filed: March 1, 2019
    Date of Patent: September 29, 2020
    Assignee: Micron Technology, Inc.
    Inventors: Frank F. Ross, Matthew A. Prather
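    A small Python sketch of the behavior described above; the controller class and the choice of what counts as a background operation are invented for the example.
```python
# A device controller that starts a background operation on its own device
# whenever it detects a host command addressed to a different device.

class DeviceController:
    def __init__(self, device_id):
        self.device_id = device_id
        self.background_ops_run = 0

    def observe_command(self, target_device_id):
        # The command targets another device, so this device is otherwise idle:
        # use the window for housekeeping (e.g., refresh or wear leveling).
        if target_device_id != self.device_id:
            self.background_ops_run += 1

ctrl_a, ctrl_b = DeviceController("A"), DeviceController("B")
for ctrl in (ctrl_a, ctrl_b):
    ctrl.observe_command(target_device_id="B")   # host command addressed to device B
print(ctrl_a.background_ops_run, ctrl_b.background_ops_run)   # 1 0
```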
  • Patent number: 10788989
    Abstract: A system and a method are disclosed for providing for non-uniform memory access (NUMA) resource assignment and re-evaluation. In one example, the method includes receiving, by a processing device, a request to launch a first process in a system having a plurality of Non-Uniform Memory Access (NUMA) nodes, determining, by the processing device, a resource requirement of the first process, determining, based on resources available on the plurality of NUMA nodes, a preferred NUMA node of the plurality of NUMA nodes to execute the first process, the preferred NUMA node being determined by the processing device without user input, and binding, by the processing device, the first process to the preferred NUMA node.
    Type: Grant
    Filed: July 6, 2018
    Date of Patent: September 29, 2020
    Assignee: Red Hat, Inc.
    Inventor: William Samuel Gray
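    A minimal Python sketch of the selection-and-bind flow in the abstract above; the node fields and the tie-breaking policy are assumptions made for the example, not details from the patent.
```python
# Choose a preferred NUMA node that can satisfy the process's resource
# requirement, without user input; the process would then be bound to it.

def pick_preferred_node(nodes, cpus_needed, mem_needed_mb):
    """nodes: list of dicts with 'id', 'free_cpus', 'free_mem_mb'."""
    candidates = [n for n in nodes
                  if n["free_cpus"] >= cpus_needed and n["free_mem_mb"] >= mem_needed_mb]
    if not candidates:
        return None
    # One possible policy: keep the most memory headroom after placement.
    return max(candidates, key=lambda n: n["free_mem_mb"] - mem_needed_mb)

nodes = [{"id": 0, "free_cpus": 2, "free_mem_mb": 4096},
         {"id": 1, "free_cpus": 8, "free_mem_mb": 16384}]
preferred = pick_preferred_node(nodes, cpus_needed=4, mem_needed_mb=8192)
print(preferred["id"])   # 1 -- the first process would be bound to this node
```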
  • Patent number: 10782914
    Abstract: A buffer system may include a buffer configured to receive input data having an assigned priority level, store the input data within a memory stack regardless of the priority level assigned to the input data, and sequentially output the input data stored in the memory stack in order of the priority levels assigned to the input data.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: September 22, 2020
    Assignee: SK hynix Inc.
    Inventors: Seunggyu Jeong, Jung Hyun Kwon, Wongyu Shin, Do-Sun Hong
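    The described behavior maps closely onto a priority queue; a short Python sketch follows, using a heap with a sequence counter so entries of equal priority drain in arrival order (an assumption, since the abstract does not specify tie-breaking).
```python
import heapq

class PriorityBuffer:
    """Accepts input data in arrival order regardless of priority, then
    outputs it in order of the assigned priority levels."""

    def __init__(self):
        self._heap = []
        self._seq = 0   # preserves arrival order within a priority level

    def push(self, data, priority):
        # Lower number = higher priority in this sketch.
        heapq.heappush(self._heap, (priority, self._seq, data))
        self._seq += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]

buf = PriorityBuffer()
buf.push("low-priority write", 3)
buf.push("urgent read", 0)
buf.push("normal read", 1)
print(buf.pop(), "|", buf.pop(), "|", buf.pop())
# urgent read | normal read | low-priority write
```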
  • Patent number: 10769076
    Abstract: Multiprocessor clusters in a virtualized environment conventionally fail to provide memory access security, which is frequently a requirement for efficient utilization in multi-client settings. Without adequate access security, a malicious process may access what might be confidential data that belongs to a different client sharing the multiprocessor cluster. Furthermore, an inadvertent programming error in the code for one client process may accidentally corrupt data that belongs to the different client. Neither scenario is acceptable. Embodiments of the present disclosure provide access security by enabling each processing node within a multiprocessor cluster to virtualize and manage local memory access and only process access requests possessing proper access credentials. In this way, different applications executing on a multiprocessor cluster may be isolated from each other while advantageously sharing the hardware resources of the multiprocessor cluster.
    Type: Grant
    Filed: November 21, 2018
    Date of Patent: September 8, 2020
    Assignee: NVIDIA Corporation
    Inventors: Samuel Hammond Duncan, Sanjeev Jain, Mark Douglas Hummel, Vyas Venkataraman, Olivier Giroux, Larry Robert Dennison, Alexander Toichi Ishii, Hemayet Hossain, Nir Haim Arad
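    A hedged Python sketch of the access-credential check described above; the region registry, credential format, and error handling are invented for the example.
```python
# A processing node that manages its local memory and only serves remote
# access requests carrying the credential registered for the target region.

class NodeMemory:
    def __init__(self):
        self._regions = {}   # region_id -> (credential, backing bytes)

    def register_region(self, region_id, credential, size):
        self._regions[region_id] = (credential, bytearray(size))

    def remote_read(self, region_id, credential, offset, length):
        expected, data = self._regions[region_id]
        if credential != expected:
            raise PermissionError("access credential rejected")
        return bytes(data[offset:offset + length])

node = NodeMemory()
node.register_region("client-A", credential="token-A", size=64)
node.remote_read("client-A", "token-A", 0, 8)        # permitted
try:
    node.remote_read("client-A", "token-B", 0, 8)    # another client: denied
except PermissionError as err:
    print(err)
```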
  • Patent number: 10761763
    Abstract: A cache buffer coupled to a page buffer includes: a first cache group and a second cache group corresponding to a first area and a second area of a memory cell array; a selector coupled to the first and second cache groups; and an input/output (I/O) controller coupled to the selector and configured to output data to the first and second cache groups or receive data input from the first and second cache groups. The selector performs a normal repair operation by transferring data received through a first data line to the first cache group and data received through a second data line to the second cache group, and performs a cross repair operation by transferring data received through the first data line to the second cache group and data received through the second data line to the first cache group.
    Type: Grant
    Filed: April 10, 2019
    Date of Patent: September 1, 2020
    Assignee: SK hynix Inc.
    Inventors: KangYoul Lee, Kyeong Min Chae
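    A small Python sketch of the selector's routing decision; the mode names are invented labels for the two operations the abstract describes.
```python
# In a normal repair operation each data line feeds its own cache group;
# in a cross repair operation the two data lines are swapped.

def route(selector_mode, line1_data, line2_data):
    """Returns (data for the first cache group, data for the second cache group)."""
    if selector_mode == "normal":
        return line1_data, line2_data
    if selector_mode == "cross":
        return line2_data, line1_data
    raise ValueError("unknown selector mode")

print(route("normal", "D1", "D2"))   # ('D1', 'D2')
print(route("cross", "D1", "D2"))    # ('D2', 'D1')
```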
  • Patent number: 10761779
    Abstract: Techniques enable offloading operations to be performed closer to where the data is stored in systems with sharded and erasure-coded data, such as in data centers. In one example, a system includes a compute sled or compute node, which includes one or more processors. The system also includes a storage sled or storage node. The storage node includes one or more storage devices. The storage node stores at least one portion of data that is sharded and erasure-coded. Other portions of the data are stored on other storage nodes. The compute node sends a request to offload an operation to the storage node to access the sharded and erasure-coded data. The storage node then sends a request to offload the operation to one or more other storage nodes determined to store one or more codes of the data. The storage nodes perform the operation on the portions of locally stored data and provide the results to the next-level up node.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: September 1, 2020
    Assignee: Intel Corporation
    Inventors: Sanjeev N. Trika, Steven C. Miller
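    A hedged Python sketch of the offload fan-out described above; the classes, the single-level fan-out, and the use of a plain function as the offloaded operation are simplifications chosen for the example.
```python
# A compute node offloads an operation to one storage node, which forwards it
# to peer nodes holding the other shards and returns the partial results.

class StorageNode:
    def __init__(self, shard):
        self.shard = shard    # locally stored portion of the sharded data
        self.peers = []       # other storage nodes holding related portions

    def offload(self, operation, fan_out=True):
        results = [operation(self.shard)]    # run the operation on local data
        if fan_out:
            for peer in self.peers:
                results.extend(peer.offload(operation, fan_out=False))
        return results                       # handed back to the requesting node

a, b, c = StorageNode([1, 2]), StorageNode([3, 4]), StorageNode([5])
a.peers = [b, c]
partial_sums = a.offload(sum)                 # compute node's offload request
print(partial_sums, "->", sum(partial_sums))  # [3, 7, 5] -> 15
```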
  • Patent number: 10762048
    Abstract: A computer-implemented method according to one embodiment includes receiving a request for a creation or expansion of a file within a predetermined volume of a system, determining that a first amount of available space within the predetermined volume is insufficient to allow the creation or expansion of the file within the predetermined volume of the system, expanding the first amount of available space within the predetermined volume to create a second amount of available space that is greater than the first amount of available space, in response to determining that the first amount of available space is insufficient, and implementing the creation or expansion of the file within the predetermined volume of the system, utilizing the second amount of available space within the predetermined volume.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: September 1, 2020
    Assignee: International Business Machines Corporation
    Inventors: Tan Q. Nguyen, Tony Xu, John R. Paveza
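    A minimal Python sketch of the decision flow in the abstract above; the volume representation and the expansion increment are invented for the example.
```python
# If the volume cannot hold the new or expanded file, grow the volume first,
# then carry out the file creation or expansion in the enlarged space.

def create_or_expand_file(volume, file_name, needed_bytes, expand_increment):
    available = volume["capacity"] - volume["used"]
    if available < needed_bytes:
        # Expand the volume so the second (larger) amount of space suffices.
        shortfall = needed_bytes - available
        volume["capacity"] += max(shortfall, expand_increment)
    volume["used"] += needed_bytes
    volume["files"][file_name] = needed_bytes

volume = {"capacity": 1000, "used": 900, "files": {}}
create_or_expand_file(volume, "dataset.db", needed_bytes=300, expand_increment=256)
print(volume["capacity"], volume["used"])   # 1256 1200
```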
  • Patent number: 10754895
    Abstract: A method for reducing I/O performance impacts associated with a data commit operation is disclosed. In one embodiment, such a method includes periodically performing a data commit operation wherein modified data is destaged from cache to persistent storage drives. Upon performing a particular instance of the data commit operation, the method determines whether modified data in the cache is a metadata track. In the event the modified data is a metadata track, the method attempts to acquire an exclusive lock on the metadata track. In the event the exclusive lock cannot be acquired, the method skips over the metadata track without destaging the metadata track for the particular instance of the data commit operation. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: October 17, 2018
    Date of Patent: August 25, 2020
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Edward Lin, Kyler A. Anderson, Matthew G. Borlick, Kevin J. Ash
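    A short Python sketch of one destage pass as described above; the track records and the lock flag are simplified stand-ins for the cache structures the patent refers to.
```python
# Metadata tracks whose exclusive lock cannot be acquired are skipped for this
# instance of the data commit operation instead of blocking the destage scan.

def destage_pass(cache_tracks):
    """cache_tracks: list of dicts with 'is_metadata', 'lock_free', 'destaged'."""
    skipped = []
    for track in cache_tracks:
        if track["is_metadata"] and not track["lock_free"]:
            skipped.append(track)     # retried on a later commit operation
            continue
        track["destaged"] = True      # modified data written to persistent storage
    return skipped

tracks = [{"is_metadata": False, "lock_free": True,  "destaged": False},
          {"is_metadata": True,  "lock_free": False, "destaged": False},
          {"is_metadata": True,  "lock_free": True,  "destaged": False}]
print(len(destage_pass(tracks)))          # 1 track skipped
print([t["destaged"] for t in tracks])    # [True, False, True]
```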
  • Patent number: 10754730
    Abstract: Provided are a computer program product, system, and method for copying point-in-time data in a storage to a point-in-time copy data location in advance of destaging data to the storage. A point-in-time copy is created to maintain tracks in a source storage unit as of a point-in-time. A source copy data structure indicates tracks in the source storage unit to copy from the storage to a point-in-time data location. An update to write to a source track is received and a determination is made as to whether the source copy data structure indicates to copy the source track from the storage to the point-in-time data location. The update is written to a cache. A copy operation is initiated to copy the source track from the storage to the point-in-time data location asynchronously, before the source track is destaged from the cache to the storage unit.
    Type: Grant
    Filed: September 6, 2018
    Date of Patent: August 25, 2020
    Assignee: International Business Machines Corporation
    Inventors: Theresa M. Brown, Kevin Lin, David Fei, Nedlaya Y. Francisco
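    A minimal Python sketch of the copy-before-destage rule in the abstract above; the dictionaries standing in for storage, cache, and the source copy data structure are invented for the example.
```python
# When an update arrives for a source track still flagged in the copy data
# structure, the on-disk point-in-time image is copied away before the
# cached update is later destaged over it.

def handle_update(track, new_data, storage, pit_copy, copy_needed, cache):
    if copy_needed.get(track):
        pit_copy[track] = storage[track]   # preserve the point-in-time image
        copy_needed[track] = False         # only the first update needs the copy
    cache[track] = new_data                # the update lands in cache

def destage(track, storage, cache):
    storage[track] = cache.pop(track)      # the update reaches storage later

storage, pit_copy, cache = {"t1": "old contents"}, {}, {}
copy_needed = {"t1": True}
handle_update("t1", "new contents", storage, pit_copy, copy_needed, cache)
destage("t1", storage, cache)
print(storage["t1"], "|", pit_copy["t1"])   # new contents | old contents
```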
  • Patent number: 10732839
    Abstract: A universal mechanism is utilized for data rebalancing in a scaled-out data storage cluster. A value (l) representing a number of erasure coded fragments of each data portion that are to be moved to a newly added node can be calculated. Initially, the number of erasure coded fragments moved per data portion is determined based on the greatest integer that is less than or equal to l, and remainders are accumulated. When the accumulated remainders equal or exceed 1, the number of erasure coded fragments moved per data portion is determined based on the lowest integer that is greater than or equal to l. The value of accumulated remainders is then decreased by 1. Accordingly, system-level imbalances can be avoided and data availability, data robustness, and/or overall system performance can be increased.
    Type: Grant
    Filed: August 2, 2018
    Date of Patent: August 4, 2020
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Mikhail Danilov, Vladislav Eremeev
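    The remainder-accumulation rule lends itself to a short worked example; the Python sketch below follows the description in the abstract, with the value of l and the number of data portions chosen arbitrarily.
```python
import math

def fragments_to_move(l, num_portions):
    """Move floor(l) fragments per data portion, accumulate the fractional
    part, and move ceil(l) whenever the accumulator reaches 1."""
    plan, remainder = [], 0.0
    for _ in range(num_portions):
        remainder += l - math.floor(l)
        if remainder >= 1:
            plan.append(math.ceil(l))
            remainder -= 1
        else:
            plan.append(math.floor(l))
    return plan

# With l = 1.25, every fourth data portion contributes one extra fragment,
# so the per-portion counts average out to exactly l and stay balanced.
plan = fragments_to_move(1.25, 8)
print(plan, sum(plan) / len(plan))   # [1, 1, 1, 2, 1, 1, 1, 2] 1.25
```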
  • Patent number: 10713081
    Abstract: Secure and efficient memory sharing for guests is disclosed. For example, a host has a host memory storing first and second guests whose memory access is managed by a hypervisor. A request to map an IOVA of the first guest to the second guest is received, where the IOVA is mapped to a GPA of the first guest, which is mapped to an HPA of the host memory. The HPA is mapped to a second GPA of the second guest, where the hypervisor controls access permissions of the HPA. The second GPA is mapped in a second page table of the second guest to a GVA of the second guest, where a supervisor of the second guest controls access permissions of the second GPA. The hypervisor enables a program executing on the second guest to access contents of the HPA based on the access permissions of the HPA.
    Type: Grant
    Filed: August 30, 2018
    Date of Patent: July 14, 2020
    Assignee: RED HAT, INC.
    Inventors: Michael Tsirkin, Stefan Hajnoczi
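    A hedged Python sketch of the chain of translations named in the abstract (IOVA to GPA to HPA, then HPA to the second guest's GPA and GVA); the addresses and the permission table are invented, and the hypervisor's permission check is reduced to a dictionary lookup.
```python
guest1_iova_to_gpa = {0x1000: 0xA000}
guest1_gpa_to_hpa  = {0xA000: 0x7F000}
hpa_permissions    = {0x7F000: {"guest2": "read"}}   # controlled by the hypervisor
guest2_hpa_to_gpa  = {0x7F000: 0xB000}
guest2_gpa_to_gva  = {0xB000: 0x2000}                # second guest's page table

def translate_for_second_guest(iova, requester):
    hpa = guest1_gpa_to_hpa[guest1_iova_to_gpa[iova]]
    if requester not in hpa_permissions[hpa]:
        raise PermissionError("hypervisor denies access to this HPA")
    return guest2_gpa_to_gva[guest2_hpa_to_gpa[hpa]]   # GVA used by the program

print(hex(translate_for_second_guest(0x1000, "guest2")))   # 0x2000
```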
  • Patent number: 10705758
    Abstract: Apparatus, methods, media and systems for multiple sets of trim parameters are described. A non-volatile memory device may comprise a first register, a second register, a multiplexer, a first set of I/O lines, each coupled to the first register and the multiplexer and each associated with a particular trim set among multiple trim sets stored in the first register, and one or more second I/O lines, each coupled to the second register and the multiplexer. The multiplexer is configured to receive a control signal. The multiplexer is configured to output, based on the control signal, a particular trim set among the multiple trim sets to the second register using the one or more second I/O lines.
    Type: Grant
    Filed: May 22, 2018
    Date of Patent: July 7, 2020
    Assignee: Western Digital Technologies, Inc.
    Inventors: Tomer Tzvi Eliash, Asaf Gueta, Inon Cohen, Yuval Grossman
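    A minimal Python sketch of the trim-set selection; modeling the control signal as an integer index and the registers as Python lists and dictionaries is an assumption made for illustration.
```python
# The control signal picks one of the trim sets held in the first register;
# the selected set is forwarded to the second register over the second I/O lines.

def select_trim_set(first_register, control_signal):
    """first_register: list of trim sets; control_signal: selected index."""
    return first_register[control_signal]

first_register = [
    {"read_voltage": 5, "program_pulse": 3},   # trim set 0
    {"read_voltage": 6, "program_pulse": 2},   # trim set 1
]
second_register = select_trim_set(first_register, control_signal=1)
print(second_register)   # {'read_voltage': 6, 'program_pulse': 2}
```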
  • Patent number: 10698833
    Abstract: A method for supporting a plurality of requests for access to a data cache memory (“cache”) is disclosed. The method comprises accessing a first set of requests to access the cache, wherein the cache comprises a plurality of blocks. Further, responsive to the first set of requests to access the cache, the method comprises accessing a tag memory that maintains a plurality of copies of tags for each entry in the cache and identifying tags that correspond to individual requests of the first set. The method also comprises performing arbitration in a same clock cycle as the accessing and identifying of tags, wherein the arbitration comprises: (a) identifying a second set of requests to access the cache from the first set, wherein the second set accesses a same block within the cache; and (b) selecting each request from the second set to receive data from the same block.
    Type: Grant
    Filed: January 26, 2018
    Date of Patent: June 30, 2020
    Assignee: Intel Corporation
    Inventors: Karthikeyan Avudaiyappan, Sourabh Alurkar
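    A small Python sketch of the arbitration idea; the 64-byte block size and the "largest group wins" policy are assumptions for the example, not details from the patent.
```python
# Among the incoming requests, those whose tags map to the same cache block
# can all be satisfied together from a single read of that block.
from collections import defaultdict

def arbitrate(requests, block_bits=6):
    """requests: list of addresses; returns (block tag, requests it serves)."""
    by_block = defaultdict(list)
    for addr in requests:
        by_block[addr >> block_bits].append(addr)
    winning_tag = max(by_block, key=lambda tag: len(by_block[tag]))
    return winning_tag, by_block[winning_tag]

tag, winners = arbitrate([0x100, 0x104, 0x13C, 0x400])
print(hex(tag << 6), [hex(a) for a in winners])
# 0x100 ['0x100', '0x104', '0x13c'] -- three requests served by one block
```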
  • Patent number: 10691344
    Abstract: A first memory controller receives an access command from a second memory controller, where the access command is timing non-deterministic with respect to a timing specification of a memory. The first memory controller sends at least one access command signal corresponding to the access command to the memory, wherein the at least one access command signal complies with the timing specification. The first memory controller determines a latency of access of the memory. The first memory controller sends feedback information relating to the latency to the second memory controller.
    Type: Grant
    Filed: May 30, 2013
    Date of Patent: June 23, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Doe Hyun Yoon, Sheng Li, Jichuan Chang, Ke Chen, Parthasarathy Ranganathan, Norman Paul Jouppi
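    A hedged Python sketch of the two-controller handshake; the class names are invented, and a sleep stands in for the actual timing-compliant access to the memory.
```python
import time

class FirstMemoryController:
    def handle(self, command):
        start = time.perf_counter()
        self._issue_timing_compliant_signals(command)   # per the memory's spec
        latency = time.perf_counter() - start
        # Feedback relating to the latency is returned to the far controller.
        return {"data": None, "latency_s": latency}

    def _issue_timing_compliant_signals(self, command):
        time.sleep(0.001)   # stand-in for the real, spec-compliant access

class SecondMemoryController:
    def __init__(self, near_controller):
        self.near = near_controller

    def access(self, command):
        # The command carries no timing guarantees of its own; the feedback
        # tells this controller how long the access actually took.
        return self.near.handle(command)

feedback = SecondMemoryController(FirstMemoryController()).access({"op": "read"})
print(feedback["latency_s"] >= 0.001)   # True
```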
  • Patent number: 10691345
    Abstract: A memory controller method and apparatus, which includes a modification of at least one of a first timing scheme or a second timing scheme based on information about one or more data requests to be included in at least one of a first queue scheduler or a second queue scheduler, the first timing scheme indicating when one or more requests in the first queue scheduler are to be issued to the first memory set via a first memory set interface and over a channel, the second timing scheme indicating when one or more requests in the second queue scheduler are to be issued to the second memory set via a second memory set interface and over the channel. Furthermore, an issuance of a request to at least one of the first memory set in accordance with the modified first timing scheme or the second memory set in accordance with the modified second timing scheme may be included.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: June 23, 2020
    Assignee: INTEL CORPORATION
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Mark Schmisseur
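    A short Python sketch of adjusting a timing scheme from queue occupancy; the proportional policy and the nanosecond numbers are invented purely to make the idea concrete.
```python
def modify_timing(first_queue, second_queue, base_interval_ns=10):
    """Return (issue interval for the first memory set, for the second set).
    The busier queue gets the shorter interval, i.e. more channel slots."""
    total = len(first_queue) + len(second_queue) or 1
    first_interval = base_interval_ns * (1 + len(second_queue) / total)
    second_interval = base_interval_ns * (1 + len(first_queue) / total)
    return first_interval, second_interval

first_queue = ["req"] * 6    # requests destined for the first memory set
second_queue = ["req"] * 2   # requests destined for the second memory set
print(modify_timing(first_queue, second_queue))   # (12.5, 17.5)
```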
  • Patent number: 10691376
    Abstract: A computer-implemented method according to one embodiment includes identifying code word interleaved (CWI)-4 entries to be re-written to a data storage cartridge, selecting a subset of the CWI-4 entries to be included within a first CWI-4 set, where a plurality of the CWI-4 entries within the subset are associated with a single sub data set (SDS), and re-writing the first CWI-4 set to the data storage cartridge.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: June 23, 2020
    Assignee: International Business Machines Corporation
    Inventors: Kevin D. Butt, Roy D. Cideciyan, Simeon Furrer, Mark A. Lantz
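    A hedged Python sketch of the selection step; the entry records, the set size of four, and the padding policy are assumptions for the example, not tape-format details from the patent.
```python
# Build one CWI-4 set in which several of the selected entries come from a
# single sub data set (SDS); the set is then re-written to the cartridge.

def build_cwi4_set(entries, sds_id, set_size=4):
    """entries: list of dicts with 'id' and 'sds'."""
    same_sds = [e for e in entries if e["sds"] == sds_id]
    others = [e for e in entries if e["sds"] != sds_id]
    return (same_sds + others)[:set_size]   # favor entries from the chosen SDS

entries = [{"id": 0, "sds": 3}, {"id": 1, "sds": 7},
           {"id": 2, "sds": 3}, {"id": 3, "sds": 5}]
first_set = build_cwi4_set(entries, sds_id=3)
print([e["id"] for e in first_set])   # [0, 2, 1, 3]
```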
  • Patent number: 10691542
    Abstract: According to an embodiment, a storage device includes a plurality of memory nodes and a control unit. Each of the memory nodes includes a storage unit including a plurality of storage areas having a predetermined size. The memory nodes are connected to each other in two or more different directions. The memory nodes constitute two or more groups each including two or more memory nodes. The control unit is configured to sequentially allocate data writing destinations in the storage units to the storage areas respectively included in the different groups.
    Type: Grant
    Filed: September 11, 2013
    Date of Patent: June 23, 2020
    Assignee: Toshiba Memory Corporation
    Inventors: Yuki Sasaki, Takahiro Kurita, Atsuhiro Kinoshita
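    A minimal Python sketch of the allocation rule; modeling the groups as lists of free areas and rotating round-robin across them is one straightforward reading of sequentially allocating writing destinations to storage areas in different groups.
```python
from itertools import cycle

def allocate_writes(groups, num_writes):
    """groups: dict of group name -> list of free storage areas."""
    order = cycle(groups)   # rotate through the groups
    plan = []
    for _ in range(num_writes):
        group = next(order)
        plan.append((group, groups[group].pop(0)))   # take the next free area
    return plan

groups = {"group0": ["a0", "a1"], "group1": ["b0", "b1"]}
print(allocate_writes(groups, 4))
# [('group0', 'a0'), ('group1', 'b0'), ('group0', 'a1'), ('group1', 'b1')]
```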
  • Patent number: 10684961
    Abstract: External memory protection may be implemented for content addressable memory (CAM). Memory protection data, such as duplicate values for entries in a CAM or error detection codes generated from values of the entries in a CAM, may be stored in a random access memory (RAM) that is separate from the CAM. When an entry in the CAM is accessed to perform a lookup or scrubbing operation, the memory protection data may be obtained from the RAM. A validation of the value of the entry may then be performed according to the memory protection data to determine whether the value is valid.
    Type: Grant
    Filed: October 2, 2015
    Date of Patent: June 16, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Kiran Kalkunte Seshadri, Thomas A. Volpe
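    A small Python sketch of the validation step; using CRC-32 as the error detection code and Python lists as the CAM and the separate RAM is an assumption for illustration.
```python
import zlib

cam = ["rule-permit-10.0.0.0/8", "rule-deny-0.0.0.0/0"]    # entries in the CAM
protection_ram = [zlib.crc32(v.encode()) for v in cam]     # separate RAM

def lookup(index):
    value = cam[index]
    # Validate the entry's value against the protection data from the RAM.
    if zlib.crc32(value.encode()) != protection_ram[index]:
        raise ValueError(f"CAM entry {index} failed validation")
    return value

print(lookup(0))                      # valid entry returned
cam[1] = "rule-deny-0.0.0.0/1"        # simulate a corrupted CAM entry
try:
    lookup(1)
except ValueError as err:
    print(err)                        # CAM entry 1 failed validation
```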