Patents Examined by Jae U Yu
-
Patent number: 11048637
Abstract: A high-frequency and low-power L1 cache and associated access technique. The method may include inspecting a virtual address of an L1 data cache load instruction, and indexing into a row and a column of a way predictor table using metadata and a virtual address associated with the load instruction. The method may include matching information stored at the row and the column of the way predictor table to a location of a cache line. The method may include predicting the location of the cache line within the L1 data cache based on the information match. A hierarchy of way predictor tables may be used, with higher-level way predictor tables refreshing smaller lower-level way predictor tables. The way predictor tables may be trained to make better predictions over time. Only selected circuit macros need to be enabled based on the predictions, thereby saving power.
Type: Grant
Filed: August 21, 2019
Date of Patent: June 29, 2021
Inventor: Karthik Sundaram
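The train/predict cycle described above can be sketched as follows; the table shape, the index derivation, and all names are illustrative assumptions, not the patented design:

```python
# Hypothetical way-predictor sketch: index a row/column from the virtual
# address and metadata, match stored info, and predict a cache way.
class WayPredictor:
    def __init__(self, rows=64, cols=4):
        self.rows, self.cols = rows, cols
        # Each cell holds (partial_tag, predicted_way) or None.
        self.table = [[None] * cols for _ in range(rows)]

    def _index(self, vaddr, metadata):
        # Row from virtual-address bits (dropping the 64-byte line
        # offset), column from load metadata -- both assumed hashes.
        row = (vaddr >> 6) % self.rows
        col = metadata % self.cols
        return row, col

    def train(self, vaddr, metadata, way):
        # Record where the line was actually found so later loads hit.
        row, col = self._index(vaddr, metadata)
        self.table[row][col] = (vaddr >> 12, way)

    def predict(self, vaddr, metadata):
        # Return the predicted way on a match, else None; only the
        # predicted way's circuit macro would then need to be enabled.
        row, col = self._index(vaddr, metadata)
        entry = self.table[row][col]
        if entry and entry[0] == vaddr >> 12:
            return entry[1]
        return None
```

A miss in the table falls back to a full lookup, whose result retrains the predictor.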
-
Patent number: 11042487
Abstract: According to one embodiment, a memory system receives a write request specifying a first logical address to which first data is to be written, and a length of the first data, from a host. The memory system writes the first data to a nonvolatile memory, and stores a first physical address indicating a physical storage location on the nonvolatile memory to which the first data is written, and the length of the first data, in an entry of a logical-to-physical address translation table corresponding to the first logical address. When the memory system receives a read request specifying the first logical address, the memory system acquires the first physical address and the length from the address translation table, and reads the first data from the nonvolatile memory.
Type: Grant
Filed: August 29, 2019
Date of Patent: June 22, 2021
Assignee: Toshiba Memory Corporation
Inventors: Hideki Yoshida, Shinichi Kanno
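The write/read flow can be sketched in a few lines; the flat-dict translation table and all names are illustrative assumptions:

```python
# Minimal L2P (logical-to-physical) translation sketch: each entry
# stores both the physical address and the data length, so a read
# needs no separate length lookup.
class MemorySystem:
    def __init__(self):
        self.nand = {}      # physical address -> byte (modeled NVM)
        self.l2p = {}       # logical address -> (physical address, length)
        self.next_ppa = 0   # next free physical location

    def write(self, lba, data):
        # Write the data to the nonvolatile memory, then record the
        # physical address and the length in the L2P entry for this LBA.
        ppa = self.next_ppa
        for i, b in enumerate(data):
            self.nand[ppa + i] = b
        self.next_ppa += len(data)
        self.l2p[lba] = (ppa, len(data))

    def read(self, lba):
        # The L2P entry yields both where the data lives and how much
        # to read from the nonvolatile memory.
        ppa, length = self.l2p[lba]
        return bytes(self.nand[ppa + i] for i in range(length))
```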
-
Patent number: 11036405
Abstract: Example methods and systems are provided for a computer system to transfer runtime information between a first kernel module and a second kernel module. In one example, the method may comprise assigning ownership of a memory pool to the first kernel module; and the first kernel module accessing the memory pool to store runtime information associated with one or more operations performed by the first kernel module. The method may also comprise releasing ownership of the memory pool from the first kernel module while maintaining the runtime information in the memory pool; and assigning ownership of the memory pool to the second kernel module. The second kernel module may then access the memory pool to obtain the runtime information stored by the first kernel module.
Type: Grant
Filed: September 7, 2018
Date of Patent: June 15, 2021
Assignee: VMWARE, INC.
Inventors: Jingmin Zhou, Subrahmanyam Manuguri, Anirban Sengupta
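The ownership handover can be sketched as below; the `MemoryPool` class, its method names, and the module identifiers are all invented for illustration:

```python
# Sketch of passing a runtime-information pool between two modules:
# release deliberately keeps the stored data so the next owner sees it.
class MemoryPool:
    def __init__(self):
        self.owner = None
        self.runtime_info = []   # survives ownership changes

    def assign(self, module):
        if self.owner is not None:
            raise RuntimeError("pool already owned")
        self.owner = module

    def release(self, module):
        # Release ownership but keep the runtime information in place.
        if self.owner != module:
            raise RuntimeError("not the owner")
        self.owner = None

    def store(self, module, info):
        if self.owner != module:
            raise RuntimeError("only the owner may write")
        self.runtime_info.append(info)

pool = MemoryPool()
pool.assign("module_a")
pool.store("module_a", {"flows": 42})
pool.release("module_a")         # runtime info stays in the pool
pool.assign("module_b")
handover = pool.runtime_info     # module_b reads what module_a stored
```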
-
Patent number: 11036436
Abstract: Systems and methods for scheduling the execution of disk access commands in a split-actuator hard disk drive are provided. In some embodiments, while a first actuator of the split actuator is in the process of performing a first disk access command (a victim operation), a second disk access command (an aggressor operation) is selected for and executed by a second actuator of the split actuator. The aggressor operation is selected from a queue of disk access commands for the second actuator, and is selected based on being the disk access command in the queue that can be initiated sooner than any other disk access command in the queue without disturbing the victim operation.
Type: Grant
Filed: September 30, 2019
Date of Patent: June 15, 2021
Assignees: KABUSHIKI KAISHA TOSHIBA, TOSHIBA ELECTRONIC DEVICES & STORAGE CORPORATION
Inventors: Gary W. Calfee, Richard M. Ehrlich, Thorsten Schmidt, Eric R. Dunn
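The selection rule reduces to "earliest start among the non-disturbing commands". A sketch, where the command fields and the disturbance predicate (here, long seeks mechanically couple into the victim actuator) are assumptions:

```python
from collections import namedtuple

# Hypothetical queued command: id, earliest feasible start time, and
# seek length (used by the invented disturbance model below).
Cmd = namedtuple("Cmd", "cid earliest_start seek_length")

def pick_aggressor(queue, disturbs_victim):
    # Keep only commands whose execution would not disturb the in-flight
    # victim operation, then take the one that can be initiated soonest.
    safe = [cmd for cmd in queue if not disturbs_victim(cmd)]
    return min(safe, key=lambda cmd: cmd.earliest_start, default=None)

queue = [Cmd("a", 5, 90), Cmd("b", 2, 10), Cmd("c", 3, 8)]
# Assume long seeks disturb the victim actuator via mechanical coupling.
choice = pick_aggressor(queue, lambda cmd: cmd.seek_length > 50)
```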
-
Patent number: 11030104
Abstract: Provided are a computer program product, system, and method for queuing prestage requests in one of a plurality of prestage request queues as a function of the number of track holes determined to be present in a track cached in a multi-tier cache. A prestage request, when executed, prestages read data from storage to a slow cache tier of the multi-tier cache, for one or more sectors identified by one or more track holes. In another aspect, allocated tasks are dispatched to execute prestage requests queued on selected prestage request queues as a function of the priority associated with each prestage request queue. Other aspects and advantages are provided, depending upon the particular application.
Type: Grant
Filed: January 21, 2020
Date of Patent: June 8, 2021
Assignee: International Business Machines Corporation
Inventors: Lokesh Mohan Gupta, Kevin J. Ash, Kyler A. Anderson, Matthew G. Borlick
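Routing a prestage request by its hole count can be sketched as follows; the two-queue split and the threshold are assumptions, not the patented tiering:

```python
def count_track_holes(sector_bitmap):
    # A "hole" is a maximal run of absent sectors inside a cached track.
    holes, in_hole = 0, False
    for present in sector_bitmap:
        if not present and not in_hole:
            holes += 1
        in_hole = not present
    return holes

def enqueue_prestage(queues, track, sector_bitmap, threshold=2):
    # Few holes -> high-priority queue (cheap to fill); many holes ->
    # low-priority queue. Dispatched tasks drain queues by priority.
    holes = count_track_holes(sector_bitmap)
    key = "high" if holes <= threshold else "low"
    queues[key].append(track)
    return holes
```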
-
Patent number: 11023134
Abstract: A host device is configured to communicate over a network with a storage system. The host device comprises a multi-path input-output (MPIO) driver configured to control delivery of input-output (IO) operations from the host device to the storage system over selected ones of a plurality of paths through the network, and a data service driver. The data service driver is configured to provide one or more data services on the host device, wherein the one or more data services correspond to respective extensions. The respective extensions are organized in different levels in a stacked configuration. The data service driver is further configured to receive and process a given IO operation through the respective extensions in the stacked configuration. The MPIO driver is a component of first MPIO software for the host device, and the data service driver is a component of second MPIO software for the host device.
Type: Grant
Filed: May 22, 2020
Date of Patent: June 1, 2021
Assignee: EMC IP Holding Company LLC
Inventors: Vinay G. Rao, Madhu Tarikere
-
Patent number: 11023132
Abstract: According to one embodiment, an electronic device includes a nonvolatile memory that includes blocks and a controller. The controller transmits information to the host. The information indicates a first logical address range corresponding to cold data stored in the nonvolatile memory, and a processing amount for turning a cold block that comprises the cold data into a block to which data is writable. The controller reads the cold data from the nonvolatile memory in accordance with a read command that is received from the host and designates the first logical address range, and transmits the read cold data to the host. The controller writes, to the nonvolatile memory, the cold data that is received with a write command designating the first logical address range from the host.
Type: Grant
Filed: February 3, 2020
Date of Patent: June 1, 2021
Assignee: Toshiba Memory Corporation
Inventors: Tetsuya Sunata, Daisuke Iwai, Kenichiro Yoshii
-
Patent number: 11023375
Abstract: Described is a data cache implementing hybrid writebacks and writethroughs. A processing system includes a memory, a memory controller, and a processor. The processor includes a data cache including cache lines, a write buffer, and a store queue. The store queue writes data to a hit cache line and an allocated entry in the write buffer when the hit cache line is initially in at least a shared coherence state, resulting in the hit cache line being in a shared coherence state with data and the allocated entry being in a modified coherence state with data. The write buffer requests, and the memory controller performs, an upgrade of the hit cache line to a modified coherence state with data based on tracked coherence states. The write buffer retires the data upon upgrade. The data cache writes back the data to memory for a defined event.
Type: Grant
Filed: February 21, 2020
Date of Patent: June 1, 2021
Assignee: SiFive, Inc.
Inventors: John Ingalls, Wesley Waylon Terpstra, Henry Cook
-
Patent number: 11016900
Abstract: Technology for selectively prefetching data, such that less data is prefetched when it is determined that the requested data is located in logical addresses allocated to a symbol table data structure. In some embodiments, data is still prefetched when the request is directed to the symbol table, but the amount of data prefetched (measured in memory lines, bytes, or another unit) is decreased relative to what it otherwise would be in the context of a non-symbol-table request. In other embodiments, prefetching is simply not performed at all when the request is directed to the symbol table.
Type: Grant
Filed: January 6, 2020
Date of Patent: May 25, 2021
Assignee: International Business Machines Corporation
Inventors: Mohit Karve, Edmund Joseph Gieske
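Both embodiments reduce to a policy choice at the address-range check. A sketch, where the range bounds and line counts are made-up parameters:

```python
# Illustrative selective-prefetch policy: shrink (or skip) the prefetch
# when the miss address falls in the symbol table's logical range.
SYMBOL_TABLE_RANGE = (0x40000, 0x50000)   # assumed allocation
NORMAL_PREFETCH_LINES = 8
SYMBOL_TABLE_PREFETCH_LINES = 1           # reduced; 0 disables entirely

def prefetch_lines(addr, disable_for_symbols=False):
    lo, hi = SYMBOL_TABLE_RANGE
    if lo <= addr < hi:
        # Symbol-table accesses tend to show little spatial locality,
        # so fetching many adjacent lines mostly wastes bandwidth.
        return 0 if disable_for_symbols else SYMBOL_TABLE_PREFETCH_LINES
    return NORMAL_PREFETCH_LINES
```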
-
Patent number: 11016708
Abstract: A non-volatile memory (NVM) driver includes a function library with native function calls and a hardware abstraction layer for receiving at least one instruction from the function library and providing signals to cause an NVM to execute the at least one instruction. The NVM includes a plurality of sectors, and the NVM driver uses a first portion as an application visible memory, and a second portion for another purpose. The NVM driver maintains the NVM as a circular buffer within the application visible memory. When a native function call is a resizing command, the function library adjusts the circular buffer selectively according to whether the resizing command increases or decreases the application visible memory. When a native function call is a write counter command, the NVM driver selectively creates a new counter object including a counter base and a plurality of increment locations using a next location pointer.
Type: Grant
Filed: October 24, 2019
Date of Patent: May 25, 2021
Assignee: Silicon Laboratories Inc.
Inventor: Marius Grannaes
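The circular-buffer-with-resize behavior can be sketched as below; sector granularity, the drop-oldest-on-shrink policy, and all names are assumptions about one plausible semantics:

```python
# Sketch of a circular buffer kept inside the application-visible
# portion of the NVM, with a resizing command that grows or shrinks it.
class NvmCircularBuffer:
    def __init__(self, visible_sectors):
        self.capacity = visible_sectors
        self.data = []               # oldest record first

    def write(self, record):
        # Oldest records are overwritten once the visible area is full.
        self.data.append(record)
        if len(self.data) > self.capacity:
            self.data.pop(0)

    def resize(self, visible_sectors):
        # The resizing command adjusts the buffer: shrinking drops the
        # oldest records, growing simply leaves room for more.
        self.capacity = visible_sectors
        while len(self.data) > self.capacity:
            self.data.pop(0)
```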
-
Patent number: 11016892
Abstract: The present disclosure provides a cache system and an operating method thereof. The system includes an upper-level cache unit and a last level cache (LLC). The LLC includes a directory, a plurality of counters, and a register. The directory includes a status indicator recording a utilization status of the upper-level cache unit to the LLC. The counters are used to increase or decrease a counting value according to a variation of the status indicator, record an access number from the upper-level cache unit, and record a hit number of the upper-level cache unit accessing the LLC. According to the counting value, the access number, and the hit number, the first parameters of the register are controlled, so as to adjust a utilization strategy to the LLC.
Type: Grant
Filed: October 24, 2019
Date of Patent: May 25, 2021
Assignee: Shanghai Zhaoxin Semiconductor Co., Ltd.
Inventors: Xianpei Zheng, Zhongmin Chen, Weilin Wang, Jiin Lai, Mengchen Yang
-
Patent number: 11016802
Abstract: In various embodiments, an ordered atomic operation enables a parallel processing subsystem to execute an atomic operation associated with a memory location in a specified order relative to other ordered atomic operations associated with the memory location. A level 2 (L2) cache slice includes an atomic processing circuit and a content-addressable memory (CAM). The CAM stores an ordered atomic operation specifying at least a memory address, an atomic operation, and an ordering number. In operation, the atomic processing circuit performs a look-up operation on the CAM, where the look-up operation specifies the memory address. After the atomic processing circuit determines that the ordering number is equal to a current ordering number associated with the memory address, the atomic processing circuit executes the atomic operation and returns the result to a processor executing an algorithm.
Type: Grant
Filed: January 26, 2018
Date of Patent: May 25, 2021
Assignee: NVIDIA Corporation
Inventors: Ziyad Hakura, Olivier Giroux, Wishwesh Gandhi
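The ordering check can be modeled in software; the CAM is represented as a dict keyed by (address, ordering number), and all names are assumptions:

```python
# Sketch of ordered atomics: an operation executes only when its
# ordering number matches the address's current number; out-of-order
# arrivals are held in the CAM model until their turn comes.
class OrderedAtomicUnit:
    def __init__(self, memory):
        self.memory = memory
        self.current = {}            # addr -> next ordering number
        self.pending = {}            # CAM: (addr, order) -> operation

    def submit(self, addr, order, op):
        self.pending[(addr, order)] = op
        self._drain(addr)

    def _drain(self, addr):
        # Execute held operations in ordering-number sequence.
        while True:
            order = self.current.get(addr, 0)
            op = self.pending.pop((addr, order), None)
            if op is None:
                return
            self.memory[addr] = op(self.memory.get(addr, 0))
            self.current[addr] = order + 1
```

Submitting order 1 before order 0 holds the first operation; once order 0 arrives, both run in sequence.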
-
Patent number: 11010097
Abstract: Methods that can offload data operations to a storage system are provided. One method includes performing, by a processor, a set of non-storage operations at a storage system for data stored in a set of storage devices on the storage system to generate a set of results, in which the storage system is separate from a client device that owns the data, and transmitting the result(s) to the client device that owns the data. Apparatus, systems, and computer program products that can include, perform, and/or implement the methods are also disclosed herein.
Type: Grant
Filed: September 18, 2019
Date of Patent: May 18, 2021
Assignee: International Business Machines Corporation
Inventors: Ning Ding, Yao Dong Zhang, Zhen Nyu Yao, Bo Liu, Wei Feng Yang
-
Patent number: 11003593
Abstract: A method for managing a cache memory, including executing first and second processes; when the second process modifies the state of the cache memory, updating the value of an indicator associated with this second process; and comparing the value of this indicator to a predefined threshold and, when this predefined threshold is exceeded, detecting an abnormal use of the cache memory by the second process. In response to this detection, pre-recorded relationships are modified in order to associate with the identifier of the second process a value of a parameter q different from the value of the parameter q associated with the first process, so that, after this modification, when the received address of a word to be read is the same for the first and second processes, the set addresses used to read this word from the cache memory are different.
Type: Grant
Filed: January 16, 2020
Date of Patent: May 11, 2021
Assignee: Commissariat a l'Energie Atomique et aux Energies Alternatives
Inventors: Thomas Hiscock, Mustapha El Majihi, Olivier Savry
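The countermeasure can be sketched as below; the XOR mixing function, the threshold, and the way a fresh q is chosen are all illustrative assumptions:

```python
# Sketch of the per-process set-address countermeasure: the set address
# depends on a per-process parameter q, so a process flagged as abusing
# the cache is given a different q and stops sharing sets with others.
THRESHOLD = 100

def set_address(word_addr, q, num_sets=64):
    # The per-process parameter q perturbs which set an address maps to.
    return (word_addr ^ q) % num_sets

def on_cache_modification(indicators, q_table, pid):
    # Each modification by a process bumps its indicator; past the
    # threshold the use is deemed abnormal and the process gets a new q.
    indicators[pid] = indicators.get(pid, 0) + 1
    if indicators[pid] > THRESHOLD:
        q_table[pid] = q_table.get(pid, 0) + 0x1F   # any distinct value
        indicators[pid] = 0
        return True
    return False
```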
-
Patent number: 10990490
Abstract: Establishing a synchronous replication relationship between two or more storage systems, including: identifying, for a dataset, a plurality of storage systems across which the dataset will be synchronously replicated; configuring one or more data communications links between each of the plurality of storage systems to be used for synchronously replicating the dataset; exchanging, between the plurality of storage systems, timing information for at least one of the plurality of storage systems; and establishing, in dependence upon the timing information for at least one of the plurality of storage systems, a synchronous replication lease, the synchronous replication lease identifying a period of time during which the synchronous replication relationship is valid.
Type: Grant
Filed: July 23, 2019
Date of Patent: April 27, 2021
Assignee: Pure Storage, Inc.
Inventors: Connor Brooks, Thomas Gill, Christopher Golden, David Grunwald, Steven Hodgson, Ronald Karr, Zoheb Shivani, Kunal Trivedi
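The lease mechanics can be sketched as a simple validity window; clock-skew handling is simplified away and all names are assumptions:

```python
# Sketch of a synchronous replication lease: the relationship is valid
# only inside a window derived from exchanged timing information.
class ReplicationLease:
    def __init__(self, granted_at, duration):
        self.granted_at = granted_at
        self.duration = duration

    def valid(self, now):
        return self.granted_at <= now < self.granted_at + self.duration

    def renew(self, now):
        # Peers periodically re-exchange timing info to extend the
        # lease; if renewal fails, writes pause rather than diverge.
        self.granted_at = now

lease = ReplicationLease(granted_at=1000.0, duration=30.0)
```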
-
Patent number: 10983906
Abstract: An apparatus to facilitate memory data compression is disclosed. The apparatus includes a memory having a plurality of banks to store main data and metadata associated with the main data, and a memory management unit (MMU) coupled to the plurality of banks to perform a hash function to compute indices into virtual address locations in memory for the main data and the metadata, and to adjust the metadata virtual address locations to store each adjusted metadata virtual address location in a bank storing the associated main data.
Type: Grant
Filed: March 18, 2019
Date of Patent: April 20, 2021
Assignee: Intel Corporation
Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, Niranjan Cooray, Prasoonkumar Surti, Sudhakar Kamma, Vasanth Ranganathan
-
Patent number: 10983826
Abstract: A method, computer system, and a computer program product for designing and executing at least one storlet is provided. The present invention may include receiving a plurality of restore operations based on a plurality of data. The present invention may also include identifying a plurality of blocks corresponding to the received plurality of restore operations from the plurality of data. The present invention may then include identifying a plurality of grain packs corresponding with the identified plurality of blocks. The present invention may further include generating a plurality of grain pack index identifications corresponding with the identified plurality of grain packs. The present invention may also include generating at least one storlet based on the generated plurality of grain pack index identifications. The present invention may then include returning a plurality of consolidated objects by executing the generated storlet.
Type: Grant
Filed: August 1, 2019
Date of Patent: April 20, 2021
Assignee: International Business Machines Corporation
Inventors: Sasikanth Eda, Akshat Mithal, Sandeep R. Patil
-
Patent number: 10976941
Abstract: A peer to peer remote copy operation is performed between a primary storage controller and a secondary storage controller, to establish a peer to peer remote copy relationship between a primary storage volume and a secondary storage volume. Subsequent to indicating completion of the peer to peer remote copy operation to a host, a determination is made as to whether the primary storage volume and the secondary storage volume have identical data, by performing operations of staging data of the primary storage volume from auxiliary storage of the primary storage controller to local storage of the primary storage controller, and transmitting the data of the primary storage volume that is staged, to the secondary storage controller for comparison with data of the secondary storage volume stored in an auxiliary storage of the secondary storage controller.
Type: Grant
Filed: January 15, 2020
Date of Patent: April 13, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Matthew G. Borlick, Lokesh M. Gupta, Brian A. Rinaldi, Micah Robison
-
Patent number: 10977181
Abstract: A computer-implemented method, according to one approach, includes: receiving write requests, accumulating the write requests in a destage buffer, and determining a current read heat value of each logical page which corresponds to the write requests. Each of the write requests is assigned to a respective write queue based on the current read heat value of each logical page which corresponds to the write requests. Moreover, each of the write queues corresponds to a different page stripe which includes physical pages, the physical pages included in each of the respective page stripes being of a same type. Furthermore, data in the write requests is destaged from the write queues to their respective page stripes. Other systems, methods, and computer program products are described in additional approaches.
Type: Grant
Filed: July 10, 2019
Date of Patent: April 13, 2021
Assignee: International Business Machines Corporation
Inventors: Roman Alexander Pletka, Timothy Fisher, Aaron Daniel Fry, Nikolaos Papandreou, Nikolas Ioannou, Sasa Tomic, Radu Ioan Stoica, Charalampos Pozidis, Andrew D. Walls
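The binning step can be sketched as below; the three heat buckets and their boundaries are assumptions, chosen only to show writes of similar temperature landing in the same page stripe:

```python
# Sketch of read-heat-aware destaging: buffered writes are binned into
# write queues by the current read heat of their logical page, and each
# queue destages to its own page stripe of a single physical page type.
def write_queue_for(read_heat):
    if read_heat >= 8:
        return "hot"
    if read_heat >= 2:
        return "warm"
    return "cold"

def destage(buffer, heat_of):
    # Assign each buffered write to a queue by its page's current heat.
    queues = {"hot": [], "warm": [], "cold": []}
    for lpage, data in buffer:
        queues[write_queue_for(heat_of(lpage))].append((lpage, data))
    return queues   # each queue then destages to its own page stripe
```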
-
Patent number: 10976935
Abstract: A method and apparatus for assigning an allocated workload in a data center having multiple storage systems includes selecting one or more storage systems to be assigned the allocated workload based on a combination of performance impact scores and deployment scores. By considering both performance impact and deployment effort, the allocated workload is able to be assigned with a view not only toward storage system performance, but also with a view toward how deployment on a particular storage system would comply with data center policies and the amount of configuration effort it would take to enable the workload to be implemented on the target storage system. This enables workloads to be allocated within the data center while minimizing the required amount of configuration or reconfiguration required to implement the workload allocation within the data center.
Type: Grant
Filed: February 11, 2020
Date of Patent: April 13, 2021
Assignee: EMC IP Holding Company LLC
Inventors: Jason McCarthy, Girish Warrier, Rongnong Zhou
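Combining the two scores can be sketched as a weighted sum; the weighting, the score scales, and the example systems are assumptions, not the patented formula:

```python
# Sketch of workload placement by combined score: lower is better for
# both performance impact and deployment (configuration) effort.
def choose_storage_system(systems, weight=0.5):
    """systems: list of (name, performance_impact, deployment_effort).
    weight=1.0 considers performance only; weight=0.0 effort only."""
    def combined(entry):
        _, perf_impact, deploy_effort = entry
        return weight * perf_impact + (1 - weight) * deploy_effort
    return min(systems, key=combined)[0]

systems = [
    ("array-1", 0.9, 0.1),   # little config effort, big performance hit
    ("array-2", 0.3, 0.8),   # fast, but heavy reconfiguration needed
    ("array-3", 0.4, 0.3),   # balanced on both axes
]
```

With the default even weighting, the balanced system wins; weighting performance alone would instead pick the fastest system.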