Patents Examined by Jae U Yu
-
Patent number: 11204877
Abstract: The amount of data that is written to disk is minimized when an overlay optimizer is used in conjunction with a write filter to prevent the overlay from becoming full. An overlay optimizer minifilter can be used to intercept writes that were initiated by the overlay optimizer's request to commit files cached in the write filter's overlay to thereby extract only the modified portions of the files that are actually stored in the overlay. The overlay optimizer minifilter can then write these modified portions of the files, as opposed to the entire files, in the overlay cache. Directory change notifications are also enabled when a write filter is employed as well as in other multi-volume filter environments.
Type: Grant
Filed: October 18, 2019
Date of Patent: December 21, 2021
Assignee: Dell Products L.P.
Inventors: Gokul Thiruchengode Vajravel, Jyothi Bandakka, Ankit Kumar
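The core idea above is to commit only the byte ranges of a cached file that actually differ from the on-disk original, rather than the whole file. The sketch below illustrates that delta extraction at block granularity; the function names, block size, and comparison strategy are illustrative assumptions, not the actual Dell minifilter implementation.

```python
# Illustrative sketch: extract only the modified portions of a cached
# file so the overlay stores deltas instead of whole files.
BLOCK = 4096  # assumed comparison granularity

def modified_ranges(original: bytes, cached: bytes, block: int = BLOCK):
    """Return (offset, data) pairs for blocks of `cached` that differ
    from `original`. Blocks past the end of `original` are all new."""
    ranges = []
    for off in range(0, len(cached), block):
        new = cached[off:off + block]
        old = original[off:off + block]
        if new != old:
            ranges.append((off, new))
    return ranges

def overlay_size(original: bytes, cached: bytes) -> int:
    """Bytes the overlay must hold when only deltas are written."""
    return sum(len(data) for _, data in modified_ranges(original, cached))
```

With a 8 KiB file whose second half changed, only 4 KiB lands in the overlay instead of the full 8 KiB.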
-
Patent number: 11204797
Abstract: A computing system includes a host and a storage device. The host includes a central processing unit (CPU) and a first volatile memory device. The storage device includes a second volatile memory device and a nonvolatile memory device. The CPU uses the first volatile memory device and the second volatile memory device as a main memory to store temporary data used for operation of the CPU. The CPU determines a swap-out page to be swapped out from among first pages stored in the first volatile memory device, determines a swap-in page to be swapped in from among second pages stored in the second volatile memory device, and exchanges the swap-out page and the swap-in page.
Type: Grant
Filed: March 23, 2020
Date of Patent: December 21, 2021
Assignee: Samsung Electronics Co., Ltd.
Inventors: Dong-Gun Kim, Won-Moon Cheon
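The exchange described above can be modeled as a two-tier memory where a hot page in the slower (device-side) tier is swapped with a cold page in the faster (host-side) tier. The LRU-style victim choice and all names below are assumptions for illustration, not Samsung's actual mechanism.

```python
# Toy model: pages migrate between a fast host-side tier and a slow
# device-side tier; accessing a slow-tier page swaps it in, evicting
# the coldest fast-tier page.
from collections import OrderedDict

class TwoTierMemory:
    def __init__(self, fast_capacity: int):
        self.fast = OrderedDict()  # page -> data, LRU order (front = coldest)
        self.slow = {}             # page -> data
        self.fast_capacity = fast_capacity

    def access(self, page):
        if page in self.fast:
            self.fast.move_to_end(page)  # mark hot
            return self.fast[page]
        data = self.slow[page]           # hit in the slow tier
        if len(self.fast) >= self.fast_capacity:
            victim, vdata = self.fast.popitem(last=False)  # swap out coldest
            self.slow[victim] = vdata
        del self.slow[page]
        self.fast[page] = data           # swap in the hot page
        return data
```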
-
Patent number: 11204701
Abstract: A method of processing transactions associated with a command in a storage system is provided. The method includes receiving, at a first authority of the storage system, a command relating to user data. The method includes sending a transaction of the command from the first authority to a second authority of the storage system, wherein a token accompanies the transaction, and writing data in accordance with the transaction, as permitted by the token, into a partition that is allocated to the second authority in a storage device of the storage system.
Type: Grant
Filed: March 23, 2020
Date of Patent: December 21, 2021
Assignee: Pure Storage, Inc.
Inventors: John Hayes, Robert Lee, Igor Ostrovsky, Peter Vajgel
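The token-gated write above can be sketched as a second authority that applies a transaction into its partition only when the accompanying token covers it. The token contents and the validation rule are invented for illustration; the patent does not specify this structure.

```python
# Illustrative sketch: a transaction is applied only as permitted by
# the token that accompanies it (assumed token fields).
def make_token(transaction_id: str, partition: str) -> dict:
    return {"txn": transaction_id, "partition": partition}

class Authority:
    def __init__(self, partition: str):
        self.partition = partition
        self.storage = {}  # transaction_id -> data written in this partition

    def apply(self, transaction_id: str, data: bytes, token: dict) -> bool:
        # reject writes the token does not permit
        if token["txn"] != transaction_id or token["partition"] != self.partition:
            return False
        self.storage[transaction_id] = data
        return True
```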
-
Patent number: 11204878
Abstract: An apparatus is provided that includes a memory hierarchy comprising a plurality of caches and a memory. Prefetch circuitry acquires data from the memory hierarchy before the data is explicitly requested by processing circuitry configured to execute a stream of instructions. Writeback circuitry causes the data to be written back from a higher level cache of the memory hierarchy to a lower level cache of the memory hierarchy, and tracking circuitry tracks the proportion of entries stored in the lower level cache, having been written back from the higher level cache, that are subsequently explicitly requested by the processing circuitry in response to one of the instructions.
Type: Grant
Filed: October 8, 2020
Date of Patent: December 21, 2021
Assignee: Arm Limited
Inventors: Joseph Michael Pusdesris, Chris Abernathy
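The tracking described above amounts to measuring how useful the written-back entries turn out to be: a low fraction of subsequently requested lines suggests the writebacks are polluting the lower-level cache. A minimal sketch, with assumed names and no eviction modeling:

```python
# Illustrative tracker: of the lines written back from the higher-level
# cache into the lower-level cache, count how many are later demanded.
class WritebackTracker:
    def __init__(self):
        self.written_back = set()  # lines placed in the lower cache by writeback
        self.used = set()          # of those, lines later explicitly requested

    def on_writeback(self, line):
        self.written_back.add(line)

    def on_demand_access(self, line):
        if line in self.written_back:
            self.used.add(line)

    def useful_fraction(self) -> float:
        if not self.written_back:
            return 0.0
        return len(self.used) / len(self.written_back)
```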
-
Patent number: 11199991
Abstract: The invention introduces an apparatus for controlling different types of storage units, at least including an interface and a processing unit. The interface connects at least two types of storage units, which include at least a nonvolatile hybrid memory. The processing unit is configured to operably access data in the different types of storage units through the interface.
Type: Grant
Filed: December 12, 2019
Date of Patent: December 14, 2021
Assignee: SILICON MOTION, INC.
Inventor: Sheng-I Hsu
-
Patent number: 11194617
Abstract: A method includes receiving, by a level two (L2) controller, a write request for an address that is not allocated as a cache line in a L2 cache. The write request specifies write data. The method also includes generating, by the L2 controller, a read request for the address; reserving, by the L2 controller, an entry in a register file for read data returned in response to the read request; updating, by the L2 controller, a data field of the entry with the write data; updating, by the L2 controller, an enable field of the entry associated with the write data; and receiving, by the L2 controller, the read data and merging the read data into the data field of the entry.
Type: Grant
Filed: May 22, 2020
Date of Patent: December 7, 2021
Assignee: Texas Instruments Incorporated
Inventors: Abhijeet Ashok Chachad, David Matthew Thompson
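The merge step above combines write data, tracked by per-byte enable bits, with read data returned for the rest of the line. A minimal sketch, assuming a toy line size and byte-granular enables (the patent does not fix these details):

```python
# Illustrative register-file entry: written bytes are marked by enable
# bits and preserved; returned read data fills the unwritten bytes.
LINE = 8  # assumed toy line size in bytes

class MergeEntry:
    def __init__(self):
        self.data = bytearray(LINE)
        self.enable = [False] * LINE  # True = byte holds new write data

    def write(self, offset: int, payload: bytes):
        for i, b in enumerate(payload):
            self.data[offset + i] = b
            self.enable[offset + i] = True

    def merge_read(self, line_data: bytes) -> bytes:
        # keep written bytes, fill the rest from the read response
        for i in range(LINE):
            if not self.enable[i]:
                self.data[i] = line_data[i]
        return bytes(self.data)
```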
-
Patent number: 11194729
Abstract: A caching system including a first sub-cache and a second sub-cache in parallel with the first sub-cache, wherein the second sub-cache includes a set of cache lines, line type bits configured to store an indication that a corresponding cache line of the set of cache lines is configured to store write-miss data, and an eviction controller configured to flush stored write-miss data based on the line type bits.
Type: Grant
Filed: May 22, 2020
Date of Patent: December 7, 2021
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Naveen Bhoria, Timothy David Anderson, Pete Hippleheuser
-
Patent number: 11188475
Abstract: A technique is provided for managing caches in a cache hierarchy. An apparatus has processing circuitry for performing operations and a plurality of caches for storing data for reference by the processing circuitry when performing the operations. The plurality of caches form a cache hierarchy including a given cache at a given hierarchical level and a further cache at a higher hierarchical level. The given cache is a set associative cache having a plurality of cache ways, and the given cache and the further cache are arranged such that the further cache stores a subset of the data in the given cache. In response to an allocation event causing data for a given memory address to be stored in the further cache, the given cache issues a way indication to the further cache identifying which cache way in the given cache the data for the given memory address is stored in.
Type: Grant
Filed: October 2, 2020
Date of Patent: November 30, 2021
Assignee: Arm Limited
Inventors: Joseph Michael Pusdesris, Balaji Vijayan
-
Patent number: 11188465
Abstract: A cache memory is disclosed. The cache memory includes a plurality of ways, each way including an instruction memory portion, where the instruction memory portion includes a plurality of instruction memory locations configured to store instruction data encoding a plurality of CPU instructions. The cache memory also includes a controller configured to determine that each of a predetermined number of cache memory hit conditions have occurred, and a replacement policy circuit configured to identify one of the plurality of ways as having experienced a fewest quantity of hits of the predetermined number of cache memory hit conditions, where the controller is further configured to determine that a cache memory miss condition has occurred, and, in response to the miss condition, to cause instruction data retrieved from a RAM memory to be written to the instruction memory portion of the way identified by the replacement policy circuit.
Type: Grant
Filed: September 2, 2020
Date of Patent: November 30, 2021
Assignee: SHENZHEN GOODIX TECHNOLOGY CO., LTD.
Inventor: Bassam S Kamand
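The replacement policy above can be read as: over a window of N hits, count hits per way, then refill on a miss into the way that collected the fewest. The window-reset and tie-breaking behavior below are assumptions; the patent leaves such details open.

```python
# Illustrative fewest-hits replacement policy over a fixed hit window.
class FewestHitsPolicy:
    def __init__(self, num_ways: int, window: int):
        self.num_ways = num_ways
        self.window = window
        self.hits = [0] * num_ways
        self.seen = 0

    def record_hit(self, way: int):
        self.hits[way] += 1
        self.seen += 1

    def victim(self) -> int:
        # only meaningful once the window of hits has been observed
        assert self.seen >= self.window
        way = min(range(self.num_ways), key=lambda w: self.hits[w])
        self.hits = [0] * self.num_ways  # start a fresh window
        self.seen = 0
        return way
```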
-
Patent number: 11182297
Abstract: An information providing method of an electronic apparatus is disclosed. The information providing method may include receiving a counter information request, identifying cache counter information corresponding to the counter information request from a cache database related to a counter, and transmitting response information corresponding to the counter information request based on the identified cache counter information.
Type: Grant
Filed: December 29, 2020
Date of Patent: November 23, 2021
Assignee: Coupang Corp.
Inventor: Seok Hyun Kim
-
Patent number: 11182292
Abstract: Techniques are disclosed for processing cache operations. The techniques include determining a set of cache lines that include data for a vector memory access request; determining bank allocation priorities for the set of cache lines, wherein the bank allocation priorities are chosen to result in the set of cache lines being evenly distributed among the banks; determining actual banks for the set of cache lines; and accessing the cache lines in one or more access iterations, wherein at least one of the one or more access iterations includes accessing multiple cache lines in different banks at the same time.
Type: Grant
Filed: September 22, 2020
Date of Patent: November 23, 2021
Assignee: Advanced Micro Devices, Inc.
Inventor: Jeffrey Christopher Allan
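The motivation above is that a banked cache can service one access per bank per cycle, so lines mapping to distinct banks can be fetched together while same-bank lines serialize into extra iterations. The greedy grouping below is an assumption to illustrate the effect, not AMD's actual algorithm.

```python
# Illustrative sketch: group line addresses into access iterations so
# each iteration touches each bank at most once.
NUM_BANKS = 4  # assumed bank count

def bank_of(line_addr: int) -> int:
    return line_addr % NUM_BANKS

def access_iterations(line_addrs):
    """Greedily group addresses; bank conflicts defer to later iterations."""
    pending = list(line_addrs)
    iterations = []
    while pending:
        used_banks, group, rest = set(), [], []
        for addr in pending:
            b = bank_of(addr)
            if b in used_banks:
                rest.append(addr)  # bank conflict: defer
            else:
                used_banks.add(b)
                group.append(addr)
        iterations.append(group)
        pending = rest
    return iterations
```

Four lines in four different banks complete in one iteration; three lines in the same bank need three.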
-
Patent number: 11182299
Abstract: The present application discloses a data acquisition method, a microprocessor, and an apparatus with a storage function. The method may include: request information for obtaining target information may be received, where the request type of the request information may be an instruction request or a data request. The instruction cache and the data cache may be queried respectively, to determine whether target information matching the request information exists in the instruction cache and the data cache. If the target information exists in a cache that does not match the request type of the request information, the target information may be returned from that cache. The present application may physically separate the instruction cache and the data cache, thereby improving data acquisition efficiency.
Type: Grant
Filed: August 24, 2020
Date of Patent: November 23, 2021
Assignee: AUTOCHIPS WUHAN CO., LTD.
Inventors: Mingyang Li, Bin Zhang
-
Patent number: 11176038
Abstract: A data processing system includes multiple processing units coupled to a system interconnect including a broadcast address interconnect and a data interconnect. The processing unit includes a processor core that executes memory access instructions and a cache memory, coupled to the processor core, which is configured to store data for access by the processor core. The processing unit is configured to broadcast, on the address interconnect, a cache-inhibited write request and write data for a destination device coupled to the system interconnect. In various embodiments, the initial cache-inhibited request and the write data can be communicated in the same or different requests on the address interconnect.
Type: Grant
Filed: September 30, 2019
Date of Patent: November 16, 2021
Assignee: International Business Machines Corporation
Inventors: Derek E. Williams, Guy L. Guthrie, Hugh Shen
-
Patent number: 11170847
Abstract: Apparatuses and methods for determining soft data for fractional digit memory cells are provided. One example apparatus can include a controller to determine states of memory cells of a group of memory cells operated as fractional digit memory cells, and determine soft data based, at least partially, on dimensions to which particular memory cells correspond with respect to the group of memory cells, determined states of the memory cells with respect to a state adjacent a state corresponding to a swapping shell, and whether a particular memory cell is a candidate for swapping.
Type: Grant
Filed: February 5, 2020
Date of Patent: November 9, 2021
Assignee: Micron Technology, Inc.
Inventors: Sivagnanam Parthasarathy, Patrick R. Khayat, Mustafa N. Kaynak
-
Patent number: 11163700
Abstract: An upper level cache receives from an associated processor core a plurality of memory access requests including at least first and second memory access requests of differing first and second classes. Based on class histories associated with the first and second classes of memory access requests, the upper level cache initiates, on the system interconnect fabric, a first interconnect transaction corresponding to the first memory access request without first issuing the first memory access request to the lower level cache via a private communication channel between the upper level cache and the lower level cache.
Type: Grant
Filed: April 30, 2020
Date of Patent: November 2, 2021
Assignee: International Business Machines Corporation
Inventors: Derek E. Williams, Guy L. Guthrie, Hugh Shen, Luke Murray
-
Patent number: 11163706
Abstract: A method for improving performance of a host bus adapter in a data storage system is disclosed. In one embodiment, such a method uses, as an interface to a memory controller contained within a host bus adapter, multiple two-way ports configured to operate in parallel. The method uses, within each two-way port, a read FIFO buffer for transferring read data across the two-way port and a write FIFO buffer for transferring write data across the two-way port. The method also uses the read FIFO buffer and the write FIFO buffer within each two-way port to provide speed-matching for different clock speeds that operate on opposite sides of the two-way port. A corresponding system and computer program product are also disclosed.
Type: Grant
Filed: October 22, 2019
Date of Patent: November 2, 2021
Assignee: International Business Machines Corporation
Inventors: Bitwoded Okbay, Michael J. Palmer, Jianwei Zhuang, Ailoan Tran
-
Patent number: 11151058
Abstract: Provided are a computer program product, system, and method for staging data from storage to a fast cache tier of a multi-tier cache in a non-adaptive sector caching mode in which data staged in response to a read request is limited to track sectors required to satisfy the read request. Data is also staged from storage to a slow cache tier of the multi-tier cache in a selected adaptive caching mode of a plurality of adaptive caching modes available for staging data of tracks. Adaptive caching modes are selected for the slow cache tier as a function of historical access ratios. Prestage requests for the slow cache tier are enqueued in one of a plurality of prestage request queues of various priority levels as a function of the selected adaptive caching mode and historical access ratios. Other aspects and advantages are provided, depending upon the particular application.
Type: Grant
Filed: January 21, 2020
Date of Patent: October 19, 2021
Assignee: International Business Machines Corporation
Inventors: Lokesh Mohan Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew G. Borlick
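The queue selection described above maps an adaptive caching mode and a historical access ratio to one of several priority queues. A reduced sketch follows; the mode names, the 0.5 threshold, and the three-level queue structure are invented for illustration and do not come from the patent.

```python
# Illustrative prestage queueing: the queue priority is a function of
# the selected adaptive caching mode and a historical access ratio.
from collections import deque

QUEUES = {"high": deque(), "medium": deque(), "low": deque()}

def choose_queue(mode: str, access_ratio: float) -> str:
    # assumed policy: full-track staging with a hot access history gets
    # the highest priority; everything else degrades toward "low"
    if mode == "full_track":
        return "high" if access_ratio > 0.5 else "medium"
    return "medium" if access_ratio > 0.5 else "low"

def enqueue_prestage(track: int, mode: str, access_ratio: float) -> str:
    name = choose_queue(mode, access_ratio)
    QUEUES[name].append(track)
    return name
```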
-
Patent number: 11133061
Abstract: An example method includes determining a time between writes in place to a particular memory cell, incrementing a disturb count corresponding to a neighboring memory cell by a particular count increment that is based on the time between the writes to the particular memory cell, and determining whether to check a write disturb status of the neighboring memory cell based on the incremented disturb count.
Type: Grant
Filed: March 9, 2020
Date of Patent: September 28, 2021
Assignee: Micron Technology, Inc.
Inventors: Edward C. McGlaughlin, Samuel E. Bradshaw
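The accounting above rests on the observation that rapid writes in place disturb neighboring cells more, so the neighbor's counter is bumped by a larger increment when the inter-write time is short. A minimal sketch with made-up time thresholds and check threshold:

```python
# Illustrative write-disturb accounting: shorter time between writes
# to a cell means a larger disturb increment for its neighbor.
def count_increment(seconds_between_writes: float) -> int:
    """Larger increment for closely spaced writes (thresholds assumed)."""
    if seconds_between_writes < 0.001:
        return 4
    if seconds_between_writes < 1.0:
        return 2
    return 1

def update_neighbor(disturb_count: int, dt: float, check_threshold: int = 100):
    """Return (new_count, needs_write_disturb_check)."""
    disturb_count += count_increment(dt)
    return disturb_count, disturb_count >= check_threshold
```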
-
Patent number: 11126556
Abstract: Memory prefetching in a processor comprises: identifying, in response to memory access instructions, a pattern of addresses; and determining, based on the pattern of addresses, an address to prefetch. Determining the address to prefetch comprises: determining, using the pattern of addresses, an index into a history table; retrieving, from the history table and using the index, an offset value, wherein the offset value is not the address to prefetch; and determining the address to prefetch using the offset value and at least one address of the pattern of addresses. The method further comprises prefetching the address to prefetch.
Type: Grant
Filed: April 30, 2020
Date of Patent: September 21, 2021
Assignee: Marvell Asia Pte, Ltd.
Inventor: Shubhendu Sekhar Mukherjee
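The key point above is that the history table stores an offset rather than a full address: the prefetch target is computed from the last observed address plus the learned offset. The table size, hashing, and training rule below are assumptions for illustration.

```python
# Illustrative offset-based prefetcher: the address pattern indexes a
# history table of offsets; prefetch address = last address + offset.
TABLE_SIZE = 256  # assumed table size

class OffsetPrefetcher:
    def __init__(self):
        self.table = [0] * TABLE_SIZE  # learned offsets, not addresses

    @staticmethod
    def index(pattern) -> int:
        # fold the address pattern into a table index (assumed hash)
        return hash(tuple(pattern)) % TABLE_SIZE

    def train(self, pattern, next_addr: int):
        # remember the delta from the pattern's last address to the
        # address that actually followed it
        self.table[self.index(pattern)] = next_addr - pattern[-1]

    def prefetch_address(self, pattern) -> int:
        offset = self.table[self.index(pattern)]
        return pattern[-1] + offset
```

Storing offsets instead of addresses lets one table entry cover every occurrence of the pattern, wherever it lands in the address space.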
-
Patent number: 11126753
Abstract: A processor chip including a memory controller, an application processor, and a communication processor, where the memory controller is configured to define an area of memory as secure memory, and allow only an access request with a security attribute to access the secure memory. The application processor is configured to invoke a secure application in a trusted execution environment, and write an instruction request for a secure element into the secure memory using the secure application. The communication processor is configured to read the instruction request from the secure memory in the trusted execution environment, and send the instruction request to the secure element. The application processor and the communication processor need to be in the trusted execution environment when accessing the secure memory, and access the secure memory only using the secure application.
Type: Grant
Filed: April 25, 2019
Date of Patent: September 21, 2021
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Li Zhu, Zhihua Lu
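The access rule above reduces to a simple predicate in the memory controller: a request may touch the secure region only if it carries the security attribute (i.e., the issuer is running in the trusted execution environment). A purely illustrative sketch, with an assumed address range:

```python
# Illustrative secure-memory gate: only requests carrying the security
# attribute may access addresses inside the secure region.
SECURE_START, SECURE_END = 0x8000_0000, 0x8010_0000  # assumed region

def access_allowed(addr: int, secure_attribute: bool) -> bool:
    in_secure_region = SECURE_START <= addr < SECURE_END
    return secure_attribute or not in_secure_region
```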