Addressing Cache Memories Patents (Class 711/3)
  • Patent number: 12108063
    Abstract: A video processing circuit coupled to an external memory for generating a video stream is provided. The external memory stores a part of a first frame. The video processing circuit includes a memory, a control circuit, an image processing circuit, and a video encoding circuit. The control circuit is used for reading a first image block from the external memory and storing the first image block in the memory, the first image block being a part of the first frame. The image processing circuit is used for reading the first image block from the memory and processing the first image block to generate a second image block which is a part of a second frame different from the first frame. The video encoding circuit is used for reading the first image block from the memory and encoding the first image block to generate a part of the video stream.
    Type: Grant
    Filed: October 18, 2022
    Date of Patent: October 1, 2024
    Assignee: SIGMASTAR TECHNOLOGY LTD.
    Inventor: Shan Li
  • Patent number: 12086039
    Abstract: The embodiments described herein describe technologies for non-volatile memory persistence in a multi-tiered memory system including two or more memory technologies for volatile memory and non-volatile memory.
    Type: Grant
    Filed: January 13, 2023
    Date of Patent: September 10, 2024
    Assignee: Rambus Inc.
    Inventors: Frederick A. Ware, J. James Tringali, Ely Tsern
  • Patent number: 12079134
    Abstract: Control logic in a memory device executes a first programming operation to program the set of memory cells to a set of programming levels. A first cache ready signal is generated, the first cache ready signal indicating to a host system to send first data associated with a second programming operation to an input/output (I/O) data cache of the memory device. A first encoded data value and a second encoded data value associated with each memory cell of the set of memory cells are generated. A second cache ready signal is generated, the second cache ready signal indicating to the host system to send second data associated with the next programming operation to the I/O data cache. The first data associated with the second programming operation is caused to be stored in a third data cache of the cache storage. A third cache ready signal is generated, the third cache ready signal indicating to the host system to send third data associated with the second programming operation to the I/O data cache.
    Type: Grant
    Filed: March 3, 2023
    Date of Patent: September 3, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Sushanth Bhushan, Dheeraj Srinivasan
  • Patent number: 12045474
    Abstract: Systems and methods providing efficient dirty memory page expiration. In one implementation, a processing device may identify a storage device. The processing device may determine a value of an indicator associated with the storage device. The indicator may indicate a level of consistency between a volatile memory device and a non-volatile memory device of the storage device. In view of the value of the indicator, the processing device may modify a synchronization timeout value associated with the volatile memory device.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: July 23, 2024
    Assignee: Red Hat, Inc.
    Inventors: Andrea Arcangeli, Giuseppe Scrivano, Michael Tsirkin
  • Patent number: 12032488
    Abstract: A circuit and corresponding method provide a translation lookaside buffer (TLB) implementation. The circuit comprises a plurality of TLB banks and TLB logic. The TLB logic computes a plurality of hash values of a tag included in a memory request. The TLB logic locates, based on hash values of the plurality of hash values computed, a contiguous translation entry (TE) and a non-contiguous TE in different TLB banks of the plurality of TLB banks. The TLB logic determines a result by comparing the tag with the contiguous TE located and by comparing the tag with the non-contiguous TE located. The TLB logic outputs the result determined toward servicing the memory request. The TLB logic advantageously enables the TLB implementation to support contiguous pages using standard random-access memories for the plurality of TLB banks.
    Type: Grant
    Filed: September 14, 2022
    Date of Patent: July 9, 2024
    Assignee: Marvell Asia Pte Ltd
    Inventors: Albert Ma, Oded Tsur
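    Illustrative sketch: a minimal Python model of the bank/hash scheme described above, in which two TLB banks are indexed with different hashes of the request tag so that a contiguous translation entry and a non-contiguous entry can be located and compared. The bank count, hash functions, and contiguous granule below are illustrative assumptions, not details taken from the patent.
      # Two TLB banks indexed by different hashes of the virtual page tag.
      # Bank 0 holds non-contiguous (single-page) entries; bank 1 holds
      # entries covering contiguous ranges. All sizes are illustrative.
      NUM_SETS = 64

      def hash_small(tag):                 # index into the non-contiguous bank
          return (tag ^ (tag >> 6)) % NUM_SETS

      def hash_contig(tag):                # index into the contiguous bank
          big = tag >> 9                   # drop bits covered by the range
          return (big ^ (big >> 6)) % NUM_SETS

      banks = [dict(), dict()]             # set index -> (stored tag, phys base)

      def install(tag, phys, contiguous=False):
          if contiguous:
              banks[1][hash_contig(tag)] = (tag >> 9, phys)
          else:
              banks[0][hash_small(tag)] = (tag, phys)

      def lookup(tag):
          # Probe both banks with their respective hashes, then compare tags.
          hit = banks[0].get(hash_small(tag))
          if hit and hit[0] == tag:
              return hit[1]
          hit = banks[1].get(hash_contig(tag))
          if hit and hit[0] == (tag >> 9):
              return hit[1] + (tag & 0x1FF)    # page offset within the range
          return None                           # TLB miss

      install(0x12345, 0xABC00)
      assert lookup(0x12345) == 0xABC00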
  • Patent number: 12032824
    Abstract: An event log management technique may include determining a new event associated with a storage device has occurred, determining a new event log can be stored in an event log chunk stored in an event log buffer, and deleting a number of old event logs starting from an oldest event log among old event logs of the event log chunk stored in the event log buffer if the new event log can be stored in the event log chunk stored in the event log buffer. The number of old event logs being deleted corresponds to a size of a new event log associated with the new event. The technique may also include storing the new event log starting at a start position of the oldest event log.
    Type: Grant
    Filed: June 9, 2022
    Date of Patent: July 9, 2024
    Assignee: SK hynix Inc.
    Inventors: Do Geon Park, Soong Sun Shin
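    Illustrative sketch: a minimal Python model of the overwrite policy described above, in which old event logs are deleted oldest-first only until the new log fits, and the new log is then stored starting at the freed position. The chunk capacity and log encoding are illustrative assumptions.
      from collections import deque

      CHUNK_CAPACITY = 64                  # bytes per event log chunk (assumed)

      class EventLogChunk:
          def __init__(self):
              self.logs = deque()          # oldest event log at the left
              self.used = 0

          def append(self, log: bytes):
              # Delete old logs, starting from the oldest, until the new
              # log fits; the number deleted depends on the new log's size.
              while self.used + len(log) > CHUNK_CAPACITY and self.logs:
                  self.used -= len(self.logs.popleft())
              # The new log takes the place where the oldest deleted log began.
              self.logs.append(log)
              self.used += len(log)

      chunk = EventLogChunk()
      for i in range(20):
          chunk.append(f"event-{i:02d}".encode())
      assert chunk.used <= CHUNK_CAPACITY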
  • Patent number: 12005911
    Abstract: Primary/secondary controller management techniques for a network include determining, by a secondary controller of a set of secondary controllers on the network, a set of housekeeping activities/tasks to be completed before shutdown of the secondary controller, communicating, by the secondary controller with a primary controller on the network, to keep the primary controller awake until the secondary controller completes the set of housekeeping activities/tasks, wherein the primary controller is configured to control and/or supervise the set of secondary controllers on the network, completing, by the secondary controller, the set of housekeeping activities/tasks irrespective of a shutdown command received from the primary controller, and upon completing the set of housekeeping activities/tasks, communicating, by the secondary controller with the primary controller, such that the primary controller shuts down and then independently shutting down itself to mitigate or eliminate errors/malfunctions due to any in
    Type: Grant
    Filed: July 25, 2022
    Date of Patent: June 11, 2024
    Assignee: FCA US LLC
    Inventors: Abhilash Gudapati, Varun Pradeepkumar Khemchandani, Riley P McGee
  • Patent number: 11973655
    Abstract: Some embodiments provide a method of performing control plane operations in a radio access network (RAN). The method deploys several machines on a host computer. On each machine, the method deploys a control plane application to perform a control plane operation. The method also configures on each machine a RAN intelligent controller (RIC) SDK to serve as an interface between the control plane application on the same machine and a set of one or more elements of the RAN. In some embodiments, the RIC SDK on each machine includes a set of network connectivity processes that establish network connections to the set of RAN elements for the control plane application. These RIC SDK processes allow the control plane application on their machine to forego having the set of network connectivity processes.
    Type: Grant
    Filed: July 15, 2021
    Date of Patent: April 30, 2024
    Assignee: VMware LLC
    Inventors: Aditya Gudipati, Amit Singh
  • Patent number: 11947453
    Abstract: An example memory sub-system includes: a plurality of bank groups, wherein each bank group comprises a plurality of memory banks; a plurality of row buffers, wherein two or more row buffers of the plurality of row buffers are associated with each memory bank; a cache comprising a plurality of cache lines; a processing logic communicatively coupled to the plurality of bank groups and the plurality of row buffers, the processing logic to perform operations comprising: receiving an activate command specifying a row of a memory bank of the plurality of memory banks; fetching data from the specified row to a row buffer of the plurality of row buffers; and copying the data to a cache line of the plurality of cache lines.
    Type: Grant
    Filed: November 20, 2020
    Date of Patent: April 2, 2024
    Assignee: MICRON TECHNOLOGY, INC.
    Inventors: Sean S. Eilert, Ameen D. Akel, Shivam Swami
  • Patent number: 11914515
    Abstract: A cache memory device is provided in the disclosure. The cache memory device includes a first AGC, a compression circuit, a second AGC, a virtual tag array, and a comparator circuit. The first AGC generates a virtual address based on a load instruction. The compression circuit obtains the higher part of the virtual address and generates a target hash value based on the higher part of the virtual address. The second AGC generates the lower part of the virtual address based on the load instruction. The virtual tag array obtains the lower part and selects a set of memory units. The comparator circuit compares the target hash value to a hash value stored in each memory unit of the set of memory units. When the comparator circuit generates the virtual tag miss signal, the comparator circuit transmits the virtual tag miss signal to the reservation station.
    Type: Grant
    Filed: October 20, 2022
    Date of Patent: February 27, 2024
    Assignee: SHANGHAI ZHAOXIN SEMICONDUCTOR CO., LTD.
    Inventors: Junjie Zhang, Mengchen Yang, Jing Qiao, Jianbin Wang
  • Patent number: 11906583
    Abstract: The present invention relates to a method for testing a device under test. A component of the device under test generates or receives a bus signal, wherein the bus signal comprises a first data signal or a second data signal, and wherein an amplitude of the first data signal is different from an amplitude of the second data signal. A measurement instrument measures an amplitude of the bus signal. Further, it is determined whether the bus signal comprises the first data signal or the second data signal, based on the measured amplitude of the bus signal.
    Type: Grant
    Filed: December 8, 2021
    Date of Patent: February 20, 2024
    Assignee: ROHDE & SCHWARZ GMBH & CO. KG
    Inventors: Kevin Guo, Hong Jin Kim
  • Patent number: 11860947
    Abstract: Provided are a computer program product, system, and method for deleted data restoration in accordance with one embodiment of the present description, in which, in response to a search request having specified search parameters, data of a deleted data unit is located as a function of the specified search parameters. In addition, metadata erased as a result of the deletion operation is restored as a function of the located data. Accordingly, the previously deleted data unit is undeleted and access to the previously deleted data unit is restored via the restored metadata. Other aspects of deleted data restoration in accordance with the present description are described.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: January 2, 2024
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael J. Koester, Kevin L Miner, Raymond E. Garcia, Richard A. Schaeffer
  • Patent number: 11853218
    Abstract: A data storage device includes a memory device and a controller coupled to the memory device. The controller is configured to receive a command, wherein the command comprises a plurality of logical block addresses (LBAs), determine that one or more LBAs of the plurality of LBAs are not aligned to a transfer layer packet (TLP) boundary, determine whether the one or more LBAs that are not aligned to a TLP boundary has a head that is unaligned that matches a previously stored tail that is unaligned, and merge and transfer the head that is unaligned with a previously stored tail that is unaligned when the head that is unaligned matches the previously stored tail that is unaligned.
    Type: Grant
    Filed: May 18, 2022
    Date of Patent: December 26, 2023
    Assignee: Western Digital Technologies, Inc.
    Inventors: Amir Segev, Amir Rozen, Shay Benisty
  • Patent number: 11797439
    Abstract: Described apparatuses and methods balance memory-portion accessing. Some memory architectures are designed to accelerate memory accesses using schemes that may be at least partially dependent on memory access requests being distributed roughly equally across multiple memory portions of a memory. Examples of such memory portions include cache sets of cache memories and memory banks of multibank memories. Some code, however, may execute in a manner that concentrates memory accesses in a subset of the total memory portions, which can reduce memory responsiveness in these memory types. To account for such behaviors, described techniques can shuffle memory addresses based on a shuffle map to produce shuffled memory addresses. The shuffle map can be determined based on a count of the occurrences of a reference bit value at bit positions of the memory addresses. Using the shuffled memory address for memory requests can substantially balance the accesses across the memory portions.
    Type: Grant
    Filed: September 12, 2022
    Date of Patent: October 24, 2023
    Assignee: Micron Technologies, Inc.
    Inventor: David Andrew Roberts
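    Illustrative sketch: a minimal Python sketch of the shuffling idea described above, where counts of a reference bit value over observed addresses pick which bits form the portion index so that accesses spread more evenly. The bit-selection rule and sizes are illustrative assumptions about one possible shuffle map, not the patent's exact mapping.
      NUM_INDEX_BITS = 4                   # bits selecting a memory portion
      ADDR_BITS = 16

      def build_shuffle_map(addresses):
          # Count occurrences of the reference bit value (1) at each position.
          counts = [sum((a >> b) & 1 for a in addresses) for b in range(ADDR_BITS)]
          half = len(addresses) / 2
          # Prefer positions whose counts are closest to 50/50: they split the
          # observed accesses most evenly across portions.
          ranked = sorted(range(ADDR_BITS), key=lambda b: abs(counts[b] - half))
          return ranked[:NUM_INDEX_BITS]

      def shuffled_index(addr, shuffle_map):
          index = 0
          for i, bit in enumerate(shuffle_map):
              index |= ((addr >> bit) & 1) << i
          return index

      addrs = [i * 64 for i in range(256)]          # strided access pattern
      shuffle_map = build_shuffle_map(addrs)
      portions = {shuffled_index(a, shuffle_map) for a in addrs}
      print(f"portions touched: {len(portions)} of {2 ** NUM_INDEX_BITS}")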
  • Patent number: 11789872
    Abstract: A prefetch unit generates a prefetch address in response to an address associated with a memory read request received from the first or second cache. The prefetch unit includes a prefetch buffer that is arranged to store the prefetch address in an address buffer of a selected slot of the prefetch buffer, where each slot of the prefetch unit includes a buffer for storing a prefetch address, and two sub-slots. Each sub-slot includes a data buffer for storing data that is prefetched using the prefetch address stored in the slot, and one of the two sub-slots of the slot is selected in response to a portion of the generated prefetch address. Subsequent hits on the prefetcher result in returning prefetched data to the requestor in response to a subsequent memory read request received after the initial received memory read request.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: October 17, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Kai Chirca, Joseph R. M. Zbiciak, Matthew D. Pierson
  • Patent number: 11775480
    Abstract: A method for deleting obsolete files from a file system is provided. The method includes receiving a request to delete a reference to a first target file of a plurality of target files stored in a file system, the first target file having a first target file name. A first reference file whose file name includes the first target file name is identified. The first reference file is deleted from the file system. The method further includes determining whether the file system includes at least one reference file, distinct from the first reference file, whose file name includes the first target file name. In accordance with a determination that the file system does not include the at least one reference file, the first target file is deleted from the file system.
    Type: Grant
    Filed: June 17, 2021
    Date of Patent: October 3, 2023
    Assignee: Google LLC
    Inventors: Yasushi Saito, Sanjay Ghemawat, Jeffrey Adgate Dean
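    Illustrative sketch: a small Python illustration of the reference-file convention described above, where each reference is a file whose name embeds the target file's name and the target is deleted only once no such reference file remains. The directory layout and the ".ref." naming pattern are illustrative assumptions.
      import os, tempfile

      def add_reference(dirpath, target_name, ref_id):
          # A reference is an empty file whose name includes the target's name.
          open(os.path.join(dirpath, f"{ref_id}.ref.{target_name}"), "w").close()

      def delete_reference(dirpath, target_name, ref_id):
          os.remove(os.path.join(dirpath, f"{ref_id}.ref.{target_name}"))
          # If no other reference file names this target, the target is obsolete.
          remaining = [f for f in os.listdir(dirpath)
                       if f.endswith(f".ref.{target_name}")]
          if not remaining:
              os.remove(os.path.join(dirpath, target_name))

      with tempfile.TemporaryDirectory() as d:
          open(os.path.join(d, "data_0001.dat"), "w").close()     # target file
          add_reference(d, "data_0001.dat", "snapA")
          add_reference(d, "data_0001.dat", "snapB")
          delete_reference(d, "data_0001.dat", "snapA")   # still referenced
          delete_reference(d, "data_0001.dat", "snapB")   # last reference: delete
          assert "data_0001.dat" not in os.listdir(d)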
  • Patent number: 11720483
    Abstract: In mutation testing, source code is mutated at various positions, and test suites are run against the original object code and each version of the mutated object code, to determine the quality of test suites against arbitrary changes in the object code. The present disclosure provides a mutation test manager configured to initialize multiple computing threads configuring a computing host to perform parallel computation; mutate class files within context of each computing thread; recompile mutated class files independently in each respective computing thread to generate heterogeneous mutants; and execute pending unit tests against heterogeneous mutants independently in each respective computing thread. Consequently, the mutation testing process is decoupled from computational bottlenecks which would result from linear, sequential generation, compilation, and testing of each mutation, especially in the context of JVM® programming languages configured to generate class-rich object code.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: August 8, 2023
    Assignee: State Farm Mutual Automobile Insurance Company
    Inventors: Andrew L. Pearson, Nate Shepherd
  • Patent number: 11720490
    Abstract: Responsive to receiving a table flush command, a first portion of an address mapping table is identified. A first flush operation with respect to the first portion of the address mapping table is performed. Responsive to receiving at least one memory access command, flush operations for a subsequent portion of the address mapping table are suspended. At least one memory access operation specified by the at least one memory access command is performed. A second flush operation with respect to the subsequent portion of the address mapping table is performed.
    Type: Grant
    Filed: August 31, 2021
    Date of Patent: August 8, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Yuehhung Chen, Chih-Kuo Kao, Fangfang Zhu, Jiangli Zhu
  • Patent number: 11704209
    Abstract: Provided are a computer program product, system, and method for using a track format code in a cache control block for a track in a cache to process read and write requests to the track in the cache. A track format table associates track format codes with track format metadata. A determination is made as to whether the track format table has track format metadata matching track format metadata of a track staged into the cache. A determination is made as to whether a track format code from the track format table for the track format metadata in the track format table matches the track format metadata of the track staged. A cache control block for the track being added to the cache is generated including the determined track format code when the track format table has the matching track format metadata.
    Type: Grant
    Filed: February 2, 2022
    Date of Patent: July 18, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kyler A. Anderson, Kevin J. Ash, Lokesh M. Gupta, Matthew J. Kalos, Beth A. Peterson
  • Patent number: 11687466
    Abstract: A virtually-indexed and virtually-tagged cache has E entries each holding a memory line at a physical memory line address (PMLA), a tag of a virtual memory line address (VMLA), and permissions of a memory page that encompasses the PMLA. A directory having E corresponding entries is physically arranged as R rows by C columns = E. Each directory entry holds a directory tag comprising hashes of corresponding portions of a page address portion of the VMLA whose tag is held in the corresponding cache entry. In response to a translation lookaside buffer management instruction (TLBMI), the microprocessor generates a target tag comprising hashes of corresponding portions of a TLBMI-specified page address. For each directory row, the microprocessor, for each directory entry of the row, compares the target and directory tags to generate a match indicator used to invalidate the corresponding cache entry.
    Type: Grant
    Filed: May 24, 2022
    Date of Patent: June 27, 2023
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Srivatsan Srinivasan
  • Patent number: 11668992
    Abstract: Methods are described for the commissioning of optically switchable window networks. During commissioning, network addresses are paired with the locations of installed devices for components on a window network. Commissioning may also involve steps of testing and validating the network devices. By correctly pairing the location of a device with its network address, a window network is configured to function such that controls sent over the network reach their targeted device(s) which in turn respond accordingly. The methods described herein may reduce frustrations that result from mispairing and installation issues that are common to conventional commissioning practices. Commissioning may involve recording a response to a manually or automatically initiated trigger. Commissioning methods described herein may rely on user input, or be automatic, not requiring user input.
    Type: Grant
    Filed: March 24, 2021
    Date of Patent: June 6, 2023
    Assignee: View, Inc.
    Inventors: Dhairya Shrivastava, Stephen Clark Brown
  • Patent number: 11645135
    Abstract: Methods and apparatuses relating to memory corruption detection are described. In one embodiment, a hardware processor includes an execution unit to execute an instruction to request access to a block of a memory through a pointer to the block of the memory, and a memory management unit to allow access to the block of the memory when a memory corruption detection value in the pointer is validated with a memory corruption detection value in the memory for the block, wherein a position of the memory corruption detection value in the pointer is selectable between a first location and a second, different location.
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: May 9, 2023
    Assignee: Intel Corporation
    Inventors: Tomer Stark, Ron Gabor, Joseph Nuzman, Raanan Sade, Bryant E. Bigbee
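    Illustrative sketch: a minimal Python model of validating a memory corruption detection (MCD) value carried in a pointer against a per-block value kept for the memory, with the in-pointer value at one of two selectable bit positions. Field widths, block size, and positions are illustrative assumptions.
      BLOCK_SIZE = 64                          # bytes per protected block (assumed)
      MCD_BITS = 4
      POSITIONS = {"high": 60, "mid": 48}      # selectable MCD locations in the pointer

      block_mcd = {}                           # block index -> MCD value "in memory"

      def make_pointer(addr, mcd, position="high"):
          block_mcd[addr // BLOCK_SIZE] = mcd
          return addr | (mcd << POSITIONS[position])

      def access(pointer, position="high"):
          mcd_in_ptr = (pointer >> POSITIONS[position]) & ((1 << MCD_BITS) - 1)
          addr = pointer & ((1 << POSITIONS[position]) - 1)
          if block_mcd.get(addr // BLOCK_SIZE) != mcd_in_ptr:
              raise MemoryError("memory corruption detection mismatch")
          return addr                          # access to the block is allowed

      p = make_pointer(0x1000, mcd=0xA)
      access(p)                                # validated: access proceeds
      try:
          access(p ^ (1 << 61))                # corrupted MCD bits -> fault
      except MemoryError as err:
          print(err)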
  • Patent number: 11625193
    Abstract: A redundant array of independent disks (RAID) storage device including: a memory device including first memory devices configured to store at least one of data chunks and corresponding parity (data chunks/parity) and a second memory device configured to serve as a spare memory region, and a RAID controller including a RAID internal memory configured to store a count table and configured to control performing of a rebuild operation in response to a command received from a host, wherein upon identification of a failed first memory device, the RAID controller accesses used regions of non-failed first memory devices based on the count table and rebuilds data of the failed first memory device using the second memory device.
    Type: Grant
    Filed: June 18, 2021
    Date of Patent: April 11, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jae Hwan Lim, Seung-Woo Lim, Sung-Wook Kim, So-Geum Kim, Jae Eun Kim, Dae Hun You, Walter Jun
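    Illustrative sketch: a minimal Python model of a rebuild that consults a count table so only used regions of the non-failed devices are read while reconstructing the failed device into the spare. The stripe layout and XOR parity below are illustrative assumptions; the abstract does not specify the RAID level.
      from functools import reduce

      STRIPES = 8
      # Four RAID members plus a spare; each member is a list of stripes.
      members = [[dev * 100 + s for s in range(STRIPES)] for dev in range(4)]
      spare = [0] * STRIPES
      # Count table in the RAID internal memory: nonzero -> stripe is in use.
      count_table = [1, 1, 0, 1, 0, 0, 1, 1]

      # Make stripes consistent: member 3 holds XOR parity of members 0..2.
      for s in range(STRIPES):
          members[3][s] = members[0][s] ^ members[1][s] ^ members[2][s]

      failed = 1                               # a failed first memory device
      for s in range(STRIPES):
          if not count_table[s]:
              continue                         # skip regions that are not used
          survivors = [members[d][s] for d in range(4) if d != failed]
          spare[s] = reduce(lambda a, b: a ^ b, survivors)

      assert all(spare[s] == members[failed][s]
                 for s in range(STRIPES) if count_table[s])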
  • Patent number: 11614894
    Abstract: Apparatuses, systems, and methods for hierarchical memory systems are described. A hierarchical memory system can leverage persistent memory to store data that is generally stored in a non-persistent memory, thereby increasing an amount of storage space allocated to a computing system at a lower cost than approaches that rely solely on non-persistent memory. In an example apparatus, an input/output (I/O) device can receive signaling that includes a command to write to or read data from an address corresponding to a non-persistent memory device, and can determine where to redirect the request. For example, the I/O device can determine to write or read data to and/or from the non-persistent memory device or the persistent memory device based at least in part on one or more characteristics of the data.
    Type: Grant
    Filed: May 4, 2021
    Date of Patent: March 28, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Anton Korzh, Vijay S. Ramesh, Richard C. Murphy
  • Patent number: 11581036
    Abstract: A CAM array of compare memory cell circuits includes a decode column corresponding to each set, and each set includes at least one row of the compare memory cell circuits. Each decode column receives a set clock signal addressing the corresponding set and generates a set match signal in each row of the corresponding set. A column compare circuit generates compare data indicating a bit of a compare tag. A row match circuit generates, for each row, in response to the set match signal, a row match signal indicating the compare tag matches the binary tag stored in the row. Circuits and loads in a decode column employed to generate the set clock signal correspond to circuits generating the row match signal in each column of the CAM array to reduce a timing margin of the match indication and decrease the access time for the CAM array.
    Type: Grant
    Filed: March 2, 2021
    Date of Patent: February 14, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sai Prakash Reddy Bijivemula, Rajesh Kumar
  • Patent number: 11567868
    Abstract: A memory device is described, including a command decoder configured to receive a copy command to copy data stored in a first memory location to a second memory location without transmitting the data to an external controller, a memory array electrically connected to the command decoder and including a plurality of memory locations including the first memory location and the second memory location, a data line electrically connected to the memory array and configured to receive, from the first memory location, the data to be transmitted to the second memory location through the same data line, and an output buffer configured to store the data received from the first memory location through the data line to be written into the second memory location without transmitting the data to the external controller.
    Type: Grant
    Filed: November 13, 2020
    Date of Patent: January 31, 2023
    Assignee: Taiwan Semiconductor Manufacturing Company, Ltd.
    Inventor: Shih-Lien Linus Lu
  • Patent number: 11550730
    Abstract: A method for performing access management in a memory device, the associated memory device and the controller thereof, and the associated electronic device are provided. The method may include: receiving a host command and a logical address from a host device; performing at least one checking operation to obtain at least one checking result, for determining whether to load a logical-to-physical (L2P) table from the NV memory to a random access memory (RAM) of the memory device, wherein the L2P table includes address mapping information for accessing the target data, and performing the at least one checking operation to obtain at least one checking result includes checking whether a first L2P-table index pointing toward the L2P table and a second L2P-table index sent from the host device are equivalent to each other; and reading the target data from the NV memory, and sending the target data to the host device.
    Type: Grant
    Filed: March 8, 2021
    Date of Patent: January 10, 2023
    Assignee: Silicon Motion, Inc.
    Inventors: Jie-Hao Lee, Cheng-Yu Yu
  • Patent number: 11537306
    Abstract: A cold data detector circuit includes a bubble break register that is configured to detect cold data in a memory system including main memory and secondary memory. The bubble break register selectively shifts received segment addresses to fill empty slots without having to wait until the empty slots are shifted out an end slot, and may provide an indication of cold data in response to every slot of the register being filled with a different respective segment address.
    Type: Grant
    Filed: March 12, 2021
    Date of Patent: December 27, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Yuan He, Daigo Toyama
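    Illustrative sketch: one possible Python reading of the bubble break register described above: an incoming segment address displaces any older slot holding the same address, the register compacts immediately instead of waiting for the empty slot to shift out, and a cold-data indication is raised only when every slot holds a different segment address. The slot count and the duplicate-handling rule are assumptions about the mechanism.
      NUM_SLOTS = 8                            # register depth (assumed)

      class BubbleBreakRegister:
          def __init__(self):
              self.slots = []                  # youngest address at the end

          def push(self, segment_addr):
              # Remove any older slot holding the same address (a "bubble")
              # and compact now rather than waiting for it to shift out.
              self.slots = [s for s in self.slots if s != segment_addr]
              self.slots.append(segment_addr)
              if len(self.slots) > NUM_SLOTS:
                  self.slots.pop(0)            # the oldest address shifts out
              # Cold-data hint: every slot filled with a different address.
              return len(self.slots) == NUM_SLOTS

      reg = BubbleBreakRegister()
      hot = any(reg.push(0x10) for _ in range(20))       # one hot segment
      cold = False
      for addr in range(0x20, 0x20 + NUM_SLOTS):         # many distinct segments
          cold = reg.push(addr)
      print(hot, cold)                                    # expect: False True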
  • Patent number: 11520705
    Abstract: A multicore processing environment (MCPE) is disclosed. In embodiments, the MCPE includes multiple processing cores hosting multiple user applications configured for simultaneous execution. The cores and user applications share system resources including main memory and input/output (I/O) domains, each I/O domain including multiple I/O devices capable of requesting inbound access to main memory through an I/O memory management unit (IOMMU). For example, the IOMMU cache associates unique cache tags to each I/O device based on device identifiers or settings determined by the system registers, caching the configuration data for each I/O device under the appropriate cache tag. When each I/O device requests main memory access, the IOMMU cache refers to the appropriate configuration data under the corresponding unique cache tag. This prevents contention in the IOMMU cache caused by one device evicting the cache entry of another, minimizing interference channels by reducing the need for main memory access.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: December 6, 2022
    Assignee: Rockwell Collins, Inc.
    Inventors: Carl J. Henning, David J. Radack
  • Patent number: 11516001
    Abstract: A method for conveying auditable information regarding provenance of a product that is cryptographically accurate while retaining complete anonymity of product and participant on a blockchain includes: receiving a product identifier; generating a digital token by applying a hashing algorithm to the product identifier; generating an entry value by applying the hashing algorithm to a combination of an event identifier and the digital token; generating a digital signature by digitally signing a data package using a private key of a cryptographic key pair, where the data package includes at least a blockchain address, the event identifier, and the digital token; and transmitting the blockchain address, the digital signature, and the entry value to a node in a blockchain network.
    Type: Grant
    Filed: February 16, 2021
    Date of Patent: November 29, 2022
    Assignee: MASTERCARD INTERNATIONAL INCORPORATED
    Inventors: Steven C. Davis, Rob Byrne, Robert Collins, Leandro Nunes Da Silva Carvalho, Deborah Eleanor Barta
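    Illustrative sketch: the hashing steps named in the abstract, expressed as a short Python fragment: a digital token is a hash of the product identifier, and the entry value is a hash of the event identifier combined with that token. SHA-256, the concatenation order, and the HMAC stand-in for the private-key signature are illustrative assumptions.
      import hashlib, hmac

      def sha256_hex(data: bytes) -> str:
          return hashlib.sha256(data).hexdigest()

      product_id = "LOT-2021-000123"           # never revealed on chain
      event_id = "SHIPPED:PORT-A"

      # Digital token: hashing algorithm applied to the product identifier.
      token = sha256_hex(product_id.encode())

      # Entry value: hashing algorithm applied to event identifier + token.
      entry_value = sha256_hex((event_id + token).encode())

      # Stand-in for the digital signature over the data package; a real
      # deployment would sign with the private key of an asymmetric key pair.
      blockchain_address = "0x" + token[:40]
      package = f"{blockchain_address}|{event_id}|{token}".encode()
      signature = hmac.new(b"demo-key", package, hashlib.sha256).hexdigest()

      print(entry_value[:16], signature[:16])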
  • Patent number: 11513740
    Abstract: A hybrid memory system provides rapid, persistent byte-addressable and block-addressable memory access to a host computer system by providing direct access to both a volatile byte-addressable memory and a volatile block-addressable memory via the same parallel memory interface. The hybrid memory system also has at least a non-volatile block-addressable memory that allows the system to persist data even through a power-loss state. The hybrid memory system can copy and move data between any of the memories using local memory controllers to free up host system resources for other tasks.
    Type: Grant
    Filed: December 18, 2020
    Date of Patent: November 29, 2022
    Assignee: EXECUTIVE ADVISORY FIRM LLC
    Inventors: Mike Hossein Amidi, Fariborz Frankie Roohparvar
  • Patent number: 11467965
    Abstract: A PIM device includes a plurality of first storage regions, a second storage region, and a column control circuit. The second storage region is coupled to each of the plurality of first storage regions through a data transmission line. The column control circuit generates a memory read control signal for reading data stored in an initially selected storage region of the plurality of first storage regions and a buffer write control signal for writing the data read from the initially selected storage region to the second storage region. The column control circuit generates a global buffer read control signal for reading the data written to the second storage region and a memory write control signal for writing the data read from the second storage region to a subsequently selected storage region of the plurality of first storage regions.
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: October 11, 2022
    Assignee: SK hynix Inc.
    Inventor: Choung Ki Song
  • Patent number: 11455110
    Abstract: Embodiments of the present invention provide concepts for handling a handover of ownership of data from a source to a referrer in a data deduplication environment. By performing a handover of the ownership of the data from the source to the referrer, the number of processes required to access the data may be reduced and so the performance of the system may be improved. The identification of a source on which to perform the handover may be performed by way of a volatile cache.
    Type: Grant
    Filed: September 8, 2021
    Date of Patent: September 27, 2022
    Assignee: International Business Machines Corporation
    Inventors: Ben Sasson, Paul Nicholas Cashman, Dominic Tomkins, Florent C. Rostagni
  • Patent number: 11442866
    Abstract: A device (e.g., an application-specific integrated circuit chip) includes a memory module processing unit and an interface. The memory module processing unit is configured to receive an instruction to obtain values stored in one or more memory components and process the obtained values to return a processed result. The memory module processing unit is also configured to store the obtained values in a cache based on one or more criteria. The memory module processing unit is configured to be included on a computer memory module configured to be installed in a computer system. The interface is configured to communicate with the one or more memory components included on the computer memory module.
    Type: Grant
    Filed: February 6, 2020
    Date of Patent: September 13, 2022
    Assignee: Meta Platforms, Inc.
    Inventors: Liu Ke, Xuan Zhang, Udit Gupta, Carole-Jean Wu, Mark David Hempstead, Brandon Reagen, Hsien-Hsin Sean Lee
  • Patent number: 11409659
    Abstract: A device includes a memory controller and a cache memory coupled to the memory controller. The cache memory has a first set of cache lines associated with a first memory block and comprising a first plurality of cache storage locations, as well as a second set of cache lines associated with a second memory block and comprising a second plurality of cache storage locations. A first location of the second plurality of cache storage locations comprises cache tag data for both the first set of cache lines and the second set of cache lines.
    Type: Grant
    Filed: April 2, 2021
    Date of Patent: August 9, 2022
    Assignee: Rambus Inc.
    Inventors: Michael Raymond Miller, Dennis Doidge, Collins Williams
  • Patent number: 11403231
    Abstract: Hash-based application programming interface (API) importing can be prevented by allocating a name page and a guard page in memory. The name page and the guard page are associated with (i) an address of names array, (ii) an address of name ordinal array, and (iii) an address of functions array that are all generated by an operating system upon initiation of an application. The name page can then be filled with valid non-zero characters. Thereafter, protections on the guard page can be changed to no access. An entry is inserted into the address of names array pointing to a relative virtual address corresponding to anywhere within the name page. Access to the guard page causes the requesting application to terminate. Related apparatus, systems, techniques and articles are also described.
    Type: Grant
    Filed: November 19, 2020
    Date of Patent: August 2, 2022
    Assignee: Cylance Inc.
    Inventor: Jeffrey Tang
  • Patent number: 11361281
    Abstract: Methods and systems for expense management, comprising: retrieving at least one electronic feed of charges for multiple expense receipt records directly from at least one lodging and/or transportation vendor, the at least one feed of charges including computer-readable electronic transaction data; detecting that at least one expense receipt record from the multiple expense receipt records from the at least one feed of charges is comprised of two or more line items; mapping the two or more line items to at least one transportation and/or lodging good and/or service that is chargeable to at least one account identifier, the mapping utilizing vendor expense codes and/or keyword searches; and pre-populating the at least one transportation and/or lodging good and/or service mapped to each of the two or more line items from the at least one expense receipt record in at least one expense report in at least one expense management system as two or more expense itemizations.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: June 14, 2022
    Assignee: SAP SE
    Inventors: Michael Fredericks, Joseph Dunnick, Valery Gorodnichev, Jeannine Armstrong
  • Patent number: 11347650
    Abstract: A method includes, for each data value in a set of one or more data values, determining a boundary between a high order portion of the data value and a low order portion of the data value, storing the low order portion at a first memory location utilizing a low data fidelity storage scheme, and storing the high order portion at a second memory location utilizing a high data fidelity storage scheme for recording data at a higher data fidelity than the low data fidelity storage scheme.
    Type: Grant
    Filed: February 7, 2018
    Date of Patent: May 31, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: David A. Roberts, Elliot H. Mednick
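    Illustrative sketch: a minimal Python model of splitting each value at a boundary, keeping the high-order portion exact and letting the low-order portion tolerate loss (modeled here by dropping its two lowest bits). The boundary position and the loss model are illustrative assumptions.
      BOUNDARY = 8                   # bits in the low-order portion (assumed)

      high_store = {}                # high data fidelity: stored exactly
      low_store = {}                 # low data fidelity: modeled as lossy

      def store(key, value):
          high_store[key] = value >> BOUNDARY
          low_store[key] = (value & ((1 << BOUNDARY) - 1)) & ~0b11   # drop 2 LSBs

      def load(key):
          return (high_store[key] << BOUNDARY) | low_store[key]

      store("x", 0x1234567B)
      assert abs(load("x") - 0x1234567B) <= 0b11   # error stays in low-order bits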
  • Patent number: 11327768
    Abstract: An arithmetic processing apparatus includes an arithmetic circuit configured to perform an arithmetic operation on data having a first data width and perform an instruction in parallel on each element of data having a second data width, and a cache memory configured to store data, wherein the cache memory includes a tag circuit storing tags for respective ways, a data circuit storing data for the respective ways, a determination circuit that determines a type of an instruction with respect to whether data accessed by the instruction has the first data width or the second data width, and a control circuit that performs either a first pipeline operation where the tag circuit and the data circuit are accessed in parallel or a second pipeline operation where the data circuit is accessed in accordance with a tag result after accessing the tag circuit, based on a result determined by the determination circuit.
    Type: Grant
    Filed: December 9, 2019
    Date of Patent: May 10, 2022
    Assignee: FUJITSU LIMITED
    Inventor: Noriko Takagi
  • Patent number: 11314647
    Abstract: Methods and systems for managing synonyms in VIPT caches are disclosed. A method includes tracking lines of a copied cache using a directory, examining a specified bit of a virtual address that is associated with a load request and determining its status and making an entry in one of a plurality of parts of the directory based on the status of the specified bit of the virtual address that is examined. The method further includes updating one of, and invalidating the other of, a cache line that is associated with the virtual address that is stored in a first index of the copied cache, and a cache line that is associated with a synonym of the virtual address that is stored at a second index of the copied cache, upon receiving a request to update a physical address associated with the virtual address.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: April 26, 2022
    Assignee: INTEL CORPORATION
    Inventor: Karthikeyan Avudaiyappan
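    Illustrative sketch: one possible Python reading of the synonym handling described above: a directory with two parts keyed by one examined virtual-address bit records where each physical address was cached, and a physical-address update refreshes the line found through one part while invalidating the synonym line found through the other. The cache geometry and the chosen bit are assumptions.
      SYNONYM_BIT = 12               # VA bit whose status selects a directory part
      INDEX_MASK = 0x7F              # virtual index uses VA bits [12:6] (assumed)

      cache = {}                     # virtual index -> (physical addr, data, valid)
      directory = {0: {}, 1: {}}     # part chosen by the bit: phys addr -> index

      def v_index(va):
          return (va >> 6) & INDEX_MASK

      def load(va, pa, data):
          part = (va >> SYNONYM_BIT) & 1
          idx = v_index(va)
          cache[idx] = (pa, data, True)
          directory[part][pa] = idx

      def update_physical(pa, new_data):
          # Update the copy tracked in one part; invalidate the synonym copy
          # tracked in the other part, so only one line stays valid.
          idx0, idx1 = directory[0].get(pa), directory[1].get(pa)
          if idx0 is not None:
              cache[idx0] = (pa, new_data, True)
          if idx1 is not None and idx1 != idx0:
              p, d, _ = cache[idx1]
              cache[idx1] = (p, d, False)

      load(0x40040, pa=0x90040, data="old")    # bit 12 = 0
      load(0x41040, pa=0x90040, data="old")    # bit 12 = 1: a synonym
      update_physical(0x90040, "new")
      print(cache[v_index(0x40040)], cache[v_index(0x41040)])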
  • Patent number: 11314654
    Abstract: A mechanism is described for facilitating optimization of cache associated with graphics processors at computing devices. A method of embodiments, as described herein, includes introducing coloring bits to contents of a cache associated with a processor including a graphics processor, wherein the coloring bits represent a signal identifying one or more caches available for use, while avoiding explicit invalidations and flushes.
    Type: Grant
    Filed: November 19, 2020
    Date of Patent: April 26, 2022
    Assignee: Intel Corporation
    Inventors: Altug Koker, Balaji Vembu, Joydeep Ray, Abhishek R. Appu
  • Patent number: 11314595
    Abstract: A RAID controller periodically collects an indication of a current compression ratio achieved by each of a plurality of storage devices within the RAID. The RAID controller determines a placement of data and the parity information within at least one of the plurality of storage devices according to at least one of a plurality of factors associated with the current compression ratio. The RAID controller writes the data and the parity information to the at least one of the plurality of storage devices according to the determined placement.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: April 26, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Roman Alexander Pletka, Sasa Tomic, Timothy Fisher, Nikolaos Papandreou, Nikolas Ioannou, Aaron Fry
  • Patent number: 11303550
    Abstract: Described embodiments provide systems and methods for monitoring server utilization and reallocating resources using upper bound values. A device can determine a value indicative of an upper bound of a processing load of a server using data points detected for the processing load over a first range of time. The upper bound can correspond to a percentage of the processing load during the first range of time. The device can monitor, using the value, the processing load of the server over a second range of time. A determination can be made whether the value of the processing load is greater than a threshold during the second range of time. The device can generate an alert for the device responsive to a comparison of the value of the processing load to the threshold.
    Type: Grant
    Filed: August 25, 2020
    Date of Patent: April 12, 2022
    Assignee: Citrix Systems, Inc.
    Inventors: Andreas Varnavas, Satyendra Tiwari, Manikam Muthiah, Nikolaos Georgakopoulos
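    Illustrative sketch: a short Python expression of the monitoring flow described above, taking the upper-bound value as a high percentile of processing-load samples and raising an alert when the value observed over a later window exceeds a threshold. The percentile, windows, and threshold are illustrative assumptions.
      def upper_bound(samples, pct=95):
          # Value indicative of an upper bound: the pct-th percentile of samples.
          s = sorted(samples)
          return s[min(len(s) - 1, int(round(pct / 100.0 * (len(s) - 1))))]

      first_window = [22, 30, 28, 35, 40, 33, 31, 29, 36, 34]    # % load samples
      second_window = [45, 52, 88, 91, 87, 49, 90, 86, 89, 50]

      baseline = upper_bound(first_window)     # learned over the first range
      THRESHOLD = 80                           # alert threshold (assumed)

      current = upper_bound(second_window)     # monitored over the second range
      if current > THRESHOLD:
          print(f"alert: load upper bound {current}% > {THRESHOLD}% "
                f"(baseline was {baseline}%)")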
  • Patent number: 11287993
    Abstract: Techniques involve: determining corresponding valid metadata rates of a plurality of metadata blocks stored in a metadata storage area of a storage system, the valid metadata rate of each metadata block indicating a ratio of valid metadata in the metadata block to all metadata in the metadata block; selecting a predetermined number of metadata blocks having a valid metadata rate lower than a first valid metadata rate threshold from the plurality of metadata blocks; storing valid metadata in the predetermined number of metadata blocks into at least one metadata block following the plurality of metadata blocks in the metadata storage area; and making the valid metadata in the predetermined number of metadata blocks invalid. Accordingly, such techniques can improve the efficiency of the storage system.
    Type: Grant
    Filed: January 21, 2020
    Date of Patent: March 29, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Shaoqin Gong, Jibing Dong, Hongpo Gao, Jianbin Kang, Baote Zhuo
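    Illustrative sketch: a minimal Python model of the compaction described above: metadata blocks whose valid-metadata rate falls below a threshold are selected, their valid metadata is stored into a following block, and the old copies are made invalid. Block contents, the threshold, and the number of selected blocks are illustrative assumptions.
      VALID_RATE_THRESHOLD = 0.5          # first valid metadata rate threshold

      # Each metadata block is a list of (metadata, is_valid) pairs.
      blocks = [
          [("a", True), ("b", False), ("c", False), ("d", False)],   # 25% valid
          [("e", True), ("f", True), ("g", True), ("h", False)],     # 75% valid
          [("i", True), ("j", False), ("k", False), ("l", True)],    # 50% valid
      ]

      def valid_rate(block):
          return sum(1 for _, valid in block if valid) / len(block)

      # Select blocks below the threshold and relocate their valid metadata.
      victims = [b for b in blocks if valid_rate(b) < VALID_RATE_THRESHOLD]
      relocated = [(m, True) for b in victims for (m, valid) in b if valid]
      blocks.append(relocated)                 # stored in a following block
      for b in victims:                        # make the old valid metadata invalid
          b[:] = [(m, False) for (m, _) in b]

      print([round(valid_rate(b), 2) for b in blocks])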
  • Patent number: 11275716
    Abstract: An approach is provided for tracking a single process instance flow. A request is received in a first system of a multi-system environment. Log files are pulled from systems with which the request interacts. Log entries are captured for the request. The log files are combined and flattened into a chronological log. A predictive model is built from an order of entries in the chronological log. Correlation keys in the entries of the chronological log are identified. Logs specifying processing of multiple ongoing requests are aggregated. A process instance of interest to a user is received. Instance specific log files are generated by deflattening the aggregated logs and by using a pattern detection algorithm that uses the predictive model and an alternate identifier algorithm that uses the correlation keys. One of the generated instance specific log files specifies a flow of the process instance of interest.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: March 15, 2022
    Assignee: International Business Machines Corporation
    Inventors: Zachary A. Silverstein, Kelly Camus, Tiberiu Suto, Andrew R. Jones
  • Patent number: 11243773
    Abstract: A computer system includes a store queue that holds store entries and a load queue that holds load entries sleeping on a store entry. A processor detects a store drain merge operation call and generates a pair of store tags comprising a first store tag corresponding to a first store entry to be drained and a second store tag corresponding to a second store entry to be drained. The processor determines whether the pair of store tags is an even-type store tag pair or an odd-type store tag pair. The processor disables the odd store tag included in the even-type store tag pair when detecting the even-type store tag pair, and wakes up a first load entry dependent on the even store tag and a second load entry dependent on the odd store tag based on the even store tag included in the even-type store tag pair while the odd store tag is disabled.
    Type: Grant
    Filed: December 14, 2020
    Date of Patent: February 8, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Bryan Lloyd, David Campbell, Brian Chen, Robert A. Cordes
  • Patent number: 11237758
    Abstract: A method and apparatus of wear leveling control for storage class memory are disclosed. According to the present invention, whether current data to be written to a nonvolatile memory corresponds to an address cache hit is determined. If the current data to be written corresponds to an address cache hit, the current data are written to a designated location in the nonvolatile memory different from a destined location in the nonvolatile memory. If the current data to be written corresponds to an address cache miss, the current data are written to the destined location in the nonvolatile memory. In another embodiment, the wear leveling control technique also includes an address rotation process to achieve long-term wear leveling as well.
    Type: Grant
    Filed: November 14, 2018
    Date of Patent: February 1, 2022
    Assignee: Wolley Inc.
    Inventor: Chuen-Shen Bernard Shung
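    Illustrative sketch: a minimal Python model of the hit/miss rule in the abstract: a write whose address hits a small address cache is diverted to a designated location instead of its destined location, spreading wear across hot addresses. The cache size, eviction policy, and choice of designated region are illustrative assumptions.
      from collections import OrderedDict

      ADDR_CACHE_SIZE = 4                # recently written addresses (assumed)
      addr_cache = OrderedDict()         # simple LRU of destined write addresses
      nvm = {}                           # nonvolatile memory model
      SPARE_BASE = 0x10000               # designated alternate region (assumed)
      remap = {}                         # destined address -> designated address

      def write(addr, value):
          if addr in addr_cache:
              # Address cache hit: divert the write to a designated location
              # different from the destined location, and remember the remap.
              designated = remap.setdefault(addr, SPARE_BASE + len(remap))
              nvm[designated] = value
              addr_cache.move_to_end(addr)
          else:
              nvm[addr] = value          # address cache miss: destined location
              addr_cache[addr] = True
              if len(addr_cache) > ADDR_CACHE_SIZE:
                  addr_cache.popitem(last=False)

      def read(addr):
          return nvm[remap.get(addr, addr)]

      write(0x40, 1); write(0x40, 2)     # the repeated write is diverted
      assert read(0x40) == 2 and SPARE_BASE in nvm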
  • Patent number: 11232039
    Abstract: Systems, apparatuses, and methods for efficiently performing memory accesses in a computing system are disclosed. A computing system includes one or more clients, a communication fabric and a last-level cache implemented with low latency, high bandwidth memory. The cache controller for the last-level cache determines a range of addresses corresponding to a first region of system memory with a copy of data stored in a second region of the last-level cache. The cache controller sends a selected memory access request to system memory when the cache controller determines a request address of the memory access request is not within the range of addresses. The cache controller services the selected memory request by accessing data from the last-level cache when the cache controller determines the request address is within the range of addresses.
    Type: Grant
    Filed: December 10, 2018
    Date of Patent: January 25, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Gabriel H. Loh
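    Illustrative sketch: the range check made by the cache controller, as a short Python fragment: requests whose address falls inside the tracked system-memory range are serviced from the last-level cache copy, and all others are sent on to system memory. The range bounds and the fill-on-first-touch behavior are illustrative assumptions.
      # System-memory range whose copy lives in the last-level cache (assumed).
      RANGE_START, RANGE_END = 0x40000000, 0x48000000

      last_level_cache = {}              # system address -> cached data
      system_memory = {addr: f"dram[{addr:#x}]"
                       for addr in (0x10000000, 0x40000040, 0x4700ff00)}

      def service(request_addr):
          if RANGE_START <= request_addr < RANGE_END:
              # Within the range: service the request from the last-level cache.
              if request_addr not in last_level_cache:
                  last_level_cache[request_addr] = system_memory[request_addr]
              return "LLC", last_level_cache[request_addr]
          # Outside the range: send the memory access request to system memory.
          return "system memory", system_memory[request_addr]

      print(service(0x40000040)[0])      # -> LLC
      print(service(0x10000000)[0])      # -> system memory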
  • Patent number: 11188475
    Abstract: A technique is provided for managing caches in a cache hierarchy. An apparatus has processing circuitry for performing operations and a plurality of caches for storing data for reference by the processing circuitry when performing the operations. The plurality of caches form a cache hierarchy including a given cache at a given hierarchical level and a further cache at a higher hierarchical level. The given cache is a set associative cache having a plurality of cache ways, and the given cache and the further cache are arranged such that the further cache stores a subset of the data in the given cache. In response to an allocation event causing data for a given memory address to be stored in the further cache, the given cache issues a way indication to the further cache identifying which cache way in the given cache the data for the given memory address is stored in.
    Type: Grant
    Filed: October 2, 2020
    Date of Patent: November 30, 2021
    Assignee: Arm Limited
    Inventors: Joseph Michael Pusdesris, Balaji Vijayan
  • Patent number: 11176043
    Abstract: A method for using a distributed memory device in a memory augmented neural network system includes receiving, by a controller, an input query to access data stored in the distributed memory device, the distributed memory device comprising a plurality of memory banks. The method further includes determining, by the controller, a memory bank selector that identifies a memory bank from the distributed memory device for memory access, wherein the memory bank selector is determined based on a type of workload associated with the input query. The method further includes computing, by the controller and by using content based access, a memory address in the identified memory bank. The method further includes generating, by the controller, an output in response to the input query by accessing the memory address.
    Type: Grant
    Filed: April 2, 2020
    Date of Patent: November 16, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ahmet Serkan Ozcan, Tomasz Kornuta, Carl Radens, Nicolas Antoine