Write-back Patents (Class 711/143)
-
Patent number: 12282470
Abstract: Techniques are disclosed relating to backing up skip list data structures to facilitate a subsequent recovery. In various embodiments, a computing system creates a checkpoint of a skip list including a plurality of key-value records that include pointers to others of the plurality of key-value records. Creating the checkpoint includes scanning the skip list to identify ones of the plurality of key-value records that are relevant to the checkpoint and storing the identified key-value records in a storage such that the identified key-value records include pointers modified to exclude ones of the plurality of key-value records that are not relevant to the checkpoint. The computing system can then recover the skip list based on the created checkpoint.
Type: Grant
Filed: October 5, 2022
Date of Patent: April 22, 2025
Assignee: Salesforce, Inc.
Inventors: Patrick James Helland, James E. Mace
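A minimal Python sketch of the pointer-pruning idea in this abstract (the patent publishes no code, so all names here are hypothetical): checkpoint-relevant records are copied, and each copied record's forward pointer is rewritten to skip over records excluded from the checkpoint.

```python
def checkpoint(records, relevant):
    """Copy the checkpoint-relevant records of a single skip-list lane,
    rewriting each 'next' pointer to bypass non-relevant records.

    records:  dict mapping key -> next key (None terminates the lane)
    relevant: set of keys that belong in the checkpoint
    """
    snap = {}
    for key, nxt in records.items():
        if key not in relevant:
            continue  # record is not part of the checkpoint
        # follow the chain until the next relevant record (or the end)
        while nxt is not None and nxt not in relevant:
            nxt = records[nxt]
        snap[key] = nxt
    return snap
```

Recovery can then rebuild the lane from `snap` alone, since no pointer in it references an excluded record.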
-
Patent number: 12271305
Abstract: A two-level main memory in which both volatile memory and persistent memory are exposed to the operating system in a flat manner and data movement and management is performed in cache line granularity is provided. The operating system can allocate pages in the two-level main memory randomly across the first level main memory and the second level main memory in a memory-type agnostic manner, or, in a more intelligent manner by allocating predicted hot pages in first level main memory and predicted cold pages in second level main memory. The cache line granularity movement is performed in a “swap” manner, that is, a hot cache line in the second level main memory is swapped with a cold cache line in first level main memory because data is stored in either first level main memory or second level main memory, not in both first level main memory and second level main memory.
Type: Grant
Filed: March 27, 2021
Date of Patent: April 8, 2025
Assignee: Intel Corporation
Inventors: Sai Prashanth Muralidhara, Alaa R. Alameldeen, Rajat Agarwal, Wei P. Chen, Vivek Kozhikkottu
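The exclusive "swap" movement can be sketched in a few lines of Python (a toy model under assumed names; the real mechanism is hardware at cache-line granularity): the hot line's data and the cold line's data trade places, and a remap table records where each line now lives, so no line is ever present in both tiers.

```python
def swap_lines(near, far, hot, cold, remap):
    """Swap a hot line in far (second-level) memory with a cold line in
    near (first-level) memory. near/far: dict slot -> data. remap maps a
    line's slot to its current (tier, slot) after the exchange."""
    near[cold], far[hot] = far[hot], near[cold]
    remap[hot] = ('near', cold)   # hot data now served from near memory
    remap[cold] = ('far', hot)    # cold data demoted to far memory
```

Because the exchange is one-for-one, near-memory capacity is never overcommitted by a promotion.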
-
Patent number: 12271316
Abstract: A memory system includes a firmware unit and a cache module that includes a cache controller and a cache memory. The cache controller receives an I/O message that includes a local message ID (LMID) and data to be written to a logical drive (LD), stores the data in a cache segment (CS) row of the cache memory and sends an ID of the CS row to the firmware unit. The firmware unit, in response to receiving the ID of the CS row, acquires a timestamp and stores the timestamp to check against a cache flush timeout for the CS row. The firmware unit periodically checks cache flush timeout and in response to detecting the cache flush timeout, sends a flush command with the ID of the CS row to the cache controller. The cache controller, in response to receiving the flush command, flushes the first data of the CS row.
Type: Grant
Filed: February 16, 2023
Date of Patent: April 8, 2025
Assignee: Avago Technologies International Sales Pte. Limited
Inventor: Arun Prakash Jana
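The timestamp-and-timeout flow described above can be sketched as follows (an illustrative model with hypothetical names, not the firmware's actual interface): a timestamp is recorded when a CS row is written, and a periodic poll issues a flush for any row whose age exceeds the timeout.

```python
import time

class FlushTracker:
    """Track per-CS-row timestamps and flush rows whose age exceeds
    the cache-flush timeout."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.stamps = {}    # cs_row_id -> timestamp at write
        self.flushed = []   # rows for which a flush command was sent

    def on_row_written(self, row_id, now=None):
        # firmware receives the CS row ID and records a timestamp
        self.stamps[row_id] = time.monotonic() if now is None else now

    def poll(self, now=None):
        # periodic check: flush any row past its timeout
        now = time.monotonic() if now is None else now
        for row_id, stamp in list(self.stamps.items()):
            if now - stamp >= self.timeout:
                self.flushed.append(row_id)  # stand-in for the flush command
                del self.stamps[row_id]
```

Passing `now` explicitly keeps the sketch deterministic; real firmware would read a hardware clock.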
-
Patent number: 12265713
Abstract: A method for managing tasks in a storage system, the method may include: (a) obtaining, by a scheduler, a shared budget for background storage tasks and foreground storage tasks; (b) obtaining, by the scheduler, a background budget for background storage tasks; wherein the background budget is a fraction of the shared budget; (c) allocating, by the scheduler, resources to pending storage tasks according to the shared budget and the background budget; wherein the allocating comprises (i) allocating the shared budget while prioritizing foreground storage tasks over background storage tasks; and (ii) allocating the background budget to background storage tasks; and (d) participating, by the scheduler, in executing of storage tasks according to the allocation.
Type: Grant
Filed: November 5, 2021
Date of Patent: April 1, 2025
Assignee: VAST DATA LTD.
Inventors: Hillel Costeff, Asaf Levy
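Steps (c)(i) and (c)(ii) can be sketched as a two-pass allocator (a simplified illustration; task costs and the grant model are assumptions, not from the patent): the shared budget is spent foreground-first, then the reserved background budget is offered to background tasks that missed out.

```python
def allocate(shared_budget, background_budget, foreground, background):
    """foreground/background: lists of (task, cost) pairs.
    Returns the tasks granted resources, in grant order."""
    granted = []
    # Pass 1: shared budget, foreground prioritized over background.
    for task, cost in foreground + background:
        if cost <= shared_budget:
            shared_budget -= cost
            granted.append(task)
    # Pass 2: dedicated background budget for remaining background tasks.
    for task, cost in background:
        if task not in granted and cost <= background_budget:
            background_budget -= cost
            granted.append(task)
    return granted
```

The reserved background budget prevents starvation: even when foreground tasks drain the shared budget, some background work still proceeds.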
-
Patent number: 12260225
Abstract: A system for providing system level sleep state power savings includes a plurality of memory channels and corresponding plurality of memories coupled to respective memory channels. The system includes one or more processors operative to receive information indicating that a system level sleep state is to be entered and in response to receiving the system level sleep indication, moves data stored in at least a first of the plurality of memories to at least a second of the plurality of memories. In some implementations, in response to moving the data to the second memory, the processor causes power management logic to shut off power to: at least the first memory, to a corresponding first physical layer device operatively coupled to the first memory and to a first memory controller operatively coupled to the first memory and place the second memory in a self-refresh mode of operation.
Type: Grant
Filed: September 13, 2022
Date of Patent: March 25, 2025
Assignee: ADVANCED MICRO DEVICES, INC.
Inventors: Jyoti Raheja, Hideki Kanayama, Guhan Krishnan, Ruihua Peng
-
Patent number: 12238015
Abstract: A memory sub-system connectable to a microprocessor to provide network storage services. The memory sub-system has a random-access memory configured with: first queues for the microprocessor and a network interface; second queues for the microprocessor and a processing device; and third queues for the processing device and a storage device. The processing device is configured to: generate first control messages and first data messages from packets received by the network interface; place the first control messages into the first queues for the microprocessor; and place the first data messages into the third queues for the storage device. The microprocessor processes the first control messages to implement security and administrative functions and place second control messages in the second queues. The storage device is configured to retrieve the first data messages from the third queues and second control messages from the second queues for processing.
Type: Grant
Filed: July 15, 2022
Date of Patent: February 25, 2025
Assignee: Micron Technology, Inc.
Inventor: Luca Bert
-
Patent number: 12229051
Abstract: Memory modules and associated devices and methods are provided using a memory copy function between a cache memory and a main memory that may be implemented in hardware. Address translation may additionally be provided.
Type: Grant
Filed: December 29, 2022
Date of Patent: February 18, 2025
Assignee: Intel Germany GmbH & Co. KG
Inventors: Ritesh Banerjee, Jiaxiang Shi, Ingo Volkening
-
Patent number: 12204768
Abstract: A set of blocks of a storage device are allocated for storage of data, wherein the set of blocks of the storage device is selected based on a power requirement that is based on a number of partially programmed blocks stored in the cache. Subsequent data to be stored at the storage device is assigned to the set of blocks for storage at the storage device.
Type: Grant
Filed: May 26, 2023
Date of Patent: January 21, 2025
Assignee: PURE STORAGE, INC.
Inventors: Andrew R. Bernat, Wei Tang
-
Patent number: 12153520
Abstract: A method and an apparatus for processing Bitmap data are provided by the embodiments of the present disclosure. The method for processing Bitmap data includes: dividing a Bitmap region in a disk into a plurality of partitions in advance and setting an update region in the disk; obtaining a respective amount of dirty data corresponding to each of the plurality of partitions in memory in response to a condition for writing back to the disk being satisfied; finding multiple second partitions with an amount of dirty data satisfying to be merged into the update region from the plurality of partitions according to the respective amount of dirty data corresponding to each of the plurality of partitions; and recording dirty data corresponding to the multiple second partitions in the memory into the update region in the disk through one or more I/O operations after merging.
Type: Grant
Filed: January 10, 2023
Date of Patent: November 26, 2024
Assignee: Alibaba Cloud Computing Ltd.
Inventors: Ya Lin, Feifei Li, Peng Wang, Zhushi Cheng, Fei Wu
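The partition-selection step can be sketched as a simple greedy fit (one plausible policy under assumed names; the patent does not specify the selection heuristic): pick the partitions with the most dirty data that still fit the update region, so a single write-back I/O absorbs as many dirty bitmap bytes as possible.

```python
def pick_partitions(dirty, capacity):
    """dirty: dict partition -> amount of dirty data in memory.
    capacity: size of the on-disk update region.
    Returns (chosen partitions, total dirty bytes merged)."""
    chosen, used = [], 0
    # greedy: largest dirty amounts first, skipping any that overflow
    for part, amount in sorted(dirty.items(), key=lambda kv: -kv[1]):
        if used + amount <= capacity:
            chosen.append(part)
            used += amount
    return chosen, used
```

The chosen partitions' dirty data would then be merged and written to the update region in one or a few I/O operations.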
-
Patent number: 12124852
Abstract: A graphics processing device is provided that includes a set of compute units to execute a workload, a cache coupled with the set of compute units, and circuitry coupled with the cache and the set of compute units. The circuitry is configured to, in response to a cache miss for the read from a first cache, broadcast an event within the graphics processor device to identify data associated with the cache miss, receive the event at a second compute unit in the set of compute units, and prefetch the data identified by the event into a second cache that is local to the second compute unit before an attempt to read the instruction or data by the second thread.
Type: Grant
Filed: July 6, 2023
Date of Patent: October 22, 2024
Assignee: Intel Corporation
Inventors: James Valerio, Vasanth Ranganathan, Joydeep Ray, Pradeep Ramani
-
Patent number: 12111772
Abstract: Techniques and mechanisms for providing information to determine whether a software prefetch instruction is to be executed. In an embodiment, one or more entries of a translation lookaside buffer (TLB) each include a respective value which indicates whether, according to one or more criteria, corresponding data has been sufficiently utilized. Insufficiently utilized data is indicated in a TLB entry with an identifier of an executed instruction to prefetch the corresponding data. An eviction of the TLB entry results in the creation of an entry in a registry of prefetch instructions. The entry in the registry includes the identifier of the executed prefetch instruction, and a value indicating a number of times that one or more future prefetch instructions are to be dropped. In another embodiment, execution of a subsequent prefetch instruction—which also corresponds to the identifier—is prevented based on the registry entry.
Type: Grant
Filed: December 23, 2020
Date of Patent: October 8, 2024
Assignee: Intel Corporation
Inventors: Wim Heirman, Ibrahim Hur
-
Patent number: 12105980
Abstract: A tool for tape library hierarchical storage management. The tool determines there is available tape capacity on a tape cartridge mounted to a tape drive to migrate data from a migration queue during recall operations. The tool sends a locate end of data (EOD) command to the tape drive. The tool determines the migration queue is within a longitudinal position (LPOS) range. The tool writes data from the migration queue to the tape cartridge within the LPOS range.
Type: Grant
Filed: September 29, 2023
Date of Patent: October 1, 2024
Assignee: International Business Machines Corporation
Inventors: Noriko Yamamoto, Hiroshi Itagaki, Tsuyoshi Miyamura, Tohru Hasegawa, Shinsuke Mitsuma, Atsushi Abe
-
Patent number: 12099447
Abstract: Prefetch circuitry generates, based on stream prefetch state information, prefetch requests for prefetching data to at least one cache. Cache control circuitry controls, based on cache policy information associated with cache entries in a given level of cache, at least one of cache entry replacement in the given level of cache, and allocation of data evicted from the given level of cache to a further level of cache. The stream prefetch state information specifies, for at least one stream of addresses, information representing an address access pattern for generating addresses to be specified by a corresponding series of prefetch requests. Cache policy information for at least one prefetched cache entry of the given level of cache (to which data is prefetched for a given stream of addresses) is set to a value dependent on at least one stream property associated with the given stream of addresses.
Type: Grant
Filed: October 13, 2022
Date of Patent: September 24, 2024
Assignee: Arm Limited
Inventors: Alexander Alfred Hornung, Roberto Gattuso
-
Patent number: 12086458
Abstract: A memory system includes a memory device comprising a programming buffer and a content addressable memory (CAM) block. The memory system further includes a processing device that receives a plurality of data entries to be stored at the memory device and stores the plurality of data entries in a plurality of pages of the programming buffer, each of the plurality of pages of the programming buffer comprising a respective subset of the plurality of data entries. The processing device further initiates a conversion operation to copy the plurality of data entries from the programming buffer to the CAM block. The conversion operation includes reading respective portions of each data entry in each respective subset of the plurality of data entries from the plurality of pages of the programming buffer, and writing the respective portions to respective CAM pages of the CAM block.
Type: Grant
Filed: April 26, 2022
Date of Patent: September 10, 2024
Assignee: Micron Technology, Inc.
Inventors: Tomoko Ogura Iwasaki, Manik Advani
-
Patent number: 12061562
Abstract: A memory expansion device operable with a host computer system (host) comprises a non-volatile memory (NVM) subsystem, cache memory, and control logic configurable to receive a submission from the host including a read command and specifying a payload in the NVM subsystem and demand data in the payload. The control logic is configured to request ownership of a set of cache lines corresponding to the payload, to indicate completion of the submission after acquiring ownership of the cache lines, and to load the payload to the cache memory. The set of cache lines correspond to a set of cache lines in a coherent destination memory space accessible by the host. The control logic is further configured to, after indicating completion of the submission and in response to a request from the host to read demand data in the payload, return the demand data after determining that the demand data is in the cache memory.
Type: Grant
Filed: June 1, 2021
Date of Patent: August 13, 2024
Assignee: Netlist, Inc.
Inventors: Jordan Horwich, Jerry Alston, Chih-Cheh Chen, Patrick Lee, Scott Milton, Jeekyoung Park
-
Patent number: 12056528
Abstract: A system for cooperation of disaggregated computing resources interconnected through an optical circuit, and a method for cooperation of disaggregated resources are disclosed. Functional block devices such as a processor block, an accelerator block, and a memory fabric block exist at a remote location, and these three types of remote functional block devices are interconnected and interoperated in a specific program to perform a cooperative computation and processing process. Accordingly, the system shares data and information of a memory existing in each block through optical signal interconnection that provides low-latency, fast processing, and wide bandwidth, and maintains cooperation and memory coherency.
Type: Grant
Filed: April 2, 2021
Date of Patent: August 6, 2024
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Daeub Kim, Jongtae Song, Ji Wook Youn, Joon Ki Lee, Kyeong-Eun Han
-
Patent number: 12019551
Abstract: Embodiments of the invention are directed to systems and methods for utilizing a multi-tiered caching architecture in a multi-tenant caching system. A portion of the in-memory cache may be allocated as dedicated shares (e.g., dedicated allocations) that are each dedicated to a particular tenant, while another portion of the in-memory cache (e.g., a shared allocation) can be shared by all tenants in the system. When a threshold period of time has elapsed since data stored in a dedicated allocation has last been accessed, the data may be migrated to the shared allocation. If data is accessed from the shared allocation, it may be migrated back to the dedicated allocation. Utilizing the techniques for providing a multi-tiered approach to a multi-tenant caching system can increase performance and decrease latency with respect to conventional caching systems.
Type: Grant
Filed: October 4, 2019
Date of Patent: June 25, 2024
Assignee: VISA INTERNATIONAL SERVICE ASSOCIATION
Inventors: Yu Gu, Hongqin Song
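The two-way migration policy described above can be sketched as follows (an illustrative model with hypothetical names; the patent's actual cache is in-memory infrastructure, not a Python class): entries idle past a threshold age out of the tenant's dedicated allocation into the shared allocation, and a hit in the shared allocation promotes the entry back.

```python
class TieredCache:
    """Dedicated per-tenant tier plus a shared tier, with idle-based
    demotion and access-based promotion."""

    def __init__(self, idle_limit):
        self.idle_limit = idle_limit
        self.dedicated = {}  # (tenant, key) -> (value, last_access)
        self.shared = {}     # (tenant, key) -> value

    def put(self, tenant, key, value, now):
        self.dedicated[(tenant, key)] = (value, now)

    def get(self, tenant, key, now):
        if (tenant, key) in self.dedicated:
            value, _ = self.dedicated[(tenant, key)]
            self.dedicated[(tenant, key)] = (value, now)  # refresh access time
            return value
        if (tenant, key) in self.shared:
            # hit in the shared allocation: migrate back to dedicated
            value = self.shared.pop((tenant, key))
            self.dedicated[(tenant, key)] = (value, now)
            return value
        return None

    def age(self, now):
        # demote entries idle for at least idle_limit to the shared tier
        for k, (value, last) in list(self.dedicated.items()):
            if now - last >= self.idle_limit:
                self.shared[k] = value
                del self.dedicated[k]
```

Keying the shared tier by (tenant, key) preserves tenant isolation even in the shared allocation.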
-
Patent number: 12014073
Abstract: Methods, systems, and devices for techniques for sequential access operations are described. In some cases, a memory system may be configured to suppress storing a checkpoint while in a sequential write mode. While in the sequential write mode, the memory system may initiate and store a first checkpoint, along with an indication that the checkpoint was stored as part of the sequential write mode. Subsequently, the memory system may initiate a second checkpoint and suppress storing the second checkpoint. In some cases, to rebuild an address mapping after an asynchronous power loss, the memory system may access a last stored checkpoint to determine whether the checkpoint was stored as part of a sequential write mode. The memory system may generate logical addresses for data stored after the last checkpoint and before the asynchronous power loss using a starting logical address, as well as an ending logical address.
Type: Grant
Filed: May 17, 2022
Date of Patent: June 18, 2024
Assignee: Micron Technology, Inc.
Inventor: Giuseppe Cariello
-
Patent number: 11947834
Abstract: A method to provide network storage services to a remote host system, including: generating, from packets received from the remote host system, first control messages and first data messages; buffering, in a random-access memory of a memory sub-system, the first control messages for a local host system to fetch the first control messages, process the first control messages, and generate second control messages; sending the first data messages to a storage device of the memory sub-system without the first data messages being buffered in the random-access memory; communicating the second control messages generated by the local host system to the storage device of the memory sub-system; and processing, within the storage device, the second control messages and the first data messages to provide the network storage services.
Type: Grant
Filed: July 15, 2022
Date of Patent: April 2, 2024
Assignee: Micron Technology, Inc.
Inventor: Luca Bert
-
Patent number: 11947995
Abstract: A multilevel memory system includes a nonvolatile memory (NVM) device with an NVM media having a media write unit that is different in size than a host write unit of a host controller of the system that has the multilevel memory system. The memory device includes a media controller that controls writes to the NVM media. The host controller sends a write transaction to the media controller. The write transaction can include the write data in host write units, while the media controller will commit data in media write units to the NVM media. The media controller can send a transaction message to indicate whether the write data for the write transaction was successfully committed to the NVM media.
Type: Grant
Filed: May 19, 2020
Date of Patent: April 2, 2024
Assignee: Intel Corporation
Inventors: Kuan Hua Tan, Sahar Khalili, Eng Hun Ooi, Shrinivas Venkatraman, Dimpesh Patel
-
Patent number: 11934311
Abstract: Various embodiments include a system for managing cache memory in a computing system. The system includes a sectored cache memory that provides a mechanism for sharing sectors in a cache line among multiple cache line allocations. Traditionally, different cache line allocations are assigned to different cache lines in the cache memory. Further, cache line allocations may not use all of the sectors of the cache line, leading to low utilization of the cache memory. With the present techniques, multiple cache line allocations share the same cache line, leading to improved cache memory utilization relative to prior techniques. Further, sectors of cache allocations can be assigned to reduce data bank conflicts when accessing cache memory. Reducing such data bank conflicts can result in improved memory access performance, even when cache lines are shared with multiple allocations.
Type: Grant
Filed: May 4, 2022
Date of Patent: March 19, 2024
Assignee: NVIDIA CORPORATION
Inventors: Michael Fetterman, Steven James Heinrich, Shirish Gadre
-
Patent number: 11921637
Abstract: In described examples, a processor system includes a processor core that generates memory write requests, a cache memory, and a memory controller. The memory controller has a memory pipeline. The memory controller is coupled to control the cache memory and communicatively coupled to the processor core. The memory controller is configured to receive the memory write requests from the processor core; schedule the memory write requests on the memory pipeline; and contemporaneously with scheduling respective ones of the memory write requests on the memory pipeline, send to the processor core a write acknowledgment confirming that writing of a data payload of the respective memory write request to the cache memory has completed.
Type: Grant
Filed: May 14, 2020
Date of Patent: March 5, 2024
Assignee: Texas Instruments Incorporated
Inventors: Abhijeet Ashok Chachad, Timothy David Anderson, David Matthew Thompson
-
Patent number: 11899932
Abstract: Embodiments of the present invention generally provide for multi-dimensional disk arrays and methods for managing same and can be used in video surveillance systems for the management of real-time video data, image data, or combinations thereof.
Type: Grant
Filed: March 17, 2023
Date of Patent: February 13, 2024
Assignee: NEC CORPORATION
Inventors: Wing-Yee Au, Alan Rowe
-
Patent number: 11875050
Abstract: Provided herein is a memory device including a memory block with memory cells to which word lines and bit lines are connected; page buffers, connected to the memory block through the bit lines, during a program operation, configured to convert original data that is received from an external device into variable data that is divided into groups according to a number of specific data, and configured to apply a program enable voltage or a program inhibit voltage to the bit lines according to the variable data; and a data pattern manager configured to control the page buffers to convert the original data into the variable data during the program operation.
Type: Grant
Filed: November 12, 2021
Date of Patent: January 16, 2024
Assignee: SK hynix Inc.
Inventor: Hyung Jin Choi
-
Patent number: 11868257
Abstract: Embodiments of the present disclosure generally relate to a target device handling overlap write commands. In one embodiment, a target device includes a non-volatile memory and a controller coupled to the non-volatile memory. The controller includes a random accumulated buffer, a sequential accumulated buffer, and an overlap accumulated buffer. The controller is configured to receive a new write command, classify the new write command, and write data associated with the new write command to one of the random accumulated buffer, the sequential accumulated buffer, or the overlap accumulated buffer. Once the overlap accumulated buffer becomes available, the controller first flushes to the non-volatile memory the data in the random accumulated buffer and the sequential accumulated buffer that was received prior in sequence to the data in the overlap accumulated buffer. The controller then flushes the available overlap accumulated buffer, ensuring that new write commands override prior write commands.
Type: Grant
Filed: July 8, 2022
Date of Patent: January 9, 2024
Assignee: Western Digital Technologies, Inc.
Inventor: Shay Benisty
-
Patent number: 11853580
Abstract: A computer implemented method includes obtaining positional information corresponding to end of data (EOD) on a tape and a data extent stored in the tape, wherein the positional information includes longitudinal position (LPOS), latitudinal position (wrap), and number of data blocks, comparing a block number of at least one of a currently read or located data with the positional information of the data extent to identify a current position of a tape head, identifying a positional relationship between a location of data to be read, the positional information of the EOD on the tape, and the current position of the tape head, identifying a directional relationship between a current direction of the tape head locating to data to be read and a pending write direction, and determining an appendable range for data after the EOD on the tape based on the identified positional relationship and the identified directional relationship.
Type: Grant
Filed: June 30, 2022
Date of Patent: December 26, 2023
Assignee: International Business Machines Corporation
Inventors: Noriko Yamamoto, Atsushi Abe, Tsuyoshi Miyamura, Tohru Hasegawa, Hiroshi Itagaki, Shinsuke Mitsuma
-
Patent number: 11853221
Abstract: In some examples, a system dynamically adjusts a prefetching load with respect to a prefetch cache based on a measure of past utilizations of the prefetch cache, wherein the prefetching load is to prefetch data from storage into the prefetch cache.
Type: Grant
Filed: February 18, 2022
Date of Patent: December 26, 2023
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Xiali He, Alex Veprinsky, Matthew S. Gates, William Michael McCormack, Susan Agten
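One plausible form of utilization-driven adjustment can be sketched as follows (the thresholds, doubling policy, and names are illustrative assumptions, not taken from the patent): measure what fraction of prefetched data was actually used, back off when utilization is low, and ramp up when it is high.

```python
def adjust_prefetch_depth(depth, hits, prefetched,
                          lo=0.3, hi=0.7, min_depth=1, max_depth=64):
    """Scale the prefetch depth with measured prefetch-cache utilization.

    hits:       prefetched items that were later read (used)
    prefetched: total items prefetched in the last interval
    """
    util = hits / prefetched if prefetched else 0.0
    if util < lo:
        depth = max(min_depth, depth // 2)   # mostly wasted: back off
    elif util > hi:
        depth = min(max_depth, depth * 2)    # mostly consumed: ramp up
    return depth
```

Multiplicative increase/decrease keeps the load responsive while the clamps bound the prefetcher's footprint.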
-
Patent number: 11836090
Abstract: A method to store a data value onto a cache of a storage hierarchy. A range of a collection of values that resides on a first tier of the hierarchy is initialized. The range is partitioned into disjointed range partitions; a first subset of which is designated as cached; a second subset is designated as uncached. The collection is partitioned into a subset of uncached data and cached data and placed into respective partitions. The range partition to which the data value belongs (i.e. the target range partition) is identified as being cached. If the cache is full, all of the disjointed range partitions are deleted. A first new cached partition range that contains the data value is created; it excludes at least one value that had been cached. The remaining values are placed in uncached range partitions; contents of the cache are updated to reflect the new range partition.
Type: Grant
Filed: February 17, 2021
Date of Patent: December 5, 2023
Assignee: Kinaxis Inc.
Inventor: Angela Lin
-
Patent number: 11822551
Abstract: An approach is provided that receives a request to write an entry to a database. Database caches are then checked for a portion of the entry, such as a portion that includes a primary key. Based on the checking, the approach determines whether to write the entry to the database. In response to the determination being that the entry cannot be written to the database, an error is returned with the error being returned without accessing the database, only the caches. On the other hand, the entry is written to the database in response to the determination being that the entry can be written to the database.
Type: Grant
Filed: August 31, 2021
Date of Patent: November 21, 2023
Assignee: International Business Machines Corporation
Inventors: Hariharan Krishna, Shajeer K Mohammed, Sudheesh S. Kairali
-
Patent number: 11809899
Abstract: A server having a host processor coupled to a programmable coprocessor is provided. One or more virtual machines may run on the host processor. The coprocessor may be coupled to an auxiliary memory that stores virtual machine (VM) states. During live migration, the coprocessor may determine when to move the VM states from the auxiliary memory to a remote server node. The coprocessor may include a coherent protocol home agent and state tracking circuitry configured to track data modification at a cache line granularity. Whenever a particular cache line has been modified, only the data associated with that cache line will be moved to the remote server without having to copy over the entire page, thereby substantially reducing the amount of data that needs to be transferred during migration events.
Type: Grant
Filed: September 26, 2019
Date of Patent: November 7, 2023
Assignee: Intel Corporation
Inventors: Nagabhushan Chitlur, Mariano Aguirre, Stephen S. Chang, Rohan Menezes, Michael T. Werstlein, Jonathan Lo
-
Patent number: 11803566
Abstract: Disclosed herein is a data structure which includes a sequence of events, each event associated with a sequence number indicating a temporal position of an event within the sequence of events; one or more read-offsets, each read-offset associated with a consumer, wherein each read-offset indicates a sequence number up to which a consumer has read events within the sequence of events; and at least one snapshot which represents events with sequence numbers smaller than the smallest read-offset in a compacted form. Disclosed herein is also a computer-implemented method of maintaining the data structure.
Type: Grant
Filed: December 15, 2021
Date of Patent: October 31, 2023
Assignee: Palantir Technologies Inc.
Inventors: Robert Fink, James Baker, Mark Elliot
-
Patent number: 11775527
Abstract: Region summaries of database data are stored in persistent memory of a storage cell. Because the region summaries are stored in persistent memory, when a storage cell is powered off and data in volatile memory is not retained, region summaries are nevertheless preserved in persistent memory. When the storage cell comes online, the region summaries already exist and may be used without the delay attendant to regenerating the region summaries stored in volatile memory.
Type: Grant
Filed: June 24, 2021
Date of Patent: October 3, 2023
Assignee: Oracle International Corporation
Inventors: Krishnan Meiyyappan, Semen Ustimenko, Adrian Tsz Him Ng, Kothanda Umamageswaran
-
Patent number: 11741063
Abstract: An example system includes a processor to receive, from a client device, a delete query requesting deletion of a row in a fully homomorphically encrypted (FHE) database. The processor can store an identifier of the row to be deleted in a deletion queue, where the row is to be replaced with values of a row to be inserted from a received insertion query.
Type: Grant
Filed: October 21, 2021
Date of Patent: August 29, 2023
Assignee: International Business Machines Corporation
Inventors: Allon Adir, Michael Mirkin, Ramy Masalha, Omri Soceanu
-
Patent number: 11734185
Abstract: A method to store a data value onto a cache of a storage hierarchy. A range of a collection of values that resides on a first tier of the hierarchy is initialized. The range is partitioned into disjointed range partitions; a first subset of which is designated as cached; a second subset is designated as uncached. The collection is partitioned into a subset of uncached data and cached data and placed into respective partitions. The range partition to which the data value belongs (i.e. the target range partition) is identified as being cached. If the cache is full, the range of the target range partition is reduced until either: the data value is excluded (if the data value is an end point of the partition range); or elements within the target range are evicted to make space for the data value.
Type: Grant
Filed: February 16, 2021
Date of Patent: August 22, 2023
Assignee: Kinaxis Inc.
Inventor: Angela Lin
-
Patent number: 11718312
Abstract: Disclosed are devices, systems and methods for an audio assistant in an autonomous or semi-autonomous vehicle. In one aspect the informational audio assistant receives a first set of data from a vehicle sensor and identifies an object or condition using the data from the vehicle sensor. Audio is generated representative of a perceived danger of an object or condition. A second set of data from the vehicle sensor subsystem is received and the informational audio assistant determines whether an increased danger exists based on a comparison of the first set of data to the second set of data. The informational audio assistant will apply a sound profile to the generated audio based on the increased danger.
Type: Grant
Filed: October 8, 2021
Date of Patent: August 8, 2023
Assignee: TUSIMPLE, INC.
Inventors: Cheng Zhang, Xiaodi Hou, Sven Kratz
-
Patent number: 11714758Abstract: A method to store a data value onto a cache of a storage hierarchy. A range of a collection of values that resides on a first tier of the hierarchy is initialized. The range is partitioned into disjointed range partitions; a first subset of which is designated as cached; a second subset is designated as uncached. The collection is partitioned into a subset of uncached data and cached data and placed into respective partitions. The range partition to which the data value belongs (i.e. the target range partition) is identified as being cached. If the cache is full, the target range partition is divided into two partitions, the partition that excludes the data value is designated as uncached; the values therein are evicted. If the cache has space, the data value is copied onto the cache; otherwise the division and eviction are repeated until the cache has space.Type: GrantFiled: January 29, 2021Date of Patent: August 1, 2023Assignee: Kinaxis Inc.Inventor: Angela Lin
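This entry and its siblings in the listing (11734185, 11556470) describe variants of the same range-partitioned caching scheme. A toy sketch of the split-and-evict variant described here, with invented names and a simple midpoint split as an assumption (the patent does not specify where the partition is divided):

```python
class RangeCache:
    """Toy cache holding values from one cached range [lo, hi).
    When full, the range is split and the half not containing the
    new value is evicted, repeating until there is room."""

    def __init__(self, lo, hi, capacity):
        self.lo, self.hi = lo, hi
        self.capacity = capacity
        self.values = set()       # cached values, all within [lo, hi)

    def insert(self, v):
        while len(self.values) >= self.capacity:
            mid = (self.lo + self.hi) / 2.0
            # Keep only the values on the same side of the split as v.
            keep = {x for x in self.values if (x < mid) == (v < mid)}
            if len(keep) == len(self.values):
                # Degenerate split evicted nothing: fall back to evicting
                # the value farthest from v so insertion makes progress.
                keep.discard(max(self.values, key=lambda x: abs(x - v)))
            if v < mid:
                self.hi = mid     # lower half survives
            else:
                self.lo = mid     # upper half survives
            self.values = keep
        self.values.add(v)
```

The narrowing of `[lo, hi)` is what lets membership in the cached range be tested with two comparisons rather than a per-value lookup.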
-
Patent number: 11687455Abstract: Described is a data cache implementing hybrid writebacks and writethroughs. A processing system includes a memory, a memory controller, and a processor. The processor includes a data cache including cache lines, a write buffer, and a store queue. The store queue writes data to a hit cache line and an allocated entry in the write buffer when the hit cache line is initially in at least a shared coherence state, resulting in the hit cache line being in a shared coherence state with data and the allocated entry being in a modified coherence state with data. The write buffer requests and the memory controller upgrades the hit cache line to a modified coherence state with data based on tracked coherence states. The write buffer retires the data upon upgrade. The data cache writes the data back to memory on a defined event.Type: GrantFiled: October 6, 2022Date of Patent: June 27, 2023Assignee: SiFive, Inc.Inventors: John Ingalls, Wesley Waylon Terpstra, Henry Cook
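A toy model of the hybrid scheme sketched in this abstract, with hypothetical names: a store that hits a Shared line writes the data both into the line and into a write-buffer entry, and once the memory controller grants the upgrade to Modified, the buffered entry retires. This is a simplification under stated assumptions, not the patented implementation:

```python
SHARED, MODIFIED = "S", "M"

class Line:
    def __init__(self, data, state=SHARED):
        self.data, self.state = data, state

class HybridCache:
    def __init__(self):
        self.lines = {}          # address -> Line
        self.write_buffer = {}   # address -> data pending coherence upgrade

    def store(self, addr, data):
        line = self.lines[addr]
        line.data = data                    # write the hit line immediately
        if line.state == SHARED:
            # Line is only Shared: buffer the data until the upgrade
            # to Modified is granted (write-through-like behavior).
            self.write_buffer[addr] = data

    def grant_upgrade(self, addr):
        # Memory controller upgraded the line to Modified:
        # the buffered entry can retire (write-back-like behavior).
        self.lines[addr].state = MODIFIED
        self.write_buffer.pop(addr, None)
```

The point of the hybrid is that the store completes against the line right away, while coherence correctness is preserved by the buffered copy until ownership is obtained.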
-
Patent number: 11656764Abstract: Embodiments disclosed herein provide systems, methods, and computer-readable media to implement an object store with removable storage media. In a particular embodiment, a method provides identifying first data for storage on a first removable storage medium and designating at least a portion of the first data to a first data object. The method further provides determining a first location where to store the first data object in a first value store partition of the first removable storage medium and writing the first data object to the first location. Also, the method provides writing a first key that identifies the first data object and indicates the first location to a first key store partition of the first removable storage medium.Type: GrantFiled: June 7, 2021Date of Patent: May 23, 2023Assignee: QUANTUM CORPORATIONInventors: Roderick B. Wideman, Turguy Goker, Suayb S. Arslan
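A minimal sketch of the two-partition layout this abstract describes: object payloads go to a value-store partition on the medium, and a key recording the object's location goes to a key-store partition. All names are illustrative, and the removable medium is modeled as in-memory lists and dicts:

```python
class RemovableMediumStore:
    """Toy object store with separate value-store and key-store
    partitions on one (simulated) removable medium."""

    def __init__(self):
        self.value_store = []   # ordered object payloads (value partition)
        self.key_store = {}     # key -> location on medium (key partition)

    def write_object(self, key, payload):
        location = len(self.value_store)   # next free slot on the medium
        self.value_store.append(payload)
        self.key_store[key] = location     # key records where the object is
        return location

    def read_object(self, key):
        return self.value_store[self.key_store[key]]
```

Keeping the key partition on the same medium is what makes the medium self-describing: a cartridge can be moved to another drive and its objects located without an external index.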
-
Patent number: 11620231Abstract: Aspects of the invention include defining one or more processor units having a plurality of caches, each processor unit comprising a processor having at least one cache, and wherein each of the one or more processor units are coupled together by an interconnect fabric, for each of the plurality of caches, arranging a plurality of cache lines into one or more congruence classes, each congruence class comprises a chronology vector, arranging each cache in the plurality of caches into a cluster of caches based on a plurality of scope domains, determining a first cache line to evict based on the chronology vector, and determining a target cache for installing the first cache line based on a scope of the first cache line and a saturation metric associated with the target cache, wherein the scope of the first cache line is determined based on lateral persistence tag bits.Type: GrantFiled: August 20, 2021Date of Patent: April 4, 2023Assignee: International Business Machines CorporationInventors: Ram Sai Manoj Bamdhamravuri, Craig R. Walters, Christian Jacobi, Timothy Bronson, Gregory William Alexander, Hieu T. Huynh, Robert J. Sonnelitter, III, Jason D. Kohl, Deanna P. D. Berger, Richard Joseph Branciforte
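The abstract covers several mechanisms; the chronology-vector part can be sketched as a recency ordering per congruence class, from which the eviction victim is chosen. This is a hypothetical reduction to an LRU-style vector (the scope- and saturation-based target selection is not modeled):

```python
class CongruenceClass:
    """Toy congruence class: the chronology vector orders the ways
    from least recently used (front) to most recently used (back)."""

    def __init__(self, ways):
        self.ways = ways
        self.chronology = []    # way indices, most recent last

    def touch(self, way):
        # Move the accessed way to the most-recent end of the vector.
        if way in self.chronology:
            self.chronology.remove(way)
        self.chronology.append(way)

    def victim(self):
        # Evict the least recently used way.
        return self.chronology[0]
```

In the patented scheme the evicted line is then installed in a target cache chosen by the line's scope and the target's saturation metric, rather than simply discarded.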
-
Patent number: 11593272Abstract: In response to receiving a read request for target data, an external address of the target data is obtained from the read request, which is an address unmapped to a storage system; hit information of the target data in cache of the storage system is determined based on the external address; and based on the hit information, either the external address or an internal address is selected for providing the target data. The internal address is determined based on the external address and a mapping relationship. This can shorten the data access path, speed up responses to data access requests, and allow the cache to prefetch data more efficiently.Type: GrantFiled: September 24, 2019Date of Patent: February 28, 2023Assignee: EMC IP Holding Company LLCInventors: Ruiyong Jia, Jibing Dong, Baote Zhuo, Chun Ma, Jianbin Kang
-
Patent number: 11586544Abstract: A data prefetching method and a terminal device are provided. The CPU core cluster is configured to deliver a data access request to a first cache of the at least one level of cache, where the data access request carries a first address, and the first address is an address of data that the CPU core cluster currently needs to access in the memory. The prefetcher in the terminal device provided in embodiments of this application may generate a prefetch-from address, and load data corresponding to the generated prefetch-from address to the first cache. When needing to access the data, the CPU core cluster can read from the first cache, without a need to read from the memory. This helps increase an operating rate of the CPU core cluster.Type: GrantFiled: July 23, 2019Date of Patent: February 21, 2023Assignee: HUAWEI TECHNOLOGIES CO., LTD.Inventors: Gongzheng Shi, Jianliang Ma, Liqiang Wang
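A simple way to illustrate how a prefetcher can generate a prefetch-from address, as this abstract describes, is a stride predictor: from recent demand addresses it predicts the next one and loads it into the first cache ahead of use. The stride heuristic is an assumption for illustration (the patent does not commit to this predictor), and all names are invented:

```python
class StridePrefetcher:
    """Toy stride prefetcher: predicts the next address from the
    difference between the last two demand accesses and preloads it."""

    def __init__(self, cache, memory):
        self.cache, self.memory = cache, memory
        self.last_addr = None
        self.stride = None

    def on_access(self, addr):
        if self.last_addr is not None:
            self.stride = addr - self.last_addr
        self.last_addr = addr
        if self.stride:
            prefetch_addr = addr + self.stride   # generated prefetch-from address
            if prefetch_addr in self.memory:
                # Load into the first cache before the CPU asks for it.
                self.cache[prefetch_addr] = self.memory[prefetch_addr]
```

When the access pattern is regular, the CPU core cluster then finds the data in the first cache instead of stalling on a memory read, which is the operating-rate benefit the abstract claims.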
-
Patent number: 11567873Abstract: Disclosed herein are system, method, and computer program product embodiments for utilizing an extended cache to access an object store efficiently. An embodiment operates by executing a database transaction, thereby causing pages to be written from a buffer cache to an extended cache and to an object store. The embodiment determines a transaction type of the database transaction. The transaction type can be a read-only transaction or an update transaction. The embodiment determines a phase of the database transaction based on the determined transaction type. The phase can be an execution phase or a commit phase. The embodiment then applies a caching policy to the extended cache for the evicted pages based on the determined transaction type of the database transaction and the determined phase of the database transaction.Type: GrantFiled: September 27, 2021Date of Patent: January 31, 2023Assignee: SAP SEInventors: Sagar Shedge, Nishant Sharma, Nawab Alam, Mohammed Abouzour, Gunes Aluc, Anant Agarwal
-
Patent number: 11567832Abstract: A storage unit includes an interface configured to interface and communicate with a dispersed storage network (DSN), a memory that stores operational instructions, and processing circuitry. The storage unit receives a set of read slice requests for a set of encoded data slices (EDSs) associated with a data object stored within a first set of storage units, where the first set of storage units includes the storage unit. When at least a read threshold number of EDSs and fewer than all of the set of EDSs can be successfully retrieved from the first set of storage units, the storage unit identifies at least one EDS associated with a data object that is stored in a second set of storage units, obtains the at least one EDS and stores the at least one EDS in the storage unit.Type: GrantFiled: March 12, 2021Date of Patent: January 31, 2023Assignee: PURE STORAGE, INC.Inventors: Ravi V. Khadiwala, Yogesh R. Vedpathak, Jason K. Resch, Asimuddin Kazi
-
Patent number: 11567817Abstract: A processing device can determine a configuration parameter based on a memory type of a memory component that is managed by a memory system controller. The processing device can receive data from a host system. The processing device can generate, by performing a memory operation using the configuration parameter, an instruction based on the data. The processing device can identify a sequencer of a plurality of sequencers that are collocated, within a single package external to the memory system controller, wherein each sequencer of the plurality of sequencers interfaces with a respective memory component. The processing device can send the instruction to the sequencer.Type: GrantFiled: June 1, 2021Date of Patent: January 31, 2023Assignee: Micron Technology, Inc.Inventors: Samir Mittal, Ying Yu Tai, Cheng Yuan Wu
-
Patent number: 11556470Abstract: A method to store a data value onto a cache of a storage hierarchy. A range of a collection of values that resides on a first tier of the hierarchy is initialized. The range is partitioned into disjointed range partitions; a first subset of which is designated as cached; a second subset is designated as uncached. The collection is partitioned into a subset of uncached data and cached data and placed into respective partitions. The range partition to which the data value belongs (i.e. the target range partition) is identified as being cached. If the cache is full all cached range partitions that do not contain the data value are designated as uncached. All values that lie in the cached range partitions designated as uncached are evicted. The data value is then inserted into the target range partition, and copied to the first tier.Type: GrantFiled: February 2, 2021Date of Patent: January 17, 2023Assignee: Kinaxis Inc.Inventor: Angela Lin
-
Patent number: 11556477Abstract: A system and method are disclosed for a cache IP that includes registers that are programmed through a service port. Service registers are selected from the registers to define an address range so that all cache lines within the address range can be flushed automatically using a control signal sent to a control register.Type: GrantFiled: June 17, 2019Date of Patent: January 17, 2023Assignee: ARTERIS, INC.Inventors: Mohammed Khaleeluddin, Jean-Philipe Loison
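A sketch of the register-programmed range flush this abstract describes, with invented register names: software writes the start and end addresses into service registers through the service port, then a write to a control register triggers an automatic flush of every cache line inside that range:

```python
class CacheIP:
    """Toy cache IP: a range flush is triggered by programming
    FLUSH_START/FLUSH_END service registers and setting bit 0 of CTRL.
    Register names are hypothetical, not from the patent."""

    def __init__(self):
        self.regs = {"FLUSH_START": 0, "FLUSH_END": 0, "CTRL": 0}
        self.lines = {}          # line address -> cached data

    def write_reg(self, name, value):
        self.regs[name] = value
        if name == "CTRL" and value & 1:   # flush-trigger bit
            self._flush_range(self.regs["FLUSH_START"],
                              self.regs["FLUSH_END"])

    def _flush_range(self, start, end):
        # Write back and invalidate every line in [start, end).
        for addr in [a for a in self.lines if start <= a < end]:
            del self.lines[addr]
```

The benefit over per-line flush instructions is that software issues a fixed number of register writes regardless of how many lines fall in the range.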
-
Patent number: 11544226Abstract: A plurality of computing devices are communicatively coupled to each other via a network, and each of the plurality of computing devices is operably coupled to one or more of a plurality of storage devices. A plurality of failure resilient address spaces are distributed across the plurality of storage devices such that each of the plurality of failure resilient address spaces spans a plurality of the storage devices. The plurality of computing devices maintains metadata that maps each failure resilient address space to one of the plurality of computing devices. The metadata is grouped into buckets. Each bucket is stored in a group of computing devices. However, only the leader of the group is able to directly access a particular bucket at any given time.Type: GrantFiled: December 17, 2019Date of Patent: January 3, 2023Inventors: Maor Ben Dayan, Omri Palmon, Liran Zvibel
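The bucket-to-group mapping with a single leader per group, as described above, can be sketched minimally (with the assumption, made for illustration, that the first member of each group acts as leader):

```python
class BucketMap:
    """Toy metadata map: each bucket is stored on a group of computing
    devices, but only the group leader may access it directly."""

    def __init__(self):
        self.groups = {}   # bucket id -> list of node ids; index 0 is leader

    def leader(self, bucket):
        return self.groups[bucket][0]

    def can_access(self, node, bucket):
        # Only the leader of the bucket's group has direct access.
        return node == self.leader(bucket)
```

Restricting direct access to one leader at a time serializes updates to a bucket's metadata without a separate locking protocol.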
-
Patent number: 11526445Abstract: Memory controllers, devices, modules, systems and associated methods are disclosed. In one embodiment, a memory system is disclosed. The memory system includes volatile memory configured as a cache. The cache stores first data at first storage locations. Backing storage media couples to the cache. The backing storage media stores second data in second storage locations corresponding to the first data. Logic uses a presence or status of first data in the first storage locations to cease maintenance operations to the stored second data in the second storage locations.Type: GrantFiled: May 6, 2020Date of Patent: December 13, 2022Assignee: Rambus Inc.Inventors: Collins Williams, Michael Miller, Kenneth Wright
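The maintenance-skipping logic in this abstract can be illustrated as follows: if a block's data is also present in the volatile cache, maintenance operations (such as refresh or scrub) on the backing copy can be ceased, since the cache copy can restore it later. A hypothetical sketch, names invented:

```python
class BackingStoreMaintainer:
    """Toy maintenance logic: backing-store blocks mirrored in the
    cache are skipped during periodic maintenance passes."""

    def __init__(self):
        self.cached_blocks = set()   # blocks whose data also sits in the cache

    def maintenance_targets(self, all_blocks):
        # Skip blocks mirrored in the cache; their backing copies can be
        # refreshed from the cache copy when it is eventually written back.
        return [b for b in all_blocks if b not in self.cached_blocks]
```

The payoff is reduced maintenance traffic (and power) on the backing media, proportional to how much of it is shadowed by the cache.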
-
Patent number: 11516133Abstract: Packet-processing circuitry including one or more flow caches whose contents are managed using a cache-entry replacement policy that is implemented based on one or more updatable counters maintained for each of the cache entries. In an example embodiment, the implemented policy enables the flow cache to effectively catch and keep elephant flows by giving to the caught elephant flows appropriate preference in terms of the cache dwell time, which can beneficially improve the overall cache-hit ratio and/or packet-processing throughput. Some embodiments can be used to implement an Open Virtual Switch (OVS). Some embodiments are advantageously capable of implementing the cache-entry replacement policy with very limited additional memory allocation.Type: GrantFiled: July 6, 2020Date of Patent: November 29, 2022Assignee: Nokia Solutions and Networks OyInventors: Hyunseok Chang, Fang Hao, Muralidharan Kodialam, T. V. Lakshman, Sarit Mukherjee, Limin Wang
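A toy version of a counter-guided replacement policy in the spirit of this abstract: each flow-cache entry keeps a hit counter, and on a miss with a full cache the entry with the smallest counter (a likely "mouse" flow) is replaced, so heavy "elephant" flows tend to stay cached. The single-counter policy is a simplification chosen for illustration, not the patented policy itself:

```python
class FlowCache:
    """Toy flow cache whose replacement decision favors entries
    with high hit counters (elephant flows)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}   # flow id -> hit counter

    def lookup(self, flow):
        if flow in self.entries:
            self.entries[flow] += 1        # cache hit: bump the counter
            return True
        if len(self.entries) >= self.capacity:
            # Cache full: replace the entry with the fewest hits.
            victim = min(self.entries, key=self.entries.get)
            del self.entries[victim]
        self.entries[flow] = 1
        return False
```

Because the counters are small integers kept alongside each entry, such a policy needs very little memory beyond the cache itself, which matches the abstract's claim of limited additional allocation.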
-
Patent number: RE49818Abstract: According to one embodiment, an information processing apparatus includes a memory including a buffer area, a first storage, a second storage, and a driver. The buffer area is reserved in order to transfer data between the driver and a host system that requests data writing and data reading. The driver is configured to write data into the second storage and read data from the second storage in units of predetermined blocks using the first storage as a cache for the second storage. The driver is further configured to reserve a cache area in the memory, between the buffer area and the first storage, and between the buffer area and the second storage. The driver is further configured to manage the cache area in units of the predetermined blocks.Type: GrantFiled: July 27, 2020Date of Patent: January 30, 2024Assignee: Kioxia CorporationInventor: Takehiko Kurashige