Abstract: Memory pages are background-relocated from a low-latency local operating memory of a server computer to a higher-latency memory installation that enables high-resolution access monitoring and thus access-demand differentiation among the relocated memory pages. Higher access-demand memory pages are background-restored to the low-latency operating memory, while lower access-demand pages are maintained in the higher-latency memory installation, and yet-lower access-demand pages are optionally moved to a yet higher-latency memory installation.
Type:
Grant
Filed:
December 6, 2021
Date of Patent:
May 30, 2023
Assignee:
Rambus Inc.
Inventors:
Evan Lawrence Erickson, Christopher Haywood, Mark D. Kellam
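The promote/demote policy above reduces to a short background loop. A minimal Python sketch follows; the tier names, thresholds, and access counter are illustrative assumptions, not details from the patent.

```python
# Illustrative assumptions: tier names, per-window thresholds, and an
# access_count field maintained by the monitoring hardware.

HOT_THRESHOLD = 64    # accesses per sampling window that mark a page hot
COLD_THRESHOLD = 2    # at or below this, a page is a demotion candidate

class Page:
    def __init__(self, page_id):
        self.page_id = page_id
        self.tier = "monitored"   # relocated to the high-resolution tier
        self.access_count = 0     # measured while in the monitored tier

def background_rebalance(pages):
    """One pass of the background promote/demote loop."""
    for page in pages:
        if page.tier != "monitored":
            continue
        if page.access_count >= HOT_THRESHOLD:
            page.tier = "local"   # restore to low-latency operating memory
        elif page.access_count <= COLD_THRESHOLD:
            page.tier = "far"     # optionally push to a yet-higher-latency tier
        page.access_count = 0     # start a fresh sampling window
```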
Abstract: An operating method of a system-on-chip includes outputting a prefetch command in response to an update of mapping information on a first read target address, the update occurring in a first translation lookaside buffer storing first mapping information of a second address with respect to a first address, and storing, in response to the prefetch command, in a second translation lookaside buffer, second mapping information of a third address with respect to at least some second addresses of an address block including a second read target address.
Type:
Grant
Filed:
July 13, 2021
Date of Patent:
May 30, 2023
Assignee:
Samsung Electronics Co., Ltd.
Inventors:
Seongmin Jo, Youngseok Kim, Chunghwan You, Wooil Kim
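The prefetch trigger above can be illustrated compactly: an update to the first TLB kicks off a block-granular fill of the second. A hedged Python sketch, with the block size and translate callbacks assumed:

```python
# Illustrative assumptions: page-granular integer addresses, a fixed
# 8-entry address block, and translate callbacks supplied by the walker.

BLOCK = 8  # neighboring mappings prefetched per update (assumed)

class TwoLevelTlb:
    def __init__(self, translate_l1, translate_l2):
        self.l1 = {}   # first address  -> second address
        self.l2 = {}   # second address -> third address
        self.translate_l1 = translate_l1
        self.translate_l2 = translate_l2

    def update_l1(self, first_addr):
        """The L1 mapping update doubles as the prefetch command."""
        second_addr = self.translate_l1(first_addr)
        self.l1[first_addr] = second_addr
        self._prefetch_block(second_addr)

    def _prefetch_block(self, second_target):
        base = second_target - (second_target % BLOCK)
        for addr in range(base, base + BLOCK):     # whole address block
            if addr not in self.l2:
                self.l2[addr] = self.translate_l2(addr)
```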
Abstract: Disclosed herein are system, apparatus, article of manufacture, method, and/or computer program product embodiments for providing rolling updates of distributed systems with a shared cache. An embodiment operates by receiving a platform update request to update data item information associated with a first version of a data item cached in a shared cache memory. The embodiment may further operate by transmitting a cache update request to update the data item information of the first version of the data item cached in the shared cache memory, and isolating the first version of the data item cached in the shared cache memory based on a collection of version specific identifiers and a version agnostic identifier associated with the data item.
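To make the versioning scheme above concrete, here is a minimal Python sketch; the dict-backed cache and the key layout are assumptions, not the patent's implementation.

```python
# Illustrative assumptions: a dict stands in for the shared cache and
# the "<id>:<version>" key scheme is one way to realize the identifiers.

cache = {}

def write_version(item_id, version, value):
    cache[f"{item_id}:{version}"] = value   # version-specific identifier
    cache[item_id] = version                # version-agnostic pointer

def read_item(item_id):
    version = cache.get(item_id)            # indirect through the pointer
    return None if version is None else cache.get(f"{item_id}:{version}")

# During a rolling update, writing v2 repoints the agnostic key while the
# v1 entry stays isolated under its own key for not-yet-updated servers.
write_version("config", "v1", {"timeout": 30})
write_version("config", "v2", {"timeout": 60})
assert cache["config:v1"] == {"timeout": 30}
assert read_item("config") == {"timeout": 60}
```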
Abstract: In exemplary aspects of managing the ejection of entries of a coherence directory cache, the directory cache includes directory cache entries that can store copies of respective directory entries from a coherency directory. Each of the directory cache entries is configured to include state and ownership information of respective memory blocks. Information is stored, which indicates if memory blocks are in an active state within a memory region of a memory. A request is received and includes a memory address of a first memory block. Based on the memory address in the request, a cache hit in the directory cache is detected. The request is determined to be a request to change the state of the first memory block to an invalid state. The ejection of a directory cache entry corresponding to the first memory block is managed based on ejection policy rules.
Type:
Grant
Filed:
April 20, 2021
Date of Patent:
April 11, 2023
Assignee:
Hewlett Packard Enterprise Development LP
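A hedged Python sketch of the ejection decision described above; the specific policy rule shown (eject on invalidation when the enclosing region is inactive) is an assumed example, not the patent's rule set.

```python
# Illustrative assumptions: dir_cache maps block address ->
# DirectoryCacheEntry, and region_of() returns the block's memory region.

class DirectoryCacheEntry:
    def __init__(self, address, state, owner):
        self.address, self.state, self.owner = address, state, owner

def handle_invalidate(dir_cache, active_regions, addr, region_of):
    entry = dir_cache.get(addr)          # cache hit detected on the address
    if entry is None:
        return
    entry.state = "invalid"              # request changes the block state
    if region_of(addr) not in active_regions:
        del dir_cache[addr]              # ejection policy: drop entries whose
                                         # memory region is no longer active
```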
Abstract: A processing device identifies a portion of data in a cache memory to be written to a managed unit of a separate memory device and determines, based on respective memory addresses, whether an additional portion of data associated with the managed unit is stored in the cache memory. The processing device further generates a bit mask identifying a first location and a second location in the managed unit, wherein the first location is associated with the portion of data and the second location is associated with the additional portion of data, and performs, based on the bit mask, a read-modify-write operation to write the portion of data to the first location in the managed unit of the separate memory device and the additional portion of data to the second location in the managed unit of the separate memory device.
Type:
Grant
Filed:
July 21, 2021
Date of Patent:
March 21, 2023
Assignee:
Micron Technology, Inc.
Inventors:
Trevor C. Meyerowitz, Dhawal Bavishi, Fangfang Zhu
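The bit-mask construction above is easy to illustrate. A minimal Python sketch, with the managed-unit layout and device API assumed:

```python
# Illustrative assumptions: a device object exposing read() and
# write(addr, data, mask); slot granularity stands in for "locations".

def flush_to_managed_unit(device, unit_addr, dirty_slots):
    """dirty_slots: slot index -> bytes found in cache for this unit."""
    mask = 0
    for slot in dirty_slots:
        mask |= 1 << slot                   # bit mask of locations to update
    merged = list(device.read(unit_addr))   # read the whole managed unit
    for slot, data in dirty_slots.items():
        merged[slot] = data                 # modify only the masked locations
    device.write(unit_addr, merged, mask)   # one write finishes the RMW
```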
Abstract: Techniques of memory tiering in computing devices are disclosed herein. One example technique includes retrieving, from a first tier in a first memory, data from a data portion and metadata from a metadata portion of the first tier upon receiving a request to read data corresponding to a system memory section. The method can then include analyzing data location information in the retrieved metadata to determine whether the first tier currently contains data corresponding to the system memory section in the received request. In response to determining that the first tier currently contains that data, the method includes transmitting the retrieved data from the data portion of the first memory to the processor in response to the received request. Otherwise, the method can include identifying a memory location in the first or second memory that contains data corresponding to the system memory section and retrieving the data from the identified memory location.
Type:
Grant
Filed:
July 9, 2021
Date of Patent:
March 7, 2023
Assignee:
Microsoft Technology Licensing, LLC
Inventors:
Ishwar Agarwal, George Zacharias Chrysos, Oscar Rosell Martinez
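A hedged Python sketch of the tier-hit check described above; the fetch()/find() tier interface is an assumption for illustration only.

```python
# Minimal sketch, assuming a tier object whose fetch() returns data plus
# the co-located metadata, and a find() metadata probe; both assumed.

def read_section(system_section, first_tier, second_tier, processor):
    data, meta = first_tier.fetch(system_section)     # one combined retrieval
    if meta.section == system_section:                # data-location info says hit
        processor.deliver(data)                       # first tier holds the data
        return
    # Miss: find which memory actually holds this system-memory section.
    for tier in (first_tier, second_tier):
        slot = tier.find(system_section)              # assumed metadata lookup
        if slot is not None:
            processor.deliver(tier.read(slot))
            return
    raise KeyError(system_section)
```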
Abstract: A processing system includes a processor, a memory, and an operating system that are used to allocate a page table caching memory object (PTCM) for a user of the processing system. An allocation of the PTCM is requested from a PTCM allocation system. In order to allocate the PTCM, a plurality of physical memory pages from a memory are allocated to store a PTCM page table that is associated with the PTCM. A lockable region of a cache is designated to hold a copy of the PTCM page table, after which the lockable region of the cache is subsequently locked. The PTCM page table is populated with page table entries associated with the PTCM and copied to the locked region of the cache.
Type:
Grant
Filed:
September 27, 2019
Date of Patent:
January 10, 2023
Assignee:
Advanced Micro Devices, Inc.
Inventors:
Derrick Allen Aguren, Eric H. Van Tassell, Gabriel H. Loh, Jay Fleischman
Abstract: Systems, apparatus and methods are provided for logical-to-physical (L2P) address translation. A method may comprise receiving a request for a first logical data address (LDA), and calculating a first translation data unit (TDU) index for a first TDU. The first TDU may contain a L2P entry for the first LDA. The method may further comprise searching a cache of lookup directory entries of recently accessed TDUs using the first TDU index, determining that there is a cache miss, generating and storing an outstanding request for the lookup directory entry for the first TDU in a miss buffer, retrieving the lookup directory entry for the first TDU from an in-memory lookup directory, determining that the lookup directory entry for the first TDU is not valid, reserving a TDU space for the first TDU in a memory, and generating a load request for the first TDU.
Type:
Grant
Filed:
May 10, 2021
Date of Patent:
December 27, 2022
Assignee:
INNOGRIT TECHNOLOGIES CO., LTD.
Inventors:
Bo Fu, Chi-Chun Lai, Jie Chen, Dishi Lai, Jian Wu, Cheng-Yun Hsu, Qian Cheng
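The lookup path above, from TDU index calculation through the miss buffer to the load request, sketches out as follows in Python; the TDU geometry and memory hooks are assumptions.

```python
# Illustrative assumptions: 1024 L2P entries per TDU, dict-based caches,
# and memory.reserve_tdu()/issue_load() as the load-path hooks.

ENTRIES_PER_TDU = 1024

def lookup(lda, dir_cache, miss_buffer, in_memory_dir, memory):
    tdu_index = lda // ENTRIES_PER_TDU      # TDU holding this L2P entry
    if tdu_index in dir_cache:
        return dir_cache[tdu_index]         # directory-cache hit
    miss_buffer.append(tdu_index)           # record the outstanding request
    entry = in_memory_dir.get(tdu_index)
    if entry is not None and entry.valid:
        dir_cache[tdu_index] = entry
        return entry
    space = memory.reserve_tdu(tdu_index)   # entry invalid: reserve TDU space
    memory.issue_load(tdu_index, space)     # and generate the load request
    return None                             # caller retries when the load lands
```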
Abstract: A database management system for controlling prioritized transactions, comprising: a processor adapted to: receive from a client module a request to write into a database item as part of a high-priority transaction; check a lock status and an injection status of the database item; when the lock status of the database item includes a lock owned by a low-priority transaction and the injection status is not-injected status: change the injection status of the database item to injected status; copy current content of the database item to an undo buffer of the low-priority transaction; and write into a storage engine of the database item.
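A minimal Python sketch of the high-priority write path above; the lock, injection-flag, and undo-buffer structures are illustrative assumptions.

```python
# Illustrative assumptions: in-memory item/transaction objects with a
# lock_owner, an injected flag, and a per-transaction undo_buffer dict.

def high_priority_write(item, value, hp_txn):
    owner = item.lock_owner
    if owner is not None and owner.priority == "low" and not item.injected:
        item.injected = True                      # injected status set
        owner.undo_buffer[item.key] = item.value  # snapshot for the low-priority txn
        item.value = value                        # write into the storage engine
    elif owner is None or owner is hp_txn:
        item.value = value                        # ordinary uncontended write
    else:
        raise RuntimeError("item locked by another high-priority transaction")
```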
Abstract: An apparatus and a method for executing host commands, which is performed by a host interface in a flash controller, to include: determining whether a preset number of successive unaligned host long-write commands have been detected, where a first starting logical block address (LBA) number of data to be written, which is requested by each unaligned host long-write command, does not align with a first physical page of one super page; if so, calculating an offset, so that a second starting LBA number of data to be written, which is requested by a host write command, plus the offset aligns with a first physical page of one super page; generating a third starting LBA number by adding the offset to the second starting LBA number; and storing an entry in an LBA shifting table, which includes information about the second starting LBA number and the offset.
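The offset arithmetic above admits a one-function worked example; the super-page geometry and trigger count below are assumed values.

```python
# Illustrative assumptions: 64 LBAs per super page and a trigger count
# of 4 successive unaligned long writes (the detection loop is omitted).

LBAS_PER_SUPER_PAGE = 64
UNALIGNED_RUN_THRESHOLD = 4   # the "preset number" from the abstract

def alignment_offset(start_lba):
    """Offset that shifts start_lba onto the next super-page boundary."""
    misalignment = start_lba % LBAS_PER_SUPER_PAGE
    return 0 if misalignment == 0 else LBAS_PER_SUPER_PAGE - misalignment

# start_lba 100 -> misalignment 36, offset 28, shifted LBA 128; the pair
# (100, 28) would be stored as the LBA-shifting-table entry.
assert alignment_offset(100) == 28
```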
Abstract: An apparatus is described. The apparatus includes a memory controller to interface with a multi-level memory, where an upper level of the multi-level memory is to act as a cache for a lower level of the multi-level memory. The memory controller has circuitry to determine: i) an original address of a slot in the upper level of memory from an address of a memory request in a direct mapped fashion; ii) a miss in the cache for the request because the slot is pinned with data from another address that competes with the address; iii) a partner slot of the slot in the cache in response to the miss; iv) whether there is a hit or miss in the partner slot in the cache for the request.
Type:
Grant
Filed:
September 27, 2019
Date of Patent:
December 13, 2022
Assignee:
Intel Corporation
Inventors:
Zhe Wang, Alaa R. Alameldeen, Yi Zou, Gordon King
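A hedged Python sketch of the pinned-slot fallback above; storing the full address per line and deriving the partner slot by flipping the top index bit are simplifying assumptions, not the patent's mapping.

```python
# Illustrative assumptions: a power-of-two slot count and lines that
# record the full backing address plus a pinned flag.

NUM_SLOTS = 1 << 14

def probe(cache, addr):
    slot = addr % NUM_SLOTS                 # i) original direct-mapped slot
    line = cache[slot]
    if line and line.addr == addr:
        return slot                         # hit in the original slot
    if line and line.pinned:                # ii) miss because the slot is pinned
        partner = slot ^ (NUM_SLOTS >> 1)   # iii) partner slot (assumed bit flip)
        pline = cache[partner]
        if pline and pline.addr == addr:    # iv) hit or miss in the partner slot
            return partner
    return None                             # miss in both: go to the lower level
```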
Abstract: A method and a system for filtering process I/O operations are provided herein. The system may include: a memory component configured to store computer implementable instructions; and a processor configured to implement the computer implementable instructions, such that the system is arranged to: determine that a process is queued for initiation on the system; correlate the process with one or more predefined policies; and filter the process by blocking it from completing I/O operations on the system, or permitting it to do so, according to the one or more predefined policies, wherein the computer implementable instructions are implemented in a kernel-mode of the system.
Type:
Grant
Filed:
September 2, 2021
Date of Patent:
December 13, 2022
Assignee:
CYNET SECURITY LTD
Inventors:
Eyal Gruner, Aviad Hasnis, Mathieu Wolf, Igor Lahav, Avi Cashingad, Tomer Gavish
Abstract: A mechanism is described for facilitating dynamic merging of atomic operations in computing devices. A method of embodiments, as described herein, includes facilitating detecting atomic messages and a plurality of slot addresses. The method further includes comparing one or more slot addresses of the plurality of slot addresses with other slot addresses of the plurality of slot addresses to seek one or more matched slot addresses, where the one or more matched slot addresses are merged into one or more merged groups. The method may further include generating one or more merged atomic operations based on and corresponding to the one or more merged groups.
Type:
Grant
Filed:
September 29, 2020
Date of Patent:
December 6, 2022
Assignee:
INTEL CORPORATION
Inventors:
Joydeep Ray, Altug Koker, Abhishek R. Appu, Balaji Vembu
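The merge step above is essentially a group-by on slot address. A minimal Python sketch, assuming atomic-add messages represented as (slot, operand) pairs:

```python
# Illustrative assumptions: the message format and the reduction
# (integer add) are stand-ins; the patent covers atomics generally.

from collections import defaultdict

def merge_atomic_adds(messages):
    """messages: iterable of (slot_address, operand) atomic-add requests."""
    groups = defaultdict(list)
    for slot, operand in messages:
        groups[slot].append(operand)            # matched slot addresses
    merged = []
    for slot, operands in groups.items():
        merged.append((slot, sum(operands)))    # one merged atomic per group
    return merged

# Four messages to two slots collapse into two atomic operations.
assert merge_atomic_adds([(0x40, 1), (0x80, 2), (0x40, 3), (0x80, 4)]) \
       == [(0x40, 4), (0x80, 6)]
```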
Abstract: Apparatuses, systems, and methods for hierarchical memory systems are described. An example method includes receiving a request to store data in a persistent memory device and a non-persistent memory device via an input/output (I/O) device; redirecting the request to store the data to logic circuitry in response to determining that the request corresponds to performance of a hierarchical memory operation; storing, in a base address register associated with the logic circuitry, logical address information corresponding to the data responsive to receipt of the redirected request; asserting, by the logic circuitry, an interrupt signal on a hypervisor, the interrupt signal indicative of initiation of an operation to be performed by the hypervisor to control access to the data by the logic circuitry; and writing, based at least in part on receipt of the redirected request, the data to the persistent memory device and the non-persistent memory device substantially concurrently.
Abstract: Disclosed embodiments provide a technique in which a memory controller determines whether a fetch address is a miss in an L1 cache and, when a miss occurs, allocates a way of the L1 cache, determines whether the allocated way matches a scoreboard entry of pending service requests, and, when such a match is found, determines whether a request address of the matching scoreboard entry matches the fetch address. When the matching scoreboard entry also has a request address matching the fetch address, the scoreboard entry is modified to a demand request.
Abstract: Systems, apparatuses, and methods may provide for an eventually-consistent distributed caching mechanism for database systems. As an example, the system may include a recently updated objects (RUO) manager, which may store object identifiers of recently updated objects and RUO time-to-live values of the object identifiers. As servers read objects from the cache or write objects into the cache, the servers may also check the RUO manager to determine if the object has been updated recently enough to be at risk of being stale or outdated. If so, the servers may invalidate the object stored at the cache as it may be stale, which results in eventual consistency across the distributed database system.
Type:
Grant
Filed:
December 16, 2020
Date of Patent:
November 8, 2022
Assignee:
Comcast Cable Communications, LLC
Inventors:
Christopher Orogvany, Mark Perry, Bradley W. Jacobs
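A minimal Python sketch of the RUO check on the read path described above; the TTL value and store interfaces are assumptions.

```python
# Illustrative assumptions: a 30-second at-risk window, a dict cache,
# and a database object with a read() method.

import time

RUO_TTL = 30.0   # seconds an object is considered "at risk" (assumed)

class RuoManager:
    def __init__(self):
        self._updated = {}                     # object id -> update timestamp

    def mark_updated(self, obj_id):
        self._updated[obj_id] = time.time()

    def recently_updated(self, obj_id):
        ts = self._updated.get(obj_id)
        return ts is not None and time.time() - ts < RUO_TTL

def read_through(cache, database, ruo, obj_id):
    if ruo.recently_updated(obj_id):
        cache.pop(obj_id, None)                # possibly stale: invalidate
    if obj_id not in cache:
        cache[obj_id] = database.read(obj_id)  # refill from authoritative store
    return cache[obj_id]
```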
Abstract: Devices and techniques are disclosed herein for more efficiently performing random write operations for a memory device. In an example, a method of operating a flash memory device can include receiving a write request at a flash memory device from a host, the write request including a first logical block address and write data, saving the write data to a location of the flash memory device having a first physical address, operating the flash memory device in a first mode when an amount of write data associated with the write request is above a threshold, operating the flash memory device in a second mode when the amount of write data is below the threshold, and comparing the amount of write data to the threshold.
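A hedged Python sketch of the threshold-gated mode switch above; the threshold value and the device hooks are assumptions, and since the abstract does not say what each mode does, the modes are named generically.

```python
# Illustrative assumptions: a 128 KiB threshold and a device object with
# allocate()/program()/map()/set_mode() hooks.

WRITE_MODE_THRESHOLD = 128 * 1024   # bytes

def handle_write(device, lba, data):
    phys = device.allocate()            # first physical address
    device.program(phys, data)          # save the write data
    device.map(lba, phys)               # logical block -> physical location
    if len(data) > WRITE_MODE_THRESHOLD:
        device.set_mode("first")        # e.g., tuned for long sequential writes
    else:
        device.set_mode("second")       # e.g., tuned for short random writes
```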
Abstract: A storage device communicates with a host including a host memory. The storage device includes a semiconductor memory device and a device memory. The semiconductor memory device includes a plurality of non-volatile memory cells. The device memory stores validity information of host performance booster (HPB) sub-regions included in each of HPB regions cached in the host memory. The storage device determines to deactivate at least one HPB region among the HPB regions cached in the host memory based on the validity information included in the device memory, and transfers a message recommending to deactivate the determined HPB region to the host.
Abstract: Access control request parameter interleaving may be implemented that supports user-configurable and host-configurable processing stages. A request may be received and evaluated to determine whether user-configured interleaving, host-configured interleaving, or both user-interleaving and host-interleaving are applied. For applied interleaving, two different portions of a request parameter may be swapped.
Abstract: Executable code comprising a local file system is stored at a collaboration system server for downloading. The remote collaboration system responds to a message from a user device to download the local file system. The local file system to be downloaded is configured to operate on the user device so as to issue requests from the user device to perform an initial access to server-side collaboration data. The collaboration system responds to such requests by predicting interests of the user, which predictions are used to retrieve additional server-side collaboration data. The additional server-side collaboration data is sent to the user device and stored on the user device in an area for locally-stored collaboration system information. The user provides search terms for searching the locally-stored collaboration system information, and results are displayed on the user device. The results are displayed without the need to perform additional communications with the remote collaboration system.
Abstract: Logic may store at least a portion of an incoming packet at a memory location in a host device in response to a communication from the host device. Logic may compare the incoming packet to a digest in an entry of a primary array. When the incoming packet matches the digest, logic may retrieve a full entry from the secondary array and compare the full entry with the first incoming packet. When the full entry matches the first incoming packet, logic may store at least a portion of the first incoming packet at the memory location. And, in the absence of a match between the first incoming packet and the digest or full entry, logic may compare the first incoming packet to subsequent entries in the primary array to identify a full entry in the secondary array that matches the first incoming packet.
Type:
Grant
Filed:
March 30, 2018
Date of Patent:
September 27, 2022
Assignee:
INTEL CORPORATION
Inventors:
Keith Underwood, Karl Brummel, John Greth
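The two-stage match above, digest first and full entry only on a digest hit, sketches as follows; the 16-bit BLAKE2b digest and the parallel-array layout are assumptions.

```python
# Illustrative assumptions: parallel lists stand in for the primary
# (digest) and secondary (full-entry) arrays; keys are raw bytes.

import hashlib

def digest(key: bytes) -> bytes:
    return hashlib.blake2b(key, digest_size=2).digest()   # short digest

def match(primary, secondary, packet_key: bytes):
    d = digest(packet_key)
    for i, entry_digest in enumerate(primary):
        if entry_digest != d:
            continue                 # cheap reject, no full comparison needed
        if secondary[i] == packet_key:
            return i                 # full entry confirms: store the packet
    return None                      # no digest/full-entry match anywhere
```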
Abstract: Provided is an analysis apparatus including a first storage device configured to store data, and a processing circuitry that is configured to control the apparatus itself to function as: a dispatcher that is communicably connected to an analysis target device that performs operational processing by use of a processor and a memory unit, and generates collection target data for reproducing at least part of a state of the operational processing in the analysis target device, in accordance with data being transmitted and received between the processor and the memory unit; a data mapper that assigns, to one or more areas included in the collection target data, tag information for identifying the area; and a data writer that saves the one or more areas into the first storage device in accordance with a first policy defining a procedure of saving the collection target data into the first storage device.
Abstract: In various embodiments, a predictive assignment application computes a forecasted amount of processor use for each workload included in a set of workloads using a trained machine-learning model. Based on the forecasted amounts of processor use, the predictive assignment application computes a performance cost estimate associated with an estimated level of cache interference arising from executing the set of workloads on a set of processors. Subsequently, the predictive assignment application determines processor assignment(s) based on the performance cost estimate. At least one processor included in the set of processors is subsequently configured to execute at least a portion of a first workload that is included in the set of workloads based on the processor assignment(s).
Abstract: Methods, systems, and computer program products for high-performance cluster computing. Multiple components are operatively interconnected to carry out operations for high-performance RDMA I/O transfers over an RDMA NIC. A virtual machine of a virtualization environment initiates a first I/O call to an HCI storage pool controller using RDMA. Responsive to the first I/O call, a second I/O call is initiated from the HCI storage pool controller to a storage device of an HCI storage pool. The first I/O call to the HCI storage pool controller is implemented through a first virtual function of an RDMA NIC that is exposed in the user space of the virtualization environment. Prior to the first RDMA I/O call, a contiguous unit of memory to use in an RDMA I/O transfer is registered with the RDMA NIC. The contiguous unit of memory comprises memory that is registered using non-RDMA paths such as TCP or iSCSI.
Type:
Grant
Filed:
January 29, 2021
Date of Patent:
August 30, 2022
Inventors:
Hema Venkataramani, Felipe Franciosi, Gokul Kannan, Sreejith Mohanan, Alok Nemchand Kataria, Raphael Shai Norwitz
Abstract: A method includes receiving a communication identifying a remote database and a first value stored in the remote database that is being transferred to a first entity by a second entity. That first value is capable of being modified by the second entity. Modification of the first value stored in the remote database by the second entity is prevented by identifying an application programming interface allowing operations to be performed on the remote database, and using that API to transfer the first value so as to be associated with one or more other identifiers unknown to the second entity. After modification of the first value stored by the second entity is prevented, a transfer of a second value to a database record associated with the second entity is triggered. Related systems and applications of the method and those systems are also disclosed.
Abstract: A System on a Chip (SoC) includes a plurality of general purpose processors, a plurality of application specific processors, a plurality of SoC support processing components, a security processing subsystem (SCS), a general access Network on a Chip (NoC) coupled to and servicing communications between the plurality of general purpose processors and the plurality of SoC support components, and a proprietary access NoC coupled to and servicing communications for the plurality of application specific processors and the SCS. The SoC may further include a safety processor subsystem (SMS) coupled to the proprietary access NoC, wherein the proprietary access NoC further services communications for the SMS and isolates communications of the SMS from communications of the plurality of general purpose processors. The general access NoC and the proprietary access NoC isolate communications of the SCS and the SMS from communications of the plurality of general purpose processors.
Type:
Grant
Filed:
April 18, 2019
Date of Patent:
August 23, 2022
Assignee:
Tesla, Inc.
Inventors:
David Glasco, Patryk Kaminski, Thaddeus Fortenberry
Abstract: Systems and methods for dynamic and automatic data storage scheme switching in a distributed data storage system. A machine learning-based policy for computing probable future content item access patterns based on historical content item access patterns is employed to dynamically and automatically switch the storage of content items (e.g., files, digital data, photos, text, audio, video, streaming content, cloud documents, etc.) between different data storage schemes. The different data storage schemes may have different data storage cost and different data access cost characteristics. For example, the different data storage schemes may encompass different types of data storage devices, different data compression schemes, and/or different data redundancy schemes.
Type:
Grant
Filed:
April 3, 2020
Date of Patent:
August 23, 2022
Assignee:
DROPBOX, INC.
Inventors:
Michael Loh, Daniel R. Horn, Andraz Kavalar, David Lichtenberg, Austin Sung, Shi Feng, Jongmin Baek
Abstract: Aspects of the present invention disclose a method, computer program product, and system for performing a multiplication of a matrix with an input vector. The method includes one or more processors subdividing a matrix into logical segments, the matrix being given in a sparse-matrix data format. The method further includes one or more processors obtaining one or more test vectors. The method further includes one or more processors performing an optimization cycle. In an additional aspect, performing the optimization cycle further comprises, for each of the test vectors, one or more processors, performing a cache performance test.
Type:
Grant
Filed:
March 9, 2020
Date of Patent:
August 16, 2022
Assignee:
International Business Machines Corporation
Inventors:
Leonidas Georgopoulos, Peter Staar, Michele Dolfi, Christoph Auer, Konstantinos Bekas
Abstract: A system, method and apparatus for storing metadata in a metadata store in a robust and efficient manner including receiving a request from a client to perform a data transaction, updating a key-value pair in a metadata store based on the request, entering the data transaction in a transaction log, updating a read cache with the key-value pair, and replicating the last transaction log entry in at least one other storage node in the metadata store.
Type:
Grant
Filed:
September 12, 2017
Date of Patent:
August 2, 2022
Assignee:
Western Digital Technologies, Inc.
Inventors:
Frederik Jacqueline Luc De Schrijver, Joris Custers, Carl Rene D'Halluin
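A minimal Python sketch of the write path above (log, apply, cache, replicate); the store and peer interfaces are assumptions.

```python
# Illustrative assumptions: a store object bundling the log, key-value
# map, read cache, and peer list; peers expose a replicate() call.

def apply_transaction(store, key, value):
    entry = ("put", key, value)
    store.log.append(entry)          # enter the transaction in the log
    store.kv[key] = value            # update the key-value pair
    store.read_cache[key] = value    # keep the read cache consistent
    for peer in store.peers:         # replicate the last log entry
        peer.replicate(entry)
    return entry
```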
Abstract: A prefetcher, an operating method of the prefetcher, and a processor including the prefetcher are provided. The prefetcher includes a prefetch address generating circuit, an address tracking circuit, and an offset control circuit. The prefetch address generating circuit generates a prefetch address based on first prefetch information and an offset amount. The address tracking circuit stores the prefetch address and a plurality of historical prefetch addresses. When receiving an access address, the offset control circuit updates the offset amount based on second prefetch information, the access address, the prefetch address, and the historical prefetch addresses, and provides the prefetch address generating circuit with the updated offset amount.
Abstract: Disclosed herein is a method for operating access to a cache memory via an effective address comprising a tag field and a cache line index field. The method comprises: splitting the tag field into a first group of bits and a second group of bits. The line index bits and the first group of bits are searched in the set directory. A set identifier is generated indicating the set containing the respective cache line of the effective address. The set identifier, the line index bits and the second group of bits are searched in the validation directory. In response to determining the presence of the cache line in the set based on the second searching, a hit signal is generated.
Type:
Grant
Filed:
October 13, 2020
Date of Patent:
August 2, 2022
Assignee:
International Business Machines Corporation
Inventors:
Christian Jacobi, Ulrich Mayer, Martin Recktenwald, Anthony Saporito, Aaron Tsai
Abstract: Micro-operations (μops) are allocated into a μop cache by dividing, by a micro branch target buffer (μBTB), instructions into a first basic block in which the instructions are executed by a processing device and the first basic block corresponds to an edge of the instructions being executed by the processing device. The μBTB allocates the first basic block to an inverted basic block queue (IBBQ) and the IBBQ determines that the first basic block fits into the μop cache. The IBBQ allocates the first basic block to the μop cache based on a number of times the edge of the instructions corresponding to the first basic block is repeatedly executed by the processing device.
Abstract: The present disclosure relates to a method for improving the reading and/or writing phase in storage devices including a plurality of non-volatile memory portions managed by a memory controller, comprising: providing at least a faster memory portion having a lower latency and higher throughput with respect to said non-volatile memory portions and being bi-directionally connected to said controller; using said faster memory portion as a read and/or write cache memory for copying the content of memory regions including more frequently read or written logical blocks of said plurality of non-volatile memory portions. A specific read cache architecture for a managed storage device is also disclosed to implement the above method.
Abstract: Various technologies described herein pertain to interactive data splitting. A program for splitting an input column of an input data set into multiple output columns can be synthesized based on input-only examples. The program can further be generated based on various user input; thus, the user input can guide the synthesis of the program. Moreover, the program can be executed on the input data set to split the input column of the input data set into the multiple output columns.
Type:
Grant
Filed:
October 24, 2016
Date of Patent:
June 28, 2022
Assignee:
MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors:
Mohammad Raza, Sumit Gulwani, Ranvijay Kumar, Euan Peter Garden, Chairy Chiu Ying Cheung, Daniel Galen Simmons
Abstract: A processing device in a memory sub-system sends a program command to the memory device to cause the memory device to initiate a program operation on a corresponding wordline and sub-block of a memory array of the memory device. The processing device further receives a request to perform a read operation on data stored on the wordline and sub-block of the memory array, sends a suspend command to the memory device to cause the memory device to suspend the program operation, reads data corresponding to the read operation from a page cache of the memory device, and sends a resume command to the memory device to cause the memory device to resume the program operation.
Type:
Grant
Filed:
March 12, 2020
Date of Patent:
June 21, 2022
Assignee:
Micron Technology, Inc.
Inventors:
Abdelhakim Alhussien, Jiangang Wu, Karl D. Schuh, Qisong Lin, Jung Sheng Hoei
Abstract: Provided are a method and apparatus for managing a page cache for multiple foreground applications. A method of managing a page cache includes identifying an application accessing data stored in storage; allocating a page used by the application for the accessed data to a page cache; setting a page variable corresponding to a type of the identified application to the allocated page; and managing demoting of the allocated page based on the set page variable when the allocated page is a demoting target.
Type:
Grant
Filed:
October 19, 2020
Date of Patent:
June 14, 2022
Assignee:
Research & Business Foundation Sungkyunkwan University
Abstract: Managing a cache memory in a storage system includes maintaining a queue that stores data indicative of the read requests for a particular logical storage unit of the storage system in an order that the read requests are received by the storage system, receiving a read request for a particular page of the particular logical storage unit, and removing a number of elements in the queue and resizing the queue in response to the queue being full. Managing the cache memory also includes placing data indicative of the read request in the queue, determining a prefetch metric that varies according to a number of adjacent elements in a sorted version of the queue having a difference that is less than a predetermined value and greater than zero, and prefetching a plurality of pages that come after the particular page sequentially if the prefetch metric is greater than a predefined value.
Type:
Grant
Filed:
October 14, 2019
Date of Patent:
May 31, 2022
Assignee:
EMC IP Holding Company LLC
Inventors:
Vinicius Gottin, Jonas F. Dias, Hugo de Oliveira Barbalho, Romulo D. Pinho, Tiago Calmon
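The prefetch metric above is a small computation over a sorted copy of the queue. A hedged Python sketch, with the queue length, nearness window, and thresholds assumed (a bounded deque approximates the remove-and-resize step):

```python
# Illustrative assumptions: all four tuning constants below; a deque
# with maxlen stands in for removing elements when the queue is full.

from collections import deque

QUEUE_LEN, NEARNESS, METRIC_MIN, PREFETCH_N = 32, 8, 12, 4

queue = deque(maxlen=QUEUE_LEN)   # per-logical-unit read-request history

def on_read(page):
    queue.append(page)                       # data indicative of the request
    s = sorted(queue)
    # Count adjacent sorted elements whose difference is in (0, NEARNESS).
    metric = sum(1 for a, b in zip(s, s[1:]) if 0 < b - a < NEARNESS)
    if metric > METRIC_MIN:
        return [page + i for i in range(1, PREFETCH_N + 1)]  # sequential pages
    return []                                # locality too weak: no prefetch
```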
Abstract: Techniques are disclosed relating to filtering access to a content-addressable memory (CAM). In some embodiments, a processor monitors for certain microarchitectural states and filters access to the CAM in states where there cannot be a match in the CAM or where matching entries will not be used even if there is a match. In some embodiments, toggle control circuitry prevents toggling of input lines when filtering CAM access, which may reduce dynamic power consumption. In some example embodiments, the CAM is used to access a load queue to validate that out-of-order execution for a set of instructions matches in-order execution, and situations where ordering should be checked are relatively rare.
Type:
Grant
Filed:
February 15, 2019
Date of Patent:
May 31, 2022
Assignee:
Apple Inc.
Inventors:
Deepak Limaye, Brian R. Mestan, Gideon N. Levinsky
Abstract: Methods and apparatus for managing and optimizing data storage devices that include non-volatile memory (NVM) are described. One such method involves deriving a hint for one or more logical block addresses (LBAs) of a storage device based on information received from a host device and/or physical characteristics of the storage device, such as LBAs that are invalidated together; grouping the LBAs into one or more clusters of LBAs based on the derived hint and a statistical analysis of the physical characteristics of the storage devices; allocating available physical block addresses (PBAs) in the storage device to one of the LBAs based on the one or more clusters of LBAs to achieve optimization of a data storage device.
Type:
Grant
Filed:
June 24, 2019
Date of Patent:
May 24, 2022
Assignee:
WESTERN DIGITAL TECHNOLOGIES, INC.
Inventors:
Ariel Navon, Alexander Bazarsky, Judah Gamliel Hahn, Karin Inbar, Rami Rom, Idan Alrod, Eran Sharon
Abstract: An example apparatus includes a hybrid memory system to couple to a host and a controller coupled to the hybrid memory system. The controller may be configured to assign a sensitivity to a command and cause the command to be selectively diverted to the hybrid memory system based, at least in part, on the assigned sensitivity.
Type:
Grant
Filed:
June 2, 2020
Date of Patent:
May 24, 2022
Assignee:
Micron Technology, Inc.
Inventors:
Danilo Caraccio, Emanuele Confalonieri, Marco Dallabora, Roberto Izzi, Paolo Amato, Daniele Balluchi, Luca Porzio
Abstract: Disclosed is a cache including a dataflow controller for transmitting first data to a first processor and receiving second data from the first processor, an external direct memory access (DMA) controller for receiving the first data from an external memory to transmit the first data to the dataflow controller and receiving the second data from the dataflow controller to transmit the second data to the external memory, a scratchpad memory for storing the first data or the second data transmitted between the dataflow controller and the external DMA controller, a compression/decompression device for compressing data to be transmitted from the scratchpad memory to the external memory and decompressing data transmitted from the external memory to the scratchpad memory, and a transfer state buffer for storing transfer state information associated with data transfer between the dataflow controller and the external DMA controller.
Type:
Grant
Filed:
December 11, 2020
Date of Patent:
May 24, 2022
Assignee:
ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors:
Jin Ho Han, Min-Seok Choi, Young-Su Kwon
Abstract: The present disclosure relates to a memory component for a System-on-Chip (SoC) structure including at least a memory array and at least a logic portion for interacting with the memory array and with the SoC structure wherein the memory component is a structurally independent semiconductor device coupled to and partially overlapping the SoC structure.
Abstract: An example apparatus comprises a hybrid memory system and a controller coupled to the hybrid memory system. The controller may be configured to cause data to be selectively stored in the hybrid memory system responsive to a determination that an exception involving the data has occurred.
Type:
Grant
Filed:
June 5, 2020
Date of Patent:
May 10, 2022
Assignee:
Micron Technology, Inc.
Inventors:
Danilo Caraccio, Emanuele Confalonieri, Marco Dallabora, Roberto Izzi, Paolo Amato, Daniele Balluchi, Luca Porzio
Abstract: This disclosure relates to increasing performance of database queries. A proxy server receives an input query string and a parameter value for a first parameter name in the query string. The proxy server determines a second parameter name based on the parameter value and different from the first parameter name. The proxy server then determines an output query string based on the input query string. The output query string comprises a filter clause with a field name and a second field value, the second field value of the output query string being based on the second parameter name. The proxy server finally sends the output query string to a database management system to cause the database management system to execute a database query using an execution plan based on the second parameter name in the output query string.
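A minimal Python sketch of the rewrite described above; the value-to-name mapping and the filter-clause shape are assumptions for illustration.

```python
# Illustrative assumptions: a static value-class table and a SQL-ish
# named-parameter syntax; the real proxy would derive these dynamically.

PARAM_CLASSES = {            # parameter value -> second parameter name
    "active": "status_common",
    "archived": "status_rare",
}

def rewrite(input_query, first_param, value):
    second_name = PARAM_CLASSES.get(value, "status_default")
    filter_clause = f"WHERE {first_param} = :{second_name}"
    return f"{input_query} {filter_clause}", {second_name: value}

# The DBMS caches one execution plan per distinct second parameter name,
# so frequent and rare values no longer share one unsuitable plan.
query, params = rewrite("SELECT * FROM orders", "status", "archived")
```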
Abstract: A Domain Name System (DNS) resolver node receives a first DNS query from a first client device. The resolver node determines that it cannot answer the query using its local cache so it performs a recursive query to obtain the answer. The answer is sent to the first client and stored in its local cache. The resolver node further transmits the answer to multiple other resolver nodes that are part of the same cluster so they can update their respective local cache with the information. Upon receiving a message from another resolver node that includes a set of resource record(s) not in its local cache, the resolver node stores that set of resource record(s) in its local cache so that it can answer subsequent requests for those resource record(s) locally.
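A hedged Python sketch of the cluster-wide cache fill above; the cache keying and peer transport are assumptions.

```python
# Illustrative assumptions: dict caches keyed by (name, type) and peer
# node objects reachable through cluster_peers.

def answer_query(node, qname, qtype):
    key = (qname, qtype)
    answer = node.cache.get(key)
    if answer is None:
        answer = node.recursive_resolve(qname, qtype)  # walk the hierarchy
        node.cache[key] = answer
        for peer in node.cluster_peers:                # push to the cluster
            peer.receive_records(key, answer)
    return answer

def receive_records(node, key, records):
    if key not in node.cache:       # fill gaps only; keep fresher local data
        node.cache[key] = records
```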
Abstract: In described examples, a coherent memory system includes a central processing unit (CPU) and first and second level caches. The memory system can include a pipeline for accessing data stored in one of the caches. Requestors can access the data stored in one of the caches by sending requests at a same time that can be arbitrated by the pipeline.
Type:
Grant
Filed:
May 24, 2020
Date of Patent:
May 3, 2022
Assignee:
Texas Instruments Incorporated
Inventors:
Abhijeet Ashok Chachad, David Matthew Thompson
Abstract: A multi-chip system and a cache processing method are provided. The multi-chip system includes multiple chips. Each chip includes multiple clusters, a crossbar interface, and a snoop system. Each cluster corresponds to a local cache. The crossbar interface is coupled to the clusters and a crossbar interface of another chip. The snoop system is coupled to the crossbar interface and performs unidirectional transmission with the crossbar interface. The snoop system includes a snoop table module and multiple trackers. The snoop table module includes a shared cache, which records a snoop table. Multiple trackers are coupled to the snoop table module, query the snoop table in the shared cache according to a memory access request initiated by one of clusters, and update the snoop table according to a query result. The snoop table corresponds to a storage structure of the local cache corresponding to the clusters in all chips.
Abstract: The invention relates to a system comprising a mobile device (1), a device (13b) for transmitting information, a device (15) hosting a device registry (14) and a wireless device (3a). The mobile device (1) comprises a receiver, a transmitter, storage means and a processor.
Type:
Grant
Filed:
January 13, 2017
Date of Patent:
April 19, 2022
Assignees:
Koninklijke KPN N.V., Nederlandse Organisatie Voor Toegepast-Natuurwetenschappelijk Onderzoek TNO
Abstract: Systems and methods are provided for providing an object platform for datasets. A definition of an object may be obtained. The object may be associated with information stored in one or more datasets. The information may be determined based at least in part on the definition of the object. The object may be stored in a cache such that the information associated with the object is also stored in the cache. One or more interfaces through which requests to perform one or more operations on the object are able to be submitted may be provided.
Type:
Grant
Filed:
June 3, 2020
Date of Patent:
April 12, 2022
Assignee:
Palantir Technologies Inc.
Inventors:
Rick Ducott, Aakash Goenka, Bianca Rahill-Marier, Tao Wei, Diogo Bonfim Moraes Morant De Holanda, Jack Grossman, Francis Screene, Subbanarasimhiah Harish, Jim Inoue, Jeremy Kong, Mark Elliot, Myles Scolnick, Quentin Spencer-Harper, Richard Niemi, Ragnar Vorel, Thomas Mcintyre, Thomas Powell, Andy Chen