Addressing Of Memory Level In Which Access To Desired Data Or Data Block Requires Associative Addressing Means, E.g., Cache, Etc. (EPO) Patents (Class 711/E12.017)

  • Patent number: 12182033
    Abstract: An address translation cache (ATC) is configured to store translation entries indicating mapping information between a virtual address and a physical address of a memory device. The ATC includes a plurality of flexible page group caches, a shared cache and a cache manager. Each flexible page group cache stores translation entries corresponding to a page size allocated to that flexible page group cache. The shared cache stores, regardless of page sizes, translation entries that are not stored in the plurality of flexible page group caches. The cache manager allocates a page size to each flexible page group cache, manages cache page information on the page sizes allocated to the plurality of flexible page group caches, and controls the plurality of flexible page group caches and the shared cache based on the cache page information.
    Type: Grant
    Filed: October 13, 2022
    Date of Patent: December 31, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Youngsuk Moon, Hyunwoo Kang, Jaegeun Park, Sangmuk Hwang
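For orientation, here is a minimal Python sketch of the lookup structure the abstract above describes: one small LRU cache per allocated page size plus a shared cache for everything else. Class names, capacities, and the plain LRU policy are illustrative assumptions, not details taken from the patent.

```python
# Illustrative sketch only, not the patented design: per-page-size group caches
# plus a shared fallback cache, with simple LRU replacement assumed throughout.
from collections import OrderedDict

class FlexibleATC:
    def __init__(self, group_page_sizes, group_capacity=4, shared_capacity=8):
        self.groups = {size: OrderedDict() for size in group_page_sizes}
        self.shared = OrderedDict()            # entries of any page size
        self.group_capacity = group_capacity
        self.shared_capacity = shared_capacity

    def _insert(self, cache, key, value, capacity):
        cache[key] = value
        cache.move_to_end(key)
        if len(cache) > capacity:
            cache.popitem(last=False)          # evict least recently used

    def insert(self, va, pa_base, page_size):
        key = (va // page_size) * page_size    # page-aligned virtual address
        if page_size in self.groups:
            self._insert(self.groups[page_size], key, pa_base, self.group_capacity)
        else:
            # No group cache allocated for this page size: use the shared cache.
            self._insert(self.shared, (key, page_size), pa_base, self.shared_capacity)

    def lookup(self, va):
        # Probe each flexible group with its own page-size mask, then the shared cache.
        for size, cache in self.groups.items():
            key = (va // size) * size
            if key in cache:
                cache.move_to_end(key)
                return cache[key] + (va - key)         # physical base + page offset
        for (key, size), pa_base in self.shared.items():
            if key <= va < key + size:
                return pa_base + (va - key)
        return None                                    # ATC miss

atc = FlexibleATC(group_page_sizes=[4096, 2 * 1024 * 1024])
atc.insert(va=0x7000, pa_base=0x90000, page_size=4096)
assert atc.lookup(0x7123) == 0x90123
```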
  • Patent number: 12164906
    Abstract: A modular microcode (uCode) patch method that supports runtime persistent updates, and an associated apparatus. The method enables BIOS uCode patches to be received during platform runtime operations and written to first and second uCode extension regions as uCode images for a firmware device layout that further includes a uCode base region in which a current uCode image is stored. Following a platform reset, the first and second uCode extension regions are inspected to determine if one or more valid uCode images newer than the current uCode image are present. If so, the newest uCode image is booted rather than the current uCode image. Following a successful boot, the newest uCode image is copied to the uCode base region to sync up the current uCode image to the newest version. In one aspect, received uCode images are written to the first and second uCode extension regions in an alternating manner to support roll-back.
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: December 10, 2024
    Assignee: Intel Corporation
    Inventors: Mohan J. Kumar, Sarathy Jayakumar, Chuan Song, Ruixia Li, Siyuan Fu, Jiaxin Wu, Lui He
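As a rough illustration of the alternating-slot update flow described above, the following Python sketch keeps a base image plus two extension slots, boots the newest valid image, and syncs the base region after a successful boot. The tuple-based image format, validity test, and method names are assumptions made for the sketch.

```python
# Illustrative sketch only: images are (version, payload) pairs, and "valid"
# simply means a slot is non-empty; the real format and validation differ.
class UCodeStore:
    def __init__(self, base_image):
        self.base = base_image           # current uCode image in the base region
        self.ext = [None, None]          # first and second extension regions
        self.next_slot = 0

    def write_update(self, image):
        # Alternate between the two extension regions so the previous
        # update is preserved and roll-back remains possible.
        self.ext[self.next_slot] = image
        self.next_slot ^= 1

    def select_boot_image(self):
        candidates = [self.base] + [img for img in self.ext if img is not None]
        return max(candidates, key=lambda img: img[0])   # newest version wins

    def on_successful_boot(self, booted_image):
        if booted_image[0] > self.base[0]:
            self.base = booted_image     # sync the base region to the newest image

store = UCodeStore(base_image=(10, b"ucode-v10"))
store.write_update((11, b"ucode-v11"))   # written to the first extension region
store.write_update((12, b"ucode-v12"))   # written to the second; v11 kept for roll-back
newest = store.select_boot_image()       # (12, b"ucode-v12")
store.on_successful_boot(newest)         # base region now holds version 12
```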
  • Patent number: 12166690
    Abstract: Aspects of the present disclosure relate to managing storage array resources. In embodiments, a request from a client machine is received by a storage array via a command-line path. Additionally, the consumption of storage array resources can be controlled. For instance, resource consumption control can include limiting an initialization of one or more microservices based on the request's related information.
    Type: Grant
    Filed: June 21, 2022
    Date of Patent: December 10, 2024
    Assignee: Dell Products L.P.
    Inventors: Suprava Das, Bathulwar Akash, Aditya Mattaparthi, Piyush Tibrewal
  • Patent number: 12164462
    Abstract: Systems and methods described herein may relate to data transactions involving a microsector architecture. Control circuitry may organize transactions to and from the microsector architecture to, for example, enable direct addressing transactions as well as batch transactions across multiple microsectors. A data path disposed between programmable logic circuitry of a column of microsectors and a column of row controllers may form a micro-network-on-chip used by a network-on-chip to interface with the programmable logic circuitry.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: December 10, 2024
    Assignee: ALTERA CORPORATION
    Inventors: Ilya K. Ganusov, Ashish Gupta, Chee Hak Teh, Sean R. Atsatt, Scott Jeremy Weber, Parivallal Kannan, Aman Gupta, Gary Brian Wallichs
  • Patent number: 12141118
    Abstract: Characteristics associated with a device are received from the device. Firmware for the device is generated based on the received characteristics.
    Type: Grant
    Filed: June 1, 2023
    Date of Patent: November 12, 2024
    Assignee: PURE STORAGE, INC.
    Inventors: Gordon James Coleman, Peter E. Kirkpatrick, Eric D. Seppanen
  • Patent number: 12135781
    Abstract: While a compiler compiles source code to create an executable binary, code is added into the compiled source code that, when executed, identifies and stores, in a metadata table, base and bounds information associated with memory allocations. Further code is added into the compiled source code that enables hardware to determine the safety of memory access requests during execution of the compiled source code by performing an out-of-bounds (OOB) check in hardware using the base and bounds information stored in the metadata table. This enables the identification and avoidance of unsafe memory operations during execution of the executable by a GPU.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: November 5, 2024
    Assignee: NVIDIA CORPORATION
    Inventors: Aamer Jaleel, Mohamed Tarek Bnziad Mohamed Hassan, Mark Stephenson
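The base-and-bounds bookkeeping described above can be pictured with a small Python sketch: allocations register their base and size in a metadata table, and each access is validated against that table. The table layout, identifiers, and exception type are illustrative assumptions; the patent performs the check in GPU hardware, not in software.

```python
# Illustrative software model of an out-of-bounds (OOB) check; the patent does
# this check in hardware using compiler-populated metadata.
class BoundsTable:
    def __init__(self):
        self.table = {}                        # allocation id -> (base, size)

    def record_alloc(self, alloc_id, base, size):
        self.table[alloc_id] = (base, size)    # emitted code records each allocation

    def check_access(self, alloc_id, addr, width):
        base, size = self.table[alloc_id]
        if not (base <= addr and addr + width <= base + size):
            raise MemoryError(f"out-of-bounds access at {hex(addr)}")

bt = BoundsTable()
bt.record_alloc(1, base=0x1000, size=256)
bt.check_access(1, 0x10F8, 8)       # fine: the last 8 bytes of the allocation
# bt.check_access(1, 0x1100, 4)     # would raise: one byte past the end
```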
  • Patent number: 12135757
    Abstract: A system and method for improving the delivery of information to a remote user by anticipating the information that might be requested and populating a cache with that information so that it may be more quickly retrieved when the remote user requests it. The cache may be located on one of the institution's servers, on the user's computing device, on an intermediate server such as a server in the cloud, or on a combination of these servers or devices. The embodiments disclosed herein apply when an unexpected event occurs or is predicted, such as a hurricane or other weather event, a financial event, or a personal event such as a car accident or a medical emergency.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: November 5, 2024
    Assignee: United Services Automobile Association (USAA)
    Inventors: Gunjan C. Vijayvergia, Anand Shah, Alan David Chase, Anil Sanghubattla, Andrew P. Jamison
  • Patent number: 12126996
    Abstract: Identification information indicates that a communication parameter to be provided in accordance with a Device Provisioning Protocol standard is a communication parameter that allows connection processing compliant with an Institute of Electrical and Electronics Engineers 802.11r standard. The identification information is set in an Authentication and Key Management field, and the communication parameter that allows connection processing compliant with the Institute of Electrical and Electronics Engineers 802.11r standard is provided.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: October 22, 2024
    Assignee: Canon Kabushiki Kaisha
    Inventor: Hideaki Tachibana
  • Patent number: 12105631
    Abstract: An apparatus comprises a processing device configured to receive a write request to write a given portion of data to a storage system comprising a multiple-instance write cache, the multiple-instance write cache comprising a first write cache instance that utilizes replica-based data protection and a second write cache instance that utilizes data striping-based data protection, and to determine a size of the given data portion and to compare the size of the given data portion to at least one size threshold. The processing device is also configured, responsive to a first comparison result, to write the given data portion to the first write cache instance. The processing device is further configured, responsive to a second comparison result different than the first comparison result, to write at least part of the given data portion to the second write cache instance.
    Type: Grant
    Filed: October 20, 2022
    Date of Patent: October 1, 2024
    Assignee: Dell Products L.P.
    Inventors: Yosef Shatsky, Doron Tal
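A minimal sketch of the size-based routing described above, assuming a single size threshold: small writes land in the replica-protected write cache instance and larger writes in the striping-protected one. The 64 KiB threshold and class layout are illustrative assumptions only.

```python
# Illustrative sketch; the threshold value and data structures are assumptions.
SIZE_THRESHOLD = 64 * 1024

class MultiInstanceWriteCache:
    def __init__(self):
        self.replica_instance = []     # replica-based data protection
        self.striped_instance = []     # data striping-based protection

    def write(self, data: bytes) -> str:
        if len(data) <= SIZE_THRESHOLD:        # first comparison result
            self.replica_instance.append(data)
            return "replica"
        self.striped_instance.append(data)     # second comparison result
        return "striped"

cache = MultiInstanceWriteCache()
assert cache.write(b"x" * 4096) == "replica"
assert cache.write(b"x" * (1024 * 1024)) == "striped"
```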
  • Patent number: 12105639
    Abstract: A cache system that includes a reverse cache and a main cache is disclosed. The reverse cache is configured to identify candidates for insertion into a main cache. The reverse cache stores entries such as fingerprints and index values, which are representations of or that identify data. When the entry has been accessed multiple times or is a candidate for promotion based on operation of the reverse cache, data corresponding to the entry is promoted to the main cache. The main cache is configured to evict entries using recency, frequency, and time-adjustments. The main cache and the reverse cache may be similarly configured with a recent list and a frequent list but operate differently.
    Type: Grant
    Filed: May 17, 2022
    Date of Patent: October 1, 2024
    Assignee: DELL PRODUCTS L.P.
    Inventor: Keyur B. Desai
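The promotion idea in the abstract above can be sketched as a counter-only reverse cache in front of an LRU main cache: an entry's data is only inserted into the main cache after its fingerprint has been seen a few times. The two-access promotion threshold and capacities are illustrative assumptions, and the patent's recency/frequency/time-adjusted eviction is simplified here to plain LRU.

```python
# Illustrative sketch only; promotion threshold and eviction policy are assumed.
from collections import OrderedDict

PROMOTE_AFTER = 2     # assumed promotion threshold for the sketch

class TwoStageCache:
    def __init__(self, main_capacity=8, reverse_capacity=32):
        self.main = OrderedDict()       # fingerprint -> data (LRU order)
        self.reverse = OrderedDict()    # fingerprint -> access count
        self.main_capacity = main_capacity
        self.reverse_capacity = reverse_capacity

    def access(self, fingerprint, load_data):
        if fingerprint in self.main:                  # main-cache hit
            self.main.move_to_end(fingerprint)
            return self.main[fingerprint]
        count = self.reverse.pop(fingerprint, 0) + 1  # track in the reverse cache
        self.reverse[fingerprint] = count
        if len(self.reverse) > self.reverse_capacity:
            self.reverse.popitem(last=False)
        if count >= PROMOTE_AFTER:                    # promote into the main cache
            del self.reverse[fingerprint]
            self.main[fingerprint] = load_data(fingerprint)
            if len(self.main) > self.main_capacity:
                self.main.popitem(last=False)
            return self.main[fingerprint]
        return load_data(fingerprint)                 # served uncached for now

cache = TwoStageCache()
cache.access("fp-42", lambda fp: b"block")   # first access: tracked only
cache.access("fp-42", lambda fp: b"block")   # second access: promoted
assert "fp-42" in cache.main
```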
  • Patent number: 12066935
    Abstract: A central processing unit (CPU) system including a CPU core can include an adaptive cache compressor, which is capable of monitoring a miss profile of a cache. The adaptive cache compressor can compare the miss profile to a miss threshold. Based on this comparison, the adaptive cache compressor can determine whether to enable compression of the cache.
    Type: Grant
    Filed: March 16, 2022
    Date of Patent: August 20, 2024
    Assignee: International Business Machines Corporation
    Inventors: Bulent Abali, Alper Buyuktosunoglu, Brian Robert Prasky, Deanna Postles Dunn Berger
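In spirit, the decision in the abstract above reduces to comparing an observed miss ratio against a threshold; a tiny Python sketch follows. The direction of the comparison (enable compression when misses are high, to gain effective capacity) and the counter-based interface are assumptions of the sketch.

```python
# Illustrative decision only; the patented compressor monitors a hardware miss
# profile rather than raw software counters.
def should_enable_compression(misses: int, accesses: int, miss_threshold: float) -> bool:
    if accesses == 0:
        return False
    return (misses / accesses) > miss_threshold

assert should_enable_compression(misses=300, accesses=1000, miss_threshold=0.25)
assert not should_enable_compression(misses=50, accesses=1000, miss_threshold=0.25)
```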
  • Patent number: 12066951
    Abstract: A computer system includes physical memory devices of different types that store randomly-accessible data in memory of the computer system. In one approach, access to memory in an address space is maintained by an operating system of the computer system. A virtual page is associated with a first memory type. A page table entry is generated to map a virtual address of the virtual page to a physical address in a first memory device of the first memory type. The page table entry is used by a memory management unit to store the virtual page at the physical address.
    Type: Grant
    Filed: October 12, 2022
    Date of Patent: August 20, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Samuel E. Bradshaw, Justin M. Eno, Sean Stephen Eilert, Shivasankar Gunasekaran, Hongyu Wang, Shivam Swami
  • Patent number: 12039200
    Abstract: Disclosed is a storage device which includes a nonvolatile memory device, a buffer memory, a port that is connected with an external device, and a storage controller. When a command received from the external device through the port corresponds to a first packet format, the storage controller accesses the nonvolatile memory device by using the buffer memory in response to the command. When the command received from the external device through the port corresponds to a second packet format, the storage controller accesses the buffer memory without accessing the nonvolatile memory device in response to the command.
    Type: Grant
    Filed: July 29, 2022
    Date of Patent: July 16, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Dingbang Mai, Seok-Jae Han
  • Patent number: 11989536
    Abstract: Methods, apparatus and computer software product for source code optimization are provided. In an exemplary embodiment, a first custom computing apparatus is used to optimize the execution of source code on a second computing apparatus. In this embodiment, the first custom computing apparatus contains a memory, a storage medium and at least one processor with at least one multi-stage execution unit. The second computing apparatus contains at least one local memory unit that allows for data reuse opportunities. The first custom computing apparatus optimizes the code for reduced communication execution on the second computing apparatus.
    Type: Grant
    Filed: July 28, 2021
    Date of Patent: May 21, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Muthu Manikandan Baskaran, Richard A. Lethin, Benoit J. Meister, Nicolas T. Vasilache
  • Patent number: 11977765
    Abstract: The functions of a mainframe environment are expanded by leveraging the functions of an open environment.
    Type: Grant
    Filed: December 28, 2022
    Date of Patent: May 7, 2024
    Assignee: Hitachi, Ltd.
    Inventors: Naoyuki Masuda, Ryusuke Ito, Kenichi Oyamada, Yuri Hiraiwa, Goro Kazama, Yunde Sun, Ryosuke Kodaira
  • Patent number: 11941250
    Abstract: A process includes determining a memory bandwidth of a processor subsystem corresponding to an execution of an application by the processor subsystem. The process includes determining an average memory latency corresponding to the execution of the application and determining an average occupancy of a miss status handling register queue associated with the execution of the application based on the memory bandwidth and the average memory latency. The process includes, based on the average occupancy of the miss status handling register queue and a capacity of the miss status handling register queue, generating data that represents a recommendation of an optimization to be applied to the application.
    Type: Grant
    Filed: May 6, 2022
    Date of Patent: March 26, 2024
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Sanyam Mehta
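The occupancy estimate described above is essentially Little's law applied to the miss status handling register (MSHR) queue: average occupancy ≈ miss rate × average latency. The sketch below works one example; the 64-byte line size, 32-entry queue, and 90% threshold are illustrative assumptions.

```python
# Illustrative arithmetic only: occupancy = request rate * average latency.
def avg_mshr_occupancy(miss_bandwidth_bytes_per_s, avg_latency_s, line_size=64):
    request_rate = miss_bandwidth_bytes_per_s / line_size   # misses per second
    return request_rate * avg_latency_s                     # outstanding misses

occupancy = avg_mshr_occupancy(20e9, 100e-9)   # 20 GB/s of misses, 100 ns latency
print(round(occupancy, 1))                     # ~31.3 entries on average
# If occupancy sits near the queue's capacity (say 32 entries), the tool could
# recommend an optimization that reduces memory traffic or exposes more
# memory-level parallelism.
if occupancy >= 0.9 * 32:
    print("recommend: reduce miss traffic or increase memory-level parallelism")
```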
  • Patent number: 11914525
    Abstract: In an example, an apparatus comprises a plurality of compute engines; and logic, at least partially including hardware logic, to detect a cache line conflict in a last-level cache (LLC) communicatively coupled to the plurality of compute engines; and implement context-based eviction policy to determine a cache way in the cache to evict in order to resolve the cache line conflict. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: February 14, 2023
    Date of Patent: February 27, 2024
    Assignee: INTEL CORPORATION
    Inventors: Neta Zmora, Eran Ben-Avi
  • Patent number: 11860787
    Abstract: Methods, devices, and systems for retrieving information based on cache miss prediction. A prediction that a cache lookup for the information will miss a cache is made based on a history table. The cache lookup for the information is performed based on the request. A main memory fetch for the information is begun before the cache lookup completes, based on the prediction that the cache lookup for the information will miss the cache. In some implementations, the prediction includes comparing a first set of bits stored in the history table with a second set of bits stored in the history table. In some implementations, the prediction includes comparing at least a portion of an address of the request for the information with a set of bits in the history table.
    Type: Grant
    Filed: September 30, 2021
    Date of Patent: January 2, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Ciji Isen, Paul J. Moyer
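A miss-prediction history table of the general kind described above can be sketched with saturating counters indexed by address bits; when a miss is predicted, the main-memory fetch would be started in parallel with the cache lookup rather than after it. The table size, 2-bit counters, and indexing are assumptions for illustration, not the patent's bit-comparison scheme.

```python
# Illustrative predictor only; the patent compares stored bit patterns in its
# history table, which is simplified here to saturating counters.
class MissPredictor:
    def __init__(self, index_bits=10):
        self.size = 1 << index_bits
        self.counters = [0] * self.size        # 2-bit saturating counters

    def _index(self, addr):
        return (addr >> 6) % self.size         # drop the 64-byte line offset

    def predict_miss(self, addr):
        return self.counters[self._index(addr)] >= 2

    def update(self, addr, was_miss):
        i = self._index(addr)
        self.counters[i] = min(3, self.counters[i] + 1) if was_miss \
            else max(0, self.counters[i] - 1)

pred = MissPredictor()
pred.update(0x4000, was_miss=True)
pred.update(0x4000, was_miss=True)
assert pred.predict_miss(0x4000)   # begin the DRAM fetch before the lookup finishes
```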
  • Patent number: 11842762
    Abstract: Disclosed is a memory system that has a memory controller and may have a memory component. The memory component may be a dynamic random access memory (DRAM). The memory controller is connectable to the memory component. The memory component has at least one data row and at least one tag row different from and associated with the at least one data row. The memory system is to implement a cache having multiple ways to hold a data group. The memory controller is operable in each of a plurality of operating modes. The operating modes include a first operating mode and a second operating mode. The first operating mode and the second operating mode have differing addressing and timing for accessing the data group. The memory controller has cache read logic that sends a cache read command, cache results logic that receives a response from the memory component, and cache fetch logic.
    Type: Grant
    Filed: March 16, 2020
    Date of Patent: December 12, 2023
    Assignee: Rambus Inc.
    Inventors: Frederick Ware, Thomas Vogelsang, Michael Raymond Miller, Collins Williams
  • Patent number: 11842059
    Abstract: A method includes accessing a first memory component of a memory sub-system via a first interface, accessing a second memory component of the memory sub-system via a second interface, and transferring data between the first memory component and the second memory component via the first interface. The method further includes initially writing data in the first memory component via a first address window and accessing data in the second memory component via a second address window in response to caching the data in the first memory component to the second memory component, wherein caching the data in the first memory component to the second memory component includes changing an address for the data from the first address window to the second address window.
    Type: Grant
    Filed: September 1, 2021
    Date of Patent: December 12, 2023
    Assignee: Micron Technology, Inc.
    Inventor: Robert M. Walker
  • Patent number: 11775434
    Abstract: The disclosed computer-implemented method may include receiving, from a host via a cache-coherent interconnect, a request to access an address of a coherent memory space of the host. When the request is to write data, the computer-implemented method may include (1) performing, after receiving the data, a post-processing operation on the data to generate post-processed data and (2) writing the post-processed data to a physical address of a device-attached physical memory mapped to the address. When the request is to read data, the computer-implemented method may include (1) reading the data from the physical address of a device-attached physical memory mapped to the address, (2) performing, before responding to the request, a pre-processing operation on the data to generate pre-processed data, and (3) returning the pre-processed data to the external host via the cache-coherent interconnect. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: April 20, 2021
    Date of Patent: October 3, 2023
    Assignee: Meta Platforms, Inc.
    Inventors: Narsing Krishna Vijayrao, Christian Markus Petersen
  • Patent number: 11748266
    Abstract: Embodiments are for special tracking pool enhancement for core L1 address invalidates. An invalidate request is designated to fill an entry in a queue in a local cache of a processor core, the queue including a first allocation associated with processing any type of invalidate request and a second allocation associated with processing an invalidate request not requiring a response in order for a controller to be made available, the entry being in the second allocation. Responsive to designating the invalidate request to fill the entry in the queue in the local cache, a state of the controller that made the invalidate request is changed to available based at least in part on the entry being in the second allocation.
    Type: Grant
    Filed: March 4, 2022
    Date of Patent: September 5, 2023
    Assignee: International Business Machines Corporation
    Inventors: Deanna Postles Dunn Berger, Gregory William Alexander, Richard Joseph Branciforte, Aaron Tsai, Markus Kaltenbach
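The two-allocation queue in the abstract above can be pictured as two bounded pools: requests that need no response free their controller as soon as they are queued, while everything else holds the controller as usual. The slot counts, dictionary-based controller state, and return values are assumptions of the sketch.

```python
# Illustrative sketch only; allocation sizes and state handling are assumptions.
class InvalidateQueue:
    def __init__(self, general_slots=4, no_response_slots=4):
        self.general = []          # first allocation: any type of invalidate
        self.no_response = []      # second allocation: no response required
        self.general_slots = general_slots
        self.no_response_slots = no_response_slots

    def enqueue(self, request, controller):
        if not request["needs_response"] and len(self.no_response) < self.no_response_slots:
            self.no_response.append(request)
            controller["state"] = "available"   # freed as soon as the entry is queued
            return True
        if len(self.general) < self.general_slots:
            self.general.append(request)
            controller["state"] = "busy"        # held until the invalidate completes
            return True
        return False                            # both allocations full; retry later

ctrl = {"state": "busy"}
q = InvalidateQueue()
q.enqueue({"addr": 0x80, "needs_response": False}, ctrl)
assert ctrl["state"] == "available"
```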
  • Patent number: 11709619
    Abstract: A data processing method includes receiving a message related to performance of a storage device, the message including an indicator value regarding the performance in a first time period, and a timestamp associated with the first time period. A status record of the storage device, including the number of received indicator values in a second time period including the first time period, is determined based on the timestamp, wherein the number of the received indicator values is less than a threshold number and can be updated based on the indicator value. The performance in the second time period can be determined based on the indicator value and the received indicator values in response to determining that the updated number of the received indicator values reaches the threshold number. Thus, the performance of the storage device can be quickly and accurately determined, and the consumption of computing resources is reduced.
    Type: Grant
    Filed: December 15, 2021
    Date of Patent: July 25, 2023
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Shijie Zhao, Colin Yuanfei Cai, Qirong Wang, Bei Gao
  • Patent number: 11681623
    Abstract: A pre-read data caching method and apparatus, a device, and a storage medium, the method including: receiving a read command for a target file; if it is determined that there is target pre-read data of the target file in a pre-read queue, moving the target pre-read data from the pre-read queue into a secondary cache queue; reading the target pre-read data in the secondary cache queue; and, after reading is complete, moving the target pre-read data from the secondary cache queue into a reset queue, the invalidation priority level of the pre-read queue being the lowest.
    Type: Grant
    Filed: January 23, 2021
    Date of Patent: June 20, 2023
    Assignee: GUANGDONG INSPUR SMART COMPUTING TECHNOLOGY CO., LTD.
    Inventors: Shuaiyang Wang, Wenpeng Li, Duan Zhang
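The three-queue flow in the abstract above, sketched minimally: pre-read data waits in a pre-read queue (lowest invalidation priority), moves to a secondary cache queue when a read targets it, and is parked in a reset queue once the read completes. The queue types and keying by file offset are assumptions of the sketch.

```python
# Illustrative sketch; real queue management and invalidation are more involved.
from collections import OrderedDict

class PreReadCache:
    def __init__(self):
        self.pre_read = OrderedDict()    # populated speculatively; invalidated first
        self.secondary = OrderedDict()   # data currently being read
        self.reset = OrderedDict()       # data whose read has completed

    def populate(self, offset, data):
        self.pre_read[offset] = data     # filled by the background pre-reader

    def read(self, offset):
        if offset in self.pre_read:      # target pre-read data found
            self.secondary[offset] = self.pre_read.pop(offset)
        data = self.secondary.get(offset)
        if data is not None:
            self.reset[offset] = self.secondary.pop(offset)   # after reading
        return data

cache = PreReadCache()
cache.populate(0, b"prefetched block")
assert cache.read(0) == b"prefetched block"
assert 0 in cache.reset and 0 not in cache.pre_read
```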
  • Patent number: 11657002
    Abstract: Systems, methods and apparatuses to accelerate accessing of borrowed memory over a network connection are described. For example, a memory management unit (MMU) of a computing device can be configured to be connected both to the random access memory over a memory bus and to a computer network via a communication device. The computing device can borrow an amount of memory from a remote device over a network connection using the communication device; and applications running in the computing device can use virtual memory addresses mapped to the borrowed memory. When a virtual address mapped to the borrowed memory is used, the MMU translates the virtual address into a physical address and instructs the communication device to access the borrowed memory.
    Type: Grant
    Filed: July 14, 2021
    Date of Patent: May 23, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Samuel E. Bradshaw, Ameen D. Akel, Kenneth Marion Curewitz, Sean Stephen Eilert, Dmitri Yudanov
  • Patent number: 11615033
    Abstract: Systems, apparatuses, and methods for performing efficient translation lookaside buffer (TLB) invalidation operations for splintered pages are described. When a TLB receives an invalidation request for a specified translation context, and the invalidation request maps to an entry with a relatively large page size, the TLB does not know if there are multiple translation entries stored in the TLB for smaller splintered pages of the relatively large page. The TLB tracks whether or not splintered pages for each translation context have been installed. If a TLB invalidate (TLBI) request is received, and splintered pages have not been installed, no searches are needed for splintered pages. To refresh the sticky bits, whenever a full TLB search is performed, the TLB rescans for splintered pages for other translation contexts. If no splintered pages are found, the sticky bit can be cleared and the number of full TLBI searches is reduced.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: March 28, 2023
    Assignee: Apple Inc.
    Inventors: John D. Pape, Brian R. Mestan, Peter G. Soderquist
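The bookkeeping described above amounts to one "splintered pages installed" bit per translation context: a TLB invalidate only triggers the extra search for smaller splintered entries when that bit is set, and a full search that finds none clears it. The sketch below assumes that structure; the naming and page-size comparison are illustrative.

```python
# Illustrative sketch of the per-context sticky bit; not the patent's TLB logic.
class SplinterTracker:
    def __init__(self):
        self.sticky = {}                        # translation context -> bool

    def on_install(self, context, page_size, parent_page_size):
        if page_size < parent_page_size:        # a splinter of a larger mapping
            self.sticky[context] = True

    def tlbi_needs_splinter_search(self, context):
        return self.sticky.get(context, False)  # skip the search if never set

    def on_full_search(self, context, splinters_found):
        self.sticky[context] = splinters_found  # refresh: clear when none remain

t = SplinterTracker()
assert not t.tlbi_needs_splinter_search("ctx0")        # nothing installed yet
t.on_install("ctx0", page_size=4096, parent_page_size=2 * 1024 * 1024)
assert t.tlbi_needs_splinter_search("ctx0")            # must scan for splinters
t.on_full_search("ctx0", splinters_found=False)
assert not t.tlbi_needs_splinter_search("ctx0")        # sticky bit cleared
```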
  • Patent number: 11604733
    Abstract: An apparatus has processing circuitry to perform data processing, at least one architectural register to store at least one partition identifier selection value which is programmable by software processed by the processing circuitry; a set-associative cache comprising a plurality of sets each comprising a plurality of ways; and partition identifier selecting circuitry to select, based on the at least one partition identifier selection value stored in the at least one architectural register, a selected partition identifier to be specified by a cache access request for accessing the set-associative cache.
    Type: Grant
    Filed: November 1, 2021
    Date of Patent: March 14, 2023
    Assignee: Arm Limited
    Inventor: Steven Douglas Krueger
  • Patent number: 11550620
    Abstract: Apparatuses and methods are disclosed for performing data processing operations in main processing circuitry and delegating certain tasks to auxiliary processing circuitry. User-specified instructions executed by the main processing circuitry comprise a task dispatch specification specifying an indication of the auxiliary processing circuitry and multiple data words defining a delegated task comprising at least one virtual address indicator. In response to the task dispatch specification, the main processing circuitry performs virtual-to-physical address translation with respect to the at least one virtual address indicator to derive at least one physical address indicator, and issues a task dispatch memory write transaction to the auxiliary processing circuitry comprising the indication of the auxiliary processing circuitry and the multiple data words, wherein the at least one virtual address indicator in the multiple data words is substituted by the at least one physical address indicator.
    Type: Grant
    Filed: March 3, 2021
    Date of Patent: January 10, 2023
    Assignee: Arm Limited
    Inventors: Håkan Lars-Göran Persson, Frederic Claude Marie Piry, Matthew Lucien Evans, Albin Pierrick Tonnerre
  • Patent number: 11533580
    Abstract: A method includes determining a device location of an electronic device, and obtaining a content item to be output for display by the electronic device based on the device location, wherein the content item comprises coarse content location information and fine content location information. The method also includes determining an anchor in a physical environment based on the content item, determining a content position and a content orientation for the content item relative to the anchor based on the fine content location information, and displaying a representation of the content item using the electronic device using the content position and the content orientation.
    Type: Grant
    Filed: April 29, 2020
    Date of Patent: December 20, 2022
    Assignee: APPLE INC.
    Inventors: David W. Padgett, Christopher D. Fu, Scott G. Wade, Paul Ewers, Ioana Negoita, Thomas G. Salter, Dhruv Aditya Govil, Dimitris Ladopoulos
  • Patent number: 11513965
    Abstract: A high bandwidth memory system. In some embodiments, the system includes: a memory stack having a plurality of memory dies and eight 128-bit channels; and a logic die, the memory dies being stacked on, and connected to, the logic die; wherein the logic die may be configured to operate a first channel of the 128-bit channels in: a first mode, in which a first 64 bits operate in pseudo-channel mode, and a second 64 bits operate as two 32-bit fine-grain channels, or a second mode, in which the first 64 bits operate as two 32-bit fine-grain channels, and the second 64 bits operate as two 32-bit fine-grain channels.
    Type: Grant
    Filed: January 22, 2021
    Date of Patent: November 29, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Krishna T. Malladi, Mu-Tien Chang, Dimin Niu, Hongzhong Zheng
  • Patent number: 11509740
    Abstract: Disclosed herein are computer-implemented methods of cache key generation that include receiving from a user a request for content; wherein the request comprises one or more of opening a browser software tab or window, launching a software application, or activating a hyperlink; wherein the request causes an electronic communications network connection to be established and/or an HTTP request to be made; and wherein the surrogate passes the request to an origin.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: November 22, 2022
    Assignee: Cloudinary Ltd.
    Inventors: Colin Bendell, Itai Benari
  • Patent number: 11477188
    Abstract: Methods and systems for injection of tokens or certificates for managed application communication are described. A computing device may intercept a request from an application executable on the computing device, the request being to access a remote resource. The computing device may modify future network communications between the computing device and the remote resource to include a token or a client certificate, where the token or the client certificate is an identifier that enables the future network communications to be routed to the remote resource for a given computing session without use of data from the remote resource or data indicative of a connection of the remote resource in which to receive the future network communications. The computing device may send the future network communications to the remote resource to enable action to be taken on behalf of the computing device in response to receipt of the future network communications.
    Type: Grant
    Filed: July 1, 2020
    Date of Patent: October 18, 2022
    Assignee: Citrix Systems, Inc.
    Inventor: Thierry Duchastel
  • Patent number: 11461252
    Abstract: Disclosed herein is a redundancy resource comparator for a bus architecture of a memory device, for comparing an address signal received from an address signal bus with a redundancy address stored in a latch of the memory device. A corresponding bus architecture and comparison method are also disclosed.
    Type: Grant
    Filed: June 3, 2021
    Date of Patent: October 4, 2022
    Assignee: SK hynix Inc.
    Inventor: Simone Mazzucchelli
  • Patent number: 11436107
    Abstract: Examples described herein relate to a method, a system, and a non-transitory machine-readable medium for restoring a computing resource. The method may include determining whether the computing resource is required to be restored on a recovery node using a backup of the computing resource stored in a backup storage node. A resource restore operation may be triggered on the recovery node in response to determining that the computing resource is required to be restored. The resource restore operation includes copying a subset of the objects from the backup to the recovery node to form, from the subset of objects, a partial filesystem instance of the computing resource on the recovery node that is operable as a restored computing resource on the recovery node.
    Type: Grant
    Filed: March 18, 2021
    Date of Patent: September 6, 2022
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Syama Sundararao Nadiminti
  • Patent number: 11431765
    Abstract: A session migration-based scheduling method, where the method includes: receiving a service request from a terminal, where the service request is used to obtain target content required by the terminal; querying a target Internet Protocol (IP) address in a database based on the service request, where the target IP address is an IP address of a server in which the target content is located; determining the target IP address based on a candidate IP address fed back by the database; if the target IP address is different from an IP address of the first media server, determining, by the first media server, that the first media server is missing the target content; and sending the service request to a second media server, where an IP address of the second media server is the target IP address.
    Type: Grant
    Filed: August 20, 2019
    Date of Patent: August 30, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Jinwen Yang, Like Jiang, Xiaojun Gu, Zhihang Lu, Yang Cao
  • Patent number: 11397690
    Abstract: A virtualized cache implementation solution, where a memory of a virtual machine stores cache metadata. The cache metadata includes a one-to-one mapping relationship between virtual addresses and first physical addresses. After an operation request that is delivered by the virtual machine and that includes a first virtual address is obtained, when the cache metadata includes a target first physical address corresponding to the first virtual address, a target second physical address corresponding to the target first physical address is searched for based on preconfigured correspondences between the first physical addresses and second physical addresses, and data is read or written from or to a location indicated by the target second physical address.
    Type: Grant
    Filed: December 8, 2020
    Date of Patent: July 26, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Lina Lu, Xian Chen
  • Patent number: 11392298
    Abstract: Examples may include techniques to control an insertion ratio or rate for a cache. Examples include comparing cache miss ratios for different time intervals or windows for a cache to determine whether to adjust a cache insertion ratio that is based on a ratio of cache misses to cache insertions.
    Type: Grant
    Filed: November 16, 2020
    Date of Patent: July 19, 2022
    Assignee: Intel Corporation
    Inventors: Yipeng Wang, Ren Wang, Sameh Gobriel, Tsung-Yuan C. Tai
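As a rough sketch of the window-to-window comparison described above: at the end of each interval the miss ratio is compared with the previous interval's, and the insertion ratio is nudged accordingly. Treating the insertion ratio as "fraction of misses that insert", along with the step size, bounds, and adjustment direction, are assumptions of the sketch.

```python
# Illustrative controller; step size, bounds, and the adjustment rule are assumed.
class InsertionController:
    def __init__(self, insert_ratio=1.0, step=0.1):
        self.insert_ratio = insert_ratio    # fraction of cache misses that insert
        self.step = step
        self.prev_miss_ratio = None

    def end_window(self, misses, accesses):
        miss_ratio = misses / max(accesses, 1)
        if self.prev_miss_ratio is not None:
            if miss_ratio > self.prev_miss_ratio:       # misses rose: insert more
                self.insert_ratio = min(1.0, self.insert_ratio + self.step)
            elif miss_ratio < self.prev_miss_ratio:     # misses fell: insert less
                self.insert_ratio = max(0.1, self.insert_ratio - self.step)
        self.prev_miss_ratio = miss_ratio
        return self.insert_ratio

ctrl = InsertionController(insert_ratio=0.5)
ctrl.end_window(misses=100, accesses=1000)          # first window: baseline only
print(ctrl.end_window(misses=200, accesses=1000))   # 0.6: miss ratio increased
```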
  • Patent number: 11379367
    Abstract: Methods, systems, and devices for read operations for regions of a memory device are described. In some examples, a memory device may include a first cache for storing mappings between logical addresses and physical addresses of the memory device, and a second cache for storing indices associated with entries removed from the first cache. The memory device may include a controller configured to load mappings to the first cache upon receiving read commands. When the first cache is full, and when the memory device receives a read command, the controller may remove an entry from the first cache and may store an index associated with the removed entry to the second cache. The controller may then transmit a mapping associated with the index to a host device for use in an HPB operation.
    Type: Grant
    Filed: November 19, 2020
    Date of Patent: July 5, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Nicola Colella, Antonino Pollio, Hua Tan
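The two caches described above can be sketched as a bounded logical-to-physical mapping cache plus a set of evicted region indices, with the evicted mapping handed to the host for later host-assisted (HPB) reads. The callback interface, region-index keying, and capacities are assumptions of the sketch.

```python
# Illustrative sketch only; the real controller manages NAND mapping tables.
from collections import OrderedDict

class L2PCaches:
    def __init__(self, capacity, send_to_host):
        self.mappings = OrderedDict()     # first cache: region index -> physical addr
        self.evicted_indices = set()      # second cache: indices of removed entries
        self.capacity = capacity
        self.send_to_host = send_to_host  # transmits a mapping for HPB use

    def load_mapping(self, region_index, physical_addr):
        self.evicted_indices.discard(region_index)
        if len(self.mappings) >= self.capacity:           # first cache is full
            old_index, old_mapping = self.mappings.popitem(last=False)
            self.evicted_indices.add(old_index)           # remember what was evicted
            self.send_to_host(old_index, old_mapping)     # host keeps the mapping
        self.mappings[region_index] = physical_addr

sent = []
caches = L2PCaches(capacity=2, send_to_host=lambda i, m: sent.append((i, m)))
caches.load_mapping(0, 0xA000)
caches.load_mapping(1, 0xB000)
caches.load_mapping(2, 0xC000)            # evicts region 0, sends it to the host
assert sent == [(0, 0xA000)] and 0 in caches.evicted_indices
```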
  • Patent number: 11340822
    Abstract: A method includes obtaining data from a plurality of data sources associated with an n-gram indexing data structure and storing at least a portion of the obtained data in a first storage, the stored data comprising one or more n-gram strings. The method also includes estimating frequencies of occurrence of respective ones of the n-gram strings in the stored data, the estimated frequency of occurrence of a given n-gram string being based at least in part on a size of a given n-gram index in the n-gram indexing data structure corresponding to the given n-gram string. The method further includes, in response to detecting one or more designated conditions, selecting a portion of the stored data based at least in part on the estimated frequencies and moving the selected portion of the stored data from the first storage to a second storage having different read and write access times.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: May 24, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Sashka T. Davis, Kevin J. Arunski
  • Patent number: 10949117
    Abstract: The present disclosure includes apparatuses and methods related to direct data transfer in memory. An example apparatus can include a first number of memory devices coupled to a host via a respective first number of ports and a second number of memory devices coupled to the first number of memory devices via a respective second number of ports, wherein the first number of memory devices and the second number of memory devices are configured to transfer data based on a first portion of a command including instructions to read the data from the first number of memory devices and send the data directly to the second number of memory devices, and a second portion of the command that includes instructions to write the data to the second number of memory devices.
    Type: Grant
    Filed: September 24, 2018
    Date of Patent: March 16, 2021
    Assignee: Micron Technology, Inc.
    Inventor: Frank F. Ross
  • Patent number: 10713167
    Abstract: An information processing apparatus includes a first memory and a processor coupled to the first memory. The processor is configured to acquire a first address in the first memory, at which an instruction included in a target program is stored. The processor is configured to simulate access to a second memory, such as a cache memory, corresponding to an access request for access to the first address on a basis of configuration information of the second memory. The processor is configured to generate first information, such as cache profile information, indicating whether the access to the second memory regarding the instruction is a hit or miss. The processor may be configured to acquire a number of cache misses for each of a plurality of pieces of arrangement information, and select a piece of arrangement information where the number of cache misses is smallest.
    Type: Grant
    Filed: July 30, 2018
    Date of Patent: July 14, 2020
    Assignee: FUJITSU LIMITED
    Inventor: Masaki Arai
  • Patent number: 10671387
    Abstract: Embodiments relate to vector memory access instructions for big-endian (BE) element ordered computer code and little-endian (LE) element ordered computer code. An aspect includes determining a mode of a computer system comprising one of a BE mode and an LE mode. Another aspect includes determining a code type comprising one of BE code and LE code. Another aspect includes determining a data type of data in a main memory that is associated with the object code comprising one of BE data and LE data. Another aspect includes based on the mode, code type, and data type, inserting a memory access instruction into the object code to perform a memory access associated with the vector in the object code, such that the memory access instruction performs element ordering of elements of the vector, and data ordering within the elements of the vector, in accordance with the determined mode, code type, and data type.
    Type: Grant
    Filed: June 10, 2014
    Date of Patent: June 2, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Brett Olsson
  • Patent number: 10635614
    Abstract: An embedded system includes a program to be executed. The program is divided into overlays. The embedded system includes a processor configured to request one of the overlays. The requested overlay includes a segment of the program to be executed by the processor. The embedded system also includes a first level memory device coupled to the processor. The first level memory device stores less than all of the overlays of the program. The embedded system further includes a memory management unit coupled to the processor and the first level memory device. The memory management unit is configured to determine, based on a logical address provided by the processor, whether the requested overlay is stored in the first level memory device. The memory management unit is additionally configured to convert the logical address to a physical address when the requested overlay is stored in the first level memory device. The physical address points to the requested overlay.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: April 28, 2020
    Assignee: Macronix International Co., Ltd.
    Inventor: Yi Chun Liu
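A bare-bones model of the overlay lookup described above: the memory management unit keeps a table of overlays resident in first-level memory and, on a hit, converts the processor's logical address to a physical one; a miss means the overlay must first be loaded. The fixed overlay size and dictionary table are assumptions of the sketch.

```python
# Illustrative model only; a real MMU does this in hardware.
OVERLAY_SIZE = 0x1000    # assumed overlay granularity for the sketch

class OverlayMMU:
    def __init__(self):
        self.resident = {}   # overlay number -> physical base in first-level memory

    def translate(self, logical_addr):
        overlay, offset = divmod(logical_addr, OVERLAY_SIZE)
        if overlay in self.resident:                  # overlay is already loaded
            return self.resident[overlay] + offset    # physical address
        return None          # miss: load the overlay from backing storage first

mmu = OverlayMMU()
mmu.resident[3] = 0x20000
assert mmu.translate(3 * OVERLAY_SIZE + 0x10) == 0x20010
assert mmu.translate(5 * OVERLAY_SIZE) is None
```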
  • Patent number: 10387157
    Abstract: An instruction set conversion system and method is provided, which can convert guest instructions to host instructions for processor core execution. Through configuration, instruction sets supported by the processor core are easily expanded. A method for real-time conversion between host instruction addresses and guest instruction addresses is also provided, such that the processor core can directly read out the host instructions from a higher level cache, reducing the depth of a pipeline.
    Type: Grant
    Filed: November 26, 2014
    Date of Patent: August 20, 2019
    Assignee: SHANGHAI XINHAO MICROELECTRONICS CO. LTD.
    Inventor: Kenneth Chenghao Lin
  • Patent number: 10089196
    Abstract: A method for processing return entities associated with multiple requests in a single ISR (Interrupt Service Routine) thread, performed by one core of a processing unit of a host device, is introduced. Entities are removed from a queue, which are associated with commands issued to a storage device, and the removed entities are processed until a condition is satisfied.
    Type: Grant
    Filed: July 13, 2016
    Date of Patent: October 2, 2018
    Assignee: SHANNON SYSTEMS LTD.
    Inventors: Zhen Zhou, Xueshi Yang
  • Patent number: 9990207
    Abstract: A semiconductor device with improved operating speed is provided. A semiconductor device including a memory circuit has a function of storing a start-up routine in the memory circuit and executing the start-up routine, a function of operating the memory circuit as a buffer memory device after executing the start-up routine, and a function of loading the start-up routine into the memory circuit from outside before the semiconductor device is powered off.
    Type: Grant
    Filed: February 3, 2015
    Date of Patent: June 5, 2018
    Assignee: Semiconductor Energy Laboratory Co., Ltd.
    Inventor: Yoshiyuki Kurokawa
  • Patent number: 9792989
    Abstract: According to one embodiment, a memory system includes a nonvolatile memory, a command managing unit, a command issuing unit, a data control unit and a command monitoring unit. The command issuing unit issues a command received by the command managing unit to the nonvolatile memory. The data control unit controls a reading or writing of data to the nonvolatile memory. The command monitoring unit monitors the command managing unit and outputs a receipt signal to the data control unit when the command managing unit receives the command. The data control unit interrupts the reading or writing when receiving the receipt signal, issues the command from the command issuing unit to the nonvolatile memory, and resumes the reading or writing after issuing the command.
    Type: Grant
    Filed: May 17, 2013
    Date of Patent: October 17, 2017
    Assignee: TOSHIBA MEMORY CORPORATION
    Inventor: Tatsuhiro Suzumura
  • Patent number: 9600361
    Abstract: An aspect includes receiving a fetch request for a data block at a cache memory system that includes cache memory that is partitioned into a plurality of cache data ways including a cache data way that contains the data block. The data block is fetched and it is determined whether the in-line ECC checking and correcting should be bypassed. The determining is based on a bypass indicator corresponding to the cache data way. Based on determining that in-line ECC checking and correcting should be bypassed, returning the fetched data block to the requestor and performing an ECC process for the fetched data block subsequent to returning the fetched data block to the requestor. Based on determining that in-line ECC checking and correcting should not be bypassed, performing the ECC process for the fetched data block and returning the fetched data block to the requestor subsequent to performing the ECC process.
    Type: Grant
    Filed: August 12, 2015
    Date of Patent: March 21, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael F. Fee, Pak-kin Mak, Arthur J. O'Neill, Jr., Deanna Postles Dunn Berger
  • Patent number: 9600360
    Abstract: An aspect includes receiving a fetch request for a data block at a cache memory system that includes cache memory that is partitioned into a plurality of cache data ways including a cache data way that contains the data block. The data block is fetched and it is determined whether the in-line ECC checking and correcting should be bypassed. The determining is based on a bypass indicator corresponding to the cache data way. Based on determining that in-line ECC checking and correcting should be bypassed, returning the fetched data block to the requestor and performing an ECC process for the fetched data block subsequent to returning the fetched data block to the requestor. Based on determining that in-line ECC checking and correcting should not be bypassed, performing the ECC process for the fetched data block and returning the fetched data block to the requestor subsequent to performing the ECC process.
    Type: Grant
    Filed: November 21, 2014
    Date of Patent: March 21, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael F. Fee, Pak-kin Mak, Arthur J. O'Neill, Jr., Deanna Postles Dunn Berger
  • Patent number: 9384091
    Abstract: A memory 10 stores a data block comprising a plurality of data values DV. An error code, such as an error correction code ECC, is associated with the memory and has a value dependent upon the plurality of data values which form the data block stored within the memory. If a partial write is performed on a data block, then the ECC information becomes invalid and is marked with an ECC_invalid flag. The intent is to avoid the need to read all data values to compute the ECC, thus saving time and energy. The memory may be a cache line 28 within a level 1 cache memory 10. Memory scrub control circuitry 38 performs periodic memory scrub operations which trigger flushing of partially written cache lines back to main memory.
    Type: Grant
    Filed: May 21, 2014
    Date of Patent: July 5, 2016
    Assignee: ARM Limited
    Inventor: Luc Orion
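The ECC_invalid mechanism in the abstract above can be modelled very simply: a partial write updates only the touched bytes and marks the line's ECC invalid instead of recomputing it, and a periodic scrub flushes any such line back to main memory. The checksum stand-in, class layout, and write-back callback are assumptions of the sketch, not the patented circuitry.

```python
# Illustrative model only; the stand-in checksum is not a real error-correcting code.
def compute_ecc(data):
    return sum(data) & 0xFF          # placeholder check value

class CacheLine:
    def __init__(self, data):
        self.data = bytearray(data)
        self.ecc = compute_ecc(self.data)
        self.ecc_invalid = False

    def partial_write(self, offset, value):
        self.data[offset] = value
        self.ecc_invalid = True      # skip reading every data value to recompute ECC

def scrub(lines, write_back):
    for line in lines:
        if line.ecc_invalid:         # periodic scrub flushes partially written lines
            write_back(bytes(line.data))
            line.ecc = compute_ecc(line.data)
            line.ecc_invalid = False

line = CacheLine(b"\x00" * 64)
line.partial_write(3, 0xAB)
flushed = []
scrub([line], flushed.append)
assert flushed and not line.ecc_invalid
```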