Patents Examined by Mohammad S Hasan
  • Patent number: 11748251
    Abstract: Embodiments of the present disclosure include systems and methods for storing tensors in memory based on depth. In some embodiments, for each of a plurality of sets of elements in a three-dimensional (3D) matrix, a position is determined along a height axis and width axis of the 3D matrix. At the determined position, a set of elements is identified along a depth axis of the 3D matrix. The set of elements is stored in a contiguous block of memory.
    Type: Grant
    Filed: January 8, 2021
    Date of Patent: September 5, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Nitin Garegrat, Shankar Narayan, Derek Gladding
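As a rough illustration of the layout this abstract describes (not code from the patent), the sketch below stores, for each (height, width) position of a 3D matrix, all elements along the depth axis in one contiguous run of a flat buffer; the array names and shapes are assumptions.
```python
# Illustrative sketch only: depth-major ("channels-last") storage of a 3D matrix.
# For each (h, w) position, the elements along the depth axis land in one
# contiguous block of the flat buffer. Shapes and names are assumptions.

def store_depth_major(matrix, height, width, depth):
    flat = [0] * (height * width * depth)
    for h in range(height):
        for w in range(width):
            base = (h * width + w) * depth     # start of the contiguous block
            for d in range(depth):
                flat[base + d] = matrix[h][w][d]
    return flat

# Example: a 2x2x3 matrix; elements sharing an (h, w) position stay adjacent.
m = [[[1, 2, 3], [4, 5, 6]],
     [[7, 8, 9], [10, 11, 12]]]
print(store_depth_major(m, 2, 2, 3))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
```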
  • Patent number: 11740993
    Abstract: An apparatus includes a plurality of processor circuits, a cache memory circuit, and a trace control circuit. The trace control circuit may be configured, in response to activation of a mode to record information indicative of program execution of at least one processor circuit of the plurality of processor circuits, to monitor memory requests transmitted between ones of the plurality of processor circuits and the cache memory circuit, and then to select a particular memory request of monitored memory requests using an arbitration algorithm. The trace control circuit may be further configured to allocate space in a trace buffer to the particular memory request, and to store, in the trace buffer, information associated with the particular memory request.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: August 29, 2023
    Assignee: Apple Inc.
    Inventors: Andrew J. Beaumont-Smith, Sandeep Gupta, Krishna C. Potnuru, Matthias Knoth
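A minimal software analogue of the trace flow in the abstract above (monitor requests, pick one by arbitration, allocate trace-buffer space, record it); the round-robin arbiter and record fields are assumptions, not the patented circuit.
```python
# Hypothetical software analogue of the described trace flow; the round-robin
# arbitration and record format are assumptions, not the patented circuit.

from collections import deque

class TraceControl:
    def __init__(self, buffer_size):
        self.trace_buffer = []
        self.buffer_size = buffer_size
        self.next_core = 0                      # round-robin pointer

    def arbitrate(self, pending):
        # pending: {core_id: deque of memory requests to the cache}
        cores = sorted(pending)
        for i in range(len(cores)):
            core = cores[(self.next_core + i) % len(cores)]
            if pending[core]:
                self.next_core = (cores.index(core) + 1) % len(cores)
                return core, pending[core].popleft()
        return None

    def record(self, pending):
        choice = self.arbitrate(pending)
        if choice and len(self.trace_buffer) < self.buffer_size:  # allocate space
            core, request = choice
            self.trace_buffer.append({"core": core, "request": request})

pending = {0: deque(["load 0x100"]), 1: deque(["store 0x200"])}
tc = TraceControl(buffer_size=4)
tc.record(pending); tc.record(pending)
print(tc.trace_buffer)
```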
  • Patent number: 11714756
    Abstract: Embodiments of information handling systems (IHSs) and methods are provided herein to improve the security and performance of a shared cache memory contained within a multi-core host processor. Although not strictly limited to such, the techniques described herein may be used to improve the security and performance of a shared last level cache (LLC) contained within a multi-core host processor included within a virtualized and/or containerized IHS. In the disclosed embodiments, cache security and performance are improved by using pre-boot Memory Reference Code (MRC) based cache initialization methods to create page-sized cache namespaces, which may be dynamically mapped to virtualized and/or containerized applications when the applications are subsequently booted during operating system (OS) runtime.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: August 1, 2023
    Assignee: Dell Products L.P.
    Inventors: Shekar Babu Suryanarayana, Vivek Viswanathan Iyer
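As a loose illustration of the idea of carving a shared cache into page-sized namespaces that are handed to applications when they boot; the page size and names below are assumptions, not the MRC-based mechanism itself.
```python
# Loose illustration only: carve a shared cache into page-sized namespaces and
# map them to applications on demand. Page size and names are assumptions.

PAGE_SIZE = 4096

class CacheNamespaces:
    def __init__(self, cache_bytes):
        self.free = list(range(cache_bytes // PAGE_SIZE))   # page-sized slots
        self.mapping = {}                                    # app -> slot

    def map_app(self, app_name):
        if app_name not in self.mapping and self.free:
            self.mapping[app_name] = self.free.pop(0)
        return self.mapping.get(app_name)

ns = CacheNamespaces(cache_bytes=16 * PAGE_SIZE)
print(ns.map_app("container-a"), ns.map_app("container-b"))  # 0 1
```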
  • Patent number: 11709773
    Abstract: A computer-readable recording medium storing an information processing program for causing a computer to execute a process including: specifying an amount of first areas subjected to data update among a plurality of first areas that are contained in a cache storage area and allowed to be synchronized individually from each other with a nonvolatile storage area; and determining whether to individually synchronize the first areas subjected to the data update among the plurality of first areas with the nonvolatile storage area or collectively synchronize a second area that is formed by the plurality of first areas and allowed to be collectively synchronized with the nonvolatile storage area, with the nonvolatile storage area, based on the specified amount, a first processing time taken for synchronization between the first areas and the nonvolatile storage area, and a second processing time taken for synchronization between the second area and the nonvolatile storage area.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: July 25, 2023
    Assignee: FUJITSU LIMITED
    Inventor: Satoshi Iwata
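The decision in the abstract above reduces to comparing the cost of synchronizing the dirty first areas one by one against the cost of one collective synchronization of the enclosing second area; a minimal sketch of that comparison, with parameter names assumed.
```python
# Minimal sketch of the described cost comparison; names are assumptions.

def choose_sync(dirty_first_areas, t_first, t_second):
    """Return 'individual' if syncing each dirty first area is cheaper than
    one collective sync of the second area, else 'collective'."""
    return "individual" if dirty_first_areas * t_first < t_second else "collective"

print(choose_sync(dirty_first_areas=3, t_first=2.0, t_second=10.0))  # individual
print(choose_sync(dirty_first_areas=8, t_first=2.0, t_second=10.0))  # collective
```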
  • Patent number: 11698859
    Abstract: An embodiment of an electronic apparatus may include one or more substrates, and logic coupled to the one or more substrates, the logic to receive a first request to allocate a direct swap file associated with an application stored in a system memory on a persistent storage media, and map a linear and continuous space of the persistent storage media to the direct swap file associated with the application in response to the first request. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: July 11, 2023
    Assignee: SK Hynix NAND Product Solutions Corp.
    Inventor: Mariusz Barczak
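A hypothetical sketch of the allocation step the abstract describes: reserve one linear, contiguous extent of the persistent media and associate it with the requesting application's direct swap file; offsets and sizes are assumptions.
```python
# Hypothetical sketch: reserve a linear, contiguous extent of persistent media
# for an application's direct swap file. Sizes and names are assumptions.

class PersistentMedia:
    def __init__(self, capacity):
        self.capacity = capacity
        self.next_free = 0                 # extents are handed out linearly
        self.swap_files = {}               # app -> (start, length)

    def allocate_direct_swap(self, app, length):
        if self.next_free + length > self.capacity:
            raise MemoryError("no contiguous space left")
        self.swap_files[app] = (self.next_free, length)
        self.next_free += length
        return self.swap_files[app]

media = PersistentMedia(capacity=1 << 30)
print(media.allocate_direct_swap("app-1", 64 << 20))  # (0, 67108864)
```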
  • Patent number: 11698731
    Abstract: Responsive to a power-on of a memory device, an elapsed power-off time is identified based on a difference between a time at which the power-on occurred and a time at which a previous power-off of the memory device occurred. Responsive to a determination that the elapsed power-off time satisfies an elapsed time threshold criterion, a request to perform a first write operation on a memory unit of the memory device since power-on is received, a performance parameter associated with the memory unit of the memory device is changed to a first parameter value that corresponds to a reduced performance level, and the write operation is performed on the memory unit of the memory device in accordance with the first parameter value that corresponds to the reduced performance level. Responsive to completion of the write operation, the performance parameter is changed to a value that corresponds to a normal performance level.
    Type: Grant
    Filed: August 6, 2021
    Date of Patent: July 11, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Murong Lang, Zhenming Zhou
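The control flow in the abstract above can be paraphrased as a small state change around the first write after a long power-off; the sketch below is an assumption-laden paraphrase for intuition, not the device firmware.
```python
# Assumption-laden paraphrase of the described flow, not actual firmware.

REDUCED, NORMAL = "reduced", "normal"

class MemoryUnit:
    def __init__(self, elapsed_threshold_s):
        self.elapsed_threshold_s = elapsed_threshold_s
        self.performance = NORMAL

    def power_on(self, poweroff_time_s, poweron_time_s):
        self.elapsed_off = poweron_time_s - poweroff_time_s

    def first_write(self, data):
        if self.elapsed_off >= self.elapsed_threshold_s:
            self.performance = REDUCED          # write more conservatively
        self.write(data)
        self.performance = NORMAL               # restore after completion

    def write(self, data):
        print(f"writing {data!r} at {self.performance} performance")

unit = MemoryUnit(elapsed_threshold_s=7 * 24 * 3600)
unit.power_on(poweroff_time_s=0, poweron_time_s=30 * 24 * 3600)
unit.first_write("block-0")   # writing 'block-0' at reduced performance
```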
  • Patent number: 11688431
    Abstract: In one aspect of tape repositioning management in accordance with the present description, in response to loading a tape in a tape drive, mounting of the tape by a linear tape file system (LTFS) is initiated, including reading an index partition on the tape to extract metadata for mounting the tape LTFS, and, prior to accessing a data area of the tape in response to any application access request, the tape is repositioned within a data partition to read a vHRTD (virtual High Resolution Tape Directory) recorded in an EOD (End of Data) portion of the data partition, such as an EOD data set. The EOD portion is read to retrieve the vHRTD to facilitate application-requested accesses to the tape. In one embodiment, repositioning and stopping the tape at the beginning of the data partition after reading the index partition containing metadata is bypassed.
    Type: Grant
    Filed: May 6, 2021
    Date of Patent: June 27, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Tsuyoshi Miyamura, Atsushi Abe, Setsuko Masuda
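A very rough sketch of the order of operations the abstract describes (read the index partition, then go straight to the EOD of the data partition for the vHRTD); the object and method names are assumptions, not the LTFS API.
```python
# Rough, hypothetical sketch of the mount order described above; the FakeTape
# stand-in and its method names are assumptions, not the LTFS API.

class FakeTape:
    """Stand-in object so the sketch runs; real drives expose none of this."""
    def read_index_partition(self): return {"volume": "LTFS-01"}
    def reposition_to_eod(self, partition): self.position = (partition, "EOD")
    def read_vhrtd(self): return {"records": 123456}

def mount_with_vhrtd(tape):
    metadata = tape.read_index_partition()      # metadata needed to mount
    # Bypass the usual stop at the start of the data partition and go straight
    # to the End Of Data portion, where the vHRTD is recorded.
    tape.reposition_to_eod(partition="data")
    vhrtd = tape.read_vhrtd()                   # high-resolution tape directory
    return metadata, vhrtd

print(mount_with_vhrtd(FakeTape()))
```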
  • Patent number: 11675706
    Abstract: A programmable switch includes at least one memory configured to store a cache directory for a distributed cache, and circuitry configured to receive a cache line request from a client device to obtain a cache line. The cache directory is updated based on the received cache line request, and the cache line request is sent to a memory device to obtain the requested cache line. An indication of the cache directory update is sent to a controller for the distributed cache to update a global cache directory. In one aspect, the controller sends at least one additional indication of the update to at least one other programmable switch to update at least one backup cache directory stored at the at least one other programmable switch.
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: June 13, 2023
    Assignee: Western Digital Technologies, Inc.
    Inventors: Marjan Radi, Dejan Vucinic
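An illustrative sketch of how a directory update could propagate from a programmable switch to the controller and on to backup directories, as the abstract describes; all class and field names are assumptions.
```python
# Illustrative propagation of a directory update from a programmable switch to
# the controller and on to backup directories; all names are assumptions.

class Controller:
    def __init__(self):
        self.global_directory = {}
        self.backup_switches = []

    def notify(self, cache_line, entry):
        self.global_directory[cache_line] = entry
        for switch in self.backup_switches:          # keep backups in sync
            switch.backup_directory[cache_line] = entry

class ProgrammableSwitch:
    def __init__(self, controller):
        self.cache_directory = {}
        self.backup_directory = {}
        self.controller = controller

    def handle_cache_line_request(self, client, cache_line):
        entry = {"owner": client, "state": "shared"}
        self.cache_directory[cache_line] = entry      # local update
        self.controller.notify(cache_line, entry)     # global + backups
        return f"forwarded request for line {cache_line:#x} to memory device"

ctrl = Controller()
primary, backup = ProgrammableSwitch(ctrl), ProgrammableSwitch(ctrl)
ctrl.backup_switches.append(backup)
primary.handle_cache_line_request("client-7", cache_line=0x40)
print(backup.backup_directory)
```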
  • Patent number: 11635900
    Abstract: A method includes receiving, at a controller resident on a memory device, signaling indicative of performance of a shutdown operation involving the memory device; initiating a power-off sequence in response to the received signaling, wherein the power-off sequence includes execution of instructions corresponding to a plurality of routines; and writing data comprising respective shutdown signatures associated with execution of the plurality of routines to a media associated with the memory device upon completion of each of one or more of the plurality of routines, wherein the media is bit-addressable or byte-addressable.
    Type: Grant
    Filed: August 27, 2021
    Date of Patent: April 25, 2023
    Assignee: Micron Technology, Inc.
    Inventor: Kelsey J. Dobner
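A minimal sketch of the power-off sequence described above: run each shutdown routine and, as it completes, persist a signature for it; the routine names and the list used as stand-in media are assumptions.
```python
# Sketch of the described power-off sequence: run each shutdown routine and,
# as it completes, persist a signature for it. Names are assumptions.

def power_off(routines, media):
    for name, routine in routines:
        routine()
        media.append({"routine": name, "signature": f"done:{name}"})

log = []   # stand-in for bit-/byte-addressable media
power_off([("flush_caches", lambda: None), ("park_media", lambda: None)], log)
print(log)
```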
  • Patent number: 11625325
    Abstract: A server includes a data cache for storing data objects requested by users logged in under different user roles. Different user roles may have different permissions to access individual fields within a data object. When a cache miss occurs, the cache may begin loading portions of a requested data object from various data sources. Instead of waiting for the entire object to load to change the object state to “valid,” the cache may incrementally update the state through various levels of validity based on the user role of the request. When a portion of the data object used by a low-level user role is received, the object state can be upgraded to be valid for that user role while data for higher-level user roles continues to load. The portion of the data object can then be sent to the low-level user roles without waiting for the rest of the data object to load.
    Type: Grant
    Filed: June 1, 2020
    Date of Patent: April 11, 2023
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Yuvaraj Chandrasekaran, Mihir Kumar Das, Pushpander Singh, Lawrence Lindsey
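An illustration of the per-role validity idea in the abstract above: the cached object becomes usable for a low-level role as soon as the fields that role may see have loaded, while the remaining fields keep loading; the role and field names are assumptions.
```python
# Illustration of per-role cache validity. Role and field names are assumptions.

ROLE_FIELDS = {"viewer": {"title"}, "admin": {"title", "salary"}}

class CachedObject:
    def __init__(self):
        self.fields = {}

    def load(self, name, value):
        self.fields[name] = value            # portions arrive incrementally

    def valid_for(self, role):
        return ROLE_FIELDS[role] <= self.fields.keys()

obj = CachedObject()
obj.load("title", "Engineer")
print(obj.valid_for("viewer"), obj.valid_for("admin"))   # True False
obj.load("salary", 100000)
print(obj.valid_for("admin"))                            # True
```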
  • Patent number: 11620233
    Abstract: An integrated circuit for offloading a page migration operation from a host processor is provided. The integrated circuit is configured to: receive, from the host processor, a request to perform the page migration operation from a first physical address to a second physical address; and based on the request, perform the page migration operation. The page migration operation comprises: performing a copy operation of data from the first physical address to the second physical address, and updating a page table entry based on the second physical address, to enable the host processor to access the data from the second physical address based on the updated page table entry.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: April 4, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Adi Habusha, Ali Ghassan Saidi, Tzachi Zidenberg
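A minimal sketch of the offloaded migration in the abstract above: copy the page's data, then point the page-table entry at the new physical address; the data structures are assumptions standing in for real hardware.
```python
# Minimal sketch of the offloaded migration: copy the page's data, then point
# the page-table entry at the new physical address. Structures are assumptions.

def migrate_page(physical_memory, page_table, virtual_page, src, dst, page_size):
    physical_memory[dst:dst + page_size] = physical_memory[src:src + page_size]
    page_table[virtual_page] = dst          # host now resolves to the new frame

mem = bytearray(b"A" * 16 + b"\x00" * 16)   # two 16-byte "frames"
table = {0: 0}
migrate_page(mem, table, virtual_page=0, src=0, dst=16, page_size=16)
print(table[0], bytes(mem[16:32]))          # 16 b'AAAAAAAAAAAAAAAA'
```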
  • Patent number: 11620230
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to facilitate read-modify-write support in a coherent victim cache with parallel data paths. An example apparatus includes a random-access memory configured to be coupled to a central processing unit via a first interface and a second interface, the random-access memory configured to obtain a read request indicating a first address to read via a snoop interface, an address encoder coupled to the random-access memory, the address encoder to, when the random-access memory indicates a hit of the read request, generate a second address corresponding to a victim cache based on the first address, and a multiplexer coupled to the victim cache to transmit a response including data obtained from the second address of the victim cache.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: April 4, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
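A loose software analogue of the lookup path described above: on a hit in the RAM's tag check, an encoder derives the victim-cache address and the response is served from there; all structures and names are assumptions.
```python
# Loose software analogue of the described lookup path; all structures are
# assumptions, not the hardware data path.

class VictimCachePath:
    def __init__(self):
        self.tags = {}            # first address -> victim-cache slot (encoder)
        self.victim_cache = {}    # slot -> data

    def snoop_read(self, first_address):
        if first_address in self.tags:                 # hit
            slot = self.tags[first_address]            # "address encoder"
            return self.victim_cache[slot]             # "multiplexer" output
        return None                                    # miss

path = VictimCachePath()
path.tags[0x1000] = 3
path.victim_cache[3] = b"cache line"
print(path.snoop_read(0x1000))
```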
  • Patent number: 11599464
    Abstract: An electronic device includes a memory controller having an improved operation speed. The memory controller includes a main memory, a processor configured to generate commands for accessing data stored in the main memory, a scheduler configured to store the commands and output the commands according to a preset criterion, a cache memory configured to cache and store data accessed by the processor among the data stored in the main memory, and a hazard filter configured to store information on an address of the main memory corresponding to a write command among the commands, provide a pre-completion response for the write command to the scheduler upon receiving the write command, and provide the write command to the main memory.
    Type: Grant
    Filed: August 17, 2020
    Date of Patent: March 7, 2023
    Assignee: SK hynix Inc.
    Inventor: Do Hun Kim
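A sketch of the hazard-filter idea from the abstract above: the scheduler gets a pre-completion response immediately, while the filter remembers the outstanding address and forwards the write to main memory afterwards; the class and method names are assumptions.
```python
# Sketch of the hazard-filter idea; names are assumptions, not the SK hynix
# controller design.

class HazardFilter:
    def __init__(self, main_memory):
        self.main_memory = main_memory
        self.pending = {}                     # address -> data not yet written

    def write(self, address, data):
        self.pending[address] = data          # remember outstanding write
        return "pre-completion"               # scheduler need not wait

    def drain(self):
        while self.pending:
            address, data = self.pending.popitem()
            self.main_memory[address] = data  # actual write to main memory

memory = {}
hf = HazardFilter(memory)
print(hf.write(0x80, "payload"))  # pre-completion
hf.drain()
print(memory)                     # {128: 'payload'}
```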
  • Patent number: 11592984
    Abstract: A method includes receiving at a storage device a command from a host. When learning is active on the storage device, an initial parameter value of a plurality of parameter values is used for performing a first action of a plurality of actions for the command. The first action is performed using the initial parameter value of the plurality of parameter values for the command. The first parameter value is incremented to a next parameter value of the plurality of parameter values for the command for use in reperforming the first action.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: February 28, 2023
    Assignee: SEAGATE TECHNOLOGY LLC
    Inventors: Harry Tiotantra, Jun Cai, Kai Chen, WeiQing Zhou, Feng Shen
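A short sketch of the learning loop in the abstract above: while learning is active, the action is performed with the initial parameter value and then reperformed with each next value; the parameter values are assumptions.
```python
# Sketch of the described learning loop; parameter values are assumptions.

def handle_command(command, parameter_values, perform_action, learning=True):
    if not learning:
        perform_action(command, parameter_values[0])
        return
    for value in parameter_values:        # initial value, then the next ones
        perform_action(command, value)    # reperform with the incremented value

handle_command("seek", [1, 2, 3], lambda c, v: print(f"{c} with parameter {v}"))
```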
  • Patent number: 11593001
    Abstract: A VPU and associated components include a min/max collector, automatic store predication functionality, a SIMD data path organization that allows for inter-lane sharing, a transposed load/store with stride parameter functionality, a load with permute and zero insertion functionality, hardware, logic, and memory layout functionality to allow for two-point and two-by-two-point lookups, and per-memory-bank load caching capabilities. In addition, decoupled accelerators are used to offload VPU processing tasks to increase throughput and performance, and a hardware sequencer is included in a DMA system to reduce programming complexity of the VPU and the DMA system. The DMA and VPU execute a VPU configuration mode that allows the VPU and DMA to operate without a processing controller for performing dynamic region-based data movement operations.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: February 28, 2023
    Assignee: NVIDIA Corporation
    Inventors: Ching-Yu Hung, Ravi P Singh, Jagadeesh Sankaran, Yen-Te Shih, Ahmad Itani
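The abstract above lists several hardware features rather than a single algorithm; purely for intuition, here is a software sketch of just one of them, a transposed (strided) load, with shapes and names assumed.
```python
# Only one of the listed features, sketched in software for intuition: a
# "transposed" gather that reads a column of a row-major matrix using a
# stride equal to the row width. Shapes and names are assumptions.

def transposed_load(flat, rows, cols, column):
    return [flat[r * cols + column] for r in range(rows)]   # stride = cols

flat = [1, 2, 3,
        4, 5, 6]          # 2x3 matrix, row-major
print(transposed_load(flat, rows=2, cols=3, column=1))      # [2, 5]
```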
  • Patent number: 11593265
    Abstract: A graphics processing system is disclosed having a cache system (24) arranged between memory (23) and the graphics processor (20), the cache system comprising a first cache (53) for transferring data to and from the graphics processor (20) and a second cache (54) arranged and configured to transfer data between the first cache (53) and memory (23). When data is to be written from the first cache (53) to memory (23), a cache controller (55) determines a data type of the data and, in dependence on the data type, either causes the data to be written into the second cache (54) without writing the data to memory (23), or causes the data to be written to memory (23) without storing the data in the second cache (54). In embodiments the second cache (54) is write-only allocated.
    Type: Grant
    Filed: March 17, 2020
    Date of Patent: February 28, 2023
    Assignee: Arm Limited
    Inventor: Olof Henrik Uhrenholt
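A sketch of the data-type-dependent write-back decision described above: depending on type, data either goes into the second cache without touching memory, or to memory bypassing the second cache; the set of "cacheable" types is an assumption.
```python
# Sketch of the data-type-dependent write-back decision; the set of types is
# an assumption, not from the patent.

WRITE_TO_SECOND_CACHE = {"tile_buffer", "depth"}    # assumed example types

def write_back(data_type, data, second_cache, memory, address):
    if data_type in WRITE_TO_SECOND_CACHE:
        second_cache[address] = data        # no write to memory
    else:
        memory[address] = data              # bypass the second cache

second_cache, memory = {}, {}
write_back("tile_buffer", b"t", second_cache, memory, 0x10)
write_back("texture", b"x", second_cache, memory, 0x20)
print(second_cache, memory)
```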
  • Patent number: 11593268
    Abstract: Techniques for cache management involve accessing, when a first data block to be accessed is missing in a first cache, the first data block from a storage device storing the first data block; selecting, when the first cache is full and based on a plurality of parameters associated with a plurality of eviction policies, an eviction policy for evicting a data block in the first cache from the plurality of eviction policies, the plurality of parameters indicating corresponding possibilities that the plurality of eviction policies are selected; evicting a second data block in the first cache to a second cache based on the selected eviction policy, the second cache being configured to record the data block evicted from the first cache; and caching the accessed first data block in the first cache. Such techniques can improve the cache hit rate, thereby improving the access performance of a system.
    Type: Grant
    Filed: October 26, 2020
    Date of Patent: February 28, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Shuo Lv, Ming Zhang
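A sketch of choosing an eviction policy according to per-policy selection probabilities and then demoting the victim into the second cache, as the abstract describes; the policies and weights below are assumptions.
```python
# Sketch of probability-weighted eviction-policy selection; the policies and
# weights are assumptions.

import random

def evict_one(first_cache, second_cache, policies, weights):
    policy_name, pick_victim = random.choices(policies, weights=weights, k=1)[0]
    victim_key = pick_victim(first_cache)
    second_cache[victim_key] = first_cache.pop(victim_key)   # demote, not drop
    return policy_name

first = {"a": 1, "b": 2, "c": 3}
second = {}
policies = [("fifo", lambda c: next(iter(c))),       # oldest insertion
            ("random", lambda c: random.choice(list(c)))]
print(evict_one(first, second, policies, weights=[0.7, 0.3]), second)
```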
  • Patent number: 11586368
    Abstract: Techniques for configuring unused memory into namespaces based on determined attributes of incoming input/output (IO). Incoming IO is analyzed to determine characteristics of the IO. Unused memory space is identified. Based on the characteristics of the IO, a portion of the unused memory space is configured into a particular namespace. This namespace is configured to handle IO having the identified characteristics. Subsequent to configuring the portion of the unused memory space into the particular namespace, a file system is created for the particular namespace. Subsequent IO, which shares the same characteristics as the IO, is routed to the namespace, which is managed using the file system.
    Type: Grant
    Filed: August 23, 2021
    Date of Patent: February 21, 2023
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Parmeshwr Prasad, Bing Liu, Rahul Deo Vishwakarma
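A sketch of the flow described above: classify incoming IO, carve a namespace out of unused space for that class, create a file system for it, and route later IO with the same characteristics there; the thresholds and names are assumptions.
```python
# Sketch of IO-characteristic-driven namespace creation and routing; thresholds
# and names are assumptions.

class NamespaceManager:
    def __init__(self, unused_bytes):
        self.unused_bytes = unused_bytes
        self.namespaces = {}        # characteristics -> {"fs": ..., "size": ...}

    def classify(self, io):
        return ("small" if io["size"] <= 4096 else "large", io["pattern"])

    def route(self, io):
        key = self.classify(io)
        if key not in self.namespaces:                 # configure on first sight
            size = min(self.unused_bytes, 64 << 20)
            self.unused_bytes -= size
            self.namespaces[key] = {"fs": f"fs-{len(self.namespaces)}", "size": size}
        return self.namespaces[key]["fs"]

mgr = NamespaceManager(unused_bytes=1 << 30)
print(mgr.route({"size": 512, "pattern": "random"}))
print(mgr.route({"size": 1024, "pattern": "random"}))   # same namespace
```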
  • Patent number: 11573900
    Abstract: Examples described herein relate to prefetching content from a remote memory device to a memory tier local to a higher level cache or memory. An application or device can indicate a time availability for data to be available in a higher level cache or memory. A prefetcher used by a network interface can allocate resources in any intermediary network device in a data path from the remote memory device to the memory tier local to the higher level cache. Memory access bandwidth, egress bandwidth, and memory space in any intermediary network device can be allocated for prefetch of content. In some examples, proactive prefetch can occur for content expected to be prefetched but not requested to be prefetched.
    Type: Grant
    Filed: September 11, 2019
    Date of Patent: February 7, 2023
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Slawomir Putyrski, Susanne M. Balle
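A loose sketch of deadline-driven prefetch as described above: the application states when the data must be resident, and bandwidth is reserved on each intermediary device on the path before the transfer starts; the numbers and device names are assumptions.
```python
# Loose sketch of deadline-driven prefetch with per-device reservations;
# numbers and names are assumptions.

def prefetch(path_devices, bytes_needed, deadline_s, now_s=0.0):
    time_left = deadline_s - now_s
    required_bw = bytes_needed / time_left            # bytes per second
    for device in path_devices:
        if device["free_bw"] < required_bw:
            return False                              # cannot meet the deadline
    for device in path_devices:
        device["free_bw"] -= required_bw              # reserve along the path
    return True

path = [{"name": "switch-1", "free_bw": 1e9}, {"name": "nic", "free_bw": 5e8}]
print(prefetch(path, bytes_needed=100e6, deadline_s=0.5))   # True
```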
  • Patent number: 11567803
    Abstract: A memory allocation device for deployment within a host server computer includes control circuitry, a first interface to a local processing unit disposed within the host computer and local operating memory disposed within the host computer, and a second interface to a remote computer. The control circuitry allocates a first portion of the local memory to a first process executed by the local processing unit and transmits, to the remote computer via the second interface, a request to allocate to a second process executed by the local processing unit a first portion of a remote memory disposed within the remote computer. The control circuitry further receives instructions via the first interface to store data at a memory address within the first portion of the remote memory and transmits those instructions to the remote computer via the second interface.
    Type: Grant
    Filed: October 29, 2020
    Date of Patent: January 31, 2023
    Assignee: Rambus Inc.
    Inventors: Christopher Haywood, Evan Lawrence Erickson
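A sketch of the split allocation described above: one process gets local memory, a second gets an allocation on a remote computer, and stores aimed at the remote range are forwarded over the second interface; all class and method names are assumptions.
```python
# Sketch of local-plus-remote allocation with store forwarding; all names are
# assumptions standing in for the hardware interfaces.

class RemoteComputer:
    def __init__(self):
        self.memory, self.allocations = {}, {}
    def allocate(self, process, size):
        self.allocations[process] = size
    def store(self, address, data):
        self.memory[address] = data

class MemoryAllocationDevice:
    def __init__(self, remote):
        self.local = {}        # stand-in for local operating memory
        self.remote = remote   # stand-in for the second interface

    def allocate_local(self, process, size):
        return {"process": process, "kind": "local", "size": size}

    def allocate_remote(self, process, size):
        self.remote.allocate(process, size)          # request over interface 2
        return {"process": process, "kind": "remote", "size": size}

    def store(self, allocation, address, data):
        if allocation["kind"] == "remote":
            self.remote.store(address, data)         # forward to remote computer
        else:
            self.local[address] = data

remote = RemoteComputer()
dev = MemoryAllocationDevice(remote)
a1 = dev.allocate_local("proc-1", 4096)
a2 = dev.allocate_remote("proc-2", 8192)
dev.store(a2, 0x0, b"remote data")
print(remote.memory)
```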