Patents Examined by Pierre-Michel Bataille
  • Patent number: 11853211
    Abstract: A data placement program causes a computer to execute a process of placing data in a main memory and a cache memory. When an operation is performed using a plurality of first data groups and a plurality of second data groups to generate a plurality of pieces of operation result data representing results of the operation, the process determines a number of the first data groups and a number of the second data groups, both corresponding to some of the pieces of operation result data, based on a size of one piece of the operation result data and a size of an operation result area in the cache memory that stores those pieces, and places the plurality of first data groups and the plurality of second data groups in the main memory based on the determined numbers.
    Type: Grant
    Filed: March 31, 2022
    Date of Patent: December 26, 2023
    Assignee: FUJITSU LIMITED
    Inventor: Yukihiro Komura
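The sizing step in this abstract — choosing how many first and second data groups to pair up so their partial results fit a cache-resident result area — can be illustrated with a minimal sketch. All names (`plan_tiles`, the even split between the two group dimensions) are illustrative assumptions, not the patented method:

```python
def plan_tiles(result_elem_size, result_area_size, num_a_groups, num_b_groups):
    """Choose how many first (A) and second (B) data groups to process
    together so that their partial results fit in a cache-sized result area."""
    # How many pieces of operation result data fit in the cache area.
    max_results = result_area_size // result_elem_size
    # Split the result budget roughly evenly between the two group dimensions.
    tile_a = max(1, int(max_results ** 0.5))
    tile_b = max(1, max_results // tile_a)
    # Never plan more groups than actually exist.
    return min(tile_a, num_a_groups), min(tile_b, num_b_groups)
```

A placement pass would then lay the chosen A and B groups out contiguously in main memory so each tile streams in without conflict misses.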
  • Patent number: 11853593
    Abstract: Methods and systems for managing communications are disclosed. A host device and a management controller may communicate via memory mapped communications using shared memory. To improve the security of the memory mapped communications, access requests for shared memory may be monitored. Access controls for the shared memory may be put in place to reduce the likelihood of data being made unavailable before it is processed. The access controls may be lifted once the data stored in shared memory has been read, completing the memory mapped communications.
    Type: Grant
    Filed: April 18, 2022
    Date of Patent: December 26, 2023
    Assignee: Dell Products L.P.
    Inventors: Bassem Elazzami, Adolfo Sandor Montero, Ibrahim Sayyed
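The access-control lifecycle described here — lock a shared region once data is written, lift the control only after the reader has consumed it — can be modeled with a toy mailbox. The class and method names are assumptions for illustration only:

```python
class SharedRegion:
    """Toy model of a shared-memory mailbox between a host and a
    management controller: writes are rejected until the pending
    message has been read by the other side."""
    def __init__(self):
        self._data = None
        self._unread = False

    def write(self, data):
        if self._unread:
            # Access control in force: data would be overwritten
            # before it was processed.
            raise PermissionError("region locked until pending data is read")
        self._data = data
        self._unread = True

    def read(self):
        self._unread = False   # lift the access control after the read
        return self._data
```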
  • Patent number: 11847067
    Abstract: Methods and apparatus relating to cryptographic protection of memory attached over interconnects are described. In an embodiment, memory stores data and a processor having execution circuitry executes an instruction to program an inline memory expansion logic and a host memory encryption logic with one or more cryptographic keys. The inline memory expansion logic encrypts the data to be written to the memory and decrypts encrypted data to be read from the memory. The memory is coupled to the processor via an interconnect endpoint of a system fabric. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: October 19, 2021
    Date of Patent: December 19, 2023
    Assignee: Intel Corporation
    Inventors: Siddhartha Chhabra, Prashant Dewan
  • Patent number: 11847054
    Abstract: A processing system server and methods for performing asynchronous data store operations. The server includes a processor which maintains a cache of objects in memory of the server. The processor executes an asynchronous computation to determine the value of an object. In response to receiving a request for the object occurring before the asynchronous computation has determined the value of the object, a value of the object is returned from the cache. In response to receiving a request for the object occurring after the asynchronous computation has determined the value of the object, a value of the object determined by the asynchronous computation is returned. The asynchronous computation may comprise at least one future, such as a ListenableFuture, or a process or thread. The asynchronous computation may determine the value of the object by querying at least one additional server.
    Type: Grant
    Filed: July 6, 2021
    Date of Patent: December 19, 2023
    Assignee: International Business Machines Corporation
    Inventor: Arun Iyengar
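The serving rule in this abstract — return the cached value while an asynchronous computation is in flight, and adopt the computed value once the future completes — can be sketched with Python's `concurrent.futures` standing in for the `ListenableFuture` the abstract mentions. Class and method names are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

class AsyncCache:
    """Serve cached values immediately; adopt the result of a
    background computation (a future) once it has finished."""
    def __init__(self):
        self._cache = {}
        self._pending = {}
        self._pool = ThreadPoolExecutor(max_workers=2)

    def refresh(self, key, compute):
        # Kick off the asynchronous computation for this object.
        self._pending[key] = self._pool.submit(compute)

    def get(self, key):
        fut = self._pending.get(key)
        if fut is not None and fut.done():
            # The async computation finished: adopt its value.
            self._cache[key] = fut.result()
            del self._pending[key]
        # Otherwise fall back to whatever is currently cached.
        return self._cache.get(key)
```

A request arriving before the future completes sees the old cached value; a request arriving after sees the freshly computed one.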
  • Patent number: 11836088
    Abstract: Guided cache replacement is described. In accordance with the described techniques, a request to access a cache is received, and a cache replacement policy which controls loading data into the cache is accessed. The cache replacement policy includes a tree structure having nodes corresponding to cachelines of the cache and a traversal algorithm controlling traversal of the tree structure to select one of the cachelines. Traversal of the tree structure is guided using the traversal algorithm to select a cacheline to allocate to the request. The guided traversal modifies at least one decision of the traversal algorithm to avoid selection of a non-replaceable cacheline.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: December 5, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Jeffrey Christopher Allan
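The guided traversal this abstract describes — a tree-PLRU walk whose per-node decisions are overridden so a non-replaceable cacheline is never selected — can be sketched for a 4-way set (3 internal tree nodes). The function name and the read-only treatment of the PLRU bits (a real policy would also update them) are simplifying assumptions:

```python
def plru_victim(bits, locked):
    """Guided tree-PLRU victim selection over 4 ways.
    bits[0..2]: internal-node decisions (0 = go left, 1 = go right).
    locked[0..3]: ways that must not be replaced.
    Decisions are overridden to steer away from fully locked subtrees."""
    # Root node chooses between subtrees {0,1} and {2,3}.
    go_right = bits[0]
    if all(locked[w] for w in ((2, 3) if go_right else (0, 1))):
        go_right ^= 1  # guided override: the chosen subtree is all locked
    node = 2 if go_right else 1
    base = 2 if go_right else 0
    pick = bits[node]
    if locked[base + pick]:
        pick ^= 1      # guided override at the leaf level
    return base + pick
```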
  • Patent number: 11822815
    Abstract: Ring buffer storage circuitry is disclosed which stores a ring buffer comprising multiple slots to hold a queued sequence of data items. Data processing circuitry executes a plurality of processes to add one or more data items to be processed to the queued sequence and to remove one or more data items for processing from the queued sequence. Each process is arranged to perform an acquire process to acquire at least one slot in the ring buffer and to subsequently perform a release process to release the at least one slot. Ring buffer metadata storage circuitry stores metadata for the ring buffer comprising a first reference indicator and a second reference indicator. Corresponding methods and instructions are also disclosed.
    Type: Grant
    Filed: February 25, 2020
    Date of Patent: November 21, 2023
    Assignee: Arm Limited
    Inventor: Eric Ola Harald Liljedahl
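The acquire/release protocol and the two reference indicators in this abstract can be illustrated with a single-producer sketch, where `head` and `tail` play the role of the two indicators. This is a minimal model, not the claimed circuitry; the lock-free multi-process aspects are omitted:

```python
class RingBuffer:
    """Fixed-size ring buffer with a two-step protocol: a producer
    acquires a slot, fills it, then releases it for consumption."""
    def __init__(self, size):
        self.slots = [None] * size
        self.head = 0   # first reference indicator: next slot to acquire
        self.tail = 0   # second reference indicator: oldest unconsumed slot

    def acquire(self):
        if self.head - self.tail == len(self.slots):
            return None                 # ring full, nothing to acquire
        slot = self.head % len(self.slots)
        self.head += 1
        return slot

    def release(self, slot, item):
        self.slots[slot] = item         # publish the item in the acquired slot

    def pop(self):
        if self.tail == self.head:
            return None                 # ring empty
        item = self.slots[self.tail % len(self.slots)]
        self.tail += 1
        return item
```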
  • Patent number: 11809318
    Abstract: Described herein are systems, methods, and non-transitory computer readable media for memory address encoding of multi-dimensional data in a manner that optimizes the storage and access of such data in linear data storage. The multi-dimensional data may be spatial-temporal data that includes two or more spatial dimensions and a time dimension. An improved memory architecture is provided that includes an address encoder that takes a multi-dimensional coordinate as input and produces a linear physical memory address. The address encoder encodes the multi-dimensional data such that two multi-dimensional coordinates close to one another in multi-dimensional space are likely to be stored in close proximity to one another in linear data storage. In this manner, the number of main memory accesses, and thus, overall memory access latency is reduced, particularly in connection with real-world applications in which the respective probabilities of moving along any given dimension are very close.
    Type: Grant
    Filed: December 28, 2021
    Date of Patent: November 7, 2023
    Assignee: Pony AI Inc.
    Inventors: Yubo Zhang, Pingfan Meng
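A classic instance of the locality-preserving encoding this abstract describes is Morton (Z-order) encoding, which interleaves coordinate bits so nearby multi-dimensional points tend to land at nearby linear addresses. The abstract does not name its encoder, so this 2-D bit-interleaving sketch is an assumed illustration of the general idea:

```python
def morton2(x, y, bits=16):
    """Interleave the bits of (x, y) into one linear address (Z-order).
    Nearby 2-D coordinates tend to map to nearby linear addresses."""
    addr = 0
    for i in range(bits):
        addr |= ((x >> i) & 1) << (2 * i)       # even bit positions: x
        addr |= ((y >> i) & 1) << (2 * i + 1)   # odd bit positions:  y
    return addr
```

Stepping one unit along either dimension changes only low-order bits of the address, which is what keeps spatially adjacent accesses within the same cache lines and pages.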
  • Patent number: 11809221
    Abstract: An artificial intelligence chip and a data operation method are provided. The artificial intelligence chip receives a command carrying first data and address information and includes a chip memory, a computing processor, a base address register, and an extended address processor. The base address register is configured to access an extended address space in the chip memory. The extended address processor receives the command. The extended address processor determines an operation mode of the first data according to the address information. When the address information points to a first section of the extended address space, the extended address processor performs a first operation on the first data. When the address information points to a section other than the first section of the extended address space, the extended address processor notifies the computing processor of the operation mode and the computing processor performs a second operation on the first data.
    Type: Grant
    Filed: September 8, 2021
    Date of Patent: November 7, 2023
    Assignee: Shanghai Biren Technology Co., Ltd
    Inventors: Zhou Hong, Qin Zheng, ChengPing Luo, GuoFang Jiao, Song Zhao, XiangLiang Yu
  • Patent number: 11809324
    Abstract: Systems, apparatuses, and methods related to tokens to indicate completion of data storage to memory are described. An example method may include storing a number of data values by a first page in a first row of an array of memory cells responsive to receipt of a first command from a host, where the first command is associated with an open transaction token, and receiving a second command from the host to store a number of data values by a second page in the first row. The method may further include sending a safety token to the host to indicate completion of storing the number of data values by the second page in the first row.
    Type: Grant
    Filed: October 8, 2021
    Date of Patent: November 7, 2023
    Assignee: Micron Technology, Inc.
    Inventor: Zoltan Szubbocsev
  • Patent number: 11803485
    Abstract: Disclosed embodiments provide features for the architecture of microservices. A global context cache is created for a microservice environment that is accessible from multiple deployed microservices. Data from various customers/applications can be aggregated to determine when a read or write access would fail due to permissions and/or other conditions, such as the existence or non-existence of certain data. In such situations, an error can be returned from the global context cache in much less time than if the access request propagated throughout the computer network to the persistent storage. In this way, disclosed embodiments reduce downtime and save money for organizations, and increase the efficiency of utilization of computer resources.
    Type: Grant
    Filed: March 11, 2021
    Date of Patent: October 31, 2023
    Assignee: International Business Machines Corporation
    Inventors: Hariharan Krishna, Shajeer K Mohammed, Sudheesh S. Kairali
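The fast-fail behavior described here — answer a doomed request from the shared cache instead of letting it travel to persistent storage — can be sketched as follows. The class, method, and message strings are assumptions for illustration:

```python
class ContextCache:
    """Cache of known access outcomes shared across microservices:
    requests known to fail (missing data or no permission) are
    rejected immediately, without touching persistent storage."""
    def __init__(self):
        self.permissions = {}    # (user, key) -> bool
        self.known_missing = set()

    def check(self, user, key):
        if key in self.known_missing:
            return "error: not found"          # fast fail from the cache
        if not self.permissions.get((user, key), False):
            return "error: permission denied"  # fast fail from the cache
        return "forward to storage"            # only viable requests go on
```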
  • Patent number: 11797454
    Abstract: The present technique provides an apparatus and method for caching data. The apparatus has a cache storage to cache data associated with memory addresses, a first interface to receive access requests, where each access request is a request to access data at a memory address indicated by that access request, and a second interface to couple to a memory controller used to control access to memory. Further, cache control circuitry is used to control allocation of data into the cache storage in accordance with a power consumption based allocation policy that seeks to select which data is cached in the cache storage with the aim of conserving power associated with accesses to the memory via the second interface.
    Type: Grant
    Filed: November 22, 2021
    Date of Patent: October 24, 2023
    Assignee: Arm Limited
    Inventors: Graeme Leslie Ingram, Michael Andrew Campbell
  • Patent number: 11789646
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed that increase data reuse for multiply and accumulate (MAC) operations. An example apparatus includes a MAC circuit to process a first context of a set of a first type of contexts stored in a first buffer and a first context of a set of a second type of contexts stored in a second buffer. The example apparatus also includes control logic circuitry to, in response to determining that there is an additional context of the second type to be processed in the set of the second type of contexts, maintain the first context of the first type in the first buffer. The control logic circuitry is also to, in response to determining that there is an additional context of the first type to be processed in the set of the first type of contexts, maintain the first context of the second type in the second buffer and iterate a pointer of the second buffer from a first position to a next position in the second buffer.
    Type: Grant
    Filed: September 24, 2021
    Date of Patent: October 17, 2023
    Assignee: INTEL CORPORATION
    Inventors: Niall Hanrahan, Martin Power, Kevin Brady, Martin-Thomas Grymel, David Bernard, Gary Baugh, Cormac Brick
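The reuse pattern in this abstract — hold one context of the first type in its buffer while a pointer iterates through every context of the second type — amounts to a loop nest over the two buffers. This sketch is an assumed software analogue of the described control logic, not the circuit itself:

```python
def mac_all_pairs(a_contexts, b_contexts):
    """Multiply-accumulate every pair: each first-type context stays
    resident in its buffer while a pointer walks the second buffer,
    maximizing reuse of the first-type context."""
    acc = 0
    for a in a_contexts:                 # a is maintained for the whole inner loop
        ptr = 0                          # second buffer's pointer, first position
        while ptr < len(b_contexts):
            acc += a * b_contexts[ptr]   # multiply and accumulate
            ptr += 1                     # iterate the pointer to the next position
    return acc
```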
  • Patent number: 11782621
    Abstract: A data storage device 100 comprising: a non-volatile storage medium 108 configured to store user data 109; a data port 106 configured to transmit data and power between a host computer system 130 and the data storage device 100; a data access state indicator 140; and a controller 110 configured to: selectively set a data access state of the data storage device 100 to either: an unlocked state to enable access to the user data 109; or a locked state to disable access to the user data 109; and generate an indicator control signal to cause the data access state indicator 140 to indicate the data access state, wherein the data access state indicator 140 is configured to indicate the data access state irrespective of whether the data storage device 100 is powered through the data port 106.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: October 10, 2023
    Assignee: Western Digital Technologies, Inc.
    Inventor: Matthew Harris Klapman
  • Patent number: 11782649
    Abstract: A list of one or more archives available to be restored is provided to an authenticated user. The list of one or more archives available to be restored is based in part on a credential provided by the authenticated user. The credential provided by the user is linked to a subset of a plurality of snapshot archives associated with an enterprise. A selection of one of the one or more archives and an external target for the selected archive is received. A cloud instantiation of a secondary storage system is utilized to reconstitute a tree data structure based on serialized data included in the selected archive. A request to restore data associated with the selected archive to the external target is received. The requested data associated with the archive is provided to the external target.
    Type: Grant
    Filed: May 9, 2022
    Date of Patent: October 10, 2023
    Assignee: Cohesity, Inc.
    Inventors: Venkata Ranga Radhanikanth Guturi, Tushar Mahata, Praveen Kumar Yarlagadda
  • Patent number: 11775171
    Abstract: Embodiments of the invention provide systems and methods to implement an object memory fabric. Object memory modules may include object storage storing memory objects, memory object meta-data, and a memory module object directory. Each memory object and/or memory object portion may be created natively within the object memory module and may be managed at a memory layer. The memory module object directory may index all memory objects and/or portions within the object memory module. A hierarchy of object routers may communicatively couple the object memory modules. Each object router may maintain an object cache state for the memory objects and/or portions contained in object memory modules below the object router in the hierarchy. The hierarchy, based on the object cache state, may behave in aggregate as a single object directory communicatively coupled to all object memory modules and may process requests based on the object cache state.
    Type: Grant
    Filed: August 16, 2021
    Date of Patent: October 3, 2023
    Assignee: Ultrata, LLC
    Inventors: Steven J. Frank, Larry Reback
  • Patent number: 11762766
    Abstract: This disclosure provides for improvements in managing multi-drive, multi-die or multi-plane NAND flash memory. In one embodiment, the host directly assigns physical addresses and performs logical-to-physical address translation in a manner that reduces or eliminates the need for a memory controller to handle these functions, and initiates functions such as wear leveling in a manner that avoids competition with host data accesses. A memory controller optionally educates the host on array composition, capabilities and addressing restrictions. Host software can therefore interleave write and read requests across dies in a manner unencumbered by memory controller address translation. For multi-plane designs, the host writes related data in a manner consistent with multi-plane device addressing limitations. The host is therefore able to “plan ahead” in a manner supporting host issuance of true multi-plane read commands.
    Type: Grant
    Filed: September 24, 2022
    Date of Patent: September 19, 2023
    Assignee: Radian Memory Systems, Inc.
    Inventors: Andrey V. Kuzmin, James G. Wayda
  • Patent number: 11762561
    Abstract: An information processing device includes: a memory; and a processor coupled to the memory and configured to: receive an access request directed to an access target and set the access request in any one of a plurality of pending entries, each of which includes latency information; issue a command that corresponds to the access request; control issuance of the command on a basis of the latency information of the any one of the pending entries; set a value that indicates latency for the access request in the latency information of the any one of the pending entries; and subtract a predetermined value from the latency information of the any one of the pending entries for each unit of time.
    Type: Grant
    Filed: January 24, 2022
    Date of Patent: September 19, 2023
    Assignee: FUJITSU LIMITED
    Inventor: Yasuhiro Kitamura
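The countdown mechanism in this abstract — each pending entry carries a latency value, a fixed amount is subtracted every time unit, and command issuance is gated on that value — can be sketched as follows. The class and method names are illustrative assumptions:

```python
class PendingEntries:
    """Pending access requests, each with a latency counter that is
    drained by a fixed amount per unit of time; a command for an entry
    may only issue once its counter reaches zero."""
    def __init__(self):
        self.entries = []                    # list of [request, remaining_latency]

    def add(self, request, latency):
        self.entries.append([request, latency])

    def tick(self, step=1):
        for e in self.entries:
            e[1] = max(0, e[1] - step)       # subtract the predetermined value

    def issuable(self):
        return [req for req, lat in self.entries if lat == 0]
```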
  • Patent number: 11762772
    Abstract: A data processing apparatus including a memory circuit and a data accessing circuit is provided, in which the memory circuit includes multiple cache ways configured to store data. In response to a first logic state of an enabling signal, if a tag of an address of an access requirement is the same as a corresponding tag of the multiple cache ways, the data accessing circuit determines that a cache hit occurs. In response to a second logic state of the enabling signal, if the address is within one or more predetermined address intervals specified by the data accessing circuit, the data accessing circuit determines that the cache hit occurs, and if the address is outside the one or more predetermined address intervals, the data accessing circuit determines that a cache miss occurs.
    Type: Grant
    Filed: September 30, 2021
    Date of Patent: September 19, 2023
    Assignee: REALTEK SEMICONDUCTOR CORPORATION
    Inventors: Chao-Wei Huang, Chen-Hsing Wang
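The two hit-detection modes this abstract describes — normal tag comparison when the enabling signal is in its first state, and interval membership when it is in its second state — reduce to a small predicate. Names and the half-open interval convention are assumptions for illustration:

```python
def is_hit(enable, addr, tag_match, intervals):
    """Decide a cache hit under the two modes of the enabling signal.
    enable == 0: conventional mode, the tag comparison decides.
    enable == 1: interval mode, a hit means the address lies in one of
    the predetermined address intervals [lo, hi)."""
    if enable == 0:
        return tag_match
    return any(lo <= addr < hi for lo, hi in intervals)
```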
  • Patent number: 11756618
    Abstract: A log structure is created in persistent memory using hardware support in the memory controller or software supported by additional instructions. Writes to persistent memory locations are streamed to the log and written to their corresponding memory locations in the cache hierarchy. An added victim cache for persistent memory addresses catches cache evictions that would otherwise corrupt open transactions. On the completion of a group of atomic persistent memory operations, the log is closed, the persistent values in the cache can be copied to their source persistent memory locations, and the log cleaned.
    Type: Grant
    Filed: June 28, 2021
    Date of Patent: September 12, 2023
    Inventors: Ellis Robinson Giles, Peter Joseph Varman
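The logging discipline in this abstract — stream transactional writes to a log first, and copy values to their home locations only when the group of atomic operations completes — is essentially a redo log. This sketch is an assumed software model; the hardware victim cache and instruction support are out of scope:

```python
class PersistLog:
    """Redo-log sketch: writes inside a transaction stream to a log;
    only when the group commits are values copied to their source
    locations and the log cleaned."""
    def __init__(self):
        self.memory = {}                 # stands in for persistent memory
        self.log = []

    def write(self, addr, value):
        self.log.append((addr, value))   # stream the write to the log

    def commit(self):
        for addr, value in self.log:     # copy logged values home
            self.memory[addr] = value
        self.log.clear()                 # close and clean the log

    def abort(self):
        self.log.clear()                 # home locations were never touched
```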
  • Patent number: 11747982
    Abstract: Systems, devices, and methods related to on demand memory page size are described. A memory system may employ a protocol that supports on demand variable memory page sizes. A memory system may include one or more non-volatile memory devices that may each include a local memory controller configured to support variable memory page size operation. The memory system may include a system memory controller that interfaces between the non-volatile memory devices and a processor. The system memory controller may, for instance, use a protocol that facilitates on demand memory page size where a determination of a particular page size to use in an operation may be based on characteristics of memory commands and data involved in the memory command.
    Type: Grant
    Filed: September 29, 2021
    Date of Patent: September 5, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Duane R. Mills, Richard E. Fackenthal