Patents Examined by Jae U Yu
  • Patent number: 11416806
    Abstract: Various embodiments of a distributed numeric sequence generation system and method are described. In particular, some embodiments provide high-scale, high-availability, low-cost and low-maintenance numeric sequence generation. The distributed numeric sequence generation system comprises one or more hosts, wherein individual hosts implement a cache for caching a plurality of numeric sequences. The hosts receive a maximum gap size limit for a numeric sequence, in some embodiments, and determine a total cache size of the cache associated with the one or more hosts to store the values of the numeric sequence, such that if the values in the cache were lost then the maximum gap size limit would not be exceeded. The hosts limit the number of values of the numeric sequence in the cache associated with the one or more hosts to the determined total cache size for the values of the numeric sequence, in some embodiments.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: August 16, 2022
    Assignee: Amazon Technologies, Inc.
    Inventor: Deepak Aggarwal
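    Illustrative sketch: a minimal Python sketch of the gap-limit idea in this abstract. The durable `reserve_block` allocator and the sizes are hypothetical stand-ins, not the patented implementation.

    ```python
    class GapLimitedSequenceCache:
        """Hands out sequence values from a local cache whose size is capped
        so that losing the cached values never creates a gap above max_gap."""

        def __init__(self, reserve_block, max_gap):
            self._reserve_block = reserve_block   # durable allocator, e.g. a DB counter
            self._max_gap = max_gap
            self._cached = []

        def next_value(self):
            if not self._cached:
                # Reserve at most max_gap values; if this host crashes, only
                # these unreturned values are skipped in the sequence.
                self._cached = self._reserve_block(self._max_gap)
            return self._cached.pop(0)

    # Usage with a toy in-memory counter standing in for shared durable storage.
    _counter = {"next": 1}

    def reserve_block(n):
        start = _counter["next"]
        _counter["next"] += n                     # persisted before values are handed out
        return list(range(start, start + n))

    seq = GapLimitedSequenceCache(reserve_block, max_gap=100)
    print(seq.next_value(), seq.next_value())     # 1 2
    ```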
  • Patent number: 11409656
    Abstract: A semiconductor device includes a core which includes a first cache and a second cache; and a third cache configured to connect to the core, wherein the core is configured to: hold a read instruction that is issued from the first cache to the second cache, hold a write-back instruction that is issued from the first cache to the second cache, process the read instruction and the write-back instruction, determine whether a target address of the read instruction is held by the first cache by using a cache tag that indicates a state of the first cache, and when data of the target address is held by the first cache, abort the read instruction until it is determined that data of the target address is not held by the first cache.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: August 9, 2022
    Assignee: FUJITSU LIMITED
    Inventor: Hiroyuki Ishii
  • Patent number: 11409672
    Abstract: A memory module includes at least two memory devices. Each of the memory devices performs verify operations after attempted writes to its respective memory core. When a write is unsuccessful, the memory device stores information about the unsuccessful write in an internal write retry buffer. A write operation may have been unsuccessful for only one memory device and not for the other memory devices on the memory module. When the memory module is instructed, both memory devices on the memory module can retry their unsuccessful memory write operations concurrently, even though those operations were to different addresses.
    Type: Grant
    Filed: May 12, 2020
    Date of Patent: August 9, 2022
    Assignee: Rambus Inc.
    Inventors: Hongzhong Zheng, Brent Haukness
  • Patent number: 11403219
    Abstract: A method for providing targeted pre-caching of data is disclosed. The method includes receiving a user login that includes an account identifier; automatically identifying a subset of the data previously accessed by the user using an activity log and the account identifier; generating a copy of the subset of the data; associating the copy with the subset of the data by linking the subset of the data with the copy; and storing the copy in a temporary file store. The method further includes receiving a request from the user for the subset of the data and displaying the copy on a graphical user interface.
    Type: Grant
    Filed: January 8, 2021
    Date of Patent: August 2, 2022
    Assignee: JPMORGAN CHASE BANK, N.A.
    Inventors: David Alexander Russell, Ross Neilson, Alasdair Popple
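    Illustrative sketch: a minimal Python sketch of login-triggered pre-caching as described in this abstract; the in-memory `activity_log`, `data_store`, and `temp_store` dictionaries are hypothetical stand-ins for the account activity log, primary store, and temporary file store.

    ```python
    from collections import defaultdict

    activity_log = defaultdict(list)   # account_id -> keys of data previously accessed
    data_store = {}                    # key -> data
    temp_store = {}                    # (account_id, key) -> pre-cached copy

    def on_login(account_id):
        """Pre-cache copies of the data this account accessed previously."""
        for key in set(activity_log[account_id]):
            if key in data_store:
                temp_store[(account_id, key)] = data_store[key]   # copy linked to its source key

    def on_request(account_id, key):
        """Serve the pre-cached copy if present, falling back to the main store."""
        activity_log[account_id].append(key)
        return temp_store.get((account_id, key), data_store.get(key))

    data_store["report-q1"] = "quarterly figures"
    activity_log["alice"].append("report-q1")
    on_login("alice")
    print(on_request("alice", "report-q1"))
    ```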
  • Patent number: 11385825
    Abstract: A computer system includes a use state analysis program that acquires a use history of data in a first computer system and of a program that uses the data; and a data migration program that extracts data that can be migrated from the first computer system to a second computer system on the basis of the use history, writes the migratable data to a first storage system and a second storage system, and migrates the program to the second computer system on the basis of the use history of the data used by the program.
    Type: Grant
    Filed: March 16, 2021
    Date of Patent: July 12, 2022
    Assignee: HITACHI, LTD.
    Inventors: Yoshihito Akimoto, Yuki Koizumi, Hiroshi Suzuki, Chieko Akiba
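    Illustrative sketch: a rough Python sketch of use-history-driven migration; the idle-time threshold is an assumption chosen for illustration, and plain dictionaries stand in for the computer and storage systems.

    ```python
    import time

    def select_migratable(use_history, idle_threshold_s):
        """Pick data items whose last recorded use is older than the threshold."""
        now = time.time()
        return [item for item, last_used in use_history.items()
                if now - last_used > idle_threshold_s]

    def migrate(items, source_system, first_storage, second_storage):
        """Write each migratable item to both storage systems and remove it
        from the first computer system."""
        for item in items:
            value = source_system.pop(item)
            first_storage[item] = value
            second_storage[item] = value

    source = {"sensor-archive": b"old readings"}
    history = {"sensor-archive": time.time() - 90 * 86400}   # last used ~90 days ago
    first, second = {}, {}
    migrate(select_migratable(history, idle_threshold_s=30 * 86400), source, first, second)
    print(list(first), list(second))
    ```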
  • Patent number: 11386014
    Abstract: A method at a computing device for sharing data, the method including defining a dynamically linked data library (DLDL) to include executable code; loading the DLDL from a first process, the loading causing a memory allocation of shared executable code, private data and shared data in a physical memory location; mapping the memory allocation of shared executable code, private data and shared data to a virtual memory location for the first process; loading the DLDL from a second process, the loading causing the memory allocation of shared executable code and the shared data for the first process to be mapped to a virtual memory location for the second process; and allocating private data in physical memory and mapping it to a virtual memory location for the second process.
    Type: Grant
    Filed: October 6, 2020
    Date of Patent: July 12, 2022
    Assignee: BlackBerry Limited
    Inventor: Scott Lee Linke
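    Illustrative sketch: a toy Python simulation of the DLDL mapping described in this abstract, with dictionaries playing the role of physical memory and per-process virtual address maps; the segment names are hypothetical.

    ```python
    physical = {}                  # hypothetical physical memory: frame id -> contents
    _frames = iter(range(10**6))

    def _alloc(contents):
        frame = next(_frames)
        physical[frame] = contents
        return frame

    shared_segments = {}           # library name -> {"code": frame, "shared": frame}

    def load_dldl(name, process_vmap):
        """Map the library into a process: shared code and shared data are
        allocated once and re-mapped; private data is allocated per process."""
        if name not in shared_segments:
            shared_segments[name] = {"code": _alloc(f"{name} code"),
                                     "shared": _alloc({})}
        segs = shared_segments[name]
        process_vmap[f"{name}.code"] = segs["code"]       # same physical frames
        process_vmap[f"{name}.shared"] = segs["shared"]   # shared data
        process_vmap[f"{name}.private"] = _alloc({})      # fresh per-process data

    proc_a, proc_b = {}, {}
    load_dldl("telemetry", proc_a)
    load_dldl("telemetry", proc_b)
    assert proc_a["telemetry.code"] == proc_b["telemetry.code"]        # shared
    assert proc_a["telemetry.private"] != proc_b["telemetry.private"]  # private
    ```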
  • Patent number: 11386007
    Abstract: Aspects of the present disclosure include methods and systems for fast allocation of memory from fragmented memory. In one example, a processor receives a request for an address of a buffer stored in a magazine associated with the processor. Upon determining that the magazine associated with the processor is empty, a request is made to a depot layer for additional memory. Upon determining that the depot layer cannot satisfy the request for the additional memory, a call is executed to a slab layer for the additional memory. The slab layer identifies one or more partially-allocated slabs and generates a new magazine. A set of addresses corresponding to buffers may be stored in the new magazine. A reference to the new magazine may be transferred from the slab layer to the depot layer. The reference to the new magazine may then be transferred from the depot layer to the processor.
    Type: Grant
    Filed: April 5, 2021
    Date of Patent: July 12, 2022
    Assignee: Oracle International Corporation
    Inventor: Roch Bourbonnais
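    Illustrative sketch: a toy Python model of the magazine/depot/slab fallback path in this abstract; the magazine size and buffer sizes are arbitrary illustrative choices.

    ```python
    from collections import deque

    class Allocator:
        """Per-processor magazine of buffer addresses, a depot of full
        magazines, and a slab layer that builds new magazines from
        partially allocated slabs when the depot cannot help."""

        MAG_SIZE = 4

        def __init__(self):
            self.magazine = deque()        # per-processor cache of buffer addresses
            self.depot = deque()           # full magazines waiting to be handed out
            self._next_addr = 0x1000       # hypothetical slab cursor

        def _slab_refill(self):
            # Slab layer: carve a new magazine out of a partially allocated slab.
            mag = deque(self._next_addr + 64 * i for i in range(self.MAG_SIZE))
            self._next_addr += 64 * self.MAG_SIZE
            return mag

        def alloc(self):
            if not self.magazine:                      # magazine is empty
                if self.depot:                         # ask the depot layer first
                    self.magazine = self.depot.popleft()
                else:                                  # fall through to the slab layer
                    self.magazine = self._slab_refill()
            return self.magazine.popleft()

    a = Allocator()
    print(hex(a.alloc()), hex(a.alloc()))
    ```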
  • Patent number: 11379366
    Abstract: Memory devices might include an input/output (I/O) node, a termination device, an array of memory cells in communication with the I/O node through the termination device, and control circuitry, wherein the control circuitry is configured to compare an address received by the memory device to a plurality of instances of address information stored in the memory device. Each instance of address information of the plurality of instances of address information might correspond to a respective termination value stored in the memory device. In response to the memory device receiving an address matching an instance of address information stored in the memory device, the control circuitry might further be configured to activate the termination device using the respective termination value corresponding to the instance of address information matching the received address.
    Type: Grant
    Filed: November 16, 2020
    Date of Patent: July 5, 2022
    Assignee: Micron Technology, Inc.
    Inventor: Terry Grunzke
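    Illustrative sketch: the address-matched termination behavior reduced to a Python lookup; the table contents and resistance values are invented for illustration only.

    ```python
    # Stored instances of address information, each with its own termination value.
    termination_table = {
        0x0000_0000: 40,   # ohms (illustrative)
        0x0001_0000: 60,
    }

    def on_address(received_address, activate_termination):
        """Activate the termination device with the stored value when the
        received address matches a stored instance of address information."""
        value = termination_table.get(received_address)
        if value is not None:
            activate_termination(value)

    on_address(0x0001_0000, lambda ohms: print(f"termination set to {ohms} ohms"))
    ```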
  • Patent number: 11379375
    Abstract: An information handling system for managing a storage system includes storage for storing profile-based cache policy performance prediction models. The information handling system also includes a storage manager that obtains an input-output profile for a workload hosted by the information handling system during a first period of time; obtains performance metrics for cache policies for the storage system using the input-output profile and the profile-based cache policy performance prediction models; obtains a ranking of the cache policies based on the performance metrics; selects a cache policy of the cache policies based on the ranking; and updates operation of a cache of the storage system based on the selected cache policy for a second period of time.
    Type: Grant
    Filed: April 20, 2021
    Date of Patent: July 5, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Vinicius Michel Gottin, Hugo de Oliveira Barbalho, Rômulo Teixeira de Abreu Pinho, Roberto Nery Stelling Neto, Alex Laier Bordignon, Daniel Sadoc Menasché
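    Illustrative sketch: the predict-rank-select loop from this abstract in a few lines of Python; the candidate policies and scoring models are made-up placeholders for the profile-based prediction models.

    ```python
    def choose_cache_policy(io_profile, prediction_models):
        """Score every candidate policy with its profile-based model,
        rank by predicted performance, and return the best policy."""
        metrics = {policy: model(io_profile) for policy, model in prediction_models.items()}
        ranking = sorted(metrics, key=metrics.get, reverse=True)
        return ranking[0], ranking, metrics

    # Hypothetical models: a higher score means better predicted performance.
    models = {
        "LRU":  lambda p: 0.9 * p["reuse"] - 0.1 * p["scan"],
        "LFU":  lambda p: 0.7 * p["reuse"] + 0.2 * p["skew"],
        "FIFO": lambda p: 0.3,
    }
    best, ranking, _ = choose_cache_policy({"reuse": 0.8, "scan": 0.5, "skew": 0.6}, models)
    print(best, ranking)
    ```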
  • Patent number: 11372762
    Abstract: Various embodiments described herein provide for using a prefetch buffer with a cache of a memory sub-system to store prefetched data (e.g., data prefetched from the cache), which can increase read access or sequential read access of the memory sub-system over that of traditional memory sub-systems.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: June 28, 2022
    Assignee: Micron Technology, Inc.
    Inventor: Ashay Narsale
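    Illustrative sketch: a small Python model of a prefetch buffer sitting in front of a cache; the prefetch depth and the sequential-address heuristic are assumptions, not details from the patent.

    ```python
    class PrefetchingCache:
        """On each read, the next few cached lines are copied into a prefetch
        buffer so a sequential reader can be served without re-querying the cache."""

        def __init__(self, backing, depth=2):
            self.backing = backing        # address -> data (backing media)
            self.cache = {}
            self.prefetch = {}            # prefetched address -> data
            self.depth = depth

        def read(self, addr):
            if addr in self.prefetch:                         # sequential hit
                return self.prefetch.pop(addr)
            if addr not in self.cache:                        # miss: go to backing media
                self.cache[addr] = self.backing[addr]
            for nxt in range(addr + 1, addr + 1 + self.depth):
                if nxt in self.cache:                         # prefetch from the cache
                    self.prefetch[nxt] = self.cache[nxt]
            return self.cache[addr]

    mem = {a: f"block {a}" for a in range(8)}
    c = PrefetchingCache(mem)
    c.cache.update(mem)                   # pretend the cache is already warm
    print(c.read(0), c.read(1))           # the second read is served from the prefetch buffer
    ```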
  • Patent number: 11368448
    Abstract: Systems and methods for network security are provided. Various embodiments of the present technology provide systems and methods for an identity security gateway agent that provides for privileged access. Embodiments include a system and method that uses a single sign-on (SSO) (or similar) mechanism to facilitate a user accessing web-based service providers, but separates the assertion and entire SSO process from the user credential.
    Type: Grant
    Filed: February 22, 2021
    Date of Patent: June 21, 2022
    Assignee: SAILPOINT TECHNOLOGIES, INC.
    Inventors: Ryan Privette, Kris Keller
  • Patent number: 11360903
    Abstract: A computer-implemented method, according to one approach, includes: determining a current read heat value of each logical page that corresponds to write requests that have accumulated in a destage buffer. Each of the write requests is assigned to a respective write queue based on the current read heat value of each logical page that corresponds to the write requests. Moreover, each of the write queues corresponds to a different page stripe that includes physical pages, the physical pages included in each of the respective page stripes being of a same type. Other systems, methods, and computer program products are described in additional approaches.
    Type: Grant
    Filed: February 3, 2021
    Date of Patent: June 14, 2022
    Assignee: International Business Machines Corporation
    Inventors: Roman Alexander Pletka, Timothy Fisher, Aaron Daniel Fry, Nikolaos Papandreou, Nikolas Ioannou, Sasa Tomic, Radu Ioan Stoica, Charalampos Pozidis, Andrew D. Walls
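    Illustrative sketch: routing destaged writes to per-heat write queues, in Python; the heat thresholds and bucket names are arbitrary examples of page stripes built from a single type of physical page.

    ```python
    from collections import defaultdict

    # Hypothetical read-heat buckets; each bucket maps to a write queue whose
    # page stripe is built from one type of physical page.
    HEAT_BUCKETS = [(0, "cold"), (10, "warm"), (100, "hot")]

    write_queues = defaultdict(list)
    read_heat = defaultdict(int)          # logical page -> accumulated read count

    def enqueue_write(logical_page, payload):
        """Assign a destaged write to the queue matching the page's read heat."""
        heat = read_heat[logical_page]
        bucket = "cold"
        for threshold, name in HEAT_BUCKETS:
            if heat >= threshold:
                bucket = name
        write_queues[bucket].append((logical_page, payload))

    read_heat["lp7"] = 42
    enqueue_write("lp7", b"data")
    print(write_queues["warm"])           # [('lp7', b'data')]
    ```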
  • Patent number: 11347649
    Abstract: A caching system including a first sub-cache; a second sub-cache, coupled in parallel with the first sub-cache, for storing cache data evicted from the first sub-cache and write-memory commands that are not cached in the first sub-cache; and a cache controller configured to receive two or more cache commands, determine that a conflict exists between the received two or more cache commands, determine a conflict resolution between the received two or more cache commands, and send the two or more cache commands to the first sub-cache and the second sub-cache.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: May 31, 2022
    Assignee: Texas Instruments Incorporated
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Hippleheuser
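    Illustrative sketch: a toy Python controller for two parallel sub-caches that serializes commands touching the same address; the conflict rule and the placement policy for uncached writes are simplifying assumptions.

    ```python
    class DualSubCacheController:
        """Main sub-cache plus a parallel sub-cache holding evictions and
        uncached writes; conflicting commands are issued in two passes."""

        def __init__(self):
            self.main = {}       # first sub-cache
            self.victim = {}     # second sub-cache

        def issue(self, commands):
            seen, first_pass, deferred = set(), [], []
            for cmd in commands:
                addr = cmd[1]
                # Conflict: two commands touching the same address in one batch.
                (deferred if addr in seen else first_pass).append(cmd)
                seen.add(addr)
            for batch in (first_pass, deferred):
                for cmd in batch:
                    self._execute(*cmd)

        def _execute(self, op, addr, payload=None):
            if op == "write":
                # Writes not cached in the first sub-cache land in the second one.
                target = self.main if addr in self.main else self.victim
                target[addr] = payload
            elif op == "read":
                return self.main.get(addr, self.victim.get(addr))

    ctrl = DualSubCacheController()
    ctrl.issue([("write", 0x10, "A"), ("write", 0x10, "B")])
    print(ctrl.victim[0x10])              # 'B' (the conflicting write ran second)
    ```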
  • Patent number: 11341054
    Abstract: A method for data processing is implemented by computer means and comprises: for a plurality of objects of the data processing, conducting an analysis of the computer code (COD) of the data processing that defines how said objects are used in the data processing; and, on the basis of that analysis, allocating each object to one of a plurality of memory areas for the construction and then the destruction of each object in the corresponding memory area during the data processing, in such a way that, during the data processing, each memory area exhibits stack operation.
    Type: Grant
    Filed: September 3, 2018
    Date of Patent: May 24, 2022
    Assignee: VSORA
    Inventors: Khaled Maalej, Trung Dung Nguyen, Julien Schmitt, Pierre-Emmanuel Bernard
  • Patent number: 11327893
    Abstract: According to one embodiment, an electronic device includes an interface configured to carry out communication according to a predetermined protocol, and a control section configured to add a response frame to a response to a command to be received through the interface, and transmit the response to which the response frame is added through the interface. The control section includes a setting section configured to set an arbitrarily settable field included in the response frame to a plurality of sections.
    Type: Grant
    Filed: March 10, 2021
    Date of Patent: May 10, 2022
    Assignees: KABUSHIKI KAISHA TOSHIBA, TOSHIBA ELECTRONIC DEVICES & STORAGE CORPORATION
    Inventor: Masashi Shimoda
  • Patent number: 11314651
    Abstract: Provided is a method for operating a measurement system including an evaluation module and several measuring elements. The evaluation module and the measuring elements are connected via a communication line. The method includes detecting measurement data via the several measuring elements. At least two of the measuring elements detect the measurement data at least partially at the same time. The method further includes: buffering the detected measurement data in the respective measuring element; and reading out the measurement data buffered in the measuring elements with the evaluation module via the communication line.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: April 26, 2022
    Assignee: TURCK HOLDING GMBH
    Inventors: Rene Steiner, Christoph Schmermund, Peter Strunkmann
  • Patent number: 11314645
    Abstract: In a cache stash relay, first data, from a producer device, is stashed in a shared cache of a data processing system. The first data is associated with first data addresses in a shared memory of the data processing system. An address pattern of the first data addresses is identified. When a request for second data, associated with a second data address, is received from a processing unit of the data processing system, any data associated with data addresses in the identified address pattern are relayed from the shared cache to a local cache of the processing unit if the second data address is in the identified address pattern. The relaying may include pushing the data from the shared cache to the local cache or a pre-fetcher of the processing unit pulling the data from the shared cache to the local cache in response to a message.
    Type: Grant
    Filed: December 16, 2020
    Date of Patent: April 26, 2022
    Assignee: Arm Limited
    Inventors: Curtis Glenn Dunham, Jonathan Curtis Beard
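    Illustrative sketch: the stash-and-relay flow in Python, with a simple stride pattern standing in for the identified address pattern; the data structures and pattern handling are hypothetical simplifications.

    ```python
    class StashRelay:
        """Producer data stashed in the shared cache is tagged with a stride
        pattern; a demand read that falls inside the pattern relays the
        pattern's data into the requesting core's local cache."""

        def __init__(self):
            self.shared_cache = {}
            self.patterns = []            # list of (start, stride, count)

        def stash(self, start, stride, values):
            for i, value in enumerate(values):
                self.shared_cache[start + i * stride] = value
            self.patterns.append((start, stride, len(values)))

        def demand_read(self, addr, local_cache):
            for start, stride, count in self.patterns:
                pattern_addrs = [start + i * stride for i in range(count)]
                if addr in pattern_addrs:
                    for a in pattern_addrs:           # relay the whole pattern
                        local_cache[a] = self.shared_cache[a]
            return local_cache.get(addr, self.shared_cache.get(addr))

    relay, l1 = StashRelay(), {}
    relay.stash(0x100, 0x40, ["p0", "p1", "p2"])
    relay.demand_read(0x140, l1)
    print(sorted(hex(a) for a in l1))     # the whole stashed pattern is now local
    ```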
  • Patent number: 11314660
    Abstract: A system comprises a processor including a CPU core, first and second memory caches, and a memory controller subsystem. The memory controller subsystem speculatively determines a hit or miss condition of a virtual address in the first memory cache and speculatively translates the virtual address to a physical address. The memory controller subsystem configures a status, associated with the hit or miss condition and the physical address, to a valid state. Responsive to receipt of a first indication from the CPU core that no program instructions associated with the virtual address are needed, the memory controller subsystem reconfigures the status to an invalid state and, responsive to receipt of a second indication from the CPU core that a program instruction associated with the virtual address is needed, the memory controller subsystem reconfigures the status back to a valid state.
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: April 26, 2022
    Assignee: Texas Instruments Incorporated
    Inventors: Bipin Prasad Heremagalur Ramaprasad, David Matthew Thompson, Abhijeet Ashok Chachad, Hung Ong
  • Patent number: 11314643
    Abstract: A request to perform a write operation to write data at a memory sub-system is received. Responsive to the request to perform the write operation, the data is stored at a cache portion of cache memory of the memory sub-system. A duplicate copy of the data is stored at a write buffer portion of the cache memory. An entry is recorded in a write buffer record that maps a location of the duplicate copy of the data stored at the write buffer portion to a location of the data stored at the cache portion of the cache memory. A memory operation is performed at the memory sub-system based at least in part on the write buffer record.
    Type: Grant
    Filed: July 21, 2020
    Date of Patent: April 26, 2022
    Assignee: Micron Technology, Inc.
    Inventor: Robert M. Walker
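    Illustrative sketch: a minimal Python model of keeping a duplicate copy in a write buffer portion and a record that maps it back to the cache portion; the slot naming and the destage step are assumptions for illustration.

    ```python
    cache = {}                 # cache portion: cache_slot -> data
    write_buffer = {}          # write buffer portion: wb_slot -> duplicate copy
    write_buffer_record = []   # entries mapping wb_slot -> cache_slot

    def handle_write(cache_slot, wb_slot, data):
        """Store the data in the cache portion, keep a duplicate in the write
        buffer portion, and record the mapping between the two locations."""
        cache[cache_slot] = data
        write_buffer[wb_slot] = data
        write_buffer_record.append({"wb_slot": wb_slot, "cache_slot": cache_slot})

    def destage(media):
        """A later memory operation driven by the record, e.g. writing the
        duplicates out to backing media in the order they arrived."""
        for entry in write_buffer_record:
            media[entry["cache_slot"]] = write_buffer[entry["wb_slot"]]

    handle_write(cache_slot=3, wb_slot=0, data=b"\x01\x02")
    backing = {}
    destage(backing)
    print(backing)
    ```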
  • Patent number: 11307985
    Abstract: Various embodiments are generally directed to virtualized systems. A first guest memory page may be identified based at least in part on a number of accesses to a page table entry for the first guest memory page in a page table by an application executing in a virtual machine (VM) on the processor, the first guest memory page corresponding to a first byte-addressable memory. The execution of the VM and the application on the processor may be paused. The first guest memory page may be migrated to a target memory page in a second byte-addressable memory, the target memory page comprising one of a target host memory page and a target guest memory page, the second byte-addressable memory having an access speed faster than an access speed of the first byte-addressable memory.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: April 19, 2022
    Assignee: INTEL CORPORATION
    Inventors: Yao Zu Dong, Kun Tian, Fengguang Wu, Jingqi Liu
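    Illustrative sketch: the access-count-driven page migration reduced to two Python helpers; the threshold, the page naming, and the dictionary-based "memories" are hypothetical, and the VM pause is only noted in a comment.

    ```python
    def pick_hot_guest_pages(access_counts, threshold):
        """Guest pages whose page-table-entry access count reaches the threshold
        become candidates for migration to the faster byte-addressable memory."""
        return [page for page, hits in access_counts.items() if hits >= threshold]

    def migrate_pages(pages, slow_mem, fast_mem, guest_page_table):
        """With the VM and application paused (not modeled here), copy each hot
        page into the faster memory and repoint the guest page table at it."""
        for page in pages:
            fast_mem[page] = slow_mem.pop(page)
            guest_page_table[page] = ("fast", page)

    counts = {"gpa:0x2000": 15, "gpa:0x3000": 1}
    slow = {"gpa:0x2000": b"page data", "gpa:0x3000": b"page data"}
    fast, gpt = {}, {}
    migrate_pages(pick_hot_guest_pages(counts, threshold=10), slow, fast, gpt)
    print(list(fast))                     # ['gpa:0x2000']
    ```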