Generating Prefetch, Look-ahead, Jump, Or Predictive Address Patents (Class 711/213)
  • Patent number: 11947464
    Abstract: A data management method causes a computer to execute processing including: creating, when a predetermined data processing program performs data processing, based on an access frequency to a data store, high-frequency state item list information obtained by listing high-frequency state items whose access frequency is high; determining, when state information that includes a value of a high-frequency state item is written to the data store, whether or not the state information corresponds to a high-frequency state item with reference to the high-frequency state item list information; and grouping and writing pieces of the state information of a plurality of the high-frequency state items.
    Type: Grant
    Filed: October 5, 2022
    Date of Patent: April 2, 2024
    Assignee: FUJITSU LIMITED
    Inventors: Julius Michaelis, Yasuhiko Kanemasa
  • Patent number: 11934673
    Abstract: A data storage device includes at least one data storage medium. The data storage device also includes a workload rating associated with data access operations carried out on the at least one data storage medium. The data storage device further includes a controller configured to enable performance of the data access operations, and change a rate of consumption of the workload rating by internal device management operations carried out in the data storage device in response to a change in a workload consumed by host commands serviced by the data storage device.
    Type: Grant
    Filed: August 11, 2022
    Date of Patent: March 19, 2024
    Assignee: Seagate Technology LLC
    Inventors: Abhay T. Kataria, Praveen Viraraghavan, Mark A. Gaertner
  • Patent number: 11888835
    Abstract: An illustrative method includes a storage management system of a container system performing, for a worker node added to a cluster of the container system based on a first authentication of the worker node, a second authentication for the worker node, and determining, based on the second authentication, whether the worker node is authorized to perform one or more operations on a storage system associated with the cluster.
    Type: Grant
    Filed: June 1, 2021
    Date of Patent: January 30, 2024
    Assignee: Pure Storage, Inc.
    Inventors: Luis Pablo Pabón, Taher Vohra, Naveen Neelakantam
  • Patent number: 11874773
    Abstract: Systems, methods, and apparatuses relating to a dual spatial pattern prefetcher are described.
    Type: Grant
    Filed: December 28, 2019
    Date of Patent: January 16, 2024
    Assignee: Intel Corporation
    Inventors: Rahul Bera, Anant Vithal Nori, Sreenivas Subramoney
  • Patent number: 11822984
    Abstract: Implementations described herein relate to run-time management of a serverless function in a serverless computing environment. In some implementations, a method includes receiving, at a processor, based on historical run-time invocation data for the serverless function in the serverless computing environment, a first number of expected invocations of the serverless function for a first time period, determining, by the processor, based on the first number of expected invocations of the serverless function for the first time period, a second number of warm-up invocation calls to be made for the first time period, and periodically invoking the second number of instances of an extended version of the serverless function during the first time period, wherein the extended version of the serverless function is configured to load and initialize the serverless function and terminate without executing the serverless function.
    Type: Grant
    Filed: March 8, 2023
    Date of Patent: November 21, 2023
    Assignee: Sedai Inc.
    Inventors: Hari Chandrasekhar, Aby Jacob, Mathew Koshy Karunattu, Nikhil Gopinath Kurup, Suresh Mathew, S Meenakshi, Sayanth S, Akash Vijayan
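A minimal Python sketch of the warm-up scheduling idea in patent 11822984, assuming a hypothetical `invoke_extended` callable and a simple concurrency heuristic for sizing the warm pool; the constants and the heuristic are illustrative, not taken from the patent.
```python
import math
import time

def plan_warmups(expected_invocations: int, avg_duration_s: float, period_s: float) -> int:
    """Turn the expected invocation count for a period into a number of
    warm instances that should cover the expected concurrency."""
    expected_concurrency = expected_invocations * avg_duration_s / period_s
    return max(1, math.ceil(expected_concurrency))

def warmup_cycle(invoke_extended, warm_count: int, interval_s: float, rounds: int) -> None:
    """Periodically invoke `warm_count` instances of the extended function,
    which loads and initializes the real function and then terminates."""
    for _ in range(rounds):
        for _ in range(warm_count):
            invoke_extended()      # load + init only; does not execute the function body
        time.sleep(interval_s)

if __name__ == "__main__":
    n = plan_warmups(expected_invocations=1200, avg_duration_s=0.5, period_s=60.0)
    warmup_cycle(lambda: print("warm-up call"), warm_count=n, interval_s=0.0, rounds=1)
```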
  • Patent number: 11797395
    Abstract: A data management and storage (DMS) cluster of peer DMS nodes manages migration of an application between a primary compute infrastructure and a secondary compute infrastructure. The secondary compute infrastructure may be a failover environment for the primary compute infrastructure. Primary snapshots of virtual machines of the application in the primary compute infrastructure are generated, and provided to the secondary compute infrastructure. During a failover, the primary snapshots are deployed in the secondary compute infrastructure as virtual machines. Secondary snapshots of the virtual machines are generated, where the secondary snapshots are incremental snapshots of the primary snapshots. In failback, the secondary snapshots are provided to the primary compute infrastructure, where they are combined with the primary snapshots to construct a current state of the application, and the application is deployed in the current state by deploying virtual machines on the primary compute infrastructure.
    Type: Grant
    Filed: January 13, 2023
    Date of Patent: October 24, 2023
    Assignee: Rubrik, Inc.
    Inventors: Zhicong Wang, Benjamin Meadowcroft, Biswaroop Palit, Atanu Chakraborty, Hardik Vohra, Abhay Mitra, Saurabh Goyal, Sanjari Srivastava, Swapnil Agarwal, Rahil Shah, Mudit Malpani, Janmejay Singh, Ajay Arvind Bhave, Prateek Pandey
  • Patent number: 11782714
    Abstract: A method comprises receiving a current instruction for metadata processing performed in a metadata processing domain that is isolated from a code execution domain including the current instruction. The method further comprises determining, by the metadata processing domain in connection with metadata for the current instruction, whether to allow execution of the current instruction in accordance with a set of one or more policies. The one or more policies may include a set of rules that enforces execution of a complete sequence of instructions in a specified order from a first instruction of the complete sequence to a last instruction of the complete sequence. The metadata processing may be implemented by a metadata processing hierarchy comprising a control module, a masking module, a hash module, a rule cache lookup module, and/or an output tag module.
    Type: Grant
    Filed: July 15, 2020
    Date of Patent: October 10, 2023
    Assignee: THE CHARLES STARK DRAPER LABORATORY, INC.
    Inventor: André DeHon
  • Patent number: 11740992
    Abstract: A technique for generating component usage statistics involves associating components with blocks of a stream-enabled application. When the streaming application is executed, block requests may be logged by Block ID in a log. The frequency of component use may be estimated by analyzing the block request log with the block associations.
    Type: Grant
    Filed: August 17, 2021
    Date of Patent: August 29, 2023
    Assignee: Numecent Holdings, Inc.
    Inventors: Jeffrey de Vries, Arthur S. Hitomi
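A rough sketch of the usage-estimation step in patent 11740992: block requests logged by Block ID are joined against an assumed block-to-component association table to estimate component use frequency. The table contents and names here are invented for illustration.
```python
from collections import Counter

# block_id -> set of component names associated with that block (assumed table)
BLOCK_TO_COMPONENTS = {
    1: {"core.dll"},
    2: {"core.dll", "ui.dll"},
    3: {"codec.dll"},
}

def estimate_component_usage(block_request_log):
    """Estimate how often each component was used by attributing every
    logged block request to the components associated with that block."""
    usage = Counter()
    for block_id in block_request_log:
        for component in BLOCK_TO_COMPONENTS.get(block_id, ()):
            usage[component] += 1
    return usage

print(estimate_component_usage([1, 2, 2, 3, 1]))
# Counter({'core.dll': 4, 'ui.dll': 2, 'codec.dll': 1})
```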
  • Patent number: 11663006
    Abstract: Methods and apparatuses relating to switching of a shadow stack pointer are described. In one embodiment, a hardware processor includes a hardware decode unit to decode an instruction, and a hardware execution unit to execute the instruction to: pop a token for a thread from a shadow stack, wherein the token includes a shadow stack pointer for the thread with at least one least significant bit (LSB) of the shadow stack pointer overwritten with a bit value of an operating mode of the hardware processor for the thread, remove the bit value in the at least one LSB from the token to generate the shadow stack pointer, and set a current shadow stack pointer to the shadow stack pointer from the token when the operating mode from the token matches a current operating mode of the hardware processor.
    Type: Grant
    Filed: June 7, 2021
    Date of Patent: May 30, 2023
    Assignee: Intel Corporation
    Inventors: Vedvyas Shanbhogue, Jason W. Brandt, Ravi L. Sahita, Barry E. Huntley, Baiju V. Patel, Deepak K. Gupta
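The token handling in patent 11663006 is mostly bit manipulation, so a small Python sketch can show the idea: the shadow stack pointer's least significant bit is overwritten with the operating-mode bit, and on pop the mode is checked before the pointer is restored. This is a software model of behavior the patent implements in processor hardware.
```python
MODE_MASK = 0x1  # one least-significant bit carries the operating mode

def make_token(shadow_stack_ptr: int, mode_bit: int) -> int:
    """Overwrite the SSP's LSB with the current operating-mode bit."""
    return (shadow_stack_ptr & ~MODE_MASK) | (mode_bit & MODE_MASK)

def restore_ssp(token: int, current_mode_bit: int) -> int:
    """Recover the SSP from a popped token; only switch if the mode recorded
    in the token matches the processor's current operating mode."""
    token_mode = token & MODE_MASK
    ssp = token & ~MODE_MASK          # strip the mode bit to regenerate the pointer
    if token_mode != (current_mode_bit & MODE_MASK):
        raise ValueError("operating mode mismatch: do not switch shadow stacks")
    return ssp

token = make_token(0x7FFF_FFF0, mode_bit=1)
print(hex(restore_ssp(token, current_mode_bit=1)))   # 0x7ffffff0
```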
  • Patent number: 11645207
    Abstract: A system and method for efficiently processing memory requests are described. A processing unit includes at least a processor core, a cache, and a non-cache storage buffer capable of storing data prevented from being stored in the cache. While processing a memory request targeting the non-cache storage buffer, the processor core inspects a flag stored in a tag of the memory request. The processor core prevents data prefetching into one or more of the non-cache storage buffer and the cache based on determining the flag specifies preventing data prefetching into one or more of the non-cache storage buffer and the cache using the target address of the memory request during processing of this instance of the memory request. While processing a prefetch hint instruction, the processor core determines from the tag whether to prevent prefetching.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: May 9, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Masab Ahmad, Derrick Allen Aguren
  • Patent number: 11635960
    Abstract: A method includes receiving, for metadata processing, a current instruction with associated metadata tags. The metadata processing is performed in a metadata processing domain isolated from a code execution domain including the current instruction. Each respective associated metadata tag represents a respective policy of the composite policy. For each respective metadata tag, the method includes determining, in the metadata processing domain and in accordance with the metadata tag and the current instruction, whether a rule exists for the current instruction in a rules cache. The rules cache may include rules on metadata used by the metadata processing to define allowed instructions. The determination of whether a rule exists results in a respective output, which may include generating a new rule and inserting the new rule in the rules cache. Control Status Registers, and associated tags, may be used to accomplish the metadata processing.
    Type: Grant
    Filed: June 18, 2020
    Date of Patent: April 25, 2023
    Assignee: THE CHARLES STARK DRAPER LABORATORY, INC.
    Inventor: André DeHon
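A simplified software model of the rules-cache behavior described in patent 11635960: a rule is looked up by the instruction and its metadata tags, and on a miss a new rule is generated and installed. The `evaluate_policies` stand-in and the tag names are assumptions; the real mechanism runs in an isolated metadata processing domain.
```python
def evaluate_policies(opcode, tags):
    """Stand-in for the isolated metadata-domain policy evaluation."""
    return "allow" if "untrusted" not in tags else "trap"

class RuleCache:
    def __init__(self):
        self._rules = {}

    def check(self, opcode, tags):
        key = (opcode, frozenset(tags))
        if key in self._rules:            # hit: reuse the cached decision
            return self._rules[key]
        result = evaluate_policies(opcode, tags)
        self._rules[key] = result         # miss: insert the newly generated rule
        return result

cache = RuleCache()
print(cache.check("load", {"heap", "untrusted"}))   # trap (computed, then cached)
print(cache.check("load", {"heap", "untrusted"}))   # trap (cache hit)
```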
  • Patent number: 11631377
    Abstract: The present disclosure relates to a method for controlling a timing controller and a timing controller. The method for controlling the timing controller includes: acquiring a bus address in a bus signal transmitted over an I2C bus, the I2C bus being connected to the timing controller; when the timing controller determines that the bus address matches an address of the timing controller, acquiring data information in the bus signal; acquiring an address of a target function circuit according to the data information; generating and transmitting a query instruction to a memory according to the address of the target function circuit, and receiving switch control data corresponding to the target function circuit fed back by the memory; and controlling, according to the switch control data, a switch connected to the target function circuit to be turned on.
    Type: Grant
    Filed: June 10, 2020
    Date of Patent: April 18, 2023
    Assignees: BEIHAI HKC OPTOELECTRONICS TECHNOLOGY CO., LTD., Chongqing HKC Optoelectronics Technology Co., Ltd.
    Inventor: Mingliang Wang
  • Patent number: 11625343
    Abstract: Memory systems with a communications bus (and associated systems, devices, and methods) are disclosed herein. In one embodiment, a memory device includes an input/output terminal separate from data terminals of the memory device. The input/output terminal can be operably connected to a memory controller via a communications bus. The memory device can be configured to initiate a communication with the memory controller by outputting a signal via the input/output terminal and/or over the communications bus. The memory device can be configured to output the signal in accordance with a clock signal that is different from a second clock signal used to output or receive data signals via the data terminals. In some embodiments, the memory device is configured to initiate communications over the communication bus only when it possesses a communication token. The communication token can be transferred between memory devices operably connected to the communications bus.
    Type: Grant
    Filed: May 12, 2021
    Date of Patent: April 11, 2023
    Assignee: Micron Technology, Inc.
    Inventor: Sujeet Ayyapureddi
  • Patent number: 11622026
    Abstract: In some embodiments, an electronic device is disclosed for intelligently prefetching data via a computer network. The electronic device can include a device housing, a user interface, a memory device, and a hardware processor. The hardware processor can: communicate via a communication network; determine that the hardware processor is expected to be unable to communicate via the communication network; responsive to determining that the hardware processor is expected to be unable to communicate via the communication network, determine prefetch data to request prior to the hardware processor being unable to communicate via the communication network; request the prefetch data; receive and store the prefetch data prior to the hardware processor being unable to communicate via the communication network; and subsequent to the hardware processor being unable to communicate via the communication network, process the prefetch data with an application responsive to processing a first user input with the application.
    Type: Grant
    Filed: October 8, 2021
    Date of Patent: April 4, 2023
    Assignee: Tealium Inc.
    Inventors: Craig P. Rouse, Harry Cassell, Christopher B. Slovak
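A minimal sketch of the prefetch-before-offline flow in patent 11622026, with the connectivity prediction, key prediction, and fetch functions passed in as hypothetical callables.
```python
def prepare_for_offline(expect_offline, predict_needed_keys, fetch, cache):
    """If the device expects to lose connectivity, request and store the data
    it predicts the user will need, so the app can keep working offline."""
    if not expect_offline():
        return
    for key in predict_needed_keys():
        if key not in cache:
            cache[key] = fetch(key)       # receive and store before going offline

cache = {}
prepare_for_offline(
    expect_offline=lambda: True,
    predict_needed_keys=lambda: ["home_feed", "settings"],
    fetch=lambda key: f"<payload for {key}>",
    cache=cache,
)
print(cache)
```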
  • Patent number: 11605088
    Abstract: Methods and systems are presented for providing concurrent data retrieval and risk processing while evaluating a risk source of an online service provider. Upon receiving a request to evaluate the risk source, a risk analysis module may initiate one or more risk evaluation sub-processes to evaluate the risk source. Each risk evaluation sub-process may require different data related to the risk source to perform the evaluation. The risk analysis module may simultaneously retrieve the data related to the risk source and perform the one or more risk evaluation sub-processes such that the risk analysis module may complete a risk evaluation sub-process whenever the data required by the risk evaluation sub-process is made available.
    Type: Grant
    Filed: December 31, 2020
    Date of Patent: March 14, 2023
    Assignee: PayPal, Inc.
    Inventors: Srinivasan Manoharan, Vinesh Poruthikottu Chirakkil
  • Patent number: 11593267
    Abstract: Aspects of the present disclosure relate to asynchronous memory management. In embodiments, an input/output (IO) workload is received at a storage array. Further, one or more read-miss events corresponding to the IO workload are identified. Additionally, at least one of the storage array's cache slots is bound to a track identifier (TID) corresponding to the read-miss events based on one or more of the read-miss events' two-dimensional metrics.
    Type: Grant
    Filed: October 28, 2021
    Date of Patent: February 28, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Ramesh Doddaiah, Malak Alshawabkeh, Rong Yu, Peng Wu
  • Patent number: 11580032
    Abstract: A technique is provided for training a prediction apparatus. The apparatus has an input interface for receiving a sequence of training events indicative of program instructions, and identifier value generation circuitry for performing an identifier value generation function to generate, for a given training event received at the input interface, an identifier value for that given training event. The identifier value generation function is arranged such that the generated identifier value is dependent on at least one register referenced by a program instruction indicated by that given training event. Prediction storage is provided with a plurality of training entries, where each training entry is allocated an identifier value as generated by the identifier value generation function, and is used to maintain training data derived from training events having that allocated identifier value.
    Type: Grant
    Filed: January 20, 2021
    Date of Patent: February 14, 2023
    Assignee: Arm Limited
    Inventors: Frederic Claude Marie Piry, Natalya Bondarenko, Cédric Denis Robert Airaud, Geoffray Matthieu Lacourba
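A small Python model of the training-entry keying described in patent 11580032: the identifier mixes the registers referenced by the instruction into the key, so the hash below is purely illustrative and not the patent's identifier value generation function.
```python
class RegisterKeyedTrainer:
    """Training entries are keyed by an identifier derived from the registers
    a program instruction references, rather than by its PC alone."""

    def __init__(self):
        self.entries = {}

    @staticmethod
    def identifier(pc: int, registers: tuple) -> int:
        # Illustrative mix of the PC with the referenced register numbers.
        value = pc
        for reg in registers:
            value = (value * 31 + reg) & 0xFFFF
        return value

    def train(self, pc: int, registers: tuple, observation) -> None:
        key = self.identifier(pc, registers)
        self.entries.setdefault(key, []).append(observation)

trainer = RegisterKeyedTrainer()
trainer.train(pc=0x4000, registers=(3, 7), observation="stride +64")
trainer.train(pc=0x4000, registers=(3, 7), observation="stride +64")
print(trainer.entries)
```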
  • Patent number: 11568932
    Abstract: Methods and systems include memory devices with multiple memory cells configured to store data. The memory devices also include a cache configured to store at least a portion of the data to provide access to the at least the portion of the data without accessing the multiple memory cells. The memory devices also include control circuitry configured to receive a read command having a target address. Based on the target address, the control circuitry is configured to determine that the at least the portion of the data is present in the cache. Using the cache, the control circuitry also outputs read data from the cache without accessing the plurality of memory cells.
    Type: Grant
    Filed: February 22, 2021
    Date of Patent: January 31, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Zhongyuan Lu, Stephen H. Tang, Robert J. Gleixner
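A toy model of the device-side cache in patent 11568932: a read is served from the cache when the target address hits, and only falls back to the memory-cell array on a miss. Real controller details (fill policy, eviction) are omitted.
```python
class DeviceCache:
    """Serve reads from an on-device cache when the target address hits,
    avoiding an access to the underlying memory-cell array."""

    def __init__(self, array):
        self.array = array          # models the memory cells
        self.cache = {}             # address -> data

    def read(self, address):
        if address in self.cache:
            return self.cache[address], "cache"     # no array access
        data = self.array[address]                   # fall back to the cells
        self.cache[address] = data
        return data, "array"

dev = DeviceCache(array={0x10: b"\xde\xad"})
print(dev.read(0x10))   # (b'\xde\xad', 'array')
print(dev.read(0x10))   # (b'\xde\xad', 'cache')
```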
  • Patent number: 11520505
    Abstract: It is desired to provide a technique capable of reducing the time and the power consumption required for computation. Provided is an information processing apparatus including a storage control unit that writes data read from a read target area of an external memory having multiple dimensions to a storage area having the multiple dimensions and a processing unit that executes processing based on the data of the storage area, in which the storage control unit moves the read target area in a first dimension direction in the external memory and performs first overwrite of a back end area of the storage area in a direction corresponding to the first dimension direction with data of a front end area of the read target area after movement in the first dimension direction.
    Type: Grant
    Filed: June 17, 2019
    Date of Patent: December 6, 2022
    Assignee: SONY CORPORATION
    Inventor: Yuji Takahashi
  • Patent number: 11481220
    Abstract: An apparatus comprises instruction fetch circuitry to retrieve instructions from storage and branch target storage to store entries comprising source and target addresses for branch instructions. A confidence value is stored with each entry and when a current address matches a source address in an entry, and the confidence value exceeds a confidence threshold, instruction fetch circuitry retrieves a predicted next instruction from a target address in the entry. Branch confidence update circuitry increases the confidence value of the entry on receipt of a confirmation of the target address and decreases the confidence value on receipt of a non-confirmation of the target address. When the confidence value meets a confidence lock threshold below the confidence threshold and non-confirmation of the target address is received, a locking mechanism with respect to the entry is triggered. A corresponding method is also provided.
    Type: Grant
    Filed: October 26, 2016
    Date of Patent: October 25, 2022
    Assignee: Arm Limited
    Inventors: Alexander Alfred Hornung, Adrian Viorel Popescu
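A compact sketch of the confidence and locking behavior in patent 11481220, reading the lock condition as "a non-confirmation while confidence is at or below the lock threshold"; the threshold values and the exact ordering of the check are assumptions.
```python
CONFIDENCE_THRESHOLD = 4     # predict only when confidence exceeds this
LOCK_THRESHOLD = 1           # below the prediction threshold; triggers locking

class BTBEntry:
    def __init__(self, source, target):
        self.source, self.target = source, target
        self.confidence = 0
        self.locked = False

    def predict(self, current_pc):
        if current_pc == self.source and self.confidence > CONFIDENCE_THRESHOLD:
            return self.target        # fetch predicted next instruction from target
        return None

    def update(self, confirmed: bool):
        if confirmed:
            self.confidence += 1
        else:
            self.confidence -= 1
            # barely-trusted entry just mispredicted: trigger the locking mechanism
            if self.confidence <= LOCK_THRESHOLD:
                self.locked = True

entry = BTBEntry(source=0x400, target=0x800)
for _ in range(6):
    entry.update(confirmed=True)        # confidence climbs to 6
print(hex(entry.predict(0x400)))        # 0x800
entry.confidence = 2                    # decayed entry
entry.update(confirmed=False)           # drops to 1 -> lock triggers
print(entry.locked)                     # True
```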
  • Patent number: 11430523
    Abstract: A storage device is provided. The storage device includes a nonvolatile memory device including a first block and a second block, and a controller including processing circuitry configured to: predict a number of writes to be performed on the nonvolatile memory device using a machine learning model, determine a type of reclaim command based on the predicted number of writes, the reclaim command for reclaiming data of the first block to the second block, and issue the reclaim command.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: August 30, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jin Woo Hong, Chan Ha Kim, Yun Jung Lee
  • Patent number: 11360898
    Abstract: This technology relates to a method and apparatus for improving I/O throughput through an interleaving operation for multiple memory dies of a memory system. A memory system may include: multiple memory dies suitable for outputting data of different sizes in response to a read request; and a controller in communication with the multiple memory dies through multiple channels, and suitable for: performing a correlation operation on the read request so that the multiple memory dies interleave and output target data corresponding to the read request through the multiple channels, determining a pending credit using a result of the correlation operation, and reading, from the multiple memory dies, the target data corresponding to the read request and additional data stored in a same storage unit as the target data, based on a type of the target data corresponding to the read request and the pending credit.
    Type: Grant
    Filed: April 23, 2020
    Date of Patent: June 14, 2022
    Assignee: SK hynix Inc.
    Inventor: Jeen Park
  • Patent number: 11347523
    Abstract: Techniques include executing a software program having a function call to a shared library and reloading the shared library without stopping execution of the software program. A global offset table (GOT) is updated responsive to resolving a link address associated with the function call. An entry in the GOT includes a link address field, an index field, and a resolved field, the updating including updating the index field with an affirmative value and marking the resolved field with an affirmative flag for the entry in the GOT. Responsive to reloading the shared library, the entry in the GOT is found having the affirmative value in the index field and the affirmative flag in the resolved field. An address value in the link address field is returned for the entry having the affirmative value in the index field, responsive to a subsequent execution of the function call to the shared library.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: May 31, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Xiao Ling Chen, Zhan Peng Huo, Yong Yin, Dong Hui Liu, Qi Li, Jia Yu, Jiang Yi Liu, Xiao Xuan Fu, Cheng Fang Wang
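A simplified model of the GOT bookkeeping in patent 11347523: each entry carries a link address, an index field, and a resolved flag, and after the shared library is reloaded the already-resolved entries are reused instead of being re-linked. The field handling here is illustrative, not the kernel or dynamic-loader implementation.
```python
class GotEntry:
    def __init__(self):
        self.link_address = None
        self.index = None        # affirmative value once resolved
        self.resolved = False    # affirmative flag once resolved

class Got:
    """Global offset table that survives a live reload of the shared library:
    resolved entries are found again by their index/resolved markers."""

    def __init__(self, symbols):
        self.entries = {name: GotEntry() for name in symbols}

    def resolve(self, name, address, index):
        e = self.entries[name]
        e.link_address, e.index, e.resolved = address, index, True

    def lookup_after_reload(self, name):
        e = self.entries[name]
        if e.resolved and e.index is not None:
            return e.link_address     # reuse the already-resolved link address
        raise LookupError(f"{name} must be resolved lazily")

got = Got(["do_work"])
got.resolve("do_work", address=0x7F0012340, index=3)
print(hex(got.lookup_after_reload("do_work")))
```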
  • Patent number: 11288074
    Abstract: Representative apparatus, method, and system embodiments are disclosed for configurable computing. A representative system includes an interconnection network; a processor; and a plurality of configurable circuit clusters. Each configurable circuit cluster includes a plurality of configurable circuits arranged in an array; a synchronous network coupled to each configurable circuit of the array; and an asynchronous packet network coupled to each configurable circuit of the array.
    Type: Grant
    Filed: March 31, 2019
    Date of Patent: March 29, 2022
    Assignee: Micron Technology, Inc.
    Inventor: Tony M. Brewer
  • Patent number: 11204868
    Abstract: The present application discloses a memory control method, a controller, a chip and an electronic device, and relates to the field of control technology. A specific implementation solution is: obtaining first address information of an access to the memory performed by the processor within a first time window; determining, according to the first address information and an address jump relationship, a target slice of the memory that is to be accessed by the processor within a second time window; and controlling the target slice in the memory to be turned on and controlling a slice other than the target slice in the memory to be turned off within the second time window. Through the above-mentioned process, each slice is dynamically turned on and off according to the actual situation of memory access, thereby reducing the power consumption of the memory to the maximum extent.
    Type: Grant
    Filed: March 19, 2020
    Date of Patent: December 21, 2021
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Bibo Yang, Xiaoping Yan, Chao Tian, Junhui Wen
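A sketch of the two-window slice control in patent 11204868, assuming a pre-learned address-jump map from one slice to the next; the map, slice size, and window handling are illustrative.
```python
def predict_target_slices(first_window_addresses, jump_map, slice_size):
    """From the addresses touched in window 1 and a learned address-jump map,
    predict which memory slices will be accessed in window 2."""
    predicted = set()
    for addr in first_window_addresses:
        nxt = jump_map.get(addr // slice_size)
        if nxt is not None:
            predicted.add(nxt)
    return predicted

def apply_power_state(all_slices, target_slices):
    # Turn on only the slices predicted to be accessed; turn off the rest.
    return {s: ("on" if s in target_slices else "off") for s in all_slices}

jump_map = {0: 1, 1: 2}           # slice N tends to be followed by slice N+1
targets = predict_target_slices([0x0040, 0x1010], jump_map, slice_size=0x1000)
print(apply_power_state(range(4), targets))   # {0: 'off', 1: 'on', 2: 'on', 3: 'off'}
```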
  • Patent number: 11093401
    Abstract: Various aspects provide for facilitating prediction of instruction pipeline hazards in a processor system. A system comprises a fetch component and an execution component. The fetch component is configured for storing a hazard prediction associated with a group of memory access instructions in a buffer associated with branch prediction. The execution component is configured for executing a memory access instruction associated with the group of memory access instructions as a function of the hazard prediction entry. In an aspect, the hazard prediction entry is configured for predicting whether the group of memory access instructions is associated with an instruction pipeline hazard.
    Type: Grant
    Filed: March 11, 2014
    Date of Patent: August 17, 2021
    Assignee: Ampere Computing LLC
    Inventors: Matthew Ashcraft, Richard W. Thaik
  • Patent number: 11093601
    Abstract: Embodiments described herein enable the interoperability between processes configured for pointer authentication and processes that are not configured for pointer authentication. Enabling the interoperability between such processes enables essential libraries, such as system libraries, to be compiled with pointer authentication, while enabling those libraries to still be used by processes that have not yet been compiled or configured to use pointer authentication.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: August 17, 2021
    Assignee: Apple Inc.
    Inventors: Bernard J. Semeria, Devon S. Andrade, Jeremy C. Andrus, Ahmed Bougacha, Peter Cooper, Jacques Fortier, Louis G. Gerbarg, James H. Grosbach, Robert J. McCall, Daniel A. Steffen, Justin R. Unger
  • Patent number: 11068397
    Abstract: Disclosed aspects relate to accelerator sharing among a plurality of processors through a plurality of coherent proxies. The cache lines in a cache associated with the accelerator are allocated to one of the plurality of coherent proxies. In a cache directory for the cache lines used by the accelerator, the status of the cache lines and the identification information of the coherent proxies to which the cache lines are allocated are provided. Each coherent proxy maintains a shadow directory of the cache directory for the cache lines allocated to it. In response to receiving an operation request, a coherent proxy corresponding to the request is determined. The accelerator communicates with the determined coherent proxy for the request.
    Type: Grant
    Filed: April 4, 2019
    Date of Patent: July 20, 2021
    Assignee: International Business Machines Corporation
    Inventors: Peng Fei BG Gou, Yang Liu, Yang Fan EL Liu, Yong Lu
  • Patent number: 10929136
    Abstract: Branch prediction techniques are described that can improve the performance of pipelined microprocessors. A microprocessor with a hierarchical branch prediction structure is presented. The hierarchy of branch predictors includes: a multi-cycle predictor that provides very accurate branch predictions, but with a latency of multiple cycles; a small and simple branch predictor that can provide branch predictions for a sub-set of instructions with zero-cycle latency; and a fast, intermediate level branch predictor that provides relatively accurate branch prediction, while still having a low, but non-zero instruction prediction latency of only one cycle, for example. To improve operation, the higher accuracy, higher latency branch direction predictor and the fast, lower latency branch direction predictor can share a common target predictor.
    Type: Grant
    Filed: April 11, 2018
    Date of Patent: February 23, 2021
    Assignee: Futurewei Technologies, Inc.
    Inventors: Shiwen Hu, Wei Yu Chen, Michael Chow, Qian Wang, Yongbin Zhou, Lixia Yang, Ning Yang
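The hierarchy in patent 10929136 can be modeled as three lookup tables with different latencies; the tables, latency figures, and static fallback below are assumptions used only to show the first-level-that-hits selection.
```python
def predict(pc, fast_table, intermediate_table, slow_table):
    """Return (prediction, latency_in_cycles) from the first level that knows
    the branch: tiny zero-cycle table, one-cycle table, then multi-cycle table."""
    if pc in fast_table:
        return fast_table[pc], 0
    if pc in intermediate_table:
        return intermediate_table[pc], 1
    if pc in slow_table:
        return slow_table[pc], 3
    return "not-taken", 0          # static fallback

fast = {0x100: "taken"}
mid  = {0x200: "taken"}
slow = {0x300: "not-taken"}
for pc in (0x100, 0x200, 0x300, 0x400):
    print(hex(pc), predict(pc, fast, mid, slow))
```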
  • Patent number: 10908912
    Abstract: A method for redirecting an indirect call in an operating system kernel to a direct call is disclosed. The direct calls are contained in trampoline code called an inline jump switch (IJS) or an outline jump switch (OJS). The IJS and OJS can operate in a use mode (redirecting an indirect call to a direct call), a learning and update mode, or a fallback mode. In the learning and update mode, target addresses in a trampoline code template are learned and updated by a jump switch worker thread that periodically runs as a kernel process. When building the kernel binary, a plug-in is integrated into the kernel. The plug-in replaces call sites with a trampoline code template containing a direct call so that the template can be later updated by the jump switch worker thread.
    Type: Grant
    Filed: July 24, 2019
    Date of Patent: February 2, 2021
    Assignee: VMWARE, INC.
    Inventors: Nadav Amit, Frederick Joseph Jacobs, Michael Wei
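A userspace analogy for the jump-switch modes in patent 10908912: the call site learns its hot target, then "patches" itself to call that target directly, with the original indirect call as the fallback. The promotion threshold and the Python framing are assumptions; the patent operates on kernel trampoline code.
```python
from collections import Counter

class JumpSwitch:
    """Trampoline stand-in for one call site: learns the hot indirect-call
    target and then redirects to it as a direct call, falling back otherwise."""

    def __init__(self, promote_after=3):
        self.hits = Counter()
        self.direct_target = None
        self.promote_after = promote_after

    def call(self, target_fn):
        if self.direct_target is target_fn:
            return self.direct_target()            # use mode: direct call
        self.hits[target_fn] += 1                  # learning mode
        if self.hits[target_fn] >= self.promote_after:
            self.direct_target = target_fn         # update mode: patch the site
        return target_fn()                         # fallback: original indirect call

def handler_a():
    return "a"

switch = JumpSwitch()
for _ in range(5):
    switch.call(handler_a)
print(switch.direct_target is handler_a)   # True
```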
  • Patent number: 10908934
    Abstract: A simulation method performed by a computer for simulating operations by a plurality of cores based on resource access operation descriptions on the plurality of cores, the method includes steps of: extracting a resource access operation description on at least one core of the plurality of cores by executing simulation for the one core; and, under a condition where the one core and a second core among the plurality of cores have a specific relation in execution processing, generating a resource access operation description on the second core from the resource access operation description on the one core by reflecting an address difference between an address of a resource to which the one core accesses and an address of a resource to which the second core accesses.
    Type: Grant
    Filed: July 3, 2018
    Date of Patent: February 2, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Katsuhiro Yoda, Takahiro Notsu, Mitsuru Tomono
  • Patent number: 10901710
    Abstract: Processor hardware detects when memory aliasing occurs, and assures proper operation of the code even in the presence of memory aliasing. The processor defines a special store instruction that is different from a regular store instruction. The special store instruction is used in regions of the computer program where memory aliasing may occur. Because the hardware can detect and correct for memory aliasing, this allows a compiler to make optimizations such as register promotion even in regions of the code where memory aliasing may occur.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: January 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Srinivasan Ramani, Rohit Taneja
  • Patent number: 10901951
    Abstract: A processing module of a memory storage unit includes an interface configured to interface and communicate with a communication system, a memory that stores operational instructions, and processing circuitry operably coupled to the interface and to the memory that is configured to execute the operational instructions to manage data stored using append-only formatting. When the processing module determines that a section of the memory includes invalid data and the amount of invalid data compares unfavorably to a predetermined limit, the processing module determines a rate for execution of a compaction routine for the section of memory, where the rate is based on a proportional, integral and derivative (PID) function that is based on a target usage level of the memory and a current usage level of the memory.
    Type: Grant
    Filed: July 17, 2018
    Date of Patent: January 26, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ethan S. Wozniak, Praveen Viraraghavan, Ilya Volvovski
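The rate computation in patent 10901951 is a PID function of target versus current memory usage, which translates directly into a few lines of Python; the gains, target level, and clamping below are illustrative choices, not values from the patent.
```python
class PidCompactionRate:
    """Derive a compaction rate from the error between a target memory-usage
    level and the current usage level using a PID function."""

    def __init__(self, kp=0.6, ki=0.1, kd=0.2, target=0.70):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target = target
        self.integral = 0.0
        self.prev_error = 0.0

    def rate(self, current_usage: float) -> float:
        error = current_usage - self.target          # positive when over target
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        # Clamp to a non-negative compaction rate (e.g. sections cleaned per tick).
        return max(0.0, self.kp * error + self.ki * self.integral + self.kd * derivative)

pid = PidCompactionRate()
for usage in (0.72, 0.78, 0.85):
    print(f"usage={usage:.2f} -> compaction rate {pid.rate(usage):.3f}")
```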
  • Patent number: 10893096
    Abstract: Embodiments for optimizing dynamic resource allocations in a disaggregated computing environment. A data heat map associated with a data access pattern of data elements associated with a workload is maintained. The workload is classified into one of a plurality of classes, each class characterized by the data access pattern associated with the workload. The workload is then assigned to a dynamically constructed disaggregated system optimized with resources according to the one of the plurality of classes the workload is classified into to increase efficiency during a performance of the workload.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: January 12, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: John A. Bivens, Ruchi Mahindru, Eugen Schenfeld, Min Li, Valentina Salapura
  • Patent number: 10884747
    Abstract: Prediction of an affiliated register. A determination is made as to whether an affiliated register is to be predicted for a particular branch instruction. The affiliated register is a register, separate from a target address register, selected to store a predicted target address based on prediction of a target address. Based on determining that the affiliated register is to be predicted, predictive processing is performed. The predictive processing includes providing the predicted target address in a location associated with the affiliated register.
    Type: Grant
    Filed: August 18, 2017
    Date of Patent: January 5, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura
  • Patent number: 10884929
    Abstract: A Set Table of Contents (TOC) Register instruction. An instruction to provide a pointer to a reference data structure, such as a TOC, is obtained by a processor and executed. The executing includes determining a value for the pointer to the reference data structure, and storing the value in a location (e.g., a register) specified by the instruction.
    Type: Grant
    Filed: September 19, 2017
    Date of Patent: January 5, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura
  • Patent number: 10877889
    Abstract: Techniques for implementing and/or operating an apparatus, which includes a processing system communicatively coupled to a memory system via a memory bus. The processing system includes processing circuitry, one or more caches, and a memory controller. When a data block targeted by the processing circuitry results in a processor-side miss, the memory controller instructs the processing system to output a memory access request that requests return of the data block at least in part by outputting an access parameter to be used by the memory system to locate the data block in one or more hierarchical memory levels during a first clock cycle and outputting a context parameter indicative of context information associated with current targeting of the data block during a second clock cycle different from the first clock cycle to enable the memory system to predictively control data storage based at least in part on the context information.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: December 29, 2020
    Assignee: Micron Technology, Inc.
    Inventor: David Andrew Roberts
  • Patent number: 10853270
    Abstract: A computing device includes technologies for securing indirect addresses (e.g., pointers) that are used by a processor to perform memory access (e.g., read/write/execute) operations. The computing device encodes the indirect address using metadata and a cryptographic algorithm. The metadata may be stored in an unused portion of the indirect address.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: December 1, 2020
    Assignee: INTEL CORPORATION
    Inventors: David M. Durham, Baiju Patel
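A loose software analogy for the pointer encoding in patent 10853270: metadata and a short keyed check value are packed into the unused upper bits of a 48-bit address. The bit layout, the HMAC-based check, and the key handling are assumptions; the patent specifies only that metadata and a cryptographic algorithm are used.
```python
import hashlib
import hmac

ADDR_BITS = 48                     # canonical address bits actually used (assumed)
META_SHIFT = ADDR_BITS             # metadata lives in the unused upper bits
KEY = b"per-process-secret"        # assumed key material

def encode_pointer(addr: int, metadata: int) -> int:
    """Tag the unused upper bits with metadata plus a one-byte keyed checksum."""
    mac = hmac.new(KEY, addr.to_bytes(8, "little"), hashlib.sha256).digest()[0]
    return (metadata << (META_SHIFT + 8)) | (mac << META_SHIFT) | (addr & ((1 << ADDR_BITS) - 1))

def decode_pointer(encoded: int):
    """Recover the address and metadata, rejecting pointers whose checksum fails."""
    addr = encoded & ((1 << ADDR_BITS) - 1)
    mac = (encoded >> META_SHIFT) & 0xFF
    metadata = encoded >> (META_SHIFT + 8)
    expected = hmac.new(KEY, addr.to_bytes(8, "little"), hashlib.sha256).digest()[0]
    if mac != expected:
        raise ValueError("pointer failed integrity check")
    return addr, metadata

p = encode_pointer(0x00007FFF_DEAD_BEEF, metadata=0x3)
print([hex(v) for v in decode_pointer(p)])   # ['0x7fffdeadbeef', '0x3']
```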
  • Patent number: 10846093
    Abstract: In one embodiment, an apparatus includes: a value prediction storage including a plurality of entries each to store address information of an instruction, a value prediction for the instruction and a confidence value for the value prediction; and a control circuit coupled to the value prediction storage. In response to an instruction address of a first instruction, the control circuit is to access a first entry of the value prediction storage to obtain a first value prediction associated with the first instruction and control execution of a second instruction based at least in part on the first value prediction. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: November 24, 2020
    Assignee: Intel Corporation
    Inventors: Sumeet Bandishte, Jayesh Gaur, Sreenivas Subramoney, Hong Wang
  • Patent number: 10817420
    Abstract: A method for accessing two memory locations in two different memory arrays based on a single address string includes determining three sets of address bits. A first set of address bits are common to the addresses of wordlines that correspond to the memory locations in the two memory arrays. A second set of address bits concatenated with the first set of address bits provides the address of the wordline that corresponds to a first memory location in a first memory array. A third set of address bits concatenated with the first set of address bits provides the address of the wordline that corresponds to a second memory location in a second memory array. The method includes populating the single address string with the three sets of address bits and may be performed by an address data processing unit.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: October 27, 2020
    Assignee: Arm Limited
    Inventors: Yew Keong Chong, Sriram Thyagarajan, Andy Wangkun Chen
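The address-string packing in patent 10817420 reduces to bit concatenation, sketched below with made-up field widths: the common first set of bits is shared, and the second and third sets each concatenate with it to form one wordline address per memory array.
```python
# Assumed layout of the single address string (illustrative widths):
#   [ third set | second set | common first set ]
COMMON_BITS = 4
SET_BITS = 3

def pack(common: int, second: int, third: int) -> int:
    return (third << (SET_BITS + COMMON_BITS)) | (second << COMMON_BITS) | common

def unpack(address_string: int):
    common = address_string & ((1 << COMMON_BITS) - 1)
    second = (address_string >> COMMON_BITS) & ((1 << SET_BITS) - 1)
    third = address_string >> (COMMON_BITS + SET_BITS)
    # Concatenating each set with the common bits yields one wordline per array.
    wordline_a = (second << COMMON_BITS) | common
    wordline_b = (third << COMMON_BITS) | common
    return wordline_a, wordline_b

s = pack(common=0b1010, second=0b011, third=0b101)
print([bin(w) for w in unpack(s)])   # ['0b111010', '0b1011010']
```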
  • Patent number: 10761844
    Abstract: Disclosed embodiments relate to predicting load data. In one example, a processor includes a pipeline having stages ordered as fetch, decode, allocate, write back, and commit, a training table to store an address, predicted data, a state, and a count of instances of unchanged return data, and tracking circuitry to determine, during one or more of the allocate and decode stages, whether a training table entry has a first state and matches a fetched first load instruction, and, if so, using the data predicted by the entry during the execute stage, the tracking circuitry further to update the training table during or after the write back stage to set the state of the first load instruction in the training table to the first state when the count reaches a first threshold.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: September 1, 2020
    Assignee: Intel Corporation
    Inventors: Manjunath Shevgoor, Mark J. Dechene, Stanislav Shwartsman, Pavel I. Kryukov
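A behavioral sketch of the load-data training table in patent 10761844: an entry is promoted to a predicting state once the count of unchanged return data reaches a threshold, after which lookups return the predicted data. The threshold and table layout are illustrative, and the pipeline-stage interactions are not modeled.
```python
PROMOTE_THRESHOLD = 4     # instances of unchanged return data before predicting

class LoadValuePredictor:
    """Training table keyed by load address: once the same data has been
    returned enough times, the entry is promoted and its data is predicted."""

    def __init__(self):
        self.table = {}   # address -> {"data": ..., "count": int, "predicting": bool}

    def lookup(self, address):
        e = self.table.get(address)
        return e["data"] if e and e["predicting"] else None

    def train(self, address, returned_data):
        e = self.table.setdefault(address, {"data": returned_data, "count": 0, "predicting": False})
        if e["data"] == returned_data:
            e["count"] += 1
            if e["count"] >= PROMOTE_THRESHOLD:
                e["predicting"] = True
        else:                        # data changed: reset training for this load
            e.update(data=returned_data, count=0, predicting=False)

lvp = LoadValuePredictor()
for _ in range(5):
    lvp.train(0x1000, 42)
print(lvp.lookup(0x1000))   # 42
```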
  • Patent number: 10754650
    Abstract: In an embodiment, a method includes, in a hardware processor, determining, for a processor instruction, a rule for matching a predicted memory tag. The method further includes determining a predicted memory tag based on applying the rule for matching the predicted memory tag. The method further includes determining an R tag based on applying the rule. The method further includes obtaining an actual memory tag from memory based on an operand of the processor instruction. The method further includes determining whether the predicted memory tag and the actual memory tag match. The method further includes, if the predicted memory tag and actual memory tag match, using the R tag as the R tag output.
    Type: Grant
    Filed: June 7, 2018
    Date of Patent: August 25, 2020
    Assignee: THE CHARLES STARK DRAPER LABORATORY, INC.
    Inventor: André DeHon
  • Patent number: 10754656
    Abstract: A predicted value to be used in register-indirect branching is predicted. The predicted value is to be stored in one or more locations based on the prediction. An offset for a predicted derived value is obtained. The predicted derived value is to be used as a pointer to a reference data structure providing access to variables used in processing. The predicted derived value is generated using the predicted value and the offset. The predicted derived value is used to access the reference data structure during processing.
    Type: Grant
    Filed: September 6, 2019
    Date of Patent: August 25, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura
  • Patent number: 10740248
    Abstract: A method or system of translating a virtualized address to a real address is disclosed that includes receiving a virtualized address for translation; generating a predicted intermediate address translation using a portion of the bit field of the virtualized address; determining a predicted real address using the predicted intermediate address or portion thereof; performing a translation of the virtualized address to an actual intermediate address; determining whether the predicted intermediate address is the same as the actual intermediate address; and in response to the predicted intermediate address being the same as the actual intermediate address, providing the predicted real address as the real address.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: August 11, 2020
    Assignee: International Business Machines Corporation
    Inventors: David Campbell, Dwain A. Hicks, Christian Jacobi
  • Patent number: 10725778
    Abstract: A method includes receiving, for metadata processing, a current instruction with associated metadata tags. The metadata processing is performed in a metadata processing domain isolated from a code execution domain including the current instruction. Each respective associated metadata tag represents a respective policy of a composite policy. The associated metadata tags further include pointers to tags of a component policy of the composite policy. For each respective metadata tag, the method includes determining, in the metadata processing domain and in accordance with the metadata tag and the current instruction, whether a rule exists in a rule cache for the current instruction. The rule cache includes rules on metadata used by said metadata processing to define allowed instructions. The determination of whether a rule exists results in a respective output.
    Type: Grant
    Filed: June 7, 2018
    Date of Patent: July 28, 2020
    Assignees: The Charles Stark Draper Laboratory, Inc., The Trustees of the University of Pennsylvania Penn Center for Innovation
    Inventors: André DeHon, Udit Dhawan
  • Patent number: 10719328
    Abstract: A predicted value to be used in register-indirect branching is predicted. The predicted value is to be stored in one or more locations based on the prediction. An offset for a predicted derived value is obtained. The predicted derived value is to be used as a pointer to a reference data structure providing access to variables used in processing. The predicted derived value is generated using the predicted value and the offset. The predicted derived value is used to access the reference data structure during processing.
    Type: Grant
    Filed: August 18, 2017
    Date of Patent: July 21, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura
  • Patent number: 10713052
    Abstract: Disclosed embodiments relate to a prefetcher for delinquent irregular loads. In one example, a processor includes a cache memory, fetch and decode circuitry to fetch and decode instructions from a memory; and execution circuitry including a binary translator (BT) to respond to the decoded instructions by storing a plurality of decoded instructions in a BT cache, identifying a delinquent irregular load (DIRRL) among the plurality of decoded instructions, determining whether the DIRRL is prefetchable, and, if so, generating a custom prefetcher to cause the processor to prefetch a region of instructions leading up to the prefetchable DIRRL.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: July 14, 2020
    Assignee: INTEL CORPORATION
    Inventors: Karthik Sankaranarayanan, Stephen J. Tarsa, Gautham N. Chinya, Helia Naeimi
  • Patent number: 10671307
    Abstract: Provided is a removable storage system including: a data storage device configured to store a plurality of files including a first file and a second file; a host interface configured to receive, from a host, a pattern matching request including pattern information and file information regarding the plurality of files, and transmit, to the host, a result of pattern matching regarding the plurality of files; and a pattern matching accelerator configured to perform the pattern matching in response to the pattern matching request, wherein the pattern matching accelerator includes a scan engine configured to scan data based on a pattern, and a scheduler configured to control the scan engine to stop scanning the first file and start scanning the second file.
    Type: Grant
    Filed: January 12, 2018
    Date of Patent: June 2, 2020
    Assignees: Samsung Electronics Co., Ltd., Industry-Academic Cooperation Foundation, Yonsei University
    Inventors: Jeong-ho Lee, Ho-jun Shim, Won Woo Ro, Won Seob Jeong, Myung Kuk Yoon, Won Jeon
  • Patent number: 10621097
    Abstract: Devices and systems having memory-side adaptive prefetch decision-making, including associated methods, are disclosed and described. Adaptive information can be provided to memory-side controller and prefetch components that allow such memory-side components to prefetch data in a manner that is adaptive with respect to a particular read memory request or to a thread performing read memory requests.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: April 14, 2020
    Assignee: Intel Corporation
    Inventors: Karthik Kumar, Thomas Willhalm, Patrick Lu, Francesc Guim Bernat, Shrikant M. Shah
  • Patent number: 10579385
    Abstract: Prediction of an affiliated register. A determination is made as to whether an affiliated register is to be predicted for a particular branch instruction. The affiliated register is a register, separate from a target address register, selected to store a predicted target address based on prediction of a target address. Based on determining that the affiliated register is to be predicted, predictive processing is performed. The predictive processing includes providing the predicted target address in a location associated with the affiliated register.
    Type: Grant
    Filed: November 27, 2017
    Date of Patent: March 3, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura