Generating Prefetch, Look-ahead, Jump, Or Predictive Address Patents (Class 711/213)
-
Patent number: 12229556
Abstract: Processing circuitry to execute load operations, each associated with an identifier. Prediction circuitry to receive a given load value associated with a given identifier, and to make, in dependence on the given load value, a prediction indicating a predicted load value for a subsequent load operation to be executed by the processing circuitry and an ID-delta value indicating a difference between the given identifier and an identifier of the subsequent load operation. The predicted load value is predicted in dependence on at least one occurrence of each of the given load value and the predicted load value during execution of a previously-executed sequence of load operations. The prediction circuitry is configured to determine the ID-delta value in dependence on a difference between identifiers associated with the at least one occurrence of each of the given load value and the predicted load value in the previously-executed sequence of load operations.
Type: Grant
Filed: July 17, 2023
Date of Patent: February 18, 2025
Assignee: Arm Limited
Inventors: Alexander Cole Shulyak, Yasuo Ishii, Joseph Michael Pusdesris
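The mechanism in this abstract can be sketched as a table that, for each observed load value, records the next value seen in a prior run and the distance between the two loads' identifiers. This is a hypothetical illustration, not the patented implementation; all names are invented.

```python
class LoadValuePredictor:
    """Maps a seen load value to (predicted next value, ID delta)."""

    def __init__(self):
        self.table = {}  # value -> (predicted_value, id_delta)

    def train(self, history):
        """history: list of (identifier, value) pairs from a previously
        executed load sequence. For each value, record the value that
        followed it and the identifier distance between the two loads."""
        for (id_a, val_a), (id_b, val_b) in zip(history, history[1:]):
            self.table[val_a] = (val_b, id_b - id_a)

    def predict(self, identifier, value):
        """Given a completed load, predict the value and identifier of a
        subsequent load, or None if no pattern was recorded."""
        if value not in self.table:
            return None
        pred_value, id_delta = self.table[value]
        return pred_value, identifier + id_delta

p = LoadValuePredictor()
p.train([(10, 0xA), (12, 0xB), (14, 0xA)])
prediction = p.predict(20, 0xA)  # value 0xA seen at identifier 20
```

Here the training run saw 0xB follow 0xA two identifiers later, so a later occurrence of 0xA at identifier 20 predicts value 0xB for identifier 22.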
-
Patent number: 12124374
Abstract: Disclosed embodiments provide a technique in which a memory controller determines whether a fetch address is a miss in an L1 cache and, when a miss occurs, allocates a way of the L1 cache, determines whether the allocated way matches a scoreboard entry of pending service requests, and, when such a match is found, determines whether a request address of the matching scoreboard entry matches the fetch address. When the matching scoreboard entry also has a request address matching the fetch address, the scoreboard entry is modified to a demand request.
Type: Grant
Filed: November 15, 2022
Date of Patent: October 22, 2024
Assignee: Texas Instruments Incorporated
Inventors: Oluleye Olorode, Ramakrishnan Venkatasubramanian
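The miss-handling flow described here can be illustrated as follows: on an L1 miss, the allocated way is compared against pending scoreboard entries, and an entry matching on both way and address is upgraded to a demand request. A hypothetical sketch, with invented data structures:

```python
PREFETCH, DEMAND = "prefetch", "demand"

def handle_fetch(fetch_addr, l1_lookup, allocate_way, scoreboard):
    """l1_lookup(addr) -> True on hit; allocate_way(addr) -> way index;
    scoreboard: list of dicts with 'way', 'addr', 'kind'.
    Returns a short string naming the action taken."""
    if l1_lookup(fetch_addr):
        return "hit"
    way = allocate_way(fetch_addr)
    for entry in scoreboard:
        # Match first on the allocated way, then on the request address.
        if entry["way"] == way and entry["addr"] == fetch_addr:
            entry["kind"] = DEMAND  # upgrade the pending entry to a demand
            return "upgraded"
    scoreboard.append({"way": way, "addr": fetch_addr, "kind": DEMAND})
    return "new-request"
```

A pending prefetch for the same line is thus reused rather than duplicated, which is the point of checking the scoreboard before issuing a new service request.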
-
Patent number: 12118236
Abstract: A memory controller (MC) comprises a system bus interface that connects the MC to a system processor, a system memory interface that connects the MC to a system memory, a read buffer comprising a plurality of entries constituting storage areas, the entries comprising at least one read buffer entry (RBE) and at least one extended prefetch read buffer entry (EPRBE), read buffer logic, dynamic controls that are used by the read buffer logic, and an MC processor comprising at least one extended prefetch machine (EPM), each corresponding to one of the EPRBEs, where the MC processor is configured to allocate and deallocate EPRBEs and RBEs according to an allocation method using the dynamic controls.
Type: Grant
Filed: August 30, 2021
Date of Patent: October 15, 2024
Assignee: International Business Machines Corporation
Inventors: Eric E. Retter, Lilith Hale, Brad William Michael, John Dodson
-
Patent number: 12111766
Abstract: Embodiments herein relate, e.g., to a method, performed by a first entity, for handling memory operations of an application in a computer environment. The first entity obtains position data associated with data of the application being fragmented into a number of positions in a physical memory. The position data indicates one or more positions of the number of positions in the physical memory. The first entity then provides, to a second entity, one or more indications of the one or more positions indicated by the position data for prefetching data from the second entity, using the one or more indications.
Type: Grant
Filed: October 2, 2019
Date of Patent: October 8, 2024
Assignee: Telefonaktiebolaget LM Ericsson (publ)
Inventors: Amir Roozbeh, Dejan Kostic, Gerald Q. Maguire, Jr., Alireza Farshin
-
Patent number: 12093130
Abstract: A data storage device includes a memory device and a controller coupled to the memory device. The controller is configured to receive a dataset management (DSM) hint, determine if a second physical memory range associated with a next read operation is located within a threshold number of physical block addresses (PBAs) to a first physical memory range associated with a current read operation, where the next read operation is provided by the DSM hint, and utilize at least a portion of a latency budget associated with the current read operation to optimize a read parameter of the first physical memory range.
Type: Grant
Filed: April 20, 2022
Date of Patent: September 17, 2024
Assignee: Sandisk Technologies, Inc.
Inventors: Alexander Bazarsky, Judah Gamliel Hahn, Michael Ionin
-
Patent number: 12093188
Abstract: The invention discloses a prefetch-adaptive intelligent cache replacement policy for high performance. In the presence of hardware prefetching, prefetch requests and demand requests are distinguished: a prefetch predictor based on an ISVM (Integer Support Vector Machine) performs re-reference interval prediction on cache lines loaded by prefetch accesses, and an ISVM-based demand predictor performs re-reference interval prediction on cache lines loaded by demand accesses. The PC of the current load instruction and the PCs of past load instructions in the access history are taken as input; different ISVM predictors are designed for prefetch and demand requests, and reuse prediction is performed on a loaded cache line at request-type granularity, improving the accuracy of cache-line reuse prediction in the presence of prefetching and better combining the performance benefits of hardware prefetching and cache replacement.
Type: Grant
Filed: April 12, 2022
Date of Patent: September 17, 2024
Assignee: BEIJING UNIVERSITY OF TECHNOLOGY
Inventors: Juan Fang, Huijing Yang, Ziyi Teng, Min Cai, Xuan Wang
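A toy version of this request-type-granular scheme keeps one integer-weight predictor for demand fills and another for prefetch fills, each summing weights indexed by the current load PC and the PCs in the access history. The thresholds and the perceptron-style update rule below are invented for illustration; the patented predictor's exact training is not reproduced here.

```python
class ISVMPredictor:
    """Integer-weight reuse predictor over PC features."""

    def __init__(self, threshold=0):
        self.weights = {}          # feature (a load PC) -> integer weight
        self.threshold = threshold

    def predict_reuse(self, pc, history):
        """Sum the weights of the current PC and the history PCs; predict
        reuse when the score clears the threshold."""
        score = sum(self.weights.get(f, 0) for f in [pc, *history])
        return score > self.threshold

    def train(self, pc, history, reused):
        """Nudge every contributing weight toward the observed outcome."""
        delta = 1 if reused else -1
        for f in [pc, *history]:
            self.weights[f] = self.weights.get(f, 0) + delta

# Separate predictors: request type is the granularity of prediction.
demand_predictor = ISVMPredictor()
prefetch_predictor = ISVMPredictor()

def on_fill(pc, history, is_prefetch):
    pred = prefetch_predictor if is_prefetch else demand_predictor
    return pred.predict_reuse(pc, history)
```

Keeping the two weight tables separate is what lets prefetched lines and demanded lines learn different reuse behavior for the same load PC.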
-
Patent number: 12050916
Abstract: Array of pointers prefetching is described. In accordance with described techniques, a pointer target instruction is detected by identifying that a destination location of a load instruction is used in an address compute for a memory operation and the load instruction is included in a sequence of load instructions having addresses separated by a step size. An instruction for fetching data of a future load instruction is injected in an instruction stream of a processor. The data of the future load instruction is stored in a temporary register. An additional instruction is injected in the instruction stream for prefetching a pointer target based on an address of the memory operation and the data of the future load instruction.
Type: Grant
Filed: March 25, 2022
Date of Patent: July 30, 2024
Assignee: Advanced Micro Devices, Inc.
Inventors: Chetana N Keltcher, Alok Garg, Paul S Keltcher
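The pattern this abstract targets can be sketched in software terms: loads at addresses separated by a fixed step feed an address compute, so the data of a future load in the sequence is itself a pointer whose target is worth prefetching. The sketch below models memory as a dict and uses invented names; it illustrates the idea, not the patented instruction-injection mechanism.

```python
def find_stride(addresses):
    """Return the common step size of a load-address sequence, or None
    when the sequence is not separated by a single fixed step."""
    steps = {b - a for a, b in zip(addresses, addresses[1:])}
    return steps.pop() if len(steps) == 1 else None

def pointer_prefetch_targets(addresses, memory, ahead=1):
    """memory: dict mapping address -> stored pointer value. For each
    observed load, look 'ahead' steps forward, read the future load's
    data, and collect it as a pointer target to prefetch."""
    step = find_stride(addresses)
    if step is None:
        return []
    targets = []
    for addr in addresses:
        future = addr + ahead * step        # address of a future load
        if future in memory:
            targets.append(memory[future])  # its data is a pointer target
    return targets
```

Fetching the future element early (the "temporary register" in the abstract) is what breaks the serial dependence between loading a pointer and dereferencing it.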
-
Patent number: 12008370
Abstract: A method of verifying authenticity of a speculative load instruction is disclosed, which includes: receiving a new speculative source-destination pair (PAIR), wherein the source represents a speculative load instruction and the destination represents an associated destination virtual memory location holding data to be loaded onto a register with execution of the source; checking the PAIR against one or more memory tables associated with non-speculative source-destination pairs; if the PAIR exists in the one or more memory tables, then executing the instruction associated with the source of the PAIR; if the PAIR does not exist, then i) waiting until the speculation of the source instruction has cleared as being non-speculative, ii) updating the one or more memory tables, and iii) executing the instruction associated with the source; and if the speculation of the source instruction of the PAIR does not clear as non-speculative, then the source is nullified.
Type: Grant
Filed: May 5, 2022
Date of Patent: June 11, 2024
Assignee: Purdue Research Foundation
Inventors: Mithuna Shamabhat Thottethodi, Terani N Vijaykumar
-
Patent number: 12001842
Abstract: Methods and apparatuses relating to switching of a shadow stack pointer are described. In one embodiment, a hardware processor includes a hardware decode unit to decode an instruction, and a hardware execution unit to execute the instruction to: pop a token for a thread from a shadow stack, wherein the token includes a shadow stack pointer for the thread with at least one least significant bit (LSB) of the shadow stack pointer overwritten with a bit value of an operating mode of the hardware processor for the thread, remove the bit value in the at least one LSB from the token to generate the shadow stack pointer, and set a current shadow stack pointer to the shadow stack pointer from the token when the operating mode from the token matches a current operating mode of the hardware processor.
Type: Grant
Filed: May 26, 2023
Date of Patent: June 4, 2024
Assignee: Intel Corporation
Inventors: Vedvyas Shanbhogue, Jason W. Brandt, Ravi L. Sahita, Barry E. Huntley, Baiju V. Patel, Deepak K. Gupta
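The token format described here relies on pointer alignment: because the low bit of an aligned shadow stack pointer carries no address information, it can be overwritten with the operating-mode bit and stripped off again on restore. A minimal sketch of the pack/unpack logic, with invented function names:

```python
def make_token(shadow_stack_ptr, mode_bit):
    """Overwrite the pointer's LSB with the operating-mode bit.
    The pointer is assumed aligned, so the stolen bit is always zero."""
    assert shadow_stack_ptr & 1 == 0, "shadow stack pointer must be aligned"
    return shadow_stack_ptr | (mode_bit & 1)

def restore_pointer(token, current_mode_bit):
    """Return the shadow stack pointer if the mode recorded in the token
    matches the current operating mode; otherwise refuse the switch."""
    if (token & 1) != (current_mode_bit & 1):
        return None               # mode mismatch: do not switch stacks
    return token & ~1             # strip the mode bit to recover the pointer
```

The mode check prevents a token saved in one operating mode from being consumed in another, which is the safety property the abstract's final clause encodes.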
-
Patent number: 11995469
Abstract: A method and system for preemptive caching across content delivery networks. Specifically, the disclosed method and system entail proactively seeding (or deploying) resources to edge nodes of a content delivery network based on prospective information sources such as, for example, travel itineraries, map route plans, calendar appointments, etc. Resource delivery deadlines and destinations may be derived from these prospective information sources in order to preemptively direct and cache resources near these resource delivery destinations (i.e., geo-locations) prior to or by the expected times (i.e., future point-in-times) during which a resource requestor and/or consumer is anticipated to be positioned at or within proximity to the resource delivery destinations. Through proactive seeding of resources, which may reflect content or service functionalities, reduced latency may be observed at least with respect to requesting the resources from the content delivery network.
Type: Grant
Filed: December 26, 2019
Date of Patent: May 28, 2024
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: James Robert King, Robert Anthony Lincourt, Jr.
-
Patent number: 11994994
Abstract: A memory device includes a memory array and a memory controller operatively coupled to the memory array. The memory array includes memory cells to store memory data. The memory controller includes a prefetch buffer, a read address buffer including memory registers to store addresses of memory read requests received from at least one separate device, and logic circuitry. The logic circuitry is configured to store extra read data in the prefetch buffer when an address of a read request is a continuous address of an address stored in the read address buffer, and omit prefetching the extra data when the address of the read request is a non-continuous address of an address stored in the read address buffer.
Type: Grant
Filed: April 25, 2022
Date of Patent: May 28, 2024
Assignee: Analog Devices International Unlimited Company
Inventors: Aniket Akshay Saraf, Kaushik Kandukuri, Thirukumaran Natrayan, Saurbh Srivastava
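The controller policy here reduces to one test: prefetch ahead only when the incoming read address continues an address already held in the read address buffer. A minimal sketch, assuming word-granular addresses and a unit stride (both assumptions, not from the patent):

```python
def should_prefetch(read_addr, read_address_buffer, stride=1):
    """True if read_addr is a continuous address of a buffered address."""
    return any(read_addr == prev + stride for prev in read_address_buffer)

def handle_read(read_addr, read_address_buffer, prefetch_buffer, stride=1):
    """Prefetch the next address only for continuous reads; always record
    the current address so later requests can match against it."""
    if should_prefetch(read_addr, read_address_buffer, stride):
        prefetch_buffer.append(read_addr + stride)  # store extra read data
    read_address_buffer.append(read_addr)
```

Gating the prefetch on continuity keeps the prefetch buffer from filling with data for random-access workloads that would never hit it.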
-
Patent number: 11947464
Abstract: A data management method causes a computer to execute processing including: creating, when a predetermined data processing program performs data processing, based on an access frequency to a data store, high-frequency state item list information obtained by listing high-frequency state items of which the access frequency is high; determining, when state information that includes a value of a high-frequency state item is written to the data store, whether or not the state information corresponds to the high-frequency state item with reference to the high-frequency state item list information; and grouping and writing pieces of the state information of a plurality of the high-frequency state items.
Type: Grant
Filed: October 5, 2022
Date of Patent: April 2, 2024
Assignee: FUJITSU LIMITED
Inventors: Julius Michaelis, Yasuhiko Kanemasa
-
Patent number: 11934673
Abstract: A data storage device includes at least one data storage medium. The data storage device also includes a workload rating associated with data access operations carried out on the at least one data storage medium. The data storage device further includes a controller configured to enable performance of the data access operations, and change a rate of consumption of the workload rating by internal device management operations carried out in the data storage device in response to a change in a workload consumed by host commands serviced by the data storage device.
Type: Grant
Filed: August 11, 2022
Date of Patent: March 19, 2024
Assignee: Seagate Technology LLC
Inventors: Abhay T. Kataria, Praveen Viraraghavan, Mark A. Gaertner
-
Patent number: 11888835
Abstract: An illustrative method includes a storage management system of a container system performing, for a worker node added to a cluster of the container system based on a first authentication of the worker node, a second authentication for the worker node, and determining, based on the second authentication, whether the worker node is authorized to perform one or more operations on a storage system associated with the cluster.
Type: Grant
Filed: June 1, 2021
Date of Patent: January 30, 2024
Assignee: Pure Storage, Inc.
Inventors: Luis Pablo Pabón, Taher Vohra, Naveen Neelakantam
-
Patent number: 11874773
Abstract: Systems, methods, and apparatuses relating to a dual spatial pattern prefetcher are described.
Type: Grant
Filed: December 28, 2019
Date of Patent: January 16, 2024
Assignee: Intel Corporation
Inventors: Rahul Bera, Anant Vithal Nori, Sreenivas Subramoney
-
Patent number: 11822984
Abstract: Implementations described herein relate to run-time management of a serverless function in a serverless computing environment. In some implementations, a method includes receiving, at a processor, based on historical run-time invocation data for the serverless function in the serverless computing environment, a first number of expected invocations of the serverless function for a first time period, determining, by the processor, based on the first number of expected invocations of the serverless function for the first time period, a second number of warm-up invocation calls to be made for the first time period, and periodically invoking the second number of instances of an extended version of the serverless function during the first time period, wherein the extended version of the serverless function is configured to load and initialize the serverless function and terminate without executing the serverless function.
Type: Grant
Filed: March 8, 2023
Date of Patent: November 21, 2023
Assignee: Sedai Inc.
Inventors: Hari Chandrasekhar, Aby Jacob, Mathew Koshy Karunattu, Nikhil Gopinath Kurup, Suresh Mathew, S Meenakshi, Sayanth S, Akash Vijayan
-
Patent number: 11797395
Abstract: A data management and storage (DMS) cluster of peer DMS nodes manages migration of an application between a primary compute infrastructure and a secondary compute infrastructure. The secondary compute infrastructure may be a failover environment for the primary compute infrastructure. Primary snapshots of virtual machines of the application in the primary compute infrastructure are generated, and provided to the secondary compute infrastructure. During a failover, the primary snapshots are deployed in the secondary compute infrastructure as virtual machines. Secondary snapshots of the virtual machines are generated, where the secondary snapshots are incremental snapshots of the primary snapshots. In failback, the secondary snapshots are provided to the primary compute infrastructure, where they are combined with the primary snapshots to construct a current state of the application, and the application is deployed in the current state by deploying virtual machines on the primary compute infrastructure.
Type: Grant
Filed: January 13, 2023
Date of Patent: October 24, 2023
Assignee: Rubrik, Inc.
Inventors: Zhicong Wang, Benjamin Meadowcroft, Biswaroop Palit, Atanu Chakraborty, Hardik Vohra, Abhay Mitra, Saurabh Goyal, Sanjari Srivastava, Swapnil Agarwal, Rahil Shah, Mudit Malpani, Janmejay Singh, Ajay Arvind Bhave, Prateek Pandey
-
Patent number: 11782714
Abstract: A method comprises receiving a current instruction for metadata processing performed in a metadata processing domain that is isolated from a code execution domain including the current instruction. The method further comprises determining, by the metadata processing domain in connection with metadata for the current instruction, whether to allow execution of the current instruction in accordance with a set of one or more policies. The one or more policies may include a set of rules that enforces execution of a complete sequence of instructions in a specified order from a first instruction of the complete sequence to a last instruction of the complete sequence. The metadata processing may be implemented by a metadata processing hierarchy comprising a control module, a masking module, a hash module, a rule cache lookup module, and/or an output tag module.
Type: Grant
Filed: July 15, 2020
Date of Patent: October 10, 2023
Assignee: THE CHARLES STARK DRAPER LABORATORY, INC.
Inventor: André DeHon
-
Patent number: 11740992
Abstract: A technique for generating component usage statistics involves associating components with blocks of a stream-enabled application. When the streaming application is executed, block requests may be logged by Block ID in a log. The frequency of component use may be estimated by analyzing the block request log with the block associations.
Type: Grant
Filed: August 17, 2021
Date of Patent: August 29, 2023
Assignee: Numecent Holdings, Inc.
Inventors: Jeffrey de Vries, Arthur S. Hitomi
-
Patent number: 11663006
Abstract: Methods and apparatuses relating to switching of a shadow stack pointer are described. In one embodiment, a hardware processor includes a hardware decode unit to decode an instruction, and a hardware execution unit to execute the instruction to: pop a token for a thread from a shadow stack, wherein the token includes a shadow stack pointer for the thread with at least one least significant bit (LSB) of the shadow stack pointer overwritten with a bit value of an operating mode of the hardware processor for the thread, remove the bit value in the at least one LSB from the token to generate the shadow stack pointer, and set a current shadow stack pointer to the shadow stack pointer from the token when the operating mode from the token matches a current operating mode of the hardware processor.
Type: Grant
Filed: June 7, 2021
Date of Patent: May 30, 2023
Assignee: Intel Corporation
Inventors: Vedvyas Shanbhogue, Jason W. Brandt, Ravi L. Sahita, Barry E. Huntley, Baiju V. Patel, Deepak K. Gupta
-
Patent number: 11645207
Abstract: A system and method for efficiently processing memory requests are described. A processing unit includes at least a processor core, a cache, and a non-cache storage buffer capable of storing data prevented from being stored in the cache. While processing a memory request targeting the non-cache storage buffer, the processor core inspects a flag stored in a tag of the memory request. The processor core prevents data prefetching into one or more of the non-cache storage buffer and the cache based on determining the flag specifies preventing data prefetching into one or more of the non-cache storage buffer and the cache using the target address of the memory request during processing of this instance of the memory request. While processing a prefetch hint instruction, the processor core determines from the tag whether to prevent prefetching.
Type: Grant
Filed: December 23, 2020
Date of Patent: May 9, 2023
Assignee: Advanced Micro Devices, Inc.
Inventors: Masab Ahmad, Derrick Allen Aguren
-
Patent number: 11635960
Abstract: A method includes receiving, for metadata processing, a current instruction with associated metadata tags. The metadata processing is performed in a metadata processing domain isolated from a code execution domain including the current instruction. Each respective associated metadata tag represents a respective policy of the composite policy. For each respective metadata tag, the method includes determining, in the metadata processing domain and in accordance with the metadata tag and the current instruction, whether a rule exists for the current instruction in a rules cache. The rules cache may include rules on metadata used by the metadata processing to define allowed instructions. The determination of whether a rule exists results in a respective output, which may include generating a new rule and inserting the new rule in the rules cache. Control Status Registers, and associated tags, may be used to accomplish the metadata processing.
Type: Grant
Filed: June 18, 2020
Date of Patent: April 25, 2023
Assignee: THE CHARLES STARK DRAPER LABORATORY, INC.
Inventor: André DeHon
-
Patent number: 11631377
Abstract: The present disclosure relates to a method for controlling a timing controller and a timing controller. The method for controlling the timing controller includes: acquiring a bus address in a bus signal transmitted over an I2C bus, the I2C bus being connected to the timing controller; if the timing controller determines that the bus address matches an address of the timing controller, acquiring data information in the bus signal; acquiring an address of a target function circuit according to the data information; generating and transmitting a query instruction to a memory according to the address of the target function circuit, and receiving switch control data corresponding to the target function circuit fed back by the memory; and controlling, according to the switch control data, a switch connected to the target function circuit to be turned on.
Type: Grant
Filed: June 10, 2020
Date of Patent: April 18, 2023
Assignees: BEIHAI HKC OPTOELECTRONICS TECHNOLOGY CO., LTD., Chongqing HKC Optoelectronics Technology Co., Ltd.
Inventor: Mingliang Wang
-
Patent number: 11625343
Abstract: Memory systems with a communications bus (and associated systems, devices, and methods) are disclosed herein. In one embodiment, a memory device includes an input/output terminal separate from data terminals of the memory device. The input/output terminal can be operably connected to a memory controller via a communications bus. The memory device can be configured to initiate a communication with the memory controller by outputting a signal via the input/output terminal and/or over the communications bus. The memory device can be configured to output the signal in accordance with a clock signal that is different from a second clock signal used to output or receive data signals via the data terminals. In some embodiments, the memory device is configured to initiate communications over the communication bus only when it possesses a communication token. The communication token can be transferred between memory devices operably connected to the communications bus.
Type: Grant
Filed: May 12, 2021
Date of Patent: April 11, 2023
Assignee: Micron Technology, Inc.
Inventor: Sujeet Ayyapureddi
-
Patent number: 11622026
Abstract: In some embodiments, an electronic device is disclosed for intelligently prefetching data via a computer network. The electronic device can include a device housing, a user interface, a memory device, and a hardware processor. The hardware processor can: communicate via a communication network; determine that the hardware processor is expected to be unable to communicate via the communication network; responsive to determining that the hardware processor is expected to be unable to communicate via the communication network, determine prefetch data to request prior to the hardware processor being unable to communicate via the communication network; request the prefetch data; receive and store the prefetch data prior to the hardware processor being unable to communicate via the communication network; and subsequent to the hardware processor being unable to communicate via the communication network, process the prefetch data with an application responsive to processing a first user input with the application.
Type: Grant
Filed: October 8, 2021
Date of Patent: April 4, 2023
Assignee: Tealium Inc.
Inventors: Craig P. Rouse, Harry Cassell, Christopher B. Slovak
-
Patent number: 11605088
Abstract: Methods and systems are presented for providing concurrent data retrieval and risk processing while evaluating a risk source of an online service provider. Upon receiving a request to evaluate the risk source, a risk analysis module may initiate one or more risk evaluation sub-processes to evaluate the risk source. Each risk evaluation sub-process may require different data related to the risk source to perform the evaluation. The risk analysis module may simultaneously retrieve the data related to the risk source and perform the one or more risk evaluation sub-processes such that the risk analysis module may complete a risk evaluation sub-process whenever the data required by the risk evaluation sub-process is made available.
Type: Grant
Filed: December 31, 2020
Date of Patent: March 14, 2023
Assignee: PayPal, Inc.
Inventors: Srinivasan Manoharan, Vinesh Poruthikottu Chirakkil
-
Patent number: 11593267
Abstract: Aspects of the present disclosure relate to asynchronous memory management. In embodiments, an input/output (IO) workload is received at a storage array. Further, one or more read-miss events corresponding to the IO workload are identified. Additionally, at least one of the storage array's cache slots is bound to a track identifier (TID) corresponding to the read-miss events based on one or more of the read-miss events' two-dimensional metrics.
Type: Grant
Filed: October 28, 2021
Date of Patent: February 28, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Ramesh Doddaiah, Malak Alshawabkeh, Rong Yu, Peng Wu
-
Patent number: 11580032
Abstract: A technique is provided for training a prediction apparatus. The apparatus has an input interface for receiving a sequence of training events indicative of program instructions, and identifier value generation circuitry for performing an identifier value generation function to generate, for a given training event received at the input interface, an identifier value for that given training event. The identifier value generation function is arranged such that the generated identifier value is dependent on at least one register referenced by a program instruction indicated by that given training event. Prediction storage is provided with a plurality of training entries, where each training entry is allocated an identifier value as generated by the identifier value generation function, and is used to maintain training data derived from training events having that allocated identifier value.
Type: Grant
Filed: January 20, 2021
Date of Patent: February 14, 2023
Assignee: Arm Limited
Inventors: Frederic Claude Marie Piry, Natalya Bondarenko, Cédric Denis Robert Airaud, Geoffray Matthieu Lacourba
-
Patent number: 11568932
Abstract: Methods and systems include memory devices with multiple memory cells configured to store data. The memory devices also include a cache configured to store at least a portion of the data to provide access to the at least the portion of the data without accessing the multiple memory cells. The memory devices also include control circuitry configured to receive a read command having a target address. Based on the target address, the control circuitry is configured to determine that the at least the portion of the data is present in the cache. Using the cache, the control circuitry also outputs read data from the cache without accessing the plurality of memory cells.
Type: Grant
Filed: February 22, 2021
Date of Patent: January 31, 2023
Assignee: Micron Technology, Inc.
Inventors: Zhongyuan Lu, Stephen H. Tang, Robert J. Gleixner
-
Patent number: 11520505
Abstract: It is desired to provide a technique capable of reducing the time and the power consumption required for computation. Provided is an information processing apparatus including a storage control unit that writes data read from a read target area of an external memory having multiple dimensions to a storage area having the multiple dimensions and a processing unit that executes processing based on the data of the storage area, in which the storage control unit moves the read target area in a first dimension direction in the external memory and performs first overwrite of a back end area of the storage area in a direction corresponding to the first dimension direction with data of a front end area of the read target area after movement in the first dimension direction.
Type: Grant
Filed: June 17, 2019
Date of Patent: December 6, 2022
Assignee: SONY CORPORATION
Inventor: Yuji Takahashi
-
Patent number: 11481220
Abstract: An apparatus comprises instruction fetch circuitry to retrieve instructions from storage and branch target storage to store entries comprising source and target addresses for branch instructions. A confidence value is stored with each entry and when a current address matches a source address in an entry, and the confidence value exceeds a confidence threshold, instruction fetch circuitry retrieves a predicted next instruction from a target address in the entry. Branch confidence update circuitry increases the confidence value of the entry on receipt of a confirmation of the target address and decreases the confidence value on receipt of a non-confirmation of the target address. When the confidence value meets a confidence lock threshold below the confidence threshold and non-confirmation of the target address is received, a locking mechanism with respect to the entry is triggered. A corresponding method is also provided.
Type: Grant
Filed: October 26, 2016
Date of Patent: October 25, 2022
Assignee: Arm Limited
Inventors: Alexander Alfred Hornung, Adrian Viorel Popescu
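The confidence scheme above can be sketched with saturating-style counters: confirmations raise an entry's confidence, non-confirmations lower it, and a non-confirmation arriving while confidence sits at or below the lock threshold triggers the lock. The threshold values below are invented for illustration; the patent does not specify them.

```python
CONF_THRESHOLD = 4   # predict from the entry only above this
LOCK_THRESHOLD = 1   # below CONF_THRESHOLD; non-confirmation here locks

class BranchEntry:
    """One branch target storage entry: source/target plus confidence."""

    def __init__(self, source, target):
        self.source, self.target = source, target
        self.confidence = 0
        self.locked = False

    def use_prediction(self):
        """Fetch from the stored target only when confidence is high."""
        return self.confidence > CONF_THRESHOLD

    def update(self, confirmed):
        """Confirmation raises confidence; non-confirmation lowers it,
        and triggers the locking mechanism at the lock threshold."""
        if confirmed:
            self.confidence += 1
        else:
            if self.confidence <= LOCK_THRESHOLD:
                self.locked = True
            self.confidence = max(0, self.confidence - 1)
```

Requiring several confirmations before predicting, and locking on repeated failure near zero, keeps a persistently wrong entry from oscillating in and out of use.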
-
Patent number: 11430523
Abstract: A storage device is provided. The storage device includes a nonvolatile memory device including a first block and a second block, and a controller including processing circuitry configured to predict a number of writes to be performed on the nonvolatile memory device using a machine learning model, determine a type of reclaim command based on the predicted number of writes, the reclaim command being for reclaiming data of the first block to the second block, and issue the reclaim command.
Type: Grant
Filed: September 29, 2020
Date of Patent: August 30, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Jin Woo Hong, Chan Ha Kim, Yun Jung Lee
-
Patent number: 11360898
Abstract: This technology relates to a method and apparatus for improving I/O throughput through an interleaving operation for multiple memory dies of a memory system. A memory system may include: multiple memory dies suitable for outputting data of different sizes in response to a read request; and a controller in communication with the multiple memory dies through multiple channels, and suitable for: performing a correlation operation on the read request so that the multiple memory dies interleave and output target data corresponding to the read request through the multiple channels, determining a pending credit using a result of the correlation operation, and reading, from the multiple memory dies, the target data corresponding to the read request and additional data stored in a same storage unit as the target data, based on a type of the target data corresponding to the read request and the pending credit.
Type: Grant
Filed: April 23, 2020
Date of Patent: June 14, 2022
Assignee: SK hynix Inc.
Inventor: Jeen Park
-
Patent number: 11347523
Abstract: Techniques include executing a software program having a function call to a shared library and reloading the shared library without stopping execution of the software program. A global offset table (GOT) is updated responsive to resolving a link address associated with the function call. An entry in the GOT includes a link address field, an index field, and a resolved field; the updating includes updating the index field with an affirmative value and marking the resolved field with an affirmative flag for the entry in the GOT. Responsive to reloading the shared library, the entry in the GOT is found having the affirmative value in the index field and the affirmative flag in the resolved field. An address value in the link address field is returned for the entry having the affirmative value in the index field, responsive to a subsequent execution of the function call to the shared library.
Type: Grant
Filed: November 5, 2020
Date of Patent: May 31, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Xiao Ling Chen, Zhan Peng Huo, Yong Yin, Dong Hui Liu, Qi Li, Jia Yu, Jiang Yi Liu, Xiao Xuan Fu, Cheng Fang Wang
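The GOT bookkeeping described here can be modeled in a few lines: each entry carries a link address, an index, and a resolved flag, and once an entry is marked resolved its cached link address is returned instead of invoking the resolver again, even across a library reload. This is a hypothetical software model of the idea, not an actual dynamic linker.

```python
class GotEntry:
    """A GOT entry: link address field, index field, resolved field."""

    def __init__(self):
        self.link_address = None
        self.index = -1          # affirmative value (>= 0) once resolved
        self.resolved = False

def resolve(got, name, index, resolver):
    """got: dict mapping symbol name -> GotEntry; resolver(name) -> address.
    The resolver is only invoked for entries not yet marked resolved."""
    entry = got.setdefault(name, GotEntry())
    if entry.resolved and entry.index >= 0:
        return entry.link_address          # reuse across a library reload
    entry.link_address = resolver(name)
    entry.index, entry.resolved = index, True
    return entry.link_address
```

The affirmative index plus resolved flag is what lets the reloaded library's entries be recognized and their addresses reused without a second resolution pass.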
-
Patent number: 11288074
Abstract: Representative apparatus, method, and system embodiments are disclosed for configurable computing. A representative system includes an interconnection network; a processor; and a plurality of configurable circuit clusters. Each configurable circuit cluster includes a plurality of configurable circuits arranged in an array; a synchronous network coupled to each configurable circuit of the array; and an asynchronous packet network coupled to each configurable circuit of the array.
Type: Grant
Filed: March 31, 2019
Date of Patent: March 29, 2022
Assignee: Micron Technology, Inc.
Inventor: Tony M. Brewer
-
Patent number: 11204868
Abstract: The present application discloses a memory control method, a controller, a chip and an electronic device, and relates to the field of control technology. A specific implementation includes: obtaining first address information of an access to the memory performed by the processor within a first time window; determining, according to the first address information and an address jump relationship, a target slice of the memory that is to be accessed by the processor within a second time window; and controlling the target slice in the memory to be turned on and controlling a slice other than the target slice in the memory to be turned off within the second time window. Through this process, each slice is dynamically turned on and off according to the actual pattern of memory access, thereby reducing the power consumption of the memory to the maximum extent.
Type: Grant
Filed: March 19, 2020
Date of Patent: December 21, 2021
Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
Inventors: Bibo Yang, Xiaoping Yan, Chao Tian, Junhui Wen
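The two-window scheme above can be sketched as: learn an address-jump relationship (which slice tends to be touched after which) from the first window, then power on only the predicted target slices for the second window. This is a minimal sketch; the slice size, the table-learning rule, and all names are illustrative assumptions, not the patent's implementation.

```python
# Sketch of slice prediction from an address-jump relationship.
# SLICE_SIZE and the learning rule are illustrative assumptions.

SLICE_SIZE = 4096  # bytes per memory slice (assumed)

def slice_of(addr):
    return addr // SLICE_SIZE

def learn_jumps(window1_addresses):
    """First time window: record which slice follows which."""
    jumps = {}
    slices = [slice_of(a) for a in window1_addresses]
    for cur, nxt in zip(slices, slices[1:]):
        jumps[cur] = nxt
    return jumps

def predict_targets(jumps, recent_addresses):
    """Slices to keep turned on for the second time window;
    all other slices would be turned off to save power."""
    targets = set()
    for a in recent_addresses:
        s = slice_of(a)
        targets.add(jumps.get(s, s))  # fall back to the current slice
    return targets

jumps = learn_jumps([0x0000, 0x1000, 0x2000, 0x1000, 0x2000])
powered_on = predict_targets(jumps, [0x1000])
```

Here accesses to slice 1 were always followed by slice 2 in the first window, so only slice 2 is predicted as a target for the second window.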
-
Patent number: 11093401
Abstract: Various aspects provide for facilitating prediction of instruction pipeline hazards in a processor system. A system comprises a fetch component and an execution component. The fetch component is configured for storing a hazard prediction entry associated with a group of memory access instructions in a buffer associated with branch prediction. The execution component is configured for executing a memory access instruction associated with the group of memory access instructions as a function of the hazard prediction entry. In an aspect, the hazard prediction entry is configured for predicting whether the group of memory access instructions is associated with an instruction pipeline hazard.
Type: Grant
Filed: March 11, 2014
Date of Patent: August 17, 2021
Assignee: Ampere Computing LLC
Inventors: Matthew Ashcraft, Richard W. Thaik
-
Patent number: 11093601
Abstract: Embodiments described herein enable the interoperability between processes configured for pointer authentication and processes that are not configured for pointer authentication. Enabling the interoperability between such processes enables essential libraries, such as system libraries, to be compiled with pointer authentication, while enabling those libraries to still be used by processes that have not yet been compiled or configured to use pointer authentication.
Type: Grant
Filed: October 25, 2019
Date of Patent: August 17, 2021
Assignee: Apple Inc.
Inventors: Bernard J. Semeria, Devon S. Andrade, Jeremy C. Andrus, Ahmed Bougacha, Peter Cooper, Jacques Fortier, Louis G. Gerbarg, James H. Grosbach, Robert J. McCall, Daniel A. Steffen, Justin R. Unger
-
Patent number: 11068397
Abstract: Disclosed aspects relate to accelerator sharing among a plurality of processors through a plurality of coherent proxies. The cache lines in a cache associated with the accelerator are allocated to one of the plurality of coherent proxies. In a cache directory for the cache lines used by the accelerator, the status of the cache lines and the identification information of the coherent proxies to which the cache lines are allocated are provided. Each coherent proxy maintains a shadow directory of the cache directory for the cache lines allocated to it. In response to receiving an operation request, a coherent proxy corresponding to the request is determined. The accelerator communicates with the determined coherent proxy for the request.
Type: Grant
Filed: April 4, 2019
Date of Patent: July 20, 2021
Assignee: International Business Machines Corporation
Inventors: Peng Fei BG Gou, Yang Liu, Yang Fan EL Liu, Yong Lu
-
Patent number: 10929136
Abstract: Branch prediction techniques are described that can improve the performance of pipelined microprocessors. A microprocessor with a hierarchical branch prediction structure is presented. The hierarchy of branch predictors includes: a multi-cycle predictor that provides very accurate branch predictions, but with a latency of multiple cycles; a small and simple branch predictor that can provide branch predictions for a subset of instructions with zero-cycle latency; and a fast, intermediate-level branch predictor that provides relatively accurate branch prediction, while still having a low, but non-zero instruction prediction latency of only one cycle, for example. To improve operation, the higher-accuracy, higher-latency branch direction predictor and the fast, lower-latency branch direction predictor can share a common target predictor.
Type: Grant
Filed: April 11, 2018
Date of Patent: February 23, 2021
Assignee: Futurewei Technologies, Inc.
Inventors: Shiwen Hu, Wei Yu Chen, Michael Chow, Qian Wang, Yongbin Zhou, Lixia Yang, Ning Yang
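The three-level hierarchy above can be sketched as a lookup cascade: try the zero-cycle micro-predictor first, then the one-cycle intermediate predictor, then the multi-cycle accurate predictor, with a single shared target table. This is a toy model under stated assumptions: the table structures, the 3-cycle figure for the slowest level, and all names are illustrative, not the patent's hardware design.

```python
# Toy model of a three-level branch-predictor hierarchy with a
# shared target predictor. Structures and latencies are assumed.

class HierarchicalPredictor:
    def __init__(self):
        self.l0 = {}       # pc -> taken? (tiny subset, zero-cycle latency)
        self.l1 = {}       # pc -> taken? (intermediate, one-cycle latency)
        self.l2 = {}       # pc -> taken? (most accurate, multi-cycle)
        self.targets = {}  # pc -> target address, shared by all levels

    def predict(self, pc):
        """Return (taken, target, latency_cycles) from the fastest
        level that has an entry for this branch."""
        for table, latency in ((self.l0, 0), (self.l1, 1), (self.l2, 3)):
            if pc in table:
                return table[pc], self.targets.get(pc), latency
        return False, None, 3  # default: not-taken at full latency

p = HierarchicalPredictor()
p.targets[0x400] = 0x480   # one shared target entry serves every level
p.l1[0x400] = True         # only the one-cycle level knows this branch
taken, target, latency = p.predict(0x400)
```

Because the target table is shared, promoting a branch from the slow level to a faster level needs no duplicate target storage, which mirrors the abstract's point about the direction predictors sharing a common target predictor.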
-
Patent number: 10908934
Abstract: A simulation method performed by a computer for simulating operations by a plurality of cores based on resource access operation descriptions on the plurality of cores. The method includes the steps of: extracting a resource access operation description on at least one core of the plurality of cores by executing simulation for the one core; and, under a condition where the one core and a second core among the plurality of cores have a specific relation in execution processing, generating a resource access operation description on the second core from the resource access operation description on the one core by reflecting an address difference between an address of a resource that the one core accesses and an address of a resource that the second core accesses.
Type: Grant
Filed: July 3, 2018
Date of Patent: February 2, 2021
Assignee: FUJITSU LIMITED
Inventors: Katsuhiro Yoda, Takahiro Notsu, Mitsuru Tomono
-
Patent number: 10908912
Abstract: A method for redirecting an indirect call in an operating system kernel to a direct call is disclosed. The direct calls are contained in trampoline code called an inline jump switch (IJS) or an outline jump switch (OJS). The IJS and OJS can operate in a use mode (redirecting an indirect call to a direct call), a learning-and-update mode, or a fallback mode. In the learning-and-update mode, target addresses in a trampoline code template are learned and updated by a jump switch worker thread that periodically runs as a kernel process. When building the kernel binary, a plug-in is integrated into the kernel. The plug-in replaces call sites with a trampoline code template containing a direct call so that the template can be later updated by the jump switch worker thread.
Type: Grant
Filed: July 24, 2019
Date of Patent: February 2, 2021
Assignee: VMWARE, INC.
Inventors: Nadav Amit, Frederick Joseph Jacobs, Michael Wei
-
Patent number: 10901710
Abstract: Processor hardware detects when memory aliasing occurs, and assures proper operation of the code even in the presence of memory aliasing. The processor defines a special store instruction that is different from a regular store instruction. The special store instruction is used in regions of the computer program where memory aliasing may occur. Because the hardware can detect and correct for memory aliasing, this allows a compiler to make optimizations such as register promotion even in regions of the code where memory aliasing may occur.
Type: Grant
Filed: August 16, 2019
Date of Patent: January 26, 2021
Assignee: International Business Machines Corporation
Inventors: Srinivasan Ramani, Rohit Taneja
-
Patent number: 10901951
Abstract: A processing module of a memory storage unit includes an interface configured to interface and communicate with a communication system, a memory that stores operational instructions, and processing circuitry operably coupled to the interface and to the memory that is configured to execute the operational instructions to manage data stored using append-only formatting. When the processing module determines that a section of the memory includes invalid data and the amount of invalid data compares unfavorably to a predetermined limit, the processing module determines a rate for execution of a compaction routine for the section of memory, where the rate is based on a proportional, integral, and derivative (PID) function that is based on a target usage level of the memory and a current usage level of the memory.
Type: Grant
Filed: July 17, 2018
Date of Patent: January 26, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Ethan S. Wozniak, Praveen Viraraghavan, Ilya Volvovski
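Driving a compaction rate with a PID function of the gap between current and target usage, as this abstract describes, follows the standard PID shape: rate = Kp·error + Ki·∫error + Kd·Δerror. A minimal sketch, assuming illustrative gains and treating usage as a 0–1 fraction (none of these constants come from the patent):

```python
# Sketch of a PID-controlled compaction rate. Gains, units, and the
# clamp to non-negative rates are illustrative assumptions.

class PidCompactionRate:
    def __init__(self, kp=1.0, ki=0.1, kd=0.5):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_usage, current_usage):
        """Return a non-negative compaction rate (e.g. sections/second)."""
        error = current_usage - target_usage   # positive when over target
        self.integral += error                 # accumulated (integral) term
        derivative = error - self.prev_error   # change since last sample
        self.prev_error = error
        rate = (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
        return max(0.0, rate)                  # never compact at a negative rate

pid = PidCompactionRate()
r1 = pid.update(target_usage=0.70, current_usage=0.90)  # 20% over target
r2 = pid.update(target_usage=0.70, current_usage=0.60)  # now under target
```

The proportional term reacts to how far usage overshoots the target, the integral term compacts harder the longer the overshoot persists, and the derivative term damps the response as usage falls back toward the target.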
-
Patent number: 10893096
Abstract: Embodiments for optimizing dynamic resource allocations in a disaggregated computing environment. A data heat map associated with a data access pattern of data elements associated with a workload is maintained. The workload is classified into one of a plurality of classes, each class characterized by the data access pattern associated with the workload. The workload is then assigned to a dynamically constructed disaggregated system optimized with resources according to the one of the plurality of classes the workload is classified into to increase efficiency during a performance of the workload.
Type: Grant
Filed: May 17, 2018
Date of Patent: January 12, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: John A. Bivens, Ruchi Mahindru, Eugen Schenfeld, Min Li, Valentina Salapura
-
Patent number: 10884747
Abstract: Prediction of an affiliated register. A determination is made as to whether an affiliated register is to be predicted for a particular branch instruction. The affiliated register is a register, separate from a target address register, selected to store a predicted target address based on prediction of a target address. Based on determining that the affiliated register is to be predicted, predictive processing is performed. The predictive processing includes providing the predicted target address in a location associated with the affiliated register.
Type: Grant
Filed: August 18, 2017
Date of Patent: January 5, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael K. Gschwind, Valentina Salapura
-
Patent number: 10884929
Abstract: A Set Table of Contents (TOC) Register instruction. An instruction to provide a pointer to a reference data structure, such as a TOC, is obtained by a processor and executed. The executing includes determining a value for the pointer to the reference data structure, and storing the value in a location (e.g., a register) specified by the instruction.
Type: Grant
Filed: September 19, 2017
Date of Patent: January 5, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael K. Gschwind, Valentina Salapura
-
Patent number: 10877889
Abstract: Techniques for implementing and/or operating an apparatus, which includes a processing system communicatively coupled to a memory system via a memory bus. The processing system includes processing circuitry, one or more caches, and a memory controller. When a data block targeted by the processing circuitry results in a processor-side miss, the memory controller instructs the processing system to output a memory access request that requests return of the data block. The request is output at least in part by outputting, during a first clock cycle, an access parameter to be used by the memory system to locate the data block in one or more hierarchical memory levels, and outputting, during a second clock cycle different from the first clock cycle, a context parameter indicative of context information associated with current targeting of the data block, to enable the memory system to predictively control data storage based at least in part on the context information.
Type: Grant
Filed: May 16, 2019
Date of Patent: December 29, 2020
Assignee: Micron Technology, Inc.
Inventor: David Andrew Roberts
-
Patent number: 10853270
Abstract: A computing device includes technologies for securing indirect addresses (e.g., pointers) that are used by a processor to perform memory access (e.g., read/write/execute) operations. The computing device encodes the indirect address using metadata and a cryptographic algorithm. The metadata may be stored in an unused portion of the indirect address.
Type: Grant
Filed: December 17, 2019
Date of Patent: December 1, 2020
Assignee: INTEL CORPORATION
Inventors: David M. Durham, Baiju Patel
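Storing metadata in the unused portion of an indirect address typically exploits the fact that 64-bit machines use fewer than 64 address bits (commonly 48 canonical bits). A minimal sketch of just the bit-packing step, with the cryptographic step omitted; the 48-bit layout and all names are illustrative assumptions, not Intel's encoding:

```python
# Sketch: pack metadata into the unused upper bits of a 64-bit
# pointer. Bit layout is an assumption; the patent's cryptographic
# transformation of the address is not modeled here.

CANONICAL_BITS = 48                   # architecturally used address bits (assumed)
ADDR_MASK = (1 << CANONICAL_BITS) - 1

def encode_pointer(addr, metadata):
    """Store metadata (e.g. a size class or tag) above bit 47."""
    assert addr == addr & ADDR_MASK, "address must fit in 48 bits"
    return (metadata << CANONICAL_BITS) | addr

def decode_pointer(encoded):
    """Recover (address, metadata) from an encoded pointer.
    Hardware would strip the metadata before the memory access."""
    return encoded & ADDR_MASK, encoded >> CANONICAL_BITS

enc = encode_pointer(0x7FFFDEADBEEF, metadata=0x2A)
addr, meta = decode_pointer(enc)
```

An encoded pointer of this form is non-canonical, so dereferencing it without decoding faults; that property is what lets metadata-carrying pointers act as a tamper check.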
-
Patent number: 10846093
Abstract: In one embodiment, an apparatus includes: a value prediction storage including a plurality of entries each to store address information of an instruction, a value prediction for the instruction and a confidence value for the value prediction; and a control circuit coupled to the value prediction storage. In response to an instruction address of a first instruction, the control circuit is to access a first entry of the value prediction storage to obtain a first value prediction associated with the first instruction and control execution of a second instruction based at least in part on the first value prediction. Other embodiments are described and claimed.
Type: Grant
Filed: December 21, 2018
Date of Patent: November 24, 2020
Assignee: Intel Corporation
Inventors: Sumeet Bandishte, Jayesh Gaur, Sreenivas Subramoney, Hong Wang
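The value prediction storage described here, entries keyed by instruction address holding a predicted value plus a confidence counter, can be modeled with a classic last-value predictor. A minimal sketch: the saturating-counter width, the use threshold, and all names are assumptions, not the patent's design.

```python
# Toy last-value predictor with per-entry confidence, modeling the
# value prediction storage above. Thresholds and widths are assumed.

CONF_MAX = 3        # saturating confidence ceiling (assumed)
USE_THRESHOLD = 2   # confidence required before a prediction is consumed

class ValuePredictor:
    def __init__(self):
        self.table = {}  # instruction address -> [predicted_value, confidence]

    def predict(self, pc):
        """Return the predicted value, or None if confidence is too low
        (dependent instructions would then wait for the real result)."""
        entry = self.table.get(pc)
        if entry is not None and entry[1] >= USE_THRESHOLD:
            return entry[0]
        return None

    def train(self, pc, actual_value):
        """Update the entry once the instruction's real result is known."""
        entry = self.table.setdefault(pc, [actual_value, 0])
        if entry[0] == actual_value:
            entry[1] = min(CONF_MAX, entry[1] + 1)   # strengthen confidence
        else:
            entry[0], entry[1] = actual_value, 0     # replace, reset confidence

vp = ValuePredictor()
for _ in range(3):
    vp.train(0x400, 7)       # the same result three times builds confidence
pred = vp.predict(0x400)
```

Once confidence clears the threshold, a dependent second instruction can be issued speculatively with the predicted value, which is the "control execution of a second instruction based at least in part on the first value prediction" behavior in the claim.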