Generating Prefetch, Look-ahead, Jump, Or Predictive Address Patents (Class 711/213)
  • Patent number: 11093601
    Abstract: Embodiments described herein enable interoperability between processes configured for pointer authentication and processes that are not. This interoperability allows essential libraries, such as system libraries, to be compiled with pointer authentication while remaining usable by processes that have not yet been compiled or configured to use pointer authentication.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: August 17, 2021
    Assignee: Apple Inc.
    Inventors: Bernard J. Semeria, Devon S. Andrade, Jeremy C. Andrus, Ahmed Bougacha, Peter Cooper, Jacques Fortier, Louis G. Gerbarg, James H. Grosbach, Robert J. McCall, Daniel A. Steffen, Justin R. Unger
  • Patent number: 11093401
    Abstract: Various aspects provide for facilitating prediction of instruction pipeline hazards in a processor system. A system comprises a fetch component and an execution component. The fetch component is configured for storing a hazard prediction entry associated with a group of memory access instructions in a buffer associated with branch prediction. The execution component is configured for executing a memory access instruction associated with the group of memory access instructions as a function of the hazard prediction entry. In an aspect, the hazard prediction entry is configured for predicting whether the group of memory access instructions is associated with an instruction pipeline hazard.
    Type: Grant
    Filed: March 11, 2014
    Date of Patent: August 17, 2021
    Assignee: Ampere Computing LLC
    Inventors: Matthew Ashcraft, Richard W. Thaik
  • Patent number: 11068397
    Abstract: Disclosed aspects relate to accelerator sharing among a plurality of processors through a plurality of coherent proxies. The cache lines in a cache associated with the accelerator are allocated to one of the plurality of coherent proxies. In a cache directory for the cache lines used by the accelerator, the status of the cache lines and the identification information of the coherent proxies to which the cache lines are allocated are provided. Each coherent proxy maintains a shadow directory of the cache directory for the cache lines allocated to it. In response to receiving an operation request, a coherent proxy corresponding to the request is determined. The accelerator communicates with the determined coherent proxy for the request.
    Type: Grant
    Filed: April 4, 2019
    Date of Patent: July 20, 2021
    Assignee: International Business Machines Corporation
    Inventors: Peng Fei BG Gou, Yang Liu, Yang Fan EL Liu, Yong Lu
  • Patent number: 10929136
    Abstract: Branch prediction techniques are described that can improve the performance of pipelined microprocessors. A microprocessor with a hierarchical branch prediction structure is presented. The hierarchy of branch predictors includes: a multi-cycle predictor that provides very accurate branch predictions, but with a latency of multiple cycles; a small and simple branch predictor that can provide branch predictions for a sub-set of instructions with zero-cycle latency; and a fast, intermediate level branch predictor that provides relatively accurate branch prediction, while still having a low, but non-zero instruction prediction latency of only one cycle, for example. To improve operation, the higher accuracy, higher latency branch direction predictor and the fast, lower latency branch direction predictor can share a common target predictor.
    Type: Grant
    Filed: April 11, 2018
    Date of Patent: February 23, 2021
    Assignee: Futurewei Technologies, Inc.
    Inventors: Shiwen Hu, Wei Yu Chen, Michael Chow, Qian Wang, Yongbin Zhou, Lixia Yang, Ning Yang
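    A minimal sketch (not from the patent) of the hierarchical-predictor idea in the abstract above: a tiny zero-cycle predictor answers immediately, a larger one-cycle predictor can override it, and both share a common target array. All table sizes, names, and the arbitration policy are illustrative assumptions.
    ```c
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define FAST_ENTRIES 64    /* tiny zero-cycle predictor (assumed size) */
    #define MID_ENTRIES  1024  /* one-cycle intermediate predictor (assumed size) */

    static uint8_t  fast_ctr[FAST_ENTRIES];     /* 2-bit saturating counters */
    static uint8_t  mid_ctr[MID_ENTRIES];
    static uint64_t shared_target[MID_ENTRIES]; /* target predictor shared by both levels */

    /* Zero-cycle direction guess from the small, simple predictor. */
    static bool predict_fast(uint64_t pc) { return fast_ctr[pc % FAST_ENTRIES] >= 2; }

    /* More accurate guess that arrives a cycle later and may override the fast one. */
    static bool predict_mid(uint64_t pc)  { return mid_ctr[pc % MID_ENTRIES] >= 2; }

    /* Both direction predictors index the same target array. */
    static uint64_t predict_target(uint64_t pc) { return shared_target[pc % MID_ENTRIES]; }

    /* Front end: fetch with the fast prediction now; a disagreement with the
     * intermediate predictor costs only a one-cycle re-steer, not a full flush. */
    static uint64_t next_fetch_pc(uint64_t pc, uint64_t fallthrough, bool *needs_resteer) {
        bool fast_taken = predict_fast(pc);
        bool mid_taken  = predict_mid(pc);
        *needs_resteer  = (fast_taken != mid_taken);
        return mid_taken ? predict_target(pc) : fallthrough;
    }

    int main(void) {
        bool resteer;
        shared_target[0x4000 % MID_ENTRIES] = 0x5000;
        fast_ctr[0x4000 % FAST_ENTRIES] = 3;  /* fast predictor says taken */
        mid_ctr[0x4000 % MID_ENTRIES]  = 0;   /* intermediate predictor disagrees */
        uint64_t next = next_fetch_pc(0x4000, 0x4004, &resteer);
        printf("next fetch 0x%llx, resteer=%d\n", (unsigned long long)next, resteer);
        return 0;
    }
    ```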
  • Patent number: 10908912
    Abstract: A method for redirecting an indirect call in an operating system kernel to a direct call is disclosed. The direct calls are contained in trampoline code called an inline jump switch (IJS) or an outline jump switch (OJS). An IJS or OJS can operate in a use mode, in which it redirects an indirect call to a direct call, a learning and update mode, or a fallback mode. In the learning and update mode, target addresses in a trampoline code template are learned and updated by a jump switch worker thread that periodically runs as a kernel process. When building the kernel binary, a plug-in is integrated into the kernel. The plug-in replaces call sites with a trampoline code template containing a direct call so that the template can be later updated by the jump switch worker thread.
    Type: Grant
    Filed: July 24, 2019
    Date of Patent: February 2, 2021
    Assignee: VMWARE, INC.
    Inventors: Nadav Amit, Frederick Joseph Jacobs, Michael Wei
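    A minimal sketch (not from the patent) of the compare-and-direct-call pattern a jump switch implements. The real mechanism patches trampoline code in the kernel binary; here the "learned" hot target is just a variable, and the function names are hypothetical.
    ```c
    #include <stdio.h>

    /* Two possible callees reached through a function pointer. */
    static void flush_a(void) { puts("flush_a"); }
    static void flush_b(void) { puts("flush_b"); }

    typedef void (*flush_fn)(void);

    /* "Learned" hot target; in the patent this is patched into the trampoline
     * by a jump switch worker thread, here it is simply a variable (assumption). */
    static flush_fn learned_target = flush_a;

    /* Jump-switch style call site: compare against the learned target and take a
     * direct, predictable call when it matches; otherwise fall back to the
     * original indirect call. */
    static void jump_switch_call(flush_fn target) {
        if (target == learned_target) {
            flush_a();       /* direct call fast path (learned_target is flush_a here) */
        } else {
            target();        /* fallback: indirect call */
        }
    }

    int main(void) {
        jump_switch_call(flush_a);  /* hits the direct path */
        jump_switch_call(flush_b);  /* falls back to the indirect call */
        return 0;
    }
    ```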
  • Patent number: 10908934
    Abstract: A simulation method performed by a computer for simulating operations by a plurality of cores based on resource access operation descriptions on the plurality of cores, the method includes steps of: extracting a resource access operation description on at least one core of the plurality of cores by executing simulation for the one core; and, under a condition where the one core and a second core among the plurality of cores have a specific relation in execution processing, generating a resource access operation description on the second core from the resource access operation description on the one core by reflecting an address difference between the address of a resource accessed by the one core and the address of a resource accessed by the second core.
    Type: Grant
    Filed: July 3, 2018
    Date of Patent: February 2, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Katsuhiro Yoda, Takahiro Notsu, Mitsuru Tomono
  • Patent number: 10901951
    Abstract: A processing module of a memory storage unit includes an interface configured to interface and communicate with a communication system, a memory that stores operational instructions, and processing circuitry operably coupled to the interface and to the memory that is configured to execute the operational instructions to manage data stored using append-only formatting. When the processing module determines that a section of the memory includes invalid data and the amount of invalid data compares unfavorably to a predetermined limit, the processing module determines a rate for execution of a compaction routine for the section of memory, where the rate is based on a proportional, integral, and derivative (PID) function that is based on a target usage level of the memory and a current usage level of the memory.
    Type: Grant
    Filed: July 17, 2018
    Date of Patent: January 26, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ethan S. Wozniak, Praveen Viraraghavan, Ilya Volvovski
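    A minimal sketch (not from the patent) of deriving a compaction rate from a PID function of target versus current memory usage, as the abstract above describes. The gains, units, and update cadence are illustrative assumptions.
    ```c
    #include <stdio.h>

    /* Illustrative PID gains; the patent does not specify values. */
    static const double KP = 0.8, KI = 0.1, KD = 0.3;

    typedef struct {
        double integral;
        double prev_error;
    } pid_state;

    /* Error is how far current usage sits above the target; the output is the
     * rate (e.g. sections compacted per second, an assumed unit) at which the
     * compaction routine should run. */
    static double compaction_rate(pid_state *s, double target_usage, double current_usage) {
        double error = current_usage - target_usage;
        s->integral += error;
        double derivative = error - s->prev_error;
        s->prev_error = error;
        double rate = KP * error + KI * s->integral + KD * derivative;
        return rate > 0.0 ? rate : 0.0;   /* never a negative compaction rate */
    }

    int main(void) {
        pid_state s = {0.0, 0.0};
        double usage = 0.90;                      /* 90% of memory in use */
        for (int tick = 0; tick < 5; tick++) {
            double rate = compaction_rate(&s, 0.70, usage);
            printf("tick %d: usage %.2f -> rate %.3f\n", tick, usage, rate);
            usage -= 0.02;                        /* pretend compaction reclaims space */
        }
        return 0;
    }
    ```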
  • Patent number: 10901710
    Abstract: Processor hardware detects when memory aliasing occurs, and assures proper operation of the code even in the presence of memory aliasing. The processor defines a special store instruction that is different from a regular store instruction. The special store instruction is used in regions of the computer program where memory aliasing may occur. Because the hardware can detect and correct for memory aliasing, this allows a compiler to make optimizations such as register promotion even in regions of the code where memory aliasing may occur.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: January 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Srinivasan Ramani, Rohit Taneja
  • Patent number: 10893096
    Abstract: Embodiments for optimizing dynamic resource allocations in a disaggregated computing environment. A data heat map associated with a data access pattern of data elements associated with a workload is maintained. The workload is classified into one of a plurality of classes, each class characterized by the data access pattern associated with the workload. The workload is then assigned to a dynamically constructed disaggregated system optimized with resources according to the class into which the workload is classified, to increase efficiency during performance of the workload.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: January 12, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: John A. Bivens, Ruchi Mahindru, Eugen Schenfeld, Min Li, Valentina Salapura
  • Patent number: 10884929
    Abstract: A Set Table of Contents (TOC) Register instruction. An instruction to provide a pointer to a reference data structure, such as a TOC, is obtained by a processor and executed. The executing includes determining a value for the pointer to the reference data structure, and storing the value in a location (e.g., a register) specified by the instruction.
    Type: Grant
    Filed: September 19, 2017
    Date of Patent: January 5, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura
  • Patent number: 10884747
    Abstract: Prediction of an affiliated register. A determination is made as to whether an affiliated register is to be predicted for a particular branch instruction. The affiliated register is a register, separate from a target address register, selected to store a predicted target address based on prediction of a target address. Based on determining that the affiliated register is to be predicted, predictive processing is performed. The predictive processing includes providing the predicted target address in a location associated with the affiliated register.
    Type: Grant
    Filed: August 18, 2017
    Date of Patent: January 5, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura
  • Patent number: 10877889
    Abstract: Techniques for implementing and/or operating an apparatus, which includes a processing system communicatively coupled to a memory system via a memory bus. The processing system includes processing circuitry, one or more caches, and a memory controller. When an access to a data block targeted by the processing circuitry results in a processor-side miss, the memory controller instructs the processing system to output a memory access request that requests return of the data block. The request is output at least in part by providing, during a first clock cycle, an access parameter to be used by the memory system to locate the data block in one or more hierarchical memory levels, and by providing, during a second clock cycle different from the first clock cycle, a context parameter indicative of context information associated with the current targeting of the data block, which enables the memory system to predictively control data storage based at least in part on the context information.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: December 29, 2020
    Assignee: Micron Technology, Inc.
    Inventor: David Andrew Roberts
  • Patent number: 10853270
    Abstract: A computing device includes technologies for securing indirect addresses (e.g., pointers) that are used by a processor to perform memory access (e.g., read/write/execute) operations. The computing device encodes the indirect address using metadata and a cryptographic algorithm. The metadata may be stored in an unused portion of the indirect address.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: December 1, 2020
    Assignee: INTEL CORPORATION
    Inventors: David M. Durham, Baiju Patel
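    A minimal sketch (not from the patent) of storing metadata in the unused upper bits of a 64-bit pointer and scrambling the address bits with a keyed transform. The keyed XOR below is an explicit stand-in for the patent's cryptographic algorithm, and the 48-bit address assumption reflects common 64-bit platforms.
    ```c
    #include <stdint.h>
    #include <stdio.h>
    #include <assert.h>

    /* On common 64-bit systems only the low 48 bits of a user pointer are
     * significant; the top 16 bits are "unused" and can hold metadata. */
    #define ADDR_BITS 48
    #define ADDR_MASK ((UINT64_C(1) << ADDR_BITS) - 1)

    /* Stand-in for a cryptographic transform: a keyed XOR over the address bits.
     * A real design would use a proper tweakable cipher. */
    static const uint64_t key = 0x5bd1e9955bd1e995ULL;

    static uint64_t encode_pointer(void *p, uint16_t metadata) {
        uint64_t addr = (uint64_t)(uintptr_t)p & ADDR_MASK;
        uint64_t scrambled = addr ^ (key & ADDR_MASK);
        return ((uint64_t)metadata << ADDR_BITS) | scrambled;
    }

    static void *decode_pointer(uint64_t enc, uint16_t *metadata_out) {
        *metadata_out = (uint16_t)(enc >> ADDR_BITS);
        uint64_t addr = (enc & ADDR_MASK) ^ (key & ADDR_MASK);
        return (void *)(uintptr_t)addr;
    }

    int main(void) {
        int x = 42;
        uint16_t meta;
        uint64_t enc = encode_pointer(&x, 0x7);   /* tag the pointer with metadata 0x7 */
        int *p = decode_pointer(enc, &meta);
        assert(p == &x && meta == 0x7);
        printf("value through decoded pointer: %d\n", *p);
        return 0;
    }
    ```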
  • Patent number: 10846093
    Abstract: In one embodiment, an apparatus includes: a value prediction storage including a plurality of entries each to store address information of an instruction, a value prediction for the instruction and a confidence value for the value prediction; and a control circuit coupled to the value prediction storage. In response to an instruction address of a first instruction, the control circuit is to access a first entry of the value prediction storage to obtain a first value prediction associated with the first instruction and control execution of a second instruction based at least in part on the first value prediction. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: November 24, 2020
    Assignee: Intel Corporation
    Inventors: Sumeet Bandishte, Jayesh Gaur, Sreenivas Subramoney, Hong Wang
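    A minimal sketch (not from the patent) of a value prediction table keyed by instruction address, holding a predicted value and a saturating confidence counter; a dependent instruction consumes the prediction only above a threshold. Table size, counter width, and threshold are assumptions.
    ```c
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define VP_ENTRIES 256   /* table size (assumed) */
    #define CONF_MAX   7     /* 3-bit saturating confidence */
    #define CONF_USE   4     /* only predict at or above this confidence (assumed) */

    typedef struct {
        uint64_t pc;         /* address of the producing instruction */
        uint64_t value;      /* last observed / predicted result */
        uint8_t  conf;       /* confidence counter for the prediction */
    } vp_entry;

    static vp_entry table[VP_ENTRIES];

    static vp_entry *lookup(uint64_t pc) { return &table[pc % VP_ENTRIES]; }

    /* Returns true if a dependent (second) instruction may speculatively use *value. */
    static bool predict_value(uint64_t pc, uint64_t *value) {
        vp_entry *e = lookup(pc);
        if (e->pc == pc && e->conf >= CONF_USE) { *value = e->value; return true; }
        return false;
    }

    /* Called at retirement with the actual result to train the table. */
    static void train(uint64_t pc, uint64_t actual) {
        vp_entry *e = lookup(pc);
        if (e->pc != pc) { e->pc = pc; e->value = actual; e->conf = 0; return; }
        if (e->value == actual) { if (e->conf < CONF_MAX) e->conf++; }
        else                    { e->value = actual; e->conf = 0; }
    }

    int main(void) {
        uint64_t v;
        for (int i = 0; i < 6; i++) train(0x400100, 123);  /* result keeps repeating */
        printf("predicted: %s\n", predict_value(0x400100, &v) ? "yes" : "no");
        return 0;
    }
    ```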
  • Patent number: 10817420
    Abstract: A method for accessing two memory locations in two different memory arrays based on a single address string includes determining three sets of address bits. A first set of address bits are common to the addresses of wordlines that correspond to the memory locations in the two memory arrays. A second set of address bits concatenated with the first set of address bits provides the address of the wordline that corresponds to a first memory location in a first memory array. A third set of address bits concatenated with the first set of address bits provides the address of the wordline that corresponds to a second memory location in a second memory array. The method includes populating the single address string with the three sets of address bits and may be performed by an address data processing unit.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: October 27, 2020
    Assignee: Arm Limited
    Inventors: Yew Keong Chong, Sriram Thyagarajan, Andy Wangkun Chen
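    A minimal sketch (not from the patent) of packing the three sets of address bits into one address string and reconstructing the two wordline addresses by concatenating each private field with the common field. The field widths and bit layout are assumptions.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Assumed field widths: 6 common bits, 4 private bits per memory array. */
    #define COMMON_BITS 6
    #define PRIV_BITS   4

    /* Pack the three sets of address bits into one address string:
     * [ privB | privA | common ]  (layout is an assumption). */
    static uint32_t pack(uint32_t common, uint32_t privA, uint32_t privB) {
        return (privB << (COMMON_BITS + PRIV_BITS)) | (privA << COMMON_BITS) | common;
    }

    /* Wordline address in array A = privA concatenated with the common bits. */
    static uint32_t wordline_a(uint32_t addr) {
        uint32_t common = addr & ((1u << COMMON_BITS) - 1);
        uint32_t privA  = (addr >> COMMON_BITS) & ((1u << PRIV_BITS) - 1);
        return (privA << COMMON_BITS) | common;
    }

    /* Wordline address in array B = privB concatenated with the common bits. */
    static uint32_t wordline_b(uint32_t addr) {
        uint32_t common = addr & ((1u << COMMON_BITS) - 1);
        uint32_t privB  = (addr >> (COMMON_BITS + PRIV_BITS)) & ((1u << PRIV_BITS) - 1);
        return (privB << COMMON_BITS) | common;
    }

    int main(void) {
        uint32_t addr = pack(0x2A, 0x3, 0xC);   /* one address string, two targets */
        printf("array A wordline: 0x%x\n", wordline_a(addr));
        printf("array B wordline: 0x%x\n", wordline_b(addr));
        return 0;
    }
    ```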
  • Patent number: 10761844
    Abstract: Disclosed embodiments relate to predicting load data. In one example, a processor includes a pipeline having stages ordered as fetch, decode, allocate, write back, and commit; a training table to store an address, predicted data, a state, and a count of instances of unchanged return data; and tracking circuitry to determine, during one or more of the allocate and decode stages, whether a training table entry has a first state and matches a fetched first load instruction, and, if so, to use the data predicted by the entry during the execute stage, the tracking circuitry further to update the training table during or after the write back stage to set the state of the first load instruction in the training table to the first state when the count reaches a first threshold.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: September 1, 2020
    Assignee: Intel Corporation
    Inventors: Manjunath Shevgoor, Mark J. Dechene, Stanislav Shwartsman, Pavel I. Kryukov
  • Patent number: 10754650
    Abstract: In an embodiment, a method includes, in a hardware processor, determining, for a processor instruction, a rule for matching a predicted memory tag. The method further includes determining a predicted memory tag based on applying the rule for matching the predicted memory tag. The method further includes determining an R tag based on applying the rule. The method further includes obtaining an actual memory tag from memory based on an operand of the processor instruction. The method further includes determining whether the predicted memory tag and the actual memory tag match. The method further includes, if the predicted memory tag and actual memory tag match, using the R tag as the R tag output.
    Type: Grant
    Filed: June 7, 2018
    Date of Patent: August 25, 2020
    Assignee: THE CHARLES STARK DRAPER LABORATORY, INC.
    Inventor: André DeHon
  • Patent number: 10754656
    Abstract: A predicted value to be used in register-indirect branching is predicted. The predicted value is to be stored in one or more locations based on the prediction. An offset for a predicted derived value is obtained. The predicted derived value is to be used as a pointer to a reference data structure providing access to variables used in processing. The predicted derived value is generated using the predicted value and the offset. The predicted derived value is used to access the reference data structure during processing.
    Type: Grant
    Filed: September 6, 2019
    Date of Patent: August 25, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura
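    A minimal sketch (not from the patent) of deriving a table-of-contents (TOC) pointer from a predicted register-indirect branch target plus a fixed offset, and correcting it if the branch resolves differently. The offset value and names are hypothetical.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Offset from a function's entry point to its TOC base; purely illustrative. */
    #define TOC_OFFSET 0x8000

    /* On a predicted register-indirect branch, derive the TOC pointer from the
     * predicted target so dependent accesses can start before the branch resolves. */
    static uint64_t predict_derived_toc(uint64_t predicted_target) {
        return predicted_target + TOC_OFFSET;
    }

    int main(void) {
        uint64_t predicted_target = 0x10001230;     /* from the branch predictor */
        uint64_t predicted_toc    = predict_derived_toc(predicted_target);

        uint64_t actual_target = 0x10001230;        /* branch resolves later */
        if (actual_target != predicted_target) {
            /* Misprediction: the derived TOC value is recomputed from the real target. */
            predicted_toc = actual_target + TOC_OFFSET;
        }
        printf("TOC pointer: 0x%llx\n", (unsigned long long)predicted_toc);
        return 0;
    }
    ```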
  • Patent number: 10740248
    Abstract: A method or system of translating a virtualized address to a real address is disclosed that includes receiving a virtualized address for translation; generating a predicted intermediate address translation using a portion of the bit field of the virtualized address; determining a predicted real address using the predicted intermediate address or portion thereof; performing a translation of the virtualized address to an actual intermediate address; determining whether the predicted intermediate address is the same as the actual intermediate address; and in response to the predicted intermediate address being the same as the actual intermediate address, providing the predicted real address as the real address.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: August 11, 2020
    Assignee: International Business Machines Corporation
    Inventors: David Campbell, Dwain A. Hicks, Christian Jacobi
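    A minimal sketch (not from the patent) of predicting the intermediate address from a portion of the virtualized address, speculatively deriving a real address from it, and confirming the guess against the full translation. The two stage-walk functions and the table size are stand-in assumptions.
    ```c
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define PAGE_SHIFT   12
    #define PRED_ENTRIES 64    /* prediction table size (assumed) */

    /* Prediction table: virtual page -> last-seen intermediate page. */
    static struct { uint64_t vpage; uint64_t ipage; bool valid; } pred[PRED_ENTRIES];

    /* Stand-ins for the two translation stages; the mappings are illustrative. */
    static uint64_t walk_guest_stage(uint64_t vpage) { return vpage ^ 0x100; }   /* virtual -> intermediate */
    static uint64_t walk_host_stage(uint64_t ipage)  { return ipage + 0x4000; }  /* intermediate -> real */

    static uint64_t translate(uint64_t vaddr) {
        uint64_t vpage  = vaddr >> PAGE_SHIFT;
        uint64_t offset = vaddr & ((UINT64_C(1) << PAGE_SHIFT) - 1);
        unsigned idx    = (unsigned)(vpage % PRED_ENTRIES);

        /* Predict the intermediate page from a portion of the virtualized address
         * and speculatively derive a real address from the prediction. */
        bool     have_pred  = pred[idx].valid && pred[idx].vpage == vpage;
        uint64_t pred_ipage = have_pred ? pred[idx].ipage : 0;
        uint64_t pred_real  = (walk_host_stage(pred_ipage) << PAGE_SHIFT) | offset;

        /* The full translation runs as usual and confirms (or corrects) the guess. */
        uint64_t actual_ipage = walk_guest_stage(vpage);
        pred[idx].vpage = vpage; pred[idx].ipage = actual_ipage; pred[idx].valid = true;

        if (have_pred && pred_ipage == actual_ipage)
            return pred_real;                                          /* fast path */
        return (walk_host_stage(actual_ipage) << PAGE_SHIFT) | offset; /* slow path */
    }

    int main(void) {
        printf("real: 0x%llx\n", (unsigned long long)translate(0x7f001abc)); /* trains */
        printf("real: 0x%llx\n", (unsigned long long)translate(0x7f001abc)); /* predicted */
        return 0;
    }
    ```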
  • Patent number: 10725778
    Abstract: A method includes receiving, for metadata processing, a current instruction with associated metadata tags. The metadata processing is performed in a metadata processing domain isolated from a code execution domain that includes the current instruction. Each associated metadata tag represents a respective policy of a composite policy. The associated metadata tags further include pointers to tags of a component policy of the composite policy. For each respective metadata tag, the method includes determining, in the metadata processing domain and in accordance with the metadata tag and the current instruction, whether a rule exists in a rule cache for the current instruction. The rule cache includes rules on metadata used by the metadata processing to define allowed instructions. The determination of whether a rule exists results in a respective output.
    Type: Grant
    Filed: June 7, 2018
    Date of Patent: July 28, 2020
    Assignees: The Charles Stark Draper Laboratory, Inc., The Trustees of the University of Pennsylvania Penn Center for Innovation
    Inventors: André DeHon, Udit Dhawan
  • Patent number: 10719328
    Abstract: A predicted value to be used in register-indirect branching is predicted. The predicted value is to be stored in one or more locations based on the prediction. An offset for a predicted derived value is obtained. The predicted derived value is to be used as a pointer to a reference data structure providing access to variables used in processing. The predicted derived value is generated using the predicted value and the offset. The predicted derived value is used to access the reference data structure during processing.
    Type: Grant
    Filed: August 18, 2017
    Date of Patent: July 21, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura
  • Patent number: 10713052
    Abstract: Disclosed embodiments relate to a prefetcher for delinquent irregular loads. In one example, a processor includes a cache memory, fetch and decode circuitry to fetch and decode instructions from a memory; and execution circuitry including a binary translator (BT) to respond to the decoded instructions by storing a plurality of decoded instructions in a BT cache, identifying a delinquent irregular load (DIRRL) among the plurality of decoded instructions, determining whether the DIRRL is prefetchable, and, if so, generating a custom prefetcher to cause the processor to prefetch a region of instructions leading up to the prefetchable DIRRL.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: July 14, 2020
    Assignee: INTEL CORPORATION
    Inventors: Karthik Sankaranarayanan, Stephen J. Tarsa, Gautham N. Chinya, Helia Naeimi
  • Patent number: 10671307
    Abstract: Provided is a removable storage system including: a data storage device configured to store a plurality of files including a first file and a second file; a host interface configured to receive, from a host, a pattern matching request including pattern information and file information regarding the plurality of files, and transmit, to the host, a result of pattern matching regarding the plurality of files; and a pattern matching accelerator configured to perform the pattern matching in response to the pattern matching request, wherein the pattern matching accelerator includes a scan engine configured to scan data based on a pattern, and a scheduler configured to control the scan engine to stop scanning the first file and start scanning the second file.
    Type: Grant
    Filed: January 12, 2018
    Date of Patent: June 2, 2020
    Assignees: Samsung Electronics Co., Ltd., Industry-Academic Cooperation Foundation, Yonsei University
    Inventors: Jeong-ho Lee, Ho-jun Shim, Won Woo Ro, Won Seob Jeong, Myung Kuk Yoon, Won Jeon
  • Patent number: 10621097
    Abstract: Devices and systems having memory-side adaptive prefetch decision-making, including associated methods, are disclosed and described. Adaptive information can be provided to memory-side controller and prefetch components that allow such memory-side components to prefetch data in a manner that is adaptive with respect to a particular read memory request or to a thread performing read memory requests.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: April 14, 2020
    Assignee: Intel Corporation
    Inventors: Karthik Kumar, Thomas Willhalm, Patrick Lu, Francesc Guim Bernat, Shrikant M. Shah
  • Patent number: 10579385
    Abstract: Prediction of an affiliated register. A determination is made as to whether an affiliated register is to be predicted for a particular branch instruction. The affiliated register is a register, separate from a target address register, selected to store a predicted target address based on prediction of a target address. Based on determining that the affiliated register is to be predicted, predictive processing is performed. The predictive processing includes providing the predicted target address in a location associated with the affiliated register.
    Type: Grant
    Filed: November 27, 2017
    Date of Patent: March 3, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura
  • Patent number: 10558461
    Abstract: A predicted value to be used in register-indirect branching is predicted. The predicted value is to be stored in one or more locations based on the prediction. An offset for a predicted derived value is obtained. The predicted derived value is to be used as a pointer to a reference data structure providing access to variables used in processing. The predicted derived value is generated using the predicted value and the offset. The predicted derived value is used to access the reference data structure during processing.
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: February 11, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura
  • Patent number: 10509734
    Abstract: A computing device includes technologies for securing indirect addresses (e.g., pointers) that are used by a processor to perform memory access (e.g., read/write/execute) operations. The computing device encodes the indirect address using metadata and a cryptographic algorithm. The metadata may be stored in an unused portion of the indirect address.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: December 17, 2019
    Assignee: Intel Corporation
    Inventors: David M. Durham, Baiju Patel
  • Patent number: 10498517
    Abstract: The disclosure involves a wireless communication device and method. The device includes a communication unit configured to perform a first feedback operation corresponding to a first feedback configuration and to perform a second feedback operation based on a second feedback configuration. The first and second feedback configurations each include a selection of a sub-table of a CQI table, and the second feedback configuration is determined based on a result of the first feedback operation.
    Type: Grant
    Filed: January 6, 2016
    Date of Patent: December 3, 2019
    Assignee: SONY CORPORATION
    Inventors: Zaixue Wei, Xin Zhang, Nanxi Li, Dan Zhang, Jinhui Chen
  • Patent number: 10459825
    Abstract: Methods, systems, and computer program products are provided for dynamically collecting information corresponding to characteristics of a binary. A user or program inputs a path corresponding to a binary. A testing framework accesses a testing configuration that specifies one or more characteristics of a binary for which data collection is enabled. The testing framework parses the binary to collect those characteristics and, based on the collected characteristics, identifies additional characteristics of the binary and collects information corresponding to the identified additional characteristics.
    Type: Grant
    Filed: August 18, 2017
    Date of Patent: October 29, 2019
    Assignee: RED HAT, INC.
    Inventor: Cleber Rodrigues Rosa
  • Patent number: 10394556
    Abstract: Methods and apparatuses relating to switching of a shadow stack pointer are described. In one embodiment, a hardware processor includes a hardware decode unit to decode an instruction, and a hardware execution unit to execute the instruction to: pop a token for a thread from a shadow stack, wherein the token includes a shadow stack pointer for the thread with at least one least significant bit (LSB) of the shadow stack pointer overwritten with a bit value of an operating mode of the hardware processor for the thread, remove the bit value in the at least one LSB from the token to generate the shadow stack pointer, and set a current shadow stack pointer to the shadow stack pointer from the token when the operating mode from the token matches a current operating mode of the hardware processor.
    Type: Grant
    Filed: December 20, 2015
    Date of Patent: August 27, 2019
    Assignee: Intel Corporation
    Inventors: Vedvyas Shanbhogue, Jason W. Brandt, Ravi L. Sahita, Barry E. Huntley, Baiju V. Patel, Deepak K. Gupta
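    A minimal sketch (not from the patent) of the token handling the abstract above describes: the operating-mode bit is packed into the least significant bit of an aligned shadow stack pointer, and the pointer is restored only when that bit matches the current mode. The single mode bit and alignment are simplifying assumptions.
    ```c
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Shadow stack pointers are at least word-aligned, so the low bit is free to
     * record the operating mode the token was created in (1 = 64-bit here). */
    static uint64_t make_token(uint64_t ssp, bool is_64bit) {
        return (ssp & ~UINT64_C(1)) | (is_64bit ? 1 : 0);
    }

    /* Restore the shadow stack pointer from a popped token, but only when the
     * mode recorded in the token matches the current operating mode. */
    static bool restore_ssp(uint64_t token, bool current_is_64bit, uint64_t *ssp_out) {
        bool token_is_64bit = (token & 1) != 0;
        if (token_is_64bit != current_is_64bit)
            return false;                  /* mode mismatch: refuse to switch */
        *ssp_out = token & ~UINT64_C(1);   /* strip the mode bit to recover the pointer */
        return true;
    }

    int main(void) {
        uint64_t token = make_token(0x7fffc000, true);
        uint64_t ssp;
        if (restore_ssp(token, true, &ssp))
            printf("switched shadow stack pointer to 0x%llx\n", (unsigned long long)ssp);
        if (!restore_ssp(token, false, &ssp))
            printf("rejected token created in a different operating mode\n");
        return 0;
    }
    ```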
  • Patent number: 10362125
    Abstract: Technologies for pre-action execution include a client computing device to request a resource from a server and receive content from the server including the requested resource and one or more pre-action hints. Each of the one or more pre-action hints identifies a suggested pre-action to be taken by the client computing device prior to receipt of a corresponding user request to perform the corresponding suggested pre-action. The client computing device determines a likelihood of success of one or more pre-actions based on historical behavior data of a user of the client computing device, wherein each pre-action corresponds to at least one of the one or more pre-action hints. The client computing device selects a pre-action to execute based on the determined likelihood of success of the one or more pre-actions.
    Type: Grant
    Filed: September 18, 2014
    Date of Patent: July 23, 2019
    Assignee: Intel Corporation
    Inventors: Pan Deng, Junyong Ding, Shu Xu
  • Patent number: 10353819
    Abstract: Next line prefetchers employing initial high prefetch prediction confidence states for throttling next line prefetches in processor-based systems are disclosed. A next line prefetcher prefetches a next memory line into cache memory in response to a read operation. To mitigate prefetch mispredictions, the next line prefetcher is throttled to cease prefetching after its prefetch prediction confidence state becomes a no-next-line-prefetch state, indicating a number of incorrect predictions. Instead of the initial prefetch prediction confidence state being set to the no-next-line-prefetch state, which would have to be built up by correct predictions before a next line prefetch is performed, the initial prefetch prediction confidence state is set to a next-line-prefetch state to allow next line prefetching. Thus, the next line prefetcher starts prefetching next lines before correct predictions have to be "built up" in the prefetch prediction confidence state.
    Type: Grant
    Filed: June 24, 2016
    Date of Patent: July 16, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Brandon Dwiel, Rami Mohammad Al Sheikh
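    A minimal sketch (not from the patent) of the key point in the abstract above: the confidence counter starts in a prefetching state and is only throttled down after mispredictions accumulate. Counter width and thresholds are assumptions.
    ```c
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* 2-bit confidence: 0 = no next-line prefetch, 3 = strongly prefetch.
     * Initialize high so prefetching starts immediately instead of having to be
     * "built up" by correct predictions first. */
    #define CONF_INIT        3
    #define CONF_NO_PREFETCH 0

    static uint8_t conf = CONF_INIT;

    static bool should_prefetch_next_line(void) {
        return conf > CONF_NO_PREFETCH;
    }

    /* Called once the usefulness of a prior next-line prefetch is known. */
    static void update(bool prefetch_was_useful) {
        if (prefetch_was_useful) { if (conf < 3) conf++; }
        else                     { if (conf > 0) conf--; }
    }

    int main(void) {
        for (int i = 0; i < 5; i++) {
            printf("access %d: prefetch next line? %s (conf=%u)\n",
                   i, should_prefetch_next_line() ? "yes" : "no", conf);
            update(false);   /* pretend every prefetch turned out useless */
        }
        return 0;
    }
    ```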
  • Patent number: 10331891
    Abstract: Embodiments related to conducting and constructing a secure start-up process are disclosed. One embodiment provides, on a computing device, a method of conducting a secure start-up process. The method comprises recognizing a branch instruction and, in response, calculating an integrity datum of a data segment. The method further comprises obtaining an adjustment datum and computing a branch target address based on the integrity datum and the adjustment datum.
    Type: Grant
    Filed: February 6, 2012
    Date of Patent: June 25, 2019
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventor: Victor Tan
  • Patent number: 10157152
    Abstract: A semiconductor device includes a plurality of memory controllers each of which includes a local buffer, a global buffer coupled to the plurality of memory controllers and including areas respectively allocated to the plurality of memory controllers, and a global buffer controller that controls sizes of the allocated areas of the global buffer.
    Type: Grant
    Filed: September 23, 2015
    Date of Patent: December 18, 2018
    Assignees: SK HYNIX INC., INDUSTRY-ACADEMIC COOPERATION FOUNDATION YONSEI UNIVERSITY
    Inventors: Kihyun Park, Su-Hae Woo, Sungho Kang
  • Patent number: 10120687
    Abstract: A programmable controller for executing a sequence program comprises a processor for reading and executing an instruction code from an external memory, an instruction cache memory for storing a branch destination program code of a branch instruction included in the sequence program, and a cache controller for entering the branch destination program code in the instruction cache memory according to data on priority, the instruction code of the branch instruction including the data on priority of an entry into the instruction cache memory.
    Type: Grant
    Filed: February 24, 2015
    Date of Patent: November 6, 2018
    Assignee: FANUC Corporation
    Inventors: Motoyoshi Miyachi, Yasushi Nomoto
  • Patent number: 10009944
    Abstract: A method is provided for controlling wireless connection of a device having a wireless communication interface to a wireless access point. The method includes: determining, by the device, if a known wireless access point is available by comparing a determined location of the device with geographical information associated with a set of known wireless access points; if the known wireless access point is available, determining, by the device, a time elapsed since a most recent data communication activity of the device; and disabling the wireless communication interface of the device if the time elapsed is less than an idle time threshold value so as to prevent wireless connection of the device to the known wireless access point.
    Type: Grant
    Filed: August 26, 2015
    Date of Patent: June 26, 2018
    Assignee: International Business Machines Corporation
    Inventors: Andrew A. Armstrong, Richard W. Pilot
  • Patent number: 9996696
    Abstract: Using various embodiments, methods and systems to optimize the execution of a software program are disclosed. In one embodiment, a system is configured to identify a first vertex of an indirect control flow graph (ICFG) of a control flow graph (CFG) of the software program representing an indirect control transfer to a first function in the software program. Thereafter, a first type signature associated with the indirect control transfer is determined and a first tag value is computed from the first type signature. The system also identifies a second vertex of the ICFG representing a second function of the software program and determines a second type signature of the second function to compute a second tag value from the second type signature. When it is determined that the first tag value equals the second tag value, the system modifies the CFG to optimize execution of the software program.
    Type: Grant
    Filed: August 15, 2017
    Date of Patent: June 12, 2018
    Assignee: Unexploitable Holdings LLC
    Inventor: János Baji-Gál
  • Patent number: 9984004
    Abstract: Embodiments serve to balance overall performance of a finite-sized caching system having a first cache of a first cache size and a second cache of a second cache size. A tail portion and a head portion of each of the caches are defined, wherein incoming data elements are initially stored in a respective head portion and evicted data elements are evicted from a respective tail portion. Performance metrics are defined, wherein a performance metric includes a predicted miss cost that would be incurred when replacing an evicted data element. A quantitative function is defined to include cache performance metrics and a cache reallocation amount. The cache performance metrics are evaluated periodically to determine a then-current cache reallocation amount. The caches can be balanced by increasing the first cache size by the cache reallocation amount and decreasing the second cache size by the same amount.
    Type: Grant
    Filed: July 19, 2016
    Date of Patent: May 29, 2018
    Assignee: Nutanix, Inc.
    Inventors: Gary Jeffrey Little, Huapeng Yuan, Karan Gupta, Peter Scott Wyckoff, Rickard Edward Faith
  • Patent number: 9971394
    Abstract: Embodiments of the disclosure include a cache array having a plurality of cache sets grouped into a plurality of subsets. The cache array also includes a read line configured to receive a read signal for the cache array and a set selection line configured to receive a set selection signal. The set selection signal indicates that the read signal corresponds to one of the plurality of subsets of the cache array. The read line and the set selection line are operatively coupled to the plurality of cache sets, and, based on the set selection signal, the subset corresponding to that signal is switched.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: May 15, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Paul A. Bunce, John D. Davis, Diana M. Henderson, Jigar J. Vora
  • Patent number: 9886385
    Abstract: In a content-directed prefetcher, a pointer detection circuit identifies a given memory pointer candidate within a data cache line fill from a lower level cache (LLC), where the LLC is at a lower level of a memory hierarchy relative to the data cache. A pointer filter circuit initiates a prefetch request to the LLC for the candidate, dependent on determining that a given counter in a quality factor (QF) table satisfies a QF counter threshold value. The QF table is indexed dependent upon a program counter address and relative cache line offset of the candidate. Upon initiation of the prefetch request, the given counter is updated to reflect a prefetch cost. In response to determining that a subsequent data cache line fill arriving from the LLC corresponds to the prefetch request for the given memory pointer candidate, a particular counter of the QF table may be updated to reflect a successful prefetch credit.
    Type: Grant
    Filed: August 25, 2016
    Date of Patent: February 6, 2018
    Assignee: Apple Inc.
    Inventors: Tyler J. Huberty, Stephan G. Meier, Mridul Agarwal
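    A minimal sketch (not from the patent) of the quality-factor filtering described above: the QF counter is indexed by the load's program counter and the candidate's cache-line offset, charged a cost when a pointer prefetch is issued, and credited when that prefetch proves useful. Threshold, costs, and the index hash are assumptions.
    ```c
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define QF_ENTRIES      256
    #define QF_THRESHOLD    8     /* issue a prefetch only at/above this (assumed) */
    #define PREFETCH_COST   1     /* charged when a prefetch is issued */
    #define PREFETCH_CREDIT 2     /* credited when the prefetch proves useful */
    #define QF_MAX          15

    static uint8_t qf[QF_ENTRIES];

    /* Index by the program counter of the triggering load and the offset of the
     * pointer candidate within the fill line, as the abstract describes. */
    static unsigned qf_index(uint64_t load_pc, unsigned line_offset) {
        return (unsigned)((load_pc ^ (line_offset * 0x9e37u)) % QF_ENTRIES);
    }

    static bool maybe_prefetch(uint64_t load_pc, unsigned line_offset) {
        unsigned i = qf_index(load_pc, line_offset);
        if (qf[i] < QF_THRESHOLD)
            return false;                   /* filtered: low quality factor */
        qf[i] -= PREFETCH_COST;             /* charge the cost of issuing */
        return true;                        /* issue the pointer prefetch */
    }

    /* Called when a later fill matches an outstanding pointer prefetch. */
    static void prefetch_was_useful(uint64_t load_pc, unsigned line_offset) {
        unsigned i = qf_index(load_pc, line_offset);
        qf[i] = (qf[i] + PREFETCH_CREDIT > QF_MAX) ? QF_MAX : (uint8_t)(qf[i] + PREFETCH_CREDIT);
    }

    int main(void) {
        for (int i = 0; i < QF_ENTRIES; i++) qf[i] = QF_THRESHOLD;  /* neutral start */
        printf("issue? %d\n", maybe_prefetch(0x401000, 3));
        prefetch_was_useful(0x401000, 3);
        printf("issue? %d\n", maybe_prefetch(0x401000, 3));
        return 0;
    }
    ```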
  • Patent number: 9798577
    Abstract: In at least some embodiments, a cache memory of a data processing system receives a transactional memory access request including a target address and a priority of the requesting memory transaction. In response, transactional memory logic detects a conflict for the target address with a transaction footprint of an existing memory transaction and accesses a priority of the existing memory transaction. In response to detecting the conflict, the transactional memory logic resolves the conflict by causing the cache memory to fail the requesting or existing memory transaction based at least in part on their relative priorities. Resolving the conflict includes at least causing the cache memory to fail the existing memory transaction when the requesting memory transaction has a higher priority than the existing memory transaction, the transactional memory access request is a transactional load request, and the target address is within a store footprint of the existing memory transaction.
    Type: Grant
    Filed: August 31, 2015
    Date of Patent: October 24, 2017
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hung Q. Le, William J. Starke, Derek E. Williams
  • Patent number: 9792147
    Abstract: In at least some embodiments, a cache memory of a data processing system receives a transactional memory access request including a target address and a priority of the requesting memory transaction. In response, transactional memory logic detects a conflict for the target address with a transaction footprint of an existing memory transaction and accesses a priority of the existing memory transaction. In response to detecting the conflict, the transactional memory logic resolves the conflict by causing the cache memory to fail the requesting or existing memory transaction based at least in part on their relative priorities. Resolving the conflict includes at least causing the cache memory to fail the existing memory transaction when the requesting memory transaction has a higher priority than the existing memory transaction, the transactional memory access request is a transactional load request, and the target address is within a store footprint of the existing memory transaction.
    Type: Grant
    Filed: July 2, 2015
    Date of Patent: October 17, 2017
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hung Q. Le, William J. Starke, Derek E. Williams
  • Patent number: 9727411
    Abstract: A method for error tracking in a log subsystem of a file system is provided. The method includes: when a data block of the log subsystem is recovered to an original position in the file system, calculating a verification code of the data block to obtain a second verification code; determining whether the second verification code is consistent with a first verification code of the data block stored in a spare space in a submit block of the log subsystem on a disk; and, when the verification result is inconsistent, processing the data block corresponding to the inconsistent verification result. With the above method, an error in the log subsystem of the file system, and its position, can be detected more accurately while minimally affecting system performance, enhancing the reliability of the log subsystem.
    Type: Grant
    Filed: December 29, 2014
    Date of Patent: August 8, 2017
    Assignee: MSTAR SEMICONDUCTOR, INC.
    Inventor: Tao Zhou
  • Patent number: 9710281
    Abstract: Embodiments relate to register comparison for operand store compare (OSC) prediction. An aspect includes, for each instruction in an instruction group of a processor pipeline: determining a base register value of the instruction; determining an index register value of the instruction; and determining a displacement of the instruction. Another aspect includes comparing the base register value, index register value, and displacement of each instruction in the instruction group to the base register value, index register value, and displacement of all other instructions in the instruction group. Another aspect includes, based on the comparison, determining that a load instruction of the instruction group has a probable OSC conflict with a store instruction of the instruction group. Yet another aspect includes delaying the load instruction based on the determined probable OSC conflict.
    Type: Grant
    Filed: July 30, 2015
    Date of Patent: July 18, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: David Hutton, Wen Li, Eric Schwarz
  • Patent number: 9672154
    Abstract: In aspects of determining memory access patterns for cache prefetch in an out-of-order processor, data is maintained in a cache when copied from system memory of a computing device, and load data instructions are processed to access the cache data. The load data instructions include incremental load data instructions and non-incremental load data instructions that access the data from contiguous memory addresses. The data is prefetched ahead of processing the load data instructions, where prefetch requests are initiated based on the load data instructions. A stride is calculated as the distance between the incremental load data instructions. Further, the stride can be corrected for the non-incremental load data instructions to correlate with the calculated stride. The corrected stride represents the data as a sequential data stream having a fixed stride, and prefetching the data appears sequential for both the incremental and non-incremental load data instructions.
    Type: Grant
    Filed: January 15, 2015
    Date of Patent: June 6, 2017
    Assignee: Marvell International Ltd.
    Inventors: Hunglin Hsu, Viney Gautam, Yicheng Guo, Warren Menezes
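    A minimal sketch (not from the patent) of learning a stride from incremental loads and issuing prefetches along that fixed stride even when a non-incremental load appears in the stream. The per-stream structure and training rule are assumptions.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Per-load-stream state for a simple stride prefetcher. */
    typedef struct {
        uint64_t last_addr;   /* address of the previous incremental load */
        int64_t  stride;      /* learned distance between incremental loads */
        int      trained;
    } stream_t;

    /* Observe an incremental load and (re)compute the stride. */
    static void observe_incremental(stream_t *s, uint64_t addr) {
        if (s->trained)
            s->stride = (int64_t)(addr - s->last_addr);
        s->last_addr = addr;
        s->trained = 1;
    }

    /* A non-incremental load in the same stream is "corrected": it is not used
     * for training, so the prefetcher keeps issuing requests along the learned
     * fixed stride and the stream still looks sequential. */
    static uint64_t next_prefetch_addr(const stream_t *s) {
        return s->last_addr + (uint64_t)s->stride;
    }

    int main(void) {
        stream_t s = {0, 0, 0};
        observe_incremental(&s, 0x1000);
        observe_incremental(&s, 0x1040);   /* stride learned: 0x40 */
        /* A non-incremental access at, say, 0x2000 is ignored for training. */
        printf("prefetch 0x%llx (stride 0x%llx)\n",
               (unsigned long long)next_prefetch_addr(&s),
               (unsigned long long)s.stride);
        return 0;
    }
    ```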
  • Patent number: 9654483
    Abstract: A technology is described for limiting the rate at which a number of requests to perform a network action are granted, using rate limiters. An example method may include receiving a request for a token granting permission to perform a network action via a computer network. In response, rate limiters may be identified by generating hash values using hash functions and a network address representing a source network, where the hash values identify memory locations for the rate limiters. The rate limiters may have a computer memory capacity to store tokens that are distributed in response to the request. Token balances for the rate limiters may be determined, and permission to perform the network action may be granted as a result of at least one of the token balances being greater than zero.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: May 16, 2017
    Assignee: Amazon Technologies, Inc.
    Inventors: Bryan Mark Benson, Michael F. Diggins, Anton Romanov, David Dongyi Lu, Xingbo Wang
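    A minimal sketch (not from the patent) of hashing a source network address with several hash functions to select token-bucket rate limiters, granting the action if at least one selected bucket still holds a token. Bucket count, hash mixing, and refill policy are assumptions.
    ```c
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_BUCKETS 1024
    #define NUM_HASHES  3
    #define BUCKET_MAX  10        /* token capacity per rate limiter (assumed) */

    static int buckets[NUM_BUCKETS];

    /* A few independent hash mixes of the source network address, each naming
     * the memory location of one rate limiter. */
    static unsigned hash_addr(uint32_t src_addr, unsigned which) {
        uint32_t h = src_addr * 2654435761u + which * 0x9e3779b9u;
        h ^= h >> 16;
        return h % NUM_BUCKETS;
    }

    /* Grant the network action if at least one selected limiter has a token. */
    static bool grant_request(uint32_t src_addr) {
        for (unsigned i = 0; i < NUM_HASHES; i++) {
            unsigned b = hash_addr(src_addr, i);
            if (buckets[b] > 0) {
                buckets[b]--;          /* consume one token from that limiter */
                return true;
            }
        }
        return false;                  /* every selected limiter is exhausted */
    }

    /* Periodic refill, e.g. driven by a timer. */
    static void refill_all(int tokens) {
        for (int i = 0; i < NUM_BUCKETS; i++) {
            buckets[i] += tokens;
            if (buckets[i] > BUCKET_MAX) buckets[i] = BUCKET_MAX;
        }
    }

    int main(void) {
        refill_all(2);
        uint32_t source = 0x0A000001;  /* 10.0.0.1 */
        int granted = 0;
        for (int i = 0; i < 10; i++) granted += grant_request(source);
        printf("granted %d of 10 requests\n", granted);
        return 0;
    }
    ```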
  • Patent number: 9645567
    Abstract: In one aspect of the present invention, data to be written is divided into a plurality of pieces of divided data of a maximum size or less, and an offset of each piece of divided data is calculated. A frame issuing an instruction to a PLC to write a predetermined terminal code in a head address of a data area is transmitted. For each piece of divided data except the head divided data, a frame issuing an instruction to the PLC to write that divided data at the position of the corresponding offset from the head address of the data area is transmitted. Finally, a frame issuing an instruction to the PLC to write the head divided data in the head address of the data area is transmitted.
    Type: Grant
    Filed: March 2, 2012
    Date of Patent: May 9, 2017
    Assignee: OMRON CORPORATION
    Inventor: Yuta Nagata
  • Patent number: 9600289
    Abstract: Methods and processors for managing load-store dependencies in an out-of-order instruction pipeline. A load store dependency predictor includes a table for storing entries for load-store pairs that have been found to be dependent and execute out of order. Each entry in the table includes hashed values to identify load and store operations. When a load or store operation is detected, the PC and an architectural register number are used to create a hashed value that can be used to uniquely identify the operation. Then, the load store dependency predictor table is searched for any matching entries with the same hashed value.
    Type: Grant
    Filed: May 30, 2012
    Date of Patent: March 21, 2017
    Assignee: Apple Inc.
    Inventors: Stephan G. Meier, John H. Mylius, Gerard R. Williams, III, Suparn Vats
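    A minimal sketch (not from the patent) of a load-store dependency table whose entries identify operations only by a hash of the program counter and an architectural register number, as the abstract above describes. Table size and the hash constants are assumptions.
    ```c
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define LSD_ENTRIES 64

    /* One entry per load/store pair found to be dependent and to have executed
     * out of order; the operations are identified only by hashed values. */
    typedef struct {
        uint32_t load_hash;
        uint32_t store_hash;
        bool     valid;
    } lsd_entry;

    static lsd_entry table[LSD_ENTRIES];

    /* Hash the PC together with an architectural register number so the same
     * static operation always maps to the same identifier. */
    static uint32_t op_hash(uint64_t pc, unsigned arch_reg) {
        return (uint32_t)((pc >> 2) * 0x9e3779b1u) ^ (arch_reg * 0x85ebca6bu);
    }

    /* Record a pair after an ordering violation is detected. */
    static void record_dependency(uint64_t load_pc, unsigned load_reg,
                                  uint64_t store_pc, unsigned store_reg) {
        uint32_t lh = op_hash(load_pc, load_reg);
        unsigned idx = lh % LSD_ENTRIES;
        table[idx].load_hash  = lh;
        table[idx].store_hash = op_hash(store_pc, store_reg);
        table[idx].valid      = true;
    }

    /* At dispatch, a matching load must wait for the matching older store. */
    static bool load_must_wait(uint64_t load_pc, unsigned load_reg, uint32_t older_store_hash) {
        uint32_t lh = op_hash(load_pc, load_reg);
        lsd_entry *e = &table[lh % LSD_ENTRIES];
        return e->valid && e->load_hash == lh && e->store_hash == older_store_hash;
    }

    int main(void) {
        record_dependency(0x400810, 5, 0x4007f0, 5);
        uint32_t store_h = op_hash(0x4007f0, 5);
        printf("delay load? %s\n", load_must_wait(0x400810, 5, store_h) ? "yes" : "no");
        return 0;
    }
    ```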
  • Patent number: 9569613
    Abstract: Various embodiments are generally directed to an apparatus, method, and other techniques to determine a valid target address for a branch instruction from information stored in a relocation table, a linkage table, or both, where the relocation table and the linkage table are associated with a binary file, and to store the valid target address in a table in memory. The stored valid target address is used to validate a target address for a translated portion of a routine of the binary file.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: February 14, 2017
    Assignee: INTEL CORPORATION
    Inventors: Koichi Yamada, Palanivelrajan Shanmugavelayutham, Sravani Konda
  • Patent number: 9495423
    Abstract: Query requests for RDF triples are obtained, where each query request contains at least one triple pattern; for each triple pattern, the corresponding elementary pattern is determined, and each triple pattern is converted to a weighted elementary pattern. The occurrence frequency of each elementary pattern is computed based on the weighted elementary patterns; at least one elementary pattern is chosen at least according to the occurrence frequency; and the RDF triples corresponding to the chosen at least one elementary pattern are prefetched into the buffer. A corresponding apparatus is also provided. With the above method and apparatus, frequently accessed RDF triples can be determined and prefetched into the buffer, which improves query efficiency.
    Type: Grant
    Filed: October 9, 2013
    Date of Patent: November 15, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Yue Pan, Xing Zhi Sun, Qing Fa Wang, Shuo Wu, Lin Hao Xu