Processing Control For Data Transfer Patents (Class 712/225)
  • Patent number: 10903849
    Abstract: Systems, apparatuses, and methods related to bit string compression are described. A method for bit string compression can include determining that a particular operation is to be performed using a bit string formatted according to a universal number format or a posit format, and performing a compression operation on the bit string to alter a bit width associated with the bit string from a first bit width to a second bit width. The method can further include writing the bit string having the second bit width to a first register, performing an arithmetic operation or a logical operation, or both, using the bit string having the second bit width, and monitoring a quantity of bits of a result of the operation.
    Type: Grant
    Filed: July 20, 2020
    Date of Patent: January 26, 2021
    Assignee: Micron Technology, Inc.
    Inventor: Vijay S. Ramesh
  • Patent number: 10877777
    Abstract: Systems and methods of enabling virtual calls in a single instruction multiple data (SIMD) environment may involve detecting a virtual call of a function and using a single dispatch of the function to invoke the virtual call for two or more channels of the virtual call. In one example, it is determined that the two or more channels share a common target address and a single dispatch of the function is conducted with respect to the common target address. The process may be iterated for additional channels of the virtual call that share a common target address.
    Type: Grant
    Filed: October 7, 2015
    Date of Patent: December 29, 2020
    Assignee: Intel Corporation
    Inventors: Wei-Yu Chen, Guei-Yuan Lueh, Subramaniam Maiyuran
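    Illustrative sketch (C) for the abstract above: a minimal software model of the single-dispatch idea, in which channels of a SIMD virtual call that share a common target address are gathered into one mask and the function is invoked once per unique target. The channel count, function-pointer types, and names are hypothetical, not Intel's implementation.

        #include <stdbool.h>
        #include <stdio.h>

        #define NUM_CHANNELS 8

        typedef void (*target_fn)(unsigned channel_mask);

        /* Dispatch a virtual call once per unique target address, covering every
         * active channel that shares that target with a single invocation. */
        static void dispatch_virtual_call(target_fn target[NUM_CHANNELS],
                                          const bool active[NUM_CHANNELS])
        {
            bool done[NUM_CHANNELS] = { false };

            for (int i = 0; i < NUM_CHANNELS; i++) {
                if (!active[i] || done[i])
                    continue;

                unsigned mask = 0;                      /* channels sharing target[i] */
                for (int j = i; j < NUM_CHANNELS; j++) {
                    if (active[j] && !done[j] && target[j] == target[i]) {
                        mask |= 1u << j;
                        done[j] = true;
                    }
                }
                target[i](mask);                        /* single dispatch for the group */
            }
        }

        static void fn_a(unsigned mask) { printf("fn_a on channels 0x%02x\n", mask); }
        static void fn_b(unsigned mask) { printf("fn_b on channels 0x%02x\n", mask); }

        int main(void)
        {
            target_fn target[NUM_CHANNELS] = { fn_a, fn_b, fn_a, fn_a, fn_b, fn_a, fn_b, fn_a };
            bool active[NUM_CHANNELS]      = { true, true, true, false, true, true, true, true };
            dispatch_virtual_call(target, active);      /* two dispatches cover seven active channels */
            return 0;
        }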
  • Patent number: 10877548
    Abstract: In example implementations, an apparatus is provided. The apparatus includes a context switch block, a processor performance state block, and a task execution block. The context switch block is to perform a context switch. The processor performance state block is to load a processor with a processor performance state stored in a context information associated with a task. The task execution block is to execute the task with the processor operating at the processor performance state loaded from the context information.
    Type: Grant
    Filed: March 9, 2018
    Date of Patent: December 29, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Scott Faasse
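    Illustrative sketch (C) for the abstract above: the context information for a task carries a saved processor performance state that is reloaded during the context switch before the task executes. Here set_processor_pstate() is a stand-in for whatever platform register or driver actually programs the performance state.

        #include <stdio.h>

        typedef struct {
            int  pstate;             /* processor performance state saved with the task */
            void (*entry)(void);     /* task body */
            /* ... saved registers, stack pointer, etc. ... */
        } task_context_t;

        /* Placeholder for the platform mechanism that programs the performance state. */
        static void set_processor_pstate(int pstate)
        {
            printf("processor set to performance state %d\n", pstate);
        }

        /* Context switch: load the p-state stored in the context, then run the task. */
        static void context_switch_to(const task_context_t *next)
        {
            set_processor_pstate(next->pstate);
            next->entry();
        }

        static void fast_task(void) { printf("running latency-sensitive task\n"); }
        static void slow_task(void) { printf("running background task\n"); }

        int main(void)
        {
            task_context_t a = { .pstate = 0, .entry = fast_task };  /* highest-performance state */
            task_context_t b = { .pstate = 3, .entry = slow_task };  /* power-saving state */
            context_switch_to(&a);
            context_switch_to(&b);
            return 0;
        }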
  • Patent number: 10877763
    Abstract: A computer system, processor, and method for processing information is disclosed that includes a Dispatch Unit for dispatching instructions; an Issue Queue for receiving instructions dispatched from the Dispatch Unit; and a queue for receiving instructions issued from the Issue Queue, the queue having a plurality of entry locations for storing data. In an embodiment instructions are dispatched with a virtual indicator, and the virtual indicator is set to a first mode for instructions dispatched where an entry location is available, and to a second mode where an entry location is not available, in the queue to receive the dispatched instruction. In addition to virtual tagging dispatched instructions, a system, processor, and method are disclosed for regional partitioning of queues, region based deallocation of queue entries, and circular thread based assignment of queue entries.
    Type: Grant
    Filed: August 2, 2018
    Date of Patent: December 29, 2020
    Assignee: International Business Machines Corporation
    Inventors: Bryan Lloyd, Brian D. Barrick, Kurt A. Feiste, Hung Q. Le, Dung Q. Nguyen, Kenneth L. Ward
  • Patent number: 10860501
    Abstract: A redundancy method of a three-dimensional laminated memory includes receiving, by first, second and third processors, a command for data operation, transmitting and receiving, by each of the second and third processors, data through dedicated data buses in order to perform the data operation, receiving, by the first processor, operation result values of the second and third processors from a main memory, comparing, by a result value comparator of the first processor, the operation result values of the first, second and third processors, and outputting, by the result value comparator, operation result values in correspondence with the result of comparison.
    Type: Grant
    Filed: July 8, 2019
    Date of Patent: December 8, 2020
    Assignees: Hyundai Motor Company, Kia Motors Corporation
    Inventors: Hong Yeol Lim, Jin Ha Choi, Jin Kyu Hwang
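    Illustrative sketch (C) for the abstract above: a hypothetical model of the result value comparator for the triple-redundant computation. It accepts the three processors' operation result values and outputs a value only when at least two agree.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Compare the three operation result values and output the majority value.
         * Returns false when no two results agree (unrecoverable mismatch). */
        static bool compare_results(uint32_t r1, uint32_t r2, uint32_t r3, uint32_t *out)
        {
            if (r1 == r2 || r1 == r3) { *out = r1; return true; }
            if (r2 == r3)             { *out = r2; return true; }
            return false;
        }

        int main(void)
        {
            uint32_t result;
            if (compare_results(42, 42, 41, &result))      /* one faulty processor */
                printf("voted result: %u\n", result);      /* prints 42 */
            if (!compare_results(1, 2, 3, &result))
                printf("mismatch: no two processors agree\n");
            return 0;
        }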
  • Patent number: 10853122
    Abstract: One example includes performing a VM restore instance type discovery process, creating a test VM with a VM restore instance type matching a VM restore instance type identified during discovery, using the test VM to create a test restore VM at a cloud storage site, restoring the test VM at the cloud storage site using the test restore VM, generating a 4-D baseline vector based on the restoration of the test VM, the 4-D baseline vector identifying a particular VM restore instance type, generating a 5-D vector based on the 4-D baseline vector, ranking the 5-D vector relative to other 5-D vectors, the 5-D vectors identifying the same production site VM, and restoring, at the cloud storage site, the production site VM identified in the 5-D vectors, where the production site VM restored at the cloud storage site has a VM restore instance type identified in the highest-ranked 5-D vector.
    Type: Grant
    Filed: April 10, 2019
    Date of Patent: December 1, 2020
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: David Zlotnick, Assaf Natanzon, Boris Shpilyuck
  • Patent number: 10831537
    Abstract: A computer system includes a processor, main memory, and controller. The processor includes a plurality of hardware threads configured to execute a plurality of software threads. The main memory includes a first register table configured to contain a current set of architected registers for the currently running software threads. The controller is configured to change a first number of the architected registers assigned to a given one of the software threads to a second number of architected registers when a result of monitoring current usage of the registers by the software threads indicates that the change will improve performance of the computer system. The processor includes a second register table configured to contain a subset of the architected registers and a mapping table for each software thread indicating whether the architected registers referenced by the corresponding software thread are located in the first register table or the second register table.
    Type: Grant
    Filed: February 17, 2017
    Date of Patent: November 10, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Harold W. Cain, III, Hubertus Franke, Charles R. Johns, Hung Q. Le, Ravi Nair
  • Patent number: 10831556
    Abstract: Various systems and methods for virtual CPU consolidation to avoid physical CPU contention between virtual machines are described herein. A processor system that includes multiple physical processors (PCPUs) includes a first virtual machine (VM) that includes multiple first virtual processors (VCPUs); a second VM that includes multiple second VCPUs; and a virtual machine monitor (VMM) to map individual ones of the first VCPUs to run on at least one of, individual PCPUs of a first subset of the PCPUs and individual PCPUs of a set of PCPUs that includes the first subset of the PCPUs and a second subset of the PCPUs, based at least in part upon compute capacity of the first subset of the PCPUs to run the first VCPUs, and to map individual ones of the second VCPUs to run on individual ones of the second subset of the PCPUs.
    Type: Grant
    Filed: December 23, 2015
    Date of Patent: November 10, 2020
    Assignee: Intel IP Corporation
    Inventors: Yuyang Du, Jian Sun, Yong Tong Chua, Mingqiu Sun, Sebastien Haezebrouck, Nicole Chalhoub, Premanand Sakarda, Richard Quinzio
  • Patent number: 10782999
    Abstract: Disclosed are a method, a device, and a single-tasking system for implementing multi-tasking in a single-tasking system. The method includes: performing a master task; allocating a hardware timer to a slave task on a central processing unit (CPU); configuring an interrupt period of the hardware timer; and generating, by the hardware timer, a hardware interrupt periodically based on the interrupt period to trigger the performance of the slave task. Therefore, independent and concurrent execution of the master task and slave task can be achieved in a single-tasking system, without the need to add an unwieldy multitasking scheduling framework to the operating system. Furthermore, the slave task is executed only when the hardware timer generates hardware interrupts, so fewer system resources are consumed and the unwieldy inter-process communication mechanisms adopted in traditional multi-tasking systems are not needed. Example inter-process communication mechanisms may include semaphores, spinlocks, etc.
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: September 22, 2020
    Assignee: PAX COMPUTER TECHNOLOGY (SHENZHEN) CO., LTD.
    Inventors: Shifang Dong, Yingfeng Tang
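    Illustrative sketch (C, POSIX) for the abstract above: the periodic hardware-timer interrupt is emulated with a POSIX interval timer, so the master task runs in the main loop while the slave task executes only from the timer "interrupt" (signal handler). The 100 ms period is an arbitrary example, not a value from the patent.

        #include <signal.h>
        #include <stdio.h>
        #include <sys/time.h>
        #include <unistd.h>

        static volatile sig_atomic_t slave_runs = 0;

        /* The "slave task": triggered only by the periodic timer interrupt. */
        static void slave_task(int signo)
        {
            (void)signo;
            slave_runs++;                 /* keep the handler short and async-signal-safe */
        }

        int main(void)
        {
            struct itimerval period = {
                .it_interval = { .tv_sec = 0, .tv_usec = 100000 },   /* 100 ms interrupt period */
                .it_value    = { .tv_sec = 0, .tv_usec = 100000 },
            };

            signal(SIGALRM, slave_task);                /* attach the slave task to the timer */
            setitimer(ITIMER_REAL, &period, NULL);      /* configure the interrupt period */

            for (int i = 0; i < 10; i++) {              /* the master task */
                printf("master iteration %d (slave ran %d times)\n", i, (int)slave_runs);
                usleep(250000);
            }
            return 0;
        }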
  • Patent number: 10606638
    Abstract: A transaction is detected. The transaction has a begin-transaction indication and an end-transaction indication. If it is determined that the begin-transaction indication is not a no-speculation indication, then the transaction is processed.
    Type: Grant
    Filed: May 12, 2017
    Date of Patent: March 31, 2020
    Assignee: International Business Machines Corporation
    Inventors: Fadi Y. Busaba, Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum
  • Patent number: 10592164
    Abstract: Portions of configuration state registers are maintained in-memory. An instruction is obtained, and a determination is made that the instruction accesses a configuration state register. A portion of the configuration state register is in-memory and another portion of the configuration state register is in-processor. Processing associated with the configuration state register is performed, based on the type of access and on whether the in-memory portion or the in-processor portion is being accessed.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: March 17, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura
  • Patent number: 10579457
    Abstract: A processor and methods are provided for detecting faults in a control flow. The processor includes an instruction set architecture defining a pair of FLOWSET and FLOWCHECK opcodes and FLOWSET and FLOWCHECK operations. This pair of opcodes and the associated operations work together with a CFI shadow stack to detect faults in an intended flow of instructions. Upon detection of a fault, a fault notice is provided. The methods of detecting faults in a control flow may be implemented using hardware or software and a shadow stack.
    Type: Grant
    Filed: November 3, 2017
    Date of Patent: March 3, 2020
    Inventor: Andrew H White
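    Illustrative sketch (C) for the abstract above: a software model of the FLOWSET/FLOWCHECK pairing against a shadow stack. FLOWSET pushes an expected control-flow token, FLOWCHECK pops and compares it, and a mismatch produces a fault notice. Token values, stack depth, and function names are illustrative, not the patent's encoding.

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define SHADOW_DEPTH 64

        static uint32_t shadow_stack[SHADOW_DEPTH];
        static int      shadow_top;

        /* FLOWSET: record the token for the control-flow edge we expect to take. */
        static void flowset(uint32_t token)
        {
            if (shadow_top >= SHADOW_DEPTH) { fprintf(stderr, "fault: shadow stack overflow\n"); exit(1); }
            shadow_stack[shadow_top++] = token;
        }

        /* FLOWCHECK: verify we arrived via the expected edge; otherwise raise a fault notice. */
        static void flowcheck(uint32_t token)
        {
            if (shadow_top == 0 || shadow_stack[--shadow_top] != token) {
                fprintf(stderr, "fault notice: control-flow check failed (token %u)\n", token);
                exit(1);
            }
        }

        static void protected_routine(void)
        {
            flowcheck(0xC0DE);                   /* must match the caller's FLOWSET */
            printf("protected_routine reached along the intended flow\n");
        }

        int main(void)
        {
            flowset(0xC0DE);                     /* declare the intended flow */
            protected_routine();
            return 0;
        }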
  • Patent number: 10503506
    Abstract: A mechanism is provided for improving performance when executing unaligned load instructions which load an unaligned block of data from a data store. In a first unaligned load handling mode, a final load operation of a series of load operations performed for the instruction loads a full data word extending beyond the end of the unaligned block of data to be loaded by that instruction. If an initial portion of the unaligned block of data to be loaded by a subsequent unaligned load instruction corresponds to the excess part in the stream buffer for the earlier instruction, then an initial load operation for the subsequent instruction can be suppressed. A mechanism is also described for allowing series of dependent data access operations triggered by a given instruction to be halted partway through when a stall condition arises, and resumed partway through later, by defining overlapping sequences of transactions.
    Type: Grant
    Filed: October 19, 2015
    Date of Patent: December 10, 2019
    Assignee: ARM Limited
    Inventor: Max John Batley
  • Patent number: 10496540
    Abstract: A processor includes a cache memory, an issuing unit that issues, with respect to all element data as a processing object of a load instruction, a cache request to the cache memory for each of a plurality of groups which are divided to include element data, a comparing unit that compares addresses of the element data as the processing object of the load instruction, and determines whether element data in a same group are simultaneously accessible, and a control unit that accesses the cache memory according to the cache request registered in a load queue registering one or more cache requests issued from the issuing unit. The control unit processes by one access whole element data determined to be simultaneously accessible by the comparing unit.
    Type: Grant
    Filed: July 27, 2016
    Date of Patent: December 3, 2019
    Assignee: FUJITSU LIMITED
    Inventors: Hideki Okawara, Noriko Takagi, Yasunobu Akizuki, Kenichi Kitamura, Mikio Hondo
  • Patent number: 10452401
    Abstract: Techniques are disclosed relating to selecting store instructions for dispatch to a shared pipeline. In some embodiments, the shared pipeline processes instructions for different target clients with different data rate capabilities. Therefore, in some embodiments, the pipeline is configured to generate state information that is based on a determined amount of work in the pipeline that targets at least one slower target. In some embodiments, the state information indicates whether the amount of work is above a threshold for the particular target. In some embodiments, scheduling circuitry is configured to select instructions for dispatch to the pipeline based on the state information. For example, the scheduling circuitry may refrain from selecting instructions with a slower target when the slower target is above its threshold amount of work in the pipeline.
    Type: Grant
    Filed: March 20, 2017
    Date of Patent: October 22, 2019
    Assignee: Apple Inc.
    Inventor: Robert D. Kenney
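    Illustrative sketch (C) for the abstract above: a model of the scheduling decision in which per-target work counters for the shared pipeline are compared against per-target thresholds, and an instruction is selected for dispatch only if its (possibly slower) target is not already above its threshold. The targets and threshold values are assumptions for illustration.

        #include <stdbool.h>
        #include <stdio.h>

        enum target { TARGET_FAST = 0, TARGET_SLOW = 1, NUM_TARGETS };

        static int work_in_pipeline[NUM_TARGETS];              /* outstanding work per target */
        static const int threshold[NUM_TARGETS] = { 16, 4 };   /* slower target gets a lower cap */

        /* State information: is this target above its threshold amount of work? */
        static bool target_over_threshold(enum target t)
        {
            return work_in_pipeline[t] >= threshold[t];
        }

        /* Select an instruction for dispatch only if its target has headroom. */
        static bool try_dispatch(enum target t)
        {
            if (target_over_threshold(t))
                return false;                   /* refrain from selecting this instruction */
            work_in_pipeline[t]++;              /* dispatched into the shared pipeline */
            return true;
        }

        int main(void)
        {
            for (int i = 0; i < 6; i++)
                printf("slow-target dispatch %d: %s\n", i,
                       try_dispatch(TARGET_SLOW) ? "issued" : "stalled (over threshold)");
            return 0;
        }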
  • Patent number: 10445240
    Abstract: Digital signal processors often operate on two operands per instruction, and it is desirable to retrieve both operands in one cycle. Some data caches connect to the processor over two busses and internally use two or more memory banks to store cache lines. The allocation of cache lines to specific banks is based on the address with which the cache line is associated. When two memory accesses map to the same memory bank, fetching the operands incurs extra latency because the accesses are serialized. An improved bank organization for providing conflict-free dual-data cache access (a bus-based data cache system having two data buses and two memory banks) is disclosed. Each memory bank works as a default memory bank for the corresponding data bus. As long as the two values of data being accessed belong to two separate data sets assigned to the two respective data buses, memory bank conflicts are avoided.
    Type: Grant
    Filed: August 1, 2014
    Date of Patent: October 15, 2019
    Assignee: ANALOG DEVICES GLOBAL UNLIMITED COMPANY
    Inventors: Abhijit Giri, Saurbh Srivastava, Michael S. Allen
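    Illustrative sketch (C) for the abstract above: the bank-selection rule, where each cache line maps to one of the two memory banks based on its address and two simultaneous accesses conflict only when they resolve to the same bank. The 32-byte line size and the parity-of-line-index mapping are assumptions for illustration.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define LINE_SIZE 32u   /* assumed cache-line size in bytes */

        /* Allocate cache lines to banks by the low bit of the line index. */
        static unsigned bank_of(uint32_t addr)
        {
            return (addr / LINE_SIZE) & 1u;
        }

        /* Two operand fetches complete in one cycle only if they hit different banks. */
        static bool conflict_free(uint32_t addr_a, uint32_t addr_b)
        {
            return bank_of(addr_a) != bank_of(addr_b);
        }

        int main(void)
        {
            /* Two data sets laid out on alternating lines land in different banks. */
            printf("0x1000 vs 0x1020: %s\n", conflict_free(0x1000, 0x1020) ? "parallel" : "serialized");
            printf("0x1000 vs 0x1040: %s\n", conflict_free(0x1000, 0x1040) ? "parallel" : "serialized");
            return 0;
        }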
  • Patent number: 10409983
    Abstract: A system that includes a guest virtual machine is in communication with a hypervisor. The guest virtual machine comprises virtual machine measurement points and a hypervisor control point. The hypervisor control point is configured to collect virtual machine memory metadata from the guest virtual machine and hypervisor memory metadata from the hypervisor, and to compare the virtual machine memory metadata to the hypervisor memory metadata. The hypervisor control point is further configured to determine whether the virtual machine memory metadata is the same as the hypervisor memory metadata and to communicate the virtual machine memory metadata to the virtual vault machine in response to determining that the virtual machine memory metadata is the same as the hypervisor memory metadata. The virtual vault machine is in communication with the hypervisor and configured to classify the state of the guest virtual machine based on the virtual machine memory metadata.
    Type: Grant
    Filed: May 31, 2016
    Date of Patent: September 10, 2019
    Assignee: Armor Defense, Inc.
    Inventors: Jeffrey Ray Schilling, Chase Cooper Cunningham, Tawfiq Mohan Shah, Srujan Das Kotikela
  • Patent number: 10402935
    Abstract: A method of profiling the performance of a graphics unit when rendering a scene according to a graphics pipeline, includes executing stages of the graphics pipeline using one or more units of rendering circuitry to perform at least one rendering task that defines a portion of the work required to render the scene, the at least one rendering task associated with a set flag; propagating an indication of the flag through stages of the graphics pipeline as the scene is rendered so that work done as part of the at least one rendering task is associated with the set flag; changing the value of a counter associated with a unit of rendering circuitry in response to an occurrence of an event while that unit performs an item of work associated with the set flag; and reading the value of the counter to thereby measure the occurrences of the event caused by completing the at least one rendering task.
    Type: Grant
    Filed: October 31, 2017
    Date of Patent: September 3, 2019
    Assignee: Imagination Technologies Limited
    Inventor: Yoong-Chert Foo
  • Patent number: 10402425
    Abstract: Techniques provide for hardware accelerated data movement between main memory and an on-chip data movement system that comprises multiple core processors that operate on the tabular data. The tabular data is moved to or from the scratch pad memories of the core processors. While the data is in-flight, the data may be manipulated by data manipulation operations. The data movement system includes multiple data movement engines, each dedicated to moving and transforming tabular data from main memory to a subset of the core processors. Each data movement engine is coupled to an internal memory that stores data (e.g. a bit vector) that dictates how data manipulation operations are performed on tabular data moved from a main memory to the memories of a core processor, or to and from other memories. The internal memory of each data movement engine is private to the data movement engine.
    Type: Grant
    Filed: July 24, 2018
    Date of Patent: September 3, 2019
    Assignee: Oracle International Corporation
    Inventors: David A. Brown, Rishabh Jain, Michael Duller, Sam Idicula, Erik Schlanger, David Joseph Hawkins, Christopher Joseph Daniels
  • Patent number: 10372452
    Abstract: A system and a method to cascade execution of instructions in a load-store unit (LSU) of a central processing unit (CPU) to reduce latency associated with the instructions. First data stored in a cache is read by the LSU in response to a first memory load instruction of two immediately consecutive memory load instructions. Alignment, sign extension and/or endian operations are performed on the first data read from the cache in response to the first memory load instruction, and, in parallel, a memory-load address-forwarded result is selected based on a corrected alignment of the first data read in response to the first memory load instruction to provide a next address for a second of the two immediately consecutive memory load instructions. Second data stored in the cache is read by the LSU in response to the second memory load instruction based on the selected memory-load address-forwarded result.
    Type: Grant
    Filed: June 6, 2017
    Date of Patent: August 6, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Paul E. Kitchin, Rama S. Gopal, Karthik Sundaram
  • Patent number: 10365926
    Abstract: A programmable processor and method for improving the performance of processors by expanding at least two source operands, or a source and a result operand, to a width greater than the width of either the general purpose register or the data path width. The present invention provides operands which are substantially larger than the data path width of the processor by using the contents of a general purpose register to specify a memory address at which a plurality of data path widths of data can be read or written, as well as the size and shape of the operand. In addition, several instructions and apparatus for implementing these instructions are described which obtain performance advantages if the operands are not limited to the width and accessible number of general purpose registers.
    Type: Grant
    Filed: May 5, 2016
    Date of Patent: July 30, 2019
    Assignee: MicroUnity Systems Engineering, Inc.
    Inventors: Craig Hansen, John Moussouris, Alexia Massalin
  • Patent number: 10360162
    Abstract: Embodiments include processing systems that determine, based on an instruction address range indicator stored in a first register, whether a next instruction fetch address corresponds to a location within a first memory region associated with a current privilege state or within a second memory region associated with a different privilege state. When the next instruction fetch address is not within the first memory region, the next instruction is allowed to be fetched only when a transition to the different privilege state is legal. In a further embodiment, when a data access address is generated for an instruction, a determination is made, based on a data address range indicator stored in a second register, whether access to a memory location corresponding to the data access address is allowed. The access is allowed when the current privilege state is a privilege state in which access to the memory location is allowed.
    Type: Grant
    Filed: May 3, 2017
    Date of Patent: July 23, 2019
    Assignee: NXP USA, Inc.
    Inventors: Daniel M. McCarthy, Joseph C. Circello, Kristen A. Hausman
  • Patent number: 10360353
    Abstract: Execution control of computer software instructions. A determination is made as to whether a record exists that indicates an outcome of a previous attempt to execute a computer software instruction in a first execution privilege mode. A current attempt to execute the computer software instruction is controlled by causing the current attempt to execute the computer software instruction in a second execution privilege mode if the record exists and if the outcome indicates that the attempt to execute the computer software instruction in the first execution privilege mode failed.
    Type: Grant
    Filed: February 8, 2017
    Date of Patent: July 23, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ben Chen, Amir Glaser, Roman Minkov
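    Illustrative sketch (C) for the abstract above: before executing an instruction, a record of a previous attempt is consulted; if that attempt failed in the first execution privilege mode, the current attempt runs in the second privilege mode. The record table and mode names are illustrative.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        enum priv_mode { MODE_USER, MODE_SUPERVISOR };

        typedef struct {
            uint64_t insn_addr;     /* instruction the record refers to */
            bool     failed_user;   /* previous attempt in MODE_USER failed */
        } exec_record_t;

        /* Look up the outcome of a prior attempt, if any record exists. */
        static const exec_record_t *find_record(const exec_record_t *tab, int n, uint64_t addr)
        {
            for (int i = 0; i < n; i++)
                if (tab[i].insn_addr == addr)
                    return &tab[i];
            return NULL;
        }

        /* Choose the privilege mode for the current attempt. */
        static enum priv_mode choose_mode(const exec_record_t *tab, int n, uint64_t addr)
        {
            const exec_record_t *r = find_record(tab, n, addr);
            if (r && r->failed_user)
                return MODE_SUPERVISOR;   /* retry in the second execution privilege mode */
            return MODE_USER;
        }

        int main(void)
        {
            exec_record_t records[] = { { 0x400123, true } };
            printf("0x400123 -> %s\n",
                   choose_mode(records, 1, 0x400123) == MODE_SUPERVISOR ? "supervisor" : "user");
            printf("0x400200 -> %s\n",
                   choose_mode(records, 1, 0x400200) == MODE_SUPERVISOR ? "supervisor" : "user");
            return 0;
        }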
  • Patent number: 10331357
    Abstract: A system and method for tracking stores and loads to reduce load latency when forming the same memory address by bypassing a load store unit within an execution unit is disclosed. The system and method include storing data in one or more memory dependent architectural register numbers (MdArns), allocating the one or more MdArns to a MEMFILE, writing the allocated one or more MdArns to a map file, wherein the map file contains a MdArn map to enable subsequent access to an entry in the MEMFILE, upon receipt of a load request, checking a base, an index, a displacement and a match/hit via the map file to identify an entry in the MEMFILE and an associated store, and on a hit, providing the entry responsive to the load request from the one or more MdArns.
    Type: Grant
    Filed: December 15, 2016
    Date of Patent: June 25, 2019
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Betty Ann McDaniel, Michael D. Achenbach, David N. Suggs, Frank C. Galloway, Kai Troester, Krishnan V. Ramani
  • Patent number: 10331543
    Abstract: Methods and systems for performance measurements of a program are provided. An execution trace of the program may be captured and stored. The stored execution trace may be replayed in an offline mode. Performance measurements for the program may be determined based on the replaying of the execution trace in the offline mode.
    Type: Grant
    Filed: January 13, 2017
    Date of Patent: June 25, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mark Marron, Arunesh Chandra, Todd Douglas Mytkowicz, Hitesh Kanwathirtha
  • Patent number: 10310979
    Abstract: A data processing system, having two or more of processors that access a shared data resource, and method of operation thereof. Data stored in a local cache is marked as being in a ‘UniqueDirty’, ‘SharedDirty’, ‘UniqueClean’, ‘SharedClean’ or ‘Invalid’ state. A snoop filter monitors access by the processors to the shared data resource, and includes snoop filter control logic and a snoop filter cache configured to maintain cache coherency. The snoop filter cache does not identify any local cache that stores the block of data in a ‘SharedDirty’ state, resulting in a smaller snoop filter cache size and simple snoop control logic. The data processing system by be defined by instructions of a Hardware Description Language.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: June 4, 2019
    Assignee: Arm Limited
    Inventors: Jamshed Jalal, Mark David Werkheiser
  • Patent number: 10303399
    Abstract: An apparatus and method are provided for controlling vector memory accesses. The apparatus comprises a set of vector registers, and flag setting circuitry that is responsive to a determination that a vector generated for storage in one of the vector registers comprises a plurality of elements that meet specified contiguousness criteria, to generate flag information associated with that vector register. Processing circuitry is then used to perform a vector memory access operation in order to access in memory a plurality of data values at addresses determined from an address vector operand comprising a plurality of address elements. The address vector operand is provided in a specified vector register of the vector register set, such that the plurality of elements of the vector stored in that specified vector register form the plurality of address elements.
    Type: Grant
    Filed: December 7, 2017
    Date of Patent: May 28, 2019
    Assignee: ARM Limited
    Inventors: François Christopher Jacques Botman, Thomas Christopher Grocutt
  • Patent number: 10296395
    Abstract: Performing a rooted-v collective operation by an operational group of compute nodes in a parallel computer includes: upon encountering a rooted-v collective operation during execution, identifying, by a root node of an operational group of compute nodes, a count to use for the selection of a collective algorithm for effecting the rooted-v collective operation; broadcasting, by the root node to the other compute nodes in the operational group, an active message, wherein the active message includes the identified count to use for the selection of the collective algorithm; selecting, by all the compute nodes of the operational group based on the identified count, a same collective algorithm to effect the rooted-v collective operation; and executing the rooted-v collective operation by all compute nodes of the operational group using the selected algorithm.
    Type: Grant
    Filed: May 9, 2016
    Date of Patent: May 21, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Nysal Jan K. A., Sameh S. Sharkawi
  • Patent number: 10268521
    Abstract: An electronic system includes: a cluster manager configured to: divide a user program into a group of parallel execution tasks, and generate shuffling metadata to map intermediate data and processed data from the parallel execution tasks; a shuffling cluster node, coupled to the cluster manager, configured to: store the shuffling metadata by an in-storage computer (ISC), and incrementally shuffle each of the sub-packets of the intermediate data and the processed data, by the ISC, based on the shuffling metadata when the parallel execution task is in process; and a local storage, coupled to the shuffling cluster node and mapped through the shuffling metadata, for receiving the sub-packets of the processed data from the shuffling cluster node.
    Type: Grant
    Filed: May 6, 2016
    Date of Patent: April 23, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Inseok Stephen Choi, Yang Seok Ki
  • Patent number: 10255187
    Abstract: A method for weak stream software data and instruction prefetching using a hardware data prefetcher is disclosed. A method includes determining if software includes software prefetch instructions, using a hardware data prefetcher, and accessing the software prefetch instructions if the software includes software prefetch instructions. Using the hardware data prefetcher, weak stream software data and instruction prefetching operations are executed based on the software prefetch instructions, free of training operations.
    Type: Grant
    Filed: May 3, 2016
    Date of Patent: April 9, 2019
    Assignee: Intel Corporation
    Inventors: Karthikeyan Avudaiyappan, Mohammad Abdallah
  • Patent number: 10248593
    Abstract: A technique for handling interrupts in a data processing system includes receiving, at an interrupt presentation controller (IPC), an event notification message (ENM) that specifies an event target number and a number of bits to ignore. In response to a slot being available in an interrupt request queue, the IPC enqueues the ENM in the slot. In response to the ENM being dequeued from the interrupt request queue, the IPC determines a group of virtual processor threads that may be potentially interrupted based on the event target number and the number of bits to ignore specified in the ENM. The event target number identifies a specific virtual processor thread and the number of bits to ignore identifies the number of lower-order bits to ignore with respect to the specific virtual processor thread when determining a group of virtual processor threads that may be potentially interrupted.
    Type: Grant
    Filed: June 4, 2017
    Date of Patent: April 2, 2019
    Assignee: International Business Machines Corporation
    Inventors: Florian A. Auernhammer, Daniel Wind
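    Illustrative sketch (C) for the abstract above: how the event target number plus the number of bits to ignore defines the group of virtual processor threads that may be interrupted. Masking off the low-order bits yields the first thread in the group, and the group spans 2^ignore threads.

        #include <stdio.h>

        /* Given the event target number and the number of low-order bits to ignore,
         * report the range of virtual-processor thread numbers that may be interrupted. */
        static void candidate_group(unsigned event_target, unsigned bits_to_ignore,
                                    unsigned *first, unsigned *count)
        {
            unsigned mask = (1u << bits_to_ignore) - 1u;
            *first = event_target & ~mask;      /* low-order bits ignored */
            *count = mask + 1u;                 /* 2^bits_to_ignore threads in the group */
        }

        int main(void)
        {
            unsigned first, count;
            candidate_group(/*event_target=*/ 0x2D, /*bits_to_ignore=*/ 2, &first, &count);
            printf("threads %u..%u may be interrupted\n", first, first + count - 1);  /* 44..47 */
            return 0;
        }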
  • Patent number: 10235231
    Abstract: An exemplary method for detecting one or more anomalies in a system includes building a temporal causality graph describing functional relationships among local components during a normal period; applying the causality graph as a propagation template to predict a system status by iteratively applying current system event signatures; and detecting the one or more anomalies of the system by examining related patterns on the template causality graph that specifies normal system behaviors. The system can align event patterns on the causality graph to determine an anomaly score.
    Type: Grant
    Filed: November 15, 2016
    Date of Patent: March 19, 2019
    Assignee: NEC Corporation
    Inventors: Kai Zhang, Jianwu Xu, Hui Zhang, Guofei Jiang
  • Patent number: 10229074
    Abstract: A technique for handling interrupts in a data processing system includes receiving, at an interrupt presentation controller (IPC), an event notification message (ENM) that specifies an event target number and a number of bits to ignore. In response to a slot being available in an interrupt request queue, the IPC enqueues the ENM in the slot. In response to the ENM being dequeued from the interrupt request queue, the IPC determines a group of virtual processor threads that may be potentially interrupted based on the event target number and the number of bits to ignore specified in the ENM. The event target number identifies a specific virtual processor thread and the number of bits to ignore identifies the number of lower-order bits to ignore with respect to the specific virtual processor thread when determining a group of virtual processor threads that may be potentially interrupted.
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: March 12, 2019
    Assignee: International Business Machines Corporation
    Inventors: Florian A. Auernhammer, Daniel Wind
  • Patent number: 10228992
    Abstract: Corruption of call stacks is detected by using guard words placed in the call stacks. A determination is made that a caller routine is to facilitate detection of corruption of stacks. Based on the determination, a store of a guard word in a stack frame of the caller routine is provided in the caller routine. The stored guard word is then used to detect corruption of the stack frame.
    Type: Grant
    Filed: January 6, 2016
    Date of Patent: March 12, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Ronald I. McIntosh
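    Illustrative sketch (C) for the abstract above: the caller stores a known guard word in its stack frame before calling out, then checks it afterwards to detect that the frame was overwritten. The guard value and check placement are illustrative; in the patented approach such stores and checks would be provided in the caller routine automatically.

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define GUARD_WORD 0xDEADBEEFCAFEF00DULL

        static void callee(char *buf, const char *input)
        {
            /* A well-behaved callee; a buggy one overrunning buf could clobber the guard. */
            strncpy(buf, input, 15);
            buf[15] = '\0';
        }

        static void caller(const char *input)
        {
            char buf[16];
            volatile uint64_t guard = GUARD_WORD;   /* guard word stored in the caller's frame */

            callee(buf, input);

            if (guard != GUARD_WORD) {              /* corruption of the stack frame detected */
                fprintf(stderr, "stack frame corruption detected\n");
                abort();
            }
            printf("frame intact: %s\n", buf);
        }

        int main(void)
        {
            caller("hello");
            return 0;
        }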
  • Patent number: 10223196
    Abstract: Techniques for error correction in a processor include detecting an error in first data stored in a register. The method also includes generating an instruction to read the first data stored in the register, where the register is both a source register and a destination register of the instruction. The method further includes transmitting the first data to an execution unit, where the first data bypasses an issue queue. The method also includes decoding the instruction and correcting the error to generate corrected data and writing the corrected data to the destination register.
    Type: Grant
    Filed: November 7, 2017
    Date of Patent: March 5, 2019
    Assignee: International Business Machines Corporation
    Inventors: Brian D. Barrick, James W. Bishop, Maarten J. Boersma, Marcy E. Byers, Sundeep Chadha, Jentje Leenstra, Dung Q. Nguyen, David R. Terry
  • Patent number: 10185566
    Abstract: In one embodiment, the present invention includes a multicore processor having first and second cores to independently execute instructions, the first core visible to an operating system (OS) and the second core transparent to the OS and heterogeneous from the first core. A task controller, which may be included in or coupled to the multicore processor, can cause dynamic migration of a first process scheduled by the OS to the first core to the second core transparently to the OS. Other embodiments are described and claimed.
    Type: Grant
    Filed: April 27, 2012
    Date of Patent: January 22, 2019
    Assignee: Intel Corporation
    Inventors: Alon Naveh, Yuval Yosef, Eliezer Weissmann, Anil Aggarwal, Efraim Rotem, Avi Mendelson, Ronny Ronen, Boris Ginzburg, Michael Mishaeli, Scott D. Hahn, David A. Koufaty, Ganapati Srinivasa, Guy Therien
  • Patent number: 10185588
    Abstract: A TRANSACTION BEGIN instruction and a TRANSACTION END instruction are provided. The TRANSACTION BEGIN instruction causes either a constrained or nonconstrained transaction to be initiated, depending on a field of the instruction. The TRANSACTION END instruction ends the transaction started by the TRANSACTION BEGIN instruction.
    Type: Grant
    Filed: May 23, 2016
    Date of Patent: January 22, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Dan F. Greiner, Christian Jacobi, Marcel Mitran, Timothy J. Slegel
  • Patent number: 10169043
    Abstract: A method includes determining that an operation should be performed to restore 80 bits stored in memory for an 80-bit register of a guest architecture on a host having 64-bit registers. The method further includes storing 64 bits from the 80 bits in a host register. The method further includes storing the remaining 16 bits from the 80 bits in supplemental memory storage. The method further includes identifying a floating point operation that should be performed to operate on the 80-bit register for the guest architecture. As a result, the method further includes using the 64 bits in the host register and the remaining 16 bits stored in the supplemental memory storage to translate a floating point number represented by the 80 bits to a 64-bit floating point number and store the 64-bit floating point number in the host register.
    Type: Grant
    Filed: November 17, 2015
    Date of Patent: January 1, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Aaron Sebastian Giles, Clarence Siu Yeen Dang
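    Illustrative sketch (C) for the abstract above: one way to model the split, with the low 64 bits (the significand) going to the 64-bit host register and the remaining 16 bits (sign and exponent) going to supplemental storage. The reassembly into a 64-bit double below assumes an x86 host compiler where long double is the 80-bit extended format; it is not Microsoft's implementation.

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        typedef struct {
            uint64_t host_register;   /* low 64 bits of the guest's 80-bit value (significand) */
            uint16_t supplemental;    /* remaining 16 bits (sign + 15-bit exponent) */
        } split80_t;

        /* Split the 10-byte guest register image into host register + supplemental storage. */
        static split80_t split_80bit(const uint8_t raw[10])
        {
            split80_t s;
            memcpy(&s.host_register, raw, 8);        /* little-endian significand */
            memcpy(&s.supplemental, raw + 8, 2);
            return s;
        }

        /* Reassemble and round to a 64-bit double (assumes x86 80-bit long double). */
        static double to_double(split80_t s)
        {
            long double x = 0.0L;
            memcpy(&x, &s.host_register, 8);
            memcpy((uint8_t *)&x + 8, &s.supplemental, 2);
            return (double)x;
        }

        int main(void)
        {
            long double pi = 3.14159265358979323846L;
            uint8_t raw[10];
            memcpy(raw, &pi, 10);

            split80_t s = split_80bit(raw);
            printf("reconstructed double: %.17g\n", to_double(s));
            return 0;
        }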
  • Patent number: 10169244
    Abstract: The described embodiments perform a method for handling memory accesses by virtual machines in a computing device. The described embodiments include a reverse map table (RMT) and a separate guest accessed pages table (GAPT) for each virtual machine. The RMT has a plurality of entries, each entry including information for identifying a virtual machine that is permitted to access an associated page of data in a memory. Each GAPT has a record of pages being accessed by a corresponding virtual machine. During operation, a table walker receives a request from a given virtual machine to translate a guest physical address to a system physical address. The table walker checks at least one of the RMT and a corresponding GAPT to determine whether the given virtual machine has access to a corresponding page. If not, the table walker terminates the translating. Otherwise, the table walker completes the translating.
    Type: Grant
    Filed: July 29, 2016
    Date of Patent: January 1, 2019
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: David A. Kaplan, Jeremy W. Powell, Thomas R. Woller
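    Illustrative sketch (C) for the abstract above: the table-walker check, where a reverse map table entry is consulted before completing a guest-physical to system-physical translation, and the translation is terminated if the requesting virtual machine does not own the page. The entry layout and ID values are hypothetical; the GAPT bookkeeping is omitted.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        typedef struct {
            bool     valid;
            int      owner_vm;      /* VM permitted to access this system-physical page */
            uint64_t spa;           /* system physical address of the page */
        } rmt_entry_t;

        /* Translate a guest physical page for a given VM, checking the reverse map table.
         * Returns false (translation terminated) if the VM does not own the page. */
        static bool translate(const rmt_entry_t *rmt, int rmt_len,
                              int vm_id, uint64_t gpa_page, uint64_t *spa_out)
        {
            if (gpa_page >= (uint64_t)rmt_len || !rmt[gpa_page].valid)
                return false;
            if (rmt[gpa_page].owner_vm != vm_id)
                return false;                       /* access check failed: terminate the walk */
            *spa_out = rmt[gpa_page].spa;
            return true;
        }

        int main(void)
        {
            rmt_entry_t rmt[4] = {
                { true, /*owner_vm=*/1, 0x10000 },
                { true, /*owner_vm=*/2, 0x20000 },
            };
            uint64_t spa;
            printf("VM1 -> page0: %s\n", translate(rmt, 4, 1, 0, &spa) ? "ok" : "denied");
            printf("VM1 -> page1: %s\n", translate(rmt, 4, 1, 1, &spa) ? "ok" : "denied");
            return 0;
        }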
  • Patent number: 10162680
    Abstract: A method of exchanging data in a real-time operating system, between a primary core and a secondary core in a multi-core processor, includes executing a primary path via the primary core and executing a secondary path via the secondary core. The primary path is configured to be a relatively faster processing task and the secondary path is configured to be a relatively slower processing task. The method includes devising a freeze in process flag to have a respective flag status set and cleared by the primary path. The method includes devising a data frozen flag to have a respective flag status set and cleared by both the primary and the secondary paths. A component that is operatively connected to the multi-core processor may be controlled based at least partially on a difference between primary and secondary sets of calculations executed by the primary and secondary cores, respectively.
    Type: Grant
    Filed: December 13, 2016
    Date of Patent: December 25, 2018
    Assignee: GM Global Technology Operations LLC
    Inventors: Young Joo Lee, Daniel J. Berry, Brian A. Welchko
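    Illustrative sketch (C11) for the abstract above: the two-flag handshake modeled with atomics. The fast primary path publishes a data snapshot and sets the data-frozen flag; the slower secondary path consumes the snapshot only when that flag is set and then clears it. In the patent the two paths run on separate cores; here they are called sequentially just to show the flag protocol.

        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stdio.h>

        static atomic_bool freeze_in_process = false;  /* set/cleared by the primary path */
        static atomic_bool data_frozen       = false;  /* set by the primary, cleared by the secondary */
        static int shared_snapshot;                    /* data exchanged between the paths */

        /* Primary path (fast task): publish a snapshot for the secondary path. */
        static void primary_path(int value)
        {
            if (atomic_load(&data_frozen))
                return;                                /* previous snapshot not consumed yet */
            atomic_store(&freeze_in_process, true);
            shared_snapshot = value;                   /* freeze the data */
            atomic_store(&freeze_in_process, false);
            atomic_store(&data_frozen, true);          /* hand off to the secondary path */
        }

        /* Secondary path (slow task): consume the snapshot if one is frozen. */
        static bool secondary_path(int *out)
        {
            if (atomic_load(&freeze_in_process) || !atomic_load(&data_frozen))
                return false;                          /* primary is mid-update or nothing frozen */
            *out = shared_snapshot;
            atomic_store(&data_frozen, false);         /* release the slot for the primary path */
            return true;
        }

        int main(void)
        {
            int v;
            primary_path(123);
            if (secondary_path(&v))
                printf("secondary path consumed %d\n", v);
            return 0;
        }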
  • Patent number: 10114773
    Abstract: A technique for handling interrupts in a data processing system includes receiving, at an interrupt presentation controller (IPC), an event notification message (ENM). The ENM specifies a level, an event target number, and a number of bits to ignore. The IPC determines a group of virtual processor threads that may be potentially interrupted based on the event target number, the number of bits to ignore, and a process identifier (ID) when the level specified in the ENM corresponds to a user level. The event target number identifies a specific virtual processor thread and the number of bits to ignore identifies the number of lower-order bits to ignore with respect to the specific virtual processor thread when determining a group of virtual processor threads that may be potentially interrupted.
    Type: Grant
    Filed: November 17, 2017
    Date of Patent: October 30, 2018
    Assignee: International Business Machines Corporation
    Inventors: Richard L. Arndt, Florian A. Auernhammer
  • Patent number: 10114645
    Abstract: A technique suppresses the occurrence of stalling caused by data dependency other than register dependency in an out-of-order processor. A stall reducing program includes a handler for detecting a stall occurring during execution of execution code using a performance monitoring unit, and identifying, based on dependencies, a second instruction on which a first instruction is data dependent, the stall being caused by this dependency; a profiler registering the second instruction as profile information; and an optimization module for inserting a thread yield instruction in the appropriate position inside the execution code or original code file based on the profile information, and outputting the optimized execution code.
    Type: Grant
    Filed: August 13, 2013
    Date of Patent: October 30, 2018
    Assignee: International Business Machines Corporation
    Inventor: Takeshi Ogasawara
  • Patent number: 10067969
    Abstract: Techniques are disclosed for implementing a unified partitioning scheme within distributed database systems to allow a table to be horizontally partitioned and those partitions stored on and serviced by a storage group. A storage group is a subset of storage manager (SM) nodes, and each SM node is configured to persist database data in durable storage. The distributed database system assigns each storage group to a subset of SM nodes. The distributed database system can address each storage group using a symbolic mapping that allows transactions to identify a particular storage group, and to direct read and write operations to a subset of SM nodes servicing that storage group. An administrator can update this mapping on-the-fly to cause the distributed database system to dynamically adjust an implemented partitioning scheme without necessarily interrupting on-going database operations.
    Type: Grant
    Filed: May 29, 2015
    Date of Patent: September 4, 2018
    Assignee: NuoDB, INC.
    Inventors: Michael Thomas Rice, Oleg Levin, Yan Avlasov, Seth Theodore Proctor, Thomas Jonathan Harwood
  • Patent number: 10042580
    Abstract: A lower level cache receives, from a processor core, a plurality of copy-type requests and a plurality of paste-type requests that together indicate a memory move to be performed, as well as a barrier request that requests ordering of memory access requests prior to and after the barrier request. The barrier request precedes a copy-type request and a paste-type request of the memory move in program order. Prior to completion of processing of the barrier request, the lower level cache allocates first and second state machines to service the copy-type and paste-type requests. The first state machine speculatively reads a data granule identified by a source real address of the copy-type request into a non-architected buffer. After processing of the barrier request is complete, the second state machine writes the data granule from the non-architected buffer to a storage location identified by a destination real address of the paste-type request.
    Type: Grant
    Filed: August 22, 2016
    Date of Patent: August 7, 2018
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Derek E. Williams
  • Patent number: 9983882
    Abstract: Execution of instructions in a transactional environment is selectively controlled. A TRANSACTION BEGIN instruction initiates a transaction and includes controls that selectively indicate whether certain types of instructions are permitted to execute within the transaction. The controls include one or more of an allow access register modification control and an allow floating point operation control.
    Type: Grant
    Filed: October 29, 2014
    Date of Patent: May 29, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Dan F. Greiner, Christian Jacobi, Robert R. Rogers, Timothy J. Slegel
  • Patent number: 9977743
    Abstract: A processing device includes a first counter having a first count value of a number of child pages among a plurality of child pages present in an enclave memory of a first virtual machine (VM). The plurality of child pages are associated with a parent page in the enclave memory. The processing device includes a second counter having a second count value of a number of child pages among the plurality of child pages not present in the enclave memory and being shared by a second VM, wherein the second VM is different from the first VM. A non-zero value of at least one of the first counter or the second counter prevents eviction of the parent page from the enclave memory.
    Type: Grant
    Filed: August 31, 2016
    Date of Patent: May 22, 2018
    Assignee: Intel Corporation
    Inventors: Rebekah M. Leslie-Hurd, Francis X. McKeen, Carlos V. Rozas, Somnath Chakrabarti, Asit Mallick
  • Patent number: 9959123
    Abstract: An approach is provided in which a computing system matches a writeback instruction tag (ITAG) to an entry instruction tag (ITAG) included in an issue queue entry. The writeback ITAG is provided by a first of multiple load store units. The issue queue entry includes multiple ready bits, each of which corresponds to one of the multiple load store units. In response to matching the writeback ITAG to the entry ITAG, the computing system sets a first ready bit corresponding to the first load store unit. In turn, the computing system issues an instruction corresponding to the entry ITAG based upon detecting that each of the multiple ready bits is set.
    Type: Grant
    Filed: August 15, 2015
    Date of Patent: May 1, 2018
    Assignee: International Business Machines Corporation
    Inventors: Joshua W. Bowman, Sundeep Chadha, Michael J. Genden, Dhivya Jeganathan, Dung Q. Nguyen, David R. Terry, Eula F. Tolentino
  • Patent number: 9959101
    Abstract: External references are resolved in a software compiling and linking environment by identifying a group of related external references and by processing the group of external references until a stopping condition is satisfied. The external references are processed by selecting a next external reference from the group of external references as a current external reference and by resolving the current external reference with a matching definition if a matching definition for the current external reference exists. The stopping condition is designated as being satisfied if either the selected external reference is resolved, or if each external reference in the group of external references has been selected.
    Type: Grant
    Filed: May 14, 2013
    Date of Patent: May 1, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Leona D. Baumgart, Allan H. Kielstra, John R. Ehrman, Barry L. Lichtenstein
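    Illustrative sketch (C) for the abstract above: the resolution loop, which keeps selecting the next external reference from a group and resolving it against available definitions until the stopping condition holds (the selected reference resolved, or every reference in the group has been selected). The data structures and names are illustrative.

        #include <stdbool.h>
        #include <stdio.h>
        #include <string.h>

        typedef struct { const char *symbol; bool resolved; } ext_ref_t;

        /* Return a matching definition for the symbol, or NULL if none exists. */
        static const char *find_definition(const char *symbol,
                                           const char *defs[], int ndefs)
        {
            for (int i = 0; i < ndefs; i++)
                if (strcmp(defs[i], symbol) == 0)
                    return defs[i];
            return NULL;
        }

        /* Process a group of related external references until the stopping condition
         * is satisfied: a selected reference resolved, or all references were selected. */
        static void resolve_group(ext_ref_t group[], int n, const char *defs[], int ndefs)
        {
            for (int i = 0; i < n; i++) {                     /* select the next reference */
                const char *def = find_definition(group[i].symbol, defs, ndefs);
                if (def) {
                    group[i].resolved = true;
                    printf("resolved %s\n", group[i].symbol);
                    return;                                   /* stopping condition: resolved */
                }
                printf("no definition for %s\n", group[i].symbol);
            }
            /* stopping condition: every reference in the group has been selected */
        }

        int main(void)
        {
            ext_ref_t group[] = { { "foo", false }, { "bar", false } };
            const char *defs[] = { "bar", "baz" };
            resolve_group(group, 2, defs, 2);
            return 0;
        }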
  • Patent number: 9952976
    Abstract: A computer allows non-cacheable loads or stores in a hardware transactional memory environment. Transactional loads or stores, by a processor, are monitored in a cache for TX conflicts. The processor accepts a request to execute a transactional execution (TX) transaction. Based on processor execution of a cacheable load or store instruction for loading or storing first memory data of the transaction, the computer can perform a cache miss operation on the cache. Based on processor execution of a non-cacheable load instruction for loading second memory data of the transaction, the computer can refrain from performing the cache miss operation on the cache, based on a cache line associated with the second memory data not being cached, and load an address of the second memory data into a non-cache-monitor. The TX transaction can be aborted based on the non-cache monitor detecting a memory conflict from another processor.
    Type: Grant
    Filed: September 9, 2015
    Date of Patent: April 24, 2018
    Assignee: International Business Machines Corporation
    Inventors: Jonathan D. Bradbury, Michael Karl Gschwind, Valentina Salapura, Chung-Lung K. Shum
  • Patent number: 9952840
    Abstract: External references are resolved in a software compiling and linking environment by identifying a group of related external references and by processing the group of external references until a stopping condition is satisfied. The external references are processed by selecting a next external reference from the group of external references as a current external reference and by resolving the current external reference with a matching definition if a matching definition for the current external reference exists. The stopping condition is designated as being satisfied if either the selected external reference is resolved, or if each external reference in the group of external references has been selected.
    Type: Grant
    Filed: May 15, 2012
    Date of Patent: April 24, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Leona D. Baumgart, Allan H. Kielstra, John R. Ehrman, Barry L. Lichtenstein