Processing Control Patents (Class 712/220)
  • Patent number: 11210233
    Abstract: A method and system of managing address translations wherein, in response to a request to invalidate an address translation, the scope of the address translation invalidation operation is determined; an address translation invalidation probe is installed or activated in a memory management unit (MMU) pipeline; whether an address translation undergoing a table walk operation is within the scope of the address translation invalidation probe is determined; and in response to the address translation undergoing a table walk operation being within the scope of the address translation invalidation probe, the table walk operation is prevented or blocked from writing data to a translation buffer in the MMU. The probe also performs an address translation comparison to determine whether an address translation request coming down the MMU pipeline is within the scope of the probe and, if so, prevents, blocks, and/or rejects the address translation.
    Type: Grant
    Filed: January 7, 2020
    Date of Patent: December 28, 2021
    Assignee: International Business Machines Corporation
    Inventors: Jake Truelove, David Campbell
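    A minimal illustrative sketch of the probe mechanism described in this abstract, not the patented implementation; the names InvalidationProbe and MMUPipeline and the range/ASID scope representation are assumptions for the example:

    ```python
    # Hypothetical model: an invalidation probe with a scope blocks in-flight
    # table walks from filling the TLB and rejects matching translation requests.
    from dataclasses import dataclass

    @dataclass
    class InvalidationProbe:
        asid: int      # address-space scope of the invalidation
        lo: int        # start of the virtual-address range being invalidated
        hi: int        # end of the virtual-address range being invalidated

        def covers(self, asid: int, vaddr: int) -> bool:
            """True if a translation falls inside the probe's scope."""
            return asid == self.asid and self.lo <= vaddr < self.hi

    class MMUPipeline:
        def __init__(self):
            self.tlb = {}          # (asid, virtual page) -> physical page
            self.probe = None      # active invalidation probe, if any

        def install_probe(self, probe: InvalidationProbe):
            self.probe = probe

        def finish_table_walk(self, asid: int, vaddr: int, paddr: int):
            # Block the walk's TLB fill if it is within the probe's scope,
            # so a stale translation never lands in the translation buffer.
            if self.probe and self.probe.covers(asid, vaddr):
                return None                    # walk result dropped
            self.tlb[(asid, vaddr >> 12)] = paddr >> 12
            return paddr

        def translate(self, asid: int, vaddr: int):
            # Reject translation requests coming down the pipeline that match the probe.
            if self.probe and self.probe.covers(asid, vaddr):
                raise RuntimeError("translation rejected: address under invalidation")
            page = self.tlb.get((asid, vaddr >> 12))
            return None if page is None else (page << 12) | (vaddr & 0xFFF)

    mmu = MMUPipeline()
    mmu.install_probe(InvalidationProbe(asid=1, lo=0x1000, hi=0x3000))
    print(mmu.finish_table_walk(asid=1, vaddr=0x2000, paddr=0x8000_2000))  # None: blocked
    ```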
  • Patent number: 11196822
    Abstract: Embodiments provide a service acceleration method, system, apparatus, and server in an NFV system. To achieve this, a programmable package determining entity in the NFV system can determine a target service function that needs to be accelerated. A target programmable package corresponding to the target service function can be obtained and sent to an acceleration engine in a network functions virtualization infrastructure (NFVI). The acceleration engine runs the target programmable package to accelerate the target service function. A programmable package of the acceleration engine can thus be dynamically replaced, and a service diversity requirement can be met, thereby improving scalability of the service acceleration function in the NFV system.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: December 7, 2021
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Donglei Luo, Yong Liu
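    A sketch of the flow the abstract describes, with an invented package catalog and engine interface; none of the names below come from the patent:

    ```python
    # Illustrative only: pick the package for the target service function,
    # load it into the acceleration engine, then run the accelerated function.
    PACKAGE_CATALOG = {
        "ipsec": "ipsec_offload_v2.bin",
        "dpi":   "dpi_match_v1.bin",
    }

    class AccelerationEngine:
        """Stands in for the NFVI acceleration engine that runs a programmable package."""
        def __init__(self):
            self.loaded_package = None

        def load(self, package: str):
            self.loaded_package = package      # dynamically replaces the old package

        def accelerate(self, service_function: str, payload: bytes) -> bytes:
            assert self.loaded_package, "no programmable package loaded"
            return payload[::-1]               # placeholder for the accelerated work

    def accelerate_service(target_function: str, engine: AccelerationEngine, payload: bytes):
        # 1. The package-determining entity picks the package for the target function.
        package = PACKAGE_CATALOG[target_function]
        # 2. The package is sent to (loaded into) the acceleration engine.
        engine.load(package)
        # 3. The engine runs the package to accelerate the target service function.
        return engine.accelerate(target_function, payload)

    engine = AccelerationEngine()
    print(accelerate_service("ipsec", engine, b"packet-data"))
    ```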
  • Patent number: 11194478
    Abstract: It is possible to reduce the latency attributable to memory protection in shared memory systems by performing access protection at a central Data Ownership Manager (DOM), rather than at distributed memory management units in the central processing unit (CPU) elements (CEs) responsible for parallel thread processing. In particular, the DOM may monitor read requests communicated over a data plane between the CEs and a memory controller, and perform access protection verification in parallel with the memory controller's generation of the data response. The DOM may be separate and distinct from both the CEs and the memory controller, and therefore may generally be able to make the access determination without interfering with data plane processing/generation of the read requests and data responses exchanged between the memory controller and the CEs.
    Type: Grant
    Filed: October 21, 2019
    Date of Patent: December 7, 2021
    Assignee: Futurewei Technologies, Inc.
    Inventors: Sushma Wokhlu, Lee Dobson McFearin, Alan Gatherer, Hao Luan
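    A toy model of the central access check running concurrently with the data fetch, as the abstract outlines; the class names and the thread-pool parallelism are assumptions made for illustration:

    ```python
    # Illustrative only: the DOM snoops the read request and verifies access in
    # parallel with the memory controller generating the data response.
    from concurrent.futures import ThreadPoolExecutor

    class DataOwnershipManager:
        def __init__(self, ownership):
            self.ownership = ownership         # address -> set of CEs allowed to read

        def verify(self, ce_id, addr) -> bool:
            return ce_id in self.ownership.get(addr, set())

    class MemoryController:
        def __init__(self, memory):
            self.memory = memory

        def fetch(self, addr):
            return self.memory[addr]

    def read(ce_id, addr, dom, mc):
        with ThreadPoolExecutor(max_workers=2) as pool:
            allowed = pool.submit(dom.verify, ce_id, addr)   # access check
            data = pool.submit(mc.fetch, addr)               # data fetch, in parallel
            if not allowed.result():
                raise PermissionError(f"CE{ce_id} may not read {addr:#x}")
            return data.result()

    dom = DataOwnershipManager({0x100: {0, 1}})
    mc = MemoryController({0x100: b"\xde\xad\xbe\xef"})
    print(read(0, 0x100, dom, mc))
    ```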
  • Patent number: 11188428
    Abstract: A method, an apparatus, and a computer-readable storage medium having instructions for cancelling a redundancy of two or more redundant modules. Results of the two or more redundant modules are received; reliabilities of the results are ascertained; and, based on the ascertained reliabilities, an overall result is determined from the results. The overall result is output for further processing.
    Type: Grant
    Filed: April 23, 2018
    Date of Patent: November 30, 2021
    Inventors: Peter Schlicht, Fabian Hüger
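    A minimal sketch of combining redundant module results by their ascertained reliabilities; the reliability-weighted vote below is one plausible reading of the abstract, not the patented method:

    ```python
    # Illustrative: each redundant module's result is weighted by its reliability
    # and the overall result is the value with the highest total weight.
    from collections import defaultdict

    def fuse_results(results, reliabilities):
        """results: list of module outputs; reliabilities: matching list in [0, 1]."""
        score = defaultdict(float)
        for value, weight in zip(results, reliabilities):
            score[value] += weight
        return max(score, key=score.get)

    # Two modules agree with moderate confidence; the dissenting module is judged
    # unreliable, so the overall result follows the reliable majority.
    print(fuse_results(["lane_clear", "lane_clear", "obstacle"], [0.6, 0.7, 0.4]))
    ```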
  • Patent number: 11176055
    Abstract: A pipeline in a processor core includes: at least one stage that decodes instructions, including load instructions that retrieve data stored at respective virtual addresses; at least one stage that issues at least some decoded load instructions out-of-order; and at least one stage that initiates at least one prefetch operation. Copies of page table entries mapping virtual addresses to physical addresses are stored in a TLB. Managing misses in the TLB includes: handling a load instruction issued out-of-order using a hardware page table walker after a miss in the TLB; handling a prefetch operation using the hardware page table walker after a miss in the TLB; and handling any software-calling faults triggered by out-of-order load instructions handled by the hardware page table walker differently from any software-calling faults triggered by prefetch operations handled by the hardware page table walker.
    Type: Grant
    Filed: August 6, 2019
    Date of Patent: November 16, 2021
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Shubhendu Sekhar Mukherjee, David Albert Carlson, Michael Bertone
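    An illustrative model of the distinction the abstract draws: the same hardware walker serves demand loads and prefetches, but a software-calling fault is surfaced only for loads and suppressed for prefetches. Names and structure are assumptions:

    ```python
    class PageFault(Exception):
        pass

    class HardwarePageTableWalker:
        def __init__(self, page_table):
            self.page_table = page_table       # virtual page -> physical page, or absent

        def walk(self, vaddr):
            ppage = self.page_table.get(vaddr >> 12)
            if ppage is None:
                raise PageFault(hex(vaddr))    # would require a software handler
            return (ppage << 12) | (vaddr & 0xFFF)

    def handle_tlb_miss(walker, vaddr, *, is_prefetch):
        try:
            return walker.walk(vaddr)
        except PageFault:
            if is_prefetch:
                return None                    # prefetch: fault suppressed, no handler call
            raise                              # out-of-order demand load: fault delivered

    walker = HardwarePageTableWalker({0x4: 0x80})
    print(hex(handle_tlb_miss(walker, 0x4123, is_prefetch=False)))   # translated
    print(handle_tlb_miss(walker, 0x9000, is_prefetch=True))         # None, no fault
    ```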
  • Patent number: 11138138
    Abstract: Control circuitry controls the operations of a central processing unit, CPU, which is associated with a nominal clock frequency. The CPU is further coupled to an I/O range and configured to deliver input to an application. The control circuitry controls the CPU to poll the I/O range for input to the application. The control circuitry also monitors whether or not each poll results in input to the application and adjusts a clock frequency at which the CPU operates to a clock frequency lower than the nominal clock frequency if a pre-defined number of polls resulting in no input is detected.
    Type: Grant
    Filed: March 6, 2020
    Date of Patent: October 5, 2021
    Assignee: Nasdaq Technology AB
    Inventor: Hakan Winbom
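    A sketch of the polling/throttling loop the abstract describes; the frequency values, poll limit, and set_cpu_frequency placeholder are assumptions, and real control would go through the platform's DVFS interface:

    ```python
    NOMINAL_HZ = 3_000_000_000
    REDUCED_HZ = 1_200_000_000
    EMPTY_POLL_LIMIT = 10_000        # pre-defined number of polls with no input

    def set_cpu_frequency(hz):
        print(f"cpu frequency -> {hz/1e9:.1f} GHz")   # placeholder for a DVFS call

    def poll_loop(poll_io_range, deliver_to_app):
        """poll_io_range() returns input data or None; loop shown but not started here."""
        empty_polls = 0
        current_hz = NOMINAL_HZ
        while True:
            data = poll_io_range()               # poll the I/O range for input
            if data is not None:
                deliver_to_app(data)
                empty_polls = 0
                if current_hz != NOMINAL_HZ:     # input again: restore the nominal clock
                    current_hz = NOMINAL_HZ
                    set_cpu_frequency(current_hz)
            else:
                empty_polls += 1
                if empty_polls >= EMPTY_POLL_LIMIT and current_hz == NOMINAL_HZ:
                    current_hz = REDUCED_HZ      # sustained idle polling: slow the CPU down
                    set_cpu_frequency(current_hz)
    ```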
  • Patent number: 11132755
    Abstract: Provided are techniques for extracting, deriving, and using legal matter semantics to generate e-discovery queries in an e-discovery system. A semantic knowledge graph is iteratively built by receiving meet and confer document instances, legal matter types, historical e-discovery queries for different legal matters, and legal semantic types extracted from the historical e-discovery queries. The legal semantic types are added to the semantic knowledge graph, and a list of terms that serve as a basis of an initial query are identified. An e-discovery query is generated for an e-discovery system. The e-discovery query is modified using the semantic knowledge graph and additional input by receiving a legal matter type and meet and confer information, obtaining the legal semantic types that are relevant to the legal matter type and the meet and confer information, and modifying the e-discovery query. The modified e-discovery query is provided. Then, the modified e-discovery query is executed.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: September 28, 2021
    Assignee: International Business Machines Corporation
    Inventors: Roger C. Raphael, Rajesh M. Desai, Nazrul Islam, Satwik Hebbar
  • Patent number: 11121996
    Abstract: A method may include the following. Preset events related to a peer communication party are determined. The preset events are generated by operations performed by the peer communication party based on a communication application. Whether a communication page of a local communication party with the peer communication party is in an open state is detected. Description information of the preset events is displayed in an expedited processing page associated with the communication page in a centralized manner when the communication page is detected to be in an open state. Using the technical solutions of the present application, the local communication party can open and view the expedited processing page conveniently and quickly when communicating with the peer communication party, and view and process corresponding preset events, thereby further simplifying the user operations and improving processing efficiency.
    Type: Grant
    Filed: March 7, 2019
    Date of Patent: September 14, 2021
    Assignee: Alibaba Group Holding Limited
    Inventors: Hang Chen, Zhenhao Wu, Lili Zhang, Daping Zhang, Di Zhang, Lidong Cao, Di Su, Yixin Huang, Jianjun Zhao
  • Patent number: 11086632
    Abstract: A computer system is presented. The computer system comprises a memory system that stores data, a computer processor, and a memory access engine. The memory access engine is configured to: receive a first instruction of a computing process from the computer processor, wherein the first instruction is for accessing the data from the memory system; acquire at least a first part of the data from the memory system based on the first instruction; and, after the acquisition of the at least a first part of the data, transmit an indication to the computer processor to enable the computer processor to execute a second instruction of the computing process.
    Type: Grant
    Filed: February 10, 2017
    Date of Patent: August 10, 2021
    Assignee: ALIBABA GROUP HOLDING LIMITED
    Inventor: Xiaowei Jiang
  • Patent number: 11068303
    Abstract: A computer-implemented method is provided and includes allocating, by a processor, an instruction to a first thread, decoding, by the processor, the instruction, determining, by the processor, a type of the instruction based on information obtained by decoding the instruction, and based on determining that the instruction is a disruptive complex instruction, changing a mode of allocating hardware resources to an instruction-based allocation mode. In the instruction-based allocation mode, the processor adjusts allocation of the hardware resources among a first thread and a second thread based on types of instructions allocated to the first and second threads.
    Type: Grant
    Filed: February 19, 2019
    Date of Patent: July 20, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Avery Francois, Gregory William Alexander, Christian Jacobi
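    A hypothetical model of switching to an instruction-based allocation mode when a disruptive complex instruction is decoded; the instruction set, share split, and the policy of favoring the disruptive thread are all assumptions for illustration:

    ```python
    DISRUPTIVE = {"complex_divide", "crypto_wide", "vector_gather"}

    class SMTCore:
        def __init__(self, total_entries=64):
            self.total = total_entries                 # shared issue-queue entries
            self.mode = "balanced"
            self.share = {0: total_entries // 2, 1: total_entries // 2}

        def dispatch(self, thread_id, opcode):
            if opcode in DISRUPTIVE and self.mode != "instruction_based":
                self.mode = "instruction_based"
                other = 1 - thread_id
                # One plausible policy: skew shared resources toward the thread
                # running the disruptive instruction so it drains sooner.
                self.share[thread_id] = int(self.total * 0.75)
                self.share[other] = self.total - self.share[thread_id]
            return self.mode, dict(self.share)

    core = SMTCore()
    print(core.dispatch(0, "add"))             # ('balanced', {0: 32, 1: 32})
    print(core.dispatch(0, "complex_divide"))  # ('instruction_based', {0: 48, 1: 16})
    ```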
  • Patent number: 11029960
    Abstract: Apparatus and method for widened SIMD execution on a limited register file. For example, one embodiment of an apparatus comprises: instruction dispatch circuitry to dispatch instructions of a thread for execution, including a first instruction to indicate a start of a double execution instruction sequence and a second instruction to indicate an end of a double execution instruction sequence; and execution circuitry including single instruction multiple data (SIMD) circuitry, the execution circuitry to execute the double execution instruction sequence in a first pass using a first set of lanes of the SIMD circuitry and to execute the double execution instruction sequence in a second pass following the first pass using a second set of lanes of the SIMD circuitry.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: June 8, 2021
    Assignee: Intel Corporation
    Inventors: Marek Targowski, Konrad Trifunović
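    A minimal sketch of the double-execution idea: the bracketed instruction sequence runs twice, once on the low lanes and once on the high lanes, so a narrow physical SIMD unit emulates a wider one. The register model and function names are assumptions:

    ```python
    def run_double_execution(sequence, registers, lanes=8):
        """sequence: list of functions taking (registers, lane_slice);
        registers: dict of name -> list of 2*lanes values (the widened vectors)."""
        for lane_slice in (slice(0, lanes), slice(lanes, 2 * lanes)):   # two passes
            for instruction in sequence:
                instruction(registers, lane_slice)
        return registers

    def vadd(dst, a, b):
        def op(regs, s):
            regs[dst][s] = [x + y for x, y in zip(regs[a][s], regs[b][s])]
        return op

    regs = {"v0": list(range(16)), "v1": [10] * 16, "v2": [0] * 16}
    run_double_execution([vadd("v2", "v0", "v1")], regs)
    print(regs["v2"])    # the full 16-lane result, produced in two 8-lane passes
    ```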
  • Patent number: 11018957
    Abstract: A method, system, and computer program product, the method comprising: obtaining a data path representing flow of data in processing a service request within a network computing environment having system resources; analyzing the data path to identify usage of the system resources required by the service request processing; determining, based on the usage of the system resources, an optimization action expected to improve the usage of the system resources; and implementing the optimization action in accordance with the data path, thereby modifying operation of the cloud computing environment in handling future service requests.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: May 25, 2021
    Assignee: GRANULATE CLOUD SOLUTIONS LTD.
    Inventors: Asaf Ezra, Tal Saiag, Ron Gruner
  • Patent number: 10990568
    Abstract: Systems and methods of automated machine learning for modeling a data set according to a modeling intent are presented. A modeling service receives a data set from a submitting party as well as a set of constraints. A pipeline generator generates a set of pipelines according to a modeling intent of a data set and in view of the set of constraints. A machine learned trained judge conducts an analysis of the pipelines to identify an optimal pipeline to train. Optimal results are generated according to the optimal pipeline and the optimal results are provided to the submitting party in response to receiving the data set and constraints.
    Type: Grant
    Filed: August 28, 2017
    Date of Patent: April 27, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Justin Ormont, Yunling Wang, Aidan C Crook, Sarthak Shah
  • Patent number: 10970119
    Abstract: Technologies for hybrid acceleration of code include a computing device (100) having a processor (120), a field-programmable gate array (FPGA) (130), and an application-specific integrated circuit (ASIC) (132). The computing device (100) offloads a service request, such as a cryptographic request or a packet processing request, to the FPGA (130). The FPGA (130) performs one or more algorithmic tasks of an algorithm to perform the service request. The FPGA (130) determines one or more primitive tasks associated with an algorithm task and encapsulates each primitive task in a buffer that is accessible by the ASIC (132). The ASIC (132) performs the primitive tasks in response to encapsulation in the buffer, and the FPGA (130) returns results of the algorithm. The primitive operations may include cryptographic primitives such as modular exponentiation, modular multiplicative inverse, and modular multiplication.
    Type: Grant
    Filed: March 28, 2017
    Date of Patent: April 6, 2021
    Assignee: INTEL CORPORATION
    Inventors: Ned M. Smith, Changzheng Wei, Songwu Shen, Ziye Yang, Junyuan Wang, Weigang Li, Wenqian Yu
  • Patent number: 10962593
    Abstract: A system-on-chip (SoC) includes: a plurality of processors configured to store respective debugging information in response to respective information extraction commands received in a deadlock state, the plurality of processors having different architectures; a system bus connected to the plurality of processors; and an SoC manager configured to generate the respective information extraction commands differently according to an architecture of each of the plurality of processors in response to detecting occurrence of the deadlock state, and transmit the respective information extraction commands to the plurality of processors through a bus separate from the system bus.
    Type: Grant
    Filed: December 10, 2018
    Date of Patent: March 30, 2021
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Hyung Il Woo, Hyun Chul Baek
  • Patent number: 10956217
    Abstract: Methods, apparatus, and articles of manufacture are disclosed to trigger a scaling action for scaling an application having a set of one or more virtual machines (VMs). Virtualized Network Functions (VNF) are scaled by adding or removing resources to/from existing VMs. In an example method for triggering a scaling action for scaling an application having a set of one or more VMs, a threshold value is adapted based on an evaluation of a monitored system key performance indicator and a monitored external key performance indicator. The threshold value is used for triggering the scaling action. The scaling action is validated based on the monitored external key performance indicator.
    Type: Grant
    Filed: December 4, 2015
    Date of Patent: March 23, 2021
    Assignee: Telefonaktiebolaget LM Ericsson (Publ)
    Inventors: Ibtissam El Khayat, Joerg Aelken
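    An illustrative sketch of adapting the scaling threshold from a system KPI and an external KPI and validating the action afterward; the constants and update rule are assumptions, not the patented method:

    ```python
    class ScalingTrigger:
        def __init__(self, threshold=0.80):
            self.threshold = threshold            # e.g. fraction of VM CPU in use

        def adapt(self, system_kpi, external_kpi):
            # If users already see poor service (external KPI such as latency above
            # target), lower the threshold so scaling triggers earlier; otherwise relax it.
            if external_kpi > 1.0:
                self.threshold = max(0.60, self.threshold - 0.05)
            elif system_kpi < 0.50:
                self.threshold = min(0.90, self.threshold + 0.05)

        def should_scale_out(self, system_kpi):
            return system_kpi >= self.threshold

        def validate(self, external_kpi_after):
            # The scaling action is considered effective only if the external KPI recovered.
            return external_kpi_after <= 1.0

    trigger = ScalingTrigger()
    trigger.adapt(system_kpi=0.85, external_kpi=1.4)      # latency target breached
    if trigger.should_scale_out(0.85):
        print("scale out VNF; validated:", trigger.validate(external_kpi_after=0.9))
    ```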
  • Patent number: 10949256
    Abstract: A controller includes one or more hardware components for performing operations, an interconnect, and a plurality of processors connected to the one or more hardware components through the interconnect. Each processor of the plurality of processors is configured to perform multithreading to concurrently handle multiple threads of execution, and assign a different thread identifier or master ID value to each concurrently handled thread of execution. An instruction is generated for a hardware component by executing a thread of the concurrently handled threads of execution. The instruction includes the thread identifier or indicates the master ID value assigned to the thread. The generated instruction is sent to the hardware component through the interconnect.
    Type: Grant
    Filed: February 6, 2019
    Date of Patent: March 16, 2021
    Assignee: Western Digital Technologies, Inc.
    Inventors: Shay Benisty, Leonid Minz, Tal Sharifie
  • Patent number: 10915375
    Abstract: A method and device for the synchronization of processes, a first signal being sent by a clock-giving processor, the first signal having, in an alternating manner, first edges having a first direction and second edges having a second direction opposite the first direction, a temporal distance between at least one of the first edges and at least one of the second edges being determined as a function of a state of a counter in the clock-giving processor. A method for the synchronization of processes, a first signal being received by a clock-receiving processor, the first signal having, in an alternating manner, first edges having a first direction and second edges having a second direction opposite the first direction, a state of a counter in the clock-receiving processor being determined as a function of a temporal distance between at least one of the first edges and at least one of the second edges.
    Type: Grant
    Filed: August 16, 2018
    Date of Patent: February 9, 2021
    Assignee: Robert Bosch GmbH
    Inventors: Thomas Gebauer, Christoph Mueller, Cristina Murillo Miranda
  • Patent number: 10911164
    Abstract: The present disclosure provides apparatus and methods for the calibration of analog circuitry on an integrated circuit. One embodiment relates to a method of calibrating analog circuitry within an integrated circuit. A microcontroller that is embedded in the integrated circuit is booted up. A reset control signal is sent to reset an analog circuit in the integrated circuit, and a response signal for the analog circuit is monitored by the microcontroller. Based on the response signal, a calibration parameter for the analog circuit is determined, and the analog circuit is configured using the calibration parameter. Other embodiments, aspects and features are also disclosed.
    Type: Grant
    Filed: September 24, 2018
    Date of Patent: February 2, 2021
    Assignee: Altera Corporation
    Inventors: Neville Carvalho, Tim Tri Hoang, Sergey Shumarayev
  • Patent number: 10901705
    Abstract: A solution providing for the dynamic design, use, and modification of models is provided. The solution can receive an electronic communication identifying a request or event and process the electronic communication in a runtime environment by binding a model of the collection of models to dynamically construct an implementation of the model. Collective properties of the set of related models can emerge dynamically. The binding can comprise late-binding of an application associated with the collection of models to enable at least one user to perform at least one interaction using the environment without disrupting any of the environment or the application.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: January 26, 2021
    Assignee: EnterpriseWeb LLC
    Inventors: Dave M. Duggal, William J. Malyk
  • Patent number: 10892944
    Abstract: A cloud-based hardware accelerator is selected by deploying an accelerator image to first and second clouds to generate first and second cloud-based hardware accelerators, executing a first request on the first and second cloud-based hardware accelerators, monitoring characteristics of the first and second cloud-based hardware accelerators executing the first request, which may include execution time and monetary cost, and selecting one of the first and second hardware accelerators according to defined selection criteria. Subsequent requests are then routed to the selected cloud-based accelerator.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: January 12, 2021
    Assignee: International Business Machines Corporation
    Inventors: Paul E. Schardt, Jim C. Chen, Lance G. Thompson, James E. Carey
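    A sketch of the selection flow: deploy the same accelerator image to two clouds, run one request on both, score them on execution time and monetary cost, and route later requests to the winner. The interfaces, weights, and pricing fields are assumptions:

    ```python
    import time

    def benchmark(accelerator, request):
        start = time.perf_counter()
        accelerator["run"](request)                    # execute the first request
        elapsed = time.perf_counter() - start
        return elapsed, elapsed * accelerator["dollars_per_second"]

    def select_accelerator(accelerators, request, time_weight=0.5, cost_weight=0.5):
        scores = {}
        for name, acc in accelerators.items():
            elapsed, cost = benchmark(acc, request)
            scores[name] = time_weight * elapsed + cost_weight * cost
        return min(scores, key=scores.get)             # defined selection criteria

    clouds = {
        "cloud_a": {"run": lambda r: sum(range(200_000)), "dollars_per_second": 2.0},
        "cloud_b": {"run": lambda r: sum(range(400_000)), "dollars_per_second": 0.8},
    }
    chosen = select_accelerator(clouds, request={"op": "fft"})
    print("route subsequent requests to:", chosen)
    ```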
  • Patent number: 10877765
    Abstract: Methods and apparatuses relating to assigning a logical thread to a physical thread. In one embodiment, an apparatus includes a data storage device that stores code that when executed by a hardware processor causes the hardware processor to perform the following: translating an instruction into a translated instruction, assigning a logical thread for the translated instruction, and providing a thread map hint for the translated instruction; and a hardware scheduler to assign a physical thread of the hardware processor to execute the logical thread based on the thread map hint.
    Type: Grant
    Filed: March 10, 2015
    Date of Patent: December 29, 2020
    Assignee: Intel Corporation
    Inventors: Sebastian Winkel, Ethan Schuchman, Rainer Theuer, Gregor Stellpflug, Tyler N. Sondag
  • Patent number: 10877926
    Abstract: A method and system for partial wavefront merger is described. Vector processing machines employ the partial wavefront merger to merge partial wavefronts into one or more wavefronts. The system includes a partial wavefront manager and unified registers. The partial wavefront manager detects wavefronts in different single-instruction-multiple-data (“SIMD”) units which contain inactive work items and active work items (hereinafter referred to as “partial wavefronts”), moves the partial wavefronts into one or more SIMD unit(s) and merges the partial wavefronts into one or more wavefront(s). The unified register allows each active work item in the one or more merged wavefront(s) to access the previously allocated registers in the originating SIMD units. Consequently, the contents of the unified registers do not have to be copied to the SIMD unit(s) executing the one or more merged wavefront(s).
    Type: Grant
    Filed: July 23, 2018
    Date of Patent: December 29, 2020
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Yunpeng Zhu, Jimshed Mirza
  • Patent number: 10846417
    Abstract: Techniques for identifying permitted illegal access operations in a module system are disclosed. An operation, expressed in a first module, that attempts to access a module element of a second module is identified. Based on a module declaration associated with the second module, the module element is determined inaccessible to the first module. Additionally or alternatively, based on an access modifier associated with the module element, the module element is determined inaccessible to the operation. The operation is determined as an illegal access operation. The illegal access operation is permitted to access the module element. A warning corresponding to the illegal access operation is generated.
    Type: Grant
    Filed: October 17, 2017
    Date of Patent: November 24, 2020
    Assignee: Oracle International Corporation
    Inventors: Alan Bateman, Chris Hegarty, Alexander R. Buckley, Brian Goetz, Mark B. Reinhold
  • Patent number: 10831551
    Abstract: A single workload scheduler schedules sessions and tasks having a tree structure to resources, wherein the single workload scheduler has scheduling control of the resources and the tasks of the parent-child workload sessions and tasks. The single workload scheduler receives a request to schedule a child session created by a scheduled parent task that when executed results in a child task; the scheduled parent task is dependent on a result of the child task. The single workload scheduler receives a message from the scheduled parent task yielding a resource based on the resource not being used by the scheduled parent task, schedules tasks to backfill the resource, and returns the resource yielded by the scheduled parent task to the scheduled parent task based on receiving a resume request from the scheduled parent task or determining dependencies of the scheduled parent task have been met.
    Type: Grant
    Filed: April 18, 2017
    Date of Patent: November 10, 2020
    Assignee: International Business Machines Corporation
    Inventors: Alicia E. Chin, Yonggang Hu, Zhenhua Hu, Jason T S Lam, Zhimin Lin
  • Patent number: 10831490
    Abstract: Provided are an apparatus and a method for effectively managing threads diverged by a conditional branch based on Single Instruction Multiple-based Data (SIMD). The apparatus includes: a plurality of Front End Units (FEUs) configured to fetch, for execution by SIMD lanes, instructions of thread groups of a program flow; and a controller configured to schedule a thread group based on SIMD lane availability information, activate an FEU of the plurality of FEUs, and control the activated FEU to fetch an instruction for processing the scheduled thread group.
    Type: Grant
    Filed: April 22, 2014
    Date of Patent: November 10, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Seung-Hun Jin
  • Patent number: 10831620
    Abstract: A method, executed by a computer, includes pairing a first core with a second core to form a first core group, wherein each core of the group has a plurality of functional units, transferring instructions received by the first core to the second core for execution via a first inter-core communication bus, and executing the instructions on the second core. A computer system and computer program product corresponding to the above method are also disclosed herein.
    Type: Grant
    Filed: June 15, 2016
    Date of Patent: November 10, 2020
    Assignee: International Business Machines Corporation
    Inventors: Manoj Dusanapudi, Prasanna Jayaraman, Rahul M. Rao
  • Patent number: 10819638
    Abstract: In an embodiment, a method includes identifying a core of a multicore processor to which an incoming packet that is received in a packet buffer is to be directed, and if the core is powered down, transmitting a first message to cause the core to be powered up prior to arrival of the incoming packet at a head of the packet buffer. Other embodiments are described and claimed.
    Type: Grant
    Filed: September 21, 2017
    Date of Patent: October 27, 2020
    Assignee: Intel Corporation
    Inventors: Steen K. Larsen, Bryan E. Veal, Daniel S. Lake, Travis T. Schluessler, Mazhar I. Memon
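    A toy model of the idea in this abstract: when an incoming packet is steered to a powered-down core, a wake message is sent at enqueue time so the core's power-up overlaps with the packet's queueing delay and completes before the packet reaches the head of the buffer. The steering function and buffer structure are assumptions:

    ```python
    from collections import deque

    class PacketSteering:
        def __init__(self, num_cores):
            self.core_powered = [False] * num_cores
            self.buffer = deque()

        def core_for(self, packet):
            return hash(packet["flow"]) % len(self.core_powered)

        def enqueue(self, packet):
            core = self.core_for(packet)
            if not self.core_powered[core]:
                self.wake(core)                  # first message: power the core up early
            self.buffer.append((core, packet))

        def wake(self, core):
            print(f"power-up message sent to core {core}")
            self.core_powered[core] = True

        def dispatch_head(self):
            core, packet = self.buffer.popleft() # by now the target core is awake
            return core, packet

    steer = PacketSteering(num_cores=4)
    steer.enqueue({"flow": "tcp:10.0.0.1:443", "seq": 0})
    print(steer.dispatch_head())
    ```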
  • Patent number: 10803409
    Abstract: A system and method of managing and prioritizing tasks amongst resources and, more particularly, to a system and method for providing automatic task assignment and notification amongst globally dispersed human resources. The system includes a change of management application configured to store a list of tasks and a task notifier configured to retrieve a list of geographically-dispersed resources and notify selected ones of the geographically-dispersed resources of a priority of completion of one or more tasks retrieved from the change of management application. The system further includes a message application configured to be polled by the task notifier to determine which of the geographically dispersed resources is online or currently working.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: October 13, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: William P. Shaouy
  • Patent number: 10776120
    Abstract: There is provided an apparatus comprising processing circuitry to execute a transaction comprising a number of program instructions that execute to generate updates to state data, to commit the updates if the transaction completes without a conflict, and to generate trace control signals during execution of the number of program instructions. The processing circuitry uses at least one resource during execution of the program instructions. Transaction trace circuitry generates trace items in response to the trace control signals. In response to the trace control signals indicating that a change in a usage level of the at least one resource has occurred during execution of the program instructions, the transaction trace circuitry generates at least one trace item that indicates the usage level of the at least one resource.
    Type: Grant
    Filed: February 11, 2016
    Date of Patent: September 15, 2020
    Assignee: ARM Limited
    Inventors: Michael John Williams, John Michael Horley, Stephan Diestelhorst, Richard Roy Grisenthwaite
  • Patent number: 10761885
    Abstract: An apparatus and method are provided for executing thread groups. The apparatus comprises scheduling circuitry for selecting for execution a first thread group from a plurality of thread groups, and thread processing circuitry that is responsive to the scheduling circuitry to execute active threads of the first thread group in dependence on a common program counter shared between the active threads. In response to an exit event occurring for the first thread group, the thread processing circuitry determines whether a program counter check condition is present, and this can be used to trigger program counter checking circuitry to perform a program counter check operation to update the common program counter and an active thread indication for the first thread group.
    Type: Grant
    Filed: July 25, 2018
    Date of Patent: September 1, 2020
    Assignee: ARM Limited
    Inventors: Isidoros Sideris, Eugenia Cordero-Crespo, Amir Kleen
  • Patent number: 10747873
    Abstract: In one example, a system for a system management mode (SMM) privilege architecture includes a computing device comprising: a first portion of SMM instructions to set up a number of resources and implement a privilege architecture for the SMM of a computing device and a second portion of SMM instructions to execute a number of functions during the SMM of the computing device, wherein the privilege architecture assigns the first portion of SMM instructions to a first privilege level and assigns the second portion of SMM instructions to a second privilege level.
    Type: Grant
    Filed: January 26, 2016
    Date of Patent: August 18, 2020
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Richard A. Bramley, Jr., David Plaquin, Maugan Villatel, Jeffrey K. Jeansonne
  • Patent number: 10748236
    Abstract: A warp processing unit controls, in dependence on a warp program counter shared between a plurality of threads processing respective graphics fragments, fetching of a next instruction to be executed for at least some of the plurality of threads. In response to a determination that a given subset of threads is to be discarded when at least one other subset of threads is to continue, the warp processing unit processes the given subset of threads in a discarded state. For a thread processed in the discarded state, execution of instructions continues for the discarded thread, and at least one of: generation of data access messages triggered by the discarded thread is suppressed; and at least one processing operation, which would be deferred until completion of the discarded thread had the thread not been discarded, is enabled to be commenced independently of an outcome of the discarded thread.
    Type: Grant
    Filed: August 30, 2018
    Date of Patent: August 18, 2020
    Assignee: ARM Limited
    Inventors: Stephane Forey, Isidoros Sideris, Reimar Gisbert Döffinger
  • Patent number: 10725784
    Abstract: A data processing system has an execution pipeline with programmable execution stages which execute instructions to perform data processing operations provided by a host processor and in which execution threads are grouped together into groups in which the threads are executed in lockstep. The system also includes a compiler that compiles programs to generate instructions for the execution stages. The compiler is configured to, for an operation that comprises a memory transaction: issue to the execution stage instructions for executing the operation for the thread group to: perform the operation for the thread group as a whole; and provide the result of the operation to all the active threads of the group. At least one execution stage is configured to, in response to the instructions: perform the operation for the thread group as a whole; and provide the result of the operation to all the active threads of the group.
    Type: Grant
    Filed: June 29, 2016
    Date of Patent: July 28, 2020
    Assignee: Arm Limited
    Inventors: Robert Martin Elliott, Vatsalya Prasad
  • Patent number: 10684875
    Abstract: A mobile device including a memory including computer-executable instructions for synchronizing a virtual machine and a processor executing the computer-executable instructions, the computer-executable instructions, when executed by the processor, cause the processor to perform operations including executing a virtual machine using a memory; executing a hypervisor providing a synchronization daemon, the synchronization daemon monitoring the memory, the synchronization daemon generating a checkpoint indicating a change in the memory; the hypervisor initiating transmission of the change in the memory over a wireless network for delivery to a standby mobile device to synchronize the virtual machine on the standby mobile device.
    Type: Grant
    Filed: December 6, 2012
    Date of Patent: June 16, 2020
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Jeffrey E. Bickford, Ramon Caceres
  • Patent number: 10671562
    Abstract: A system-on-chip bus system includes a bus configured to connect function blocks of a system-on-chip to each other, and a clock gating unit connected to an interface unit of the bus and configured to basically gate a clock used in the operation of a bus bridge device mounted on the bus according to a state of a transaction detection signal.
    Type: Grant
    Filed: February 2, 2018
    Date of Patent: June 2, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jaegeun Yun, Lingling Liao, Bub-chul Jeong
  • Patent number: 10658283
    Abstract: The invention relates to a method for manufacturing a device with a secure integrated-circuit chip, said device having an insulating substrate, electrically conductive surfaces on the substrate, which surfaces are connected or coupled to said electronic chip, said electrically conductive surfaces being produced by a step of depositing or transferring conductive material; the method is characterised in that said step of depositing or transferring conductive material is carried out by a technique of directly depositing metal microparticles, which are free of polymer or solvent, onto the substrate, said deposit being obtained by coalescence of the microparticles forming at least one or more uniform cohesive layers that rest directly in contact with the substrate. The invention also relates to the device obtained.
    Type: Grant
    Filed: October 28, 2016
    Date of Patent: May 19, 2020
    Assignee: THALES DIS FRANCE SA
    Inventors: Line Degeilh, Remy Janvrin, Lucile Dossetto, Alain Le Loc'h, Jean-Christophe Fidalgo
  • Patent number: 10649774
    Abstract: A method in one aspect may include receiving a multiply instruction. The multiply instruction may indicate a first source operand and a second source operand. A product of the first and second source operands may be stored in one or more destination operands indicated by the multiply instruction. Execution of the multiply instruction may complete without writing a carry flag. Other methods are also disclosed, as are apparatus, systems, and instructions on machine-readable medium.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: May 12, 2020
    Assignee: Intel Corporation
    Inventors: Vinodh Gopal, James D. Guilford, Wajdi K. Feghali, Erdinc Ozturk, Gilbert M. Wolrich, Martin G. Dixon, Mark C. Davis, Sean P. Mirkes, Alexandre J. Farcy, Bret L. Toll, Maxim Loktyukhin
  • Patent number: 10635157
    Abstract: An information processing apparatus configured to control a parallel computer system, the information processing apparatus includes a processor configured to determine a plurality of power supply control domains by dividing a plurality of computation nodes, acquire scheduling information that indicates an allocation state of one or more first jobs to the plurality of computation nodes, for each of the plurality of power supply control domains, identify, based on the scheduling information, a first number of the computation nodes each of which does not execute the one or more first jobs, receive a request to execute a second job, identify a second number of computation nodes each of which is to be used for processing of the second job, and control to turn on power supply to a first power supply control domain of the plurality of power supply control domains based on the first and second numbers.
    Type: Grant
    Filed: August 10, 2018
    Date of Patent: April 28, 2020
    Assignee: FUJITSU LIMITED
    Inventor: Keitaro Tagawa
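    A minimal sketch of the power-on decision: count idle nodes in powered domains (the first number), compare with the nodes the incoming job needs (the second number), and switch on powered-down domains only when the idle nodes cannot cover the request. The data layout is an assumption:

    ```python
    def plan_power_on(domains, job_nodes_needed):
        """domains: list of dicts with 'name', 'powered', 'idle_nodes', 'size'."""
        idle_available = sum(d["idle_nodes"] for d in domains if d["powered"])
        to_power_on = []
        for domain in domains:
            if idle_available >= job_nodes_needed:
                break                                  # enough capacity already on
            if not domain["powered"]:
                to_power_on.append(domain["name"])
                idle_available += domain["size"]       # the whole domain becomes usable
        return to_power_on

    domains = [
        {"name": "D0", "powered": True,  "idle_nodes": 8, "size": 32},
        {"name": "D1", "powered": False, "idle_nodes": 0, "size": 32},
        {"name": "D2", "powered": False, "idle_nodes": 0, "size": 32},
    ]
    print(plan_power_on(domains, job_nodes_needed=48))   # ['D1', 'D2']
    ```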
  • Patent number: 10628373
    Abstract: Some embodiments described herein provide a method for transmitting an access request via a flexible register access bus. An access request may be received to access resource on an integrated circuit. The access request may be translated to a request packet having a data format compliant with the flexible register access bus. A routing path may be determined for the request packet based on a target register associated with the request packet. The request packet may be transmitted via the routing path to the target register. Information within the request packet may be translated to a local access protocol for the target register. Access to the resource may then be obtained via the target register based on the local access protocol.
    Type: Grant
    Filed: November 30, 2016
    Date of Patent: April 21, 2020
    Assignee: Marvell International Ltd.
    Inventor: Xiongzhi Ning
  • Patent number: 10606594
    Abstract: A method of executing, by a processor, a multi-thread including threads of the processor, includes setting a mask value indicating execution of one of the threads of the processor based on an instruction, setting an inverted mask value based on the set mask value; and executing the thread of the processor based on the set mask value and the set inverted mask value.
    Type: Grant
    Filed: December 3, 2015
    Date of Patent: March 31, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jin-seok Lee, Dong-kwan Suh, Seung-won Lee
  • Patent number: 10606675
    Abstract: A system for monitoring job execution includes an interface and a processor. The interface is configured to receive an indication to start a cluster processing job. The processor is configured to determine whether processing a data instance associated with the cluster processing job satisfies a watchdog criterion; and in the event that processing the data instance satisfies the watchdog criterion, cause the processing of the data instance to be killed.
    Type: Grant
    Filed: November 10, 2017
    Date of Patent: March 31, 2020
    Assignee: Databricks Inc.
    Inventors: Alicja Luszczak, Srinath Shankar, Shi Xin
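    A minimal sketch of the watchdog idea: while a job processes a data instance, a monitor checks a criterion (wall-clock runtime here, as one possible criterion) and kills the processing if it is satisfied. The interfaces are illustrative, not the product's API:

    ```python
    import multiprocessing as mp
    import time

    def process_instance(instance):
        time.sleep(instance["expected_seconds"])     # stand-in for real work

    def run_with_watchdog(instance, max_seconds):
        worker = mp.Process(target=process_instance, args=(instance,))
        worker.start()
        worker.join(timeout=max_seconds)
        if worker.is_alive():                        # watchdog criterion satisfied
            worker.terminate()                       # cause the processing to be killed
            worker.join()
            return "killed"
        return "completed"

    if __name__ == "__main__":
        print(run_with_watchdog({"expected_seconds": 0.1}, max_seconds=1.0))  # completed
        print(run_with_watchdog({"expected_seconds": 5.0}, max_seconds=0.5))  # killed
    ```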
  • Patent number: 10599548
    Abstract: There is disclosed in one example a computing apparatus, including: a processor; a multilevel cache including a plurality of cache levels; a peripheral device configured to write data directly to a directly writable cache; and a cache monitoring circuit, including cache counters La to be incremented when a cache line is allocated into the directly writable cache, Lp to be incremented when a cache line is processed by the processor and deallocated from the directly writable cache, and Le to be incremented when a cache line is evicted from the directly writable cache to the memory, wherein the cache monitoring circuit is to determine a direct write policy according to the cache counters.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: March 24, 2020
    Assignee: Intel Corporation
    Inventors: Ren Wang, Bin Li, Andrew J. Herdrich, Tsung-Yuan C. Tai, Ramakrishna Huggahalli
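    An illustrative use of the three counters named in the abstract (La: lines allocated into the directly writable cache, Lp: lines processed and deallocated, Le: lines evicted to memory) to pick a direct-write policy; the threshold and policy names are assumptions for the example:

    ```python
    class DirectWriteMonitor:
        def __init__(self):
            self.la = 0    # lines allocated into the directly writable cache
            self.lp = 0    # lines consumed by the processor before eviction
            self.le = 0    # lines evicted to memory before being consumed

        def on_allocate(self):  self.la += 1
        def on_processed(self): self.lp += 1
        def on_evicted(self):   self.le += 1

        def policy(self):
            if self.la == 0:
                return "allow_direct_write"
            # If most directly written lines are evicted unread, direct writes are
            # polluting the cache; steer the device toward writing to memory instead.
            eviction_ratio = self.le / self.la
            return "write_to_memory" if eviction_ratio > 0.5 else "allow_direct_write"

    mon = DirectWriteMonitor()
    for _ in range(100): mon.on_allocate()
    for _ in range(30):  mon.on_processed()
    for _ in range(70):  mon.on_evicted()
    print(mon.policy())    # write_to_memory
    ```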
  • Patent number: 10572282
    Abstract: Techniques for implicit coscheduling of CPUs to improve corun performance of scheduled contexts are described. One technique minimizes skew by implementing corun migrations, and another technique minimizes skew by implementing a corun bonus mechanism. Skew between schedulable contexts may be calculated based on guest progress, where guest progress represents time spent executing guest operating system and guest application code. A non-linear skew catch-up algorithm is described that adjusts the progress of a context when the progress falls far behind its sibling contexts.
    Type: Grant
    Filed: April 21, 2017
    Date of Patent: February 25, 2020
    Assignee: VMware, Inc.
    Inventors: Haoqiang Zheng, Carl A. Waldspurger
  • Patent number: 10552934
    Abstract: Methods and apparatus relating to reducing memory latency in graphics operations are described. In an embodiment, uniform data is transferred from a buffer to a General Register File (GRF) of a processor based at least in part on information stored in a gather table. The uniform data comprises data that is uniform across a plurality of primitives in a graphics operation. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: July 1, 2016
    Date of Patent: February 4, 2020
    Assignee: Intel Corporation
    Inventors: Michael Apodaca, David M. Cimini, Thomas F. Raoux, Somnath Ghosh, Uddipan Mukherjee, Debraj Bose, Sthiti Deka, Yohai Gevim
  • Patent number: 10545892
    Abstract: A multi-thread processor includes a plurality of hardware threads each of which generates an independent instruction flow, a thread scheduler that manages in what order a plurality of hardware threads are processed with a pre-established schedule, and an interrupt controller that receives an input interrupt request signal and assigns the interrupt request to an associated hardware thread, wherein the interrupt controller comprises a register in which information is stored for each channel of an interrupt request signal, and the information includes information regarding to which one or more than one of the plurality of hardware threads the interrupt request signal is associated.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: January 28, 2020
    Assignee: RENESAS ELECTRONICS CORPORATION
    Inventors: Koji Adachi, Kazunori Miyamoto
  • Patent number: 10540737
    Abstract: Methods for estimating accelerator performance for dynamic hardware behaviors are disclosed. Computer program code to be executed on a first processing unit is received, and an execution of the computer code on the first processing unit is monitored to determine a plurality of performance characteristics. A plurality of dynamic hardware behaviors is determined by applying a clustering algorithm to the performance characteristics, and an equivalent accelerator portion of computer code to be executed on a second processing unit is generated by translating a set of instructions in a first portion of computer code corresponding to a first one of the plurality of dynamic hardware behaviors to an equivalent set of instructions to be executed on the second processing unit. An estimated measure of performance for executing the equivalent accelerator portion on the second processing unit is determined for the first one of the plurality of dynamic hardware behaviors.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: January 21, 2020
    Assignee: International Business Machines Corporation
    Inventors: Fausto Artico, Jose R. Brunheroto, Juan Gonzalez Garcia, Nelson Mimura Gonzalez
  • Patent number: 10534747
    Abstract: Technologies for providing a scalable architecture to efficiently perform compute operations in memory include a memory having media access circuitry coupled to a memory media. The media access circuitry is to access data from the memory media to perform a requested operation, perform, with each of multiple compute logic units included in the media access circuitry, the requested operation concurrently on the accessed data, and write, to the memory media, resultant data produced from execution of the requested operation.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: January 14, 2020
    Assignee: Intel Corporation
    Inventors: Shigeki Tomishima, Srikanth Srinivasan, Chetan Chauhan, Rajesh Sundaram, Jawad B. Khan
  • Patent number: 10514919
    Abstract: A data processing apparatus has processing circuitry for processing vector operands from a vector register store in response to vector micro-operations, some of which have control information identifying which data elements of the vector operands are selected for processing. Control circuitry detects vector micro-operations for which the control information specifies that a portion of the vector operand to be processed has no selected elements. If this is the case, then the control circuitry controls the processing circuitry to process a lower latency replacement micro-operation instead of the original micro-operation. This provides better performance than if a branch instruction is used to bypass the micro-operation if there are no selected elements.
    Type: Grant
    Filed: January 21, 2015
    Date of Patent: December 24, 2019
    Assignee: ARM Limited
    Inventors: Matthias Boettcher, Mbou Eyole-Monono, Giacomo Gabrielli
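    A sketch of the replacement micro-operation idea: when the control information for a portion of a vector micro-op selects no elements, a cheap substitute is issued instead of executing the original or branching around it. The structures and latency numbers are illustrative assumptions:

    ```python
    def issue_vector_uop(op, dst, src, mask):
        """mask[i] == True means element i is selected for processing."""
        if not any(mask):
            # No selected elements in this portion: issue a lower-latency
            # replacement micro-op that simply preserves the destination.
            return {"uop": "move_nop", "dst": dst, "latency": 1}
        result = [op(s) if m else d for s, d, m in zip(src, dst, mask)]
        return {"uop": "vector_op", "dst": result, "latency": 4}

    dst = [0, 0, 0, 0]
    src = [1, 2, 3, 4]
    print(issue_vector_uop(lambda x: x * x, dst, src, mask=[False] * 4))
    print(issue_vector_uop(lambda x: x * x, dst, src, mask=[True, False, True, False]))
    ```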
  • Patent number: 10467011
    Abstract: A processor of an aspect includes a decode unit to decode a thread pause instruction from a first thread. A back-end portion of the processor is coupled with the decode unit. The back-end portion of the processor, in response to the thread pause instruction, is to pause processing of subsequent instructions of the first thread for execution. The subsequent instructions occur after the thread pause instruction in program order. The back-end portion, in response to the thread pause instruction, is also to keep at least a majority of the back-end portion of the processor, empty of instructions of the first thread, except for the thread pause instruction, for a predetermined period of time. The majority may include a plurality of execution units and an instruction queue unit.
    Type: Grant
    Filed: July 21, 2014
    Date of Patent: November 5, 2019
    Assignee: Intel Corporation
    Inventors: Lihu Rappoport, Zeev Sperber, Michael Mishaeli, Stanislav Shwartsman, Lev Makovsky, Adi Yoaz, Ofer Levy