Patents Examined by George Giroux
  • Patent number: 10761849
    Abstract: A processor of an aspect includes a decode unit to decode a prior instruction that is to have at least a first context, and a subsequent instruction. The subsequent instruction is to be after the prior instruction in original program order. The decode unit is to use the first context of the prior instruction to determine a second context for the subsequent instruction. The processor also includes an execution unit coupled with the decode unit. The execution unit is to perform the subsequent instruction based at least in part on the second context. Other processors, methods, systems, and machine-readable media are also disclosed.
    Type: Grant
    Filed: September 22, 2016
    Date of Patent: September 1, 2020
    Assignee: Intel Corporation
    Inventors: Ching-Tsun Chou, Oleg Margulis, Tyler N. Sondag
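    A minimal sketch of the context-propagating decode idea in the entry above, written against a made-up two-instruction mini-ISA; the opcodes, the "wide"/"default" contexts, and the 32/64-bit interpretation are illustrative assumptions, not Intel's actual design or encoding:
    ```python
    # Toy model: decoding a prior instruction establishes a context, and the decoder
    # uses that context to choose the interpretation of a subsequent instruction.
    # Hypothetical mini-ISA for illustration only.

    def decode(raw_instructions):
        """Decode a list of (opcode, operand) pairs in program order."""
        decoded = []
        context = "default"            # first context, carried by a prior instruction
        for opcode, operand in raw_instructions:
            if opcode == "SET_MODE":   # prior instruction that supplies the context
                context = operand
                decoded.append(("set_mode", operand))
            elif opcode == "ADD":
                # second context for the subsequent instruction, derived from the prior one
                width = 64 if context == "wide" else 32
                decoded.append((f"add{width}", operand))
            else:
                decoded.append((opcode.lower(), operand))
        return decoded

    print(decode([("SET_MODE", "wide"), ("ADD", 3)]))   # [('set_mode', 'wide'), ('add64', 3)]
    ```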
  • Patent number: 10762429
    Abstract: Emotional/cognitive state presentation is described. When two or more users, each using a device configured to present emotional/cognitive state data, are in proximity to one another, each device communicates an emotional/cognitive state of the user of the device to another device. Upon receiving data indicating an emotional/cognitive state of another user, an indication of that other user's emotional/cognitive state is presented.
    Type: Grant
    Filed: May 18, 2016
    Date of Patent: September 1, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: John C. Gordon, Cem Keskin
  • Patent number: 10754818
    Abstract: A multiprocessor device includes external memory, processors, a memory aggregate unit, register memory, a multiplexer, and an overall control unit. The memory aggregate unit aggregates memory accesses of the processors. The register memory is provisioned with a number of entries equal to the product of the number of registers managed by the processors and the maximum number of processes of the processors. The multiplexer accesses the register memory according to a command issued for register accesses by the processors. The overall control unit extracts a parameter from the command, provides the parameter to the processors and the multiplexer, and controls them; it has a given number of processes executed consecutively using the same command while the processors change the addressing into the register memory, and when the given number of processes ends, it switches to the next command and repeats processing for another given number of processes.
    Type: Grant
    Filed: August 5, 2015
    Date of Patent: August 25, 2020
    Assignee: ArchiTek Corporation
    Inventor: Shuichi Takada
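    The register-memory sizing rule in the abstract above is a simple product; a back-of-the-envelope sketch, with register and process counts chosen purely for illustration (the patent does not fix these values):
    ```python
    # Illustrative sizing: register-memory entries = registers per processor x max processes.
    registers_per_processor = 32      # registers managed by each processor (assumed)
    max_processes = 16                # maximum number of processes per processor (assumed)
    entries = registers_per_processor * max_processes
    print(entries)                    # 512 register-memory entries per processor
    ```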
  • Patent number: 10719587
    Abstract: Some embodiments of an entitlement model are presented. In one embodiment, a centralized server distributes copies of an operating system from a software vendor to a set of virtual guests of a virtual host running on a physical computing machine. The centralized server and the physical computing machine are coupled to each other within an internal network of a customer of the software vendor, while the centralized server has access to the software vendor external to the internal network of the customer. The centralized server may interact with a hypervisor of the physical computing machine to determine what type of license of the operating system the virtual host has and the number of copies of the operating system requested by the virtual guests.
    Type: Grant
    Filed: June 25, 2008
    Date of Patent: July 21, 2020
    Assignee: Red Hat, Inc.
    Inventors: Michael B. McCune, Peter A. Vetere, Robin L. Norwood, Maureen E. Duffy
  • Patent number: 10713044
    Abstract: A processor includes packed data registers and a decode unit to decode an instruction. The instruction is to indicate a first source operand having at least one lane of bits, and a second source packed data operand having a number of sub-lane sized bit selection elements. An execution unit is coupled with the packed data registers and the decode unit. The execution unit, in response to the instruction, stores a result operand in a destination storage location. The result operand includes a different corresponding bit for each of the number of sub-lane sized bit selection elements. The value of each bit of the result operand corresponding to a sub-lane sized bit selection element is that of the bit, within the corresponding lane of bits of the first source operand, that is indicated by the corresponding sub-lane sized bit selection element.
    Type: Grant
    Filed: September 4, 2015
    Date of Patent: July 14, 2020
    Assignee: Intel Corporation
    Inventors: Roger Espasa, Guillem Sole, David Guillen Fandos
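    A rough software model of the bit-selection behavior described in the entry above, assuming 64-bit lanes and 6-bit selection elements; these widths and the integer representation are illustrative assumptions, not the instruction's actual encoding:
    ```python
    # Each sub-lane sized selection element picks one bit out of the corresponding lane,
    # and that bit becomes the corresponding bit of the result operand.

    def select_bits(lane, selectors, sel_bits=6):
        """lane: integer holding one lane of bits from the first source operand.
        selectors: sub-lane sized selection elements from the second source operand."""
        result = 0
        for i, sel in enumerate(selectors):
            bit_index = sel & ((1 << sel_bits) - 1)      # selector indexes a bit within the lane
            bit = (lane >> bit_index) & 1
            result |= bit << i                           # one result bit per selection element
        return result

    # Bits 1, 3, 0 of the lane 0b1010 are 1, 1, 0 -> prints 0b11.
    print(bin(select_bits(0b1010, [1, 3, 0])))
    ```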
  • Patent number: 10705844
    Abstract: A data processing method and device are disclosed for adjusting the number of registers used by a running thread according to run-time conditions.
    Type: Grant
    Filed: May 10, 2016
    Date of Patent: July 7, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Choonki Jang
  • Patent number: 10699195
    Abstract: Systems and methods are disclosed herein for ensuring a safe mutation of a neural network. A processor determines a threshold value representing a limit on an amount of divergence of response for the neural network. The processor identifies a set of weights for the neural network, the set of weights beginning as an initial set of weights. The processor trains the neural network by repeating steps including determining a safe mutation representing a perturbation that results in a response of the neural network that is within the threshold divergence, and modifying the set of weights of the neural network in accordance with the safe mutation.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: June 30, 2020
    Assignee: Uber Technologies, Inc.
    Inventors: Joel Anthony Lehman, Kenneth Owen Stanley, Jeffrey Michael Clune
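    A minimal sketch of the safe-mutation loop described in the entry above, using a toy one-layer NumPy network; the divergence measure (mean absolute output difference) and the halving back-off are simplifying assumptions, not the inventors' implementation:
    ```python
    import numpy as np

    def forward(weights, x):
        return np.tanh(x @ weights)                  # toy one-layer network

    def safe_mutation(weights, x, threshold, rng, scale=0.1):
        """Propose a perturbation and shrink it until the response change stays within threshold."""
        base = forward(weights, x)
        perturbation = rng.normal(0.0, scale, size=weights.shape)
        for _ in range(20):                          # back off until the mutation is "safe"
            candidate = weights + perturbation
            divergence = np.abs(forward(candidate, x) - base).mean()
            if divergence <= threshold:
                return candidate
            perturbation *= 0.5
        return weights                               # give up: keep the original weights

    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 2))                      # initial set of weights
    x = rng.normal(size=(8, 4))                      # probe inputs used to measure divergence
    w = safe_mutation(w, x, threshold=0.05, rng=rng)
    ```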
  • Patent number: 10698858
    Abstract: A multiprocessor system includes a first microprocessor and a second microprocessor. An external memory system is coupled to the first and second microprocessors and is configured to receive and temporarily store messages transferred between the first and second microprocessors. A first signaling pathway may be configured to send message transmission coordination signals from the first microprocessor to the second microprocessor. A second signaling pathway may be configured to send message transmission coordination signals from the second microprocessor to the first microprocessor. The first signaling pathway may be independent of the second signaling pathway. The first signaling pathway may be coupled to at least two flag registers associated with the second microprocessor. The second signaling pathway may be coupled to at least two flag registers associated with the first microprocessor.
    Type: Grant
    Filed: August 15, 2017
    Date of Patent: June 30, 2020
    Assignee: EMC IP Holding Company LLC
    Inventor: Paul A. Shubel
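    A conceptual model of the flag-register handshake described in the entry above, with the flag registers and the external memory buffer reduced to plain Python attributes; the real design uses at least two flag registers per direction and independent hardware signaling pathways:
    ```python
    class Mailbox:
        """One direction of message transfer between two processors."""
        def __init__(self):
            self.buffer = None           # stands in for the external memory holding the message
            self.msg_ready = False       # flag set by the sender on its signaling pathway
            self.msg_taken = True        # flag set by the receiver on the opposite pathway

        def send(self, message):
            if not self.msg_taken:       # previous message not yet consumed
                return False
            self.buffer = message
            self.msg_taken = False
            self.msg_ready = True
            return True

        def receive(self):
            if not self.msg_ready:
                return None
            message, self.buffer = self.buffer, None
            self.msg_ready = False
            self.msg_taken = True        # acknowledge on the opposite pathway
            return message

    a_to_b = Mailbox()
    a_to_b.send("hello")
    print(a_to_b.receive())              # hello
    ```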
  • Patent number: 10691460
    Abstract: A method includes a processor providing at least one line entry address tag in each line of a branch predictor; indexing into the branch predictor with a current line address to predict a taken branch's target address and a next line address; re-indexing into the branch predictor with one of a predicted next line address or a sequential next line address when the at least one line entry address tag does not match the current line address; using branch prediction content compared against a search address to predict a direction and targets of branches and determining when a new line address is generated; and re-indexing into the branch predictor with a corrected next line address when it is determined that one of the predicted next line address or the sequential next line address differs from the new line address.
    Type: Grant
    Filed: December 13, 2016
    Date of Patent: June 23, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: James J. Bonanno, Brian R. Prasky
  • Patent number: 10657445
    Abstract: Disclosed are systems and methods for training and executing a neural network for collaborative monitoring of resource usage metrics. For example, a method may include receiving user data sets; grouping the user data sets into one or more clusters; grouping each of the one or more clusters into a plurality of subclusters; for each of the plurality of subclusters, training the neural network to associate the subcluster with one or more sequential patterns found within the subcluster; grouping the user data sets into a plurality of teams; receiving a first series of transactions of a first user; inputting the first series of transactions into the trained neural network; classifying, using the trained neural network, the first user into a subcluster among the plurality of subclusters; generating a metric associated with the first series of transactions; and generating a recommendation to the first user.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: May 19, 2020
    Assignee: Capital One Services, LLC
    Inventors: Reza Farivar, Jeremy Goodsitt, Fardin Abdi Taghi Abad, Austin Walters, Mark Watson, Anh Truong, Vincent Pham
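    A loose sketch of the cluster-then-subcluster stage of the pipeline above, substituting scikit-learn KMeans for whatever grouping the patent actually contemplates; the feature vectors are random placeholders, and the neural-network classifier, teams, metrics, and recommendation steps are omitted:
    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    user_features = rng.normal(size=(200, 8))          # one placeholder feature vector per user data set

    # First-level grouping into clusters, then each cluster into subclusters.
    clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(user_features)
    subclusters = {}
    for c in range(4):
        members = user_features[clusters == c]
        subclusters[c] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(members)
    ```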
  • Patent number: 10649774
    Abstract: A method in one aspect may include receiving a multiply instruction. The multiply instruction may indicate a first source operand and a second source operand. A product of the first and second source operands may be stored in one or more destination operands indicated by the multiply instruction. Execution of the multiply instruction may complete without writing a carry flag. Other methods are also disclosed, as are apparatus, systems, and instructions on machine-readable media.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: May 12, 2020
    Assignee: Intel Corporation
    Inventors: Vinodh Gopal, James D. Guilford, Wajdi K. Feghali, Erdinc Ozturk, Gilbert M. Wolrich, Martin G. Dixon, Mark C. Davis, Sean P. Mirkes, Alexandre J. Farcy, Bret L. Toll, Maxim Loktyukhin
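    An illustrative model of the flagless multiply in the entry above: the full double-width product is split across two destinations, and no carry flag is written. The 64-bit width and the high/low split are assumptions for illustration, not a specific encoding of the claimed instruction:
    ```python
    def mul_no_flags(src1, src2, width=64):
        """Return the full product of two width-bit sources as (high, low); flags untouched."""
        mask = (1 << width) - 1
        product = (src1 & mask) * (src2 & mask)
        low = product & mask
        high = product >> width
        return high, low                      # two destination operands

    hi, lo = mul_no_flags(0xFFFFFFFFFFFFFFFF, 2)
    print(hex(hi), hex(lo))                   # 0x1 0xfffffffffffffffe
    ```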
  • Patent number: 10642601
    Abstract: An architecture disposed in an integrated circuit for in-application programming of flash-based programmable logic devices includes a processor coupled to a processor system bus. An I/O peripheral is coupled to the processor over the system bus and is also coupled to an off-chip data source. A programmable logic device fabric includes flash-based programmable devices. A program controller is coupled to the flash-based programmable devices. An in-application programming controller is coupled to the program controller and is coupled to the processor over the system bus.
    Type: Grant
    Filed: February 14, 2017
    Date of Patent: May 5, 2020
    Assignee: Microsemi SoC Corporation
    Inventors: Venkatesh Narayanan, Kenneth R. Irving, Ming-Hoe Kiu
  • Patent number: 10642619
    Abstract: Embodiments relate to branch prediction using a pattern history table (PHT) that is indexed using a global path vector (GPV). An aspect includes receiving a search address by a branch prediction logic that is in communication with the PHT and the GPV. Another aspect includes starting with the search address, simultaneously determining a plurality of branch predictions by the branch prediction logic based on the PHT, wherein the plurality of branch predictions comprises one of: (i) at least one not taken prediction and a single taken prediction, and (ii) a plurality of not taken predictions. Another aspect includes updating the GPV by shifting an instruction identifier of a branch instruction associated with a taken prediction into the GPV, wherein the GPV is not updated based on any not taken prediction.
    Type: Grant
    Filed: October 30, 2014
    Date of Patent: May 5, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: James J. Bonanno, Matthias D. Heizmann, Daniel Lipetz, Brian R. Prasky
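    A simplified model of the global path vector (GPV) update rule in the abstract above: only a taken branch shifts its instruction identifier into the vector, and the vector is combined with the search address to index the PHT. The field widths and the XOR hash are illustrative assumptions:
    ```python
    GPV_BITS = 16        # assumed width of the global path vector
    ID_BITS = 4          # assumed width of the branch instruction identifier

    def update_gpv(gpv, branch_id, taken):
        if not taken:
            return gpv                                   # not-taken predictions leave the GPV unchanged
        shifted = (gpv << ID_BITS) | (branch_id & ((1 << ID_BITS) - 1))
        return shifted & ((1 << GPV_BITS) - 1)

    def pht_index(search_address, gpv, table_size=1024):
        return (search_address ^ gpv) % table_size       # one common way to mix address with path history

    gpv = 0
    gpv = update_gpv(gpv, branch_id=0x9, taken=True)
    gpv = update_gpv(gpv, branch_id=0x3, taken=False)    # no change
    print(hex(gpv), pht_index(0x40A2, gpv))
    ```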
  • Patent number: 10628163
    Abstract: A method and apparatus for controlling pre-fetching in a processor. A processor includes an execution pipeline and an instruction pre-fetch unit. The execution pipeline is configured to execute instructions. The instruction pre-fetch unit is coupled to the execution pipeline. The instruction pre-fetch unit includes instruction storage to store pre-fetched instructions, and pre-fetch control logic. The pre-fetch control logic is configured to fetch instructions from memory and store the fetched instructions in the instruction storage. The pre-fetch control logic is also configured to provide instructions stored in the instruction storage to the execution pipeline for execution. The pre-fetch control logic is further configured to set a maximum number of instruction words to be pre-fetched for execution subsequent to execution of an instruction currently being executed in the execution pipeline.
    Type: Grant
    Filed: April 17, 2014
    Date of Patent: April 21, 2020
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Christian Wiencke, Johann Zipperer
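    A sketch of the pre-fetch throttle described in the entry above, with memory modeled as a Python list and the maximum pre-fetch depth as a plain counter; the sizes and the issue policy are illustrative, not the claimed hardware:
    ```python
    class PrefetchUnit:
        def __init__(self, memory, max_words):
            self.memory = memory            # list of instruction words
            self.max_words = max_words      # maximum words pre-fetched beyond the current instruction
            self.storage = []               # pre-fetched instruction storage

        def prefetch(self, next_pc):
            # Stop fetching once the configured maximum is held in storage.
            while len(self.storage) < self.max_words and next_pc < len(self.memory):
                self.storage.append(self.memory[next_pc])
                next_pc += 1

        def issue(self):
            return self.storage.pop(0) if self.storage else None

    unit = PrefetchUnit(memory=list(range(100)), max_words=4)
    unit.prefetch(next_pc=10)
    print(unit.storage)                     # [10, 11, 12, 13]
    ```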
  • Patent number: 10620956
    Abstract: An instruction defined to be a looping instruction that repeats a plurality of times to perform an operation on a defined amount of data is obtained. The looping instruction is expanded into a sequence of operations. The sequence of operations is a non-looping sequence of operations to perform the operation on the defined amount of data.
    Type: Grant
    Filed: March 3, 2017
    Date of Patent: April 14, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Michael K. Gschwind
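    A rough illustration of expanding a looping instruction into a non-looping sequence of operations, using a hypothetical block-move operation and a fixed 16-byte chunk size; the patent is not limited to this operation or to this chunking:
    ```python
    CHUNK = 16  # assumed fixed operation width

    def expand_move_loop(dst, src, length):
        """Emit a straight-line list of fixed-size operations instead of a loop."""
        ops = []
        offset = 0
        while offset + CHUNK <= length:
            ops.append(("copy16", dst + offset, src + offset))
            offset += CHUNK
        if offset < length:
            ops.append(("copy_partial", dst + offset, src + offset, length - offset))
        return ops

    for op in expand_move_loop(dst=0x1000, src=0x2000, length=40):
        print(op)
    ```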
  • Patent number: 10613862
    Abstract: An instruction architecturally defined to be a looping instruction, in which a loop is configured to repeat a plurality of times to perform an operation on up to a defined number of units of data, is to be processed. The processing includes replicating a selected character a number of times to provide a replicated selected character, and using a sequence of operations to perform the operation, the sequence of operations replacing the loop and providing a non-looping sequence to perform the operation on up to the defined number of units of data. The sequence of operations is configured to repeat one or more times, and to terminate based on the replicated selected character.
    Type: Grant
    Filed: March 3, 2017
    Date of Patent: April 7, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Michael K. Gschwind
  • Patent number: 10606599
    Abstract: A system and method for using an operation (op) cache is disclosed. The system and method include an op cache for caching previously decoded instructions. The op cache includes a plurality of physically indexed and tagged instructions, allowing sharing of instructions between threads. The op cache is chained through multiple ways, allowing service of a plurality of instructions in a cache line. The op cache is split between a shared operation storage and immediate/displacement storage to maximize capacity.
    Type: Grant
    Filed: December 9, 2016
    Date of Patent: March 31, 2020
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventor: David N. Suggs
  • Patent number: 10599428
    Abstract: Processing circuitry supports overlapped execution of vector instructions, in which at least one beat of a first vector instruction is performed in parallel with at least one beat of a second vector instruction. The processing circuitry also supports mixed-scalar-vector instructions for which one of a destination register and one or more source registers is a vector register and another is a scalar register. In a sequence including first and subsequent mixed-scalar-vector instructions, instances of relaxed execution, which can potentially lead to uncertain and incorrect results, are permitted by the processing circuitry when the instructions are separated by fewer than a predetermined number of intervening instructions. In practice, the situations that lead to uncertain results are very rare, so providing relatively expensive dependency checking circuitry to eliminate such cases is not justified.
    Type: Grant
    Filed: March 23, 2016
    Date of Patent: March 24, 2020
    Assignee: ARM Limited
    Inventor: Thomas Christopher Grocutt
  • Patent number: 10592248
    Abstract: Techniques for improving branch target buffer (“BTB”) operation. A compressed BTB is included within a branch prediction unit along with an uncompressed BTB. To support prediction of up to two branch instructions per cycle, the uncompressed BTB includes entries that each store data for up to two branch predictions. The compressed BTB includes entries that store data for only a single branch instruction, for situations where storing that single branch instruction in the uncompressed BTB would waste space in that buffer. Space would be wasted in the uncompressed BTB because, in order to support two branch lookups per cycle, prediction data for two branches must have certain features in common (such as cache line address) to be stored together in a single entry.
    Type: Grant
    Filed: August 30, 2016
    Date of Patent: March 17, 2020
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventor: Steven R. Havlir
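    A simplified placement policy for the two-buffer BTB described in the entry above, using a shared cache-line address as the pairing criterion; the real sharing rules, entry formats, and capacities differ from this sketch:
    ```python
    CACHE_LINE = 64  # assumed cache line size in bytes

    def place(branches):
        """Pair branches from the same cache line into uncompressed entries;
        lone branches go to the compressed BTB instead of wasting half an entry."""
        uncompressed, compressed, pending = [], [], {}
        for pc, target in branches:
            line = pc // CACHE_LINE
            if line in pending:                          # second branch in the same line: pair them
                uncompressed.append((pending.pop(line), (pc, target)))
            else:
                pending[line] = (pc, target)
        compressed.extend(pending.values())              # leftovers land in the compressed BTB
        return uncompressed, compressed

    u, c = place([(0x100, 0x400), (0x120, 0x500), (0x200, 0x600)])
    print(u)   # one paired entry for the two branches sharing a cache line
    print(c)   # the lone branch at 0x200 goes to the compressed BTB
    ```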
  • Patent number: 10592466
    Abstract: A GPU architecture employs a crossbar switch to preferentially store operand vectors in a compressed form, reducing the number of memory circuits that must be activated during an operand fetch and allowing existing execution units to be used for scalar execution. Scalar execution can be performed during branch divergence.
    Type: Grant
    Filed: May 12, 2016
    Date of Patent: March 17, 2020
    Assignee: Wisconsin Alumni Research Foundation
    Inventors: Nam Sung Kim, Zhenhong Liu