Patents Examined by Steven Snyder
  • Patent number: 9658829
    Abstract: Embodiments of a near optimal configurable adder tree for arbitrary shaped 2D block sum of absolute differences (SAD) calculation engine are generally described herein. Other embodiments may be described and claimed. In some embodiments, a configurable two-dimensional adder tree architecture for computing a sum of absolute differences (SAD) for various block sizes up to 16 by 16 comprises a first stage of one-dimensional adder trees and a second stage of one-dimensional adder trees, wherein each one-dimensional adder tree comprises an input routing network, a plurality of adder units, and an output routing network.
    Type: Grant
    Filed: October 19, 2009
    Date of Patent: May 23, 2017
    Assignee: Intel Corporation
    Inventors: Karthikeyan Vaithianathan, Arvind Sudarsanam
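    A minimal Python model of the two-stage SAD computation described in the abstract above: a first stage of row-wise one-dimensional reductions followed by a second stage that reduces the per-row partial sums, for any block size up to 16 by 16. The pure-software formulation and function names are illustrative assumptions, not the patented adder-tree hardware.

```python
import random

def sad_2d(block_a, block_b, rows, cols):
    """Sum of absolute differences over a rows x cols sub-block (up to 16x16)."""
    # First stage: one 1-D reduction per row (row-wise adder trees).
    row_sads = [sum(abs(block_a[r][c] - block_b[r][c]) for c in range(cols))
                for r in range(rows)]
    # Second stage: a 1-D reduction across the per-row partial sums.
    return sum(row_sads)

a = [[random.randint(0, 255) for _ in range(16)] for _ in range(16)]
b = [[random.randint(0, 255) for _ in range(16)] for _ in range(16)]
print(sad_2d(a, b, 16, 16))   # full 16x16 block
print(sad_2d(a, b, 8, 4))     # arbitrary smaller block shape, e.g. 8x4
```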
  • Patent number: 9653039
    Abstract: A method, apparatus and system for changing to which remote device a local device is in communication via a communication medium communicates with a matrix switch, forming part of the system, by interruption of the communication medium by the local device. Upon receipt of a unit of information via interruption of the communication medium, the matrix switch causes the local device to be in communication with another remote device other than the remote device with which it was previously in communication. In one embodiment, the switching is to a next available remote device of a plurality of remote devices, while in another embodiment the matrix switch switches the local device to a switch configuration device for further communication therewith via the communication medium, thereby allowing the local device to select which other remote device it desires to be in communication with.
    Type: Grant
    Filed: March 29, 2012
    Date of Patent: May 16, 2017
    Assignee: THINKLOGICAL, LLC
    Inventor: Martin Green
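    A toy Python state machine suggesting the switching behavior in the abstract above: on an interruption of the communication medium, the matrix switch either moves the local device to the next available remote device or hands it to a switch configuration device. All class and device names are assumptions for illustration.

```python
class MatrixSwitch:
    def __init__(self, remotes, use_config_device=False):
        self.remotes = remotes              # available remote device names
        self.current = 0                    # index of the current remote device
        self.use_config_device = use_config_device

    def on_interruption(self, local_device):
        if self.use_config_device:
            # Hand the local device to the switch configuration device,
            # which lets it choose the target remote device explicitly.
            return f"{local_device} -> switch-configuration-device"
        # Otherwise rotate to the next available remote device.
        self.current = (self.current + 1) % len(self.remotes)
        return f"{local_device} -> {self.remotes[self.current]}"

switch = MatrixSwitch(["remote-A", "remote-B", "remote-C"])
print(switch.on_interruption("local-1"))   # local-1 -> remote-B
print(switch.on_interruption("local-1"))   # local-1 -> remote-C
```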
  • Patent number: 9651942
    Abstract: A process control arrangement (PKA), having a number of fieldbus systems (DP1, PA1, FH), especially different fieldbus systems (DP1, PA1, FH), and having a number of fieldbus interfaces (PAP1, PAP2, PAP3), wherein each of the fieldbus systems (DP1, PA1, FH) is connected to at least one of the fieldbus interfaces (PAP1, PAP2, PAP3), wherein the fieldbus interfaces (PAP1, PAP2, PAP3) serve for communication between the fieldbus systems (DP1, PA1, FH) and a communication plane (ET2) superordinated to the fieldbus systems, wherein only a first of the fieldbus interfaces (PAP1) is directly connected to the superordinated communication plane (ET2), and wherein the fieldbus interfaces (PAP1, PAP2, PAP3) are connected in series with one another beginning with the first of the fieldbus interfaces (PAP1).
    Type: Grant
    Filed: October 7, 2010
    Date of Patent: May 16, 2017
    Assignee: ENDRESS + HAUSER PROCESS SOLUTIONS AG
    Inventors: Robert Kolblin, Eugenio Ferreira Da Silva Neto, Michael Maneval, Jorg Reinkensmeier, Axel Poschmann
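    An illustrative Python sketch of the daisy-chained topology in the abstract above: only the first interface is attached to the superordinated plane, and traffic for downstream fieldbuses is relayed interface to interface along the series connection. The class, method, and message names are assumptions.

```python
class FieldbusInterface:
    def __init__(self, name, fieldbus, downstream=None):
        self.name = name                # e.g. "PAP1"
        self.fieldbus = fieldbus        # fieldbus system it serves, e.g. "DP1"
        self.downstream = downstream    # next interface in the series connection

    def handle(self, target_fieldbus, payload):
        if target_fieldbus == self.fieldbus:
            return f"{self.name} delivers {payload!r} to {self.fieldbus}"
        if self.downstream is None:
            return f"{self.name}: unknown fieldbus {target_fieldbus}"
        # Not ours: relay further down the chain.
        return self.downstream.handle(target_fieldbus, payload)

pap3 = FieldbusInterface("PAP3", "FH")
pap2 = FieldbusInterface("PAP2", "PA1", downstream=pap3)
pap1 = FieldbusInterface("PAP1", "DP1", downstream=pap2)   # attached to ET2

# A request from the superordinated plane ET2 enters the chain only at PAP1.
print(pap1.handle("FH", "read sensor 7"))
```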
  • Patent number: 9639347
    Abstract: Updating a firmware package including receiving an update package for the firmware package, the firmware package including currently installed components supporting one of a plurality of software layers, the update package including update components that correspond to the currently installed components; retrieving information describing a state of the currently installed components; comparing the information describing the state of the currently installed components to information describing a state of the corresponding update components; constructing a revised update package that includes only update components having a state more recent than the state of the corresponding currently installed components; and updating the currently installed components with corresponding update components of the revised update package.
    Type: Grant
    Filed: December 21, 2009
    Date of Patent: May 2, 2017
    Assignee: International Business Machines Corporation
    Inventors: Michael H. Nolterieke, William G. Pagan
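    A minimal Python sketch of the revised-update-package construction described in the abstract above, assuming component state can be compared as a simple version value; the dictionary layout and field semantics are illustrative, not the patented format.

```python
def build_revised_update_package(installed, update_package):
    """installed / update_package: {component_name: state}, where a larger
    state value means a more recent component."""
    revised = {}
    for name, new_state in update_package.items():
        current_state = installed.get(name)
        # Keep only update components whose state is more recent than the
        # corresponding currently installed component.
        if current_state is None or new_state > current_state:
            revised[name] = new_state
    return revised

installed = {"bmc": 3, "bios": 7, "nic": 2}
update = {"bmc": 4, "bios": 7, "nic": 1}
print(build_revised_update_package(installed, update))   # {'bmc': 4}
```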
  • Patent number: 9639324
    Abstract: A system including an encoder module, a buffer first-in first-out (FIFO) module, a buffer manager module, N FIFO modules, and N input/output (I/O) modules. The encoder module encodes data received from a host and generates P units of encoded data, where P is an integer greater than 1. The buffer FIFO module receives the P units from the encoder module and outputs the P units. The buffer manager module receives the P units from the buffer FIFO module, stores the P units in a buffer, retrieves N of the P units from the buffer, and outputs the N units in parallel, where N is an integer greater than 1. The N FIFO modules respectively receive the N units in parallel directly from the buffer manager. The N I/O modules receive the N units from the N FIFO modules in parallel, respectively, and output the N units to a storage medium.
    Type: Grant
    Filed: April 21, 2014
    Date of Patent: May 2, 2017
    Assignee: Marvell World Trade LTD.
    Inventors: Tony Yoon, Siu-Hung Fred Au
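    A simplified Python model of the data path in the abstract above: an encoder produces P units, a buffer FIFO feeds a buffer manager, and the manager fans N units out in parallel to N FIFO and I/O modules. The queue sizes and the stand-in encoder are assumptions for the sketch.

```python
from collections import deque

P, N = 8, 4

def encode(host_data):
    # Stand-in for the encoder module: split the host data into P units.
    return [host_data[i::P] for i in range(P)]

buffer_fifo = deque(encode("host-payload-to-store"))   # encoder -> buffer FIFO
buffer_storage = list(buffer_fifo)                     # buffer manager's buffer
io_fifos = [deque() for _ in range(N)]                 # N FIFO modules

# The buffer manager retrieves N of the P units and outputs them in parallel.
for i, unit in enumerate(buffer_storage[:N]):
    io_fifos[i].append(unit)

# The N I/O modules drain their FIFOs toward the storage medium.
for i, fifo in enumerate(io_fifos):
    print(f"I/O module {i} writes unit {fifo.popleft()!r} to the medium")
```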
  • Patent number: 9632783
    Abstract: Techniques are described for determining whether execution of an instruction would require reading more values from a memory cell of a general purpose register (GPR) than a read port of the memory cell would allow. In such a case, the techniques may store, prior to execution of the instruction, one or more values from the memory cell in a separate conflict queue. During execution of the instruction to implement an operation defined by the instruction, one value that is an operand of the operation would be read from the memory cell and another value that is an operand of the operation would be read from the conflict queue.
    Type: Grant
    Filed: October 3, 2014
    Date of Patent: April 25, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Yun Du, Hongjiang Shang, Haikun Zhu
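    An illustrative Python model of the conflict-queue technique in the abstract above: when both operands of an operation come from the same GPR memory cell with a single read port, one value is staged in a separate conflict queue before execution. The instruction format and names are assumptions.

```python
from collections import deque

READ_PORTS_PER_CELL = 1   # assumed: each GPR memory cell has one read port

def execute(instr, gpr_cells):
    """instr: ('add', src_cell_a, src_cell_b), an operation with two operands."""
    _, src_a, src_b = instr
    conflict_queue = deque()
    if src_a == src_b and READ_PORTS_PER_CELL < 2:
        # Both operands live in the same cell but only one port read is
        # possible, so one value is staged in the conflict queue beforehand.
        conflict_queue.append(gpr_cells[src_b])
    op_a = gpr_cells[src_a]                            # read via the cell's read port
    op_b = conflict_queue.popleft() if conflict_queue else gpr_cells[src_b]
    return op_a + op_b

gpr_cells = {0: 5, 1: 7}
print(execute(("add", 0, 0), gpr_cells))   # 10: second operand comes from the queue
print(execute(("add", 0, 1), gpr_cells))   # 12: no port conflict
```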
  • Patent number: 9632793
    Abstract: Current tasks being executed in a set of modules of a signal processing system managed via an interface block are aborted so as to permit the execution of new tasks, by pipelining the elimination of transactions of said current tasks with the execution of transactions of the new tasks. Upon arrival of a signal to abort the current tasks, data and/or memory accesses present in said interface block are discarded.
    Type: Grant
    Filed: May 6, 2013
    Date of Patent: April 25, 2017
    Assignee: STMicroelectronics S.r.l.
    Inventor: Daniele Mangano
  • Patent number: 9626184
    Abstract: A processor includes a plurality of packed data registers. The processor also includes a decode unit to decode a packed variable length code point length determination instruction. The instruction is to indicate a first source packed data that is to have a plurality of packed variable length code points that are each to represent a character. The instruction is also to indicate a destination storage location. The processor also has an execution unit coupled with the decode unit and the packed data registers. The execution unit, in response to the instruction, is to store a result packed data in the indicated destination storage location. The result packed data is to have a length for each of the plurality of the packed variable length code points. Other processors, methods, systems, and instructions are also disclosed.
    Type: Grant
    Filed: June 28, 2013
    Date of Patent: April 18, 2017
    Assignee: Intel Corporation
    Inventor: Shihjong Kuo
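    A scalar Python sketch of the per-element result the instruction produces: the byte length of each variable-length code point. UTF-8 is assumed here as the variable-length encoding, which the abstract does not name explicitly.

```python
def code_point_lengths(encoded: bytes):
    """Return the length, in bytes, of each variable-length code point."""
    lengths = []
    i = 0
    while i < len(encoded):
        lead = encoded[i]
        if lead < 0x80:
            n = 1            # 0xxxxxxx
        elif lead >> 5 == 0b110:
            n = 2            # 110xxxxx
        elif lead >> 4 == 0b1110:
            n = 3            # 1110xxxx
        else:
            n = 4            # 11110xxx
        lengths.append(n)
        i += n
    return lengths

print(code_point_lengths("a\u00df\u20ac\U0001f600".encode("utf-8")))   # [1, 2, 3, 4]
```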
  • Patent number: 9619232
    Abstract: Predictive fetching and decoding for selected instructions (e.g., operating system instructions, hypervisor instructions or other such instructions). A determination is made that a selected instruction, such as a system call instruction, an asynchronous interrupt, a return from system call instruction or return from asynchronous interrupt, is to be executed. Based on determining that such an instruction is to be executed, a predicted address is determined for the selected instruction, which is the address to which processing transfers in order to provide the requested services. Then, fetching of instructions beginning at the predicted address prior to execution of the selected instruction is commenced. Further, speculative state relating to a selected instruction, including, for instance, an indication of the privilege level of the selected instruction or instructions executed on behalf of the selected instruction, is predicted and maintained.
    Type: Grant
    Filed: December 3, 2014
    Date of Patent: April 11, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura
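    A hedged Python sketch of the predictive-fetch idea in the abstract above: when a system call instruction is recognized, a predicted handler address is fetched before the instruction executes, and the speculative privilege level is tracked alongside. The entry-point constant and data structures are assumptions.

```python
SYSCALL_ENTRY = 0xFFFF0000          # assumed predicted handler address

def predict_and_prefetch(instruction, fetch):
    """Begin fetching at the predicted address before the selected
    instruction executes, and record the predicted speculative state."""
    speculative_state = None
    if instruction == "syscall":
        predicted_addr = SYSCALL_ENTRY
        speculative_state = {"privilege": "supervisor"}   # predicted state
        fetch(predicted_addr)       # start fetching ahead of execution
    return speculative_state

fetched = []
state = predict_and_prefetch("syscall", fetched.append)
print(fetched, state)               # [4294901760] {'privilege': 'supervisor'}
```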
  • Patent number: 9619750
    Abstract: An apparatus and method for store dependence prediction is described. For example, one embodiment of the invention includes a processor comprising a store buffer for buffering store operations prior to completion, the store operations to store data to a memory hierarchy; and a store dependence predictor to predict whether load operations should be permitted to speculatively skip over each store operation and responsively setting an indication within an entry associated with each store operation in the store buffer; wherein a load operation checks the indication in the store buffer to determine whether to speculatively execute ahead of each store operation.
    Type: Grant
    Filed: June 29, 2013
    Date of Patent: April 11, 2017
    Assignee: INTEL CORPORATION
    Inventors: Ho-Seop Kim, Robert S. Chappell, Choon Yip Soo
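    A simplified Python model of the store-buffer indication described in the abstract above: each buffered store carries a predictor-set flag saying whether younger loads may speculatively skip it, and a load checks those flags before executing ahead. The data structures are assumptions.

```python
class StoreBufferEntry:
    def __init__(self, address, data, may_skip):
        self.address = address
        self.data = data
        self.may_skip = may_skip   # indication set by the store dependence predictor

def load_may_execute_ahead(store_buffer):
    # The load checks the indication on each older buffered store and only
    # runs ahead speculatively if every one of them is marked skippable.
    return all(entry.may_skip for entry in store_buffer)

older_stores = [StoreBufferEntry(0x100, 1, may_skip=True),
                StoreBufferEntry(0x200, 2, may_skip=True)]
print(load_may_execute_ahead(older_stores))                      # True
older_stores.append(StoreBufferEntry(0x300, 3, may_skip=False))  # predicted dependence
print(load_may_execute_ahead(older_stores))                      # False
```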
  • Patent number: 9619230
    Abstract: Predictive fetching and decoding for selected instructions (e.g., operating system instructions, hypervisor instructions or other such instructions). A determination is made that a selected instruction, such as a system call instruction, an asynchronous interrupt, a return from system call instruction or return from asynchronous interrupt, is to be executed. Based on determining that such an instruction is to be executed, a predicted address is determined for the selected instruction, which is the address to which processing transfers in order to provide the requested services. Then, fetching of instructions beginning at the predicted address prior to execution of the selected instruction is commenced. Further, speculative state relating to a selected instruction, including, for instance, an indication of the privilege level of the selected instruction or instructions executed on behalf of the selected instruction, is predicted and maintained.
    Type: Grant
    Filed: June 28, 2013
    Date of Patent: April 11, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura
  • Patent number: 9569214
    Abstract: In one embodiment, in an execution pipeline having a plurality of execution subunits, a method of using a bypass network to directly forward data from a producing execution subunit to a consuming execution subunit is provided. The method includes producing output data with the producing execution subunit, consuming input data with the consuming execution subunit, for one or more intervening operations whose input is the output data from the producing execution subunit and whose output is the input data to the consuming execution subunit, evaluating those one or more intervening operations to determine whether their execution would compose an identity function, and if the one or more intervening operations would compose such an identity function, controlling the bypass network to forward the producing execution subunit's output data directly to the consuming execution subunit.
    Type: Grant
    Filed: December 27, 2012
    Date of Patent: February 14, 2017
    Assignee: NVIDIA CORPORATION
    Inventors: Gokul Govindu, Parag Gupta, Scott Pitkethly, Guillermo J. Rozas
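    A Python illustration of the bypass condition in the abstract above: if the intervening operations compose to the identity function, the producer's output is forwarded directly to the consumer. Testing the composition by probing sample values is a sketch device, not the patented evaluation mechanism.

```python
def composes_identity(ops, samples=(-3.0, 0.0, 1.0, 7.5)):
    """Check whether applying the operations in order leaves the input unchanged."""
    def composed(x):
        for op in ops:
            x = op(x)
        return x
    return all(composed(s) == s for s in samples)

producer_output = 42.0
intervening = [lambda x: x * 2.0, lambda x: x * 0.5]   # doubles, then halves

if composes_identity(intervening):
    consumer_input = producer_output    # bypass network forwards directly
else:
    consumer_input = None               # fall back to normal routing
print(consumer_input)                   # 42.0
```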
  • Patent number: 9547511
    Abstract: A language-based model to support asynchronous operations set forth in a synchronous syntax is provided. The asynchronous operations are transformed in a compiler into an asynchronous pattern, such as an APM-based pattern (or asynchronous programming model based pattern). The ability to compose asynchronous operations comes from the ability to efficiently call asynchronous methods from other asynchronous methods, pause them and later resume them, and effectively implement a single-linked stack. One example includes support for ordered and unordered compositions of asynchronous operations. In an ordered composition, each asynchronous operation is started and finished before another operation in the composition is started. In an unordered composition, each asynchronous operation is started and completed independently of the operations in the unordered composition.
    Type: Grant
    Filed: June 5, 2009
    Date of Patent: January 17, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Niklas Gustafsson, Geoffrey M. Kizer
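    The ordered and unordered compositions from the abstract above, illustrated with Python's asyncio rather than the compiler transformation the patent describes: an ordered composition awaits each operation before starting the next, while an unordered composition starts the operations together and lets them complete independently.

```python
import asyncio

async def fetch(name, delay):
    await asyncio.sleep(delay)
    return name

async def ordered():
    # Each operation starts and finishes before the next one starts.
    return [await fetch("a", 0.02), await fetch("b", 0.01)]

async def unordered():
    # Operations start together and complete independently of one another.
    return await asyncio.gather(fetch("a", 0.02), fetch("b", 0.01))

print(asyncio.run(ordered()))     # ['a', 'b']
print(asyncio.run(unordered()))   # ['a', 'b'] (but run concurrently)
```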
  • Patent number: 9535703
    Abstract: A predictor data structure is used for pipelined processing by a pipelined processor. The predictor data structure includes a predicted address to be used in return from execution of a selected instruction, and a predicted operating state associated with the predicted address. Based on determining a selected return instruction is to be executed, the predicted address to which processing is to be returned is obtained from the predictor data structure. Further, based on determining the selected return instruction is to be executed, a transitional operating state to be entered based on the predicted operating state stored in the predictor data structure is predicted, wherein at least one of the predicted address and the predicted transitional operating state are to be used to validate execution of the selected return instruction.
    Type: Grant
    Filed: December 3, 2014
    Date of Patent: January 3, 2017
    Assignee: GLOBALFOUNDRIES INC.
    Inventors: Michael K. Gschwind, Valentina Salapura
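    A minimal Python sketch of the predictor data structure in the abstract above: each entry pairs a predicted return address with a predicted operating state, and both are later compared against the actual return to validate it. The stack-based organization and field names are assumptions.

```python
predictor_stack = []

def on_call(return_address, operating_state):
    # Record the prediction when the call-like instruction is seen.
    predictor_stack.append({"address": return_address, "state": operating_state})

def on_return(actual_address, actual_state):
    predicted = predictor_stack.pop()
    # The predicted address and transitional operating state are used to
    # validate execution of the selected return instruction.
    valid = (predicted["address"] == actual_address
             and predicted["state"] == actual_state)
    return predicted, valid

on_call(0x4010, "problem-state")
print(on_return(0x4010, "problem-state"))   # ({'address': 16400, 'state': ...}, True)
```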
  • Patent number: 9535695
    Abstract: Techniques are disclosed relating to completion of load and store instructions in a weakly-ordered memory model. In one embodiment, a processor includes a load queue and a store queue and is configured to associate queue information with a load instruction in an instruction stream. In this embodiment, the queue information indicates a location of the load instruction in the load queue and one or more locations in the store queue that are associated with one or more store instructions that are older than the load instruction. The processor may determine, using the queue information, that the load instruction does not conflict with a store instruction in the store queue that is older than the load instruction. The processor may remove the load instruction from the load queue while the store instruction remains in the store queue. The queue information may include a wrap value for the load queue.
    Type: Grant
    Filed: January 25, 2013
    Date of Patent: January 3, 2017
    Assignee: Apple Inc.
    Inventors: John H. Mylius, Rajat Goel, Pradeep Kanapathipillai, Hari S. Kannan
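    A rough Python model of the queue information in the abstract above: a load records the store-queue positions of older stores so it can be checked for conflicts and removed from the load queue while non-conflicting older stores remain buffered. The wrap field and data layout are illustrative only.

```python
def load_conflicts(load_addr, older_store_positions, store_queue):
    # A conflict exists only if an older store targets the same address.
    return any(store_queue[pos]["addr"] == load_addr
               for pos in older_store_positions)

store_queue = [{"addr": 0x80, "wrap": 0}, {"addr": 0x90, "wrap": 0}]
load_queue = [{"addr": 0xA0, "older_stores": [0, 1], "wrap": 0}]

load = load_queue[0]
if not load_conflicts(load["addr"], load["older_stores"], store_queue):
    load_queue.remove(load)          # the load completes and leaves its queue
print(load_queue, store_queue)       # [] while both older stores stay buffered
```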
  • Patent number: 9535744
    Abstract: A processor, system, and method are described for continued retirement of operations during a commit of a speculative region of program code. For example, one embodiment of a method comprises the operations of identifying a plurality of transactional memory regions in program code, including a first transactional memory region; and retiring one or more of a plurality of operations which follow the first transactional memory region even when a commit operation associated with the first transactional memory region is waiting to complete.
    Type: Grant
    Filed: June 29, 2013
    Date of Patent: January 3, 2017
    Assignee: INTEL CORPORATION
    Inventors: Ravi Rajwar, Matthew C. Merten, Christine E. Wang, Vijaykumar B. Kadgi, Rajesh S. Parthasarathy
  • Patent number: 9529567
    Abstract: A digital processor is provided having an instruction set with a complex exponential function. The digital processor evaluates a complex exponential function for an input value, x, by obtaining a complex exponential software instruction having the input value, x, as an input; and in response to the complex exponential software instruction: invoking at least one complex exponential functional unit that implements complex exponential software instructions to apply the complex exponential function to the input value, x; and generating an output corresponding to the complex exponential of the input value, x. A complex exponential function for an input value, x, can be evaluated by wrapping the input value to maintain a given range; computing a coarse approximation angle using a look-up table; scaling the coarse approximation angle to obtain an angle from 0 to π; and computing a fine corrective value using a polynomial approximation.
    Type: Grant
    Filed: October 26, 2012
    Date of Patent: December 27, 2016
    Assignee: Intel Corporation
    Inventors: Kameran Azadet, Albert Molina, Joseph H. Othmer, Parakalan Venkataraghavan, Meng-Lin Yu, Joseph Williams
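    A worked Python sketch of the evaluation flow in the abstract above: wrap the input to a fixed range, take a coarse angle from a look-up table, and apply a fine polynomial correction for the residual angle using exp(j(a+b)) = exp(ja)·exp(jb). The table size and polynomial degree are arbitrary choices for the sketch.

```python
import cmath
import math

TABLE_SIZE = 64
LUT = [cmath.exp(1j * 2 * math.pi * k / TABLE_SIZE) for k in range(TABLE_SIZE)]

def cexp_approx(x):
    x = x % (2 * math.pi)                        # wrap the input to a given range
    k = round(x * TABLE_SIZE / (2 * math.pi)) % TABLE_SIZE
    coarse = LUT[k]                              # coarse approximation angle from the LUT
    b = x - 2 * math.pi * k / TABLE_SIZE         # small residual angle
    # Fine corrective value: low-order polynomial approximation of exp(jb).
    fine = 1 + 1j * b - b * b / 2 - 1j * b ** 3 / 6
    return coarse * fine

x = 2.5
print(cexp_approx(x))        # approx (-0.8011+0.5985j)
print(cmath.exp(1j * x))     # reference value
```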
  • Patent number: 9529760
    Abstract: A technique for handling cache-inhibited operations in a data processing system includes receiving, at a replicated bus unit, a cache-inhibited (CI) operation. The replicated bus unit determines whether an address associated with the CI operation matches an address for the replicated bus unit and whether a source indicated by the CI operation is associated with the replicated bus unit. In response to the address associated with the CI operation matching the address for the replicated bus unit and the source indicated by the CI operation being associated with the replicated bus unit, the replicated bus unit processes the CI operation. In response to the address associated with the CI operation not matching the address for the replicated bus unit or the source indicated by the CI operation not being associated with the replicated bus unit, the replicated bus unit ignores the CI operation.
    Type: Grant
    Filed: March 28, 2016
    Date of Patent: December 27, 2016
    Assignee: International Business Machines Corporation
    Inventors: Richard L. Arndt, Florian Auernhammer, Hugh Shen, Derek E. Williams
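    A direct Python transcription of the decision in the abstract above: a replicated bus unit processes a cache-inhibited operation only when both the address and the indicated source match it, and ignores the operation otherwise. The field names and set-based source association are assumptions.

```python
def handle_ci_operation(unit, ci_op):
    # Process only when both the address and the indicated source match.
    if ci_op["address"] == unit["address"] and ci_op["source"] in unit["sources"]:
        return f"{unit['name']} processes CI op from {ci_op['source']}"
    return f"{unit['name']} ignores CI op"

unit = {"name": "bus-unit-0", "address": 0x3F0, "sources": {"core-0", "core-1"}}
print(handle_ci_operation(unit, {"address": 0x3F0, "source": "core-0"}))  # processed
print(handle_ci_operation(unit, {"address": 0x3F0, "source": "core-7"}))  # ignored
```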
  • Patent number: 9529386
    Abstract: According to one embodiment, an information processing apparatus includes a first logic circuit, a second logic circuit and a controller. The first logic circuit selectively supplies either the detection signal indicating the connection of the external display to a first connector or the detection signal indicating the connection of the external display to a second connector on an extension unit to the input/output port. The second logic circuit switches between supplying the detection signal to the input/output port and cutting off the detection signal to the input/output port. The controller controls the second logic circuit to cut off the supply of the detection signal to the input/output port for a first period when the extension unit is attached or detached.
    Type: Grant
    Filed: January 10, 2014
    Date of Patent: December 27, 2016
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventor: Taku Naruse
  • Patent number: 9519486
    Abstract: A method of processing data in an integrated circuit is described. The method comprises establishing a pipeline of processing blocks, wherein each processing block has a different function; coupling a data packet having data and meta-data to an input of the pipeline of processing blocks; and processing the data of the data packet using predetermined processing blocks based upon the meta-data. A device for processing data in an integrated circuit is also described.
    Type: Grant
    Filed: November 21, 2012
    Date of Patent: December 13, 2016
    Assignee: XILINX, INC.
    Inventors: Michaela Blott, Thomas B. English, Kornelis A. Vissers
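    A small Python model of the pipeline in the abstract above: each data packet carries data plus meta-data, and the meta-data selects which fixed-function processing blocks act on the data as it passes through. The block functions and meta-data format are assumptions.

```python
def parse(data):    return data.strip()
def checksum(data): return data + "|crc"
def encrypt(data):  return data[::-1]            # placeholder transform

PIPELINE = [("parse", parse), ("checksum", checksum), ("encrypt", encrypt)]

def process(packet):
    data = packet["data"]
    for name, block in PIPELINE:
        if name in packet["meta"]:               # meta-data picks the blocks used
            data = block(data)
    return data

packet = {"data": "  payload ", "meta": {"parse", "checksum"}}
print(process(packet))                           # 'payload|crc'
```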