Multiplication Followed By Addition (i.e., x*y+z) Patents (Class 708/523)
  • Patent number: 11409536
    Abstract: A method and apparatus for performing a multi-precision computation in a plurality of arithmetic logic units (ALUs) includes pairing a first Single Instruction/Multiple Data (SIMD) block channel device with a second SIMD block channel device to create a first block pair having one-level staggering between the first and second channel devices. A third SIMD block channel device is paired with a fourth SIMD block channel device to create a second block pair having one-level staggering between the third and fourth channel devices. A plurality of source inputs are received at the first block pair and the second block pair. The first block pair computes a first result, and the second block pair computes a second result.
    Type: Grant
    Filed: November 3, 2016
    Date of Patent: August 9, 2022
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Bin He, YunXiao Zou, Jiasheng Chen, Michael Mantor
  • Patent number: 11334319
    Abstract: An apparatus and method for multiplying packed unsigned words.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: May 17, 2022
    Assignee: Intel Corporation
    Inventors: Venkateswara Rao Madduri, Elmoustapha Ould-Ahmed-Vall, Robert Valentine
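As a rough illustration of the packed unsigned word multiplication named in the abstract above (patent 11334319), here is a minimal Python sketch that multiplies 16-bit unsigned lanes packed into one integer register; the lane count and the choice to keep only the low half of each product are assumptions, not details taken from the patent.

```python
def mul_packed_u16(a: int, b: int, lanes: int = 4) -> int:
    """Multiply packed unsigned 16-bit words lane by lane.

    `a` and `b` are integers holding `lanes` unsigned 16-bit values; each
    product is truncated back to 16 bits (low half kept). Lane count and
    truncation behaviour are illustrative assumptions."""
    result = 0
    for i in range(lanes):
        x = (a >> (16 * i)) & 0xFFFF               # extract lane i of a
        y = (b >> (16 * i)) & 0xFFFF               # extract lane i of b
        result |= ((x * y) & 0xFFFF) << (16 * i)   # keep the low 16 bits of the product
    return result

# Two packed registers with lanes [1, 2, 3, 4] and [10, 20, 30, 40]
a = sum(v << (16 * i) for i, v in enumerate([1, 2, 3, 4]))
b = sum(v << (16 * i) for i, v in enumerate([10, 20, 30, 40]))
packed = mul_packed_u16(a, b)
print([(packed >> (16 * i)) & 0xFFFF for i in range(4)])  # [10, 40, 90, 160]
```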
  • Patent number: 11308574
    Abstract: Embodiments described herein provide a graphics processor that can perform a variety of mixed and multiple precision instructions and operations. One embodiment provides a streaming multiprocessor that can concurrently execute multiple thread groups, wherein the streaming multiprocessor includes a single instruction, multiple thread (SIMT) architecture and the streaming multiprocessor is to execute multiple threads for each of multiple instructions. The streaming multiprocessor can perform concurrent integer and floating-point operations and includes a mixed precision core to perform operations at multiple precisions.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: April 19, 2022
    Assignee: Intel Corporation
    Inventors: Elmoustapha Ould-Ahmed-Vall, Sara S. Baghsorkhi, Anbang Yao, Kevin Nealis, Xiaoming Chen, Altug Koker, Abhishek R. Appu, John C. Weast, Mike B. Macpherson, Dukhwan Kim, Linda L. Hurd, Ben J. Ashbaugh, Barath Lakshmanan, Liwei Ma, Joydeep Ray, Ping T. Tang, Michael S. Strickland
  • Patent number: 11281428
    Abstract: A data processing apparatus is provided to convert a plurality of signed digits to an output value, the data processing apparatus comprising: receiver circuitry to receive, at each of a plurality of iterations, a signed digit from the plurality of signed digits, and previous intermediate data. Conversion circuitry performs a negative-output conversion from the signed digit to an unsigned digit, such that the output value comprising the unsigned digit is negative. Concatenation circuitry concatenates bits of the unsigned digit and bits of the previous intermediate data to produce updated intermediate data, and output circuitry provides the updated intermediate data as the previous intermediate data of a next iteration. After the plurality of iterations, the output circuitry outputs at least part of the updated intermediate data as the output value.
    Type: Grant
    Filed: March 12, 2019
    Date of Patent: March 22, 2022
    Assignee: ARM LIMITED
    Inventor: Javier Diaz Bruguera
  • Patent number: 11270405
    Abstract: An apparatus to facilitate compute optimization is disclosed. The apparatus includes a mixed precision core to perform a mixed precision multi-dimensional matrix multiply and accumulate operation on 8-bit and/or 32-bit signed or unsigned integer elements.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: March 8, 2022
    Assignee: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Linda L. Hurd, Dukhwan Kim, Mike B. Macpherson, John C. Weast, Feng Chen, Farshad Akhbari, Narayan Srinivasa, Nadathur Rajagopalan Satish, Joydeep Ray, Ping T. Tang, Michael S. Strickland, Xiaoming Chen, Anbang Yao, Tatiana Shpeisman
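A minimal numpy sketch of the kind of mixed-precision multiply-and-accumulate the abstract above mentions: 8-bit integer operands are multiplied and accumulated into a 32-bit integer result. The matrix shapes and the signedness chosen here are assumptions for illustration only.

```python
import numpy as np

def int8_matmul_acc(a_i8: np.ndarray, b_i8: np.ndarray, acc_i32: np.ndarray) -> np.ndarray:
    """Multiply 8-bit integer matrices and accumulate into a 32-bit accumulator.

    Products are widened to int32 before summation so the int8 inputs cannot
    overflow the intermediate result (a mixed-precision MAC in the spirit of
    the abstract above; shapes and signedness are illustrative assumptions)."""
    return acc_i32 + a_i8.astype(np.int32) @ b_i8.astype(np.int32)

a = np.random.randint(-128, 128, size=(4, 8), dtype=np.int8)
b = np.random.randint(-128, 128, size=(8, 4), dtype=np.int8)
acc = np.zeros((4, 4), dtype=np.int32)
acc = int8_matmul_acc(a, b, acc)
print(acc.dtype, acc.shape)   # int32 (4, 4)
```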
  • Patent number: 11256476
    Abstract: A tile of an FPGA includes a multiple mode arithmetic circuit. The multiple mode arithmetic circuit is configured by control signals to operate in an integer mode, a floating-point mode, or both. In some example embodiments, multiple integer modes (e.g., unsigned, two's complement, and sign-magnitude) are selectable, multiple floating-point modes (e.g., 16-bit mantissa and 8-bit sign, 8-bit mantissa and 6-bit sign, and 6-bit mantissa and 6-bit sign) are supported, or any suitable combination thereof. The tile may also fuse a memory circuit with the arithmetic circuits. Connections directly between multiple instances of the tile are also available, allowing multiple tiles to be treated as larger memories or arithmetic circuits. By using these connections, referred to as cascade inputs and outputs, the input and output bandwidth of the arithmetic circuit is further increased.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: February 22, 2022
    Assignee: Achronix Semiconductor Corporation
    Inventors: Daniel Pugh, Raymond Nijssen, Michael Philip Fitton, Marcel Van der Goot
  • Patent number: 11249723
    Abstract: A method related to posit tensor processing can include receiving, by a plurality of multiply-accumulator (MAC) units coupled to one another, a plurality of universal number (unum) or posit bit strings organized in a matrix and to be used as operands in a plurality of respective recursive operations performed using the plurality of MAC units, and performing, using the MAC units, the plurality of respective recursive operations. Iterations of the respective recursive operations are performed using at least one bit string that is the same bit string as was used in a preceding iteration of the respective recursive operations. The method can further include, prior to receiving the plurality of unum or posit bit strings, performing an operation to organize the plurality of unum or posit bit strings to achieve a threshold bandwidth ratio, a threshold latency, or both during performance of the plurality of respective recursive operations.
    Type: Grant
    Filed: April 2, 2020
    Date of Patent: February 15, 2022
    Assignee: Micron Technology, Inc.
    Inventor: Vijay S. Ramesh
  • Patent number: 11237833
    Abstract: The present invention discloses an instruction processing apparatus, comprising a first register adapted to store first source data, a second register adapted to store second source data, a third register adapted to store accumulated data, a decoder adapted to receive and decode a multiply-accumulate instruction, and an execution unit. The multiply-accumulate instruction indicates that the first register serves as a first operand, that the second register serves as a second operand, and that the third register serves as a third operand, and it further specifies a shift flag.
    Type: Grant
    Filed: April 10, 2020
    Date of Patent: February 1, 2022
    Assignee: Alibaba Group Holding Limited
    Inventors: Jiahui Luo, Zhijian Chen, Yubo Guo, Wenmeng Zhang
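A small Python model of the multiply-accumulate-with-shift behaviour the abstract above outlines: two source registers are multiplied, added to an accumulator register, and the result is optionally shifted when the shift flag is set. The shift direction and amount are assumptions; the abstract only says the instruction carries a shift flag.

```python
def mac_with_shift(src1: int, src2: int, acc: int,
                   shift_flag: bool, shift_amount: int = 16) -> int:
    """Model of a multiply-accumulate instruction with a shift flag.

    Computes acc + src1 * src2, then optionally right-shifts the result,
    e.g. to renormalise a fixed-point product. The direction and amount of
    the shift are illustrative assumptions."""
    result = acc + src1 * src2
    if shift_flag:
        result >>= shift_amount   # drop fractional bits of a fixed-point product
    return result

print(mac_with_shift(3, 4, 10, shift_flag=False))                  # 22
# Q.8 operands, Q.16 accumulator, shift back to an integer result:
print(mac_with_shift(3 << 8, 4 << 8, 10 << 16, shift_flag=True))   # 22
```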
  • Patent number: 11200723
    Abstract: A texture filtering unit includes a datapath block and a control block. The datapath block includes one or more parallel computation pipelines, each containing at least one hardware logic component configured to receive a plurality of inputs and generate an output value as part of a texture filtering operation. The control block includes a plurality of sequencers and an arbiter. Each sequencer executes a micro-program that defines a sequence of operations to be performed by the one or more pipelines in the datapath block as part of a texture filtering operation and the arbiter controls access, by the sequencers, to the one or more pipelines in the datapath based on predefined prioritization rules.
    Type: Grant
    Filed: February 25, 2020
    Date of Patent: December 14, 2021
    Assignee: Imagination Technologies Limited
    Inventor: Casper Van Benthem
  • Patent number: 11113084
    Abstract: This application concerns methods, apparatus, and systems for performing quantum circuit synthesis and/or for implementing the synthesis results in a quantum computer system. In certain example embodiments: a universal gate set, a target unitary described by a target angle, and a target precision are received as input; a corresponding quaternion approximation of the target unitary is determined; and a quantum circuit corresponding to the quaternion approximation is synthesized, the quantum circuit being over a single qubit gate set, the single qubit gate set being realizable by the given universal gate set for the target quantum computer architecture.
    Type: Grant
    Filed: September 26, 2016
    Date of Patent: September 7, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vadym Kliuchnikov, Jon Yard, Martin Roetteler, Alexei Bocharov
  • Patent number: 11061854
    Abstract: A vector reduction circuit configured to reduce an input vector of elements comprises a plurality of cells, wherein each of the plurality of cells other than a designated first cell that receives a designated first element of the input vector is configured to receive a particular element of the input vector, receive, from another of the one or more cells, a temporary reduction element, perform a reduction operation using the particular element and the temporary reduction element, and provide, as a new temporary reduction element, a result of performing the reduction operation using the particular element and the temporary reduction element. The vector reduction circuit also comprises an output circuit configured to provide, for output as a reduction of the input vector, a new temporary reduction element corresponding to a result of performing the reduction operation using a last element of the input vector.
    Type: Grant
    Filed: July 1, 2020
    Date of Patent: July 13, 2021
    Assignee: Google LLC
    Inventors: Gregory Michael Thorson, Andrew Everett Phelps, Olivier Temam
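The cell chain described in the abstract above behaves like a sequential fold: each cell combines its own element with the running value handed over by the previous cell, and the output circuit returns the value produced by the last cell. A minimal Python sketch, with addition as the assumed reduction operation:

```python
from typing import Callable, Sequence

def chained_reduce(elements: Sequence[float],
                   op: Callable[[float, float], float] = lambda x, y: x + y) -> float:
    """Model of the cell chain: the designated first cell simply forwards its
    element; every later cell applies the reduction op to its own element and
    the temporary reduction value received from the previous cell."""
    temp = elements[0]                 # designated first cell forwards its element
    for element in elements[1:]:       # each remaining cell in the chain
        temp = op(element, temp)       # combine own element with incoming temporary
    return temp                        # reduction of the whole input vector

print(chained_reduce([1.0, 2.0, 3.0, 4.0]))        # 10.0
print(chained_reduce([1.0, 5.0, 3.0], op=max))     # 5.0
```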
  • Patent number: 10990354
    Abstract: An accelerating device includes a signal detector that converts a first input signal and a second input signal into a first converted input signal and a second converted input signal, respectively, and that generates a final zero-value flag signal, a first one-value flag signal, and a second one-value flag signal. The accelerating device further includes a processing element (PE) that processes the first converted input signal and the second converted input signal based on the final zero-value flag signal, the first one-value flag signal, and the second one-value flag signal and that skips a first arithmetic operation and a second arithmetic operation when the final zero-value flag signal has a first value. The first value of the final zero-value flag signal indicates that the first input signal, or the second input signal, or both have a value of 0.
    Type: Grant
    Filed: September 12, 2019
    Date of Patent: April 27, 2021
    Assignee: SK hynix Inc.
    Inventor: Jae Hyeok Jang
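A small Python model of the flag-driven skipping the abstract above describes: the multiply and the follow-on accumulate are skipped entirely when the combined zero flag indicates either input is zero, and the one-value flags let a multiply degenerate into a pass-through. The exact flag encoding is an assumption; in the hardware the flags arrive precomputed from the signal detector rather than being derived inside the processing element.

```python
def pe_process(x: float, w: float, acc: float) -> float:
    """Processing-element model with zero-skip and one-skip (flag encoding assumed)."""
    zero_flag = (x == 0.0) or (w == 0.0)   # final zero-value flag
    x_is_one  = (x == 1.0)                  # first one-value flag
    w_is_one  = (w == 1.0)                  # second one-value flag

    if zero_flag:
        return acc            # skip both the multiply and the add
    if x_is_one:
        return acc + w        # multiply degenerates into a pass-through of w
    if w_is_one:
        return acc + x        # multiply degenerates into a pass-through of x
    return acc + x * w        # full multiply-accumulate

print(pe_process(0.0, 3.5, 10.0))  # 10.0  (operations skipped)
print(pe_process(1.0, 3.5, 10.0))  # 13.5  (pass-through)
print(pe_process(2.0, 3.5, 10.0))  # 17.0  (full MAC)
```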
  • Patent number: 10853037
    Abstract: Embodiments of the present disclosure pertain to digital circuits with compressed carries. In one embodiment, an adder circuit generates a sum and carry. The carry is compressed to reduce the number of bits required to represent the carry. In one embodiment, a multiplier circuit generates output product values. The output product values may be summed to produce a sum and carry. The carry may be compressed. In other embodiments, a multiplier circuit receives an input sum and compressed carry. The compressed input carry is decompressed and added to output product values and the input sum, and a resulting carry is compressed. The output of such a multiplier is another sum and compressed carry.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: December 1, 2020
    Assignee: Groq, Inc.
    Inventors: Christopher Aaron Clark, Jonathan Ross
  • Patent number: 10846088
    Abstract: When executing a program on a data processor comprising an execution unit for executing instructions in a program to be executed by the data processor, the execution unit being associated with one or more hardware units operable to execute instructions, at least one instruction in a program is associated with an indication of whether the instruction should be issued directly for execution by a hardware unit or should be intercepted during its execution by the execution unit. The execution unit then, when decoding the instruction for execution by a hardware unit in the program, determines from the indication associated with the instruction whether the instruction should be issued directly for execution by a hardware unit or intercepted during its execution by the execution unit, and issues the instruction for execution by a hardware unit directly, or pauses execution of the instruction and performs another operation, accordingly.
    Type: Grant
    Filed: August 21, 2018
    Date of Patent: November 24, 2020
    Assignee: Arm Limited
    Inventors: Mark Underwood, Hakan Lars-Goran Persson, Arne Aas
  • Patent number: 10838695
    Abstract: The present embodiments relate to circuitry that efficiently performs floating-point arithmetic operations and fixed-point arithmetic operations. Such circuitry may be implemented in specialized processing blocks. If desired, the specialized processing blocks may include configurable interconnect circuitry to support a variety of different use modes. For example, the specialized processing block may efficiently perform a fixed-point or floating-point addition operation or a portion thereof, a fixed-point or floating-point multiplication operation or a portion thereof, a fixed-point or floating-point multiply-add operation or a portion thereof, just to name a few. In some embodiments, two or more specialized processing blocks may be arranged in a cascade chain and perform together more complex operations such as a recursive mode dot product of two vectors of floating-point numbers or a Radix-2 Butterfly circuit, just to name a few.
    Type: Grant
    Filed: June 4, 2019
    Date of Patent: November 17, 2020
    Assignee: Altera Corporation
    Inventor: Martin Langhammer
  • Patent number: 10817587
    Abstract: A reconfigurable matrix multiplier (RMM) system/method allowing tight or loose coupling to supervisory control processor application control logic (ACL) in a system-on-a-chip (SOC) environment is disclosed. The RMM provides for C=A*B matrix multiplication operations having A-multiplier-matrix (AMM), B-multiplicand-matrix (BMM), and C-product-matrix (CPM), as well as C=A*B+D operations in which D-summation-matrix (DSM) represents the result of a previous multiplication operation or another previously defined matrix. The RMM provides for additional CPM LOAD/STORE paths allowing overlapping of compute/data transfer operations and provides for CPM data feedback to the AMM or BMM operand inputs from a previously calculated CPM result.
    Type: Grant
    Filed: February 26, 2018
    Date of Patent: October 27, 2020
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Arthur John Redfern, Donald Edward Steiss, Timothy David Anderson, Kai Chirca
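The two operation forms named in the abstract above, C = A*B and C = A*B + D with D optionally being the result of a previous multiplication, are easy to mirror in a few lines of numpy. This is only a functional sketch of the data flow; the overlapping LOAD/STORE paths are not modelled.

```python
from typing import Optional
import numpy as np

def rmm_multiply(amm: np.ndarray, bmm: np.ndarray,
                 dsm: Optional[np.ndarray] = None) -> np.ndarray:
    """Functional model of the RMM operations: C = A*B, or C = A*B + D, where D
    may be the product of a previous call (feedback) or another defined matrix."""
    cpm = amm @ bmm
    if dsm is not None:
        cpm = cpm + dsm
    return cpm

a = np.arange(4.0).reshape(2, 2)
b = np.eye(2)
c1 = rmm_multiply(a, b)            # C = A*B
c2 = rmm_multiply(a, b, dsm=c1)    # C = A*B + D, with D fed back from the previous result
print(c2)
```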
  • Patent number: 10795676
    Abstract: An apparatus and method for multiplying packed real and imaginary components of complex numbers.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: October 6, 2020
    Assignee: Intel Corporation
    Inventors: Venkateswara Madduri, Elmoustapha Ould-Ahmed-Vall, Jesus Corbal, Mark Charney, Robert Valentine, Binwei Yang
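A minimal sketch of multiplying packed real and imaginary components as named in the abstract above (patent 10795676), assuming an interleaved [re0, im0, re1, im1, ...] layout; the layout is an assumption, since the abstract only says the components are packed.

```python
def mul_packed_complex(a: list, b: list) -> list:
    """Multiply complex numbers stored as interleaved (real, imag) pairs.

    For each pair: (ar + ai*j) * (br + bi*j) = (ar*br - ai*bi) + (ar*bi + ai*br)*j.
    The interleaved layout is an illustrative assumption."""
    out = []
    for i in range(0, len(a), 2):
        ar, ai = a[i], a[i + 1]
        br, bi = b[i], b[i + 1]
        out.append(ar * br - ai * bi)   # real part
        out.append(ar * bi + ai * br)   # imaginary part
    return out

# (1+2j)*(3+4j) = -5+10j and (0+1j)*(0+1j) = -1+0j
print(mul_packed_complex([1.0, 2.0, 0.0, 1.0], [3.0, 4.0, 0.0, 1.0]))  # [-5.0, 10.0, -1.0, 0.0]
```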
  • Patent number: 10776109
    Abstract: A microprocessor with dynamically adjustable bit width is provided, which has a bit width register, a datapath, a statistical register, and a bit width adjuster. The bit width register stores at least one bit width. The datapath operates according to the bit width stored in the bit width register to acquire input operands from received data and process input operands. The statistical register collects calculation results of the datapath. The bit width adjuster adjusts the bit width stored in the bit width register based on the calculation results collected in the statistical register.
    Type: Grant
    Filed: October 18, 2018
    Date of Patent: September 15, 2020
    Assignee: SHANGHAI ZHAOXIN SEMICONDUCTOR CO., LTD.
    Inventors: Jing Chen, Xiaoyang Li, Juanli Song, Zhenhua Huang, Weilin Wang, Jiin Lai
  • Patent number: 10769746
    Abstract: A data queuing and format apparatus is disclosed. A first selection circuit may be configured to selectively couple a first subset of data to a first plurality of data lines dependent upon control information, and a second selection circuit may be configured to selectively couple a second subset of data to a second plurality of data lines dependent upon the control information. A storage array may include multiple storage units, and each storage unit may be configured to receive data from one or more data lines of either the first or second plurality of data lines dependent upon the control information.
    Type: Grant
    Filed: September 25, 2014
    Date of Patent: September 8, 2020
    Assignee: Apple Inc.
    Inventors: Liang Xia, Robert D. Kenney, Benjiman L. Goodman, Terence M. Potter
  • Patent number: 10664270
    Abstract: An apparatus and method for performing signed multiplication of packed signed/unsigned doublewords and accumulation with a quadword.
    Type: Grant
    Filed: December 21, 2017
    Date of Patent: May 26, 2020
    Assignee: Intel Corporation
    Inventors: Elmoustapha Ould-Ahmed-Vall, Robert Valentine, Mark Charney, Jesus Corbal, Venkateswara Madduri
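A rough Python model of the operation named in the abstract above: pairs of packed 32-bit signed (or unsigned) doublewords are multiplied at full 64-bit precision and accumulated into a 64-bit quadword. The lane count and the wrap-around behaviour are assumptions.

```python
def mul_dwords_acc_qword(a_dwords, b_dwords, acc_qword, signed=True):
    """Multiply corresponding 32-bit doublewords and accumulate into one 64-bit quadword.

    Each 32-bit by 32-bit product is taken at full precision before the
    accumulate; the final result wraps modulo 2**64 (an assumption)."""
    total = acc_qword
    for x, y in zip(a_dwords, b_dwords):
        total += x * y                      # exact product, then accumulate
    if signed:
        total &= (1 << 64) - 1              # wrap to 64 bits, then reinterpret
        return total - (1 << 64) if total >= (1 << 63) else total
    return total & ((1 << 64) - 1)

print(mul_dwords_acc_qword([2_000_000_000, -3], [2_000_000_000, 5], acc_qword=7))
# 7 + 2e9*2e9 + (-3)*5 = 3999999999999999992
```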
  • Patent number: 10628124
    Abstract: Techniques and circuits are provided for stochastic rounding. In an embodiment, a circuit includes carry-save adder (CSA) logic having three or more CSA inputs, a CSA sum output, and a CSA carry output. One of the three or more CSA inputs is presented with a random number value, while other CSA inputs are presented with input values to be summed. The circuit further includes adder logic having adder inputs and a sum output. The CSA carry output of the CSA logic is coupled with one of the adder inputs of the adder logic, and the CSA sum output of the CSA logic is coupled with another input of the adder inputs of the adder logic. A particular number of most significant bits of the sum output of the adder logic represent a stochastically rounded sum of the input values.
    Type: Grant
    Filed: March 22, 2018
    Date of Patent: April 21, 2020
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventor: Gabriel H. Loh
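Functionally, the scheme in the abstract above amounts to adding a random value whose width matches the bits being discarded and then truncating; keeping only the most significant bits of the sum yields a stochastically rounded result. A minimal Python sketch (the carry-save adder structure itself is not modelled):

```python
import random

def stochastic_round_sum(values, discard_bits):
    """Sum integer inputs and stochastically round away `discard_bits` low bits.

    A uniformly random value in [0, 2**discard_bits) is added alongside the
    operands (in the circuit above it enters one input of the carry-save
    adder); the low bits are then truncated. The probability of rounding up
    equals the discarded fraction, so the rounding is unbiased on average."""
    r = random.randrange(1 << discard_bits)   # random value on one CSA input
    total = sum(values) + r                   # CSA plus final adder, functionally
    return total >> discard_bits              # keep only the most significant bits

random.seed(0)
samples = [stochastic_round_sum([5, 6], discard_bits=3) for _ in range(10000)]
print(sum(samples) / len(samples))   # close to 11/8 = 1.375, though each sample is 1 or 2
```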
  • Patent number: 10546045
    Abstract: Systems and methods are provided for performing a dot product. Each of a first series of numbers is divided into a first value, comprising the N most significant bits of the number, and a second value to form first and second sets of values. Each of a second series of numbers is divided into a third value, comprising the N most significant bits of the number, and a fourth value to form third and fourth sets of values. A dot product of the first and fourth sets of values is computed to provide a first partial sum. A dot product of the first and third sets of values is computed to provide a second partial sum. A dot product of the second and third sets of values is computed to provide a third partial sum. The partial sums are summed to provide a result for the dot product.
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: January 28, 2020
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Lester Anderson Longley, Misael Lopez Cruz, Victor Cheng
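A compact sketch of the split-dot-product scheme described in the abstract above: each operand is split into its N most significant bits and the remaining bits, three of the four cross dot products are formed, and the partial sums are combined. Note that the abstract lists only three partial sums, so the low-by-low term is omitted; how the parts are rescaled before summation is an assumption here.

```python
def split_dot_product(a, b, total_bits=16, msb_bits=8):
    """Approximate dot product from three partial dot products.

    Each value is split into its `msb_bits` most significant bits (hi) and the
    remaining low bits (lo). Per the abstract above, three partial sums are
    formed -- hi*lo, hi*hi and lo*hi -- and combined; the lo*lo term is not
    computed. The rescaling by the split position is an assumption."""
    shift = total_bits - msb_bits
    hi_a = [x >> shift for x in a]
    lo_a = [x & ((1 << shift) - 1) for x in a]
    hi_b = [x >> shift for x in b]
    lo_b = [x & ((1 << shift) - 1) for x in b]

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    p1 = dot(hi_a, lo_b)          # first partial sum
    p2 = dot(hi_a, hi_b)          # second partial sum
    p3 = dot(lo_a, hi_b)          # third partial sum
    return (p2 << (2 * shift)) + ((p1 + p3) << shift)

a = [40000, 1234, 65535]
b = [50000, 4321, 2]
exact = sum(x * y for x, y in zip(a, b))
approx = split_dot_product(a, b)
print(exact, approx, exact - approx)   # the difference is exactly the omitted lo*lo dot product
```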
  • Patent number: 10528346
    Abstract: Disclosed embodiments relate to instructions for fused multiply-add (FMA) operations with variable-precision inputs. In one example, a processor to execute an asymmetric FMA instruction includes fetch circuitry to fetch an FMA instruction having fields to specify an opcode, a destination, and first and second source vectors having first and second widths, respectively, decode circuitry to decode the fetched FMA instruction, and a single instruction multiple data (SIMD) execution circuit to process as many elements of the second source vector as fit into an SIMD lane width by multiplying each element by a corresponding element of the first source vector, and accumulating a resulting product with previous contents of the destination, wherein the SIMD lane width is one of 16 bits, 32 bits, and 64 bits, the first width is one of 4 bits and 8 bits, and the second width is one of 1 bit, 2 bits, and 4 bits.
    Type: Grant
    Filed: March 29, 2018
    Date of Patent: January 7, 2020
    Assignee: Intel Corporation
    Inventors: Dipankar Das, Naveen K. Mellempudi, Mrinmay Dutta, Arun Kumar, Dheevatsa Mudigere, Abhisek Kundu
  • Patent number: 10372417
    Abstract: Disclosed herein is a computer-implemented method for performing multiply-add operations of binary numbers P, Q, R, S, B in an arithmetic unit of a processor, the operation calculating a result as an accumulated sum, which equals B+n×P×Q+m×R×S, where n and m are natural numbers. Further disclosed herein is an arithmetic unit configured to implement multiply-add operations of binary numbers P, Q, R, S, B comprising at least a first binary arithmetic unit for calculating an aligned high part result and a second binary arithmetic unit for calculating an aligned low part result of the multiply-add operations.
    Type: Grant
    Filed: July 13, 2017
    Date of Patent: August 6, 2019
    Assignee: International Business Machines Corporation
    Inventors: Tina Babinsky, Michael Klein, Cedric Lichtenau, Silvia M. Mueller
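The accumulated sum defined in the abstract above is simply B + n·P·Q + m·R·S; a one-function sketch makes the data flow explicit. The split into aligned high-part and low-part results performed by the two hardware units is not modelled.

```python
def multiply_add(p: int, q: int, r: int, s: int, b: int, n: int = 1, m: int = 1) -> int:
    """Compute the accumulated sum B + n*P*Q + m*R*S from the abstract above.

    n and m are natural-number weights; the hardware's aligned high/low part
    decomposition is not modelled here."""
    return b + n * (p * q) + m * (r * s)

print(multiply_add(p=3, q=4, r=5, s=6, b=7))            # 7 + 12 + 30 = 49
print(multiply_add(p=3, q=4, r=5, s=6, b=7, n=2, m=3))  # 7 + 24 + 90 = 121
```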
  • Patent number: 10365860
    Abstract: A circuit that includes a plurality of array cores, each array core of the plurality of array cores comprising: a plurality of distinct data processing circuits; and a data queue register file; a plurality of border cores, each border core of the plurality of border cores comprising: at least a register file, wherein: [i] at least a subset of the plurality of border cores encompasses a periphery of a first subset of the plurality of array cores; and [ii] a combination of the plurality of array cores and the plurality of border cores define an integrated circuit array.
    Type: Grant
    Filed: March 1, 2019
    Date of Patent: July 30, 2019
    Assignee: quadric.io, Inc.
    Inventors: Nigel Drego, Aman Sikka, Mrinalini Ravichandran, Ananth Durbha, Robert Daniel Firu, Veerbhan Kheterpal
  • Patent number: 10338925
    Abstract: Tensor register files in a hardware accelerator are disclosed. An apparatus may comprise tensor operation calculators each configured to perform a type of tensor operation. The apparatus may also comprise tensor register files, each of which is associated with one of the tensor operation calculators. The apparatus may also comprise logic configured to store respective ones of the tensors in the plurality of tensor register files in accordance with the type of tensor operation to be performed on the respective tensors. The apparatus may also control read access to tensor register files based on a type of tensor operation that a machine instruction is to perform.
    Type: Grant
    Filed: May 24, 2017
    Date of Patent: July 2, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jeremy Halden Fowers, Steven Karl Reinhardt, Kalin Ovtcharov, Eric Sen Chung
  • Patent number: 10261796
    Abstract: A processor and a method for executing an instruction on a processor are provided. In the method, a to-be-executed instruction is fetched, the instruction including a source address field, a destination address field, an operation type field, and an operation parameter field; in at least one execution unit, an execution unit controlled by a to-be-generated control signal according to the operation type field is determined, a source address and a destination address of data operated by the execution unit are determined according to the source address field and the destination address field, and a data amount of the data operated by the execution unit controlled by the to-be-generated control signal is determined according to the operation parameter field; the control signal is generated; and the execution unit in the at least one execution unit is controlled by using the control signal.
    Type: Grant
    Filed: November 23, 2016
    Date of Patent: April 16, 2019
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD
    Inventors: Jian Ouyang, Wei Qi, Yong Wang
  • Patent number: 10198263
    Abstract: Apparatus and methods are disclosed for nullifying one or more registers identified in a target field of a nullification instruction. In some examples of the disclosed technology, an apparatus can include memory and one or more block-based processor cores configured to fetch and execute a plurality of instruction blocks. One of the cores can include a control unit configured, based at least in part on receiving a nullification instruction, to obtain a register identification of at least one of a plurality of registers, based on a target field of the nullification instruction. A write to the at least one register associated with the register identification is nullified. The nullification instruction is in a first instruction block of the plurality of instruction blocks. Based on the nullified write to the at least one register, a subsequent instruction is executed from a second, different instruction block.
    Type: Grant
    Filed: March 3, 2016
    Date of Patent: February 5, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Douglas C. Burger, Aaron L. Smith
  • Patent number: 10169297
    Abstract: In one example in accordance with the present disclosure a resistive memory array is described. The array includes a number of resistive memory elements to receive a common-valued read signal. The array also includes a number of multiplication engines to perform a multiply operation by receiving a memory element output from a corresponding resistive memory element, receiving an input signal, and generating a multiplication output based on a received memory element output and a received input signal. The array also includes an accumulation engine to sum multiplication outputs from the number of multiplication engines.
    Type: Grant
    Filed: April 16, 2015
    Date of Patent: January 1, 2019
    Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
    Inventor: Brent Buchanan
  • Patent number: 10152456
    Abstract: A correlation operation circuit includes a first SRAM storing a plurality of pieces of detection pattern data, product-sum operators, a second SRAM storing intermediate data, and a comparator. When time series data is sequentially input, the intermediate data of all correlation functions referring to one time series data are updated in a period during which the one time series data is input. When one time series data is input, the product-sum operator multiplies the detection pattern data sequentially read from the first SRAM by the one input time series data. The corresponding intermediate data is read from the second SRAM in synchronization with the multiplication, and the sequentially-calculated products are cumulatively added to the read intermediate data to be written back into the second SRAM as the intermediate data. As a result, the calculated correlation function data is supplied to the comparator to be compared with a predetermined specified value.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: December 11, 2018
    Assignee: Renesas Electronics Corporation
    Inventor: Hiroshi Ueki
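A functional Python sketch of the update described in the abstract above: each newly arriving sample is multiplied by every coefficient of every detection pattern, the products are accumulated into per-output intermediate sums held in a second memory, and a correlation value is compared against a threshold once it is complete. The buffer organisation, the pattern orientation, and the completion rule are assumptions.

```python
from collections import defaultdict

def correlate_stream(samples, patterns, threshold):
    """Incremental product-sum of a sample stream against detection patterns.

    intermediate[(p, m)] accumulates the output y_p[m] = sum_k pattern[p][k] * x[m-k]
    (pattern orientation is an assumption): sample x[t] contributes
    pattern[p][k] * x[t] to y_p[t+k]. Once y_p[t] has received all of its
    products (right after x[t] is processed) it is compared with the threshold."""
    intermediate = defaultdict(float)        # plays the role of the second SRAM
    hits = []
    length = len(patterns[0])                # assumes all patterns share one length
    for t, x in enumerate(samples):          # one time-series sample at a time
        for p, pattern in enumerate(patterns):
            for k, coeff in enumerate(pattern):
                intermediate[(p, t + k)] += coeff * x   # multiply, accumulate, write back
            if t >= length - 1 and intermediate[(p, t)] >= threshold:
                hits.append((p, t))          # completed value exceeds the specified value
    return hits

patterns = [[1.0, -1.0, 1.0]]                # one detection pattern
samples  = [0.0, 1.0, -1.0, 1.0, 0.0]
print(correlate_stream(samples, patterns, threshold=2.5))   # [(0, 3)]
```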
  • Patent number: 10146248
    Abstract: A model calculation unit for calculating a data-based function model in a control unit is provided, the model calculation unit having a processor core which includes: a multiplication unit for carrying out a multiplication on the hardware side; an addition unit for carrying out an addition on the hardware side; an exponential function unit for calculating an exponential function on the hardware side; a memory in the form of a configuration register for storing hyperparameters and node data of the data-based function model to be calculated; and a logic circuit for controlling, on the hardware side, the calculation sequence in the multiplication unit, the addition unit, the exponential function unit and the memory in order to ascertain the data-based function model.
    Type: Grant
    Filed: April 7, 2014
    Date of Patent: December 4, 2018
    Assignee: ROBERT BOSCH GMBH
    Inventors: Tobias Lang, Heiner Markert, Axel Aue, Wolfgang Fischer, Ulrich Schulmeister, Nico Bannow, Felix Streichert, Andre Guntoro, Christian Fleck, Anne Von Vietinghoff, Michael Saetzler, Michael Hanselmann, Matthias Schreiber
  • Patent number: 10140090
    Abstract: Methods, systems and computer program products for computing and summing up multiple products in a single multiplier are provided. Aspects include receiving a first number and a second number, creating partial products of the first number and the second number based on a multiplication of the first number and the second number, and reducing the number of partial products to create an intermediate result. Aspects also include receiving a third number and a fourth number, creating partial products of the third number and the fourth number based on a multiplication of the third number and the fourth number, creating a reduction tree and adding the intermediate result to the reduction tree. Aspects further include reducing the number of partial products in the reduction tree to create a second sum value and a second carry value and adding the second sum value and the second carry value to create a result.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: November 27, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael Klein, Manuela Niekisch
  • Patent number: 10140251
    Abstract: A processor and a method for executing a matrix multiplication operation on a processor. A specific implementation of the processor includes a data bus and an array processor having k processing units. The data bus is configured to sequentially read n columns of row vectors from an M×N multiplicand matrix and input same to each processing unit in the array processor, read an n×k submatrix from an N×K multiplier matrix and input each column vector of the submatrix to a corresponding processing unit in the array processor, and output a result obtained by each processing unit after executing a multiplication operation. Each processing unit in the array processor is configured to execute in parallel a vector multiplication operation on the input row and column vectors. Each processing unit includes a Wallace tree multiplier having n multipliers and n−1 adders. This implementation improves the processing efficiency of a matrix multiplication operation.
    Type: Grant
    Filed: May 9, 2017
    Date of Patent: November 27, 2018
    Assignee: Beijing Baidu Netcom Science and Technology Co., Ltd.
    Inventors: Ni Zhou, Wei Qi, Yong Wang, Jian Ouyang
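A numpy sketch of the blocked data flow described in the abstract above: n-wide row vectors of the M×N multiplicand are broadcast to k processing units, each holding one column of an n×k slice of the N×K multiplier, and each unit produces one partial dot product that is accumulated across slices. The Wallace-tree multiplier inside each unit is not modelled, and the even-divisibility requirement is an assumption.

```python
import numpy as np

def blocked_matmul(a: np.ndarray, b: np.ndarray, n: int, k: int) -> np.ndarray:
    """Blocked matrix multiply mirroring the array-processor data flow above.

    For each output row, row vectors of length n from `a` are fed to k
    processing units together with an n x k submatrix of `b`; each unit
    computes one vector product and the results accumulate over the blocks."""
    M, N = a.shape
    N2, K = b.shape
    assert N == N2 and N % n == 0 and K % k == 0   # divisibility assumed for clarity
    c = np.zeros((M, K), dtype=a.dtype)
    for i in range(M):
        for col0 in range(0, K, k):                # which group of k units / output columns
            for row0 in range(0, N, n):            # stream n-wide row vectors
                row_vec = a[i, row0:row0 + n]              # broadcast to all k units
                sub = b[row0:row0 + n, col0:col0 + k]      # n x k submatrix
                c[i, col0:col0 + k] += row_vec @ sub       # k parallel vector products
    return c

a = np.arange(12.0).reshape(3, 4)
b = np.arange(8.0).reshape(4, 2)
print(np.allclose(blocked_matmul(a, b, n=2, k=2), a @ b))   # True
```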
  • Patent number: 10127013
    Abstract: Integrated circuits with specialized processing blocks that can support both fixed-point and floating-point operations are provided. A specialized processing block of this type may include partial product generators, compression circuits, and a main adder. The main adder may include a high adder, a middle adder, a low adder, floating-point rounding circuitry, and associated selection circuitry. The middle adder may include prefix networks for outputting generate and propagate vectors, and redundant LSB processing logic for outputting LSB generate and propagate bits. The middle adder may include additional logic circuitry for generating a sum output, a sum-plus-1 output, and a sum-plus-2 output. The specialized processing block may further include accumulation circuitry to support multiply-accumulation functions for any suitable number of channels.
    Type: Grant
    Filed: December 23, 2016
    Date of Patent: November 13, 2018
    Assignee: Altera Corporation
    Inventor: Martin Langhammer
  • Patent number: 10108581
    Abstract: A vector reduction circuit configured to reduce an input vector of elements comprises a plurality of cells, wherein each of the plurality of cells other than a designated first cell that receives a designated first element of the input vector is configured to receive a particular element of the input vector, receive, from another of the one or more cells, a temporary reduction element, perform a reduction operation using the particular element and the temporary reduction element, and provide, as a new temporary reduction element, a result of performing the reduction operation using the particular element and the temporary reduction element. The vector reduction circuit also comprises an output circuit configured to provide, for output as a reduction of the input vector, a new temporary reduction element corresponding to a result of performing the reduction operation using a last element of the input vector.
    Type: Grant
    Filed: April 3, 2017
    Date of Patent: October 23, 2018
    Assignee: Google LLC
    Inventors: Gregory Michael Thorson, Andrew Everett Phelps, Olivier Temam
  • Patent number: 10097185
    Abstract: In an example embodiment, a digital block comprises a datapath circuit, one or more programmable logic devices (PLDs), and one or more control circuits. The datapath circuit comprises structural arithmetic elements. The one or more PLDs comprise uncommitted programmable logic. The one or more control circuits comprise a control register configured to store user-defined control bits, where the one or more control circuits are configured to control both the structural arithmetic elements and the uncommitted programmable logic based on the user-defined control bits.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: October 9, 2018
    Assignee: Cypress Semiconductor Corporation
    Inventors: Bert Sullam, Warren Snyder, Haneef Mohammed
  • Patent number: 10089078
    Abstract: A circuit includes a multiplier, an adder, a first result register and a second result register coupled to outputs of the multiplier and the adder, respectively. The circuit further includes: a first selection unit configured to selectively provide, to the multiplier and in response to a first control signal, a first value from a first plurality of values; and a second selection unit configured to selectively provide, to the multiplier and in response to a second control signal, a second value from a second plurality of values. The circuit also includes: a third selection unit configured to selectively provide, to the adder and in response to a third control signal, a third value from a third plurality of values; and a fourth selection unit configured to selectively provide, to the adder and in response to a fourth control signal, a fourth value from a fourth plurality of values.
    Type: Grant
    Filed: September 23, 2016
    Date of Patent: October 2, 2018
    Assignee: STMICROELECTRONICS S.R.L.
    Inventors: David Vincenzoni, Samuele Raffaelli
  • Patent number: 10042639
    Abstract: According to one embodiment, a processor includes an instruction decoder to receive an instruction to process a multiply-accumulate operation, the instruction having a first operand, a second operand, a third operand, and a fourth operand. The first operand is to specify a first storage location to store an accumulated value; the second operand is to specify a second storage location to store a first value and a second value; and the third operand is to specify a third storage location to store a third value. The processor further includes an execution unit coupled to the instruction decoder to perform the multiply-accumulate operation to multiply the first value with the second value to generate a multiply result and to accumulate the multiply result and at least a portion of a third value to an accumulated value based on the fourth operand.
    Type: Grant
    Filed: January 3, 2017
    Date of Patent: August 7, 2018
    Assignee: Intel Corporation
    Inventors: Vinodh Gopal, Erdinc Ozturk, James D. Guilford, Gilbert M. Wolrich
  • Patent number: 10037210
    Abstract: An apparatus is described that includes a semiconductor chip having an instruction execution pipeline having one or more execution units with respective logic circuitry to: a) execute a first instruction that multiplies a first input operand and a second input operand and presents a lower portion of the result, where, the first and second input operands are respective elements of first and second input vectors; b) execute a second instruction that multiplies a first input operand and a second input operand and presents an upper portion of the result, where, the first and second input operands are respective elements of first and second input vectors; and, c) execute an add instruction where a carry term of the add instruction's adding is recorded in a mask register.
    Type: Grant
    Filed: September 6, 2016
    Date of Patent: July 31, 2018
    Assignee: INTEL CORPORATION
    Inventors: Gilbert M. Wolrich, Kirk S. Yap, James D. Guilford, Erdinc Ozturk, Vinodh Gopal, Wajdi K. Feghali, Sean M. Gulley, Martin G. Dixon
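The three instructions enumerated in the abstract above map naturally onto multi-precision (limb-wise) arithmetic: a low-half multiply, a high-half multiply, and an add that exposes its carry. A small Python sketch of one limb step, with a 64-bit limb width assumed; the mask register that records the carry is represented simply as a returned carry bit.

```python
LIMB_BITS = 64
LIMB_MASK = (1 << LIMB_BITS) - 1

def mul_lo(a: int, b: int) -> int:
    """Lower half of the full product of two limbs (first instruction)."""
    return (a * b) & LIMB_MASK

def mul_hi(a: int, b: int) -> int:
    """Upper half of the full product of two limbs (second instruction)."""
    return (a * b) >> LIMB_BITS

def add_with_carry(a: int, b: int):
    """Limb add whose carry-out would be recorded in a mask register."""
    s = a + b
    return s & LIMB_MASK, s >> LIMB_BITS

# One step of a schoolbook multi-precision multiply: (acc_hi, acc_lo) += x * y
x, y, acc_lo, acc_hi = (1 << 63) + 5, 3, 7, 0
lo, carry = add_with_carry(acc_lo, mul_lo(x, y))
acc_hi = acc_hi + mul_hi(x, y) + carry
print(hex(acc_hi), hex(lo))   # the full 128-bit value of 7 + 3*(2**63 + 5)
```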
  • Patent number: 9946612
    Abstract: Implementations of encoding techniques are disclosed. In one embodiment, an encoding system includes a codec device, a switching network, a rerouting circuit, a logic integrated circuit, and memory devices. The codec device includes a plurality of input and output (I/O) ports to transport data signals. The switching network is coupled both to the plurality of I/O ports and to a plurality of channels external to the device. The plurality of I/O ports includes at least one spare channel. The rerouting circuitry is coupled to and configured to control the switching network, and the logic integrated circuit has logic circuitry including command and decode queueing circuitry, redundancy circuits, and error correction circuitry. The memory devices do not include any circuitry included in the logic circuitry. Other systems and apparatuses are also described.
    Type: Grant
    Filed: July 20, 2015
    Date of Patent: April 17, 2018
    Assignee: Micron Technology, Inc.
    Inventor: Timothy M. Hollis
  • Patent number: 9760110
    Abstract: Methods and systems for memory-based computing include combining multiple operations into a single lookup table and combining multiple memory-based operation requests into a single read request. Operation result values are read from a multi-operation lookup table that includes result values for a first operation above a diagonal of the lookup table and includes result values for a second operation below the diagonal. Numerical inputs are used as column and row addresses in the lookup table and the requested operation determines which input corresponds to the column address and which input corresponds to the row address. Multiple operations are combined into a single request by combining respective members from each operation into respective inputs and reading an operation result value from a lookup table to produce a combined result output. The combined result output is separated into a plurality of individual result outputs corresponding to the plurality of requests.
    Type: Grant
    Filed: February 4, 2016
    Date of Patent: September 12, 2017
    Assignee: International Business Machines Corporation
    Inventors: Minsik Cho, Ruchir Puri
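A small sketch of the diagonal-split table described in the abstract above: results of one operation are stored above the diagonal and results of a second operation below it, and the requested operation decides which input indexes the row and which the column. This works for commutative operations such as addition and multiplication; how equal inputs (the diagonal itself) are handled is an assumption.

```python
def build_multi_op_table(size, op1, op2):
    """Build one lookup table holding two operations.

    Entries above the diagonal (row < col) hold op1 results and entries below
    it hold op2 results. The diagonal stores op1(i, i) here, which is an
    assumption; the abstract does not say how equal inputs are handled."""
    table = [[0] * size for _ in range(size)]
    for r in range(size):
        for c in range(size):
            table[r][c] = op1(r, c) if r <= c else op2(r, c)
    return table

def lookup(table, a, b, which):
    """Route the inputs to row/column addresses according to the requested op."""
    if which == 1:
        row, col = min(a, b), max(a, b)   # stay above the diagonal -> op1 region
    else:
        row, col = max(a, b), min(a, b)   # stay below the diagonal -> op2 region
        # equal inputs would land on the diagonal, which stores op1 results here
    return table[row][col]

t = build_multi_op_table(16, op1=lambda x, y: x + y, op2=lambda x, y: x * y)
print(lookup(t, 3, 5, which=1))   # 8  (addition, read above the diagonal)
print(lookup(t, 3, 5, which=2))   # 15 (multiplication, read below the diagonal)
```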
  • Patent number: 9753695
    Abstract: A datapath circuit may include a digital multiply and accumulate circuit (MAC) and a digital hardware calculator for parallel computation. The digital hardware calculator and the MAC may be coupled to an input memory element for receipt of input operands. The MAC may include a digital multiplier structure with partial product generators coupled to an adder to multiply a first and second input operands and generate a multiplication result. The digital hardware calculator may include a first look-up table coupled between a calculator input and a calculator output register. The first look-up table may include table entry values mapped to corresponding math function results in accordance with a first predetermined mathematical function. The digital hardware calculator may be configured to calculate, based on the first look-up table, a computationally hard mathematical function such as a logarithm function, an exponential function, a division function and a square root function.
    Type: Grant
    Filed: August 27, 2013
    Date of Patent: September 5, 2017
    Assignee: Analog Devices Global
    Inventors: Mikael M. Mortensen, Jeffrey G. Bernstein
  • Patent number: 9743082
    Abstract: The present invention relates to an apparatus and method for encoding and decoding an image by skip encoding. The image-encoding method by skip encoding, which performs intra-prediction, comprises: performing a filtering operation on the signal which is reconstructed prior to an encoding object signal in an encoding object image; using the filtered reconstructed signal to generate a prediction signal for the encoding object signal; setting the generated prediction signal as a reconstruction signal for the encoding object signal; and not encoding the residual signal which can be generated on the basis of the difference between the encoding object signal and the prediction signal, thereby performing skip encoding on the encoding object signal.
    Type: Grant
    Filed: March 10, 2015
    Date of Patent: August 22, 2017
    Assignees: Electronics and Telecommunications Research Institute, Kwangwoon University Industry-Academic Collaboration Foundation, University-Industry Cooperation Group Of Kyung Hee University
    Inventors: Sung Chang Lim, Ha Hyun Lee, Se Yoon Jeong, Hui Yong Kim, Suk Hee Cho, Jong Ho Kim, Jin Ho Lee, Jin Soo Choi, Jin Woong Kim, Chie Teuk Ahn, Dong Gyu Sim, Seoung Jun Oh, Gwang Hoon Park, Sea Nae Park, Chan Woong Jeon
  • Patent number: 9690579
    Abstract: A first floating-point operation unit receives first and second variables and performs a first operation generating a first output. A first rounding unit receives and rounds the first output to generate a second output if a control bit is in a first state. A second floating-point operation unit receives a third variable and either the first output or the second output and performs a second operation on the third variable and either the first output or the second output, to generate a third output. The second floating-point operation unit receives and operates on the first output if the control bit is in the first state, or the second output if the control bit is in the second state. A second rounding unit receives and rounds the third output.
    Type: Grant
    Filed: December 29, 2014
    Date of Patent: June 27, 2017
    Assignee: ARM Finance Overseas Limited
    Inventor: David Yiu-Man Lau
  • Patent number: 9692579
    Abstract: According to some embodiments, a secondary network node detects a first data transmission of media content from a primary network node to a first wireless device. The first data transmission has a first data quality description D(n1) and a first transport format T(k1). The secondary network node selects a second data quality description D(n2′) and a second transport format T(k2′) for a second data transmission. The second data quality description D(n2′) and second transport format T(k2′) differ from the first data quality description D(n1) and first transport format T(k1), respectively. The secondary network node transmits the second data transmission to a second wireless device according to the second data quality description D(n2′) and the second transport format T(k2′). The second data transmission includes at least a portion of the media content.
    Type: Grant
    Filed: August 5, 2014
    Date of Patent: June 27, 2017
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventor: Ali S. Khayrallah
  • Patent number: 9535706
    Abstract: According to one embodiment, a processor includes an instruction decoder to receive an instruction to process a multiply-accumulate operation, the instruction having a first operand, a second operand, a third operand, and a fourth operand. The first operand is to specify a first storage location to store an accumulated value; the second operand is to specify a second storage location to store a first value and a second value; and the third operand is to specify a third storage location to store a third value. The processor further includes an execution unit coupled to the instruction decoder to perform the multiply-accumulate operation to multiply the first value with the second value to generate a multiply result and to accumulate the multiply result and at least a portion of a third value to an accumulated value based on the fourth operand.
    Type: Grant
    Filed: March 22, 2016
    Date of Patent: January 3, 2017
    Assignee: Intel Corporation
    Inventors: Vinodh Gopal, Erdinc Ozturk, James D. Guilford, Gilbert M. Wolrich
  • Patent number: 9519460
    Abstract: A single-instruction multiple-data (SIMD) multiplier-accumulator apparatus and method. A multiplier block with two 16-bit by 32-bit multiplier circuits transform a selectable number of input multipliers and multiplicands into a selected number of products. Each multiplier circuit comprises an array of full adders that generates and sums partial products using carry-save addition. An accumulator block, with additional data width to help prevent overflow, adds the products to a selectable number of input addends and outputs a number of results. Embodiments perform one to four multiplications together, depending on the number of bits (eight, 16, 24, or 32) selected for the input operands. Embodiments output 20-bit, 40-bit, or 80-bit multiply-accumulate results at rates of at least 1.1 GHz. Embodiments support signed inputs, negated multiplication products, and Q-format data. A hybrid sign extension management approach improves performance for 80-bit outputs.
    Type: Grant
    Filed: September 25, 2014
    Date of Patent: December 13, 2016
    Assignee: Cadence Design Systems, Inc.
    Inventors: Aamir A. Farooqui, David Lawrence Heine
  • Patent number: 9495154
    Abstract: Embodiments disclosed herein include vector processing engines (VPEs) having programmable data path configurations for providing multi-mode vector processing. Related vector processors, systems, and methods are also disclosed. The VPEs include a vector processing stage(s) configured to process vector data according to a vector instruction executed in the vector processing stage. Each vector processing stage includes vector processing blocks each configured to process vector data based on the vector instruction being executed. The vector processing blocks are capable of providing different vector operations for different types of vector instructions based on data path configurations. Data paths of the vector processing blocks are programmable and can be reprogrammed to process vector data differently according to the particular vector instruction being executed.
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: November 15, 2016
    Assignee: QUALCOMM Incorporated
    Inventor: Raheel Khan
  • Patent number: 9483442
    Abstract: According to an embodiment, a matrix operation apparatus executing a matrix operation includes multiple nodes, the nodes including: a multiplier configured to perform a first operation for a first input, which is column data, and a second input, which is row data, for the matrix operation, and to output element components of an operation result of the matrix operation; and an accumulator configured to perform cumulative addition of operation results of the multiplier.
    Type: Grant
    Filed: February 28, 2014
    Date of Patent: November 1, 2016
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Seiji Maeda, Hiroyuki Usui
  • Patent number: 9465578
    Abstract: A system and method are provided for performing 32-bit or dual 16-bit floating-point arithmetic operations using logic circuitry. An operating mode for a multiplication operation is received, where the operating mode is one of a 32-bit floating-point mode and a dual 16-bit floating-point mode. Based on the operating mode, nine recoding terms for a mantissa of at least one floating-point input operand are determined. A dual-mode multiplier array circuit that is configurable to generate partial products for either one 32-bit floating-point result or for two 16-bit floating-point results computes the partial products based on the nine recoding terms. The partial products are processed to generate an output based on the operating mode.
    Type: Grant
    Filed: December 13, 2013
    Date of Patent: October 11, 2016
    Assignee: NVIDIA Corporation
    Inventors: David C. Tannenbaum, Srinivasan Iyer