Format Conversion Patents (Class 708/204)
  • Patent number: 11511418
    Abstract: A data processing system for controlling different types of mobile platforms. The data processing system includes an abstraction component, a standardization component, and a driver management component. The abstraction component is designed to be connected to one or to multiple platforms, to determine types of mobile platforms, to indicate the types to the driver management component and to use drivers provided by the driver management component in order to convert messages between an interface to the standardization component and interfaces to the mobile platforms, and/or in order to activate functions of the mobile platforms and of the standardization component. The interfaces of the abstraction component to the mobile platforms include interfaces to the software components of the mobile platforms.
    Type: Grant
    Filed: June 21, 2021
    Date of Patent: November 29, 2022
    Assignee: Robert Bosch GmbH
    Inventors: David Lenhart, Matthias Figura, Philipp Gmaehle, Raphael Knorpp
  • Patent number: 11513770
    Abstract: A processor-implemented method includes receiving a first floating point operand and a second floating point operand, each having an n-bit format comprising a sign field, an exponent field, and a significand field, normalizing a binary value obtained by performing arithmetic operations for fields corresponding to each other in the first and second floating point operands for an n-bit multiplication operation, determining whether the normalized binary value is a number that is representable in the n-bit format or an extended normal number that is not representable in the n-bit format, according to a result of the determining, encoding the normalized binary value using an extension bit format in which an extension pin identifying whether the normalized binary value is the extended normal number is added to the n-bit format, and outputting the encoded binary value using the extended bit format, as a result of the n-bit multiplication operation.
    Type: Grant
    Filed: June 23, 2020
    Date of Patent: November 29, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Shinhaeng Kang, Sukhan Lee
  • Patent number: 11500630
    Abstract: An embodiment of the invention is a processor including execution circuitry to, in response to a decoded instruction, convert a half-precision floating-point value to a single-precision floating-point value and store the single-precision floating-point value in each of the plurality of element locations of a destination register. The processor also includes a decoder and the destination register. The decoder is to decode an instruction to generate the decoded instruction.
    Type: Grant
    Filed: May 12, 2020
    Date of Patent: November 15, 2022
    Assignee: Intel Corporation
    Inventors: Robert Valentine, Mark Charney, Raanan Sade, Elmoustapha Ould-Ahmed-Vall, Jesus Corbal
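The convert-and-broadcast behavior this entry describes can be pictured in a few lines. A minimal sketch, assuming NumPy and invented names rather than the patented circuitry:
```python
# Minimal sketch (not the patented hardware): widen one FP16 value to FP32
# and broadcast it into every element location of a destination vector.
import numpy as np

def convert_and_broadcast_fp16_to_fp32(half_value, num_lanes=8):
    # Convert the half-precision scalar to single precision.
    single = np.float32(np.float16(half_value))
    # Store the converted value in each element location of the destination.
    return np.full(num_lanes, single, dtype=np.float32)

print(convert_and_broadcast_fp16_to_fp32(1.5, num_lanes=4))  # [1.5 1.5 1.5 1.5]
```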
  • Patent number: 11347511
    Abstract: An apparatus has floating-point multiplying circuitry to perform a floating-point multiply operation to multiply first and second floating-point operands to generate a product floating-point value. Shared hardware circuitry of the floating-point multiplying circuitry is reused to also support a floating-point scaling instruction specifying an input floating-point operand and an integer operand, which causes a floating-point scaling operation to be performed to generate an output floating-point value corresponding to a product of the input floating-point operand and a scaling factor 2^X, where X is an integer represented by the integer operand.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: May 31, 2022
    Assignee: Arm Limited
    Inventor: David Raymond Lutz
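The scaling operation above computes input × 2^X for an integer X. A minimal software analogue, assuming math.ldexp as a stand-in for the shared multiplier hardware:
```python
# Minimal sketch of the floating-point scaling operation: output = input * 2**X,
# where X is an integer operand. math.ldexp performs exactly this exponent adjustment.
import math

def fscale(value, exponent_offset):
    # Equivalent to value * 2.0**exponent_offset without computing the power explicitly.
    return math.ldexp(value, exponent_offset)

print(fscale(1.75, 10))   # 1792.0  (1.75 * 2**10)
print(fscale(3.0, -2))    # 0.75    (3.0 * 2**-2)
```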
  • Patent number: 11341085
    Abstract: Methods and apparatus for a low energy accelerator processor architecture with short parallel instruction word. An integrated circuit includes a system bus having a data width N, where N is a positive integer; a central processor unit coupled to the system bus and configured to execute instructions retrieved from a memory coupled to the system bus; and a low energy accelerator processor coupled to the system bus and configured to execute instruction words retrieved from a low energy accelerator code memory, the low energy accelerator processor having a plurality of execution units including a load store unit, a load coefficient unit, a multiply unit, and a butterfly/adder ALU unit, each of the execution units configured to perform operations responsive to op-codes decoded from the retrieved instruction words, wherein the width of the instruction words is equal to the data width N. Additional methods and apparatus are disclosed.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: May 24, 2022
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Srinivas Lingam, Seok-Jun Lee, Johann Zipperer, Manish Goel
  • Patent number: 11327717
    Abstract: A computation unit computes a function f(I). The function f(I) has a target output range over a first domain of an input I encoded using a first format. A first circuit receives the encoded input I in the first format including X bits, to add an offset C to the encoded input I to generate an offset input SI=I+C, in a second format including fewer than X bits. The offset C is equal to a difference between the first domain in f(I) and a higher precision domain of the second format for the offset input SI. A second circuit is operatively coupled to receive the offset input SI in the second format, to output a value equal to a function f(SI) to provide an encoded output value f(I).
    Type: Grant
    Filed: November 19, 2019
    Date of Patent: May 10, 2022
    Assignee: SambaNova Systems, Inc.
    Inventors: Mingran Wang, Xiaoyan Li, Yongning Sheng
  • Patent number: 11301542
    Abstract: An apparatus includes front-end circuitry to receive radar wave signals and a fast Fourier transform (FFT) signal processor. The FFT signal processor includes multiplication logic circuitry and other logic circuitry. The FFT signal processor derives Doppler information from the radar wave signals by operating on a digital stream of input data representing the radar wave signals including using the multiplication logic circuitry to perform multiplication operations on first data in the digital stream while the first data is represented in a signed magnitude form, and using the other logic circuitry to perform other mathematical operations on second data in the digital stream while the second data is represented in a two's complement form.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: April 12, 2022
    Assignee: NXP B.V.
    Inventor: Marco Jan Gerrit Bekooij
  • Patent number: 11281463
    Abstract: Methods and apparatus relating to conversion of unsigned normalized (unorm) integer values to floating-point (float) values in low power are described. In an embodiment, conversion logic converts a unorm integer value to a floating-point value based on detection of whether the unorm integer matches one of three cases, wherein the unorm integer value comprises n bits. Memory stores a count value corresponding to n-1 bits of the unorm integer value after detection of a leading 1 in the unorm integer value. The three cases include: a first case with all zeros, a second case with all ones, and a third case with a combination of one or more zeros and one or more ones. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: March 25, 2018
    Date of Patent: March 22, 2022
    Assignee: INTEL CORPORATION
    Inventors: Benjamin Pletcher, Rahul Kumar
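A minimal sketch of the unorm-to-float conversion, organized around the three cases named in the abstract; the function name and the plain division used for the mixed case are illustrative assumptions, not the patented low-power logic:
```python
# Minimal sketch of n-bit unorm -> float conversion, split into the three cases
# named in the abstract: all zeros, all ones, and a mix of zeros and ones.
def unorm_to_float(value, n):
    max_code = (1 << n) - 1
    if value == 0:            # first case: all zeros
        return 0.0
    if value == max_code:     # second case: all ones
        return 1.0
    # third case: combination of zeros and ones -- general normalization path.
    return value / max_code

print(unorm_to_float(0, 8), unorm_to_float(255, 8), unorm_to_float(128, 8))
# 0.0 1.0 0.5019607843137255
```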
  • Patent number: 11275560
    Abstract: A floating-point number in a first format representation is received. Based on an identification of a floating-point format type of the floating-point number, different components of the first format representation are identified. The different components of the first format representation are placed in corresponding components of a second format representation of the floating-point number, wherein a total number of bits of the second format representation is larger than a total number of bits of the first format representation. At least one of the components of the second format representation is padded with one or more zero bits. The floating-point number in the second format representation is stored in a register. A multiplication using the second format representation of the floating-point number is performed.
    Type: Grant
    Filed: February 19, 2020
    Date of Patent: March 15, 2022
    Assignee: Meta Platforms, Inc.
    Inventors: Thomas Mark Ulrich, Abdulkadir Utku Diril, Krishnakumar Narayanan Nair, Zhao Wang, Rakesh Komuravelli
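The widening step (place each component into the corresponding wider field, pad with zeros) can be illustrated with the standard fp16-to-fp32 pair; this is a sketch under that assumption, not the internal wide format used by the patent, and it handles only normal inputs:
```python
# Minimal sketch: unpack the sign/exponent/mantissa fields of a 16-bit float,
# place them in the corresponding 32-bit fields, and pad the mantissa with zeros.
import struct

def widen_fp16_bits_to_fp32(h):
    sign = (h >> 15) & 0x1
    exp = (h >> 10) & 0x1F
    frac = h & 0x3FF
    assert 0 < exp < 0x1F, "sketch covers normal numbers only"
    exp32 = exp - 15 + 127             # re-bias the exponent field
    frac32 = frac << 13                # pad the 10-bit mantissa with 13 zero bits
    bits32 = (sign << 31) | (exp32 << 23) | frac32
    return struct.unpack("<f", struct.pack("<I", bits32))[0]

print(widen_fp16_bits_to_fp32(0x3C00))  # 1.0
print(widen_fp16_bits_to_fp32(0xC500))  # -5.0
```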
  • Patent number: 11263009
    Abstract: Disclosed embodiments relate to systems and methods for performing 16-bit floating-point vector dot product instructions. In one example, a processor includes fetch circuitry to fetch an instruction having fields to specify an opcode and locations of first source, second source, and destination vectors, the opcode to indicate execution circuitry is to multiply N pairs of 16-bit floating-point formatted elements of the specified first and second sources, and accumulate the resulting products with previous contents of a corresponding single-precision element of the specified destination, decode circuitry to decode the fetched instruction, and execution circuitry to respond to the decoded instruction as specified by the opcode.
    Type: Grant
    Filed: February 4, 2021
    Date of Patent: March 1, 2022
    Assignee: Intel Corporation
    Inventors: Alexander F. Heinecke, Robert Valentine, Mark J. Charney, Raanan Sade, Menachem Adelman, Zeev Sperber, Amit Gradstein, Simon Rubanovich
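A small model of the dot-product semantics: pairs of 16-bit floating-point elements are multiplied and the products accumulate into a single-precision destination element. NumPy float16 stands in for the 16-bit source format, which is an assumption made for the illustration, not the instruction's exact encoding:
```python
# Minimal sketch: multiply N pairs of 16-bit floating-point elements and
# accumulate the products with a single-precision accumulator.
import numpy as np

def fp16_dot_accumulate(src1, src2, acc):
    a = np.asarray(src1, dtype=np.float16).astype(np.float32)
    b = np.asarray(src2, dtype=np.float16).astype(np.float32)
    # Products come from 16-bit inputs but accumulate in single precision.
    return np.float32(acc) + a.dot(b)

print(fp16_dot_accumulate([1.5, 2.0, -0.5], [4.0, 0.25, 8.0], acc=10.0))  # 12.5
```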
  • Patent number: 11216275
    Abstract: The embodiments herein describe a conversion engine that converts floating point data into integer data using a dynamic scaling factor. To select the scaling factor, the conversion engine compares a default (or initial) scaling factor value to an exponent portion of the floating point value to determine a shift value with which to bit shift a mantissa of the floating point value. After bit shifting the mantissa, the conversion engine determines whether the shift value caused an overflow or an underflow and whether that overflow or underflow violates a predefined policy. If the policy is violated, the conversion engine adjusts the scaling factor and restarts the conversion process. In this manner, the conversion engine can adjust the scaling factor until identifying a scaling factor that converts all the floating point values in the batch without violating the policy.
    Type: Grant
    Filed: August 5, 2019
    Date of Patent: January 4, 2022
    Assignee: XILINX, INC.
    Inventors: Philip B. James-Roxby, Eric F. Dellinger
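The retry loop described above can be sketched as follows; the policy shown (no overflow allowed) and the power-of-two scaling are illustrative simplifications of the patented conversion engine:
```python
# Minimal sketch of dynamic scaling: pick a scaling factor, convert the whole
# batch of floats to integers, and lower the factor and restart whenever any
# converted value overflows the target integer range.
def convert_batch(values, bits=16, initial_scale=15):
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    scale = initial_scale
    while True:
        ints = [int(round(v * (1 << scale))) for v in values]
        if all(lo <= i <= hi for i in ints):   # policy: no overflow allowed
            return ints, scale
        scale -= 1                             # adjust the scaling factor and restart

ints, scale = convert_batch([0.5, -1.25, 3.9])
print(scale, ints)   # 13 [4096, -10240, 31949]
```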
  • Patent number: 11210063
    Abstract: A programmable device may be configured to support machine learning training operations using matrix multiplication circuitry implemented on a systolic array. The systolic array includes an array of processing elements, each of which includes hybrid floating-point dot-product circuitry. The hybrid dot-product circuitry has a hard data path that uses digital signal processing (DSP) blocks operating in floating-point mode and a hard/soft data path that uses DSP blocks operating in fixed-point mode operated in conjunction with general purpose soft logic. The hard/soft data path includes 2-element dot-product circuits that feed an adder tree. Results from the hard data path are combined with the adder tree using format conversion and normalization circuitry. Inputs to the hybrid dot-product circuitry may be in the BFLOAT16 format. The hard data path may be in the single precision format. The hard/soft data path uses a custom format that is similar to but different than BFLOAT16.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: December 28, 2021
    Assignee: Intel Corporation
    Inventors: Martin Langhammer, Bogdan Pasca, Sergey Gribok, Gregg William Baeckler, Andrei Hagiescu
  • Patent number: 11188299
    Abstract: A method includes dividing a fraction of a floating point result into a first portion and a second portion. The method includes outputting a first normalizer result based on the first portion during a first clock cycle. The method includes storing a first segment of the first portion during the first clock cycle. The method includes outputting a first rounder result based on the first normalizer result during the first clock cycle. The method includes outputting a second normalizer result based on the second portion during a second clock cycle. The method includes outputting a second rounder result based on the second normalizer result and the first segment during the second clock cycle.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: November 30, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Nicol Hofmann, Michael Klein, Petra Leber, Kerstin Claudia Schelm
  • Patent number: 11169803
    Abstract: The present disclosure provides a computation device. The computation device is configured to perform a machine learning computation, and includes a storage unit, an operation unit, a controller unit, and a conversion unit. The storage unit is configured to obtain input data and a computation instruction. The controller unit is configured to extract and parse the computation instruction from the storage unit to obtain one or more operation instructions, and to send the one or more operation instructions and the input data to the operation unit. The operation unit is configured to perform operations on the input data according to one or more operation instructions to obtain a computation result of the computation instruction. In the examples of the present disclosure, the input data involved in machine learning computations is represented by fixed-point data, thereby improving the processing speed and efficiency of training operations.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: November 9, 2021
    Assignee: Shanghai Cambricon Information Technology Co., Ltd.
    Inventors: Yao Zhang, Bingrui Wang
  • Patent number: 11137982
    Abstract: Systems, apparatuses, and methods related to acceleration circuitry are described. The acceleration circuitry may be deployed in a memory device and can include a memory resource and/or logic circuitry. The acceleration circuitry can perform operations on data to convert the data between one or more numeric formats, such as floating-point and/or universal number (e.g., posit) formats. The acceleration circuitry can perform arithmetic and/or logical operations on the data after the data has been converted to a particular format. For instance, the memory resource can receive data comprising a bit string having a first format that provides a first level of precision. The logic circuitry can receive the data from the memory resource and convert the bit string to a second format that provides a second level of precision that is different from the first level of precision.
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: October 5, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Vijay S. Ramesh, Richard C. Murphy
  • Patent number: 11126428
    Abstract: Embodiments detailed herein relate to arithmetic operations on floating-point values. An exemplary processor includes decoding circuitry to decode an instruction, where the instruction specifies locations of a plurality of operands, values of which are in a floating-point format. The exemplary processor further includes execution circuitry to execute the decoded instruction, where the execution includes to: convert the values for each operand, each value being converted into a plurality of lower precision values, where an exponent is to be stored for each operand; perform arithmetic operations among lower precision values converted from values for the plurality of the operands; and generate a floating-point value by converting a resulting value from the arithmetic operations into the floating-point format and store the floating-point value.
    Type: Grant
    Filed: December 17, 2020
    Date of Patent: September 21, 2021
    Assignee: INTEL CORPORATION
    Inventors: Gregory Henry, Alexander Heinecke
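One way to picture the decomposition described above is the familiar split of a float32 operand into high and low bfloat16-style pieces, multiplying the pieces, and summing the partial products; the number of pieces and partial products used by the actual instruction is not specified here, so this is only a sketch:
```python
# Minimal sketch: split each float32 into a high piece (top 16 bits of its
# encoding, i.e. a bfloat16-like value) and a low remainder piece, multiply the
# pieces, and recombine the partial products into an approximate product.
import struct

def to_bf16(x):
    bits = struct.unpack("<I", struct.pack("<f", float(x)))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

def split(x):
    hi = to_bf16(x)
    lo = to_bf16(x - hi)
    return hi, lo

def mul_via_bf16(a, b):
    ah, al = split(a)
    bh, bl = split(b)
    # Three lower-precision products approximate the full-precision product.
    return ah * bh + ah * bl + al * bh

a, b = 1.2345678, 7.654321
print(mul_via_bf16(a, b), a * b)   # approximation vs. the exact product
```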
  • Patent number: 11068784
    Abstract: Systems and methods for performing a quantization of artificial neural networks (ANNs) are provided. An example method may include receiving a description of an ANN and input data associated with the ANN, wherein the input data are represented according to a first data type; selecting a first value interval of the first data type to be mapped to a second value interval of a second data type; performing, based on the input data and the description of the ANN, the computations of one or more neurons of the ANN, wherein the computations are performed for at least one value within the second value interval, the value being a result of mapping a value of the first value interval to a value of the second value interval; determining a measure of saturations in neurons of the ANN; and adjusting, based on the measure of saturations, the value intervals.
    Type: Grant
    Filed: January 26, 2019
    Date of Patent: July 20, 2021
    Assignee: MIPSOLOGY SAS
    Inventors: Benoit Chappet de Vangel, Vincent Moutoussamy, Ludovic Larzul
  • Patent number: 11023801
    Abstract: The present application discloses a data processing method and apparatus. A specific implementation of the method includes: receiving floating point data sent from an electronic device; converting the received floating point data into fixed point data according to a data length and a value range of the received floating point data; performing calculation on the obtained fixed point data according to a preset algorithm to obtain result data in a fixed point form; and converting the obtained result data in the fixed point form into result data in a floating point form and sending the result data in the floating point form to the electronic device. This implementation improves the data processing efficiency.
    Type: Grant
    Filed: June 9, 2017
    Date of Patent: June 1, 2021
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Jian Ouyang, Wei Qi, Yong Wang, Lin Liu
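The round trip above (floating point in, fixed-point compute, floating point out) reduces to a quantize/compute/dequantize pattern; the Q-format width below is an arbitrary choice for illustration:
```python
# Minimal sketch: convert floats to fixed point, run the computation on
# integers, then convert the result back to floating point.
def to_fixed(values, frac_bits):
    return [int(round(v * (1 << frac_bits))) for v in values]

def to_float(value, frac_bits):
    return value / (1 << frac_bits)

frac_bits = 12
a = to_fixed([0.5, 1.25, -0.75], frac_bits)
b = to_fixed([2.0, 0.5, 4.0], frac_bits)
# The product of two values with frac_bits fractional bits carries 2*frac_bits.
acc = sum(x * y for x, y in zip(a, b))
print(to_float(acc, 2 * frac_bits))   # -1.375
```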
  • Patent number: 10990390
    Abstract: An instruction generates a value for use in processing within a computing environment. The instruction obtains a sign control associated with the instruction, and shifts an input value of the instruction in a specified direction by a selected amount to provide a result. The result is placed in a first designated location in a register, and the sign, which is based on the sign control, is placed in a second designated location of the register. The result and the sign provide a signed value to be used in processing within the computing environment.
    Type: Grant
    Filed: August 19, 2019
    Date of Patent: April 27, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jonathan D. Bradbury, Reid T. Copeland, Silvia Melitta Mueller
  • Patent number: 10936285
    Abstract: Processing circuitry may support processing of anchored-data values comprising one or more anchored-data elements which represent portions of bits of a two's complement number. The anchored-data processing may depend on anchor information indicating at least one property indicative of a numeric range representable by the result anchored-data element or the anchored-data value. When the operation causes an overflow or an underflow, usage information may be stored indicating a cause of the overflow or underflow and/or an indication of how to update the anchor information and/or number of elements in the anchored-data value to prevent the overflow or underflow. This can support dynamic range adjustment in software algorithms which involve anchored-data processing.
    Type: Grant
    Filed: February 6, 2019
    Date of Patent: March 2, 2021
    Assignee: Arm Limited
    Inventors: David Raymond Lutz, Neil Burgess, Christopher Neal Hinds
  • Patent number: 10936284
    Abstract: Aspects for neural network operations with floating-point number of short bit length are described herein. The aspects may include a neural network processor configured to process one or more floating-point numbers to generate one or more process results. Further, the aspects may include a floating-point number converter configured to convert the one or more process results in accordance with at least one format of shortened floating-point numbers. The floating-point number converter may include a pruning processor configured to adjust a length of a mantissa field of the process results and an exponent modifier configured to adjust a length of an exponent field of the process results in accordance with the at least one format.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: March 2, 2021
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Tianshi Chen, Shaoli Liu, Qi Guo, Yunji Chen
  • Patent number: 10891109
    Abstract: An arithmetic processor includes a plurality of arithmetic circuits that individually execute an arithmetic operation for fixed point data; and at least one of first and second statistical information is acquired regarding a plurality of fixed point data that are results of arithmetic operation executed by the plurality of arithmetic circuits. The first statistical information is obtained by accumulating a bit pattern, which is obtained by setting a flag bit to each of bit positions corresponding to a range from a least-significant-bit position to a highest-order bit position for each of the digits corresponding to the bit positions, and the second statistical information is obtained by accumulating a bit pattern, which is obtained by setting a flag bit to each of bit positions corresponding to a range from the position of the sign bit to a lowest-order-bit position for each of the digits corresponding to the bit positions.
    Type: Grant
    Filed: October 12, 2018
    Date of Patent: January 12, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Katsuhiro Yoda, Makiko Ito
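A simplified reading of the first kind of statistical information: for every fixed-point result, note the position of its highest-order significant bit and accumulate per-position counts, which is what software needs to choose a good decimal-point position. The patent's exact flag-bit accumulation is more elaborate; this sketch only conveys the idea:
```python
# Simplified sketch: accumulate, per bit position, how many results have their
# highest-order significant bit at that position.
def leading_bit_position(value, width=16):
    mag = value if value >= 0 else ~value      # significant bits of a two's complement value
    for pos in range(width - 1, -1, -1):
        if (mag >> pos) & 1:
            return pos
    return 0

def accumulate_statistics(results, width=16):
    counts = [0] * width
    for r in results:
        counts[leading_bit_position(r, width)] += 1
    return counts

print(accumulate_statistics([3, 100, -5, 2047, 16]))   # histogram over bit positions
```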
  • Patent number: 10886942
    Abstract: A binary logic circuit converts a number in floating point format having an exponent E, an exponent bias B = 2^(ew-1) - 1, and a significand comprising a mantissa M of mw bits into a fixed point format with an integer width of iw bits and a fractional width of fw bits.
    Type: Grant
    Filed: July 16, 2020
    Date of Patent: January 5, 2021
    Assignee: Imagination Technologies Limited
    Inventor: Kenneth Rovers
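The float-to-fixed conversion that this and the related Imagination entries describe (an integer width of iw bits and a fractional width of fw bits) corresponds to the software model below; the rounding and saturation behavior are assumptions, not the circuit's exact definition:
```python
# Minimal sketch: convert a float to fixed point with iw integer bits and fw
# fractional bits, saturating at the ends of the representable signed range.
def float_to_fixed(x, iw, fw):
    scaled = int(round(x * (1 << fw)))
    lo = -(1 << (iw + fw - 1))
    hi = (1 << (iw + fw - 1)) - 1
    return max(lo, min(hi, scaled))            # saturate to the fixed-point range

print(float_to_fixed(3.14159, iw=8, fw=8))     # 804   (~3.1406 in Q8.8)
print(float_to_fixed(1000.0, iw=8, fw=8))      # 32767 (saturated)
```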
  • Patent number: 10778245
    Abstract: Systems, apparatuses, and methods related to bit string conversion are described. Circuitry can perform operations on bit strings, such as universal number and/or posit bit strings, to alter a level of precision (e.g., a dynamic range, resolution, etc.) of the bit strings. For instance, bit string conversion can include receiving, by a memory resource coupled to logic circuitry, a first bit string having a first bit string length. The first bit string can comprise a first bit sub-set, a second bit sub-set, a third bit sub-set, and a fourth bit sub-set. The logic circuitry can monitor numerical values corresponding to at least one bit sub-set of the bit string to determine a dynamic range corresponding to the data and/or precision corresponding to the data, and generate a second bit string having a second bit string length based, at least in part, on the determined dynamic range of the data and/or the precision of the data.
    Type: Grant
    Filed: August 20, 2019
    Date of Patent: September 15, 2020
    Assignee: Micron Technology, Inc.
    Inventors: Vijay S. Ramesh, Katie Blomster Park
  • Patent number: 10763891
    Abstract: Embodiments of an instruction, its operation, and executional support for the instruction are described. In some embodiments, a processor comprises decode circuitry to decode an instruction having fields for an opcode, a packed data source operand identifier, and a packed data destination operand identifier; and execution circuitry to execute the decoded instruction to convert a single precision floating point data element of a least significant packed data element position of the identified packed data source operand to a fixed-point representation, store the fixed-point representation as a 32-bit integer and a 32-bit integer exponent in the two least significant packed data element positions of the identified packed data destination operand, and zero all remaining packed data elements of the identified packed data destination operand.
    Type: Grant
    Filed: March 4, 2019
    Date of Patent: September 1, 2020
    Assignee: Intel Corporation
    Inventors: Venkateswara Madduri, Elmoustapha Ould-Ahmed-Vall, Robert Valentine, Jesus Corbal, Mark Charney
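The fixed-point representation above (a 32-bit integer plus a 32-bit integer exponent) is close in spirit to a frexp/ldexp pair; the scaling chosen below is illustrative rather than the instruction's exact encoding:
```python
# Minimal sketch: represent a floating-point value as an integer mantissa and
# an integer exponent, and reconstruct it from that pair.
import math

def float_to_int_exp(x):
    mantissa, exponent = math.frexp(x)         # x == mantissa * 2**exponent, |mantissa| in [0.5, 1)
    int_mantissa = int(mantissa * (1 << 31))   # keep 31 significant bits plus the sign
    return int_mantissa, exponent - 31

def back_to_float(int_mantissa, exponent):
    return math.ldexp(int_mantissa, exponent)

m, e = float_to_int_exp(6.25)
print(m, e, back_to_float(m, e))               # 1677721600 -28 6.25
```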
  • Patent number: 10756754
    Abstract: A binary logic circuit converts a number in floating point format having an exponent E, an exponent bias B = 2^(ew-1) - 1, and a significand comprising a mantissa M of mw bits into a fixed point format with an integer width of iw bits and a fractional width of fw bits.
    Type: Grant
    Filed: January 9, 2020
    Date of Patent: August 25, 2020
    Assignee: Imagination Technologies Limited
    Inventor: Kenneth Rovers
  • Patent number: 10725780
    Abstract: Machine instructions, referred to herein as a long Convert from Zoned instruction (CDZT) and extended Convert from Zoned instruction (CXZT), are provided that read EBCDIC or ASCII data from memory, convert it to the appropriate decimal floating point format, and write it to a target floating point register or floating point register pair. Further, machine instructions, referred to herein as a long Convert to Zoned instruction (CZDT) and extended Convert to Zoned instruction (CZXT), are provided that convert a decimal floating point (DFP) operand in a source floating point register or floating point register pair to EBCDIC or ASCII data and store it to a target memory location.
    Type: Grant
    Filed: March 29, 2016
    Date of Patent: July 28, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Steven R. Carlough, Reid T. Copeland, Charles W. Gainey, Jr., Marcel Mitran, Eric M. Schwarz, Timothy J. Slegel
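A much-simplified sketch of the convert-from-zoned direction for ASCII input: each byte contributes one decimal digit from its low nibble. Sign handling, the EBCDIC variant, and the decimal floating point target encoding are all omitted, so this is not the instruction's exact definition:
```python
# Simplified sketch: assemble the low nibble of each zoned ASCII byte into a
# decimal value (unsigned, ASCII-only).
from decimal import Decimal

def convert_from_zoned_ascii(data: bytes) -> Decimal:
    digits = "".join(str(b & 0x0F) for b in data)   # low nibble of each zoned byte
    return Decimal(digits)

print(convert_from_zoned_ascii(b"012345"))          # 12345
```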
  • Patent number: 10726514
    Abstract: One embodiment provides a general-purpose graphics processing unit comprising a dynamic precision floating-point unit including a control unit having precision tracking hardware logic to track an available number of bits of precision for computed data relative to a target precision, wherein the dynamic precision floating-point unit includes computational logic to output data at multiple precisions.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: July 28, 2020
    Assignee: Intel Corporation
    Inventors: Elmoustapha Ould-Ahmed-Vall, Sara S. Baghsorkhi, Anbang Yao, Kevin Nealis, Xiaoming Chen, Altug Koker, Abhishek R. Appu, John C. Weast, Mike B. Macpherson, Dukhwan Kim, Linda L. Hurd, Ben J. Ashbaugh, Barath Lakshmanan, Liwei Ma, Joydeep Ray, Ping T. Tang, Michael S. Strickland
  • Patent number: 10691411
    Abstract: A binary logic circuit converts a number in floating point format having an exponent E of ew bits, an exponent bias B given by B = 2^(ew-1) - 1, and a significand comprising a mantissa M of mw bits into a fixed point format with an integer width of iw bits and a fractional width of fw bits. The circuit includes a shifter operable to receive a significand input comprising a contiguous set of the most significant bits of the significand and configured to left-shift the significand input by a number of bits equal to the value represented by k least significant bits of the exponent to generate a shifter output, wherein min{(ew-1), bitwidth(iw-2-sy)} ≤ k ≤ (ew-1), where sy = 1 for a signed floating point number and sy = 0 for an unsigned floating point number, and a multiplexer coupled to the shifter and configured to: receive an input comprising a contiguous set of bits of the shifter output; and output the input if the most significant bit of the exponent is equal to one.
    Type: Grant
    Filed: January 4, 2020
    Date of Patent: June 23, 2020
    Assignee: Imagination Technologies Limited
    Inventor: Kenneth Rovers
  • Patent number: 10684854
    Abstract: An embodiment of the invention is a processor including execution circuitry to, in response to a decoded instruction, convert a half-precision floating-point value to a single-precision floating-point value and store the single-precision floating-point value in each of the plurality of element locations of a destination register. The processor also includes a decoder and the destination register. The decoder is to decode an instruction to generate the decoded instruction.
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: June 16, 2020
    Assignee: Intel Corporation
    Inventors: Robert Valentine, Mark Charney, Raanan Sade, Elmoustapha Ould-Ahmed-Vall, Jesus Corbal
  • Patent number: 10664236
    Abstract: Low precision computers can be efficient at finding possible answers to search problems. However, sometimes the task demands finding better answers than a single low precision search. A computer system augments low precision computing with a small amount of high precision computing, to improve search quality with little additional computing.
    Type: Grant
    Filed: September 16, 2019
    Date of Patent: May 26, 2020
    Assignee: Singular Computing LLC
    Inventor: Joseph Bates
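The idea of augmenting low-precision computing with a small amount of high-precision computing can be mimicked in ordinary software: scan many candidates cheaply at low precision, then re-score only a shortlist at high precision. The objective function and candidate grid below are invented for illustration:
```python
# Minimal sketch: a cheap low-precision pass over many candidates, followed by
# a small high-precision pass over the most promising ones.
import numpy as np

def score(x):
    # The same objective, evaluated at whatever precision x carries.
    return np.abs(x * x - x.dtype.type(2.0))       # minimized near sqrt(2)

candidates = np.linspace(0.0, 2.0, 20001)          # float64 grid of possible answers
coarse = score(candidates.astype(np.float16))      # low-precision search
shortlist = candidates[np.argsort(coarse)[:32]]    # keep a few promising answers
best = shortlist[np.argmin(score(shortlist))]      # small high-precision refinement
print(best, np.sqrt(2.0))                          # ~1.4142 vs 1.41421356...
```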
  • Patent number: 10606588
    Abstract: A Set Boolean machine instruction is provided that has associated therewith a result location to be used for a set Boolean operation and a mask. The mask is configured to test a plurality of types of conditions, including simple conditions and composite conditions. The machine instruction is executed, and the executing includes performing a first logical operation between the mask and contents of a selected field to obtain an output. The mask indicates a condition to be tested, and the condition is one type of condition of the plurality of types of conditions. The executing further includes performing a second logical operation on the output to obtain a first value represented as one data type, and placing a result in the result location based on the first value. The result includes a second value of another data type, the other data type being different from the one data type.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: March 31, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Brett Olsson
  • Patent number: 10564932
    Abstract: The invention introduces a method for calculating floating-point operands, which contains at least the following steps: receiving an FP (floating-point) operand in a first format from a source register, wherein the first format is one of a group of first formats of different kinds; converting the FP operand in the first format into an FP operand in a second format; generating a calculation result in the second format by calculating the FP operand in the second format; converting the calculation result in the second format into a calculation result in the first format; and writing back the calculation result in the first format.
    Type: Grant
    Filed: February 7, 2017
    Date of Patent: February 18, 2020
    Assignee: VIA ALLIANCE SEMICONDUCTOR CO., LTD.
    Inventors: Zhi Zhang, Jing Chen
  • Patent number: 10558428
    Abstract: A binary logic circuit converts a number in floating point format having an exponent E of ew bits, an exponent bias B given by B = 2^(ew-1) - 1, and a significand comprising a mantissa M of mw bits into a fixed point format with an integer width of iw bits and a fractional width of fw bits.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: February 11, 2020
    Assignee: Imagination Technologies Limited
    Inventor: Kenneth Rovers
  • Patent number: 10560115
    Abstract: A binary logic circuit converts a number in floating point format having an exponent E, an exponent bias B = 2^(ew-1) - 1, and a significand comprising a mantissa M of mw bits into a fixed point format with an integer width of iw bits and a fractional width of fw bits.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: February 11, 2020
    Assignee: Imagination Technologies Limited
    Inventor: Kenneth Rovers
  • Patent number: 10514891
    Abstract: Methods, systems, and apparatus, including an apparatus for adding three or more floating-point numbers. In one aspect, a method includes receiving, for each of three or more operands, a set of bits that include a floating-point representation of the operand. A given operand is identified. For each other operand, the mantissa bits of the operand are shifted such that the bits of the operand align with the bits of the given operand. A sticky bit for each other operand is determined. An overall sticky bit value is determined based on each sticky bit. The overall sticky bit value is zero whenever all of the sticky bits are zero or at least two sticky bits are non-zero and do not match. The overall sticky bit value matches the value of each non-zero sticky bit whenever all of the non-zero sticky bits match or there is only one non-zero sticky bit.
    Type: Grant
    Filed: June 7, 2019
    Date of Patent: December 24, 2019
    Assignee: Google LLC
    Inventors: Hsin-Jung Yang, Andrew Everett Phelps
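The overall-sticky-bit rule quoted in the abstract can be transcribed directly; treating each per-operand sticky value as 0, +1, or -1 (a signed sticky) is an assumption made here so that non-zero sticky bits can meaningfully "match" or not:
```python
# Direct transcription of the overall sticky bit rule described in the abstract.
def overall_sticky(sticky_bits):
    nonzero = [s for s in sticky_bits if s != 0]
    if not nonzero:
        return 0                    # all sticky bits are zero
    if all(s == nonzero[0] for s in nonzero):
        return nonzero[0]           # all non-zero sticky bits match, or only one exists
    return 0                        # at least two non-zero sticky bits do not match

print(overall_sticky([0, 0, 0]))      # 0
print(overall_sticky([0, 1, 1]))      # 1
print(overall_sticky([0, 1, -1]))     # 0
```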
  • Patent number: 10489152
    Abstract: Embodiments are directed to a computer implemented method for executing machine instructions in a central processing unit. The executing includes loading a first operand into a first operand register, and loading a second operand into a second operand register. The executing further includes shifting either the first operand or the second operand to form a shifted operand. The executing further includes adding or subtracting the first operand and the second operand to obtain a sum or a difference, and loading the sum or the difference having a least significant bit into a third register or a memory. The executing further includes performing a probability analysis on least significant bits of the shifted operand or the non-shifted operand, and initiating a rounding operation on the least significant bit of the sum or the difference based at least in part on the probability analysis.
    Type: Grant
    Filed: January 28, 2016
    Date of Patent: November 26, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jonathan D. Bradbury, Steven R. Carlough, Brian R. Prasky, Eric M. Schwarz
  • Patent number: 10489153
    Abstract: Embodiments are directed to a computer implemented method for executing machine instructions in a central processing unit. The executing includes loading a first operand into a first operand register, and loading a second operand into a second operand register. The executing further includes shifting either the first operand or the second operand to form a shifted operand. The executing further includes adding or subtracting the first operand and the second operand to obtain a sum or a difference, and loading the sum or the difference having a least significant bit into a third register or a memory. The executing further includes performing a probability analysis on least significant bits of the shifted operand or the non-shifted operand, and initiating a rounding operation on the least significant bit of the sum or the difference based at least in part on the probability analysis.
    Type: Grant
    Filed: February 14, 2017
    Date of Patent: November 26, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jonathan D. Bradbury, Steven R. Carlough, Brian R. Prasky, Eric M. Schwarz
  • Patent number: 10491239
    Abstract: A computational device includes an input memory, which receives a first array of input numbers having a first precision represented by N bits. An output memory stores a second array of output numbers having a second precision represented by M bits, M<N. Quantization logic reads the input numbers from the input memory, extracts from each input number a set of M bits, at a bit offset within the input number that is indicated by a quantization factor, and writes a corresponding output number based on the extracted set of bits to the second array in the output memory. A quantization controller sets the quantization factor so as to optimally fit an available range of the output numbers in the second array to an actual range of the input numbers in the first array in extraction of the M bits from the input numbers.
    Type: Grant
    Filed: January 30, 2018
    Date of Patent: November 26, 2019
    Assignee: Habana Labs Ltd.
    Inventor: Itay Hubara
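The quantization step extracts an M-bit window from each N-bit input at a bit offset given by the quantization factor. In the sketch below the fitting rule (offset chosen from the largest input magnitude) and the names are assumptions for illustration:
```python
# Minimal sketch: choose a bit offset that lets the largest input fit in M bits,
# then extract an M-bit window from every N-bit input at that offset.
def choose_offset(values, n_bits, m_bits):
    peak = max(abs(v) for v in values)
    needed = peak.bit_length() + 1                 # +1 for the sign bit
    return max(0, min(needed - m_bits, n_bits - m_bits))

def quantize(values, n_bits=32, m_bits=8):
    offset = choose_offset(values, n_bits, m_bits)
    return [v >> offset for v in values], offset

outs, offset = quantize([37, 1900, -42000, 131071])
print(offset, outs)    # 10 [0, 1, -42, 127]
```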
  • Patent number: 10423626
    Abstract: According to one embodiment, a translation component is configured to operate on document encoded data to translate the document encoded data into a canonical format comprising a plurality of canonical types that fold together into a byte stream. The translation component is configured to accept any storage format of data (e.g., column store, row store, LSM tree, etc. and/or data from any storage engine, WIREDTIGER, MMAP, AR tree, Radix tree, etc.) and translate that data into a byte stream to enable efficient comparison. When executing searches and using the translated data to provide comparisons there is necessarily a trade-off based on the cost of translating the data and how much the translated data can be leveraged to increase comparison efficiency.
    Type: Grant
    Filed: December 23, 2016
    Date of Patent: September 24, 2019
    Assignee: MongoDB, Inc.
    Inventors: Mathias Benjamin Stearn, Eliot Horowitz, Geert Bosch
  • Patent number: 10365893
    Abstract: The disclosure relates to technology for generating a data set comprising random numbers that are distributed by a multivariate population distribution. A set of empirical cumulative distribution functions are constructed from a collection of multidimensional random samples of the multivariate population, where each empirical cumulative distribution function is constructed from observations of a random variable. A number of multidimensional sample points are sampled from the collection of multidimensional random samples and the number of multidimensional sample points are each replaced with random neighbors to generate cloned data.
    Type: Grant
    Filed: March 30, 2017
    Date of Patent: July 30, 2019
    Assignee: Futurewei Technologies, Inc.
    Inventors: Jiangsheng Yu, Shijun Ma
  • Patent number: 10365892
    Abstract: Processing within a computing environment is facilitated. An operand of an instruction is obtained, which includes decimal floating point data encoded in a compressed format. An operation is performed on the operand absent decompressing a source value of a trailing significand of the decimal floating point data in the compressed format.
    Type: Grant
    Filed: January 16, 2017
    Date of Patent: July 30, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Steven R. Carlough, Petra Leber, Silvia Melitta Mueller, Kerstin Schelm
  • Patent number: 10353862
    Abstract: A neural network unit includes a random bit source that generates random bits and a plurality of neural processing units (NPUs). Each NPU includes an accumulator into which the NPU accumulates a plurality of products as an accumulated value, and a rounder that receives the random bits from the random bit source and stochastically rounds the accumulated value based on a random bit received from the random bit source.
    Type: Grant
    Filed: April 5, 2016
    Date of Patent: July 16, 2019
    Assignee: VIA ALLIANCE SEMICONDUCTOR CO., LTD.
    Inventors: G. Glenn Henry, Terry Parks
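Stochastic rounding as described here rounds an accumulated value up with a probability equal to its fractional part, so the rounding is unbiased on average. A software stand-in, with Python's random module replacing the hardware random bit source:
```python
# Minimal sketch of stochastically rounding an accumulated fixed-point value.
import random

def stochastic_round(accumulated, frac_bits, rng=random):
    scale = 1 << frac_bits
    floor_part, remainder = divmod(accumulated, scale)
    # Round up with probability remainder/scale.
    return floor_part + (1 if rng.randrange(scale) < remainder else 0)

random.seed(0)
samples = [stochastic_round(10 * 256 + 64, frac_bits=8) for _ in range(10000)]
print(sum(samples) / len(samples))    # close to 10.25
```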
  • Patent number: 10331404
    Abstract: Apparatus for processing data includes processing circuitry 16, 18, 20, 22, 24, 26 and decoder circuitry 14 for decoding program instructions. The program instructions decoded include a floating point pre-conversion instruction which performs round-to-nearest ties to even rounding upon the mantissa field of an input floating point number to generate an output floating point number with the same mantissa length but with the mantissa rounded to a position corresponding to a shorter mantissa field. The output mantissa field includes a suffix of zero values concatenated with the rounded value. The decoder circuitry 14 is also responsive to an integer pre-conversion instruction to quantise an input integer value using round-to-nearest ties to even rounding to form an output integer operand with a number of significant bits matched to the mantissa size of a floating point number to which the integer is later to be converted using an integer-to-floating point conversion instruction.
    Type: Grant
    Filed: December 29, 2014
    Date of Patent: June 25, 2019
    Assignee: ARM Limited
    Inventors: Jorn Nystad, Andreas Due Engh-Halstvedt, Simon Alex Charles
  • Patent number: 10303478
    Abstract: Machine instructions, referred to herein as a long Convert from Zoned instruction (CDZT) and extended Convert from Zoned instruction (CXZT), are provided that read EBCDIC or ASCII data from memory, convert it to the appropriate decimal floating point format, and write it to a target floating point register or floating point register pair. Further, machine instructions, referred to herein as a long Convert to Zoned instruction (CZDT) and extended Convert to Zoned instruction (CZXT), are provided that convert a decimal floating point (DFP) operand in a source floating point register or floating point register pair to EBCDIC or ASCII data and store it to a target memory location.
    Type: Grant
    Filed: March 29, 2016
    Date of Patent: May 28, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Steven R. Carlough, Reid T. Copeland, Charles W. Gainey, Jr., Marcel Mitran, Eric M. Schwarz, Timothy J. Slegel
  • Patent number: 10303438
    Abstract: A floating-point unit, configured to implement a fused-multiply-add operation on three 128 bit wide operands is provided, which includes a 113×113-bit multiplier; a left shifter; a right shifter; a select circuit including a 3-to-2 compressor; an adder connected to the dataflow from the select circuit; a first feedback path connecting a carry output of the adder to the select circuit; a second feedback path connecting the output of the adder to the shifters for passing an intermediate wide result through the shifters.
    Type: Grant
    Filed: January 16, 2017
    Date of Patent: May 28, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Tina Babinsky, Udo Krautz, Klaus M. Kroener, Silvia M. Mueller, Andreas Wagner
  • Patent number: 10296344
    Abstract: Machine instructions, referred to herein as a long Convert from Zoned instruction (CDZT) and extended Convert from Zoned instruction (CXZT), are provided that read EBCDIC or ASCII data from memory, convert it to the appropriate decimal floating point format, and write it to a target floating point register or floating point register pair. Further, machine instructions, referred to herein as a long Convert to Zoned instruction (CZDT) and extended Convert to Zoned instruction (CZXT), are provided that convert a decimal floating point (DFP) operand in a source floating point register or floating point register pair to EBCDIC or ASCII data and store it to a target memory location.
    Type: Grant
    Filed: March 28, 2016
    Date of Patent: May 21, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Steven R. Carlough, Reid T. Copeland, Charles W. Gainey, Jr., Marcel Mitran, Eric M. Schwarz, Timothy J. Slegel
  • Patent number: 10275215
    Abstract: A method of operating a data processing system when determining a b-bit unsigned normalized integer representation U of a number x is disclosed. When the number x has a value between 0 and 1, the method comprises determining the integer part I of (x×2^b), and determining whether to use the integer part I, an incremented version of the integer part I, or a decremented version of the integer part I for the unsigned normalized integer representation U of the number x based on a comparison that uses the fractional part F of (x×2^b) and the number x.
    Type: Grant
    Filed: August 3, 2016
    Date of Patent: April 30, 2019
    Assignee: Arm Limited
    Inventor: Toni Viki Brkic
  • Patent number: 10209958
    Abstract: A method for generating a random number for use in a stochastic rounding operation is provided. The method includes executing an instruction that causes at least two operands to produce an intermediate result and incrementing a state of a random number generator. The method further includes causing the random number generator to generate a random number in accordance with the state and producing a final result by utilizing the random number to determine a rounding of the intermediate result.
    Type: Grant
    Filed: July 23, 2018
    Date of Patent: February 19, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jonathan D. Bradbury, Steven R. Carlough, Brian R. Prasky, Eric M. Schwarz
  • Patent number: 10163258
    Abstract: A tessellation method and apparatus are provided, where the tessellation method includes receiving a first value that is calculated in performing tessellation, the first value being a first floating-point real number represented by a first exponent and a first mantissa; determining a second precision of the first mantissa on the basis of a value of the first exponent and a first precision; and adjusting the first mantissa to have the second precision.
    Type: Grant
    Filed: July 9, 2015
    Date of Patent: December 25, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jeongsoo Park, Kwontaek Kwon, Wonjong Lee