Patents by Inventor Uri SHERMAN
Uri SHERMAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

Publication number: 20240045684
Abstract: Techniques for converting FP16 to BF8 using bias are described.
Type: Application
Filed: October 1, 2022
Publication date: February 8, 2024
Inventors: Alexander Heinecke, Menachem Adelman, Mark Charney, Evangelos Georganas, Amit Gradstein, Christopher Hughes, Naveen Mellempudi, Simon Rubanovich, Uri Sherman, Zeev Sperber, Robert Valentine
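The per-element FP16-to-BF8 conversion this abstract summarizes can be sketched in Python. BF8 (bfloat8, E5M2) shares FP16's 5-bit exponent, so conversion amounts to truncating the 10-bit fraction to 2 bits; the "bias" in the title is assumed here to be the round-to-nearest-even rounding bias added before the shift. The function name is illustrative, and NaN/overflow handling is omitted.

```python
def fp16_to_bf8_rne(h: int) -> int:
    """Convert a 16-bit FP16 bit pattern to an 8-bit BF8 (E5M2) pattern.

    BF8 keeps FP16's sign and 5-bit exponent, so the conversion drops the
    low 8 fraction bits, rounding to nearest-even by adding a bias first
    (a sketch, assuming this is the "bias" the patent title refers to).
    """
    lsb = (h >> 8) & 1        # lowest fraction bit that survives truncation
    bias = 0x7F + lsb         # round-to-nearest-even bias
    return ((h + bias) >> 8) & 0xFF
```

For example, FP16 1.0 (0x3C00) maps to BF8 0x3C, and a halfway case such as 0x3C80 rounds to the even result 0x3C.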

Publication number: 20240045681
Abstract: Techniques for comparing FP8 data elements are described. An exemplary FP8 comparison instruction includes fields for an opcode, an identification of a location of a first packed data source operand, and an identification of a location of a second packed data source operand, wherein the opcode is to indicate that execution circuitry is to perform, for a particular data element position of the packed data source operands, a comparison of the data elements at that position, and update a flags register based on the comparison.
Type: Application
Filed: October 1, 2022
Publication date: February 8, 2024
Inventors: Alexander Heinecke, Menachem Adelman, Evangelos Georganas, Amit Gradstein, Christopher Hughes, Naveen Mellempudi, Simon Rubanovich, Uri Sherman, Zeev Sperber

Publication number: 20240045683
Abstract: Techniques for performing square root or reciprocal square root calculations on FP8 data elements in response to an instruction are described. An example of an instruction is one that includes fields for an opcode, an identification of a location of a packed data source operand, and an identification of a packed data destination operand, wherein the opcode is to indicate that execution circuitry is to perform, for each data element position of the packed data source operand, a calculation of a square root value of a FP8 data element in that position and store a result of each square root into a corresponding data element position of the packed data destination operand.
Type: Application
Filed: October 1, 2022
Publication date: February 8, 2024
Inventors: Alexander Heinecke, Menachem Adelman, Evangelos Georganas, Amit Gradstein, Christopher Hughes, Naveen Mellempudi, Simon Rubanovich, Uri Sherman, Zeev Sperber
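The per-element square root described above can be modeled in Python by decoding each FP8 element, taking the square root, and re-encoding. The E5M2 format and the brute-force nearest re-encoding are assumptions for illustration; hardware would use a direct rounding path, and the function names are hypothetical.

```python
import math

def fp8_e5m2_to_float(b: int) -> float:
    """Decode an 8-bit E5M2 pattern (assumed format) to a Python float."""
    s = -1.0 if b & 0x80 else 1.0
    e, m = (b >> 2) & 0x1F, b & 0x3
    if e == 0x1F:
        return float('nan') if m else s * float('inf')
    if e == 0:
        return s * m * 2.0 ** -16          # subnormal: (m/4) * 2**-14
    return s * (1 + m / 4.0) * 2.0 ** (e - 15)

def float_to_fp8_e5m2(x: float) -> int:
    # Brute-force nearest finite encoding -- illustrative, not how hardware rounds.
    finite = [c for c in range(256) if math.isfinite(fp8_e5m2_to_float(c))]
    return min(finite, key=lambda c: abs(fp8_e5m2_to_float(c) - x))

def vsqrt_fp8(src: list[int]) -> list[int]:
    """Per-element square root over a packed FP8 source, as the abstract describes."""
    return [float_to_fp8_e5m2(math.sqrt(fp8_e5m2_to_float(b))) for b in src]
```

For instance, the code for 4.0 (0x44) maps to the code for 2.0 (0x40).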

Publication number: 20240045688
Abstract: Techniques for performing FP8 FMA in response to an instruction are described. In some examples, an instruction has fields for an opcode, an identification of a location of a packed data source/destination operand (a first source), an identification of a location of a second packed data source operand, and an identification of a location of a third packed data source operand, wherein the opcode is to indicate operand ordering and that execution circuitry is to, per data element position, perform a FP8 value fused multiply-accumulate operation using the first, second, and third source operands and store a result in a corresponding data element position of the source/destination operand, wherein the FP8 value has an 8-bit floating point format that comprises one bit for a sign, at least 4 bits for an exponent, and at least two bits for a fraction.
Type: Application
Filed: October 1, 2022
Publication date: February 8, 2024
Inventors: Alexander Heinecke, Menachem Adelman, Evangelos Georganas, Amit Gradstein, Christopher Hughes, Naveen Mellempudi, Simon Rubanovich, Uri Sherman, Zeev Sperber

Publication number: 20240045677
Abstract: Techniques for converting FP16 or FP32 data elements to FP8 data elements using a single instruction are described. An exemplary apparatus includes decoder circuitry to decode a single instruction, the single instruction to include one or more fields to identify a source operand, one or more fields to identify a destination operand, and one or more fields for an opcode, the opcode to indicate that execution circuitry is to convert packed half-precision floating-point data or single-precision floating-point data from the identified source to packed FP8 data and store the packed FP8 data into corresponding data element positions of the identified destination operand; and execution circuitry to execute the decoded instruction according to the opcode.
Type: Application
Filed: October 1, 2022
Publication date: February 8, 2024
Inventors: Alexander Heinecke, Menachem Adelman, Mark Charney, Evangelos Georganas, Amit Gradstein, Christopher Hughes, Naveen Mellempudi, Simon Rubanovich, Uri Sherman, Zeev Sperber, Robert Valentine

Publication number: 20240045654
Abstract: Techniques for performing arithmetic operations on FP8 values are described. An exemplary instruction includes fields for an opcode, an identification of a location of a first packed data source operand, an identification of a location of a second packed data source operand, and an identification of a location of a packed data destination operand, wherein the opcode is to indicate an arithmetic operation execution circuitry is to perform, for each data element position of the identified packed data source operands, the arithmetic operation on FP8 data elements in that data element position in FP8 format and store a result of each arithmetic operation into a corresponding data element position of the identified packed data destination operand.
Type: Application
Filed: October 1, 2022
Publication date: February 8, 2024
Inventors: Alexander Heinecke, Menachem Adelman, Evangelos Georganas, Amit Gradstein, Christopher Hughes, Naveen Mellempudi, Simon Rubanovich, Uri Sherman, Zeev Sperber

Publication number: 20240045691
Abstract: Systems, methods, and apparatuses relating to 8-bit floating-point matrix dot product instructions are described.
Type: Application
Filed: October 1, 2022
Publication date: February 8, 2024
Inventors: Naveen Mellempudi, Menachem Adelman, Evangelos Georganas, Amit Gradstein, Christopher Hughes, Alexander Heinecke, Simon Rubanovich, Uri Sherman, Zeev Sperber

Publication number: 20240045682
Abstract: Techniques for scale and reduction of FP8 data elements are described. An exemplary instruction includes fields for an opcode, an identification of a location of a first packed data source operand, an identification of a location of a second packed data source operand, and an identification of a packed data destination operand, wherein the opcode is to indicate that execution circuitry is to perform, for each data element position of the packed data source operands, a floating point scale operation of a FP8 data element of the first packed data source by multiplying the data element by a power of 2 value, wherein a value of the exponent of the power of 2 value is a floor value of a FP8 data element of the second packed data source, and store a result of the floating point scale operation into a corresponding data element position of the packed data destination operand.
Type: Application
Filed: October 1, 2022
Publication date: February 8, 2024
Inventors: Alexander Heinecke, Menachem Adelman, Evangelos Georganas, Amit Gradstein, Christopher Hughes, Naveen Mellempudi, Simon Rubanovich, Uri Sherman, Zeev Sperber
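The scale operation described above multiplies each element of the first source by 2 raised to the floor of the corresponding element of the second source. A minimal sketch, with values shown as Python floats rather than FP8 encodings (an assumption for readability; the function name is illustrative):

```python
import math

def vscalef_fp8(src1: list[float], src2: list[float]) -> list[float]:
    """Per-element scale: src1[i] * 2**floor(src2[i]), as the abstract describes."""
    return [a * 2.0 ** math.floor(b) for a, b in zip(src1, src2)]
```

So an element 1.5 scaled by 3.7 yields 1.5 * 2**3 = 12.0.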

Publication number: 20240045686
Abstract: Techniques for converting FP8 data elements to FP16 or FP32 data elements using a single instruction are described. An example apparatus includes decoder circuitry to decode a single instruction, the single instruction to indicate that execution circuitry is to convert packed FP8 data from the identified source to packed half-precision floating-point data or single-precision floating-point data and store the packed half-precision floating-point data or single-precision floating-point data into corresponding data element positions of the identified destination operand.
Type: Application
Filed: October 1, 2022
Publication date: February 8, 2024
Inventors: Alexander Heinecke, Menachem Adelman, Evangelos Georganas, Amit Gradstein, Christopher Hughes, Naveen Mellempudi, Simon Rubanovich, Uri Sherman, Zeev Sperber

Publication number: 20240045689
Abstract: Disclosed embodiments relate to systems and methods for performing 8-bit floating-point vector dot product instructions. In one example, a processor includes fetch circuitry to fetch an instruction having fields to specify an opcode and locations of first source, second source, and destination vectors, the opcode to indicate execution circuitry is to multiply pairs of 8-bit floating-point formatted elements of the specified first and second sources, and accumulate the resulting products with previous contents of a corresponding single-precision element of the specified destination, decode circuitry to decode the fetched instruction, and execution circuitry to respond to the decoded instruction as specified by the opcode.
Type: Application
Filed: October 1, 2022
Publication date: February 8, 2024
Inventors: Alexander Heinecke, Menachem Adelman, Evangelos Georganas, Amit Gradstein, Christopher Hughes, Naveen Mellempudi, Simon Rubanovich, Uri Sherman, Zeev Sperber
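The dot-product semantics above can be sketched as follows. The E5M2 element format, the grouping of four FP8 pairs per single-precision destination lane, and the function names are all assumptions for illustration; the single-precision accumulator is modeled by rounding through a 32-bit float after each add.

```python
import struct

def fp8_e5m2_to_float(b: int) -> float:
    """Decode an 8-bit E5M2 pattern (assumed format) to a Python float."""
    s = -1.0 if b & 0x80 else 1.0
    e, m = (b >> 2) & 0x1F, b & 0x3
    if e == 0x1F:
        return float('nan') if m else s * float('inf')
    if e == 0:
        return s * m * 2.0 ** -16
    return s * (1 + m / 4.0) * 2.0 ** (e - 15)

def fp32(x: float) -> float:
    # Round to single precision, mimicking a single-precision accumulator.
    return struct.unpack('f', struct.pack('f', x))[0]

def vdp_fp8_ps(dst: list[float], a: list[int], b: list[int], pairs: int = 4) -> list[float]:
    """Multiply FP8 pairs and accumulate into each FP32 destination lane."""
    out = []
    for lane, acc in enumerate(dst):
        for k in range(pairs):
            i = lane * pairs + k
            acc = fp32(acc + fp8_e5m2_to_float(a[i]) * fp8_e5m2_to_float(b[i]))
        out.append(acc)
    return out
```

Accumulating four products of 1.0 (0x3C) and 2.0 (0x40) into a lane holding 1.0 yields 9.0.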

Publication number: 20240045685
Abstract: Systems, methods, and apparatuses relating to sparsity-based FMA. In some examples, an instance of a single FMA instruction has one or more fields for an opcode, one or more fields to identify a source/destination matrix operand, one or more fields to identify a first plurality of source matrix operands, one or more fields to identify a second plurality of matrix operands, wherein the opcode is to indicate that execution circuitry is to select a proper subset of FP8 data elements from the first plurality of source matrix operands based on sparsity controls from a first matrix operand of the second plurality of matrix operands and perform a FMA.
Type: Application
Filed: October 1, 2022
Publication date: February 8, 2024
Inventors: Menachem Adelman, Amit Gradstein, Alexander Heinecke, Christopher Hughes, Naveen Mellempudi, Shahar Mizrahi, Dana Rip, Simon Rubanovich, Uri Sherman, Guy Boudoukh, Evangelos Georganas, Nilesh Jain, Barukh Ziv
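The selection step described above can be illustrated with 2:4 structured sparsity, a common scheme where each group of four positions stores only two nonzero values plus controls naming their positions. This specific scheme and the control representation are assumptions; the abstract only says a proper subset is selected based on sparsity controls.

```python
def expand_2to4(compressed: list[float],
                controls: list[tuple[int, int]]) -> list[float]:
    """Re-expand 2:4-compressed values (assumed scheme): each stored pair
    occupies the two positions named by `controls` within a group of four;
    the remaining positions are zero, so the FMA can skip them."""
    out = [0.0] * (len(compressed) * 2)
    for g in range(len(compressed) // 2):
        vals = compressed[2 * g: 2 * g + 2]
        for v, i in zip(vals, controls[g]):   # controls[g] e.g. (0, 2)
            out[4 * g + i] = v
    return out
```

After expansion the FMA proceeds as a dense multiply-accumulate, but hardware only multiplies the stored nonzeros.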

Publication number: 20240045687
Abstract: Techniques for FP8 classification or manipulation using single instructions are described. An exemplary instruction includes fields for an opcode, an identification of a location of a packed data source operand, an indication of one or more classification checks to perform, and an identification of a packed data destination operand, wherein the opcode is to indicate that execution circuitry is to perform, for each data element position of the packed data source operand, a classification according to the indicated one or more classification checks and store a result of the classification in a corresponding data element position of the destination operand.
Type: Application
Filed: October 1, 2022
Publication date: February 8, 2024
Inventors: Alexander Heinecke, Menachem Adelman, Evangelos Georganas, Amit Gradstein, Christopher Hughes, Naveen Mellempudi, Simon Rubanovich, Uri Sherman, Zeev Sperber
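Classification checks of the kind described above examine the bit pattern of each element. A sketch for the E5M2 format (an assumption; E4M3 encodes NaN differently, and the check set here is illustrative):

```python
def classify_fp8_e5m2(b: int) -> dict[str, bool]:
    """Classify one E5M2-encoded element by its exponent/fraction bits."""
    e, m = (b >> 2) & 0x1F, b & 0x3
    return {
        'zero':      e == 0 and m == 0,
        'subnormal': e == 0 and m != 0,
        'inf':       e == 0x1F and m == 0,
        'nan':       e == 0x1F and m != 0,
        'negative':  bool(b & 0x80),
    }
```

A packed version would run this per data element position and write each result to the destination operand.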

Publication number: 20230409326
Abstract: Techniques and mechanisms for processor circuitry to execute a load and expand instruction of an instruction set to generate decompressed matrix data. In an embodiment, the instruction comprises a source operand which indicates a location from which compressed matrix data, and corresponding metadata, are to be accessed. A destination operand of the instruction indicates a location which is to receive decompressed matrix data, which is generated, during execution of the instruction, based on the compressed matrix data and the corresponding metadata. The metadata comprises compression mask information which identifies which elements of the matrix have been masked from the compressed matrix data. In another embodiment, the instruction further comprises a count operand which identifies a total number of the unmasked matrix elements which are represented in the compressed matrix data.
Type: Application
Filed: June 15, 2022
Publication date: December 21, 2023
Applicant: Intel Corporation
Inventors: Menachem Adelman, Amit Gradstein, Simon Rubanovich, Barukh Ziv, Uri Sherman, Dana Rip, Shahar Mizrahi, Dan Baum, Rinat Rappoport, Nilesh Jain, Zeev Sperber, Gideon Stupp, Alexander Heinecke, Christopher Hughes, Evangelos Georganas
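The expand step described above can be sketched as scattering the stored values into the positions flagged by the compression mask, with zeros elsewhere. The bitmask representation and function name are assumptions for illustration:

```python
def load_and_expand(compressed: list[float], mask_bits: int, n: int) -> list[float]:
    """Decompress: position i receives the next stored value if bit i of the
    compression mask is set, and 0.0 if the element was masked out."""
    it = iter(compressed)
    return [next(it) if (mask_bits >> i) & 1 else 0.0 for i in range(n)]
```

The count operand mentioned in the abstract would correspond to `len(compressed)` here, telling the hardware how many stored values to consume.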

Patent number: 11734504
Abstract: A system and method for generating visual representations of interesting plots of tabular data.
Type: Grant
Filed: January 3, 2022
Date of Patent: August 22, 2023
Assignee: Datorama Technologies Ltd.
Inventors: Uri Sherman, Roee David

Publication number: 20220121890
Abstract: A system and method for generating visual representations of interesting plots of tabular data.
Type: Application
Filed: January 3, 2022
Publication date: April 21, 2022
Applicant: Datorama Technologies Ltd.
Inventors: Uri SHERMAN, Roee DAVID

Patent number: 11216706
Abstract: A system and method for generating visual representations of interesting plots of tabular data.
Type: Grant
Filed: March 14, 2019
Date of Patent: January 4, 2022
Assignee: Datorama Technologies Ltd.
Inventors: Uri Sherman, Roee David

Publication number: 20190286949
Abstract: A system and method for generating visual representations of interesting plots of tabular data.
Type: Application
Filed: March 14, 2019
Publication date: September 19, 2019
Applicant: Datorama Technologies, Ltd.
Inventors: Uri SHERMAN, Roee DAVID

Publication number: 20190197578
Abstract: A system and method for providing significant performance insights on marketing campaign data. The method includes training a regression model using a training set including segments and corresponding performance metrics of a plurality of potentially significant insights, each segment being a combination of a dimension and a value, wherein each insight includes a segment and a corresponding performance metric; filtering, based on the regression model, at least one insight from the plurality of potentially significant insights to result in at least one significant insight; computing, based in part on the regression model, a total significance score for each of the at least one significant insight; and ranking the at least one significant insight based on the computed total significance scores.
Type: Application
Filed: December 26, 2017
Publication date: June 27, 2019
Applicant: Datorama Technologies, Ltd.
Inventors: Uri SHERMAN, Yonatan GINAT, Amir KAFRI
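The score-and-rank stage of the method above can be sketched as follows. The abstract does not specify the scoring formula, so this sketch assumes a simple one: an insight scores higher the more its observed metric deviates from the fitted regression model's prediction for its segment. Function and parameter names are hypothetical.

```python
def rank_insights(insights, predict):
    """Rank (segment, metric) insights by deviation from a fitted baseline.

    insights: list of (segment, metric) pairs.
    predict:  fitted regression model, segment -> expected metric.
    Returns (segment, metric, score) tuples, highest significance first
    (a sketch; the patent's actual total significance score is not specified here).
    """
    scored = [(seg, metric, abs(metric - predict(seg)))
              for seg, metric in insights]
    return sorted(scored, key=lambda t: t[2], reverse=True)
```

A filtering step (dropping insights whose score falls below a threshold) would precede this ranking in the described pipeline.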