Patents by Inventor Dan Baum
Dan Baum has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230101512
Abstract: Techniques for shared data prefetch are described. An exemplary instruction for shared data prefetch includes at least one field for an opcode and at least one field for a source operand to provide a memory address of at least a byte of data, wherein the opcode is to indicate that circuitry is to fetch a line of data from memory at the provided address that contains the byte specified by the source operand and store that byte in at least a cache local to a requester, wherein the byte of data is to be stored in a shared state.
Type: Application
Filed: September 25, 2021
Publication date: March 30, 2023
Inventors: Christopher HUGHES, Zhe WANG, Dan BAUM, Alexander HEINECKE, Evangelos GEORGANAS, Lingxiang XIANG, Joseph NUZMAN, Ritu GUPTA
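The described semantics can be modeled in software. The sketch below is a minimal Python model, assuming a 64-byte cache line, a dict-based local cache, and a hypothetical prefetch_shared helper; it illustrates the behavior only and is not the hardware implementation.

```python
LINE_SIZE = 64  # assumed cache-line size in bytes

def prefetch_shared(local_cache, memory, addr):
    """Model of the shared-data-prefetch semantics: fetch the full line that
    contains the byte at 'addr' and install it in the requester's local cache
    in the Shared (S) coherence state. All names here are illustrative."""
    line_addr = addr & ~(LINE_SIZE - 1)                     # align down to the line
    data = bytes(memory[line_addr:line_addr + LINE_SIZE])   # fetch the line from memory
    local_cache[line_addr] = ("S", data)                    # install in shared state
    return local_cache[line_addr]

# Usage: prefetch the line containing byte 0x1234 from a toy flat memory.
memory = bytearray(64 * 1024)
cache = {}
prefetch_shared(cache, memory, 0x1234)
```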
-
Publication number: 20230102279
Abstract: Systems, methods, and apparatuses relating to sparsity-based FMA. In some examples, an instance of a single FMA instruction has one or more fields for an opcode, one or more fields to identify a source/destination matrix operand, one or more fields to identify a first plurality of source matrix operands, and one or more fields to identify a second plurality of matrix operands, wherein the opcode is to indicate that execution circuitry is to select a proper subset of data elements from the first plurality of source matrix operands based on sparsity controls from a first matrix operand of the second plurality of matrix operands and perform an FMA.
Type: Application
Filed: September 25, 2021
Publication date: March 30, 2023
Inventors: Menachem ADELMAN, Robert VALENTINE, Dan BAUM, Amit GRADSTEIN, Simon RUBANOVICH, Regev SHEMY, Zeev SPERBER, Alexander HEINECKE, Christopher HUGHES, Evangelos GEORGANAS, Mark CHARNEY, Arik NARKIS, Rinat RAPPOPORT, Barukh ZIV, Yaroslav POLLAK, Nilesh JAIN, Yash AKHAURI, Brinda GANESH, Rajesh POORNACHANDRAN, Guy BOUDOUKH
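As a rough illustration of the select-then-FMA flow, the Python sketch below assumes a 2:4 structured-sparsity layout in which a control matrix stores, for each kept value, its column index within the full K dimension; the operand shapes and the sparse_tile_fma name are assumptions, not the patent's encoding.

```python
import numpy as np

def sparse_tile_fma(acc, a_vals, a_ctrl, b):
    """acc (M,N) += sparse-A @ dense-B, where a_vals (M,K_kept) holds the kept
    elements of A and a_ctrl (M,K_kept) holds the sparsity controls (the column
    index of each kept element within the full K dimension)."""
    m_dim, _ = acc.shape
    for m in range(m_dim):
        for j in range(a_vals.shape[1]):
            k = int(a_ctrl[m, j])                  # select element via sparsity control
            acc[m, :] += a_vals[m, j] * b[k, :]    # fused multiply-add into the accumulator row
    return acc

# Usage: 2 kept elements per group of 4 along K (2:4 sparsity), K = 4.
acc = np.zeros((2, 3), dtype=np.float32)
a_vals = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)
a_ctrl = np.array([[0, 2], [1, 3]])               # which K positions the kept values occupy
b = np.arange(12, dtype=np.float32).reshape(4, 3)
sparse_tile_fma(acc, a_vals, a_ctrl, b)
```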
-
Patent number: 11579880
Abstract: Disclosed embodiments relate to systems for performing instructions to quickly convert and use matrices (tiles) as one-dimensional vectors. In one example, a processor includes fetch circuitry to fetch an instruction having fields to specify an opcode, locations of a two-dimensional (2D) matrix and a one-dimensional (1D) vector, and a group of elements comprising one of a row, part of a row, multiple rows, a column, part of a column, multiple columns, and a rectangular sub-tile of the specified 2D matrix, and wherein the opcode is to indicate a move of the specified group between the 2D matrix and the 1D vector; decode circuitry to decode the fetched instruction; and execution circuitry, responsive to the decoded instruction, when the opcode specifies a move from 1D, to move contents of the specified 1D vector to the specified group of elements.
Type: Grant
Filed: April 26, 2021
Date of Patent: February 14, 2023
Assignee: Intel Corporation
Inventors: Bret Toll, Christopher J. Hughes, Dan Baum, Elmoustapha Ould-Ahmed-Vall, Raanan Sade, Robert Valentine, Mark J. Charney, Alexander F. Heinecke
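A numpy sketch of the move semantics follows; the index-expression encoding of the "group" (row, column, or sub-tile) and the tile_vector_move name are illustrative assumptions.

```python
import numpy as np

def tile_vector_move(tile, vec, group, from_1d):
    """Move a group of elements (a row, column, or rectangular sub-tile,
    expressed as a numpy index) between a 2D tile and a 1D vector.
    from_1d=True copies the vector into the group; False copies the group out."""
    if from_1d:
        tile[group] = vec[:tile[group].size].reshape(tile[group].shape)
    else:
        vec[:tile[group].size] = tile[group].ravel()
    return tile, vec

# Usage: move a 2x2 sub-tile out to a vector, then move it back in elsewhere.
tile = np.arange(16).reshape(4, 4)
vec = np.zeros(4, dtype=tile.dtype)
tile_vector_move(tile, vec, (slice(0, 2), slice(0, 2)), from_1d=False)
tile_vector_move(tile, vec, (slice(2, 4), slice(2, 4)), from_1d=True)
```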
-
Patent number: 11579883
Abstract: Disclosed embodiments relate to systems and methods for performing instructions specifying horizontal tile operations. In one example, a processor includes fetch circuitry to fetch an instruction specifying a horizontal tile operation, a location of an M by N source matrix comprising K groups of elements, and locations of K destinations, wherein each of the K groups of elements comprises the same number of elements; decode circuitry to decode the fetched instruction; and execution circuitry to respond to the decoded instruction by generating K results, each result being generated by performing the specified horizontal tile operation across every element of a corresponding group of the K groups, and writing each generated result to a corresponding location of the K specified destination locations.
Type: Grant
Filed: September 14, 2018
Date of Patent: February 14, 2023
Assignee: Intel Corporation
Inventors: Christopher J. Hughes, Bret Toll, Dan Baum, Elmoustapha Ould-Ahmed-Vall, Raanan Sade, Robert Valentine, Mark J. Charney, Alexander F. Heinecke
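The sketch below models one plausible reading of the horizontal-operation semantics, with groups taken as rows and a horizontal add as the operation; the grouping, the reduction, and the helper name are assumptions.

```python
import numpy as np

def horizontal_tile_op(src, groups, op=np.add.reduce):
    """Apply a horizontal operation across every element of each of the K
    groups of an M x N source tile, producing K results (one per destination).
    'groups' is a list of K equally sized index expressions into src."""
    return [op(src[g].ravel()) for g in groups]

# Usage: horizontal add across each row of a 4x4 tile (K = 4 groups).
src = np.arange(16).reshape(4, 4)
row_groups = [(i, slice(None)) for i in range(4)]
results = horizontal_tile_op(src, row_groups)      # [6, 22, 38, 54]
```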
-
Patent number: 11567765
Abstract: Embodiments detailed herein relate to matrix operations. In particular, the loading of a matrix (tile) from memory. For example, support for a loading instruction is described in the form of decode circuitry to decode an instruction having fields for an opcode, a destination matrix operand identifier, and source memory information, and execution circuitry to execute the decoded instruction to load groups of strided data elements from memory into configured rows of the identified destination matrix operand.
Type: Grant
Filed: July 1, 2017
Date of Patent: January 31, 2023
Assignee: Intel Corporation
Inventors: Robert Valentine, Menachem Adelman, Milind B. Girkar, Zeev Sperber, Mark J. Charney, Bret L. Toll, Rinat Rappoport, Jesus Corbal, Stanislav Shwartsman, Dan Baum, Igor Yanover, Alexander F. Heinecke, Barukh Ziv, Elmoustapha Ould-Ahmed-Vall, Yuri Gebil
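A software model of the strided tile load is sketched below; the flat byte buffer, the 32-bit element type, and the tile_load name are assumptions about one possible memory layout, not the architectural definition.

```python
import numpy as np

def tile_load(memory, base, stride, rows, cols, dtype=np.int32):
    """Load 'rows' groups of 'cols' consecutive elements from a flat byte
    buffer into the rows of a destination tile; successive groups start
    'stride' bytes apart (the row stride supplied with the instruction)."""
    tile = np.empty((rows, cols), dtype=dtype)
    for r in range(rows):                          # one strided group per configured row
        offset = base + r * stride
        tile[r] = np.frombuffer(memory, dtype=dtype, count=cols, offset=offset)
    return tile

# Usage: load a 4x4 tile of int32 from a toy buffer with a 64-byte row stride.
memory = np.arange(256, dtype=np.int32).tobytes()
tile = tile_load(memory, base=0, stride=64, rows=4, cols=4)
```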
-
Publication number: 20230027329
Abstract: A processor, a system, a machine readable medium, and a method.
Type: Application
Filed: December 26, 2020
Publication date: January 26, 2023
Applicant: Intel Corporation
Inventors: David M. Durham, Michael D. LeMay, Salmin Sultana, Karanvir S. Grewal, Michael E. Kounavis, Sergej Deutsch, Andrew James Weiler, Abhishek Basak, Dan Baum, Santosh Ghosh
-
Patent number: 11507376
Abstract: Disclosed embodiments relate to instructions for fast element unpacking. In one example, a processor includes fetch circuitry to fetch an instruction whose format includes fields to specify an opcode and locations of an Array-of-Structures (AOS) source matrix and one or more Structure of Arrays (SOA) destination matrices, wherein: the specified opcode calls for unpacking elements of the specified AOS source matrix into the specified SOA destination matrices, the AOS source matrix is to contain N structures each containing K elements of different types, with same-typed elements in consecutive structures separated by a stride, and the SOA destination matrices together contain K segregated groups, each containing N same-typed elements; decode circuitry to decode the fetched instruction; and execution circuitry, responsive to the decoded instruction, to unpack each element of the specified AOS matrix into one of the K element types of the one or more SOA matrices.
Type: Grant
Filed: January 19, 2021
Date of Patent: November 22, 2022
Assignee: Intel Corporation
Inventors: Bret Toll, Alexander F. Heinecke, Christopher J. Hughes, Ronen Zohar, Michael Espig, Dan Baum, Raanan Sade, Robert Valentine, Mark J. Charney, Elmoustapha Ould-Ahmed-Vall
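The element shuffling described above amounts to a strided de-interleave. The sketch below assumes a flat AOS layout of N structures with K same-typed fields each, which is a simplification of the matrix operands in the claim; the aos_to_soa name is illustrative.

```python
import numpy as np

def aos_to_soa(aos_flat, k):
    """Unpack an Array-of-Structures buffer (N structures of k fields,
    same-typed fields separated by a stride of k elements) into k
    Structure-of-Arrays outputs, one per field."""
    return [aos_flat[field::k].copy() for field in range(k)]

# Usage: 4 structures of 3 fields each, e.g. interleaved x/y/z coordinates.
aos = np.array([0, 10, 20,  1, 11, 21,  2, 12, 22,  3, 13, 23])
xs, ys, zs = aos_to_soa(aos, k=3)   # xs=[0,1,2,3], ys=[10,11,12,13], zs=[20,21,22,23]
```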
-
Publication number: 20220291927
Abstract: Embodiments detailed herein relate to matrix operations. In particular, the loading of a matrix (tile) from memory.
Type: Application
Filed: March 28, 2022
Publication date: September 15, 2022
Inventors: Robert VALENTINE, Menachem ADELMAN, Elmoustapha OULD-AHMED-VALL, Bret L. TOLL, Milind B. GIRKAR, Zeev SPERBER, Mark J. CHARNEY, Rinat RAPPOPORT, Jesus CORBAL, Stanislav SHWARTSMAN, Igor YANOVER, Alexander F. HEINECKE, Barukh ZIV, Dan BAUM, Yuri GEBIL
-
Publication number: 20220291926
Abstract: Embodiments detailed herein relate to matrix operations. In particular, the loading of a matrix (tile) from memory.
Type: Application
Filed: March 28, 2022
Publication date: September 15, 2022
Inventors: Robert VALENTINE, Menachem ADELMAN, Elmoustapha OULD-AHMED-VALL, Bret L. TOLL, Milind B. GIRKAR, Zeev SPERBER, Mark J. CHARNEY, Rinat RAPPOPORT, Jesus CORBAL, Stanislav SHWARTSMAN, Igor YANOVER, Alexander F. HEINECKE, Barukh ZIV, Dan BAUM, Yuri GEBIL
-
Patent number: 11422809
Abstract: An apparatus and method for efficient multicast operation processing.
Type: Grant
Filed: May 13, 2020
Date of Patent: August 23, 2022
Assignee: Intel Corporation
Inventors: Christopher J. Hughes, Dan Baum
-
Publication number: 20220236989
Abstract: Detailed herein are embodiments of systems, processors, and methods for matrix move. For example, a processor comprising decode circuitry to decode an instruction having fields for an opcode, a source matrix operand identifier, and a destination matrix operand identifier; and execution circuitry to execute the decoded instruction to move each data element of the identified source matrix operand to a corresponding data element position of the identified destination matrix operand is described.
Type: Application
Filed: January 28, 2022
Publication date: July 28, 2022
Inventors: Robert VALENTINE, Zeev SPERBER, Mark J. CHARNEY, Bret L. TOLL, Jesus CORBAL, Dan BAUM, Alexander HEINECKE, Elmoustapha OULD-AHMED-VALL
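The move itself is a position-for-position copy; a minimal numpy sketch, with an assumed tile_move helper name, is below.

```python
import numpy as np

def tile_move(dst, src):
    """Copy each data element of the source tile to the corresponding data
    element position of the destination tile (both tiles assumed to have
    been configured with the same shape)."""
    dst[...] = src
    return dst

# Usage
src = np.arange(8).reshape(2, 4)
dst = np.zeros_like(src)
tile_move(dst, src)
```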
-
Publication number: 20220197822
Abstract: Techniques to allow use of metadata in unused bits of virtual addresses are described. A processor of an aspect includes a decode circuit to decode a memory access instruction. The instruction is to indicate one or more memory address operands that are to have address generation information and metadata. An execution circuit coupled with the decode circuit is to generate a 64-bit virtual address based on the one or more memory address operands. The 64-bit virtual address has a bit 63, an X-bit address field starting at bit 0 to store an address generated from the address generation information, and one or more metadata bits to store the metadata. The execution circuit is also to perform a canonicality check on the 64-bit virtual address that does not fail due to non-canonical values of the metadata stored in the one or more metadata bits. Other processors, methods, systems, and instructions are disclosed.
Type: Application
Filed: December 23, 2020
Publication date: June 23, 2022
Inventors: Vedvyas SHANBHOGUE, Gilbert NEIGER, Stephen ROBINSON, Dan BAUM, Ron GABOR
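A simplified Python model of the packing and of a metadata-tolerant canonicality check is sketched below; the specific widths (a 57-bit address field with 6 metadata bits just below bit 63) and the function names are assumptions, not the claimed encoding.

```python
MASK64 = (1 << 64) - 1

def pack_metadata(addr, metadata, addr_bits=57, meta_bits=6):
    """Place 'metadata' into the otherwise-unused bits just above the address
    field of a 64-bit virtual address, leaving bit 63 and the address field
    untouched."""
    meta_mask = ((1 << meta_bits) - 1) << addr_bits
    return (addr & ~meta_mask & MASK64) | ((metadata << addr_bits) & meta_mask)

def canonicality_check(va, addr_bits=57, meta_bits=6):
    """Canonicality check that treats the metadata bits as don't-care: it only
    requires bit 63 to match the top bit of the address field, so
    non-canonical-looking metadata values do not cause a fault."""
    sign = (va >> (addr_bits - 1)) & 1
    return sign == ((va >> 63) & 1)

# Usage: a user-space address stays canonical even with metadata bits set.
va = pack_metadata(0x00007f1234560000, metadata=0b101010)
assert canonicality_check(va)
```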
-
Publication number: 20220171627
Abstract: Disclosed embodiments relate to matrix compress/decompress instructions. In one example, a processor includes fetch circuitry to fetch a compress instruction having a format with fields to specify an opcode and locations of decompressed source and compressed destination matrices, decode circuitry to decode the fetched compress instruction, and execution circuitry, responsive to the decoded compress instruction, to: generate a compressed result according to a compress algorithm by compressing the specified decompressed source matrix by either packing non-zero-valued elements together and storing the matrix position of each non-zero-valued element in a header, or using fewer bits to represent one or more elements and using the header to identify matrix elements being represented by fewer bits; and store the compressed result to the specified compressed destination matrix.
Type: Application
Filed: February 15, 2022
Publication date: June 2, 2022
Inventors: Dan BAUM, Michael ESPIG, James GUILFORD, Wajdi K. FEGHALI, Raanan SADE, Christopher J. HUGHES, Robert VALENTINE, Bret TOLL, Elmoustapha OULD-AHMED-VALL, Mark J. CHARNEY, Vinodh GOPAL, Ronen ZOHAR, Alexander F. HEINECKE
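The "pack non-zero values plus a position header" variant of the compress algorithm can be modeled as below; the header layout (explicit row/column pairs) and the function names are illustrative assumptions.

```python
import numpy as np

def tile_compress(src):
    """Pack the non-zero elements of the source tile together and record the
    matrix position of each one in a header."""
    rows, cols = np.nonzero(src)
    header = np.stack([rows, cols], axis=1)   # position of every non-zero element
    values = src[rows, cols]                  # packed non-zero elements
    return header, values

def tile_decompress(header, values, shape):
    """Inverse: scatter the packed values back to the positions in the header."""
    dst = np.zeros(shape, dtype=values.dtype)
    dst[header[:, 0], header[:, 1]] = values
    return dst

# Usage
src = np.array([[0, 5, 0], [7, 0, 0]])
header, values = tile_compress(src)
assert np.array_equal(tile_decompress(header, values, src.shape), src)
```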
-
Publication number: 20220171623
Abstract: Embodiments detailed herein relate to matrix operations. In particular, support for matrix (tile) addition, subtraction, and multiplication is described. For example, circuitry to support instructions for element-by-element matrix (tile) addition, subtraction, and multiplication is detailed. In some embodiments, for matrix (tile) addition, decode circuitry is to decode an instruction having fields for an opcode, a first source matrix operand identifier, a second source matrix operand identifier, and a destination matrix operand identifier; and execution circuitry is to execute the decoded instruction to, for each data element position of the identified first source matrix operand: add a first data value at that data element position to a second data value at a corresponding data element position of the identified second source matrix operand, and store a result of the addition into a corresponding data element position of the identified destination matrix operand.
Type: Application
Filed: December 10, 2021
Publication date: June 2, 2022
Applicant: Intel Corporation
Inventors: Robert VALENTINE, Dan BAUM, Zeev SPERBER, Jesus CORBAL, Elmoustapha OULD-AHMED-VALL, Bret L. TOLL, Mark J. CHARNEY, Barukh ZIV, Alexander HEINECKE, Milind GIRKAR, Simon RUBANOVICH
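The tile-add case reduces to a per-position loop over the two source tiles; a minimal sketch (addition only, with an assumed tile_add helper name) follows. Subtraction and multiplication follow the same per-position pattern.

```python
import numpy as np

def tile_add(dst, src1, src2):
    """For each data element position of the first source tile, add the value
    at that position to the value at the corresponding position of the second
    source tile and store the result at the corresponding destination position."""
    rows, cols = src1.shape
    for i in range(rows):
        for j in range(cols):
            dst[i, j] = src1[i, j] + src2[i, j]
    return dst

# Usage
a = np.arange(6).reshape(2, 3)
b = np.ones((2, 3), dtype=a.dtype)
c = np.empty_like(a)
tile_add(c, a, b)
```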
-
Patent number: 11327754
Abstract: Methods and apparatus for approximation using polynomial functions are disclosed. In one embodiment, a processor comprises decoding and execution circuitry. The decoding circuitry is to decode an instruction, where the instruction comprises a first operand specifying an output location and a second operand specifying a plurality of data element values to be computed. The execution circuitry is to execute the decoded instruction. The execution includes to compute a result for each of the plurality of data element values using a polynomial function to approximate a complex function, where the computation uses coefficients stored in a lookup location for the complex function, and where data element values within different data element value ranges use different sets of coefficients. The execution further includes to store results of the computation in the output location.
Type: Grant
Filed: March 27, 2019
Date of Patent: May 10, 2022
Assignee: Intel Corporation
Inventors: Jorge Parra, Dan Baum, Robert S. Chappell, Michael Espig, Varghese George, Alexander Heinecke, Christopher Hughes, Subramaniam Maiyuran, Prasoonkumar Surti, Ronen Zohar, Elmoustapha Ould-Ahmed-Vall
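The range-selected lookup can be sketched as follows; the range boundaries, the coefficient table, and the evaluation with np.polyval are assumptions used only to illustrate the "different ranges use different coefficient sets" idea.

```python
import numpy as np

def poly_approx(x, range_edges, coeff_table):
    """Approximate a complex function on an array of inputs: each input picks
    a coefficient set from the lookup table according to the value range it
    falls in, then evaluates that polynomial."""
    out = np.empty(x.shape, dtype=np.float64)
    for i, xi in enumerate(x.ravel()):
        sel = min(int(np.searchsorted(range_edges, xi)), len(coeff_table) - 1)
        out.flat[i] = np.polyval(coeff_table[sel], xi)   # coefficients, highest degree first
    return out

# Usage: crude two-range quadratic approximation of exp(x) on [0, 2).
range_edges = np.array([1.0])                            # split the input at x = 1
coeff_table = [np.polyfit(np.linspace(0, 1, 32), np.exp(np.linspace(0, 1, 32)), 2),
               np.polyfit(np.linspace(1, 2, 32), np.exp(np.linspace(1, 2, 32)), 2)]
approx = poly_approx(np.array([0.5, 1.5]), range_edges, coeff_table)
```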
-
Patent number: 11294671
Abstract: Disclosed embodiments relate to systems and methods for performing duplicate detection instructions on two-dimensional (2D) data. In one example, a processor includes fetch circuitry to fetch an instruction, and decode circuitry to decode the fetched instruction having fields to specify an opcode and locations of a source matrix comprising M×N elements and a destination, the opcode to indicate that execution circuitry is to use a plurality of comparators to discover duplicates in the source matrix and store indications of locations of discovered duplicates in the destination. The execution circuitry is to execute the decoded instruction as per the opcode.
Type: Grant
Filed: December 26, 2018
Date of Patent: April 5, 2022
Assignee: Intel Corporation
Inventors: Christopher J. Hughes, Michael Espig, Dan Baum, Robert Valentine, Bret Toll, Elmoustapha Ould-Ahmed-Vall
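A straightforward software analogue of the comparator network is shown below; representing the "indications of locations" as an M x N 0/1 mask is an assumption about the destination format, and find_duplicates is an illustrative name.

```python
import numpy as np

def find_duplicates(src):
    """Compare every element of the M x N source tile against all others and
    mark the positions whose value occurs more than once."""
    flat = src.ravel()
    mask = np.zeros(flat.shape, dtype=np.uint8)
    for i, v in enumerate(flat):                       # models one comparator lane per element
        if np.count_nonzero(flat == v) > 1:
            mask[i] = 1
    return mask.reshape(src.shape)

# Usage
src = np.array([[1, 2, 3], [4, 2, 1]])
find_duplicates(src)   # marks the positions of both 1s and both 2s
```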
-
Publication number: 20220100515
Abstract: Disclosed embodiments relate to systems for performing instructions to quickly convert and use matrices (tiles) as one-dimensional vectors. In one example, a processor includes fetch circuitry to fetch an instruction having fields to specify an opcode, locations of a two-dimensional (2D) matrix and a one-dimensional (1D) vector, and a group of elements comprising one of a row, part of a row, multiple rows, a column, part of a column, multiple columns, and a rectangular sub-tile of the specified 2D matrix, and wherein the opcode is to indicate a move of the specified group between the 2D matrix and the 1D vector; decode circuitry to decode the fetched instruction; and execution circuitry, responsive to the decoded instruction, when the opcode specifies a move from 1D, to move contents of the specified 1D vector to the specified group of elements.
Type: Application
Filed: December 13, 2021
Publication date: March 31, 2022
Inventors: Bret TOLL, Christopher J. HUGHES, Dan BAUM, Elmoustapha OULD-AHMED-VALL, Raanan SADE, Robert VALENTINE, Mark J. CHARNEY, Alexander F. HEINECKE
-
Publication number: 20220100505
Abstract: Disclosed embodiments relate to systems for performing instructions to quickly convert and use matrices (tiles) as one-dimensional vectors. In one example, a processor includes fetch circuitry to fetch an instruction having fields to specify an opcode, locations of a two-dimensional (2D) matrix and a one-dimensional (1D) vector, and a group of elements comprising one of a row, part of a row, multiple rows, a column, part of a column, multiple columns, and a rectangular sub-tile of the specified 2D matrix, and wherein the opcode is to indicate a move of the specified group between the 2D matrix and the 1D vector; decode circuitry to decode the fetched instruction; and execution circuitry, responsive to the decoded instruction, when the opcode specifies a move from 1D, to move contents of the specified 1D vector to the specified group of elements.
Type: Application
Filed: December 13, 2021
Publication date: March 31, 2022
Inventors: Bret TOLL, Christopher J. HUGHES, Dan BAUM, Elmoustapha OULD-AHMED-VALL, Raanan SADE, Robert VALENTINE, Mark J. CHARNEY, Alexander F. HEINECKE
-
Patent number: 11288069
Abstract: Embodiments detailed herein relate to matrix operations. In particular, the loading of a matrix (tile) from memory.
Type: Grant
Filed: July 1, 2017
Date of Patent: March 29, 2022
Assignee: Intel Corporation
Inventors: Robert Valentine, Menachem Adelman, Elmoustapha Ould-Ahmed-Vall, Bret L. Toll, Milind B. Girkar, Zeev Sperber, Mark J. Charney, Rinat Rappoport, Jesus Corbal, Stanislav Shwartsman, Igor Yanover, Alexander F. Heinecke, Barukh Ziv, Dan Baum, Yuri Gebil
-
Patent number: 11288068
Abstract: Detailed herein are embodiments of systems, processors, and methods for matrix move. For example, a processor comprising decode circuitry to decode an instruction having fields for an opcode, a source matrix operand identifier, and a destination matrix operand identifier; and execution circuitry to execute the decoded instruction to move each data element of the identified source matrix operand to a corresponding data element position of the identified destination matrix operand is described.
Type: Grant
Filed: July 1, 2017
Date of Patent: March 29, 2022
Assignee: Intel Corporation
Inventors: Robert Valentine, Zeev Sperber, Mark J. Charney, Bret L. Toll, Jesus Corbal, Dan Baum, Alexander Heinecke, Elmoustapha Ould-Ahmed-Vall