Patents by Inventor Naveen K
Naveen K has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250258639
Abstract: Application casting is provided, in which an application running on an electronic device is cast to another electronic device that does not have access to the application. The application is cast by providing sufficient information for rendering a user interface of the application at the device that does not have access to the application, with applied modifications such as user preferences for that device.
Type: Application
Filed: May 2, 2025
Publication date: August 14, 2025
Inventors: Joshua J. Taylor, Pablo P. Cheng, Michael E. Buerli, Naveen K. Vemuri
-
Publication number: 20250217142
Abstract: Disclosed embodiments relate to instructions for fused multiply-add (FMA) operations with variable-precision inputs. In one example, a processor to execute an asymmetric FMA instruction includes fetch circuitry to fetch an FMA instruction having fields to specify an opcode, a destination, and first and second source vectors having first and second widths, respectively; decode circuitry to decode the fetched FMA instruction; and a single instruction, multiple data (SIMD) execution circuit to process as many elements of the second source vector as fit into a SIMD lane width by multiplying each element by a corresponding element of the first source vector and accumulating the resulting product with the previous contents of the destination, wherein the SIMD lane width is one of 16, 32, and 64 bits, the first width is one of 4 and 8 bits, and the second width is one of 1, 2, and 4 bits.
Type: Application
Filed: March 14, 2025
Publication date: July 3, 2025
Inventors: Dipankar Das, Naveen K. Mellempudi, Mrinmay Dutta, Arun Kumar, Dheevatsa Mudigere, Abhisek Kundu
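The asymmetric FMA described in the abstract can be sketched in software. This is an illustration only, not the patented circuit: the `to_signed` helper, the lane layout, and the two's-complement reading of the narrow elements are assumptions.

```python
def to_signed(v, bits):
    # interpret the low `bits` of v as a two's-complement integer
    v &= (1 << bits) - 1
    return v - (1 << bits) if v & (1 << (bits - 1)) else v

def asymmetric_fma(dst, src1, src2, lane_bits=32, w1=8, w2=2):
    # process as many w2-bit elements of src2 as fit into one SIMD lane,
    # multiply each by the corresponding w1-bit element of src1, and
    # accumulate each product with the previous contents of dst
    n = lane_bits // w2
    for i in range(n):
        dst[i] += to_signed(src1[i], w1) * to_signed(src2[i], w2)
    return dst
```

With `lane_bits=8` and `w2=2`, four 2-bit elements are processed: `asymmetric_fma([0]*4, [1, 2, 3, 4], [1, 1, 3, 2], lane_bits=8)` yields `[1, 2, -3, -8]`, because the 2-bit patterns `3` and `2` read as −1 and −2.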
-
Patent number: 12314727
Abstract: Described herein is a graphics processor including a processing resource with a multiplier configured to multiply input associated with an instruction at one of a first plurality of bit widths, an adder configured to add a product output from the multiplier to an accumulator value at one of a second plurality of bit widths, and circuitry to select a first bit width of the first plurality of bit widths for the multiplier and a second bit width of the second plurality of bit widths for the adder.
Type: Grant
Filed: May 12, 2022
Date of Patent: May 27, 2025
Assignee: Intel Corporation
Inventors: Dipankar Das, Roger Gramunt, Mikhail Smelyanskiy, Jesus Corbal, Dheevatsa Mudigere, Naveen K. Mellempudi, Alexander F. Heinecke
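A minimal sketch of the idea of independently selectable multiplier and adder widths, on integer values for simplicity; the function name, width arguments, and wrap-around behavior are assumptions, not the claimed hardware.

```python
def select_width_mac(a, b, acc, mul_bits=8, acc_bits=32):
    # multiply the inputs at one selected bit width and accumulate the
    # product at another, wrapping as fixed-width hardware would
    mul_mask = (1 << mul_bits) - 1
    acc_mask = (1 << acc_bits) - 1
    product = ((a & mul_mask) * (b & mul_mask)) & acc_mask
    return (acc + product) & acc_mask
```

Narrowing only the multiplier while keeping a wide accumulator is the usual way to keep products cheap without losing accumulation range.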
-
Publication number: 20250146029
Abstract: The present invention relates to the enzymatic conversion of an aldehyde to a carboxylic acid for the preparation of ibuprofen. In particular, the present disclosure provides a method of preparing 2-(4-isobutylphenyl)propanoic acid (ibuprofen) by enzymatic conversion of 2-(4-isobutylphenyl)propanal (ibuprofen aldehyde) in the presence of an oxidoreductase enzyme with high conversion efficiency.
Type: Application
Filed: November 2, 2023
Publication date: May 8, 2025
Inventor: Naveen K. Kulkarni
-
Patent number: 12288062
Abstract: Disclosed embodiments relate to instructions for fused multiply-add (FMA) operations with variable-precision inputs. In one example, a processor to execute an asymmetric FMA instruction includes fetch circuitry to fetch an FMA instruction having fields to specify an opcode, a destination, and first and second source vectors having first and second widths, respectively; decode circuitry to decode the fetched FMA instruction; and a single instruction, multiple data (SIMD) execution circuit to process as many elements of the second source vector as fit into a SIMD lane width by multiplying each element by a corresponding element of the first source vector and accumulating the resulting product with the previous contents of the destination, wherein the SIMD lane width is one of 16, 32, and 64 bits, the first width is one of 4 and 8 bits, and the second width is one of 1, 2, and 4 bits.
Type: Grant
Filed: December 28, 2023
Date of Patent: April 29, 2025
Assignee: Intel Corporation
Inventors: Dipankar Das, Naveen K. Mellempudi, Mrinmay Dutta, Arun Kumar, Dheevatsa Mudigere, Abhisek Kundu
-
Patent number: 12273410
Abstract: During web application development, a method comprises receiving a request for a webpage for a first business object type, the request comprising a business object type identifier of the first business object type, and receiving an expression for selecting an instance of the first business object type from a plurality of instances of that type, the expression specifying a data source and an operation. The method can further comprise generating the webpage, the webpage comprising a first user interface (UI) widget for the first business object type and an instruction for prepopulating the first UI widget with data from the instance of the first business object type, the instruction including the expression, the expression executable to perform the operation on data from the data source to generate a result identifying the instance of the first business object type.
Type: Grant
Filed: May 15, 2023
Date of Patent: April 8, 2025
Assignee: Open Text Corporation
Inventors: Naveen K. Vidyananda, Sachin Gopaldas Totale
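The expression-driven prepopulation flow might look like the following sketch. Everything here is hypothetical scaffolding for illustration: the dict-shaped expression, the `data_sources` registry, and the widget modeled as a dict are not from the patent.

```python
def prepopulate(widget, expression, data_sources):
    # the expression names a data source and an operation; running the
    # operation over the source identifies the business-object instance
    # whose data prepopulates the UI widget
    source = data_sources[expression["source"]]
    operation = expression["operation"]
    instance = next(rec for rec in source if operation(rec))
    widget.update(instance)
    return widget
```

For example, with `customers = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]` and `expr = {"source": "customers", "operation": lambda rec: rec["id"] == 2}`, calling `prepopulate({}, expr, {"customers": customers})` fills the widget with Grace's record.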
-
Publication number: 20250110741
Abstract: An apparatus to facilitate support for an 8-bit floating point format for parallel computing and stochastic rounding operations in a graphics architecture is disclosed. The apparatus includes a processor comprising: a decoder to decode an instruction fetched for execution into a decoded instruction, wherein the decoded instruction is a matrix instruction that is to operate on 8-bit floating point operands to perform a parallel dot product operation; a scheduler to schedule the decoded instruction and provide input data for the 8-bit floating point operands in accordance with an 8-bit floating point data format indicated by the decoded instruction; and circuitry to execute the decoded instruction to perform a 32-way dot product using 8-bit-wide dot-product layers, where each layer comprises one or more sets of interconnected multipliers, shifters, and adders, and each set generates a dot product of the 8-bit floating point operands.
Type: Application
Filed: September 29, 2023
Publication date: April 3, 2025
Applicant: Intel Corporation
Inventors: Jorge Eduardo Parra Osorio, Fangwen Fu, Guei-Yuan Lueh, Hong Jiang, Jiasheng Chen, Naveen K. Mellempudi, Kevin Hurd, Chunhui Mei, Alexandre Hadj-Chaib, Elliot Taylor, Shuai Mu
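The layered 32-way dot product can be sketched numerically: each narrow layer reduces a slice of the inputs and the partial sums are then combined. The group size and the plain-float arithmetic are assumptions; the hardware operates on 8-bit floating point operands with shifters for alignment.

```python
def dot32(a, b, group=8):
    # 32-way dot product built from narrower dot-product layers: each
    # layer reduces `group` element pairs, then the partials are summed
    assert len(a) == len(b) == 32
    partials = []
    for base in range(0, 32, group):
        layer = sum(x * y for x, y in zip(a[base:base + group],
                                          b[base:base + group]))
        partials.append(layer)
    return sum(partials)
```

Splitting the reduction into fixed-width layers is what lets a systolic-style datapath pipeline the 32 products instead of building one very wide adder tree.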
-
Publication number: 20250110733
Abstract: An apparatus to facilitate conversion operations and special-value use cases supporting an 8-bit floating point format in a graphics architecture is disclosed. The apparatus includes a processor comprising: a decoder to decode an instruction fetched for execution into a decoded instruction, wherein the decoded instruction causes the processor to perform a conversion operation corresponding to an 8-bit floating point format operand; a scheduler to schedule the decoded instruction and provide input data for an input operand of the conversion operation indicated by the decoded instruction; and conversion circuitry to execute the decoded instruction to perform the conversion operation, converting the input operand to an output operand in accordance with the 8-bit floating point format operand, the conversion circuitry comprising hardware circuitry to rescale, normalize, and convert the input operand to the output operand.
Type: Application
Filed: September 29, 2023
Publication date: April 3, 2025
Applicant: Intel Corporation
Inventors: Jorge Eduardo Parra Osorio, Fangwen Fu, Guei-Yuan Lueh, Jiasheng Chen, Naveen K. Mellempudi, Kevin Hurd, Alexandre Hadj-Chaib, Elliot Taylor, Marius Cornea-Hasegan
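To make the conversion concrete, here is a rounding sketch for one common 8-bit format, E4M3 (4 exponent bits, 3 mantissa bits, max normal 448). E4M3 itself is an assumption: the abstract only says "8-bit floating point format", and real conversion circuitry also rescales inputs and handles NaN/Inf special values, which this sketch simplifies away.

```python
import math

E4M3_MAX = 448.0  # largest normal E4M3 value: 1.75 * 2**8

def to_e4m3(x):
    # round a Python float to the nearest E4M3-representable value
    if x == 0.0:
        return 0.0
    sign = math.copysign(1.0, x)
    a = min(abs(x), E4M3_MAX)       # saturate to the max normal value
    m, e = math.frexp(a)            # a = m * 2**e with 0.5 <= m < 1
    if e - 1 < -6:                  # subnormal range: fixed step 2**-9
        return sign * round(a / 2**-9) * 2**-9
    sig = round(2 * m * 8) / 8      # quantize significand to 1.xxx (3 bits)
    return sign * min(sig * 2 ** (e - 1), E4M3_MAX)
```

For example, `to_e4m3(0.3)` returns `0.3125` (significand 1.25 at exponent −2), and values beyond the representable range saturate: `to_e4m3(1000.0)` returns `448.0`.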
-
Publication number: 20240412318
Abstract: One embodiment provides for a graphics processing unit to perform computations associated with a neural network, the graphics processing unit comprising a hardware processing unit having a dynamic precision fixed-point unit that is configurable to quantize elements of a floating-point tensor to convert the floating-point tensor into a fixed-point tensor.
Type: Application
Filed: June 24, 2024
Publication date: December 12, 2024
Applicant: Intel Corporation
Inventors: Naveen K. Mellempudi, Dheevatsa Mudigere, Dipankar Das, Srinivas Sridharan
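Dynamic fixed-point quantization of a tensor can be sketched as choosing one shared scale for the whole tensor and rounding each element to a narrow signed integer. The power-of-two scale choice and the clamping policy below are assumptions for illustration.

```python
import math

def to_dynamic_fixed_point(tensor, bits=8):
    # pick one shared power-of-two scale so the largest-magnitude
    # element fits in a `bits`-bit signed integer, then round each
    # element; the tensor is represented as q * scale
    qmax = (1 << (bits - 1)) - 1                 # e.g. 127 for int8
    peak = max(abs(v) for v in tensor)
    exp = math.ceil(math.log2(peak / qmax)) if peak else 0
    scale = 2.0 ** exp
    q = [max(-qmax - 1, min(qmax, round(v / scale))) for v in tensor]
    return q, scale
```

For example, `to_dynamic_fixed_point([1.0, -0.5, 0.25])` gives integers `[64, -32, 16]` with scale `2**-6`, so dequantizing `64 * 0.015625` recovers `1.0` exactly.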
-
Publication number: 20240320000
Abstract: An apparatus to facilitate utilizing structured sparsity in systolic arrays is disclosed. The apparatus includes a processor comprising a systolic array to: receive data from a plurality of source registers, the data comprising unpacked source data, structured source data that is packed based on sparsity, and metadata corresponding to the structured source data; identify portions of the unpacked source data to multiply with the structured source data, the portions identified based on the metadata; and output, to a destination register, a result of multiplying the identified portions of the unpacked source data by the structured source data.
Type: Application
Filed: March 29, 2024
Publication date: September 26, 2024
Applicant: Intel Corporation
Inventors: Subramaniam Maiyuran, Jorge Parra, Ashutosh Garg, Chandra Gurram, Chunhui Mei, Durgesh Borkar, Shubra Marwaha, Supratim Pal, Varghese George, Wei Xiong, Yan Li, Yongsheng Liu, Dipankar Das, Sasikanth Avancha, Dharma Teja Vooturi, Naveen K. Mellempudi
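A software sketch of the structured-sparsity multiply: the packed operand keeps only a few nonzeros per group, and the metadata records which positions they came from so the matching dense elements can be selected. The 2-of-4 pattern and this particular layout are assumptions, not the claimed register format.

```python
def sparse_dot(dense, packed, meta, group=4, keep=2):
    # `packed` holds `keep` surviving values out of every `group`
    # original values; meta[i] is the within-group position of
    # packed[i], used to pick the dense element to multiply against
    acc = 0
    for g in range(len(packed) // keep):
        for k in range(keep):
            idx = meta[g * keep + k]
            acc += dense[g * group + idx] * packed[g * keep + k]
    return acc
```

Skipping the zeroed positions this way halves the multiplies for 2:4 sparsity while the metadata keeps the result identical to the dense dot product.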
-
Patent number: 12033237
Abstract: One embodiment provides for a graphics processing unit to perform computations associated with a neural network, the graphics processing unit comprising a hardware processing unit having a dynamic precision fixed-point unit that is configurable to quantize elements of a floating-point tensor to convert the floating-point tensor into a fixed-point tensor.
Type: Grant
Filed: April 24, 2023
Date of Patent: July 9, 2024
Assignee: Intel Corporation
Inventors: Naveen K. Mellempudi, Dheevatsa Mudigere, Dipankar Das, Srinivas Sridharan
-
Patent number: 11977885
Abstract: An apparatus to facilitate utilizing structured sparsity in systolic arrays is disclosed. The apparatus includes a processor comprising a systolic array to: receive data from a plurality of source registers, the data comprising unpacked source data, structured source data that is packed based on sparsity, and metadata corresponding to the structured source data; identify portions of the unpacked source data to multiply with the structured source data, the portions identified based on the metadata; and output, to a destination register, a result of multiplying the identified portions of the unpacked source data by the structured source data.
Type: Grant
Filed: November 30, 2020
Date of Patent: May 7, 2024
Assignee: Intel Corporation
Inventors: Subramaniam Maiyuran, Jorge Parra, Ashutosh Garg, Chandra Gurram, Chunhui Mei, Durgesh Borkar, Shubra Marwaha, Supratim Pal, Varghese George, Wei Xiong, Yan Li, Yongsheng Liu, Dipankar Das, Sasikanth Avancha, Dharma Teja Vooturi, Naveen K. Mellempudi
-
Publication number: 20240126544
Abstract: Disclosed embodiments relate to instructions for fused multiply-add (FMA) operations with variable-precision inputs. In one example, a processor to execute an asymmetric FMA instruction includes fetch circuitry to fetch an FMA instruction having fields to specify an opcode, a destination, and first and second source vectors having first and second widths, respectively; decode circuitry to decode the fetched FMA instruction; and a single instruction, multiple data (SIMD) execution circuit to process as many elements of the second source vector as fit into a SIMD lane width by multiplying each element by a corresponding element of the first source vector and accumulating the resulting product with the previous contents of the destination, wherein the SIMD lane width is one of 16, 32, and 64 bits, the first width is one of 4 and 8 bits, and the second width is one of 1, 2, and 4 bits.
Type: Application
Filed: December 28, 2023
Publication date: April 18, 2024
Inventors: Dipankar Das, Naveen K. Mellempudi, Mrinmay Dutta, Arun Kumar, Dheevatsa Mudigere, Abhisek Kundu
-
Patent number: 11900107
Abstract: Disclosed embodiments relate to instructions for fused multiply-add (FMA) operations with variable-precision inputs. In one example, a processor to execute an asymmetric FMA instruction includes fetch circuitry to fetch an FMA instruction having fields to specify an opcode, a destination, and first and second source vectors having first and second widths, respectively; decode circuitry to decode the fetched FMA instruction; and a single instruction, multiple data (SIMD) execution circuit to process as many elements of the second source vector as fit into a SIMD lane width by multiplying each element by a corresponding element of the first source vector and accumulating the resulting product with the previous contents of the destination, wherein the SIMD lane width is one of 16, 32, and 64 bits, the first width is one of 4 and 8 bits, and the second width is one of 1, 2, and 4 bits.
Type: Grant
Filed: March 25, 2022
Date of Patent: February 13, 2024
Assignee: Intel Corporation
Inventors: Dipankar Das, Naveen K. Mellempudi, Mrinmay Dutta, Arun Kumar, Dheevatsa Mudigere, Abhisek Kundu
-
Publication number: 20230351542
Abstract: One embodiment provides for a graphics processing unit to perform computations associated with a neural network, the graphics processing unit comprising a hardware processing unit having a dynamic precision fixed-point unit that is configurable to quantize elements of a floating-point tensor to convert the floating-point tensor into a fixed-point tensor.
Type: Application
Filed: April 24, 2023
Publication date: November 2, 2023
Applicant: Intel Corporation
Inventors: Naveen K. Mellempudi, Dheevatsa Mudigere, Dipankar Das, Srinivas Sridharan
-
Publication number: 20230291790
Abstract: During web application development, a method comprises receiving a request for a webpage for a first business object type, the request comprising a business object type identifier of the first business object type, and receiving an expression for selecting an instance of the first business object type from a plurality of instances of that type, the expression specifying a data source and an operation. The method can further comprise generating the webpage, the webpage comprising a first user interface (UI) widget for the first business object type and an instruction for prepopulating the first UI widget with data from the instance of the first business object type, the instruction including the expression, the expression executable to perform the operation on data from the data source to generate a result identifying the instance of the first business object type.
Type: Application
Filed: May 15, 2023
Publication date: September 14, 2023
Inventors: Naveen K. Vidyananda, Sachin Gopaldas Totale
-
Patent number: 11689609
Abstract: During web application development, a method comprises receiving a request for a webpage for a first business object type, the request comprising a first business object type identifier of the first business object type; receiving a first expression for selecting an instance of the first business object type from a plurality of instances of that type in an object data source, the first expression specifying a first data source and an operation; and generating the webpage, the webpage comprising a first user interface (UI) widget for the first business object type and a first instruction for prepopulating the first UI widget with first data from the instance of the first business object type, the first instruction including the first expression, the first expression executable to perform the operation on data from the first data source to generate a result identifying the instance of the first business object type.
Type: Grant
Filed: June 30, 2022
Date of Patent: June 27, 2023
Assignee: Open Text Corporation
Inventors: Naveen K. Vidyananda, Sachin Gopaldas Totale
-
Patent number: 11669933
Abstract: One embodiment provides for a graphics processing unit to perform computations associated with a neural network, the graphics processing unit comprising a hardware processing unit having a dynamic precision fixed-point unit that is configurable to quantize elements of a floating-point tensor to convert the floating-point tensor into a dynamic fixed-point tensor.
Type: Grant
Filed: April 27, 2022
Date of Patent: June 6, 2023
Assignee: Intel Corporation
Inventors: Naveen K. Mellempudi, Dheevatsa Mudigere, Dipankar Das, Srinivas Sridharan
-
Publication number: 20230040528
Abstract: In some aspects, the disclosure provides compositions and methods for detecting and monitoring the activity of proteases in vivo using affinity assays. The disclosure relates, in part, to the discovery that biomarker nanoparticles targeted to the lymph nodes of a subject are useful for the diagnosis and monitoring of certain medical conditions (e.g., metastatic cancer, infection with certain pathogenic agents).
Type: Application
Filed: August 2, 2022
Publication date: February 9, 2023
Applicant: Massachusetts Institute of Technology
Inventors: Sangeeta N. Bhatia, Darrell J. Irvine, Karl Dane Wittrup, Andrew David Warren, Jaideep S. Dudani, Naveen K. Mehta
-
Publication number: 20220387961
Abstract: A method for chemical production includes applying electromagnetic heating to a composition that includes a catalytic component and an electromagnetic susceptor. Responsive to the application of radio frequency energy, the electromagnetic susceptor causes the catalytic component to become heated. The heated susceptor and catalytic component then interact with a chemical to form a product.
Type: Application
Filed: September 16, 2020
Publication date: December 8, 2022
Inventors: Micah J. Green, Naveen K. Mishra, Nutan S. Patil, Benjamin A. Wilhite