Patents by Inventor Naveen K

The inventors listed below have names matching "Naveen K" (for example, Naveen K. Vemuri, Naveen K. Mellempudi, and Naveen K. Vidyananda) and have filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250258639
    Abstract: Application casting is provided, in which an application running on an electronic device is cast to another electronic device that does not have access to the application. The application is cast by providing sufficient information for rendering a user interface of the application at the device that does not have access to the application, with modifications, such as user preferences, applied for that device.
    Type: Application
    Filed: May 2, 2025
    Publication date: August 14, 2025
    Inventors: Joshua J. TAYLOR, Pablo P. CHENG, Michael E. BUERLI, Naveen K. VEMURI
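As a loose illustration of the casting idea in the entry above, the sketch below serializes a description of a UI on the sending device and applies the receiving device's own preferences before rendering. All names here (`UiNode`, `apply_preferences`, the JSON payload shape, the `text_scale` preference) are hypothetical, not the patent's actual implementation.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class UiNode:
    """Hypothetical description of one UI element to be rendered remotely."""
    kind: str                      # e.g. "label", "button"
    text: str = ""
    font_size: int = 14
    children: list = field(default_factory=list)

def cast_payload(root: UiNode) -> str:
    # Sender: serialize just enough information to render the UI remotely.
    return json.dumps(asdict(root))

def apply_preferences(node: dict, prefs: dict) -> dict:
    # Receiver: apply local modifications (e.g. a preferred text scale)
    # before rendering, since the application itself is not installed here.
    node = dict(node)
    node["font_size"] = int(node["font_size"] * prefs.get("text_scale", 1.0))
    node["children"] = [apply_preferences(c, prefs) for c in node["children"]]
    return node

if __name__ == "__main__":
    ui = UiNode("label", text="Hello from the casting device",
                children=[asdict(UiNode("button", text="OK"))])
    received = json.loads(cast_payload(ui))
    print(apply_preferences(received, {"text_scale": 1.25}))
```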
  • Publication number: 20250217142
    Abstract: Disclosed embodiments relate to instructions for fused multiply-add (FMA) operations with variable-precision inputs. In one example, a processor to execute an asymmetric FMA instruction includes fetch circuitry to fetch an FMA instruction having fields to specify an opcode, a destination, and first and second source vectors having first and second widths, respectively, decode circuitry to decode the fetched FMA instruction, and a single instruction multiple data (SIMD) execution circuit to process as many elements of the second source vector as fit into an SIMD lane width by multiplying each element by a corresponding element of the first source vector, and accumulating a resulting product with previous contents of the destination, wherein the SIMD lane width is one of 16 bits, 32 bits, and 64 bits, the first width is one of 4 bits and 8 bits, and the second width is one of 1 bit, 2 bits, and 4 bits.
    Type: Application
    Filed: March 14, 2025
    Publication date: July 3, 2025
    Inventors: Dipankar DAS, Naveen K. MELLEMPUDI, Mrinmay DUTTA, Arun KUMAR, Dheevatsa MUDIGERE, Abhisek KUNDU
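The asymmetric FMA described in the entry above can be modeled numerically. The NumPy sketch below multiplies a wider first source (8-bit here) element-wise by a narrower second source (2-bit here) and accumulates the products into the previous destination contents in 32-bit lanes. These particular width choices are only one of the combinations the abstract lists, and the function is a software analogue, not the instruction's microarchitecture.

```python
import numpy as np

def asymmetric_fma(dst, src1, src2, src1_bits=8, src2_bits=2):
    """Software model of an asymmetric FMA: dst += src1 * src2.

    src1 elements are assumed to fit in `src1_bits` (signed) and
    src2 elements in `src2_bits` (unsigned), per the variable-precision idea.
    """
    lo1, hi1 = -(1 << (src1_bits - 1)), (1 << (src1_bits - 1)) - 1
    hi2 = (1 << src2_bits) - 1
    assert np.all((src1 >= lo1) & (src1 <= hi1)), "src1 out of range"
    assert np.all((src2 >= 0) & (src2 <= hi2)), "src2 out of range"
    # Multiply each narrow element by the corresponding wide element and
    # accumulate with the previous destination contents (wider accumulator).
    return dst + src1.astype(np.int32) * src2.astype(np.int32)

dst  = np.zeros(8, dtype=np.int32)                                    # 32-bit lanes
src1 = np.array([ 12, -7, 100, -128, 5, 64, -3, 90], dtype=np.int8)   # 8-bit source
src2 = np.array([  3,  1,   0,    2, 3,  1,  2,  0], dtype=np.uint8)  # 2-bit source
print(asymmetric_fma(dst, src1, src2))
```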
  • Patent number: 12314727
    Abstract: Described herein is a graphics processor including a processing resource having a multiplier configured to multiply input associated with an instruction at one of a first plurality of bit widths, an adder configured to add a product output from the multiplier to an accumulator value at one of a second plurality of bit widths, and circuitry to select a first bit width of the first plurality of bit widths for the multiplier and a second bit width of the second plurality of bit widths for the adder.
    Type: Grant
    Filed: May 12, 2022
    Date of Patent: May 27, 2025
    Assignee: Intel Corporation
    Inventors: Dipankar Das, Roger Gramunt, Mikhail Smelyanskiy, Jesus Corbal, Dheevatsa Mudigere, Naveen K. Mellempudi, Alexander F. Heinecke
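As a rough illustration of selecting different bit widths for the multiply and the accumulate, the sketch below multiplies at a lower precision (float16) and accumulates at a higher precision (float32). The specific width pairing is an assumption for the example; the patent covers circuitry that selects among pluralities of widths, not this particular choice.

```python
import numpy as np

def mixed_width_mac(a, b, acc, mul_dtype=np.float16, acc_dtype=np.float32):
    """Multiply at one bit width, then accumulate the product at another."""
    product = a.astype(mul_dtype) * b.astype(mul_dtype)        # narrow multiply
    return acc.astype(acc_dtype) + product.astype(acc_dtype)   # wide add

a = np.random.rand(4).astype(np.float32)
b = np.random.rand(4).astype(np.float32)
acc = np.zeros(4, dtype=np.float32)
print(mixed_width_mac(a, b, acc))
```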
  • Publication number: 20250146029
    Abstract: The present invention relates to the enzymatic conversion of an aldehyde to a carboxylic acid for the preparation of ibuprofen. In particular, the present disclosure provides a method of preparing 2-(4-isobutylphenyl)propanoic acid (ibuprofen) by enzymatic conversion of 2-(4-isobutylphenyl)propanal (ibuprofen aldehyde) in the presence of an oxidoreductase enzyme with high conversion efficiency.
    Type: Application
    Filed: November 2, 2023
    Publication date: May 8, 2025
    Inventor: Naveen K. Kulkarni
  • Patent number: 12288062
    Abstract: Disclosed embodiments relate to instructions for fused multiply-add (FMA) operations with variable-precision inputs. In one example, a processor to execute an asymmetric FMA instruction includes fetch circuitry to fetch an FMA instruction having fields to specify an opcode, a destination, and first and second source vectors having first and second widths, respectively, decode circuitry to decode the fetched FMA instruction, and a single instruction multiple data (SIMD) execution circuit to process as many elements of the second source vector as fit into an SIMD lane width by multiplying each element by a corresponding element of the first source vector, and accumulating a resulting product with previous contents of the destination, wherein the SIMD lane width is one of 16 bits, 32 bits, and 64 bits, the first width is one of 4 bits and 8 bits, and the second width is one of 1 bit, 2 bits, and 4 bits.
    Type: Grant
    Filed: December 28, 2023
    Date of Patent: April 29, 2025
    Assignee: Intel Corporation
    Inventors: Dipankar Das, Naveen K. Mellempudi, Mrinmay Dutta, Arun Kumar, Dheevatsa Mudigere, Abhisek Kundu
  • Patent number: 12273410
    Abstract: During web application development, a method comprises receiving a request for a webpage for a business object type, the request comprising a business object type identifier of the business object type, and receiving an expression for selecting an instance of the business object type from a plurality of instances of the business object type, the expression specifying a data source and an operation. The method can further comprise generating the webpage, the webpage comprising a user interface (UI) widget for the business object type and an instruction for prepopulating the UI widget with data from the instance of the business object type, the instruction including the expression, the expression executable to perform an action on data from the data source to generate a result identifying the instance of the business object type.
    Type: Grant
    Filed: May 15, 2023
    Date of Patent: April 8, 2025
    Assignee: Open Text Corporation
    Inventors: Naveen K. Vidyananda, Sachin Gopaldas Totale
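A minimal sketch of the expression-driven prepopulation idea in the entry above, assuming a toy in-memory data source and a made-up expression syntax ("source.operation(field=value)"). None of these names or formats come from the patent or from Open Text's products; the sketch only shows a widget being prepopulated by evaluating an expression against a data source at page-generation time.

```python
# Toy "object data source" holding business object instances.
CUSTOMERS = [
    {"id": 1, "name": "Acme Corp", "status": "active"},
    {"id": 2, "name": "Globex",    "status": "inactive"},
]

def evaluate(expression: str):
    """Evaluate a tiny expression like 'customers.first(status=active)'."""
    source_name, rest = expression.split(".", 1)
    op, _, arg = rest.partition("(")
    field, value = arg.rstrip(")").split("=")
    source = {"customers": CUSTOMERS}[source_name]
    matches = [obj for obj in source if str(obj.get(field)) == value]
    if op == "first":
        return matches[0] if matches else None
    raise ValueError(f"unsupported operation: {op}")

def generate_webpage(object_type: str, expression: str) -> str:
    instance = evaluate(expression)           # select the instance to show
    widget_value = instance["name"] if instance else ""
    # The generated page embeds both the widget and the expression, so the
    # widget can be re-prepopulated whenever the page is rendered again.
    return (f'<form data-type="{object_type}" data-expr="{expression}">\n'
            f'  <input name="customer" value="{widget_value}">\n'
            f"</form>")

print(generate_webpage("Customer", "customers.first(status=active)"))
```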
  • Publication number: 20250110741
    Abstract: An apparatus to facilitate supporting an 8-bit floating point format for parallel computing and stochastic rounding operations in a graphics architecture is disclosed. The apparatus includes a processor comprising: a decoder to decode an instruction fetched for execution into a decoded instruction, wherein the decoded instruction is a matrix instruction that is to operate on 8-bit floating point operands to perform a parallel dot product operation; a scheduler to schedule the decoded instruction and provide input data for the 8-bit floating point operands in accordance with an 8-bit floating point data format indicated by the decoded instruction; and circuitry to execute the decoded instruction to perform a 32-way dot-product using 8-bit wide dot-product layers, each 8-bit wide dot-product layer comprising one or more sets of interconnected multipliers, shifters, and adders, wherein each set of multipliers, shifters, and adders is to generate a dot product of the 8-bit floating point operands.
    Type: Application
    Filed: September 29, 2023
    Publication date: April 3, 2025
    Applicant: Intel Corporation
    Inventors: Jorge Eduardo Parra Osorio, Fangwen Fu, Guei-Yuan Lueh, Hong Jiang, Jiasheng Chen, Naveen K. Mellempudi, Kevin Hurd, Chunhui Mei, Alexandre Hadj-Chaib, Elliot Taylor, Shuai Mu
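To give a concrete feel for the 8-bit floating-point dot product described above, the sketch below decodes a simplified E4M3-style FP8 encoding and computes a 32-element dot product in groups of 8, loosely analogous to layered dot-product hardware. The encoding details, the 8-wide grouping, and the lack of NaN handling are assumptions made for illustration only.

```python
import numpy as np

def decode_fp8_e4m3(byte: int) -> float:
    """Decode one byte under a simplified E4M3-like format (no NaN handling)."""
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> 3) & 0xF
    mant = byte & 0x7
    if exp == 0:                               # subnormal values
        return sign * (mant / 8.0) * 2.0 ** (-6)
    return sign * (1.0 + mant / 8.0) * 2.0 ** (exp - 7)

def dot32_fp8(a_bytes, b_bytes):
    """32-way dot product accumulated 8 lanes at a time, in float32."""
    a = np.array([decode_fp8_e4m3(x) for x in a_bytes], dtype=np.float32)
    b = np.array([decode_fp8_e4m3(x) for x in b_bytes], dtype=np.float32)
    acc = np.float32(0.0)
    for i in range(0, 32, 8):                  # four 8-wide "layers"
        acc += np.dot(a[i:i + 8], b[i:i + 8])
    return acc

rng = np.random.default_rng(0)
a_bytes = rng.integers(0, 256, size=32)
b_bytes = rng.integers(0, 256, size=32)
print(dot32_fp8(a_bytes, b_bytes))
```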
  • Publication number: 20250110733
    Abstract: An apparatus to facilitate conversion operations and special value use cases supporting 8-bit floating point format in a graphics architecture is disclosed. The apparatus includes a processor comprising a decoder to decode an instruction fetched for execution into a decoded instruction, wherein the decoded instruction is to cause the processor to perform a conversion operation corresponding to an 8-bit floating point format operand; a scheduler to schedule the decoded instruction and provide input data for an input operand of the conversion operation indicated by the decoded instruction; and conversion circuitry to execute the decoded instruction to perform the conversion operation to convert the input operand to an output operand in accordance with the 8-bit floating point format operand, the conversion circuitry comprising hardware circuitry to rescale, normalize, and convert the input operand to the output operand.
    Type: Application
    Filed: September 29, 2023
    Publication date: April 3, 2025
    Applicant: Intel Corporation
    Inventors: Jorge Eduardo Parra Osorio, Fangwen Fu, Guei-Yuan Lueh, Jiasheng Chen, Naveen K. Mellempudi, Kevin Hurd, Alexandre Hadj-Chaib, Elliot Taylor, Marius Cornea-Hasegan
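A rough software analogue of the rescale/normalize/convert path described above: scale a float32 input, saturate it to the representable range, and round it onto an assumed E4M3-like grid. The scale factor, format parameters, and saturation behavior are illustrative assumptions, not the hardware's actual special-value handling.

```python
import numpy as np

def convert_to_fp8_e4m3(x, scale: float = 1.0) -> np.ndarray:
    """Rescale, then round float32 values onto an E4M3-like grid (float result).

    This models the conversion in software; real hardware would emit packed
    8-bit encodings and handle NaN/Inf special values explicitly.
    """
    x = np.asarray(x, dtype=np.float32) * np.float32(scale)   # rescale
    max_normal = np.float32(448.0)              # largest E4M3 normal value
    x = np.clip(x, -max_normal, max_normal)     # saturate instead of Inf/NaN
    mant_bits = 3
    # Normalize: express x as m * 2**e with 0.5 <= |m| < 1, then round m
    # so that the reconstructed value keeps only 3 mantissa bits.
    m, e = np.frexp(x)
    grid = np.float32(2 ** (mant_bits + 1))
    m_rounded = np.round(m * grid) / grid
    return np.ldexp(m_rounded, e).astype(np.float32)

print(convert_to_fp8_e4m3([0.1234, -3.7, 500.0, 1e-9]))
```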
  • Publication number: 20240412318
    Abstract: One embodiment provides for a graphics processing unit to perform computations associated with a neural network, the graphics processing unit comprising a hardware processing unit having a dynamic precision fixed-point unit that is configurable to convert elements of a floating-point tensor, thereby converting the floating-point tensor into a fixed-point tensor.
    Type: Application
    Filed: June 24, 2024
    Publication date: December 12, 2024
    Applicant: Intel Corporation
    Inventors: Naveen K. MELLEMPUDI, Dheevatsa MUDIGERE, Dipankar DAS, Srinivas SRIDHARAN
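One common way to turn a floating-point tensor into a dynamic fixed-point tensor, sketched below in NumPy, is to pick a shared exponent from the tensor's dynamic range and store integer mantissas. This generic scheme is used only to illustrate the idea in the entry above; the patent's hardware unit is not limited to it.

```python
import numpy as np

def to_dynamic_fixed_point(tensor, int_bits: int = 8):
    """Quantize a float tensor to (integer mantissas, shared exponent)."""
    tensor = np.asarray(tensor, dtype=np.float32)
    max_abs = float(np.max(np.abs(tensor))) or 1.0
    # Shared exponent chosen so the largest magnitude fits in int_bits.
    exponent = int(np.floor(np.log2(max_abs))) + 1 - (int_bits - 1)
    mantissas = np.round(tensor / 2.0 ** exponent).astype(np.int32)
    return mantissas, exponent

def from_dynamic_fixed_point(mantissas, exponent):
    """Reconstruct approximate float values from the fixed-point form."""
    return mantissas.astype(np.float32) * 2.0 ** exponent

x = np.array([0.02, -1.5, 3.25, 0.5], dtype=np.float32)
m, e = to_dynamic_fixed_point(x)
print(m, e, from_dynamic_fixed_point(m, e))
```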
  • Publication number: 20240320000
    Abstract: An apparatus to facilitate utilizing structured sparsity in systolic arrays is disclosed. The apparatus includes a processor comprising a systolic array to receive data from a plurality of source registers, the data comprising unpacked source data, structured source data that is packed based on sparsity, and metadata corresponding to the structured source data; identify portions of the unpacked source data to multiply with the structured source data, the portions of the unpacked source data identified based on the metadata; and output, to a destination register, a result of multiplication of the portions of the unpacked source data and the structured source data.
    Type: Application
    Filed: March 29, 2024
    Publication date: September 26, 2024
    Applicant: Intel Corporation
    Inventors: Subramaniam Maiyuran, Jorge Parra, Ashutosh Garg, Chandra Gurram, Chunhui Mei, Durgesh Borkar, Shubra Marwaha, Supratim Pal, Varghese George, Wei Xiong, Yan Li, Yongsheng Liu, Dipankar Das, Sasikanth Avancha, Dharma Teja Vooturi, Naveen K. Mellempudi
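The structured-sparsity scheme in the entry above can be mimicked in a few lines: a 2:4-sparse operand is stored packed (only the kept values) together with metadata giving each kept value's position within its group of four, and the metadata selects which dense elements to multiply. The 2:4 pattern and the scalar dot-product loop are illustrative assumptions; the patent describes a systolic array, not this loop.

```python
import numpy as np

def pack_2to4(sparse_row):
    """Pack a 2:4 structured-sparse row: keep 2 values per group of 4."""
    values, metadata = [], []
    for g in range(0, len(sparse_row), 4):
        group = sparse_row[g:g + 4]
        keep = np.argsort(np.abs(group))[-2:]          # indices of the 2 kept values
        for idx in sorted(keep):
            values.append(group[idx])
            metadata.append(idx)                       # position within the group
    return np.array(values), np.array(metadata)

def sparse_dot(packed_values, metadata, dense) -> float:
    """Multiply packed sparse values with the dense elements the metadata selects."""
    acc = 0.0
    for i, (v, pos) in enumerate(zip(packed_values, metadata)):
        group = i // 2                                 # 2 kept values per group of 4
        acc += v * dense[group * 4 + pos]
    return acc

dense  = np.arange(8, dtype=np.float32)                          # unpacked source data
sparse = np.array([0, 3, 0, -2, 5, 0, 0, 1], dtype=np.float32)   # 2:4-sparse source
vals, meta = pack_2to4(sparse)
print(sparse_dot(vals, meta, dense), "vs", float(np.dot(sparse, dense)))
```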
  • Patent number: 12033237
    Abstract: One embodiment provides for a graphics processing unit to perform computations associated with a neural network, the graphics processing unit comprising a hardware processing unit having a dynamic precision fixed-point unit that is configurable to convert elements of a floating-point tensor, thereby converting the floating-point tensor into a fixed-point tensor.
    Type: Grant
    Filed: April 24, 2023
    Date of Patent: July 9, 2024
    Assignee: Intel Corporation
    Inventors: Naveen K. Mellempudi, Dheevatsa Mudigere, Dipankar Das, Srinivas Sridharan
  • Patent number: 11977885
    Abstract: An apparatus to facilitate utilizing structured sparsity in systolic arrays is disclosed. The apparatus includes a processor comprising a systolic array to receive data from a plurality of source registers, the data comprising unpacked source data, structured source data that is packed based on sparsity, and metadata corresponding to the structured source data; identify portions of the unpacked source data to multiply with the structured source data, the portions of the unpacked source data identified based on the metadata; and output, to a destination register, a result of multiplication of the portions of the unpacked source data and the structured source data.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: May 7, 2024
    Assignee: Intel Corporation
    Inventors: Subramaniam Maiyuran, Jorge Parra, Ashutosh Garg, Chandra Gurram, Chunhui Mei, Durgesh Borkar, Shubra Marwaha, Supratim Pal, Varghese George, Wei Xiong, Yan Li, Yongsheng Liu, Dipankar Das, Sasikanth Avancha, Dharma Teja Vooturi, Naveen K. Mellempudi
  • Publication number: 20240126544
    Abstract: Disclosed embodiments relate to instructions for fused multiply-add (FMA) operations with variable-precision inputs. In one example, a processor to execute an asymmetric FMA instruction includes fetch circuitry to fetch an FMA instruction having fields to specify an opcode, a destination, and first and second source vectors having first and second widths, respectively, decode circuitry to decode the fetched FMA instruction, and a single instruction multiple data (SIMD) execution circuit to process as many elements of the second source vector as fit into an SIMD lane width by multiplying each element by a corresponding element of the first source vector, and accumulating a resulting product with previous contents of the destination, wherein the SIMD lane width is one of 16 bits, 32 bits, and 64 bits, the first width is one of 4 bits and 8 bits, and the second width is one of 1 bit, 2 bits, and 4 bits.
    Type: Application
    Filed: December 28, 2023
    Publication date: April 18, 2024
    Inventors: Dipankar DAS, Naveen K. MELLEMPUDI, Mrinmay DUTTA, Arun KUMAR, Dheevatsa MUDIGERE, Abhisek KUNDU
  • Patent number: 11900107
    Abstract: Disclosed embodiments relate to instructions for fused multiply-add (FMA) operations with variable-precision inputs. In one example, a processor to execute an asymmetric FMA instruction includes fetch circuitry to fetch an FMA instruction having fields to specify an opcode, a destination, and first and second source vectors having first and second widths, respectively, decode circuitry to decode the fetched FMA instruction, and a single instruction multiple data (SIMD) execution circuit to process as many elements of the second source vector as fit into an SIMD lane width by multiplying each element by a corresponding element of the first source vector, and accumulating a resulting product with previous contents of the destination, wherein the SIMD lane width is one of 16 bits, 32 bits, and 64 bits, the first width is one of 4 bits and 8 bits, and the second width is one of 1 bit, 2 bits, and 4 bits.
    Type: Grant
    Filed: March 25, 2022
    Date of Patent: February 13, 2024
    Assignee: Intel Corporation
    Inventors: Dipankar Das, Naveen K. Mellempudi, Mrinmay Dutta, Arun Kumar, Dheevatsa Mudigere, Abhisek Kundu
  • Publication number: 20230351542
    Abstract: One embodiment provides for a graphics processing unit to perform computations associated with a neural network, the graphics processing unit comprising a hardware processing unit having a dynamic precision fixed-point unit that is configurable to convert elements of a floating-point tensor, thereby converting the floating-point tensor into a fixed-point tensor.
    Type: Application
    Filed: April 24, 2023
    Publication date: November 2, 2023
    Applicant: Intel Corporation
    Inventors: Naveen K. MELLEMPUDI, Dheevatsa MUDIGERE, Dipankar DAS, Srinivas SRIDHARAN
  • Publication number: 20230291790
    Abstract: During web application development, a method comprises receiving a request for a webpage for a business object type, the request comprising a business object type identifier of the business object type, and receiving an expression for selecting an instance of the business object type from a plurality of instances of the business object type, the expression specifying a data source and an operation. The method can further comprise generating the webpage, the webpage comprising a user interface (UI) widget for the business object type and an instruction for prepopulating the UI widget with data from the instance of the business object type, the instruction including the expression, the expression executable to perform an action on data from the data source to generate a result identifying the instance of the business object type.
    Type: Application
    Filed: May 15, 2023
    Publication date: September 14, 2023
    Inventors: Naveen K. Vidyananda, Sachin Gopaldas Totale
  • Patent number: 11689609
    Abstract: During web application development, a method comprises receiving a request for a webpage for a first business object type, the request comprising a first business object type identifier of the first business object type; receiving a first expression for selecting an instance of the first business object type from a plurality of instances of the first business object type in an object data source, the first expression specifying a first data source and an operation; and generating the webpage, the webpage comprising a first user interface (UI) widget for the first business object type and a first instruction for prepopulating the first UI widget with first data from the instance of the first business object type, the first instruction including the first expression, the first expression executable to perform the operation on data from the first data source to generate a result identifying the instance of the first business object type.
    Type: Grant
    Filed: June 30, 2022
    Date of Patent: June 27, 2023
    Assignee: Open Text Corporation
    Inventors: Naveen K. Vidyananda, Sachin Gopaldas Totale
  • Patent number: 11669933
    Abstract: One embodiment provides for a graphics processing unit to perform computations associated with a neural network, the graphics processing unit comprising a hardware processing unit having a dynamic precision fixed-point unit that is configurable to quantize elements of a floating-point tensor to convert the floating-point tensor into a dynamic fixed-point tensor.
    Type: Grant
    Filed: April 27, 2022
    Date of Patent: June 6, 2023
    Assignee: Intel Corporation
    Inventors: Naveen K. Mellempudi, Dheevatsa Mudigere, Dipankar Das, Srinivas Sridharan
  • Publication number: 20230040528
    Abstract: In some aspects, the disclosure provides compositions and methods for detecting and monitoring the activity of proteases in vivo using affinity assays. The disclosure relates, in part, to the discovery that biomarker nanoparticles targeted to the lymph nodes of a subject are useful for the diagnosis and monitoring of certain medical conditions (e.g., metastatic cancer, infection with certain pathogenic agents).
    Type: Application
    Filed: August 2, 2022
    Publication date: February 9, 2023
    Applicant: Massachusetts Institute of Technology
    Inventors: Sangeeta N. Bhatia, Darrell J. Irvine, Karl Dane Wittrup, Andrew David Warren, Jaideep S. Dudani, Naveen K. Mehta
  • Publication number: 20220387961
    Abstract: A method for chemical production includes applying electromagnetic heating to a composition that includes a catalytic component and an electromagnetic susceptor. Responsive to application of radio frequency energy, the electromagnetic susceptor causes the catalytic component to become heated. The heated electromagnetic susceptor and catalytic component interact with a chemical to form a product.
    Type: Application
    Filed: September 16, 2020
    Publication date: December 8, 2022
    Inventors: Micah J. Green, Naveen K. Mishra, Nutan S. Patil, Benjamin A. Wilhite