Patents by Inventor Rosario Cammarota

Rosario Cammarota has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11861484
    Abstract: A neural processing unit (NPU) is described. The NPU includes an NPU direct memory access (NDMA) core. The NDMA core includes a read engine having a read buffer. The NDMA core also includes a write engine having a write buffer. The NPU also includes a controller. The controller is configured to direct the NDMA core to perform hardware pre-processing of NDMA data in the read buffer and post-processing of NDMA data in the write buffer on blocks of a data stripe to process tensors in artificial neural networks.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: January 2, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Jinxia Bai, Rosario Cammarota, Michael Goldfarb
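The stripe-and-block flow described in the abstract above can be illustrated with a toy software analogy. Everything here (the function names, the pre/post hooks, the stripe size) is invented for illustration; the patent describes dedicated NDMA hardware, not Python:

```python
def dma_copy(src, dst, stripe, pre, post):
    """Move data stripe by stripe, pre-processing each block as it
    enters the read buffer and post-processing it as it leaves the
    write buffer (a software stand-in for the NDMA read/write engines)."""
    for start in range(0, len(src), stripe):
        read_buf = [pre(x) for x in src[start:start + stripe]]   # read engine
        dst[start:start + stripe] = [post(x) for x in read_buf]  # write engine

src = list(range(10))            # a flattened tensor
dst = [0] * len(src)
dma_copy(src, dst, stripe=4, pre=lambda x: x * 2, post=lambda x: x + 1)
assert dst == [2 * x + 1 for x in range(10)]
```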
  • Patent number: 11777707
    Abstract: Embodiments are directed to homomorphic encryption for machine learning and neural networks using high-throughput Chinese remainder theorem (CRT) evaluation. An embodiment of an apparatus includes a hardware accelerator to receive a ciphertext generated by homomorphic encryption (HE) for evaluation, decompose coefficients of the ciphertext into a set of decomposed coefficients, multiply the decomposed coefficients using a set of smaller moduli determined based on a larger modulus, and convert results of the multiplying back to an original form corresponding to the larger modulus by performing a reverse Chinese remainder theorem (CRT) transform on the results of multiplying the decomposed coefficients.
    Type: Grant
    Filed: June 6, 2022
    Date of Patent: October 3, 2023
    Assignee: INTEL CORPORATION
    Inventors: Santosh Ghosh, Andrew Reinders, Rafael Misoczki, Rosario Cammarota, Manoj Sastry
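The CRT evaluation described above can be sketched in a few lines: decompose each coefficient into residues modulo a set of smaller coprime moduli, multiply channel-wise, then map back to the larger modulus with the reverse CRT transform. The moduli below are tiny values chosen for readability, not the NTT-friendly machine-word primes a real HE accelerator would use; all names are illustrative:

```python
from math import prod

MODULI = [13, 17, 19, 23]        # pairwise-coprime "smaller moduli"
Q = prod(MODULI)                 # the "larger modulus"

def decompose(x):
    """Split a coefficient into residues modulo each small modulus."""
    return [x % m for m in MODULI]

def crt_reconstruct(residues):
    """Reverse CRT transform: map residues back to the larger modulus."""
    total = 0
    for r, m in zip(residues, MODULI):
        Mi = Q // m
        total += r * Mi * pow(Mi, -1, m)   # Mi^-1 mod m
    return total % Q

def mul_mod_q(a, b):
    """Multiply two coefficients channel-wise in the small moduli."""
    rc = [(x * y) % m for x, y, m in zip(decompose(a), decompose(b), MODULI)]
    return crt_reconstruct(rc)

assert mul_mod_q(1234, 4321) == (1234 * 4321) % Q
```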
  • Patent number: 11769036
    Abstract: An apparatus for optimizing a computational network is configured to receive an input at a first processing component. The first processing component may include at least a first programmable processing component and a second programmable processing component. The first programmable processing component is configured to compute a first nonlinear function and the second programmable processing component is configured to compute a second nonlinear function that is different from the first nonlinear function. The computational network, which may be a recurrent neural network such as a long short-term memory, may be operated to generate an inference based at least in part on outputs of the first programmable processing component and the second programmable processing component.
    Type: Grant
    Filed: April 18, 2018
    Date of Patent: September 26, 2023
    Assignee: QUALCOMM Incorporated
    Inventors: Rosario Cammarota, Michael Goldfarb, Manu Rastogi, Sarang Ozarde
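A minimal sketch of why two differently programmed nonlinear units are useful: a standard LSTM cell step applies a sigmoid for its gates and a tanh for its candidate and output paths. The scalar weights and names below are assumptions for illustration, not the patented hardware:

```python
import math

def sigmoid(z):
    """First programmable nonlinearity (used by the gates)."""
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, w):
    """One scalar LSTM cell step. Gates use sigmoid; candidate and
    output paths use tanh -- two distinct nonlinear functions."""
    i = sigmoid(w["xi"] * x + w["hi"] * h)    # input gate
    f = sigmoid(w["xf"] * x + w["hf"] * h)    # forget gate
    o = sigmoid(w["xo"] * x + w["ho"] * h)    # output gate
    g = math.tanh(w["xg"] * x + w["hg"] * h)  # candidate (second nonlinearity)
    c_new = f * c + i * g
    return o * math.tanh(c_new), c_new

w = {k: 0.5 for k in ("xi", "hi", "xf", "hf", "xo", "ho", "xg", "hg")}
h, c = lstm_step(1.0, 0.0, 0.0, w)
assert -1.0 < h < 1.0 and -1.0 < c < 1.0
```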
  • Patent number: 11763141
    Abstract: A neural processing unit (NPU) is described. The NPU includes an NPU direct memory access (NDMA) core. The NDMA core includes a read engine having a read buffer. The NDMA core also includes a write engine having a write buffer. The NPU also includes a controller. The controller is configured to direct the NDMA core to perform hardware memory bandwidth optimization for reading/writing NDMA data in the read buffer and/or NDMA data in the write buffer. The NDMA core is also configured to transparently combine NDMA transaction requests for a data stripe to increase local access to available tensors in artificial neural networks.
    Type: Grant
    Filed: April 4, 2022
    Date of Patent: September 19, 2023
    Assignee: QUALCOMM Incorporated
    Inventors: Jinxia Bai, Rosario Cammarota, Michael Goldfarb
  • Patent number: 11638146
    Abstract: This disclosure provides systems, methods and apparatus, including computer programs encoded on computer storage media, for onboarding one or more Multi-AP devices using a device provisioning protocol (DPP) and a Multi-AP communication protocol. In one aspect, a first Multi-AP device may determine, during an onboarding process, DPP configuration information that was derived using the DPP. The first Multi-AP device may establish a Multi-AP network configuration between the first Multi-AP device and a second Multi-AP device using the Multi-AP communication protocol based, at least in part, on the DPP configuration information. In one aspect, the DPP configuration information may be derived remotely by the network operator prior to device deployment. In one aspect, a configurator station (STA) may be delegated as the DPP configurator by the network operator, and may onboard one or more STAs into the Multi-AP network using the DPP and the Multi-AP communication protocol.
    Type: Grant
    Filed: March 18, 2019
    Date of Patent: April 25, 2023
    Assignee: QUALCOMM Incorporated
    Inventors: Rosario Cammarota, Sai Yiu Duncan Ho, Brian Michael Buesker
  • Patent number: 11507699
    Abstract: An example private processing pipeline may include: a masked decryption unit to perform a masked decryption operation transforming input data into masked decrypted data; a masked functional unit to produce a masked result by performing a masked operation on the masked decrypted data; and a masked encryption unit to perform a masked encryption operation transforming the masked result into an encrypted result.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: November 22, 2022
    Assignee: Intel Corporation
    Inventors: Casimir Wierzynski, Fabian Boemer, Rosario Cammarota
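The decrypt-compute-encrypt pipeline above can be mimicked with a toy XOR "cipher" and Boolean masking, so no unmasked plaintext ever exists between the stages. This is a sketch of the masking idea only; the key size, the cipher, and the function names are all invented for illustration:

```python
import secrets

KEY = secrets.randbits(8)          # toy 8-bit XOR "cipher" key

def masked_decrypt(ct, mask):
    """Decrypt and mask in one step; the bare plaintext never appears."""
    return ct ^ KEY ^ mask

def masked_xor_const(mx, c):
    """Masked functional unit: XOR with a constant commutes with the mask."""
    return mx ^ c

def masked_encrypt(mx, mask):
    """Remove the mask and re-encrypt in one step."""
    return mx ^ mask ^ KEY

pt = 0x5A
ct = pt ^ KEY                      # ciphertext entering the pipeline
mask = secrets.randbits(8)
m = masked_decrypt(ct, mask)       # equals pt ^ mask, never pt itself
ct2 = masked_encrypt(masked_xor_const(m, 0x0F), mask)
assert ct2 ^ KEY == pt ^ 0x0F      # pipeline computed pt XOR 0x0F privately
```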
  • Publication number: 20220321321
    Abstract: Embodiments are directed to homomorphic encryption for machine learning and neural networks using high-throughput Chinese remainder theorem (CRT) evaluation. An embodiment of an apparatus includes a hardware accelerator to receive a ciphertext generated by homomorphic encryption (HE) for evaluation, decompose coefficients of the ciphertext into a set of decomposed coefficients, multiply the decomposed coefficients using a set of smaller moduli determined based on a larger modulus, and convert results of the multiplying back to an original form corresponding to the larger modulus by performing a reverse Chinese remainder theorem (CRT) transform on the results of multiplying the decomposed coefficients.
    Type: Application
    Filed: June 6, 2022
    Publication date: October 6, 2022
    Applicant: Intel Corporation
    Inventors: Santosh Ghosh, Andrew Reinders, Rafael Misoczki, Rosario Cammarota, Manoj Sastry
  • Patent number: 11405176
    Abstract: Embodiments are directed to homomorphic encryption for machine learning and neural networks using high-throughput Chinese remainder theorem (CRT) evaluation. An embodiment of an apparatus includes a hardware accelerator to receive a ciphertext generated by homomorphic encryption (HE) for evaluation, decompose coefficients of the ciphertext into a set of decomposed coefficients, multiply the decomposed coefficients using a set of smaller moduli determined based on a larger modulus, and convert results of the multiplying back to an original form corresponding to the larger modulus.
    Type: Grant
    Filed: September 18, 2020
    Date of Patent: August 2, 2022
    Assignee: INTEL CORPORATION
    Inventors: Santosh Ghosh, Andrew Reinders, Rafael Misoczki, Rosario Cammarota, Manoj Sastry
  • Publication number: 20220230058
    Abstract: A neural processing unit (NPU) is described. The NPU includes an NPU direct memory access (NDMA) core. The NDMA core includes a read engine having a read buffer. The NDMA core also includes a write engine having a write buffer. The NPU also includes a controller. The controller is configured to direct the NDMA core to perform hardware memory bandwidth optimization for reading/writing NDMA data in the read buffer and/or NDMA data in the write buffer. The NDMA core is also configured to transparently combine NDMA transaction requests for a data stripe to increase local access to available tensors in artificial neural networks.
    Type: Application
    Filed: April 4, 2022
    Publication date: July 21, 2022
    Inventors: Jinxia BAI, Rosario CAMMAROTA, Michael GOLDFARB
  • Patent number: 11295205
    Abstract: A neural processing unit (NPU) is described. The NPU includes an NPU direct memory access (NDMA) core. The NDMA core includes a read engine having a read buffer. The NDMA core also includes a write engine having a write buffer. The NPU also includes a controller. The controller is configured to direct the NDMA core to perform hardware memory bandwidth optimization for reading/writing NDMA data in the read buffer and/or NDMA data in the write buffer. The NDMA core is also configured to transparently combine NDMA transaction requests for a data stripe to increase local access to available tensors in artificial neural networks.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: April 5, 2022
    Assignee: Qualcomm Incorporated
    Inventors: Jinxia Bai, Rosario Cammarota, Michael Goldfarb
  • Publication number: 20220094517
    Abstract: Embodiments are directed to homomorphic encryption for machine learning and neural networks using high-throughput Chinese remainder theorem (CRT) evaluation. An embodiment of an apparatus includes a hardware accelerator to receive a ciphertext generated by homomorphic encryption (HE) for evaluation, decompose coefficients of the ciphertext into a set of decomposed coefficients, multiply the decomposed coefficients using a set of smaller moduli determined based on a larger modulus, and convert results of the multiplying back to an original form corresponding to the larger modulus.
    Type: Application
    Filed: September 18, 2020
    Publication date: March 24, 2022
    Applicant: Intel Corporation
    Inventors: Santosh Ghosh, Andrew Reinders, Rafael Misoczki, Rosario Cammarota, Manoj Sastry
  • Publication number: 20220094518
    Abstract: Embodiments are directed to low circuit depth homomorphic encryption evaluations. An embodiment of an apparatus includes a hardware accelerator to receive a ciphertext generated by homomorphic encryption (HE) for evaluation, determine two coefficients of the ciphertext for HE evaluation, input the two coefficients as a first operand and a second operand to a pipeline multiplier for low circuit depth HE evaluation, perform combinatorial multiplication between the first operand and portions of the second operand, accumulate results of the combinatorial multiplication at each stage of the pipeline multiplier, and perform reduction with Mersenne prime modulus on a resulting accumulated output of the combinatorial multipliers of the pipeline multiplier.
    Type: Application
    Filed: September 18, 2020
    Publication date: March 24, 2022
    Applicant: Intel Corporation
    Inventors: Santosh Ghosh, Andrew Reinders, Rafael Misoczki, Rosario Cammarota, Manoj Sastry
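The Mersenne-prime reduction named in the abstract above exploits the identity 2^P ≡ 1 (mod 2^P − 1), so a wide product can be reduced with shifts and adds instead of division. A minimal software sketch, assuming P = 31 (the hardware would pipeline this across the multiplier stages):

```python
P = 31
M = (1 << P) - 1                   # Mersenne prime 2**31 - 1

def mersenne_reduce(x):
    """Reduce x mod 2**P - 1 using only shifts and adds: since
    2**P == 1 (mod M), fold the high bits onto the low bits."""
    while x > M:
        x = (x & M) + (x >> P)
    return 0 if x == M else x

a, b = 123456789, 987654321
assert mersenne_reduce(a * b) == (a * b) % M
```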
  • Publication number: 20210119766
    Abstract: Technologies for memory and I/O efficient operations on homomorphically encrypted data are disclosed. In the illustrative embodiment, a cloud compute device is to perform operations on homomorphically encrypted data. In order to reduce memory storage space and network and I/O bandwidth, ciphertext blocks can be manipulated as data structures, allowing operands for operations on a compute engine to be created on the fly as the compute engine is performing other operations, using orders of magnitude less storage space and bandwidth.
    Type: Application
    Filed: December 24, 2020
    Publication date: April 22, 2021
    Inventors: Vikram B. Suresh, Rosario Cammarota, Sanu K. Mathew, Zeshan A. Chishti, Raghavan Kumar, Rafael Misoczki
  • Publication number: 20210097206
    Abstract: An example private processing pipeline may include: a masked decryption unit to perform a masked decryption operation transforming input data into masked decrypted data; a masked functional unit to produce a masked result by performing a masked operation on the masked decrypted data; and a masked encryption unit to perform a masked encryption operation transforming the masked result into an encrypted result.
    Type: Application
    Filed: September 27, 2019
    Publication date: April 1, 2021
    Inventors: Casimir Wierzynski, Fabian Boemer, Rosario Cammarota
  • Publication number: 20200320206
    Abstract: Systems, methods, apparatus, and articles of manufacture to prevent unauthorized release of information associated with a function as a service are disclosed. A system disclosed herein operates on in-use information. The system includes a function as a service of a service provider that operates on encrypted data. The encrypted data includes encrypted in-use data. The system also includes a trusted execution environment (TEE) to operate within a cloud-based environment of a cloud provider. The function as a service operates on the encrypted data within the TEE, and the TEE protects service provider information from access by the cloud provider. The encrypted in-use data and the service provider information form at least a portion of the in-use information.
    Type: Application
    Filed: June 24, 2020
    Publication date: October 8, 2020
    Inventors: Rosario Cammarota, Fabian Boemer, Casimir M. Wierzynski, Anand Rajan, Rafael Misoczki
  • Publication number: 20200235910
    Abstract: Techniques for mitigating side-channel attacks on cryptographic algorithms are provided. An example method according to these techniques includes applying a block cipher algorithm to input data to generate a cryptographic output, such that applying the block cipher to the input data comprises modifying an output of a stage of the block cipher algorithm such that each output of the stage of the block cipher algorithm has a constant Hamming weight, and outputting the cryptographic output.
    Type: Application
    Filed: April 6, 2020
    Publication date: July 23, 2020
    Inventors: Rosario CAMMAROTA, Indranil BANERJEE, Matthew McGregor
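One classical way to obtain intermediates with constant Hamming weight, which may or may not match the patented encoding, is to map each nibble to an 8-bit codeword of weight 4: there are C(8, 4) = 70 such codewords, more than the 16 needed, so every encoded value leaks the same weight on a Hamming-weight side channel. A sketch, with all names invented:

```python
from itertools import combinations

# First 16 of the 70 eight-bit codewords with Hamming weight exactly 4.
CODEWORDS = [sum(1 << b for b in bits)
             for bits in combinations(range(8), 4)][:16]
ENCODE = dict(enumerate(CODEWORDS))          # nibble -> codeword
DECODE = {cw: n for n, cw in ENCODE.items()} # codeword -> nibble

assert all(bin(cw).count("1") == 4 for cw in CODEWORDS)  # constant weight
assert all(DECODE[ENCODE[n]] == n for n in range(16))    # invertible
```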
  • Patent number: 10673616
    Abstract: Techniques for mitigating side-channel attacks on cryptographic algorithms are provided. An example method according to these techniques includes applying a block cipher algorithm to input data to generate a cryptographic output, such that applying the block cipher to the input data comprises modifying an output of a stage of the block cipher algorithm such that each output of the stage of the block cipher algorithm has a constant Hamming weight, and outputting the cryptographic output.
    Type: Grant
    Filed: January 11, 2017
    Date of Patent: June 2, 2020
    Assignee: Qualcomm Incorporated
    Inventors: Rosario Cammarota, Indranil Banerjee, Matthew McGregor
  • Publication number: 20200104690
    Abstract: A neural processing unit (NPU) is described. The NPU includes an NPU direct memory access (NDMA) core. The NDMA core includes a read engine having a read buffer. The NDMA core also includes a write engine having a write buffer. The NPU also includes a controller. The controller is configured to direct the NDMA core to perform hardware pre-processing of NDMA data in the read buffer and post-processing of NDMA data in the write buffer on blocks of a data stripe to process tensors in artificial neural networks.
    Type: Application
    Filed: September 28, 2018
    Publication date: April 2, 2020
    Inventors: Jinxia BAI, Rosario CAMMAROTA, Michael GOLDFARB
  • Publication number: 20200104076
    Abstract: The present disclosure provides a method of accessing data from a first memory. The method may include receiving a command for accessing a first portion of the data. The data includes a plurality of words arranged as a multi-dimensional array of words that is stored contiguously in the first memory. The method may further include mapping the first portion of the data to a first portion of the plurality of words. The first portion of the plurality of words is not stored contiguously in the first memory. The method may further include accessing the first portion of the plurality of words while refraining from accessing at least a second portion of the plurality of words that is contiguously stored between at least two words of the first portion of the plurality of words.
    Type: Application
    Filed: September 27, 2019
    Publication date: April 2, 2020
    Inventors: Jinxia BAI, Rosario CAMMAROTA, Michael GOLDFARB
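The non-contiguous access pattern in the abstract above can be sketched as offset arithmetic over a row-major array: a requested sub-block maps to word offsets that skip the contiguously stored words outside the block. The names below are illustrative, not from the patent:

```python
def submatrix_offsets(rows, cols, r0, c0, h, w):
    """Word offsets of an h-by-w sub-block of a row-major rows-by-cols
    array stored contiguously. Offsets within a row are contiguous;
    between rows they skip the words outside the requested block."""
    return [(r0 + i) * cols + (c0 + j) for i in range(h) for j in range(w)]

flat = list(range(16))                       # 4x4 array stored contiguously
offs = submatrix_offsets(4, 4, 1, 1, 2, 2)   # the center 2x2 block
block = [flat[o] for o in offs]
assert block == [5, 6, 9, 10]                # words 7 and 8 are skipped
```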
  • Publication number: 20200104691
    Abstract: A neural processing unit (NPU) is described. The NPU includes an NPU direct memory access (NDMA) core. The NDMA core includes a read engine having a read buffer. The NDMA core also includes a write engine having a write buffer. The NPU also includes a controller. The controller is configured to direct the NDMA core to perform hardware memory bandwidth optimization for reading/writing NDMA data in the read buffer and/or NDMA data in the write buffer. The NDMA core is also configured to transparently combine NDMA transaction requests for a data stripe to increase local access to available tensors in artificial neural networks.
    Type: Application
    Filed: September 28, 2018
    Publication date: April 2, 2020
    Inventors: Jinxia BAI, Rosario CAMMAROTA, Michael GOLDFARB