Patents by Inventor Nishit SHAH

Nishit SHAH has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240106863
    Abstract: Methods, systems, and computer readable media for network security are described. In some implementations, security tasks and roles can be allocated between an endpoint device and a firewall device based on tag information sent from the endpoint, the tag information including one or more characteristics of a traffic flow, information about resource availability, and/or the reputation of a process associated with the traffic flow.
    Type: Application
    Filed: October 9, 2023
    Publication date: March 28, 2024
    Applicant: Sophos Limited
    Inventors: Andy THOMAS, Nishit SHAH, Daniel STUTZ
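The tag-driven allocation this abstract describes can be sketched in Python. Everything below — the tag fields, the thresholds, the policy — is a hypothetical illustration, not the claimed method:

```python
from dataclasses import dataclass

@dataclass
class FlowTag:
    """Tag information sent from the endpoint (all fields hypothetical)."""
    flow_is_encrypted: bool      # characteristic of the traffic flow
    endpoint_cpu_free: float     # fraction of endpoint CPU available
    process_reputation: float    # 0.0 (bad) .. 1.0 (trusted)

def allocate_scan(tag: FlowTag) -> str:
    """Decide whether the endpoint or the firewall scans this flow."""
    # A low-reputation process always gets the heavier firewall inspection.
    if tag.process_reputation < 0.3:
        return "firewall"
    # Encrypted flows are easiest to inspect where plaintext exists: the endpoint.
    if tag.flow_is_encrypted and tag.endpoint_cpu_free > 0.5:
        return "endpoint"
    # Otherwise, offload only when the endpoint is resource-constrained.
    return "endpoint" if tag.endpoint_cpu_free > 0.2 else "firewall"

print(allocate_scan(FlowTag(True, 0.8, 0.9)))   # endpoint
```

The point of the claim is the split itself: the endpoint contributes context the firewall cannot see, and the firewall takes work the endpoint cannot afford.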
  • Patent number: 11942970
    Abstract: Embodiments of the present disclosure include techniques for compressing data using a tree encoded bit mask that may result in higher compression ratios. In one embodiment, an input vector having a plurality of values is received by a first plurality of switch circuits. Selection of the input values is controlled by sets of bits from the bit mask. The sets of bits specify locations of portions of the input vector where particular values of interest reside. The switch circuits output multiple values of the input vector, which include the particular values of interest. A second stage of switch circuits is controlled by a logic circuit that detects values on the outputs of the first stage of switch circuits and outputs the values of interest. In some embodiments, the values of interest may be non-zero values of a sparse input vector, and the switch circuits may be multiplexers.
    Type: Grant
    Filed: March 4, 2022
    Date of Patent: March 26, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Nishit Shah, Ankit More, Mattheus C. Heddes
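A software analogue of the tree-encoded bit mask may make the idea concrete. The two-level mask below (a coarse per-block bit plus per-element bits for non-empty blocks only) is an assumed encoding; the patent's switch-circuit hardware and exact tree layout are not reproduced here:

```python
def tree_encode(vec, block=4):
    """Compress a sparse vector with a two-level (tree) bit mask.

    Level 1: one bit per block saying whether the block has any non-zeros.
    Level 2: a per-element bit mask, stored only for non-empty blocks.
    An all-zero block costs a single bit, which is where the higher
    compression ratio comes from.
    """
    blocks = [vec[i:i + block] for i in range(0, len(vec), block)]
    top_mask = [int(any(b)) for b in blocks]
    leaf_masks, values = [], []
    for b in blocks:
        if any(b):
            leaf_masks.append([int(v != 0) for v in b])
            values.extend(v for v in b if v != 0)
    return top_mask, leaf_masks, values

def tree_decode(top_mask, leaf_masks, values, block=4):
    """Mimic the two switch stages: mask bits steer values back into place."""
    out, vi, li = [], 0, 0
    for bit in top_mask:
        if not bit:
            out.extend([0] * block)     # empty block: nothing was stored
            continue
        for m in leaf_masks[li]:
            if m:
                out.append(values[vi]); vi += 1
            else:
                out.append(0)
        li += 1
    return out

vec = [0, 0, 0, 0, 5, 0, 7, 0, 0, 0, 0, 0]
assert tree_decode(*tree_encode(vec)) == vec
```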
  • Publication number: 20240062168
    Abstract: A system and associated method for authorizing a transaction between a teller device and a user device is disclosed. The system can include a teller device configured to generate a remote signing notification. The system can include a server device configured to receive the remote signing notification from the teller device and generate a transaction document based on the remote signing notification. The system can include a user device configured to display a webpage based on a URI, capture a signature in a signature section of the webpage, and transmit the signature to the server device to authorize the transaction.
    Type: Application
    Filed: November 1, 2023
    Publication date: February 22, 2024
    Applicant: INTEGRATED MEDIA MANAGEMENT, LLC
    Inventors: Nishit SHAH, David Aranovsky, John A. Levy
  • Patent number: 11886981
    Abstract: A compiler generates a computer program implementing a machine learning network on a machine learning accelerator (MLA) including interconnected processing elements. The computer program includes data transfer instructions for non-colliding data transfers between the processing elements. To generate the data transfer instructions, the compiler determines non-conflicting data transfer paths for data transfers based on a topology of the interconnections between processing elements, on dependencies of the instructions and on a duration for execution of the instructions. Each data transfer path specifies a routing and a time slot for the data transfer. The compiler generates data transfer instructions that specify routing of the data transfers and generates a static schedule that schedules execution of the data transfer instructions during the time slots for the data transfers.
    Type: Grant
    Filed: May 1, 2020
    Date of Patent: January 30, 2024
    Assignee: SiMa Technologies, Inc.
    Inventors: Nishit Shah, Srivathsa Dhruvanarayan, Reed Kotler
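The collision-free scheduling this abstract describes can be illustrated with a greedy software sketch. The data model (routes as lists of links, one hop per time slot, an earliest-start standing in for instruction dependencies) and the greedy strategy are assumptions for illustration, not the compiler's actual algorithm:

```python
def schedule_transfers(transfers):
    """Greedily assign each data transfer a start slot so that no two
    transfers use the same link in the same time slot.

    `transfers` is a list of (name, route, earliest_start), where `route`
    is the sequence of links (pairs of processing elements) the data
    crosses, one link per time slot.
    """
    busy = {}       # (link, slot) -> transfer name occupying that link
    schedule = {}
    for name, route, earliest in transfers:
        slot = earliest
        # Advance the start slot until every hop's link is free.
        while any((link, slot + i) in busy for i, link in enumerate(route)):
            slot += 1
        for i, link in enumerate(route):
            busy[(link, slot + i)] = name
        schedule[name] = slot
    return schedule

# Two transfers share link (1, 2); the second is pushed to a later slot.
s = schedule_transfers([
    ("a", [(0, 1), (1, 2)], 0),
    ("b", [(1, 2), (2, 3)], 1),
])
print(s)  # {'a': 0, 'b': 2}
```

Because every route and slot is fixed before execution, the resulting schedule is static: no run-time arbitration is needed on the interconnect.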
  • Patent number: 11848689
    Abstract: Embodiments of the present disclosure include a digital circuit and method for compressing input digital values. A plurality of input digital values may include zero values and non-zero values. The input digital values are received on M inputs of a first switching stage. The first switching stage is arranged in groups that rearrange the non-zero values on first switching stage outputs according to a compression and shift. The compression and shift position the non-zero values on outputs coupled to inputs of a second switching stage. The second switching stage consecutively couples non-zero values to N outputs, where N is less than M.
    Type: Grant
    Filed: March 4, 2022
    Date of Patent: December 19, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ankit More, Mattheus C. Heddes, Nishit Shah
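A behavioral model of the two switching stages, in Python rather than hardware; the group size and the exact stage split are illustrative assumptions:

```python
def compress_shift(inputs, n, group=4):
    """Software analogue of the two-stage compressor (illustrative only).

    Stage 1 works per group of `group` inputs: each group shifts its
    non-zero values to the front of the group (the compress and shift).
    Stage 2 then packs the groups' survivors consecutively onto N
    outputs, where N is less than M (the input count).
    """
    m = len(inputs)
    assert n < m
    # Stage 1: per-group compression and shift.
    staged = []
    for i in range(0, m, group):
        g = inputs[i:i + group]
        nz = [v for v in g if v != 0]
        staged.append(nz + [0] * (len(g) - len(nz)))
    # Stage 2: consecutively couple non-zero values to the N outputs.
    packed = [v for g in staged for v in g if v != 0]
    return (packed + [0] * n)[:n]

print(compress_shift([0, 3, 0, 1, 0, 0, 7, 0], n=4))  # [3, 1, 7, 0]
```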
  • Patent number: 11824537
    Abstract: An interleaved ring oscillator includes a first ring oscillator having n stages and a second ring oscillator having n stages, wherein each stage includes an nth first gated inverter in the first ring oscillator and an nth second gated inverter in the second ring oscillator, such that the output from the nth first gated inverter enables the nth second gated inverter, and the output from the nth second gated inverter enables the nth first gated inverter.
    Type: Grant
    Filed: August 6, 2020
    Date of Patent: November 21, 2023
    Inventors: Nishit Shah, Pedram Lajevardi, Kenneth Wojciechowski, Christoph Lang
  • Patent number: 11803740
    Abstract: A compiler manages memory usage in a machine learning accelerator (MLA) by intelligently ordering computations of a machine learning network (MLN). The compiler identifies partial networks of the MLN representing portions of the network across multiple layers on which an output or set of outputs depends. Because any given output may depend on only a limited subset of intermediate outputs from the prior layers, each partial network may include only a small fraction of the intermediate outputs from each layer. Instead of implementing the MLN by computing one layer at a time, the compiler schedules instructions to sequentially implement partial networks. As each layer of a partial network is completed, the intermediate outputs can be released from memory. The described technique enables intermediate outputs to be streamed directly between processing elements of the MLA without requiring large transfers to and from external memory.
    Type: Grant
    Filed: February 8, 2023
    Date of Patent: October 31, 2023
    Assignee: SiMa Technologies, Inc.
    Inventors: Reed Kotler, Nishit Shah
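The partial-network idea — trace each output back through only the intermediate values it needs, then release them — can be sketched as follows. The layer arithmetic (each value is the sum of a small window of the previous layer) and the `receptive` window size are stand-ins for a real network, not the patented compiler's actual scheduling:

```python
def run_partial_networks(layers, receptive, out_size):
    """Illustrative depth-first evaluation of 1-D 'partial networks'.

    Instead of computing one full layer at a time, each output is traced
    back through the subset of intermediate values it depends on
    (`receptive` inputs per layer), so only that slice is ever resident.
    """
    def value(layer, idx, cache):
        if layer == 0:
            return float(idx)            # stand-in for a network input
        key = (layer, idx)
        if key not in cache:
            cache[key] = sum(value(layer - 1, idx + k, cache)
                             for k in range(receptive))
        return cache[key]

    outputs = []
    for i in range(out_size):
        cache = {}                       # the partial network's working set
        outputs.append(value(layers, i, cache))
        # `cache` is dropped here: intermediate outputs are released as
        # soon as the partial network for output i completes.
    return outputs

print(run_partial_networks(layers=2, receptive=2, out_size=2))  # [4.0, 8.0]
```

At no point does a full layer of intermediate outputs exist in memory, which is the property that lets values stream between processing elements instead of spilling to external memory.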
  • Publication number: 20230334374
    Abstract: A compiler receives a description of a machine learning network and generates a computer program that implements the machine learning network. The compiler allocates instructions of the computer program to different groups of processing elements (Tiles) for execution such that different groups of Tiles implement different layers of the machine learning network. The compiler may determine the size of the different groups based on a partial computation metric associated with the computations performed to implement the corresponding layer. Furthermore, the compiler may assign specific Tiles to each group based on a set of predefined layout constraints. The compiler may statically schedule at least a portion of the instructions into one or more deterministic phases for execution by the groups of Tiles.
    Type: Application
    Filed: June 26, 2023
    Publication date: October 19, 2023
    Inventors: Reed Kotler, Nishit Shah
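Proportional group sizing of the kind this abstract describes might look like the sketch below. The MAC-count metric and the rounding policy are assumptions; the actual partial computation metric and the predefined layout constraints are not given here:

```python
def size_tile_groups(layer_macs, total_tiles):
    """Assign Tile-group sizes proportional to each layer's share of the
    work, using multiply-accumulate (MAC) counts as the stand-in metric.

    Every layer gets at least one Tile, and rounding leftovers go to the
    heaviest layers first.
    """
    total = sum(layer_macs)
    sizes = [max(1, (m * total_tiles) // total) for m in layer_macs]
    # Hand out any remaining tiles, heaviest layers first.
    order = sorted(range(len(layer_macs)), key=lambda i: -layer_macs[i])
    i = 0
    while sum(sizes) < total_tiles:
        sizes[order[i % len(order)]] += 1
        i += 1
    return sizes

print(size_tile_groups([100, 50, 50], total_tiles=8))  # [4, 2, 2]
```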
  • Publication number: 20230333739
    Abstract: Embodiments of the present disclosure include a digital circuit and method for multi-stage compression. Digital data values are compressed using a multi-stage compression algorithm and stored in a memory. A decompression circuit receives the values and performs a partial decompression. The partially decompressed values are provided to a processor, which performs the final decompression. In one embodiment, a vector of N compressed values is decompressed using a first bit mask into two N-length sets having non-zero values. The two sets are further decompressed using two M-length bit masks into M-length sparse vectors, each having non-zero values.
    Type: Application
    Filed: June 23, 2023
    Publication date: October 19, 2023
    Inventors: Mattheus C. HEDDES, Ankit MORE, Nishit SHAH, Torsten HOEFLER
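A two-stage decompression in the spirit of this abstract can be sketched in Python. The mask layout (one routing bit per packed value for stage 1, then one scatter mask per set for stage 2) is an assumed encoding, not the patent's exact scheme:

```python
def two_stage_decompress(values, first_mask, leaf_masks):
    """Illustrative two-stage bit-mask decompression.

    Stage 1 (the decompression circuit): `first_mask` routes the packed
    non-zero `values` into two sets, one per output vector.
    Stage 2 (the processor): each set is expanded with its own M-length
    bit mask into a full M-length sparse vector.
    """
    # Stage 1: split the values; bit 0 -> set 0, bit 1 -> set 1.
    sets = ([], [])
    for bit, v in zip(first_mask, values):
        sets[bit].append(v)
    # Stage 2: scatter each set into an M-length sparse vector.
    out = []
    for s, mask in zip(sets, leaf_masks):
        it = iter(s)
        out.append([next(it) if b else 0 for b in mask])
    return out

vecs = two_stage_decompress(
    values=[5, 7, 2],
    first_mask=[0, 1, 1],
    leaf_masks=[[0, 1, 0, 0], [1, 0, 0, 1]],
)
print(vecs)  # [[0, 5, 0, 0], [7, 0, 0, 2]]
```

Splitting the work this way keeps the hardware stage simple while the processor, which already touches the data, finishes the expansion.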
  • Patent number: 11792228
    Abstract: Methods, systems, and computer readable media for network security are described. In some implementations, security tasks and roles can be allocated between an endpoint device and a firewall device based on tag information sent from the endpoint, the tag information including one or more characteristics of a traffic flow, information about resource availability, and/or the reputation of a process associated with the traffic flow.
    Type: Grant
    Filed: January 21, 2021
    Date of Patent: October 17, 2023
    Assignee: Sophos Limited
    Inventors: Andy Thomas, Nishit Shah, Daniel Stutz
  • Publication number: 20230318620
    Abstract: Embodiments of the present disclosure include a digital circuit and method for compressing input digital values. A plurality of input digital values may include zero values and non-zero values. The input digital values are received on M inputs of a first switching stage. The first switching stage is arranged in groups that rearrange the non-zero values on first switching stage outputs according to a compression and shift. The compression and shift position the non-zero values on outputs coupled to inputs of a second switching stage. The second switching stage consecutively couples non-zero values to N outputs, where N is less than M.
    Type: Application
    Filed: March 4, 2022
    Publication date: October 5, 2023
    Inventors: Ankit MORE, Mattheus C. HEDDES, Nishit SHAH
  • Publication number: 20230283296
    Abstract: Embodiments of the present disclosure include techniques for compressing data using a tree encoded bit mask that may result in higher compression ratios. In one embodiment, an input vector having a plurality of values is received by a first plurality of switch circuits. Selection of the input values is controlled by sets of bits from the bit mask. The sets of bits specify locations of portions of the input vector where particular values of interest reside. The switch circuits output multiple values of the input vector, which include the particular values of interest. A second stage of switch circuits is controlled by a logic circuit that detects values on the outputs of the first stage of switch circuits and outputs the values of interest. In some embodiments, the values of interest may be non-zero values of a sparse input vector, and the switch circuits may be multiplexers.
    Type: Application
    Filed: March 4, 2022
    Publication date: September 7, 2023
    Inventors: Nishit SHAH, Ankit MORE, Mattheus C. HEDDES
  • Patent number: 11734605
    Abstract: A compiler receives a description of a machine learning network and generates a computer program that implements the machine learning network. The compiler allocates instructions of the computer program to different groups of processing elements (Tiles) for execution such that different groups of Tiles implement different layers of the machine learning network. The compiler may determine the size of the different groups based on a partial computation metric associated with the computations performed to implement the corresponding layer. Furthermore, the compiler may assign specific Tiles to each group based on a set of predefined layout constraints. The compiler may statically schedule at least a portion of the instructions into one or more deterministic phases for execution by the groups of Tiles.
    Type: Grant
    Filed: April 29, 2020
    Date of Patent: August 22, 2023
    Assignee: SiMa Technologies, Inc.
    Inventors: Reed Kotler, Nishit Shah
  • Patent number: 11734549
    Abstract: A compiler receives a description of a machine learning network (MLN) and generates a computer program that implements the MLN on a machine learning accelerator (MLA). To implement the MLN, the compiler generates compute instructions that implement computations of the MLN on different processing units (Tiles), and data transfer instructions that transfer data used in the computations. The compiler may statically schedule at least a portion of the instructions for execution by the Tiles according to fixed timing. The compiler may initially implement data transfers between non-adjacent Tiles (or external memories) by implementing a sequence of transfers through one or more intermediate Tiles (or external memories) in accordance with a set of default routing rules that dictates the data path. The computer program may then be simulated to identify routing conflicts. When routing conflicts are detected, the compiler updates the computer program in a manner that avoids the conflicts.
    Type: Grant
    Filed: April 21, 2020
    Date of Patent: August 22, 2023
    Assignee: SiMa Technologies, Inc.
    Inventors: Reed Kotler, Nishit Shah
  • Patent number: 11720252
    Abstract: Embodiments of the present disclosure include a digital circuit and method for multi-stage compression. Digital data values are compressed using a multi-stage compression algorithm and stored in a memory. A decompression circuit receives the values and performs a partial decompression. The partially decompressed values are provided to a processor, which performs the final decompression. In one embodiment, a vector of N compressed values is decompressed using a first bit mask into two N-length sets having non-zero values. The two sets are further decompressed using two M-length bit masks into M-length sparse vectors, each having non-zero values.
    Type: Grant
    Filed: March 4, 2022
    Date of Patent: August 8, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mattheus C. Heddes, Ankit More, Nishit Shah, Torsten Hoefler
  • Publication number: 20230186063
    Abstract: A compiler manages memory usage in a machine learning accelerator (MLA) by intelligently ordering computations of a machine learning network (MLN). The compiler identifies partial networks of the MLN representing portions of the network across multiple layers on which an output or set of outputs depends. Because any given output may depend on only a limited subset of intermediate outputs from the prior layers, each partial network may include only a small fraction of the intermediate outputs from each layer. Instead of implementing the MLN by computing one layer at a time, the compiler schedules instructions to sequentially implement partial networks. As each layer of a partial network is completed, the intermediate outputs can be released from memory. The described technique enables intermediate outputs to be streamed directly between processing elements of the MLA without requiring large transfers to and from external memory.
    Type: Application
    Filed: February 8, 2023
    Publication date: June 15, 2023
    Inventors: Reed Kotler, Nishit Shah
  • Patent number: 11641408
    Abstract: A system of configuring a new device may include a new device that is not configured with one or more settings. The new device includes a short range communication transmitter and programming instructions configured to cause the new device to operate in a discoverable mode. The system includes an existing device that is configured with the settings and that includes a short range communication receiver and programming instructions. The programming instructions are configured to cause the existing device to receive instructions to set up the new device; in response to receiving the instructions, detect, by the short range communication receiver, a presence of the new device by detecting a broadcast signal from the new device within a communication range of the short range communication receiver; and, in response to detecting the presence of the new device, transmit at least a portion of the one or more settings directly to the new device.
    Type: Grant
    Filed: October 29, 2021
    Date of Patent: May 2, 2023
    Assignee: Google LLC
    Inventors: Ushasree Kode, Nishit Shah, Ibrahim Damlaj, Michal Levin, Thomas Weedon Hume
  • Patent number: 11631001
    Abstract: A system-on-chip (SoC) integrated circuit product includes a machine learning accelerator (MLA). It also includes other processor cores, such as general purpose processors and application-specific processors. It also includes a network-on-chip for communication between the different modules. The SoC implements a heterogeneous compute environment because the processor cores are customized for different purposes and typically will use different instruction sets. Applications may use some or all of the functionalities offered by the processor cores, and the processor cores may be programmed into different pipelines to perform different tasks.
    Type: Grant
    Filed: April 10, 2020
    Date of Patent: April 18, 2023
    Assignee: SiMa Technologies, Inc.
    Inventors: Srivathsa Dhruvanarayan, Nishit Shah, Bradley Taylor, Moenes Zaher Iskarous
  • Patent number: 11586894
    Abstract: A compiler efficiently manages memory usage in a machine learning accelerator (MLA) by intelligently ordering computations of a machine learning network (MLN). The compiler identifies a set of partial networks of the MLN representing portions of the network across multiple layers on which an output or set of outputs depends. Because any given output may depend on only a limited subset of intermediate outputs from the prior layers, each partial network may include only a small fraction of the intermediate outputs from each layer. Instead of implementing the MLN by computing one layer at a time, the compiler schedules instructions to sequentially implement partial networks. As each layer of a partial network is completed, the intermediate outputs can be released from memory. The described technique enables intermediate outputs to be streamed directly between processing elements of the MLA without requiring large transfers to and from external memory.
    Type: Grant
    Filed: May 4, 2020
    Date of Patent: February 21, 2023
    Assignee: SiMa Technologies, Inc.
    Inventors: Reed Kotler, Nishit Shah
  • Publication number: 20230023303
    Abstract: A compiler receives a description of a machine learning network and generates a computer program that implements the machine learning network. The computer program includes statically scheduled instructions that are executed by a mesh of processing elements (Tiles). The instructions executed by the Tiles are statically scheduled because the compiler can determine which instructions are executed by which Tiles at what times. For example, for the statically scheduled instructions, there are no conditions, branching or data dependencies that can be resolved only at run-time, and which would affect the timing and order of the execution of the instructions.
    Type: Application
    Filed: October 3, 2022
    Publication date: January 26, 2023
    Inventors: Nishit Shah, Reed Kotler, Srivathsa Dhruvanarayan, Moenes Zaher Iskarous, Kavitha Prasad, Yogesh Laxmikant Chobe, Sedny S.J Attia, Spenser Don Gilliland, Bradley Taylor