Patents by Inventor Nicholas Fraser
Nicholas Fraser has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250111231
Abstract: Embodiments herein describe pruning of technology-mapped machine learning-related circuits at bit-level granularity, including techniques to efficiently remove look-up tables (LUTs) of a technology-mapped netlist while maintaining a baseline accuracy of an underlying machine learning model. In an embodiment, a LUT output of a current circuit design is replaced with a constant value, and at least the LUT and LUTs within a maximum fanout-free cone (MFFC) are removed, to provide an optimized circuit design. The current circuit design or the optimized circuit design is selected as a solution based on corresponding training data-based accuracies and metrics (e.g., LUT utilization), and optimization criteria. If the optimized circuit design is rejected, inputs to the LUT may be evaluated for pruning. A set of solutions may be evaluated based on validation data-based accuracies and metrics of the corresponding circuit design. Solutions that do not meet a baseline accuracy may be discarded.
Type: Application
Filed: September 28, 2023
Publication date: April 3, 2025
Inventors: Linus Matthias Witschen, Michaela Blott, Nicholas Fraser, Thomas Bernd Preusser, Yaman Umuroglu
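A minimal sketch of the greedy pruning loop the abstract outlines, assuming a toy netlist representation and a caller-supplied accuracy oracle; the MFFC computation is simplified and the rewiring of the constant into downstream LUTs is elided, so this is illustrative rather than the patented method.

```python
# Illustrative bit-level LUT pruning: tie a LUT's output to a constant,
# drop the LUT and its (simplified) maximum fanout-free cone, and keep
# the smaller design if accuracy stays within tolerance of the baseline.
from dataclasses import dataclass, field

@dataclass
class Lut:
    fanins: list                                  # names of LUTs feeding this one
    fanouts: list = field(default_factory=list)   # names of LUTs it feeds

def mffc(netlist, root):
    """LUTs whose every fanout path stays inside the cone rooted at `root`."""
    cone, stack = {root}, [root]
    while stack:
        node = stack.pop()
        for fin in netlist[node].fanins:
            if fin in netlist and all(f in cone for f in netlist[fin].fanouts):
                cone.add(fin)
                stack.append(fin)
    return cone

def prune(netlist, accuracy_of, baseline, tolerance=0.01):
    """Greedily remove cones while training accuracy stays near `baseline`.
    `accuracy_of` is the caller's training-data accuracy oracle."""
    for name in list(netlist):
        if name not in netlist:
            continue  # already removed inside an earlier accepted cone
        candidate = {k: v for k, v in netlist.items()
                     if k not in mffc(netlist, name)}
        if accuracy_of(candidate) >= baseline - tolerance:
            netlist = candidate  # accept the optimized design
    return netlist
```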
-
Patent number: 12067484
Abstract: An example method of training a neural network includes defining hardware building blocks (HBBs), neuron equivalents (NEQs), and conversion procedures from NEQs to HBBs; defining the neural network using the NEQs in a machine learning framework; training the neural network on a training platform; and converting the neural network as trained into a netlist of HBBs using the conversion procedures to convert the NEQs in the neural network to the HBBs of the netlist.
Type: Grant
Filed: June 21, 2019
Date of Patent: August 20, 2024
Assignee: XILINX, INC.
Inventors: Yaman Umuroglu, Nicholas Fraser, Michaela Blott, Kristof Denolf, Kornelis A. Vissers
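One way to picture the NEQ-to-HBB conversion step is truth-table enumeration: a small quantized neuron with binary inputs converts exactly into a look-up table. The thresholded neuron below is an assumed example; the patent defines NEQs and HBBs more generally.

```python
# Convert a thresholded "neuron equivalent" into a LUT-style hardware
# building block by enumerating every binary input pattern.
import itertools

def neq_to_hbb(weights, threshold):
    """Return the truth table of a binary-input, thresholded neuron."""
    table = {}
    for bits in itertools.product((0, 1), repeat=len(weights)):
        acc = sum(w * b for w, b in zip(weights, bits))
        table[bits] = int(acc >= threshold)
    return table

# A 3-input NEQ becomes an 8-entry LUT.
hbb = neq_to_hbb(weights=[2, -1, 1], threshold=1)
print(hbb[(1, 0, 1)])  # -> 1, since 2 + 0 + 1 >= 1
```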
-
Patent number: 11934932
Abstract: Examples herein propose operating redundant ML models which have been trained using a boosting technique that considers hardware faults. The embodiments herein describe performing an evaluation process where the performance of a first ML model is measured in the presence of a hardware fault. The errors introduced by the hardware fault can then be used to train a second ML model. In one embodiment, a second evaluation process is performed where the combined performance of both the first and second trained ML models is measured in the presence of a hardware fault. The resulting errors can then be used when training a third ML model. In this manner, all three ML models are trained to be error aware. As a result, during operation, if a hardware fault occurs, the three ML models have better performance relative to three ML models that were not trained to be error aware.
Type: Grant
Filed: November 10, 2020
Date of Patent: March 19, 2024
Assignee: XILINX, INC.
Inventors: Giulio Gambardella, Nicholas Fraser, Ussama Zahid, Michaela Blott, Kornelis A. Vissers
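A hedged sketch of the fault-aware boosting loop described above: after each model is trained, the ensemble is evaluated with a fault injected, and the samples it misclassifies are upweighted for the next model. The `train` and `inject_fault` callables are caller-supplied stand-ins, not APIs from the patent.

```python
import numpy as np

def fault_aware_boost(X, y, train, inject_fault, n_models=3):
    """Train `n_models` sequentially, each emphasizing the samples the
    fault-injected ensemble so far gets wrong."""
    models = []
    weights = np.full(len(X), 1.0 / len(X))  # per-sample emphasis
    for _ in range(n_models):
        models.append(train(X, y, sample_weight=weights))
        # Measure the ensemble in the presence of a hardware fault.
        faulty = [inject_fault(m) for m in models]
        votes = np.mean([m.predict(X) for m in faulty], axis=0)
        wrong = votes.round() != y
        weights[wrong] *= 2.0          # upweight fault-induced errors
        weights /= weights.sum()
    return models
```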
-
Patent number: 11615300
Abstract: A neural network system includes an input layer, one or more hidden layers, and an output layer. A first layer circuit implements a first layer of the one or more hidden layers. The first layer includes a first weight space including one or more subgroups. A forward path circuit of the first layer circuit includes a multiply and accumulate circuit to receive an input from a layer preceding the first layer and provide a first subgroup weighted sum using the input and a first plurality of weights associated with a first subgroup. A scaling coefficient circuit provides a first scaling coefficient associated with the first subgroup, and applies the first scaling coefficient to the first subgroup weighted sum to generate a first subgroup scaled weighted sum. An activation circuit generates an activation based on the first subgroup scaled weighted sum and provides the activation to a layer following the first layer.
Type: Grant
Filed: June 13, 2018
Date of Patent: March 28, 2023
Assignee: XILINX, INC.
Inventors: Julian Faraone, Michaela Blott, Nicholas Fraser
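In numeric terms, the forward path computes a per-subgroup weighted sum, scales it by that subgroup's coefficient, and feeds the result to the activation. A minimal NumPy sketch, where the subgroup split, the shapes, and the final summation across subgroups are illustrative assumptions:

```python
import numpy as np

def subgroup_forward(x, subgroup_weights, alphas, act=np.tanh):
    """x: input vector from the preceding layer.
    subgroup_weights: one weight matrix per subgroup.
    alphas: one scaling coefficient per subgroup."""
    scaled = sum(a * (w @ x)                 # scale each subgroup's sum
                 for w, a in zip(subgroup_weights, alphas))
    return act(scaled)                       # activation for the next layer

x = np.array([1.0, -0.5])
w = [np.array([[1.0, 0.0]]), np.array([[0.0, 2.0]])]  # two subgroups
print(subgroup_forward(x, w, alphas=[0.5, 0.25]))     # -> [0.2449...]
```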
-
Patent number: 11274955
Abstract: A sensor device includes a signal transmitting component configured to transmit a signal into a container engaged by the sensor device, wherein a frequency of the signal is selected to allow the signal to at least in part pass through a content fouling at least a portion of the signal transmitting component. The sensor device includes a processor configured to process a received reflected version of the transmitted signal to determine an identifier associated with an amount of content in the container engaged by the sensor device. The sensor device includes a wireless transmitter configured to transmit the identifier associated with the amount of content in the container.
Type: Grant
Filed: June 11, 2019
Date of Patent: March 15, 2022
Assignee: Nectar, Inc.
Inventors: Prabhanjan C. Gurumohan, Samuel Bae, Nicholas Fraser, Krishna Gadiyaram, Aayush Phumbhra
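The processing step can be pictured as a time-of-flight calculation: the round-trip delay of the reflected signal gives the distance to the content surface, which maps to a fill-level identifier. The propagation speed, container height, and level quantization below are assumptions for illustration; the patent does not fix the signal type.

```python
SPEED_OF_SOUND_AIR = 343.0  # m/s; assumed acoustic signal

def fill_level_id(round_trip_s, container_height_m, n_levels=10):
    """Map a round-trip echo time to a coarse fill-level identifier."""
    distance = SPEED_OF_SOUND_AIR * round_trip_s / 2.0   # one-way distance
    fraction = max(0.0, min(1.0, 1.0 - distance / container_height_m))
    return round(fraction * n_levels)

# A 0.5 m container with a 1.5 ms echo: the surface is ~0.26 m down.
print(fill_level_id(1.5e-3, 0.5))  # -> 5
```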
-
Publication number: 20200401882
Abstract: An example method of training a neural network includes defining hardware building blocks (HBBs), neuron equivalents (NEQs), and conversion procedures from NEQs to HBBs; defining the neural network using the NEQs in a machine learning framework; training the neural network on a training platform; and converting the neural network as trained into a netlist of HBBs using the conversion procedures to convert the NEQs in the neural network to the HBBs of the netlist.
Type: Application
Filed: June 21, 2019
Publication date: December 24, 2020
Applicant: Xilinx, Inc.
Inventors: Yaman Umuroglu, Nicholas Fraser, Michaela Blott, Kristof Denolf, Kornelis A. Vissers
-
Patent number: 10839286
Abstract: A neural network system includes an input layer, one or more hidden layers, and an output layer. The input layer receives a training set including a sequence of batches and provides to its following layer output activations associated with the sequence of batches respectively. A first hidden layer receives, from its preceding layer, a first input activation associated with a first batch, receives a first input gradient associated with a second batch preceding the first batch, and provides, to its following layer, a first output activation associated with the first batch based on the first input activation and first input gradient. The first and second batches are separated by a delay factor of at least two batches. The output layer receives, from its preceding layer, a second input activation, and provides, to its preceding layer, a first output gradient based on the second input activation and the training set.
Type: Grant
Filed: September 14, 2017
Date of Patent: November 17, 2020
Assignee: XILINX, INC.
Inventors: Nicholas Fraser, Michaela Blott
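The delay factor lets forward and backward passes overlap: a layer processes the activation of batch t while applying the gradient that originated from batch t - d. A minimal sketch with caller-supplied `forward`, `backward`, and `update` stand-ins (in the real pipeline the gradient would arrive from downstream layers rather than be computed locally):

```python
from collections import deque

def pipelined_training(batches, forward, backward, update, delay=2):
    """Apply each batch's gradient `delay` batches after its forward pass,
    so new batches keep flowing while old gradients are still in flight."""
    grads = deque()
    for t, batch in enumerate(batches):
        activation = forward(batch)         # batch t moves forward now
        grads.append(backward(activation))  # its gradient is queued
        if t >= delay:
            update(grads.popleft())         # gradient of batch t - delay
```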
-
Publication number: 20200249066
Abstract: A device includes a coupling component configured to be coupled to a moveable component of a content dispensing mechanism. The device includes a sensor configured to detect movement of the content dispensing mechanism. The device includes a processor configured to, based at least in part on the detected movement of the content dispensing mechanism, determine one or more values corresponding to an amount of content dispensed by the content dispensing mechanism. The device includes a wireless signal transmitter configured to report the one or more values corresponding to the amount of content dispensed by the content dispensing mechanism.
Type: Application
Filed: February 4, 2020
Publication date: August 6, 2020
Inventors: Prabhanjan C. Gurumohan, Aayush Phumbhra, Samuel Bae, Nicholas Fraser, Sheshagiri Shenoy, Cedric Lecroc
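The movement-to-amount step reduces to a calibration: each detected actuation of the mechanism contributes a known dispensed volume. The per-event volume below is an invented calibration constant, purely for illustration.

```python
ML_PER_ACTUATION = 30.0  # assumed calibration, not from the publication

def dispensed_ml(movement_events):
    """Sum a calibrated volume over each detected dispensing movement."""
    return sum(ML_PER_ACTUATION for _ in movement_events)

print(dispensed_ml(range(4)))  # four detected actuations -> 120.0 ml
```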
-
Publication number: 20200104715
Abstract: An example method of implementing a neural network includes selecting a first neural network architecture from a search space and training the neural network having the first neural network architecture to obtain an accuracy and an implementation cost. The implementation cost is based on a programmable device of an inference platform. The method further includes selecting a second neural network architecture from the search space based on the accuracy and the implementation cost, and outputting weights and hyperparameters for the neural network having the second neural network architecture.
Type: Application
Filed: September 28, 2018
Publication date: April 2, 2020
Applicant: Xilinx, Inc.
Inventors: Kristof Denolf, Nicholas Fraser, Kornelis A. Vissers, Giulio Gambardella
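A hedged sketch of the cost-aware search loop: candidates are drawn from the search space, trained, and ranked by a combined accuracy/implementation-cost score. The linear trade-off and the `train_and_eval`/`cost_of` callables are assumptions; the publication does not prescribe a particular selection rule.

```python
def search(space, train_and_eval, cost_of, alpha=0.5):
    """space: iterable of candidate architectures.
    train_and_eval: returns trained accuracy for an architecture.
    cost_of: returns a device-specific implementation cost (e.g. LUTs)."""
    best, best_score = None, float("-inf")
    for arch in space:
        score = train_and_eval(arch) - alpha * cost_of(arch)
        if score > best_score:
            best, best_score = arch, score
    return best  # the selected second architecture
```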
-
Publication number: 20200003602
Abstract: A sensor device includes a signal transmitting component configured to transmit a signal into a container engaged by the sensor device, wherein a frequency of the signal is selected to allow the signal to at least in part pass through a content fouling at least a portion of the signal transmitting component. The sensor device includes a processor configured to process a received reflected version of the transmitted signal to determine an identifier associated with an amount of content in the container engaged by the sensor device. The sensor device includes a wireless transmitter configured to transmit the identifier associated with the amount of content in the container.
Type: Application
Filed: June 11, 2019
Publication date: January 2, 2020
Inventors: Prabhanjan C. Gurumohan, Samuel Bae, Nicholas Fraser, Krishna Gadiyaram, Aayush Phumbhra
-
Publication number: 20190080223
Abstract: A neural network system includes an input layer, one or more hidden layers, and an output layer. The input layer receives a training set including a sequence of batches and provides to its following layer output activations associated with the sequence of batches respectively. A first hidden layer receives, from its preceding layer, a first input activation associated with a first batch, receives a first input gradient associated with a second batch preceding the first batch, and provides, to its following layer, a first output activation associated with the first batch based on the first input activation and first input gradient. The first and second batches are separated by a delay factor of at least two batches. The output layer receives, from its preceding layer, a second input activation, and provides, to its preceding layer, a first output gradient based on the second input activation and the training set.
Type: Application
Filed: September 14, 2017
Publication date: March 14, 2019
Applicant: Xilinx, Inc.
Inventors: Nicholas Fraser, Michaela Blott
-
Publication number: 20180328776
Abstract: A sensor device includes a transmitter located in a container cover that is configured to engage an opening of a container. The sensor device includes a propagation chamber with a first opening configured to receive a signal emitted by the transmitter and guide the signal out of the propagation chamber via a second opening. Cross-sections of the propagation chamber vary along at least a portion of a length of the propagation chamber, and the propagation chamber includes one or more transitions where a rate of change of a property of the propagation chamber changes. The sensor device includes a signal processor included in the container cover and configured to use data associated with a reflection of the emitted signal to determine an identifier of a content fill level of the container.
Type: Application
Filed: May 10, 2018
Publication date: November 15, 2018
Inventors: Prabhanjan C. Gurumohan, Samuel Bae, Krishna Gadiyaram, Nicholas Fraser