Patents by Inventor John Wakefield

John Wakefield has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11669736
    Abstract: When performing neural network processing, the order in which the neural network processing is to be performed is determined, and the order in which weight values to be used for the neural network processing will be used is determined based on the determined order of the neural network processing. The weight values are then provided to the processor that is to perform the neural network processing in the determined order for the weight values, with the processor, when performing the neural network processing, then using the weight values in the determined order in which they are provided to the processor.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: June 6, 2023
    Assignee: Arm Limited
    Inventor: John Wakefield Brothers, III
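    Illustrative sketch: a minimal Python rendering of the weight-ordering idea in the abstract above. The pass structure, names, and toy values are assumptions for illustration, not taken from the patent.

      # Stream weights in the order the processing passes will consume them.
      from dataclasses import dataclass

      @dataclass
      class Pass:
          layer: str     # which layer this processing pass belongs to
          weights: list  # weight values this pass will use, in use order

      def plan_weight_stream(passes):
          """The processing order is determined first; the weight order follows from it."""
          stream = []
          for p in passes:
              stream.extend(p.weights)
          return stream

      def run(passes, weight_stream):
          """The processor consumes the weights sequentially, in the order provided."""
          it = iter(weight_stream)
          for p in passes:
              used = [next(it) for _ in p.weights]
              print(f"{p.layer}: used weights {used}")

      passes = [Pass("conv1", [0.1, 0.2]), Pass("conv2", [0.3]), Pass("fc", [0.4, 0.5])]
      run(passes, plan_weight_stream(passes))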
  • Publication number: 20230167164
    Abstract: The present application relates to immunocytokines comprising a cytokine or variant thereof positioned at the hinge region of a heavy chain of an antibody (e.g., full-length antibody), or positioned at a hinge region between an antigen-binding fragment (e.g., ligand, receptor, or antibody fragment) and an Fc domain subunit or portion thereof, methods of making, and uses thereof.
    Type: Application
    Filed: January 19, 2023
    Publication date: June 1, 2023
    Applicant: Immunowake Inc.
    Inventors: Ellen WU, Xiaoyun WU, John WAKEFIELD
  • Patent number: 11561795
    Abstract: Herein described is a method of operating an accumulation process in a data processing apparatus. The accumulation process comprises a plurality of accumulations which output a respective plurality of accumulated values, each based on a stored value and a computed value generated by a data processing operation. The method comprises storing a first accumulated value, the first accumulated value being one of said plurality of accumulated values, into a first storage device comprising a plurality of single-bit storage elements; determining that a predetermined trigger has been satisfied with respect to the accumulation process; and in response to the determining, storing at least a portion of a second accumulated value, the second accumulated value being one of said plurality of accumulated values, into a second storage device.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: January 24, 2023
    Assignee: Arm Limited
    Inventors: Jens Olson, John Wakefield Brothers, III, Jared Corey Smolens, Chi-wen Cheng, Daren Croxford, Sharjeel Saeed, Dominic Hugo Symes
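    Illustrative sketch: a minimal Python model of the two-level accumulation described in the abstract above. The trigger (every four accumulations) and the high/low bit split are assumptions for illustration only.

      ACC_BITS = 8         # width of the narrow "first storage device"
      TRIGGER_EVERY = 4    # hypothetical predetermined trigger

      def accumulate(values):
          first_store = 0    # narrow accumulator (single-bit storage elements)
          second_store = 0   # wider second storage device
          for i, v in enumerate(values, start=1):
              first_store += v
              if i % TRIGGER_EVERY == 0:                    # trigger satisfied
                  second_store += first_store >> ACC_BITS   # store the high-order portion
                  first_store &= (1 << ACC_BITS) - 1        # keep the low-order bits locally
          return (second_store << ACC_BITS) + first_store

      print(accumulate([200, 180, 220, 50, 90, 300, 10, 40]))  # equals the sum of the inputs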
  • Patent number: 11537860
    Abstract: A neural network processor is disclosed that includes a combined convolution and pooling circuit that can perform both convolution and pooling operations. The circuit can perform a convolution operation by a multiply circuit determining products of corresponding input feature map and convolution kernel weight values, and an add circuit accumulating the products determined by the multiply circuit in storage. The circuit can perform an average pooling operation by the add circuit accumulating input feature map data values in the storage, a divisor circuit determining a divisor value, and a division circuit dividing the data value accumulated in the storage by the determined divisor value. The circuit can perform a maximum pooling operation by a maximum circuit determining a maximum value of input feature map data values, and storing the determined maximum value in the storage.
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: December 27, 2022
    Assignee: Arm Limited
    Inventors: Rune Holm, John Wakefield Brothers, III
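    Illustrative sketch: the three operations from the abstract above expressed over a single window in Python, showing how convolution, average pooling, and maximum pooling share an accumulate-style datapath. Function names and the toy window are assumptions.

      import numpy as np

      def convolve_window(window, kernel):
          acc = 0.0
          for x, w in zip(window.ravel(), kernel.ravel()):
              acc += x * w              # multiply circuit feeding the add circuit
          return acc

      def average_pool_window(window):
          acc = 0.0
          for x in window.ravel():
              acc += x                  # add circuit only (multiply bypassed)
          divisor = window.size         # divisor circuit
          return acc / divisor          # division circuit

      def max_pool_window(window):
          best = -float("inf")
          for x in window.ravel():
              best = max(best, x)       # maximum circuit
          return best

      win = np.array([[1.0, 2.0], [3.0, 4.0]])
      k = np.array([[0.5, 0.5], [0.5, 0.5]])
      print(convolve_window(win, k), average_pool_window(win), max_pool_window(win))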
  • Patent number: 11493971
    Abstract: A method of power test analysis for an integrated circuit design including loading test vectors into a first sequence of flip-flops in scan mode, evaluating the test vectors and saving results of the evaluating in a second sequence of flip-flops in scan mode, reading results out of the second sequence of flip-flops to a scan chain, and calculating power generation based on the results. In one embodiment, the test vectors are received from an automatic test pattern generator.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: November 8, 2022
    Assignee: Synopsys, Inc.
    Inventors: Alexander John Wakefield, Khader Abdel-Hafez
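    Illustrative sketch: a toy Python walk-through of the scan-based flow in the abstract above. The XOR netlist and the toggle-count power model are assumptions, not the patented analysis.

      import random

      def evaluate(bits):
          """Toy combinational logic: each output is the XOR of neighbouring inputs."""
          return [bits[i] ^ bits[(i + 1) % len(bits)] for i in range(len(bits))]

      def power_test(test_vectors, energy_per_toggle=1.0):
          prev_capture = [0] * len(test_vectors[0])
          total_toggles = 0
          for vec in test_vectors:                  # e.g., produced by an ATPG tool
              launch_ffs = list(vec)                # scan-in to the first flip-flop sequence
              capture_ffs = evaluate(launch_ffs)    # evaluate, save in the second sequence
              scanned_out = list(capture_ffs)       # read out through the scan chain
              total_toggles += sum(a != b for a, b in zip(prev_capture, scanned_out))
              prev_capture = scanned_out
          return total_toggles * energy_per_toggle

      random.seed(0)
      vectors = [[random.randint(0, 1) for _ in range(8)] for _ in range(4)]
      print("estimated switching energy:", power_test(vectors))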
  • Publication number: 20220291203
    Abstract: The disclosure provides for compositions, methods, and kits for evaluating the effect of a cell-killing agent on a population of tumor cells (e.g., tumor cells that can inducibly express reporter protein).
    Type: Application
    Filed: July 24, 2020
    Publication date: September 15, 2022
    Inventors: Ellen WU, Xiaoyun WU, John WAKEFIELD
  • Patent number: 11443087
    Abstract: A system is disclosed that includes a memory and a processor configured to perform operations stored in the memory. The processor performs the operations to select a master clock for a plurality of clocks in a design logic circuit. The processor further performs the operations to align a clock edge of a clock of the plurality of clocks with a corresponding nearest clock transition of the master clock. The aligned clock edge of the clock limits a number of emulation cycles for the design logic to a fixed number of emulation cycles required for the master clock. The processor further performs the operation to determine a clock period for measuring power required for the design logic circuit and estimate, at the aligned clock edge, the power required for the design logic circuit corresponding to the determined clock period, which corresponds to a clock selected from the plurality of clocks and the master clock.
    Type: Grant
    Filed: May 15, 2020
    Date of Patent: September 13, 2022
    Assignee: Synopsys, Inc.
    Inventors: Alexander John Wakefield, Jitendra Gupta, Vaibhav Jain, Rahul Jain, Shweta Bansal
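    Illustrative sketch: a small Python model of snapping derived-clock edges onto the nearest master-clock transitions so emulation only needs the master clock's cycle boundaries. The per-edge power model is a placeholder assumption.

      def master_transitions(master_period, horizon):
          half = master_period / 2.0                    # rising and falling edges
          return [i * half for i in range(int(horizon / half) + 1)]

      def align_edges(clock_period, horizon, master_period):
          trans = master_transitions(master_period, horizon)
          raw = [i * clock_period for i in range(int(horizon / clock_period) + 1)]
          return [min(trans, key=lambda t: abs(t - e)) for e in raw]

      def estimate_power(clock_periods, master_period, horizon, energy_per_edge=1.0):
          edges = set()
          for p in clock_periods + [master_period]:
              edges.update(align_edges(p, horizon, master_period))
          return len(edges) * energy_per_edge / horizon  # average power over the window

      print(estimate_power(clock_periods=[3.0, 7.0], master_period=2.0, horizon=20.0))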
  • Publication number: 20220269941
    Abstract: A system and method to reduce weight storage bits for a deep-learning network includes a quantizing module and a cluster-number reduction module. The quantizing module quantizes neural weights of each quantization layer of the deep-learning network. The cluster-number reduction module reduces the predetermined number of clusters for a layer having a clustering error that is a minimum of the clustering errors of the plurality of quantization layers. The quantizing module requantizes the layer based on the reduced predetermined number of clusters for the layer and the cluster-number reduction module further determines another layer having a clustering error that is a minimum of the clustering errors of the plurality of quantized layers and reduces the predetermined number of clusters for the another layer until a recognition performance of the deep-learning network has been reduced by a predetermined threshold.
    Type: Application
    Filed: May 9, 2022
    Publication date: August 25, 2022
    Inventors: Zhengping JI, John Wakefield BROTHERS
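    Illustrative sketch: a compact Python version of cluster-based weight quantization with iterative cluster-count reduction, as described in the abstract above. The simple 1-D k-means and the error-budget stopping check (standing in for a measured drop in recognition performance) are assumptions.

      import numpy as np

      def kmeans_1d(weights, k, iters=20):
          centers = np.linspace(weights.min(), weights.max(), k)
          for _ in range(iters):
              assign = np.abs(weights[:, None] - centers[None, :]).argmin(axis=1)
              for c in range(k):
                  if np.any(assign == c):
                      centers[c] = weights[assign == c].mean()
          error = float(np.mean((weights - centers[assign]) ** 2))  # clustering error
          return centers, error

      def reduce_clusters(layers, k_init=16, error_budget=0.05):
          ks = {name: k_init for name in layers}
          errors = {name: kmeans_1d(w, k_init)[1] for name, w in layers.items()}
          while True:
              target = min(errors, key=errors.get)        # layer with the minimum error
              if ks[target] <= 2:
                  break
              ks[target] -= 1                             # reduce its cluster count
              _, errors[target] = kmeans_1d(layers[target], ks[target])  # requantize
              if sum(errors.values()) > error_budget:     # stand-in for the accuracy check
                  break
          return ks

      rng = np.random.default_rng(0)
      layers = {"conv1": rng.normal(size=256), "fc1": rng.normal(size=512)}
      print(reduce_clusters(layers))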
  • Publication number: 20220237461
    Abstract: A convolutional layer in a convolutional neural network uses a predetermined horizontal input stride and a predetermined vertical input stride that are greater than 1 while the hardware forming the convolutional layer operates using an input stride of 1. Each original weight kernel of a plurality of sets of original weight kernels is subdivided based on the predetermined horizontal and vertical input strides to form a set of a plurality of sub-kernels for each set of original weight kernels. Each of a plurality of IFMs is subdivided based on the predetermined horizontal and vertical input strides to form a plurality of sub-maps. Each sub-map is convolved by the corresponding sub-kernel for a set of original weight kernels using an input stride of 1. A convolved result of each sub-map and the corresponding sub-kernel is summed to form an output feature map.
    Type: Application
    Filed: April 15, 2022
    Publication date: July 28, 2022
    Inventor: John Wakefield BROTHERS
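    Illustrative sketch: a single-channel Python demonstration that the sub-kernel decomposition in the abstract above reproduces a strided convolution using only stride-1 computation. Helper names are ours; no padding is used.

      import numpy as np

      def correlate_valid(x, w):
          """Plain stride-1 'valid' correlation, standing in for the stride-1 hardware."""
          oh, ow = x.shape[0] - w.shape[0] + 1, x.shape[1] - w.shape[1] + 1
          out = np.zeros((oh, ow))
          for dy in range(w.shape[0]):
              for dx in range(w.shape[1]):
                  out += w[dy, dx] * x[dy:dy + oh, dx:dx + ow]
          return out

      def strided_conv_via_subkernels(x, w, sh, sw):
          oh = (x.shape[0] - w.shape[0]) // sh + 1
          ow = (x.shape[1] - w.shape[1]) // sw + 1
          out = np.zeros((oh, ow))
          for i in range(sh):
              for j in range(sw):
                  w_sub = w[i::sh, j::sw]      # sub-kernel for phase (i, j)
                  if w_sub.size == 0:
                      continue
                  x_sub = x[i::sh, j::sw]      # matching input sub-map
                  out += correlate_valid(x_sub, w_sub)[:oh, :ow]
          return out

      rng = np.random.default_rng(0)
      x, w = rng.normal(size=(9, 11)), rng.normal(size=(3, 3))
      reference = correlate_valid(x, w)[::2, ::2]          # direct stride-2 result
      assert np.allclose(strided_conv_via_subkernels(x, w, 2, 2), reference)
      print("sub-kernel decomposition matches the direct strided convolution")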
  • Patent number: 11392825
    Abstract: A system and method to reduce weight storage bits for a deep-learning network includes a quantizing module and a cluster-number reduction module. The quantizing module quantizes neural weights of each quantization layer of the deep-learning network. The cluster-number reduction module reduces the predetermined number of clusters for a layer having a clustering error that is a minimum of the clustering errors of the plurality of quantization layers. The quantizing module requantizes the layer based on the reduced predetermined number of clusters for the layer and the cluster-number reduction module further determines another layer having a clustering error that is a minimum of the clustering errors of the plurality of quantized layers and reduces the predetermined number of clusters for the another layer until a recognition performance of the deep-learning network has been reduced by a predetermined threshold.
    Type: Grant
    Filed: March 20, 2017
    Date of Patent: July 19, 2022
    Inventors: Zhengping Ji, John Wakefield Brothers
  • Patent number: 11373097
    Abstract: A convolutional layer in a convolutional neural network uses a predetermined horizontal input stride and a predetermined vertical input stride that are greater than 1 while the hardware forming the convolutional layer operates using an input stride of 1. Each original weight kernel of a plurality of sets of original weight kernels is subdivided based on the predetermined horizontal and vertical input strides to form a set of a plurality of sub-kernels for each set of original weight kernels. Each of a plurality of IFMs is subdivided based on the predetermined horizontal and vertical input strides to form a plurality of sub-maps. Each sub-map is convolved by the corresponding sub-kernel for a set of original weight kernels using an input stride of 1. A convolved result of each sub-map and the corresponding sub-kernel is summed to form an output feature map.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: June 28, 2022
    Inventor: John Wakefield Brothers
  • Publication number: 20220198243
    Abstract: A method of processing input data for a given layer of a neural network using a data processing system comprising compute resources for performing convolutional computations is described. The input data comprises a given set of input feature maps, IFMs, and a given set of filters. The method comprises generating a set of part-IFMs including pluralities of part-IFMs which correspond to respective IFMs, of the given set of IFMs. The method further includes grouping part-IFMs in the set of part-IFMs into a set of selections of part-IFMs. The method further includes convolving, by respective compute resources of the data processing system, the set of selections with the given set of filters to compute a set of part-output feature maps. A data processing system for processing input data for a given layer of a neural network is also described.
    Type: Application
    Filed: December 23, 2020
    Publication date: June 23, 2022
    Inventors: John Wakefield BROTHERS, III, Kartikeya BHARDWAJ, Alexander Eugene CHALFIN, Danny Daysang LOH
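    Illustrative sketch: a Python rendering of splitting IFMs into part-IFMs, convolving each selection separately, and combining the part-output feature maps. The row-band split (overlapping by kernel height minus one) and single-filter setup are assumptions chosen so the parts recombine exactly.

      import numpy as np

      def conv2d_valid(ifms, filt):
          """Stride-1 valid convolution of stacked IFMs (C, H, W) with one filter (C, kh, kw)."""
          C, H, W = ifms.shape
          _, kh, kw = filt.shape
          out = np.zeros((H - kh + 1, W - kw + 1))
          for c in range(C):
              for dy in range(kh):
                  for dx in range(kw):
                      out += filt[c, dy, dx] * ifms[c, dy:dy + out.shape[0], dx:dx + out.shape[1]]
          return out

      def split_into_selections(ifms, parts, kh):
          """Split every IFM into overlapping row bands; one selection per compute resource."""
          C, H, W = ifms.shape
          band = (H - kh + 1) // parts
          selections = []
          for p in range(parts):
              r0 = p * band
              r1 = H if p == parts - 1 else r0 + band + kh - 1
              selections.append(ifms[:, r0:r1, :])
          return selections

      def convolve_in_parts(ifms, filt, parts=2):
          selections = split_into_selections(ifms, parts, filt.shape[1])
          part_ofms = [conv2d_valid(sel, filt) for sel in selections]  # one per resource
          return np.concatenate(part_ofms, axis=0)                     # combine part-OFMs

      rng = np.random.default_rng(1)
      ifms, filt = rng.normal(size=(3, 10, 8)), rng.normal(size=(3, 3, 3))
      assert np.allclose(convolve_in_parts(ifms, filt), conv2d_valid(ifms, filt))
      print("part-OFMs combine to the full output feature map")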
  • Patent number: 11334700
    Abstract: A simulation application can be executed by a computer system to develop thermal maps for an electronic architectural design. The simulation application can simulate the electronic architectural design over time. The simulation application can capture electronic signals from the electronic architectural design as the electronic architectural design is being simulated over time. The simulation application can determine power consumptions of the electronic architectural design over time from the electronic signals. The simulation application can derive temperatures of the electronic architectural design over time from the power consumptions. The simulation application can map the temperatures onto an electronic circuit design real estate of the electronic architectural design to develop the thermal maps over time.
    Type: Grant
    Filed: May 15, 2020
    Date of Patent: May 17, 2022
    Assignee: Synopsys, Inc.
    Inventors: Alexander John Wakefield, Jitendra Kumar Gupta
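    Illustrative sketch: a toy Python flow from captured per-block power traces to a sequence of thermal maps over a floorplan grid. The first-order thermal model (ambient plus thermal resistance with a simple lag term) and all constants are assumptions for illustration.

      import numpy as np

      AMBIENT_C = 25.0   # ambient temperature, assumed
      R_TH = 8.0         # kelvin per watt, assumed
      ALPHA = 0.3        # fraction of the way toward steady state per timestep, assumed

      def thermal_maps(floorplan, power_traces):
          """floorplan: block -> (row, col) cell; power_traces: block -> watts per timestep."""
          steps = len(next(iter(power_traces.values())))
          rows = max(r for r, _ in floorplan.values()) + 1
          cols = max(c for _, c in floorplan.values()) + 1
          temp = np.full((rows, cols), AMBIENT_C)
          maps = []
          for t in range(steps):
              target = np.full((rows, cols), AMBIENT_C)
              for block, (r, c) in floorplan.items():
                  target[r, c] += R_TH * power_traces[block][t]  # steady-state temperature
              temp = temp + ALPHA * (target - temp)              # approach it gradually
              maps.append(temp.copy())
          return maps

      floorplan = {"cpu": (0, 0), "gpu": (0, 1), "sram": (1, 0), "io": (1, 1)}
      power = {"cpu": [2.0, 3.5, 1.0], "gpu": [4.0, 4.0, 0.5],
               "sram": [0.5, 0.6, 0.4], "io": [0.2, 0.2, 0.2]}
      for snapshot in thermal_maps(floorplan, power):
          print(np.round(snapshot, 1))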
  • Publication number: 20220092409
    Abstract: To perform neural network processing to modify an input data array to generate a corresponding output data array using a filter comprising an array of weight data, at least one of the input data array and the filter are subdivided into a plurality of portions, a plurality of neural network processing passes using the portions are performed, and the output generated by each processing pass is combined to provide the output data array.
    Type: Application
    Filed: September 23, 2020
    Publication date: March 24, 2022
    Applicant: Arm Limited
    Inventors: John Wakefield Brothers, III, Rune Holm, Elliott Maurice Simon Rosemarine
  • Publication number: 20220066909
    Abstract: A process is disclosed to identify the minimal set of sequential and combinational signals needed to fully reconstruct the combinational layout after emulation is complete. A minimal subset of sequential and combinational elements is output from the emulator to maximize the emulator speed and limit the utilization of emulator resources, e.g., FPGA resources. An efficient reconstruction of combinational waveforms or SAIF data is performed using a parallel computing grid.
    Type: Application
    Filed: November 11, 2021
    Publication date: March 3, 2022
    Inventors: Gagan Vishal Jain, Johnson Adaikalasamy, Alexander John Wakefield, Ritesh Mittal, Solaiman Rahim, Olivier Coudert
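    Illustrative sketch: a small Python example of recording only sequential (flip-flop) values and re-deriving the combinational waveforms afterwards, with the cycles reconstructed in parallel. The three-gate netlist and the use of a process pool as the "parallel computing grid" are assumptions.

      from multiprocessing import Pool

      # toy combinational netlist: each signal is a function of flip-flop outputs only
      NETLIST = {
          "n1": lambda ff: ff["q0"] & ff["q1"],
          "n2": lambda ff: ff["q1"] ^ ff["q2"],
          "n3": lambda ff: (ff["q0"] | ff["q2"]) & ff["q1"],
      }

      def reconstruct_cycle(ff_values):
          """Recompute every combinational signal for one captured cycle."""
          return {name: fn(ff_values) for name, fn in NETLIST.items()}

      def reconstruct_waveforms(captured_cycles, workers=2):
          with Pool(workers) as pool:        # reconstruction runs in parallel per cycle
              return pool.map(reconstruct_cycle, captured_cycles)

      if __name__ == "__main__":
          captured = [                        # the minimal data the emulator outputs
              {"q0": 0, "q1": 1, "q2": 1},
              {"q0": 1, "q1": 1, "q2": 0},
              {"q0": 1, "q1": 0, "q2": 1},
          ]
          for cycle, signals in enumerate(reconstruct_waveforms(captured)):
              print(cycle, signals)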
  • Patent number: 11200149
    Abstract: A process is disclosed to identify the minimal set of sequential and combinational signals needed to fully reconstruct the combinational layout after emulation is complete. A minimal subset of sequential and combinational elements is output from the emulator to maximize the emulator speed and limit the utilization of emulator resources, e.g., FPGA resources. An efficient reconstruction of combinational waveforms or SAIF data is performed using a parallel computing grid.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: December 14, 2021
    Assignee: Synopsys, Inc.
    Inventors: Gagan Vishal Jain, Johnson Adaikalasamy, Alexander John Wakefield, Ritesh Mittal, Solaiman Rahim, Olivier Coudert
  • Publication number: 20210333853
    Abstract: A method of power test analysis for an integrated circuit design including loading test vectors into a first sequence of flip-flops in scan mode, evaluating the test vectors and saving results of the evaluating in a second sequence of flip-flops in scan mode, reading results out of the second sequence of flip-flops to a scan chain, and calculating power generation based on the results. In one embodiment, the test vectors are received from an automatic test pattern generator.
    Type: Application
    Filed: April 26, 2021
    Publication date: October 28, 2021
    Applicant: Synopsys, Inc.
    Inventors: Alexander John Wakefield, Khader Abdel-Hafez
  • Publication number: 20210334643
    Abstract: A processing unit is described that receives an instruction to perform a first operation on a first layer of a neural network, block dependency data, and an instruction to perform a second operation on a second layer of the neural network. The processing unit performs the first operation, which includes dividing the first layer into a plurality of input blocks, and operating on the input blocks to generate a plurality of output blocks. The processing unit then performs the second operation after the first operation has generated a set number of output blocks defined by the block dependency data.
    Type: Application
    Filed: April 27, 2020
    Publication date: October 28, 2021
    Inventors: Dominic Hugo SYMES, John Wakefield BROTHERS, III, Fredrik Peter STOLT
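    Illustrative sketch: a short Python model of block-level dependencies between two layer operations, where the block dependency data gives the number of first-layer output blocks that must exist before the second operation starts. Names and the toy blocks are assumptions.

      def first_operation(input_blocks):
          """Divide the first layer into blocks and produce output blocks one at a time."""
          for i, block in enumerate(input_blocks):
              yield f"out{i}({block})"

      def run(input_blocks, block_dependency):
          produced = []
          second_started_after = None
          for out_block in first_operation(input_blocks):
              produced.append(out_block)
              if second_started_after is None and len(produced) >= block_dependency:
                  second_started_after = len(produced)   # second operation may begin here
          print(f"second operation started after {second_started_after} output blocks")
          print("output blocks available to it:", produced)

      run(["b0", "b1", "b2", "b3"], block_dependency=2)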
  • Publication number: 20210303307
    Abstract: Herein described is a method of operating an accumulation process in a data processing apparatus. The accumulation process comprises a plurality of accumulations which output a respective plurality of accumulated values, each based on a stored value and a computed value generated by a data processing operation. The method comprises storing a first accumulated value, the first accumulated value being one of said plurality of accumulated values, into a first storage device comprising a plurality of single-bit storage elements; determining that a predetermined trigger has been satisfied with respect to the accumulation process; and in response to the determining, storing at least a portion of a second accumulated value, the second accumulated value being one of said plurality of accumulated values, into a second storage device.
    Type: Application
    Filed: March 30, 2020
    Publication date: September 30, 2021
    Inventors: Jens OLSON, John Wakefield BROTHERS, III, Jared Corey SMOLENS, Chi-wen CHENG, Daren CROXFORD, Sharjeel SAEED, Dominic Hugo SYMES
  • Publication number: 20210303992
    Abstract: When performing neural network processing, the order in which the neural network processing is to be performed is determined, and the order in which weight values to be used for the neural network processing will be used is determined based on the determined order of the neural network processing. The weight values are then provided to the processor that is to perform the neural network processing in the determined order for the weight values, with the processor, when performing the neural network processing, then using the weight values in the determined order in which they are provided to the processor.
    Type: Application
    Filed: March 31, 2020
    Publication date: September 30, 2021
    Applicant: Arm Limited
    Inventor: John Wakefield Brothers, III