Patents by Inventor John Wakefield
John Wakefield has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11669736
Abstract: When performing neural network processing, the order in which the neural network processing is to be performed is determined, and the order in which weight values to be used for the neural network processing will be used is determined based on the determined order of the neural network processing. The weight values are then provided to the processor that is to perform the neural network processing in the determined order for the weight values, with the processor, when performing the neural network processing, then using the weight values in the determined order that they are provided to the processor.
Type: Grant
Filed: March 31, 2020
Date of Patent: June 6, 2023
Assignee: Arm Limited
Inventor: John Wakefield Brothers, III
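The abstract above describes streaming weights to the processor in the order the work will consume them. A minimal Python sketch of that idea (the function name and layer-keyed layout are illustrative assumptions, not from the patent):

```python
def order_weights(weights_by_layer, execution_order):
    # Stream weights in the order the layers will execute, so the
    # processor can consume them strictly in the order they arrive.
    # weights_by_layer: dict mapping layer name -> list of weight values.
    # execution_order: layer names in the determined processing order.
    stream = []
    for layer in execution_order:
        stream.extend(weights_by_layer[layer])
    return stream
```

Because the stream matches the processing order, the processor needs no random access into weight storage; it simply reads the next value.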
-
Publication number: 20230167164
Abstract: The present application relates to immunocytokines comprising a cytokine or variant thereof positioned at the hinge region of a heavy chain of an antibody (e.g., full-length antibody), or positioned at a hinge region between an antigen-binding fragment (e.g., ligand, receptor, or antibody fragment) and an Fc domain subunit or portion thereof, methods of making, and uses thereof.
Type: Application
Filed: January 19, 2023
Publication date: June 1, 2023
Applicant: Immunowake Inc.
Inventors: Ellen Wu, Xiaoyun Wu, John Wakefield
-
Patent number: 11561795
Abstract: Herein described is a method of operating an accumulation process in a data processing apparatus. The accumulation process comprises a plurality of accumulations which output a respective plurality of accumulated values, each based on a stored value and a computed value generated by a data processing operation. The method comprises storing a first accumulated value, the first accumulated value being one of said plurality of accumulated values, into a first storage device comprising a plurality of single-bit storage elements; determining that a predetermined trigger has been satisfied with respect to the accumulation process; and in response to the determining, storing at least a portion of a second accumulated value, the second accumulated value being one of said plurality of accumulated values, into a second storage device.Type: Grant
Filed: March 30, 2020
Date of Patent: January 24, 2023
Assignee: Arm Limited
Inventors: Jens Olson, John Wakefield Brothers, III, Jared Corey Smolens, Chi-wen Cheng, Daren Croxford, Sharjeel Saeed, Dominic Hugo Symes
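The abstract above splits an accumulated value between a narrow first storage device and a second storage device that receives a portion of the value when a trigger fires. A loose Python sketch under my own assumptions (the trigger here is overflow of the low-order field; the patent's trigger is more general):

```python
def accumulate_split(values, low_bits=8):
    # Keep the running sum in a narrow "first storage device" (low field);
    # when the predetermined trigger fires -- here, overflow of that field --
    # move the overflowing upper portion into the "second storage device".
    mask = (1 << low_bits) - 1
    low, high = 0, 0
    for v in values:
        low += v
        if low > mask:                 # trigger: low field would overflow
            high += low >> low_bits    # spill upper portion to second device
            low &= mask
    return (high << low_bits) + low    # reconstruct the full accumulated value
```

Keeping most updates confined to the narrow first device is what lets such a design use fewer single-bit storage elements on the hot path.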
-
Patent number: 11537860
Abstract: A neural network processor is disclosed that includes a combined convolution and pooling circuit that can perform both convolution and pooling operations. The circuit can perform a convolution operation by a multiply circuit determining products of corresponding input feature map and convolution kernel weight values, and an add circuit accumulating the products determined by the multiply circuit in storage. The circuit can perform an average pooling operation by the add circuit accumulating input feature map data values in the storage, a divisor circuit determining a divisor value, and a division circuit dividing the data value accumulated in the storage by the determined divisor value. The circuit can perform a maximum pooling operation by a maximum circuit determining a maximum value of input feature map data values, and storing the determined maximum value in the storage.
Type: Grant
Filed: March 23, 2020
Date of Patent: December 27, 2022
Assignee: Arm Limited
Inventors: Rune Holm, John Wakefield Brothers, III
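The three operating modes described above can be modeled in software; this hypothetical sketch mirrors how one datapath's adder and storage are reused across modes (function names are mine, not from the patent):

```python
def convolve(window, weights):
    # Convolution mode: the multiply circuit forms products of corresponding
    # IFM and kernel weight values; the add circuit accumulates them.
    acc = 0
    for x, w in zip(window, weights):
        acc += x * w
    return acc

def average_pool(window):
    # Average pooling mode: the same adder accumulates IFM values, then the
    # division circuit divides by the divisor determined by the divisor circuit.
    acc = 0
    for x in window:
        acc += x
    divisor = len(window)
    return acc / divisor

def max_pool(window):
    # Maximum pooling mode: a comparator keeps the running maximum in storage.
    m = window[0]
    for x in window[1:]:
        if x > m:
            m = x
    return m
```

The accumulation loop is identical in the first two modes, which is the point of combining convolution and pooling in one circuit.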
-
Patent number: 11493971
Abstract: A method of power test analysis for an integrated circuit design including loading test vectors into a first sequence of flip-flops in scan mode, evaluating the test vectors and saving results of the evaluating in a second sequence of flip-flops in scan mode, reading results out of the second sequence of flip-flops to a scan chain, and calculating power generation based on the results. In one embodiment, the test vectors are received from an automatic test pattern generator.
Type: Grant
Filed: April 26, 2021
Date of Patent: November 8, 2022
Assignee: Synopsys, Inc.
Inventors: Alexander John Wakefield, Khader Abdel-Hafez
-
Publication number: 20220291203
Abstract: The disclosure provides for compositions, methods, and kits for evaluating the effect of a cell-killing agent on a population of tumor cells (e.g., tumor cells that can inducibly express reporter protein).
Type: Application
Filed: July 24, 2020
Publication date: September 15, 2022
Inventors: Ellen Wu, Xiaoyun Wu, John Wakefield
-
Patent number: 11443087
Abstract: A system is disclosed that includes a memory and a processor configured to perform operations stored in the memory. The processor performs the operations to select a master clock for a plurality of clocks in a design logic circuit. The processor further performs the operations to align a clock edge of a clock of the plurality of clocks with a corresponding nearest clock transition of the master clock. The aligned clock edge of the clock limits a number of emulation cycles for the design logic to a fixed number of emulation cycles required for the master clock. The processor further performs the operation to determine a clock period for measuring power required for the design logic circuit and estimate, at the aligned clock edge, the power required for the design logic circuit corresponding to the determined clock period, which corresponds to a clock selected from the plurality of clocks and the master clock.
Type: Grant
Filed: May 15, 2020
Date of Patent: September 13, 2022
Assignee: Synopsys, Inc.
Inventors: Alexander John Wakefield, Jitendra Gupta, Vaibhav Jain, Rahul Jain, Shweta Bansal
-
Publication number: 20220269941
Abstract: A system and method to reduce weight storage bits for a deep-learning network includes a quantizing module and a cluster-number reduction module. The quantizing module quantizes neural weights of each quantization layer of the deep-learning network. The cluster-number reduction module reduces the predetermined number of clusters for a layer having a clustering error that is a minimum of the clustering errors of the plurality of quantization layers. The quantizing module requantizes the layer based on the reduced predetermined number of clusters for the layer and the cluster-number reduction module further determines another layer having a clustering error that is a minimum of the clustering errors of the plurality of quantized layers and reduces the predetermined number of clusters for the another layer until a recognition performance of the deep-learning network has been reduced by a predetermined threshold.
Type: Application
Filed: May 9, 2022
Publication date: August 25, 2022
Inventors: Zhengping Ji, John Wakefield Brothers
-
Publication number: 20220237461
Abstract: A convolutional layer in a convolutional neural network uses a predetermined horizontal input stride and a predetermined vertical input stride that are greater than 1 while the hardware forming the convolutional layer operates using an input stride of 1. Each original weight kernel of a plurality of sets of original weight kernels is subdivided based on the predetermined horizontal and vertical input strides to form a set of a plurality of sub-kernels for each set of original weight kernels. Each of a plurality of IFMs is subdivided based on the predetermined horizontal and vertical input strides to form a plurality of sub-maps. Each sub-map is convolved by the corresponding sub-kernel for a set of original weight kernels using an input stride of 1. A convolved result of each sub-map and the corresponding sub-kernel is summed to form an output feature map.
Type: Application
Filed: April 15, 2022
Publication date: July 28, 2022
Inventor: John Wakefield Brothers
-
Patent number: 11392825
Abstract: A system and method to reduce weight storage bits for a deep-learning network includes a quantizing module and a cluster-number reduction module. The quantizing module quantizes neural weights of each quantization layer of the deep-learning network. The cluster-number reduction module reduces the predetermined number of clusters for a layer having a clustering error that is a minimum of the clustering errors of the plurality of quantization layers. The quantizing module requantizes the layer based on the reduced predetermined number of clusters for the layer and the cluster-number reduction module further determines another layer having a clustering error that is a minimum of the clustering errors of the plurality of quantized layers and reduces the predetermined number of clusters for the another layer until a recognition performance of the deep-learning network has been reduced by a predetermined threshold.
Type: Grant
Filed: March 20, 2017
Date of Patent: July 19, 2022
Inventors: Zhengping Ji, John Wakefield Brothers
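The quantize/reduce loop in the abstract above hinges on two pieces: quantizing a layer's weights to a fixed number of clusters and picking the layer whose clustering error is smallest. A toy Python sketch under my own simplifications (uniform cluster centres stand in for k-means; names are illustrative):

```python
def quantize_layer(weights, n_clusters):
    # Toy 1-D quantizer: place cluster centres uniformly between the min
    # and max weight and snap each weight to its nearest centre
    # (a stand-in for clustering the layer's weights).
    lo, hi = min(weights), max(weights)
    if n_clusters == 1:
        centres = [(lo + hi) / 2]
    else:
        step = (hi - lo) / (n_clusters - 1)
        centres = [lo + i * step for i in range(n_clusters)]
    return [min(centres, key=lambda c: abs(w - c)) for w in weights]

def clustering_error(weights, quantized):
    # Sum of squared distances between original and quantized weights.
    return sum((w - q) ** 2 for w, q in zip(weights, quantized))

def layer_to_reduce(layers, clusters):
    # Pick the layer whose current quantization has the smallest clustering
    # error; per the scheme, that layer loses one cluster and is requantized.
    errors = {name: clustering_error(w, quantize_layer(w, clusters[name]))
              for name, w in layers.items()}
    return min(errors, key=errors.get)
```

The outer loop (not shown) would repeat this selection and requantization until the network's recognition performance drops by the predetermined threshold.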
-
Patent number: 11373097
Abstract: A convolutional layer in a convolutional neural network uses a predetermined horizontal input stride and a predetermined vertical input stride that are greater than 1 while the hardware forming the convolutional layer operates using an input stride of 1. Each original weight kernel of a plurality of sets of original weight kernels is subdivided based on the predetermined horizontal and vertical input strides to form a set of a plurality of sub-kernels for each set of original weight kernels. Each of a plurality of IFMs is subdivided based on the predetermined horizontal and vertical input strides to form a plurality of sub-maps. Each sub-map is convolved by the corresponding sub-kernel for a set of original weight kernels using an input stride of 1. A convolved result of each sub-map and the corresponding sub-kernel is summed to form an output feature map.
Type: Grant
Filed: July 14, 2020
Date of Patent: June 28, 2022
Inventor: John Wakefield Brothers
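The decomposition described above can be demonstrated in one dimension: a stride-s convolution equals the elementwise sum of s stride-1 convolutions of sub-maps with matching sub-kernels, where each pair takes every s-th element starting at a different offset. A Python sketch (function names are mine; the patent works on 2-D kernels and IFMs, this reduces it to 1-D for clarity):

```python
def conv1d(x, w, stride=1):
    # Direct 1-D convolution (cross-correlation) with the given input stride.
    out_len = (len(x) - len(w)) // stride + 1
    return [sum(x[i * stride + k] * w[k] for k in range(len(w)))
            for i in range(out_len)]

def strided_conv_via_subkernels(x, w, stride):
    # Subdivide kernel and input by index modulo the stride, convolve each
    # sub-map with its sub-kernel at stride 1, and sum the partial results.
    out_len = (len(x) - len(w)) // stride + 1
    total = [0] * out_len
    for r in range(stride):
        sub_w = w[r::stride]                      # sub-kernel r
        sub_x = x[r::stride]                      # sub-map r
        partial = conv1d(sub_x, sub_w, stride=1)  # hardware-friendly stride 1
        for i in range(out_len):
            total[i] += partial[i]
    return total
```

This lets fixed stride-1 hardware compute any integer-stride layer without redesigning the datapath.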
-
Publication number: 20220198243
Abstract: A method of processing input data for a given layer of a neural network using a data processing system comprising compute resources for performing convolutional computations is described. The input data comprises a given set of input feature maps, IFMs, and a given set of filters. The method comprises generating a set of part-IFMs including pluralities of part-IFMs which correspond to respective IFMs, of the given set of IFMs. The method further includes grouping part-IFMs in the set of part-IFMs into a set of selections of part-IFMs. The method further includes convolving, by respective compute resources of the data processing system, the set of selections with the given set of filters to compute a set of part-output feature maps. A data processing system for processing input data for a given layer of a neural network is also described.
Type: Application
Filed: December 23, 2020
Publication date: June 23, 2022
Inventors: John Wakefield Brothers, III, Kartikeya Bhardwaj, Alexander Eugene Chalfin, Danny Daysang Loh
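One way to illustrate the split-and-combine idea in the abstract above is channel partitioning: distribute the IFM/filter channels across compute resources, convolve each selection independently, and sum the part-OFMs. This is a loose Python sketch under my own simplifying assumptions (1-D data, round-robin channel selections; the patent's part-IFM grouping is more general):

```python
def conv_channels(ifms, filters):
    # Reference: accumulate the stride-1 convolution of every IFM with its
    # matching filter into a single output feature map.
    out_len = len(ifms[0]) - len(filters[0]) + 1
    out = [0] * out_len
    for x, w in zip(ifms, filters):
        for i in range(out_len):
            out[i] += sum(x[i + k] * w[k] for k in range(len(w)))
    return out

def conv_partitioned(ifms, filters, n_resources):
    # Group channels into selections, convolve each selection on its own
    # "compute resource", then sum the part-OFMs elementwise.
    out_len = len(ifms[0]) - len(filters[0]) + 1
    part_ofms = []
    for r in range(n_resources):
        sel = list(range(r, len(ifms), n_resources))  # channels for resource r
        part_ofms.append(conv_channels([ifms[c] for c in sel],
                                       [filters[c] for c in sel]))
    return [sum(p[i] for p in part_ofms) for i in range(out_len)]
```

Because convolution is linear, the partitioned result matches the single-resource reference exactly while allowing the selections to run in parallel.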
-
Patent number: 11334700
Abstract: A simulation application can be executed by a computer system to develop thermal maps for an electronic architectural design. The simulation application can simulate the electronic architectural design over time. The simulation application can capture electronic signals from the electronic architectural design as the electronic architectural design is being simulated over time. The simulation application can determine power consumptions of the electronic architectural design over time from the electronic signals. The simulation application can derive temperatures of the electronic architectural design over time from the power consumptions. The simulation application can map the temperatures onto an electronic circuit design real estate of the electronic architectural design to develop the thermal maps over time.
Type: Grant
Filed: May 15, 2020
Date of Patent: May 17, 2022
Assignee: Synopsys, Inc.
Inventors: Alexander John Wakefield, Jitendra Kumar Gupta
-
Publication number: 20220092409
Abstract: To perform neural network processing to modify an input data array to generate a corresponding output data array using a filter comprising an array of weight data, at least one of the input data array and the filter are subdivided into a plurality of portions, a plurality of neural network processing passes using the portions are performed, and the output generated by each processing pass is combined to provide the output data array.
Type: Application
Filed: September 23, 2020
Publication date: March 24, 2022
Applicant: Arm Limited
Inventors: John Wakefield Brothers, III, Rune Holm, Elliott Maurice Simon Rosemarine
-
Publication number: 20220066909
Abstract: A process is disclosed to identify the minimal set of sequential and combinational signals needed to fully reconstruct the combinational layout after emulation is complete. A minimal subset of sequential and combinational elements is output from the emulator to maximize the emulator speed and limit the utilization of emulator resources, e.g., FPGA resources. An efficient reconstruction of combinational waveforms or SAIF data is performed using a parallel computing grid.
Type: Application
Filed: November 11, 2021
Publication date: March 3, 2022
Inventors: Gagan Vishal Jain, Johnson Adaikalasamy, Alexander John Wakefield, Ritesh Mittal, Solaiman Rahim, Olivier Coudert
-
Patent number: 11200149
Abstract: A process is disclosed to identify the minimal set of sequential and combinational signals needed to fully reconstruct the combinational layout after emulation is complete. A minimal subset of sequential and combinational elements is output from the emulator to maximize the emulator speed and limit the utilization of emulator resources, e.g., FPGA resources. An efficient reconstruction of combinational waveforms or SAIF data is performed using a parallel computing grid.
Type: Grant
Filed: November 13, 2017
Date of Patent: December 14, 2021
Assignee: Synopsys, Inc.
Inventors: Gagan Vishal Jain, Johnson Adaikalasamy, Alexander John Wakefield, Ritesh Mittal, Solaiman Rahim, Olivier Coudert
-
Publication number: 20210333853
Abstract: A method of power test analysis for an integrated circuit design including loading test vectors into a first sequence of flip-flops in scan mode, evaluating the test vectors and saving results of the evaluating in a second sequence of flip-flops in scan mode, reading results out of the second sequence of flip-flops to a scan chain, and calculating power generation based on the results. In one embodiment, the test vectors are received from an automatic test pattern generator.
Type: Application
Filed: April 26, 2021
Publication date: October 28, 2021
Applicant: Synopsys, Inc.
Inventors: Alexander John Wakefield, Khader Abdel-Hafez
-
Publication number: 20210334643
Abstract: A processing unit is described that receives an instruction to perform a first operation on a first layer of a neural network, block dependency data, and an instruction to perform a second operation on a second layer of the neural network. The processing unit performs the first operation, which includes dividing the first layer into a plurality of input blocks, and operating on the input blocks to generate a plurality of output blocks. The processing unit then performs the second operation after the first operation has generated a set number of output blocks defined by the block dependency data.
Type: Application
Filed: April 27, 2020
Publication date: October 28, 2021
Inventors: Dominic Hugo Symes, John Wakefield Brothers, III, Fredrik Peter Stolt
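The block-dependency scheduling described above can be sketched in Python: the first operation runs block by block, and the point at which the dependency count is satisfied marks where the second operation may begin (names and the sequential model are my assumptions; a real processing unit would overlap the two operations):

```python
def run_with_dependency(input_blocks, dep_count, op1, op2):
    # First operation: process layer 1's input blocks into output blocks.
    # The second operation may start once dep_count output blocks exist;
    # here we record the index at which that point is reached.
    out_blocks, second_start = [], None
    for idx, block in enumerate(input_blocks):
        out_blocks.append(op1(block))
        if second_start is None and len(out_blocks) >= dep_count:
            second_start = idx  # layer 2 becomes runnable after this block
    layer2 = [op2(b) for b in out_blocks]
    return layer2, second_start
```

Encoding the dependency as a block count, rather than waiting for the whole first layer, is what allows the two layers to be pipelined.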
-
Publication number: 20210303307
Abstract: Herein described is a method of operating an accumulation process in a data processing apparatus. The accumulation process comprises a plurality of accumulations which output a respective plurality of accumulated values, each based on a stored value and a computed value generated by a data processing operation. The method comprises storing a first accumulated value, the first accumulated value being one of said plurality of accumulated values, into a first storage device comprising a plurality of single-bit storage elements; determining that a predetermined trigger has been satisfied with respect to the accumulation process; and in response to the determining, storing at least a portion of a second accumulated value, the second accumulated value being one of said plurality of accumulated values, into a second storage device.
Type: Application
Filed: March 30, 2020
Publication date: September 30, 2021
Inventors: Jens Olson, John Wakefield Brothers, III, Jared Corey Smolens, Chi-wen Cheng, Daren Croxford, Sharjeel Saeed, Dominic Hugo Symes
-
Publication number: 20210303992
Abstract: When performing neural network processing, the order in which the neural network processing is to be performed is determined, and the order in which weight values to be used for the neural network processing will be used is determined based on the determined order of the neural network processing. The weight values are then provided to the processor that is to perform the neural network processing in the determined order for the weight values, with the processor, when performing the neural network processing, then using the weight values in the determined order that they are provided to the processor.
Type: Application
Filed: March 31, 2020
Publication date: September 30, 2021
Applicant: Arm Limited
Inventor: John Wakefield Brothers, III