Patents by Inventor Partha Prasun MAJI
Partha Prasun MAJI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250231987
Abstract: For a set of data points which are to be processed according to neural network processing, each data point corresponding to a position in space, data point information indicative of one or more properties of the data points is received (500), and connectivity information indicative of connections between the data points is determined (503). An order for the data points is then determined (504) based on the positions in space of the data points, and updated connectivity information is generated (505) based on the initial connectivity information and the determined order for the set of data points. The updated connectivity information and data point information are provided for further processing (507) to be performed by a processor operable to execute neural network processing.
Type: Application
Filed: March 29, 2023
Publication date: July 17, 2025
Applicant: Arm Limited
Inventors: Shyam Tailor, Tiago Manuel Lourenço Azevedo, Partha Prasun Maji
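The abstract above describes reordering spatially positioned data points and remapping their connectivity for downstream neural network processing. A minimal Python sketch of that flow, assuming a simple row-major ordering rule and an edge-list representation of connectivity (both are illustrative choices, not the patented method):

```python
# Illustrative sketch (not the patented method): reorder 2-D data points by
# their position in space, then remap the connectivity information (an edge
# list) so it refers to the new order.

def order_points(points):
    """Return indices that sort points by (y, x) position."""
    return sorted(range(len(points)), key=lambda i: (points[i][1], points[i][0]))

def update_connectivity(edges, order):
    """Remap an edge list to the reordered point indices."""
    # old index -> new index after reordering
    new_index = {old: new for new, old in enumerate(order)}
    return [(new_index[a], new_index[b]) for a, b in edges]

points = [(2.0, 1.0), (0.0, 0.0), (1.0, 0.0)]   # (x, y) positions
edges = [(0, 1), (1, 2)]                        # initial connectivity

order = order_points(points)                    # spatial order of the points
new_edges = update_connectivity(edges, order)   # updated connectivity
```

The ordered points and the remapped edges would then be handed to the neural-network processor together.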
-
Publication number: 20240331106
Abstract: A method for filtering adversarial noise from an input signal is provided. The method comprises receiving an input signal which has an unknown level of adversarial noise. The input signal is filtered with a neural network to remove noise from the received input signal, thereby producing a filtered signal. A confidence value is calculated, the confidence value being associated with the filtered signal, and indicative of a level of trust relating to the filtered signal. The filtered signal and the confidence value may then be output.
Type: Application
Filed: August 10, 2022
Publication date: October 3, 2024
Applicant: Arm Limited
Inventors: Irenéus Johannes De Jong, Partha Prasun Maji
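A rough sketch of the two outputs described above. A moving average stands in for the neural-network filter, and the confidence is derived from the size of the removed residual; both choices are assumptions for illustration only:

```python
# Sketch only: a simple smoother stands in for the denoising neural network,
# and confidence falls as the amount of removed "noise" grows.

def filter_signal(signal, window=3):
    """Smooth the signal; a stand-in for the neural-network filter."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        seg = signal[max(0, i - half):i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

def confidence(signal, filtered):
    """Map the mean removed-residual magnitude to a value in (0, 1]."""
    residual = sum(abs(a - b) for a, b in zip(signal, filtered)) / len(signal)
    return 1.0 / (1.0 + residual)   # larger residual -> lower trust

noisy = [1.0, 5.0, 1.0, 1.0, 1.0]
clean = filter_signal(noisy)       # the filtered signal
conf = confidence(noisy, clean)    # the associated confidence value
```

Both the filtered signal and the confidence value are then output, as the abstract describes.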
-
Publication number: 20240045653
Abstract: An apparatus and method of converting data into an Enhanced Block Floating Point (EBFP) format with a shared exponent is provided. The EBFP format enables data within a wide range of values to be stored using a reduced number of bits compared with conventional floating-point or fixed-point formats. The data to be converted may be in any other format, such as fixed-point, floating-point, block floating-point or EBFP.
Type: Application
Filed: August 1, 2022
Publication date: February 8, 2024
Applicant: Arm Limited
Inventors: Neil Burgess, Sangwon Ha, Partha Prasun Maji
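The abstract does not spell out the EBFP encoding itself, but the shared-exponent idea it builds on can be sketched with plain block floating-point: a block of values is stored as one shared exponent plus small fixed-point significands. Bit widths and rounding below are assumptions:

```python
import math

# Sketch of the shared-exponent idea behind block floating-point (EBFP adds
# per-element encodings not reproduced here). A block of floating-point
# values becomes one shared exponent plus small integer significands.

def to_bfp(values, sig_bits=8):
    """Convert values to (shared_exponent, integer significands)."""
    # shared exponent = largest exponent in the block (0 if all zero)
    shared = max((math.frexp(v)[1] for v in values if v != 0.0), default=0)
    scale = 2.0 ** (shared - sig_bits)
    sigs = [round(v / scale) for v in values]
    return shared, sigs

def from_bfp(shared, sigs, sig_bits=8):
    """Reconstruct approximate floating-point values."""
    scale = 2.0 ** (shared - sig_bits)
    return [s * scale for s in sigs]

shared, sigs = to_bfp([0.75, 0.125, -0.5])
decoded = from_bfp(shared, sigs)
```

Storing one exponent per block rather than per value is what saves bits relative to conventional floating point.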
-
Publication number: 20240036821
Abstract: In a data processor, an input datum, having a sign, a tag and a payload, is decoded by first determining a format of the payload based on the tag. For a first format, an exponent difference and an output fraction are decoded from the payload. For a second format, an exponent difference is decoded from the payload and the output fraction may be assumed to be zero. The exponent difference is subtracted from a shared exponent to produce the output exponent. The decoded output may be stored in a standard format for floating-point numbers.
Type: Application
Filed: May 18, 2023
Publication date: February 1, 2024
Applicant: Arm Limited
Inventors: Neil Burgess, Sangwon Ha, Partha Prasun Maji
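A hedged sketch of the two-format decode described above. The field widths and packing are assumptions, not the patent's actual bit assignments; what the sketch preserves is the flow: the tag selects the payload format, and the output exponent is the shared exponent minus the decoded exponent difference:

```python
# Illustrative decoder; bit layout is assumed (4-bit fraction), not taken
# from the patent.

def decode(sign, tag, payload, shared_exp, frac_bits=4):
    if tag == 0:
        # first format: payload holds an exponent difference and a fraction
        exp_diff = payload >> frac_bits
        fraction = payload & ((1 << frac_bits) - 1)
    else:
        # second format: payload holds only an exponent difference;
        # the output fraction is assumed to be zero
        exp_diff = payload
        fraction = 0
    exponent = shared_exp - exp_diff     # subtract from the shared exponent
    return sign, exponent, fraction

# tag-0 datum under shared exponent 10: exp_diff 2, fraction 0b1010
result = decode(sign=0, tag=0, payload=(2 << 4) | 0b1010, shared_exp=10)
```

The resulting (sign, exponent, fraction) triple could then be stored in a standard floating-point format.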
-
Publication number: 20240036824
Abstract: In a data processor, an input value having a sign, an exponent and a significand is encoded by determining an exponent difference between a base exponent and the exponent. When the exponent difference is not less than a first threshold, only the exponent difference, or a designated value, is encoded to a payload of the output value and one or more tag bits of the output value are set to a first value. When the exponent difference is less than the first threshold, the significand and exponent difference are encoded to the payload of an output value and, optionally, the one or more tag bits of the output value. A sign bit in the output value is set corresponding to the sign of the input value, and the output value is stored.
Type: Application
Filed: June 23, 2023
Publication date: February 1, 2024
Applicant: Arm Limited
Inventors: Neil Burgess, Sangwon Ha, Partha Prasun Maji
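The encode direction described above can be sketched as follows. The threshold, field widths, and packing are assumptions chosen for the sketch, not the patent's layout:

```python
# Illustrative encoder: values far below the base exponent keep only their
# exponent difference; values near it keep significand bits too.

def encode(sign, exponent, significand, base_exp, threshold=8, frac_bits=4):
    exp_diff = base_exp - exponent
    if exp_diff >= threshold:
        # difference not less than the threshold: encode only the exponent
        # difference (clamped to a designated maximum) and set the tag
        tag, payload = 1, min(exp_diff, (1 << frac_bits) - 1)
    else:
        # difference below the threshold: encode the exponent difference
        # and the significand fraction into the payload
        tag, payload = 0, (exp_diff << frac_bits) | (significand & ((1 << frac_bits) - 1))
    return sign, tag, payload        # sign bit carried through unchanged

small = encode(sign=0, exponent=-2, significand=0b1010, base_exp=10)  # diff 12
near = encode(sign=1, exponent=8, significand=0b1010, base_exp=10)    # diff 2
```

Spending payload bits on significand detail only for values near the base exponent is what lets a wide dynamic range fit in few bits.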
-
Publication number: 20240036822
Abstract: A data processing apparatus is configured to determine a product of two operands stored in an Extended Block Floating-Point format. The operands are decoded, based on their tags and payloads, to generate exponent differences and at least the fractional parts of significands. The significands are multiplied to generate an output significand and shared exponents and exponent differences of the operands are combined to generate an output exponent. Signs of the operands may also be combined to provide an output sign. The apparatus may be combined with an accumulator having one or more lanes to provide an apparatus for determining dot products.
Type: Application
Filed: August 1, 2022
Publication date: February 1, 2024
Applicant: Arm Limited
Inventors: Neil Burgess, Sangwon Ha, Partha Prasun Maji
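The product path described above can be sketched with plain integers in place of packed tags and payloads (decoding is assumed to have already produced each operand's exponent difference and significand); field names are illustrative:

```python
# Sketch of the multiply step: significands multiply, exponents add,
# signs combine by XOR.

def ebfp_multiply(a, b, shared_a, shared_b):
    """a and b are (sign, exp_diff, significand) triples after decoding."""
    sign_a, diff_a, sig_a = a
    sign_b, diff_b, sig_b = b
    out_sign = sign_a ^ sign_b                           # combine signs
    out_sig = sig_a * sig_b                              # significand product
    out_exp = (shared_a - diff_a) + (shared_b - diff_b)  # combine exponents
    return out_sign, out_exp, out_sig

prod = ebfp_multiply((0, 1, 3), (1, 2, 5), shared_a=10, shared_b=4)
```

Feeding such products into a multi-lane accumulator gives the dot-product apparatus the abstract mentions.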
-
Publication number: 20240028235
Abstract: Briefly, embodiments such as methods and/or systems are described for employing external memory devices in the execution of activation functions, such as activation functions implemented in a neural network. In one aspect, a first activation input tensor may be partitioned into a plurality of tensor segments stored in one or more external memory devices. Individual stored tensor segments may be sequentially loaded to memories local to processing circuitry to apply activation functions associated with the stored tensor segments.
Type: Application
Filed: July 19, 2022
Publication date: January 25, 2024
Inventors: Sulaiman Sadiq, Jonathon Hare, Geoffrey Merrett, Partha Prasun Maji, Simon John Craske
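The segment-by-segment flow described above can be sketched as follows; the segment size and the ReLU stand-in for the activation function are assumptions, and a flat list models the tensor:

```python
# Sketch: partition a large activation input tensor into segments, "load"
# each segment into a small local buffer, and apply the activation function
# one segment at a time.

def relu(x):
    return x if x > 0 else 0

def apply_segmented(tensor, segment_size, activation):
    out = []
    for start in range(0, len(tensor), segment_size):
        local = tensor[start:start + segment_size]   # load one segment locally
        out.extend(activation(v) for v in local)     # activate, then move on
    return out

result = apply_segmented([-2, 3, -1, 4, 5], segment_size=2, activation=relu)
```

Only one segment needs to fit in local memory at a time, which is the point of keeping the full tensor in external memory.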
-
Publication number: 20240013051
Abstract: The present disclosure relates to a method of inter-layer format conversion for a neural network, the neural network comprising at least two computation layers including a first layer to process first data in a first data format and a second layer to process second data in a second data format, the method comprising: extracting data statistics from data output by the first layer, said data statistics being representative of the data output by the first layer; determining one or more conversion parameters based on the extracted data statistics and the second data format; and generating the second data for the second layer by modifying said data output by the first layer using the one or more conversion parameters.
Type: Application
Filed: July 8, 2022
Publication date: January 11, 2024
Inventors: Partha Prasun MAJI, Sangwon HA
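The three steps above (extract statistics, determine conversion parameters, convert) can be sketched as follows. The min/max statistic, the affine mapping, and the assumed 8-bit integer target format are illustrative choices, not the claimed method:

```python
# Sketch: statistics from the first layer's output drive the conversion
# parameters used to produce the second layer's input format.

def extract_statistics(data):
    """Statistics representative of the first layer's output."""
    return min(data), max(data)

def conversion_parameters(stats, levels=256):
    """Derive (scale, offset) for an assumed 8-bit integer target format."""
    lo, hi = stats
    scale = (hi - lo) / (levels - 1) if hi != lo else 1.0
    return scale, lo

def convert(data, scale, offset):
    """Generate the second layer's data from the first layer's output."""
    return [round((v - offset) / scale) for v in data]

layer1_out = [0.0, 127.0, 255.0]
scale, offset = conversion_parameters(extract_statistics(layer1_out))
layer2_in = convert(layer1_out, scale, offset)
```

Because the parameters come from the actual data statistics, the conversion adapts to whatever range the first layer produced.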
-
Publication number: 20230409286
Abstract: An apparatus has processing circuitry to perform an accumulation operation in which a first addend is added to a second addend. The apparatus has storage circuitry to store the second addend in a plurality of lanes, each lane having a significance different to that of each other lane. Each lane within at least a subset of the lanes comprises at least one overlap bit having the same bit significance as a bit in an adjacent more significant lane in the plurality of lanes. The accumulation operation includes selecting an accumulating lane out of the plurality of lanes and performing an addition operation between bits of the accumulating lane and the first addend. The at least one overlap bit of the accumulating lane enables the addition operation to be performed without a possibility of overflowing the accumulating lane.
Type: Application
Filed: June 15, 2022
Publication date: December 21, 2023
Inventors: Sangwon HA, Neil BURGESS, Partha Prasun MAJI
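A very loose sketch of the lane structure above: the long accumulator is split into lanes whose significance steps are smaller than the lane width, so each lane has headroom (the overlap bits) to absorb carries locally. Lane width, overlap count, and the selection rule are assumptions:

```python
# Sketch only: 8-bit lanes with 2 overlap bits, so lane significance steps
# by 6 bits. An addend lands in a single selected lane; Python integers hide
# the hardware overflow concern, the overlap headroom models it.

LANE_BITS, OVERLAP_BITS = 8, 2
VALUE_BITS = LANE_BITS - OVERLAP_BITS            # significance step per lane

def accumulate(lanes, addend, significance):
    """Add the addend into the one lane covering its significance."""
    lane = significance // VALUE_BITS            # select the accumulating lane
    lanes[lane] += addend << (significance % VALUE_BITS)
    return lanes

def resolve(lanes):
    """Combine the lanes (including overlap bits) into the full value."""
    total = 0
    for i, v in enumerate(lanes):
        total += v << (i * VALUE_BITS)
    return total

lanes = [0, 0, 0]
accumulate(lanes, 0b101, significance=0)         # add 5 at bit 0 (lane 0)
accumulate(lanes, 0b11, significance=6)          # add 3 at bit 6 (lane 1)
total = resolve(lanes)
```

Because each addition touches only one lane, the adds can be short and independent, with carries between lanes resolved later.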
-
Publication number: 20230394281
Abstract: A hardware accelerator and method for a mixed-precision deep neural network (DNN) ensemble are provided. The hardware accelerator includes a DNN primary module, a number of DNN auxiliary modules and a fusion module. The DNN primary module processes a DNN primary model having a primary precision level, and each DNN auxiliary module processes a DNN auxiliary model having an auxiliary precision level less than the primary precision level. The DNN primary model and each DNN auxiliary model are configured to determine a mean predicted category and a variance based on input data. The fusion module is configured to receive the mean predicted categories and variances from the DNN primary model and each DNN auxiliary model, determine an average mean predicted category and an average variance based on the mean predicted categories and variances, and output the average mean predicted category and the average variance.
Type: Application
Filed: December 10, 2021
Publication date: December 7, 2023
Applicant: Arm Limited
Inventor: Partha Prasun Maji
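The fusion step described above reduces to averaging the per-model outputs. A minimal sketch, with the example model outputs invented purely for illustration:

```python
# Sketch of the fusion module: each model contributes a mean predicted
# category and a variance; the fusion output is their averages.

def fuse(predictions):
    """predictions: (mean_predicted_category, variance) pairs from the
    primary model and each auxiliary model."""
    n = len(predictions)
    avg_mean = sum(m for m, _ in predictions) / n
    avg_var = sum(v for _, v in predictions) / n
    return avg_mean, avg_var

# one high-precision primary plus two lower-precision auxiliaries
avg_mean, avg_var = fuse([(2.0, 0.10), (2.0, 0.30), (3.0, 0.20)])
```

Running most ensemble members at reduced precision keeps the hardware cost low while the averaged variance still conveys predictive uncertainty.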
-
Patent number: 11823445
Abstract: A hardware accelerator for an object detection network and a method for detecting an object are provided. The present disclosure provides robust object detection that advantageously augments traditional deterministic bounding box predictions with spatial uncertainties for various computer vision applications, such as, for example, autonomous driving, robotic surgery, etc.
Type: Grant
Filed: February 19, 2021
Date of Patent: November 21, 2023
Assignee: Arm Limited
Inventors: Partha Prasun Maji, Tiago Manuel Lourenco Azevedo
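Augmenting a deterministic bounding box with spatial uncertainty, as described above, amounts to attaching a variance to each box coordinate. A sketch with an assumed record layout (the patent's actual representation is not given in the abstract):

```python
# Sketch: a detection carries a box plus per-coordinate variances, so a
# consumer can reason about how certain each coordinate is.

def detection(x, y, w, h, var):
    """A bounding box (x, y, w, h) plus a per-coordinate variance."""
    return {"box": (x, y, w, h), "variance": tuple(var)}

def uncertainty_interval(det, index, k=2.0):
    """Rough +/- k-sigma interval for one box coordinate."""
    centre = det["box"][index]
    sigma = det["variance"][index] ** 0.5
    return centre - k * sigma, centre + k * sigma

det = detection(10.0, 20.0, 4.0, 4.0, var=(1.0, 4.0, 0.25, 0.25))
lo, hi = uncertainty_interval(det, index=1)      # interval for the y coordinate
```

A downstream planner (for example in an autonomous-driving stack) can then treat a wide interval as a reason for caution rather than trusting the raw box.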
-
Publication number: 20220277159
Abstract: A hardware accelerator for an object detection network and a method for detecting an object are provided. The present disclosure provides robust object detection that advantageously augments traditional deterministic bounding box predictions with spatial uncertainties for various computer vision applications, such as, for example, autonomous driving, robotic surgery, etc.
Type: Application
Filed: February 19, 2021
Publication date: September 1, 2022
Applicant: Arm Limited
Inventors: Partha Prasun Maji, Tiago Manuel Lourenco Azevedo
-
Patent number: 8924612
Abstract: A bidirectional communications link between a master device and a slave device includes first endpoint circuitry coupled to the master device generating forward data packets, second endpoint circuitry coupled to the slave device for receiving reverse data packets, and bidirectional communication circuitry for transferring forward data packets from the first endpoint circuitry to the second endpoint circuitry and reverse data packets from the second endpoint circuitry to the first endpoint circuitry. In response to a power down condition requiring a power down of at least one of the first endpoint circuitry and the second endpoint circuitry, performance of the power down is deferred until both an outstanding forward credit signal and an outstanding reverse credit signal have been de-asserted.
Type: Grant
Filed: April 4, 2012
Date of Patent: December 30, 2014
Assignee: ARM Limited
Inventors: Partha Prasun Maji, Steven Richard Mellor
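The deferral rule above (power down only once neither direction has an outstanding credit signal asserted) can be sketched as a small state machine; the class and method names model the behaviour and are not from the patent:

```python
# Sketch: a power-down request stays pending until both the forward and
# reverse outstanding-credit signals have been de-asserted.

class Link:
    def __init__(self):
        self.forward_credit = False    # outstanding forward credit signal
        self.reverse_credit = False    # outstanding reverse credit signal
        self.power_down_pending = False
        self.powered_down = False

    def request_power_down(self):
        self.power_down_pending = True
        self._try_power_down()

    def deassert(self, direction):
        setattr(self, direction + "_credit", False)
        self._try_power_down()

    def _try_power_down(self):
        # defer until both credit signals have been de-asserted
        if self.power_down_pending and not (self.forward_credit or self.reverse_credit):
            self.powered_down = True

link = Link()
link.forward_credit = link.reverse_credit = True
link.request_power_down()          # deferred: credits still outstanding
deferred = not link.powered_down
link.deassert("forward")
link.deassert("reverse")           # now the power down can proceed
```

Deferring this way ensures no in-flight packet is stranded by powering down an endpoint that still owes or expects traffic.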
-
Patent number: 8630358
Abstract: A system-on-chip integrated circuit 2 includes a packet transmitter 28 for generating data packets to be sent via a communication circuit 34 to a packet receiver 30 containing a buffer circuit 32. A transmitter counter 36 stores a transmitter count value counting data packets sent. A receiver counter 38 stores a receiver count value tracking data packets emptied from the buffer circuit 32. Comparison circuitry 40 is used to compare the transmitter count value and the receiver count value to determine whether or not there is storage space available within the buffer circuit 32 to receive transmission of further data packets. The packet transmitter 28 operates in a transmitter clock domain that is asynchronous from a receiver clock domain in which the packet receiver 30 operates. One of the count values is passed across this asynchronous clock boundary so that the comparison may be performed and flow control exercised.
Type: Grant
Filed: March 20, 2012
Date of Patent: January 14, 2014
Assignee: ARM Limited
Inventors: Partha Prasun Maji, Steven Richard Mellor
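The counter comparison above can be sketched as follows (the clock-domain crossing itself, typically Gray-coded in hardware, is omitted): packets sent minus packets drained tells the transmitter how much buffer space remains. The buffer depth is an assumption:

```python
# Sketch: transmitter counter vs receiver counter flow control. A send is
# allowed only while the in-flight/buffered packet count is below the
# receiver's buffer depth.

BUFFER_DEPTH = 4

def space_available(tx_count, rx_count):
    """Packets in flight or buffered = sent - drained; compare to depth."""
    return (tx_count - rx_count) < BUFFER_DEPTH

tx, rx = 0, 0                       # transmitter and receiver count values
sent = []
for _ in range(6):                  # try to send six packets; none drained yet
    if space_available(tx, rx):
        tx += 1
        sent.append(True)
    else:
        sent.append(False)          # flow control: buffer full, hold off
```

Once the receiver drains packets and its count value crosses back to the transmitter side, `rx` rises and sending can resume.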
-
Publication number: 20130268705
Abstract: A bidirectional communications link between a master device and a slave device includes first endpoint circuitry coupled to the master device generating forward data packets, second endpoint circuitry coupled to the slave device for receiving reverse data packets, and bidirectional communication circuitry for transferring forward data packets from the first endpoint circuitry to the second endpoint circuitry and reverse data packets from the second endpoint circuitry to the first endpoint circuitry. In response to a power down condition requiring a power down of at least one of the first endpoint circuitry and the second endpoint circuitry, performance of the power down is deferred until both an outstanding forward credit signal and an outstanding reverse credit signal have been de-asserted.
Type: Application
Filed: April 4, 2012
Publication date: October 10, 2013
Applicant: ARM Limited
Inventors: Partha Prasun MAJI, Steven Richard Mellor
-
Publication number: 20130251006
Abstract: A system-on-chip integrated circuit 2 includes a packet transmitter 28 for generating data packets to be sent via a communication circuit 34 to a packet receiver 30 containing a buffer circuit 32. A transmitter counter 36 stores a transmitter count value counting data packets sent. A receiver counter 38 stores a receiver count value tracking data packets emptied from the buffer circuit 32. Comparison circuitry 40 is used to compare the transmitter count value and the receiver count value to determine whether or not there is storage space available within the buffer circuit 32 to receive transmission of further data packets. The packet transmitter 28 operates in a transmitter clock domain that is asynchronous from a receiver clock domain in which the packet receiver 30 operates. One of the count values is passed across this asynchronous clock boundary so that the comparison may be performed and flow control exercised.
Type: Application
Filed: March 20, 2012
Publication date: September 26, 2013
Applicant: ARM Limited
Inventors: Partha Prasun MAJI, Steven Richard Mellor