Patents by Inventor Fernando Escobar

Fernando Escobar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240028256
    Abstract: A hardware unit for manipulating data stored in a memory comprises an internal buffer; a memory reading block, configured to read the data from the memory and write the data to the internal buffer; and a memory writing block, configured to read the data from the internal buffer and write the data to the memory. The hardware unit optionally also comprises a control channel between the memory reading block and the memory writing block, wherein the memory reading block and the memory writing block are configured to communicate via the control channel to maintain synchronisation between them when writing the data to the internal buffer and reading the data from the internal buffer, respectively. The hardware unit may be configured to apply one or more transformations to multidimensional data in the memory. The hardware unit may be configured to traverse the multidimensional data using a plurality of nested loops.
    Type: Application
    Filed: October 3, 2023
    Publication date: January 25, 2024
    Inventors: Alan Vines, Stephen Spain, Fernando Escobar
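
The traversal described in the abstract above can be illustrated with a short sketch. This is not the patented hardware design: Python lists stand in for the memory and the internal buffer, the function name is hypothetical, and the "transformation" is a simple 2D transpose realised by nested loops with swapped strides.

```python
# Illustrative sketch only: staging data through an internal buffer
# between a "memory reading block" and a "memory writing block", and
# traversing a multidimensional array with nested loops to apply a
# transformation (here, a 2D transpose).

def transpose_via_buffer(memory, rows, cols):
    """Read a row-major rows x cols array from `memory`, stage it in an
    internal buffer, and return it transposed (cols x rows, row-major)."""
    buffer = [0] * (rows * cols)
    # Memory reading block: nested loops traverse the source dimensions
    # and copy each element into the internal buffer.
    for r in range(rows):
        for c in range(cols):
            buffer[r * cols + c] = memory[r * cols + c]
    # Memory writing block: nested loops traverse the destination
    # dimensions, reading from the buffer with swapped strides.
    out = [0] * (rows * cols)
    for c in range(cols):
        for r in range(rows):
            out[c * rows + r] = buffer[r * cols + c]
    return out

data = [1, 2, 3,
        4, 5, 6]                              # 2 x 3, row-major
print(transpose_via_buffer(data, 2, 3))       # [1, 4, 2, 5, 3, 6]
```

In the actual hardware the two blocks run concurrently and use the control channel to keep the buffer writes and reads synchronised; the sketch collapses that into sequential loops.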
  • Patent number: 11875248
    Abstract: A multicore hardware implementation of a deep neural network includes a plurality of layers arranged in a plurality of layer groups. The input data to the network comprises a multidimensional tensor including one or more traversed dimensions that are traversed by strides in at least one layer of a first layer group, and one or more non-traversed dimensions. If a size of the input data in a first dimension is greater than a threshold, the hardware implementation splits the input data for the first layer group into at least a first tile and a second tile, along the first dimension. If the size of the input data in the first dimension is not greater than the threshold, the hardware implementation splits the evaluation of the first layer group into at least a first pass and a second pass, along a dimension other than the first dimension.
    Type: Grant
    Filed: October 13, 2021
    Date of Patent: January 16, 2024
    Assignee: Imagination Technologies Limited
    Inventors: Xiran Huang, Fernando Escobar
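
The threshold-based decision in the abstract above can be sketched as a small planning function. This is a hypothetical illustration, not the claimed implementation: the function name, the shape tuple, and the threshold value are all invented for the example.

```python
# Illustrative sketch only: choose between splitting the input into
# tiles along the first dimension (when it is large enough) and
# splitting the evaluation into passes along a different dimension.

def plan_split(shape, first_dim, threshold):
    """Return ('tiles', dim) or ('passes', dim) for input of `shape`."""
    if shape[first_dim] > threshold:
        # Large first dimension: split the input data into tiles along it.
        return ('tiles', first_dim)
    # Otherwise: split the evaluation into passes along another dimension.
    other = next(d for d in range(len(shape)) if d != first_dim)
    return ('passes', other)

print(plan_split((1, 256, 256, 64), first_dim=1, threshold=128))  # ('tiles', 1)
print(plan_split((1, 64, 64, 512), first_dim=1, threshold=128))   # ('passes', 0)
```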
  • Patent number: 11853866
    Abstract: A multicore hardware implementation of a deep neural network includes a plurality of layers arranged in a plurality of layer groups. The input data to the network comprises a multidimensional tensor including one or more traversed dimensions, being dimensions that are traversed by strides in at least one layer of a first layer group. The hardware implementation is configured to split the input data for the first layer group into at least a first tile and a second tile, along at least one of the traversed dimensions, each tile comprising a plurality of data elements in each of the one or more traversed dimensions. A first core is configured to evaluate multiple layer groups, depth-first, based on the first tile. A second core is configured to evaluate multiple layer groups, depth-first, based on the second tile.
    Type: Grant
    Filed: October 13, 2021
    Date of Patent: December 26, 2023
    Assignee: Imagination Technologies Limited
    Inventors: Xiran Huang, Fernando Escobar
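
The depth-first, per-tile evaluation described in the abstract above can be sketched as follows. This is an illustrative toy, not the patented design: layer groups are stand-in elementwise functions, tiles are list slices, and the two "cores" run serially here.

```python
# Illustrative sketch only: split the input along a traversed dimension
# into tiles and let each core evaluate the whole stack of layer groups
# depth-first on its own tile, so intermediate results stay core-local.

def evaluate_depth_first(tile, layer_groups):
    """One core: run every layer group on its tile before moving on."""
    for layer_group in layer_groups:
        tile = [layer_group(x) for x in tile]
    return tile

def multicore_evaluate(input_data, layer_groups, num_cores=2):
    # Split along the traversed dimension (here, the single list axis).
    size = (len(input_data) + num_cores - 1) // num_cores
    tiles = [input_data[i:i + size] for i in range(0, len(input_data), size)]
    # Each core processes its tile depth-first; outputs are concatenated.
    results = [evaluate_depth_first(t, layer_groups) for t in tiles]
    return [x for tile in results for x in tile]

layers = [lambda x: x * 2, lambda x: x + 1]       # two toy layer groups
print(multicore_evaluate([1, 2, 3, 4], layers))   # [3, 5, 7, 9]
```

The point of the depth-first order is that each tile's intermediate data never needs to round-trip through shared memory between layer groups; the sketch mimics that by keeping each tile inside one function call.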
  • Patent number: 11775206
    Abstract: A hardware unit for manipulating data stored in a memory comprises an internal buffer; a memory reading block, configured to read the data from the memory and write the data to the internal buffer; and a memory writing block, configured to read the data from the internal buffer and write the data to the memory. The hardware unit optionally also comprises a control channel between the memory reading block and the memory writing block, wherein the memory reading block and the memory writing block are configured to communicate via the control channel to maintain synchronisation between them when writing the data to the internal buffer and reading the data from the internal buffer, respectively. The hardware unit may be configured to apply one or more transformations to multidimensional data in the memory. The hardware unit may be configured to traverse the multidimensional data using a plurality of nested loops.
    Type: Grant
    Filed: June 2, 2021
    Date of Patent: October 3, 2023
    Assignee: Imagination Technologies Limited
    Inventors: Alan Vines, Stephen Spain, Fernando Escobar
  • Publication number: 20220147832
    Abstract: A multicore hardware implementation of a deep neural network includes a plurality of layers arranged in a plurality of layer groups. The input data to the network comprises a multidimensional tensor including one or more traversed dimensions, being dimensions that are traversed by strides in at least one layer of a first layer group. The hardware implementation is configured to split the input data for the first layer group into at least a first tile and a second tile, along at least one of the traversed dimensions, each tile comprising a plurality of data elements in each of the one or more traversed dimensions. A first core is configured to evaluate multiple layer groups, depth-first, based on the first tile. A second core is configured to evaluate multiple layer groups, depth-first, based on the second tile.
    Type: Application
    Filed: October 13, 2021
    Publication date: May 12, 2022
    Inventors: Xiran Huang, Fernando Escobar
  • Publication number: 20220129741
    Abstract: A multicore hardware implementation of a deep neural network includes a plurality of layers arranged in a plurality of layer groups. The input data to the network comprises a multidimensional tensor including one or more traversed dimensions that are traversed by strides in at least one layer of a first layer group, and one or more non-traversed dimensions. The hardware implementation splits the evaluation of the first layer group into a first pass and a second pass, along one of the traversed dimensions or one of the non-traversed dimensions. A first core evaluates the first layer group for the first pass, to generate a first portion of output data. A second core evaluates the first layer group for the second pass, to generate a second portion of output data. The hardware implementation combines the first portion of output data and the second portion of output data to produce the output data of the first layer group.
    Type: Application
    Filed: October 13, 2021
    Publication date: April 28, 2022
    Inventors: Xiran Huang, Fernando Escobar
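
The split-evaluate-combine scheme in the abstract above can be sketched in a few lines. Again this is a hypothetical illustration: the layer group is a toy elementwise op, the passes are list slices rather than tensor-dimension slices, and the two cores run serially here.

```python
# Illustrative sketch only: split the evaluation of one layer group
# into two passes along a chosen dimension, evaluate each pass (on a
# separate core in hardware), then combine the partial outputs.

def layer_group(chunk):
    return [x * x for x in chunk]             # stand-in for one layer group

def evaluate_in_passes(input_data, num_passes=2):
    size = (len(input_data) + num_passes - 1) // num_passes
    passes = [input_data[i:i + size] for i in range(0, len(input_data), size)]
    # Core 1 evaluates the first pass, core 2 the second; here serially.
    partial_outputs = [layer_group(p) for p in passes]
    # Combine the partial outputs into the layer group's full output.
    return [x for part in partial_outputs for x in part]

print(evaluate_in_passes([1, 2, 3, 4]))       # [1, 4, 9, 16]
```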
  • Publication number: 20220121914
    Abstract: A multicore hardware implementation of a deep neural network includes a plurality of layers arranged in a plurality of layer groups. The input data to the network comprises a multidimensional tensor including one or more traversed dimensions that are traversed by strides in at least one layer of a first layer group, and one or more non-traversed dimensions. If a size of the input data in a first dimension is greater than a threshold, the hardware implementation splits the input data for the first layer group into at least a first tile and a second tile, along the first dimension. If the size of the input data in the first dimension is not greater than the threshold, the hardware implementation splits the evaluation of the first layer group into at least a first pass and a second pass, along a dimension other than the first dimension.
    Type: Application
    Filed: October 13, 2021
    Publication date: April 21, 2022
    Inventors: Xiran Huang, Fernando Escobar
  • Publication number: 20210373801
    Abstract: A hardware unit for manipulating data stored in a memory comprises an internal buffer; a memory reading block, configured to read the data from the memory and write the data to the internal buffer; and a memory writing block, configured to read the data from the internal buffer and write the data to the memory. The hardware unit optionally also comprises a control channel between the memory reading block and the memory writing block, wherein the memory reading block and the memory writing block are configured to communicate via the control channel to maintain synchronisation between them when writing the data to the internal buffer and reading the data from the internal buffer, respectively. The hardware unit may be configured to apply one or more transformations to multidimensional data in the memory. The hardware unit may be configured to traverse the multidimensional data using a plurality of nested loops.
    Type: Application
    Filed: June 2, 2021
    Publication date: December 2, 2021
    Inventors: Alan Vines, Stephen Spain, Fernando Escobar