Patents by Inventor Marc Lupon

Marc Lupon has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230020571
    Abstract: An apparatus and method are described for distributed and cooperative computation in artificial neural networks. For example, one embodiment of an apparatus comprises: an input/output (I/O) interface; a plurality of processing units communicatively coupled to the I/O interface to receive data for input neurons and synaptic weights associated with each of the input neurons, each of the plurality of processing units to process at least a portion of the data for the input neurons and synaptic weights to generate partial results; and an interconnect communicatively coupling the plurality of processing units, each of the processing units to share the partial results with one or more other processing units over the interconnect, the other processing units using the partial results to generate additional partial results or final results. The processing units may share data including input neurons and weights over the shared input bus.
    Type: Application
    Filed: September 20, 2022
    Publication date: January 19, 2023
    Inventors: Frederico C. PRATAS, Ayose J. FALCON, Marc LUPON, Fernando LATORRE, Pedro LOPEZ, Enric HERRERO ABELLANAS, Georgios TOURNAVITIS
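    For illustration only (not text from the patent or its claims), here is a minimal Python sketch of the distributed scheme this abstract describes: each processing unit handles a slice of the input neurons and their weights, produces a partial result, and the partial results are combined into the final neuron outputs. All function and variable names are invented for the example.

        import numpy as np

        def distributed_layer_forward(inputs, weights, num_units=4):
            # Each "processing unit" receives a slice of the input neurons and
            # the matching weight columns, and computes a partial sum.
            input_slices = np.array_split(np.arange(len(inputs)), num_units)
            partial_results = [weights[:, idx] @ inputs[idx] for idx in input_slices]
            # Sharing step: partial results are accumulated into final results,
            # standing in for the exchange over the interconnect.
            return np.sum(partial_results, axis=0)

        # Example: 8 input neurons feeding 3 output neurons.
        rng = np.random.default_rng(0)
        x = rng.random(8)
        W = rng.random((3, 8))
        assert np.allclose(distributed_layer_forward(x, W), W @ x)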
  • Patent number: 11281965
    Abstract: A processing device includes a processor core and a number of calculation modules, each of which is configurable to perform any one of several operations for a convolutional neural network system. A first set of the calculation modules is configured to perform convolution operations, a second set is reconfigured to perform averaging operations, and a third set is reconfigured to perform dot product operations.
    Type: Grant
    Filed: April 5, 2018
    Date of Patent: March 22, 2022
    Assignee: Intel Corporation
    Inventors: Marc Lupon, Enric Herrero Abellanas, Ayose Falcon, Fernando Latorre, Pedro Lopez, Frederico Pratas
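    As a toy Python model of the reconfigurable-module idea (class, mode, and variable names are made up for the example), the same module type can be set up for convolution, averaging, or dot product, and a pool of identical modules can be split into three sets accordingly:

        import numpy as np

        class CalcModule:
            """One reconfigurable calculation module in this toy model."""
            def __init__(self, operation):
                self.operation = operation  # "convolution", "average", or "dot"

            def compute(self, data, kernel=None):
                if self.operation == "convolution":
                    return np.convolve(data, kernel, mode="valid")
                if self.operation == "average":
                    return np.mean(data)
                if self.operation == "dot":
                    return np.dot(data, kernel)
                raise ValueError(f"unknown operation: {self.operation}")

        # One pool of identical modules, configured into three sets.
        modules = [CalcModule("convolution"), CalcModule("average"), CalcModule("dot")]
        signal = np.array([1.0, 2.0, 3.0, 4.0])
        taps = np.array([0.5, 0.5])
        print(modules[0].compute(signal, taps))  # convolution
        print(modules[1].compute(signal))        # averaging
        print(modules[2].compute(taps, taps))    # dot product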
  • Publication number: 20210326405
    Abstract: An apparatus and method are described for distributed and cooperative computation in artificial neural networks. For example, one embodiment of an apparatus comprises: an input/output (I/O) interface; a plurality of processing units communicatively coupled to the I/O interface to receive data for input neurons and synaptic weights associated with each of the input neurons, each of the plurality of processing units to process at least a portion of the data for the input neurons and synaptic weights to generate partial results; and an interconnect communicatively coupling the plurality of processing units, each of the processing units to share the partial results with one or more other processing units over the interconnect, the other processing units using the partial results to generate additional partial results or final results. The processing units may share data including input neurons and weights over the shared input bus.
    Type: Application
    Filed: May 3, 2021
    Publication date: October 21, 2021
    Inventors: Frederico C. PRATAS, Ayose J. FALCON, Marc LUPON, Fernando LATORRE, Pedro LOPEZ, Enric HERRERO ABELLANAS, Georgios TOURNAVITIS
  • Patent number: 10997273
    Abstract: An apparatus and method are described for distributed and cooperative computation in artificial neural networks. For example, one embodiment of an apparatus comprises: an input/output (I/O) interface; a plurality of processing units communicatively coupled to the I/O interface to receive data for input neurons and synaptic weights associated with each of the input neurons, each of the plurality of processing units to process at least a portion of the data for the input neurons and synaptic weights to generate partial results; and an interconnect communicatively coupling the plurality of processing units, each of the processing units to share the partial results with one or more other processing units over the interconnect, the other processing units using the partial results to generate additional partial results or final results. The processing units may share data including input neurons and weights over the shared input bus.
    Type: Grant
    Filed: November 19, 2015
    Date of Patent: May 4, 2021
    Assignee: Intel Corporation
    Inventors: Frederico C. Pratas, Ayose J. Falcon, Marc Lupon, Fernando Latorre, Pedro Lopez, Enric Herrero Abellanas, Georgios Tournavitis
  • Patent number: 10792923
    Abstract: Certain examples described herein relate to calibrating a cleaning element for a printhead. A print medium sensor is aligned with a nozzle plane of the printhead to detect displacement, during a print job, of a print medium out of the nozzle plane. Relative movement between the cleaning element and the printhead is enacted. During this movement, activation of the print medium sensor is detected. A reference position for the cleaning element is stored based on the detected activation.
    Type: Grant
    Filed: October 27, 2017
    Date of Patent: October 6, 2020
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Francesc Tarrida Tirado, Marc Lupon Navazo, Alejandro Mielgo Barba
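    Purely as an illustration of the calibration sequence (the function, the stubbed sensor, and the step size are hypothetical, not taken from the patent), a short Python sketch: step the cleaning element relative to the printhead and record the position at which the print medium sensor, aligned with the nozzle plane, first activates.

        def calibrate_cleaning_element(move_step, sensor_active, max_steps=1000):
            """Hypothetical calibration loop for the cleaning element."""
            position = 0
            for _ in range(max_steps):
                if sensor_active(position):
                    # Stored as the cleaning element's reference position.
                    return position
                position += move_step
            raise RuntimeError("sensor never activated during calibration sweep")

        # Example with a stubbed sensor that trips once the element reaches the nozzle plane.
        reference = calibrate_cleaning_element(move_step=1, sensor_active=lambda p: p >= 42)
        print(reference)  # 42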
  • Publication number: 20200247129
    Abstract: Certain examples described herein relate to calibrating a cleaning element for a printhead. A print medium sensor is aligned with a nozzle plane of the printhead to detect displacement, during a print job, of a print medium out of the nozzle plane. Relative movement between the cleaning element and the printhead is enacted. During this movement, activation of the print medium sensor is detected. A reference position for the cleaning element is stored based on the detected activation.
    Type: Application
    Filed: October 27, 2017
    Publication date: August 6, 2020
    Inventors: Francesc Tarrida Tirado, Marc Lupon Navazo, Alejandro Mielgo Barba
  • Patent number: 10402468
    Abstract: Systems and methods for performing convolution operations. An example processing system comprises: a processing core; and a convolver unit to apply a convolution filter to a plurality of input data elements represented by a two-dimensional array, the convolver unit comprising a plurality of multipliers coupled to two or more sets of latches, wherein each set of latches is to store a plurality of data elements of a respective one-dimensional section of the two-dimensional array.
    Type: Grant
    Filed: April 9, 2018
    Date of Patent: September 3, 2019
    Assignee: Intel Corporation
    Inventors: Enric Herrero Abellanas, Marc Lupon, Ayose J. Falcon, Frederico C. Pratas, Fernando Latorre, Pedro Lopez
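    For illustration (not from the patent), a small Python analogue of this arrangement: each set of latches is modeled as holding one one-dimensional row section of the input array, and the multipliers combine those latched rows with the corresponding filter rows.

        import numpy as np

        def convolve2d_rowwise(image, kernel):
            """Row-wise 2-D filtering, with each 'latched' row handled as a 1-D section."""
            kh, kw = kernel.shape
            ih, iw = image.shape
            out = np.zeros((ih - kh + 1, iw - kw + 1))
            for r in range(out.shape[0]):
                # Latch the kh rows that the filter currently overlaps.
                latched_rows = [image[r + i] for i in range(kh)]
                for c in range(out.shape[1]):
                    acc = 0.0
                    for i, row in enumerate(latched_rows):
                        # Multipliers operate on a 1-D section of each latched row.
                        acc += np.dot(row[c:c + kw], kernel[i])
                    out[r, c] = acc
            return out

        img = np.arange(16, dtype=float).reshape(4, 4)
        print(convolve2d_rowwise(img, np.ones((2, 2))))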
  • Publication number: 20190004916
    Abstract: A combination of hardware and software collects profile data for asynchronous events at code-region granularity. An exemplary embodiment is directed to collecting metrics for prefetching events, which are asynchronous in nature. Instructions that belong to a code region are identified using one of several alternative techniques, causing a profile bit to be set for each such instruction as a marker. Each line of a data block that is prefetched is similarly marked. Events corresponding to the profile data being collected and resulting from instructions within the code region are then identified. Each time one of the different types of events is identified, a corresponding counter is incremented. Following execution of the instructions within the code region, the profile data accumulated in the counters are collected, and the counters are reset for use with a new code region.
    Type: Application
    Filed: July 3, 2018
    Publication date: January 3, 2019
    Inventors: Raul Martinez, Enric Gibert Codina, Pedro Lopez, Marti Torrents Lapuerta, Polychronis Xekalakis, Georgios Tournavitis, Kyriakos A. Stavrou, Demos Pavlou, Daniel Ortega, Alejandro Martinez Vicente, Pedro Marcuello, Grigorios Magklis, Josep M. Codina, Crispin Gomez Requena, Antonio Gonzalez, Mirem Hyuseinova, Christos Kotselidis, Fernando Latorre, Marc Lupon, Carlos Madriles
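    A rough software analogue in Python (the class, event names, and addresses are invented for illustration): instructions in a code region are tagged with a profile bit, events attributed to tagged instructions increment per-event counters, and the counters are read out and reset when the region finishes.

        from collections import Counter

        class RegionProfiler:
            """Toy model of code-region-granularity profiling."""
            def __init__(self):
                self.tagged = set()
                self.counters = Counter()

            def mark_region(self, instruction_addresses):
                self.tagged = set(instruction_addresses)

            def record_event(self, instruction_address, event_type):
                # Only events from marked (profiled) instructions are counted.
                if instruction_address in self.tagged:
                    self.counters[event_type] += 1

            def collect_and_reset(self):
                collected = dict(self.counters)
                self.counters.clear()
                self.tagged.clear()
                return collected

        profiler = RegionProfiler()
        profiler.mark_region({0x400, 0x404, 0x408})
        profiler.record_event(0x404, "prefetch_hit")
        profiler.record_event(0x900, "prefetch_hit")  # outside the region: ignored
        print(profiler.collect_and_reset())           # {'prefetch_hit': 1}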
  • Patent number: 10157063
    Abstract: A computer-readable storage medium, method, and system for optimization-level aware branch prediction are described. A gear level is assigned to a set of application instructions that have been optimized. The gear level is also stored in a register of a branch prediction unit of a processor. Branch prediction is then performed by the processor based upon the gear level.
    Type: Grant
    Filed: September 28, 2012
    Date of Patent: December 18, 2018
    Assignee: Intel Corporation
    Inventors: Polychronis Xekalakis, Pedro Marcuello, Alejandro Vicente Martinez, Christos E. Kotselidis, Grigorios Magklis, Fernando Latorre, Raul Martinez, Josep M. Codina, Enric Gibert Codina, Crispin Gomez Requena, Antonio Gonzalez, Mirem Hyuseinova, Pedro Lopez, Marc Lupon, Carlos Madriles, Daniel Ortega, Demos Pavlou, Kyriakos A. Stavrou, Georgios Tournavitis
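    A rough Python model of the gear-level idea (table sizes, counter widths, and names are invented for illustration): the current gear level is kept in a register-like field of the predictor and selects which prediction state is consulted and trained.

        class GearAwarePredictor:
            """Toy branch predictor indexed by optimization 'gear level'."""
            def __init__(self, num_gears=4, table_size=256):
                # One table of 2-bit counters per gear level.
                self.tables = [[1] * table_size for _ in range(num_gears)]
                self.gear = 0  # value written when entering optimized code

            def set_gear(self, level):
                self.gear = level

            def predict(self, branch_pc):
                table = self.tables[self.gear]
                return table[branch_pc % len(table)] >= 2  # taken if in the upper half

            def update(self, branch_pc, taken):
                table = self.tables[self.gear]
                idx = branch_pc % len(table)
                table[idx] = min(3, table[idx] + 1) if taken else max(0, table[idx] - 1)

        predictor = GearAwarePredictor()
        predictor.set_gear(2)                 # entering code optimized at gear level 2
        predictor.update(0x1040, taken=True)
        predictor.update(0x1040, taken=True)
        print(predictor.predict(0x1040))      # True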
  • Publication number: 20180349763
    Abstract: A processing device includes a processor core and a number of calculation modules, each of which is configurable to perform any one of several operations for a convolutional neural network system. A first set of the calculation modules is configured to perform convolution operations, a second set is reconfigured to perform averaging operations, and a third set is reconfigured to perform dot product operations.
    Type: Application
    Filed: April 5, 2018
    Publication date: December 6, 2018
    Inventors: Marc Lupon, Enric Herrero Abellanas, Ayose Falcon, Fernando Latorre, Pedro Lopez, Frederico Pratas
  • Publication number: 20180329867
    Abstract: Systems and methods for performing convolution operations. An example processing system comprises: a processing core; and a convolver unit to apply a convolution filter to a plurality of input data elements represented by a two-dimensional array, the convolver unit comprising a plurality of multipliers coupled to two or more sets of latches, wherein each set of latches is to store a plurality of data elements of a respective one-dimensional section of the two-dimensional array.
    Type: Application
    Filed: April 9, 2018
    Publication date: November 15, 2018
    Inventors: Enric Herrero Abellanas, Marc Lupon, Ayose J. Falcon, Frederico C. Pratas, Fernando Latorre, Pedro Lopez
  • Patent number: 10061587
    Abstract: A processor includes a front end, a decoder, an allocator, and a retirement unit. The decoder includes logic to identify an end-of-live-range (EOLR) indicator. The EOLR indicator specifies an architectural register and a location in code for which the architectural register is unused. The allocator includes logic to scan for a mapping of the architectural register to a physical register, based upon the EOLR indicator. The allocator also includes logic to generate a request to disassociate the architectural register from the physical register. The retirement unit includes logic to disassociate the architectural register from the physical register.
    Type: Grant
    Filed: September 25, 2014
    Date of Patent: August 28, 2018
    Assignee: Intel Corporation
    Inventors: David Pardo Keppel, Denis M. Khartikov, Fernando LaTorre, Marc Lupon, Grigorios Magklis, Naveen Neelakantam, Georgios Tournavitis, Polychronis Xekalakis
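    A simplified Python model of the idea (register names and sizes are illustrative, not from the patent): when an end-of-live-range hint names an architectural register whose value is no longer needed, its mapping is dropped and the backing physical register returns to the free list.

        class RenameTable:
            """Toy rename table with early reclamation on an EOLR hint."""
            def __init__(self, num_physical=8):
                self.mapping = {}                      # architectural -> physical
                self.free_list = list(range(num_physical))

            def allocate(self, arch_reg):
                phys = self.free_list.pop(0)
                self.mapping[arch_reg] = phys
                return phys

            def end_of_live_range(self, arch_reg):
                # Disassociate the architectural register and reclaim its physical register.
                phys = self.mapping.pop(arch_reg, None)
                if phys is not None:
                    self.free_list.append(phys)

        rt = RenameTable()
        rt.allocate("r1")
        rt.allocate("r2")
        rt.end_of_live_range("r1")  # r1 is dead past this point in the code
        print(rt.free_list)         # the physical register that backed r1 is free again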
  • Patent number: 10013326
    Abstract: A combination of hardware and software collects profile data for asynchronous events at code-region granularity. An exemplary embodiment is directed to collecting metrics for prefetching events, which are asynchronous in nature. Instructions that belong to a code region are identified using one of several alternative techniques, causing a profile bit to be set for each such instruction as a marker. Each line of a data block that is prefetched is similarly marked. Events corresponding to the profile data being collected and resulting from instructions within the code region are then identified. Each time one of the different types of events is identified, a corresponding counter is incremented. Following execution of the instructions within the code region, the profile data accumulated in the counters are collected, and the counters are reset for use with a new code region.
    Type: Grant
    Filed: December 29, 2011
    Date of Patent: July 3, 2018
    Assignee: Intel Corporation
    Inventors: Raul Martinez, Enric Gibert Codina, Pedro Lopez, Marti Torrents Lapuerta, Polychronis Xekalakis, Georgios Tournavitis, Kyriakos A. Stavrou, Demos Pavlou, Daniel Ortega, Alejandro Martinez Vicente, Pedro Marcuello, Grigorios Magklis, Josep M. Codina, Crispin Gomez Requena, Antonio Gonzalez, Mirem Hyuseinova, Christos Kotselidis, Fernando Latorre, Marc Lupon, Carlos Madriles
  • Patent number: 10002108
    Abstract: Systems and methods for performing convolution operations. An example processing system comprises: a processing core; and a convolver unit to apply a convolution filter to a plurality of input data elements represented by a two-dimensional array, the convolver unit comprising a plurality of multipliers coupled to two or more sets of latches, wherein each set of latches is to store a plurality of data elements of a respective one-dimensional section of the two-dimensional array.
    Type: Grant
    Filed: March 9, 2017
    Date of Patent: June 19, 2018
    Assignee: Intel Corporation
    Inventors: Enric Herrero Abellanas, Marc Lupon, Ayose J. Falcon, Frederico C. Pratas, Fernando Latorre, Pedro Lopez
  • Patent number: 9978014
    Abstract: A processing device includes a processor core and a number of calculation modules, each of which is configurable to perform any one of several operations for a convolutional neural network system. A first set of the calculation modules is configured to perform convolution operations, a second set is reconfigured to perform averaging operations, and a third set is reconfigured to perform dot product operations.
    Type: Grant
    Filed: December 18, 2013
    Date of Patent: May 22, 2018
    Assignee: Intel Corporation
    Inventors: Marc Lupon, Enric Herrero Abellanas, Ayose Falcon, Fernando Latorre, Pedro Lopez, Frederico Pratas
  • Patent number: 9971540
    Abstract: A storage device and method are described for performing convolution operations. For example, one embodiment of an apparatus to perform convolution operations comprises a plurality of processing units to execute convolution operations on input data and partial results; a unified scratchpad memory comprising a plurality of memory banks communicatively coupled to the plurality of processing units through a plurality of read/write ports, each of the plurality of memory banks partitioned to store both the input data and partial results; a control unit to allocate the input data and partial results to the memory banks to ensure a minimum quality of service in accordance with the specified number of read/write ports and the specified convolution operation to be performed.
    Type: Grant
    Filed: September 22, 2015
    Date of Patent: May 15, 2018
    Assignee: Intel Corporation
    Inventors: Enric Herrero Abellanas, Georgios Tournavitis, Frederico C. Pratas, Marc Lupon, Fernando Latorre, Pedro Lopez, Ayose J. Falcon
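    The sketch below is a loose Python analogue (the bank count and striping policy are assumptions, not from the patent) of a unified scratchpad in which every bank holds both input data and partial results, and a simple control policy spreads each kind of data across the banks so concurrent accesses tend to hit different banks.

        class Scratchpad:
            """Toy unified scratchpad: each bank stores inputs and partial results."""
            def __init__(self, num_banks=4):
                self.banks = [{"inputs": [], "partials": []} for _ in range(num_banks)]

            def store_input(self, index, value):
                # Inputs are striped across banks by index (round-robin).
                self.banks[index % len(self.banks)]["inputs"].append(value)

            def store_partial(self, index, value):
                # Partial results are striped with an offset so they tend to land
                # in a different bank than the inputs consumed at the same step.
                bank = (index + len(self.banks) // 2) % len(self.banks)
                self.banks[bank]["partials"].append(value)

        pad = Scratchpad()
        for i, x in enumerate([1.0, 2.0, 3.0, 4.0]):
            pad.store_input(i, x)
            pad.store_partial(i, x * 0.5)
        print([len(b["inputs"]) + len(b["partials"]) for b in pad.banks])  # balanced occupancy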
  • Patent number: 9811341
    Abstract: Disclosed are an apparatus and method to manage instruction cache prefetching. A processor may comprise: a prefetch engine; a branch prediction engine to predict the outcome of a branch; and a dynamic optimizer. The dynamic optimizer may be used to identify common instruction cache misses and to insert a prefetch instruction from the prefetch engine into the instruction cache.
    Type: Grant
    Filed: December 29, 2011
    Date of Patent: November 7, 2017
    Assignee: Intel Corporation
    Inventors: Kyriakos A. Stavrou, Enric Gibert Codina, Josep M. Codina, Crispin Gomez Requena, Antonio Gonzalez, Mirem Hyuseinova, Christos E. Kotselidis, Fernando Latorre, Pedro Lopez, Marc Lupon, Carlos Madriles Gimeno, Grigorios Magklis, Pedro Marcuello, Alejandro Martinez Vicente, Raul Martinez, Daniel Ortega, Demos Pavlou, Georgios Tournavitis, Polychronis Xekalakis
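    A toy Python pass (the addresses, instruction format, and prefetch mnemonic are invented for illustration) showing the general flow: find the instruction addresses that miss most often in the instruction cache and insert a prefetch a few instructions ahead of each one, roughly as a dynamic optimizer might.

        from collections import Counter

        def insert_icache_prefetches(miss_log, code, lookahead=2, top_n=2):
            """Insert prefetches ahead of the hottest instruction cache miss sites."""
            hot_misses = {addr for addr, _ in Counter(miss_log).most_common(top_n)}
            # Map each hot miss target to the position where its prefetch should go.
            prefetch_at = {}
            for pos, (addr, _) in enumerate(code):
                if addr in hot_misses:
                    prefetch_at.setdefault(max(0, pos - lookahead), []).append(addr)
            optimized = []
            for pos, (addr, instr) in enumerate(code):
                for target in prefetch_at.get(pos, []):
                    optimized.append((addr, f"prefetch_icache {hex(target)}"))
                optimized.append((addr, instr))
            return optimized

        code = [(0x10, "add"), (0x14, "mul"), (0x18, "call helper"), (0x1c, "ret")]
        misses = [0x18, 0x18, 0x18, 0x1c]
        for addr, instr in insert_icache_prefetches(misses, code):
            print(hex(addr), instr)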
  • Patent number: 9778909
    Abstract: Methods, apparatus, instructions, and logic are disclosed that provide double-rounded combined floating-point multiply-and-add functionality as scalar or vector SIMD instructions or as fused micro-operations. Embodiments include detecting floating-point (FP) multiplication operations and subsequent FP operations that specify the results of those multiplications as source operands. The FP multiplications and the subsequent FP operations are encoded as combined FP operations in which the FP multiplication results are rounded before the subsequent FP operations are applied. The encoded combined FP operations may be stored and executed as part of an executable thread portion using fused multiply-add hardware that includes overflow detection for the FP multiplier products, and first and second FP adders that add the third-operand addend mantissas to the FP multiplier products with different rounding inputs depending on whether or not the products overflow.
    Type: Grant
    Filed: October 24, 2016
    Date of Patent: October 3, 2017
    Assignee: Intel Corporation
    Inventors: Sridhar Samudrala, Grigorios Magklis, Marc Lupon, David R. Ditzel
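    As a worked illustration of what "double rounded" means here (the float32 values below are chosen to expose the effect and are not from the patent), the Python snippet contrasts the double-rounded result, in which the product is rounded before the addition, with the single-rounded result of a classic fused multiply-add. The two results differ for these inputs.

        import numpy as np

        a = np.float32(1.0000001)
        b = np.float32(1.0000001)
        c = np.float32(-1.0000002)

        # Double-rounded (unfused) semantics: round the product to float32 first,
        # then round again after the addition.
        product_rounded = np.float32(np.float64(a) * np.float64(b))
        double_rounded = np.float32(product_rounded + c)

        # Single-rounded (classic fused multiply-add) semantics: keep the exact
        # product and round only once, after the addition.
        fused = np.float32(np.float64(a) * np.float64(b) + np.float64(c))

        print(double_rounded, fused, double_rounded == fused)  # 0.0 vs ~1.42e-14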
  • Publication number: 20170277658
    Abstract: An apparatus and method are described for distributed and cooperative computation in artificial neural networks. For example, one embodiment of an apparatus comprises: an input/output (I/O) interface; a plurality of processing units communicatively coupled to the I/O interface to receive data for input neurons and synaptic weights associated with each of the input neurons, each of the plurality of processing units to process at least a portion of the data for the input neurons and synaptic weights to generate partial results; and an interconnect communicatively coupling the plurality of processing units, each of the processing units to share the partial results with one or more other processing units over the interconnect, the other processing units using the partial results to generate additional partial results or final results. The processing units may share data including input neurons and weights over the shared input bus.
    Type: Application
    Filed: November 19, 2015
    Publication date: September 28, 2017
    Inventors: Frederico C. PRATAS, Ayose J. FALCON, Marc LUPON, Fernando LATORRE, Pedro LOPEZ, Enric HERRERO ABELLANAS, Georgios TOURNAVITIS
  • Publication number: 20170220524
    Abstract: Systems and methods for performing convolution operations. An example processing system comprises: a processing core; and a convolver unit to apply a convolution filter to a plurality of input data elements represented by a two-dimensional array, the convolver unit comprising a plurality of multipliers coupled to two or more sets of latches, wherein each set of latches is to store a plurality of data elements of a respective one-dimensional section of the two-dimensional array.
    Type: Application
    Filed: March 9, 2017
    Publication date: August 3, 2017
    Inventors: Enric Herrero Abellanas, Marc Lupon, Ayose J. Falcon, Frederico C. Pratas, Fernando Latorre, Pedro Lopez