Patents by Inventor Pedro Henrique Oliveira Pinheiro

Pedro Henrique Oliveira Pinheiro has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11023772
    Abstract: In one embodiment, a feature map of an image having h×w pixels and a patch having one or more pixels of the image are received. The patch has been processed by a first set of layers of a convolutional neural network and contains an object centered within the patch. The patch is then processed using the feature map and one or more pixel classifiers of a classification layer of a deep-learning model, where the classification layer includes h×w pixel classifiers, with each pixel classifier corresponding to a respective pixel of the patch. Each of the pixel classifiers used to process the patch outputs a respective value indicating whether the corresponding pixel belongs to the object centered in the patch.
    Type: Grant
    Filed: October 1, 2019
    Date of Patent: June 1, 2021
    Assignee: Facebook, Inc.
    Inventors: Pedro Henrique Oliveira Pinheiro, Ronan Stéfan Collobert, Piotr Dollar
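    Illustrative sketch: The classification layer described in the abstract above is easiest to picture as a bank of independent binary pixel classifiers applied on top of shared convolutional features. The following is a minimal, hypothetical PyTorch sketch of that idea; the toy trunk, layer sizes, and 64×64 patch resolution are illustrative assumptions, not values taken from the patent.

      import torch
      import torch.nn as nn

      class PixelClassifierHead(nn.Module):
          """One binary classifier per output pixel: a single linear layer maps the
          flattened patch features to out_h*out_w logits, one per pixel of the patch."""
          def __init__(self, in_channels, feat_h, feat_w, out_h, out_w):
              super().__init__()
              self.out_h, self.out_w = out_h, out_w
              # Each of the out_h*out_w output units acts as an independent pixel classifier.
              self.classifiers = nn.Linear(in_channels * feat_h * feat_w, out_h * out_w)

          def forward(self, feature_map):
              # feature_map: (N, C, feat_h, feat_w), produced by the first set of conv layers.
              logits = self.classifiers(feature_map.flatten(start_dim=1))
              return logits.view(-1, self.out_h, self.out_w)

      # Hypothetical usage: a toy conv trunk stands in for "the first set of layers".
      trunk = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
      head = PixelClassifierHead(in_channels=32, feat_h=16, feat_w=16, out_h=64, out_w=64)
      patch = torch.randn(1, 3, 64, 64)            # an image patch with an object centered in it
      mask_logits = head(trunk(patch))             # (1, 64, 64): one classifier output per patch pixel
      mask = mask_logits.sigmoid() > 0.5           # True where the pixel belongs to the centered object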
  • Patent number: 10956817
    Abstract: Systems and methods for addressing the cross-domain issue using a similarity-based classifier convolutional neural network. An input image is passed through a convolutional neural network that extracts its features. These features are compared to features of multiple sets of prototype representations, with each set of prototype representations being extracted from and representing a category of images. The similarity between the features of the input image and the features of the various prototype representations is scored, and the prototype representation whose features are most similar to the features of the input image will have its label applied to the input image. The classifier is trained using images from a source domain and the input images are from a target domain. The training for the classifier is such that the classifier will be unable to determine if a specific data point is from the source domain or from the target domain.
    Type: Grant
    Filed: April 18, 2018
    Date of Patent: March 23, 2021
    Inventor: Pedro Henrique Oliveira Pinheiro
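    Illustrative sketch: As a rough, non-authoritative illustration of the similarity-based classifier described above, the sketch below compares encoder features against learnable per-category prototype sets and applies the label of the most similar prototype. The gradient-reversal layer is one common way to realize the constraint that the classifier cannot tell source from target; the patent does not name a specific mechanism, and the toy encoder, feature size, and prototype counts are invented for illustration.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class GradReverse(torch.autograd.Function):
          """Identity in the forward pass, negated gradient in the backward pass.
          Training a source-vs-target discriminator through this layer pushes the
          encoder toward features from which the domain cannot be determined."""
          @staticmethod
          def forward(ctx, x):
              return x.view_as(x)
          @staticmethod
          def backward(ctx, grad_output):
              return -grad_output

      class PrototypeClassifier(nn.Module):
          def __init__(self, encoder, feat_dim, num_classes, protos_per_class):
              super().__init__()
              self.encoder = encoder
              # One set of prototype representations per image category.
              self.prototypes = nn.Parameter(torch.randn(num_classes, protos_per_class, feat_dim))
              self.domain_head = nn.Linear(feat_dim, 2)     # source vs. target discriminator

          def forward(self, images):
              z = F.normalize(self.encoder(images), dim=-1)             # (N, D) image features
              p = F.normalize(self.prototypes, dim=-1)                  # (C, P, D) prototype features
              sims = torch.einsum('nd,cpd->ncp', z, p)                  # cosine similarity scores
              class_scores = sims.max(dim=2).values                     # best prototype per category
              domain_logits = self.domain_head(GradReverse.apply(z))    # trained adversarially
              return class_scores, domain_logits

      # Hypothetical usage with a toy encoder.
      encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU())
      model = PrototypeClassifier(encoder, feat_dim=64, num_classes=5, protos_per_class=4)
      class_scores, domain_logits = model(torch.randn(8, 3, 32, 32))
      predicted_labels = class_scores.argmax(dim=1)   # label of the most similar prototype set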
  • Patent number: 10853943
    Abstract: Systems and methods for counting objects in images based on each object's approximate location in the images. An image is passed to a segmentation module. The segmentation module segments the image into at least one object blob. Each object blob is an indication of a single object. The object blobs are counted by a counting module. In some embodiments, the segmentation module segments the image by classifying each image pixel and grouping nearby pixels of the same class together. In some embodiments, the segmentation module comprises a neural network that is trained to group pixels based on a set of training images. A plurality of the training images contain at least one point marker corresponding to a single training object. The segmentation module learns to group pixels into training object blobs that each contain a single point marker. Each training object blob is thus an indication of a single object.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: December 1, 2020
    Assignee: ELEMENT AI INC.
    Inventors: Issam Hadj Laradji, Negar Rostamzadeh, Pedro Henrique Oliveira Pinheiro, David Maria Vazquez Bermudez, Mark William Schmidt
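    Illustrative sketch: The counting step described above reduces to connected-component analysis over a per-pixel class map: nearby pixels of the same class form one blob, and each blob counts as one object. The sketch below shows only that counting module; the segmentation network trained from point markers is assumed to exist and is not shown, and the function name and toy prediction are illustrative.

      import numpy as np
      from scipy import ndimage

      def count_objects(class_map, background_class=0):
          """Count objects per class by grouping nearby pixels of the same class into blobs.
          class_map: (H, W) integer array of per-pixel class predictions, e.g. the argmax
          output of a segmentation network. Each connected blob is counted as one object."""
          counts = {}
          for cls in np.unique(class_map):
              if cls == background_class:
                  continue
              # Connected-component labelling groups adjacent pixels of the same class.
              _, num_blobs = ndimage.label(class_map == cls)
              counts[int(cls)] = num_blobs
          return counts

      # Toy 8x8 prediction: two separate blobs of class 1 and one blob of class 2.
      pred = np.zeros((8, 8), dtype=int)
      pred[1:3, 1:3] = 1
      pred[5:7, 5:7] = 1
      pred[0:2, 6:8] = 2
      print(count_objects(pred))   # {1: 2, 2: 1}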
  • Publication number: 20200043171
    Abstract: Systems and methods for counting objects in images based on each object's approximate location in the images. An image is passed to a segmentation module. The segmentation module segments the image into at least one object blob. Each object blob is an indication of a single object. The object blobs are counted by a counting module. In some embodiments, the segmentation module segments the image by classifying each image pixel and grouping nearby pixels of the same class together. In some embodiments, the segmentation module comprises a neural network that is trained to group pixels based on a set of training images. A plurality of the training images contain at least one point marker corresponding to a single training object. The segmentation module learns to group pixels into training object blobs that each contain a single point marker. Each training object blob is thus an indication of a single object.
    Type: Application
    Filed: July 31, 2018
    Publication date: February 6, 2020
    Inventors: Issam Hadj Laradji, Negar Rostamzadeh, Pedro Henrique Oliveira Pinheiro, David Maria Vazquez Bermudez, Mark William Schmidt
  • Publication number: 20200034653
    Abstract: In one embodiment, a feature map of an image having h×w pixels and a patch having one or more pixels of the image are received. The patch has been processed by a first set of layers of a convolutional neural network and contains an object centered within the patch. The patch is then processed using the feature map and one or more pixel classifiers of a classification layer of a deep-learning model, where the classification layer includes h×w pixel classifiers, with each pixel classifier corresponding to a respective pixel of the patch. Each of the pixel classifiers used to process the patch outputs a respective value indicating whether the corresponding pixel belongs to the object centered in the patch.
    Type: Application
    Filed: October 1, 2019
    Publication date: January 30, 2020
    Inventors: Pedro Henrique Oliveira Pinheiro, Ronan Stéfan Collobert, Piotr Dollar
  • Patent number: 10496896
    Abstract: In one embodiment, a plurality of patches of an image are processed using a first-pass of a first deep-learning model to generate object-level information for each of the patches. Each patch includes one or more pixels of the image. Using a second-pass of the first deep-learning model, a respective object proposal is generated for each of the plurality of patches of the image. The second-pass takes as input the first-pass output, and the generated respective object proposals comprise pixel-level information for each of the patches. Using a second deep-learning model, a respective score is computed for each object proposal. The second deep-learning model takes as input the first-pass output, and the object score includes a likelihood that the respective patch of the object proposal contains an entire object.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: December 3, 2019
    Assignee: Facebook, Inc.
    Inventors: Pedro Henrique Oliveira Pinheiro, Ronan Stéfan Collobert, Piotr Dollar
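    Illustrative sketch: One way to read the two-pass structure above is a shared trunk whose output feeds both a mask branch (the second pass, producing pixel-level proposals) and a separate scoring branch (the second model, estimating whether the patch holds an entire object). The PyTorch sketch below is a minimal stand-in under that reading; the layer sizes, the 28×28 mask resolution, and the top-k selection at the end are assumptions for illustration only.

      import torch
      import torch.nn as nn

      class ProposalAndScore(nn.Module):
          def __init__(self, mask_size=28):
              super().__init__()
              self.mask_size = mask_size
              self.trunk = nn.Sequential(                   # first pass: object-level information per patch
                  nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(7), nn.Flatten())
              feat_dim = 64 * 7 * 7
              # Second pass: turns the first-pass output into a pixel-level object proposal.
              self.mask_branch = nn.Linear(feat_dim, mask_size * mask_size)
              # Second model: also consumes the first-pass output and scores the proposal.
              self.score_branch = nn.Linear(feat_dim, 1)

          def forward(self, patches):
              feats = self.trunk(patches)                                      # shared first-pass output
              masks = self.mask_branch(feats).view(-1, self.mask_size, self.mask_size)
              scores = self.score_branch(feats).squeeze(1)                     # likelihood of an entire object
              return masks, scores

      # Hypothetical usage on a batch of image patches; keep the best-scoring proposals.
      model = ProposalAndScore()
      masks, scores = model(torch.randn(4, 3, 64, 64))
      keep = scores.topk(2).indices
      best_masks = masks[keep]          # pixel-level proposals for the top-scoring patches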
  • Patent number: 10496895
    Abstract: In one embodiment, a plurality of patches of an image are processed, using a first set of layers of a convolutional neural network, to output a plurality of object proposals associated with the plurality of patches of the image. Each patch includes one or more pixels of the image. Each object proposal includes a prediction as to a location of an object in the respective patch. Using a second set of layers of the convolutional neural network, the plurality of object proposals outputted by the first set of layers are processed to generate a plurality of refined object proposals. Each refined object proposal includes pixel-level information for the respective patch of the image. The first layer in the second set of layers of the convolutional neural network takes as input the plurality of object proposals outputted by the first set of layers.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: December 3, 2019
    Assignee: Facebook, Inc.
    Inventors: Pedro Henrique Oliveira Pinheiro, Ronan Stéfan Collobert, Piotr Dollar
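    Illustrative sketch: A compact way to picture the claim above is that the first set of layers produces a coarse proposal for each patch, and the first layer of the second set consumes that proposal directly, upsampling and refining it to pixel-level resolution. The sketch below is an assumed minimal architecture in that spirit; the channel counts, strides, and absence of skip connections are simplifications, not details from the patent.

      import torch
      import torch.nn as nn

      class TwoStageProposalNet(nn.Module):
          def __init__(self):
              super().__init__()
              self.first_set = nn.Sequential(                  # first set: coarse object proposal
                  nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, 3, stride=2, padding=1))    # (N, 1, H/4, W/4)
              self.second_set = nn.Sequential(                 # second set: refinement to pixel level
                  nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
                  nn.Conv2d(16, 1, 3, padding=1))              # (N, 1, H, W) refined proposal

          def forward(self, patches):
              coarse = self.first_set(patches)       # output of the first set of layers
              refined = self.second_set(coarse)      # first layer of the second set takes it as input
              return coarse, refined

      # Hypothetical usage on 64x64 patches.
      net = TwoStageProposalNet()
      coarse, refined = net(torch.randn(2, 3, 64, 64))
      print(coarse.shape, refined.shape)   # torch.Size([2, 1, 16, 16]) torch.Size([2, 1, 64, 64])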
  • Publication number: 20190325299
    Abstract: Systems and methods for addressing the cross-domain issue using a similarity-based classifier convolutional neural network. An input image is passed through a convolutional neural network that extracts its features. These features are compared to features of multiple sets of prototype representations, with each set of prototype representations being extracted from and representing a category of images. The similarity between the features of the input image and the features of the various prototype representations is scored, and the prototype representation whose features are most similar to the features of the input image will have its label applied to the input image. The classifier is trained using images from a source domain and the input images are from a target domain. The training for the classifier is such that the classifier will be unable to determine if a specific data point is from the source domain or from the target domain.
    Type: Application
    Filed: April 18, 2018
    Publication date: October 24, 2019
    Inventor: Pedro Henrique Oliveira Pinheiro
  • Publication number: 20190228259
    Abstract: In one embodiment, a plurality of patches of an image are processed using a first-pass of a first deep-learning model to generate object-level information for each of the patches. Each patch includes one or more pixels of the image. Using a second-pass of the first deep-learning model, a respective object proposal is generated for each of the plurality of patches of the image. The second-pass takes as input the first-pass output, and the generated respective object proposals comprise pixel-level information for each of the patches. Using a second deep-learning model, a respective score is computed for each object proposal. The second deep-learning model takes as input the first-pass output, and the object score includes a likelihood that the respective patch of the object proposal contains an entire object.
    Type: Application
    Filed: March 29, 2019
    Publication date: July 25, 2019
    Inventors: Pedro Henrique Oliveira Pinheiro, Ronan Stéfan Collobert, Piotr Dollar
  • Patent number: 10255522
    Abstract: In one embodiment, a plurality of patches of an image are processed using a first deep-learning model to detect a plurality of features associated with each of the patches of the image. Each patch includes one or more pixels of the image. Using a second deep-learning model, a respective object proposal is generated for each of the plurality of patches of the image. The second deep-learning model takes as input the plurality of detected features associated with the respective patch of the image, and each object proposal includes a prediction as to a location of an object in the patch. Using a third deep-learning model, a respective score is computed for each object proposal generated using the second deep-learning model. The third deep-learning model takes as input the plurality of detected features associated with the respective patch of the image, and the object score may include a likelihood that the patch contains an entire object.
    Type: Grant
    Filed: June 15, 2017
    Date of Patent: April 9, 2019
    Assignee: Facebook, Inc.
    Inventors: Pedro Henrique Oliveira Pinheiro, Ronan Stéfan Collobert, Piotr Dollar
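    Illustrative sketch: The three-model split described above, in which the proposal model and the scoring model both consume the features detected by the first model, can be sketched over a simple sliding window of patches. Everything below is a toy, hypothetical configuration: the window size, stride, box parameterization, and layer shapes are invented for illustration.

      import torch
      import torch.nn as nn

      feature_model = nn.Sequential(                    # first model: detect features for each patch
          nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
          nn.AdaptiveAvgPool2d(4), nn.Flatten())        # (N, 32*4*4)
      proposal_model = nn.Linear(32 * 4 * 4, 4)         # second model: predict object location (x, y, w, h)
      score_model = nn.Sequential(nn.Linear(32 * 4 * 4, 1), nn.Sigmoid())   # third model: objectness score

      def propose(image, patch_size=64, stride=32):
          """Slide a window over the image; detect features once per patch and feed
          them to both the proposal model and the scoring model."""
          patches, offsets = [], []
          _, height, width = image.shape
          for top in range(0, height - patch_size + 1, stride):
              for left in range(0, width - patch_size + 1, stride):
                  patches.append(image[:, top:top + patch_size, left:left + patch_size])
                  offsets.append((top, left))
          feats = feature_model(torch.stack(patches))   # shared input to the next two models
          boxes = proposal_model(feats)                 # where the object is within each patch
          scores = score_model(feats).squeeze(1)        # likelihood each patch contains an entire object
          return boxes, scores, offsets

      # Hypothetical usage on a 128x128 image (nine overlapping patches).
      boxes, scores, offsets = propose(torch.randn(3, 128, 128))
      best = scores.argmax().item()
      print(offsets[best], boxes[best], scores[best])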
  • Publication number: 20180285686
    Abstract: In one embodiment, a plurality of patches of an image are processed, using a first set of layers of a convolutional neural network, to output a plurality of object proposals associated with the plurality of patches of the image. Each patch includes one or more pixels of the image. Each object proposal includes a prediction as to a location of an object in the respective patch. Using a second set of layers of the convolutional neural network, the plurality of object proposals outputted by the first set of layers are processed to generate a plurality of refined object proposals. Each refined object proposal includes pixel-level information for the respective patch of the image. The first layer in the second set of layers of the convolutional neural network takes as input the plurality of object proposals outputted by the first set of layers.
    Type: Application
    Filed: December 22, 2017
    Publication date: October 4, 2018
    Inventors: Pedro Henrique Oliveira Pinheiro, Ronan Stéfan Collobert, Piotr Dollar
  • Publication number: 20170364771
    Abstract: In one embodiment, a plurality of patches of an image are processed using a first deep-learning model to detect a plurality of features associated with each of the patches of the image. Each patch includes one or more pixels of the image. Using a second deep-learning model, a respective object proposal is generated for each of the plurality of patches of the image. The second deep-learning model takes as input the plurality of detected features associated with the respective patch of the image, and each object proposal includes a prediction as to a location of an object in the patch. Using a third deep-learning model, a respective score is computed for each object proposal generated using the second deep-learning model. The third deep-learning model takes as input the plurality of detected features associated with the respective patch of the image, and the object score may include a likelihood that the patch contains an entire object.
    Type: Application
    Filed: June 15, 2017
    Publication date: December 21, 2017
    Inventors: Pedro Henrique Oliveira Pinheiro, Ronan Stéfan Collobert, Piotr Dollar