Patents by Inventor Joaquin Zepeda Salvatierra

Joaquin Zepeda Salvatierra has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12147878
    Abstract: Techniques for feedback-based training may include selecting a scoring machine learning model based at least in part on a test metric, and applying the model on an unlabeled dataset to generate, per dataset item of the unlabeled dataset, a prediction and an importance ranking score for the prediction. Techniques for feedback-based training may further include selecting, based on the importance ranking scores, a result of the application of the model on the unlabeled dataset, providing the result and requesting feedback on the result via a graphical user interface, receiving the feedback via the graphical user interface, adding data from the unlabeled dataset into a training dataset when the feedback indicates a verified result, and retraining the model using the training dataset with the data added from the unlabeled dataset to generate a retrained model.
    Type: Grant
    Filed: November 27, 2020
    Date of Patent: November 19, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Barath Balasubramanian, Rahul Bhotika, Niels Brouwers, Ranju Das, Prakash Krishnan, Shaun Ryan James McDowell, Anushri Mainthia, Rakesh Madhavan Nambiar, Anant Patel, Avinash Aghoram Ravichandran, Joaquin Zepeda Salvatierra, Gurumurthy Swaminathan
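
A minimal sketch of the feedback loop described in the abstract above, assuming a scikit-learn classifier, prediction uncertainty as the importance ranking score, and a hypothetical `ask_user` callback standing in for the graphical user interface:

```python
import numpy as np
from sklearn.base import clone

def feedback_training_round(model, X_train, y_train, X_unlabeled, ask_user):
    """One feedback round: score unlabeled items, surface the top-ranked
    prediction for review, fold verified data into the training set, retrain."""
    probs = model.predict_proba(X_unlabeled)
    preds = probs.argmax(axis=1)
    scores = 1.0 - probs.max(axis=1)      # uncertainty as the importance ranking score
    idx = int(scores.argmax())            # prediction most worth reviewing
    label, verified = ask_user(X_unlabeled[idx], preds[idx])   # stand-in for the GUI step
    if verified:
        X_train = np.vstack([X_train, X_unlabeled[idx:idx + 1]])
        y_train = np.append(y_train, label)
        X_unlabeled = np.delete(X_unlabeled, idx, axis=0)
        model = clone(model).fit(X_train, y_train)   # retrain with the augmented dataset
    return model, X_train, y_train, X_unlabeled
```
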
  • Patent number: 11983243
    Abstract: Techniques for anomaly detection are described. An exemplary method includes receiving one or more requests to train an anomaly detection machine learning model using feedback-based training, the request to indicate one or more of a type of analysis to perform, a model selection indication, and a configuration for a training dataset; training the anomaly detection machine learning model according to the one or more requests using the training data; performing feedback-based training on the trained anomaly detection machine learning model; and using the retrained anomaly detection machine learning model.
    Type: Grant
    Filed: November 27, 2020
    Date of Patent: May 14, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Barath Balasubramanian, Rahul Bhotika, Niels Brouwers, Ranju Das, Prakash Krishnan, Shaun Ryan James McDowell, Anushri Mainthia, Rakesh Madhavan Nambiar, Anant Patel, Avinash Aghoram Ravichandran, Joaquin Zepeda Salvatierra, Gurumurthy Swaminathan
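
One plausible shape for the training request the abstract describes (analysis type, model selection, dataset configuration); all field names here are illustrative, not the service's actual API:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AnomalyTrainingRequest:
    analysis_type: str                       # e.g. "classification" or "segmentation"
    model_selection: Optional[str] = None    # explicit model choice, or None for automatic selection
    dataset_config: dict = field(default_factory=dict)

def handle_request(req: AnomalyTrainingRequest) -> None:
    """Dispatch a training run according to the request fields."""
    model_name = req.model_selection or "default-anomaly-detector"
    print(f"training {model_name} for {req.analysis_type} analysis "
          f"with dataset config {req.dataset_config}")

handle_request(AnomalyTrainingRequest(analysis_type="classification",
                                      dataset_config={"train_split": 0.8}))
```
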
  • Patent number: 11741592
    Abstract: Techniques for anomaly detection are described. An exemplary method includes receiving a request to create a training data set from at least one image, the request to include an indication of the at least one image and at least one indication of an operation to perform on the at least one image to generate a plurality of images from the at least one image; creating a training dataset by extracting one or more chunks from a first at least one image according to the request; receiving one or more requests to train an anomaly detection machine learning model using the created training dataset; and training an anomaly detection machine learning model according to the one or more requests using the created training dataset.
    Type: Grant
    Filed: November 27, 2020
    Date of Patent: August 29, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Joaquin Zepeda Salvatierra, Anant Patel, Shaun Ryan James McDowell, Prakash Krishnan, Ranju Das, Niels Brouwers, Barath Balasubramanian
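
A minimal sketch of one way to turn a single image into many training chunks, as the abstract describes; the tile size and stride are illustrative parameters:

```python
import numpy as np

def extract_chunks(image, chunk_size=64, stride=64):
    """Tile one image into fixed-size chunks so a single source image yields
    many training examples."""
    h, w = image.shape[:2]
    chunks = []
    for y in range(0, h - chunk_size + 1, stride):
        for x in range(0, w - chunk_size + 1, stride):
            chunks.append(image[y:y + chunk_size, x:x + chunk_size])
    return chunks

# A 256x256 image yields a 4x4 grid of 64x64 chunks with these settings.
dataset = extract_chunks(np.zeros((256, 256, 3), dtype=np.uint8))
assert len(dataset) == 16
```
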
  • Patent number: 11481683
    Abstract: Techniques for creating machine learning models for direct homography regression for image rectification are described. In certain embodiments, a training service trains an algorithm on a source view of a training image and a homography matrix of the training image into a machine learning model that generates a normalized homography matrix for an input of the source view. The normalized homography matrix may then be utilized to generate a target view of an image input into the machine learning model. The target view of the image may be used in a document processing pipeline for document images captured using cameras.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: October 25, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Kunwar Yashraj Singh, Joaquin Zepeda Salvatierra, Erhan Bas, Vijay Mahadevan, Jonathan Wu, Rahul Bhotika
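
A small sketch of the rectification step: once a model has regressed a homography matrix H for the source view, applying it maps source-view points into the target view. The matrix values and corner coordinates below are illustrative; the regression model itself is not shown:

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2-D points through a 3x3 homography using homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]

# A trained model would regress the entries of H from the source view; a fixed
# matrix stands in here to show how the prediction rectifies the image corners.
H = np.array([[1.0, 0.1, 5.0],
              [0.0, 1.2, -3.0],
              [0.0, 0.001, 1.0]])
corners = np.array([[0, 0], [640, 0], [640, 480], [0, 480]], dtype=float)
print(apply_homography(H, corners))   # corner positions in the target view
```
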
  • Publication number: 20220171995
    Abstract: Techniques for anomaly detection are described. An exemplary method includes receiving one or more requests to train an anomaly detection machine learning model using feedback-based training, the request to indicate one or more of a type of analysis to perform, a model selection indication, and a configuration for a training dataset; training the anomaly detection machine learning model according to the one or more requests using the training data; performing feedback-based training on the trained anomaly detection machine learning model; and using the retrained anomaly detection machine learning model.
    Type: Application
    Filed: November 27, 2020
    Publication date: June 2, 2022
    Inventors: Barath BALASUBRAMANIAN, Rahul BHOTIKA, Niels BROUWERS, Ranju DAS, Prakash KRISHNAN, Shaun Ryan James MCDOWELL, Anushri MAINTHIA, Rakesh Madhavan NAMBIAR, Anant PATEL, Avinash AGHORAM RAVICHANDRAN, Joaquin ZEPEDA SALVATIERRA, Gurumurthy SWAMINATHAN
  • Publication number: 20220172342
    Abstract: Techniques for anomaly detection are described. An exemplary method includes receiving a request to create a training data set from at least one image, the request to include an indication of the at least one image and at least one indication of an operation to perform on the at least one image to generate a plurality of images from the at least one image; creating a training dataset by extracting one or more chunks from a first at least one image according to the request; receiving one or more requests to train an anomaly detection machine learning model using the created training dataset; and training an anomaly detection machine learning model according to the one or more requests using the created training dataset.
    Type: Application
    Filed: November 27, 2020
    Publication date: June 2, 2022
    Inventors: Joaquin ZEPEDA SALVATIERRA, Anant PATEL, Shaun Ryan James MCDOWELL, Prakash KRISHNAN, Ranju DAS, Niels BROUWERS, Barath BALASUBRAMANIAN
  • Publication number: 20220172100
    Abstract: Techniques for feedback-based training are described.
    Type: Application
    Filed: November 27, 2020
    Publication date: June 2, 2022
    Inventors: Barath BALASUBRAMANIAN, Rahul BHOTIKA, Niels BROUWERS, Ranju DAS, Prakash KRISHNAN, Shaun Ryan James MCDOWELL, Anushri MAINTHIA, Rakesh Madhavan NAMBIAR, Anant PATEL, Avinash AGHORAM RAVICHANDRAN, Joaquin ZEPEDA SALVATIERRA, Gurumurthy SWAMINATHAN
  • Patent number: 11202097
    Abstract: A method and an apparatus for encoding a picture are disclosed. For at least one block of a picture to encode, a block predictor is determined (22) for a decoded first component (21) of said at least one block, from a reconstructed region of a first component of said picture. At least one second component of said at least one block is then encoded (23) by predicting said at least one second component from a second component of said block predictor. Corresponding decoding method and apparatus are disclosed.
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: December 14, 2021
    Assignee: INTERDIGITAL MADISON PATENT HOLDINGS, SAS
    Inventors: Dominique Thoreau, Mehmet Turkan, Martin Alain, Joaquin Zepeda Salvatierra
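
An illustrative sketch of the cross-component prediction idea in the abstract above, assuming a sum-of-squared-differences block search over the reconstructed first-component region; the search strategy and distortion metric are assumptions, not the patented method:

```python
import numpy as np

def predict_second_component(rec_c1, rec_c2, block_c1, search_positions):
    """Find, in the reconstructed first-component region, the block that best
    matches the current block's decoded first component (SSD), then return the
    co-located second-component block as the prediction."""
    bh, bw = block_c1.shape
    best_pos, best_err = None, np.inf
    for y, x in search_positions:                 # candidate top-left corners
        cand = rec_c1[y:y + bh, x:x + bw].astype(float)
        err = ((cand - block_c1) ** 2).sum()
        if err < best_err:
            best_err, best_pos = err, (y, x)
    y, x = best_pos
    return rec_c2[y:y + bh, x:x + bw]
```
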
  • Patent number: 11184581
    Abstract: A content stream comprising video and synchronized illumination data is based on a reference lighting setup from, for example, the site of the content creation. The content stream is received at a user location where the illumination data controls user lighting that is synchronized with the video data, so that when the video data is displayed the user's lighting is in synchronization with the video. In one embodiment, the illumination data is also synchronized with events of a game, so that a user playing games in a gaming environment will have his lighting synchronized with video and events of the game. In another embodiment, the content stream is embedded on a disk.
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: November 23, 2021
    Assignee: INTERDIGITAL MADISON PATENT HOLDINGS, SAS
    Inventors: Philippe Guillotel, Martin Alain, Erik Reinhard, Jean Begaint, Dominique Thoreau, Joaquin Zepeda Salvatierra
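
A rough sketch of the playback-side synchronization, assuming the illumination data arrives as timestamped records; `set_light` and `show_frame` are hypothetical callbacks for the user's lighting system and display:

```python
import time

def play_with_lighting(video_frames, light_track, set_light, show_frame, fps=25):
    """Step through the video and apply, for each frame, the most recent
    illumination record whose timestamp has been reached, keeping the user's
    lighting in sync with the displayed video."""
    frame_dur = 1.0 / fps
    for i, frame in enumerate(video_frames):
        t = i * frame_dur
        # light_track: (timestamp, color) pairs sorted by timestamp
        applicable = [color for ts, color in light_track if ts <= t]
        if applicable:
            set_light(applicable[-1])        # latest illumination value so far
        show_frame(frame)
        time.sleep(frame_dur)
```
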
  • Patent number: 10999607
    Abstract: The present principles are directed to a parameterized OETF/EOTF for processing images and video. The present principles provide a method for encoding a picture, comprising: applying a parameterized transfer function to a luminance (L) signal of the picture to determine a resulting V(L) transformed signal; encoding the resulting V(L); wherein the parameterized transfer function is adjusted based on a plurality of parameters to model one of a plurality of transfer functions. The present principles also provide for a method for decoding a digital picture, the method comprising: receiving the digital picture; applying a parameterized transfer function to the digital picture to determine a luminance (L) signal of the digital picture, the parameterized transfer function being based on a plurality of parameters; wherein the parameterized transfer function is adjusted based on a plurality of parameters to model one of a plurality of transfer functions.
    Type: Grant
    Filed: January 26, 2016
    Date of Patent: May 4, 2021
    Assignee: INTERDIGITAL MADISON PATENT HOLDINGS, SAS
    Inventors: Erik Reinhard, Pierre Andrivon, Philippe Bordes, Christophe Chevance, Jurgen Stauder, Patrick Morvan, Edouard Francois, Joaquin Zepeda Salvatierra
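
A toy example of a parameterized transfer function: a piecewise power law whose parameters can be set to reproduce familiar OETFs (the values below give the BT.709 curve). The patent's specific V(L) parameterization is not reproduced here:

```python
def parameterized_oetf(L, a, b, gamma, threshold, slope):
    """Piecewise power-law V(L): a linear toe below the threshold and a power
    segment above it, shaped entirely by the parameters."""
    if L < threshold:
        return slope * L
    return a * L ** gamma - b

# These parameter values reproduce the BT.709 OETF; other choices model other curves.
print(round(parameterized_oetf(0.5, a=1.099, b=0.099, gamma=0.45,
                               threshold=0.018, slope=4.5), 4))   # 0.7055
```
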
  • Publication number: 20200382742
    Abstract: A content stream comprising video and synchronized illumination data is based on a reference lighting setup from, for example, the site of the content creation. The content stream is received at a user location where the illumination data controls user lighting that is synchronized with the video data, so that when the video data is displayed the user's lighting is in synchronization with the video. In one embodiment, the illumination data is also synchronized with events of a game, so that a user playing games in a gaming environment will have his lighting synchronized with video and events of the game. In another embodiment, the content stream is embedded on a disk.
    Type: Application
    Filed: November 28, 2017
    Publication date: December 3, 2020
    Inventors: Philippe GUILLOTEL, Martin ALAIN, Erik REINHARD, Jean BEGAINT, Dominique THOREAU, Joaquin ZEPEDA SALVATIERRA
  • Publication number: 20200021846
    Abstract: A spatial guided prediction technique uses reconstructed pixels of a first component of a digital video image block to determine prediction modes used to recursively build prediction blocks for the other components of the same digital video image block. The technique builds improved predictions resulting in smaller prediction residuals and less bits to code for a given image quality. In one embodiment, the prediction blocks for the subsequent digital video component blocks are built recursively line by line. In another embodiment, the prediction blocks for subsequent digital video component blocks are built recursively column by column.
    Type: Application
    Filed: September 21, 2017
    Publication date: January 16, 2020
    Inventors: Dominique THOREAU, Mehmet TURKAN, Martin ALAIN, Joaquin ZEPEDA SALVATIERRA
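
An illustrative sketch of recursive line-by-line prediction guided by the first component; the two modes (copy the line above, or copy it with a one-pixel shift) and the `mode_from_c1` selector are simplifications for illustration, not the disclosed mode set:

```python
import numpy as np

def recursive_line_prediction(block_c1, top_row_c2, mode_from_c1):
    """Build the second-component prediction line by line: the mode derived from
    the first component decides how each line is produced from the line above
    (plain copy vs. a one-pixel horizontal shift, as toy modes)."""
    h, w = block_c1.shape
    pred = np.zeros((h, w))
    prev = top_row_c2.astype(float)
    for y in range(h):
        mode = mode_from_c1(block_c1[y])          # e.g. 'vertical' or 'diagonal'
        if mode == 'vertical':
            pred[y] = prev
        else:                                     # 'diagonal': shift the previous line
            pred[y] = np.roll(prev, 1)
            pred[y, 0] = prev[0]
        prev = pred[y]
    return pred
```
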
  • Publication number: 20190238886
    Abstract: A method and an apparatus for encoding a picture are disclosed. For at least one block of a picture to encode, a block predictor is determined (22) for a decoded first component (21) of said at least one block, from a reconstructed region of a first component of said picture. At least one second component of said at least one block is then encoded (23) by predicting said at least one second component from a second component of said block predictor. Corresponding decoding method and apparatus are disclosed.
    Type: Application
    Filed: October 24, 2017
    Publication date: August 1, 2019
    Inventors: Dominique THOREAU, Mehmet TURKAN, Martin ALAIN, Joaquin ZEPEDA SALVATIERRA
  • Publication number: 20180341805
    Abstract: In a particular implementation, a codebook C can be used for quantizing a feature vector of a database image into a quantization index, and then a different codebook (B) can be used to approximate the feature vector based on the quantization index. The codebooks B and C can have different sizes. Before performing image search, a lookup table can be built offline to include distances between the feature vector for a query image and codevectors in codebook B to speed up the image search. Using triplet constraints wherein a first image and a second image are indicated as a matching pair and the first image and a third image as non-matching, the codebooks B and C can be trained for the task of image search. The present principles can be applied to regular vector quantization, product quantization, and residual quantization.
    Type: Application
    Filed: November 4, 2016
    Publication date: November 29, 2018
    Inventors: Himalaya JAIN, Cagdas BILEN, Joaquin ZEPEDA SALVATIERRA, Patrick PEREZ
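
A compact sketch of search with two codebooks, assuming for simplicity that B and C have the same size so an index into C maps directly to a codeword of B; the per-query lookup table holds the query's distances to every codeword of B:

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 8, 32
C = rng.normal(size=(K, d))   # codebook used to quantize database feature vectors
B = rng.normal(size=(K, d))   # codebook used to approximate them at search time

def quantize(x, codebook):
    """Index of the nearest codeword."""
    return int(np.argmin(((codebook - x) ** 2).sum(axis=1)))

def search(query, db_indices):
    """Build the query's lookup table of distances to every codeword of B once,
    then rank database items by looking up their assigned codeword's entry."""
    lut = ((B - query) ** 2).sum(axis=1)
    return np.argsort(lut[db_indices])

database = rng.normal(size=(100, d))
db_indices = np.array([quantize(x, C) for x in database])
print(search(rng.normal(size=d), db_indices)[:5])
```
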
  • Publication number: 20180027262
    Abstract: The present principles are directed to a parameterized OETF/EOTF for processing images and video. The present principles provide a method for encoding a picture, comprising: applying a parameterized transfer function to a luminance (L) signal of the picture to determine a resulting V(L) transformed signal; encoding the resulting V(L); wherein the parameterized transfer function is adjusted based on a plurality of parameters to model one of a plurality of transfer functions. The present principles also provide for a method for decoding a digital picture, the method comprising: receiving the digital picture; applying a parameterized transfer function to the digital picture to determine a luminance (L) signal of the digital picture, the parameterized transfer function being based on a plurality of parameters; wherein the parameterized transfer function is adjusted based on a plurality of parameters to model one of a plurality of transfer functions.
    Type: Application
    Filed: January 26, 2016
    Publication date: January 25, 2018
    Inventors: Erik REINHARD, Pierre ANDRIVON, Philippe BORDES, Christophe CHEVANCE, Jurgen STAUDER, Patrick MORVAN, Edouard FRANCOIS, Joaquin ZEPEDA SALVATIERRA
  • Publication number: 20170309004
    Abstract: The present disclosure relates to image recognition or image searching. More precisely, the present disclosure relates to pruning local descriptors extracted from an input image. The present disclosure proposes a system, method and device directed to the pruning of local descriptors extracted from image patches of an input image. The present disclosure prunes local descriptors assigned to a codebook cell, based on a relationship of the local descriptor and the assigned codebook cell. The present disclosure includes assigning a weight value for use in pruning based on the relationship of the local descriptor and the assigned codebook cell. This weight value is then used during the encoding of the local descriptors for use in image searching or image recognition.
    Type: Application
    Filed: August 25, 2015
    Publication date: October 26, 2017
    Applicant: THOMSON LICENSING
    Inventors: Joaquin ZEPEDA SALVATIERRA, Aakanksha RANA, Patrick PEREZ
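
A minimal sketch of descriptor pruning: descriptors are assigned to their nearest codebook cell and weighted by that relationship; here the distance to the centroid drives the weight, which is one plausible rule rather than the disclosed one:

```python
import numpy as np

def prune_descriptors(descriptors, codebook, keep_ratio=0.5):
    """Assign each descriptor to its nearest codebook cell, weight it by its
    distance to that cell's centroid, and keep the best-weighted fraction."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    assigned = d2.argmin(axis=1)
    weights = 1.0 / (1.0 + d2[np.arange(len(descriptors)), assigned])
    keep = np.argsort(-weights)[: int(len(descriptors) * keep_ratio)]
    return descriptors[keep], assigned[keep], weights[keep]

rng = np.random.default_rng(0)
pruned, cells, w = prune_descriptors(rng.normal(size=(200, 64)), rng.normal(size=(16, 64)))
print(pruned.shape)   # (100, 64): half of the descriptors survive
```
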
  • Publication number: 20170262478
    Abstract: A method for retrieving at least one search image matching a query image commences by first extracting a set of search images. The query image is encoded into a query image feature vector and the search images are encoded into search image feature vectors using an optimized encoding process that makes use of learned encoding parameters. The Euclidean distances between the query image feature vector and the search image feature vectors are then computed. The search images are ranked based on the computed distances; and at least one highest-ranked search image is retrieved.
    Type: Application
    Filed: August 25, 2015
    Publication date: September 14, 2017
    Inventors: Joaquin ZEPEDA SALVATIERRA, Patrick PEREZ, Aakanksha RANA
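
The ranking step of the abstract above, in a few lines; the feature vectors would come from the learned encoding, so random vectors stand in here:

```python
import numpy as np

def retrieve(query_vec, search_vecs, top_k=5):
    """Rank search images by Euclidean distance between their feature vectors
    and the query's, returning the indices of the closest matches."""
    dists = np.linalg.norm(search_vecs - query_vec, axis=1)
    return np.argsort(dists)[:top_k]

rng = np.random.default_rng(0)
print(retrieve(rng.normal(size=128), rng.normal(size=(1000, 128))))
```
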
  • Publication number: 20160140425
    Abstract: A technique for improving the performance of image classification systems is proposed which consists of learning an adaptation architecture on top of the input features jointly with linear classifiers, e.g., SVM. This adaptation method is agnostic to the type of input feature and applies either to features built using aggregators, e.g., BoW, FV, or to features obtained from the activations or outputs from DCNN layers. The adaptation architecture may be single (shallow) or multi-layered (deep). This technique achieves a higher performance compared to current state of the art classification systems.
    Type: Application
    Filed: November 16, 2015
    Publication date: May 19, 2016
    Inventors: Praveen Anil KULKARNI, Joaquin ZEPEDA SALVATIERRA, Frédéric JURIE
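
A bare-bones sketch of the joint training idea: a single (shallow) adaptation layer and a hinge-loss linear classifier learned together by gradient descent, for binary labels in {-1, +1}. Layer size, learning rate, and regularization are illustrative choices:

```python
import numpy as np

def train_adapted_classifier(X, y, hidden=64, lr=1e-2, epochs=100, lam=1e-3):
    """Jointly learn a one-layer ReLU adaptation of the input features and a
    linear SVM-style classifier on top, by plain gradient descent."""
    rng = np.random.default_rng(0)
    n, d = X.shape
    W1 = rng.normal(scale=0.1, size=(d, hidden))   # adaptation layer
    w2 = rng.normal(scale=0.1, size=hidden)        # linear classifier
    for _ in range(epochs):
        H = np.maximum(X @ W1, 0.0)                # adapted features
        margins = y * (H @ w2)
        active = margins < 1.0                     # where the hinge loss is active
        g_scores = -y * active / n
        grad_w2 = H.T @ g_scores + lam * w2
        grad_H = np.outer(g_scores, w2) * (H > 0)
        grad_W1 = X.T @ grad_H + lam * W1
        w2 -= lr * grad_w2
        W1 -= lr * grad_W1
    return W1, w2

# Prediction: sign of np.maximum(X_new @ W1, 0.0) @ w2
```
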
  • Publication number: 20160119628
    Abstract: A method for processing in an encoder, the method comprising receiving, by the encoder, a set of local descriptors derived from an image, obtaining, by the encoder, K code words, wherein K>1; and determining, by the encoder, a first element of a bag-of-words image feature vector by using a differentiable function having a difference between each of the local descriptors and one of the K code words as a first parameter, wherein each of the K code words is used in the differentiable function for determining a different element of the bag-of-words image feature vector.
    Type: Application
    Filed: October 22, 2015
    Publication date: April 28, 2016
    Inventors: Joaquin Zepeda Salvatierra, Praveen Anil Kulkarni
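
A small sketch of a differentiable bag-of-words encoding: a softmax over negative squared distances replaces the hard nearest-codeword assignment, so every element of the feature vector is a smooth function of the descriptors. The softmax choice and the beta parameter are illustrative:

```python
import numpy as np

def soft_bow(descriptors, codewords, beta=10.0):
    """Element k of the output is a softmax-weighted count of descriptors close
    to codeword k, making the encoding differentiable in descriptors and codewords."""
    d2 = ((descriptors[:, None, :] - codewords[None, :, :]) ** 2).sum(axis=2)
    logits = -beta * (d2 - d2.min(axis=1, keepdims=True))   # stabilized softmax
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)
    return w.sum(axis=0)

rng = np.random.default_rng(0)
print(soft_bow(rng.normal(size=(50, 16)), rng.normal(size=(8, 16))))
```
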
  • Publication number: 20160110609
    Abstract: A temporal section that is defined by boundary images is selected in a video sequence. A maximum of k stable image frames are selected in the temporal section of image frames having a lowest temporal activity. Image fingerprints are computed from the selected stable image frames. A mega-frame image fingerprint data structure is constructed from the computed fingerprints.
    Type: Application
    Filed: April 25, 2014
    Publication date: April 21, 2016
    Inventors: Frederic Lefebvre, Joaquin Zepeda Salvatierra, Patrick Perez
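
A rough sketch of the per-section processing described in the abstract above; temporal activity is measured here as the mean absolute frame difference, and the default `fingerprint` (per-channel means) is only a stand-in for a real image fingerprint:

```python
import numpy as np

def mega_frame_fingerprint(section_frames, k=3,
                           fingerprint=lambda f: np.atleast_1d(f.mean(axis=(0, 1)))):
    """Select at most k frames with the lowest temporal activity in the section,
    fingerprint each, and stack the fingerprints into one mega-frame descriptor."""
    frames = [f.astype(float) for f in section_frames]
    activity = [np.inf] + [np.abs(frames[i] - frames[i - 1]).mean()
                           for i in range(1, len(frames))]   # inf: first frame has no predecessor
    stable = np.argsort(activity)[:k]                        # most stable frame indices
    return np.concatenate([fingerprint(frames[i]) for i in sorted(stable)])

rng = np.random.default_rng(0)
section = [rng.integers(0, 255, size=(120, 160, 3)) for _ in range(10)]
print(mega_frame_fingerprint(section))
```
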