Patents by Inventor Miquel Angel FARRÉ GUIU

Miquel Angel FARRÉ GUIU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11074456
    Abstract: According to one implementation, a system for automating content annotation includes a computing platform having a hardware processor and a system memory storing an automation training software code. The hardware processor executes the automation training software code to initially train a content annotation engine using labeled content, test the content annotation engine using a first test set of content obtained from a training database, and receive corrections to a first automatically annotated content set resulting from the test. The hardware processor further executes the automation training software code to further train the content annotation engine based on the corrections, determine one or more prioritization criteria for selecting a second test set of content for testing the content annotation engine based on statistics relating to the first automatically annotated content set, and select the second test set of content from the training database based on the prioritization criteria.
    Type: Grant
    Filed: March 13, 2019
    Date of Patent: July 27, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Miquel Angel Farre Guiu, Matthew Petrillo, Monica Alfaro Vendrell, Marc Junyent Martin, Daniel Fojo, Anthony M. Accardo, Avner Swerdlow, Katharine Navarre
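This abstract describes an iterative, human-in-the-loop training cycle: train the annotation engine, test it, collect corrections, retrain, and pick the next test set by prioritization criteria derived from statistics about the previous round. Below is a minimal Python sketch of that loop; the `AnnotationEngine` stub, the `Item` record, and the lowest-confidence prioritization rule are illustrative assumptions, not the patented implementation.

```python
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class Item:
    content_id: int
    label: Optional[str] = None       # ground-truth label, if a reviewer supplied one
    predicted: Optional[str] = None   # label assigned by the engine
    confidence: float = 0.0           # engine's confidence in its prediction

class AnnotationEngine:
    """Stand-in for the content annotation engine; training is a no-op and
    annotation is random, just to make the loop below executable."""
    def train(self, items):
        pass
    def annotate(self, items):
        for item in items:
            item.predicted = random.choice(["action", "dialogue", "credits"])
            item.confidence = random.random()
        return items

def prioritize(pool, k):
    """One possible prioritization criterion: lowest-confidence items first."""
    return sorted(pool, key=lambda it: it.confidence)[:k]

def training_cycle(engine, labeled, pool, rounds=3, test_size=5):
    engine.train(labeled)                          # initial training on labeled content
    test_set = random.sample(pool, test_size)      # first test set from the training database
    for _ in range(rounds):
        annotated = engine.annotate(test_set)      # automatically annotated content set
        corrections = [it for it in annotated if it.label and it.label != it.predicted]
        engine.train(labeled + corrections)        # further training based on the corrections
        engine.annotate(pool)                      # gather statistics used for prioritization
        test_set = prioritize(pool, test_size)     # select the next test set from the database

training_cycle(AnnotationEngine(),
               labeled=[Item(i, label="action") for i in range(10)],
               pool=[Item(100 + i) for i in range(50)])
```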
  • Publication number: 20210224356
    Abstract: A system for securing a content processing pipeline includes a computing platform having a hardware processor and a memory storing a software code. The hardware processor executes the software code to insert a synthesized test image configured to activate one or more neurons of a malicious neural network into a content stream, provide the content stream as an input stream to a first processing node of the pipeline, and receive an output stream including a post-processed test image. The hardware processor further executes the software code to compare the post-processed test image in the output stream with an expected image corresponding to the synthesized test image, and to validate at least one portion of the pipeline as secure when the post-processed test image in the output stream matches the expected image.
    Type: Application
    Filed: January 21, 2020
    Publication date: July 22, 2021
    Inventors: Miquel Angel Farre Guiu, Edward C. Drake, Anthony M. Accardo, Mark Arana
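The security check in this publication rests on a probe-and-compare pattern: inject a synthesized test image into the stream, run the stream through the processing node, and confirm the processed probe still matches the expected image. A minimal sketch of that pattern follows; the byte-level hashing, the single callable standing in for a processing node, and the exact-match comparison are assumptions made for the example (the patent compares images, not necessarily via exact hashes).

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Digest used to compare a post-processed frame against the expected result."""
    return hashlib.sha256(image_bytes).hexdigest()

def validate_pipeline(pipeline, content_stream, test_image, expected_image):
    """Insert a synthesized test image into the stream, run the stream through the
    pipeline, and check that the processed probe matches the expected image."""
    input_stream = list(content_stream) + [test_image]       # insert the probe frame
    output_stream = [pipeline(frame) for frame in input_stream]
    post_processed = output_stream[-1]                        # the probe after processing
    return fingerprint(post_processed) == fingerprint(expected_image)

# Example: an "identity" processing node should leave the probe unchanged.
identity_node = lambda frame: frame
probe = b"\x01\x02\x03"
assert validate_pipeline(identity_node, [b"frameA", b"frameB"], probe, probe)
```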
  • Patent number: 11068145
    Abstract: Various embodiments of the invention disclosed herein provide techniques for automatically displaying and providing electronic feedback about a 3D production asset. A client device executing a software application receives an asset data bundle associated with the 3D production asset. The client device generates a customized user interface based on at least one aspect of the asset data bundle. The client device displays the 3D production asset via the customized user interface. The client device receives an input associated with the 3D production asset via the customized user interface. The client device causes the input to be transmitted to at least one of a media content server and a production database.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: July 20, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Michael Breymann, Anthony M. Accardo, Evan A. Binder, Katharine S. Navarre, Gino Guzzardo, Miquel Angel Farre Guiu
  • Patent number: 11064268
    Abstract: According to one implementation, a media content annotation system includes a computing platform having a hardware processor and a system memory storing a software code. The hardware processor executes the software code to receive a first version of media content and a second version of the media content altered with respect to the first version, and to map each of multiple segments of the first version of the media content to a corresponding one segment of the second version of the media content. The software code further aligns each of the segments of the first version of the media content with its corresponding one segment of the second version of the media content, and utilizes metadata associated with each of at least some of the segments of the first version of the media content to annotate its corresponding one segment of the second version of the media content.
    Type: Grant
    Filed: March 23, 2018
    Date of Patent: July 13, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Miquel Angel Farre Guiu, Matthew C. Petrillo, Monica Alfaro Vendrell, Marc Junyent Martin, Katharine S. Ettinger, Evan A. Binder, Anthony M. Accardo, Avner Swerdlow
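The core of this patent is a map-and-align step between two versions of the same media, followed by transfer of the first version's segment metadata onto the aligned segments of the second version. The sketch below illustrates that flow using per-segment text fingerprints and `difflib` similarity as a stand-in for whatever matching the patented system actually uses.

```python
from difflib import SequenceMatcher

def best_match(segment, candidates):
    """Index of the candidate segment most similar to `segment`
    (textual similarity of per-segment fingerprints; purely illustrative)."""
    scores = [SequenceMatcher(None, segment, c).ratio() for c in candidates]
    return max(range(len(candidates)), key=scores.__getitem__)

def annotate_second_version(v1_segments, v1_metadata, v2_segments):
    """Map each segment of version 1 to its counterpart in version 2
    and carry the version-1 metadata over to the aligned segment."""
    v2_metadata = {}
    for i, seg in enumerate(v1_segments):
        j = best_match(seg, v2_segments)                            # alignment step
        if i in v1_metadata:
            v2_metadata.setdefault(j, []).extend(v1_metadata[i])    # annotation transfer
    return v2_metadata

# Toy example: version 2 reorders and slightly edits the segments.
v1 = ["opening credits", "car chase scene", "closing credits"]
v2 = ["car chase scene (recut)", "closing credits", "opening credits"]
meta = {0: ["tag:credits"], 1: ["tag:action"]}
print(annotate_second_version(v1, meta, v2))   # {2: ['tag:credits'], 0: ['tag:action']}
```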
  • Publication number: 20210209196
    Abstract: A system for performing authentication of content based on intrinsic attributes includes a computing platform having a hardware processor and a memory storing a content authentication software code. The hardware processor executes the content authentication software code to receive a content file including digital content and authentication data created based on a baseline version of the digital content, to generate validation data based on the digital content, to compare the validation data to the authentication data, and to identify the digital content as baseline digital content in response to determining that the validation data matches the authentication data based on the comparison. The hardware processor is also configured to execute the content authentication software code to identify the digital content as manipulated digital content in response to determining that the validation data does not match the authentication data based on the comparison.
    Type: Application
    Filed: January 8, 2020
    Publication date: July 8, 2021
    Inventors: Mark Arana, Miquel Angel Farre Guiu, Edward C. Drake, Anthony M. Accardo
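The authentication flow in this publication is: regenerate validation data from the received digital content, compare it with the authentication data that was created from the baseline version, and classify the content accordingly. A minimal sketch follows, using a plain SHA-256 digest as the validation/authentication data; the patent's intrinsic attributes are not spelled out in this abstract, so the digest is an assumption.

```python
import hashlib

def make_authentication_data(baseline_content: bytes) -> str:
    """Authentication data derived from the baseline version of the content
    (a plain SHA-256 digest here; the patented intrinsic attributes may differ)."""
    return hashlib.sha256(baseline_content).hexdigest()

def authenticate(content_file: dict) -> str:
    """Classify the content as baseline or manipulated by regenerating
    validation data and comparing it with the stored authentication data."""
    validation_data = make_authentication_data(content_file["digital_content"])
    if validation_data == content_file["authentication_data"]:
        return "baseline digital content"
    return "manipulated digital content"

original = b"frame bytes ..."
content_file = {"digital_content": original,
                "authentication_data": make_authentication_data(original)}
print(authenticate(content_file))                     # baseline digital content
content_file["digital_content"] = b"tampered frame bytes"
print(authenticate(content_file))                     # manipulated digital content
```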
  • Publication number: 20210192385
    Abstract: Techniques for machine learning optimization are provided. A video comprising a plurality of segments is received, and a first segment of the plurality of segments is processed with a machine learning (ML) model to generate a plurality of tags, where each of the plurality of tags indicates presence of an element in the first segment. A respective accuracy value is determined for each respective tag of the plurality of tags, where the respective accuracy value is based at least in part on a maturity score for the ML model. The first segment is classified as accurate, based on determining that an aggregate accuracy of tags corresponding to the first segment exceeds a predefined threshold. Upon classifying the first segment as accurate, the first segment is bypassed during a review process.
    Type: Application
    Filed: December 20, 2019
    Publication date: June 24, 2021
    Inventors: Miquel Angel Farré Guiu, Monica Alfaro Vendrell, Marc Junyent Martin, Anthony M. Accardo
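The review-bypass logic described above combines a per-tag accuracy value, based at least in part on the model's maturity score, into an aggregate per-segment accuracy that is compared against a threshold. The sketch below shows that gating step; the product weighting, the mean aggregation, and the 0.8 threshold are illustrative choices, not values from the publication.

```python
def tag_accuracy(tag_confidence: float, maturity_score: float) -> float:
    """Accuracy value for one tag, based in part on the model's maturity
    (here simply the product of the two; the real weighting is not specified)."""
    return tag_confidence * maturity_score

def review_queue(segments, maturity_score, threshold=0.8):
    """Return only the segments that still need human review; segments whose
    aggregate tag accuracy exceeds the threshold are classified as accurate and bypassed."""
    needs_review = []
    for segment in segments:
        accuracies = [tag_accuracy(c, maturity_score) for c in segment["tag_confidences"]]
        aggregate = sum(accuracies) / len(accuracies)          # mean accuracy of the tags
        if aggregate <= threshold:
            needs_review.append(segment["id"])
    return needs_review

segments = [{"id": "seg-1", "tag_confidences": [0.95, 0.97, 0.99]},
            {"id": "seg-2", "tag_confidences": [0.40, 0.85]}]
print(review_queue(segments, maturity_score=0.9))   # ['seg-2']
```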
  • Patent number: 11010398
    Abstract: There is provided a system including a computing platform having a hardware processor and a memory, and a metadata extraction and management unit stored in the memory. The hardware processor is configured to execute the metadata extraction and management unit to extract a plurality of metadata types from a media asset sequentially and in accordance with a prioritized order of extraction based on metadata type, aggregate the plurality of metadata types to produce an aggregated metadata describing the media asset, use the aggregated metadata to include at least one database entry in a graphical database, wherein the at least one database entry describes the media asset, display a user interface for a user to view tags of metadata associated with the media asset, and correct the presence of one of the tags of metadata associated with the media asset in response to an input from the user via the user interface.
    Type: Grant
    Filed: May 21, 2018
    Date of Patent: May 18, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Miquel Angel Farre Guiu, Marc Junyent Martin, Jordi Pont-Tuset, Pablo Beltran, Nimesh Narayan, Leonid Sigal, Aljoscha Smolic, Anthony M. Accardo
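This patent describes extracting several metadata types sequentially in a prioritized order and aggregating them into a single record describing the asset. The short sketch below shows that shape; the three extractor functions and their priority order are invented for the example, and writing the result into the graphical database and the tag-correction UI are omitted.

```python
# Extractors ordered by priority; the functions and their order are illustrative,
# not taken from the patent.
def extract_technical(asset):  return {"duration_s": asset.get("duration_s")}
def extract_faces(asset):      return {"faces": ["character A"]}
def extract_locations(asset):  return {"locations": ["beach"]}

PRIORITIZED_EXTRACTORS = [extract_technical, extract_faces, extract_locations]

def extract_metadata(asset):
    """Run the extractors sequentially in priority order and aggregate the results
    into a single metadata record describing the asset."""
    aggregated = {"asset_id": asset["asset_id"]}
    for extractor in PRIORITIZED_EXTRACTORS:
        aggregated.update(extractor(asset))      # aggregate each metadata type
    return aggregated

print(extract_metadata({"asset_id": "ep-101", "duration_s": 2640}))
```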
  • Publication number: 20210117678
    Abstract: According to one implementation, a system for automating inferential content annotation includes a computing platform having a hardware processor and a system memory storing a software code including a set of rules trained to annotate content inferentially. The hardware processor executes the software code to utilize one or more feature analyzer(s) to apply labels to features detected in the content, access one or more knowledge base(s) to validate at least one of the applied labels, and to obtain, from the knowledge base(s), descriptive data linked to the validated label(s). The software code then infers, using the set of rules, one or more label(s) for the content based on the validated label(s) and the descriptive data, and outputs tags for annotating the content, where the tags include the validated label(s) and the inferred label(s).
    Type: Application
    Filed: October 16, 2019
    Publication date: April 22, 2021
    Inventors: Miquel Angel Farre Guiu, Matthew C. Petrillo, Monica Alfaro Vendrell, Daniel Fojo, Albert Aparicio, Francesc Josep Guitart Bravo, Jordi Badia Pujol, Marc Junyent Martin, Anthony M. Accardo
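The inferential step described above is: validate labels applied by the feature analyzers against a knowledge base, pull the descriptive data linked to each validated label, and apply rules to infer additional labels from that data. A compact sketch follows; the tiny in-memory knowledge base, the single location rule, and the label names are all assumptions made for illustration.

```python
# Hypothetical knowledge base: each known label is linked to descriptive data.
KNOWLEDGE_BASE = {
    "eiffel tower": {"type": "landmark", "located_in": "Paris"},
    "croissant":    {"type": "food"},
}

# Hand-written inference rules (illustrative): descriptive data -> inferred label.
RULES = [lambda descriptive: descriptive.get("located_in")]    # infer the location as a label

def annotate(content_labels):
    """Validate detected labels against the knowledge base, then infer
    additional labels from the linked descriptive data using the rules."""
    validated, inferred = [], []
    for label in content_labels:
        descriptive = KNOWLEDGE_BASE.get(label)
        if descriptive is None:
            continue                                   # label could not be validated
        validated.append(label)
        for rule in RULES:
            new_label = rule(descriptive)
            if new_label:
                inferred.append(new_label)
    return validated + inferred                        # tags for annotating the content

print(annotate(["eiffel tower", "unknown blob"]))      # ['eiffel tower', 'Paris']
```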
  • Publication number: 20210099760
    Abstract: According to one implementation, an automated audio mapping system includes a computing platform having a hardware processor and a system memory storing an audio mapping software code including an artificial neural network (ANN) trained to identify multiple different audio content types. The hardware processor is configured to execute the audio mapping software code to receive content including multiple audio tracks, and to identify, without using the ANN, a first music track and a second music track of the multiple audio tracks. The hardware processor is further configured to execute the audio mapping software code to identify, using the ANN, the audio content type of each of the multiple audio tracks except the first music track and the second music track, and to output a mapped content file including the multiple audio tracks each assigned to a respective one predetermined audio channel based on its identified audio content type.
    Type: Application
    Filed: September 27, 2019
    Publication date: April 1, 2021
    Inventors: Miquel Angel Farre Guiu, Marc Junyent Martin, Albert Aparicio, Avner Swerdlow, Anthony M. Accardo, Bradley Drew Anderson
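The mapping described in this publication has two stages: two music tracks are identified without the ANN, and the remaining tracks are typed by the ANN and routed to predetermined channels. The sketch below mirrors that split; the `looks_like_music` heuristic, the `classify_with_ann` placeholder, and the channel map are invented for the example.

```python
def looks_like_music(track) -> bool:
    """Heuristic (non-ANN) music detection; stands in for whatever signal
    analysis identifies the two music tracks in the patented system."""
    return track["label_hint"] == "music"

def classify_with_ann(track) -> str:
    """Placeholder for the trained ANN that recognizes the remaining content types."""
    return {"spoken": "dialogue", "fx": "effects"}.get(track["label_hint"], "other")

CHANNEL_MAP = {"music": "channels 5-6", "dialogue": "channel 1",
               "effects": "channel 3", "other": "channel 8"}

def map_audio_tracks(tracks):
    """Assign each audio track to a predetermined channel based on its content type."""
    music = [t for t in tracks if looks_like_music(t)][:2]     # first and second music tracks
    mapped = []
    for t in tracks:
        content_type = "music" if t in music else classify_with_ann(t)
        mapped.append({"track": t["id"], "type": content_type,
                       "channel": CHANNEL_MAP[content_type]})
    return mapped

tracks = [{"id": 1, "label_hint": "spoken"}, {"id": 2, "label_hint": "music"},
          {"id": 3, "label_hint": "music"}, {"id": 4, "label_hint": "fx"}]
for row in map_audio_tracks(tracks):
    print(row)
```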
  • Patent number: 10951958
    Abstract: A system for assessing authenticity of modified content includes a computing platform having a hardware processor and a memory storing a software code including a neural network trained to assess the authenticity of modified content generated based on baseline digital content and including one or more modifications to the baseline digital content. The hardware processor executes the software code to use the neural network to receive the modified content and to assess the authenticity of each of the one or more modifications to the baseline digital content to produce one or more authenticity assessments corresponding respectively to the one or more modifications to the baseline digital content. The hardware processor is also configured to execute the software code to generate an authenticity evaluation of the modified content based on the one or more authenticity assessments, and to output the authenticity evaluation for rendering on a display.
    Type: Grant
    Filed: January 8, 2020
    Date of Patent: March 16, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Mark Arana, Edward C. Drake, Miquel Angel Farre Guiu, Anthony M. Accardo
  • Publication number: 20210067844
    Abstract: According to one implementation, a cloud-based system for performing cloud-based image rendering for video stream enrichment includes a video forwarding unit and a video enrichment unit. The video forwarding unit is configured to detect one or more non-interactive video player(s) linked to the video forwarding unit over a communication network, forward a video stream to the non-interactive video player(s), and forward the video stream to the video enrichment unit. The video enrichment unit is configured to receive the video stream, detect one or more interactive video player(s) linked to the video enrichment unit over the communication network, identify a video enhancement corresponding to one or more customizable video segment(s) in the video stream, insert a rendered video enhancement into the one or more customizable video segment(s) to produce an enriched video stream, and distribute the enriched video stream to one or more of the interactive video player(s).
    Type: Application
    Filed: August 26, 2019
    Publication date: March 4, 2021
    Inventors: Evan A. Binder, Marc Junyent Martin, Jordi Badia Pujol, Avner Swerdlow, Miquel Angel Farre Guiu
  • Patent number: 10924823
    Abstract: According to one implementation, a cloud-based system for performing cloud-based image rendering for video stream enrichment includes a video forwarding unit and a video enrichment unit. The video forwarding unit is configured to detect one or more non-interactive video player(s) linked to the video forwarding unit over a communication network, forward a video stream to the non-interactive video player(s), and forward the video stream to the video enrichment unit. The video enrichment unit is configured to receive the video stream, detect one or more interactive video player(s) linked to the video enrichment unit over the communication network, identify a video enhancement corresponding to one or more customizable video segment(s) in the video stream, insert a rendered video enhancement into the one or more customizable video segment(s) to produce an enriched video stream, and distribute the enriched video stream to one or more of the interactive video player(s).
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: February 16, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Evan A. Binder, Marc Junyent Martin, Jordi Badia Pujol, Avner Swerdlow, Miquel Angel Farre Guiu
  • Publication number: 20210019576
    Abstract: According to one implementation, a quality control (QC) system for annotated content includes a computing platform having a hardware processor and a system memory storing an annotation culling software code. The hardware processor executes the annotation culling software code to receive multiple content sets annotated by an automated content classification engine, and obtain evaluations of the annotations applied by the automated content classification engine to the content sets. The hardware processor further executes the annotation culling software code to identify a sample size of the content sets for automated QC analysis of the annotations applied by the automated content classification engine, and cull the annotations applied by the automated content classification engine based on the evaluations when the number of annotated content sets equals the identified sample size.
    Type: Application
    Filed: July 15, 2019
    Publication date: January 21, 2021
    Inventors: Miquel Angel Farre Guiu, Matthew C. Petrillo, Marc Junyent Martin, Anthony M. Accardo, Avner Swerdlow, Monica Alfaro Vendrell
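The QC flow described above waits until the number of annotated content sets reaches an identified sample size, then culls annotations based on the reviewer evaluations. A small sketch of that gate follows; the square-root sample-size rule and the accept/reject evaluation format are stand-ins, since the publication does not spell them out here.

```python
import math

def required_sample_size(total_sets: int) -> int:
    """Sample size for automated QC analysis; a square-root rule is used here
    purely as an illustrative stand-in for the publication's sizing method."""
    return max(1, math.isqrt(total_sets))

def cull_annotations(annotated_sets, evaluations, total_expected):
    """Once enough evaluated content sets have accumulated, drop the annotations
    that reviewers rejected and keep the rest."""
    sample_size = required_sample_size(total_expected)
    if len(annotated_sets) < sample_size:
        return None                                    # keep collecting evaluations
    return [s for s in annotated_sets if evaluations.get(s["id"]) == "accept"]

sets = [{"id": "a", "tags": ["dog"]}, {"id": "b", "tags": ["cat"]}, {"id": "c", "tags": ["car"]}]
evals = {"a": "accept", "b": "reject", "c": "accept"}
print(cull_annotations(sets, evals, total_expected=9))   # the 'a' and 'c' sets survive
```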
  • Publication number: 20210012813
    Abstract: A content annotation system includes a computing platform having a hardware processor and a memory storing a tagging software code including an artificial neural network (ANN). The hardware processor executes the tagging software code to receive content having a content interval including an image of a generic content feature, encode the image into a latent vector representation of the image using an encoder of the ANN, and use a first decoder of the ANN to generate a first tag describing the generic content feature based on the latent vector representation. When a specific content feature learned by the ANN corresponds to the generic content feature described by the first tag, the tagging software code uses a second decoder of the ANN to generate a second tag uniquely identifying the specific content feature based on the latent vector representation, and tags the content interval with the first and second tags.
    Type: Application
    Filed: July 11, 2019
    Publication date: January 14, 2021
    Inventors: Miquel Angel Farre Guiu, Monica Alfaro Vendrell, Albert Aparicio Isarn, Daniel Fojo, Marc Junyent Martin, Anthony M. Accardo, Avner Swerdlow
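Architecturally, this publication describes one encoder producing a latent representation that feeds two decoders: a first decoder for a generic tag and a second decoder, used conditionally, for a specific tag. The toy sketch below keeps that one-encoder/two-decoder shape with hand-written functions instead of a trained ANN; the histogram "latent vector", the prototypes, and the thresholds are purely illustrative.

```python
def encode(image) -> tuple:
    """Toy encoder: reduce the image to a 'latent vector' (mean and range of pixel values).
    In the publication this is the encoder half of an artificial neural network."""
    return (sum(image) / len(image), max(image) - min(image))

def generic_decoder(latent) -> str:
    """First decoder: a coarse, generic tag."""
    return "character" if latent[1] > 50 else "background"

SPECIFIC_PROTOTYPES = {"character Alice": 200.0, "character Bob": 90.0}

def specific_decoder(latent):
    """Second decoder: the closest learned specific feature, if it is close enough."""
    name, proto = min(SPECIFIC_PROTOTYPES.items(), key=lambda kv: abs(kv[1] - latent[0]))
    return name if abs(proto - latent[0]) < 30 else None

def tag_interval(image):
    latent = encode(image)
    tags = [generic_decoder(latent)]
    specific = specific_decoder(latent)
    if specific and tags[0] == "character":       # the specific feature refines the generic tag
        tags.append(specific)
    return tags

print(tag_interval([250, 180, 190, 210, 60]))     # ['character', 'character Alice']
```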
  • Patent number: 10891985
    Abstract: A content annotation system includes a computing platform having a hardware processor and a memory storing a tagging software code including an artificial neural network (ANN). The hardware processor executes the tagging software code to receive content having a content interval including an image of a generic content feature, encode the image into a latent vector representation of the image using an encoder of the ANN, and use a first decoder of the ANN to generate a first tag describing the generic content feature based on the latent vector representation. When a specific content feature learned by the ANN corresponds to the generic content feature described by the first tag, the tagging software code uses a second decoder of the ANN to generate a second tag uniquely identifying the specific content feature based on the latent vector representation, and tags the content interval with the first and second tags.
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: January 12, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Miquel Angel Farre Guiu, Monica Alfaro Vendrell, Albert Aparicio Isarn, Daniel Fojo, Marc Junyent Martin, Anthony M. Accardo, Avner Swerdlow
  • Patent number: 10856041
    Abstract: A content promotion system includes a computing platform having a hardware processor and a system memory storing a conversational agent software code. The hardware processor executes the conversational agent software code to receive user identification data, obtain user profile data including a content consumption history of a user associated with the user identification data, and identify a first predetermined phrase for use in interacting with the user based on the user profile data. In addition, the conversational agent software code initiates a dialog with the user based on the first predetermined phrase, detects a response or non-response to the dialog, updates the user profile data based on the response or non-response, resulting in updated user profile data, identifies a second predetermined phrase for use in interacting with the user based on the updated user profile data, and continues the dialog with the user based on the second predetermined phrase.
    Type: Grant
    Filed: March 18, 2019
    Date of Patent: December 1, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Miquel Angel Farre Guiu, Albert Aparicio Isarn, Jordi Badia Pujol, Marc Junyent Martin, Anthony M. Accardo, Jason Roeckle, John Solaro, Avner Swerdlow
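The conversational loop described above alternates between choosing a predetermined phrase from the user profile, observing the response or non-response, and updating the profile before choosing the next phrase. A minimal sketch of one such turn follows; the phrase table, the profile fields, and the selection rule are assumptions made for the example.

```python
PHRASES = {
    "new_viewer":  "Have you seen the new season yet?",
    "action_fan":  "Based on what you watched last week, you might like this stunt reel.",
    "no_response": "No problem, maybe another time.",
}

def pick_phrase(profile) -> str:
    """Choose a predetermined phrase from the profile's consumption history
    (the keys and selection rule here are invented for the example)."""
    history = profile.get("consumption_history", [])
    return PHRASES["action_fan"] if "action" in history else PHRASES["new_viewer"]

def dialog_turn(profile, user_response):
    """One turn: speak, observe the response (or lack of one), and update the profile
    so the next phrase is chosen from the updated profile data."""
    print("agent:", pick_phrase(profile))
    if user_response is None:
        profile["non_responses"] = profile.get("non_responses", 0) + 1
        print("agent:", PHRASES["no_response"])
    else:
        profile.setdefault("consumption_history", []).append(user_response)
    return profile

profile = {"consumption_history": ["comedy"]}
profile = dialog_turn(profile, "action")   # user mentions an action title they watched
profile = dialog_turn(profile, None)       # no response this time
```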
  • Patent number: 10834413
    Abstract: A set of software applications configured to perform interframe and/or intraframe encoding operations based on data communicated between a graphics application and a graphics processor. The graphics application transmits a 3D model to the graphics processor to be rendered into a 2D frame of video data. The graphics application also transmits graphics commands to the graphics processor indicating specific transformations to be applied to the 3D model as well as textures that should be mapped onto portions of the 3D model. Based on these transformations, an interframe module can determine blocks of pixels that repeat across sequential frames. Based on the mapped textures, an intraframe module can determine blocks of pixels that repeat within an individual frame. A codec encodes the frames of video data into compressed form based on blocks of pixels that repeat across frames or within frames.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: November 10, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Miquel Angel Farre Guiu, Marc Junyent Martin
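The encoding idea in this patent is to mark blocks of pixels that repeat across sequential frames (interframe) or within a frame (intraframe) so the codec can encode them as references rather than raw pixels; notably, the patent determines these repeats from the graphics commands and textures sent to the graphics processor rather than by comparing pixels. The sketch below shows only the block-repetition concept, by direct pixel comparison on toy frames, which is an intentional simplification.

```python
def repeated_blocks(frame_a, frame_b, block_size=2):
    """Find block positions whose pixels are identical in two sequential frames;
    such blocks can be encoded as references instead of raw pixels.
    Frames are plain 2-D lists here; a real codec works on decoded image planes."""
    rows, cols = len(frame_a), len(frame_a[0])
    repeats = []
    for r in range(0, rows, block_size):
        for c in range(0, cols, block_size):
            block_a = [frame_a[r + i][c:c + block_size] for i in range(block_size)]
            block_b = [frame_b[r + i][c:c + block_size] for i in range(block_size)]
            if block_a == block_b:
                repeats.append((r, c))
    return repeats

f1 = [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
f2 = [[1, 1, 9, 9], [1, 1, 9, 9], [3, 3, 4, 4], [3, 3, 4, 4]]
print(repeated_blocks(f1, f2))   # [(0, 0), (2, 0), (2, 2)]
```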
  • Patent number: 10817565
    Abstract: A media content tagging system includes a computing platform having a hardware processor, and a system memory storing a tag selector software code configured to receive media content having segments, each segment including multiple content elements each associated with metadata tags having respective pre-computed confidence scores. For each content element, the tag selector software code assigns each of the metadata tags to at least one tag group, determines a confidence score for each tag group based on the pre-computed confidence scores of its assigned metadata tags, discards tag groups having less than a minimum number of assigned metadata tags, resulting in a reduced number of tag groups, and filters the reduced number of tag groups based on their determined confidence scores to identify a further reduced number of tag groups. The tag selector software code then selects at least one representative tag group for a segment from among the further reduced number of tag groups.
    Type: Grant
    Filed: November 6, 2017
    Date of Patent: October 27, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Miquel Angel Farre Guiu, Matthew Petrillo, Monica Alfaro, Pablo Beltran Sanchidrian, Marc Junyent Martin, Evan A. Binder, Anthony M. Accardo, Katharine S. Ettinger, Avner Swerdlow
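The tag selection described above proceeds group-wise: assign each metadata tag to a tag group, score each group from its members' pre-computed confidences, discard undersized groups, filter the remainder by confidence, and keep a representative group for the segment. The sketch below follows those steps; the first-word grouping key, the minimum group size, and the thresholds are illustrative assumptions.

```python
def select_representative_groups(content_elements, min_tags=2, min_confidence=0.6, top_k=1):
    """Group the metadata tags of a segment, score each group from its members'
    pre-computed confidences, discard undersized groups, filter by confidence,
    and keep the top group(s) as representative of the segment."""
    groups = {}
    for element in content_elements:
        for tag, score in element["tags"]:
            groups.setdefault(tag.split()[0], []).append(score)   # assign tag to a group
    scored = {g: sum(s) / len(s) for g, s in groups.items() if len(s) >= min_tags}
    filtered = {g: c for g, c in scored.items() if c >= min_confidence}
    return sorted(filtered, key=filtered.get, reverse=True)[:top_k]

segment = [{"tags": [("beach day", 0.9), ("beach sunset", 0.8), ("car", 0.4)]},
           {"tags": [("beach volleyball", 0.7), ("car chase", 0.5)]}]
print(select_representative_groups(segment))    # ['beach']
```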
  • Publication number: 20200304866
    Abstract: A content promotion system includes a computing platform having a hardware processor and a system memory storing a conversational agent software code. The hardware processor executes the conversational agent software code to receive user identification data, obtain user profile data including a content consumption history of a user associated with the user identification data, and identify a first predetermined phrase for use in interacting with the user based on the user profile data. In addition, the conversational agent software code initiates a dialog with the user based on the first predetermined phrase, detects a response or non-response to the dialog, updates the user profile data based on the response or non-response, resulting in updated user profile data, identifies a second predetermined phrase for use in interacting with the user based on the updated user profile data, and continues the dialog with the user based on the second predetermined phrase.
    Type: Application
    Filed: March 18, 2019
    Publication date: September 24, 2020
    Inventors: Miquel Angel Farre Guiu, Albert Aparicio, Jordi Badia Pujol, Marc Junyent Martin, Anthony M. Accardo, Jason Roeckle, John Solaro, Avner Swerdlow
  • Patent number: 10754712
    Abstract: In various embodiments, a broker application automatically allocates tasks to application programming interfaces (APIs) in microservice architectures. After receiving a task from a client application, the broker application performs operation(s) on content associated with the task to compute predicted performance data for multiple APIs. The broker application then determines that a first API included in the APIs should process the first task based on the predicted performance data. The broker application transmits an API request associated with the first task to the first API for processing. After receiving a result associated with the first task from the first API, the client application performs operation(s) based on the result.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: August 25, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Matthew Charles Petrillo, Monica Alfaro Vendrell, Marc Junyent Martin, Anthony M. Accardo, Miquel Angel Farre Guiu, Katharine S. Ettinger, Avner Swerdlow
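The broker pattern described above computes predicted performance for each candidate API from the content of the task and dispatches the task to the best-scoring API. A minimal sketch of the selection step follows; the API registry, the seeded-random performance predictor, and the omitted dispatch are assumptions made so the example stays self-contained.

```python
import random

# Hypothetical registry of microservice APIs that can all process the same kind of task.
APIS = {
    "tagger-v1": {"endpoint": "https://example.invalid/v1/tag"},
    "tagger-v2": {"endpoint": "https://example.invalid/v2/tag"},
}

def predict_performance(api_name: str, content: dict) -> float:
    """Predicted performance score for running this content on this API;
    a seeded-random stand-in for whatever the broker computes from the content."""
    random.seed(f"{api_name}:{content['id']}")
    return random.random()

def broker(task: dict) -> str:
    """Pick the API with the best predicted performance for the task's content
    and return its name (the actual API request and result handling are omitted)."""
    scores = {name: predict_performance(name, task["content"]) for name in APIS}
    best = max(scores, key=scores.get)
    # In a real system the broker would now send the API request to APIS[best]["endpoint"].
    return best

print(broker({"content": {"id": "shot-42"}}))
```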