Patents by Inventor Miquel Angel FARRÉ GUIU

Miquel Angel FARRÉ GUIU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200183968
    Abstract: Various embodiments of the invention disclosed herein provide techniques for automatically exposing 3D production assets to an editorial workstation in a content creation pipeline. A production asset management system transmits, to an editorial workstation, a first content library that includes first metadata associated with a 3D production asset that is included in an editorial cut. The first metadata allows a user to see what versions of the 3D production asset are available for incorporation into the editorial cut. The production asset management system retrieves second metadata associated with a second version of the 3D production asset from a production database. The production asset management system retrieves, based on the second metadata, the second version of the 3D production asset from the production database. The production asset management system transmits the second version of the 3D production asset to the editorial workstation for incorporation into the editorial cut.
    Type: Application
    Filed: February 4, 2019
    Publication date: June 11, 2020
    Inventors: Michael BREYMANN, Evan A. BINDER, Anthony M. ACCARDO, Katharine S. NAVARRE, Avner SWERDLOW, Miquel Angel FARRE GUIU
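The version-exposure flow this abstract describes can be pictured with a small sketch. All class and method names here are assumptions for illustration, not the patented system: the editorial side receives metadata listing available versions of an asset, then requests a specific version from the production database.

```python
# Hypothetical sketch of the asset-version flow: list available versions
# (the metadata sent to the editorial workstation), then fetch one version
# for incorporation into the editorial cut. Names and data shapes assumed.

class ProductionAssetManager:
    def __init__(self, database):
        self.database = database   # {asset_id: {version_number: asset_payload}}

    def list_versions(self, asset_id):
        """Metadata exposed to the editorial workstation: available versions."""
        return sorted(self.database.get(asset_id, {}))

    def fetch(self, asset_id, version):
        """Retrieve one version of the 3D production asset."""
        return self.database[asset_id][version]

manager = ProductionAssetManager({"castle_model": {1: "castle_v1", 2: "castle_v2"}})
versions = manager.list_versions("castle_model")   # editorial sees versions [1, 2]
asset = manager.fetch("castle_model", 2)           # pull the second version
```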
  • Patent number: 10664973
    Abstract: There is provided a system including a memory and a processor configured to obtain a first frame of a video content including an object and a first region based on a segmentation hierarchy of the first frame, insert a synthetic object into the first frame, merge an object segmentation hierarchy of the synthetic object with the segmentation hierarchy of the first frame to create a merged segmentation hierarchy, select a second region based on the merged segmentation hierarchy, provide the first frame including the first region and the second region to a crowd user for creating a corrected frame, receive the corrected frame from the crowd user including a first corrected region including the object and a second corrected region including the synthetic object, determine a quality based on the synthetic object and the second corrected region, and accept the first corrected region based on the quality.
    Type: Grant
    Filed: July 10, 2018
    Date of Patent: May 26, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Miquel Angel Farre Guiu, Marc Junyent Martin, Aljoscha Smolic
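The quality-gating idea in this abstract lends itself to a short sketch: because the true region of the inserted synthetic object is known, the crowd worker's correction of that region can be scored, and the correction of the real object is accepted only if that score is high enough. The IoU measure, the 0.8 threshold, and all names below are assumptions for illustration.

```python
# Hedged sketch: score a worker's correction of a synthetic object (whose
# ground-truth region is known) and use that quality to gate acceptance of
# their correction of the real object. Regions are sets of pixel coordinates.

def iou(region_a, region_b):
    """Intersection-over-union of two pixel-coordinate sets."""
    union = len(region_a | region_b)
    return len(region_a & region_b) / union if union else 0.0

def accept_correction(synthetic_truth, corrected_synthetic, corrected_real, threshold=0.8):
    """Return (accepted real region or None, quality score)."""
    quality = iou(synthetic_truth, corrected_synthetic)
    return (corrected_real if quality >= threshold else None), quality

# Toy frame: the synthetic object truly occupies a 3x3 pixel block.
truth = {(x, y) for x in range(3) for y in range(3)}
worker_synthetic = truth - {(2, 2)}        # worker missed one pixel (IoU = 8/9)
worker_real = {(5, 5)}                     # worker's correction of the real object
accepted, q = accept_correction(truth, worker_synthetic, worker_real)
```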
  • Publication number: 20200159396
    Abstract: Various embodiments of the invention disclosed herein provide techniques for automatically displaying and providing electronic feedback about a 3D production asset. A client device executing a software application receives an asset data bundle associated with the 3D production asset. The client device generates a customized user interface based on at least one aspect of the asset data bundle. The client device displays the 3D production asset via the customized user interface. The client device receives an input associated with the 3D production asset via the customized user interface. The client device causes the input to be transmitted to at least one of a media content server and a production database.
    Type: Application
    Filed: November 15, 2018
    Publication date: May 21, 2020
    Inventors: Michael Breymann, Anthony M. Accardo, Evan A. Binder, Katharine S. Navarre, Gino Guzzardo, Miquel Angel Farre Guiu
  • Publication number: 20200151459
    Abstract: According to one implementation, a system for automating content annotation includes a computing platform having a hardware processor and a system memory storing an automation training software code. The hardware processor executes the automation training software code to initially train a content annotation engine using labeled content, test the content annotation engine using a first test set of content obtained from a training database, and receive corrections to a first automatically annotated content set resulting from the test. The hardware processor further executes the automation training software code to further train the content annotation engine based on the corrections, determine one or more prioritization criteria for selecting a second test set of content for testing the content annotation engine based on the statistics relating to the first automatically annotated content, and select the second test set of content from the training database based on the prioritization criteria.
    Type: Application
    Filed: March 13, 2019
    Publication date: May 14, 2020
    Inventors: Miquel Angel Farre Guiu, Matthew Petrillo, Monica Alfaro Vendrell, Marc Junyent Martin, Daniel Fojo, Anthony M. Accardo, Avner Swerdlow, Katharine Navarre
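The prioritized retesting loop described above can be sketched briefly. This is a minimal illustration under assumed data shapes, not the patented training code: statistics from the first round of reviewer corrections rank the labels that were fixed most often, and that ranking drives selection of the second test set from the training database.

```python
# Hedged sketch of correction-driven prioritization: labels corrected most
# often in round one are retested first in round two. All names assumed.
from collections import Counter

def prioritize_labels(corrections):
    """corrections: [(original_label, corrected_label), ...] -> ranked labels."""
    counts = Counter(label for label, _ in corrections)
    return [label for label, _ in counts.most_common()]

def select_next_test_set(database, priorities, size):
    """Pick items from the training database whose labels rank highest."""
    rank = {label: i for i, label in enumerate(priorities)}
    ordered = sorted(database, key=lambda item: rank.get(item["label"], len(rank)))
    return ordered[:size]

corrections = [("car", "truck"), ("car", "bus"), ("dog", "wolf")]
priorities = prioritize_labels(corrections)          # "car" was corrected most often
db = [{"id": 1, "label": "dog"}, {"id": 2, "label": "car"}, {"id": 3, "label": "cat"}]
next_set = select_next_test_set(db, priorities, size=2)
```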
  • Patent number: 10642130
    Abstract: An embodiment provides a motorized monopole for a camera, including: a hand-held monopole; a first motor positioned at an end of the hand-held monopole; a first connecting element attached to the first motor; a second motor positioned at an end of the first connecting element; a second connecting element attached to the second motor; a third motor positioned at an end of the second connecting element; and a camera mounting plate attached to the second connecting element by the third motor, where components of the multi-axis gimbal are positioned such that a camera viewing axis, a horizontal image axis, and a vertical image axis of a camera mounted on the camera mounting plate need not be aligned with any of a rotational axis of the first motor, a rotational axis of the second motor, or a rotational axis of the third motor. Other aspects are described and claimed.
    Type: Grant
    Filed: October 11, 2017
    Date of Patent: May 5, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Gunter Niemeyer, Miquel Angel Farre Guiu, Vince Roberts, Anthony Accardo, Michael Holton
  • Publication number: 20200068208
    Abstract: A set of software applications configured to perform interframe and/or intraframe encoding operations based on data communicated between a graphics application and a graphics processor. The graphics application transmits a 3D model to the graphics processor to be rendered into a 2D frame of video data. The graphics application also transmits graphics commands to the graphics processor indicating specific transformations to be applied to the 3D model as well as textures that should be mapped onto portions of the 3D model. Based on these transformations, an interframe module can determine blocks of pixels that repeat across sequential frames. Based on the mapped textures, an intraframe module can determine blocks of pixels that repeat within an individual frame. A codec encodes the frames of video data into compressed form based on blocks of pixels that repeat across frames or within frames.
    Type: Application
    Filed: August 24, 2018
    Publication date: February 27, 2020
    Inventors: Miquel Angel FARRE GUIU, Marc JUNYENT MARTIN
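The block-reuse idea behind this encoder can be illustrated with a toy sketch. The 2x2 block size, the frame layout, and the encoding tuples below are assumptions, not the patented codec: blocks unchanged since the previous frame are emitted as references instead of raw pixels, which is what knowing the renderer's transformations makes cheap to detect.

```python
# Hedged sketch of interframe block reuse: encode each block of the current
# frame as ('ref', row, col) if identical to the previous frame, else as
# ('raw', pixels). Frames are 2D lists of pixel values.

def encode_frame(prev, curr, block=2):
    """Encode curr against prev, block by block."""
    height, width = len(curr), len(curr[0])
    out = []
    for r in range(0, height, block):
        for c in range(0, width, block):
            tile = [row[c:c + block] for row in curr[r:r + block]]
            prev_tile = [row[c:c + block] for row in prev[r:r + block]]
            if tile == prev_tile:
                out.append(("ref", r, c))        # block repeats across frames
            else:
                out.append(("raw", tile))        # block changed; keep pixels
    return out

frame1 = [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
frame2 = [[1, 1, 9, 9], [1, 1, 9, 9], [3, 3, 4, 4], [3, 3, 4, 4]]  # one block changed
encoded = encode_frame(frame1, frame2)
```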
  • Patent number: 10551724
    Abstract: One embodiment provides a monopole for a camera, including: a pole of length sufficient for two handed operation; an offset arrangement attached to an end of the pole, the offset arrangement comprising a first element connected at an angle to the end of the pole and a second element connected to the first element; the first element rotating about a first axis with respect to the end of the pole; the second element rotating about a second axis with respect to the first element; a camera mount attached to the second element, wherein the camera mount rotates about a third axis with respect to the second element; and at least one motor aligned with the first, the second or the third axis and imparting movement to the camera mount with respect to the pole in at least one degree of freedom selected from the group consisting of tilt, pan and roll. Other aspects are described and claimed.
    Type: Grant
    Filed: September 26, 2016
    Date of Patent: February 4, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Vincent H. Roberts, Kenneth D. Salter, Anthony M. Accardo, Miquel Angel Farre Guiu, Gunter Niemeyer
  • Publication number: 20200034215
    Abstract: In various embodiments, a broker application automatically allocates tasks to application programming interfaces (APIs) in microservice architectures. After receiving a task from a client application, the broker application performs operation(s) on content associated with the task to compute predicted performance data for multiple APIs. The broker application then determines that a first API included in the APIs should process the first task based on the predicted performance data. The broker application transmits an API request associated with the first task to the first API for processing. After receiving a result associated with the first task from the first API, the client application performs operation(s) based on the result. Advantageously, because the broker application automatically allocates the first task to the first API based on the content, time and resource inefficiencies are reduced compared to prior art approaches that indiscriminately allocate tasks to APIs.
    Type: Application
    Filed: July 27, 2018
    Publication date: January 30, 2020
    Inventors: Matthew Charles PETRILLO, Monica ALFARO VENDRELL, Marc JUNYENT MARTIN, Anthony M. ACCARDO, Miquel Angel FARRE GUIU, Katharine S. ETTINGER, Avner SWERDLOW
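The content-aware brokering described above can be sketched in a few lines. Here each registered API exposes a predictor scoring how well it would handle a given piece of content, and the broker routes the task to the highest-scoring API; in the actual system the predicted performance data would be learned, and all names below are assumptions.

```python
# Minimal sketch of a broker choosing among APIs by predicted performance
# on the task's content. The predictor functions stand in for learned models.

def route_task(task, apis):
    """Return (name, score) of the API with the best predicted performance."""
    scored = [(api["predict"](task["content"]), api) for api in apis]
    best_score, best_api = max(scored, key=lambda pair: pair[0])
    return best_api["name"], best_score

apis = [
    {"name": "face_tagger_v1",
     "predict": lambda c: 0.9 if c["kind"] == "image" else 0.2},
    {"name": "speech_tagger_v1",
     "predict": lambda c: 0.8 if c["kind"] == "audio" else 0.1},
]
name, score = route_task({"content": {"kind": "audio"}}, apis)
```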
  • Patent number: 10489722
    Abstract: Systems, methods, and articles of manufacture to perform an operation comprising processing, by a machine learning (ML) algorithm and a ML model, a plurality of images in a first dataset, wherein the ML model was generated based on a plurality of images in a training dataset, receiving user input reviewing a respective set of tags applied to each image in the first data set as a result of the processing, identifying, based on a first confusion matrix generated based on the user input and the sets of tags applied to the images in the first data set, a first labeling error in the training dataset, determining a type of the first labeling error based on a second confusion matrix, and modifying the training dataset based on the determined type of the first labeling error.
    Type: Grant
    Filed: July 27, 2017
    Date of Patent: November 26, 2019
    Assignee: Disney Enterprises, Inc.
    Inventors: Miquel Angel Farré Guiu, Marc Junyent Martin, Matthew C. Petrillo, Monica Alfaro Vendrell, Pablo Beltran Sanchidrian, Avner Swerdlow, Katharine S. Ettinger, Evan A. Binder, Anthony M. Accardo
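The confusion-matrix diagnosis this abstract describes can be illustrated concretely. A sketch under assumed names and thresholds: reviewer corrections are tallied into a matrix, and a predicted label that is systematically corrected into one other label is flagged as a likely labeling error in the training data.

```python
# Hedged sketch: build a confusion matrix from (predicted, reviewer-corrected)
# tag pairs, then flag labels mostly corrected into a single other label.
from collections import defaultdict

def confusion_matrix(pairs):
    """pairs: [(predicted_tag, corrected_tag), ...] -> nested count dict."""
    matrix = defaultdict(lambda: defaultdict(int))
    for predicted, corrected in pairs:
        matrix[predicted][corrected] += 1
    return matrix

def suspect_labels(matrix, min_ratio=0.5):
    """Flag predicted labels whose dominant correction exceeds min_ratio."""
    flagged = {}
    for predicted, row in matrix.items():
        total = sum(row.values())
        wrong = {tag: n for tag, n in row.items() if tag != predicted}
        if wrong:
            top_tag, top_n = max(wrong.items(), key=lambda kv: kv[1])
            if top_n / total >= min_ratio:
                flagged[predicted] = top_tag
    return flagged

pairs = [("wolf", "dog"), ("wolf", "dog"), ("wolf", "wolf"), ("cat", "cat")]
flags = suspect_labels(confusion_matrix(pairs))   # "wolf" training images suspect
```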
  • Patent number: 10469905
    Abstract: According to one implementation, a content classification system includes a computing platform having a hardware processor and a system memory storing a video asset classification software code. The hardware processor executes the video asset classification software code to receive video clips depicting video assets and each including images and annotation metadata, and to preliminarily classify the images with one or more of the video assets to produce image clusters. The hardware processor further executes the video asset classification software code to identify key features data corresponding respectively to each image cluster, to segregate the image clusters into image super-clusters based on the key feature data, and to uniquely identify each of at least some of the image super-clusters with one of the video assets.
    Type: Grant
    Filed: August 3, 2018
    Date of Patent: November 5, 2019
    Assignee: Disney Enterprises, Inc.
    Inventors: Miquel Angel Farre Guiu, Matthew Petrillo, Monica Alfaro Vendrell, Pablo Beltran Sanchidrian, Marc Junyent Martin, Avner Swerdlow, Katharine S. Ettinger, Anthony M. Accardo
  • Publication number: 20190297392
    Abstract: According to one implementation, a media content annotation system includes a computing platform having a hardware processor and a system memory storing a software code. The hardware processor executes the software code to receive a first version of media content and a second version of the media content altered with respect to the first version, and to map each of multiple segments of the first version of the media content to a corresponding one segment of the second version of the media content. The software code further aligns each of the segments of the first version of the media content with its corresponding one segment of the second version of the media content, and utilizes metadata associated with each of at least some of the segments of the first version of the media content to annotate its corresponding one segment of the second version of the media content.
    Type: Application
    Filed: March 23, 2018
    Publication date: September 26, 2019
    Inventors: Miquel Angel Farre Guiu, Matthew C. Petrillo, Monica Alfaro Vendrell, Marc Junyent Martin, Katharine S. Ettinger, Evan A. Binder, Anthony M. Accardo, Avner Swerdlow
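The segment alignment and metadata transfer described above can be sketched simply. The numeric "fingerprint" and nearest-match rule below are toy assumptions standing in for real content matching: each segment of the altered cut is matched to its most similar segment in the original cut, and the original's annotations are carried over.

```python
# Minimal sketch: align segments of an altered cut to the original cut by a
# toy fingerprint, then copy the original segments' annotation metadata.

def align_and_annotate(original, altered):
    """original: [{'fp': ..., 'annotations': [...]}]; altered: [{'fp': ...}]."""
    annotated = []
    for segment in altered:
        best = min(original, key=lambda s: abs(s["fp"] - segment["fp"]))
        annotated.append({**segment, "annotations": list(best["annotations"])})
    return annotated

original = [
    {"fp": 10, "annotations": ["opening titles"]},
    {"fp": 55, "annotations": ["chase scene"]},
]
altered = [{"fp": 54}, {"fp": 11}]   # re-ordered, slightly re-edited version
result = align_and_annotate(original, altered)
```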
  • Patent number: 10299013
    Abstract: According to one implementation, a media content annotation system includes a computing platform including a hardware processor and a system memory storing a model-driven annotation software code. The hardware processor executes the model-driven annotation software code to receive media content for annotation, identify a data model corresponding to the media content, and determine a workflow for annotating the media content based on the data model, the workflow including multiple tasks. The hardware processor further executes the model-driven annotation software code to identify one or more annotation contributors for performing the tasks included in the workflow, distribute the tasks to the one or more annotation contributors, receive inputs from the one or more contributors responsive to at least some of the tasks, and generate an annotation for the media content based on the inputs.
    Type: Grant
    Filed: August 1, 2017
    Date of Patent: May 21, 2019
    Assignee: Disney Enterprises, Inc.
    Inventors: Matthew Petrillo, Katharine Ettinger, Miquel Angel Farre Guiu, Anthony M. Accardo, Marc Junyent Martin
  • Publication number: 20190138617
    Abstract: A media content tagging system includes a computing platform having a hardware processor, and a system memory storing a tag selector software code configured to receive media content having segments, each segment including multiple content elements each associated with metadata tags having respective pre-computed confidence scores. For each content element, the tag selector software code assigns each of the metadata tags to at least one tag group, determines a confidence score for each tag group based on the pre-computed confidence scores of its assigned metadata tags, discards tag groups having fewer than a minimum number of assigned metadata tags to produce a reduced number of tag groups, and filters the reduced number of tag groups based on their confidence scores to identify a further reduced number of tag groups. The tag selector software code then selects at least one representative tag group for a segment from among the further reduced number of tag groups.
    Type: Application
    Filed: November 6, 2017
    Publication date: May 9, 2019
    Inventors: Miquel Angel Farre Guiu, Matthew Petrillo, Monica Alfaro, Pablo Beltran Sanchidrian, Marc Junyent Martin, Evan A. Binder, Anthony M. Accardo, Katharine S. Ettinger, Avner Swerdlow
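The selection pipeline in this abstract maps onto a short sketch: tags are grouped, each group is scored from its tags' pre-computed confidences, undersized groups are discarded, and the best surviving group represents the segment. Grouping by first letter below is a deliberately toy stand-in for real semantic grouping, and all names are assumptions.

```python
# Hedged sketch of tag-group selection: group tags, score groups by mean
# confidence, drop groups below a minimum size, keep the best group.

def select_representative(tags, min_size=2):
    """tags: [(name, confidence), ...] -> (best group, its score)."""
    groups = {}
    for name, conf in tags:
        groups.setdefault(name[0], []).append((name, conf))   # toy grouping key
    scored = {
        key: sum(conf for _, conf in members) / len(members)
        for key, members in groups.items()
        if len(members) >= min_size                           # discard small groups
    }
    best = max(scored, key=scored.get)
    return groups[best], scored[best]

tags = [("beach", 0.9), ("boat", 0.7), ("dog", 0.95), ("cat", 0.4), ("car", 0.5)]
group, score = select_representative(tags)   # "dog" group discarded (too small)
```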
  • Patent number: 10248864
    Abstract: There is provided a method that includes receiving a video having video shots, and creating video shot groups based on similarities between the video shots, where each video shot group of the video shot groups includes one or more of the video shots and has different ones of the video shots than other video shot groups. The method further includes creating at least one video supergroup including at least one video shot group of the video shot groups based on interactions among the one or more of the video shots in each of the video shot groups, and dividing the at least one video supergroup into connected video supergroups, each connected video supergroup of the connected video supergroups including one or more of the video shot groups based on the interactions among the one or more of the video shots in each of the video shot groups.
    Type: Grant
    Filed: December 3, 2015
    Date of Patent: April 2, 2019
    Assignee: Disney Enterprises, Inc.
    Inventors: Miquel Angel Farre Guiu, Pablo Beltran Sanchidrian, Aljoscha Smolic
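The similarity-based shot grouping above can be sketched with a greedy pass. The scalar shot features, distance metric, and threshold are illustrative assumptions: shots close enough to a group's representative join that group, so grouped shots need not be temporally adjacent.

```python
# Minimal sketch of greedy shot grouping by feature similarity. Shots are
# represented as single feature floats for illustration only.

def group_shots(shots, threshold=1.0):
    """Each shot joins the first group whose representative (first member)
    is within `threshold`; otherwise it starts a new group."""
    groups = []
    for shot in shots:
        for group in groups:
            if abs(group[0] - shot) <= threshold:
                group.append(shot)
                break
        else:
            groups.append([shot])
    return groups

# Shots 0.0 and 0.5 are similar, 5.0 and 5.4 are similar, 9.0 stands alone,
# even though the similar shots are not consecutive in time.
groups = group_shots([0.0, 5.0, 0.5, 5.4, 9.0])
```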
  • Publication number: 20190045277
    Abstract: According to one implementation, a media content annotation system includes a computing platform including a hardware processor and a system memory storing a model-driven annotation software code. The hardware processor executes the model-driven annotation software code to receive media content for annotation, identify a data model corresponding to the media content, and determine a workflow for annotating the media content based on the data model, the workflow including multiple tasks. The hardware processor further executes the model-driven annotation software code to identify one or more annotation contributors for performing the tasks included in the workflow, distribute the tasks to the one or more annotation contributors, receive inputs from the one or more contributors responsive to at least some of the tasks, and generate an annotation for the media content based on the inputs.
    Type: Application
    Filed: August 1, 2017
    Publication date: February 7, 2019
    Inventors: Matthew Petrillo, Katharine Ettinger, Miquel Angel Farre Guiu, Anthony M. Accardo, Marc Junyent Martin
  • Publication number: 20190034822
    Abstract: Systems, methods, and articles of manufacture to perform an operation comprising processing, by a machine learning (ML) algorithm and a ML model, a plurality of images in a first dataset, wherein the ML model was generated based on a plurality of images in a training dataset, receiving user input reviewing a respective set of tags applied to each image in the first data set as a result of the processing, identifying, based on a first confusion matrix generated based on the user input and the sets of tags applied to the images in the first data set, a first labeling error in the training dataset, determining a type of the first labeling error based on a second confusion matrix, and modifying the training dataset based on the determined type of the first labeling error.
    Type: Application
    Filed: July 27, 2017
    Publication date: January 31, 2019
    Inventors: Miquel Angel FARRÉ GUIU, Marc JUNYENT MARTIN, Matthew C. PETRILLO, Monica ALFARO VENDRELL, Pablo Beltran SANCHIDRIAN, Avner SWERDLOW, Katharine S. ETTINGER, Evan A. BINDER, Anthony M. ACCARDO
  • Publication number: 20190020912
    Abstract: According to one implementation, a system for programmatic generation of media content digests includes a computing platform having a hardware processor and a system memory storing a media content digest software code. The hardware processor executes the media content digest software code to identify a media content for use in generating a content digest, the media content including a timecode of the media content, to access a metadata describing the media content and indexed to the timecode, and to identify one or more constraints for the content digest. In addition, the hardware processor executes the media content digest software code to programmatically extract content segments from the media content using the metadata indexed to the timecode and based on the one or more constraints, and to generate the content digest based on the media content from the content segments.
    Type: Application
    Filed: July 11, 2017
    Publication date: January 17, 2019
    Inventors: John Solaro, Alexis J. Lindquist, Anthony M. Accardo, Avner Swerdlow, Miquel Angel Farre Guiu, Katharine S. Ettinger
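The constraint-driven digest generation described above can be sketched under assumed data shapes: timecode-indexed metadata determines which segments match the requested tags, and segments are extracted until a duration constraint is exhausted. Field names and the greedy budget rule are illustrative assumptions.

```python
# Hedged sketch: pull tagged segments from timecode-indexed metadata,
# subject to a total-duration constraint on the generated digest.

def build_digest(segments, want_tags, max_seconds):
    """segments: [{'start': s, 'end': s, 'tags': set}, ...] sorted by start.
    Keep segments matching any wanted tag while the budget allows."""
    digest, used = [], 0.0
    for seg in segments:
        if seg["tags"] & want_tags:
            length = seg["end"] - seg["start"]
            if used + length <= max_seconds:
                digest.append(seg)
                used += length
    return digest, used

segments = [
    {"start": 0.0, "end": 30.0, "tags": {"titles"}},
    {"start": 30.0, "end": 90.0, "tags": {"action"}},
    {"start": 90.0, "end": 120.0, "tags": {"action", "finale"}},
]
digest, used = build_digest(segments, want_tags={"action"}, max_seconds=70.0)
```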
  • Patent number: 10157318
    Abstract: A storyboard interface displaying key frames of a video may be presented to a user. Individual key frames may represent individual shots of the video. Shots may be grouped based on similarity. Key frames may be displayed in a chronological order of the corresponding shots. Key frames of grouped shots may be spatially correlated within the storyboard interface. For example, shots of a common group may be spatially correlated so that they may be easily discernable as a group even though the shots may not be temporally consecutive and/or even temporally close to each other in the timeframe of the video itself.
    Type: Grant
    Filed: December 12, 2016
    Date of Patent: December 18, 2018
    Assignees: Disney Enterprises, Inc., ETH Zurich
    Inventors: Aljoscha Smolic, Marc Junyent Martin, Jordi Pont-Tuset, Alexandre Chapiro, Miquel Angel Farre Guiu
  • Publication number: 20180343496
    Abstract: According to one implementation, a content classification system includes a computing platform having a hardware processor and a system memory storing a video asset classification software code. The hardware processor executes the video asset classification software code to receive video clips depicting video assets and each including images and annotation metadata, and to preliminarily classify the images with one or more of the video assets to produce image clusters. The hardware processor further executes the video asset classification software code to identify key features data corresponding respectively to each image cluster, to segregate the image clusters into image super-clusters based on the key feature data, and to uniquely identify each of at least some of the image super-clusters with one of the video assets.
    Type: Application
    Filed: August 3, 2018
    Publication date: November 29, 2018
    Inventors: Miquel Angel Farre Guiu, Matthew Petrillo, Monica Alfaro Vendrell, Pablo Beltran Sanchidrian, Marc Junyent Martin, Avner Swerdlow, Katharine S. Ettinger, Anthony M. Accardo
  • Publication number: 20180322636
    Abstract: There is provided a system including a memory and a processor configured to obtain a first frame of a video content including an object and a first region based on a segmentation hierarchy of the first frame, insert a synthetic object into the first frame, merge an object segmentation hierarchy of the synthetic object with the segmentation hierarchy of the first frame to create a merged segmentation hierarchy, select a second region based on the merged segmentation hierarchy, provide the first frame including the first region and the second region to a crowd user for creating a corrected frame, receive the corrected frame from the crowd user including a first corrected region including the object and a second corrected region including the synthetic object, determine a quality based on the synthetic object and the second corrected region, and accept the first corrected region based on the quality.
    Type: Application
    Filed: July 10, 2018
    Publication date: November 8, 2018
    Inventors: Miquel Angel Farre Guiu, Marc Junyent Martin, Aljoscha Smolic