Patents by Inventor Marc Junyent MARTIN
Marc Junyent MARTIN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
Publication number: 20200304866
Abstract: A content promotion system includes a computing platform having a hardware processor and a system memory storing a conversational agent software code. The hardware processor executes the conversational agent software code to receive user identification data, obtain user profile data including a content consumption history of a user associated with the user identification data, and identify a first predetermined phrase for use in interacting with the user based on the user profile data. In addition, the conversational agent software code initiates a dialog with the user based on the first predetermined phrase, detects a response or non-response to the dialog, updates the user profile data based on the response or non-response, resulting in updated user profile data, identifies a second predetermined phrase for use in interacting with the user based on the updated user profile data, and continues the dialog with the user based on the second predetermined phrase.
Type: Application
Filed: March 18, 2019
Publication date: September 24, 2020
Inventors: Miquel Angel Farre Guiu, Albert Aparicio, Jordi Badia Pujol, Marc Junyent Martin, Anthony M. Accardo, Jason Roeckle, John Solaro, Avner Swerdlow

Patent number: 10754712
Abstract: In various embodiments, a broker application automatically allocates tasks to application programming interfaces (APIs) in microservice architectures. After receiving a task from a client application, the broker application performs operation(s) on content associated with the task to compute predicted performance data for multiple APIs. The broker application then determines that a first API included in the APIs should process the first task based on the predicted performance data. The broker application transmits an API request associated with the first task to the first API for processing. After receiving a result associated with the first task from the first API, the client application performs operation(s) based on the result.
Type: Grant
Filed: July 27, 2018
Date of Patent: August 25, 2020
Assignee: Disney Enterprises, Inc.
Inventors: Matthew Charles Petrillo, Monica Alfaro Vendrell, Marc Junyent Martin, Anthony M. Accardo, Miquel Angel Farre Guiu, Katharine S. Ettinger, Avner Swerdlow

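The core of this abstract is content-based routing: score each API on the task's content and dispatch to the best predicted performer. A minimal sketch of that idea, not the patented implementation; the scoring functions and API names below are illustrative assumptions:

```python
# Minimal broker sketch: route a task to whichever API a per-API scoring
# function predicts will perform best on the task's content.
from typing import Callable, Dict

def route_task(content: str,
               predictors: Dict[str, Callable[[str], float]]) -> str:
    """Return the name of the API with the highest predicted score."""
    scores = {name: predict(content) for name, predict in predictors.items()}
    return max(scores, key=scores.get)

# Hypothetical predictors: prefer an OCR-style API for text-heavy content,
# a generic vision API otherwise.
predictors = {
    "ocr_api": lambda c: 0.9 if "text" in c else 0.2,
    "vision_api": lambda c: 0.8 if "image" in c else 0.3,
}
```

In a real system the predictors would themselves be learned models over extracted content features, but the dispatch step reduces to the same argmax.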
Patent number: 10664973
Abstract: There is provided a system including a memory and a processor configured to obtain a first frame of a video content including an object and a first region based on a segmentation hierarchy of the first frame, insert a synthetic object into the first frame, merge an object segmentation hierarchy of the synthetic object with the segmentation hierarchy of the first frame to create a merged segmentation hierarchy, select a second region based on the merged segmentation hierarchy, provide the first frame including the first region and the second region to a crowd user for creating a corrected frame, receive the corrected frame from the crowd user including a first corrected region including the object and a second corrected region including the synthetic object, determine a quality based on the synthetic object and the second corrected region, and accept the first corrected region based on the quality.
Type: Grant
Filed: July 10, 2018
Date of Patent: May 26, 2020
Assignee: Disney Enterprises, Inc.
Inventors: Miquel Angel Farre Guiu, Marc Junyent Martin, Aljoscha Smolic

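The quality-gating trick here is that the system knows the synthetic object's true shape, so the crowd worker's correction of the synthetic region can be scored objectively and used as a proxy for trusting the worker's correction of the real object. A minimal sketch using intersection-over-union as the quality measure; the IoU metric and the threshold value are illustrative assumptions, not taken from the patent:

```python
# Score a crowd correction of the synthetic object against its known
# ground-truth mask; accept the worker's other corrections only if the
# score clears a threshold.

def iou(a: set, b: set) -> float:
    """Intersection-over-union of two pixel-coordinate sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def accept_correction(true_synth: set, corrected_synth: set,
                      threshold: float = 0.8) -> bool:
    """Gate on the worker's accuracy for the known synthetic region."""
    return iou(true_synth, corrected_synth) >= threshold
```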
Publication number: 20200151459
Abstract: According to one implementation, a system for automating content annotation includes a computing platform having a hardware processor and a system memory storing an automation training software code. The hardware processor executes the automation training software code to initially train a content annotation engine using labeled content, test the content annotation engine using a first test set of content obtained from a training database, and receive corrections to a first automatically annotated content set resulting from the test. The hardware processor further executes the automation training software code to further train the content annotation engine based on the corrections, determine one or more prioritization criteria for selecting a second test set of content for testing the content annotation engine based on the statistics relating to the first automatically annotated content, and select the second test set of content from the training database based on the prioritization criteria.
Type: Application
Filed: March 13, 2019
Publication date: May 14, 2020
Inventors: Miquel Angel Farre Guiu, Matthew Petrillo, Monica Alfaro Vendrell, Marc Junyent Martin, Daniel Fojo, Anthony M. Accardo, Avner Swerdlow, Katharine Navarre

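The abstract leaves the prioritization criteria abstract; one plausible criterion is to draw the next test set from the label classes that drew the most human corrections in the previous round, so testing concentrates on the engine's weak spots. A sketch under that assumption (the statistics format and selection rule are hypothetical):

```python
# Select the next test set by ranking candidate items by how often their
# label class was corrected in the previous annotation round.
from collections import Counter

def prioritize(corrections_per_label: Counter, candidates, k=2):
    """Pick the k candidate items whose labels were corrected most often."""
    ranked = sorted(candidates,
                    key=lambda item: -corrections_per_label[item["label"]])
    return ranked[:k]
```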
Publication number: 20200068208
Abstract: A set of software applications configured to perform interframe and/or intraframe encoding operations based on data communicated between a graphics application and a graphics processor. The graphics application transmits a 3D model to the graphics processor to be rendered into a 2D frame of video data. The graphics application also transmits graphics commands to the graphics processor indicating specific transformations to be applied to the 3D model as well as textures that should be mapped onto portions of the 3D model. Based on these transformations, an interframe module can determine blocks of pixels that repeat across sequential frames. Based on the mapped textures, an intraframe module can determine blocks of pixels that repeat within an individual frame. A codec encodes the frames of video data into compressed form based on blocks of pixels that repeat across frames or within frames.
Type: Application
Filed: August 24, 2018
Publication date: February 27, 2020
Inventors: Miquel Angel Farre Guiu, Marc Junyent Martin

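The intraframe half of this scheme amounts to finding pixel blocks that recur within one frame, so the codec can store one copy plus references. A toy sketch of that detection step; the block size and exact-match criterion are illustrative assumptions (the patent derives the repeats from texture-mapping commands rather than pixel comparison):

```python
# Find size×size pixel blocks that occur more than once in a frame,
# mapping each repeated block to the positions where it appears.

def repeated_blocks(frame, size=2):
    """frame: 2D list of pixel values. Returns {block_tuple: [(y, x), ...]}."""
    seen = {}
    h, w = len(frame), len(frame[0])
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            block = tuple(tuple(frame[y + dy][x + dx] for dx in range(size))
                          for dy in range(size))
            seen.setdefault(block, []).append((y, x))
    return {b: pos for b, pos in seen.items() if len(pos) > 1}
```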
Publication number: 20200034215
Abstract: In various embodiments, a broker application automatically allocates tasks to application programming interfaces (APIs) in microservice architectures. After receiving a task from a client application, the broker application performs operation(s) on content associated with the task to compute predicted performance data for multiple APIs. The broker application then determines that a first API included in the APIs should process the first task based on the predicted performance data. The broker application transmits an API request associated with the first task to the first API for processing. After receiving a result associated with the first task from the first API, the client application performs operation(s) based on the result. Advantageously, because the broker application automatically allocates the first task to the first API based on the content, time and resource inefficiencies are reduced compared to prior art approaches that indiscriminately allocate tasks to APIs.
Type: Application
Filed: July 27, 2018
Publication date: January 30, 2020
Inventors: Matthew Charles Petrillo, Monica Alfaro Vendrell, Marc Junyent Martin, Anthony M. Accardo, Miquel Angel Farre Guiu, Katharine S. Ettinger, Avner Swerdlow

Patent number: 10489722
Abstract: Systems, methods, and articles of manufacture to perform an operation comprising processing, by a machine learning (ML) algorithm and a ML model, a plurality of images in a first dataset, wherein the ML model was generated based on a plurality of images in a training dataset, receiving user input reviewing a respective set of tags applied to each image in the first data set as a result of the processing, identifying, based on a first confusion matrix generated based on the user input and the sets of tags applied to the images in the first data set, a first labeling error in the training dataset, determining a type of the first labeling error based on a second confusion matrix, and modifying the training dataset based on the determined type of the first labeling error.
Type: Grant
Filed: July 27, 2017
Date of Patent: November 26, 2019
Assignee: Disney Enterprises, Inc.
Inventors: Miquel Angel Farré Guiu, Marc Junyent Martin, Matthew C. Petrillo, Monica Alfaro Vendrell, Pablo Beltran Sanchidrian, Avner Swerdlow, Katharine S. Ettinger, Evan A. Binder, Anthony M. Accardo

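The underlying diagnostic is that an unusually large off-diagonal cell in a confusion matrix points at a systematic labeling problem between two specific classes. A minimal sketch of that general idea, not the patent's specific two-matrix procedure; the rate threshold is an illustrative assumption:

```python
# Flag class pairs whose confusion rate is suspiciously high, suggesting
# a labeling error in the training data rather than ordinary model noise.

def suspect_label_pairs(matrix, labels, min_rate=0.3):
    """matrix[i][j] counts items with true label i predicted as j.
    Return (true_label, predicted_label) pairs whose off-diagonal count
    exceeds min_rate of that true label's row total."""
    suspects = []
    for i, row in enumerate(matrix):
        total = sum(row)
        for j, count in enumerate(row):
            if i != j and total and count / total > min_rate:
                suspects.append((labels[i], labels[j]))
    return suspects
```

Flagged pairs would then be routed to human review to decide whether the training labels, rather than the model, are at fault.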
Patent number: 10469905
Abstract: According to one implementation, a content classification system includes a computing platform having a hardware processor and a system memory storing a video asset classification software code. The hardware processor executes the video asset classification software code to receive video clips depicting video assets and each including images and annotation metadata, and to preliminarily classify the images with one or more of the video assets to produce image clusters. The hardware processor further executes the video asset classification software code to identify key features data corresponding respectively to each image cluster, to segregate the image clusters into image super-clusters based on the key feature data, and to uniquely identify each of at least some of the image super-clusters with one of the video assets.
Type: Grant
Filed: August 3, 2018
Date of Patent: November 5, 2019
Assignee: Disney Enterprises, Inc.
Inventors: Miquel Angel Farre Guiu, Matthew Petrillo, Monica Alfaro Vendrell, Pablo Beltran Sanchidrian, Marc Junyent Martin, Avner Swerdlow, Katharine S. Ettinger, Anthony M. Accardo

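The cluster-to-super-cluster step can be pictured as merging preliminary image clusters whenever their key-feature sets overlap, so that several clusters of the same video asset collapse into one. A toy sketch under that assumption; the overlap criterion and feature representation are illustrative, not the patent's:

```python
# Greedily merge image clusters (name -> key-feature set) into
# super-clusters whenever they share at least one key feature.

def merge_clusters(cluster_features):
    """Return super-clusters as lists of member cluster names."""
    supers = []  # list of (merged_feature_set, [member names])
    for name, feats in cluster_features.items():
        for merged, members in supers:
            if merged & feats:          # shared key feature -> same asset
                merged |= feats
                members.append(name)
                break
        else:
            supers.append((set(feats), [name]))
    return [members for _, members in supers]
```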
Publication number: 20190297392
Abstract: According to one implementation, a media content annotation system includes a computing platform having a hardware processor and a system memory storing a software code. The hardware processor executes the software code to receive a first version of media content and a second version of the media content altered with respect to the first version, and to map each of multiple segments of the first version of the media content to a corresponding one segment of the second version of the media content. The software code further aligns each of the segments of the first version of the media content with its corresponding one segment of the second version of the media content, and utilizes metadata associated with each of at least some of the segments of the first version of the media content to annotate its corresponding one segment of the second version of the media content.
Type: Application
Filed: March 23, 2018
Publication date: September 26, 2019
Inventors: Miquel Angel Farre Guiu, Matthew C. Petrillo, Monica Alfaro Vendrell, Marc Junyent Martin, Katharine S. Ettinger, Evan A. Binder, Anthony M. Accardo, Avner Swerdlow

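The map-align-annotate flow boils down to: match each segment of the original version to its most similar segment in the altered version, then copy the original's metadata across. A minimal sketch with a pluggable similarity function; the fingerprint representation below is a toy assumption, not the patent's matching features:

```python
# Transfer metadata from an annotated first version of media content to a
# second, altered version by best-match segment alignment.

def annotate_second_version(v1_segments, v2_segments, similarity):
    """Copy each v1 segment's metadata onto its best-matching v2 segment.
    similarity(seg1, seg2) -> float, higher meaning more alike."""
    for seg1 in v1_segments:
        best = max(v2_segments, key=lambda seg2: similarity(seg1, seg2))
        best.setdefault("metadata", {}).update(seg1.get("metadata", {}))
    return v2_segments
```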
Patent number: 10299013
Abstract: According to one implementation, a media content annotation system includes a computing platform including a hardware processor and a system memory storing a model-driven annotation software code. The hardware processor executes the model-driven annotation software code to receive media content for annotation, identify a data model corresponding to the media content, and determine a workflow for annotating the media content based on the data model, the workflow including multiple tasks. The hardware processor further executes the model-driven annotation software code to identify one or more annotation contributors for performing the tasks included in the workflow, distribute the tasks to the one or more annotation contributors, receive inputs from the one or more contributors responsive to at least some of the tasks, and generate an annotation for the media content based on the inputs.
Type: Grant
Filed: August 1, 2017
Date of Patent: May 21, 2019
Assignee: Disney Enterprises, Inc.
Inventors: Matthew Petrillo, Katharine Ettinger, Miquel Angel Farre Guiu, Anthony M. Accardo, Marc Junyent Martin

Publication number: 20190138617
Abstract: A media content tagging system includes a computing platform having a hardware processor, and a system memory storing a tag selector software code configured to receive media content having segments, each segment including multiple content elements each associated with metadata tags having respective pre-computed confidence scores. For each content element, the tag selector software code assigns each of the metadata tags to at least one tag group, determines a confidence score for each tag group based on the pre-computed confidence scores of its assigned metadata tags, discards tag groups having less than a minimum number of assigned metadata tags, and filters the reduced number of tag groups based on the second confidence score to identify a further reduced number of tag groups. The tag selector software code then selects at least one representative tag group for a segment from among the further reduced number of tag groups.
Type: Application
Filed: November 6, 2017
Publication date: May 9, 2019
Inventors: Miquel Angel Farre Guiu, Matthew Petrillo, Monica Alfaro, Pablo Beltran Sanchidrian, Marc Junyent Martin, Evan A. Binder, Anthony M. Accardo, Katharine S. Ettinger, Avner Swerdlow

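The pipeline in this abstract (group tags, score each group from its members' pre-computed confidences, discard small groups, pick a representative group) can be condensed into a few lines. A sketch in which the category map, the mean aggregation, and the minimum group size are all illustrative assumptions:

```python
# Group -> score -> filter -> select: pick one representative tag group
# for a segment from per-tag confidence scores.

def best_tag_group(tags, category_of, min_size=2):
    """tags: list of (tag, confidence); category_of: tag -> group name.
    Returns the highest-confidence group with at least min_size tags."""
    groups = {}
    for tag, conf in tags:
        groups.setdefault(category_of[tag], []).append(conf)
    scored = {g: sum(c) / len(c)            # mean confidence per group
              for g, c in groups.items() if len(c) >= min_size}
    return max(scored, key=scored.get) if scored else None
```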
Publication number: 20190045277
Abstract: According to one implementation, a media content annotation system includes a computing platform including a hardware processor and a system memory storing a model-driven annotation software code. The hardware processor executes the model-driven annotation software code to receive media content for annotation, identify a data model corresponding to the media content, and determine a workflow for annotating the media content based on the data model, the workflow including multiple tasks. The hardware processor further executes the model-driven annotation software code to identify one or more annotation contributors for performing the tasks included in the workflow, distribute the tasks to the one or more annotation contributors, receive inputs from the one or more contributors responsive to at least some of the tasks, and generate an annotation for the media content based on the inputs.
Type: Application
Filed: August 1, 2017
Publication date: February 7, 2019
Inventors: Matthew Petrillo, Katharine Ettinger, Miquel Angel Farre Guiu, Anthony M. Accardo, Marc Junyent Martin

Publication number: 20190034822
Abstract: Systems, methods, and articles of manufacture to perform an operation comprising processing, by a machine learning (ML) algorithm and a ML model, a plurality of images in a first dataset, wherein the ML model was generated based on a plurality of images in a training dataset, receiving user input reviewing a respective set of tags applied to each image in the first data set as a result of the processing, identifying, based on a first confusion matrix generated based on the user input and the sets of tags applied to the images in the first data set, a first labeling error in the training dataset, determining a type of the first labeling error based on a second confusion matrix, and modifying the training dataset based on the determined type of the first labeling error.
Type: Application
Filed: July 27, 2017
Publication date: January 31, 2019
Inventors: Miquel Angel Farré Guiu, Marc Junyent Martin, Matthew C. Petrillo, Monica Alfaro Vendrell, Pablo Beltran Sanchidrian, Avner Swerdlow, Katharine S. Ettinger, Evan A. Binder, Anthony M. Accardo

Systems and methods for automatic key frame extraction and storyboard interface generation for video

Patent number: 10157318
Abstract: A storyboard interface displaying key frames of a video may be presented to a user. Individual key frames may represent individual shots of the video. Shots may be grouped based on similarity. Key frames may be displayed in a chronological order of the corresponding shots. Key frames of grouped shots may be spatially correlated within the storyboard interface. For example, shots of a common group may be spatially correlated so that they may be easily discernable as a group even though the shots may not be temporally consecutive, or even temporally close to each other, in the timeframe of the video itself.
Type: Grant
Filed: December 12, 2016
Date of Patent: December 18, 2018
Assignees: Disney Enterprises, Inc., ETH Zurich
Inventors: Aljoscha Smolic, Marc Junyent Martin, Jordi Pont-Tuset, Alexandre Chapiro, Miquel Angel Farre Guiu

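The similarity-grouping step can be sketched as a greedy assignment: each shot joins the first existing group whose seed shot is close enough in feature space, so non-consecutive shots of the same setting end up displayed together. The Euclidean distance and threshold below are illustrative assumptions, not the patent's method:

```python
# Greedily group shots (feature tuples) by distance to each group's seed,
# returning a group id per shot in chronological order.

def group_shots(shot_features, threshold=1.0):
    seeds, assignment = [], []
    for feat in shot_features:
        for gid, seed in enumerate(seeds):
            dist = sum((a - b) ** 2 for a, b in zip(feat, seed)) ** 0.5
            if dist <= threshold:
                assignment.append(gid)
                break
        else:                      # no nearby group: start a new one
            seeds.append(feat)
            assignment.append(len(seeds) - 1)
    return assignment
```

Note how the fourth shot below rejoins group 0 despite an unrelated shot in between, which is exactly the non-consecutive grouping the interface exploits.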
Publication number: 20180343496
Abstract: According to one implementation, a content classification system includes a computing platform having a hardware processor and a system memory storing a video asset classification software code. The hardware processor executes the video asset classification software code to receive video clips depicting video assets and each including images and annotation metadata, and to preliminarily classify the images with one or more of the video assets to produce image clusters. The hardware processor further executes the video asset classification software code to identify key features data corresponding respectively to each image cluster, to segregate the image clusters into image super-clusters based on the key feature data, and to uniquely identify each of at least some of the image super-clusters with one of the video assets.
Type: Application
Filed: August 3, 2018
Publication date: November 29, 2018
Inventors: Miquel Angel Farre Guiu, Matthew Petrillo, Monica Alfaro Vendrell, Pablo Beltran Sanchidrian, Marc Junyent Martin, Avner Swerdlow, Katharine S. Ettinger, Anthony M. Accardo

Publication number: 20180322636
Abstract: There is provided a system including a memory and a processor configured to obtain a first frame of a video content including an object and a first region based on a segmentation hierarchy of the first frame, insert a synthetic object into the first frame, merge an object segmentation hierarchy of the synthetic object with the segmentation hierarchy of the first frame to create a merged segmentation hierarchy, select a second region based on the merged segmentation hierarchy, provide the first frame including the first region and the second region to a crowd user for creating a corrected frame, receive the corrected frame from the crowd user including a first corrected region including the object and a second corrected region including the synthetic object, determine a quality based on the synthetic object and the second corrected region, and accept the first corrected region based on the quality.
Type: Application
Filed: July 10, 2018
Publication date: November 8, 2018
Inventors: Miquel Angel Farre Guiu, Marc Junyent Martin, Aljoscha Smolic

Publication number: 20180276286
Abstract: There is provided a system including a computing platform having a hardware processor and a memory, and a metadata extraction and management unit stored in the memory. The hardware processor is configured to execute the metadata extraction and management unit to extract a plurality of metadata types from a media asset sequentially and in accordance with a prioritized order of extraction based on metadata type, aggregate the plurality of metadata types to produce an aggregated metadata describing the media asset, use the aggregated metadata to include at least one database entry in a graphical database, wherein the at least one database entry describes the media asset, display a user interface for a user to view tags of metadata associated with the media asset, and correct the presence of one of the tags of metadata associated with the media asset, in response to an input from the user via the user interface.
Type: Application
Filed: May 21, 2018
Publication date: September 27, 2018
Inventors: Miquel Angel Farre Guiu, Marc Junyent Martin, Jordi Pont-Tuset, Pablo Beltran Sanchidrian, Nimesh Narayan, Leonid Sigal, Aljoscha Smolic, Anthony M. Accardo

Patent number: 10068616
Abstract: According to one implementation, a video processing system for performing thumbnail generation includes a computing platform having a hardware processor and a system memory storing a thumbnail generator software code. The hardware processor executes the thumbnail generator software code to receive a video file, and identify a plurality of shots in the video file, each of the plurality of shots including a plurality of frames of the video file. For each of the plurality of shots, the hardware processor further executes the thumbnail generator software code to filter the plurality of frames to obtain a plurality of key frame candidates, determine a ranking of the plurality of key frame candidates based in part on a blur detection analysis and an image distribution analysis of each of the plurality of key frame candidates, and generate a thumbnail based on the ranking.
Type: Grant
Filed: January 11, 2017
Date of Patent: September 4, 2018
Assignee: Disney Enterprises, Inc.
Inventors: Miquel Angel Farre Guiu, Aljoscha Smolic, Marc Junyent Martin, Asier Aduriz, Tunc Ozan Aydin, Christopher A. Eich

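For the blur-detection part of the ranking, a standard heuristic is the variance of a Laplacian response: sharp frames produce high-variance edge responses, blurry ones low. This sketch uses that common measure, which is not necessarily the patent's exact analysis, and omits the image distribution analysis:

```python
# Rank key frame candidates from sharpest to blurriest using
# variance-of-Laplacian as the sharpness score.

def sharpness(img):
    """Variance of a 4-neighbour Laplacian over a 2D grayscale list."""
    h, w = len(img), len(img[0])
    lap = [img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1] - 4 * img[y][x]
           for y in range(1, h - 1) for x in range(1, w - 1)]
    mean = sum(lap) / len(lap)
    return sum((v - mean) ** 2 for v in lap) / len(lap)

def rank_candidates(candidates):
    """Return candidate indices sorted sharpest-first."""
    return sorted(range(len(candidates)), key=lambda i: -sharpness(candidates[i]))
```

A flat (featureless or fully blurred) frame scores zero, so any frame with real edge content outranks it.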
Patent number: 10057644
Abstract: According to one implementation, a content classification system includes a computing platform having a hardware processor and a system memory storing a video asset classification software code. The hardware processor executes the video asset classification software code to receive video clips depicting video assets and each including images and annotation metadata, and to preliminarily classify the images with one or more of the video assets to produce image clusters. The hardware processor further executes the video asset classification software code to identify key features data corresponding respectively to each image cluster, to segregate the image clusters into image super-clusters based on the key feature data, and to uniquely identify each of at least some of the image super-clusters with one of the video assets.
Type: Grant
Filed: April 26, 2017
Date of Patent: August 21, 2018
Assignee: Disney Enterprises, Inc.
Inventors: Miquel Angel Farre Guiu, Matthew Petrillo, Monica Alfaro Vendrell, Pablo Beltran Sanchidrian, Marc Junyent Martin, Avner Swerdlow, Katharine S. Ettinger, Anthony M. Accardo

Patent number: 10037605
Abstract: There is provided a system including a memory and a processor configured to obtain a first frame of a video content including an object and a first region based on a segmentation hierarchy of the first frame, insert a synthetic object into the first frame, merge an object segmentation hierarchy of the synthetic object with the segmentation hierarchy of the first frame to create a merged segmentation hierarchy, select a second region based on the merged segmentation hierarchy, provide the first frame including the first region and the second region to a crowd user for creating a corrected frame, receive the corrected frame from the crowd user including a first corrected region including the object and a second corrected region including the synthetic object, determine a quality based on the synthetic object and the second corrected region, and accept the first corrected region based on the quality.
Type: Grant
Filed: August 23, 2016
Date of Patent: July 31, 2018
Assignee: Disney Enterprises, Inc.
Inventors: Miquel Angel Farre Guiu, Marc Junyent Martin, Aljoscha Smolic