Patents by Inventor Stefano Petrangeli
Stefano Petrangeli has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220222866
Abstract: In implementations of systems for digital image compression using context-based pixel predictor selection, a computing device implements a compression system to receive digital image data describing pixels of a digital image. The compression system groups first differences between values of the pixels and first prediction values of the pixels into context groups. A pixel predictor is determined for each of the context groups based on a compression criterion. The compression system generates second prediction values of the pixels using the determined pixel predictor for pixels corresponding to the first differences included in each of the context groups. Second differences between the values of the pixels and the second prediction values of the pixels are grouped into different context groups. The compression system compresses the digital image using entropy coding based on the different context groups.
Type: Application
Filed: January 14, 2021
Publication date: July 14, 2022
Applicant: Adobe Inc.
Inventors: Stefano Petrangeli, Viswanathan Swaminathan, Haoliang Wang
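The group-then-select loop in this abstract can be sketched in a few lines. Everything below is a hypothetical stand-in, not the claimed implementation: the context function, the three candidate predictors, and the cost measure (total absolute residual as a crude proxy for the entropy-based compression criterion) are illustrative choices only.

```python
# Illustrative sketch of context-based pixel predictor selection.
# All groupings and predictors here are hypothetical stand-ins.

def predictors():
    # Candidate predictors: left neighbor, top neighbor, average of both.
    return {
        "left": lambda img, r, c: img[r][c - 1] if c > 0 else 0,
        "top": lambda img, r, c: img[r - 1][c] if r > 0 else 0,
        "avg": lambda img, r, c: (
            (img[r][c - 1] if c > 0 else 0) + (img[r - 1][c] if r > 0 else 0)
        ) // 2,
    }

def context_of(img, r, c):
    # Hypothetical context: bucket of the left/top neighbor difference.
    left = img[r][c - 1] if c > 0 else 0
    top = img[r - 1][c] if r > 0 else 0
    return min(abs(left - top) // 32, 7)

def select_predictors(img):
    # For every context group, pick the predictor with the smallest total
    # absolute residual (a simple proxy for the compression criterion).
    preds = predictors()
    cost = {}  # (context, predictor_name) -> accumulated |residual|
    for r, row in enumerate(img):
        for c, v in enumerate(row):
            ctx = context_of(img, r, c)
            for name, p in preds.items():
                cost[(ctx, name)] = cost.get((ctx, name), 0) + abs(v - p(img, r, c))
    best = {}
    for (ctx, name), total in cost.items():
        if ctx not in best or total < cost[(ctx, best[ctx])]:
            best[ctx] = name
    return best
```

In the abstract's terms, a second pass would then recompute residuals with the selected predictors, regroup them, and feed the groups to an entropy coder.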
-
Publication number: 20220156886
Abstract: Methods, systems, and computer storage media are provided for novel view synthesis. An input image depicting an object is received and utilized to generate, via a neural network, a target view image. In exemplary aspects, additional view images are also generated within the same pass of the neural network. A loss is determined based on the target view image and additional view images and is used to modify the neural network to reduce errors. In some aspects, a rotated view image is generated by warping a ground truth image from an initial angle to a rotated view angle that matches a view angle of an image synthesized via the neural network, such as a target view image. The rotated view image and the synthesized image matching the rotated view angle (e.g., a target view image) are utilized to compute a rotational loss.
Type: Application
Filed: November 13, 2020
Publication date: May 19, 2022
Inventors: Stefano Petrangeli, Viswanathan Swaminathan, Haoliang Wang, YoungJoong Kwon
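The loss composition described here (target-view loss, additional-view losses from the same pass, and a rotational term against a warped ground truth) can be written out as a toy function. Images are flattened to plain lists and the L1 distance and weight `w_rot` are illustrative assumptions, not values from the application:

```python
def l1(a, b):
    # Mean absolute difference between two flattened images.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def total_loss(target_pred, target_gt, extra_preds, extra_gts, rot_pred, rot_gt, w_rot=0.5):
    # Target-view reconstruction loss, plus losses on the additional views
    # generated in the same network pass, plus a rotational-consistency term
    # comparing a synthesized view against a warped (rotated) ground truth.
    loss = l1(target_pred, target_gt)
    for p, g in zip(extra_preds, extra_gts):
        loss += l1(p, g)
    loss += w_rot * l1(rot_pred, rot_gt)
    return loss
```

In training, this scalar would be backpropagated to modify the network, as the abstract states.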
-
Publication number: 20220156503
Abstract: A video summarization system generates a concatenated feature set by combining a feature set of a candidate video shot and a summarization feature set. Based on the concatenated feature set, the video summarization system calculates multiple action options of a reward function included in a trained reinforcement learning module. The video summarization system determines a reward outcome included in the multiple action options. The video summarization system modifies the summarization feature set to include the feature set of the candidate video shot by applying a particular modification indicated by the reward outcome. The video summarization system identifies video frames associated with the modified summarization feature set, and generates a summary video based on the identified video frames.
Type: Application
Filed: November 19, 2020
Publication date: May 19, 2022
Inventors: Viswanathan Swaminathan, Stefano Petrangeli, Hongxiang Gu
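The concatenate-score-apply loop can be illustrated with a greedy roll-out of an already-trained policy. The two-action space ({include, skip}), the element-wise-max summary update, and the `q_function` interface are all hypothetical simplifications for illustration:

```python
def summarize(shots, q_function, feature_dim):
    # Greedy roll-out of a trained policy: for each candidate shot, build the
    # concatenated feature set, score the action options, and apply the one
    # indicated by the best outcome.
    summary_feat = [0.0] * feature_dim
    selected = []
    for idx, shot_feat in enumerate(shots):
        concat = summary_feat + shot_feat       # concatenated feature set
        action_values = q_function(concat)      # {"include": q1, "skip": q2}
        if action_values["include"] >= action_values["skip"]:
            selected.append(idx)
            # Toy modification: element-wise max as the new summary feature.
            summary_feat = [max(a, b) for a, b in zip(summary_feat, shot_feat)]
    return selected
```

The returned indices would then map back to video frames to assemble the summary video.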
-
Publication number: 20220156499
Abstract: Systems and methods predict a performance metric for a video and identify key portions of the video that contribute to the performance metric, which can be used to edit the video to improve the ultimate viewer response to the video. An initial performance metric is computed for an initial video (e.g., using a neural network). A perturbed video is generated by perturbing a video portion of the initial video. A modified performance metric is computed for the perturbed video. Based on a difference between the initial and modified performance metrics, the system determines that the video portion contributed to a predicted viewer response to the initial video. An indication of the video portion that contributed to the predicted viewer response is provided as output, which can be used to edit the video to improve the predicted viewer response.
Type: Application
Filed: November 19, 2020
Publication date: May 19, 2022
Inventors: Somdeb Sarkhel, Viswanathan Swaminathan, Stefano Petrangeli, Md Maminur Islam
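The perturb-and-rescore attribution step is compact enough to sketch directly. `predict_metric` stands in for the learned performance predictor and `perturb` for whatever masking or blurring operation is used; both are assumptions, and a real predictor would be a neural network as the abstract notes:

```python
def segment_contributions(segments, predict_metric, perturb):
    # For each video portion, generate a perturbed video (that portion masked
    # or blurred), re-run the performance predictor, and record the drop in
    # the metric as that portion's contribution.
    base = predict_metric(segments)
    contributions = []
    for i in range(len(segments)):
        perturbed = segments[:i] + [perturb(segments[i])] + segments[i + 1:]
        contributions.append(base - predict_metric(perturbed))
    return contributions
```

Portions with the largest drops are the ones flagged as contributing most to the predicted viewer response.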
-
Patent number: 11314970
Abstract: A video summarization system generates a concatenated feature set by combining a feature set of a candidate video shot and a summarization feature set. Based on the concatenated feature set, the video summarization system calculates multiple action options of a reward function included in a trained reinforcement learning module. The video summarization system determines a reward outcome included in the multiple action options. The video summarization system modifies the summarization feature set to include the feature set of the candidate video shot by applying a particular modification indicated by the reward outcome. The video summarization system identifies video frames associated with the modified summarization feature set, and generates a summary video based on the identified video frames.
Type: Grant
Filed: November 19, 2020
Date of Patent: April 26, 2022
Assignee: Adobe Inc.
Inventors: Viswanathan Swaminathan, Stefano Petrangeli, Hongxiang Gu
-
Patent number: 11252393
Abstract: In implementations of trajectory-based viewport prediction for 360-degree videos, a video system obtains trajectories of angles of users who have previously viewed a 360-degree video. The angles are used to determine viewports of the 360-degree video, and may include trajectories for a yaw angle, a pitch angle, and a roll angle of a user recorded as the user views the 360-degree video. The video system clusters the trajectories of angles into trajectory clusters, and for each trajectory cluster determines a trend trajectory. When a new user views the 360-degree video, the video system compares trajectories of angles of the new user to the trend trajectories, and selects trend trajectories for a yaw angle, a pitch angle, and a roll angle for the user. Using the selected trend trajectories, the video system predicts viewports of the 360-degree video for the user for future times.
Type: Grant
Filed: October 19, 2020
Date of Patent: February 15, 2022
Assignee: Adobe Inc.
Inventors: Stefano Petrangeli, Viswanathan Swaminathan, Gwendal Brieuc Christian Simon
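The trend-matching step can be sketched for a single angle (the patent tracks yaw, pitch, and roll separately). The element-wise-mean trend and squared-distance matching below are simple illustrative choices; the clusters themselves are assumed given, since the clustering method is not specified in the abstract:

```python
def trend(cluster):
    # Trend trajectory of one cluster: element-wise mean of its angle
    # trajectories (each trajectory is a list of angles over time).
    n = len(cluster)
    return [sum(t[i] for t in cluster) / n for i in range(len(cluster[0]))]

def predict_future(prefix, clusters):
    # Match the new user's observed angles against the start of each trend
    # trajectory, then use the best trend's remaining values as the
    # predicted future angles (hence future viewports).
    trends = [trend(c) for c in clusters]
    def prefix_dist(tr):
        return sum((a - b) ** 2 for a, b in zip(prefix, tr))
    best = min(trends, key=prefix_dist)
    return best[len(prefix):]
```

Running this once per angle (yaw, pitch, roll) yields the predicted viewport orientation at future times.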
-
Patent number: 11217208
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that iteratively select versions of augmented reality objects at augmented reality levels of detail to provide for download to a client device to reduce start-up latency associated with providing a requested augmented reality scene. In particular, in one or more embodiments, the disclosed systems determine utility and priority metrics associated with versions of augmented reality objects associated with a requested augmented reality scene. The disclosed systems utilize the determined metrics to select versions of augmented reality objects that are likely to be viewed by the client device and improve the quality of the augmented reality scene as the client device moves through the augmented reality scene. In at least one embodiment, the disclosed systems iteratively select versions of augmented reality objects at various levels of detail until the augmented reality scene is fully downloaded.
Type: Grant
Filed: March 30, 2020
Date of Patent: January 4, 2022
Assignee: Adobe Inc.
Inventors: Stefano Petrangeli, Viswanathan Swaminathan, Na Wang, Haoliang Wang, Gwendal Simon
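The iterative selection loop can be illustrated as a greedy scheduler over object versions. The utility-per-byte criterion and the data layout are hypothetical stand-ins for the patent's utility and priority metrics:

```python
def schedule_downloads(objects):
    # objects: {name: [(level_of_detail, size_bytes, utility), ...]} with
    # versions ordered coarse -> fine. Iteratively download the upgrade with
    # the best utility gained per byte until every object is at full detail,
    # i.e. the scene is fully downloaded.
    state = {name: -1 for name in objects}  # index of version held so far
    order = []
    while any(state[n] < len(v) - 1 for n, v in objects.items()):
        def gain_per_byte(name):
            _lod, size, util = objects[name][state[name] + 1]
            prev_util = objects[name][state[name]][2] if state[name] >= 0 else 0.0
            return (util - prev_util) / size
        name = max((n for n in objects if state[n] < len(objects[n]) - 1),
                   key=gain_per_byte)
        state[name] += 1
        order.append((name, objects[name][state[name]][0]))
    return order
```

Coarse versions of high-utility objects naturally come first, which is what reduces the start-up latency the abstract targets.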
-
Patent number: 11170389
Abstract: Techniques are disclosed for improving media content effectiveness. A methodology implementing the techniques according to an embodiment includes generating an intermediate representation (IR) of provided media content, the IR specifying editable elements of the content and maintaining a result of cumulative edits to those elements. The method also includes editing the elements of the IR to generate a set of candidate IR variations. The method further includes creating a set of candidate media contents based on the candidate IR variations, evaluating the candidate media contents to generate effectiveness scores, and pruning the set of candidate IR variations to retain a threshold number of the candidate IR variations as surviving IR variations associated with the highest effectiveness scores. The process iterates until either an effectiveness score exceeds a threshold value, the incremental improvement at each iteration falls below a desired value, or a maximum number of iterations have been performed.
Type: Grant
Filed: February 20, 2020
Date of Patent: November 9, 2021
Assignee: Adobe Inc.
Inventors: Haoliang Wang, Viswanathan Swaminathan, Stefano Petrangeli, Ran Xu
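The edit-score-prune iteration reads like a beam search over IR variants, and can be sketched as one. The integer "IR", the edit operations, and the scoring function in the test are toy stand-ins; all three stopping conditions from the abstract are represented:

```python
def optimize(ir, edit_ops, score, beam=3, max_iters=10, target=None):
    # Beam search over intermediate representations: expand each surviving IR
    # with every edit operation, score the candidates, keep the top `beam`.
    # Stops on: target score reached, no incremental improvement, or
    # max_iters exhausted.
    survivors = [ir]
    best_ir, best = ir, score(ir)
    for _ in range(max_iters):
        candidates = [op(s) for s in survivors for op in edit_ops]
        candidates.sort(key=score, reverse=True)
        survivors = candidates[:beam]
        top = score(survivors[0])
        if top <= best:                      # improvement fell to zero
            break
        best_ir, best = survivors[0], top
        if target is not None and best >= target:
            break
    return best_ir, best
```

Each surviving IR would be rendered into candidate media content before scoring in the real pipeline; here scoring is applied to the IR directly for brevity.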
-
Publication number: 20210337222
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media to enhance texture image delivery and processing at a client device. For example, the disclosed systems can utilize a server-side compression combination that includes, in sequential order, a first compression pass, a decompression pass, and a second compression pass. By applying this compression combination to a texture image at the server-side, the disclosed systems can leverage both GPU-friendly and network-friendly image formats. For example, at a client device, the disclosed system can instruct the client device to execute a combination of decompression-compression passes on a GPU-network-friendly image delivered over a network connection to the client device.Type: Application
Filed: April 28, 2020
Publication date: October 28, 2021
Inventors: Viswanathan Swaminathan, Stefano Petrangeli, Gwendal Simon
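The pass ordering can be mimicked with stand-in codecs. Below, a toy byte quantizer plays the role of the lossy GPU-friendly format and `zlib` plays the network-friendly format; neither is the codec actually contemplated by the application, and the sketch only shows why applying the lossy pass server-side first makes the client's re-compression pass lossless:

```python
import zlib

def quantize(data, step=16):
    # Toy stand-in for a lossy GPU-friendly codec: quantize each byte.
    return bytes((b // step) * step for b in data)

def server_prepare(texture, step=16):
    # Server-side combination, in sequential order: first (GPU-friendly,
    # lossy) compression pass, then a second (network-friendly, lossless)
    # pass over its result.
    gpu_view = quantize(texture, step)     # first compression pass
    return zlib.compress(gpu_view)         # second compression pass

def client_restore(payload, step=16):
    # Client mirrors the combination: decompress the network format, then
    # re-encode into the GPU-friendly representation. Because the server
    # already applied the lossy pass, this client pass loses nothing.
    data = zlib.decompress(payload)
    return quantize(data, step)
```

The same ordering is what lets the real system ship one network-efficient payload while still ending up with a GPU-uploadable texture on the client.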
-
Publication number: 20210304706
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that iteratively select versions of augmented reality objects at augmented reality levels of detail to provide for download to a client device to reduce start-up latency associated with providing a requested augmented reality scene. In particular, in one or more embodiments, the disclosed systems determine utility and priority metrics associated with versions of augmented reality objects associated with a requested augmented reality scene. The disclosed systems utilize the determined metrics to select versions of augmented reality objects that are likely to be viewed by the client device and improve the quality of the augmented reality scene as the client device moves through the augmented reality scene. In at least one embodiment, the disclosed systems iteratively select versions of augmented reality objects at various levels of detail until the augmented reality scene is fully downloaded.
Type: Application
Filed: March 30, 2020
Publication date: September 30, 2021
Inventors: Stefano Petrangeli, Viswanathan Swaminathan, Na Wang, Haoliang Wang, Gwendal Simon
-
Publication number: 20210279916
Abstract: Techniques and systems are provided for generating a video from texture images, and for reconstructing the texture images from the video. For example, a texture image can be divided into a number of tiles, and the number of tiles can be sorted into a sequence of ordered tiles. The sequence of ordered tiles can be provided to a video coder for generating a coded video. The number of tiles can be encoded based on the sequence of ordered tiles. The encoded video including the encoded sequence of ordered tiles can be decoded. At least a portion of the decoded video can include the number of tiles sorted into a sequence of ordered tiles. A data file associated with at least the portion of the decoded video can be used to reconstruct the texture image using the tiles.
Type: Application
Filed: May 26, 2021
Publication date: September 9, 2021
Inventors: Gwendal Simon, Viswanathan Swaminathan, Nathan Carr, Stefano Petrangeli
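The tile-sort-reconstruct round trip can be shown without the video coder itself. Sorting by mean brightness is an illustrative similarity proxy (the idea being that similar consecutive "frames" help inter-frame coding); the permutation returned by `sort_tiles` plays the role of the data file used for reconstruction:

```python
def tile(image, tile_h, tile_w):
    # Split a 2-D image (list of rows) into tiles in raster order.
    tiles = []
    for r in range(0, len(image), tile_h):
        for c in range(0, len(image[0]), tile_w):
            tiles.append([row[c:c + tile_w] for row in image[r:r + tile_h]])
    return tiles

def sort_tiles(tiles):
    # Order tiles by a similarity proxy (mean brightness here) so consecutive
    # tiles change little when fed to a video coder as frames. The returned
    # permutation is the metadata needed to reconstruct the image later.
    mean = lambda t: sum(sum(row) for row in t) / (len(t) * len(t[0]))
    order = sorted(range(len(tiles)), key=lambda i: mean(tiles[i]))
    return [tiles[i] for i in order], order

def reconstruct(sorted_tiles, order):
    # Invert the recorded permutation to put tiles back in raster order.
    tiles = [None] * len(order)
    for pos, original_index in enumerate(order):
        tiles[original_index] = sorted_tiles[pos]
    return tiles
```

In the full pipeline the sorted sequence would be video-encoded, decoded on the other side, and only then reordered via the data file.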
-
Publication number: 20210264446
Abstract: Techniques are disclosed for improving media content effectiveness. A methodology implementing the techniques according to an embodiment includes generating an intermediate representation (IR) of provided media content, the IR specifying editable elements of the content and maintaining a result of cumulative edits to those elements. The method also includes editing the elements of the IR to generate a set of candidate IR variations. The method further includes creating a set of candidate media contents based on the candidate IR variations, evaluating the candidate media contents to generate effectiveness scores, and pruning the set of candidate IR variations to retain a threshold number of the candidate IR variations as surviving IR variations associated with the highest effectiveness scores. The process iterates until either an effectiveness score exceeds a threshold value, the incremental improvement at each iteration falls below a desired value, or a maximum number of iterations have been performed.
Type: Application
Filed: February 20, 2020
Publication date: August 26, 2021
Applicant: Adobe Inc.
Inventors: Haoliang Wang, Viswanathan Swaminathan, Stefano Petrangeli, Ran Xu
-
Patent number: 11049290
Abstract: Techniques and systems are provided for generating a video from texture images, and for reconstructing the texture images from the video. For example, a texture image can be divided into a number of tiles, and the number of tiles can be sorted into a sequence of ordered tiles. The sequence of ordered tiles can be provided to a video coder for generating a coded video. The number of tiles can be encoded based on the sequence of ordered tiles. The encoded video including the encoded sequence of ordered tiles can be decoded. At least a portion of the decoded video can include the number of tiles sorted into a sequence of ordered tiles. A data file associated with at least the portion of the decoded video can be used to reconstruct the texture image using the tiles.
Type: Grant
Filed: September 26, 2019
Date of Patent: June 29, 2021
Assignee: Adobe Inc.
Inventors: Gwendal Simon, Viswanathan Swaminathan, Nathan Carr, Stefano Petrangeli
-
Publication number: 20210037227
Abstract: In implementations of trajectory-based viewport prediction for 360-degree videos, a video system obtains trajectories of angles of users who have previously viewed a 360-degree video. The angles are used to determine viewports of the 360-degree video, and may include trajectories for a yaw angle, a pitch angle, and a roll angle of a user recorded as the user views the 360-degree video. The video system clusters the trajectories of angles into trajectory clusters, and for each trajectory cluster determines a trend trajectory. When a new user views the 360-degree video, the video system compares trajectories of angles of the new user to the trend trajectories, and selects trend trajectories for a yaw angle, a pitch angle, and a roll angle for the user. Using the selected trend trajectories, the video system predicts viewports of the 360-degree video for the user for future times.
Type: Application
Filed: October 19, 2020
Publication date: February 4, 2021
Applicant: Adobe Inc.
Inventors: Stefano Petrangeli, Viswanathan Swaminathan, Gwendal Brieuc Christian Simon
-
Publication number: 20200374506
Abstract: In implementations of trajectory-based viewport prediction for 360-degree videos, a video system obtains trajectories of angles of users who have previously viewed a 360-degree video. The angles are used to determine viewports of the 360-degree video, and may include trajectories for a yaw angle, a pitch angle, and a roll angle of a user recorded as the user views the 360-degree video. The video system clusters the trajectories of angles into trajectory clusters, and for each trajectory cluster determines a trend trajectory. When a new user views the 360-degree video, the video system compares trajectories of angles of the new user to the trend trajectories, and selects trend trajectories for a yaw angle, a pitch angle, and a roll angle for the user. Using the selected trend trajectories, the video system predicts viewports of the 360-degree video for the user for future times.
Type: Application
Filed: May 23, 2019
Publication date: November 26, 2020
Applicant: Adobe Inc.
Inventors: Stefano Petrangeli, Viswanathan Swaminathan, Gwendal Brieuc Christian Simon
-
Patent number: 10848738
Abstract: In implementations of trajectory-based viewport prediction for 360-degree videos, a video system obtains trajectories of angles of users who have previously viewed a 360-degree video. The angles are used to determine viewports of the 360-degree video, and may include trajectories for a yaw angle, a pitch angle, and a roll angle of a user recorded as the user views the 360-degree video. The video system clusters the trajectories of angles into trajectory clusters, and for each trajectory cluster determines a trend trajectory. When a new user views the 360-degree video, the video system compares trajectories of angles of the new user to the trend trajectories, and selects trend trajectories for a yaw angle, a pitch angle, and a roll angle for the user. Using the selected trend trajectories, the video system predicts viewports of the 360-degree video for the user for future times.
Type: Grant
Filed: May 23, 2019
Date of Patent: November 24, 2020
Assignee: Adobe Inc.
Inventors: Stefano Petrangeli, Viswanathan Swaminathan, Gwendal Brieuc Christian Simon
-
Publication number: 20200302658
Abstract: Techniques and systems are provided for generating a video from texture images, and for reconstructing the texture images from the video. For example, a texture image can be divided into a number of tiles, and the number of tiles can be sorted into a sequence of ordered tiles. The sequence of ordered tiles can be provided to a video coder for generating a coded video. The number of tiles can be encoded based on the sequence of ordered tiles. The encoded video including the encoded sequence of ordered tiles can be decoded. At least a portion of the decoded video can include the number of tiles sorted into a sequence of ordered tiles. A data file associated with at least the portion of the decoded video can be used to reconstruct the texture image using the tiles.
Type: Application
Filed: September 26, 2019
Publication date: September 24, 2020
Inventors: Gwendal Simon, Viswanathan Swaminathan, Nathan Carr, Stefano Petrangeli
-
Patent number: 9826016
Abstract: A method and system for enabling a plurality of adaptive streaming client devices to share network resources includes a network node monitoring chunk request messages of client devices configured to select a quality level of a chunk from a plurality of quality levels and to request a media server for transmission of a chunk of the selected quality level. The quality level in a monitored chunk request message of a client device is used to estimate local quality information associated with the quality performance of the client device. Global quality information, determined based on the estimated local quality information associated with the client devices, and being indicative of the global quality performance of the client devices, is sent to the client devices.
Type: Grant
Filed: February 24, 2015
Date of Patent: November 21, 2017
Assignees: KONINKLIJKE KPN N.V., IMEC VZW, GHENT UNIVERSITY
Inventors: Stefano Petrangeli, Jeroen Famaey, Steven Latré
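The monitor-aggregate-feedback loop can be sketched in two small functions. Using the latest requested level as the local quality estimate, the mean as the global quality information, and a one-step client adjustment are all illustrative assumptions rather than the patent's specific mechanisms:

```python
def global_quality(requests):
    # Network node: estimate each client's local quality from its monitored
    # chunk requests (latest requested level here), then aggregate into
    # global quality information to send back to the clients.
    local = {cid: levels[-1] for cid, levels in requests.items()}
    avg = sum(local.values()) / len(local)
    return local, avg

def next_level(own_level, global_avg, max_level):
    # Client-side reaction (illustrative): step toward the global average so
    # clients sharing the bottleneck converge instead of competing blindly.
    if own_level > global_avg + 0.5:
        return max(own_level - 1, 0)
    if own_level < global_avg - 0.5:
        return min(own_level + 1, max_level)
    return own_level
```

A client well above the global average steps down, one well below steps up, and one near it holds steady, which is the fairness behavior the shared feedback enables.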
-
Publication number: 20160248835
Abstract: A method and system for enabling a plurality of adaptive streaming client devices to share network resources includes a network node monitoring chunk request messages of client devices configured to select a quality level of a chunk from a plurality of quality levels and to request a media server for transmission of a chunk of the selected quality level. The quality level in a monitored chunk request message of a client device is used to estimate local quality information associated with the quality performance of the client device. Global quality information, determined based on the estimated local quality information associated with the client devices, and being indicative of the global quality performance of the client devices, is sent to the client devices.
Type: Application
Filed: February 24, 2015
Publication date: August 25, 2016
Inventors: Stefano Petrangeli, Jeroen Famaey, Steven Latré