Patents by Inventor Shaun M. Poole
Shaun M. Poole has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12231715
Abstract: In one or more embodiments, a computing device is configured to present playback of a media composition on a touchscreen display. Concurrently with presenting the playback, the computing device receives, via the touchscreen display during a first time period, a touch input that includes a series of motions that start when a particular frame of the media composition is being presented. Responsive to receiving the touch input, the computing device adds a media clip to the media composition, with the media clip including a graphical representation of the touch input, having a duration corresponding to the first time period, and being stored in association with the particular frame. Subsequent to adding the media clip, the computing device presents a second playback of the media composition, where playback of the media clip is initiated when playback of the particular frame is started during the second playback of the media composition.
Type: Grant
Filed: May 8, 2023
Date of Patent: February 18, 2025
Assignee: Apple Inc.
Inventors: David C. Schweinsberg, Gregory E. Niles, Shaun M. Poole, Peter A. Steinauer
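A minimal sketch of how such a touch-drawn annotation clip might be modeled and replayed. The types (`TouchSample`, `AnnotationClip`, `MediaComposition`) are hypothetical illustrations, not taken from the patent or any Apple API:

```swift
import Foundation

// Hypothetical model of the annotation clip described in the abstract: it stores the
// drawn touch path, the frame at which drawing began, and the drawing's duration.
struct TouchSample {
    let point: CGPoint          // touch location on the display
    let offset: TimeInterval    // seconds since the touch began
}

struct AnnotationClip {
    let startFrame: Int         // frame being presented when the touch started
    let duration: TimeInterval  // length of the first time period (the touch gesture)
    let path: [TouchSample]     // graphical representation of the touch input
}

struct MediaComposition {
    var frameRate: Double
    var annotations: [AnnotationClip] = []

    // Attach an annotation clip to the frame that was on screen when drawing began.
    mutating func add(_ clip: AnnotationClip) {
        annotations.append(clip)
    }

    // During a later playback, return the annotations whose playback should start
    // when `frame` is presented.
    func annotationsStarting(at frame: Int) -> [AnnotationClip] {
        annotations.filter { $0.startFrame == frame }
    }
}

// Usage: a gesture drawn while frame 240 was on screen replays from frame 240.
var composition = MediaComposition(frameRate: 30)
composition.add(AnnotationClip(startFrame: 240,
                               duration: 1.5,
                               path: [TouchSample(point: CGPoint(x: 10, y: 20), offset: 0)]))
print(composition.annotationsStarting(at: 240).count)   // 1
```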
-
Publication number: 20240378850
Abstract: In one or more embodiments, a computing device is configured to modify an original video by applying a machine learning model. The computing device obtains multiple training data sets, with each particular training data set including an original video and a corresponding modified video. One or more frames from the original video are cropped to generate corresponding frames in the corresponding modified video. The computing device trains a machine learning model, using the training data sets, to generate modified videos from original videos such that one or more frames in the original videos are modified to generate corresponding frames in respective modified videos. Once the machine learning model is trained, the computing device obtains a target original video and applies the trained machine learning model to the target original video to generate a target modified video.
Type: Application
Filed: May 8, 2023
Publication date: November 14, 2024
Applicant: Apple Inc.
Inventors: Paul M. Bombach, James C. Arndt, David N. Chen, Todd E. Kramer, Shaun M. Poole, Rupamay Saha, Eugene M. Walden
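A rough sketch of the data flow only: training pairs that record which crop turned an original frame into a modified frame, and a stand-in "model" that predicts a crop per frame. The types (`TrainingExample`, `CropModel`) are assumptions for illustration; the actual learned model and its training are not shown:

```swift
import Foundation

// Hypothetical shape of the training data described in the abstract: each example
// pairs an original frame with the crop that produced the corresponding modified frame.
struct TrainingExample {
    let originalFrameID: String
    let cropRect: CGRect        // region of the original kept in the modified video
}

struct TrainingSet {
    let examples: [TrainingExample]
}

// Stand-in for the trained model: given a frame size, it predicts a crop rectangle.
// A real implementation would be a learned model; this placeholder just centers a
// fixed-size window, purely to show how the model is applied to a target video.
struct CropModel {
    let outputSize: CGSize

    func predictCrop(forFrameOfSize frameSize: CGSize) -> CGRect {
        CGRect(x: (frameSize.width - outputSize.width) / 2,
               y: (frameSize.height - outputSize.height) / 2,
               width: outputSize.width,
               height: outputSize.height)
    }
}

// Applying the "model" to a target original video: one crop per frame.
let model = CropModel(outputSize: CGSize(width: 1080, height: 1080))
let frameSizes = Array(repeating: CGSize(width: 1920, height: 1080), count: 3)
let crops = frameSizes.map { model.predictCrop(forFrameOfSize: $0) }
print(crops.first ?? .zero)   // a 1080x1080 window centered horizontally in the frame
```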
-
Publication number: 20240380945
Abstract: In one or more embodiments, a computing device is configured to present playback of a media composition on a touchscreen display. Concurrently with presenting the playback, the computing device receives, via the touchscreen display during a first time period, a touch input that includes a series of motions that start when a particular frame of the media composition is being presented. Responsive to receiving the touch input, the computing device adds a media clip to the media composition, with the media clip including a graphical representation of the touch input, having a duration corresponding to the first time period, and being stored in association with the particular frame. Subsequent to adding the media clip, the computing device presents a second playback of the media composition, where playback of the media clip is initiated when playback of the particular frame is started during the second playback of the media composition.
Type: Application
Filed: May 8, 2023
Publication date: November 14, 2024
Applicant: Apple Inc.
Inventors: David C. Schweinsberg, Gregory E. Niles, Shaun M. Poole, Peter A. Steinauer
-
Publication number: 20240379127
Abstract: In some implementations, a system generates a video clip in which the user can select which entity is in focus and which other entities are out of focus, based on a source video clip that has already been recorded. Multiple video clips may be generated based on the source video clip with different selected entities in focus and other entities out of focus. In other implementations, a system overlays an image grid on frames of a video clip and analyzes the frames based on annotation criteria to determine which portions of the frames, corresponding to respective cells of the image grid, meet the annotation criteria. The system overlays an annotation on the cells of the image grid indicative of a portion of the frames meeting the annotation criteria while refraining from overlaying the annotation on any cells of the image grid indicative of a portion of the frames that do not meet the annotation criteria.
Type: Application
Filed: May 8, 2023
Publication date: November 14, 2024
Applicant: Apple Inc.
Inventors: Rupamay Saha, Shaun M. Poole
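A minimal sketch of the grid-overlay pass: divide a frame into cells, score each cell against an annotation criterion, and keep only the cells that meet it. The criterion closure and the `GridCell` type are illustrative assumptions, not the patent's actual analysis:

```swift
import Foundation

// Hypothetical sketch: a frame is divided into grid cells, each cell is evaluated
// against an annotation criterion, and only qualifying cells are marked.
struct GridCell {
    let row: Int
    let column: Int
    let rect: CGRect
}

func gridCells(for frameSize: CGSize, rows: Int, columns: Int) -> [GridCell] {
    let cellWidth = frameSize.width / CGFloat(columns)
    let cellHeight = frameSize.height / CGFloat(rows)
    return (0..<rows).flatMap { row in
        (0..<columns).map { column in
            GridCell(row: row,
                     column: column,
                     rect: CGRect(x: CGFloat(column) * cellWidth,
                                  y: CGFloat(row) * cellHeight,
                                  width: cellWidth,
                                  height: cellHeight))
        }
    }
}

// `meetsCriteria` stands in for whatever per-cell analysis the system performs
// (focus, exposure, subject presence, etc.).
func annotatedCells(in frameSize: CGSize,
                    rows: Int,
                    columns: Int,
                    meetsCriteria: (GridCell) -> Bool) -> [GridCell] {
    gridCells(for: frameSize, rows: rows, columns: columns).filter(meetsCriteria)
}

// Usage: annotate only the cells in the top half of a 1920x1080 frame.
let marked = annotatedCells(in: CGSize(width: 1920, height: 1080), rows: 4, columns: 4) {
    $0.rect.midY < 540
}
print(marked.count)   // 8
```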
-
Publication number: 20240378767
Abstract: A computing device harmonizes output image properties displayed on a graphical user interface (GUI) when a user switches between two or more images of different image formats to display on the GUI. When a user interacts with a user interface to change one or more display properties of an image, a system modifies the displayed image properties based on the user's modifications. If the user selects another image with a different dynamic range than the first image, the system performs a backend image conversion so that the displayed properties of the second image correspond to the user-selected modifications to the displayed properties of the first image.
Type: Application
Filed: May 8, 2023
Publication date: November 14, 2024
Applicant: Apple Inc.
Inventors: Daniel Pettigrew, Robert J. Bell, Peter H. Chou, Michael J. Garber, Shaun M. Poole
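A toy sketch of the general idea only: a user-facing display adjustment made on one image is remapped when the user switches to an image with a different dynamic range, so the visible result corresponds. The linear peak-luminance scaling below is an illustrative assumption, not the patent's actual conversion:

```swift
import Foundation

// Hypothetical formats and adjustments; the remapping rule is a simplifying assumption.
struct ImageFormat {
    let name: String
    let peakNits: Double
}

struct DisplayAdjustments {
    var brightnessOffsetNits: Double
}

func remap(_ adjustments: DisplayAdjustments,
           from source: ImageFormat,
           to target: ImageFormat) -> DisplayAdjustments {
    // Scale the brightness offset by the ratio of peak luminances so the adjustment
    // reads the same relative to each format's range.
    let scale = target.peakNits / source.peakNits
    return DisplayAdjustments(brightnessOffsetNits: adjustments.brightnessOffsetNits * scale)
}

let sdr = ImageFormat(name: "SDR", peakNits: 100)
let hdr = ImageFormat(name: "HDR", peakNits: 1000)
let userEdit = DisplayAdjustments(brightnessOffsetNits: 5)

// Switching from the SDR image to the HDR image carries the edit across formats.
print(remap(userEdit, from: sdr, to: hdr).brightnessOffsetNits)   // 50.0
```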
-
Patent number: 9437247
Abstract: Some embodiments provide a graphical user interface (GUI) of a media-editing application. The GUI includes a composite display area for displaying a set of media clips that define a composite presentation. The set of media clips includes a particular media clip that includes several different groups of ordered clips that are selectable for use in the composite presentation. The GUI includes a preview display area for simultaneously displaying video images from several of the different groups corresponding to a time in the composite presentation. The displayed video images in the preview display area are selectable to determine which of the groups is for use in the composite presentation.
Type: Grant
Filed: January 30, 2012
Date of Patent: September 6, 2016
Assignee: Apple Inc.
Inventors: Colleen Pendergast, Brian Meaney, Mark Alan Eaton, II, Shaun M. Poole, Mike Stern
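A minimal sketch of how a clip carrying several alternative groups might be modeled, with one group active in the presentation and one representative per group exposed for a side-by-side preview. The types (`MediaClip`, `MultiGroupClip`) are hypothetical:

```swift
import Foundation

// Hypothetical model of the "particular media clip": it carries several alternative
// groups of ordered clips, exactly one of which is active in the composite presentation.
struct MediaClip {
    let name: String
    let durationSeconds: Double
}

struct MultiGroupClip {
    let groups: [[MediaClip]]       // each group is an ordered list of clips
    var selectedGroupIndex: Int = 0

    // The group currently used when rendering the composite presentation.
    var activeGroup: [MediaClip] { groups[selectedGroupIndex] }

    // Preview display: one representative clip per group, shown simultaneously so the
    // user can pick which group to use.
    var previewCandidates: [MediaClip?] { groups.map(\.first) }
}

var audition = MultiGroupClip(groups: [
    [MediaClip(name: "Take 1", durationSeconds: 4.0)],
    [MediaClip(name: "Take 2", durationSeconds: 3.5)],
])

audition.selectedGroupIndex = 1                 // user picks the second group in the preview
print(audition.activeGroup.first?.name ?? "")   // "Take 2"
```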
-
Patent number: 9412414
Abstract: Some embodiments provide a method that receives the addition of a video clip having a first set of spatial properties to a composite video project having a second set of spatial properties. When the first set of spatial properties and the second set of spatial properties are different, the method automatically applies a spatial conform effect to the video clip to conform images of the video clip to the second set of spatial properties. The method receives input to transform images of the video clip as displayed in the composite video project. The method stores the spatial conform effect and the received transform as separate effects for the video clip.
Type: Grant
Filed: September 7, 2011
Date of Patent: August 9, 2016
Assignee: Apple Inc.
Inventors: Xiaohuan C. Wang, Giovanni Agnoli, Shaun M. Poole, Colleen Pendergast, Randy Ubillos, Vijay Sundaram, Paul T. Schneider, Peter Chou
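A minimal sketch of keeping the automatic spatial conform and the user's manual transform as two separate effects on a clip. The "fit" scaling rule and the type names are assumptions for illustration:

```swift
import Foundation

// Hypothetical sketch: the conform effect is derived from the clip and project sizes,
// while the user's transform is stored and edited independently on top of it.
struct SpatialProperties {
    let width: Double
    let height: Double
}

struct ConformEffect {
    let scale: Double   // uniform scale that fits the clip inside the project frame
}

struct TransformEffect {
    var scale: Double = 1.0
    var translateX: Double = 0
    var translateY: Double = 0
}

struct ClipEffects {
    var conform: ConformEffect?     // applied automatically when properties differ
    var transform: TransformEffect  // applied on top of the conform, edited by the user
}

func conformEffect(clip: SpatialProperties, project: SpatialProperties) -> ConformEffect? {
    guard clip.width != project.width || clip.height != project.height else { return nil }
    // "Fit" conform: scale so the clip fits entirely within the project frame.
    return ConformEffect(scale: min(project.width / clip.width, project.height / clip.height))
}

let clip = SpatialProperties(width: 1280, height: 720)
let project = SpatialProperties(width: 1920, height: 1080)
var effects = ClipEffects(conform: conformEffect(clip: clip, project: project),
                          transform: TransformEffect())
effects.transform.scale = 1.2       // the user transform stays independent of the conform
print(effects.conform?.scale ?? 1)  // 1.5
```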
-
Patent number: 9323438
Abstract: Some embodiments of the invention provide a media-editing application for creating and editing a media presentation that displays the results of edits as the edits are made to the media presentation. The media-editing application displays the movement of media clips of the media presentation as the media clips are being moved within the media-editing application to change the media presentation. Also, the media-editing application in some embodiments can dynamically display the results of edits in a preview display area. That is, the media-editing application has a preview generator that can generate previews of the media presentation on the fly as media clips are being dragged into and within the timeline. This allows the user of the media-editing application to see and hear the results of an operation while performing it.
Type: Grant
Filed: June 1, 2011
Date of Patent: April 26, 2016
Assignee: Apple Inc.
Inventors: Itrat U. Khan, Ken Matsuda, Giovanni Agnoli, Dave Cerf, Brian Meaney, Colleen Pendergast, Kenneth M. Carson, Shaun M. Poole
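A rough sketch of the preview-while-editing flow: each drag update is applied to the timeline model immediately and a preview is requested for the affected time. The `Timeline` and `PreviewGenerator` types are hypothetical stand-ins, not the application's actual architecture:

```swift
import Foundation

// Hypothetical sketch: moving a clip updates the model and triggers a live preview.
struct TimelineClip {
    let name: String
    var startSeconds: Double
    let durationSeconds: Double
}

final class PreviewGenerator {
    // Stand-in for rendering a frame of the current composition at `time`.
    func requestPreview(at time: Double, clips: [TimelineClip]) {
        let visible = clips.filter { time >= $0.startSeconds && time < $0.startSeconds + $0.durationSeconds }
        print("Preview at \(time)s shows: \(visible.map(\.name))")
    }
}

final class Timeline {
    private(set) var clips: [TimelineClip]
    let previewGenerator = PreviewGenerator()

    init(clips: [TimelineClip]) { self.clips = clips }

    // Called repeatedly while the user drags a clip; the preview tracks the edit live.
    func moveClip(named name: String, to newStart: Double) {
        guard let index = clips.firstIndex(where: { $0.name == name }) else { return }
        clips[index].startSeconds = newStart
        previewGenerator.requestPreview(at: newStart, clips: clips)
    }
}

let timeline = Timeline(clips: [TimelineClip(name: "Intro", startSeconds: 0, durationSeconds: 5)])
timeline.moveClip(named: "Intro", to: 2.0)   // prints the preview for the drag position
```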
-
Patent number: 8954477
Abstract: Some embodiments provide a method for defining a data structure for representing a media file imported into a media-editing application. The method defines a reference to an original version of a media file. The method defines references to one or more transcoded versions of the media file. Each of the transcoded versions has a different resolution. The method defines a set of metadata storing information regarding the media file. In some embodiments, the media file includes both audio and video. The method defines a video clip data structure for the media file that references the asset data structure and an audio clip data structure for the media file that also references the asset data structure. The method defines a media clip data structure that contains the video clip data structure and the audio clip data structure. The media clip data structure is for editing into a composite video presentation.
Type: Grant
Filed: May 19, 2011
Date of Patent: February 10, 2015
Assignee: Apple Inc.
Inventors: Giovanni Agnoli, Kenneth M. Carson, Nils Angquist, Andrew S. Demkin, Shaun M. Poole
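A minimal sketch of the structures described: an asset referencing the original file and its transcoded versions plus metadata, video and audio clip structures that both reference the asset, and a media clip wrapping the two. The file paths and type names are hypothetical:

```swift
import Foundation

// Hypothetical sketch of the asset and clip data structures from the abstract.
struct TranscodedVersion {
    let url: URL
    let verticalResolution: Int
}

struct MediaAsset {
    let originalURL: URL
    let transcodedVersions: [TranscodedVersion]
    let metadata: [String: String]
}

struct VideoClip {
    let asset: MediaAsset
}

struct AudioClip {
    let asset: MediaAsset
}

// The clip that is actually edited into a composite presentation.
struct MediaClipComponent {
    let video: VideoClip
    let audio: AudioClip
}

let asset = MediaAsset(
    originalURL: URL(fileURLWithPath: "/media/interview.mov"),
    transcodedVersions: [
        TranscodedVersion(url: URL(fileURLWithPath: "/media/interview_proxy.mov"), verticalResolution: 540),
        TranscodedVersion(url: URL(fileURLWithPath: "/media/interview_hq.mov"), verticalResolution: 1080),
    ],
    metadata: ["camera": "A"]
)
let clip = MediaClipComponent(video: VideoClip(asset: asset), audio: AudioClip(asset: asset))
print(clip.video.asset.transcodedVersions.count)   // 2
```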
-
Patent number: 8839110
Abstract: Some embodiments provide a method that receives the addition of a video clip to a composite video project. The video clip has a sequence of video images at a first frame rate and the composite video project has a second frame rate for outputting video images. When the first frame rate does not match the second frame rate but is within a threshold of the second frame rate, the method generates output video images for a particular duration of the composite video project at the second frame rate by using each of the video images of the video clip once during the particular duration. When the first frame rate is not within the threshold, the method generates output video images for the particular duration of the composite video project at the second frame rate by using at least one of the video images for each output video image over the particular duration.
Type: Grant
Filed: August 25, 2011
Date of Patent: September 16, 2014
Assignee: Apple Inc.
Inventors: Xiaohuan C. Wang, Giovanni Agnoli, Shaun M. Poole, Vijay Sundaram, Eric J. Graves, Peter Chou, Colleen Pendergast, David N. Chen
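A minimal sketch of the two frame-rate conform strategies: when the clip's rate is within the threshold of the project's rate, play each source image exactly once; otherwise pick a source image for each output frame, repeating or skipping as needed. The threshold value and the mapping function are illustrative assumptions:

```swift
import Foundation

// Hypothetical sketch of choosing between the two conform strategies in the abstract.
enum FrameRateConform {
    case useEachImageOnce              // rates match or differ only within the threshold
    case resampleImagesPerOutputFrame  // repeat or skip source images as needed
}

func conformMode(clipRate: Double, projectRate: Double, threshold: Double = 0.5) -> FrameRateConform {
    abs(clipRate - projectRate) <= threshold ? .useEachImageOnce : .resampleImagesPerOutputFrame
}

// For the resampling case: which source image index feeds each output frame.
func sourceImageIndices(sourceFrameCount: Int, clipRate: Double,
                        projectRate: Double, outputFrameCount: Int) -> [Int] {
    (0..<outputFrameCount).map { outputFrame in
        let sourceIndex = Int(Double(outputFrame) * clipRate / projectRate)
        return min(sourceIndex, sourceFrameCount - 1)
    }
}

print(conformMode(clipRate: 29.97, projectRate: 30))   // useEachImageOnce
print(conformMode(clipRate: 24, projectRate: 30))      // resampleImagesPerOutputFrame
print(sourceImageIndices(sourceFrameCount: 24, clipRate: 24,
                         projectRate: 30, outputFrameCount: 10))
// [0, 0, 1, 2, 3, 4, 4, 5, 6, 7] -- some source images feed two output frames
```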
-
Publication number: 20130124999
Abstract: Some embodiments provide a media-editing application. The application defines a reference clip data structure for a media clip that represents one or more media files imported into the media-editing application. The application receives a command to add the media clip into a composite media presentation. The application defines a clip instance data structure as part of the composite media presentation. The clip instance data structure inherits properties of the reference clip data structure, and subsequent modifications to the reference clip data structure affect the clip instance data structure.
Type: Application
Filed: January 30, 2012
Publication date: May 16, 2013
Inventors: Giovanni Agnoli, Shaun M. Poole, Kenneth M. Carson, Colleen Pendergast, Brian Meaney
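A minimal sketch of the reference clip / clip instance relationship: instances resolve properties through a shared reference unless they override them locally, so a later change to the reference propagates to the instances. The property names and override mechanism are hypothetical:

```swift
import Foundation

// Hypothetical sketch: instances inherit from a shared reference clip definition.
final class ReferenceClip {
    var volume: Double
    var colorCorrection: String?

    init(volume: Double, colorCorrection: String? = nil) {
        self.volume = volume
        self.colorCorrection = colorCorrection
    }
}

struct ClipInstance {
    let reference: ReferenceClip
    var volumeOverride: Double?     // nil means "inherit from the reference clip"

    var effectiveVolume: Double { volumeOverride ?? reference.volume }
    var effectiveColorCorrection: String? { reference.colorCorrection }
}

let reference = ReferenceClip(volume: 1.0)
let first = ClipInstance(reference: reference)
var second = ClipInstance(reference: reference)
second.volumeOverride = 0.5

// Modifying the reference clip affects every instance that still inherits from it.
reference.volume = 0.8
print(first.effectiveVolume)    // 0.8
print(second.effectiveVolume)   // 0.5
```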
-
Publication number: 20130124998
Abstract: Some embodiments provide a graphical user interface (GUI) of a media-editing application. The GUI includes a composite display area for displaying a set of media clips that define a composite presentation. The set of media clips includes a particular media clip that includes several different groups of ordered clips that are selectable for use in the composite presentation. The GUI includes a preview display area for simultaneously displaying video images from several of the different groups corresponding to a time in the composite presentation. The displayed video images in the preview display area are selectable to determine which of the groups is for use in the composite presentation.
Type: Application
Filed: January 30, 2012
Publication date: May 16, 2013
Inventors: Colleen Pendergast, Brian Meaney, Mark Alan Eaton, II, Shaun M. Poole, Mike Stern
-
Publication number: 20120210232
Abstract: Some embodiments provide a method that receives the addition of a video clip to a composite video project. The video clip has a sequence of video images at a first frame rate and the composite video project has a second frame rate for outputting video images. When the first frame rate does not match the second frame rate but is within a threshold of the second frame rate, the method generates output video images for a particular duration of the composite video project at the second frame rate by using each of the video images of the video clip once during the particular duration. When the first frame rate is not within the threshold, the method generates output video images for the particular duration of the composite video project at the second frame rate by using at least one of the video images for each output video image over the particular duration.
Type: Application
Filed: August 25, 2011
Publication date: August 16, 2012
Inventors: Xiaohuan C. Wang, Giovanni Agnoli, Shaun M. Poole, Vijay Sundaram, Eric J. Graves, Peter Chou, Colleen Pendergast, David N. Chen
-
Publication number: 20120209889
Abstract: Some embodiments provide a method for defining a data structure for representing a media file imported into a media-editing application. The method defines a reference to an original version of a media file. The method defines references to one or more transcoded versions of the media file. Each of the transcoded versions has a different resolution. The method defines a set of metadata storing information regarding the media file. In some embodiments, the media file includes both audio and video. The method defines a video clip data structure for the media file that references the asset data structure and an audio clip data structure for the media file that also references the asset data structure. The method defines a media clip data structure that contains the video clip data structure and the audio clip data structure. The media clip data structure is for editing into a composite video presentation.
Type: Application
Filed: May 19, 2011
Publication date: August 16, 2012
Inventors: Giovanni Agnoli, Kenneth M. Carson, Nils Angquist, Andrew S. Demkin, Shaun M. Poole
-
Publication number: 20120210221
Abstract: Some embodiments of the invention provide a media-editing application for creating and editing a media presentation that displays the results of edits as the edits are made to the media presentation. The media-editing application displays the movement of media clips of the media presentation as the media clips are being moved within the media-editing application to change the media presentation. Also, the media-editing application in some embodiments can dynamically display the results of edits in a preview display area. That is, the media-editing application has a preview generator that can generate previews of the media presentation on the fly as media clips are being dragged into and within the timeline. This allows the user of the media-editing application to see and hear the results of an operation while performing it.
Type: Application
Filed: June 1, 2011
Publication date: August 16, 2012
Inventors: Itrat U. Khan, Ken Matsuda, Giovanni Agnoli, Dave Cerf, Brian Meaney, Colleen Pendergast, Kenneth M. Carson, Shaun M. Poole
-
Publication number: 20120207452
Abstract: Some embodiments provide a method that receives the addition of a video clip having a first set of spatial properties to a composite video project having a second set of spatial properties. When the first set of spatial properties and the second set of spatial properties are different, the method automatically applies a spatial conform effect to the video clip to conform images of the video clip to the second set of spatial properties. The method receives input to transform images of the video clip as displayed in the composite video project. The method stores the spatial conform effect and the received transform as separate effects for the video clip.
Type: Application
Filed: September 7, 2011
Publication date: August 16, 2012
Inventors: Xiaohuan C. Wang, Giovanni Agnoli, Shaun M. Poole, Colleen Pendergast, Randy Ubillos, Vijay Sundaram, Paul T. Schneider, Peter Chou
-
Publication number: 20120198319
Abstract: For a media-editing application that creates a composite media presentation, some embodiments of the invention provide a method for reducing rendering operations by dividing the composite presentation into several segments and rendering the segments in a manner that allows these segments to move with respect to each other without losing the rendered results. The media-editing application defines portions of a media presentation as segments. When the media-editing application renders a segment of the media presentation, the application computes an identifier that uniquely identifies the segment and then uses this identifier to store and later retrieve the rendered result for the segment. The application in some embodiments computes the identifier based on a set of attributes of the segment, and stores the results of rendering the segment at a location that is uniquely identifiable in a storage structure by the identifier.
Type: Application
Filed: June 15, 2011
Publication date: August 2, 2012
Inventors: Giovanni Agnoli, Kenneth M. Carson, Eric J. Graves, Shaun M. Poole
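A minimal sketch of the caching idea: derive a segment identifier from content attributes only (clips, effects, duration), not from timeline position, and use it as the cache key so a moved segment still hits its earlier render. The identifier scheme and cache types are hypothetical; a real identifier would be a stable hash that survives across sessions:

```swift
import Foundation

// Hypothetical sketch of the render cache keyed by a content-derived identifier.
struct RenderSegment: Hashable {
    let clipNames: [String]
    let effectNames: [String]
    let durationSeconds: Double

    // Identifier computed from the segment's attributes (timeline position excluded).
    // Note: hashValue is only stable within one process run; this is illustration only.
    var identifier: Int { hashValue }
}

final class RenderCache {
    private var rendered: [Int: String] = [:]   // identifier -> rendered result (stand-in)

    func render(_ segment: RenderSegment) -> String {
        if let cached = rendered[segment.identifier] {
            return cached   // reuse the previous render; the segment only moved
        }
        let result = "rendered(\(segment.clipNames.joined(separator: "+")))"
        rendered[segment.identifier] = result
        return result
    }
}

let cache = RenderCache()
let segment = RenderSegment(clipNames: ["A", "B"], effectNames: ["blur"], durationSeconds: 3)
_ = cache.render(segment)          // rendered and stored
print(cache.render(segment))       // served from the cache, even if the segment moved
```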