Patents by Inventor Alan L. Erickson
Alan L. Erickson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230128276
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for determining whether a derived attribute from a request is stored at a derived attribute cache. In particular, in one or more embodiments, the disclosed systems obtain the derived attribute from artificial-intelligence models if the derived attribute is unavailable at the derived attribute cache. If the derived attribute is available at the derived attribute cache, the disclosed systems return the derived attribute in response to a request without having the artificial-intelligence models rederive the attribute.
Type: Application
Filed: January 25, 2022
Publication date: April 27, 2023
Inventors: Alan L Erickson, Sarah Kong, Betty Leong
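The caching pattern the abstract describes can be sketched in a few lines: check the cache first, and only invoke the expensive model on a miss. This is a minimal illustration under assumed names (`DerivedAttributeCache`, the `(asset_id, attribute)` key), not the patented implementation.

```python
from typing import Any, Callable, Dict, Tuple

class DerivedAttributeCache:
    """Caches attributes derived by expensive AI models, keyed by
    (asset_id, attribute), so repeat requests skip rederivation."""

    def __init__(self, derive_fn: Callable[[str, str], Any]):
        self._derive_fn = derive_fn          # stand-in for an AI model call
        self._store: Dict[Tuple[str, str], Any] = {}

    def get(self, asset_id: str, attribute: str) -> Any:
        key = (asset_id, attribute)
        if key not in self._store:           # cache miss: run the model once
            self._store[key] = self._derive_fn(asset_id, attribute)
        return self._store[key]              # cache hit: return stored value

calls = []
def fake_model(asset_id, attribute):
    calls.append((asset_id, attribute))      # record each model invocation
    return f"{attribute}-of-{asset_id}"

cache = DerivedAttributeCache(fake_model)
cache.get("img1", "caption")
cache.get("img1", "caption")   # served from cache; model is not rerun
print(len(calls))              # prints 1
```

The second `get` returns the stored value without rederiving it, which is the behavior the abstract claims for cache hits.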
-
Publication number: 20230129341
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate preliminary object masks for objects in an image, surface the preliminary object masks as object mask previews, and convert preliminary object masks into refined object masks on demand. Indeed, in one or more implementations, an object mask preview and on-demand generation system automatically detects objects in an image. For the detected objects, the system generates preliminary object masks at a first, lower resolution. The system surfaces a given preliminary object mask in response to detecting a first input, and generates a refined object mask at a second, higher resolution in response to detecting a second input.
Type: Application
Filed: January 25, 2022
Publication date: April 27, 2023
Inventors: Betty Leong, Hyunghwan Byun, Alan L Erickson, Chih-Yao Hsieh, Sarah Kong, Seyed Morteza Safdarnejad, Salil Tambe, Yilin Wang, Zijun Wei, Zhengyun Zhang
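The eager-preview / lazy-refinement split can be sketched as follows. The class name, the resolutions, and the `(object, resolution)` stand-in for a real mask are illustrative assumptions; a real system would call a segmentation model.

```python
class MaskOnDemand:
    """Eagerly builds cheap low-resolution preview masks for all detected
    objects; builds a costly high-resolution mask only when requested."""

    def __init__(self, objects):
        self.objects = objects
        # First input path: previews exist up front at low resolution.
        self._previews = {o: self._mask(o, res=64) for o in objects}
        self._refined = {}                   # second input path: lazy

    def _mask(self, obj, res):
        # Stand-in for a segmentation model; a real mask would be an image.
        return (obj, res)

    def preview(self, obj):
        return self._previews[obj]           # already computed

    def refine(self, obj):
        if obj not in self._refined:         # compute high-res mask once
            self._refined[obj] = self._mask(obj, res=1024)
        return self._refined[obj]
```

The point of the split is responsiveness: hovering (the first input) costs nothing beyond the cheap precomputed previews, while the expensive refinement runs only for objects the user actually commits to.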
-
Patent number: 10489009
Abstract: A mesh is a collection of multiple shapes referred to as elements, each of which can share an edge with one or more other elements of the mesh. The mesh is presented to the user on a display, and the user identifies a new element to be added to the mesh. User input is received to manipulate the new element (e.g., move the new element around the display). As the new element is manipulated, various conditions are applied to determine edges of elements existing in the mesh that the new element can be snapped to. Snapping a new element to an edge of an existing element in the mesh refers to adding the new element to the mesh so that the new element and the existing element share the edge. Indications of the edges of existing elements to which the new element can be snapped are provided to the user.
Type: Grant
Filed: November 5, 2018
Date of Patent: November 26, 2019
Assignee: Adobe Inc.
Inventors: Yuyan Song, Sarah Kong, Alan L Erickson, Bradee R. Evans, Aseem O. Agarwala
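One simple condition for "edges the new element can be snapped to" is proximity while dragging. The sketch below uses edge-midpoint distance with a tolerance; the function name, the tolerance, and the proximity rule are illustrative assumptions, not the conditions claimed in the patent.

```python
import math

def snap_candidates(new_edges, mesh_edges, tol=5.0):
    """Return mesh edges whose midpoint lies within `tol` of the midpoint
    of any edge of the element being dragged - a simplified stand-in for
    the snapping conditions described above."""
    def midpoint(edge):
        (x1, y1), (x2, y2) = edge
        return ((x1 + x2) / 2, (y1 + y2) / 2)

    hits = []
    for me in mesh_edges:
        mx, my = midpoint(me)
        for ne in new_edges:
            nx, ny = midpoint(ne)
            if math.hypot(mx - nx, my - ny) <= tol:
                hits.append(me)              # candidate edge to highlight
                break
    return hits
```

An editor would run this on every drag update and highlight the returned edges, giving the user the "indications" the abstract mentions before the element is committed to the mesh.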
-
Patent number: 10380723
Abstract: In some embodiments, an image editing application stores, based on a first selection input, a selection state that identifies a first image portion of a target image as included in a preview image displayed in a mask-based editing interface of the image editing application. An edit to the preview image generated from the selected first image portion is applied in the mask-based editing interface. The image editing application also updates an edit state that tracks the edit applied to the preview image. The image editing application modifies, based on a second selection input received via the mask-based editing interface, the selection state to include a second image portion in the preview image. The edit state is maintained with the applied edit concurrently with modifying the selection state. The image editing application applies the edit to the modified preview image in the mask-based editing interface.
Type: Grant
Filed: June 19, 2017
Date of Patent: August 13, 2019
Assignee: Adobe Inc.
Inventors: Betty M. Leong, Alan L. Erickson, Sarah Stuckey, Sarah Aye Kong, Bradee R. Evans
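The key idea is that selection state and edit state are separate pieces of state, so growing the selection does not discard edits already applied. A minimal sketch, with illustrative names:

```python
class MaskEditor:
    """Keeps selection state and edit state independent, so modifying the
    selection preserves edits already applied to the preview."""

    def __init__(self):
        self.selection = set()   # which image portions feed the preview
        self.edits = []          # edits applied to the preview so far

    def select(self, portion):
        self.selection.add(portion)     # modifies selection state only

    def apply_edit(self, edit):
        self.edits.append(edit)         # updates edit state only

    def render(self):
        # Conceptually: rebuild the preview from the current selection,
        # then replay every recorded edit onto it.
        return (frozenset(self.selection), tuple(self.edits))

editor = MaskEditor()
editor.select("sky")
editor.apply_edit("exposure+1")
editor.select("trees")          # selection grows...
print(editor.edits)             # ...but prints ['exposure+1']: edits kept
```

Because `render` replays the edit list against whatever the current selection is, the edit automatically applies to the newly added portion too, matching the abstract's final step.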
-
Publication number: 20190073093
Abstract: A mesh is a collection of multiple shapes referred to as elements, each of which can share an edge with one or more other elements of the mesh. The mesh is presented to the user on a display, and the user identifies a new element to be added to the mesh. User input is received to manipulate the new element (e.g., move the new element around the display). As the new element is manipulated, various conditions are applied to determine edges of elements existing in the mesh that the new element can be snapped to. Snapping a new element to an edge of an existing element in the mesh refers to adding the new element to the mesh so that the new element and the existing element share the edge. Indications of the edges of existing elements to which the new element can be snapped are provided to the user.
Type: Application
Filed: November 5, 2018
Publication date: March 7, 2019
Applicant: Adobe Inc.
Inventors: Yuyan Song, Sarah Kong, Alan L. Erickson, Bradee R. Evans, Aseem O. Agarwala
-
Publication number: 20180365813
Abstract: In some embodiments, an image editing application stores, based on a first selection input, a selection state that identifies a first image portion of a target image as included in a preview image displayed in a mask-based editing interface of the image editing application. An edit to the preview image generated from the selected first image portion is applied in the mask-based editing interface. The image editing application also updates an edit state that tracks the edit applied to the preview image. The image editing application modifies, based on a second selection input received via the mask-based editing interface, the selection state to include a second image portion in the preview image. The edit state is maintained with the applied edit concurrently with modifying the selection state. The image editing application applies the edit to the modified preview image in the mask-based editing interface.
Type: Application
Filed: June 19, 2017
Publication date: December 20, 2018
Inventors: Betty M. Leong, Alan L. Erickson, Sarah Stuckey, Sarah Aye Kong, Bradee R. Evans
-
Patent number: 10120523
Abstract: A mesh is a collection of multiple shapes referred to as elements, each of which can share an edge with one or more other elements of the mesh. The mesh is presented to the user on a display, and the user identifies a new element to be added to the mesh. User input is received to manipulate the new element (e.g., move the new element around the display). As the new element is manipulated, various conditions are applied to determine edges of elements existing in the mesh that the new element can be snapped to. Snapping a new element to an edge of an existing element in the mesh refers to adding the new element to the mesh so that the new element and the existing element share the edge. Indications of the edges of existing elements to which the new element can be snapped are provided to the user.
Type: Grant
Filed: August 29, 2014
Date of Patent: November 6, 2018
Assignee: Adobe Systems Incorporated
Inventors: Yuyan Song, Sarah Kong, Alan L Erickson, Bradee R. Evans, Aseem O. Agarwala
-
Patent number: 9955065
Abstract: Dynamic motion path blur techniques are described. In one or more implementations, paths may be specified to constrain a motion blur effect to be applied to a single image. A variety of different techniques may be employed as part of the motion blur effects, including use of curved blur kernel shapes, use of a mesh representation of blur kernel parameter fields to support real time output of the motion blur effect to an image, use of flash effects, blur kernel positioning to support centered or directional blurring, tapered exposure modeling, and null paths.
Type: Grant
Filed: August 27, 2014
Date of Patent: April 24, 2018
Assignee: Adobe Systems Incorporated
Inventors: Gregg D. Wilensky, Nathan A. Carr, Alan L. Erickson, Yuyan Song, Manish Kumar, Bradee Rae Evans, Sarah A. Kong, Michael J. Orts, Meredith L. Stotzner, Heather M. Dolan, Yukie Takahashi
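At its core, path-constrained motion blur averages each pixel with samples displaced along a user-drawn path. The 1-D toy below illustrates only that core idea with an assumed list-of-offsets representation of the path; the patent's curved kernels, mesh parameter fields, and tapered exposure are well beyond this sketch.

```python
def path_blur(pixels, path_offsets):
    """Average each pixel with samples displaced along a path, given as a
    list of integer offsets (a 1-D toy of path-constrained motion blur).
    Samples past either end are clamped to the edge pixel."""
    n = len(pixels)
    out = []
    for i in range(n):
        samples = [pixels[min(max(i + d, 0), n - 1)] for d in path_offsets]
        out.append(sum(samples) / len(samples))
    return out
```

With offsets `[-1, 0, 1]` a bright pixel spreads its energy to its neighbors, which is the visual signature of motion blur along the path; a "null path" (offsets `[0]`) leaves the image unchanged.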
-
Patent number: 9723204
Abstract: Dynamic motion path blur techniques are described. In one or more implementations, paths may be specified to constrain a motion blur effect to be applied to a single image. A variety of different techniques may be employed as part of the motion blur effects, including use of curved blur kernel shapes, use of a mesh representation of blur kernel parameter fields to support real time output of the motion blur effect to an image, use of flash effects, blur kernel positioning to support centered or directional blurring, tapered exposure modeling, and null paths.
Type: Grant
Filed: August 27, 2014
Date of Patent: August 1, 2017
Assignee: Adobe Systems Incorporated
Inventors: Gregg D. Wilensky, Nathan A. Carr, Alan L. Erickson, Yuyan Song, Manish Kumar, Bradee Rae Evans, Sarah A. Kong, Michael J. Orts, Meredith L. Stotzner, Heather M. Dolan, Yukie Takahashi
-
Publication number: 20160063669
Abstract: Dynamic motion path blur techniques are described. In one or more implementations, paths may be specified to constrain a motion blur effect to be applied to a single image. A variety of different techniques may be employed as part of the motion blur effects, including use of curved blur kernel shapes, use of a mesh representation of blur kernel parameter fields to support real time output of the motion blur effect to an image, use of flash effects, blur kernel positioning to support centered or directional blurring, tapered exposure modeling, and null paths.
Type: Application
Filed: August 27, 2014
Publication date: March 3, 2016
Inventors: Gregg D. Wilensky, Nathan A. Carr, Alan L. Erickson, Yuyan Song, Manish Kumar, Bradee Rae Evans, Sarah A. Kong, Michael J. Orts, Meredith L. Stotzner, Heather M. Dolan, Yukie Takahashi
-
Publication number: 20160063670
Abstract: Dynamic motion path blur techniques are described. In one or more implementations, paths may be specified to constrain a motion blur effect to be applied to a single image. A variety of different techniques may be employed as part of the motion blur effects, including use of curved blur kernel shapes, use of a mesh representation of blur kernel parameter fields to support real time output of the motion blur effect to an image, use of flash effects, blur kernel positioning to support centered or directional blurring, tapered exposure modeling, and null paths.
Type: Application
Filed: August 27, 2014
Publication date: March 3, 2016
Inventors: Gregg D. Wilensky, Nathan A. Carr, Alan L. Erickson, Yuyan Song, Manish Kumar, Bradee Rae Evans, Sarah A. Kong, Michael J. Orts, Meredith L. Stotzner, Heather M. Dolan, Yukie Takahashi
-
Publication number: 20160062622
Abstract: A mesh is a collection of multiple shapes referred to as elements, each of which can share an edge with one or more other elements of the mesh. The mesh is presented to the user on a display, and the user identifies a new element to be added to the mesh. User input is received to manipulate the new element (e.g., move the new element around the display). As the new element is manipulated, various conditions are applied to determine edges of elements existing in the mesh that the new element can be snapped to. Snapping a new element to an edge of an existing element in the mesh refers to adding the new element to the mesh so that the new element and the existing element share the edge. Indications of the edges of existing elements to which the new element can be snapped are provided to the user.
Type: Application
Filed: August 29, 2014
Publication date: March 3, 2016
Inventors: Yuyan Song, Sarah Kong, Alan L Erickson, Bradee R. Evans, Aseem O. Agarwala
-
Patent number: 8406566
Abstract: Methods and apparatus for soft edge masking. A soft edge masking technique may be provided via which, starting from an initial, potentially very rough and approximate border selection mask, the user may selectively apply brush strokes to areas of an image to selectively improve the border region of the mask, thus providing softness details in border regions which contain soft objects such as hair and fur. A stroke may be an additive stroke indicating a particular region in which detail from an original image is to be added to a composite image, or a subtractive stroke indicating a particular region in which detail is to be removed from the composite image. The stroke may also indicate a strength parameter value that may be used to indicate an amount of bias to be used in opacity calculations for the affected pixels.
Type: Grant
Filed: May 27, 2010
Date of Patent: March 26, 2013
Assignee: Adobe Systems Incorporated
Inventors: Gregg D. Wilensky, Scott D. Cohen, Alan L. Erickson, Jen-Chan Chien
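The additive/subtractive stroke with a strength-biased opacity can be sketched as a per-pixel interpolation toward full or zero opacity. The sparse `dict` mask, the linear bias rule, and the function name are illustrative assumptions, not the patented opacity calculation.

```python
def apply_stroke(mask, region, additive=True, strength=1.0):
    """Bias the opacity of pixels in `region` toward 1.0 (additive stroke:
    add detail from the original image) or toward 0.0 (subtractive stroke:
    remove detail), scaled by `strength` in [0, 1]. `mask` maps pixel ids
    to opacities; missing pixels default to 0.0."""
    out = dict(mask)                         # leave the input mask untouched
    for px in region:
        base = out.get(px, 0.0)
        target = 1.0 if additive else 0.0
        out[px] = base + (target - base) * strength
    return out
```

At `strength=1.0` a stroke fully commits the region; intermediate strengths blend, which is what preserves the partial opacities that make hair and fur borders look soft.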
-
Patent number: 8379972
Abstract: Various embodiments of methods and apparatus for removing unwanted background color from a border region surrounding a foreground object in an image in order to composite the foreground object of the image with a new background image are described. Embodiments of color decontamination for image compositing may accept an image and an alpha matte corresponding to the image as input. In some embodiments, estimated foreground colors are determined for pixels in a border region between the foreground and the background of the input image. In some embodiments, the input image may be created by down-sampling a higher resolution image and pixels with estimated foreground colors may be up-sampled. In some embodiments, a composite image may be created based on the input image, the alpha matte, the estimated foreground colors of pixels in the border region and a new background image.
Type: Grant
Filed: December 1, 2009
Date of Patent: February 19, 2013
Assignee: Adobe Systems Incorporated
Inventors: Jue Wang, Sarah Kong, Alan L. Erickson
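The final compositing step rests on the standard matting equation: each output pixel is `alpha * F + (1 - alpha) * B_new`. Using an *estimated* foreground color `F` in the border region, rather than the raw pixel (which still mixes in the old background), is what removes the color spill. A single-channel sketch, with illustrative names; how `F` is estimated is the substance of the patent and is not shown here.

```python
def composite(fg_est, alpha, new_bg):
    """Per-pixel composite: out = alpha * F_est + (1 - alpha) * B_new.
    fg_est: estimated foreground colors (decontaminated in the border),
    alpha: matte values in [0, 1], new_bg: new background colors."""
    return [a * f + (1.0 - a) * b for f, a, b in zip(fg_est, alpha, new_bg)]

# A half-transparent border pixel whose estimated foreground is white (1.0)
# composited over a black background (0.0) comes out mid-gray.
print(composite([1.0], [0.5], [0.0]))   # prints [0.5]
```

Had the raw, contaminated pixel been used instead of `fg_est`, the old background's color would bleed into the result along the border.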
-
Patent number: 6964017
Abstract: A system and method of creating interactive visual content in which base visual content, a selection of a trigger event associated with the base visual content, and intermediate visual content are received as input. Viewing visual content derived from the base visual content is automatically generated. The viewing visual content can be displayed by a viewing application executing on a computer. A set of regions of the interactive visual content in which swap visual content is to be displayed by the viewing application when the trigger event occurs is automatically generated. For example, regions of the viewing visual content that are to be replaced by the swap visual content when the trigger event occurs can be identified. Also, the swap visual content is automatically generated from the intermediate visual content.
Type: Grant
Filed: April 26, 2000
Date of Patent: November 8, 2005
Assignee: Adobe Systems Incorporated
Inventors: Douglas E. Meisner, Alan L. Erickson, Troy A. Gaul, Timothy N. Wright, Christopher P. Hondl, Doug J. Ahmann
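The runtime side of this (regions whose content is replaced when a trigger event fires, as in a classic image rollover) can be sketched as an event-to-region mapping. Class and method names are illustrative assumptions; the patent's contribution is generating this structure automatically, which the sketch does not model.

```python
class InteractiveView:
    """Maps trigger events to regions whose displayed content is replaced
    by swap content when the event fires (e.g. a rollover)."""

    def __init__(self, base):
        self.regions = dict(base)           # region -> currently shown content
        self.swaps = {}                     # (event, region) -> swap content

    def register_swap(self, event, region, swap_content):
        self.swaps[(event, region)] = swap_content

    def fire(self, event):
        for (ev, region), content in self.swaps.items():
            if ev == event:
                self.regions[region] = content   # replace with swap content

view = InteractiveView({"button": "base-art"})
view.register_swap("rollover", "button", "highlight-art")
view.fire("rollover")
print(view.regions["button"])   # prints highlight-art
```

Events that have no registered swap leave the viewing content unchanged, so only the identified regions ever update.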