Patents by Inventor Gregg Wilensky

Gregg Wilensky has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11755187
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that provide and apply dynamic image filters to modify digital images over time to simulate a dynamical system. Such dynamic image filters can modify a digital image to progress through frames depicting visual effects that mimic natural and/or artificial qualities of a fluid, gas, chemical, cloud formation, fractal, or other physical matter or phenomenon according to a dynamic-simulation function. Upon detecting a selection of a dynamic image filter, the disclosed systems can identify a dynamic-simulation function corresponding to the dynamical system. Upon selection of a portion of (or the entire) digital image at which to apply the dynamic image filter, the disclosed systems incrementally modify the digital image across time steps to simulate the dynamical system according to the dynamic-simulation function.
    Type: Grant
    Filed: August 9, 2022
    Date of Patent: September 12, 2023
    Assignee: Adobe Inc.
    Inventors: Gregg Wilensky, Russell Preston Brown, Michael Kaplan, David Tristram
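The abstract above describes incrementally modifying a selected region of an image across time steps according to a dynamic-simulation function. As a loose illustration only (not the patented method; the function names and the diffusion rule are assumptions), a diffusion-style update in NumPy might look like:

```python
import numpy as np

def diffusion_step(image, mask, rate=0.2):
    """One time step of a diffusion-style dynamic filter: average
    each pixel with its four neighbours and blend the result back
    in only where `mask` is set, keeping the effect local."""
    padded = np.pad(image, 1, mode="edge")
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    blended = image + rate * (neighbours - image)
    return np.where(mask, blended, image)

def apply_dynamic_filter(image, mask, steps=10, rate=0.2):
    """Incrementally modify the image across time steps,
    collecting one frame per step."""
    frames = []
    frame = image.astype(float)
    for _ in range(steps):
        frame = diffusion_step(frame, mask, rate)
        frames.append(frame.copy())
    return frames
```

Any dynamical update rule (fluid, reaction-diffusion, fractal growth) could stand in for `diffusion_step`; the time-stepped, region-masked loop is the part the abstract emphasizes.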
  • Patent number: 11734805
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that utilize context-aware sensors and multi-dimensional gesture inputs across a digital image to generate enhanced digital images. In particular, the disclosed systems can provide a dynamic sensor over a digital image within a digital enhancement user interface (e.g., a user interface without visual elements for modifying parameter values). In response to selection of a sensor location, the disclosed systems can determine one or more digital image features at the sensor location. Based on these features, the disclosed systems can select and map parameters to movement directions. Moreover, the disclosed systems can identify a user input gesture comprising movements in one or more directions across the digital image. Based on the movements and the one or more features at the sensor location, the disclosed systems can modify parameter values and generate an enhanced digital image.
    Type: Grant
    Filed: September 8, 2021
    Date of Patent: August 22, 2023
    Assignee: Adobe Inc.
    Inventors: Gregg Wilensky, Mark Nichoson, Edward Wright
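A rough sketch of the idea above (all function names and the context rule are assumptions, not the patented design): a sensor reads features at the touched pixel, those features choose which parameters the gesture axes control, and the gesture's movement deltas update the chosen parameter values:

```python
def features_at(image, x, y):
    """Probe simple features at the sensor location
    (image is a row-major grid of (r, g, b) values in 0..1)."""
    r, g, b = image[y][x]
    return {"luminance": 0.299 * r + 0.587 * g + 0.114 * b,
            "saturation": max(r, g, b) - min(r, g, b)}

def select_mapping(features):
    """Context-aware mapping: dark pixels put exposure on the
    horizontal axis, bright ones contrast (an illustrative rule)."""
    horizontal = "exposure" if features["luminance"] < 0.5 else "contrast"
    return {"horizontal": horizontal, "vertical": "saturation"}

def apply_gesture(params, mapping, dx, dy, sensitivity=0.01):
    """Turn gesture movement deltas into parameter-value changes."""
    params = dict(params)
    params[mapping["horizontal"]] = params.get(mapping["horizontal"], 0.0) + dx * sensitivity
    params[mapping["vertical"]] = params.get(mapping["vertical"], 0.0) + dy * sensitivity
    return params
```

The point of the design is that the same drag gesture edits different parameters depending on what lies under the sensor, so no sliders need to appear in the interface.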
  • Publication number: 20220391077
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that provide and apply dynamic image filters to modify digital images over time to simulate a dynamical system. Such dynamic image filters can modify a digital image to progress through frames depicting visual effects that mimic natural and/or artificial qualities of a fluid, gas, chemical, cloud formation, fractal, or other physical matter or phenomenon according to a dynamic-simulation function. Upon detecting a selection of a dynamic image filter, the disclosed systems can identify a dynamic-simulation function corresponding to the dynamical system. Upon selection of a portion of (or the entire) digital image at which to apply the dynamic image filter, the disclosed systems incrementally modify the digital image across time steps to simulate the dynamical system according to the dynamic-simulation function.
    Type: Application
    Filed: August 9, 2022
    Publication date: December 8, 2022
    Inventors: Gregg Wilensky, Russell Preston Brown, Michael Kaplan, David Tristram
  • Patent number: 11409423
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that provide and apply dynamic image filters to modify digital images over time to simulate a dynamical system. Such dynamic image filters can modify a digital image to progress through frames depicting visual effects that mimic natural and/or artificial qualities of a fluid, gas, chemical, cloud formation, fractal, or other physical matter or phenomenon according to a dynamic-simulation function. Upon detecting a selection of a dynamic image filter, the disclosed systems can identify a dynamic-simulation function corresponding to the dynamical system. Upon selection of a portion of (or the entire) digital image at which to apply the dynamic image filter, the disclosed systems incrementally modify the digital image across time steps to simulate the dynamical system according to the dynamic-simulation function.
    Type: Grant
    Filed: January 25, 2021
    Date of Patent: August 9, 2022
    Assignee: Adobe Inc.
    Inventors: Gregg Wilensky, Russell Preston Brown, Michael Kaplan, David Tristram
  • Publication number: 20220236863
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that provide and apply dynamic image filters to modify digital images over time to simulate a dynamical system. Such dynamic image filters can modify a digital image to progress through frames depicting visual effects that mimic natural and/or artificial qualities of a fluid, gas, chemical, cloud formation, fractal, or other physical matter or phenomenon according to a dynamic-simulation function. Upon detecting a selection of a dynamic image filter, the disclosed systems can identify a dynamic-simulation function corresponding to the dynamical system. Upon selection of a portion of (or the entire) digital image at which to apply the dynamic image filter, the disclosed systems incrementally modify the digital image across time steps to simulate the dynamical system according to the dynamic-simulation function.
    Type: Application
    Filed: January 25, 2021
    Publication date: July 28, 2022
    Inventors: Gregg Wilensky, Russell Preston Brown, Michael Kaplan, David Tristram
  • Publication number: 20210407054
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that utilize context-aware sensors and multi-dimensional gesture inputs across a digital image to generate enhanced digital images. In particular, the disclosed systems can provide a dynamic sensor over a digital image within a digital enhancement user interface (e.g., a user interface without visual elements for modifying parameter values). In response to selection of a sensor location, the disclosed systems can determine one or more digital image features at the sensor location. Based on these features, the disclosed systems can select and map parameters to movement directions. Moreover, the disclosed systems can identify a user input gesture comprising movements in one or more directions across the digital image. Based on the movements and the one or more features at the sensor location, the disclosed systems can modify parameter values and generate an enhanced digital image.
    Type: Application
    Filed: September 8, 2021
    Publication date: December 30, 2021
    Inventors: Gregg Wilensky, Mark Nichoson, Edward Wright
  • Patent number: 11196939
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate a virtual long exposure image from a sequence of short exposure images portraying a moving object. In various embodiments, the image transformation system aligns two digital images in the sequence of short exposure images. The image transformation system can determine a motion vector path for the moving object between the first digital image and the second digital image. The image transformation system can also blend pixels along the motion vector path to generate a blended image representative of the motion of the moving object between the first digital image and the second digital image. The image transformation system can generate additional blended images based on consecutive pairs of images in the sequence of digital images and generate a virtual long exposure image by combining the first blended image with the additional blended images.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: December 7, 2021
    Assignee: Adobe Inc.
    Inventors: Chih-Yao Hsieh, Sylvain Paris, Seyed Morteza Safdarnejad, Gregg Wilensky
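Stripped of the alignment step, the pipeline the abstract describes (blend pixels along a motion path for each consecutive pair, then combine the blended images) could be sketched in NumPy as follows. A single global motion vector per pair and `np.roll`'s wrap-around at the borders are simplifications for illustration, not the patented method:

```python
import numpy as np

def blend_along_path(frame, dx, dy, samples=4):
    """Average copies of a frame shifted along one motion vector
    (dx, dy), smearing moving content along its path."""
    acc = np.zeros_like(frame, dtype=float)
    for i in range(samples):
        t = i / max(samples - 1, 1)
        acc += np.roll(frame, (round(t * dy), round(t * dx)), axis=(0, 1))
    return acc / samples

def virtual_long_exposure(frames, motion_vectors, samples=4):
    """Blend each consecutive pair's motion path, then average the
    blended images into one long-exposure result (frames are
    assumed pre-aligned)."""
    blends = [blend_along_path(a, dx, dy, samples)
              for a, (dx, dy) in zip(frames[:-1], motion_vectors)]
    return np.mean(blends, axis=0)
```

A real implementation would estimate a per-object (or per-pixel) motion path rather than one global vector, but the smear-then-combine structure is the same.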
  • Patent number: 11138699
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that utilize context-aware sensors and multi-dimensional gesture inputs across a digital image to generate enhanced digital images. In particular, the disclosed systems can provide a dynamic sensor over a digital image within a digital enhancement user interface (e.g., a user interface without visual elements for modifying parameter values). In response to selection of a sensor location, the disclosed systems can determine one or more digital image features at the sensor location. Based on these features, the disclosed systems can select and map parameters to movement directions. Moreover, the disclosed systems can identify a user input gesture comprising movements in one or more directions across the digital image. Based on the movements and the one or more features at the sensor location, the disclosed systems can modify parameter values and generate an enhanced digital image.
    Type: Grant
    Filed: June 13, 2019
    Date of Patent: October 5, 2021
    Assignee: Adobe Inc.
    Inventors: Gregg Wilensky, Mark Nichoson, Edward Wright
  • Publication number: 20200394773
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that utilize context-aware sensors and multi-dimensional gesture inputs across a digital image to generate enhanced digital images. In particular, the disclosed systems can provide a dynamic sensor over a digital image within a digital enhancement user interface (e.g., a user interface without visual elements for modifying parameter values). In response to selection of a sensor location, the disclosed systems can determine one or more digital image features at the sensor location. Based on these features, the disclosed systems can select and map parameters to movement directions. Moreover, the disclosed systems can identify a user input gesture comprising movements in one or more directions across the digital image. Based on the movements and the one or more features at the sensor location, the disclosed systems can modify parameter values and generate an enhanced digital image.
    Type: Application
    Filed: June 13, 2019
    Publication date: December 17, 2020
    Inventors: Gregg Wilensky, Mark Nichoson, Edward Wright
  • Publication number: 20200280670
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate a virtual long exposure image from a sequence of short exposure images portraying a moving object. In various embodiments, the image transformation system aligns two digital images in the sequence of short exposure images. The image transformation system can determine a motion vector path for the moving object between the first digital image and the second digital image. The image transformation system can also blend pixels along the motion vector path to generate a blended image representative of the motion of the moving object between the first digital image and the second digital image. The image transformation system can generate additional blended images based on consecutive pairs of images in the sequence of digital images and generate a virtual long exposure image by combining the first blended image with the additional blended images.
    Type: Application
    Filed: May 18, 2020
    Publication date: September 3, 2020
    Inventors: Chih-Yao Hsieh, Sylvain Paris, Seyed Morteza Safdarnejad, Gregg Wilensky
  • Patent number: 10701279
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate a virtual long exposure image from a sequence of short exposure images portraying a moving object. In various embodiments, the image transformation system aligns two digital images in the sequence of short exposure images. The image transformation system can determine a motion vector path for the moving object between the first digital image and the second digital image. The image transformation system can also blend pixels along the motion vector path to generate a blended image representative of the motion of the moving object between the first digital image and the second digital image. The image transformation system can generate additional blended images based on consecutive pairs of images in the sequence of digital images and generate a virtual long exposure image by combining the first blended image with the additional blended images.
    Type: Grant
    Filed: October 2, 2018
    Date of Patent: June 30, 2020
    Assignee: Adobe Inc.
    Inventors: Chih-Yao Hsieh, Sylvain Paris, Seyed Morteza Safdarnejad, Gregg Wilensky
  • Publication number: 20200106945
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate a virtual long exposure image from a sequence of short exposure images portraying a moving object. In various embodiments, the image transformation system aligns two digital images in the sequence of short exposure images. The image transformation system can determine a motion vector path for the moving object between the first digital image and the second digital image. The image transformation system can also blend pixels along the motion vector path to generate a blended image representative of the motion of the moving object between the first digital image and the second digital image. The image transformation system can generate additional blended images based on consecutive pairs of images in the sequence of digital images and generate a virtual long exposure image by combining the first blended image with the additional blended images.
    Type: Application
    Filed: October 2, 2018
    Publication date: April 2, 2020
    Inventors: Chih-Yao Hsieh, Sylvain Paris, Seyed Morteza Safdarnejad, Gregg Wilensky
  • Patent number: 8054317
    Abstract: Methods and systems for comparing and organizing color themes and word tag associations. One embodiment comprises a method for determining associated color themes based on an identified color theme by determining the distance between the identified color theme and each color theme of the collection of color themes, wherein each distance includes a color-based distance and the determined subset of associated color themes from the collection is based at least in part on the calculated distances from the identified color theme. Another embodiment comprises a method that allows an application to suggest tags for an identified color theme based on its similarity to color themes and associated tags of the color theme collection. Another embodiment suggests color themes based on an identified tag, and yet another embodiment suggests tags based on an identified tag.
    Type: Grant
    Filed: December 7, 2007
    Date of Patent: November 8, 2011
    Assignee: Adobe Systems Incorporated
    Inventor: Gregg Wilensky
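A toy version of the distance-based lookup described above, using only the standard library. The sum-of-swatch-distances metric and the neighbour-pooling rule for tag suggestion are illustrative assumptions, not the patented distance:

```python
import math

def theme_distance(theme_a, theme_b):
    """Color-based distance between two themes: sum of Euclidean
    distances between corresponding RGB swatches (themes are
    assumed to have equal swatch counts)."""
    return sum(math.dist(a, b) for a, b in zip(theme_a, theme_b))

def suggest_tags(query_theme, collection, k=2):
    """Suggest tags for a theme by pooling the tags of its k
    nearest neighbours in the collection."""
    ranked = sorted(collection,
                    key=lambda item: theme_distance(query_theme, item["theme"]))
    tags = []
    for item in ranked[:k]:
        for tag in item["tags"]:
            if tag not in tags:
                tags.append(tag)
    return tags
```

The other directions the abstract mentions (themes from a tag, tags from a tag) follow the same pattern with the roles of themes and tags swapped.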
  • Publication number: 20080095429
    Abstract: A digital image that includes first and second regions is processed. An intrinsic color of a given pixel located in an area of interest that is adjacent to at least one of the first and second regions is estimated by extrapolating from colors of multiple pixels in one of the first and second regions and multiple pixels in the other of the two regions.
    Type: Application
    Filed: October 31, 2007
    Publication date: April 24, 2008
    Inventors: Gregg Wilensky, Martin Newell
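One common way to frame the estimation above (a generic alpha-matting projection, not necessarily the patented extrapolation): treat the observed boundary pixel as a mix of a foreground and a background color and project it onto the line between them:

```python
def estimate_alpha(observed, fg, bg):
    """Fraction of the foreground color in an observed mixed pixel,
    found by projecting the observed color onto the fg->bg line in
    RGB space and clamping to [0, 1]."""
    num = sum((o - b) * (f - b) for o, f, b in zip(observed, fg, bg))
    den = sum((f - b) ** 2 for f, b in zip(fg, bg))
    return max(0.0, min(1.0, num / den))
```

In practice `fg` and `bg` would themselves be extrapolated from several nearby pixels in each region, as the abstract describes, rather than taken as single known colors.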
  • Publication number: 20080056563
    Abstract: Method and apparatus for segmenting a first region and a second region. A method for defining a boundary separating a first region and a second region of a digital image includes determining, using a learning machine and based on one or more of the color arrangements, which pixels of the image satisfy criteria for classification as associated with the first region and which pixels satisfy criteria for classification as associated with the second region. The digital image includes one or more color arrangements characteristic of the first region and one or more color arrangements characteristic of the second region. The method includes identifying pixels of the image that are determined not to satisfy the criteria for classification as being associated with either the first region or the second region. The method includes decontaminating the identified pixels to define a boundary between the first and second regions.
    Type: Application
    Filed: October 16, 2007
    Publication date: March 6, 2008
    Applicant: Adobe Systems Incorporated
    Inventors: Stephen Schiller, Gregg Wilensky
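A minimal stand-in for the learning machine (nearest-centroid classification in RGB space; the patent does not specify this model) shows the three-way outcome the abstract describes: foreground, background, or pixels that satisfy neither criterion and are left for decontamination:

```python
import math

def train_classifier(fg_samples, bg_samples, margin=0.1):
    """Return a pixel classifier trained on RGB color samples from
    each region; pixels close to both centroids satisfy neither
    criterion and are flagged as boundary pixels."""
    def centroid(samples):
        return tuple(sum(s[i] for s in samples) / len(samples)
                     for i in range(3))
    fg_c, bg_c = centroid(fg_samples), centroid(bg_samples)

    def classify(pixel):
        d_fg, d_bg = math.dist(pixel, fg_c), math.dist(pixel, bg_c)
        if abs(d_fg - d_bg) < margin:
            return "boundary"
        return "foreground" if d_fg < d_bg else "background"
    return classify
```

The decontamination step would then re-estimate intrinsic colors for the "boundary" pixels rather than assigning them wholesale to either region.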
  • Publication number: 20070065006
    Abstract: Methods and apparatus, including computer program products, for performing color correction. One product can receive a digital image that includes a region depicting human skin; obtain a skin color value based on a sample; receive a skin parameter value that is a tan or a blush value; use the skin color value and the skin parameter value to determine an estimated ambient lighting condition of the image; and determine a color correction based on the estimated lighting condition and a target lighting condition. Another product can use the skin color value to determine an estimated color temperature of the image and an estimated tint shift of the image, and can determine a color correction based on the estimated lighting condition and a target lighting condition and the estimated tint shift. Another product can use the skin color value and the skin parameter value to determine an estimated camera color setting.
    Type: Application
    Filed: September 22, 2005
    Publication date: March 22, 2007
    Applicant: Adobe Systems Incorporated
    Inventor: Gregg Wilensky
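The core idea above, using a sampled skin color as a known reference to infer the scene's color cast, reduces in the simplest illustrative form (not the patented estimator) to per-channel gains:

```python
def cast_gains(sampled_skin, reference_skin):
    """Per-channel gains that would map the sampled skin color
    onto a reference skin tone, approximating the ambient color
    cast of the image (RGB values in 0..1)."""
    return tuple(ref / obs for ref, obs in zip(reference_skin, sampled_skin))

def correct_pixel(pixel, gains):
    """Apply the estimated correction to one RGB pixel, clamping
    each channel to the valid range."""
    return tuple(min(1.0, c * g) for c, g in zip(pixel, gains))
```

The tan/blush skin parameter in the abstract would adjust `reference_skin` before the gains are computed, so a deliberately tanned subject is not "corrected" back to a paler tone.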
  • Publication number: 20060126719
    Abstract: The invention provides methods and apparatus, including computer program products, implementing and using techniques for masking and extracting a foreground portion from a background portion of a digital image. In the method, a first input defining a first border region is received, which includes at least a part of the foreground portion and at least a part of the background portion in a first digital image. A second input defining a second border region is received, which includes at least a part of the foreground portion and at least a part of the background portion in a second digital image. An intermediary border region is interpolated for an image intermediary in time to the first and second digital images and the first, second, and intermediary border regions are used for masking the foreground portion from the background portion in the digital video.
    Type: Application
    Filed: January 12, 2006
    Publication date: June 15, 2006
    Inventor: Gregg Wilensky
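The interpolation step in the abstract above can be sketched as plain linear interpolation of corresponding border points between the two user-drawn keyframes (matching point counts per border are an assumption; the patent does not specify the interpolation scheme):

```python
def interpolate_border(border_a, border_b, t):
    """Border region for a frame at time t (0..1) between two
    keyframe borders, as point-wise linear interpolation of
    (x, y) border points."""
    return [((1 - t) * ax + t * bx, (1 - t) * ay + t * by)
            for (ax, ay), (bx, by) in zip(border_a, border_b)]
```

Each intermediary frame's border then drives the same foreground/background masking used on the keyframes, so the user only marks two frames instead of every frame of the video.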
  • Publication number: 20060074861
    Abstract: Methods and apparatus implementing a technique for searching media objects. In general, in one aspect, the technique includes receiving user input specifying a plurality of reference objects, defining a set of features for them, and combining the features to generate composite reference information defining criteria for search.
    Type: Application
    Filed: September 30, 2002
    Publication date: April 6, 2006
    Applicant: Adobe Systems Incorporated
    Inventor: Gregg Wilensky
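One simple reading of the technique above (element-wise averaging as the combination rule is an assumption): build a composite feature vector from the reference objects, then rank the collection by distance to it:

```python
import math

def composite_reference(feature_vectors):
    """Combine per-object feature vectors into one composite by
    element-wise averaging (equal-length vectors assumed)."""
    n = len(feature_vectors)
    return [sum(v[i] for v in feature_vectors) / n
            for i in range(len(feature_vectors[0]))]

def search(collection, reference_objects, k=2):
    """Rank media objects by distance to the composite of the
    user-specified reference objects; return the top-k names."""
    comp = composite_reference(reference_objects)
    ranked = sorted(collection,
                    key=lambda item: math.dist(item["features"], comp))
    return [item["name"] for item in ranked[:k]]
```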
  • Publication number: 20060053374
    Abstract: A system to perform localized activity with respect to digital data includes an interface component and an activity component. The interface component is configured to receive a marker location with respect to source digital data. The activity component is configured automatically to perform a parametrically-controlled activity with respect to the source digital data, based on an activity parameter. The activity component is further configured automatically to perform a parametrically-controlled selection of a selected portion of the source digital data, based on the marker location and a portion selection parameter, and to localize an effect of the parametrically-controlled activity to the selected portion of the source digital data.
    Type: Application
    Filed: November 9, 2004
    Publication date: March 9, 2006
    Inventor: Gregg Wilensky
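A minimal concrete instance of the system above, with brightening as the parametrically-controlled activity. Here a Gaussian falloff radius plays the role of the portion-selection parameter and the strength that of the activity parameter; these are illustrative names, not the patent's:

```python
import math

def localized_brighten(image, marker, radius, strength):
    """Brighten a grayscale image (values 0..1) with Gaussian
    falloff around the marker location, localizing the effect
    to the parametrically selected portion."""
    my, mx = marker
    out = []
    for y, row in enumerate(image):
        out.append([min(1.0, v + strength * math.exp(
            -((y - my) ** 2 + (x - mx) ** 2) / (2 * radius ** 2)))
            for x, v in enumerate(row)])
    return out
```

Because both the selection (radius) and the activity (strength) are parametric, a single marker click can drive the whole localized edit, which matches the interface-minimal style of the gesture patents above.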
  • Publication number: 20050089216
    Abstract: Method and apparatus for segmenting a first region and a second region. A method for defining a boundary separating a first region and a second region of a digital image includes determining, using a learning machine and based on one or more of the color arrangements, which pixels of the image satisfy criteria for classification as associated with the first region and which pixels satisfy criteria for classification as associated with the second region. The digital image includes one or more color arrangements characteristic of the first region and one or more color arrangements characteristic of the second region. The method includes identifying pixels of the image that are determined not to satisfy the criteria for classification as being associated with either the first region or the second region. The method includes decontaminating the identified pixels to define a boundary between the first and second regions.
    Type: Application
    Filed: October 24, 2003
    Publication date: April 28, 2005
    Inventors: Stephen Schiller, Gregg Wilensky