Patents by Inventor Aseem O. Agarwala

Aseem O. Agarwala has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10656808
    Abstract: Natural language and user interface control techniques are described. In one or more implementations, a natural language input is received that is indicative of an operation to be performed by one or more modules of a computing device. Responsive to determining that the operation is associated with a degree to which the operation is performable, a user interface control is output that is manipulable by a user to control the degree to which the operation is to be performed.
    Type: Grant
    Filed: November 21, 2012
    Date of Patent: May 19, 2020
    Assignee: Adobe Inc.
    Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala
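A minimal sketch of the interaction described in patent 10656808 above, under stated assumptions: the operation catalog, parameter ranges, and SliderControl structure below are hypothetical, not the patented implementation. The idea shown is that a natural-language command is mapped to an editing operation, and when that operation is gradable, a slider-style control is returned so the user can adjust the degree.

```python
# Illustrative sketch only: map a natural-language command to an operation and,
# if the operation is gradable, emit a slider spec the UI could render.
# The vocabulary and ranges below are assumptions for demonstration.
from dataclasses import dataclass
from typing import Optional

# Hypothetical catalog: operation name -> (is_gradable, min, max, default)
OPERATIONS = {
    "brighten": (True, 0.0, 2.0, 1.0),
    "sharpen":  (True, 0.0, 1.0, 0.5),
    "rotate":   (True, -180.0, 180.0, 0.0),
    "crop":     (False, None, None, None),
}

@dataclass
class SliderControl:
    operation: str
    minimum: float
    maximum: float
    value: float

def interpret(command: str) -> tuple[str, Optional[SliderControl]]:
    """Pick the first known operation mentioned in the command and, if it is
    gradable, return a slider control for adjusting its degree."""
    words = command.lower().split()
    for op, (gradable, lo, hi, default) in OPERATIONS.items():
        if op in words:
            control = SliderControl(op, lo, hi, default) if gradable else None
            return op, control
    raise ValueError(f"No known operation in: {command!r}")

if __name__ == "__main__":
    op, control = interpret("please brighten the photo a little")
    print(op, control)  # brighten, slider over 0.0..2.0 starting at 1.0
```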
  • Patent number: 10489009
    Abstract: A mesh is a collection of multiple shapes referred to as elements, each of which can share an edge with one or more other elements of the mesh. The mesh is presented to the user on a display, and the user identifies a new element to be added to the mesh. User input is received to manipulate the new element (e.g., move the new element around the display). As the new element is manipulated, various conditions are applied to determine edges of elements existing in the mesh that the new element can be snapped to. Snapping a new element to an edge of an existing element in the mesh refers to adding the new element to the mesh so that the new element and the existing element share the edge. Indications of the edges of existing elements to which the new element can be snapped are provided to the user.
    Type: Grant
    Filed: November 5, 2018
    Date of Patent: November 26, 2019
    Assignee: Adobe Inc.
    Inventors: Yuyan Song, Sarah Kong, Alan L. Erickson, Bradee R. Evans, Aseem O. Agarwala
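A toy sketch of the edge-snapping test described in patent 10489009 above: as a new element is dragged, existing mesh edges whose endpoints lie within a snap tolerance of one of the new element's edges are reported as snap candidates. The data layout and tolerance are assumptions for illustration, not the patented conditions.

```python
# Toy edge-snapping check: an edge is a pair of (x, y) endpoints; a new
# element's edge can snap to a mesh edge when their endpoints align within a
# tolerance (in either endpoint order).
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def edge_matches(edge_a, edge_b, tol):
    """Two edges 'match' if their endpoints align within tol (either order)."""
    (a1, a2), (b1, b2) = edge_a, edge_b
    return (dist(a1, b1) <= tol and dist(a2, b2) <= tol) or \
           (dist(a1, b2) <= tol and dist(a2, b1) <= tol)

def snap_candidates(new_element_edges, mesh_edges, tol=2.0):
    """Return (new_edge, mesh_edge) pairs that could be snapped together."""
    return [(ne, me) for ne in new_element_edges for me in mesh_edges
            if edge_matches(ne, me, tol)]

if __name__ == "__main__":
    mesh_edges = [((0, 0), (10, 0)), ((10, 0), (10, 10))]
    new_edges  = [((1, 1), (11, 1)), ((11, 1), (1, 12))]
    # Only the first new edge is close enough to snap to the first mesh edge.
    print(snap_candidates(new_edges, mesh_edges, tol=2.0))
```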
  • Publication number: 20190073093
    Abstract: A mesh is a collection of multiple shapes referred to as elements, each of which can share an edge with one or more other elements of the mesh. The mesh is presented to the user on a display, and the user identifies a new element to be added to the mesh. User input is received to manipulate the new element (e.g., move the new element around the display). As the new element is manipulated, various conditions are applied to determine edges of elements existing in the mesh that the new element can be snapped to. Snapping a new element to an edge of an existing element in the mesh refers to adding the new element to the mesh so that the new element and the existing element share the edge. Indications of the edges of existing elements to which the new element can be snapped are provided to the user.
    Type: Application
    Filed: November 5, 2018
    Publication date: March 7, 2019
    Applicant: Adobe Inc.
    Inventors: Yuyan Song, Sarah Kong, Alan L. Erickson, Bradee R. Evans, Aseem O. Agarwala
  • Patent number: 10120523
    Abstract: A mesh is a collection of multiple shapes referred to as elements, each of which can share an edge with one or more other elements of the mesh. The mesh is presented to the user on a display, and the user identifies a new element to be added to the mesh. User input is received to manipulate the new element (e.g., move the new element around the display). As the new element is manipulated, various conditions are applied to determine edges of elements existing in the mesh that the new element can be snapped to. Snapping a new element to an edge of an existing element in the mesh refers to adding the new element to the mesh so that the new element and the existing element share the edge. Indications of the edges of existing elements to which the new element can be snapped are provided to the user.
    Type: Grant
    Filed: August 29, 2014
    Date of Patent: November 6, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Yuyan Song, Sarah Kong, Alan L. Erickson, Bradee R. Evans, Aseem O. Agarwala
  • Patent number: 9928836
    Abstract: Natural language input processing utilizing grammar templates is described. In one or more implementations, a natural language input indicating an operation to be performed is parsed into at least one part-of-speech, a grammar template corresponding to the part-of-speech is located, an arbitrary term in the part-of-speech is detected based on the located grammar template, a term related to the arbitrary term and describing a modification for the operation is determined based on the sentence expression of the grammar template, and the indicated operation is performed with the described modification.
    Type: Grant
    Filed: July 13, 2016
    Date of Patent: March 27, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala
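A rough sketch of the grammar-template idea from patent 9928836 above: a command is matched against a small template, the "arbitrary" intensity word is looked up, and the operation is returned with that modification. The template, vocabulary, and output format are illustrative assumptions only.

```python
# Minimal template matcher: one hypothetical template of the form
# "make it <intensifier>? <comparative adjective>", a tiny intensifier
# vocabulary, and a mapping from adjectives to editing parameters.
import re

INTENSIFIERS = {"slightly": 0.25, "a bit": 0.25, "somewhat": 0.5, "very": 1.0}
ADJECTIVE_TO_OP = {"brighter": ("brightness", +1), "darker": ("brightness", -1),
                   "sharper": ("sharpness", +1)}

TEMPLATE = re.compile(
    r"make (?:it|the image) (?P<amount>slightly|a bit|somewhat|very)? ?(?P<adj>\w+)"
)

def interpret(command: str):
    m = TEMPLATE.match(command.lower())
    if not m or m.group("adj") not in ADJECTIVE_TO_OP:
        return None
    param, sign = ADJECTIVE_TO_OP[m.group("adj")]
    strength = INTENSIFIERS.get(m.group("amount"), 0.5)  # default when no intensifier
    return {"parameter": param, "delta": sign * strength}

if __name__ == "__main__":
    print(interpret("Make it a bit brighter"))  # {'parameter': 'brightness', 'delta': 0.25}
```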
  • Patent number: 9881376
    Abstract: A method, system, and computer-readable storage medium for performing content based transitions between images. Image content within each image of a set of images is analyzed to determine at least one respective characteristic metric for each image. A respective transition score for each pair of at least a subset of the images is determined with respect to each transition effect of a plurality of transition effects based on the at least one respective characteristic metric for each image. Transition effects implementing transitions between successive images for a sequence of the images are determined based on the transition scores. An indication of the determined transition effects is stored. The determined transition effects are useable to present the images in a slideshow or other image sequence presentation.
    Type: Grant
    Filed: July 28, 2014
    Date of Patent: January 30, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Elya Shechtman, Shai Bagon, Aseem O. Agarwala
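A toy version of the scoring loop described in patent 9881376 above: compute a simple characteristic metric per image (here, a grayscale histogram), score each consecutive pair against a couple of transition effects, and keep the best-scoring effect per pair. The metric, the two effects, and the scoring rules are assumptions for illustration.

```python
# Content-based transition planning, heavily simplified: histogram similarity
# favors crossfades between similar images and hard cuts between dissimilar ones.
import numpy as np

def characteristic(img: np.ndarray) -> np.ndarray:
    """Normalized 32-bin grayscale histogram as a stand-in image descriptor."""
    hist, _ = np.histogram(img, bins=32, range=(0, 255))
    return hist / max(hist.sum(), 1)

def score(effect: str, h_a: np.ndarray, h_b: np.ndarray) -> float:
    similarity = 1.0 - 0.5 * np.abs(h_a - h_b).sum()  # 1 = identical histograms
    if effect == "crossfade":   # crossfades read best between similar images
        return similarity
    if effect == "hard_cut":    # cuts read best between dissimilar images
        return 1.0 - similarity
    raise ValueError(effect)

def plan_transitions(images):
    descriptors = [characteristic(im) for im in images]
    plan = []
    for a, b in zip(descriptors, descriptors[1:]):
        best = max(("crossfade", "hard_cut"), key=lambda e: score(e, a, b))
        plan.append(best)
    return plan

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    imgs = [rng.integers(0, 256, (64, 64)) for _ in range(4)]
    print(plan_transitions(imgs))
```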
  • Patent number: 9742994
    Abstract: This specification describes technologies relating to digital images. In general, one aspect of the subject matter described in this specification can be embodied in methods that include the actions of receiving a source wide-angle image; identifying one or more locally salient features of the source wide-angle image; calculating a mapping from the source wide-angle image to a two-dimensional mapped wide-angle image according to constraints using the identified one or more spatially variable salient features; and rendering the mapped wide-angle image using the calculated mapping such that the mapped wide-angle image reduces distortion of the locally salient features relative to the distortion of the source wide-angle image.
    Type: Grant
    Filed: August 7, 2013
    Date of Patent: August 22, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Aseem O. Agarwala, Robert E. Carroll
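A greatly simplified, one-dimensional illustration of the idea behind patent 9742994 above. The patent computes a spatially varying mapping under constraints; here, as a toy stand-in, two standard projections are blended per sample, weighting toward the shape-preserving one where local salience is high. This is not the patented optimization, only a picture of how salience can vary the mapping locally.

```python
# Blend a perspective (rectilinear) radial mapping, which keeps straight lines
# straight, with a stereographic mapping, which better preserves local shapes,
# according to an assumed per-sample salience weight in [0, 1].
import math

def perspective(theta):      # radial coordinate under rectilinear projection
    return math.tan(theta)

def stereographic(theta):    # radial coordinate under stereographic projection
    return 2.0 * math.tan(theta / 2.0)

def blended_radius(theta, salience):
    """salience = 1 near locally salient content (e.g. a face): favor shape
    preservation; salience = 0: favor straight-line preservation."""
    return (1.0 - salience) * perspective(theta) + salience * stereographic(theta)

if __name__ == "__main__":
    for deg, sal in [(10, 0.0), (50, 0.0), (50, 1.0)]:
        r = blended_radius(math.radians(deg), sal)
        print(f"angle={deg:3d} deg  salience={sal:.1f}  mapped radius={r:.3f}")
```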
  • Publication number: 20160321242
    Abstract: Natural language input processing utilizing grammar templates is described. In one or more implementations, a natural language input indicating an operation to be performed is parsed into at least one part-of-speech, a grammar template corresponding to the part-of-speech is located, an arbitrary term in the part-of-speech is detected based on the located grammar template, a term related to the arbitrary term and describing a modification for the operation is determined based on the sentence expression of the grammar template, and the indicated operation is performed with the described modification.
    Type: Application
    Filed: July 13, 2016
    Publication date: November 3, 2016
    Applicant: Adobe Systems Incorporated
    Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala
  • Patent number: 9436382
    Abstract: Natural language image editing techniques are described. In one or more implementations, a natural language input is converted from audio data using a speech-to-text engine. A gesture is recognized from one or more touch inputs detected using one or more touch sensors. Performance is then initiated of an operation identified from a combination of the natural language input and the recognized gesture.
    Type: Grant
    Filed: November 21, 2012
    Date of Patent: September 6, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala
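A minimal sketch of the multimodal step in patent 9436382 above, assuming the speech-to-text stage has already produced a transcript: the transcript and a recognized gesture are combined to select a single editing operation. The gesture names and the operation table are hypothetical.

```python
# Fuse a transcribed command and a recognized gesture into one operation by
# looking up (verb, gesture) pairs in an assumed operation table.
OPERATIONS = {
    ("erase", "scribble"): "spot_heal_region",
    ("blur", "circle"):    "lens_blur_region",
    ("crop", "rectangle"): "crop_to_rectangle",
}

def resolve_operation(transcript: str, gesture: str) -> str:
    verbs = [w for w in transcript.lower().split() if (w, gesture) in OPERATIONS]
    if not verbs:
        raise LookupError(f"no operation for {transcript!r} + {gesture!r}")
    return OPERATIONS[(verbs[0], gesture)]

if __name__ == "__main__":
    print(resolve_operation("please blur this part", "circle"))  # lens_blur_region
```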
  • Patent number: 9412366
    Abstract: Natural language image spatial and tonal localization techniques are described. In one or more implementations, a natural language input is processed to determine spatial and tonal localization of one or more image editing operations specified by the natural language input. Performance is initiated of the one or more image editing operations on image data using the determined spatial and tonal localization.
    Type: Grant
    Filed: November 21, 2012
    Date of Patent: August 9, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala
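A small sketch of the spatial and tonal localization idea from patent 9412366 above: a phrase such as "brighten the shadows in the upper left" becomes a spatial mask (an image region) multiplied by a tonal mask (dark pixels), and the edit applies only where both agree. The keyword parsing, thresholds, and gain model are assumptions for illustration.

```python
# Build a spatial mask from direction words and a tonal mask from tone words,
# then apply a simple brightness gain only inside their intersection.
import numpy as np

def spatial_mask(shape, phrase):
    h, w = shape
    mask = np.ones((h, w), dtype=float)
    if "left" in phrase:  mask[:, w // 2:] = 0.0
    if "right" in phrase: mask[:, :w // 2] = 0.0
    if "upper" in phrase: mask[h // 2:, :] = 0.0
    if "lower" in phrase: mask[:h // 2, :] = 0.0
    return mask

def tonal_mask(gray, phrase):
    if "shadow" in phrase:    return (gray < 85).astype(float)
    if "highlight" in phrase: return (gray > 170).astype(float)
    return np.ones_like(gray, dtype=float)

def apply_localized_gain(gray, phrase, gain):
    mask = spatial_mask(gray.shape, phrase) * tonal_mask(gray, phrase)
    return np.clip(gray + mask * gain, 0, 255)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, (8, 8)).astype(float)
    out = apply_localized_gain(img, "brighten the shadows in the upper left", 40)
    print(out.round(0))
```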
  • Publication number: 20160062622
    Abstract: A mesh is a collection of multiple shapes referred to as elements, each of which can share an edge with one or more other elements of the mesh. The mesh is presented to the user on a display, and the user identifies a new element to be added to the mesh. User input is received to manipulate the new element (e.g., move the new element around the display). As the new element is manipulated, various conditions are applied to determine edges of elements existing in the mesh that the new element can be snapped to. Snapping a new element to an edge of an existing element in the mesh refers to adding the new element to the mesh so that the new element and the existing element share the edge. Indications of the edges of existing elements to which the new element can be snapped are provided to the user.
    Type: Application
    Filed: August 29, 2014
    Publication date: March 3, 2016
    Inventors: Yuyan Song, Sarah Kong, Alan L. Erickson, Bradee R. Evans, Aseem O. Agarwala
  • Patent number: 9141335
    Abstract: Natural language image tags are described. In one or more implementations, at least a portion of an image displayed by a display device is defined based on a gesture. The gesture is identified from one or more touch inputs detected using touchscreen functionality of the display device. Text received in a natural language input is located and used to tag the portion of the image using one or more items of the text received in the natural language input.
    Type: Grant
    Filed: November 21, 2012
    Date of Patent: September 22, 2015
    Assignee: Adobe Systems Incorporated
    Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala
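A sketch of the tagging flow in patent 9141335 above: a touch gesture yields a region of the image, simple stop-word filtering picks candidate tag terms from the natural-language text, and both are stored together. The stop-word list and the storage format are assumptions, not the patented method.

```python
# Attach words from a natural-language phrase as tags on a gesture-defined
# image region (here represented by a bounding box).
STOP_WORDS = {"this", "is", "the", "a", "an", "my", "our", "here"}

def tags_from_text(text: str) -> list[str]:
    return [w.strip(".,!").lower() for w in text.split()
            if w.strip(".,!").lower() not in STOP_WORDS]

def tag_region(image_tags: list, bbox: tuple, text: str) -> None:
    """bbox = (x, y, width, height) derived from the touch gesture."""
    image_tags.append({"bbox": bbox, "tags": tags_from_text(text)})

if __name__ == "__main__":
    tags = []
    tag_region(tags, (120, 40, 200, 180), "This is my dog Rex")
    print(tags)  # [{'bbox': (120, 40, 200, 180), 'tags': ['dog', 'rex']}]
```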
  • Patent number: 9013634
    Abstract: Methods, apparatus, and computer-readable storage media for video completion that may be applied to restore missing content, for example holes or border regions, in video sequences. A video completion technique applies a subspace constraint technique that finds and tracks feature points in the video, which are used to form a model of the camera motion and to predict locations of background scene points in frames where the background is occluded. Another frame where those points were visible is found, and that frame is warped using the predicted points. A content-preserving warp technique may be used. Image consistency constraints may be applied to modify the warp so that it fills the hole seamlessly. A compositing technique is applied to composite the warped image into the hole. This process may be repeated until the missing content is filled on all frames.
    Type: Grant
    Filed: November 24, 2010
    Date of Patent: April 21, 2015
    Assignee: Adobe Systems Incorporated
    Inventors: Aseem O. Agarwala, Daniel Goldman, Daniel H. Leventhal
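A hedged sketch of the fill step described in patent 9013634 above, with a plain homography standing in for the patent's content-preserving warp and consistency constraints: background points predicted in the target frame are matched to the same points in a frame where they were visible, the source frame is warped accordingly, and the warped pixels are composited into the hole. Requires OpenCV and NumPy; all inputs are assumed to be supplied by earlier tracking and motion-modeling steps.

```python
# Warp a frame where the background was visible into the hole of a frame
# where it is occluded, using an estimated homography, then composite.
import cv2
import numpy as np

def fill_hole(target, source, pts_in_source, pts_predicted_in_target, hole_mask):
    """
    target, source           : HxWx3 uint8 frames
    pts_in_source            : Nx2 float32 tracked background points in `source`
    pts_predicted_in_target  : Nx2 float32 predicted positions in `target`
    hole_mask                : HxW bool array, True where `target` is missing content
    """
    H, _ = cv2.findHomography(pts_in_source, pts_predicted_in_target, cv2.RANSAC)
    h, w = target.shape[:2]
    warped = cv2.warpPerspective(source, H, (w, h))
    out = target.copy()
    out[hole_mask] = warped[hole_mask]   # simple composite; the patent additionally
    return out                           # enforces image-consistency at the seam
```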
  • Patent number: 8929610
    Abstract: Methods and apparatus for robust video stabilization. A video stabilization technique applies a feature tracking technique to an input video sequence to generate feature trajectories. The technique applies a video partitioning technique to segment the input video sequence into factorization windows and transition windows. The technique smoothes the trajectories in each of the windows, in sequence. For factorization windows, a subspace-based optimization technique may be used. For transition windows, a direct track optimization technique that uses a similarity motion model may be used. The technique then determines and applies warping models to the frames in the video sequence. In at least some embodiments, the warping models may include a content-preserving warping model, a homography model, a similarity transform model, and a whole-frame translation model. The warped frames may then be cropped according to a cropping technique.
    Type: Grant
    Filed: February 7, 2012
    Date of Patent: January 6, 2015
    Assignee: Adobe Systems Incorporated
    Inventors: Hailin Jin, Aseem O. Agarwala, Jue Wang
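A very reduced sketch of the trajectory-smoothing idea behind patent 8929610 above: feature trajectories are low-pass filtered, and each frame is then corrected with the simplest warping model named in the abstract, a whole-frame translation. The moving-average filter and synthetic data are assumptions; the patent also uses content-preserving, homography, and similarity models and per-window optimization.

```python
# Smooth 2-D feature trajectories over time and derive a per-frame
# whole-frame translation that moves the original tracks toward the
# smoothed ones. NumPy only.
import numpy as np

def smooth_trajectories(tracks: np.ndarray, radius: int = 5) -> np.ndarray:
    """tracks: (num_frames, num_points, 2). Moving-average smoothing in time."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(tracks, ((radius, radius), (0, 0), (0, 0)), mode="edge")
    out = np.empty_like(tracks, dtype=float)
    for i in range(tracks.shape[1]):
        for d in range(2):
            out[:, i, d] = np.convolve(padded[:, i, d], kernel, mode="valid")
    return out

def per_frame_translation(tracks: np.ndarray, smoothed: np.ndarray) -> np.ndarray:
    """Whole-frame translation per frame, averaged over all tracked points."""
    return (smoothed - tracks).mean(axis=1)   # (num_frames, 2)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    shaky = rng.normal(0, 3, (60, 20, 2)).cumsum(axis=0)   # jittery trajectories
    smoothed = smooth_trajectories(shaky)
    print(per_frame_translation(shaky, smoothed)[:3])
```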
  • Publication number: 20140337721
    Abstract: A method, system, and computer-readable storage medium for performing content based transitions between images. Image content within each image of a set of images is analyzed to determine at least one respective characteristic metric for each image. A respective transition score for each pair of at least a subset of the images is determined with respect to each transition effect of a plurality of transition effects based on the at least one respective characteristic metric for each image. Transition effects implementing transitions between successive images for a sequence of the images are determined based on the transition scores. An indication of the determined transition effects is stored. The determined transition effects are useable to present the images in a slideshow or other image sequence presentation.
    Type: Application
    Filed: July 28, 2014
    Publication date: November 13, 2014
    Inventors: Elya Shechtman, Shai Bagon, Aseem O. Agarwala
  • Patent number: 8885880
    Abstract: Methods and apparatus for robust video stabilization. A video stabilization technique applies a feature tracking technique to an input video sequence to generate feature trajectories. The technique applies a video partitioning technique to segment the input video sequence into factorization windows and transition windows. The technique smoothes the trajectories in each of the windows, in sequence. For factorization windows, a subspace-based optimization technique may be used. For transition windows, a direct track optimization technique that uses a similarity motion model may be used. The technique then determines and applies warping models to the frames in the video sequence. In at least some embodiments, the warping models may include a content-preserving warping model, a homography model, a similarity transform model, and a whole-frame translation model. The warped frames may then be cropped according to a cropping technique.
    Type: Grant
    Filed: February 7, 2012
    Date of Patent: November 11, 2014
    Assignee: Adobe Systems Incorporated
    Inventors: Hailin Jin, Aseem O. Agarwala, Jue Wang
  • Patent number: 8872928
    Abstract: Methods, apparatus, and computer-readable storage media for subspace video stabilization. A subspace video stabilization technique may provide a robust and efficient approach to video stabilization that achieves high-quality camera motion for a wide range of videos. The technique may transform a set of input two-dimensional (2D) motion trajectories so that they are both smooth and resemble visually plausible views of the imaged scene; this may be achieved by enforcing subspace constraints on feature trajectories while smoothing them. The technique may assemble tracked features in the video into a trajectory matrix, factor the trajectory matrix into two low-rank matrices, and perform filtering or curve fitting in a low-dimensional linear space. The technique may employ a moving factorization technique that is both efficient and streamable.
    Type: Grant
    Filed: November 24, 2010
    Date of Patent: October 28, 2014
    Assignee: Adobe Systems Incorporated
    Inventors: Hailin Jin, Aseem O. Agarwala, Jue Wang, Michael L. Gleicher, Feng Liu
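A compact illustration of the subspace idea in patent 8872928 above: stack complete 2-D trajectories into a trajectory matrix, factor it into two low-rank matrices with a truncated SVD, smooth only the small time-varying factor, and reconstruct smoothed tracks. The patent describes an efficient, streamable moving factorization; a one-shot SVD and a rank of 9 are used here purely for clarity, and the synthetic data is an assumption.

```python
# Subspace smoothing sketch: factor the trajectory matrix, filter the
# low-dimensional motion coefficients, and reconstruct plausible smooth tracks.
import numpy as np

def subspace_smooth(tracks: np.ndarray, rank: int = 9, radius: int = 5) -> np.ndarray:
    """tracks: (num_frames, num_points, 2) complete feature trajectories."""
    f, n, _ = tracks.shape
    M = tracks.reshape(f, 2 * n)                  # trajectory matrix (frames x 2N)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    coeff = U[:, :rank] * s[:rank]                # low-dimensional motion (frames x rank)
    basis = Vt[:rank]                             # scene structure (rank x 2N)

    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(coeff, ((radius, radius), (0, 0)), mode="edge")
    smoothed = np.column_stack(
        [np.convolve(padded[:, k], kernel, mode="valid") for k in range(rank)]
    )
    return (smoothed @ basis).reshape(f, n, 2)    # smoothed trajectories

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    shaky = rng.normal(0, 2, (90, 40, 2)).cumsum(axis=0)
    print(subspace_smooth(shaky).shape)           # (90, 40, 2)
```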
  • Patent number: 8811771
    Abstract: A method, system, and computer-readable storage medium for performing content based transitions between images. Image content within each image of a set of images is analyzed to determine at least one respective characteristic metric for each image. A respective transition score for each pair of at least a subset of the images is determined with respect to each transition effect of a plurality of transition effects based on the at least one respective characteristic metric for each image. Transition effects implementing transitions between successive images for a sequence of the images are determined based on the transition scores. An indication of the determined transition effects is stored. The determined transition effects are useable to present the images in a slideshow or other image sequence presentation.
    Type: Grant
    Filed: November 26, 2008
    Date of Patent: August 19, 2014
    Assignee: Adobe Systems Incorporated
    Inventors: Eli Shechtman, Shai Bagon, Aseem O. Agarwala
  • Patent number: 8724854
    Abstract: Methods and apparatus for robust video stabilization. A video stabilization technique applies a feature tracking technique to an input video sequence to generate feature trajectories. The technique applies a video partitioning technique to segment the input video sequence into factorization windows and transition windows. The technique smoothes the trajectories in each of the windows, in sequence. For factorization windows, a subspace-based optimization technique may be used. For transition windows, a direct track optimization technique that uses a similarity motion model may be used. The technique then determines and applies warping models to the frames in the video sequence. In at least some embodiments, the warping models may include a content-preserving warping model, a homography model, a similarity transform model, and a whole-frame translation model. The warped frames may then be cropped according to a cropping technique.
    Type: Grant
    Filed: November 21, 2011
    Date of Patent: May 13, 2014
    Assignee: Adobe Systems Incorporated
    Inventors: Hailin Jin, Aseem O. Agarwala, Jue Wang
  • Publication number: 20140081625
    Abstract: Natural language image spatial and tonal localization techniques are described. In one or more implementations, a natural language input is processed to determine spatial and tonal localization of one or more image editing operations specified by the natural language input. Performance is initiated of the one or more image editing operations on image data using the determined spatial and tonal localization.
    Type: Application
    Filed: November 21, 2012
    Publication date: March 20, 2014
    Applicant: Adobe Systems Incorporated
    Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala