Patents by Inventor Gierad P. Laput
Gierad P. Laput has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10656808
Abstract: Natural language and user interface control techniques are described. In one or more implementations, a natural language input is received that is indicative of an operation to be performed by one or more modules of a computing device. Responsive to determining that the operation is associated with a degree to which the operation is performable, a user interface control is output that is manipulable by a user to control the degree to which the operation is to be performed.
Type: Grant
Filed: November 21, 2012
Date of Patent: May 19, 2020
Assignee: Adobe Inc.
Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala
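The abstract above describes mapping a spoken or typed command to an operation and, when that operation is degree-based, emitting a manipulable UI control rather than applying it at a fixed strength. A minimal sketch of that idea, with an assumed command table and names that are illustrative, not from the patent:

```python
# Hypothetical sketch: a natural language command is mapped to an
# operation; if the operation has a controllable degree (e.g. "brighten"),
# a slider descriptor is returned so the user can tune how strongly the
# operation is applied. The operation set is an assumption.

DEGREE_OPERATIONS = {"brighten", "darken", "blur", "sharpen"}

def handle_command(natural_language_input: str):
    """Return either a one-shot action or a degree-control descriptor."""
    verb = natural_language_input.lower().split()[0]
    if verb in DEGREE_OPERATIONS:
        # Degree-based: surface a UI control instead of a fixed edit.
        return {"control": "slider", "operation": verb,
                "min": 0, "max": 100, "value": 50}
    return {"action": verb}

print(handle_command("brighten the sky"))
# {'control': 'slider', 'operation': 'brighten', 'min': 0, 'max': 100, 'value': 50}
```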
-
Patent number: 9928836
Abstract: Natural language input processing utilizing grammar templates is described. In one or more implementations, a natural language input indicating an operation to be performed is parsed into at least one part-of-speech, a grammar template corresponding to the part-of-speech is located, an arbitrary term in the part-of-speech is detected based on the located grammar template, a term related to the arbitrary term and describing a modification for the operation is determined based on the sentence expression of the grammar template, and the indicated operation is performed with the described modification.
Type: Grant
Filed: July 13, 2016
Date of Patent: March 27, 2018
Assignee: ADOBE SYSTEMS INCORPORATED
Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala
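The grammar-template flow in this abstract can be sketched as: match the input against a template keyed by its leading verb, capture the arbitrary term in the template's open slot, then look up the modification that term describes. The templates and vocabulary below are illustrative assumptions, not from the patent:

```python
import re

# Hypothetical sketch of the grammar-template flow: a template keyed by
# the leading verb matches the sentence, its open slot captures an
# arbitrary term, and a lookup maps that term to the modification to
# apply to the operation. Templates and vocabulary are assumptions.

TEMPLATES = {
    "make": re.compile(r"make (?:the|this) image (?P<term>\w+)"),
}

MODIFIERS = {  # arbitrary term -> (parameter, modification)
    "brighter": ("brightness", +20),
    "darker": ("brightness", -20),
}

def interpret(command: str):
    for verb, pattern in TEMPLATES.items():
        match = pattern.match(command.lower())
        if match:
            term = match.group("term")      # the arbitrary term
            return MODIFIERS.get(term)      # the related modification
    return None  # no template matched

print(interpret("Make the image brighter"))  # ('brightness', 20)
```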
-
Publication number: 20160321242
Abstract: Natural language input processing utilizing grammar templates is described. In one or more implementations, a natural language input indicating an operation to be performed is parsed into at least one part-of-speech, a grammar template corresponding to the part-of-speech is located, an arbitrary term in the part-of-speech is detected based on the located grammar template, a term related to the arbitrary term and describing a modification for the operation is determined based on the sentence expression of the grammar template, and the indicated operation is performed with the described modification.
Type: Application
Filed: July 13, 2016
Publication date: November 3, 2016
Applicant: Adobe Systems Incorporated
Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala
-
Patent number: 9436382
Abstract: Natural language image editing techniques are described. In one or more implementations, a natural language input is converted from audio data using a speech-to-text engine. A gesture is recognized from one or more touch inputs detected using one or more touch sensors. Performance is then initiated of an operation identified from a combination of the natural language input and the recognized gesture.Type: Grant
Filed: November 21, 2012
Date of Patent: September 6, 2016
Assignee: Adobe Systems Incorporated
Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala
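The combination step in this abstract — speech supplying the operation, a touch gesture supplying the target — can be sketched in a few lines. The function names and gesture shape are illustrative assumptions, not from the patent:

```python
# Hypothetical sketch: a spoken command (already converted to text by a
# speech-to-text engine) is fused with a recognized touch gesture to
# identify one operation: speech supplies the verb, the gesture supplies
# the target region. All names are assumptions.

def identify_operation(transcript: str, gesture: dict):
    verb = transcript.lower().split()[0]   # e.g. "blur"
    region = gesture.get("region")         # e.g. the circled area
    return {"operation": verb, "region": region}

op = identify_operation("blur this part",
                        {"type": "circle", "region": (10, 10, 80, 80)})
print(op)  # {'operation': 'blur', 'region': (10, 10, 80, 80)}
```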
-
Patent number: 9412366
Abstract: Natural language image spatial and tonal localization techniques are described. In one or more implementations, a natural language input is processed to determine spatial and tonal localization of one or more image editing operations specified by the natural language input. Performance is initiated of the one or more image editing operations on image data using the determined spatial and tonal localization.
Type: Grant
Filed: November 21, 2012
Date of Patent: August 9, 2016
Assignee: Adobe Systems Incorporated
Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala
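Spatial localization (where in the frame) and tonal localization (which luminance range) can be recovered from a command by scanning for phrases from two small vocabularies. A minimal sketch, with vocabularies that are assumptions rather than the patent's method:

```python
# Hypothetical sketch: scan the command for a spatial phrase and a tonal
# phrase, then restrict the edit to both. The vocabularies and the 8-bit
# tone ranges are illustrative assumptions, not from the patent.

SPATIAL = {"upper left", "upper right", "lower left", "lower right", "center"}
TONAL = {"shadows": (0, 85), "midtones": (86, 170), "highlights": (171, 255)}

def localize(command: str):
    text = command.lower()
    region = next((s for s in SPATIAL if s in text), None)
    tones = next((rng for word, rng in TONAL.items() if word in text), None)
    return {"region": region, "tone_range": tones}

print(localize("brighten the shadows in the upper left"))
# {'region': 'upper left', 'tone_range': (0, 85)}
```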
-
Patent number: 9141335
Abstract: Natural language image tags are described. In one or more implementations, at least a portion of an image displayed by a display device is defined based on a gesture. The gesture is identified from one or more touch inputs detected using touchscreen functionality of the display device. Text received in a natural language input is located and used to tag the portion of the image using one or more items of the text received in the natural language input.
Type: Grant
Filed: November 21, 2012
Date of Patent: September 22, 2015
Assignee: Adobe Systems Incorporated
Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala
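The tagging flow in this abstract — a gesture defines an image region, and items of text from a natural language input become tags on that region — can be sketched as follows. The tiny noun list stands in for a real part-of-speech tagger and is an assumption:

```python
# Hypothetical sketch: a touch gesture defines a region of the image,
# and nouns pulled from the natural language input are attached to that
# region as tags. KNOWN_NOUNS is an illustrative stand-in for a real
# tagger, not from the patent.

KNOWN_NOUNS = {"dog", "sky", "tree", "car"}

def tag_region(region: tuple, natural_language_input: str):
    words = natural_language_input.lower().split()
    tags = [w for w in words if w in KNOWN_NOUNS]
    return {"region": region, "tags": tags}

print(tag_region((0, 0, 120, 90), "that is a dog under the tree"))
# {'region': (0, 0, 120, 90), 'tags': ['dog', 'tree']}
```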
-
Publication number: 20140081625
Abstract: Natural language image spatial and tonal localization techniques are described. In one or more implementations, a natural language input is processed to determine spatial and tonal localization of one or more image editing operations specified by the natural language input. Performance is initiated of the one or more image editing operations on image data using the determined spatial and tonal localization.
Type: Application
Filed: November 21, 2012
Publication date: March 20, 2014
Applicant: Adobe Systems Incorporated
Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala
-
Publication number: 20140078076
Abstract: Natural language image tags are described. In one or more implementations, at least a portion of an image displayed by a display device is defined based on a gesture. The gesture is identified from one or more touch inputs detected using touchscreen functionality of the display device. Text received in a natural language input is located and used to tag the portion of the image using one or more items of the text received in the natural language input.
Type: Application
Filed: November 21, 2012
Publication date: March 20, 2014
Applicant: ADOBE SYSTEMS INCORPORATED
Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala
-
Publication number: 20140078075
Abstract: Natural language image editing techniques are described. In one or more implementations, a natural language input is converted from audio data using a speech-to-text engine. A gesture is recognized from one or more touch inputs detected using one or more touch sensors. Performance is then initiated of an operation identified from a combination of the natural language input and the recognized gesture.
Type: Application
Filed: November 21, 2012
Publication date: March 20, 2014
Applicant: ADOBE SYSTEMS INCORPORATED
Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala
-
Publication number: 20140082500
Abstract: Natural language and user interface control techniques are described. In one or more implementations, a natural language input is received that is indicative of an operation to be performed by one or more modules of a computing device. Responsive to determining that the operation is associated with a degree to which the operation is performable, a user interface control is output that is manipulable by a user to control the degree to which the operation is to be performed.
Type: Application
Filed: November 21, 2012
Publication date: March 20, 2014
Applicant: ADOBE SYSTEMS INCORPORATED
Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala