Patents by Inventor Duncan Lewis
Duncan Lewis has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11848000
Abstract: Methods, systems, computer program products and data structures are described which allow for efficient correction of a transcription output of an automatic speech recognition system by a human proofreader. A method comprises receiving a voice input from a user; determining a transcription of the voice input; providing the transcription of the voice input; receiving a text input from the user indicating a revision to the transcription; determining how to revise the transcription in accordance with the text input; and revising the transcription of the voice input in accordance with the text input. A general or specialized language model, an acoustical language model, a character language model, a gaze tracker, and/or a stylus may be used to determine how to revise the transcription in accordance with the text input.
Type: Grant
Filed: December 12, 2019
Date of Patent: December 19, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventor: William Duncan Lewis
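The revision step in the abstract above (matching a typed correction against the transcription and rewriting the affected span) can be sketched as follows. This is an illustrative stand-in only: a character-similarity score replaces the language models, gaze tracker, and stylus signals the patent names.

```python
import difflib

def revise_transcription(transcription: str, correction: str) -> str:
    """Replace the transcription span that best matches the typed
    correction. Character-level similarity stands in for the
    language-model scoring described in the abstract."""
    words = transcription.split()
    n = len(correction.split())
    best_score, best_span = -1.0, (0, 0)
    # Consider candidate spans close in length to the correction.
    for size in range(max(1, n - 1), n + 2):
        for start in range(0, len(words) - size + 1):
            candidate = " ".join(words[start:start + size])
            score = difflib.SequenceMatcher(
                None, candidate.lower(), correction.lower()).ratio()
            if score > best_score:
                best_score, best_span = score, (start, start + size)
    s, e = best_span
    return " ".join(words[:s] + [correction] + words[e:])
```

For example, typing "weather" against the output "the whether today is sunny" would replace the misrecognized word in place.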
-
Publication number: 20230351002
Abstract: The present disclosure generally relates to visually varying an image using parallax image layers, and more specifically, relates to visually varying presentation of an access right displayed on a mobile device to enhance verification of access to resources. The variation of multiple layers of an image may be based on sensor data detected at the mobile device.
Type: Application
Filed: June 23, 2023
Publication date: November 2, 2023
Applicant: Live Nation Entertainment, Inc.
Inventors: Adit Shukla, Duncan Lewis, Patrick Jackson
-
Patent number: 11727103
Abstract: The present disclosure generally relates to visually varying an image using parallax image layers, and more specifically, relates to visually varying presentation of an access right displayed on a mobile device to enhance verification of access to resources. The variation of multiple layers of an image may be based on sensor data detected at the mobile device.
Type: Grant
Filed: July 6, 2021
Date of Patent: August 15, 2023
Assignee: Live Nation Entertainment, Inc.
Inventors: Adit Shukla, Duncan Lewis, Patrick Jackson
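The parallax idea in the abstract above can be sketched in a few lines: device-tilt sensor readings are mapped to per-layer pixel offsets, with "deeper" layers shifting less than "nearer" ones. The depth model and `gain` parameter here are invented for illustration, not taken from the patent.

```python
import math

def layer_offsets(tilt_x: float, tilt_y: float, depths, gain: float = 40.0):
    """Map a device-tilt reading (radians) to a 2-D pixel offset for
    each image layer. Larger depth values sit further away and shift
    less, producing the parallax cue."""
    offsets = []
    for depth in depths:
        scale = gain / (1.0 + depth)          # nearer layers move more
        offsets.append((round(math.sin(tilt_x) * scale, 2),
                        round(math.sin(tilt_y) * scale, 2)))
    return offsets
```

A renderer would recompute these offsets on every sensor update and translate each layer accordingly, which is hard to reproduce in a static screenshot of the access right.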
-
Publication number: 20220375463
Abstract: In non-limiting examples of the present disclosure, systems, methods and devices for integrating speech-to-text transcription in a productivity application are presented. A request to access a real-time speech-to-text transcription of an audio signal that is being received by a second device is sent by a first device. The real-time speech-to-text transcription may be surfaced in a transcription pane of a productivity application on the first device. A request to translate the transcription to a different language may be received. The transcription may be translated in real-time and surfaced in the transcription pane. A selection of a word in the surfaced transcription may be received. A request to drag the word from the transcription pane and drop it in a window in the productivity application outside of the transcription pane may be received. The word may be surfaced in the window in the productivity application outside of the transcription pane.
Type: Application
Filed: August 2, 2022
Publication date: November 24, 2022
Inventors: Dana Minh Nguyen, Rohail Mustafa Syed, Alisa Marilyn Bacon, William Duncan Lewis, Michael Tholfsen, Carly Larsson
-
Patent number: 11404049
Abstract: In non-limiting examples of the present disclosure, systems, methods and devices for integrating speech-to-text transcription in a productivity application are presented. A request to access a real-time speech-to-text transcription of an audio signal that is being received by a second device is sent by a first device. The real-time speech-to-text transcription may be surfaced in a transcription pane of a productivity application on the first device. A request to translate the transcription to a different language may be received. The transcription may be translated in real-time and surfaced in the transcription pane. A selection of a word in the surfaced transcription may be received. A request to drag the word from the transcription pane and drop it in a window in the productivity application outside of the transcription pane may be received. The word may be surfaced in the window in the productivity application outside of the transcription pane.
Type: Grant
Filed: December 9, 2019
Date of Patent: August 2, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Dana Minh Nguyen, Rohail Mustafa Syed, Alisa Marilyn Bacon, William Duncan Lewis, Michael Tholfsen, Carly Larsson
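The flow in the abstract above (a pane receives live transcript segments, optionally translates them, and lets a selected word be dropped into the host document) can be modeled minimally as below. The class name, the `translate` callable, and the list-based "document" are all invented for illustration.

```python
class TranscriptionPane:
    """Toy model of the described flow: receive live transcript
    segments, optionally translate them, and let a selected word be
    'dropped' into the host document outside the pane."""

    def __init__(self, translate=None):
        self.segments = []
        self.translate = translate          # callable str -> str, or None

    def receive(self, segment: str) -> str:
        """Surface a new segment, translating it first if requested."""
        text = self.translate(segment) if self.translate else segment
        self.segments.append(text)
        return text

    def drag_and_drop(self, segment_index: int, word_index: int,
                      document: list) -> str:
        """Copy one selected word from the pane into the document."""
        word = self.segments[segment_index].split()[word_index]
        document.append(word)
        return word
```

A real implementation would stream segments from the second device and render into the productivity application's UI; the data flow, however, is the same.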
-
Publication number: 20210374224
Abstract: The present disclosure generally relates to visually varying an image using parallax image layers, and more specifically, relates to visually varying presentation of an access right displayed on a mobile device to enhance verification of access to resources. The variation of multiple layers of an image may be based on sensor data detected at the mobile device.
Type: Application
Filed: July 6, 2021
Publication date: December 2, 2021
Inventors: Adit Shukla, Duncan Lewis, Patrick Jackson
-
Patent number: 11086982
Abstract: The present disclosure generally relates to visually varying an image using parallax image layers, and more specifically, relates to visually varying presentation of an access right displayed on a mobile device to enhance verification of access to resources. The variation of multiple layers of an image may be based on sensor data detected at the mobile device.
Type: Grant
Filed: September 30, 2019
Date of Patent: August 10, 2021
Assignee: Live Nation Entertainment, Inc.
Inventors: Adit Shukla, Duncan Lewis, Patrick Jackson
-
Publication number: 20210174787
Abstract: In non-limiting examples of the present disclosure, systems, methods and devices for integrating speech-to-text transcription in a productivity application are presented. A request to access a real-time speech-to-text transcription of an audio signal that is being received by a second device is sent by a first device. The real-time speech-to-text transcription may be surfaced in a transcription pane of a productivity application on the first device. A request to translate the transcription to a different language may be received. The transcription may be translated in real-time and surfaced in the transcription pane. A selection of a word in the surfaced transcription may be received. A request to drag the word from the transcription pane and drop it in a window in the productivity application outside of the transcription pane may be received. The word may be surfaced in the window in the productivity application outside of the transcription pane.
Type: Application
Filed: December 9, 2019
Publication date: June 10, 2021
Inventors: Dana Minh Nguyen, Rohail Mustafa Syed, Alisa Marilyn Bacon, William Duncan Lewis, Michael Tholfsen, Carly Larsson
-
Publication number: 20210074277
Abstract: Methods, systems, computer program products and data structures are described which allow for efficient correction of a transcription output of an automatic speech recognition system by a human proofreader. A method comprises receiving a voice input from a user; determining a transcription of the voice input; providing the transcription of the voice input; receiving a text input from the user indicating a revision to the transcription; determining how to revise the transcription in accordance with the text input; and revising the transcription of the voice input in accordance with the text input. A general or specialized language model, an acoustical language model, a character language model, a gaze tracker, and/or a stylus may be used to determine how to revise the transcription in accordance with the text input.
Type: Application
Filed: December 12, 2019
Publication date: March 11, 2021
Applicant: Microsoft Technology Licensing, LLC
Inventor: William Duncan Lewis
-
Publication number: 20200151316
Abstract: The present disclosure generally relates to visually varying an image using parallax image layers, and more specifically, relates to visually varying presentation of an access right displayed on a mobile device to enhance verification of access to resources. The variation of multiple layers of an image may be based on sensor data detected at the mobile device.
Type: Application
Filed: September 30, 2019
Publication date: May 14, 2020
Inventors: Adit Shukla, Duncan Lewis, Patrick Jackson
-
Patent number: 10430576
Abstract: The present disclosure generally relates to visually varying an image using parallax image layers, and more specifically, relates to visually varying presentation of an access right displayed on a mobile device to enhance verification of access to resources. The variation of multiple layers of an image may be based on sensor data detected at the mobile device.
Type: Grant
Filed: October 26, 2018
Date of Patent: October 1, 2019
Assignee: Live Nation Entertainment, Inc.
Inventors: Adit Shukla, Duncan Lewis, Patrick Jackson
-
Publication number: 20190114411
Abstract: The present disclosure generally relates to visually varying an image using parallax image layers, and more specifically, relates to visually varying presentation of an access right displayed on a mobile device to enhance verification of access to resources. The variation of multiple layers of an image may be based on sensor data detected at the mobile device.
Type: Application
Filed: October 26, 2018
Publication date: April 18, 2019
Inventors: Adit Shukla, Duncan Lewis, Patrick Jackson
-
Patent number: 10191903
Abstract: A user context generator determines one or both of a location of a user and contextual information for the user. The contextual information is indicative of content of interest to the user. A custom content generator engine generates customized translated content for the user. Generating the customized translated content includes selecting, from translated content stored in a database, based on the one or both of the determined location of the user and the determined contextual information for the user, translated content to be presented to the user. The customized translated content includes a set of phrases in a source language and corresponding translations of phrases, in the set of phrases, from the source language to a target language. The selected translated content is displayed to the user, such that the user is provided with translated content of interest to the user.
Type: Grant
Filed: September 30, 2016
Date of Patent: January 29, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: William Duncan Lewis, Vishal Chandulal Chowdhary, Tanvi Saumil Surti
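The selection step in the abstract above can be sketched as a filter over a store of translated phrase pairs, keyed by the user's location and interest tags. The dict schema (`location`, `tags`, `source`, `target`) is an assumed illustration, not the patent's data model.

```python
def select_translated_content(entries, location=None, context=None):
    """Select translated phrase pairs matching the user's location
    and/or contextual interest, mirroring the selection described in
    the abstract. Returns (source, target) phrase pairs."""
    selected = []
    for entry in entries:
        if location is not None and entry["location"] != location:
            continue
        if context is not None and context not in entry["tags"]:
            continue
        selected.append((entry["source"], entry["target"]))
    return selected
```

For instance, a user detected at an airport would be shown travel phrases in their target language rather than the whole phrasebook.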
-
Patent number: 10146928
Abstract: The present disclosure generally relates to visually varying an image using parallax image layers, and more specifically, relates to visually varying presentation of an access right displayed on a mobile device to enhance verification of access to resources. The variation of multiple layers of an image may be based on sensor data detected at the mobile device.
Type: Grant
Filed: February 12, 2018
Date of Patent: December 4, 2018
Assignee: Live Nation Entertainment, Inc.
Inventors: Adit Shukla, Duncan Lewis, Patrick Jackson
-
Publication number: 20180239887
Abstract: The present disclosure generally relates to visually varying an image using parallax image layers, and more specifically, relates to visually varying presentation of an access right displayed on a mobile device to enhance verification of access to resources. The variation of multiple layers of an image may be based on sensor data detected at the mobile device.
Type: Application
Filed: February 12, 2018
Publication date: August 23, 2018
Inventors: Adit Shukla, Duncan Lewis, Patrick Jackson
-
Publication number: 20180095949
Abstract: A user context generator determines one or both of a location of a user and contextual information for the user. The contextual information is indicative of content of interest to the user. A custom content generator engine generates customized translated content for the user. Generating the customized translated content includes selecting, from translated content stored in a database, based on the one or both of the determined location of the user and the determined contextual information for the user, translated content to be presented to the user. The customized translated content includes a set of phrases in a source language and corresponding translations of phrases, in the set of phrases, from the source language to a target language. The selected translated content is displayed to the user, such that the user is provided with translated content of interest to the user.
Type: Application
Filed: September 30, 2016
Publication date: April 5, 2018
Inventors: William Duncan Lewis, Vishal Chandulal Chowdhary, Tanvi Saumil Surti
-
Patent number: 9892252
Abstract: The present disclosure generally relates to visually varying an image using parallax image layers, and more specifically, relates to visually varying presentation of an access right displayed on a mobile device to enhance verification of access to resources. The variation of multiple layers of an image may be based on sensor data detected at the mobile device.
Type: Grant
Filed: July 20, 2017
Date of Patent: February 13, 2018
Assignee: Live Nation Entertainment, Inc.
Inventors: Adit Shukla, Duncan Lewis, Patrick Jackson
-
Publication number: 20180025147
Abstract: The present disclosure generally relates to visually varying an image using parallax image layers, and more specifically, relates to visually varying presentation of an access right displayed on a mobile device to enhance verification of access to resources. The variation of multiple layers of an image may be based on sensor data detected at the mobile device.
Type: Application
Filed: July 20, 2017
Publication date: January 25, 2018
Inventors: Adit Shukla, Duncan Lewis, Patrick Jackson
-
Publication number: 20130103695
Abstract: Various technologies described herein pertain to detecting machine translated content. Documents in a document pair are mutual lingual translations of each other. Further, document level features of the documents in the document pair can be identified. The document level features can correlate with translation quality between the documents in the document pair. Moreover, statistical classification can be used to detect whether the document pair is generated through machine translation based at least in part upon the document level features. Further, a first document can be a machine translation of a second document in the document pair or a disparate document when generated through machine translation.
Type: Application
Filed: October 21, 2011
Publication date: April 25, 2013
Applicant: Microsoft Corporation
Inventors: Spencer Taylor Rarrick, William Duncan Lewis, Christopher Brian Quirk, Anthony Aue
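The document-level features the abstract above mentions can be sketched as below; the concrete features chosen here (length ratio and token overlap) are illustrative stand-ins, not the feature set of the application. In a full system these features would feed a statistical classifier such as logistic regression.

```python
def document_pair_features(doc_a: str, doc_b: str) -> dict:
    """Compute simple document-level features for a pair of documents
    that are claimed to be mutual translations. Illustrative only."""
    a, b = doc_a.split(), doc_b.split()
    # Jaccard overlap of token sets: literal copies and untranslated
    # passthrough tend to inflate this for machine-translated pairs.
    overlap = len(set(a) & set(b)) / max(1, len(set(a) | set(b)))
    # Length ratio in [0, 1]: human translations of comparable length
    # score near 1, badly mismatched pairs score low.
    length_ratio = min(len(a), len(b)) / max(1, max(len(a), len(b)))
    return {"length_ratio": length_ratio, "token_overlap": overlap}
```

One motivation for such detection is corpus hygiene: web-mined parallel text that is itself machine translated degrades the MT systems trained on it.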
-
Publication number: 20130018650
Abstract: An intelligent selection system selects language model training data to obtain in-domain training datasets. The selection is accomplished by estimating a cross-entropy difference for each candidate text segment from a generic language dataset. The cross-entropy difference is a difference between the cross-entropy of the text segment according to the in-domain language model and the cross-entropy of the text segment according to a language model trained on a random sample of the data source from which the text segment is drawn. If the difference satisfies a threshold condition, the text segment is added as an in-domain text segment to a training dataset.
Type: Application
Filed: February 1, 2012
Publication date: January 17, 2013
Applicant: Microsoft Corporation
Inventors: Robert Carter Moore, William Duncan Lewis
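The cross-entropy-difference selection the abstract above describes can be sketched as follows, with add-one-smoothed unigram models standing in for the n-gram language models used in practice; the corpora below are invented for illustration.

```python
import math
from collections import Counter

def unigram_model(corpus):
    """Unigram probabilities with add-one smoothing over the corpus."""
    counts = Counter(w for line in corpus for w in line.split())
    total, vocab = sum(counts.values()), len(counts) + 1
    return lambda w: (counts[w] + 1) / (total + vocab)

def cross_entropy(model, segment):
    """Per-word cross-entropy (bits) of a segment under a model."""
    words = segment.split()
    return -sum(math.log2(model(w)) for w in words) / max(1, len(words))

def select_in_domain(candidates, in_domain, general_sample, threshold=0.0):
    """Keep a candidate segment when its cross-entropy under the
    in-domain model, minus its cross-entropy under a model trained on a
    random sample of the general source, falls below the threshold."""
    p_in = unigram_model(in_domain)
    p_gen = unigram_model(general_sample)
    return [s for s in candidates
            if cross_entropy(p_in, s) - cross_entropy(p_gen, s) < threshold]
```

Scoring by the *difference* rather than by in-domain cross-entropy alone avoids favoring segments that are merely short or common; a segment must look more like the target domain than like its own source to be selected.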