Patents by Inventor Ali M. Vassigh
Ali M. Vassigh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11545148
Abstract: Provided herein are system, apparatus, article of manufacture, method and/or computer program product embodiments, and/or combinations and sub-combinations thereof, for enabling Do Not Disturb functionality in voice responsive devices. An example embodiment operates by: enabling a user to configure Do Not Disturb settings for a voice responsive device; while (a) the Do Not Disturb functionality is activated for the voice responsive device, and (b) within a Do Not Disturb time period specified by the Do Not Disturb settings: disabling one or more microphones; receiving an unambiguous trigger; responsive to receiving the unambiguous trigger, enabling the microphone(s); receiving a voice command; and processing the voice command. An example of an unambiguous trigger may be the user pressing a talk button (either a physical or digital button) on a remote control associated with the voice responsive device.
Type: Grant
Filed: March 4, 2020
Date of Patent: January 3, 2023
Assignee: Roku, Inc.
Inventors: Ali M. Vassigh, Shubhada Hebbar, Christopher James Tegethoff
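The claimed Do Not Disturb flow can be read as a small state machine: microphones stay disabled during the configured period until an unambiguous trigger re-enables them. A minimal sketch, with all class and method names hypothetical (the abstract does not disclose an implementation):

```python
from datetime import time

class DoNotDisturbDevice:
    """Illustrative sketch of the claimed Do Not Disturb flow."""

    def __init__(self, dnd_enabled, dnd_start, dnd_end):
        self.dnd_enabled = dnd_enabled
        self.dnd_start = dnd_start        # start of the DND time period
        self.dnd_end = dnd_end            # end of the DND time period
        self.microphones_enabled = True

    def in_dnd_period(self, now):
        return self.dnd_start <= now <= self.dnd_end

    def tick(self, now):
        # While DND is activated and within the configured period,
        # the microphone(s) are disabled.
        if self.dnd_enabled and self.in_dnd_period(now):
            self.microphones_enabled = False

    def on_talk_button(self):
        # An unambiguous trigger (e.g. the remote's talk button)
        # re-enables the microphone(s).
        self.microphones_enabled = True

    def handle_voice_command(self, command):
        if not self.microphones_enabled:
            return None                   # command is not heard
        return f"processed: {command}"
```

With DND active inside the period, voice commands are ignored until the talk button is pressed, after which a command is received and processed.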
-
Publication number: 20200402504
Abstract: Provided herein are system, apparatus, article of manufacture, method and/or computer program product embodiments, and/or combinations and sub-combinations thereof, for enabling Do Not Disturb functionality in voice responsive devices. An example embodiment operates by: enabling a user to configure Do Not Disturb settings for a voice responsive device; while (a) the Do Not Disturb functionality is activated for the voice responsive device, and (b) within a Do Not Disturb time period specified by the Do Not Disturb settings: disabling one or more microphones; receiving an unambiguous trigger; responsive to receiving the unambiguous trigger, enabling the microphone(s); receiving a voice command; and processing the voice command. An example of an unambiguous trigger may be the user pressing a talk button (either a physical or digital button) on a remote control associated with the voice responsive device.
Type: Application
Filed: March 4, 2020
Publication date: December 24, 2020
Inventors: Ali M. Vassigh, Shubhada Hebbar, Christopher James Tegethoff
-
Patent number: 10534438
Abstract: A multimedia entertainment system combines both gestures and voice commands to provide an enhanced control scheme. A user's body position or motion may be recognized as a gesture, and may be used to provide context to recognize user generated sounds, such as speech input. Likewise, speech input may be recognized as a voice command, and may be used to provide context to recognize a body position or motion as a gesture. Weights may be assigned to the inputs to facilitate processing. When a gesture is recognized, a limited set of voice commands associated with the recognized gesture are loaded for use. Further, additional sets of voice commands may be structured in a hierarchical manner such that speaking a voice command from one set of voice commands leads to the system loading a next set of voice commands.
Type: Grant
Filed: April 28, 2017
Date of Patent: January 14, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Christian Klein, Ali M. Vassigh, Jason S. Flaks, Vanessa Larco, Thomas M. Soemo
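The hierarchical command-loading described in the abstract can be sketched as a lookup table: a recognized gesture activates a limited command set, and some commands chain to a next set. All gesture names, commands, and the table itself are illustrative assumptions, not disclosed by the patent:

```python
# Each key is a command set; a command maps to the next set it loads
# (or None if it is terminal). Entirely hypothetical example data.
COMMAND_HIERARCHY = {
    "wave": {"open menu": "menu", "pause": None},
    "menu": {"play movie": None, "settings": "settings"},
    "settings": {"volume up": None, "volume down": None},
}

class MultimodalController:
    def __init__(self):
        self.active_commands = {}

    def on_gesture(self, gesture):
        # A recognized gesture loads only its limited set of voice commands.
        self.active_commands = COMMAND_HIERARCHY.get(gesture, {})

    def on_voice(self, phrase):
        if phrase not in self.active_commands:
            return None                    # not in the active set: rejected
        next_set = self.active_commands[phrase]
        if next_set is not None:
            # Speaking this command loads the next set in the hierarchy.
            self.active_commands = COMMAND_HIERARCHY[next_set]
        return phrase
```

Before any gesture is recognized, no voice commands are active, which is one plausible reading of how the gesture provides context for speech.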
-
Patent number: 9875002
Abstract: A method includes displaying a left panel and a central panel, where the left panel is contiguous to the central panel and oriented on a left side of the central panel. A first selection on the left panel is detected. Content is displayed on the central panel responsive to the first selection. A second selection from the content on the central panel is detected. The central panel is displayed together with a right panel responsive to the second selection. The right panel is contiguous with the central panel and oriented on a right side of the central panel. The left panel is removed in response to the second selection. A third selection on the right panel is detected. The left panel displays choices. The central panel includes different content associated with one of the choices. The right panel includes functions for operation on selected content.
Type: Grant
Filed: February 26, 2013
Date of Patent: January 23, 2018
Assignee: Roku, Inc.
Inventors: Jana Kovacevic, Ali M. Vassigh, Jeffrey Paul Anderson, Vincent Clement Da Silva
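The claimed three-panel sequence (choices on the left, content in the center, functions on the right, with the left panel removed on the second selection) can be sketched as simple state transitions. Class, method, and content names here are hypothetical:

```python
class ThreePanelUI:
    """Illustrative sketch of the claimed left/center/right panel method."""

    def __init__(self, choices):
        self.left = choices        # left panel displays choices
        self.center = None
        self.right = None

    def select_left(self, choice):
        # First selection: display associated content in the central panel.
        self.center = f"content for {choice}"

    def select_center(self, item):
        # Second selection: display the right panel (functions for operation
        # on the selected content) and remove the left panel.
        self.right = ["play", "delete", "share"]
        self.left = None

    def select_right(self, function):
        # Third selection: apply the chosen function to the selected content.
        return f"{function} applied"
```

This mirrors the first/second/third selection sequence in the abstract, one method call per detected selection.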
-
Publication number: 20170228036
Abstract: A multimedia entertainment system combines both gestures and voice commands to provide an enhanced control scheme. A user's body position or motion may be recognized as a gesture, and may be used to provide context to recognize user generated sounds, such as speech input. Likewise, speech input may be recognized as a voice command, and may be used to provide context to recognize a body position or motion as a gesture. Weights may be assigned to the inputs to facilitate processing. When a gesture is recognized, a limited set of voice commands associated with the recognized gesture are loaded for use. Further, additional sets of voice commands may be structured in a hierarchical manner such that speaking a voice command from one set of voice commands leads to the system loading a next set of voice commands.
Type: Application
Filed: April 28, 2017
Publication date: August 10, 2017
Applicant: Microsoft Technology Licensing, LLC
Inventors: Christian Klein, Ali M. Vassigh, Jason S. Flaks, Vanessa Larco, Thomas M. Soemo
-
Patent number: 8954330
Abstract: The subject disclosure is directed towards detecting symbolic activity within a given environment using a context-dependent grammar. In response to receiving sets of input data corresponding to one or more input modalities, a context-aware interactive system processes a model associated with interpreting the symbolic activity using context data for the given environment. Based on the model, related sets of input data are determined. The context-aware interactive system uses the input data to interpret user intent with respect to the input and thereby identify one or more commands for a target output mechanism.
Type: Grant
Filed: November 28, 2011
Date of Patent: February 10, 2015
Assignee: Microsoft Corporation
Inventors: Michael F. Koenig, Oscar Enrique Murillo, Ira Lynn Snyder, Jr., Andrew D. Wilson, Kenneth P. Hinckley, Ali M. Vassigh
-
Publication number: 20140245222
Abstract: A method includes displaying a left panel and a central panel, where the left panel is contiguous to the central panel and oriented on a left side of the central panel. A first selection on the left panel is detected. Content is displayed on the central panel responsive to the first selection. A second selection from the content on the central panel is detected. The central panel is displayed together with a right panel responsive to the second selection. The right panel is contiguous with the central panel and oriented on a right side of the central panel. The left panel is removed in response to the second selection. A third selection on the right panel is detected. The left panel displays choices. The central panel includes different content associated with one of the choices. The right panel includes functions for operation on selected content.
Type: Application
Filed: February 26, 2013
Publication date: August 28, 2014
Applicant: Roku, Inc.
Inventors: Jana Kovacevic, Ali M. Vassigh
-
Patent number: 8676581
Abstract: Embodiments are disclosed that relate to the use of identity information to help avoid the occurrence of false positive speech recognition events in a speech recognition system. One embodiment provides a method comprising receiving speech recognition data comprising a recognized speech segment, acoustic locational data related to a location of origin of the recognized speech segment as determined via signals from the microphone array, and confidence data comprising a recognition confidence value, and also receiving image data comprising visual locational information related to a location of each person in an image. The acoustic locational data is compared to the visual locational data to determine whether the recognized speech segment originated from a person in the field of view of the image sensor, and the confidence data is adjusted depending on this determination.
Type: Grant
Filed: January 22, 2010
Date of Patent: March 18, 2014
Assignee: Microsoft Corporation
Inventors: Jason Flaks, Dax Hawkins, Christian Klein, Mitchell Stephen Dernis, Tommer Leyvand, Ali M. Vassigh, Duncan McKay
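The comparison step in this abstract, matching the acoustic direction of a speech segment against visually detected person locations and adjusting confidence accordingly, can be sketched as follows. The angular tolerance and the adjustment amounts are illustrative assumptions; the patent does not specify them:

```python
def adjust_confidence(confidence, acoustic_angle_deg, person_angles_deg,
                      tolerance_deg=10.0, boost=0.1, penalty=0.2):
    """Sketch: compare the acoustic location of origin of a recognized
    speech segment with visual person locations (here simplified to
    bearing angles) and adjust the recognition confidence value.
    All thresholds are hypothetical."""
    matched = any(abs(acoustic_angle_deg - angle) <= tolerance_deg
                  for angle in person_angles_deg)
    if matched:
        # Segment likely originated from a person in the field of view.
        return min(1.0, confidence + boost)
    # No visible person near the acoustic origin: likely a false positive.
    return max(0.0, confidence - penalty)
```

A downstream system could then threshold the adjusted confidence, so speech from an empty part of the room is less likely to trigger a command.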
-
Patent number: 8659658
Abstract: In a motion capture system having a depth camera, a physical interaction zone of a user is defined based on a size of the user and other factors. The zone is a volume in which the user performs hand gestures to provide inputs to an application. The shape and location of the zone can be customized for the user. The zone is anchored to the user so that the gestures can be performed from any location in the field of view. Also, the zone is kept between the user and the depth camera even as the user rotates his or her body so that the user is not facing the camera. A display provides feedback based on a mapping from a coordinate system of the zone to a coordinate system of the display. The user can move a cursor on the display or control an avatar.
Type: Grant
Filed: February 9, 2010
Date of Patent: February 25, 2014
Assignee: Microsoft Corporation
Inventors: Ali M. Vassigh, Christian Klein, Ernest L. Pennington
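The mapping from the zone's coordinate system to the display's coordinate system can be sketched as a normalize-and-scale step (shown in 2D for brevity; the patent describes a volume). Parameter names and the clamping behavior are illustrative assumptions:

```python
def zone_to_display(hand_xy, zone_origin, zone_size, display_size):
    """Sketch: map a hand position inside the user's interaction zone to
    display (cursor) coordinates. Because the zone is anchored to the
    user, zone_origin moves with them; the mapping itself is unchanged."""
    zx, zy = zone_origin
    zw, zh = zone_size
    dw, dh = display_size
    hx, hy = hand_xy
    # Normalize the hand position within the zone, clamped to [0, 1]
    # so positions just outside the zone pin the cursor to the edge.
    nx = min(max((hx - zx) / zw, 0.0), 1.0)
    ny = min(max((hy - zy) / zh, 0.0), 1.0)
    return nx * dw, ny * dh
```

For example, a hand at the zone's center maps to the center of the display regardless of where the user is standing in the field of view.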
-
Publication number: 20130283183
Abstract: A method is provided for personalizing the user interface of an appliance by displaying a standard interface template. The standard interface template is based on the detected biometric characteristics of a user and the demographic class matching the set of detected biometric characteristics.
Type: Application
Filed: May 29, 2013
Publication date: October 24, 2013
Inventors: John M. Knight, Ali M. Vassigh
-
Patent number: 8490157
Abstract: Within a surface computing environment, users are provided a seamless and intuitive manner of modifying security levels associated with information. If a modification is to be made, the user can perceive the modifications and the result of such modifications, such as on a display. When information is rendered within the surface computing environment and a condition changes, the user can quickly have that information concealed in order to mitigate unauthorized access to the information.
Type: Grant
Filed: February 26, 2009
Date of Patent: July 16, 2013
Assignee: Microsoft Corporation
Inventors: Peter B. Thompson, Ian M. Sands, Ali M. Vassigh, Eric I-Chao Chang
-
Publication number: 20130138424
Abstract: The subject disclosure is directed towards detecting symbolic activity within a given environment using a context-dependent grammar. In response to receiving sets of input data corresponding to one or more input modalities, a context-aware interactive system processes a model associated with interpreting the symbolic activity using context data for the given environment. Based on the model, related sets of input data are determined. The context-aware interactive system uses the input data to interpret user intent with respect to the input and thereby identify one or more commands for a target output mechanism.
Type: Application
Filed: November 28, 2011
Publication date: May 30, 2013
Applicant: Microsoft Corporation
Inventors: Michael F. Koenig, Oscar Enrique Murillo, Ira Lynn Snyder, Jr., Andrew D. Wilson, Kenneth P. Hinckley, Ali M. Vassigh
-
Patent number: 8296151
Abstract: A multimedia entertainment system combines both gestures and voice commands to provide an enhanced control scheme. A user's body position or motion may be recognized as a gesture, and may be used to provide context to recognize user generated sounds, such as speech input. Likewise, speech input may be recognized as a voice command, and may be used to provide context to recognize a body position or motion as a gesture. Weights may be assigned to the inputs to facilitate processing. When a gesture is recognized, a limited set of voice commands associated with the recognized gesture are loaded for use. Further, additional sets of voice commands may be structured in a hierarchical manner such that speaking a voice command from one set of voice commands leads to the system loading a next set of voice commands.
Type: Grant
Filed: June 18, 2010
Date of Patent: October 23, 2012
Assignee: Microsoft Corporation
Inventors: Christian Klein, Ali M. Vassigh, Jason S. Flaks, Vanessa Larco, Thomas M. Soemo
-
Publication number: 20120089392
Abstract: Speech recognition techniques are disclosed herein. In one embodiment, a novice mode is available such that when the user is unfamiliar with the speech recognition system, a voice user interface (VUI) may be provided to guide them. The VUI may display one or more speech commands that are presently available. The VUI may also provide feedback to train the user. After the user becomes more familiar with speech recognition, the user may enter speech commands without the aid of the novice mode. In this "experienced mode," the VUI need not be displayed. Therefore, the user interface is not cluttered.
Type: Application
Filed: October 7, 2010
Publication date: April 12, 2012
Applicant: Microsoft Corporation
Inventors: Vanessa Larco, Ali M. Vassigh, Alan T. Shen, Christian Klein, Thomas M. Soemo
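The novice/experienced distinction described here reduces to a single rendering decision: show the presently available commands as guidance, or show nothing to keep the interface uncluttered. A minimal sketch with hypothetical names:

```python
class VoiceUI:
    """Illustrative sketch of the novice-mode VUI described in the abstract."""

    def __init__(self, available_commands, novice=True):
        self.available_commands = available_commands
        self.novice = novice

    def render(self):
        # Novice mode: display the speech commands that are presently
        # available, guiding an unfamiliar user.
        if self.novice:
            return "Say: " + ", ".join(self.available_commands)
        # Experienced mode: the VUI need not be displayed at all.
        return ""
```

The set of available commands could itself change with context, so the novice display always reflects what the recognizer will currently accept.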
-
Publication number: 20110313768
Abstract: A multimedia entertainment system combines both gestures and voice commands to provide an enhanced control scheme. A user's body position or motion may be recognized as a gesture, and may be used to provide context to recognize user generated sounds, such as speech input. Likewise, speech input may be recognized as a voice command, and may be used to provide context to recognize a body position or motion as a gesture. Weights may be assigned to the inputs to facilitate processing. When a gesture is recognized, a limited set of voice commands associated with the recognized gesture are loaded for use. Further, additional sets of voice commands may be structured in a hierarchical manner such that speaking a voice command from one set of voice commands leads to the system loading a next set of voice commands.
Type: Application
Filed: June 18, 2010
Publication date: December 22, 2011
Inventors: Christian Klein, Ali M. Vassigh, Jason S. Flaks, Vanessa Larco, Thomas M. Soemo
-
Patent number: D686237
Type: Grant
Filed: December 19, 2011
Date of Patent: July 16, 2013
Assignee: Microsoft Corporation
Inventors: Salvador Alucema, Ali M. Vassigh, Edward M. Capuano, Jeff Fleischmann, David Gardner, Colin Riley
-
Patent number: D693838
Type: Grant
Filed: November 21, 2011
Date of Patent: November 19, 2013
Assignee: Microsoft Corporation
Inventors: David Gardner, Ali M. Vassigh
-
Patent number: D748131
Type: Grant
Filed: July 30, 2015
Date of Patent: January 26, 2016
Assignee: Roku, Inc.
Inventors: Jana Kovacevic, Ali M. Vassigh, Vincent Clement da Silva, Jeffrey Paul Anderson
-
Patent number: D748649
Type: Grant
Filed: March 4, 2013
Date of Patent: February 2, 2016
Assignee: Roku, Inc.
Inventors: Jana Kovacevic, Ali M. Vassigh, Vincent Clement da Silva, Jeffrey Paul Anderson
-
Patent number: D750120
Type: Grant
Filed: July 30, 2015
Date of Patent: February 23, 2016
Assignee: Roku, Inc.
Inventors: Jana Kovacevic, Ali M. Vassigh, Vincent Clement da Silva, Jeffrey Paul Anderson