Patents by Inventor Ravin Balakrishnan

Ravin Balakrishnan has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10409490
    Abstract: Assisting input from a keyboard is described. In an embodiment, a processor receives a plurality of key-presses from the keyboard comprising alphanumeric data for input to application software executed at the processor. The processor analyzes the plurality of key-presses to detect at least one predefined typing pattern, and, in response, controls a display device to display a representation of at least a portion of the keyboard in association with a user interface of the application software. In another embodiment, a computer device has a keyboard and at least one sensor arranged to monitor at least a subset of keys on the keyboard, and detect an object within a predefined distance of a selected key prior to activation of the selected key. The processor then controls the display device to display a representation of a portion of the keyboard comprising the selected key.
    Type: Grant
    Filed: February 27, 2017
    Date of Patent: September 10, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: James Scott, Shahram Izadi, Nicolas Villar, Ravin Balakrishnan
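The first embodiment above can be sketched as a small detector over a stream of key-presses. This is a minimal illustration, not the patent's implementation; the class name, the `"BACKSPACE"` key string, and the window/threshold values are all assumptions chosen for the example.

```python
from collections import deque

class TypingPatternDetector:
    """Watch a stream of key-presses and flag a predefined pattern
    (here, a burst of corrections) that would trigger display of an
    on-screen keyboard representation."""

    def __init__(self, window=20, backspace_threshold=5):
        self.recent = deque(maxlen=window)          # sliding window of key-presses
        self.backspace_threshold = backspace_threshold

    def on_key_press(self, key: str) -> bool:
        """Record one key-press; return True when the overlay should show."""
        self.recent.append(key)
        corrections = sum(1 for k in self.recent if k == "BACKSPACE")
        return corrections >= self.backspace_threshold
```

A real system would presumably match richer patterns (hesitation, hunt-and-peck rhythms) rather than a single correction count.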
  • Publication number: 20180165004
    Abstract: A method and system for interactive 3D surgical planning are provided. The method and system provide 3D visualisation and manipulation of at least one anatomical feature in response to intuitive user inputs, including gesture inputs. In aspects, fracture segmentation and reduction, screw placement and fitting, and plate placement and contouring in a virtual 3D environment are provided.
    Type: Application
    Filed: February 13, 2018
    Publication date: June 14, 2018
    Inventors: Richard Hurley, Rinat Abdrashitov, Karan Singh, Ravin Balakrishnan, James McCrae
  • Patent number: 9857915
    Abstract: Described herein is an apparatus that includes a curved display surface that has an interior and an exterior. The curved display surface is configured to display images thereon. The apparatus also includes an emitter that emits light through the interior of the curved display surface. A detector component analyzes light reflected from the curved display surface to detect a position on the curved display surface where a first member is in physical contact with the exterior of the curved display surface.
    Type: Grant
    Filed: May 19, 2008
    Date of Patent: January 2, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Hrvoje Benko, Andrew Wilson, Ravin Balakrishnan
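The detector component described above can be illustrated as a simple bright-blob centroid over an image of reflected light. This is a sketch under assumptions: the intensity grid, the threshold value, and the function name are illustrative, and a real detector would calibrate the mapping from image coordinates to positions on the curved surface.

```python
def detect_touch(reflection, threshold=200):
    """Scan an intensity image of light reflected back through the
    curved surface; return the centroid (row, col) of pixels bright
    enough to indicate a finger in contact, or None if no contact."""
    hits = [(r, c) for r, row in enumerate(reflection)
                   for c, v in enumerate(row) if v >= threshold]
    if not hits:
        return None
    rows = sum(r for r, _ in hits) / len(hits)
    cols = sum(c for _, c in hits) / len(hits)
    return (rows, cols)
```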
  • Patent number: 9665278
    Abstract: Assisting input from a keyboard is described. In an embodiment, a processor receives a plurality of key-presses from the keyboard comprising alphanumeric data for input to application software executed at the processor. The processor analyzes the plurality of key-presses to detect at least one predefined typing pattern, and, in response, controls a display device to display a representation of at least a portion of the keyboard in association with a user interface of the application software. In another embodiment, a computer device has a keyboard and at least one sensor arranged to monitor at least a subset of keys on the keyboard, and detect an object within a predefined distance of a selected key prior to activation of the selected key. The processor then controls the display device to display a representation of a portion of the keyboard comprising the selected key.
    Type: Grant
    Filed: February 26, 2010
    Date of Patent: May 30, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: James Scott, Shahram Izadi, Nicolas Villar, Ravin Balakrishnan
  • Patent number: 9459784
    Abstract: Touch interaction with a curved display (e.g., a sphere, a hemisphere, a cylinder, etc.) is facilitated by preserving a predetermined orientation for objects. In an example embodiment, a curved display is monitored to detect a touch input on an object. If a touch input on an object is detected based on the monitoring, then one or more locations of the touch input are determined. The object may be manipulated responsive to the determined one or more locations of the touch input. While manipulation of the object is permitted, a predetermined orientation is preserved.
    Type: Grant
    Filed: December 26, 2008
    Date of Patent: October 4, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Hrvoje Benko, Andrew D. Wilson, Billy Chen, Ravin Balakrishnan, Patrick M. Baudisch
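The orientation-preservation idea can be sketched with a little spherical geometry: wherever a dragged object lands on the sphere, its local "up" is recomputed to keep pointing toward the pole rather than inheriting rotation from the drag path. The math below is an illustrative stand-in, not the patent's formulation.

```python
import math

def preserved_orientation(lat_deg, lon_deg):
    """Return the unit tangent vector at (latitude, longitude), in
    degrees, that points toward the sphere's pole. Re-orienting a
    dragged object along this vector preserves its predetermined
    'upright' orientation at any position on the curved display."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    # Tangent direction of increasing latitude = local "toward the pole".
    return (-math.sin(lat) * math.cos(lon),
            -math.sin(lat) * math.sin(lon),
             math.cos(lat))
```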
  • Patent number: 9406340
    Abstract: A range of unified software authoring tools for creating a talking paper application for integration in an end user platform are described herein. The authoring tools are easy to use and are interoperable to provide an easy and cost-effective method of creating a talking paper application. The authoring tools provide a framework for creating audio content and image content and interactively linking the audio content and the image content. The authoring tools also provide for verifying the interactively linked audio and image content, reviewing the audio content, the image content and the interactive linking on a display device. Finally, the authoring tools provide for saving the audio content, the image content and the interactive linking for publication to a manufacturer for integration in an end user platform or talking paper platform.
    Type: Grant
    Filed: June 11, 2012
    Date of Patent: August 2, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kentaro Toyama, Gerald Chu, Ravin Balakrishnan
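The create/link/verify workflow described in the abstract can be sketched as a small authoring structure. The class name, region format, and file names are illustrative assumptions, not details from the patent.

```python
class TalkingPaperAuthoring:
    """Interactively link audio clips to regions of a page image,
    then verify that every link resolves before publication."""

    def __init__(self):
        self.links = []  # list of (image_region, audio_clip) pairs

    def link(self, region, audio_clip):
        """Associate a region of the page image with an audio clip."""
        self.links.append((region, audio_clip))

    def verify(self, available_audio):
        """Return linked clips that are missing from the available audio,
        mirroring the 'verifying the interactively linked audio and image
        content' step before saving for publication."""
        return [clip for _, clip in self.links if clip not in available_audio]
```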
  • Patent number: 9218116
    Abstract: Touch interaction with a curved display (e.g., a sphere, a hemisphere, a cylinder, etc.) is enabled through various user interface (UI) features. In an example embodiment, a curved display is monitored to detect a touch input. If a touch input is detected based on the act of monitoring, then one or more locations of the touch input are determined. Responsive to the determined one or more locations of the touch input, at least one UI feature is implemented. Example UI features include an orb-like invocation gesture feature, a rotation-based dragging feature, a send-to-dark-side interaction feature, and an object representation and manipulation by proxy representation feature.
    Type: Grant
    Filed: December 26, 2008
    Date of Patent: December 22, 2015
    Inventors: Hrvoje Benko, Andrew D. Wilson, Billy Chen, Ravin Balakrishnan, Patrick M. Baudisch
  • Patent number: 9195301
    Abstract: The present invention is a system that allows a number of 3D volumetric display or output configurations, such as dome, cubical and cylindrical volumetric displays, to interact with a number of different input configurations, such as a three-dimensional position sensing system having a volume sensing field, a planar position sensing system having a digitizing tablet, and a non-planar position sensing system having a sensing grid formed on a dome. The user interacts via the input configurations, such as by moving a digitizing stylus on the sensing grid formed on the dome enclosure surface. This interaction affects the content of the volumetric display by mapping positions and corresponding vectors of the stylus to a moving cursor within the 3D display space of the volumetric display that is offset from a tip of the stylus along the vector.
    Type: Grant
    Filed: July 28, 2008
    Date of Patent: November 24, 2015
    Assignee: Autodesk, Inc.
    Inventors: Gordon Paul Kurtenbach, George Fitzmaurice, Ravin Balakrishnan
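The offset-cursor mapping in the last sentence of the abstract reduces to a short vector computation: the 3D cursor sits a fixed distance from the stylus tip along the stylus's pointing vector, letting a stylus on the dome's surface reach points inside the display volume. The function name and argument conventions below are illustrative.

```python
import math

def cursor_position(tip, direction, offset):
    """Map a stylus tip position and pointing vector to a cursor inside
    the volumetric display, offset from the tip along the vector.
    `direction` need not be pre-normalized."""
    mag = math.sqrt(sum(d * d for d in direction)) or 1.0
    return tuple(t + offset * d / mag for t, d in zip(tip, direction))
```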
  • Publication number: 20150324114
    Abstract: A method and system for interactive 3D surgical planning are provided. The method and system provide 3D visualisation and manipulation of at least one anatomical feature in response to intuitive user inputs, including gesture inputs. In aspects, fracture segmentation and reduction, screw placement and fitting, and plate placement and contouring in a virtual 3D environment are provided.
    Type: Application
    Filed: May 4, 2015
    Publication date: November 12, 2015
    Inventors: Richard Hurley, Rinat Abdrashitov, Karan Singh, Ravin Balakrishnan, James McCrae
  • Publication number: 20140340388
    Abstract: A method, computer system and computer program is provided for using a suggestive modeling interface. The method consists of a method of a computer-implemented rendering of sketches, the method comprising the steps of: (1) a user activating a sketching application; (2) in response, the sketching application displaying on a screen a suggestive modeling interface; (3) the sketching application importing a sketch to the suggestive modeling interface; and (4) the sketching application retrieving from a database one or more suggestions based on the sketch. The method is operable to allow a user interactively using the sketching application to create a drawing that is guided by the imported sketch by selectively using one or more image guided drawing tools provided by the sketching application. The present invention is well-suited for three-dimensional modeling applications.
    Type: Application
    Filed: October 17, 2012
    Publication date: November 20, 2014
    Inventors: Steve Tsang, Karan Singh, Abhishek Ranjan, Ravin Balakrishnan
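Step (4) of the method, retrieving suggestions based on the sketch, can be sketched as a nearest-neighbor lookup. The feature vectors and Euclidean distance below are simple stand-ins for whatever representation and similarity measure the system actually indexes.

```python
def retrieve_suggestions(sketch_features, database, k=3):
    """Rank stored models by similarity to the imported sketch and
    return the k closest names. `database` maps model names to
    numeric feature vectors of the same length as `sketch_features`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(database.items(), key=lambda kv: dist(sketch_features, kv[1]))
    return [name for name, _ in ranked[:k]]
```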
  • Patent number: 8892479
    Abstract: A machine learning model is trained by instructing a user to perform various predefined gestures, sampling signals from EMG sensors arranged arbitrarily on the user's forearm with respect to locations of muscles in the forearm, extracting feature samples from the sampled signals, labeling the feature samples according to the corresponding gestures instructed to be performed, and training the machine learning model with the labeled feature samples. Subsequently, gestures may be recognized using the trained machine learning model by sampling signals from the EMG sensors, extracting from the signals unlabeled feature samples of a same type as those extracted during the training, passing the unlabeled feature samples to the machine learning model, and outputting from the machine learning model indicia of a gesture classified by the machine learning model.
    Type: Grant
    Filed: April 20, 2013
    Date of Patent: November 18, 2014
    Assignee: Microsoft Corporation
    Inventors: Desney Tan, Dan Morris, T. Scott Saponas, Ravin Balakrishnan
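The train-then-classify loop in the abstract can be illustrated end to end with a toy model. A nearest-centroid classifier stands in for whatever machine learning model the patented system uses, and the feature vectors here are plain numbers rather than real EMG-derived features.

```python
class GestureClassifier:
    """Train on feature samples labeled by the gesture the user was
    instructed to perform, then classify unlabeled samples of the
    same feature type (nearest-centroid stand-in model)."""

    def train(self, labeled_samples):
        """labeled_samples: list of (gesture_label, feature_vector)."""
        sums, counts = {}, {}
        for label, feats in labeled_samples:
            acc = sums.setdefault(label, [0.0] * len(feats))
            for i, f in enumerate(feats):
                acc[i] += f
            counts[label] = counts.get(label, 0) + 1
        self.centroids = {l: [v / counts[l] for v in acc]
                          for l, acc in sums.items()}

    def classify(self, feats):
        """Return the label of the closest gesture centroid."""
        def sq_dist(c):
            return sum((a - b) ** 2 for a, b in zip(c, feats))
        return min(self.centroids, key=lambda l: sq_dist(self.centroids[l]))
```

The key idea the abstract emphasizes, that the EMG sensors may be placed arbitrarily on the forearm, is what makes the labeled-training phase necessary: the model learns the sensor arrangement implicitly from the labeled samples.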
  • Publication number: 20130232095
    Abstract: A machine learning model is trained by instructing a user to perform various predefined gestures, sampling signals from EMG sensors arranged arbitrarily on the user's forearm with respect to locations of muscles in the forearm, extracting feature samples from the sampled signals, labeling the feature samples according to the corresponding gestures instructed to be performed, and training the machine learning model with the labeled feature samples. Subsequently, gestures may be recognized using the trained machine learning model by sampling signals from the EMG sensors, extracting from the signals unlabeled feature samples of a same type as those extracted during the training, passing the unlabeled feature samples to the machine learning model, and outputting from the machine learning model indicia of a gesture classified by the machine learning model.
    Type: Application
    Filed: April 20, 2013
    Publication date: September 5, 2013
    Applicant: Microsoft Corporation
    Inventors: Desney Tan, Dan Morris, T. Scott Saponas, Ravin Balakrishnan
  • Patent number: 8447704
    Abstract: A machine learning model is trained by instructing a user to perform predefined gestures, sampling signals from EMG sensors arranged arbitrarily on the user's forearm with respect to locations of muscles in the forearm, extracting feature samples from the sampled signals, labeling the feature samples according to the corresponding gestures instructed to be performed, and training the machine learning model with the labeled feature samples. Subsequently, gestures may be recognized using the trained machine learning model by sampling signals from the EMG sensors, extracting from the signals unlabeled feature samples of a same type as those extracted during the training, passing the unlabeled feature samples to the machine learning model, and outputting from the machine learning model indicia of a gesture classified by the machine learning model.
    Type: Grant
    Filed: June 26, 2008
    Date of Patent: May 21, 2013
    Assignee: Microsoft Corporation
    Inventors: Desney Tan, Dan Morris, Scott Saponas, Ravin Balakrishnan
  • Patent number: 8402382
    Abstract: A method, system and computer program for organizing and visualizing display objects within a virtual environment is provided. In one aspect, attributes of display objects define the interaction between display objects according to pre-determined rules, including rules simulating real world mechanics, thereby enabling enriched user interaction. The present invention further provides for the use of piles as an organizational entity for desktop objects. The present invention further provides for fluid interaction techniques for committing actions on display objects in a virtual interface. A number of other interaction and visualization techniques are disclosed.
    Type: Grant
    Filed: April 18, 2007
    Date of Patent: March 19, 2013
    Assignee: Google Inc.
    Inventors: Anand Agarawala, Ravin Balakrishnan
  • Patent number: 8300062
    Abstract: A method, computer system and computer program is provided for using a suggestive modeling interface. The method consists of a method of a computer-implemented rendering of sketches, the method comprising the steps of: (1) a user activating a sketching application; (2) in response, the sketching application displaying on a screen a suggestive modeling interface; (3) the sketching application importing a sketch to the suggestive modeling interface; and (4) the sketching application retrieving from a database one or more suggestions based on the sketch. The method is operable to allow a user interactively using the sketching application to create a drawing that is guided by the imported sketch by selectively using one or more image guided drawing tools provided by the sketching application. The present invention is well-suited for three-dimensional modeling applications.
    Type: Grant
    Filed: April 18, 2006
    Date of Patent: October 30, 2012
    Inventors: Steve Tsang, Karan Singh, Abhishek Ranjan, Ravin Balakrishnan
  • Publication number: 20120253815
    Abstract: A range of unified software authoring tools for creating a talking paper application for integration in an end user platform are described herein. The authoring tools are easy to use and are interoperable to provide an easy and cost-effective method of creating a talking paper application. The authoring tools provide a framework for creating audio content and image content and interactively linking the audio content and the image content. The authoring tools also provide for verifying the interactively linked audio and image content, reviewing the audio content, the image content and the interactive linking on a display device. Finally, the authoring tools provide for saving the audio content, the image content and the interactive linking for publication to a manufacturer for integration in an end user platform or talking paper platform.
    Type: Application
    Filed: June 11, 2012
    Publication date: October 4, 2012
    Applicant: Microsoft Corporation
    Inventors: Kentaro Toyama, Gerald Chu, Ravin Balakrishnan
  • Patent number: 8201074
    Abstract: A range of unified software authoring tools for creating a talking paper application for integration in an end user platform are described herein. The authoring tools are easy to use and are interoperable to provide an easy and cost-effective method of creating a talking paper application. The authoring tools provide a framework for creating audio content and image content and interactively linking the audio content and the image content. The authoring tools also provide for verifying the interactively linked audio and image content, reviewing the audio content, the image content and the interactive linking on a display device. Finally, the authoring tools provide for saving the audio content, the image content and the interactive linking for publication to a manufacturer for integration in an end user platform or talking paper platform.
    Type: Grant
    Filed: October 8, 2008
    Date of Patent: June 12, 2012
    Assignee: Microsoft Corporation
    Inventors: Kentaro Toyama, Gerald Chu, Ravin Balakrishnan
  • Publication number: 20110214053
    Abstract: Assisting input from a keyboard is described. In an embodiment, a processor receives a plurality of key-presses from the keyboard comprising alphanumeric data for input to application software executed at the processor. The processor analyzes the plurality of key-presses to detect at least one predefined typing pattern, and, in response, controls a display device to display a representation of at least a portion of the keyboard in association with a user interface of the application software. In another embodiment, a computer device has a keyboard and at least one sensor arranged to monitor at least a subset of keys on the keyboard, and detect an object within a predefined distance of a selected key prior to activation of the selected key. The processor then controls the display device to display a representation of a portion of the keyboard comprising the selected key.
    Type: Application
    Filed: February 26, 2010
    Publication date: September 1, 2011
    Applicant: Microsoft Corporation
    Inventors: James Scott, Shahram Izadi, Nicolas Villar, Ravin Balakrishnan
  • Patent number: 7986318
    Abstract: The present invention is a system that manages a volumetric display using volume windows. The volume windows have the typical functions, such as minimize, resize, etc., which operate in a volume. When initiated by an application a volume window is assigned to the application in a volume window data structure. Application data produced by the application is assigned to the windows responsive to which applications are assigned to which windows in the volume window data structure. Input events are assigned to the windows responsive to whether they are spatial or non-spatial. Spatial events are assigned to the window surrounding the event or cursor where a policy resolves situations where more than one window surrounds the cursor. Non-spatial events are assigned to the active or working window.
    Type: Grant
    Filed: February 2, 2006
    Date of Patent: July 26, 2011
    Assignee: Autodesk, Inc.
    Inventors: Gordon Paul Kurtenbach, George William Fitzmaurice, Ravin Balakrishnan
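The event-routing policy described above, spatial events to the window containing the cursor and non-spatial events to the active window, can be sketched with axis-aligned boxes standing in for volume windows. The smallest-volume tie-break is one plausible reading of the abstract's "policy" for overlapping windows, not necessarily the patented one.

```python
class VolumeWindowManager:
    """Route events to volume windows: spatial events go to the window
    whose volume contains the cursor (ties resolved by smallest volume);
    non-spatial events go to the active window."""

    def __init__(self):
        self.windows = {}   # name -> (min_corner, max_corner), axis-aligned
        self.active = None

    def add(self, name, lo, hi, active=False):
        self.windows[name] = (lo, hi)
        if active:
            self.active = name

    def route(self, event, cursor=None):
        if cursor is None:                       # non-spatial event
            return self.active
        def volume(box):
            lo, hi = box
            v = 1.0
            for a, b in zip(lo, hi):
                v *= (b - a)
            return v
        inside = [n for n, (lo, hi) in self.windows.items()
                  if all(a <= c <= b for a, c, b in zip(lo, cursor, hi))]
        return (min(inside, key=lambda n: volume(self.windows[n]))
                if inside else None)
```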
  • Publication number: 20100325572
    Abstract: This document relates to multiple mouse character entry. More particularly, the document relates to multiple mouse character entry tools for use on a common or shared graphical user interface (GUI). In some implementations, the multiple mouse character entry tools (MMCE tools) can generate a GUI that includes multiple distinctively identified cursors. Individual cursors can be controlled by individual users via a corresponding mouse. The MMCE tools can associate a set of characters with an individual cursor so that an individual user can use the mouse's scroll wheel to scroll to specific characters of the set. The user can select an individual character by clicking a button of the mouse.
    Type: Application
    Filed: June 23, 2009
    Publication date: December 23, 2010
    Applicant: Microsoft Corporation
    Inventors: Meredith J. Morris, Saleema Amershi, Neema M. Moraveji, Ravin Balakrishnan
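The scroll-to-select interaction for one MMCE cursor can be sketched as a small state machine. The character set, wrap-around behavior, and class name are illustrative assumptions for the example.

```python
class MouseCursorEntry:
    """One cursor's character-entry state: the scroll wheel steps
    through an associated character set (wrapping at the ends), and
    a button click commits the currently highlighted character."""

    def __init__(self, charset="abcdefghijklmnopqrstuvwxyz"):
        self.charset = charset
        self.index = 0          # currently highlighted character
        self.entered = []       # characters committed so far

    def scroll(self, steps):
        """Move the highlight by `steps` (negative scrolls backward)."""
        self.index = (self.index + steps) % len(self.charset)

    def click(self):
        """Commit and return the highlighted character."""
        self.entered.append(self.charset[self.index])
        return self.charset[self.index]
```

In the shared-GUI setting the abstract describes, each user would hold one such state, keyed to their distinctively identified cursor.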