Patents by Inventor Raman Kumar Sarin

Raman Kumar Sarin has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190336867
    Abstract: Systems, methods, and apparatuses are provided for annotating a video frame generated by a video game. A video game model that associates element tags with elements of the video game may be generated. The video game model may be applied by a video game overlay executing concurrently with the video game. The video game overlay may receive a remote user input from one or more remote devices over a network. The remote user input may be multiplexed and/or normalized, and subsequently parsed by applying the video game model to extract an element tag corresponding to the video game. By applying the video game model, an in-game element of the video game corresponding to the element tag may be identified in the video frame. Based on the identified element in the video frame of the video game, the video frame may be annotated and presented to the video game user.
    Type: Application
    Filed: May 7, 2018
    Publication date: November 7, 2019
    Inventors: Arunabh Verma, Raman Kumar Sarin, Alex R. Gregorio
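
The entry above describes a pipeline in which an overlay applies a game model to remote viewers' input, finds the referenced in-game element, and annotates the frame. Below is a minimal, illustrative sketch of that flow; every name, data structure, and the simple substring-based tag matching are assumptions made for this example, not details taken from the patent.

```python
# Illustrative sketch of the annotation flow: normalize remote input, parse it
# against a (hypothetical) game model to extract an element tag, then emit an
# annotation for the matching region of the frame.
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple


@dataclass
class ElementRegion:
    """Bounding box of an in-game element within a video frame."""
    tag: str
    box: Tuple[int, int, int, int]  # (x, y, width, height)


# Hypothetical "video game model": element tags mapped to where those elements
# appear in the current frame.
GAME_MODEL: Dict[str, ElementRegion] = {
    "health_bar": ElementRegion("health_bar", (10, 10, 200, 20)),
    "minimap": ElementRegion("minimap", (1700, 20, 200, 200)),
}


def normalize_remote_inputs(raw_inputs: List[str]) -> List[str]:
    """Multiplex input from several remote viewers into one normalized stream."""
    return [text.strip().lower() for text in raw_inputs if text.strip()]


def extract_element_tag(normalized_input: str) -> Optional[str]:
    """Parse a remote viewer's input against the model to find a matching tag."""
    for tag in GAME_MODEL:
        if tag.replace("_", " ") in normalized_input:
            return tag
    return None


def annotate_frame(frame_id: int, remote_inputs: List[str]) -> List[dict]:
    """Return annotation records for the in-game elements identified in the frame."""
    annotations = []
    for text in normalize_remote_inputs(remote_inputs):
        tag = extract_element_tag(text)
        if tag is not None:
            region = GAME_MODEL[tag]
            annotations.append({"frame": frame_id, "tag": tag, "box": region.box})
    return annotations


if __name__ == "__main__":
    # Two remote viewers comment on the stream; the overlay highlights the minimap.
    print(annotate_frame(42, ["Look at the MiniMap!", "gg"]))
```
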
  • Patent number: 10449461
    Abstract: Systems, methods, and apparatuses are provided for annotating a video frame generated by a video game. A video game model that associates element tags with elements of the video game may be generated. The video game model may be applied by a video game overlay executing concurrently with the video game. The video game overlay may receive a remote user input from one or more remote devices over a network. The remote user input may be multiplexed and/or normalized, and subsequently parsed by applying the video game model to extract an element tag corresponding to the video game. By applying the video game model, an in-game element of the video game corresponding to the element tag may be identified in the video frame. Based on the identified element in the video frame of the video game, the video frame may be annotated and presented to the video game user.
    Type: Grant
    Filed: May 7, 2018
    Date of Patent: October 22, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Arunabh Verma, Raman Kumar Sarin, Alex R. Gregorio
  • Patent number: 10175938
    Abstract: A system and method are disclosed for navigation on the World Wide Web using voice commands. The name of a website may be called out by users several different ways. A user may speak the entire URL, a portion of the URL, or a name of the website which may bear little resemblance to the URL. The present technology uses rules and heuristics embodied in various software engines to determine the best candidate website based on the received voice command, and then navigates to that website.
    Type: Grant
    Filed: November 19, 2013
    Date of Patent: January 8, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Andrew S. Zeigler, Michael Han-Young Kim, Rodger William Benson, Raman Kumar Sarin
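
The entry above describes choosing the best candidate website from a spoken name or partial URL using rules and heuristics. The sketch below shows one way such matching could look, assuming a small fixed site directory and fuzzy string scoring; the scoring rule and thresholds are illustrative only, not the patented engines.

```python
# Illustrative sketch: strip spoken-URL filler from the utterance, then pick
# the known site whose friendly name or URL scores highest against it.
from difflib import SequenceMatcher

# Hypothetical site directory: friendly name -> URL.
KNOWN_SITES = {
    "bing": "https://www.bing.com",
    "microsoft": "https://www.microsoft.com",
    "weather channel": "https://www.weather.com",
}


def normalize_utterance(utterance: str) -> str:
    """Remove spoken URL filler such as 'www dot' and 'dot com'."""
    text = utterance.lower()
    for filler in ("http", "www dot", "dot com", "dot org", "slash"):
        text = text.replace(filler, " ")
    return " ".join(text.split())


def best_candidate(utterance: str) -> str:
    """Return the URL of the known site that best matches the voice command."""
    spoken = normalize_utterance(utterance)

    def score(item):
        name, url = item
        return max(SequenceMatcher(None, spoken, name).ratio(),
                   SequenceMatcher(None, spoken, url).ratio())

    _, url = max(KNOWN_SITES.items(), key=score)
    return url


if __name__ == "__main__":
    print(best_candidate("go to www dot bing dot com"))  # expects the Bing URL
    print(best_candidate("the weather channel"))         # expects the weather URL
```
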
  • Patent number: 9219804
    Abstract: The present disclosure relates to a mobile phone and a method for answering such a phone automatically without user input. In one embodiment, the mobile phone detects that a call is being received. A proximity sensor is then used to detect the presence of a nearby object. For example, this allows a determination to be made whether the mobile phone is within a pocket of the user while the phone is ringing. Then a determination is made whether the proximity sensor changes states. For example, if a user removes the phone from their pocket, the proximity sensor switches from detecting something proximal to detecting that the phone is no longer in the user's pocket. Next, a determination is made whether the proximity sensor is again next to an object, such as an ear. If so, the mobile phone can be automatically answered without further user input.
    Type: Grant
    Filed: August 8, 2013
    Date of Patent: December 22, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Raman Kumar Sarin, Monica Estela Gonzalez Veron, Kenneth Paul Hinckley, Sumit Kumar, James Kai Yu Lau, Joseph H. Matthews, III, Jae Pum Park
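
The entry above describes a sequence of proximity-sensor transitions (covered in a pocket, then uncovered, then covered again at the ear) that answers a ringing phone without a button press. Below is a minimal state-machine sketch of that sequence; the state names and boolean sensor readings are assumptions for illustration, not the patented implementation.

```python
# Illustrative sketch: track proximity-sensor transitions during an incoming
# call and report when the phone should auto-answer.
from enum import Enum, auto


class AnswerState(Enum):
    RINGING_COVERED = auto()    # call arrived while the sensor is covered (in a pocket)
    RINGING_UNCOVERED = auto()  # phone pulled out: sensor no longer detects a nearby object
    ANSWERED = auto()


class AutoAnswer:
    def __init__(self):
        self.state = None

    def on_incoming_call(self, sensor_covered: bool) -> None:
        # Only arm auto-answer if the phone appears to be in a pocket when it rings.
        self.state = AnswerState.RINGING_COVERED if sensor_covered else None

    def on_proximity_change(self, sensor_covered: bool) -> bool:
        """Feed each sensor transition; returns True when the call should be answered."""
        if self.state is AnswerState.RINGING_COVERED and not sensor_covered:
            self.state = AnswerState.RINGING_UNCOVERED   # taken out of the pocket
        elif self.state is AnswerState.RINGING_UNCOVERED and sensor_covered:
            self.state = AnswerState.ANSWERED             # raised to the ear
            return True
        return False


if __name__ == "__main__":
    phone = AutoAnswer()
    phone.on_incoming_call(sensor_covered=True)              # rings while in a pocket
    phone.on_proximity_change(sensor_covered=False)          # pulled out of the pocket
    print(phone.on_proximity_change(sensor_covered=True))    # held to the ear -> True
```
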
  • Patent number: 9143601
    Abstract: Exemplary methods, apparatus, and systems are disclosed for capturing, organizing, sharing, and/or displaying media. For example, using embodiments of the disclosed technology, a unified playback and browsing experience for a collection of media can be created automatically. For instance, heuristics and metadata can be used to assemble and add narratives to the media data. Furthermore, this representation of media can recompose itself dynamically as more media is added to the collection. While a collection may use a single user's content, sometimes media that is desirable to include in the collection is captured by friends and/or others at the same event. In certain embodiments, media content related to the event can be automatically collected and shared among selected groups. Further, in some embodiments, new media can be automatically incorporated into a media collection associated with the event, and the playback experience dynamically updated.
    Type: Grant
    Filed: November 9, 2011
    Date of Patent: September 22, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Udiyan Padmanabhan, William Messing, Martin Shetter, Tatiana Gershanovich, Michael J. Ricker, Jannes Paul Peters, Raman Kumar Sarin, Joseph H. Matthews, III, Monica Gonzalez, Jae Pum Park
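
The entry above describes using heuristics and metadata to assemble media into event collections that recompose as new items arrive. The sketch below shows one simple such heuristic, grouping by capture-time gaps; the one-hour threshold and data layout are assumptions for illustration only.

```python
# Illustrative sketch: fold media captured close together in time into one
# event collection; re-running the grouping after new media is shared
# "recomposes" the collections.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List


@dataclass
class MediaItem:
    owner: str
    captured_at: datetime
    path: str


def group_into_events(items: List[MediaItem],
                      gap: timedelta = timedelta(hours=1)) -> List[List[MediaItem]]:
    """Split media into event collections wherever the capture-time gap exceeds `gap`."""
    events: List[List[MediaItem]] = []
    for item in sorted(items, key=lambda m: m.captured_at):
        if events and item.captured_at - events[-1][-1].captured_at <= gap:
            events[-1].append(item)   # same event: extend the current collection
        else:
            events.append([item])     # start a new event collection
    return events


if __name__ == "__main__":
    party = datetime(2011, 11, 5, 20, 0)
    library = [
        MediaItem("me", party, "me/party_01.jpg"),
        MediaItem("friend", party + timedelta(minutes=20), "friend/party_02.jpg"),
        MediaItem("me", party + timedelta(days=2), "me/hike_01.jpg"),
    ]
    for event in group_into_events(library):
        print([m.path for m in event])
```
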
  • Publication number: 20150143241
    Abstract: A system and method are disclosed for navigation on the World Wide Web using voice commands. The name of a website may be called out by users several different ways. A user may speak the entire URL, a portion of the URL, or a name of the website which may bear little resemblance to the URL. The present technology uses rules and heuristics embodied in various software engines to determine the best candidate website based on the received voice command, and then navigates to that website.
    Type: Application
    Filed: November 19, 2013
    Publication date: May 21, 2015
    Applicant: Microsoft Corporation
    Inventors: Andrew S. Zeigler, Michael Han-Young Kim, Rodger William Benson, Raman Kumar Sarin
  • Patent number: 9031847
    Abstract: A computing device (e.g., a smart phone, a tablet computer, digital camera, or other device with image capture functionality) causes an image capture device to capture one or more digital images based on audio input (e.g., a voice command) received by the computing device. For example, a user's voice (e.g., a word or phrase) is converted to audio input data by the computing device, which then compares (e.g., using an audio matching algorithm) the audio input data to an expected voice command associated with an image capture application. In another aspect, a computing device activates an image capture application and captures one or more digital images based on a received voice command. In another aspect, a computing device transitions from a low-power state to an active state, activates an image capture application, and causes a camera device to capture digital images based on a received voice command.
    Type: Grant
    Filed: November 15, 2011
    Date of Patent: May 12, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Raman Kumar Sarin, Joseph H. Matthews, III, James Kai Yu Lau, Monica Estela Gonzalez Veron, Jae Pum Park
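
The entry above describes comparing recognized speech against an expected command and capturing an image on a match. Below is a minimal sketch of that comparison step; the `Camera` stub, the fuzzy-matching approach, and the threshold are assumptions for illustration, not the patented matching algorithm.

```python
# Illustrative sketch: score recognized speech against an expected capture
# command and trigger the (stubbed) camera when the score clears a threshold.
from difflib import SequenceMatcher
from typing import Optional

EXPECTED_COMMAND = "take a picture"
MATCH_THRESHOLD = 0.8


class Camera:
    """Stand-in for a device camera API."""
    def capture(self) -> str:
        return "captured image_0001.jpg"


def handle_voice_input(recognized_text: str, camera: Camera) -> Optional[str]:
    """Capture an image if the recognized speech matches the expected command."""
    similarity = SequenceMatcher(None, recognized_text.lower().strip(),
                                 EXPECTED_COMMAND).ratio()
    if similarity >= MATCH_THRESHOLD:
        return camera.capture()
    return None


if __name__ == "__main__":
    cam = Camera()
    print(handle_voice_input("Take a picture", cam))   # close match -> capture
    print(handle_voice_input("what time is it", cam))  # no match -> None
```
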
  • Patent number: 9008639
    Abstract: Techniques and tools are described for controlling an audio signal of a mobile device. For example, information indicative of acceleration of the mobile device can be received and correlation between the information indicative of acceleration and exemplar whack event data can be determined. An audio signal of the mobile device can be controlled based on the correlation.
    Type: Grant
    Filed: March 11, 2011
    Date of Patent: April 14, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: James M. Lyon, James Kai Yu Lau, Raman Kumar Sarin, Jae Pum Park, Monica Estela Gonzalez Veron
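
The entry above describes correlating accelerometer data against exemplar "whack" event data and controlling the audio signal based on that correlation. The sketch below shows one plausible form of that check; the exemplar values, correlation measure, and threshold are made up for illustration.

```python
# Illustrative sketch: compute a cosine-style correlation between an
# acceleration window and an exemplar whack pattern, and silence the ringer
# when the correlation is high enough.
from typing import List


def normalized_correlation(signal: List[float], exemplar: List[float]) -> float:
    """Cosine-style correlation between an acceleration window and the exemplar."""
    dot = sum(a * b for a, b in zip(signal, exemplar))
    norm = (sum(a * a for a in signal) ** 0.5) * (sum(b * b for b in exemplar) ** 0.5)
    return dot / norm if norm else 0.0


# Hypothetical exemplar: a sharp spike in acceleration followed by settling.
WHACK_EXEMPLAR = [0.1, 0.2, 3.5, 2.8, 0.4, 0.1]
WHACK_THRESHOLD = 0.9


def should_silence(acceleration_window: List[float]) -> bool:
    return normalized_correlation(acceleration_window, WHACK_EXEMPLAR) >= WHACK_THRESHOLD


if __name__ == "__main__":
    print(should_silence([0.0, 0.3, 3.2, 3.0, 0.5, 0.2]))  # whack-like spike -> True
    print(should_silence([0.1, 0.1, 0.1, 0.1, 0.1, 0.1]))  # steady signal -> False
```
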
  • Publication number: 20130324194
    Abstract: The present disclosure relates to a mobile phone and a method for answering such a phone automatically without user input. In one embodiment, the mobile phone detects that a call is being received. A proximity sensor is then used to detect the presence of a nearby object. For example, this allows a determination to be made whether the mobile phone is within a pocket of the user while the phone is ringing. Then a determination is made whether the proximity sensor changes states. For example, if a user removes the phone from their pocket, the proximity sensor switches from detecting something proximal to detecting that the phone is no longer in the user's pocket. Next, a determination is made whether the proximity sensor is again next to an object, such as an ear. If so, the mobile phone can be automatically answered without further user input.
    Type: Application
    Filed: August 8, 2013
    Publication date: December 5, 2013
    Applicant: Microsoft Corporation
    Inventors: Raman Kumar Sarin, Monica Estela Gonzalez Veron, Kenneth Paul Hinckley, Sumit Kumar, James Kai Yu Lau, Joseph H. Matthews, III, Jae Pum Park
  • Patent number: 8509842
    Abstract: The present disclosure relates to a mobile phone and a method for answering such a phone automatically without user input. In one embodiment, the mobile phone detects that a call is being received. A proximity sensor is then used to detect the presence of a nearby object. For example, this allows a determination to be made whether the mobile phone is within a pocket of the user while the phone is ringing. Then a determination is made whether the proximity sensor changes states. For example, if a user removes the phone from their pocket, the proximity sensor switches from detecting something proximal to detecting that the phone is no longer in the user's pocket. Next, a determination is made whether the proximity sensor is again next to an object, such as an ear. If so, the mobile phone can be automatically answered without further user input.
    Type: Grant
    Filed: February 18, 2011
    Date of Patent: August 13, 2013
    Assignee: Microsoft Corporation
    Inventors: Raman Kumar Sarin, Monica Estela Gonzalez Veron, Kenneth Paul Hinckley, Sumit Kumar, James Kai Yu Lau, Joseph H. Matthews, III, Jae Pum Park
  • Publication number: 20130124207
    Abstract: A computing device (e.g., a smart phone, a tablet computer, digital camera, or other device with image capture functionality) causes an image capture device to capture one or more digital images based on audio input (e.g., a voice command) received by the computing device. For example, a user's voice (e.g., a word or phrase) is converted to audio input data by the computing device, which then compares (e.g., using an audio matching algorithm) the audio input data to an expected voice command associated with an image capture application. In another aspect, a computing device activates an image capture application and captures one or more digital images based on a received voice command. In another aspect, a computing device transitions from a low-power state to an active state, activates an image capture application, and causes a camera device to capture digital images based on a received voice command.
    Type: Application
    Filed: November 15, 2011
    Publication date: May 16, 2013
    Applicant: Microsoft Corporation
    Inventors: Raman Kumar Sarin, Joseph H. Matthews, III, James Kai Yu Lau, Monica Estela Gonzalez Veron, Jae Pum Park
  • Publication number: 20130117365
    Abstract: Exemplary methods, apparatus, and systems are disclosed for capturing, organizing, sharing, and/or displaying media. For example, using embodiments of the disclosed technology, a unified playback and browsing experience for a collection of media can be created automatically. For instance, heuristics and metadata can be used to assemble and add narratives to the media data. Furthermore, this representation of media can recompose itself dynamically as more media is added to the collection. While a collection may use a single user's content, sometimes media that is desirable to include in the collection is captured by friends and/or others at the same event. In certain embodiments, media content related to the event can be automatically collected and shared among selected groups. Further, in some embodiments, new media can be automatically incorporated into a media collection associated with the event, and the playback experience dynamically updated.
    Type: Application
    Filed: November 9, 2011
    Publication date: May 9, 2013
    Applicant: Microsoft Corporation
    Inventors: Udiyan Padmanabhan, William Messing, Martin Shetter, Tatiana Gershanovich, Michael J. Ricker, Jannes Paul Peters, Raman Kumar Sarin, Joseph H. Matthews, III, Monica Gonzalez, Jae Pum Park
  • Publication number: 20120231838
    Abstract: Techniques and tools are described for controlling an audio signal of a mobile device. For example, information indicative of acceleration of the mobile device can be received and correlation between the information indicative of acceleration and exemplar whack event data can be determined. An audio signal of the mobile device can be controlled based on the correlation.
    Type: Application
    Filed: March 11, 2011
    Publication date: September 13, 2012
    Applicant: Microsoft Corporation
    Inventors: James M. Lyon, James Kai Yu Lau, Raman Kumar Sarin, Jae Pum Park, Monica Estela Gonzalez Veron
  • Publication number: 20120214542
    Abstract: The present disclosure relates to a mobile phone and a method for answering such a phone automatically without user input. In one embodiment, the mobile phone detects that a call is being received. A proximity sensor is then used to detect the presence of a nearby object. For example, this allows a determination to be made whether the mobile phone is within a pocket of the user while the phone is ringing. Then a determination is made whether the proximity sensor changes states. For example, if a user removes the phone from their pocket, the proximity sensor switches from detecting something proximal to detecting that the phone is no longer in the user's pocket. Next, a determination is made whether the proximity sensor is again next to an object, such as an ear. If so, the mobile phone can be automatically answered without further user input.
    Type: Application
    Filed: February 18, 2011
    Publication date: August 23, 2012
    Applicant: Microsoft Corporation
    Inventors: Raman Kumar Sarin, Monica Estela Gonzalez Veron, Kenneth Paul Hinckley, Sumit Kumar, James Kai Yu Lau, Joseph H. Matthews, III, Jae Pum Park
  • Publication number: 20100321275
    Abstract: Described is a multiple display computing device, including technology for automatically selecting among various operating modes so as to display content on the displays based upon their relative positions. For example, concave modes correspond to inwardly facing viewing surfaces of both displays, such as for viewing private content from a single viewpoint. Convex modes have outwardly facing viewing surfaces, such that private content is shown on one display and public content on another. Neutral modes are those in which the viewing surfaces of the displays are generally on a common plane, for single user or multiple user/collaborative viewing depending on each display's output orientation. The displays may be movably coupled to one another, or may be implemented as two detachable computer systems coupled by a network connection.
    Type: Application
    Filed: June 18, 2009
    Publication date: December 23, 2010
    Applicant: Microsoft Corporation
    Inventors: Kenneth Paul Hinckley, Raman Kumar Sarin
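
The entry above describes selecting among concave, convex, and neutral operating modes based on the relative positions of two displays. The sketch below maps a hinge angle to a mode; the angle cutoffs and the angle-based framing itself are assumptions for illustration only.

```python
# Illustrative sketch: pick an operating mode from the angle between the two
# display viewing surfaces.
def select_display_mode(hinge_angle_degrees: float) -> str:
    """Map the hinge angle between the two displays to an operating mode.

    0 degrees   -> screens face each other (closed)
    180 degrees -> screens form one flat plane
    360 degrees -> screens face outward (folded back to back)
    """
    if hinge_angle_degrees < 160:
        return "concave"   # both screens face one viewpoint: private, single-user content
    if hinge_angle_degrees <= 200:
        return "neutral"   # roughly coplanar: single-user or collaborative viewing
    return "convex"        # screens face outward: private content on one, public on the other


if __name__ == "__main__":
    for angle in (90, 180, 300):
        print(angle, select_display_mode(angle))
```
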
  • Patent number: 6101513
    Abstract: A system and method are described for outputting display information according to a print layout defining a set of display items on a page and relative position assignments for the display items on the page, and a separate and distinct page format describing a physical page and a set of virtual pages on the physical page. Means are provided for selecting the print layout from a set of print layouts and the page format from a set of page formats. After the print layout and page format are selected, a view processor fills the set of pages defined in the page format with print information corresponding to the set of display items described within the selected print layout. A print output generator thereafter generates device-specific display data for rendering the physical page containing the filled set of virtual pages constructed by the view processor according to the print layout and page format and a designated output device.
    Type: Grant
    Filed: May 31, 1996
    Date of Patent: August 8, 2000
    Assignee: Microsoft Corporation
    Inventors: Darren Arthur Shakib, Raman Kumar Sarin, Salim Alam, John Marshall Tippett, David Charles Whitney
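
The entry above separates a print layout (which display items go on a page) from a page format (how virtual pages are arranged on a physical page), with a view processor filling the virtual pages. The sketch below illustrates that separation; every class and field name is an assumption for this example, not the patented design.

```python
# Illustrative sketch of the view-processor step: distribute content across
# the virtual pages that a page format places on one physical sheet.
from dataclasses import dataclass
from typing import List


@dataclass
class PrintLayout:
    name: str
    items: List[str]              # display items to place on each virtual page


@dataclass
class PageFormat:
    name: str
    virtual_pages_per_sheet: int  # e.g. 2 for a booklet, 4 for a handout sheet


def fill_pages(layout: PrintLayout, fmt: PageFormat,
               content: List[str]) -> List[List[str]]:
    """Fill the format's virtual pages with print information from the layout."""
    pages: List[List[str]] = [[] for _ in range(fmt.virtual_pages_per_sheet)]
    for index, value in enumerate(content):
        pages[index % fmt.virtual_pages_per_sheet].append(f"{layout.items[0]}: {value}")
    return pages


if __name__ == "__main__":
    layout = PrintLayout("contact card", items=["entry"])
    fmt = PageFormat("2-up", virtual_pages_per_sheet=2)
    # A print output generator would then render each filled virtual page
    # for the designated output device.
    for page in fill_pages(layout, fmt, ["Alice", "Bob", "Carol"]):
        print(page)
```
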