Patents by Inventor Raman Kumar Sarin
Raman Kumar Sarin has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20190336867
Abstract: Systems, methods, and apparatuses are provided for annotating a video frame generated by a video game. A video game model that associates element tags with elements of the video game may be generated. The video game model may be applied by a video game overlay executing concurrently with the video game. The video game overlay may receive a remote user input from one or more remote devices over a network. The remote user input may be multiplexed and/or normalized, and subsequently parsed by applying the video game model to extract an element tag corresponding to the video game. By applying the video game model, an in-game element of the video game corresponding to the element tag may be identified in the video frame. Based on the identified element in the video frame of the video game, the video frame may be annotated and presented to the video game user.
Type: Application
Filed: May 7, 2018
Publication date: November 7, 2019
Inventors: Arunabh Verma, Raman Kumar Sarin, Alex R. Gregorio
-
Patent number: 10449461
Abstract: Systems, methods, and apparatuses are provided for annotating a video frame generated by a video game. A video game model that associates element tags with elements of the video game may be generated. The video game model may be applied by a video game overlay executing concurrently with the video game. The video game overlay may receive a remote user input from one or more remote devices over a network. The remote user input may be multiplexed and/or normalized, and subsequently parsed by applying the video game model to extract an element tag corresponding to the video game. By applying the video game model, an in-game element of the video game corresponding to the element tag may be identified in the video frame. Based on the identified element in the video frame of the video game, the video frame may be annotated and presented to the video game user.
Type: Grant
Filed: May 7, 2018
Date of Patent: October 22, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Arunabh Verma, Raman Kumar Sarin, Alex R. Gregorio
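The pipeline this abstract describes (normalize remote input, extract a recognized element tag, identify the tagged element, annotate the frame) can be sketched as a toy example. All names, tags, and screen regions below are hypothetical illustrations, not code or data from the patent.

```python
# Toy "video game model" mapping element tags to in-game elements and
# screen regions, applied to multiplexed remote user input.

def build_game_model():
    """Associate element tags with in-game elements and frame regions."""
    return {
        "#boss": {"element": "boss_enemy", "region": (120, 40, 60, 60)},
        "#door": {"element": "exit_door", "region": (300, 200, 40, 80)},
    }

def normalize_input(raw_inputs):
    """Multiplex messages from several remote devices into one lowercase stream."""
    return " ".join(msg.strip().lower() for msg in raw_inputs)

def extract_tags(model, text):
    """Parse the normalized input, keeping only tags the model recognizes."""
    return [tok.rstrip("!?.,") for tok in text.split()
            if tok.rstrip("!?.,") in model]

def annotate_frame(model, frame_labels, tags):
    """Record each identified element's region so the overlay can highlight it."""
    for tag in tags:
        frame_labels[model[tag]["element"]] = model[tag]["region"]
    return frame_labels

model = build_game_model()
text = normalize_input(["Watch the #BOSS!", "go to #door"])
labels = annotate_frame(model, {}, extract_tags(model, text))
```

In this sketch the "annotation" is just a dict of regions; a real overlay would draw highlights at those coordinates on the captured frame.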
-
Patent number: 10175938
Abstract: A system and method are disclosed for navigation on the World Wide Web using voice commands. The name of a website may be called out by users several different ways. A user may speak the entire URL, a portion of the URL, or a name of the website which may bear little resemblance to the URL. The present technology uses rules and heuristics embodied in various software engines to determine the best candidate website based on the received voice command, and then navigates to that website.
Type: Grant
Filed: November 19, 2013
Date of Patent: January 8, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Andrew S. Zeigler, Michael Han-Young Kim, Rodger William Benson, Raman Kumar Sarin
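The rule-based matching this abstract describes can be illustrated with a minimal scorer: known sites are matched against a spoken phrase whether the user says the full URL, part of it, or a nickname. The site list, aliases, and scoring rules below are invented for illustration; the patent's actual engines and heuristics are not specified here.

```python
# Minimal sketch: score candidate websites against a spoken command.

SITES = {
    "https://www.bing.com": ["bing", "bing dot com", "www dot bing dot com"],
    "https://www.microsoft.com": ["microsoft", "microsoft dot com"],
}

def best_candidate(spoken):
    """Return the URL whose aliases best match the spoken command, or None."""
    spoken = spoken.lower().strip()
    best_url, best_score = None, 0.0
    for url, aliases in SITES.items():
        for alias in aliases:
            if spoken == alias:
                score = 1.0        # exact match on a known name or URL form
            elif alias in spoken or spoken in alias:
                score = 0.5        # partial match (e.g. "go to bing")
            else:
                score = 0.0
            if score > best_score:
                best_url, best_score = url, score
    return best_url

best_candidate("go to bing")   # partial match on the "bing" alias
```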
-
Patent number: 9219804
Abstract: The present disclosure relates to a mobile phone and a method for answering such a phone automatically without user input. In one embodiment, the mobile phone detects that a call is being received. A proximity sensor is then used to detect the presence of a nearby object. For example, this allows a determination to be made whether the mobile phone is within a pocket of the user while the phone is ringing. Then a determination is made whether the proximity sensor changes states. For example, if a user removes the phone from their pocket, the proximity sensor switches from detecting something proximal to detecting that the phone is no longer in the user's pocket. Next, a determination is made whether the proximity sensor is again next to an object, such as an ear. If so, the mobile phone can be automatically answered without further user input.
Type: Grant
Filed: August 8, 2013
Date of Patent: December 22, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Raman Kumar Sarin, Monica Estela Gonzalez Veron, Kenneth Paul Hinckley, Sumit Kumar, James Kai Yu Lau, Joseph H. Matthews, III, Jae Pum Park
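The pocket-to-ear sequence in this abstract is essentially a small state machine over proximity-sensor readings while the phone rings. The following is an illustrative reconstruction under that reading, not code from the patent.

```python
# State machine over proximity samples: answer only after the sequence
# near -> far -> near (in pocket, removed from pocket, held to ear).

def should_auto_answer(readings):
    """readings: proximity samples taken while ringing; True = object nearby."""
    state = "start"
    for near in readings:
        if state == "start" and near:
            state = "in_pocket"        # something proximal: likely in a pocket
        elif state == "in_pocket" and not near:
            state = "removed"          # sensor state changed: phone taken out
        elif state == "removed" and near:
            return True                # near again, e.g. at the ear: answer
    return False

should_auto_answer([True, True, False, True])   # pocket -> hand -> ear
```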
-
Patent number: 9143601
Abstract: Exemplary methods, apparatus, and systems are disclosed for capturing, organizing, sharing, and/or displaying media. For example, using embodiments of the disclosed technology, a unified playback and browsing experience for a collection of media can be created automatically. For instance, heuristics and metadata can be used to assemble and add narratives to the media data. Furthermore, this representation of media can recompose itself dynamically as more media is added to the collection. While a collection may use a single user's content, sometimes media that is desirable to include in the collection is captured by friends and/or others at the same event. In certain embodiments, media content related to the event can be automatically collected and shared among selected groups. Further, in some embodiments, new media can be automatically incorporated into a media collection associated with the event, and the playback experience dynamically updated.
Type: Grant
Filed: November 9, 2011
Date of Patent: September 22, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Udiyan Padmanabhan, William Messing, Martin Shetter, Tatiana Gershanovich, Michael J. Ricker, Jannes Paul Peters, Raman Kumar Sarin, Joseph H. Matthews, III, Monica Gonzalez, Jae Pum Park
-
Publication number: 20150143241
Abstract: A system and method are disclosed for navigation on the World Wide Web using voice commands. The name of a website may be called out by users several different ways. A user may speak the entire URL, a portion of the URL, or a name of the website which may bear little resemblance to the URL. The present technology uses rules and heuristics embodied in various software engines to determine the best candidate website based on the received voice command, and then navigates to that website.
Type: Application
Filed: November 19, 2013
Publication date: May 21, 2015
Applicant: Microsoft Corporation
Inventors: Andrew S. Zeigler, Michael Han-Young Kim, Rodger William Benson, Raman Kumar Sarin
-
Patent number: 9031847
Abstract: A computing device (e.g., a smart phone, a tablet computer, digital camera, or other device with image capture functionality) causes an image capture device to capture one or more digital images based on audio input (e.g., a voice command) received by the computing device. For example, a user's voice (e.g., a word or phrase) is converted to audio input data by the computing device, which then compares (e.g., using an audio matching algorithm) the audio input data to an expected voice command associated with an image capture application. In another aspect, a computing device activates an image capture application and captures one or more digital images based on a received voice command. In another aspect, a computing device transitions from a low-power state to an active state, activates an image capture application, and causes a camera device to capture digital images based on a received voice command.
Type: Grant
Filed: November 15, 2011
Date of Patent: May 12, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Raman Kumar Sarin, Joseph H. Matthews, III, James Kai Yu Lau, Monica Estela Gonzalez Veron, Jae Pum Park
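The wake-and-capture flow in the third aspect of this abstract can be sketched as: match incoming audio against the expected command, transition out of the low-power state, activate the capture application, then capture. The word-overlap matcher and the device dictionary below are crude stand-ins invented for illustration, not a real audio-matching algorithm or device API.

```python
# Sketch: voice-triggered image capture with a low-power wake transition.

EXPECTED_COMMAND = "take a picture"

def matches_command(audio_text, threshold=0.8):
    """Stand-in for an audio matching algorithm: fraction of expected words heard."""
    expected = set(EXPECTED_COMMAND.split())
    heard = set(audio_text.lower().split())
    return len(expected & heard) / len(expected) >= threshold

def handle_audio(device, audio_text):
    """On a matching command: wake the device, start the camera app, capture."""
    if not matches_command(audio_text):
        return device
    if device["state"] == "low_power":
        device["state"] = "active"      # transition to the active state
    device["camera_app"] = True         # activate the image capture application
    device["captures"] += 1             # cause the camera device to capture
    return device

device = {"state": "low_power", "camera_app": False, "captures": 0}
handle_audio(device, "please take a picture now")
```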
-
Patent number: 9008639
Abstract: Techniques and tools are described for controlling an audio signal of a mobile device. For example, information indicative of acceleration of the mobile device can be received and correlation between the information indicative of acceleration and exemplar whack event data can be determined. An audio signal of the mobile device can be controlled based on the correlation.
Type: Grant
Filed: March 11, 2011
Date of Patent: April 14, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: James M. Lyon, James Kai Yu Lau, Raman Kumar Sarin, Jae Pum Park, Monica Estela Gonzalez Veron
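A toy version of this idea: correlate a window of accelerometer magnitudes against an exemplar "whack" signature and silence the ringer when the correlation is high. The exemplar values and threshold below are made up for illustration; the patent does not disclose specific numbers here.

```python
# Correlate acceleration samples against exemplar whack event data.

EXEMPLAR = [0.0, 0.2, 1.0, 0.3, 0.0]   # hypothetical whack signature

def correlation(a, b):
    """Normalized dot product of two equal-length sample windows, in [0, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def should_silence(accel_window, threshold=0.9):
    """Control the audio signal: silence when the window resembles a whack."""
    return correlation(accel_window, EXEMPLAR) >= threshold

should_silence([0.0, 0.1, 0.9, 0.2, 0.0])   # sharp spike resembling the exemplar
```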
-
Publication number: 20130324194
Abstract: The present disclosure relates to a mobile phone and a method for answering such a phone automatically without user input. In one embodiment, the mobile phone detects that a call is being received. A proximity sensor is then used to detect the presence of a nearby object. For example, this allows a determination to be made whether the mobile phone is within a pocket of the user while the phone is ringing. Then a determination is made whether the proximity sensor changes states. For example, if a user removes the phone from their pocket, the proximity sensor switches from detecting something proximal to detecting that the phone is no longer in the user's pocket. Next, a determination is made whether the proximity sensor is again next to an object, such as an ear. If so, the mobile phone can be automatically answered without further user input.
Type: Application
Filed: August 8, 2013
Publication date: December 5, 2013
Applicant: Microsoft Corporation
Inventors: Raman Kumar Sarin, Monica Estela Gonzalez Veron, Kenneth Paul Hinckley, Sumit Kumar, James Kai Yu Lau, Joseph H. Matthews, III, Jae Pum Park
-
Patent number: 8509842
Abstract: The present disclosure relates to a mobile phone and a method for answering such a phone automatically without user input. In one embodiment, the mobile phone detects that a call is being received. A proximity sensor is then used to detect the presence of a nearby object. For example, this allows a determination to be made whether the mobile phone is within a pocket of the user while the phone is ringing. Then a determination is made whether the proximity sensor changes states. For example, if a user removes the phone from their pocket, the proximity sensor switches from detecting something proximal to detecting that the phone is no longer in the user's pocket. Next, a determination is made whether the proximity sensor is again next to an object, such as an ear. If so, the mobile phone can be automatically answered without further user input.
Type: Grant
Filed: February 18, 2011
Date of Patent: August 13, 2013
Assignee: Microsoft Corporation
Inventors: Raman Kumar Sarin, Monica Estela Gonzalez Veron, Kenneth Paul Hinckley, Sumit Kumar, James Kai Yu Lau, Joseph H. Matthews, III, Jae Pum Park
-
Publication number: 20130124207
Abstract: A computing device (e.g., a smart phone, a tablet computer, digital camera, or other device with image capture functionality) causes an image capture device to capture one or more digital images based on audio input (e.g., a voice command) received by the computing device. For example, a user's voice (e.g., a word or phrase) is converted to audio input data by the computing device, which then compares (e.g., using an audio matching algorithm) the audio input data to an expected voice command associated with an image capture application. In another aspect, a computing device activates an image capture application and captures one or more digital images based on a received voice command. In another aspect, a computing device transitions from a low-power state to an active state, activates an image capture application, and causes a camera device to capture digital images based on a received voice command.
Type: Application
Filed: November 15, 2011
Publication date: May 16, 2013
Applicant: Microsoft Corporation
Inventors: Raman Kumar Sarin, Joseph H. Matthews, III, James Kai Yu Lau, Monica Estela Gonzalez Veron, Jae Pum Park
-
Publication number: 20130117365
Abstract: Exemplary methods, apparatus, and systems are disclosed for capturing, organizing, sharing, and/or displaying media. For example, using embodiments of the disclosed technology, a unified playback and browsing experience for a collection of media can be created automatically. For instance, heuristics and metadata can be used to assemble and add narratives to the media data. Furthermore, this representation of media can recompose itself dynamically as more media is added to the collection. While a collection may use a single user's content, sometimes media that is desirable to include in the collection is captured by friends and/or others at the same event. In certain embodiments, media content related to the event can be automatically collected and shared among selected groups. Further, in some embodiments, new media can be automatically incorporated into a media collection associated with the event, and the playback experience dynamically updated.
Type: Application
Filed: November 9, 2011
Publication date: May 9, 2013
Applicant: Microsoft Corporation
Inventors: Udiyan Padmanabhan, William Messing, Martin Shetter, Tatiana Gershanovich, Michael J. Ricker, Jannes Paul Peters, Raman Kumar Sarin, Joseph H. Matthews, III, Monica Gonzalez, Jae Pum Park
-
Publication number: 20120231838
Abstract: Techniques and tools are described for controlling an audio signal of a mobile device. For example, information indicative of acceleration of the mobile device can be received and correlation between the information indicative of acceleration and exemplar whack event data can be determined. An audio signal of the mobile device can be controlled based on the correlation.
Type: Application
Filed: March 11, 2011
Publication date: September 13, 2012
Applicant: Microsoft Corporation
Inventors: James M. Lyon, James Kai Yu Lau, Raman Kumar Sarin, Jae Pum Park, Monica Estela Gonzalez Veron
-
Publication number: 20120214542
Abstract: The present disclosure relates to a mobile phone and a method for answering such a phone automatically without user input. In one embodiment, the mobile phone detects that a call is being received. A proximity sensor is then used to detect the presence of a nearby object. For example, this allows a determination to be made whether the mobile phone is within a pocket of the user while the phone is ringing. Then a determination is made whether the proximity sensor changes states. For example, if a user removes the phone from their pocket, the proximity sensor switches from detecting something proximal to detecting that the phone is no longer in the user's pocket. Next, a determination is made whether the proximity sensor is again next to an object, such as an ear. If so, the mobile phone can be automatically answered without further user input.
Type: Application
Filed: February 18, 2011
Publication date: August 23, 2012
Applicant: Microsoft Corporation
Inventors: Raman Kumar Sarin, Monica Estela Gonzalez Veron, Kenneth Paul Hinckley, Sumit Kumar, James Kai Yu Lau, Joseph H. Matthews, III, Jae Pum Park
-
Publication number: 20100321275
Abstract: Described is a multiple display computing device, including technology for automatically selecting among various operating modes so as to display content on the displays based upon their relative positions. For example, concave modes correspond to inwardly facing viewing surfaces of both displays, such as for viewing private content from a single viewpoint. Convex modes have outwardly facing viewing surfaces, such that private content is shown on one display and public content on another. Neutral modes are those in which the viewing surfaces of the displays are generally on a common plane, for single user or multiple user/collaborative viewing depending on each display's output orientation. The displays may be movably coupled to one another, or may be implemented as two detachable computer systems coupled by a network connection.
Type: Application
Filed: June 18, 2009
Publication date: December 23, 2010
Applicant: Microsoft Corporation
Inventors: Kenneth Paul Hinckley, Raman Kumar Sarin
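The concave/convex/neutral distinction above can be sketched by using the hinge angle between the two screens as a proxy for their relative positions. The angle ranges below are illustrative assumptions, not values from the application.

```python
# Map the hinge angle between two displays to an operating mode.

def select_mode(hinge_angle_deg):
    """Select a display mode from the angle between the two screens.

    < 180: screens face inward  (concave: private, single viewpoint)
    = 180: screens on a common plane (neutral: single or collaborative use)
    > 180: screens face outward (convex: private on one, public on the other)
    """
    if hinge_angle_deg < 180:
        return "concave"
    if hinge_angle_deg == 180:
        return "neutral"
    return "convex"

select_mode(120)   # laptop-like posture: both screens face the user
```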
-
Patent number: 6101513
Abstract: A system and method are described for outputting display information according to a print layout defining a set of display items on a page and relative position assignments for the display items on the page, and a separate and distinct page format describing a physical page and a set of virtual pages on the physical page. Means are provided for selecting the print layout from a set of print layouts and the page format from a set of page formats. After the print layout and page format are selected, a view processor fills the set of pages defined in the page format with print information corresponding to the set of display items described within the selected print layout. A print output generator thereafter generates device-specific display data for rendering the physical page containing the filled set of virtual pages constructed by the view processor according to the print layout and page format and a designated output device.
Type: Grant
Filed: May 31, 1996
Date of Patent: August 8, 2000
Assignee: Microsoft Corporation
Inventors: Darren Arthur Shakib, Raman Kumar Sarin, Salim Alam, John Marshall Tippett, David Charles Whitney
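The separation this abstract describes, a print layout listing display items and a distinct page format defining virtual pages on a physical page, with a view processor filling the pages, can be sketched minimally. The data structures and field names below are invented for illustration and are not from the patent.

```python
# Sketch of a view processor filling virtual pages from a print layout.

def fill_pages(print_layout, page_format):
    """Distribute the layout's display items across virtual pages,
    then group the virtual pages onto physical pages."""
    per_page = page_format["items_per_virtual_page"]
    items = print_layout["display_items"]
    virtual = [items[i:i + per_page] for i in range(0, len(items), per_page)]
    vpp = page_format["virtual_pages_per_physical_page"]
    return [virtual[i:i + vpp] for i in range(0, len(virtual), vpp)]

layout = {"display_items": ["title", "date", "notes", "tasks"]}
fmt = {"items_per_virtual_page": 2, "virtual_pages_per_physical_page": 2}
fill_pages(layout, fmt)   # one physical page holding two virtual pages
```

The key design point the abstract makes is that `layout` and `fmt` vary independently: the same item list can be rendered two-up, four-up, or booklet-style by swapping only the page format.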