Patents by Inventor Rajesh K. Hegde
Rajesh K. Hegde has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9111263
Abstract: A template and/or knowledge associated with a synchronous meeting are obtained by a computing device. The computing device then adaptively manages the synchronous meeting based at least in part on the template and/or knowledge.
Type: Grant
Filed: June 15, 2009
Date of Patent: August 18, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jin Li, James E. Oker, Rajesh K. Hegde, Dinei Afonso Ferreira Florencio, Michel Pahud, Sharon K. Cunnington, Philip A. Chou, Zhengyou Zhang
-
Patent number: 9065976
Abstract: Multi-device capture and spatial browsing of conferences is described. In one implementation, a system detects cameras and microphones, such as the webcams on participants' notebook computers, in a conference room, group meeting, or table game, and enlists an ad-hoc array of available devices to capture each participant and the spatial relationships between participants. A video stream composited from the array is browsable by a user to navigate a 3-dimensional representation of the meeting. Each participant may be represented by a video pane, a foreground object, or a 3-D geometric model of the participant's face or body displayed in spatial relation to the other participants in a 3-dimensional arrangement analogous to the spatial arrangement of the meeting. The system may automatically re-orient the 3-dimensional representation as needed to best show a currently interesting event.
Type: Grant
Filed: July 9, 2013
Date of Patent: June 23, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Rajesh K. Hegde, Zhengyou Zhang, Philip A. Chou, Cha Zhang, Zicheng Liu, Sasa Junuzovic
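The patent claims a method, not code, and discloses no implementation; but the spatial-arrangement idea in the abstract — seating participants in a 3-D layout analogous to the physical meeting and re-orienting it toward a currently interesting event — can be illustrated with a rough 2-D sketch (all names and the circular-table model are hypothetical):

```python
import math

def seat_positions(participants, radius=1.0):
    """Place participants at evenly spaced angles around a circular table,
    mirroring the physical seating order (hypothetical model)."""
    n = len(participants)
    return {
        p: (radius * math.cos(2 * math.pi * i / n),
            radius * math.sin(2 * math.pi * i / n))
        for i, p in enumerate(participants)
    }

def reorient(positions, focus):
    """Rotate the whole arrangement so the currently interesting
    participant lands at angle 0, i.e. facing the viewer."""
    fx, fy = positions[focus]
    theta = math.atan2(fy, fx)
    c, s = math.cos(-theta), math.sin(-theta)
    return {p: (x * c - y * s, x * s + y * c)
            for p, (x, y) in positions.items()}
```

A browsing client could call `reorient` whenever its event detector flags a new speaker, keeping the relative seating order intact while changing only the viewing angle.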
-
Patent number: 8675067
Abstract: The subject disclosure is directed towards an immersive conference, in which participants in separate locations are brought together into a common virtual environment (scene), such that they appear to each other to be in a common space, with geometry, appearance, and real-time natural interaction (e.g., gestures) preserved. In one aspect, depth data and video data are processed to place remote participants in the common scene from the first person point of view of a local participant. Sound data may be spatially controlled, and parallax computed to provide a realistic experience. The scene may be augmented with various data, videos and other effects/animations.
Type: Grant
Filed: May 4, 2011
Date of Patent: March 18, 2014
Assignee: Microsoft Corporation
Inventors: Philip A. Chou, Zhengyou Zhang, Cha Zhang, Dinei A. Florencio, Zicheng Liu, Rajesh K. Hegde, Nirupama Chandrasekaran
-
Patent number: 8670018
Abstract: Reaction information of participants to an interaction may be sensed and analyzed to determine one or more reactions or dispositions of the participants. Feedback may be provided based on the determined reactions. The participants may be given an opportunity to opt in to having their reaction information collected, and may be provided complete control over how their reaction information is shared or used.
Type: Grant
Filed: May 27, 2010
Date of Patent: March 11, 2014
Assignee: Microsoft Corporation
Inventors: Sharon K. Cunnington, Rajesh K. Hegde, Kori Quinn, Jin Li, Philip A. Chou, Zhengyou Zhang, Desney S. Tan
-
Publication number: 20140009562
Abstract: Multi-device capture and spatial browsing of conferences is described. In one implementation, a system detects cameras and microphones, such as the webcams on participants' notebook computers, in a conference room, group meeting, or table game, and enlists an ad-hoc array of available devices to capture each participant and the spatial relationships between participants. A video stream composited from the array is browsable by a user to navigate a 3-dimensional representation of the meeting. Each participant may be represented by a video pane, a foreground object, or a 3-D geometric model of the participant's face or body displayed in spatial relation to the other participants in a 3-dimensional arrangement analogous to the spatial arrangement of the meeting. The system may automatically re-orient the 3-dimensional representation as needed to best show a currently interesting event.
Type: Application
Filed: July 9, 2013
Publication date: January 9, 2014
Inventors: Rajesh K. Hegde, Zhengyou Zhang, Philip A. Chou, Cha Zhang, Zicheng Liu, Sasa Junuzovic
-
Patent number: 8600731
Abstract: The claimed subject matter provides a system and/or a method that facilitates communication within a telepresence session. A telepresence session can be initiated within a communication framework that includes two or more virtually represented users who communicate therein. The telepresence session can include at least one virtually represented user who communicates in a first language, where the communication is at least one of a portion of audio, a portion of video, a portion of graphics, a gesture, or a portion of text. An interpreter component can evaluate the communication to translate the identified first language into a second language within the telepresence session; the translation is automatically provided to at least one virtually represented user within the telepresence session.
Type: Grant
Filed: February 4, 2009
Date of Patent: December 3, 2013
Assignee: Microsoft Corporation
Inventors: Sharon Kay Cunnington, Jin Li, Michel Pahud, Rajesh K. Hegde, Zhengyou Zhang
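The abstract describes automatically delivering each participant's communication in the other participants' languages. As a purely illustrative sketch of that routing step (the function names, signature, and per-participant language map are all hypothetical, and the actual translation engine is abstracted away as a callback):

```python
def deliver(message, source_lang, participants, translate):
    """Route one communication to every participant, translating it into
    each participant's language when it differs from the source language.
    `participants` maps name -> language code; `translate` is any
    text-translation callback (hypothetical interface)."""
    delivered = {}
    for name, lang in participants.items():
        if lang == source_lang:
            delivered[name] = message  # same language: pass through
        else:
            delivered[name] = translate(message, source_lang, lang)
    return delivered
```

In a real telepresence system the same dispatch pattern would apply per modality (speech, text, captioned gestures), with a modality-specific translator plugged in for each.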
-
Patent number: 8537196
Abstract: Multi-device capture and spatial browsing of conferences is described. In one implementation, a system detects cameras and microphones, such as the webcams on participants' notebook computers, in a conference room, group meeting, or table game, and enlists an ad-hoc array of available devices to capture each participant and the spatial relationships between participants. A video stream composited from the array is browsable by a user to navigate a 3-dimensional representation of the meeting. Each participant may be represented by a video pane, a foreground object, or a 3-D geometric model of the participant's face or body displayed in spatial relation to the other participants in a 3-dimensional arrangement analogous to the spatial arrangement of the meeting.
Type: Grant
Filed: October 6, 2008
Date of Patent: September 17, 2013
Assignee: Microsoft Corporation
Inventors: Rajesh K. Hegde, Zhengyou Zhang, Philip A. Chou, Cha Zhang, Zicheng Liu, Sasa Junuzovic
-
Publication number: 20120281059
Abstract: The subject disclosure is directed towards an immersive conference, in which participants in separate locations are brought together into a common virtual environment (scene), such that they appear to each other to be in a common space, with geometry, appearance, and real-time natural interaction (e.g., gestures) preserved. In one aspect, depth data and video data are processed to place remote participants in the common scene from the first person point of view of a local participant. Sound data may be spatially controlled, and parallax computed to provide a realistic experience. The scene may be augmented with various data, videos and other effects/animations.
Type: Application
Filed: May 4, 2011
Publication date: November 8, 2012
Applicant: Microsoft Corporation
Inventors: Philip A. Chou, Zhengyou Zhang, Cha Zhang, Dinei A. Florencio, Zicheng Liu, Rajesh K. Hegde, Nirupama Chandrasekaran
-
Patent number: 8276195
Abstract: Described herein is a method that includes receiving multiple requests for access to an exposed media object, wherein the exposed media object represents a live media stream that is being generated by a media source. The method also includes receiving data associated with each entity that provided a request; determining, for each entity, whether that entity is authorized to access the media stream, based at least in part upon the received data; and splitting the media stream into multiple media streams, wherein the number of media streams corresponds to the number of authorized entities. The method also includes automatically applying at least one policy to at least one of the split media streams based at least in part upon the received data.
Type: Grant
Filed: January 2, 2008
Date of Patent: September 25, 2012
Assignee: Microsoft Corporation
Inventors: Rajesh K. Hegde, Cha Zhang, Philip A. Chou, Zicheng Liu
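The claimed method — authorize each requester, split the live stream into one logical copy per authorized entity, then attach per-entity policies — can be sketched roughly as follows. The patent discloses no code; every name here (`Request`, `split_stream`, the dict-based stream stand-in, the example policies) is a hypothetical illustration:

```python
from dataclasses import dataclass

@dataclass
class Request:
    """A request for the exposed media object, with data about the
    requesting entity (hypothetical shape)."""
    entity: str
    credentials: dict

def split_stream(requests, is_authorized, policies):
    """For each authorized requester, create one logical copy of the live
    stream and attach any per-entity policies (e.g. downscaling, muting).
    Unauthorized requesters get no stream at all."""
    streams = {}
    for req in requests:
        if not is_authorized(req):
            continue  # authorization decision uses the received data
        stream = {"entity": req.entity, "filters": []}
        for policy in policies.get(req.entity, []):
            stream["filters"].append(policy)
        streams[req.entity] = stream
    return streams
```

Note that the number of entries returned equals the number of authorized entities, matching the splitting step in the claim.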
-
Patent number: 8140715
Abstract: A virtual media device is described for processing one or more input signals from one or more physical media input devices, to thereby generate an output signal for use by a consuming application module. The consuming application module interacts with the virtual media device as if it were a physical media input device. The virtual media device thereby frees the application module and its user from the burden of having to take specific account of the physical media input devices that are connected to a computing environment. The virtual media device can be coupled to one or more microphone devices, one or more video input devices, or a combination of audio and video input devices, etc. The virtual media device can apply any number of processing modules to generate the output signal, each performing a different respective operation.
Type: Grant
Filed: May 28, 2009
Date of Patent: March 20, 2012
Assignee: Microsoft Corporation
Inventors: Zicheng Liu, Rajesh K. Hegde, Philip A. Chou
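The core idea — many physical inputs hidden behind one device-like interface, with a chain of processing modules producing the output signal — is the facade/pipeline pattern. A minimal sketch, under the assumption that inputs are callables yielding aligned sample lists (the class name, the summing combiner, and the module interface are all hypothetical, not the patent's design):

```python
class VirtualMediaDevice:
    """Aggregates several physical input sources behind one read() call,
    so a consuming application sees a single media input device."""

    def __init__(self, inputs, modules):
        self.inputs = inputs    # callables returning raw sample lists
        self.modules = modules  # callables: signal -> signal, applied in order

    def read(self):
        # Naive combination: sum time-aligned samples from every input
        # (a real mixer would resample, align, and normalize).
        signals = [source() for source in self.inputs]
        combined = [sum(samples) for samples in zip(*signals)]
        # Each processing module performs a different respective operation.
        for module in self.modules:
            combined = module(combined)
        return combined
```

The application calls `read()` exactly as it would on a single microphone; adding or removing physical devices changes only the `inputs` list, not the consumer.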
-
Publication number: 20110295392
Abstract: Reaction information of participants to an interaction may be sensed and analyzed to determine one or more reactions or dispositions of the participants. Feedback may be provided based on the determined reactions. The participants may be given an opportunity to opt in to having their reaction information collected, and may be provided complete control over how their reaction information is shared or used.
Type: Application
Filed: May 27, 2010
Publication date: December 1, 2011
Applicant: Microsoft Corporation
Inventors: Sharon K. Cunnington, Rajesh K. Hegde, Kori Quinn, Jin Li, Philip A. Chou, Zhengyou Zhang, Desney S. Tan
-
Publication number: 20100318399
Abstract: A template and/or knowledge associated with a synchronous meeting are obtained by a computing device. The computing device then adaptively manages the synchronous meeting based at least in part on the template and/or knowledge.
Type: Application
Filed: June 15, 2009
Publication date: December 16, 2010
Applicant: Microsoft Corporation
Inventors: Jin Li, James E. Oker, Rajesh K. Hegde, Dinei Afonso Ferreira Florencio, Michel Pahud, Sharon K. Cunnington, Philip A. Chou, Zhengyou Zhang
-
Patent number: 7853886
Abstract: Persistent, spatial collaboration on the web supports a free-form, user-intuitive approach to a variety of projects and activities. Users can place differing object types at any time, anywhere on a web page, and/or the system can automatically, and with no user effort, affect object placement based on one or more metadata characteristics. A user can, in real time, see changes made by another user to a web page and, if desired, react accordingly, enabling true collaboration even if the various users are at remote locations. The flexibility of the methodology and system provides a platform for users to engage in projects and activities in a manner and environment suited to the users' mindsets, creativity, and natural proclivities.
Type: Grant
Filed: February 27, 2007
Date of Patent: December 14, 2010
Assignee: Microsoft Corporation
Inventors: Steven M. Drucker, Aamer Hydrie, Li-wei He, Rajesh K. Hegde, Zhengyou Zhang
-
Publication number: 20100302462
Abstract: A virtual media device is described for processing one or more input signals from one or more physical media input devices, to thereby generate an output signal for use by a consuming application module. The consuming application module interacts with the virtual media device as if it were a physical media input device. The virtual media device thereby frees the application module and its user from the burden of having to take specific account of the physical media input devices that are connected to a computing environment. The virtual media device can be coupled to one or more microphone devices, one or more video input devices, or a combination of audio and video input devices, etc. The virtual media device can apply any number of processing modules to generate the output signal, each performing a different respective operation.
Type: Application
Filed: May 28, 2009
Publication date: December 2, 2010
Applicant: Microsoft Corporation
Inventors: Zicheng Liu, Rajesh K. Hegde, Philip A. Chou
-
Publication number: 20100198579
Abstract: The claimed subject matter provides a system and/or a method that facilitates communication within a telepresence session. A telepresence session can be initiated within a communication framework that includes two or more virtually represented users who communicate therein. The telepresence session can include at least one virtually represented user who communicates in a first language, where the communication is at least one of a portion of audio, a portion of video, a portion of graphics, a gesture, or a portion of text. An interpreter component can evaluate the communication to translate the identified first language into a second language within the telepresence session; the translation is automatically provided to at least one virtually represented user within the telepresence session.
Type: Application
Filed: February 4, 2009
Publication date: August 5, 2010
Applicant: Microsoft Corporation
Inventors: Sharon Kay Cunnington, Jin Li, Michel Pahud, Rajesh K. Hegde, Zhengyou Zhang
-
Publication number: 20100085416
Abstract: Multi-device capture and spatial browsing of conferences is described. In one implementation, a system detects cameras and microphones, such as the webcams on participants' notebook computers, in a conference room, group meeting, or table game, and enlists an ad-hoc array of available devices to capture each participant and the spatial relationships between participants. A video stream composited from the array is browsable by a user to navigate a 3-dimensional representation of the meeting. Each participant may be represented by a video pane, a foreground object, or a 3-D geometric model of the participant's face or body displayed in spatial relation to the other participants in a 3-dimensional arrangement analogous to the spatial arrangement of the meeting.
Type: Application
Filed: October 6, 2008
Publication date: April 8, 2010
Applicant: Microsoft Corporation
Inventors: Rajesh K. Hegde, Zhengyou Zhang, Philip A. Chou, Cha Zhang, Zicheng Liu, Sasa Junuzovic
-
Publication number: 20090172779
Abstract: Described herein is a method that includes receiving multiple requests for access to an exposed media object, wherein the exposed media object represents a live media stream that is being generated by a media source. The method also includes receiving data associated with each entity that provided a request; determining, for each entity, whether that entity is authorized to access the media stream, based at least in part upon the received data; and splitting the media stream into multiple media streams, wherein the number of media streams corresponds to the number of authorized entities. The method also includes automatically applying at least one policy to at least one of the split media streams based at least in part upon the received data.
Type: Application
Filed: January 2, 2008
Publication date: July 2, 2009
Applicant: Microsoft Corporation
Inventors: Rajesh K. Hegde, Cha Zhang, Philip A. Chou, Zicheng Liu
-
Publication number: 20080244418
Abstract: The disclosed architecture extends the traditional integrated design environment (IDE) designed for solo development work with features and capabilities that support collaborative distributed work (e.g., distributed pair programming). The architecture provides integrated communication channels that enable the participants to engage in collaborative work. The graphical user interface capabilities are also extended with distributed functionality specific to multi-party (e.g., pair) programming, including, but not limited to, manual and/or automatic role control and turn-taking, multiple cursors (destructive and non-destructive), remote highlighting, a decaying edit trail, easy access to the history of edits, a language-independent event model, and view convergence and divergence. The system uses the collaboration and communication patterns and information to identify problems, extract metrics, make recommendations, etc.
Type: Application
Filed: August 14, 2007
Publication date: October 2, 2008
Applicant: Microsoft Corporation
Inventors: Dragos A. Manolescu, Rajesh K. Hegde
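Among the listed features, role control and turn-taking have a classic implementation shape: a driver/observer token that only one participant holds at a time. A minimal sketch (the class and method names are hypothetical illustrations, not the published architecture):

```python
class TurnTaking:
    """Token-based role control for distributed pair programming:
    only the current 'driver' may edit; everyone else observes."""

    def __init__(self, participants):
        self.participants = list(participants)
        self.driver = self.participants[0]  # first participant starts driving

    def request_turn(self, name):
        """Manual role control: a registered observer takes the driver
        token; an unknown name leaves the current driver in place."""
        if name in self.participants:
            self.driver = name
        return self.driver

    def can_edit(self, name):
        return name == self.driver
```

Automatic turn-taking, also mentioned in the abstract, could sit on top of this by calling `request_turn` from a policy (e.g. an idle timer) instead of a user action.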
-
Publication number: 20080209327
Abstract: Persistent, spatial collaboration on the web supports a free-form, user-intuitive approach to a variety of projects and activities. Users can place differing object types at any time, anywhere on a web page, and/or the system can automatically, and with no user effort, affect object placement based on one or more metadata characteristics. A user can, in real time, see changes made by another user to a web page and, if desired, react accordingly, enabling true collaboration even if the various users are at remote locations. The flexibility of the methodology and system provides a platform for users to engage in projects and activities in a manner and environment suited to the users' mindsets, creativity, and natural proclivities.
Type: Application
Filed: February 27, 2007
Publication date: August 28, 2008
Applicant: Microsoft Corporation
Inventors: Steven M. Drucker, Aamer Hydrie, Li-wei He, Rajesh K. Hegde, Zhengyou Zhang