Patents by Inventor Jean Sebastien Fouillade
Jean Sebastien Fouillade has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Patent number: 9950431
  Abstract: Initial interaction between a mobile robot and at least one user is described herein. The mobile robot captures several images of its surroundings, and identifies existence of a user in at least one of the several images. The robot then orients itself to face the user, and outputs an instruction to the user with regard to the orientation of the user with respect to the mobile robot. The mobile robot captures images of the face of the user responsive to detecting that the user has followed the instruction. Information captured by the robot is uploaded to a cloud-storage system, where information is included in a profile of the user and is shareable with others.
  Type: Grant
  Filed: January 25, 2016
  Date of Patent: April 24, 2018
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: Jean Sebastien Fouillade, Russel Sanchez, Efstathios Papaefstathiou, Malek M. Chalabi
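The flow described in this abstract (survey the surroundings, face the detected user, instruct them, capture face images, upload to a cloud profile) can be outlined in a short sketch. Everything below is a hypothetical stand-in written for illustration: the RobotClient and CloudStore interfaces, their method names, and the image counts are assumptions, not the patented implementation.

```python
from dataclasses import dataclass
from typing import Optional, Protocol, Sequence


@dataclass
class Detection:
    bearing_deg: float  # direction of the detected user relative to the robot's heading


class RobotClient(Protocol):
    def capture_images(self, count: int) -> Sequence[bytes]: ...
    def detect_user(self, images: Sequence[bytes]) -> Optional[Detection]: ...
    def rotate_by(self, degrees: float) -> None: ...
    def speak(self, text: str) -> None: ...
    def frontal_face_visible(self) -> bool: ...
    def capture_face_images(self, count: int) -> Sequence[bytes]: ...


class CloudStore(Protocol):
    def append_to_profile(self, user_id: str, images: Sequence[bytes]) -> None: ...


def enroll_user(robot: RobotClient, cloud: CloudStore, user_id: str) -> bool:
    """Survey the surroundings, face the user, instruct them, then photograph their face."""
    surroundings = robot.capture_images(count=8)   # several images of the surroundings
    detection = robot.detect_user(surroundings)    # does any image contain a person?
    if detection is None:
        return False
    robot.rotate_by(detection.bearing_deg)         # orient the robot toward the user
    robot.speak("Please stand in front of me and look at my camera.")
    if not robot.frontal_face_visible():           # did the user follow the instruction?
        return False
    faces = robot.capture_face_images(count=5)
    cloud.append_to_profile(user_id, faces)        # stored in a shareable user profile
    return True
```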
- Patent number: 9578076
  Abstract: Technology is described for visually communicating using a robotic device. An example of a method can include a video feed sent from the video camera of the robotic device to the remote user. A projection surface identified in the video feed can then be sent to the remote user using an application. Another operation can be obtaining an image from the remote user using the application. The image created by the remote user can then be projected on the projection surface.
  Type: Grant
  Filed: May 2, 2011
  Date of Patent: February 21, 2017
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: Charles Olivier, Jean Sebastien Fouillade, William M. Crow, Francois Burianek, Russ Sanchez
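As a rough illustration of the sequence in this abstract (stream video to the remote user, identify a projection surface, receive an image from the remote user, project it), here is a minimal sketch. The robot, surface, and session interfaces are assumptions made for illustration only; the patent does not define this API.

```python
from typing import Optional, Protocol


class ProjectionSurface(Protocol):
    """A planar region (e.g. a wall or tabletop) found in the camera feed."""


class Robot(Protocol):
    def stream_video_to(self, session: "RemoteSession") -> None: ...
    def find_projection_surface(self) -> Optional[ProjectionSurface]: ...
    def project(self, image: bytes, surface: ProjectionSurface) -> None: ...


class RemoteSession(Protocol):
    def send_surface(self, surface: ProjectionSurface) -> None: ...
    def receive_image(self) -> Optional[bytes]: ...  # image created by the remote user


def share_drawing(robot: Robot, session: RemoteSession) -> None:
    robot.stream_video_to(session)              # live feed to the remote user
    surface = robot.find_projection_surface()   # projection surface identified in the feed
    if surface is None:
        return
    session.send_surface(surface)               # let the remote application target it
    image = session.receive_image()             # remote user supplies an image
    if image is not None:
        robot.project(image, surface)           # project it on the projection surface
```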
- Publication number: 20160136817
  Abstract: Initial interaction between a mobile robot and at least one user is described herein. The mobile robot captures several images of its surroundings, and identifies existence of a user in at least one of the several images. The robot then orients itself to face the user, and outputs an instruction to the user with regard to the orientation of the user with respect to the mobile robot. The mobile robot captures images of the face of the user responsive to detecting that the user has followed the instruction. Information captured by the robot is uploaded to a cloud-storage system, where information is included in a profile of the user and is shareable with others.
  Type: Application
  Filed: January 25, 2016
  Publication date: May 19, 2016
  Inventors: Jean Sebastien Fouillade, Russel Sanchez, Efstathios Papaefstathiou, Malek M. Chalabi
- Patent number: 9259842
  Abstract: Initial interaction between a mobile robot and at least one user is described herein. The mobile robot captures several images of its surroundings, and identifies existence of a user in at least one of the several images. The robot then orients itself to face the user, and outputs an instruction to the user with regard to the orientation of the user with respect to the mobile robot. The mobile robot captures images of the face of the user responsive to detecting that the user has followed the instruction. Information captured by the robot is uploaded to a cloud-storage system, where information is included in a profile of the user and is shareable with others.
  Type: Grant
  Filed: June 10, 2011
  Date of Patent: February 16, 2016
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: Jean Sebastien Fouillade, Russell Sanchez, Efstathios Papaefstathiou, Malek M. Chalabi
- Patent number: 9079313
  Abstract: The subject disclosure is directed towards controlling a robot based upon sensing a user's natural and intuitive movements and expressions. User movements and/or facial expressions are captured by an image and depth camera, resulting in skeletal data and/or image data that is used to control a robot's operation, e.g., in a real time, remote (e.g., over the Internet) telepresence session. Robot components that may be controlled include robot “expressions” (e.g., audiovisual data output by the robot), robot head movements, robot mobility drive operations (e.g., to propel and/or turn the robot), and robot manipulator operations, e.g., an arm-like mechanism and/or hand-like mechanism.
  Type: Grant
  Filed: March 15, 2011
  Date of Patent: July 14, 2015
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: Charles F. Olivier, III, Jean Sebastien Fouillade
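To make the idea of driving robot components from tracked user motion concrete, here is a minimal sketch. The skeleton fields, robot methods, and the specific mapping rules (head yaw mirrors the user, torso lean drives the base, a raised hand raises the manipulator) are assumptions chosen for illustration; the patent does not specify this mapping.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class SkeletonFrame:
    head_yaw_deg: float      # user's head rotation from the depth camera's skeletal data
    lean_forward: float      # torso lean in the range -1.0 .. 1.0
    right_hand_raised: bool


class Robot(Protocol):
    def turn_head(self, yaw_deg: float) -> None: ...
    def drive(self, speed: float) -> None: ...
    def raise_manipulator(self) -> None: ...
    def lower_manipulator(self) -> None: ...


def apply_user_pose(robot: Robot, frame: SkeletonFrame) -> None:
    robot.turn_head(frame.head_yaw_deg)           # mirror the user's head movement
    robot.drive(speed=0.5 * frame.lean_forward)   # lean forward/back -> drive forward/back
    if frame.right_hand_raised:                   # raised hand -> raise the arm-like mechanism
        robot.raise_manipulator()
    else:
        robot.lower_manipulator()
```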
- Patent number: 9001190
  Abstract: A robot is provided that includes a processor executing instructions that generate an image. The robot also includes a depth sensor that captures depth data about an environment of the robot. Additionally, the robot includes a software component executed by the processor configured to generate a depth map of the environment based on the depth data. The software component is also configured to generate the image based on the depth map and red-green-blue (RGB) data about the environment.
  Type: Grant
  Filed: July 5, 2011
  Date of Patent: April 7, 2015
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: Charles F. Olivier, III, Jean Sebastien Fouillade, Ashley Feniello, Jordan Correa, Russell Sanchez, Malek Chalabi
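A small, runnable sketch of combining a depth map with RGB data to produce an image follows. The depth-weighted shading used here is an illustrative choice, not the rendering method claimed in the patent.

```python
import numpy as np


def render_depth_shaded_image(depth_m: np.ndarray, rgb: np.ndarray,
                              max_depth_m: float = 4.0) -> np.ndarray:
    """depth_m: (H, W) depths in meters; rgb: (H, W, 3) uint8 color image."""
    depth = np.clip(depth_m, 0.0, max_depth_m) / max_depth_m  # normalize to [0, 1]
    shade = 1.0 - depth                                        # nearer pixels appear brighter
    out = rgb.astype(np.float32) * shade[..., None]            # modulate color by depth
    return out.astype(np.uint8)


if __name__ == "__main__":
    # Stand-in data in place of a real depth sensor and camera.
    depth = np.random.uniform(0.5, 4.0, size=(480, 640))
    color = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
    image = render_depth_shaded_image(depth, color)
    print(image.shape, image.dtype)
```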
- Patent number: 8761933
  Abstract: A method is provided for initiating a telepresence session with a person, using a robot. The method includes receiving a request to host a telepresence session at the robot and receiving an identification for a target person for the telepresence session by the robot. The robot then searches a current location for a person. If a person is found, a determination is made regarding whether the person is the target person. If the person found is not the target person, the person is prompted for a location for the target person. The robot moves to the location given by the person in response to the prompt.
  Type: Grant
  Filed: August 2, 2011
  Date of Patent: June 24, 2014
  Assignee: Microsoft Corporation
  Inventors: Charles F. Olivier, III, Jean Sebastien Fouillade, Malek Chalabi, Nathaniel T. Clinton, Russell Sanchez, Adrien Felon, Graham Wheeler, Francois Burianek
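The search-and-ask loop in this abstract can be sketched as follows. The robot interface, the bounded number of hops, and the string-based identities are assumptions for illustration; only the overall flow (search the current location, ask a bystander where the target is, move there) comes from the abstract.

```python
from typing import Optional, Protocol


class Robot(Protocol):
    def find_person_nearby(self) -> Optional[str]: ...              # identity of a found person, or None
    def ask_for_location(self, target_name: str) -> Optional[str]: ...
    def navigate_to(self, location: str) -> None: ...
    def start_telepresence(self) -> None: ...


def initiate_session(robot: Robot, target_name: str, max_hops: int = 3) -> bool:
    """Search the current location; if someone else is found, ask them where the target is."""
    for _ in range(max_hops):
        person = robot.find_person_nearby()
        if person is None:
            return False                                # nobody here to ask
        if person == target_name:
            robot.start_telepresence()                  # found the target person
            return True
        location = robot.ask_for_location(target_name)  # prompt the person who was found
        if location is None:
            return False
        robot.navigate_to(location)                     # move to the location they gave
    return False
```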
- Publication number: 20130035790
  Abstract: A method is provided for initiating a telepresence session with a person, using a robot. The method includes receiving a request to host a telepresence session at the robot and receiving an identification for a target person for the telepresence session by the robot. The robot then searches a current location for a person. If a person is found, a determination is made regarding whether the person is the target person. If the person found is not the target person, the person is prompted for a location for the target person. The robot moves to the location given by the person in response to the prompt.
  Type: Application
  Filed: August 2, 2011
  Publication date: February 7, 2013
  Applicant: Microsoft Corporation
  Inventors: Charles F. Olivier, III, Jean Sebastien Fouillade, Malek Chalabi, Nathaniel T. Clinton, Russell Sanchez, Adrien Felon, Graham Wheeler, Francois Burianek
- Publication number: 20130010066
  Abstract: A robot is provided that includes a processor executing instructions that generate an image. The robot also includes a depth sensor that captures depth data about an environment of the robot. Additionally, the robot includes a software component executed by the processor configured to generate a depth map of the environment based on the depth data. The software component is also configured to generate the image based on the depth map and red-green-blue (RGB) data about the environment.
  Type: Application
  Filed: July 5, 2011
  Publication date: January 10, 2013
  Applicant: Microsoft Corporation
  Inventors: Charles F. Olivier, III, Jean Sebastien Fouillade, Ashley Feniello, Jordan Correa, Russell Sanchez, Malek Chalabi
- Publication number: 20120316676
  Abstract: Initial interaction between a mobile robot and at least one user is described herein. The mobile robot captures several images of its surroundings, and identifies existence of a user in at least one of the several images. The robot then orients itself to face the user, and outputs an instruction to the user with regard to the orientation of the user with respect to the mobile robot. The mobile robot captures images of the face of the user responsive to detecting that the user has followed the instruction. Information captured by the robot is uploaded to a cloud-storage system, where information is included in a profile of the user and is shareable with others.
  Type: Application
  Filed: June 10, 2011
  Publication date: December 13, 2012
  Applicant: Microsoft Corporation
  Inventors: Jean Sebastien Fouillade, Russell Sanchez, Efstathios Papaefstathiou, Malek M. Chalabi
- Publication number: 20120316680
  Abstract: A robot tracks objects using sensory data, and follows an object selected by a user. The object can be designated by a user from a set of objects recognized by the robot. The relative positions and orientations of the robot and object are determined. The position and orientation of the robot can be used so as to maintain a desired relationship between the object and the robot. Using the navigation system of the robot, during its movement, obstacles can be avoided. If the robot loses contact with the object being tracked, the robot can continue to navigate and search the environment until the object is reacquired.
  Type: Application
  Filed: June 13, 2011
  Publication date: December 13, 2012
  Applicant: Microsoft Corporation
  Inventors: Charles F. Olivier, III, Jean Sebastien Fouillade, Adrien Felon, Jeffrey Cole, Nathaniel T. Clinton, Russell Sanchez, Francois Burianek, Malek M. Chalabi, Harshavardhana Narayana Kikkeri
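The follow-the-object behavior described here is essentially a control loop: locate the object, keep a desired standoff distance while avoiding obstacles, and search if contact is lost. Below is a minimal sketch of that loop; the sensing and navigation calls, the fixed standoff distance, and the step budget are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Optional, Protocol


@dataclass
class Pose:
    x: float
    y: float


class Robot(Protocol):
    def locate_object(self, object_id: str) -> Optional[Pose]: ...       # relative position, or None
    def move_toward(self, target: Pose, standoff_m: float) -> None: ...  # obstacle-avoiding step
    def search_step(self) -> None: ...                                   # wander/scan to reacquire


def follow(robot: Robot, object_id: str, standoff_m: float = 1.0, steps: int = 1000) -> None:
    for _ in range(steps):
        pose = robot.locate_object(object_id)
        if pose is not None:
            robot.move_toward(pose, standoff_m)   # maintain the desired distance to the object
        else:
            robot.search_step()                   # lost contact: keep navigating and searching
```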
- Publication number: 20120281092
  Abstract: Technology is described for visually communicating using a robotic device. An example of a method can include a video feed sent from the video camera of the robotic device to the remote user. A projection surface identified in the video feed can then be sent to the remote user using an application. Another operation can be obtaining an image from the remote user using the application. The image created by the remote user can then be projected on the projection surface.
  Type: Application
  Filed: May 2, 2011
  Publication date: November 8, 2012
  Applicant: Microsoft Corporation
  Inventors: Charles Olivier, Jean Sebastien Fouillade, William M. Crow, Francois Burianek, Russ Sanchez
- Publication number: 20120239196
  Abstract: The subject disclosure is directed towards controlling a robot based upon sensing a user's natural and intuitive movements and expressions. User movements and/or facial expressions are captured by an image and depth camera, resulting in skeletal data and/or image data that is used to control a robot's operation, e.g., in a real time, remote (e.g., over the Internet) telepresence session. Robot components that may be controlled include robot “expressions” (e.g., audiovisual data output by the robot), robot head movements, robot mobility drive operations (e.g., to propel and/or turn the robot), and robot manipulator operations, e.g., an arm-like mechanism and/or hand-like mechanism.
  Type: Application
  Filed: March 15, 2011
  Publication date: September 20, 2012
  Applicant: Microsoft Corporation
  Inventors: Charles F. Olivier, III, Jean Sebastien Fouillade
- Publication number: 20120215380
  Abstract: Described herein are technologies pertaining to robot navigation. The robot includes a video camera that is configured to transmit a live video feed to a remotely located computing device. A user interacts with the live video feed, and the robot navigates in its environment based upon the user interaction. In a first navigation mode, the user selects a location, and the robot autonomously navigates to the selected location. In a second navigation mode, the user causes the point of view of the video camera on the robot to change, and thereafter causes the robot to semi-autonomously drive in a direction corresponding to the new point of view of the video camera. In a third navigation mode, the user causes the robot to navigate to a selected location in the live video feed.
  Type: Application
  Filed: February 23, 2011
  Publication date: August 23, 2012
  Applicant: Microsoft Corporation
  Inventors: Jean Sebastien Fouillade, Charles F. Olivier, III, Malek M. Chalabi, Nathaniel T. Clinton, Russ Sanchez, Chad Aron Voss
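The three navigation modes in this abstract can be pictured as three command types dispatched to the robot. The command and robot types below are hypothetical; the abstract does not specify an API, only the behavior of each mode.

```python
from dataclasses import dataclass
from typing import Protocol, Tuple, Union


@dataclass
class GoToMapLocation:      # mode 1: autonomous drive to a user-selected location
    x: float
    y: float


@dataclass
class DriveAlongView:       # mode 2: semi-autonomous drive toward the camera's new point of view
    pan_deg: float
    tilt_deg: float


@dataclass
class GoToPointInVideo:     # mode 3: navigate to a location selected in the live video feed
    pixel_u: int
    pixel_v: int


class Robot(Protocol):
    def navigate_to(self, x: float, y: float) -> None: ...
    def point_camera(self, pan_deg: float, tilt_deg: float) -> None: ...
    def drive_forward(self) -> None: ...
    def project_pixel_to_floor(self, u: int, v: int) -> Tuple[float, float]: ...


def handle(robot: Robot, cmd: Union[GoToMapLocation, DriveAlongView, GoToPointInVideo]) -> None:
    if isinstance(cmd, GoToMapLocation):
        robot.navigate_to(cmd.x, cmd.y)
    elif isinstance(cmd, DriveAlongView):
        robot.point_camera(cmd.pan_deg, cmd.tilt_deg)   # the user changes the point of view first
        robot.drive_forward()                           # then drives in that direction
    else:
        x, y = robot.project_pixel_to_floor(cmd.pixel_u, cmd.pixel_v)
        robot.navigate_to(x, y)
```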