Patents by Inventor Jean Sebastien Fouillade

Jean Sebastien Fouillade has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9950431
    Abstract: Initial interaction between a mobile robot and at least one user is described herein. The mobile robot captures several images of its surroundings and identifies the presence of a user in at least one of them. The robot then orients itself to face the user and outputs an instruction telling the user how to position himself or herself relative to the robot. Once the robot detects that the user has followed the instruction, it captures images of the user's face. The captured information is uploaded to a cloud-storage system, where it is included in a profile of the user and can be shared with others. (A short illustrative sketch follows this entry.)
    Type: Grant
    Filed: January 25, 2016
    Date of Patent: April 24, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jean Sebastien Fouillade, Russell Sanchez, Efstathios Papaefstathiou, Malek M. Chalabi
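
The abstract above walks through a concrete interaction flow: detect a user in captured images, orient to face them, instruct them, capture face images, and upload the result to cloud storage. Below is a minimal Python sketch of that flow, assuming a simple perception and drive interface; every name in it (Detection, enroll_user, rotate_by, and the callbacks) is a hypothetical placeholder, not an API from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    bearing_deg: float  # angle from the robot's current heading to the user

@dataclass
class Profile:
    face_images: list = field(default_factory=list)

def say(text):
    print(f"[speech] {text}")

def rotate_by(angle_deg):
    print(f"[drive] rotating {angle_deg:+.1f} degrees to face the user")

def enroll_user(detections, capture_face, upload, num_captures=3):
    """Orient toward the first detected user, instruct them, capture, upload."""
    if not detections:
        return None                        # no user found in the captured images
    rotate_by(detections[0].bearing_deg)   # orient the robot to face the user
    say("Please stand in front of me and look at my camera.")
    profile = Profile()
    while len(profile.face_images) < num_captures:
        image = capture_face()             # returns None until the user complies
        if image is not None:
            profile.face_images.append(image)
    upload(profile)                        # cloud storage; shareable with others
    return profile

# Toy run with stubbed perception and upload:
enroll_user(
    [Detection(bearing_deg=30.0)],
    capture_face=lambda: "face_frame",
    upload=lambda p: print(f"[cloud] uploaded {len(p.face_images)} face images"),
)
```
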
  • Patent number: 9578076
    Abstract: Technology is described for visual communication using a robotic device. An example method can include sending a video feed from the robotic device's video camera to a remote user. A projection surface identified in the video feed can then be presented to the remote user through an application. A further operation can be obtaining an image from the remote user through the application. The image created by the remote user can then be projected onto the projection surface. (A short illustrative sketch follows this entry.)
    Type: Grant
    Filed: May 2, 2011
    Date of Patent: February 21, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Charles Olivier, Jean Sebastien Fouillade, William M. Crow, Francois Burianek, Russ Sanchez
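
Patent 9578076 above describes a round trip: stream video to a remote user, identify a projection surface in the feed, collect an image from the remote user, and project it onto that surface. Here is a minimal sketch of one such round trip, with every function stubbed out as a hypothetical placeholder rather than an actual API from the patent.

```python
def identify_projection_surface(frame):
    """Stand-in for surface detection; returns a labeled region of the frame."""
    return {"frame": frame, "region": (40, 60, 320, 240)}  # x, y, w, h

def visual_communication_step(camera_frame, send_to_remote, receive_image, project):
    surface = identify_projection_surface(camera_frame)
    send_to_remote(camera_frame, surface)   # remote user sees the feed + surface
    image = receive_image()                 # remote user draws or picks an image
    if image is not None:
        project(image, surface["region"])   # render it on the physical surface

# Toy run with stubbed networking and projector:
visual_communication_step(
    camera_frame="frame_0",
    send_to_remote=lambda f, s: print(f"[net] sent {f} with surface {s['region']}"),
    receive_image=lambda: "remote_sketch.png",
    project=lambda img, region: print(f"[projector] showing {img} at {region}"),
)
```
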
  • Publication number: 20160136817
    Abstract: Initial interaction between a mobile robot and at least one user is described herein. The mobile robot captures several images of its surroundings and identifies the presence of a user in at least one of them. The robot then orients itself to face the user and outputs an instruction telling the user how to position himself or herself relative to the robot. Once the robot detects that the user has followed the instruction, it captures images of the user's face. The captured information is uploaded to a cloud-storage system, where it is included in a profile of the user and can be shared with others.
    Type: Application
    Filed: January 25, 2016
    Publication date: May 19, 2016
    Inventors: Jean Sebastien Fouillade, Russell Sanchez, Efstathios Papaefstathiou, Malek M. Chalabi
  • Patent number: 9259842
    Abstract: Initial interaction between a mobile robot and at least one user is described herein. The mobile robot captures several images of its surroundings and identifies the presence of a user in at least one of them. The robot then orients itself to face the user and outputs an instruction telling the user how to position himself or herself relative to the robot. Once the robot detects that the user has followed the instruction, it captures images of the user's face. The captured information is uploaded to a cloud-storage system, where it is included in a profile of the user and can be shared with others.
    Type: Grant
    Filed: June 10, 2011
    Date of Patent: February 16, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jean Sebastien Fouillade, Russell Sanchez, Efstathios Papaefstathiou, Malek M. Chalabi
  • Patent number: 9079313
    Abstract: The subject disclosure is directed towards controlling a robot based upon sensing a user's natural and intuitive movements and expressions. User movements and/or facial expressions are captured by an image-and-depth camera, resulting in skeletal data and/or image data that is used to control a robot's operation, e.g., in a real-time, remote (e.g., over the Internet) telepresence session. Robot components that may be controlled include robot “expressions” (e.g., audiovisual data output by the robot), robot head movements, robot mobility drive operations (e.g., to propel and/or turn the robot), and robot manipulator operations, e.g., an arm-like mechanism and/or hand-like mechanism. (A short illustrative sketch follows this entry.)
    Type: Grant
    Filed: March 15, 2011
    Date of Patent: July 14, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Charles F. Olivier, III, Jean Sebastien Fouillade
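
The control scheme in patent 9079313 maps captured skeletal data and expressions onto robot actuators. The toy illustration below shows what such a mapping could look like, assuming a skeleton frame arrives as a dictionary of joint angles; the joint names, gains, and command fields are all invented for the example.

```python
def skeleton_to_commands(skeleton):
    """Map a few tracked joints to robot actuator commands (all names made up)."""
    commands = {}
    # Head: copy the user's head yaw/pitch onto the robot's pan/tilt unit.
    commands["head_pan_deg"] = skeleton["head_yaw_deg"]
    commands["head_tilt_deg"] = skeleton["head_pitch_deg"]
    # Drive: lean forward/back maps to forward speed, shoulder twist to turning.
    commands["forward_m_s"] = 0.02 * skeleton["torso_lean_deg"]
    commands["turn_deg_s"] = 0.5 * skeleton["shoulder_twist_deg"]
    # Expression: a detected smile triggers an audiovisual "expression" output.
    commands["expression"] = "smile" if skeleton["smiling"] else "neutral"
    return commands

# One captured frame of (invented) skeletal data:
frame = {"head_yaw_deg": 15.0, "head_pitch_deg": -5.0,
         "torso_lean_deg": 10.0, "shoulder_twist_deg": -8.0, "smiling": True}
print(skeleton_to_commands(frame))
```
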
  • Patent number: 9001190
    Abstract: A robot is provided that includes a processor executing instructions that generate an image. The robot also includes a depth sensor that captures depth data about an environment of the robot. Additionally, the robot includes a software component, executed by the processor, that is configured to generate a depth map of the environment based on the depth data. The software component is also configured to generate the image based on the depth map and red-green-blue (RGB) data about the environment. (A short illustrative sketch follows this entry.)
    Type: Grant
    Filed: July 5, 2011
    Date of Patent: April 7, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Charles F. Olivier, III, Jean Sebastien Fouillade, Ashley Feniello, Jordan Correa, Russell Sanchez, Malek Chalabi
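
Patent 9001190 combines a depth map built from depth-sensor samples with RGB data to generate an image. The following is one plausible, deliberately simplified reading in pure Python, where nearer pixels render brighter; the 4-meter range and the fading rule are assumptions for the example, not details from the patent.

```python
def build_depth_map(depth_samples, width, height):
    """Arrange raw depth samples (meters) into a row-major 2-D grid."""
    return [depth_samples[r * width:(r + 1) * width] for r in range(height)]

def render(depth_map, rgb):
    """Fade each RGB pixel by its depth so nearer objects appear brighter."""
    out = []
    for depth_row, rgb_row in zip(depth_map, rgb):
        row = []
        for d, (r, g, b) in zip(depth_row, rgb_row):
            k = max(0.0, 1.0 - d / 4.0)       # 4 m assumed max sensor range
            row.append((int(r * k), int(g * k), int(b * k)))
        out.append(row)
    return out

# Toy 2x2 frame: same color everywhere, increasing depth left to right, top to bottom.
depth = build_depth_map([1.0, 2.0, 3.0, 4.0], width=2, height=2)
rgb = [[(200, 100, 50), (200, 100, 50)], [(200, 100, 50), (200, 100, 50)]]
print(render(depth, rgb))
```
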
  • Patent number: 8761933
    Abstract: A method is provided for initiating a telepresence session with a person using a robot. The method includes the robot receiving a request to host a telepresence session and an identification of the target person for that session. The robot then searches its current location for a person. If a person is found, the robot determines whether that person is the target person. If not, the robot prompts the person found for the target person's location and moves to the location given in response to the prompt. (A short illustrative sketch follows this entry.)
    Type: Grant
    Filed: August 2, 2011
    Date of Patent: June 24, 2014
    Assignee: Microsoft Corporation
    Inventors: Charles F. Olivier, III, Jean Sebastien Fouillade, Malek Chalabi, Nathaniel T. Clinton, Russell Sanchez, Adrien Felon, Graham Wheeler, Francois Burianek
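
The search procedure in patent 8761933 is essentially a loop: scan the current location, check whether the person found is the target, and otherwise ask for directions and move on. A compact sketch of that loop with stubbed perception and navigation follows; all function names are hypothetical.

```python
def find_target(robot_search, identify, ask_for_location, move_to, max_hops=5):
    """Search locations until the target person is found or hops run out."""
    for _ in range(max_hops):
        person = robot_search()           # scan the current location
        if person is None:
            return None                   # nobody here; give up
        if identify(person):              # is this the target person?
            return person
        hint = ask_for_location(person)   # prompt the bystander for a hint
        if hint is None:
            return None
        move_to(hint)                     # drive to the suggested location
    return None

# Toy run: the first location has a bystander who points us to the kitchen.
locations = iter(["alice", "bob"])
found = find_target(
    robot_search=lambda: next(locations, None),
    identify=lambda p: p == "bob",
    ask_for_location=lambda p: "kitchen",
    move_to=lambda loc: print(f"[drive] moving to {loc}"),
)
print("target found:", found)
```
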
  • Publication number: 20130035790
    Abstract: A method is provided for initiating a telepresence session with a person using a robot. The method includes the robot receiving a request to host a telepresence session and an identification of the target person for that session. The robot then searches its current location for a person. If a person is found, the robot determines whether that person is the target person. If not, the robot prompts the person found for the target person's location and moves to the location given in response to the prompt.
    Type: Application
    Filed: August 2, 2011
    Publication date: February 7, 2013
    Applicant: Microsoft Corporation
    Inventors: Charles F. Olivier, III, Jean Sebastien Fouillade, Malek Chalabi, Nathaniel T. Clinton, Russell Sanchez, Adrien Felon, Graham Wheeler, Francois Burianek
  • Publication number: 20130010066
    Abstract: A robot is provided that includes a processor executing instructions that generate an image. The robot also includes a depth sensor that captures depth data about an environment of the robot. Additionally, the robot includes a software component, executed by the processor, that is configured to generate a depth map of the environment based on the depth data. The software component is also configured to generate the image based on the depth map and red-green-blue (RGB) data about the environment.
    Type: Application
    Filed: July 5, 2011
    Publication date: January 10, 2013
    Applicant: Microsoft Corporation
    Inventors: Charles F. Olivier, III, Jean Sebastien Fouillade, Ashley Feniello, Jordan Correa, Russell Sanchez, Malek Chalabi
  • Publication number: 20120316676
    Abstract: Initial interaction between a mobile robot and at least one user is described herein. The mobile robot captures several images of its surroundings and identifies the presence of a user in at least one of them. The robot then orients itself to face the user and outputs an instruction telling the user how to position himself or herself relative to the robot. Once the robot detects that the user has followed the instruction, it captures images of the user's face. The captured information is uploaded to a cloud-storage system, where it is included in a profile of the user and can be shared with others.
    Type: Application
    Filed: June 10, 2011
    Publication date: December 13, 2012
    Applicant: Microsoft Corporation
    Inventors: Jean Sebastien Fouillade, Russell Sanchez, Efstathios Papaefstathiou, Malek M. Chalabi
  • Publication number: 20120316680
    Abstract: A robot tracks objects using sensory data and follows an object selected by a user. The user can designate the object from a set of objects the robot recognizes. The relative positions and orientations of the robot and the object are determined, and the robot adjusts its own position and orientation to maintain a desired relationship with the object. The robot's navigation system avoids obstacles while it moves. If the robot loses contact with the tracked object, it can continue to navigate and search the environment until the object is reacquired. (A short illustrative sketch follows this entry.)
    Type: Application
    Filed: June 13, 2011
    Publication date: December 13, 2012
    Applicant: Microsoft Corporation
    Inventors: Charles F. Olivier, III, Jean Sebastien Fouillade, Adrien Felon, Jeffrey Cole, Nathaniel T. Clinton, Russell Sanchez, Francois Burianek, Malek M. Chalabi, Harshavardhana Narayana Kikkeri
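
The follower described in publication 20120316680 must hold a desired spatial relationship with a tracked object and fall back to searching when the object is lost. Below is a minimal proportional-control sketch of one step of that behavior; the gains, the 1.5-meter following distance, and the rotate-in-place search are assumptions, and obstacle avoidance is left to the navigation stack the abstract mentions.

```python
import math

def follow_step(robot_xy, robot_heading_deg, object_xy, desired_dist_m=1.5):
    """One control step: steer toward the object, hold the desired distance."""
    dx, dy = object_xy[0] - robot_xy[0], object_xy[1] - robot_xy[1]
    dist = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dy, dx)) - robot_heading_deg
    turn = 0.8 * ((bearing + 180) % 360 - 180)   # proportional turn, wrapped
    speed = 0.5 * (dist - desired_dist_m)        # approach or back off
    return speed, turn

def search_pattern():
    """If the object is lost, rotate in place until it is reacquired."""
    return 0.0, 30.0

tracked = (3.0, 1.0)                             # last known object position, or None
speed, turn = follow_step((0.0, 0.0), 0.0, tracked) if tracked else search_pattern()
print(f"speed={speed:.2f} m/s, turn={turn:.1f} deg/s")
```
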
  • Publication number: 20120281092
    Abstract: Technology is described for visual communication using a robotic device. An example method can include sending a video feed from the robotic device's video camera to a remote user. A projection surface identified in the video feed can then be presented to the remote user through an application. A further operation can be obtaining an image from the remote user through the application. The image created by the remote user can then be projected onto the projection surface.
    Type: Application
    Filed: May 2, 2011
    Publication date: November 8, 2012
    Applicant: Microsoft Corporation
    Inventors: Charles Olivier, Jean Sebastien Fouillade, William M. Crow, Francois Burianek, Russ Sanchez
  • Publication number: 20120239196
    Abstract: The subject disclosure is directed towards controlling a robot based upon sensing a user's natural and intuitive movements and expressions. User movements and/or facial expressions are captured by an image-and-depth camera, resulting in skeletal data and/or image data that is used to control a robot's operation, e.g., in a real-time, remote (e.g., over the Internet) telepresence session. Robot components that may be controlled include robot “expressions” (e.g., audiovisual data output by the robot), robot head movements, robot mobility drive operations (e.g., to propel and/or turn the robot), and robot manipulator operations, e.g., an arm-like mechanism and/or hand-like mechanism.
    Type: Application
    Filed: March 15, 2011
    Publication date: September 20, 2012
    Applicant: Microsoft Corporation
    Inventors: Charles F. Olivier, III, Jean Sebastien Fouillade
  • Publication number: 20120215380
    Abstract: Described herein are technologies pertaining to robot navigation. The robot includes a video camera that is configured to transmit a live video feed to a remotely located computing device. A user interacts with the live video feed, and the robot navigates in its environment based upon the user interaction. In a first navigation mode, the user selects a location, and the robot autonomously navigates to the selected location. In a second navigation mode, the user causes the point of view of the video camera on the robot to change, and thereafter causes the robot to semi-autonomously drive in a direction corresponding to the new point of view of the video camera. In a third navigation mode, the user causes the robot to navigate to a selected location in the live video feed. (A short illustrative sketch follows this entry.)
    Type: Application
    Filed: February 23, 2011
    Publication date: August 23, 2012
    Applicant: Microsoft Corporation
    Inventors: Jean Sebastien Fouillade, Charles F. Olivier, III, Malek M. Chalabi, Nathaniel T. Clinton, Russ Sanchez, Chad Aron Voss
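
Publication 20120215380 distinguishes three navigation modes driven by a remote user's interaction with the live feed. The small dispatch sketch below illustrates the three modes using an invented StubRobot interface; none of these method names come from the patent.

```python
def navigate(mode, robot, user_input):
    """Dispatch one of the three navigation modes from the abstract."""
    if mode == "autonomous":
        # Mode 1: user selects a location; the robot plans and drives itself.
        robot.drive_to(user_input["map_point"])
    elif mode == "semi_autonomous":
        # Mode 2: user re-aims the camera, then the robot drives that way.
        robot.point_camera(user_input["pan_deg"], user_input["tilt_deg"])
        robot.drive_forward(heading_deg=user_input["pan_deg"])
    elif mode == "click_in_video":
        # Mode 3: user picks a spot in the live feed; the robot navigates to it.
        target = robot.project_pixel_to_floor(user_input["pixel"])
        robot.drive_to(target)

class StubRobot:
    def drive_to(self, p): print(f"[nav] driving to {p}")
    def point_camera(self, pan, tilt): print(f"[cam] pan={pan}, tilt={tilt}")
    def drive_forward(self, heading_deg): print(f"[nav] forward at {heading_deg} deg")
    def project_pixel_to_floor(self, px): return (px[0] / 100, px[1] / 100)

# Toy run of the third mode:
navigate("click_in_video", StubRobot(), {"pixel": (320, 240)})
```
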