Patents by Inventor Pierrick Jouet

Pierrick Jouet has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240127552
    Abstract: A camera captures part of a surrounding scene to obtain a user view. Disambiguating information is obtained that uniquely identifies at least one controllable device or group of controllable devices in a set of controllable devices of a given type, or at least one location or direction for a mobile controllable device. The user view is displayed with the disambiguating information relating to at least one controllable device overlaid on it. A command intended to control at least one controllable device and including at least part of the disambiguating information is received, and a message based on the command is sent towards the corresponding controllable device.
    Type: Application
    Filed: January 10, 2022
    Publication date: April 18, 2024
    Inventors: Anthony Laurent, Vincent Alleaume, Caroline Baillard, Matthieu Fradet, Pierrick Jouet
  • Publication number: 20240056503
    Abstract: A first communication device, e.g., having a camera, is pointed by a first person in the direction of a second person with whom the first person wants to communicate in an interactive session. The camera captures the face of the second person. If the second person does not have a device enabling interaction with the first device, a third person in proximity to the second person who has such a device is selected, and a request is transmitted to this second device, in the hands of the third person, to hand the device over to the second person, displaying, for example, the face of the second person. When the handover is detected, the second person may be identified by the device being handed over, and the interaction between the first and the second device may be established.
    Type: Application
    Filed: December 14, 2021
    Publication date: February 15, 2024
    Inventors: Vincent Alleaume, Pierrick Jouet, Anthony Laurent, Caroline Baillard
  • Publication number: 20230409041
    Abstract: A robot (20) includes a base (203, 204) and a detachable probe (205). The probe (205) includes at least one sensor of at least one type. Propulsion of the probe is provided by an ejection mechanism (204, 302) in the base. When the probe is ejected by the base, the probe captures data using its sensor(s) during its trajectory, according to an observation plan established by the base. The probe is then recaptured by the base, the probe data is transferred to the base, and the probe (205) is configured for a new observation.
    Type: Application
    Filed: November 4, 2021
    Publication date: December 21, 2023
    Inventors: Fabien Servant, Olivier Bureller, Philippe Robert, Pierrick Jouet
  • Publication number: 20230394749
    Abstract: Estimating lighting conditions in a mixed reality scene is a challenging task. Adding shadows cast by virtual objects in the scene contributes significantly to the user experience. To this end, a method is proposed, among other solutions, for an autonomously movable device having a camera. The device may move to a first position in the scene and capture a first image of a shadow of the device cast by a light source. The device may move to a second position in the scene and capture a second image of a shadow of the device cast by the light source. Features of the light source may be determined at each of the first and the second positions, and a lighting model may be determined for the scene based on the determined features of the light source in the scene.
    Type: Application
    Filed: October 14, 2021
    Publication date: December 7, 2023
    Inventors: Pierrick Jouet, Caroline Baillard, Philippe Robert
  • Patent number: 11798239
    Abstract: A method for a placement of a virtual object of an augmented or mixed reality application in a real-world 3D environment, comprises: selecting (14), at a runtime of the augmented or mixed reality application, one of a finite set of at least two candidate insertion areas predetermined in the real-world 3D environment for the placement of the virtual object in the real-world 3D environment, based on criteria combining, for each of the candidate insertion areas, relationships between each of: the real-world 3D environment, the virtual object considered with respect to a placement of that virtual object in that candidate insertion area, and a user position; and inserting (14) the virtual object in the selected candidate insertion area.
    Type: Grant
    Filed: January 21, 2022
    Date of Patent: October 24, 2023
    Assignee: INTERDIGITAL CE PATENT HOLDINGS, SAS
    Inventors: Caroline Baillard, Pierrick Jouet, Matthieu Fradet
  • Publication number: 20230326147
    Abstract: In an augmented reality system, helper data are associated with augmented reality anchors to describe the surroundings of each anchor in the real environment. This makes it possible to verify that positional tracking is correct, in other words, that an augmented reality terminal is localized at the right place in the augmented reality scene. The helper data may be shown on request. Typical examples of helper data are a cropped 2D image or a 3D mesh.
    Type: Application
    Filed: July 6, 2021
    Publication date: October 12, 2023
    Inventors: Pierrick Jouet, Caroline Baillard, Matthieu Fradet, Anthony Laurent
  • Publication number: 20230298280
    Abstract: In an augmented reality system, a map of the real environment is generated from a 3D textured mesh obtained through captured data representing the real environment. Some processing is done on the mesh to remove unnecessary elements and generate the map that comprises a set of 2D pictures: one picture for the ground level and one picture for the other elements of the scene. The generated map may then be rendered on an augmented reality device. The ground and the non-ground content may be rendered independently, then additional elements, such as other users of the augmented reality scene or virtual objects, are localized and represented in the map in real-time using a proxy. The rendering can be adapted to the user poses and to the devices themselves.
    Type: Application
    Filed: July 6, 2021
    Publication date: September 21, 2023
    Inventors: Pierrick Jouet, Matthieu Fradet, Vincent Alleaume, Caroline Baillard, Tao Luo, Anthony Laurent
  • Publication number: 20230260207
    Abstract: A method and device are provided for processing images. In one embodiment, the method comprises receiving an image of a scene having a plurality of real objects and determining whether at least one of these real objects is a reference object stored in a database. When a reference object is identified, its candidate shadow map is retrieved if available. When no reference object can be identified, an object in the scene with adequate parameters is selected as a new reference object and its candidate shadow maps are computed using the lighting parameters.
    Type: Application
    Filed: June 23, 2021
    Publication date: August 17, 2023
    Inventors: Philippe Robert, Anthony Laurent, Pierrick Jouet, Tao Luo
  • Publication number: 20230103081
    Abstract: According to embodiments, a scene modelling system may (e.g., initially) obtain and (e.g., subsequently) update a model of a scene based on data describing the scene. The data describing the scene may be received from any of sensors and objects, for example, located in the scene. The scene may comprise a set of connected and unconnected objects. An object may be associated with its own part of the model that may have been built, for example in an initialization phase. A connected object may transmit its (e.g., part of) model to the scene modelling system (e.g., on demand or upon detection of any change). An unconnected object (e.g., and its status) may be recognized in the scene from an image of the object, for example, captured in the scene.
    Type: Application
    Filed: March 4, 2021
    Publication date: March 30, 2023
    Inventors: Philippe Robert, Caroline Baillard, Anthony Laurent, Pierrick Jouet
  • Publication number: 20220157013
    Abstract: A method for processing a 3D scene, and corresponding device, system and computer program are disclosed. In an example embodiment, the disclosed method includes: obtaining an image comprising at least a nadir view of a 3D scene, captured by at least one camera; detecting, in the image, at least one shadow cast by at least one object of the 3D scene acting as a support for the at least one camera; and determining a direction of at least one real light source from the at least one detected shadow and at least information representative of the object.
    Type: Application
    Filed: April 2, 2020
    Publication date: May 19, 2022
    Inventors: Caroline Baillard, Philippe Robert, Pierrick Jouet
  • Publication number: 20220148278
    Abstract: A method for a placement of a virtual object of an augmented or mixed reality application in a real-world 3D environment, comprises: selecting (14), at a runtime of the augmented or mixed reality application, one of a finite set of at least two candidate insertion areas predetermined in the real-world 3D environment for the placement of the virtual object in the real-world 3D environment, based on criteria combining, for each of the candidate insertion areas, relationships between each of: the real-world 3D environment, the virtual object considered with respect to a placement of that virtual object in that candidate insertion area, and a user position; and inserting (14) the virtual object in the selected candidate insertion area.
    Type: Application
    Filed: January 21, 2022
    Publication date: May 12, 2022
    Inventors: Caroline Baillard, Pierrick Jouet, Matthieu Fradet
  • Patent number: 11263816
    Abstract: A method for a placement of a virtual object of an augmented or mixed reality application in a real-world 3D environment, comprises: selecting (14), at a runtime of the augmented or mixed reality application, one of a finite set of at least two candidate insertion areas predetermined in the real-world 3D environment for the placement of the virtual object in the real-world 3D environment, based on criteria combining, for each of the candidate insertion areas, relationships between each of: the real-world 3D environment, the virtual object considered with respect to a placement of that virtual object in that candidate insertion area, and a user position; and inserting (14) the virtual object in the selected candidate insertion area.
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: March 1, 2022
    Assignee: INTERDIGITAL CE PATENT HOLDINGS, SAS
    Inventors: Caroline Baillard, Pierrick Jouet, Matthieu Fradet
  • Patent number: 11176728
    Abstract: A method for rendering non-photorealistic (NPR) content from a set (SI) of at least one image of the same scene is provided. The set of images (SI) is associated with a depth image comprising a set of regions, each region corresponding to a region of a given depth. The method includes generating a segmented image having at least one segmented region generated with a given segmentation scale, the at least one segmented region corresponding to at least one region of the set of regions. A binary edge image is generated in which at least one binary edge region is generated with a given edge extraction scale, the at least one binary edge region corresponding to at least one region of the set of regions. The non-photorealistic content is rendered by combining the segmented image and the binary edge image.
    Type: Grant
    Filed: February 22, 2017
    Date of Patent: November 16, 2021
    Assignee: INTERDIGITAL CE PATENT HOLDINGS, SAS
    Inventors: Caroline Baillard, Pierrick Jouet, Vincent Alleaume
  • Publication number: 20200394842
    Abstract: A method for a placement of a virtual object of an augmented or mixed reality application in a real-world 3D environment, comprises: selecting (14), at a runtime of the augmented or mixed reality application, one of a finite set of at least two candidate insertion areas predetermined in the real-world 3D environment for the placement of the virtual object in the real-world 3D environment, based on criteria combining, for each of the candidate insertion areas, relationships between each of: the real-world 3D environment, the virtual object considered with respect to a placement of that virtual object in that candidate insertion area, and a user position; and inserting (14) the virtual object in the selected candidate insertion area.
    Type: Application
    Filed: November 29, 2017
    Publication date: December 17, 2020
    Inventors: Caroline Baillard, Pierrick Jouet, Matthieu Fradet
  • Patent number: 10825249
    Abstract: In order to blur a virtual object in a video in real time as the video is acquired by a device capturing a real scene, the salient idea is to estimate an apparent motion vector between two successive images captured at two successive device poses, the estimation being based on the motion of the device. The successive images are then filtered based on the estimated apparent motion vector.
    Type: Grant
    Filed: September 14, 2017
    Date of Patent: November 3, 2020
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Pierrick Jouet, Philippe Robert, Matthieu Fradet
  • Patent number: 10747307
    Abstract: A method of selecting an object in an environment including a plurality of real and/or virtual objects, the environment being displayed to a user through a display device, is described. The method includes assigning a gesture path to each object of the plurality of objects, the gesture path comprising a series of gestures to be performed by the user to select the object.
    Type: Grant
    Filed: November 1, 2017
    Date of Patent: August 18, 2020
    Assignee: InterDigital CE Patent Holdings
    Inventors: Vincent Alleaume, Pierrick Jouet, Matthieu Fradet
  • Patent number: 10656705
    Abstract: An augmented reality (AR) interactive system and method are provided. In one embodiment, the system comprises a head mounted user interface configured to receive user input, a processor configured to manage data based on user input, a camera and a display. The camera and the display are in processing communication with one another and with the head mounted user interface via the processor. The processor is configured to determine a user's field of view and the center of the user's field of view based on output of a sensor, and to render images for output to the display. Each of the images includes one or more objects and a plurality of signs each corresponding to a selectable object in the user's field of view. The rendering of images includes altering a first display attribute of a given sign of the plurality of displayed signs based on determining that the user's field of view is centered on the given sign.
    Type: Grant
    Filed: July 22, 2016
    Date of Patent: May 19, 2020
    Assignee: InterDigital CE Patent Holdings, SAS
    Inventors: Vincent Alleaume, Pierrick Jouet, Philippe Robert
  • Patent number: 10645298
    Abstract: The present disclosure relates to methods, apparatus and systems for automatically adapting the zoom coefficient of a system's camera when running an augmented reality application. A pose estimation is performed for the camera that captures the real scene, and the boundaries of the AR scene are calculated. From the camera frustum and these boundaries, three-dimensional rectangles are computed to determine a zoom coefficient that optimizes the view of the scene, as well as the optimal position for the camera. The zoom is automatically adapted, optically or digitally, and the optimal position is indicated to the user by visual, audio or haptic means.
    Type: Grant
    Filed: August 27, 2017
    Date of Patent: May 5, 2020
    Assignee: INTERDIGITAL CE PATENT HOLDINGS, SAS
    Inventors: Anthony Laurent, Pierrick Jouet, Caroline Baillard
  • Publication number: 20190295324
    Abstract: A device and method are provided for establishing, via a processor, a common interacting area. The common interacting area is based on the intersection of the fields of view of a first user device and a second user device. The processor also establishes a border area inside the common interacting area. Any image appearing within the common interacting area that has any portion falling within the border area is modified by the processor. The image itself includes a real component and a computer generated component. A corresponding system, user interface and method are also provided.
    Type: Application
    Filed: March 19, 2019
    Publication date: September 26, 2019
    Inventors: Fabien Servant, Pierrick Jouet, Vincent Alleaume
  • Publication number: 20190251242
    Abstract: A method for authenticating a user is described, in which a processor receives user input representing authentication elements selected among authentication elements presented to the user and, using at least one user interface authentication set, either validates the user authentication or triggers its failure. At least one mandatory authentication element is required to be selected by the user for successful authentication, and at least one failure authentication element triggers failure of authentication when selected by the user. The at least one authentication element corresponds to a time location in a media content.
    Type: Application
    Filed: February 15, 2019
    Publication date: August 15, 2019
    Inventors: Vincent Alleaume, Pierrick Jouet, Tao Luo
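A recurring theme in the abstracts above (e.g., publication numbers 20220157013 and 20230394749) is estimating the direction of a real light source from a shadow cast by a known object. As a purely illustrative sketch, not the method claimed in any of these filings, the basic geometry can be expressed as follows: assuming a distant light source and that both the top of the occluding object and the tip of its cast shadow are known in a common world frame (both inputs are hypothetical here), the direction toward the source is the normalized vector from the shadow tip to the object top.

```python
import math


def light_direction_from_shadow(object_top, shadow_tip):
    """Estimate the direction toward a distant light source.

    A light ray grazes the top of the occluding object and lands at the
    tip of its cast shadow, so the direction *toward* the source is the
    vector from the shadow tip to the object top, normalized. Both
    points are (x, y, z) tuples expressed in the same world frame.
    """
    d = tuple(a - b for a, b in zip(object_top, shadow_tip))
    norm = math.sqrt(sum(c * c for c in d))
    if norm == 0.0:
        raise ValueError("object top and shadow tip coincide")
    return tuple(c / norm for c in d)


# A 2 m pole at the origin casting a 2 m shadow along +x implies a
# light source 45 degrees above the horizon, toward -x:
direction = light_direction_from_shadow((0.0, 0.0, 2.0), (2.0, 0.0, 0.0))
```

In practice the patented methods involve considerably more (shadow detection in images, multiple observation positions, full lighting models); this sketch only captures the single-ray geometric core shared by such approaches.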