Patents by Inventor Caroline BAILLARD
Caroline BAILLARD has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240127552
Abstract: A camera captures part of a surrounding scene to obtain a user view. Disambiguating information is obtained that uniquely identifies at least one controllable device or group of controllable devices in a set of controllable devices of a given type, or at least one location or direction for a mobile controllable device. The user view is displayed on a display, with the disambiguating information relating to at least one controllable device overlaid on it. A command intended to control at least one controllable device is received, the command including at least part of the disambiguating information, and a message based on the command is sent towards a corresponding controllable device.
Type: Application
Filed: January 10, 2022
Publication date: April 18, 2024
Inventors: Anthony Laurent, Vincent Alleaume, Caroline Baillard, Matthieu Fradet, Pierrick Jouet
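The command flow this abstract describes can be sketched as follows; the class and function names (`ControllableDevice`, `build_command_message`) and the message fields are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

# Hypothetical structures illustrating the disambiguation flow: the overlay
# shows a human-readable label per device, and the outgoing message carries
# part of that disambiguating information alongside the command.

@dataclass
class ControllableDevice:
    device_id: str
    device_type: str
    label: str  # disambiguating information shown in the overlay, e.g. "lamp #2"

def build_command_message(command: str, device: ControllableDevice) -> dict:
    """Build a message carrying the command plus (part of) the
    disambiguating information, to be sent towards the device."""
    return {
        "target": device.device_id,
        "disambiguation": device.label,
        "command": command,
    }

lamp = ControllableDevice("dev-42", "lamp", "lamp #2")
msg = build_command_message("turn_on", lamp)
print(msg)
```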
-
Publication number: 20240056503
Abstract: A first communication device, e.g., one having a camera, is pointed by a first person in the direction of a second person with whom the first person wants to communicate in an interactive session. The camera captures the face of the second person. If the second person does not have a device enabling interaction with the first device, a third person in proximity to the second person who has such a device is selected, and a request is transmitted to that second device, in the hands of the third person, to hand the device over to the second person, for example by displaying the face of the second person. When the handover is detected, the second person may be identified by the device being handed over, and the interaction between the first and the second device may be established.
Type: Application
Filed: December 14, 2021
Publication date: February 15, 2024
Inventors: Vincent Alleaume, Pierrick Jouet, Anthony Laurent, Caroline Baillard
-
Publication number: 20230394749
Abstract: Estimating lighting conditions in a mixed reality scene is a challenging task, and adding shadows cast by virtual objects in the scene contributes significantly to the user experience. To this end, a method is proposed, among others, for an autonomously movable device having a camera. The device may move to a first position in the scene and capture a first image of a shadow of the device cast by a light source. The device may then move to a second position in the scene and capture a second image of a shadow of the device cast by the light source. Features of the light source may be determined at each of the first and second positions, and a lighting model may be determined for the scene based on the determined features of the light source.
Type: Application
Filed: October 14, 2021
Publication date: December 7, 2023
Inventors: Pierrick Jouet, Caroline Baillard, Philippe Robert
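The geometric core of such a shadow-based estimate can be sketched in a few lines: the light, the top of the device, and the tip of its shadow are collinear, so one observation yields a direction towards the light, and observations from two positions would let a point light be triangulated. The function name and coordinates below are illustrative assumptions, not the patented method.

```python
import numpy as np

# Minimal sketch: from the tip of the shadow cast by a point of known
# height, the direction towards the light follows from collinearity of
# light source, object top and shadow tip.

def light_direction(object_top: np.ndarray, shadow_tip: np.ndarray) -> np.ndarray:
    """Unit vector from the shadow tip towards the light (through the object top)."""
    d = object_top - shadow_tip
    return d / np.linalg.norm(d)

top = np.array([0.0, 0.0, 1.0])   # top of the device, 1 m above the ground
tip = np.array([2.0, 0.0, 0.0])   # tip of its shadow on the ground plane
print(light_direction(top, tip))
```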
-
Patent number: 11798239
Abstract: A method for a placement of a virtual object of an augmented or mixed reality application in a real-world 3D environment comprises: selecting (14), at a runtime of the augmented or mixed reality application, one of a finite set of at least two candidate insertion areas predetermined in the real-world 3D environment for the placement of the virtual object, based on criteria combining, for each of the candidate insertion areas, relationships between each of the real-world 3D environment, the virtual object considered with respect to a placement in that candidate insertion area, and a user position; and inserting (14) the virtual object in the selected candidate insertion area.
Type: Grant
Filed: January 21, 2022
Date of Patent: October 24, 2023
Assignee: INTERDIGITAL CE PATENT HOLDINGS, SAS
Inventors: Caroline Baillard, Pierrick Jouet, Matthieu Fradet
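Selecting one of a finite set of candidate areas by combining several criteria can be sketched as a simple scoring loop. The criteria and weights below (fit of the object in the area, distance to the user) are illustrative assumptions only; the patent's actual combination of relationships is not reproduced here.

```python
import math

# Hypothetical scoring sketch: each candidate area is a (x, y, z, capacity)
# tuple; the score combines whether the object fits and how close the area
# is to the user. Weights and criteria are illustrative assumptions.

def score(area, obj_size, user_pos, w_fit=0.5, w_dist=0.5):
    cx, cy, cz, capacity = area
    fit = 1.0 if obj_size <= capacity else 0.0   # object/area relationship
    dist = math.dist((cx, cy, cz), user_pos)     # area/user relationship
    return w_fit * fit + w_dist / (1.0 + dist)

def select_area(areas, obj_size, user_pos):
    """Pick the best-scoring candidate insertion area."""
    return max(areas, key=lambda a: score(a, obj_size, user_pos))

areas = [(0, 0, 0, 1.0), (3, 0, 0, 2.0)]
best = select_area(areas, obj_size=1.5, user_pos=(0.0, 0.0, 0.0))
print(best)
```

The nearby area is rejected here because the object does not fit in it, illustrating how combined criteria can override pure proximity.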
-
Publication number: 20230326147
Abstract: In an augmented reality system, helper data are associated with augmented reality anchors to describe the surroundings of the anchor in the real environment. This makes it possible to verify that positional tracking is correct, in other words, that an augmented reality terminal is localized at the right place in the augmented reality scene. The helper data may be shown on request. Typical examples of helper data are a cropped 2D image or a 3D mesh.
Type: Application
Filed: July 6, 2021
Publication date: October 12, 2023
Inventors: Pierrick Jouet, Caroline Baillard, Matthieu Fradet, Anthony Laurent
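An anchor enriched with such helper data might be structured as below; all class and field names are hypothetical, chosen only to illustrate the idea of attaching a cropped image or mesh to an anchor and retrieving it on request.

```python
from dataclasses import dataclass, field

# Hypothetical data model: an AR anchor carries helper data describing its
# real-world surroundings, shown on request so the user can check that
# tracking localised the terminal at the right place.

@dataclass
class HelperData:
    kind: str       # e.g. "cropped_image" or "mesh"
    payload: bytes  # encoded image or mesh data

@dataclass
class Anchor:
    anchor_id: str
    position: tuple                 # (x, y, z) in the AR scene
    helpers: list = field(default_factory=list)

    def helper_on_request(self, kind: str):
        """Return the first helper of the requested kind, if any."""
        return next((h for h in self.helpers if h.kind == kind), None)

a = Anchor("anchor-1", (0.0, 0.0, 0.0),
           [HelperData("cropped_image", b"...jpeg bytes...")])
print(a.helper_on_request("cropped_image").kind)
```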
-
Publication number: 20230298280
Abstract: In an augmented reality system, a map of the real environment is generated from a 3D textured mesh obtained from captured data representing the real environment. Processing is performed on the mesh to remove unnecessary elements and generate a map comprising a set of 2D pictures: one picture for the ground level and one picture for the other elements of the scene. The generated map may then be rendered on an augmented reality device. The ground and the non-ground content may be rendered independently; additional elements, such as other users of the augmented reality scene or virtual objects, are then localized and represented in the map in real time using a proxy. The rendering can be adapted to the user poses and to the devices themselves.
Type: Application
Filed: July 6, 2021
Publication date: September 21, 2023
Inventors: Pierrick Jouet, Matthieu Fradet, Vincent Alleaume, Caroline Baillard, Tao Luo, Anthony Laurent
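The ground/non-ground split that feeds the two 2D pictures can be illustrated with a toy height test on mesh vertices; the threshold, axis convention, and function name are assumptions for illustration, not the patent's processing.

```python
import numpy as np

# Toy illustration: bin mesh vertices by height so that near-ground
# geometry goes into the ground-level picture and the rest into the
# picture of the other scene elements.

def split_ground(vertices: np.ndarray, ground_eps: float = 0.05):
    """Split vertices into ground (y ~ 0) and non-ground sets."""
    ground_mask = np.abs(vertices[:, 1]) < ground_eps
    return vertices[ground_mask], vertices[~ground_mask]

verts = np.array([[0.0, 0.00, 0.0],
                  [1.0, 0.01, 2.0],    # slight noise, still ground
                  [1.0, 1.50, 2.0]])   # a wall point above the ground
ground, non_ground = split_ground(verts)
print(len(ground), len(non_ground))
```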
-
Publication number: 20230245400
Abstract: A method of sharing and a method of presenting virtual content in a mixed reality scene rendered on at least two user devices having different viewing positions and/or orientations with respect to the mixed reality scene, and corresponding apparatus, are described. At a first user device, a user is enabled to select a virtual content to be shared and a second user device with which the virtual content is to be shared. Information related to the virtual content to be shared is provided, wherein the provided information comprises the 3D position of the virtual content. The information is received by the second user device and the shared virtual content is rendered with regard to the viewing position and/or orientation of the second user device onto the mixed reality scene.
Type: Application
Filed: April 7, 2023
Publication date: August 3, 2023
Inventors: Matthieu Fradet, Caroline Baillard, Anthony Laurent
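The key point of this family of applications is that the shared information carries the content's 3D position in a common frame, so each receiving device renders it from its own pose. A minimal sketch of that last step, with an assumed rotation-plus-translation pose convention (not taken from the patent), is:

```python
import numpy as np

# Sketch: a world-frame 3D position shared by the first device is expressed
# in the second device's camera frame using that device's pose (R, t),
# so the content appears at a consistent place from both viewpoints.

def world_to_camera(point_w: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Express a shared world-frame point in a device's camera frame."""
    return R @ point_w + t

shared = np.array([1.0, 0.0, 2.0])   # 3D position sent by the first device
R2 = np.eye(3)                        # second device's orientation (identity here)
t2 = np.array([0.0, 0.0, -1.0])       # second device's translation
print(world_to_camera(shared, R2, t2))
```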
-
Patent number: 11651576
Abstract: A method of sharing and a method of presenting virtual content in a mixed reality scene rendered on at least two user devices having different viewing positions and/or orientations with respect to the mixed reality scene, and corresponding apparatus, are described. At a first user device, a user is enabled to select a virtual content to be shared and a second user device with which the virtual content is to be shared. Information related to the virtual content to be shared is provided, wherein the provided information comprises the 3D position of the virtual content. The information is received by the second user device and the shared virtual content is rendered with regard to the viewing position and/or orientation of the second user device onto the mixed reality scene.
Type: Grant
Filed: August 25, 2022
Date of Patent: May 16, 2023
Assignee: InterDigital CE Patent Holdings
Inventors: Matthieu Fradet, Caroline Baillard, Anthony Laurent
-
Publication number: 20230103081
Abstract: According to embodiments, a scene modelling system may (e.g., initially) obtain and (e.g., subsequently) update a model of a scene based on data describing the scene. The data describing the scene may be received from any of sensors and objects located, for example, in the scene. The scene may comprise a set of connected and unconnected objects. An object may be associated with its own part of the model, which may have been built, for example, in an initialization phase. A connected object may transmit its (e.g., part of the) model to the scene modelling system (e.g., on demand or upon detection of any change). An unconnected object (e.g., and its status) may be recognized in the scene from an image of the object, for example, captured in the scene.
Type: Application
Filed: March 4, 2021
Publication date: March 30, 2023
Inventors: Philippe Robert, Caroline Baillard, Anthony Laurent, Pierrick Jouet
-
Publication number: 20230063303
Abstract: A method of sharing and a method of presenting virtual content in a mixed reality scene rendered on at least two user devices having different viewing positions and/or orientations with respect to the mixed reality scene, and corresponding apparatus, are described. At a first user device, a user is enabled to select a virtual content to be shared and a second user device with which the virtual content is to be shared. Information related to the virtual content to be shared is provided, wherein the provided information comprises the 3D position of the virtual content. The information is received by the second user device and the shared virtual content is rendered with regard to the viewing position and/or orientation of the second user device onto the mixed reality scene.
Type: Application
Filed: August 25, 2022
Publication date: March 2, 2023
Inventors: Matthieu Fradet, Caroline Baillard, Anthony Laurent
-
Patent number: 11580706
Abstract: Dynamic virtual content(s) to be superimposed on a representation of a real 3D scene comply with a scenario defined before runtime and involving real-world constraints (23). Real-world information (22) is captured in the real 3D scene and the scenario is executed at runtime (14) in the presence of the real-world constraints. When the real-world constraints are not identified (12) from the real-world information, a transformation of the representation of the real 3D scene to a virtually adapted 3D scene is carried out (13) before executing the scenario, so that the virtually adapted 3D scene fulfills those constraints, and the scenario is executed in the virtually adapted 3D scene, which replaces the real 3D scene. Application to mixed reality.
Type: Grant
Filed: August 3, 2021
Date of Patent: February 14, 2023
Assignee: INTERDIGITAL CE PATENT HOLDINGS, SAS
Inventors: Anthony Laurent, Matthieu Fradet, Caroline Baillard
-
Patent number: 11443492
Abstract: A method of sharing and a method of presenting virtual content in a mixed reality scene rendered on at least two user devices having different viewing positions and/or orientations with respect to the mixed reality scene, and corresponding apparatus, are described. At a first user device, a user is enabled to select a virtual content to be shared and a second user device with which the virtual content is to be shared. Information related to the virtual content to be shared is provided, wherein the provided information comprises the 3D position of the virtual content. The information is received by the second user device and the shared virtual content is rendered with regard to the viewing position and/or orientation of the second user device onto the mixed reality scene.
Type: Grant
Filed: June 13, 2019
Date of Patent: September 13, 2022
Assignee: InterDigital CE Patent Holdings
Inventors: Matthieu Fradet, Caroline Baillard, Anthony Laurent
-
Publication number: 20220157013
Abstract: A method for processing a 3D scene, and a corresponding device, system and computer program, are disclosed. In an example embodiment, the disclosed method includes: obtaining an image comprising at least a nadir view of a 3D scene, captured by at least one camera; detecting, in the image, at least one shadow cast by at least one object of the 3D scene acting as a support for the at least one camera; and determining a direction of at least one real light source from the at least one detected shadow and at least information representative of the object.
Type: Application
Filed: April 2, 2020
Publication date: May 19, 2022
Inventors: Caroline Baillard, Philippe Robert, Pierrick Jouet
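One way such a direction could be recovered, sketched under assumed conventions (light azimuth opposite the shadow's ground direction, elevation from the support object's known height and the shadow's length), is shown below; none of the names or conventions come from the application itself.

```python
import math

# Illustrative computation of a light direction from a shadow in a nadir
# view: the light's azimuth is opposite the shadow's direction on the
# ground, and its elevation follows from the support object's height and
# the shadow length (a higher light casts a shorter shadow).

def light_angles(shadow_dx, shadow_dy, shadow_len, support_height):
    azimuth = math.atan2(-shadow_dy, -shadow_dx)        # opposite the shadow
    elevation = math.atan2(support_height, shadow_len)  # from height/length ratio
    return azimuth, elevation

az, el = light_angles(1.0, 0.0, 1.0, 1.0)
print(math.degrees(az), math.degrees(el))
```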
-
Publication number: 20220148278
Abstract: A method for a placement of a virtual object of an augmented or mixed reality application in a real-world 3D environment comprises: selecting (14), at a runtime of the augmented or mixed reality application, one of a finite set of at least two candidate insertion areas predetermined in the real-world 3D environment for the placement of the virtual object, based on criteria combining, for each of the candidate insertion areas, relationships between each of the real-world 3D environment, the virtual object considered with respect to a placement in that candidate insertion area, and a user position; and inserting (14) the virtual object in the selected candidate insertion area.
Type: Application
Filed: January 21, 2022
Publication date: May 12, 2022
Inventors: Caroline BAILLARD, Pierrick JOUET, Matthieu FRADET
-
Patent number: 11263816
Abstract: A method for a placement of a virtual object of an augmented or mixed reality application in a real-world 3D environment comprises: selecting (14), at a runtime of the augmented or mixed reality application, one of a finite set of at least two candidate insertion areas predetermined in the real-world 3D environment for the placement of the virtual object, based on criteria combining, for each of the candidate insertion areas, relationships between each of the real-world 3D environment, the virtual object considered with respect to a placement in that candidate insertion area, and a user position; and inserting (14) the virtual object in the selected candidate insertion area.
Type: Grant
Filed: November 29, 2017
Date of Patent: March 1, 2022
Assignee: INTERDIGITAL CE PATENT HOLDINGS, SAS
Inventors: Caroline Baillard, Pierrick Jouet, Matthieu Fradet
-
Publication number: 20210366197
Abstract: Dynamic virtual content(s) to be superimposed on a representation of a real 3D scene comply with a scenario defined before runtime and involving real-world constraints (23). Real-world information (22) is captured in the real 3D scene and the scenario is executed at runtime (14) in the presence of the real-world constraints. When the real-world constraints are not identified (12) from the real-world information, a transformation of the representation of the real 3D scene to a virtually adapted 3D scene is carried out (13) before executing the scenario, so that the virtually adapted 3D scene fulfills those constraints, and the scenario is executed in the virtually adapted 3D scene, which replaces the real 3D scene. Application to mixed reality.
Type: Application
Filed: August 3, 2021
Publication date: November 25, 2021
Inventors: Anthony LAURENT, Matthieu FRADET, Caroline BAILLARD
-
Patent number: 11176728
Abstract: A method for rendering non-photorealistic (NPR) content from a set (SI) of at least one image of the same scene is provided. The set of images (SI) is associated with a depth image comprising a set of regions, each region corresponding to a region of a given depth. The method includes generating a segmented image having at least one segmented region generated with a given segmentation scale, the at least one segmented region corresponding to at least one region of the set of regions. A binary edge image is generated in which at least one binary edge region is generated with a given edge extraction scale, the at least one binary edge region corresponding to at least one region of the set of regions. The non-photorealistic content is rendered by combining the segmented image and the binary edge image.
Type: Grant
Filed: February 22, 2017
Date of Patent: November 16, 2021
Assignee: INTERDIGITAL CE PATENT HOLDINGS, SAS
Inventors: Caroline Baillard, Pierrick Jouet, Vincent Alleaume
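The final combination step named in the abstract can be illustrated with a toy example: black edges from the binary edge image are drawn over the flat-colour segmented image. The per-region segmentation and edge-extraction scales are omitted here; this is purely illustrative, not the patented pipeline.

```python
import numpy as np

# Toy sketch of NPR rendering as edge-over-segmentation compositing:
# wherever the binary edge image is set, the output pixel is painted black;
# elsewhere the segmented (flat-colour) image shows through.

def combine(segmented: np.ndarray, edges: np.ndarray) -> np.ndarray:
    """Draw black edges on top of the segmented image."""
    out = segmented.copy()
    out[edges > 0] = 0
    return out

seg = np.full((4, 4), 200, dtype=np.uint8)   # one flat-colour region
edg = np.zeros((4, 4), dtype=np.uint8)
edg[1, :] = 1                                 # a horizontal edge
npr = combine(seg, edg)
print(npr)
```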
-
Patent number: 11113888
Abstract: Dynamic virtual content(s) to be superimposed on a representation of a real 3D scene comply with a scenario defined before runtime and involving real-world constraints (23). Real-world information (22) is captured in the real 3D scene and the scenario is executed at runtime (14) in the presence of the real-world constraints. When the real-world constraints are not identified (12) from the real-world information, a transformation of the representation of the real 3D scene to a virtually adapted 3D scene is carried out (13) before executing the scenario, so that the virtually adapted 3D scene fulfills those constraints, and the scenario is executed in the virtually adapted 3D scene, which replaces the real 3D scene. Application to mixed reality.
Type: Grant
Filed: December 19, 2017
Date of Patent: September 7, 2021
Assignee: INTERDIGITAL CE PATENT HOLDINGS, SAS
Inventors: Anthony Laurent, Matthieu Fradet, Caroline Baillard
-
Publication number: 20210272373
Abstract: A method of sharing and a method of presenting virtual content in a mixed reality scene rendered on at least two user devices having different viewing positions and/or orientations with respect to the mixed reality scene, and corresponding apparatus, are described. At a first user device, a user is enabled to select a virtual content to be shared and a second user device with which the virtual content is to be shared. Information related to the virtual content to be shared is provided, wherein the provided information comprises the 3D position of the virtual content. The information is received by the second user device and the shared virtual content is rendered with regard to the viewing position and/or orientation of the second user device onto the mixed reality scene.
Type: Application
Filed: June 13, 2019
Publication date: September 2, 2021
Inventors: Matthieu Fradet, Caroline Baillard, Anthony Laurent
-
Publication number: 20200394842
Abstract: A method for a placement of a virtual object of an augmented or mixed reality application in a real-world 3D environment comprises: selecting (14), at a runtime of the augmented or mixed reality application, one of a finite set of at least two candidate insertion areas predetermined in the real-world 3D environment for the placement of the virtual object, based on criteria combining, for each of the candidate insertion areas, relationships between each of the real-world 3D environment, the virtual object considered with respect to a placement in that candidate insertion area, and a user position; and inserting (14) the virtual object in the selected candidate insertion area.
Type: Application
Filed: November 29, 2017
Publication date: December 17, 2020
Inventors: Caroline BAILLARD, Pierrick JOUET, Matthieu FRADET