Patents by Inventor Michele Stoppa
Michele Stoppa has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240193810
Abstract: A tracked device may be used in an extended reality system in coordination with a tracking device. The tracked device may be ordinarily difficult to track, for example due to changing appearances or relatively small surface areas of unchanging features, as may be the case with an electronic device with a relatively large display surrounded by a thin physical outer boundary. In these cases, the tracked device may periodically present an image to the tracking device that the tracking device stores as an indication to permit tracking of a known, unchanging feature despite the image not being presented continuously on the display of the tracked device. The image may include a static image, designated tracking data overlaid on an image frame otherwise scheduled for presentation, or extracted image features from the image frame otherwise scheduled for presentation. Additional power saving methods and known marker generation methods are also described.
Type: Application
Filed: February 19, 2024
Publication date: June 13, 2024
Inventors: Paolo Di Febbo, Anthony Ghannoum, Michele Stoppa, Kiranjit Dhaliwal
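The marker-scheduling idea in this abstract can be illustrated with a minimal sketch. All names here (`Frame`, `overlay_marker`, `present_frames`, the interval value) are assumptions for illustration, not from the patent:

```python
# Hypothetical sketch: periodically embed a known tracking marker into scheduled
# frames so an external tracker can re-anchor on a stable, unchanging feature
# without the marker being shown continuously.

from dataclasses import dataclass

@dataclass
class Frame:
    pixels: list          # stand-in for image data
    has_marker: bool = False

def overlay_marker(frame: Frame, marker: str) -> Frame:
    """Embed the known marker into the frame (here: just append a token)."""
    return Frame(pixels=frame.pixels + [marker], has_marker=True)

def present_frames(frames, marker, interval):
    """Yield the display stream, injecting the known marker every `interval`
    frames; in between, ordinary content is shown unmodified."""
    out = []
    for i, f in enumerate(frames):
        out.append(overlay_marker(f, marker) if i % interval == 0 else f)
    return out

frames = [Frame(pixels=[i]) for i in range(10)]
shown = present_frames(frames, marker="M", interval=4)
marked = [i for i, f in enumerate(shown) if f.has_marker]
print(marked)  # → [0, 4, 8]
```

Only a small fraction of frames carry the marker, which is also the power-saving intuition: the display mostly shows its scheduled content.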
-
Patent number: 11954885
Abstract: A tracked device may be used in an extended reality system in coordination with a tracking device. The tracked device may be ordinarily difficult to track, for example due to changing appearances or relatively small surface areas of unchanging features, as may be the case with an electronic device with a relatively large display surrounded by a thin physical outer boundary. In these cases, the tracked device may periodically present an image to the tracking device that the tracking device stores as an indication to permit tracking of a known, unchanging feature despite the image not being presented continuously on the display of the tracked device. The image may include a static image, designated tracking data overlaid on an image frame otherwise scheduled for presentation, or extracted image features from the image frame otherwise scheduled for presentation. Additional power saving methods and known marker generation methods are also described.
Type: Grant
Filed: September 15, 2021
Date of Patent: April 9, 2024
Assignee: Apple Inc.
Inventors: Paolo Di Febbo, Anthony Ghannoum, Michele Stoppa, Kiranjit Dhaliwal
-
Patent number: 11854242
Abstract: Methods, systems, and computer readable media for providing personalized saliency models, e.g., for use in mixed reality environments, are disclosed herein, comprising: obtaining, from a server, a first saliency model for the characterization of captured images, wherein the first saliency model represents a global saliency model; capturing a first plurality of images by a first device; obtaining information indicative of a reaction of a first user of the first device to the capture of one or more images of the first plurality of images; updating the first saliency model based, at least in part, on the obtained information to form a personalized, second saliency model; and transmitting at least a portion of the second saliency model to the server for inclusion into the global saliency model. In some embodiments, a user's personalized (i.e., updated) saliency model may be used to modify one or more characteristics of at least one subsequently captured image.
Type: Grant
Filed: September 22, 2021
Date of Patent: December 26, 2023
Assignee: Apple Inc.
Inventors: Michele Stoppa, Mohamed Selim Ben Himane, Raffi A. Bedikian
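The personalize-then-merge flow described in the abstract can be sketched in a few lines. The weight dictionary, learning rate, and merge fraction below are illustrative assumptions, not the patent's actual model:

```python
# Minimal sketch of the personalized-saliency flow: a local copy of the global
# model is nudged toward a user's observed reactions, and a fraction of the
# personalized model is later folded back into the shared global model.

def update_local(global_weights, reactions, lr=0.5):
    """Personalize: move each weight toward the user's observed reaction score."""
    return {k: w + lr * (reactions.get(k, w) - w) for k, w in global_weights.items()}

def merge_into_global(global_weights, personal_weights, alpha=0.1):
    """Fold a small fraction of the personalized model back into the global one."""
    return {k: (1 - alpha) * g + alpha * personal_weights[k]
            for k, g in global_weights.items()}

global_model = {"faces": 0.8, "text": 0.5}
personal = update_local(global_model, reactions={"text": 1.0})  # user reacts to text
new_global = merge_into_global(global_model, personal)
print(personal["text"], new_global["text"])
```

The global model shifts only slightly per user, while the local model adapts quickly, matching the split between personalized and global models in the abstract.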
-
Publication number: 20230306709
Abstract: In one implementation, a method of estimating the heading of a device is performed by the device including a processor, non-transitory memory, and an image sensor. The method includes determining a geographic location of the device. The method includes capturing, using the image sensor, an image at the geographic location. The method includes detecting one or more lines within the image. The method includes determining a heading of the device based on the one or more lines and the geographic location.
Type: Application
Filed: May 23, 2023
Publication date: September 28, 2023
Inventors: Oliver Thomas Ruepp, Jai Prakash, Johan Hedborg, Rahul Raguram, Michele Stoppa
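One way to read this abstract: straight lines in the image (building edges, road markings) give a dominant direction, and comparing it with the known orientation of structures at the device's location yields a heading. The map lookup and all names below are illustrative assumptions:

```python
# Hedged sketch: estimate heading by comparing the dominant direction of
# detected image lines against a known structure orientation at the device's
# geographic location. MAP_ORIENTATION is a stand-in for a real map query.

import math

# Assumed lookup: dominant structure orientation (degrees from north) per location.
MAP_ORIENTATION = {(37.33, -122.03): 30.0}

def dominant_line_angle(line_angles_deg):
    """Circular mean of detected line angles (lines are 180-degree periodic,
    so angles are doubled before averaging and halved after)."""
    s = sum(math.sin(math.radians(2 * a)) for a in line_angles_deg)
    c = sum(math.cos(math.radians(2 * a)) for a in line_angles_deg)
    return (math.degrees(math.atan2(s, c)) / 2) % 180

def estimate_heading(location, line_angles_deg):
    grid = MAP_ORIENTATION[location]                  # orientation from geography
    observed = dominant_line_angle(line_angles_deg)   # orientation from the image
    return (grid - observed) % 360

heading = estimate_heading((37.33, -122.03), [9.0, 11.0, 10.0])
print(round(heading, 1))  # → 20.0
```

Averaging doubled angles is the standard trick for undirected lines, where 10 and 190 degrees describe the same line.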
-
Patent number: 11755124
Abstract: A physical keyboard can be used to collect user input in a typing mode or in a tracking mode. To use a tracking mode, first movement data is detected for a hand of a user in relation to a physical keyboard at a first location. A determination is made that the first movement data is associated with a tracking movement. In response to determining that the movement type is associated with the tracking movement, a tracking mode is initiated. User input is provided based on the movement data and in accordance with the tracking mode. Contact data and non-contact data is used to determine a user intent, and a user instruction is processed based on the user intent.
Type: Grant
Filed: September 24, 2021
Date of Patent: September 12, 2023
Assignee: Apple Inc.
Inventors: Michele Stoppa, Waleed Abdulla, Henning Tjaden, Sree Harsha Kalli, Senem E. Emgin, John B. Morrell, Seung Wook Kim
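The mode switch in this abstract is essentially a small state machine: movement data is classified as a typing or tracking intent, and the keyboard's mode follows. The classifier, threshold, and event fields below are toy assumptions:

```python
# Illustrative sketch of the typing/tracking mode switch: a movement classified
# as a tracking gesture flips the keyboard into a pointer-like tracking mode;
# ordinary contact events keep it in typing mode.

def classify_movement(event):
    """Toy intent classifier: a long lateral glide over the keys counts as
    a tracking movement (threshold in millimeters is an assumption)."""
    return "tracking" if event["glide_mm"] > 15 else "typing"

class KeyboardInput:
    def __init__(self):
        self.mode = "typing"

    def handle(self, event):
        intent = classify_movement(event)
        if intent == "tracking":
            self.mode = "tracking"
            return ("move_cursor", event["glide_mm"])
        self.mode = "typing"
        return ("key_press", event["key"])

kb = KeyboardInput()
print(kb.handle({"glide_mm": 2, "key": "a"}))    # → ('key_press', 'a')
print(kb.handle({"glide_mm": 40, "key": None}))  # → ('move_cursor', 40)
print(kb.mode)                                   # → tracking
```

A real system would fuse contact and non-contact (e.g. camera) data in `classify_movement`; the sketch reduces that to a single feature for clarity.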
-
Patent number: 11699279
Abstract: In one implementation, a method of estimating the heading of a device is performed by the device including a processor, non-transitory memory, and an image sensor. The method includes determining a geographic location of the device. The method includes capturing, using the image sensor, an image at the geographic location. The method includes detecting one or more lines within the image. The method includes determining a heading of the device based on the one or more lines and the geographic location.
Type: Grant
Filed: June 23, 2020
Date of Patent: July 11, 2023
Assignee: APPLE INC.
Inventors: Oliver Thomas Ruepp, Jai Prakash, Johan Hedborg, Rahul Raguram, Michele Stoppa
-
Publication number: 20230083758
Abstract: A tracked device may be used in an extended reality system in coordination with a tracking device. The tracked device may be ordinarily difficult to track, for example due to changing appearances or relatively small surface areas of unchanging features, as may be the case with an electronic device with a relatively large display surrounded by a thin physical outer boundary. In these cases, the tracked device may periodically present an image to the tracking device that the tracking device stores as an indication to permit tracking of a known, unchanging feature despite the image not being presented continuously on the display of the tracked device. The image may include a static image, designated tracking data overlaid on an image frame otherwise scheduled for presentation, or extracted image features from the image frame otherwise scheduled for presentation. Additional power saving methods and known marker generation methods are also described.
Type: Application
Filed: September 15, 2021
Publication date: March 16, 2023
Inventors: Paolo Di Febbo, Anthony Ghannoum, Michele Stoppa, Kiranjit Dhaliwal
-
Publication number: 20220092331
Abstract: Methods, systems, and computer readable media for providing personalized saliency models, e.g., for use in mixed reality environments, are disclosed herein, comprising: obtaining, from a server, a first saliency model for the characterization of captured images, wherein the first saliency model represents a global saliency model; capturing a first plurality of images by a first device; obtaining information indicative of a reaction of a first user of the first device to the capture of one or more images of the first plurality of images; updating the first saliency model based, at least in part, on the obtained information to form a personalized, second saliency model; and transmitting at least a portion of the second saliency model to the server for inclusion into the global saliency model. In some embodiments, a user's personalized (i.e., updated) saliency model may be used to modify one or more characteristics of at least one subsequently captured image.
Type: Application
Filed: September 22, 2021
Publication date: March 24, 2022
Inventors: Michele Stoppa, Mohamed Selim Ben Himane, Raffi A. Bedikian
-
Patent number: 11019330
Abstract: A method for a computing device to recalibrate a multiple camera system includes collecting calibration data from one or more pictures captured by the multiple camera system of the computing device. The multiple camera system includes two or more cameras that are each physically separated by a distance from one another. The method further includes detecting decalibration of the camera system. The method further includes, when the camera system is decalibrated, generating recalibration parameters based on the calibration data. The method further includes determining whether the recalibration parameters are valid parameters and, when they are, updating the multiple camera system based on the recalibration parameters.
Type: Grant
Filed: April 2, 2015
Date of Patent: May 25, 2021
Assignee: AQUIFI, INC.
Inventors: David Demirdjian, Britta Silke Hummel, Ahmed Tashrif Kamal, Michele Stoppa, Giuliano Pasqualotto, Pietro Salvagnini
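The detect-then-validate recalibration loop in this abstract can be sketched as follows. The one-parameter "calibration" (a baseline scale) and the error metric are deliberate simplifications, not the patent's actual parameterization:

```python
# Sketch of the recalibrate-when-needed loop: detect decalibration via an
# error metric, generate candidate parameters from collected data, and only
# accept them if they actually improve the error. Thresholds and the
# one-parameter model are assumptions for illustration.

def reprojection_error(calibration, observations):
    """Mean absolute error between predicted and observed feature positions."""
    return sum(abs(calibration["baseline"] * o["disparity"] - o["expected"])
               for o in observations) / len(observations)

def recalibrate(calibration, observations, threshold=0.5):
    err = reprojection_error(calibration, observations)
    if err <= threshold:                 # still calibrated: no update
        return calibration, False
    # Generate candidate parameters from the collected calibration data.
    scale = sum(o["expected"] / o["disparity"] for o in observations) / len(observations)
    candidate = {"baseline": scale}
    # Validate: accept the candidate only if it reduces the error.
    if reprojection_error(candidate, observations) < err:
        return candidate, True
    return calibration, False

obs = [{"disparity": 2.0, "expected": 3.0}, {"disparity": 4.0, "expected": 6.0}]
cal, updated = recalibrate({"baseline": 1.0}, obs)
print(cal, updated)
```

The validity check before updating mirrors the abstract's "determining whether the recalibration parameters are valid parameters" step.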
-
Patent number: 10453185
Abstract: A method for capturing a depth map includes: controlling a plurality of cameras to capture, concurrently, a plurality of first images during a first exposure interval, each of the cameras concurrently capturing a corresponding one of the first images, the cameras having overlapping fields of view; controlling a projection source to emit light at a first illumination level during the first exposure interval; controlling the cameras to capture, concurrently, a plurality of second images during a second exposure interval, each of the cameras concurrently capturing a corresponding one of the second images; controlling the projection source to emit light at a second illumination level during the second exposure interval, the second illumination level being different from the first illumination level; combining the first images with the second images to generate a depth map; and outputting the depth map.
Type: Grant
Filed: November 2, 2016
Date of Patent: October 22, 2019
Assignee: AQUIFI, INC.
Inventors: Carlo Dal Mutto, Abbas Rafii, Pietro Salvagnini, Aryan Hazeghi, Michele Stoppa, Francesco Peruch, Giulio Marin
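The two-illumination-level capture lends itself to a simple per-pixel combination rule. The thresholds and the keep-the-valid-sample policy below are assumptions; the patent's combination step may differ:

```python
# Hedged sketch of combining two exposures: one frame at low and one at high
# projector illumination, keeping per pixel the sample that is well exposed
# (neither saturated nor too dark) before depth is computed.

SATURATED, TOO_DARK = 250, 10   # 8-bit intensity thresholds (assumed)

def combine_exposures(low_img, high_img):
    """Prefer the high-illumination sample unless it saturated; fall back to
    the low-illumination sample; mark the pixel unusable otherwise."""
    out = []
    for lo, hi in zip(low_img, high_img):
        if hi < SATURATED:
            out.append(hi)       # bright frame is valid here
        elif lo > TOO_DARK:
            out.append(lo)       # bright frame clipped: use the dim frame
        else:
            out.append(0)        # no usable sample at this pixel
    return out

low  = [5, 100, 40]
high = [60, 255, 255]
print(combine_exposures(low, high))  # → [60, 100, 40]
```

Near surfaces saturate under strong projection while far surfaces vanish under weak projection, which is why two illumination levels extend the usable depth range.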
-
Publication number: 20180211373
Abstract: A method for detecting a defect in an object includes: capturing, by one or more depth cameras, a plurality of partial point clouds of the object from a plurality of different poses with respect to the object; merging, by a processor, the partial point clouds to generate a merged point cloud; computing, by the processor, a three-dimensional (3D) multi-view model of the object; detecting, by the processor, one or more defects of the object in the 3D multi-view model; and outputting, by the processor, an indication of the one or more defects of the object.
Type: Application
Filed: January 9, 2018
Publication date: July 26, 2018
Inventors: Michele Stoppa, Francesco Peruch, Giuliano Pasqualotto, Aryan Hazeghi, Pietro Salvagnini, Carlo Dal Mutto, Jason Trachewsky, Kinh Tieu
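A toy version of the merge-and-inspect pipeline: partial scans are merged into one cloud, and points deviating from a reference surface beyond a tolerance are flagged. The flat reference plane, the naive merge, and the tolerance are all assumptions for the example:

```python
# Illustrative sketch: merge partial point clouds and flag points that deviate
# from a reference surface beyond a tolerance. A real system would align
# (register) the partial scans and compare against a full 3D model, not a plane.

def merge_point_clouds(partials):
    """Naive merge: concatenate the partial scans."""
    return [p for cloud in partials for p in cloud]

def detect_defects(points, reference_z=0.0, tol=0.2):
    """A point is defective if it sits farther than `tol` from the plane z=ref."""
    return [p for p in points if abs(p[2] - reference_z) > tol]

scan_a = [(0, 0, 0.05), (1, 0, 0.0)]
scan_b = [(0, 1, 0.9), (1, 1, -0.1)]   # one point bulges 0.9 above the plane
merged = merge_point_clouds([scan_a, scan_b])
defects = detect_defects(merged)
print(defects)  # → [(0, 1, 0.9)]
```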
-
Patent number: 9826216
Abstract: A pattern projection system includes a coherent light source, a repositionable DOE disposed to receive coherent light from said coherent light source and disposed to output at least one pattern of projectable light onto a scene to be imaged by an (x,y) two-dimensional optical acquisition system. Coherent light speckle artifacts in the projected pattern are reduced by rapidly controllably repositioning the DOE or the entire pattern projection system. Different projectable patterns are selected from a set of M patterns that are related to each other by a translation and/or rotation operation in two-dimensional cosine space. A resultant (x,y,z) depth map has improved quality and robustness due to projection of the selected patterns. Three-dimensional (x,y,z) depth data obtained from two-dimensional imaged data including despeckling is higher quality data than if projected patterns without despeckling were used.
Type: Grant
Filed: September 23, 2016
Date of Patent: November 21, 2017
Assignee: AQUIFI, INC.
Inventors: Aryan Hazeghi, Carlo Dal Mutto, Giulio Marin, Francesco Peruch, Michele Stoppa, Abbas Rafii
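The statistical intuition behind despeckling by repositioning can be shown numerically: each DOE position yields a decorrelated speckle realization, so averaging M observations shrinks speckle contrast roughly by 1/sqrt(M). This is a statistical analogy, not a model of the patent's optics:

```python
# Toy demonstration of the despeckling principle: averaging frames whose
# speckle realizations are statistically independent (as repositioning the
# DOE makes them) reduces the residual speckle noise around the intended
# pattern intensity.

import random
import statistics

def speckled(frame_seed, size=64):
    """One projected frame: constant signal 1.0 plus speckle noise."""
    rng = random.Random(frame_seed)
    return [1.0 + rng.gauss(0, 0.3) for _ in range(size)]

def average_frames(num_frames, size=64):
    """Average M frames with independent speckle realizations."""
    frames = [speckled(s, size) for s in range(num_frames)]
    return [sum(col) / num_frames for col in zip(*frames)]

one = statistics.pstdev(speckled(0))         # speckle contrast of one frame
many = statistics.pstdev(average_frames(16)) # after averaging 16 frames
print(one > many)  # → True: averaged pattern has visibly less speckle
```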
-
Patent number: 9807371
Abstract: A method for detecting decalibration of a depth camera system including a first, second, and third cameras having overlapping fields of view in a direction includes: detecting a feature in a first image captured by the first camera; detecting the feature in a second image captured by the second camera; detecting the feature in a third image captured by the third camera, the third camera being non-collinear with the first and second cameras; identifying a first conjugate epipolar line in the second image in accordance with a detected location of the feature in the first image and calibration parameters; identifying a second conjugate epipolar line in the second image in accordance with a detected location of the feature in the third image and the calibration parameters; and calculating a difference between a detected location of the feature in the second image and the first and second conjugate epipolar lines.
Type: Grant
Filed: April 12, 2017
Date of Patent: October 31, 2017
Assignee: Aquifi, Inc.
Inventors: Pietro Salvagnini, Michele Stoppa, Abbas Rafii
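The final step of this method is a point-to-line distance check: under a valid calibration, the feature seen by the second camera must lie on both conjugate epipolar lines, so a large distance signals decalibration. The tolerance value and line representation below are assumptions:

```python
# Sketch of the epipolar consistency check. Lines are (a, b, c) in the
# homogeneous form a*x + b*y + c = 0; the point-to-line distance is the
# residual divided by the norm of (a, b).

import math

def point_line_distance(point, line):
    a, b, c = line
    x, y = point
    return abs(a * x + b * y + c) / math.hypot(a, b)

def is_decalibrated(feature_in_cam2, epiline_from_cam1, epiline_from_cam3, tol=1.5):
    """Decalibrated if the feature strays more than `tol` pixels from either
    conjugate epipolar line predicted by the calibration parameters."""
    d1 = point_line_distance(feature_in_cam2, epiline_from_cam1)
    d2 = point_line_distance(feature_in_cam2, epiline_from_cam3)
    return max(d1, d2) > tol

# Feature at (10, 5); a horizontal line y = 5 (on it) and x = 12 (2 px away).
line_a = (0.0, 1.0, -5.0)
line_b = (1.0, 0.0, -12.0)
print(is_decalibrated((10.0, 5.0), line_a, line_b))  # → True
```

The non-collinear third camera is what makes the two epipolar lines intersect at a point rather than coincide, so the check constrains both image coordinates.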
-
Publication number: 20170223340
Abstract: A method for detecting decalibration of a depth camera system including a first, second, and third cameras having overlapping fields of view in a direction includes: detecting a feature in a first image captured by the first camera; detecting the feature in a second image captured by the second camera; detecting the feature in a third image captured by the third camera, the third camera being non-collinear with the first and second cameras; identifying a first conjugate epipolar line in the second image in accordance with a detected location of the feature in the first image and calibration parameters; identifying a second conjugate epipolar line in the second image in accordance with a detected location of the feature in the third image and the calibration parameters; and calculating a difference between a detected location of the feature in the second image and the first and second conjugate epipolar lines.
Type: Application
Filed: April 12, 2017
Publication date: August 3, 2017
Inventors: Pietro Salvagnini, Michele Stoppa, Abbas Rafii
-
Publication number: 20170180706
Abstract: A method for detecting decalibration of a depth camera system including a first, second, and third cameras having overlapping fields of view in a direction includes: detecting a feature in a first image captured by the first camera; detecting the feature in a second image captured by the second camera; detecting the feature in a third image captured by the third camera, the third camera being non-collinear with the first and second cameras; identifying a first conjugate epipolar line in the second image in accordance with a detected location of the feature in the first image and calibration parameters; identifying a second conjugate epipolar line in the second image in accordance with a detected location of the feature in the third image and the calibration parameters; and calculating a difference between a detected location of the feature in the second image and the first and second conjugate epipolar lines.
Type: Application
Filed: May 5, 2016
Publication date: June 22, 2017
Inventors: Pietro Salvagnini, Michele Stoppa, Abbas Rafii
-
Patent number: 9674504
Abstract: A method for detecting decalibration of a depth camera system including a first, second, and third cameras having overlapping fields of view in a direction includes: detecting a feature in a first image captured by the first camera; detecting the feature in a second image captured by the second camera; detecting the feature in a third image captured by the third camera, the third camera being non-collinear with the first and second cameras; identifying a first conjugate epipolar line in the second image in accordance with a detected location of the feature in the first image and calibration parameters; identifying a second conjugate epipolar line in the second image in accordance with a detected location of the feature in the third image and the calibration parameters; and calculating a difference between a detected location of the feature in the second image and the first and second conjugate epipolar lines.
Type: Grant
Filed: May 5, 2016
Date of Patent: June 6, 2017
Assignee: Aquifi, Inc.
Inventors: Pietro Salvagnini, Michele Stoppa, Abbas Rafii
-
Publication number: 20170142312
Abstract: A method for capturing a depth map includes: controlling a plurality of cameras to capture, concurrently, a plurality of first images during a first exposure interval, each of the cameras concurrently capturing a corresponding one of the first images, the cameras having overlapping fields of view; controlling a projection source to emit light at a first illumination level during the first exposure interval; controlling the cameras to capture, concurrently, a plurality of second images during a second exposure interval, each of the cameras concurrently capturing a corresponding one of the second images; controlling the projection source to emit light at a second illumination level during the second exposure interval, the second illumination level being different from the first illumination level; combining the first images with the second images to generate a depth map; and outputting the depth map.
Type: Application
Filed: November 2, 2016
Publication date: May 18, 2017
Inventors: Carlo Dal Mutto, Abbas Rafii, Pietro Salvagnini, Aryan Hazeghi, Michele Stoppa, Francesco Peruch, Giulio Marin
-
Patent number: 9619042
Abstract: A method for operating a real-time gesture based interactive system includes: obtaining a sequence of frames of data from an acquisition system; comparing successive frames of the data for portions that change between frames; determining whether any of the portions that changed are part of an interaction medium detected in the sequence of frames of data; defining a 3D interaction zone relative to an initial position of the part of the interaction medium detected in the sequence of frames of data; tracking a movement of the interaction medium to generate a plurality of 3D positions of the interaction medium; detecting movement of the interaction medium from inside to outside the 3D interaction zone at a boundary 3D position; shifting the 3D interaction zone relative to the boundary 3D position; computing a plurality of computed positions based on the 3D positions; and supplying the computed positions to control an application.
Type: Grant
Filed: September 30, 2016
Date of Patent: April 11, 2017
Assignee: Aquifi, Inc.
Inventors: Carlo Dal Mutto, Giuliano Pasqualotto, Giridhar Murali, Michele Stoppa, Amir hossein Khalili, Ahmed Tashrif Kamal, Britta Hummel
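The zone-shifting behavior in this abstract can be sketched as a small class: 3D positions inside the zone map to 2D-style control coordinates, and when the interaction medium (e.g. a hand) crosses the boundary, the zone re-centers so tracking continues. Zone size and the mapping are illustrative assumptions:

```python
# Sketch of the shifting 3D interaction zone (all names assumed): positions
# inside an axis-aligned cubic zone map to normalized coordinates; when the
# hand exits the zone, the zone is re-centered at the crossing position.

class InteractionZone:
    def __init__(self, center, half=1.0):
        self.center = list(center)   # zone center in 3D
        self.half = half             # half-extent of the cubic zone

    def contains(self, p):
        return all(abs(p[i] - self.center[i]) <= self.half for i in range(3))

    def shift_to(self, boundary_pos):
        """Re-center the zone where the hand crossed out, per the abstract's
        'shifting the 3D interaction zone relative to the boundary position'."""
        self.center = list(boundary_pos)

    def to_2d(self, p):
        """Map a 3D position to normalized 2D control coordinates."""
        return ((p[0] - self.center[0]) / self.half,
                (p[1] - self.center[1]) / self.half)

zone = InteractionZone(center=(0, 0, 0))
p = (1.5, 0.0, 0.0)                  # hand moved outside the zone
if not zone.contains(p):
    zone.shift_to(p)                 # zone follows the hand
print(zone.to_2d((2.0, 0.5, 0.0)))  # → (0.5, 0.5)
```

Shifting instead of clamping is what lets a user keep gesturing after drifting, without the cursor pinning to the zone edge.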
-
Publication number: 20170017307
Abstract: A method for operating a real-time gesture based interactive system includes: obtaining a sequence of frames of data from an acquisition system; comparing successive frames of the data for portions that change between frames; determining whether any of the portions that changed are part of an interaction medium detected in the sequence of frames of data; defining a 3D interaction zone relative to an initial position of the part of the interaction medium detected in the sequence of frames of data; tracking a movement of the interaction medium to generate a plurality of 3D positions of the interaction medium; detecting movement of the interaction medium from inside to outside the 3D interaction zone at a boundary 3D position; shifting the 3D interaction zone relative to the boundary 3D position; computing a plurality of 2D positions based on the 3D positions; and supplying the 2D positions to control an application.
Type: Application
Filed: September 30, 2016
Publication date: January 19, 2017
Inventors: Carlo Dal Mutto, Giuliano Pasqualotto, Giridhar Murali, Michele Stoppa, Amir hossein Khalili, Ahmed Tashrif Kamal, Britta Hummel
-
Patent number: 9501138
Abstract: A method for operating a real-time gesture based interactive system includes: obtaining a sequence of frames of data from an acquisition system; comparing successive frames of the data for portions that change between frames; determining whether any of the portions that changed are part of an interaction medium detected in the sequence of frames of data; defining a 3D interaction zone relative to an initial position of the part of the interaction medium detected in the sequence of frames of data; tracking a movement of the interaction medium to generate a plurality of 3D positions of the interaction medium; detecting movement of the interaction medium from inside to outside the 3D interaction zone at a boundary 3D position; shifting the 3D interaction zone relative to the boundary 3D position; computing a plurality of 2D positions based on the 3D positions; and supplying the 2D positions to control an application.
Type: Grant
Filed: May 5, 2015
Date of Patent: November 22, 2016
Assignee: Aquifi, Inc.
Inventors: Carlo Dal Mutto, Giuliano Pasqualotto, Giridhar Murali, Michele Stoppa, Amir hossein Khalili, Ahmed Tashrif Kamal, Britta Hummel