Patents by Inventor Joseph A. Malia
Joseph A. Malia has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

Systems, Methods, and Graphical User Interfaces for Annotating, Measuring, and Modeling Environments
Publication number: 20240045562
Abstract: A system displays a representation of a camera field of view including a first portion of a physical environment, captures depth information indicative of a first subset of the first portion, and displays, overlaid on a portion of the representation of the field of view corresponding to the first subset, an indication of an extent of the first portion for which depth information has been captured. In response to detecting movement of the field of view, the system: updates the representation of the field of view to include a representation of a second portion of the physical environment; captures depth information indicative of a second subset of the second portion; and updates the indication to indicate an extent of the second portion for which depth information has been captured, including displaying the indication overlaid on a portion of the representation of the field of view corresponding to the second subset.
Type: Application
Filed: October 20, 2023
Publication date: February 8, 2024
Inventors: Allison W. Dryer, Giancarlo Yerkes, Grant R. Paul, Lisa K. Forssell, Joseph A. Malia
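
The depth-coverage indication described in this abstract can be pictured with a small sketch. This is a hypothetical illustration, not Apple's implementation: the field of view is divided into a grid, cells with captured depth data are tracked, and panning the camera re-maps coverage to the new view. All names here (`DepthCoverage`, `mark_scanned`, `shift_view`) are assumptions for illustration.

```python
class DepthCoverage:
    """Tracks which grid cells of the camera field of view have depth data,
    so a coverage overlay can be drawn on the corresponding portions."""

    def __init__(self, cols: int, rows: int):
        self.cols, self.rows = cols, rows
        self.scanned: set[tuple[int, int]] = set()

    def mark_scanned(self, cells):
        """Record cells for which depth information has been captured."""
        self.scanned.update(cells)

    def shift_view(self, dx: int):
        """Camera panned by dx columns: re-map coverage to the new view,
        dropping cells that scrolled out of frame."""
        self.scanned = {
            (c - dx, r) for (c, r) in self.scanned if 0 <= c - dx < self.cols
        }

    def coverage_fraction(self) -> float:
        """Fraction of the current view for which depth has been captured."""
        return len(self.scanned) / (self.cols * self.rows)
```

In this toy model, the overlay in the abstract would simply be drawn over the cells in `scanned`, and updating it as the field of view moves corresponds to `shift_view` plus further `mark_scanned` calls.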

Publication number: 20230388665
Abstract: The present disclosure generally relates to user interfaces for altering visual media. In some embodiments, user interfaces are provided for capturing visual media (e.g., via a synthetic depth-of-field effect), playing back visual media (e.g., via a synthetic depth-of-field effect), editing visual media (e.g., that has a synthetic depth-of-field effect applied), and/or managing media capture.
Type: Application
Filed: July 31, 2023
Publication date: November 30, 2023
Inventors: Johnnie B. Manzari, Jeffrey A. Brasket, Graham R. Clarke, Saumitro Dasgupta, Mikko Berggren Ettienne, Toke Jansen, Wayne Loofbourrow, Joseph A. Malia, Seyyedhossein Mousavi, Jens Jacob Pallisgaard, Paul Thomas Schneider, Joshua Blake Shagam, William A. Sorrentino, III, Andre Souza Dos Santos, Piotr J. Stanczyk

Publication number: 20230368458
Abstract: A computer system displays a preview of a three-dimensional model of a physical environment that includes a partially completed three-dimensional model of the physical environment that is displayed with a first orientation that corresponds to a first viewpoint of a user. The computer system detects first movement that changes a current viewpoint of the user in the physical environment to a second viewpoint and updates the preview of the three-dimensional model, including adding additional information to and rotating the partially completed three-dimensional model to a second orientation. While displaying a second view of the physical environment that corresponds to the second viewpoint, the computer system, in response to detecting a first input, updates the preview of the three-dimensional model in the first user interface, including rotating the partially completed three-dimensional model to a third orientation that does not correspond to the second viewpoint of the user.
Type: Application
Filed: May 8, 2023
Publication date: November 16, 2023
Inventors: Allison W. Dryer, Giancarlo Yerkes, Praveen Sharma, Grant R. Paul, Joseph A. Malia

Patent number: 11818455
Abstract: A first device sends a request to a second device to initiate a shared annotation session. In response to receiving acceptance of the request, a first prompt to move the first device toward the second device is displayed. In accordance with a determination that connection criteria for the first device and the second device are met, a representation of a field of view of the camera(s) of the first device is displayed in the shared annotation session with the second device. During the shared annotation session, one or more annotations are displayed via the first display generation component, and one or more second virtual annotations corresponding to annotation input directed to the respective location in the physical environment by the second device are displayed via the first display generation component, provided that the respective location is included in the field of view of the first set of cameras.
Type: Grant
Filed: February 8, 2023
Date of Patent: November 14, 2023
Assignee: Apple Inc.
Inventors: Joseph A. Malia, Mark K. Hauenstein, Praveen Sharma, Matan Stauber, Julian K. Missig, Jeffrey T. Bernstein, Lukas Robert Tom Girling, Matthaeus Krenn
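
The visibility condition in this abstract (a peer's annotation is shown only when its physical location is in the local camera's field of view) can be sketched minimally. The rectangular field-of-view model and all names here are illustrative assumptions, not the patented method.

```python
def visible_annotations(annotations, fov):
    """Return peer annotations whose physical anchor (x, y) falls inside the
    local field of view, given as a rectangle (x_min, y_min, x_max, y_max).

    Each annotation is a dict with "x" and "y" anchor coordinates."""
    x0, y0, x1, y1 = fov
    return [a for a in annotations if x0 <= a["x"] <= x1 and y0 <= a["y"] <= y1]
```

A real system would test against a 3D camera frustum rather than a 2D rectangle; the filtering logic, however, is the same shape.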

Publication number: 20230326148
Abstract: A first electronic device with one or more processors, memory, one or more cameras, and a display generation component captures, with the one or more cameras, an image of a second electronic device that includes position information displayed via a display generation component of the second electronic device. The position information indicates a location of the second electronic device within an augmented reality environment that includes a physical environment in which the first electronic device and the second electronic device are located. The first electronic device, after capturing the image of the second electronic device that includes the position information, displays, via the display generation component of the first electronic device, one or more virtual objects within the augmented reality (AR) environment using the position information captured from the second electronic device.
Type: Application
Filed: March 24, 2023
Publication date: October 12, 2023
Inventors: Praveen Sharma, Fiona P. O'Leary, Joseph A. Malia

Publication number: 20230305674
Abstract: A computer system displays, in a first viewing mode, a simulated environment that is oriented relative to a physical environment of the computer system. In response to detecting a first change in attitude, the computer system changes an appearance of a first virtual user interface object so as to maintain a fixed spatial relationship between the first virtual user interface object and the physical environment. The computer system detects a gesture. In response to detecting a second change in attitude, in accordance with a determination that the gesture met mode change criteria, the computer system transitions from displaying the simulated environment in the first viewing mode to displaying the simulated environment in a second viewing mode. Displaying the virtual model in the simulated environment in the second viewing mode includes forgoing changing the appearance of the first virtual user interface object to maintain the fixed spatial relationship.
Type: Application
Filed: April 27, 2023
Publication date: September 28, 2023
Inventors: Mark K. Hauenstein, Joseph A. Malia, Julian K. Missig, Matthaeus Krenn, Jeffrey T. Bernstein

Patent number: 11765163
Abstract: An electronic device performs techniques related generally to implementing biometric authentication. In some examples, a device provides user interfaces for a biometric enrollment process tutorial. In some examples, a device provides user interfaces for aligning a biometric feature for enrollment. In some examples, a device provides user interfaces for enrolling a biometric feature. In some examples, a device provides user interfaces for providing hints during a biometric enrollment process. In some examples, a device provides user interfaces for application-based biometric authentication. In some examples, a device provides user interfaces for autofilling biometrically secured fields. In some examples, a device provides user interfaces for unlocking a device using biometric authentication. In some examples, a device provides user interfaces for retrying biometric authentication. In some examples, a device provides user interfaces for managing transfers using biometric authentication.
Type: Grant
Filed: July 13, 2022
Date of Patent: September 19, 2023
Assignee: Apple Inc.
Inventors: Marcel Van Os, Peter D. Anton, Arian Behzadi, Jonathan R. Dascola, Lynne Devine, Alan C. Dye, Christopher Patrick Foss, Bradley W. Griffin, Jonathan P. Ive, Joseph A. Malia, Pedro Mari, Daamun Mohseni, Grant Paul, Daniel Trent Preston, William M. Tyler

Patent number: 11740755
Abstract: A computer system, while displaying an augmented reality environment, concurrently displays: a representation of at least a portion of a field of view of one or more cameras that includes a physical object, and a virtual user interface object at a location in the representation of the field of view, where the location is determined based on the respective physical object in the field of view. While displaying the augmented reality environment, in response to detecting an input that changes a virtual environment setting for the augmented reality environment, the computer system adjusts an appearance of the virtual user interface object in accordance with the change made to the virtual environment setting and applies to at least a portion of the representation of the field of view a filter selected based on the change made to the virtual environment setting.
Type: Grant
Filed: September 28, 2021
Date of Patent: August 29, 2023
Assignee: Apple Inc.
Inventors: Mark K. Hauenstein, Joseph A. Malia, Julian K. Missig, Matthaeus Krenn, Jeffrey T. Bernstein

Publication number: 20230252659
Abstract: The present disclosure generally relates to displaying and editing an image with depth information. In response to an input, an object in the image having one or more elements in a first depth range is identified. The identified object is then isolated from other elements in the image and displayed separately from the other elements. The isolated object may then be utilized in different applications.
Type: Application
Filed: April 20, 2023
Publication date: August 10, 2023
Inventors: Matan Stauber, Amir Hoffnung, Matthaeus Krenn, Jeffrey Traer Bernstein, Joseph A. Malia, Mark Hauenstein
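
The depth-range isolation this abstract describes can be illustrated with a minimal sketch, assuming the image comes with a per-pixel depth map. The function names and the list-of-lists representation are assumptions for illustration, not the patented technique.

```python
def depth_mask(depth_map, near, far):
    """Return a boolean mask selecting pixels whose depth lies in [near, far],
    i.e. the elements of the object in the first depth range."""
    return [[near <= d <= far for d in row] for row in depth_map]

def isolate(image, mask, background=0):
    """Keep masked pixels so the object can be displayed separately;
    replace everything else with a background value."""
    return [
        [px if keep else background for px, keep in zip(img_row, mask_row)]
        for img_row, mask_row in zip(image, mask)
    ]
```

The isolated result can then feed other applications (stickers, compositing, and so on), as the abstract suggests.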

Publication number: 20230229297
Abstract: The present disclosure generally relates to user interfaces. In some examples, the electronic device provides for transitioning between simulated lighting effects. In some examples, the electronic device applies a simulated lighting effect to an image. In some examples, the electronic device provides user interfaces for applying a filter to an image. In some examples, the electronic device provides for a reduced filter interface. In some examples, the electronic device provides a visual aid displayed in a viewfinder.
Type: Application
Filed: March 20, 2023
Publication date: July 20, 2023
Inventors: Behkish J. Manzari, Marek Bareza, Jeffrey A. Brasket, Frederic Cao, Alan C. Dye, Elliott Harris, Cyrus Daniel Irani, Jonathan P. Ive, Garrett Johnson, Emilie Kim, Joseph A. Malia, Grant Paul, Pavel Pivonka, Billy Sorrentino, III, Andre Souza Dos Santos

Publication number: 20230199296
Abstract: A first device sends a request to a second device to initiate a shared annotation session. In response to receiving acceptance of the request, a first prompt to move the first device toward the second device is displayed. In accordance with a determination that connection criteria for the first device and the second device are met, a representation of a field of view of the camera(s) of the first device is displayed in the shared annotation session with the second device. During the shared annotation session, one or more annotations are displayed via the first display generation component, and one or more second virtual annotations corresponding to annotation input directed to the respective location in the physical environment by the second device are displayed via the first display generation component, provided that the respective location is included in the field of view of the first set of cameras.
Type: Application
Filed: February 8, 2023
Publication date: June 22, 2023
Inventors: Joseph A. Malia, Mark K. Hauenstein, Praveen Sharma, Matan Stauber, Julian K. Missig, Jeffrey T. Bernstein, Lukas Robert Tom Girling, Matthaeus Krenn

Patent number: 11669985
Abstract: The present disclosure generally relates to displaying and editing an image with depth information. Image data associated with an image includes depth information associated with a subject. In response to a request to display the image, a first modified image is displayed. Displaying the first modified image includes displaying, based on the depth information, a first level of simulated lighting on a first portion of the subject and a second level of simulated lighting on a second portion of the subject. After displaying the first modified image, a second modified image is displayed. Displaying the second modified image includes displaying, based on the depth information, a third level of simulated lighting on the first portion of the subject and a fourth level of simulated lighting on the second portion of the subject.
Type: Grant
Filed: April 28, 2022
Date of Patent: June 6, 2023
Assignee: Apple Inc.
Inventors: Matan Stauber, Amir Hoffnung, Matthaeus Krenn, Jeffrey Traer Bernstein, Joseph A. Malia, Mark Hauenstein
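
The per-portion simulated lighting in this abstract can be sketched as scaling each depth-separated portion's luminance by its own lighting level, and the transition between the first and second modified images as a blend between two sets of levels. This is a hypothetical illustration; the function names and linear-blend model are assumptions, not the patented method.

```python
def apply_lighting(portions, levels):
    """Scale each portion's base luminance by its simulated lighting level
    (e.g. [first portion, second portion] of the subject)."""
    return [lum * lvl for lum, lvl in zip(portions, levels)]

def blend_levels(levels_a, levels_b, t):
    """Linearly interpolate per-portion lighting levels for an animated
    transition between two modified images; t runs from 0 to 1."""
    return [(1 - t) * a + t * b for a, b in zip(levels_a, levels_b)]
```

Here `levels_a` would correspond to the first and second simulated-lighting levels and `levels_b` to the third and fourth levels from the abstract.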

Patent number: 11632600
Abstract: While displaying playback of a first portion of a video in a video playback region, a device receives a request to add a first annotation to the video playback. In response to receiving the request, the device pauses playback of the video at a first position in the video and displays a still image that corresponds to the first, paused position of the video. While displaying the still image, the device receives the first annotation on a first portion of a physical environment captured in the still image. After receiving the first annotation, the device displays, in the video playback region, a second portion of the video that corresponds to a second position in the video, where the first portion of the physical environment is captured in the second portion of the video and the first annotation is displayed in the second portion of the video.
Type: Grant
Filed: April 8, 2022
Date of Patent: April 18, 2023
Assignee: Apple Inc.
Inventors: Joseph A. Malia, Mark K. Hauenstein, Praveen Sharma, Matan Stauber, Julian K. Missig, Jeffrey T. Bernstein, Lukas Robert Tom Girling, Matthaeus Krenn
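
The key behavior in this abstract, re-displaying an annotation in any later video portion where the annotated region of the physical environment reappears, can be sketched as a lookup over per-frame region data. The names and the string-labeled region model are illustrative assumptions only.

```python
def frames_showing(annotated_region, frame_regions):
    """Return the indices of video frames in which the annotated region of the
    physical environment is visible, i.e. the frames where the annotation
    should be drawn again.

    frame_regions is a list, one entry per frame, of the physical-environment
    regions captured in that frame."""
    return [i for i, regions in enumerate(frame_regions)
            if annotated_region in regions]
```

A real system would track the region via world anchors rather than labels, but the propagation logic is the same: the annotation follows its physical anchor across frames.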

Patent number: 11615595
Abstract: A first electronic device with one or more processors, memory, one or more cameras, and a display generation component captures, with the one or more cameras, an image of a second electronic device that includes position information displayed via a display generation component of the second electronic device. The position information indicates a location of the second electronic device within an augmented reality environment that includes a physical environment in which the first electronic device and the second electronic device are located. The first electronic device, after capturing the image of the second electronic device that includes the position information, displays, via the display generation component of the first electronic device, one or more virtual objects within the augmented reality (AR) environment using the position information captured from the second electronic device.
Type: Grant
Filed: September 21, 2021
Date of Patent: March 28, 2023
Assignee: Apple Inc.
Inventors: Praveen Sharma, Fiona P. O'Leary, Joseph A. Malia

Patent number: 11539876
Abstract: The present disclosure generally relates to user interfaces for altering visual media. In some embodiments, user interfaces are provided for capturing visual media (e.g., via a synthetic depth-of-field effect), playing back visual media (e.g., via a synthetic depth-of-field effect), editing visual media (e.g., that has a synthetic depth-of-field effect applied), and/or managing media capture.
Type: Grant
Filed: September 23, 2021
Date of Patent: December 27, 2022
Assignee: Apple Inc.
Inventors: Behkish J. Manzari, Graham R. Clarke, Toke Jansen, Joseph A. Malia, Andre Souza Dos Santos, William A. Sorrentino, III, Jeffrey A. Brasket, Wayne Loofbourrow, Seyyedhossein Mousavi, Jens Jacob Pallisgaard, Paul Thomas Schneider, Joshua Blake Shagam, Piotr J. Stanczyk

Publication number: 20220353425
Abstract: The present disclosure generally relates to user interfaces for altering visual media. In some embodiments, user interfaces are provided for capturing visual media (e.g., via a synthetic depth-of-field effect), playing back visual media (e.g., via a synthetic depth-of-field effect), editing visual media (e.g., that has a synthetic depth-of-field effect applied), and/or managing media capture.
Type: Application
Filed: September 23, 2021
Publication date: November 3, 2022
Inventors: Behkish J. Manzari, Graham R. Clarke, Toke Jansen, Joseph A. Malia, Andre Souza Dos Santos, William A. Sorrentino, III, Jeffrey A. Brasket, Wayne Loofbourrow, Seyyedhossein Mousavi, Jens Jacob Pallisgaard, Paul Thomas Schneider, Joshua Blake Shagam, Piotr J. Stanczyk

Publication number: 20220351549
Abstract: An electronic device performs techniques related generally to implementing biometric authentication. In some examples, a device provides user interfaces for a biometric enrollment process tutorial. In some examples, a device provides user interfaces for aligning a biometric feature for enrollment. In some examples, a device provides user interfaces for enrolling a biometric feature. In some examples, a device provides user interfaces for providing hints during a biometric enrollment process. In some examples, a device provides user interfaces for application-based biometric authentication. In some examples, a device provides user interfaces for autofilling biometrically secured fields. In some examples, a device provides user interfaces for unlocking a device using biometric authentication. In some examples, a device provides user interfaces for retrying biometric authentication. In some examples, a device provides user interfaces for managing transfers using biometric authentication.
Type: Application
Filed: July 13, 2022
Publication date: November 3, 2022
Inventors: Marcel Van Os, Peter D. Anton, Arian Behzadi, Jonathan R. Dascola, Lynne Devine, Alan C. Dye, Christopher Patrick Foss, Bradley W. Griffin, Jonathan P. Ive, Joseph A. Malia, Pedro Mari, Daamun Mohseni, Grant Paul, Daniel Trent Preston, William M. Tyler

Publication number: 20220342972
Abstract: The present disclosure relates generally to implementing biometric authentication, including providing user interfaces for: while the electronic device is in a first state in which a respective function of the electronic device is disabled, detecting one or more activations of the button; and in response to detecting the one or more activations of the button: capturing, with the one or more biometric sensors that are separate from the button, biometric data; if the biometric data satisfies biometric authentication criteria, transitioning the electronic device to a second state in which the respective function of the electronic device is enabled; and if the biometric data does not satisfy the biometric authentication criteria, maintaining the electronic device in the first state and displaying, on the display, an indication that biometric authentication has failed.
Type: Application
Filed: July 11, 2022
Publication date: October 27, 2022
Inventors: Marcel Van Os, Peter D. Anton, Arian Behzadi, Jeffrey T. Bernstein, Lynne Devine, Allison W. Dryer, Cas G. Lemmens, Joseph A. Malia

Publication number: 20220262022
Abstract: The present disclosure generally relates to displaying and editing an image with depth information. Image data associated with an image includes depth information associated with a subject. In response to a request to display the image, a first modified image is displayed. Displaying the first modified image includes displaying, based on the depth information, a first level of simulated lighting on a first portion of the subject and a second level of simulated lighting on a second portion of the subject. After displaying the first modified image, a second modified image is displayed. Displaying the second modified image includes displaying, based on the depth information, a third level of simulated lighting on the first portion of the subject and a fourth level of simulated lighting on the second portion of the subject.
Type: Application
Filed: April 28, 2022
Publication date: August 18, 2022
Inventors: Matan Stauber, Amir Hoffnung, Matthaeus Krenn, Jeffrey Traer Bernstein, Joseph A. Malia, Mark Hauenstein

Patent number: 11418699
Abstract: The present disclosure generally relates to user interfaces for altering visual media. In some embodiments, user interfaces are provided for capturing visual media (e.g., via a synthetic depth-of-field effect), playing back visual media (e.g., via a synthetic depth-of-field effect), editing visual media (e.g., that has a synthetic depth-of-field effect applied), and/or managing media capture.
Type: Grant
Filed: September 24, 2021
Date of Patent: August 16, 2022
Assignee: Apple Inc.
Inventors: Behkish J. Manzari, Graham R. Clarke, Toke Jansen, Joseph A. Malia, Andre Souza Dos Santos, William A. Sorrentino, III, Saumitro Dasgupta, Mikko Berggren Ettienne, Wayne Loofbourrow, Seyyedhossein Mousavi, Jens Jacob Pallisgaard, Paul Thomas Schneider, Joshua Blake Shagam, Piotr J. Stanczyk
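
The synthetic depth-of-field effect mentioned in several of the abstracts above can be illustrated with a minimal sketch: each pixel's blur radius grows with its depth's distance from a chosen focal plane, so subjects at the focal depth stay sharp while nearer and farther content is blurred. The `aperture` scaling parameter and function name are assumptions for illustration, not Apple's implementation.

```python
def blur_radius(depth, focal_depth, aperture, max_radius=10.0):
    """Blur radius (in pixels) for a synthetic depth-of-field effect.

    Grows linearly with distance from the focal plane, scaled by an
    aperture-like strength parameter, and clamped to max_radius."""
    r = aperture * abs(depth - focal_depth)
    return min(r, max_radius)
```

A renderer would then apply, per pixel, a blur kernel of this radius; editing the effect after capture (as these inventions describe) amounts to recomputing radii with a new `focal_depth` or `aperture` against the stored depth map.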