Patents by Inventor Michelle Chua
Michelle Chua has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230377259
Abstract: Generating a three-dimensional virtual representation of a three-dimensional physical object can be based on capturing or receiving a capture bundle or a set of images. In some examples, generating the virtual representation of the physical object can be facilitated by user interfaces for identifying a physical object and capturing a set of images of the physical object. Generating the virtual representation can include previewing or modifying a set of images. In some examples, generating the virtual representation of the physical object can include generating a first representation of the physical object (e.g., a point cloud) and/or generating a second three-dimensional virtual representation of the physical object (e.g., a mesh reconstruction). In some examples, a visual indication of the progress of the image capture process and/or the generation of the virtual representation of the three-dimensional object can be displayed, such as in a capture user interface.
Type: Application
Filed: May 15, 2023
Publication date: November 23, 2023
Inventors: Zachary Z. BECKER, Michelle CHUA, Thorsten GERNOTH, Michael P. JOHNSON, Allison W. DRYER
-
Publication number: 20230377299
Abstract: Generating a three-dimensional virtual representation of a three-dimensional physical object can be based on capturing or receiving a capture bundle or a set of images. In some examples, generating the virtual representation of the physical object can be facilitated by user interfaces for identifying a physical object and capturing a set of images of the physical object. Generating the virtual representation can include previewing or modifying a set of images. In some examples, generating the virtual representation of the physical object can include generating a first representation of the physical object (e.g., a point cloud) and/or generating a second three-dimensional virtual representation of the physical object (e.g., a mesh reconstruction). In some examples, a visual indication of the progress of the image capture process and/or the generation of the virtual representation of the three-dimensional object can be displayed, such as in a capture user interface.
Type: Application
Filed: May 15, 2023
Publication date: November 23, 2023
Inventors: Zachary Z. BECKER, Michelle CHUA, Thorsten GERNOTH, Michael P. JOHNSON
-
Publication number: 20230377300
Abstract: Generating a three-dimensional virtual representation of a three-dimensional physical object can be based on capturing or receiving a capture bundle or a set of images. In some examples, generating the virtual representation of the physical object can be facilitated by user interfaces for identifying a physical object and capturing a set of images of the physical object. Generating the virtual representation can include previewing or modifying a set of images. In some examples, generating the virtual representation of the physical object can include generating a first representation of the physical object (e.g., a point cloud) and/or generating a second three-dimensional virtual representation of the physical object (e.g., a mesh reconstruction). In some examples, a visual indication of the progress of the image capture process and/or the generation of the virtual representation of the three-dimensional object can be displayed, such as in a capture user interface.
Type: Application
Filed: May 15, 2023
Publication date: November 23, 2023
Inventors: Zachary Z. BECKER, Michelle CHUA, Thorsten GERNOTH, Michael P. JOHNSON, Allison W. DRYER
-
Publication number: 20230350536
Abstract: Various implementations disclosed herein include devices, systems, and methods for selecting a point-of-view (POV) for displaying an environment. In some implementations, a device includes a display, one or more processors, and a non-transitory memory. In some implementations, a method includes obtaining a request to display a graphical environment. The graphical environment is associated with a set of saliency values corresponding to respective portions of the graphical environment. A POV for displaying the graphical environment is selected based on the set of saliency values. The graphical environment is displayed from the selected POV on the display.
Type: Application
Filed: February 22, 2023
Publication date: November 2, 2023
Inventors: Dan Feng, Aashi Manglik, Adam M. O'Hern, Bo Morgan, Bradley W. Peebler, Daniel L. Kovacs, Edward Ahn, James Moll, Mark E. Drummond, Michelle Chua, Mu Qiao, Noah Gamboa, Payal Jotwani, Siva Chandra Mouli Sivapurapu
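The selection logic this abstract describes can be pictured as scoring candidate viewpoints by the total saliency of the portions of the environment they show. The sketch below is a hypothetical illustration only, not the patented implementation; `select_pov`, the region sets, and the sum-of-saliency scoring rule are all assumptions.

```python
# Hypothetical sketch: pick the point-of-view whose visible regions
# carry the most total saliency. All names are illustrative.

def select_pov(candidate_povs, saliency, visible_regions):
    """candidate_povs: list of POV identifiers.
    saliency: dict mapping region id -> saliency value.
    visible_regions: dict mapping POV id -> set of region ids visible from it.
    Returns the POV with the highest summed saliency over its visible regions."""
    def score(pov):
        return sum(saliency.get(region, 0.0) for region in visible_regions[pov])
    return max(candidate_povs, key=score)
```

For example, a POV that frames one highly salient region would be preferred over one that frames several low-saliency regions.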
-
Patent number: 11782571
Abstract: Various implementations disclosed herein include a method performed by a device. While executing a CGR application, the method includes displaying a three-dimensional object in a three-dimensional space, wherein the three-dimensional space is defined by a three-dimensional coordinate system. The method also includes: detecting a first user input directed to the three-dimensional object; and in response to detecting the first user input, displaying a spatial manipulation user interface element including a set of spatial manipulation affordances respectively associated with a set of spatial manipulations of the three-dimensional object, wherein each of the set of spatial manipulations corresponds to a translational movement of the three-dimensional object along a corresponding axis of the three-dimensional space.
Type: Grant
Filed: July 19, 2022
Date of Patent: October 10, 2023
Assignee: APPLE INC.
Inventors: Gerald Louis Guyomard, Etienne H. Guerard, Adam Michael O'Hern, Michelle Chua, Robin-Yann Joram Storm, Adam James Bolton, Zachary Becker, Bradley Warren Peebler
-
Publication number: 20230031832
Abstract: A three-dimensional preview of content can be generated and presented at an electronic device in a three-dimensional environment. The three-dimensional preview of content can be presented concurrently with a two-dimensional representation of the content in a content generation environment presented in the three-dimensional environment. While the three-dimensional preview of content is presented in the three-dimensional environment, one or more affordances can be provided for interacting with the one or more computer-generated virtual objects of the three-dimensional preview. The one or more affordances may be displayed with the three-dimensional preview of content in the three-dimensional environment. The three-dimensional preview of content may be presented on a three-dimensional tray and the one or more affordances may be presented in a control bar or other grouping of controls outside the perimeter of the tray and/or along the perimeter of the tray.
Type: Application
Filed: July 15, 2022
Publication date: February 2, 2023
Inventors: David A. LIPTON, Ryan S. BURGOYNE, Michelle CHUA, Zachary Z. BECKER, Karen N. WONG, Eric G. THIVIERGE, Mahdi NABIYOUNI, Eric CHIU, Tyler L. CASELLA
-
Publication number: 20220413691
Abstract: A computer-generated virtual object manipulator having one or more affordances for manipulating a computer-generated virtual object is disclosed. Selection of a virtual object can cause an object manipulator to be displayed over the virtual object. The object manipulator can include a cone-shaped single-axis translation affordance for each of one or more object axes, a disc-shaped single-axis scale affordance for each of the one or more object axes, an arc-shaped rotation affordance for rotation about each of the one or more object axes, and a center of object affordance for free space movement of the virtual object. The object manipulator can also include a slice-shaped two-axis translation affordance that can be displayed after hovering over an area in a particular plane.
Type: Application
Filed: June 16, 2022
Publication date: December 29, 2022
Inventors: Zachary Z. BECKER, Michelle CHUA, David A. LIPTON, Robin Yann Joram STORM, Eric G. THIVIERGE, Jue WANG
-
Publication number: 20220350461
Abstract: Various implementations disclosed herein include a method performed by a device. While executing a CGR application, the method includes displaying a three-dimensional object in a three-dimensional space, wherein the three-dimensional space is defined by a three-dimensional coordinate system. The method also includes: detecting a first user input directed to the three-dimensional object; and in response to detecting the first user input, displaying a spatial manipulation user interface element including a set of spatial manipulation affordances respectively associated with a set of spatial manipulations of the three-dimensional object, wherein each of the set of spatial manipulations corresponds to a translational movement of the three-dimensional object along a corresponding axis of the three-dimensional space.
Type: Application
Filed: July 19, 2022
Publication date: November 3, 2022
Inventors: Gerald Louis Guyomard, Etienne H. Guerard, Adam Michael O'Hern, Michelle Chua, Robin-Yann Joram Storm, Adam James Bolton, Zachary Becker, Bradley Warren Peebler
-
Publication number: 20220291806
Abstract: A method includes determining to present a computer-generated reality (CGR) object that is associated with a first anchor and a second anchor. The method includes determining, based on an image of a physical environment, whether the physical environment includes a portion corresponding to the first anchor. The method includes, in response to determining that the physical environment lacks a portion that corresponds to the first anchor, determining, based on the image, whether the physical environment includes a portion corresponding to the second anchor. The method includes, in response to determining that the physical environment includes a portion that corresponds to the second anchor, displaying, on the display, the CGR object at a location of the display corresponding to the second anchor.
Type: Application
Filed: May 26, 2022
Publication date: September 15, 2022
Inventors: Cameron J. Dunn, Eric Steven Peyton, Olivier Marie Jacques Pinon, Etienne H. Guerard, David John Addey, Pau Sastre Miguel, Michelle Chua, Eric Thivierge
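This abstract lays out a simple fallback: look for the first anchor in the captured image, fall back to the second if the first is absent, and display the CGR object at whichever anchor is found. A minimal sketch of that control flow, with hypothetical `find_anchor` and `display_at` callables that are assumptions rather than anything named in the filing:

```python
# Illustrative sketch (not the actual implementation) of two-anchor fallback:
# try the first anchor, then the second; display the object where one is found.

def place_cgr_object(image, first_anchor, second_anchor, find_anchor, display_at):
    """find_anchor(image, anchor) -> location or None (hypothetical detector).
    display_at(location) renders the CGR object there (hypothetical renderer).
    Returns True if the object was placed."""
    location = find_anchor(image, first_anchor)
    if location is None:
        # Physical environment lacks the first anchor; try the second.
        location = find_anchor(image, second_anchor)
    if location is None:
        return False
    display_at(location)
    return True
```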
-
Patent number: 11429246
Abstract: Various implementations disclosed herein include a method performed by a device. The method includes displaying a three-dimensional object in a three-dimensional space. The method includes displaying a spatial manipulation user interface element including a set of spatial manipulation affordances respectively associated with a set of spatial manipulations of the three-dimensional object. Each of the set of spatial manipulations corresponds to a translational movement of the three-dimensional object along a corresponding axis of the three-dimensional space. The method includes detecting a first user input directed to a first spatial manipulation affordance of the set of spatial manipulation affordances. The first spatial manipulation affordance is associated with a first axis of the three-dimensional space.
Type: Grant
Filed: October 1, 2021
Date of Patent: August 30, 2022
Assignee: Apple Inc.
Inventors: Gerald Louis Guyomard, Etienne H. Guerard, Adam Michael O'Hern, Michelle Chua, Robin-Yann Joram Storm, Adam James Bolton, Zachary Becker, Bradley Warren Peebler
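The single-axis translation this family of filings describes amounts to constraining a user's drag to one axis of the coordinate system. A hedged sketch of that vector math, with illustrative names; the actual affordance behavior is surely richer than this:

```python
# Hypothetical sketch: constrain a 3D drag delta to a single (unit-length)
# axis and apply the constrained movement to the object's position.

def translate_along_axis(position, axis, drag_delta):
    """position, axis, drag_delta: 3-tuples; axis is assumed unit length.
    Projects drag_delta onto axis and moves position by that amount."""
    # Signed magnitude of the drag along the axis (dot product).
    amount = sum(d * a for d, a in zip(drag_delta, axis))
    return tuple(p + amount * a for p, a in zip(position, axis))
```

A diagonal drag of (3, 4, 0) against an x-axis affordance, for instance, moves the object only 3 units along x.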
-
Patent number: 11385761
Abstract: In one embodiment, a method of generating a computer-generated reality (CGR) file includes receiving, via one or more input devices, user input generating a computer-generated reality (CGR) scene, a user input associating an anchor with the CGR scene, user input associating one or more CGR objects with the CGR scene, wherein the CGR objects are to be displayed in association with the anchor, and user input associating a behavior with the CGR scene, wherein the behavior includes one or more triggers and actions and wherein the actions are performed in response to detecting any of the triggers. The method includes generating a CGR file including data regarding the CGR scene, the CGR file including data regarding the anchor, the CGR objects, and the behavior.
Type: Grant
Filed: June 3, 2020
Date of Patent: July 12, 2022
Assignee: APPLE INC.
Inventors: Cameron J. Dunn, Eric Steven Peyton, Olivier Marie Jacques Pinon, Etienne H. Guerard, David John Addey, Pau Sastre Miguel, Michelle Chua, Eric Thivierge
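As a rough illustration of bundling a scene's anchor, objects, and trigger/action behaviors into one serializable file, here is a hypothetical JSON-backed schema. The abstract does not describe the actual CGR file format; this layout, and every field name in it, is purely an assumption.

```python
# Hypothetical sketch of a CGR scene file: one record holding the anchor,
# the objects displayed relative to it, and trigger/action behaviors.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class CGRScene:
    anchor: str                                    # e.g. "horizontal-plane"
    objects: list = field(default_factory=list)    # object descriptions
    behaviors: list = field(default_factory=list)  # {"triggers": [...], "actions": [...]}

def write_cgr_file(scene: CGRScene, path: str) -> None:
    """Serialize the scene to a JSON file at path (assumed format)."""
    with open(path, "w") as f:
        json.dump(asdict(scene), f)
```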
-
Publication number: 20220019335
Abstract: Various implementations disclosed herein include a method performed by a device. The method includes displaying a three-dimensional object in a three-dimensional space. The method includes displaying a spatial manipulation user interface element including a set of spatial manipulation affordances respectively associated with a set of spatial manipulations of the three-dimensional object. Each of the set of spatial manipulations corresponds to a translational movement of the three-dimensional object along a corresponding axis of the three-dimensional space. The method includes detecting a first user input directed to a first spatial manipulation affordance of the set of spatial manipulation affordances. The first spatial manipulation affordance is associated with a first axis of the three-dimensional space.
Type: Application
Filed: October 1, 2021
Publication date: January 20, 2022
Inventors: Gerald Louis Guyomard, Etienne H. Guerard, Adam Michael O'Hern, Michelle Chua, Robin-Yann Joram Storm, Adam James Bolton, Zachary Becker, Bradley Warren Peebler
-
Publication number: 20210383097
Abstract: Various implementations disclosed herein include devices, systems, and methods that facilitate the creation of a 3D model for object detection based on a scan of the object. Some implementations provide a user interface that a user interacts with to facilitate a scan of an object to create a 3D model of the object for later object detection. The user interface may include an indicator that provides visual or audible feedback to the user indicating the direction that the capturing device is facing relative to the object being scanned. The direction of the capture device may be identified using sensors on the device (e.g., inertial measurement unit (IMU), gyroscope, etc.) or other techniques (e.g., visual inertial odometry (VIO)) and based on the user positioning the device so that the object is in view.
Type: Application
Filed: August 19, 2021
Publication date: December 9, 2021
Inventors: Etienne GUERARD, Omar SHAIK, Michelle CHUA, Zachary Z. BECKER
-
Patent number: 11182044
Abstract: A method includes displaying a spatial manipulation user interface element including a first set of spatial manipulation affordances respectively associated with a first set of spatial manipulations of a three-dimensional object. The first set of spatial manipulations is based on a first virtual camera perspective. The method includes, in response to detecting a user input changing the first virtual camera perspective to a second virtual camera perspective, changing display of the three-dimensional object from the first virtual camera perspective to the second virtual camera perspective, and displaying a second set of spatial manipulation affordances respectively associated with a second set of spatial manipulations. The second set of spatial manipulations is based on the second virtual camera perspective, and includes a spatial manipulation excluded from the first set of spatial manipulations.
Type: Grant
Filed: May 29, 2020
Date of Patent: November 23, 2021
Assignee: APPLE INC.
Inventors: Gerald Louis Guyomard, Etienne H. Guerard, Adam Michael O'Hern, Michelle Chua, Robin-Yann Joram Storm, Adam James Bolton, Zachary Becker, Bradley Warren Peebler
-
Publication number: 20200387289
Abstract: In one embodiment, a method of generating a computer-generated reality (CGR) file includes receiving, via one or more input devices, user input generating a computer-generated reality (CGR) scene, a user input associating an anchor with the CGR scene, user input associating one or more CGR objects with the CGR scene, wherein the CGR objects are to be displayed in association with the anchor, and user input associating a behavior with the CGR scene, wherein the behavior includes one or more triggers and actions and wherein the actions are performed in response to detecting any of the triggers. The method includes generating a CGR file including data regarding the CGR scene, the CGR file including data regarding the anchor, the CGR objects, and the behavior.
Type: Application
Filed: June 3, 2020
Publication date: December 10, 2020
Inventors: Cameron J. Dunn, Eric Steven Peyton, Olivier Marie Jacques Pinon, Etienne H. Guerard, David John Addey, Pau Sastre Miguel, Michelle Chua, Eric Thivierge
-
Publication number: 20200379626
Abstract: In one implementation, a method of spatially manipulating a three-dimensional object includes displaying a three-dimensional object in a three-dimensional space from a first virtual camera perspective, wherein the three-dimensional space is defined by a three-dimensional coordinate system including three perpendicular axes. The method includes displaying a spatial manipulation user interface element including a first set of spatial manipulation affordances respectively associated with a first set of spatial manipulations of the three-dimensional object, wherein the first set of spatial manipulations is based on the first virtual camera perspective. The method includes detecting a user input changing the first virtual camera perspective to a second virtual camera perspective.
Type: Application
Filed: May 29, 2020
Publication date: December 3, 2020
Inventors: Gerald Louis Guyomard, Etienne H. Guerard, Adam Michael O'Hern, Michelle Chua, Robin-Yann Joram Storm, Adam James Bolton, Zachary Becker, Bradley Warren Peebler
-
Patent number: 10251549
Abstract: En face views of OCT volumes provide important and complementary visualizations of the retina and optic nerve head for investigating biomarkers of diseases affecting the retina. We demonstrate real-time processing of OCT volumetric data for axial tracking. In combination with a Controllable Optical Element (COE), this invention demonstrates acquisition, real-time tracking, automated focus on depth-resolved en face layers extracted from a volume, and focus-stacked OCT volumes with high resolution throughout an extended depth range.
Type: Grant
Filed: February 7, 2017
Date of Patent: April 9, 2019
Inventors: Marinko Venci Sarunic, Yifan Jian, Eunice Michelle Chua Cua, Sujin Lee, Dongkai Miao
-
Patent number: 10186086
Abstract: An augmented reality head-mounted device includes a gaze detector, a camera, and a communication interface. The gaze detector determines a gaze vector of an eye of a wearer of the augmented reality head-mounted device. The camera images a physical space including a display of a computing device. The communication interface sends a control signal to the computing device in response to a wearer input. The control signal indicates a location at which the gaze vector intersects the display and is useable by the computing device to adjust operation of the computing device.
Type: Grant
Filed: September 2, 2015
Date of Patent: January 22, 2019
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Riccardo Giraldi, Anatolie Gavriliuc, Michelle Chua, Andrew Frederick Muehlhausen, Robert Thomas Held, Joseph van den Heuvel
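Finding where a gaze vector meets a display reduces to standard ray-plane intersection once the display is modeled as a plane with a point and a normal. The sketch below is generic geometry, not code from the patent; all names are illustrative.

```python
# Hypothetical sketch: intersect a gaze ray (eye origin, gaze direction)
# with a display modeled as an infinite plane (point + normal).

def gaze_display_intersection(eye, gaze_dir, plane_point, plane_normal):
    """All arguments are 3-tuples. Returns the 3D intersection point, or
    None if the gaze is parallel to the display or the display is behind
    the eye."""
    denom = sum(g * n for g, n in zip(gaze_dir, plane_normal))
    if abs(denom) < 1e-9:
        return None  # gaze parallel to the display plane
    # Distance along the ray to the plane.
    t = sum((p - e) * n for p, e, n in zip(plane_point, eye, plane_normal)) / denom
    if t < 0:
        return None  # display is behind the eye
    return tuple(e + t * g for e, g in zip(eye, gaze_dir))
```

A real device would additionally clip the intersection point to the display's bounds and convert it to screen coordinates before sending it in the control signal.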
-
Patent number: 9865091
Abstract: Examples are disclosed herein that relate to identifying and localizing devices in an environment via an augmented reality display device. One example provides, on a portable augmented reality computing device, a method including establishing a coordinate frame for an environment, discovering, via a location-sensitive input device, a location of a physical manifestation of a device in the environment, assigning a device location for the device in the coordinate frame based upon the location of the physical manifestation, and modifying an output of the portable augmented reality computing device based upon a change in relative position between the portable augmented reality computing device and the physical manifestation in the environment.
Type: Grant
Filed: September 2, 2015
Date of Patent: January 9, 2018
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Riccardo Giraldi, Anatolie Gavriliuc, Michelle Chua, Andrew Frederick Muehlhausen, Robert Thomas Held, Joseph van den Heuvel, Todd Alan Omotani, Richard J. Wifall, Christian Sadak, Gregory Alt
-
Publication number: 20170227350
Abstract: En face views of OCT volumes provide important and complementary visualizations of the retina and optic nerve head for investigating biomarkers of diseases affecting the retina. We demonstrate real-time processing of OCT volumetric data for axial tracking. In combination with a Controllable Optical Element (COE), this invention demonstrates acquisition, real-time tracking, automated focus on depth-resolved en face layers extracted from a volume, and focus-stacked OCT volumes with high resolution throughout an extended depth range.
Type: Application
Filed: February 7, 2017
Publication date: August 10, 2017
Inventors: Marinko Venci Sarunic, Yifan Jian, Eunice Michelle Chua Cua, Sujin Lee, Dongkai Miao