Patents by Inventor Justin D. Stoyles
Justin D. Stoyles has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230393796
Abstract: In some exemplary processes for controlling an external device using a computer-generated reality interface, a view of a physical environment that includes the external device is provided through the display, and information specifying a function of the external device is received from the external device. While the view of the physical environment is being provided, an affordance corresponding to the function is displayed at a position on the display that overlays at least a portion of the external device.
Type: Application
Filed: August 18, 2023
Publication date: December 7, 2023
Inventors: Justin D. Stoyles, Michael Kuhn
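The overlay placement this abstract describes can be illustrated with a minimal sketch. The `DetectedDevice` type, the display-coordinate bounding box, and the center-of-box placement rule are all assumptions for illustration, not Apple's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class DetectedDevice:
    """Hypothetical detection result: a device identifier plus its
    bounding box in display coordinates (x, y, width, height)."""
    device_id: str
    bbox: tuple

def affordance_position(device: DetectedDevice) -> tuple:
    """Place the affordance at the center of the device's on-screen
    bounding box, so it overlays at least a portion of the device."""
    x, y, w, h = device.bbox
    return (x + w / 2, y + h / 2)
```

Centering is just one placement policy; the claim only requires that the affordance overlay some portion of the device in the displayed view.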
-
Publication number: 20230376261
Abstract: In an exemplary process for accessing a function of an external device through a computer-generated reality interface, one or more external devices are detected. Image data of a physical environment captured by an image sensor is obtained. The process determines whether the image data includes a representation of a first external device of the one or more detected external devices. In accordance with determining that the image data includes a representation of the first external device, the process causes a display to concurrently display a representation of the physical environment according to the image data, and an affordance corresponding to a function of the first external device, wherein detecting user activation of the displayed affordance causes the first external device to perform an action corresponding to the function.
Type: Application
Filed: August 1, 2023
Publication date: November 23, 2023
Inventors: Justin D. Stoyles, Michael Kuhn
-
Patent number: 11762619
Abstract: In some exemplary processes for controlling an external device using a computer-generated reality interface, information specifying a function of the external device is received from the external device. First image data of a physical environment that includes the external device is obtained with one or more image sensors. A representation of the physical environment according to the first image data is displayed on a display. While displaying the representation of the physical environment, second image data identifying a gesture occurring between the display and the external device in the physical environment is obtained with the one or more image sensors. A determination is made as to whether the identified gesture satisfies one or more predetermined criteria associated with the function. In accordance with determining that the identified gesture satisfies one or more predetermined criteria associated with the function, the external device is caused to perform the function.
Type: Grant
Filed: August 4, 2021
Date of Patent: September 19, 2023
Assignee: Apple Inc.
Inventors: Justin D. Stoyles, Michael Kuhn
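The gesture-to-function gating described in this abstract can be sketched as a criteria check followed by a conditional dispatch. The gesture names, the criteria table, and the `perform` callback are hypothetical placeholders, not the patented method.

```python
# Hypothetical mapping: each device function lists the gestures that
# satisfy its predetermined criteria.
FUNCTION_CRITERIA = {
    "power_toggle": {"tap"},
    "volume_up": {"swipe_up"},
}

def gesture_satisfies(function: str, gesture: str) -> bool:
    """Determine whether the identified gesture satisfies the
    predetermined criteria associated with the function."""
    return gesture in FUNCTION_CRITERIA.get(function, set())

def handle_gesture(function: str, gesture: str, perform) -> bool:
    """Only when the criteria are satisfied is the external device
    caused to perform the function (via the `perform` callback)."""
    if gesture_satisfies(function, gesture):
        perform(function)
        return True
    return False
```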
-
Patent number: 11762620
Abstract: In an exemplary process for accessing a function of an external device through a computer-generated reality interface, one or more external devices are detected. Image data of a physical environment captured by an image sensor is obtained. The process determines whether the image data includes a representation of a first external device of the one or more detected external devices. In accordance with determining that the image data includes a representation of the first external device, the process causes a display to concurrently display a representation of the physical environment according to the image data, and an affordance corresponding to a function of the first external device, wherein detecting user activation of the displayed affordance causes the first external device to perform an action corresponding to the function.
Type: Grant
Filed: November 23, 2021
Date of Patent: September 19, 2023
Assignee: Apple Inc.
Inventors: Justin D. Stoyles, Michael Kuhn
-
Patent number: 11302086
Abstract: The present disclosure relates to providing a software feature of an electronic product in an augmented reality (AR) environment. In some embodiments, images are obtained using one or more image sensors, a determination is made whether the obtained images include printed media depicting the electronic product, when the obtained images include the printed media depicting the electronic product, a virtual object corresponding to the electronic product is displayed in the AR environment, and the software feature of the electronic product is provided with the virtual object.
Type: Grant
Filed: January 5, 2021
Date of Patent: April 12, 2022
Assignee: Apple Inc.
Inventors: Justin D. Stoyles, Michael Kuhn
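The printed-media flow above (detect the product in an image, then spawn a virtual object that carries the software feature) can be sketched as a small pipeline. The detector and spawner callables and the dictionary shape are assumed for illustration only.

```python
def process_frame(frame, detect_printed_product, spawn_virtual_object):
    """If the captured frame includes printed media depicting the
    electronic product, display a corresponding virtual object and
    provide the product's software feature with it; otherwise do nothing."""
    product = detect_printed_product(frame)
    if product is None:
        return None
    virtual_object = spawn_virtual_object(product)
    virtual_object["software_feature_enabled"] = True
    return virtual_object
```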
-
Publication number: 20220083303
Abstract: In an exemplary process for accessing a function of an external device through a computer-generated reality interface, one or more external devices are detected. Image data of a physical environment captured by an image sensor is obtained. The process determines whether the image data includes a representation of a first external device of the one or more detected external devices. In accordance with determining that the image data includes a representation of the first external device, the process causes a display to concurrently display a representation of the physical environment according to the image data, and an affordance corresponding to a function of the first external device, wherein detecting user activation of the displayed affordance causes the first external device to perform an action corresponding to the function.
Type: Application
Filed: November 23, 2021
Publication date: March 17, 2022
Inventors: Justin D. Stoyles, Michael Kuhn
-
Patent number: 11227494
Abstract: The present disclosure relates to providing transit information in an augmented reality environment. In some embodiments, images are obtained using one or more image sensors, a determination is made whether the obtained images include a map, and, in accordance with a set of one or more conditions being satisfied, transit information is displayed in the augmented reality environment. A location of the displayed transit information in the augmented reality environment may correspond to a respective feature of the map.
Type: Grant
Filed: September 24, 2018
Date of Patent: January 18, 2022
Assignee: Apple Inc.
Inventors: Justin D. Stoyles, Michael Kuhn
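Anchoring transit information to a corresponding map feature, as this abstract describes, could look like the following sketch. The detection and lookup data structures are hypothetical; a real system would detect the map and its features in the camera images first.

```python
def place_transit_info(detected_features, transit_lookup):
    """For each map feature found in the obtained images (e.g. a station
    marker), anchor the matching transit information at that feature's
    position in the AR environment. Features with no transit data are
    skipped."""
    placements = []
    for feature in detected_features:
        info = transit_lookup.get(feature["name"])
        if info is not None:
            placements.append({"position": feature["position"], "info": info})
    return placements
```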
-
Patent number: 11188286
Abstract: In an exemplary process for accessing a function of an external device through a computer-generated reality interface, one or more external devices are detected. Image data of a physical environment captured by an image sensor is obtained. The process determines whether the image data includes a representation of a first external device of the one or more detected external devices. In accordance with determining that the image data includes a representation of the first external device, the process causes a display to concurrently display a representation of the physical environment according to the image data, and an affordance corresponding to a function of the first external device, wherein detecting user activation of the displayed affordance causes the first external device to perform an action corresponding to the function.
Type: Grant
Filed: February 26, 2020
Date of Patent: November 30, 2021
Assignee: Apple Inc.
Inventors: Justin D. Stoyles, Michael Kuhn
-
Publication number: 20210365228
Abstract: In some exemplary processes for controlling an external device using a computer-generated reality interface, information specifying a function of the external device is received from the external device. First image data of a physical environment that includes the external device is obtained with one or more image sensors. A representation of the physical environment according to the first image data is displayed on a display. While displaying the representation of the physical environment, second image data identifying a gesture occurring between the display and the external device in the physical environment is obtained with the one or more image sensors. A determination is made as to whether the identified gesture satisfies one or more predetermined criteria associated with the function. In accordance with determining that the identified gesture satisfies one or more predetermined criteria associated with the function, the external device is caused to perform the function.
Type: Application
Filed: August 4, 2021
Publication date: November 25, 2021
Inventors: Justin D. Stoyles, Michael Kuhn
-
Patent number: 11120600
Abstract: Systems and methods for generating a video of an emoji that has been puppeted using image, depth, and audio inputs. The inputs can capture a user's facial expressions and eye, eyebrow, mouth, and head movements. A pose held by the user can be detected and used to generate supplemental animation. The emoji can further be animated using physical properties associated with the emoji and the captured movements; for example, an emoji of a dog can have its ears move in response to an up-and-down movement or a shaking of the head. The video can be sent in a message to one or more recipients. A sending device can render the puppeted video in accordance with hardware and software capabilities of a recipient's computer device.
Type: Grant
Filed: February 14, 2019
Date of Patent: September 14, 2021
Assignee: Apple Inc.
Inventors: Justin D. Stoyles, Alexandre R. Moha, Nicolas V. Scapel, Guillaume P. Barlier, Aurelio Guzman, Bruno M. Sommer, Nina Damasky, Thibaut Weise, Thomas Goossens, Hoan Pham, Brian Amberg
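The secondary animation the abstract mentions (a dog emoji's ears trailing head movement) is often approximated with a lag or spring model. The exponential-lag formulation and the `stiffness` parameter below are illustrative assumptions, not the patented animation system.

```python
def puppet_ears(head_positions, stiffness=0.3):
    """Per frame, move the ear a fraction of the way toward the head's
    current position, so the ears visibly trail an up-and-down or
    head-shaking movement (a simple exponential lag)."""
    ear = head_positions[0]
    trail = []
    for head in head_positions:
        ear = ear + stiffness * (head - ear)
        trail.append(round(ear, 4))
    return trail
```

A production rig would apply this per axis, with damping and rest-pose constraints, but the lag-behind-the-driver idea is the same.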
-
Patent number: 11086581
Abstract: In some exemplary processes for controlling an external device using a computer-generated reality interface, information specifying a function of the external device is received from the external device. First image data of a physical environment that includes the external device is obtained with one or more image sensors. A representation of the physical environment according to the first image data is displayed on a display. While displaying the representation of the physical environment, second image data identifying a gesture occurring between the display and the external device in the physical environment is obtained with the one or more image sensors. A determination is made as to whether the identified gesture satisfies one or more predetermined criteria associated with the function. In accordance with determining that the identified gesture satisfies one or more predetermined criteria associated with the function, the external device is caused to perform the function.
Type: Grant
Filed: February 26, 2020
Date of Patent: August 10, 2021
Assignee: Apple Inc.
Inventors: Justin D. Stoyles, Michael Kuhn
-
Patent number: 11087559
Abstract: The present disclosure relates to managing augmented reality content created on a first electronic device and viewed at a second electronic device. In some embodiments, the first electronic device determines its physical location, receives input representing user-generated augmented reality content, displays an augmented reality environment including the user-generated augmented reality content overlaid on a live view of the physical location, and sends the user-generated augmented reality content to an external storage repository. The second electronic device can then receive the user-generated augmented reality content, determine whether it is at the physical location, and display the user-generated augmented reality content when it is at the physical location.
Type: Grant
Filed: November 3, 2020
Date of Patent: August 10, 2021
Assignee: Apple Inc.
Inventors: Michael Kuhn, Justin D. Stoyles
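The viewing-side check in this abstract (show shared AR content only when the second device is at the recorded physical location) can be sketched with a simple proximity filter. The repository shape, the 25 m tolerance, and the flat-earth distance approximation are all assumptions for illustration.

```python
import math

def within(loc_a, loc_b, tol_m=25.0):
    """Crude flat-earth distance check between two (lat, lon) pairs,
    adequate at the small scales a proximity gate needs."""
    dlat = (loc_a[0] - loc_b[0]) * 111_000
    dlon = (loc_a[1] - loc_b[1]) * 111_000 * math.cos(math.radians(loc_a[0]))
    return math.hypot(dlat, dlon) <= tol_m

def visible_content(repository, device_location):
    """Return only the user-generated AR content whose recorded physical
    location matches the viewing device's current location."""
    return [c for c in repository if within(c["location"], device_location)]
```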
-
Patent number: 11087558
Abstract: The present disclosure relates to managing augmented reality content created on a first electronic device and viewed at a second electronic device. In some embodiments, the first electronic device determines its physical location, receives input representing user-generated augmented reality content, displays an augmented reality environment including the user-generated augmented reality content overlaid on a live view of the physical location, and sends the user-generated augmented reality content to an external storage repository. The second electronic device can then receive the user-generated augmented reality content, determine whether it is at the physical location, and display the user-generated augmented reality content when it is at the physical location.
Type: Grant
Filed: September 21, 2020
Date of Patent: August 10, 2021
Assignee: Apple Inc.
Inventors: Michael Kuhn, Justin D. Stoyles
-
Patent number: 10891800
Abstract: The present disclosure relates to providing a software feature of an electronic product in an augmented reality (AR) environment. In some embodiments, images are obtained using one or more image sensors, a determination is made whether the obtained images include printed media depicting the electronic product, when the obtained images include the printed media depicting the electronic product, a virtual object corresponding to the electronic product is displayed in the AR environment, and the software feature of the electronic product is provided with the virtual object.
Type: Grant
Filed: June 10, 2020
Date of Patent: January 12, 2021
Assignee: Apple Inc.
Inventors: Justin D. Stoyles, Michael Kuhn
-
Patent number: 10820795
Abstract: In one implementation, a method includes: determining an interpupillary distance (IPD) measurement for a user based on a function of depth data obtained by the depth sensor and image data obtained by the image sensor; and calibrating a head-mounted device (HMD) provided to deliver augmented reality/virtual reality (AR/VR) content by setting one or more presentation parameters of the HMD based on the IPD measurement in order to tailor one or more AR/VR displays of the HMD to a field-of-view of the user.
Type: Grant
Filed: June 22, 2018
Date of Patent: November 3, 2020
Assignee: Apple Inc.
Inventors: Thibaut Weise, Justin D. Stoyles, Michael Kuhn, Reinhard Klapfer, Stefan Misslinger
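Combining image and depth data into a metric IPD, as this abstract describes, is commonly done with a pinhole-camera conversion: the pixel distance between the detected pupils, scaled by depth over focal length, gives a metric distance. The focal length, the pupil coordinates, and the `set_display_offsets` parameter are hypothetical illustrations, not the patented calibration.

```python
def ipd_mm(left_px, right_px, depth_mm, focal_px):
    """Pinhole-model IPD: pixel distance between the two detected
    pupils times (depth / focal length) yields millimeters."""
    dx = right_px[0] - left_px[0]
    dy = right_px[1] - left_px[1]
    pixel_dist = (dx * dx + dy * dy) ** 0.5
    return pixel_dist * depth_mm / focal_px

def set_display_offsets(ipd_measurement, default_ipd=63.0):
    """One possible presentation parameter: shift each eye's display
    by half the difference between the measured and default IPD."""
    return (ipd_measurement - default_ipd) / 2.0
```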
-
Publication number: 20200201444
Abstract: In some exemplary processes for controlling an external device using a computer-generated reality interface, information specifying a function of the external device is received from the external device. First image data of a physical environment that includes the external device is obtained with one or more image sensors. A representation of the physical environment according to the first image data is displayed on a display. While displaying the representation of the physical environment, second image data identifying a gesture occurring between the display and the external device in the physical environment is obtained with the one or more image sensors. A determination is made as to whether the identified gesture satisfies one or more predetermined criteria associated with the function. In accordance with determining that the identified gesture satisfies one or more predetermined criteria associated with the function, the external device is caused to perform the function.
Type: Application
Filed: February 26, 2020
Publication date: June 25, 2020
Inventors: Justin D. Stoyles, Michael Kuhn
-
Publication number: 20200192622
Abstract: In an exemplary process for accessing a function of an external device through a computer-generated reality interface, one or more external devices are detected. Image data of a physical environment captured by an image sensor is obtained. The process determines whether the image data includes a representation of a first external device of the one or more detected external devices. In accordance with determining that the image data includes a representation of the first external device, the process causes a display to concurrently display a representation of the physical environment according to the image data, and an affordance corresponding to a function of the first external device, wherein detecting user activation of the displayed affordance causes the first external device to perform an action corresponding to the function.
Type: Application
Filed: February 26, 2020
Publication date: June 18, 2020
Inventors: Justin D. Stoyles, Michael Kuhn
-
Publication number: 20190251728
Abstract: Systems and methods for generating a video of an emoji that has been puppeted using image, depth, and audio inputs. The inputs can capture a user's facial expressions and eye, eyebrow, mouth, and head movements. A pose held by the user can be detected and used to generate supplemental animation. The emoji can further be animated using physical properties associated with the emoji and the captured movements; for example, an emoji of a dog can have its ears move in response to an up-and-down movement or a shaking of the head. The video can be sent in a message to one or more recipients. A sending device can render the puppeted video in accordance with hardware and software capabilities of a recipient's computer device.
Type: Application
Filed: February 14, 2019
Publication date: August 15, 2019
Inventors: Justin D. Stoyles, Alexandre R. Moha, Nicolas V. Scapel, Guillaume P. Barlier, Aurelio Guzman, Bruno M. Sommer, Nina Damasky, Thibaut Weise, Thomas Goossens, Hoan Pham, Brian Amberg
-
Patent number: 10210648
Abstract: Systems and methods for generating a video of an emoji that has been puppeted using image, depth, and audio inputs. The inputs can capture a user's facial expressions and eye, eyebrow, mouth, and head movements. A pose held by the user can be detected and used to generate supplemental animation. The emoji can further be animated using physical properties associated with the emoji and the captured movements; for example, an emoji of a dog can have its ears move in response to an up-and-down movement or a shaking of the head. The video can be sent in a message to one or more recipients. A sending device can render the puppeted video in accordance with hardware and software capabilities of a recipient's computer device.
Type: Grant
Filed: November 10, 2017
Date of Patent: February 19, 2019
Assignee: Apple Inc.
Inventors: Justin D. Stoyles, Alexandre R. Moha, Nicolas V. Scapel, Guillaume P. Barlier, Aurelio Guzman, Bruno M. Sommer, Nina Damasky, Thibaut Weise, Thomas Goossens, Hoan Pham, Brian Amberg
-
Publication number: 20180336714
Abstract: Systems and methods for generating a video of an emoji that has been puppeted using image, depth, and audio inputs. The inputs can capture a user's facial expressions and eye, eyebrow, mouth, and head movements. A pose held by the user can be detected and used to generate supplemental animation. The emoji can further be animated using physical properties associated with the emoji and the captured movements; for example, an emoji of a dog can have its ears move in response to an up-and-down movement or a shaking of the head. The video can be sent in a message to one or more recipients. A sending device can render the puppeted video in accordance with hardware and software capabilities of a recipient's computer device.
Type: Application
Filed: November 10, 2017
Publication date: November 22, 2018
Inventors: Justin D. Stoyles, Alexandre R. Moha, Nicolas V. Scapel, Guillaume P. Barlier, Aurelio Guzman, Bruno M. Sommer, Nina Damasky, Thibaut Weise, Thomas Goossens, Hoan Pham, Brian Amberg