Patents by Inventor Tyler L. Casella

Tyler L. Casella has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240134493
    Abstract: An exemplary process presents a first set of views having a first set of options for using content in a 3D environment, wherein the first set of views are provided from a first set of viewpoints, determines to present a second set of options for using the content in the 3D environment based on user interaction data, wherein the second set of options includes fewer options than the first set of options, and, in accordance with determining to present the second set of options, presents a second set of views including the second set of options, wherein the second set of views are provided from a second set of viewpoints in the 3D environment.
    Type: Application
    Filed: March 2, 2022
    Publication date: April 25, 2024
    Inventors: Scott Bassett, Tyler L. Casella, Benjamin B. Loggins, Amanda K. Warren
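As a rough illustration of the option-reduction step in the abstract above, the Swift sketch below derives a smaller second option set from interaction data; the `ContentOption` type and the interaction-driven filter are assumptions, not the claimed implementation.

```swift
// Hypothetical sketch: derive the smaller second option set from user
// interaction data. ContentOption and usedIDs are illustrative names.
struct ContentOption {
    let id: String
    let title: String
}

/// Keeps only the options the interaction data shows the user actually used,
/// so the second set of views presents fewer options than the first.
func secondOptionSet(first: [ContentOption], usedIDs: Set<String>) -> [ContentOption] {
    let second = first.filter { usedIDs.contains($0.id) }
    return second.isEmpty ? first : second  // fall back rather than show nothing
}
```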
  • Patent number: 11809620
    Abstract: A method includes determining an eye focus depth and determining a focus point relative to a viewing location in a virtual environment based on the eye focus depth, wherein the virtual environment includes a computer-generated object. The method also includes, upon determining that the focus point is located within a threshold distance from the computer-generated object, activating a function of a computer-executable code development interface relative to the computer-generated object.
    Type: Grant
    Filed: February 2, 2021
    Date of Patent: November 7, 2023
    Assignee: Apple Inc.
    Inventors: Norman N. Wang, Tyler L. Casella, Benjamin Breckin Loggins, Daniel M. Delwood
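The abstract above describes gating a development-interface action on whether a gaze-derived focus point lands within a threshold distance of an object. A minimal Swift sketch of that check, where all the types, the threshold, and the projection of the focus point along the gaze ray are assumptions made for illustration:

```swift
// Minimal sketch: gate a development-interface action on whether the
// user's eye-focus point lands near a computer-generated object.
struct Vector3 {
    var x, y, z: Double
    static func - (a: Vector3, b: Vector3) -> Vector3 {
        Vector3(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z)
    }
    var length: Double { (x * x + y * y + z * z).squareRoot() }
}

struct VirtualObject {
    let name: String
    let position: Vector3
}

/// Projects the focus point along the gaze ray by the eye focus depth and
/// activates the action when it falls within `threshold` of the object.
func activateIfFocused(viewLocation: Vector3,
                       gazeDirection: Vector3,
                       eyeFocusDepth: Double,
                       object: VirtualObject,
                       threshold: Double,
                       action: (VirtualObject) -> Void) {
    let focusPoint = Vector3(x: viewLocation.x + gazeDirection.x * eyeFocusDepth,
                             y: viewLocation.y + gazeDirection.y * eyeFocusDepth,
                             z: viewLocation.z + gazeDirection.z * eyeFocusDepth)
    if (focusPoint - object.position).length <= threshold {
        action(object)  // e.g. open the code-development inspector for this object
    }
}
```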
  • Publication number: 20230290042
    Abstract: Various implementations disclosed herein include devices, systems, and methods that present playback of application content within a three-dimensional (3D) environment. An exemplary process presents a first set of views that includes application content provided by the application within a 3D environment. The first set of views are provided from a first set of viewpoints during execution of the application. The process records the execution of the application based on recording program state information and changes to the application content that are determined based on user interactions, and presents a second set of views including a playback of the application content within the 3D environment based on the recording. The second set of views are provided from a second set of viewpoints that are different than the first set of viewpoints.
    Type: Application
    Filed: March 13, 2023
    Publication date: September 14, 2023
    Inventors: Tyler L. Casella, Yi Zhou, Maneli Noorkami, David J. Addey
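One plausible shape for the recording described above, sketched in Swift; the snapshot and change types, and the time-based replay query, are illustrative assumptions rather than the claimed design.

```swift
import Foundation

// Illustrative recording model: program-state snapshots plus user-driven
// content changes, replayable later from viewpoints of the viewer's choosing.
struct StateSnapshot { let time: TimeInterval; let state: Data }
struct ContentChange { let time: TimeInterval; let description: String }

struct SessionRecording {
    private(set) var snapshots: [StateSnapshot] = []
    private(set) var changes: [ContentChange] = []

    mutating func record(_ snapshot: StateSnapshot) { snapshots.append(snapshot) }
    mutating func record(_ change: ContentChange) { changes.append(change) }

    /// Events to replay up to `time`; the playback renderer may present them
    /// from a different set of viewpoints than the original session used.
    func replayEvents(upTo time: TimeInterval) -> [ContentChange] {
        changes.filter { $0.time <= time }
    }
}
```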
  • Publication number: 20230065077
    Abstract: A method is performed at an electronic device including one or more processors, a non-transitory memory, and a display. The method includes rendering a first volumetric object in order to generate first object data. The method includes displaying, on the display, the first object data according to a first display mode. The first display mode includes displaying the first object data within a two-dimensional (2D) content region. The method includes detecting a request to change from the first display mode to a second display mode. The method includes, in response to detecting the request, displaying, on the display, the first object data according to the second display mode. The second display mode includes displaying the first object data within a representation of a physical environment.
    Type: Application
    Filed: June 27, 2022
    Publication date: March 2, 2023
    Inventors: David Lui, Xiao Jin Yu, Tyler L. Casella, Hon-ming Chen, Shuai Song
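A minimal Swift sketch of the two display modes named in the abstract above; the enum cases and the `RenderedObjectData` type are stand-ins, not the claimed interface.

```swift
// Hypothetical types for the two display modes the abstract names: the same
// rendered object data shown in a flat 2D content region or placed in a
// representation of the physical environment.
enum DisplayMode {
    case contentRegion2D      // volumetric object composited into a 2D region
    case physicalEnvironment  // volumetric object placed in the surroundings
}

struct RenderedObjectData { let meshName: String }

func display(_ data: RenderedObjectData, in mode: DisplayMode) {
    switch mode {
    case .contentRegion2D:
        print("Compositing \(data.meshName) into the 2D content region")
    case .physicalEnvironment:
        print("Placing \(data.meshName) in the physical-environment view")
    }
}
```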
  • Publication number: 20230032771
    Abstract: Three-dimensional data can be synchronized between a first electronic device and a second electronic device. A content creation application may be running on the first electronic device and may utilize a data file describing a three-dimensional content item. A two-dimensional representation of the content item may be displayed on the first electronic device. A user may request to preview the two-dimensional representation of the content item in three dimensions. The first electronic device may initiate a data transfer with the second electronic device. The three-dimensional data of the data file may be transferred, via a communication link, from the content creation application of the first electronic device to a three-dimensional graphic rendering application at the second electronic device. The three-dimensional graphic rendering application may generate a preview of the content item in three dimensions based on the received three-dimensional data.
    Type: Application
    Filed: July 22, 2022
    Publication date: February 2, 2023
    Inventors: Peter G. Zion, Tyler L. Casella, Omar Shaik, Benjamin B. Loggins, Eric S. Peyton, Christopher H. Dempsey
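A hedged Swift sketch of the transfer flow described above; `PreviewLink` and `ContentDataFile` are assumed abstractions standing in for the communication link and the 3D data file.

```swift
import Foundation

// Hedged sketch of the preview handoff: the creation device serializes the
// 3D data file and sends it over a link; the second device's rendering
// application builds the preview from the received bytes.
protocol PreviewLink {
    func send(_ payload: Data) throws
}

struct ContentDataFile {
    let url: URL
    func threeDimensionalData() throws -> Data { try Data(contentsOf: url) }
}

/// Invoked when the user asks to preview the 2D representation in 3D.
func transferForPreview(file: ContentDataFile, over link: PreviewLink) throws {
    try link.send(file.threeDimensionalData())
}
```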
  • Publication number: 20230031832
    Abstract: A three-dimensional preview of content can be generated and presented at an electronic device in a three-dimensional environment. The three-dimensional preview of content can be presented concurrently with a two-dimensional representation of the content in a content generation environment presented in the three-dimensional environment. While the three-dimensional preview of content is presented in the three-dimensional environment, one or more affordances can be provided for interacting with the one or more computer-generated virtual objects of the three-dimensional preview. The one or more affordances may be displayed with the three-dimensional preview of content in the three-dimensional environment. The three-dimensional preview of content may be presented on a three-dimensional tray and the one or more affordances may be presented in a control bar or other grouping of controls outside the perimeter of the tray and/or along the perimeter of the tray.
    Type: Application
    Filed: July 15, 2022
    Publication date: February 2, 2023
    Inventors: David A. Lipton, Ryan S. Burgoyne, Michelle Chua, Zachary Z. Becker, Karen N. Wong, Eric G. Thivierge, Mahdi Nabiyouni, Eric Chiu, Tyler L. Casella
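A brief Swift sketch of the two affordance placements the abstract above names; every type here is hypothetical.

```swift
// Illustrative placement rule from the abstract: affordances grouped in a
// control bar outside the tray's perimeter, or laid out along the perimeter.
struct Affordance { let title: String }

enum AffordancePlacement {
    case controlBarOutsideTray
    case alongTrayPerimeter
}

func describePlacement(of affordances: [Affordance], _ placement: AffordancePlacement) {
    for affordance in affordances {
        switch placement {
        case .controlBarOutsideTray:
            print("\(affordance.title): grouped in a control bar outside the tray")
        case .alongTrayPerimeter:
            print("\(affordance.title): positioned along the tray perimeter")
        }
    }
}
```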
  • Publication number: 20230030699
    Abstract: Three-dimensional data can be synchronized between a first electronic device and a second electronic device. A content creation application may be running on the first electronic device and may utilize a data file describing a three-dimensional content item. A two-dimensional representation of the content item may be displayed on the first electronic device. A user may request to preview the two-dimensional representation of the content item in three dimensions. The first electronic device may initiate a data transfer with the second electronic device. The three-dimensional data of the data file may be transferred, via a communication link, from the content creation application of the first electronic device to a three-dimensional graphic rendering application at the second electronic device. The three-dimensional graphic rendering application may generate a preview of the content item in three dimensions based on the received three-dimensional data.
    Type: Application
    Filed: July 22, 2022
    Publication date: February 2, 2023
    Inventors: Peter G. Zion, Tyler L. Casella, Omar Shaik, Benjamin B. Loggins, Eric S. Peyton, Christopher H. Dempsey
  • Patent number: 11520401
    Abstract: A method includes determining an eye focus depth and determining a focus point relative to a viewing location in a virtual environment based on the eye focus depth, wherein the virtual environment includes a computer-generated object. The method also includes, upon determining that the focus point is located within a threshold distance from the computer-generated object, activating a function of a computer-executable code development interface relative to the computer-generated object.
    Type: Grant
    Filed: February 3, 2022
    Date of Patent: December 6, 2022
    Assignee: Apple Inc.
    Inventors: Norman N. Wang, Tyler L. Casella, Benjamin Breckin Loggins, Daniel M. Delwood
  • Publication number: 20220155863
    Abstract: A method includes determining an eye focus depth and determining a focus point relative to a viewing location in a virtual environment based on the eye focus depth, wherein the virtual environment includes a computer-generated object. The method also includes, upon determining that the focus point is located within a threshold distance from the computer-generated object, activating a function of a computer-executable code development interface relative to the computer-generated object.
    Type: Application
    Filed: February 3, 2022
    Publication date: May 19, 2022
    Inventors: Norman N. Wang, Tyler L. Casella, Benjamin Breckin Loggins, Daniel M. Delwood
  • Patent number: 11275438
    Abstract: A method includes determining an eye focus depth and determining a focus point relative to a viewing location in a virtual environment based on the eye focus depth, wherein the virtual environment includes a computer-generated object. The method also includes, upon determining that the focus point is located within a threshold distance from the computer-generated object, activating a function of a computer-executable code development interface relative to the computer-generated object.
    Type: Grant
    Filed: February 2, 2021
    Date of Patent: March 15, 2022
    Assignee: Apple Inc.
    Inventors: Norman N. Wang, Tyler L. Casella, Benjamin Breckin Loggins, Daniel M. Delwood
  • Publication number: 20210286701
    Abstract: Systems and methods for simulated reality view-based breakpoints are described. Some implementations may include accessing motion data captured using one or more motion sensors; determining, based at least on the motion data, a view within a simulated reality environment presented using a head-mounted display; detecting that the view is a member of a set of views associated with a breakpoint; based at least on the view being a member of the set of views, triggering the breakpoint; responsive to the breakpoint being triggered, performing a debug action associated with the breakpoint; and, while performing the debug action, continuing to execute a simulation process of the simulated reality environment to enable a state of at least one virtual object in the simulated reality environment to continue to evolve and be viewed with the head-mounted display.
    Type: Application
    Filed: March 16, 2021
    Publication date: September 16, 2021
    Inventors: Tyler L. Casella, Norman N. Wang, Benjamin Breckin Loggins, Daniel M. Delwood
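A minimal Swift sketch of the view-membership check described above, with the simulation continuing to advance after the debug action fires; every type and name here is an assumption.

```swift
// Sketch of the breakpoint check: trigger when the current view (derived
// from motion data) is a member of the breakpoint's view set, while the
// simulation process keeps evolving.
struct ViewID: Hashable { let raw: String }

struct ViewBreakpoint {
    let views: Set<ViewID>
    let debugAction: () -> Void
}

func simulationStep(currentView: ViewID,
                    breakpoint: ViewBreakpoint,
                    advanceSimulation: () -> Void) {
    if breakpoint.views.contains(currentView) {
        breakpoint.debugAction()  // e.g. pause one agent, dump its state
    }
    advanceSimulation()           // virtual objects continue to evolve
}
```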
  • Patent number: 11113989
    Abstract: A device implementing dynamic library access based on proximate programmable item detection includes a sensor and a processor configured to detect, using the sensor, a programmable physical item in a proximate area. The processor is further configured to, responsive to detecting the programmable physical item, provide an indication of available functions for programming the programmable physical item. The processor is further configured to receive input of code that comprises at least one of the available functions for programming the programmable physical item. The processor is further configured to program the programmable physical item based at least in part on the code. In one or more implementations, the processor may be further configured to translate the code into a set of commands for programming the programmable physical item and to transmit the set of commands to the programmable physical item.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: September 7, 2021
    Assignee: Apple Inc.
    Inventors: Tyler L. Casella, Edwin W. Foo, Norman N. Wang, Ken Wakasa
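One way the detect-then-program flow above might look, sketched in Swift; a real system would translate the entered code into device commands rather than filter it, and all names are illustrative.

```swift
// Hypothetical flow for the dynamic-library idea: a detected item exposes
// its available functions, and entered code is reduced to the commands the
// item actually supports.
struct ProgrammableItem {
    let name: String
    let availableFunctions: Set<String>  // e.g. ["moveForward", "turnLeft"]
}

func commands(from code: [String], for item: ProgrammableItem) -> [String] {
    code.filter { item.availableFunctions.contains($0) }
}
```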
  • Patent number: 11107367
    Abstract: A device implementing an adaptive assembly guidance system includes an image sensor and a processor configured to capture, using the image sensor, an image of a set of connectable components. The processor is further configured to process the captured image to detect individual connectable components of the set of connectable components and to detect a current configuration of the set of connectable components. The processor is further configured to determine, based at least in part on the detected individual connectable components of the set of connectable components, a recommended configuration of the set of connectable components. The processor is further configured to display information for assembling the set of connectable components into the recommended configuration from the current configuration.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: August 31, 2021
    Assignee: Apple Inc.
    Inventors: Tyler L. Casella, Edwin W. Foo, Norman N. Wang, Ken Wakasa
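A compact Swift sketch of the guidance step described above, diffing the detected current configuration against a recommended one; the `Component` type and the step format are assumptions.

```swift
// Sketch of the guidance step: compute a recommended configuration from the
// detected components, then emit the assembly actions still missing from
// the current configuration.
struct Component: Hashable { let kind: String }

func assemblySteps(detected: [Component],
                   current: Set<Component>,
                   recommend: ([Component]) -> Set<Component>) -> [String] {
    let goal = recommend(detected)
    return goal.subtracting(current).map { "Attach \($0.kind)" }
}
```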
  • Publication number: 20210157405
    Abstract: A method includes determining an eye focus depth and determining a focus point relative to a viewing location in a virtual environment based on the eye focus depth, wherein the virtual environment includes a computer-generated object. The method also includes, upon determining that the focus point is located within a threshold distance from the computer-generated object, activating a function of a computer-executable code development interface relative to the computer-generated object.
    Type: Application
    Filed: February 2, 2021
    Publication date: May 27, 2021
    Inventors: Norman N. Wang, Tyler L. Casella, Benjamin Breckin Loggins, Daniel M. Delwood
  • Publication number: 20210157404
    Abstract: A method includes determining an eye focus depth and determining a focus point relative to a viewing location in a virtual environment based on the eye focus depth, wherein the virtual environment includes a computer-generated object. The method also includes, upon determining that the focus point is located within a threshold distance from the computer-generated object, activating a function of a computer-executable code development interface relative to the computer-generated object.
    Type: Application
    Filed: February 2, 2021
    Publication date: May 27, 2021
    Inventors: Norman N. Wang, Tyler L. Casella, Benjamin Breckin Loggins, Daniel M. Delwood
  • Patent number: 10984607
    Abstract: One exemplary implementation involves performing operations at a device with one or more processors, a camera, and a computer-readable storage medium, such as a desktop computer, laptop computer, tablet, or mobile phone. The device receives a data object corresponding to three-dimensional (3D) content from a separate device. The device receives input corresponding to a user selection to view the 3D content in a computer-generated reality (CGR) environment and, in response, displays the CGR environment at the device. To display the CGR environment, the device uses the camera to capture images and constructs the CGR environment using the data object and the captured images.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: April 20, 2021
    Assignee: Apple Inc.
    Inventors: Norman N. Wang, Wei Lun Huang, David Lui, Tyler L. Casella, Ross R. Dexter
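A hedged Swift sketch of the construction step described above, combining the received data object with captured camera frames; `CapturedFrame` and `CGRScene` are stand-in types, not a framework API.

```swift
import Foundation

// Stand-in types: the received 3D data object is combined with a captured
// camera frame to construct the CGR view the abstract describes.
struct CapturedFrame { let pixels: Data }
struct CGRScene { let objectData: Data; let backdrop: CapturedFrame }

func constructCGRScene(objectData: Data,
                       captureFrame: () -> CapturedFrame) -> CGRScene {
    CGRScene(objectData: objectData, backdrop: captureFrame())
}
```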
  • Publication number: 20210034319
    Abstract: Various implementations disclosed herein include devices, systems, and methods that enable two or more devices to simultaneously view or edit the same 3D model in the same or different settings/viewing modes (e.g., monoscopically, stereoscopically, in SR, etc.). In an example, one or more users are able to use different devices to interact in the same setting to view or edit the same 3D model using different views from different viewpoints. The devices can each display different views from different viewpoints of the same 3D model and, as changes are made to the 3D model, consistency of the views on the devices is maintained.
    Type: Application
    Filed: October 15, 2020
    Publication date: February 4, 2021
    Inventors: Norman N. Wang, Benjamin B. Loggins, Ross R. Dexter, Tyler L. Casella
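A minimal Swift sketch of the consistency mechanism the abstract above implies: each edit bumps a shared model version and notifies every device, which re-renders from its own viewpoint. The observer shape is an assumption.

```swift
// Minimal sketch: edits to one shared 3D model notify all observing
// devices so each can refresh its own view from its own viewpoint.
final class SharedModel {
    private(set) var version = 0
    private var observers: [(Int) -> Void] = []

    func onChange(_ observer: @escaping (Int) -> Void) {
        observers.append(observer)
    }

    func applyEdit() {
        version += 1
        observers.forEach { $0(version) }  // each device re-renders its own view
    }
}
```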
  • Publication number: 20200004327
    Abstract: A method for debugging includes determining an eye focus depth for a user, determining a virtual focus point relative to a virtual view location in a virtual environment based on the eye focus depth for the user, transitioning a first object of the virtual environment from a first rendering mode to a second rendering mode based on a location of the virtual focus point relative to the first object, wherein visibility of a second object from the virtual view location is occluded by the first object in the first rendering mode and visibility of the second object from the virtual view location is not occluded by the first object in the second rendering mode, and activating a function of a development interface relative to the second object while the first object is in the second rendering mode.
    Type: Application
    Filed: June 26, 2019
    Publication date: January 2, 2020
    Inventors: Norman N. Wang, Tyler L. Casella, Benjamin Breckin Loggins, Daniel M. Delwood
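The occlusion-driven mode switch above reduces, in the simplest reading, to a single depth comparison; the depth inputs and mode names in this Swift sketch are assumptions.

```swift
// Sketch: when the focus point lies beyond the occluding object's depth,
// render that object see-through so the occluded object behind it becomes
// visible and targetable by the development interface.
enum RenderingMode { case opaque, seeThrough }

func renderingMode(focusDepth: Double, occluderDepth: Double) -> RenderingMode {
    focusDepth > occluderDepth ? .seeThrough : .opaque
}
```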
  • Patent number: 10210645
    Abstract: This disclosure relates generally to the field of image processing and, more particularly, to various techniques and animation tools that allow 2D and 3D graphics rendering and animation infrastructures to dynamically render customized animations, without the need for the customized animations to be explicitly tied to any particular graphical entity. These so-called entity-agnostic animations may then be integrated into "mixed" graphical scenes (i.e., scenes with both two-dimensional and three-dimensional components), where they may be: applied to any suitable graphical entity; visualized in real time by the programmer; edited dynamically by the programmer; and shared across various computing platforms and environments that support the entity-agnostic animation tools described herein.
    Type: Grant
    Filed: June 7, 2015
    Date of Patent: February 19, 2019
    Assignee: Apple Inc.
    Inventors: Norman N. Wang, Jacques P. Gasselin de Richebourg, Ross R. Dexter, Tyler L. Casella
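A rough Swift sketch of what makes an animation entity-agnostic in the sense of the abstract above: the fade is defined against a protocol rather than any particular graphical entity, so it can be applied to whatever conforming value is supplied. The `Animatable` protocol and the fade math are illustrative.

```swift
// Illustrative entity-agnostic animation: a keyframed property change with
// no reference to a specific graphical entity.
protocol Animatable { var opacity: Double { get set } }

struct FadeOutAnimation {
    let duration: Double
    /// Applies the fade at elapsed time `t`, clamped to the duration.
    func apply<T: Animatable>(to entity: inout T, at t: Double) {
        entity.opacity = max(0.0, 1.0 - min(t / duration, 1.0))
    }
}
```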
  • Patent number: 10115181
    Abstract: A method of assembling a tile map can include assigning each tile in a plurality of tiles to one or more color groups in correspondence with a measure of a color profile of the respective tile. A position of each tile in relation to one or more neighboring tiles can be determined from a position of a silhouette corresponding to each respective tile in relation to one or more neighboring silhouettes within a set containing a plurality of silhouettes. The plurality of tiles can be automatically assembled into a tile map, with a position of each tile in the tile map being determined from the color group to which the respective tile belongs and the determined position of the respective tile in relation to the one or more neighboring tiles. Tangible, non-transitory computer-readable media can include computer executable instructions that, when executed, cause a computing environment to implement disclosed methods.
    Type: Grant
    Filed: September 12, 2016
    Date of Patent: October 30, 2018
    Assignee: Apple Inc.
    Inventors: Ross R. Dexter, Timothy R. Oriol, Clement P. Boissiere, Tyler L. Casella, Norman N. Wang
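A hedged Swift sketch of the color-grouping step described above, bucketing tiles by a coarse color measure; average brightness is an assumed proxy for the "measure of a color profile" the abstract mentions.

```swift
// Sketch of the first step of tile-map assembly: group tiles by a coarse
// color measure before positions are resolved from the silhouette set.
struct Tile { let name: String; let averageBrightness: Double }  // 0.0...1.0

func colorGroups(of tiles: [Tile], bucketCount: Int) -> [Int: [Tile]] {
    Dictionary(grouping: tiles) { tile in
        min(bucketCount - 1, Int(tile.averageBrightness * Double(bucketCount)))
    }
}
```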