Patents by Inventor Norman N. Wang

Norman N. Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240135251
    Abstract: A method may include receiving a communication from a device at an artificial intelligence controller including state information for a software application component running on the device, the state information including information corresponding to at least one potential state change available to the software application component, and metrics associated with at least one end condition, interpreting the state information using the artificial intelligence controller, and selecting an artificial intelligence algorithm from a plurality of artificial intelligence algorithms for use by the software application component based on the interpreted state information; and transmitting, to the device, an artificial intelligence algorithm communication, the artificial intelligence algorithm communication indicating the selected artificial intelligence algorithm for use in the software application component on the device.
    Type: Application
    Filed: December 20, 2023
    Publication date: April 25, 2024
    Applicant: Apple Inc.
    Inventors: Ross R. Dexter, Michael R. Brennan, Bruno M. Sommer, Norman N. Wang
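The claimed flow can be sketched in a few lines: a controller receives application state (possible state changes plus end-condition metrics) and selects one of several AI algorithms for the client to use. This is a hypothetical illustration only; the policy, algorithm names, and state fields are invented, not Apple's implementation.

```python
# Hypothetical sketch of the claimed selection flow. All names are illustrative.

def select_algorithm(state_info, algorithms):
    """Pick an algorithm based on interpreted state information."""
    branching = len(state_info["potential_state_changes"])
    horizon = state_info["metrics"]["steps_to_end_condition"]
    # Toy policy: exhaustive search when the state space is small,
    # otherwise a learned policy -- illustrating "selecting an artificial
    # intelligence algorithm from a plurality of artificial intelligence algorithms".
    if branching <= algorithms["minimax"]["max_branching"] and horizon <= 10:
        return "minimax"
    return "neural_policy"

algorithms = {
    "minimax": {"max_branching": 8},
    "neural_policy": {"max_branching": None},
}

state = {
    "potential_state_changes": ["move_left", "move_right", "jump"],
    "metrics": {"steps_to_end_condition": 5},
}
choice = select_algorithm(state, algorithms)  # the "algorithm communication" payload
```

The returned name stands in for the "artificial intelligence algorithm communication" transmitted back to the device.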
  • Patent number: 11886957
    Abstract: A method may include receiving a communication from a device at an artificial intelligence controller including state information for a software application component running on the device, the state information including information corresponding to at least one potential state change available to the software application component, and metrics associated with at least one end condition, interpreting the state information using the artificial intelligence controller, and selecting an artificial intelligence algorithm from a plurality of artificial intelligence algorithms for use by the software application component based on the interpreted state information; and transmitting, to the device, an artificial intelligence algorithm communication, the artificial intelligence algorithm communication indicating the selected artificial intelligence algorithm for use in the software application component on the device.
    Type: Grant
    Filed: October 26, 2016
    Date of Patent: January 30, 2024
    Assignee: Apple Inc.
    Inventors: Ross R. Dexter, Michael R. Brennan, Bruno M. Sommer, Norman N. Wang
  • Patent number: 11809620
    Abstract: A method includes determining an eye focus depth and determining a focus point relative to a viewing location in a virtual environment based on the eye focus depth, wherein the virtual environment includes a computer-generated object. The method also includes, upon determining that the focus point is located within a threshold distance from the computer-generated object, activating a function of a computer-executable code development interface relative to the computer-generated object.
    Type: Grant
    Filed: February 2, 2021
    Date of Patent: November 7, 2023
    Assignee: Apple Inc.
    Inventors: Norman N. Wang, Tyler L. Casella, Benjamin Breckin Loggins, Daniel M. Delwood
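The geometry in this abstract (shared by several related filings below) reduces to advancing the viewing location along the gaze direction by the eye focus depth, then testing whether the resulting focus point is within a threshold distance of an object. A minimal sketch, with invented values and no claim to match Apple's implementation:

```python
import math

def focus_point(view_location, gaze_direction, eye_focus_depth):
    """Focus point = viewing location advanced along the (normalized) gaze."""
    norm = math.sqrt(sum(c * c for c in gaze_direction))
    unit = [c / norm for c in gaze_direction]
    return [p + u * eye_focus_depth for p, u in zip(view_location, unit)]

def should_activate(point, object_center, threshold):
    """Activate the development-interface function within the threshold distance."""
    return math.dist(point, object_center) <= threshold

fp = focus_point([0.0, 1.6, 0.0], [0.0, 0.0, -1.0], 2.0)  # gaze straight ahead, depth 2 m
active = should_activate(fp, [0.0, 1.6, -2.1], threshold=0.25)
```

With the focus point 0.1 m from the object's center and a 0.25 m threshold, the function activates.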
  • Patent number: 11537368
    Abstract: The subject technology provides for parsing a line of code in a project of an integrated development environment (IDE). The subject technology executes the parsed line of code indirectly, using an interpreter. The interpreter references a translated source code document generated by a source code translation component from a machine learning (ML) document written in a particular data format. The translated source code document includes code in a chosen programming language specific to the IDE, and the code of the translated source code document is executable by the interpreter. Further, the subject technology provides, by the interpreter, an output of the executed parsed line of code.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: December 27, 2022
    Assignee: Apple Inc.
    Inventors: Alexander B. Brown, Michael R. Siracusa, Norman N. Wang
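The described pipeline can be sketched concretely: a declarative ML document (here a small dict standing in for, e.g., a JSON model description) is translated into source code in the IDE's language, and an interpreter executes parsed user lines against it. The document format, generated function, and model are all assumptions for illustration.

```python
# Hedged sketch of: ML document -> translated source -> interpreted execution.

ml_document = {"model": "linear", "weights": [2.0, -1.0], "bias": 0.5}

def translate(doc):
    """Source code translation component: emit code implementing the document."""
    w, b = doc["weights"], doc["bias"]
    return (
        "def predict(x):\n"
        f"    return sum(wi * xi for wi, xi in zip({w!r}, x)) + {b!r}\n"
    )

namespace = {}
exec(translate(ml_document), namespace)  # interpreter loads the translated source
line = "predict([1.0, 3.0])"             # a parsed line of user code from the IDE
output = eval(line, namespace)           # interpreter executes it and provides the output
```

The user's line never touches the ML document directly; it runs against the translated source, mirroring the "indirect" execution in the abstract.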
  • Patent number: 11520401
    Abstract: A method includes determining an eye focus depth and determining a focus point relative to a viewing location in a virtual environment based on the eye focus depth, wherein the virtual environment includes a computer-generated object. The method also includes, upon determining that the focus point is located within a threshold distance from the computer-generated object, activating a function of a computer-executable code development interface relative to the computer-generated object.
    Type: Grant
    Filed: February 3, 2022
    Date of Patent: December 6, 2022
    Assignee: Apple Inc.
    Inventors: Norman N. Wang, Tyler L. Casella, Benjamin Breckin Loggins, Daniel M. Delwood
  • Publication number: 20220155863
    Abstract: A method includes determining an eye focus depth and determining a focus point relative to a viewing location in a virtual environment based on the eye focus depth, wherein the virtual environment includes a computer-generated object. The method also includes, upon determining that the focus point is located within a threshold distance from the computer-generated object, activating a function of a computer-executable code development interface relative to the computer-generated object.
    Type: Application
    Filed: February 3, 2022
    Publication date: May 19, 2022
    Inventors: Norman N. Wang, Tyler L. Casella, Benjamin Breckin Loggins, Daniel M. Delwood
  • Patent number: 11275438
    Abstract: A method includes determining an eye focus depth and determining a focus point relative to a viewing location in a virtual environment based on the eye focus depth, wherein the virtual environment includes a computer-generated object. The method also includes, upon determining that the focus point is located within a threshold distance from the computer-generated object, activating a function of a computer-executable code development interface relative to the computer-generated object.
    Type: Grant
    Filed: February 2, 2021
    Date of Patent: March 15, 2022
    Assignee: Apple Inc.
    Inventors: Norman N. Wang, Tyler L. Casella, Benjamin Breckin Loggins, Daniel M. Delwood
  • Publication number: 20210286701
    Abstract: Systems and methods for simulated reality view-based breakpoints are described. Some implementations may include accessing motion data captured using one or more motion sensors; determining, based at least on the motion data, a view within a simulated reality environment presented using a head-mounted display; detecting that the view is a member of a set of views associated with a breakpoint; based at least on the view being a member of the set of views, triggering the breakpoint; responsive to the breakpoint being triggered, performing a debug action associated with the breakpoint; and, while performing the debug action, continuing to execute a simulation process of the simulated reality environment to enable a state of at least one virtual object in the simulated reality environment to continue to evolve and be viewed with the head-mounted display.
    Type: Application
    Filed: March 16, 2021
    Publication date: September 16, 2021
    Inventors: Tyler L. Casella, Norman N. Wang, Benjamin Breckin Loggins, Daniel M. Delwood
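The breakpoint mechanism here is view membership: a view derived from motion data is checked against a set of trigger views, and the debug action fires while the simulation keeps stepping. A toy sketch with invented view labels and debug action:

```python
# Minimal sketch of view-based breakpoints; all specifics are illustrative.

def step_simulation(state):
    """The simulation process continues to evolve regardless of breakpoints."""
    return {"t": state["t"] + 1}

def current_view(motion_sample):
    return motion_sample["facing"]  # toy "view" determined from motion data

breakpoint_views = {"north", "east"}  # the set of views associated with the breakpoint
debug_log = []
state = {"t": 0}

for sample in [{"facing": "south"}, {"facing": "north"}, {"facing": "west"}]:
    view = current_view(sample)
    if view in breakpoint_views:                     # detect membership in the view set
        debug_log.append(("paused-at", state["t"]))  # perform the debug action
    state = step_simulation(state)                   # simulation still advances
```

Note the simulation state keeps evolving even on the step where the breakpoint triggers, matching the "continuing to execute" clause.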
  • Patent number: 11113989
    Abstract: A device implementing dynamic library access based on proximate programmable item detection includes a sensor and a processor configured to detect, using the sensor, a programmable physical item in a proximate area. The processor is further configured to, responsive to detecting the programmable physical item, provide an indication of available functions for programming the programmable physical item. The processor is further configured to receive input of code that comprises at least one of the available functions for programming the programmable physical item. The processor is further configured to program the programmable physical item based at least in part on the code. In one or more implementations, the processor may be further configured to translate the code into a set of commands for programming the programmable physical item and to transmit the set of commands to the programmable physical item.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: September 7, 2021
    Assignee: Apple Inc.
    Inventors: Tyler L. Casella, Edwin W. Foo, Norman N. Wang, Ken Wakasa
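The device behavior in this abstract is a three-step loop: detect a nearby programmable item, surface its available functions, then translate user code using those functions into device commands. The item registry and command names below are invented for illustration:

```python
# Illustrative sketch only; the library of items and commands is hypothetical.

ITEM_LIBRARY = {
    "toy_robot": {"forward": "CMD_FWD", "turn_left": "CMD_TL"},
}

def available_functions(item_kind):
    """Indication of available functions for a detected programmable item."""
    return sorted(ITEM_LIBRARY[item_kind])

def translate_code(item_kind, code_lines):
    """Translate calls of available functions into a set of device commands."""
    table = ITEM_LIBRARY[item_kind]
    return [table[line.rstrip("()")] for line in code_lines]

funcs = available_functions("toy_robot")
commands = translate_code("toy_robot", ["forward()", "turn_left()", "forward()"])
```

In the claim, `commands` would then be transmitted to the programmable physical item.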
  • Patent number: 11107367
    Abstract: A device implementing an adaptive assembly guidance system includes an image sensor and a processor configured to capture, using the image sensor, an image of a set of connectable components. The processor is further configured to process the captured image to detect individual connectable components of the set of connectable components and to detect a current configuration of the set of connectable components. The processor is further configured to determine, based at least in part on the detected individual connectable components of the set of connectable components, a recommended configuration of the set of connectable components. The processor is further configured to display information for assembling the set of connectable components into the recommended configuration from the current configuration.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: August 31, 2021
    Assignee: Apple Inc.
    Inventors: Tyler L. Casella, Edwin W. Foo, Norman N. Wang, Ken Wakasa
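The recommendation step can be sketched as multiset matching: given the components detected in an image, find a configuration buildable from them, then diff against the current partial build to get the remaining assembly steps. The configuration catalog is invented:

```python
# Hedged sketch of configuration recommendation; the catalog is hypothetical.
from collections import Counter

CONFIGURATIONS = {
    "car": Counter({"block": 4, "wheel": 4}),
    "tower": Counter({"block": 6}),
}

def recommend(detected):
    """Pick a configuration buildable from the detected individual components."""
    have = Counter(detected)
    for name, needed in CONFIGURATIONS.items():
        if not needed - have:  # every needed part is among the detected parts
            return name
    return None

def missing_steps(config_name, current):
    """What remains to assemble the recommended configuration from the current one."""
    return CONFIGURATIONS[config_name] - Counter(current)

choice = recommend(["block"] * 4 + ["wheel"] * 4)
steps = missing_steps(choice, ["block", "block", "wheel"])
```

`Counter` subtraction keeps only positive counts, which is exactly "parts still needed" here.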
  • Publication number: 20210157404
    Abstract: A method includes determining an eye focus depth and determining a focus point relative to a viewing location in a virtual environment based on the eye focus depth, wherein the virtual environment includes a computer-generated object. The method also includes, upon determining that the focus point is located within a threshold distance from the computer-generated object, activating a function of a computer-executable code development interface relative to the computer-generated object.
    Type: Application
    Filed: February 2, 2021
    Publication date: May 27, 2021
    Inventors: Norman N. Wang, Tyler L. Casella, Benjamin Breckin Loggins, Daniel M. Delwood
  • Publication number: 20210157405
    Abstract: A method includes determining an eye focus depth and determining a focus point relative to a viewing location in a virtual environment based on the eye focus depth, wherein the virtual environment includes a computer-generated object. The method also includes, upon determining that the focus point is located within a threshold distance from the computer-generated object, activating a function of a computer-executable code development interface relative to the computer-generated object.
    Type: Application
    Filed: February 2, 2021
    Publication date: May 27, 2021
    Inventors: Norman N. Wang, Tyler L. Casella, Benjamin Breckin Loggins, Daniel M. Delwood
  • Patent number: 10984607
    Abstract: One exemplary implementation involves performing operations at a device with one or more processors, a camera, and a computer-readable storage medium, such as a desktop computer, laptop computer, tablet, or mobile phone. The device receives a data object corresponding to three dimensional (3D) content from a separate device. The device receives input corresponding to a user selection to view the 3D content in a computer generated reality (CGR) environment, and in response, displays the CGR environment at the device. To display the CGR environment the device uses the camera to capture images and constructs the CGR environment using the data object and the captured images.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: April 20, 2021
    Assignee: Apple Inc.
    Inventors: Norman N. Wang, Wei Lun Huang, David Lui, Tyler L. Casella, Ross R. Dexter
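At its core, the described flow composites a received 3D data object with live camera frames to form the CGR view. The sketch below fakes rendering as a simple dictionary composition; the data-object fields and frame representation are assumptions:

```python
# Illustrative flow sketch; real rendering would register the content into the scene.

def construct_cgr_frame(camera_image, content):
    """Construct one CGR frame from a captured image plus the received 3D content."""
    return {"background": camera_image, "objects": [content["name"]]}

data_object = {"name": "teapot", "format": "usdz"}  # hypothetical received object
captured = ["frame0", "frame1"]                     # stand-ins for camera images
frames = [construct_cgr_frame(img, data_object) for img in captured]
```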
  • Publication number: 20210034319
    Abstract: Various implementations disclosed herein include devices, systems, and methods that enable two or more devices to simultaneously view or edit the same 3D model in the same or different settings/viewing modes (e.g., monoscopically, stereoscopically, in SR, etc.). In an example, one or more users are able to use different devices to interact in the same setting to view or edit the same 3D model using different views from different viewpoints. The devices can each display different views from different viewpoints of the same 3D model and, as changes are made to the 3D model, consistency of the views on the devices is maintained.
    Type: Application
    Filed: October 15, 2020
    Publication date: February 4, 2021
    Inventors: Norman N. Wang, Benjamin B. Loggins, Ross R. Dexter, Tyler L. Casella
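The consistency mechanism here is a shared model with per-device viewpoints: edits go through the shared model, which notifies every subscribed device so each view stays current. Class and method names below are illustrative, not from the filing:

```python
# Minimal observer-pattern sketch of multi-device 3D model consistency.

class SharedModel:
    def __init__(self):
        self.objects = {}
        self.subscribers = []

    def subscribe(self, device):
        self.subscribers.append(device)

    def edit(self, name, transform):
        self.objects[name] = transform
        for device in self.subscribers:  # maintain consistency across all views
            device.refresh(self)

class DeviceView:
    def __init__(self, viewpoint):
        self.viewpoint = viewpoint  # each device keeps its own viewpoint
        self.seen = {}

    def refresh(self, model):
        self.seen = dict(model.objects)

model = SharedModel()
phone, headset = DeviceView("front"), DeviceView("top")
model.subscribe(phone)
model.subscribe(headset)
model.edit("cube", {"x": 1.0})
```

Both devices see the same edited model while retaining different viewpoints, matching "different views from different viewpoints of the same 3D model".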
  • Publication number: 20200004327
    Abstract: A method for debugging includes determining an eye focus depth for a user, determining a virtual focus point relative to a virtual view location in a virtual environment based on the eye focus depth for the user, transitioning a first object from the virtual environment from a first rendering mode to a second rendering mode based on a location of the virtual focus point relative to the first object, wherein visibility of a second object from the virtual view location is occluded by the first object in the first rendering mode and visibility of the second object from the virtual view location is not occluded by the first object in the second rendering mode, and activating a function of a development interface relative to the second object while the first object is in the second rendering mode.
    Type: Application
    Filed: June 26, 2019
    Publication date: January 2, 2020
    Inventors: Norman N. Wang, Tyler L. Casella, Benjamin Breckin Loggins, Daniel M. Delwood
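The distinguishing idea in this filing is the rendering-mode transition: when the focus point lands beyond a front object, that object switches to a see-through mode so the occluded object becomes visible and targetable. A toy sketch with invented mode names:

```python
# Hedged sketch of focus-driven rendering-mode transitions; modes are invented.

def rendering_mode(front_object_depth, focus_depth):
    """Focus beyond the front object implies interest in what it occludes."""
    return "wireframe" if focus_depth > front_object_depth else "solid"

mode_near = rendering_mode(front_object_depth=1.0, focus_depth=0.5)  # focusing in front
mode_far = rendering_mode(front_object_depth=1.0, focus_depth=2.0)   # focusing behind
```

In the abstract's terms, `"wireframe"` plays the role of the second rendering mode in which the second object is no longer occluded.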
  • Patent number: 10210645
    Abstract: This disclosure relates generally to the field of image processing and, more particularly, to various techniques and animation tools for allowing 2D and 3D graphics rendering and animation infrastructures to be able to dynamically render customized animations—without the need for the customized animations to be explicitly tied to any particular graphical entity. These so-called entity agnostic animations may then be integrated into “mixed” graphical scenes (i.e., scenes with both two-dimensional and three-dimensional components), where they may be: applied to any suitable graphical entity; visualized in real-time by the programmer; edited dynamically by the programmer; and shared across various computing platforms and environments that support the entity agnostic animation tools described herein.
    Type: Grant
    Filed: June 7, 2015
    Date of Patent: February 19, 2019
    Assignee: Apple Inc.
    Inventors: Norman N. Wang, Jacques P. Gasselin de Richebourg, Ross R. Dexter, Tyler L. Casella
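"Entity agnostic" here means the animation is defined without reference to any particular graphical entity, so the same curve can drive a 2D sprite or a 3D node. A small sketch under that reading, with invented class names:

```python
# Illustrative sketch: an animation curve decoupled from the entities it drives.

def bounce(t):
    """Animation defined purely as a function of time -- no entity involved."""
    return abs(1.0 - (t % 2.0))  # triangle wave in [0, 1]

class Sprite2D:
    def __init__(self):
        self.y = 0.0

class Node3D:
    def __init__(self):
        self.y = 0.0

def apply_animation(entity, curve, t):
    entity.y = curve(t)  # works for any entity exposing a position

s, n = Sprite2D(), Node3D()
apply_animation(s, bounce, 0.5)
apply_animation(n, bounce, 0.5)
```

The same `bounce` curve animates both the 2D and 3D entity, which is the point of decoupling animations from graphical entities in mixed scenes.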
  • Publication number: 20180349114
    Abstract: The subject technology provides for parsing a line of code in a project of an integrated development environment (IDE). The subject technology executes the parsed line of code indirectly, using an interpreter. The interpreter references a translated source code document generated by a source code translation component from a machine learning (ML) document written in a particular data format. The translated source code document includes code in a chosen programming language specific to the IDE, and the code of the translated source code document is executable by the interpreter. Further, the subject technology provides, by the interpreter, an output of the executed parsed line of code.
    Type: Application
    Filed: September 29, 2017
    Publication date: December 6, 2018
    Inventors: Alexander B. Brown, Michael R. Siracusa, Norman N. Wang
  • Patent number: 10115181
    Abstract: A method of assembling a tile map can include assigning each tile in a plurality of tiles to one or more color groups in correspondence with a measure of a color profile of the respective tile. A position of each tile in relation to one or more neighboring tiles can be determined from a position of a silhouette corresponding to each respective tile in relation to one or more neighboring silhouettes within a set containing a plurality of silhouettes. The plurality of tiles can be automatically assembled into a tile map, with a position of each tile in the tile map being determined from the color group to which the respective tile belongs and the determined position of the respective tile in relation to the one or more neighboring tiles. Tangible, non-transitory computer-readable media can include computer executable instructions that, when executed, cause a computing environment to implement disclosed methods.
    Type: Grant
    Filed: September 12, 2016
    Date of Patent: October 30, 2018
    Assignee: Apple Inc.
    Inventors: Ross R. Dexter, Timothy R. Oriol, Clement P. Boissiere, Tyler L. Casella, Norman N. Wang
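The assembly method combines two signals: a coarse color measure that groups tiles, and silhouette-derived neighbor relations that order them. The sketch below uses a stand-in color measure (mean RGB warmth) and hand-supplied neighbor links, since the silhouette matching itself is not spelled out in the abstract:

```python
# Hedged sketch of tile-map assembly; color measure and neighbor data are stand-ins.

def color_group(tile):
    """Coarse color-profile measure used to group tiles."""
    r, g, b = tile["mean_rgb"]
    return "warm" if r >= b else "cool"

def assemble(tiles, neighbor_of):
    """Place tiles by following silhouette-derived neighbor links, tagged by group."""
    by_name = {t["name"]: t for t in tiles}
    # Start from the tile that is nobody's neighbor (the chain's head).
    current = next(t["name"] for t in tiles if t["name"] not in neighbor_of.values())
    placed = []
    while current:
        placed.append((current, color_group(by_name[current])))
        current = neighbor_of.get(current)
    return placed

tiles = [
    {"name": "grass", "mean_rgb": (40, 180, 60)},
    {"name": "lava", "mean_rgb": (220, 60, 30)},
    {"name": "water", "mean_rgb": (30, 90, 200)},
]
neighbor_of = {"grass": "lava", "lava": "water"}  # right-neighbor links from silhouettes
row = assemble(tiles, neighbor_of)
```

Each placed tile carries both its chain position and its color group, the two inputs the claim uses to fix positions in the map.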
  • Patent number: 10114499
    Abstract: Systems, methods, and computer-readable media are provided for enabling efficient control of a media application at a media electronic device by a user electronic device, and, more particularly, for more practically handling initial and subsequent user touch events on a surface of a touchpad input component with respect to a potentially intended default center position and/or for more accurately enabling full saturation of a particular directional control.
    Type: Grant
    Filed: August 24, 2015
    Date of Patent: October 30, 2018
    Assignee: Apple Inc.
    Inventors: Jacques P. Gasselin de Richebourg, Norman N. Wang, James J. Cwik
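Two mechanisms are claimed here: treating the initial touch as a default center, and letting the user reach full saturation without sliding to the touchpad's physical edge. A sketch under that reading, with an assumed saturation radius:

```python
# Illustrative sketch of re-centered touchpad input with early saturation.

SATURATION_RADIUS = 0.4  # assumed fraction of pad half-width mapping to full tilt

def make_controller(first_touch):
    """The first touch event defines the stick's default center position."""
    center = first_touch

    def read(touch):
        dx = (touch[0] - center[0]) / SATURATION_RADIUS
        dy = (touch[1] - center[1]) / SATURATION_RADIUS
        clamp = lambda v: max(-1.0, min(1.0, v))
        return clamp(dx), clamp(dy)

    return read

read = make_controller(first_touch=(0.3, -0.2))
neutral = read((0.3, -0.2))     # no movement from the initial touch yet
right_full = read((0.9, -0.2))  # 0.6 of travel exceeds the radius -> saturated
```

Dividing by a radius smaller than the pad lets a short swipe saturate the directional control, which is the "full saturation" behavior the abstract describes.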
  • Publication number: 20180277014
    Abstract: A device implementing dynamic library access based on proximate programmable item detection includes a sensor and a processor configured to detect, using the sensor, a programmable physical item in a proximate area. The processor is further configured to, responsive to detecting the programmable physical item, provide an indication of available functions for programming the programmable physical item. The processor is further configured to receive input of code that comprises at least one of the available functions for programming the programmable physical item. The processor is further configured to program the programmable physical item based at least in part on the code. In one or more implementations, the processor may be further configured to translate the code into a set of commands for programming the programmable physical item and to transmit the set of commands to the programmable physical item.
    Type: Application
    Filed: July 6, 2017
    Publication date: September 27, 2018
    Inventors: Tyler L. Casella, Edwin W. Foo, Norman N. Wang, Ken Wakasa