Patents by Inventor Jeffrey S. Norris

Jeffrey S. Norris has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240144533
    Abstract: Various implementations disclosed herein include devices, systems, and methods that track a movement of an input device. For example, an example process may include determining a pose of a tracking device in a physical environment based on first sensor data from an image sensor. The process may then receive, from the tracking device, first positional data corresponding to a first relative positioning between the tracking device and an input device in the physical environment, where the first positional data is determined based on second sensor data obtained via a sensor on the tracking device. The process may then track movement of the input device in the physical environment based at least in part on the first positional data and the pose of the tracking device. The process may then determine an input for an electronic device based at least in part on tracking the movement of the input device.
    Type: Application
    Filed: January 10, 2024
    Publication date: May 2, 2024
    Inventors: Jeffrey S. Norris, Michael J. Rockwell, Tony Kobayashi, William D. Lindmeier
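The core of the abstract above is a pose composition: the tracker's pose (from the image sensor) is combined with the tracker-to-input-device offset (from the tracker's own sensor) to locate the input device in the world. A minimal 2D sketch, with function name and frame conventions assumed for illustration rather than taken from the patent:

```python
import math

def input_device_world_position(tracker_pos, tracker_yaw, rel_offset):
    """Rotate the tracker-frame offset into the world frame, then translate
    by the tracker's world position. 2D for simplicity.

    tracker_pos: (x, y) world position of the tracking device
    tracker_yaw: heading of the tracking device, in radians
    rel_offset:  (x, y) offset of the input device in the tracker's frame
    """
    c, s = math.cos(tracker_yaw), math.sin(tracker_yaw)
    dx = c * rel_offset[0] - s * rel_offset[1]
    dy = s * rel_offset[0] + c * rel_offset[1]
    return (tracker_pos[0] + dx, tracker_pos[1] + dy)
```

Repeating this per frame yields the tracked movement from which an input could be derived.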
  • Patent number: 11915097
    Abstract: Various implementations disclosed herein include devices, systems, and methods that provide color visual markers that include colored markings that encode data, where the colors of the colored markings are determined by scanning (e.g., detecting the visual marker using a sensor of an electronic device) the visual marker itself. In some implementations, a visual marker is detected in an image of a physical environment. In some implementations, the visual marker is detected in the image by detecting a predefined shape of a first portion of the visual marker in the image. Then, a color-interpretation scheme is determined for interpreting colored markings of the visual marker that encode data by identifying a set of colors at a corresponding set of predetermined locations on the visual marker. Then, the data of the visual marker is decoded using the colored markings and the set of colors of the color-interpretation scheme.
    Type: Grant
    Filed: January 7, 2021
    Date of Patent: February 27, 2024
    Assignee: Apple Inc.
    Inventors: Mohamed Selim Ben Himane, Anselm Grundhoefer, Arun Srivatsan Rangaprasad, Jeffrey S. Norris, Paul Ewers, Scott G. Wade, Thomas G. Salter, Tom Sengelaub
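The decoding idea above, sampling a set of colors at predetermined marker locations and then interpreting each marking by its closest sampled color, can be sketched as a nearest-color classification. The palette layout and function names here are illustrative assumptions, not the patented scheme:

```python
def nearest_color_index(palette, rgb):
    """Index of the palette color closest to rgb (squared Euclidean distance)."""
    return min(range(len(palette)),
               key=lambda i: sum((p - c) ** 2 for p, c in zip(palette[i], rgb)))

def decode_markings(palette, markings):
    """Interpret each colored marking as the symbol whose palette color it is
    nearest to. The palette stands in for the color-interpretation scheme
    sampled at the marker's predetermined locations."""
    return [nearest_color_index(palette, m) for m in markings]
```

Because the palette is read from the marker itself, decoding stays robust to lighting and camera color shifts that move all samples together.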
  • Publication number: 20240062413
    Abstract: A method includes obtaining first pass-through image data characterized by a first pose. The method includes obtaining respective pixel characterization vectors for pixels in the first pass-through image data. The method includes identifying a feature of an object within the first pass-through image data in accordance with a determination that pixel characterization vectors for the feature satisfy a feature confidence threshold. The method includes displaying the first pass-through image data and an AR display marker that corresponds to the feature. The method includes obtaining second pass-through image data characterized by a second pose. The method includes transforming the AR display marker to a position associated with the second pose in order to track the feature. The method includes displaying the second pass-through image data and maintaining display of the AR display marker that corresponds to the feature of the object based on the transformation.
    Type: Application
    Filed: October 26, 2023
    Publication date: February 22, 2024
    Inventors: Jeffrey S. Norris, Alexandre Da Veiga, Bruno M. Sommer, Ye Cong, Tobias Eble, Moinul Khan, Nicolas Bonnier, Hao Pan
  • Patent number: 11830214
    Abstract: A method includes obtaining first pass-through image data characterized by a first pose. The method includes obtaining respective pixel characterization vectors for pixels in the first pass-through image data. The method includes identifying a feature of an object within the first pass-through image data in accordance with a determination that pixel characterization vectors for the feature satisfy a feature confidence threshold. The method includes displaying the first pass-through image data and an AR display marker that corresponds to the feature. The method includes obtaining second pass-through image data characterized by a second pose. The method includes transforming the AR display marker to a position associated with the second pose in order to track the feature. The method includes displaying the second pass-through image data and maintaining display of the AR display marker that corresponds to the feature of the object based on the transformation.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: November 28, 2023
    Assignee: Apple Inc.
    Inventors: Jeffrey S. Norris, Alexandre Da Veiga, Bruno M. Sommer, Ye Cong, Tobias Eble, Moinul Khan, Nicolas Bonnier, Hao Pan
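The transform step in the abstract above amounts to reprojecting the tracked feature under the second pose so the AR display marker stays attached to it. A simplified pinhole sketch, assuming a translation-only camera and conventions not taken from the patent:

```python
def project(point, cam_pos, focal=1.0):
    """Pinhole projection of a world-frame feature point for a camera at
    cam_pos looking down +z with no rotation (a simplifying assumption)."""
    x, y, z = (p - c for p, c in zip(point, cam_pos))
    return (focal * x / z, focal * y / z)

def transform_marker(feature_point, first_pose, second_pose):
    """Screen positions of the AR display marker under both poses; redrawing
    the marker at the second position keeps it tracking the feature."""
    return project(feature_point, first_pose), project(feature_point, second_pose)
```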
  • Publication number: 20230298278
    Abstract: Various implementations disclosed herein include devices, systems, and methods that determine how to present a three-dimensional (3D) photo in an extended reality (XR) environment (e.g., in 3D, 2D, blurry, or not at all) based on viewing position of a user active in the XR environment relative to a placement of the 3D photo in the XR environment. In some implementations, at an electronic device having a processor, a 3D photo that is an incomplete 3D representation created based on one or more images captured by an image capture device is obtained. In some implementations, a viewing position of the electronic device relative to a placement position of the 3D photo is determined, and a presentation mode for the 3D photo is determined based on the viewing position. In some implementations, the 3D photo is provided at the placement position based on the presentation mode in the XR environment.
    Type: Application
    Filed: November 4, 2022
    Publication date: September 21, 2023
    Inventors: Alexandre Da Veiga, Jeffrey S. Norris, Madhurani R. Sapre, Spencer H. Ray
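One way to sketch the viewing-position-based mode selection described above is a distance threshold on the viewer relative to the photo's placement position. The thresholds and mode names below are illustrative assumptions only:

```python
def presentation_mode(view_pos, photo_pos, full_3d_range=1.5, flat_range=4.0):
    """Pick how to present the 3D photo from the viewer's distance to its
    placement position: close enough gets the full 3D presentation, mid-range
    falls back to 2D, and beyond that the incomplete 3D photo is not shown."""
    dist = sum((v - p) ** 2 for v, p in zip(view_pos, photo_pos)) ** 0.5
    if dist <= full_3d_range:
        return "3d"
    if dist <= flat_range:
        return "2d"
    return "hidden"
```

A fuller version might also weigh viewing angle, since an incomplete 3D reconstruction only looks correct from near the original capture direction.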
  • Publication number: 20230288701
    Abstract: Various implementations disclosed herein include devices, systems, and methods that are capable of executing an application on a head-mounted device (HMD) having a first image sensor in a first image sensor configuration. In some implementations, the application is configured for execution on a device including a second image sensor in a second image sensor configuration different than the first image sensor configuration. In some implementations, a request is received from the executing application for image data from the second image sensor. Responsive to the request at the HMD, a pose of a virtual image sensor is determined, image data is generated based on the pose of the virtual image sensor, and the generated image data is provided to the executing application.
    Type: Application
    Filed: March 22, 2023
    Publication date: September 14, 2023
    Inventors: Jeffrey S. Norris, Bruno M. Sommer, Olivier Gutknecht
  • Publication number: 20230273706
    Abstract: Some examples of the disclosure are directed to methods for spatial placement of avatars in a communication session. In some examples, while a first electronic device is presenting a three-dimensional environment, the first electronic device may receive an input corresponding to a request to enter a communication session with a second electronic device. In some examples, in response to receiving the input, the first electronic device may scan an environment surrounding the first electronic device. In some examples, the first electronic device may identify a placement location in the three-dimensional environment at which to display a virtual object representing a user of the second electronic device. In some examples, the first electronic device displays the virtual object representing the user of the second electronic device at the placement location in the three-dimensional environment. Some examples of the disclosure are directed to methods for spatial refinement in the communication session.
    Type: Application
    Filed: February 24, 2023
    Publication date: August 31, 2023
    Inventors: Connor A. Smith, Benjamin H. Boesel, David H. Huang, Jeffrey S. Norris, Jonathan Perron, Jordan A. Cazamias, Miao Ren, Shih-Sang Chiu
  • Publication number: 20230055232
    Abstract: A method includes receiving one or more signals that each indicate a device type for a respective remote device, identifying one or more visible devices in one or more images, matching a first device from the one or more visible devices with a first signal from the one or more signals based on a device type of the first device matching a device type for the first signal and based on a visible output of the first device, pairing the first device with a second device, and controlling a function of the first device using the second device.
    Type: Application
    Filed: November 3, 2022
    Publication date: February 23, 2023
    Inventors: Jeffrey S. Norris, Bruno M. Sommer, Alexandre Da Veiga
  • Patent number: 11532227
    Abstract: A method includes obtaining a location and a device type for one or more remote devices, and identifying one or more visible devices in one or more images, the one or more visible devices having a location and a device type. The method also includes matching a first visible device from the one or more visible devices with a first remote device from the one or more remote devices based on a location and a device type of the first visible device matching a location and a device type of the first remote device, obtaining a user input, and controlling a function of the first remote device based on the user input.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: December 20, 2022
    Assignee: Apple Inc.
    Inventors: Jeffrey S. Norris, Bruno M. Sommer, Alexandre Da Veiga
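The matching described in the abstract above, pairing a visible device with a remote device when both the device type and the observed location agree, can be sketched as a greedy match. The record layout, field names, and distance threshold are assumptions for illustration:

```python
def match_devices(visible, remote, max_dist=1.0):
    """Greedily pair visible devices with remote devices that advertise the
    same device type and lie within max_dist of the observed location."""
    matches = []
    unmatched = list(remote)
    for v in visible:
        for r in unmatched:
            same_type = v["type"] == r["type"]
            dist = ((v["loc"][0] - r["loc"][0]) ** 2 +
                    (v["loc"][1] - r["loc"][1]) ** 2) ** 0.5
            if same_type and dist <= max_dist:
                matches.append((v["id"], r["id"]))
                unmatched.remove(r)  # each remote device matches at most once
                break
    return matches
```

Once matched, the remote device's advertised controls can be driven from user input on the host device.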
  • Publication number: 20220114882
    Abstract: A method includes obtaining a location and a device type for one or more remote devices, and identifying one or more visible devices in one or more images, the one or more visible devices having a location and a device type. The method also includes matching a first visible device from the one or more visible devices with a first remote device from the one or more remote devices based on a location and a device type of the first visible device matching a location and a device type of the first remote device, obtaining a user input, and controlling a function of the first remote device based on the user input.
    Type: Application
    Filed: December 21, 2021
    Publication date: April 14, 2022
    Inventors: Jeffrey S. Norris, Bruno M. Sommer, Alexandre Da Veiga
  • Publication number: 20220094724
    Abstract: A device for providing operating system managed group communication sessions may include a memory and at least one processor. The at least one processor may be configured to receive, by an operating system level process executing on a device and from an application process executing on a device, a request to initiate a group session between a user associated with the device and another user. The at least one processor may be further configured to identify, by the operating system level process, another device associated with the other user. The at least one processor may be further configured to initiate, by the operating system level process, the group session with the user via the other device. The at least one processor may be further configured to manage, by the operating system level process, the group session.
    Type: Application
    Filed: April 6, 2021
    Publication date: March 24, 2022
    Inventors: Geoffrey Stahl, Jeffrey S. Norris, Timothy R. Oriol, Joel N. Kerr, Srinivas Vedula, Bruno Sommer
  • Patent number: 11275481
    Abstract: A method and system control navigation of a 3D CAD model in an augmented reality space. The 3D CAD model is rendered and appears as if the 3D CAD model is present in a physical space at true scale. A constrained translate or rotation mode is activated. In response to the activating, an axis triad is rendered. For translate, the axis triad is three lines extending from a point in each principal axis direction, and division markers are displayed along the lines at predefined division distances. One of the division markers is selected and the model is moved the predefined division distance of the selected division marker along the first line. For rotation, the axis triad is three perpendicular rings, and upon selecting one of the rings, an input gesture causes the model to rotate around the selected axis in the direction of the input gesture.
    Type: Grant
    Filed: May 19, 2020
    Date of Patent: March 15, 2022
    Assignee: California Institute of Technology
    Inventors: Matthew C. Clausen, Charles Goddard, Garrett K. Johnson, Marsette A. Vona, III, Victor X. Luo, Jeffrey S. Norris, Anthony Valderrama, Alexandra E. Samochina
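The constrained translate described above moves the model a whole number of predefined divisions along one principal axis of the axis triad. A minimal sketch, with the function signature assumed for illustration:

```python
def constrained_translate(position, axis, division_index, division_distance):
    """Move the model along one principal axis by a whole number of the
    predefined division distances shown as markers on the axis triad.

    position: (x, y, z) current model position
    axis:     unit vector of the selected principal axis
    """
    offset = division_index * division_distance
    return tuple(p + offset * a for p, a in zip(position, axis))
```

Snapping to division markers rather than free dragging keeps placement precise at true scale in the AR space.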
  • Patent number: 11210932
    Abstract: A method includes identifying remote devices, at a host device, based on received signals that indicate locations and device types for the remote devices. The method also includes identifying visible devices in images of a location and matching a first visible device to a first remote device. The first visible device is matched with the first remote device based on presence of the first visible device within a search area of the images, the search area of the images is determined based on the location for the first remote device, the first visible device is matched with the first remote device based on the device type for the first remote device, and the first visible device is matched with the first remote device based on a machine recognizable indicator that is output by the first visible device. The method also includes pairing the first remote device with the host device.
    Type: Grant
    Filed: May 19, 2020
    Date of Patent: December 28, 2021
    Assignee: Apple Inc.
    Inventors: Jeffrey S. Norris, Bruno M. Sommer, Alexandre Da Veiga
  • Publication number: 20210097714
    Abstract: Various implementations disclosed herein include devices, systems, and methods that determine the relative positioning (e.g., offset) between a mobile electronic device and a visual marker. In some implementations, the determined relative positioning and a known position of the visual marker are used to determine a position (e.g., geo coordinates) of the mobile electronic device that is more accurate than existing techniques. In some implementations, the determined relative positioning is used with a position of the mobile electronic device to crowd source the stored position of the visual marker. In some implementations, the determined relative positioning and a position of the visual marker are used to determine a position of an object detected in an image by the mobile electronic device. In some implementations, at an electronic device having a processor, locally-determined locations of a visual marker are received from mobile electronic devices that scan the visual marker.
    Type: Application
    Filed: September 25, 2020
    Publication date: April 1, 2021
    Inventors: Anselm Grundhoefer, Jeffrey S. Norris, Mohamed Selim Ben Himane, Paul Ewers, Scott G. Wade, Shih-Sang (Carnaven) Chiu, Thomas G. Salter, Tom Sengelaub, Viral N. Parekh
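Two of the ideas above reduce to simple vector arithmetic: locating the device from the marker's known position plus the scanned offset, and crowd-sourcing the marker's stored position by averaging many locally-determined reports. A hedged sketch, with the offset sign convention assumed for illustration:

```python
def device_position(marker_geo, device_to_marker):
    """Device position from the marker's known position and the offset
    measured by scanning (offset taken here as marker minus device)."""
    return tuple(m - o for m, o in zip(marker_geo, device_to_marker))

def crowd_sourced_position(reports):
    """Refine a marker's stored position by averaging the locally-determined
    locations reported by many scanning devices."""
    n = len(reports)
    return tuple(sum(r[i] for r in reports) / n for i in range(len(reports[0])))
```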
  • Publication number: 20200371665
    Abstract: A method and system control navigation of a 3D CAD model in an augmented reality space. The 3D CAD model is rendered and appears as if the 3D CAD model is present in a physical space at true scale. A constrained translate or rotation mode is activated. In response to the activating, an axis triad is rendered. For translate, the axis triad is three lines extending from a point in each principal axis direction, and division markers are displayed along the lines at predefined division distances. One of the division markers is selected and the model is moved the predefined division distance of the selected division marker along the first line. For rotation, the axis triad is three perpendicular rings, and upon selecting one of the rings, an input gesture causes the model to rotate around the selected axis in the direction of the input gesture.
    Type: Application
    Filed: May 19, 2020
    Publication date: November 26, 2020
    Applicant: California Institute of Technology
    Inventors: Matthew C. Clausen, Charles Goddard, Garrett K. Johnson, Marsette A. Vona, III, Victor X. Luo, Jeffrey S. Norris, Anthony Valderrama, Alexandra E. Samochina
  • Publication number: 20200372789
    Abstract: A method includes identifying remote devices, at a host device, based on received signals that indicate locations and device types for the remote devices. The method also includes identifying visible devices in images of a location and matching a first visible device to a first remote device. The first visible device is matched with the first remote device based on presence of the first visible device within a search area of the images, the search area of the images is determined based on the location for the first remote device, the first visible device is matched with the first remote device based on the device type for the first remote device, and the first visible device is matched with the first remote device based on a machine recognizable indicator that is output by the first visible device. The method also includes pairing the first remote device with the host device.
    Type: Application
    Filed: May 19, 2020
    Publication date: November 26, 2020
    Inventors: Jeffrey S. Norris, Bruno M. Sommer, Alexandre Da Veiga
  • Patent number: 10657716
    Abstract: A method, apparatus, and system provide the ability to control navigation of a three-dimensional (3D) computer aided design (CAD) model in an augmented reality space. The 3D CAD model is rendered in the augmented reality space and appears as if it is present in a physical space at true scale. A virtual camera is defined as fixed to a current pose of a user's head. A virtual line segment S is constructed coincident with a ray R from a center of projection P of the virtual camera and a center pixel of the virtual camera. A check for geometric intersections between the virtual line segment S and surfaces of scene elements is conducted. Upon intersecting with a part of the model, a gaze cursor is rendered at an intersection point C closest to the center of projection P.
    Type: Grant
    Filed: March 7, 2018
    Date of Patent: May 19, 2020
    Assignee: California Institute of Technology
    Inventors: Matthew C. Clausen, Charles Goddard, Garrett K. Johnson, Marsette A. Vona, III, Victor X. Luo, Jeffrey S. Norris, Anthony J. Valderrama
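The gaze-cursor step above intersects a ray from the virtual camera's center of projection with scene surfaces and keeps the hit closest to the viewer. A minimal sketch using planes as the scene surfaces (the plane representation is a simplifying assumption):

```python
def gaze_cursor(origin, direction, planes):
    """Closest intersection of the gaze ray with a set of planes.

    Each plane is (normal, d) with points x satisfying dot(normal, x) = d.
    Returns the intersection point nearest the center of projection, or None.
    """
    best_t, best_point = None, None
    for normal, d in planes:
        denom = sum(n * v for n, v in zip(normal, direction))
        if abs(denom) < 1e-9:
            continue  # ray parallel to this plane
        t = (d - sum(n * o for n, o in zip(normal, origin))) / denom
        if t > 0 and (best_t is None or t < best_t):
            best_t = t
            best_point = tuple(o + t * v for o, v in zip(origin, direction))
    return best_point
```

Rendering the cursor at the returned point gives the effect described: the gaze cursor lands on the nearest part of the model along the view ray.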
  • Patent number: 10612908
    Abstract: An electronic device displays a field of view of a camera with a view of a three-dimensional space and updates the field of view based on changes detected by the camera. While a measurement-point-creation indicator is over a determined anchor point in the field of view, the device changes a visual appearance of the indicator to indicate that a measurement point will be added at the anchor point if a touch input meets first criteria. In response to a touch input that meets the first criteria, a measurement point is added at the anchor point if the indicator is over the anchor point, and at a location away from the anchor point if not. In response to movement of the camera changing the field of view, if the field of view does not include a feature to which measurement points can be added, the measurement-point-creation indicator ceases to be displayed.
    Type: Grant
    Filed: September 21, 2018
    Date of Patent: April 7, 2020
    Assignee: Apple Inc.
    Inventors: Allison W. Dryer, Grant R. Paul, Jonathan R. Dascola, Lisa K. Forssell, Andrew H. Goulding, Stephen O. Lemay, Giancarlo Yerkes, Jeffrey S. Norris, Alexandre Da Veiga
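The anchor-snapping behavior above, placing the point at the detected anchor when the indicator is over it and at the raw touch location otherwise, can be sketched with a simple radius test. The function name and snap radius are illustrative assumptions:

```python
def place_measurement_point(touch_loc, anchor, snap_radius=0.05):
    """Add the measurement point at the detected anchor when the
    measurement-point-creation indicator is over it; otherwise fall back
    to the raw touch location."""
    dist = sum((t - a) ** 2 for t, a in zip(touch_loc, anchor)) ** 0.5
    return anchor if dist <= snap_radius else touch_loc
```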
  • Publication number: 20190370994
    Abstract: A method includes obtaining first pass-through image data characterized by a first pose. The method includes obtaining respective pixel characterization vectors for pixels in the first pass-through image data. The method includes identifying a feature of an object within the first pass-through image data in accordance with a determination that pixel characterization vectors for the feature satisfy a feature confidence threshold. The method includes displaying the first pass-through image data and an AR display marker that corresponds to the feature. The method includes obtaining second pass-through image data characterized by a second pose. The method includes transforming the AR display marker to a position associated with the second pose in order to track the feature. The method includes displaying the second pass-through image data and maintaining display of the AR display marker that corresponds to the feature of the object based on the transformation.
    Type: Application
    Filed: May 29, 2019
    Publication date: December 5, 2019
    Inventors: Jeffrey S. Norris, Alexandre Da Veiga, Bruno M. Sommer, Ye Cong, Tobias Eble, Moinul Khan, Nicolas Bonnier, Hao Pan
  • Publication number: 20190340799
    Abstract: An electronic device displays a field of view of a camera with a view of a three-dimensional space and updates the field of view based on changes detected by the camera. While a measurement-point-creation indicator is over a determined anchor point in the field of view, the device changes a visual appearance of the indicator to indicate that a measurement point will be added at the anchor point if a touch input meets first criteria. In response to a touch input that meets the first criteria, a measurement point is added at the anchor point if the indicator is over the anchor point, and at a location away from the anchor point if not. In response to movement of the camera changing the field of view, if the field of view does not include a feature to which measurement points can be added, the measurement-point-creation indicator ceases to be displayed.
    Type: Application
    Filed: September 21, 2018
    Publication date: November 7, 2019
    Inventors: Allison W. Dryer, Grant R. Paul, Jonathan R. Dascola, Lisa K. Forssell, Andrew H. Goulding, Stephen O. Lemay, Giancarlo Yerkes, Jeffrey S. Norris, Alexandre Da Veiga