Patents by Inventor Paul Lacey
Paul Lacey has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240402800
Abstract: Various implementations disclosed herein include devices, systems, and methods that interpret user activity as user interactions with user interface (UI) elements positioned within a three-dimensional (3D) space such as an extended reality (XR) environment. Some implementations enable user interactions with virtual elements displayed in 3D environments that utilize alternative input modalities, e.g., XR environments that interpret user activity as either direct interactions or indirect interactions with virtual elements.
Type: Application
Filed: May 29, 2024
Publication date: December 5, 2024
Inventors: Julian K. Shutzberg, David J. Meyer, David M. Teitelbaum, Mehmet N. Agaoglu, Ian R. Fasel, Chase B. Lortie, Daniel J. Brewer, Tim H. Cornelissen, Leah M. Gum, Alexander G. Berardino, Lorenzo Soto Doblado, Vinay Chawda, Itay Bar Yosef, Dror Irony, Eslam A. Mostafa, Guy Engelhard, Paul A. Lacey, Ashwin Kumar Asoka Kumar Shenoi, Bhavin Vinodkumar Nayak, Liuhao Ge, Lucas Soffer, Victor Belyaev, Bharat C. Dandu, Matthias M. Schroeder, Yirong Tang
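The abstract above distinguishes direct interactions (e.g., a hand effectively touching a virtual element) from indirect interactions (e.g., gazing at an element from a distance). The sketch below is a minimal, hypothetical illustration of that kind of classification; the data structures, threshold, and function names are invented for the example and are not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class UIElement:
    center: tuple   # (x, y, z) position of the element in the 3D scene
    radius: float   # rough interaction radius around the element

def distance(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

def classify_interaction(hand_pos, gaze_target, element, touch_threshold=0.05):
    """Classify user activity against one UI element.

    Returns "direct" when the hand is effectively touching the element,
    "indirect" when the user is gazing at it from a distance, else None.
    """
    if distance(hand_pos, element.center) <= element.radius + touch_threshold:
        return "direct"
    if gaze_target is element:
        return "indirect"
    return None

button = UIElement(center=(0.0, 1.2, -0.5), radius=0.03)
print(classify_interaction(hand_pos=(0.01, 1.21, -0.49), gaze_target=None, element=button))  # direct
print(classify_interaction(hand_pos=(0.4, 0.9, -0.2), gaze_target=button, element=button))   # indirect
```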
-
Publication number: 20240393876Abstract: Various implementations provide views of 3D environments (e.g., extended reality (XR) environments). Non-eye-based user activity, such as hand gestures, is associated with some types of eye-based activity, such as the user gazing at a particular user interface component displayed within a view of a 3D environment. For example, a user's pinching hand gesture may be associated with the user gazing at a particular user interface component, such as a button, at around the same time as the pinching hand gesture is made. These associated behaviors (e.g., the pinch and gaze at the button) may then be interpreted as user input, e.g., user input selecting or otherwise acting upon that user interface component. In some implementations, non-eye-based user activity is only associated with types of eye-based user activity that are likely to correspond to a user perceiving what they are seeing and/or intentionally looking at something.Type: ApplicationFiled: July 31, 2024Publication date: November 28, 2024Inventors: Vinay Chawda, Mehmet N. Agaoglu, Leah M. Gum, Paul A. Lacey, Julian K. Shutzberg, Tim H. Cornelissen, Alexander G. Berardino
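As a rough illustration of the gaze-plus-gesture association the abstract describes, the sketch below pairs a pinch event with whatever user interface component the gaze was resting on at about the same time. It is a hypothetical sketch, not the claimed implementation; the time window and data shapes are assumptions.

```python
def associate_pinch_with_gaze(pinch_time, gaze_samples, max_offset=0.15):
    """Return the gaze target closest in time to a pinch, within max_offset seconds.

    gaze_samples: list of (timestamp, target_id) tuples, e.g. from an eye tracker;
    target_id is None when the gaze was not resting on any UI component.
    """
    candidates = [(abs(t - pinch_time), target)
                  for t, target in gaze_samples
                  if abs(t - pinch_time) <= max_offset and target is not None]
    if not candidates:
        return None  # the pinch was not near any meaningful gaze fixation
    return min(candidates)[1]

gaze = [(10.00, "window"), (10.05, "button_ok"), (10.12, "button_ok"), (10.30, None)]
print(associate_pinch_with_gaze(pinch_time=10.10, gaze_samples=gaze))  # -> "button_ok"
```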
-
Publication number: 20240377540
Abstract: Techniques are disclosed for operating a time-of-flight (TOF) sensor. The TOF may be operated in a low power mode by repeatedly performing a low power mode sequence, which may include performing a depth frame by emitting light pulses, detecting reflected light pulses, and computing a depth map based on the detected reflected light pulses. Performing the low power mode sequence may also include performing an amplitude frame at least one time by emitting a light pulse, detecting a reflected light pulse, and computing an amplitude map based on the detected reflected light pulse. In response to determining that an activation condition is satisfied, the TOF may be switched to operate in a high accuracy mode by repeatedly performing a high accuracy mode sequence, which may include performing the depth frame multiple times.
Type: Application
Filed: July 12, 2024
Publication date: November 14, 2024
Applicant: Magic Leap, Inc.
Inventors: David Cohen, Elad Joseph, Eyal Preter, Paul Lacey, Koon Keong Shee, Evyatar Bluzer
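The mode switching described above can be pictured as a small state machine: a low-power loop of one depth frame plus an amplitude frame, switching to a high-accuracy loop of several depth frames once an activation condition is met. The sketch below is purely illustrative; the frame counts, the activation test, and the sensor interface are invented for the example.

```python
class TofController:
    """Toy controller for a time-of-flight (TOF) sensor with two operating modes."""

    def __init__(self, sensor, activation_condition, high_accuracy_depth_frames=4):
        # `sensor` is assumed to expose capture_depth_frame() / capture_amplitude_frame().
        self.sensor = sensor
        self.activation_condition = activation_condition
        self.high_accuracy_depth_frames = high_accuracy_depth_frames
        self.mode = "low_power"

    def step(self):
        """Run one sequence of the current mode and return the captured frames."""
        if self.mode == "low_power":
            depth = self.sensor.capture_depth_frame()          # single depth frame
            amplitude = self.sensor.capture_amplitude_frame()  # cheap amplitude frame
            if self.activation_condition(amplitude):
                self.mode = "high_accuracy"                    # e.g. motion detected
            return [depth], [amplitude]
        # High-accuracy sequence: multiple depth frames per sequence.
        depths = [self.sensor.capture_depth_frame()
                  for _ in range(self.high_accuracy_depth_frames)]
        return depths, []

class FakeSensor:
    def capture_depth_frame(self): return "depth"
    def capture_amplitude_frame(self): return 0.9

ctrl = TofController(FakeSensor(), activation_condition=lambda amp: amp > 0.8)
print(ctrl.step(), ctrl.mode)  # one low-power sequence, then the mode flips to high_accuracy
```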
-
Patent number: 12099653
Abstract: Various implementations provide views of 3D environments (e.g., extended reality (XR) environments). Non-eye-based user activity, such as hand gestures, is associated with some types of eye-based activity, such as the user gazing at a particular user interface component displayed within a view of a 3D environment. For example, a user's pinching hand gesture may be associated with the user gazing at a particular user interface component, such as a button, at around the same time as the pinching hand gesture is made. These associated behaviors (e.g., the pinch and gaze at the button) may then be interpreted as user input, e.g., user input selecting or otherwise acting upon that user interface component. In some implementations, non-eye-based user activity is only associated with types of eye-based user activity that are likely to correspond to a user perceiving what they are seeing and/or intentionally looking at something.
Type: Grant
Filed: September 11, 2023
Date of Patent: September 24, 2024
Assignee: APPLE INC.
Inventors: Vinay Chawda, Mehmet N. Agaoglu, Leah M. Gum, Paul A. Lacey, Julian K. Shutzberg, Tim H. Cornelissen, Alexander G. Berardino
-
Patent number: 12066545
Abstract: Techniques are disclosed for operating a time-of-flight (TOF) sensor. The TOF may be operated in a low power mode by repeatedly performing a low power mode sequence, which may include performing a depth frame by emitting light pulses, detecting reflected light pulses, and computing a depth map based on the detected reflected light pulses. Performing the low power mode sequence may also include performing an amplitude frame at least one time by emitting a light pulse, detecting a reflected light pulse, and computing an amplitude map based on the detected reflected light pulse. In response to determining that an activation condition is satisfied, the TOF may be switched to operate in a high accuracy mode by repeatedly performing a high accuracy mode sequence, which may include performing the depth frame multiple times.
Type: Grant
Filed: March 23, 2021
Date of Patent: August 20, 2024
Assignee: Magic Leap, Inc.
Inventors: David Cohen, Elad Joseph, Eyal Preter, Paul Lacey, Koon Keong Shee, Evyatar Bluzer
-
Publication number: 20240272723
Abstract: Techniques are disclosed for allowing a user's hands to interact with virtual objects. An image of at least one hand may be received from an image capture device. A plurality of keypoints associated with at least one hand may be detected. In response to determining that a hand is making or is transitioning into making a particular gesture, a subset of the plurality of keypoints may be selected. An interaction point may be registered to a particular location relative to the subset of the plurality of keypoints based on the particular gesture. A proximal point may be registered to a location along the user's body. A ray may be cast from the proximal point through the interaction point. A multi-DOF controller for interacting with the virtual object may be formed based on the ray.
Type: Application
Filed: April 12, 2024
Publication date: August 15, 2024
Applicant: Magic Leap, Inc.
Inventor: Paul Lacey
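The abstract describes registering an interaction point from selected hand keypoints, registering a proximal point on the body, and casting a ray through the two. A minimal numerical sketch of that ray construction follows; the keypoint names, the pinch heuristic, and the shoulder location are assumptions made for the example, not details from the patent.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def interaction_ray(keypoints, shoulder, pinch_distance=0.03):
    """Build an origin/direction ray from hand keypoints and a proximal body point.

    keypoints: dict of named 3D hand keypoints (e.g. "thumb_tip", "index_tip").
    shoulder:  approximate 3D location of the user's shoulder (the proximal point).
    """
    thumb = np.asarray(keypoints["thumb_tip"], dtype=float)
    index = np.asarray(keypoints["index_tip"], dtype=float)
    if np.linalg.norm(thumb - index) <= pinch_distance:
        # Pinch gesture: register the interaction point between the pinching fingertips.
        interaction_point = (thumb + index) / 2.0
    else:
        # Otherwise fall back to the index fingertip as the interaction point.
        interaction_point = index
    origin = np.asarray(shoulder, dtype=float)
    direction = normalize(interaction_point - origin)
    return origin, direction  # ray usable as a simple multi-DOF pointer

kps = {"thumb_tip": [0.30, 1.20, -0.40], "index_tip": [0.31, 1.21, -0.41]}
print(interaction_ray(kps, shoulder=[0.20, 1.40, 0.00]))
```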
-
Publication number: 20240257480
Abstract: Examples of wearable systems and methods can use multiple inputs (e.g., gesture, head pose, eye gaze, voice, totem, and/or environmental factors (e.g., location)) to determine a command that should be executed and objects in the three-dimensional (3D) environment that should be operated on. The wearable system can detect when different inputs converge together, such as when a user seeks to select a virtual object using multiple inputs such as eye gaze, head pose, hand gesture, and totem input. Upon detecting an input convergence, the wearable system can perform a transmodal filtering scheme that leverages the converged inputs to assist in properly interpreting what command the user is providing or what object the user is targeting.
Type: Application
Filed: April 8, 2024
Publication date: August 1, 2024
Inventors: Paul Lacey, Samuel A. Miller, Nicholas Atkinson Kramer, David Charles Lundmark
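One way to picture the input-convergence idea above: several modalities each nominate a target, and a selection is only dispatched when enough of them agree. The sketch below is a simplified, hypothetical rendering of that idea, not the transmodal filtering scheme itself; the modality names and agreement threshold are invented.

```python
from collections import Counter

def converged_target(modal_targets, min_agreement=2):
    """Return the target that enough input modalities agree on, else None.

    modal_targets: mapping of modality name -> target id (or None), e.g.
    {"eye_gaze": "globe", "head_pose": "globe", "hand_gesture": None, "totem": "globe"}.
    """
    votes = Counter(t for t in modal_targets.values() if t is not None)
    if not votes:
        return None
    target, count = votes.most_common(1)[0]
    return target if count >= min_agreement else None

inputs = {"eye_gaze": "globe", "head_pose": "globe", "hand_gesture": None, "totem": "globe"}
print(converged_target(inputs))  # -> "globe": three modalities converge on the same object
```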
-
Patent number: 11983326
Abstract: Techniques are disclosed for allowing a user's hands to interact with virtual objects. An image of at least one hand may be received from an image capture device. A plurality of keypoints associated with at least one hand may be detected. In response to determining that a hand is making or is transitioning into making a particular gesture, a subset of the plurality of keypoints may be selected. An interaction point may be registered to a particular location relative to the subset of the plurality of keypoints based on the particular gesture. A proximal point may be registered to a location along the user's body. A ray may be cast from the proximal point through the interaction point. A multi-DOF controller for interacting with the virtual object may be formed based on the ray.
Type: Grant
Filed: February 25, 2021
Date of Patent: May 14, 2024
Assignee: Magic Leap, Inc.
Inventor: Paul Lacey
-
Patent number: 11983823
Abstract: Examples of wearable systems and methods can use multiple inputs (e.g., gesture, head pose, eye gaze, voice, totem, and/or environmental factors (e.g., location)) to determine a command that should be executed and objects in the three-dimensional (3D) environment that should be operated on. The wearable system can detect when different inputs converge together, such as when a user seeks to select a virtual object using multiple inputs such as eye gaze, head pose, hand gesture, and totem input. Upon detecting an input convergence, the wearable system can perform a transmodal filtering scheme that leverages the converged inputs to assist in properly interpreting what command the user is providing or what object the user is targeting.
Type: Grant
Filed: November 5, 2020
Date of Patent: May 14, 2024
Assignee: Magic Leap, Inc.
Inventors: Paul Lacey, Samuel A. Miller, Nicholas Atkinson Kramer, David Charles Lundmark
-
Publication number: 20240103618
Abstract: Methods and apparatus for correcting the gaze direction and the origin (entrance pupil) in gaze tracking systems. During enrollment, after an eye model is obtained, the pose of the eye when looking at a target prompt is determined. This information is used to estimate the true visual axis of the eye. The visual axis may then be used to correct the point of view (PoV) with respect to the display during use. If a clip-on lens is present, a corrected gaze axis may be calculated based on the known optical characteristics and pose of the clip-on lens. A clip-on corrected entrance pupil may then be estimated by firing two or more virtual rays through the clip-on lens to determine the intersection between the rays and the corrected gaze axis.
Type: Application
Filed: September 19, 2023
Publication date: March 28, 2024
Applicant: Apple Inc.
Inventors: Julia Benndorf, Qichao Fan, Julian K. Shutzberg, Paul A. Lacey, Hua Gao
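At a very high level, the enrollment step above amounts to estimating a fixed rotational offset between the measured (optical) axis and the true visual axis while the user fixates a known target, then applying that offset to later gaze measurements. The sketch below illustrates only that offset idea under assumed geometry; it is not the method's actual math and it ignores the clip-on-lens ray tracing and entrance-pupil correction entirely.

```python
import numpy as np

def rotation_between(a, b):
    """Rotation matrix that maps unit vector a onto unit vector b (Rodrigues' formula)."""
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    v, c = np.cross(a, b), float(np.dot(a, b))
    if np.isclose(c, 1.0):
        return np.eye(3)
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))

def enroll_offset(measured_axis, eye_center, target_point):
    """During enrollment the user fixates a known target prompt; store the rotation
    from the measured axis to the true visual axis toward that target."""
    true_axis = np.asarray(target_point, float) - np.asarray(eye_center, float)
    return rotation_between(np.asarray(measured_axis, float), true_axis)

def corrected_gaze(measured_axis, offset_rotation):
    """Apply the stored enrollment offset to a runtime gaze measurement."""
    measured_axis = np.asarray(measured_axis, float)
    return offset_rotation @ (measured_axis / np.linalg.norm(measured_axis))

eye = [0.0, 0.0, 0.0]
target = [0.0, 0.05, -1.0]              # enrollment prompt on the display
measured = [0.02, 0.0, -1.0]            # axis reported by the tracker during enrollment
R = enroll_offset(measured, eye, target)
print(corrected_gaze([0.02, 0.0, -1.0], R))  # ~ unit direction toward the target
```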
-
Publication number: 20240103613
Abstract: Various implementations provide views of 3D environments (e.g., extended reality (XR) environments). Non-eye-based user activity, such as hand gestures, is associated with some types of eye-based activity, such as the user gazing at a particular user interface component displayed within a view of a 3D environment. For example, a user's pinching hand gesture may be associated with the user gazing at a particular user interface component, such as a button, at around the same time as the pinching hand gesture is made. These associated behaviors (e.g., the pinch and gaze at the button) may then be interpreted as user input, e.g., user input selecting or otherwise acting upon that user interface component. In some implementations, non-eye-based user activity is only associated with types of eye-based user activity that are likely to correspond to a user perceiving what they are seeing and/or intentionally looking at something.
Type: Application
Filed: September 11, 2023
Publication date: March 28, 2024
Inventors: Vinay Chawda, Mehmet N. Agaoglu, Leah M. Gum, Paul A. Lacey, Julian K. Shutzberg, Tim H. Cornelissen, Alexander G. Berardino
-
Publication number: 20240004464
Abstract: This document describes imaging and visualization systems in which the intent of a group of users in a shared space is determined and acted upon. In one aspect, a method includes identifying, for a group of users in a shared virtual space, a respective objective for each of two or more of the users in the group of users. For each of the two or more users, a determination is made, based on inputs from multiple sensors having different input modalities, of a respective intent of the user. At least a portion of the multiple sensors are sensors of a device of the user that enables the user to participate in the shared virtual space. A determination is made, based on the respective intent, whether the user is performing the respective objective for the user. Output data is generated and provided based on the respective objectives and respective intents.
Type: Application
Filed: November 9, 2021
Publication date: January 4, 2024
Inventors: Paul Lacey, Brian David Schwab, Samuel A. Miller, John Andrew Sands, Colman Thomas Bryant
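As a loose illustration of the objective-versus-intent comparison described above, the sketch below fuses a few per-sensor intent estimates for each user and checks the result against that user's assigned objective. It is a hypothetical reduction of the idea; the sensor names, confidence scoring, and labels are invented.

```python
def fused_intent(sensor_estimates):
    """Combine per-sensor (intent, confidence) estimates into a single intent label."""
    scores = {}
    for intent, confidence in sensor_estimates.values():
        if intent is not None:
            scores[intent] = scores.get(intent, 0.0) + confidence
    return max(scores, key=scores.get) if scores else None

def progress_report(objectives, observations):
    """For each user, report whether their fused intent matches their assigned objective."""
    return {user: fused_intent(observations.get(user, {})) == objective
            for user, objective in objectives.items()}

objectives = {"alice": "assemble_part", "bob": "inspect_weld"}
observations = {
    "alice": {"hand_tracking": ("assemble_part", 0.8), "eye_gaze": ("assemble_part", 0.6)},
    "bob":   {"hand_tracking": ("idle", 0.7),          "eye_gaze": ("inspect_weld", 0.5)},
}
print(progress_report(objectives, observations))  # {'alice': True, 'bob': False}
```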
-
Publication number: 20210302587
Abstract: Techniques are disclosed for operating a time-of-flight (TOF) sensor. The TOF may be operated in a low power mode by repeatedly performing a low power mode sequence, which may include performing a depth frame by emitting light pulses, detecting reflected light pulses, and computing a depth map based on the detected reflected light pulses. Performing the low power mode sequence may also include performing an amplitude frame at least one time by emitting a light pulse, detecting a reflected light pulse, and computing an amplitude map based on the detected reflected light pulse. In response to determining that an activation condition is satisfied, the TOF may be switched to operate in a high accuracy mode by repeatedly performing a high accuracy mode sequence, which may include performing the depth frame multiple times.
Type: Application
Filed: March 23, 2021
Publication date: September 30, 2021
Applicant: Magic Leap, Inc.
Inventors: David Cohen, Elad Joseph, Eyal Preter, Paul Lacey, Koon Keong Shee, Evyatar Bluzer
-
Publication number: 20210263593
Abstract: Techniques are disclosed for allowing a user's hands to interact with virtual objects. An image of at least one hand may be received from an image capture device. A plurality of keypoints associated with at least one hand may be detected. In response to determining that a hand is making or is transitioning into making a particular gesture, a subset of the plurality of keypoints may be selected. An interaction point may be registered to a particular location relative to the subset of the plurality of keypoints based on the particular gesture. A proximal point may be registered to a location along the user's body. A ray may be cast from the proximal point through the interaction point. A multi-DOF controller for interacting with the virtual object may be formed based on the ray.
Type: Application
Filed: February 25, 2021
Publication date: August 26, 2021
Applicant: Magic Leap, Inc.
Inventor: Paul Lacey
-
Publication number: 20210056764
Abstract: Examples of wearable systems and methods can use multiple inputs (e.g., gesture, head pose, eye gaze, voice, totem, and/or environmental factors (e.g., location)) to determine a command that should be executed and objects in the three-dimensional (3D) environment that should be operated on. The wearable system can detect when different inputs converge together, such as when a user seeks to select a virtual object using multiple inputs such as eye gaze, head pose, hand gesture, and totem input. Upon detecting an input convergence, the wearable system can perform a transmodal filtering scheme that leverages the converged inputs to assist in properly interpreting what command the user is providing or what object the user is targeting.
Type: Application
Filed: November 5, 2020
Publication date: February 25, 2021
Inventors: Paul Lacey, Samuel A. Miller, Nicholas Atkinson Kramer, David Charles Lundmark
-
Patent number: 10861242
Abstract: Examples of wearable systems and methods can use multiple inputs (e.g., gesture, head pose, eye gaze, voice, totem, and/or environmental factors (e.g., location)) to determine a command that should be executed and objects in the three-dimensional (3D) environment that should be operated on. The wearable system can detect when different inputs converge together, such as when a user seeks to select a virtual object using multiple inputs such as eye gaze, head pose, hand gesture, and totem input. Upon detecting an input convergence, the wearable system can perform a transmodal filtering scheme that leverages the converged inputs to assist in properly interpreting what command the user is providing or what object the user is targeting.
Type: Grant
Filed: May 21, 2019
Date of Patent: December 8, 2020
Assignee: Magic Leap, Inc.
Inventors: Paul Lacey, Samuel A. Miller, Nicholas Atkinson Kramer, David Charles Lundmark
-
Publication number: 20190362557
Abstract: Examples of wearable systems and methods can use multiple inputs (e.g., gesture, head pose, eye gaze, voice, totem, and/or environmental factors (e.g., location)) to determine a command that should be executed and objects in the three-dimensional (3D) environment that should be operated on. The wearable system can detect when different inputs converge together, such as when a user seeks to select a virtual object using multiple inputs such as eye gaze, head pose, hand gesture, and totem input. Upon detecting an input convergence, the wearable system can perform a transmodal filtering scheme that leverages the converged inputs to assist in properly interpreting what command the user is providing or what object the user is targeting.
Type: Application
Filed: May 21, 2019
Publication date: November 28, 2019
Inventors: Paul Lacey, Samuel A. Miller, Nicholas Atkinson Kramer, David Charles Lundmark
-
Patent number: 10021243
Abstract: A method or apparatus for connecting a telephone call between a vehicle driver and a customer, the method comprising receiving a driver request message from a device associated with the vehicle driver to place a telephone call between the vehicle driver and a customer; using the driver request message to match the vehicle driver with a job allocation record in at least one database and identifying from the record the identity of the customer; retrieving a telephone number relating to the device associated with the vehicle driver and retrieving a customer telephone number from the at least one database; and causing a telephony service to use the telephone numbers to place the telephone call between the vehicle driver and the customer.
Type: Grant
Filed: February 24, 2016
Date of Patent: July 10, 2018
Assignee: Addison Lee Limited
Inventor: Paul Lacey
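A rough sketch of the call-connection flow in the abstract: match the driver's request to a job allocation record, look up both telephone numbers, and hand them to a telephony service. Everything below (record layout, function names, the telephony interface) is hypothetical and intended only to make the flow concrete.

```python
def connect_driver_to_customer(driver_request, jobs, drivers, telephony):
    """Place a call between a driver and the customer on the driver's current job.

    driver_request: dict with the requesting driver's id, e.g. {"driver_id": "D42"}.
    jobs:           job allocation records keyed by driver id.
    drivers:        driver records keyed by driver id (including phone numbers).
    telephony:      object exposing place_call(number_a, number_b).
    """
    driver_id = driver_request["driver_id"]
    job = jobs.get(driver_id)
    if job is None:
        raise LookupError(f"no job allocation record for driver {driver_id}")
    driver_number = drivers[driver_id]["phone"]
    customer_number = job["customer_phone"]
    return telephony.place_call(driver_number, customer_number)

class FakeTelephony:
    def place_call(self, a, b):
        return f"bridging {a} <-> {b}"

jobs = {"D42": {"customer": "C7", "customer_phone": "+44 20 7000 0001"}}
drivers = {"D42": {"phone": "+44 7700 900123"}}
print(connect_driver_to_customer({"driver_id": "D42"}, jobs, drivers, FakeTelephony()))
```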
-
Publication number: 20180075566
Abstract: A system and method for calculating a price for a vehicle journey are provided, the method performed at a vehicle management server. The vehicle management server receives a journey booking comprising journey information, the journey information specifying at least a start time, a start location and an end location. The journey information may also specify further information. A journey cost calculation module, which may be integral with or separate from the vehicle management server, calculates a base price for the journey based on the start and end points of the journey. The journey cost calculation module checks whether the start time of the journey booking is within a predetermined time period specified by a first modifier rule and, if the start time of the journey booking is within the predetermined time period, applies a price adjustment to the calculated base price according to the first modifier rule.
Type: Application
Filed: February 24, 2016
Publication date: March 15, 2018
Inventor: Paul Lacey
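The pricing logic above can be read as: compute a base price from the start and end points, then apply an adjustment if the start time falls inside a modifier rule's time window. The sketch below illustrates that shape with invented rates, rules, and a trivial distance stand-in; it is not the actual pricing method.

```python
from datetime import datetime, time

def base_price(start_location, end_location, rate_per_km=2.0, minimum=5.0):
    """Toy base price: straight-line 'distance' between (x, y) coordinates in km."""
    dx = end_location[0] - start_location[0]
    dy = end_location[1] - start_location[1]
    return max(minimum, rate_per_km * (dx * dx + dy * dy) ** 0.5)

def apply_modifiers(price, start_time, modifier_rules):
    """Each rule: (window_start, window_end, multiplier), applied when the journey's
    start time falls inside the window (e.g. a peak-hours surcharge)."""
    for window_start, window_end, multiplier in modifier_rules:
        if window_start <= start_time.time() <= window_end:
            price *= multiplier
    return round(price, 2)

rules = [(time(7, 0), time(9, 30), 1.25)]           # 25% peak-time surcharge
booking_start = datetime(2016, 2, 24, 8, 15)
price = base_price((0.0, 0.0), (3.0, 4.0))          # 5 km -> 10.00 base
print(apply_modifiers(price, booking_start, rules)) # -> 12.5
```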
-
Publication number: 20160248914
Abstract: A method or apparatus for connecting a telephone call between a vehicle driver and a customer, the method comprising receiving a driver request message from a device associated with the vehicle driver to place a telephone call between the vehicle driver and a customer; using the driver request message to match the vehicle driver with a job allocation record in at least one database and identifying from the record the identity of the customer; retrieving a telephone number relating to the device associated with the vehicle driver and retrieving a customer telephone number from the at least one database; and causing a telephony service to use the telephone numbers to place the telephone call between the vehicle driver and the customer.
Type: Application
Filed: February 24, 2016
Publication date: August 25, 2016
Inventor: Paul Lacey