Patents by Inventor Luke Cartwright
Luke Cartwright has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250110563
Abstract: The subject technology receives, by one or more hardware processors implementing a local wireless network, a request from a client device to mirror media content displayed on a screen of the client device on a wearable device. In response to the request, the subject technology causes a display of the media content in a mirroring lens of the wearable device. While the media content is being displayed in the mirroring lens of the wearable device, the subject technology tracks hand gestures of a user wearing the wearable device and viewing the media content displayed in the mirroring lens of the wearable device. The subject technology processes navigational or manipulation data based on the tracked hand gestures and sends a navigation or manipulation instruction to the client device or a mirroring lens processor of the wearable device based on the tracked hand gestures.
Type: Application
Filed: November 7, 2023
Publication date: April 3, 2025
Inventors: Hunter Araujo, Sambu Patach Arrojula, Ilteris Kaan Canberk, Luke Cartwright, Samuel Geoffrey Finding, Matthew Hallberg, Dmytro Kucher, Jeremy Littel, Charles Miller, William Miles Miller, Richard Zhuang
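As a minimal sketch of the gesture-to-instruction routing this abstract describes, the following Python maps a tracked hand gesture to a navigation or manipulation instruction and chooses between the client device and the mirroring-lens processor as its destination. Every name here (Gesture, Instruction, route_gesture) is hypothetical; the filing does not define an API.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Gesture(Enum):
    SWIPE_LEFT = auto()
    SWIPE_RIGHT = auto()
    PINCH = auto()

@dataclass
class Instruction:
    action: str   # e.g. "next", "previous", "select"
    target: str   # "client_device" or "lens_processor"

GESTURE_MAP = {
    Gesture.SWIPE_LEFT: "next",
    Gesture.SWIPE_RIGHT: "previous",
    Gesture.PINCH: "select",
}

def route_gesture(gesture: Gesture, manipulates_lens: bool) -> Instruction:
    """Translate a tracked gesture into a navigation/manipulation
    instruction and pick its destination, per the abstract's split
    between the client device and the mirroring-lens processor."""
    action = GESTURE_MAP[gesture]
    target = "lens_processor" if manipulates_lens else "client_device"
    return Instruction(action, target)

print(route_gesture(Gesture.SWIPE_LEFT, manipulates_lens=False))
```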
-
Patent number: 12266063
Abstract: A head-wearable apparatus determines an imaginary reference plane intersecting a head of a user viewing augmented content in a viewing pane having vertical and lateral dimensions in a display of the head-wearable apparatus. The imaginary reference plane coincides with a first viewing direction of the head of the user. The apparatus detects a rotational movement of the head of the user in a vertical direction while viewing the augmented content. In response to the detected rotational movement, the apparatus determines a second viewing direction of the head of the user and determines a reference angle between the imaginary reference plane and the second viewing direction. Based on the reference angle, the apparatus assigns one of a billboard display mode and a headlock display mode (or a combination) to the augmented content presented in the display.
Type: Grant
Filed: March 21, 2023
Date of Patent: April 1, 2025
Assignee: Snap Inc.
Inventors: Luke Cartwright, Ilteris Kaan Canberk
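A worked sketch of the reference-angle test, under simplifying assumptions: the reference plane is taken as horizontal, the angle is measured between that plane and the second viewing direction, and the 30-degree threshold is purely illustrative (the patent claims only that a mode is assigned based on the angle).

```python
import math

def reference_angle(plane_normal, view_dir):
    """Angle between a viewing direction and a plane, in degrees:
    90 minus the angle between the direction and the plane's normal."""
    dot = sum(n * v for n, v in zip(plane_normal, view_dir))
    norms = math.hypot(*plane_normal) * math.hypot(*view_dir)
    return abs(90.0 - math.degrees(math.acos(dot / norms)))

def display_mode(angle_deg, threshold_deg=30.0):
    # Illustrative rule: a small vertical deviation keeps the content
    # billboarded in the world; a large one switches to headlock.
    return "billboard" if angle_deg < threshold_deg else "headlock"

# Head starts level (reference plane normal = +z), then pitches up 40 degrees.
view = (math.cos(math.radians(40)), 0.0, math.sin(math.radians(40)))
angle = reference_angle((0.0, 0.0, 1.0), view)
print(round(angle, 1), display_mode(angle))  # 40.0 headlock
```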
-
Publication number: 20240370159
Abstract: A method for transferring a media item from a portable device to a head-worn device for display in an augmented reality mood board comprises receiving user input, such as a transfer gesture, at the portable device to transfer the media item from the portable device to the head-worn device, and transmitting, via a short-range data transmission protocol, a low-resolution representation of the media item from the portable device to the head-worn device. A link to a higher-resolution representation may also be transmitted to the head-worn device, to enable the head-worn device to obtain the higher-resolution representation. Transmission of the low-resolution representation may commence as soon as initiation of the gesture is detected by the portable device.
Type: Application
Filed: May 4, 2023
Publication date: November 7, 2024
Inventors: Luke Cartwright, Matthew Hallberg, Dmytro Kucher, Xiao Li, Shaun Stelmack, Jiyang Zhu
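A minimal sketch of the two-stage payload this abstract implies: a small low-resolution preview travels over the short-range link immediately, with an optional link the head-worn device can fetch later for the full-resolution asset. Field and function names are assumptions, not the filing's terminology.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TransferPayload:
    media_id: str
    low_res_bytes: bytes          # small preview, sent at once over the short-range link
    high_res_url: Optional[str]   # optional link to the full-resolution asset

def begin_transfer(media_id, preview, url=None):
    """Per the abstract, sending the low-res representation can start as
    soon as the transfer gesture begins, so only the cheap preview ever
    travels over the short-range protocol; the link lets the head-worn
    device obtain the full asset separately."""
    return TransferPayload(media_id, preview, url)

payload = begin_transfer("img-001", b"\x89PNG...", "https://example.com/img-001/full")
print(len(payload.low_res_bytes), payload.high_res_url)
```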
-
Publication number: 20240371110
Abstract: A method is provided for manipulating a gallery of extended reality (XR) media items displayed in a field of view of a head-worn device. The gallery of XR media items is associated with an anchor user interface element. User selection input is detected at a perceived location of the anchor user interface element. Subsequent user motion input moves the perceived location or orientation of the gallery of XR media items in the field of view of the head-worn device. Rotation or translation of the anchor user interface element results in a corresponding rotation and/or translation of all of the items in the gallery of XR media items.
Type: Application
Filed: May 4, 2023
Publication date: November 7, 2024
Inventors: Luke Cartwright, Matthew Hallberg, Dmytro Kucher, Xiao Li, Shaun Stelmack, Jiyang Zhu
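The rigid-body behavior, in which moving the anchor moves every gallery item identically, can be sketched in 2D (a real renderer would use full 3D poses; the function and values below are invented for illustration):

```python
import math

def transform_gallery(items, anchor, angle_deg, translation):
    """Rotate each item about the anchor by angle_deg, then translate,
    so the whole gallery moves as one rigid body with the anchor."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    ax, ay = anchor
    tx, ty = translation
    out = []
    for (x, y) in items:
        dx, dy = x - ax, y - ay
        out.append((ax + dx * cos_a - dy * sin_a + tx,
                    ay + dx * sin_a + dy * cos_a + ty))
    return out

gallery = [(1.0, 0.0), (2.0, 0.0)]
print(transform_gallery(gallery, anchor=(0.0, 0.0), angle_deg=90.0,
                        translation=(0.0, 1.0)))
# -> both items swing 90 degrees around the anchor, then shift up by 1
```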
-
Publication number: 20240355075
Abstract: Systems, methods, and computer-readable media for 3D content display using head-wearable apparatuses. Example methods include a head-wearable apparatus configured to determine a position for a content item on the closest curved line, of a plurality of curved lines, to the head-wearable apparatus that has space for the content item. The method includes adjusting a shape of the content item based on its position on the closest curved line and a user view of a user of the head-wearable apparatus, and causing the adjusted content item to be displayed on a display of the head-wearable apparatus at the position on the closest curved line. The curved lines sit higher or lower as they recede from the head-wearable apparatus. Additionally, the curved line or the content item may be adjusted with a random movement for an organic appearance.
Type: Application
Filed: April 19, 2023
Publication date: October 24, 2024
Inventors: Luke Cartwright, Dmytro Kucher
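A minimal sketch of the placement rule: walk the curved lines from nearest to farthest and put the content item on the first line with free space, with height growing with distance from the wearer. The capacities and the height step are invented for illustration.

```python
def place_item(item, lines):
    """lines: list of dicts ordered nearest-first, each with a
    'capacity' and a list of already-placed 'items'."""
    for depth, line in enumerate(lines):
        if len(line["items"]) < line["capacity"]:
            line["items"].append(item)
            # Per the abstract, lines sit higher (or lower) the farther
            # they are from the head-wearable apparatus.
            return {"line": depth, "height": 0.1 * depth}
    raise RuntimeError("no curved line has space for the item")

lines = [{"capacity": 2, "items": ["a", "b"]}, {"capacity": 2, "items": []}]
print(place_item("c", lines))  # -> {'line': 1, 'height': 0.1}
```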
-
Publication number: 20240320927
Abstract: A head-wearable apparatus determines an imaginary reference plane intersecting a head of a user viewing augmented content in a viewing pane having vertical and lateral dimensions in a display of the head-wearable apparatus. The imaginary reference plane coincides with a first viewing direction of the head of the user. The apparatus detects a rotational movement of the head of the user in a vertical direction while viewing the augmented content. In response to the detected rotational movement, the apparatus determines a second viewing direction of the head of the user and determines a reference angle between the imaginary reference plane and the second viewing direction. Based on the reference angle, the apparatus assigns one of a billboard display mode and a headlock display mode (or a combination) to the augmented content presented in the display.
Type: Application
Filed: March 21, 2023
Publication date: September 26, 2024
Inventors: Luke Cartwright, Ilteris Kaan Canberk
-
Patent number: 11183185
Abstract: A method performed by a computing system for directing a voice command to a function associated with a visual target includes receiving a set of time-variable sensor-based data streams, including an audio data stream and a targeting data stream. The targeting data stream is stored in a buffer as buffered targeting data. Presence of a spoken utterance is identified within the audio data stream and is associated with a temporal identifier corresponding in time to the set of sensor-based data streams. A voice command corresponding to the spoken utterance is identified. A visual targeting vector within the buffered targeting data and a visual target of that visual targeting vector are identified at a time corresponding to the temporal identifier. The voice command is directed to a function associated with the visual target to generate an output.
Type: Grant
Filed: January 9, 2019
Date of Patent: November 23, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Luke Cartwright, Alexander James Thaman, Alton Hau Kwong Kwok
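A minimal sketch of the buffered-targeting lookup: targeting samples are kept in a time-indexed buffer so that when speech recognition completes (typically some time after the words were spoken), the targeting vector from the moment of speech can still be recovered via the temporal identifier. The class and its methods are hypothetical.

```python
from collections import deque

class TargetingBuffer:
    def __init__(self, maxlen=300):
        self.samples = deque(maxlen=maxlen)  # (timestamp, vector) pairs

    def push(self, timestamp, vector):
        self.samples.append((timestamp, vector))

    def vector_at(self, timestamp):
        """Return the buffered targeting vector closest in time to the
        utterance's temporal identifier."""
        return min(self.samples, key=lambda s: abs(s[0] - timestamp))[1]

buf = TargetingBuffer()
for t in range(5):
    buf.push(t * 0.1, (t, 0.0, 1.0))     # synthetic gaze/targeting samples
utterance_time = 0.22                    # temporal identifier of the speech
print(buf.vector_at(utterance_time))     # -> (2, 0.0, 1.0)
```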
-
Patent number: 10930275
Abstract: A computing system is provided. The computing system includes a processor of a display device configured to execute one or more programs. The processor is configured to receive a command from a user by way of natural language input and to identify a set of candidate objects, within or adjacent to the user's field of view, having associated spatialized regions on which the command can be executed, the set of candidate objects identified at least partially by using a machine learning model. The processor is configured to use visual or audio indicators associated with the candidate objects to query the user for disambiguation input, to receive the disambiguation input from the user that selects a target object, and to execute the command on the target object. The processor is configured to train the machine learning model using the disambiguation input and data about the spatialized regions.
Type: Grant
Filed: December 18, 2018
Date of Patent: February 23, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Luke Cartwright, Richard William Neal
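A minimal sketch of the disambiguation loop, with a trivial per-object score table standing in for the machine learning model; the ranking and training update are illustrative assumptions, not the patented model.

```python
class Disambiguator:
    def __init__(self):
        self.scores = {}  # object id -> learned relevance score

    def candidates(self, visible_objects, top_k=2):
        """Rank objects in (or adjacent to) the field of view."""
        ranked = sorted(visible_objects,
                        key=lambda o: self.scores.get(o, 0.0), reverse=True)
        return ranked[:top_k]

    def train(self, chosen, rejected, lr=0.1):
        """Reinforce the user's disambiguation choice."""
        self.scores[chosen] = self.scores.get(chosen, 0.0) + lr
        for obj in rejected:
            self.scores[obj] = self.scores.get(obj, 0.0) - lr

d = Disambiguator()
cands = d.candidates(["lamp", "door", "window"])
# ...highlight cands with visual/audio indicators; user picks "door"...
d.train(chosen="door", rejected=[c for c in cands if c != "door"])
print(d.candidates(["lamp", "door", "window"]))  # "door" now ranks first
```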
-
Patent number: 10789952
Abstract: A computing system is provided. The computing system includes a processor of a display device configured to execute one or more programs. The processor is configured to receive, from a user, a voice command, a first auxiliary input from a first sensor, and a second auxiliary input from a second sensor. The processor is configured to, for each of a plurality of objects in the user's field of view in an environment, determine a first set of probability factors with respect to the first auxiliary input and a second set of probability factors with respect to the second auxiliary input. Each probability factor in the first and second sets indicates a likelihood that respective auxiliary inputs are directed to one of the plurality of objects. The processor is configured to determine a target object based upon the probability factors and execute the command on the target object.
Type: Grant
Filed: December 20, 2018
Date of Patent: September 29, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Luke Cartwright, Richard William Neal, Alton Kwok
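A minimal sketch of fusing the two sets of probability factors: multiplying per-sensor likelihoods and taking the argmax is one simple choice, used here only for illustration (the patent requires just that the target determination be based on both sets).

```python
def pick_target(objects, gaze_probs, gesture_probs):
    """objects: ids in the field of view; gaze_probs / gesture_probs:
    per-object likelihoods that each auxiliary input (e.g. eye gaze,
    hand pointing) is directed at that object."""
    combined = {o: gaze_probs[o] * gesture_probs[o] for o in objects}
    return max(combined, key=combined.get)

objs = ["tv", "lamp", "speaker"]
gaze = {"tv": 0.6, "lamp": 0.3, "speaker": 0.1}
point = {"tv": 0.2, "lamp": 0.7, "speaker": 0.1}
print(pick_target(objs, gaze, point))  # -> "lamp" (0.21 beats tv's 0.12)
```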
-
Publication number: 20200219501
Abstract: A method performed by a computing system for directing a voice command to a function associated with a visual target includes receiving a set of time-variable sensor-based data streams, including an audio data stream and a targeting data stream. The targeting data stream is stored in a buffer as buffered targeting data. Presence of a spoken utterance is identified within the audio data stream and is associated with a temporal identifier corresponding in time to the set of sensor-based data streams. A voice command corresponding to the spoken utterance is identified. A visual targeting vector within the buffered targeting data and a visual target of that visual targeting vector are identified at a time corresponding to the temporal identifier. The voice command is directed to a function associated with the visual target to generate an output.
Type: Application
Filed: January 9, 2019
Publication date: July 9, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Luke Cartwright, Alexander James Thaman, Alton Hau Kwong Kwok
-
Publication number: 20200202849
Abstract: A computing system is provided. The computing system includes a processor of a display device configured to execute one or more programs. The processor is configured to receive, from a user, a voice command, a first auxiliary input from a first sensor, and a second auxiliary input from a second sensor. The processor is configured to, for each of a plurality of objects in the user's field of view in an environment, determine a first set of probability factors with respect to the first auxiliary input and a second set of probability factors with respect to the second auxiliary input. Each probability factor in the first and second sets indicates a likelihood that respective auxiliary inputs are directed to one of the plurality of objects. The processor is configured to determine a target object based upon the probability factors and execute the command on the target object.
Type: Application
Filed: December 20, 2018
Publication date: June 25, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Luke Cartwright, Richard William Neal, Alton Kwok
-
Publication number: 20200193976
Abstract: A computing system is provided. The computing system includes a processor of a display device configured to execute one or more programs. The processor is configured to receive a command from a user by way of natural language input and to identify a set of candidate objects, within or adjacent to the user's field of view, having associated spatialized regions on which the command can be executed, the set of candidate objects identified at least partially by using a machine learning model. The processor is configured to use visual or audio indicators associated with the candidate objects to query the user for disambiguation input, to receive the disambiguation input from the user that selects a target object, and to execute the command on the target object. The processor is configured to train the machine learning model using the disambiguation input and data about the spatialized regions.
Type: Application
Filed: December 18, 2018
Publication date: June 18, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Luke Cartwright, Richard William Neal
-
Patent number: 10600246
Abstract: A method for presenting a physical environment in a virtual environment includes presenting a virtual environment to a user with a near-eye display, imaging a physical environment of the user, positioning a passthrough portal in the virtual environment, fixing a position of the passthrough portal in the virtual environment relative to the physical environment, and presenting a video feed of the physical environment in the passthrough portal in the virtual environment.
Type: Grant
Filed: August 9, 2018
Date of Patent: March 24, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Luke Cartwright, Marcelo Alonso Mejia Cobo, Misbah Uraizee
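A minimal sketch of world-locking the portal: its pose is stored once in physical-world coordinates, and only the head-relative view is recomputed per frame, so head motion never moves the portal itself. The types below are hypothetical simplifications.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PortalAnchor:
    position: tuple   # fixed position in physical-world coordinates
    normal: tuple     # facing direction of the portal plane

def portal_in_view(anchor: PortalAnchor, head_position: tuple):
    """Recompute the portal's head-relative placement each frame; the
    anchor itself never changes, which is what keeps the portal fixed
    in the virtual environment relative to the physical one."""
    return tuple(a - h for a, h in zip(anchor.position, head_position))

anchor = PortalAnchor(position=(2.0, 0.0, 1.5), normal=(0.0, 0.0, -1.0))
print(portal_in_view(anchor, head_position=(0.0, 0.0, 1.6)))
# Same anchor, new head pose: only the view offset changes.
print(portal_in_view(anchor, head_position=(0.5, 0.0, 1.6)))
```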
-
Publication number: 20190385372
Abstract: A method for presenting a physical environment in a virtual environment includes presenting a virtual environment to a user with a near-eye display, imaging a physical environment of the user, identifying at least one surface in the physical environment, positioning a passthrough portal in the virtual environment, a position of the passthrough portal having a z-distance from the user in the virtual environment that is equal to a z-distance of the at least one surface in the physical environment, and presenting a video feed of the physical environment in the passthrough portal in the virtual environment.
Type: Application
Filed: August 9, 2018
Publication date: December 19, 2019
Inventors: Luke Cartwright, Marcelo Alonso Mejia Cobo, Misbah Uraizee, Nicholas Ferianc Kamuda
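The variant claimed here pins the portal's depth to a detected physical surface; a tiny sketch of that z-distance rule follows (the surface names and distances are invented):

```python
def portal_depth(surface_distances, chosen_surface):
    """Return the portal's z-distance in the virtual environment:
    equal, per the abstract, to the z-distance of the chosen physical
    surface (e.g. a wall) from the user."""
    return surface_distances[chosen_surface]

surfaces = {"wall": 2.4, "desk": 0.8}   # meters from the user (illustrative)
print(portal_depth(surfaces, "wall"))   # portal rendered 2.4 m away
```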
-
Publication number: 20190385368
Abstract: A method for presenting a physical environment in a virtual environment includes presenting a virtual environment to a user with a near-eye display, imaging a physical environment of the user, positioning a passthrough portal in the virtual environment, fixing a position of the passthrough portal in the virtual environment relative to the physical environment, and presenting a video feed of the physical environment in the passthrough portal in the virtual environment.
Type: Application
Filed: August 9, 2018
Publication date: December 19, 2019
Inventors: Luke Cartwright, Marcelo Alonso Mejia Cobo, Misbah Uraizee
-
Publication number: 20150049085
Abstract: Methods, systems, and computer-readable media for editing a mesh representing a surface are provided. The method includes receiving a representation of an object. The representation includes the mesh and a plurality of discrete elements comprising one or more boundary elements. The mesh is associated with the one or more boundary elements. The method also includes changing an edited element of the plurality of discrete elements from a boundary element to a non-boundary element, or from a non-boundary element to a boundary element. The method also includes locally recalculating a portion of the mesh based on the changing.
Type: Application
Filed: August 14, 2014
Publication date: February 19, 2015
Inventors: Bjarte Dysvik, Luke Cartwright
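A minimal sketch of the local-recalculation idea: toggling one element between boundary and non-boundary dirties only the mesh faces adjacent to that element, leaving the rest of the surface mesh untouched. The data layout is an assumption for illustration.

```python
def toggle_boundary(elements, faces_touching, element_id):
    """elements: id -> bool (is boundary); faces_touching: id -> set of
    mesh face ids adjacent to that element. Returns the faces that
    must be locally recomputed after the change."""
    elements[element_id] = not elements[element_id]
    return faces_touching[element_id]   # only these faces are rebuilt

elements = {"e1": True, "e2": False}
faces = {"e1": {10, 11}, "e2": {11, 12}}
print(toggle_boundary(elements, faces, "e2"))  # -> {11, 12}
```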