Patents by Inventor Xiaohuan Corina Wang
Xiaohuan Corina Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220353432
Abstract: Systems, methods, apparatuses, and non-transitory computer-readable storage mediums are disclosed for generating AR self-portraits, or “AR selfies.” In an embodiment, a method comprises: capturing, by a first camera of a mobile device, live image data, the live image data including an image of a subject in a physical, real-world environment; receiving, by a depth sensor of the mobile device, depth data indicating a distance of the subject from the camera in the physical, real-world environment; receiving, by one or more motion sensors of the mobile device, motion data indicating at least an orientation of the first camera in the physical, real-world environment; generating a virtual camera transform based on the motion data, the camera transform for determining an orientation of a virtual camera in a virtual environment; and generating composite image data using the image data, a matte, and virtual background content selected based on the virtual camera orientation.
Type: Application
Filed: July 8, 2022
Publication date: November 3, 2022
Inventors: Xiaohuan Corina Wang, Zehang Sun, Joe Weil, Omid Khalili, Stuart Mark Pomerantz, Marc Robins, Toshihiro Horie, Eric Beale, Nathalie Castel, Jean-Michel Berthoud, Brian Walsh, Kevin O'Neil, Andy Harding, Greg Dudey
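The compositing step the abstract describes can be sketched in a few lines. This is a hypothetical illustration only, not Apple's implementation: a rotation built from device motion angles stands in for the virtual camera transform, and a matte alpha-blends the subject over the selected virtual background content. All function and parameter names are invented for the example.

```python
import numpy as np

def camera_transform(yaw: float, pitch: float) -> np.ndarray:
    """Illustrative virtual-camera rotation built from device motion angles (radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    r_yaw = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    r_pitch = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    return r_yaw @ r_pitch

def composite(image: np.ndarray, matte: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Alpha-blend the camera image of the subject over virtual background
    content, using the matte as a per-pixel alpha in [0, 1]."""
    alpha = matte[..., None]  # H x W -> H x W x 1 for broadcasting over channels
    return alpha * image + (1.0 - alpha) * background
```

In the method as described, the background content itself would be rendered from the orientation given by the camera transform before being passed to the blend.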
-
Patent number: 11394898
Abstract: Systems, methods, apparatuses, and non-transitory computer-readable storage mediums are disclosed for generating AR self-portraits, or “AR selfies.” In an embodiment, a method comprises: capturing, by a first camera of a mobile device, live image data, the live image data including an image of a subject in a physical, real-world environment; receiving, by a depth sensor of the mobile device, depth data indicating a distance of the subject from the camera in the physical, real-world environment; receiving, by one or more motion sensors of the mobile device, motion data indicating at least an orientation of the first camera in the physical, real-world environment; generating a virtual camera transform based on the motion data, the camera transform for determining an orientation of a virtual camera in a virtual environment; and generating composite image data using the image data, a matte, and virtual background content selected based on the virtual camera orientation.
Type: Grant
Filed: September 6, 2018
Date of Patent: July 19, 2022
Assignee: Apple Inc.
Inventors: Xiaohuan Corina Wang, Zehang Sun, Joe Weil, Omid Khalili, Stuart Mark Pomerantz, Marc Robins, Toshihiro Horie, Eric Beale, Nathalie Castel, Jean-Michel Berthoud, Brian Walsh, Kevin O'Neil, Andy Harding, Greg Dudey
-
Patent number: 11178356
Abstract: In some implementations, a user device can be configured to create media messages with automatic titling. For example, a user can create a media messaging project that includes multiple video clips. The video clips can be generated based on video data and/or audio data captured by the user device and/or based on pre-recorded video data and/or audio data obtained from various storage locations. When the user device captures the audio data for a clip, the user device can obtain a speech-to-text transcription of the audio data in near real time and present the transcription data (e.g., text) overlaid on the video data while the video data is being captured or presented by the user device.
Type: Grant
Filed: December 26, 2019
Date of Patent: November 16, 2021
Assignee: Apple Inc.
Inventors: David Black, Andrew L. Harding, Joseph-Alexander P. Weil, James Brasure, Joash S. Berkeley, Katherine K. Ernst, Richard Salvador, Stephen Sheeler, William D. Cummings, Xiaohuan Corina Wang, Robert L. Clark, Kevin M. O'Neil
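The overlay behavior this abstract describes, matching timed speech-to-text output to the current playback position so the right title appears over the frame, can be sketched minimally. All names below are illustrative assumptions, not the patented implementation:

```python
from dataclasses import dataclass

@dataclass
class TranscriptSegment:
    """One timed chunk of the near-real-time speech-to-text output."""
    start: float  # seconds from the beginning of the clip
    end: float
    text: str

def caption_at(segments: list, t: float) -> str:
    """Return the transcription text to overlay at playback time t,
    or an empty string when no segment covers that time."""
    for seg in segments:
        if seg.start <= t < seg.end:
            return seg.text
    return ""
```

A renderer would call something like `caption_at` once per frame, while capture or playback runs, and draw the returned string over the video.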
-
Patent number: 10839577
Abstract: Systems, methods, apparatuses, and non-transitory computer-readable storage mediums are disclosed for generating AR self-portraits, or “AR selfies.” In an embodiment, a method comprises: capturing, by a first camera of a mobile device, image data, the image data including an image of a subject in a physical, real-world environment; receiving, by a depth sensor of the mobile device, depth data indicating a distance of the subject from the camera in the physical, real-world environment; receiving, by one or more motion sensors of the mobile device, motion data indicating at least an orientation of the first camera in the physical, real-world environment; generating a virtual camera transform based on the motion data, the camera transform for determining an orientation of a virtual camera in a virtual environment; and generating composite image data using the image data, a matte, and virtual background content selected based on the virtual camera orientation.
Type: Grant
Filed: October 31, 2018
Date of Patent: November 17, 2020
Assignee: Apple Inc.
Inventors: Toshihiro Horie, Kevin O'Neil, Zehang Sun, Xiaohuan Corina Wang, Joe Weil, Omid Khalili, Stuart Mark Pomerantz, Marc Robins, Eric Beale, Nathalie Castel, Jean-Michel Berthoud, Brian Walsh, Andy Harding, Greg Dudey
-
Publication number: 20200137349
Abstract: In some implementations, a user device can be configured to create media messages with automatic titling. For example, a user can create a media messaging project that includes multiple video clips. The video clips can be generated based on video data and/or audio data captured by the user device and/or based on pre-recorded video data and/or audio data obtained from various storage locations. When the user device captures the audio data for a clip, the user device can obtain a speech-to-text transcription of the audio data in near real time and present the transcription data (e.g., text) overlaid on the video data while the video data is being captured or presented by the user device.
Type: Application
Filed: December 26, 2019
Publication date: April 30, 2020
Applicant: Apple Inc.
Inventors: David Black, Andrew L. Harding, Joseph-Alexander P. Weil, James Brasure, Joash S. Berkeley, Katherine K. Ernst, Richard Salvador, Stephen Sheeler, William D. Cummings, Xiaohuan Corina Wang, Robert L. Clark, Kevin M. O'Neil
-
Patent number: 10560656
Abstract: In some implementations, a user device can be configured to create media messages with automatic titling. For example, a user can create a media messaging project that includes multiple video clips. The video clips can be generated based on video data and/or audio data captured by the user device and/or based on pre-recorded video data and/or audio data obtained from various storage locations. When the user device captures the audio data for a clip, the user device can obtain a speech-to-text transcription of the audio data in near real time and present the transcription data (e.g., text) overlaid on the video data while the video data is being captured or presented by the user device.
Type: Grant
Filed: March 15, 2018
Date of Patent: February 11, 2020
Assignee: Apple Inc.
Inventors: Joseph-Alexander P. Weil, Andrew L. Harding, David Black, James Brasure, Joash S. Berkeley, Katherine K. Ernst, Richard Salvador, Stephen Sheeler, William D. Cummings, Xiaohuan Corina Wang, Robert L. Clark, Kevin M. O'Neil
-
Publication number: 20190082118
Abstract: Systems, methods, apparatuses, and non-transitory computer-readable storage mediums are disclosed for generating AR self-portraits, or “AR selfies.” In an embodiment, a method comprises: capturing, by a first camera of a mobile device, live image data, the live image data including an image of a subject in a physical, real-world environment; receiving, by a depth sensor of the mobile device, depth data indicating a distance of the subject from the camera in the physical, real-world environment; receiving, by one or more motion sensors of the mobile device, motion data indicating at least an orientation of the first camera in the physical, real-world environment; generating a virtual camera transform based on the motion data, the camera transform for determining an orientation of a virtual camera in a virtual environment; and generating composite image data using the image data, a matte, and virtual background content selected based on the virtual camera orientation.
Type: Application
Filed: September 6, 2018
Publication date: March 14, 2019
Applicant: Apple Inc.
Inventors: Xiaohuan Corina Wang, Zehang Sun, Joe Weil, Omid Khalili, Stuart Mark Pomerantz, Marc Robins, Toshihiro Horie, Eric Beale, Nathalie Castel, Jean-Michel Berthoud, Brian Walsh, Kevin O'Neil, Andy Harding, Greg Dudey
-
Publication number: 20190080498
Abstract: Systems, methods, apparatuses, and non-transitory computer-readable storage mediums are disclosed for generating AR self-portraits, or “AR selfies.” In an embodiment, a method comprises: capturing, by a first camera of a mobile device, image data, the image data including an image of a subject in a physical, real-world environment; receiving, by a depth sensor of the mobile device, depth data indicating a distance of the subject from the camera in the physical, real-world environment; receiving, by one or more motion sensors of the mobile device, motion data indicating at least an orientation of the first camera in the physical, real-world environment; generating a virtual camera transform based on the motion data, the camera transform for determining an orientation of a virtual camera in a virtual environment; and generating composite image data using the image data, a matte, and virtual background content selected based on the virtual camera orientation.
Type: Application
Filed: October 31, 2018
Publication date: March 14, 2019
Applicant: Apple Inc.
Inventors: Toshihiro Horie, Kevin O'Neil, Zehang Sun, Xiaohuan Corina Wang, Joe Weil, Omid Khalili, Stuart Mark Pomerantz, Marc Robins, Eric Beale, Nathalie Castel, Jean-Michel Berthoud, Brian Walsh, Andy Harding, Greg Dudey
-
Publication number: 20180270446
Abstract: In some implementations, a user device can be configured to create media messages with automatic titling. For example, a user can create a media messaging project that includes multiple video clips. The video clips can be generated based on video data and/or audio data captured by the user device and/or based on pre-recorded video data and/or audio data obtained from various storage locations. When the user device captures the audio data for a clip, the user device can obtain a speech-to-text transcription of the audio data in near real time and present the transcription data (e.g., text) overlaid on the video data while the video data is being captured or presented by the user device.
Type: Application
Filed: March 15, 2018
Publication date: September 20, 2018
Applicant: Apple Inc.
Inventors: Joseph-Alexander P. Weil, Andrew L. Harding, David Black, James Brasure, Joash S. Berkeley, Katherine K. Ernst, Richard Salvador, Stephen Sheeler, William D. Cummings, Xiaohuan Corina Wang, Robert L. Clark, Kevin M. O'Neil
-
Publication number: 20020190988
Abstract: A method of displacing a tessellated surface, based on features of a displacement map, by analyzing a model to determine its level of detail. Where the level of detail is high, the number of polygons (typically triangles) used to represent the high-detail area is increased through the use of “sub-triangles.” The positions of the sub-triangles are also strategically located and constrained to better represent the high-detail area, particularly any edges in the area. The level of detail can be determined using a displacement map for the surface. The positions of the triangles can be located by determining feature points (or sub-triangle vertices) in the areas of detail, where the feature points can be moved toward the areas of high rate of change and additional feature points can be added. The feature points can be connected to form the sub-triangles, with an emphasis or constraint on connecting points along an edge or border.
Type: Application
Filed: January 31, 2002
Publication date: December 19, 2002
Inventors: Jerome Maillot, Xiaohuan Corina Wang
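The core idea, adding sub-triangles only where the displacement map shows high detail, can be illustrated with a deliberately simplified sketch. The patent places and constrains sub-triangle vertices at detected feature points; here, as an assumption-laden stand-in, a triangle is split one-to-four at its edge midpoints whenever the displacement values sampled at its corners vary by more than a threshold:

```python
def midpoint(a, b):
    """Midpoint of two 2-D points in (u, v) parameter space."""
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

def subdivide(tri, displacement, threshold=0.1):
    """Return [tri] unchanged in low-detail areas; where the displacement
    range across the corners exceeds the threshold, return four sub-triangles
    formed at the edge midpoints (a simplification of feature-point placement)."""
    heights = [displacement(u, v) for (u, v) in tri]
    if max(heights) - min(heights) <= threshold:
        return [tri]
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
```

Applying this recursively concentrates triangles near sharp features of the displacement map, which is the effect the abstract describes; the patented method additionally moves the new vertices toward areas of high rate of change and constrains connections along edges.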