Patents by Inventor Xiaohuan Corina Wang

Xiaohuan Corina Wang has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220353432
Abstract: Systems, methods, apparatuses and non-transitory, computer-readable storage mediums are disclosed for generating AR self-portraits or “AR selfies.” In an embodiment, a method comprises: capturing, by a first camera of a mobile device, live image data, the live image data including an image of a subject in a physical, real-world environment; receiving, by a depth sensor of the mobile device, depth data indicating a distance of the subject from the camera in the physical, real-world environment; receiving, by one or more motion sensors of the mobile device, motion data indicating at least an orientation of the first camera in the physical, real-world environment; generating a virtual camera transform based on the motion data, the camera transform for determining an orientation of a virtual camera in a virtual environment; and generating composite image data using the image data, a matte, and virtual background content selected based on the virtual camera orientation.
    Type: Application
    Filed: July 8, 2022
    Publication date: November 3, 2022
    Inventors: Xiaohuan Corina Wang, Zehang Sun, Joe Weil, Omid Khalili, Stuart Mark Pomerantz, Marc Robins, Toshihiro Horie, Eric Beale, Nathalie Castel, Jean-Michel Berthoud, Brian Walsh, Kevin O'Neil, Andy Harding, Greg Dudey
  • Patent number: 11394898
Abstract: Systems, methods, apparatuses and non-transitory, computer-readable storage mediums are disclosed for generating AR self-portraits or “AR selfies.” In an embodiment, a method comprises: capturing, by a first camera of a mobile device, live image data, the live image data including an image of a subject in a physical, real-world environment; receiving, by a depth sensor of the mobile device, depth data indicating a distance of the subject from the camera in the physical, real-world environment; receiving, by one or more motion sensors of the mobile device, motion data indicating at least an orientation of the first camera in the physical, real-world environment; generating a virtual camera transform based on the motion data, the camera transform for determining an orientation of a virtual camera in a virtual environment; and generating composite image data using the image data, a matte, and virtual background content selected based on the virtual camera orientation.
    Type: Grant
    Filed: September 6, 2018
    Date of Patent: July 19, 2022
    Assignee: Apple Inc.
    Inventors: Xiaohuan Corina Wang, Zehang Sun, Joe Weil, Omid Khalili, Stuart Mark Pomerantz, Marc Robins, Toshihiro Horie, Eric Beale, Nathalie Castel, Jean-Michel Berthoud, Brian Walsh, Kevin O'Neil, Andy Harding, Greg Dudey
  • Patent number: 11178356
    Abstract: In some implementations, a user device can be configured to create media messages with automatic titling. For example, a user can create a media messaging project that includes multiple video clips. The video clips can be generated based on video data and/or audio data captured by the user device and/or based on pre-recorded video data and/or audio data obtained from various storage locations. When the user device captures the audio data for a clip, the user device can obtain a speech-to-text transcription of the audio data in near real time and present the transcription data (e.g., text) overlaid on the video data while the video data is being captured or presented by the user device.
    Type: Grant
    Filed: December 26, 2019
    Date of Patent: November 16, 2021
    Assignee: Apple Inc.
    Inventors: David Black, Andrew L. Harding, Joseph-Alexander P. Weil, James Brasure, Joash S. Berkeley, Katherine K. Ernst, Richard Salvador, Stephen Sheeler, William D. Cummings, Xiaohuan Corina Wang, Robert L. Clark, Kevin M. O'Neil
  • Patent number: 10839577
Abstract: Systems, methods, apparatuses and non-transitory, computer-readable storage mediums are disclosed for generating AR self-portraits or “AR selfies.” In an embodiment, a method comprises: capturing, by a first camera of a mobile device, image data, the image data including an image of a subject in a physical, real-world environment; receiving, by a depth sensor of the mobile device, depth data indicating a distance of the subject from the camera in the physical, real-world environment; receiving, by one or more motion sensors of the mobile device, motion data indicating at least an orientation of the first camera in the physical, real-world environment; generating a virtual camera transform based on the motion data, the camera transform for determining an orientation of a virtual camera in a virtual environment; and generating composite image data using the image data, a matte, and virtual background content selected based on the virtual camera orientation.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: November 17, 2020
    Assignee: Apple Inc.
    Inventors: Toshihiro Horie, Kevin O'Neil, Zehang Sun, Xiaohuan Corina Wang, Joe Weil, Omid Khalili, Stuart Mark Pomerantz, Marc Robins, Eric Beale, Nathalie Castel, Jean-Michel Berthoud, Brian Walsh, Andy Harding, Greg Dudey
  • Publication number: 20200137349
    Abstract: In some implementations, a user device can be configured to create media messages with automatic titling. For example, a user can create a media messaging project that includes multiple video clips. The video clips can be generated based on video data and/or audio data captured by the user device and/or based on pre-recorded video data and/or audio data obtained from various storage locations. When the user device captures the audio data for a clip, the user device can obtain a speech-to-text transcription of the audio data in near real time and present the transcription data (e.g., text) overlaid on the video data while the video data is being captured or presented by the user device.
    Type: Application
    Filed: December 26, 2019
    Publication date: April 30, 2020
    Applicant: Apple Inc.
    Inventors: David Black, Andrew L. Harding, Joseph-Alexander P. Weil, James Brasure, Joash S. Berkeley, Katherine K. Ernst, Richard Salvador, Stephen Sheeler, William D. Cummings, Xiaohuan Corina Wang, Robert L. Clark, Kevin M. O'Neil
  • Patent number: 10560656
    Abstract: In some implementations, a user device can be configured to create media messages with automatic titling. For example, a user can create a media messaging project that includes multiple video clips. The video clips can be generated based on video data and/or audio data captured by the user device and/or based on pre-recorded video data and/or audio data obtained from various storage locations. When the user device captures the audio data for a clip, the user device can obtain a speech-to-text transcription of the audio data in near real time and present the transcription data (e.g., text) overlaid on the video data while the video data is being captured or presented by the user device.
    Type: Grant
    Filed: March 15, 2018
    Date of Patent: February 11, 2020
    Assignee: Apple Inc.
    Inventors: Joseph-Alexander P. Weil, Andrew L. Harding, David Black, James Brasure, Joash S. Berkeley, Katherine K. Ernst, Richard Salvador, Stephen Sheeler, William D. Cummings, Xiaohuan Corina Wang, Robert L. Clark, Kevin M. O'Neil
  • Publication number: 20190082118
Abstract: Systems, methods, apparatuses and non-transitory, computer-readable storage mediums are disclosed for generating AR self-portraits or “AR selfies.” In an embodiment, a method comprises: capturing, by a first camera of a mobile device, live image data, the live image data including an image of a subject in a physical, real-world environment; receiving, by a depth sensor of the mobile device, depth data indicating a distance of the subject from the camera in the physical, real-world environment; receiving, by one or more motion sensors of the mobile device, motion data indicating at least an orientation of the first camera in the physical, real-world environment; generating a virtual camera transform based on the motion data, the camera transform for determining an orientation of a virtual camera in a virtual environment; and generating composite image data using the image data, a matte, and virtual background content selected based on the virtual camera orientation.
    Type: Application
    Filed: September 6, 2018
    Publication date: March 14, 2019
    Applicant: Apple Inc.
    Inventors: Xiaohuan Corina Wang, Zehang Sun, Joe Weil, Omid Khalili, Stuart Mark Pomerantz, Marc Robins, Toshihiro Horie, Eric Beale, Nathalie Castel, Jean-Michel Berthoud, Brian Walsh, Kevin O'Neil, Andy Harding, Greg Dudey
  • Publication number: 20190080498
Abstract: Systems, methods, apparatuses and non-transitory, computer-readable storage mediums are disclosed for generating AR self-portraits or “AR selfies.” In an embodiment, a method comprises: capturing, by a first camera of a mobile device, image data, the image data including an image of a subject in a physical, real-world environment; receiving, by a depth sensor of the mobile device, depth data indicating a distance of the subject from the camera in the physical, real-world environment; receiving, by one or more motion sensors of the mobile device, motion data indicating at least an orientation of the first camera in the physical, real-world environment; generating a virtual camera transform based on the motion data, the camera transform for determining an orientation of a virtual camera in a virtual environment; and generating composite image data using the image data, a matte, and virtual background content selected based on the virtual camera orientation.
    Type: Application
    Filed: October 31, 2018
    Publication date: March 14, 2019
    Applicant: Apple Inc.
    Inventors: Toshihiro Horie, Kevin O'Neil, Zehang Sun, Xiaohuan Corina Wang, Joe Weil, Omid Khalili, Stuart Mark Pomerantz, Marc Robins, Eric Beale, Nathalie Castel, Jean-Michel Berthoud, Brian Walsh, Andy Harding, Greg Dudey
  • Publication number: 20180270446
    Abstract: In some implementations, a user device can be configured to create media messages with automatic titling. For example, a user can create a media messaging project that includes multiple video clips. The video clips can be generated based on video data and/or audio data captured by the user device and/or based on pre-recorded video data and/or audio data obtained from various storage locations. When the user device captures the audio data for a clip, the user device can obtain a speech-to-text transcription of the audio data in near real time and present the transcription data (e.g., text) overlaid on the video data while the video data is being captured or presented by the user device.
    Type: Application
    Filed: March 15, 2018
    Publication date: September 20, 2018
    Applicant: Apple Inc.
    Inventors: Joseph-Alexander P. Weil, Andrew L. Harding, David Black, James Brasure, Joash S. Berkeley, Katherine K. Ernst, Richard Salvador, Stephen Sheeler, William D. Cummings, Xiaohuan Corina Wang, Robert L. Clark, Kevin M. O'Neil
  • Publication number: 20020190988
Abstract: A method of displacing a tessellated surface, based on features of a displacement map, by analyzing a model to determine the level of detail in the model. Where the level of detail is high, the number of polygons, typically triangles, used to represent the high-detail area is increased through the use of “sub-triangles”. The positions of the sub-triangles are also strategically located and constrained to better represent the high-detail area, particularly any edges in the area. The level of detail can be determined using a displacement map for the surface. The positions of the triangles can be located by determining feature points (or sub-triangle vertices) in the areas of detail, where the feature points can be moved toward the areas of high rate of change and additional feature points can be added. The feature points can be connected to form the sub-triangles, with an emphasis or constraint on connecting points along an edge or border.
    Type: Application
    Filed: January 31, 2002
    Publication date: December 19, 2002
    Inventors: Jerome Maillot, Xiaohuan Corina Wang
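The AR-selfie abstracts above describe a pipeline: a depth map separates the subject from the scene (the matte), device motion data yields a virtual camera orientation, and the subject is composited over virtual background content. The following is a minimal illustrative sketch of that general idea, not the patented implementation; the distance threshold, the yaw/pitch/roll rotation convention, and the simple linear matte are all assumptions made for the example.

```python
import numpy as np

def depth_matte(depth, max_distance=1.5):
    """Soft matte: pixels closer than max_distance (meters, assumed) are foreground."""
    return np.clip((max_distance - depth) / max_distance, 0.0, 1.0)

def virtual_camera_transform(yaw, pitch, roll):
    """Build a 3x3 rotation from device orientation angles (radians).

    The Z-Y-X composition order here is an illustrative choice, not the
    convention used in the patent.
    """
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cr, -sr, 0.0], [sr, cr, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    return Rz @ Ry @ Rx

def composite(image, depth, background):
    """Alpha-blend the live image over background content using the depth matte."""
    matte = depth_matte(depth)[..., None]  # broadcast over color channels
    return matte * image + (1.0 - matte) * background
```

In practice the background content would itself be rendered from the virtual camera pose returned by `virtual_camera_transform`, so that the virtual scene pans as the device rotates.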
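The automatic-titling abstracts describe overlaying a near-real-time speech-to-text transcription on video as it is captured. A toy sketch of that flow, under the assumption that some external recognizer delivers transcribed words incrementally (the `CaptionOverlay` class and its method names are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class CaptionOverlay:
    """Accumulates incrementally transcribed words and pairs them with frames."""
    words: list = field(default_factory=list)

    def on_transcript(self, word: str) -> None:
        # Called by a (hypothetical) streaming speech-to-text engine
        # as each word becomes available.
        self.words.append(word)

    def render(self, frame):
        # A real implementation would rasterize the caption onto the frame;
        # here we simply return the frame tagged with the current caption text.
        return frame, " ".join(self.words)
```

The key property the abstracts emphasize is that captioning happens while the clip is still being captured, so the overlay updates word by word rather than after recording ends.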
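The final abstract concerns adaptive tessellation: where a displacement map shows high detail, triangles are split into sub-triangles. A minimal sketch of detail-driven subdivision, assuming gradient magnitude of the displacement map as the detail measure and simple midpoint subdivision (the patent's feature-point placement and edge constraints are not modeled here):

```python
import numpy as np

def detail(dmap, tri):
    """Mean gradient magnitude of the displacement map over the triangle's bounding box."""
    gy, gx = np.gradient(dmap)
    mag = np.hypot(gx, gy)
    xs = [int(v[0]) for v in tri]
    ys = [int(v[1]) for v in tri]
    region = mag[min(ys):max(ys) + 1, min(xs):max(xs) + 1]
    return float(region.mean()) if region.size else 0.0

def subdivide(tri):
    """Split one triangle into four sub-triangles at its edge midpoints."""
    a, b, c = (np.asarray(v, dtype=float) for v in tri)
    ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def tessellate(dmap, tris, threshold, max_depth=3):
    """Recursively subdivide triangles whose region of the map is detailed."""
    out = []
    for tri in tris:
        if max_depth > 0 and detail(dmap, tri) > threshold:
            out.extend(tessellate(dmap, subdivide(tri), threshold, max_depth - 1))
        else:
            out.append(tri)
    return out
```

A flat displacement map leaves the mesh untouched, while a map with a sharp step (an "edge" in the abstract's terms) concentrates sub-triangles around the discontinuity.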