Patents by Inventor Yedan Qian

Yedan Qian has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11205305
    Abstract: In one embodiment, a method includes presenting to a user, on a display of a head-worn client computing device, a three-dimensional video including images of a real-life scene that is remote from the user's physical environment. The method also includes presenting to the user, on the display of the head-worn client computing device, a graphical object including an image of the user's physical environment or a virtual graphical object.
    Type: Grant
    Filed: September 16, 2015
    Date of Patent: December 21, 2021
    Assignee: Samsung Electronics Company, Ltd.
    Inventors: Sajid Sadi, Sergio Perdices-Gonzalez, Rahul Budhiraja, Brian Dongwoo Lee, Ayesha Mudassir Khwaja, Pranav Mistry, Link Huang, Cathy Kim, Michael Noh, Ranhee Chung, Sangwoo Han, Jason Yeh, Junyeon Cho, Soichan Nget, Brian Harms, Yedan Qian, Ruokan He
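    The abstract above describes layering a remote three-dimensional scene with either a passthrough image of the user's surroundings or a virtual graphical object. A minimal Python sketch of one way such a compositing step could be organized follows; every class, method, and layer name here is hypothetical and illustrative, not taken from the patent itself.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Layer:
        """One visual layer on the head-worn display."""
        name: str
        kind: str  # "remote_video", "passthrough", or "virtual_object"

    @dataclass
    class HeadsetDisplay:
        """Sketch of a compositor: the remote 3D scene forms the base
        layer, and a local-environment image or virtual object is
        composited on top of it."""
        layers: list = field(default_factory=list)

        def present_remote_scene(self, name: str) -> None:
            # The remote real-life scene is always the bottom layer.
            self.layers.insert(0, Layer(name, "remote_video"))

        def present_overlay(self, name: str, kind: str) -> None:
            # Overlays may show the user's own environment (passthrough)
            # or a purely virtual graphical object.
            if kind not in ("passthrough", "virtual_object"):
                raise ValueError(f"unknown overlay kind: {kind}")
            self.layers.append(Layer(name, kind))

        def frame_order(self) -> list:
            # Bottom-to-top draw order for the current frame.
            return [f"{layer.kind}:{layer.name}" for layer in self.layers]
    ```

    For example, presenting a remote scene and then a passthrough view of the user's room would yield a draw order of remote video first, passthrough overlay second.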
  • Patent number: 10298412
    Abstract: In one embodiment, one or more systems may receive input from a user identifying an interactive region of a physical environment. The one or more systems may determine a location of the interactive region relative to a depth sensor and monitor, at least in part by the depth sensor, the interactive region for a predetermined event. The one or more systems may detect, at least in part by the depth sensor, the predetermined event. In response to detecting the predetermined event, the one or more systems may initiate a predetermined action associated with the predetermined event.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: May 21, 2019
    Assignee: Samsung Electronics Company, Ltd.
    Inventors: Brian Harms, Pol Pla, Yedan Qian, Olivier Bau
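    The method described in this abstract follows a clear loop: the user identifies a region, the system locates it relative to a depth sensor, monitors it for a predetermined event, and triggers an associated action on detection. A minimal Python sketch of that flow is shown below; the region representation, event definition (any depth point entering the region), and all names are assumptions for illustration, not details from the patent.

    ```python
    from dataclasses import dataclass
    from typing import Callable, Iterable, List, Tuple

    Point = Tuple[float, float, float]

    @dataclass
    class InteractiveRegion:
        """An axis-aligned box in the depth sensor's coordinate frame."""
        x_min: float
        x_max: float
        y_min: float
        y_max: float
        z_min: float
        z_max: float

        def contains(self, point: Point) -> bool:
            x, y, z = point
            return (self.x_min <= x <= self.x_max
                    and self.y_min <= y <= self.y_max
                    and self.z_min <= z <= self.z_max)

    class RegionMonitor:
        """Monitors user-defined regions and fires the associated action
        when the predetermined event (here: any depth point falling
        inside the region) is detected in a frame."""

        def __init__(self) -> None:
            self._bindings: List[Tuple[InteractiveRegion, Callable[[], str]]] = []

        def register(self, region: InteractiveRegion,
                     action: Callable[[], str]) -> None:
            self._bindings.append((region, action))

        def process_frame(self, depth_points: Iterable[Point]) -> List[str]:
            # Check one frame of depth-sensor points against every region;
            # return the results of any actions that were triggered.
            points = list(depth_points)
            results = []
            for region, action in self._bindings:
                if any(region.contains(p) for p in points):
                    results.append(action())
            return results
    ```

    A usage example, assuming a region defined in metres in front of the sensor:

    ```python
    monitor = RegionMonitor()
    shelf = InteractiveRegion(0.0, 0.5, 0.0, 0.3, 1.0, 1.5)
    monitor.register(shelf, lambda: "lights_on")
    monitor.process_frame([(0.2, 0.1, 1.2)])  # event detected in region
    monitor.process_frame([(2.0, 2.0, 2.0)])  # no event; nothing triggered
    ```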
  • Publication number: 20180323992
    Abstract: In one embodiment, one or more systems may receive input from a user identifying an interactive region of a physical environment. The one or more systems may determine a location of the interactive region relative to a depth sensor and monitor, at least in part by the depth sensor, the interactive region for a predetermined event. The one or more systems may detect, at least in part by the depth sensor, the predetermined event. In response to detecting the predetermined event, the one or more systems may initiate a predetermined action associated with the predetermined event.
    Type: Application
    Filed: July 19, 2018
    Publication date: November 8, 2018
    Inventors: Brian Harms, Pol Pla, Yedan Qian, Olivier Bau
  • Patent number: 10057078
    Abstract: In one embodiment, one or more systems may receive input from a user identifying an interactive region of a physical environment. The one or more systems may determine a location of the interactive region relative to a depth sensor and monitor, at least in part by the depth sensor, the interactive region for a predetermined event. The one or more systems may detect, at least in part by the depth sensor, the predetermined event. In response to detecting the predetermined event, the one or more systems may initiate a predetermined action associated with the predetermined event.
    Type: Grant
    Filed: February 3, 2016
    Date of Patent: August 21, 2018
    Assignee: Samsung Electronics Company, Ltd.
    Inventors: Brian Harms, Pol Pla, Yedan Qian, Olivier Bau
  • Publication number: 20170054569
    Abstract: In one embodiment, one or more systems may receive input from a user identifying an interactive region of a physical environment. The one or more systems may determine a location of the interactive region relative to a depth sensor and monitor, at least in part by the depth sensor, the interactive region for a predetermined event. The one or more systems may detect, at least in part by the depth sensor, the predetermined event. In response to detecting the predetermined event, the one or more systems may initiate a predetermined action associated with the predetermined event.
    Type: Application
    Filed: February 3, 2016
    Publication date: February 23, 2017
    Inventors: Brian Harms, Pol Pla, Yedan Qian, Olivier Bau
  • Publication number: 20160086379
    Abstract: In one embodiment, a method includes presenting to a user, on a display of a head-worn client computing device, a three-dimensional video including images of a real-life scene that is remote from the user's physical environment. The method also includes presenting to the user, on the display of the head-worn client computing device, a graphical object including an image of the user's physical environment or a virtual graphical object.
    Type: Application
    Filed: September 16, 2015
    Publication date: March 24, 2016
    Inventors: Sajid Sadi, Sergio Perdices-Gonzalez, Rahul Budhiraja, Brian Dongwoo Lee, Ayesha Mudassir Khwaja, Pranav Mistry, Link Huang, Cathy Kim, Michael Noh, Ranhee Chung, Sangwoo Han, Jason Yeh, Junyeon Cho, Soichan Nget, Brian Harms, Yedan Qian, Ruokan He