Patents by Inventor Jingyi Fang

Jingyi Fang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240058963
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for robot programming. One of the methods comprises generating an interactive user interface that includes an illustration of a virtual robot corresponding to a physical robot; receiving first user input data specifying a first target pose of the virtual robot; causing the physical robot to traverse to the first target pose while updating in real-time the illustration of the virtual robot as the physical robot transitions to the first target pose; receiving a user request to switch from operating in a synchronized mode to operating in an unsynchronized mode; receiving second user input data specifying a second target pose of the virtual robot; and generating an animation of the virtual robot transitioning from the first target pose to the second target pose but withholding causing the physical robot to traverse to the second target pose.
    Type: Application
    Filed: August 19, 2022
    Publication date: February 22, 2024
    Inventors: Jingyi Fang, Michael Christopher Degen, David Andrew Schmidt, Jason Harold Tucker
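The synchronized/unsynchronized workflow this abstract describes can be sketched in a few lines. This is a minimal illustration, not the claimed implementation; the class and field names are hypothetical. The key behavior is that a target pose always animates the virtual robot, but is mirrored to the physical robot only while the session is in synchronized mode.

```python
from dataclasses import dataclass, field

@dataclass
class RobotProgrammingSession:
    """Tracks a virtual robot pose and mirrors it to a physical
    robot only while operating in synchronized mode."""
    synchronized: bool = True
    virtual_pose: tuple = (0.0, 0.0, 0.0)
    physical_pose: tuple = (0.0, 0.0, 0.0)
    animations: list = field(default_factory=list)

    def set_target_pose(self, pose):
        # Always animate the virtual robot toward the target pose.
        self.animations.append((self.virtual_pose, pose))
        self.virtual_pose = pose
        # Traverse the physical robot only in synchronized mode.
        if self.synchronized:
            self.physical_pose = pose

session = RobotProgrammingSession()
session.set_target_pose((0.5, 0.2, 0.1))  # synchronized: both robots move
session.synchronized = False              # user switches to unsynchronized
session.set_target_pose((0.9, 0.4, 0.3))  # only the virtual robot animates;
                                          # the physical robot stays at the
                                          # first target pose
```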
  • Publication number: 20230294275
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for robot programming. One of the methods comprises generating an interactive user interface that includes an illustration of a first virtual robot, the first virtual robot having an initial pose that defines respective joint angles of one or more joints of the first virtual robot; receiving user input data specifying a target pose of the first virtual robot; and generating an animation of the first virtual robot transitioning between the initial pose and the target pose.
    Type: Application
    Filed: March 21, 2022
    Publication date: September 21, 2023
    Inventor: Jingyi Fang
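The pose-to-pose animation described above amounts to interpolating the virtual robot's joint angles between the initial pose and the target pose. A minimal sketch follows; the function name is hypothetical, and linear interpolation is an assumption, since the abstract does not specify an interpolation scheme.

```python
def animate_pose(initial, target, steps=5):
    """Linearly interpolate per-joint angles between two poses,
    yielding intermediate frames for the animation."""
    frames = []
    for i in range(1, steps + 1):
        t = i / steps
        frames.append(tuple(a + (b - a) * t
                            for a, b in zip(initial, target)))
    return frames

# Two joints animated from (0, 0) to (10, 20) in two frames.
print(animate_pose((0, 0), (10, 20), steps=2))
# → [(5.0, 10.0), (10.0, 20.0)]
```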
  • Publication number: 20230226688
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating control instructions for operating a robot. One of the methods includes generating an interactive user interface that illustrates an object to be manipulated by a robot by using an end effector; receiving, within the user interface, first user input data indicating a workcell location; computing a surface normal of a surface in the workcell corresponding to the workcell location; presenting, within the user interface, a graphical representation of the surface normal corresponding to the workcell location; receiving, within the user interface, second user input data selecting the workcell location; and generating pose data for the robot using the computed surface normal and the workcell location.
    Type: Application
    Filed: January 19, 2023
    Publication date: July 20, 2023
    Inventors: Jingyi Fang, Charles Marc Gobeil
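The surface-normal computation central to this abstract has a standard geometric form: given three non-collinear points sampled from the surface near the workcell location, the normal is the normalized cross product of two edge vectors. A minimal sketch, assuming a point-sampled surface representation (the patent does not disclose how the surface is represented):

```python
import math

def surface_normal(p0, p1, p2):
    """Unit normal of the plane through three non-collinear points,
    computed as the normalized cross product of two edge vectors."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]

# Three points on the workcell floor (xy-plane): the normal points up (+z).
print(surface_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # → [0.0, 0.0, 1.0]
```

Pose data for the end effector can then be oriented along (or against) this normal at the selected workcell location.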
  • Patent number: 10949069
    Abstract: Systems, apparatuses, and methods for performing a user interface action are provided. In one embodiment, an example method includes receiving, by one or more computing devices, data indicative of a user input directed to causing a motion of a virtual camera associated with a user interface. The method further includes detecting, by the one or more computing devices, a shake event associated with the user interface based at least in part on the motion of the virtual camera. The method further includes performing, by the one or more computing devices, an action associated with the user interface based at least in part on the detected shake event.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: March 16, 2021
    Assignee: Google LLC
    Inventor: Jingyi Fang
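The shake-event detection in this abstract can be approximated with a simple heuristic over the virtual camera's recent motion: count rapid direction reversals along one camera axis and fire when enough occur. This is an assumption-laden sketch; the patent does not disclose its detection criteria, and the function name and thresholds are hypothetical.

```python
def detect_shake(positions, min_reversals=3, min_amplitude=0.5):
    """Return True if the 1-D camera trajectory contains at least
    `min_reversals` direction reversals whose per-step movement
    exceeds `min_amplitude` (a crude shake heuristic)."""
    reversals = 0
    prev_delta = 0.0
    for a, b in zip(positions, positions[1:]):
        delta = b - a
        # A reversal: a large step in the opposite direction
        # from the previous large step.
        if abs(delta) >= min_amplitude and prev_delta * delta < 0:
            reversals += 1
        if abs(delta) >= min_amplitude:
            prev_delta = delta
    return reversals >= min_reversals

print(detect_shake([0.0, 1.0, -1.0, 1.0, -1.0]))  # → True
print(detect_shake([0.0, 1.0, 2.0, 3.0]))         # → False
```

On a detected shake, the interface would then perform the associated action (the patent's examples leave the action unspecified, e.g. resetting the view).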
  • Publication number: 20200272308
    Abstract: Systems, apparatuses, and methods for performing a user interface action are provided. In one embodiment, an example method includes receiving, by one or more computing devices, data indicative of a user input directed to causing a motion of a virtual camera associated with a user interface. The method further includes detecting, by the one or more computing devices, a shake event associated with the user interface based at least in part on the motion of the virtual camera. The method further includes performing, by the one or more computing devices, an action associated with the user interface based at least in part on the detected shake event.
    Type: Application
    Filed: March 5, 2020
    Publication date: August 27, 2020
    Inventor: Jingyi Fang
  • Patent number: 10606457
    Abstract: Systems, apparatuses, and methods for performing a user interface action are provided. In one embodiment, an example method includes receiving, by one or more computing devices, data indicative of a user input directed to causing a motion of a virtual camera associated with a user interface. The method further includes detecting, by the one or more computing devices, a shake event associated with the user interface based at least in part on the motion of the virtual camera. The method further includes performing, by the one or more computing devices, an action associated with the user interface based at least in part on the detected shake event.
    Type: Grant
    Filed: October 11, 2016
    Date of Patent: March 31, 2020
    Assignee: Google LLC
    Inventor: Jingyi Fang
  • Publication number: 20190072407
    Abstract: Systems, apparatuses, and methods for providing an interactive, geo-contextual interface are provided. In one embodiment, a method can include providing a user interface for display on a display device. The user interface can include a display area for presenting visual content, the visual content representing an original location on a three-dimensional body. The method can include providing an interactive widget for display on the display device. The widget can be represented as three-dimensional and having an appearance corresponding to the three-dimensional body. The method can include receiving data indicative of a user input directed to at least one of the user interface display area and the widget. The method can include adjusting at least one of the visual content, the widget, and the visual indicator based at least in part on the data indicative of the user input.
    Type: Application
    Filed: July 26, 2016
    Publication date: March 7, 2019
    Inventors: Jingyi Fang, Sean Askay
  • Publication number: 20180101293
    Abstract: Systems, apparatuses, and methods for performing a user interface action are provided. In one embodiment, an example method includes receiving, by one or more computing devices, data indicative of a user input directed to causing a motion of a virtual camera associated with a user interface. The method further includes detecting, by the one or more computing devices, a shake event associated with the user interface based at least in part on the motion of the virtual camera. The method further includes performing, by the one or more computing devices, an action associated with the user interface based at least in part on the detected shake event.
    Type: Application
    Filed: October 11, 2016
    Publication date: April 12, 2018
    Inventor: Jingyi Fang
  • Patent number: D820877
    Type: Grant
    Filed: April 17, 2017
    Date of Patent: June 19, 2018
    Assignee: Google LLC
    Inventors: Rachel Elizabeth Inman, Sean Askay, Jingyi Fang, Gopal Shah