Patents by Inventor Stefan Welker

Stefan Welker has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11919169
    Abstract: An example computer-implemented method includes receiving, from one or more vision components in an environment, vision data that captures features of the environment, including object features of an object that is located in the environment, and prior to a robot manipulating the object: (i) determining based on the vision data, at least one first adjustment to a programmed trajectory of movement of the robot operating in the environment to perform a task of transporting the object, and (ii) determining based on the object features of the object, at least one second adjustment to the programmed trajectory of movement of the robot operating in the environment to perform the task, and causing the robot to perform the task, in accordance with the at least one first adjustment and the at least one second adjustment to the programmed trajectory of movement of the robot.
    Type: Grant
    Filed: November 19, 2019
    Date of Patent: March 5, 2024
    Assignee: Google LLC
    Inventors: Johnny Lee, Stefan Welker
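The two-stage adjustment this abstract describes (one offset derived from vision data about the environment, a second derived from the object's own features) can be sketched as follows. The function name and the tuple-based waypoint format are illustrative assumptions, not the patented method:

```python
def adjust_trajectory(programmed, env_adjustment, object_adjustment):
    """Apply an environment-derived and an object-derived offset to every
    waypoint of a programmed trajectory before the robot executes it.

    Waypoints and offsets are (x, y, z) tuples in the robot's frame.
    """
    return [
        tuple(p + e + o for p, e, o in zip(point, env_adjustment, object_adjustment))
        for point in programmed
    ]
```

The robot would then execute the returned waypoints in place of the originally programmed trajectory.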
  • Publication number: 20230398690
    Abstract: Utilization of user interface inputs, from remote client devices, in controlling robot(s) in an environment. Implementations relate to generating training instances based on object manipulation parameters, defined by instances of user interface input(s), and training machine learning model(s) to predict the object manipulation parameter(s). Those implementations can subsequently utilize the trained machine learning model(s) to reduce a quantity of instances that input(s) from remote client device(s) are solicited in performing a given set of robotic manipulations and/or to reduce the extent of input(s) from remote client device(s) in performing a given set of robotic operations. Implementations are additionally or alternatively related to mitigating idle time of robot(s) through the utilization of vision data that captures object(s), to be manipulated by a robot, prior to the object(s) being transported to a robot workspace within which the robot can reach and manipulate the object.
    Type: Application
    Filed: August 11, 2023
    Publication date: December 14, 2023
    Inventors: Johnny Lee, Stefan Welker
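A minimal sketch of the flow this abstract describes: each remote-operator UI input yields one training instance pairing vision features with the manipulation parameter the operator specified, and once a model is trained, operator input is solicited only when the model is unsure. The class, function names, and confidence threshold here are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class TrainingInstance:
    vision_features: list   # features extracted from vision data for an object
    grasp_pose: tuple       # manipulation parameter given via remote UI input


def build_training_instances(episodes):
    """One training instance per remote-operator correction."""
    return [TrainingInstance(v, g) for v, g in episodes]


def needs_operator_input(model_confidence, threshold=0.9):
    """Solicit remote UI input only when the trained model is unsure,
    reducing the quantity of solicited inputs over time."""
    return model_confidence < threshold
```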
  • Patent number: 11724398
    Abstract: Utilization of user interface inputs, from remote client devices, in controlling robot(s) in an environment. Implementations relate to generating training instances based on object manipulation parameters, defined by instances of user interface input(s), and training machine learning model(s) to predict the object manipulation parameter(s). Those implementations can subsequently utilize the trained machine learning model(s) to reduce a quantity of instances that input(s) from remote client device(s) are solicited in performing a given set of robotic manipulations and/or to reduce the extent of input(s) from remote client device(s) in performing a given set of robotic operations. Implementations are additionally or alternatively related to mitigating idle time of robot(s) through the utilization of vision data that captures object(s), to be manipulated by a robot, prior to the object(s) being transported to a robot workspace within which the robot can reach and manipulate the object.
    Type: Grant
    Filed: November 24, 2021
    Date of Patent: August 15, 2023
    Assignee: Google LLC
    Inventors: Johnny Lee, Stefan Welker
  • Publication number: 20220371195
    Abstract: An example computer-implemented method includes receiving, from one or more vision components in an environment, vision data that captures features of the environment, including object features of an object that is located in the environment, and prior to a robot manipulating the object: (i) determining based on the vision data, at least one first adjustment to a programmed trajectory of movement of the robot operating in the environment to perform a task of transporting the object, and (ii) determining based on the object features of the object, at least one second adjustment to the programmed trajectory of movement of the robot operating in the environment to perform the task, and causing the robot to perform the task, in accordance with the at least one first adjustment and the at least one second adjustment to the programmed trajectory of movement of the robot.
    Type: Application
    Filed: November 19, 2019
    Publication date: November 24, 2022
    Inventors: Johnny Lee, Stefan Welker
  • Publication number: 20220355483
    Abstract: An example method for providing a graphical user interface (GUI) of a computing device includes receiving an input indicating a target pose of the robot, providing for display on the GUI of the computing device a transparent representation of the robot as a preview of the target pose in combination with the textured model of the robot indicating the current state of the robot, generating a boundary illustration on the GUI representative of a limit of a range of motion of the robot, based on the target pose extending the robot beyond the boundary illustration, modifying characteristics of the transparent representation of the robot and of the boundary illustration on the GUI to inform of an invalid pose, and based on the target pose being a valid pose, sending instructions to the robot causing the robot to perform the target pose.
    Type: Application
    Filed: November 19, 2019
    Publication date: November 10, 2022
    Inventors: Johnny Lee, Stefan Welker
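The valid/invalid distinction behind the boundary illustration can be sketched as a per-joint range-of-motion test that drives the styling of the transparent preview. The function names and the style dictionary are assumptions made for illustration:

```python
def pose_within_limits(target_pose, joint_limits):
    """Return True if every joint angle lies within its range of motion."""
    return all(lo <= angle <= hi
               for angle, (lo, hi) in zip(target_pose, joint_limits))


def preview_style(target_pose, joint_limits):
    """Style for the transparent preview robot: flag invalid poses."""
    if pose_within_limits(target_pose, joint_limits):
        return {"opacity": 0.5, "tint": "none"}  # valid: plain ghost preview
    return {"opacity": 0.5, "tint": "red"}       # invalid: inform the user
```

Only when the pose is valid would the GUI send the instructions that cause the robot to perform the target pose.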
  • Patent number: 11491514
    Abstract: An apparatus for sorting containers, in particular, beverage containers, may comprise at least one conveyor belt for transporting a multitude of containers, an identification device for identifying a decor on each container conveyed on the conveyor belt, wherein selection data can be generated by the identification device depending on the respective decor present on a specifically detected container, and a selection device for repositioning individual containers on the conveyor belt depending on the respectively identified decor. The selection device forms a continuous track and has shifting means for shifting the identified containers. The continuous track extends along the conveyor belt in a selection region, and the individual shifting means in the selection region can be accelerated to the same speed as the conveyor belt by means of a transport device, thereby being adjacent to a specific and previously identified container.
    Type: Grant
    Filed: May 4, 2018
    Date of Patent: November 8, 2022
    Assignee: QUISS QUALITAETS-INSPEKTIONSSYSTEME UND SERVICE GMBH
    Inventors: Stefan Welker, Bernhard Gruber
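Accelerating a shifting means from rest up to the conveyor-belt speed, so that it runs adjacent to a previously identified container, implies a simple rendezvous calculation. This sketch (constant acceleration assumed, hypothetical function name) gives the time and track length that acceleration phase needs:

```python
def shifter_rendezvous(belt_speed, accel):
    """Time and track distance needed for a shifter starting at rest to
    accelerate uniformly up to the conveyor-belt speed.

    belt_speed in m/s, accel in m/s^2; returns (seconds, meters).
    """
    t = belt_speed / accel            # v = a * t
    distance = 0.5 * accel * t ** 2   # s = a * t^2 / 2
    return t, distance
```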
  • Publication number: 20220152833
    Abstract: Utilization of user interface inputs, from remote client devices, in controlling robot(s) in an environment. Implementations relate to generating training instances based on object manipulation parameters, defined by instances of user interface input(s), and training machine learning model(s) to predict the object manipulation parameter(s). Those implementations can subsequently utilize the trained machine learning model(s) to reduce a quantity of instances that input(s) from remote client device(s) are solicited in performing a given set of robotic manipulations and/or to reduce the extent of input(s) from remote client device(s) in performing a given set of robotic operations. Implementations are additionally or alternatively related to mitigating idle time of robot(s) through the utilization of vision data that captures object(s), to be manipulated by a robot, prior to the object(s) being transported to a robot workspace within which the robot can reach and manipulate the object.
    Type: Application
    Filed: November 24, 2021
    Publication date: May 19, 2022
    Inventors: Johnny Lee, Stefan Welker
  • Patent number: 11213953
    Abstract: Utilization of user interface inputs, from remote client devices, in controlling robot(s) in an environment. Implementations relate to generating training instances based on object manipulation parameters, defined by instances of user interface input(s), and training machine learning model(s) to predict the object manipulation parameter(s). Those implementations can subsequently utilize the trained machine learning model(s) to reduce a quantity of instances that input(s) from remote client device(s) are solicited in performing a given set of robotic manipulations and/or to reduce the extent of input(s) from remote client device(s) in performing a given set of robotic operations. Implementations are additionally or alternatively related to mitigating idle time of robot(s) through the utilization of vision data that captures object(s), to be manipulated by a robot, prior to the object(s) being transported to a robot workspace within which the robot can reach and manipulate the object.
    Type: Grant
    Filed: July 26, 2019
    Date of Patent: January 4, 2022
    Assignee: Google LLC
    Inventors: Johnny Lee, Stefan Welker
  • Patent number: 11054918
    Abstract: Systems and methods for identifying locations and controlling devices are provided. For example, a user may indicate a location by aiming at the location from multiple positions in a physical space. The user may also identify a controllable device to control by aiming at the device. Example systems and methods include determining a first position within a three-dimensional space, receiving a first directional input, and determining a first ray based on the first position and first directional input. Example systems and methods also include determining a second position within the three-dimensional space, receiving a second directional input, and determining a second ray based on the second position and second directional input. Example systems and methods may also include identifying a location within a three-dimensional space based on the first ray and the second ray.
    Type: Grant
    Filed: April 8, 2020
    Date of Patent: July 6, 2021
    Assignee: Google LLC
    Inventors: Steven Goldberg, Charles L. Chen, Stefan Welker
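Identifying a location from two aim rays reduces to finding where the rays (nearly) intersect. A standard closest-point-of-approach computation, sketched here with NumPy under a hypothetical function name, returns the midpoint of the shortest segment between the two rays:

```python
import numpy as np


def locate_from_two_rays(p1, d1, p2, d2):
    """Estimate the 3-D location a user aimed at from two positions.

    Each ray is an origin p and a direction d. The indicated location is
    taken as the midpoint of the segment of closest approach.
    Returns None when the rays are (nearly) parallel.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:              # no well-defined closest point
        return None
    t1 = (b * e - c * d) / denom       # parameter along ray 1
    t2 = (a * e - b * d) / denom       # parameter along ray 2
    closest1 = p1 + t1 * d1
    closest2 = p2 + t2 * d2
    return (closest1 + closest2) / 2
```

Two rays that both pass through the aimed-at point recover it exactly; noisy aims land near the true location.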
  • Publication number: 20210102820
    Abstract: A method includes: triggering presentation of at least a portion of a map on a device that is in a map mode, wherein a first point of interest (POI) object is placed on the map, the first POI object representing a first POI located at a first physical location; detecting, while the map is presented, an input triggering a transition of the device from the map mode to an augmented reality (AR) mode; triggering presentation of an AR view on the device in the AR mode, the AR view including an image captured by a camera of the device, the image having a field of view; determining whether the first physical location of the first POI is within the field of view; and if so, triggering placement of the first POI object at a first edge of the AR view.
    Type: Application
    Filed: February 23, 2018
    Publication date: April 8, 2021
    Inventors: Andre Le, Stefan Welker, Paulo Coelho
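The in-view test and edge placement the abstract describes can be sketched in one dimension (horizontal bearing only, which simplifies away elevation). The function name and the return conventions are illustrative assumptions:

```python
def poi_screen_placement(bearing_deg, heading_deg, fov_deg):
    """Place a POI in an AR view given the camera heading and field of view.

    Returns a horizontal screen position in [0, 1] when the POI's physical
    location is inside the field of view, otherwise 'left_edge' or
    'right_edge' so the POI object can be pinned at an edge of the AR view.
    """
    # signed angle from camera heading to POI, normalized to (-180, 180]
    delta = (bearing_deg - heading_deg + 180) % 360 - 180
    half = fov_deg / 2
    if -half <= delta <= half:
        return (delta + half) / fov_deg  # 0 = left edge, 1 = right edge
    return "left_edge" if delta < 0 else "right_edge"
```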
  • Publication number: 20210023711
    Abstract: Utilization of user interface inputs, from remote client devices, in controlling robot(s) in an environment. Implementations relate to generating training instances based on object manipulation parameters, defined by instances of user interface input(s), and training machine learning model(s) to predict the object manipulation parameter(s). Those implementations can subsequently utilize the trained machine learning model(s) to reduce a quantity of instances that input(s) from remote client device(s) are solicited in performing a given set of robotic manipulations and/or to reduce the extent of input(s) from remote client device(s) in performing a given set of robotic operations. Implementations are additionally or alternatively related to mitigating idle time of robot(s) through the utilization of vision data that captures object(s), to be manipulated by a robot, prior to the object(s) being transported to a robot workspace within which the robot can reach and manipulate the object.
    Type: Application
    Filed: July 26, 2019
    Publication date: January 28, 2021
    Inventors: Johnny Lee, Stefan Welker
  • Patent number: 10802711
    Abstract: Systems and methods are described that include generating a virtual environment for display in a head-mounted display device. The virtual environment may include at least one three-dimensional virtual object having a plurality of volumetric zones configured to receive virtual contact. The method may also include detecting a plurality of inputs corresponding to a plurality of actions performed in the virtual environment on the at least one three-dimensional virtual object. Each action corresponds to a plurality of positions and orientations associated with at least one tracked input device. The method may include generating, for each action and while detecting the plurality of inputs, a plurality of prediction models and determining based on the plurality of prediction models in which of the plurality of volumetric zones the at least one tracked input device is predicted to virtually collide.
    Type: Grant
    Filed: May 10, 2017
    Date of Patent: October 13, 2020
    Assignee: Google LLC
    Inventors: Manuel Christian Clement, Andrey Doronichev, Stefan Welker
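One simple stand-in for the prediction models described here is linear extrapolation of the tracked input device's recent positions, tested against spherical volumetric zones. The function name and the (center, radius) zone format are assumptions for illustration:

```python
import numpy as np


def predicted_zone(positions, dt, horizon, zones):
    """Extrapolate a tracked controller linearly from its last two sampled
    positions and report the first volumetric zone the predicted path
    enters within `horizon` future steps, or None."""
    p0, p1 = np.asarray(positions[-2]), np.asarray(positions[-1])
    velocity = (p1 - p0) / dt
    for step in range(1, horizon + 1):
        future = p1 + velocity * step * dt
        for name, (center, radius) in zones.items():
            if np.linalg.norm(future - np.asarray(center)) <= radius:
                return name
    return None
```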
  • Patent number: 10754497
    Abstract: Systems and methods are described for generating a virtual environment including at least one three-dimensional virtual object within a user interface provided in a head mounted display device, detecting a first interaction pattern and a second interaction pattern. In response to detecting the second interaction pattern, a modified version of the three-dimensional virtual object at the first virtual feature is generated according to the first interaction pattern and at the second virtual feature according to the second interaction pattern. The modified version of the three-dimensional virtual object is provided in the user interface in the head mounted display device.
    Type: Grant
    Filed: June 6, 2018
    Date of Patent: August 25, 2020
    Assignee: Google LLC
    Inventors: Stefan Welker, Manuel Christian Clement
  • Publication number: 20200233502
    Abstract: Systems and methods for identifying locations and controlling devices are provided. For example, a user may indicate a location by aiming at the location from multiple positions in a physical space. The user may also identify a controllable device to control by aiming at the device. Example systems and methods include determining a first position within a three-dimensional space, receiving a first directional input, and determining a first ray based on the first position and first directional input. Example systems and methods also include determining a second position within the three-dimensional space, receiving a second directional input, and determining a second ray based on the second position and second directional input. Example systems and methods may also include identifying a location within a three-dimensional space based on the first ray and the second ray.
    Type: Application
    Filed: April 8, 2020
    Publication date: July 23, 2020
    Inventors: Steven Goldberg, Charles L. Chen, Stefan Welker
  • Publication number: 20200139407
    Abstract: An apparatus for sorting containers, in particular, beverage containers, may comprise at least one conveyor belt for transporting a multitude of containers, an identification device for identifying a decor on each container conveyed on the conveyor belt, wherein selection data can be generated by the identification device depending on the respective decor present on a specifically detected container, and a selection device for repositioning individual containers on the conveyor belt depending on the respectively identified decor. The selection device forms a continuous track and has shifting means for shifting the identified containers. The continuous track extends along the conveyor belt in a selection region, and the individual shifting means in the selection region can be accelerated to the same speed as the conveyor belt by means of a transport device, thereby being adjacent to a specific and previously identified container.
    Type: Application
    Filed: May 4, 2018
    Publication date: May 7, 2020
    Applicant: QUISS QUALITAETS-INSPEKTIONSSYSTEME UND SERVICE AG
    Inventors: Stefan Welker, Bernhard Gruber
  • Patent number: 10642991
    Abstract: Computer-implemented systems and methods are described for configuring a plurality of privacy properties for a plurality of virtual objects associated with a first user and a virtual environment being accessed using a device associated with the first user, triggering for display, in the virtual environment, the plurality of virtual objects to the first user accessing the virtual environment, determining whether at least one virtual object is associated with a privacy setting corresponding to the first user. In response to determining that a second user is attempting to access the one virtual object, a visual modification may be applied to the object based on a privacy setting. The method may also include triggering for display, the visual modification of the at least one virtual object, to the second user while continuing to trigger display of the at least one virtual object without the visual modification to the first user.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: May 5, 2020
    Assignee: Google Inc.
    Inventors: Manuel Christian Clement, Stefan Welker
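The per-viewer rendering rule at the heart of this abstract (the owner sees the object unmodified while other users see the privacy-dictated visual modification) can be sketched as follows; the dictionary schema and privacy values are hypothetical:

```python
def render_object(obj, viewer):
    """Return the object's appearance for a given viewer.

    The owner always sees the object unmodified; other users see the
    visual modification dictated by the object's privacy setting.
    """
    if viewer == obj["owner"] or obj["privacy"] == "public":
        return obj["appearance"]
    if obj["privacy"] == "blurred":
        return "blurred"
    return "hidden"
```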
  • Patent number: 10636222
    Abstract: Techniques for generating a virtual environment in a virtual reality system involve changing, within a user interface of the second user, an attribute of an avatar representing the first user while maintaining a spatial position of an object with which the first user is interacting. In this way, the second user may see only non-threatening or otherwise pleasant avatars within their user interface, while other users may not perceive any change to the virtual environment as displayed in their respective user interfaces.
    Type: Grant
    Filed: May 4, 2017
    Date of Patent: April 28, 2020
    Assignee: Google LLC
    Inventors: Manuel Christian Clement, Stefan Welker, Tim Gleason, Ian MacGillivray, Darwin Yamamoto, Shawn Buessing
  • Patent number: 10636199
    Abstract: Techniques for displaying a virtual environment in an HMD involve generating a lighting scheme within a virtual environment configured to reveal a real object in a room in the virtual environment in response to a distance between a user in the room and the real object decreasing while the user is immersed in the virtual environment. Such a lighting scheme protects a user from injury resulting from collision with real objects in a room while immersed in a virtual environment.
    Type: Grant
    Filed: July 20, 2017
    Date of Patent: April 28, 2020
    Assignee: Google LLC
    Inventors: Manuel Christian Clement, Thor Lewis, Stefan Welker
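A lighting scheme that reveals a real object as the user approaches it can be as simple as an intensity that is zero beyond some reveal radius and ramps to full at contact. This is a sketch under that assumption, not the patented scheme:

```python
def reveal_intensity(distance, reveal_radius):
    """Light intensity used to reveal a real object inside the virtual
    scene: zero beyond the reveal radius, ramping linearly to full
    brightness as the user's distance to the object approaches zero."""
    if distance >= reveal_radius:
        return 0.0
    return 1.0 - distance / reveal_radius
```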
  • Patent number: 10620721
    Abstract: Systems and methods for identifying locations and controlling devices are provided. For example, a user may indicate a location by aiming at the location from multiple positions in a physical space. The user may also identify a controllable device to control by aiming at the device. Example systems and methods include determining a first position within a three-dimensional space, receiving a first directional input, and determining a first ray based on the first position and first directional input. Example systems and methods also include determining a second position within the three-dimensional space, receiving a second directional input, and determining a second ray based on the second position and second directional input. Example systems and methods may also include identifying a location within a three-dimensional space based on the first ray and the second ray.
    Type: Grant
    Filed: January 29, 2018
    Date of Patent: April 14, 2020
    Assignee: Google LLC
    Inventors: Steven Goldberg, Charles L. Chen, Stefan Welker
  • Patent number: 10573288
    Abstract: Methods and apparatus to use predicted actions in VR environments are disclosed. An example method includes predicting a predicted time of a predicted virtual contact of a virtual reality controller with a virtual object, determining, based on at least one parameter of the predicted virtual contact, a characteristic of a virtual output the object would make in response to the virtual contact, and initiating producing the virtual output before the predicted time of the virtual contact of the controller with the virtual object.
    Type: Grant
    Filed: December 7, 2017
    Date of Patent: February 25, 2020
    Assignee: Google LLC
    Inventors: Manuel Christian Clement, Stefan Welker
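Initiating the virtual output before the predicted contact time amounts to subtracting the output pipeline's latency from that predicted time, so the output is perceived at the moment of contact. A minimal sketch, with the function name and clamping-at-zero behavior assumed:

```python
def output_start_time(predicted_contact_time, output_latency):
    """Time (in seconds from now) at which to start producing the virtual
    output so that it is perceived at the predicted moment of virtual
    contact; starts immediately if the contact is sooner than the latency."""
    return max(0.0, predicted_contact_time - output_latency)
```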