Patents by Inventor Eugen Solowjow
Eugen Solowjow has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240362855
Abstract: System and method are disclosed for training a generative adversarial network pipeline that can produce realistic artificial depth images useful as training data for deep learning networks used for robotic tasks. A generator network receives a random noise vector and a computer aided design (CAD) generated depth image and generates an artificial depth image. A discriminator network receives either the artificial depth image or a real depth image in alternation, and outputs a predicted label indicating a discriminator decision as to whether the input is the real depth image or the artificial depth image. Training of the generator network is performed in tandem with the discriminator network as a generative adversarial network. A generator network cost function minimizes correctly predicted labels, and a discriminator cost function maximizes correctly predicted labels.
Type: Application
Filed: August 10, 2022
Publication date: October 31, 2024
Applicant: Siemens Aktiengesellschaft
Inventors: Wei Xi Xia, Eugen Solowjow, Shashank Tamaskar, Juan L. Aparicio Ojea, Heiko Claussen, Ines Ugalde Diaz, Gokul Narayanan Sathya Narayanan, Yash Shahapurkar, Chengtao Wen
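The adversarial objective described in this abstract, a generator trained in tandem with a discriminator on CAD-conditioned depth images, can be sketched roughly as below. This is a minimal PyTorch sketch under assumed, illustrative network sizes and a fully connected architecture; the patent's actual architectures, image resolution, and optimizer settings are not specified in the abstract.

```python
# Minimal adversarial training sketch (PyTorch assumed; sizes are illustrative).
import torch
import torch.nn as nn

IMG = 64 * 64   # flattened depth-image size (assumption)
NOISE = 100     # random noise vector length (assumption)

# Generator: noise + CAD-rendered depth image -> artificial depth image
generator = nn.Sequential(
    nn.Linear(NOISE + IMG, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Tanh())
# Discriminator: depth image -> probability the input is a real depth image
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_depth, cad_depth):
    batch = real_depth.size(0)
    noise = torch.randn(batch, NOISE)
    fake_depth = generator(torch.cat([noise, cad_depth], dim=1))

    # Discriminator update: maximize correctly predicted real/fake labels.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real_depth), torch.ones(batch, 1)) + \
             bce(discriminator(fake_depth.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    d_opt.step()

    # Generator update: minimize the discriminator's correct predictions on fakes.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake_depth), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```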
-
Publication number: 20240335941
Abstract: It is recognized herein that current approaches to autonomous operations are often limited to grasping and manipulation operations that can be performed in a single step. It is further recognized herein that there are various operations in robotics (e.g., assembly tasks) that require multiple steps or a sequence of motions to be performed. To determine or plan a sequence of motions for fulfilling a task, an autonomous system that includes a robot can perform object recognition, pose estimation, affordance analysis, decision-making, probabilistic task or motion planning, and object manipulation.
Type: Application
Filed: August 31, 2021
Publication date: October 10, 2024
Applicant: Siemens Aktiengesellschaft
Inventors: Juan L. Aparicio Ojea, Heiko Claussen, Ines Ugalde Diaz, Martin Sehr, Eugen Solowjow, Chengtao Wen, Wei Xi Xia, Xiaowen Yu, Shashank Tamaskar
-
Publication number: 20240296662
Abstract: A computer-implemented method for building an object detection module uses mesh representations of objects belonging to specified object classes of interest to render images by a physics-based simulator. Each rendered image captures a simulated environment containing objects belonging to multiple object classes of interest placed in a bin or on a table. The rendered images are generated by randomizing a set of parameters by the simulator to render a range of simulated environments. The randomized parameters include environmental and sensor-based parameters. A label is generated for each rendered image, which includes a two-dimensional representation indicative of location and object classes of objects in that rendered image frame. Each rendered image and the respective label constitute a data sample of a synthetic training dataset. A deep learning model is trained using the synthetic training dataset to output object classes from an input image of a real-world physical environment.
Type: Application
Filed: August 6, 2021
Publication date: September 5, 2024
Applicant: Siemens Corporation
Inventors: Eugen Solowjow, Ines Ugalde Diaz, Yash Shahapurkar, Juan L. Aparicio Ojea, Heiko Claussen
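A rough sketch of the domain-randomization loop described above follows. The `render_scene` callable stands in for the physics-based simulator and is hypothetical, as are the specific randomized parameters and their ranges.

```python
# Domain-randomization sketch; `render_scene` is a hypothetical stand-in
# for a physics-based simulator that consumes CAD meshes.
import random

def sample_random_parameters():
    """Randomize environmental and sensor-based parameters per rendered frame."""
    return {
        "light_intensity": random.uniform(0.2, 1.0),
        "camera_pose_jitter": [random.gauss(0, 0.01) for _ in range(3)],
        "sensor_noise_std": random.uniform(0.0, 0.005),
        "object_count": random.randint(1, 15),
        "surface": random.choice(["bin", "table"]),
    }

def build_synthetic_dataset(meshes_by_class, n_samples, render_scene):
    dataset = []
    for _ in range(n_samples):
        params = sample_random_parameters()
        # render_scene returns the image plus 2D boxes/classes for every
        # object, which together form one labeled training sample.
        image, boxes, classes = render_scene(meshes_by_class, params)
        dataset.append({"image": image, "label": {"boxes": boxes, "classes": classes}})
    return dataset
```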
-
Publication number: 20240253234
Abstract: An autonomous system can include a depth camera configured to capture a depth image of a bin that contains a plurality of objects from a first direction, so as to define a captured image. Based on the bottom end of the bin and the captured image, the system can generate a cropped region that defines a plane along a second direction and a third direction that are both substantially perpendicular to the first direction. Based on the captured image, the system can make a determination as to whether at least one object of the plurality of objects lies outside the cropped region. Based on the determination, the system can select a final region of interest for determining grasp points on the plurality of objects.
Type: Application
Filed: December 27, 2023
Publication date: August 1, 2024
Applicant: Siemens Aktiengesellschaft
Inventors: Ajay Balasubramanian, Eugen Solowjow, Ines Ugalde Diaz, Chengtao Wen
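One way to realize the cropped-region logic might look like the following sketch, assuming the bin's footprint is available as a pixel mask and object pixels come from a separate detector; both inputs are assumptions for illustration.

```python
import numpy as np

def crop_region_from_bin(bin_mask):
    """Bounding rectangle of the bin in the image plane (the two axes
    perpendicular to the viewing direction), from a mask of the bin bottom."""
    ys, xs = np.nonzero(bin_mask)
    return xs.min(), xs.max(), ys.min(), ys.max()

def objects_outside_region(object_pixels, region):
    """object_pixels: N x 2 array of (x, y) pixels belonging to objects."""
    x0, x1, y0, y1 = region
    xs, ys = object_pixels[:, 0], object_pixels[:, 1]
    return bool(np.any((xs < x0) | (xs > x1) | (ys < y0) | (ys > y1)))

def select_roi(depth, bin_mask, object_pixels):
    region = crop_region_from_bin(bin_mask)
    # Fall back to the full captured image when objects spill past the crop,
    # so grasp points are still computed over every visible object.
    if objects_outside_region(object_pixels, region):
        return 0, depth.shape[1] - 1, 0, depth.shape[0] - 1
    return region
```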
-
Publication number: 20240238968
Abstract: An autonomous system can detect out-of-distribution (OOD) data in robotic grasping systems, based on evaluating image inputs of the robotic grasping systems. Furthermore, the system makes various decisions based on detecting the OOD data, so as to avoid inefficient or hazardous situations or other negative consequences (e.g., damage to products). For example, the system can determine whether a suction-based gripper is optimal for grasping objects in a given scene, based at least in part on determining whether an image defines OOD data.
Type: Application
Filed: December 28, 2023
Publication date: July 18, 2024
Applicant: Siemens Aktiengesellschaft
Inventors: Yash Shahapurkar, William Yamada, Eugen Solowjow, Gokul Narayanan Sathya Narayanan
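The abstract leaves the OOD detector unspecified; one common realization is a reconstruction-error test against a threshold calibrated on in-distribution images. The sketch below assumes that approach, with a hypothetical `autoencoder` callable.

```python
import numpy as np

def is_out_of_distribution(image, autoencoder, threshold):
    """Flag an input as OOD when its reconstruction error exceeds a threshold
    calibrated on in-distribution training images (one possible detector)."""
    recon = autoencoder(image)
    error = float(np.mean((image - recon) ** 2))
    return error > threshold

def choose_gripper(image, autoencoder, threshold):
    # Skip the suction gripper (defer to an alternate tool or a human) when
    # the scene looks unlike anything seen during training.
    if is_out_of_distribution(image, autoencoder, threshold):
        return "fallback"
    return "suction"
```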
-
Publication number: 20240208069
Abstract: Fully flexible kitting processes can be automated by generating pick and place motions for multi-robot, multi-gripper robotic systems.
Type: Application
Filed: May 25, 2021
Publication date: June 27, 2024
Applicant: Siemens Aktiengesellschaft
Inventors: Juan L. Aparicio Ojea, Heiko Claussen, Ines Ugalde Diaz, Gokul Narayanan Sathya Narayanan, Eugen Solowjow, Chengtao Wen, Wei Xi Xia, Yash Shahapurkar, Shashank Tamaskar
-
Publication number: 20240198515
Abstract: A covariate shift generally refers to the change of the distribution of the input data (e.g., noise distribution) between the training and inference regimes. Such covariate shifts can degrade the performance of grasping neural networks, and thus of robotic grasping operations. As described herein, an output of a grasp neural network can be transformed, so as to determine appropriate locations on a given object for a robot or autonomous machine to grasp.
Type: Application
Filed: May 25, 2021
Publication date: June 20, 2024
Applicant: Siemens Aktiengesellschaft
Inventors: Juan L. Aparicio Ojea, Heiko Claussen, Ines Ugalde Diaz, Gokul Narayanan Sathya Narayanan, Eugen Solowjow, Chengtao Wen, Wei Xi Xia, Yash Shahapurkar, Shashank Tamaskar
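The abstract does not state the exact transformation; as one illustrative possibility, the sketch below standardizes the grasp-quality map against statistics gathered in the inference regime before selecting a grasp point. The recalibration scheme itself, and all names in it, are assumptions.

```python
import numpy as np

def transform_grasp_output(quality_map, calib_mean, calib_std):
    """Illustrative recalibration of a grasp-quality map: standardize the
    network output against statistics collected under inference-time noise,
    so score rankings stay meaningful despite a shifted input distribution."""
    z = (quality_map - calib_mean) / (calib_std + 1e-8)
    return 1.0 / (1.0 + np.exp(-z))  # squash back to [0, 1]

def best_grasp_pixel(quality_map, calib_mean, calib_std):
    calibrated = transform_grasp_output(quality_map, calib_mean, calib_std)
    return np.unravel_index(np.argmax(calibrated), calibrated.shape)
```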
-
Publication number: 20240198526
Abstract: In some cases, grasp point algorithms can be implemented so as to compute grasp points on an object that enable a stable grasp. It is recognized herein, however, that in practice a robot in motion can drop the object or otherwise have grasp issues when the object is grasped at the computed stable grasp points. Path constraints, which can differ per object, are generated while generating the trajectory for a robot, so as to ensure that a grasp remains stable throughout the motion of the robot.
Type: Application
Filed: May 25, 2021
Publication date: June 20, 2024
Applicant: Siemens Aktiengesellschaft
Inventors: Juan L. Aparicio Ojea, Heiko Claussen, Ines Ugalde Diaz, Gokul Narayanan Sathya Narayanan, Eugen Solowjow, Chengtao Wen, Wei Xi Xia, Yash Shahapurkar, Shashank Tamaskar
-
Publication number: 20240198530
Abstract: In described embodiments of a method for executing autonomous bin picking, a physical environment comprising a bin containing a plurality of objects is perceived by one or more sensors. Multiple artificial intelligence (AI) modules feed from the sensors to compute grasping alternatives and, in some embodiments, detected objects of interest. Grasping alternatives and their attributes are computed based on the outputs of the AI modules in a high-level sensor fusion (HLSF) module. A multi-criteria decision making (MCDM) module is used to rank the grasping alternatives and select the one that maximizes the application utility while satisfying specified constraints.
Type: Application
Filed: June 25, 2021
Publication date: June 20, 2024
Applicant: Siemens Corporation
Inventors: Ines Ugalde Diaz, Eugen Solowjow, Juan L. Aparicio Ojea, Martin Sehr, Heiko Claussen
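A weighted-sum utility is one common MCDM scheme; the sketch below ranks fused grasping alternatives that way under hard constraints. Attribute names, weights, and the weighted-sum choice are illustrative, not the patent's specification.

```python
def rank_grasps(alternatives, weights, constraints):
    """Rank fused grasp alternatives by a weighted-sum utility, keeping only
    those that satisfy every hard constraint.

    alternatives: list of dicts mapping attribute name -> normalized score [0, 1]
    weights:      dict mapping attribute name -> importance weight
    constraints:  list of predicates, each alternative -> bool
    """
    feasible = [a for a in alternatives if all(c(a) for c in constraints)]
    scored = [(sum(weights[k] * a[k] for k in weights), a) for a in feasible]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored  # highest-utility alternative first

# Illustrative usage: favor grasp quality over cycle time, require reachability.
grasps = [
    {"quality": 0.90, "speed": 0.4, "reachable": 1.0},
    {"quality": 0.60, "speed": 0.9, "reachable": 1.0},
    {"quality": 0.95, "speed": 0.8, "reachable": 0.0},  # filtered out
]
ranked = rank_grasps(grasps, {"quality": 0.7, "speed": 0.3},
                     [lambda a: a["reachable"] > 0.5])
print(ranked[0][1])  # -> the first grasp (utility 0.75 vs. 0.69)
```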
-
Publication number: 20240066723
Abstract: It is recognized herein that current approaches to robotic picking lack efficiency and capabilities. In particular, current approaches often do not properly or efficiently estimate the pose of bins, due to various technical challenges in doing so, which can impact grasp computations and overall performance of a given robot. The pose of the bin can be determined or estimated based on depth images. Such bin pose estimation can be performed during runtime of a given robot, such that grasping can be enhanced due to the bin pose estimations.
Type: Application
Filed: August 7, 2023
Publication date: February 29, 2024
Inventors: Eduardo Moura Cirilo Rocha, Husnu Melih Erdogan, Eugen Solowjow, Ines Ugalde Diaz, Yash Shahapurkar, Nan Tian, Paul Andreas Batsii, Christopher Schuette
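Bin pose estimation from depth images is often built on plane fitting over back-projected points; the sketch below shows that style of computation under a pinhole camera model. The abstract does not commit to this particular method, so treat it as an assumption.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image into camera-frame 3D points (pinhole model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def estimate_bin_plane(points):
    """Least-squares plane fit (via SVD) to points sampled from the bin
    bottom; the normal and centroid give the bin's tilt and height offset."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]          # direction of least variance
    if normal[2] < 0:        # orient the normal toward the camera
        normal = -normal
    return normal, centroid
```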
-
Patent number: 11883947
Abstract: A system controller for visual servoing includes a technology module with dedicated hardware acceleration for deep neural networks. The module retrieves a desired configuration of a workpiece object being manipulated by a robotic device and receives visual feedback information, including the current configuration of the workpiece object, from one or more sensors on or near the robotic device. The hardware accelerator executes a machine learning model trained to process the visual feedback information and determine a configuration error based on a difference between the current configuration of the workpiece object and the desired configuration of the workpiece object. A servo control module adapts a servo control signal to the robotic device for manipulation of the workpiece object in response to the configuration error.
Type: Grant
Filed: September 30, 2019
Date of Patent: January 30, 2024
Assignee: SIEMENS AKTIENGESELLSCHAFT
Inventors: Heiko Claussen, Martin Sehr, Eugen Solowjow, Chengtao Wen, Juan L. Aparicio Ojea
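The control structure described (perceive the workpiece configuration, compute a configuration error, adapt a servo signal) can be sketched as a simple proportional loop. The `perceive` and `actuate` callables are hypothetical stand-ins for the accelerated DNN inference and the robot interface, and configurations are assumed to be NumPy vectors.

```python
import numpy as np

def servo_step(current_config, desired_config, gain=0.5):
    """One proportional visual-servoing update: the configuration error is
    the difference between desired and observed workpiece configurations,
    and the control signal moves the robot to reduce that error."""
    error = desired_config - current_config
    command = gain * error
    return command, error

def visual_servo(perceive, actuate, desired_config, tol=1e-3, max_iters=200):
    """Closed loop: perceive -> error -> command, until the error is small."""
    for _ in range(max_iters):
        current = perceive()                 # accelerated DNN on visual feedback
        command, error = servo_step(current, desired_config)
        if np.linalg.norm(error) < tol:
            break
        actuate(command)                     # adapted servo control signal
```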
-
Publication number: 20240012400
Abstract: A computer-implemented method for failure classification of a surface treatment process includes receiving one or more process parameters that influence one or more failure modes of the surface treatment process and receiving sensor data pertaining to measurement of one or more process states pertaining to the surface treatment process. The method includes processing the received one or more process parameters and the sensor data by a machine learning model deployed on an edge computing device controlling the surface treatment process to generate an output indicating, in real-time, a probability of process failure via the one or more failure modes. The machine learning model is trained on a supervised learning regime based on process data and failure classification labels obtained from physics simulations of the surface treatment process in combination with historical data pertaining to the surface treatment process.
Type: Application
Filed: August 28, 2020
Publication date: January 11, 2024
Applicant: Siemens Corporation
Inventors: Shashank Tamaskar, Martin Sehr, Eugen Solowjow, Wei Xi Xia, Juan L. Aparicio Ojea, Ines Ugalde Diaz
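A minimal sketch of the supervised training regime, combining simulated and historical samples, is shown below using scikit-learn. The feature layout and the gradient-boosting model choice are assumptions; the abstract does not name a model family.

```python
# Illustrative training of a failure classifier on simulated + historical data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def train_failure_model(sim_features, sim_labels, hist_features, hist_labels):
    # Stack physics-simulation samples with historical process data; each row
    # combines process parameters with sensor-derived features (assumption).
    X = np.vstack([sim_features, hist_features])
    y = np.concatenate([sim_labels, hist_labels])   # failure-mode labels
    model = GradientBoostingClassifier()
    model.fit(X, y)
    return model

def failure_probabilities(model, process_params, sensor_window):
    """Edge-side inference: probability of each failure mode for the current
    parameters plus a summary of the live sensor stream."""
    features = np.concatenate([process_params, sensor_window.mean(axis=0)])
    return model.predict_proba(features.reshape(1, -1))[0]
```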
-
Publication number: 20230359864
Abstract: An edge device can be configured to perform industrial control operations within a production environment that defines a physical location. The edge device can include a plurality of neural network layers that define a deep neural network. The edge device can be configured to obtain data from one or more sensors at the physical location defined by the production environment. The edge device can be further configured to perform one or more matrix operations on the data using the plurality of neural network layers so as to generate a large scale matrix computation at the physical location defined by the production environment. In some examples, the edge device can send the large scale matrix computation to a digital twin simulation model associated with the production environment, so as to update the digital twin simulation model in real time.
Type: Application
Filed: August 31, 2020
Publication date: November 9, 2023
Applicant: Siemens Corporation
Inventors: Martin Sehr, Eugen Solowjow, Wei Xi Xia, Shashank Tamaskar, Ines Ugalde Diaz, Heiko Claussen, Juan L. Aparicio Ojea
-
Publication number: 20230330858
Abstract: In an example aspect, a first object (e.g., an electronic component) is inserted by a robot into a second object (e.g., a PCB). An autonomous system can capture a first image of the first object within a physical environment. The first object can define a mounting interface configured to insert into the second object. Based on the first image, a robot can grasp the first object within the physical environment. While the robot grasps the first object, the system can capture a second image of the first object. The second image can include the mounting interface of the first object. Based on the second image of the first object, the system can determine a grasp offset associated with the first object. The grasp offset can indicate movement associated with the robot grasping the first object within the physical environment. The system can also capture an image of the second object. Based on the grasp offset and the image of the second object, the robot can insert the first object into the second object.
Type: Application
Filed: September 9, 2021
Publication date: October 19, 2023
Applicant: Siemens Corporation
Inventors: Eugen Solowjow, Juan L. Aparicio Ojea, Avinash Kumar, Matthias Loskyll, Gerrit Schoettler
-
Publication number: 20230316115
Abstract: A computer-implemented method includes operating a controllable physical device to perform a task. The method also includes running forward simulations of the task by a physics engine based on one or more physics parameters. The physics engine communicates with a parameter data layer where each of the one or more physics parameters is modeled with a probability distribution. For each forward simulation run, a tuple of parameter values is sampled from the probability distribution of the one or more physics parameters and fed to the physics engine. The method includes obtaining an observation pertaining to the task from the physical environment and a corresponding forward simulation outcome associated with each sampled tuple of parameter values. The method then includes updating the probability distribution of the one or more physics parameters in the parameter data layer based on the observation from the physical environment and the corresponding forward simulation outcomes.
Type: Application
Filed: August 28, 2020
Publication date: October 5, 2023
Applicant: Siemens Aktiengesellschaft
Inventors: Juan L. Aparicio Ojea, Heiko Claussen, Ines Ugalde Diaz, Martin Sehr, Eugen Solowjow, Chengtao Wen, Wei Xi Xia, Xiaowen Yu, Shashank Tamaskar
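The sample-simulate-compare-update cycle described above resembles likelihood-weighted (approximate Bayesian) inference over a particle set; the sketch below assumes that representation for the parameter distributions. The `sample_prior`, `simulate`, and `observe` callables are hypothetical.

```python
import numpy as np

def update_parameter_distribution(samples, sim_outcomes, observation, sigma=0.05):
    """Likelihood-weighted update: parameter tuples whose simulated outcome
    is close to the real observation receive higher weight, and the updated
    distribution is represented by resampling according to those weights."""
    errors = np.linalg.norm(sim_outcomes - observation, axis=1)
    weights = np.exp(-0.5 * (errors / sigma) ** 2)
    weights /= weights.sum()
    idx = np.random.choice(len(samples), size=len(samples), p=weights)
    return samples[idx]   # resampled tuples approximate the posterior

def calibrate(sample_prior, simulate, observe, rounds=5, n=256):
    """One calibration loop: sample tuples, simulate, observe, update."""
    samples = sample_prior(n)   # n x d array of parameter tuples
    for _ in range(rounds):
        outcomes = np.stack([simulate(s) for s in samples])
        samples = update_parameter_distribution(samples, outcomes, observe())
    return samples
```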
-
Publication number: 20230305574
Abstract: It is recognized herein that robots or autonomous systems can lose time when computing grasp scores for empty bins. Further, when grasps are attempted on empty bins, for instance due to the related grasp score computations, the robot can lose additional time through being used unnecessarily to attempt the grasp. Such usage can wear on the robot, or damage the robot, in some cases. An autonomous system can classify or determine whether a bin contains an object or is empty, for example, such that a grasp computation is not performed when the bin is empty. In some examples, a system classifies a given bin at runtime before each grasp computation is performed. Thus, systems described herein can avoid performing unnecessary grasp computations, thereby conserving processing time and overhead and addressing other technical problems.
Type: Application
Filed: March 10, 2023
Publication date: September 28, 2023
Applicant: Siemens Aktiengesellschaft
Inventors: Ines Ugalde Diaz, Eugen Solowjow, Yash Shahapurkar, Husnu Melih Erdogan, Eduardo Moura Cirilo Rocha
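The runtime gate described (classify the bin before each grasp computation) reduces to a short control-flow sketch; every callable here is a hypothetical stand-in.

```python
def pick_cycle(capture_image, classify_bin_empty, compute_grasp, execute_grasp):
    """Runtime gate: classify the bin before every grasp computation so the
    (expensive) grasp scoring and the robot motion are skipped for empty bins."""
    image = capture_image()
    if classify_bin_empty(image):   # e.g., a lightweight classifier or heuristic
        return None                 # signal: nothing to pick, no grasp computed
    grasp = compute_grasp(image)    # only runs when the bin is non-empty
    return execute_grasp(grasp)
```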
-
Publication number: 20230228688
Abstract: Robots might interact with planar objects (e.g., garments) for process automation, quality control, to perform sewing operations, or the like. It is recognized herein that robots interacting with such planar objects can pose particular problems, for instance problems related to detecting the planar object and estimating the pose of the detected planar object. A system can be configured to detect or segment planar objects, such as garments. The system can include a three-dimensional (3D) sensor positioned to detect a planar object along a transverse direction. The system can further include a first surface that supports the planar object. The first surface can be positioned such that the planar object is disposed between the first surface and the 3D sensor along the transverse direction. In various examples, the 3D sensor is configured to detect the planar object without detecting the first surface.
Type: Application
Filed: August 10, 2022
Publication date: July 20, 2023
Inventors: Eduardo Moura Cirilo Rocha, Shashank Tamaskar, Wei Xi Xia, Eugen Solowjow, Nan Tian, Gokul Narayanan Sathya Narayanan
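The abstract does not disclose how the sensor avoids detecting the supporting surface; the sketch below substitutes a simple background-subtraction variant, segmenting pixels that sit measurably above a calibrated surface depth, plus a 2D pose estimate. The thresholds and the technique itself are assumptions, not the patent's sensor arrangement.

```python
import numpy as np

def segment_planar_object(depth, surface_depth, min_height=0.002):
    """Segment a thin planar object (e.g., a garment) lying on a supporting
    surface: pixels measurably closer to the sensor than the calibrated
    surface depth belong to the object; the surface itself is suppressed."""
    height = surface_depth - depth   # positive where something sits on top
    return height > min_height       # 2 mm threshold is illustrative

def object_pose_2d(mask):
    """Centroid and principal axis of the segmented object in the image plane."""
    ys, xs = np.nonzero(mask)
    centroid = np.array([xs.mean(), ys.mean()])
    cov = np.cov(np.stack([xs, ys]))
    eigvals, eigvecs = np.linalg.eigh(cov)
    principal_axis = eigvecs[:, np.argmax(eigvals)]
    return centroid, principal_axis
```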
-
Publication number: 20230214665
Abstract: Distributed neural network boosting is performed by a neural network system through operating at least one processor. A method comprises providing a boosting algorithm that distributes a model among a plurality of processing units, each being one of multiple weak learners that can perform computations independently of one another yet process data concurrently. The method further comprises enabling distributed ensemble learning, which allows a programmable logic controller (PLC) to use more than one processing unit of the plurality of processing units to scale an application, and training the multiple weak learners using the boosting algorithm. The multiple weak learners are machine learning models that do not capture the entire data distribution and are purposefully designed to predict with lower accuracy. The method further comprises using the multiple weak learners to vote for a final hypothesis based on a feed-forward computation of the neural networks.
Type: Application
Filed: April 17, 2020
Publication date: July 6, 2023
Inventors: Wei Xi Xia, Xiaowen Yu, Shashank Tamaskar, Juan L. Aparicio Ojea, Heiko Claussen, Ines Ugalde Diaz, Martin Sehr, Eugen Solowjow, Chengtao Wen
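An AdaBoost-style loop is one standard way to train and combine deliberately weak learners by weighted vote, as the abstract describes. The sketch below assumes that scheme, with a hypothetical `train_weak` factory returning a callable model whose outputs are labels in {-1, +1}.

```python
import numpy as np

def train_boosted_ensemble(X, y, train_weak, n_learners=5):
    """AdaBoost-style sketch: each weak learner (deliberately low-capacity,
    e.g., a tiny neural network on one processing unit) is trained on
    reweighted data; misclassified samples gain weight for the next learner."""
    w = np.full(len(X), 1.0 / len(X))
    learners, alphas = [], []
    for _ in range(n_learners):
        model = train_weak(X, y, sample_weight=w)
        pred = model(X)                                   # labels in {-1, +1}
        err = np.clip(np.sum(w * (pred != y)), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)             # learner's vote weight
        w = w * np.exp(-alpha * y * pred)                 # reweight samples
        w /= w.sum()
        learners.append(model)
        alphas.append(alpha)
    return learners, alphas

def vote(learners, alphas, x):
    """Final hypothesis: weighted vote over the weak learners' feed-forward outputs."""
    return np.sign(sum(a * m(x) for m, a in zip(learners, alphas)))
```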
-
Patent number: 11667034
Abstract: Computerized system and method are provided. A robotic manipulator (12) is arranged to grasp objects (20). A gripper (16) is attached to robotic manipulator (12), which includes an imaging sensor (14). During motion of robotic manipulator (12), imaging sensor (14) is arranged to capture images providing different views of objects in the environment of the robotic manipulator. A processor (18) is configured to find, based on the different views, candidate grasp locations and trajectories to perform a grasp of a respective object in the environment of the robotic manipulator. Processor (18) is configured to calculate respective values indicative of grasp quality for the candidate grasp locations, and, based on the calculated respective values indicative of grasp quality for the candidate grasp locations, processor (18) is configured to select a grasp location likely to result in a successful grasp of the respective object.
Type: Grant
Filed: February 12, 2020
Date of Patent: June 6, 2023
Assignee: Siemens Aktiengesellschaft
Inventors: Heiko Claussen, Martin Sehr, Eugen Solowjow, Chengtao Wen, Juan L. Aparicio Ojea
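The candidate-collection and selection logic can be sketched as below; `propose_grasps` and `quality_fn` are hypothetical stand-ins for the view-based grasp proposal step and the learned grasp-quality estimator.

```python
import numpy as np

def collect_candidates(views, propose_grasps):
    """Accumulate candidate grasp locations/trajectories over the different
    views captured while the manipulator moves."""
    candidates = []
    for view in views:
        candidates.extend(propose_grasps(view))
    return candidates

def select_grasp(candidates, quality_fn):
    """Score every candidate and pick the location most likely to yield a
    successful grasp of the object."""
    scores = np.array([quality_fn(g) for g in candidates])
    best = int(np.argmax(scores))
    return candidates[best], float(scores[best])
```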
-
Publication number: 20230158679
Abstract: Autonomous operations, such as robotic grasping and manipulation, in unknown or dynamic environments present various technical challenges. For example, three-dimensional (3D) reconstruction of a given object often focuses on the geometry of the object without considering how the 3D model of the object is used in solving or performing a robot operation task. As described herein, in accordance with various embodiments, models are generated of objects and/or physical environments based on tasks that autonomous machines perform with the objects or within the physical environments. Thus, in some cases, a given object or environment may be modeled differently depending on the task that is performed using the model. Further, portions of an object or environment may be modeled with varying resolutions depending on the task associated with the model.
Type: Application
Filed: April 6, 2020
Publication date: May 25, 2023
Inventors: Chengtao Wen, Heiko Claussen, Xiaowen Yu, Eugen Solowjow, Richard Gary McDaniel, Swen Elpelt, Juan L. Aparicio Ojea