Patents by Inventor Yu-Wei Chao
Yu-Wei Chao has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12230595
Abstract: A method of forming an integrated circuit structure includes forming a patterned passivation layer over a metal pad, with a top surface of the metal pad revealed through a first opening in the patterned passivation layer, and applying a polymer layer over the patterned passivation layer. The polymer layer is substantially free from N-Methyl-2-pyrrolidone (NMP) and comprises an aliphatic amide as a solvent. The method further includes performing a light-exposure process on the polymer layer, performing a development process on the polymer layer to form a second opening in the polymer layer, wherein the top surface of the metal pad is revealed through the second opening, baking the polymer layer, and forming a conductive region having a via portion extending into the second opening.
Type: Grant
Filed: May 28, 2021
Date of Patent: February 18, 2025
Assignee: TAIWAN SEMICONDUCTOR MANUFACTURING CO., LTD.
Inventors: Ming-Da Cheng, Yung-Ching Chao, Chun Kai Tzeng, Cheng Jen Lin, Chin Wei Kang, Yu-Feng Chen, Mirng-Ji Lii
-
Publication number: 20240371082
Abstract: In various examples, an autonomous system may use a multi-stage process to solve three-dimensional (3D) manipulation tasks from a minimal number of demonstrations and predict key-frame poses with higher precision. In a first stage of the process, for example, the disclosed systems and methods may predict an area of interest in an environment using a virtual environment. The area of interest may correspond to a predicted location of an object in the environment, such as an object that an autonomous machine is instructed to manipulate. In a second stage, the systems may magnify the area of interest and render images of the virtual environment using a 3D representation of the environment that magnifies the area of interest. The systems may then use the rendered images to make predictions related to key-frame poses associated with a future (e.g., next) state of the autonomous machine.
Type: Application
Filed: July 12, 2024
Publication date: November 7, 2024
Inventors: Ankit Goyal, Valts Blukis, Jie Xu, Yijie Guo, Yu-Wei Chao, Dieter Fox
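The abstract describes a coarse-to-fine pipeline: predict a region of interest first, then re-render zoomed in on that region before predicting key-frame poses. Below is a minimal sketch of that control flow, assuming hypothetical `predict_roi`, `predict_keyframe_pose`, and `render_views` stand-ins; none of these names or the toy renderer come from the publication.

```python
# Hypothetical coarse-to-fine key-frame pose prediction, sketched with NumPy.
# predict_roi, render_views, and predict_keyframe_pose stand in for learned
# models and a renderer that the publication does not name.
import numpy as np

def render_views(point_cloud, center, zoom=1.0, size=64):
    """Toy orthographic 'renderer': project points near `center` onto XY."""
    pts = (point_cloud - center) * zoom
    img = np.zeros((size, size))
    uv = ((pts[:, :2] + 1.0) * 0.5 * (size - 1)).astype(int)
    mask = (uv >= 0).all(axis=1) & (uv < size).all(axis=1)
    img[uv[mask, 1], uv[mask, 0]] = 1.0
    return img

def predict_roi(views):
    """Stage 1 (stand-in): predict a coarse 3D area of interest."""
    return np.array([0.2, 0.1, 0.4])  # a learned model would output this

def predict_keyframe_pose(views):
    """Stage 2 (stand-in): predict the next key-frame gripper pose."""
    return {"position": np.array([0.21, 0.12, 0.38]),
            "quaternion": np.array([0.0, 0.0, 0.0, 1.0])}

point_cloud = np.random.rand(2048, 3)  # 3D representation of the scene

# Stage 1: render the whole scene and predict an area of interest.
coarse_views = [render_views(point_cloud, center=np.zeros(3))]
roi_center = predict_roi(coarse_views)

# Stage 2: re-render magnified around the area of interest, then predict
# the key-frame pose from the zoomed-in views.
fine_views = [render_views(point_cloud, center=roi_center, zoom=4.0)]
pose = predict_keyframe_pose(fine_views)
print(pose)
```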
-
Publication number: 20240273810
Abstract: In various examples, a machine may generate, using sensor data capturing one or more views of an environment, a virtual environment including a 3D representation of the environment. The machine may render, using one or more virtual sensors in the virtual environment, one or more images of the 3D representation of the environment. The machine may apply the one or more images to one or more machine learning models (MLMs) trained to generate one or more predictions corresponding to the environment. The machine may perform one or more control operations based at least on the one or more predictions generated using the one or more MLMs.
Type: Application
Filed: February 1, 2024
Publication date: August 15, 2024
Inventors: Ankit Goyal, Jie Xu, Yijie Guo, Valts Blukis, Yu-Wei Chao, Dieter Fox
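The loop described here is reconstruct, re-render from virtual sensors, predict, act. A minimal sketch follows, assuming a hypothetical point-cloud fusion step, a toy virtual camera, and an `mlm_predict` stand-in; the publication specifies none of these.

```python
# A minimal sketch of the render-then-predict loop in the abstract, in NumPy.
# build_representation, the virtual "camera", and mlm_predict are hypothetical
# stand-ins for the reconstruction, virtual sensors, and trained MLM(s).
import numpy as np

def build_representation(rgbd_frames):
    """Fuse captured frames into a point cloud (stand-in reconstruction)."""
    return np.concatenate([f["points"] for f in rgbd_frames], axis=0)

def virtual_camera(points, yaw, size=64):
    """Render the point cloud from a virtual viewpoint rotated by `yaw`."""
    c, s = np.cos(yaw), np.sin(yaw)
    rot = points @ np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    img = np.zeros((size, size))
    uv = ((rot[:, :2] % 1.0) * (size - 1)).astype(int)
    img[uv[:, 1], uv[:, 0]] = 1.0
    return img

def mlm_predict(views):
    """Stand-in for the trained machine learning model(s)."""
    return {"gripper_target": np.array([0.3, 0.2, 0.5])}

frames = [{"points": np.random.rand(1024, 3)} for _ in range(4)]
cloud = build_representation(frames)

# Render several virtual views of the 3D representation ...
views = [virtual_camera(cloud, yaw) for yaw in (0.0, np.pi / 2, np.pi)]
# ... apply them to the model(s), and use the prediction for control.
prediction = mlm_predict(views)
print("move gripper toward", prediction["gripper_target"])
```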
-
Publication number: 20240261971
Abstract: Apparatuses, systems, and techniques to generate control commands. In at least one embodiment, control commands are generated based on, for example, one or more images depicting a hand.
Type: Application
Filed: August 9, 2023
Publication date: August 8, 2024
Inventors: Yuzhe Qin, Wei Yang, Yu-Wei Chao, Dieter Fox
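As a rough illustration of mapping a hand image to a control command, here is a hedged sketch. `detect_hand_keypoints` is a stand-in for whatever hand-pose estimator an embodiment might use, and the keypoint indices follow the common 21-keypoint convention; the specific mapping below is illustrative only.

```python
# Hedged sketch: turning detected hand keypoints into a gripper command.
import numpy as np

def detect_hand_keypoints(image):
    """Stand-in hand-pose estimator: returns 21 3D keypoints
    (indices 0/4/8 = wrist/thumb tip/index tip by common convention)."""
    return np.random.rand(21, 3)

def command_from_hand(keypoints):
    """Map a hand pose to a simple parallel-jaw gripper command."""
    wrist, thumb_tip, index_tip = keypoints[0], keypoints[4], keypoints[8]
    aperture = np.linalg.norm(thumb_tip - index_tip)  # pinch distance
    return {
        "target_position": wrist,  # follow the wrist
        "gripper_width": float(np.clip(aperture, 0.0, 0.08)),
    }

image = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder camera frame
cmd = command_from_hand(detect_hand_keypoints(image))
print(cmd)
```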
-
Publication number: 20240157557
Abstract: Apparatuses, systems, and techniques to control a real-world and/or virtual device (e.g., a robot). In at least one embodiment, the device is controlled based, at least in part, on, for example, one or more neural networks. Parameter values for the neural network(s) may be obtained by training the neural network(s) to control movement of a first agent with respect to at least one first target while avoiding collision with at least one stationary first holder of the at least one first target, and updating the parameter values by training the neural network(s) to control movement of a second agent with respect to at least one second target while avoiding collision with at least one non-stationary second holder of the at least one second target.
Type: Application
Filed: March 23, 2023
Publication date: May 16, 2024
Inventors: Sammy Joe Christen, Wei Yang, Claudia Perez D'Arpino, Dieter Fox, Yu-Wei Chao
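The recipe in the abstract is a two-phase curriculum: train against a stationary holder first, then continue updating the same parameters against a moving one. Here is a toy sketch of that control flow; the random-search update and the scalar "rollout return" are placeholders, not the publication's training method.

```python
# Illustrative two-phase training loop: phase 1 with a stationary holder,
# phase 2 continues updating the same parameters with a moving holder.
import numpy as np

rng = np.random.default_rng(0)
params = rng.normal(size=8)  # policy parameters shared across both phases

def rollout_return(params, holder_moves):
    """Stand-in for simulating a grasp attempt; rewards reaching the target
    while avoiding collision with the holder (e.g., a hand)."""
    noise = rng.normal() * (2.0 if holder_moves else 1.0)
    return -np.sum(params ** 2) + noise

def train(params, holder_moves, steps, lr=0.05):
    for _ in range(steps):  # simple random-search update as a placeholder
        candidate = params + rng.normal(size=params.shape) * lr
        if (rollout_return(candidate, holder_moves)
                > rollout_return(params, holder_moves)):
            params = candidate
    return params

params = train(params, holder_moves=False, steps=200)  # phase 1: stationary
params = train(params, holder_moves=True, steps=200)   # phase 2: moving
print("trained parameter norm:", np.linalg.norm(params))
```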
-
Patent number: 11893468
Abstract: Apparatuses, systems, and techniques to identify a goal of a demonstration. In at least one embodiment, video data of a demonstration is analyzed to identify a goal. Object trajectories identified in the video data are analyzed with respect to a task predicate satisfied by a respective object trajectory, and with respect to a motion predicate. Analysis of the trajectory with respect to the motion predicate is used to assess intentionality of a trajectory with respect to the goal.
Type: Grant
Filed: July 16, 2020
Date of Patent: February 6, 2024
Assignee: NVIDIA Corporation
Inventors: Yu-Wei Chao, De-An Huang, Christopher Jason Paxton, Animesh Garg, Dieter Fox
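To make the predicate language concrete, here is a small sketch of scoring object trajectories with a task predicate and a motion predicate. The specific definitions (a goal-region membership test and a path-straightness ratio as the intentionality measure) are illustrative assumptions, not the patent's.

```python
# Sketch of predicate-based goal inference over object trajectories.
import numpy as np

def task_predicate(traj, goal_center, radius=0.1):
    """True if the object's final position lies inside a goal region."""
    return np.linalg.norm(traj[-1] - goal_center) < radius

def motion_intentionality(traj):
    """Ratio of net displacement to path length: ~1 for a deliberate move,
    near 0 for incidental jitter."""
    net = np.linalg.norm(traj[-1] - traj[0])
    path = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
    return net / path if path > 1e-9 else 0.0

goal = np.array([1.0, 0.0, 0.0])
trajectories = {
    "cup": np.linspace([0, 0, 0], [1, 0, 0], 20),             # moved to goal
    "fork": np.zeros((20, 3)) + np.random.rand(20, 3) * 0.01,  # barely moved
}
for name, traj in trajectories.items():
    if task_predicate(traj, goal) and motion_intentionality(traj) > 0.5:
        print(f"inferred goal: move the {name} to {goal}")
```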
-
Publication number: 20230294276
Abstract: Approaches presented herein provide for simulation of human motion for human-robot interactions, such as may involve a handover of an object. Motion capture can be performed for a hand grasping and moving an object to a location and orientation appropriate for a handover, without a need for a robot to be present or an actual handover to occur. This motion data can be used to separately model the hand and the object for use in a handover simulation, where a component such as a physics engine may be used to ensure realistic modeling of the motion or behavior. During a simulation, a robot control model or algorithm can predict an optimal location and orientation to grasp an object, and an optimal path to move to that location and orientation, using a control model or algorithm trained, based at least in part, using the motion models for the hand and object.
Type: Application
Filed: December 30, 2022
Publication date: September 21, 2023
Inventors: Yu-Wei Chao, Yu Xiang, Wei Yang, Dieter Fox, Chris Paxton, Balakumar Sundaralingam, Maya Cakmak
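A bare-bones sketch of the replay idea follows: captured hand and object trajectories are modeled separately and stepped through a simulator. The `Sim` class is a toy stand-in for a physics engine; the abstract does not describe any particular engine or API.

```python
# Minimal sketch of replaying captured hand/object motion inside a simulator.
import numpy as np

class Sim:
    """Toy kinematic 'physics engine' that tracks named body poses."""
    def __init__(self):
        self.poses = {}
    def set_pose(self, body, position, quaternion):
        self.poses[body] = (np.asarray(position), np.asarray(quaternion))
    def step(self):
        pass  # a real engine would resolve contacts and dynamics here

# Motion-capture frames: separate trajectories for the hand and the object.
T = 50
hand_traj = np.linspace([0.0, 0.0, 0.2], [0.4, 0.0, 0.3], T)
object_traj = hand_traj + np.array([0.0, 0.0, 0.05])  # object held in hand
identity_quat = np.array([0.0, 0.0, 0.0, 1.0])

sim = Sim()
for t in range(T):
    sim.set_pose("hand", hand_traj[t], identity_quat)
    sim.set_pose("object", object_traj[t], identity_quat)
    sim.step()  # a robot control model would observe the sim state here
print("final object position:", sim.poses["object"][0])
```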
-
Publication number: 20230294277
Abstract: Approaches presented herein provide for predictive control of a robot or automated assembly in performing a specific task. A task to be performed may depend on the location and orientation of the robot performing that task. A predictive control system can determine a state of a physical environment at each of a series of time steps, and can select an appropriate location and orientation at each of those time steps. At individual time steps, an optimization process can determine a sequence of future motions or accelerations to be taken that comply with one or more constraints on that motion. For example, at individual time steps, a respective action in the sequence may be performed, then another motion sequence predicted for a next time step, which can help drive robot motion based upon predicted future motion and allow for quick reactions.
Type: Application
Filed: June 30, 2022
Publication date: September 21, 2023
Inventors: Wei Yang, Balakumar Sundaralingam, Christopher Jason Paxton, Maya Cakmak, Yu-Wei Chao, Dieter Fox, Iretiayo Akinola
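This is the classic receding-horizon pattern: optimize a short sequence of future accelerations under constraints, execute only the first action, then re-plan from the new state. The sketch below uses random shooting as a stand-in for the unnamed optimizer and a 1D double integrator for brevity; both are assumptions for illustration.

```python
# Receding-horizon sketch: optimize a constrained acceleration sequence at
# each time step, execute the first action, then re-plan.
import numpy as np

rng = np.random.default_rng(0)
A_MAX, H, DT = 1.0, 10, 0.1  # acceleration bound, horizon, time step

def rollout_cost(pos, vel, accs, target):
    for a in accs:  # simulate the 1D double integrator forward
        vel += a * DT
        pos += vel * DT
    return (pos - target) ** 2 + vel ** 2

def plan(pos, vel, target, samples=256):
    """Random shooting: sample bounded sequences, keep the cheapest."""
    best_cost, best_seq = np.inf, None
    for _ in range(samples):
        accs = rng.uniform(-A_MAX, A_MAX, size=H)  # satisfies the constraint
        cost = rollout_cost(pos, vel, accs, target)
        if cost < best_cost:
            best_cost, best_seq = cost, accs
    return best_seq

pos, vel, target = 0.0, 0.0, 1.0
for step in range(40):
    accs = plan(pos, vel, target)
    vel += accs[0] * DT  # execute only the first action ...
    pos += vel * DT
    # ... then loop back and re-plan from the newly observed state
print(f"final position {pos:.3f}, velocity {vel:.3f}")
```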
-
Publication number: 20230234233
Abstract: Apparatuses, systems, and techniques to place one or more objects in a location and orientation. In at least one embodiment, one or more circuits are to use one or more neural networks to cause one or more autonomous devices to place one or more objects in a location and orientation based, at least in part, on one or more images of the location and orientation.
Type: Application
Filed: January 26, 2022
Publication date: July 27, 2023
Inventors: Ankit Goyal, Arsalan Mousavian, Christopher Jason Paxton, Yu-Wei Chao, Dieter Fox
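The abstract only commits to "images in, placement pose out." A hedged sketch of one such mapping: a small network regressing a planar placement pose (x, y, yaw) from an image of the target location. The architecture and pose parameterization are illustrative stand-ins, not those disclosed in the application.

```python
# Hedged sketch: a small network mapping an image of the target location
# and orientation to a placement pose (x, y, yaw).
import torch
import torch.nn as nn

class PlacementNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, 3)  # (x, y, yaw)

    def forward(self, image):
        return self.head(self.backbone(image))

net = PlacementNet()
image = torch.randn(1, 3, 224, 224)  # image of the placement location
x, y, yaw = net(image)[0]
print(f"place object at ({x:.2f}, {y:.2f}) with yaw {yaw:.2f} rad")
```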
-
Publication number: 20230202031
Abstract: A robotic control system directs a robot to take an object from a human grasp by obtaining an image of a human hand holding an object, estimating the pose of the human hand and the object, and determining a grasp pose for the robot that will not interfere with the human hand. In at least one example, a depth camera is used to obtain a point cloud of the human hand holding the object. The point cloud is provided to a deep network that is trained to generate a grasp pose for a robotic gripper that can take the object from the human's hand without pinching or touching the human's fingers.
Type: Application
Filed: March 1, 2023
Publication date: June 29, 2023
Inventors: Wei Yang, Christopher Jason Paxton, Yu-Wei Chao, Dieter Fox
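The key constraint in this family of filings (see also 11597078 and 20220032454 below, which share this abstract) is that the grasp must stay clear of the human's fingers. Here is an illustrative geometric sketch of that clearance check; the hand/object segmentation and the candidate generator are stand-ins for the patent's trained deep network.

```python
# Illustrative clearance check: given points segmented into hand vs. object,
# keep only grasp candidates that stay clear of the hand.
import numpy as np

rng = np.random.default_rng(1)
hand_points = rng.normal([0.0, 0.0, 0.0], 0.03, size=(500, 3))
object_points = rng.normal([0.10, 0.0, 0.0], 0.03, size=(500, 3))

def candidate_grasps(obj_pts, n=50):
    """Stand-in generator: sample grasp centers near the object surface."""
    return obj_pts[rng.choice(len(obj_pts), size=n)]

def clear_of_hand(grasp, hand_pts, clearance=0.05):
    """Reject grasps whose center comes within `clearance` of the hand."""
    return np.min(np.linalg.norm(hand_pts - grasp, axis=1)) > clearance

grasps = [g for g in candidate_grasps(object_points)
          if clear_of_hand(g, hand_points)]
print(f"{len(grasps)} of 50 candidates keep a safe distance from the hand")
```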
-
Publication number: 20230145208
Abstract: Apparatuses, systems, and techniques to train a machine learning model. In at least one embodiment, a first machine learning model is trained to infer a concept based on first information, training data is labeled using the first machine learning model, and a second machine learning model is trained to infer the concept using the labeled training data.
Type: Application
Filed: November 7, 2022
Publication date: May 11, 2023
Inventors: Andreea Bobu, Balakumar Sundaralingam, Christopher Jason Paxton, Maya Cakmak, Wei Yang, Yu-Wei Chao, Dieter Fox
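The abstract's recipe reads as pseudo-labeling: train a first model, label data with it, train a second model on those labels. A minimal sketch follows, with plain scikit-learn classifiers standing in for the unspecified models and synthetic data standing in for the "first information."

```python
# Minimal pseudo-labeling sketch: teacher labels data, student trains on it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# A small labeled set ("first information") and a larger unlabeled pool.
X_labeled = rng.normal(size=(40, 2))
y_labeled = (X_labeled[:, 0] > 0).astype(int)
X_unlabeled = rng.normal(size=(1000, 2))

# 1) Train the first model to infer the concept from the labeled data.
teacher = LogisticRegression().fit(X_labeled, y_labeled)

# 2) Use it to label the training data.
pseudo_labels = teacher.predict(X_unlabeled)

# 3) Train the second model on the pseudo-labeled training data.
student = LogisticRegression().fit(X_unlabeled, pseudo_labels)
print("student accuracy vs. true concept:",
      (student.predict(X_unlabeled) == (X_unlabeled[:, 0] > 0)).mean())
```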
-
Patent number: 11597078
Abstract: A robotic control system directs a robot to take an object from a human grasp by obtaining an image of a human hand holding an object, estimating the pose of the human hand and the object, and determining a grasp pose for the robot that will not interfere with the human hand. In at least one example, a depth camera is used to obtain a point cloud of the human hand holding the object. The point cloud is provided to a deep network that is trained to generate a grasp pose for a robotic gripper that can take the object from the human's hand without pinching or touching the human's fingers.
Type: Grant
Filed: July 28, 2020
Date of Patent: March 7, 2023
Assignee: NVIDIA Corporation
Inventors: Wei Yang, Christopher Jason Paxton, Yu-Wei Chao, Dieter Fox
-
Publication number: 20220032454
Abstract: A robotic control system directs a robot to take an object from a human grasp by obtaining an image of a human hand holding an object, estimating the pose of the human hand and the object, and determining a grasp pose for the robot that will not interfere with the human hand. In at least one example, a depth camera is used to obtain a point cloud of the human hand holding the object. The point cloud is provided to a deep network that is trained to generate a grasp pose for a robotic gripper that can take the object from the human's hand without pinching or touching the human's fingers.
Type: Application
Filed: July 28, 2020
Publication date: February 3, 2022
Inventors: Wei Yang, Christopher Jason Paxton, Yu-Wei Chao, Dieter Fox
-
Publication number: 20210086364
Abstract: A human pilot controls a robotic arm and gripper by simulating a set of desired motions with the human hand. In at least one embodiment, one or more images of the pilot's hand are captured and analyzed to determine a set of hand poses. In at least one embodiment, the set of hand poses is translated to a corresponding set of robotic-gripper poses. In at least one embodiment, a set of motions is determined that perform the set of robotic-gripper poses, and the robot is directed to perform the set of motions.
Type: Application
Filed: July 17, 2020
Publication date: March 25, 2021
Inventors: Ankur Handa, Karl Van Wyk, Wei Yang, Yu-Wei Chao, Dieter Fox, Qian Wan
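The pipeline in the abstract is: images of the pilot's hand, then hand poses, then gripper poses, then motions between them. Here is a hedged sketch of that chain; the pose estimator and the fixed hand-to-gripper calibration offset are illustrative assumptions.

```python
# Sketch of the pilot-to-robot teleoperation pipeline.
import numpy as np

def estimate_hand_pose(image):
    """Stand-in estimator: wrist position plus thumb-index pinch distance."""
    return {"wrist": np.random.rand(3), "pinch": np.random.rand() * 0.1}

HAND_TO_GRIPPER_OFFSET = np.array([0.0, 0.0, 0.1])  # assumed calibration

def to_gripper_pose(hand_pose):
    """Translate a hand pose into a corresponding robotic-gripper pose."""
    return {"position": hand_pose["wrist"] + HAND_TO_GRIPPER_OFFSET,
            "width": float(np.clip(hand_pose["pinch"], 0.0, 0.08))}

images = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(5)]
gripper_poses = [to_gripper_pose(estimate_hand_pose(im)) for im in images]

# Derive a simple motion between consecutive gripper poses.
for prev, nxt in zip(gripper_poses, gripper_poses[1:]):
    delta = nxt["position"] - prev["position"]
    print(f"move {np.linalg.norm(delta):.3f} m, set width {nxt['width']:.3f} m")
```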
-
Publication number: 20210081752
Abstract: Apparatuses, systems, and techniques to identify a goal of a demonstration. In at least one embodiment, video data of a demonstration is analyzed to identify a goal. Object trajectories identified in the video data are analyzed with respect to a task predicate satisfied by a respective object trajectory, and with respect to a motion predicate. Analysis of the trajectory with respect to the motion predicate is used to assess intentionality of a trajectory with respect to the goal.
Type: Application
Filed: July 16, 2020
Publication date: March 18, 2021
Inventors: Yu-Wei Chao, De-An Huang, Christopher Jason Paxton, Animesh Garg, Dieter Fox
-
Patent number: 10475207
Abstract: A forecasting neural network receives data and extracts features from the data. A recurrent neural network included in the forecasting neural network provides forecasted features based on the extracted features. In an embodiment, the forecasting neural network receives an image, and features of the image are extracted. The recurrent neural network forecasts features based on the extracted features, and pose is forecasted based on the forecasted features. Additionally or alternatively, additional poses are forecasted based on additional forecasted features.
Type: Grant
Filed: August 7, 2018
Date of Patent: November 12, 2019
Assignee: Adobe Inc.
Inventors: Jimei Yang, Yu-Wei Chao, Scott Cohen, Brian Price
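This abstract (shared by the three related Adobe filings below) describes an encoder, a recurrent network that forecasts future features, and a pose decoder. A minimal PyTorch sketch of that shape follows; the small CNN encoder, GRU, and linear pose head are illustrative stand-ins, not the patented architecture.

```python
# Hedged sketch of feature forecasting with an RNN, then pose decoding.
import torch
import torch.nn as nn

class PoseForecaster(nn.Module):
    def __init__(self, feat_dim=128, num_joints=17, horizon=5):
        super().__init__()
        # Feature extractor: a small CNN standing in for the encoder.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim))
        # Recurrent network that forecasts future features.
        self.rnn = nn.GRU(feat_dim, feat_dim, batch_first=True)
        # Decoder from forecasted features to 2D pose keypoints.
        self.pose_head = nn.Linear(feat_dim, num_joints * 2)
        self.horizon = horizon

    def forward(self, image):
        feat = self.encoder(image)   # features extracted from the image
        h = feat.unsqueeze(0)        # GRU hidden state
        x = feat.unsqueeze(1)        # current feature as first input
        poses = []
        for _ in range(self.horizon):  # forecast a feature, decode a pose
            x, h = self.rnn(x, h)
            poses.append(self.pose_head(x.squeeze(1)))
        return torch.stack(poses, dim=1)  # (batch, horizon, joints*2)

model = PoseForecaster()
out = model(torch.randn(1, 3, 128, 128))
print(out.shape)  # torch.Size([1, 5, 34])
```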
-
Publication number: 20180357789
Abstract: A forecasting neural network receives data and extracts features from the data. A recurrent neural network included in the forecasting neural network provides forecasted features based on the extracted features. In an embodiment, the forecasting neural network receives an image, and features of the image are extracted. The recurrent neural network forecasts features based on the extracted features, and pose is forecasted based on the forecasted features. Additionally or alternatively, additional poses are forecasted based on additional forecasted features.
Type: Application
Filed: August 7, 2018
Publication date: December 13, 2018
Inventors: Jimei Yang, Yu-Wei Chao, Scott Cohen, Brian Price
-
Publication number: 20180293738
Abstract: A forecasting neural network receives data and extracts features from the data. A recurrent neural network included in the forecasting neural network provides forecasted features based on the extracted features. In an embodiment, the forecasting neural network receives an image, and features of the image are extracted. The recurrent neural network forecasts features based on the extracted features, and pose is forecasted based on the forecasted features. Additionally or alternatively, additional poses are forecasted based on additional forecasted features.
Type: Application
Filed: April 7, 2017
Publication date: October 11, 2018
Inventors: Jimei Yang, Yu-Wei Chao, Scott Cohen, Brian Price
-
Patent number: 10096125
Abstract: A forecasting neural network receives data and extracts features from the data. A recurrent neural network included in the forecasting neural network provides forecasted features based on the extracted features. In an embodiment, the forecasting neural network receives an image, and features of the image are extracted. The recurrent neural network forecasts features based on the extracted features, and pose is forecasted based on the forecasted features. Additionally or alternatively, additional poses are forecasted based on additional forecasted features.
Type: Grant
Filed: April 7, 2017
Date of Patent: October 9, 2018
Assignee: Adobe Systems Incorporated
Inventors: Jimei Yang, Yu-Wei Chao, Scott Cohen, Brian Price