Patents by Inventor Julian Ibarz
Julian Ibarz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240118667
Abstract: Implementations disclosed herein relate to mitigating the reality gap through training a simulation-to-real machine learning model (“Sim2Real” model) using a vision-based robot task machine learning model. The vision-based robot task machine learning model can be, for example, a reinforcement learning (“RL”) neural network model (RL-network), such as an RL-network that represents a Q-function.
Type: Application
Filed: May 15, 2020
Publication date: April 11, 2024
Inventors: Kanishka Rao, Chris Harris, Julian Ibarz, Alexander Irpan, Seyed Mohammad Khansari Zadeh, Sergey Levine
-
Patent number: 11717959
Abstract: Deep machine learning methods and apparatus related to semantic robotic grasping are provided. Some implementations relate to training a grasp neural network, a semantic neural network, and a joint neural network of a semantic grasping model. In some of those implementations, the joint network is a deep neural network and can be trained based on both: grasp losses generated based on grasp predictions generated over the grasp neural network, and semantic losses generated based on semantic predictions generated over the semantic neural network. Some implementations are directed to utilization of the trained semantic grasping model to servo, or control, a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
Type: Grant
Filed: June 28, 2018
Date of Patent: August 8, 2023
Assignee: GOOGLE LLC
Inventors: Eric Jang, Sudheendra Vijayanarasimhan, Peter Pastor Sampedro, Julian Ibarz, Sergey Levine
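The joint training described in this abstract can be illustrated with a toy loss computation: the joint network is optimized on the sum of a grasp-success loss and a semantic-class loss. The sketch below is a minimal numpy illustration, not Google's implementation; the loss weighting `w` and all function names are assumptions.

```python
import numpy as np

def grasp_loss(p_success, grasped):
    """Binary cross-entropy on the predicted grasp-success probability."""
    p = np.clip(p_success, 1e-7, 1 - 1e-7)
    return -(grasped * np.log(p) + (1 - grasped) * np.log(1 - p))

def semantic_loss(logits, label):
    """Cross-entropy on the predicted semantic class of the grasped object."""
    z = logits - logits.max()                  # numerically stable log-softmax
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def joint_loss(p_success, grasped, logits, label, w=1.0):
    # The joint network trains on both losses; the relative weight w
    # on the semantic term is an assumed hyperparameter.
    return grasp_loss(p_success, grasped) + w * semantic_loss(logits, label)

loss = joint_loss(0.9, 1, np.array([2.0, 0.1, -1.0]), 0)
```

Confident predictions on both heads drive the joint loss toward zero, while an error on either head keeps it large, which is what couples the two objectives during training.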
-
Patent number: 11685045
Abstract: Asynchronous robotic control utilizing a trained critic network. During performance of a robotic task based on a sequence of robotic actions determined utilizing the critic network, a corresponding next robotic action of the sequence is determined while a corresponding previous robotic action of the sequence is still being implemented. Optionally, the next robotic action can be fully determined and/or can begin to be implemented before implementation of the previous robotic action is completed. In determining the next robotic action, most recently selected robotic action data is processed using the critic network, where such data conveys information about the previous robotic action that is still being implemented. Some implementations additionally or alternatively relate to determining when to implement a robotic action that is determined in an asynchronous manner.
Type: Grant
Filed: September 8, 2020
Date of Patent: June 27, 2023
Assignee: X DEVELOPMENT LLC
Inventors: Alexander Herzog, Dmitry Kalashnikov, Julian Ibarz
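The key idea above is that the critic is conditioned on the action still in flight, so the next action can be chosen before the previous one finishes. This is a minimal sketch under assumed toy dynamics; the critic form and additive state update are hypothetical stand-ins for the trained network.

```python
import numpy as np

def critic(state, in_flight_action, candidate):
    """Toy critic Q(s, a_prev, a): assumes the in-flight action shifts
    the state additively, then scores how close the candidate brings
    the resulting state to a goal of 1.0 (all assumptions)."""
    predicted_state = state + in_flight_action
    return -np.sum((predicted_state + candidate - 1.0) ** 2)

def select_next_action(state, in_flight_action, candidates):
    """Pick the next action while the previous one is still executing,
    by scoring each candidate with the critic."""
    scores = [critic(state, in_flight_action, a) for a in candidates]
    return candidates[int(np.argmax(scores))]

state = np.array([0.0])
in_flight = np.array([0.4])                    # action currently being executed
candidates = [np.array([x]) for x in (-0.5, 0.0, 0.6, 1.0)]
next_action = select_next_action(state, in_flight, candidates)
```

Because the critic sees `in_flight`, it compensates for the motion already underway (here it selects 0.6, which together with the in-flight 0.4 reaches the goal) instead of re-planning from a stale state.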
-
Patent number: 11477243
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for off-policy evaluation of a control policy. One of the methods includes obtaining policy data specifying a control policy for controlling a source agent interacting with a source environment to perform a particular task; obtaining a validation data set generated from interactions of a target agent in a target environment; determining a performance estimate that represents an estimate of a performance of the control policy in controlling the target agent to perform the particular task in the target environment; and determining, based on the performance estimate, whether to deploy the control policy for controlling the target agent to perform the particular task in the target environment.
Type: Grant
Filed: March 23, 2020
Date of Patent: October 18, 2022
Assignee: Google LLC
Inventors: Kanury Kanishka Rao, Konstantinos Bousmalis, Christopher K. Harris, Alexander Irpan, Sergey Vladimir Levine, Julian Ibarz
-
Patent number: 11341364
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an action selection neural network that is used to control a robotic agent interacting with a real-world environment.
Type: Grant
Filed: September 20, 2018
Date of Patent: May 24, 2022
Assignee: Google LLC
Inventors: Konstantinos Bousmalis, Alexander Irpan, Paul Wohlhart, Yunfei Bai, Mrinal Kalakrishnan, Julian Ibarz, Sergey Vladimir Levine, Kurt Konolige, Vincent O. Vanhoucke, Matthew Laurance Kelcey
-
Publication number: 20210237266
Abstract: Using large-scale reinforcement learning to train a policy model that can be utilized by a robot in performing a robotic task in which the robot interacts with one or more environmental objects. In various implementations, off-policy deep reinforcement learning is used to train the policy model, and the off-policy deep reinforcement learning is based on self-supervised data collection. The policy model can be a neural network model. Implementations of the reinforcement learning utilized in training the neural network model utilize a continuous-action variant of Q-learning. Through techniques disclosed herein, implementations can learn policies that generalize effectively to previously unseen objects, previously unseen environments, etc.
Type: Application
Filed: June 14, 2019
Publication date: August 5, 2021
Inventors: Dmitry Kalashnikov, Alexander Irpan, Peter Pastor Sampedro, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Sergey Levine
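A continuous-action variant of Q-learning cannot take a discrete argmax over actions; instead it maximizes the learned Q-function with a stochastic optimizer over the continuous action space, commonly the cross-entropy method (CEM). The sketch below optimizes a toy quadratic Q-function with CEM; the Q-function and all hyperparameters are assumptions standing in for the trained critic.

```python
import numpy as np

def q_function(state, action):
    """Toy Q(s, a), peaked at a = [0.3, -0.2]; a stand-in for the
    learned neural-network critic (the peak location is arbitrary)."""
    target = np.array([0.3, -0.2])
    return -np.sum((action - target) ** 2)

def cem_argmax_q(state, dim=2, iters=8, pop=64, elites=6, seed=0):
    """Cross-entropy method: sample actions from a Gaussian, keep the
    highest-scoring elites, refit the Gaussian, and repeat."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        samples = rng.normal(mu, sigma, size=(pop, dim))
        scores = np.array([q_function(state, a) for a in samples])
        elite = samples[np.argsort(scores)[-elites:]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu

best_action = cem_argmax_q(state=None)
```

In a robot-control setting this optimization runs inside the action-selection loop: the critic scores sampled end-effector commands and the refit Gaussian concentrates on the highest-value region of the action space.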
-
Publication number: 20200338722
Abstract: Deep machine learning methods and apparatus related to semantic robotic grasping are provided. Some implementations relate to training a grasp neural network, a semantic neural network, and a joint neural network of a semantic grasping model. In some of those implementations, the joint network is a deep neural network and can be trained based on both: grasp losses generated based on grasp predictions generated over the grasp neural network, and semantic losses generated based on semantic predictions generated over the semantic neural network. Some implementations are directed to utilization of the trained semantic grasping model to servo, or control, a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
Type: Application
Filed: June 28, 2018
Publication date: October 29, 2020
Inventors: Eric Jang, Sudheendra Vijayanarasimhan, Peter Pastor Sampedro, Julian Ibarz, Sergey Levine
-
Publication number: 20200304545
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for off-policy evaluation of a control policy. One of the methods includes obtaining policy data specifying a control policy for controlling a source agent interacting with a source environment to perform a particular task; obtaining a validation data set generated from interactions of a target agent in a target environment; determining a performance estimate that represents an estimate of a performance of the control policy in controlling the target agent to perform the particular task in the target environment; and determining, based on the performance estimate, whether to deploy the control policy for controlling the target agent to perform the particular task in the target environment.
Type: Application
Filed: March 23, 2020
Publication date: September 24, 2020
Inventors: Kanury Kanishka Rao, Konstantinos Bousmalis, Christopher K. Harris, Alexander Irpan, Sergey Vladimir Levine, Julian Ibarz
-
Publication number: 20200279134
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an action selection neural network that is used to control a robotic agent interacting with a real-world environment.
Type: Application
Filed: September 20, 2018
Publication date: September 3, 2020
Inventors: Konstantinos Bousmalis, Alexander Irpan, Paul Wohlhart, Yunfei Bai, Mrinal Kalakrishnan, Julian Ibarz, Sergey Vladimir Levine, Kurt Konolige, Vincent O. Vanhoucke, Matthew Laurance Kelcey
-
Patent number: 9536314
Abstract: A method for reconstructing a three-dimensional image includes receiving a plurality of two-dimensional images and projection information of the two-dimensional images, projecting a plurality of rays onto the plurality of two-dimensional images, determining correspondence information between pixels of different ones of the plurality of two-dimensional images, determining a value of each of the pixels, and reconstructing a three-dimensional image by integrating the plurality of rays, wherein a position on each ray can be associated with one pixel of the plurality of two-dimensional images.
Type: Grant
Filed: October 19, 2011
Date of Patent: January 3, 2017
Assignee: SIEMENS MEDICAL SOLUTIONS USA, INC.
Inventors: Mathieu Chartouni, Liron Yatziv, Julian Ibarz, Chen-Rui Chou, Atilla Peter Kiraly, Christophe Chefd'hotel
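The association between positions along each ray and pixels in the 2-D views can be sketched as a simple back-projection: every voxel's value combines the pixels it maps to in each view. The orthographic projection model, averaging rule, and 4×4×4 scene below are all hypothetical; real systems use calibrated projection geometry from the imagers.

```python
import numpy as np

def reconstruct_voxels(images, projections, shape=(4, 4, 4)):
    """Back-project: each voxel maps to one pixel per 2-D view via its
    projection function, and its value is the mean of those pixels
    (assumed combination rule)."""
    volume = np.zeros(shape)
    for idx in np.ndindex(shape):
        vals = [img[proj(idx)] for img, proj in zip(images, projections)]
        volume[idx] = np.mean(vals)
    return volume

# Two hypothetical orthographic views of a 4x4x4 scene: the first
# drops the z axis, the second drops the x axis.
img_xy = np.zeros((4, 4)); img_xy[1, 2] = 1.0
img_yz = np.zeros((4, 4)); img_yz[2, 3] = 1.0
volume = reconstruct_voxels(
    [img_xy, img_yz],
    [lambda v: (v[0], v[1]), lambda v: (v[1], v[2])],
)
```

Only the voxel whose projections are bright in both views recovers the full intensity, which is the intuition behind integrating information across rays from multiple projections.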
-
Patent number: 9454714
Abstract: Systems and methods for sequence transcription with neural networks are provided. More particularly, a neural network can be implemented to map a plurality of training images received by the neural network into a probabilistic model of sequences comprising P(S|X) by maximizing log P(S|X) on the plurality of training images. X represents an input image and S represents an output sequence of characters for the input image. The trained neural network can process a received image containing characters associated with building numbers. The trained neural network can generate a predicted sequence of characters by processing the received image.
Type: Grant
Filed: December 31, 2014
Date of Patent: September 27, 2016
Assignee: Google Inc.
Inventors: Julian Ibarz, Yaroslav Bulatov, Ian Goodfellow
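One common way to realize P(S|X) for transcribing house numbers is to factor it into a distribution over the sequence length and independent per-position character distributions, so that log P(S|X) = log P(L) + Σᵢ log P(sᵢ). The sketch below decodes under that factorization; the factorization itself and the hand-made network outputs are assumptions for illustration.

```python
import numpy as np

def log_p_sequence(length_logp, digit_logps, seq):
    """log P(S|X) = log P(L = len(S)) + sum_i log P(s_i | X),
    assuming per-position conditional independence."""
    lp = length_logp[len(seq)]
    for pos, digit in enumerate(seq):
        lp += digit_logps[pos][digit]
    return lp

def decode(length_logp, digit_logps, max_len):
    """Greedy decode: best digit at each position, scored per length."""
    best = None
    for L in range(1, max_len + 1):
        seq = tuple(int(np.argmax(digit_logps[p])) for p in range(L))
        lp = log_p_sequence(length_logp, digit_logps, seq)
        if best is None or lp > best[0]:
            best = (lp, seq)
    return best[1]

def peaked(d):
    """Hypothetical per-position output sharply peaked at digit d."""
    p = np.full(10, 0.01)
    p[d] = 0.91
    return np.log(p)

# Hypothetical network outputs for an image of the house number "42".
length_logp = np.log(np.array([0.05, 0.15, 0.7, 0.1]))   # P(L = 0..3)
digit_logps = [peaked(4), peaked(2), np.log(np.full(10, 0.1))]
predicted = decode(length_logp, digit_logps, max_len=3)
```

The length term is what lets the model prefer the two-digit reading "42" over a three-digit reading padded with a low-confidence position.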
-
Patent number: 8965112
Abstract: Systems and methods for sequence transcription with neural networks are provided. More particularly, a neural network can be implemented to map a plurality of training images received by the neural network into a probabilistic model of sequences comprising P(S|X) by maximizing log P(S|X) on the plurality of training images. X represents an input image and S represents an output sequence of characters for the input image. The trained neural network can process a received image containing characters associated with building numbers. The trained neural network can generate a predicted sequence of characters by processing the received image.
Type: Grant
Filed: December 17, 2013
Date of Patent: February 24, 2015
Assignee: Google Inc.
Inventors: Julian Ibarz, Yaroslav Bulatov, Ian Goodfellow
-
Patent number: 8868522
Abstract: Systems and methods for updating geographic data based on a transaction are provided. In some aspects, one or more transaction records associated with a business are accessed from a memory. Each transaction record identifies a transaction time, geographic location data, and transaction information. A geocoded record of the business is selected to update, based on the geographic location data of the one or more transaction records. The selected geocoded record is updated based on at least one of the transaction time or the transaction information identified in the transaction records.
Type: Grant
Filed: November 30, 2012
Date of Patent: October 21, 2014
Assignee: Google Inc.
Inventors: Marco Zennaro, Kong Man Cheung, Julian Ibarz, Liron Yatziv, Sacha Christophe Arnoud
-
Patent number: 8538106
Abstract: A method for three-dimensional esophageal reconstruction includes acquiring a first X-ray image from a first angle with respect to a subject using a first X-ray imager. At least a second X-ray image is acquired from a second angle, different than the first angle, with respect to the subject using a second X-ray imager. Additional X-ray images may be acquired from additional angles. A three-dimensional model of the esophagus is generated from the at least two X-ray images acquired at different angles. A set of fluoroscopic X-ray images is acquired using either the first X-ray imager or the second X-ray imager. The three-dimensional model of the esophagus is registered to the acquired set of fluoroscopic X-ray images. The three-dimensional model of the esophagus is displayed overlaying the set of fluoroscopic X-ray images.
Type: Grant
Filed: October 12, 2010
Date of Patent: September 17, 2013
Assignee: Siemens Aktiengesellschaft
Inventors: Julian Ibarz, Norbert Strobel, Liron Yatziv
-
Publication number: 20120098832
Abstract: A method for reconstructing a three-dimensional image includes receiving a plurality of two-dimensional images and projection information of the two-dimensional images, projecting a plurality of rays onto the plurality of two-dimensional images, determining correspondence information between pixels of different ones of the plurality of two-dimensional images, determining a value of each of the pixels, and reconstructing a three-dimensional image by integrating the plurality of rays, wherein a position on each ray can be associated with one pixel of the plurality of two-dimensional images.
Type: Application
Filed: October 19, 2011
Publication date: April 26, 2012
Applicant: Siemens Corporation
Inventors: Mathieu Chartouni, Julian Ibarz, Liron Yatziv
-
Publication number: 20110091087
Abstract: A method for three-dimensional esophageal reconstruction includes acquiring a first X-ray image from a first angle with respect to a subject using a first X-ray imager. At least a second X-ray image is acquired from a second angle, different than the first angle, with respect to the subject using a second X-ray imager. Additional X-ray images may be acquired from additional angles. A three-dimensional model of the esophagus is generated from the at least two X-ray images acquired at different angles. A set of fluoroscopic X-ray images is acquired using either the first X-ray imager or the second X-ray imager. The three-dimensional model of the esophagus is registered to the acquired set of fluoroscopic X-ray images. The three-dimensional model of the esophagus is displayed overlaying the set of fluoroscopic X-ray images.
Type: Application
Filed: October 12, 2010
Publication date: April 21, 2011
Applicant: Siemens Corporation
Inventors: Julian Ibarz, Norbert Strobel, Liron Yatziv
-
Publication number: 20110090222
Abstract: A method for imaging a myocardial surface includes receiving an image volume. A myocardial surface is segmented within the received image volume. A polygon mesh of the segmented myocardial surface is extracted. A surface texture is calculated from voxel information taken along a path normal to the surface of the myocardium. A view of the myocardial surface is rendered using the calculated surface texture.
Type: Application
Filed: October 5, 2010
Publication date: April 21, 2011
Applicant: Siemens Corporation
Inventors: Julian Ibarz, Liron Yatziv, Romain Moreau-Gobard, James Williams
-
Publication number: 20110082667
Abstract: A method for simultaneous visualization of the outside and the inside of a surface model at a selected view orientation includes receiving a digitized representation of a surface of a segmented object, where the surface representation comprises a plurality of points, receiving a selection of a viewing direction for rendering the object, calculating an inner product image by calculating an inner product n_p · d at each point on the surface mesh, where n_p is a normalized vector representing the normal direction of the surface mesh at a point p towards the exterior of the object and d is a normalized vector representing the view direction, and rendering the object using an opacity that is a function of the denoised inner product image to yield a rendered object, where an interior of the object is rendered.
Type: Application
Filed: September 3, 2010
Publication date: April 7, 2011
Applicant: Siemens Corporation
Inventors: Julian Ibarz, Liron Yatziv, Norbert Strobel
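The inner-product image drives the effect: where the surface normal is nearly parallel to the view direction the surface can be made transparent so the interior shows through, while silhouette regions (inner product near zero) stay opaque. The mapping from inner product to opacity below is an assumed example; the patent only specifies that opacity is a function of the (denoised) inner-product image.

```python
import numpy as np

def view_dependent_opacity(normals, view_dir, alpha_min=0.05, alpha_max=0.9):
    """Opacity per surface point from n_p . d: front/back-facing points
    (|n.d| ~ 1) become nearly transparent, silhouette points (|n.d| ~ 0)
    stay opaque. The linear mapping and alpha bounds are assumptions."""
    d = view_dir / np.linalg.norm(view_dir)
    dots = normals @ d                        # n_p . d for every point
    return alpha_min + (alpha_max - alpha_min) * (1.0 - np.abs(dots))

normals = np.array([[0.0, 0.0, 1.0],    # normal facing the viewer
                    [1.0, 0.0, 0.0]])   # normal on the silhouette
alpha = view_dependent_opacity(normals, np.array([0.0, 0.0, 1.0]))
```

Rendering with these per-point alphas keeps the object's outline visible while exposing interior structure through the transparent front faces.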