Patents by Inventor David Constantine
David Constantine has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240160901
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network system used to control an agent interacting with an environment. One of the methods includes receiving a current observation; processing the current observation using a proposal neural network to generate a proposal output that defines a proposal probability distribution over a set of possible actions that can be performed by the agent to interact with the environment; sampling (i) one or more actions from the set of possible actions in accordance with the proposal probability distribution and (ii) one or more actions randomly from the set of possible actions; processing the current observation and each sampled action using a Q neural network to generate a Q value; and selecting an action using the Q values generated by the Q neural network.
Type: Application
Filed: January 8, 2024
Publication date: May 16, 2024
Inventors: Tom Van de Wiele, Volodymyr Mnih, Andriy Mnih, David Constantine Patrick Warde-Farley
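The abstract above describes an action-selection loop: a proposal network produces a probability distribution over the action set, candidate actions are drawn both from that distribution and uniformly at random, and a Q network scores every candidate so the highest-scoring action can be selected. The following is a minimal sketch of that loop only, not the patented implementation; the network stand-ins, action-set size, and sample counts are illustrative assumptions.

```python
# Minimal sketch of proposal-plus-random candidate sampling scored by a Q network.
# All names, shapes, and the NumPy stand-ins for the neural networks are assumptions.
import numpy as np

rng = np.random.default_rng(0)
NUM_ACTIONS = 16   # size of the discrete action set (assumed)
NUM_PROPOSED = 4   # actions drawn from the proposal distribution
NUM_RANDOM = 2     # actions drawn uniformly at random

def proposal_network(observation):
    """Stand-in for the proposal neural network: returns a probability
    distribution over the action set given the current observation."""
    logits = observation @ rng.standard_normal((observation.shape[-1], NUM_ACTIONS))
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def q_network(observation, action):
    """Stand-in for the Q neural network: scores one (observation, action) pair."""
    return float(observation.sum() * 0.01 + action * 0.1)  # dummy Q value

def select_action(observation):
    probs = proposal_network(observation)
    proposed = rng.choice(NUM_ACTIONS, size=NUM_PROPOSED, p=probs)   # (i) proposal samples
    random_actions = rng.integers(0, NUM_ACTIONS, size=NUM_RANDOM)   # (ii) uniform samples
    candidates = np.concatenate([proposed, random_actions])
    q_values = [q_network(observation, a) for a in candidates]       # score each candidate
    return int(candidates[int(np.argmax(q_values))])                 # pick via Q values

print(select_action(rng.standard_normal(8)))
```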
-
Publication number: 20240024667
Abstract: A neuromodulation device includes a movable arm coupled with a main body. The movable arm is configured to transition between an open configuration and a closed configuration. The movable arm is biased towards the closed configuration. A nerve stimulation chamber is defined at least in part by the movable arm. The nerve stimulation chamber is configured to retain a nerve therein. The device has an open channel through which the nerve travels. The channel is defined at least in part by the movable arm. The size and shape of the chamber and the channel are adjustable by the movement of the arm in response to contact with the nerve. The channel may be continuously axial along a longitudinal axis of the channel when the movable arm is in the closed configuration. The chamber has an electrode.
Type: Application
Filed: July 22, 2023
Publication date: January 25, 2024
Inventors: Jayme Coates, David Constantine, Mario Romero-Ortega, Mark Tauer
-
Publication number: 20240024668
Abstract: A method stimulates a nerve by providing a device having an arm formed of an elastomeric material. A channel through which the nerve travels is defined at least in part by the arm. The channel has a continuously axial longitudinal axis at rest. A nerve stimulation chamber is defined at least in part by the arm. The nerve stimulation chamber is configured to retain the nerve therein. The device has an electrode in the chamber. The method reduces a dimension of the nerve; the reduced dimension is less than 50% of the diameter of the undeformed nerve. The device is configured to apply less than 6.7 kPa of pressure to the nerve at any given point. The nerve is positioned in the chamber, and the cross-sectional dimension of the stretched nerve is increased. At least 20% of the perimeter of the nerve is maintained in contact with the electrode.
Type: Application
Filed: July 22, 2023
Publication date: January 25, 2024
Inventors: Jayme Coates, David Constantine, Mario Romero-Ortega, Mark Tauer
-
Patent number: 11868866
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network system used to control an agent interacting with an environment. One of the methods includes receiving a current observation; processing the current observation using a proposal neural network to generate a proposal output that defines a proposal probability distribution over a set of possible actions that can be performed by the agent to interact with the environment; sampling (i) one or more actions from the set of possible actions in accordance with the proposal probability distribution and (ii) one or more actions randomly from the set of possible actions; processing the current observation and each sampled action using a Q neural network to generate a Q value; and selecting an action using the Q values generated by the Q neural network.
Type: Grant
Filed: November 18, 2019
Date of Patent: January 9, 2024
Assignee: DeepMind Technologies Limited
Inventors: Tom Van de Wiele, Volodymyr Mnih, Andriy Mnih, David Constantine Patrick Warde-Farley
-
Publication number: 20230338029
Abstract: A nerve regeneration system includes a nerve guide having a proximal end and a distal end. The system includes nerve growth factor configured to enhance the growth of axons and associated nerve tissue. The nerve growth factor has a first growth factor concentration nearer to the proximal end and a second growth factor concentration nearer to the distal end. The second growth factor concentration is higher than the first growth factor concentration. The system includes myelination factor configured to enhance myelination of the grown axons. The myelination factor has a first myelination factor concentration nearer to the proximal end, a third myelination factor concentration nearer to the distal end, and a second myelination factor concentration between the first myelination factor concentration and the third myelination factor concentration. The second myelination factor concentration is higher than the first myelination factor concentration and higher than the third myelination factor concentration.
Type: Application
Filed: April 5, 2023
Publication date: October 26, 2023
Inventors: Mario I. Romero-Ortega, David Constantine, Jeffrey Petruska
-
Publication number: 20230325635
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network for use in controlling an agent using relative variational intrinsic control. In one aspect, a method includes: selecting a skill from a set of skills; generating a trajectory by controlling the agent using the policy neural network while the policy neural network is conditioned on the selected skill; processing an initial observation and a last observation using a relative discriminator neural network to generate a relative score; processing the last observation using an absolute discriminator neural network to generate an absolute score; generating a reward for the trajectory from the absolute score corresponding to the selected skill and the relative score corresponding to the selected skill; and training the policy neural network on the reward for the trajectory.
Type: Application
Filed: September 10, 2021
Publication date: October 12, 2023
Inventors: David Constantine Patrick Warde-Farley, Steven Stenberg Hansen, Volodymyr Mnih, Kate Alexandra Baumli
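This abstract derives a per-trajectory reward from two discriminators: a relative discriminator applied to the initial and last observations, and an absolute discriminator applied to the last observation alone. The sketch below shows one way such scores could be combined into a reward for the selected skill; it is an illustration under assumed network stand-ins, an assumed skill count, and an assumed score combination, not the claimed method.

```python
# Minimal sketch of a reward built from relative and absolute discriminator scores.
# Network internals, dimensions, and the score combination are assumptions.
import numpy as np

rng = np.random.default_rng(0)
NUM_SKILLS = 8
OBS_DIM = 4

def relative_discriminator(initial_obs, last_obs):
    """Stand-in: per-skill log-scores for the change from initial to last observation."""
    logits = np.concatenate([initial_obs, last_obs]) @ rng.standard_normal((2 * OBS_DIM, NUM_SKILLS))
    return logits - np.log(np.exp(logits).sum())  # log-probabilities over skills

def absolute_discriminator(last_obs):
    """Stand-in: per-skill log-scores for the last observation alone."""
    logits = last_obs @ rng.standard_normal((OBS_DIM, NUM_SKILLS))
    return logits - np.log(np.exp(logits).sum())

def trajectory_reward(skill, initial_obs, last_obs):
    relative_score = relative_discriminator(initial_obs, last_obs)[skill]
    absolute_score = absolute_discriminator(last_obs)[skill]
    # One plausible combination (assumption): reward the skill for being more
    # recognizable from the relative change than from the final state alone.
    return float(relative_score - absolute_score)

skill = rng.integers(NUM_SKILLS)
print(trajectory_reward(skill, rng.standard_normal(OBS_DIM), rng.standard_normal(OBS_DIM)))
```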
-
Patent number: 11727281
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting actions to be performed by an agent that interacts with an environment. In one aspect, a system comprises: an action selection subsystem that selects actions to be performed by the agent using an action selection policy generated using an action selection neural network; a reward subsystem that is configured to: receive an observation characterizing a current state of the environment and an observation characterizing a goal state of the environment; generate a reward using an embedded representation of the observation characterizing the current state of the environment and an embedded representation of the observation characterizing the goal state of the environment; and a training subsystem that is configured to train the action selection neural network based on the rewards generated by the reward subsystem using reinforcement learning techniques.
Type: Grant
Filed: January 27, 2022
Date of Patent: August 15, 2023
Assignee: DeepMind Technologies Limited
Inventors: David Constantine Patrick Warde-Farley, Volodymyr Mnih
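The reward subsystem in this abstract computes a reward from embedded representations of the current observation and the goal observation. A minimal sketch follows; the embedding function and the distance-based reward are assumptions chosen for illustration, not the claimed construction.

```python
# Minimal sketch of a goal-conditioned reward computed from observation embeddings.
# The linear embedding and negative-distance reward are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, EMBED_DIM = 6, 3
projection = rng.standard_normal((OBS_DIM, EMBED_DIM))

def embed(observation):
    """Stand-in for the learned embedded representation of an observation."""
    return observation @ projection

def reward(current_obs, goal_obs):
    """Reward from the embeddings of the current and goal observations;
    here, negative distance between the two embeddings (assumed choice)."""
    return -float(np.linalg.norm(embed(current_obs) - embed(goal_obs)))

print(reward(rng.standard_normal(OBS_DIM), rng.standard_normal(OBS_DIM)))
```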
-
Publication number: 20230149709
Abstract: A method and/or system controls bladder function of a patient having a symptom related to overactive bladder (OAB) or stress urinary incontinence (SUI). The symptom related to OAB is produced by a natural OAB signal generated by the patient's body. To that end, the method couples an electrode to a prescribed somatic motor nerve (e.g., the perineal nerve) associated with the pelvic floor, and then transmits, via the electrode, an OAB control signal to the prescribed somatic motor nerve. The OAB control signal is configured to activate the pelvic floor in a prescribed manner to mitigate the effect of the natural OAB signal on the pelvic floor.
Type: Application
Filed: November 12, 2022
Publication date: May 18, 2023
Inventors: Mario I. Romero-Ortega, David Constantine
-
Publication number: 20220164673
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting actions to be performed by an agent that interacts with an environment. In one aspect, a system comprises: an action selection subsystem that selects actions to be performed by the agent using an action selection policy generated using an action selection neural network; a reward subsystem that is configured to: receive an observation characterizing a current state of the environment and an observation characterizing a goal state of the environment; generate a reward using an embedded representation of the observation characterizing the current state of the environment and an embedded representation of the observation characterizing the goal state of the environment; and a training subsystem that is configured to train the action selection neural network based on the rewards generated by the reward subsystem using reinforcement learning techniques.
Type: Application
Filed: January 27, 2022
Publication date: May 26, 2022
Inventors: David Constantine Patrick Warde-Farley, Volodymyr Mnih
-
Patent number: 11263531
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting actions to be performed by an agent that interacts with an environment. In one aspect, a system comprises: an action selection subsystem that selects actions to be performed by the agent using an action selection policy generated using an action selection neural network; a reward subsystem that is configured to: receive an observation characterizing a current state of the environment and an observation characterizing a goal state of the environment; generate a reward using an embedded representation of the observation characterizing the current state of the environment and an embedded representation of the observation characterizing the goal state of the environment; and a training subsystem that is configured to train the action selection neural network based on the rewards generated by the reward subsystem using reinforcement learning techniques.
Type: Grant
Filed: May 20, 2019
Date of Patent: March 1, 2022
Assignee: DeepMind Technologies Limited
Inventors: David Constantine Patrick Warde-Farley, Volodymyr Mnih
-
Publication number: 20210357731
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network system used to control an agent interacting with an environment. One of the methods includes receiving a current observation; processing the current observation using a proposal neural network to generate a proposal output that defines a proposal probability distribution over a set of possible actions that can be performed by the agent to interact with the environment; sampling (i) one or more actions from the set of possible actions in accordance with the proposal probability distribution and (ii) one or more actions randomly from the set of possible actions; processing the current observation and each sampled action using a Q neural network to generate a Q value; and selecting an action using the Q values generated by the Q neural network.
Type: Application
Filed: November 18, 2019
Publication date: November 18, 2021
Inventors: Tom Van de Wiele, Volodymyr Mnih, Andriy Mnih, David Constantine Patrick Warde-Farley
-
Publication number: 20190354869
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting actions to be performed by an agent that interacts with an environment. In one aspect, a system comprises: an action selection subsystem that selects actions to be performed by the agent using an action selection policy generated using an action selection neural network; a reward subsystem that is configured to: receive an observation characterizing a current state of the environment and an observation characterizing a goal state of the environment; generate a reward using an embedded representation of the observation characterizing the current state of the environment and an embedded representation of the observation characterizing the goal state of the environment; and a training subsystem that is configured to train the action selection neural network based on the rewards generated by the reward subsystem using reinforcement learning techniques.
Type: Application
Filed: May 20, 2019
Publication date: November 21, 2019
Inventors: David Constantine Patrick Warde-Farley, Volodymyr Mnih
-
Publication number: 20110046522
Abstract: In a first embodiment, an ultrasound energy delivery assembly includes a waveguide and a catheter having a capture member. The capture member extends radially inward from an interior surface of the catheter into a lumen of the catheter and is configured to retain the enlarged distal tip so that the enlarged distal tip is temporarily prevented from moving proximally. In a second embodiment, an ultrasound energy delivery assembly includes a waveguide and a sheath covering at least a portion of the waveguide. In a third embodiment, an ultrasound energy delivery assembly includes a waveguide and a dual lumen catheter having a capture member. In a fourth embodiment, an ultrasound energy delivery assembly includes a waveguide and a catheter having a proximal waveguide lumen and a proximal guide wire lumen that merges with the proximal waveguide lumen at their respective distal ends to form a distal lumen.
Type: Application
Filed: August 10, 2010
Publication date: February 24, 2011
Applicant: BOSTON SCIENTIFIC SCIMED INC.
Inventors: Huey Chan, Sean McFerran, Stephen Porter, Del Kjos, Steve Forcucci, David Constantine, Jeffrey J. Vaitekunas, Katie Kane, Mark Hamm