MODULE FOR UNDERWATER REMOTELY OPERATED VEHICLES

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for controlling a remotely operated vehicle (ROV) for performing an underwater task. One apparatus includes a watertight housing; a mounting hardware that attaches the watertight housing to the ROV; one or more sensors in the watertight housing, the one or more sensors configured to generate sensor data that is associated with an underwater task; and one or more processors in the watertight housing, the one or more processors configured to: receive the sensor data from the one or more sensors; generate a navigation plan for the ROV using the sensor data; determine, using the navigation plan, control instructions configured to control the ROV to perform the underwater task; and provide the control instructions to an interface of the ROV configured to communicate with the apparatus.

Description
CROSS-REFERENCE TO RELATED APPLICATION

This application claims the benefit of U.S. Provisional Application No. 63/415,587, filed on Oct. 12, 2022, the contents of which are incorporated by reference herein.

BACKGROUND

A remotely operated vehicle (ROV) is a tethered underwater mobile device that helps humans safely study a body of water, such as the ocean. ROVs are often used when diving by humans is either impractical or dangerous, such as working in deep water or investigating submerged hazards. ROVs can include sensors such as cameras and lights to capture information about objects underwater, robotic arms to grab things underwater, or a combination of these.

An ROV is typically controlled and piloted from a surface vessel via an umbilical cable. The umbilical cable can provide electric power and allow the transfer of data between the surface vessel and the ROV. For example, an ROV can be controlled by an operator on a surface vessel using a joystick.

SUMMARY

In general, innovative aspects of the subject matter described in this specification relate to a module (e.g., an apparatus, or a hardware kit) that attaches to an ROV and autonomously controls the ROV for performing an underwater task. The module includes a watertight housing and a mounting hardware that attaches the watertight housing to the ROV. The module includes one or more sensors that are configured to generate sensor data that is associated with the underwater task. The module includes a processor for performing the underwater task. The processor can determine a navigation plan for the ROV using the sensor data, and can control the ROV to perform the underwater task by transmitting control instructions over an interface of the ROV.

One innovative aspect of the subject matter described in this specification is embodied in a method that includes receiving sensor data from one or more sensors included in an apparatus, wherein the apparatus is configured to attach to a remotely operated vehicle (ROV), wherein the one or more sensors is in a watertight housing and is configured to generate sensor data that is associated with an underwater task, wherein a mounting hardware attaches the watertight housing to the ROV; generating a navigation plan for the ROV using the sensor data; determining, using the navigation plan, control instructions configured to control the ROV to perform the underwater task; and providing the control instructions to an interface of the ROV configured to communicate with the apparatus.

Other implementations of this and other aspects include corresponding systems, apparatus, and computer programs, configured to perform the actions of the methods, encoded on computer storage devices. A system of one or more computers can be so configured by virtue of software, firmware, hardware, or a combination of them installed on the system that in operation cause the system to perform the actions. One or more computer programs can be so configured by virtue of having instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.

The foregoing and other embodiments can each optionally include one or more of the following features, alone or in combination. The one or more processors include a graphics processing unit configured to process the sensor data using a machine learning algorithm. The apparatus includes a machine learning engine trained to generate a result of one or more computer vision tasks based on input indicative of the sensor data. The mounting hardware includes at least one of a clamping system, a screwing system, or a magnetic system. The ROV is untethered to any surface vessel. The one or more sensors are customized for the apparatus in accordance with the underwater task. The apparatus includes a communication engine configured to communicate with a surface vessel or another ROV. The one or more processors are configured to generate the navigation plan for the ROV using the sensor data by fusing the sensor data obtained from multiple sensors. The interface of the ROV includes an application programming interface (API) through which the ROV receives control instructions.

Particular embodiments of the subject matter described in this specification can be implemented so as to realize one or more of the following advantages. An ROV attachment module can control an ROV to map large underwater areas (e.g., seabed or lakebed) autonomously, reducing the amount of human operator time spent in manually controlling the ROV and searching for an area of interest. The ROV attachment module can be compatible with a wide range of ROVs. In some implementations, the ROV attachment module can include an on-board graphics processing unit (GPU) that can run machine learning algorithms on board the ROV attachment module, e.g., while the ROV is navigating in the water. Thus, the ROV attachment module can use the machine learning algorithms to automatically identify points of interest and can dynamically drive the ROV over to those areas to collect more data.

Various modules or kits can be developed and provided for performing different underwater tasks, and a user having any type of ROV can pick and choose a module to perform a desired underwater task, without a need to buy customized expensive ROVs or a need to buy customized attachments for specific types of ROVs. Instead of being limited by sensors available on an ROV, the module described in this specification can include its own sensors and can generate an autonomous navigation plan for the ROV using the sensor data obtained from the module's sensors. In some implementations, the module can fuse sensor data from multiple sensors (e.g., cameras, magnetic sensors, sonar, etc.) to generate control instructions that are compatible with an existing interface, e.g., an application programming interface (API), of an ROV.

The details of one or more implementations of the subject matter of this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a diagram illustrating an example environment for performing an underwater task by a module attached to an ROV.

FIG. 2 is a flow chart illustrating an example of a process for performing an underwater task by a module attached to an ROV.

Like reference numbers and designations in the various drawings indicate like elements.

DETAILED DESCRIPTION

FIG. 1 is a diagram illustrating an example environment 100 for performing an underwater task by a module attached to an ROV. The environment 100 includes an example remotely operated vehicle (ROV) 102, and an enhanced ROV 120 to which a module 121 described in this specification is attached. The example ROV 102 is included here for comparison with the enhanced ROV 120; the enhanced ROV 120 and the module 121 can operate without the ROV 102.

The example ROV 102 is a tethered underwater mobile device that helps humans safely study a body of water 118, such as the ocean. The ROV 102 can be configured to perform an underwater task. Examples of underwater tasks include seabed mapping, object searching, biodiversity (e.g., fish) observation, pollution observation, and so on. For example, the ROV 102 can be used for imaging and mapping coral reefs or seagrass beds 114. As another example, the ROV 102 can be used to count types and numbers of fish in an underwater region. The ROV 102 can be used when diving by humans is either impractical or dangerous, such as working in deep water or investigating submerged hazards. The ROV 102 can include sensors such as cameras and lights to capture information about objects underwater, robotic arms to grab things underwater, or a combination of these.

The ROV 102 is controlled and piloted from a surface vessel 108 via an umbilical cable 110 as the main tethering device. The surface vessel 108 can be a docking bay that can charge one or more ROVs and can store data collected by the ROVs. The umbilical cable 110 provides electric power and allows the transfer of data between the surface vessel and the ROV 102. For example, the ROV 102 can be controlled by a human operator on a surface vessel using a joystick. The human operator can send control instructions 112 to the ROV 102 through an interface 106 of the ROV 102. The control instructions 112 are readable by the ROV 102 through the interface 106, e.g., an application programming interface (API). For example, the control instructions 112 can include “moving forward”, “moving back”, “hover”, “stay stationary”, “do a spiral”, etc. The control instructions 112 can be specific to the thruster system of the ROV, e.g., the motor 104 of the ROV 102. For example, the control instructions 112 can include motor speed, fin position, etc.
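
To make the interface concrete, the following is a minimal Python sketch of what a thruster-level control instruction sent over such an interface might look like. The message fields (command, motor_speed, fin_position_deg, duration_s) and the JSON encoding are illustrative assumptions, not the format of any particular ROV's API.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ControlInstruction:
    command: str              # e.g., "move_forward", "hover", "do_spiral"
    motor_speed: float        # normalized thruster speed, 0.0-1.0 (assumed scale)
    fin_position_deg: float   # fin deflection in degrees (assumed field)
    duration_s: float         # how long to hold the command

def encode_for_interface(instruction: ControlInstruction) -> bytes:
    """Serialize an instruction for transmission over the ROV's interface."""
    return json.dumps(asdict(instruction)).encode("utf-8")

# Example: command the ROV to move forward at half thrust for five seconds.
packet = encode_for_interface(
    ControlInstruction("move_forward", motor_speed=0.5,
                       fin_position_deg=0.0, duration_s=5.0))
print(packet)
```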

However, in some cases, the underwater task can include imaging and/or mapping a large area, and imaging and/or mapping the area with a single ROV that is manually controlled by a human operator would take a long time. Therefore, it is desirable to have one or more ROVs that can autonomously navigate underwater and can perform underwater tasks over a larger area.

The environment 100 includes an enhanced ROV 120 to which a module 121 is attached. The module 121 is an apparatus or a hardware kit, e.g., a hardware module, not a software module. Instead of being controlled by an operator on a surface vessel 108 via an umbilical cable 110, the enhanced ROV 120 can be controlled by the module 121 that attaches to the ROV 120 and autonomously determines the navigation plan for the ROV 120 for performing an underwater task.

Instead of using a customized ROV specially programmed for a particular underwater task, an underwater task can be autonomously performed by attaching a module described in this specification to a regular ROV of any type. For example, the enhanced ROV 120 can be the same as the example ROV 102. The enhanced ROV 120 can be any type of ROV, e.g., an ROV of any brand or model. For example, the enhanced ROV 120 can be a smart ROV with complex computation software modules or a simple, basic ROV. The enhanced ROV 120 can be a special purpose ROV or a general purpose ROV.

Similar to the example ROV 102, the enhanced ROV 120 also includes a thruster system, e.g., including the motor 122. Similar to the example ROV 102, the enhanced ROV 120 also includes an interface 124, e.g., an API, through which the ROV 120 receives control instructions 126. Instead of having a special interface with the ROV 120, the module 121 can transmit control instructions 126 to the ROV 120 through the ROV's existing interface that receives instructions from a human operator, e.g., through the umbilical cable 110. For example, the module 121 can communicate with the ROV 120 the same way the ROV 120 communicates with an operator that controls the ROV 120 on the surface vessel 108.

Instead of receiving control instructions 112 from a surface vessel 108 via the umbilical cable 110, the enhanced ROV 120 receives the control instructions 126 from the module 121 attached to the ROV 120. The module 121 can send the control instructions 126 to the ROV 120 via the cable 128 that connects to the interface 124. Because it is not tethered through the umbilical cable 110, the ROV 120 is no longer physically connected to the surface vessel 108 and is not limited by the constraints of the umbilical cable 110. Therefore, the ROV 120 can perform underwater tasks over a large region under the water 118.

The module 121 is a kit that can be configured to perform an underwater task using an ROV. The module 121 can include a watertight housing 130 such that the module 121 can be used underwater. For example, the watertight housing 130 can be a watertight box that can be used underwater. The module 121 can include a mounting hardware 132 that attaches the module 121, e.g., the watertight housing, to the ROV 120. The mounting hardware 132 can include a clamping system, a screwing system, a magnetic system, or a combination of these. For example, the module 121 can be clamped onto the sides of the ROV 120. As another example, the module 121 can bolt onto the ROV 120.

The module 121 can include one or more sensors 136 in the watertight housing 130. The one or more sensors 136 can be configured to generate sensor data that is associated with an underwater task. In some implementations, the one or more sensors can be customized for the module 121 in accordance with the underwater task. The one or more sensors 136 can include a camera sensor, a sonar sensor, a magnetometer, a radar sensor, or a combination of these. For example, a stereo camera can be configured to generate three dimensional images through stereo photography. As another example, a sonar sensor can use sound propagation to navigate, measure distances, or detect objects under the surface of the water. Other telemetry techniques are also possible using the one or more sensors 136.
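
As a rough illustration, the readings from such a sensor suite might be collected into a common timestamped record before any further processing. This is a minimal sketch under assumed field names; a real module would read each value from its sensor driver.

```python
import time
from dataclasses import dataclass, field

@dataclass
class SensorFrame:
    timestamp: float = field(default_factory=time.time)
    stereo_left: bytes = b""              # raw left camera frame
    stereo_right: bytes = b""             # raw right camera frame
    sonar_range_m: float | None = None    # distance to nearest sonar return, meters
    heading_deg: float | None = None      # magnetometer heading, degrees

def poll_sensors() -> SensorFrame:
    """Stub that would read each sensor driver; returns fixed values here."""
    return SensorFrame(sonar_range_m=12.4, heading_deg=87.0)

frame = poll_sensors()
print(frame.timestamp, frame.sonar_range_m, frame.heading_deg)
```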

The module 121 can include a processor 134 in the watertight housing 130. The processor 134 can be a part of a computation engine that controls the ROV 120. The processor 134 can control the ROV 120 to perform an underwater task. Examples of the underwater tasks can be similar to the types of tasks described above for the example ROV 102.

The processor 134 can be configured to receive the sensor data from the one or more sensors 136. Instead of using one or more existing sensors belonging to the ROV 120, the processor 134 can be configured to receive the sensor data obtained from the one or more sensors 136 of the module 121 and the sensor data can be associated with an underwater task.

For example, the module 121 can be a kit for a seagrass mapping task. The module 121 can include a camera sensor that can be configured to detect seagrass on the seagrass bed 114. The sensor data can include images or videos of the seagrass bed 114. The sensor data can be used to determine whether the ROV 120 is located at an underwater region with seagrass or without seagrass.

The processor 134 can be configured to generate a navigation plan for the ROV 120 using the sensor data. The processor 134 can apply a determination logic related to the underwater task. The processor 134 can fuse sensor data obtained from multiple sensors and/or different types of sensors, e.g., cameras, sonars, radars, magnetic sensors, etc., and can use the fused sensor data to generate the navigation plan. The navigation plan can include a position, an orientation, or both, of the ROV 120, based upon the sensor data.

For example, the processor 134 can be configured to generate a navigation plan that includes moving the ROV 120 to a target location in the water 118. The navigation plan can include moving the ROV 120 to a position over the seagrass bed 114 at a desired tilt angle. For example, if the sensor data shows no seagrass at a first region, the processor 134 can determine a navigation plan that includes performing a search pattern at a different region. As another example, the navigation plan can include avoiding collisions with certain objects in the water 118.
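
The following minimal sketch illustrates the kind of determination logic described above for the seagrass example: hold over a detected seagrass bed at a desired tilt angle, or otherwise move on and search a different region. The plan vocabulary and the stubbed detector are illustrative assumptions, not the module's actual logic.

```python
from dataclasses import dataclass

@dataclass
class NavigationPlan:
    action: str                      # "survey", "search", "avoid"
    target_xy: tuple[float, float]   # target position in a local frame, meters
    tilt_deg: float = 0.0            # desired tilt angle over the target

def seagrass_detected(camera_frame: bytes) -> bool:
    """Stub for a vision model that flags seagrass in the current view."""
    return len(camera_frame) > 0  # placeholder logic

def plan_next_step(camera_frame: bytes,
                   position_xy: tuple[float, float]) -> NavigationPlan:
    if seagrass_detected(camera_frame):
        # Hold over the seagrass bed at a mapping-friendly tilt angle.
        return NavigationPlan("survey", position_xy, tilt_deg=30.0)
    # No seagrass here: move to the next region and run a search pattern.
    next_region = (position_xy[0] + 50.0, position_xy[1])
    return NavigationPlan("search", next_region)

print(plan_next_step(b"", (0.0, 0.0)))
```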

In some implementations, the processor 134 can be configured to receive sensor data from one or more sensors included in the ROV 120. The processor 134 can be configured to generate a navigation plan for the ROV 120 using the sensor data from the one or more sensors of the module 121, the sensor data from the one or more sensors included in the ROV 120, or both.

The processor 134 can be configured to determine, using the navigation plan, control instructions 126 that are readable by the ROV 120. The processor 134 can translate the navigation plan into control instructions 126 that are consumable, or actionable by the ROV 120. The processor 134 can determine the control instructions 126 by calculating how to accomplish the navigation plan through the thruster system of the ROV 120. The control instructions 126 can be specifically generated according to the thruster system of the ROV 120, e.g., the motor 122 of the ROV 120, the jet system of the ROV 120, the fin position of the ROV 120, etc.

Examples of control instructions include “moving forward”, “staying stationary”, “doing a spiral”, “tilting 30 degrees”, etc. For example, if the navigation plan is to perform a search pattern, the processor 134 can generate control instructions 126 that moves the ROV 120 to each grid of a grid map of a target region. In some implementations, the control instructions can include sensor-based instructions, e.g., “turn on camera”, “turn on the water-quality sensor”, etc. In some implementations, the control instructions can include complicated high-level instructions, e.g., “search for fish”, “return home”, etc.
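
As one hedged example of expanding a search-pattern plan into per-grid instructions, the following sketch generates waypoints that visit each cell of a grid map in a boustrophedon (lawnmower) order; the grid dimensions and cell size are assumptions for illustration.

```python
def grid_search_waypoints(origin_xy, rows, cols, cell_m):
    """Yield (x, y) waypoints covering a rows-by-cols grid of cell_m cells."""
    x0, y0 = origin_xy
    for r in range(rows):
        # Reverse every other row to avoid long transits between rows.
        col_order = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in col_order:
            yield (x0 + c * cell_m, y0 + r * cell_m)

# Example: a 3x4 grid of 10 m cells starting at the local origin.
for waypoint in grid_search_waypoints((0.0, 0.0), rows=3, cols=4, cell_m=10.0):
    print(waypoint)
```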

The processor 134 can transmit the control instructions 126 to the ROV over the interface 124 of the ROV. The module 121 communicates with the ROV 120 through the same interface 124 that usually connects to the surface vessel 108 via an umbilical cable 110. The ROV does not need to differentiate whether the control instructions 126 are from a human operator on a surface vessel 108 or the module 121 because the control instructions 126 from the module 121 are in the same format as the control instructions 112 from the human operator.

After receiving the control instructions 126 from the module 121, the ROV 120 can perform the underwater task by following the control instructions 126. For example, the ROV 120 can move to a target region, can rotate to an orientation, or both. The ROV can obtain data for the underwater task by scanning the target region using one or more sensors of the ROV 120, or one or more sensors of the module 121, or a combination of both.

After performing the underwater task, the ROV 120 can send the data for the underwater task to a computer through wireless or wired communication techniques. For example, the ROV 120 can navigate to the surface vessel 108. The ROV 120 can upload the data for the underwater task to a computer at the surface vessel 108. The computer at the surface vessel 108 can store the data for the underwater task. In some implementations, the computer at the surface vessel 108 can fuse data from multiple ROVs or multiple trips of a single ROV into a more complete result, e.g., a full map of the seagrass bed 114. In some implementations, the ROV 120 can connect to the surface vessel 108 to recharge the batteries of the ROV 120.

There can be many different kinds of underwater tasks, and each underwater task can have a respective module that can be attached to any type of ROV to perform the task. For example, available modules to be purchased at a store can include a module for seagrass mapping, a module for counting fish, a module for biodiversity observation, and a module for pollution detection. A user can obtain one of these modules for a desired underwater task, and can easily attach the module to any type of ROV. The module can generate control instructions for the particular ROV that it is attached to, and can use the control instructions to control the particular ROV to autonomously perform the desired underwater task. Instead of being designed for a specific type of ROV, the module described in this specification can work with any type (e.g., brand) of ROV. For example, the module described in this specification includes a mapping system that converts navigation plans into control instructions that are specific to the type of ROV to which the module is attached.

FIG. 2 is a flow chart illustrating an example of a process 200 for performing an underwater task by a module attached to an ROV. The process 200 can be performed by one or more computer systems, for example, the module 121 that attaches to an ROV 120, the processor 134, or a combination of both. In some implementations, some or all of the process 200 can be performed by another computer system, which can be located near the water 118 or at a remote location.

The system receives sensor data from one or more sensors (202). For example, the module 121 can include stereo cameras that can capture an image or a video of the seagrass bed 114.

The system generates a navigation plan for an ROV using the sensor data (204). In some implementations, the module 121 can include a machine learning engine trained to generate a result of one or more computer vision tasks based on input indicative of the sensor data. For example, the module 121 can include a machine learning engine trained to generate a result of object detection tasks to detect fish in the water 118 based on camera image input. In some implementations, the machine learning engine can be trained to search for objects on the seabed and to generate a navigation plan for the ROV to move toward those objects for better camera views. In some implementations, the machine learning engine can be trained to generate a navigation plan for the ROV to get close to coral reefs while avoiding collisions with the coral reefs. In some implementations, the machine learning engine can be trained to identify seagrass growths and can generate a navigation plan for the ROV to drive at a distance above the seagrass, preventing the ROV from getting stuck in the seagrass.
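
The following is a minimal sketch of such detection-driven navigation: run a vision model on a camera frame and steer toward the highest-confidence detection for a better view. The detector is stubbed; a real implementation would load a trained network, e.g., on the module's on-board GPU.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g., "fish", "seagrass", "coral"
    confidence: float
    center_x: float    # horizontal image position, 0.0 (left) to 1.0 (right)

def run_detector(frame: bytes) -> list[Detection]:
    """Stub for an object-detection model; returns fixed detections here."""
    return [Detection("fish", 0.91, center_x=0.7)]

def steer_toward_best(frame: bytes) -> str:
    detections = run_detector(frame)
    if not detections:
        return "continue_search"
    best = max(detections, key=lambda d: d.confidence)
    # Turn until the detection is near the image center, then approach.
    if best.center_x > 0.55:
        return "yaw_right"
    if best.center_x < 0.45:
        return "yaw_left"
    return "move_forward"

print(steer_toward_best(b"frame"))
```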

In some implementations, the processor 134 (e.g., the machine learning engine) can include a graphics processing unit (GPU), e.g., an on-board GPU of the module 121. The on-board GPU can be configured to process the sensor data using one or more machine learning algorithms. The on-board GPU can run the one or more machine learning algorithms while the ROV is navigating in the water. Thus, the system can automatically identify points of interest and can dynamically drive the ROV over to areas to collect data, e.g., more detailed data.

The system determines, using the navigation plan, control instructions that are readable by the ROV (206). In some implementations, the module 121 can generate the control instructions using closed-loop algorithms for directing the ROV. The closed-loop algorithms can be performed while the ROV is navigating in the water. The closed-loop algorithms can be machine learning algorithms, non-machine learning algorithms, or a combination of both. Examples of activities that are based on non-machine learning algorithms can include: guiding the ROV in a certain pattern based on Global Positioning System (GPS) coordinates, guiding the ROV in accordance with readings from one or more sensors (e.g., an oil sensor, a turbidity sensor, or a salinity sensor), etc.
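
As an illustration of the non-machine-learning case, the following sketch shows one step of a simple closed-loop proportional controller that turns the ROV toward the next GPS waypoint. The proportional gain and the yaw-rate limit are illustrative assumptions.

```python
import math

def bearing_deg(from_xy, to_xy):
    """Bearing from one local-frame position to another, in degrees."""
    dx, dy = to_xy[0] - from_xy[0], to_xy[1] - from_xy[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

def heading_command(position_xy, heading_now_deg, waypoint_xy, k_p=0.5):
    """One step of a proportional controller on heading error."""
    # Wrap the error into [-180, 180) so the ROV turns the short way around.
    error = (bearing_deg(position_xy, waypoint_xy)
             - heading_now_deg + 180.0) % 360.0 - 180.0
    yaw_rate = max(-30.0, min(30.0, k_p * error))  # clamp to +/-30 deg/s
    return yaw_rate

# Example: ROV at the origin heading north, waypoint to the northeast.
print(heading_command((0.0, 0.0), 0.0, (100.0, 100.0)))
```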

In some implementations, the system can receive the sensor data from the one or more sensors and can send the sensor data to a remote computer for processing. In some implementations, the system can generate a navigation plan for the ROV using the sensor data, with the computations being performed at a remote location, e.g., on a cloud platform. In some implementations, the system can, using the navigation plan, determine the control instructions with the computations being performed at a remote location, e.g., on a cloud platform. The system can send the navigation plan, the control instructions, or a combination of both, to the ROV over a wireless network.

In some implementations, the module 121 can perform automated control of the ROV based on the thruster system of the ROV 120 and the one or more sensors. The system can determine the control instructions based on the available engine, jets, and fins of the ROV, and on the position and sensing range of the one or more sensors of the module 121.
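
One way a module might account for the specific thruster layout of the ROV it is attached to is a mixing matrix that maps a desired body-frame motion to per-thruster commands, as in the following sketch. The four-thruster layout and the matrix values are illustrative assumptions, not the configuration of any particular ROV.

```python
# Rows: thrusters; columns: contribution to (surge, sway, yaw).
MIXING = [
    [1.0,  1.0,  1.0],   # front-left
    [1.0, -1.0, -1.0],   # front-right
    [1.0, -1.0,  1.0],   # rear-left
    [1.0,  1.0, -1.0],   # rear-right
]

def allocate(surge: float, sway: float, yaw: float) -> list[float]:
    """Map normalized body-frame commands to per-thruster outputs in [-1, 1]."""
    demand = (surge, sway, yaw)
    raw = [sum(m * d for m, d in zip(row, demand)) for row in MIXING]
    # Scale all thrusters down uniformly if any output would saturate.
    peak = max(1.0, max(abs(v) for v in raw))
    return [v / peak for v in raw]

# Example: move forward while yawing gently to the right.
print(allocate(surge=0.6, sway=0.0, yaw=0.2))
```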

The system transmits the control instructions to the ROV over an interface of the ROV (208). After receiving the control instructions, the ROV can read and interpret the control instructions. The ROV can perform the underwater task by following the control instructions.

In some implementations, the underwater task can include imaging and/or mapping a large area, and imaging and/or mapping the area with a single ROV would take a long time. In some implementations, the system can include a fleet of multiple ROVs, and some or all of the ROVs can be autonomously controlled by a module 121.

In some implementations, the module can include a communication engine, e.g., an acoustic communication engine, that can communicate with a mothership, e.g., the surface vessel 108, the rest of the fleet, or both. In some implementations, a planning system at the mothership can plan routes and mapping areas for each ROV to avoid overlap. In some implementations, an ROV of the fleet of multiple ROVs can dock back at the mothership to recharge and to upload sensor data, e.g., video footage, to a computer.
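
The following sketch illustrates one simple form of such overlap-avoiding planning: divide a rectangular survey area into one strip per ROV so that no two vehicles map the same region. The area bounds and fleet size are illustrative assumptions.

```python
def partition_area(x_min, x_max, y_min, y_max, num_rovs):
    """Split the area into num_rovs equal east-west strips, one per ROV."""
    strip_height = (y_max - y_min) / num_rovs
    return [
        {"rov": i,
         "bounds": (x_min, x_max,
                    y_min + i * strip_height,
                    y_min + (i + 1) * strip_height)}
        for i in range(num_rovs)
    ]

# Example: a 400 m x 300 m area split across three ROVs.
for assignment in partition_area(0.0, 400.0, 0.0, 300.0, num_rovs=3):
    print(assignment)
```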

In some implementations, a computer at the mothership can fuse sensor data, e.g., telemetry data and video footage, to provide a full map of an area, e.g., a seagrass bed 114. For example, the full map can include video footage, pictures, sonar, and other telemetry data. In some implementations, the computer at the mothership can include a machine learning engine, and the machine learning engine can be trained to highlight points of interest for human operators to perform further review.

This specification uses the term “configured” in connection with systems and computer program components. For a system of one or more computers to be configured to perform particular operations or actions means that the system has installed on it software, firmware, hardware, or a combination of them that in operation cause the system to perform the operations or actions. For one or more computer programs to be configured to perform particular operations or actions means that the one or more programs include instructions that, when executed by data processing apparatus, cause the apparatus to perform the operations or actions.

Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory storage medium for execution by, or to control the operation of, data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them. Alternatively or in addition, the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus.

The term “data processing apparatus” refers to data processing hardware and encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can also be, or further include, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can optionally include, in addition to hardware, code that creates an execution environment for computer programs, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.

A computer program, which may also be referred to or described as a program, software, a software application, an app, a module, a software module, a script, or code, can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages; and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub-programs, or portions of code. A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a data communication network.

In this specification the term “engine” is used broadly to refer to a software-based system, subsystem, or process that is programmed to perform one or more specific functions. Generally, an engine will be implemented as one or more software modules or components, installed on one or more computers in one or more locations. In some cases, one or more computers will be dedicated to a particular engine; in other cases, multiple engines can be installed and running on the same computer or computers.

The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by special purpose logic circuitry, e.g., an FPGA or an ASIC, or by a combination of special purpose logic circuitry and one or more programmed computers.

Computers suitable for the execution of a computer program can be based on general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. The central processing unit and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a universal serial bus (USB) flash drive, to name just a few.

Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.

To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's device in response to requests received from the web browser. Also, a computer can interact with a user by sending text messages or other forms of message to a personal device, e.g., a smartphone that is running a messaging application, and receiving responsive messages from the user in return.

Data processing apparatus for implementing machine learning models can also include, for example, special-purpose hardware accelerator units for processing common and compute-intensive parts of machine learning training or production, i.e., inference, workloads.

Machine learning models can be implemented and deployed using a machine learning framework, e.g., a TensorFlow framework, a Microsoft Cognitive Toolkit framework, an Apache Singa framework, or an Apache MXNet framework.

Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface, a web browser, or an app through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (LAN) and a wide area network (WAN), e.g., the Internet.

The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, a server transmits data, e.g., an HTML page, to a user device, e.g., for purposes of displaying data to and receiving user input from a user interacting with the device, which acts as a client. Data generated at the user device, e.g., a result of the user interaction, can be received at the server from the device.

While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially be claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.

Similarly, while operations are depicted in the drawings and recited in the claims in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous.

Claims

1. An apparatus configured to attach to a remotely operated vehicle (ROV), the apparatus comprising:

a watertight housing;
a mounting hardware that attaches the watertight housing to the ROV;
one or more sensors in the watertight housing, the one or more sensors configured to generate sensor data that is associated with an underwater task; and
one or more processors in the watertight housing, the one or more processors configured to: receive the sensor data from the one or more sensors; generate a navigation plan for the ROV using the sensor data; determine, using the navigation plan, control instructions configured to control the ROV to perform the underwater task; and provide the control instructions to an interface of the ROV configured to communicate with the apparatus.

2. The apparatus of claim 1, wherein the one or more processors comprise a graphics processing unit configured to process the sensor data using a machine learning algorithm.

3. The apparatus of claim 1, comprising a machine learning engine trained to generate a result of one or more computer vision tasks based on input indicative of the sensor data.

4. The apparatus of claim 1, wherein the mounting hardware comprises at least one of a clamping system, a screwing system, or a magnetic system.

5. The apparatus of claim 1, wherein the ROV is untethered to any surface vessel.

6. The apparatus of claim 1, wherein the one or more sensors are customized for the apparatus in accordance with the underwater task.

7. The apparatus of claim 1, comprising a communication engine configured to communicate with a surface vessel or another ROV.

8. The apparatus of claim 1, wherein the one or more processors are configured to generate the navigation plan for the ROV using the sensor data by fusing the sensor data obtained from multiple sensors.

9. The apparatus of claim 1, wherein the interface of the ROV comprises an application programming interface (API) through which the ROV receives control instructions.

10. A computer-implemented method, comprising:

receiving sensor data from one or more sensors included in an apparatus, wherein the apparatus is configured to attach to a remotely operated vehicle (ROV), wherein the one or more sensors is in a watertight housing and is configured to generate sensor data that is associated with an underwater task, wherein a mounting hardware attaches the watertight housing to the ROV;
generating a navigation plan for the ROV using the sensor data;
determining, using the navigation plan, control instructions configured to control the ROV to perform the underwater task; and
providing the control instructions to an interface of the ROV configured to communicate with the apparatus.

11. The method of claim 10, comprising: processing, by a graphics processing unit, the sensor data using a machine learning algorithm.

12. The method of claim 10, comprising: generating, by a machine learning engine, a result of one or more computer vision tasks based on input indicative of the sensor data.

13. The method of claim 10, wherein the mounting hardware comprises at least one of a clamping system, a screwing system, or a magnetic system.

14. The method of claim 10, wherein the ROV is untethered to any surface vessel.

15. The method of claim 10, wherein the one or more sensors are customized for the apparatus in accordance with the underwater task.

16. The method of claim 10, wherein the apparatus comprises a communication engine configured to communicate with a surface vessel or another ROV.

17. The method of claim 10, wherein generating the navigation plan for the ROV using the sensor data comprises fusing the sensor data obtained from multiple sensors to generate the navigation plan.

18. The method of claim 10, wherein the interface of the ROV comprises an application programming interface (API) through which the ROV receives control instructions.

19. A system comprising one or more computers and one or more storage devices storing instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising:

receiving sensor data from one or more sensors included in an apparatus, wherein the apparatus is configured to attach to a remotely operated vehicle (ROV), wherein the one or more sensors is in a watertight housing and is configured to generate sensor data that is associated with an underwater task, wherein a mounting hardware attaches the watertight housing to the ROV;
generating a navigation plan for the ROV using the sensor data;
determining, using the navigation plan, control instructions configured to control the ROV to perform the underwater task; and
providing the control instructions to an interface of the ROV configured to communicate with the apparatus.

20. The system of claim 19, wherein the operations comprise: processing, by a graphics processing unit, the sensor data using a machine learning algorithm.

Patent History
Publication number: 20240124111
Type: Application
Filed: Aug 10, 2023
Publication Date: Apr 18, 2024
Inventors: Thomas Robert Swanson (Sunnyvale, CA), Harrison Pham (Sunnyvale, CA), Kathy Sun (Boulder Creek, CA), Matthew Aaron Knoll (Mountain View, CA)
Application Number: 18/232,766
Classifications
International Classification: B63G 8/00 (20060101);