TOOL JOINT ASSIST
A tool joint assist system includes one or more iron roughnecks for making up or breaking out tubular joints; a programmable logic controller for each of the one or more iron roughnecks; at least one camera associated with each of the one or more iron roughnecks; a computer that calculates a stick-up height; and one or more user-input devices. Each programmable logic controller controls only one iron roughneck. The computer includes a processor, memory, and one or more monitors displaying an image from the at least one camera. The programmable logic controllers, the cameras, the monitors, the user-input devices, and the computer are connected by a network.
The present document is based on and claims priority to U.S. Provisional Application Ser. No. 62/349,297, filed 13 Jun. 2016, which is incorporated herein by reference in its entirety.
BACKGROUND
Wells are used in the oil and gas industry to explore subterranean formations and to produce hydrocarbon liquids and hydrocarbon gases. Drilling the wells may include making up, or assembling, and/or breaking out, or disassembling, tubulars. Making up and breaking out tubulars may be done on the drilling rig floor above the well and may be part of running in the hole or tripping out of the hole. In addition, work on tubulars may include making up or breaking out stands of pipe, for instance, over a mousehole in order to speed up drilling operations. A stand of pipe is two or three single joints, or segments, of drill pipe or drill collars that remain screwed together during tripping operations. Assembling and disassembling tubulars may be performed by an iron roughneck under the control of an operator. In some cases, as in working over a mousehole, a second iron roughneck may be used. The iron roughneck must be properly positioned to enable it to grab the lower and upper tubulars of the joint being made up or broken out. The location in a horizontal plane of the lower and upper tubulars may be stable from joint to joint within the tolerances of the iron roughneck. In contrast, the vertical positioning of the iron roughneck may need to be adjusted before each operation. The height of the lower tubular above a fixed position, say, the rig floor, is known as the stick-up height. This height may differ for each joint that is made up or broken out. Thus, the operator must have a way to determine this height in order to position the iron roughneck properly. In some circumstances, the location of the operator may prevent direct observation of the stick-up height, for example, because the iron roughneck blocks the operator's view. In such circumstances, a second observer may need to relay positioning information to the operator. Such a method may be inefficient. Further, requiring additional personnel to be near the tubulars increases safety risks at the wellsite.
SUMMARY
This summary is provided to introduce a selection of concepts that are further described below in the detailed description. This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used as an aid in limiting the scope of the claimed subject matter.
In one aspect, embodiments disclosed herein relate to a tool joint assist system and to a positioning method for equipment such as an iron roughneck.
In one or more embodiments of the present disclosure, a tool joint assist system may comprise one or more iron roughnecks for making up or breaking out tubular joints; a programmable logic controller for each of the one or more iron roughnecks, each programmable logic controller controlling only one iron roughneck; at least one camera associated with each of the one or more iron roughnecks; a computer that calculates a stick-up height, the computer comprising a processor and memory; one or more monitors displaying an image from the at least one camera; and one or more user-input devices, wherein the programmable logic controllers, the cameras, the monitors, the user-input devices, and the computer are connected by a network.
In one or more embodiments of the present disclosure, a positioning method may comprise capturing a two-dimensional image of a first object to be manipulated by a second object; displaying the captured two-dimensional image of the first object combined with an aiming line overlay; adjusting a location of the aiming line overlay relative to the first object in the captured two-dimensional image according to user input; calculating a parameter of the first object from the adjusted location of the aiming line overlay; sending the parameter to a control system for the second object; and setting a position of the second object by the control system based on the parameter of the first object.
Other aspects and advantages of the claimed subject matter will be apparent from the following description and the appended claims.
One or more embodiments of the present disclosure relate to the making up and/or breaking out of two objects, such as two pieces of a tubular that join together to form a tool joint. Specifically, one or more embodiments of the present disclosure relate to systems and methods that may be used to assist in tool joint formation or tool joint disassembly. However, it is also intended that the systems and methods may be applied to other objects in other applications.
Embodiments of the present disclosure will now be described in detail with reference to the accompanying figures. Like elements in the various figures may be denoted by like reference numerals for consistency. Further, in the following detailed description of embodiments of the present disclosure, numerous specific details are set forth in order to provide a more thorough understanding of the claimed subject matter. However, it will be apparent to one of ordinary skill in the art that the embodiments disclosed herein may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description. Additionally, it will be apparent to one of ordinary skill in the art that the scale of the elements presented in the accompanying figures may vary without departing from the scope of the present disclosure.
In the present disclosure, the following definitions are applied.
The term “or” is understood to be an “inclusive or” unless explicitly stated otherwise. Under the definition of “inclusive or,” the expression “A or B” is understood to mean “A alone, B alone, or both A and B.” Similarly, “A, B, or C” is understood to mean “A alone, B alone, C alone, both A and B, both A and C, both B and C, or A and B and C.”
An iron roughneck is a piece of equipment used to connect and disconnect segments of pipe, casing, and the like on a drilling rig. The iron roughneck may be hydraulically powered. The iron roughneck may clamp a bottom length of pipe and may turn a top length of pipe to connect or disconnect the pipe segments to make up or break out a tool joint.
An operator is a person who controls the operation of the iron roughneck. The operator may be a driller, an assistant driller, or some other person involved in the drilling operation.
Tubulars is a generic term pertaining to any type of oilfield pipe, such as drill pipe, drill collars, pup joints, casing, production tubing, pipeline, and the like. Examples and embodiments described using one type of tubular should be understood to apply to other types of tubulars unless explicitly stated otherwise. A tubular joint is the connection of two tubulars. In some cases, this may be done with threaded connections.
A top drive is a device that turns a drill string and is suspended from a hook.
To make up is to connect tools or tubulars by assembling the threaded connections incorporated at either end of every tool and tubular. The threaded tool joints may be correctly identified and then torqued to the correct value to ensure a secure tool string without damaging the tool or tubular body.
To break out is to unscrew drill string components, which are coupled by various threadforms known as connections, including tool joints and other threaded connections.
A two-dimensional (2D) image refers to an image with two spatial dimensions. Time is not considered a dimension of an image, or of an image stream in the case of video streaming or a series of images in time.
The wellsite 1 may include a derrick 2 erected above a rig floor 3. The wellsite 1 may include lifting gear such as a crown block 4 mounted to the derrick 2 and a traveling block 5. The crown block 4 and the traveling block 5 are shown as being interconnected by a cable 6 that is driven by a draw works 7 to control the upward and downward movement of the traveling block 5. The draw works 7 may be configured to be automatically operated to control a rate of drop or release of a drill string into a wellbore 11 during drilling. A wellbore may also be referred to as a borehole.
The traveling block 5 may carry a hook 8 from which is suspended a top drive 9. The top drive 9 supports a drill string 10 in the wellbore 11. In some embodiments, a rotary table and a kelly may be used to turn the drill string 10 instead of a top drive 9. According to one or more embodiments, a longitudinal end of the drill string 10 may include a bottom hole assembly (BHA) 18. The BHA 18 may include drill string components 16 such as measurement-while-drilling (MWD) and logging-while-drilling (LWD) tools, jars, vibrational tools, downhole motors, and the like, as well as a downhole tool 13. Examples of downhole tools 13 include a drill bit, reamer, perforating gun, mill, cementing tool, and the like. The BHA 18 may also include a motor 17, for example a steerable drilling motor. The walls of the wellbore 11 may optionally be lined with a casing string (not shown) that is cemented in place. In such instances, the casing string (not shown) may be run into the wellbore 11 upon drilling to a given depth. However, in order to run the casing (not shown) into the wellbore 11, the drill string 10, including the BHA 18, may be pulled out of the hole and then re-run into the hole after the casing string has been run, in order to drill the next stage of the wellbore 11.
The drill string 10 may include coiled tubing, a wireline, or segments of interconnected drill pipes 15 (e.g., drill pipe, drill collars, transition or heavy weight drill pipe, and the like). Where the drill string includes segmented tubulars, a make-up device 14 may be used to connect a tool joint of one tubular to a tool joint of another tubular. For instance, the make-up device 14 may include power tongs or an iron roughneck used to apply torque for connecting a pin connection of one tubular with a box connection of a mating tubular. The make-up device 14 may also be used to break down connections when tripping the drill string 10 out of the wellbore 11. The make-up device 14 may also be used to run a casing string (not shown) into the wellbore 11. Thus, embodiments of the present disclosure apply equally to the joining of casing strings.
Segments of interconnected drill pipe 15 may in one or more embodiments look like drill pipe 230 shown schematically in
In some embodiments, an iron roughneck is positioned to make up a joint like the one illustrated in
In one or more embodiments of the present disclosure, a system is provided to allow the operator to correctly position the iron roughneck (not shown in
In one or more embodiments, a tool joint assist system 700 may include one or more cameras 765, 767 for recording 2D images. While the tool joint assist system 700 may operate with a single camera per iron roughneck, in some embodiments, more than one camera may be associated with each iron roughneck, any of the additional cameras being available in case of malfunction or failure of the primary camera 765 for a particular iron roughneck RN1. However, even if multiple cameras are present, in some embodiments, the tool joint assist system 700 and methods associated with the system 700 may use only one camera 765 to provide the necessary 2D image data.
Referring again to
In some embodiments, the camera and the 2D images it captures may be calibrated. The calibration may allow conversion of pixel location in a 2D image to a spatial position at the wellsite. In some embodiments, that position may be a stick-up height above the rig floor. The spatial position may be recorded in millimeters. When a plurality of cameras is present, a target may be selected for calibration. That target may be a mousehole, a well center, or the like. Calibration may be performed using an object of known dimension. A calibration image is acquired. In some cases, a sheet of A4 paper may be used in the calibration image as an object of known dimension; however, any object of known size may be used. A command may be given to freeze the image to prevent movement of the reference object during calibration. A top reference line may be moved to the top of the reference object, for example, an A4 sheet of paper. A bottom reference line may be moved to the bottom of the reference object. A base reference line may be moved to a zero position of the iron roughneck. In one or more embodiments, the zero position may be the drilling rig floor or the like. The height of the reference object may be entered by a user. The calibration may take about 5 minutes. In some embodiments, only a mechanical zoom is used on the camera in order not to decalibrate the 2D image. In some embodiments, any distortion of the image may be taken into account by the calibration, for example, image distortion caused by the lens. In one or more embodiments, only the center portion of the 2D image is of interest. In some embodiments, a calibration that is accurate to +/−5 mm may be adequate.
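The pixel-to-height conversion described above can be sketched as a simple linear calibration: the top and bottom reference lines on an object of known height give a millimeters-per-pixel scale, and the base reference line supplies the zero position. The function names, the linear model, and the example values are illustrative assumptions, not the claimed method:

```python
# Hypothetical sketch of the linear pixel-to-height calibration described
# above. Image rows are assumed to grow downward from the top of the frame.

def calibrate(top_row, bottom_row, object_height_mm):
    """Return a mm-per-pixel scale from a reference object of known height.

    top_row / bottom_row are the pixel rows of the reference lines placed
    at the top and bottom of the reference object (e.g., an A4 sheet).
    """
    pixel_span = bottom_row - top_row
    if pixel_span <= 0:
        raise ValueError("bottom reference line must lie below top line")
    return object_height_mm / pixel_span


def stick_up_height(target_row, base_row, mm_per_pixel):
    """Height of a target above the zero (base) reference line, in mm."""
    return (base_row - target_row) * mm_per_pixel


# Example: an A4 sheet (297 mm tall) spans rows 120..417 in the image,
# and the zero (rig-floor) reference line sits at row 900.
scale = calibrate(120, 417, 297.0)          # 1.0 mm per pixel
height = stick_up_height(450, 900, scale)   # 450.0 mm above the floor
```

A calibration of this form also makes the +/−5 mm accuracy figure concrete: at 1 mm per pixel, the operator's aiming line must be placed within about five pixels of the true feature.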
In one or more embodiments, the upper and lower pipe segments 345, 350 may be held and manipulated by an iron roughneck (not shown in
Referring again to
In one or more embodiments, 2D images from camera 365 may be individually captured or may be part of a continuous video stream. In either case, a 2D image may be displayed on a monitor 737, 739 where it may be viewed by an operator through a graphical user interface.
Referring back to
Referring to
In some embodiments, a control system is set to auto sequence 670 and a correct step, e.g., take a snapshot of a tool joint, is entered 672. Taking a snapshot is understood to be a way to capture an image. The snapshot may be taken anytime between when the slips are set (i.e., the tubular is not moving) and when the iron roughneck clamps onto the tubular to start the make up or break out. In one or more embodiments, the snapshot may be taken just before the iron roughneck extends from a parked position to a target, for example, a well center, a mousehole, or the like. In some embodiments, the control system may be a PLC to control an iron roughneck. In some embodiments, a device other than an iron roughneck may be controlled by the PLC.
The capture command, e.g., “take a snapshot,” is sent 674 from the PLC and detected 676 by a computer. In some embodiments, the capture command is sent using UDP/IP Multicast protocol. The capture command may be received by the computer using Applicom PC Network Interface (PCNI) driver software, which may then pass the capture command to the tool joint assist application (residing, for example, on the computer). In one or more embodiments, the computer may be receiving a sequence of 2D image snapshots or a 2D video stream from a camera associated with the iron roughneck. The 2D image snapshots may be in JPEG format. In some embodiments, the video stream may be in MJPEG format. In one or more embodiments, 2D image data may be communicated from the camera to a camera interface in the computer, for example, by an open source library that handles video cards, such as AForge (and Aforge.dll or AForge.video.dll, in particular). The tool joint assist application may then capture a 2D image of a first object from the camera at 678. In one or more embodiments, the first object may be a tool joint being made up or broken out. The 2D image of the first object may then be displayed within the graphical user interface on a monitor, where the image along with an aiming line overlay may be seen by an operator at 680. The operator, providing user input, may then adjust the location of the aiming line overlay such that the aiming line overlay is aligned with a particular feature of the first object at 682. In some embodiments, when the first object is a tool joint, the aiming line overlay may be aligned with the top of the lower tubular. In some embodiments, the operator may adjust the aiming line overlay using a user-input device. The user-input device may be a keyboard, a mouse, a touchpad, a joystick, a touchscreen, or the like. 
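The capture-command transport mentioned above (UDP/IP multicast) can be sketched with Python's standard socket module. The multicast group address, port, and message payload below are hypothetical; the actual PLC and Applicom PCNI message formats are not specified by this disclosure:

```python
# Illustrative sketch of multicasting a capture command from a control
# system to listeners on the rig network. Group, port, and payload are
# assumptions for demonstration only.
import socket
import struct

MCAST_GROUP = "239.1.1.1"   # assumed multicast group
MCAST_PORT = 5007           # assumed port


def make_sender(ttl=1):
    """UDP socket suitable for multicasting a capture command."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # limit multicast to the local network segment
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    return sock


def send_capture_command(sock, addr=(MCAST_GROUP, MCAST_PORT),
                         message=b"take_snapshot"):
    """Send the capture-command datagram to the given address."""
    sock.sendto(message, addr)


def make_listener(group=MCAST_GROUP, port=MCAST_PORT):
    """UDP socket joined to the multicast group, ready to receive commands."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    mreq = struct.pack("4sl", socket.inet_aton(group), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock
```

In this sketch the computer hosting the tool joint assist application would run the listener, while the PLC side would use the sender.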
When the operator is satisfied with the location of the aiming line overlay, the location may be confirmed using the same user-input device or a different one. Confirming operations may include clicking a button one or more times, tapping a touchpad or touchscreen one or more times, pressing a key, e.g., an <Enter> key on a keyboard, and the like. In some embodiments, the user-input device may communicate to the tool joint assist application using the Applicom PCNI driver. In some embodiments, the Applicom PCNI driver may communicate with the tool joint assist software using Applicom API DLL or Applicom DB. At 684, the adjusted location of the aiming line overlay may be used along with camera calibration data to calculate a parameter. In some embodiments, for example when an iron roughneck is being used to make up or break out tubulars, the parameter may be the stick-up height.
At 686, the parameter may be sent to the control system of a second object. In some embodiments, the control system may be a PLC. In some embodiments, the second object may be an iron roughneck. At 688, the position of the second object is set by the control system based on the parameter.
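The overall sequence just described (capture, display and adjust, calculate, send, set) can be sketched as a single orchestration routine with injected callables standing in for the camera, the graphical user interface, and the control system. All names are illustrative assumptions:

```python
# Minimal sketch of one positioning cycle. Each dependency is injected so
# the rig-specific pieces (camera, GUI, PLC) can be supplied separately.

def position_second_object(capture, display_and_adjust, calculate,
                           send, set_position):
    """Run one positioning cycle and return the calculated parameter.

    capture()             -> 2D image of the first object
    display_and_adjust(i) -> adjusted aiming-line location (pixel row)
    calculate(row)        -> parameter, e.g., stick-up height in mm
    send(p)               -> deliver the parameter to the control system
    set_position(p)       -> control system positions the second object
    """
    image = capture()                     # capture a 2D image
    aim_row = display_and_adjust(image)   # operator aligns the overlay
    parameter = calculate(aim_row)        # convert via calibration data
    send(parameter)                       # transmit to the control system
    set_position(parameter)               # position the second object
    return parameter
```

With the calibration of the earlier discussion, `calculate` would map the aiming-line pixel row to a stick-up height in millimeters.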
In one or more embodiments, a step of calibrating using a calibration image may be performed prior to calculating a parameter. Details about calibration have been provided above.
The tool joint assist system and method of the present disclosure provide several advantages. The system allows an operator to accurately position a piece of equipment, often heavy equipment, e.g., an iron roughneck, from a remote, safe location. It does not require a second person to assist in the positioning process by passing visual information to the operator. Both of these advantages offer improved safety to personnel operating heavy equipment. In addition, 3D sensors, with their attendant computational cost and need for powerful computers, are not required. Only a single camera acquiring a 2D image is required. Further, the system is simple and robust, having few requirements and making few assumptions. Among the assumptions and techniques that are not required by the present disclosure are: 1) that both tubulars are present in the field of view (FOV); 2) that the distance between the tubulars to be made up or broken out and other objects in the FOV is sufficiently large, or their shapes sufficiently different, to make the tubulars distinguishable; 3) histogram analysis of the data; 4) that objects have different colors for segmentation, because the present disclosure does not require segmentation; 5) that the drill floor is flat; 6) that a region around the mousehole is planar; 7) that the tubular is cylindrical; 8) that the lower and upper parts of the FOV contain only non-cased pipes; and 9) any assumption about the difference in radius between the cased and non-cased parts of the pipe.
The tool joint assist system may be used with a plurality of iron roughnecks. Each of the plurality of iron roughnecks may have a plurality of cameras as a redundancy in the event of failure or malfunction of one or more cameras. The tool joint assist system, however, utilizes only one of the plurality of cameras, that camera providing two-dimensional images. Advantageously, the tool joint assist system does not use any three-dimensional sensors, where three-dimensional refers to three spatial dimensions. By not using three-dimensional sensors, the system requires less equipment, less costly equipment, less network bandwidth, and less computational power, all providing cost savings and added efficiency to the system.
Embodiments may be implemented on a computing system. Any combination of mobile, desktop, server, router, switch, embedded device, or other types of hardware may be used. For example, as shown in
The computer processor(s) (802) may be an integrated circuit for processing instructions. For example, the computer processor(s) may be one or more cores or micro-cores of a processor. The computing system (800) may also include one or more input devices (810), such as a touchscreen, keyboard, mouse, microphone, touchpad, electronic pen, or any other type of input device.
The communication interface (812) may include an integrated circuit for connecting the computing system (800) to a network (not shown) (e.g., a local area network (LAN), a wide area network (WAN) such as the Internet, mobile network, or any other type of network) and/or to another device, such as another computing device.
Further, the computing system (800) may include one or more output devices (807), such as a screen (e.g., a liquid crystal display (LCD), a plasma display, touchscreen, cathode ray tube (CRT) monitor, projector, or other display device), a printer, external storage, or any other output device. One or more of the output devices may be the same or different from the input device(s). The input and output device(s) may be locally or remotely connected to the computer processor(s) (802), non-persistent storage (804), and persistent storage (806). Many different types of computing systems exist, and the aforementioned input and output device(s) may take other forms.
Software instructions in the form of computer readable program code to perform embodiments of the disclosure may be stored, in whole or in part, temporarily or permanently, on a non-transitory computer readable medium such as a CD, DVD, storage device, a diskette, a tape, flash memory, physical memory, or any other computer readable storage medium. Specifically, the software instructions may correspond to computer readable program code that, when executed by a processor(s), is configured to perform one or more embodiments of the disclosure.
The computing system (800) in
Although not shown in
The nodes (e.g., node X (822), node Y (824)) in the network (820) may be configured to provide services for a client device (826). For example, the nodes may be part of a cloud computing system. The nodes may include functionality to receive requests from the client device (826) and transmit responses to the client device (826). The client device (826) may be a computing system, such as the computing system shown in
The computing system or group of computing systems described in
Based on the client-server networking model, sockets may serve as interfaces or communication channel end-points enabling bidirectional data transfer between processes on the same device. Foremost, following the client-server networking model, a server process (e.g., a process that provides data) may create a first socket object. Next, the server process binds the first socket object, thereby associating the first socket object with a unique name and/or address. After creating and binding the first socket object, the server process then waits and listens for incoming connection requests from one or more client processes (e.g., processes that seek data). At this point, when a client process wishes to obtain data from a server process, the client process starts by creating a second socket object. The client process then proceeds to generate a connection request that includes at least the second socket object and the unique name and/or address associated with the first socket object. The client process then transmits the connection request to the server process. Depending on availability, the server process may accept the connection request, establishing a communication channel with the client process, or the server process, busy in handling other operations, may queue the connection request in a buffer until the server process is ready. An established connection informs the client process that communications may commence. In response, the client process may generate a data request specifying the data that the client process wishes to obtain. The data request is subsequently transmitted to the server process. Upon receiving the data request, the server process analyzes the request and gathers the requested data. Finally, the server process then generates a reply including at least the requested data and transmits the reply to the client process. The data may be transferred, more commonly, as datagrams or a stream of characters (e.g., bytes).
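The bind/listen/connect/request/reply sequence described above can be sketched on the loopback interface with Python's standard socket module; the request and reply payloads are placeholders:

```python
# Sketch of the client-server socket exchange described above: the server
# creates and binds a first socket object and listens; the client creates
# a second socket object, connects, sends a data request, and receives
# the reply as a stream of bytes.
import socket
import threading


def serve_once(server_sock, data=b"requested-data"):
    """Accept one connection, read one request, reply with the data."""
    conn, _ = server_sock.accept()
    with conn:
        conn.recv(1024)          # the client's data request
        conn.sendall(data)       # the reply containing the requested data


# Server process: create, bind, and listen on a first socket object.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))    # a unique address (ephemeral port)
server.listen(1)
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# Client process: create a second socket object and connect to the server.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
client.sendall(b"GET data")      # the data request
reply = client.recv(1024)        # the reply as a stream of bytes
client.close()
server.close()
```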
Shared memory refers to the allocation of virtual memory space in order to substantiate a mechanism for which data may be communicated and/or accessed by multiple processes. In implementing shared memory, an initializing process first creates a shareable segment in persistent or non-persistent storage. Post creation, the initializing process then mounts the shareable segment, subsequently mapping the shareable segment into the address space associated with the initializing process. Following the mounting, the initializing process proceeds to identify and grant access permission to one or more authorized processes that may also write and read data to and from the shareable segment. Changes made to the data in the shareable segment by one process may immediately affect other processes, which are also linked to the shareable segment. Further, when one of the authorized processes accesses the shareable segment, the shareable segment maps to the address space of that authorized process. Often, only one authorized process other than the initializing process may mount the shareable segment at any given time.
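A minimal sketch of the create/mount/attach/read sequence described above, using Python's standard multiprocessing.shared_memory module (shown within one process for brevity; an authorized process would attach by the segment's unique name in the same way):

```python
# Sketch of shared-memory interprocess communication: an initializing
# process creates a named shareable segment; another process attaches by
# name and sees the same bytes.
from multiprocessing import shared_memory

# Initializing process: create and map a shareable segment.
segment = shared_memory.SharedMemory(create=True, size=16)
segment.buf[:5] = b"hello"            # write data into the segment

# Authorized process: attach via the segment's unique name and read the
# same bytes; changes by one process are visible to the other.
attached = shared_memory.SharedMemory(name=segment.name)
data = bytes(attached.buf[:5])

attached.close()
segment.close()
segment.unlink()                      # destroy the segment when done
```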
Other techniques may be used to share data, such as the various data described in the present application, between processes without departing from the scope of the disclosure. The processes may be part of the same or different application and may execute on the same or different computing system.
Rather than or in addition to sharing data between processes, the computing system performing one or more embodiments of the disclosure may include functionality to receive data from a user. For example, in one or more embodiments, a user may submit data via a graphical user interface (GUI) on the user device. Data may be submitted via the graphical user interface by a user selecting one or more graphical user interface widgets or inserting text and other data into graphical user interface widgets using a touchpad, a keyboard, a mouse, or any other input device. In response to selecting a particular item, information regarding the particular item may be obtained from persistent or non-persistent storage by the computer processor. Upon selection of the item by the user, the contents of the obtained data regarding the particular item may be displayed on the user device in response to the user's selection.
By way of another example, a request to obtain data regarding the particular item may be sent to a server operatively connected to the user device through a network. For example, the user may select a uniform resource locator (URL) link within a web client of the user device, thereby initiating a Hypertext Transfer Protocol (HTTP) or other protocol request being sent to the network host associated with the URL. In response to the request, the server may extract the data regarding the particular selected item and send the data to the device that initiated the request. Once the user device has received the data regarding the particular item, the contents of the received data regarding the particular item may be displayed on the user device in response to the user's selection. Further to the above example, the data received from the server after selecting the URL link may provide a web page in Hyper Text Markup Language (HTML) that may be rendered by the web client and displayed on the user device.
Once data is obtained, such as by using techniques described above or from storage, the computing system, in performing one or more embodiments of the disclosure, may extract one or more data items from the obtained data. For example, the extraction may be performed as follows by the computing system (800) in
Next, extraction criteria are used to extract one or more data items from the token stream or structure, where the extraction criteria are processed according to the organizing pattern to extract one or more tokens (or nodes from a layered structure). For position-based data, the token(s) at the position(s) identified by the extraction criteria are extracted. For attribute/value-based data, the token(s) and/or node(s) associated with the attribute(s) satisfying the extraction criteria are extracted. For hierarchical/layered data, the token(s) associated with the node(s) matching the extraction criteria are extracted. The extraction criteria may be as simple as an identifier string or may be a query presented to a structured data repository (where the data repository may be organized according to a database schema or data format, such as XML).
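The position-based and attribute/value-based extraction described above can be sketched as follows; the token values and node shapes are invented examples, not a prescribed format:

```python
# Position-based: extract the tokens at the positions identified by the
# extraction criteria.
def extract_by_position(tokens, positions):
    return [tokens[i] for i in positions]


# Attribute/value-based: extract the nodes whose attribute satisfies the
# extraction criteria (here, a simple attribute == value match).
def extract_by_attribute(nodes, attribute, value):
    return [n for n in nodes if n.get(attribute) == value]


tokens = ["2016-06-13", "stick-up", "450", "mm"]
nodes = [{"type": "height", "value": 450},
         {"type": "unit", "value": "mm"}]

extract_by_position(tokens, [2, 3])            # ['450', 'mm']
extract_by_attribute(nodes, "type", "height")  # [{'type': 'height', 'value': 450}]
```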
The extracted data may be used for further processing by the computing system. For example, the computing system of
The computing system in
The user, or software application, may submit a statement or query into the DBMS. Then the DBMS interprets the statement. The statement may be a select statement to request information, an update statement, a create statement, a delete statement, etc. Moreover, the statement may include parameters that specify data, or a data container (database, table, record, column, view, etc.), identifier(s), conditions (comparison operators), functions (e.g., join, full join, count, average, etc.), sort (e.g., ascending, descending), or others. The DBMS may execute the statement. For example, the DBMS may access a memory buffer, or reference or index a file, for read, write, deletion, or any combination thereof, in responding to the statement. The DBMS may load the data from persistent or non-persistent storage and perform computations to respond to the query. The DBMS may return the result(s) to the user or software application.
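The statement flow described above can be sketched with an in-memory SQLite database via Python's standard sqlite3 module; the table name and values are invented examples:

```python
# Sketch of submitting statements to a DBMS: a create statement, insert
# statements, then a select with a condition, a function, and a sort.
import sqlite3

# In-memory database standing in for the DBMS described above.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE joints (id INTEGER, stickup_mm REAL)")
db.executemany("INSERT INTO joints VALUES (?, ?)",
               [(1, 450.0), (2, 610.0), (3, 500.0)])

# A select statement with a condition (comparison operator) and a sort.
rows = db.execute(
    "SELECT id, stickup_mm FROM joints WHERE stickup_mm > ? "
    "ORDER BY stickup_mm DESC", (480.0,)).fetchall()

# A statement using a function (average) over the stored data.
average = db.execute("SELECT AVG(stickup_mm) FROM joints").fetchone()[0]
db.close()
```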
The computing system of
For example, a GUI may first obtain a notification from a software application requesting that a particular data object be presented within the GUI. Next, the GUI may determine a data object type associated with the particular data object, e.g., by obtaining data from a data attribute within the data object that identifies the data object type. Then, the GUI may determine any rules designated for displaying that data object type, e.g., rules specified by a software framework for a data object class or according to any local parameters defined by the GUI for presenting that data object type. Finally, the GUI may obtain data values from the particular data object and render a visual representation of the data values within a display device according to the designated rules for that data object type.
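The type-driven rendering flow just described can be sketched as a small rule lookup; the rule table, object shapes, and formatting are illustrative assumptions:

```python
# Sketch of rendering a data object according to rules designated for its
# type: determine the type, look up the display rule, render the values.

RULES = {
    "height": lambda v: f"{v} mm",   # rule designated for height objects
    "text": lambda v: str(v),        # rule designated for text objects
}


def render(data_object):
    object_type = data_object["type"]     # determine the data object type
    rule = RULES.get(object_type, str)    # rules designated for that type
    return rule(data_object["value"])     # render the data values


render({"type": "height", "value": 450})  # '450 mm'
```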
Data may also be presented through various audio methods. In particular, data may be rendered into an audio format and presented as sound through one or more speakers operably connected to a computing device.
Data may also be presented to a user through haptic methods. For example, haptic methods may include vibrations or other physical signals generated by the computing system. For example, data may be presented to a user using a vibration generated by a handheld computer device with a predefined duration and intensity of the vibration to communicate the data.
The above description of functions presents only a few examples of functions performed by the computing system of
Although only a few example embodiments have been described in detail above, those skilled in the art will readily appreciate that many modifications are possible in the example embodiments without materially departing from this invention. Accordingly, all such modifications are intended to be included within the scope of this disclosure as defined in the following claims. In the claims, means-plus-function clauses are intended to cover the structures described herein as performing the recited function and not only structural equivalents, but also equivalent structures. Thus, although a nail and a screw may not be structural equivalents in that a nail employs a cylindrical surface to secure wooden parts together, whereas a screw employs a helical surface, in the environment of fastening wooden parts, a nail and a screw may be equivalent structures. It is the express intention of the applicant not to invoke 35 U.S.C. § 112, paragraph 6 for any limitations of any of the claims herein, except for those in which the claim expressly uses the words ‘means for’ together with an associated function.
Claims
1. A tool joint assist system comprising:
- one or more iron roughnecks for making up or breaking out tubular joints;
- a programmable logic controller for each of the one or more iron roughnecks, each programmable logic controller controlling only one iron roughneck;
- at least one camera associated with each of the one or more iron roughnecks;
- a computer that calculates a stick-up height, the computer comprising: a processor; memory;
- one or more monitors displaying an image from the at least one camera; and
- one or more user-input devices,
- wherein the programmable logic controllers, the cameras, the monitors, the user-input devices, and the computer are connected by a network.
2. The tool joint assist system of claim 1, wherein the at least one camera comprises a single camera.
3. The tool joint assist system of claim 1, wherein at least one of the one or more monitors is also a user-input device.
4. The tool joint assist system of claim 1, wherein the one or more user-input devices is selected from the group consisting of a keyboard, a joystick, a mouse, a trackball, a touchscreen, and a touchpad.
5. The tool joint assist system of claim 1, wherein the system does not comprise a three-dimensional sensor.
6. A positioning method comprising:
- capturing a two-dimensional image of a first object to be manipulated by a second object;
- displaying the captured two-dimensional image of the first object combined with an aiming line overlay;
- adjusting a location of the aiming line overlay relative to the first object in the captured two-dimensional image according to user input;
- calculating a parameter of the first object from the adjusted location of the aiming line overlay;
- sending the parameter to a control system for the second object; and
- setting a position of the second object by the control system based on the parameter of the first object.
7. The positioning method according to claim 6, further comprising:
- setting the control system to automatic sequence;
- entering a correct step of capturing an image of a tool joint;
- sending a command from the control system to a computer to capture a two-dimensional image;
- detecting the command to capture the two-dimensional image with the software application;
8. The positioning method according to claim 6, wherein the control system comprises a programmable logic controller.
9. The positioning method according to claim 6,
- wherein the control system controls an iron roughneck,
- wherein the first object is a tubular,
- wherein the second object is the iron roughneck,
- wherein adjusting the location of the aiming line places the aiming line over a tool joint, and
- wherein the parameter is a stick-up height.
10. The positioning method according to claim 6, further comprising calibrating using a calibration image, the calibrating being performed prior to calculating a parameter.
11. The positioning method according to claim 6, further comprising streaming a plurality of images from a camera to the computer,
- wherein capturing the two-dimensional image comprises capturing one of the plurality of images streamed from the camera.
12. The positioning method according to claim 11, wherein the plurality of images streamed from the camera is a video stream.
13. The positioning method according to claim 6, wherein only one tubular is included in the two-dimensional image.
14. The positioning method according to claim 6, wherein there is no minimum separation distance between the tubulars to be made up or broken out and other objects included in the two-dimensional image.
15. The positioning method according to claim 6 further comprising confirming the location of the aiming line.
16. The positioning method according to claim 15, wherein confirming is performed using one or more user-input devices selected from the group consisting of a keyboard, a joystick, a mouse, a trackball, a touchscreen, and a touchpad.
Type: Application
Filed: Jun 13, 2017
Publication Date: Jun 27, 2019
Inventor: Joachim LEITE (Kristiansand)
Application Number: 16/309,031