DUAL NODE COMPOSITE IMAGE SYSTEM ARCHITECTURE
When employing a multi-node computing architecture for commercial articulated vehicles to generate a composite surround view of the vehicle, each processing node is associated with a vehicle segment. Each node has its own processor, memory and sensors. A master-slave relationship exists in the architecture. The segment processors collect, format, manage and package their local information. Communication between nodes is controlled by the master node, which can limit, adjust, subsample and alter the information and information flow from the slave nodes. The slave nodes can be freely recombined, using a pre-established communication standard or a format agreed upon at combination time.
The present application finds particular application in surround-view vehicle imaging systems. However, it will be appreciated that the described techniques may also find application in other vehicle monitoring systems, other imaging systems, or other vehicle safety systems.
Conventional surveillance systems for trucks do not provide a system architecture for image processing between interchangeable vehicle sections, with possibly varying numbers of cameras and calibrations. Without knowledge of the number of cameras and their calibration, and without sufficient electronic control unit (ECU) capacity to process the incoming camera images, a surround view of the entire vehicle cannot be generated.
The present innovation provides new and improved systems and methods that facilitate generating a surround view image for an articulated commercial vehicle with interchangeable vehicle segments, which overcome the above-referenced problems and others.
SUMMARY
In accordance with one aspect, a system that facilitates generating a composite surround view image using a multi-node computer architecture for an articulated commercial vehicle comprises a first set of one or more cameras that capture images of an area surrounding a first portion of the articulated vehicle, and a first processing node that receives captured image data from the first set of one or more cameras. The system further comprises a second set of one or more cameras that capture images of an area surrounding a second portion of the articulated vehicle, and a second processing node that receives captured image data from the second set of one or more cameras and forwards the captured image data to the first processing node. The first processing node is configured to generate a composite surround view image of the articulated vehicle from the captured image data received from the first and second sets of one or more cameras.
In accordance with another aspect, an electronic control unit (ECU) that facilitates generating a composite surround view image using a multi-node computer architecture for an articulated commercial vehicle comprises a processor configured to receive from a first set of one or more cameras captured video data of an area surrounding a first portion of the articulated vehicle, receive from a secondary ECU video data captured by a second set of one or more cameras of an area surrounding a second portion of the articulated vehicle, and determine an angle of articulation between the first and second portions of the articulated vehicle. The processor is further configured, for each video frame, to stitch together a composite surround view image of the articulated vehicle from the captured image data received from the first and second sets of one or more cameras while compensating for the determined articulation angle.
In accordance with another aspect, a method of generating a composite surround view image using a multi-node computer architecture for an articulated commercial vehicle comprises receiving at a first processing node captured image data of an area surrounding a first portion of the articulated vehicle from a first set of one or more cameras, receiving at a second processing node captured image data of an area surrounding a second portion of the articulated vehicle from a second set of one or more cameras, and receiving at the first processing node the captured image data of the area surrounding the second portion of the articulated vehicle. The method further comprises generating a surround view image of the articulated vehicle from the captured image data received from the first and second sets of one or more cameras.
In accordance with another aspect, an apparatus that facilitates generating a composite surround view image using a multi-node computer architecture for an articulated commercial vehicle comprises first receiving means for receiving at a first processing node captured image data of an area surrounding a first portion of the articulated vehicle from a first set of one or more cameras, and second receiving means for receiving at a second processing node captured image data of an area surrounding a second portion of the articulated vehicle from a second set of one or more cameras. The first receiving means is further configured to receive from the second processing node the captured image data of the area surrounding the second portion of the articulated vehicle. The apparatus further comprises processing means for determining an articulation angle between the first and second portions of the articulated vehicle and compensating for the determined articulation angle when generating a surround view image of the articulated vehicle from the captured image data received from the first and second sets of one or more cameras.
One advantage is that a surround view can be generated while preserving flexible coupling of tractors to trailers.
Another advantage is that the need to recalibrate or re-dimension ECUs as different or multiple trailers are used is mitigated.
Still further advantages of the subject innovation will be appreciated by those of ordinary skill in the art upon reading and understanding the following detailed description.
The innovation may take form in various components and arrangements of components, and in various steps and arrangements of steps. The drawings are only for purposes of illustrating various aspects and are not to be construed as limiting the invention.
The foregoing problems are overcome by the herein-described systems and methods, which provide multiple electronic control units (ECUs), one on the tractor and one on each trailer portion of a multi-segment vehicle, and stitch together the respective views generated by the cameras associated with each ECU to generate a surround view. By “surround view” is meant a full or partial view of the surroundings of the vehicle (e.g., between a 45-degree view and a 360-degree view, or the like). Each ECU has knowledge of the number and calibration of each camera associated with it. A datalink between the ECUs, using e.g. a compressed data format, enables a surround view to be generated for the driver. Each ECU's capacity may be apportioned to the number of cameras whose images it needs to process.
Trucks often have separable, re-combinable, multiple segments: a tractor and a trailer, or even multiple trailers. Information processing systems for such separable-segment vehicles, each segment of which may have its own set of sensors and functions, often create a need for specialized computer architectures. That is, each vehicle segment typically has its own set of sensors, from which information is needed for proper, safe, and effective truck operation. Such sensors may include cameras, speed capturing devices, tire pressure sensors, airflow meters, etc. Because the sensors may each have their own set of characteristics (e.g., field of view and installation parameters for cameras), and because their number and type may vary, the herein-described systems and methods provide an ECU in each vehicle segment to coordinate these signals. Each segment's ECU collects, formats, manages, and packages its local information and information flow, including any parameters associated with its segment. The segment ECUs may transmit their information to a master ECU, e.g. located in the tractor, first, front, or primary segment of the vehicle. In one embodiment, a backup battery in each segment enables each segment to operate independently, without an external power source, such as when parked at a freight terminal. The result is a separable, re-combinable computer architecture for commercial vehicles. In another embodiment, a freight terminal can use the segment sensors without a tractor present, monitoring loading and unloading via cameras, for instance. A suitably equipped external portable computing device, such as a smart phone, can communicate with and control a vehicle segment, determining, for instance, the tire pressures of the segment.
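To make the packaging concrete, the following Python sketch shows one hypothetical way a segment ECU could bundle its local information and parameters for the master ECU. The application specifies what a segment packages (camera count, per-camera calibration, sensor readings) but not any wire format, so every field name here is an assumption for illustration only.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class CameraCalibration:
    """Per-camera parameters that each segment ECU stores locally."""
    camera_id: int
    focal_length_px: float
    optical_axis_xy: Tuple[float, float]   # principal point in the image
    mount_pose: Tuple[float, ...]          # position/orientation on the segment

@dataclass
class SegmentPacket:
    """Hypothetical payload a segment (slave) ECU sends to the master ECU."""
    segment_id: str
    camera_count: int
    calibrations: List[CameraCalibration]
    sensor_readings: Dict[str, float] = field(default_factory=dict)  # e.g. tire pressures
    frame_payload: bytes = b""             # compressed image data for this frame
```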
As with the tractor or primary portion 12 of the vehicle and its first ECU 14, a trailer or secondary portion 22 of the vehicle comprises a second ECU 24 that is coupled (e.g., wirelessly or by a wired connection) to a plurality of cameras 26, 28, 30, 32, and which receives image frames captured by the plurality of cameras. The second ECU comprises an antenna or transceiver 34 via which the second ECU communicates with an antenna or transceiver 36 coupled to the first ECU 14. In one embodiment, the second ECU 24 stitches the images from its plurality of cameras into a partial surround view of the trailer portion of the vehicle and transmits the trailer image data to the first ECU, which then stitches the stitched trailer image data with the previously stitched tractor image data to generate a complete surround view image. Because stitching reduces the total number of pixels (it rejects those not used in the surround view), data transmission bandwidth is reduced.
Each ECU (e.g., the tractor ECU and the trailer ECU(s)) has knowledge of the number of cameras and the calibration (e.g., optical axis location, focal length, etc.) of each camera associated with it. A datalink between the tractor ECU and trailer ECU enables a complete vehicle surround view to be generated for the driver and/or other viewers. If a trailer ECU fails, the information from the cameras connected to the tractor controller can still provide views of the sides of the trailer. In addition, the tractor (first) ECU 14 does not need to be recalibrated when new trailers are added, because each trailer portion has its own ECU and associated parameter storage. Each ECU can thus be considered an independent node with knowledge of the other processing nodes (ECUs) around it. The surround view systems are linkable, with tractors and trailers capable of being coupled and uncoupled regardless of the number of cameras, or lack of cameras, on each. The driver is provided a consistent surround view of the vehicle regardless of which tractor is connected to which trailer. By sharing all the camera information between the tractor and trailer ECUs, a complete surround view of the combined vehicle can be made available to either or both of the tractor and trailer ECUs. In addition, the system is more serviceable, as a trailer camera system can be serviced without the corresponding tractor camera system.
Each ECU comprises a respective processor 38a, 38b (collectively, a processor 38) that executes, and a memory 40a, 40b (collectively, a memory 40) that stores, computer-executable instructions (e.g., modules, routines, programs, applications, etc.) for performing the various methods, techniques, protocols, etc., described herein. The memory 40 may include volatile memory, non-volatile memory, solid state memory, flash memory, random-access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), variants of the foregoing memory types, combinations thereof, and/or any other type(s) of memory suitable for providing the described functionality and/or storing computer-executable instructions for execution by the processor 38. Additionally, “module,” as used herein, denotes a set of computer-executable instructions (e.g., a routine, sub-routine, program, application, or the like) that is persistently stored on the computer-readable medium or memory for execution by the processor.
The trailer cameras 26, 28, 30, 32 have overlapping fields of view, capture video image data of their respective views of the surroundings of the trailer 22, and transmit the captured data to the trailer ECU 24, which stitches the captured data together into a composite trailer surround view image. In one embodiment, the trailer ECU 24 provides a stitched surround view image of the trailer comprising stitched image data from each of the trailer cameras for each image frame.
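A minimal sketch of this per-segment stitching step, assuming each camera's known calibration yields a precomputed homography that maps its image onto a shared top-down (bird's-eye) canvas; averaging in the overlap regions is one simple blending choice, not the application's prescribed method.

```python
import cv2
import numpy as np

def stitch_segment_surround_view(frames, homographies, canvas_wh):
    """Warp each camera frame onto a shared bird's-eye canvas and average
    the overlapping regions into a partial surround view of the segment."""
    w, h = canvas_wh
    canvas = np.zeros((h, w, 3), np.float32)
    weight = np.zeros((h, w, 1), np.float32)
    for frame, H in zip(frames, homographies):
        warped = cv2.warpPerspective(frame.astype(np.float32), H, (w, h))
        mask = (warped.sum(axis=2, keepdims=True) > 0).astype(np.float32)
        canvas += warped
        weight += mask
    # Average overlapping contributions; uncovered pixels remain black.
    return (canvas / np.maximum(weight, 1.0)).astype(np.uint8)
```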
In another embodiment, the trailer ECU 24 provides the captured image data from each of the trailer cameras directly to the tractor ECU 14 for stitching with the tractor camera image frames. In yet another embodiment, the trailer cameras are independently powered (e.g., via a wired lead or a battery) and transmit image frame data directly to the tractor ECU for stitching. Additionally or alternatively, the trailer cameras can be configured to periodically receive an acknowledgement message from the trailer ECU when transmitting image data thereto, and upon one or more missing acknowledgement messages, the trailer cameras can transition to transmitting the captured image data directly to the tractor ECU 14.
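The acknowledgement-based fallback can be sketched as a small state machine on the camera side. The ECU interface methods (receive, ack_received) and the timeout and retry counts below are illustrative assumptions; the application does not specify them.

```python
ACK_TIMEOUT_S = 0.5     # assumed value; the application gives no figure
MAX_MISSED_ACKS = 3     # assumed value

class CameraUplink:
    """A trailer camera sends frames to the trailer ECU and, after several
    missed acknowledgements, reroutes frames directly to the tractor ECU."""

    def __init__(self, trailer_ecu, tractor_ecu):
        self.primary = trailer_ecu      # hypothetical ECU interface objects
        self.fallback = tractor_ecu
        self.missed = 0

    def send_frame(self, frame):
        if self.missed >= MAX_MISSED_ACKS:
            self.fallback.receive(frame)          # direct-to-tractor mode
            return
        self.primary.receive(frame)
        if self.primary.ack_received(timeout=ACK_TIMEOUT_S):
            self.missed = 0                       # link healthy again
        else:
            self.missed += 1
```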
An articulation angle α between the tractor 12 and trailer 22 is determined by the tractor ECU and/or the trailer ECU (e.g., by analyzing captured image data of the back of the tractor relative to image data of the front of the trailer, or by a mechanical sensor at the kingpin), and is accounted for by the tractor ECU 14 or the trailer ECU 24 when stitching the composite surround view image of the vehicle together, in order to provide a seamless bird's eye view for the driver and improve the aesthetic quality of the image.
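The compensation can be pictured as rotating the trailer's partial bird's-eye view about the kingpin pivot before joining it to the tractor view. A minimal OpenCV sketch, assuming α is already measured in degrees and the kingpin's pixel position on the trailer canvas is known:

```python
import cv2

def compensate_articulation(trailer_view, alpha_deg, kingpin_px):
    """Rotate the trailer's partial surround view about the kingpin pivot
    by the measured articulation angle so it aligns with the tractor view."""
    h, w = trailer_view.shape[:2]
    # Negative angle assumed here; the sign depends on the chosen convention.
    M = cv2.getRotationMatrix2D(kingpin_px, -alpha_deg, 1.0)
    return cv2.warpAffine(trailer_view, M, (w, h))
```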
Similarly, the trailer or secondary portion 22 of the vehicle comprises the second ECU 24 that is coupled (e.g., wirelessly or by a wired connection) to a plurality of cameras 26, 28, 30, 32, and which receives image frames captured by the plurality of trailer cameras. The second ECU comprises an antenna or transceiver 34 via which the second ECU communicates with an antenna or transceiver 36 coupled to the first ECU 14. Additionally, the ECUs 14, 24 each comprise a respective processor 38a, 38b and memory 40a, 40b as described above.
In one embodiment, the second ECU 24 stitches the images from its plurality of cameras into a partial surround view of the trailer portion of the vehicle and transmits the composite trailer image data to the iTAP module 154, which then forwards the stitched trailer image data to the portable computing device 152 (e.g., a smartphone or tablet device, etc.) for display to the driver. In another embodiment, the tractor (first) ECU also transmits a stitched tractor surround view image to the portable computing device 152, which in turn stitches together the trailer and tractor surround views into a composite surround view of the vehicle. In this example, it is the portable computing device that acts as the master node, as opposed to the first ECU, as is the case in the examples described above.
The iTAP module collects data from multiple onboard systems and/or devices. For instance, the iTAP module receives and provides to the portable computing device 152 data associated with, e.g., tire pressure monitor sensors, onboard cameras, load leveling information and electro-pneumatic air suspension information, trailer tilt, roll stability, trailer weight information, finisher brake status information, supply pressure information, and any other suitable auxiliary system information. Additionally, the iTAP module 154 is configured to determine the articulation angle of the trailer relative to the tractor, and the articulation angle information is used to adjust the composite image when stitching together the image data to form the composite surround view image of the vehicle. Stitching for articulated vehicles accounts for the rotation between vehicle segments. The partial, stitched camera view of the trailer sent forward to the tractor ECU 14 is rotated and then joined to the partial, stitched view for the tractor. In addition, the stitching line may be adjusted to allow for the changed inter-segment angles. The collected data and articulation angle information can be transmitted by the given segment's ECU continuously with each composite segment image frame or at predetermined intervals.
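One way to picture the adjustable stitching line: both partial views are rendered on equally sized, pre-aligned canvases, and the seam row shifts with the articulation angle. The px_per_deg constant below is an illustrative tuning value, not a figure from the application.

```python
import numpy as np

def join_with_adjustable_seam(tractor_view, trailer_view, articulation_deg,
                              base_seam_row, px_per_deg=1.5):
    """Join two equally sized, aligned bird's-eye canvases along a seam row
    that moves with the articulation angle."""
    seam = int(np.clip(base_seam_row + px_per_deg * articulation_deg,
                       0, tractor_view.shape[0]))
    return np.vstack([tractor_view[:seam], trailer_view[seam:]])
```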
To couple the iTAP module 154 on a given trailer to the tractor ECU 14 and/or the portable computing device 152, a handshake protocol is used when the trailer is attached to the tractor. For instance, the driver opens the iTAP app on his phone and is prompted to provide a signal through the vehicle to indicate that the phone and/or tractor ECU should be paired with the iTAP module. In one example, the driver is prompted to apply a predetermined amount of force to the trailer brakes. The iTAP module detects the applied brake force and the driver is provided with a graphic on the portable computing device indicating the level of detected brake force. Once the driver applies the predetermined brake force, the iTAP module pairs with the portable computing device and/or the tractor ECU and the system is ready.
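The handshake can be sketched as a simple check loop on the iTAP side. The normalized brake-force target and tolerance below are invented for illustration, as the application says only that a predetermined force is prompted and detected.

```python
BRAKE_FORCE_TARGET = 0.6   # assumed normalized target force
TOLERANCE = 0.05           # assumed matching tolerance

def pairing_step(applied_force, paired):
    """One iteration of the pairing handshake: returns a status string for
    the driver's on-screen graphic and the updated paired state."""
    if paired:
        return "ready", True
    if abs(applied_force - BRAKE_FORCE_TARGET) <= TOLERANCE:
        return "paired", True
    # Feed back the detected force level so the driver can adjust.
    return f"force {applied_force:.2f} / target {BRAKE_FORCE_TARGET:.2f}", False
```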
According to another embodiment, one or more of the tractor ECU, the trailer ECU, and the iTAP module can determine that a data link between the master node and one or more slave nodes is congested. In such a case, the ECU for the slave node(s) can be instructed to subsample or additionally compress the data it transmits (e.g., image data, sensor data, etc.) so that the data link can handle the transmission. In the case of a WiFi data link having limited bandwidth, image data can be trimmed to remove pixels that do not change over time as the video frames proceed. In another embodiment, the video data is compressed for transmission when the WiFi data link is overburdened. The determination of whether, and by how much, to compress the data can also be made locally by the slave ECU.
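A minimal sketch of the trimming idea, assuming the slave ECU keeps the previous frame and zeroes pixels that have not changed beyond a threshold, so a downstream encoder can compress the static regions away (the threshold is an assumed value):

```python
import numpy as np

def trim_static_pixels(prev_frame, curr_frame, thresh=8):
    """Zero out pixels unchanged since the previous frame so they compress
    to almost nothing; the receiver can re-fill them from its last frame."""
    diff = np.abs(curr_frame.astype(np.int16)
                  - prev_frame.astype(np.int16)).max(axis=2)
    out = curr_frame.copy()
    out[diff < thresh] = 0
    return out
```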
In another embodiment, one or both of the ECUs 14, 24 identifies a defective or saturated camera, and refrains from transmitting or using image data received from the identified camera(s) in order to reduce bandwidth consumption. For instance, if a camera sees nothing because it is blinded (saturated) by light or is defective, the corresponding ECU can send a malfunction indication but refrain from transmitting data captured by the malfunctioning camera.
Additionally, the ECU 14 receives sensor data 212 from one or more sensors located on the tractor and/or trailer. Sensor data can include without limitation tire pressure data, trailer tilt angle, vehicle weight, load balance information, and the like, and can be displayed to a driver along with the composite surround view images. The processor also executes a data link monitoring module 214 that monitors traffic load on the data link between the primary (tractor) ECU 14 and the trailer ECU(s). Upon a determination that the data link is overloaded, the processor executes a data throttling module 216 that instructs the trailer ECU(s) to subsample or additionally compress one or more of the image data and the sensor data being transmitted to the primary ECU over the data link.
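The monitoring-and-throttling decision can be sketched as a simple load test with stepped responses; the thresholds, subsampling factor, and quality value below are illustrative assumptions rather than values from the application.

```python
def choose_throttle(measured_mbps, capacity_mbps):
    """Pick a throttling instruction for the trailer ECU based on link load."""
    load = measured_mbps / capacity_mbps
    if load < 0.75:
        return {"action": "none"}
    if load < 0.95:
        return {"action": "subsample", "factor": 2}  # e.g. send every other frame
    return {"action": "compress", "quality": 40}     # stronger lossy compression
```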
It will be appreciated that the herein-described secondary (trailer) processing nodes or ECUs can comprise similar or identical modules, software, instructions, or the like to those of the primary ECU 14 in order to perform the various functions (e.g., image frame stitching, data subsampling, etc.) described herein.
At 256, a composite surround view image of the articulated vehicle is generated from the captured image data received from the first and second pluralities of cameras. When generating the composite surround view image, the first processing node can stitch together data captured by individual cameras in the first and second pluralities of cameras, or can stitch together captured data from the first plurality of cameras (mounted to the tractor) into a first surround view image, and stitch the first surround view image together with a second surround view image of the trailer received from the second processing node.
According to another embodiment, a third processing node (e.g., a smart phone or a tablet or the like) receives the first surround view image from the first processing node and the second surround view image from the second processing node, and performs the stitching protocol on the first and second surround view images to generate the composite surround view image of the articulated vehicle for display to a viewer on the smartphone or tablet. The third processing node can be configured to receive the first and second surround view images via an iTAP module associated with the trailer portion of the vehicle, wherein the iTAP module acts as a gateway device between the third processing node and the first processing node.
In another embodiment, the first processing node determines an articulation angle between the tractor portion and the trailer portion of the vehicle and accounts for the determined articulation angle when generating the composite surround view image (e.g., by adjusting pixel data to smooth the image). Additionally, each of the first and second processing nodes can receive sensor data from one or more sensors (e.g., tire pressure monitoring sensor, accelerometers, etc.) associated with their respective vehicle portions. The sensor data can be presented to the viewer (e.g., a driver) along with the composite surround view images. Moreover, the first processing node can monitor a data link between the first and second processing nodes and can instruct the second processing node to subsample one or both of the sensor data and the image data to reduce traffic on the data link when desired.
The innovation has been described with reference to several embodiments. Modifications and alterations may occur to others upon reading and understanding the preceding detailed description. It is intended that the innovation be construed as including all such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Claims
1-24. (canceled)
25. A system that facilitates generating composite vehicle data using a multi-node computer architecture for an articulated vehicle, comprising:
- a first set of one or more sensors that capture data from a first portion of the articulated vehicle;
- a first processing node that receives captured data from the first set of one or more sensors;
- a second set of one or more sensors that capture data from a second portion of the articulated vehicle;
- a second processing node that receives captured data from the second set of one or more sensors, generates a reduced data set from the captured data from the second set of one or more sensors, and transmits the reduced data set to the first processing node;
- wherein the first processing node is configured to generate composite data for the articulated vehicle from the captured sensor data received from the first set of one or more sensors and the reduced data set received from the second processing node.
26. The system according to claim 25, wherein the first processing node is configured to control the amount and type of data transmitted by the second processing node.
27. The system according to claim 26, wherein the first processing node detects errors in data transmission from the second processing node, and controls the amount and type of data transmitted by the second processing node in order to remove at least one of redundant pixels, unchanging pixels, and unnecessary pixels.
28. The system according to claim 25, wherein the second processing node is configured to measure bandwidth available to transmit data to the first processing node, and to generate the reduced data set to accommodate the available data transmission bandwidth.
29. The system according to claim 25, wherein the reduced data set is generated by at least one of data compression, subsampling, and reduced transmission frequency, in order to reduce bandwidth consumption.
30. The system according to claim 25, wherein the second processing node is further configured to receive and relay to the first processing node sensor information for one or more monitored parameters of the second portion of the articulated vehicle.
31. The system according to claim 30, wherein the first processing node is further configured to detect that a data link between the first and second processing nodes is overloaded, and to instruct the second processing node to reduce data flow of at least one of the image data and the sensor information transmitted over the data link.
32. The system according to claim 31, wherein the second processing node is configured to reduce data flow by subsampling data received from the second set of one or more sensors when generating the reduced data set.
33. The system according to claim 25, wherein the first processing node is an electronic control unit (ECU) in a tractor portion of the articulated vehicle, and wherein the second processing node is an ECU in a trailer portion of the articulated vehicle.
34. A method for generating composite vehicle data using a multi-node computer architecture for an articulated vehicle, comprising:
- receiving, at a first processing node, data captured by a first set of one or more sensors from a first portion of the articulated vehicle;
- receiving, at a second processing node, data captured from a second portion of the articulated vehicle by a second set of one or more sensors;
- generating, by the second processing node, a reduced data set from the captured data from the second set of one or more sensors, and transmitting the reduced data set to the first processing node; and
- generating, by the first processing node, composite data for the articulated vehicle from the captured sensor data received from the first set of one or more sensors and the reduced data set received from the second processing node.
35. The method according to claim 34, further comprising controlling, by the first processing node, the amount and type of data transmitted by the second processing node.
36. The method according to claim 35, further comprising, at the first processing node, detecting errors in data transmission from the second processing node, and controlling the amount and type of data transmitted by the second processing node in order to remove at least one of redundant pixels, unchanging pixels, and unnecessary pixels.
37. The method according to claim 34, further comprising, at the second processing node, measuring bandwidth available to transmit data to the first processing node, and generating the reduced data set to accommodate the available data transmission bandwidth.
38. The method according to claim 34, further comprising generating the reduced data set by at least one of data compression, subsampling, and reduced transmission frequency, in order to reduce bandwidth consumption.
39. The method according to claim 34, further comprising, at the second processing node, receiving and relaying to the first processing node sensor information for one or more monitored parameters of the second portion of the articulated vehicle.
40. The method according to claim 39, further comprising, at the first processing node, detecting that a data link between the first and second processing nodes is overloaded, and instructing the second processing node to reduce data flow of at least one of the image data and the sensor information transmitted over the data link.
41. The method according to claim 40, further comprising, at the second processing node, reducing data flow by subsampling data received from the second set of one or more sensors when generating the reduced data set.
42. The method according to claim 34, wherein the first processing node is an electronic control unit (ECU) in a tractor portion of the articulated vehicle, and wherein the second processing node is an ECU in a trailer portion of the articulated vehicle.
43. A multi-node computer architecture configured to generate composite vehicle data for an articulated vehicle, comprising:
- a first processing node configured to receive data captured by a first set of one or more sensors from a first portion of the articulated vehicle;
- a second processing node configured to receive data captured from a second portion of the articulated vehicle by a second set of one or more sensors;
- wherein the second processing node is further configured to generate a reduced data set from the captured data from the second set of one or more sensors, and to transmit the reduced data set to the first processing node; and
- wherein the first processing node is further configured to generate composite data for the articulated vehicle from the captured sensor data received from the first set of one or more sensors and the reduced data set received from the second processing node.
44. The multi-node computer architecture according to claim 43, wherein the second processing node is further configured to measure bandwidth available to transmit data to the first processing node, and to generate the reduced data set to accommodate the available data transmission bandwidth by at least one of data compression, subsampling, and reduced transmission frequency.
Type: Application
Filed: Dec 8, 2017
Publication Date: Apr 5, 2018
Applicant: Bendix Commercial Vehicle Systems LLC (Elyria, OH)
Inventors: Andreas U. Kuehnle (Villa Park, CA), Marton Gyori (Budapest), Hans M. Molin (Mission Viejo, CA), Karl H. Jones (Fullerton, CA), Travis G. Ramler (Fairfield, UT)
Application Number: 15/835,507