MOVING OBJECT CONTROLLER, MOVING OBJECT CONTROL METHOD, AND INTEGRATED CIRCUIT

- MegaChips Corporation

A moving object controller efficiently generates an environmental map and performs highly accurate state estimation in a short time to appropriately control a moving object. An observation obtaining unit obtains observation data from an observable event. A landmark prediction unit generates a landmark prediction signal including predictive information about a landmark at a current time. A landmark detection unit detects information about the landmark at the current time, and generates a landmark detection signal indicating the detection result. A state estimation unit estimates an internal state of the moving object to obtain data indicating an estimated internal state of the moving object at the current time, and estimates the environmental map based on the landmark detection signal to obtain data indicating an estimated environmental map at the current time.

Description

This application claims priority to Japanese Patent Application No. 2015-059367, filed on Mar. 23, 2015, the entire disclosure of which is hereby incorporated herein by reference.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a technique for controlling a moving object using simultaneous localization and mapping (SLAM).

2. Description of the Background Art

Using simultaneous localization and mapping (SLAM), a moving object, such as an autonomous robot, estimates its position and simultaneously generates an environmental map while moving autonomously.

A robot (moving object) that moves autonomously with SLAM generates an environmental map around the robot using observation data obtained with a sensor mounted on the robot and using inputs for controlling the robot, while estimating the position and the posture of the robot in the generated environmental map. More specifically, the robot estimates its position and posture Xt (or X1:t={X1, X2, . . . , Xt}) and an environmental map m using a control input sequence U1:t={U1, U2, . . . , Ut} and an observation sequence Z1:t={Z1, Z2, . . . , Zt} in a state space model in which the robot follows the system model xt˜g(xt|xt-1, Ut) (Ut is a control input) and the observation follows the observation model Zt˜h(Zt|xt, m) (m is an environmental map).
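The state space model above can be sketched in Python. This is an illustrative sketch under assumed models: the planar motion model, the range-and-bearing observation, the noise terms, and all function names are assumptions for illustration, not part of the described technique.

```python
import math
import random

def system_model(x_prev, u, dt=1.0, noise_std=0.05):
    """Sample x_t ~ g(x_t | x_{t-1}, U_t): a simple planar motion model.
    State is (x, y, theta); control u is (vx, vy). Names are illustrative."""
    x, y, theta = x_prev
    vx, vy = u
    x_new = x + vx * dt + random.gauss(0.0, noise_std)
    y_new = y + vy * dt + random.gauss(0.0, noise_std)
    theta_new = math.atan2(vy, vx)  # heading follows the commanded velocity
    return (x_new, y_new, theta_new)

def observation_model(x, landmark, noise_std=0.05):
    """Sample Z_t ~ h(Z_t | x_t, m): range and bearing to one landmark."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    rng = math.hypot(dx, dy) + random.gauss(0.0, noise_std)
    bearing = math.atan2(dy, dx) - x[2]
    return (rng, bearing)
```

With the noise set to zero, driving the model with a control input traces the commanded motion exactly, which is the deterministic skeleton that the probabilistic estimation below is built around.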

Techniques have been developed to estimate the posture of a robot with high accuracy using particle filters (e.g., Japanese Unexamined Patent Application Publication No. 2008-126401, or Patent Literature 1).

A sensor used to obtain observation data may typically be an RGB camera incorporating a visible light image sensor, or a range finder that obtains distance information using infrared light or laser light. Two or more different sensors may be used in combination to obtain observation data.

A robot (moving object) that moves autonomously with SLAM typically generates a topological environmental map, instead of a geometric environmental map. The environmental map is obtained using, for example, information about landmarks. In this case, the environmental map contains a plurality of sets of landmark information. Each landmark has parameters including (1) positional information about the landmark and (2) a variance-covariance matrix expressing the uncertainty of the positional information about the landmark.

When the robot that moves autonomously with SLAM incorporates an RGB camera as its sensor, the robot uses, as landmarks, feature points, feature lines, or objects with distinguishable markers. The robot detects (identifies) an image area corresponding to a landmark in an image captured by the RGB camera to obtain information about the landmark. In this case, the actual position of the robot relative to the actual position of the landmark is determined using the image obtained by the RGB camera, and observation data at the current time is obtained based on the identification result.

To determine the position of a landmark in an image obtained by the RGB camera, the above technique involves searching the entire image to detect the landmark. This increases the amount of processing needed to identify the landmark position, and thus increases the processing time. The longer processing time taken to identify the position of a landmark increases the interval at which the estimation result is updated in the autonomous robot. This may cause a larger error between the actual posture of the robot and the estimate, and may prevent the robot (moving object) from moving autonomously in a stable manner.

In response to the above problems, it is an object of the present invention to provide a moving object controller, a program, and an integrated circuit for controlling a moving object in an appropriate manner by generating an environmental map efficiently and performing a highly accurate state estimation process in a short period of time.

SUMMARY

A first aspect of the invention provides a moving object controller for controlling a moving object while performing processing for generating an environmental map expressed using information about a landmark and performing processing for estimating an internal state of the moving object. The controller includes an observation obtaining unit, a landmark prediction unit, a landmark detection unit, and a state estimation unit.

The observation obtaining unit obtains observation data from an observable event.

The landmark prediction unit generates a landmark prediction signal including predictive information about a landmark at a current time based on internal state data indicating an internal state of the moving object at a time preceding the current time and environmental map data at the time preceding the current time.

The landmark detection unit detects information about the landmark at the current time based on the landmark prediction signal generated by the landmark prediction unit, and generates a landmark detection signal indicating a result of the detection.

The state estimation unit estimates the internal state of the moving object based on the observation data obtained by the observation obtaining unit and the landmark detection signal to obtain data indicating an estimated internal state of the moving object at the current time, and estimates the environmental map based on the landmark detection signal to obtain data indicating an estimated environmental map at the current time.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic block diagram of a moving object controller 1000 according to a first embodiment.

FIG. 2 is a diagram describing the positional relationship between the estimated positions of a robot Rbt1, a landmark LM1, and a landmark LM2 at time t−1.

FIG. 3 is a diagram describing the positional relationship between the estimated positions of the robot Rbt1 and the landmarks LM1 and LM2 at time t.

FIG. 4 is a schematic diagram describing the distribution of particles (one example) for estimating the position of the robot Rbt1 on the x-y plane of the internal state of the robot Rbt1 and the distribution of particles (one example) for estimating the positions of the landmarks LM1 and LM2 on the x-y plane of the internal state of the robot.

FIG. 5 is a schematic diagram showing an image D1 at time t−1 (image Imgt-1) and the image D1 at time t (image Imgt).

FIG. 6 is a schematic block diagram of a moving object controller 2000 according to a second embodiment.

FIG. 7 is a schematic block diagram of a landmark prediction unit 5A in the second embodiment.

FIG. 8 is a schematic block diagram of a landmark detection unit 2A in the second embodiment.

FIG. 9 is a schematic block diagram of a moving object controller 2000A according to a first modification of the second embodiment.

FIG. 10 is a schematic block diagram of a landmark prediction unit 5B in the first modification of the second embodiment.

FIG. 11 is a schematic block diagram of a landmark detection unit 2B in the first modification of the second embodiment.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

First Embodiment

A first embodiment will now be described with reference to the drawings.

1.1 Structure of Moving Object Controller

FIG. 1 is a schematic block diagram of a moving object controller 1000 according to a first embodiment.

The moving object controller 1000 is mounted on a moving object, such as an autonomous robot Rbt1 (not shown).

As shown in FIG. 1, the moving object controller 1000 includes an observation obtaining unit 1, a landmark detection unit 2, a state estimation unit 3, a storage unit 4, and a landmark prediction unit 5.

For ease of explanation, observation data D1 obtained by the observation obtaining unit 1 is an image (image data) captured by an image sensor (not shown) mounted on the moving object.

The observation obtaining unit 1 receives input data Din, obtains observation data D1 based on the input data Din, and outputs the obtained observation data D1 to the landmark detection unit 2 and to a state update unit 31 included in the state estimation unit 3.

The landmark detection unit 2 receives observation data D1, which is output from the observation obtaining unit 1, and a landmark prediction signal Sig_pre, which is output from the landmark prediction unit 5. The landmark detection unit 2 detects an image area corresponding to a landmark in an image formed using the observation data D1 (image data D1) based on the landmark prediction signal Sig_pre. The landmark detection unit 2 then outputs a signal including the detected image area to the state update unit 31 and an environmental map update unit 32 included in the state estimation unit 3 as a landmark detection signal LMt.

As shown in FIG. 1, the state estimation unit 3 includes the state update unit 31 and the environmental map update unit 32.

The state update unit 31 receives observation data D1, which is output from the observation obtaining unit 1, a landmark detection signal LMt, which is output from the landmark detection unit 2, control input data Ut for a moving object, and internal state data Xt-1 indicating the internal state of the moving object at time t−1, which is read from the storage unit 4. The state update unit 31 estimates the internal state of the moving object based on the observation data D1, the landmark detection signal LMt, the control input data Ut, and the internal state data Xt-1 of the moving object at time t−1 to obtain the estimation result as the internal state data Xt of the moving object at the current time t (update the internal state data). The state update unit 31 then outputs the internal state data Xt of the moving object at the current time t out of the state estimation unit 3, and also to the storage unit 4. The internal state data Xt externally output from the state estimation unit 3 is used by, for example, a functional unit (central control unit) that centrally controls the moving object (autonomous robot) (not shown) as data used to control the moving object (autonomous robot).

The state update unit 31 outputs the control input data Ut for the robot Rbt1 at the current time t to the storage unit 4.

The environmental map update unit 32 receives a landmark detection signal LMt, which is output from the landmark detection unit 2, and environmental map data mt-1 at time t−1, which is read from the storage unit 4. The environmental map update unit 32 obtains information about a detected landmark based on the landmark detection signal LMt, and updates the environmental map data based on the obtained information and the environmental map data mt-1 at time t−1 to obtain environmental map data mt at the current time t. The environmental map update unit 32 then outputs the updated environmental map data mt out of the state estimation unit 3, and also to the storage unit 4. The environmental map data mt externally output from the state estimation unit 3 is used by, for example, the functional unit (central control unit) that centrally controls the moving object (autonomous robot) (not shown) as data used to control the moving object (autonomous robot).

The storage unit 4 receives and stores internal state data Xt of the moving object at the current time t and control input data Ut, which are output from the state update unit 31 included in the state estimation unit 3, and environmental map data mt. The data stored in the storage unit 4 is read by the landmark prediction unit 5 at predetermined timing. The data stored in the storage unit 4 is also read by the state estimation unit 3 at predetermined timing. The storage unit 4 can store a plurality of sets of internal state data and environmental map data obtained at a plurality of times.

The landmark prediction unit 5 reads internal state data and environmental map data at a time preceding the current time from the storage unit 4.

In the example described below, the landmark prediction unit 5 reads data at time t−1, which is the time preceding the current time t by one time.

The data at the current time t refers to data obtained by the moving object controller 1000 using the target observation data D1 (internal state data Xt of the moving object and environmental map data mt). The data at time t−1 preceding the current time t by one time refers to data obtained by the moving object controller 1000 at the time preceding the current time t by one time (internal state data Xt-1 of the moving object and environmental map data mt-1). The data at time t-k preceding the current time t by k times (k is a natural number) refers to data obtained by the moving object controller 1000 at the time preceding the current time t by k times (internal state data Xt-k of the moving object and environmental map data mt-k).

The above data may be obtained at synchronous timing or at asynchronous timing.

1.2 Operation of Moving Object Controller

The operation of the moving object controller 1000 with the above-described structure will now be described.

In the example described below, the robot Rbt1 is a moving object that moves autonomously. The input data Din is an image (captured image) captured by an imaging device (not shown) incorporating a visible light image sensor, which is mounted on the robot Rbt1. The environmental map is generated based on information about landmarks.

FIG. 2 is a diagram describing the positional relationship between the estimated positions of the robot Rbt1, a landmark LM1, and a landmark LM2 at time t−1 (viewed from above). In FIG. 2, the space is defined by the x-axis and the y-axis, with the origin indicated by O.

FIG. 3 is a diagram describing the positional relationship between the estimated positions of the robot Rbt1 and the landmarks LM1 and LM2 at time t (viewed from above). In FIG. 3, the space is also defined by the x-axis and the y-axis, with the same origin as in FIG. 2.

In the example described below, the robot Rbt1 at the position shown in FIG. 2 receives input control data for moving the robot Rbt1 in the direction of arrow Dir1 shown in FIG. 2 at time t−1. The robot Rbt1 moves to the position shown in FIG. 3 at time t.

The moving object controller 1000 uses (1) data indicating the internal state of the robot Rbt1 and (2) data representing an environmental map described below.

1.2.1 Internal State Data of Robot Rbt1

The moving object controller 1000 uses, as data Xt indicating the internal state of the robot Rbt1,

(1) internal state data XPt indicating the position and the angle of the robot Rbt1 at time t, and

(2) variance-covariance matrix data V_mtxt (Rbt1) of the robot Rbt1 at time t.

The internal state data Xt of the robot Rbt1 at time t is written as


Xt=[XPt,V_mtxt(Rbt1)].

The internal state data XPt indicating the position and the angle of the robot Rbt1 at time t is a three-dimensional state vector written as


XPt=(x0t,y0t,θt),

where x0t is an expected value of the x-coordinate of the robot Rbt1 at time t,

y0t is an expected value of the y-coordinate of the robot Rbt1 at time t, and

θt is an expected value of the angle of the robot Rbt1 (orientation of the robot Rbt1) with respect to the negative direction of x-axis at time t.

The moving object controller 1000 uses particle filtering (Monte Carlo approximation) to estimate the internal state of the robot Rbt1.

In particle filtering, the posterior probability distribution of the state vector XPt-1 at time t−1 is used as the prior probability distribution of the state vector XPt at the current time t, and the predictive probability distribution of the state vector XPt at the current time t is obtained through prediction using this prior probability distribution. A likelihood is then calculated based on the obtained predictive probability distribution and actual observation data (observation data at the current time t). Particles are then resampled at a ratio proportional to the calculated likelihood, without changing the total number of particles. Through this processing, the internal state of the robot Rbt1 is estimated.
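The prediction, likelihood weighting, and resampling steps described above can be sketched as follows. This is an illustrative sketch only: the `move` and `likelihood` model functions are caller-supplied assumptions, not the embodiment's concrete models.

```python
import random

def particle_filter_step(particles, u, z, move, likelihood):
    """One particle filter update, as described above: predict each particle
    with the system model, weight by the likelihood of the actual observation,
    then resample without changing the total number of particles."""
    # Prediction: push each particle through the system model with control u.
    predicted = [move(p, u) for p in particles]
    # Likelihood of the actual observation z under each predicted particle.
    weights = [likelihood(z, p) for p in predicted]
    total = sum(weights)
    if total == 0.0:
        return predicted  # degenerate case: keep the prediction as-is
    weights = [w / total for w in weights]
    # Resampling proportional to weight; particle count is unchanged.
    return random.choices(predicted, weights=weights, k=len(predicted))
```

Particles consistent with the observation survive resampling, so the cloud concentrates around the plausible internal state.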

FIG. 4 is a schematic diagram describing the distribution of particles (one example) for estimating the position of the robot Rbt1 on the x-y plane of the internal state of the robot Rbt1.

In this example, M1 particles (M1 is a natural number) are used to estimate the internal state of the robot Rbt1.

The moving object controller 1000 uses the variance-covariance matrix data V_mtxt (Rbt1) of the robot Rbt1 at time t as data indicating the internal state of the robot Rbt1. More specifically, the variance-covariance matrix data V_mtxt (Rbt1) of the robot Rbt1 at time t is written using the formula below.

Formula 1

V_mtxt(Rbt1) = | E[(x0t(i) − x0t)(x0t(i) − x0t)]  E[(x0t(i) − x0t)(y0t(i) − y0t)] |
               | E[(y0t(i) − y0t)(x0t(i) − x0t)]  E[(y0t(i) − y0t)(y0t(i) − y0t)] |  (1)

In the formula, x0t(i) is the x-coordinate of the i-th particle (i is a natural number, and 1≦i≦M1), which is used to estimate the internal state of the robot Rbt1, and y0t(i) is the y-coordinate of the i-th particle (i is a natural number, and 1≦i≦M1), which is used to estimate the internal state of the robot Rbt1.

E[a] is an expected value of a. In other words, E[(x0t(i)−x0t)(x0t(i)−x0t)] is a variance of the x-coordinates of the M1 particles (M1 is a natural number) for estimating the internal state of the robot Rbt1. E[(y0t(i)−y0t)(y0t(i)−y0t)] is a variance of the y-coordinates of the M1 particles (M1 is a natural number) for estimating the internal state of the robot Rbt1.

Further, x0t is an expected value (average value) of the x-coordinates of the M1 particles (M1 is a natural number), and y0t is an expected value (average value) of the y-coordinates of the M1 particles (M1 is a natural number).
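Formula (1) can be computed directly from the particle cloud, with the expected values E[.] taken as sample averages over the M1 particles. A minimal sketch (the function name is illustrative):

```python
def particle_covariance(xs, ys):
    """Variance-covariance matrix as in formula (1): each entry is the sample
    average over the particles, with the expected values x0_t and y0_t taken
    as the particle means. xs and ys hold the particle coordinates."""
    m = len(xs)
    x_mean = sum(xs) / m
    y_mean = sum(ys) / m
    vxx = sum((x - x_mean) ** 2 for x in xs) / m
    vyy = sum((y - y_mean) ** 2 for y in ys) / m
    vxy = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / m
    return [[vxx, vxy], [vxy, vyy]]
```

The diagonal entries are the variances of the x- and y-coordinates; the off-diagonal entries are the (symmetric) covariance.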

As described above, the moving object controller 1000 uses (A) the internal state data XPt indicating the position and the angle of the robot Rbt1 at time t and (B) the variance-covariance matrix data V_mtxt (Rbt1) of the robot Rbt1 at time t as the data Xt indicating the internal state of the robot Rbt1.

1.2.2 Data Representing Environmental Map

The moving object controller 1000 generates environmental map data mt at time t as data representing an environmental map.

The environmental map data mt includes information about landmarks. When the robot Rbt1 detects a landmark, information about the detected landmark is added to the environmental map data mt.

The moving object controller 1000 uses the environmental map data mt at time t written as


mt=[LM1Pt,V_mtxt(LM1),LM2Pt,V_mtxt(LM2), . . . ,LMkPt,V_mtxt(LMk)],

where k is a natural number.

In this formula, LMkPt is internal state data corresponding to the position of the k-th landmark at time t, and V_mtxt (LMk) is variance-covariance matrix data of the k-th landmark at time t.

At time t, information about the n-th landmark (n is a natural number, and n≦k) at time t includes (A) the internal state data LMnPt and (B) the variance-covariance matrix data V_mtxt (LMn). The data sets (A) and (B) for k landmarks are used as the environmental map data mt at time t.

In the example of FIG. 3, the environmental map data mt includes information about two landmarks LM1 and LM2. In the example of FIG. 3, the environmental map data mt is written as


mt=[LM1Pt,V_mtxt(LM1),LM2Pt,V_mtxt(LM2)].

More specifically, the environmental map data mt in FIG. 3 includes

(1) internal state data LM1Pt corresponding to the position of the landmark LM1 and variance-covariance matrix data V_mtxt (LM1), and

(2) internal state data LM2Pt corresponding to the position of the landmark LM2 and variance-covariance matrix data V_mtxt (LM2).
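The environmental map data mt described above can be represented as a simple container of per-landmark entries. A minimal sketch, assuming Python dataclasses; the class and field names are illustrative, not from the embodiment:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Landmark:
    """One landmark entry of the environmental map m_t: the expected position
    LMkP_t and the variance-covariance matrix V_mtx_t(LMk)."""
    position: tuple      # (xk_t, yk_t): expected position on the x-y plane
    covariance: list     # 2x2 variance-covariance matrix

@dataclass
class EnvironmentalMap:
    """m_t = [LM1P_t, V_mtx_t(LM1), ..., LMkP_t, V_mtx_t(LMk)] as a list."""
    landmarks: List[Landmark] = field(default_factory=list)

    def add(self, position, covariance):
        """When a new landmark is detected, its information is appended to
        the environmental map data, as described above."""
        self.landmarks.append(Landmark(position, covariance))
```

For the two-landmark example of FIG. 3, the map would hold one entry for LM1 and one for LM2.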

For ease of explanation, a case in which environmental map data is obtained using the two landmarks LM1 and LM2 will be described below with reference to FIGS. 2 and 3.

The internal state data LM1Pt corresponding to the position of the landmark LM1 is a two-dimensional vector, and indicates an expected value of the position of the landmark LM1 on the x-y plane. In other words, the internal state data LM1Pt corresponding to the position of the landmark LM1 is written as


LM1Pt=(x1t,y1t),

where x1t is an expected value of the x-coordinate of the landmark LM1 at time t, and

y1t is an expected value of the y-coordinate of the landmark LM1 at time t.

The internal state data LM2Pt corresponding to the position of the landmark LM2 is a two-dimensional vector, and indicates an expected value of the position of the landmark LM2 on the x-y plane. In other words, the internal state data LM2Pt corresponding to the position of the landmark LM2 is written as


LM2Pt=(x2t,y2t),

where x2t is an expected value of the x-coordinate of the landmark LM2 at time t, and

y2t is an expected value of the y-coordinate of the landmark LM2 at time t.

The moving object controller 1000 uses particle filtering (Monte Carlo approximation) to estimate information about landmarks.

In particle filtering to estimate internal state data corresponding to the position of the landmark LM1, the posterior probability distribution of the state vector LM1Pt-1 of the landmark LM1 at time t−1 is used as the prior probability distribution of the state vector LM1Pt of the landmark LM1 at the current time t. The predictive probability distribution of the state vector LM1Pt of the landmark LM1 at the current time t is obtained through prediction using this prior probability distribution. A likelihood is then calculated based on the obtained predictive probability distribution and actual observation data (observation data at the current time t). Particles are then resampled at a ratio proportional to the calculated likelihood, without changing the total number of particles. Through this processing, the information (internal state) about the landmark LM1 is estimated.

In particle filtering to estimate internal state data corresponding to the position of the landmark LM2, the posterior probability distribution of the state vector LM2Pt-1 of the landmark LM2 at time t−1 is used as the prior probability distribution of the state vector LM2Pt of the landmark LM2 at the current time t. The predictive probability distribution of the state vector LM2Pt of the landmark LM2 at the current time t is obtained through prediction using this prior probability distribution. A likelihood is then calculated based on the obtained predictive probability distribution and actual observation data (observation data at the current time t). Particles are then resampled at a ratio proportional to the calculated likelihood, without changing the total number of particles. Through this processing, the information (internal state) about the landmark LM2 is estimated.

FIG. 4 is a schematic diagram describing the distribution of particles (one example) for estimating the information about the landmarks LM1 and LM2 (internal state data corresponding to the position), or the positions of the landmarks LM1 and LM2 on the x-y plane.

M11 particles (M11 is a natural number) are used to estimate the information about the landmark LM1 (position on the x-y plane), and M12 particles (M12 is a natural number) are used to estimate the information about the landmark LM2 (position on the x-y plane).

Variance-covariance matrix data V_mtxt (LM1) of the landmark LM1 at time t is used as the data indicating the information about the landmark LM1 (internal state of the landmark LM1). More specifically, the variance-covariance matrix data V_mtxt (LM1) of the landmark LM1 at time t is written using the formula below.

Formula 2

V_mtxt(LM1) = | E[(x1t(i) − x1t)(x1t(i) − x1t)]  E[(x1t(i) − x1t)(y1t(i) − y1t)] |
              | E[(y1t(i) − y1t)(x1t(i) − x1t)]  E[(y1t(i) − y1t)(y1t(i) − y1t)] |  (2)

In the formula, x1t(i) is the x-coordinate of the i-th particle (i is a natural number, and 1≦i≦M11), which is used to estimate the internal state of the landmark LM1, and y1t(i) is the y-coordinate of the i-th particle (i is a natural number, and 1≦i≦M11), which is used to estimate the internal state of the landmark LM1.

E[a] is an expected value of a. In other words, E[(x1t(i)−x1t)(x1t(i)−x1t)] is a variance of the x-coordinates of the M11 particles (M11 is a natural number) for estimating the internal state of the landmark LM1. E[(y1t(i)−y1t)(y1t(i)−y1t)] is a variance of the y-coordinates of the M11 particles (M11 is a natural number) for estimating the internal state of the landmark LM1.

Further, x1t is an expected value (average value) of the x-coordinates of the M11 particles (M11 is a natural number) for estimating the internal state of the landmark LM1, and y1t is an expected value (average value) of the y-coordinates of the M11 particles (M11 is a natural number) for estimating the internal state of the landmark LM1.

Variance-covariance matrix data V_mtxt (LM2) of the landmark LM2 at time t is used as the data indicating the information about the landmark LM2 (internal state of the landmark LM2). More specifically, the variance-covariance matrix data V_mtxt (LM2) of the landmark LM2 at time t is written using the formula below.

Formula 3

V_mtxt(LM2) = | E[(x2t(i) − x2t)(x2t(i) − x2t)]  E[(x2t(i) − x2t)(y2t(i) − y2t)] |
              | E[(y2t(i) − y2t)(x2t(i) − x2t)]  E[(y2t(i) − y2t)(y2t(i) − y2t)] |  (3)

In the formula, x2t(i) is the x-coordinate of the i-th particle (i is a natural number, and 1≦i≦M12), which is used to estimate the internal state of the landmark LM2, and y2t(i) is the y-coordinate of the i-th particle (i is a natural number, and 1≦i≦M12), which is used to estimate the internal state of the landmark LM2.

E[a] is an expected value of a. In other words, E[(x2t(i)−x2t)(x2t(i)−x2t)] is a variance of the x-coordinates of the M12 particles (M12 is a natural number) for estimating the internal state of the landmark LM2. E[(y2t(i)−y2t)(y2t(i)−y2t)] is a variance of the y-coordinates of the M12 particles (M12 is a natural number) for estimating the internal state of the landmark LM2.

Further, x2t is an expected value (average value) of the x-coordinates of the M12 particles (M12 is a natural number) for estimating the internal state of the landmark LM2, and y2t is an expected value (average value) of the y-coordinates of the M12 particles (M12 is a natural number) for estimating the internal state of the landmark LM2.

As described above, the moving object controller 1000 generates environmental map data mt at time t using the information about landmarks (internal state data corresponding to the positions of the landmarks and variance-covariance matrix data).

1.2.3 Details of Processing

The processing of the moving object controller 1000 will now be described in detail.

Processing at Time t−1

At time t−1, the robot Rbt1 and the landmarks LM1 and LM2 are arranged as shown in FIG. 2. Internal state data Xt-1 of the robot Rbt1 and environmental map data mt-1 obtained by the state estimation unit 3 included in the moving object controller 1000 at time t−1 are written below.

Internal state data Xt-1 of the robot Rbt1


Xt-1=[XPt-1,V_mtxt-1(Rbt1)]


XPt-1=(x0t-1,y0t-1t-1)

Environmental map data mt-1


mt-1=[LM1Pt-1,V_mtxt-1(LM1),LM2Pt-1,V_mtxt-1(LM2)]


LM1Pt-1=(x1t-1,y1t-1)


LM2Pt-1=(x2t-1,y2t-1)

The internal state data Xt-1 and the environmental map data mt-1 obtained by the state estimation unit 3 included in the moving object controller 1000 at time t−1 are stored into the storage unit 4.

At time t−1, control input data Ut-1 is input into the moving object controller 1000. The control input data Ut-1 is used to move the robot Rbt1 in the direction of arrow Dir1 shown in FIG. 2. More specifically, the control input data Ut-1 is written as


Ut-1=(Vxt-1,Vyt-1), and


θt-1=tan−1(−Vyt-1/Vxt-1),

where Vxt-1 is the speed in x-direction and

Vyt-1 is the speed in y-direction.

When controlled using the control input data Ut-1, the robot Rbt1 is predicted to move to the position shown in FIG. 3 at time t. The distance between the point (x0t-1, y0t-1) in FIG. 2 and the point (x0t, y0t) in FIG. 3 is the distance (estimated distance) by which the robot Rbt1 moves from time t−1 to time t at speed Vxt-1 in x-direction and at speed Vyt-1 in y-direction.
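The predicted motion from the control input Ut-1 can be sketched as dead reckoning over one time step. The time step dt = 1 and the function name are assumptions for illustration:

```python
import math

def predict_pose(pose_prev, u, dt=1.0):
    """Dead-reckoning prediction, as described above: from (x0_{t-1}, y0_{t-1})
    and the control input U_{t-1} = (Vx_{t-1}, Vy_{t-1}), predict the position
    at time t. The orientation follows theta = arctan(-Vy/Vx), as given in
    the description of the control input data."""
    x, y, _ = pose_prev
    vx, vy = u
    theta = math.atan(-vy / vx)       # orientation given in the description
    return (x + vx * dt, y + vy * dt, theta)
```

The predicted point (x0t, y0t) is then exactly the position reached from (x0t-1, y0t-1) at speeds Vxt-1 and Vyt-1 over one time step.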

The imaging device (not shown) mounted on the robot Rbt1 incorporates an optical system with an optical axis extending in the direction of arrow Dir shown in FIG. 2. The imaging device captures an image around the robot Rbt1 at an angle α shown in FIG. 2. The imaging device in the robot Rbt1 captures an image of an area AR_capt-1 indicated by broken lines in FIG. 2 at time t−1.

The left part of FIG. 5 schematically shows an image Imgt-1 captured by the imaging device in the robot Rbt1 at time t−1. As shown in the left part of FIG. 5, the captured image includes a circular landmark LM1. In the left part of FIG. 5, a point corresponding to the internal state data LM1Pt-1=(x1t-1, y1t-1) for the position of the landmark LM1 at time t−1 is depicted as a black dot.

Processing at Time t

At time t, the robot Rbt1 controlled using the control input Ut-1 is predicted to have moved to the position shown in FIG. 3.

At time t, an image captured by the imaging device in the robot Rbt1 is input into the observation obtaining unit 1 as input data Din.

The observation obtaining unit 1 subjects the input data Din to camera signal processing (e.g., gain adjustment, gamma correction, and white balance adjustment), and outputs the resultant image signal D1 (image D1) to the landmark detection unit 2 as observation data.

The landmark prediction unit 5 obtains data STt-1 at time t−1 from the storage unit 4.

The data STt-1 includes (1) the internal state data Xt-1 of the robot Rbt1 at time t−1, (2) the environmental map data mt-1 at time t−1, and (3) the control input data Ut-1 for the robot Rbt1 at time t−1.

The landmark prediction unit 5 predicts the information about the landmark at time t based on the data STt-1 at time t−1.

More specifically, the landmark prediction unit 5 predicts a position at which the landmark is likely to be in the image D1. Based on the control input data Ut-1 at time t−1 used to control the robot Rbt1, the robot Rbt1 is likely to be at the position shown in FIG. 3 at time t, and an area AR_capt defined by broken lines in FIG. 3 is likely to be captured by the imaging device in the robot Rbt1 at time t. The image (image D1) captured by the imaging device in the robot Rbt1 is likely to contain the landmark LM1. The area of the image D1 that is likely to contain the landmark LM1 can be calculated based on (1) the internal state data Xt-1 of the robot Rbt1 at time t−1, (2) the environmental map data mt-1 at time t−1 (positional information about the landmark LM1 at time t−1), and (3) the control input data Ut-1 for the robot Rbt1 at time t−1.
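The calculation of the image area likely to contain the landmark can be sketched as projecting the landmark's map position into the camera frame of the predicted robot pose. The pinhole-style linear mapping, the field-of-view angle, and the image width below are illustrative assumptions, not values from the embodiment:

```python
import math

def predict_landmark_area(robot_pose, landmark_xy, fov_deg=60.0, img_width=640):
    """Predict where a landmark should appear in the image D1, as described
    above: compute the landmark's bearing relative to the optical axis of the
    predicted robot pose, then map it onto the image width."""
    rx, ry, rtheta = robot_pose
    dx, dy = landmark_xy[0] - rx, landmark_xy[1] - ry
    bearing = math.atan2(dy, dx) - rtheta   # angle relative to optical axis
    half_fov = math.radians(fov_deg) / 2.0
    if abs(bearing) > half_fov:
        return None                          # outside the captured area
    # Map the bearing linearly onto the image width (center = optical axis).
    u = (0.5 - bearing / (2.0 * half_fov)) * img_width
    return u  # horizontal center of the predicted search area AR1_t
```

A landmark straight ahead of the predicted pose maps to the image center; a landmark outside the angle of view yields no predicted area.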

The right part of FIG. 5 schematically shows one example of the image area AR1t of the image D1 (image shown as Imgt in FIG. 5) that is calculated through the above processing and is likely to contain the landmark LM1.

The landmark prediction unit 5 outputs a signal containing (1) information indicating that only the landmark LM1 is likely to be contained in the image D1 at time t and (2) information, calculated through the above processing, for identifying the image area AR1t that is likely to contain the landmark LM1, to the landmark detection unit 2 as a landmark prediction signal Sig_pre.

For ease of explanation, the image area AR1t is elliptical as shown in the right part of FIG. 5. The landmark LM1 is most likely to be detected at the center of the elliptical image area AR1t, and is less likely to be detected at positions farther from the center.

The size of the image area AR1t is set based on, for example, the positional information about the landmark LM1 included in the environmental map data mt-1 at time t−1 and/or the variance-covariance matrix data of the landmark LM1. The size of the image area AR1t may be fixed to a predetermined size to simplify the processing. The same applies to the size of an image area ARkt for the k-th landmark (k is a natural number).
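For illustration only, the mapping from a landmark's variance-covariance matrix to the size of the image area ARkt described above might be sketched as follows in Python; the helper name, the 3-sigma scaling, and the minimum-size clamp are hypothetical choices, not part of the embodiment:

```python
import numpy as np

def search_ellipse(cov_2x2, n_sigma=3.0, min_axes=(8.0, 8.0)):
    """Derive the half-axes and orientation of an elliptical search
    area from a landmark's 2x2 variance-covariance matrix.

    Returns (half_axis_1, half_axis_2, angle_rad) in image units.
    """
    # Eigen-decomposition gives the principal axes of the uncertainty.
    eigvals, eigvecs = np.linalg.eigh(cov_2x2)
    # n_sigma scales the confidence region; clamp to a minimum size
    # so the area stays searchable for well-localized landmarks.
    half_axes = np.maximum(n_sigma * np.sqrt(eigvals), min_axes)
    # Orientation of the largest-variance direction.
    angle = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])
    return half_axes[0], half_axes[1], angle
```

A larger covariance thus yields a larger search ellipse; fixing the returned axes to constants would correspond to the fixed-size simplification mentioned above.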

The landmark detection unit 2 determines, based on the landmark prediction signal Sig_pre provided from the landmark prediction unit 5, that only the landmark LM1 is likely to be contained in the image D1 at time t. The landmark detection unit 2 then performs processing to detect the landmark LM1 using the image area AR1t that is likely to contain the landmark LM1.

As shown in the right part of FIG. 5, the landmark LM1 is at the position indicated by LM1 in the image D1 (image Imgt). The landmark detection unit 2 starts searching for the landmark LM1 at the center of the image area AR1t and then sequentially searches positions farther from this initial search position. In the right part of FIG. 5, the landmark LM1 is near the center of the image area AR1t, so the landmark detection unit 2 detects the landmark LM1 soon after starting the search from the center of the image area AR1t. Each landmark may be provided with landmark identification information (a marker), such as a pattern, a color, or a character indicating a specific code. In this case, the landmark detection unit 2 can identify a detected landmark by detecting its landmark identification information (marker) in the image D1 (image Imgt).
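The center-outward search order described above can be sketched as follows; `match_fn` is a hypothetical per-pixel detector (e.g., a marker-template test at that position), and the grid step is an illustrative assumption:

```python
def center_out_search(area_center, half_w, half_h, match_fn, step=4):
    """Search an elliptical image area for a landmark, starting at
    the center and moving outward, stopping at the first match.

    match_fn(x, y) -> bool is an assumed detector.
    Returns the matched (x, y) position, or None if not found.
    """
    cx, cy = area_center
    # Enumerate grid offsets inside the ellipse, nearest-first.
    offsets = []
    for dx in range(-int(half_w), int(half_w) + 1, step):
        for dy in range(-int(half_h), int(half_h) + 1, step):
            if (dx / half_w) ** 2 + (dy / half_h) ** 2 <= 1.0:
                offsets.append((dx * dx + dy * dy, dx, dy))
    for _, dx, dy in sorted(offsets):
        if match_fn(cx + dx, cy + dy):
            return cx + dx, cy + dy  # found: terminate the search early
    return None  # not found within the set search area
```

When the landmark sits near the predicted center, only a few candidate positions are tested before the search terminates, which is the source of the speed-up described above.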

The landmark detection unit 2 obtains the position (coordinate on the x-y plane shown in FIGS. 2 and 3) and the size of the landmark LM1 in the environmental map based on the position and the size of the landmark LM1 in the image D1 (image Imgt) detected through the processing described above and the imaging parameters (including the viewing angle and the focal length) obtained when the image D1 (image Imgt) is captured. The landmark detection unit 2 then outputs a signal containing information for identifying the position and the size of the landmark LM1 in the environmental map to the state update unit 31 and the environmental map update unit 32 included in the state estimation unit 3 as a landmark detection signal LMt.

The state update unit 31 included in the state estimation unit 3 estimates the internal state of the robot Rbt1 using the observation data D1, which is output from the observation obtaining unit 1, the landmark detection signal LMt, which is output from the landmark detection unit 2, and the control input data Ut for the moving object (robot Rbt1). The state update unit 31 obtains the estimation result as internal state data Xt of the robot Rbt1 at the current time t (updates the internal state data).

The state update unit 31 obtains (updates) the internal state data Xt at the current time t through particle filtering. More specifically, the state update unit 31 uses the posterior probability distribution of the state vector XPt-1 at time t−1 as the prior probability distribution of the state vector XPt at the current time t, and obtains the predictive probability distribution of the state vector XPt at the current time t through prediction using this prior probability distribution. The state update unit 31 then calculates a likelihood based on the obtained predictive probability distribution and the actual observation data (observation data at the current time t), and resamples particles at a ratio proportional to the calculated likelihood without changing the total number of particles. Through this processing, the internal state of the robot Rbt1 is estimated.
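The predict, weight, and resample cycle described above can be sketched as follows; `motion_fn` stands in for the system model g(xt|xt-1, Ut) and `likelihood_fn` for the observation model, and all names and models are illustrative assumptions rather than the claimed implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, control, observation,
                         motion_fn, likelihood_fn):
    """One predict / weight / resample cycle, keeping the particle
    count constant.

    particles:     (N, d) array of state hypotheses at time t-1
    motion_fn:     samples x_t from the assumed system model
    likelihood_fn: evaluates the assumed observation model
    """
    n = len(particles)
    # Prediction: propagate each particle through the system model.
    predicted = motion_fn(particles, control)
    # Likelihood of the actual observation under each predicted state.
    weights = likelihood_fn(predicted, observation)
    weights = weights / weights.sum()
    # Resampling proportional to likelihood; N stays unchanged.
    idx = rng.choice(n, size=n, p=weights)
    return predicted[idx]
```

The resampling step draws N indices with probability proportional to the likelihood weights, so the total particle count stays constant while high-likelihood hypotheses are duplicated.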

The actual observation data used to obtain the state vector XPt at the current time t is the data obtained based on, for example, at least one of (1) the image D1 output from the observation obtaining unit 1, (2) the information obtained from the landmark detection signal LMt for identifying the position and the size of the landmark (landmark LMk) in the environmental map, and (3) other information for identifying the position and the size of the robot Rbt1 in the environmental map (e.g., information obtained by a position sensor mounted on the robot Rbt1).

The state update unit 31 obtains variance-covariance matrix data V_mtxt (Rbt1) of the robot Rbt1 at time t as the data indicating the internal state of the robot Rbt1. FIGS. 2 to 4 each schematically show an ellipse containing most of the particles for obtaining the internal state corresponding to the position of the robot Rbt1. This ellipse is shaped and sized based on the values of the elements of the variance-covariance matrix and/or values calculated using the values of such matrix elements. More specifically, the ellipse shown in FIGS. 2 to 4 indicates the uncertainty of the internal state data corresponding to the position of the robot Rbt1. A smaller ellipse indicates that the position of the robot Rbt1 is estimated with higher accuracy.

As described above, the internal state data Xt of the robot Rbt1 at time t obtained by the state update unit 31 is output out of the state estimation unit 3, and also to the storage unit 4.

The environmental map update unit 32 included in the state estimation unit 3 estimates the internal state corresponding to the position of a landmark based on a landmark detection signal LMt, which is output from the landmark detection unit 2. The environmental map update unit 32 then obtains environmental map data mt at time t based on the estimation result. The processing for the landmark LM1 will now be described.

The environmental map update unit 32 obtains (updates) internal state data LM1Pt corresponding to the position of the landmark LM1 at the current time t through particle filtering. More specifically, the environmental map update unit 32 uses the posterior probability distribution of the internal state data LM1Pt-1 corresponding to the position of the landmark LM1 at time t−1 as the prior probability distribution of the internal state data LM1Pt at the current time t, and obtains the predictive probability distribution of the internal state data LM1Pt at the current time t through prediction using this prior probability distribution. The environmental map update unit 32 then calculates a likelihood based on the obtained predictive probability distribution and the actual observation data (observation data at the current time t), and resamples particles at a ratio proportional to the calculated likelihood without changing the total number of particles. Through this processing, the internal state corresponding to the position of the landmark LM1 is estimated.

The actual observation data used to obtain the internal state data LM1Pt corresponding to the position of the landmark LM1 at the current time t is the data obtained based on, for example, information for identifying the position and the size of the landmark LM1 in the environmental map obtained based on the landmark detection signal LMt.

The environmental map update unit 32 obtains variance-covariance matrix data V_mtxt (LM1) of the landmark LM1 at time t as the data indicating the internal state corresponding to the position of the landmark LM1. FIGS. 2 to 4 each schematically show an ellipse containing most of the particles for obtaining the internal state corresponding to the position of the landmark LM1. This ellipse is shaped and sized based on the values of the elements of the variance-covariance matrix data and/or values calculated using the values of such matrix elements. More specifically, the ellipse shown in FIGS. 2 to 4 indicates the uncertainty of the internal state data corresponding to the position of the landmark LM1. A smaller ellipse indicates that the position of the landmark LM1 is estimated with higher accuracy.

The environmental map update unit 32 obtains internal state data LM2Pt corresponding to the position of the landmark LM2 and variance-covariance matrix data V_mtxt (LM2) with the same procedure described above.

The environmental map update unit 32 obtains (updates) the environmental map data mt at time t based on the data obtained through the processing described above. The environmental map data mt obtained by the environmental map update unit 32 is then output out of the state estimation unit 3, and also to the storage unit 4.

Processing at Time t+1 and Subsequent Times

The processing described above will be repeated at time t+1 and subsequent times.

Through the processing described above, the moving object controller 1000 can continuously obtain (1) environmental map data mt and (2) internal state data Xt of the robot Rbt1 with high accuracy. This highly accurate data is used to control the robot Rbt1 in an appropriate manner.

As described above, the moving object controller 1000 includes the landmark prediction unit that searches for a predetermined landmark by predicting an image area that is likely to contain the predetermined landmark in an image D1 of observation data at time t based on (1) internal state data Xt-1 of the robot Rbt1 at time t−1, (2) environmental map data mt-1 at time t−1, and (3) control input data Ut-1 used to control the robot Rbt1 at time t−1. The moving object controller 1000 thus need not search the entire image D1 for the predetermined landmark, unlike techniques known in the art. The moving object controller 1000 greatly shortens the processing time taken to detect the predetermined landmark, and thus the processing time taken to generate (update) environmental map data. Through the processing described above, the moving object controller 1000 generates an environmental map efficiently and performs highly accurate state estimation in a short time. As a result, the moving object controller 1000 controls a moving object in an appropriate manner.

In the example described above, the landmark prediction unit predicts information about a landmark at time t based on the data STt-1 at time t−1, one time step before the current time t. The landmark prediction unit may instead predict information about the landmark at time t based on a plurality of sets of past data obtained before the current time t (e.g., data STt-1, data STt-2, and subsequent data).

Second Embodiment

A second embodiment will now be described.

In the second embodiment, the components that are the same as in the first embodiment are given the same reference numerals as those components, and will not be described in detail.

2.1 Structure of Moving Object Controller

FIG. 6 is a schematic block diagram of a moving object controller 2000 according to the second embodiment.

FIG. 7 is a schematic block diagram of a landmark prediction unit 5A in the second embodiment.

FIG. 8 is a schematic block diagram of a landmark detection unit 2A in the second embodiment.

The moving object controller 2000 according to the second embodiment includes a landmark detection unit 2A, which replaces the landmark detection unit 2 included in the moving object controller 1000 of the first embodiment, and a landmark prediction unit 5A, which replaces the landmark prediction unit 5.

As shown in FIG. 7, the landmark prediction unit 5A includes a landmark prediction signal generation unit 51 and an environmental map data reliability obtaining unit 52.

The landmark prediction signal generation unit 51 receives data STt-1 at time t−1, which is read from the storage unit 4. The landmark prediction signal generation unit 51 generates a landmark prediction signal Sig_pre based on the data STt-1, and outputs the generated landmark prediction signal Sig_pre to the landmark detection unit 2A.

The environmental map data reliability obtaining unit 52 receives data STt-1 at time t−1, which is read from the storage unit 4. The environmental map data reliability obtaining unit 52 obtains reliability Blf_m of information about each landmark based on environmental map data mt-1, and outputs the obtained reliability Blf_m of the information about each landmark to the landmark detection unit 2A. The reliability Blf_m is obtained for each landmark. For example, reliability Blf_m (LM1) of a landmark LM1 is a sum of diagonal elements of the variance-covariance matrix data V_mtxt-1 (LM1) of the landmark LM1. In other words, the reliability Blf_m (LM1) is a sum of the variance of the x-coordinate of the landmark LM1 at time t−1 and the variance of the y-coordinate of the landmark LM1 at time t−1. The reliability Blf_m (LMk) of the landmark LMk may be calculated with any other method that can yield a value representing the estimation accuracy of the landmark position.
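As a minimal sketch of the reliability computation described above (the function name is assumed), the reliability is the sum of the diagonal elements, i.e., the trace, of the landmark's 2x2 variance-covariance matrix:

```python
import numpy as np

def landmark_reliability(cov_2x2):
    """Sum of the diagonal elements (the trace) of the landmark's
    variance-covariance matrix: the variance of the x-coordinate
    plus the variance of the y-coordinate.  A smaller value
    indicates a more accurate position estimate."""
    return float(np.trace(cov_2x2))
```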

As shown in FIG. 8, the landmark detection unit 2A includes a landmark initial position obtaining unit 21, a first comparator 22, and a landmark search unit 23.

The landmark initial position obtaining unit 21 receives a landmark prediction signal Sig_pre, which is output from the landmark prediction unit 5A. The landmark initial position obtaining unit 21 obtains information for setting an image area that is likely to contain each landmark in an image D1 based on the landmark prediction signal Sig_pre. Using the obtained information, the landmark initial position obtaining unit 21 then determines the initial position (in the image) at which the search for each landmark is to be started in the image D1. The landmark initial position obtaining unit 21 outputs information Init_pos about the initial position in the search for each landmark to the landmark search unit 23.

The first comparator 22 receives reliability Blf_m of information about each landmark, which is output from the landmark prediction unit 5A, and a threshold value Th1. The first comparator 22 compares the reliability Blf_m and the threshold value Th1, and outputs a signal Det1 indicating the comparison result to the landmark search unit 23.

The landmark search unit 23 receives the information Init_pos about the initial position at which the search for each landmark is started, which is output from the landmark initial position obtaining unit 21, and the signal Det1 indicating the comparison result, which is output from the first comparator 22. Using the signal Det1 indicating the comparison result, the landmark search unit 23 performs (1) local search or (2) global search for each landmark to obtain the position (coordinate position on the x-y plane shown in FIGS. 2 and 3) and the size of each landmark in the environmental map. The landmark search unit 23 then outputs a signal containing the obtained information for identifying the position and the size of each landmark in the environmental map to the state update unit 31 and the environmental map update unit 32 included in the state estimation unit 3 as a landmark detection signal LMt.

2.2 Operation of Moving Object Controller

The operation of the moving object controller 2000 with the above-described structure will now be described. The processing that is the same as the processing described in the first embodiment will not be described.

The landmark prediction signal generation unit 51 generates a landmark prediction signal Sig_pre based on the data STt-1 read from the storage unit 4. The landmark prediction signal generation unit 51 then outputs the generated landmark prediction signal Sig_pre to the landmark initial position obtaining unit 21 included in the landmark detection unit 2A.

The environmental map data reliability obtaining unit 52 obtains reliability Blf_m of information about each landmark based on environmental map data mt-1 at time t−1 read from the storage unit 4. The environmental map data reliability obtaining unit 52 then outputs the obtained reliability Blf_m of the information about each landmark to the first comparator 22 of the landmark detection unit 2A.

The environmental map data reliability obtaining unit 52 obtains

(1) reliability Blf_m (LM1) of the landmark LM1 based on the sum of diagonal elements of the variance-covariance matrix data V_mtxt-1 (LM1) of the landmark LM1, and

(2) reliability Blf_m (LM2) of the landmark LM2 based on the sum of diagonal elements of the variance-covariance matrix data V_mtxt-1 (LM2) of the landmark LM2.

For ease of explanation, the environmental map data in the present embodiment also includes the two landmarks LM1 and LM2.

The landmark initial position obtaining unit 21 included in the landmark detection unit 2A obtains information for setting an image area that is likely to contain each landmark in the image D1 based on a landmark prediction signal Sig_pre, which is output from the landmark prediction unit 5A. Using the obtained information, the landmark initial position obtaining unit 21 then determines the initial position (in the image) at which the search for each landmark is to be started in the image D1. The landmark initial position obtaining unit 21 outputs the determined information Init_pos about the initial position in the search for each landmark to the landmark search unit 23.

The first comparator 22 receives reliability Blf_m of information about each landmark, which is output from the landmark prediction unit 5A, and a threshold value Th1. The first comparator 22 then compares the reliability Blf_m and the threshold value Th1.

More specifically, the processing described below is performed.

(A) The reliability Blf_m (LM1) of the landmark LM1 is compared with the threshold Th1.

(A1) When Blf_m (LM1)<Th1, the signal Det1 is set to Det1 (LM1)=1.

(A2) When Blf_m (LM1)≧Th1, the signal Det1 is set to Det1 (LM1)=0.

(B) The reliability Blf_m (LM2) of the landmark LM2 is compared with the threshold Th1.

(B1) When Blf_m (LM2)<Th1, the signal Det1 is set to Det1 (LM2)=1.

(B2) When Blf_m (LM2)≧Th1, the signal Det1 is set to Det1 (LM2)=0.

The resultant signal Det1 indicating the comparison result for each landmark is output from the first comparator 22 to the landmark search unit 23.

The landmark search unit 23 receives the information Init_pos indicating the initial search position for each landmark from the landmark initial position obtaining unit 21 and the signal Det1 indicating the comparison result from the first comparator 22, and determines whether (1) local search or (2) global search is to be performed for each landmark using the received information and signal.

More specifically, the landmark search unit 23 performs the processing (A1), (A2), (B1), and (B2) described below.

(A1) When Det1 (LM1)=1, indicating the estimation result with high reliability for the landmark LM1, local search is to be performed for the landmark LM1. The local search refers to setting an image area ARkt that is likely to contain the landmark LMk (k is a natural number) and searching the set image area ARkt to detect the landmark LMk (k is a natural number) as described in the first embodiment.

(A2) When Det1 (LM1)=0, indicating the estimation result with low reliability for the landmark LM1, global search is to be performed for the landmark LM1. The global search refers to searching the entire area of the image D1 to detect the landmark LMk (k is a natural number).

(B1) When Det1 (LM2)=1, indicating the estimation result with high reliability for the landmark LM2, the local search is to be performed for the landmark LM2.

(B2) When Det1 (LM2)=0, indicating the estimation result with low reliability for the landmark LM2, the global search is to be performed for the landmark LM2.

The landmark search unit 23 obtains the size and the position (coordinates on the x-y plane shown in FIGS. 2 and 3) of each of the landmarks LM1 and LM2 in the environmental map based on the search result obtained through the processing above as described in the first embodiment. The landmark search unit 23 then outputs a landmark detection signal LMt containing the obtained information for identifying the size and the position of the landmarks LM1 and LM2 in the environmental map to the state update unit 31 and the environmental map update unit 32 included in the state estimation unit 3.

As described above, the moving object controller 2000 selectively switches the processing for detecting a landmark between the local search and the global search based on the reliability of information about the landmark. The moving object controller 2000 can thus detect a landmark at high speed when the information about the landmark has high reliability. When the information about the landmark has low reliability, the global search is performed to increase the probability that the landmark can be detected.

First Modification

A first modification of the second embodiment will now be described.

In the present modification, the components that are the same as in the above embodiments are given the same reference numerals as those components, and will not be described in detail.

FIG. 9 is a schematic block diagram of a moving object controller 2000A according to a first modification of the second embodiment.

FIG. 10 is a schematic block diagram of a landmark prediction unit 5B in the first modification of the second embodiment.

FIG. 11 is a schematic block diagram of a landmark detection unit 2B in the first modification of the second embodiment.

The moving object controller 2000A according to the present modification includes a landmark detection unit 2B, which replaces the landmark detection unit 2A included in the moving object controller 2000 of the second embodiment, and a landmark prediction unit 5B, which replaces the landmark prediction unit 5A.

As shown in FIG. 10, the landmark prediction unit 5B includes a self-position reliability obtaining unit 53, which is added to the structure of the landmark prediction unit 5A.

The self-position reliability obtaining unit 53 receives data STt-1 at time t−1, which is read from the storage unit 4. The self-position reliability obtaining unit 53 obtains reliability Blf_rbt of internal state data of the robot Rbt1 based on the internal state data Xt-1 of the robot Rbt1, and outputs the obtained reliability Blf_rbt of the internal state data of the robot Rbt1 to the landmark detection unit 2B. The reliability Blf_rbt is, for example, a sum of diagonal elements of the variance-covariance matrix data V_mtxt-1 (Rbt1) of the robot Rbt1. In other words, the reliability Blf_rbt is a sum of the variance of the x-coordinate of the robot Rbt1 at time t−1 and the variance of the y-coordinate of the robot Rbt1 at time t−1. The reliability Blf_rbt of the robot Rbt1 may be calculated with any other method that can yield a value representing the estimation accuracy of the internal state data of the robot Rbt1.

As shown in FIG. 11, the landmark detection unit 2B includes a landmark search unit 23A, which replaces the landmark search unit 23 included in the landmark detection unit 2A, and a second comparator 24 and a detection information obtaining unit 25, which are added to the structure of the landmark detection unit 2A.

The second comparator 24 receives the reliability Blf_rbt of the internal state data of the robot Rbt1, which is output from the self-position reliability obtaining unit 53 of the landmark prediction unit 5B, and a threshold value Th2. The second comparator 24 compares the reliability Blf_rbt and the threshold value Th2, and outputs a signal Det2 indicating the comparison result to the landmark search unit 23A.

The landmark search unit 23A receives, in addition to the inputs received by the landmark search unit 23, the signal Det2 indicating the comparison result, which is output from the second comparator 24, and a signal Det3 output from the detection information obtaining unit 25. The landmark search unit 23A performs the same processing as the landmark search unit 23, and further determines whether local search or global search is to be performed to detect a landmark based on the signal Det2 and/or the signal Det3.

The detection information obtaining unit 25 receives a landmark detection signal LMt, which is output from the landmark search unit 23A, and obtains information about the detection of each landmark based on the landmark detection signal LMt. For example, the detection information obtaining unit 25 generates, as a detection signal Det3, a signal containing information about the number of times each landmark has been detected within a predetermined past period or about the estimation accuracy of the position of each landmark. The detection information obtaining unit 25 outputs the generated detection signal Det3 to the landmark search unit 23A.

The operation of the moving object controller 2000A with the above-described structure will now be described. The processing that is the same as the processing of the moving object controller 2000 will not be described.

The self-position reliability obtaining unit 53 obtains reliability Blf_rbt of the internal state data of the robot Rbt1 based on the internal state data Xt-1 of the robot Rbt1. The self-position reliability obtaining unit 53 outputs the obtained reliability Blf_rbt of the robot Rbt1 to the second comparator 24 included in the landmark detection unit 2B.

The second comparator 24 included in the landmark detection unit 2B compares the reliability Blf_rbt of internal state data of the robot Rbt1, which is output from the self-position reliability obtaining unit 53 of the landmark prediction unit 5B, and a threshold value Th2.

More specifically, the processing described below is performed.

(1) When Blf_rbt<Th2, the signal Det2 is set to Det2=1.

(2) When Blf_rbt≧Th2, the signal Det2 is set to Det2=0.

The resultant signal Det2 indicating the comparison result is output from the second comparator 24 to the landmark search unit 23A.

The landmark search unit 23A performs the same processing as the processing performed by the landmark search unit 23 of the second embodiment, and additionally determines whether local search or global search is to be performed to detect each landmark based on the signal Det2.

More specifically, when Det2=0, the landmark search unit 23A determines that the internal state data of the robot Rbt1 has low reliability, and thus selects global search to detect each landmark. When Det2=1, the landmark search unit 23A selects local search or global search for each landmark based on the signal Det1, and performs the selected local or global search to detect each landmark in the same manner as the landmark search unit 23 according to the second embodiment.
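The gating described above, in which an unreliable self-position estimate (Det2=0) forces global search regardless of the per-landmark flag Det1, can be sketched as follows (illustrative names):

```python
def select_search(det2, det1_k):
    """Combine the self-position comparison result Det2 with the
    per-landmark comparison result Det1: a low-reliability robot
    state forces global search; otherwise Det1 decides."""
    if det2 == 0:
        return "global"  # internal state data of the robot is unreliable
    return "local" if det1_k == 1 else "global"
```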

The landmark search unit 23A may select local search or global search to detect each landmark based on the detection signal Det3.

The landmark search unit 23A can determine the number of times each landmark has been detected in a predetermined past period, or the estimation accuracy of the position of each landmark, using the signal Det3 containing information about the landmark detection, which is output from the detection information obtaining unit 25. When the detection signal Det3 indicates that a predetermined landmark has not been retrieved (detected) in the search performed by the landmark search unit 23A for at least a predetermined period of time, the landmark search unit 23A may determine that the information about the predetermined landmark has low reliability, and may select and perform global search for this landmark.

In the moving object controller 2000A according to this modification, the detection information obtaining unit 25 may be eliminated.

As described above, the moving object controller 2000A selectively switches the processing for detecting a landmark between the local search and the global search based on the reliability of information about the landmark and the internal state data of the robot Rbt1. The moving object controller 2000A can thus detect each landmark with higher accuracy.

Other Embodiments

In the above embodiments (including modifications), the environmental map data is generated using the x-y coordinate system as shown in FIGS. 2 to 4. However, the data may be generated using another coordinate system, such as a polar coordinate system. Although the processing described in the above embodiments is performed using absolute coordinates, the processing may be performed using relative coordinates with the origin being the position of the robot Rbt1.

In the above embodiments (including modifications), the position of a moving object (robot Rbt1) and the position of each landmark are identified using two-dimensional data (the x-coordinate and the y-coordinate). However, the position of the moving object (robot Rbt1) and the position of each landmark may be identified using three-dimensional data (e.g., the x-coordinate, the y-coordinate, and the z-coordinate).

In the above embodiments (including modifications), the image area ARkt that is likely to contain the landmark LMk (k is a natural number) is an elliptical area. In some embodiments, the image area ARkt that is likely to contain the landmark LMk (k is a natural number) may be in another shape (e.g., a rectangle or a circle).

In the search for the landmark LMk, the landmark detection unit may terminate the search upon detecting the landmark LMk, and perform no further search of the image area ARkt, which has been set as the search area. This enables search at higher speed.

In the search for the landmark LMk, the landmark detection unit may further detect an image area other than the image area ARkt that has been set as the search area when failing to detect the landmark LMk in the image area ARkt.

In the above embodiments (including modifications), the variance-covariance matrix data is used to obtain the estimation accuracy of the position of the robot Rbt1 and the position of each landmark. In some embodiments, the accuracy of the position of the robot Rbt1 and the position of each landmark may be obtained with another index indicating positional variations.

In the above embodiments (including modifications), the diagonal elements of the variance-covariance matrix data (the elements corresponding to the variances of the x-coordinate and the y-coordinate) are used to obtain the estimation accuracy of the position of the robot Rbt1 and the position of each landmark. In some embodiments, selected elements of the variance-covariance matrix data or all the elements of the variance-covariance matrix data may be used to calculate a value indicating the accuracy of the position of the robot Rbt1 and the position of each landmark.

In the above embodiments (including modifications), the processing for estimating the position of the robot Rbt1 and the position of each landmark is performed using particle filtering. In some embodiments, the processing may be performed using other time-series filtering, such as Kalman filtering. In resampling of particles, Gaussian noise (noise in accordance with the Gaussian distribution) based on the assumption of random-walk dynamics is added to obtain the set of particles resulting from the prediction. However, noise in accordance with a distribution other than the Gaussian distribution may instead be added to obtain the set of particles resulting from the prediction.
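The prediction step under random-walk dynamics can be sketched as below. This is an illustrative example, not the claimed implementation; the function name, the noise scale, and the use of a fixed random seed are assumptions. Substituting a different `rng` sampling call would realize the noted variant in which the noise follows a distribution other than the Gaussian distribution.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def predict_particles(particles, sigma=0.1):
    """Prediction step under random-walk dynamics: each particle's (x, y)
    position is perturbed with zero-mean Gaussian noise of scale sigma.
    A non-Gaussian distribution could be substituted here."""
    noise = rng.normal(0.0, sigma, size=particles.shape)
    return particles + noise
```

Applying this to a set of particles all at the origin yields a cloud whose sample mean stays near zero and whose sample standard deviation approaches sigma.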

The above embodiments and modifications may be combined to form the moving object controller.

Each block of the moving object controller described in the above embodiments may be formed using a single chip with a semiconductor device, such as an LSI (large-scale integration) device, or some or all of the blocks of the moving object controller may be formed using a single chip.

Although LSI is used as the semiconductor device technology, the technology may be an integrated circuit (IC), a system LSI, a super LSI, or an ultra LSI depending on the degree of integration of the circuit.

The circuit integration technology employed should not be limited to LSI, and the circuit integration may be achieved using a dedicated circuit or a general-purpose processor. A field-programmable gate array (FPGA), which is an LSI circuit programmable after manufacture, or a reconfigurable processor, which is an LSI circuit in which internal circuit cells are reconfigurable (more specifically, the internal circuit cells can be reconnected or reset), may be used.

All or part of the processes performed by the functional blocks described in the above embodiments may be implemented using programs. All or part of the processes performed by the functional blocks described in the above embodiments may be implemented by a central processing unit (CPU) in a computer. The programs for these processes may be stored in a storage device, such as a hard disk or a ROM, and may be executed from the ROM or be read into a RAM and then executed.

The processes described in the above embodiments may be implemented by using either hardware or software (including use of an operating system (OS), middleware, or a predetermined library), or may be implemented using both software and hardware.

The processes described in the above embodiments may not be performed in the order specified in the above embodiments. The order in which the processes are performed may be changed without departing from the scope and the spirit of the invention.

The present invention may also include a computer program enabling a computer to implement the method described in the above embodiments and a computer readable recording medium on which such a program is recorded. The computer readable recording medium may be, for example, a flexible disk, a hard disk, a CD-ROM, an MO, a DVD, a DVD-ROM, a DVD-RAM, a large-capacity DVD, a next-generation DVD, or a semiconductor memory.

The computer program may not be recorded on the recording medium but may instead be transmitted via an electric communication line, a radio or cable communication line, or a network such as the Internet.

The term “unit” herein may include “circuitry,” which may be partly or entirely implemented by using either hardware or software, or both hardware and software.

The specific structures described in the above embodiments of the present invention are mere examples, and may be changed and modified variously without departing from the scope and the spirit of the invention.

APPENDIXES

The present invention may also be expressed in the following forms.

A first aspect of the invention provides a moving object controller for controlling a moving object while performing processing for generating an environmental map expressed using information about a landmark and performing processing for estimating an internal state of the moving object. The controller includes an observation obtaining unit, a landmark prediction unit, a landmark detection unit, and a state estimation unit.

The observation obtaining unit obtains observation data from an observable event.

The landmark prediction unit generates a landmark prediction signal including predictive information about a landmark at a current time based on internal state data indicating an internal state of the moving object at a time preceding the current time and environmental map data at the time preceding the current time.

The landmark detection unit detects information about the landmark at the current time based on the landmark prediction signal generated by the landmark prediction unit, and generates a landmark detection signal indicating a result of the detection.

The state estimation unit estimates the internal state of the moving object based on the observation data obtained by the observation obtaining unit and the landmark detection signal to obtain data indicating an estimated internal state of the moving object at the current time, and estimates the environmental map based on the landmark detection signal to obtain data indicating an estimated environmental map at the current time.

In this moving object controller, the landmark prediction unit predicts information about a landmark (predictive information) in the observation data at the current time based on (1) the internal state data of the moving object at the time preceding the current time, and (2) the environmental map data at the time preceding the current time, and generates a landmark prediction signal including the predictive information. The landmark detection unit detects information about the landmark at the current time based on the landmark prediction signal generated by the landmark prediction unit. This moving object controller can thus search for each landmark efficiently. The moving object controller shortens the processing time taken to generate (update) the environmental map data. In other words, the moving object controller performs the processing described above to generate an environmental map efficiently and to perform highly accurate state estimation in a short period of time. As a result, the moving object controller controls the moving object in an appropriate manner.

A second aspect of the invention provides the moving object controller of the first aspect of the invention in which the observation data is image data obtained by the moving object capturing an environment surrounding the moving object.

The landmark prediction unit calculates a predictive image area that is predicted to contain the landmark in an image formed using the image data obtained at the current time based on the internal state data indicating the internal state of the moving object at the time preceding the current time and the environmental map data at the time preceding the current time, and generates the landmark prediction signal containing information about the calculated predictive image area.

The landmark detection unit performs local search in which the predictive image area of the image formed using the image data at the current time is searched to detect an image area corresponding to the landmark, and the local search is started at an initial position that is a pixel position included in a central area of the predictive image area calculated by the landmark prediction unit and is sequentially performed at positions away from the initial position.

In this moving object controller, the landmark prediction unit predicts an image area that is likely to contain the landmark in the image represented by the observation data at the current time (image formed using the image data) based on (1) the internal state data of the moving object at the time preceding the current time and (2) the environmental map data at the time preceding the current time, and searches for the landmark. This moving object controller does not need to search all the areas of the image for the landmark, unlike techniques known in the art. The moving object controller can thus greatly shorten the processing time taken to detect each landmark. The moving object controller greatly shortens the processing time taken to generate (update) the environmental map data. In other words, the moving object controller performs the processing described above to generate an environmental map efficiently and to perform highly accurate state estimation in a short period of time. As a result, the moving object controller controls the moving object in an appropriate manner.
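The search order described in the second aspect — starting at a pixel position in the central area of the predictive image area and proceeding to positions away from it — can be sketched as an ordering of candidate positions. This is an illustrative sketch with hypothetical names; a rectangular area is assumed, although the embodiments also describe elliptical areas.

```python
def center_out_positions(x0, y0, x1, y1):
    """Return the pixel positions of the rectangular predictive image
    area [x0, x1) x [y0, y1), ordered by squared distance from the
    center, so that the search starts at the central area and is
    sequentially performed at positions away from the initial position."""
    cx = (x0 + x1 - 1) / 2.0
    cy = (y0 + y1 - 1) / 2.0
    positions = [(x, y) for x in range(x0, x1) for y in range(y0, y1)]
    positions.sort(key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
    return positions
```

Because the landmark is most likely to lie near the center of the predicted area, visiting positions in this order tends to detect it early, which combines well with terminating the search at the first detection.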

A third aspect of the invention provides the moving object controller of the second aspect of the invention in which the landmark prediction unit generates the landmark prediction signal including information about estimation accuracy of information about a position of the landmark obtained at the time preceding the current time.

The landmark detection unit performs global search in which the image formed using the image data at the current time is searched to detect the image area corresponding to the landmark when the estimation accuracy of the positional information about the landmark obtained at the time preceding the current time is lower than a predetermined value based on the landmark prediction signal.

The moving object controller performs global search to increase the probability that the landmark can be detected when the estimation accuracy of the positional information for the landmark is low. More specifically, this moving object controller appropriately avoids local search that may fail to detect a landmark and may take longer processing time when the estimation accuracy of the positional information about the landmark is low.

A fourth aspect of the invention provides the moving object controller of the second or third aspect of the invention in which the landmark prediction unit generates the landmark prediction signal including information about estimation accuracy of the internal state data of the moving object obtained at the time preceding the current time.

The landmark detection unit performs global search in which the image formed using the image data at the current time is searched to detect the image area corresponding to the landmark when the estimation accuracy of the internal state data of the moving object obtained at the time preceding the current time is lower than a predetermined value based on the landmark prediction signal.

The moving object controller performs global search to increase the probability that the landmark can be detected when the estimation accuracy of the internal state data of the moving object is low. More specifically, this moving object controller can appropriately avoid local search that may fail to detect a landmark and may take longer processing time when the estimation accuracy of the internal state data of the moving object is low (when the position of the moving object is not estimated in an appropriate manner).
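The decision logic of the third and fourth aspects — falling back to global search when either the landmark position estimate or the internal state estimate is insufficiently accurate — can be sketched as below. This is an illustrative sketch; the function name and threshold values are assumptions, and the accuracy inputs are taken to be uncertainty indices (e.g., a covariance trace) where a larger value means a lower estimation accuracy.

```python
def choose_search_mode(landmark_uncertainty, state_uncertainty,
                       landmark_threshold=1.0, state_threshold=1.0):
    """Return 'global' when either the landmark position estimate or the
    internal state estimate is too uncertain (uncertainty index above its
    threshold, i.e., estimation accuracy below the predetermined level),
    and 'local' otherwise.  Threshold values are illustrative."""
    if landmark_uncertainty > landmark_threshold:
        return "global"
    if state_uncertainty > state_threshold:
        return "global"
    return "local"
```

Local search is used only when both estimates are sufficiently accurate, which avoids a local search that may fail to detect the landmark and thus take longer overall.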

A fifth aspect of the invention provides the moving object controller of one of the second to fourth aspects of the invention in which the landmark detection unit includes a detection information obtaining unit configured to store a value indicating the number of times the landmark has been detected in searching for the landmark within a predetermined period of time.

The landmark detection unit performs the local search when the value indicating the number of times the landmark has been detected stored in the detection information obtaining unit is greater than a predetermined value.

This moving object controller performs local search to shorten the processing time when the number of times the landmark has been detected is large, or in other words when the estimation accuracy of the positional information about the landmark is determined to be high.

A sixth aspect of the invention provides the moving object controller of one of the second to fourth aspects of the invention in which the landmark detection unit includes a detection information obtaining unit configured to store landmark detection information indicating whether the landmark has been detected in searching for the landmark within a predetermined period of time.

The landmark detection unit performs global search in which the image formed using the image data at the current time is searched to detect the image area corresponding to the landmark when the landmark has not been detected within the predetermined period of time based on the landmark detection information stored by the detection information obtaining unit.

This moving object controller performs global search to increase the probability that the landmark can be detected when the landmark has not been detected for a predetermined period of time.
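The detection bookkeeping of the fifth and sixth aspects can be sketched as a per-landmark tracker over a sliding window of time steps. This is an illustrative sketch; the class name, the window length, and the count threshold are assumptions.

```python
class DetectionTracker:
    """Stores recent detection results for one landmark within a sliding
    window of time steps (the predetermined period of time)."""

    def __init__(self, window=10, count_threshold=3):
        self.window = window
        self.count_threshold = count_threshold
        self.history = []  # one bool per time step: detected or not

    def record(self, detected):
        self.history.append(bool(detected))
        self.history = self.history[-self.window:]  # keep the window only

    def use_local_search(self):
        """Fifth aspect: local search when the number of detections
        within the window exceeds the predetermined value."""
        return sum(self.history) > self.count_threshold

    def use_global_search(self):
        """Sixth aspect: global search when the landmark has not been
        detected at all within the full window."""
        return len(self.history) >= self.window and sum(self.history) == 0
```

A landmark detected often within the window is searched locally (its position estimate is presumed accurate), while a landmark not detected at all within the window triggers a global search.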

A seventh aspect of the invention provides a moving object control method for controlling a moving object while performing processing for generating an environmental map expressed using information about a landmark and performing processing for estimating an internal state of the moving object.

The moving object control method includes an observation obtaining step, a landmark prediction step, a landmark detection step, and a state estimation step.

A program enabling a computer to implement the moving object control method has the same advantageous effects as the moving object controller of the first aspect of the present invention.

The observation obtaining step obtains observation data from an observable event.

The landmark prediction step generates a landmark prediction signal including predictive information about a landmark at a current time based on internal state data indicating an internal state of the moving object at a time preceding the current time and environmental map data at the time preceding the current time.

The landmark detection step detects information about the landmark at the current time based on the landmark prediction signal generated by the landmark prediction step, and generates a landmark detection signal indicating a result of the detection.

The state estimation step estimates the internal state of the moving object based on the observation data obtained by the observation obtaining step and the landmark detection signal to obtain data indicating an estimated internal state of the moving object at the current time, and estimates the environmental map based on the landmark detection signal to obtain data indicating an estimated environmental map at the current time.

An eighth aspect of the invention provides an integrated circuit used in a moving object controller for controlling a moving object while performing processing for generating an environmental map expressed using information about a landmark and performing processing for estimating an internal state of the moving object. The integrated circuit includes an observation obtaining unit, a landmark prediction unit, a landmark detection unit, and a state estimation unit.

The observation obtaining unit obtains observation data from an observable event.

The landmark prediction unit generates a landmark prediction signal including predictive information about a landmark at a current time based on internal state data indicating an internal state of the moving object at a time preceding the current time and environmental map data at the time preceding the current time.

The landmark detection unit detects information about the landmark at the current time based on the landmark prediction signal generated by the landmark prediction unit, and generates a landmark detection signal indicating a result of the detection.

The state estimation unit estimates the internal state of the moving object based on the observation data obtained by the observation obtaining unit and the landmark detection signal to obtain data indicating an estimated internal state of the moving object at the current time, and estimates the environmental map based on the landmark detection signal to obtain data indicating an estimated environmental map at the current time.

The integrated circuit has the same advantageous effects as the moving object controller of the first aspect of the present invention.

Claims

1. A moving object controller for controlling a moving object while performing processing for generating an environmental map expressed using information about a landmark and performing processing for estimating an internal state of the moving object, the controller comprising:

observation obtaining circuitry configured to obtain observation data from an observable event;
landmark prediction circuitry configured to generate a landmark prediction signal including predictive information about a landmark at a current time based on internal state data indicating an internal state of the moving object at a time preceding the current time and environmental map data at the time preceding the current time;
landmark detection circuitry configured to detect information about the landmark at the current time based on the landmark prediction signal generated by the landmark prediction circuitry, and generate a landmark detection signal indicating a result of the detection; and
state estimation circuitry configured to estimate the internal state of the moving object based on the observation data obtained by the observation obtaining circuitry and the landmark detection signal to obtain data indicating an estimated internal state of the moving object at the current time, and estimate the environmental map based on the landmark detection signal to obtain data indicating an estimated environmental map at the current time.

2. The moving object controller according to claim 1, wherein

the observation data is image data obtained by the moving object capturing an environment surrounding the moving object,
the landmark prediction circuitry calculates a predictive image area that is predicted to contain the landmark in an image formed using the image data obtained at the current time based on the internal state data indicating the internal state of the moving object at the time preceding the current time and the environmental map data at the time preceding the current time, and generates the landmark prediction signal containing information about the calculated predictive image area, and
the landmark detection circuitry performs local search in which the predictive image area of the image formed using the image data at the current time is searched to detect an image area corresponding to the landmark, and the local search is started at an initial position that is a pixel position included in a central area of the predictive image area calculated by the landmark prediction circuitry and is sequentially performed at positions away from the initial position.

3. The moving object controller according to claim 2, wherein

the landmark prediction circuitry generates the landmark prediction signal including information about estimation accuracy of information about a position of the landmark obtained at the time preceding the current time, and
the landmark detection circuitry performs global search in which the image formed using the image data at the current time is searched to detect the image area corresponding to the landmark when the estimation accuracy of the positional information about the landmark obtained at the time preceding the current time is lower than a predetermined value based on the landmark prediction signal.

4. The moving object controller according to claim 2, wherein

the landmark prediction circuitry generates the landmark prediction signal including information about estimation accuracy of the internal state data of the moving object obtained at the time preceding the current time, and
the landmark detection circuitry performs global search in which the image formed using the image data at the current time is searched to detect the image area corresponding to the landmark when the estimation accuracy of the internal state data of the moving object obtained at the time preceding the current time is lower than a predetermined value based on the landmark prediction signal.

5. The moving object controller according to claim 2, wherein

the landmark detection circuitry includes detection information obtaining circuitry configured to store a value indicating the number of times the landmark has been detected in searching for the landmark within a predetermined period of time, and
the landmark detection circuitry performs the local search when the value indicating the number of times the landmark has been detected, the number being stored in the detection information obtaining circuitry, is greater than a predetermined value.

6. The moving object controller according to claim 2, wherein

the landmark detection circuitry includes detection information obtaining circuitry configured to store landmark detection information indicating whether the landmark has been detected in searching for the landmark within a predetermined period of time, and
the landmark detection circuitry performs global search in which the image formed using the image data at the current time is searched to detect the image area corresponding to the landmark when the landmark has not been detected within the predetermined period of time based on the landmark detection information stored by the detection information obtaining circuitry.

7. A moving object control method for controlling a moving object while performing processing for generating an environmental map expressed using information about a landmark and performing processing for estimating an internal state of the moving object, the method comprising:

obtaining observation data from an observable event;
generating a landmark prediction signal including predictive information about a landmark at a current time based on internal state data indicating an internal state of the moving object at a time preceding the current time and environmental map data at the time preceding the current time;
detecting information about the landmark at the current time based on the landmark prediction signal generated by the step of generating the landmark prediction signal, and generating a landmark detection signal indicating a result of the detection; and
estimating the internal state of the moving object based on the observation data obtained by the step of obtaining the observation data and the landmark detection signal to obtain data indicating an estimated internal state of the moving object at the current time, and estimating the environmental map based on the landmark detection signal to obtain data indicating an estimated environmental map at the current time.

8. An integrated circuit used in a moving object controller for controlling a moving object while performing processing for generating an environmental map expressed using information about a landmark and performing processing for estimating an internal state of the moving object, the integrated circuit comprising:

observation obtaining circuitry configured to obtain observation data from an observable event;
landmark prediction circuitry configured to generate a landmark prediction signal including predictive information about a landmark at a current time based on internal state data indicating an internal state of the moving object at a time preceding the current time and environmental map data at the time preceding the current time;
landmark detection circuitry configured to detect information about the landmark at the current time based on the landmark prediction signal generated by the landmark prediction circuitry, and generate a landmark detection signal indicating a result of the detection; and
state estimation circuitry configured to estimate the internal state of the moving object based on the observation data obtained by the observation obtaining circuitry and the landmark detection signal to obtain data indicating an estimated internal state of the moving object at the current time, and estimate the environmental map based on the landmark detection signal to obtain data indicating an estimated environmental map at the current time.
Patent History
Publication number: 20160282876
Type: Application
Filed: Mar 22, 2016
Publication Date: Sep 29, 2016
Patent Grant number: 9958868
Applicants: MegaChips Corporation (Osaka-shi), Kyushu Institute of Technology (Kitakyushu-shi)
Inventors: Fumiya SHINGU (Osaka), Norikazu IKOMA (Fukuoka), Kenta NAGAMINE (Osaka), Hiromu HASEGAWA (Osaka)
Application Number: 15/076,813
Classifications
International Classification: G05D 1/02 (20060101); G06K 9/00 (20060101); G05D 1/00 (20060101);