POSITION DETECTION SENSOR AND POSITION MEASUREMENT DEVICE

- HAMAMATSU PHOTONICS K.K.

Provided is a position detection sensor including: a light-receiving unit that includes a first pixel and a second pixel; and a calculation unit that performs center-of-gravity operation by using an intensity of a first electric signal and an intensity of a second electric signal to calculate a first position. In the first pixel, as the incident position is closer to one end of the light-receiving unit in a second direction, the intensity of the first electric signal decreases. In the second pixel, as the incident position is closer to the one end, the intensity of the second electric signal increases. The calculation unit further calculates a second position on the basis of a first integrated value obtained by integrating the intensity of the first electric signal, and a second integrated value obtained by integrating the intensity of the second electric signal.

Description
TECHNICAL FIELD

The present disclosure relates to a position detection sensor and a position measurement device.

BACKGROUND ART

Patent Literature 1 discloses an optical sensor that detects an incident position of a light spot. The optical sensor has a light-receiving region having a taper shape of which a width gradually widens along one direction in a plane. When the light spot moves along the one direction on the light-receiving region having this shape, an output from the optical sensor linearly varies. A one-dimensional position of the incident position of the light spot in the one direction is detected by using the variation of the output. When two such optical sensors are disposed in opposite directions in a state in which hypotenuses thereof are in contact with each other, a variation rate of a differential output from the optical sensors is amplified two times in comparison to a variation rate of an output from an individual optical sensor.

Patent Literature 2 discloses a two-dimensional light incident position detection element that detects a two-dimensional position to which a light spot is incident.

CITATION LIST Patent Literature

Patent Literature 1: Japanese Unexamined Patent Publication No. H3-34369

Patent Literature 2: Japanese Unexamined Patent Publication No. H4-313278

SUMMARY OF INVENTION Technical Problem

Recently, for example, in a position detection sensor capable of being used for detecting an incident position of light in an optical control field, requirements such as a high frame rate and an improvement of a position detection function have been increasing. Examples of the position detection sensor include a profile sensor and a line sensor. In the profile sensor, a projection image in a column direction is obtained by electric signals output from a pixel group of pixels wired for every row among a plurality of pixels which are two-dimensionally arranged, and a projection image in a row direction is obtained by electric signals output from a pixel group of pixels wired for every column. A two-dimensional position to which light is incident is detected by using the projection images. However, in the profile sensor in which the pixels are two-dimensionally arranged as described above, since the electric signals for acquiring the projection image in the row direction (or the column direction) are output from the respective pixels in addition to the electric signals for acquiring the projection image in the column direction (or the row direction), the number of the electric signals output is larger in comparison to the line sensor in which pixels are one-dimensionally arranged. Accordingly, in the profile sensor, time is taken to read out the electric signals, and thus there is a limit in detection of the two-dimensional position to which light is incident at a high speed.

On the other hand, the line sensor detects a one-dimensional position to which light is incident by using signals read out from a plurality of pixels which are one-dimensionally arranged. In the line sensor, only electric signals for detecting the incident position of light in one direction are output, and thus the number of electric signals output is smaller in comparison to the profile sensor. Accordingly, in the line sensor, it is possible to suppress time necessary for reading out the electric signals, and thus it is possible to detect the one-dimensional position to which light is incident at a high speed. However, in the line sensor, it is difficult to detect a two-dimensional position to which light is incident. That is, the line sensor does not have a position detection function relating to two directions.

An object of the present disclosure is to provide a position detection sensor and a position measurement device which are capable of detecting a two-dimensional position to which light is incident at a high speed.

Solution to Problem

According to an embodiment of the present disclosure, there is provided a position detection sensor that detects an incident position of light. The position detection sensor includes: a light-receiving unit that includes a plurality of pixel pairs, each of the pixel pairs including a first pixel that generates a first electric signal corresponding to an incident light amount of the light and a second pixel that is disposed along a first direction side by side with the first pixel and generates a second electric signal corresponding to an incident light amount of the light, and the pixel pairs being arranged along the first direction; and a calculation unit that performs center-of-gravity operation by using an intensity of the first electric signal and an intensity of the second electric signal to calculate a first position that is the incident position in the first direction. In the first pixel, as the incident position is closer to one end of the light-receiving unit in a second direction intersecting the first direction, the intensity of the first electric signal decreases. In the second pixel, as the incident position is closer to the one end in the second direction, the intensity of the second electric signal increases. The calculation unit further calculates a second position that is the incident position in the second direction on the basis of a first integrated value obtained by integrating the intensity of the first electric signal, and a second integrated value obtained by integrating the intensity of the second electric signal.

In the position detection sensor, when light is incident to the first pixel of the plurality of pixel pairs, the first pixel generates the first electric signal (for example, a charge signal) corresponding to the incident light amount of the light. Similarly, when light is incident to the second pixel, the second pixel generates the second electric signal (for example, a charge signal) corresponding to an incident light amount of the light. Here, since the plurality of pixel pairs are arranged along the first direction, the calculation unit performs weighting operation (center-of-gravity operation) on positions of respective pixels (the first pixel and the second pixel) with intensities of electric signals of the pixels to calculate the first position that is the incident position of the light in the first direction. In addition, in the first pixel, as the incident position of the light is closer to the one end of the light-receiving unit in the second direction, the intensity of the first electric signal decreases. In the second pixel, as the incident position of the light is closer to the one end, the intensity of the second electric signal increases. The calculation unit calculates the second position that is the incident position of the light in the second direction on the basis of a first integrated value obtained by integrating the intensity of the first electric signal and a second integrated value obtained by integrating the intensity of the second electric signal, by using a variation of the intensities of the first electric signal and the second electric signal. In this manner, the position detection sensor can calculate the second position in addition to the first position with respect to the incident position of the light. That is, the position detection sensor has a position detection function relating to two directions. In addition, in the position detection sensor, it is possible to obtain two pieces of information of the first position and the second position with respect to the incident position of the light by using only information of the electric signals (the first electric signal and the second electric signal). That is, for example, it is not necessary to separately generate an electric signal for calculating the second position from each pixel. According to this, it is possible to suppress an increase of the number of the electric signals, and as a result, it is possible to suppress an increase of time necessary for reading-out of the electric signals. That is, according to the position detection sensor, it is possible to detect a two-dimensional position to which the light is incident at a high speed.

In the position detection sensor, the light-receiving unit may further include a first transmission filter which covers the first pixel and through which the light is transmitted, and a second transmission filter which covers the second pixel and through which the light is transmitted, a transmittance of the light in the first transmission filter may decrease as it is closer to the one end in the second direction, and a transmittance of the light in the second transmission filter may increase as it is closer to the one end in the second direction. When the light-receiving unit includes the first transmission filter and the second transmission filter, in the first pixel, as the incident position of the light is closer to the one end in the second direction, an incident light amount of the light incident to the first pixel decreases, and according to this, the intensity of the first electric signal generated in the first pixel also decreases. In contrast, in the second pixel, as the incident position of the light is closer to the one end in the second direction, the incident light amount of the light incident to the second pixel increases, and according to this, the intensity of the second electric signal generated in the second pixel also increases. Accordingly, according to this configuration, it is possible to appropriately realize the light-receiving unit of the position detection sensor.
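The mechanism above can be illustrated numerically. The following is a minimal sketch, assuming idealized, perfectly linear transmittance profiles; the function names, the `height` parameter, and the coordinate convention (y = 0 at the other end, y = height at the one end) are assumptions for this illustration, not values from the disclosure.

```python
# Minimal sketch (assumptions, not patent values): idealized linear transmittance
# profiles for the first and second transmission filters. y = 0 is the other end
# and y = height is the one end, where the first filter is darkest.

def first_filter_transmittance(y: float, height: float) -> float:
    """Decreases toward the one end (y = height)."""
    return 1.0 - y / height

def second_filter_transmittance(y: float, height: float) -> float:
    """Increases toward the one end (y = height)."""
    return y / height

def estimate_y(first_signal: float, second_signal: float, height: float) -> float:
    """Recover the position in the second direction from the two signal intensities.

    With the linear profiles above, second / (first + second) = y / height,
    which is the idea exploited later in Expression (3)."""
    return height * second_signal / (first_signal + second_signal)

if __name__ == "__main__":
    height = 1000.0        # assumed pixel length in the second direction
    y_true = 250.0         # assumed incident position
    light_amount = 80.0    # assumed incident light amount on the pixel pair

    d1 = light_amount * first_filter_transmittance(y_true, height)
    d2 = light_amount * second_filter_transmittance(y_true, height)
    print(estimate_y(d1, d2, height))   # -> 250.0
```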

In the position detection sensor, the light-receiving unit may further include a first light-shielding part that covers another portion of the first pixel excluding one portion of the first pixel and shields the light, and a second light-shielding part that covers another portion of the second pixel excluding one portion of the second pixel and shields the light, a width of the one portion of the first pixel in the first direction may decrease as it is closer to the one end in the second direction, and a width of the one portion of the second pixel in the first direction may increase as it is closer to the one end in the second direction. When the light-receiving unit includes the first light-shielding part and the second light-shielding part, in the first pixel, as the incident position of the light is closer to the one end in the second direction, the incident light amount of the light incident to the first pixel decreases, and according to this, the intensity of the first electric signal generated in the first pixel also decreases. In contrast, in the second pixel, as the incident position of the light is closer to the one end in the second direction, the incident light amount of the light incident to the second pixel increases, and according to this, the intensity of the second electric signal generated in the second pixel also increases. Accordingly, according to this configuration, it is possible to appropriately realize the light-receiving unit of the position detection sensor.

In the position detection sensor, a width of the first pixel in the first direction may decrease as it is closer to the one end in the second direction, and a width of the second pixel in the first direction may increase as it is closer to the one end in the second direction. When the light-receiving unit includes the first pixel and the second pixel, in the first pixel, as the incident position of the light is closer to the one end in the second direction, the incident light amount of the light incident to the first pixel decreases, and according to this, the intensity of the first electric signal generated in the first pixel also decreases. In contrast, in the second pixel, as the incident position of the light is closer to the one end in the second direction, the incident light amount of the light incident to the second pixel increases, and according to this, the intensity of the second electric signal generated in the second pixel also increases. Accordingly, according to the configuration, it is possible to appropriately realize the light-receiving unit of the position detection sensor.

According to another embodiment of the present disclosure, there is provided a position measurement device that measures an incident position of light. The position measurement device includes: the position detection sensor; and a light source that irradiates the light-receiving unit with the light. A diameter of the light that is emitted to the light-receiving unit is two or more times the larger one of a maximum value of a width of the first pixel in the first direction and a maximum value of a width of the second pixel in the first direction. The position measurement device includes the position detection sensor, and thus it is possible to appropriately exhibit the above-described effect. In addition, since the diameter of the light that is emitted to the light-receiving unit is two or more times the larger one of the maximum value of the width of the first pixel in the first direction and the maximum value of the width of the second pixel in the first direction, it is possible to calculate the first position and the second position with accuracy.

Advantageous Effects of Invention

According to the present disclosure, it is possible to detect a two-dimensional position to which light is incident at a high speed.

BRIEF DESCRIPTION OF DRAWINGS

FIG. 1 is a schematic configuration diagram illustrating a position detection sensor of an embodiment.

FIG. 2 is a top view illustrating a plurality of first transmission filters and a plurality of second transmission filters.

FIG. 3 is a cross-sectional view taken along line III-III illustrated in FIG. 1.

FIG. 4 is a schematic configuration view illustrating an example of a position detection sensor of a first modification example.

FIG. 5 illustrates a result obtained by verifying accuracy of a second detection position in a case where a diameter of a light spot is three times a width of a pixel.

FIG. 6 illustrates a result obtained by verifying accuracy of the second detection position in a case where the diameter of the light spot is three times the width of the pixel.

FIG. 7 illustrates a result obtained by verifying accuracy of a first detection position in a case where the diameter of the light spot is three times a width of the pixel.

FIG. 8 illustrates a result obtained by verifying accuracy of the first detection position in a case where the diameter of the light spot is three times a width of the pixel.

FIG. 9 is a view illustrating a relationship between the diameter of the light spot and an error of the second detection position in a case where the diameter of the light spot is made to gradually vary.

FIG. 10 is a result obtained by verifying accuracy of the first detection position in a case where the diameter of the light spot is one times a maximum value of the width of the pixel.

FIG. 11 is a result obtained by verifying accuracy of the second detection position in a case where the diameter of the light spot is one times the maximum value of the width of the pixel.

FIG. 12 is a view illustrating an example of a relationship of the first detection position, the second detection position, a first detection error, and a second detection error.

FIG. 13 is a view illustrating a method of creating a look-up table.

FIG. 14 is a view illustrating another example of a shape of each pixel of the first modification example.

FIG. 15 is a view illustrating still another example of the shape of the pixel of the first modification example.

FIG. 16 is a view illustrating still another example of the shape of the pixel of the first modification example.

FIG. 17 is a view illustrating another example of an arrangement of respective pixels of the first modification example.

FIG. 18 is a view illustrating a state in which a plurality of light spots are simultaneously incident to the position detection sensor of the first modification example.

FIG. 19 is a schematic configuration diagram illustrating a position measurement device including the position detection sensor of the first modification example.

FIG. 20 is a schematic configuration diagram illustrating a position detection sensor of a second modification example.

FIG. 21 is a schematic configuration diagram illustrating a position detection sensor of a third modification example.

FIG. 22 is a schematic configuration diagram illustrating a position detection sensor of a fourth modification example.

FIG. 23 is a schematic configuration diagram illustrating a profile sensor as a comparative example.

FIG. 24 is a schematic configuration diagram illustrating a line sensor as a comparative example.

DESCRIPTION OF EMBODIMENTS

Hereinafter, an embodiment of a position detection sensor of the present disclosure will be described in detail with reference to the accompanying drawings. In description of the drawings, the same reference numeral will be given to the same element, and redundant description thereof will be appropriately omitted.

EMBODIMENT

FIG. 1 is a schematic configuration diagram illustrating a position detection sensor 1 of this embodiment. The position detection sensor 1 is a sensor that detects a two-dimensional position of an incident light spot L, that is, an incident position thereof. Specifically, the position detection sensor 1 detects a first detection position (first position) that is the incident position of light in an X-axis direction (first direction), and a second detection position (second position) that is the incident position of the light in a Y-axis direction (second direction) intersecting the X-axis direction. As illustrated in FIG. 1, the position detection sensor 1 includes a light-receiving unit 10 and a signal processing unit 30. The light-receiving unit 10 includes a plurality of pixel pairs 11 which are arranged along the X-axis direction in an XY plane. Each of the plurality of pixel pairs 11 includes a first pixel 12 and a second pixel 13 which are arranged side by side along the X-axis direction. For example, the first pixel 12 and the second pixel 13 have a rectangular shape in which the Y-axis direction is set as a longitudinal direction, and are alternately arranged along the X-axis direction. Hereinafter, a plurality of the first pixels 12 and a plurality of the second pixels 13 are collectively referred to as a plurality of pixels P1 to PN (N is an integer of two or greater, and represents the number of pixels). The pixels P1, P3, . . . , and PN-1 assigned with odd numbers correspond to the first pixels 12, and the pixels P2, P4, . . . , and PN assigned with even numbers respectively correspond to the second pixels 13.

The pixels P1 to PN respectively generate charge signals Dx1 to DxN corresponding to incident light amounts of the incident light spot L. Specifically, when the light spot L is incident to the first pixels 12, the first pixels 12 generate charge signals Dx1, Dx3, . . . , and DxN-1 (first electric signals) corresponding to incident light amounts of the light spot L. Similarly, when the light spot L is incident to the second pixels 13, the second pixels 13 generate charge signals Dx2, Dx4, . . . , and DxN (second electric signals) corresponding to incident light amounts of the light spot L. A diameter W of the light spot L is set to be larger than a width S of each of the plurality of pixels P1 to PN in the X-axis direction. A luminance distribution of the light spot L has a Gaussian distribution (that is, an intensity distribution that is strongest near the center and gradually weakens toward the periphery) expressed by the following Expression (1). In Expression (1), I represents the intensity of the light spot L, and r is a distance from the center of the light spot L. ω is the distance r at which the intensity I becomes 1/e², and represents a radius of the light spot L having the Gaussian distribution. Accordingly, the diameter W of the light spot L is expressed by 2ω.

[Mathematical Formula 1]

$$I(r) = \exp\!\left(-\frac{2r^{2}}{\omega^{2}}\right) \qquad (1)$$
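For reference, the following short sketch simply evaluates Expression (1); the value of ω is an arbitrary assumption chosen for illustration.

```python
import math

def spot_intensity(r: float, omega: float) -> float:
    """Relative intensity at distance r from the spot center,
    as in Expression (1): I(r) = exp(-2 r^2 / omega^2)."""
    return math.exp(-2.0 * r**2 / omega**2)

# At r = omega the intensity has fallen to 1/e^2 of the peak, so the spot
# diameter W used in the description is 2 * omega.
omega = 30.0                              # assumed 1/e^2 radius (arbitrary units)
print(spot_intensity(0.0, omega))         # 1.0 (peak)
print(spot_intensity(omega, omega))       # ~0.135 = 1/e^2
print("W =", 2 * omega)
```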

The light-receiving unit 10 further includes a plurality of first transmission filters 14 which are respectively disposed on the plurality of first pixels 12, and a plurality of second transmission filters 15 which are respectively disposed on the plurality of second pixels 13. FIG. 2 is a top view illustrating the plurality of first transmission filters 14 and the plurality of second transmission filters 15. As illustrated in FIG. 2, for example, the first transmission filters 14 and the second transmission filters 15 have a rectangular shape in which the Y-axis direction is set as a longitudinal direction, and are alternately arranged along the X-axis direction. FIG. 3 is a cross-sectional view taken along line III-III illustrated in FIG. 1. As illustrated in FIG. 3, the first transmission filters 14 respectively cover the first pixels 12, and the second transmission filters 15 respectively cover the second pixels 13. The first transmission filters 14 and the second transmission filters 15 allow incident light to be transmitted therethrough. The transmittance of the first transmission filters 14 gradually decreases (that is, decreases in a monotone manner) as it is closer to one end 10a of the light-receiving unit 10 in the Y-axis direction, and gradually increases (that is, increases in a monotone manner) as it is closer to the other end 10b of the light-receiving unit 10 in the Y-axis direction on the first pixels 12. The transmittance of the first transmission filters 14 may decrease step by step as it is closer to the one end 10a in the Y-axis direction, and may increase step by step as it is closer to the other end 10b in the Y-axis direction on the first pixels 12. In FIG. 1 and FIG. 2, the transmittance of the first transmission filters 14 and the second transmission filters 15 is expressed in shades of color; the larger the transmittance is, the thinner the shade is, and the smaller the transmittance is, the darker the shade is. When light is transmitted through each of the first transmission filters 14, an incident light amount of the light spot L that is incident to each of the first pixels 12 gradually decreases (or decreases step by step) as the incident position of the light spot L is closer to the one end 10a in the Y-axis direction, and gradually increases (or increases step by step) as the incident position of the light spot L is closer to the other end 10b in the Y-axis direction. According to this, the intensities of the charge signals Dx1, Dx3, . . . , and DxN-1 generated in the first pixels 12 also gradually decrease (or decrease step by step) as the incident position is closer to the one end 10a in the Y-axis direction, and gradually increase (or increase step by step) as the incident position is closer to the other end 10b in the Y-axis direction.

On the other hand, the transmittance of the second transmission filters 15 gradually increases as it is closer to the one end 10a in the Y-axis direction, and gradually decreases as it is closer to the other end 10b of the light-receiving unit 10 in the Y-axis direction on the second pixels 13. The transmittance of the second transmission filters 15 may increase step by step as it is closer to the one end 10a in the Y-axis direction, and may decrease step by step as it is closer to the other end 10b in the Y-axis direction on the second pixels 13. When light is transmitted through each of the second transmission filters 15, an incident light amount of the light spot L that is incident to each of the second pixels 13 gradually increases (or increases step by step) as the incident position of the light spot L is closer to the one end 10a in the Y-axis direction, and gradually decreases (or decreases step by step) as the incident position of the light spot L is closer to the other end 10b in the Y-axis direction. According to this, the intensities of the charge signals Dx2, Dx4, . . . , and DxN generated in the second pixels 13 also gradually increase (or increase step by step) as the incident position is closer to the one end 10a in the Y-axis direction, and gradually decrease (or decrease step by step) as the incident position is closer to the other end 10b in the Y-axis direction. As described above, an increase direction or a decrease direction of the transmittance in the Y-axis direction is reversed between the first transmission filters 14 and the second transmission filters 15.

FIG. 1 will be referred to again. The signal processing unit 30 is provided on one side of the pixels P1 to PN in the Y-axis direction. The signal processing unit 30 includes a plurality of switch elements 31, a shift register 32, an amplifier 33, an A/D converter 34, and a calculation unit 35. The switch elements 31 are provided in correspondence with the pixels P1 to PN, respectively. Input terminals of the switch elements 31 are electrically connected to the pixels P1 to PN, respectively. The shift register 32 is provided to sequentially read out the charge signals Dx1 to DxN from the pixels P1 to PN. The shift register 32 outputs a control signal for controlling an operation of the switch elements 31. The switch elements 31 are sequentially closed by the control signal that is output from the shift register 32. When the switch elements 31 are sequentially closed, the charge signals Dx1 to DxN generated in the pixels P1 to PN are sequentially output from output terminals of the switch elements 31. The amplifier 33 is electrically connected to the output terminals of the switch elements 31. The amplifier 33 outputs a voltage value corresponding to the charge signals Dx1 to DxN output from the output terminals of the switch elements 31. The A/D converter 34 is electrically connected to the amplifier 33. The A/D converter 34 converts voltage values output from the amplifier 33 into digital values. The A/D converter 34 outputs the digital values. The digital values are values corresponding to intensities of the charge signals Dx1 to DxN. Accordingly, hereinafter, description may be given in a state of substituting the digital values with the intensities of the charge signals Dx1 to DxN.
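The read-out chain can be pictured as a simple sequential pipeline. The sketch below is only a behavioral model under assumed values; the conversion gain, full-scale voltage, and bit depth are invented for illustration and are not taken from the disclosure.

```python
# Hedged behavioral sketch of the read-out chain of FIG. 1: the shift register
# closes the switch elements one by one, the amplifier turns each charge signal
# into a voltage, and the A/D converter digitizes it.

GAIN_V_PER_CHARGE = 0.001   # assumed charge-to-voltage conversion gain
FULL_SCALE_V = 1.0          # assumed A/D converter full-scale voltage
BITS = 12                   # assumed A/D converter resolution

def read_out(charge_signals):
    """Return digital values Dx1..DxN in the order the switches are closed."""
    digital_values = []
    for charge in charge_signals:                 # sequential switching
        voltage = min(charge * GAIN_V_PER_CHARGE, FULL_SCALE_V)
        code = round(voltage / FULL_SCALE_V * (2**BITS - 1))
        digital_values.append(code)
    return digital_values

print(read_out([120.0, 480.0, 90.0, 310.0]))
```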

The calculation unit 35 is electrically connected to the A/D converter 34. The calculation unit 35 calculates the first detection position that is the incident position of the light spot L in the X-axis direction, and the second detection position that is the incident position of the light spot L in the Y-axis direction, on the basis of the digital values output from the A/D converter 34. Here, a method of calculating the first detection position and a method of calculating the second detection position will be described in detail. The first detection position is calculated by performing weighting operation (center-of-gravity operation) on positions of the pixels P1 to PN in the X-axis direction with the intensities of the charge signals Dx1 to DxN. Specifically, the first detection position is calculated by using the following Expression (2). In Expression (2), Px1 represents the first detection position, and i represents 1, 2, . . . , or N (N is the number of pixels).

[Mathematical Formula 2]

$$Px_{1} = \frac{\displaystyle\sum_{i=1}^{N} \frac{iS}{2}\,Dx_{i}}{\displaystyle\sum_{i=1}^{N} Dx_{i}} \qquad (2)$$
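As a concrete illustration, the minimal sketch below implements Expression (2) as reconstructed above, i.e. each pixel index i is weighted by iS/2 and by its signal intensity; the example signal values are invented for illustration.

```python
def first_detection_position(dx, pixel_width):
    """Center-of-gravity operation of Expression (2) (as reconstructed above):
    Px1 = sum_i (i*S/2) * Dx_i / sum_i Dx_i, with i = 1..N and S the pixel
    width in the X-axis direction."""
    numerator = sum(i * pixel_width / 2.0 * d for i, d in enumerate(dx, start=1))
    denominator = sum(dx)
    return numerator / denominator

# Example: a spot centered between pixels 3 and 4 (1-based) of width S = 20.
dx = [0.0, 5.0, 40.0, 40.0, 5.0, 0.0]
print(first_detection_position(dx, 20.0))   # -> 35.0
```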

As described above, the intensities of the charge signals Dx1, Dx3, . . . , and DxN-1 generated in the first pixels 12 decrease as the incident position of the light spot L is closer to the one end 10a of the light-receiving unit 10 in the Y-axis direction, and the intensities of the charge signals Dx2, Dx4, . . . , and DxN generated in the second pixels 13 increase as the incident position of the light spot L is closer to the one end 10a in the Y-axis direction. In this manner, the intensities of the charge signals Dx1 to DxN vary with respect to the incident position of the light spot L in the Y-axis direction, and thus it is possible to calculate the second detection position that is the incident position of the light spot L in the Y-axis direction by using a variation of the intensities of the charge signals Dx1 to DxN. The second detection position is calculated on the basis of a first integrated value obtained by integrating the intensities of the charge signals Dx1, Dx3, . . . , and DxN-1 generated in the first pixels 12, and a second integrated value obtained by integrating the intensities of the charge signals Dx2, Dx4, . . . , and DxN generated in the second pixels 13. In an example, the second detection position is calculated by the following Expression (3). In Expression (3), Py1 represents the second detection position, and h represents a length of each of the pixels P1 to PN in the Y-axis direction.

[Mathematical Formula 3]

$$Py_{1} = \frac{\displaystyle\sum_{i=2,4,6,\ldots}^{N} h\,Dx_{i}}{\displaystyle\sum_{i=1}^{N} Dx_{i}} \qquad (3)$$

In Expression (3), the second detection position Py1 is calculated by taking a ratio between a total value of the first integrated value and the second integrated value (that is, an integrated value obtained by integrating the intensities of the charge signals Dx1 to DxN generated in all of the pixels P1 to PN) and the second integrated value. The second detection position Py1 may be calculated on the basis of a ratio between the first integrated value, and the total value of the first integrated value and the second integrated value, or may be calculated on the basis of a ratio between the first integrated value and the second integrated value.
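The following short sketch implements Expression (3) and the two alternative ratios mentioned above. The assignment of even 1-based indices to the second pixels follows the embodiment; the example signal values are invented for illustration.

```python
def second_detection_position(dx, pixel_length_h):
    """Expression (3): Py1 = h * (sum of even-indexed Dx_i) / (sum of all Dx_i).
    Even indices (i = 2, 4, ..., 1-based) are the second pixels; their sum is
    the second integrated value and the full sum is the first plus second
    integrated values."""
    second_integrated = sum(d for i, d in enumerate(dx, start=1) if i % 2 == 0)
    total_integrated = sum(dx)
    return pixel_length_h * second_integrated / total_integrated

# The alternatives mentioned above only change which ratio is taken:
def ratio_first_to_total(dx):
    first_integrated = sum(d for i, d in enumerate(dx, start=1) if i % 2 == 1)
    return first_integrated / sum(dx)

def ratio_first_to_second(dx):
    first_integrated = sum(d for i, d in enumerate(dx, start=1) if i % 2 == 1)
    second_integrated = sum(dx) - first_integrated
    return first_integrated / second_integrated

dx = [30.0, 10.0, 30.0, 10.0]   # spot closer to the other end 10b (odd signals larger)
print(second_detection_position(dx, 1000.0))   # -> 250.0
```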

As described above, the first detection position Px1 and the second detection position Py1 which are calculated by the calculation unit 35 are calculated with more accuracy when correction is made. A method of the correction will be described in detail. First, a look-up table relating to a center-of-gravity position (hereinafter, referred to as “first center-of-gravity position”) of an actually incident light spot L in the X-axis direction, a center-of-gravity position (hereinafter, referred to as “second center-of-gravity position”) of an actually incident light spot L in the Y-axis direction, the first detection position Px1, and the second detection position Py1 is created in advance, and the look-up table is recorded, for example, in the calculation unit 35. In addition, in the case of performing correction of the second detection position Py1, the calculation unit 35 reads out an error LUTy, which corresponds to the first center-of-gravity position, between the second center-of-gravity position and the second detection position Py1 from the look-up table, and calculates a correction value Py2 of the second detection position Py1 by the following Expression (4).


Py2 = Py1 + LUTy(Px1, Py1, W)   (4)

For example, the look-up table relating to the error LUTy is created as follows. First, the diameter W of the incident light spot L is determined in advance, and the light spot L having the diameter W is caused to be incident to a plurality of positions which are determined in advance in the pixels P1 to PN. The reason why the diameter W of the incident light spot L is determined in advance is that the accuracy of the second detection position Py1 and the first detection position Px1 is influenced by a relative size of the diameter W of the light spot L with respect to the width S of each of the pixels P1 to PN. However, the diameter W of the light spot L may not be determined in advance, and in this case, when creating the look-up table relating to the error LUTy, a plurality of look-up tables with respect to the diameters W of a plurality of the light spots L may be created in advance. In addition, when detecting the incident position of the light spot L, the diameter W of the incident light spot L may be measured, and a look-up table corresponding to the diameter W of the incident light spot L may be used.

Next, the second detection position Py1 that is calculated by the calculation unit 35 when the light spot L is caused to be incident to the positions is recorded for every first center-of-gravity position. A relationship between the recorded second detection position Py1, the second center-of-gravity position, and the error LUTy is expressed by an approximation curve by polynomial approximation. In addition, the look-up table obtained by interpolating numerical values corresponding to more second detection positions Py1 and the error LUTy is created on the basis of the approximation curve for every first center-of-gravity position. As another method of creating the look-up table relating to the error LUTy, for example, there is a method of creating the look-up table on the basis of a relationship between the charge signals Dx1 to DxN generated in the pixels P1 to PN, and a position to which the light spot L is actually incident in the Y-axis direction. The look-up table created in this manner can be used as a look-up table for determining a direct position (that is, a position to which the light spot L is actually incident in the Y-axis direction) from the charge signals Dx1 to DxN without going through the second detection position Py1 or the like.
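A hedged sketch of this table-building step is given below. It assumes calibration data for one fixed first center-of-gravity position and one spot diameter W; the calibration values and the cubic degree (borrowed from the later description of FIG. 12) are assumptions for illustration, and the helper names are hypothetical.

```python
import numpy as np

def build_lut_y(py1_measured, y_true, grid, degree=3):
    """Fit error = y_true - Py1 as a polynomial of Py1 (the approximation
    curve), then tabulate it on a dense grid of Py1 values."""
    error = np.asarray(y_true) - np.asarray(py1_measured)
    coeffs = np.polyfit(py1_measured, error, degree)
    return dict(zip(grid, np.polyval(coeffs, grid)))    # {Py1 value: LUTy}

def correct_py1(py1, lut_y):
    """Expression (4): Py2 = Py1 + LUTy, taking the nearest tabulated entry."""
    nearest = min(lut_y, key=lambda key: abs(key - py1))
    return py1 + lut_y[nearest]

# Illustrative calibration points (not patent data):
y_true = np.linspace(100, 900, 9)                       # known Y positions
py1_measured = y_true + 5.0 * np.sin(y_true / 300.0)    # assumed systematic error
lut_y = build_lut_y(py1_measured, y_true, np.linspace(100, 900, 161))
print(correct_py1(450.0, lut_y))
```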

On the other hand, when performing correction of the first detection position Px1, the calculation unit 35 reads out an error LUTx, which corresponds to the second center-of-gravity position, between the first center-of-gravity position and the first detection position Px1 from a look-up table relating to the first center-of-gravity position, the second center-of-gravity position, the first detection position Px1, and the second detection position Py1, and calculates a correction value Px2 of the first detection position Px1 by the following Expression (5).


Px2 = Px1 + LUTx(Px1, Py1, W)   (5)

The look-up table relating to the error LUTx is created in the same manner as in the look-up table relating to the error LUTy.

Specifically, the light spot L is caused to be incident to a plurality of positions which are determined in advance in the pixels P1 to PN, and the first detection position Px1 calculated by the calculation unit 35 at that time is recorded for every second center-of-gravity position. A relationship of the recorded first detection position Px1, the first center-of-gravity position, and the error LUTx is expressed by an approximation curve by polynomial approximation. In addition, the look-up table obtained by interpolating numerical values corresponding to more first detection positions Px1 and the error LUTx is created on the basis of the approximation curve for every second center-of-gravity position. The look-up table relating to the error LUTx may be a look-up table relating to the first center-of-gravity position and the first detection position Px1. As another method of creating the look-up table relating to the error LUTx, for example, there is a method of creating the look-up table on the basis of a relationship between the charge signals Dx1 to DxN generated in the pixels P1 to PN, and a position to which the light spot L is actually incident in the X-axis direction. The look-up table created in this manner can be used as a look-up table for determining a direct position (that is, a position to which the light spot L is actually incident in the X-axis direction) from the charge signals Dx1 to DxN. The correction by the above-described method is performed in a case of desiring to obtain the first detection position Px1 and the second detection position Py1 with more accuracy, and is not essential.
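Once both tables exist, applying Expressions (4) and (5) is straightforward. The sketch below assumes the tables are exposed as plain callables; the callable interface and the dummy error values are assumptions for illustration only.

```python
# Hedged sketch of applying the corrections of Expressions (4) and (5), assuming
# the look-up tables have already been built and are exposed here as callables
# lut_x(px1, py1, w) and lut_y(px1, py1, w) returning the tabulated errors.

def correct_position(px1, py1, w, lut_x, lut_y):
    px2 = px1 + lut_x(px1, py1, w)   # Expression (5)
    py2 = py1 + lut_y(px1, py1, w)   # Expression (4)
    return px2, py2

# Dummy tables for illustration only:
lut_x = lambda px1, py1, w: 0.02     # assumed constant X error
lut_y = lambda px1, py1, w: -1.5     # assumed constant Y error
print(correct_position(10.2, 450.0, 60.0, lut_x, lut_y))   # (10.22, 448.5)
```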

An effect obtained by the position detection sensor 1 of the above-described embodiment will be described together with a problem in a comparative example. For example, in a field of robot control or optical control, a profile sensor specialized for detecting a position of an incident light spot is suggested. For example, the profile sensor is applied to MEMS control application or the like. FIG. 23 is a view illustrating a profile sensor 100 as a comparative example. As illustrated in FIG. 23, the profile sensor 100 includes a light-receiving unit 101, a first signal processing unit 110, and a second signal processing unit 120. The light-receiving unit 101 includes a plurality of pixels 102 which are two-dimensionally arranged. Each of the pixels 102 is divided into two regions. A Y-axis direction pixel 103 and an X-axis direction pixel 104 are provided in the two regions of the pixel 102, respectively.

A plurality of the Y-axis direction pixels 103 are wired for every column (that is, in the Y-axis direction), and are electrically connected to the first signal processing unit 110. A plurality of the X-axis direction pixels 104 are wired for every row (that is, in the X-axis direction), and are electrically connected to the second signal processing unit 120. The first signal processing unit 110 sequentially outputs voltage signals corresponding to charge signals generated in the Y-axis direction pixels 103 as time-series data for every column. The time-series data represents a projection image (profile) in the X-axis direction. The first signal processing unit 110 detects a position in the X-axis direction with respect to an incident position of a light spot by the projection image in the X-axis direction. Similarly, the second signal processing unit 120 sequentially outputs voltage signals corresponding to charge signals generated in the X-axis direction pixels 104 as time-series data for every row. The time-series data represents a projection image in the Y-axis direction. The second signal processing unit 120 detects a position in the Y-axis direction with respect to the incident position of the light spot by the projection image in the Y-axis direction.

As described above, in the profile sensor 100, a two-dimensional position to which the light spot is incident is detected with only two pieces of output data, that is, the projection image in the X-axis direction and the projection image in the Y-axis direction, and thus it is possible to detect the two-dimensional position to which the light spot is incident at a high speed. That is, according to the profile sensor 100, it is possible to realize a high frame rate. In addition, for example, in the profile sensor 100, data of a projection image with a smaller data amount is handled in comparison to a sensor that detects a two-dimensional position to which a light spot is incident through image processing of image data (including information such as an incident position, a shape, and a light amount of the light spot) obtained through image capturing, and thus it is possible to suppress a circuit scale necessary for calculating the incident position of the light spot. As a result, for example, when the profile sensor 100 is manufactured by using a CMOS process, circuit parts such as an amplifier, an A/D converter, and an operation unit can be highly integrated. Application of the profile sensor 100 has been in progress in fields such as motor control and optical control in which a high frame rate is required. Among these fields, requirements such as the high frame rate and an improvement of a position detection function have been increasing.

However, in the profile sensor 100 in which the pixels 102 are two-dimensionally arranged, since the charge signals for acquiring the projection image in the Y-axis direction are output from the pixels 102 in addition to the electric signals for acquiring the projection image in the X-axis direction, the number of the charge signals is larger in comparison to a line sensor in which pixels are one-dimensionally arranged. Accordingly, in the profile sensor 100, time is taken in reading-out of the charge signals, and thus there is a limit in detection of the two-dimensional position to which light is incident at a high speed.

On the other hand, the line sensor detects the one-dimensional position to which a light spot is incident by using signals read out from a plurality of pixels which are one-dimensionally arranged. FIG. 24 is a schematic configuration diagram illustrating a line sensor 200 as a comparative example. As illustrated in FIG. 24, the line sensor 200 includes a light-receiving unit 201 and a signal processing unit 210. The light-receiving unit 201 includes pixels P1 to PN. The signal processing unit 210 includes the plurality of switch elements 31, the shift register 32, the amplifier 33, the A/D converter 34, and a calculation unit 220. A difference between the line sensor 200 and the position detection sensor 1 of this embodiment is in that the light-receiving unit 201 does not include the plurality of first transmission filters 14 and the plurality of second transmission filters 15, and the signal processing unit 210 includes the calculation unit 220 instead of the calculation unit 35. The calculation unit 220 reads out charge signals output from the pixels P1 to PN, and detects an incident position of a light spot in the X-axis direction on the basis of voltage signals corresponding to the charge signals. For example, the calculation unit 220 performs weighting operation on positions of the pixels P1 to PN in the X-axis direction with intensities of the charge signals to calculate only an incident position of the light spot in the X-axis direction.

In the line sensor 200, since only the charge signals for detecting the incident position of the light spot in the X-axis direction are output from the pixels P1 to PN, it is possible to further suppress the number of charge signals which are output in comparison to the profile sensor 100. Accordingly, in the line sensor 200, it is possible to suppress time necessary for reading-out of the charge signals, and thus it is possible to detect a one-dimensional position to which the light spot is incident at a high speed. However, in the line sensor 200, it is difficult to detect an incident position of the light spot in the Y-axis direction. That is, the line sensor 200 does not have a position detection function relating to two directions.

In the position detection sensor 1 of this embodiment, as in the line sensor 200 of the comparative example, since the pixels P1 to PN are arranged along the X-axis direction, the calculation unit 35 performs weighting operation on positions of the pixels P1 to PN with the intensities of the charge signals Dx1 to DxN to calculate the first detection position Px1 that is the incident position of the light spot L in the X-axis direction. In addition, in the first pixels 12, as the incident position of the light spot L is closer to the one end 10a in the Y-axis direction, the intensities of the charge signals Dx1, Dx3, . . . , and DxN-1 decrease. In the second pixels 13, as the incident position of the light spot L is closer to the one end 10a in the Y-axis direction, the intensities of the charge signals Dx2, Dx4, . . . , and DxN increase. The calculation unit 35 calculates the second detection position Py1 that is the incident position of light in the Y-axis direction, for example, by using Expression (3), on the basis of a variation of the intensities of the charge signals Dx1 to DxN. In this manner, the position detection sensor 1 of this embodiment can calculate the second detection position Py1 in addition to the first detection position Px1 with respect to the incident position of the light spot L. That is, the position detection sensor 1 of this embodiment has a position detection function relating to two directions.

In addition, the position detection sensor 1 of this embodiment can obtain two pieces of information of the first detection position Px1 and the second detection position Py1 with respect to the incident position of the light spot L by using only information of the charge signals Dx1 to DxN. That is, it is not necessary to separately generate charge signals for calculating the second detection position Py1 from the pixels P1 to PN. According to this, it is possible to suppress an increase of the number of charge signals, and it is possible to suppress an increase of time necessary for reading-out of the charge signals. That is, according to the position detection sensor 1 of this embodiment, it is possible to detect the two-dimensional position to which the light spot L is incident at a high speed, and it is possible to increase the frame rate. In addition, as described above, since it is not necessary to separately generate the charge signals for calculating the second detection position Py1 from the pixels P1 to PN, for example, an interconnection and a circuit for reading out such charge signals are not necessary. According to this, it is possible to make the area of the pixels P1 to PN larger, and it is possible to improve an aperture ratio of the pixels P1 to PN. As a result, it is possible to raise sensitivity with respect to the light spot L incident to the pixels P1 to PN, and it is possible to realize enlargement of a dynamic range.

First Modification Example

FIG. 4 is a schematic configuration diagram illustrating an example of a position detection sensor 1A of a first modification example. A difference between this modification example and the embodiment is in that shapes of pixels are different, and a light-receiving unit 10A of the position detection sensor 1A of this modification example does not include the plurality of first transmission filters 14 and the plurality of second transmission filters 15. In each pixel pair 11A of the light-receiving unit 10A, a width in the X-axis direction of a first pixel 12A corresponding to the first pixel 12 of the embodiment gradually decreases as it is closer to the one end 10a of the light-receiving unit 10A in the Y-axis direction, and gradually increases as it is closer to the other end 10b of the light-receiving unit 10A in the Y-axis direction. In an example, a plurality of the first pixels 12A have an isosceles triangular shape that tapers toward the one end 10a side in the Y-axis direction. On the other hand, a width in the X-axis direction of a second pixel 13A corresponding to the second pixel 13 of the embodiment gradually increases as it is closer to the one end 10a in the Y-axis direction, and gradually decreases as it is closer to the other end 10b in the Y-axis direction. In an example, a plurality of the second pixels 13A have an isosceles triangular shape that tapers toward the other end 10b side in the Y-axis direction. Hereinafter, the plurality of first pixels 12A and the plurality of second pixels 13A are collectively referred to as a plurality of pixels P1 to PN as in the embodiment. The pixels P1, P3, . . . , and PN-1 assigned with odd numbers correspond to the first pixels 12A, and the pixels P2, P4, . . . , and PN assigned with even numbers respectively correspond to the second pixels 13A. In addition, charge signals generated from the pixels P1 to PN are referred to as Dx1 to DxN.

When the light-receiving unit 10A includes the pixels P1 to PN, in the first pixels 12A, as the incident position of the light spot L is closer to the one end 10a in the Y-axis direction, an incident light amount of the light spot L incident to the first pixels 12A decreases, and according to this, intensities of charge signals Dx1, Dx3, . . . , and DxN-1 generated in the first pixels 12A also decrease. On the other hand, in the second pixels 13A, as the incident position of the light spot L is closer to the one end 10a in the Y-axis direction, an incident light amount of the light spot L incident to the second pixels 13A increases, and according to this, intensities of charge signals Dx2, Dx4, . . . , and DxN generated in the second pixels 13A also increase.
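The same complementary behavior follows directly from the triangular geometry. The following is a minimal geometric sketch under assumptions: ideal isosceles triangles with base width S, a spot much smaller than the pixel length h, and a linear taper; the values are invented for illustration.

```python
# Minimal geometric sketch (assumptions, not patent values): a first pixel 12A is
# an isosceles triangle of base width s_max at the other end 10b (y = 0) that
# tapers to zero at the one end 10a (y = h); a second pixel 13A tapers the other
# way. For a spot much smaller than h, the charge of each pixel is roughly
# proportional to the pixel width at the spot's y position.

def first_pixel_width(y: float, s_max: float, h: float) -> float:
    return s_max * (1.0 - y / h)      # narrows toward the one end 10a

def second_pixel_width(y: float, s_max: float, h: float) -> float:
    return s_max * (y / h)            # widens toward the one end 10a

s_max, h = 20.0, 1000.0
for y in (100.0, 500.0, 900.0):
    w1 = first_pixel_width(y, s_max, h)
    w2 = second_pixel_width(y, s_max, h)
    # The ratio w2 / (w1 + w2) = y / h recovers the Y position again,
    # which is what Expression (3) exploits.
    print(y, w1, w2, h * w2 / (w1 + w2))
```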

Subsequently, description will be given of accuracy of the first detection position Px1 and the second detection position Py1 which are detected by the position detection sensor 1A of this modification example. The accuracy of the first detection position Px1 and the second detection position Py1 is influenced by a relationship between the diameter W of the incident light spot L and a maximum value S of a width of each of the pixels P1 to PN. Specifically, the accuracy of the first detection position Px1 and the second detection position Py1 is further improved as the diameter W of the light spot L becomes greater than the maximum value S of the width of each of the pixels P1 to PN. Accordingly, the diameter W of the light spot L is set to be sufficiently greater than the maximum value S of the width of each of the pixels P1 to PN to improve the accuracy of the first detection position Px1 and the second detection position Py1. In an example, the diameter W of the light spot L is two or more times the maximum value S of the width of each of the pixels P1 to PN (specifically, the larger one between a maximum value of the width of the first pixel 12A and a maximum value of the width of the second pixel 13A), and preferably three or more times the maximum value S. For example, when the diameter W of the light spot L is three or more times the maximum value S of the width of the pixels P1 to PN, an error (hereinafter, referred to as a “second detection error”) between the second center-of-gravity position and the second detection position Py1 becomes 1/1000 or less times a length h of each of the pixels P1 to PN in the Y-axis direction. In a case where the size of the light spot L is larger than the maximum value S of the width of each of the pixels P1 to PN, the second detection position Py1 is calculated with sub-pixel accuracy. The size of the light spot L represents the distance r from the center of the light spot L at which the intensity I of the light spot L becomes zero in Expression (1) of the embodiment.

FIG. 5 and FIG. 6 illustrate results obtained by verifying the accuracy of the second detection position Py1 through simulation in a case where the diameter W of the light spot L is three times the maximum value S of the width of the pixels P1 to PN. In FIG. 5 and FIG. 6, the maximum value S of the width of each of the pixels P1 to PN is set to 20, and the length h of each of the pixels P1 to PN is set to 1000. FIG. 5 illustrates a relationship between the second detection position Py1 and the second center-of-gravity position. In FIG. 5, the horizontal axis represents the second detection position Py1, and the vertical axis represents the second center-of-gravity position. Values on the horizontal axis and the vertical axis in FIG. 5 are relative values in a case where the length h of each of the pixels P1 to PN is set to a reference value (that is, h is 1). In FIG. 5, graphs illustrating relationships between the second detection position Py1 and the second center-of-gravity position when an X-axis direction position to which the light spot L is actually incident is made to vary from 10 to 10.45 with an interval of 0.05 are illustrated in an overlapping manner. The values from 10 to 10.45 of the X-axis direction position are relative values in a case where the maximum value S of the width of each of the pixels P1 to PN is set to a reference value (that is, S is 1). As illustrated in FIG. 5, it can be confirmed that the second detection position Py1 and the second center-of-gravity position match each other.

FIG. 6 illustrates a relationship between the second center-of-gravity position and the second detection error. In FIG. 6, the horizontal axis represents the second center-of-gravity position and the vertical axis represents the second detection error. Values on the vertical axis and the horizontal axis in FIG. 6 are relative values in a case where the length h of each of the pixels P1 to PN is set to a reference value (that is, h is 1). In FIG. 6, graphs G10 to G19 illustrate relationships between the second center-of-gravity position and the second detection error when the X-axis direction position to which the light spot L is actually incident is made to vary from 10 to 10.45 with an interval of 0.05. The values from 10 to 10.45 of the first center-of-gravity position are relative values in a case where the maximum value S of the width of each of the pixels P1 to PN is set to a reference value (that is, S is 1). As illustrated in FIG. 6, the second detection error is very small, 0.03% or less with respect to the length h of each of the pixels P1 to PN.

Next, description will be given of a result obtained by verifying accuracy of the first detection position Px1 under the same condition as in the verification of the accuracy of the second detection position Py1 in FIG. 5 and FIG. 6. FIG. 7 and FIG. 8 illustrate results obtained by verifying the accuracy of the first detection position Px1 through simulation in a case where the diameter W of the light spot L is three times the maximum value S of the width of the pixels P1 to PN. FIG. 7 illustrates a relationship between the first detection position Px1 and the first center-of-gravity position. In FIG. 7, the horizontal axis represents the first detection position Px1, and the vertical axis represents the first center-of-gravity position. Values on the horizontal axis and the vertical axis in FIG. 7 are relative values in a case where the maximum value S of the width of the pixels P1 to PN is set to a reference value. In FIG. 7, graphs illustrating relationships between the first detection position Px1 and the first center-of-gravity position when a Y-axis direction position to which the light spot L is actually incident is made to vary from 0.1 to 0.9 with an interval of 0.1 are illustrated in an overlapping manner. The values from 0.1 to 0.9 of the Y-axis direction position are relative values in a case where the length h of each of the pixels P1 to PN is set to a reference value. As illustrated in FIG. 7, it can be confirmed that the first detection position Px1 and the first center-of-gravity position match each other.

FIG. 8 illustrates a relationship between the first center-of-gravity position and an error (hereinafter, referred to as “first detection error”) between the first center-of-gravity position and the first detection position Px1. In FIG. 8, the horizontal axis represents the first center-of-gravity position, and the vertical axis represents the first detection error. Values on the vertical axis and the horizontal axis in FIG. 8 are relative values in a case where the maximum value S of the width of each of the pixels P1 to PN is set to a reference value. In FIG. 8, graphs G20 to G28 illustrate relationships between the first center-of-gravity position and the first detection error when the Y-axis direction position to which the light spot L is actually incident is made to vary from 0.1 to 0.9 with an interval of 0.1. The values from 0.1 to 0.9 of the Y-axis direction position are relative values in a case where the length h of each of the pixels P1 to PN is set to a reference value. As illustrated in FIG. 8, the first detection error is very small, 0.015% or less with respect to the maximum value S of the width of each of the pixels P1 to PN in the X-axis direction. As described above, from the verification results in FIG. 5 to FIG. 8, it can be known that when the diameter W of the light spot L is set to be three or more times the maximum value S of the width of the pixels P1 to PN, both the first detection error and the second detection error become very small.

Next, description will be given of a state of a variation of the second detection error with respect to the variation of the diameter W of the light spot L. FIG. 9 is a view illustrating a relationship between the diameter W of the light spot L and the second detection error in a case where the diameter W of the light spot L is made to gradually vary. In FIG. 9, the horizontal axis represents the diameter W of the light spot L, and the vertical axis represents the second detection error. Values on the vertical axis in FIG. 9 are relative values in a case where the length h of each of the pixels P1 to PN is set to a reference value. Values on the horizontal axis in FIG. 9 are relative values in a case where the maximum value S of the width of each of the pixels P1 to PN is set to a reference value. As illustrated in FIG. 9, the second detection error becomes smaller as the diameter W of the light spot L becomes greater relative to the maximum value S of the width of each of the pixels P1 to PN. Specifically, for example, when the diameter W of the light spot L is two or more times the maximum value S of the width of each of the pixels P1 to PN, accuracy of the second detection position Py1 becomes sufficiently high. On the other hand, as the diameter W of the light spot L becomes smaller than two times the maximum value S of the width of each of the pixels P1 to PN, the second detection error non-linearly increases.

FIG. 10 and FIG. 11 illustrate results obtained by verifying the accuracy of the first detection position Px1 and the accuracy of the second detection position Py1 through simulation in a case where the diameter W of the light spot L is equal to the maximum value S of the width of each of the pixels P1 to PN. FIG. 10 corresponds to FIG. 7, and FIG. 11 corresponds to FIG. 8. As illustrated in FIG. 10, it can be confirmed that the second detection error becomes 20% or less of the length h of each of the pixels P1 to PN. On the other hand, as illustrated in FIG. 11, it can be confirmed that the first detection error becomes 10% or less of the maximum value S of the width of each of the pixels P1 to PN. In this manner, in a case where the diameter W of the light spot L is equal to the maximum value S of the width of each of the pixels P1 to PN, both the first detection error and the second detection error increase in comparison to the case where the diameter W of the light spot L is three times the maximum value S of the width of each of the pixels P1 to PN.

As described above, the first detection error and the second detection error are greatly influenced by the number of pixels to which the light spot L is incident. It is possible to suppress the first detection error and the second detection error by performing the same correction as in the embodiment. That is, the calculation unit 35 reads out the error LUTx and the error LUTy from the look-up table that is created in advance, and can calculate the correction value Px2 of the first detection position Px1 and the correction value Py2 of the second detection position Py1 by using Expression (4) and Expression (5). FIG. 12 is a view illustrating an example of the look-up table showing a relationship between the first detection position Px1, the second detection position Py1, the error LUTx, and the error LUTy. In FIG. 12, the diameter W of the light spot L is set to 1.0S, and the first center-of-gravity position is set to 10.0S. The error LUTx and the error LUTy illustrated in FIG. 12 are calculated by polynomial approximation including the first detection position Px1, the second detection position Py1, and the diameter W of the light spot L. Specifically, the error LUTx is calculated by linear function approximation, and the error LUTy is calculated by cubic function approximation.
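As a rough illustration of the correction step, the following Python sketch assumes (since Expressions (4) and (5) are given in the embodiment and are not reproduced here) that the corrected values are obtained by subtracting the tabulated errors from the detected positions, that is, Px2 = Px1 − LUTx and Py2 = Py1 − LUTy, and that the look-up table is keyed by the detected positions. The table layout, the helper names, and the numerical values are hypothetical; the table could equally be keyed by the spot diameter W as suggested by the polynomial approximation described above.

def lookup_error(table, px1, py1):
    """Return (LUTx, LUTy) for the entry of `table` nearest to (px1, py1).

    `table` is a list of (px1, py1, lut_x, lut_y) tuples created in advance,
    for example by the polynomial approximation or the theoretical calculation
    described in the text. Nearest-neighbour lookup is used here for brevity;
    interpolation between entries is equally possible.
    """
    return min(table, key=lambda e: (e[0] - px1) ** 2 + (e[1] - py1) ** 2)[2:4]

def correct_position(table, px1, py1):
    """Apply the assumed correction Px2 = Px1 - LUTx, Py2 = Py1 - LUTy."""
    lut_x, lut_y = lookup_error(table, px1, py1)
    return px1 - lut_x, py1 - lut_y

# Example with a toy table (values are illustrative only):
table = [(10.0, 0.1, 0.002, 0.03), (10.0, 0.5, 0.000, 0.00), (10.0, 0.9, -0.002, -0.03)]
print(correct_position(table, 10.0, 0.12))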

Here, as another method of creating the look-up table, for example, there is a theoretical calculation method based on a relationship between the magnitude of the charge signals Dx1 to DxN generated in the pixels P1 to PN and the incident position of the light spot L. FIG. 13 is a view illustrating a method of creating the look-up table. In FIG. 13, a triangular portion corresponds to the pixels P1 to PN, and a circular portion corresponds to the light spot L. In this case, a hatched portion surrounded by the triangular portion and the circular portion becomes a light-receiving area of each of the pixels P1 to PN. The area of the hatched portion is calculated by using an equation of a circle which represents the light spot L and an equation of a straight line which represents the hypotenuses of the triangular portion. In FIG. 13, the two intersections between the circle and the straight line are expressed by (X0, Y0) and (X1, Y1). The equation of the circle is expressed by the following Expression (6). In Expression (6), r represents a radius of the circle, Xc represents a center coordinate of the circle in the X-axis direction, and Yc represents a center coordinate of the circle in the Y-axis direction.


(X - X_c)^2 + (Y - Y_c)^2 = r^2   (6)

On the other hand, the equation of straight line is expressed by the following Expression (7). In Expression (7), b represents an intercept and a represents a slope.


Y=aX+b   (7)

A light-receiving area S1 is calculated on the basis of the following Expression (8) by using Expression (6) and Expression (7). Provided that, in Expression (8), the relationships Xc > r > b and Yc > r > b are established.


S_1 = \int_{X_0}^{X_1} \left\{ aX + b - \left( Y_c - \sqrt{r^2 - (X - X_c)^2} \right) \right\} dX   (8)

The light-receiving area calculated as described above corresponds to the magnitude of the intensities of the charge signals Dx1 to DxN from the pixels P1 to PN when the light spot L having the same intensity distribution is incident to each of the pixels P1 to PN. In addition, it is possible to theoretically obtain the intensities of the charge signals Dx1 to DxN generated in the pixels P1 to PN when light having a Gaussian distribution (a Gaussian beam) is incident, by applying the Gaussian distribution expressed by Expression (1) to the light-receiving area. In this manner, it is also possible to create the look-up table relating to the error LUTx and the error LUTy on the basis of the theoretical calculation. In addition, it is also possible to create a look-up table for directly determining a position (that is, a position in the X-axis direction and the Y-axis direction to which the light spot L is actually incident) from the charge signals Dx1 to DxN.
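As a numerical companion to Expressions (6) to (8), the following Python sketch evaluates the light-receiving area by the midpoint rule. It assumes that the integral of Expression (8) runs between the two intersection abscissae X0 and X1 and that the spot intensity is uniform; replacing the uniform weight with the Gaussian distribution of Expression (1) would give the Gaussian-beam case mentioned above. The parameter values at the end are illustrative only.

import math

def intersections(a, b, xc, yc, r):
    """X-coordinates where the straight line Y = aX + b of Expression (7)
    meets the circle (X - Xc)^2 + (Y - Yc)^2 = r^2 of Expression (6)."""
    # Substitute the line into the circle and solve the resulting quadratic in X.
    A = 1.0 + a * a
    B = 2.0 * (a * (b - yc) - xc)
    C = xc * xc + (b - yc) ** 2 - r * r
    d = B * B - 4.0 * A * C
    if d < 0:
        return None  # the line does not cut the circle
    x0 = (-B - math.sqrt(d)) / (2.0 * A)
    x1 = (-B + math.sqrt(d)) / (2.0 * A)
    return x0, x1

def light_receiving_area(a, b, xc, yc, r, n=10000):
    """Midpoint-rule evaluation of Expression (8):
    S1 = integral over [X0, X1] of { aX + b - (Yc - sqrt(r^2 - (X - Xc)^2)) } dX."""
    xs = intersections(a, b, xc, yc, r)
    if xs is None:
        return 0.0
    x0, x1 = xs
    dx = (x1 - x0) / n
    area = 0.0
    for i in range(n):
        x = x0 + (i + 0.5) * dx
        lower_arc = yc - math.sqrt(max(r * r - (x - xc) ** 2, 0.0))
        area += (a * x + b - lower_arc) * dx
    return area

# Illustrative use: a spot of radius 1 cut by the hypotenuse of a pixel.
print(light_receiving_area(a=0.5, b=0.2, xc=2.0, yc=2.0, r=1.0))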

FIG. 14 is a schematic configuration diagram illustrating a position measurement device 2 including the position detection sensor 1A according to this modification example. The position measurement device 2 measures each of the first detection position Px1 and the second detection position Py1. As illustrated in FIG. 14, the position measurement device 2 includes the position detection sensor 1A of this modification example, and a light source 3. The light source 3 irradiates the light-receiving unit 10A with light. When the light spot L is incident to the pixels P1 to PN of the light-receiving unit 10A, the charge signals Dx1 to DxN are generated from the pixels P1 to PN. The position detection sensor 1A calculates the first detection position Px1 and the second detection position Py1 on the basis of the charge signals Dx1 to DxN. Since the position measurement device 2 includes the position detection sensor 1A, the position measurement device 2 appropriately exhibits the effect of the embodiment. In addition, the diameter W of the light spot L output from the light source 3 is set to be greater than the maximum value S of the width of each of the plurality of pixels P1 to PN. In an example, the diameter W of the light spot L that is emitted to the light-receiving unit 10A is two or more times the maximum value S of the width of each of the pixels P1 to PN (specifically, the larger one between the maximum value S of the width of the first pixel 12A and the maximum value S of the width of the second pixel 13A), and more preferably three or more times the maximum value S. According to this, it is possible to calculate the first detection position Px1 and the second detection position Py1 with high accuracy.

FIG. 15 is a view illustrating a state in which a plurality of (for example, two) light spots LA and LB are simultaneously incident to the position detection sensor 1A of this modification example. In the position detection sensor 1A of this modification example, even in a case where the light spots LA and LB are simultaneously incident to the light-receiving unit 10A, when the plurality of light spots LA and LB are spaced away from each other by 1S or more, it is possible to detect two-dimensional positions of the light spots LA and LB. As illustrated in FIG. 15, the light spot LA is incident to pixels P3 to P6 (the third to sixth pixels from the left in FIG. 15). The light spot LB is incident to pixels P8 to P12 (the eighth to twelfth pixels from the left in FIG. 15). In this case, the calculation unit 35 calculates the two-dimensional position of the light spot LA as follows. That is, the calculation unit 35 calculates the two-dimensional position of the light spot LA by applying Expression (2) and Expression (3) in the embodiment with respect to only the pixels P3 to P6 to which the light spot LA is incident. Specifically, the calculation unit 35 can calculate a first detection position PxA and a second detection position PyA of the light spot LA by using, for example, the following Expression (9) and Expression (10).

[Mathematical Formula 9]
PxA = \frac{\sum_{i=3}^{6} (iS/2)\, Dx_i}{\sum_{i=3}^{6} Dx_i}   (9)

[Mathematical Formula 10]
PyA = \frac{\sum_{i=4,6} h\, Dx_i}{\sum_{i=3}^{6} Dx_i}   (10)

Similarly, the calculation unit 35 calculates the two-dimensional position of the light spot LB by applying Expression (2) and Expression (3) in the embodiment with respect to only the pixels P8 to P12 to which the light spot LB is incident. Specifically, the calculation unit 35 can calculate a first detection position PxB and a second detection position PyB of the light spot LB by using, for example, the following Expression (11) and Expression (12).

[Mathematical Formula 11]
PxB = \frac{\sum_{i=8}^{12} (iS/2)\, Dx_i}{\sum_{i=8}^{12} Dx_i}   (11)

[Mathematical Formula 12]
PyB = \frac{\sum_{i=8,10,12} h\, Dx_i}{\sum_{i=8}^{12} Dx_i}   (12)

As described above, even in a case where the plurality of light spots LA and LB are simultaneously incident to the light-receiving unit 10A, it is possible to correct the first detection position PxA and the second detection position PyA of the light spot LA, and the first detection position PxB and the second detection position PyB of the light spot LB, by using the look-up table relating to the error LUTx and the error LUTy.
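A compact Python sketch of the per-spot center-of-gravity operation of Expressions (9) to (12) is given below. It assumes, as those expressions suggest, that pixel i contributes the X coordinate iS/2 and that the even-numbered pixels are the second pixels whose signal fraction, multiplied by h, gives the Y position. The signal values are toy numbers, not measured data.

def spot_position(signals, pixel_indices, S, h):
    """Center-of-gravity operation for one light spot, applied only to the
    pixels `pixel_indices` to which that spot is incident (cf. Expressions
    (9) to (12)); `signals` maps a pixel index i to its charge signal Dx_i."""
    total = sum(signals[i] for i in pixel_indices)
    px = sum(i * S / 2 * signals[i] for i in pixel_indices) / total
    py = h * sum(signals[i] for i in pixel_indices if i % 2 == 0) / total
    return px, py

# Light spot LA over pixels P3 to P6 and light spot LB over pixels P8 to P12.
signals = {3: 0.2, 4: 0.5, 5: 0.6, 6: 0.1, 8: 0.1, 9: 0.4, 10: 0.7, 11: 0.4, 12: 0.1}
print(spot_position(signals, range(3, 7), S=1.0, h=10.0))   # (PxA, PyA)
print(spot_position(signals, range(8, 13), S=1.0, h=10.0))  # (PxB, PyB)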

The shape of the pixels P1 to PN is not limited to the above-described shape and may be another shape. FIG. 16, FIG. 17, and FIG. 18 are views illustrating other examples of the shape of the pixels P1 to PN of this modification example. A first pixel 12B of each pixel pair 11B illustrated in FIG. 16 includes a plurality of (for example, six) light-receiving regions 12b each having a square shape. A width of the light-receiving regions 12b in the X-axis direction is smaller as the light-receiving region 12b is closer to the one end 10a of the light-receiving unit 10B in the Y-axis direction, and is larger as the light-receiving region 12b is closer to the other end 10b of the light-receiving unit 10B in the Y-axis direction. A second pixel 13B of the pixel pair 11B includes a plurality of (for example, six) light-receiving regions 13b each having a square shape. A width of the light-receiving regions 13b in the X-axis direction is larger as the light-receiving region 13b is closer to the one end 10a of the light-receiving unit 10B in the Y-axis direction, and is smaller as the light-receiving region 13b is closer to the other end 10b of the light-receiving unit 10B in the Y-axis direction.

A width of a first pixel 12C of each pixel pair 11C illustrated in FIG. 17 in the X-axis direction decreases step by step (in a step shape) as it is closer to the one end 10a of a light-receiving unit 10C in the Y-axis direction, and increases step by step (in a step shape) as it is closer to the other end 10b of the light-receiving unit 10C in the Y-axis direction. On the other hand, a width of a second pixel 13C in the X-axis direction increases step by step (in a step shape) as it is closer to the one end 10a in the Y-axis direction, and decreases step by step (in a step shape) as it is closer to the other end 10b. A first pixel 12D of each pixel pair 11D illustrated in FIG. 18 has a right-angled triangular shape that tapers toward the one end 10a side of the light-receiving unit 10D in the Y-axis direction. On the other hand, a second pixel 13D of the pixel pair 11D has a right-angled triangular shape that tapers toward the other end 10b side of the light-receiving unit 10D in the Y-axis direction. Outer edges of the first pixel 12D and the second pixel 13D at a boundary of a plurality of pixel pairs 11D are not inclined with respect to the Y-axis direction, and extend along the Y-axis direction.

It is not necessary for the arrangement of the pixels P1 to PN in the X-axis direction to be an arrangement in which the first pixel and the second pixel are alternately arranged side by side, and the arrangement may be another arrangement. FIG. 19 is a view illustrating another example of the arrangement of the pixels P1 to PN of this modification example. As illustrated in FIG. 19, a first pixel 12E of each pixel pair 11E has a right-angled triangular shape that tapers toward the one end 10a side of the light-receiving unit 10E in the Y-axis direction, and a second pixel 13E of the pixel pair 11E has a right-angled triangular shape that tapers toward the other end 10b side of the light-receiving unit 10E. In addition, straight lines which constitute the right angle of a plurality of the first pixels 12E are disposed to face each other in the X-axis direction. Similarly, straight lines which constitute the right angle of a plurality of the second pixels 13E are disposed to face each other in the X-axis direction. Even in this aspect, it is possible to exhibit the same effect as in the embodiment.

Second Modification Example

FIG. 20 is a schematic configuration diagram illustrating a position detection sensor 1B of a second modification example. A difference between this modification example and the embodiment is in that a light-receiving unit 10F of the position detection sensor 1B of this modification example includes a plurality of first light-shielding parts 16 and a plurality of second light-shielding parts 17 instead of the plurality of first transmission filters 14 and the plurality of second transmission filters 15. Each of the first light-shielding parts 16 is disposed on each first pixel 12 and shields incident light. The first light-shielding part 16 covers another portion of the first pixel 12 excluding one portion 12c (a hatched portion in FIG. 20) of the first pixel 12. A width of the one portion 12c in the X-axis direction gradually decreases (or decreases step by step) as it is closer to the one end 10a of the light-receiving unit 10F in the Y-axis direction, and gradually increases (or increases step by step) as it is closer to the other end 10b of the light-receiving unit 10F. In an example, the one portion 12c has an isosceles triangular shape that tapers toward the one end 10a side of the light-receiving unit 10F. In this case, the first light-shielding part 16 has a shape that is hollowed out in the isosceles triangular shape.

Each of the second light-shielding parts 17 is disposed on each second pixel 13 and shields incident light. The second light-shielding part 17 covers another portion of the second pixel 13 excluding one portion 13c (a hatched portion in FIG. 20) of the second pixel 13. A width of the one portion 13c in the X-axis direction gradually increases (or increases step by step) as it is closer to the one end 10a in the Y-axis direction, and gradually decreases (or decreases step by step) as it is closer to the other end 10b in the Y-axis direction. In an example, the one portion 13c has an isosceles triangular shape that tapers toward the other end 10b side in the Y-axis direction. In this case, the second light-shielding part 17 has a shape that is hollowed out in the isosceles triangular shape.

Since the light-receiving unit 10F includes the first light-shielding parts 16 and the second light-shielding parts 17, an incident light amount of a light spot L incident to a plurality of the first pixels 12 decreases as an incident position of the light spot L is closer to the one end 10a in the Y-axis direction, and according to this, intensities of charge signals Dx1, Dx3, . . . , and DxN-1 generated in the first pixels 12 also decrease. In contrast, an incident light amount of the light spot L incident to a plurality of the second pixels 13 increases as the incident position of the light spot L is closer to the one end 10a in the Y-axis direction, and according to this, intensities of charge signals Dx2, Dx4, . . . , and DxN generated in the second pixels 13 also increase.
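The following minimal Python model illustrates this behavior under an idealization: each opening is an isosceles triangle whose width varies linearly over the pixel length h, and the light spot is small compared with the opening. The coordinate y is measured from the other end 10b toward the one end 10a, and all function names and values are illustrative, not taken from the embodiment.

def shielded_signals(y, h, total_light=1.0):
    """Charge signals of one first/second pixel pair for a small spot at y
    (y = 0 at the other end 10b, y = h at the one end 10a)."""
    first = total_light * (h - y) / h   # opening 12c narrows toward the one end 10a
    second = total_light * y / h        # opening 13c widens toward the one end 10a
    return first, second

def estimate_y(first, second, h):
    """Recover the Y position from the signal ratio, in the same spirit as the
    second-position operation of Expression (10): h * sum(second) / sum(all)."""
    return h * second / (first + second)

h = 10.0
for y in (1.0, 5.0, 9.0):
    f, s = shielded_signals(y, h)
    print(y, estimate_y(f, s, h))   # the estimate reproduces y in this idealization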

Third Modification Example

FIG. 21 is a schematic configuration diagram illustrating a position detection sensor 1C of a third modification example. A difference between this modification example and the embodiment is in that each of pixels P1 to PN of this modification example is divided into two parts in the Y-axis direction, and two signal processing units are provided. As illustrated in FIG. 21, the position detection sensor 1C of this modification example includes a light-receiving unit 10G, a first signal processing unit 30A, and a second signal processing unit 30B. Each of pixels P1 to PN of the light-receiving unit 10G is divided into two regions at a boundary near the center in the Y-axis direction. As the two regions, each first pixel 12 includes a region 12F located on the one end 10a side of the light-receiving unit 10G in the Y-axis direction and a region 12G located on the other end 10b side of the light-receiving unit 10G in the Y-axis direction. As the two regions, each second pixel 13 includes a region 13F located on the one end 10a side in the Y-axis direction and a region 13G located on the other end 10b side in the Y-axis direction.

The first signal processing unit 30A and the second signal processing unit 30B are respectively provided on both ends of the pixels P1 to PN in the Y-axis direction. Each of the first signal processing unit 30A and the second signal processing unit 30B includes the plurality of switch elements 31, the shift register 32, the amplifier 33, and the A/D converter 34. Input terminals of the switch elements 31 of the first signal processing unit 30A are electrically connected to the regions 12F and the regions 13F. Input terminals of the switch elements 31 of the second signal processing unit 30B are electrically connected to the regions 12G and the regions 13G. The calculation unit 35 is electrically connected to the A/D converter 34 of the first signal processing unit 30A, and the A/D converter 34 of the second signal processing unit 30B. The calculation unit 35 calculates the first detection position Px1 and the second detection position Py1 with respect to the incident position of the light spot L incident to the light-receiving unit 10G on the basis of charge signals DxF1 to DxFN generated in the regions 12F and the regions 13F, and charge signals DxG1 to DxGN generated in the regions 12G and the regions 13G as in the embodiment.
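One plausible way for the calculation unit 35 to combine the two read-out streams is sketched below in Python. The assumption that the charge read out from a region 12F or 13F is simply summed with the charge read out from the corresponding region 12G or 13G before the center-of-gravity operation is an illustration for this sketch, not a statement of the embodiment, and the values are toy numbers.

def combine_halves(dxf, dxg):
    """dxf[i] and dxg[i] are the digitized charge signals DxF and DxG of the
    (i+1)-th pixel, read out by the first and second signal processing units."""
    return [f + g for f, g in zip(dxf, dxg)]

def center_of_gravity(dx, S, h):
    """Center-of-gravity operation over all pixels, in the same form as
    Expressions (9) and (10): pixel i is placed at i*S/2 in X, and the
    even-numbered pixels are treated as second pixels for the Y position."""
    total = sum(dx)
    px1 = sum((i + 1) * S / 2 * d for i, d in enumerate(dx)) / total
    py1 = h * sum(d for i, d in enumerate(dx) if (i + 1) % 2 == 0) / total
    return px1, py1

dxf = [0.0, 0.1, 0.3, 0.4, 0.2, 0.0]   # toy values read out from regions 12F and 13F
dxg = [0.0, 0.2, 0.5, 0.6, 0.3, 0.1]   # toy values read out from regions 12G and 13G
print(center_of_gravity(combine_halves(dxf, dxg), S=1.0, h=10.0))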

In the position detection sensor 1C of this modification example, each of the pixels P1 to PN is divided into two parts, and as a result, the charge signals DxF1 to DxFN generated in the regions 12F and the regions 13F are read out by the first signal processing unit 30A, and the charge signals DxG1 to DxGN generated in the regions 12G and the regions 13G are read out by the second signal processing unit 30B. According to this, in each of the pixels P1 to PN, it is possible to shorten a distance from a portion to which the light spot L is incident to each of the switch elements 31. As a result, utilization efficiency of the light incident to the pixels P1 to PN is raised, and accuracy of the first detection position Px1 and the second detection position Py1 can be improved.

Fourth Modification Example

FIG. 22 is a schematic configuration diagram illustrating a position detection sensor 1D of a fourth modification example. A difference between this modification example and the embodiment is in that a light-receiving unit 10H of the position detection sensor 1D of this modification example includes a plurality of metal wires 20. For example, the metal wires 20 are aluminum (Al) wires. The metal wires 20 are respectively provided in correspondence with pixels P1 to PN, extend on the pixels P1 to PN along the Y-axis direction, and are continuously or intermittently connected to the pixels P1 to PN. The metal wires 20 are electrically connected to the input terminals of the switch elements 31, respectively. In the pixels P1 to PN, as the incident position of the light spot L in the Y-axis direction is further spaced away from each of the switch elements 31, more time is taken in reading out the charge signals Dx1 to DxN generated in the pixels P1 to PN. The reason for this is considered to be that a movement speed of the charge signals Dx1 to DxN in a diffusion layer that constitutes the pixels P1 to PN is slow, and thus time is taken to transfer the charge signals Dx1 to DxN.

Here, the metal wires 20 extending along the Y-axis direction are respectively provided on the pixels P1 to PN, and the metal wires 20 are respectively connected to the switch elements 31 so that the charge signals Dx1 to DxN pass through the metal wires 20. According to this, it is possible to increase the movement speed of the charge signals Dx1 to DxN, and thus to improve a reading-out speed of the charge signals Dx1 to DxN.

The position detection sensor and the position measurement device of the present disclosure are not limited to the embodiment and the modification examples, and various modifications can be additionally made. For example, the embodiment and the modification examples may be combined in correspondence with a required object and effect. The position detection sensor may be applied to a measurement device that measures a three-dimensional shape of a target by using a so-called light sectioning method. In this case, a two-dimensional position of light reflected from a surface of the target is detected by the position detection sensor, and the three-dimensional shape of the target is measured on the basis of the two-dimensional position that is detected.

REFERENCE SIGNS LIST

1, 1A to 1D: position detection sensor, 2: position measurement device, 3: light source, 10, 10A to 10H: light-receiving unit, 10a: one end, 10b: another end, 11, 11A to 11E: pixel pair, 12, 12A to 12E: first pixel, 12F, 12G, 13F, 13G: region, 12b, 13b: light-receiving region, 12c, 13c: one portion, 13, 13A to 13E: second pixel, 14: first transmission filter, 15: second transmission filter, 16: first light-shielding part, 17: second light-shielding part, 20: metal wire, 30: signal processing unit, 30A: first signal processing unit, 30B: second signal processing unit, 31: switch element, 32: shift register, 33: amplifier, 34: A/D converter, 35: calculation unit, Dx1 to DxN: charge signal, L, LA, LB: light spot, P1 to PN: pixel.

Claims

1. A position detection sensor that detects an incident position of light, comprising:

a light-receiving unit that includes a plurality of pixel pairs, each of the pixel pairs including a first pixel that generates a first electric signal corresponding to an incident light amount of the light and a second pixel that is disposed along a first direction side by side with the first pixel and generates a second electric signal corresponding to an incident light amount of the light, and the pixel pairs being arranged along the first direction; and
a calculation unit that performs center-of-gravity operation by using an intensity of the first electric signal and an intensity of the second electric signal to calculate a first position that is the incident position in the first direction,
wherein in the first pixel, as the incident position is closer to one end of the light-receiving unit in a second direction intersecting the first direction, the intensity of the first electric signal decreases,
in the second pixel, as the incident position is closer to the one end in the second direction, the intensity of the second electric signal increases, and
the calculation unit further calculates a second position that is the incident position in the second direction on the basis of a first integrated value obtained by integrating the intensity of the first electric signal and a second integrated value obtained by integrating the intensity of the second electric signal.

2. The position detection sensor according to claim 1,

wherein the light-receiving unit further includes a first transmission filter which covers the first pixel and through which the light is transmitted, and a second transmission filter which covers the second pixel and through which the light is transmitted,
a transmittance of the light in the first transmission filter decreases as it is closer to the one end in the second direction, and
a transmittance of the light in the second transmission filter increases as it is closer to the one end in the second direction.

3. The position detection sensor according to claim 1,

wherein the light-receiving unit further includes a first light-shielding part that covers another portion of the first pixel excluding one portion of the first pixel, and shields the light, and a second light-shielding part that covers another portion of the second pixel excluding one portion of the second pixel and shields the light,
a width of the one portion of the first pixel in the first direction decreases as it is closer to the one end in the second direction, and
a width of the one portion of the second pixel in the first direction increases as it is closer to the one end in the second direction.

4. The position detection sensor according to claim 1,

wherein a width of the first pixel in the first direction decreases as it is closer to the one end in the second direction, and
a width of the second pixel in the first direction increases as it is closer to the one end in the second direction.

5. A position measurement device that measures an incident position of light, comprising:

the position detection sensor according to claim 1; and
a light source that irradiates the light-receiving unit with the light,
wherein a diameter of the light that is emitted to the light-receiving unit is two or more times the larger one of a maximum value of a width of the first pixel in the first direction and a maximum value of a width of the second pixel in the first direction.
Patent History
Publication number: 20200217644
Type: Application
Filed: Aug 28, 2018
Publication Date: Jul 9, 2020
Applicant: HAMAMATSU PHOTONICS K.K. (Hamamatsu-shi, Shizuoka)
Inventors: Munenori TAKUMI (Hamamatsu-shi, Shizuoka), Haruyoshi TOYODA (Hamamatsu-shi, Shizuoka), Yoshinori MATSUI (Hamamatsu-shi, Shizuoka), Kazutaka SUZUKI (Hamamatsu-shi, Shizuoka), Kazuhiro NAKAMURA (Hamamatsu-shi, Shizuoka), Keisuke UCHIDA (Hamamatsu-shi, Shizuoka)
Application Number: 16/647,500
Classifications
International Classification: G01B 11/00 (20060101); H01L 27/144 (20060101); H01L 31/02 (20060101); H01L 31/0216 (20060101);