Pixel diagnostics with a bypass mode

- Lumileds LLC

An LED controller includes an image buffer to hold image data. An LED pixel forming a part of a large pixel array is activatable in response to image data, LDO state, and pulse width modulation module state. A logic module including a pixel diagnostic mode using an LDO bypass is connected to modify the LDO state and allow direct addressing of the LED pixel for diagnostic purposes without needing to use image data from the image buffer.

Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims benefit of priority to European Patent Application No. 18203783.8 filed Oct. 31, 2018 and to U.S. Provisional Patent Application No. 62/729,244 filed Sep. 10, 2018, each of which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

The present disclosure relates generally to a microcontroller with external image data inputs that is able to support an addressable LED pixel array at high image refresh rates.

BACKGROUND

While pixel arrays of LEDs with supporting CMOS circuitry have been used, practical implementations suitable for commercial use can face severe manufacture, power, and data management problems. Individual light intensity of thousands of emitting pixels may need to be controlled at refresh rates of 30-60 Hz. High data refresh rates are needed for many applications, and systems that support a variety of calibration, testing, and control methodologies are needed.

SUMMARY

In one embodiment, an LED controller includes an image buffer to hold image data. An LED pixel forming a part of a large pixel array is activatable in response to image data, LDO state, and pulse width modulation module state. A logic module including a pixel diagnostic mode using an LDO bypass is connected to modify the LDO state and allow direct addressing of the LED pixel for diagnostic purposes without needing to use image data from the image buffer.

In one embodiment, the image buffer is effectively disconnected from the LED pixel when the LDO bypass is activated.

In another embodiment, a pulse width modulator is connected between the image frame buffer and the LED pixel. The pulse width modulator can have its duty cycle loaded during a read of the image buffer. In other embodiments, the pulse width modulator supports a configurable per-pixel leading edge phase shift.

A row select and a column select are used to select a particular LED pixel for activation when using LDO bypass. The LED pixel can be supplied with a data line, a bypass line, a PWMOSC line, a Vbias line, and a Vf line.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a diagram illustrating illumination of a road in discrete sectors using an active headlamp;

FIG. 2 illustrates a dynamic pixel addressable lighting module positioned adjacent to a static lighting module;

FIG. 3A is one embodiment of a vehicle headlamp system for controlling an active headlamp;

FIG. 3B is one embodiment of a vehicle headlamp system for controlling an active headlamp with connections to vehicle processing output;

FIG. 4 is a schematic illustration of one embodiment of an active headlamp controller;

FIG. 5 is an illustration of a microcontroller assembly for an LED pixel array;

FIGS. 6A and 6B respectively illustrate an LDO bypass circuit for a pixel control circuit and a gate timing diagram;

FIGS. 6C and 6D respectively illustrate an alternative pixel control circuit and gate timing diagram; and

FIG. 7 illustrates an active matrix pixel array with row and column select supporting LDO bypass.

DETAILED DESCRIPTION

Light emitting pixel arrays may support applications that benefit from fine-grained intensity, spatial, and temporal control of light distribution. This may include, but is not limited to, precise spatial patterning of emitted light from pixel blocks or individual pixels. Depending on the application, emitted light may be spectrally distinct, adaptive over time, and/or environmentally responsive. The light emitting pixel arrays may provide pre-programmed light distribution in various intensity, spatial, or temporal patterns. The emitted light may be based at least in part on received sensor data and may be used for optical wireless communications. Associated optics may be distinct at a pixel, pixel block, or device level. An example light emitting pixel array may include a device having a commonly controlled central block of high intensity pixels with an associated common optic, whereas edge pixels may have individual optics. Common applications supported by light emitting pixel arrays include video lighting, automotive headlights, architectural and area illumination, street lighting, and informational displays.

Light emitting pixel arrays may be used to selectively and adaptively illuminate buildings or areas for improved visual display or to reduce lighting costs. In addition, light emitting pixel arrays may be used to project media facades for decorative motion or video effects. In conjunction with tracking sensors and/or cameras, selective illumination of areas around pedestrians may be possible. Spectrally distinct pixels may be used to adjust the color temperature of lighting, as well as support wavelength specific horticultural illumination.

Street lighting is an important application that may greatly benefit from use of light emitting pixel arrays. A single type of light emitting array may be used to mimic various street light types, allowing, for example, switching between a Type I linear street light and a Type IV semicircular street light by appropriate activation or deactivation of selected pixels. In addition, street lighting costs may be lowered by adjusting light beam intensity or distribution according to environmental conditions or time of use. For example, light intensity and area of distribution may be reduced when pedestrians are not present. If pixels of the light emitting pixel array are spectrally distinct, the color temperature of the light may be adjusted according to respective daylight, twilight, or night conditions.

Light emitting arrays are also well suited for supporting applications requiring direct or projected displays. For example, warning, emergency, or informational signs may all be displayed or projected using light emitting arrays. This allows, for example, color changing or flashing exit signs to be projected. If a light emitting array is composed of a large number of pixels, textual or numerical information may be presented. Directional arrows or similar indicators may also be provided.

Vehicle headlamps are a light emitting array application that requires large pixel numbers and a high data refresh rate. Automotive headlights that actively illuminate only selected sections of a roadway can be used to reduce problems associated with glare or dazzling of oncoming drivers. Using infrared cameras as sensors, light emitting pixel arrays activate only those pixels needed to illuminate the roadway, while deactivating pixels that may dazzle pedestrians or drivers of oncoming vehicles. In addition, off-road pedestrians, animals, or signs may be selectively illuminated to improve driver environmental awareness. If pixels of the light emitting pixel array are spectrally distinct, the color temperature of the light may be adjusted according to respective daylight, twilight, or night conditions. Some pixels may be used for optical wireless vehicle-to-vehicle communication.

One high value application for light emitting arrays is illustrated with respect to FIG. 1, which shows potential roadway illumination pattern 100 for a vehicle headlamp system illuminating a region 120 in front of a vehicle. As illustrated, a roadway 110 includes a left edge 112, a right edge 114, and a centerline 116. In this example, two major regions are illuminated—a downward directed statically illuminated region 122 and a dynamically illuminated region 130. Light intensity within region 130 can be dynamically controlled. For example, as an oncoming vehicle (not shown) traveling between centerline 116 and left edge 112 moves into a subregion 132, light intensity can be reduced or shut off completely. As the oncoming vehicle moves toward subregion 134, a series of subregions (not shown) can be defined to also have reduced light intensity, reducing the chance of unsafe dazzle or glare. As will be appreciated, in other embodiments, light intensity can be increased to accentuate road signs or pedestrians, or spatial illumination patterns adjusted to allow, for example, dynamic light tracking of curved roadways.

FIG. 2 illustrates a positioning of lighting modules 200 able to provide a lighting pattern such as discussed with respect to FIG. 1. An LED light module 222 can include LEDs, alone or in conjunction with primary or secondary optics, including lenses or reflectors. To reduce overall data management requirements, the light module 222 can be limited to on/off functionality or switching between relatively few light intensity levels. Pixel level control of light intensity is not necessarily supported.

Positioned adjacent to LED light module 222 is an active LED array 230. The LED array includes a CMOS die 202, with a pixel area 204 and alternatively selectable LED areas 206 and 208. The pixel area 204 can have 104 rows and 304 columns, for a total of 31,616 pixels distributed over an area of 12.2 by 4.16 millimeters. The selectable LED areas 206 and 208 allow for differing aspect ratios suitable for different vehicle headlamps or applications to be selected. For example, in one embodiment selectable LED area 206 can have a 1:3 aspect ratio with 82 rows and 246 columns, for a total of 20,172 pixels distributed over an area of 10.6 by 4 millimeters. Alternatively, selectable LED area 208 can have a 1:4 aspect ratio with 71 rows and 284 columns, for a total of 20,164 pixels distributed over an area of 12.1 by 3.2 millimeters. In one embodiment, pixels can be actively managed to have a 10-bit intensity range and a refresh rate of between 30 and 100 Hz, with a typical operational refresh rate of 60 Hz or greater.
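The pixel counts above follow directly from the stated row and column dimensions; a minimal sketch (the helper name is illustrative, not from the disclosure):

```python
def pixel_count(rows, cols):
    """Total pixels in a rectangular LED area, e.g. the full
    104 x 304 pixel area or the selectable sub-areas above."""
    return rows * cols

full_area = pixel_count(104, 304)  # 31,616 pixels
area_206 = pixel_count(82, 246)    # 20,172 pixels (1:3 aspect area)
area_208 = pixel_count(71, 284)    # 20,164 pixels (1:4 aspect area)
```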

FIG. 3A illustrates an embodiment of a vehicle headlamp system 300 including a vehicle supported power (302) and control system including a data bus (304). A sensor module 306 can be connected to the data bus 304 to provide data related to environmental conditions (e.g., time of day, rain, fog, ambient light levels), vehicle condition (parked, in-motion, speed, direction), or presence/position of other vehicles or pedestrians. A separate headlamp controller 330 can be connected to the vehicle supported power and control system.

The vehicle headlamp system 300 can include a power input filter and control module 310. The module 310 can support various filters to reduce conducted emissions and provide power immunity. Electrostatic discharge (ESD) protection, load-dump protection, alternator field decay protection, and reverse polarity protection can also be provided by module 310.

Filtered power can be provided to an LED DC/DC module 312. Module 312 can be used only for powering LEDs, and typically has an input voltage of between 7 and 18 volts, with a nominal 13.2 volts. Output voltage can be set to be slightly higher (e.g., 0.3 volts) than the LED array maximum voltage, as determined by factory or local calibration and operating condition adjustments due to load, temperature, or other factors.

Filtered power is also provided to a logic LDO module 314 that can be used to power microcontroller 322 or CMOS logic in the active headlamp 330.

The vehicle headlamp system 300 can also include a bus transceiver 320 (e.g. with a UART or SPI interface) connected to microcontroller 322. The microcontroller 322 can translate vehicle input based on or including data from the sensor module 306. The translated vehicle input can include a video signal that is transferrable to an image buffer in the active headlamp module 324. In addition, the microcontroller 322 can load default image frames and test for open/short pixels during startup. In one embodiment, an SPI interface loads an image buffer in CMOS. Image frames can be full frame, differential, or partial. Other microcontroller 322 features can include control interface monitors of CMOS status, including die temperature, as well as logic LDO output. In some embodiments, LED DC/DC output can be dynamically controlled to minimize headroom. In addition to providing image frame data, other headlamp functions, such as complementary use in conjunction with side marker or turn signal lights and/or activation of daytime running lights, can also be controlled.

FIG. 3B illustrates one embodiment of various components and modules of a vehicle headlamp system 330 capable of accepting vehicle sensor inputs and commands, as well as commands based on headlamp or locally mounted sensors. As seen in FIG. 3B, vehicle mounted systems can include remote sensors 340 and electronic processing modules capable of sensor processing 342. Processed sensor data can be input to various decision algorithms in a decision algorithm module 344 that result in command instructions or pattern creation based at least in part on various sensor input conditions, for example, such as ambient light levels, time of day, vehicle location, location of other vehicles, road conditions, or weather conditions. As will be appreciated, useful information for the decision algorithm module 344 can be provided from other sources as well, including connections to user smartphones, vehicle to vehicle wireless connections, or connection to remote data or information resources.

Based on the results of the decision algorithm module 344, image creation module 346 provides an image pattern that will ultimately provide an active illumination pattern to the vehicle headlamp that is dynamically adjustable and suitable for conditions. This created image pattern can be encoded for serial or other transmission scheme by image coding module 348 and sent over a high speed bus 350 to an image decoding module 354. Once decoded, the image pattern is provided to the uLED module 380 to drive activation and intensity of illumination pixels.

In some operational modes, the system 330 can be driven with default or simplified image patterns using instructions provided to a headlamp control module 370 via connection of the decision algorithm module 344 through a CAN bus 352. For example, an initial pattern on vehicle start may be a uniform, low light intensity pattern. In some embodiments, the headlamp control module can be used to drive other functions, including sensor activation or control.

In other possible operational modes, the system 330 can be driven with image patterns derived from local sensors or commands not requiring input via the CAN bus 352 or high speed bus 350. For example, local sensors 360 and electronic processing modules capable of sensor processing 362 can be used. Processed sensor data can be input to various decision algorithms in a decision algorithm module 364 that result in command instructions or pattern creation based at least in part on various sensor input conditions, for example, such as ambient light levels, time of day, vehicle location, location of other vehicles, road conditions, or weather conditions. As will be appreciated, like vehicle supported remote sensors 340, useful information for the decision algorithm module 364 can be provided from other sources as well, including connections to user smartphones, vehicle to vehicle wireless connections, or connection to remote data or information resources.

Based on the results of the decision algorithm module 364, image creation module 366 provides an image pattern that will ultimately provide an active illumination pattern to the vehicle headlamp that is dynamically adjustable and suitable for conditions. In some embodiments, this created image pattern does not require additional image coding/decoding steps but can be directly sent to the uLED module 380 to drive illumination of selected pixels.

FIG. 4 illustrates one embodiment of various components and modules of an active headlamp system 400 such as described with respect to active headlamp 330 of FIG. 3B. As illustrated, internal modules include an LED power distribution and monitor module 410 and a logic and control module 420.

Image or other data from the vehicle can arrive via an SPI interface 412. Successive images or video data can be stored in an image frame buffer 414. If no image data is available, one or more standby images held in a standby image buffer can be directed to the image frame buffer 414. Such standby images can include, for example, an intensity and spatial pattern consistent with legally allowed low beam headlamp radiation patterns of a vehicle.

In operation, pixels in the images are used to define response of corresponding LED pixels in the pixel module 430, with intensity and spatial modulation of LED pixels being based on the image(s). To reduce data rate issues, groups of pixels (e.g. 5×5 blocks) can be controlled as single blocks in some embodiments. High speed and high data rate operation is supported, with pixel values from successive images able to be loaded as successive frames in an image sequence at a rate between 30 Hz and 100 Hz, with 60 Hz being typical. In conjunction with a pulse width modulation module 418, each pixel in the pixel module can be operated to emit light in a pattern and with an intensity at least partially dependent on the image held in the image frame buffer 414.

In one embodiment, intensity can be separately controlled and adjusted by setting appropriate ramp times and pulse width for each LED pixel using logic and control module 420 and the pulse width modulation module 418. This allows staging of LED pixel activation to reduce power fluctuations, and to provide various pixel diagnostic functionality.

FIG. 5 illustrates a microcontroller assembly 500 for an LED pixel array. The assembly 500 can receive logic power via Vdd and Vss pins. An active matrix receives power for LED array control by multiple VLED and VCathode pins. A Serial Peripheral Interface (SPI) can provide full duplex mode communication using a master-slave architecture with a single master. The master device originates the frame for reading and writing. Multiple slave devices are supported through selection with individual slave select (SS) lines. Input pins can include a Master Output Slave Input (MOSI), a Master Input Slave Output (MISO), a chip select (CS), and a clock (CLK), all connected to the SPI interface.

In one embodiment, the SPI frame includes 2 stop bits (both “0”), 10 data bits, MSB first, 3 CRC bits (x³+x+1), a start 111b, and target 000b. Timing can be set per SafeSPI “in-frame” standards.

MOSI Field data can be as follows:

Frame 0: Header

Frame 1/2: Start Column Address [SCOL]

Frame 3/4: Start Row Address [SROW]

Frame 5/6: Number of Columns [NCOL]

Frame 7/8: Number of Rows [NROW]

Frame 9: Intensity pixel [SCOL, SROW]

Frame 10: Intensity pixel [SCOL+1, SROW]

Frame 9+NCOL: Intensity pixel [SCOL+NCOL, SROW]

Frame 9+NCOL+1: Intensity pixel [SCOL, SROW+1]

Frame 9+NCOL+NROW: Intensity pixel [SCOL+NCOL, SROW+NROW]

MISO Field data can include loopback of frame memory.

A field refresh rate of 60 Hz (60 full frames per second) is supported, as is a bit rate of at least 10 Mbps, typically 15-20 Mbps.
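The frame ordering above implies a simple index formula: each row of the addressed window carries NCOL+1 intensity entries (columns SCOL through SCOL+NCOL), so the row stride is NCOL+1; with that stride, the final intensity frame lands at 9 + NROW·(NCOL+1) + NCOL, so the listed "9+NCOL+NROW" appears to be shorthand. A minimal sketch of that mapping and of the stated x³+x+1 CRC, under the assumption that the CRC is ordinary MSB-first polynomial division over the data bits (the function names are illustrative):

```python
def intensity_frame_index(c, r, ncol):
    """MOSI frame number carrying the intensity of pixel
    [SCOL + c, SROW + r]. Each row holds NCOL + 1 entries
    (columns SCOL..SCOL+NCOL), matching frames 9, 10,
    9+NCOL, and 9+NCOL+1 in the listing above."""
    return 9 + r * (ncol + 1) + c

def crc3(data_bits):
    """3-bit CRC with generator polynomial x^3 + x + 1, computed by
    plain polynomial division over the MSB-first data bits (an
    assumption: the frame description only names the polynomial)."""
    bits = list(data_bits) + [0, 0, 0]  # append 3 zero bits
    poly = (1, 0, 1, 1)                 # x^3 + x + 1
    for i in range(len(bits) - 3):
        if bits[i]:
            for j in range(4):
                bits[i + j] ^= poly[j]
    return (bits[-3] << 2) | (bits[-2] << 1) | bits[-1]
```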

The SPI interface connects to an address generator, frame buffer, and a standby frame buffer. Pixels can have parameters set and signals or power modified (e.g. by power gating before input to the frame buffer, or after output from the frame buffer via pulse width modulation or power gating) by a command and control module. The SPI interface can be connected to an address generation module that in turn provides row and address information to the active matrix. The address generator module in turn can provide the frame buffer address to the frame buffer.

The command and control module can be externally controlled via an Inter-Integrated Circuit (I2C) serial bus. A clock (SCL) pin and a data (SDA) pin with 7-bit addressing are supported.

The command and control module includes a digital to analog converter (DAC) and two analog to digital converters (ADCs). These are respectively used to set Vbias for a connected active matrix, help determine maximum Vf, and determine system temperature. Also connected is an oscillator (OSC) to set the pulse width modulation oscillation (PWMOSC) frequency for the active matrix. A bypass line is also present to allow addressing of individual pixels or pixel blocks in the active matrix for diagnostic, calibration, or testing purposes.

In one embodiment, the command and control module can provide the following inputs and outputs:

Input to CMOS chip:

VBIAS: Sets voltage bias for LDO's.

GET_WORD[ . . . ]: Requests Output from CMOS.

TEST_M1: Run Pixel Test: LDO in bypass mode, sequentially addresses columns, then rows, outputs Vf, using internal 1 μA source.

Vf values output via SPI.

TEST_M2: Run Pixel Test: LDO in bypass mode, sequentially addresses columns, then rows, outputs Vf, using external I source.

Vf values output via SPI.

TEST_M3: LDO in bypass mode, addressing through I2C, using internal 1 μA source, Vf output via I2C.

TEST_M4: LDO in bypass mode, addressing through I2C, using external I source, Vf output via I2C.

BUFFER_SWAP: Swap to/from standby buffer.

COLUMN_NUM: Addresses a specific column.

ROW_NUM: Addresses a specific row.

Output from CMOS chip:

CW_PHIV_MIN, CW_PHIV_AVG, CW_PHIV_MAX: factory measured EOL global luminous flux data.

CW_VLED_MIN, CW_VLED_AVG, CW_VLED_MAX: factory measured EOL global forward voltage data.

CW_SERIALNO: die/CMOS combo serial number for traceability purposes.

TEMP_DIE: Value of Die Temperature.

VF: Value of Vf bus when being addressed with COLUMN_NUM and ROW_NUM.

BUFFER_STATUS: Indicates which buffer is selected.

Various calibration and testing methods for microcontroller assembly 500 are supported. During factory calibration, the Vf of all pixels can be measured. Maximum, minimum, and average Vf of the active area can be “burned” as a calibration frame. Maximum Vf and dVf/dT calibration frames can be used together with measured die temperature to determine the actual VLED dynamically. Typically, a VLED of between 3.0 V and 4.5 V is supported, with the actual value determined by a feedback loop to an external DC/DC converter such as described with respect to FIG. 3A.
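The dynamic VLED determination described above can be sketched as follows. The function name, linear temperature model, and calibration temperature are assumptions for illustration; the disclosure only states that maximum Vf and dVf/dT calibration frames are combined with the measured die temperature:

```python
def vled_setpoint(vf_max_cal, dvf_dt, die_temp, cal_temp=25.0, headroom=0.3):
    """Target VLED: the calibrated maximum forward voltage, adjusted
    linearly to the measured die temperature, plus the ~0.3 V headroom
    mentioned earlier, clamped to the stated 3.0-4.5 V VLED range."""
    vf_max = vf_max_cal + dvf_dt * (die_temp - cal_temp)
    return min(max(vf_max + headroom, 3.0), 4.5)
```

In the system of FIG. 3A, a value like this would be fed back to the external LED DC/DC converter to minimize headroom losses.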

FIGS. 6A and 6B respectively illustrate one embodiment of a pixel control circuit 600 and an associated timing diagram 610. Pixel control circuit 600 includes logic having row and column select, and a bypass signal. The PWM OSC input and data, along with output from the logic, are first fed into a generator and then into a PWM. The PWM in turn has a duty cycle that controls activation of a particular pixel. This is described in more detail with respect to the following description of a pixel control circuit 630 of FIG. 6C. During factory calibration, the Vf of all pixels can be measured at 1.0 μA and 1.0 mA using an external current source and the LDO bypass functionality.

This operation can be bypassed when the LED pixel is supported by a low dropout (LDO) linear regulator as illustrated in the circuit 600. During bypass, Vf can be measured either with an internal 1 μA current source or an external current source on VLED. Bypass can be done as a pixel-by-pixel operation using row and column select. Advantageously, this pixel bypass circuitry allows determination of whether a particular pixel is working correctly or whether any fault situation has occurred.

As illustrated in FIG. 6B, image data and pulse width modulation oscillation clock data can be received by a pulse width modulator. Based on input from a logic module, gate timing, including pulse start, ramp time, and pulse duration/width (duty cycle), can be set on a per pixel basis. For example, the duty cycle (δ) can be loaded from the frame buffer on a “read”. An 8-bit δ resolution can be supported. In one embodiment, the pulse leading-edge phase shift (φ) can be set differently for each pixel.
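The per-pixel duty cycle and leading-edge phase shift can be modeled against a shared 8-bit PWM counter; a minimal sketch under the assumption of a free-running 0-255 counter (the function and parameter names are illustrative):

```python
def pwm_gate(counter, duty, phase):
    """Gate state for one pixel against a shared 8-bit PWM counter.
    `duty` (0-255) is the on-time loaded from the frame buffer on a
    read; `phase` shifts the leading edge per pixel so turn-on can be
    staggered across the array to reduce power fluctuations."""
    return (counter - phase) % 256 < duty
```

Giving adjacent pixels different `phase` values spreads their turn-on edges in time, which is the staging of LED pixel activation described with respect to FIG. 4.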

FIG. 6C illustrates a pixel control circuit 630 that does not support a bypass circuit. Pixel control circuit 630 includes logic having row and column select. Output from the logic is first fed into a generator and then into a PWM. The PWM in turn has a duty cycle that controls activation of a particular pixel using additional circuitry in the following manner. Three switches, K1 through K3, are controlled by signals received from a central control block outside the pixels. Switch K3 is the current source, or LDO, and its current is controlled by Vbias. K2 is the PWM switch, which turns on and off based on the PWM duty cycle determined by the image data. In this example, K2 and K3 are P-channel MOSFETs, but they can also be switches of any other suitable form. In FIG. 6C, the PWM signal is connected to the gate of K2, and the drain node of K2 is connected to the gate of K3. Consequently, when the PWM signal is high, K2 is off and K3 is on, so the LED is on and the current is determined by the Vbias voltage. When PWM is low, K2 is on, pulling the K3 gate high and turning it off, so the LED is off.

Switch K1 is turned on and off based on the row select and column select signals. K1 will be turned on only when the row and column of a specific pixel are selected; otherwise, it will stay off. When K1 is turned on, its impedance becomes low, and the LED forward voltage at the anode, or Va node, of that pixel will appear on the Vf bus. Since the impedance of the Vf bus is much higher than that of K1 in the turned-on state, the Vf voltage will be equal to the Va node voltage. When a fault situation occurs in the circuit or the LED, the LED forward voltage may deviate from its normal value. Therefore, the Vf voltage can be used to determine if the pixel is working correctly or if any fault situation has occurred, without needing specific pixel bypass circuitry such as disclosed with respect to FIG. 6A. In this way, real-time or “on-the-fly” detection can be realized such that pixel status can be monitored and reported during operation.
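The fault decision described above, comparing the sampled Vf bus voltage against the expected forward voltage, can be sketched as follows. The tolerance value and the fault labels are illustrative assumptions, not values from the disclosure:

```python
def classify_pixel(vf_measured, vf_nominal, tolerance=0.3):
    """Classify one pixel from the voltage sampled on the shared Vf
    bus while its K1 switch is closed. A shorted pixel pulls Va well
    below nominal; an open pixel pushes it well above."""
    if vf_measured < vf_nominal - tolerance:
        return "fault: low (possible short)"
    if vf_measured > vf_nominal + tolerance:
        return "fault: high (possible open)"
    return "ok"
```

In practice the nominal value could come from the factory-measured Vf calibration frame, adjusted for die temperature as described with respect to FIG. 5.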

Because the Vf bus is a shared node for all pixels, the K1 switch can be turned on for only one pixel at a time. The best time to detect fault situations would be when a pixel is turned on by the PWM. Preferably, the PWM values defined by the application image can be used for testing, but special test images may also be an option. When a pixel is turned off, the detection may still be done, although at a more limited level than during turn-on.

The K1 switching control with respect to the PWM can be flexible. Depending on detection requirements, the K1 frequency may be higher or lower than the PWM frequency. Naturally, the higher the K1 frequency, the faster the detection, i.e., the more pixels can be tested within a given time range. For example, if the PWM frequency is 500 Hz, only one pixel can be tested during one PWM period, or 2 ms, with a K1 frequency of 500 Hz, whereas ten pixels might be tested during 2 ms with a K1 frequency of 5000 Hz. Moreover, the two frequencies may be synchronous or asynchronous.
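Since only one K1 switch may be closed at a time, the throughput arithmetic in the example above reduces to a one-line relation; a sketch with an illustrative name:

```python
def pixels_tested(k1_freq_hz, window_s):
    """Number of pixels whose Vf can be sampled in a time window,
    given one pixel per K1 period: e.g. one pixel in a 2 ms PWM
    period at a 500 Hz K1 frequency, ten pixels at 5000 Hz."""
    return int(k1_freq_hz * window_s)
```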

FIG. 6D illustrates an example control scheme for pixel control circuitry. In this embodiment, the K1 frequency is set close to the PWM frequency with turn-on synchronized. Note that while this example only shows three pixels, it can be extended in a similar manner to a whole matrix array of pixels. For each pixel, the diagram shows the PWM signal voltage, the Va node voltage, and the K1 control voltage. For a normal pixel, the Va node voltage is high when the PWM signal is high and low when the PWM is low. Likewise, K1 is turned on when the K1 control voltage is high, and off when the control voltage is low. Depending on circuit design, the control phase of PWM and K1 can be opposite, i.e., turning on the respective switch when low and turning it off when high.

Operation of pixel 1 proceeds as follows:

t1˜t4: PWM voltage is high. The pixel is turned on and the Va node is high. In the meantime, the K1 control voltage is also high at t1, synchronized with the turn-on of the PWM. The K1 control voltage remains low for pixels 2 and 3. K1 of pixel 1 is turned on, and the Va node voltage appears on the Vf bus, so the Vf bus voltage is equal to the Va voltage of pixel 1 at this time. The turning-off moment of K1, t3, is earlier than that of the PWM, t4, so that the pixel test can complete before the pixel is turned off.

Operation of pixel 2 proceeds as follows:

At t2: PWM is high and the pixel is turned on. The slight lag between t1 and t2 is the phase shift. At this moment, K1 of pixel 1 is still on, so the K1 control voltage of pixel 2 is low and pixel 2 is not tested for this PWM cycle.

t5˜t7: PWM is high and the pixel is turned on for a second time. The K1 control voltage is synchronized with pixel 2 at the turn-on moment t5 and stays on until t6. The K1 control voltage remains low for the other two pixels. Thus, the Vf bus voltage reflects the Va node voltage of pixel 2. In this example, the PWM duty cycle of pixel 2, which is the conduction or on time as a percentage of the period or cycle time, is larger than that of pixel 1. The Va voltage of pixel 2 is lower than that of pixel 1 when turned on.

Operation of pixel 3 proceeds as follows:

PWM is low all the time. The pixel stays off and Va node voltage is low.

t8˜t9: K1 control voltage is high for pixel 3 and low for the other two pixels. Consequently, the Vf bus voltage represents the low Va voltage of pixel 3 at this time.

FIG. 7 illustrates in more detail a block diagram 700 of an active matrix array supporting LDO bypass. Row and column select are used to address individual pixels, which are supplied with a data line, a bypass line, a PWMOSC line, a Vbias line, and a Vf line. Timing and activation of a gate and pulse width modulator oscillator (PWMOSC) are illustrated with respect to FIG. 6B. As will be appreciated, in certain embodiments LDO bypass is not required, and pixel testing can proceed using circuitry and/or control schemes described with respect to FIGS. 6C and 6D.

Many modifications and other embodiments of the invention will come to the mind of one skilled in the art having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is understood that the invention is not to be limited to the specific embodiments disclosed, and that modifications and embodiments are intended to be included within the scope of the appended claims. It is also understood that other embodiments of this invention may be practiced in the absence of an element/step not specifically disclosed herein.

Claims

1. A complementary metal oxide semiconductor (CMOS) backplane comprising:

a light emitting diode (LED) controller configured to be coupled with an LED pixel array, the LED controller comprising a plurality of switches configured to drive individually selected pixels of the LED pixel array; and
a forward bias bus configured to be coupled with the LED pixel array such that the forward bias bus is shared by all of the pixels of the LED pixel array,
the LED controller configured to, for each individually selected pixel during at least one of procedures including diagnostics, calibration, and testing of the LED pixel array: detect a forward bias voltage carried by the forward bias bus as either one of voltage detection including a normal forward bias voltage and a deviation from the normal forward bias voltage, in response to detection of the normal forward bias voltage, determine that the individually selected pixel is working correctly, and in response to detection of the deviation from the normal forward bias voltage, determine that a fault situation has occurred in the individually selected pixel.

2. The CMOS backplane of claim 1, wherein the plurality of switches comprises:

a pulse width modulation (PWM) switch having a control terminal coupled with an output of a PWM generator; and
a current switch having: a control terminal connected with a bias voltage and with a terminal of the PWM switch other than the control terminal of the PWM switch, and an output terminal configured to drive the individually selected pixels of the LED pixel array such that the current switch is in a different state than the PWM switch.

3. The CMOS backplane of claim 1, wherein the LED controller is configured to individually drive thousands of pixels of the LED pixel array at refresh rates faster than about 60 Hz.

4. The CMOS backplane of claim 1, wherein the LED controller is configured to drive the individually selected pixels during the at least one of diagnostics, calibration, and testing of the LED pixel array using predetermined test images to determine fault situations in the individually selected pixels associated with the test images.

5. The CMOS backplane of claim 1, wherein the LED controller is configured to drive the individually selected pixels during the at least one of diagnostics, calibration, and testing of the LED pixel array at a slower rate than a rate of coupling the forward bias bus with the LED pixel array to enable forward bias voltages associated with multiple individual pixels to be determined using the forward bias bus during a predetermined time period.

6. The CMOS backplane of claim 1, wherein the LED controller is configured to drive the individually selected pixels during the at least one of diagnostics, calibration, and testing of the LED pixel array synchronously with coupling the forward bias bus with the LED pixel array.

7. The CMOS backplane of claim 1, wherein the LED controller is configured to drive the individually selected pixels using a same polarity as a polarity used to couple the forward bias voltage on the forward bias bus.

8. The CMOS backplane of claim 1, wherein the LED controller is configured to drive the individually selected pixels to limit the forward bias voltage on the forward bias bus to a single pixel.

9. The CMOS backplane of claim 1, wherein the CMOS backplane is configured to be installed in a vehicle headlamp.

10. A light emitting diode (LED) system comprising:

an LED pixel array comprising pixels; and
a complementary metal oxide semiconductor (CMOS) backplane including: a light emitting diode (LED) controller configured to be coupled with the LED pixel array, the LED controller having a plurality of switches configured to drive pixels of the LED pixel array during at least one of procedures including diagnostics, calibration, and testing of the LED pixel array; and a forward bias bus configured to be coupled with the LED pixel array such that the forward bias bus is shared by all of the pixels of the LED pixel array,
the LED controller configured to, for each individually selected pixel or group of pixels during the at least one of the procedures: detect a forward bias voltage carried by the forward bias bus as either a normal forward bias voltage or a deviation from the normal forward bias voltage, in response to detection of the normal forward bias voltage, determine that the individually selected pixel or group of pixels is working correctly, and in response to detection of the deviation from the normal forward bias voltage, determine that a fault situation has occurred in the individually selected pixel or group of pixels.

11. The LED system of claim 10, wherein the plurality of switches comprises:

a pulse width modulation (PWM) switch having a control terminal coupled with an output of a PWM generator; and
a current switch having: a control terminal connected with a bias voltage and with a terminal of the PWM switch other than the control terminal of the PWM switch, and an output terminal configured to drive an individually selected pixel or group of pixels of the LED pixel array such that the current switch is in a different state than the PWM switch.

12. The LED system of claim 10, wherein:

the LED pixel array has thousands of pixels, and
the LED controller is configured to drive individually selected pixels or groups of pixels of the LED pixel array at refresh rates faster than about 60 Hz.

13. The LED system of claim 10, wherein the LED controller is configured to drive individually selected pixels or groups of pixels of the LED pixel array during the at least one of diagnostics, calibration, and testing of the LED pixel array using predetermined test images to determine fault situations in the individually selected pixels associated with the test images.

14. The LED system of claim 10, wherein the LED controller is configured to drive individually selected pixels or groups of pixels of the LED pixel array during the at least one of diagnostics, calibration, and testing of the LED pixel array at a slower rate than a rate of coupling the forward bias bus with the LED pixel array to enable voltages associated with multiple individual pixels to be determined using the forward bias bus during a predetermined time period.

15. The LED system of claim 10, wherein the LED controller is configured to drive individually selected pixels or groups of pixels of the LED pixel array during the at least one of diagnostics, calibration, and testing of the LED pixel array synchronously with coupling the forward bias bus with the LED pixel array.

16. A method of testing a light emitting diode (LED) pixel array, the method comprising:

driving at least one of an individually selected pixel or group of pixels of the LED pixel array;
detecting, when the at least one of the individually selected pixel or group of pixels is driven during at least one of procedures including diagnostics, calibration, and testing of the LED pixel array, a forward bias voltage carried by a forward bias bus connected with the LED pixel array as either a normal forward bias voltage or a deviation from the normal forward bias voltage;
in response to detection of the normal forward bias voltage, determining that the individually selected pixel or group of pixels is working correctly; and
in response to detection of the deviation from the normal forward bias voltage, determining that a fault situation has occurred in the individually selected pixel or group of pixels.

17. The method of claim 16, further comprising driving thousands of individually selected pixels of the LED pixel array at refresh rates faster than about 60 Hz.

18. The method of claim 16, further comprising driving the at least one of the individually selected pixel or group of pixels of the LED pixel array during the at least one of diagnostics, calibration, and testing of the LED pixel array using predetermined test images to determine fault situations in the at least one of the individually selected pixel or group of pixels associated with the test images.

Referenced Cited
U.S. Patent Documents
5184114 February 2, 1993 Brown
6611610 August 26, 2003 Stam et al.
6728393 April 27, 2004 Stam et al.
6933956 August 23, 2005 Sato et al.
7432967 October 7, 2008 Bechtel et al.
7526103 April 28, 2009 Schofield et al.
7542861 June 2, 2009 You et al.
8334662 December 18, 2012 Jin et al.
8564505 October 22, 2013 Smith
9318069 April 19, 2016 Nambi et al.
9324264 April 26, 2016 Jeong et al.
9849827 December 26, 2017 Uchida et al.
10148938 December 4, 2018 Nagasaki et al.
10219348 February 26, 2019 Chung
10651357 May 12, 2020 Andrews
20010026646 October 4, 2001 Morita et al.
20020142504 October 3, 2002 Feldman et al.
20020171617 November 21, 2002 Fuller
20020186192 December 12, 2002 Maruoka et al.
20030038983 February 27, 2003 Tanabe et al.
20030058262 March 27, 2003 Sato et al.
20030222893 December 4, 2003 Koester et al.
20040095184 May 20, 2004 Oka et al.
20040114379 June 17, 2004 Miller et al.
20040119667 June 24, 2004 Yang
20040129887 July 8, 2004 Vydrin et al.
20040252874 December 16, 2004 Yamazaki
20050029872 February 10, 2005 Ehrman et al.
20060007059 January 12, 2006 Bell
20060017688 January 26, 2006 Hohmann et al.
20060237636 October 26, 2006 Lyons et al.
20070242337 October 18, 2007 Bradley
20080129206 June 5, 2008 Stam et al.
20090040152 February 12, 2009 Scheibe
20090040775 February 12, 2009 Scheibe
20090322429 December 31, 2009 Ivanov et al.
20100073358 March 25, 2010 Ozaki
20100097525 April 22, 2010 Mino
20100259182 October 14, 2010 Man et al.
20100301777 December 2, 2010 Kraemer
20110012891 January 20, 2011 Cheng et al.
20110062872 March 17, 2011 Jin et al.
20110234108 September 29, 2011 Huang
20120286135 November 15, 2012 Gong et al.
20120287144 November 15, 2012 Gandhi et al.
20120306370 December 6, 2012 Van De Ven et al.
20130082604 April 4, 2013 Williams et al.
20140267329 September 18, 2014 Lee et al.
20150138212 May 21, 2015 Bae et al.
20150151671 June 4, 2015 Refior et al.
20150186098 July 2, 2015 Hall
20150204512 July 23, 2015 Chen et al.
20160081028 March 17, 2016 Chang et al.
20160081148 March 17, 2016 Liang et al.
20160104418 April 14, 2016 Keum et al.
20160155406 June 2, 2016 Lee
20160275919 September 22, 2016 Lawrence et al.
20160302270 October 13, 2016 Wang
20160335957 November 17, 2016 Fu et al.
20160345392 November 24, 2016 Scenini et al.
20170243532 August 24, 2017 Huang et al.
20180014486 January 18, 2018 Creechley
20180074199 March 15, 2018 Lin et al.
20180079352 March 22, 2018 Dalal
20180308418 October 25, 2018 Scenini
20190013307 January 10, 2019 Wu et al.
20190057643 February 21, 2019 Bae et al.
20190132917 May 2, 2019 Veenstra et al.
20190189879 June 20, 2019 Tandon et al.
20190335553 October 31, 2019 Ahmed
20200079278 March 12, 2020 Bonne et al.
20200079280 March 12, 2020 Bonne et al.
20200082503 March 12, 2020 Bonne et al.
20200082749 March 12, 2020 Bonne
20200084848 March 12, 2020 Bonne et al.
20200084853 March 12, 2020 Bonne et al.
20200084854 March 12, 2020 Bonne et al.
20200084868 March 12, 2020 Bonne
20200128640 April 23, 2020 Van Voorst Vader
Foreign Patent Documents
204077513 January 2015 CN
102015016375 June 2017 DE
2626851 August 2013 EP
2000187464 July 2000 JP
2002247614 August 2002 JP
2006-284859 October 2006 JP
2006308633 November 2006 JP
2007017479 January 2007 JP
2009530681 August 2009 JP
2009-223145 October 2009 JP
2013-506873 February 2013 JP
2013161084 August 2013 JP
2016189003 November 2016 JP
20110028204 March 2011 KR
101846329 April 2018 KR
201110811 March 2011 TW
WO-2009140963 November 2009 WO
WO-2013066017 May 2013 WO
WO-2014062425 April 2014 WO
WO-2020053716 March 2020 WO
WO-2020053717 March 2020 WO
WO-2020053718 March 2020 WO
WO-2020053719 March 2020 WO
Other references
  • “U.S. Appl. No. 16/456,835, Non Final Office Action dated Jun. 11, 2020”, 9 pgs.
  • “U.S. Appl. No. 16/456,835, Notice of Allowance dated Jan. 27, 2020”, 9 pgs.
  • “U.S. Appl. No. 16/456,835, Notice of Allowance dated Oct. 8, 2020”, 9 pgs.
  • “U.S. Appl. No. 16/456,844, Non Final Office Action dated Feb. 5, 2020”, 8 pgs.
  • “U.S. Appl. No. 16/456,844, Notice of Allowance dated Aug. 6, 2020”, 5 pgs.
  • “U.S. Appl. No. 16/456,849, Non Final Office Action dated Oct. 29, 2020”, 11 pgs.
  • “U.S. Appl. No. 16/456,858, Non Final Office Action dated Feb. 20, 2020”, 11 pgs.
  • “U.S. Appl. No. 16/456,858, Notice of Allowance dated Aug. 28, 2020”, 6 pgs.
  • “U.S. Appl. No. 16/456,862, Final Office Action dated Sep. 14, 2020”, 15 pgs.
  • “U.S. Appl. No. 16/456,862, Non Final Office Action dated Mar. 6, 2020”, 15 pgs.
  • “U.S. Appl. No. 16/456,868, Final Office Action dated Oct. 15, 2020”, 12 pgs.
  • “U.S. Appl. No. 16/456,868, Non Final Office Action dated Apr. 1, 2020”, 9 pgs.
  • “U.S. Appl. No. 16/456,868, Notice of Allowance dated Jan. 22, 2021”, 8 pgs.
  • “U.S. Appl. No. 16/456,868, Notice of Allowance dated Apr. 26, 2021”, 8 pgs.
  • “U.S. Appl. No. 16/456,868, Response filed Jul. 1, 2020 to Non Final Office Action dated Apr. 1, 2020”, 7 pgs.
  • “U.S. Appl. No. 16/456,868, Response filed Dec. 15, 2020 to Final Office Action dated Oct. 15, 2020”, 9 pgs.
  • “U.S. Appl. No. 16/456,874, Non Final Office Action dated Aug. 5, 2020”, 7 pgs.
  • “U.S. Appl. No. 16/456,874, Notice of Allowance dated Jan. 15, 2021”, 5 pgs.
  • “U.S. Appl. No. 16/456,874, Response filed Nov. 5, 2020 to Non Final Office Action dated Aug. 5, 2020”, 10 pgs.
  • “European Application Serial No. 18201763.2, European Search Report dated Feb. 8, 2019”, 1 pg.
  • “European Application Serial No. 18202319.2, Extended European Search Report dated Feb. 12, 2019”, 10 pgs.
  • “European Application Serial No. 18203445.4, Partial European Search Report dated Apr. 12, 2019”, 14 pgs.
  • “European Application Serial No. 18203783.8, Extended European Search Report dated Mar. 14, 2019”, 10 pgs.
  • “International Application Serial No. PCT/IB2019/057504, International Search Report dated Nov. 18, 2019”, 3 pgs.
  • “International Application Serial No. PCT/IB2019/057504, Written Opinion dated Nov. 18, 2019”, 5 pgs.
  • “International Application Serial No. PCT/IB2019/057506, International Search Report dated Oct. 31, 2019”, 4 pgs.
  • “International Application Serial No. PCT/IB2019/057506, Written Opinion dated Oct. 31, 2019”, 10 pgs.
  • “International Application Serial No. PCT/IB2019/057507, International Search Report dated Apr. 24, 2020”, 6 pgs.
  • “International Application Serial No. PCT/IB2019/057507, Written Opinion dated Apr. 24, 2020”, 24 pgs.
  • “International Application Serial No. PCT/IB2019/057508, International Preliminary Report on Patentability dated Mar. 25, 2021”, 12 pgs.
  • “International Application Serial No. PCT/IB2019/057508, International Search Report dated Nov. 12, 2019”, 4 pgs.
  • “International Application Serial No. PCT/IB2019/057508, Written Opinion dated Nov. 12, 2019”, 10 pgs.
  • “Taiwanese Application Serial No. 108123755, Notice dated Aug. 26, 2020”, (w/ English Translation), 6 pgs.
  • “Taiwanese Application Serial No. 108123755, Office Action dated May 13, 2020”, 11 pgs.
  • “Taiwanese Application Serial No. 108123756, Office Action dated May 25, 2020”, 29 pgs.
  • “Taiwanese Application Serial No. 108123758, Office Action dated Feb. 3, 2021”, (W/ English Translation), 7 pgs.
  • “Taiwanese Application Serial No. 108123758, Office Action dated Jun. 29, 2020”, (w/English Translation), 11 pgs.
  • “Taiwanese Application Serial No. 108123758, Response filed Sep. 29, 2020 to Office Action dated Jun. 29, 2020”, (W/ English Translation), 23 pgs.
  • “Japanese Application Serial No. 2021-513231, Final Notification of Reasons for Refusal dated Feb. 1, 2022”, (w/ English Translation), 8 pgs.
  • “Japanese Application Serial No. 2021-513231, Response filed Nov. 22, 2021 to Notification of Reasons for Refusal dated Aug. 24, 2021”, (w/ English Translation of Claims), 7 pgs.
  • “Korean Application Serial No. 10-2021-7010600, Response filed Dec. 15, 2021 to Notice of Preliminary Rejection dated Oct. 18, 2021”, (w/ English Translation of Claims), 20 pgs.
  • “Korean Application Serial No. 10-2021-7010600, Notice of Preliminary Rejection dated Oct. 18, 2021”, (w/ English Translation), 16 pgs.
  • “Japanese Application Serial No. 2021-513231, Notification of Reasons for Refusal dated Aug. 24, 2021”, (w/ English Translation), 9 pgs.
  • “European Application Serial No. 19780398.4, Response filed Oct. 19, 2021 to Communication pursuant to Rules 161(1) and 162 EPC”, 10 pgs.
  • “Japanese Application Serial No. 2021-513231, Examiners Decision of Final Refusal dated Jun. 7, 2022”, (w/ English Translation), 16 pgs.
  • “Taiwanese Application Serial No. 108123758, Response filed Apr. 16, 2021 to Office Action dated Feb. 3, 2021”, (w/ English Translation of Claims), 8 pgs.
  • “Japanese Application Serial No. 2021-513231, Response Filed Apr. 28, 2022 to Final Notification of Reasons for Refusal dated Feb. 1, 2022”, W/ English Claims, 7 pgs.
  • “European Application Serial No. 19780398.4, Communication Pursuant to Article 94(3) EPC dated Aug. 4, 2022”, 6 pgs.
  • “Japanese Application Serial No. 2021-513231, Preliminary Examination Report dated Jan. 10, 2023”, (w/ English Translation), 3 pgs.
  • “European Application Serial No. 19780398.4, Response Filed Dec. 1, 2022 to Communication Pursuant to Article 94(3) EPC dated Aug. 4, 2022”, 11 pgs.
Patent History
Patent number: 11615733
Type: Grant
Filed: May 6, 2021
Date of Patent: Mar 28, 2023
Patent Publication Number: 20210256896
Assignee: Lumileds LLC (San Jose, CA)
Inventor: Ronald Johannes Bonne (Plainfield, IL)
Primary Examiner: Sejoon Ahn
Application Number: 17/313,581
Classifications
Current U.S. Class: 315/185.0R
International Classification: G09G 3/20 (20060101); G09G 3/32 (20160101); H05B 45/10 (20200101); H05B 39/04 (20060101);