Optimal threshold method for segmentation of LED pixel lattices under area constraints

1 Introduction

As a multimedia display terminal, the Light Emitting Diode (LED) display has been widely used in stage background layouts, live broadcasts of large-scale sports events, outdoor advertising, and other settings [1-2]. Brightness non-uniformity, that is, the inconsistency of luminous intensity between the pixels of an LED display, is an important indicator of display quality. If the brightness of the display is not uniform, it appears mottled or mosaic-like, which degrades the display quality. Because LEDs are active light-emitting semiconductor devices, the light-emitting characteristics of individual LEDs are discrete and their attenuation characteristics differ, so even devices from the same batch exhibit brightness inconsistency after a period of use. To solve this problem, various non-uniformity correction techniques have been proposed, of which the most commonly used is brightness correction based on CCD (Charge-Coupled Device) data acquisition [3-5]. In this method, a CCD camera collects the brightness information of the LED display, computer image processing then generates a correction coefficient for each LED, and the coefficients are fed back to the control system to achieve the correction.

The brightness correction method based on CCD acquisition requires segmenting the brightness information of each LED pixel from the acquired CCD image. Two approaches are generally adopted for dividing the LED pixels: the threshold method and the edge method. The threshold method offers faster processing, but it usually requires the image to satisfy a certain mathematical model and the threshold to be selected accurately; otherwise mottling may appear or the segmentation of the LED pixel regions may be unclear. The edge method can extract the edge of each LED more accurately, but its disadvantage is that the LED pixels must be kept at a certain spacing during acquisition, otherwise the extracted edges are seriously degraded; in addition, edge linking and filling algorithms are required after segmentation to eliminate edge breaks. Besides these two approaches, segmentation methods based on the geometric characteristics of the LED pixel lattice are also used when the LED pixels are arranged regularly, for example using horizontal and vertical projection transformations of the image to obtain the distribution of the LED pixels and combining this with prior information about LED pixel geometry to achieve segmentation [6]. However, such methods require a regular arrangement with uniform spacing and approximately equal pixel sizes; otherwise wrong segmentation results are produced, which is a significant limitation.

This paper proposes an LED pixel segmentation method that introduces an area-constraint criterion and uses an adaptive optimal threshold method to segment each LED pixel. First, the gray-level histogram is generated; then the optimal threshold is computed using the optimal threshold algorithm combined with the area constraint; finally, the captured image is segmented by thresholding. In the subsequent processing, the gray values of the photosensitive pixels in each target region are accumulated to obtain the relative brightness of the corresponding LED pixel, and the brightness non-uniformity correction of the LED display is performed according to the obtained brightness information. A minimal code sketch of this processing chain is given below.
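As a rough illustration only, the following sketch assumes the photosensitive image is available as a 2-D NumPy array of gray levels. The function names, the iterative mean-of-means threshold update, and the simple area-constraint handling are illustrative assumptions standing in for the algorithm developed in Section 3, not its exact implementation.

import numpy as np
from scipy import ndimage


def area_constrained_threshold(image, area_max, max_iter=100, eps=0.5):
    # Generic iterative (mean-of-means) threshold update; a stand-in for the
    # adaptive optimal threshold algorithm, not its exact implementation.
    t = float(image.mean())
    for _ in range(max_iter):
        fg, bg = image[image > t], image[image <= t]
        if fg.size == 0 or bg.size == 0:
            break
        t_new = 0.5 * (fg.mean() + bg.mean())
        if abs(t_new - t) < eps:
            t = t_new
            break
        t = t_new

    # Area constraint (illustrative): raise the threshold while any foreground
    # region is larger than one LED spot should be, i.e. adjacent spots merged.
    while True:
        labels, n = ndimage.label(image > t)
        if n == 0:
            break
        areas = np.bincount(labels.ravel())[1:]
        if areas.max() <= area_max:
            break
        t += 1.0
    return t


def led_relative_brightness(image, area_min=20, area_max=400):
    # Segment the LED pixel lattice and accumulate gray values per region.
    t = area_constrained_threshold(image, area_max)
    binary = image > t                     # threshold segmentation, formula (1)
    labels, n = ndimage.label(binary)      # one connected region per LED pixel
    areas = np.bincount(labels.ravel())[1:]
    sums = ndimage.sum(image.astype(np.float64), labels,
                       index=np.arange(1, n + 1))
    # Keep regions whose area lies in the expected per-LED range (drops noise).
    keep = (areas >= area_min) & (areas <= area_max)
    return sums[keep]                      # relative brightness of each LED pixel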

Experiments show that the proposed algorithm segments each LED pixel accurately, which provides a foundation for subsequent LED brightness correction and recognition.

2 LED optical imaging model

A CCD is a semiconductor device that converts an optical image into a digital signal. Each tiny photosensitive element implanted on the CCD is called a photosensitive pixel, and the more photosensitive pixels a CCD contains, the higher the resolution it provides. In the photographic image acquired by the CCD image sensor, the value of each photosensitive pixel represents the brightness information at the corresponding location. The photosensitive image therefore contains information such as the spatial position, luminous shape, and luminous intensity of each LED pixel [5].

LEDs are non-ideal, approximately cosine-distributed light-emitting devices whose light intensity distribution obeys a near-Lambertian distribution [6-7], so the image of a single LED on the CCD is approximately circular. One pixel of the LED display corresponds to several CCD photosensitive pixels, and the more photosensitive pixels correspond to each LED pixel, the better the LED pixel lattice can be divided. The output signal intensity of a CCD photosensitive pixel is generally linearly related to the brightness of the received optical signal, so the gray level of a pixel in the photosensitive image can be used to characterize the brightness of the optical signal received by the corresponding CCD photosensitive pixel. Figure 1 shows an image of LED pixels captured by a CCD camera.

It can be seen from Figure 1 that, in the acquired image, the shape of a single LED pixel is approximately circular, the distributions of gray values of the photosensitive pixels corresponding to different LED pixels are approximately the same, and the LED pixels with higher gray values are approximately evenly distributed over a background with lower gray values. Therefore, as long as the threshold is determined, the LED pixels can be well segmented by the threshold method.

3 Optimal threshold method for segmenting the LED pixel lattice

3.1 Threshold segmentation method

The threshold method is the most common segmentation method based on parallel direct detection of regions [8]. Its basic principle is to select a threshold T; all points where f(x, y) > T are called object points, and the remaining points are called background points. The threshold method can be expressed by formula (1), where g(x, y) is the segmented binary image, f(x, y) is the photosensitive image, and T is the threshold.
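With these definitions, formula (1) is the standard binarization rule:

g(x, y) = 1, if f(x, y) > T
g(x, y) = 0, if f(x, y) <= T        (1)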

When applying the threshold method to segment a grayscale image, the image is generally required to satisfy a certain model: the image is assumed to be composed of a target and a background, each with a single-peaked gray-level distribution; the correlation of gray values between adjacent pixels within the target or within the background is very high, while pixels on either side of the boundary between target and background differ greatly in gray value. Images that satisfy this model can generally be segmented well by the threshold method.

From the above analysis of the image acquired by the CCD, the image of a single LED pixel is approximately circular, and the closer a photosensitive pixel is to the center, the higher its gray value [9-10]; therefore, the acquired LED pixel image satisfies the above model well. Taking a single LED pixel as the target and the black region as the background, the segmented image can be obtained with the threshold method as soon as the threshold parameter is determined.

3.2 Optimal threshold algorithm

When the gray values of the target and the background in the acquired image are interlaced, they cannot be separated by a single global threshold. In this case it is usually desirable to minimize the probability of mis-segmentation, and selecting the optimal threshold is a commonly used approach.

Assume that an image contains only two principal gray-level regions. Treating the gray values as random quantities, the histogram of the image can be regarded as an estimate of their probability density function p(z) [11-12]. This overall density function can be viewed as a mixture of two component densities: the density p1(z) of the background and the density p2(z) of the target. Thus, the mixed probability density function describing the overall gray-level variation can be expressed in the form of equation (2), where P1 is the probability that a background pixel occurs and P2 is the probability that a target pixel occurs.
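Written out with the quantities defined above, the mixture density of equation (2) has the standard two-component form:

p(z) = P1 p1(z) + P2 p2(z),  with P1 + P2 = 1        (2)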
