
Microprocessors and Microsystems 71 (2019) 102874

Contents lists available at ScienceDirect

Microprocessors and Microsystems

journal homepage: www.elsevier.com/locate/micpro

A robust, real-time and calibration-free lane departure warning system

Islam Gamal a, Abdulrahman Badawy a, Awab M.W. Al-Habal a, Mohammed E.K. Adawy a, Keroles K. Khalil b, Magdy A. El-Moursy b,∗, Ahmed Khattab a

a Electronics and Electrical Communications Department, Faculty of Engineering, Cairo University, Giza, Egypt
b Mentor, A Siemens Business, Cairo, Egypt

Article info

Article history:

Received 8 October 2018

Revised 22 April 2019

Accepted 19 August 2019

Available online 21 August 2019

Keywords:

Driver assistance system (DAS)

Lane departure warning system (LDWS)

Edge drawing (ED)

Region of interest (ROI)

Line segment detection (LSD)

Abstract

A real-time and calibration-free lane departure warning system algorithm is proposed. The pre-processing stage of the lane departure algorithm is carried out using a Gaussian pyramid to smooth the image and reduce its dimensions, which decreases the unnecessary details in the image. A lane detection stage is then developed based on the Edge Drawing Lines (EDLines) algorithm, a real-time line segment detector with false detection control. A reference-counting technique is used to track the lane boundaries and predict missing ones. Experimental results show that the proposed algorithm has an accuracy of 99.36% and an average processing rate of 80 frames per second (fps). The proposed algorithm is efficient enough to be used in the self-driving systems of Original Equipment Manufacturer (OEM) cars.

© 2019 Published by Elsevier B.V.

1. Introduction

Population growth and the rapid increase in the number of vehicles have led to more traffic accidents every year, which has become a serious problem. According to the Association for Safe International Road Travel (ASIRT), about 1.3 million people die and about 20–50 million are injured or disabled annually due to traffic accidents [1]. It has been estimated that deaths will reach 2.4 million people annually by 2030 unless immediate action is taken.

There are a number of reasons causing traffic accidents, ranging from driver behavior, mechanical failures and environmental conditions to road design. Unintended lane departures, which are caused by drivers, rank fourth among these reasons. According to the National Highway Traffic Safety Administration (NHTSA), 37% of deaths in traffic accidents in the USA are caused by lane departures [2]. This has prompted the development of many Driver Assistance Systems (DASs), which provide the driver with essential information about the surroundings and prevent the driver from making unintended mistakes [3].

A Lane Departure Warning System (LDWS) is a mechanism designed to warn the driver when the vehicle begins to move out of its lane unless a turn signal is on in that direction. This system can be implemented using two different technologies: machine vision (MV) or GPS. GPS uses high-resolution map databases together with its highly accurate positioning ability. On the other hand, MV uses single or multiple cameras with image processing algorithms to detect lanes on the road. Unlike GPS, MV uses the existing infrastructure and can be adapted easily to road design changes. Therefore, most of the techniques proposed in the literature use MV technology to implement the LDWS [3]. These implementations are mainly based on inverse perspective mapping (IPM) to ease lane detection by obtaining a bird's-eye view of the road in front of the car. IPM has a high computational time, which affects the real-time performance of the system. It is also parameter-based and requires camera calibration for every different type of car, which makes the system unportable [4]. A new MV-based algorithm implementing a non-IPM LDWS with high efficiency and real-time performance is proposed in this paper, as shown in Fig. 1. It is a real-time and calibration-free LDWS (RTCFLDWS) algorithm.

The remainder of this paper is organized as follows. In Section 2, a brief overview of related previous work is presented. In Section 3, the proposed algorithm is introduced and described in detail. In Section 4, experimental results for the proposed algorithm are provided. Conclusions are drawn in Section 5.

∗ Corresponding author.
E-mail addresses: [email protected] (I. Gamal), [email protected] (A. Badawy), [email protected] (K.K. Khalil), [email protected] (M.A. El-Moursy), [email protected] (A. Khattab).

https://doi.org/10.1016/j.micpro.2019.102874
0141-9331/© 2019 Published by Elsevier B.V.

2. Related work

In this section, related work in LDWS is reviewed. In the data acquisition stage of the LDWS, which is based on MV


Fig. 1. Block diagram of the proposed algorithm.


technology, there are two main approaches to using the camera: the single-camera approach [5,6] and the multi-camera approach [7]. In the single-camera approach, one camera is fixed behind the windshield mirror. This approach is the most widely used in industry because of its low cost. In the multi-camera approach [7], two or more cameras are used at the front and the rear of the vehicle, which provides data redundancy resulting in high accuracy. However, using more than one camera increases the cost and the required computational time. Furthermore, the multi-camera approach requires calibration to merge the images taken from different cameras.

An LDWS consists of different stages. In the pre-processing stage, the image acquired from the camera is enhanced to make it suitable for further processing in the system. The road environment is filled with objects and details, which are considered noise by the system. Image smoothing is performed to reduce unnecessary details by applying image filters such as the Gaussian filter [6] and the median filter [8]. The image may contain some lines which are similar to the lane boundaries. Therefore, Region of Interest (ROI) extraction is used to determine the portion of the image that is sufficient to detect the lanes correctly. The main method to extract the ROI is to detect the vanishing point in the image and to crop the image at it [6]. The vanishing point is the point at which all lines in the image intersect. Applying the ROI reduces the computational time, as the work is done on a smaller image.

The line detection stage is done in two steps: edge detection followed by line segmentation. Many image processing techniques are used in edge detection [9,10]. The Canny edge detector is used in [9] to find the intensity gradients of the image and consider only the pixels at which the gradient has an absolute maximum. Laplacian edge detection is another method, used in [4]. There are many methods to segment the lines [10]. A single approach can perform both tasks in a single step, such as the Hough transform or the Line Segment Detector (LSD). The Hough transform is commonly used in the literature, as in [11]. Unfortunately, the Hough transform has many drawbacks, especially its high complexity, which causes high computational time. LSD is a very powerful method used in [12]. LSD has a lower execution time and is more accurate than the Hough transform, and it does not generate false positives (detected lines that do not correspond to true lines).

Following line detection, tracking methods have been used to track the lane boundaries as they change from one frame to another. These methods mainly depend on tracking lines in the equation form y = mx + c. Therefore, line fitting is done, as in [13], using Least Squares line fitting to create the line equation. In [4], a scoring mechanism has been proposed to keep track of the lane boundaries using the concept of reference counting and to predict their locations if they are missed in one frame or more.

In the last stage, the position of the car with respect to the lane is calculated in each frame using successive information about it; then the departure can be detected. Some departure detection techniques use the middle of the image to represent the middle of the car and estimate the distance between it and the lane position to detect the deviation of the car from the lane, as in [14]. The main drawback of this technique is the need to have the camera mounted in the middle of the car. The stages of the proposed algorithm are described in the next section.

3. The proposed algorithm

The proposed algorithm extracts the ROI to reduce the outlier lines in the image (tree boundaries, roadsides, etc.). Then, an image smoothing stage is carried out using the Gaussian pyramid, as it smoothes the image without harming the needed edges. The Edge Drawing Lines (EDLines) algorithm is then used; it is a real-time and very powerful edge and line segment detection technique with high performance and strong false detection control. In the following stage, basic machine learning (ML) concepts are employed in lane filtering and clustering to reject the lines with a low probability of being lane boundaries. Based on these lines, an advanced reference-counting algorithm is introduced to track the lanes between consecutive frames taken by a single front camera. The stages of the algorithm are shown in Fig. 1 and explained in detail in the following subsections.

3.1. Image pre-processing

In a real-time LDWS (like the one presented in this paper), it is very important to shrink the image as much as possible to minimize the processed items and decrease the processing time. This is achieved in two sub-stages: ROI extraction and image smoothing.

3.1.1. ROI extraction

ROI extraction is the main sub-stage that filters out the outliers that affect the succeeding stages. The concept itself is broadly used in many domains to extract exactly the part of the data that is sufficient to do the required job. In the proposed algorithm, it is required to detect the lane boundaries on the road; therefore, the part of the image that contains the ground (the bottom part) is the most important. An adaptive ROI is extracted as shown in Fig. 2. It is defined using six points illustrated in the figure. The first and second points are the lower left and right corners, respectively. The third and sixth points are calculated according to the y-intercept value of the lane lines. The most important points are the fourth and the fifth, which separate the top and the bottom parts of the image. They are calculated based on what is called the vanishing point.

The vanishing point is the point at which all lane lines intersect. It is calculated by finding the intersection points among all lines in the image (detected in the previous frame); then the mean


Fig. 2. ROI points.

Fig. 3. The Gaussian Pyramid.

of these points' coordinates is considered as the vanishing point coordinates. If the list of detected lines does not include the lane boundaries, this point is inaccurate, causing wrong ROI extraction. Therefore, a frame in which the vanishing point is not calculated correctly affects the accuracy and stability of the system and must be dropped. To overcome this, the vanishing point is calculated in every frame but is updated only every 10–20 frames using a feedback loop. Therefore, the calculations are stabilized and not affected by dropped frames. The vanishing point and the other ROI points are initialized such that the system converges to an appropriate region.
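The vanishing-point estimate described above, the mean of all pairwise intersections of the detected lines, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the slope–intercept line representation and the function name are our assumptions.

```python
from itertools import combinations

def vanishing_point(lines):
    """Estimate the vanishing point as the mean of all pairwise
    intersections of the given lines, each given as (slope m, intercept c)."""
    pts = []
    for (m1, c1), (m2, c2) in combinations(lines, 2):
        if abs(m1 - m2) < 1e-9:        # parallel lines never intersect
            continue
        x = (c2 - c1) / (m1 - m2)      # solve m1*x + c1 = m2*x + c2
        y = m1 * x + c1
        pts.append((x, y))
    if not pts:
        return None
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)
```

A practical implementation would also apply the feedback loop described above, refreshing the stored estimate only every 10–20 frames rather than on every call.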

3.1.2. Image smoothing

In this stage, the image pyramid approach is adopted. The main idea of the image pyramid is to have many versions of the same image at different scales (down-sampled versions of the original image). There are two types of image pyramids: the Gaussian pyramid and the Laplacian pyramid. The Gaussian pyramid, which is the most commonly used, is adopted in the proposed algorithm as it gives the best results with the minimum processing time compared with other smoothing techniques [15]. As shown in Fig. 3, this pyramid is constructed by sub-sampling and smoothing the original image several times. If the scaled versions of the image are stacked, they form a pyramid with the original image at its base and the lowest-resolution image at its top. Smoothing the image diminishes its details and shrinks it, reducing the processing time. Therefore, this stage is used to reduce the number of lines detected by the EDLines algorithm at the line segment detection stage and to speed up the system.

Fig. 4 shows the results of applying the EDLines algorithm on images with different levels of the Gaussian pyramid. In Fig. 4(a), the algorithm is applied on the original image directly, which produces 446 lines. In Fig. 4(b), a 2-level Gaussian pyramid is used and 179 lines are produced. This reduces the number of pixels by half and smoothes the image, reducing the number of lines to less than half. In Fig. 4(c), the number of lines reaches only 48 using a 3-level pyramid, which is much less than in the first case.

3.2. Line detection

One of the most important stages in any LDWS is the line detection stage. The idea of the LDWS is to find the positions of the lane boundaries and to know the position of the car with respect to them. This stage is responsible for detecting edges in a given image (which form lines, curves, or other shapes) and distinguishing the lines from the other shapes. The line detection stage can be divided into two sub-stages: edge detection and line segment detection. In the following sections, these sub-stages are introduced.

3.2.1. Edge detection

Edge detection is a mathematical method to detect points in an image where there are sharp changes in brightness. An algorithm based on the first-order derivative of the image is used to find these points. The ED algorithm [16] is a recent edge detection algorithm that runs in real time and produces each edge as a pixel chain. This algorithm works on a greyscale image and proceeds as follows: the gradient magnitude is computed at each pixel such that it is possible to extract the pixels with maximum gradient. These pixels (which are called anchors) are connected with a smart routing procedure to form the edges.
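The first ED step, gradient computation and anchor extraction, can be illustrated with the simplified sketch below. It is only a sketch of the idea: the central-difference gradient and the threshold value are illustrative choices, not the paper's, and ED's smart routing pass that links anchors into pixel chains is omitted.

```python
import numpy as np

def anchors(gray, grad_thresh=20.0):
    """Simplified sketch of ED's first step: compute a gradient-magnitude
    map with central differences and keep 'anchor' pixels whose gradient
    is a local maximum across the edge direction."""
    g = gray.astype(float)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = (g[:, 2:] - g[:, :-2]) / 2.0   # horizontal central difference
    gy[1:-1, :] = (g[2:, :] - g[:-2, :]) / 2.0   # vertical central difference
    mag = np.hypot(gx, gy)
    out = []
    for y in range(1, g.shape[0] - 1):
        for x in range(1, g.shape[1] - 1):
            if mag[y, x] < grad_thresh:
                continue
            if abs(gx[y, x]) >= abs(gy[y, x]):   # vertical edge: compare left/right
                if mag[y, x] >= mag[y, x - 1] and mag[y, x] >= mag[y, x + 1]:
                    out.append((x, y))
            else:                                 # horizontal edge: compare up/down
                if mag[y, x] >= mag[y - 1, x] and mag[y, x] >= mag[y + 1, x]:
                    out.append((x, y))
    return out
```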

3.2.2. Line segment detection

By applying edge detection, many types of shapes are formed. These shapes can be regular or non-regular, lines or curves, open or closed. According to the nature of the application, segmentation and classification are performed. In the LDWS, the line is the shape of interest to be segmented.

The EDLines algorithm [17], based on the ED algorithm, performs line segment detection using the Least Squares (LS) line fitting method, which fits points in a certain coordinate system so that they are expressed linearly in terms of the axes of this system. The basic idea is to walk over the edge pixel chains and fit them to lines using the LS method. The algorithm generates an initial line segment and then extends this line segment by adding more pixels to it. The value of the minimum line length depends on the lane boundary length in pixel units. Chain pixels are added to the current line as long as the pixels are within a certain distance from the line, e.g.,


Fig. 4. The effect of using the Gaussian pyramid on the number of lines detected by the EDLines algorithm: (a) the original image, (b) the image after applying a 2-level Gaussian pyramid, and (c) the image after applying a 3-level Gaussian pyramid.

Fig. 5. Separating left lines from right lines.

Table 1
Feature values for the filtering process.

Features     Left              Right
Slope        −55 < m < −0.5    0.5 < m < 55
Start point  Sx < width/2      Sx > width/2
Length       length > 15       length > 15

a 1-pixel error. The algorithm continues adding pixels to the current line segment until the direction of the line changes. At this point, a complete line segment is found, and the remaining pixels of the chain are then processed recursively to extract more line segments.
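The walk-and-fit procedure can be sketched roughly as follows. The 1-pixel error bound and the minimum segment length follow the text; the function names and the simple y = mx + c parameterization (which breaks down for near-vertical chains) are our own simplification of EDLines, not its actual implementation.

```python
def fit_line(points):
    """Least-squares fit y = m*x + c to a list of (x, y) points.
    Assumes the points are not all at the same x (non-vertical chain)."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - m * sx) / n
    return m, c

def grow_segment(chain, min_len=4, max_err=1.0):
    """Seed a line with the first min_len pixels of an edge chain, then
    keep appending pixels while they stay within max_err of the fit.
    Returns the fitted (m, c) and the unconsumed remainder of the chain."""
    if len(chain) < min_len:
        return None, chain
    seg = list(chain[:min_len])
    m, c = fit_line(seg)
    for p in chain[min_len:]:
        if abs(p[1] - (m * p[0] + c)) > max_err:   # direction changed: stop
            break
        seg.append(p)
        m, c = fit_line(seg)                        # refit with the new pixel
    return (m, c), chain[len(seg):]                 # remainder is processed recursively
```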

3.3. Line filtering and clustering

A line segment is defined mainly by its start point (x1, y1) and end point (x2, y2). Using these two points, the equation of the line in the form y = mx + c can be acquired, where m is the line slope and c is the y-intercept. In addition, the length of the line segment is calculated using:

length(l) = √((x1 − x2)² + (y1 − y2)²)    (1)

In the presented algorithm, a line segment is defined using five features: slope (m), intercept (c), start point (Sx, Sy), end point (Ex, Ey) and length (l). The proposed algorithm defines the lane by only two lines: the left one and the right one. This is done by filtering followed by clustering of the line segments. The filtering and clustering techniques employed in the proposed algorithm are described as follows.
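The five line-segment features, with the length computed per Eq. (1), might be gathered as follows; the dictionary layout and function name are our own conventions, not the paper's.

```python
import math

def segment_features(x1, y1, x2, y2):
    """Compute the five features used by the algorithm: slope m,
    y-intercept c, start point, end point, and length per Eq. (1)."""
    m = (y2 - y1) / (x2 - x1)            # assumes a non-vertical segment
    c = y1 - m * x1
    length = math.hypot(x1 - x2, y1 - y2)
    return {"m": m, "c": c, "start": (x1, y1), "end": (x2, y2), "length": length}
```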

3.3.1. Line filtering

In this part, the line segments are filtered so that only the ones related to lane boundaries are selected and passed to the next stage. It is found useful to separate the lines on the left side of the frame from those on the right side. This separation is done using the slope values illustrated in Fig. 5. After the line segments are separated, a selection process is performed using the values of three features: slope, start point and length. The ranges for the features of a line segment to be selected as a lane boundary are shown in Table 1. It is worth mentioning that these values were determined by experiment, where m and Sx are the slope and the x-coordinate of the starting point of the line, respectively.
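The Table 1 filter translates directly into code. The segment representation below matches the five-feature description above but is otherwise our own sketch; note that with image y-coordinates increasing downwards, left-boundary candidates have negative slope.

```python
def filter_lines(segments, width):
    """Split candidate segments into left/right lane-boundary candidates
    using the experimentally chosen ranges of Table 1 (slope, start x, length).
    Each segment is a dict with keys 'm', 'start' and 'length'."""
    left, right = [], []
    for s in segments:
        if s["length"] <= 15:                    # too short to be a boundary
            continue
        m, sx = s["m"], s["start"][0]
        if -55 < m < -0.5 and sx < width / 2:    # left-side candidate
            left.append(s)
        elif 0.5 < m < 55 and sx > width / 2:    # right-side candidate
            right.append(s)
    return left, right
```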


Fig. 6. Illustration of problem solved by line clustering.

Fig. 7. Position of the car with respect to the lane boundaries.

3.3.2. Line clustering

The lane boundaries between two lanes in the same direction are dashed lines with a certain thickness, which causes the line detector to detect each of them as two parallel lines. The problem is illustrated in Fig. 6. It is similar to a clustering problem in ML, since there are a lot of data that must be clustered into classes. There are around 2–10 lines, and close lines must be clustered so that they are represented by only one line. Close lines are determined by the distance between them, which is defined using the following equation:

Distance = √((m1 − m2)² + (c1 − c2)²)    (2)

where m1, m2, c1 and c2 are the slopes and y-intercepts, respectively, of the two lines. A threshold is put on the distance so that similar lines are merged into one line whose features are the mean of the clustered lines.
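A minimal greedy version of this merging step, using the distance of Eq. (2), could look like the following. The threshold value is a placeholder, since the paper does not state the one used, and the greedy assignment order is our simplification.

```python
def cluster_lines(lines, thresh=10.0):
    """Greedy clustering: merge lines whose (slope, intercept) distance
    per Eq. (2) falls below thresh; each cluster is replaced by its mean line.
    Lines are (m, c) pairs."""
    clusters = []                                   # each cluster: list of (m, c)
    for m, c in lines:
        for cl in clusters:
            cm = sum(l[0] for l in cl) / len(cl)    # cluster mean slope
            cc = sum(l[1] for l in cl) / len(cl)    # cluster mean intercept
            if ((m - cm) ** 2 + (c - cc) ** 2) ** 0.5 < thresh:
                cl.append((m, c))
                break
        else:
            clusters.append([(m, c)])               # no close cluster: start one
    return [(sum(l[0] for l in cl) / len(cl), sum(l[1] for l in cl) / len(cl))
            for cl in clusters]
```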

3.4. Lane tracking

Tracking is done mainly for two purposes: selecting only the lines related to the lane boundaries from the detected lines, and predicting the position of the lane boundaries if they are not detected. Neither purpose can be achieved without knowledge of the history of the lane boundaries in the previous frames. There are two cases: either information about the lane boundaries is available, or there is not enough information about them. Mainly, for good road conditions and a clear view, the system is in the first case. While the system is operating, it may keep jumping from case one to case two, because the lane boundaries can be missed for some frames. Intuitively, things in real life do not change suddenly; therefore, the output at this moment is relatively close to the output at the previous moment. This concept is the main driver of the tracking algorithm.

In the nth frame, it is important to define what information is needed from the previous frames. Every detected line is defined by its slope and intercept. Other parameters are added for every line: score, verified flag, slope error and intercept error. These parameters are updated in every frame and passed from one frame to another. They are also collected in a list called the tracked list.

Starting with the first case, where all information about the previous frame is available, the detected lines in the current frame are compared with the lines in the tracked list, so that the best match for each tracked line can be found among the detected lines. After these comparisons, there are three cases:

a) The tracked line has a best match in the detected lines. In this case, the parameters of the tracked line are updated with the values of the detected line and the score is incremented by one.

b) The tracked line does not have a best match in the detected lines. In this case, the parameters of the tracked line are left as they are, but the score is decremented by one.

c) A detected line is not matched with any of the tracked lines. In this case, the line is added to the tracked list with score, verified flag, slope error and intercept error values of zero.

The verified flag is then updated. If the score is larger than a certain threshold, the verified value is changed from 0 to 1. This means that the line was detected in many frames and the probability that it is related to a lane boundary is high. Note that, in this stage, tracked lines whose score is less than a certain threshold are also deleted from the tracked list.

In the second case, where there is not enough information about the previous frames, the tracked list is initialized empty and all detected lines are added to it. If there are no detected lines, the tracking stage is bypassed until a line is detected in the image.
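The per-frame update rules (a)–(c) can be sketched as below. The matching criterion (slope–intercept distance) and all threshold values are illustrative assumptions, since the paper leaves them unspecified, and the slope/intercept error parameters are omitted for brevity.

```python
def update_tracked(tracked, detected, match_dist=5.0, verify_score=5, drop_score=-3):
    """One frame of the reference-counting tracker. tracked: list of dicts
    with keys m, c, score, verified. detected: list of (m, c) lines."""
    used = set()
    for t in tracked:
        best, best_d = None, match_dist
        for i, (m, c) in enumerate(detected):
            if i in used:
                continue
            d = ((m - t["m"]) ** 2 + (c - t["c"]) ** 2) ** 0.5
            if d < best_d:
                best, best_d = i, d
        if best is not None:                 # case (a): matched, update and reward
            used.add(best)
            t["m"], t["c"] = detected[best]
            t["score"] += 1
        else:                                # case (b): unmatched, keep but penalize
            t["score"] -= 1
        if t["score"] > verify_score:        # promote long-lived lines
            t["verified"] = 1
    for i, (m, c) in enumerate(detected):    # case (c): new lines enter with zeros
        if i not in used:
            tracked.append({"m": m, "c": c, "score": 0, "verified": 0})
    return [t for t in tracked if t["score"] > drop_score]   # prune stale lines
```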

3.5. Lane departure

In this stage, it is required to find the position of the car with respect to the lane. In order to do that, the position of the car in the image must be provided. The position of the car is defined by its mid-point. In order to define this point, the camera is assumed to be mounted behind the windshield mirror so that the mid-point of the car is in the middle of the image. This assumption allows a simpler algorithm for departure calculation.

The lane is defined by its boundaries. To compare its position with the car position, the mid-point between its boundaries is determined by intersecting the lane boundaries with the bottom of the image (a horizontal line whose equation is y = image height); the middle of the distance between the intersection points is then the mid-point. It is worth mentioning that the tracked list contains many lines, but not all of them are necessarily related to the lane boundaries. Therefore, only verified lines are used in the calculations. The verified lines are those that appear in a number of consecutive frames; therefore, they are the most probable to be related to lane boundaries.

Using the mid-point of the lane and the mid-point of the car, as shown in Fig. 7, the car lane departure is calculated using the following equation:

Departure(d) = (L − C) / (W/2)    (3)

where L is the lane mid-point, C is the car mid-point and W is the lane width.

The departure (d) ranges from −100% (the car is moving on the left lane boundary) to 100% (the car is moving on the right lane boundary). Both cases are extremes, because the departure warning must be given before the car leaves the current lane. In order to ensure good warning behavior, the departure


Table 2
Regions based on departure value.

Region           Departure values
Safe region      −40% to 40%
Warning region   −60% to −40% and 40% to 60%
Danger region    < −60% and > 60%

range is divided into three regions: the safe region, the warning region and the danger region, as shown in Table 2.

The detection and departure rates are calculated using Eqs. (4) and (5), respectively, found in [18]:

Detection Rate = TP / (TP + FN)    (4)

Departure Rate = Hit / (Hit + Miss)    (5)

where TP and FN are the true positive and false negative results, respectively, Hit is the number of successfully detected departures and Miss is the number of undetected departures.
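Eqs. (4) and (5) translate directly into code; this is a straightforward transcription of the two formulas, with our own function names.

```python
def detection_rate(tp, fn):
    """Eq. (4): fraction of true lane boundaries that were detected."""
    return tp / (tp + fn)

def departure_rate(hit, miss):
    """Eq. (5): fraction of actual departures that triggered a warning."""
    return hit / (hit + miss)
```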

4. Experimental results

According to the ISO 17361:2017 standard, the environmental conditions under which to test an LDWS are flat and dry asphalt, lane markings directly visible to the driver, and a horizontal visibility range greater than 1 km. The conditions stated in the standard describe an ideal environment; real life is not ideal. To guarantee that driver safety is accomplished by RTCFLDWS, the system is tested under various challenging weather and illumination conditions: clear, cloudy, rainy, day, sunset and night. The system outputs for these situations are depicted in Figs. 8–10.

4.1. Offline testing

These tests are performed on different datasets using a PC with an Intel(R) Core(TM) i7-5500U CPU @ 2.4 GHz.

Fig. 8. RTCFLDWS results in different situations: (a) rainy, (b) night and (c) left and right departure.

4.1.1. Gemy's dataset

It consists of 28,319 frames recorded using a Sony Xperia Z5 Dual mobile phone, which provides images at 30 fps. It covers different types of situations: straight roads, curvy roads, multi-lane roads, daytime, nighttime, shadow effects and obstacles. In addition, it covers departure situations, which all other datasets ignore. Moreover, it includes all the performance test procedures required by the ISO 17361 standard. The raw videos and the RTCFLDWS results on our dataset can be found in [19] and [20], respectively. The average detection rate and the average frame processing time on our dataset are 99.46% and 17.3 ms, respectively. The detection and departure rates and the frame processing time for all the above cases are stated in Table 3.

4.1.2. Other datasets

The proposed system is also tested with the datasets presented in [21–24]. The average detection rate and the average frame processing time are 99.25% and 12.5 ms, respectively. The detection and departure rates and the frame processing time for each situation are stated in Table 4. Fig. 11 shows false negative samples. A video of the RTCFLDWS results can be found in [25].

A comparison of the RTCFLDWS performance with other methods using the same dataset as in [23] is presented in Table 5. RTCFLDWS achieves a higher detection rate than the others: more than 99%, compared to less than 93% for the best existing technique in [24].

4.2. Online testing

The ISO 17361:2017 standard states three types of tests that should be carried out before any LDWS is granted a certificate: warning generation in a curve, repeatability and false alarm. These three tests are performed on the proposed algorithm using a 10-megapixel camera fixed behind the windshield mirror of a vehicle. An i.MX6 SABRE Lite board with a quad-core ARM® Cortex-A9 processor @ 1 GHz per core and 1 GByte of 64-bit wide DDR3 @ 532 MHz is


Fig. 9. RTCFLDWS results in various tunnel illumination conditions: (a) yellow lamp tunnel, (b) white lamp tunnel and (c) dim light tunnel.

Fig. 10. RTCFLDWS results in challenging conditions: (a) shadow effect, (b) & (c) tracking of missed lanes and (d) steep curve and inclined lines at night.


Table 3
Offline test results on our dataset.

Conditions                                    TP     FN   Detection rate (%)   Hit   Miss   Departure rate (%)   Frame processing time (ms)
Highway day   Curve and departure             3316   16   99.52                6     6      100                  16.83
              Straight 1                      1747   10   99.43                1     1      100                  16.82
              Straight and departure          2195   14   99.37                7     7      100                  18.38
              Straight 2                      1352   5    99.63                2     2      100                  18.37
              Shadow and obstacle             1364   5    99.63                3     3      100                  18.14
              False alarm test                800    7    99.13               1     1      100                  17.17
              Warning in a right curve test   1509   8    99.47                2     2      100                  17.18
              Repeatability test              4339   19   99.56                11    11     100                  17.64
              ISO 17361 test procedures       6259   35   99.44                14    14     100                  17.38
Highway night False alarm test                858    4    99.54                –     –      –                    17.14
              Straight                        1412   11   99.23                –     –      –                    16.50
              Curve and obstacle              3025   9    99.70                1     1      100                  16.71

Table 4
Offline test results on other datasets.

Dataset        Conditions                 TP     FN   Detection rate (%)   Hit   Miss   Departure rate (%)   Frame processing time (ms)
Udacity [20]   Highway   Curve            1240   10   99.20                –     –      –                    17.28
                         Solid white      105    2    99.02                –     –      –                    14.65
                         Solid yellow     650    4    99.38                –     –      –                    16.42
Caltech [21]   Urban     Clear            245    4    98.37                –     –      –                    19.16
DIML [21]      Highway   Clear            1198   5    99.58                1     1      100                  15.92
                         Cloudy           1237   0    100                  –     –      –                    14.74
                         Sunset/sunrise   1351   7    99.48                3     3      100                  15.22
                         Steep curve      257    2    99.22                –     –      –                    17.54
               Urban     Night            1248   12   99.04                –     –      –                    17.38
                         Sunset           1070   17   98.40                –     –      –                    16.09
               Tunnel    White lamp       1332   5    99.62                1     1      100                  15.91
                         Yellow lamp      1233   5    99.59                2     2      100                  16.02
Juju [23]      Highway   Night            165    1    99.39                –     –      –                    6.78
                         Rainy            275    0    100                  –     –      –                    6.08
               Tunnel    White lamp       860    15   98.26                –     –      –                    6.33

Fig. 11. Error samples.

Table 5

Lane detection rate comparison.

Method Detection rate (%)

Aly [21] 72.94

Borkar [6] 81.53

Hunjae [22] 85.32

RTCFLDWS 99.37

Table 6
Warning generation test results.

               Left departure   Right departure
Right curve    Passed           Passed
Left curve     Passed           Passed

Table 7
Repeatability test results.

Trial no.   Left departure   Right departure
1           Passed           Passed
2           Passed           Passed
3           Passed           Passed
4           Passed           Passed


used in the tests. The results of the first two tests are stated in Tables 6 and 7. As shown in the tables, all tests have passed. Finally, the system does not issue any warnings within the no-warning zone over a 1000 m distance on a straight road, which indicates that the system passes the false alarm test successfully.

5. Conclusions

In this paper, a new reliable and robust algorithm to implement an LDWS is introduced. The RTCFLDWS algorithm is real-time and scalable. It reduces the input image using region of interest extraction. The edge detection and line segmentation method EDLines



is applied. It is fast and accurate with false detection control. The filtering and clustering block uses basic machine learning to select only the lines related to the lane boundaries from the detected lines. The lane boundaries are tracked as they change while the car moves. The lines are drawn on a GUI display and a warning signal appears when a departure happens. From the results, the proposed system achieves an average detection rate of 99.36%, a departure rate of 100% and an average processing time of 14.9 ms. The proposed algorithm outperforms all the existing ones in the literature.

Declaration of Competing Interest

None.

References

[1] ASIRT Organization. [Online]. Available: http://asirt.org/initiatives/informing-road-users/road-safety-facts/road-crash-statistics.
[2] D. LeBlanc, Road departure crash warning system field operational test: methodology and results, Technical Report Volume 1, Jun. 2006.
[3] C.B. Wu, L.H. Wang, K.C. Wang, Ultra-low complexity block-based lane detection and departure warning system, IEEE Trans. Circuits Syst. Video Technol. 29 (2) (2019) 582–593.
[4] O.G. Lotfy, A.A. Kassem, E.M. Nassief, H.A. Ali, M.R. Ayoub, M.A. El-Moursy, M.M. Farag, Lane departure warning tracking system based on score mechanism, in: Proceedings of the IEEE Midwest Symposium on Circuits and Systems (MWSCAS), Oct. 2016, pp. 1–4.
[5] A. Irshad, A.A. Khan, I. Yunus, F. Shafait, Real-time lane departure warning system on a lower resource platform, in: Proceedings of the International Conference on Digital Image Computing: Techniques and Applications (DICTA), Nov. 2017, pp. 1–8.
[6] Y. Kortli, M. Marzougui, B. Bouallegue, J.S.C. Bose, P. Rodrigues, M. Atri, A novel illumination-invariant lane detection system, in: Proceedings of the International Conference on Anti-Cyber Crimes (ICACC), Mar. 2017, pp. 166–171.
[7] A. Borkar, M. Hayes, M.T. Smith, A new multi-camera approach for lane departure warning, in: Proceedings of the International Conference on Advanced Concepts for Intelligent Vision Systems, Aug. 2011, pp. 58–69.
[8] J. Baili, M. Marzougui, A. Sboui, S. Lahouar, M. Hergli, J.S.C. Bose, K. Besbes, Lane departure detection using image processing techniques, in: Proceedings of the International Conference on Anti-Cyber Crimes (ICACC), Mar. 2017, pp. 238–241.
[9] M. Kodeeswari, P. Daniel, Lane line detection in real time based on morphological operations for driver assistance system, in: Proceedings of the International Conference on Signal Processing, Computing and Control (ISPCC), Sep. 2017, pp. 316–320.
[10] N.G. Cho, A. Yuille, S.W. Lee, A novel linelet-based representation for line segment detection, IEEE Trans. Pattern Anal. Mach. Intell. 40 (5) (2018) 1195–1208.
[11] D.K. Lee, J.S. Shin, J.H. Jung, S.J. Park, S.J. Oh, I.S. Lee, Real-time lane detection and tracking system using simple filter and Kalman filter, in: Proceedings of the International Conference on Ubiquitous and Future Networks (ICUFN), Jul. 2017, pp. 257–277.
[12] F. Zhou, Y. Cao, X. Wang, Fast and resource-efficient hardware implementation of modified line segment detector, IEEE Trans. Circuits Syst. Video Technol. 28 (11) (2018) 3262–3273.
[13] A. Gupta, A. Choudhary, Real-time lane detection using spatio-temporal incremental clustering, in: Proceedings of the IEEE International Conference on Intelligent Transportation Systems (ITSC), Oct. 2017, pp. 1–6.
[14] R.F. Berriel, E. de Aguiar, A.F. de Souza, T. Oliveira-Santos, Ego-Lane Analysis System (ELAS): dataset and algorithms, Image Vis. Comput. 68 (2017) 64–75.
[15] T. Behrens, K. Schmidt, R.A. MacMillan, R.A.V. Rossel, Multiscale contextual spatial modelling with the Gaussian scale space, Geoderma 310 (2018) 128–137.
[16] C. Topal, O. Ozsen, C. Akinlar, Real-time edge segment detection with edge drawing algorithm, in: Proceedings of the International Symposium on Image and Signal Processing and Analysis (ISPA), Sep. 2011, pp. 313–318.
[17] C. Akinlar, C. Topal, EDLines: real-time line segment detection by edge drawing (ED), in: Proceedings of the IEEE International Conference on Image Processing (ICIP), Sep. 2011, pp. 2837–2840.
[18] I. Gamal, A. Badawy, A.M.W. Al-Habal, M.E.K. Adawy, K.K. Khalil, M.A. El-Moursy, A. Khattab, A robust, real-time and calibration-free lane departure warning system, in: Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS), 2019, pp. 2405–2408.
[19] Islam Gamal, Gemy's dataset for testing LDWS. [Online]. Available: https://www.youtube.com/playlist?list=PLdELJ5xai9LhEpkJiT2AVdcjGYVA6iCmO.
[20] Islam Gamal, RTCFLDWS output on Gemy's dataset. [Online]. Available: https://youtu.be/FTYbSPBBV-o.
[21] D. Lichtenberg, GitHub, Microsoft. [Online]. Available: https://github.com/udacity/CarND-LaneLines-P1/tree/master/test_videos.
[22] M. Aly, Real time detection of lane markers in urban streets, in: Proceedings of the IEEE Intelligent Vehicles Symposium, Jun. 2008, pp. 7–12.
[23] H. Yoo, U. Yang, K. Sohn, Gradient-enhancing conversion for illumination-robust lane detection, IEEE Trans. Intell. Transp. Syst. 14 (3) (2013) 1083–1094.
[24] J.H. Yoo, S.W. Lee, S.K. Park, D.H. Kim, A robust lane detection method based on vanishing point estimation using the relevance of line segments, IEEE Trans. Intell. Transp. Syst. 18 (12) (2017) 3254–3266.
[25] Islam Gamal, RTCFLDWS output on different dataset. [Online]. Available: https://youtu.be/PoHsBO6g7ZE.

Islam Gamal received his B.Sc. in Electronics and Communications Engineering, with honors, from Cairo University in 2018. Currently, he is an Embedded Software Engineer at Mentor Graphics, a Siemens business, where he manages projects for both internal and external customers, including the development and integration of AUTOSAR basic software components and tests for automotive. Islam is an instructor at the Mentor Embedded Academy of Excellence. He has been a Certified LabVIEW Associate Developer since 2015 and a Certified LabVIEW Developer since 2017, and has been a LabVIEW Student Ambassador (LSA) training students in LabVIEW development for four years. His current research interests are in AUTOSAR-based ECUs and calibration and measurement protocols and tools.

Awab M. W. Al-Habal was born in 1995. He received the B.S. degree with honors from the Electronics and Electrical Communications Engineering Department at Cairo University in 2018. He is currently a Research Assistant at the Electronics and Electrical Communications Engineering Department at Cairo University, Egypt. His research interests include embedded systems, the Internet of Things, machine learning and data science.

Keroles Karam Khalil was born in Cairo, Egypt, in 1991. He received the B.S. degree (cumulative grade: very good) in electronics and communications engineering from Ain Shams University, Cairo, in 2013. He is an Embedded System Engineer with the Mentor Embedded System Division, Mentor Graphics Corporation, Cairo. LinkedIn profile: https://www.linkedin.com/in/keroles-karam-2a86057b.

Magdy A. El-Moursy was born in Cairo, Egypt, in 1974. He received the B.S. degree in electronics and communications engineering (with honors) and the Master's degree in computer networks from Cairo University, Cairo, Egypt, in 1996 and 2000, respectively, and the Master's and Ph.D. degrees in electrical engineering in the area of high-performance VLSI/IC design from the University of Rochester, Rochester, NY, USA, in 2002 and 2004, respectively. In the summer of 2003, he was with STMicroelectronics, Advanced System Technology, San Diego, CA, USA. Between September 2004 and September 2006 he was a Senior Design Engineer at Portland Technology Development, Intel Corporation, Hillsboro, OR, USA. Between September 2006 and February 2008 he was an assistant professor in the Information Engineering and Technology Department of the German University in Cairo (GUC), Cairo, Egypt. Between February 2008 and October 2010 he was a Technical Lead in the Mentor Hardware Emulation Division, Mentor Graphics Corporation, Cairo, Egypt. Dr. El-Moursy is currently a Senior Engineering Manager in the Integrated Circuits Verification and Solutions Division, Mentor, A Siemens Business, and an Associate Professor in the Microelectronics Department, Electronics Research Institute, Cairo, Egypt. He is an Associate Editor on the editorial boards of the Elsevier Microelectronics Journal, the International Journal of Circuits and Architecture Design and the Journal of Circuits, Systems, and Computers, and a Technical Program Committee member of many IEEE conferences such as ISCAS, ICM, ICAINA, PacRim CCCSP, ISESD, SIECPC, and IDT. His research interest is in Networks-on-Chip/Systems-on-Chip, interconnect design and related circuit-level issues in high-performance VLSI circuits, clock distribution network design, digital ASIC circuit design, VLSI/SoC/NoC design and validation/verification, circuit verification and testing, embedded systems and low-power design. He is the author of over 80 papers, five book chapters, and four books in the fields of high-speed and low-power CMOS design techniques, NoC/SoC and embedded systems.


Ahmed Khattab (S'05, M'12, SM'17) is an Associate Professor in the Electronics and Electrical Communications Engineering Department at Cairo University. He is also an adjunct Associate Professor at the American University in Cairo (AUC). He received his Ph.D. in Computer Engineering from the Center for Advanced Computer Studies (CACS) at the University of Louisiana at Lafayette, USA, in 2011. He received a Master of Electrical Engineering degree from Rice University, USA, in 2009. He also received M.Sc. and B.Sc. (honors) degrees in Electrical Engineering from Cairo University, Cairo, Egypt, in 2004 and 2002, respectively. He is an IEEE Senior Member. Dr. Khattab has authored/co-authored 3 books, over 70 journal and conference publications, and a US patent. He serves as a reviewer for many IEEE transactions, journals and conferences, and is a member of the technical committees of several prestigious conferences such as IEEE Globecom, IEEE ICC, IEEE ICCCN, and IEEE WF-IoT. Dr. Khattab was awarded the Egypt State Award in 2017. His current research interests are in the broad areas of wireless networking including, but not limited to, the Internet of Things (IoT), wireless sensor networks, vehicular networks, cognitive radio networks, security and machine learning.