
  • International Conference on Computer and Communication Engineering (ICCCE 2010), 11-13 May 2010, Kuala Lumpur, Malaysia

    978-1-4244-6235-3/10/$26.00 ©2010 IEEE

    Foreground Segmentation-Based Human Detection With Shadow Removal

Fadhlan Hafiz
Department of Electrical and Computer Engineering
International Islamic University Malaysia, Kuala Lumpur, Malaysia
[email protected]

A. A. Shafie
Department of Mechatronics Engineering
International Islamic University Malaysia, Kuala Lumpur, Malaysia
[email protected]

Othman Khalifa
Department of Electrical and Computer Engineering
International Islamic University Malaysia, Kuala Lumpur, Malaysia
[email protected]

M. H. Ali
Department of Mechatronics Engineering
International Islamic University Malaysia, Kuala Lumpur, Malaysia
[email protected]

Abstract—Recent research in video surveillance systems has shown an increasing focus on creating reliable systems from non-computationally expensive techniques for observing humans' appearance, movements and activities, thus providing analytical information for advanced human behaviour analysis and realistic human modelling. For such a system to function, it requires a robust method for detecting the human form in a given input video stream. In this paper, we present a human detection technique suitable for video surveillance. The technique we propose includes background subtraction, foreground segmentation, and shadow removal. The proposed detection technique first extracts all foreground objects from the background, and moving shadows are then eliminated by a shadow detection algorithm. Finally, we perform a morphological reconstruction algorithm to recover the foreground objects distorted by the shadow removal process. We define certain features that describe a human and match them against the final objects obtained from the earlier processing. The experimental results demonstrate its validity and accuracy in various fixed outdoor and indoor video scenes.

    Keywords- Human detection, Background subtraction, Foreground segmentation, Gaussian mixture model, Shadow removal, Video surveillance

I. INTRODUCTION

Human detection in videos has become an increasingly important research area in both the computer vision and pattern recognition communities because of its potential applications in video surveillance, driver assistance systems and content-based image retrieval. The analysis of people's behaviour also requires a human detection and tracking system. The development of human detection and tracking systems has been moving forward for several years, and many real-time systems have been developed [1][2].

However, some challenging problems still need more research: foreground segmentation and false alarm elimination. From the state of the art, there is no perfect foreground segmentation algorithm that adapts to difficult situations such as heavy shadow, sudden light changes, tree shaking and so on. Most human detection and tracking systems work fine in environments with gradual light changes; however, they fail to deal with sudden light changes, tree shaking and moving backgrounds.

In this paper, we present a fast real-time human detection system which utilizes a combination of several non-computationally expensive techniques and is, as a result, proven to be robust against camera noise, illumination changes, heavy shadow and moving backgrounds. We propose the use of foreground segmentation based on a Gaussian Mixture Model (GMM) to extract foreground objects from the image. To obtain accurate detection, the shadow, which is considered one of the biggest challenges in human detection, must be removed from the foreground objects. Therefore, our proposed technique applies shadow detection and removal based on simple contrast adjustments in HSV colour space.

The rest of this paper is organized as follows: section II reviews previous work; section III explains GMM-based foreground segmentation; section IV details the shadow removal process; section V demonstrates the overall human detection process; section VI elaborates on the results; and finally section VII concludes the paper.

This research is sponsored by MDEC/MSC Malaysia.

II. PREVIOUS WORK

Despite all the difficulties of human detection, a lot of work has been done in recent years to realize several human detection techniques that produce accurate results in various conditions and environments. One of the techniques used by other researchers is based on Histograms of Oriented Gradients (HOG) [3] and the improved version of HOG with Locality Preserving Projection (LPP-HOG) [4]. Local object appearance and shape are characterized by the distribution of local intensity gradients or edge directions, and HOG features are calculated by taking orientation histograms of edge intensity in local regions. We found that the total number of features becomes too large when features are extracted from all locations on the grid, and the computation is too complex to fit real-time system requirements.

Jianpeng Zhou and Jack Hoang proposed the use of a GMM in [5], where the extraction of moving objects from the image is based on motion information and the foreground segmentation is carried out using background subtraction. The background can be modelled as a Gaussian distribution N(μ, σ²); this basic Gaussian model can adapt to gradual light change by recursively updating the model using an adaptive filter. However, this basic model fails to handle multiple backgrounds, such as water waves and tree shaking. To solve the problem of multiple backgrounds, models such as the mixture of Gaussians [6] can be used. GMMs have been used for dynamically modelling the video foreground/background pixel distribution [7, 8]. Given a video sequence taken by a static camera, the GMM performs background subtraction by constructing over time a mixture model for each pixel and deciding, in an input frame, whether each pixel belongs to the foreground or the background.
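For readers who want an off-the-shelf baseline, the same kind of per-pixel mixture model is available in OpenCV as the MOG2 background subtractor. The short sketch below (Python/OpenCV, which this paper does not use or prescribe; the input file name is hypothetical and the parameter values are illustrative defaults) shows how such a subtractor is typically driven frame by frame.

```python
import cv2

# GMM-style background subtractor; parameters here are illustrative defaults.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)

cap = cv2.VideoCapture("video.mpeg")   # hypothetical input sequence
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # 255 = foreground, 127 = detected shadow, 0 = background
    fg_mask = subtractor.apply(frame)
cap.release()
```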

In [9], a method based on a shadow confidence score (SCS) is proposed to separate vehicles and cast shadows from the foreground objects in a traffic monitoring system. Since model-based shadow removal has been suggested to apply only to some special scenes and to require large and complex computations [10], our proposed technique, in order to match real-time requirements, uses a shadow removal method based on properties of colour information such as chromaticity, as also proposed in [11, 12], which is proven to be non-computationally expensive.

III. FOREGROUND SEGMENTATION USING GMM

Moving object extraction in the foreground is the first step toward human detection. Time-adaptive mixture-of-Gaussians background models (GMM) can solve the problems caused by complex backgrounds such as the following:

• Gradually changing background: such as gradual illumination change;

    • Non-static background: such as leaves swinging in the wind and changing television displays;

    • Sudden change of the background: such as objects suddenly added to or removed from the scene.

A. Background Modelling

In [10], each pixel is modelled as a pixel process; each process consists of a mixture of K adaptive Gaussian distributions. The probability of observing the current pixel value $X_t$ at time $t$ is given by the following equation. The distributions are ordered based on least variance and maximum weight.

$P(X_t) = \sum_{i=1}^{K} \omega_{i,t}\,\eta(X_t, \mu_{i,t}, \Sigma_{i,t})$   (1)

where:
$K$ = number of Gaussian distributions,
$\omega_{i,t}$ = weight estimate of the $i$-th Gaussian in the mixture at time $t$,
$\mu_{i,t}$ = mean value of the $i$-th Gaussian at time $t$,
$\Sigma_{i,t}$ = covariance matrix of the $i$-th Gaussian at time $t$,
$\eta$ = Gaussian probability density function.

B. Background Model Matching and Updating

We check every new pixel against each of the K current Gaussian distributions. If the pixel grey value is within 2.5 standard deviations of a distribution's mean, a match is found. Then the parameters $\mu_{i,t}$ and $\sigma^2_{i,t}$ of the matching distribution are updated as [13]:

$\mu_{i,t} = (1-\alpha)\,\mu_{i,t-1} + \alpha\,X_t$
$\sigma^2_{i,t} = (1-\alpha)\,\sigma^2_{i,t-1} + \alpha\,(X_t - \mu_{i,t})^2$   (2)

where $\alpha$ = Gaussian adaptation learning rate.

If the current pixel value matches none of the distributions, the least probable distribution is updated with the current pixel value as its mean, a high variance and a low prior weight. The prior weights of the K distributions are updated at time t according to:

$\omega_{k,t} = (1-\alpha)\,\omega_{k,t-1} + \alpha\,M_{k,t}$   (3)

where:
$\alpha$ = learning rate,
$M_{k,t}$ = 1 for the model which matched the pixel and 0 for the remaining models.

The Gaussians are ordered based on the descending ratio $\omega/\sigma$. This ratio increases as the Gaussian's weight increases and its variance decreases. The first B distributions accounting for a proportion T of the observed data are defined as background. We set T = 0.8 here as in [13]:

$B = \arg\min_{b}\left(\sum_{k=1}^{b}\omega_{k} > T\right)$   (4)

For each non-background pixel, we calculate the difference between this pixel in the current image and in the background model. Only pixels with a difference above the threshold of 10 are labelled as foreground pixels.
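To make the update rules above concrete, the following minimal sketch implements equations (1)-(4) for a single grey-valued pixel in Python/NumPy. It illustrates the Stauffer-Grimson style model described here rather than the authors' implementation; the values of K, α and the initial variance, and the use of α directly as the mean/variance learning rate, are assumptions.

```python
import numpy as np

class PixelGMM:
    """Per-pixel mixture of K adaptive Gaussians over grey values, following eqs. (1)-(4)."""

    def __init__(self, K=3, alpha=0.01, T=0.8, init_var=36.0):
        self.K, self.alpha, self.T = K, alpha, T
        self.init_var = init_var
        self.w = np.ones(K) / K           # mixture weights (eq. 1)
        self.mu = np.zeros(K)             # component means
        self.var = np.full(K, init_var)   # component variances

    def update(self, x):
        """Update the model with grey value x; return True if x is foreground."""
        sigma = np.sqrt(self.var)
        matched = np.abs(x - self.mu) <= 2.5 * sigma          # 2.5-sigma match test
        if matched.any():
            k = int(np.argmax(matched))                       # first matching component
            # eq. (2); using alpha directly as the second learning rate is a simplification
            self.mu[k] = (1 - self.alpha) * self.mu[k] + self.alpha * x
            self.var[k] = (1 - self.alpha) * self.var[k] + self.alpha * (x - self.mu[k]) ** 2
        else:
            k = int(np.argmin(self.w / np.sqrt(self.var)))    # replace least probable component
            self.mu[k], self.var[k], self.w[k] = x, self.init_var, 0.05
        M = np.zeros(self.K)
        M[k] = 1.0 if matched.any() else 0.0                  # eq. (3) indicator
        self.w = (1 - self.alpha) * self.w + self.alpha * M
        self.w /= self.w.sum()
        order = np.argsort(-(self.w / np.sqrt(self.var)))     # rank by omega/sigma
        B = int(np.searchsorted(np.cumsum(self.w[order]), self.T)) + 1   # eq. (4)
        background = set(order[:B].tolist())
        return not (matched.any() and k in background)
```

In practice one such model is maintained per pixel (usually vectorised over the whole frame), and the additional grey-level difference test with the threshold of 10 described above is applied to the pixels declared foreground.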

IV. SHADOW REMOVAL

Foreground objects include humans and cast shadows. The shadow affects the performance of foreground detection, and shadow regions will be detected as foreground. Recognizing shadows is hard, and to simplify it, the shadow elimination method can be based on the assumption that the chromaticity of the shadow area is not changed and only the intensity of pixels in the shadow area is lowered [9, 10]. An area cast into shadow often shows a significant change in intensity without much change in chromaticity. We apply our shadow elimination method only on the detected foreground area, based on the fact that only shadows wrongly detected as foreground need to be considered.

HSV colour space matches human visual perception better than RGB colour space, and in addition, luminance and chrominance variations can be detected more effectively in HSV colour space, especially in outdoor scenes. For these reasons, HSV colour space is chosen to separate luminance (V) from chrominance (H and S). The approach is based on the simple idea that shadows change the brightness of the background but do not really affect the hue and saturation in HSV colour space.
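The intuition above (shadows lower V while leaving H and S largely unchanged) is often expressed as a per-pixel test against a background image, as in the chromaticity-based approaches of [11, 12]. The sketch below illustrates that test only; it is not the contrast-adjustment rule this paper actually applies, and all thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

def shadow_mask(frame_bgr, background_bgr,
                v_low=0.5, v_high=0.95, s_tol=60, h_tol=30):
    """Mark pixels that are darker than the background but similar in H and S."""
    f = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    b = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    v_ratio = f[..., 2] / (b[..., 2] + 1e-6)            # a shadow lowers V
    dh = np.abs(f[..., 0] - b[..., 0])
    dh = np.minimum(dh, 180.0 - dh)                     # hue is circular (0..179 in OpenCV)
    return ((v_ratio >= v_low) & (v_ratio <= v_high) &  # darker, but not black
            (np.abs(f[..., 1] - b[..., 1]) <= s_tol) &  # saturation barely changes
            (dh <= h_tol))                              # hue barely changes
```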

So, we manipulate the luminance value by changing the contrast of the foreground objects to remove the lower-luminance shadows while keeping the desired higher-luminance foreground object unaffected by the process [14]. The contrast adjustment linearly scales the pixel values between upper and lower limits. Pixel values that are above or below this range are saturated to the upper or lower limit value, respectively, as in Fig.1.
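A minimal sketch of this linear contrast adjustment with saturation, applied to the luminance (V) channel, is given below. The lower and upper limits are placeholders; the paper selects the amount of contrast empirically (a setting of 1.5 is mentioned in Section V.B).

```python
import numpy as np

def adjust_contrast(v, low, high):
    """Linearly map luminance values in [low, high] to [0, 255];
    values outside the range are saturated to the limits (cf. Fig. 1)."""
    v = v.astype(np.float32)
    out = (v - low) * 255.0 / float(high - low)    # linear scaling between the limits
    return np.clip(out, 0, 255).astype(np.uint8)   # saturate out-of-range values
```

Raising the lower limit suppresses more of the darker (shadow) pixels, which corresponds to the stronger contrast settings discussed below.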

    Figure 1: Graph representation of contrast adjustment

First, we formulate the contrast enhancement optimization for gray images. We consider the intensity values of a grayscale image to be representative of the luminance values at the pixel locations. We pose the optimization as maximizing:

$f(\hat{I}) = \frac{1}{4|\Omega|}\sum_{p\in\Omega}\sum_{q\in N(p)} \frac{\hat{I}(p)-\hat{I}(q)}{I(p)-I(q)}$   (5)

subject to a perceptual constraint

$1 \le \frac{\hat{I}(p)-\hat{I}(q)}{I(p)-I(q)} \le 1+\tau$   (6)

and a saturation constraint

$L \le \hat{I}(p) \le U$   (7)

where the scalar functions $I(p)$ and $\hat{I}(p)$ represent the gray values at pixel $p$ of the input and output images respectively, $\Omega$ denotes the set of pixels that makes up the image, $|\Omega|$ denotes the cardinality of $\Omega$, $N(p)$ denotes the set of four neighbours of $p$, $L$ and $U$ are the lower and upper bounds of the gray values (e.g. $L = 0$ and $U = 255$ for images that have gray values between 0 and 255), and $\tau > 0$ is the single parameter that controls the amount of enhancement achieved. Note that the saturation constraint does not control the gradient but only the range of values a pixel is allowed to have; thus the pixels in the very dark or very bright regions of the image will still have their gradients enhanced. Examples of contrast adjustment in gray scale and a binary-image representation of shadow removal are shown in Fig.2 and Fig.3 respectively. The contrast is carefully selected to compensate for other parts of the human body which might have low luminance, such as the head. A higher contrast value removes more shadow but can heavily deform the human shape.
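As a sanity check on this formulation, the short sketch below evaluates the objective (5) and constraints (6)-(7) for a given input image I and candidate output Î over four-neighbour pairs. It only verifies a candidate enhancement, it does not solve the optimization, and it assumes the equations as reconstructed above.

```python
import numpy as np

def gradient_ratios(I, I_hat):
    """Ratios (I_hat(p)-I_hat(q)) / (I(p)-I(q)) over horizontal and vertical neighbour pairs."""
    ratios = []
    for axis in (0, 1):
        dI = np.diff(I.astype(np.float32), axis=axis)
        dIh = np.diff(I_hat.astype(np.float32), axis=axis)
        valid = dI != 0                                   # skip flat neighbour pairs
        ratios.append(dIh[valid] / dI[valid])
    return np.concatenate(ratios)

def check_enhancement(I, I_hat, tau=0.5, L=0, U=255):
    r = gradient_ratios(I, I_hat)
    f = r.mean()                                          # objective (5), up to normalisation
    perceptual_ok = bool(np.all((r >= 1.0) & (r <= 1.0 + tau)))   # constraint (6)
    saturation_ok = bool(np.all((I_hat >= L) & (I_hat <= U)))     # constraint (7)
    return f, perceptual_ok, saturation_ok
```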

    Figure 2: Contrast adjustment applied to gray-scale images: Before contrast adjustment (Top row), and after contrast adjustment (Bottom row).

    Figure 3: Shadow removal illustrated in binary images: Original image (left), without shadow removal (middle), with shadow removal (right)

In Fig.3 the middle image is the result of foreground segmentation using GMM; it is obvious that the foreground includes the moving object, cast shadow and speckle noise. After shadow removal we can clearly see that some foreground pixels are regarded as shadows and consequently removed. We can observe slight deformation in Fig.3, but deformation of that magnitude can be reconstructed using a morphological operation.

V. HUMAN DETECTION PROCESS

The overall detection technique proposed in this paper can be divided into two parts: (1) image pre-processing and (2) segmentation and noise removal. Image pre-processing includes frame processing, foreground segmentation, and binarization, while segmentation includes shadow removal, morphological operation, noise removal, and size filtering.

    A. Image Pre-processing

The initial background is obtained from a scene without any human in the detection zone, as in Fig.4. The initial background and the images from the video stream are first converted to grayscale for foreground segmentation. Subsequently, each image is compared to the background using the GMM explained earlier to detect any changes in the image.

Figure 4: Initial background (Left), and foreground (Right)

The result of this process is passed to the shadow removal algorithm to detect and remove any cast shadows that exist in the foreground. After that, the image is passed to the binarization process, which transforms the grayscale image into a black-and-white (binary) image: pixels with lightness below the filter threshold acquire the low colour, 0, and those with lightness above the threshold acquire the high colour, 1. The algorithm for this process, in terms of the histogram, is: "Given a number, threshold, T between 0 and 255, create a T-th frame by replacing all the pixels with gray level lower than or equal to T with black (0), the rest with white (1)". This T value must be able to conserve the human body shape because the difference in light intensity can sometimes produce a highly deformed human shape, especially in the region between head and chest. Therefore the T value must be carefully selected. It must also compensate for the object shadow and light illumination, which we consider as noise. A lower T increases the effect of shadow in the image and increases pixel noise, while a higher T deforms the human shape. Hence, the T value selected should minimize both effects. From various experiments carried out, the optimal T value is 15.
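A minimal sketch of this binarization rule, with the empirically chosen T = 15, is shown below (OpenCV is used for convenience; the paper does not prescribe a library).

```python
import cv2
import numpy as np

def binarize(gray, T=15):
    """Pixels with gray level <= T become 0 (black); the rest become 1 (white)."""
    _, binary = cv2.threshold(gray, T, 1, cv2.THRESH_BINARY)
    return binary.astype(np.uint8)
```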

B. Segmentation and Noise Removal

After pre-processing, the image is morphologically reconstructed to ensure that human body parts separated by the earlier shadow removal are grouped together, as in Fig.6. It is important to note that this process cannot reconstruct heavily deformed shapes, so shadow removal should be applied moderately by selecting a suitable contrast value; in this case we set the contrast at 1.5. Next, camera pixel noise is removed using a median filter. The resulting image is then filtered according to the size of the objects it contains, in order to remove insignificantly small objects and to distinguish between human and non-human.
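A hedged sketch of this clean-up stage is shown below. The paper performs morphological reconstruction; a morphological closing is used here as a simpler stand-in to regroup body parts split by shadow removal, followed by a median filter for camera pixel noise. The kernel and filter sizes are illustrative assumptions.

```python
import cv2
import numpy as np

def clean_mask(binary_mask):
    """Regroup split body parts (closing as a stand-in for reconstruction) and remove speckle noise."""
    mask = binary_mask.astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # reconnect nearby foreground parts
    return cv2.medianBlur(closed, 5)                          # suppress isolated camera-noise pixels
```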

The simplest method is to establish a body appearance model to distinguish human from non-human. A human can take more than one configuration, but the overall shape of a human is relatively stable. Among the shape-based methods, the model can be established by using the simple shape ratio of the human body, the various actions of a human, or multiple rectangles of a human; the corresponding mathematical transforms include edge extraction, wavelet transform, projection transform, etc. In an upright pose, the outline of a human body presents a slender rectangle. Although the non-rigid movement of humans makes the human shape change considerably, the ratio of height to width of a human stays above a definite value. Therefore, a model based on the outline ratio is the simplest method for upright body detection. The human model used is illustrated in Fig.5.

    Figure 5: The human model used in this technique

    Figure 6: The morphological operation: Image after shadow removal (Left), and image after reconstruction (Right)

The minimum height and width are set at 30 pixels; this process ignores all objects of small size. The image is then filtered by the same filter but with a different parameter, the size ratio S, defined from the blob dimensions in pixels as:

$S = \frac{\text{Height (pixels)}}{\text{Width (pixels)}}, \quad 1.6 \le S \le 3.0$   (8)

S is constrained to between 1.6 and 3.0 based on experience. If the ratio of height to width is within this threshold, the object is classified as a human body. This process also ignores all blobs of small size, which correspond to insignificant objects, shadow and noise. The overall process flow and a graphical overview are illustrated step by step in Fig.7 and Fig.8 respectively. The final human image is indicated with a coloured bounding box for easy observation.
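The size and ratio filter just described can be sketched as follows (Python/OpenCV, for illustration only): connected components smaller than 30 pixels in height or width are discarded, and a blob is kept as a human candidate only if 1.6 ≤ S ≤ 3.0.

```python
import cv2

def human_candidates(binary_mask, min_size=30, s_low=1.6, s_high=3.0):
    """Return bounding boxes (x, y, w, h) of blobs passing the size and ratio tests.
    binary_mask is expected to be a single-channel uint8 image (0 = background)."""
    boxes = []
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary_mask, connectivity=8)
    for i in range(1, n):                                  # label 0 is the background
        x = stats[i, cv2.CC_STAT_LEFT]
        y = stats[i, cv2.CC_STAT_TOP]
        w = stats[i, cv2.CC_STAT_WIDTH]
        h = stats[i, cv2.CC_STAT_HEIGHT]
        if w < min_size or h < min_size:
            continue                                       # ignore small blobs (noise, shadow)
        s = h / float(w)                                   # size ratio S
        if s_low <= s <= s_high:
            boxes.append((x, y, w, h))                     # human-shaped blob
    return boxes
```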

    Figure 7: Human detection process

Figure 8: Graphical overview of the human detection process

VI. EXPERIMENTAL RESULT

We test the proposed technique of GMM foreground segmentation and shadow removal using video sequences from the CAVIAR Test Set [15]. As we can see from Fig.9, the human detection algorithm classifies humans by indicating them with coloured bounding boxes. It can be observed that the algorithm detects the boundary of the human body almost perfectly, given that the human is free from any occlusion, as shown in Fig.9 (a), (b) and (c). But when occlusion happens between humans (a group of humans), the algorithm can still detect the presence of humans but cannot accurately determine the number of humans present in the scene (false negative), and the bounding boxes are also located incorrectly (false positive), as shown in Fig.9 (d). False negatives also happen when a human in the scene is too far from the camera, making their size too small and causing the system to filter them out, as in Fig.9 (e) and (f). Another cause of false negatives is when only part of the human body appears in the scene or the humans are partly occluded, as in Fig.9 (e) and (f). The overall detection success rate is summarized in Table 1. The video sequence used in the test is OneStopEnter2cor.mpeg, taken from CAVIAR, which contains 2724 frames with a total of 8 individual humans.

Figure 9: Example of human detection using the proposed technique implemented on the CAVIAR Test Set

Table 1: Accuracy of proposed technique

    Total Success Rate (%): 97.2
    False Negative (%): 0.6
    False Positive (%): 1.2

The proposed method for now only uses single-core programming, and it was implemented on a PC (Intel Quad Core Q9550, 2.83GHz with 4GB of memory). In this test, the whole detection system works at approximately 9 fps for input jpg images of size 384x288, which initially have an average frame rate of 27 fps without any detection applied. The time taken to perform the detection is illustrated in Fig.10. We measure the time from the beginning of the detection until the end of the detection, i.e. when the bounding boxes are assigned. As we can see, the detection time increases as the number of humans in the scene increases; however, from 6 humans/frame to 8 humans/frame, the detection time is constant at 75 ms.

    Figure 10: The detection time of proposed technique

VII. CONCLUSION

In this paper, we presented a human detection technique employing GMM as the foreground segmentation technique and contrast adjustment for shadow removal, which is suitable for real-time video surveillance systems. The presented algorithm produces good results and is not at all computationally expensive. The computation is fast and does not consume much processing power, which makes this technique very suitable for analyzing fast video streams at high frame rates. This technique has been implemented in the CCTV system of the IIUM Machine Vision Lab. Still, the technique could be improved further to handle occlusion and crowd detection. This technique is also suitable for implementation in human recognition and tracking analysis.

Future Work: Partial body occlusion is the main drawback of shape-based human detection, as the system cannot correctly determine the boundary of the human body in cases of occlusion. However, we can overcome this problem using head detection to correctly determine the number of humans present in the scene and therefore locate the bounding boxes accurately based on the human-shape model. The implementation of this technique can greatly improve the accuracy of the proposed human detection and widen the situations in which it can be applied. The evaluation will also take into consideration more video sequences from CAVIAR to determine the overall performance of the system. The software code will also be revised to implement multi-threading for better performance on multi-core processors.

VIII. REFERENCES

[1] J. Connell, A. W. Senior, A. Hampapur, Y.-L. Tian, L. Brown, and S. Pankanti, "Detection and Tracking in the IBM PeopleVision System", IEEE ICME, June 2004.

[2] L. M. Fuentes and S. A. Velastin, "People Tracking in Surveillance Applications", in Proc. 2nd IEEE International Workshop on PETS, Dec. 2001.

[3] Hui-Xing Jia and Yu-Jin Zhang, "Fast Human Detection by Boosting Histograms of Oriented Gradients", in Proc. Fourth International Conference on Image and Graphics, 2007.

[4] Qin Jun Wang and Ru Bo Zhang, "LPP-HOG: A New Local Image Descriptor for Fast Human Detection", IEEE 978-1-4244-3531-9/08, 2008.

[5] Jianpeng Zhou and Jack Hoang, "Real Time Robust Human Detection and Tracking System", in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), 2005.

[6] Mohand Said Allili, Nizar Bouguila and Djemel Ziou, "Online Video Foreground Segmentation using General Gaussian Mixture Modeling", 2007 IEEE International Conference on Signal Processing and Communications (ICSPC 2007), 24-27 November 2007.

[7] J. Cheng, J. Yang, Y. Zhou and Y. Cui, "Flexible Background Mixture Models for Foreground Segmentation", Image and Vision Computing, 24(5):473-482, 2006.

[8] C. Stauffer and W. E. L. Grimson, "Learning Patterns of Activity Using Real-Time Tracking", IEEE Trans. on PAMI, 22(8):747-757, 2000.

[9] Huang Mao-Chi and Yen Shwu-Huey, "A Real-Time and Color-Based Computer Vision for Traffic Monitoring System", IEEE International Conference on Multimedia and Expo (ICME), 2004, (3): 2119-2122.

[10] Chuanxu Wang and Weijuan Zhang, "A Robust Algorithm for Shadow Removal of Foreground Detection in Video Surveillance", 2009 Asia-Pacific Conference on Information Processing, 2009.

[11] Goncalo Monteiro, Joao Marcos, Miguel Ribeiro, and Jorge Batista, "Robust Segmentation for Outdoor Traffic Surveillance", in Proc. ICIP, 2008.

[12] Zhen Tang and Zhenjiang Miao, "Fast Background Subtraction and Shadow Elimination Using Improved Gaussian Mixture Model", in Proc. HAVE 2007 - IEEE International Workshop on Haptic Audio Visual Environments and their Applications, 2007.

[13] Milan Sonka et al., "Image Processing, Analysis, and Machine Vision", Thomson Learning, USA, pp. 816, 2008.

[14] Alessandro Capra et al., "Dynamic Range Optimization by Local Contrast Correction and Histogram Image Analysis", IEEE 0-7803-9459-3/06, 2006.

[15] http://homepages.inf.ed.ac.uk/rbf/CAVIARDATA1/

