Ankita Sharma, Gurpreet Singh / International Journal of Engineering Research and
Applications (IJERA) ISSN: 2248-9622 www.ijera.com
Vol. 3, Issue 2, March -April 2013, pp.962-967
962 | P a g e
Comparative Study: Content Based Image Retrieval using Low Level Features

Ankita Sharma¹, Gurpreet Singh²
¹,²Department of Electronics and Communication, Lovely Professional University
Abstract
Image retrieval plays an important role in many areas like architectural and engineering design, fashion, journalism, advertising, entertainment, etc. How to search for and retrieve the images we are interested in is a critical problem: it brings a necessity for image retrieval systems. As we know, the visual features of images provide a description of their content. Content-based image retrieval (CBIR) emerged as a promising means for retrieving images and browsing large image databases. It is the process of retrieving images from a collection based on automatically extracted features, generally low level features. To improve CBIR, human perception, i.e. high level feature extraction, can be included for better efficiency. In this paper, a comparative analysis is performed so as to achieve efficient results.

Keywords: CBIR, Low Level Features, High Level Features, Feature Vector
I. INTRODUCTION
Recent advances in digital imaging and digital storage devices make it possible to easily generate, transmit, manipulate and store large numbers of digital images and documents. As a result of advances in the Internet and new digital image sensor technologies, the volume of digital images produced by scientific, educational, medical, industrial, and other applications available to users has increased dramatically. Various methods and algorithms have been proposed for image addressing. Such studies revealed the indexing and retrieval concepts which have further evolved into Content Based Image Retrieval (CBIR). The concept of CBIR evolved in the early 1990s.
Content Based Image Retrieval (CBIR) is a technology that helps to organize and retrieve a digital image database by visual content. It is the process of retrieving images from a collection based on automatically extracted features, generally low level features. CBIR evolved as an advancement of Text Based Image Retrieval (TBIR), which used annotations to describe images. In TBIR, images were indexed using keywords, headings and codes, which were then used as retrieval keys. It suffered from many problems: (1) it was non-standardized, because different users used different keywords for annotation; (2) it was sometimes incomplete; (3) humans were required to personally describe every image, which made it cumbersome and labour intensive. Due to the above mentioned disadvantages, CBIR proved to be more promising and efficient than TBIR.
CBIR basically uses the visual contents of an image such as colour, shape, texture, and spatial layout to represent and index the image. In typical content-based image retrieval systems, the visual contents of the images in the database are extracted and then described by feature vectors. The feature vectors of the images form a feature database. To retrieve images, users provide the retrieval system with example images or sketched figures. The system then converts these examples into its internal representation of feature vectors. The similarities between the feature vectors of the query example and those of the images in the database are then measured, and retrieval is performed with the aid of an indexing scheme. The indexing scheme provides an efficient way to search the image database. Recent retrieval systems have incorporated users' relevance feedback to modify the retrieval process in order to generate semantically more meaningful retrieval results [2].
Figure 1: Content Based Image Retrieval System
Figure 1 shows how a CBIR system works. The feature vectors of both the query image and the database images are first calculated, and then the similarity between them is measured. The database image which matches the query image most closely is ranked first, and so on. A point to note is that the retrieval result is not
a single image; rather, it is a number of ranked images.
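The ranking pipeline described above can be sketched in a few lines of Python. The gray-level histogram feature and the Euclidean distance are illustrative stand-ins for whatever feature and similarity measure a real CBIR system uses; names such as `extract_features` are ours, not from the paper:

```python
import numpy as np

def extract_features(image):
    """Toy feature extractor: a normalized gray-level histogram.
    Stands in for the low-level feature vectors described above."""
    hist, _ = np.histogram(image, bins=16, range=(0, 256))
    return hist / hist.sum()

def rank_database(query, database):
    """Rank database images by Euclidean distance between feature
    vectors (smallest distance = rank 1)."""
    q = extract_features(query)
    dists = [np.linalg.norm(q - extract_features(img)) for img in database]
    return np.argsort(dists)  # indices of database images, best match first

rng = np.random.default_rng(0)
db = [rng.integers(0, 256, size=(32, 32)) for _ in range(5)]
query = db[2].copy()          # query identical to database image 2
ranking = rank_database(query, db)
print(ranking[0])             # → 2 (the identical image is ranked first)
```

Note that, as the text points out, the output is the whole ranked list, not a single image.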
II. IMAGE CONTENT DESCRIPTORS
Feature: A feature can be defined as the representation of any distinguishable characteristic of an image. Features can be classified into 3 levels:
1. Low Level: colour, texture, shape, spatial information and motion.
2. Middle Level: arrangement of particular types of objects, roles and scenes.
3. High Level: impressions, emotions and meaning associated with the combination of perceptual features.
Feature Vectors
Extracting the features of an image and calculating its feature vector is one of the most important tasks in CBIR. Generally, low level features are extracted and calculated. These include:
1. Color: Color reflects the chromatic attributes of the image. A range of geometric color models (e.g., HSV, RGB, Luv) for discriminating between colors is available. Color histograms are among the most traditional techniques for describing the low-level properties of an image. Color Coherence Vectors (CCV) and color moments can also be used to calculate the color feature vector.
2. Texture: Texture is another important property of images. Various texture representations have been investigated in pattern recognition and computer vision. Basically, texture representation methods can be classified into two categories: statistical and transform based methods. Statistical methods include calculating the Gray Level Co-occurrence Matrix (GLCM), from which the Energy, Entropy, Contrast and Inverse Difference Moment are calculated. Transform based methods include calculating the Gabor transform or wavelet transform, etc.
3. Shape: Shape features of objects or regions have been used in many content-based image retrieval systems. Compared with colour and texture features, shape features are usually described after images have been segmented into regions or objects. Since robust and accurate image segmentation is difficult to achieve, the use of shape features for image retrieval has been limited to special applications where objects or regions are readily available. The state-of-the-art methods for shape description can be categorized into either boundary-based or region-based methods. A good shape representation feature for an object should be invariant to translation, rotation and scaling.
4. Spatial: Regions or objects with similar colour and texture properties can be easily distinguished by imposing spatial constraints. For instance, regions of blue sky and ocean may have similar colour histograms, but their spatial locations in images are different. Therefore, the spatial location of regions or the spatial relationship between multiple regions in an image is very useful for searching images.
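As a concrete illustration of the statistical texture method mentioned in point 2, the following sketch computes a GLCM for a single pixel displacement and derives the energy, entropy, contrast and inverse difference moment from it. The function names and the 4-level gray quantization are our assumptions, used only to keep the example small:

```python
import numpy as np

def glcm(gray, dx=1, dy=0, levels=4):
    """Gray Level Co-occurrence Matrix for one displacement (dx, dy),
    normalized so the entries are co-occurrence probabilities."""
    g = np.zeros((levels, levels))
    h, w = gray.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[gray[y, x], gray[y + dy, x + dx]] += 1
    return g / g.sum()

def glcm_stats(p):
    """Statistical texture features derived from the GLCM."""
    i, j = np.indices(p.shape)
    energy   = np.sum(p ** 2)
    entropy  = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    contrast = np.sum((i - j) ** 2 * p)
    idm      = np.sum(p / (1 + (i - j) ** 2))  # inverse difference moment
    return energy, entropy, contrast, idm

gray = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1],
                 [2, 2, 3, 3],
                 [2, 2, 3, 3]])
p = glcm(gray)
print(round(glcm_stats(p)[0], 3))  # energy of this blocky texture → 0.167
```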
III. SIMILARITY MEASUREMENTS
For comparing the similarity between the query image and a database image, the Euclidean distance can be used:

Similarity(Q, DB) = √[ Σn (Qn − DBn)² ]    …(1)

where the sum runs over the components n = 1 to 16 of the feature vectors, Q is the query image feature vector and DB is the database image feature vector. Other distance measures which can be used are:
Minkowski-Form Distance: If each dimension of the image feature vector is independent of the others and of equal importance, the Minkowski-form distance is appropriate for calculating the distance between two images [12].
Quadratic Form Distance: The Minkowski distance treats all bins of the feature histogram entirely independently and does not account for the fact that certain pairs of bins correspond to features which are perceptually more similar than other pairs. To solve this problem, the quadratic form distance was introduced.
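A minimal sketch of the Minkowski-form distance: p = 2 reduces to the Euclidean distance of Eq. (1), and p = 1 to the city-block distance.

```python
import numpy as np

def minkowski(a, b, p=2):
    """Minkowski-form distance between feature vectors a and b.
    p=1 gives the city-block (L1) distance, p=2 the Euclidean distance."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sum(np.abs(a - b) ** p) ** (1.0 / p)

q  = [1.0, 2.0, 3.0]   # toy query feature vector
db = [4.0, 6.0, 3.0]   # toy database feature vector
print(minkowski(q, db, p=2))  # → 5.0 (a 3-4-5 triangle)
print(minkowski(q, db, p=1))  # → 7.0
```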
IV. PERFORMANCE EVALUATION
For evaluating the performance of a CBIR system, the values of Precision and Recall are considered. However, these two parameters don't give information about the ranking order of the images. Therefore NMRR (Normalized Modified Retrieval Rank) is also taken into account.
Precision: It is defined as the ratio of the number of relevant images retrieved to the total number of images retrieved. It is a measure of the accuracy of retrieval and is given as:
Precision = (No. of relevant images retrieved) / (Total no. of images retrieved)    …(2)
Recall: It is defined as the ratio of the number of relevant images retrieved to the number of relevant images available in the database.
Recall = (No. of relevant images retrieved) / (No. of relevant images available in the database)    …(3)
NMRR: It stands for Normalized Modified Retrieval Rank and is basically the normalized MRR score. It gives information about the retrieval rank of the images. Its value lies between 0 and 1; the smaller the value, the better the results.
NMRR = MRR(q) / (K + 0.5 − 0.5·NG(q))    …(4)
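Precision and recall of Eqs. (2) and (3) can be computed directly from a ranked result list and a ground-truth set, as in this short sketch (the image names are hypothetical):

```python
def precision_recall(retrieved, relevant):
    """Precision (Eq. 2) and recall (Eq. 3) for one query.
    `retrieved` is the ranked result list, `relevant` the ground-truth set."""
    hits = len(set(retrieved) & set(relevant))
    precision = hits / len(retrieved)
    recall = hits / len(relevant)
    return precision, recall

retrieved = ["img1", "img4", "img7", "img9"]   # 4 images returned by the system
relevant  = {"img1", "img7", "img8"}           # 3 relevant images in the database
p, r = precision_recall(retrieved, relevant)
print(p)  # → 0.5 (2 of the 4 retrieved images are relevant)
print(r)  # → 0.666... (2 of the 3 relevant images were retrieved)
```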
V. ANALYSIS OF SOME LOW LEVEL FEATURE BASED TECHNIQUES
1. CBIR using Advanced Color and Texture Features [1]
This technique presents an efficient Content Based Image Retrieval (CBIR) system using color and texture. Here, two different feature extraction techniques are employed. A universal content based image retrieval system uses color, texture and shape based feature extraction techniques to obtain better matched images from the database. In the proposed CBIR system, color and texture features are used. The texture features were extracted from the query image by applying block-wise Discrete Cosine Transforms (DCT) on the entire image, and the color features were extracted using the moments of colors (mean, deviation and skewness).
Figure 2: Block Diagram of Feature Extraction [1]
Color Feature Extraction: Color is one of the most widely used visual features in CBIR because it is relatively robust. As shown in the block diagram, the RGB components of the image are extracted. They are then divided into blocks of 8×8 each, and the three moments are calculated using the
equations given below [1]:

Ei = (1/mn) Σj Pij    …(5)

σi = [ (1/mn) Σj (Pij − Ei)² ]^(1/2)    …(6)

αi = [ (1/mn) Σj (Pij − Ei)³ ]^(1/3)    …(7)

where the sums run over j = 1, …, mn, Ei is the average of color channel i (i = R, G, B), σi is the standard deviation, αi is the skewness, Pij is the value of color channel i at the jth image pixel, and m·n is the total number of pixels per image.
Texture Feature Extraction: For efficient texture feature calculation, DCT coefficients can be used
[5].
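Equations (5)-(7) translate almost directly into code. The sketch below computes the three moments for a single channel; the signed cube root is our handling of a possibly negative third central moment, which the paper does not spell out:

```python
import numpy as np

def color_moments(channel):
    """Mean (Eq. 5), standard deviation (Eq. 6) and skewness (Eq. 7)
    of one color channel, as used in [1]."""
    p = channel.astype(float).ravel()
    mn = p.size
    mean = p.sum() / mn
    std = (np.sum((p - mean) ** 2) / mn) ** 0.5
    # signed cube root of the (possibly negative) third central moment
    m3 = np.sum((p - mean) ** 3) / mn
    skew = np.sign(m3) * abs(m3) ** (1.0 / 3.0)
    return mean, std, skew

channel = np.array([[10, 20], [30, 40]])   # toy 2x2 channel
mean, std, skew = color_moments(channel)
print(mean)  # → 25.0
print(skew)  # → 0.0 (the values are symmetric about the mean)
```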
Figure 3: DCT-9 Feature Vectors [1]
For each sub-block containing one DC coefficient and the remaining AC coefficients, a feature set of 9 vector components is extracted by averaging specific regions of coefficients, as shown in Figure 3, considering the following characteristics:
1) The DC coefficient of each sub-block represents the average energy of the image (vector component 1).
2) All remaining coefficients within a sub-block contain frequency information which represents a different pattern related to image variation (vector components 2 to 4).
3) The coefficients of some regions within a sub-block also represent directional information (vector components 5 to 9).
For the DCT transform, the RGB image is first converted into a gray level image, and block based DCT transformation is then applied. The equation used for the DCT calculation of each 8×8 block is given as follows:
F(u, v) = (c(u)c(v)/4) Σi Σj f(i, j) cos[(2i+1)uπ/16] cos[(2j+1)vπ/16]    …(8)

where the sums run over i, j = 0, …, 7, f(i, j) is the pixel value at the (i, j) coordinate position in the block, F(u, v) is the DCT domain representation of f(i, j), and u and v represent the vertical and horizontal frequencies, respectively [1].
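Equation (8) can be implemented as a pair of matrix products with the cosine basis. This sketch assumes the standard normalization c(0) = 1/√2 and c(u) = 1 otherwise, which the paper does not state explicitly:

```python
import numpy as np

def dct_8x8(block):
    """8x8 DCT of Eq. (8), computed as C @ block @ C.T with the
    cosine basis C[u, i] = cos((2i+1)u*pi/16) and scaling c(u)c(v)/4."""
    i = np.arange(8)
    C = np.cos((2 * i[None, :] + 1) * i[:, None] * np.pi / 16)
    c = np.ones(8)
    c[0] = 1 / np.sqrt(2)   # assumed normalization for the DC terms
    return np.outer(c, c) / 4 * (C @ block @ C.T)

block = np.full((8, 8), 128.0)   # flat gray block
F = dct_8x8(block)
print(round(F[0, 0]))            # DC coefficient (average energy) → 1024
# a constant block has no variation, so every AC coefficient is zero
print(bool(np.allclose(F[1:, :], 0) and np.allclose(F[:, 1:], 0)))  # → True
```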
RESULTS
There are three different techniques for the retrieval of images:
1) Using Color (HSV and YCbCr) and Texture
2) Using only Color (HSV and YCbCr)
3) Using only Texture
Among the three techniques, the first one, 'Using Color (YCbCr) and Texture', gives more efficient results than any other technique. It retrieved 60% of the total available images in the database which are similar to the given query image.
The results based on only color or only texture were approximately 40% [1].
2. CBIR using Pulse Coupled Neural Networks (PCNN) [2]
This technique presents a method for CBIR based on the PCNN. The PCNN was originally proposed as a model of the biological neural network in the cat visual cortex, which can reproduce synchronous activities in the brain [6][7][8][9]. A method of image matching using the PCNN had been proposed in [10]. In this conventional study, the method utilizes the PCNN-Icon to achieve image matching, where the PCNN-Icon is obtained from the PCNN firing activity when the image data is input to the PCNN. The PCNN-Icon is a time series of the number of firing neurons, obtained by observing neuron firing in the PCNN. The most important characteristic of the PCNN-Icon is that the correlation between PCNN-Icons of the same or similar images has a large value. One of the remarkable features of this method is that it can recognize rotated or scale-modified images.
This image matching method is achieved by means of the PCNN-Icon, defined as the time series of the number of firing neurons in the PCNN. To obtain the PCNN-Icons, the number of firing neurons is observed at every time step from t = 0 to t = tmax, where tmax is defined arbitrarily; the PCNN-Icon is thus a tmax-dimensional natural number vector, and tmax is assumed to be 100 in this study. It is empirically known that the PCNN-Icon is unique to the input image.
To retrieve the images, the system calculates the correlation between the PCNN-Icons of the images in the database and that of the query image. The presentation of the results is simple: the system presents the image data in descending order of the correlation between the PCNN-Icon of the database image and that of the query image.
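The retrieval stage itself is just correlation ranking of icon time series. The sketch below uses randomly generated stand-in icons rather than a real PCNN, which the paper does not specify in enough detail to reproduce here; only the correlation-and-sort logic mirrors the description above:

```python
import numpy as np

def icon_correlation(icon_a, icon_b):
    """Pearson correlation between two icons (time series of
    firing-neuron counts); similar images give values near 1."""
    a = icon_a - icon_a.mean()
    b = icon_b - icon_b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_by_icon(query_icon, db_icons):
    """Present database images in descending order of correlation
    with the query icon, as in the retrieval stage described above."""
    scores = [icon_correlation(query_icon, ic) for ic in db_icons]
    return sorted(range(len(db_icons)), key=lambda k: -scores[k])

# Stand-in icons (a real system would observe firing counts over
# t = 0 .. t_max = 100 while the image drives the PCNN).
rng = np.random.default_rng(1)
db_icons = [rng.integers(0, 50, size=100).astype(float) for _ in range(4)]
query_icon = db_icons[3] + rng.normal(0, 0.1, size=100)  # near-copy of icon 3
print(rank_by_icon(query_icon, db_icons)[0])             # → 3
```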
RESULTS
In the results of image retrieval using this CBIR system, it was shown that the images in the relevant category were listed at the top of the ranking. The characteristics of the retrieved images have also been evaluated, and it is shown that the image recognition method using the PCNN is valid for the CBIR system [2].
Figure 4: Result of CBIR using PCNN
3. CBIR using Shape Feature [3]
The focus of this technique is shape based image retrieval. As with any other feature based on human perception, the major problem in the use of shape is how to represent shape information. The proposed technique is a contour based approach. The Canny operator is used to detect the edge points of the image. The contour of the image is traced by scanning the edge image, and re-sampling is done to avoid discontinuities in the contour representation. The resulting image is swept line by line, and the neighbourhood of every pixel is explored to detect the number of surrounding points and to derive the shape features [3].
A contour-based shape description technique is implemented in this work. Contour detection is one of the most used operators in image processing. In this method, the lengths of the shape's radii from the centroid to the boundary are used to represent the shape. If shape is used as a feature, edge detection is the first step in extracting that feature. Here, the Canny edge detector is used to determine the edges of the object in the scene. After the edges have been detected, the important step is tracing the contour of the object. For this, the edge image is scanned from four directions (right to left, left to right, top to bottom, bottom to top), and the first layer of edges encountered is detected as the image contour. To avoid discontinuities in the object boundary, the contour image is then re-sampled. Next, the central point of the object is located. The centroid is computed using the given equations:
xc = (1/n) Σi xi    …(9)

yc = (1/n) Σi yi    …(10)
where the sums run over i = 1, …, n and n is the number of points of the object. The contour is characterized using a sequence of contour points described in polar form [11]. Here the pole is taken at the centroid (xc, yc), and the contour graph is obtained using the
equation d = f(θ), where each point (x, y) has a polar description in which x, y, d and θ are related by:

d = √[(x − xc)² + (y − yc)²]    …(11)

θ = tan⁻¹[(y − yc)/(x − xc)]    …(12)
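Equations (9)-(12) can be sketched as follows; we use `atan2` rather than a bare arctangent of the ratio so that the angle lands in the correct quadrant, a detail Eq. (12) leaves implicit:

```python
import math

def polar_descriptor(points):
    """Centroid (Eqs. 9-10) and (d, theta) polar description
    (Eqs. 11-12) of a contour given as a list of (x, y) points."""
    n = len(points)
    xc = sum(x for x, _ in points) / n
    yc = sum(y for _, y in points) / n
    desc = [(math.hypot(x - xc, y - yc),      # d, Eq. (11)
             math.atan2(y - yc, x - xc))      # theta, Eq. (12)
            for x, y in points]
    return (xc, yc), desc

square = [(0, 0), (2, 0), (2, 2), (0, 2)]     # toy contour: a square
(xc, yc), desc = polar_descriptor(square)
print((xc, yc))              # → (1.0, 1.0)
print(round(desc[0][0], 3))  # each corner is sqrt(2) from the centroid → 1.414
```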
RESULT
The approach described has proven to be simple and effective for shape based image retrieval, and it performs better than the salient points method. The average retrieval efficiency is 71.17% (for the top 20 images), 70.07% (for the top 25 images) and 80.625% (for the top 30 images).
4. Image Retrieval using Local Color Features [4]
In this technique, an image indexing and retrieval approach using local color features and a modified weighted color distortion measure is proposed. Each image is segmented into several regions by a watershed segmentation algorithm, and then the mutual relationships between connected color regions are extracted as local color features. That is, an image can be represented as a set of connected (adjacent) color regions and the mutual relationships between them. In the image retrieval stage, the similarity between a query image and a target image contains not only direct region correspondence but also the mutual relationships between connected color regions. The watershed algorithm is used to partition an image into segmented regions. Because the watershed algorithm is a gradient-based region-growing method [12], each input color image I is converted into a gray-scale image L. To match the characteristics of the human visual system, the conversion is a weighted average conversion [13].
RESULT
To evaluate the performance of the proposed approach, three existing approaches are implemented for comparison: (1) the conventional color histogram (CCH) approach, (2) the fuzzy color histogram (FCH) approach, and (3) the unified feature matching (UFM) approach. The performance of the proposed approach is found to be better than those of the three existing approaches. The average precision value is found to be 0.281, whereas for CCH it is 0.242, for FCH 0.260, and for UFM 0.202.
VI. CONCLUSION
Research in content-based image retrieval (CBIR) has in the past focused on image processing, low-level feature extraction, etc. In this review paper, the main focus is on low level feature extraction. Features like color, shape and texture have been extracted by applying suitable techniques. The use of neural networks has also given good results. These techniques give good results when the image database is not very large.
However, recent trends in image retrieval demand more accurate results as databases grow. Extensive experiments on CBIR systems demonstrate that low-level image features cannot always give proper results. Therefore, high-level semantic concepts can be introduced as well, so as to decrease the semantic gap between the low level and the high level features. It is believed that CBIR systems should provide maximum support in bridging the 'semantic gap' between low-level visual features and the richness of human semantics. Therefore, for better retrieval results, high level features can be introduced along with the low level features.
REFERENCES
[1] Sagar Soman, Mitali Ghorpade, Vrushali Sonone, Satish Chavan, "Content Based Image Retrieval Using Advanced Colour and Texture Features", International Conference in Computational Intelligence (ICCIA), 2011.
[2] Masato Yonekawa and Hiroaki Kurokawa, "The Content-Based Image Retrieval using the Pulse Coupled Neural Network", WCCI 2012 IEEE World Congress on Computational Intelligence, 2012.
[3] S. Arivazhagan, L. Ganesan, S. Selvanidhyananthan, "Image Retrieval using Shape Feature", International Journal of Imaging Science and Engineering (IJISE), 2007.
[4] Tzu-Chien Wu, Jin-Jang Leou, and Li-Wei Kang, "Image Retrieval Using Local Color Features", IEEE, 2005.
[5] H. B. Kekre, Sudeep D. Thepade and Akshay Maloo, "Image Retrieval using Fractional Coefficients of Transformed Image using DCT and Walsh Transform", International Journal of Engineering Science and Technology, Vol. 2, No. 4, pp. 362-371, 2010.
[6] R. Eckhorn, H. J. Reitboeck, M. Arndt, P. Dicke, "Feature linking via synchronization among distributed assemblies: Simulations of results from cat visual cortex", Neural Computation, vol. 2, pp. 293-307, 1990.
[7] A. K. Engel, A. K. Kreiter, P. König, and W. Singer, "Synchronization of oscillatory neuronal responses between striate and extrastriate visual cortical areas of the cat", Proc. Natl. Acad. Sci. USA, vol. 88, pp. 6048-6052, 1991.
[8] W. Kruse and R. Eckhorn, "Inhibition of sustained gamma oscillations (35-80 Hz) by fast transient responses in cat visual cortex", Proc. Natl. Acad. Sci. USA, vol. 93, pp. 6112-6117, 1996.
[9] R. Eckhorn, "Neural Mechanisms of Scene Segmentation: Recordings from the Visual Cortex Suggest Basic Circuits for Linking Field Models", IEEE Trans. Neural Networks, vol. 10, no. 3, pp. 464-479, 1999.
[10] J. L. Johnson, M. L. Padgett, "PCNN Models and Applications", IEEE Trans. Neural Networks, vol. 10, no. 3, pp. 480-498, 1999.
[11] Hwei-Jen Lin, Yang-Ta Kao, Shwu-Huey Yen, and Chia-Jen Wang, "A Study of Shape-Based Image Retrieval", Proceedings of the 24th International Conference on Distributed Computing Systems Workshops (ICDCSW'04), 2004.
[12] Y. Chen and J. Z. Wang, "A region-based fuzzy feature matching approach to content-based image retrieval", IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 24, no. 9, pp. 1252-1267, September 2002.
[13] D. A. Adjeroh and M. C. Lee, "On ratio-based color indexing", IEEE Trans. on Image Processing, vol. 10, no. 1, pp. 36-48, January 2001.
Ankita Sharma received her B.Tech degree from Maharana Pratap College of Technology, Gwalior, in 2010. Presently, she is pursuing her M.Tech (ECE) at Lovely Professional University, Punjab (2013). Her area of interest is image processing, and in future her focus is to pursue a Ph.D in the same domain.

Gurpreet Singh received his B.Tech degree from Rayat Institute of Engineering and Technology in 2010. Presently, he is pursuing his M.Tech (ECE) at Lovely Professional University, Punjab (2013). His areas of interest are image processing and neural networks. In future, his aim is to pursue a Ph.D in the same area.