


J. Arockia Jenifer & S. Lakshmi

Fatima College(Autonomous), Madurai


Biometric image processing and recognition can be used for attendance tracking, such as employee attendance and student attendance. It also provides more secure user access for e-commerce and other security applications. Available options include fingerprint-based attendance systems and face-recognition attendance systems.

Biometric scanning can solve issues pertaining to information security, although biometric systems still face fundamental challenges in real-world applications. Biometric technology is used to measure and analyze personal characteristics; these characteristics include fingerprints, voice patterns, and hand measurements.

Fingerprints are considered the best and fastest method for biometric identification. Biometric identification systems are widely used for unique identification, mainly for verification and identification. A fingerprint-based attendance system has been introduced for automatically monitoring and calculating student attendance in a class.


Image processing is a method of performing operations on an image in order to obtain an enhanced image or to extract useful information from it. It is a type of signal processing in which the input is an image and the output may be an image or characteristics associated with that image. It involves analyzing and manipulating the image.

Digital image processing is the technology of manipulating groups of bits to enhance the quality of an image, create different perspectives, or extract information from the image digitally, with the help of computer algorithms. A digital image is how a computer records a picture. Its smallest unit is called a pixel, which normally holds a single 0-255 value in a grayscale image. In a color image, each pixel has three 0-255 values representing the red, green, and blue (RGB) channels.

Digital image processing supports visualization (observing objects that are not directly visible), image sharpening and restoration (creating a better image), and pattern measurement (measuring the various objects in an image). Biometric image processing and recognition can be used for attendance, and biometric attendance systems are quickly gaining ground in offices and institutions. Such a system avoids problems such as paper records going missing or being damaged, and it can replace the existing manual system with a more systematic one.

A fingerprint attendance system with access control not only logs the in and out times of employees but also prevents unauthorized entries into the workplace. This makes things easier for both the employee and the business, as work hours are logged automatically when the employee enters and leaves the office. It eliminates the possibility of timesheets getting lost or manipulated, and it also saves a lot of time. In biometrics, image processing is required for identifying an individual whose biometric image was previously stored in the database; faces, fingerprints, and similar traits are image-based biometrics, which require image processing and pattern-recognition techniques.


1. Fields of Use

The system can be used in institutions, colleges, companies, and many other developing fields. An automated system eliminates the need for paper tracking and instead makes use of touch screens or magnetic-stripe cards, which is easier for both employees and students. Automated fingerprint verification is a closely related technique used in applications such as attendance and access-control systems; it compares the basic fingerprint patterns of a previously stored template and a candidate fingerprint.

2. Fingerprint Process

Biometric technologies provide a means of uniquely recognizing humans based upon one or more physical or behavioral characteristics, and can be used to establish or verify the identity of previously enrolled individuals.

Fingerprints are considered the best method for biometric identification: they are secure to use, unique for every person, and do not change during one's lifetime. A fingerprint recognition system operates either in verification mode or in identification mode. Automated fingerprint identification is the process of automatically matching one or many unknown fingerprints against a database of known and unknown prints.

Finger scanners consist of:

‣ A reader or scanning device.

‣ Software that converts the scanned information into digital form and compares it against stored data.

‣ A database that stores the biometric data for comparison.

Binarization converts a grayscale image into a binary image by fixing a threshold value: pixel values above the threshold are set to 1 and those below it to 0. This is among the most critical tasks in a fingerprint-matching system. The binarized image is then thinned using a block filter to reduce the thickness of all ridge lines to a single-pixel width so that minutiae points can be extracted effectively. Thinning preserves the outermost pixels by placing white pixels at the boundary of the image; as a result, the first five and last five rows and the first five and last five columns are assigned the value one.
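The binarization step described above can be sketched as follows; the 4x4 grayscale patch and the threshold of 128 are assumed values for illustration, not taken from the paper:

```python
import numpy as np

def binarize(gray, threshold=128):
    """Convert a grayscale image to binary: pixels above the
    threshold become 1, pixels below it become 0."""
    return (gray > threshold).astype(np.uint8)

# Illustrative 4x4 grayscale patch (values are invented); the two
# bright middle columns stand in for a ridge line.
gray = np.array([[10, 200, 210, 20],
                 [15, 220, 205, 25],
                 [30, 190, 195, 40],
                 [12, 210, 200, 18]], dtype=np.uint8)

binary = binarize(gray)
print(binary)
```

A real system would follow this with the thinning pass so that the 1-valued ridge region collapses to a single-pixel-wide skeleton before minutiae extraction.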

The minutiae locations and the minutiae angles are derived after minutiae extraction. Minutiae matching is then used to compare the input fingerprint data with the template data; it uses the minute features on the finger.

The major minutiae features are the ridge ending, the bifurcation, and the short ridge. A ridge ending is the point at which a ridge terminates; at a bifurcation, a single ridge splits into two ridges; short ridges are ridges significantly shorter than the average ridge length on the fingerprint. Minutiae and patterns are very important in the analysis of fingerprints, since no two fingers have been shown to be identical. During the matching process, each input minutiae point is compared with the template minutiae points.
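The per-point comparison can be sketched as below. Representing each minutia as an (x, y, angle) tuple and the specific distance and angle tolerances are assumptions for illustration, not values from the paper:

```python
import math

def match_score(input_minutiae, template_minutiae,
                dist_tol=10.0, angle_tol=0.26):
    """Fraction of template minutiae that pair with an input minutia
    within a distance tolerance (pixels) and angle tolerance (radians)."""
    matched = 0
    for (x1, y1, a1) in input_minutiae:
        for (x2, y2, a2) in template_minutiae:
            dist = math.hypot(x1 - x2, y1 - y2)
            if dist <= dist_tol and abs(a1 - a2) <= angle_tol:
                matched += 1
                break  # each input point pairs with at most one template point
    return matched / max(len(template_minutiae), 1)

template = [(10, 12, 0.5), (40, 55, 1.2), (70, 20, 2.0)]
probe    = [(11, 13, 0.55), (41, 54, 1.15), (90, 90, 0.1)]
print(match_score(probe, template))  # 2 of 3 minutiae pair up
```

A real matcher would first align the two prints (compensating translation and rotation) before pairing points; this sketch assumes the prints are already registered.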

3. Finger Based Attendance System

Student identification is done by the student's fingerprint: for identification, the device scans the ridges and edges of the finger and creates a template. The system then searches all the templates stored in the system database and matches the scan against each saved template. The student's fingerprint from the fingerprint scanner is used as input; the scanner can read the fingerprints of one or more fingers of both hands. The basic student information is stored in its own table, and the fingerprint templates are stored in a template data table.
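The identification step described above is a one-to-many search over the template table. A minimal sketch, in which the table layout, the toy score function, and the acceptance threshold are all hypothetical illustrations:

```python
# One-to-many identification: compare the scanned template against every
# enrolled template and return the best match above a threshold.
def identify(scan, template_table, score_fn, threshold=0.6):
    best_id, best_score = None, 0.0
    for student_id, stored in template_table.items():
        score = score_fn(scan, stored)
        if score > best_score:
            best_id, best_score = student_id, score
    return best_id if best_score >= threshold else None

# Toy score function: fraction of positions where two equal-length
# feature vectors agree (stands in for real minutiae matching).
def toy_score(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

templates = {"S001": [1, 0, 1, 1], "S002": [0, 0, 1, 0]}
print(identify([1, 0, 1, 1], templates, toy_score))  # prints S001
```

In the system described here, a successful lookup would then mark the matched student present in the attendance table.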


This paper presented the related work and a performance analysis for fingerprint biometrics. It mainly comprised the development of an attendance management system and a fingerprint identification system; attendance management was made automated and online. The fingerprint technology does have some limitations. A biometric sample is taken at enrollment for later recognition in either identification or verification mode. The system is platform independent, its results are very accurate, and it provides a real-time security system along with portability and low cost.




R.Astalakshmi & M.H.Saied Sharmila

MAM College, Trichy


The Internet of Things (IoT) is rising rapidly, and its interconnection with mobile devices and the Web is increasing. Mobile devices connected to sensors provide more developed services; they enhance the user's knowledge, experience, and awareness and encourage better living. This survey paper presents the various applications and fields of mobile sensing when combined with IoT or Web technology. The study covers 60-70 papers comprising the most relevant information in the domain and categorizes them into eight categories based on their area of application (health, transportation, games and sports, and agriculture) and the nature of the interface (crowd sensing, feedback, control). The challenges and problems are analyzed and discussed, and finally, suggestions are provided to enhance the use of mobile sensing in various fields.

Keywords: Internet of Things (IoT), mobile sensing, Radio Frequency Identification (RFID), QR sensing, barcodes, NFC, Web of Things (WoT).


The present era of wireless sensor networks, radio-frequency identification (RFID), and short-range wireless communication has allowed the Internet to reach embedded computing. Ordinary objects in our surroundings are attached to sensors, become interconnected, and are uniquely addressable; allowing them to interact through the Internet has made the Internet of Things a reality. Moving to IPv6 (128-bit addresses) provides the extensive addressing range needed to join the physical environment with the digital world (mobiles, web applications, and so on), causing the IoT to grow at a faster rate. This developing technology is supported by the Web of Things (WoT), which additionally defines Web mechanisms to interface this generation of devices with physical objects. The IoT interconnects the different devices at the network layer, while the WoT extends Web practice to bring the devices into it. The coupling of mobile computing with IoT and WoT has expanded with the rapid rise of smartphones carrying multiple sensors. This enables higher-level applications for measuring temperature, calculating acceleration, checking locations, recording videos, or capturing photographs. There are several survey papers related to IoT and mobile sensing; however, we have surveyed the fast-growing interconnection between mobile sensing and the Internet of Things or Web. We have gone through a significant number of works in the field of mobile sensing and the Internet of Things, and finally derived different categories that explain the work and mobile applications in each field. These applications face many issues and challenges, which are discussed in the last section.


The mobile phone has become a mandatory aspect of our lives. The mobile device has extended its use to areas such as text messages, taking pictures, and even detecting our location (with built-in GPS). The Internet of Things and the Web of Things (IoT, WoT) have made it possible for the user to interact with the physical world through the mobile phone and to sense almost anything with its sensing applications. Work related to mobile technologies can be classified into areas such as:

1) Crowd sensing

Crowd sensing is a method wherein data is collected and shared with the help of mobile devices. Due to the ubiquitous presence of mobile devices, it has become an enticing method for businesses and companies, including Google and Facebook, for providing their services.

2) Control

Controlling appliances is another area where sensing plays an important role. Home appliances can be controlled by sensors that detect, for example, how far we are from home. There are other areas too, such as offices and factories, where this can be a daily need.

3) Health

Health-related mobile applications are a frequently used approach, especially for elderly individuals today. Sensors detect body movement and transfer the data to the web for analysis, after which appropriate feedback is given.

4) Feedback applications

These include mobile applications that offer feedback to the user about their environmental conditions. Such applications can reduce the increasingly negative effects of excessive environmental consumption.

5) Agriculture

Mobile applications have been a boon for farmers, improving productivity and keeping them in touch with customers. They even help to check productivity and manage livestock. This is known as smart farming.

6) Games and Sports

The different physical sensors used during sports activities give information about the athlete's performance, checking the heart rate and measuring the speed and the distance covered. Mobile devices also offer online games that let users compete with their friends.

7) Interaction with Surroundings

Innumerable applications today connect the world. Social networking sites share information with the general public and spread our social messages.

8) Transportation

The sensing features of the mobile phone are also used to look for suitable parking areas, help with traffic delays and road conditions, and warn of accidents. In this section, each application category is presented, and the most significant works done in each field are discussed:

‣ Mobile sensing

‣ Transportation control

‣ Games and sports

‣ Health

‣ Crowd sensing

‣ Feedback

‣ Agriculture

A. Crowd sensing

Crowd sensing is basically the gathering of data and the interpretation of it. It essentially focuses on sensing the area in which people live. It is a collaboration of groups of people sharing their data, which helps in facing environmental challenges. The use of sensing smartphones helps us investigate aspects of our lives: it is a scenario of gathering and sharing local information and analyzing it. In some applications, such as NoiseTube, mobile phones work as noise sensors to evaluate a person's noise exposure in the everyday environment. Mobile sensing makes it possible for users to get involved in different activities, and it also helps them become aware of their conditions and find solutions for improvement.

B. Control

Mobile phones can also be used as a means to control various physical devices from anywhere at any time. One of the best-known conventional examples is home automation, which may include controlling the lights, ventilation, air conditioning, and also the security system. This approach can likewise be used in several other fields. The Internet of Things has provided the reliability to sense and monitor the things within a network, and advances on the web have been providing improved options for home automation: devices can be connected with the help of Wi-Fi and then controlled through the web. A mobile application system used to control and adjust the lights and fans of a house has been presented in the literature; this is a measure to save energy and use appliances more conveniently. There are many more arrangements made to support home and building automation through web applications. Ocean is an automation service that controls the environment of the house through the mobile phone. Moreover, even the growth of plants can be sensed with the help of sensors and a mobile phone to manage parameters such as humidity, temperature, and watering. Thus, with the help of mobile applications, physical devices can be controlled effectively, providing convenience as well as quality of life. Energy saving can likewise be cited as a benefit, both for our comfort in the present and for the future.

C. Health

In this category, mobile applications connect body sensors with the web or phone sensors, and are used to recognize problems by assessing abnormal behavior of a body part. Body sensors check the health status and send the readings to the phone; these are shared on the Web, and shortly afterwards analysis and testing are done by health specialists. Concretely, a wireless communication network is built that connects the phone with sensors on human body parts. Medical issues are becoming more common than ever in the world today. With the advances in medical science, it has now become quite easy to observe health-related issues and the lifestyles of people, which are rapidly deteriorating.

There are various causes of major medical issues, such as an unbalanced diet, lack of calorie expenditure, environmental problems, stress, and so on. These have led people to contend with various chronic illnesses such as infections, diabetes, blood pressure problems, obesity, and so on. Mobile applications such as FoodCam are used to monitor a person's dietary patterns with respect to nutrients. Hence, health has become a key concern for people. The rise of communication technologies has prompted many changes in people's lives, and the Internet of Things can likewise serve the quality of life of individuals experiencing various ailments. This can be done with the assistance of health communication technologies.

D. Feedback Applications

A feedback mobile application gives the user feedback about environmental impact, for instance, energy usage, water usage, and capacity. It acts as a loop between the user and the environmental response: these applications return feedback to the user. An ammeter system monitors or measures the energy usage of a home appliance and gives the user feedback about the appliance's energy consumption so that it can be checked or controlled. With this technology, it is also possible to compare one's energy usage with that of online neighbors. Feedback applications on smartphones also measure the traffic and the rising pollution that accompany a growing population. This helps in drafting new benchmarks for remote sensing of one's physical surroundings.

E. Agriculture

Automation greatly influences farming, creating a smart farming [16] method that allows more effective agriculture. Mobile applications connect farmers with plant practice, which enables a proper utilization of resources. With mobile computing applications like MooMonitor+, farmers measure the health and fertility of their cattle by placing sensors on their bodies. With mobile applications, farmers can also sell their produce by sharing it on the Web, which enables online deals. There are countless food applications through which farmers scan products to check the quality of the food; this can also check the expiry of produce, which leads to a reduction of food wastage. As agriculture is not a sheltered domain, smart sensing and mobile computing are essential to save crops from pests and to manage optimal pH and temperature. In this way, connecting mobile computing with IoT, including web data, permits a faster, more informed development of farming.

F. Games and Sports

These recreational activities are among the fastest-growing applications of Internet of Things and Web of Things technology. In this category, mobile applications like BikeNet compute the acceleration, distance covered, breathing rate, and so on of the sportsman. Additionally, there are sensors placed in shoes to calculate the pressure produced by an athlete, which helps him improve his performance. Internet games and virtual games capture the physical presence of the mobile user, and interaction with the outside environment upgrades the gaming experience of the mobile user.

G. Interaction with Surroundings

This is the broadest category, which connects the surroundings with web-enabled services to enhance or automate things. Frank et al. presented the idea of a search framework in which sensor-equipped cell phones are used to locate lost items: the phones are connected to an electronic product code framework that reads RFID tags and barcodes.

Barcode scanning and resolution are possible through cell phones. Pharmacy stores are also targeted, to improve the search process: NFC-enabled phones are used to check the availability of a particular medicine. With mobile applications like My2Cents, users can give feedback about a product and also share their experience with it. Connecting with the environment or things through mobile sensing requires various technologies such as NFC, RFID, barcodes, and QR sensing.

H. Transportation

The Web of Things and IoT aim to connect transportation with mobile applications for better driving, easier parking, and more convenience in public transport. For example, a density-based traffic controller with IoT manages the traffic-light system in accordance with the density of the crowd at a particular junction and shares the traffic information through a mobile application. Waze is a popular mobile application that connects 60 million users; it reports the status of traffic and provides live mapping, voice navigation, alerts about road accidents, and even more. The INRIX traffic app provides similar services and offers parking services based on the type of vehicle and the period of parking.

VTrack monitors traffic delays by locating drivers' phones. Road-situation monitoring and alert functions have also been provided, offering alerts about road conditions to the user. ParkHere is a GPS-based parking sensing facility that helps the driver automatically find vehicle parking. GoMetro is a mobile application that links the user's location with public transport for convenience. In this field, authors would like to do more research work to offer better driving facilities. Mobile apps like GoMetro are becoming increasingly popular.


As discussed above, we can see the number of applications that join the Internet of Things or Web of Things with mobile sensing to obtain more informed and improved services. In this section, we analyze and discuss the open issues associated with each application or work. From this study of the related work, the following observations have been formed:

a) Mobile phone sensors are used in a participatory way to allow users to combine information on the Web or Internet.

b) Mobile applications are used to control devices (home automation) situated in the physical environment.

c) Cell phones connect body sensors with the web for sports and health applications.

d) Sensors are placed inside a cow's body to gauge the health and fertility of the animal.

e) Mobile applications are used to check the status of traffic and provide live mapping.

f) Mobile applications share the data of different users to enable social interaction.

The sensing technologies in each of the eight applications have been explained. Data integration from different cell phones is an expensive undertaking and requires a great deal of computation. According to the survey, several billions of dollars are spent on these IoT works, and IoT profit is expected to grow quickly in the coming years. Mobile computing has an extraordinary effect in the fields of health, transportation, control, and games and sports; these applications have the highest business value according to the survey. Mobile applications in feedback and home automation are expanding. Finally, the use of mobile applications in smart farming or agriculture and in interaction with things is still slower. There are several challenges, discussed in various reviews, that arise when mobile sensing is joined with the Internet of Things or Web of Things, which are as follows:

1. Regular Sensing

Some mobile applications require continuous sensing and data transfer through sensors, which demands high computational cost, storage, and other hardware. There must be some alternative in which sensing is a secondary task on the cell phone.

2. Mobile Crowd sensing

It has various issues, such as how to measure sensing quality, how to deal with incomplete information, and how to use common services.

3. Security

Security is the most important concern in mobile sensing when interacting with IoT or WoT. Security must be ensured while sharing resources, protecting them against unauthenticated users. High-security architectures like Yaler must be developed as a solution.

4. Privacy

Privacy is a significant issue and challenge for the user. During the sharing of personal data on the web, privacy must be kept in mind; mobile applications must be furnished with strong privacy safeguards.

5. Congestion

While exchanging information or sharing resources, it must be guaranteed that the information being exchanged is error-free and accurate.

6. Precision

Precision is an important aspect, particularly in a few applications, such as health, that consist of body networks. Here the measurements must be reliable and exact. Mobile applications must recognize these constraints; it must also be determined what level of measurement error can be tolerated.

7. Personal

It is expected that your cell phone will alert you about your health. Suppose you are visiting your doctor, and his phone suggests supplements and nutrients for you according to the status of your body sensors.

8. Mood analysis

This is analogous to the personal category: it offers the user better attitudes for managing a situation according to their emotions as read by the mobile application. It helps employees or workers improve their wellbeing and productivity.

9. Cloud storage

Cloud storage is essential in most crowd applications in order to store the enormous amount of data generated by mobile applications. Regular sensing leads to the generation of big data, which must be kept in an appropriate place.

10. Market Cases

It is important to realize the full market potential of IoT or WoT with mobile sensing, demonstrating the automated world. The financial feasibility of IoT or Web-based resources must be clear. Each of the issues discussed above affects the different applications in a different way.

According to the authors of the many pieces of research examined in this survey, the outcome and significance of each issue differ in every application. They can be classified as very critical, essential, important, less important, and not applicable. For instance, security is critical in health, control, and feedback; less important in crowd sensing, games, and sports; and important in agriculture and interaction with things. It is observed that security and privacy are fundamental in the fields of health, feedback, and transportation. Cloud storage is vital in applications where big data is generated, as in crowd sensing. Solutions to the existing issues would not only raise the usage of mobile sensing in IoT or WoT but also increase the market value, especially in the fields of health, games, and transportation. Mobile application areas that will be significant or growing in the future include the smart city, tourism, earthquake and flood management, secure transactions, and micro-networking inside the human body for health-related data.


This is a survey paper that describes the various uses of mobile sensing. The Internet of Things and web technologies have been fundamental in making our lives much simpler, furnishing us with various applications on the cell phone that can sense the world through these technologies. We have discussed mobile sensing and its applications in different fields; this provides more dynamic information to the public, helping in better decision-making in our lives. We studied numerous research papers and concluded that the field of mobile sensing is increasingly becoming large and important.

This paper examines the eight distinct categories of mobile sensing (crowd sensing, feedback, health, transportation, sports, interaction with the surroundings, agriculture, control). Additionally, the challenges of mobile sensing in the various fields have been discussed. Finally, we have investigated the issues and proposed specific solutions to enhance the use of mobile sensing more efficiently. Summing up, the Web of Things and web applications have enabled us to exploit new opportunities, and the critical factor is to keep working in this field and to develop more research to meet the challenges.


1. P. Sommer, L. Schor, and R. Wattenhofer, "Towards a zero-configuration wireless sensor network architecture for smart buildings," in Proc. BuildSys, Berkeley, CA, USA, 2009, pp. 31-36.

2. D. Yazar and A. Dunkels, "Efficient application integration in IP-based sensor networks," in Proc. 1st Workshop on Embedded Sensing Systems for Energy-Efficiency in Buildings (BuildSys), Berkeley, CA, USA, Nov. 2009, pp. 43-48.

3. J. W. Hui and D. E. Culler, "IP is dead, long live IP for wireless sensor networks," in Proc. 6th Conf. on Networked Embedded Sensor Systems (SenSys), Raleigh, NC, USA, Nov. 2008.

4. E. Wilde, "Putting things to REST," School of Information, UC Berkeley, CA, USA, UCB iSchool Tech. Rep. 2007-015, Nov. 2007.

5. A. Kamilaris, A. Pitsillides, and M. Yiallouros, "Building energy-aware smart homes using Web technologies," J. Ambient Intelligence and Smart Environments, vol. 5, no. 2, 2013.

6. N. D. Lane et al., "A survey of mobile phone sensing," IEEE Commun. Mag., vol. 48, no. 9, pp. 140-150, 2010.

7. G. G. Meyer, K. Främling, and J. Holmström, "Intelligent products: A survey," Comput. Ind., vol. 60, no. 3, pp. 137-148, 2009.



M.Bhuvaneswari & R.Vinci Vinola

Fatima College(Autonomous), Madurai


India is an agricultural country wherein about 70% of the population depends on agriculture. Agriculture has changed more in the past century than it had since farming began many millennia ago. Research in agriculture is aimed at increasing productivity and food quality at reduced expenditure and with increased profit. Nowadays, computer technologies have been shown to improve agricultural productivity in a number of ways. One technique emerging as a useful tool is image processing. These techniques produce an output for an input image acquired through any imaging technique. The results are informative and supportive for farmers, agro-based industries, and marketers, and they provide timely support for decision-making at an affordable cost. This paper presents a short survey on using image processing techniques to assist researchers and farmers in improving agricultural practices.


Keywords: Precision Agriculture; Image Processing; Profit for Farmers.


The demand for food continues to grow in the face of an increasing population, climate

change and political instability. The agriculture industry continues to search for new ways to

improve productivity and sustainability. This has resulted in researchers from multiple disciplines

searching for ways to incorporate new technologies and precision into the agronomic system.

There is a need for efficient and precise techniques of farming, enabling farmers to put minimal

inputs for high production. Precision agriculture is one of such techniques that is helping farmers

in meeting both the above needs, precision agriculture can assist farmers in decision making

about seed selection, crop production, disease monitoring, weed control, pesticide and fertilizers

usage. It analyzes and controls farmer’s requirements using location specific information data and

imagery techniques, Schellberg et al. (2008). In many parts of the world, mainly in the rural areas,

this kind of data is inaccessible and the cost of procurement of these techniques is also not

affordable by the farmers, Mondal and Basu (2009). The trend towards precision farming

techniques is reliant on location specific data, including the building of multiple image databases.

This paper therefore presents a survey of image processing techniques for agriculture to be used for decision making.



Image processing techniques can be used to enhance agricultural practices by improving the

accuracy and consistency of processes while reducing the farmer's manual monitoring. They offer

flexibility and can effectively substitute for the farmer's visual decision making. Commonly used terms include the following:

1. Image acquisition: the process of retrieving a digital image from a physical source, capturing an image using sensors.

2. Gray scale conversion: the process of converting a color or multi-channel digital image to a single channel in which each image pixel possesses a single intensity value.

3. Background subtraction: retrieving foreground objects from the image background.

4. Image enhancement: improvement in the perception of image details for human and machine analysis.

5. Image histogram analysis: analysis of the pixel plot in terms of the peaks and valleys formed by pixel frequencies and pixel intensities.

6. Binary image segmentation: separation of foreground objects from the background in a binary (black and white) image.

7. Color image segmentation: separation of image objects or regions of interest in a color image.

8. Image filtering: the process of distorting an image in a desired way using a filter.

9. Feature extraction: the process of defining a set of features or image characteristics that efficiently and meaningfully represent the information important for analysis and classification.

10. Image registration: the process of transforming different sets of data into one coordinate system.

11. Image transition: the process of changing state or defining a condition between two or more images.

12. Image object detection: the process of finding instances of real-world objects such as weeds, plants and insects in images or video sequences.

13. Image object analysis: the process of extracting reliable and meaningful information from images.

Table (1) lists applications/models/systems developed using image processing techniques and remarks on their accuracy and usability.






Author(s), year | Purpose | Technique(s) | Remarks
Payne et al., 2014 | To predict mango | Image processing | Mango fruits were detected
Prathibha et al. | Early pest detection | Image processing | System can detect the pests in tomato fruit in the early stage
Intaravanne & Sumriddetch, 2015 | To predict the needed fertilizer | Android device and image processing | Leaf color levels can be accurately identified
Mainkar et al., 2015 | To automatically detect plant leaf diseases | Image analysis | Leaf diseases were detected by the proposed system
Gopal, 2016 | To develop an auto irrigation and pest detection system | Image processing | Developed model was useful for irrigation and pest detection
Hanson et al., 2016 | For detection and identification of plant disease | Image processing, neural networks | Watermelon leaf diseases were detected with 75.9% accuracy
Zhao et al., 2016 | To predict oriental fruit moths | Digital image processing | Developed system was successful for prediction (0.99)
Maharlooei et al., 2017 | To detect and count different sized soybean aphids on a soybean leaf | E-image processing technique | Image captured with an inexpensive digital camera gave satisfactory results

Gray Scale Conversion

After image acquisition, pre-processing of the images involves gray scale conversion,

Eerens et al. (2014) and Jayas et al. (2000). Du and Sun (2004) highlight gray scale conversion as

an intermediate step in food quality evaluation models. They reported various applications

evaluating food items like fruits, fishery, grains, meat, vegetables and others, and the use of image

processing techniques applicable to different assessments. Other work (2013) reported on the use

of gray level determination of foreign fibers in cotton production images, which enhanced

background separation and segmentation. Jayas et al. (2000) also demonstrated image analysis

techniques using neural networks for classification of agricultural products. This study

reported that multi-layer neural network classifiers are the best at performing categorization

of agricultural products.
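As a concrete illustration of the gray scale conversion step discussed above, here is a minimal sketch (ours, not from the cited studies) using the common ITU-R BT.601 luminance weights; the nested-list image representation is an assumption made for the example:

```python
# Minimal sketch (not from the surveyed papers): gray scale conversion of
# an RGB image using the ITU-R BT.601 luminance weights. The image is
# represented as a nested list of (R, G, B) tuples with 0-255 values.

def to_grayscale(rgb_image):
    """Convert each (R, G, B) pixel to a single 0-255 intensity value."""
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in rgb_image
    ]

# A 1x3 image: pure red, pure green and pure blue pixels.
image = [[(255, 0, 0), (0, 255, 0), (0, 0, 255)]]
print(to_grayscale(image))  # [[76, 150, 29]]
```

After this step each pixel carries a single intensity value, which is the form the segmentation and histogram techniques above operate on.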

Image Background Extraction in Applications

The background is of minimal use, so it is preferable to extract it from the images. Images

having regions of interest as solid objects against a dissimilar background are easily extractable.

This results in a non-uniform gray level distribution between the objects of interest and the image

background, Eerens et al. (2014) and Jayas et al. (2000). Following this understanding, Du and

Sun (2004) report various applications where the background is not taken into consideration while

evaluating food product quality, including pizza, corn germplasm and cob, etc. Similarly, Wu

et al. (2013) extracted the background of the foreign fiber images detected in cotton products. This

aids in the clear detection of foreign fibers which were otherwise difficult to trace. A survey of

advanced techniques by Sankaran et al. (2010) highlights the use of fluorescence spectroscopy and

imaging, visible and infrared spectroscopy, and hyperspectral imaging in detecting plant diseases, and

suggests future enhancements could focus on the metabolic activities of plants and trees

releasing volatile organic compounds.
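The background extraction idea above can be sketched with a global threshold, which separates foreground objects whose gray levels are dissimilar from the background; this is our illustration, not a method from the surveyed papers, and the threshold value 128 is an arbitrary assumption:

```python
# Minimal sketch (our illustration): background extraction by global
# thresholding on a gray-scale image. It works when foreground objects
# and background have dissimilar intensity levels, as described above.

def extract_foreground(gray_image, threshold=128):
    """Return a binary mask: 1 for foreground pixels, 0 for background."""
    return [[1 if px > threshold else 0 for px in row] for row in gray_image]

gray = [[10, 200, 15],
        [220, 240, 12]]
print(extract_foreground(gray))  # [[0, 1, 0], [1, 1, 0]]
```

In practice the threshold would be chosen from the image histogram rather than fixed in advance.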


The review of survey papers on the use of image processing techniques showed that these

techniques can be useful to assist agricultural scientists. Deep learning, which has improved

applications in computer vision, automatic speech recognition and natural language processing,

Bengio (2009), is emerging as the preferred approach. The review found that crop identification

and disease detection are the common uses for the technique.


This paper presented a survey on using image processing techniques used in an agriculture

context. Employing the processes like segmentation, features extraction and clustering can be

used to interrogate images of the crops. There is a need to select the most appropriate techniques

to assist decision-making. The image processing techniques have been used across a vast range of

agricultural production contexts. It can be effective in food quality assessment, fruit defects

detection, weed crop classification. There are a number of applications and methods to choose

from for implementation to real time needs. While the existing applications sustain the needs of

today, more and more new methods are evolving to assist and ease farming

practices. It is evident that these approaches will all contribute to the wider goal of optimizing

global production. One factor which could increase the development of image processing

techniques for agriculture is the availability of online data sets. No online image databases are

available on food quality assessment, fruit defects detection or weed/crop classification. Similar

to databases like handwritten or printed documents and characters, faces, there is a need of

agricultural databases that will ease in the testing and verification of newly developed image

processing methods.


1. Lalit P. Saxena and Leisa J. Armstrong (2014), A survey of image processing techniques for agriculture.

2. A. B. Ehsanirad and Y. H. S. Kumar (2010), Leaf recognition for plant classification using GLCM and PCA methods, Oriental Journal of Computer Science & Technology, 3 (1), pp. 31–36.

3. Image analysis applications in plant growth and health assessment, Journal of Agricultural Faculty of Mustafa Kemal University (2017).

4. C. C. Yang, S. O. Prasher, J. A. Landry, J. Perret and H. S. Ramaswamy (2000), Recognition of weeds with image processing and their use with fuzzy logic for precision farming, Canadian Agricultural Engineering, 42 (4), pp. 195–200.

5. D. S. Jayas, J. Paliwal and N. S. Visen (2000), Multi-layer neural networks for image analysis of agricultural products, J. Agric. Engng Res., 77 (2), pp. 119–128.

6. D. J. Mulla (2013), Twenty five years of remote sensing in precision agriculture: Key advances and remaining knowledge gaps, Biosystems Engineering, 114, pp. 358–371.

7. E. E. Kelman and R. Linker (2014), Vision-based localisation of mature apples in tree images using convexity, Biosystems Engineering, 114, pp. 174–185.

8. C. J. Du and D. W. Sun (2004), Recent developments in the applications of image processing techniques for food quality evaluation, Trends in Food Science & Technology, 15, pp. 230–249.

9. C. Puchalski, J. Gorzelany, G. Zagula and G. Brusewitz (2008), Image analysis for apple defect detection, Biosystems and Agricultural Engineering, 8, pp. 197–205.


R. Thirumalai Kumar & V. Vinoth Kumar

NMSSVN College, Madurai


The development of Cloud Computing services is speeding up the rate in which the

organizations outsource their computational services or sell their idle computational resources.

Even though migrating to the cloud remains a tempting trend from a financial perspective, there

are several other aspects that must be taken into account by companies before they decide to do

so. Cloud computing has become a key IT buzzword. Cloud computing is in its infancy in terms of

market adoption. However, it is a key IT megatrend that will take root. Aiming to give a better

understanding of this complex scenario, this paper gives an overview of cloud computing

as an emerging technology.


Cloud computing is a subscription-based service where you can obtain networked storage

space and computer resources. Cloud computing entails running computer/network applications

that are on other people’s servers using a simple user interface or application format. The cloud

technology has many benefits and that would explain its popularity. First, companies can save a

lot of money; second, they are able to avoid the mishaps of the regular server protocols. When a

company decides to have a new piece of software, whose license can only be used once and it’s

pretty expensive, they wouldn’t have to buy software for each new computer that is added to the

network. Instead, they could use the application installed on a virtual server

somewhere and share it, in the 'cloud'.

How can we use the cloud?

The cloud makes it possible for you to access your information from anywhere at any

time. While a traditional computer setup requires you to be in the same location as your data

storage device, the cloud takes away that step. The cloud removes the need for you to be in the

same physical location as the hardware that stores your data. Your cloud provider can both own

and house the hardware and software necessary to run your home or business applications.

This is especially helpful for businesses that cannot afford the same amount of hardware and

storage space as a bigger company. Small companies can store their information in the cloud,

removing the cost of purchasing and storing memory devices. Additionally, because you only

need to buy the amount of storage space you will use, a business can purchase more space or

reduce their subscription as their business grows or as they find they need less storage space.

One requirement is that you need to have an internet connection in order to access the cloud. This

means that if you want to look at a specific document you have housed in the cloud, you must

first establish an internet connection either through a wireless or wired internet or a mobile

broadband connection. The benefit is that you can access that same document from wherever you

are with any device that can access the internet. These devices could be a desktop, laptop, tablet,

or phone. This can also help your business to function more smoothly because anyone who can

connect to the internet and your cloud can work on documents, access software, and store data.

Imagine picking up your smart phone and downloading a pdf document to review instead of

having to stop by the office to print it or upload it to your laptop. This is the freedom that the

cloud can provide for you or your organization.


‣ The “no-need-to-know” principle in terms of the underlying details of infrastructure: applications

interface with the infrastructure via APIs.

‣ The “flexibility and elasticity” allows these systems to scale up and down at will

utilizing the resources of all kinds (CPU, storage, server capacity, load balancing, and databases)

‣ The “pay as much as used and needed” type of utility computing and the “always on

anywhere and any place” type of network-based computing.

• Clouds are transparent to users and applications, and they can be built in multiple ways

‣ Branded products, proprietary open source, hardware or software, or just off-the-shelf


In general, they are built on clusters of PC servers and off-the-shelf components plus Open

Source software combined with in-house applications and/or system software.


There are different types of clouds that you can subscribe to depending on your needs. As

a home user or small business owner, you will most likely use public cloud services.

1. Public Cloud - A public cloud can be accessed by any subscriber with an internet

connection and access to the cloud space.

2. Private Cloud - A private cloud is established for a specific group or organization and

limits access to just that group.

3. Hybrid Cloud - A hybrid cloud is essentially a combination of at least two clouds,

where the clouds included are a mixture of public and private.


Each provider serves a specific function, giving users more or less control over their cloud

depending on the type. When you choose a provider, compare your needs to the cloud services

available. Your cloud needs will vary depending on how you intend to use the space and

resources associated with the cloud. If it will be for personal home use, you will need a different

cloud type and provider than if you will be using the cloud for business. Keep in mind that your

cloud provider will be pay-as-you-go, meaning that if your technological needs change at any

point you can purchase more storage space (or less for that matter) from your cloud provider.
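The pay-as-you-go model described above can be illustrated with a minimal cost sketch; the function name and the per-hour and per-GB rates below are hypothetical, not taken from any real provider's price list:

```python
# Illustrative sketch (hypothetical rates, not from any provider's price
# list): pay-as-you-go billing charges only for the resources actually
# used, so monthly cost scales with usage instead of up-front outlay.

def monthly_cost(cpu_hours, storage_gb, rate_per_hour=0.05, rate_per_gb=0.02):
    """On-demand cost: compute time plus storage, no fixed licence fee."""
    return cpu_hours * rate_per_hour + storage_gb * rate_per_gb

# A start-up using 200 CPU-hours and 50 GB of storage in a month:
print(monthly_cost(200, 50))  # about 11.0
```

If the start-up grows or shrinks, the bill changes with the usage figures, which is exactly the elasticity the text describes.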

There are three types of cloud providers that you can subscribe to: Software as a Service

(SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). These three types

differ in the amount of control that you have over your information, and conversely, how much

you can expect your provider to do for you. Briefly, here is what to expect from each type.

1. Software as a Service - A SaaS provider gives subscribers access to both resources and

applications. SaaS makes it unnecessary for you to have a physical copy of software to install on

your devices. SaaS also makes it easier to have the same software on all of your devices at

once by accessing it on the cloud. In a SaaS agreement, you have the least control over the cloud.

2. Platform as a Service - A PaaS system goes a level above the Software as a Service

setup. A PaaS provider gives subscribers access to the components that they require to develop and

operate applications over the internet.

3. Infrastructure as a Service - An IaaS agreement, as the name states, deals primarily with

computational infrastructure. In an IaaS agreement, the subscriber completely outsources the

storage and resources, such as hardware and software, that they need.


The use of the cloud provides a number of opportunities

It enables services to be used without any understanding of their infrastructure.

Cloud computing works using economies of scale:

It potentially lowers the outlay expense for start up companies, as they would no

longer need to buy their own software or servers.

Costs would be based on on-demand pricing.

Vendors and service providers recover costs by establishing an ongoing revenue stream.


Data and services are stored remotely but accessible from “anywhere”.

In parallel there has been backlash against cloud computing

Use of cloud computing means dependence on others and that could possibly limit

flexibility and innovation:

The others are likely to become the bigger Internet companies like Google and IBM,

who may monopolise the market.

Some argue that this use of supercomputers is a return to the time of mainframe

computing that the PC was a reaction against.

Security could prove to be a big issue:

It is still unclear how safe out-sourced data is and when using these services

ownership of data is not always clear.


There are also issues relating to policy and access:

If your data is stored abroad whose policy do you adhere to?

What happens if the remote server goes down?

How will you then access files?

There have been cases of users being locked out of accounts and losing access to their data.


Lower computer costs

o You do not need a high-powered and high-priced computer to run cloud

computing's web-based applications.

o Since applications run in the cloud, not on the desktop PC, your desktop PC does

not need the processing power or hard disk space demanded by traditional desktop software.


o When you are using web-based applications, your PC can be less expensive, with a

smaller hard disk, less memory, more efficient processor...

o In fact, your PC in this scenario does not even need a CD or DVD drive, as no

software programs have to be loaded and no document files need to be saved.

Improved performance

o With fewer large programs hogging your computer's memory, you will see better

performance from your PC.

o Computers in a cloud computing system boot and run faster because they have

fewer programs and processes loaded into memory…

Reduced software costs

o Instead of purchasing expensive software applications, you can get most of what

you need for free

• using web-based applications today, such as the Google Docs suite.

o This is better than paying for similar commercial software

• which alone may be justification for switching to cloud applications.

Instant software updates

o Another advantage to cloud computing is that you are no longer faced with

choosing between obsolete software and high upgrade costs.

o When the application is web-based, updates happen automatically

• available the next time you log into the cloud.

o When you access a web-based application, you get the latest version

• without needing to pay for or download an upgrade.

Improved document format compatibility

o You do not have to worry about the documents you create on your machine being

compatible with other users' applications or OSes

o There are potentially no format incompatibilities when everyone is sharing

documents and applications in the cloud.

Unlimited storage capacity

o Cloud computing offers virtually limitless storage.

o Your computer's current 1 Tbyte hard drive is small compared to the hundreds of

Tbytes available in the cloud.

Increased data reliability

o Unlike desktop computing, in which a hard disk crash can destroy all your

valuable data, a computer crashing in the cloud should not affect the storage of

your data.

• if your personal computer crashes, all your data is still out there in the

cloud, still accessible

o In a world where few individual desktop PC users back up their data on a regular

basis, cloud computing is a data-safe computing platform!

Universal document access

o Leaving documents behind is not a problem with cloud computing, because you do not take your

documents with you.

o Instead, they stay in the cloud, and you can access them whenever you have a

computer and an Internet connection

o Documents are instantly available from wherever you are

Latest version availability

o When you edit a document at home, that edited version is what you see when you

access the document at work.

o The cloud always hosts the latest version of your documents

o as long as you are connected, you are not in danger of having an outdated version

Easier group collaboration

o Sharing documents leads directly to better collaboration.

o Many users do this, as it is one of the important advantages of cloud computing

o Multiple users can collaborate easily on documents and projects

Device independence

o You are no longer tethered to a single computer or network.

o Changes to computers, applications and documents follow you through the cloud.

o Move to a portable device, and your applications and documents are still available.


Requires a constant Internet connection

o Cloud computing is impossible if you cannot connect to the Internet.

o Since you use the Internet to connect to both your applications and documents, if

you do not have an Internet connection you cannot access anything, even your

own documents.

o A dead Internet connection means no work and in areas where Internet

connections are few or inherently unreliable, this could be a deal-breaker.

Does not work well with low-speed connections

o Similarly, a low-speed Internet connection, such as that found with dial-up

services, makes cloud computing painful at best and often impossible.

o Web-based applications require a lot of bandwidth to download, as do large documents.


Features might be limited

o This situation is bound to change, but today many web-based applications simply

are not as full-featured as their desktop-based counterparts.

• For example, you can do a lot more with Microsoft PowerPoint than with

Google Presentation's web-based offering

Can be slow

o Even with a fast connection, web-based applications can sometimes be slower than

accessing a similar software program on your desktop PC.

o Everything about the program, from the interface to the current document, has to

be sent back and forth from your computer to the computers in the cloud.

o If the cloud servers happen to be backed up at that moment, or if the Internet is

having a slow day, you would not get the instantaneous access you might expect

from desktop applications.

Stored data might not be secure

o With cloud computing, all your data is stored on the cloud.

• The question is: how secure is the cloud?

o Can unauthorised users gain access to your confidential data?

Stored data can be lost

o Theoretically, data stored in the cloud is safe, replicated across multiple machines.

o But on the off chance that your data goes missing, you have no physical or local backup.


• Put simply, relying on the cloud puts you at risk if the cloud lets you down.


• Many of the activities loosely grouped together under cloud computing have already been

happening, and centralised computing activity is not a new phenomenon

• Grid Computing was the last research-led centralised approach

• However there are concerns that the mainstream adoption of cloud computing could cause

many problems for users

• Many new open source systems appearing that you can install and run on your local cluster

– These systems should be able to run a variety of applications


To summarize, the cloud provides many options for the everyday computer user as well as

large and small businesses. It opens up the world of computing to a broader range of uses and

increases the ease of use by giving access through any internet connection. However, with this

increased ease also come drawbacks. You have less control over who has access to your

information and little to no knowledge of where it is stored. You also must be aware of the

security risks of having data stored on the cloud. The cloud is a big target for malicious

individuals and may have disadvantages because it can be accessed through an unsecured internet

connection. If you are considering using the cloud, be certain that you identify what information

you will be putting out in the cloud, who will have access to that information, and what you will

need to make sure it is protected. Additionally, know your options in terms of what type of cloud

will be best for your needs, what type of provider will be most useful to you, and what the

reputation and responsibilities of the providers you are considering are before you sign up.


Won Kim, "Cloud Computing: Today and Tomorrow," Journal of Object Technology, vol. 8,

no. 1, January-February 2009, pp. 65-72.





Fatima College (Autonomous), Madurai

Image processing is a method to perform some operations on an image, in order to get an

enhanced image or to extract some useful information from it. It is a type of signal processing in

which input is an image and output may be image or characteristics/features associated with that

image. Nowadays, image processing is among rapidly growing technologies. It forms core

research area within engineering and computer science.

Image processing basically includes the following three steps:

1. Importing the image via image acquisition tools

2. Analyzing and manipulating the image

3. Output in which result can be altered image or report that is based on image analysis.

There are two types of methods used for image processing namely, analog and digital

image processing. Analog image processing can be used for the hard copies like printouts and

photographs. Image analysts use various fundamentals of interpretation while using these visual

techniques. Digital image processing techniques help in manipulation of the digital images by

using computers. The three general phases that all types of data have to undergo while using

digital technique are pre-processing, enhancement and display, and information extraction. DIP is the

use of computer algorithms to create, process, communicate, and display digital images. The

input of that system is a digital image and the system process that image using efficient

algorithms, and gives an image as an output. The process of digital image processing is defined in

the form of phases.

1. Image acquisition

2. Image pre-processing

3. Image segmentation

4. Feature extraction

5. Classification based on statistical analysis
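The phases above can be sketched as a chain of simple functions; this is our toy illustration, and each function below is only a placeholder standing in for the real operation it names:

```python
# Minimal sketch (our illustration): the digital image processing phases
# chained together. Each function is a toy stand-in for the real phase.

def acquire():        return [[10, 200], [220, 15]]         # image acquisition
def preprocess(img):  return img                            # e.g. noise removal
def segment(img):     return [[px > 128 for px in r] for r in img]
def features(mask):   return sum(map(sum, mask))            # e.g. lesion area
def classify(f):      return "diseased" if f > 1 else "healthy"

print(classify(features(segment(preprocess(acquire())))))  # prints "diseased"
```

The point of the sketch is the data flow: each phase consumes the previous phase's output, ending in a classification based on a statistical summary of the segmented image.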


In recent years the importance of sustainable agriculture has risen to become one of the most

important issues in agriculture. In addition, plant diseases continue to play a major limiting role in

agricultural production. The control of plant diseases using classical pesticides raises serious concerns

about food safety, environmental quality and pesticide resistance, which have dictated the need for

alternative pest management techniques. In particular, nutrients could affect the disease tolerance or

resistance of plants to pathogens. Some diseases are present but not visible to the naked eye,

which makes them difficult to detect. Earlier, a microscope was used to detect the

disease, but it became difficult to observe each and every leaf and plant. So, the fast and effective

way is a remote sensing technique. Detection and recognition of diseases in plants using machine

learning is very fruitful in identifying disease symptoms at the earliest stage. Computer

processing Systems are developed for agricultural applications, such as detection of leaf diseases,

fruits diseases etc. In all these techniques, digital images are collected using a digital camera and

image processing techniques are applied on these images to extract useful information that is

necessary for further analysis. Digital Image processing is used for the implementation which will

take the image as input and then perform some operation on it and then give us the required or

expected output.

The image processing can be used in agricultural applications for following purposes:

1. To detect diseased leaf, stem, fruit

2. To quantify affected area by disease.

3. To find shape of affected area.

4. To determine color of affected area

5. To determine size & shape of fruits


The image acquisition is required to collect the actual source image. An image must be

converted to numerical form before processing. This conversion process is called digitization.
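As a minimal sketch of digitization (our illustration, not part of the paper's method), a continuous brightness value can be quantized into one of 256 discrete gray levels:

```python
# Minimal sketch (our illustration): digitization quantizes a continuous
# brightness value in [0.0, 1.0] into one of 256 discrete gray levels
# (0-255), the numerical form used throughout this paper.

def quantize(brightness, levels=256):
    """Map a brightness in [0.0, 1.0] to an integer gray level."""
    return min(int(brightness * levels), levels - 1)

print([quantize(0.0), quantize(0.5), quantize(1.0)])  # [0, 128, 255]
```

Sampling (picking pixel positions) plus this quantization step together make up the digitization the text describes.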


The principle objective of the image enhancement is to process an image for a specific task

so that the processed image is better viewed than the original image. Image enhancement

methods basically fall into two domains: the spatial domain and the frequency domain.
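A common spatial-domain enhancement is linear contrast stretching; the sketch below is our illustration, not a method prescribed by the paper, and it remaps intensities so the image spans the full 0-255 range:

```python
# Minimal sketch (our illustration): spatial-domain enhancement by linear
# contrast stretching. Pixel intensities are remapped so the processed
# image spans the full 0-255 range and is better viewed than the original.

def contrast_stretch(gray_image):
    flat = [px for row in gray_image for px in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:                      # flat image: nothing to stretch
        return gray_image
    return [[round((px - lo) * 255 / (hi - lo)) for px in row]
            for row in gray_image]

print(contrast_stretch([[50, 100], [150, 200]]))  # [[0, 85], [170, 255]]
```

Frequency-domain methods would instead transform the image (for example with a Fourier transform) and filter its spectrum, which is beyond this sketch.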


In image processing, segmentation falls into the category of extracting different image

attributes of an original image.

Segmentation subdivides an image into constituent regions or objects.


Three important techniques are used. They are:

1. Artificial Neural Network

2. Clustering Method

3. SVM (Support Vector Machine)



ANN means Artificial Neural Network. It is a type of artificial intelligence that imitates some

functions of the human mind.

ANNs have three layers that are interconnected.

1. The first layer consists of input neurons. Those neurons send data on to the second layer,

which in turn sends data to the output neurons in the third layer.

2. An ANN is also known as a neural network.


An ANN consists of a large number of very simple neuron-like processing elements, a large number

of weighted connections between the elements, and a distributed representation of knowledge over

the connections. Knowledge is acquired by the network through a learning process.


1. It is a non-parametric classifier.

2. It is a universal function approximator with arbitrary accuracy.

3. It is capable of representing functions such as OR, AND and NOT.

4. It is a data driven self-adaptive technique.

5. It efficiently handles noisy inputs.
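The advantage of representing functions such as OR, AND and NOT can be illustrated with a single artificial neuron; this is our sketch, and the weights and biases below are hand-chosen rather than learned:

```python
# Minimal sketch (our illustration): a single artificial neuron with
# fixed weights realizes the logical functions OR, AND and NOT noted
# among the ANN advantages above.

def neuron(inputs, weights, bias):
    """Fire (1) when the weighted sum of inputs plus bias is positive."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if s > 0 else 0

AND = lambda a, b: neuron([a, b], [1, 1], -1.5)
OR  = lambda a, b: neuron([a, b], [1, 1], -0.5)
NOT = lambda a:    neuron([a],    [-1],    0.5)

print([AND(1, 1), AND(1, 0), OR(0, 1), NOT(1)])  # [1, 0, 1, 0]
```

In a real ANN the weights and biases are not hand-chosen but acquired through the learning process described above.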


1. It is semantically poor.

2. Training an ANN is time consuming.

3. It suffers from the problem of overfitting.



This is an iterative technique that is used to partition an image into clusters. Clusters can be

selected manually or randomly. The distance between a pixel and a cluster center is calculated as the

squared or absolute difference, typically based on pixel color, intensity, texture and location,

or a weighted combination of these factors. Commonly used clustering algorithms are the

k-means algorithm, the fuzzy c-means algorithm and the expectation–maximization (EM) algorithm.


Clustering is based on intensity and threshold separation.

1. It uses a stochastic approach.

2. Performance and accuracy depend upon the threshold selection.


1. It is a simpler classifier, as it excludes any training process.

2. It is applicable in the case of a small dataset which is not trained.


1. The speed of computing distances increases with the number of samples available in training.

2. Testing each instance is expensive, and it is sensitive to irrelevant inputs.
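The clustering idea above can be sketched with k-means on one-dimensional pixel intensities; this is our illustration, assigning each pixel to the nearest cluster center by absolute difference and then updating the centers:

```python
# Minimal sketch (our illustration): k-means on gray-level pixel values.
# Each pixel is assigned to the nearest center by absolute difference,
# then each center moves to the mean of its assigned pixels.

def kmeans_1d(pixels, centers, iterations=10):
    """Cluster pixel intensities around the given initial centers."""
    for _ in range(iterations):
        groups = [[] for _ in centers]
        for p in pixels:
            i = min(range(len(centers)), key=lambda k: abs(p - centers[k]))
            groups[i].append(p)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers

pixels = [12, 15, 14, 200, 210, 205]
print(kmeans_1d(pixels, [0, 255]))  # two centers, near 13.7 and 205.0
```

On a leaf image the dark cluster might correspond to lesions and the bright cluster to healthy tissue, which is how clustering supports the disease detection discussed in this paper.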

SVM (Support Vector Machine)


A support vector machine builds a hyperplane or set of hyperplanes in a high- or infinite-dimensional

space, which is used for classification; a good separation reduces the generalization error of the classifier.

SVM uses a nonparametric, binary classifier approach and can handle more input data very

efficiently. Performance and accuracy depend upon the hyperplane selection and kernel parameters.


1. It gains flexibility in the choice of the form of the threshold.

2. Contains a nonlinear transformation.

3. It provides a good generalization capability.

4. The problem of overfitting is eliminated.

5. Reduction in computational complexity.


1. Result transparency is low.

2. Training is time consuming.

3. Structure of algorithm is difficult to understand.
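The maximum-margin hyperplane idea behind the SVM can be illustrated with a toy Python sketch that fits a linear separator by sub-gradient descent on the regularised hinge loss (a Pegasos-style illustration with arbitrary learning-rate and regularisation values, not a production implementation and not the method of any cited system).

```python
import numpy as np

def train_linear_svm(x, y, lam=0.01, lr=0.1, epochs=200):
    """Fit a separating hyperplane w.x + b by sub-gradient descent on the
    regularised hinge loss; labels y must be in {-1, +1}."""
    n, d = x.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (x @ w + b)
        mask = margins < 1                        # margin violators
        if mask.any():
            grad_w = lam * w - (y[mask, None] * x[mask]).sum(axis=0) / n
            grad_b = -y[mask].sum() / n
        else:
            grad_w, grad_b = lam * w, 0.0         # only the regulariser remains
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

On linearly separable data the learned hyperplane classifies every training point correctly; a kernel would be needed for nonlinearly separable data, as the text notes.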


Therefore, this system will help farmers to increase agricultural production. With this method we can identify diseases present in both monocot and dicot plants. We mainly used the popular clustering method, which offers an accurate and fast approach to disease detection. Using this concept, disease identification can be performed for all kinds of plants, and the user can also learn the affected area of the plant as a percentage. By identifying the disease properly, the user can rectify the problem easily and at less cost. We therefore conclude that image processing is one of the important tools for plant disease detection.






S.Jaishreekeerthana & S.Kowsalya

MAM College, Trichy

Since security vulnerabilities represent one of the grand challenges of the Internet of Things, researchers have proposed what is known as the Internet of Biometric Things, which blends traditional biometric technologies with context-aware authentication techniques. One of the most popular biometric technologies is electronic fingerprint recognition, which captures fingerprints using diverse technologies, some of which are more suitable than others. In addition, diverse fingerprint databases have been built to study the effect of various factors on the accuracy of different fingerprint recognition algorithms and systems. In this paper, we survey the available fingerprint acquisition technologies and the available fingerprint databases. We also identify the advantages and disadvantages of each technology and database. Index Terms—Capacitive, Digital Cameras, Fingerprints, Internet of Biometric Things, Optical, Synthetic Fingerprint Generation, Thermal, Ultrasonic.


In the near future, billions of objects (or things) will be connected together, creating what is known as the Internet of Things, and security vulnerabilities represent one of its grand challenges. Using biometric identification techniques to address some of these vulnerabilities is a very appealing solution. Recently, several researchers have proposed what is known as the Internet of Biometric Things, which makes use of traditional biometric identification techniques and context-aware authentication systems. One of the most popular biometric technologies is electronic fingerprint verification. It can be used within various applications such as cell phones, ATMs, smart homes, smart buildings, transportation systems, and so on. Consequently, automated fingerprint identification is receiving steadily growing interest from both academic and commercial parties. The importance of focusing on the fingerprint as a principal biometric tool to uniquely identify people, and hence to provide security and privacy, stems from several reasons. (This work was supported by the Jordan University of Science and Technology Deanship of Research.)

Fig. 1: A fingerprint clarification

The first reason is that no two people have been found to have the same fingerprints. Another reason is that fingerprints do not change with age. Finally, each finger of a single individual has a different fingerprint. This fact can be used to identify not just the person who touched a particular object, but also the finger used by that person. As shown in Figure 1, a fingerprint is formed from the following parts: ridge endings, the terminal points of the ridges; bifurcations, the points where a ridge splits into two branches; dots, very short ridges; islands, isolated points without any connectors; pores, white points surrounded by dark points; bridges, small ridges joining two longer adjacent ridges; crossovers, two ridges that cross each other; the core, the center point of the fingerprint; and deltas, points at the lower part of the fingerprint surrounded by a triangle. Fingerprints can be classified according to their shapes into three main types: whorl, arch, and loop fingerprints. Each class has its own ridge flow and pattern. Figure 2 shows these three types with their occurrence rates. Loop fingerprints are the most common, accounting for 60%-65% of all fingerprints. Arch fingerprints, which rarely occur, account for just 5% [6]. A typical Automatic Fingerprint Identification System (AFIS) usually consists of several stages that include image acquisition, fingerprint segmentation, image preprocessing (enhancement), feature extraction, and matching.

• Image Acquisition

The image acquisition stage is one of the most important factors contributing to the success of a fingerprint recognition system. Several elements must be considered before choosing the device that will be used to capture fingerprints, in terms of performance, cost, and the size of the resulting image. In this study we focus on the fingerprint capture stage only.

• Segmentation

The purpose of this step is to separate the image of the finger from the background; it is the initial step of a fingerprint system.

• Preprocessing

This phase applies some modifications to the image with the aim of enhancement, which increases the accuracy of the fingerprint recognition system.

• Feature Extraction and Person Identification

Finally, similar fingerprints are retrieved, so that only this set of (source) fingerprints is compared with the target fingerprint. Fingerprints are acquired in various ways. For non-automated (offline) systems, fingerprints are typically acquired by stamping inked fingertips on a special paper; if these fingerprints must be stored on a digital device, an image of the paper is taken, for example using a scanner. For automated systems, on the other hand, fingerprints are acquired using fingertip scanners. Scanners may use different technologies, for example optical, capacitive, thermal, and ultrasonic. These scanners usually require the fingertip to be pressed on or dragged over some surface. This may reshape or distort the resulting fingerprint, which can lead to inaccurate identification. It also makes it difficult to mix different fingertip scanners in the acquisition and verification processes, as different scanners may produce different distortions. Recently, as most modern digital systems are equipped, or can easily be equipped, with high-quality digital cameras, a line of research has begun to consider the possibility of acquiring fingerprints using these cameras. According to existing studies, this approach is promising and the results are acceptable. To design different techniques and to study the effect of various factors related to automated fingerprint recognition systems, a large fingerprint image set is required. In this paper, we aim to foster the development of research on fingerprint verification. Specifically, we conduct a survey of the available fingerprint image sets and the acquisition devices used to collect each, for example optical sensors, capacitive sensors, thermal sensors, and different kinds of digital cameras. We list the main features of each technology and fingerprint image set, alongside their advantages and disadvantages.
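The feature-extraction stage mentioned above commonly locates the minutiae described earlier, in particular ridge endings and bifurcations. A minimal Python sketch of the classical crossing-number method on a one-pixel-wide binary ridge skeleton is shown below; it illustrates the technique only and is not the algorithm of any particular AFIS.

```python
import numpy as np

def minutiae(skel):
    """Detect minutiae on a one-pixel-wide binary ridge skeleton using
    the crossing-number (CN) method: CN=1 marks a ridge ending and
    CN=3 marks a bifurcation."""
    endings, bifurcations = [], []
    # The 8 neighbours of a pixel, listed in circular order.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = skel.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            if not skel[r, c]:
                continue
            ring = [int(skel[r + dr, c + dc]) for dr, dc in offs]
            # Half the number of 0/1 transitions around the pixel.
            cn = sum(abs(ring[i] - ring[(i + 1) % 8]) for i in range(8)) // 2
            if cn == 1:
                endings.append((r, c))
            elif cn == 3:
                bifurcations.append((r, c))
    return endings, bifurcations
```

On a small synthetic skeleton (a ridge with a branch), the two line ends are reported as endings and the branch point as a bifurcation.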


This section gives an overview of the most common devices used for online fingerprint

acquisition. Many technologies have been utilized in these devices, namely optical, capacitive, thermal, and ultrasonic.

A. Optical sensor

Optical fingerprint devices are the oldest and the most commonly used devices to

capture fingerprints. Figure 3(a) clarifies how this sensor works. The fingerprint image is captured by placing a finger on one side of a prism, with a light source emitting light vertically onto the second prism side and a camera placed in front of the third prism side. The light collected by the camera is then converted into a digital signal, where bright areas reflect the

valleys and dark areas reflect the ridges. This sensor was used to acquire a large number of fingerprint databases, such as FVC2000 (DB1, DB3), FVC2002 (DB1 and DB2), FVC2004 (DB1

and DB2), FVC2006 (DB2). Figure 3(b) is an example of a commercial optical sensor. The

main advantages of optical sensors are temperature insensitivity, low cost, and high resulting

fingerprint resolution. On the other hand, it suffers from many disadvantages. As the optical sensor depends on light reflection, its functionality may suffer from lighting conditions.


A) Clarification of how optical sensors work

B) Commercial optical sensor based fingerprint acquisition device

In addition, latent prints left on the sensor surface may corrupt the captured picture, as they may overlap it, and this residue can even be used to steal the fingerprint. One of the commercial devices that uses this technology, shown in Figure 3(b), is the SecuGen Hamster IV. It is one of the common devices in fingerprint systems and has been certified by the FBI and STQC. Besides the SecuGen Hamster IV, other optical devices were used to collect the IIIT-D Multi-sensor Optical and Latent Fingerprint database.

B. Capacitive Sensor

This device relies on the principle of measuring the electric capacitance of the human skin. The fingertip is placed on a thin plate that contains an array of micro-capacitors. The skin of a fingertip contains ridges and valleys, and the capacitive sensor essentially measures the capacitance of the finger's skin. The capacitance at the ridges differs from the capacitance at the valleys, because the capacitance depends strongly on the distance of the skin from the micro-capacitor plate. A simple model of a capacitive sensor is shown. This sensor was used to capture several fingerprint datasets, including FVC2000 (DB2) [20] and FVC2002 (DB3). An advantage of capacitive sensors is that they are small in size and hence cheaper to produce. They also consume low energy, a benefit that is especially helpful in battery-powered devices. The technology is likewise insensitive to environmental effects such as sunlight. Many manufacturers, for example Sony, Fujitsu, Hitachi, and Apple, have adopted this technology in their fingerprint recognition systems.

C. Synthetic Fingerprint Generation

This method automatically generates artificial fingerprints based on a model. The model is typically controlled by a set of parameters that can be varied to produce different fingerprints in terms of clarity, area, centering, rotation, the difficulty of recognizing the fingerprint, and more. This method can be used to build large fingerprint databases in a short time and at no cost. Additionally, the generated artificial fingerprints can be tuned to simulate natural fingerprints. The method was used to produce parts of some fingerprint databases.

D. Thermal Sensor

This sensor depends on a pyroelectric material, which converts temperature differences into different voltages. Because the temperatures of the ridges and the valleys differ, the voltage generated for each differs as well.

One of the most common thermal sensors is the "Atmel FingerChip", which was used to acquire several well-known fingerprint databases, namely FVC2004 (DB3) and FVC2006 (DB3). This technology is not very common commercially. One of the difficulties faced when dealing with it is that the thermal image must be captured in a very short time: when a finger is placed over the pyroelectric material, the temperature difference between the areas under the ridges and the valleys is initially detectable, but after a short period the difference vanishes as the adjacent areas heat each other. As this method depends on body temperature, it is also very difficult to deceive.

E. Ultrasound Sensor

This device relies on the principle of emitting high-frequency sound waves and then capturing their reflection from the finger surface. The advantage of this technology is its ability to penetrate dust, dirt, and skin oil that may cover the finger surface; ultrasonic fingerprint sensing can therefore capture high-quality images even if the finger is dirty. On the other hand, this method has several weaknesses. For instance, it is very expensive and may take a long time to capture the fingerprint image. Qualcomm is using ultrasonic sensing.


a) Clarification of how Ultrasonic Sensors work

F. Digital Cameras

Many digital camera types are used in the fingerprint systems literature. We categorize them into webcams, cell phone cameras, and advanced digital cameras. Webcams are digital cameras that are generally low in resolution and very cheap; they usually come built into personal computers for video chatting purposes. Since these cameras have low resolution, it is difficult to obtain the fingerprint details from an image, so many studies have proposed image pre-processing techniques to overcome this problem. Over the last few years, cell phone cameras have seen an enormous improvement, which has enabled them to be used to acquire fingerprints. The current resolution of cell phone cameras has reached 8 megapixels on average. Nonetheless, there is a set of criteria that should be considered when taking fingerprint pictures with these devices, for example the lighting, the distance between the finger and the camera, the proper focus, and the image preprocessing steps. Advanced digital cameras have also been considered and used to acquire fingerprint images. These cameras produce high-resolution images, image acquisition is highly flexible, and they can apply advanced image enhancement and compression algorithms.


In this section, we briefly describe some of the most common fingerprint databases. The FVC2000 database was used in the First International Fingerprint Verification Competition. The database consists of four parts: DB1, DB2, DB3, and DB4. DB1 and DB2 contain images of 110 fingers with eight impressions per finger (880 images in total). Images of up to four fingers per participant were taken: the fingerprints of the fore and middle fingers of both hands. Half of the fingerprints are from males, and the ages of the subjects in these two databases are 20-30 years. DB1 was acquired using a low-cost optical sensor (Secure Desktop Scanner), while DB2 was acquired using a low-cost capacitive sensor (TouchChip by ST Microelectronics). It has been noted that these two databases cannot be used to evaluate the accuracy of biometric algorithms, because no precautions were taken to guarantee the quality of the images, and the plates of the sensors were not systematically cleaned after each use. In DB3, the fingerprints were acquired from 19 volunteers, of whom 55% are male. Images of six fingers were taken from every participant (thumb, fore, and middle fingers of the left and right hands). The ages of the subjects are between 5 and 73. DB3 images were acquired using an optical sensor (DF-90 by Identicator Technology). DB4 was generated automatically using synthetic generation.

The FVC2002 database was used in the Second International Fingerprint Verification Competition. It also consists of four parts, DB1, DB2, DB3, and DB4, and the number of images is the same as in FVC2000. In this database, four different kinds of technologies were used to acquire the fingerprints of the four databases: (i) an optical sensor, "TouchView II" by Identix; (ii) an optical sensor, "FX2000" by Biometrika; (iii) a capacitive sensor, "100 SC" by Precise Biometrics; and (iv) synthetic fingerprint generation. Ninety people participated in this database collection. In DB1, DB2, and DB3, images of 30 different participants were collected, with images of four fingers (the fore and middle fingers of both hands) per participant. No extra effort was taken to guarantee the quality of the images, as the purpose of these images was simply to be used in the competition. DB4 was artificially generated.


In this paper, we have surveyed the literature for the most common fingerprint acquisition devices and the different available fingerprint image sets. We have observed that the most recent trend in fingerprint acquisition is to use different kinds of digital cameras, especially cell phone cameras. We have also observed that the available fingerprint datasets acquired using digital cameras are all taken with built-in degradations due to particular lighting, different backgrounds, low camera quality, image compression, and more. In other words, the available fingerprint databases lack a flawless fingerprint database that can be used for testing the effect of various adjustable degradations. As future work, we plan to build a digital-camera-acquired fingerprint set under perfect conditions that can be customized easily and can be used for testing the effect of various degradations on the accuracy of different fingerprint recognition algorithms.


R.Kowshika & R.Nihitha

Fatima College (Autonomous), Madurai


Image processing is a method to perform some operations on an image, in order to get an enhanced image or to extract some useful information from it. It is a type of signal processing in which the input is an image and the output may be an image or the characteristic features associated with that image. Nowadays, image processing is among the most rapidly growing technologies. There are two types of methods used for image processing, namely analogue and digital image processing. Analogue image processing, which is done on analog signals, can be used for hard copies like printouts and photographs; image analysts use various fundamentals of interpretation while applying these visual techniques. The three general phases that all types of data undergo in the digital technique are pre-processing, enhancement and display, and information extraction. Digital image processing deals with developing a digital system that performs operations on a digital image.
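As a small Python illustration of both kinds of output mentioned above, an enhanced image and characteristic features, here is a minimal contrast-stretching sketch. The function name and the returned statistics are our own illustrative choices, and a constant image (max equal to min) is assumed not to occur.

```python
import numpy as np

def contrast_stretch(image):
    """Enhance an image by stretching its gray levels to span 0-255,
    and also return simple characteristics (min, max, mean)."""
    img = image.astype(float)
    lo, hi = img.min(), img.max()
    # Linear map: lo -> 0, hi -> 255 (assumes hi > lo).
    stretched = (img - lo) / (hi - lo) * 255.0
    stats = {"min": lo, "max": hi, "mean": img.mean()}
    return stretched.astype(np.uint8), stats
```

The enhanced image uses the full 0-255 range, while the statistics dictionary is an example of "characteristic features" extracted from the input.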


Capturing three-dimensional models has become more and more important in recent years, due to the rapid development of 3D technologies, especially 3D printers. There are several possibilities for capturing such 3D models, but the most frequent method is 3D scanning, which means direct capturing of 3D models. This technology is spreading into new domains and more and more new applications. This paper deals with applications of 3D scanning technologies in medicine, where 3D scanning brings significant advantages to medical volumetry.


Each cube (voxel) stores a property of the object, e.g. density in the case of MRI or CT, and every object in the scan is then approximated by a number of identical cubes. On the other hand, computations over such a model are simple and fast. It is useful for models in low resolution which are frequently updated and rebuilt.
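A minimal Python sketch of a computation over such a cube model is volume estimation: count the cubes whose density exceeds a threshold and multiply by the volume of one cube. The function and parameter names are illustrative and the density values are made up.

```python
import numpy as np

def object_volume(densities, threshold, voxel_mm=1.0):
    """Estimate an object's volume from a voxel grid of densities:
    count occupied cubes and scale by one cube's volume (voxel_mm^3)."""
    occupied = densities > threshold          # boolean occupancy grid
    return int(occupied.sum()) * voxel_mm ** 3
```

This is exactly the kind of simple, fast operation that makes the voxel representation attractive for frequently rebuilt low-resolution models.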


A universal 3D scanning device suitable for medical purposes should provide the following abilities:

High accuracy - an essential parameter for distinguishing even tiny changes of the body caused by muscle strengthening.

Flexibility - since the device should be universal, it shall be capable of scanning the entire body as well as tiny details; the 3D scanner must therefore be very flexible.

Low operational costs - to allow everyday use, operation shall be inexpensive.

Simple manipulation - the device must be automated as much as possible, not burdening the personnel with complex settings before each scan.

High speed - the scanning procedure must be very fast; otherwise the personnel would not have time to use it and would prefer estimation.

No limitations - the device should be usable with any patient; there should be no limitations regarding metal parts, health state, etc.

Harmless operation - using the device shall not be harmful to the personnel in any way.


To fulfil all these requirements, the design of the medical 3D scanner shall be as follows.

The data-capturing sensor shall be a structured-light 3D scanner or a laser scanner, for precision reasons; the 3D position of each point is then computed by triangulation. Sensor motion shall be motorized, for reasons of precise sensor localization, automatic movement, and keeping the sensor in measuring range. The problem of computational requirements is not serious, since the model is captured once and modified afterwards.

Because such a device is not commercially available, we created our own 3D scanner meeting the specifications. High accuracy is achieved by using a precise manipulator with an accurate laser scanner, and high flexibility is provided by programmable scanning and by the replaceability of the laser scanner, which makes it possible to scan both tiny and large structures. It is a 3D modelling system useful for many different medical applications. Such models are visualized to the operator, who defines the regions of interest (ROI), or the ROI is defined by the method itself.


The technology was recently introduced by Siemens Healthineers. MRI scanners equipped with compressed sensing technology operate much more quickly than the MRI scanners currently in use; the result is the first clinical application using compressed sensing technology. With compressed sensing, scans of the beating heart can be completed in as few as 25 seconds while the patient breathes freely. In contrast, in an MRI scanner equipped with conventional acceleration techniques, the patient must lie still for four minutes or more and hold their breath for as many as seven to twelve times during related procedures. In the future, compressed sensing might change the way MRI of the abdomen is performed. Today, certain populations are excluded from having abdominal MRI due to their inability to perform long, consecutive, and exhausting breath-holds. With compressed sensing, the amount of data required for an image of excellent diagnostic quality may be reduced, enabling imaging of the abdomen in one continuous run. While still in the early stages, research reported recently has made significant steps towards a new MRI method that may enable personalised life-saving medical treatments and allow real-time imaging to take place in locations such as operating theatres and GP practices.
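The principle behind compressed sensing, reconstructing a signal that is sparse in some basis from far fewer measurements than samples, can be illustrated with a toy Python sketch using iterative soft-thresholding (ISTA). This is a didactic illustration of the mathematical idea only, not the clinical algorithm mentioned above; the problem sizes and parameters are made up.

```python
import numpy as np

def ista(A, y, lam=0.01, iters=500):
    """Recover a sparse x from undersampled measurements y = A @ x by
    iterative soft-thresholding on the lasso objective
    0.5*||A x - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                # gradient of the data term
        z = x - step * grad
        # Soft-thresholding enforces sparsity of the solution.
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x
```

With a random measurement matrix, a 3-sparse signal of length 64 can be recovered to good accuracy from only 32 measurements, which is the effect compressed sensing exploits to shorten scans.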

MRI, which works by detecting the magnetism of molecules to create an image, is a crucial tool in medical diagnostics. However, current technology is not very efficient: a typical hospital scanner will effectively detect only one molecule in every 200,000, making it difficult to see the full picture of what is happening in the body. The research team, based at the University of York, has discovered a way to make molecules more magnetic and therefore more visible, an alternative method which could produce a new generation of low-cost and highly sensitive imaging techniques. Professor Simon Duckett, from the Centre for Hyperpolarisation in Magnetic Resonance at the University of York, said: "What we think we have the potential to achieve with MRI could be compared to the improvements in computing power and performance over the last 40 years. While they are a vital diagnostic tool, current hospital scanners could be compared to the abacus; the recent development of more sensitive scanners takes us to Alan Turing's computer, and we are now attempting to create something scalable and low-cost that would bring us to the tablet or smartphone." The research team has found a way to transfer this "invisible" magnetism into a detectable magnetic form. Improved scanners are now being trialled in various countries, but these new models operate in the same way as regular MRI scanners, using a superconducting magnet.



PET stands for Positron Emission Tomography and is a method of body scanning that detects radioactive compounds that have been injected into the body, providing information on function rather than structure. PET scanners are relatively new to the secondary equipment market.


Magnetic Resonance Imaging (MRI) is an imaging technique used primarily in medical settings to produce high-quality images of the inside of the body. MRI produces images that are the visual equivalent of a slice of anatomy. MRI uses radio frequencies, a computer, and a large magnet that surrounds the patient.


Computerized Axial Tomography scanners use a fan beam of x-rays and a detector system that rotates around the patient; the resulting images are then displayed on a computer or transferred to film.


Nuclear medicine diagnostic techniques use very small amounts of radioactive materials. Information from nuclear medicine studies describes organ function, not just structure.



This medical imaging technique uses high-frequency sound waves and their echoes. The main advantage is that certain images can be observed without using ionizing radiation.



These IT-based products improve the speed and consistency of image communication within the radiology department and throughout an enterprise.


This provides a summary of scientific knowledge on security scanners for passenger screening. Although the dose per scan arising from the use of screening for security purposes is well below the public dose limit, this does not remove the requirement for justification.


In order to show the degrees of sophistication, the principles that are generally at the basis of applications are presented. The prospects in this area follow those of scanning. The same scanning modality can produce several types of scans; from this point of view, magnetic resonance imaging (MRI) is a very good example, because different sequences give access to different views of the anatomy.




J. Roselin Monica & S. Sivalakshmi

Fatima College (Autonomous), Madurai


Digital image processing is the use of computer algorithms to perform image

processing on digital images. As a subcategory or field of digital signal processing, digital

image processing has many advantages over analog image processing. It allows a much

wider range of algorithms to be applied to the input data and can avoid problems such as the

build-up of noise and signal distortion during processing. Since images are defined over two

dimensions (perhaps more), digital image processing may be modeled in the form of multidimensional systems. Many of the techniques of digital image processing, or digital picture

processing as it often was called, were developed in the 1960s at the Jet Propulsion

Laboratory, Massachusetts Institute of Technology, Bell Laboratories, University of Maryland, and a

few other research facilities, with application to satellite imagery, wire-photo standards

conversion, medical imaging, videophone, character recognition, and photograph enhancement. The

cost of processing was fairly high, however, with the computing equipment of that era. That changed

in the 1970s, when digital image processing proliferated as cheaper computers and dedicated

hardware became available. Images then could be processed in real time, for some dedicated

problems such as television standards conversion. As general-purpose computers became faster, they started to take over the role of dedicated hardware for all but the most specialized and computer-intensive operations. With the fast computers and signal processors available in the 2000s, digital image processing has become the most common form of image processing, and is generally used because it is not only the most versatile method but also the cheapest. Digital image processing technology for medical applications was inducted into the Space Foundation Space Technology Hall of Fame in 1994.


Visual Effects (abbreviated VFX) is the process by which imagery is created or manipulated outside the context of a live-action shot in film making. Visual effects involve the integration of live-action footage (special effects) and generated imagery (digital effects) to create environments which look realistic but would be dangerous, expensive, impractical, time-consuming, or impossible to capture on film. Visual effects using computer-generated imagery (CGI) have recently become accessible to the independent filmmaker with the introduction of affordable and easy-to-use animation and compositing software. Visual effects primarily divide into two groups:

1. Special Effects: It covers any visual effects that take place in live action, e.g. on set

explosions or stunt performances.

2. Digital Effects (commonly shortened to digital FX or FX): It covers the various

processes by which imagery is created or manipulated with or from photographic

assets. Digital Effects often involve the integration of still photography and computer-generated imagery (CGI) to create environments which look realistic but would be dangerous, costly, or impossible to capture in camera. FX is usually associated with the still-photography world, in contrast to visual effects, which are associated with motion film production. Digital FX also divides into different subgroups of

professions such as:

‣ Matte paintings and stills: digital or traditional paintings or photographs which serve

as background plates for 3D characters, particle effects, digital sets, backgrounds.

‣ Motion Capture (Mo-Cap for short): It’s the process of recording the movements of

objects and/or people. In a session of motion capture, the subject whose motion is

being captured is recorded and sampled many times per second by different scanners

placed all over the environment. There are different types of systems that read the

actor’s movement. One of which is the optical method that uses tracking cameras that

lock onto specialized markers placed over the actor’s motion capture suit. The other

type of method is called the non-optical method, where instead of capturing the markers' locations in space, it records and measures the inertia and mechanical

motion in the area. This type of motion capture doesn’t just apply to the body, but can

be used to track the facial movements and expressions of an actor and transfer them

to a 3d model later on in the pipeline. The same type of concept of using markers to

track motion is used, but more often than not, the actor’s face will have painted dots

on their face rather than ball-shaped markers. Not only are the actor's movements recorded in this process, but the movement of the camera is also recorded, which

allows editors to use this data to enhance the environment the motion captured set is

imagined in. Once all of this is captured, the motion captured data is mapped to a

virtual skeleton using software such as Autodesk’s MotionBuilder or other software

of choice.

‣ Modelling: Creating 3D models of props or characters using specialised software.

‣ Animation: Assign movements for any objects and characters in 2D or 3D.

‣ Compositing: Combining visual elements from different sources to create the illusion

that all those elements are parts of the same scene.
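The core operation of compositing can be sketched in a few lines of Python: the standard "over" blend of a foreground element onto a background plate through a matte (alpha channel). This is a minimal illustration with made-up arrays, not the pipeline of any studio software named above.

```python
import numpy as np

def over(fg, alpha, bg):
    """'Over' compositing: blend a foreground element onto a background
    plate using a matte `alpha` in [0, 1] (1 = fully foreground).
    fg and bg are HxWx3 RGB arrays; alpha is HxW."""
    a = alpha[..., None]              # broadcast the matte over RGB channels
    return a * fg + (1.0 - a) * bg
```

With a matte of 0.25, the result is one quarter foreground and three quarters background at every pixel, which is how separately shot elements are made to appear as parts of the same scene.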


The first animation films can be traced back to the early 1900s and featured characters from the popular comic strips of the time. These films made use of the single-frame method, which involved images projected at a high volume of frames per second. Gertie the Dinosaur, created by sketch artist Winsor McCay in 1914, is believed to be the first successful animated short film.

King Kong, released in 1933, was among the pioneering movies to use this technique. The use of miniature models was another Hollywood VFX technique in use in the early 1900s, but it was taken to another level by the classic sci-fi franchises Star Wars and Star Trek. Hundreds of miniature models manipulated by some brilliant camerawork marked these films, which created an unprecedented fan base.

Superman, released in 1978, was another milestone in the special effects industry. By using cables, blue screen and some clever camera tricks, the movie makers created the illusion of a flying superhero. Visual effects definitely add value to movies, and while Hollywood remains the mecca of special effects, film makers the world over are now using VFX to enhance their movies. Some of the recent hit films in China to have used VFX heavily include Zhong Kui: Snow Girl and the Dark Crystal, John Woo's The Crossing, and the recently released Monster Hunt, which went on to become the highest grossing Chinese film of all time. In India, the larger-than-life Bollywood films use an average of 500 to 1500 VFX shots. Films like Dhoom, Chennai Express and the smash hit Bahubali have used special effects to perfection in recent times. The growing popularity of visual effects in world cinema has led to a spurt in the demand for special effects companies like Toolbox Studio, which has been doing impressive work in the field.


VFX is a technology used to create animated and otherwise impossible movie imagery. This paper discusses the definition, history, and techniques of VFX. It also covers the lifespan of the technology, from its beginnings to its present-day use in films. VFX also offers a good career for people who have a passion for cartoon drawing.





S. Muralidharan & M. Deepak
MAM College, Trichy

Road traffic is one of the most critical problems in our developing world. This paper presents an analysis of various aspects and issues related to the problem. It emphasizes using a prominent technology, the Internet of Things (IoT), to build a smart system that monitors various road-traffic parameters and uses them for effective management.

A survey of the existing systems and the techniques related to the problem area is presented. Different problems such as vehicle detection, occlusion detection, lane-change detection and accident detection, and the methods used to address them, are examined. We propose our "IoT based traffic monitoring and accident detection system", consisting of a Raspberry Pi and a Pi camera as hardware, which takes live video as input and processes it to gather information about the live traffic. The system produces information about the traffic, such as the number of vehicles, emergency accident situations, and improper lane changes of vehicles. The generated information can be used to manage and divert the live traffic as needed to avoid road-related problems.


Keywords: Internet of Things (IoT); Raspberry Pi; Pi camera; Traffic Monitoring;


Rising road traffic is one of the most pressing problems the world faces in everyday life. People suffer every day in one way or another because of heavy road traffic. It is very important to consider and improve road safety while studying the problem of road traffic. An effective answer to this problem is to use smart digital technology to handle traffic in real time. Various research works have tried to find a solution to this problem, yet there is still a need for an efficient and suitable system that can be practically tested and deployed.

Traffic surveillance can be very helpful for future planning, for example pollution control, traffic signal control, infrastructure and road planning, and avoidance of accidental congestion. The data obtained from such a system can be useful for predicting travel time and route. The Internet of Things (IoT) can be very helpful for developing such a smart system. IoT is an interconnected network of physical components and devices connected for the collection, exchange and processing of various kinds of data toward some specific goal. IoT has been very effective in realizing technically smart concepts for various real-world domains such as the smart city, smart home, smart agriculture, and smart security.

This paper presents an in-depth description of the problems and research issues related to traffic surveillance and accident detection systems. We also present the analysis and findings from a study of the literature related to the problem, along with our proposed system and future work. The proposed IoT-based system is built from a processing board to process the data and a camera module to supply the live video as input. A Raspberry Pi board is used as the processing module, and the Pi camera module provides the input video in the raw H.264 format to the Raspberry Pi. The system will detect the number of vehicles passing by, detect accidents, and predict the lane changes of the vehicles on the road. Background subtraction using a Gaussian mixture model and edge detection using the Canny edge detector are implemented on the Raspberry Pi.

Motivation

Major urban areas of India like Mumbai, Surat and Ahmedabad face the road traffic problem badly. People remain stuck in traffic jams and gridlocks for hours at a time.


There are many emergency situations on the road caused by traffic which need immediate attention, and these chaotic emergency situations can turn dangerous. It is therefore important to regulate the traffic to ensure a fast flow of vehicles on the road. Hence, in this growing world of technology, the idea of using smart technological systems with wide future scope can be much more effective.


A. Research Problems Related to Systems in a Live Environment

There is wide scope for the system to be implemented and tested practically. Many researchers have proposed various techniques, such as Canny edge detection, spatio-temporal analysis with Hidden Markov Models (HMM), thresholding and background subtraction, and 3-D models and neural networks, for vehicle tracking, accident detection and lane-change detection. They have used a varying range of hardware, such as the Stratix III FPGA and cameras including pan-tilt-zoom cameras, standard cameras, and V8 camcorders, at different frame rates and resolutions, yet the architectures of the systems developed lack the following elements and need to be evaluated.

1) Practical compatibility

Researchers and technocrats have proposed many ideas and developed many combinations of hardware and software to give the best possible solution they can. Yet the systems still need to demonstrate practical compatibility with the real world for everyday use. Some researchers have worked only with offline video data, which gives no insight into how their proposed work behaves in the real world, while some research work has been tested in the real world but needs more investigation of its behavior in varied real-time situations for a practical approach with scalable deployment on live traffic.

2) Environment and background problem

The background and environment affect the devices in several ways. Obstacles in the background, such as trees, plants and people, are factors that can interfere with the video processing algorithm and hence degrade the results. The frame-differencing technique can impose unexpectedly heavy processing on the Raspberry Pi board if the traffic is too heavy, the background is densely cluttered, and the frame rate is not set to cope with the irregular situation. Hence the surrounding environment captured in the camera frame is a critical point to consider when taking a practical approach to this system.

3) Software and hardware compatibility

As described earlier, many software and hardware combinations have been used in the proposed works. It is very important for the software to work correctly with the hardware. The Raspberry Pi board is not designed to work with x86 operating systems [21], which include Windows and some of the Linux-based OSes. We preferred to use Raspbian OS, a specially designed and preferred operating system for the Raspberry Pi, and Python, the preferred language for working with the Raspberry Pi. The behavior of tools and software differs across hardware: for instance, the installation and operation of Intel's open source OpenCV library for video processing on IoT devices differ from those on a conventionally configured PC, depending on the programming language. Hence it is necessary to find the best combination of the two components that will work well in a live environment with good performance.

4) Independence

Most of the existing systems that have been observed and tested in live environments need their design adapted to work as an independent system, with remote access and control over the network, to give them more compactness and feasibility for real-world deployment.

5) Cost effectiveness

A financially feasible system gives a much more practical and achievable approach. If we consider a larger scenario in which this kind of system has to be used and maintained on a large number of city roads, cost becomes a major factor. There arises a need to build the system with the least expensive hardware and software.

B. Resource Limitations for an IoT-based System

1) Processing power

The proposed system will have limited resources for processing, which affects the processing power. The Raspberry Pi has only 1 gigabyte of RAM and a 1.2 GHz quad-core processor. These resources must be managed and handled in a way that can carry out the various video processing tasks, so we are bound to work within this limitation.


2) Power supply

The proposed system requires a constant power supply, but in the current situation we intend to use a battery. We will therefore have a limited power supply for implementing and testing our system.

3) Atmospheric impact

The hardware components of the proposed system will not have any protection against atmospheric conditions such as heat, rain and cold, which can affect the normal working of the system.

III. RELATED WORK

The past work carried out by various researchers has been analyzed and examined, considering various elements and parameters, to produce the essential findings on the procedures, techniques and criteria required for building the proposed system. The literature review has been organized into five tables representing the study of techniques used for vehicle detection, accident detection and vehicle tracking, the datasets used, and the type of camera used. The quality of each system is judged on the basis of its performance parameters along with the physical hardware used for its development. The tables are prepared according to whether the authors' work was executed in a live environment or tested in offline mode.

A. Vehicle Detection

The techniques of spatio-temporal analysis, Kalman filtering, frame difference, and the Gaussian mixture model [20] have been used to extract the shape of a vehicle. The Gaussian mixture model has been more accurate in separating out the shape of a moving vehicle, and it also works with an adaptive background. Spatio-temporal difference and Kalman filtering methods require complex processing. Binary frame difference is less complex but gives less accuracy compared to the Gaussian mixture model.

B. Vehicle Tracking

Vehicle tracking gives the lane or the path of a moving vehicle, which can be used to judge and monitor the regular flow of the traffic. Vehicle tracking is mainly divided into two categories: trajectory-based and optical-flow-based. Optical-flow-based methods require less computation than trajectory-based ones. Based on the survey, the Lucas-Kanade method, based on optical flow, and K-means clustering, based on trajectories, were found to be the most efficient; comparatively, Lucas-Kanade needs less computation and gives high precision. The study of the various techniques was carried out to find the best methods, and the review of past work gave us insight into the existing systems and their methodology, helping us to design and propose a more efficient system.

D. Available Open Source Datasets for Experimentation

1) VIRAT video dataset

This dataset contains 8.5 hours of raw video files in .mp4 format at HD 720p resolution. The videos are recorded from a stationary camera and contain ground footage with a mix of people walking and vehicles on the road.

2) MIT Traffic video dataset

The MIT Traffic dataset contains a 90-minute video sequence recorded at 720x480 using a standard camera. The dataset contains daily-life activities on the road, including people, and can readily be used for testing and experimentation.


After the study of the related work and analysis of the existing systems, we decided to design a system which overcomes and handles some of the existing research issues discussed earlier. The scope and objectives of the research, in light of the factors and resources available, are listed below.

A. Scope of Research

The system will be composed of a Raspberry Pi and a Pi camera as hardware devices. The system software will be built on Raspbian OS. The programming language used will be Python with the OpenCV library. The system will work with video received in the raw H.264 format from the Raspberry Pi camera at 1296 x 730 resolution at 1-49 fps, or 640 x 480 at 1-90 fps. The system will detect heavy vehicles, including trucks, buses and cars, and the lane of each vehicle will be plotted. Vehicle detection will be tested in a live environment. The system will detect anomalies and accidents or unusual behavior of vehicles; accident detection will be performed in a simulated environment or on offline data because of resource limitations.


B. Goals of Research

To survey, analyze and compare the various techniques related to vehicle, lane and accident detection for traffic surveillance. To test the various techniques on the proposed combination of hardware and check the output according to different parameters. To select techniques for the steps described in the frame-processing diagram, based on the survey or on tested experimental results. To develop a system with the selected techniques/algorithms for detecting the number, lane and accidents of vehicles with the help of input video captured from the camera at the specified parameters. To attempt to implement and test this system in a live environment for vehicle detection.


1) Raspberry Pi

The Raspberry Pi is a small-sized computer. It has a 1.2 GHz ARM-compatible processor and 1 gigabyte of RAM. It offers various operating systems as options for developing IoT projects, and it also contains GPIO pins which are programmable for various tasks. Because of its small size it can easily fit our proposed system.

2) Pi camera

The Pi camera is a camera module that can be used with the Raspberry Pi; there is a slot on the Raspberry Pi to connect the Pi camera directly. The Pi camera provides input video at resolutions of 640, 720, or 1080 HD, so it can easily be used for our purpose. Different parameters such as brightness and intensity can be adjusted on the Pi camera.

3) Battery as power source

4) Internet dongle for remote access

Programming tools:

5) Python

Python is the officially supported language for the Raspberry Pi, and there is a wide community working on the Raspberry Pi with Python. Since the Raspberry Pi is strongly supported by Python tooling, we work with a Python IDE.

6) OpenCV (Intel open source library)

OpenCV is an open source library for video and image processing. It is very efficient and powerful for the various operations related to our proposed system, and it is lightweight in terms of processing.


7) Eclipse IDE for remote debugging

Eclipse can be used for remote debugging: we can connect a remote Raspberry Pi to Eclipse on a local PC to debug the programs on the Raspberry Pi. As the Eclipse IDE has good debugging support, it makes working on the Raspberry Pi easier.

Architectural diagrams

Fig. 2. Architecture of the proposed system

The system executes the concerned operations on each frame extracted from the live video of the Pi camera; the figure elaborates the processing and operations on a single frame.

D. Experiment and demonstration

The first step of our experiment was to configure the Raspberry Pi. A headless installation of Raspbian OS was carried out successfully, after which Python and OpenCV were installed in a virtual environment to satisfy all the required dependencies. For connecting the Raspberry Pi with a PC, a remote connection using secure shell (SSH) is preferred. After the setup of the Raspberry Pi, we tested three methods for vehicle detection and background segmentation, chosen on the basis of our study and findings: frame difference, Gaussian mixture model, and Bayesian segmentation. The experiment was carried out on the two datasets VIRAT and MIT Traffic. The methods were also tested on a video of Indian road traffic with only 360p resolution and an unstable camera, as the worst-case scenario, to get accurate information about the behavior of the three techniques and to find the most suitable one. We found that GMG performed well compared to the others, and we therefore combined it with Canny edge detection to check whether a more precisely traced outline of the moving vehicle could be used in our system.

Our future work will include the construction of bounding boxes and occlusion handling. The bounding box is a box shape covering the moving vehicle in each frame, and it will also be useful in detecting occlusion between two vehicles. We will experiment with different techniques for vehicle tracking and accident detection and implement them to find the most suitable one for our objective. We intend to test the proposed system in a live environment for vehicle detection. The system will be tested for remote access over the web using the internet dongle, with a battery as the power source, giving it independence so that it can be accessed and controlled from any geographical location.



An in-depth review of the methodologies used by researchers for vehicle detection, vehicle tracking and accident detection is presented. The available open source video datasets, useful for offline experimentation, are also discussed. From the study, we conclude that (i) Hidden Markov Model and neural network techniques are more efficient and suitable to experiment with for accident detection, as these techniques give good accuracy, and (ii) Lucas-Kanade, a method based on optical flow, and K-means clustering, a technique based on centroid computation of the moving object, are useful for vehicle tracking. Experiments on the VIRAT and MIT Traffic datasets were conducted, and it is concluded that for vehicle detection the Gaussian mixture model and frame difference techniques prove more effective. The Canny edge detection method performed well in locating the edges of the vehicle. Weather conditions, power supply and processing power are essential parameters to consider while designing an IoT-based traffic surveillance system. In future, we intend to develop an IoT-based traffic surveillance system and to test it in a live environment, with a vision of its practical deployment in everyday life.

1. S. Kamijo, Y. Matsushita, K. Ikeuchi and M. Sakauchi, "Traffic monitoring and accident detection at intersections," IEEE Transactions on Intelligent Transportation Systems, vol. 1, no. 2, pp. 108-118, June 2000.
2. R. Cucchiara, M. Piccardi, and P. Mello, "Image Analysis and Rule-Based Reasoning for a Traffic Monitoring System," IEEE Transactions on Intelligent Transportation Systems, vol. 1, no. 2, pp. 119-130, June 2000.
3. C. Lin, J. Tai and K. Song, "Traffic Monitoring Based on Real-Time Image Tracking," IEEE International Conference on Robotics & Automation, Taiwan, pp. 2091-2096, September 2003.
4. Y. Jung, K. Lee, and Y. Ho, "Content-Based Event Retrieval Using Semantic Scene Interpretation for Automated Traffic Surveillance," IEEE Transactions on Intelligent Transportation Systems, vol. 2, no. 3, pp. 151-163, September 2001.
5. Y. Ki and D. Lee, "A Traffic Accident Recording and Reporting Model at Intersections," IEEE Transactions on Intelligent Transportation Systems, vol. 8, no. 2, June 2007.




T. Muthu Krithika


Fatima College (Autonomous), Madurai


A security system with image processing, a touch screen and verification software can be used in banks, companies, and personal secured places. Using image processing, a touch screen and verification software, more security is provided than by any other system. This system can be used in the locker systems of banks and companies and at personal secured places. The techniques used include color processing, which applies primary filtering to eliminate unrelated colors or objects in the image.


A digital image is a numeric representation, normally binary, of a two-dimensional image. Depending on whether the image resolution is fixed, it may be of vector or raster type. By itself, the term "digital image" usually refers to raster or bitmapped images (as opposed to vector images). Image processing is often viewed as arbitrarily manipulating an image to achieve an aesthetic standard or to support a preferred reality. However, image processing is more accurately defined as a means of translation between the human visual system and digital imaging devices.


It is very important for banks and companies to provide a high-security system for their valuable items. In this paper, by using image processing, a touch screen and verification software, more security is provided than by other systems. This system can be used in the locker systems of banks and companies and at personal secured places. The object detection technique uses color processing, which applies primary filtering to eliminate unrelated colors or objects in the image. The touch screen and verification software can be used as an extra level of security for customers to verify their identity. Customers have to undergo security verification in three steps. In the first step, the person is identified using the touch screen: he/she has to touch a point on the touch screen, which should be the same as the point recorded initially during account opening in the bank.

In the second step, at the time of account opening in the bank, an object is given to the customer which differs for each customer and may vary in shape, size or colour, so that each object has a different pixel signature. Verification in the second step is done using the object detection technique of image processing: the object is verified using a camera and must be placed on the screen at the same point where it was initially placed at the time of account opening.

In the third step, verification is done using verification software in which the person has to answer the questions asked by the system. If all three responses match the responses initially stored in the microcontroller during account opening, the bank's locker security system opens; otherwise it remains closed. This system is more secure than other systems because three steps are required for verification.



In this security system, a touch screen is used to provide extra security to users. The touch screen is divided into nine points; at the time of account opening the user is asked to choose any one point from the nine, and the chosen point is stored in the microcontroller.

At the time of locker opening in the bank, the user is asked in the first step to touch the screen. If he/she touches the correct point of the touch screen, the one initially selected at the time of account opening, the system proceeds to the next step for object verification; otherwise the system does not proceed and the locker doors remain closed. After the touch-screen point detection, the user has to undergo object detection, which is the second step of this system. The nine-point touch screen is shown in the figure.



Object recognition is a process that identifies a specific object in a digital image or video. Object recognition algorithms rely on matching, learning, or pattern recognition using appearance-based or feature-based techniques. Common techniques include edges, gradients, histograms of oriented gradients, Haar wavelets, and linear binary patterns. The techniques used here include color processing, applied as primary filtering to eliminate unrelated colors or objects in the image. Besides that, shape detection is used, employing edge detection and the circular Hough transform. Identification of the person is done by matching against the data initially stored in the microcontroller. The person has to verify the object by placing it at the same place on the screen where it was initially placed during account opening. If the position and object are verified, the system proceeds to the next step.



Verification software can be used as an extra level of security for customers to verify their identity. At the time of registration, customers are asked to fill in details, which are stored in the microcontroller. The records of the different customers' details are stored in the microcontroller by appropriate verification software written in C or C++. At the time of entering the locker system, after touch-screen point detection and object recognition against the information previously stored in the microcontroller, customers are asked to enter the verification details; if they enter all of them correctly, the locker gate opens. The customer is asked to enter the password (given initially by the bank); if the typed password matches the password given by the bank, the customer is asked to enter the object number (given initially at the time of account opening); if the typed object number matches the object number given by the bank, the customer is asked to enter the answer to a security question (chosen by the user at the time of account opening, with its answer stored in the microcontroller). If the given answer matches the stored answer, the locker gate opens; otherwise it remains in the closed position.


This security system consists of a microcontroller, an object detection algorithm, verification software, a keyboard, an LED and an LCD. The system works in three steps. In the first step, the person is identified using the touch screen: he/she has to touch a point on the touch screen which should be the same as the point recorded initially during account opening in the bank. In the second step, at the time of account opening in the bank, an object is given to the customer which differs for each customer and may vary in shape, size or color, so that each object has a different pixel signature. Verification in the second step is done using the object detection technique of image processing: the object is verified using a camera and must be placed on the screen at the same point where it was initially placed at the time of account opening. In the third step, verification is done using verification software in which the person has to answer the questions asked by the system.

If all three responses match the responses initially stored in the microcontroller during account opening, the bank's locker security system opens; otherwise it remains in the closed position.
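The three-step decision described above can be sketched as a simple check against the stored record; the record fields, their types and the sample values are assumptions for illustration, not details from the paper.

```python
from dataclasses import dataclass

@dataclass
class AccountRecord:
    touch_cell: int       # touch-screen point chosen at account opening
    object_id: str        # identifier of the object issued to the customer
    answer: str           # stored answer to the security question

def locker_opens(record: AccountRecord, touch_cell: int,
                 object_id: str, answer: str) -> bool:
    """All three responses must match, in order; fail fast otherwise."""
    if touch_cell != record.touch_cell:
        return False                      # step 1: touch point mismatch
    if object_id != record.object_id:
        return False                      # step 2: object mismatch
    return answer == record.answer        # step 3: security answer

record = AccountRecord(touch_cell=4, object_id="OBJ-17", answer="madurai")
print(locker_opens(record, 4, "OBJ-17", "madurai"))   # all three match
print(locker_opens(record, 4, "OBJ-17", "wrong"))     # third step fails
```

Failing fast at the first mismatch mirrors the paper's flow, in which the locker doors stay closed and later steps are never reached if an earlier step fails.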



A banking locker security system using the object detection technique, touch-screen point detection and verification software is implemented. It is a more secure system that is also cost effective. The microcontroller compares the input with the data stored at the time of account opening: it checks whether the object is the same and at the same place as initially recorded, compares the point touched on the touch screen with the point selected initially at the time of account opening, and finally verifies the details using the verification software.






K. Muthulakshmi & S. Anjala
Fatima College, Madurai


The aim of this survey is to provide a comprehensive overview of the state of the art in the area of image forensics. These techniques have been designed to identify the source of a digital image or to determine whether the content is authentic or modified, without knowledge of any prior information about the image under analysis. All the tools work by detecting the presence, the absence, or the incongruence of some traces intrinsically tied to the digital image by the acquisition device and by any other operation after its creation. The paper is organized by classifying the tools according to the point in the history of the digital image at which the relative footprint is left: acquisition-based methods, coding-based methods, and editing-based schemes.

There are two main interests in digital camera image forensics, namely source identification and forgery detection. In this paper, we first briefly provide an introduction to the major processing stages inside a digital camera and then review several methods for source camera identification and forgery detection. Existing methods for source identification explore the various processing stages inside a digital camera to derive clues for distinguishing the source cameras, while forgery detection checks for inconsistencies in image quality or for the presence of certain characteristics as evidence of tampering.

The growth of sophisticated image processing and editing software has made the manipulation of digital images easy and imperceptible to the naked eye. This has increased the demand to assess the trustworthiness of digital images when used in crime investigation, as

evidence in court of law and for surveillance purposes. This paper presents a comprehensive

investigation of the progress and challenges in the field of digital image forensics to help the

beginners in developing understanding, apprehending the requirements and identifying the

research gaps in this domain.



In digital image processing, computer algorithms are used to perform image processing.

Digital image processing has several advantages over the analog image processing. It provides a

large number of algorithms to be used with the input data. In digital image processing we can

avoid some processing problems such as noise creation and signal distortion at some point in

signal processing. In the 2000s, fast computers became available for signal processing, and digital image processing became the most popular form of image processing because it is the most versatile and also the cheapest method.

The term digital image processing generally refers to processing of a two-dimensional picture

by a digital computer. In a broader context, it implies digital processing of any two-dimensional

data. A digital image is an array of real numbers represented by a finite number of bits. The principal advantages of digital image processing methods are versatility, repeatability and the preservation of original data precision.

In digital image processing, we concentrate here on one specific application: digital camera image forensic methods.

Multimedia Forensics has become important in the last few years. There are two main interests,

namely source identification and forgery detection. Source identification focuses on identifying

the source digital devices (cameras, mobile phones, camcorders, etc) using the media produced

by them, while forgery detection attempts to discover evidence of tampering by assessing the

authenticity of the digital media (audio clips, video clips, images, etc).

A digital camera or digicam is a camera that captures photographs in digital memory. Most cameras produced today are digital [1], and while there are still compact cameras on the

market, the use of dedicated digital cameras is dwindling, as digital cameras are now

incorporated into many devices ranging from mobile devices to vehicles. However, high-end,

high-definition dedicated cameras are still commonly used by professionals.




Digital cameras consist of a lens system, filters, a color filter array (CFA), an image sensor, and a digital image processor (DIP). Color images may suffer from aberrations caused by the lenses, such as:

Chromatic aberration and

Spherical aberration.


In capturing an image, we first capture a 3D scene using a digital camera and send it to a digital image processing system to focus on some specific area; the system then produces a zoomed image as output. Here, the focus is on a water drop on a leaf.



Compact digital camera

Bridge camera

Mirrorless interchangeable-lens camera

Digital single-lens reflex (DSLR) camera

Digital single-lens translucent (DSLT) camera


Several techniques are used in digital image forensic methods. They are:

1. Using JPEG Quantization Tables

2. Using Chromatic Aberration

3. Using Lighting

4. Using Camera Response Function (CRF)

5. Using Bicoherence and Higher Order Statistics

6. Using Robust Matching
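Technique 1 above relies on the fact that different cameras and editing tools embed characteristic JPEG quantization tables, so comparing a file's table against a catalogue of known tables gives a clue to the image's origin. A minimal sketch follows; the table values and source names are illustrative assumptions, not real manufacturer data.

```python
# Hedged sketch of source identification via JPEG quantization tables.
# The tables below are hypothetical examples, not real camera/editor values.
KNOWN_TABLES = {
    "CameraBrandA": (16, 11, 10, 16),    # first few luminance-table entries
    "EditorSoftwareB": (8, 6, 5, 8),
}

def match_source(table):
    """Return the first known source whose table matches exactly, else None."""
    for source, known in KNOWN_TABLES.items():
        if table == known:
            return source
    return None

assert match_source((8, 6, 5, 8)) == "EditorSoftwareB"
assert match_source((99, 99, 99, 99)) is None
```

A real system would extract the full 64-entry tables from the JPEG header and allow near-matches, since recompression perturbs the values.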



The forensic analysis of digital images has become more significant in determining the origin and authenticity of a photograph. The trustworthiness of photographs plays an essential role in many areas, including forensic investigation, criminal investigation, surveillance systems, intelligence services, medical imaging, and journalism. Digital images can be captured by digital cameras or scanners and can also be generated on computers. Passive image forensic

techniques for source identification work on the basic assumption that the fingerprints of the

imaging sensors, in-camera processing operations and compression are always present in images.

Detection of camera-specific fingerprints identifies the image capturing device and justifies that the image is not computer rendered. Two images having the same in-camera fingerprints are judged to be taken by the same device. The absence of fingerprints in an image suggests that either the image is computer generated or it has been maliciously tampered with, thereby calling for image

integrity verification. Based on the above assumptions the published works are presented in this

section with respect to two issues: firstly, to distinguish between the natural and computer

generated images; and secondly, to identify the image capturing device if the image is natural.

Image tampering is a deliberate attempt to add, remove or hide some important details of

an image without leaving any obvious traces of the manipulation. Digital images are generally tampered with by region duplication, image splicing or image retouching. Region

duplication is also recognized as cloning or copy-move attack, where selective regions from an

image are copied, sometimes transformed, and then pasted to new locations within the image

itself, with the main aim of concealing some original image contents. Image splicing, on the other hand, pastes together selected regions from two or more images to produce a new composite image.



With the growth of imaging and communication technology, the exchange of

digital images has become easy and extensive. But at the same time, the instances of

manipulations in the digital images have also increased thereby resulting in greater need for

establishing ownership and authentication of the media. The digital image forensics research community is continuously attempting to develop techniques for detection of the imaging device

used for image acquisition, tracing the processing history of the digital image and locating the

region of tampering in the digital images. The sensor, operational and compression fingerprints

have been studied with various image features to achieve these purposes. An attempt to recover the

tampered region details is expected to be an appealing investigation domain for many researchers.


Most of the work done in image forensics has focused on detecting the fingerprints of a specific

kind of tampering operation. But in practice, a manipulated image is often the result of multiple

such tampering operations applied together. Thus, the need is to develop a technique or

framework capable of detecting multiple attacks and tampering.



1. Z. J. Geradts, J. Bijhold, M. Kieft, K. Kurosawa, K. Kuroki, and N. Saitoh, "Methods for Identification of Images Acquired with Digital Cameras", Proc. of SPIE, Enabling Technologies for Law Enforcement and Security, vol. 4232, February 2001.

2. J. Lukas, J. Fridrich, and M. Goljan, "Digital Camera Identification from Sensor Pattern Noise", IEEE Transactions on Information Forensics and Security, June 2006.

3. A. Cheddad, "Doctored Image Detection: A Brief Introduction to Digital Image Forensics", Inspire magazine, July 2012.



S. Alamelu & S. Nasreen Farzhana

Fatima College (Autonomous), Madurai


Water is one of the most important substances on earth. All plants and animals must have water to survive; if there were no water, there would be no life on earth. This paper presents an analysis of rainfall prediction using digital image processing. The proposed approach here is

to use the digital cloud images to predict rainfall. Considering the cost factors and security issues,

it is better to predict rainfall from digital cloud images rather than satellite images. The status of

sky is found using wavelet. The status of cloud is found using the Cloud Mask Algorithm. The

type of cloud can be evolved using the K-Means Clustering technique. The type of rainfall cloud

is predicted by analyzing the color and density of the cloud images. The result predicts the type of

cloud with its information like classification, appearance and altitude and will provide the status

of the rainfall.


Digital image processing is the processing of digital images with computer algorithms. A digital image is nothing more than a two-dimensional signal. It is defined by the mathematical

function f(x,y) where x and y are the two co-ordinates horizontally and vertically. The value of

f(x,y) at any point is given as the pixel value at that point of an image.

Digital image processing is the use of computer algorithms to create, process, communicate, and

display digital images. Digital image processing algorithms can be used to convert signals from

an image sensor into digital images, to improve clarity and remove noise and other artifacts, to

extract the size, scale, or number of objects in a scene, to prepare images for display or printing

and to compress images for communication across a network.

A digital image is formed as follows. Capturing an image with a camera is a physical process. Sunlight is used as a source of energy, and a sensor array is used for the acquisition of the image. When the sunlight falls upon the object, the amount of light reflected by that

object is sensed by the sensors, and a continuous voltage signal is generated by the amount of

sensed data. In order to create a digital image, we need to convert this data into a digital form.

This involves sampling and quantization. Sampling and quantization result in a two-dimensional array or matrix of numbers, which is nothing but a digital image.
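The sampling and quantization described above can be sketched in a few lines. The continuous scene function f and the grid spacing below are illustrative assumptions, standing in for the physical light measurement.

```python
# Sketch of sampling a continuous scene f(x, y) on a grid and quantizing
# each sample to an integer pixel value in 0-255. f is a hypothetical
# smooth brightness function, not a real sensor model.
import math

def f(x, y):
    # Illustrative continuous brightness in the range [0.0, 1.0].
    return 0.5 + 0.5 * math.sin(x) * math.cos(y)

def digitize(width, height, levels=256):
    """Sample f on a width x height grid, quantize to `levels` integers."""
    image = []
    for row in range(height):
        image.append([
            min(levels - 1, int(f(col * 0.1, row * 0.1) * levels))
            for col in range(width)
        ])
    return image

img = digitize(4, 3)   # a 3x4 matrix of 0-255 values: a tiny digital image
```

The nested list returned by `digitize` is exactly the "two-dimensional array of numbers" the text refers to.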



The paper is about “Rainfall Prediction Using Digital Image Processing”. The main

aim is to use digital cloud image for estimating rainfall and to detect the type of clouds using

different image processing techniques. Water is a life giver - even a life creator. A life without

water is unimaginable. Every single cell in the body requires water in order to function properly.

One important way in which the bodily tissues use water, for instance, is to regulate the body

temperature. Hydraulic power is a wonderful source of green energy, where power is created out

of water. Water is used in agriculture for irrigating fields of crops. A large amount of fresh water comes from rain. Rainfall is thus an important component for life; it plays a vital role in our country's social and economic development.

Given the importance of rainfall, it is worth describing how rain is formed. Rain that falls today may have been water in the ocean a couple of days before. The water that is at the top of oceans,

rivers and lakes turns into water vapor in the atmosphere using energy from the sun. The water

vapor rises in the atmosphere and there it cools down and forms tiny water droplets through

something called condensation. These then turn into clouds. When they all combine together,

they grow bigger and are too heavy to stay up there in the air. This is when they will fall to the

ground as rain.

Rainfall can also be predicted with the use of satellites, but that is the costliest method, so digital image processing techniques are used to determine rainfall. With a prior prediction of

rainfall, the people can find some way to save the water for future usage. Accurately forecasting

heavy rainfall can allow for warning of floods before rainfall occurs. Additionally, such

information is useful in agriculture to improve irrigation practices and the effectiveness of

applying fertilizer, pesticides, and herbicides to crops. Digital image processing is one such useful

technique to predict rainfall.


Predicting the rainfall consists of six phases. In the first phase data is collected. In the

second phase the status of sky is found. In the third phase the status of cloud is found. In the

fourth phase the type of cloud is evolved. In the fifth phase the information about the cloud and

the status of rain are displayed. In the sixth phase the analysis and measurement takes place.




[Flowchart of the six phases: Data collection → Sky status → Cloud status → Cloud type → Rainfall status → Analysis and measurement]

I. Data collection: The data collected is the digital cloud images. A digital camera is used to

capture the clouds. The image is stored in the file system. The dimension chosen for the images is

400 x 250. The format used is *.jpeg.

II. Sky status: The second step is finding the sky status. A wavelet is used for this; a wavelet is a small wave-like oscillation. It separates the points needed for the cluster, and in wavelet analysis these separation points are used in the identification of the clouds. The wavelet threshold for the clouds is > 50.

Also the cloud mask algorithm is used here. The cloud mask algorithm consists of certain tests.

Single pixel threshold tests are used first. Dynamic histogram analysis is used to get threshold.

Thick High Clouds (Group 1): Thick high clouds are detected with threshold tests that rely on

brightness temperature in water vapor bands and infrared. Thin Clouds (Group 2): Thin clouds

tests rely on brightness temperature difference tests. Low Clouds (Group 3): Low clouds are best

detected using solar reflectance test and brightness temperature difference. Spatial uniformity test

is also used over land surfaces. High Thin Clouds (Group 4): This test is similar to Group 1, but it is spectrally tuned to detect the presence of thin cirrus. A spatial uniformity test and a brightness temperature difference test are applied. A temporal uniformity test is also processed.
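As a sketch of the single-pixel threshold tests above, one can label a pixel from its infrared brightness temperature and solar reflectance. The threshold values and the two-feature pixel model here are illustrative assumptions, not the operational cloud mask values.

```python
# Hedged sketch of single-pixel threshold tests (Group 1 and Group 3 style).
# Thresholds are illustrative, not operational values.
def classify_pixel(ir_temp_k, reflectance):
    """Label one pixel from IR brightness temperature (K) and reflectance."""
    if ir_temp_k < 230.0:        # very cold cloud top: thick high cloud
        return "thick high cloud"
    if reflectance > 0.4:        # bright in the solar band: low cloud
        return "low cloud"
    return "clear"

samples = [(220.0, 0.1), (280.0, 0.6), (290.0, 0.05)]
labels = [classify_pixel(t, r) for t, r in samples]
```

A real cloud mask combines many such tests per pixel (including the brightness temperature difference and uniformity tests described above) rather than a single pair of thresholds.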


IV. Cloud type: The major task is to find the type of cloud as per the cloud status. Each and every

cloud will be having its own shape and density and the values are matched accordingly. The type

of cloud is identified by using clustering. We use K-means clustering to combine the pixels in

order to differentiate the clouds. The thickness of the clouds will be in the base part. The color,

Shape and Texture are the concepts used in order to find the type of cloud. The formula to find

the cloud type is shown as follows:

H(n) = ∑ C[i,j] , Cloud id = Highest Density of Cloud Status

K-means is an unsupervised clustering algorithm that classifies the input data points into multiple classes based on their inherent distance from each other. The algorithm assumes that the data features form a vector space and tries to find natural clustering in them. The points are clustered around centroids μi, i = 1, …, k, which are obtained by minimizing the objective. An iterative version of the algorithm can be implemented. The algorithm takes a two-dimensional image as input. The steps in the algorithm are as follows:

Compute the intensity distribution (also called the histogram) of the intensities.

Initialize the centroids with k random intensities.

Repeat the following steps until the cluster labels of the image no longer change:

Cluster the points based on the distance of their intensities from the centroid intensities.

Compute the new centroid for each of the clusters.
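The iterative steps above can be written as a short, self-contained routine over one-dimensional pixel intensities. The sample intensities and cluster count below are illustrative, not taken from real cloud images.

```python
# Sketch of intensity-based K-means: assign each pixel to its nearest
# centroid, then recompute each centroid as its cluster's mean.
import random

def kmeans_intensities(pixels, k, iters=20, seed=0):
    random.seed(seed)
    centroids = random.sample(pixels, k)      # initialize with k random intensities
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:                      # assignment step
            nearest = min(range(k), key=lambda j: abs(p - centroids[j]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]   # update step
                     for i, c in enumerate(clusters)]
    return centroids

# Two clearly separated intensity groups: dark (~10) and bright (~200).
cents = sorted(kmeans_intensities([8, 10, 12, 198, 200, 202], k=2))
```

For a real image the pixel list would be the flattened intensity values, and iteration would stop when the labels no longer change rather than after a fixed count.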

V. Rainfall estimation: The major step is the estimation of rainfall, which is estimated according to the type of cloud we recognize. There are different types of clouds. They are as follows:


High Clouds:

Cirrus - The ice-crystal cloud is a feathery white cloud that is the highest in the sky. It has a

wispy looking tail that streaks across the sky and is called a fall streak.

Cirrostratus - A milky sheet of ice-crystal cloud that spreads across the sky.

Low clouds:

Cumulus - The fair weather cloud. Each cloud looks like a cauliflower.

Stratocumulus - A layer of cloud that can sometimes block the sun.

Rain Clouds:

Cumulonimbus and Nimbostratus - The dark, rain carrying clouds of bad weather. They are to be

blamed for most of the winter rains and some of the summer ones. They cover the sky and block

the sun.

Cumulonimbus and nimbostratus are the rainfall clouds. So we take the color and shape and also

the width and find the rainfall status. The temperature is also taken into account. Cloud

information gives the theoretical description of the cloud, that is, its altitude, height, appearance and classification.


The rainfall can be estimated accurately by determining the type of cloud using methods like PCA or contourlet, the cloud screening algorithm and k-means clustering. Theoretical study suggests that this should provide better performance compared to other techniques such as wavelets.

Considering the cost factors and security issues, the digital cloud images were used to predict

rainfall rather than satellite images. Prediction of rainfall can also be done with the help of neural

networks and artificial intelligence.



"Introduction to Digital Image Processing by Ms. Geetanjali Raj [Digital Image Processing]", YouTube.

"Types of Clouds - Cirrus, Cumulus, Stratus, Nimbus | UPSC IAS Geography", YouTube.

"Technical Course: Cluster Analysis: K-Means Algorithm for Clustering", YouTube.

Prediction of rainfall using image processing.

Rain Fall Using Cloud Images | Cluster Analysis | Cloud.



A. Gnana Priya & B. Muthulakshmi

Fatima College (Autonomous), Madurai


India is an agricultural country; more than 70 percent of its people depend on agriculture. Growing crops and maintaining production is a challenging task, because many crops are affected by pests. Insecticide is one of the best remedies for pest attack, but it is sometimes dangerous for birds, animals and also for humans. Some crops require close monitoring that helps in the management of diseases. Nowadays digital image processing is widely used in the agricultural field. This method helps to identify the parts of the plant and find out the disease or pest early. It is also useful for better understanding the relationship between climatic conditions and disease.

Image processing techniques could be applied on various applications as follows:

1. To detect plant leaf, stem, and fruit diseases. 2. To quantify the area affected by disease. 3. To find the boundaries of the affected area. 4. To determine the color of the affected area. 5. To determine the size and shape of fruit. A country's wealth depends on its agriculture, and disease in plants causes major loss to an economy. DIP is the use of computer algorithms to create, process, communicate, and display digital images. We may conclude that DIP is a useful and effective technique for crop cultivation.

Disease is caused by a pathogen, which is any agent causing disease. In most cases,

pests or diseases are seen on the leaves or stems of the plant. Therefore identification of plants,

leaves, stems and finding out the pest or diseases, percentage of the pest or disease incidence,

symptoms of the pest or disease attack, plays a key role in successful cultivation of crops.



In biological science, sometimes thousands of images are generated in a single experiment. These images can be required for further studies like classifying lesions, scoring quantitative traits, calculating the area eaten by insects, etc. Almost all of these tasks are processed manually or with distinct software packages, which raises two major issues: excessive processing time and subjectiveness arising from different individuals. Hence, to conduct high-throughput experiments, plant biologists need efficient computer software to automatically extract and analyze significant image content.


Machine learning-based detection and recognition of plant diseases can provide extensive

clues to identify and treat the diseases in their very early stages. Comparatively, visual or naked-eye identification of plant diseases is quite expensive, inefficient, inaccurate and difficult. Automatic detection of plant diseases is an important research topic, as it may prove beneficial in monitoring large fields of crops and thus automatically detect the symptoms of diseases as soon as they appear on plant leaves. Initially, the infected plant leaf images are taken

as the input image and passed to the pre-processing step, to resize the image and remove the

noise content by using a median filter. At the next stage, the pre-processed image is passed to the

segmentation for partitioning into clusters. FCM clustering technique is used for segmentation

which is very fast, flexible and easy to implement than others. Further, the segmented image

extracts the features of an image by using methods like color correlogram, SGLDM, and Otsu

methods. Finally, the classifier is used for classification and recognition of the plant disease. One of the best classifiers is SVM, which is more accurate than the others; the result is then stored in the knowledge base.
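The pre-processing step above removes noise with a median filter. A minimal pure-Python sketch of a 3x3 median filter follows, assuming the grayscale image is stored as a plain list of lists; a real pipeline would use an image-processing library.

```python
# Sketch of the median-filter pre-processing step: each interior pixel is
# replaced by the median of its 3x3 neighbourhood, which suppresses
# impulse ("salt-and-pepper") noise.
def median_filter_3x3(image):
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]            # border pixels left unfiltered
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(image[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]              # median of 9 values
    return out

noisy = [[10, 10, 10],
         [10, 255, 10],    # a single noise spike in the centre
         [10, 10, 10]]
clean = median_filter_3x3(noisy)   # the spike is replaced by 10
```

Unlike a mean filter, the median is insensitive to a single extreme value, which is why it is preferred for this kind of noise.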



1.Image Acquisition: First we need to select the plant which is affected by the disease and then

collect the leaf of the plant and take a snapshot of leaf and load the leaf image into the system.

2. Segmentation: It means representation of the image in a more meaningful and easier-to-analyse way. In segmentation, a digital image is partitioned into multiple segments, which can be defined as superpixels.

Low Contrast: image pixel values are concentrated near a narrow range.

Contrast Enhancement: Figure 2 shows the original image given to the system and the output of the system after contrast enhancement.
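One simple form of contrast enhancement is linear stretching: pixel values concentrated in a narrow range are remapped to span the full 0-255 range. The sketch below illustrates this general idea; it is not necessarily the exact operation behind Figure 2.

```python
# Sketch of contrast enhancement by linear stretching: map the observed
# [lo, hi] intensity range onto the full [0, 255] range.
def stretch_contrast(pixels):
    lo, hi = min(pixels), max(pixels)
    if lo == hi:
        return pixels[:]                       # flat image: nothing to stretch
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

# A low-contrast strip: values concentrated in the narrow range 100-130.
stretched = stretch_contrast([100, 110, 120, 130])
```

After stretching, the same relative intensity differences occupy the whole displayable range, which is what makes a low-contrast image easier to segment.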

3. Feature extraction is the process done after segmentation. According to the segmented information and a predefined dataset, some features of the image are extracted. The extraction can be statistical, structural, fractal or signal-processing based. The color co-occurrence method, Grey Level Co-occurrence Matrices (GLCM), the Spatial Gray-level Dependence Matrices (SGDM) method, Gabor filters, wavelet transforms and principal component analysis are some methods used for feature extraction.

4. Classification of diseases

Classification technique is used for training and testing to detect the type of leaf disease.

Classification deals with associating a given input with one of a set of distinct classes. In the given

system support vector machine [SVM] is used for classification of leaf disease. The classification

process is useful for early detection of disease, identifying the nutrient deficiency.

Based on classification, leaves are mainly affected by fungal, bacterial and viral diseases. The

following describes common symptoms of fungal, bacterial and viral plant leaf diseases.

a) Bacterial disease symptoms:

The disease is characterized by yellowish-green spots which come into view as water-soaked. The lesions amass and then appear as dry dead spots.


b) Viral disease symptoms:

Among all plant leaf diseases, those caused by viruses are the most complicated to diagnose. All viral diseases present some degree of reduction in virus-infected plants, and the productive period of such infected plants is usually short. The virus appears as yellow or green stripes or spots on the foliage. Leaves might be wrinkled or curled, and growth may be stunted, as depicted.

c) Fungal disease symptoms:

Fungi are a type of plant pathogen responsible for serious plant diseases. They damage plants by killing cells. They disseminate through the wind, water, and movements of contaminated soil, animals, birds, etc. In the initial stage, the disease appears on older leaves as water-soaked, gray-green spots. Later these spots darken, and then white fungal growth forms on the undersides.


We would like to conclude that this is an efficient and accurate technique for the detection of plant diseases. In our research, plant disease is detected by SVM classifiers based on color, texture and shape features. The algorithm in the proposed approach uses image segmentation and classification techniques for the detection of plant disease. Accurate disease detection and classification of the plant leaf image is very important for the successful cultivation of crops, and this can be done using image processing. This paper discussed various techniques to segment the diseased part of the plant. Hence, work is ongoing on the development of a fast, automatic, efficient and accurate system for detecting disease on unhealthy leaves. A comparison of different digital image processing techniques has also been done, which gives different results on different databases. The work can be extended to the development of a system which identifies various pests and leaf diseases as well.



1. Sujeet Varshney et al., International Journal of Computer Science and Mobile Computing, Vol. 5, Issue 5, May 2016, pp. 394-398.

2. International Journal of Computational Intelligence Research, ISSN 0973-1873, Volume 13, Number 7 (2017), pp. 1821-1828.

3. Sujatha R, Y Sravan Kumar and Garine Uma Akhil, "Leaf disease detection using image processing", School of Information Technology and Engineering, VIT University, Vellore.

4. Surender Kumar, Department of CSE, Chandigarh University, Gharuan, Punjab, International Journal of Computer Applications (0975-8887).

5. S. Megha, R C. Niveditha, N. SowmyaShree, K. Vidhya, International Journal of Advance Research, Ideas and Innovations in Technology.

6. Journal of Advanced Bioinformatics Applications and Research, ISSN 0976-2604, Vol 2, Issue 2, June 2011, pp. 135-14



S. Karthick Raja & S. Praveenkumar

NMSSVN College, Madurai


In the past decades, computer networks were primarily used by researchers for sending e-mails and by corporate employees for sharing printers. While networks were used for these utilities, security was not a major threat and did not get its due attention. In today's world, computer networks have grown immensely, and network security covers a multitude of sins: simple issues like sending hate mail, and very severe problems like stealing research papers on recent discoveries and inventions from scientists who use the internet as a sharing tool, or hacking financial products like credit cards, debit cards and bank accounts by stealing passwords and misusing the accounts. Cryptography is the ancient science of encoding messages so that only the sender and receiver can understand them.

Modern computers can perform more cryptographic operations in a second than a human being could do in a lifetime. There are three types of cryptographic schemes. They are:

Secret Key Cryptography (SKC)

Public Key Cryptography (PKC)

Hash Functions
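The third scheme, hash functions, can be illustrated with SHA-256 from Python's standard library: the same message always yields the same fixed-length digest, and any change to the message changes the digest completely. The messages below are hypothetical examples.

```python
# Illustration of a cryptographic hash function using SHA-256.
import hashlib

def digest(message: str) -> str:
    """Return the SHA-256 digest of a message as a hex string."""
    return hashlib.sha256(message.encode("utf-8")).hexdigest()

d1 = digest("pay Alice 100")
d2 = digest("pay Alice 900")    # one changed character

assert d1 != d2                 # a tiny change produces a different digest
assert len(d1) == 64            # SHA-256 digests are always 256 bits (64 hex chars)
```

Because the digest cannot feasibly be inverted or collided, hash functions are used for integrity checks and password storage rather than for encrypting messages.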


A basic understanding of computer networks is a prerequisite in order to understand the

principles of network security. The Internet is a valuable resource, and connection to it is

essential for business, industry, and education. Building a network that will connect to the

Internet requires careful planning. Even for the individual user some planning and decisions

are necessary. The computer itself must be considered, as well as the device itself that makes

the connection to the local-area network (LAN), such as the network interface card or modem.

The correct protocol must be configured so that the computer can connect to the Internet.

Proper selection of a web browser is also important.


What Is A Network?

A “network” can be defined as “any set of interlinking lines resembling a net; a network of roads; an interconnected system”. A network is simply a system of interconnected computers, and how they are connected is irrelevant.

The International Standards Organization (ISO) Open System Interconnect (OSI) model

defines internetworking in terms of a vertical stack of seven layers. The upper layers of the

OSI model represent software that implements network services like encryption and

connection management. The lower layers of the OSI model implement more primitive, hardware-oriented functions like routing, addressing, and flow control.



X.800 defines a security service as: a service provided by a protocol layer of communicating open systems, which ensures adequate security of the systems or of data transfers.

RFC 2828 defines a security service as: a processing or communication service provided by a system to give a specific kind of protection to system resources.

X.800 defines five major categories of security service:

Authentication - assurance that the communicating entity is the one claimed

Access Control - prevention of the unauthorized use of a resource

Data Confidentiality - protection of data from unauthorized disclosure


Data Integrity - assurance that data received is as sent by an authorized entity

Non-Repudiation - protection against denial by one of the parties in a communication


Passive attacks - eavesdropping on, or monitoring of, transmissions to:

obtain message contents, or monitor traffic flows

Active attacks - modification of the data stream to:

masquerade as some other entity, replay previous messages, modify messages in transit, or mount denial of service.


Symmetric encryption (also called conventional, private-key, or single-key encryption) is encryption in which the sender and recipient share a common key; all classical encryption algorithms are private-key. It was the only type prior to the invention of public-key cryptography in the 1970s.

Symmetric Cipher Model:


Cryptography can be characterized by:

the type of encryption operations used - substitution / transposition / product

the number of keys used - single-key (private) / two-key (public)

the way in which plaintext is processed - block / stream
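The first distinction above, substitution versus transposition, can be illustrated with two toy ciphers: a Caesar-style shift that replaces each letter, and a pair-swap that only reorders letters. Both are for illustration only and offer no real security.

```python
# Substitution replaces each symbol; transposition only reorders them.
def substitute(text, shift=3):
    """Caesar-style substitution over uppercase A-Z."""
    return "".join(chr((ord(c) - 65 + shift) % 26 + 65) for c in text)

def transpose(text):
    """Trivial transposition: swap each adjacent pair of characters."""
    chars = list(text)
    for i in range(0, len(chars) - 1, 2):
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

assert substitute("ATTACK") == "DWWDFN"          # letters replaced
assert sorted(transpose("ATTACK")) == sorted("ATTACK")  # same letters, new order
```

A product cipher, the third category, combines both operations; DES, discussed below, is built from repeated rounds of substitution and permutation.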


‣ Ciphertext only - attacker knows only the algorithm and ciphertext, and uses statistics to identify the plaintext

‣ Known plaintext - attacker knows or suspects plaintext-ciphertext pairs and uses them to attack the cipher

‣ Chosen plaintext - attacker selects plaintext and obtains the corresponding ciphertext to attack the cipher

‣ Chosen ciphertext - attacker selects ciphertext and obtains the corresponding plaintext to attack the cipher

‣ Chosen text - attacker selects either plaintext or ciphertext to encrypt/decrypt to attack the cipher



The Data Encryption Standard (DES) is a cipher (a method for encrypting

information) that was selected by the National Bureau of Standards (NBS) as an official Federal Information Processing Standard

(FIPS) for the United States in 1976 and which has subsequently enjoyed widespread use

internationally. It is based on a Symmetric-key algorithm that uses a 56-bit key. The

algorithm was initially controversial with classified design elements, a relatively short key

length, and suspicions about a National Security Agency (NSA) backdoor. DES consequently

came under intense academic scrutiny which motivated the modern understanding of block

ciphers and their cryptanalysis.

DES is now considered to be insecure for many applications. This is chiefly due to the 56-bit

key size being too small; in January 1999, distributed.net and the Electronic Frontier Foundation collaborated to publicly break a DES key in 22 hours and 15 minutes (see

chronology). There are also some analytical results which demonstrate theoretical weaknesses

in the cipher, although they are unfeasible to mount in practice. The algorithm is believed to

be practically secure in the form of Triple DES, although there are theoretical attacks. In recent years, the cipher has been superseded by the AES.

DES is the archetypal block cipher — an algorithm that takes a fixed-length string of

plaintext bits and transforms it through a series of complicated operations into another cipher

text bit string of the same length. In the case of DES, the block size is 64 bits. DES also uses a

key to customize the transformation, so that decryption can supposedly only be performed by


those who know the particular key used to encrypt. The key ostensibly consists of 64 bits;

however, only 56 of these are actually used by the algorithm. Eight bits are used solely for

checking parity, and are thereafter discarded. Hence the effective key length is 56 bits, and it

is usually quoted as such.
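The parity convention described above can be checked programmatically: in a DES key, each 8-bit byte carries 7 key bits plus one parity bit chosen so that the byte has an odd number of 1-bits. This is a sketch of the parity check only, not of DES itself.

```python
# Check the odd-parity convention of a 64-bit DES key (8 bytes, each
# holding 7 key bits plus 1 parity bit).
def has_odd_parity(byte: int) -> bool:
    return bin(byte).count("1") % 2 == 1

def check_des_key(key: bytes) -> bool:
    """True if the key is 8 bytes long and every byte has odd parity."""
    return len(key) == 8 and all(has_odd_parity(b) for b in key)

good = bytes([0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80])  # one 1-bit each
assert check_des_key(good)
assert not check_des_key(bytes([0x03] * 8))   # 0x03 has two 1-bits: even parity
```

Since the parity bits carry no secret information, stripping them leaves exactly the 56 effective key bits the text refers to.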

Like other block ciphers, DES by itself is not a secure means of encryption but must instead

be used in a mode of operation. FIPS-81 specifies several modes for use with DES. Further

comments on the usage of DES are contained in FIPS-74. [15]

The algorithm's overall structure consists of 16 identical stages of processing, termed rounds.

There is also an initial and final permutation, termed IP and FP, which are inverses (IP

"undoes" the action of FP, and vice versa). IP and FP have almost no cryptographic

significance, but were apparently included in order to facilitate loading blocks in and out of

mid-1970s hardware, as well as to make DES run slower in software.

Before the main rounds, the block is divided into two 32-bit halves and processed

alternately; this criss-crossing is known as the Feistel scheme. The Feistel structure ensures

that decryption and encryption are very similar processes — the only difference is that the

subkeys are applied in the reverse order when decrypting. The rest of the algorithm is

identical. This greatly simplifies implementation, particularly in hardware, as there is no need

for separate encryption and decryption algorithms.

The ⊕ symbol denotes the exclusive-OR (XOR) operation. The F-function scrambles

half a block together with some of the key. The output from the F-function is then combined

with the other half of the block, and the halves are swapped before the next round. After the

final round, the halves are not swapped; this is a feature of the Feistel structure which makes

encryption and decryption similar processes.
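The encryption/decryption symmetry described above can be seen in a toy Feistel network. This is a sketch, not DES: the round function `f` is an arbitrary stand-in for DES's F, and the subkeys are made up; only the Feistel structure itself is faithful:

```python
def f(half: int, subkey: int) -> int:
    # Toy round function standing in for DES's F (illustrative only).
    return ((half * 31) ^ subkey) & 0xFFFFFFFF

def feistel(block: int, subkeys) -> int:
    # Split the 64-bit block into two 32-bit halves and criss-cross them.
    left = (block >> 32) & 0xFFFFFFFF
    right = block & 0xFFFFFFFF
    for k in subkeys:
        left, right = right, left ^ f(right, k)
    # As in DES, the halves are not swapped after the final round,
    # which is exactly what makes decryption mirror encryption.
    return (right << 32) | left

subkeys = [0x0F0F, 0x3C3C, 0x5A5A, 0x9999]
pt = 0x0123456789ABCDEF
ct = feistel(pt, subkeys)
# Same routine, subkeys in reverse order, recovers the plaintext.
assert feistel(ct, list(reversed(subkeys))) == pt
```

Note that `f` never needs to be invertible; the XOR cancels it out on decryption, which is the key property of the Feistel scheme.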


The key-schedule of DES

The key schedule for encryption — the algorithm which generates the subkeys.

Initially, 56 bits of the key are selected from the initial 64 by Permuted Choice 1 (PC-1) —

the remaining eight bits are either discarded or used as parity check bits. The 56 bits are then

divided into two 28-bit halves; each half is thereafter treated separately. In successive rounds,

both halves are rotated left by one or two bits (specified for each round), and then 48 subkey

bits are selected by Permuted Choice 2 (PC-2) — 24 bits from the left half, and 24 from the

right. The rotations mean that a different set of bits is used in each subkey; each bit is used in approximately 14 of the 16 subkeys.
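A small sketch of the rotation schedule: the per-round shift amounts below are the standard DES schedule, and since they sum to 28, each 28-bit half returns to its starting position after all 16 rounds (the 28-bit half value is taken from the FIPS worked example; the helper names are illustrative):

```python
# Left-shift amounts for each of the 16 rounds (standard DES schedule).
ROTATIONS = [1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1]

def rotl28(half: int, n: int) -> int:
    # Rotate a 28-bit half left by n bits.
    return ((half << n) | (half >> (28 - n))) & 0xFFFFFFF

c = 0b1111000011001100101010101111  # example 28-bit half (C0)
state = c
for n in ROTATIONS:
    state = rotl28(state, n)
print(sum(ROTATIONS))  # 28
print(state == c)      # True: a full 28-bit rotation is the identity
```

In real DES, PC-2 then selects 24 bits from each rotated half to form the 48-bit round subkey.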


It's important to build systems and networks in such a way that the user is not

constantly reminded of the security system.

Developers need to evaluate what is needed along with development costs, speed of

execution, royalty payments, and security strengths. That said, it clearly makes sense to use as

strong security as possible, consistent with other factors and taking account of the expected

life of the application. Faster computers mean that longer keys can be processed rapidly but

also mean that short keys in legacy systems can be more easily broken.

It's also extremely important to look at the methods of applying particular algorithms,

recognizing that simple applications may not be very secure. Related to this is the issue of

allowing public scrutiny, something that is essential in ensuring confidence in the product.

Any developer or software publisher who resists making the cryptographic elements of their

application publicly available simply doesn't deserve trust and is almost certainly supplying

an inferior product.

Secure communication over insecure channels is the objective of this concept. The claim of

complete security is substantiated to a large extent. Thus a detailed study of Cryptography &

Network Security is reflected in this presentation.



Andrew S. Tanenbaum, Computer Networks.

William Stallings, Cryptography and Network Security.

Eli Biham, "A Fast New DES Implementation in Software".

Cracking DES: Secrets of Encryption Research, Wiretap Politics, and Chip Design, Electronic Frontier Foundation.

A. Biryukov, C. De Cannière, M. Quisquater (2004), "On Multiple Linear Approximations", Lecture Notes in Computer Science 3152: 1–22. doi:10.1007/b99099 (preprint).

Keith W. Campbell, Michael J. Wiener, "DES is not a Group", CRYPTO 1992, pp. 512–520.

Don Coppersmith (1994), "The Data Encryption Standard (DES) and its strength against attacks", IBM Journal of Research and Development, 38(3), 243–250. [1]

Whitfield Diffie, Martin Hellman, "Exhaustive Cryptanalysis of the NBS Data Encryption Standard", IEEE Computer 10(6), June 1977, pp. 74–84.








The aim of digital image processing is to improve pictorial information for human interpretation, and to process image data for storage, transmission, and representation for

autonomous machine perception. It is the use of computer algorithms to perform image

processing on digital images. It has many advantages over analog image processing. It allows a

much wider range of algorithms to be applied to the input data and can avoid problems such as

the build-up of noise and signal distortion during processing. It may be modeled in the form of

multidimensional systems.

Image processing mainly includes the following steps:

Importing the image via image acquisition tools;

Analysing and manipulating the image;

Output, in which the result can be an altered image or a report based on analysing that image.
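The three steps above can be sketched in pure Python, using a tiny synthetic nested-list "image" in place of a real acquisition tool (all names and values are illustrative):

```python
# Step 1, import: a 3x3 synthetic grayscale image (0-255 values)
# standing in for a real acquisition tool.
image = [
    [ 10,  10, 200],
    [ 10, 200, 200],
    [200, 200, 200],
]

# Step 2, analyse: compute a simple statistic from the pixel values.
pixels = [p for row in image for p in row]
mean_brightness = sum(pixels) / len(pixels)

# Step 3, output: here a report based on the analysis, rather than an altered image.
report = f"{len(image)}x{len(image[0])} image, mean brightness {mean_brightness:.1f}"
print(report)  # 3x3 image, mean brightness 136.7
```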

Some techniques which are used in digital image processing include:

Image editing

Image restoration

Independent component analysis

Linear filtering

Partial differential equations




Brain tumors affect many people worldwide. They are not limited to the elderly but are also detected at an early age. A brain tumor is an abnormal growth of cells inside the cranium which limits the functioning of the brain. Early detection of brain tumors is possible with the advancement of machine learning and image processing. Medical image processing is among the most challenging and emerging fields today. This paper describes a methodology for the detection and extraction of a brain tumor from a patient's MRI scan of the brain. The method incorporates noise removal functions, segmentation and morphological operations, which are basic concepts of image processing. Detection and extraction of the tumor from MRI scan images of the brain is done using MATLAB software.

MRI imaging plays an important role in brain tumor analysis, diagnosis and treatment planning. It helps doctors determine the earlier stages of a brain tumor. Brain tumor detection using MRI images is a challenging task because of the complex structure of the brain. A brain tumor can be of benign or malignant type, the benign being non-cancerous and the malignant cancerous. Malignant tumors are classified into two types, primary and secondary; a benign tumor is less harmful than a malignant one.

The basic idea is to develop application software to detect the presence of a brain tumor in MRI images. We use image processing techniques to detect the exact position of the tumor.


There are various medical imaging techniques, such as X-ray, computed tomography, positron emission tomography and magnetic resonance imaging, available for tumor detection. MRI is the most commonly used modality for imaging brain tumor growth and detecting its location, due to its higher resolution.

1) To improve the performance and reduce the complexity involved in image

processing, we have investigated Berkeley wavelet transformation based brain tumor segmentation.

2) To improve the accuracy and quality rate of the support vector machine based classifier,

relevant features are extracted from each segmented tissue.



This has two stages: first, pre-processing of the given MRI image; after that, segmentation and then morphological operations.

• Input image

• Multiparameter Calculations

• Segmentation of brain tumor using the Region of Interest command

Image Processing Techniques

Median Filtering for Noise Removal

Median filtering is a non-linear filtering technique used for noise removal. It is used to remove salt and pepper

noise from the converted gray scale image. It replaces the intensity values of the center pixel with

the median of the intensity values in the neighbourhood of that pixel. Median filters are particularly

effective in the presence of impulse noise.
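The paper performs this step in MATLAB; a minimal pure-Python 3x3 median filter sketch (border pixels are simply kept unchanged here, one of several possible border policies):

```python
def median_filter_3x3(img):
    # Replace each interior pixel with the median of its 3x3 neighbourhood.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # borders are copied unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # median of the 9 values
    return out

# Grayscale patch with one salt (255) and one pepper (0) impulse.
noisy = [
    [100, 100, 100, 100],
    [100, 255, 100, 100],
    [100, 100,   0, 100],
    [100, 100, 100, 100],
]
clean = median_filter_3x3(noisy)
print(clean[1][1], clean[2][2])  # 100 100: both impulses removed
```

Because an isolated impulse can never be the median of its neighbourhood, salt-and-pepper noise vanishes while flat regions are untouched, which is exactly why median filters suit impulse noise.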

Various De-noising Filters

Mean filter – based on the average value of pixels

Median filter – based on the median value of pixels

Wiener filter – based on inverse filtering in the frequency domain

Hybrid filter – combination of the median and Wiener filters

Modified hybrid median filter – combination of the mean and median filters

Morphology-based de-noising – based on morphological opening and closing operations.

Image Enhancement

Poor contrast is one of the defects found in acquired images, and it has a great impact on the usefulness of the image. When contrast is poor, a contrast enhancement method plays an important role: the gray level of each pixel is scaled to improve the contrast. Contrast enhancement improves the visualization of the MRI image.
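The gray-level scaling described above can be sketched as a min-max contrast stretch (function name and the tiny example image are illustrative):

```python
def stretch_contrast(img, new_min=0, new_max=255):
    # Linearly rescale gray levels so the full new_min..new_max range is used.
    old_min = min(min(row) for row in img)
    old_max = max(max(row) for row in img)
    scale = (new_max - new_min) / (old_max - old_min)
    return [[round(new_min + (p - old_min) * scale) for p in row] for row in img]

low_contrast = [[100, 110], [120, 150]]   # values crowded into 100-150
print(stretch_contrast(low_contrast))     # [[0, 51], [102, 255]]
```

Each pixel's distance from the old minimum is scaled by the ratio of the new range to the old, so the darkest pixel maps to 0 and the brightest to 255.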

Edge Detection

Edge detection is an image processing technique for finding the boundaries of objects within images. It works

by detecting discontinuities in brightness. It is used for image segmentation and data extraction in

areas such as image processing, computer vision, and machine vision.
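A crude discontinuity detector along these lines can be sketched with simple horizontal and vertical differences (the function, threshold, and test image are illustrative; real detectors such as Sobel or Canny are more elaborate):

```python
def edges(img, threshold=50):
    # Mark pixels where the brightness difference to the right or below
    # exceeds a threshold - a crude brightness-discontinuity detector.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = abs(img[y][x + 1] - img[y][x])
            gy = abs(img[y + 1][x] - img[y][x])
            out[y][x] = 255 if max(gx, gy) > threshold else 0
    return out

# Dark region on the left, bright on the right: the edge runs down the middle.
img = [[20, 20, 220, 220]] * 4
print([row[1] for row in edges(img)])  # boundary column flagged in processed rows
```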



Thresholding is a simple, effective way of partitioning an image into a foreground and background. This image analysis technique is a type of image segmentation that isolates objects by converting grayscale images into binary images. Image thresholding is most effective in images with high levels of contrast.
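The grayscale-to-binary conversion is a one-liner; a sketch with a fixed threshold (the function name and threshold value are illustrative, and adaptive methods such as Otsu's choose the threshold automatically):

```python
def binarize(img, t=128):
    # Pixels at or above the threshold become foreground (1), the rest background (0).
    return [[1 if p >= t else 0 for p in row] for row in img]

gray = [[30, 200], [190, 40]]
print(binarize(gray))  # [[0, 1], [1, 0]]
```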


Morphological Operation

It is used as an image processing tool for sharpening regions. It is a collection of non-linear

operations related to the shape or morphology of features in an image.
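The two basic morphological operations, erosion and dilation, can be sketched on binary images with a 3x3 structuring element (function names are illustrative; this simple version leaves image borders as background):

```python
def erode(img):
    # A pixel survives only if its whole 3x3 neighbourhood is foreground.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(img[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(img):
    # A pixel becomes foreground if any 3x3 neighbour is foreground.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(any(img[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

# Opening (erosion then dilation) removes an isolated speck
# while largely preserving the solid block.
img = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 0, 0, 0],
]
opened = dilate(erode(img))
print(opened[1][1], opened[2][4])  # speck gone, block core kept
```

Opening suppresses small bright artifacts; the dual operation, closing (dilation then erosion), fills small holes instead.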


Image segmentation is the process of partitioning a digital image into multiple segments. It is typically used to locate objects and boundaries in an image; it is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics.
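The label-assignment idea can be sketched as connected-component labeling on a binary image via flood fill (one segmentation strategy among the several this section lists; names are illustrative):

```python
def label_components(img):
    # Assign the same label to all 4-connected foreground pixels (flood fill).
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] and not labels[y][x]:
                current += 1
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and img[cy][cx] and not labels[cy][cx]:
                        labels[cy][cx] = current
                        stack += [(cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)]
    return labels

binary = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
]
labels = label_components(binary)
print(labels[0][0], labels[2][3])  # 1 2: two separate regions, two labels
```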

Brain Tumor Technology:

• Classification Technology

• Clustering Technology

• Atlas-based segmentation

• Histogram thresholding

• Watershed and Edge detection in HSV colour model


Most of the existing methods have ignored poor-quality images, such as images with noise or poor brightness. Much of the existing work on tumor detection has also neglected the use of image processing. To further enhance the tumor detection rate, we will integrate a new object-based tumor detection approach. The proposed technique will have the ability to produce effective results even in the case of a high density of noise.

In this study, digital image processing techniques are shown to be important for brain tumor detection in MRI images. The processing techniques include methods like filtering, contrast enhancement and edge detection, which are used for image smoothing. The pre-processed images are then used for post-processing operations such as thresholding, histogram analysis, segmentation and morphological operations, which are used to enhance the images.






M.Shree Soundarya

Fatima College(Autonomous),Madurai


Virtual reality (VR) is a technology which allows a user to interact with a computer-simulated

environment, whether that environment is a simulation of the real world or an imaginary

world. It is the key to experiencing, feeling and touching the past, present and the future. It is the

medium of creating our own world, our own customized reality. It could range from creating a

video game to having a virtual stroll around the universe, from walking through our own dream

house to experiencing a walk on an alien planet. With virtual reality, we can experience the most

intimidating and gruelling situations by playing safe and with a learning perspective. Very few

people, however, really know what VR is, what its basic principles and its open problems are. In

this paper, virtual reality and its basic technologies are listed, and how VR works is discussed.



The term virtual reality comes from both 'virtual', a near-real experience, and 'reality', something we can experience as human beings. The term itself can apply to almost anything that can possibly exist in reality but is simulated by a computer.


To know about Virtual Reality technology and how it works.

To know the disparities between Virtual Reality (VR) and Augmented Reality (AR).

To know about challenges in Virtual Reality with Virtual Entertainment (VE).

To know how VR will develop in the future as a significant technology world-wide.



A virtual experience includes three-dimensional images which appear life-sized to the user. Experiences are delivered to our senses via a computer and a screen, or screens, from our device or booth. A tool or machine, like a booth or a haptic system, may give our senses additional experiences, such as touch or movement. Speakers or earphones built into the headset, or set into a machine, provide sound. For virtual reality to work, we have to believe in the experience. Virtual reality applications are getting closer and closer to a real-world background all the time.

The technology needed for a virtual experience depends on the audience, purpose and of course the

price point. If you are developing an interactive training simulator for a workplace, you will be

looking for far more advanced technology. For everyday use, a quick overview of the technology and equipment is given below.





1. Virtual reality for smartphones.

2. Standalone virtual reality headsets.

3. Powerful PC, laptop or console with a virtual reality headset.

Augmented reality and virtual reality are inverse reflections of one another in what each technology seeks to accomplish and deliver for the user. Virtual reality offers a digital recreation of

a real life setting, while augmented reality delivers virtual elements as an overlay to the real world.







1. PURPOSE
VR: It creates its own reality that is completely computer generated and driven.
AR: It enhances experiences by adding computer-generated components such as digital images, graphics and sensations as a new layer of interaction with the real world.

2. DELIVERY
VR: It is usually delivered to the user through a head-mounted or hand-held device.
AR: It is being used more and more in mobile devices such as laptops, smart phones and tablets to change how the real world and digital content interact.


Terminology in the virtual reality sector is fast changing. It’s argued that the first virtual

reality devices emerged decades ago, but the experience they provided is entirely different to the

virtual experience of today. As virtual reality becomes more intertwined with our daily lives, we

start to think in terms of virtual environment. We can also define virtual enjoyment, which

includes any virtual experience provided for pure entertainment – gaming, movies, videos and

social experiences, for example – setting aside more serious elements of virtual reality, including training,

education, and healthcare applications. Understanding how virtual reality works is crucial to being

able to acknowledge this powerful technology honestly. It provides each of our senses with

information to immerse us in a virtual experience, a near real experience – which our minds and

bodies almost completely perceive as real.


The Big challenges in the field of virtual reality are developing better tracking systems, finding

more natural ways to allow users to interact within a virtual environment and decreasing the time it

takes to build virtual spaces. There are only a few tracking system companies that have been around since the earliest days of virtual reality. Likewise, there aren't many companies that are

working on input devices specifically for VR applications. Most VR developers have to rely on and

adapt technology originally meant for another discipline, and they have to hope that the company

producing the technology stays in business. As for creating virtual worlds, it can take a long time to

create a convincing virtual environment - the more realistic the environment, the longer it takes to

make it.


The future of Virtual Reality depends on the existence of systems that address issues of ‘large

scale’ virtual environments. In the coming years, as more research is done we are bound to see VR

become a mainstay in our homes and at work. As computers become faster, they will be able to

create more realistic graphic images to simulate reality better. It will be interesting to see how it

enhances artificial reality in the years to come. It is very possible that in the future we will be

communicating with virtual phones. Nippon Telegraph and Telephone (NTT) in Japan is

developing a system which will allow one person to see a 3D image of the other using VR

techniques. The future is virtual reality, and its benefits will remain immeasurable.



Virtual reality is now involved everywhere; it is hard to imagine life without the use of VR technology. We now use mail or conferencing to communicate when the other person is not sitting with us, and thanks to this technology, distance no longer matters. This technology gives enormous scope to explore the world of 3D and our own imagination.



V.Selvalakshmi &K.Sivapriya

Fatima College(Autonomous),Madurai


Remote sensing is the acquisition of information about an object or phenomenon without making physical contact with the object, in contrast to on-site observation. Remote sensing is used in

numerous fields, including geography, land surveying and most Earth Science disciplines; it also has

military, intelligence, commercial, economic, planning, and humanitarian applications. In current usage, the term "remote sensing" generally refers to the use of satellite or aircraft based sensor

technologies to detect and classify objects on Earth.

Remote sensing image processing is nowadays a mature research area. The techniques

developed in the field allow many real-life applications with great societal value. For instance, urban

monitoring, fire detection or flood prediction can have a great impact on economic and environmental issues. To attain such objectives, the remote sensing community has turned into a multidisciplinary field of science that embraces physics, signal theory, computer science, electronics and communications. From a machine learning and signal/image processing point of view, all the applications are tackled under specific formalisms, such as classification and clustering, regression and function approximation, image coding, restoration and enhancement, source unmixing, data fusion, or feature selection and extraction. This paper serves as a survey of methods and applications, and reviews the latest methodological advances in remote sensing and image processing.



Digital image processing refers to processing digital images by means of a digital

computer. Note that a digital image is composed of a finite number of elements, each of which has

a particular location and value. These elements are called picture elements, image elements, pels,

and pixels. Digital image processing focuses on two major tasks improvement of pictorial

information for human interpretation, processing of image data for storage, transmission and

representation for autonomous machine perception. There is some argument about where image processing ends and fields such as image analysis and computer vision start. Computer vision can be broken up into low-, mid- and high-level processes. Digital image processing helps us enhance

images to make them visually pleasing, or accentuate regions or features of an image to better

represent the content. For example, we may wish to enhance the brightness and contrast to make a

better print of a photograph, similar to popular photo-processing software. In a magnetic resonance

image (MRI) of the brain, we may want to accentuate a certain region of image intensities to see

certain parts of the brain. Image analysis and computer vision, which go beyond image processing,

helps us to make decisions based on the contents of the image.


Remote sensing is the science and art of acquiring information (spectral, spatial, and temporal) about material objects, areas, or phenomena, without coming into physical contact with the objects, area, or phenomenon under investigation. Without direct contact, some means of transferring information through space must be utilized. In practice, remote sensing is the stand-off collection, through the use of a variety of devices, of information on a given object or area. In remote sensing, information transfer is accomplished by use of electromagnetic radiation (EMR). EMR is a form of energy that reveals its presence by the observable effects it produces when it strikes matter. EMR is considered to span the spectrum of wavelengths from 10^-10 µm (cosmic rays) up to 10^10 µm (broadcast wavelengths); remote sensing typically works in the region which extends from 0.30 to 15 µm. Sensors collect information at some remote distance from the subject. This process is called remote

sensing of the environment. The remote sensor data can be stored in an analog format or in digital

format the analog and digital remote sensor data can be analyzed using analog and/or digital image

processing techniques.


Scientists have made significant advances in digital image processing of remotely sensed data for scientific visualization and hypothesis testing, as well as in neural network image analysis, hyperspectral data analysis, and change detection.


Since the beginning of the space age, remarkable progress has been made in utilizing remote sensing data to describe, study, monitor and model the earth's surface and interior. Improvements in sensor technology, especially in spatial, spectral, radiometric and temporal resolution, have enabled the scientific community to operationalise the methodology. The trend of development of remote sensing is from panchromatic, multi-spectral and hyper-spectral towards ultra-spectral, with the increase in spectral resolution. At the same time, spatial resolution is reaching as fine as one metre.

A remote sensing application is a software application that processes remote sensing data.

Remote sensing applications are similar to graphics software, but they enable generating geographic

information from satellite and airborne sensor data. Remote sensing applications read specialized

file formats that contain sensor image data, georeferencing information, and sensor metadata. Some

of the more popular remote sensing file formats include: GeoTIFF, NITF, JPEG 2000, ECW,MrSID,

HDF, and NetCDF.

Remote sensing applications perform many features including:

Change detection – determines the changes from images taken at different times of the same area.

Orthorectification – warps an image to its location on the earth.

Spectral analysis – for example, using non-visible parts of the electromagnetic spectrum to determine if a forest is healthy.

Image classification – categorization of pixels based on reflectance into different land cover classes.
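As an illustration of the change-detection feature, a minimal pixel-difference sketch (the function name, threshold, and tiny images are illustrative; real systems first co-register and radiometrically correct the two acquisitions):

```python
def change_mask(before, after, threshold=30):
    # Flag pixels whose brightness changed by more than the threshold
    # between two co-registered images of the same area.
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(after, before)]

before = [[50, 50, 120], [50, 50, 120]]
after  = [[50, 200, 120], [50, 200, 125]]
print(change_mask(before, after))  # [[0, 1, 0], [0, 1, 0]]
```

The threshold absorbs small radiometric differences between acquisition dates so that only genuine land-cover change is flagged.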

Until recently, the 20 m spatial resolution of SPOT was regarded as 'high spatial resolution'. Since the launch of IKONOS in 1999, a new generation of very high spatial resolution (VHR) satellites was born, followed by QuickBird in late 2001. The widely used Landsat and SPOT sensors are now called

‘medium resolution’. Especially the new satellite sensor generation meets the strong market demands

from end-users, who are interested in image resolution that will help them observe and monitor their

specific objects of interest. The increasing variety of satellites and sensors and spatial resolutions lead


to a broader spectrum of applications, but not automatically to better results. The enormous amounts of data create a strong need for new methods to exploit these data efficiently. Airborne based sensors are also used to collect information about a given object or area. Remote sensing data collection methods can be

passive or active. Passive sensors detect natural radiation that is emitted or reflected by the object or

area being observed. In active remote sensing energy is emitted and the resultant signal that is

reflected back is measured.

Some of the latest powerful practical applications include:

Anti-terrorism

Surgical strikes

Disaster relief

Detecting sources of pollution



Digital image processing of satellite data can be primarily grouped into three categories: image rectification and restoration, enhancement, and information extraction. Image rectification is the pre-processing of satellite data for geometric and radiometric corrections. Enhancement is applied to image data in order to effectively display the data for subsequent visual interpretation. Information extraction is based on digital classification and is used for generating digital thematic maps.






Thirumalai & Thievekan

Fatima College(Autonomous),Madurai


Like oxygen, data surrounds the world today. The amount of data that we generate and consume is growing rapidly in the digitized world. Increasing use of new technologies and social media produces huge amounts of data that can yield valuable information if properly analyzed. These large datasets, generally known as big data, do not fit in traditional databases because of their sheer size. Organizations need to manage and analyze big data for better decision making and outcomes. Thus, big data analytics is receiving a great deal of attention today. In healthcare, big data analytics has the potential to advance patient care and clinical decision support. In this paper, we review the background and the various methods of big data analytics in healthcare. The paper also describes various platforms and algorithms for big data analytics and discusses their advantages and challenges. The review ends with a discussion of challenges and future directions.

Keywords: big data, cloud computing, Hadoop, big data mining, predictive analytics.


New advances in Information Technology (IT) lead to easy creation of data. For example, 72 hours of video are uploaded to YouTube every minute [26]. The healthcare sector has likewise produced huge amounts of data through record keeping and patient care. Instead of storing data in printed form, the trend is to digitize it. The digitized data can be used to improve the quality of healthcare delivery while reducing costs, and holds the promise of supporting a wide range of medical and healthcare functions. Additionally, it can provide advanced personalized care, improve patient outcomes and avoid unnecessary costs.


By definition, big data in healthcare refers to electronic health datasets so large and complex that they are difficult to handle with traditional software, hardware, data management tools and methods. Healthcare big data includes clinical data, doctors' written notes and prescriptions, medical images such as CT and MRI scan results, laboratory records, pharmacy records, insurance documents and other administrative data, electronic patient record (EPR) data, social media posts such as tweets and updates on web pages, and a vast volume of medical journals. Thus, enormous amounts of healthcare data are available for big data scientists. By understanding patterns and trends within the data, big data analytics promises to improve care, save lives and reduce costs. Consequently, big data analytics applications in healthcare take advantage of extracting insights from data for better decision making. Big data analysis is the process of examining huge amounts of data, from various data sources and in various formats, to deliver insights that can enable decision making in real time. Various analytical concepts, such as data mining and artificial intelligence, can be applied to analyze the data.

Big data analytical approaches can also be used to recognize anomalies, which become visible when huge amounts of data from different datasets are integrated. In the remainder of this paper, we first present the general background, definitions and properties of big data. Then various big data platforms and algorithms are discussed. Finally, the challenges, future directions and conclusions are presented.


Definition and properties

Amir Gandomi defined the 3 V's as follows. Volume refers to the magnitude of the data; big data sizes are generally in terabytes, petabytes or exabytes. Doug Beaver reported that Facebook currently stores 260 billion images, around 20 petabytes in size, and processes more than 1 million photos every second. Variety refers to the structural heterogeneity in a dataset; due to technological development, we can use different types of data in various formats. Although big data has been widely observed, there are still different points of view about its definition. Big data in healthcare is growing not only because of its volume but also because of the variety of data types and the speed at which it must be managed. The following definitions can help us better understand the big data concept.

In fact, big data has been defined since 2001 itself. Certainly size is the major attribute that comes to mind whenever we think of big data; however, it has other properties as well. Doug Laney (Gartner) characterized big data with a 3 V's model, which discussed the increase in volume, velocity and variety. Apache Hadoop (2010) defined big data as "datasets which could not be captured, managed and processed by general computers within an acceptable scope". In 2012, it was redefined by Gartner as: "Big data is high-volume, high-velocity and high-variety information assets that require new forms of processing to enable enhanced decision making, insight discovery and process optimization." Such data types consist of audio, video, text, images, log files and so on. Big data formats are classified into three categories: structured, unstructured and semi-structured data.

This is shown in the following figure.

Figure-1. Content formats of big data.


Structured data denotes the tabular data in spreadsheets and databases. Images, audio and video are unstructured data, which are noted as difficult to analyze. Interestingly, nowadays 90% of big data is unstructured, and its size keeps rising through the use of the internet and smartphones. The characteristics of semi-structured data lie between structured and unstructured data; it does not follow any strict standards.

XML (Extensible Markup Language) is a common example of semi-structured data. The third 'V', velocity, means the speed at which data is produced and analyzed. As mentioned earlier, the emergence of digital devices such as smartphones and sensors has allowed us to create these formats of data at an extraordinary rate. The various platforms and algorithms behind big data are discussed in detail in the later sections.
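To make the structured versus semi-structured distinction concrete, the sketch below (a hypothetical illustration, not taken from the paper) parses a small XML record with Python's standard library; note that fields may be nested or simply absent, with no fixed schema to violate. The tag names are invented for illustration.

```python
import xml.etree.ElementTree as ET

# A tiny semi-structured record: fields may be missing or nested,
# unlike a fixed-schema database row. All tag names are invented.
xml_doc = """
<patient id="p1">
  <name>Jane Doe</name>
  <vitals>
    <bp>120/80</bp>
  </vitals>
</patient>
"""

root = ET.fromstring(xml_doc)
name = root.findtext("name")       # present: "Jane Doe"
bp = root.findtext("vitals/bp")    # nested field: "120/80"
email = root.findtext("email")     # absent: None, no schema error
print(name, bp, email)
```

A relational table would reject or pad the missing field; the semi-structured form simply omits it, which is exactly what makes such data harder to analyze at scale.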


A. Big data platforms

As reported in the literature, big data uses distributed storage technology based on cloud computing rather than local storage. Some big data cloud platforms are Google cloud services, Amazon S3 and Microsoft Azure. Google's distributed file system GFS (Google File System) and its programming model MapReduce are the leaders in the field. The implementation of MapReduce has received a great deal of attention in large-scale data processing, so many organizations use big data processing frameworks built around MapReduce. Hadoop, a powerful presence in big data, was developed at Yahoo and is an open-source version of GFS [29]. Hadoop enables storing and processing big data in a distributed environment on large clusters of hardware; huge data storage and faster processing are both supported. The Hadoop Distributed File System (HDFS) provides reliable and scalable data storage. HDFS makes multiple copies of every data block and distributes them across the machines of a cluster to enable reliable access. HDFS supports cloud computing through the use of Hadoop, a distributed data processing platform.
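The MapReduce model described above can be sketched in plain Python: a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. This is a single-process illustration of the programming model only, not of Hadoop itself, and the word-count task is the standard textbook example.

```python
from itertools import groupby
from operator import itemgetter

def map_phase(lines):
    # Mapper: emit a (word, 1) pair for every word, word-count style.
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Shuffle: sort and group pairs by key.
    # Reduce: sum the counts within each group.
    counts = {}
    for key, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        counts[key] = sum(v for _, v in group)
    return counts

docs = ["big data needs big platforms", "data platforms scale"]
print(reduce_phase(map_phase(docs)))  # {'big': 2, 'data': 2, 'needs': 1, ...}
```

Hadoop runs the same two phases, but with mappers and reducers scheduled across a cluster and the shuffle performed over the network.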


Another system, 'BigTable', was developed by Google in 2006 and is used to process huge amounts of structured data; it also supports MapReduce. A related system is a scalable distributed data store built for Amazon's platform, providing high reliability, cost effectiveness, availability and performance. Tom White describes various tools for big data analysis. Hive, a framework for data warehousing on top of Hadoop, was built at Facebook; Hive with Hadoop for storage and processing meets scalability needs and is cost-effective. Hive uses a query language called HiveQL, which is similar to SQL. A scripting language for exploring large datasets is called 'Pig'.

A criticism of MapReduce is that writing mappers and reducers, including packaging the code, is hard, and so the development cycle is long; working with MapReduce therefore requires experience. Pig overcomes this criticism through its simplicity: it enables developers to write simple Pig Latin queries to process big data and thereby save time. HBase is a distributed column-oriented database built on top of the Hadoop Distributed File System. It can be used when we require random access to large datasets, and it speeds up the execution of operations. HBase can be accessed through application programming interfaces (APIs) such as REST (Representational State Transfer) and Java. HBase does not have its own query facility, so it depends on ZooKeeper. ZooKeeper manages huge amounts of coordination data, allowing distributed processes to coordinate through a namespace of data registers; this distributed service also has master and slave nodes, as in Hadoop. Another important tool is Mahout, a data mining and machine learning library. Its algorithms can be categorized as collaborative filtering, classification, clustering and mining, and they can be executed by MapReduce in a distributed mode. Big data analysis is not based on platforms alone; analysis algorithms also play a huge part.


B. Algorithmic techniques

Big data mining is the process of extracting hidden, unknown but valuable information from enormous amounts of data. This information can be used to predict future situations as an aid to the decision-making process. Useful knowledge can be found by applying data mining techniques in healthcare applications such as decision support systems. The big data generated by healthcare organizations are too complicated and vast to be handled and analyzed by conventional methods. Data mining provides the methodology to turn those piles of data into useful information for decision support. Big data mining in healthcare is about learning models to predict patients' diseases. For instance, data mining can help healthcare insurance organizations detect fraud and abuse, help healthcare institutions make customer relationship management decisions, help physicians identify effective treatments and best practices, and help patients receive improved and more economical healthcare services. Such predictive analytics is widely used in healthcare. Various data mining algorithms are discussed in 'Top 10 algorithms in data mining' by Wu X et al., which covers a variety of algorithms along with their limitations. Those algorithms address clustering, classification, regression and statistical learning, which are the central problems in data mining research. The ten algorithms discussed include C4.5, k-means, Apriori, Support Vector Machines, Naïve Bayes, EM, CART, and so on. Big data analysis also includes other methods, such as text analysis, multimedia analysis and the like.

As noted above, one of the crucial categories is predictive analytics, which draws on statistical techniques from modeling, data mining and machine learning to analyze present and historical facts and make predictions about the future. In a hospital setting, predictive methods are used to identify whether someone may be at risk of readmission or is in serious decline; this information helps clinicians make essential care decisions. It is necessary here to consider machine learning, since it is widely used in predictive analytics. The process of machine learning is practically identical to that of data mining: both search through data to find patterns.

Figure-2. Machine learning algorithms - A hierarchal view.


Hall outlined a strategy for learning rules from huge datasets; the approach is to generate a single decision scheme from a very large subset of the data. Meanwhile, Patil et al. pursued a hybrid approach combining a genetic algorithm and a decision tree to create an advanced decision tree that improves the performance and efficiency of the algorithm. With the growing knowledge in the area of big data, the variety of techniques for analyzing data is surveyed in 'Data Reduction Techniques for Large Qualitative Data Sets', which explains that the selection of a particular technique depends on the type of dataset and on the way the samples are to be analyzed. One study applied K-means clustering using Apache Hadoop.
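K-means itself is a simple alternating procedure; the single-machine sketch below shows the idea on invented one-dimensional data (Hadoop parallelizes the same assign-and-update steps across a cluster via MapReduce).

```python
def kmeans_1d(points, centers, iters=10):
    # Lloyd's algorithm on 1-D data: assign each point to its nearest
    # center, then move each center to the mean of its assigned points.
    for _ in range(iters):
        clusters = {c: [] for c in range(len(centers))}
        for p in points:
            nearest = min(range(len(centers)), key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        centers = [sum(ps) / len(ps) if ps else centers[c]
                   for c, ps in clusters.items()]
    return centers

# Invented data: two obvious groups near 1.0 and 9.0.
points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
print(kmeans_1d(points, centers=[0.0, 5.0]))  # converges to [1.0, 9.0]
```

The number of clusters must be fixed in advance, which is exactly the parameter the study below tried to validate against its log-file data.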

The authors aimed at efficiently analyzing a large dataset in a minimal amount of time. They also examined whether the accuracy and detection rate are affected by the number of fields in the log files. Their test results identify the correct number of clusters and the correct number of entries in the log files, but the rate of accuracy decreases as the number of entries increases; the results show that the accuracy still needs to be improved. Classification is one of the data mining techniques used to predict and assign predetermined data to a particular class. Different classification techniques have been proposed by researchers; the widely used methods are described by Han et al.

It incorporates the accompanying:

Neural network algorithm

Decision tree induction

Bayesian classification

Rule based classification

Support vector machine

K-Nearest neighbor classifier

Rough set approach

Genetic algorithm

Fuzzy set approach

Any of the above-mentioned classification methods can be applied to classify application-oriented data. The applicable classification method is to be chosen according to the type of application and the dimensionality of the data. It is a major challenge for researchers to select and apply the appropriate data mining classification algorithm for diagnosing medical problems. Choosing the right method is a challenging task.
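As a toy illustration of "check performance in terms of accuracy before choosing", the sketch below compares two deliberately simple classifiers — a majority-class baseline and a 1-nearest-neighbour rule — on an invented dataset. It is a stand-in for the real comparative studies surveyed below, not a reproduction of any of them.

```python
from collections import Counter

def majority_classifier(train, test_x):
    # Baseline: always predict the most frequent training label.
    majority = Counter(y for _, y in train).most_common(1)[0][0]
    return [majority for _ in test_x]

def one_nn_classifier(train, test_x):
    # 1-NN: predict the label of the closest training point.
    return [min(train, key=lambda t: abs(t[0] - x))[1] for x in test_x]

def accuracy(preds, truth):
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

# Invented training data: one numeric feature, two classes.
train = [(1.0, "low"), (1.5, "low"), (8.0, "high"), (9.0, "high"), (1.2, "low")]
test_x, test_y = [1.1, 8.5], ["low", "high"]

for name, clf in [("majority", majority_classifier), ("1-NN", one_nn_classifier)]:
    print(name, accuracy(clf(train, test_x), test_y))
```

The same accuracy metric, computed over held-out or cross-validated data, is what the studies cited below use to rank C4.5, Naïve Bayes, ANN and the other classifiers.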

The right method can be chosen only after analyzing all the available classification methods and checking their performance in terms of accuracy. Various studies have been carried out in the area of medical diagnosis using classification techniques, and the most important fact about a medical diagnosis system is the accuracy of its classifier. This paper analyzes the different classification techniques applied in medical diagnosis and compares their classification accuracy. C4.5 was applied to analyze the SEER dataset for breast cancer and classify patients as being either in the beginning stage or the pre-cancer stage; 500 records were analyzed and the accuracy achieved on the testing datasets was 93%. Shweta used the Naïve Bayes, ANN, C4.5 and decision tree algorithms for the diagnosis and visualization of breast tumors. The results show that decision trees give the highest accuracy at 93.62%, where Naïve Bayes gives 84.5%, ANN produces 86.5% and C4.5 produces 86.7%. Chaitrali used Decision Tree, Naïve Bayes and Neural Network algorithms for analyzing heart disease; the comparison shows that Naïve Bayes achieves 90.74% accuracy, while Neural Networks and Decision Trees give 100% and 99.62% respectively. Different data mining techniques have also been applied to predict heart disease; the accuracy of each algorithm was checked, with Naïve Bayes, Decision Tree and ANN achieving 86.53%, 89% and 85.53% respectively. Three data mining algorithms, ANN, C4.5 and Decision Trees, were used to analyze heart-related diseases using ECG signals; the results clearly show that the Decision Tree algorithm performs best, giving an accuracy of 97.5%. In another study the C4.5 algorithm gives 99.20% accuracy while the Naïve Bayes algorithm gives 89.60%; there these algorithms were used to estimate the surveillance of liver disorder. Christobel et al. applied the KNN method to a diabetic dataset; it gives an accuracy of 71.94% with 10-fold cross validation. C5.0 is a classification algorithm suitable for big datasets; it beats C4.5 on speed, memory and performance. The C5.0 method works by splitting the sample on the field that gives the maximum information gain.
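The "maximum information gain" criterion used by C4.5 and C5.0 is the reduction in entropy achieved by a candidate split. A minimal sketch, using an invented binary dataset where attribute `a` perfectly predicts the label and `b` is uninformative:

```python
from collections import Counter
from math import log2

def entropy(labels):
    # Shannon entropy of a label list, in bits.
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr, label="y"):
    # Gain = H(labels) minus the size-weighted entropy of each partition
    # induced by splitting on `attr`.
    labels = [r[label] for r in rows]
    parts = {}
    for r in rows:
        parts.setdefault(r[attr], []).append(r[label])
    remainder = sum(len(p) / len(rows) * entropy(p) for p in parts.values())
    return entropy(labels) - remainder

# Invented records: `a` perfectly separates the classes, `b` does not.
rows = [{"a": 0, "b": 0, "y": "no"}, {"a": 0, "b": 1, "y": "no"},
        {"a": 1, "b": 0, "y": "yes"}, {"a": 1, "b": 1, "y": "yes"}]
print(information_gain(rows, "a"))  # 1.0 bit: a perfect split
print(information_gain(rows, "b"))  # 0.0: an uninformative split
```

The tree builder chooses the attribute with the largest gain (here `a`), then repeats the computation on each resulting subset, which is exactly the recursive splitting described next.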

The C5.0 system splits the samples on the field with the greatest information gain; each sample subset obtained from the previous split is then split again, and the process continues until a subset can no longer be split, generally on another field. Finally, examining the lowest-level splits, the sample subsets that make no notable contribution to the model are dropped. The C5.0 approach easily handles multi-valued attributes and missing values in a dataset. The C5.0 rule sets have noticeably lower error rates on unseen cases for the sleep and forest datasets. The C4.5 and C5.0 rule sets have the same predictive accuracy for the income dataset, but the C5.0 rule set is smaller. The running times are not even comparable: for instance, C4.5 required about 15 hours to find the rule set for forest, but C5.0 completed the task in 2.5 minutes. C5.0 commonly uses an order of magnitude less memory than C4.5 during rule set construction [20]. So clearly the C5.0 approach is better than C4.5 in many respects. Hsi-Jen Chiang proposed a method for analyzing prognostic indicators in dental implant treatment, analyzing 1161 implants from 513 patients. Data on 23 items were taken as impact factors on the dental implants, and the 1161 implants were analyzed using the C5.0 method, producing 25 nodes. This model achieves a performance of 97.67% accuracy and 99.15% specificity.


Big data analytics not only provides enticing opportunities but also faces a lot of challenges. The challenge begins with choosing the big data analytics platform: several criteria such as availability, ease of use, scalability, level of security and continuity should be considered. Other challenges of big data analytics are data incompleteness, scalability and security. Since cloud computing plays a central part in big data analytics, cloud security must also be considered. Studies show that 90% of big data is unstructured data, yet the representation, analysis and access of diverse unstructured data remain a challenge. Data timeliness is likewise critical in various healthcare areas such as clinical decision support, whether for making decisions or for providing the information that guides them. Big data can make decision support more straightforward, faster and more accurate because decisions are based on higher volumes of data that are more current and relevant. This requires scalable analysis algorithms to produce timely results, but most of the present algorithms are inefficient in terms of big data analysis, so the availability of effective analysis algorithms is also essential. Concerns about privacy and security are prevalent, although these are increasingly being addressed by new authentication approaches and strategies that better secure patient-identifiable information.


A lot of heterogeneous medical data have become available in various healthcare organizations. The rate of electronic health record (EHR) adoption continues to rise in both inpatient and outpatient settings. Those data could be an enabling resource for deriving insights that improve patient care and reduce waste. Analyzing the huge amount of healthcare information that is newly available in digital format should enable advanced detection of good treatment, better clinical decision support and accurate predictions of who is likely to become ill. This requires high-performance computing platforms and algorithms. This paper surveys the various big data analytics platforms and algorithms, and discusses the associated challenges. Based on the analysis, although medical diagnosis applications use different algorithms, the C4.5 algorithm gives better performance. Yet improvement of the C4.5 algorithm is still required to boost accuracy, handle large amounts of data, reduce the space requirement for large datasets, support new data types and lower the error rate. The C5.0 approach overcomes these criticisms by producing more accuracy and requiring less space when the volume of data grows from thousands to millions or billions of records; it also has a lower error rate and minimizes predictive error. The C5.0 algorithm is potentially a suitable algorithm for any kind of medical diagnosis.

In the case of big data, the C5.0 algorithm works faster and gives better accuracy with less memory consumption. Despite the limited work done on big data analytics so far, much effort is needed to overcome the problems related to the aforementioned challenges. Likewise, rapid advances in platforms and algorithms can accelerate performance.



1. Alexandros Labrinidis and H. V. Jagadish, “Challenges and opportunities with big data,” Proc.

VLDB Endow. 5, pp. 2032-2033, August 2012.

2. Aneeshkumar, A.S. and C.J. Venkateswaran, “Estimating the surveillance of liver disorder

using classification algorithms”. Int. J. Comput. Applic., 57: pp. 39-42, 2012.

3. Amir Gandomi, Murtaza Haider, “Beyond the hype: Big data concepts, methods, and

analytics,” International Journal of Information Management 35, pp. 137-144, 2015

4. Chaitrali, S., D. Sulabha and S. Apte, “Improved study of heart disease prediction system using

data mining classification techniques," Int. J. Comput. Applic. 47: 44-48, 2012.

5. Doug Beaver, Sanjeev Kumar, Harry C. Li, Jason Sobel, Peter Vajgel, Facebook Inc, “Finding

a Needle in Haystack: Facebook’s Photo Storage” 2010.

6. Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. Wallach, Mike

Burrows, Tushar Chandra, Andrew Fikes, and Robert E. Gruber, “Bigtable: A Distributed Storage

System for Structured Data,” ACM Trans. Comput. Syst. 26, 2, Article 4, June 2008

7. Jason Brownlee, “Machine Learning Foundations, Master the definitions and concepts”,

Machine Learning Mastery, 2011.

8. RuleQuest Research, See5/C5.0 versus C4.5 comparison, comparison.html.

9. Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung: “The Google file system,” Proceedings of

the 19th ACM Symposium on Operating Systems Principles (SOSP), Bolton Landing, NY, USA,

October 19-22, 2003.
