A STUDY ON BIOMETRIC IMAGE PROCESSING

J. Arockia Jenifer & S. Lakshmi

Fatima College (Autonomous), Madurai

ABSTRACT

Biometric image processing and recognition can be used for attendance tracking, such as employee and student attendance, and offers more secure user access for e-commerce and other security applications. The options include fingerprint-based and face-recognition attendance systems.

Biometric scanning can solve issues pertaining to information security, although biometric systems face fundamental challenges in real-world applications. Biometric technology is used to measure and analyze personal characteristics; these characteristics include fingerprints, voice patterns, and hand measurements.

Fingerprints are considered the best and fastest method of biometric identification. Biometric identification systems are widely used for unique identification, mainly for verification and identification. A fingerprint-based attendance system is introduced for automatically monitoring and calculating student attendance in a class.

INTRODUCTION

Image processing is a method of performing operations on an image in order to obtain an enhanced image or to extract useful information from it. It is a type of signal processing in which the input is an image and the output may be an image or characteristics associated with that image; the image is analyzed and manipulated.

Digital image processing is the technology of manipulating groups of bits to enhance the quality of an image, create different perspectives, or extract information from the image digitally, with the help of computer algorithms. A digital image is how a computer records a picture. The smallest unit is the pixel, which for a grayscale image normally holds a single value in the range 0-255. For color images, each pixel holds three 0-255 values, representing the RGB channels.
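The pixel representation just described can be sketched with NumPy (an illustrative example, not from the paper): a grayscale image is a 2-D array of single 0-255 values, while a color image adds a third axis holding the three RGB values per pixel.

```python
import numpy as np

# A tiny 2x2 grayscale image: one 0-255 value per pixel.
gray = np.array([[0, 128],
                 [200, 255]], dtype=np.uint8)

# The same-size image in color: three 0-255 values (R, G, B) per pixel.
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[0, 0] = (255, 0, 0)      # a pure red pixel
rgb[1, 1] = (255, 255, 255)  # a white pixel

print(gray.shape)  # (2, 2)
print(rgb.shape)   # (2, 2, 3)
```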

Visualization in digital image processing makes objects visible that otherwise are not; image sharpening and restoration are used to create a better image; and pattern measurement quantifies the various objects in an image. Biometric image processing and recognition can be used for attendance, and biometric attendance systems are quickly gaining ground in offices and institutions. Such a system avoids problems such as missing or easily damaged paper records, and can replace the existing manual system with a more systematic one.


A fingerprint attendance system with access control not only logs the in and out times of employees but also prevents unauthorized entry into the workplace. This makes things easier for both the employee and the business, as work hours are logged automatically when the employee enters and leaves the office. It eliminates the possibility of timesheets being lost or manipulated, and it also saves a lot of time. In biometrics, image processing is required to identify an individual whose biometric image was previously stored in the database. Faces, fingerprints, and similar traits are image-based biometrics, which require image processing and pattern recognition techniques.

TECHNOLOGY USED

1. Fields of use

The system can be used in institutions, colleges, companies, and many developing fields. An automated system eliminates the need for paper tracking, making use instead of touch screens and magnetic stripe cards, which is easier for both employees and students. Automated fingerprint verification is a closely related technique used in applications such as attendance and access control systems: it compares the basic fingerprint patterns of a previously stored template and a candidate fingerprint.

2. Fingerprint Process

Biometric technologies provide a means of uniquely recognizing humans based upon one or more physical or behavioral characteristics, and can be used to establish or verify the personal identity of individuals previously enrolled.

Fingerprints are considered the best method of biometric identification: they are secure to use, unique to every person, and do not change over one's lifetime. A fingerprint recognition system operates either in verification mode or in identification mode. Automated fingerprint identification is the process of automatically matching one or many unknown fingerprints against a database of known and unknown prints.

A finger scanner consists of:

‣ A reader or scanning device.

‣ Software that converts the scanned information into digital form and compares match points.

‣ A database that stores the biometric data for comparison.


Binarization converts a grayscale image into a binary image by fixing a threshold value: pixel values above and below the threshold are set to 1 and 0 respectively. It is among the most critical tasks in a fingerprint matching system. The binarized image is then thinned using a block filter to reduce the thickness of all ridge lines to a single pixel width, so that minutiae points can be extracted effectively. Thinning preserves the outermost pixels by placing white pixels at the boundary of the image; as a result, the first and last five rows and the first and last five columns are assigned the value one.
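As a minimal sketch of the binarization step (assuming a fixed global threshold of 128, which the text does not specify), the grayscale-to-binary conversion can be written as:

```python
import numpy as np

def binarize(gray, threshold=128):
    """Convert a 0-255 grayscale image to a binary (0/1) image.

    Pixels at or above the threshold become 1, the rest 0, as
    described in the text.
    """
    return (gray >= threshold).astype(np.uint8)

# A toy 4x4 patch: dark valleys (low values) and bright ridges.
patch = np.array([[10, 200, 30, 220],
                  [15, 210, 25, 230],
                  [20, 190, 35, 240],
                  [ 5, 180, 40, 250]], dtype=np.uint8)

binary = binarize(patch)
print(binary)
```

In practice an adaptive (per-block) threshold is usually preferred over a single global one, and the subsequent thinning step is typically performed with a morphological skeletonization routine such as `skimage.morphology.skeletonize`.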

The minutiae locations and minutiae angles are derived after minutiae extraction. Minutiae matching is then used to compare the input fingerprint data with the template data; it uses the minute features of the finger.

The major minutiae features are the ridge ending, the bifurcation, and the short ridge. A ridge ending is the point at which a ridge terminates; at a bifurcation, a single ridge splits into two ridges; and short ridges are ridges significantly shorter than the average ridge length on the fingerprint. Minutiae and patterns are very important in the analysis of fingerprints, since no two fingers have been shown to be identical. During the matching process, each input minutia point is compared with the template minutiae points.
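The comparison of input minutiae against template minutiae can be illustrated with the following sketch. The (position, angle, type) representation and the tolerance values are assumptions chosen for illustration; real matchers also align the two prints (rotation and translation) before pairing points.

```python
import math

# A minutia is represented here as (x, y, angle_degrees, type).
# This is an illustrative simplification, not the paper's algorithm.

def matches(m1, m2, dist_tol=10.0, angle_tol=15.0):
    """True if two minutiae agree in position, angle, and type."""
    (x1, y1, a1, t1), (x2, y2, a2, t2) = m1, m2
    dist = math.hypot(x1 - x2, y1 - y2)
    angle_diff = abs(a1 - a2) % 360
    angle_diff = min(angle_diff, 360 - angle_diff)
    return t1 == t2 and dist <= dist_tol and angle_diff <= angle_tol

def match_score(input_minutiae, template_minutiae):
    """Fraction of template minutiae that find a matching input minutia."""
    hits = sum(
        any(matches(t, i) for i in input_minutiae)
        for t in template_minutiae
    )
    return hits / len(template_minutiae)

template = [(40, 52, 90, "ending"), (81, 30, 200, "bifurcation")]
probe    = [(42, 50, 95, "ending"), (120, 120, 10, "ending")]
print(match_score(probe, template))  # 0.5: one of two minutiae matched
```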

3. Fingerprint-Based Attendance System

Student identification is done by the student's fingerprint: for identification, the device scans the ridges and edges of the finger and creates a template. The system then searches all the templates stored in the system database and matches against each saved template. The student's fingerprint from the fingerprint scanner is used as input; the scanner can read the prints of one or more fingers of both hands. The basic student information is stored in one table, and the fingerprints in a template data table.
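The 1:N search over stored templates described above might look like this in outline. The `Student` record, the set-based template, and the `similarity` function are all hypothetical stand-ins for the database tables and the real fingerprint matcher.

```python
from dataclasses import dataclass

@dataclass
class Student:
    student_id: str
    name: str
    template: frozenset  # simplified fingerprint template

def similarity(a: frozenset, b: frozenset) -> float:
    """Toy similarity: fraction of shared template features."""
    return len(a & b) / max(len(a | b), 1)

def identify(scan: frozenset, students: list, threshold=0.6):
    """1:N identification: return the best-matching enrolled student."""
    best = max(students, key=lambda s: similarity(scan, s.template))
    return best if similarity(scan, best.template) >= threshold else None

db = [
    Student("S01", "Asha", frozenset({"f1", "f2", "f3"})),
    Student("S02", "Ravi", frozenset({"g1", "g2", "g3"})),
]
scan = frozenset({"f1", "f2", "f3", "f9"})
hit = identify(scan, db)
print(hit.name if hit else "no match")  # Asha
```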


CONCLUSION

This paper presented related work and a performance analysis for fingerprint biometrics. It mainly comprised the development of an attendance management system and a fingerprint identification system; attendance management was made automated and online. The fingerprint technology has some limitations: in identification mode a biometric sample is matched against all enrolled samples, while in verification mode it confirms a claimed identity. The system is platform independent, its results are very accurate, and it provides a real-time security system along with portability and low cost.

REFERENCES

1. Zhang Yongqiang and Liu Ji, "The design of wireless fingerprint attendance system," International Conference on Communication Technology, 2010.

2. Younhee Gil, "Access control system with high level security using fingerprints," IEEE 32nd Applied Imagery Pattern Recognition Workshop (AIPR).

3. Jain, A. K., Hong, L., and Bolle, R. (1997), "On-line fingerprint verification," IEEE Trans. on Pattern Analysis and Machine Intelligence.


MOBILE SENSING ON INTERNET OF THINGS

R. Astalakshmi & M. H. Saied Sharmila

MAM College, Trichy

ABSTRACT

The Internet of Things (IoT) is rising rapidly, and its interconnection with mobile devices and the Web is increasing. Mobile devices connected with sensors provide more developed services; they enhance the user's knowledge, experience, and awareness and encourage better living. This survey paper reviews the various applications and fields of mobile sensing when combined with IoT or Web technology. The study covers 60-70 papers comprising the most relevant information in the domain and categorizes them into eight categories based on their area of application (health, transportation, games and sports, agriculture) and the nature of the interface (crowd sensing, feedback, control). The challenges and problems are analyzed and discussed, and finally suggestions are provided to enhance the use of mobile sensing in these fields.

Keywords: Internet of Things (IoT), mobile sensing, Radio Frequency Identification (RFID), QR sensing, barcodes, NFC, Web of Things (WoT).

INTRODUCTION

The present era of wireless sensor networks, radio frequency identification (RFID), and short-range wireless communication has allowed the Internet to reach into embedded computing. Everyday objects in our surroundings are fitted with sensors, become interconnected, and are uniquely addressable; enabling them to interact through the Internet has made the Internet of Things a reality. Porting to IPv6 (128-bit addresses) provides the extensive addressing range needed to join the physical environment with the digital world (mobiles, web applications, and so on), causing the IoT to grow at a faster rate. This growing technology is supported by the Web of Things (WoT), which defines the Web mechanisms to interface this generation of physical objects. The IoT interconnects the different devices at the network layer, while the WoT extends Web practice to port the devices into it. The connection of mobile computing with the IoT and WoT has expanded with the rapid rise of smartphones carrying multiple sensors, enabling applications that measure temperature, calculate acceleration, check locations, record video, or capture photographs. There are several survey papers related to IoT and mobile sensing; here we have surveyed the fast-growing interconnection between mobile sensing and the Internet of Things or the Web. We have examined a significant number of works in the field of mobile sensing and the Internet of Things, and finally derived the different categories that organize the work on mobile applications in each field. These applications face numerous issues and challenges, which are examined in the last section.

CLASSIFICATION OF APPLICATIONS

The mobile phone has become a mandatory aspect of our life. The mobile device has extended its use to text messages, taking pictures, and even detecting our location (with built-in GPS). The Internet of Things and the Web of Things (IoT, WoT) have made it possible for the user to interact with the physical world through the mobile phone and to sense everything with its sensing applications. The work related to mobile technologies can be classified into areas such as:

1) Crowd sensing

Crowd sensing is a method wherein data is collected and shared with the help of mobile devices. Owing to the ubiquitous presence of mobile devices, it has become an enticing method for businesses and companies, including Google and Facebook, for providing services.

2) Control

Controlling appliances is another area where sensing plays an important role. Home appliances can be controlled easily by sensors regardless of how far we are from home; there are other areas too, such as offices and factories, where this can be a daily need.

3) Health

Health-related mobile applications are frequently used, especially by elderly people today. The sensors identify body movement and transfer it to the Web for analysis, after which feedback is given.


4) Feedback applications

These comprise mobile applications that offer feedback to the user about their environmental conditions. Such applications can reduce the increasingly negative effects caused by excessive consumption of environmental resources.

5) Agriculture

Mobile applications have been a boon for farmers, improving productivity and keeping them in touch with customers. They even help to check productivity and manage livestock. This is known as smart farming.

6) Games and Sports

The various physical sensors used during sports activity give information about the athlete's performance, checking heart rate and measuring speed and distance covered. Mobiles also host online games that let players compete with friends while facing shared challenges.

7) Interaction with Surroundings

Today there are innumerable applications that connect the world: social networking sites share information with the general public and spread our social messages.

8) Transportation

The sensing features of the mobile phone are also used to look for suitable parking areas, to help with traffic delays and road conditions, and to warn of accidents. In the following, each application category is presented along with the most significant works that have been done in the respective field.

[Figure: classification of mobile sensing applications — transportation, control, games and sports, health, crowd sensing, feedback, agriculture]

A. Crowd sensing

Crowd sensing is fundamentally the gathering of data and its interpretation. It essentially focuses on sensing the area where people live: it is a cooperation of groups of people sharing their data, which helps in facing environmental challenges. With the use of sensing smartphones, it helps us investigate aspects of our lives; it is a scheme of gathering, sharing, and analyzing local information. In some applications, such as NoiseTube, mobile phones work as a noise sensor to evaluate a person's noise exposure in the everyday environment. Mobile sensing makes it easy for users to get involved in different activities, and it also helps them become aware of their conditions and find solutions for improvement.

B. Control

Mobile phones can also be used as a means to control various physical devices from anywhere at any time. One of the most well-known conventional examples is home automation, which may include controlling the lights, ventilation, air conditioning, and also the security system. This approach can likewise be used in several other fields. The Internet of Things has provided the reliability to sense and monitor things within a network, and advances on the Web have been providing further options for home automation. The devices can be connected with the help of Wi-Fi and subsequently controlled through the Web. A mobile application system used to control and adjust the lights and fans of a house has been presented; this is a measure to save energy and to use appliances more conveniently. Many more solutions have been developed for home and building automation through applications of the Internet and the Web. Ocean is an automation service that controls the environment of the house through the mobile phone. Moreover, even the growth of plants can be sensed with the help of sensors and a mobile phone to manage parameters such as humidity, temperature, and watering. Thus, with the help of mobile applications, physical devices can be controlled effectively, providing convenience as well as quality of life. Energy saving can also be cited as a benefit, serving our comfort in the present and also in the future.

C. Health

In this category, mobile applications connect body sensors with the Web or with phone sensors, and are used to detect problems by assessing abnormal behavior of a body part. Body sensors check health status and send the signal to the phone, which shares it on the Web, and later analysis and testing are done by health specialists. In this approach, a wireless communication network is built that links the phone with sensors on the human body. Medical problems are becoming more common than ever in the world today; with the advances in medical science it has become quite easy to detect health-related issues, while the lifestyle of individuals is increasingly becoming unhealthy.

There are different causes of basic medical issues, such as an unbalanced diet, lack of calorie expenditure, hereditary problems, stress, and so on. These have led individuals to contend with various chronic diseases such as cancer, diabetes, blood pressure, and obesity. Mobile applications such as FoodCam are used to monitor a person's dietary patterns with respect to nutrients. Hence health has become a vital concern for people. The rise of communication technologies has brought many changes in people's lives, and the Internet of Things can likewise serve the quality of life of individuals suffering from various ailments. This can be done with the assistance of health communication technologies.

D. Feedback Applications

A feedback mobile application gives the user input about environmental impact, for instance energy usage, water use, and electricity. It acts as a loop between the user and the environmental response: these applications return feedback to the user. The ammeter system monitors or measures the energy usage of a home appliance and gives the user feedback about the appliance's energy consumption so that it can be checked or controlled. With this technology, it is also possible to compare one's energy usage with that of online neighbours. Feedback applications on smartphones also measure the traffic and the rising pollution that accompany growing populations. This helps in setting new benchmarks in remote sensing of the physical surroundings.
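The ammeter-style feedback loop described above can be sketched as follows; the sampling interval, the comparison with a neighbourhood average, and all numbers are illustrative assumptions, not data from the paper.

```python
# Hypothetical sketch of the "ammeter" feedback idea: given power
# readings (watts) sampled from a home appliance, report energy use
# and compare it with the neighbours' average.

def energy_kwh(samples_watts, interval_hours):
    """Total energy in kWh from equally spaced power samples."""
    return sum(samples_watts) * interval_hours / 1000.0

def feedback(own_kwh, neighbour_avg_kwh):
    if own_kwh > neighbour_avg_kwh:
        return "above neighbourhood average - consider reducing usage"
    return "at or below neighbourhood average"

# Four hourly readings from one appliance (watts).
readings = [1500, 1200, 800, 500]
used = energy_kwh(readings, interval_hours=1)
print(used)  # 4.0 (kWh)
print(feedback(used, 3.5))
```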

E. Agriculture

Automation greatly influences farming, producing a smart farming [16] method which allows more effective agriculture. Mobile applications connect farmers with plant practice, enabling a proper utilization of resources. With mobile computing applications like MooMonitor+, farmers measure the health and fertility of their cattle by placing sensors on their bodies. With mobile applications, farmers can sell their goods by sharing them on the Web, which enables online sales. There are countless food applications through which farmers screen produce to check the quality of the food; these can also check the expiry of produce, which leads to a reduction of food wastage. As agriculture is not a sheltered domain, smart sensing and mobile computing are essential to save the crop from pests and to manage optimal pH and temperature. In this way, connecting mobile computing with the IoT, including Web data, permits faster, more appropriate, and better informed development of farming.

F. Games and Sports

These entertainment activities are among the fastest-growing applications in the field of Internet of Things and Web of Things technology. In this category, mobile applications like BikeNet process the acceleration, distance covered, breathing rate, and so on of the sportsman. Additionally, there are sensors placed in shoes to compute the pressure generated by an athlete, which helps him improve his performance. Internet games and virtual games capture the physical presence of the mobile user, and their interaction with the outside environment upgrades the gaming experience of the mobile user.

G. Interaction with Surroundings

This is the broadest category, which connects the surroundings with web-enabled services to upgrade or to automate things. Frank et al. presented the idea of a search framework in which sensor-equipped cell phones are used to locate lost items; the phones are connected to an electronic product code system that reads RFID tags and barcodes.

Barcode scanning and resolving is possible through cell phones; pharmacy medical stores are also targeted by it, to improve the search process. NFC-enabled cell phones are used to check the availability of a particular medicine. With cell phone applications like My2cents, users can give feedback about a product and also share experiences with it. Connecting with the environment or things through mobile sensing requires various technologies such as NFC, RFID, barcodes, and QR sensing.

H. Transportation

The Web of Things and IoT aim to connect transportation with mobile applications for better driving, easy parking, and more convenience in public transport. For example, a density-based traffic controller with IoT manages the traffic light system in accordance with the density of the crowd at a particular junction, and shares the traffic information on the mobile application. Waze is a popular mobile application which connects 60 million users: it informs about the status of traffic and provides live mapping, voice navigation, alerts about road accidents, and more. The INRIX traffic app provides similar services and offers parking services based on the type of vehicle and the period of parking.

VTrack monitors traffic delays by locating drivers' phones. Road-situation monitoring and alert functions have also been provided, offering the user alerts about road conditions. ParkHere is a sensing GPS parking facility which helps the driver automatically find vehicle parking, and GoMetro is a mobile application which links the user's location with public transport for convenience. In this field, authors would like to do more research on offering better driving facilities; mobile apps like GoMetro are becoming popular, especially in the USA.
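The density-based traffic controller mentioned above could, in outline, set the green phase from the measured vehicle count on each approach road; the timing constants here are invented for illustration and are not from any of the surveyed systems.

```python
# Illustrative sketch of a density-based traffic light controller:
# the green phase at a junction grows in proportion to the measured
# vehicle density on each approach, up to a safety cap.

def green_time(vehicle_count, base=10, per_vehicle=2, max_time=60):
    """Seconds of green light for one approach road."""
    return min(base + per_vehicle * vehicle_count, max_time)

counts = {"north": 4, "east": 20, "south": 0, "west": 9}
schedule = {road: green_time(n) for road, n in counts.items()}
print(schedule)  # {'north': 18, 'east': 50, 'south': 10, 'west': 28}
```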

ANALYSIS AND OPEN PROBLEMS

As discussed above, we can see the number of applications that join the Internet of Things or Web of Things with mobile sensing to obtain better informed and improved services. In this section, we analyze and discuss the open issues associated with each application or work. From this survey of related work, the following observations have been formed:

a) Cell phone sensors are used in a participatory manner to allow users to combine information on the Web or Internet.

b) Mobile applications are used to control devices (home automation) situated in the physical environment.

c) Cell phones interface body sensors with the Web for sports and health applications.

d) Sensors are placed on a cow's body to gauge the health and fertility of the animal.

e) Mobile applications are used to check the status of traffic and provide live mapping.

f) Mobile applications share the data of various users to foster social interaction.

The sensing technologies in each of the eight application areas have been explained. Data integration from different cell phones is a costly undertaking and requires a great deal of computation. According to the survey, several billions of dollars are spent on these IoT works, and IoT revenue is expected to grow quickly in the coming years. Mobile computing has an extraordinary effect in the fields of health, transportation, control, and games or sports; these applications have the highest business value according to the survey. Mobile applications in feedback and home automation are increasing, while the use of mobile applications in smart farming or agriculture and in interaction with things is still growing more slowly. There are several challenges, discussed in various reviews, that arise when mobile sensing is joined with the Internet of Things or Web of Things, which are as follows:


1. Regular Sensing

A portion of mobile applications require continuous sensing and data transfer through sensors, which demands high computational cost, storage, and other hardware. There must be some other alternative in which sensing is a secondary task on the cell phone.

2. Mobile Crowd Sensing

It has various issues, such as how to quantify sensing quality, how to manage incomplete data, and how to utilize common services.

3. Security

Security is the most important concern in mobile sensing when interacting with IoT or WoT. Security must be ensured while sharing resources against unauthenticated users; high-security architectures like Yaler must be developed as a solution.

4. Privacy

Privacy is a significant issue and challenge for the user. While sharing personal data on the Web, privacy must be taken into consideration, and mobile applications must be furnished with strong protection.

5. Congestion

While exchanging data or sharing resources, it must be guaranteed that the data being transferred is error-free and accurate.

6. Precision

Precision is an important aspect, particularly in applications like health that comprise a body network. Here, the measurements must be dependable and exact; mobile applications must recognize these constraints and also determine what low level of measurement can be performed.

7. Personal

It is expected that your cell phone will alert you about your health. Suppose you are visiting your doctor, and his phone suggests to him nutrients and supplements for you, according to the status of your body sensors.

8. Mood Analysis

This is analogous to the personal category: it offers the user better attitudes to deal with a situation, according to their emotions as gauged by the mobile application. It encourages employees and workers to improve their wellbeing and productivity.

9. Cloud Storage

Cloud storage is fundamental in the majority of crowd applications in order to store the enormous amounts of data created by mobile applications. Regular sensing leads to the generation of big data, which must be placed or stored in an appropriate place.

10. Market Cases

It is important to realize the market potential of IoT or WoT with mobile sensing, demonstrating the automated world; the financial feasibility of IoT or Web-based resources must be clear. Each of the issues discussed above influences the different applications in a different way.

According to the authors of the many works examined in this survey, the impact and significance of each issue differ in every application. They are categorized as extremely critical, essential, important, less important, and not applicable. For instance, security is critical in health, control, and feedback; less essential in crowd sensing, games, and sports; and important in agriculture and interaction with things. It is observed that security and privacy are fundamental in the fields of health, feedback, and transportation, while cloud storage is vital in applications where big data is generated, as in crowd sensing. Solutions to the existing issues would not only raise the use of mobile sensing in IoT or WoT but also increase market value, especially in the fields of health, games, and transportation. Mobile application areas that will be significant or grow in the future include the smart city, tourism, earthquake and flood management, secure transactions, and micro-networking within the human body for health-related information.


CONCLUSION

This overview paper describes the various uses of mobile sensing. The Internet of Things and Web technologies have become fundamental services that make our lives much simpler, furnishing us with various applications on the cell phone that can sense the world. We have discussed mobile sensing and its applications in different fields; this provides more dynamic information to the public, helping in better decision making in our lives. We studied numerous research papers and concluded that the field of mobile sensing is increasingly becoming significant and important.

This paper examines the eight distinct classifications of mobile sensing (crowd sensing, feedback, health, transportation, sports, interaction with the surroundings, agriculture, control). Additionally, the challenges of mobile sensing in the various fields have been discussed. Finally, we have investigated the issues and proposed particular solutions to make the use of mobile sensing more efficient. Summing up, the Internet of Things and Web applications have enabled us to exploit new opportunities, and the critical factor is to work in this field and to develop more research to meet the challenges.


REFERENCES<br />
1. P. Sommer, L. Schor, and R. Wattenhofer, "Towards a zero-configuration wireless sensor network architecture for smart buildings," in Proc. BuildSys, Berkeley, CA, USA, 2009, pp. 31-36.<br />
2. D. Yazar and A. Dunkels, "Efficient application integration in IP-based sensor networks," in Proc. 1st Workshop Embedded Sens. Syst. Energy Efficiency Build., Berkeley, CA, USA, Nov. 2009, pp. 43-48.<br />
3. J. W. Hui and D. E. Culler, "IP is dead, long live IP for wireless sensor networks," in Proc. 6th Conf. Netw. Embedded Sensor Syst. (SenSys), Raleigh, NC, USA, Nov. 2008.<br />
4. E. Wilde, "Putting things to REST," School Inf., UC Berkeley, CA, USA, UCB iSchool Tech. Rep. 2007-015, Nov. 2007.<br />
5. A. Kamilaris, A. Pitsillides, and M. Yiallouros, "Building energy-aware smart homes using Web technologies," J. Ambient Intell. Smart Environ., vol. 5, no. 2, 2013.<br />
6. N. D. Lane et al., "A survey of mobile phone sensing," IEEE Commun. Mag., vol. 5, no. 2, pp. 161-186, 2013.<br />
7. G. G. Meyer, K. Främling, and J. Holmström, "Intelligent products: A survey," Comput. Ind., vol. 60, no. 3, pp. 137-148, 2009.


SURVEY ON IMAGE PROCESSING METHODS FOR FARMER PROFIT AND PRODUCTION<br />

M.Bhuvaneswari & R.Vinci Vinola<br />

Fatima College(Autonomous), Madurai<br />

ABSTRACT<br />

India is an agricultural country wherein about 70% of the population depends on agriculture. Agriculture has changed more in the past century than it had since farming began many millennia ago. Research in agriculture is aimed at increasing productivity and food quality at reduced expenditure and with increased profit. Computer technologies have now been shown to improve agricultural productivity in a number of ways, and one technique which is emerging as a useful tool is image processing. These techniques produce an output for an input image acquired through any imaging technique. The results are informative and supportive for farmers, agro-based industries and marketers, and provide timely support for decision making at an affordable cost. This paper presents a short survey on using image processing techniques to assist researchers and farmers in improving agricultural practices.<br />

Keywords<br />

Precision Agriculture; Image Processing; Profit for Farmer.


INTRODUCTION<br />

With the continued demand for food from an increasing population, climate change, and political instability, the agriculture industry continues to search for new ways to improve productivity and sustainability. This has led researchers from multiple disciplines to search for ways to incorporate new technologies and precision into the agronomic system. There is a need for efficient and precise farming techniques that enable farmers to apply minimal inputs for high production. Precision agriculture is one such technique that helps farmers meet both of the above needs: it can assist farmers in decision making about seed selection, crop production, disease monitoring, weed control, and pesticide and fertilizer usage. It analyzes and controls the farmer's requirements using location-specific information and imagery techniques, Schellberg et al. (2008). In many parts of the world, mainly in rural areas, this kind of data is inaccessible and the cost of procuring these techniques is not affordable for farmers, Mondal and Basu (2009). The trend towards precision farming techniques is reliant on location-specific data, including the capture of multiple image databases. This paper surveys image processing techniques for agriculture that can be used for decision making.<br />

IMAGE PROCESSING IN AGRICULTURAL CONTEXT<br />

Image processing techniques can be used to enhance agricultural practices by improving the accuracy and consistency of processes while reducing the farmer's manual monitoring. They offer flexibility and can effectively substitute for the farmer's visual decision making. Common terms in image processing include:<br />
• Image acquisition: retrieving a digital image from a physical source by capturing it with sensors.<br />
• Gray scale conversion: converting a color or multi-channel digital image to a single channel in which each pixel possesses a single intensity value.<br />
• Image background extraction: retrieving foreground objects from the image background.<br />
• Image enhancement: improving the perception of image details for human and machine analysis.<br />
• Image histogram analysis: analyzing a pixel plot in terms of the peaks and valleys formed by pixel frequencies and pixel intensities.<br />
• Binary image segmentation: separating foreground objects from the background in a binary (black and white) image.<br />
• Color image segmentation: separating image objects, or regions of interest, in a color image.<br />
• Image filtering: distorting an image in a desired way using a filter.<br />
• Feature extraction: defining a set of features, or image characteristics, that efficiently and meaningfully represent the information important for analysis and classification.<br />
• Image registration: transforming different sets of data into one coordinate system.<br />
• Image transition: changing state or defining a condition between two or more images.<br />
• Image object detection: finding instances of real-world objects such as weeds, plants, and insects in images or video sequences.<br />
• Image object analysis: extracting reliable and meaningful information from images.<br />
Table (1) lists applications, models, and systems developed using image processing techniques, with remarks on their accuracy and usability.<br />

REFERENCES | OBJECTIVE OF THE STUDY | METHOD | RESULT<br />
Payne et al., 2014 | To predict mango yield | Image processing | Mango fruits were detected with 78.3%<br />
Prathibha et al., 2014 | Early pest detection | Image processing | System can detect the pests in tomato fruit in the early stage<br />
Intaravanne & Sumriddetch, 2015 | To predict the needed fertilizer for rice | Android device and image processing | Leaf color levels can be accurately identified<br />
Mainkar et al., 2015 | To automatically detect plant leaf diseases | Image analysis | Leaf diseases were detected by the proposed system<br />
Gopal, 2016 | To develop an auto irrigation and pest detection system | Image processing | Developed model was useful for irrigation and pest detection<br />
Hanson et al., 2016 | For detection and identification of plant disease | Image processing, neural networks | Watermelon leaf diseases were detected with 75.9% accuracy<br />
Zhao et al., 2016 | To predict oriental fruit moths | Digital image processing technology | Developed system was successful for prediction (0.99, P<br />
Maharlooei et al., 2017 | To detect and count different sized soybean aphids on a soybean leaf | Image processing technique | Image captured with an inexpensive digital camera gave satisfactory results<br />
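The histogram analysis listed among the techniques above can be sketched in a few lines. This is a minimal pure-Python version for illustration; real systems typically use library routines.

```python
from collections import Counter

def intensity_histogram(gray_image, bins=8):
    """Count how many pixels of a single-channel image (values 0-255)
    fall into each of `bins` equal-width bins. Peaks and valleys in
    this plot guide threshold selection for segmentation."""
    width = 256 // bins
    counts = Counter(min(p // width, bins - 1)
                     for row in gray_image for p in row)
    return [counts.get(b, 0) for b in range(bins)]

# A tiny image with a dark cluster and a bright cluster of pixels:
gray = [[10, 20, 200], [210, 220, 15]]
print(intensity_histogram(gray))  # [3, 0, 0, 0, 0, 0, 3, 0]
```

The two separated peaks (one dark, one bright) are exactly the pattern that makes a foreground/background threshold easy to place.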

Gray Scale Conversion<br />

After image acquisition, pre-processing of the images involves gray scale conversion, Eerens et al. (2014) and Jayas et al. (2000). Du and Sun (2004) highlight gray scale conversion as an intermediate step in food quality evaluation models. They reported various applications evaluating food items like fruits, fishery, grains, meat and vegetables, and the image processing techniques applicable to the different assessments. Other work (2013) reported the use of gray-level determination of foreign fibers in cotton production images, which enhanced background separation and segmentation. Jayas et al. (2000) also demonstrated image analysis techniques using neural networks for classification of agricultural products; this study reported that multi-layer neural network classifiers perform best in categorizing agricultural products.<br />
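As a minimal sketch of the gray scale conversion step, the function below uses the common ITU-R BT.601 luminosity weights; this weighting is an assumption here, since actual systems may use other coefficients or library routines.

```python
def to_grayscale(rgb_image):
    """Convert an RGB image (nested lists of (r, g, b) tuples, 0-255)
    to a single-channel grayscale image in which each pixel holds a
    single intensity value, using ITU-R BT.601 luminosity weights."""
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in rgb_image
    ]

# A 1x2 toy "image": one red pixel and one white pixel.
image = [[(255, 0, 0), (255, 255, 255)]]
print(to_grayscale(image))  # [[76, 255]]
```

The single-channel output is what the histogram analysis and segmentation steps described above operate on.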

Image Background Extraction in Applications<br />

Because the background is of minimal use, it is preferable to extract it from the images. Images whose regions of interest are solid objects against a dissimilar background are easily processed in this way, since the gray levels of the objects of interest and the image background are distributed non-uniformly, Eerens et al. (2014) and Jayas et al. (2000). Following this understanding, Du and Sun (2004) report various applications where the background is not taken into consideration while evaluating food product quality, including pizza, corn germplasm and cob, etc. Similarly, Wu et al. (2013) extracted the background of foreign-fiber images detected in cotton products, which aids in the clear detection of foreign fibers that were otherwise difficult to trace. A survey of advanced techniques by Sankaran et al. (2010) highlights the use of fluorescence spectroscopy and imaging, visible and infrared spectroscopy, and hyperspectral imaging in detecting plant diseases, and points to future enhancements that could focus on the metabolic activities of plants and trees releasing volatile organic compounds.<br />
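A minimal sketch of background extraction by global thresholding follows. The fixed threshold and the bright-background assumption are illustrative only; in practice the threshold is usually chosen automatically, for example by Otsu's method.

```python
def separate_background(gray_image, threshold=128):
    """Binary segmentation sketch: pixels darker than or equal to
    `threshold` are treated as foreground objects of interest (True),
    brighter pixels as background (False). Assumes dark objects on a
    bright background, as in foreign fibers on cotton."""
    return [[pixel <= threshold for pixel in row] for row in gray_image]

# A dark object (value 40) on a bright background (value 200):
gray = [[200, 200, 200],
        [200,  40, 200],
        [200, 200, 200]]
mask = separate_background(gray)
print(mask[1][1], mask[0][0])  # True False
```

The resulting boolean mask keeps only the object pixels for the later feature extraction and classification stages.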

DISCUSSION<br />

The review of survey papers on the use of image processing techniques showed that these techniques can be useful in assisting agricultural scientists. Deep learning, which has improved applications in computer vision, automatic speech recognition and natural language processing, Bengio (2009), is emerging as the preferred approach. The review found that crop identification and disease detection are the most common uses for the technique.<br />

CONCLUSION<br />

This paper presented a survey of image processing techniques used in an agricultural context. Processes like segmentation, feature extraction and clustering can be employed to interrogate images of crops, and the most appropriate techniques must be selected to assist decision making. Image processing techniques have been used across a vast range of agricultural production contexts; they can be effective in food quality assessment, fruit defect detection and weed/crop classification, and there are a number of applications and methods to choose from for implementation to real-time needs. While the existing applications sustain the needs of today, more and more new methods are evolving to assist and ease farming practices. It is evident that these approaches will all contribute to the wider goal of optimizing global production. One factor which could accelerate the development of image processing techniques for agriculture is the availability of online data sets. No online image databases are available for food quality assessment, fruit defect detection or weed/crop classification. Similar to the databases of handwritten or printed documents, characters and faces, there is a need for agricultural databases that will ease the testing and verification of newly developed image processing methods.<br />

REFERENCES<br />
1. L. P. Saxena and L. J. Armstrong (2014), A survey of image processing techniques for agriculture.<br />
2. A. B. Ehsanirad and Y. H. S. Kumar (2010), Leaf recognition for plant classification using GLCM and PCA methods, Oriental Journal of Computer Science & Technology, 3 (1), pp. 31-36.<br />
3. Image analysis applications in plant growth and health assessment, Journal of Agricultural Faculty of Mustafa Kemal University (2017).<br />
4. C. C. Yang, S. O. Prasher, J. A. Landry, J. Perret and H. S. Ramaswamy (2000), Recognition of weeds with image processing and their use with fuzzy logic for precision farming, Canadian Agricultural Engineering, 42 (4), pp. 195-200.<br />
5. D. S. Jayas, J. Paliwal and N. S. Visen (2000), Multi-layer neural networks for image analysis of agricultural products, J. Agric. Engng Res., 77 (2), pp. 119-128.<br />
6. D. J. Mulla (2013), Twenty five years of remote sensing in precision agriculture: Key advances and remaining knowledge gaps, Biosystems Engineering, 114, pp. 358-371.<br />
7. E. E. Kelman and R. Linker (2014), Vision-based localisation of mature apples in tree images using convexity, Biosystems Engineering, 114, pp. 174-185.<br />
8. C. J. Du and D. W. Sun (2004), Recent developments in the applications of image processing techniques for food quality evaluation, Trends in Food Science & Technology, 15, pp. 230-249.<br />
9. C. Puchalski, J. Gorzelany, G. Zagula and G. Brusewitz (2008), Image analysis for apple defect detection, Biosystems and Agricultural Engineering, 8, pp. 197-205.


AN OVERVIEW OF CLOUD COMPUTING<br />

R. Thirumalai Kumar& V.Vinoth Kumar<br />

NMSSVN College,Madurai<br />

ABSTRACT<br />

The development of cloud computing services is speeding up the rate at which organizations outsource their computational services or sell their idle computational resources. Even though migrating to the cloud remains a tempting trend from a financial perspective, there are several other aspects that companies must take into account before deciding to do so. Cloud computing has become a key IT buzzword; it is in its infancy in terms of market adoption, but it is a key IT megatrend that will take root. Aiming to give a better understanding of this complex scenario, this paper gives an overview of cloud computing as an emerging technology.<br />

INTRODUCTION<br />

Cloud computing is a subscription-based service through which you can obtain networked storage space and computer resources. Cloud computing entails running computer/network applications that are on other people's servers using a simple user interface or application format. The cloud technology has many benefits, which explain its popularity. First, companies can save a lot of money; second, they are able to avoid the mishaps of regular server protocols. When a company adopts a new, expensive piece of software whose license can only be used once, it no longer has to buy a copy for each new computer added to the network; instead, it can share the application installed on a virtual server somewhere in the 'cloud'.


How can we use the cloud?<br />

The cloud makes it possible for you to access your information from anywhere at any time. While a traditional computer setup requires you to be in the same physical location as the hardware that stores your data, the cloud removes that need. Your cloud provider can both own and house the hardware and software necessary to run your home or business applications.<br />

This is especially helpful for businesses that cannot afford the same amount of hardware and<br />

storage space as a bigger company. Small companies can store their information in the cloud,<br />

removing the cost of purchasing and storing memory devices. Additionally, because you only<br />

need to buy the amount of storage space you will use, a business can purchase more space or<br />

reduce their subscription as their business grows or as they find they need less storage space.<br />

One requirement is that you need to have an internet connection in order to access the cloud. This<br />

means that if you want to look at a specific document you have housed in the cloud, you must<br />

first establish an internet connection either through a wireless or wired internet or a mobile<br />

broadband connection. The benefit is that you can access that same document from wherever you<br />

are with any device that can access the internet. These devices could be a desktop, laptop, tablet,<br />

or phone. This can also help your business to function more smoothly because anyone who can<br />

connect to the internet and your cloud can work on documents, access software, and store data.<br />

Imagine picking up your smart phone and downloading a pdf document to review instead of<br />

having to stop by the office to print it or upload it to your laptop. This is the freedom that the<br />

cloud can provide for you or your organization.


CHARACTERISTICS OF THE CLOUD<br />

‣ The "no-need-to-know" of the underlying details of the infrastructure: applications interface with the infrastructure via APIs.<br />
‣ The "flexibility and elasticity" that allows these systems to scale up and down at will, utilizing resources of all kinds (CPU, storage, server capacity, load balancing, and databases).<br />
‣ The "pay as much as used and needed" type of utility computing, and the "always on, anywhere and any place" type of network-based computing.<br />
‣ Clouds are transparent to users and applications; they can be built in multiple ways: branded products, proprietary open source, hardware or software, or just off-the-shelf PCs.<br />
In general, they are built on clusters of PC servers and off-the-shelf components plus open source software, combined with in-house applications and/or system software.<br />

TYPES OF CLOUDS<br />

There are different types of clouds that you can subscribe to depending on your needs. As<br />

a home user or small business owner, you will most likely use public cloud services.


1. Public Cloud - A public cloud can be accessed by any subscriber with an internet<br />

connection and access to the cloud space.<br />

2. Private Cloud - A private cloud is established for a specific group or organization and<br />

limits access to just that group.<br />

3. Hybrid Cloud - A hybrid cloud is essentially a combination of at least two clouds,<br />

where the clouds included are a mixture of public and private.<br />

CHOOSING A CLOUD PROVIDER<br />

Each provider serves a specific function, giving users more or less control over their cloud<br />

depending on the type. When you choose a provider, compare your needs to the cloud services<br />

available. Your cloud needs will vary depending on how you intend to use the space and<br />

resources associated with the cloud. If it will be for personal home use, you will need a different<br />

cloud type and provider than if you will be using the cloud for business. Keep in mind that your<br />

cloud provider will be pay-as-you-go, meaning that if your technological needs change at any<br />

point you can purchase more storage space (or less for that matter) from your cloud provider.<br />
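The pay-as-you-go idea described above can be illustrated with a toy calculation; the per-GB rate used here is hypothetical, not any real provider's price.

```python
def monthly_storage_cost(gb_stored, rate_per_gb=0.02):
    """Pay-as-you-go sketch: the bill scales with what you actually
    store in a given month. The $0.02/GB-month rate is hypothetical."""
    return gb_stored * rate_per_gb

# Growing from 100 GB to 500 GB only changes next month's bill;
# no hardware purchase or capacity planning is needed.
print(monthly_storage_cost(100))  # 2.0
print(monthly_storage_cost(500))  # 10.0
```

Shrinking your subscription works the same way in reverse, which is the flexibility the text contrasts with buying fixed hardware up front.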

There are three types of cloud providers that you can subscribe to: Software as a Service<br />

(SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). These three types


differ in the amount of control that you have over your information and, conversely, how much you can expect your provider to do for you. Briefly, here is what you can expect from each type.<br />

1. Software as a Service - A SaaS provider gives subscribers access to both resources and<br />

applications. SaaS makes it unnecessary for you to have a physical copy of software to install on<br />

your devices. SaaS also makes it easier to have the same software on all of your devices at<br />

once by accessing it on the cloud. In a SaaS agreement, you have the least control over the cloud.<br />

2. Platform as a Service - A PaaS system goes a level above the Software as a Service<br />

setup. A PaaS provider gives subscribers access to the components that they require to develop and<br />

operate applications over the internet.<br />

3. Infrastructure as a Service - An IaaS agreement, as the name states, deals primarily with<br />

computational infrastructure. In an IaaS agreement, the subscriber completely outsources the<br />

storage and resources, such as hardware and software, that they need.<br />
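The division of responsibility across the three service models can be summarized in a small sketch. The exact split shown is a common simplification used for illustration, not a formal definition.

```python
# Who manages what in each service model (a common simplification).
SERVICE_MODELS = {
    "SaaS": {"subscriber_manages": [],
             "provider_manages": ["applications", "platform", "infrastructure"]},
    "PaaS": {"subscriber_manages": ["applications"],
             "provider_manages": ["platform", "infrastructure"]},
    "IaaS": {"subscriber_manages": ["applications", "platform"],
             "provider_manages": ["infrastructure"]},
}

def subscriber_control(model):
    """More layers to manage means more control over your information."""
    return len(SERVICE_MODELS[model]["subscriber_manages"])

# Control increases from SaaS (least) through PaaS to IaaS (most):
print(subscriber_control("SaaS") < subscriber_control("PaaS")
      < subscriber_control("IaaS"))  # True
```

This mirrors the text's point that a SaaS agreement gives you the least control, while an IaaS agreement outsources only the infrastructure.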

OPPORTUNITIES AND CHALLENGES OF CLOUD COMPUTING<br />
The use of the cloud provides a number of opportunities:<br />
• It enables services to be used without any understanding of their infrastructure.<br />
• Cloud computing works using economies of scale:<br />
o It potentially lowers the outlay expense for start-up companies, as they would no longer need to buy their own software or servers.<br />
o Costs would be set by on-demand pricing.<br />
o Vendors and service providers claim lower costs by establishing an ongoing revenue stream.<br />
• Data and services are stored remotely but accessible from "anywhere".<br />
In parallel, there has been a backlash against cloud computing:<br />
• Use of cloud computing means dependence on others, and that could possibly limit flexibility and innovation:<br />
o The others are likely to become the bigger Internet companies, like Google and IBM, who may monopolise the market.<br />
o Some argue that this use of supercomputers is a return to the time of mainframe computing that the PC was a reaction against.<br />
• Security could prove to be a big issue:<br />
o It is still unclear how safe out-sourced data is, and when using these services ownership of data is not always clear.<br />
• There are also issues relating to policy and access:<br />
o If your data is stored abroad, whose policy do you adhere to?<br />
o What happens if the remote server goes down? How will you then access your files?<br />
o There have been cases of users being locked out of accounts and losing access to data.<br />

ADVANTAGES OF CLOUD COMPUTING<br />

Lower computer costs<br />

o You do not need a high-powered and high-priced computer to run cloud<br />

computing's web-based applications.<br />

o Since applications run in the cloud, not on the desktop PC, your desktop PC does<br />

not need the processing power or hard disk space demanded by traditional desktop<br />

software.<br />

o When you are using web-based applications, your PC can be less expensive, with a<br />

smaller hard disk, less memory, more efficient processor...<br />

o In fact, your PC in this scenario does not even need a CD or DVD drive, as no<br />

software programs have to be loaded and no document files need to be saved.


Improved performance<br />

o With few large programs hogging your computer's memory, you will see better<br />

performance from your PC.<br />

o Computers in a cloud computing system boot and run faster because they have<br />

fewer programs and processes loaded into memory…<br />

Reduced software costs<br />
o Instead of purchasing expensive software applications, you can get most of what you need from free web-based applications today, such as the Google Docs suite.<br />
o That is better than paying for similar commercial software, which alone may be justification for switching to cloud applications.<br />

Instant software updates<br />

o Another advantage to cloud computing is that you are no longer faced with<br />

choosing between obsolete software and high upgrade costs.<br />

o When the application is web-based, updates happen automatically<br />

• available the next time you log into the cloud.<br />

o When you access a web-based application, you get the latest version<br />

• without needing to pay for or download an upgrade.<br />

Improved document format compatibility<br />

o You do not have to worry about the documents you create on your machine being<br />

compatible with other users' applications or OSes<br />

o There are potentially no format incompatibilities when everyone is sharing<br />

documents and applications in the cloud.


Unlimited storage capacity<br />

o Cloud computing offers virtually limitless storage.<br />

o Your computer's current 1 Tbyte hard drive is small compared to the hundreds of<br />

Tbytes available in the cloud.<br />

Increased data reliability<br />

o Unlike desktop computing, where a hard disk crash can destroy all your valuable data, a computer crashing in the cloud should not affect the storage of your data.<br />

• if your personal computer crashes, all your data is still out there in the<br />

cloud, still accessible<br />

o In a world where few individual desktop PC users back up their data on a regular<br />

basis, cloud computing is a data-safe computing platform!<br />

Universal document access<br />

o Leaving a needed document behind is not a problem with cloud computing, because you do not take your documents with you.<br />

o Instead, they stay in the cloud, and you can access them whenever you have a<br />

computer and an Internet connection<br />

o Documents are instantly available from wherever you are<br />

Latest version availability<br />

o When you edit a document at home, that edited version is what you see when you<br />

access the document at work.<br />

o The cloud always hosts the latest version of your documents<br />

o as long as you are connected, you are not in danger of having an outdated version<br />

Easier group collaboration


o Sharing documents leads directly to better collaboration.<br />

o Many users do this, as it is an important advantage of cloud computing<br />

o Multiple users can collaborate easily on documents and projects<br />

Device independence<br />

o You are no longer tethered to a single computer or network.<br />

o Changes to computers, applications and documents follow you through the cloud.<br />

o Move to a portable device, and your applications and documents are still available.<br />

DISADVANTAGES OF CLOUD COMPUTING<br />

Requires a constant Internet connection<br />

o Cloud computing is impossible if you cannot connect to the Internet.<br />

o Since you use the Internet to connect to both your applications and documents, if<br />

you do not have an Internet connection you cannot access anything, even your<br />

own documents.<br />

o A dead Internet connection means no work and in areas where Internet<br />

connections are few or inherently unreliable, this could be a deal-breaker.<br />

Does not work well with low-speed connections<br />

o Similarly, a low-speed Internet connection, such as that found with dial-up<br />

services, makes cloud computing painful at best and often impossible.<br />

o Web-based applications require a lot of bandwidth to download, as do large<br />

documents.<br />

Features might be limited<br />

o This situation is bound to change, but today many web-based applications simply<br />

are not as full-featured as their desktop-based counterparts.<br />

• For example, you can do a lot more with Microsoft PowerPoint than with<br />

Google Presentation's web-based offering


Can be slow<br />

o Even with a fast connection, web-based applications can sometimes be slower than<br />

accessing a similar software program on your desktop PC.<br />

o Everything about the program, from the interface to the current document, has to<br />

be sent back and forth from your computer to the computers in the cloud.<br />

o If the cloud servers happen to be backed up at that moment, or if the Internet is<br />

having a slow day, you would not get the instantaneous access you might expect<br />

from desktop applications.<br />

Stored data might not be secure<br />

o With cloud computing, all your data is stored on the cloud.<br />

• The question is: how secure is the cloud?<br />

o Can unauthorised users gain access to your confidential data?<br />

Stored data can be lost<br />

o Theoretically, data stored in the cloud is safe, replicated across multiple machines.<br />

o But on the off chance that your data goes missing, you have no physical or local<br />

backup.<br />

• Put simply, relying on the cloud puts you at risk if the cloud lets you down.<br />

THE FUTURE<br />

• Many of the activities loosely grouped together under cloud computing have already been happening, and centralised computing activity is not a new phenomenon.<br />
• Grid computing was the last research-led centralised approach.<br />
• However, there are concerns that the mainstream adoption of cloud computing could cause many problems for users.<br />
• Many new open source systems are appearing that you can install and run on your local cluster.<br />
– It should be possible to run a variety of applications on these systems.


CONCLUSION<br />

To summarize, the cloud provides many options for the everyday computer user as well as<br />

large and small businesses. It opens up the world of computing to a broader range of uses and<br />

increases the ease of use by giving access through any internet connection. However, with this<br />

increased ease also come drawbacks. You have less control over who has access to your<br />

information and little to no knowledge of where it is stored. You also must be aware of the<br />

security risks of having data stored on the cloud. The cloud is a big target for malicious<br />

individuals and may have disadvantages because it can be accessed through an unsecured internet<br />

connection. If you are considering using the cloud, be certain that you identify what information<br />

you will be putting out in the cloud, who will have access to that information, and what you will<br />

need to make sure it is protected. Additionally, know your options in terms of what type of cloud<br />

will be best for your needs, what type of provider will be most useful to you, and what the<br />

reputation and responsibilities of the providers you are considering are before you sign up.<br />

REFERENCES<br />
1. http://csrc.nist.gov/publications/drafts/800-145/Draft-SP-800-145_cloud-definition.pdf<br />
2. Won Kim, "Cloud Computing: Status and Prognosis," Journal of Object Technology, vol. 8, no. 1, January-February 2009, pp. 65-72.<br />
3. http://www.jot.fm/, Cloud Computing: Today and Tomorrow.


A STUDY ON IMAGE PROCESSING IN AGRICULTURE TO DETECT PLANT DISEASE<br />
A. L. Deepika & S. Vinothini<br />
Fatima College (Autonomous), Madurai<br />
INTRODUCTION<br />

Image processing is a method to perform some operations on an image, in order to get an<br />

enhanced image or to extract some useful information from it. It is a type of signal processing in<br />

which input is an image and output may be image or characteristics/features associated with that<br />

image. Nowadays, image processing is among the most rapidly growing technologies. It forms a core research area within engineering and computer science.<br />

Image processing basically includes the following three steps:<br />

1. Importing the image via image acquisition tools<br />

2. Analyzing and manipulating the image<br />

3. Output in which result can be altered image or report that is based on image analysis.<br />
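These three steps can be sketched in Python; the 3x3 grid of grayscale values below is made-up example data standing in for a real image, so no imaging library is needed:<br />

```python
# 1. Importing the image (here: a hypothetical 3x3 grayscale image, 0-255)
image = [
    [ 10,  50,  90],
    [120, 200, 240],
    [ 30,  60, 250],
]

# 2. Analyzing and manipulating: invert every pixel (a simple enhancement)
inverted = [[255 - p for p in row] for row in image]

# 3. Output: either the altered image or a report based on image analysis
bright_pixels = sum(p > 200 for row in image for p in row)
report = f"{bright_pixels} of {sum(len(r) for r in image)} pixels are bright"

print(inverted[0])   # first row of the altered image
print(report)
```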

There are two types of methods used for image processing namely, analog and digital<br />

image processing. Analog image processing can be used for the hard copies like printouts and<br />

photographs. Image analysts use various fundamentals of interpretation while using these visual<br />

techniques. Digital image processing techniques help in manipulation of the digital images by<br />

using computers. The three general phases that all types of data have to undergo while using the<br />

digital technique are pre-processing, enhancement and display, and information extraction. DIP is the<br />

use of computer algorithms to create, process, communicate, and display digital images. The<br />

input of that system is a digital image and the system process that image using efficient<br />

algorithms, and gives an image as an output. The process of digital image processing is defined in<br />

the form of phases.


1. Image Acquisition<br />

2. Image Pre-Processing<br />

3. Image Segmentation<br />

4. Feature Extraction<br />

5. Classification based on Statistical Analysis<br />
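The five phases above can be sketched as a chain of functions; every function body and threshold here is illustrative, not a specific library API:<br />

```python
def acquire():               # Image Acquisition: produce raw pixel data
    return [40, 80, 120, 200, 220, 60]

def preprocess(img):         # Image Pre-Processing: simple contrast stretch
    lo, hi = min(img), max(img)
    return [round(255 * (p - lo) / (hi - lo)) for p in img]

def segment(img):            # Image Segmentation: threshold to a mask
    return [1 if p > 128 else 0 for p in img]

def extract_features(mask):  # Feature Extraction: e.g. foreground fraction
    return sum(mask) / len(mask)

def classify(feature):       # Classification based on statistical analysis
    return "diseased" if feature > 0.4 else "healthy"

result = classify(extract_features(segment(preprocess(acquire()))))
print(result)
```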

DETECTING THE PLANT DISEASE<br />

In recent years the importance of sustainable agriculture has risen to become one of the most<br />

important issues in agriculture. In addition, plant diseases continue to play a major limiting role in<br />

agricultural production. The control of plant diseases using classical pesticides raises serious concerns<br />

about food safety, environmental quality and pesticide resistance, which have dictated the need for<br />

alternative pest management techniques. In particular, nutrients could affect the disease tolerance or<br />

resistance of plants to pathogens. Some diseases are present even though they are not visible to the<br />

naked eye, which makes them difficult to detect. Earlier, a microscope was used to detect disease,<br />

but it is impractical to inspect each and every leaf and plant this way, so a fast and effective<br />

alternative is remote sensing. Detection and recognition of plant diseases using machine learning<br />

is very fruitful, as it can identify diseases from their symptoms at the earliest stage. Computer<br />

processing systems have been developed for agricultural applications such as the detection of leaf<br />

diseases, fruit diseases, etc. In all these techniques, digital images are collected using a digital<br />

camera, and image processing techniques are applied to these images to extract useful information<br />

that is necessary for further analysis. Digital image processing is used for the implementation,<br />

which takes the image as input, performs some operations on it, and gives the required or<br />

expected output.


Image processing can be used in agricultural applications for the following purposes:<br />

1. To detect diseased leaf, stem, fruit<br />

2. To quantify affected area by disease.<br />

3. To find shape of affected area.<br />

4. To determine color of affected area<br />

5. To determine size & shape of fruits<br />
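Purpose 2 above (quantifying the area affected by disease) can be sketched on a toy leaf image of (R, G, B) pixels; the "red exceeds green" rule below is a simplistic stand-in for a real disease-colour model:<br />

```python
# Hypothetical 6-pixel leaf image; healthy pixels are green-dominant,
# diseased (brown/yellow) spots are red-dominant.
leaf = [
    (30, 160, 40), (200, 120, 30), (35, 150, 45),
    (210, 110, 20), (25, 170, 50), (40, 155, 35),
]

affected = [px for px in leaf if px[0] > px[1]]   # red channel > green channel
percent = 100.0 * len(affected) / len(leaf)       # affected area in percent
print(f"affected area: {percent:.1f}%")
```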

IMAGE ACQUISITION<br />

Image acquisition is required to collect the actual source image. An image must be<br />

converted to numerical form before processing. This conversion process is called digitization.<br />
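Digitization, as described above, amounts to sampling a continuous signal and quantizing each sample to a fixed number of levels; the sample values here are invented for illustration:<br />

```python
# Quantize sampled intensities in [0, 1) to 4 levels (2 bits per sample).
samples = [0.05, 0.30, 0.62, 0.95]            # sampled analogue intensities
levels = 4
digital = [int(v * levels) for v in samples]  # quantized values 0..3
print(digital)
```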

IMAGE PREPROCESSING<br />

The principal objective of image enhancement is to process an image for a specific task<br />

so that the processed image is more suitable for that task than the original image. Image enhancement<br />

methods basically fall into two domains: the spatial domain and the frequency domain.<br />
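A minimal sketch of a spatial-domain method: a 3-point mean filter applied directly to the pixel values of one made-up image row. A frequency-domain method would instead modify the image's Fourier transform, which is not shown here:<br />

```python
row = [10, 10, 100, 10, 10]        # one noisy image row (a single spike)

smoothed = []
for i in range(len(row)):
    window = row[max(0, i - 1): i + 2]         # pixel and its neighbours
    smoothed.append(round(sum(window) / len(window)))

print(smoothed)                    # the spike is spread out and reduced
```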

IMAGE SEGMENTATION<br />

In image processing, segmentation falls into the category of extracting different image<br />

attributes of an original image.<br />

Segmentation subdivides an image into its constituent regions or objects.<br />
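Segmentation by global thresholding is one simple way to subdivide an image into regions; using the image mean as the threshold, as below, is an illustrative choice rather than a recommendation:<br />

```python
# Hypothetical 3x3 grayscale image: dark background, bright object column.
image = [
    [ 20,  30, 200],
    [ 25, 210, 220],
    [ 15,  22, 205],
]

pixels = [p for row in image for p in row]
threshold = sum(pixels) / len(pixels)            # global mean intensity
mask = [[1 if p > threshold else 0 for p in row] for row in image]
for row in mask:
    print(row)                                   # 1 = object, 0 = background
```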

CLASSIFICATION TECHNIQUE<br />

Three important classification techniques are used. They are:<br />

1. Artificial Neural Network (ANN)<br />

2. Clustering Method<br />

3. SVM (Support Vector Machine)<br />


ARTIFICIAL NEURAL NETWORK<br />

Description<br />

ANN stands for Artificial Neural Network. It is a type of artificial intelligence that mimics some<br />

functions of the human mind.<br />

ANNs have three layers that are interconnected.<br />

1. The first layer consists of input neurons. Those neurons send data on to the second layer,<br />

which in turn sends its output to the third layer of output neurons.<br />

2. An ANN is also known simply as a neural network.<br />

Characteristics<br />

1. A large number of very simple, neuron-like processing elements.<br />

2. A large number of weighted connections between the elements.<br />

3. Distributed representation of knowledge over the connections.<br />

4. Knowledge is acquired by the network through a learning process.<br />

Advantages<br />

1. It is a non-parametric classifier.<br />

2. It is a universal functional approximator with arbitrary accuracy.<br />

3. It is capable of representing functions such as OR, AND, and NOT.<br />

4. It is a data-driven, self-adaptive technique.<br />

5. It efficiently handles noisy inputs.<br />

Disadvantages<br />

1. It is semantically poor.<br />

2. Training an ANN is time-consuming.<br />

3. It suffers from the problem of overfitting.<br />
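Advantage 3 above (representing functions such as AND and OR) can be demonstrated with a single neuron: a perceptron trained on the AND truth table. This is a minimal sketch, not a full multi-layer network:<br />

```python
def step(x):                                 # threshold activation
    return 1 if x >= 0 else 0

inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]                       # AND truth table

w = [0.0, 0.0]                               # one weight per input
b = 0.0                                      # bias
lr = 0.1                                     # learning rate
for _ in range(50):                          # a few epochs suffice here
    for (x1, x2), t in zip(inputs, targets):
        y = step(w[0] * x1 + w[1] * x2 + b)
        w[0] += lr * (t - y) * x1            # perceptron update rule
        w[1] += lr * (t - y) * x2
        b    += lr * (t - y)

print([step(w[0] * x1 + w[1] * x2 + b) for x1, x2 in inputs])
```

Swapping `targets` for `[0, 1, 1, 1]` would train the same neuron on OR; XOR, being non-linearly separable, would need a hidden layer.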

CLUSTERING METHOD<br />

Description<br />

This is an iterative technique that is used to partition an image into clusters. Initial clusters can be<br />

selected manually or randomly. The distance between a pixel and a cluster center is calculated as<br />

the squared or absolute difference between them; the difference is typically based on pixel color,<br />

intensity, texture, and location, or a weighted combination of these factors. Commonly used<br />

clustering algorithms are the k-means algorithm, the fuzzy c-means algorithm, and the<br />

expectation-maximization (EM) algorithm.<br />


Characteristics<br />

Clustering is based on intensity and threshold separation.<br />

1. It uses a stochastic approach.<br />

2. Performance and accuracy depend upon the threshold selection.<br />

Advantages<br />

1. A simpler classifier, as it excludes any training process.<br />

2. Applicable to small datasets that have not been trained.<br />

Disadvantages<br />

1. The cost of computing distances grows with the number of available<br />

samples.<br />

2. Testing each instance is expensive, and the method is sensitive to irrelevant inputs.<br />
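The iterative partitioning described above can be sketched as one-dimensional k-means with two clusters: assign each pixel intensity to its nearest centre, recompute the centres, and repeat until they stop moving. The pixel values and initial centres are invented for illustration:<br />

```python
pixels = [12, 15, 14, 200, 210, 205]       # dark and bright intensities
centres = [0.0, 255.0]                     # assumed initial cluster centres

for _ in range(10):
    clusters = [[], []]
    for p in pixels:                       # assignment step
        nearest = min(range(2), key=lambda k: abs(p - centres[k]))
        clusters[nearest].append(p)
    new = [sum(c) / len(c) if c else centres[k]   # update step
           for k, c in enumerate(clusters)]
    if new == centres:                     # converged: centres stopped moving
        break
    centres = new

print(centres)
```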

SVM (Support Vector Machine)<br />

Description<br />

A support vector machine builds a hyperplane or a set of hyperplanes in a high- or infinite-<br />

dimensional space, which can be used for classification; a good separation lowers the<br />

generalization error of the classifier.<br />

Characteristics<br />

SVM is a non-parametric, binary classifier that can handle large input data very<br />

efficiently. Performance and accuracy depend upon the hyperplane selection and the kernel<br />

parameters.<br />

Advantages<br />

1. It gains flexibility in the choice of the form of the threshold.<br />

2. It can incorporate a nonlinear transformation (kernel).<br />

3. It provides a good generalization capability.<br />

4. The problem of overfitting is reduced.<br />

5. Reduction in computational complexity.


Disadvantages<br />

1. Result transparency is low.<br />

2. Training is time-consuming.<br />

3. The structure of the algorithm is difficult to understand.<br />
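Once an SVM has been trained, classifying a point reduces to checking which side of the separating hyperplane w·x + b = 0 it falls on. The weights below are assumed values for illustration, not a trained model:<br />

```python
w = [1.0, -1.0]     # assumed hyperplane normal vector
b = 0.0             # assumed bias term

def classify(x):
    # sign of the decision function w.x + b picks the class
    score = w[0] * x[0] + w[1] * x[1] + b
    return 1 if score >= 0 else -1

print(classify([3.0, 1.0]))
print(classify([1.0, 3.0]))
```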

CONCLUSION<br />

Therefore, this system will help the farmer to increase agricultural production. With this<br />

method we can identify diseases present in monocots or dicots. The most widely used<br />

approach is the clustering method, which offers an accurate and fast approach to disease<br />

detection. Using this concept, disease identification can be done for all kinds of plants, and<br />

the user can also learn the affected area of the plant as a percentage. By identifying the disease<br />

properly, the user can rectify the problem easily and at less cost. So I conclude that image<br />

processing is one of the important tools for disease detection in plants.<br />

REFERENCES<br />

1. P. Revathi, M. Hemalatha, "Classification of Cotton Leaf Spot Diseases Using Image<br />

Processing Edge Detection Technique", 2012, pp. 169-173, IEEE.<br />

2. Santanu Phadikar & Jaya Sil (2008), "Rice Disease Identification Using Pattern Recognition<br />

Techniques".<br />

3. Agrios, George N. (1972). Plant Pathology (3rd ed.). Academic Press.


INTERNET OF BIOMETRIC THINGS IN FINGERPRINT<br />

TECHNOLOGIES AND DATABASES<br />

S.Jaishreekeerthana & S.Kowsalya<br />

MAM College, Trichy<br />

ABSTRACT<br />

Since security vulnerabilities represent one of the grand challenges of the Internet of<br />

Things, researchers have proposed what is known as the Internet of Biometric Things, which<br />

blends traditional biometric technologies with context-aware authentication techniques. One<br />

of the most popular biometric technologies is electronic fingerprint<br />

recognition, which acquires fingerprints using different technologies, some of which are<br />

more suitable than others. In addition, different fingerprint databases have been built to<br />

study the impact of various factors on the accuracy of different fingerprint<br />

recognition algorithms and systems. In this paper, we survey the available<br />

fingerprint acquisition technologies and the available fingerprint databases. We also<br />

identify the advantages and disadvantages of each technology and database.<br />

Index Terms—Capacitive, Digital Cameras, Fingerprints, Internet of Biometric Things, Optical,<br />

Synthetic Fingerprint Generation, Thermal, Ultrasonic.<br />

INTRODUCTION<br />

In the near future, billions of objects (or things) will be connected together, creating what is<br />

known as the Internet of Things. Security vulnerabilities represent one of its grand challenges.<br />

Using biometric identification methods to address some of these security vulnerabilities is a<br />

very appealing solution. Recently, several researchers have proposed what is known<br />

as the Internet of Biometric Things, which makes use of traditional biometric<br />

identification methods and context-aware authentication systems. One of the most<br />

popular biometric technologies is electronic fingerprint verification. It can be used within<br />

various applications, for example cell phones, ATMs, smart homes, smart buildings,<br />

transportation systems, and so on. Thus, automated fingerprint<br />

identification is receiving steadily growing interest from both academic and commercial<br />

parties. The importance of focusing on the fingerprint, as a principal biometric means to<br />

uniquely identify people, and hence for security and privacy, comes from<br />

numerous reasons. (This work was supported by the Jordan University of Science and<br />

Technology Deanship of Research.)<br />

Fig. 1: A Fingerprint Clarification<br />

The first one is that no two people have been found to have the same fingerprints. Another<br />

individual has an alternate fingerprint. This reality can be used to distinguish not just the<br />

individual who contacted a specific question, yet in addition the finger utilized by that<br />

individual. As Shown in Figure 1, a fingerprint is framed from the accompanying parts: Ridge<br />

Endings, the terminals of the edges. Bifurcations, the purpose of separating edge into two<br />

branches. Specks,little edges. Islands irregular point without any connectors. Pore: a white<br />

point encompassed by dark point. Scaffolds, little edges joining two longer contiguous edges<br />

Crossovers: two edges which cross each other Core: focal point of the fingerprint. Deltas:<br />

focuses, at the lower some portion of the fingerprint, encompassed by a triangle. Fingerprints<br />

can be classified as per their shapes into three fundamental writes; Whorl fingerprint, Arch<br />

fingerprint and circle fingerprint. Each class has its edge flow and design. Figure 2<br />

demonstrates these three sorts with their reality rates. Circle fingerprint are the most wellknown<br />

with a percent (60% 65%) of all fingerprints.<br />

Curve fingerprints which are once in a while exist represents just 5% percent[6]. An average<br />

Automatic Fingerprint Identification System (AFIS) more often than not comprises of<br />

numerous phases that incorporate Image Obtaining, fingerprint division,image preprocessing<br />

(improvement), highlight extraction, and coordinating outcome .<br />

• Image Acquisition<br />

The image acquisition stage is one of the most important factors that<br />

contribute to the success of a fingerprint recognition system. There are a<br />

few factors to be considered before choosing the device that will be used to capture<br />

fingerprints, regarding performance, cost, and the size of the resulting image. In this study<br />

we focus on the fingerprint capture stage only.<br />

• Segmentation<br />

The principle of this step is to separate the image of the finger from the<br />

background; it is an initial step of the<br />

fingerprint system


• Preprocessing<br />

This phase includes some modifications to the image with the aim of enhancement, which<br />

increases the accuracy of the system used in fingerprint recognition.<br />

• Feature extraction and Person<br />

Finally, similar fingerprints are retrieved, and only this set of (source) fingerprints is<br />

compared with the target fingerprint. Fingerprints are acquired in various ways. For<br />

non-automated (non-online) systems, fingerprints are usually acquired by stamping inked<br />

fingertips on a special paper. If these fingerprints need to be stored on a digital device, an<br />

image of the paper is taken using a scanner, for example. On the other hand, for automated<br />

systems fingerprints are acquired using fingertip scanners. Scanners may use different<br />

technologies, for example optical, capacitive, thermal, and ultrasonic. These scanners<br />

usually require the fingertip to be pressed or dragged over some surface. This may reshape<br />

or distort the resulting fingerprint, which may lead to inaccurate identification. It also makes<br />

it difficult to cross-use different fingertip scanners in the acquisition and verification<br />

processes, as different scanners may produce different distortions. Recently, as most modern<br />

digital systems are provided, or can easily be provided, with high-quality digital cameras, a<br />

line of research began to consider the possibility of acquiring fingerprints using these<br />

cameras. According to the studies, this approach is promising and the results are acceptable.<br />

To design different techniques and study the impact of various factors related to automated<br />

fingerprint recognition systems, a large fingerprint image set is required. In this paper, we<br />

aim at fostering research on fingerprint verification. In particular, we conduct a survey on<br />

the available fingerprint image sets and the acquisition devices used to collect each, for<br />

example optical sensors, capacitive sensors, thermal sensors, and different kinds of digital<br />

cameras. We list the main features of each technology and fingerprint image set, along with<br />

their advantages and disadvantages.


FINGERPRINT ACQUISITION DEVICES<br />

This section gives an overview of the most common devices used for online fingerprint<br />

acquisition. Many technologies have been utilized in these devices, namely optical,<br />

capacitive, thermal, and ultrasonic.<br />

A. Optical sensor<br />

Optical fingerprint devices are the oldest and the most commonly used devices to<br />

capture fingerprints. Figure 3(a) 1 clarifies how this sensor works. The fingerprint image is<br />

captured by placing a finger on one side of a prism, a light source that emits light vertical on<br />

the second prism side, and a camera that is placed in front of the third prism side. The light<br />

collected by the camera is then converted into a digital signal where bright areas reflect the<br />

valleys and dark areas reflect the ridges.This sensor was used to acquire a large number finger<br />

print databases, such as FVC2000 (DB1, DB3), FVC2002 (DB1 and DB2), FVC2004 (DB1<br />

and DB2), FVC2006 (DB2). Figure 3(b) is an example of a commercial optical sensor. The<br />

main advantages of optical sensors are temperature insensitivity, low cost, and high resulting<br />

fingerprint resolution. On the other hand, it suffers form many disadvantages. As optical<br />

sensor depends on light reflection, as a result, its functionality may suffer from lighting<br />

environment.<br />
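The bright-valley/dark-ridge conversion described above can be sketched as a simple thresholding of one scan line; the brightness values and the threshold of 128 are assumed for illustration:<br />

```python
captured = [40, 220, 35, 230, 50, 210]   # brightness along one scan line

# Bright reflected light corresponds to valleys, dark areas to ridges.
pattern = ["valley" if b > 128 else "ridge" for b in captured]
print(pattern)
```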

A) Clarification of how optical sensors work


B) Commercial optical sensor based Fingerprint Acquisition Device<br />

Residue left on the sensor surface may also affect the captured picture, as it may cover<br />

parts of it; moreover, this residue can be used to lift the fingerprint. One of the commercial<br />

devices that uses this technology, shown in Figure 3(b), is the SecuGen Hamster IV. It is one<br />

of the common devices in fingerprint systems, and has been certified by the FBI and STQC.<br />

Besides the SecuGen Hamster IV, other optical devices were used to collect the IIIT-D<br />

Multi-sensor Optical and Latent Fingerprint database.<br />

B. Capacitive Sensor<br />

This device depends on the principle of measuring the electric capacitance of the human skin.<br />

The finger is placed on a thin plate that contains an array of micro-capacitors. The skin<br />

of a fingertip contains ridges and valleys, and the capacitive sensor essentially measures the<br />

capacitance of the skin of the finger. The capacitance at the ridges differs from the capacitance<br />

at the valleys because the capacitance depends strongly on the distance of the skin from the<br />

micro-capacitor plate. A basic model of a capacitive sensor is shown. This sensor was used to<br />

capture several fingerprint datasets, including FVC2000 (DB2) [20] and FVC2002 (DB3).<br />

An advantage of capacitive sensors is that they are small in size and hence cheaper to<br />

produce. They also consume low energy, which is especially helpful in battery-powered<br />

devices. They are likewise insensitive to environmental effects such as sunlight. Many<br />

manufacturers, for example Sony, Fujitsu, Hitachi, and Apple, have adopted this technology<br />

in their fingerprint recognition systems.<br />

C. Synthetic Fingerprint Generation


This method automatically generates synthetic fingerprints based on a model. The model<br />

is typically controlled by a set of parameters which can be changed to produce<br />

variant fingerprints in terms of clarity, area, centered or not, rotation, the difficulty of<br />

recognizing the fingerprint, and more. This method can be used to<br />

build large fingerprint databases in a short time at no cost. Additionally, the<br />

generated synthetic fingerprints may be tuned to simulate natural fingerprints. This method<br />

was used in some fingerprint databases.<br />

D. Thermal Sensor<br />

This sensor depends on pyroelectric material, which can convert temperature differences into<br />

different voltages. Because of the difference between the temperatures of the ridges and the<br />

valleys, the voltage generated for each one differs from the other.<br />

One of the most common thermal sensors is the "Atmel FingerChip",<br />

which was used to acquire several well-known fingerprint databases, namely FVC2004<br />

(DB3) and FVC2006 (DB3). This technology is not very common commercially. One of the<br />

difficulties faced when dealing with this technology is that the thermal image should be<br />

captured in a very short time. That is because when a finger is placed over the pyroelectric<br />

material, the temperature difference between the areas under the ridges and the valleys is<br />

initially detectable. Eventually, after a short period of time, the temperature difference<br />

vanishes as the adjacent areas heat each other. As this method depends on body temperature,<br />

it is very hard to deceive.<br />

E. Ultrasound Sensor<br />

This device depends on the principle of sending high-frequency sound waves and then<br />

capturing the reflection from the finger surface. The advantage of this technology is its ability<br />

to pass through dust, dirt, and skin oil that may cover the finger surface. Therefore,<br />

ultrasonic fingerprint sensing may be used to capture high-quality images even if the finger is<br />

dirty. On the other hand, this method has many weaknesses. For instance, it is very<br />

expensive and may take a long time to capture the fingerprint image. Qualcomm is among<br />

those using ultrasonic technology.
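The ultrasonic principle above (emit a high-frequency pulse, capture its reflection) implies a simple time-of-flight computation; the speed of sound and echo time below are assumed example values:<br />

```python
speed_of_sound = 1500.0    # m/s, an assumed value for skin/tissue
echo_time = 4e-6           # s, assumed round-trip time of the reflected pulse

# The pulse travels to the surface and back, so halve the round trip.
depth = speed_of_sound * echo_time / 2
print(f"{depth * 1000:.1f} mm")
```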


A) Clarification of how ultrasonic sensors work<br />

F. Digital Cameras<br />

Many digital camera types are used in the fingerprint literature. We classify them into<br />

webcams, cell phone cameras, and advanced digital cameras. Webcams are digital cameras<br />

that are usually low in resolution and very cheap. These cameras mostly come built into<br />

personal computers for video chatting purposes. Since these cameras have low resolution, it<br />

is difficult to extract the fingerprint details from an image. Accordingly, many studies have<br />

proposed image pre-processing methods to overcome this issue. Over the last few<br />

years, cell phone cameras have seen an enormous improvement, which has enabled them to<br />

be used to acquire fingerprints. The current resolution of cell phone cameras has reached 8<br />

megapixels on average. However, there is a set of criteria that should be considered when<br />

taking fingerprint pictures using these devices, for example lighting, the distance between<br />

the finger and the camera, the proper focus that should be chosen, and the image<br />

preprocessing steps. Advanced digital cameras, too, have been considered and used to<br />

acquire fingerprint images. These cameras produce high-resolution images, image<br />

acquisition with them is highly flexible, and they can also apply advanced image<br />

enhancement and compression algorithms.<br />

FINGERPRINT DATABASES<br />

In this section, we briefly describe some of the most common fingerprint databases.<br />

The FVC2000 database was used in the First International Fingerprint Verification<br />

Competition. The database consists of four parts: DB1, DB2, DB3 and DB4. DB1 and<br />

DB2 contain images of 110 fingers with eight impressions per finger (880 images<br />

in total). Images of up to four fingers per participant were taken: fingerprints of<br />

the fore and middle fingers of both hands. Half of the fingerprints are from males.<br />

The ages of the people in these two databases are 20-30 years. DB1 was acquired<br />

using a low-cost optical sensor (Secure Desktop Scanner) while DB2 was acquired using a<br />

low-cost capacitive sensor (TouchChip by ST Microelectronics). It has been noted that these<br />

two databases cannot be used to assess the accuracy of biometric algorithms, because no<br />

precautions were taken to guarantee the quality of the images and the plates of the<br />

sensors were not systematically cleaned after each use. In DB3, the fingerprints were<br />

collected from 19 volunteers, of whom 55% are males. Images of six fingers were taken


from each participant (thumb, fore, and middle fingers of the left and right hands). The ages of<br />

the people are between 5 and 73. DB3 images were acquired using an optical sensor<br />

(DF-90 by Identicator Technology). DB4 was automatically created using synthetic generation.<br />

The FVC2002 database was used in the Second International Fingerprint Verification<br />

Competition. It consists of four parts: DB1, DB2, DB3 and DB4. The number of<br />

images is the same as in FVC2000. In this database, four different kinds of technologies were<br />

used to acquire the fingerprints of the four databases: (i) an optical sensor<br />

"TouchView II" by Identix, (ii) an optical sensor "FX2000" by Biometrika, (iii) a capacitive<br />

sensor "100 SC" by Precise Biometrics, and (iv) synthetic fingerprint generation. 90 people<br />

participated in this database collection. In DB1, DB2, and DB3, images of 30 different<br />

participants were collected per database. Images of 4 fingers (the fore and middle fingers of<br />

both hands) of each participant were collected. No extra effort was taken to guarantee the<br />

quality of the images, as the purpose of these images was simply to be used in the<br />

competition. DB4 was synthetically generated.<br />
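The database sizes quoted above follow directly from fingers times impressions, for example for FVC2000 DB1 and DB2:<br />

```python
fingers = 110       # fingers per sub-database
impressions = 8     # impressions captured per finger
total_images = fingers * impressions
print(total_images) # images per sub-database
```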

CONCLUSIONS AND FUTURE WORK<br />

In this paper, we have surveyed the literature on the most common fingerprint<br />

acquisition devices and the different available fingerprint image sets. We have seen that the<br />

latest trend in fingerprint acquisition is to use different kinds of digital cameras,<br />

especially cell phone cameras. We have also seen that the available<br />

fingerprint datasets acquired using digital cameras are all taken with<br />

built-in degradations due to particular lighting, different backgrounds, low camera quality,<br />

image compression, and more. In other words, among the available fingerprint databases<br />

there is no perfect fingerprint database that can be used for testing the impact of different<br />

adjustable degradations. As future work, we plan to build a digital-camera-<br />

acquired fingerprint set under perfect conditions that can be customized easily and can be<br />

used for testing the impact of different degradations on the accuracy of different fingerprint<br />

recognition algorithms.


A STUDY ON DIGITAL IMAGE PROCESSING<br />

R.Kowshika & R.Nihitha<br />

Fatima College(Autonomous),Madurai<br />

INTRODUCTION<br />

Image processing is a method to perform some operations on an image, in<br />

order to get an enhanced image or to extract some useful information from it. It is a<br />

type of signal processing in which the input is an image and the output may be an image or<br />

characteristics/features associated with that image. Nowadays, image processing is<br />

among the most rapidly growing technologies. There are two types of methods used for image<br />

processing, namely analogue and digital image processing. Analogue image processing<br />

can be used for hard copies like printouts and photographs. Image analysts use<br />

various fundamentals of interpretation while using these visual techniques. The three<br />

general phases that all types of data have to undergo while using the digital technique are<br />

pre-processing, enhancement and display, and information extraction. Analogue image<br />

processing is done on analogue signals. Digital image processing<br />

deals with developing a digital system that performs operations on a digital image.<br />

APPLICATION<br />

Capturing three-dimensional models has become more and more important in recent<br />

years, due to the rapid development of 3D technologies, especially 3D printers. There<br />

are several possibilities for capturing such 3D models, but the most frequent method is<br />

3D scanning, which means direct capturing of 3D models. This technology is spreading<br />

into new domains and more and more new applications. This paper deals with<br />

applications of 3D scanning technologies in medicine, where 3D scanning brings<br />

significant advantages, e.g. in medical volumetry


STORING AND VISUALIZATION OF 3D MODELS<br />

Each cube contains a probability given by the object, e.g. density in the case of MRI or CT.<br />

All objects in the scan are then approximated by a number of identical cubes. On the other<br />

hand, computations over such a model are simple and fast. It is useful for models in low<br />

resolution, which are frequently updated and rebuilt.<br />
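The voxel representation described above (objects approximated by equal cubes, each holding a probability or density) can be sketched as a 3-D grid; the grid size and density values are assumed for illustration:<br />

```python
size = 4
# 4x4x4 voxel grid, each cell holding a density/probability in [0, 1]
grid = [[[0.0] * size for _ in range(size)] for _ in range(size)]

# mark a small dense region, e.g. tissue detected by an MRI/CT scan
for x in range(1, 3):
    for y in range(1, 3):
        for z in range(1, 3):
            grid[x][y][z] = 0.9

# count voxels whose density exceeds an assumed occupancy threshold of 0.5
occupied = sum(v > 0.5 for plane in grid for row in plane for v in row)
print(occupied, "of", size ** 3, "voxels are occupied")
```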

MEDICAL 3D SCANNING<br />

A universal 3D scanning device suitable for medical purposes shall be<br />

usable with the following abilities:<br />

High accuracy - an essential parameter for being able to distinguish even tiny<br />

changes of the body caused by, for example, muscle strengthening.<br />

Flexibility - since the device should be universal, it shall be capable of scanning the<br />

entire body as well as tiny details, so the 3D scanner must be very flexible.<br />

Low operational costs - to allow everyday use, operation shall be<br />

inexpensive.<br />

Simple manipulation - the device must be automated as much as possible, not<br />

disturbing the personnel with complex settings before each scan.<br />

High speed - the scanning procedure must be very fast; otherwise, the<br />

personnel would not have time to use it and would prefer estimation.<br />

No limitations - the device should be usable with any patient. There should be no<br />

limitations regarding metal parts, health state, etc.<br />

Harmless operation - using the device shall not be harmful to personnel under any<br />

circumstances.<br />

To fulfil all these requirements, the design of the medical 3D scanner shall be as<br />

follows:<br />

The data-capturing sensor shall be a structured-light 3D scanner or a laser scanner, for<br />

precision reasons; as a result, the 3D position of each point can be computed. Sensor<br />

motion shall be motorized, for reasons of precise sensor localization, automatic<br />

movement, and keeping the sensor within measuring range. The problem of computational<br />

requirements is not serious, since the model is captured once and afterwards modified only<br />

occasionally.


Because such a device is not commercially available, we created our own 3D scanner<br />

meeting the specifications. High accuracy is reached by using a precise manipulator<br />

with an accurate laser scanner, and high flexibility is achieved by programmable scanning<br />

and by the replaceability of the laser scanner, which provides the possibility of scanning<br />

both tiny and large structures. It is a 3D modelling system, useful for many different medical<br />

applications besides. Such models are visualized to the operator, who defines the regions of<br />

interest (ROI), or the ROI is defined by the method itself.<br />

TECHNOLOGY USE<br />

The technology was recently introduced by Siemens Healthineers. MRI scanners equipped<br />

with compressed sensing technologies operate much more quickly than the MRI scanners<br />

currently in use. The result is the first clinical application using compressed sensing<br />

technology. With the newly developed "compressed sensing" technology, scans of the<br />

beating heart can be completed in as few as 25 seconds while the patient breathes<br />

freely. In contrast, a patient in an MRI scanner equipped with conventional acceleration<br />

techniques must lie still for four minutes or more and hold their breath for as<br />

many as seven to 12 times during related procedures. In the future, compressed sensing<br />

might change the way MRI of the abdomen is performed. Today, certain populations are<br />

excluded from having abdominal MRI due to their inability to perform long,<br />

consecutive and exhausting breath-holds; with compressed sensing, the amount of data<br />

required for an image of excellent diagnostic quality may be reduced, enabling imaging of<br />

the abdomen in one continuous run. While still in the early stages, the research reported<br />

recently has made significant steps towards a new MRI method that would enable<br />

personalised life-saving medical treatments and allow real-time imaging to take place in<br />

locations such as operating theatres and GP practices.<br />

MRI which works by detecting the magnetism of molecules to create an image is a<br />

crucial tool in medical diagnostics. However current technology is not very efficient- a<br />

typical hospital scanner will effectively detect only one molecule in every 200,000,<br />

making it difficult to see the full picture of what happening in the body.<br />

The research team based at the university of York, has discovered a way to make<br />

molecules more magnetic and therefore more visible an alternative method which<br />

could produce a new generation of low-cost and highly sensitive imaging techniques.<br />

Professor Simon Duckett from the centre for Hyper polarisation in magnetic resonance<br />

at the university of York said. “what we think we have the potential to achieve with


MRI what could be compared to improvements in computing power and performance<br />

over the last 40 years. While they are a vital diagnostic tool, current hospital scanners<br />

could be compared to the abacus, the recent development of more sensitive scanners<br />

takes us to Alan Turing’s computer and we are now attempting to create something<br />

scalable and low-cost that would bring us to the tablet or smartphones.<br />

The research team has found a way to transfer the “invisible” magnetism -a magnetic<br />

form. Improved scanners are now being trailed in various countries, but because they<br />

operate in the same way as regular MRI scanners- using a superconducting magnetthese<br />

new models.<br />

MEDICAL TECHNOLOGY EQUIPMENT<br />

PET SCANNER<br />

PET stands for Positron Emission Tomography and is a method of body scanning that detects radioactive compounds injected into the body, providing information on function rather than structure. PET scanners are relatively new to the secondary equipment market.<br />

MRI SCANNER<br />

Magnetic Resonance Imaging is an imaging technique used primarily in medical settings to produce high-quality images of the inside of the body. MRI produces images that are the visual equivalent of a slice of anatomy. MRI uses radio frequencies, a computer, and a large magnet that surrounds the patient.<br />

CT SCANNER<br />

Computerized Axial Tomography scanners use a fan beam of X-rays and a detector system that rotates around the patient; the resulting images are then displayed on a computer or transferred to film.<br />

NUCLEAR MEDICINE<br />

Nuclear medicine diagnostic techniques use very small amounts of radioactive materials. Information from nuclear medicine studies describes organ function, not just structure.<br />


ULTRASOUND<br />

This medical imaging technique uses high-frequency sound waves and their echoes. Its main advantage is that certain images can be observed without using radiation.<br />

PACS/PICTURE ARCHIVING AND COMMUNICATION TECHNOLOGIES<br />

These IT-based products improve the speed and consistency of image communication within the radiology department and throughout an enterprise.<br />

OPINION<br />

This opinion provides a summary of scientific knowledge on security scanners for passenger screening. Although the dose per scan arising from the use of scanners for security screening purposes is well below the public dose limit, this does not remove the requirement for justification.<br />

CONCLUSION<br />

This study has sought to show the degrees of sophistication of the principles that generally underlie these applications. The prospects in this area follow those of scanning itself: the same scanning modality can produce several types of scan. From this point of view, magnetic resonance imaging (MRI) is a very good example, because different sequences give access to different views of the anatomy.<br />

REFERENCE<br />

Digital Image Processing (3rd edition): Rafael C. Gonzalez and Richard E. Woods.<br />

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3841891/#!Po=16.0550


A STUDY ON DIGITAL IMAGE PROCESSING IN CINEMATOGRAPHY<br />

J.Roselin Monica,S.Sivalakshmi<br />

Fatima College(Autonomous),Madurai<br />

INTRODUCTION<br />

Digital image processing is the use of computer algorithms to perform image<br />

processing on digital images. As a subcategory or field of digital signal processing, digital<br />

image processing has many advantages over analog image processing. It allows a much<br />

wider range of algorithms to be applied to the input data and can avoid problems such as the<br />

build-up of noise and signal distortion during processing. Since images are defined over two<br />

dimensions (perhaps more) digital image processing may be modeled in the form<br />

of multidimensional systems. Many of the techniques of digital image processing, or digital picture<br />

processing as it often was called, were developed in the 1960s at the Jet Propulsion<br />

Laboratory, Massachusetts Institute of Technology, Bell Laboratories, University of Maryland, and a<br />

few other research facilities, with application to satellite imagery, wire-photo standards<br />

conversion, medical imaging, videophone, character recognition, and photograph enhancement. The<br />

cost of processing was fairly high, however, with the computing equipment of that era. That changed<br />

in the 1970s, when digital image processing proliferated as cheaper computers and dedicated<br />

hardware became available. Images then could be processed in real time, for some dedicated<br />

problems such as television standards conversion. As general-purpose computers became faster, they<br />

started to take over the role of dedicated hardware for all but the most specialized and computer-intensive<br />

operations. With the fast computers and signal processors available in the 2000s, digital<br />

image processing has become the most common form of image processing and generally, is used<br />

because it is not only the most versatile method, but also the cheapest. Digital image processing<br />

technology for medical applications was inducted into the Space Foundation Space<br />

Technology Hall of Fame in 1994.<br />

VFX:<br />

Visual Effects (abbreviated VFX) is the process by which imagery is created or manipulated<br />

outside the context of a live action shot in film making. Visual effects involve the<br />

integration of live-action footage (special effects) and generated imagery (digital effects) to<br />

create environments which look realistic, but would be dangerous, expensive, impractical,<br />

time consuming or impossible to capture on film.<br />

Visual effects using computer-generated imagery (CGI) have recently become accessible to<br />

the independent filmmaker with the introduction of affordable and easy-to-use<br />

animation and compositing software. Visual effects primarily divide into two groups:<br />


1. Special Effects: It covers any visual effects that take place in live action, e.g. on set<br />

explosions or stunt performances.<br />

2. Digital Effects (commonly shortened to digital FX or FX): It covers the various<br />

processes by which imagery is created or manipulated with or from photographic<br />

assets. Digital Effects often involve the integration of still photography and computer-generated<br />

imagery (CGI) to create environments which look realistic but would be<br />

dangerous, costly, or impossible to capture in camera. FX is usually associated with<br />

the still photography world in contrast to visual effects which is associated with<br />

motion film production. Digital FX also divides into different subgroups of<br />

professions such as:<br />

‣ Matte paintings and stills: digital or traditional paintings or photographs which serve<br />

as background plates for 3D characters, particle effects, digital sets, backgrounds.<br />

‣ Motion Capture (Mo-Cap for short): It’s the process of recording the movements of<br />

objects and or people. In a session of motion capture, the subject whose motion is<br />

being captured is recorded and sampled many times per second by different scanners<br />

placed all over the environment. There are different types of systems that read the<br />

actor’s movement. One of which is the optical method that uses tracking cameras that<br />

lock onto specialized markers placed over the actor’s motion capture suit. The other<br />

type of method is called the non-optical method where, instead of capturing the<br />

markers' location in space, it records and measures the inertia and mechanical<br />

motion in the area. This type of motion capture doesn't just apply to the body, but can<br />

be used to track the facial movements and expressions of an actor and transfer them<br />

to a 3d model later on in the pipeline. The same type of concept of using markers to<br />

track motion is used, but more often than not, the actor’s face will have painted dots<br />

on their face rather than ball-shaped markers. Not only are the actor's movements<br />

recorded in this process, but the movement of the camera is also recorded, which<br />

allows editors to use this data to enhance the environment the motion captured set is<br />

imagined in. Once all of this is captured, the motion captured data is mapped to a<br />

virtual skeleton using software such as Autodesk’s MotionBuilder or other software<br />

of choice.<br />

‣ Modelling: Creating 3D models of props or characters using specialised software.<br />

‣ Animation: Assign movements for any objects and characters in 2D or 3D.<br />

‣ Compositing: Combining visual elements from different sources to create the illusion<br />

that all those elements are parts of the same scene.
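At its core, the compositing step described above is the alpha-over operation: each foreground pixel is blended with the background plate according to the opacity of its matte. A minimal NumPy sketch, for illustration only (production compositing software adds premultiplication, colour management, and many more operators):<br />

```python
import numpy as np

def alpha_over(fg, alpha, bg):
    """Composite a foreground over a background with the alpha-over rule:
    out = fg * alpha + bg * (1 - alpha), applied per pixel."""
    alpha = alpha[..., np.newaxis]  # broadcast the matte over the RGB channels
    return fg * alpha + bg * (1.0 - alpha)

# Tiny 2x2 example: white foreground over black background, with a matte
# that is fully opaque on the left column and transparent on the right.
fg = np.ones((2, 2, 3))
bg = np.zeros((2, 2, 3))
alpha = np.array([[1.0, 0.0],
                  [1.0, 0.0]])
out = alpha_over(fg, alpha, bg)
print(out[0, 0], out[0, 1])  # left pixel keeps the foreground, right shows the background
```

Here `alpha` plays the role of the matte described above: 1.0 keeps the foreground element, 0.0 lets the background plate show through.<br />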


TECHNOLOGY<br />

The first animation films can be traced back to the early 1900s which featured<br />

characters from the popular comic strips of the time. These films made use of the single<br />

frame method which involved images projected at a high volume of frames per<br />

second. Gertie the Dinosaur created by sketch artist Winsor McCay in 1914 is believed to be<br />

the first successful animation short film.<br />

King Kong, released in 1933, was among the pioneering movies to use this technique. The<br />

use of miniature models was another Hollywood VFX technique that was in use in the early<br />

1900s, but it was taken to another level by the classic sci-fi<br />

franchises Star Wars and Star Trek. Hundreds of miniature models manipulated by some<br />

brilliant camerawork marked these films, which created an unprecedented fan base.<br />

Superman, released in 1978, was another milestone in the special effects industry. By using<br />

cables, blue screen and some clever camera tricks, the movie makers created the illusion of a<br />

flying superhero. Visual effects definitely add value to movies and, while Hollywood remains<br />

the mecca of special effects, film makers the world over are now using VFX to enhance their<br />

movies. Some of the recent hit films in China to have used VFX heavily include the Zhong<br />

Kui: Snow Girl and the Dark Crystal, John Woo’s The Crossing, and the recently released<br />

Monster Hunt, which went on to become the highest grossing Chinese film of all time. In<br />

India, the larger-than-life Bollywood films use an average of 500 to 1500 VFX shots. Films<br />

like Dhoom, Chennai Express and the smash-hit Bahubali have used special effects to<br />

perfection in recent times. The growing popularity of visual effects in world cinema has led<br />

to a spurt in the demand for special effects companies like Toolbox Studio, which has been<br />

doing impressive work in the field.<br />

CONCLUSION<br />

VFX is a technology used for making animated and otherwise unfilmable scenes believable while preserving a film's originality. This paper covers the definition, history, and techniques of VFX, and traces the lifespan of the technology from its beginnings to its continued usefulness in films today. VFX can also offer a good career to people who have a passion for drawing and animation.<br />


REFERENCE<br />

http://www.toolbox-studio.com/blog/history-of-vfx-in-hollywood/<br />

https://en.wikipedia.org/wiki/Digital_image_processing#History<br />

https://en.wikipedia.org/wiki/Visual_effects


ROAD TRAFFIC SURVEILLANCE AND ACCIDENT FINDING SYSTEM<br />

BASED ON IOT- A SURVEY<br />

S.Muralidharan & M.Deepak<br />

MAM College, Trichy<br />

ABSTRACT<br />

Road traffic is one of the most pressing problems in our developing world. This paper presents an investigation of the various viewpoints and issues connected with the problem, and emphasizes the use of a prominent technology, the Internet of Things (IoT), to develop a smart system that monitors various parameters related to road traffic and uses them for effective traffic management.<br />

A survey of the existing systems and the techniques concerned with the problem area is presented. Distinct issues, such as vehicle detection, occlusion detection, lane-departure detection, and accident detection, and the methods used to address them, are investigated. We propose our "IoT-based traffic monitoring and accident detection system", consisting of a Raspberry Pi and a Pi camera as hardware, which uses live video as input and processes it to gather information about the live traffic. The system produces information regarding the traffic, such as the number of vehicles, emergency accident situations, and improper lane changes. The generated information can be used to manage and divert the live traffic as needed, to avoid the problems associated with road traffic.<br />

Keywords—Internet of Things (IoT); Raspberry Pi; Pi camera; Traffic Monitoring<br />


INTRODUCTION<br />

Rising road traffic is one of the most serious problems faced by the world in everyday life. People suffer every day in one way or another because of heavy road traffic. It is very important to consider and improve road safety while studying the problem of road traffic. An effective answer to this problem is to use smart digital technology to handle traffic in real time. Various research works have tried to find a solution to this problem, yet there is still a need for an efficient and practical system that can be tested and deployed in practice.<br />

Traffic surveillance can be very helpful for future planning, for example pollution control, traffic signal control, infrastructure and road planning, and avoidance of accidental congestion. The data obtained from this system can be valuable for predicting travel times and routes. The Internet of Things (IoT) can be very useful for developing such a smart system. IoT is an interconnected network of physical components and devices connected for the collection, exchange, and processing of various kinds of data towards the completion of some specific goal. IoT has been very effective in realizing technically brilliant ideas in various real-world domains, for example the smart city, smart home, smart agriculture, and smart security.<br />

This paper presents an in-depth description of the problems and research issues related to traffic surveillance and accident detection systems. We also present analysis and findings from the study of the literature related to the problem, along with our proposed system and future work. The proposed IoT-based system is built from a processing board to process the data and a camera module to provide the live video as input. A Raspberry Pi board is used as the processing module, and a Pi camera module supplies the input video in raw H.264 format to the Raspberry Pi. The system will detect the number of vehicles passing by, detect accidents, and predict the lanes of the vehicles on the road. Background subtraction using a Gaussian mixture model and edge detection using the Canny edge detector have been implemented on the Raspberry Pi.<br />

Motivation<br />

Major urban areas of India, such as Mumbai, Surat, and Ahmedabad, face the problem of road traffic severely. People remain stuck in traffic jams and gridlocks for hours at a time.<br />



There are many emergency situations on the road, due to traffic, that need immediate attention. These chaotic emergency situations can leave people vulnerable. It is therefore important to direct the traffic to ensure fast flow of the vehicles on the road. Hence, in this growing world of technology, the plan to use smart automated systems with wide future scope can be far more effective.<br />

RESEARCH PROBLEMS<br />

A. Research Problems Related to Systems in a Live Environment<br />

There is wide scope for systems to be improved, implemented, and tested in practice. Many researchers have proposed various techniques, such as Canny edge detection, spatio-temporal analysis with Hidden Markov Models (HMM), thresholding and background subtraction, and 3-D models and neural networks for vehicle tracking, accident detection, and lane detection. They have used a varying range of hardware, such as the Stratix III FPGA and cameras including pan-tilt-zoom cameras, standard cameras, and V8 camcorders, at different frame rates and resolutions, yet the architectures of the systems developed lack the following qualities and need to be evaluated.<br />

1) Practical compatibility<br />

Analysts and technocrats have proposed many ideas and developed many combinations of hardware and software to provide the best solutions they can. Yet the systems still need to demonstrate compatibility with the real world for practical use. Some researchers have worked only with offline video data, which gives no insight into how their proposed work performs in the real world, while other work has been tested in the real world but needs more study of its behaviour in varied real-time situations to yield a practical approach with flexible deployment on live traffic.<br />

2) Environment and background problem<br />

The background and environment influence the devices in several ways. Obstacles in the background, such as trees, plants, and people, are factors that can affect the video-processing algorithm and consequently give inferior results. The frame-differential technique can impose varying and unexpectedly heavy processing loads on the Raspberry Pi board if the traffic is too heavy, if the background is densely cluttered, or if the frame rate is not set to cope with the irregular situation. Hence the surrounding environment captured in the frame of the camera is a critical point to be considered when adopting a practical strategy for this system.<br />

3) Software and hardware compatibility<br />

As described earlier, many software packages and hardware boards have been combined in previous work. It is very important for software to work correctly with the hardware. The Raspberry Pi board is not designed to work with x86 operating systems [21], which include Windows and some Linux-based OSes. We preferred to use Raspbian OS, a specially designed and preferred operating system for the Raspberry Pi, and Python is the recommended language to work with on the Raspberry Pi. The behaviour of tools and software differs across hardware; for instance, the installation and operation of Intel's OpenCV library for video processing on IoT devices varies with the programming language and is unlike that on a conventionally configured PC. It is therefore important to find the best combination of the two components that will work well in a live environment with good performance.<br />

4) Independency<br />

Most of the existing systems that have been observed and tested in the live environment need their designs reworked to operate as independent systems, with remote access and control over the network, to give them the compactness and feasibility required for real-world deployment.<br />

5) Cost effectiveness<br />

A financially feasible system provides a much more practical and achievable approach. If we consider a larger scenario in which this kind of system must be used and maintained on a large number of city roads, cost becomes a major factor. There is therefore a need to build the system with the least costly hardware and software.<br />

B. Resource Limitations for the IoT-Based System<br />

1) Processing power<br />

The proposed system will have limited resources for processing, which will constrain its processing power. The Raspberry Pi has only 1 gigabyte of RAM and a 1.2 GHz quad-core processor. These resources must be managed and handled in a way that allows the various video-processing tasks to be performed, so we are bound to work within this limitation.<br />



2) Power supply<br />

The proposed system requires a constant power supply, but in the current situation we intend to use a battery. We will therefore have a limited power supply for implementing and testing our system.<br />

3) Atmospheric impact<br />

The hardware components of the proposed system will not have any protection against atmospheric conditions such as heat, rain, and cold, which can affect the normal working of the system.<br />

RELATED WORK<br />

The past work carried out by various researchers has been analysed and examined with respect to various factors and parameters to produce the key findings on the procedures, techniques, and criteria required for building the proposed system. The literature review has been organized into five tables representing the study of techniques used for vehicle detection, accident detection, and vehicle tracking, the datasets used, and the types of camera used. The quality of each system is judged on its performance parameters along with the physical hardware used for its development. The tables are prepared according to whether the authors' work was executed in a live environment or tested in offline mode.<br />

A. Vehicle Detection<br />

The techniques of spatio-temporal analysis, Kalman filtering, frame difference, and the Gaussian mixture model [20] have been used to extract the shape of a vehicle. The Gaussian mixture model has been more accurate at separating out the shape of a moving vehicle, and it also works with an adaptive background. Spatio-temporal difference and Kalman filtering methods require complex processing. Binary frame difference is less complex but gives less precision compared with the Gaussian mixture model.<br />
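The trade-off described above can be illustrated in a few lines. The sketch below implements binary frame difference directly and uses a running-average background model as a deliberately simplified stand-in for the Gaussian mixture model (a real system would use OpenCV's `createBackgroundSubtractorMOG2`, which keeps several Gaussians per pixel):<br />

```python
import numpy as np

def frame_difference(prev, curr, thresh=25):
    """Binary frame difference: mark pixels whose intensity changed
    by more than `thresh` between two consecutive frames."""
    return (np.abs(curr.astype(int) - prev.astype(int)) > thresh).astype(np.uint8)

def update_background(bg, curr, alpha=0.05):
    """Running-average background model, a simplified stand-in for a
    per-pixel Gaussian mixture: slowly adapt the background to the scene."""
    return (1 - alpha) * bg + alpha * curr

# Synthetic 8x8 greyscale frames: a bright 2x2 "vehicle" moves one column right.
prev = np.zeros((8, 8), dtype=np.uint8)
curr = np.zeros((8, 8), dtype=np.uint8)
prev[3:5, 1:3] = 200
curr[3:5, 2:4] = 200

mask = frame_difference(prev, curr)
bg = update_background(prev.astype(float), curr.astype(float))
print(int(mask.sum()))  # 4 pixels flagged: the trailing and leading columns
print(bg[3, 1])         # background slowly forgets the old vehicle position: 190.0
```

Note how frame difference flags only the changed columns of the blob, which is why it is cheap but imprecise compared with a full background model.<br />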

B. Vehicle Tracking<br />

Vehicle tracking gives the lane or the path of a moving vehicle, which can be used to judge and monitor the regular flow of traffic. Vehicle tracking methods divide mainly into two classes: trajectory-based and optical-flow-based. Optical-flow-based methods require less computation than trajectory-based methods. In light of the survey, the Lucas-Kanade method, based on optical flow, and K-means clustering, based on trajectories, were found more efficient; comparatively, Lucas-Kanade needs less computation and gives high accuracy. The study of the various procedures was carried out to find the best techniques, and reviewing past work gave us insight into the existing systems and their methodology, helping us to see how to design and propose a more efficient system.<br />
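The Lucas-Kanade method mentioned above assumes brightness constancy and solves a small least-squares system built from image gradients. A self-contained NumPy sketch that estimates a single motion vector for one window (a practical tracker would instead use OpenCV's pyramidal `calcOpticalFlowPyrLK` on corner features):<br />

```python
import numpy as np

def lucas_kanade(f0, f1):
    """Estimate one (dx, dy) motion vector for the whole window by solving
    [sum Ix^2, sum IxIy; sum IxIy, sum Iy^2] v = -[sum IxIt, sum IyIt]."""
    Iy, Ix = np.gradient(f0.astype(float))    # spatial gradients (rows, cols)
    It = f1.astype(float) - f0.astype(float)  # temporal gradient
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)  # (dx, dy)

# Synthetic test: a smooth blob shifted one pixel to the right.
x, y = np.meshgrid(np.arange(32), np.arange(32))
f0 = np.exp(-((x - 15) ** 2 + (y - 15) ** 2) / 30.0)
f1 = np.exp(-((x - 16) ** 2 + (y - 15) ** 2) / 30.0)
dx, dy = lucas_kanade(f0, f1)
print(dx, dy)  # dx should be near 1.0, dy near 0.0
```

The linearization behind this system is only valid for small displacements, which is why practical trackers run it on an image pyramid.<br />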

D. Available Open Source Data Sets for Experimentation<br />

1) VIRAT video data set<br />

This data set contains raw video files totalling 8.5 hours in .mp4 format at HD 720p resolution. The videos are recorded from a stationary camera and contain ground-level information with a mixture of people walking and vehicles on the road.<br />

2) MIT Traffic video data set<br />

The MIT Traffic data set contains a 90-minute video sequence recorded at 720x480 using a standard camera. The data set contains daily life activities on the road, including people, and can perfectly well be used for testing and experimentation.<br />

PROPOSED SYSTEM<br />

After studying the related work and examining the existing systems, we decided to design a system that overcomes and handles some of the existing research issues discussed earlier. The scope and objectives of the research, in light of the factors and resources available, are listed below.<br />

A. Scope of Research<br />

The system will be composed of a Raspberry Pi and a Pi camera as hardware devices. The programming of the system will be done on Raspbian OS, using the Python language and the OpenCV library. The system will work with video received in raw H.264 format from the Raspberry Pi camera at 1296 x 730 resolution at 1-49 fps, or at 640 x 480 at 1-90 fps. The system will detect heavy vehicles, including trucks, buses, and cars, and the lane of each vehicle will be plotted. Vehicle detection will be tested in a live environment. The system will detect anomalies and accidents or unusual behaviour of vehicles; accident detection will be performed in a modelled/simulated environment or on offline data because of resource limitations.<br />



B. Goals of Research<br />

To review, analyse, and compare the various procedures related to vehicle, lane, and accident detection for traffic surveillance. To test the various techniques on the proposed combination of hardware and check the output against different parameters. To select techniques for the steps described in the frame-processing diagram, based on the survey or on tested experimental results. To develop a system using the selected techniques and algorithms for detecting the number, lane, and accidents of vehicles, with the help of input video captured from the camera at the specific required parameters. To attempt to implement and test this system in a live environment for vehicle detection.<br />

Hardware:<br />

1) Raspberry Pi<br />

The Raspberry Pi is a small-sized computer. It has a 1.2 GHz ARM-compatible processor and 1 gigabyte of RAM. It offers various operating systems as options for developing different IoT projects, and it also contains GPIO pins that are programmable for various tasks. Because of its small size, it can easily be fitted into our proposed system.<br />

2) Pi camera<br />

The Pi camera is a camera module that can be used with the Raspberry Pi, which has a slot to connect the camera directly. The Pi camera provides input video at resolutions of 640, 720, or 1080 HD, so it can easily be used for our purpose. Different properties, such as brightness and intensity, can be adjusted on the Pi camera.<br />

3) Battery as power source<br />

4) Internet dongle for remote access<br />

Programming tools:<br />

5) Python<br />

Python is the officially supported language for the Raspberry Pi, and there is a wide community working on the Raspberry Pi with Python. Since the Raspberry Pi is strongly supported by Python programming, we work with a Python IDE.<br />

6) OpenCV (Intel open source library)<br />

OpenCV is an open source library for video and image processing. It is very productive and powerful for the various operations related to our proposed system, and it is lightweight in terms of processing.<br />



7) Eclipse IDE for remote debugging<br />

Eclipse can be used for remote debugging: we can connect a remote Raspberry Pi to Eclipse on a local PC to debug the programs on the Raspberry Pi. As the Eclipse IDE has good debugging support, it gives us a straightforward way to work on the Raspberry Pi.<br />

Architectural diagrams<br />

Fig. 2. Architecture of the proposed system. The concerned operations are executed on each frame extracted from the live video of the Pi camera; the figure elaborates the processing and operations on a single frame.<br />

D. Experiment and Demonstration<br />

The initial step of our trial was to configure the Raspberry Pi. A headless installation of Raspbian OS was completed successfully, after which Python and OpenCV were installed in a virtual environment to satisfy all the required dependencies. For connecting the Raspberry Pi to a PC, a remote connection using Secure Shell (SSH) is preferred. After configuring the Raspberry Pi, we tested three techniques, chosen in light of our study and findings, for vehicle detection and background segmentation: frame difference, the Gaussian mixture model, and Bayesian (GMG) segmentation. The trial was carried out on the VIRAT and MIT Traffic data sets. The techniques were also tested on a video of Indian road traffic at only 360p resolution with an unstable camera, as a worst-case scenario, to obtain accurate information about the behaviour of the three techniques and find the most suitable one. We found that GMG performed well compared with the others, and we therefore used it with Canny edge detection to check whether the more precisely traced edges of moving vehicles could be used in our system.<br />

Our future work will include construction of bounding boxes and occlusion handling. A bounding box is a box shape covering a moving vehicle in each frame, and it will also be valuable in detecting occlusion between two vehicles. We will experiment with different techniques for vehicle tracking and accident detection and implement them to find the most appropriate one for our goal. We plan to test the proposed system in a live environment for vehicle detection. The system will be tested for remote access over the web using the internet dongle and a battery as the power source, giving it independence; this allows the system to be accessed and controlled from any geographical location.<br />



CONCLUSION<br />

An in-depth review of approaches used by researchers for vehicle detection,<br />

vehicle tracking, and accident detection is presented. The available open source<br />

video datasets, useful for offline experimentation, are also discussed. From the<br />

study, we conclude that (i) Hidden Markov Model and Neural Network methods are more<br />

efficient and better suited for experiments in accident detection, as these methods give<br />

good accuracy, and (ii) Lucas-Kanade, a method based on optical flow, and K-means<br />

clustering, a technique based on centroid computation of the moving object, are useful for<br />

vehicle tracking. Experiments on the VIRAT and MIT Traffic datasets were conducted, and it<br />

is concluded that for vehicle detection the Gaussian mixture model and frame difference<br />

techniques prove more effective. The Canny edge detection method performed well in<br />

locating the edges of the vehicle. Weather conditions, power supply and processing power<br />

are critical parameters to be considered while designing an IoT based traffic surveillance<br />

system. In future, we plan to develop an IoT based traffic surveillance system and to test the<br />

system in a live environment, with a vision of its practical deployment in everyday life.<br />

REFERENCES<br />

1. S. Kamijo, Y. Matsushita, K. Ikeuchi and M. Sakauchi, "Traffic monitoring and accident<br />

detection at intersections", IEEE Trans. on Intelligent Transportation Systems, vol. 1, no. 2,<br />

pp. 108-118, June 2000.<br />

2. R. Cucchiara, M. Piccardi, and P. Mello, "Image Analysis and Rule-Based Reasoning<br />

for a Traffic Monitoring System", IEEE Transactions on Intelligent Transportation Systems,<br />

vol. 1, no. 2, pp. 119-130, June 2000.<br />

3. C. Lin, J. Tai and K. Song, "Traffic Monitoring Based on Real-Time Image Tracking",<br />

IEEE International Conference on Robotics & Automation, Taiwan, pp. 2091-2096,<br />

September 2003.<br />

4. Y. Jung, K. Lee, and Y. Ho, "Content-Based Event Retrieval Using Semantic Scene<br />

Interpretation for Automated Traffic Surveillance", IEEE Transactions on Intelligent<br />

Transportation Systems, vol. 2, no. 3, pp. 151-163, September 2001.<br />

5. Y. Ki and D. Lee, "A Traffic Accident Recording and Reporting Model at<br />

Intersections", IEEE Trans. on Intelligent Transportation Systems, vol. 8, no. 2, June 2007.<br />



STUDY ON IMAGE PROCESSING IN SECURITY USING TOUCH<br />

SCREEN & VERIFICATION SOFTWARE<br />

T. Muthu Krithika<br />

P.Brindha<br />

Fatima College(Autonomous),Madurai<br />

ABSTRACT<br />

A security system with image processing, touch screen and verification software can<br />

be used in banks, companies, and at personal secured places. Using image processing, a touch<br />

screen and verification software, more security is provided than by other systems. This<br />

system can be used in the locker systems of banks, companies and at personal secured places. The<br />

techniques used include color processing, which applies primary filtering to eliminate unrelated colors<br />

or objects in the image.<br />

INTRODUCTION<br />

A digital image is a numeric representation, normally binary, of a two-dimensional image.<br />

Depending on whether the image resolution is fixed, it may be of vector or raster type. By itself,<br />

the term "digital image" usually refers to raster images or bitmapped images (as opposed to<br />

vector images). Image processing is often viewed as arbitrarily manipulating an image to achieve<br />

an aesthetic standard or to support a preferred reality. However, image processing is more<br />

accurately defined as a means of translation between the human visual system and digital imaging<br />

devices.<br />

APPLICATION<br />

It is very important for banks and companies to provide a high security system for their valuable<br />

items. In this paper, by using image processing, a touch screen and verification software, more<br />

security is provided than by other systems. This system can be used in the locker systems of<br />

banks, companies and at personal secured places. The object detection technique uses color<br />

processing, which applies primary filtering to eliminate unrelated colors or objects in the image.<br />

The touch screen and verification software can be used as an extra level of security for customers to<br />

verify their identity. Customers have to undergo security verification in three steps. In the first<br />

step, the person is identified using the touch screen: he/she has to touch a point on the touch<br />

screen which should be the same as the point recorded initially during account opening at the<br />

bank.<br />

In the second step, at the time of account opening at the bank, an object is given to the<br />

customer which is different for each customer and may vary in shape, size or colour, and<br />

hence each object has different pixel values. Verification in the second step is done using the object<br />

detection technique of image processing, in which the object is verified using a camera: the object<br />

should be placed on the screen at the same point at which it was initially placed on the screen<br />

at the time of account opening.<br />

In the third step, verification is done using verification software, in which the person has to<br />

answer the details asked by the system. If all three responses of the person match the<br />

responses initially stored in the microcontroller during account opening, the locker security system<br />

of the bank opens; otherwise it remains closed. This system is more secure than other systems<br />

because three steps are required for verification.<br />

TECHNOLOGY<br />

TOUCH SCREEN POINT DETECTION<br />

In this security system a touch screen is used to provide extra security to users. The touch screen is<br />

divided into nine points; at the time of account opening the user will be asked to choose<br />

any one point from the nine points, and that point chosen by the user will be stored in the<br />

microcontroller. At the time of locker opening at the bank, the user will be asked in the first step to<br />

touch the screen; if he/she touches the correct point of the touch screen which was initially selected<br />

by the user at the time of account opening, the system will proceed to the next step for object<br />

verification, otherwise the system will not go to the next step and the locker doors will remain<br />

closed. After the touch screen point detection, the user has to undergo object detection, which is the<br />

second step of this system. The nine-point touch screen is shown in the figure.<br />



OBJECT RECOGNITION<br />

Object recognition is a process which identifies a specific object in a digital image or<br />

video. Object recognition algorithms rely on matching, learning, or pattern recognition<br />

algorithm using appearance-based or feature based techniques. Common techniques include<br />

edges, gradients, Histogram of Oriented Gradients, Haar wavelets, and linear<br />

binary patterns. The techniques using are such as color processing which are used as primary<br />

filtering to eliminate the unrelated color or object in the image. Besides that, shape detection has<br />

been used where it will use the edge detection, Circular Hough Transform .Identification of<br />

person is done by matching the data which is initially stored in the microcontroller. Person has to<br />

verify the object by placing it at the same place on the screen which was initially placed during<br />

the account opening. If the position and object is verified the system proceeds for the next step.<br />



VERIFICATION SOFTWARE<br />

Verification software can be used as an extra level of security or customers to verify their<br />

identify. . At the time of registration of the customer they will be asked to fill details for which<br />

they have to fill the details that will be stored in themicro controller. All the records of details of<br />

different customers will be stored in microcontroller that will be done by creating appropriate<br />

verification software using C or C++.At the time of entering in locker system, different customers<br />

based on their Touch screen point detection and object recognition and previous information<br />

stored in micro processor will be asked to enter the verification details, if they will enter all the<br />

verification details correctly than the locker gate gets open. The customer will be asked to enter<br />

the password(which will be given initially by the bank) and if the type password is matched with<br />

password given by bank then the customer will be asked to enter object number (which will be<br />

given initially at the time of account opening) and if the type object number is matched with<br />

object number given by bank then the customer will be asked to enter answer of a security<br />

question (the security question will be chosen by the user at the time of account opening and its<br />

answer is entered and is stored in the microcontroller) if the given answer matches with the stored<br />

answer the locker gate gets open otherwise it main in the closed position.<br />
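The verification flow above reduces to requiring that every response matches the stored record. A minimal Python sketch follows (the paper proposes implementing this in C or C++ on a microcontroller; the stored values here are placeholders):<br />

```python
# Sketch of the verification-software step: the locker opens only when
# the password, object number and security answer all match the records
# stored at account opening. All stored values are illustrative.
stored = {
    "password": "bk#2217",      # given by the bank at account opening
    "object_number": "OBJ-41",  # number of the object issued to the customer
    "answer": "madurai",        # answer to the chosen security question
}

def locker_opens(password, object_number, answer):
    """Open only when all three responses match the stored records."""
    return (password == stored["password"]
            and object_number == stored["object_number"]
            and answer == stored["answer"])
```

Because the checks are conjunctive, a failure at any step leaves the locker closed, mirroring the sequential flow described above.<br />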

WORKING PRINCIPLE<br />

This security system consists of a microcontroller, an object detection algorithm, verification<br />

software, a keyboard, an LED and an LCD. The system works in three steps. In the first step, the<br />

person is identified using the touch screen: he/she has to touch a point on the touch screen which<br />

should be the same as the point recorded initially during account opening at the bank. In the<br />

second step, at the time of account opening at the bank, an object is given to the customer which is<br />

different for each customer and may vary in shape, size or color, and hence each object has<br />

different pixel values. Verification in the second step is done using the object detection technique of<br />

image processing, in which the object is verified using a camera and the object should be placed on<br />

the screen at the same point at which it was initially placed on the screen at the time of<br />

account opening. In the third step, verification is done using verification software, in which the<br />

person has to answer the details asked by the system.<br />

If all three responses of the person match the responses initially stored in<br />

the microcontroller during account opening, the locker security system of the bank opens; otherwise<br />

it remains in the closed position.<br />



CONCLUSION<br />

A banking locker security system using object detection technique, touch screen point<br />

detection and verification software is implemented. It is more secured system which is cost<br />

effective. The microcontroller compares with the datastored at the time of account opening. It<br />

compares the whether the object is same and at the same place as that of initially recorded. It also<br />

compares the point of touch screen touched with the point selected initially at the time of account<br />

opening and finally this system verifies details using verification software.<br />

REFERENCES<br />

1. Islam, N.S. and Wasi-ur-Rahman, M., "An Intelligent SMS-based Remote Water Metering System",<br />

12th International Conference on Computers and Information Technology, 21-23 Dec. 2009,<br />

Dhaka, Bangladesh.<br />

2. International Conference on Robotics, Vision, Information and Signal Processing 2007<br />

(ROVISP2007), Penang, 28-30 November 2007.<br />



A STUDY ON DIGITAL CAMERA IMAGE<br />

FORENSIC METHODS<br />

K.Muthulakshmi & S.Anjala<br />

Fatima College,Madurai<br />

ABSTRACT<br />

The aim of this survey is to provide a comprehensive overview of the state of the<br />

art in the area of image forensics. These techniques have been designed to identify the source of<br />

a digital image or to determine whether the content is authentic or modified, without<br />

knowledge of any prior information about the image under analysis. All the tools work by<br />

detecting the presence, the absence, or the incongruence of some traces intrinsically tied to the<br />

digital image by the acquisition device and by any other operation after its creation. The paper<br />

has been organized by classifying the tools according to the position in the history of the digital<br />

image at which the relative footprint is left: acquisition-based methods, coding-based methods,<br />

and editing-based schemes.<br />

There are two main interests in digital camera image forensics, namely source<br />

identification and forgery detection. In this paper, we first briefly provide an introduction to the<br />

major processing stages inside a digital camera and then review several methods for source<br />

camera identification and forgery detection. Existing methods for source identification<br />

explore the various processing stages inside a digital camera to derive clues for<br />

distinguishing the source cameras, while forgery detection checks for inconsistencies in image<br />

quality or for the presence of certain characteristics as evidence of tampering.<br />

The growth of sophisticated image processing and editing software has made the<br />

manipulation of digital images easy and imperceptible to the naked eye. This has increased the<br />

demand to assess the trustworthiness of digital images when used in crime investigation, as<br />

evidence in a court of law and for surveillance purposes. This paper presents a comprehensive<br />

investigation of the progress and challenges in the field of digital image forensics to help<br />

beginners develop understanding, apprehend the requirements and identify the<br />

research gaps in this domain.<br />



INTRODUCTION<br />

In digital image processing, computer algorithms are used to perform image processing.<br />

Digital image processing has several advantages over analog image processing. It provides a<br />

large number of algorithms to be used with the input data. In digital image processing we can<br />

avoid processing problems such as noise build-up and signal distortion that arise at various<br />

points in analog signal processing. In the 2000s, fast computers became available for signal<br />

processing, and digital image processing became the most popular form of image processing,<br />

being the most versatile and also the cheapest method.<br />

The term digital image processing generally refers to the processing of a two-dimensional picture<br />

by a digital computer. In a broader context, it implies digital processing of any two-dimensional<br />

data. A digital image is an array of real numbers represented by a finite number of bits. The<br />

principal advantage of digital image processing methods is versatility, repeatability and the<br />

preservation of original data precision.<br />

In digital image processing, we concentrate here on one specific application: digital<br />

camera image forensic methods.<br />

Multimedia forensics has become important in the last few years. There are two main interests,<br />

namely source identification and forgery detection. Source identification focuses on identifying<br />

the source digital devices (cameras, mobile phones, camcorders, etc.) using the media produced<br />

by them, while forgery detection attempts to discover evidence of tampering by assessing the<br />

authenticity of the digital media (audio clips, video clips, images, etc.).<br />

A digital camera or digicam is a camera that captures photographs in digital memory. Most<br />

cameras produced today are digital, and while there are still compact cameras on the<br />

market, the use of dedicated digital cameras is dwindling, as digital cameras are now<br />

incorporated into many devices ranging from mobile devices to vehicles. However, high-end,<br />

high-definition dedicated cameras are still commonly used by professionals.<br />



TECHNOLOGIES USED IN DIGITAL CAMERA<br />

1. ELEMENTS OF A TYPICAL DIGITAL CAMERA<br />

Digital cameras consist of a lens system, filters, a color filter array (CFA), an image sensor, and<br />

a digital image processor (DIP). Color images may suffer from aberrations caused by the<br />

lenses, such as:<br />

Chromatic aberration and<br />

Spherical aberration.<br />

2. CAPTURING AN IMAGE<br />

In capturing an image, we first capture a 3D scene using a digital camera and send it<br />

to a particular system (a digital image processing system) to focus on some specific area,<br />

which then produces a zoomed image as output. Here, the focus is on a water drop on a leaf.<br />



3. TYPES OF DIGITAL CAMERA<br />

Compact digital camera<br />

Bridge camera<br />

Mirrorless interchangeable-lens camera<br />

Digital single-lens reflex (DSLR) camera<br />

Digital single-lens translucent (DSLT) camera<br />

4. DIGITAL IMAGE FORENSIC METHODS<br />

There are several techniques used in digital image forensics. They are:<br />

1. Using JPEG Quantization Tables<br />

2. Using Chromatic Aberration<br />

3. Using Lighting<br />

4. Using Camera Response Function (CRF)<br />

5. Using Bicoherence and Higher Order Statistics<br />

6. Using Robust Matching<br />

5. FLOWCHART OF DIGITAL IMAGE FORENSIC METHODS<br />



The forensic analysis of digital images has become more significant in determining the<br />

origin and authenticity of a photograph. The trustworthiness of photographs has an essential role<br />

in many areas, including forensic investigation, criminal investigation, surveillance systems,<br />

intelligence services, medical imaging, and journalism. Digital images can be captured by<br />

digital cameras or scanners and can also be generated on computers. Passive image forensic<br />

techniques for source identification work on the basic assumption that the fingerprints of the<br />

imaging sensors, in-camera processing operations and compression are always present in images.<br />

Detection of camera-specific fingerprints identifies the image capturing device and verifies that the<br />

image is not computer rendered. Two images having the same in-camera fingerprints are<br />

judged to be taken by the same device. The absence of fingerprints in an image suggests that either<br />

the image is computer generated or has been maliciously tampered with, thereby calling for image<br />

integrity verification. Based on the above assumptions, the published works are presented in this<br />

section with respect to two issues: firstly, to distinguish between natural and computer<br />

generated images; and secondly, to identify the image capturing device if the image is natural.<br />

Image tampering is a deliberate attempt to add, remove or hide some important details of<br />

an image without leaving any obvious traces of the manipulation. Digital images are<br />

generally tampered with by region duplication, image splicing or image retouching. Region<br />

duplication is also known as cloning or a copy-move attack, where selected regions from an<br />

image are copied, sometimes transformed, and then pasted to new locations within the image<br />

itself with the main aim of concealing some original image contents. Image splicing, on the other<br />

hand, uses selected regions from two or more images pasted together to produce a new<br />

image.<br />
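The copy-move attack described above can, in the simplest case, be detected by searching for identical pixel blocks at two different locations in the same image. The following is an illustrative sketch using exact block matching on a synthetic image; practical detectors use robust features so that transformed or recompressed copies are still found.<br />

```python
# Illustrative copy-move (region duplication) detection by exact block
# matching: identical non-overlapping blocks appearing at two different
# locations are flagged as suspicious.
import numpy as np

def find_duplicate_blocks(image, block=4):
    """Return pairs of top-left corners whose pixel blocks are identical."""
    seen, pairs = {}, []
    h, w = image.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            key = image[y:y + block, x:x + block].tobytes()
            if key in seen:
                pairs.append((seen[key], (y, x)))  # duplicate found
            else:
                seen[key] = (y, x)
    return pairs

# "Forged" image: a distinctive 4x4 patch is copied to a second location.
img = np.arange(144, dtype=np.uint8).reshape(12, 12)
img[8:12, 8:12] = img[0:4, 0:4]
dups = find_duplicate_blocks(img)
```

Exact matching fails as soon as the pasted region is scaled, rotated or recompressed, which is why published detectors match quantized DCT or other robust block features instead of raw pixels.<br />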

CONCLUSION<br />

With the outgrowth of the imaging and communication technology, the exchange of<br />

digital images has become easy and extensive. But at the same time, the instances of<br />

manipulations in the digital images have also increased thereby resulting in greater need for<br />

establishing ownership and authentication of the media. Digital image forensic researcher<br />

community is continuously attempting to develop techniques for detection of the imaging device<br />

used for image acquisition, tracing the processing history of the digital image and locating the<br />

region of tampering in the digital images. The sensor, operational and compression fingerprints<br />

have been studied with various image features to achieve the purposes. An attempt to recover the<br />

tampered region details is expected to be an appealing investigation domain for many researchers.<br />



Most of the work done in image forensics has focused on detecting the fingerprints of a specific<br />

kind of tampering operation. But, practically a manipulated image is often the result of multiple<br />

such tampering operations applied together. Thus, the need is to develop a technique or<br />

framework capable of detecting multiple attacks and tampering.<br />

REFERENCES<br />

1. "Digital Image Forensics: Progress and Challenges",<br />

www.researchgate.net/publication/299367087_Digital_Image_Forensics_Progress_and_Challenges<br />

2. http://www.comp.nus.edu.sg/~mohan/papers/forensic_surv.pdf<br />

3. Z. J. Geradts, J. Bijhold, M. Kieft, K. Kurosawa, K. Kuroki, and N. Saitoh, "Methods for<br />

Identification of Images Acquired with Digital Cameras", Proc. of SPIE, Enabling<br />

Technologies for Law Enforcement and Security, vol. 4232, February 2001.<br />

4. J. Lukas, J. Fridrich, and M. Goljan, "Digital Camera Identification from Sensor Pattern<br />

Noise", IEEE Transactions on Information Forensics and Security, June 2006.<br />

5. Cheddad, A., "Doctored Image Detection: A Brief Introduction to Digital Image Forensics",<br />

Inspire Magazine, July 2012.<br />



A STUDY ON RAINFALL PREDICTION<br />

S.Alamelu & S.Nasreen Farzhana<br />

Fatima College(Autonomous), Madurai<br />

ABSTRACT<br />

Water is one of the most important substances on earth. All plants and animals must have<br />

water to survive; if there were no water there would be no life on earth. This paper presents an<br />

analysis of rainfall prediction using digital image processing. The approach proposed here is<br />

to use digital cloud images to predict rainfall. Considering cost factors and security issues,<br />

it is better to predict rainfall from digital cloud images rather than satellite images. The status of<br />

the sky is found using wavelets. The status of the cloud is found using the Cloud Mask Algorithm. The<br />

type of cloud is derived using the K-means clustering technique. The type of rainfall cloud<br />

is predicted by analyzing the color and density of the cloud images. The result predicts the type of<br />

cloud with information such as classification, appearance and altitude, and provides the status<br />

of the rainfall.<br />

INTRODUCTION<br />

Digital image processing is the processing of digital images with computer algorithms. A<br />

digital image is nothing more than a two-dimensional signal. It is defined by the mathematical<br />

function f(x,y), where x and y are the two co-ordinates, horizontal and vertical. The value of<br />

f(x,y) at any point gives the pixel value at that point of the image.<br />

Digital image processing is the use of computer algorithms to create, process, communicate, and<br />

display digital images. Digital image processing algorithms can be used to convert signals from<br />

an image sensor into digital images, to improve clarity and remove noise and other artifacts, to<br />

extract the size, scale, or number of objects in a scene, to prepare images for display or printing,<br />

and to compress images for communication across a network.<br />

A digital image is formed as follows. Capturing an image with a camera is a physical<br />

process in which sunlight is used as the source of energy. A sensor array is used for the acquisition<br />

of the image. When the sunlight falls upon an object, the amount of light reflected by that<br />

object is sensed by the sensors, and a continuous voltage signal is generated from the amount of<br />

sensed data. In order to create a digital image, we need to convert this data into digital form.<br />

This involves sampling and quantization. Sampling and quantization result in a two dimensional<br />

array or matrix of numbers, which is nothing but a digital image.<br />
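The sampling and quantization steps above can be illustrated with a short sketch: a continuous function f(x, y) is sampled on a grid and quantized to 8-bit integer levels, yielding a digital image. The example function and grid size are illustrative assumptions.<br />

```python
# Minimal illustration of sampling and quantization: a continuous
# two-dimensional signal f(x, y) becomes a matrix of 8-bit pixel values.
import numpy as np

def digitize(f, width, height, levels=256):
    """Sample f on a width x height grid and quantize to `levels` values."""
    xs = np.linspace(0.0, 1.0, width)
    ys = np.linspace(0.0, 1.0, height)
    samples = f(xs[None, :], ys[:, None])                    # sampling
    quantized = (samples * (levels - 1)).round()             # quantization
    return np.clip(quantized, 0, levels - 1).astype(np.uint8)

# A smooth horizontal ramp becomes a 4x8 digital image: 0 on the left
# edge, 255 on the right edge, identical rows.
ramp = digitize(lambda x, y: x + 0 * y, width=8, height=4)
```

Increasing `width`/`height` refines the sampling, while increasing `levels` refines the quantization; the two choices are independent.<br />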



RAINFALL PREDICTION<br />

The paper is about "Rainfall Prediction Using Digital Image Processing". The main<br />

aim is to use digital cloud images for estimating rainfall and to detect the type of clouds using<br />

different image processing techniques. Water is a life giver, even a life creator; a life without<br />

water is unimaginable. Every single cell in the body requires water in order to function properly.<br />

One important way in which the bodily tissues use water, for instance, is to regulate body<br />

temperature. Hydraulic power is a wonderful source of green energy, where power is created from<br />

water. Water is used in agriculture for irrigating fields of crops. A large amount of fresh water<br />

comes from rain. Rainfall is thus an important component of life, and it plays a vital role in our<br />

country's social and economic development.<br />

Rainfall being that important, this is how it is formed. Rain that has fallen<br />

may have been water in the ocean a couple of days before. The water at the top of oceans,<br />

rivers and lakes turns into water vapor in the atmosphere using energy from the sun. The water<br />

vapor rises in the atmosphere, where it cools down and forms tiny water droplets through<br />

something called condensation. These then turn into clouds. When they all combine together,<br />

they grow bigger and become too heavy to stay up in the air. This is when they fall to the<br />

ground as rain.<br />

Rainfall can also be predicted with the use of satellites, but that is the costliest method. So<br />

digital image processing techniques are used to determine rainfall. With a prior prediction of<br />

rainfall, people can find ways to save water for future use. Accurately forecasting<br />

heavy rainfall allows warning of floods before the rainfall occurs. Additionally, such<br />

information is useful in agriculture to improve irrigation practices and the effectiveness of<br />

applying fertilizer, pesticides, and herbicides to crops. Digital image processing is one such useful<br />

technique to predict rainfall.<br />

TECHNOLOGY USED FOR THE RAINFALL PREDICTION<br />

Predicting the rainfall consists of six phases. In the first phase data is collected. In the<br />

second phase the status of the sky is found. In the third phase the status of the cloud is found. In the<br />

fourth phase the type of cloud is derived. In the fifth phase the information about the cloud and<br />

the status of rain are displayed. In the sixth phase the analysis and measurement take place.<br />



Data collection → Sky status → Cloud status → Cloud type → Rainfall estimation<br />

I. Data collection: The data collected consists of digital cloud images. A digital camera is used to<br />

capture the clouds. The image is stored in the file system. The dimension chosen for the images is<br />

400 x 250. The format used is *.jpeg.<br />

II. Sky status: The second step is sky status. A wavelet is used for finding the sky status. A<br />

wavelet is a small wave-like oscillation. It separates the points needed for the cluster; in<br />

wavelet separation, these points are used in the identification of the clouds. The wavelet threshold for<br />

the clouds is > 50 and<br />


The cloud mask algorithm is also used here. The cloud mask algorithm consists of certain tests.<br />

Single-pixel threshold tests are used first. Dynamic histogram analysis is used to obtain the<br />

thresholds. Thick High Clouds (Group 1): Thick high clouds are detected with threshold tests that<br />

rely on brightness temperature in the water vapor bands and infrared. Thin Clouds (Group 2): Thin<br />

cloud tests rely on brightness temperature difference tests. Low Clouds (Group 3): Low clouds are<br />

best detected using a solar reflectance test and brightness temperature differences. A spatial<br />

uniformity test is also used over land surfaces. High Thin Clouds (Group 4): This test is similar to<br />

Group 1, but it is spectrally tuned to detect the presence of thin cirrus. A spatial uniformity test and<br />

a brightness temperature difference test are applied. A temporal uniformity test is also processed.<br />
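The single-pixel threshold tests above amount to simple comparisons per band. The sketch below illustrates two of the group tests; the threshold values are placeholders chosen for illustration, not the operational values used by any cloud mask product.<br />

```python
# Illustrative single-pixel threshold tests from the cloud mask stage.
# Thresholds are placeholder assumptions, not operational values.

def thick_high_cloud(ir_brightness_temp_k, threshold_k=240.0):
    """Group 1 test: very cold infrared brightness temperature suggests
    thick high cloud."""
    return ir_brightness_temp_k < threshold_k

def low_cloud(solar_reflectance, threshold=0.3):
    """Group 3 test: high solar reflectance suggests low (bright) cloud."""
    return solar_reflectance > threshold

# A pixel can then be labelled by evaluating the group tests in turn.
```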

IV. Cloud type: The major task is to find the type of cloud as per the cloud status. Each<br />

cloud has its own shape and density, and the values are matched accordingly. The type of<br />

cloud is identified using clustering. We use K-means clustering to group the pixels in order<br />

to differentiate the clouds. The thickness of the clouds will be in the base part. Color,<br />

shape and texture are the concepts used in order to find the type of cloud. The formula to find<br />

the cloud type is as follows:<br />

H(n) = ∑ C[i,j], Cloud id = Highest Density of Cloud Status<br />

K-means is an unsupervised clustering algorithm that classifies the input data points<br />

into multiple classes based on their inherent distance from each other. The algorithm assumes that<br />

the data features form a vector space and tries to find natural clustering in them. The points are<br />

clustered around centroids μi, i = 1, …, k, which are obtained by minimizing the objective<br />

function. An iterative version of the algorithm can be implemented. The algorithm takes a<br />

2-dimensional image as input. The steps of the algorithm are as follows:<br />

1. Compute the intensity distribution (also called the histogram) of the intensities.<br />

2. Initialize the centroids with k random intensities.<br />

3. Repeat the following steps until the cluster labels of the image no longer change:<br />

a. Cluster the points based on the distance of their intensities from the centroid intensities.<br />

b. Compute the new centroid for each of the clusters.<br />
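The steps above can be sketched in pure NumPy, clustering pixel intensities around k centroids by alternating assignment and centroid update. The synthetic "cloud" image and k = 2 are illustrative assumptions.<br />

```python
# Sketch of K-means over pixel intensities, following the steps above:
# assign each pixel to its nearest centroid, then recompute each centroid
# as the mean intensity of its cluster.
import numpy as np

def kmeans_intensity(image, k=2, iters=10, seed=0):
    """Cluster pixel intensities; return centroids and per-pixel labels."""
    pixels = image.reshape(-1).astype(float)
    rng = np.random.default_rng(seed)
    centroids = rng.choice(pixels, size=k, replace=False)  # random init
    for _ in range(iters):
        # Assignment step: nearest centroid per pixel.
        labels = np.abs(pixels[:, None] - centroids[None, :]).argmin(axis=1)
        # Update step: centroid = mean intensity of its cluster.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = pixels[labels == j].mean()
    return centroids, labels.reshape(image.shape)

# Synthetic 8-bit "cloud" image: dark sky (~40) with a bright patch (~210).
img = np.full((20, 20), 40, np.uint8)
img[5:12, 5:12] = 210
centroids, labels = kmeans_intensity(img, k=2)
```

On this two-level image the centroids converge to the two intensity plateaus, separating the bright cloud region from the dark sky.<br />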

V. Rainfall estimation: The final step is the estimation of rainfall, carried out according to the<br />

cloud type we recognize. There are different types of clouds. They are as follows:<br />



High Clouds:<br />

Cirrus - The ice-crystal cloud is a feathery white cloud that is the highest in the sky. It has a<br />

wispy looking tail that streaks across the sky and is called a fall streak.<br />

Cirrostratus - A milky sheet of ice-crystal cloud that spreads across the sky.<br />

Low clouds:<br />

Cumulus - The fair weather cloud. Each cloud looks like a cauliflower.<br />

Stratocumulus - A layer of cloud that can sometimes block the sun.<br />

Rain Clouds:<br />

Cumulonimbus and Nimbostratus - The dark, rain carrying clouds of bad weather. They are to be<br />

blamed for most of the winter rains and some of the summer ones. They cover the sky and block<br />

the sun.<br />

Cumulonimbus and nimbostratus are the rainfall clouds. So we take the color and shape and also<br />

the width and find the rainfall status. The temperature is also taken into account. Cloud<br />

information gives the theoretical proof of cloud that is altitude, height, appearance and<br />

classification are given.<br />

CONCLUSION<br />

Rainfall can be estimated accurately by determining the type of cloud using methods<br />

such as PCA or contourlet transforms, the cloud screening algorithm and k-means clustering. The theoretical<br />

study suggests that these should provide better performance compared to other techniques such as wavelets.<br />

Considering the cost factors and security issues, digital cloud images were used to predict<br />

rainfall rather than satellite images. Prediction of rainfall can also be done with the help of neural<br />

networks and artificial intelligence.<br />





A STUDY ON PLANT DISEASE IDENTIFICATION<br />

A. Gnana Priya & B. Muthulakshmi<br />

Fatima College (Autonomous), Madurai<br />

ABSTRACT:<br />

India is an agricultural country; more than 70 percent of its people depend on agriculture.<br />

Raising crops and maintaining production is a challenging task, because many crops are<br />

affected by pests. Insecticide is one of the best remedies for a pest attack, but it is sometimes<br />

dangerous for birds, animals and also for humans. Some crops require close monitoring, which<br />

helps in the management of diseases. Nowadays digital image processing is widely used in the<br />

agricultural field. This method helps to identify the parts of the plant and detect disease or<br />

pests early. It is also useful for better understanding the relationship between climatic<br />

conditions and disease.<br />

Image processing techniques could be applied on various applications as follows:<br />

1. To detect plant leaf, stem, and fruit diseases.<br />

2. To quantify the area affected by disease.<br />

3. To find the boundaries of the affected area.<br />

4. To determine the color of the affected area.<br />

5. To determine the size and shape of fruit.<br />

A country's wealth depends on its agriculture, and disease in plants causes major losses<br />

to an economy. DIP is the use of computer algorithms to create, process,<br />

communicate, and display digital images. We may conclude that DIP is a useful and effective<br />

technique for crop cultivation.<br />

Disease is caused by a pathogen, which is any agent causing disease. In most cases,<br />

pests or diseases are seen on the leaves or stems of the plant. Therefore the identification of plants,<br />

leaves and stems, and finding out the pest or disease, the percentage of pest or disease incidence,<br />

and the symptoms of the pest or disease attack, play a key role in the successful cultivation of crops.<br />



INTRODUCTION<br />

In biological science, sometimes thousands of images are generated in a single<br />

experiment. These images can be required for further studies such as classifying lesions, scoring<br />

quantitative traits, calculating the area eaten by insects, etc. Almost all of these tasks are processed<br />

manually or with distinct software packages, which raises two major issues: excessive processing time<br />

and subjectivity arising from different individuals. Hence, to conduct high-throughput experiments,<br />

plant biologists need efficient computer software to automatically extract and analyze significant<br />

content.<br />

Machine learning-based detection and recognition of plant diseases can provide extensive<br />

clues to identify and treat diseases in their very early stages. By comparison, visual or naked-eye<br />

identification of plant diseases is expensive, inefficient, inaccurate and difficult.<br />

Automatic detection of plant diseases is an important research topic, as it may prove<br />

beneficial in monitoring large fields of crops and thus automatically detecting the symptoms of<br />

diseases as soon as they appear on plant leaves. Initially, the infected plant leaf images are taken<br />

as the input image, and the image is passed to the pre-processing step to resize it and remove<br />

noise content using a median filter. At the next stage, the pre-processed image is passed to<br />

segmentation for partitioning into clusters. The FCM clustering technique is used for segmentation,<br />

as it is faster, more flexible and easier to implement than others. Further, features are extracted<br />

from the segmented image using methods such as the color correlogram, SGLDM, and Otsu<br />

methods. Finally, a classifier is used for classification and recognition of the plant disease. One of<br />

the best classifiers is the SVM, which is more accurate than others; the result is then stored in the<br />

knowledge base.<br />



PROPOSED METHODOLOGY:<br />

1. Image Acquisition: First we need to select the plant which is affected by the disease, then<br />

collect a leaf of the plant, take a snapshot of the leaf and load the leaf image into the system.<br />

2. Segmentation: This means representation of the image in a more meaningful and easier-to-analyse<br />

way. In segmentation a digital image is partitioned into multiple segments, which can be defined as super-pixels.<br />

Low Contrast: image pixel values are concentrated near a narrow range.<br />

Contrast Enhancement: In figure 2, the original image is the image given to the system, shown with the<br />

output of the system after contrast enhancement.<br />

3. Feature Extraction: this process is done after segmentation. According to the segmented<br />

information and a predefined dataset, some features of the image are extracted. This<br />

extraction can be statistical, structural, fractal or signal-processing based. The Color Co-occurrence<br />

Method, Grey Level Co-occurrence Matrices (GLCM), Spatial Gray-level<br />

Dependence Matrices (SGDM), Gabor filters, wavelet transforms and Principal<br />

Component Analysis are some methods used for feature extraction.<br />
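Of these, the grey-level co-occurrence matrix is simple enough to compute by hand. The sketch below (pure NumPy, a single "one pixel to the right" offset, and a tiny made-up image) shows the counting step; real feature extractors then derive texture statistics such as contrast and energy from this matrix.

```python
import numpy as np

def glcm(img, levels):
    """Grey-level co-occurrence matrix for the 'one pixel to the right' offset."""
    m = np.zeros((levels, levels), dtype=int)
    for i in range(img.shape[0]):
        for j in range(img.shape[1] - 1):
            # Count how often intensity a is immediately left of intensity b.
            m[img[i, j], img[i, j + 1]] += 1
    return m

# Tiny quantized image with 3 grey levels (illustrative values only).
img = np.array([[0, 0, 1],
                [1, 2, 2],
                [0, 1, 1]])
M = glcm(img, levels=3)
```

Each image row of width 3 contributes two horizontal pixel pairs, so this 3×3 image yields six counted pairs in total.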

4. Classification of diseases<br />

A classification technique is used for training and testing to detect the type of leaf disease.<br />

Classification deals with associating a given input with one of several distinct classes. In the given<br />

system a support vector machine (SVM) is used for classification of leaf disease. The classification<br />

process is useful for early detection of disease and for identifying nutrient deficiency.<br />
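An SVM classification step of this kind can be sketched with scikit-learn's `SVC`. The feature vectors below are invented for illustration (stand-ins for extracted color/texture features, not real leaf measurements), and the two class labels are hypothetical.

```python
from sklearn.svm import SVC

# Hypothetical feature vectors per leaf image: [mean hue, GLCM contrast].
X = [[0.30, 0.9], [0.32, 1.0], [0.31, 0.8],   # labeled "healthy"
     [0.10, 3.2], [0.12, 3.5], [0.09, 3.1]]   # labeled "fungal"
y = ["healthy"] * 3 + ["fungal"] * 3

# Train a linear-kernel SVM on the labeled features, then classify
# two unseen feature vectors.
clf = SVC(kernel="linear").fit(X, y)
pred = clf.predict([[0.29, 0.95], [0.11, 3.4]])
```

In practice the training features would come from the segmentation and feature-extraction stages described above, and the labels from a curated disease dataset.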

Based on classification leaves are mainly affected with fungal, bacterial and viral. The<br />

following describes common symptoms of fungal, bacterial and viral plant leaf diseases.<br />

a) Bacterial disease symptoms:<br />

The disease is characterized by yellowish-green spots which come into view as water-soaked.<br />

The lesions amass and then appear as dry dead spots.<br />



b) Viral disease symptoms:<br />

Among all plant leaf diseases, those caused by viruses are the most complicated to diagnose.<br />

All viral diseases present some degree of growth reduction in virus-infected plants. The productive<br />

life of such infected plants is usually short. The virus shows as yellow or green stripes or spots on<br />

foliage. Leaves might be wrinkled or curled, and growth may be stunted, as depicted.<br />

c) Fungal disease symptoms:<br />

Fungi are a type of plant pathogen responsible for serious plant diseases. They damage<br />

plants by killing cells. They disseminate through wind, water, and the movement of contaminated<br />

soil, animals, birds, etc. In the initial stage, the disease appears on older leaves as water-soaked, gray-green<br />

spots. Later these spots darken, and then white fungal growth forms on the undersides.<br />

CONCLUSION<br />

We conclude that this is an efficient and accurate technique for the detection<br />

of plant diseases. In this research, plant disease is detected by SVM classifiers. The SVM<br />

classifiers are based on color, texture and shape features. The algorithm in the proposed approach<br />

uses image segmentation and classification techniques for the detection of plant disease. Accurate<br />

disease detection and classification from the plant leaf image is very important for the<br />

successful cultivation of crops, and this can be done using image processing. This paper<br />

discussed various techniques to segment the diseased part of the plant. Hence there is ongoing work on<br />

the development of a fast, automatic, efficient and accurate system for detecting disease<br />

on unhealthy leaves. A comparison of different digital image processing techniques is also done,<br />

which gives different results on different databases. The work can be extended to develop a<br />

system which also identifies various pests and leaf diseases.<br />





ANALYSIS ON NETWORK SECURITY<br />

S. Karthick Raja & S. Praveenkumar<br />

NMSSVN College, Madurai<br />

ABSTRACT:<br />

In past decades, computer networks were primarily used by researchers for sending<br />

e-mail and by corporate employees for sharing printers. While networks were used for<br />

these utilities, security was not a major threat and did not get due attention. In today's<br />

world computer networks have grown immensely and cover a multitude of sins. This covers simple<br />

issues like sending hate mail. Security problems can also be very severe, like the stealing of<br />

research papers on recent discoveries and inventions by scientists who use the internet as<br />

a sharing tool, or the hacking of financial products like credit cards, debit cards, bank accounts,<br />

etc., by stealing the passwords and misusing the accounts. Cryptography is the ancient science<br />

of encoding messages so that only the sender and receiver can understand them.<br />

A computer can perform more cryptographic operations in a second than a human being<br />

could do in a lifetime. There are three types of cryptographic schemes. They are:<br />

Secret Key Cryptography (SKC)<br />

Public Key Cryptography (PKC)<br />

Hash Functions<br />
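Of these three schemes, a hash function is the easiest to illustrate with Python's standard library (SHA-256 via `hashlib`; the two messages below are made up):

```python
import hashlib

# A hash function maps a message of any length to a fixed-size digest.
d1 = hashlib.sha256(b"pay Alice $100").hexdigest()
d2 = hashlib.sha256(b"pay Alice $900").hexdigest()

# The digest is deterministic: hashing the same message again gives
# the same result, while changing even one character of the message
# produces a completely different digest.
same = hashlib.sha256(b"pay Alice $100").hexdigest()
```

This determinism-plus-sensitivity property is what makes hash functions useful for integrity checking: any tampering with the message changes the digest.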

INTRODUCTION<br />

A basic understanding of computer networks is requisite in order to understand the<br />

principles of network security. The Internet is a valuable resource, and connection to it is<br />

essential for business, industry, and education. Building a network that will connect to the<br />

Internet requires careful planning. Even for the individual user some planning and decisions<br />

are necessary. The computer itself must be considered, as well as the device itself that makes<br />

the connection to the local-area network (LAN), such as the network interface card or modem.<br />

The correct protocol must be configured so that the computer can connect to the Internet.<br />

Proper selection of a web browser is also important<br />



What Is A Network?<br />

A “network” can be defined as “any set of interlinking lines resembling a net, a network of<br />

roads, an interconnected system”. In this sense, a network is simply a system of interconnected<br />

computers, and how they are connected is irrelevant.<br />

The International Standards Organization (ISO) Open System Interconnect (OSI) model<br />

defines internetworking in terms of a vertical stack of seven layers. The upper layers of the<br />

OSI model represent software that implements network services like encryption and<br />

connection management. The lower layers of the OSI model<br />

implement more primitive, hardware-oriented functions like routing, addressing, and flow<br />

control.<br />

SECURITY SERVICES<br />

X.800 defines a security service as: a service provided by a protocol layer of communicating open systems,<br />

which ensures adequate security of the systems or of data transfers.<br />

RFC 2828 defines it as: a processing or communication service provided by a system to give<br />

a specific kind of protection to system resources.<br />

X.800 defines five major categories:<br />

Authentication - assurance that the communicating entity is the one claimed<br />

Access Control - prevention of the unauthorized use of a resource<br />

Data Confidentiality - protection of data from unauthorized disclosure<br />



Data Integrity - assurance that data received is as sent by an authorized entity<br />

Non-Repudiation - protection against denial by one of the parties in a communication<br />

SECURITY ATTACKS<br />

Passive attacks - eavesdropping on, or monitoring of, transmissions to:<br />

obtain message contents, or monitor traffic flows.<br />

Active attacks - modification of the data stream to:<br />

masquerade as some other entity, replay previous messages, modify messages in<br />

transit, or mount a denial of service.<br />

ENCRYPTION<br />

In symmetric encryption (conventional / private-key / single-key), the sender and recipient<br />

share a common key; all classical encryption algorithms are of this type.<br />

Private-key was the only type prior to the invention of public-key cryptography in the 1970s.<br />

Symmetric Cipher Model:<br />

CRYPTOGRAPHY<br />

Cryptography can be characterized by:<br />

type of encryption operations used - substitution / transposition / product<br />

number of keys used - single-key or private / two-key or public<br />

way in which plaintext is processed - block / stream<br />

TYPES OF CRYPTANALYTIC ATTACKS:<br />

‣ ciphertext only - the attacker knows only the algorithm and ciphertext, and uses statistical properties to identify plaintext<br />

‣ known plaintext - know/suspect plaintext and ciphertext pairs to attack the cipher<br />

‣ chosen plaintext - select plaintext and obtain ciphertext to attack the cipher<br />

‣ chosen ciphertext - select ciphertext and obtain plaintext to attack the cipher<br />

‣ chosen text - select either plaintext or ciphertext to en/decrypt to attack the cipher<br />



DES<br />

The Data Encryption Standard (DES) is a cipher (a method for encrypting<br />

information) that was selected by the National Bureau of Standards (NBS) as an official Federal Information Processing Standard<br />

(FIPS) for the United States in 1976 and which has subsequently enjoyed widespread use<br />

internationally. It is based on a Symmetric-key algorithm that uses a 56-bit key. The<br />

algorithm was initially controversial with classified design elements, a relatively short key<br />

length, and suspicions about a National Security Agency (NSA) backdoor. DES consequently<br />

came under intense academic scrutiny which motivated the modern understanding of block<br />

ciphers and their cryptanalysis.<br />

DES is now considered to be insecure for many applications. This is chiefly due to the 56-bit<br />

key size being too small; in January, 1999, distributed.net and the Electronic Frontier<br />

Foundation collaborated to publicly break a DES key in 22 hours and<br />

15 minutes. There are also some analytical results which demonstrate theoretical weaknesses<br />

in the cipher, although they are infeasible to mount in practice. The algorithm is believed to<br />

be practically secure in the form of Triple DES, although there are theoretical attacks. In<br />

recent years, the cipher has been superseded by the AES.<br />

DES is the archetypal block cipher — an algorithm that takes a fixed-length string of<br />

plaintext bits and transforms it through a series of complicated operations into another cipher<br />

text bit string of the same length. In the case of DES, the block size is 64 bits. DES also uses a<br />

key to customize the transformation, so that decryption can supposedly only be performed by<br />



those who know the particular key used to encrypt. The key ostensibly consists of 64 bits;<br />

however, only 56 of these are actually used by the algorithm. Eight bits are used solely for<br />

checking parity, and are thereafter discarded. Hence the effective key length is 56 bits, and it<br />

is usually quoted as such.<br />

Like other block ciphers, DES by itself is not a secure means of encryption but must instead<br />

be used in a mode of operation. FIPS-81 specifies several modes for use with DES. Further<br />

comments on the usage of DES are contained in FIPS-74. [15]<br />

The algorithm's overall structure consists of 16 identical stages of processing, termed rounds.<br />

There is also an initial and final permutation, termed IP and FP, which are inverses (IP<br />

"undoes" the action of FP, and vice versa). IP and FP have almost no cryptographic<br />

significance, but were apparently included in order to facilitate loading blocks in and out of<br />

mid-1970s hardware, as well as to make DES run slower in software.<br />

Before the main rounds, the block is divided into two 32-bit halves and processed<br />

alternately; this criss-crossing is known as the Feistel scheme. The Feistel structure ensures<br />

that decryption and encryption are very similar processes — the only difference is that the<br />

subkeys are applied in the reverse order when decrypting. The rest of the algorithm is<br />

identical. This greatly simplifies implementation, particularly in hardware, as there is no need<br />

for separate encryption and decryption algorithms.<br />

The ⊕ symbol denotes the exclusive-OR (XOR) operation. The F-function scrambles<br />

half a block together with some of the key. The output from the F-function is then combined<br />

with the other half of the block, and the halves are swapped before the next round. After the<br />

final round, the halves are not swapped; this is a feature of the Feistel structure which makes<br />

encryption and decryption similar processes.<br />
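The Feistel symmetry described above can be shown with a toy cipher. This is a sketch of the structure only, not DES itself: the F-function, subkey values, round count and word width below are all made up for illustration.

```python
def feistel_rounds(left, right, subkeys, f):
    # One Feistel pass: each round XORs the F-function of the right half
    # into the left half, then swaps the halves.
    for k in subkeys:
        left, right = right, left ^ f(right, k)
    return left, right

def f(half, key):                       # toy F-function (NOT DES's real F)
    return ((half * 31) ^ key) & 0xFFFFFFFF

keys = [0x0F0F, 0x1234, 0xABCD, 0x5555]   # toy subkeys for 4 rounds
L0, R0 = 0xDEADBEEF, 0x01234567           # the two 32-bit halves of a block

# Encrypt; then decrypt by running the SAME structure with the subkeys
# in reverse order, swapping the halves before and after the pass.
cl, cr = feistel_rounds(L0, R0, keys, f)
dl, dr = feistel_rounds(cr, cl, list(reversed(keys)), f)
pl, pr = dr, dl
```

Decryption recovers the original halves without ever inverting the F-function, which is exactly why the Feistel structure lets DES share one circuit for encryption and decryption.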



The key-schedule of DES<br />

The key schedule for encryption — the algorithm which generates the subkeys.<br />

Initially, 56 bits of the key are selected from the initial 64 by Permuted Choice 1 (PC-1);<br />

the remaining eight bits are either discarded or used as parity check bits. The 56 bits are then<br />

divided into two 28-bit halves; each half is thereafter treated separately. In successive rounds,<br />

both halves are rotated left by one or two bits (specified for each round), and then 48 subkey<br />

bits are selected by Permuted Choice 2 (PC-2) - 24 bits from the left half, and 24 from the<br />

right. The rotations (denoted by "<<<") mean that a different set of bits is used in each subkey.<br />


CONCLUSION<br />

It's important to build systems and networks in such a way that the user is not<br />

constantly reminded of the security system.<br />

Developers need to evaluate what is needed along with development costs, speed of<br />

execution, royalty payments, and security strengths. That said, it clearly makes sense to use as<br />

strong security as possible, consistent with other factors and taking account of the expected<br />

life of the application. Faster computers mean that longer keys can be processed rapidly but<br />

also mean that short keys in legacy systems can be more easily broken.<br />

It's also extremely important to look at the methods of applying particular algorithms,<br />

recognizing that simple applications may not be very secure. Related to this is the issue of<br />

allowing public scrutiny, something that is essential in ensuring confidence in the product.<br />

Any developer or software publisher who resists making the cryptographic elements of their<br />

application publicly available simply doesn't deserve trust and is almost certainly supplying<br />

an inferior product.<br />

Secure communication over insecure channels is the objective of this concept. The claim of<br />

complete security is substantiated to a large extent. Thus a detailed study of Cryptography &<br />

Network Security is reflected in this presentation.<br />



REFERENCES<br />

Tanenbaum, A. S.: Computer Networks.<br />

Stallings, W.: Cryptography and Network Security.<br />

Eli Biham: A Fast New DES Implementation in Software.<br />

Cracking DES: Secrets of Encryption Research, Wiretap Politics, and Chip Design, Electronic Frontier Foundation.<br />

A. Biryukov, C. De Canniere, M. Quisquater (2004). "On Multiple Linear Approximations". Lecture Notes in<br />

Computer Science 3152: 1-22. doi:10.1007/b99099. http://www.springerlink.com/content/16udaqwwl9ffrtxt/ (preprint).<br />

Keith W. Campbell, Michael J. Wiener: DES is not a Group. CRYPTO 1992: pp. 512-520.<br />

Don Coppersmith (1994). The data encryption standard (DES) and its strength against<br />

attacks. IBM Journal of Research and Development, 38(3), 243-250. [1]<br />

Whitfield Diffie, Martin Hellman: "Exhaustive Cryptanalysis of the NBS Data<br />

Encryption Standard", IEEE Computer 10(6), June 1977, pp. 74-84.<br />

WEBSITES<br />

www.rsasecurity.com<br />

www.itsecurity.com<br />

www.cryptographyworld.com<br />



A STUDY ON DIGITAL IMAGE PROCESSING IN BRAIN TUMOR DETECTION<br />

SINDHUJA & P. M. SONIA PRIYA<br />

INTRODUCTION<br />

The aim of digital image processing is to improve the pictorial information for human<br />

interpretation, and to process image data for storage, transmission, and representation for<br />

autonomous machine perception. It is the use of computer algorithms to perform image<br />

processing on digital images. It has many advantages over analog image processing. It allows a<br />

much wider range of algorithms to be applied to the input data and can avoid problems such as<br />

the build-up of noise and signal distortion during processing. It may be modeled in the form of<br />

multidimensional systems.<br />

Image processing mainly includes the following steps:<br />


Importing the image via image acquisition tools;<br />

Analysing and manipulating the image;<br />

Output, in which the result can be an altered image or a report based on analysing<br />

that image.<br />

Some techniques which are used in digital image processing include:<br />


Image editing<br />

Image restoration<br />

Independent component analysis<br />

Linear filtering<br />

Partial differential equations<br />

Pixelation<br />



APPLICATION- OVERVIEW<br />

Brain tumors affect many people worldwide, and not only the elderly: they are<br />

also detected at an early age. A brain tumor is an abnormal growth of cells inside the<br />

cranium which limits the functioning of the brain. Early detection of a brain tumor is possible<br />

with the advancement of machine learning and image processing. Medical image processing is one of<br />

the most challenging and emerging fields today. This paper describes a methodology for detection and<br />

extraction of a brain tumor from a patient's MRI scan image of the brain. The method incorporates<br />

noise removal functions, segmentation and morphological operations, which are basic<br />

concepts of image processing. Detection and extraction of the tumor from MRI scan images of the brain<br />

is done using MATLAB software.<br />

MRI imaging plays an important role in brain tumor analysis, diagnosis and treatment<br />

planning. It is helpful to the doctor in determining the earlier stages of a brain tumor. Brain tumor<br />

detection using MRI images is a challenging task because of the complex structure of the brain.<br />

A detected brain tumor can be of benign or malignant type, the benign being non-cancerous and the<br />

malignant cancerous. Malignant tumors are classified into two types, primary and secondary<br />

tumors; a benign tumor is less harmful than a malignant one.<br />

The basic idea is to develop application software to detect the presence of brain tumor in<br />

MRI images. We are using image processing techniques to detect exact position of tumor.<br />

TECHNOLOGY-USED<br />

There are various medical imaging techniques, like X-ray, computed tomography, positron<br />

emission tomography and magnetic resonance imaging, available for tumor detection. MRI is<br />

the most commonly used modality for imaging brain tumor growth and detecting its location due to its<br />

higher resolution.<br />

1) To improve the performance and reduce the complexity involved in image<br />

processing, we have investigated Berkeley wavelet transformation based brain tumor segmentation.<br />

2) To improve the accuracy and quality rate of the support vector machine based classifier,<br />

relevant features are extracted from each segmented tissue.<br />



Methodology<br />

This has two stages: first, pre-processing of the given MRI image; then segmentation and<br />

morphological operations.<br />

• Input image<br />

• Multiparameter calculations<br />

• Segmentation of the brain tumor using the Region of Interest command<br />

Image Processing Techniques<br />

Median Filtering for Noise Removal<br />

Median filtering is a non-linear filtering technique used for noise removal. It removes salt-and-pepper<br />

noise from the converted gray-scale image by replacing the intensity value of the center pixel with<br />

the median of the intensity values in the neighbourhood of that pixel. Median filters are particularly<br />

effective in the presence of impulse noise.<br />
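The salt-and-pepper removal described above can be sketched in a few lines of NumPy (a minimal 3×3 version that leaves border pixels untouched; the 5×5 array is synthetic, not a real MRI):

```python
import numpy as np

def median_filter3(img):
    """Replace each interior pixel with the median of its 3x3 neighbourhood."""
    out = img.copy().astype(float)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i-1:i+2, j-1:j+2])
    return out

# Flat gray image corrupted by one "salt" and one "pepper" pixel.
img = np.full((5, 5), 100.0)
img[1, 1] = 255.0   # salt (impulse) noise
img[3, 3] = 0.0     # pepper (impulse) noise
clean = median_filter3(img)
```

Because each impulse is a single outlier inside its 3×3 window, the median ignores it entirely, which is why median filtering outperforms mean filtering on this kind of noise.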

Various De-noising Filters<br />

Mean Filter- Based on average value of pixels<br />

Median Filter – Based on the median value of pixels<br />

Wiener Filter - Based on inverse filtering in the frequency domain<br />

Hybrid Filter – Combination of median and wiener filter<br />

Modified hybrid median filter – Combination of mean and median Filter<br />

Morphology Based De-noising- Based on Morphological opening and closing Operations.<br />

Image Enhancement<br />

Poor contrast is one of the defects found in an acquired image, and it has a great<br />

impact on the usefulness of the image. When contrast is poor, a contrast enhancement method plays an<br />

important role. The gray level of each pixel is scaled to improve the contrast. Contrast<br />

enhancement improves the visualization of the MRI image.<br />
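One simple form of the gray-level scaling mentioned above is a linear min-max contrast stretch, sketched here on a made-up low-contrast patch (real enhancement pipelines often use histogram equalization instead):

```python
import numpy as np

def stretch(img):
    """Linearly rescale intensities to span the full 0-255 range."""
    lo, hi = float(img.min()), float(img.max())
    return (img - lo) * 255.0 / (hi - lo)

# Poor-contrast patch: all values squeezed into the 100-140 range.
low = np.array([[100.0, 110.0],
                [120.0, 140.0]])
out = stretch(low)
```

After stretching, the darkest pixel maps to 0 and the brightest to 255, so intensity differences that were barely visible occupy the whole display range.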

Edge Detection<br />

Edge detection is an image processing technique for finding the boundaries of objects within images. It works<br />

by detecting discontinuities in brightness. It is used for image segmentation and data extraction in<br />

areas such as image processing, computer vision, and machine vision.<br />
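Detecting discontinuities in brightness can be sketched with first differences in NumPy (a toy gradient detector, not a full operator such as Sobel or Canny; the image and threshold are illustrative):

```python
import numpy as np

def edges(img, t):
    """Mark pixels where a horizontal or vertical brightness jump exceeds t."""
    gx = np.abs(np.diff(img, axis=1))        # horizontal discontinuities
    gy = np.abs(np.diff(img, axis=0))        # vertical discontinuities
    e = np.zeros(img.shape, dtype=np.uint8)
    e[:, 1:] |= (gx > t).astype(np.uint8)    # mark pixel right of each jump
    e[1:, :] |= (gy > t).astype(np.uint8)    # mark pixel below each jump
    return e

# Synthetic image with one sharp vertical boundary at column 3.
img = np.zeros((5, 5))
img[:, 3:] = 200.0
emap = edges(img, t=50)
```

Only the column where brightness jumps from 0 to 200 is marked; the flat regions on either side produce no edge response.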



Threshold<br />

Thresholding is a simple, effective way of partitioning an image into a foreground and a background. This<br />

image analysis technique is a type of image segmentation that isolates objects by converting gray-scale<br />

images into binary images. Image thresholding is most effective in images with high levels of<br />

contrast.<br />
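A minimal global-threshold sketch of this gray-to-binary conversion (the 6×6 array and the cutoff of 128 are synthetic stand-ins for a real scan and a tuned threshold):

```python
import numpy as np

def threshold(img, t):
    """Binarize a gray-scale image: 1 where intensity exceeds t, else 0."""
    return (img > t).astype(np.uint8)

# High-contrast synthetic scan: dark background with one bright region.
img = np.zeros((6, 6))
img[2:4, 2:4] = 220.0
mask = threshold(img, 128)
```

The resulting binary mask isolates the bright region as foreground, which downstream steps (morphological operations, region labeling) can then refine.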

Morphological Operation<br />

Morphological operations are used as image processing tools for sharpening regions. They are a collection of non-linear<br />

operations related to the shape or morphology of features in an image.<br />

Segmentation<br />

Image segmentation is the process of partitioning a digital image into multiple segments. It is<br />

typically used to locate objects and boundaries in an image; it is the process of assigning a label to every<br />

pixel in an image such that pixels with the same label share certain visual characteristics.<br />

Brain Tumor Technology:<br />

• Classification Technology<br />

• Clustering Technology<br />

• Atlas-based segmentation<br />

• Histogram thresholding<br />

• Watershed and Edge detection in HSV colour model<br />

CONCLUSION<br />

Most of the existing methods have ignored poor quality images, such as images with<br />

noise or poor brightness, and most of the existing work on tumor detection has neglected the use of<br />

image processing. To enhance the tumor detection rate further we will also integrate a new<br />

object based tumor detection. The proposed technique will have the ability to produce effective<br />

results even in the case of a high density of noise.<br />

In this study, digital image processing techniques are shown to be important for brain tumor<br />

detection from MRI images. The processing techniques include different methods like filtering,<br />

contrast enhancement and edge detection for image smoothing. The pre-processed images are<br />

used for post-processing operations such as thresholding, histogram analysis, segmentation and morphological<br />

operations, which are used to enhance the images.<br />





VIRTUAL REALITY AND ITS TECHNOLOGIES<br />

M. Shree Soundarya<br />

Fatima College (Autonomous), Madurai<br />

ABSTRACT<br />

Virtual reality (VR) is a technology which allows a user to interact with a computer-simulated<br />

environment, whether that environment is a simulation of the real world or an imaginary<br />

world. It is the key to experiencing, feeling and touching the past, present and the future. It is the<br />

medium of creating our own world, our own customized reality. It could range from creating a<br />

video game to having a virtual stroll around the universe, from walking through our own dream<br />

house to experiencing a walk on an alien planet. With virtual reality, we can experience the most<br />

intimidating and gruelling situations by playing safe and with a learning perspective. Very few<br />

people, however, really know what VR is, what its basic principles and its open problems are. In<br />

this paper a virtual reality and its basic technologies are listed and how VR works on it are<br />

described.<br />

INTRODUCTION<br />

The term virtual reality comes from both 'virtual', a near-us experience, and 'reality',<br />

something we can experience as human beings. The term itself can apply to almost anything that<br />

can exist in reality but is simulated by a computer.<br />

OBJECTIVES<br />

To know about Virtual Reality technology and how it works.
To know the disparities between Virtual Reality (VR) and Augmented Reality (AR).
To know about challenges in Virtual Reality with Virtual Entertainment (VE).
SCOPE
How VR will fare in the future as a significant technology world-wide.



VIRTUAL REALITY TECH AND WORKINGS<br />

A virtual experience includes three-dimensional images which appear life-sized to the user. Experiences are delivered to our senses via a computer and a screen, or screens, from our device or booth. A tool or machine, such as a booth or haptic system, may give our senses additional experiences, such as touch or movement. Speakers or earphones built into the headset, or set into a machine, provide sound. For virtual reality to work, we have to believe in the experience. Virtual reality applications are getting closer and closer to a real-world background all the time.
The technology needed for a virtual experience depends on the audience, the purpose and, of course, the price point. If you are developing an interactive training simulator for a workplace, you will be looking for far more advanced technology. For everyday use, a quick overview of the technology and equipment is given below.



TABLE 1

PARTICULARS | EQUIPMENTS
1 | Virtual reality for smartphones
2 | Standalone virtual reality headsets
3 | Powerful PC, laptop or console
4 | Virtual reality headset

DISPARITIES BETWEEN VR AND AR<br />

Augmented reality and virtual reality are inverse reflections of one another in what each technology seeks to accomplish and deliver for the user. Virtual reality offers a digital recreation of a real-life setting, while augmented reality delivers virtual elements as an overlay to the real world.



TABLE 2

S.No | PARTICULAR DIFFERS | VIRTUAL REALITY | AUGMENTED REALITY
1 | PURPOSE | It creates its own reality that is completely computer generated and driven. | It enhances experiences by adding virtual components such as digital images, graphics, or sensations as a new layer of interaction with the real world.
2 | DELIVERY METHOD | It is usually delivered to the user through a head-mounted or hand-held controller. | It is being used more and more in mobile devices such as laptops, smart phones, and tablets to change how the real world and digital images interact.

CHALLENGES IN VR WITH VE<br />

Terminology in the virtual reality sector is fast changing. It is argued that the first virtual reality devices emerged decades ago, but the experience they provided is entirely different from the virtual experience of today. As virtual reality becomes more intertwined with our daily lives, we start to think in terms of the virtual environment. We can also define virtual enjoyment, which includes any virtual experience provided for pure entertainment - gaming, movies, videos and social experiences, for example - leaving aside the more serious elements of virtual reality, including training, education, and healthcare applications. Understanding how virtual reality works is crucial to being able to appreciate this powerful technology honestly. It provides each of our senses with information to immerse us in a virtual experience, a near-real experience which our minds and bodies almost completely perceive as real.



The big challenges in the field of virtual reality are developing better tracking systems, finding more natural ways to allow users to interact within a virtual environment, and decreasing the time it takes to build virtual spaces. While there are a few tracking-system companies that have been around since the earliest days of virtual reality, there are not many companies working on input devices specifically for VR applications. Most VR developers have to rely on and adapt technology originally meant for another discipline, and they have to hope that the company producing the technology stays in business. As for creating virtual worlds, it can take a long time to create a convincing virtual environment: the more realistic the environment, the longer it takes to make it.

FUTURE ENHANCEMENT<br />

The future of Virtual Reality depends on the existence of systems that address issues of 'large scale' virtual environments. In the coming years, as more research is done, we are bound to see VR become a mainstay in our homes and at work. As computers become faster, they will be able to create more realistic graphic images to simulate reality better. It will be interesting to see how it enhances artificial reality in the years to come. It is very possible that in the future we will be communicating with virtual phones. Nippon Telephone and Telegraph (NTT) in Japan is developing a system which will allow one person to see a 3D image of the other using VR techniques. The future is virtual reality, and its benefits will remain immeasurable.



CONCLUSION<br />

Virtual Reality is now involved everywhere; you can hardly imagine life without the use of VR technology. We now use mail or conferencing for communication when the other person is not sitting with us, and thanks to this technology, distance no longer matters. This technology gives enormous scope to explore the world of 3D and your own imagination.



STUDY ON REMOTE SENSING IN DIGITAL IMAGE PROCESSING<br />

V. Selvalakshmi & K. Sivapriya
Fatima College (Autonomous), Madurai

ABSTRACT<br />

Remote sensing is the acquisition of information about an object or phenomenon without making contact with the object, and is thus in contrast to on-site observation. Remote sensing is used in numerous fields, including geography, land surveying and most Earth science disciplines; it also has military, intelligence, commercial, economic, planning, and humanitarian applications. In current usage, the term "remote sensing" generally refers to the use of satellite- or aircraft-based sensor technologies to detect and classify objects on Earth.
Remote sensing image processing is nowadays a mature research area. The techniques developed in the field allow many real-life applications with great societal value. For instance, urban monitoring, fire detection or flood prediction can have a great impact on economic and environmental issues. To attain such objectives, the remote sensing community has turned into a multidisciplinary field of science that embraces physics, signal theory, computer science, electronics and communications. From a machine learning and signal/image processing point of view, all the applications are tackled under specific formalisms, such as classification and clustering, regression and function approximation, image coding, restoration and enhancement, source unmixing, data fusion, and feature selection and extraction. This paper serves as a survey of methods and applications, and reviews the latest methodological advances in remote sensing and image processing.



INTRODUCTION<br />

Digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are called picture elements, image elements, pels, or pixels. Digital image processing focuses on two major tasks: improvement of pictorial information for human interpretation, and processing of image data for storage, transmission and representation for autonomous machine perception. There is some argument about where image processing ends and fields such as image analysis and computer vision start. Computer vision can be broken up into low-, mid- and high-level processes. Digital image processing helps us enhance images to make them visually pleasing, or accentuate regions or features of an image to better represent the content. For example, we may wish to enhance the brightness and contrast to make a better print of a photograph, similar to popular photo-processing software. In a magnetic resonance image (MRI) of the brain, we may want to accentuate a certain range of image intensities to see certain parts of the brain. Image analysis and computer vision, which go beyond image processing, help us to make decisions based on the contents of the image.
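The brightness and contrast enhancement described here can be sketched as a simple linear contrast stretch; a minimal NumPy example (the input pixel values and output range are illustrative assumptions):

```python
import numpy as np

def contrast_stretch(image, out_min=0, out_max=255):
    """Linearly rescale pixel intensities to span the full output range."""
    image = image.astype(np.float64)
    lo, hi = image.min(), image.max()
    if hi == lo:                      # flat image: nothing to stretch
        return np.full_like(image, out_min, dtype=np.uint8)
    stretched = (image - lo) / (hi - lo) * (out_max - out_min) + out_min
    return stretched.astype(np.uint8)

# A low-contrast image whose values only span 100..150
img = np.array([[100, 110], [140, 150]], dtype=np.uint8)
out = contrast_stretch(img)
print(out.min(), out.max())  # 0 255
```

After stretching, the darkest input pixel maps to 0 and the brightest to 255, which is the same idea photo-processing software applies when "auto-enhancing" a dull print.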

APPLICATIONS IN REMOTE SENSING<br />

Remote sensing is the science and art of acquiring information (spectral, spatial, and temporal) about material objects, areas, or phenomena, without coming into physical contact with the objects, areas, or phenomena under investigation. Without direct contact, some means of transferring information through space must be utilized. In practice, remote sensing is the stand-off collection, through the use of a variety of devices, of information about a given object or area. In remote sensing, information transfer is accomplished by use of electromagnetic radiation (EMR). EMR is a form of energy that reveals its presence by the observable effects it produces when it strikes matter. EMR is considered to span the spectrum of wavelengths from 10^-10 mm cosmic rays up to 10^10 nm broadcast wavelengths; the portion used in optical remote sensing extends from 0.30-15 µm. Sensors collect information at some remote distance from the subject; this process is called remote sensing of the environment. Remote sensor data can be stored in analog or digital format, and both analog and digital remote sensor data can be analyzed using analog and/or digital image processing techniques.



Scientists have made significant advances in digital image processing of remotely sensed data for scientific visualization and hypothesis testing, neural network image analysis, hyperspectral data analysis, and change detection.

TECHNOLOGY USED IN REMOTE SENSING<br />

Since the beginning of the space age, remarkable progress has been made in utilizing remote sensing data to describe, study, monitor and model the Earth's surface and interior. Improvements in sensor technology, especially in spatial, spectral, radiometric and temporal resolution, have enabled the scientific community to operationalise the methodology. The trend of development of remote sensing is from panchromatic, through multi-spectral and hyper-spectral, to ultra-spectral, with increasing spectral resolution. On the other hand, spatial resolution is reaching its highest level of one-metre resolution.

A remote sensing application is a software application that processes remote sensing data. Remote sensing applications are similar to graphics software, but they enable generating geographic information from satellite and airborne sensor data. Remote sensing applications read specialized file formats that contain sensor image data, georeferencing information, and sensor metadata. Some of the more popular remote sensing file formats include GeoTIFF, NITF, JPEG 2000, ECW, MrSID, HDF, and NetCDF.

Remote sensing applications perform many functions, including:
Change detection - determining the changes between images of the same area taken at different times.
Orthorectification - warping an image to its true location on the Earth.
Spectral analysis - for example, using non-visible parts of the electromagnetic spectrum to determine whether a forest is healthy.
Image classification - categorization of pixels based on reflectance into different land cover classes.
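The change-detection function listed above can be sketched as a per-pixel image difference; this is an illustrative NumPy example (the threshold value is an assumption), not code from any particular remote sensing package:

```python
import numpy as np

def detect_change(before, after, threshold=25):
    """Flag pixels whose brightness changed by more than `threshold`
    between two co-registered images of the same area."""
    before = before.astype(np.int16)       # widen so subtraction can't wrap
    after = after.astype(np.int16)
    diff = np.abs(after - before)          # per-pixel absolute difference
    return diff > threshold                # boolean change mask

# Two tiny 3x3 "scenes" of the same area at different times:
# two pixels brighten sharply (e.g. new construction), the rest are stable
t1 = np.array([[10, 10, 10], [10, 10, 10], [10, 10, 10]], dtype=np.uint8)
t2 = np.array([[10, 10, 90], [10, 10, 10], [80, 10, 10]], dtype=np.uint8)
mask = detect_change(t1, t2)
print(mask.sum())  # 2 changed pixels
```

Real change-detection pipelines first co-register and radiometrically normalize the two acquisitions; the differencing step itself is as simple as shown.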

Until recently, the 20 m spatial resolution of SPOT was regarded as 'high spatial resolution'. Since the launch of IKONOS in 1999, a new generation of very high spatial resolution (VHR) satellites was born, followed by QuickBird in late 2001. The widely used Landsat and SPOT sensors are now called 'medium resolution'. This new satellite sensor generation especially meets the strong market demands from end-users, who are interested in image resolution that will help them observe and monitor their specific objects of interest. The increasing variety of satellites, sensors and spatial resolutions leads to a broader spectrum of applications, but not automatically to better results. The enormous amounts of data create a strong need for new methods to exploit these data efficiently. Spaceborne and airborne sensors are used to collect information about a given object or area. Remote sensing data collection methods can be passive or active. Passive sensors detect natural radiation that is emitted or reflected by the object or area being observed. In active remote sensing, energy is emitted and the resultant signal that is reflected back is measured.

Some of the latest powerful practical applications include:
Anti-terrorism
Surgical strikes
Disaster relief
Detecting sources of pollution



CONCLUSION<br />

Digital image processing of satellite data can be primarily grouped into three categories: image rectification and restoration, enhancement, and information extraction. Image rectification is the pre-processing of satellite data for geometric and radiometric corrections. Enhancement is applied to image data in order to effectively display the data for subsequent visual interpretation. Information extraction is based on digital classification and is used for generating digital thematic maps.

REFERENCES
1. Digital Image Processing (3rd Edition): Rafael C. Gonzalez, Richard E. Woods
2. Introductory Digital Image Processing (3rd Edition): John R. Jensen
3. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3841891/
4. https://www.geospatialworld.net



BIG DATA ANALYTICS IN HEALTHCARE - A SURVEY
Thirumalai & Thievekan
Fatima College (Autonomous), Madurai

ABSTRACT<br />

Like oxygen, data surrounds the world today. The amount of data that we generate and consume is growing rapidly in the digitized world. Increasing use of new technologies and social media produces huge amounts of data that can yield valuable information if properly analyzed. These large datasets, generally known as big data, do not fit in traditional databases because of their sheer size. Organizations need to manage and analyze big data for better decision making and outcomes. Thus, big data analytics is receiving a great deal of attention today. In healthcare, big data analytics has the potential to advance patient care and clinical decision support. In this paper, we review the background and the various techniques of big data analytics in healthcare. The paper also describes various platforms and algorithms for big data analytics and discusses their advantages and challenges. The survey ends with a discussion of challenges and future directions.
Keywords: big data, cloud computing, Hadoop, big data mining, predictive analytics.

INTRODUCTION<br />

New advances in Information Technology (IT) make it easy to create data. For example, 72 hours of video are uploaded to YouTube every minute [26]. The healthcare sector has also produced a tremendous amount of data through record keeping and patient care. Instead of storing data in printed form, the trend is to digitize this boundless data. The digitized data can be used to improve the quality of healthcare delivery while reducing costs, and it holds the promise of supporting a wide range of medical and healthcare functions. It can also provide advanced personalized care, improve patient outcomes, and avoid unnecessary costs.



By definition, big data in healthcare refers to electronic health datasets so large and complex that they are difficult to handle with traditional software, hardware, data management tools, and methods. Healthcare big data includes clinical data; doctors' written notes and prescriptions; medical images such as CT and MRI scan results; laboratory records; pharmacy records; insurance documents and other administrative data; electronic patient record (EPR) data; social media posts such as tweets and updates on web pages; and a vast amount of medical journals. Thus, an enormous amount of healthcare data is available to big data scientists. By understanding patterns and trends within the data, big data analytics promises to improve care, save lives, and reduce costs. Consequently, big data analytics applications in healthcare take advantage of extracting insights from data for better decision making.
Big data analytics is the process of examining huge amounts of data, from various data sources and in various formats, to deliver insights that can enable decision making in real time. Various analytical concepts, such as data mining and artificial intelligence, can be applied to analyze the data. Big data analytical approaches can be used to detect anomalies that emerge when huge amounts of data from different datasets are integrated. In the rest of this paper, we first present the general background, definitions, and properties of big data. Then various big data platforms and algorithms are discussed. Finally, the challenges, future directions, and conclusions are presented.



Definition and properties
Amir Gandomi defined the 3V's as follows. Volume refers to the magnitude of the data; big data sizes are generally in terabytes, petabytes, or exabytes. Doug Beaver noted that Facebook currently stores 260 billion images, around 20 petabytes in size, and processes more than 1 million photos every second. Variety refers to the structural diversity in a dataset; owing to technological growth, we can use different types of data in various formats. Although big data has been widely discussed, there are still different points of view about its definition. Big data in healthcare is emerging not only because of its volume but also because of the variety of data types and the speed at which it must be managed. The following definitions can help us better understand the big data concept.
In fact, big data has been defined since 2001. Certainly, size is the main attribute that comes to mind whenever we think of big data; however, it has other properties as well. Doug Laney (Gartner) characterized big data with a 3V's model, which described the increase of volume, velocity, and variety. Apache Hadoop (2010) defined big data as "datasets which could not be captured, managed, and processed by general computers within an acceptable scope". In 2012, it was redefined by Gartner as: "Big data is high-volume, high-velocity, and high-variety information assets that require new forms of processing to enable enhanced decision making, insight discovery, and process optimization." Such data types consist of audio, video, text, images, log files, and so on. Big data formats are classified into three categories: structured, unstructured, and semi-structured data. This is shown in the following figure.
Figure-1. Content formats of big data.



Structured data denotes the tabular data in spreadsheets and databases. Images, audio and video are unstructured data, which are noted as difficult to analyze. Interestingly, nowadays 90% of big data is unstructured, and its size keeps rising through the use of the internet and smartphones. The characteristics of semi-structured data lie between structured and unstructured data; it does not follow any strict standards. XML (Extensible Markup Language) is a common example of semi-structured data. The third 'V', velocity, means the speed at which data is produced and analyzed. As mentioned earlier, the emergence of digital devices such as smartphones and sensors has allowed us to create these formats of data at an extraordinary rate. The various platforms and algorithms behind big data are discussed in detail in the later sections.

RELATED TECHNOLOGIES<br />

A. Big data platforms<br />

Big data uses distributed storage technology based on cloud computing rather than local storage. Some big data cloud platforms are Google cloud services, Amazon S3, and Microsoft Azure. Google's distributed file system GFS (Google File System) and its programming model MapReduce are the leaders in the field. The implementation of MapReduce has received a substantial amount of attention in large-scale data processing, so many organizations use big data processing frameworks built around MapReduce. Hadoop, a powerful player in big data, was developed by Yahoo and is an open-source version of GFS [29]. Hadoop enables storing and processing big data in a distributed environment on large clusters of hardware; huge data storage and faster processing are both supported. The Hadoop Distributed File System (HDFS) provides reliable and scalable data storage. HDFS makes multiple copies of each data block and distributes them to systems in a cluster to enable reliable access. HDFS supports cloud computing through the use of Hadoop, a distributed data processing platform.
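The MapReduce model described above can be illustrated with a minimal in-process word count in plain Python; this is a sketch of the programming model, not the Hadoop API, and the function names are invented for the example:

```python
from collections import defaultdict

def map_phase(documents):
    """Mapper: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    """Reducer: sum the counts for each word.
    (In Hadoop, a shuffle/sort step groups pairs by key between phases;
    here the defaultdict plays that role.)"""
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

docs = ["Hadoop stores big data", "Hadoop processes big data"]
word_counts = reduce_phase(map_phase(docs))
print(word_counts["hadoop"], word_counts["big"])  # 2 2
```

On a real cluster, many mappers run in parallel over HDFS blocks and many reducers each handle a partition of the keys; the map and reduce functions themselves stay this simple.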



Another system, Bigtable, was developed by Google in 2006 and is used to process huge amounts of structured data; it also supports MapReduce. Dynamo is a scalable distributed data store built for Amazon's platform; it provides high reliability, cost effectiveness, availability, and performance. Tom White describes various tools for big data analysis. Hive, a framework for data warehousing on top of Hadoop, was built at Facebook; Hive with Hadoop for storage and processing meets scalability needs and is cost-effective. Hive uses a query language called HiveQL, which is similar to SQL. A scripting language for exploring large datasets is called Pig.
A criticism of MapReduce is that writing the mappers and reducers, including the packaging and code, is difficult, so the development cycle is long; consequently, working with MapReduce requires experience. Pig overcomes this criticism by its simplicity: it allows developers to write straightforward Pig Latin queries to process big data and thus save time. HBase is a distributed column-oriented database built on top of the Hadoop Distributed File System. It can be used when we require random access to large datasets, and it speeds up the execution of operations. HBase can be accessed through application programming interfaces (APIs) such as REST (Representational State Transfer) and Java. HBase does not have its own query mechanism, so it depends on ZooKeeper. ZooKeeper manages huge amounts of data and allows distributed processes to coordinate through a namespace of data registers; this distributed service also has master and slave nodes, as in Hadoop. Another important tool is Mahout, a data mining and machine learning library. It covers collaborative filtering, classification, clustering, and mining, and it can be executed by MapReduce in a distributed mode. Big data analytics is not based on platforms alone; analytics algorithms also play a huge part.



B. Algorithmic techniques
Big data mining is the technique of extracting hidden, unknown but useful information from huge amounts of data. This information can be used to predict future situations as an aid to the decision-making process. Useful knowledge can be discovered through data mining techniques in healthcare applications such as decision support systems. The big data generated by healthcare organizations is too complicated and vast to be handled and analyzed by conventional methods. Data mining provides the methodology to turn these masses of data into useful information for decision support. Big data mining in healthcare is about learning models to predict patients' diseases. For instance, data mining can help healthcare insurance organizations detect fraud and abuse, healthcare institutions make customer relationship management decisions, doctors identify effective treatments and best practices, and patients receive improved and more affordable healthcare services. This predictive analytics is widely used in healthcare. Various data mining algorithms are discussed, along with their limitations, in 'Top 10 algorithms in data mining' by Wu X et al. Those algorithms address clustering, classification, regression, and statistical learning, which are the core problems in data mining research. The ten algorithms discussed include C4.5, k-means, Apriori, Support Vector Machines, Naïve Bayes, EM, CART, and so on. Big data analytics also includes various techniques such as text analytics, multimedia analytics, and so forth.
However, as noted above, one of the crucial categories is predictive analytics, which includes various statistical techniques from modeling, data mining, and machine learning that analyze current and historical facts to make predictions about the future. In a hospital setting, predictive techniques are used to identify whether someone may be at risk of readmission or is in a serious relapse; this information helps clinicians make critical care decisions. Here it is important to consider machine learning, since it is widely used in predictive analytics. The process of machine learning is very similar to that of data mining; both of them search through data to look for patterns.



Figure-2. Machine learning algorithms - A hierarchical view.



Hall outlined a strategy for learning rules from a huge dataset; the approach is to have a single decision system generated from a large subset of the data. Meanwhile, Patil et al. pursued a hybrid approach, blending genetic algorithms and decision trees to create an improved decision tree that enhances the performance and efficiency of computation. With the expanding knowledge in the area of big data, the variety of techniques for analyzing data is represented in 'Data Reduction Techniques for Large Qualitative Data Sets'. It describes that the selection of a particular technique is based on the type of dataset and the way the samples are to be examined. One study applied k-means clustering using Apache Hadoop, aiming to efficiently analyze a large dataset in a minimal amount of time. They also examined whether accuracy and detection rate are affected by the number of fields in log files. Their test results demonstrate the right number of clusters and the right number of entries in log files, but the rate of accuracy decreases when the number of entries increases, which shows that the accuracy needs to be improved. Classification is one of the data mining techniques used to predict and assign predetermined data to a specific class. Different classification techniques have been proposed by researchers; the widely used techniques are described by Han et al.

These include the following:
Neural network algorithm
Decision tree induction
Bayesian classification
Rule-based classification
Support vector machine
K-nearest neighbor classifier
Rough set approach
Genetic algorithm
Fuzzy set approach
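As a minimal illustration of one of these classifiers, the k-nearest neighbor method can be sketched in plain Python; the toy data, labels and k=3 are invented for the example and do not come from any of the surveyed studies:

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of ((feature, ...), label) pairs."""
    dists = sorted(
        (math.dist(features, query), label) for features, label in train
    )
    top_labels = [label for _, label in dists[:k]]   # k closest neighbors
    return Counter(top_labels).most_common(1)[0][0]  # majority label

# Toy dataset: two well-separated clusters labeled 'healthy' and 'at-risk'
train = [((1.0, 1.0), "healthy"), ((1.2, 0.9), "healthy"),
         ((0.8, 1.1), "healthy"), ((5.0, 5.0), "at-risk"),
         ((5.2, 4.8), "at-risk"), ((4.9, 5.1), "at-risk")]
print(knn_classify(train, (1.1, 1.0)))  # healthy
print(knn_classify(train, (5.1, 5.0)))  # at-risk
```

kNN has no training phase at all, which is why its accuracy on a medical dataset depends entirely on the choice of k, the distance measure, and the quality of the stored examples.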

Any of the aforementioned classification techniques can be applied to classify application-oriented data. The applicable classification technique is to be chosen based on the type of application and the dimensionality of the data. It is a major challenge for researchers to select and apply the appropriate data mining classification algorithm for diagnosing medical problems; choosing the right technique is a demanding task.

The right method can be chosen only after analyzing all the available classification methods and checking their performance in terms of accuracy. Various research efforts have been carried out in the area of medical diagnosis using classification techniques. The most important factor in a medical diagnosis system is the accuracy of the classifier. This research paper analyses the different classification techniques applied in medical diagnosis and compares their classification accuracy. C4.5 was applied to analyze the SEER dataset for breast cancer and classify patients as being either in the beginning stage or the pre-cancer stage; the records analyzed numbered 500, and the accuracy achieved on the testing datasets is 93%. Shweta used the Naïve Bayes, ANN, C4.5 and decision tree algorithms for the diagnosis and prognosis of breast cancer. The results show that decision trees give the highest accuracy of 93.62%, while Naïve Bayes gives 84.5%, ANN produces 86.5% and C4.5 produces 86.7%. Chaitrali used Decision Tree, Naïve Bayes and Neural Network algorithms for analyzing heart disease; the comparison shows that Naïve Bayes achieves 90.74% accuracy, while Neural Networks and Decision Trees give 100% and 99.62% accuracy respectively. Different data mining techniques were applied to predict heart disease, and the accuracy of each algorithm was checked: Naïve Bayes, Decision Tree and ANN achieved 86.53%, 89% and 85.53% accuracy respectively. Three different data mining algorithms, ANN, C4.5 and Decision Trees, were used to investigate heart-related diseases using ECG signals. The results clearly show that the Decision Tree algorithm performs best, with an accuracy of 97.5%. The C4.5 algorithm gives 99.20% accuracy while the Naïve Bayes algorithm gives 89.60%; here these algorithms were used to estimate the surveillance of liver disorder. Christobel et al. applied the KNN method to a diabetic dataset; it gives an accuracy of 71.94% with 10-fold cross validation. C5.0 is a classification algorithm suitable for large datasets. It outperforms C4.5 in speed, memory and performance. The C5.0 method works by splitting the sample on the field that gives the maximum information gain.<br />
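The splitting criterion named here, information gain, is the reduction in entropy achieved by partitioning the sample on a field. A minimal sketch of the computation, using invented "fever" records purely for illustration:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a sequence of class labels."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def information_gain(rows, field):
    """Entropy reduction from splitting `rows` (dicts with a 'label' key)
    on the values of `field`."""
    base = entropy([r["label"] for r in rows])
    parts = Counter(r[field] for r in rows)
    remainder = 0.0
    for value, n in parts.items():
        subset = [r["label"] for r in rows if r[field] == value]
        remainder += (n / len(rows)) * entropy(subset)
    return base - remainder

rows = [
    {"fever": "yes", "label": "ill"}, {"fever": "yes", "label": "ill"},
    {"fever": "no", "label": "well"}, {"fever": "no", "label": "well"},
]
print(information_gain(rows, "fever"))  # → 1.0: a perfect split recovers all the base entropy
```

C4.5 and C5.0 evaluate this quantity (in normalized form, as gain ratio) for every candidate field and split on the maximum, exactly as the paragraph above describes.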

The C5.0 system splits samples on the field with the maximum information gain. The sample subset obtained from the previous split is then split again, and the process continues until the sample subset cannot be split any further, generally on another field. Finally, considering the lowest-level split, the sample subsets that make no notable contribution to the model are dropped. The C5.0 approach easily handles multi-valued attributes and missing values in a dataset. The C5.0 rule sets have noticeably lower error rates on unseen cases for the sleep and forest datasets. The C4.5 and C5.0 rule sets have the same predictive accuracy for the income dataset, but the C5.0 rule set is smaller. The running times are not even comparable: for example, C4.5 required about 15 hours to find the rule set for the forest dataset, while C5.0 completed the task in 2.5 minutes. C5.0 generally uses an order of magnitude less memory than C4.5 during rule set construction [20]. So clearly the C5.0 approach is superior to C4.5 in many respects. Hsi-Jen Chiang proposed a method for analyzing prognostic indicators in dental implant treatment, analyzing 1161 implants from 513 patients. Data on 23 items were taken as influencing factors on dental implants. These 1161 implants were analyzed using the C5.0 method, which generated 25 nodes. This model achieves a performance of 97.67% accuracy and 99.15% specificity.<br />
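Several of the accuracy figures quoted above, such as the KNN result of Christobel et al., come from 10-fold cross validation. The protocol can be sketched as follows; the striped partition, the toy data and the majority-label stand-in model are simplifying assumptions, not the evaluation actually used in those studies:

```python
def k_fold_cross_validate(data, k, fit, predict):
    """Average accuracy over k folds: train on k-1 folds, test on the held-out one."""
    folds = [data[i::k] for i in range(k)]  # simple striped partition into k folds
    accuracies = []
    for i, held_out in enumerate(folds):
        train = [row for j, fold in enumerate(folds) if j != i for row in fold]
        model = fit(train)
        correct = sum(predict(model, x) == y for x, y in held_out)
        accuracies.append(correct / len(held_out))
    return sum(accuracies) / k

# Toy stand-in model: remember the majority label of the training split.
def fit(train):
    labels = [y for _, y in train]
    return max(set(labels), key=labels.count)

def predict(model, x):
    return model

data = [((i,), "ill" if i % 3 == 0 else "well") for i in range(40)]
print(k_fold_cross_validate(data, 10, fit, predict))
```

Averaging over all k held-out folds is what makes the reported accuracies less sensitive to a single lucky or unlucky train/test split.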

CHALLENGES AND FUTURE DIRECTION<br />

Big data analytics not only provides promising opportunities but also faces a lot of challenges. The challenge begins with choosing the big data analytics platform. While choosing the platform, several criteria such as availability, ease of use, scalability, level of security and continuity should be taken into account. Other challenges of big data analytics are data incompleteness, scalability and security. Since cloud computing plays a major role in big data analytics, cloud security should be considered. Studies show that 90% of big data is unstructured data, yet the representation, analysis and access of diverse unstructured data are still a challenge. Data timeliness is also critical in various healthcare areas such as clinical decision support, whether for making decisions or for providing information that guides them. Big data can make decision support more straightforward, faster and more accurate, because decisions are based on higher volumes of data that are more current and relevant. This requires scalable analytics algorithms to produce timely results; however, most current algorithms are inefficient in terms of big data analytics, so the availability of effective analytics algorithms is also important. Concerns about privacy and security are prevalent, although these are increasingly being addressed by new authentication methodologies and policies that better secure patient-identifiable information.<br />

CONCLUSION<br />

A lot of heterogeneous medical data has become available in various healthcare organizations. The rate of electronic health record (EHR) adoption continues to rise in both inpatient and outpatient settings. That data could be an enabling resource for deriving insights to improve patient care and reduce waste. Exploring the huge amount of healthcare information newly available in digital format should enable advanced detection of good treatment, better clinical decision support and accurate predictions of who is likely to become ill. This requires high-performance computing platforms and algorithms. This paper surveys the various big data analytics platforms and algorithms, and discusses the associated challenges. Based on the analysis, although medical diagnosis applications use different algorithms, the C4.5 algorithm gives better performance. Still, improvement of the C4.5 algorithm is required to boost accuracy, handle huge amounts of data, reduce the space requirement for large datasets, support new data types and reduce the error rate. The C5.0 approach overcomes these drawbacks by producing better accuracy and requiring less space when the volume of data grows from thousands to millions or billions of records. It also has a lower error rate and minimizes the predictive error. The C5.0 algorithm is potentially a suitable algorithm for any kind of medical diagnosis.<br />

In the case of big data, the C5.0 algorithm works faster and gives better accuracy with less memory consumption. Despite the limited work done on big data analytics so far, much effort is needed to overcome the issues related to the aforementioned challenges. Likewise, rapid advances in platforms and algorithms can accelerate implementation.<br />



REFERENCES<br />

1. Alexandros Labrinidis and H. V. Jagadish, “Challenges and opportunities with big data,” Proc. VLDB Endow., 5, pp. 2032-2033, August 2012.<br />
2. A. S. Aneeshkumar and C. J. Venkateswaran, “Estimating the surveillance of liver disorder using classification algorithms,” Int. J. Comput. Applic., 57, pp. 39-42, 2012.<br />
3. Amir Gandomi and Murtaza Haider, “Beyond the hype: Big data concepts, methods, and analytics,” International Journal of Information Management, 35, pp. 137-144, 2015.<br />
4. S. Chaitrali, D. Sulabha and S. Apte, “Improved study of heart disease prediction system using data mining classification techniques,” Int. J. Comput. Applic., 47, pp. 44-48, 2012.<br />
5. Doug Beaver, Sanjeev Kumar, Harry C. Li, Jason Sobel and Peter Vajgel (Facebook Inc.), “Finding a Needle in Haystack: Facebook’s Photo Storage,” 2010.<br />
6. Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. Wallach, Mike Burrows, Tushar Chandra, Andrew Fikes and Robert E. Gruber, “Bigtable: A Distributed Storage System for Structured Data,” ACM Trans. Comput. Syst., 26, 2, Article 4, June 2008.<br />
7. Jason Brownlee, “Machine Learning Foundations: Master the definitions and concepts,” Machine Learning Mastery, 2011.<br />
8. RuleQuest Research, http://rulequest.com/see5-comparison.html.<br />
9. Sanjay Ghemawat, Howard Gobioff and Shun-Tak Leung, “The Google file system,” Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP), Bolton Landing, NY, USA, October 19-22, 2003.<br />

