
Final Program EXPRES 2012 - Conferences

flow of the software, it is possible to implement it into the model of the presented method for concurrency.

V. CONCLUSION

Concurrent programming is complex and hard to get right. In most cases the parallelization of software is not a straightforward task, and the resulting concurrent programs often have safety and performance issues. For embedded systems, the existing parallelization algorithms and solutions are not optimal because of their resource requirements and safety issues. The goal is therefore a solution for concurrent programming that is optimal for embedded systems and that simplifies the development of concurrent programs. The key to the successful development of parallel programs lies in the realization of tools that take the human factors and aspects of parallel development into consideration.

The model presented in this paper builds on the advantages of existing parallelization algorithms, with the human factor as its primary deciding factor. In the development of concurrent applications the parallelization algorithms and solutions used are important, but the most important factor is the developer/user. To achieve the best possible results and efficient software, we must concentrate on the most important factor of development: the human (developer).

The presented parallelization model is most applicable when the problem to be solved is not trivially parallelizable, which is the case for a great number of algorithms and software.
