FICT Publication 2024 - UM Home of Tech
FICT@UM
HOME
OF
TECH
2024
BIOMETRICS AND SECURITY
Did you know that your body has unique features like fingerprints? Biometrics is a cool way to
keep things safe!
What is Biometrics?
Biometrics = Body measurements.
Fingerprint, Face, Eye (Iris), Voice
Why are Biometrics important?
Biometrics help keep our stuff safe!
No Need for Passwords: No more forgetting or sharing passwords!
How does Biometrics work?
1. Scanning your feature
2. Turning it into a special code
3. Matching the code to unlock
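The three steps above can be pictured with a tiny sketch (purely illustrative, not a real biometric system): a scan is reduced to a numeric template, and unlocking succeeds when a fresh scan is close enough to the enrolled one. The `enrol` and `matches` names and the tolerance value are made up for this example.

```python
# Illustrative sketch of enrol-then-match: two scans of the same finger are
# never identical, so matching allows a small difference (tolerance).

def enrol(feature: list[float]) -> list[float]:
    """Store the scanned feature as the reference template (the 'special code')."""
    return list(feature)

def matches(template: list[float], scan: list[float], tolerance: float = 0.1) -> bool:
    """Unlock only if the new scan is within `tolerance` of the template."""
    distance = sum((a - b) ** 2 for a, b in zip(template, scan)) ** 0.5
    return distance <= tolerance

template = enrol([0.12, 0.80, 0.43])
print(matches(template, [0.13, 0.79, 0.44]))  # near-identical scan -> True
print(matches(template, [0.90, 0.10, 0.55]))  # different finger -> False
```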
Common Biometric Features
Your fingerprint is like your secret code!
Your face is unique, just like you!
Your iris has its own special pattern!
Your voice has a unique sound!
How to use Biometrics
Stay Safe with Biometrics
Unlock your phone with your fingerprint!
Some computers use your face to log in!
Biometrics keep your stuff safe and secure!
eSkills Malta Foundation
/ESkillsMalta @eSkills_Malta eskills.org.mt
Editorial
Iggy Fenech Dr Conrad Attard
What do you think of when
you think of home?
Hopefully, the word conjures
up visions of a place that
is welcoming; somewhere
where you can safely grow, where
you are encouraged to fulfil your
potential, and where you can be
yourself and thrive. That is certainly
the idea behind the theme for this
year’s FICT Expo and accompanying
publication, which you are currently
holding in your hands.
The Faculty of Information &
Communication Technology (FICT)
has been the home of tech at the
University of Malta (UM) since
2007. We pride ourselves in being
the place where research on the
matter is conducted, encouraged,
and supported. But we aim to be
more than just the home of theories,
gizmos, and programming languages:
we’d like to think of ourselves as a
place that inspires those who want
to learn about technology, and the
areas it impacts, as well as a point
of reference for them once they’ve
spread their wings.
We also want to ensure that
industry and the general public
know how important our students’
research is. This is why we have
always closed our academic years
with the FICT exhibition, an annual
occasion during which students
showcase the projects they have
worked on and at which we hold our
prize-giving ceremonies.
This year things are a bit different,
however, as the FICT is no longer
just having an exhibition but rather a
more prominent exposition. This still
lets us celebrate the best student
projects, but it also gives us a better
platform to explain the Faculty’s role
in scientific research to the public
while fostering a sense of just how
much fun technology and science
can be. Moreover, it helps us bring
together our students and industry
partners in the hope of increasing
opportunities for both sides.
All this boils down to one truth,
and that’s that we hold this annual
event because ICT was never
meant to exist in a vacuum: ICT has
economic and social effects that
can be life-changing. Indeed, ICT is
a means to an end, and that end
should be to improve processes,
objects, experiences, and lives.
University of Malta • Faculty of ICT 1
This can be seen in the articles throughout this
publication, where students explain how their projects
will or can make a difference. There are many interesting
insights into, among other things, how AI is being used to
bring the Maltese language into the digital age, and how
technology can better explain the process of undergoing
radiotherapy to younger patients.
The expo and publication, however, are only a small
part of what the Faculty does, as there are many other
outlets the FICT uses to promote ICT’s importance to the
community. Its other efforts include numerous school
visits throughout the year, plenty of collaborations with
government and industry, and even robot championships
to help amp up the cool factor.
This is over and above the Faculty’s
staff’s day-to-day work, which is focused
on delivering the highest possible level
of education in the field of ICT to all its
students at undergraduate, masters, PhD,
and post-doctoral level. To achieve this, the
FICT works with many other faculties at
UM to combine disciplines and offer more
targeted education, including in healthcare, aviation, and
business.
The Faculty does all this because, for all the microchips, and bits, and quantum mechanics that dominate the conversation surrounding ICT, it understands the fact that this is a people-centric discipline, which has to go beyond the screen it is activated on or the new processes it automates. Yet that brings us to an even more pertinent problem in our reality, and that is that we, as a nation, are still failing to properly prioritise ICT through the educational system, be it in terms of investment or the career paths available to those who study it.

We believe that the whole system surrounding the teaching and learning of ICT needs to be rethought and reinvented in order to ensure a level of digital literacy that will match the needs of tomorrow, not just today’s. This must include the transfer of knowledge about staying safe online, the real capabilities of AI, how online platforms harvest data, and a basic knowledge of the key components that operate our devices and software, like programming languages.

Without these skills, the citizens of tomorrow will struggle to keep afloat in a world that is increasingly going digital. They will find it harder to secure jobs in every sphere and every area, be it healthcare, finance, archaeology, or manufacturing. They will also find it difficult to get by, especially as technology continues to change the way we create, spread, and present information (and misinformation).

Most worryingly, the lack of individual knowledge of ICT will impact the collective, making society more poorly resourced and less able to keep up with advancements in technology. This may reduce the quality of life of people across the board as we, ultimately, are all links in a chain.

In other words, all of society will either reap the benefits of a positive vision in the area of ICT or suffer the consequences; it’s up to us to decide where we’d like it to go. Our two cents on the matter are that, just like the sciences, languages, and the arts—all of which are necessary and wonderful subjects—our understanding of technology needs to be fostered from a young age. It needs to become a second language that we can all speak fluently and use to advance our careers and society.

With that, we will leave you to leaf through the rest of the articles we have put together for you. We hope you will find them as inspirational as we did.

Happy reading!

Dr Conrad Attard & Iggy Fenech

All the content in this publication is available online at ictprojects.mt

For more information about the Faculty of ICT and our degrees, please visit our website (www.um.edu.mt/ict) or our Facebook page (fb.com/um.ictfaculty). Alternatively, get in touch via email on ict@um.edu.mt or call us on +356 2340 2530.
2 Faculty of Information and Communication Technology Final Year Projects 2024
Your Career Awaits!
Have you got what it takes to join Team Exigy?
We’d love to hear from you!
A word from
the Dean
The Faculty of ICT’s annual exhibition, which this
publication coincides with, is a much-anticipated
event for both its staff and its students. It has
indeed become one of the most important dates on
our calendar, but did you know that the exhibition
itself actually predates the Faculty?
The first exhibition took place in the year 2000, and was a collaborative effort between the five departments that would go on to form the Faculty of ICT in 2007. Since then, it has gone from strength to strength, resulting in a fully-fledged expo where, among other initiatives, 72 final-year students will this year be presenting their projects to industry, government officials, and the public at large.

As wonderful as the expo is, it is only a small part of what the Faculty—thanks to its tireless staff, I must say—does throughout the academic year. This year, for example, we are expanding our offerings with two brand new degrees.

The first is an undergraduate degree in Language Technology and Artificial Intelligence (AI). A collaboration between our Faculty and the Institute of Linguistics and Language Technology, this BSc will allow enrollees to see how AI could be applied to language, including our native tongue, for a multitude of uses, including communication, translation, and spell-checking.

The second is a BSc in Accountancy and Information Systems, which is aimed at anyone who aspires to work in Accounting, an area that is quickly and increasingly embracing ICT. This is a collaboration with the Faculty of Economics, Management & Accountancy, ensuring students receive the highest possible training in both areas.

Meanwhile, we have also received much-appreciated funding from numerous external partners, including industry and government entities. One of these funders is the Malta Council for Science and Technology (MCST), which is supporting research on how AI could be used to study the outcomes of cases that have previously made it to the Small Claims Tribunal, giving us and future plaintiffs data on whether a claim is likely to be successful or not.

We also can’t fail to mention that the Department of Communications and Computer Engineering within our Faculty is also part of two MCST Space Upstream Programme research projects. In a nutshell, these projects will be using AI to detect how spaceflight affects the DNA of various lifeforms, as well as how AI can be used in microscopy image segmentation and IRIF quantification.

In other words, the Faculty has been quite busy. Yet, for all that, I would say that our biggest achievement over the past year has been the fact that our intake of students for the 2023/2024 academic year was even greater than the one for 2022/2023. To us, that shows that more people are understanding the importance of, and showing interest in, the areas of Information and Communication Technology. This gives all our work a purpose.

With that in mind, I will leave you to pore over this year’s publication, where you will find write-ups about some truly interesting research projects conducted by some of our most brilliant students.

Until next year,

Prof. Inġ. Carl James Debono
Dean of the Faculty of ICT
Border Security • Digital Health • Cyber • Digital
Banking • Specialised IT Solutions
We believe in bringing people together to share their skills, creativity,
optimism, and vision to create an innovative legacy. Grow your career
with us.
PTL Ltd,
Nineteen Twenty Three, Valletta Road,
Marsa MRS 3000, Malta
T. (+356) 2144 5566 E. info@ptl.com.mt
Company Reg. No.C3545
PTL Ltd is part of
Harvest Technology plc
More than 15 Robots!
4 games over 2 days!
Develop
beyond your future
Build and deploy innovations that
shape the future of payments.
Make your first step towards your
global career. We offer you a wide
range of opportunities to grow
beyond yourself in an inspiring
and global work environment.
RS2.COM
recruitment@rs2.com
+356 2134 5857
RS2plc
Embark on your journey to
the dynamic world of tech
and pave your way to success.
The PwC Graduate Programme for Technology students is a journey that focuses on
your development during your studies at University and prepares you for a career at PwC.
What to expect?
A mix of digital upskilling, practical work experience, dissertation support, networking
with the best in the industry and much more!
Cyber Security & Trust
Data & AI
ERP & Cloud Solutions
Digital Transformation & Innovation
Product & Software Development
Apply today
#FICT2024
Front cover and booklet designed by
Mr Jean Claude Vancell
Printing by
www.gutenberg.com.mt
Editorial board
Dr Conrad Attard and Mr Iggy Fenech
Text of articles by final-year
students reviewed by
Ms Colette Grech @restylelinguistic
FICT@UM
HOME
OF
TECH
Review of Articles
Prof. Chris Columbo and
Ms Rebecca Camilleri
Lead Administrator of Publication
Ms Samantha Pace
Administration of Publication
Mr Rene Barun and Ms Dorina Ndoj
Video Creator
Pyramid Pictures
Photography
Mr James Moffett and
Mr Kristov Scicluna
The Faculty of Information and Communication Technology gratefully acknowledges the following
firms and organisations for supporting this year’s Faculty of ICT Publication 2024:
Gold Sponsors
Silver Sponsors
Main Sponsor of Event
Event Sponsors
[Expo floor plan, Level -1 foyer: project areas for Deep Learning, Digital Health, Networks & Communications, Data Science, Human Computer Interaction, Natural Language Processing, Software Engineering & Web Applications, Internet of Things, and Audio Speech & Language Technology, together with the ICT Lab (-1B01) and the Common Area (-1B10); entrances via the Ring Road, Block A, Block B, and the Under Bridge doors. #FICT2024]
ICT@
WORK
Over the past few decades, ICT has
become an integral part of every
industry imaginable, and its importance
is continually growing. Here, we chat to
five key players in different industries to
discover why future professionals and
citizens will need to be fluent in ICT.
Some may think of ICT as a standalone
area that only impacts the
lives of the few... Yet, the reality
is that it’s become an invaluable
tool in every industry in the
world, helping professionals deliver
faster, better, and more-personalised
services.
In this article, we speak to five leaders whose primary area isn’t tech to discover how ICT has impacted their work, what role it will play in the future, and why tomorrow’s professionals in those areas need to be proficient in tech.
Photo by Giola Cassar
Chris Gruppetta is the Director of Publishing at Merlin Publishers. With 25 years of experience as a publisher and editor under his belt, he’s seen technology completely change the industry.

‘Book publishing has an unfair reputation of not being IT-friendly but, ironically, it’s an industry that has been entirely reliant on IT since the late 80s. This is especially so during the editing and production stages, when there are many people collectively working on a publication.

‘In fact, purely in terms of the book-creation process, the publishing industry didn’t suffer the same amount of disruption as others during the COVID lockdowns, as many of the stages had long been fully digitised.

‘Authors and, increasingly, illustrators tend to be based all over the world nowadays, and I cannot imagine a situation where we’d have to be limited to working only with those physically based here; both in terms of online meetings, but especially in the day-to-day work on manuscripts, designs, covers, and the like.

‘There have also been big changes like the eBook, but readers seem to be rather ambivalent to this development, leading to a situation where they and print books happily coexist.

‘Moving forward, the biggest game-changer may be AI-generated text, images, and layouts. This, however, has many ethical implications for content creators, and they’re proving thornier than possibly the IT crowd had anticipated, so we’ll need to see about that. Either way, with all this in mind, I cannot imagine anyone working in the industry without great familiarity with ICT.’
Jorgen Souness has been CEO of Saint Vincent de Paul (SVP) for the past nine months. His 16 years’ experience in healthcare have shown him that this area is innately intertwined with technology.

‘At SVP, the CEO plays a pivotal role in setting the strategic direction and vision for the organisation, ensuring it aligns with the needs of the elderly residents and their families.

‘A part of how that’s done is certainly through ICT, which can be seen at SVP through the early-stage electronic health records management, the piloting of medication administration systems via electronic bracelets, the facilitation of communication platforms among staff and residents, and the provision of digital entertainment for residents’ well-being.

‘Additionally, ongoing projects involve VR-Oculus for neurological patient stimulation, and proposed initiatives like AI integration for early dementia detection, as well as online applications to aid navigation within SVP’s extensive premises.

‘Going into the future, ICT will undoubtedly continue to be ever-present in healthcare and at SVP through advances like telemedicine services, remote monitoring systems, smart home automation, virtual-reality therapy, and the use of AI to analyse data from electronic health records to identify patterns.

‘In other words, healthcare professionals, including those taking care of the elderly at SVP, will need to be proficient in ICT subjects to keep pace with the rapid advancements taking place in healthcare technology... This is so as they will be expected to, at the very least, effectively utilise electronic health records and deal with the streamlined administrative tasks that help with improved operational efficiency and accuracy.’
Valentina Lupo
is the director of Atelier del Restauro
Ltd, a company that works
on the restoration and conservation
of artworks including
sculptures and paintings. While
a lot of her and her team’s work
is done by hand, ICT is a must in
the industry.
‘ICT plays a vital role in our
company’s operations. We use
various software applications
for processing photographs,
conducting mapping and
documentation of artworks,
analysing results from 3D and
CT scans, and performing false-colour infrared imaging. We also
use software for report-writing,
editing images to restore missing
forms in artwork, environmental
monitoring, interpretation of data,
and plotting monitoring charts. It
is, therefore, an invaluable part of
the tools we use.
‘One instance of just how
important ICT has been in our
industry can be seen from a
project we conducted, which
involved the restoration of an
important religious icon that had
an eye missing. Using advanced
software, we stabilised the original
position of the missing eye on a
photograph and then used this
to carry out the reconstruction
using reversible techniques. This
approach allowed us to precisely
restore the icon while preserving
the integrity of the original
artwork.
‘Indeed, over these past 12
years, it’s become ever clearer
that a solid understanding of ICT
subjects is essential for anyone
aspiring to work in the restoration
and conservation industry, as
this enables professionals to
leverage cutting-edge tools and
techniques to achieve superior
results in preserving and restoring
cultural heritage. Additionally,
ICT literacy ensures effective
communication and collaboration
within interdisciplinary teams
both locally and abroad.’
Jonathan Caruana
is the Chief Technology Officer
(CTO) at APS Bank. Over his 25
years in the business, he has
seen technology transform
banking.
‘The banking industry has been,
and still is, rapidly digitalising, with
transactions shifting online and to
mobile channels, prompting banks
to invest in user-friendly interfaces
and omnichannel services to meet
customer expectations.
‘As CTO, therefore, my job is to
ensure our IT strategy is aligned
with the Bank’s objectives, as
well as to lead transformational
projects, optimise IT infrastructure,
ensure robust governance and
cyber resilience, foster digital
innovation, develop data analytics
capabilities, and oversee the
deployment of Business Continuity
measures.
‘Through this, the Bank can
enhance customer engagement,
operational efficiency, data
management, and cybersecurity,
as well as deliver technology-related projects and promote
innovation and collaboration
across various functions. So, in
other words, technology is vital
in providing the best and most
secure service to clients.
‘Things are still changing,
though: advances in data
analytics and AI may soon enable
personalised banking services,
while the risk of more sophisticated
cyber threats will require enhanced
security measures. On top of this,
we have open-banking initiatives
and fintech collaborations that will
continue driving innovation. In the
meantime, a focus on sustainability
will integrate Environmental,
Social, and Governance (ESG)
criteria into banking operations.
‘That’s why a good
understanding of ICT subjects is
essential in delivering exceptional
customer experiences in the digital
age, especially in technology-driven operations, cybersecurity,
data analytics, and regulatory
compliance domains. It also
prepares candidates to engage in
research and adapt to disruptive
technologies, ensuring readiness
for the evolving landscape of the
industry.’
Alan Cini
is the owner and Managing
Director of Broadwing Limited.
This Malta-based, online HR &
Recruitment Agency leverages
technological advancements
in the sector to give the best
service possible to its local and
foreign clients.
‘Broadwing helps organisations
optimise processes around
hiring, inspiring, diagnosing, evaluating,
and measuring the potential
and the competencies of
candidates, employees, managers,
and their workforce. This is
something that’s always interested
me, which is why I’ve dedicated
more than a decade to it.
‘During that time, I’ve come
to realise just how imperative
it is to digitise the recruitment
process. Indeed, technology has
almost completely revolutionised
various aspects of the process,
offering a multifaceted impact on
both efficiency and success. For
example, as a company, we aim to
offer organisations insight into the
behavioural drives that determine
how employees work and how
they can best collaborate with
others.’
‘One way we’ve done so
is by using machine-learning
technology to create a model
that can digitise decisions related
to talent acquisition, employee
recognition, progression, succession planning, and career management.
Like all things, however, such
development comes with pros
and cons.
‘On the one hand, this
automated screening process
is known to accurately identify
patterns and trends in candidate
behaviour and performance,
reduce bias, and empower
recruiters to make better informed
hiring decisions even when they
have large volumes of applications
to sift through. On the other hand,
such processes result in vast
amounts of sensitive candidate
information being processed,
stored, and analysed, meaning we
need to better safeguard that data
and maintain the users’ privacy.
‘This is all part of the progress
route, and we had to embrace
this digital transformation if we
wanted to stay at the forefront
of the industry and to continue
offering top-tier services to our
clients.’
The MGA is seeking Technical Analysts and various IT interns
MGA employees receive numerous benefits such as:
Study and exam leave
Team Events
Ongoing Training
Flexible working arrangements
Sports Leave
Remote Working
recruitment.mga@mga.org.mt | www.mga.org.mt
Stay in the loop!
Sign up for our weekly newsletter to stay
informed about the evolving digital
landscape, gain valuable insights into the
industry, and be the first to know about
career opportunities with the MCA.
www.mca.org.mt/newsletter/subscribe
Change your job
into a career!
The Faculty of Information & Communication Technology (ICT) offers a
range of specialised courses that enable students to study in advanced areas
of expertise, and improve their strategic skills to achieve career progression.
Get to know more
about our courses
um.edu.mt/ict
Our Master’s courses commencing October 2024
Master of Science (by Research)
Master of Science (Taught and Research) in the following areas:
- Artificial Intelligence
- Computer Information Systems
- Cybersecurity
- Data Science
- Digital Health
- Human Language Science and Technology
- Microelectronics and Microsystems
- Telecommunications Engineering
ALPHABETICAL INDEX

A
Agius Jerome Investigation of visual bias in generative AI 29
Agius Justin Investigating the use of augmented reality for live closed captioning 55
Apap Gabriel A private, secure and decentralised MANET intended for P2P messenger applications 82
Aquilina Lydell Towards optimizing cognitive load management for software developers in the context of digital interruptions 56
Attard Melanie Creating a Maltese-English dual-language word embedding 86
Avona Andrea Study on context-enhanced weapon detection in surveillance systems 44
Azzopardi Calvin A comparative analysis of different machine learning techniques in intrusion detection against evolving cyberthreats 45
Azzopardi Elisa Developing a protocol for human-motion capture using wearable inertial sensors 74
Azzopardi Paul Enhancing cognitive load management in coding environments through real-time eye-tracking data 57
Azzopardi Rianne Marie Snap-n-Tell: An Augmentative and Alternative Communication (AAC) app with Visual Scene Display (VSD) for empowering individuals with speech disabilities 67

B
Bartolo Matthias Integrating saliency ranking and reinforcement learning for enhanced object-detection 30
Bezzina Benjamin Investigating simulated radio signals using machine learning techniques 31
Bezzina Shaizel Victoria The Quest of the Voynich Cipher 90
Bonnici Kelsey Large language model for Maltese 68
Borg Andrea BERTu Ġurnalistiku: Intermediate pre-training of BERTu on news articles and fine-tuning for question answering using SQuAD 87
Borg Wayne A metaheuristic approach to the university course timetabling problem 46
Briffa David Towards a user-centric diet recommender 47
Bugeja Andrew Making headway: A user interface to enhance educational accessibility for children with physical disabilities 58
Buhagiar David Vegas replace: A twist on plunderphonics 69
Buhagiar Marie Implementing a real-time solution for predicting rehabilitation potential of Maltese older adults 24

C
Cachia Enriquez David Semi-supervised learning for affect modelling 48
Calleja Daniel A study to measure the effectiveness of a job-recommendation algorithm 91
Camilleri Reno Yuri User engagement in serious games 59
Cardona Luke Flexible-bus assignment and routing for carpooling fleets 32
Cauchi Etienne Detecting Earthquakes using AI 49
Chetcuti Karl Using generative AI to evaluate an academic thesis 33
Cremona Stephanie A machine learning solution for cyberbullying detection on social media 88

D
Debono Isaac Leveraging AI to improve demand forecasting in ERP systems 50
Deguara Jacob Feasibility of runtime verification with multiple runs 60
Deguara Mariah Investigating pitch-detection algorithms for improved rehearsal enhancement 70
Demicoli Kyle Brain-to-text 89
Dingli Mark Predictive modelling of sea debris around Maltese coastal waters 34
Drago Matthias Door-access-control system with facial recognition 75

F
Falzon Julian A rule-based DSL for the creation of game-play mechanics for team-based sports 92
Falzon Matteo Development of a 2D-ECC system for enhanced error correction in memory systems 83
Falzon Nicholas A Moneyball Approach to Fantasy Premier League 51
Fenech Jeremy AI in agriculture: Crop yield forecasting 52
Filipovic Damjan Optimizing architectures for MRI scan Alzheimer’s disease diagnosis 25
Formosa Eleanor Claire Adaptation of UI layout using web-usage-mining techniques 61
Formosa Jan Lawrence Ethics in artificial intelligence: A systematic review of the literature 43
Frendo Mathias Automobile computer security and communication issues 93

G
Gatt Dylan Implementation of a visual traffic-data system over FM-RDS and SDR technology 84
Gatt Emma Reinforcement learning for partially observable stochastic games 35
Genovese Gian Luca Enhancing navigation in public spaces for individuals with mobility issues through the use of digital assistive solutions 26
Grech Ema Social media activity as a tool for diagnosis 36
Guidobono Pedro A. H. VA in VWLE: Virtual assistant in virtual-world learning environment 71

K
Kenely Matthew Optimisation of saliency-driven image-content-ranking parameters 62

M
Magro Ismael The evaluation of various web-application firewalls in the presence of malicious behaviour 94
Mangani Zack Evaluating and enhancing user interface design for elderly users 63
Mifsud Matthew Design of a piezoelectrically-actuated MEMS micro-mirror 76
Mizzi Mark Implementation of hardware-accelerated LDPC decoding 85
Muscat Isaac Investigating the impact of inset emojis on images in news articles 37
Muscat Isaac Predicting Eurovision rankings from lyrics, audio features, and sentiment 53

N
Naudi Luigi Predictive crime-mapping and risk-terrain modelling using machine learning 38

P
Pirotta Luca Miguel Piezo-actuated MEMS resonator for gas detection 77

S
Saliba Jacob The visibility and effectiveness of a 3D supermarket 95
Saliba Kris Skateboard-trick recognition through an AI-based approach 39
Saliba Marjohn Vehicle-engine-management security issues: Detection and mitigation 78
Sciberras Clyde Lost-baggage rerouting in commercial airports 79
Sciberras Dean VR Comm - Communication in VR using LLM 54
Sciberras Gianluca AI-Powered Subject Preference Detection for Personalised Virtual Reality Learning Environments 64
Scicluna Dale IoT-based environmental monitoring system for use in a drone 80
Shtanko Olesia Physical rehabilitation of motor skills through an immersive VR environment 27
Spiteri Marcon Learning to rank humans’ emotional states 40

T
Theuma Julian A machine-learning-based digital twin for football training 41
Treki Amr Audio-signal processing (tone analysis) FPGA-based hardware for signal-processing applications 72

V
Vella Nicholas Visualisation of inertial data from wearable sensors 65

X
Xerri Janice Personalised course recommender in a virtual reality learning environment 42

Z
Zahra Neil Using software for the generation and analysis of music 73
Zammit Mark Environment monitoring system using a wireless sensor network 81
Zammit Timothy AR driving using mobile phones 66
Zerafa Luke An indoor vision-based fall-monitoring system for elderly people 28
CONTENTS

ARTICLES
Grammar Corrector: The ultimate online tool for the Maltese language
Alana Busuttil 98
Data security in the age of quantum computing
Ryan Debono, Maria Aquilina, Dr Inġ. Christian Galea, & Aaron Abela 100
Turning radiotherapy procedures into child’s play
Mark Agius & Gavin Schranz 102
Taking Maltese into the realm of the Chatbot
Kurt Abela, Dr Marthese Borg & Kurt Micallef 104
Reducing inefficiency in compressed air systems
Jurgen Aquilina 106
2023 Awards: an overview 108
Members of Staff 114

DIGITAL HEALTH
Implementing a real-time solution for predicting rehabilitation potential of Maltese older adults 24
Optimizing architectures for MRI scan Alzheimer’s disease diagnosis 25
Enhancing navigation in public spaces for individuals with mobility issues through the use of digital assistive solutions 26
Physical rehabilitation of motor skills through an immersive VR environment 27
An indoor vision-based fall-monitoring system for elderly people 28

DEEP LEARNING
Investigation of visual bias in generative AI 29
Integrating saliency ranking and reinforcement learning for enhanced object-detection 30
Investigating simulated radio signals using machine learning techniques 31
Flexible-bus assignment and routing for carpooling fleets 32
Using generative AI to evaluate an academic thesis 33
Predictive modelling of sea debris around Maltese coastal waters 34
Reinforcement learning for partially observable stochastic games 35
Social media activity as a tool for diagnosis 36
Investigating the impact of inset emojis on images in news articles 37
Predictive crime-mapping and risk-terrain modelling using machine learning 38
Skateboard-trick recognition through an AI-based approach 39
Learning to rank humans’ emotional states 40
A machine-learning-based digital twin for football training 41
Personalised course recommender in a virtual reality learning environment 42

AI ETHICS
Ethics in artificial intelligence: A systematic review of the literature 43
DATA SCIENCE
Study on context-enhanced weapon detection in surveillance systems 44
A comparative analysis of different machine learning techniques in intrusion detection against evolving cyberthreats 45
A metaheuristic approach to the university course timetabling problem 46
Towards a user-centric diet recommender 47
Semi-supervised learning for affect modelling 48
Detecting Earthquakes using AI 49
Leveraging AI to improve demand forecasting in ERP systems 50
A Moneyball Approach to Fantasy Premier League 51
AI in agriculture: Crop yield forecasting 52
Predicting Eurovision rankings from lyrics, audio features, and sentiment 53
VR Comm - Communication in VR using LLM 54
HUMAN COMPUTER INTERACTION
Investigating the use of augmented reality for live closed captioning 55
Towards optimizing cognitive load management for software developers in the context of digital interruptions 56
Enhancing cognitive load management in coding environments through real-time eye-tracking data 57
Making headway: A user interface to enhance educational accessibility for children with physical disabilities 58
User engagement in serious games 59
Feasibility of runtime verification with multiple runs 60
Adaptation of UI layout using web-usage-mining techniques 61
Optimisation of saliency-driven image-content-ranking parameters 62
Evaluating and enhancing user interface design for elderly users 63
AI-Powered Subject Preference Detection for Personalised Virtual Reality Learning Environments 64
Visualisation of inertial data from wearable sensors 65
AR driving using mobile phones 66
AUDIO SPEECH & LANGUAGE TECHNOLOGY
Snap-n-Tell: An Augmentative and Alternative Communication (AAC) app with Visual Scene Display (VSD) for empowering individuals with speech disabilities 67
Large language model for Maltese 68
Vegas replace: A twist on plunderphonics 69
Investigating pitch-detection algorithms for improved rehearsal enhancement 70
VA in VWLE: Virtual assistant in virtual-world learning environment 71
Audio-signal processing (tone analysis) FPGA-based hardware for signal-processing applications 72
Using software for the generation and analysis of music 73
INTERNET OF THINGS
Developing a protocol for human-motion capture using wearable inertial sensors 74
Door-access-control system with facial recognition 75
Design of a piezoelectrically-actuated MEMS micro-mirror 76
Piezo-actuated MEMS resonator for gas detection 77
Vehicle-engine-management security issues: Detection and mitigation 78
Lost-baggage rerouting in commercial airports 79
IoT-based environmental monitoring system for use in a drone 80
Environment monitoring system using a wireless sensor network 81
NETWORKS AND TELECOMMUNICATIONS
A private, secure and decentralised MANET intended for P2P messenger applications 82
Development of a 2D-ECC system for enhanced error correction in memory systems 83
Implementation of a visual traffic-data system over FM-RDS and SDR technology 84
Implementation of hardware-accelerated LDPC decoding 85
NATURAL LANGUAGE PROCESSING
Creating a Maltese-English dual-language word embedding 86
BERTu Ġurnalistiku: Intermediate pre-training of BERTu on news articles and fine-tuning for question answering using SQuAD 87
A machine learning solution for cyberbullying detection on social media 88
Brain-to-text 89
SOFTWARE ENGINEERING & WEB APPLICATIONS
The Quest of the Voynich Cipher 90
A study to measure the effectiveness of a job-recommendation algorithm 91
A rule-based DSL for the creation of gameplay mechanics for team-based sports 92
Automobile computer security and communication issues 93
The evaluation of various web-application firewalls in the presence of malicious behaviour 94
The visibility and effectiveness of a 3D supermarket 95
University of Malta • Faculty of ICT 23
DIGITAL HEALTH
Implementing a real-time solution for predicting
rehabilitation potential of Maltese older adults
MARIE BUHAGIAR SUPERVISOR: Dr Conrad Attard
COURSE: B.Sc. IT (Hons.) Software Development
In Malta, assessing an older patient to determine their
likelihood of benefiting from a geriatric inpatient
rehabilitation programme relies solely on the clinician’s
expertise and judgement, both of which can be
subjective, and there is no standard assessment
process in place.
This approach is vulnerable to inconsistent
assessments and may fail to make the most of the
wealth of data available from previous patient cases.
Additionally, with an ageing population and only one
state-operated acute hospital serving the entire
Maltese population, it is imperative to adopt a more
standardised, objective, and transparent assessment
process, improving the efficiency and quality of service
provision and identifying vulnerable older adults in the
community as early as possible.
A doctoral study has been carried out recently to
address some of the limitations in the current Maltese
geriatric rehabilitation (GR) system by developing a
standardised and systematic assessment method,
including its digitisation to ensure feasibility and
usability in practice. The digitisation aspect of this PhD
study involved a digital application (the TERESA Patient
Assessment tool) that incorporated a predictive model
to aid clinicians in assessing patients for rehabilitation
potential (RP). However, the developed predictive model
relies on a limited dataset of only 250 patient records,
which raises concerns about its reliability and ability
to generalise well when presented with new unseen
data. This is a crucial limitation because, once the
model is integrated into the application, it would
need to learn from new, valuable patient-assessment
information collected through daily use. It is as though
the model has taken a snapshot in time and does not
change after that. This limits its ability to improve its
predictions and support clinicians’ decision-making in
a reliable manner.
Figure 1. The results page showing the prediction result, clinician decision, and assessment summary

The current project has sought to address these
limitations by developing a machine learning (ML) model
that could grow more knowledgeable as new data is
fed into it, providing predictions to the clinician in real
time. This allows the model to provide more up-to-date
and relevant predictions than a static model, thereby
allowing clinicians to make better-informed decisions,
ultimately resulting in better patient outcomes. The new
model was also integrated into the existing app, through
which patients’ details can be viewed, patient data can
be entered, and prediction results from the model can
be returned and saved for future reference.
To ensure a user-centred approach, a new prototype
suggesting updates in the interface was also proposed.
This was done to integrate the new ML model and to
address feedback and recommendations from the
previous usability study conducted by the PhD student.
Additionally, ongoing communication with the PhD
student in question helped inform the iterative process
further. This critical phase in the development of the
application ensured the app’s practicality and increased
its likelihood of being accepted and used in daily clinical
practice, making it an effective long-term tool for
rehabilitation assessment.
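The incremental behaviour described above can be sketched with a toy online learner. The project’s actual model, features, and data are not specified here, so the logistic learner, the two-feature cases, and the outcome rule below are illustrative assumptions only, a minimal sketch of a model that updates with every new assessment.

```python
import math
import random

class OnlineLogisticModel:
    """Logistic-regression model updated one labelled case at a time (SGD)."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        # Probability that the patient has good rehabilitation potential.
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, y):
        # Single gradient step on one new assessment (y = 1: good outcome).
        p = self.predict_proba(x)
        err = p - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

random.seed(0)
model = OnlineLogisticModel(n_features=2)
# Hypothetical stream of (features, outcome) pairs arriving from daily use:
for _ in range(2000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    y = 1 if x[0] + x[1] > 0 else 0   # toy ground-truth rule, for illustration
    model.update(x, y)

# Probabilities for a favourable and an unfavourable hypothetical case:
print(model.predict_proba([0.8, 0.6]), model.predict_proba([-0.8, -0.6]))
```

Because each update touches only one record, the model keeps learning after deployment instead of remaining the "snapshot in time" described above.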
Optimizing architectures for MRI scan
Alzheimer’s disease diagnosis
DAMJAN FILIPOVIĆ SUPERVISOR: Prof. Inġ. Carl James Debono
COURSE: B.Sc. (Hons.) Computing Science
Alzheimer’s disease (AD) is a progressive,
neurodegenerative disease, which leads to a severely
diminished ability to memorise, think and, in the long
run, to carry out even simple tasks. In the majority of
cases, symptoms tend to appear in later stages of life,
in particular among adults over the age of 65. Severe
stages of this disease are characterised by an inability
to live without the help of others.
Although there is no known cure for AD, early
diagnosis would be crucial in slowing down or, at
least, stalling its progression, thus giving the affected
individuals enough time to plan for the future. This
highlights the importance of research in diagnostic
methods of AD and the associated technologies.
One such, relatively new, method involves the use of
artificial intelligence (AI) to support the diagnosis of AD
through magnetic resonance imaging (MRI) scans. This
involves training deep neural networks on thousands
of such images in order to identify patterns across
different artefacts in MRI scans and establish
whether the individual has AD.
Due to the volumetric (3D) nature of MRI images,
the training of such deep neural networks tends to be a
lengthy process. This poses a problem, as it significantly
slows down the development of new, potentially better,
algorithms. With the goal of helping to mitigate this
situation, this research focused on finding a way for
reducing the computations of an algorithm used in MRI
scans for diagnosing AD, while maintaining its original
accuracy. This optimisation was expected to result in a
shorter amount of time being required for training the
neural network, and to perform the prediction when
provided with an MRI scan.
Specifically, the proposed solution is based on the
observation that not all brain regions are affected by
AD. Furthermore, some of the affected regions exhibit
changes earlier than others. Consequently, the project
proposes a preprocessing step to the algorithm, which
extracts some of the earlier-affected brain structures
from the rest of the brain (see Figure 1).

Figure 1. Segmented subcortical structures

These extractions would then be used to train the neural network instead.
This extraction process is referred to as segmentation
and brings about a significant decrease in MRI scan
volume that the algorithm would have to process. The
expected result would be a considerable reduction in
training and prediction time, with a marginal decrease
in accuracy of the algorithm. The project also included
the fine-tuning of the architecture of the algorithm to
decrease further the computational load during training
and prediction.
To achieve the proposed solution, it was deemed best
to carry out the preprocessing of the MRI scans using
the FMRIB Software Library (FSL), developed by the
Oxford Centre for Functional Magnetic Resonance
Imaging of the Brain. This is
a library of MRI analysis resources, offering adequate
tooling for segmentation of the sections of interest in
the brain.
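The saving that segmentation brings can be illustrated with a toy example: cropping a scan volume to the bounding box of a segmentation mask, so the network only ever sees the voxels around the structures of interest. The actual project used FSL for segmentation; the nested-list volume and the 2×2×2 "structure" below are purely illustrative.

```python
# A 3D scan as a nested list volume[z][y][x]; the mask marks voxels that
# belong to the segmented subcortical structures of interest.
def crop_to_mask(volume, mask):
    """Return the sub-volume spanned by the mask's bounding box."""
    coords = [(z, y, x)
              for z, plane in enumerate(mask)
              for y, row in enumerate(plane)
              for x, v in enumerate(row) if v]
    zs, ys, xs = zip(*coords)
    return [[row[min(xs):max(xs) + 1]
             for row in plane[min(ys):max(ys) + 1]]
            for plane in volume[min(zs):max(zs) + 1]]

# Toy 6x6x6 scan with a 2x2x2 structure segmented near the centre:
N = 6
volume = [[[float(z + y + x) for x in range(N)] for y in range(N)]
          for z in range(N)]
mask = [[[1 if 2 <= z <= 3 and 2 <= y <= 3 and 2 <= x <= 3 else 0
          for x in range(N)] for y in range(N)] for z in range(N)]

cropped = crop_to_mask(volume, mask)
print(len(cropped), len(cropped[0]), len(cropped[0][0]))  # 2 2 2
```

Here the network’s input shrinks from 216 voxels to 8; on a real scan the reduction is less extreme but still substantial, which is what drives the shorter training and prediction times reported above.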
The results obtained from this project were very
encouraging, although further work would still be
required. Therefore, the method proposed through this
project opens up further avenues of research in the
domain of AD diagnosis using AI.
Enhancing navigation in public spaces for
individuals with mobility issues through
the use of digital assistive solutions
GIAN LUCA GENOVESE SUPERVISOR: Dr Michel Camilleri CO-SUPERVISOR: Ms Rebecca Camilleri
COURSE: B.Sc. IT (Hons.) Software Development
Navigating public spaces with limited mobility comes
with multiple challenges. Obstacles such as stairs,
narrow passages, or rough terrain require careful
navigation or outside assistance, all of which leads
to increased commuting times and diminished
independence. Research shows that these obstacles
not only limit physical mobility but also fuel social
isolation, and tend to have a negative impact on
mental well-being.
Acknowledging the diverse challenges encountered
by individuals with limited mobility, specifically within
urban and semi-urban settings, this study set out to
learn about their unique navigation needs with a view
to addressing them. The proposed artefact consists
of an efficient turn-by-turn solution for facilitating the
navigation of an individual with mobility issues when
passing through the University of Malta campus as a
test model.
Within the artefact, the user receives navigation
instructions made up of step-by-step guidance,
facilitated by text-to-speech technology, ensuring
accessibility for individuals with limited mobility. It is
expected that this artefact will provide safer and more
efficient routes, thus improving the independence and
confidence of users with mobility issues when navigating
public spaces, while also giving them a sense of inclusion.
The artefact also holds potential for future enhancements,
including: expanding its scope to encompass a wider
geographic area, refining the user experience for
enhanced usability, and incorporating features such as
live updates and route-reporting to third parties for
added support and assistance.
Figure 1. Mobility-impaired user navigating a campus
using the developed artefact
The technology behind the artefact is made up
of several components. In brief, the software uses
node-based mapping, along with Dijkstra’s algorithm
to generate the safest route for the user. Additionally,
digital mapping, possibly employing platforms like Google
Maps, would further enhance the navigation experience.
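The routing step described above can be sketched as Dijkstra’s algorithm over a node map whose edge weights encode accessibility cost rather than raw distance. The campus nodes, weights, and the stairway penalty below are hypothetical, a minimal sketch of the idea rather than the project’s actual map.

```python
import heapq

def safest_route(graph, start, goal):
    """Dijkstra over a node map whose edge weights encode accessibility
    cost (distance inflated by penalties for steps, slopes, etc.)."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbour, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                prev[neighbour] = node
                heapq.heappush(queue, (nd, neighbour))
    # Walk the predecessor chain back from the goal to recover the path.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

# Hypothetical campus nodes; the stairway edge carries a heavy penalty,
# so the ramp route wins despite being physically longer.
campus = {
    "Gate":    [("Stairs", 10.0), ("Ramp", 12.0)],
    "Stairs":  [("Library", 60.0)],   # short walk + large step penalty
    "Ramp":    [("Library", 15.0)],
    "Library": [],
}
path, cost = safest_route(campus, "Gate", "Library")
print(path, cost)  # ['Gate', 'Ramp', 'Library'] 27.0
```

Each turn-by-turn instruction then corresponds to one edge of the returned path, which is what the text-to-speech layer reads out.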
The backend of the proposed artefact was developed
using TypeScript, with the database being hosted using
MongoDB. On the frontend, Dart was utilised, enabling
the application to be run using Flutter. The artefact’s
speech-to-text functionality was achieved by using
Flutter’s Speech_to_text package.
Figure 2. Context diagram of the artefact
Physical rehabilitation of motor skills
through an immersive VR environment
OLESIA SHTANKO SUPERVISOR: Dr Colin Layfield CO-SUPERVISOR: Dr Stefan Buttigieg
COURSE: B.Sc. IT (Hons.) Software Development
Figure 1. Creating interactive objects with Unity VR
tutorial
Motor-skill rehabilitation is a treatment process that
involves a physical intervention tailored to the particular
injury, towards restoring the individual’s ability to
perform specific movements. Traditional methods of
rehabilitation require consistency for a fully successful
outcome. However, many of those undergoing
rehabilitation lack the confidence and motivation to
remain constant in their journey. One of the major
issues in traditional rehabilitation is the repetitiveness
of the exercises, which drastically undermines the
patients’ level of enthusiasm, leading to poor results.
The above situation is often encountered in the field
of physiotherapy, motivating experts to seek alternative
methods to stimulate their patients. One of the more
widely researched approaches is the incorporation of
gamification into treatments. The digital-game element
has been described as ‘enjoyable’ and ‘fun’, increasing
an individual’s sense of immersion while stimulating
a sense of determination towards achieving the final
goal.
The aim of this project was to explore what exactly
makes this approach so effective and to develop an app
along the same lines. Using the findings of research
and consultations with medical professionals, a virtual
reality (VR) application was designed to provide a set
of therapeutic exercises for upper-body motor skills.
Implementing the elements of immersion, task-oriented
training, entertainment, and motivational
achievement systems, the proposed application seeks
to incorporate the most effective digital and therapeutic
concepts into a more accessible tool for the benefit of
medical professionals and patients alike.
The application was created and designed using
the C# language and the Unity game engine, which
is a platform compatible with VR technology. Figure 1
displays the work in progress of creating objects with
which an individual could interact. The exercises are
based on movements required for activities of daily
living (ADLs), and the individual would be required
to interact with various objects in different ways as
objectives. The application includes a point-and-achievement
system, which allows the users to gauge
their performance over time.
The game element is implemented in both the actual
exercises and in the visual environment by creating a
more pleasant atmosphere that aims to excite and
encourage. In developing the app, one of the main
challenges to overcome was calibrating the accuracy
between the input from the VR movement sensors and
the visual response of the application. The best solution
was to move the development onto a faster computer
with a better graphics card. However, the accuracy
could be slightly offset if the VR device were not
wired to the computer.
Another challenge was establishing how the point
system would work, since real-life progress is variable
and not always consistent. Hence, while exercises
would focus on the basic movements, such as rotation,
grasping, and lifting, the achievement could be gauged
by the variables in the tasks that are affected by these
movements.
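One way to gauge achievement from task variables, as described above, is to normalise each logged variable against a per-session target, so that variable real-life progress still registers as points. The variables, targets, and 100-points-per-task scale below are invented for illustration; the app’s actual scoring rules are not reproduced here.

```python
# Hypothetical per-task records: each exercise logs the movement variable
# it trains (rotation angle, grasps, lifts) against a session target.
def task_score(achieved, target):
    """Fraction of the session target reached, capped at 1.0."""
    return min(1.0, achieved / target) if target else 0.0

def session_points(tasks):
    """Points for one session: 100 per fully completed task variable."""
    return round(sum(task_score(a, t) for a, t in tasks) * 100)

# e.g. wrist rotation (degrees), objects grasped, cups lifted:
session = [(45.0, 90.0), (8, 10), (3, 3)]
print(session_points(session))  # 230
```

Capping each task at its target keeps one over-performed exercise from masking neglected movements, which matches the intent of gauging progress per movement variable.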
It would be necessary for the first couple of sessions
to be carried out with the relevant medical professional,
who would guide the individual to the correct technique.
The application itself has been evaluated in collaboration
with medical professionals and, in the long term, the
objective would be to extend its use to a broader range
of patients.
An indoor vision-based fall-monitoring
system for elderly people
LUKE ZERAFA SUPERVISOR: Prof. Inġ. Edward Gatt CO-SUPERVISOR: Prof. Inġ. Carl James Debono
COURSE: B.Sc. (Hons.) Computer Engineering
The World Health Organization reports that falls are
among the most commonly recorded accidents, with
elderly individuals being the most at risk. This
is because, as a person ages, certain bodily functions
start to diminish. Among the manifestations of this are
poorer eyesight and muscle decay. The majority of falls
experienced by elderly individuals are non-fatal, but a fall
may cause chronic pains that could affect their quality
of life. This raises the need for creating a technology-based
system to cater for such situations.
The aim of the project was to devise a fall-detection
system that could utilise computer-vision techniques
for use in the home of an elderly person or within an
active-ageing community. The system would interface
with a smartphone to alert a family member or caregiver
when a fall occurs. To ensure that the detector would
meet the speed requirements to process video, it
was implemented on a field-programmable gate array
(FPGA). An FPGA is an integrated circuit that would
allow the designer to create their own digital logic
using a hardware description language. With the
appropriate design techniques, FPGAs can be fast
and can perform a number of tasks in parallel, making
them perfect for real-time applications. The main
disadvantage of the technology is the limited amount of
memory resources available. Hence, when designing the
detector, it would be crucial to strike the right balance
between accuracy and memory load.
The initial idea of the detector was to develop an
artificial intelligence (AI) model to determine — from
a video stream — whether a fall had taken place. This
would reduce the complexity of the design, as the model
would learn the patterns of a fall. The problem with this
approach is that the model would require a sufficient
memory allocation, thus also requiring an external
memory source.
In view of the above, it was deemed best to adopt
a signal-processing approach. Through this approach,
the designer would utilise digital filters and set criteria
indicating a fall. The operations required for this
approach would be simple, rendering its implementation
more feasible on an FPGA. This reduced the chip area
required, resulting in a more cost-effective system.
Another advantage of using such a method is that it
enables the designer to fully understand the internal
workings of the detector, including conditions that could
lead to false positives.
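A detector of this kind can be sketched in a few lines: a digital low-pass (moving-average) filter followed by two criteria, a sharp drop in the tracked height and a sustained low position afterwards. The thresholds and the toy height signal below are illustrative assumptions; the actual FPGA filter design is not reproduced here.

```python
def moving_average(signal, window=5):
    """Simple digital low-pass filter to suppress frame-to-frame jitter."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        out.append(sum(signal[lo:i + 1]) / (i + 1 - lo))
    return out

def detect_fall(heights, vel_thresh=0.15, low=0.4, hold=5):
    """Flag a fall when the filtered height drops sharply in one frame
    and then stays below `low` for `hold` consecutive frames."""
    h = moving_average(heights)
    for i in range(1, len(h) - hold):
        fast_drop = h[i - 1] - h[i] > vel_thresh
        stays_down = all(v < low for v in h[i + 1:i + 1 + hold])
        if fast_drop and stays_down:
            return i          # frame index at which the fall is flagged
    return None

# Toy normalised-height track: standing, a 3-frame fall, then lying still.
standing_then_fall = [1.0] * 20 + [0.7, 0.4, 0.1] + [0.1] * 20
print(detect_fall(standing_then_fall))   # frame index of the detected fall
print(detect_fall([1.0] * 40))           # None: no fall
```

Requiring the height to stay low is what filters out false positives such as bending down and standing back up, the kind of condition the signal-processing approach makes easy to inspect.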
The final stage was developing an application to
complement the implementation of the detector. The
Android app, when connected to the same wi-fi network
as the detector, listens for messages sent by the
FPGA. Once the detector perceives a fall, it sends a
message to the smartphone, which then alerts the
user through an alarm, providing the location of
the fall.
The main focus of the project was to create
an accurate and inexpensive fall detector. Further
improvements could be made to improve the
responsiveness of the system. One such feature would
be the transmission of voice from both the smartphone
and the detector. When a fall would be detected, the
system would receive and transmit voice signals to
allow a two-way communication with the individual who
would have had a fall.
Figure 1. A high-level workflow diagram of the fall
detector
DEEP LEARNING
Investigation of visual bias in generative AI
JEROME AGIUS SUPERVISOR: Dr Dylan Seychell CO-SUPERVISOR: Prof. John Abela
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
With the increasing sophistication and widespread
use of image-generation techniques, along with overall
acceptance of their use in daily life, there has been a
pressing demand for fair and non-biased systems. This
increased use of such systems, and the uncertainty
concerning their bias, served as the main motivation
for this study, which delved into the possible forms of
bias present in popular image-generation systems, in
particular Stable Diffusion, Dall-E and Midjourney.
These types of models function by converting
text prompts into images matching the description of
the provided prompts. In line with this, the selected
approach involved generating images of a set of
predetermined prompts, and analysing the said images
in terms of the gender, age, race and prominence of the
person/s depicted. The process also involved utilising
various metrics to establish the extent of the presence
and severity of the detected bias.
This task was achieved through the above-mentioned
models, alongside the following set of prompts, namely:
a picture of a doctor, a nurse, and a doctor and a nurse
facing the front, for which 3465 images were generated
spanning across all prompts and models (a sample of
the generated images is depicted in Figure 1). This was
done to provide a sufficiently accurate basis to support
the conclusions reached.
The images were then preprocessed to extract
the individual persons from the images through the
YOLOv8 person-detection model. This procedure was
crucial in facilitating accurate image annotation while
generating the required data to determine an individual’s
prominence in an image, in line with the implementation
presented in the REVISE research paper. These images
were then passed through the MTCNN face detector,
which extracted and realigned individual faces, thereby
increasing the accuracy of correct annotations. The
annotation process itself was carried out through
the use of the FairFace and DeepFace models, which
provided gender, race and age predictions. Finally, the
resultant image attributes were processed to extract a
variety of metrics that eventually led to the conclusions.
Figure 1. ‘Doctor’, ‘nurse’ and ‘doctor and nurse’
images generated by Midjourney, Dall-E and Stable
Diffusion
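One simple metric of the kind described can be sketched as the deviation of a predicted-attribute distribution from a uniform one, computed per prompt. The prompt names match the study, but the annotation counts below are invented for illustration; the study’s actual metrics are not reproduced here.

```python
from collections import Counter

def imbalance(labels, groups=("male", "female")):
    """Max deviation of the label distribution from uniform:
    0.0 = perfectly balanced, 0.5 = one of two groups entirely absent."""
    counts = Counter(labels)
    n = len(labels)
    uniform = 1.0 / len(groups)
    return max(abs(counts[g] / n - uniform) for g in groups)

# Hypothetical gender annotations for images generated from two prompts:
annotations = {
    "a picture of a doctor": ["male"] * 90 + ["female"] * 10,
    "a picture of a nurse":  ["male"] * 15 + ["female"] * 85,
}
for prompt, genders in annotations.items():
    print(prompt, round(imbalance(genders), 2))
```

Comparing such scores across models, and against the same score computed on the LAION-400M annotations, separates bias introduced by the generator from bias inherited from its training data.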
It was also necessary to obtain a full picture of the
bias present in the LAION-400M dataset associated with
the said models, and this entailed a similar process. Here,
images of doctors and nurses were extracted and
processed in the same manner as outlined above, but
with the added introduction of human annotation. This
procedure was carried out using multiple Google forms
and involved the annotation of a total of 194 images.
The purpose of this process was to outline the innate
bias present in the annotation models used (FairFace /
DeepFace) while also exposing the innate bias present
in the training data of such models. The reason for this
was that the LAION-400M dataset consisted of a subset
of the data used to train the Stable Diffusion model, and
it was the only publicly available dataset associated with
the above-mentioned generative models.
In conclusion, this study emphasises the need for
non-biased generative models while exposing the
bias present in some of the popular publicly available
generative models. It offers a simple tool with which
other similar models could be assessed.
Integrating saliency ranking and reinforcement
learning for enhanced object-detection
MATTHIAS BARTOLO SUPERVISOR: Dr Dylan Seychell CO-SUPERVISOR: Dr Josef Bajada
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
In an age where sustainability and transparency are
paramount, the importance of effective object-detection
algorithms cannot be overstated. While these algorithms
are notably fast, they lack transparency in their decision-making
process. This project has explored innovative
approaches to object detection, combining visual-attention
methods based on reinforcement learning
with saliency ranking techniques, while providing the
necessary visualisations for explicit algorithm decision-making.
By employing saliency-ranking techniques
that could emulate human visual perception, the
reinforcement learning (RL) agent was equipped with
an initial bounding-box prediction. Given this preliminary
estimate, the agent iteratively refined these bounding-box
predictions by selecting from a finite set of actions
over multiple time steps, ultimately achieving accurate
object detection.
This study also investigated the use of various
image-feature extraction methods and explored various
Deep Q-Network (DQN) architectural variations for
localisation agent training based on deep reinforcement
learning. Additionally, it focused on optimising the
pipeline at every juncture, prioritising lightweight and
faster models. Moreover, the proposed system sought
to integrate several components into this pipeline,
including object classification. This enhancement
would allow for the classification of detected objects,
a capability absent in previous reinforcement-learning
approaches.
Figure 1. Architecture integrating reinforcement
learning and saliency ranking for object detection
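The iterative refinement loop can be illustrated with a minimal sketch: a finite action set nudges a bounding box over multiple time steps until no action improves the fit. Here a greedy choice against the ground-truth IoU stands in for the trained DQN policy, purely to show the action space and stopping condition; the boxes, step size, and action names are invented.

```python
def iou(a, b):
    """Intersection-over-union of two [x, y, w, h] boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

STEP = 4
ACTIONS = {  # finite set of box adjustments, as in the RL formulation
    "left":     lambda b: [b[0] - STEP, b[1], b[2], b[3]],
    "right":    lambda b: [b[0] + STEP, b[1], b[2], b[3]],
    "up":       lambda b: [b[0], b[1] - STEP, b[2], b[3]],
    "down":     lambda b: [b[0], b[1] + STEP, b[2], b[3]],
    "wider":    lambda b: [b[0], b[1], b[2] + STEP, b[3]],
    "narrower": lambda b: [b[0], b[1], max(STEP, b[2] - STEP), b[3]],
    "taller":   lambda b: [b[0], b[1], b[2], b[3] + STEP],
    "shorter":  lambda b: [b[0], b[1], b[2], max(STEP, b[3] - STEP)],
}

def refine(box, target, steps=50):
    """Greedily apply the best action each step; log actions for display."""
    log = []
    for _ in range(steps):
        name, nxt = max(((n, f(box)) for n, f in ACTIONS.items()),
                        key=lambda kv: iou(kv[1], target))
        if iou(nxt, target) <= iou(box, target):
            break  # no action improves the fit: stop refining
        box = nxt
        log.append(name)
    return box, log

initial = [10, 10, 40, 40]   # seed box from the saliency ranking
target = [30, 22, 60, 48]    # ground-truth object
final, actions = refine(initial, target)
print(final, actions)
```

The logged action names correspond to the per-step visualisations described above, which is what makes the algorithm’s decisions inspectable.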
Object detection tends to be applied to high-risk
domains, such as medical-image diagnosis and
security surveillance. Hence, it would be crucial for
these systems to ensure transparency. Compared to
previous methods, the proposed approach also includes
multiple configurable real-time visualisations. These
visualisations offer users a clear view of the current
bounding-box coordinates and the types of actions
being performed, thus being conducive to a more
intuitive understanding of algorithmic decisions.
The approach described above was built with the
aim of fostering trust and confidence, particularly in the
implementation of artificial intelligence (AI) techniques
in critical areas, while contributing to ongoing research
in the field of AI.
Figure 2. An entire action-log display
Investigating simulated radio signals
using machine learning techniques
BENJAMIN BEZZINA SUPERVISOR: Prof. Adrian Muscat
COURSE: B.Sc. (Hons.) Computing Science
Pulsars are rotating neutron stars, formed as remnants
from a supernova explosion, that emit beams of radiation
from their poles. In view of these characteristics,
pulsars periodically send signals that could be detected
by satellites or radio telescopes. These signals are
particularly relevant to astronomers.
As signals travel through space, they pick up
any surrounding noise, making them less clear.
Consequently, the vast volume of data reaching the
satellites or telescopes would be too daunting to be
inspected manually to determine whether the features
of certain signals have originated from a pulsar. This
poses a problem, especially when the signal-to-noise
ratio (SNR) would be low. Therefore, using machine
learning (ML) techniques, as opposed to manual
inspection, would reduce the human labour required,
thus increasing the level of efficiency.
Many studies have been carried out on this topic,
using deep learning methods such as convolutional
neural networks (CNNs). Even so, a common
observation in the consulted research was that the
datasets for training the models had all been composed
of mixed amounts of noise and interference. Therefore,
the main aim of this study was to investigate the impact
of noise on the ML models.
The set objective was achieved by creating and
using three simulated datasets, each characterised by
a different SNR level, namely: low, medium, and high.
These datasets were used to train different ML models,
created using various techniques. By comparing the
results, it was possible to determine whether, and to
what extent, noise affected the models. The results
obtained offer an insight as to which type of model
would be the most suitable, depending on the SNR.
The main conclusion reached was that different models
would be better for different levels of SNRs. On the
basis of this premise, future research in this area could
focus on whether the best model to be used for certain
data could be predetermined according to the noise
level extracted from the said data.
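Simulated datasets of this kind can be produced by scaling white Gaussian noise to a target SNR before adding it to a clean periodic signal. The pulse shape and the three SNR levels below are illustrative assumptions, not the study’s actual simulation parameters.

```python
import math
import random

def pulse_train(n, period, width, amp=1.0):
    """Noise-free periodic pulse signal, a crude stand-in for a pulsar."""
    return [amp if (i % period) < width else 0.0 for i in range(n)]

def add_noise(signal, snr_db, rng):
    """Add white Gaussian noise scaled to hit a target SNR in dB."""
    p_signal = sum(s * s for s in signal) / len(signal)
    p_noise = p_signal / (10 ** (snr_db / 10))   # SNR = P_signal / P_noise
    sigma = math.sqrt(p_noise)
    return [s + rng.gauss(0.0, sigma) for s in signal]

rng = random.Random(42)
clean = pulse_train(n=4096, period=128, width=8)
# Three datasets at low, medium, and high SNR, as in the study design:
datasets = {snr: add_noise(clean, snr, rng) for snr in (-5, 5, 15)}
```

Training the same models on each dataset and comparing their scores isolates the effect of noise, since everything except the noise power is held fixed.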
Figure 1. Pulses from the first discovered pulsar, PSR 1919+21
Figure 2. A pulsar
Flexible-bus assignment and
routing for carpooling fleets
LUKE CARDONA SUPERVISOR: Dr Josef Bajada
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
Public transport systems traditionally follow a fixed
route and have a fixed schedule. Recently, there has
been a rise in transport infrastructure that caters to the
demand for transport vehicles known as flexible buses.
Flexible buses are usually smaller in size and capacity
than regular buses, and do not follow a fixed schedule.
Instead, users typically request a trip through an online
booking system. A local example is the Tallinja On
Demand service, which currently covers only a specific
area of Malta.
Due to the fast-growing popularity of flexible buses,
there is still a lack of research and suitable algorithms
to schedule bus routes efficiently in real time. Hence,
this research largely consisted of evaluating traditional
algorithms for scheduling travelling-salesman-style
problems against newer algorithms proposed in more
recent research. The different algorithms were applied
to the Maltese Islands and assessed using existing bus
stops as possible pick-up or drop-off points for users
of flexible buses, mirroring the operations of Tallinja
On Demand in Malta. When presented with an efficient
and convenient alternative to fixed-route buses,
passengers would be motivated to use public transport
more frequently.
The base algorithm used in this project was Tabu Search,
which is a search algorithm that starts from an initial
solution and explores similar solutions to find a better
solution. It keeps track of a list known as the Tabu List
to avoid visiting the same solution multiple times. Since
Tabu Search is a search algorithm, when applied to
complex scenarios, it can take some time to arrive at
a desirable solution. A longer time lapse could render
Tabu Search inefficient at handling incoming requests
that would require a real-time response.
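The mechanics described above can be illustrated with a minimal sketch (not the project's implementation; the stop ordering, neighbourhood, and cost function are all invented for the example):

```python
import random

random.seed(0)

def tabu_search(initial, cost, neighbours, iterations=200, tabu_size=20):
    """Minimal Tabu Search: repeatedly move to the best admissible
    neighbour, keeping a fixed-size Tabu List of recently visited
    solutions to avoid revisiting them."""
    current = best = initial
    tabu = [initial]
    for _ in range(iterations):
        candidates = [n for n in neighbours(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)   # best admissible neighbour
        if cost(current) < cost(best):
            best = current
        tabu.append(current)
        if len(tabu) > tabu_size:
            tabu.pop(0)                        # forget the oldest entry
    return best

# Toy routing instance: an ordering of 6 stops; neighbours swap adjacent
# stops, and the cost is the total "distance" between consecutive stops.
def swap_neighbours(route):
    out = []
    for i in range(len(route) - 1):
        r = list(route)
        r[i], r[i + 1] = r[i + 1], r[i]
        out.append(tuple(r))
    return out

stops = tuple(random.sample(range(6), 6))
cost = lambda r: sum(abs(a - b) for a, b in zip(r, r[1:]))
best = tabu_search(stops, cost, swap_neighbours)
```

As the abstract notes, each iteration must scan a whole neighbourhood, which is why the approach can become too slow for real-time request handling.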
Figure 1. One of the vehicles used for the Tallinja On
Demand flexible-bus service
Reinforcement learning involves training an agent
that interacts with an environment. In this project,
the environment was composed of the current bus
positions, their routes, and the current requests. The
agents got rewards according to the actions taken
in a specific state of the environment. Throughout
multiple iterations of an environment, the agent learnt
to maximise the rewards achieved. At the end of the
training process a model, also known as a policy, was
created for the purpose of real-time scheduling for the
flexible buses.
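The reward-driven training loop can be sketched with a toy tabular Q-learning example (a simplification chosen for illustration; the project's actual state, actions, and rewards are far richer): the agent learns which of two buses should serve an incoming request, with the nearer bus earning the larger reward.

```python
import random

random.seed(0)

Q = {}                       # (state, action) -> learned value estimate
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

def step(state, action):
    """Toy environment: state is the index of the bus nearest the
    current request; matching it earns the full reward."""
    reward = 1.0 if action == state else 0.2
    next_state = random.randint(0, 1)          # next request is random
    return next_state, reward

for episode in range(500):
    state = random.randint(0, 1)
    for _ in range(10):
        if random.random() < EPS:              # explore
            action = random.randint(0, 1)
        else:                                  # exploit current estimates
            action = max((0, 1), key=lambda a: Q.get((state, a), 0.0))
        next_state, reward = step(state, action)
        best_next = max(Q.get((next_state, a), 0.0) for a in (0, 1))
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
        state = next_state

# After training, the greedy policy routes each request to the nearer bus.
policy = {s: max((0, 1), key=lambda a: Q.get((s, a), 0.0)) for s in (0, 1)}
```

The same principle scales up: the trained policy can then be queried in real time, which is what makes the learning-based approach attractive compared to re-running a search for every request.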
The different algorithms were evaluated in terms
of the speed at which they could provide feasible
solutions. They were also assessed according to the
metrics related to the solution’s efficiency, such as the
total distance travelled by the fleet of the flexible buses,
the distance each bus travelled without any passengers,
and the average time it took the algorithm to fulfil the
request.
32
Faculty of Information and Communication Technology Final Year Projects 2024
Using generative AI to evaluate
an academic thesis
KARL CHETCUTI SUPERVISOR: Prof. Alexiei Dingli
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
Academic theses are long, complex documents that
demand significant time, focus, and commitment from
their authors. In view of their scale, these projects could
result in errors that might go unnoticed. Therefore, the
aim of this project was to create a writing-assistant
tool capable of providing feedback on academic papers,
specifically undergraduate theses by students from the
Faculty of Information & Communications Technology at
the University of Malta.
Generative AI was employed to address this
problem. Mixtral 8x7B, a large language model (LLM)
released by Mistral AI in late 2023, was chosen for its
high performance rates and unique mixture-of-experts
machine learning architecture, making it ideal for
evaluating academic works. To ensure that the model
could deliver a consistent, thorough, and unbiased
evaluation of a thesis, Mixtral needed to be trained on
a dataset of relevant information. This consisted of a
set of guidelines and a corpus of undergraduate theses
from previous years.
One notable challenge encountered during this project
was a lack of sufficient training data for Mixtral, as it
was not possible to obtain the corresponding grades for
the corpus of theses used. This was tackled by creating
a synthetic dataset using award-winning theses as the
best data. Additionally, a set of customised prompts
for Mixtral and an evaluation scheme were devised to
extract the necessary feedback from the model and
generate an estimated grade for a thesis.
The model underwent training using a technique
called retrieval-augmented generation (RAG). This involved
converting the training documents into numerical
representations of themselves, called embeddings, and
indexing them into a vector store. This enabled the model
to retrieve the relevant documents from its database
when evaluating a new thesis, by comparing the input
document against the indexed vectors. This approach
also laid the foundation for future enhancements of the
evaluator, which may potentially allow it to cater to a
broader range of fields of study.
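The embed-index-retrieve cycle described above can be illustrated with a deliberately tiny stand-in (not the project's stack: real embeddings come from a neural model and the vector store is a dedicated index; here a bag-of-words count and cosine similarity play those roles):

```python
import math
from collections import Counter

def embed(text):
    """Stand-in embedding: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

store = {}                       # doc id -> embedding (the "vector store")

def index(doc_id, text):
    store[doc_id] = embed(text)

def retrieve(query, k=1):
    """Return the k stored documents closest to the query embedding."""
    q = embed(query)
    ranked = sorted(store, key=lambda d: cosine(q, store[d]), reverse=True)
    return ranked[:k]

index("guidelines", "thesis structure citation style evaluation criteria")
index("ml-thesis", "neural network training dataset evaluation accuracy")
hits = retrieve("how should citations be styled in the thesis")
```

At evaluation time the input thesis is embedded the same way, and the most similar indexed documents are fed to the LLM as context.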
The outcome of this project was an interface that
could accept a thesis as input and return relevant
feedback to the user, in line with the Faculty of ICT’s
undergraduate guidelines, as well as an estimated
grade. Evaluation was performed on the trained model
by testing it on new theses, and comparing the final
version with the original untrained version, to determine
whether the training data had a significant impact on the
outcome. Feedback was also gathered from participants
who tested the evaluator with their respective theses.
DEEP LEARNING
Figure 1. A visualisation of the evaluation process
University of Malta • Faculty of ICT 33
Predictive modelling of sea debris
around Maltese coastal waters
MARK DINGLI SUPERVISOR: Dr Kristian Guillaumier
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
DEEP LEARNING
Sea-surface debris presents numerous ecological
and environmental challenges that negatively affect
both marine ecosystems and human activities. This is
exacerbated by the absence of an effective system that
could predict its movement, making it more challenging
to address this issue effectively. Unfortunately, this is
also the case with Maltese coastal waters.
The primary objective of this project was to
create a forecasting system that could predict
dispersion patterns of sea-surface debris in Maltese
coastal waters. The proposed pipeline uses historical
data about sea-surface currents to predict future
conditions, while also having the ability to visualise
the movement of debris. This would be conducive to
a better understanding of such patterns, thus enabling
taking more informed decisions about our environment
and our effect on it.
To achieve this, a comprehensive machine learning
and physics-based pipeline was developed. This
pipeline would first select a specific area of interest
in the Maltese coastal waters, as seen in Figure 1.
The next step was to preprocess the historical data
concerning sea-surface currents; for each point within
this selected area, both long short-term memory
(LSTM) and gated recurrent unit (GRU) models were
trained to predict the ensuing 24 hours of sea-surface
currents. A comparison of the two models was carried
out, to identify the more effective one.
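The preprocessing step that feeds both models can be sketched as a sliding-window split of the hourly current series (the window lengths and the stand-in data here are invented; only the 24-hour horizon comes from the text):

```python
import numpy as np

def make_windows(series, history=48, horizon=24):
    """Slice an hourly sea-surface-current series into (input, target)
    pairs: `history` past hours in, the next `horizon` hours out — the
    shape both the LSTM and GRU models would be trained on."""
    X, y = [], []
    for t in range(len(series) - history - horizon + 1):
        X.append(series[t:t + history])
        y.append(series[t + history:t + history + horizon])
    return np.array(X), np.array(y)

hours = np.arange(100, dtype=float)       # stand-in for current speeds
X, y = make_windows(hours)
```

Each grid point in the selected area gets its own set of such windows, and the two recurrent architectures are then compared on the same data.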
These predictions were fed into a Lagrangian model
to simulate and visualise the movement of surface
debris. Figure 2 offers a sample visualisation, showing
the initial and final locations of surface debris after 24
hours.
While several observations and challenges
were encountered (most notably concerning data
preprocessing and predictions of sea-surface
currents), the results obtained to date were very
promising. Moreover, a number of improvements could
be implemented to render the proposed system even
more robust and effective.
Figure 1. Area boundaries for the simulation
Figure 2. Debris locations before and after the 24-hour time frame
Reinforcement learning for partially
observable stochastic games
EMMA GATT SUPERVISOR: Dr Josef Bajada
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
Inscryption is what is known as a rogue-like deck-builder
video game. Rogue-like games task the player
with navigating through a procedurally generated
labyrinth environment, while battling various enemies
and seeking to grow stronger. Should the player fail to
do so and die, they would be forced to start the game
from the very beginning.
Players get stronger in Inscryption by improving their
card deck, so this is where the deck-building aspect
comes into play. Through different types of events, the
player would be given different opportunities to improve
their deck; for example, by adding stronger cards or
removing weak cards from their deck.
Battles in this game are turn-based, where the
player always goes first. The player’s turn starts with the
draw phase, followed by the play phase and finally
the attack phase. During the draw phase the player has
the option to either draw one card from the side deck
or from the main deck. Each card can be defined by its
cost, attack, health and, if applicable, a special move
called a sigil. The accompanying image displays three
different cards, showing the cost of the card (top right),
the attack (bottom left) and health (bottom right). The
card’s sigil can be seen at the bottom centre of two of
the cards.
The play phase is when the player can play the cards
on the board. Once the player has played all the cards they
want, the next phase begins. This is the attack phase,
when all the cards on the player’s side of the board attack
the enemy; this is then followed by the enemy’s turn.
In this project an artificial intelligence (AI) model
was trained to play and win these battles using
reinforcement learning (RL). The method followed was
presenting the AI model with a variety of battles and
decks, and depending on how it played the game, it was
either given a positive or a negative reward, with the
biggest positive reward being earned for winning the
game. This is called reward shaping: by giving
small intermediate rewards, the model converges
more quickly.
A particular challenge encountered in this project
was the implementation of the action space. This refers
to every possible action that the model could take. In
this case, the action space could be defined as either
drawing from the main or side deck, or as every
possible combination of card played, payment used,
and tile played. This definition becomes a 38,720-sized
action space and requires some sort of action masking,
so that the model would not waste time trying to take
impossible actions. This issue was tackled by storing
most of the information in the observation space, thus
reducing the overall size of the action space to 1,047.
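The action-masking idea mentioned above can be shown in a few lines (the action count and scores are invented for the sketch; the project's mask is derived from the game state):

```python
import numpy as np

def masked_argmax(logits, mask):
    """Pick the best action among the valid ones only: invalid entries
    are driven to -inf so the model never selects an impossible action."""
    masked = np.where(mask, logits, -np.inf)
    return int(np.argmax(masked))

# 5 toy actions; only actions 1 and 3 are currently legal. Note that the
# highest-scoring action (index 2) is masked out.
logits = np.array([2.0, 0.5, 3.0, 1.0, -1.0])
mask = np.array([False, True, False, True, False])
action = masked_argmax(logits, mask)
```

Without the mask, the agent would waste training time being penalised for attempting the illegal but high-scoring action.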
The game was recreated using Python and a
custom-built RL environment was created using
Gymnasium, the successor to OpenAI’s Gym. This learning environment was
then wrapped around the recreation and looped through
many different randomly generated decks and battles
to train the model thoroughly.
DEEP LEARNING
Figure 1. Three different cards from the decks used in Inscryption
Social media activity as a tool for diagnosis
EMA GRECH SUPERVISOR: Prof. Lalit Garg
COURSE: B.Sc. IT (Hons.) Software Development
DEEP LEARNING
With every passing year, the number of persons
suffering from general depressive disorder (GDD) and
generalised anxiety disorder (GAD) continues to grow.
This trend is especially prominent among the younger
generations, and has been linked to prolonged use
of social media. Interestingly, this excessive scrolling
of posts on these platforms also makes them large
repositories of information. Consequently, there arose
the question as to whether social media posts, both
visual and textual, could be of use when it comes to
diagnosing the aforementioned mental disorders.
This study relied on the posting history of public
figures and celebrities, who had openly discussed their
past struggles with mental health online, in order to
avoid any ethical issues, including risking breaching data
privacy laws and exploiting the vulnerabilities of mental
health patients. Any concern or doubt experienced in
this respect was discussed with a professional in the
field.
An algorithm was devised to filter the sourced
information to pinpoint any evident similarities, and to
save the information for later use. Various attributes
were compared. Using Instagram as the main source,
images were analysed according to the colours present
(according to the distributions of red, blue and green),
as well as the objects present in said images and facial
expressions, if present.
The text from the posts was first cleaned. This
means that frequently used words, such as ‘the’, ‘of’,
etc., were removed along with the punctuation; the
remaining words were reduced to their root forms.
These are referred to as tokens, and were analysed in
terms of frequency both individually and as phrases
in different combinations (n-grams). Already existing
sentiment scores that were derived from other studies
were used to allocate scores to the emojis and words
used.
The resulting similarities were used to create a
neural network that would filter posts and compare
their contents to the previous results obtained. Any
similarities encountered would be categorised thus:
‘no similarities’, ‘high similarity for depression’, ‘high
similarity for anxiety’, or ‘high similarity for both’. During
the training phase, the algorithm improved itself by
comparing its output with the expected output and then
applying a calculation to fix the network accordingly.
This step was repeated until the results obtained
were satisfactory. The algorithm was tested through
cross-validation, where the data collected initially was
segmented into training data and testing data a number
of times, ending by comparing the findings each time.
This study has relied on prolific social media
users to obtain sufficiently reliable results. While not
fully accurate, the developed algorithm has remained
consistent with the original aim of the study, which
was to determine if the said data could be used for
aiding in diagnosis of GDD and GAD. Moreover, it would
be important to bear in mind that such a tool should
always be used by mental-health professionals, with
the consent of their patients.
Figure 1. An overview of the text-cleaning process
Investigating the impact of inset emojis
on images in news articles
ISAAC MUSCAT SUPERVISOR: Dr Dylan Seychell
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
This project focuses on the use of inset emojis on
images accompanying online news articles. Inset
emojis are standard emojis, such as a smiley face,
placed over a background image. In many news
articles, journalists opt to use images to support their
text and to attract readers. However, misuse of these
images could lead to misinformation, portraying the
events covered by the article inaccurately. Additionally,
inset emojis could influence the user’s interpretation
of the background image – or the news item itself. For
example, positioning an angry emoji over a picture of a
restaurant could evoke a sense of dissatisfaction with
the establishment.
The rapid evolution of the media landscape,
marked by the increase in information dissemination,
led to advancements in automated bias-detection
methods. While existing tools primarily target textual
bias, a few projects have sought to also tackle bias
in images. However, challenges persist, not least
in connection with inset images, in view of their
unique characteristics. This necessitated a specialised
approach encompassing the nuances of image-based
media. Moreover, interdisciplinary collaboration among
researchers in fields including artificial intelligence,
media studies, and communication research was vital
for a holistic understanding. As technology progresses,
more research is required to refine bias-detection
methods and to analyse biases across different media
formats as comprehensively as possible.
To reduce the misuse of inset emojis in images,
this project developed a dataset of emojis inset onto
background images and also trained models to locate
these emojis in each image. Furthermore, an evaluation
involving actual users was carried out, to gain a better
understanding of how these emojis tended to influence
the reader’s overall perception of the image through
quantitative and qualitative research.
The implementation of an adequate dataset required
the use of two image datasets, one for background
images and one consisting of emojis. When creating
the dataset, the emojis were transformed randomly
in terms of rotation, scaling, and position to create a
diverse set of examples. One image could feature a
laughing emoji at the centre of the background image,
while another could feature three different types of
emojis, each slightly covering the other at the corner
of the background image. This diversity in the dataset
would render the models trained on this data more
robust and accurate in detecting such inset emojis.
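The dataset-generation step can be sketched in miniature (a grayscale toy with random position and scale only; the project also applies rotation, and its images, sizes, and emoji sets are of course different):

```python
import numpy as np

rng = np.random.default_rng(0)

def inset_emoji(background, emoji, scale, top, left):
    """Paste a nearest-neighbour-rescaled emoji patch onto a background
    image; return the composite plus its bounding box — one labelled
    training example for the emoji-detection model."""
    h = max(1, int(emoji.shape[0] * scale))
    w = max(1, int(emoji.shape[1] * scale))
    rows = np.arange(h) * emoji.shape[0] // h      # resampled row indices
    cols = np.arange(w) * emoji.shape[1] // w      # resampled col indices
    patch = emoji[np.ix_(rows, cols)]
    out = background.copy()
    out[top:top + h, left:left + w] = patch
    return out, (top, left, h, w)

bg = np.zeros((64, 64), dtype=np.uint8)            # stand-in news image
emoji = np.full((16, 16), 255, dtype=np.uint8)     # stand-in emoji
scale = rng.uniform(0.5, 2.0)
top, left = rng.integers(0, 32, size=2)
img, box = inset_emoji(bg, emoji, scale, int(top), int(left))
```

Because the bounding box is known at generation time, every composite image comes with a free ground-truth label for training the detector.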
The user-evaluation stage consisted of both a
survey and an interview. The survey addressed the
public’s knowledge regarding inset emojis and covered
a sequence of non-inset and inset images to analyse
the effect the emojis had on the viewer’s overall opinion
of the image. Moreover, an interview was conducted
with a professional in the field to gain insight into the
process of image selection and the reasoning behind
such a process.
DEEP LEARNING
Figure 1. Examples of inset emojis detected by the trained model
Predictive crime-mapping and risk-terrain
modelling using machine learning
DEEP LEARNING
LUIGI NAUDI SUPERVISOR: Dr Michel Camilleri CO-SUPERVISOR: Ms Rebecca Camilleri
COURSE: B.Sc. IT (Hons.) Computing and Business
Crime has no boundaries, casting a shadow over societies
worldwide. From petty theft to aggravated assault,
its impact disrupts communities, economies, and the
very fabric of society. Preventing crime will always be
a highly challenging task. However, the application of
techniques and technologies in spatial data modelling,
machine learning (ML) and geographic information
systems, would enable law enforcement entities to
allocate their resources as efficiently as possible. This
could be achieved through the use of visualisation tools
and predictive technology in particular.
This project set out to explore the possibility of
modelling and predicting crime using ML techniques.
With the use of detailed historical crime data containing
spatial and temporal attributes, crime patterns and
trends could be captured and learned by AI models. The
expectation was that ML models would demonstrate
superior performance and prediction power in identifying
and forecasting crime hotspots and trends, when
compared to conventional data-analysis methods.
Adopting a systematic methodology, the study
used a detailed crime dataset concerning Los Angeles,
alongside spatial data from other publicly available
sources. The initial phase involved data cleaning and
aggregation procedures, such as grouping similar crime
categories and correcting inconsistencies in
spatial data. This entailed addressing instances where
data points were misclassified in one geographic area,
despite their spatial positioning in another. This helped
ensure the integrity and accuracy of the dataset.
Some noteworthy discoveries in the dataset included
unclassified crimes, as well as a significant amount of
missing data in 2014 for certain areas. These insights
were discovered through data analysis and statistical
plotting.
Of particular significance were the graphical
representations created as part of the project,
illustrating the temporal evolution of the top 10 crimes
across different regions of Los Angeles over the study
period. Employing linear regression (LR) analysis,
trend lines were fitted to the data to facilitate monthly
predictions for 2019. Central to the study’s objectives
was the comparison of prediction accuracy between
linear regression and ML methodologies, which are
adept at capturing intricate patterns within the data.
The study hypothesises that relying solely on LR may fall
short of capturing nuanced crime patterns, potentially
leading to inferior predictive outcomes, when compared
to AI models.
Figure 1. Heat map representing crimes by area, as recorded in Los Angeles in 2019
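The baseline trend-fitting step can be sketched with a least-squares line over toy monthly counts (the numbers are invented; the project fits one such trend per crime category and region):

```python
import numpy as np

# Toy monthly counts for one crime category over 2015-2018 (48 months),
# following a perfectly linear upward trend for the sake of the sketch.
months = np.arange(48)
counts = 100.0 + 2.0 * months

# Fit a degree-1 trend line, then extrapolate to the 12 months of 2019.
slope, intercept = np.polyfit(months, counts, 1)
pred_2019 = intercept + slope * np.arange(48, 60)
```

Comparing these extrapolations against ML regressors trained on the same aggregated data is exactly the accuracy comparison at the centre of the study.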
To test this hypothesis, a group of ML models were
implemented using Python AI libraries and trained on
the aggregated dataset used for data analysis and LR.
Specifically designed for regression tasks, the models
were tasked with predicting the monthly total count
of criminal activity across various crime categories
within each region of Los Angeles for 2019. The project
also included the development of a web-based GIS
for visualising criminal hotspots from the models’
predictions, as well as querying capabilities.
The utilisation of QGIS and PostGIS technologies
played a pivotal role in this project. QGIS proved
indispensable for various tasks, including data analysis,
heatmap creation, and spatial queries. Additionally,
it facilitated a number of data-cleaning processes,
enhancing the accuracy in the representation of spatial
data. PostgreSQL served as the project’s database-management
system, handling data storage, querying,
and preprocessing. Furthermore, the spatial capabilities,
augmented by the PostGIS extension, empowered the
system with advanced spatial-analysis functionalities.
Skateboard-trick recognition through
an AI-based approach
KRIS SALIBA SUPERVISOR: Dr Joseph Bonello
COURSE: B.Sc. IT (Hons.) Software Development
In recent years, skateboarding has had a significant
revival, garnering widespread popularity, and asserting
its presence in mainstream culture and digital media.
Platforms such as YouTube, Instagram and TikTok have
become social hubs where skateboarders worldwide
showcase their skills.
This rise in skateboarding content has created a
demand for on-screen trick identification, enabling
inexperienced viewers to identify tricks performed by
skateboarders in videos through a digital overlay that
would label each manoeuvre. Traditionally, this required
labelling each trick in videos manually — a process
that is not only time-consuming but also susceptible
to errors. Building a system capable of identifying
skateboard tricks in real time could save time and
provide real-time benefits for live-streamed content,
such as skateboarding events.
This project employed deep learning strategies and
preprocessing techniques to identify which approaches
would be the most effective in improving accuracy and
robustness in classifying skateboard tricks. It explored
techniques such as: optical flow for frame extraction;
data augmentation for dataset enhancement; and pre-trained
models (ResNet50, VGG) for feature extraction.
In addition, the project examined the use of long short-term
memory (LSTM) and bidirectional LSTM (Bi-LSTM)
networks to understand the sequence of movements,
so as to compare their effectiveness in capturing the
complex dynamics of skateboard tricks.
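The frame-extraction idea can be illustrated with a simplified stand-in (mean absolute frame difference as a cheap proxy for the optical-flow magnitude the project uses; the frames and threshold here are invented):

```python
import numpy as np

def keyframes(frames, threshold=10.0):
    """Keep only frames showing enough motion relative to the last kept
    frame, so the sequence models see the informative parts of a trick."""
    kept = [0]                       # always keep the first frame
    for i in range(1, len(frames)):
        motion = np.abs(frames[i].astype(float) - frames[kept[-1]]).mean()
        if motion > threshold:
            kept.append(i)
    return kept

# Three identical still frames followed by one with a moving bright region.
still = np.zeros((32, 32), dtype=np.uint8)
moving = still.copy()
moving[8:24, 8:24] = 255
idx = keyframes([still, still, still, moving])
```

Dropping low-motion frames keeps the LSTM input sequences short and focused on the trick itself.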
For automating on-screen trick identification, the
Python programming language was chosen for its wide
use in machine learning and also due to its extensive
selection of libraries, such as TensorFlow and Keras,
which served as high-level interfaces for building
and training the models. This task posed significant
challenges due to the variable nature of skateboarding
videos, including diverse camera angles, lighting
conditions and complex skateboard movements.
Notwithstanding the various challenges encountered,
this research successfully developed a system that
offers a promising level of accuracy in real-time trick
identification, paving the way for future advancements
in combining artificial intelligence and skateboarding.
DEEP LEARNING
Figure 1. Optical-flow representation of a kickflip sequence
Figure 2. Proposed live broadcast through a trick-name overlay
Learning to rank humans’ emotional states
MARCON SPITERI SUPERVISOR: Dr Konstantinos Makantasis
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
DEEP LEARNING
Recognising and understanding human emotions is a
challenging task that influences how individuals interact
and communicate with each other. Moreover, analysing
these emotional states through technology could be
all the more challenging, primarily because emotions
are inherently subjective. This study has employed an
exploratory approach, utilising ranking algorithms to
mitigate subjectivity bias in emotion recognition.
This project has regarded emotions as ordinal
variables, rather than fixed categories. The dominant
approach towards emotion modelling treats emotions as
nominal variables, thus limiting the understanding of the
complex range of human emotions by oversimplifying
them into distinct categories, e.g., ‘happy’, ‘sad’ or
‘angry’. However, this approach tends to overlook the
fluidity and subtle variations of emotional states.
This study embraces an innovative strategy that
incorporates subjectivity as an inherent component of
emotions. The purpose is to evaluate the concept of
ordinal emotion modelling by classifying emotion labels
as ordinal variables, based on psychological theories and
evidence from various disciplines, such as neuroscience.
In order to achieve the above, five distinct ranking
algorithms were designed and subsequently evaluated
on their performance. These algorithms were: random
forest preference learning, ordinal logistic regression,
ordinal neural networks, RankNet, and LambdaMART.
The results generated by the above-mentioned
algorithms produced lists of emotional states that were
ranked according to their levels of arousal and valence.
In other words, they ranked emotions according to
how intense they were perceived and how pleasant
or unpleasant they appeared.
Figure 1. Mapping emotions on an arousal-valence grid
The performance of
these ranking algorithms was evaluated against two
publicly available datasets, AGAIN and RECOLA. For the
evaluation framework, statistical measures, such as
Kendall’s tau coefficient and the Pearson correlation
coefficient, were used to evaluate the quality of the
predicted ranks.
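The rank-quality measure named above can be computed directly (a plain tau-a implementation without tie handling, written out here for illustration; the ranks are invented):

```python
from itertools import combinations

def kendall_tau(a, b):
    """Kendall's tau: the proportion of concordant pairs minus the
    proportion of discordant pairs, used to score how well a predicted
    ranking agrees with the ground-truth ranking."""
    pairs = list(combinations(range(len(a)), 2))
    score = 0
    for i, j in pairs:
        s = (a[i] - a[j]) * (b[i] - b[j])
        if s > 0:
            score += 1     # pair ordered the same way in both rankings
        elif s < 0:
            score -= 1     # pair ordered oppositely
    return score / len(pairs)

truth = [1, 2, 3, 4]       # true arousal ranks of four stimuli
pred = [1, 2, 4, 3]        # predicted ranks with one adjacent swap
tau = kendall_tau(truth, pred)
```

A tau of 1 means perfect agreement and -1 a fully reversed ranking, so the single swap above costs exactly two of the six pairs.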
Through the integration of affective computing
and machine learning, this study has evaluated the
efficiency of ranking algorithms in reducing bias. The
results obtained confirmed that the proposed system
is effective, underscoring its suitability for real-world
applications.
Figure 2. The process of ranking
emotional states
A machine-learning-based digital
twin for football training
JULIAN THEUMA SUPERVISOR: Prof. Lalit Garg
COURSE: B.Sc. IT (Hons.) Computing and Business
Digital twins are precise virtual replicas of physical
objects. The objective of this project was to develop a
workflow for creating a digital twin of a professional
footballer. This digital twin would mimic the footballer’s
shooting, passing, and dribbling skills in an accurate
manner. The proposed innovation seeks to create
opportunities for coaches to analyse the players of
rival teams and to tailor their team’s training programs
accordingly.
Machine learning (ML) is the process through which
a computer identifies patterns within data to make
predictions. In this project, the decisions of the digital
twin were based on the predictions of three separate
ML models, namely: the shooting, passing, and the
dribbling models. Each model was set to learn from the
footballer’s actions across several matches, and the
predictions reflected the player’s decision-making in a
given situation. A sample prediction could be the point
in the goal at which the footballer would aim to shoot, taking into
consideration the positions of the goalkeeper, defenders,
and the said footballer on the pitch. An overview of the
entire process is provided in Figure 1.
There exist multiple ML algorithms, such as:
random forests, decision trees, linear regression and
neural networks. Various iterations of each model
were developed and optimised using these techniques.
Subsequently, the models that reflected the footballer’s
in-game decisions the most accurately were selected
for integration into the digital twin.
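The learn-from-actions idea can be illustrated with the simplest of the listed model families, a single decision stump (a toy stand-in, not the project's models; the feature, labels, and data are invented): predict which side of the goal the player shoots at from the keeper's position.

```python
def fit_stump(xs, ys):
    """Learn a one-split decision rule: try every threshold and both
    label orientations, keeping whichever misclassifies fewest examples."""
    best = None
    for t in sorted(set(xs)):
        for left_label in (0, 1):
            pred = [left_label if x < t else 1 - left_label for x in xs]
            err = sum(p != y for p, y in zip(pred, ys))
            if best is None or err < best[0]:
                best = (err, t, left_label)
    _, t, left_label = best
    return lambda x: left_label if x < t else 1 - left_label

# Keeper near the left post (x < 0.5) -> player aims right (1), else left (0).
keeper_x = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
target = [1, 1, 1, 0, 0, 0]
model = fit_stump(keeper_x, target)
```

A random forest is, in essence, an ensemble of many such learned splits over many features, which is why it featured among the candidate algorithms.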
Finally, a simulation environment for this digital twin
was developed to visualise the digital twin in action. The
simulation environment would allow users to create a
football scenario by dragging-and-dropping footballers
onto a pitch and visualise how the digital twin would
respond through animation, as shown in Figure 2. This
response could replicate the real-world footballer’s
decision-making, thus enabling coaches to prepare
training sessions aimed at countering these actions. As
a result, it promises to be an invaluable tool for securing
victories in future matches.
DEEP LEARNING
Figure 2. Screenshot of the simulation environment
Figure 1. System design of the digital twin
Personalised course recommender in a
virtual reality learning environment
JANICE XERRI SUPERVISOR: Prof. Matthew Montebello
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
DEEP LEARNING
Navigating the realm of online learning presents learners
with the daunting task of wading through a vast array
of courses, often leading to decision paralysis. In an
attempt to address the situation, this study proposes
the integration of a personalised course-recommender
system (PCRS) within a virtual reality learning
environment (VRLE). This innovative system seeks
to simplify course selection and enrich the learning
experience by providing a personalised and immersive
educational journey.
At the core of the study was exploring how artificial
intelligence (AI) and machine learning (ML) technologies
could be utilised to create a system that would adapt
to individual learning preferences, making learning about
courses more engaging and effective.
The solution combines various AI techniques,
such as collaborative and content-based filtering, to
offer customised course suggestions. An advantage
that would be worthy of note is the effectiveness of
content-based recommendations, especially when
complemented by success-rate predictors, which could
estimate a potential student’s likelihood of completing
a course, while taking into consideration their abilities
and interests.
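The content-based half of the hybrid approach can be sketched as follows (course names, feature vectors, and the user profile are all invented for the example; the real system learns these from the curated dataset):

```python
import math

# Courses and the user are described over the same topic features;
# recommend the course whose vector is closest to the user's profile.
courses = {
    "Intro to AI":    {"ai": 1.0, "maths": 0.6, "design": 0.0},
    "3D Game Design": {"ai": 0.2, "maths": 0.1, "design": 1.0},
    "Statistics":     {"ai": 0.1, "maths": 1.0, "design": 0.0},
}

def cosine(a, b):
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def recommend(profile, k=1):
    """Rank courses by similarity to the user's interest profile."""
    ranked = sorted(courses, key=lambda c: cosine(profile, courses[c]),
                    reverse=True)
    return ranked[:k]

user = {"ai": 0.9, "maths": 0.4, "design": 0.1}
picks = recommend(user)
```

Collaborative filtering then complements this by drawing on the ratings of similar users, and the success-rate predictor re-weights the suggestions.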
To bring this concept to life, a tailored dataset
comprising user profiles, course specifics, and user
ratings was meticulously curated and analysed. This
dataset formed the foundation of the recommendation
system, and was developed through a hybrid approach
to address the unique challenges of online learning,
including varying levels of user engagement and diverse
learning goals. Initial feedback from users engaging
with the VRLE was positive, with users expressing
enthusiasm for this innovative learning approach. They
also provided valuable insights for further enhancement.
By integrating the PCRS with virtual reality technology,
specifically through the Unity environment, this project
not only demonstrates the transformative potential of
AI in education but also establishes a new benchmark
for personalised learning. It empowers learners to
participate actively in their educational journey, rather
than being passive recipients of information.
Figure 1. Architecture of the proposed software
Figure 2. Screenshot of the proposed application
Ethics in artificial intelligence: A
systematic review of the literature
JAN LAWRENCE FORMOSA SUPERVISOR: Dr Vanessa Camilleri
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
Given the rapid strides in innovation, artificial
intelligence (AI) has emerged as a disruptive
technology, which has revolutionised the way things
are done in every aspect of daily life. As a result of
this unparalleled growth, the ethics surrounding the
development and implementation of AI have become
a central topic of countless discussions and debates.
These ethical implications, comprising accountability,
fairness, transparency and privacy in relation to AI
systems, have generated a myriad of concerns, due to the
ability of AI to influence human behaviour and societal
structures at large.
There is already a substantial body of published
work concerning these ethical dilemmas, which
continues to grow, as legislation seeking to address
the said dilemmas is being drafted and implemented
daily. Nevertheless, there is a further need to assess
and scrutinise these resources to gain an in-depth
understanding of the ethical domain of AI in order to
identify any gaps. Academics, as well as companies
or organisations wishing to embark on research and
development of AI-based systems, may need guidance
and direction on issues that may range in complexity,
and which may affect humans and the intended output
from the systems.
Since substantial research has already been
published in various fields, the objective of this study
was to conduct a systematic review of the studies
surrounding ethics in AI, alongside one-to-one
interviews with experts in relevant fields, in order to
establish the key ethical issues and emerging trends.
The findings from this research have resulted in a list
of recommendations and policy guidelines concerning
AI-driven technologies.
The systematic review was conducted following an
adapted version of the PRISMA reporting guidelines,
with the results being analysed qualitatively, as opposed
to purely statistically (quantitatively). This allowed for
better identification of key points, while also allowing for
better comparisons with the qualitative data obtained
from the expert interviews. In the interest of acquiring a
comprehensive understanding of the existing literature,
the systematic review was carried out across three
different databases, with papers being sampled
randomly from the relevant results, once they met a
set threshold of minimum citations. This ensured that
the papers to be analysed and included in the review
constituted the most relevant literature available at the
time the review was conducted.
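The sampling procedure described above could be sketched as follows; the paper records, field names and citation threshold here are purely illustrative and not taken from the study itself:

```python
import random

def sample_papers(papers, min_citations, sample_size, seed=42):
    """Filter papers by a minimum-citation threshold, then sample randomly."""
    eligible = [p for p in papers if p["citations"] >= min_citations]
    rng = random.Random(seed)  # fixed seed keeps the review reproducible
    return rng.sample(eligible, min(sample_size, len(eligible)))

papers = [
    {"title": "Fairness in ML", "citations": 120},
    {"title": "AI accountability", "citations": 15},
    {"title": "Transparency survey", "citations": 80},
    {"title": "Privacy and AI", "citations": 4},
]
selected = sample_papers(papers, min_citations=50, sample_size=2)
print([p["title"] for p in selected])
```

Fixing the random seed is one way such a review could keep its paper selection auditable.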
Interviews were held with experts from various
relevant backgrounds and areas of work, which
allowed for deep insights into the ethical implications
and considerations regarding the day-to-day use
of AI in contemporary environments. The experts
were purposefully selected from various fields. They
were presented with the same set of questions to
gain meaningful and up-to-date results that could be
compared with each other and with the findings of the
systematic review.
After analysing and comparing the results from the
comprehensive systematic review and expert interviews,
an easily accessible tool in the form of a website was
developed to host the findings. This website contains
a list of recommendations and guidelines, the literature
from the systematic review and key points emerging
from the expert interviews. The goal of the website
was to help map the way forward for anyone wishing to
conduct research and develop AI-based systems.
AI ETHICS
Figure 1. Key fields targeted in the systematic review
University of Malta • Faculty of ICT 43
Study on context-enhanced weapon
detection in surveillance systems
ANDREA AVONA SUPERVISOR: Prof. Joseph G. Vella
COURSE: B.Sc. IT (Hons.) Software Development
DATA SCIENCE
This research in digital forensics (DF) tackles the
challenge of analysing vast amounts of CCTV footage
to investigate crimes involving weapons. Going through
hours of video evidence is a slow and tedious process,
potentially delaying investigations and missing crucial
details.
The driving hypothesis of this project was that
machine learning (ML) could be used to speed up the
process and make it more accurate. A model was
trained to analyse footage and detect weapons to
assist investigators working on resolving a crime. If
used correctly, the model could also help reduce the
response time in the event of a dangerous situation
involving a gun, and ideally to prevent an escalation.
The proposed solution is based on an ML model
called YOLOv8, which is trained to detect weapons in
videos, acting like a detective who can quickly identify
a gun in a frame. However, detectives need context,
and context adds accuracy and confidence to their
predictions. For this reason, additional ML models were
trained: one for the detection of violence, which could
quickly escalate into a crime involving a gun; another
to classify scenes as normal or abnormal, based on
movement and objects present; and a third to classify
sounds in audio recordings to detect any gunshots.
To train these ML models, it was necessary to
use vast datasets containing many images relevant
to each model's task, for training and testing. These
datasets are publicly available and were preprocessed
after collection. Training ML models requires
substantial relevant data, and their quality would have
a direct impact on performance. Blurry CCTV footage,
for example, could hinder the weapon-detection model,
and this consideration emphasises the importance
of using diverse and high-quality data. The model’s
computation used an NVIDIA GeForce RTX 3060 Ti
graphics card, and specialised Python libraries.
The two main issues emerging from the project
were the search for high-quality datasets and the
long time required for training the models. Integrating
Figure 1. A gun being detected in a CCTV frame
the proposed ML models into a single DF system was
the ultimate objective of the research. They would
enable investigators to analyse footage quickly, with
the weapon-detection model as the starting point.
Sometimes, the context from an additional model
could be enough to suggest the presence of a gun,
even if it is not visible. For instance, if a scene were
classified as abnormal and violent behaviour were
detected near a cash register, it would be safe to
presume a high chance of a gun being involved.
Another example could be detecting a
gunshot sound, followed by the detection of people
running away.
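The fusion of contextual cues described above could be sketched as a simple rule-based combination of the models' outputs. The function name, thresholds and rules below are hypothetical, intended only to illustrate how context could raise confidence even without a visible weapon:

```python
def assess_gun_risk(weapon_conf, scene_abnormal, violence_detected, gunshot_heard):
    """Combine detector outputs into a coarse risk level (illustrative rules only)."""
    if weapon_conf >= 0.8:
        return "high"          # the weapon detector alone is confident
    # contextual cues can raise the risk even when no gun is visible
    cues = sum([scene_abnormal, violence_detected, gunshot_heard])
    if cues >= 2:
        return "high"          # e.g. abnormal scene + violence near a till
    if cues == 1 or weapon_conf >= 0.4:
        return "medium"
    return "low"

print(assess_gun_risk(0.2, True, True, False))  # prints "high"
```

In a deployed DF system the rules would more likely be learnt or calibrated, but the principle of cross-checking detectors is the same.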
The proposed software could be a useful tool in
investigating past crimes, but also in extracting the
patterns that lead to a crime. The identified patterns
could later be used for a cybersecurity system.
Furthermore, the trained models are an improvement
over the basic YOLO model, with the weapon-detection
model reaching an F1 score of 0.909.
A comparative analysis of different
machine learning techniques in intrusion
detection against evolving cyberthreats
CALVIN AZZOPARDI SUPERVISOR: Prof. Mark Micallef CO-SUPERVISOR: Dr Joseph Bugeja
COURSE: B.Sc. (Hons.) Computing Science
As technology continues to advance, more aspects
of our lives are becoming digitised, rendering our
lives easier. Consequently, cyberattacks have
become a pervasive inevitability in our modern
world. Cyberattacks can have far-reaching
consequences, including the exposure of private
information, financial losses, and the disruption
of critical systems. Hence, ensuring adequate
defence against these threats is of vital importance.
Among the numerous systems in place to prevent and
defend against cyberattacks, the most prominent
are network-intrusion detection systems. These
systems monitor all the information passing over a
network to identify any malicious activity. Upon the
detection of such activity, the system would respond
accordingly, for example by blocking the connection
or alerting an administrator. Current network-intrusion
detection systems are highly effective at
identifying attacks that have been seen and recorded
by researchers previously. However, they are less
effective in detecting attacks that have not been
previously encountered.
The advent of artificial intelligence (AI) holds great
promise in this domain, due to its ability to identify
complex patterns in data. In theory, this would allow
the training of an AI model on currently known attacks
to develop an understanding of what would define
malicious behaviour, enabling it to detect previously
unseen attacks based on their malicious properties.
In practice, achieving this goal presents considerable
challenges. There is a significant body of research that
has employed AI to build intrusion-detection systems.
However, most of this research does not test the model’s
ability to detect attacks absent from its training data.
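One way to evaluate detection of attacks absent from the training data is to hold an entire attack family out of the training split. The sketch below illustrates the idea with hypothetical attack families and records; it is not the project's actual dataset or protocol:

```python
def leave_one_family_out(records, held_out_family):
    """Split records so one attack family appears only in the test set."""
    train = [r for r in records if r["family"] != held_out_family]
    test = [r for r in records if r["family"] == held_out_family]
    return train, test

records = [
    {"family": "dos", "bytes": 10_000},
    {"family": "portscan", "bytes": 40},
    {"family": "dos", "bytes": 12_000},
    {"family": "botnet", "bytes": 800},
]
train_set, test_set = leave_one_family_out(records, "dos")
print(len(train_set), len(test_set))  # → 2 2
```

A model trained only on the remaining families and scored on the held-out one measures exactly the "unseen attack" capability the project set out to compare.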
The key aim of this final-year project was to assess
and compare the efficacy of various AI techniques in
detecting unknown attacks. The findings from this
project would provide useful insight in determining the
most effective AI techniques for detecting both known
and unknown cyberattacks. Such a system could be
realised through specialised software integrated into
a router or a dedicated device linked to a router which
actively monitor all traffic traversing the network for
signs of malicious activity.
DATA SCIENCE
Figure 1. An illustration of a network-intrusion detection system in operation
A metaheuristic approach to the
university course timetabling problem
WAYNE BORG SUPERVISOR: Dr Colin Layfield
COURSE: B.Sc. IT (Hons.) Software Development
DATA SCIENCE
The university course timetabling problem is an
optimisation task that seeks to generate a timetable
for a semester of an academic institution. This
consists in scheduling each lecture by allocating
a suitable room and timeslot that would fit the
requirements and preferences of lecturers, students
and any other concerned parties at the university.
This project explored whether using a hybrid
technique involving constraint programming
and genetic algorithms (GA) could generate good
timetables. A GA would imitate the process of
natural selection, where high-scoring individuals
from a generation help to repopulate the next one.
The GA starts with a pool of potential timetables
encoded as chromosomes, as displayed in Figure 1.
The chromosome stores two integers for each class,
representing the chosen room and timeslot from the
list of available placements for that class.
Repeatedly, a set of individuals are selected and
merged in a process called crossover, with a view to
producing a new generation of timetables that would
be better than the parents who created them. Thus,
each new generation would deliver better timetables
than previous ones, in meeting the university’s
preferences.
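The chromosome encoding and crossover step described above could be sketched as follows. This is a minimal illustration assuming single-point crossover; the project's actual operators, fitness function and constraint handling are not shown:

```python
import random

# A chromosome stores a (room, timeslot) index pair for each class.
def random_chromosome(n_classes, n_rooms, n_slots, rng):
    return [(rng.randrange(n_rooms), rng.randrange(n_slots))
            for _ in range(n_classes)]

def crossover(parent_a, parent_b, rng):
    """Single-point crossover: a prefix from one parent, the suffix from the other."""
    point = rng.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

rng = random.Random(0)
a = random_chromosome(4, n_rooms=3, n_slots=5, rng=rng)
b = random_chromosome(4, n_rooms=3, n_slots=5, rng=rng)
child = crossover(a, b, rng)
print(child)
```

In a full GA, the child would then be scored against the university's requirements and preferences, with the fittest timetables selected to parent the next generation.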
If possible, a set of valid timetables would be
generated to populate the first generation of the GA.
This is done by attempting to schedule the lectures
one at a time, while adhering to the requirements.
When a lecture is slotted into the timetable or
‘placed’, it would inherently limit the options of the
unplaced lectures, since these cannot clash with the
newly placed lecture. If an unscheduled lecture would
have no more open options, then backtracking would
be initiated by undoing an already placed lecture, in
order that another choice could be made. Figure 2
shows a graph representing this process of placing
and backtracking until a valid timetable is found.
Figure 1. Structure of a GA chromosome
Figure 2. Progress graph for generating a valid
timetable
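The place-and-backtrack process used to seed the first generation could be sketched as a recursive search. The lectures, rooms and clash rule below are hypothetical (a clash here is simply two lectures in the same room and timeslot):

```python
def schedule(lectures, options, placed=None):
    """Place lectures one at a time; backtrack when a lecture has no open option.

    options[lecture] lists candidate (room, slot) pairs; a placement clashes
    if another lecture already occupies the same (room, slot).
    """
    placed = placed or {}
    if len(placed) == len(lectures):
        return placed
    lecture = lectures[len(placed)]
    for choice in options[lecture]:
        if choice not in placed.values():   # no clash with placed lectures
            placed[lecture] = choice
            result = schedule(lectures, options, placed)
            if result is not None:
                return result
            del placed[lecture]             # backtrack: undo and try another
    return None                             # dead end: caller must backtrack

options = {
    "AI101": [("R1", 9)],
    "DB202": [("R1", 9), ("R2", 9)],
}
result = schedule(["AI101", "DB202"], options)
print(result)
```

Here "DB202" cannot take ("R1", 9) because "AI101" already holds it, so the search falls through to ("R2", 9), mirroring how each placed lecture limits the options of the unplaced ones.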
Towards a user-centric diet recommender
DAVID BRIFFA SUPERVISOR: Dr Joseph Bonello
COURSE: B.Sc. IT (Hons.) Software Development
In a world struggling with widespread health issues,
people strive to prioritise their well-being, particularly
concerning eating habits. The hypothesis at the core
of this study seeks to establish whether it would be
possible to generate a personalised diet, based on a
user’s profile, to promote healthier eating habits.
The personal-diet recommender proposed in this
project uses a number of data sources that provide
guidelines on the nutritional requirements of different
individuals, a large set of recipes and the individual
nutritional values of the ingredients in the recipes.
The diet recommender uses two techniques,
namely fuzzy logic (FL) and linear programming (LP),
to determine the optimum diet plan for a user. FL is
a mathematical technique that was used for analysing
user profiles according to factors such as age, gender,
weight, height, medical conditions, and activity levels.
Since different persons have different requirements,
obtaining their ideal nutritional requirements would
be a complex task. FL handles imprecise or uncertain
information by assigning nuanced degrees of truth
to statements, as opposed to just ‘true’ or ‘false’. An
example of this is provided in Figure 1, where a user
weighing 85kg would partially belong to two weight
categories, medium and high weight. This allows
the recommender to estimate the user’s nutritional
requirements more precisely.
Figure 1. Fuzzy logic weight-membership graph
On the other hand, LP is a mathematical method
for optimising outcomes by maximising or minimising a
linear objective function, subject to linear constraints. In
this context, it would take the nutritional requirements
as input to seek the best combination of recipes to fulfil
them, thereby providing an ideal diet plan, as shown in
Figure 2.
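The LP step could be written, under assumed notation (not taken from the project itself), with $x_r$ the number of servings of recipe $r$, $c_r$ a cost to minimise, $a_{nr}$ the amount of nutrient $n$ per serving of recipe $r$, and $b_n$ the user's requirement for nutrient $n$ as estimated by the FL step:

```latex
\min_{x \ge 0} \sum_{r} c_r x_r
\quad \text{subject to} \quad
\sum_{r} a_{nr} x_r \ge b_n \quad \text{for each nutrient } n
```

The solver's optimal $x$ then reads off directly as the combination of recipes forming the day's diet plan.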
Finally, the proposed system is equipped to provide
ingredient substitutions for users with specific health
conditions, such as high blood pressure or diabetes, by
replacing ingredients that may be harmful to them.
DATA SCIENCE
Figure 2. A diet recommendation (1 day) for a particular profile
Semi-supervised learning for affect modelling
DAVID CACHIA ENRIQUEZ SUPERVISOR: Dr Konstantinos Makantasis
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
DATA SCIENCE
Affective computing is an interdisciplinary field
combining machine learning (ML) and psychology, and
acts as an umbrella term for problems and tasks that
deal with human emotions, feelings and sentiments
(referred to in psychology as affects). The field aims to
grant artificial machine systems some degree of
emotional intelligence. To do this, sophisticated affect
models have been developed, and constitute the crux
of a sub-field of affective computing called affect
modelling.
In affect modelling, the main aim is to create complex
computational models that could emulate, recognise and
understand human emotions through a set of different
input streams. To achieve this, ML models are trained on
different data, such as physical data and physiological
signals. These inputs are assigned specific labels,
which correspond to specific emotional states.
This mapping between input features and target states
is the basis for all affect modelling.
Within the landscape of affect modelling, the current
approach when creating a model is to utilise a fully
supervised approach. Therefore, a fully labelled and
annotated dataset would be required. This approach
yields valuable results, producing highly accurate
predictions. However, the need for a fully labelled dataset
has its own set of issues. One of the major problems
that could be encountered presents itself when creating
a dataset, as the process is time-consuming and costly.
Moreover, the larger the dataset, the more likely the
occurrence of human error, either by mislabelling or
by the annotator’s state of mind changing during the
annotation process.
Apart from fully supervised learning, two other
learning paradigms exist: unsupervised and
semi-supervised. Unsupervised learning does not
require any annotation of the dataset. However, since
it trains by inferring patterns in the data, it would not
be the best match for this use case. Instead, a
semi-supervised approach, which uses a partially annotated
dataset, could be used. In this process, the model still
learns in a similar way to a supervised approach but
uses unlabelled data to test how well the model would
be able to deal with unseen data.
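One common semi-supervised technique (not necessarily among those compared in this study) is self-training: a model trained on the labelled portion pseudo-labels the unlabelled points it is confident about, then absorbs them. The toy 1-D "features", labels and confidence threshold below are hypothetical:

```python
def nearest_label(x, labelled):
    """Classify by the nearest labelled point (toy 1-D features)."""
    return min(labelled, key=lambda item: abs(item[0] - x))[1]

def self_train(labelled, unlabelled, threshold=2.0):
    """Pseudo-label unlabelled points close to a labelled one, then absorb them."""
    labelled = list(labelled)
    for x in unlabelled:
        nearest = min(labelled, key=lambda item: abs(item[0] - x))
        if abs(nearest[0] - x) <= threshold:   # confident enough to pseudo-label
            labelled.append((x, nearest[1]))
    return labelled

labelled = [(1.0, "calm"), (9.0, "excited")]
model = self_train(labelled, [1.5, 8.2, 5.0])
print(nearest_label(8.2, model))
```

Note that the ambiguous point 5.0 is left unlabelled: a sensible self-trainer only commits to points it can pseudo-label confidently.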
This study set out to investigate whether using
a semi-supervised approach could give results
comparable to fully supervised learning. To this end,
a set of algorithmically dissimilar semi-supervised
approaches were used to train affect models, which
were then compared to a set of models trained by
adopting a fully supervised approach.
To test these comparisons, a series of evaluation
metrics were extracted from each trained model
and were then compared to each other. It should be
emphasised that the aim of the project was not to
obtain state-of-the-art results, but rather to analyse
the benefits and shortcomings of utilising one method
over the other.
The results of the study suggest that semi-supervised
algorithms can produce affect models
whose performance is not significantly different
from the performance of models produced using fully
supervised algorithms. This finding is crucial for building
robust and cost-effective affect models, towards
developing emotionally aware AI systems.
Figure 1. An overview of the three-step process for affect modelling
Detecting Earthquakes using AI
ETIENNE CAUCHI SUPERVISOR: Dr Joel Azzopardi CO-SUPERVISOR: Dr Matthew Agius
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
Earthquakes pose a significant threat at various levels,
and the field of seismology is vital for understanding
and mitigating their impact. While traditional methods
have made progress, detecting smaller earthquakes
remains a challenge. Here, artificial intelligence (AI)
and machine learning (ML) could play a significant
role, showing great promise in enhancing
earthquake-detection capabilities.
This project has been motivated by two main
considerations. The first is the need to improve the
cataloguing of earthquakes in the Maltese Islands.
By identifying the best AI/ML models, the Seismic
Monitoring and Research Group at the University of
Malta would be better equipped to take the necessary
measures. The second consideration was the need to
tackle the broader issue of detecting smaller seismic
events – which is crucial for understanding a region’s
complete seismic picture.
The primary aim of the project was to explore
how AI and ML combined would improve earthquake
detection through seismological data analysis. To
achieve this, three objectives were set, namely: 1) to
determine the most effective AI model for this task;
2) to assess how different preprocessing methods and
data-quality levels would affect the results, and 3) to
gauge how effective the model would be at detecting
earthquakes in other regions.
A robust review of the available literature was
compiled, ensuring that the decisions made were
scientifically backed and valid. A variety of shallow
and deep learning models were chosen, to ensure a
good combination. The same data was preprocessed
in various ways, with each varying one element,
so that the optimal input to the AI models could be
determined. These included: whether the input data
should be passed through a filter, and if so, with what
parameters; whether to perform oversampling or
undersampling techniques on the available data; and
whether adding artificial noise to the data produces a
better model. Once the ideal techniques were chosen,
the models were tuned to ensure that they could learn
optimally from the data provided.
DATA SCIENCE
Figure 1. Positive training data: local earthquake
identified
Figure 2. Negative training example: disruptive noise
Leveraging AI to improve demand
forecasting in ERP systems
ISAAC DEBONO SUPERVISOR: Prof. Ernest Cachia
COURSE: B.Sc. IT (Hons.) Computing and Business
DATA SCIENCE
The world of enterprise resource planning (ERP)
is extensive. Ranging from the purchasing of raw
materials to deliveries of finished goods, new tools
are always being introduced to advance the objective
of ERP, a cross-functional streamlining of the supply
process. These systems are becoming the backbone
of modern industry leaders and one key contributor is
the demand-forecasting module.
Demand forecasting, in itself, refers to the process
of using historical data to predict customers’ future
demand for a product or service. This helps the
business to plan ahead and take better-informed
supply decisions, in accordance with the expected
output required to meet customer demand adequately.
Some of the major decisions affected by this forecast
include procurement and production scheduling,
changes to existing products and logistical planning for
distribution.
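As a baseline intuition for forecasting from historical demand, a classical method such as simple exponential smoothing can be sketched in a few lines. This is not the neural approach developed in the project, merely an illustration of the principle (the smoothing factor and sales figures are invented):

```python
def exponential_smoothing(history, alpha=0.5):
    """Forecast next-period demand as an exponentially weighted average of history."""
    forecast = history[0]
    for demand in history[1:]:
        # recent observations get weight alpha; older ones decay geometrically
        forecast = alpha * demand + (1 - alpha) * forecast
    return forecast

monthly_units = [100, 120, 110, 130]
print(exponential_smoothing(monthly_units))  # → 120.0
```

An iERP module replaces this hand-tuned rule with an ML model that can also ingest the wider ERP context (procurement, logistics, product changes) rather than sales history alone.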
With the current artificial intelligence (AI) boom being
driven by generative AI technology, there is a rise in the
development of intelligent ERP (iERP) systems. This is
a novel approach to tackling enterprise-wide strategy
and communication through the use of predictive
analytics, big-data analysis and machine learning (ML)
techniques. This project set out to investigate how
these rapidly advancing technologies could be applied
to the demand-forecasting module in iERP systems, to
improve efficiency, and also ensure cost reduction and
the seamless processing of information. This would
include being able to scale and reschedule production
to demand as promptly as possible, thus mitigating
the risks associated with disruptions and information
inaccuracy across the supply chains.
The objective of this project was to take a subset of
real business data and combine it with other ERP data
to train a system to produce more accurate demand
forecasts, facilitated by meaningful visualisations.
These reports take into consideration the entire context
of the ERP system, communicating its findings in such
a way that would facilitate decision-making across all
the integrated functions of the ERP system.
Figure 1. Simple ERP dashboard prototype
One noteworthy discovery made during the literature
review was that ERP systems tended towards cloud
solutions. For this reason, for this project it was
deemed best to make the insights from the dashboard
accessible through a server-client architecture. To
achieve this, it was considered appropriate to use
Streamlit. This is a free-to-use Python framework
built for delivering dynamic and interactive data-driven
applications. Using the said framework would allow
executing all the computations on the server, with the
displays and visualisations being made accessible to the
client through a web browser.
The technology behind the interactive graphs
themselves was the Matplotlib Python library, which
facilitated the creation of the interactive visuals displayed
on the dashboard. The second group of key technologies
in this project were NumPy and Pandas, which were
used to conduct any necessary mathematical analysis
and data manipulation on the datasets. Lastly, Keras
was used to develop the demand-forecasting artificial
neural networks, implemented in Python, that analyse
business data and produce demand forecasts.
A Moneyball approach to Fantasy Premier League
NICHOLAS FALZON SUPERVISOR: Dr Joel Azzopardi
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
Fantasy Premier League (FPL) is a fantasy sports
game — and the most popular of its genre — in which
the player must select a virtual team of players from
the English Premier League. These would be awarded
points based on their performance in real-world
Premier League matches.
Team selection could be very challenging for many
FPL players, as there are a large number of variables
that would need to be taken into consideration. The
primary objective of this project was to create a
machine learning (ML) model capable of accurately
predicting the number of points each player would
receive in a forthcoming match, solely through the
use of statistical data. The model was inspired by the
Moneyball strategy (i.e., a recruitment strategy used
in various sports where players are identified through
the use of data and statistics). The chosen approach
would help mitigate any bias that is commonly shown
by FPL players towards certain players or clubs.
The proposed model could assist players in their
team selection by allowing them to assess how well
the players in their current team would perform in an
upcoming fixture. This would enable them to make
informed transfers by swapping out a player who would
be predicted to perform poorly, with a player that is
predicted to give a better performance.
A number of ML regression models were
implemented and evaluated in order to determine which
of these would produce the best predictions in different
scenarios. The chosen models were decision trees,
random forests, gradient-boosting machines and
a fully connected neural network (FCNN). Each model
was trained using the same dataset, which covered
data from the 2017-18 season up to the current season
(2023-24). The FCNN obtained the lowest test loss
score of all the models, lower than those reported in
most comparable previous studies.
Figure 1. Fantasy Premier League team selection
DATA SCIENCE
Figure 2. Flowchart of the player-performance prediction
AI in agriculture: Crop yield forecasting
JEREMY FENECH SUPERVISOR: Dr Joel Azzopardi
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
DATA SCIENCE
Crop yield forecasting consists of using historical and
statistical data to produce an estimate of the size of
a future harvest. For a number of years, this has been
crucial in guaranteeing food security and ensuring
appropriate resource allocation for farmers.
This process could be facilitated through the use of
artificial intelligence (AI), which would make it possible
to eliminate the element of guesswork and produce
highly accurate results. This research has sought to test
numerous machine learning (ML) techniques, neural
networks in particular, to identify the most promising
among them. The main objective was to create a
reusable and precise model that would help achieve
global food security.
A number of algorithms were trained to learn to
forecast specific trends by using numerous sources such
as the amount of precipitation, average temperature,
soil-quality indexes, emissions, remote sensing data, and
historical yield sizes for a given area. The vast amount of
data allowed the algorithms to identify and learn valuable
patterns, which may not be perceived by humans.
The project followed a strict workflow, as seen in
Figure 1, beginning with the collection of historical crop
and climate data. The data underwent preprocessing
— including cleaning, normalisation, and feature
engineering — to prepare it for model training. Various
algorithms were explored and rigorously evaluated for
their suitability in forecasting crop yields, each trained
and optimised using a portion of the dataset. The
performances of the models were assessed using
evaluation metrics, and adjustments were made as
necessary to enhance accuracy. Finally, the models were
tested on unseen data to evaluate the generalisation
capabilities.
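The normalisation step in the preprocessing stage could be sketched as min-max scaling, one common way to bring features such as rainfall and temperature onto a comparable range (the feature values below are invented):

```python
def min_max_normalise(values):
    """Scale a feature column to [0, 1]; a common preprocessing step."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]   # a constant column carries no signal
    return [(v - lo) / (hi - lo) for v in values]

rainfall_mm = [200, 350, 500]
print(min_max_normalise(rainfall_mm))  # → [0.0, 0.5, 1.0]
```

Scaling each input feature this way prevents those with large raw ranges from dominating model training.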
The findings pointed towards a promising outcome,
with models such as recurrent neural networks and
support vector regression delivering a high level of
accuracy in forecasting the grape yield in specific
regions of Italy, for instance. The application of ML in
agriculture generates hope and empowers farmers
by providing them with the tools needed to ensure a
sustainable future for generations to come.
Figure 1. Overview of the machine learning workflow used in the project
Figure 2. Artificial intelligence in agriculture
Predicting Eurovision rankings from
lyrics, audio features, and sentiment
ISAAC MUSCAT SUPERVISOR: Prof. Adrian Muscat
COURSE: B.Sc. (Hons.) Computing Science
The Eurovision Song Contest is known to many as an
annual musical competition celebrating music from
different regions. It has come to showcase diverse
music styles, languages and cultures on an international
stage. Considering the long history and ever-growing
interest in this event, some may wonder what it is that
makes a song succeed in the contest. Do core factors
such as lyrics, the tune and people’s reactions even
matter anymore, or do other external factors such as
geopolitical relations hold most sway over the outcome?
This is precisely what this project has attempted to
answer.
The study sought to predict the rankings that a set
of songs would achieve at the Eurovision, based on
three aspects. The first step was
analysing the lyrics of all past entries with the purpose of
identifying correlations between the lyrics and success
in the contest. Next, an evaluation of audio features (e.g.,
energy, acoustics and tempo, among others) was carried
out. Lastly, a sentiment analysis of tweets regarding past
entries was carried out to build upon previous research,
taking into account an external factor, together with the
previous two internal factors.
Each of these three aspects was duly encoded; this
step was particularly important for the lyrics, which
needed to be converted from words to vectors of values
that could be used for the predictions in this study.
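One simple way lyrics can be turned into vectors of values is a bag-of-words count over a fixed vocabulary. The study's actual encoding is not specified here, so this sketch, with an invented vocabulary and lyric fragment, is purely illustrative:

```python
def bag_of_words(lyrics, vocabulary):
    """Encode a lyric as a vector of word counts over a fixed vocabulary."""
    words = lyrics.lower().split()
    return [words.count(term) for term in vocabulary]

vocabulary = ["love", "night", "fire"]
print(bag_of_words("Love me tonight, love under fire", vocabulary))  # → [2, 0, 1]
```

Once every entry is a fixed-length numeric vector, it can be fed alongside the audio features and sentiment scores into the prediction models.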
Subsequently, appropriate models were developed,
which constituted the core of this research. In this case,
artificial neural networks and random forests were
chosen as they represented two important areas to be
examined – one being neural networks (a computational
model developed to mimic the human brain) and the other
providing a more traditional decision-based approach.
The models were then trained using the encoded
data of the majority of past entries, and then they were
tested using the data of other past entries. It is worth
noting that, although rankings were the main variable the
project sought to predict, the scaled number of points
was also predicted alongside the rankings. Finally, the
correlations between the real and the predicted results
were examined, using not only metrics but also plots
depicting their relationship.
The final step was analysing the accuracy of the
predictions made by the developed models, to gauge the
role of these aspects in securing a successful outcome
at the contest. Nevertheless, it must also be taken into
consideration that these three aspects alone might
not be sufficient in offering the ‘formula’ for winning the
contest.
This study goes beyond previous studies by not
focusing solely on internal factors (lyrics and audio
features) or an external factor (sentiment analysis)
alone, but investigating both. However, there are
still many other factors, mostly external, that could
determine the outcome of a song participating in the
contest. Nevertheless, with the ever-growing relevance
of artificial intelligence, such a study could provide the
first step in identifying further factors of an entry’s
success.
DATA SCIENCE
Figure 1. Block diagram depicting the framework of this project
VR Comm - Communication in VR using LLM
DEAN SCIBERRAS SUPERVISOR: Dr Vanessa Camilleri
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
DATA SCIENCE
The interaction between humans and chatbots typically
takes place in an unrealistic communication setting. Evaluation studies
reveal that users perceive chatbot communication
through websites as being artificial and unrealistic. The
advent of large language models (LLMs) such as ChatGPT
has largely overcome this problem. However, this has
not resolved the issue of virtual bot communication in
virtual reality (VR) settings.
This final-year project employed LLMs in the
creation of a VR-based avatar for realistic conversational
situations in virtual worlds. Current advancements in
LLMs, chatbots, and VR avatars across various sectors,
such as communications, healthcare, engineering, and
marketing, are being evaluated to identify strengths and
challenges in the field. This project further addresses the
recent surge in VR avatar popularity, their technological
constraints and challenges, by designing and developing
an integrated LLM into a VR environment using a game
engine to enhance AI Chatbot functionalities. The
project adapted and implemented a number of existing
open technologies, including OpenAI, Wit.ai, speech-to-text,
text-to-speech and lip-syncing, and evaluated the
performance of the resulting system.
The initial tests have shown that the integration
of text-to-speech, speech-to-text and the chatbot
itself was successful. Despite the
inevitable challenges, lip-syncing, head-tracking and
blinking were also implemented. A feature to change
the personality of the chatbot was also added by using
speech-to-text.
The VR chatbot itself could have a substantial
impact on the educational, healthcare, marketing
and leisure sectors. However, it would be necessary
to train it considerably beforehand with prompts and
conversations that would be relevant to the particular
field in which it would be used. This would maximise the
efficiency of the results (all the more so if it would be
used in schools or clinics and other medical settings).
Figure 2. VR chatbot avatar moving its head to track
objects
Figure 1. VR chatbot avatar doing lip-syncing
Investigating the use of augmented
reality for live closed captioning
JUSTIN AGIUS SUPERVISOR: Dr Chris Porter CO-SUPERVISOR: Prof. Mark Micallef
COURSE: B.Sc. IT (Hons.) Software Development
Persons with hearing impairments may struggle to
communicate with others, particularly in environments
that do not cater adequately for sign language, if at all.
This project sought to explore to what extent augmented
reality (AR) could be used to enable persons with hearing
impairments to access verbal communication through
closed captioning, superimposed on the real world.
Effective communication is crucial across contexts,
including the workplace. Hence, this project has explored
assistive technology that could support individuals in
understanding spoken communication.
Traditionally, closed captioning is used in television
and videos to transcribe dialogue to facilitate
comprehension by the hearing-impaired. It differs from
open captioning by allowing the viewer to enable or
disable it.
The solution utilises speech-to-text technology
within an AR environment (including headsets and/or
mobile devices) to produce closed captions for speech
uttered in a physical environment. Azure AI Speech
Service is a tool that provides support for multilingual
environments, including Maltese and English – although
support could be extended to include other languages.
Through the use of AR technology, equipment such as
AR glasses, phones, and tablets could also be used to
superimpose captions over the physical world, while
at the same time allowing the user to customise their
experience by adjusting aspects such as caption size
and location.
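One of the customisation steps mentioned above, breaking a live transcript into caption lines of a user-chosen width before superimposing them in AR, can be sketched as follows. The function name and default width are illustrative, not taken from the project.

```python
# Greedily wrap a live transcript into caption lines no wider than
# max_chars, so captions stay readable at the user's chosen size.
def wrap_caption(transcript: str, max_chars: int = 32) -> list[str]:
    lines, current = [], ""
    for word in transcript.split():
        candidate = (current + " " + word).strip()
        if len(candidate) <= max_chars:
            current = candidate
        else:
            if current:
                lines.append(current)
            current = word  # an over-long word still gets its own line
    if current:
        lines.append(current)
    return lines
```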
The effectiveness of the above-mentioned assistive
technologies would depend on elements such as
accessibility (e.g., contrast), usability (e.g., automated
language detection), customisability (e.g., location of
captions) and accuracy (e.g., effectiveness with varying
environmental noise conditions). For this reason, the
software was developed and evaluated rigorously,
informed by various metrics to indicate adherence to
specified success criteria.
The proposed solution was developed using the Unity
engine, a development suite that can be used to create
applications for many platforms, including desktop and
mobile. It supports various technologies and services,
such as the AR foundation package used in the solution,
allowing AR application development. The software was
designed to run on any AR-enabled device, including
phones, tablets and headsets, providing sufficient
flexibility to suit different needs and constraints
informed by the context of use.
HUMAN COMPUTER INTERACTION
Figure 1. The solution transcribing Maltese speech
University of Malta • Faculty of ICT 55
Towards optimizing cognitive load
management for software developers
in the context of digital interruptions
LYDELL AQUILINA SUPERVISOR: Dr Chris Porter CO-SUPERVISOR: Prof. Mark Micallef
COURSE: B.Sc. IT (Hons.) Computing and Business
In their line of work, software developers face the daily
challenge of staying focused despite a barrage of digital
interruptions across their devices. Ranging from e-mail
alerts to instant messages, such interruptions could
markedly affect their concentration and efficiency.
Trying to solve a complex problem while notifications
keep coming in is not just annoying; it also negatively
affects the developer’s cognitive load, which is the mental
exertion needed to carry out a task. Hence, this project
set out to define how these digital interruptions tend
to influence the cognitive load of software developers
and to investigate strategies to mitigate their disruptive
impact. The human working memory is limited, and
an excessive cognitive load could impede learning and
problem-solving capabilities. For software developers,
who often undertake complex and demanding tasks,
managing cognitive load is vital for productivity and
mental well-being.
A two-phased approach underpinned the project.
It was deemed necessary to begin by exploring the
relationship between cognitive load and pupillary
response, which is known to be a physiological
marker of cognitive effort. Then, it was necessary
to establish the extent to which digital interruptions
influence cognitive load; this was done by studying
pupillary responses in a controlled study. In other
words, this project sought to discover whether digital
interruptions would increase the cognitive load of
software developers by analysing their pupillary
response.
Upon having laid the groundwork, the next step
was to present an innovative solution to address this
issue in the form of a tool that utilises eye-tracking
technology to evaluate a developer’s cognitive load
in real-time. This tool would also manage digital
interruptions according to the predicted cognitive
load. Through this data, the tool would find moments
of increased cognitive load and would silence desktop
digital interruptions at these crucial times, with the
aim of increasing concentration and productivity.
The research process included a controlled lab-based
study, for which participants were asked
to carry out a series of tasks. The tasks were
dotted with actionable intrusions at specific points,
particularly when concentration levels tended to be
at their highest. This experimental framework was
instrumental in confirming the premise for this project,
and for building a model upon which the digital-interruption-management
tool could be trained.
Figure 1. A participant carrying out one of the tasks during the lab-based study
One of the observations made during the project
was related to the wide array of factors that needed
to be taken into consideration in order to interpret
pupillary responses accurately. These factors
included light levels, emotional states, and even the
nature of the tasks. The said factors tended to have a
significant effect on pupil data, presenting a challenge
in isolating the effects of cognitive load.
Despite these challenges, the project succeeded in
contributing to a better understanding of the interplay
between digital interruptions and cognitive load. By
harnessing eye-tracking technology, this work offers
a new path for developing tools capable of adapting
to the individual developer's mental state and managing
digital interruptions, making digital environments less
disruptive and more conducive to maintaining the
right flow.
Enhancing cognitive load management
in coding environments through
real-time eye-tracking data
PAUL AZZOPARDI SUPERVISOR: Dr Chris Porter CO-SUPERVISOR: Prof. Mark Micallef
COURSE: B.Sc. IT (Hons.) Software Development
Cognitive load is the mental processing power required
to perform tasks. Excessive cognitive load could lead
to errors, reduced productivity, and increased stress.
Software development is inherently complex, often
leading developers to experience high levels of cognitive
load. This could have a negative impact on their ability
to write, understand, and debug code efficiently.
In the constantly evolving field of software
development, maintaining optimal cognitive load levels
is crucial in ensuring developers’ productivity and
well-being. Recognising the need for new solutions to
manage cognitive load, this project aimed to explore
the potential of real-time eye-tracking data to enhance
cognitive load management in coding environments.
The foundation of this project rests on the
hypothesis that using real-time data from eye trackers
could effectively measure and, thus, offer ways for
reducing the cognitive load experienced by software
developers. An essential phase of this project involved
a controlled lab study. Participants were invited to
carry out a series of 11 coding tasks, each with varying
levels of complexity, during which comprehensive eye-tracking
data was collected. An extension for Visual
Studio Code (VS Code) was developed to analyse
eye-tracking data and predict cognitive load based
on a pre-trained model. Along with a data-collection
exercise, this approach was adopted to devise a way
for evaluating the extension’s accuracy in real-time
cognitive load assessment, ensuring its effectiveness
and reliability in practical applications.
The project makes use of a combination of
eye-tracking hardware and software-development
tools. For eye tracking, the Gazepoint GP3, a widely
recognised research-grade eye tracker, was utilised to
gather detailed eye-movement data. This information
was then processed in real time by a machine learning
model capable of predicting cognitive load levels, which
was developed in Python. The interface for this system
was a VS Code extension, developed in TypeScript,
which utilises the Gazepoint API for fetching eye-tracking
data in real time. This data was subsequently
sent to a custom-built API, integrating the predictive
model. This would enable the provision of immediate
feedback and recommendations within the VS Code
environment, such as suggesting the user take breaks,
thus aiding developers in managing their cognitive load
more effectively.
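Fixation metrics of the kind the extension relies on can be derived from raw gaze samples with a simplified dispersion-based grouping, loosely in the spirit of the I-DT family of algorithms. This sketch uses invented parameter values and is not the project's model.

```python
# Greedy I-DT-style grouping of (x, y, t) gaze samples into fixations.
# samples: iterable of (x_px, y_px, t_ms); returns fixation durations (ms).
# Fixation count is len(result); mean duration is sum(result)/len(result).
def detect_fixations(samples, dispersion=25.0, min_duration=100):
    fixations, window = [], []
    for sample in samples:
        window.append(sample)
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > dispersion:
            stable = window[:-1]  # the run before the gaze jumped away
            if stable and stable[-1][2] - stable[0][2] >= min_duration:
                fixations.append(stable[-1][2] - stable[0][2])
            window = [sample]
    if window and window[-1][2] - window[0][2] >= min_duration:
        fixations.append(window[-1][2] - window[0][2])
    return fixations
```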
The implementation of the project offered several
key insights. Notably, eye-tracking metrics, such as
fixation duration and the number of fixations, served
as reliable indicators of cognitive load. Furthermore,
the project underscored the feasibility of integrating
advanced technologies such as eye tracking into
everyday software development tools, to improve user
experience and productivity.
This project demonstrates the potential of
eye-tracking technology to transform the way in
which developers manage cognitive load in coding
environments. By providing practical, real-time insights,
the developed VS Code extension would represent
a significant step forward in creating more intuitive
and supportive development tools. Although further
research would be required to refine and validate
the approach, the initial findings suggest a promising
approach to improving the well-being and productivity
of software developers worldwide.
Figure 1. Screenshot of a participant during a lab-based testing session
Making headway: A user interface to
enhance educational accessibility for
children with physical disabilities
ANDREW BUGEJA SUPERVISOR: Dr Peter Albert Xuereb
COURSE: B.Sc. IT (Hons.) Computing and Business
In the contemporary educational landscape, it would
be imperative to prioritise accessibility for all children,
irrespective of physical limitations. Unfortunately,
children with physical disabilities often encounter
significant obstacles when trying to access certain
educational resources. For instance, a child with
cerebral palsy, brimming with eagerness to learn, might
be hindered by traditional interfaces that would fail to
accommodate their unique needs. This project set out
to address this issue by developing a specialised user
interface (UI) to enhance educational accessibility for
such children.
Central to the proposed solution is a graphical user
interface (GUI) that is controlled exclusively by head
movements. This builds upon advanced motion-sensing
technology. Through customisable input options, the
system would interpret subtle head gestures, enabling
meaningful interactions within educational applications.
Notably, during research the need to fine-tune the
sensitivity settings emerged as a critical factor in catering
to the varying levels of mobility among users. Moreover,
integrating real-time feedback mechanisms, such as
visual cues or audio prompts, proved instrumental in
enhancing user engagement and motivation.
To achieve this, machine learning algorithms were
trained to recognise specific gestures and map them
to predefined actions within the GUI. Furthermore,
accessibility features were seamlessly integrated into
the interface design, ensuring compatibility with screen
readers and alternative input devices commonly utilised
by individuals with disabilities.
Figure 1. The GUI visible to the users
In conclusion, the endeavour to enhance educational
accessibility for children with physical disabilities
through a specialised UI has yielded promising results.
The project contributes towards fostering a more
inclusive learning environment by embracing innovative
technologies and adopting a user-centric design
approach.
Looking ahead, continued innovation and
collaboration hold the key to continue to diminish the
existing gap in educational accessibility, ensuring that
every child — regardless of physical limitations — could
embark on a journey of knowledge and personal growth.
Figure 2. The camera being utilised to capture the
head, and head movements, to respond to the user
interface
User engagement in serious games
RENO YURI CAMILLERI SUPERVISOR: Dr Vanessa Camilleri
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
Serious games are a genre of games with a purpose
beyond mere entertainment. In recent years, the
use of games as educational tools — serious games
in particular — has garnered significant attention for
its ability to enhance student engagement. Given
the advancements in computational power and
innovative artificial intelligence (AI) techniques, this
project consisted of creating an educational maze
game and website that would combine learning and
entertainment.
The core idea behind this project was to investigate
the effectiveness of integrating AI techniques to
provide users with a more engaging experience. The
main AI techniques that were investigated were: 1)
creating the levels of difficulty of a maze game based
on user performance, in an attempt to reduce the
gap between high-scoring and low-scoring students
(thus creating a more competitive environment that
caters to students of all abilities); and 2) creating a
review system through which, upon completing a
particular quiz, users could test their knowledge with
newly created review questions and answers, based on
the original questions of that particular quiz.
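The first technique, performance-based difficulty adjustment, could be sketched as follows; the accuracy thresholds and difficulty scale are invented for illustration.

```python
# Illustrative difficulty adjustment: raise the next maze's difficulty
# after strong performance, lower it after weak performance, nudging
# high- and low-scoring students toward a common challenge level.
def next_difficulty(current: int, recent_accuracy: float,
                    lowest: int = 1, highest: int = 5) -> int:
    if recent_accuracy >= 0.8:
        current += 1
    elif recent_accuracy < 0.5:
        current -= 1
    return max(lowest, min(highest, current))
```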
This project also promises to empower teachers by
offering a user-friendly platform, through which they
could effortlessly transform traditional quizzes into
captivating, interactive maze games. The integration
of AI would allow teachers using this platform to add
new answer options with a click of a button to simplify
their workflow, as well as view student performance
in detail (e.g., the speed at which a student would
manage to answer questions or assess mistakes
made). Meanwhile, the students themselves would be
able to review their performance at the end of the game
and, where necessary, engage in additional exercises to
reinforce their learning.
Initial results indicated that the use of AI in game
creation could increase the engagement of both the
learner and the teacher, through a gamified approach
to learning. This suggests a number of implications
that could be further evaluated and tested in diverse
educational settings.
Figure 1. The improved maze game
Figure 2. Maze creator page for teachers
Feasibility of runtime verification
with multiple runs
JACOB DEGUARA SUPERVISOR: Prof. Adrian Francalanza
COURSE: B.Sc. (Hons.) Computing Science
In the field of software development, ensuring the
reliability, robustness and security of programs is a
constant challenge. Runtime verification, a technique
for monitoring software executions against predefined
specifications, plays a crucial role in addressing this
challenge. Hence, this project proposes a potentially
practical approach for enhancing existing monitoring
techniques by building upon a previous work related to
the theory of monitors.
Traditionally, monitors operate by assessing the
behaviour of a single execution, also known as a ‘trace’,
against a set of specifications. Such monitors normally
combine verdicts using only the AND logic gate, but the
above-mentioned theory suggests modifying monitors to accept a history
of traces instead of just one. On this basis, it would be
possible to expand the range of properties the monitor
could scrutinise. This history would allow one monitor to
utilise the OR logic gate as an additional tool, since more
than one trace would be required to fully prove such a
violation. This research has set out to demonstrate the
feasibility of this concept in real-world scenarios and its
potential to improve monitors.
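The core idea, a verdict computed over a history of traces rather than a single one, can be sketched in a few lines (in Python here for illustration; the actual system is written in Erlang):

```python
# Sketch of a monitor verdict over a *history* of traces.
# AND-style properties must hold on every trace; OR-style properties are
# satisfied once any trace satisfies them, which is only expressible when
# more than one run is available to the monitor.
def monitor_history(traces, predicate, mode="and"):
    verdicts = [predicate(trace) for trace in traces]
    if mode == "and":
        return all(verdicts)  # the traditional single-trace case generalised
    if mode == "or":
        return any(verdicts)  # requires a history: one run cannot settle it
    raise ValueError(f"unknown mode: {mode}")
```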
In seeking to validate this idea, it was deemed best
to build upon a pre-existing monitor system, utilising
the outline approach of this monitor, which is written in
Erlang. By integrating elements inspired by the above-mentioned
theory of monitors into the detector, the project
attempted to demonstrate the feasibility of enhancing monitors
to analyse a broader spectrum of behaviours. While the
proposed implementation may not fully capture every
property enabled by the theory of monitors in question,
it certainly adds value by extending the monitor’s
capabilities beyond its original scope.
Figure 1. Flowchart of the proposed monitor
This research contributes to the ongoing efforts to
improve software-monitoring techniques. While it may
not revolutionise monitors, it provides evidence that
the theory of monitors at the basis of the study offers
a viable extension to existing monitoring methods.
By accommodating a history of traces, the enhanced
monitor developed through this project could better
analyse software executions and improve overall
reliability, robustness and security.
While further research and refinement would
be necessary, this work represents a step forward
in improving software-monitoring techniques and
ultimately enhancing the reliability and security of
software systems.
Adaptation of UI layout using web-usage-mining techniques
ELEANOR CLAIRE FORMOSA SUPERVISOR: Dr Colin Layfield
COURSE: B.Sc. IT (Hons.) Computing and Business
Fundamentally, web-usage mining focuses on
extracting useful patterns or user profiles from
data generated by a user’s interactions with a web
application. Therefore, it could be used to adapt
the interface. Such an approach is seen to be more
dynamic and less prone to biases, when compared to
methods relying on explicit user input or static profiles
to modify the user interface (UI) layout. The insights
gained from web-usage mining enable the creation of
highly personalised user experiences, offering a data-driven
approach to interface adaptation.
This study is based on the hypothesis that
dynamic adaptation of the UI layout would optimise
the user experience. Notably, the study addresses the
implications of UI adaptation on user response to visual
cues in the context of diagram-editing software. The
analysis of user data allows the system to adjust the
UI pre-emptively to highlight tools or suggest efficient
workflows, thus reducing the cognitive load on the
user and potentially speeding up the diagram-creation
process.
The solution is presented as a Google Chrome
extension, utilising content scripts to inject and execute
JavaScript within the context of the user’s browser
session. This extension was set to interact with the
Document Object Model of the webpage to adapt the
interface in accordance with the results produced
during the pattern-analysis stage.
Initially, user-behaviour data was collected and
stored using the Indexed Database API (IndexedDB).
This ensured that data collection respected the user’s
privacy and that the data would be readily accessible
for analysis without relying on external servers. The
data was preprocessed once a substantial amount of
data was logged. This phase involved filtering out any
irrelevant data, correcting any errors and converting
raw data into a format suitable for analysis.
Central to the approach was the Apriori algorithm, a
commonly used data-mining technique. The algorithm
assumes that the presence of any subset of a frequent
itemset implies the likely presence of other items from
the same set. This assumption allowed the application
to intuitively draw the user’s attention towards features
they would be more likely to use, forming the basis for
adapting the UI.
Figure 1. Comparing a standard diagram-editor interface with a personalised interface, enhanced for a better user experience
After the above stage, a score was assigned to a
set of elements, to determine if and how they would be
adapted in this algorithm, using visual cues. Elements
with a higher score were moved to a personalised
menu, medium-score elements were visually flagged
to capture the user’s attention and elements scoring
below a pre-defined threshold remained unchanged.
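The scoring step above can be sketched as a simple bucketing function; the threshold values are invented for illustration.

```python
# Illustrative bucketing of UI elements by score: promote high-scoring
# elements to a personalised menu, flag medium scorers with a visual cue,
# and leave low scorers unchanged.
def adapt_elements(scores: dict[str, float],
                   promote_at: float = 0.6,
                   flag_at: float = 0.3) -> dict[str, str]:
    actions = {}
    for element, score in scores.items():
        if score >= promote_at:
            actions[element] = "personalised_menu"
        elif score >= flag_at:
            actions[element] = "visual_cue"
        else:
            actions[element] = "unchanged"
    return actions
```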
A proof-of-concept web application was developed
to demonstrate the presented adaptive features
using JavaScript client-side scripting, along with
‘mxGraph’, an external JavaScript library providing a
comprehensive framework for creating, displaying, and
managing interactive diagrams and graphs within web
applications. The purpose of the proof-of-concept was
to validate the feasibility of adaptive UI features in a
controlled setting.
The proposed application serves as a tangible
example of how adaptive features work in a real-world
scenario. Its advantage lies in its ability to facilitate
understanding the processes involved.
Optimisation of saliency-driven image-content-ranking parameters
MATTHEW KENELY SUPERVISOR: Dr Dylan Seychell CO-SUPERVISOR: Prof. Inġ. Carl James Debono
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
Visuals play a crucial role in securing users’ attention,
leading media outlets to utilise them in the constant
competition for our attention. Hence, there is an ever-increasing
need for a mathematical model that could
automatically detect the most prominent parts of
images, allowing for the fairness of attention distribution
within user interfaces to be assessed and ensuring a
pleasant and reliable user experience.
Saliency is a subproblem of computer vision (a
subfield of machine learning concerned with the
automatic processing and interpretation of images) that
attempts to tackle the creation of such a model. While
the application of saliency to traditional photographs has
seen rapid development in recent years, its application
to user interfaces has been scarce. Hence, the aim of
this project was two-fold: 1) to optimise an existing
saliency-ranking framework (SaRa) that could measure
attention distribution fairness by organising interface
elements into ranks, and 2) to curate a dataset that
could convey inter-element saliency relationships.
SaRa was optimised successfully through the
adoption of the state-of-the-art saliency generator,
DeepGaze IIE. Additionally, a new saliency-score formula
was implemented, along with a preprocessing step to
remove noise within the saliency maps. The dataset
used in the project was curated through the collection
of gaze-location data within news website interfaces.
This data was gathered through the use of a GazePoint
eye tracker, as well as through an online experiment that
tracked mouse trajectories.
Figure 1. Saliency map and the saliency rankings generated by SaRa
To measure the impact of excessively salient
elements (such as ads, clickbait images, etc.) on the
viewing experience, participants were split into two
groups. Each of these groups had the excessively
salient elements either included or removed, with the
discrepancy between them serving as an indicator of
how distracting the excessively salient elements were.
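Rank assignment in a SaRa-style pipeline can be illustrated with a much-simplified stand-in for the project's saliency-score formula: ordering interface elements by the mean saliency of the pixels they cover.

```python
# Simplified rank assignment: order named interface regions by the mean
# saliency of the pixels they cover (highest first). The real SaRa score
# formula is more involved; this is only a didactic stand-in.
def rank_elements(saliency_map, regions):
    """saliency_map: 2D list of floats; regions: {name: [(row, col), ...]}."""
    scores = {
        name: sum(saliency_map[r][c] for r, c in pixels) / len(pixels)
        for name, pixels in regions.items()
    }
    return sorted(scores, key=scores.get, reverse=True)
```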
Figure 2. Overview of the optimised SaRa saliency-ranking framework
Evaluating and enhancing user
interface design for elderly users
ZACK MANGANI SUPERVISOR: Dr Peter Albert Xuereb
COURSE: B.Sc. IT (Hons.) Software Development
As the digital age progresses, it becomes increasingly
crucial to ensure that technology would be universally
accessible – in particular to the elderly, who constitute
an important and constantly growing proportion of the
population.
This demographic is often overlooked in user
interface (UI) design. Therefore, at the core of this
project lies the firm belief that by tailoring UIs to cater to
the specific needs of older adults, it would be possible
to enhance their digital engagement substantially. The
relevant hypothesis posits that UIs designed with an
emphasis on simplicity, readability, and ease of use
would diminish the challenge of using technology,
offering older adults an intuitive and empowering online
experience.
In seeking to achieve the above, the project also
entailed a comprehensive review of existing research
in this field. As a result, a set of consolidated UI
guidelines were designed specifically for older adults.
These guidelines featured larger fonts, high-contrast
colour schemes, and straightforward navigation to
accommodate the unique needs of older users. In
order to implement these principles, a website was
redesigned to serve as a tangible example of how such
guidelines could significantly enhance usability. This was
an iterative process, guided by valuable feedback from
elderly participants, and featuring continuous testing
and refinement.
In terms of technology, the project employed tools
including: Figma for UI/UX design; frameworks such
as ASP.NET for backend services; and Bootstrap for
responsive frontend design. The use of Google Fonts
and FontAwesome further contributed to the aesthetic
appeal and accessibility of the interface, while the
SpeechSynthesisUtterance API provided essential text-to-speech
capabilities for users with visual impairments.
This endeavour underlined the critical importance
of user feedback in the design process, and not only
validated the original hypothesis but also highlighted the
broader implications of creating digital environments
that would be accommodating to users of all ages, and
thus being inclusive in nature.
Figure 2. Enhanced for ease: a user-friendly settings
menu designed specifically for older users
Figure 1. Before and after: a user interface
redesigned for easier accessibility
AI-Powered Subject Preference
Detection for Personalised Virtual
Reality Learning Environments
GIANLUCA SCIBERRAS SUPERVISOR: Prof. Matthew Montebello
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
Traditional classroom learning tends to cause students’
attention to falter after a short while, especially if the
subject might not be sufficiently interesting or it would
be taught in a way that would not suit their respective
learning styles. Everyone learns differently – some
assimilate better through images, while others grasp
concepts through hands-on activities. A one-size-fits-all
approach in classrooms could leave many students
feeling lost and unmotivated.
While virtual learning environments (VLEs) offer
flexibility by breaking away from the physical constraints
of the classroom, many still replicate the limitations
of a standardised curriculum. This one-size-fits-all
approach overlooks the diversity of learning needs and
preferences among students. Moreover, the absence of
a more individual approach would risk stifling curiosity,
reducing motivation, and ultimately compromising
the effectiveness of the learning experience. Existing
methods for identifying students’ subject preferences
often rely on time-consuming surveys or questionnaires,
which could be inefficient or lack nuance.
Inspired by the challenges observed while tutoring
students who struggled with less-than-engaging
subjects, this project proposes the development of
an AI-powered subject-preference-detection system
designed for integration into virtual reality learning
environments (VRLEs). The proposed system aims
to overcome the limitations of traditional preference-identification
methods by rendering the process more
accurate, implicit, and continuous. By automating this
process, the system would facilitate dynamic tailoring
of learning paths and content, identifying individual
interests and aptitudes. This approach promises to
significantly enhance both the effectiveness and
enjoyment of VLEs.
The first step was to create a comprehensive
dataset. This consisted of meticulously labelled text
data, where each piece was rigorously categorised
according to its corresponding subject matter. A pre-trained
model known as BERT was set to undergo further
training on this dataset to refine its ability to classify
the subjects within text content accurately. A system
for analysing user search history was also developed.
This system extracted text from visited URLs (uniform
resource locators) and identified patterns within that
text. By analysing these patterns, the system could
reliably deduce a user’s preferred subject.
Figure 1. Architectural diagram of the proposed system
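The deduction step can be sketched as a simple tally over classified pages; `classify_subject` here stands in for the fine-tuned BERT model, so any callable mapping page text to a subject label will do.

```python
# Tally predicted subjects over the text of visited pages and return the
# most common one as the user's inferred preference. classify_subject is
# a placeholder for the fine-tuned BERT classifier.
from collections import Counter

def preferred_subject(page_texts, classify_subject):
    counts = Counter(classify_subject(text) for text in page_texts)
    subject, _ = counts.most_common(1)[0]
    return subject
```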
The goal of this project was achieving the seamless
integration of this subject-preference detector into
VRLEs. This integration was intended to empower
VRLEs to present immersive and personalised learning
environments as dynamically as possible. These
environments would feature content that could be
meticulously tailored to the subject interests of the
individual user.
To evaluate the software, a user study involving
actual students was duly carried out. Participants
were introduced to the VRLE and allowed to navigate
the virtual environment. After interacting with the
system for a designated period, in-depth interviews
were conducted. These interviews sought to gauge the
system’s effectiveness through questions focusing on:
1) whether the VRLE identified their preferred subject
areas accurately; 2) whether the learning modules
based on their interests were engaging and easy to
understand; 3) how the VR environment contributed to
the learning experience, and 4) if the level of difficulty of
the system was manageable.
Visualisation of inertial data
from wearable sensors
NICHOLAS VELLA SUPERVISOR: Dr Ingrid Galea
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
Motion-capture technology has had a significant impact
on various industries, ranging from entertainment to
healthcare, by enabling the accurate representation of
human movements in virtual environments. However,
the processing and visualising of motion data presents
challenges, particularly in terms of cost limitations
and data management. In response to this, this
research project sought to optimise the processing and
visualisation of motion data acquired from XSens DOT
wearable sensors.
The primary goal was to address the challenges
associated with managing and visualising the vast
amount of motion data captured by wearable sensors,
while considering cost constraints. To address these
challenges, a comprehensive solution involving various
key components was developed.
The first step was to conduct thorough research
on existing animation software and motion-capture
methodologies to establish the best approach. This step
was crucial in understanding the current state-of-the-art
techniques and identifying potential pitfalls.
The next step was preprocessing the raw
motion data obtained from CSV files generated by
the XSens DOT sensors. This involved parsing the
CSV files and organising the data into separate
data structures based on different types of motion
data, such as Euler angles and acceleration.
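The parsing step might be sketched as follows; the column names are invented for illustration, not the actual XSens DOT export schema.

```python
# Illustrative CSV preprocessing: split each sensor row into separate
# Euler-angle and acceleration streams keyed by timestamp. Column names
# ("time", "roll", ..., "az") are assumed, not the real export schema.
import csv
import io

def parse_motion_csv(csv_text):
    """Return ({t: (roll, pitch, yaw)}, {t: (ax, ay, az)}) from CSV text."""
    euler, accel = {}, {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        t = float(row["time"])
        euler[t] = tuple(float(row[k]) for k in ("roll", "pitch", "yaw"))
        accel[t] = tuple(float(row[k]) for k in ("ax", "ay", "az"))
    return euler, accel
```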
Mapping the preprocessed motion data to specific body
parts of a virtual character rig was another critical step.
This mapping process ensured that the movements
captured by the sensors accurately translated into the
corresponding joints and bones of the virtual character,
maintaining anatomical correctness and fidelity.
The project employed the Unity game engine and the
C# programming language in implementing the solution.
Unity provided a user-friendly development environment
with powerful animation capabilities, making it well-suited
for translating motion-capture data into
immersive animations. Additionally, the integration of
C# with Unity facilitated efficient implementation of
complex functionalities.
Figure 1. Comparison between avatar animation and
real-time model movement, illustrating the accuracy
of motion capture and animation techniques
Various challenges were encountered during the
course of the project, leading to a number of noteworthy
discoveries. One such challenge was managing
the large volume of motion data and ensuring its
accurate representation in virtual environments. Another
point that emerged was the importance of choosing
the appropriate sequence of Euler angles for describing
rotations, considering their susceptibility to gimbal lock
and other limitations.
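The gimbal-lock limitation mentioned above can be demonstrated numerically. With a Z-Y-X Euler sequence (one common choice, assumed here for illustration), a pitch of 90° collapses yaw and roll onto a single axis:

```python
import numpy as np

def rot_zyx(yaw, pitch, roll):
    """Rotation matrix for an intrinsic Z-Y-X (yaw-pitch-roll) Euler sequence."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

# At 90 degrees of pitch the yaw and roll axes align: two different
# (yaw, roll) pairs with the same difference give the same rotation.
a = rot_zyx(np.radians(30), np.radians(90), np.radians(10))
b = rot_zyx(np.radians(40), np.radians(90), np.radians(20))
```

Away from the locked orientation the same two angle pairs produce distinct rotations, which is why the choice of sequence matters when mapping sensor data to a rig.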
In conclusion, this research project demonstrates
the potential of wearable sensors and advanced
motion-capture technology in enhancing motion-data
visualisation. By addressing the challenges associated
with processing and visualising motion data, the proposed
solution offers new possibilities for applications in virtual
reality, gaming, animation, and other fields.
HUMAN COMPUTER INTERACTION
University of Malta • Faculty of ICT 65
AR driving using mobile phones
TIMOTHY ZAMMIT SUPERVISOR: Dr Clyde Meli
COURSE: B.Sc. IT (Hons.) Software Development
HUMAN COMPUTER INTERACTION
Augmented reality (AR) is an extremely useful
technology that allows overlaying the real world with
additional information and makes it possible for users
to enjoy the benefits of technology while still being
connected with the real world. Unfortunately, most
existing AR solutions are very costly, with devices
ranging between €2000 and €3000 for a headset.
The objective of this final-year project was to
make AR more accessible by creating an app that
could be used on mobile devices with the addition
of a cheap €15 headset, as shown in Figure 1. This
app is intended for use while driving, with the aim
of keeping motorists’ eyes on the road and off their
GPS, speedometers, and other distractions.
Human depth perception arises primarily from
stereoscopic vision, which allows us to gauge
depth and the distance of objects from us. This
occurs because each eye has a different perspective
of the world and, once each eye receives information,
the brain blends that information into a single
stereoscopic image. This is important in AR because,
when using a headset, the device must present a
slightly different image to each eye. In fact, all AR
devices use multiple cameras to create an image to
show each eye. However, this is not possible with a
mobile phone.
Figure 1. Modified headset with adjustable lenses
Taking the above into consideration, this project
has attempted to overcome the challenge by laying
the foundation for achieving stereoscopic vision using
a single mobile phone camera, with minimal lag and
delay. The approach was to obtain the camera video
feed, replicate it twice on screen, as shown in Figure 2,
and crop a small amount of the image relative to each
eye.
Figure 2. Camera feeds displayed to the phone
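The replicate-and-crop idea can be sketched as follows; this is a minimal illustration on an array stand-in for a frame, and the pixel shift is an assumed tuning parameter:

```python
import numpy as np

def stereo_pair(frame, shift):
    """Build a left/right image pair from one camera frame by cropping
    the same frame with a small horizontal offset per eye (a sketch of
    the approach; 'shift' in pixels is an assumed tuning value)."""
    h, w = frame.shape[:2]
    left = frame[:, : w - shift]   # drop pixels on the right edge
    right = frame[:, shift:]       # drop pixels on the left edge
    return left, right

frame = np.arange(12).reshape(3, 4)  # stand-in for a tiny grayscale frame
left, right = stereo_pair(frame, 1)
```

Each eye then sees the same scene from a slightly offset window, approximating the disparity that true dual-camera headsets capture.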
Snap-n-Tell: An Augmentative and Alternative
Communication (AAC) app with Visual
Scene Display (VSD) for empowering
individuals with speech disabilities
RIANNE MARIE AZZOPARDI SUPERVISOR: Dr Peter Albert Xuereb CO-SUPERVISOR: Dr Dylan Seychell
COURSE: B.Sc. IT (Hons.) Software Development
Communication is an intricate and multi-layered
process central to human interaction. It serves various
purposes, including sharing information, persuasion,
and expressing emotions. However, some individuals
struggle to verbalise thoughts, due to difficulties
in forming words or in making the required muscle
movements. This creates a need for alternative means
of communication. Any method used as an alternative
to speech is referred to as augmentative and alternative
communication (AAC).
Snap-n-Tell is an Android app that was co-created
with AAC experts, using modern Android Material Design
principles, to improve communication for people with
speech difficulties. A key objective of this app is to
empower individuals with speech difficulties to achieve
greater independence by facilitating self-expression
and communication. The proposed app employs visual
scene display (VSD), which is a unique approach that
allows users to take or upload photos – referred to as
‘scenes’ – and link words to specific elements within
these photos through interactive points, known as
‘hotspots’.
Snap-n-Tell seeks to enhance communication
between users and their conversational partners through
the added context and understanding provided by the
captured photos. By tapping a hotspot, the app would
vocalise the associated word or phrase through a
text-to-speech (TTS) system. Users could also record their
own voice messages for hotspots. Research shows that
apps utilising VSD are extremely helpful to: persons who
are new to AAC; individuals with cognitive impairments;
individuals with acquired physical conditions; early
communicators; and individuals with intellectual and
developmental disabilities (IDDs).
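A scene-and-hotspot model of the kind described above can be sketched as follows; the field names and the circular tap test are illustrative assumptions, not Snap-n-Tell's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Hotspot:
    x: float          # position within the photo, normalised 0..1
    y: float
    radius: float     # tap-sensitive area around the point
    phrase: str       # text spoken when the hotspot is tapped

@dataclass
class Scene:
    photo: str                       # path to the captured photo
    hotspots: list = field(default_factory=list)

    def tap(self, x, y, speak):
        """Find the first hotspot containing the tap and vocalise it
        via the injected speak callable (e.g. a TTS engine)."""
        for h in self.hotspots:
            if (x - h.x) ** 2 + (y - h.y) ** 2 <= h.radius ** 2:
                speak(h.phrase)
                return h.phrase
        return None

scene = Scene("kitchen.jpg", [Hotspot(0.3, 0.4, 0.1, "cup")])
spoken = []
scene.tap(0.32, 0.41, spoken.append)
```

Injecting the speak function keeps the scene model independent of whether the output comes from TTS or a recorded voice message.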
The proposed app is not limited to the elements
present in the photos. It also allows users to add
pictograms (or easy-access words). When these
pictograms are pressed, the text associated with them
is read out by the TTS, thus broadening the range of
topics for conversation. Furthermore, Snap-n-Tell
introduces an AI mode that uses artificial intelligence,
more specifically an object-detection technology. This
feature automatically creates hotspots and links words
to them. Integrating AI significantly reduces the time
caregivers would normally spend on programming
such scenes. Moreover, it enhances user-friendliness
and independence, especially for technologically adept
users.
Figure 1. Scene view with hotspots and easy-access
words
Snap-n-Tell also offers a number of customisation
features. One of these is the ‘Transition to Literacy’ tool,
which aids users in learning new vocabulary or improving
recall, which is especially beneficial to individuals with
degenerative cognitive impairments. Another feature,
‘Transition to Symbols,’ helps bridge the gap between
traditional AAC apps (i.e., apps that use grid display)
and VSD. The grid-display interface strategy arranges
symbols and words in a structured grid format. This
approach lacks context and imposes an additional
cognitive load on users whilst navigating among the
available symbols. Therefore, the ‘Transition to Symbols’
feature facilitates a smoother transition to grid-based
applications, which would otherwise be lacking.
To measure the app’s effectiveness, a focus group
consisting of domain experts (i.e., speech therapists and
technical staff) performed specific tasks using the app.
Their feedback was duly compiled and studied in order
to gain valuable insight regarding the extent to which
the app actually met the needs of its users, and how it
might be improved. The initial reception was positive,
highlighting the app’s potential utility and impact in
professional settings.
AUDIO SPEECH & LANGUAGE TECHNOLOGY
Large language model for Maltese
KELSEY BONNICI SUPERVISOR: Prof. Alexiei Dingli
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
AUDIO SPEECH & LANGUAGE TECHNOLOGY
Language models stand at the forefront of natural
language processing (NLP), marking a significant leap
in computational frameworks aimed at understanding
and generating human-like text. Fuelled by deep learning
algorithms, these language models have found their
place in various domains, ranging from customer service
to education. The recent surge in their popularity, driven
by the success of ChatGPT, has increased the demand
for such technology.
Instruction tuning, through which a language model
would be finely calibrated to adhere to user instructions
and produce tailored responses, has emerged as
a cornerstone technique in the development of
conversational chatbots. However, such models are
scarce within the Maltese linguistic landscape. This
hampers the advancement of applications and services
catering to Maltese speakers, impeding progress in this
area. The development of such technology not only
signifies technological progress but would also serve as a
vital tool for safeguarding cultural heritage and fostering
linguistic diversity. Furthermore, an important factor
that must also be taken into account is the substantial
financial outlay required for developing language models
of this nature.
This study addresses these challenges by
presenting Gendus, a low-cost, instruction-tuned
language model for Maltese. The name of the
proposed software is the word in Maltese for ‘water
buffalo’, with a nod to il-gendus Malti, which was
a species native to Malta. The project seeks to
contribute to existing knowledge by demonstrating the
effectiveness of instruction tuning and various cost-cutting
techniques in developing language models
for under-represented languages, such as Maltese.
Creating a tailored language model for Maltese
would not only open avenues for diverse applications
relevant to Maltese speakers but also contribute to the
further development of the framework for creating
models for other under-represented languages.
The methodology used in the project was adapted
from established practices used for developing such
language models. The first step in the process was the
machine translation of a dataset consisting of 52,000
instructions into Maltese. Each instruction described
a task for a language model to perform, along with
its anticipated output. Then, the LLaMA 2 7B model
was used as the base language model; this model was
fine-tuned on the dataset using techniques such as
parameter-efficient fine-tuning (PEFT) and low-rank
adaptation (LoRA). These techniques were geared
towards reducing hardware requirements during the
model’s training process, which in turn would reduce
the costs required to develop a language model.
Figure 1. A sample conversation on Gendus
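The idea behind LoRA can be illustrated with a toy example: the pretrained weight matrix stays frozen, and only a small low-rank pair of matrices is trained. The sizes here are illustrative; real models use far larger dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 8, 8, 2, 4   # toy sizes, not the real model's
W = rng.normal(size=(d_out, d_in))   # frozen pretrained weight

# LoRA: instead of updating W, learn a low-rank delta B @ A.
A = rng.normal(scale=0.01, size=(r, d_in))
B = np.zeros((d_out, r))             # standard init: the delta starts at zero

def forward(x):
    """Adapted layer: the frozen weight plus a scaled low-rank update."""
    return (W + (alpha / r) * B @ A) @ x

x = rng.normal(size=d_in)
base = W @ x          # output of the frozen model
adapted = forward(x)  # identical until B and A are trained

trainable = A.size + B.size          # parameters actually updated
full = W.size                        # parameters a full fine-tune would touch
```

Even in this toy setting only half the parameter count is trainable; at the 7B scale of the base model the saving is far more dramatic, which is what drives the cost reduction reported above.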
Gendus underwent evaluation across a number of
downstream tasks, including sentiment analysis, part-of-speech
tagging, and named-entity recognition.
A comparative analysis of the results obtained
with those of the BERTu model, another prominent
language model for Maltese, revealed that while
Gendus approached similar performance levels, it did
not surpass BERTu. Despite this outcome, Gendus
delivered a remarkable 99.78% reduction in training
costs, confirming its cost-effectiveness.
While falling short of achieving performance
superiority, the affordability of the proposed approach
renders it an attractive option, particularly in projects
constrained by budgetary considerations. In addition
to this, Gendus exhibits capabilities for open-ended
text generation, enhancing its versatility and potential
for various NLP tasks.
Vegas replace: A twist on plunderphonics
DAVID BUHAGIAR SUPERVISOR: Mr Tony Spiteri Staines
COURSE: B.Sc. IT (Hons.) Software Development
With the rise of computers and samplers for the
production of music, the recycling of content continues
to become more commonplace. Whether used to add
an element of familiarity to an otherwise original
composition, or to create a remix, content-reuse
techniques translate the effects of repetition in written
literature to the world of music.
John Oswald took this idea to the extreme in 1985,
with the concept he called plunderphonics. He argued
that anything that produces sound should count as a
musical instrument, including music-playing devices.
Based on this premise, he proceeded to make entire
’plunderphonic’ albums, which would consist of tracks
that were made up entirely of spliced samples from
existing music on CD. While these albums resulted
in threats of litigation against him by the Canadian
Recording Industry Association, Oswald’s techniques
continue to live on today in the form of parody remixes
on YouTube.
Some YouTube remix creators even publish their
remix project files (usually made in Magix Vegas Pro),
allowing anyone to download them. From this, the
Vegas replace (‘veg replace’ for short, or VG) came about,
where a user would download a .veg project file from
the internet and replace the project media with different
files. This would result in the same musical structure
of the original project being played through the replaced
media. Benjamin Kaufold (known as Kyoobur9000)
compares VGs with playing a musical score with
different musical instruments, and with Oswald’s broad
definition of ’musical instrument’, the replaced sources
themselves would count as musical instruments.
VG became the starting point for learning how to
create what are known as Sparta remixes. However, the
ease of replacing sources in someone else’s project file
creates a problem. Besides the ethical and legal issues of
content reuse that Oswald faced with his plunderphonic
works, simply replacing media often results in audibly
unpleasant remixes of poor quality.
From the mass of low-quality VGs online, only a few
audibly pleasing exceptions have been created. These
exceptions demonstrate that VGs could be musically
pleasing if proper care would be put into preparing the
sources. This project was based on the premise that, by
gaining an understanding of how projects in Vegas Pro
are stored, and applying that to music theory, it would
be possible to determine the quality of VGs.
Sequences of pitch-shifted video clips in Vegas Pro
are very similar to MIDI notes on a digital piano. Also,
while the .veg format is specific to the Vegas Pro software,
Vegas can import and export projects in EDL form.
This allowed the creation, in this project, of a software
artefact for bidirectional conversion between MIDI
notes and EDL.
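The core of such a MIDI-to-EDL mapping can be sketched as a semitone-offset calculation; the base note at which a source clip is assumed to sound (middle C here) is an illustrative choice:

```python
def midi_to_semitone_shift(note, base_note=60):
    """Semitone pitch-shift a clip needs so that a source assumed to
    sound at base_note (middle C here) plays at the given MIDI note."""
    return note - base_note

def semitone_shift_to_midi(shift, base_note=60):
    """Inverse mapping, useful when analysing an existing VG as MIDI."""
    return base_note + shift

# A C-major triad as MIDI notes becomes per-clip pitch shifts
notes = [60, 64, 67]
shifts = [midi_to_semitone_shift(n) for n in notes]
```

Because the mapping is invertible, the same arithmetic serves both directions of the conversion described above.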
The proposed solution allows both the automated
creation of remixes tailored to VG, as well as analysing
existing remixes in MIDI form. During development, it was
discovered that the pitch range possible in Vegas is
much broader than what the software’s interface exposes.
In fact, the proposed solution has built upon this
characteristic.
AUDIO SPEECH & LANGUAGE TECHNOLOGY
Figure 1. The functionality of the proposed software
Investigating pitch-detection algorithms
for improved rehearsal enhancement
MARIAH DEGUARA SUPERVISOR: Dr Conrad Attard
COURSE: B.Sc. IT (Hons.) Software Development
AUDIO SPEECH & LANGUAGE TECHNOLOGY
Maintaining the right pitch throughout a musical piece
is crucial for the flow of music. This applies to
members of a choir and soloists alike, and can only be
achieved through constant practice.
Ensuring the best use of rehearsal time would
depend on the quality of the individual practice, which
comes with suitable aids or guidance. Hence, this project
has investigated the existing support for pitch detection
and employed these works to create the SelfTune app,
which is the tangible outcome of this research. SelfTune
has been developed precisely to enhance the quality of
individual practice. It is mainly intended to be used by
singers to learn more about their voices and be able to
gauge themselves while practising.
The first stage of the implementation process was
to identify a good-quality dataset that would meet
the requirements of the project. This was necessary
to serve as ground truth in both the comparative
analysis between the three identified pitch-detection
algorithms and the calculation determining the singer’s
pitch accuracy in the SelfTune application. Ultimately,
a dataset was chosen, containing recordings of original
songs with respective recordings of the singer singing
solo, as well as a table containing a comprehensive list
of timestamps with the pitch of the last note or section
that was sung.
The second stage consisted of the identification
and comparative analysis of the pitch-detection
algorithms available at the time of the research and
testing stage. Ultimately, the three selected algorithms
were YIN, SWIPE, and SPICE, each of which approaches
the pitch-detection problem from a different angle and
offers a distinct solution to it. In view of this diversity,
a comparative analysis was carried out. The algorithm
that proved to be the most accurate and efficient was
then used to develop the application, SelfTune.
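For illustration, a naive autocorrelation-based estimator conveys the basic idea behind such algorithms; this is far simpler than YIN, SWIPE, or SPICE, and is not the method used in SelfTune:

```python
import numpy as np

def autocorr_pitch(signal, sr, fmin=80.0, fmax=1000.0):
    """Naive pitch estimate: find the lag at which the signal best
    matches a shifted copy of itself, restricted to the singing range."""
    sig = signal - signal.mean()
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # candidate lag window
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

sr = 8000
t = np.arange(sr // 4) / sr                  # quarter-second test tone
f0 = autocorr_pitch(np.sin(2 * np.pi * 200.0 * t), sr)
```

Real algorithms refine this idea with normalisation, interpolation, or learned models to stay robust on noisy vocal recordings, which is exactly what the comparative analysis weighed.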
The final stage was the development of the final prototype
of SelfTune. Its clean and intuitive design was intended
to facilitate achieving the main objective of the
application, which was to provide ear training and
guidance in individual practice sessions. Upon opening
the application, a welcome screen appears, followed
by a list of songs one could listen to, learn, and record
while singing. After recording the singing voice, the
application would calculate the user’s pitch accuracy,
upon which the results would be displayed alongside
a list of inaccuracies and their corresponding pitches.
Upon testing SelfTune, many of the singers
expressed the need for such an application, which
was confirmed during the usability study conducted to
determine the effectiveness of SelfTune. This highlights
the potential of such applications in the music industry.
Moreover, numerous suggestions for improvement
were given by the participants in the usability
study, indicating increased interest in, and need
for, such an application.
This research project and the resulting final
prototype show that, thanks to the app’s simple interface
and precise pitch identification, applications such as
SelfTune mark the arrival of digital musical-rehearsal
assistants.
Figure 1. A screenshot of the application SelfTune, which features
the result screen
VA in VWLE: Virtual assistant in virtual-world
learning environment
PEDRO A. H. GUIDOBONO SUPERVISOR: Prof. Matthew Montebello
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
The more nostalgic computer users might remember
Microsoft Office’s Clippit or Clippy, the virtual assistant
that was often found to be more frustrating than helpful.
Advancements in technology since then have paved the
way for more effective virtual assistants and, while
Clippy may have ‘retired’ in 2007, the resurgence of a
truly effective virtual assistant is quite possible, thanks
to the currently available AI tools.
In learning environments, the demand for
immediate inspiration or quick solutions to questions
often arises. Traditional information searches could
lead to a loss of focus and wasted time. A virtual
assistant (VA), a groundbreaking feature designed to
revolutionise any learning environment, could also
eliminate such delays. Moreover, incorporating an AI
system that could listen and provide vocal responses
within a virtual-world learning environment (VWLE),
would significantly elevate the learning experience.
Building upon the currently available advanced
AI technologies, including generative pre-trained
transformer (GPT) models like ChatGPT, developed
by OpenAI, allows for exploring, and ultimately
implementing, cutting-edge solutions in education.
This project sought to contribute to the ongoing
evolution of AI applications in learning environments,
enhancing the virtual learning experience through
advanced AI techniques. This goal was reached by
developing an intelligent and adaptable assistant
capable of understanding and responding to students’
educational needs in real time.
To achieve this goal, the project utilised the Unity
platform, which was originally designed to create
games. Unity 3D largely facilitated creating a virtual
environment. Moreover, packages from OpenAI and
similar software made it possible to:
• transform students’ speech to text,
• use the text with ChatGPT,
• and transform its response back to speech.
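The three-step pipeline above can be sketched as a single orchestration function. The three services are stubbed out here, since the actual OpenAI calls depend on credentials and package versions:

```python
def assistant_turn(audio, transcribe, complete, synthesise):
    """One interaction turn: speech -> text -> LLM reply -> speech.
    The three callables stand in for the OpenAI-style services the
    project wired into Unity; their names here are illustrative."""
    question = transcribe(audio)   # step 1: speech to text
    answer = complete(question)    # step 2: text to ChatGPT
    return synthesise(answer)      # step 3: response back to speech

# Stub services, just to demonstrate the flow of data
reply = assistant_turn(
    b"...",
    transcribe=lambda a: "What is Unity?",
    complete=lambda q: f"Answer to: {q}",
    synthesise=lambda text: ("audio", text),
)
```

Passing the services in as parameters mirrors how the Unity packages slot into the pipeline and makes each stage easy to swap or test in isolation.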
AUDIO SPEECH & LANGUAGE TECHNOLOGY
Figure 1. The process involved in the proposed virtual assistant (partially powered by
DALL-E-3)
Figure 2. Unity is a popular and powerful cross-platform game engine and
development platform with a broad range of applications
Audio-signal processing (tone
analysis) FPGA-based hardware for
signal-processing applications
AMR TREKI SUPERVISOR: Prof. Ing. Edward Gatt
COURSE: B.Sc. (Hons.) Computer Engineering
AUDIO SPEECH & LANGUAGE TECHNOLOGY
In music and regular conversation, one parameter
within audio signals conveys so much information
that it can reveal the prosody and emotion
associated with the signal. The parameter in question
is the pitch, or fundamental frequency, which can be
determined through a series of processes.
These functions could be handled by field-programmable
gate arrays (FPGAs), and this project
has sought to demonstrate how they could also be
programmed to reproduce the human ear. FPGAs
are very fast processing microchips. Unlike other
microchips, they offer a very high level of flexibility,
and can be programmed by the user all the way to
their hardware structure. Using an FPGA, the pitch-extraction
process begins with filtering the input
signal to eliminate noise, followed by pitch detection
via algorithms designed to extract the pitch of the
signal. These algorithms then provide real-time output
indicating the pitch state.
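The filter-then-detect pipeline can be modelled behaviourally in software before committing it to FPGA logic. This is a sketch only; the project's actual filters and detection algorithms are more sophisticated:

```python
import numpy as np

def moving_average(x, n=8):
    """Simple FIR low-pass filter; this kind of fixed-coefficient
    filter maps cheaply onto FPGA logic."""
    return np.convolve(x, np.ones(n) / n, mode="same")

def zero_crossing_pitch(x, sr):
    """Count rising zero crossings and convert their rate to a frequency."""
    crossings = np.sum((x[:-1] < 0) & (x[1:] >= 0))
    return crossings * sr / len(x)

sr = 8000
t = np.arange(sr) / sr                       # one second of samples
tone = np.sin(2 * np.pi * 100.0 * t)
noisy = tone + 0.05 * np.random.default_rng(1).normal(size=sr)
pitch = zero_crossing_pitch(moving_average(noisy), sr)
```

Validating each stage against a software model like this is a common step before translating the design into hardware description language.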
The way in which the FPGA extracts the pitch is
closely modelled on how the human ear functions.
Taking the anatomy of the human ear as the baseline
(see Figure 1), the outer ear and external auditory canal
would correspond to the pulse-density modulation
(PDM) microphone; the middle ear then transmits the
sound wave through the three amplification bones (the
malleus, incus, and stapes). The vibrations would then
be converted into electrical signals by tiny hair cells
within the cochlea.
Figure 1. Anatomy of the human ear
Finally, the electrical signals are passed on to the
brain and the signals are interpreted by the auditory
cortex. The remaining stages can be compared to
the filtering stage of the FPGA. The human brain
corresponds to the main chip containing the relevant
algorithms for the processing and estimation of pitch.
Figure 2. The FPGA model
Using software for the generation
and analysis of music
NEIL ZAHRA SUPERVISOR: Mr Tony Spiteri Staines
COURSE: B.Sc. IT (Hons.) Software Development
In the current digital age, the intersection of technology
and art offers unprecedented opportunities for
creativity and innovation, particularly in the world
of music composition. This project exploits this
intersection by developing AI-enhanced software for
music composition and generation of backing tracks.
The main aim was to make music creation and analysis
accessible to a wider pool of users. This endeavour
makes use of a number of tools powered by AI (artificial
intelligence) to create a plugin for JJazzLab, which
is a complete and open-source application, designed
to transform the way musicians interact with music,
allowing them to accompany their favourite songs
effortlessly.
Traditionally, music composition has been perceived
as a complex process, reserved for those with
extensive knowledge of music theory and composition
techniques or having a natural gift for it. However,
the utilisation of AI in music has brought about
significant change, enabling the creation of interactive
musical pieces without the need for deep theoretical
knowledge. By making the most of the advantages
of AI in pattern recognition and learning from vast
datasets, the project sought to transcribe and recreate
complex music structures within the shortest possible
time frame.
The core of the proposed project is the development
of a plugin that promises to simplify the music-creation
process. This plugin is not just a tool; it is a bridge
connecting musicians with their aspirations, enabling
them to accompany their favourite songs with ease
and precision. It makes best use of AI to convert MP3
files into MIDI format, subsequently transforming them
into CSV formats that JJazzLab could utilise.
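The final MIDI-to-CSV step can be sketched as follows; the column layout here is an assumption, and the exact format JJazzLab expects may differ:

```python
import csv
import io

def midi_events_to_csv(events):
    """Serialise (start_beat, duration_beats, midi_note) events into a
    CSV layout; the column names are illustrative assumptions."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["start_beat", "duration_beats", "midi_note"])
    for event in sorted(events):      # keep events in timeline order
        writer.writerow(event)
    return buf.getvalue()

csv_text = midi_events_to_csv([(0.0, 1.0, 60), (1.0, 0.5, 64)])
```

Flattening the MIDI events into rows like these gives the plugin a simple, inspectable interchange format between the conversion stage and JJazzLab.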
One of the achievements of the project is its
potential for enhancing user accessibility, making music
composition and backing-track generation feasible for
users, regardless of their technical expertise or musical
background. By focusing on user-friendly design
and seamless integration with JJazzLab’s existing
architecture, the plugin ensures a stable and intuitive
experience for all users.
Testing and user feedback played critical roles
in the iterative development of the plugin, with a
structured approach employed to evaluate the accuracy
of the audio/MIDI-to-CSV conversion and the plugin’s
overall performance. Feedback from beta testing with
musicians has been vital in refining the functionality
of the plugin, highlighting the importance of user
experience in technological development.
This project not only contributes to the field of digital
music composition by introducing new capabilities in
music analysis and generation, but also represents a
significant step towards the integration of AI in creative
processes. The AI-enhanced JJazzLab plugin embodies
the potential of AI to complement human creativity,
offering a new dimension of artistic expression and
innovation in music composition.
This project has sought to demonstrate how the
integration of AI in music composition could open
new avenues for creativity and artistic expression.
Furthermore, it confirms that technology can serve as
a catalyst for creative innovation and making the art of
music composition more accessible.
AUDIO SPEECH & LANGUAGE TECHNOLOGY
Figure 1. Screenshot of the software loaded with a generated backing track
Developing a protocol for human-motion
capture using wearable inertial sensors
ELISA AZZOPARDI SUPERVISOR: Dr Ingrid Galea
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
INTERNET OF THINGS
In recent years, there has been an increased interest in
understanding and analysing human motion for various
applications, such as sports-performance analysis,
rehabilitation, virtual reality, and animation. Traditional
motion-capture systems, such as optical marker-based
systems or magnetic systems, while highly accurate,
come with limitations such as high cost, restricted
mobility, and complex setup procedures. These
limitations spurred the development of alternative
solutions, with wearable inertial sensors emerging as a
promising technology.
Wearable inertial sensors offer a number of
advantages over traditional motion-capture systems.
They are lightweight, portable, and capable of capturing
motion data in real time, allowing for more spontaneous
and unencumbered movement. Additionally, they are
relatively affordable, making them accessible to more
researchers, practitioners, and enthusiasts. However,
the adoption of wearable inertial sensors for human-motion
capture faces significant challenges. These include
the lack of standardised protocols for data collection,
processing, and analysis, as well as issues related to
sensor calibration and synchronisation. Without a
standardised protocol, it becomes difficult to compare
results across studies, replicate experiments, or
integrate data from different sensor systems.
With reference to the above scenario, this project
has sought to develop a comprehensive protocol for
human-motion capture using wearable inertial sensors,
specifically the Movella Xsens DOT sensors. This
protocol sought to address existing challenges and
provide guidelines for researchers and practitioners in
the field. By establishing standardised procedures for
data collection, processing, and analysis, this protocol
would enhance the reliability, validity, and reproducibility
of motion-capture experiments conducted with
wearable sensors.
In addition to the above, a well-defined protocol
would facilitate collaboration among researchers,
enabling the pooling of data from multiple studies to
create larger datasets for more robust analyses. It
would also simplify the development of software tools
and algorithms for motion analysis, as it would enable
developers to design solutions to comply with the
established protocol.
Figure 1. The sensors used during this study
Figure 2. Positioning of the wearable motion-capture
sensors
Door-access-control system
with facial recognition
MATTHIAS DRAGO SUPERVISOR: Dr Inġ. Trevor Spiteri
COURSE: B.Sc. (Hons.) Computer Engineering
Access-control systems are prevalent in today’s security
landscape, playing a vital role in safeguarding physical
and digital assets across various industries. An access-control
system is a security mechanism that regulates
entry to physical or virtual resources. It typically involves
the use of authentication and authorisation to ensure
that only authorised individuals or entities can access
certain areas or information.
Over the past decades, there have been numerous
advancements in the implementation of access control.
Traditional methods include mechanisms such as
access cards or codes, but they have proved to be
vulnerable as they could easily be misplaced or stolen.
This raises the need for a more secure and efficient
alternative.
The advent of biometrics has revolutionised access
control by introducing more robust and reliable methods
of identity verification, such as fingerprint scanning, iris
recognition and facial recognition. Unlike methods such
as keys, passwords, or access cards, biometrics offer
greater security and convenience, as they are inherently
tied to one’s unique physical characteristics.
This project has focused on the implementation
of a door-access-control system that would rely
on facial-recognition technology. The system aims
to provide secure and reliable access control, while
offering the convenience of being fast, contact-free
and user-friendly. The implementation was done
using a Raspberry Pi single-board computer, paired
with a camera and a servo-motor-operated door
mechanism.
The system operates in two key stages, namely: face
detection and facial recognition. It was implemented
using Python libraries, which use convolutional neural
networks for detection and recognition. Upon capturing
frames from the camera, the face-detection algorithm
identifies any face present and extracts it, while ensuring
it is clear and in close range. The facial-recognition
algorithm then recognises faces by extracting facial
landmarks and calculating unique measurements,
which are then compared with pre-existing data to
authenticate individuals.
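The compare-measurements step can be illustrated with embedding distances. The 128-dimensional embeddings and the 0.6 threshold follow the conventions of the popular face_recognition Python library, assumed here for illustration:

```python
import numpy as np

def authenticate(candidate, enrolled, threshold=0.6):
    """Match a face embedding against enrolled embeddings by Euclidean
    distance; accept the closest match only if it is within threshold."""
    best_name, best_dist = None, float("inf")
    for name, embedding in enrolled.items():
        dist = float(np.linalg.norm(candidate - embedding))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

rng = np.random.default_rng(0)
alice = rng.normal(size=128)                 # stand-in enrolled embedding
db = {"alice": alice, "bob": rng.normal(size=128)}
who = authenticate(alice + 0.01, db)         # a slightly perturbed capture
```

Rejecting any match outside the threshold is what prevents an unknown visitor from being granted access to the closest enrolled identity.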
A logging website was also developed in order to track
and monitor attempts. This platform displays detailed
logs containing the name, date, time, and photo of each
attempt, with the ability to filter the logs by the different
attributes. An export functionality was also added to
enable exporting the logs for further processing. This
would allow for remote monitoring from anywhere with
internet access.
Figure 1. System output when an individual is
authenticated
Throughout the development process, some
challenges were encountered in ensuring accuracy in
facial recognition, particularly under varying environmental
conditions and changes in facial appearance. To mitigate these
issues, different parameters were fine-tuned and
mechanisms were put in place that were crucial for
optimal performance. Additionally, extensive testing
was conducted with different individuals to validate the
system’s performance and reliability.
This project seeks to complement the paradigm
shift in security measures. By harnessing the power
of facial biometrics and the Raspberry Pi, this system
promises to offer a seamless, secure, and user-friendly
alternative to traditional access-control methods.
INTERNET OF THINGS
University of Malta • Faculty of ICT 75
Design of a piezoelectrically actuated
MEMS micro-mirror
MATTHEW MIFSUD SUPERVISOR: Prof. Ivan Grech CO-SUPERVISOR: Dr Inġ. Russell Farrugia
COURSE: B.Sc. (Hons.) Computer Engineering
The micro-mirror design in MEMS (micro-electromechanical
systems) is increasingly being used
for different applications, such as automotive devices,
optical switching or scanning. Lidar (light detection
and ranging) is a method that determines the range
of a target by using laser technology and incorporates
MEMS into its design, since they consume less power
than others and are more compact, as well as being less
costly.
The aim of the project was to design a quasistatic
4mm × 3mm micro-mirror that could be actuated
piezoelectrically using the PiezoMUMPs process.
Quasistatic means that the tilting motion of the
mirror would occur as a result of the voltage applied
on the actuators, and the actuators are made up of
a piezoelectric material that would excite the mirror.
PiezoMUMPs is a typical process that many engineers
and researchers use to create sensors, microphones,
and autonomous microsystems.
In this project, the micro-mirror consisted of
two torsion springs connected to the mirror and four
serpentine springs, each connecting an actuator and
the mirror [2]. The stiffness of these springs, along with
the thickness of the mirror and the material used for
the actuators, affected the resulting scan angle and
the frequency at which torsional rotation was identified
(see Figures 1 and 2 respectively). Scan angle refers
to the displacement of the edge of the mirror when a
voltage is applied to two actuators, whereas torsional
rotation refers to the mirror rotating to the left and right
alternately.
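The dependence of the torsional-mode frequency on the spring stiffness can be summarised by the standard relation below, where k_t is the effective torsional stiffness of the springs and J the mirror's mass moment of inertia; this is the textbook expression for a torsional resonator, not a formula quoted from the project.

```latex
f_{\text{torsional}} = \frac{1}{2\pi}\sqrt{\frac{k_t}{J}}
```

A stiffer spring or a lower-inertia (thinner) mirror thus raises the frequency at which torsional rotation is observed, which is consistent with the parameters (spring stiffness, mirror thickness) listed above as affecting the results.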
CoventorWare, a simulation software package that
caters mostly for MEMS devices, was used in designing
and simulating the model. Using the software, multiple
tests, such as the mesh, modal and DC analyses, were
performed to investigate how different springs, meshes
and materials would affect the scan angle and the
frequency at which torsional rotation would occur.
Figure 1. Scan angle of 4.56 degrees when 10V is applied
Figure 2. Torsional rotation is observed at a 500Hz frequency
Faculty of Information and Communication Technology Final Year Projects 2024
Piezo-actuated MEMS resonator
for gas detection
LUCA MIGUEL PIROTTA SUPERVISOR: Prof. Ivan Grech CO-SUPERVISOR: Dr Inġ. Russell Farrugia
COURSE: B.Sc. (Hons.) Computer Engineering
Wherever we are, consciously or otherwise, we breathe
one or more types of gas. For instance, the air we
breathe is a mixture of gases including nitrogen, oxygen,
and carbon dioxide. Unfortunately, some of these gases
are harmful, especially when we are exposed to an
excessive amount of substances such as chlorine or
hydrogen sulphide. Every object vibrates at a specific
natural frequency, although these vibrations are usually imperceptible.
In view of the above, this project proposes a device
that would resonate when exposed to a certain gas,
with the purpose of alerting people about the presence
of a dangerous gas. In comparison to standard sensing
techniques, the incorporation of resonator technology
based on micro-electromechanical systems (MEMS)
into gas-detection systems would improve sensitivity
and selectivity in detecting target gases.
The high-resonance frequencies and low mass
of piezo-actuated MEMS resonators, along with other
unique characteristics, were believed to make it possible
to detect gas molecules efficiently by amplifying the
sensor’s response to gas interactions and enhancing
detection limits, as well as response times. Furthermore,
it was anticipated that the sensor could be adjusted
to handle particular gas types by exploiting the
tunability of piezo-actuated resonators, in order to
improve the selectivity.
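The sensing principle relies on adsorbed gas molecules adding a small mass to the resonator, which shifts its resonant frequency. For a small added mass Δm, the standard first-order approximation (a textbook relation, assuming the effective stiffness is unchanged) is:

```latex
\Delta f \approx -\frac{f_0}{2m}\,\Delta m
```

where f_0 is the unloaded resonant frequency and m the effective mass of the resonator. The very low mass m of a MEMS resonator makes the shift Δf large for a given Δm, which is why such devices can achieve high sensitivity.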
MEMS technology was chosen for this project, in
view of its superior properties and characteristics,
when compared to other applicable technologies. A
fundamental advantage is the fact that MEMS are
extremely small – in fact the proposed device is 8000
micrometres long and 2000 micrometres wide. Their
size facilitates the creation of portable devices, thus
making them suitable for integration into a wide range
of products. Furthermore, since the resulting product
would be a small device, it could be mass-produced in
a cost-effective manner, rendering it very competitive.
The process specifically used for this device is
called PiezoMUMPS, a specialised fabrication
process for integrating piezoelectric elements. In view of
the presence of an actuator in the device, PiezoMUMPS
would be fundamental for operating its actuation
capabilities. The process enabled the fabrication
of complex structures with integrated piezoelectric
layers, allowing for precise control over the device’s
gas-sensing capabilities. This process typically involves
depositing and patterning thin layers of piezoelectric
material, in this case aluminium nitride (AlN), onto the
silicon substrate.

Figure 1. The actuator resonating at harmonic frequency
This project used CoventorWare to handle all the
simulations required, and to validate the theoretical
results. This software made it possible to perform
multiple tests, such as the mesh, modal, and DC
analysis, to observe the effects of each simulation on
the resonator.
Taking the mesh analysis in particular, the process
focused on the frequencies obtained to showcase how
different materials affected the actuator’s frequency
when resonating. This made it possible to determine
which substance from among those tested (i.e.,
silicon, aluminium nitride and an aluminium-chromium
compound) had the most distinct effect.
Vehicle-engine-management security
issues: Detection and mitigation
MARJOHN SALIBA SUPERVISOR: Dr Clyde Meli
COURSE: B.Sc. IT (Hons.) Software Development
Modern vehicles are equipped with a variety of
computers, one of which is the electronic control unit
(ECU) of the engine. The ECUs in modern engines
depend on data obtained from sensors to regulate the
engine in real time. This is achieved through the engine’s
ECU managing the various servo mechanisms and
actuators. Therefore, the focus of this research was to
investigate the security aspects of the communication
channel utilising analogue signals between the engine
ECU, the diverse sensors and actuators.
One of the primary objectives behind the research
was to address the relative ease with which security
breaches on analogue signals could occur, and
to establish the extent to which engine ECUs could
use individual signals, or combinations thereof, to
identify man-in-the-middle (MITM) attacks on analogue
signals. Another objective was proposing mitigation
technology for enhancing security and integrity in
engine-management systems, specifically in analogue
signals between engine sensors and the engine ECU,
and between the engine ECU and actuators. This was
expected to improve the detection of deviations from
a vehicle’s factory specifications — which are illegal
according to the EU regulation, if not approved by the
relevant authority.
An alternative hypothesis would be that such
breaches could be detected using overlapping range
checks and other techniques such as rate of change
and time series relation.
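The detection techniques named in the hypothesis (range checks and rate-of-change checks) can be sketched as follows. The sensor limits are illustrative values, not parameters taken from the actual artefact.

```python
# Hypothetical plausibility limits for one analogue sensor signal
# (e.g. a temperature sender): the valid voltage range and the
# maximum physically plausible change between consecutive samples.
VALID_RANGE = (0.0, 5.0)     # volts
MAX_DELTA = 0.2              # volts per sample


def detect_tampering(samples):
    """Return the indices of samples that violate either the range
    check or the rate-of-change check, suggesting a MITM attack
    on the analogue line."""
    suspicious = []
    for i, value in enumerate(samples):
        if not (VALID_RANGE[0] <= value <= VALID_RANGE[1]):
            suspicious.append(i)
        elif i > 0 and abs(value - samples[i - 1]) > MAX_DELTA:
            suspicious.append(i)
    return suspicious
```

A real ECU would combine such per-signal checks with cross-signal (time-series relation) checks, as the hypothesis above suggests.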
The methodology adopted for the project included
the development of an artefact made up of the engine
simulator, a simple-but-real OBD2-compliant engine
ECU, a simple-but-real OBD2-compliant dashboard,
and the simple-but-real MITM device.
The purpose of the engine simulator was to simulate
the real-world sensors and actuators of the engine.
This represented a real-life engine in operation. In fact,
the engine ECU was used to investigate and establish
the extent to which real-world engine ECUs could
detect analogue MITM attacks. The industry-standard
OBD2 port included in the engine ECU could be used to
connect an industry-standard OBD2 scan-tool interface
for diagnostic purposes. This would ensure an accurate
representation of the real-world automotive system
behaviour of the engine ECU.

Figure 1. Engine simulator, engine ECU and dashboard
The dashboard was built to enable live-data reading,
fault reading and fault deletion, thus also representing
a real-world dashboard. Live-data reading refers to the
displaying of current engine parameters, like rev counter
and the ‘check engine’ light. Fault reading consists of
displaying any faults or problems in the vehicle by
means of error codes, known as diagnostic trouble
codes (DTCs), and their descriptions. In accordance with
the ISO 15031 standard, the DTCs are classified into
three categories, namely: stored, pending or permanent.
Fault deletion refers to the DTCs being deleted by the
user. This can only occur in the case of stored or pending
DTCs, since permanent DTCs can only be deleted by
the vehicle itself.
The purpose of the MITM device was to produce the
analogue MITM security breaches investigated in
this research. This was done by first reading signals
from the analogue line present between the engine ECU
and the sensors, and engine ECU and actuators. Then,
such analogue signals were adjusted and sent to the
destination component.
Lost-baggage rerouting in commercial airports
CLYDE SCIBERRAS SUPERVISOR: Dr Clyde Meli
COURSE: B.Sc. IT (Hons.) Computing and Business
As the number of air passengers continues to grow
annually, the volume of baggage processed at commercial
airports increases likewise. The aviation industry
constantly seeks to enhance the relevant processes in
order to increase efficiency and automation. However,
despite implementing the most modern technology in
baggage-handling systems, mishandling of baggage still
occurs.
Baggage is misplaced due to a number of
factors. For example, baggage could be left behind
at origin. This issue is usually resolved by placing
the baggage on the next available flight. However, in
some cases, baggage gets sent to the wrong airport,
complicating rerouting efforts. This leads to passenger
frustration, placing airlines under pressure to rectify the
situation promptly for a smooth passenger experience.
In the case of a single flight or connecting flights,
the baggage may be sent on the wrong flight or left at
intermediary airports, respectively. For instance, if a
passenger travels from Malta to Singapore via Frankfurt,
their baggage might remain stranded in Frankfurt. Once
again, routing the baggage to the correct destination
might not be straightforward in such cases.
The accompanying image presents a basic overview
of the baggage-handling process in commercial airports,
from check-in to exiting the arrival hall/area. Naturally,
the baggage-handling process differs between airports.
Furthermore, it is also worth noting that luggage
mishandling usually occurs at the baggage handling and
sorting stage.
In order to get a sense of the research that has
already been undertaken in this area, this project
included a review of the existing literature on the topic,
to explore the fundamental concepts in this field. While
the majority of previous studies focused on simulating
baggage-handling systems, this study considered the
routing aspect. This idea served as the basis for the
project, which consisted of implementing a dashboard
offering a possible solution for rerouting lost baggage.
Therefore, the aim of this project was to help reduce
the occurrence of lost baggage, as well as enhancing
the level of satisfaction of passengers and, thus, their
experience of the airline.
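One way a rerouting dashboard could propose a path for a mislaid bag is to treat the flight network as a weighted graph and search for the cheapest route to the bag's correct destination. The airports, connections and hour weights below are invented for illustration; the actual dashboard's routing logic may differ.

```python
import heapq

# Hypothetical flight network: airport -> [(neighbour, hours)].
FLIGHTS = {
    "FRA": [("MLA", 3), ("SIN", 12), ("IST", 3)],
    "IST": [("SIN", 10)],
    "MLA": [("FRA", 3)],
    "SIN": [],
}


def cheapest_route(origin, destination):
    """Dijkstra's algorithm: the shortest (in hours) route for the bag."""
    queue = [(0, origin, [origin])]
    seen = set()
    while queue:
        cost, airport, path = heapq.heappop(queue)
        if airport == destination:
            return cost, path
        if airport in seen:
            continue
        seen.add(airport)
        for nxt, hours in FLIGHTS.get(airport, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + hours, nxt, path + [nxt]))
    return None
```

For the Frankfurt-stranded bag in the example above, such a search would compare the direct Frankfurt-Singapore leg against connections and return the quickest option.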
With technology advancing rapidly, this study
contributes to a future with more efficient baggage
systems. For instance, future studies could consider
implementing such a dashboard within a physical airport
environment to further test its usefulness and impact.
Figure 1. Overview of a basic baggage-handling process
IoT-based environmental monitoring
system for use in a drone
DALE SCICLUNA SUPERVISOR: Dr Inġ. Trevor Spiteri
COURSE: B.Sc. (Hons.) Computer Engineering
Unmanned aerial vehicles (UAVs) — better known
as drones — are specialised devices equipped with
powerful motors that ensure they maintain stability
when in flight. Nowadays, drone technology and its
application have changed significantly. Drones were
first implemented solely for military purposes, to gather
intelligence by surveying specific destinations. However,
they have come to be used by the general public for
a variety of reasons, e.g., for commercial purposes
such as photography, or research purposes such as
environmental sensing, where the data collected from
specialised sensors would cover a hard-to-reach
geographical location.
This project proposes a drone that explores the
advancements of internet of things (IoT) technology and
advanced hardware, in order to determine its relevance
within the modern world in terms of environmental
health. A drone has been purposely designed and
fitted with environmental sensing capabilities through
IoT technology. The overall system consists of a
Raspberry Pi that acts as a secondary system,
communicating with a pre-built flight controller to
obtain GPS (global positioning system) information,
as well as the drone’s orientation and acceleration, through serial
communications. A suitable data-transfer protocol
was implemented to connect sensors to the secondary
system and to deliver any incoming data stream to the
user directly through an app or cloud database accessible
over a wi-fi connection.
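The data flow described above can be sketched as the assembly of a JSON payload combining flight-controller telemetry with environmental sensor readings before upload; the field names and values are illustrative, not the project's actual schema.

```python
import json


def build_payload(gps, orientation, acceleration, readings):
    """Combine flight-controller telemetry (obtained over the serial
    link) with environmental sensor readings into one JSON record
    ready for upload to an app or cloud database."""
    record = {
        "gps": gps,                    # (lat, lon) from the flight controller
        "orientation": orientation,    # degrees (roll, pitch, yaw)
        "acceleration": acceleration,  # m/s^2 per axis
        "sensors": readings,           # e.g. temperature, humidity, CO2
    }
    return json.dumps(record)
```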
To truly appreciate the versatility and the
importance of the system’s applications, it would be
necessary to gain a deep understanding of the current
situation humanity is facing on a daily basis, and also
the implications for the future. Negative effects on
the environment have increased due to the abusive
exploitation of natural resources and the increasing
pollution being pumped into the atmosphere, which
contributes to irregular temperatures, abnormal weather
patterns, global warming, decreased crop yield, among
other consequences.
Environmental sensing systems have proven to
be essential tools for predicting weather patterns in
order to grasp the full extent of the natural degradation
taking place, especially due to the increase of carbon
dioxide and pollutants. This is crucial in seeking to
avert the irrevocability of our impact on the planet,
possibly reversing major damage caused, in order that
the planet would not be left with a crippled ecosystem.
The significance of drones in such applications is
immense. They have enabled researchers to cover
locations and areas that were previously out of reach.
Furthermore, such devices not only provide cost-effective
and efficient solutions, when compared to
on-ground methods, but drones could also provide
comprehensive coverage of an area. Moreover, in
view of their network connectivity, the data collected
by drone could be transmitted seamlessly through a
centralised database, where it would be safeguarded
and duly analysed. It would then be possible to generate
predictions for the particular area, based on the
compiled data.
Figure 1. Overview of the architecture of the drone’s system
Environment monitoring system
using a wireless sensor network
MARK ZAMMIT SUPERVISOR: Prof. Inġ. Edward Gatt CO-SUPERVISOR: Dr Inġ. Trevor Spiteri
COURSE: B.Sc. (Hons.) Computer Engineering
Our health largely depends on the air we breathe.
Unfortunately, there have been countless fatal
accidents, as a result of unintentional gas poisoning.
Certain gases, like carbon monoxide, are odourless, so
it is not possible to detect them using one’s senses in
the event of a gas leak. Other gases, such as propane,
are extremely flammable and can cause devastating
explosions if leaks are neglected.
The main aim of this project was to tackle this problem
by using a wireless sensor network (WSN). The system
would involve a number of sensors able to measure
certain gases, to be placed at various points (e.g., near
gas hobs). These sensors would then communicate with
each other for a broader understanding of the situation.
Another key aspect of the project is that the proposed
system uses low-energy consumption to enable battery
operation, thus increasing the portability of the system.
Moreover, this would allow deployment in points not
connected to a power supply.
Nowadays, most persons use devices that rely on
radio technologies, like wi-fi and Bluetooth. However,
these technologies have two major shortcomings,
namely: energy consumption and range. As most
persons have found out in their daily usage, in the
absence of repeaters, the range could at best cover a
single floor. To overcome such issues, this project has
worked with a relatively new radio technology: LoRa (an
abbreviation of ‘long range’).
LoRa has two major advantages: low energy
consumption and a broad range, thus solving
the problems associated with wi-fi. Through this
technology, the user could choose to use either direct
power supply (through wall sockets) or long-life
batteries. Furthermore, LoRa is known to be able to
communicate over several kilometres where a line-of-sight
is present between transmitter and receiver. This
adds a significant degree of flexibility to the deployment
scenarios of the system.
Figure 1. Block diagram of the system
The end devices, called sensor nodes, would collect
information about their surrounding environment, such
as temperature, humidity, and gas concentrations. The
micro-controller is the ‘brain’ of the sensor node and
processes this information. After a period of time, it uses
LoRa to transmit this data wirelessly to the base station.
This is the overarching ‘control room’, which would receive
data from all sensor nodes in the system and upload the
data relating to the environment to the internet, meaning
that the user would be able to view this data in real time.
Should gas concentrations reach dangerous levels, the
base station would send an e-mail to the user to alert them
to take the necessary actions, thus ensuring user safety.
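The base station's alerting decision can be sketched as a simple threshold check over the latest readings from each sensor node; the gas names and threshold values below are illustrative, not calibrated safety limits.

```python
# Hypothetical danger thresholds (parts per million).
THRESHOLDS = {"carbon_monoxide": 50, "propane": 2100}


def check_alerts(node_readings):
    """Given {node_id: {gas: ppm}}, return (node_id, gas, ppm) triples
    that exceed a threshold and should trigger an e-mail alert."""
    alerts = []
    for node_id, readings in node_readings.items():
        for gas, ppm in readings.items():
            limit = THRESHOLDS.get(gas)
            if limit is not None and ppm >= limit:
                alerts.append((node_id, gas, ppm))
    return alerts
```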
The proposed system could be used extensively and in
various scenarios. One example would be a deployment of
the system in fields to monitor soil conditions, swapping
gas sensors for sensors for soil humidity etc., which
would be battery-operated. This software would be able
to send data to the farmer every few hours. The base
station could be located at the farmer’s home, providing
the farmer with a full overview.
The flexibility of the proposed application allows for
implementing more functions. For instance, a farmer
could add the option for switching on automatic watering
systems. Overall, this project provides a proof-of-concept
for a low-energy WSN.
A private, secure and decentralised MANET
intended for P2P messenger applications
GABRIEL APAP SUPERVISOR: Prof. Inġ. Victor Buttigieg
COURSE: B.Sc. (Hons.) Computer Engineering
NETWORKS AND TELECOMMUNICATIONS
The internet and instant messaging have brought
about considerable advancements in communication.
Combined, they have enabled voice and video calls,
media sharing, and messaging virtually in real time,
irrespective of whether the persons seeking to
communicate are in the same room or half-way across
the world from each other.
Although the advantages of this progress in
communication channels are indisputable, the
infrastructure that made all this possible has serious
drawbacks. While the internet was designed to be
decentralised, servers running online applications
and internet service providers have rendered it highly
centralised, with a small number of critical points of failure. Moreover, this
makes it vulnerable to being controlled by governments
and large corporations.
While the above might not be an issue for the persons
using these communication services to text friends and
family, it could be a major issue for rescue workers,
activists and journalists. For instance, a team of rescue
workers, scrambling to co-ordinate efforts to save
persons in the aftermath of an earthquake, bombing or
hurricane, would struggle if all communication would be
down due to torn internet cables or felled cell towers.
Another scenario would be activists and journalists
communicating over the internet with whistleblowers,
with all their messages passing through potentially
compromised servers or routers, or their traffic being
logged by an internet service provider that would relay
user data to governments.
The proposed solution uses a decentralised network
made up of small, lightweight and low-power nodes
to create a mesh network connecting members of a
group. The specific network topology (organisational
structure) is called a mobile ad hoc network (MANET).
This is a network where each node is both a transmitter
and a receiver, passing on messages directly from the
sender to the recipient. There is no central server or
authority. Moreover, if one user goes offline, the network
would adjust to finding a new path for the messages. It
is flexible, scalable, and decentralised.
This system does not rely on any fixed infrastructure,
like cell towers or cables, with each individual pocket-sized
device connecting a user to the network.
The proposed device is made up of a Digi XBee
SX868 and an ESP32 microcontroller. The XBee device
forms part of the long-range MANET, communicating
with other XBee devices on the same network. The
device itself makes use of a variety of techniques that
make it ideal for this use case, including the use of
encryption to keep all communication hidden from any
attacker that might succeed in accessing the network,
its resistance to interference, and its low power and
spread spectrum techniques that make the signal
undetectable.
The ESP32 would then act as an access point
providing a local wi-fi network, allowing standard
devices such as phones and laptops to connect as
normal. The packets received by the access point
would then be encapsulated into XBee protocol frames
and manipulated so that they could travel over the
MANET, while appearing as standard wi-fi traffic to the
user devices.
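The encapsulation step can be sketched as wrapping each wi-fi packet's bytes in a simple frame with a length field and a checksum before it is handed to the radio. The frame layout shown is invented for illustration and is not the actual XBee protocol format.

```python
import struct


def encapsulate(packet: bytes) -> bytes:
    """Wrap a raw wi-fi packet for transport over the mesh:
    2-byte big-endian length, payload, 1-byte additive checksum."""
    checksum = sum(packet) & 0xFF
    return struct.pack(">H", len(packet)) + packet + bytes([checksum])


def decapsulate(frame: bytes) -> bytes:
    """Recover and verify the original packet at the receiving node."""
    (length,) = struct.unpack(">H", frame[:2])
    payload, checksum = frame[2:2 + length], frame[2 + length]
    if sum(payload) & 0xFF != checksum:
        raise ValueError("corrupted frame")
    return payload
```

On the receiving side, the recovered packet would be re-injected into the local wi-fi network, so user devices see ordinary traffic.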
Altogether, this system could come online very
quickly and promises to enable highly secure and private
communication channels. This technology could prove
crucial to the work of rescue teams, activist groups and
cells of journalists.
Figure 1. Messaging over fixed infrastructure versus a mobile ad hoc network (MANET)
Development of a 2D-ECC system for
enhanced error correction in memory systems
MATTEO FALZON SUPERVISOR: Prof. Inġ. Victor Buttigieg
COURSE: B.Sc. (Hons.) Computer Engineering
Safeguarding the integrity of data is crucial, in particular
ensuring that the information stored in devices remains
error-free. This project aimed to investigate methods
for correcting errors in digital information, which is
critical during storage mainly due to surrounding noise.
Specifically, the set objective entailed implementing
error correction codes (ECCs) in memory devices,
which are fundamental to modern digital storage.
MATLAB, a popular software for mathematical
tasks, was used for this project. The motivation
behind the project was to build such systems, which
can enhance the reliability of data by correcting as
many as possible of the errors that occur during storage.
During encoding, the ECCs add parity bits
to the original data, enhancing its reliability. This is done by
using two distinct codes: one for the rows and another
for the columns. These parity bits act as buffers similar
to protective wrapping in a package. The rows are
encoded with one type of code that adds a certain
pattern of parity bits, while the columns are encoded
with another, ensuring two layers of protection. This
dual-encoding process helps to detect and correct any
errors that could occur during data storage, much like
double-checking the safety of the package from every
angle before it would be dispatched.
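The dual row/column encoding can be illustrated with the simplest possible product code: a single parity check on each row and column. The project used stronger component codes, so this is only a toy sketch of the principle.

```python
def encode(data):
    """Append a parity bit to each row, then a parity row of column
    parities (including a parity-of-parities corner bit)."""
    rows = [row + [sum(row) % 2] for row in data]
    parity_row = [sum(col) % 2 for col in zip(*rows)]
    return rows + [parity_row]


def correct_single_error(block):
    """Locate and flip a single bit error via the failing row/column."""
    bad_rows = [i for i, row in enumerate(block) if sum(row) % 2]
    bad_cols = [j for j, col in enumerate(zip(*block)) if sum(col) % 2]
    if bad_rows and bad_cols:
        i, j = bad_rows[0], bad_cols[0]
        block[i][j] ^= 1
    return block
```

A flipped bit breaks exactly one row parity and one column parity, and their intersection pinpoints the error, which is the cross-checking idea that stronger 2D codes generalise.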
The project required testing two error models,
namely: the random error model, where bits would be
flipped at random, and the hybrid error model, in which
consecutive bits would be flipped, as well as bits flipped
at random clusters.
One of the objectives of the project was
developing decoding techniques to restore data to its
original, error-free state. This was achieved through
the implementation of three distinct decoders.
The first utilised MATLAB’s toolkits for iterative
decoding. Iterative decoding is particularly suited to a
2D-structured approach, common in memory devices,
because it allows for systematic error-checking and
correction in two dimensions. In practice, this means
that data encoded along one dimension (rows) could
be independently checked and corrected, and then
a similar process could be applied along the other
dimension (columns). This cross-checking between
dimensions (rows and columns) increases the
likelihood of identifying and correcting errors that might
be missed if only one dimension were considered.

Figure 1. BER comparison of the three different decoders for a product code
In contrast, GRAND (Guessing Random Additive
Noise Decoding) and its more advanced counterpart,
IGRAND, adopt a different strategy. GRAND searches
through all potential error patterns to find the correct
one, while IGRAND applies more sophisticated
techniques to tackle complex error patterns. Notably,
GRAND and IGRAND are also iterative techniques, but
they differ from the MATLAB toolkit approach. The latter
employs algebraic decoding, which uses mathematical
structures to deduce the original data, whereas GRAND
variants are based on searching through error patterns.
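The core GRAND loop can be sketched as follows for a tiny binary code: error patterns are guessed in order of increasing weight (lightest, i.e. most likely, first), and the first pattern whose removal yields a valid codeword is accepted. The parity-check matrix used here is the (7,4) Hamming code, chosen purely to keep the sketch small.

```python
from itertools import combinations

# Parity-check matrix of the (7,4) Hamming code (illustrative only).
H = [
    [1, 1, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 1],
]


def syndrome(word):
    """Parity-check results; all-zero means `word` is a codeword."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]


def grand_decode(received, max_weight=3):
    """Guess noise patterns lightest-first; return the first candidate
    codeword whose syndrome is zero."""
    n = len(received)
    for weight in range(max_weight + 1):
        for positions in combinations(range(n), weight):
            candidate = received[:]
            for p in positions:
                candidate[p] ^= 1
            if not any(syndrome(candidate)):
                return candidate
    return None
```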
From the results obtained, the IGRAND method
performed best, being the most effective at
correcting the errors that were introduced during
storage. GRAND performed second best, and
MATLAB’s decoder ranked last, as shown in Figure 1.
In summary, the project validated the initial hypothesis,
confirming that innovative decoding techniques could
improve error correction.
Implementation of a visual traffic-data
system over FM-RDS and SDR technology
DYLAN GATT SUPERVISOR: Prof. Inġ. Victor Buttigieg
COURSE: B.Sc. (Hons.) Computer Engineering
Being stuck in traffic has become a common frustration
for many motorists. Whether due to rush
hour or unexpected road closures, navigating through
congested roads is often challenging. While radio
announcements and text alerts are helpful, they often
lack the visual context for grasping the situation better.
This final-year project has sought to address the matter
by seeking to improve upon traditional traffic updates.
Current methods, like radio announcements or text
alerts, often do not give motorists a clear picture of
the traffic situation. This lack of clarity could
generate confusion and further delays, as drivers try to
make informed decisions about their routes. The aim
was to use existing technology to develop a system
that would deliver visual traffic updates in real time.
Providing drivers with clear visual information could
improve their ability to deal with traffic more efficiently.
The proposed system uses FM-RDS (Radio Data
System) and software-defined radio (SDR) technology.
At its core, the system consists of a transmitter and
receiver that work together to transmit traffic update
images over FM radio frequencies. This
would ensure that motorists would receive visual
updates about the traffic conditions by first receiving an
audio notification, followed by the actual traffic image.
Moreover, the system incorporates network capabilities,
enabling remote access to the transmitted and received
signal graph plots and control of the transmitter and
receiver parameters through an IP address and port
configurations. This allows for seamless
management and customisation of the system settings.
Throughout the development process, a number
of challenges were encountered, including optimising
the transmission process for efficient data delivery
and ensuring compatibility with existing FM-RDS
infrastructure. However, through rigorous testing
and fine-tuning, a robust solution was eventually
implemented.
FM-RDS, a digital subcarrier technology utilised
in FM radio broadcasts, transmits digital data, such
as song titles and artist names. This project has
demonstrated that it could also include visual traffic
updates. On the other hand, SDR technology allowed
for the flexible and programmable implementation of
radio communication systems using software-based
techniques. GNU Radio Companion
was then employed, along with the GR-RDS module, to
develop and deploy the system. For the actual signal
transmission, the HackRF One SDR was employed. For
the reception of the broadcast, an RTL-SDR dongle was
used.
The successful implementation of the visual
traffic-data system could be considered a significant
step forward in traffic-management communication
systems. The proposed system has the potential to
improve road safety and reduce traffic congestion,
leading to a more efficient and enjoyable driving
experience for all.
The project has demonstrated the power of
technology in addressing real-world challenges. By
combining FM-RDS and SDR technology, it was possible
to devise a solution that could significantly improve
traffic information offered to motorists. It is hoped
that, in the future, traffic updates would be not merely
informative but also visually engaging, while not losing
sight of driver safety.
Figure 1. A real-time accident alert generated by the proposed system
Implementation of hardware-accelerated
LDPC decoding
MARK MIZZI SUPERVISOR: Prof. Johann A. Briffa
COURSE: B.Sc. (Hons.) Computing Science
Many real-world systems rely on the transmission of a
signal across an unreliable medium. A radio broadcast,
for example, involves the transmission of an audio
signal across the atmosphere through radio waves.
Signals transmitted in this manner would be prone to
distortion, and to the introduction of errors in the
received data by various physical phenomena.
The need to correct errors when signals are
transmitted over an unreliable channel has resulted in
the development of encoding schemes called error-correcting
codes (ECCs). These encoding schemes are
applied to the signal before transmission in such a way
that a certain number of introduced errors could be
corrected when decoding the received signal.
Among the most widely used, state-of-the-art
ECCs is the family of low-density parity-check (LDPC)
codes. These codes can correct an arbitrarily large
number of transmission errors, close to the theoretical
Shannon limit. However, the excellent error-correcting
performance of these codes comes at the cost
of computational complexity, with better-performing
codes in the family being more costly to decode. For
this reason, the field of LDPC codes has largely revolved
around finding efficient algorithms to make the use of
better codes feasible.
Hardware acceleration through the use of field-programmable
gate arrays (FPGAs) and general-purpose
computing on graphics processing units
(GPGPU) programming environments, such as Nvidia’s
CUDA C/C++, also plays a key role in the implementation
of effective LDPC decoders.
The goal of this project was to implement and
evaluate LDPC decoding on the Nvidia CUDA C/C++
platform. This platform provides a parallel computing
environment, which could be exploited to significantly
improve the performance of many algorithms. The
‘belief propagation’ algorithm at the heart of most
LDPC decoders is no exception, and can benefit greatly
from the process known as parallelisation, which was
the main task undertaken in the project.
The developed implementation of LDPC decoding
also supports encoding/decoding over Galois extension
fields. The reason for this is that many of the LDPC
codes approaching the Shannon limit are defined over
GF(2^k) with considerable code lengths. In addition, the
run-time complexity of the decoder was reduced from
quadratic to log-linear through the use of Hadamard
transform matrices.
Figure 1. Demonstration of the use of an LDPC error-correcting
code
Communication systems consist of various
components, besides the error-correcting encoder/
decoder pair. These include the communication
channel used to transmit the signal, and potentially a
modulator/demodulator pair which serves to convert
the signal to/from a representation that would be
suitable for transmission over the channel. Properly
evaluating an implementation of LDPC codes would
entail the simulation of all these components.
The SimCommSys framework was used as a
simulation harness for the implemented LDPC decoder.
This open-source framework provides simulations for
several potential components of a communication
system, as well as the ability to gather statistics
about the performance of ECCs simulated within the
framework. SimCommSys allowed the implemented
decoder to be evaluated using realistic simulations.
NETWORKS AND TELECOMMUNICATIONS
University of Malta • Faculty of ICT 85
Creating a Maltese-English dual-language
word embedding
MELANIE ATTARD SUPERVISOR: Prof. John Abela
COURSE: B.Sc. IT (Hons.) Software Development
NATURAL LANGUAGE PROCESSING
The Maltese language, spoken by just over 500,000
persons globally, faces unique challenges in the field
of natural language processing (NLP). Unlike widely
spoken languages, such as English or Spanish, Maltese
lacks the substantial linguistic resources necessary for
cutting-edge language technologies.
In the world of computational linguistics, the limited
availability of high-quality linguistic datasets and tools
for Maltese acts as a barrier to advancements in the
area. These resources, which include comprehensive
datasets and specialised language-processing tools, are
crucial for training computer systems to understand and
generate human language effectively. This scarcity of
resources not only affects the development of language
technologies but also hinders research initiatives
focusing on Maltese.
This research set out to use word embeddings
to overcome some of these limitations and open up
more opportunities for the development of language
technologies in Maltese. Word embeddings help
computers understand words in a manner that would be
similar to the way humans do so, by associating words
with meaningful representations in a high-dimensional
space. For instance, instead of organising a batch of
words alphabetically, these words could be arranged
according to their context, i.e., how they’re used together
in sentences. Words that tend to appear together, e.g.,
‘puppies’ and ‘adorable’, would be placed closer to each
other in this virtual space. Should a computer encounter
a word that it doesn’t recognise initially, it could seek
its neighbours in this space to understand its meaning.
Therefore, if it encounters the word ‘kittens’ for the first
time and sees that it is close to ‘puppies’ and ‘cute’, it
would guess that ‘kittens’ refers to something adorable.
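This neighbourhood intuition can be sketched with a few hand-made vectors. The words and coordinates below are invented for illustration; real embeddings are learned from text and have hundreds of dimensions:

```python
import numpy as np

# Hand-made 2-D 'embeddings' (invented values, purely illustrative).
emb = {
    "puppies":  np.array([0.90, 0.80]),
    "adorable": np.array([0.85, 0.75]),
    "cute":     np.array([0.80, 0.90]),
    "kittens":  np.array([0.88, 0.82]),
    "engine":   np.array([-0.70, 0.10]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def neighbours(word, k=2):
    """The k words whose vectors point in the most similar direction."""
    sims = {w: cosine(emb[word], v) for w, v in emb.items() if w != word}
    return sorted(sims, key=sims.get, reverse=True)[:k]

print(neighbours("kittens"))  # pet-related words, never 'engine'
```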
In essence, word embeddings allow computers to
learn the subtle nuances and relationships between
words, even in languages like Maltese where resources
are limited. Overcoming these limitations was explored
by creating dual-language embeddings, using English as
the high-resource language. Dual-language embeddings
could be considered as a bridge between two languages
— in this case, between a resource-rich language, such as
English, and Maltese, a language with limited resources.
Using a dictionary of translations from English to
Maltese, it would be possible for the computer to transfer
knowledge from English and enhance its understanding
of Maltese.
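A minimal sketch of this bridging idea, assuming toy random vectors and a least-squares linear map (in the spirit of well-known cross-lingual mapping work) learned from a three-entry translation dictionary; all names and values below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
W_true = rng.normal(size=(3, 3))           # hidden 'relationship' between spaces
# Toy monolingual vectors (random, 3-dimensional, purely illustrative).
en = {w: rng.normal(size=3) for w in ["dog", "house", "water", "bread"]}
mt = {m: W_true @ en[e]                    # pretend the Maltese space is a
      for e, m in [("dog", "kelb"),        # linear transform of the English one
                   ("house", "dar"),
                   ("water", "ilma")]}

# Learn a linear map English -> Maltese from the small dictionary.
pairs = [("dog", "kelb"), ("house", "dar"), ("water", "ilma")]
X = np.stack([en[e] for e, _ in pairs])
Y = np.stack([mt[m] for _, m in pairs])
W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # least squares: X @ W ~= Y

# 'bread' has no Maltese vector; project it across the bridge.
projected = en["bread"] @ W
print(np.linalg.norm(projected - W_true @ en["bread"]))  # ~= 0
```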
To determine whether using English could enhance
the quality of embeddings and expand the vocabulary
in Maltese, comparisons between mono-lingual word
embeddings in Maltese and dual-language embeddings
in Maltese and English were conducted across various
tasks. Impressively, the dual-language embeddings
facilitated the learning of 3,060 new Maltese words
by capitalising on English embeddings and a small
translation dictionary. Furthermore, they improved the
accuracy of identifying intruder words in sets of related
words by 15.86%, and demonstrated perfect clustering
of words into related groups, while the mono-lingual
embeddings also yielded robust results.
Figure 1. The process of
converting words into vectors that
could be visualised easily, where
words with similar meanings
are placed close to each other
(reproduced from https://medium.com/@hari4om/word-embedding-d816f643140)
BERTu Ġurnalistiku: Intermediate pre-training
of BERTu on news articles and fine-tuning
for question answering using SQuAD
ANDREA BORG SUPERVISOR: Prof. John Abela
COURSE: B.Sc. IT (Hons.) Software Development
Natural language processing (NLP) refers to the branch
of artificial intelligence (AI) that equips computers with
the ability to understand language in the same manner
that a human being does. NLP can be used to perform
a number of tasks that humans could carry out through
the use of language. This includes tasks such as
translation, summarisation, question answering (QA)
and sentiment analysis.
One of the tasks in NLP is reading comprehension,
otherwise known as extractive QA. Reading
comprehension refers to the ability to read (and
understand) a body of text, in order to answer any
questions regarding the text using the same body of text
as a context. An AI model trained for extractive QA would
receive a context and question as input, and would extract
a verbatim answer for that question from the context.
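The task can be caricatured with a purely lexical baseline that returns a verbatim span from the sentence overlapping most with the question. A trained model such as BERT learns far subtler matching; nothing below comes from the project itself:

```python
def extract_answer(context, question):
    """Return the words of the best-matching sentence that are not
    already in the question -- a crude verbatim 'span'."""
    q = {w.lower().strip("?.,") for w in question.split()}
    sentences = [s.strip() for s in context.split(".") if s.strip()]
    # pick the sentence sharing the most words with the question
    best = max(sentences,
               key=lambda s: len(q & {w.lower().strip(",") for w in s.split()}))
    answer = [w for w in best.split() if w.lower().strip(",") not in q]
    return " ".join(answer)

ctx = "Malta joined the EU in 2004. Valletta is the capital city of Malta."
print(extract_answer(ctx, "Which country joined the EU in 2004?"))  # -> Malta
```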
The field of NLP has seen significant advancements
with the advent of transformer-based models called
large language models (LLMs) such as BERT. The
training of LLMs usually follows the same procedure.
Firstly, in a process called pre-training, a very large
unlabelled dataset consisting of text from various
domains would be used. During this phase,
the LLM would build a general understanding of the
structure of the language, learning to recognise words,
syntax, grammar, and some semantic relationships.
Secondly, a smaller, homogeneous corpus would be used
to further pre-train an LLM to enable it to understand
the language used in a particular domain. This is a
process often referred to as further or intermediate
pre-training. This helps the model to grasp the specific
language used within that domain by understanding its
jargon. Lastly, the programmer could use a dataset for
fine-tuning the LLM to perform a downstream task,
such as extractive QA.
In 2022, a group of researchers from the University
of Malta published a paper explaining their work in
creating the Korpus Malti V4, a large textual dataset
in Maltese, and BERTu, a Maltese BERT-based model
pre-trained on the aforementioned corpus. This final-year
project built upon the said advancements for the
Maltese language, which has been under-represented
in NLP research due to being a low-resource language,
thus impinging on the availability and quality of datasets
in Maltese.
The current project involved the intermediate pre-training
of BERTu on a corpus of news articles from
various local sources. This corpus was obtained through
web scraping, a technique that programmatically
extracts data from websites. This process was
intended to enrich BERTu’s understanding of Maltese-language
usage in journalistic contexts.
Furthermore, the project also involved fine-tuning
BERTu for extractive QA using the Stanford Question
Answering Dataset (SQuAD). This dataset comes in
two versions for training and evaluating QA
models. However, since SQuAD is in English, it was
necessary to find an efficient means of translating
SQuAD into Maltese without compromising the
retention of the answer spans.
Finally, the project also involved the development of
a user interface to facilitate the interaction with the
developed models in a user-friendly and transparent
manner.
NATURAL LANGUAGE PROCESSING
Figure 1. Pre-training and fine-tuning procedures for BERT
(source: Devlin et al., 2018)
A machine learning solution for
cyberbullying detection on social media
STEPHANIE CREMONA SUPERVISOR: Prof. Joseph G. Vella
COURSE: B.Sc. IT (Hons.) Software Development
NATURAL LANGUAGE PROCESSING
Machine learning (ML) has come to be used extensively,
with the result that it has a significant impact on daily
life activities and industrial processes. For example,
ML is used to predict crop yields in agriculture and to
facilitate clinical drug trials in healthcare.
With the ever-growing volume of available data, ML
applications have become an increasingly attractive
technology, including within the field of digital forensics
(DF). One area of interest for DF investigators
concerns cyberbullying, which occurs when individuals
purposefully and persistently cause harm to another
person through electronic means. ML-driven tools are
used to assist investigators in sifting through, and
classifying, volumes of data in criminal cases.
A cyberbullying detection tool strives to classify
harmful messages and automate their separation from
other digital data as accurately as possible. By efficiently
and effectively detecting instances of cyberbullying,
it aids investigators in gathering evidence, addressing
incidents, and preventing future occurrences, thus
complementing the investigator’s skills and expertise.
Nonetheless, deploying such a tool would require it to
offer a high level of robustness, secure processing and
data management. Moreover, there are ethical issues to
be taken into consideration.
The objective of this research was to develop
a cyberbullying detection tool by combining natural
language processing (NLP) techniques, ML algorithms,
and the DistilBERT model. In this project, different
machine learning algorithms were used to build models
by training and testing them for the detection of
cyberbullying. These models are: k-nearest neighbor
(k-NN), naive Bayes (NB), and support vector machine (SVM). These
algorithms used a dataset that is an aggregation of
datasets including the ‘aggression parsed dataset’
and the ‘cyberbullying multi-label dataset’. All datasets
are publicly available (e.g., on Kaggle.com) and are
anonymous.
Furthermore, the DistilBERT model, which is based
on the Bidirectional Encoder Representations from
Transformers (BERT) base model, was fine-tuned for
the task of cyberbullying detection using the above-mentioned
datasets. BERT offers a deep contextual
understanding of language use, enabling the model
to capture context-specific cues associated with
cyberbullying.
The combination of traditional NLP techniques, ML
algorithms (k-NN, SVM, NB), and DistilBERT allowed
the development of a comprehensive and robust
cyberbullying detection framework. The algorithms
were evaluated for their performance by using standard
metrics such as accuracy, precision, recall, and F1-score.
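The k-NN component, for instance, can be sketched over a bag-of-words representation. The four training messages below are invented for illustration; the project used far larger public datasets:

```python
import math
from collections import Counter

# Tiny labelled corpus (invented messages, not the project's datasets).
train = [
    ("you are so stupid and ugly", 1),     # 1 = cyberbullying
    ("nobody likes you go away", 1),
    ("great game last night friend", 0),   # 0 = benign
    ("happy birthday hope you enjoy", 0),
]

def vectorise(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_predict(text, k=1):
    """Majority label among the k most similar training messages
    (k=1, given the tiny corpus)."""
    v = vectorise(text)
    sims = sorted(((cosine(v, vectorise(t)), label) for t, label in train),
                  reverse=True)
    top = [label for _, label in sims[:k]]
    return int(sum(top) > k / 2)

print(knn_predict("you are stupid"))     # -> 1 (flagged)
print(knn_predict("great game friend"))  # -> 0 (benign)
```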
During the implementation of the ML tool, a number
of challenges and issues had to be addressed. These
include: ascertaining the quality and relevance of the
selected data; carefully engineering the features for
the models needed; identifying the appropriate models
to address the problem; fine-tuning the model’s
hyperparameters; preventing overfitting; and ensuring
that the models would be interpretable. Additionally,
there were challenges related to the complexity of
the ML process. The dataset also had to be limited to
40,000 records, due to a lack of computational power.
The DistilBERT model was trained for five epochs.
Validation metrics indicated that the model generalised
well to unseen data. In the fifth epoch, the model
achieved a precision of 0.86, a recall of 0.92, and an
F1-Score of 0.89.
Figure 1. The machine learning lifecycle
Brain-to-text
KYLE DEMICOLI SUPERVISOR: Prof. Alexiei Dingli
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
The concept of translating thoughts directly into
text has long been an ideal in science fiction. Thanks
to constantly advancing technology, this notion is
becoming all the more possible. The main idea behind
this research was to use an electroencephalogram
(EEG) device to record brain activity, and then use
machine learning (ML) to translate these signals into
predetermined words.
The human brain is a natural wonder, capable of
producing a vast array of ideas and thoughts. Our
brains generate distinct electrical patterns when we
want to express ideas. The first challenge was to
capture these patterns using an EEG device. The next
hurdle was seeking to convert these complex patterns
into the actual words that would convey as faithfully
as possible what the person was thinking.
The underlying hypothesis of this study was that
ML algorithms could be used to translate particular
thought patterns into words. In other words, the goal
was to develop a system that could ‘read’ mental
activity and identify the word being considered from a
predetermined list consisting of ten words.
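The classification task can be caricatured with a nearest-centroid baseline on synthetic feature vectors. The vocabulary, the random 'signatures', and the noise model below are all invented, and are far simpler than the transformer-based system used in the project:

```python
import numpy as np

rng = np.random.default_rng(42)
vocab = ["yes", "no", "water", "help", "stop"]
# Pretend each word produces a characteristic 16-dimensional
# brainwave 'signature' (random vectors, purely illustrative).
signatures = {w: rng.normal(size=16) for w in vocab}

def classify(recording):
    """Nearest-centroid decoding: return the vocabulary word whose
    signature is closest (in Euclidean distance) to the recording."""
    return min(vocab, key=lambda w: np.linalg.norm(recording - signatures[w]))

# Simulate a slightly noisy recording of a person thinking 'water'.
recording = signatures["water"] + rng.normal(scale=0.1, size=16)
print(classify(recording))  # -> water
```

The hard part described in this project is precisely that real EEG signals are nowhere near this clean, which is why generalisation to unseen data is the central obstacle.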
In order to address this, a system was created
using transformers as the operative ML architecture.
Transformers have revolutionised the field of natural
language processing (NLP), mainly in view of their ability
to process sequential data extremely efficiently. This
technology seemed optimal for the purpose of this
study, as it was the most likely one to identify the
distinct patterns of brainwaves linked to every word.
The greatest obstacle in creating the brain-to-text
system was the limited way it could apply its training-data
knowledge to new, unseen data. Despite being
able to understand words it had been trained on, the
system frequently misinterpreted or was unable to
identify brainwave patterns for new data. This issue
brought to light the intricacy of brainwave-pattern
identification, as well as the challenge of developing
an ML model that could process and translate the wide
range and complexity of human thought accurately.
The model is still a work in progress, with ongoing
efforts to improve its efficiency in successfully
learning and interpreting more patterns. Concurrent
testing is also being done to investigate whether data
from a single person would produce better results
than combined data from different participants. This
is based on the theory that people may interpret the
same word in different ways.
This study sheds light on the intricate relationship
between neuroscience and ML. It provides fresh
opportunities for research in the field of thought-to-text
conversion. As mentioned above, this does
not come without its hurdles. However, the potential
uses of this technology are numerous, ranging from
facilitating communication for persons with speech
difficulties to providing novel approaches to human-computer
interaction.
In brief, the process of converting ideas into text
through EEG and ML is challenging but full of promise.
This investigation into the language of the brain
has highlighted the significance of generalisation,
adaptability, and precision that ML offers. This paves
the way for more studies in this exciting field, where
technology and neuroscience meet.
NATURAL LANGUAGE PROCESSING
Figure 1. The workflow of the proposed system
The Quest of the Voynich Cipher
SHAIZEL VICTORIA BEZZINA SUPERVISOR: Dr Colin Layfield CO-SUPERVISOR: Prof. Ernest Cachia
COURSE: B.Sc. IT (Hons.) Software Development
Educational games are interactive software applications
that are specifically designed with the goal
of teaching or facilitating learning in an engaging and
enjoyable way. Such games are created to keep users
simultaneously focused and entertained, while being
educated on various subjects and learning new skills
and concepts. Since gamification offers a new direction
for learning, students have displayed a keenness to
engage in the classroom, as well as to achieve better
learning outcomes.
The idea behind the educational game The Quest
of the Voynich Cipher emerged from a fascination with
historical mysteries and video games. The Voynich
manuscript (VM), with its mysterious content, and
limited awareness among the general public, presents
itself as the perfect subject to explore. The main goal
was to create an immersive and interactive digital
experience that would allow the users to discover the
various sections of the VM and its mysteries.
From the moment the players launch the game, they
are transported to the mysterious world of the VM, where
its content awaits to be discovered and deciphered.
Through the visuals and mysterious elements in the
game, the users would be immersed into a world of
mystery. The Quest of the Voynich Cipher allows
exploring the six sections of the VM one at a time.
Users would be encouraged to learn more about what is
known to date about each section, while going through
a combination of puzzles, challenges and interactive
storytelling.
While the game offers an engaging and entertaining
experience for the users, its educational value is
paramount to those who were previously unfamiliar
with the VM or to those who would like to know more
about it. Each puzzle and challenge in the game has
been designed to teach the user something new about
the sections of the manuscript. By engaging with the
manuscript’s content in a hands-on manner, users
could gain insight into the manuscript’s mysteries
firsthand.
Whether The Quest of the Voynich Cipher is played
at home or in a classroom setting, it would still serve
as an interactive learning experience, since the game
is intended to transform passive learning into an active
and immersive experience for those who play it.
The Quest of the Voynich Cipher offers its players
a unique opportunity to delve into the mysterious
world of the VM. Providing the players with a learning
tool, the game not only increases their knowledge of
the manuscript, but also triggers their imagination in
seeking to understand the content of the manuscript.
SOFTWARE ENGINEERING & WEB APPLICATIONS
Figure 1. One of the screens from The Quest of the Voynich Cipher game
A study to measure the effectiveness
of a job-recommendation algorithm
DANIEL CALLEJA SUPERVISOR: Dr Conrad Attard
COURSE: B.Sc. IT (Hons.) Software Development
A job-recommender system provides users with
personalised job suggestions,
according to their skills, job experience, education, and
preferences. It helps narrow down the numerous options
to facilitate the selection process for job seekers.
This project has evaluated various recommendation
algorithms to identify the most effective one.
Additionally, it involved developing a user-friendly
mobile application for job recommendations. This
application utilises a selection of shortlisted algorithms,
to suggest suitable jobs to job seekers, an example of
which is provided in Figure 1. This application aims to
establish a platform that would enhance the matching
of job seekers with relevant job opportunities, thus
engendering beneficial connections between candidates
and employers.
The research utilised two datasets: one comprising
a collection of job listings and another containing user
information similar to a condensed curriculum vitae
(CV). The collection of jobs was obtained through
website scraping and was formatted into a comma-separated
values (CSV) file. This dataset was created by
employing various Python scripts to scrape jobs
from both the JobsPlus and Konnekt websites. The
extracted fields from the websites included: the
reference number, job title, location, job type, general
job information, and salary details. On the other hand,
the user dataset was created using Google Forms.
This form was subsequently shared with individuals
from diverse professional experiences and educational
backgrounds. This helped ensure that a broad
spectrum of data was available to effectively test
the recommendations. The form collected the user’s
name, surname, current occupation, current and
previous employment titles, highest level of education,
and particular skills.
The shortlisted algorithms included two content-based
filtering algorithms, namely: cosine similarity
and term frequency-inverse document frequency (TF-
IDF). These algorithms would match the textual job
descriptions with the user’s CV to identify how closely or
distantly related they are. Additionally, the item-based
collaborative filtering algorithm was also identified as
Figure 1. A search-results screen of the proposed
job-recommendation mobile application
highly effective in recommending relevant job postings,
based on user preferences. This algorithm utilised user
interactions within the application to identify the jobs
most similar to those with which the user had already
interacted, allowing recommendations to be generated
on the fly.
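The content-based matching step can be sketched with a tiny TF-IDF and cosine-similarity ranker. The job adverts and CV text below are invented for illustration:

```python
import math
from collections import Counter

# Invented miniature corpus: two job adverts and one condensed CV.
docs = {
    "junior-dev": "software developer python web junior",
    "chef": "kitchen chef cooking restaurant staff",
    "cv": "computer science graduate python software projects",
}

n = len(docs)
df = Counter(w for text in docs.values() for w in set(text.split()))

def tfidf(text):
    """Term frequency weighted by smoothed inverse document frequency."""
    tf = Counter(text.split())
    return {w: tf[w] * math.log((1 + n) / (1 + df[w])) for w in tf}

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b))

# Rank the jobs by similarity to the CV.
cv_vec = tfidf(docs["cv"])
ranked = sorted((name for name in docs if name != "cv"),
                key=lambda name: cosine(cv_vec, tfidf(docs[name])),
                reverse=True)
print(ranked)  # -> ['junior-dev', 'chef']
```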
This job recommendation system offers numerous
benefits. It has the potential to help direct fresh
graduates to the most suitable jobs for them, based
on their experience and qualifications. Furthermore, it
would assist them in avoiding vacancies likely to require
more experienced candidates, thereby preventing
prolonged periods of unemployment.
Nowadays, online recruitment platforms provide
job seekers and employers alike with an overwhelming
number of options. The proposed job-recommendation
engine would be ideal, given its ability to filter out non-relevant
job vacancies, ultimately also minimising the
frustration and uncertainty that many job seekers
experience, especially if looking for their first job.
SOFTWARE ENGINEERING & WEB APPLICATIONS
A rule-based DSL for the creation of gameplay
mechanics for team-based sports
JULIAN FALZON SUPERVISOR: Dr Sandro Spina
COURSE: B.Sc. (Hons.) Computing Science
SOFTWARE ENGINEERING & WEB APPLICATIONS
This project revolves around creating and fine-tuning a
dynamic, simulation-based handball game designed to
test different sets of rules and game-play mechanics
to discover the most balanced and enjoyable version.
This simulation involved crafting a new kind of handball,
where the size of the ball, the speed of the players,
and even the rules themselves could change. The goal
was to find the perfect combination that would render
the game fair, challenging, and fun for everyone. The
simulation process was empirically guided by observing
how these changes would affect the game (e.g., the
level of ease/difficulty of scoring a goal, or whether one
team were to win with certain settings).
Game mechanics were adjusted over a number
of simulation runs, until the formula ensuring the
most balanced and enjoyable game was defined. The
hypothesis for this research was the premise that
modifying certain elements of a handball game, such
as the ball’s size, player statistics, team strategies,
or game rules, could have a significant impact on the
overall balance and enjoyment of the game. The goal
was to identify a set of parameters that would make the
game fair and competitive for all participants but also
engaging and fun to play.
In order to address the challenge, a dynamic
simulation environment was created using Three.js,
which is a JavaScript library that allows for the
rendering of 3D graphics in a web browser, thus
enabling the visualisation of the simulation runs. This
environment was designed to model a game where
various parameters, such as player speed, ball size, and
game rules, could be adjusted dynamically. The aim was
to observe how these changes would affect game-play
outcomes and balance.
The key components of the solution:
1. Configurable game elements: the simulation
included mechanisms to easily adjust game
parameters like the ball’s physics properties
(e.g., size and bounce) and player behaviours.
2. Autonomous player agents: players within
the simulation were designed as autonomous
agents with basic rule-based AI capabilities,
allowing them to make decisions based on the
current state of the game, such as chasing the
ball, defending their goal, or passing the ball to
teammates.
3. Interactive testing environment: the setup
allowed for both automated simulation runs,
where the computer would control all aspects
of the game, and interactive modes, where
human inputs could adjust variables in real time
to see the effects of changes immediately.
One significant discovery was the fragility of the
balance of game-play. Small changes in parameters,
such as player speed or ball size, had far-reaching
effects on game dynamics, underscoring the complex
interplay between different elements of the game. As
the complexity of the simulation increased, especially
with higher numbers of autonomous agents and more
detailed physics calculations, performance could
degrade, thus influencing the smoothness of the
simulation. This required careful optimisation of the
code, and occasionally the simplification of models, to
maintain an acceptable performance level.
The proposed solution offers a powerful tool for
exploring how different game configurations could
affect player experience, highlighting the importance of
balance and adaptability in game design.
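The rule-based decision logic of such autonomous agents can be sketched as a priority-ordered set of rules. The rule names, state fields, and thresholds below are invented; the project's actual agents were implemented within the Three.js simulation:

```python
def decide_action(agent, state):
    """Return an action for `agent`; the first rule that fires wins."""
    if state["ball_holder"] == agent:               # I have the ball
        if state["teammate_open"]:
            return "pass"
        return "shoot" if state["dist_to_goal"] < 10 else "dribble"
    if state["ball_holder"] in state["opponents"]:  # the other team has it
        return "defend_goal"
    return "chase_ball"                             # loose ball or teammate

state = {"ball_holder": "p1", "teammate_open": False,
         "dist_to_goal": 7.5, "opponents": {"p3", "p4"}}
print(decide_action("p1", state))  # -> shoot
print(decide_action("p2", state))  # -> chase_ball
```

Keeping the rules declarative like this is what makes it cheap to swap them in and out between simulation runs when searching for a balanced configuration.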
Figure 1. Game simulation on specific settings
Automobile computer security
and communication issues
MATHIAS FRENDO SUPERVISOR: Dr Clyde Meli
COURSE: B.Sc. IT (Hons.) Software Development
Thinking of the network within a car as the vehicle’s
nervous system, much like that of humans, one
would become aware that it is a complex web of
communication pathways that connects the brain (the
car’s computer) to its vital organs (the engine, brakes,
and lights). In this digital age, what is traditionally
viewed merely as a means of transport has evolved into
a far more sophisticated machine.
Cars have become complex entities composed
largely of electronic parts. At the heart of this
development is the controller area network (CAN bus)
system, which facilitates smooth communication
between the car’s critical components. However, as
our vehicles become more interconnected, they also
become more vulnerable to cyber threats.
This study presents an assessment of the CAN bus,
which is a vital component of the vehicle communication
system, with the aim of strengthening the defence
against cyber threats. Indeed, such research underlines
an imperative push into probing more vulnerabilities of
the CAN bus and, at the same time, sets a clarion call
for establishing optimal security mechanisms in line
with the evolving nature of automotive technologies.
In this area, therefore, enhancing the security of the
CAN bus in use would not merely improve safety but also
the general reliability of vehicle communication.
The proposed system consists of the development
of a simulation environment that emulates the CAN
bus network with the greatest possible fidelity.
It would allow for methodical testing through the
emulation of many different cyberattack scenarios
made on the system but, most importantly, would
focus on man-in-the-middle (MITM) attacks and data
spoofing.
An MITM attack is characterised by an unauthorised
entity intercepting, and proceeding to alter, the
communication between two networked systems. On the other hand,
spoofing refers to data or message falsification so that
it appears to have originated from a valid source within
the network.
Figure 1. A vehicle’s CAN bus system with a rogue
device injecting messages into the network
The simulation method designs scenarios to test
potential attacks on vehicle networks, specifically
focusing on MITM attacks. It assesses how an attacker
might intercept and alter messages between a vehicle’s
electronic control units (ECUs), using spoofing to insert
false messages and mislead the ECUs into performing
unintended operations or providing incorrect information.
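The vulnerability being exercised can be sketched in a few lines: CAN frames identify the message, not the sender, so any node on the bus can inject data under a legitimate ID. The IDs and values below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    arbitration_id: int   # identifies the message type, not the sender
    data: int

class Bus:
    """Broadcast medium: every listener sees every frame."""
    def __init__(self):
        self.listeners = []
    def send(self, frame):
        for listener in self.listeners:
            listener(frame)

SPEED_ID = 0x0C4          # invented ID for a 'vehicle speed' message
dashboard = {"speed": None}

def dashboard_ecu(frame):
    # the receiving ECU trusts any frame carrying the expected ID
    if frame.arbitration_id == SPEED_ID:
        dashboard["speed"] = frame.data

bus = Bus()
bus.listeners.append(dashboard_ecu)
bus.send(Frame(SPEED_ID, 50))    # genuine sensor reading
bus.send(Frame(SPEED_ID, 120))   # spoofed frame from a rogue device
print(dashboard["speed"])        # -> 120: the forged value is accepted
```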
Intended to mimic real-life cyberattacks, such
simulations have a sophisticated architecture with
the goal of providing realistic responses of the CAN
bus network when under such threats. The research
also sought to identify the primary security flaws
within the security architecture of the network. In this
manner, and through iterative testing with empirical
analysis, it was possible to develop ways through which
these vulnerabilities could effectively be reduced.
Such experiential simulation exercises are key to
advancing network security, since they help improve the
precision of countermeasures against more sophisticated
cyber threats. The project merged automotive
engineering with the strategies of cybersecurity, which is
a combination that pushes the protection of automotive
technology to a much higher level. It guarantees that
the cars are well-equipped against the ever-increasing
cybersecurity threats.
SOFTWARE ENGINEERING & WEB APPLICATIONS
The evaluation of various web-application
firewalls in the presence of malicious behaviour
ISMAEL MAGRO SUPERVISOR: Dr Clyde Meli
COURSE: B.Sc. IT (Hons.) Computing and Business
Web-application security faces ongoing challenges
in view of the persistent evolution of malware. This
constitutes a pressing concern regarding the efficacy
of web-application firewalls (WAFs) due to the rapidly
evolving cybersecurity threats. The primary function of
WAFs is to detect malicious activities, acting as a shield
between web applications and potential attackers.
Hackers are motivated by various reasons, including
disrupting services, financial gain, or executing denial-of-service
attacks, and commonly employ techniques
such as SQL injection and cross-site scripting (XSS).
It is believed that WAFs could potentially mitigate such
attacks, making their evaluation crucial.
This research aimed to contribute to the knowledge
of two WAFs, namely NAXSI and ModSecurity. The reason
behind the selection of these two specific WAFs was
their capability to continue to operate offline. Moreover,
a closed internal network needed to be created to
provide support to any existing vulnerable data in a web
application.
A number of steps were required in order to achieve
the above. The first involved creating an internal network,
which consisted of five virtual machines using Oracle VM
VirtualBox. The Kali Linux machine was portrayed as the
attacker machine, representing the hacker. Additionally,
since two different WAFs were being compared, respective
web servers and virtual web applications needed to be
created.
Since the selected WAFs operated solely off the web
server, a ruleset needed to be implemented to protect
against a broad spectrum of attacks. This was adapted
from OWASP (Open Worldwide Application Security
Project), a non-profit foundation specialising in
web-application security. The attacks used were also
adapted from the OWASP Top Ten, a widely used
reference for web-application security risks.
Through testing, it was found that both WAFs failed
to protect against a certain type of cross-site scripting
that allowed for the use of JavaScript, which is a
programming language used to develop web pages. This
is generally considered dangerous because, if hackers
were to replace the harmless ‘alert’ function with malicious
code, many unauthorised or unintended actions could be taken. This
includes redirecting victims to a fake website to capture
data. On the other hand, both WAFs successfully
blocked SQL injection with the help of Nginx. Other
types of cross-site scripting were also blocked.
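The rule-based filtering that WAFs such as NAXSI and ModSecurity perform can be illustrated with a short Python sketch. The patterns, scores, and threshold below are simplified assumptions for illustration only, not the actual OWASP or NAXSI ruleset:

```python
import re

# A toy, rule-based request filter loosely inspired by how anomaly-scoring
# WAFs work: each matching rule adds to a score, and the request is blocked
# once the score reaches a threshold. Rules and scores here are illustrative.
RULES = [
    (re.compile(r"<script\b", re.IGNORECASE), 8),                # classic XSS
    (re.compile(r"\bon\w+\s*=", re.IGNORECASE), 8),              # event-handler XSS
    (re.compile(r"\bunion\b.+\bselect\b", re.IGNORECASE), 8),    # UNION-based SQLi
    (re.compile(r"('|\")\s*or\s+1\s*=\s*1", re.IGNORECASE), 8),  # tautology SQLi
]
BLOCK_THRESHOLD = 8

def score(param: str) -> int:
    """Sum the scores of every rule that matches the request parameter."""
    return sum(s for rx, s in RULES if rx.search(param))

def is_blocked(param: str) -> bool:
    return score(param) >= BLOCK_THRESHOLD

# A payload this toy ruleset misses entirely, echoing the project's finding
# that some XSS variants slip past a given ruleset:
evasive = '<a href="javascript:alert(1)">click</a>'
```

Because the ruleset has no pattern for `javascript:` URLs, the `evasive` payload scores zero and passes through, while the SQL-injection and `<script>` payloads are caught; this mirrors why missing rules must be added to the adopted rule set.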
This project has confirmed that WAFs should be
used in tandem with other defence mechanisms and not
as a solution in themselves. This was demonstrated when
neither NAXSI nor ModSecurity protected against all of
the XSS attacks tested. The solution for this issue would
be to either include the specific rule in the adopted rule
set or to use another web-application firewall, such
as Cloudflare, that would constantly update itself. It is
believed that the latter would be the preferred option.
SOFTWARE ENGINEERING & WEB APPLICATIONS
Figure 1. A successfully launched attack
94
Faculty of Information and Communication Technology Final Year Projects 2024
The visibility and effectiveness
of a 3D supermarket
JACOB SALIBA SUPERVISOR: Prof. Lalit Garg
COURSE: B.Sc. IT (Hons.) Software Development
Grocery shopping online can be quite daunting
for some. This was established through a
literature review to understand the changing landscape
of the customers’ expectations of their online shopping
experience.
At present, websites offering online grocery
shopping are not as user-friendly as one would expect.
This could be due to the perception that shoppers
prefer to visit the supermarket in person, and request
that their purchases be delivered straight to their door,
instead of ordering online. One of the main reasons for
this is that, when going to the physical supermarket,
customers would normally already know where to find
the products they need, especially if they are regulars.
However, when using the website, customers would
have to search through the catalogue or remember the
exact name of the product they want. Consequently,
customers tend to find supermarket websites
somewhat frustrating, making online shopping an
unnecessarily time-consuming task.
This context served as the main motivation
for exploring the hypothesis in further detail. This
served as the basis for developing a solution that would
provide consumers with a more efficient and enjoyable
shopping experience, namely through the applicability
of a 3D virtual supermarket. A 3D virtual supermarket
is a digital simulation of a real supermarket, which
the user could explore using virtual reality. It allows
the customer to walk through aisles, browse
through products and shop in a virtual environment.
Furthermore, since most shoppers would be regular
customers of a particular supermarket — and therefore
would be familiar with the layout of that supermarket
and the products on offer — the application of a 3D
supermarket would allow customers to manage online
shopping more efficiently and in an enjoyable manner.
After further research and a
literature review, it was possible to establish that,
while online supermarkets are numerous and popular,
the 3D element is lacking. Therefore, the next step
was to gain insight into the customers’ experience
of online supermarket shopping by conducting a
survey. Similarly, the perspective and experience of
the supermarket operators was also compiled, with a
view to providing them with feedback that would help
them offer an enhanced online experience to their
customers. With this data in hand, it was possible
to proceed to creating a prototype of a virtual 3D
supermarket that replicated an existing physical outlet.
This would allow the customer to take a virtual tour
through the supermarket.
Figure 1. A screenshot of the Little Greens Virtual Tour
The Kuula software was used in creating the 3D
virtual tour. The process entailed taking photos of
the actual supermarket and uploading them through
the said software. The uploaded images were then
edited and connected to each other, to create the 3D
visualisation.
Ultimately, the main reference point in developing
the prototype was to create a 3D virtual supermarket
that would be a faithful representation of the criteria and
views expressed by the participating consumers and
supermarket owners in terms of what they considered
to be contributing elements to the effectiveness of a
3D supermarket.
University of Malta • Faculty of ICT 95
Join our Epic Team
Are you ready to embark
on a rewarding career in
telecommunications?
At Epic, we’re a team of innovators
dedicated to connecting Malta to the
future. Our commitment to excellence
and cutting-edge technology drives us
to make a real impact on the lives of
thousands of customers every day.
Join the Epic IT & Technology team, discover
opportunities on epic.com.mt/careers
GRAMMAR
CORRECTOR
The ultimate online tool for
the Maltese language
ALANA BUSUTTIL
is using her one-year
MSc by Research
to develop an AI-driven
Maltese
grammar corrector
that checks spelling,
grammar, and
syntax. Here, she
tells us the process
of creating this
much-needed tool.
Microsoft Word’s spellchecker,
as well as software
like Grammarly, can be
a godsend when writing
essays, emails, or even a
cheeky status update. While not
perfect, the ease with which they
pick up spelling and grammatical
errors in English, as well as in other
languages, is sometimes mind-boggling,
but that can often belie just
how complex creating them can be.
Indeed, the amount of data
and work needed to develop these
systems has meant that the Maltese
language has had to do without,
leading to frustration and many a misspelt
word. But things may soon
change thanks to a piece of software
Alana Busuttil is working on, which
can check Maltese sentences for
errors.
‘When it comes to language,
there are numerous elements this
type of model needs to evaluate and
highlight,’ says Alana, who is a BSc
in Information Technology (Honours)
(Artificial Intelligence) graduate.
‘Over and above spelling mistakes,
this tool needs to look at grammar,
like tenses, conjugations, and verb
formations, as well as syntax.’
Alana is using Natural Language
Processing (NLP), which combines
computational linguistics with
statistical and AI models, allowing
computers to support and
manipulate human languages. The
spell-checker, therefore, needs to
learn what language errors are and
how to correct them, which is why
the researcher needs to feed the
system examples of both correct
and incorrect sentences. But this is
where one of the biggest hurdles lies.
While you would be correct in
assuming that all languages have
such grammatical rules, what
makes it harder to teach an AI
model to pick them up in Maltese
is the fact that data is scarce. This
is down to a multitude of reasons,
with the most pertinent being that
the language is only spoken by some
500,000 people worldwide, resulting
in it being used rather sparingly in the
online realm. This, when compared
to the numbers for English (around
1.46 billion) or Spanish speakers
(approximately 600 million), brings
things into perspective.
‘Even so, there is BERTu, a
language model that is pre-trained
on the Maltese tongue, which
means it has prior understanding
of the language,’ Alana continues.
‘I intend to fine-tune the model
by introducing it to two versions
of several sentences: one that’s
correct and another that has
misspelt words or grammatical
errors within it. This data will be
based on real-life examples, rather
than purely synthetic datasets,
which should make the grammar
corrector much more reliable.’
Yet BERTu is only the first step,
as Alana has also tested numerous
other models, including a multi-step
model pipeline that extracts
and processes data. She’s also
created a website where a user
is asked to type out sentences
spoken in Maltese. By disabling the
“Backspace” key and stopping
participants from amending their
answers, Alana can then compare
the typed-out sentences with ones
that are known to be grammatically
correct, helping in creating real-life
examples with which to train the
model.
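The comparison step Alana describes — matching typed-out sentences against known-correct ones to harvest real-life training examples — can be sketched in Python using a standard sequence aligner. The Maltese words below are illustrative examples, not items from her dataset:

```python
import difflib

def extract_corrections(typed: str, reference: str):
    """Align a participant's typed sentence against the known-correct
    reference and return (typed, correct) pairs wherever they differ --
    candidate training examples for the grammar corrector."""
    typed_words = typed.split()
    ref_words = reference.split()
    pairs = []
    matcher = difflib.SequenceMatcher(a=typed_words, b=ref_words)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "replace":  # a differing span: typed form vs correct form
            pairs.append((" ".join(typed_words[i1:i2]),
                          " ".join(ref_words[j1:j2])))
    return pairs

# Hypothetical example: an undotted, misspelt sentence vs its correction
pairs = extract_corrections("bongu kif inti", "bonġu kif int")
```

Each harvested pair couples a real-world error with its correction — exactly the kind of correct/incorrect sentence pairing the model is fine-tuned on.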
As the system is still under
development, there are no
definite targets for when or if it
will be rolled out for general use.
Nevertheless, through her research,
Alana has discovered that using one
model for grammar error detection
and another for corrections is
more promising than any previous
attempts at creating a Maltese
spell-checker. In fact, her findings
have brought us closer to finally
having such a system.
What makes this even more
exciting is that the grammar error
corrector’s scope goes beyond
helping native speakers with their
written Maltese. It can also be used
by language learners of different
levels and abilities… Could this
potentially inspire more people
around the world to learn our
language? We’ll have to wait and
see about that!
Data security
in the age
of quantum
computing
Data is one of our most coveted assets, but how do we
keep it safe when computer power is increasing? Here,
four researchers explain their work on how Quantum Key
Distribution could bolster secure communications.
Data is an invaluable resource
that allows the modern world
to operate. It’s also a bona fide
currency that can be harvested,
stored, and sold; sometimes
without the knowledge of the people
it concerns. This is why companies
have worked on powerful algorithms
to encrypt it, and lawmakers have
passed bills to protect it. But what
happens when computer power
surpasses the fences that have
been built to keep our data safe from
prying eyes?
This issue is what’s driving four
Research Support Officers (RSOs)
within the Faculty of ICT to determine
how things may change, and the
reason behind it is both fascinating
and terrifying.
Data, be it on people’s health,
banking, or governments, is so
valuable that many rogue agents will
try to intercept and collect it. They
can then illegally sell it to the highest
bidder, who may be looking to better
understand people’s shopping habits,
use it to blackmail individuals, or even
weaponise it against nations and
their people.
Of course, there are measures in
place to protect this data. Currently,
for example, when data is transmitted,
say from one computer to another, it
gets encrypted. This means that it
is scrambled at the source and then
unscrambled at the receiver’s end
using a special “key”. This scrambling
process is often in the shape of a
hard-to-solve problem that can be
solved instantly with the key or which
would take an impractical amount of
time to decipher even with the fastest
supercomputers of our age.
But, as Dr Inġ. Christian Galea,
who is an RSO IV basing his postdoctoral
research on the topic,
explains, things are changing: ‘Once
quantum computers, which will be
exponentially more powerful than
today’s supercomputers at tackling
specific classes of problems, become
available, these agents may be able to
decipher these encrypted messages
at a much faster rate.’
To counteract this, there are two
paths one can take: create encryption
schemes designed to withstand
attacks by quantum computers or
use Quantum Key Distribution (QKD).
‘QKD is a scheme by which
information is secured through
mechanisms guaranteed by the laws
of the universe, meaning that no
amount of computational power can
ever break it,’ he continues.
‘This can be done using the principles
of quantum mechanics, where
photons (light-transmitting particles)
change when observed, so anyone
trying to “eavesdrop” on the data
being transmitted or the encryption
keys being set up, would introduce
errors and be detected in this process.
For anyone who has the key,
however, the data transfer would
work pretty much as it always has.’
Ryan Debono, Maria Aquilina, Dr Inġ. Christian Galea, & Aaron Abela
QKD is made possible through
protocols whose security needs to
be verified before it can be used:
‘One way of going about this is to
analyse the relationship between
the key rate, which is how many key
bits per second can be transferred,
and information disclosure,’ explains
Ryan Debono, an RSO II who is
reading for his PhD. ‘The higher the
key rate and the less information
disclosed to an eavesdropper, the
better the protocol would be.’
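The detectability that Christian and Ryan describe can be illustrated with a toy intercept-resend simulation of a BB84-style protocol in Python. This is a simplified sketch of the principle, not the team's simulator, and all numbers come from the toy model rather than a real QKD link:

```python
import random

def bb84_sift(n_photons: int, eavesdrop: bool, rng: random.Random) -> float:
    """Toy intercept-resend simulation of a BB84-style protocol.

    Returns the error rate Alice and Bob observe in their sifted key.
    An eavesdropper measuring in random bases disturbs roughly a quarter
    of the sifted bits -- the detectable signature QKD relies on."""
    errors = sifted = 0
    for _ in range(n_photons):
        bit = rng.randint(0, 1)          # Alice's raw key bit
        a_basis = rng.randint(0, 1)      # Alice's encoding basis
        if eavesdrop:
            e_basis = rng.randint(0, 1)  # Eve measures in a random basis...
            # ...a wrong basis randomises the bit she resends to Bob
            sent_bit = bit if e_basis == a_basis else rng.randint(0, 1)
            sent_basis = e_basis
        else:
            sent_bit, sent_basis = bit, a_basis
        b_basis = rng.randint(0, 1)      # Bob measures in a random basis
        measured = sent_bit if b_basis == sent_basis else rng.randint(0, 1)
        if a_basis == b_basis:           # sifting: keep only matching bases
            sifted += 1
            if measured != bit:
                errors += 1
    return errors / sifted
```

Without an eavesdropper the sifted error rate is zero; with one, it settles near 25%, which is how "anyone trying to eavesdrop would introduce errors and be detected".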
Such an initiative won’t just
span a number of years, but also
a multitude of funded projects and
research areas across the whole
EU. Its aim is to develop and implement
a quantum communications
pipeline, enabling secure communications
starting from a national level
before eventually being extended to
an EU-wide and then global level.
Another area of research that’s
part of this initiative involves satellite
QKD, where satellites are used to
enable communications over longer
distances. Ryan is also part of this
investigation, and he’s looking into
the potential factors that can affect
the link quality between the satellite
and the ground, which will also be
part of the development of a full,
end-to-end QKD communication
system simulator.
‘This is vital in the development
of QKD communication systems
as this software aims to simulate
the processes occurring in a QKD
communication system, allowing
an analysis of the expected
performance to be obtained
in practice across a range of
parameters.’
Meanwhile, Aaron Abela, who is
also an RSO II and reading for his
PhD, is working on the code needed
to simulate the end-to-end QKD
communication system.
↗QKD is a scheme by
which information
is secured through
mechanisms
guaranteed by the laws
of the universe↙
Dr Inġ. Christian Galea
‘Since there is no end-to-end
QKD simulation system yet, we’re
working on understanding the
processes involved in extending
software that simulates traditional
communications systems to also
consider communication over
quantum channels using QKD,’
Aaron explains.
To help with this, the team
is developing several modules
intended to work together in a
common framework, with each
module being a subsystem with a
different function.
‘My research extends into
creating a system that corrects
the errors and removes the noise
from the data once it’s made its
way to the right receiver,’ he adds.
The fourth person working in
this area is Maria Aquilina, an RSO
I currently reading for her MSc.
Her role, however, is somewhat
different, as she is tasked with
presenting this complex topic
to the public in relatively simple
terms.
‘Communicating the research
that is currently ongoing within
our group to diverse audiences is
crucial in ensuring that the public
is aware of future threats and the
science that is helping to mitigate
them,’ she tells us.
‘My role here is to adapt
the concepts to the audience
at hand, so while the scientific
principles remain unchanged, the
language and manner used to
express them need to be easier
to understand and digest. This is
important because many people
use computers to communicate,
and we’re sure everyone wishes to
do so in a secure environment.’
While undoubtedly complex,
the work in this area shows the
importance of looking ahead.
It’s also a great reminder that
researchers are not ones to rest
on their laurels, instead working
proactively when it comes to the
greater good, which should help us
all sleep a little better at night.
L-Università ta’ Malta 101
Turning
radiotherapy
procedures
into child’s
play
Mark Agius &
Gavin Schranz
MARK AGIUS and GAVIN
SCHRANZ’s group project is
called the Rainbow Rabbit’s
Radiotherapy Journey mobile
app, and aims to help patients
aged between six and eight
better understand their cancer
diagnosis. Here, they explain the
concept and its potential benefits.
When diagnosed with a
serious illness, it is only
natural to have questions
about the procedures that
are going to be undertaken
and the equipment that will be
used. But while adults have the
benefit of maturity to understand
the situation, children may find it
somewhat harder to wrap their
heads around what’s happening.
This is something hospitals and
doctors know about. As Mark Agius,
who’s been a radiographer at Sir
Anthony Mamo Oncology Centre
for the past 10 years, explains,
‘Children aged six or over who are
diagnosed with cancer and who
require radiotherapy treatment are
spoken to by their healthcare team,
taken around the facility, shown the
equipment, and have the procedures
explained to them in terms that are
age-appropriate.’
↗Using something more
complex could have
limited access to [the
app]↙
Gavin & Mark
While reading for their Master’s
in Digital Health, however, Mark and
Gavin found that the literature showed a gap
in the procedure, particularly when
it comes to patients aged between
six and eight years old, who
may find it somewhat difficult to
cooperate during the five to seven
minutes of radiotherapy where
they’re required to lie absolutely
still.
It was here that Mark joined
forces with Gavin, who graduated
in Business and IT three years ago.
Together they started working on
an interactive mobile application
that would ease the process for
children by giving them a better
understanding of what’s happening,
as well as a sense of agency.
‘The application, which is called
Rainbow Rabbit’s Radiotherapy
Journey, is somewhat like an animated
children’s story that follows
Rainbow Rabbit from diagnosis to
treatment,’ Gavin explains. ‘It takes
Rainbow Rabbit through doctors’
appointments, ward environments,
radiotherapy scans, and treatment
procedures.’
To make it even more interactive
and familiar, the backgrounds Rainbow
Rabbit explores are actual pictures
of Mater Dei Hospital’s wards,
clinics, machines, and treatment
rooms. Meanwhile, the doctors,
nurses, radiographers, and other
healthcare professionals Rainbow
meets are other friendly animals
that explain to him exactly what
each room is, what every machine
is used for, and what is expected of
him at every stage. They also answer
a number of frequently asked
questions, such as when he’ll be
able to play again following each
round of treatment.
The Rainbow Rabbit’s
Radiotherapy Journey is meant to
be downloaded on a parent’s or
guardian’s mobile phone, ensuring
the child has access to it even at
home.
↗The choice of
technology is an
important lesson in
itself↙
To further encourage use of
the app, Mark and Gavin have also
worked on a number of gamification
elements, like a certificate that the
child can print and show to their
oncologist, as well as collectables
hidden in every room, which range
from toys to medical equipment.
‘The application’s role isn’t just to
offer explanations,’ Mark continues,
‘there is also another feature that
tracks the child’s emotional journey
throughout radiotherapy. This works
by having the patient pick one of
a number of images of Rainbow
Rabbit, in which he is
experiencing different emotions,
like anger, tiredness, sadness, or
happiness.
‘This way, both the parents
and the doctors are able to follow
the child’s emotional and mental
health statuses without the need to
repeatedly ask questions that may
irk them.’
One of the most special
elements of the Rainbow Rabbit’s
Radiotherapy Journey application
is its simplicity: there is no use of
virtual reality, augmented reality,
or machine learning here. Instead,
Gavin and Mark practised restraint
in an effort to make the app as
accessible and as user-friendly as
possible.
‘We explored many avenues
before we agreed on the app,’
Gavin says. ‘Apart from combining
Mark’s 10 years’ experience as a
radiographer and my knowledge of
developing apps, we consulted with
numerous professionals who work
directly on such cases, as well as
a number of academics, to gauge
what’s needed. A mobile phone that
can support an app is commonplace
today, but using something more
complex could have limited access
to it.’
While this application is still
in project form, it shows how ICT
can be used to help make life that
much easier for children who are
going through one of the toughest
periods of their lives. Meanwhile,
the choice of technology is an
important lesson in itself, proving
that we don’t always need to go for
the most advanced options to make
an impact.
Taking Maltese
into the realm
of the chatbot
Using English data that has been
translated into Maltese, three FICT
researchers are working on creating
the first chatbot that can understand
and reply in Malta’s native tongue.
Love them or hate them,
chatbots have become
commonplace on many
entities’ websites and social
media platforms. These
computer programs, which often
aim to simulate a customer support
agent, solve several issues by
being available 24/7, replying to
multiple clients simultaneously, and
efficiently answering frequently asked
questions.
Now, that technology may
soon be made available in the
Maltese language too, thanks
to an innovative conversational
chatbot that’s being worked on by
Dr Marthese Borg, who is carrying
out her post-doctoral research in
AI, and PhD in AI students Kurt
Micallef and Kurt Abela.
‘Training a new model to create
a chatbot in Maltese isn’t as
straightforward as it may sound,’
says Kurt Abela. ‘While there are
some amazing chatbots in English
and other languages and we have
used some of their data, Maltese
has its own specific characteristics.’
Among these is the fact that
the Maltese alphabet includes a
few letters with diacritic marks, such as the “ċ” and
the “ħ”, as well as digraphs such as “għ”.
Then again, not everyone uses these
when typing online. This, as well as
the reality that some people may
shorten (such as “hawn” to “aw”) or
even outright misspell words, means
that the trio had to go beyond the
obvious to make sure the chatbot
did its job properly. That’s why, when
it came to creating a new set of data
to feed this model for it to learn how
to interact with humans, mistakes
had to be included from the get-go.
Kurt Abela, Dr Marthese Borg, & Kurt Micallef
‘Creating the dataset was a
multi-step process,’ continues
Marthese. ‘Once we got the
examples in English, we used
ChatGPT to give us 20 variations
of that example. We then used our
own machine translation system to
translate them into Maltese, before
going through each one to edit and
proofread it in order to make it
sound more akin to the way a native
Maltese speaker would write them.
‘We then added several
examples written in improper
Maltese, such as replacing “bonġu”
with “bongu”, which may not seem
like a big deal but, for a chatbot, it
makes a huge difference.’
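That style of "improper Maltese" can also be generated programmatically. The sketch below is a simplified assumption of how such variants might be produced — it strips the dotted Maltese letters from a correct sentence to yield a (noisy, correct) training pair, leaving digraphs such as "għ" untouched:

```python
# Generate "improper Maltese" training inputs by dropping the dotted
# letters, e.g. "bonġu" -> "bongu". The mapping covers ċ, ġ, ħ, ż in
# both cases; other typing habits (abbreviations, misspellings) would
# need their own rules and are not modelled here.
DIACRITIC_MAP = str.maketrans({
    "ċ": "c", "Ċ": "C",
    "ġ": "g", "Ġ": "G",
    "ħ": "h", "Ħ": "H",
    "ż": "z", "Ż": "Z",
})

def strip_diacritics(sentence: str) -> str:
    """Return the undotted variant a casual typist might write."""
    return sentence.translate(DIACRITIC_MAP)

def make_training_pair(correct: str):
    """Pair a noisy input with its correct target for training."""
    return (strip_diacritics(correct), correct)
```

Pairing each correct sentence with its undotted twin gives the model examples of exactly the "bonġu"/"bongu" confusion described above.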
Moreover, BERTu, the first AI
language model for the Maltese
tongue, was used to help build the
Maltese language models, resulting
in a chatbot that is now finally in
training.
‘Next up, we’ll be looking at
how we can creatively generate
new data and responses, as the
current system works on predefined
templates,’ Kurt Micallef
adds. ‘While this allows the chatbot
to scale to unseen situations, we
also need to remain aware of the
fact that a chatbot can generate
what we call “hallucinations”, which
means that as it learns, it could also
start making things up.’
Currently, Finance, Banking,
and Insurance remain the scope
of this chatbot, with its ability
being limited to answering “simple”
questions about things like opening
hours, password creation, and
what documents are required when
seeking a home loan. Even so, this
could be scaled to other industries,
to understand code-switching
(alternating between Maltese and
English), and even to answer more
complex queries.
This project, which is funded by
Malta Enterprise, has so far been a
resounding success, and is on track
to be finalised by 2025. Even more
amazingly, the chatbot already has
a final client, which is the corporate
and tax consultancy company,
Cartesio LTD. The international
company is set to use its network
to disseminate the chatbot to
other businesses and entities that
operate locally, hopefully making
this chatbot in our national language
a fixture on many sites that target
Maltese people.
Reducing
inefficiency in
compressed
air systems
MSc by Research student JURGEN AQUILINA
is looking into how AI and IoT could solve one
of manufacturing’s biggest issues: leakages
and faults in compressed air systems.
Compressed air is a safe energy
source for a great number of
manufacturing businesses.
It’s used in everything
from powering automated
assembly lines to cleaning but,
like everything else, there are
downsides to using it. These include
the fact that the network of pipes
that leads this pressurised air from
the compressor to wherever it’s
needed is prone to leakages and
faults, making it rather inefficient at
times.
‘Currently, these leaks are
checked via audits, which are
done using ultrasonic sensors or
pressure decay tests,’ explains
Jurgen Aquilina. ‘For my master’s,
however, I’m investigating how
Internet of Things (IoT) and Artificial
Intelligence (AI) can help with this.’
IoT is the process by which
everyday objects are connected
to the internet through integrated
or embedded devices, such as
sensors. So far, Jurgen has
focused on this part of the project,
specifically on the identification of
the right communication protocol
to use.
As he explains, network
communication in the OSI model is made up
of seven layers: the Physical,
the Data Link, the Network,
the Transport, the Session, the
Presentation, and the Application
Layers. Each layer is required
in order for an object to be able
to communicate with a central
system that can alert workers in a
manufacturing plant that there is a
leakage or a fault.
Jurgen’s master’s centres
around the seventh of these layers,
and to test which Application Layer
protocol would work best, he used
a single microcontroller to send
randomly-generated data to a
gateway device, which acts as an
intermediary between the industrial
network and the internet.
‘In order to make it more
representative of an industrial
scenario, the methodology used
for this experiment included
considerations that were commonly
overlooked in past studies. For
example, wired network connections
were used instead of wireless ones,
since manufacturing plants would
generally favour the former.’
The experiment gave Jurgen
a better understanding of which
protocol offered the best in terms
of latency, jitter, throughput, and
bandwidth. In the end, it was clear
that a particular configuration of
the Message Queuing Telemetry
Transport (MQTT) protocol was the
best option for fast delivery of data
in the examined scenario.
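The metrics Jurgen compared can be computed from per-message timings. The Python sketch below uses made-up latency samples (not measured MQTT figures) and one simple definition of jitter, namely the mean absolute difference between consecutive latencies:

```python
# Computing two of the metrics used to compare Application Layer
# protocols: average latency and jitter. Jitter is taken here as the
# mean absolute difference between consecutive latency samples --
# one simple definition among several in common use.
def latency_stats(latencies_ms):
    """Return (average latency, jitter) in milliseconds."""
    avg = sum(latencies_ms) / len(latencies_ms)
    diffs = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    jitter = sum(diffs) / len(diffs)
    return avg, jitter

samples = [12.0, 15.0, 11.0, 14.0]   # hypothetical round-trip times in ms
avg_ms, jitter_ms = latency_stats(samples)
```

Lower averages and lower jitter indicate steadier delivery — the kind of comparison that singled out a particular MQTT configuration in the experiment.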
Armed with this knowledge,
Jurgen will now be setting his sights
on identifying the best protocol for
a wider range of scenarios, before
moving on to the AI side of things,
where he will use machine learning
to teach an algorithm when it ought
to inform workers about faults or
leakages.
Jurgen’s work is part of a larger
project called AirSave, which aims
to reduce compressed air system
inefficiency, something that costs
local companies a staggering €1-2
million and generates an excess
of approximately 6,000 tonnes of
CO₂ per year. Funded by the Malta
Council for Science & Technology,
AirSave is spearheaded by a
number of experts in sustainable
engineering, digital manufacturing,
IoT, computer systems, and
industrial automation.
This collaborative effort is giving
a number of students, including
Jurgen, the opportunity to work on
an exciting project that looks to help
both industry and the environment.
Indeed, it is a clear example of how
such projects fare better when
stakeholders put their heads and
resources together.
AN OVERVIEW
2023
Awards
Last year’s Awards Ceremony took place during the
opening of the FICT Exhibition, on Friday, July 7. In
total, the Faculty of ICT handed out 14 accolades
to students whose work had stood out.
Eight were part of the Dean’s List, which is
an award aimed at recognising students who
have achieved academic excellence during their
undergraduate degrees. To be considered for this list,
students must meet set criteria. Firstly, they must
obtain a final average of 80 or above, demonstrating
exceptionally high achievement across all study units
in their Faculty-approved course. Secondly, they must
have no cases of failed or re-sit study units in their
studies. This needs to be complemented by zero
reports of misconduct during the whole period of their
studies.
Following these, the Faculty presented three FYP
Awards to students whose final-year, undergraduate
projects were considered to be exceptional. These
winners were chosen by the External Examiners of
the five undergraduate programmes.
The final three prizes were presented as part of the
Best ICT Projects with Social Impact Awards. These
accolades were given out to Mr Jonathan Attard for
his automatic analysis of news videos; Ms Mariah
Balzan, who worked on a location-awareness system
for elderly people living with mild cognitive impairment;
and Ms Ruby Ai Young, for her personalised nutritional
health assistant project.
Apart from honouring its students, the Faculty also
recognised individuals who had shown extraordinary
commitment to FICT, as well as its students’
achievement and wellbeing. Last year’s award “For
sustained research in Theoretical Computer Science
and International Collaboration” went to Prof. Adrian
Francalanza, while the one “For excellent professional
engineering support given to students and Faculty” went
to Dr Inġ. Francarl Galea.
The FICT Exhibition and the Awards were attended
by the Hon. Keith Azzopardi Tanti MP, Parliamentary
Secretary for Youths, Research, and Innovation with the
Ministry for Education, Sports, Youths, Research and
Innovation. The end-of-year speech, meanwhile, was
delivered by Mr Adam Ryan Ali Farag, a then third-year
BSc IT (Hons) Software Development student.
On top of this, we are also celebrating two other
awards that were presented to FICT students during
other events. The first was the IEEE 2023 Award - Best
ICT Project, which was presented to Mr Cyrus Malik by
Chair IEEE Malta Section, Prof. Inġ. Edward Gatt, as well
as Prof. Simon Fabri, Pro-Rector of the University of
Malta, and Prof. Inġ. Carl James Debono, Dean of the
Faculty of ICT. The second was The Malta Engineering
Excellence Award, which was handed to Ms Gillian Anne
Gatt by Inġ. Malcolm Zammit, President of the Chamber
of Engineers, and Mr David Scicluna Giusti, Activities
Secretary COE.
As always, we congratulate all 2023 winners and
look forward to this year’s awards, which will take place
in July 2024.
The Dean’s List
All Dean’s List awards were presented by Prof. Inġ. Carl James Debono, Dean of the Faculty of ICT at the University
of Malta (middle). He is joined in some photos by Prof. Simon Fabri, Pro-Rector of the University of Malta (left).
Mr Kyle Agius.
Mr Jonathan Attard.
Mr Max Matthew Camilleri.
Ms Gillian Anne Gatt.
Mr Joseph Grech.
Mr Gabriel Hili.
Mr Jean Claude Sacco.
Ms Mariah Zammit.
The Best FYP Awards
The FYP Awards were also presented by Prof. Inġ. Carl James Debono, Dean of the Faculty of ICT at the University
of Malta.
The third prize went to
Mr Michael Vella.
The second prize was
awarded to Mr Benjamin
Borg, with Ms Fabiana Borg
accepting it on his behalf.
First prize was awarded
to Gabriel Vella.
Best ICT Projects with
a Social Impact Awards
The Awards were presented by (L-R) Prof. Inġ. Carl James Debono, Dean
of the Faculty of ICT at the University of Malta, and Ms Loranne Avsar
Zammit, Senior Project Leader at eSkills Malta Foundation. They are also
joined by Prof. Simon Fabri, Pro-Rector of the University of Malta, in one
of the photos.
The award with the citation
“For excellent professional
engineering support given to
students and Faculty” went to
Dr Inġ. Francarl Galea (left).
Mr Jonathan Attard.
Ms Mariah Balzan.
Ms Ruby Ai Young.
The award with the citation “For
sustained research in Theoretical
Computer Science and
International Collaboration” went
to Prof. Adrian Francalanza (left).
The Malta
Engineering
Excellence
Awards
Ms Gillian Anne Gatt won
the award, which was
presented by (L-R) Prof. Inġ.
Carl James Debono, Dean
of the Faculty of ICT at the
University of Malta; Inġ.
Malcolm Zammit, President
of the Chamber of Engineers;
and Mr David Scicluna Giusti,
Activities Secretary COE.
IEEE 2023
Award - Best
ICT Project
Mr Cyrus Malik was
presented with the award
by (L-R) Prof. Inġ. Edward
Gatt, Chair IEEE Malta
Section, and Prof. Inġ.
Carl James Debono, Dean
of the Faculty of ICT at
the University of Malta.
Speech by Prof. Simon Fabri, Pro-Rector of the University of Malta.
Speech by the Hon. Keith Azzopardi Tanti MP, Parliamentary Secretary for Youths, Research,
and Innovation with the Ministry for Education, Sports, Youths, Research and Innovation.
Speech by Inġ. Malcolm Zammit, President of the Chamber of Engineers.
Speech by Mr Adam Ryan Ali Farag, BSc IT (Hons.) Software Development student.
Speech by Prof. Inġ. Edward Gatt, Chair of the IEEE Malta Section.
Shape your
FUTURE
JOIN THE MAP IT TEAM NOW
SEND US YOUR CV ON
CAREERS@MAPITMALTA.COM
FACULTY OF ICT
Members of Staff
DEPARTMENT OF ARTIFICIAL
INTELLIGENCE
PROFESSOR
Professor Alexiei Dingli, B.Sc.I.T. (Hons.) (Melit.), Ph.D. (Sheffield), M.B.A.
(Grenoble)
Professor Matthew Montebello, B.Ed. (Hons)(Melit.), M.Sc. (Cardiff), M.A.
(Ulster), Ph.D. (Cardiff), Ed.D. (Sheff.), SMIEEE (Head of Department)
SENIOR LECTURERS
Dr Charlie Abela, B.Sc. I.T. (Hons)(Melit.), M.Sc. (Melit.), Ph.D. (Melit.)
Dr Joel Azzopardi, B.Sc. (Hons.) (Melit.), Ph.D. (Melit.)
Dr Josef Bajada, B.Sc. I.T. (Hons)(Melit.), M.Sc. (Melit.), M.B.A. (Henley), Ph.D.
(King's)
Dr Claudia Borg, B.Sc. I.T. (Hons.) (Melit.), M.Sc. (Melit.), Ph.D. (Melit.)
Dr Vanessa Camilleri, B.Ed. (Hons.)(Melit.), M.IT (Melit.), Ph.D. (Cov)
LECTURERS
Dr Ingrid Galea, B.Eng. (Hons)(Melit.), M.Sc. (Imperial), D.I.C., Ph.D. (Nott.),
M.B.A. (Lond.)
Dr Kristian Guillaumier, B.Sc. I.T. (Hons.) (Melit.), M.Sc. (Melit.), Ph.D. (Melit.)
Dr Konstantinos Makantasis, DipEng(TUC), MEng(TUC), Ph.D.(TUC)
Dr Dylan Seychell, B.Sc. I.T. (Hons.) (Melit.), M.Sc. (Melit.), Ph.D. (Melit.), SMIEEE
AFFILIATE ASSOCIATE PROFESSOR
Prof. Jean Paul Ebejer, B.Sc.(Hons)(Melit.),M.Sc.(Imperial),D.Phil.(Oxon.)
AFFILIATE SENIOR LECTURER
Dr Andrea De Marco, B.Sc.(Hons)(Melit.),M.Sc.(Melit.),Ph.D.(U.E.A.)
Mr Michael Rosner, M.A. (Oxon.), Dip.Comp.Sci.(Cantab.)
SENIOR VISITING LECTURER
Dr Vincent Vella, B.Sc.,M.Sc.,M.B.A.,Ph.D.
RESEARCH SUPPORT OFFICERS
Mr Kurt Abela, B.Sc. IT (Hons.)(Melit.),M.Sc. (Melit.) (Research Support Officer II)
Ms Arous Zaineb (Research Support Officer II)
Mr Andrew Emanuel Attard, B.Sc. IT (Hons.)(Melit.) (Research Support Officer I)
Mr Jonathan Attard (Research Support Officer II)
Mr Stephen Bezzina, B.Ed (Hons.)(Melit.), M.Sc. Digital Education (Edinburgh)
(Research Support Officer II)
Mr Luca Bondin, B.Sc. IT (Hons)(Melit.), M.Sc. AI (Melit.) (Research Support Officer II)
Dr Marthese Borg, B.A (Hons)(Melit.), M.A.(Melit.), Ph.D.(Melit.) (Research Support
Officer III)
Ms Lara Bua B.A. (Hons.)(Melit.), M.A.(Melit.) (Research Support Officer II)
Mr Quentin Bugeja, B.A. (Gen.)(Melit.), M.Trans.(Melit.) (Research Support Officer II)
Mr Liam Bugeja Douglas (Research Support Assistant)
Ms Laura Camilleri Sghendo B.A. (Hons)(Melit.), M.A. (Melit.) (Research Support
Officer II)
Ms Rachel Cauchi B.A. (Melit.), M.Trans. (Melit.) (Research Support Officer II)
Mr Carl Paul Delmar B.A. (Hons)(Melit.), M.Trans. (Melit.) (Research Support Officer
II)
Mr Gabriel Hili (Research Support Officer I)
Mr Kurt Micallef, B.Sc. IT (Hons.)(Melit.), M.Sc. (Glas.) (Research Support Officer II)
Mr Athanasios Papathanasiou (Research Support Officer II)
Mr Benjamin Joseph Spiteri (Research Support Officer Assistant)
Mr Jurgen Stellini (Research Support Officer Assistant)
ADMINISTRATIVE STAFF
Ms Michelle Agius, H.Dip.(Admin.&Mangt.)(Melit.) (Administration Specialist)
Mr Elton Mamo (Administration Specialist)
RESEARCH AREAS
Ongoing research
Title: AI meets the Maltese
Courts: Safe use of AI to imProve
efficiency using the Small Claims
Tribunal (AMPS) as a model
Task: Investigating the use of
Natural Language Processing
and Machine Learning to predict
the outcome of Maltese court
cases, specifically those within
the Small Claims Tribunal
Academics: Dr Ivan Mifsud (Dept.
of Public Law) and Dr Charlie
Abela, Dr Joel Azzopardi (Dept. of
AI) (MCST Research Excellence)
Title: Language Data Space
Task: European Level Access
to Language Data
Coordinator: Mr Michael Rosner
Title: Nexus Linguarum
(COST Action)
Task: Investigation and extension
in linguistic data science using
Linked Open Language Data
Coordinator: Mr Michael Rosner
Title: ENEOLI (COST Action)
Task: European Network
on Lexical Innovation
Coordinator: Mr Michael Rosner
Title: ERICA: Learning
Causal Models of Affect
Task: Exploitation of causal
inference tools towards building the
next generation of affect models
Coordinator: Dr Konstantinos
Makantasis
Title: AIMS-Lab
Task: AIMS-Lab will set up an
audio-visual lab to research and
develop next-generation e-learning
content, applying cutting-edge R&D
to transform the e-learning industry
by automating the creation of
high-quality educational content.
Coordinator: Prof. Matthew
Montebello (Sponsored by MDIA
through the MAARG funding)
Title: Video Conferencing
of the Future (VCF)
Task: An intelligent AI-enabled
video-conferencing solution
designed to monitor
audience engagement.
Coordinator: Prof.
Matthew Montebello
Title: Smart Athlete Analytics
through real-time tracking
Area: AI & IoT
Coordinator: Prof.
Matthew Montebello
Title: LearnML
Task: Creation of a resource kit
and guide for teachers to teach
concepts of AI to young people
Coordinator: Institute
of Digital Games
Title: UPSKILLS
Task: Creation of a collection of
resources for higher education
courses relating to linguistics
and language students
Coordinator: Institute of Linguistics
Title: medicX-KE
Task: Predicting explainable
drug-drug interactions using
Knowledge Graphs
Coordinator: Dr Charlie Abela
Title: AI4Manufacturing
Task: Investigate the application
of Machine Learning techniques in
areas related to external/internal
failure analysis and predictive/
prescriptive maintenance.
Coordinator: Dr Charlie
Abela (in collaboration with
STMicroelectronics (Malta) Ltd)
Title: MDIA for Maltese
Text Processing
Task: The creation of
computational tools to process
the Maltese Language
Coordinator: Dr Claudia Borg
Title: MDIA for Maltese
Speech Processing
Task: The creation of a Maltese
Spoken Corpus to facilitate
Maltese Speech Recognition
Coordinator: Dr Claudia Borg
Title: MASRI+
Task: Commercialisation of
Maltese Speech Technology Tools
Coordinator: Dr Claudia Borg
and Dr Andrea De Marco
Title: LT-Bridge
Task: Integrating Malta into
European Research and
Innovation efforts for AI-based
language technologies
Coordinator: Dr Claudia Borg
Title: The new era of Chatbot
Task: Creation of a chatbot
in Maltese in the domains of
finance, insurance and banking.
Coordinator: Dr Claudia Borg
Title: Detection of litter
from drone imagery
Task: Using computer vision
techniques to detect litter
in rural landscape
Coordinator: Dr Dylan Seychell
Title: UniDive - Universality,
diversity and idiosyncrasy
in language technology
Task: Ensuring language diversity
in language technologies with
a focus on low-resource
languages such as Maltese
Coordinator: Dr Claudia Borg
Title: Exploring Visual Bias in News
Content using Explainable AI
Task: Using computer vision
and explainable artificial
intelligence techniques to assist
in the analysis and mitigation of
visual bias in news content
Coordinator: Dr Dylan Seychell
Completed research
Title: NLTP - National
Language Technology
Platform (traduzzjoni.mt)
Task: Maltese Automatic
Translation aimed at
Public Administration
Coordinator: Dr Claudia Borg
Title: ELE - European
Language Equality
Task: Assessing the use of
language technologies in Malta
and establishing a national
research agenda for Maltese
Language Technologies
Coordinators: Dr Claudia Borg
and Mr Michael Rosner
Title: EnetCollect – Crowdsourcing
for Language Learning
Area: AI, Language Learning
Coordinator: Dr Claudia Borg
Title: Augmenting Art
Area: Augmented Reality
Task: Creating AR for meaningful
artistic representation
Title: Smart Manufacturing
Area: Big Data Technologies
and Machine Learning
Title: Analytics of patient flow
in a healthcare ecosystem
Area: Blockchain and
Machine Learning
Title: Real-time face
analysis in the wild
Area: Computer vision
Title: RIVAL: Research in
Vision and Language Group
Area: Computer Vision/NLP
Title: Medical image analysis and
Brain-inspired computer vision
Area: Intelligent Image Processing
Title: Notarypedia
Area: Knowledge Graphs
and Linked Open Data
Coordinator: Dr Charlie Abela
and Dr Joel Azzopardi
Title: Language Technology
for Intelligent Document
Archive Management
Area: Linked and open data
Title: Maltese Language
Resource Server (MLRS)
Area: Natural Language Processing
Task: Research and creation
of language processing
tools for Maltese
Coordinator: Dr Claudia Borg
Title: Autonomous Diagnostic
System (ADS)
Task: Investigating the use of deep
learning methods and graph-based
approaches to detect anomalies
Coordinator: Dr Charlie Abela (in
collaboration with Corel Malta Ltd)
Title: Learning Analytics,
Ambient Intelligent Classrooms,
Learner Profiling
Area: AI in Education
Coordinator: Prof.
Matthew Montebello
Title: Language in the
Human-Machine Era
Area: Natural Language Processing
Title: Smart animal breeding
with advanced machine
learning techniques
Area: Predictive analysis, automatic
determination of important features
Title: Morpheus
Area: Virtual Reality
Task: Personalising a
VR game experience for
young cancer patients
Title: Walking in Small
Shoes: Living Autism
Area: Virtual Reality
Task: Recreating a first-hand
immersive experience in autism
Title: eCrisis
Task: Creation of framework
and resources for inclusive
education through playful
and game-based learning
Title: cSDGs
Task: Creation of digital
resource pack for educators
to teach about sustainable
development goals, through
dance, storytelling and games
Coordinator: Esplora
Science Centre
Title: GBL4ESL
Task: Creation of digital resources
for educators using a Game
Based Learning Toolkit
An updated list of concrete areas in which we have expertise to share/offer
→ Agent Technology and Ambient Intelligence
→ AI in Medical Imaging Applications (MRI, MEG, EEG)
→ AI Planning and Scheduling
→ AI, Machine Learning, Adaptive Hypertext and
Personalisation
→ Application of AI in Fintech and Algorithmic Trading
→ Automatic Speech Recognition and Text-to-Speech
→ Constraint Reasoning
→ Document Clustering and Scientific Data Handling
and Analysis
→ Drone Intelligence
→ Enterprise Knowledge Graphs and Graph Neural
Networks
→ Gait Analysis
→ Intelligent Interfaces, Mobile Technologies and Game AI
→ Machine Learning in Physics
→ Mixed Realities
→ Natural Language Processing/Human Language Technology
→ Optimization Algorithms
→ Pattern Recognition and Image Processing
→ Reinforcement Learning
→ Web Science, Big Data, Information Retrieval & Extraction, IoT
DEPARTMENT OF COMMUNICATIONS AND
COMPUTER ENGINEERING
PROFESSOR
Professor Johann A. Briffa, B.Eng. (Hons)(Melit.), M.Phil.(Melit.), Ph.D.(Oakland) (Head of Department)
Professor Inġ. Carl J. Debono, B.Eng.(Hons.), Ph.D.(Pavia), M.I.E.E.E., M.I.E.E. (Dean of Faculty)
Professor Inġ. Adrian Muscat, B.Eng. (Hons.), M.Sc. (Brad.), Ph.D.(Lond.), M.I.E.E.E.
Professor Inġ. Saviour Zammit, B.Elec.Eng.(Hons.), M.Sc. (Aston), Ph.D.(Aston), M.I.E.E.E.
(On Sabbatical leave)
ASSOCIATE PROFESSORS
Professor Inġ. Victor Buttigieg, B.Elec.Eng.(Hons.), M.Sc. (Manc.), Ph.D.(Manc.), M.I.E.E.E.
Professor Inġ. Reuben A. Farrugia, B.Eng.(Hons.), Ph.D., M.I.E.E.E. (on special leave from 18 Feb 2022)
Professor Inġ. Gianluca Valentino, B.Sc.(Hons.)(Melit.), Ph.D. (Melit.), M.I.E.E.E. (On Sabbatical leave)
SENIOR LECTURERS
Dr Inġ. Trevor Spiteri, B.Eng.(Hons.), M.Sc., Ph.D.(Bris.), M.I.E.E.E.
ASSISTANT LECTURER
Inġ. Etienne-Victor Depasquale, B.Elec.Eng.(Hons.), M.Sc.(Eng.), M.I.E.E.E.
AFFILIATE PROFESSOR
Dr Hector Fenech, B.Sc. (Eng.) Hons., M.E.E. (P.I.I.), Ph.D. (Bradford), Fellow A.I.A.A., F.I.E.E.E., F.I.E.T., Eur. Eng.
Prof. Franco Davoli, S.M.I.E.E.
AFFILIATE ASSOCIATE PROFESSOR
Dr Norman Poh, Ph.D (EPFL), IEEE CBP, FHEA
VISITING ASSISTANT LECTURERS
Inġ. Antoine Sciberras, B.Eng.(Hons.)(Melit.), PG.Dip.Eng.Mangt.(Brunel), M.ent (Melit.)
RESEARCH SUPPORT OFFICERS
Mr Aaron Abela (Research Support Officer II)
Mr Andrea Vella (Research Support Officer)
Dr Asma Fejjari (Research Support Officer III)
Dr Brandon Birmingham (Research Support Officer III)
Dr Inġ. Christian Galea (Research Support Officer IV)
Dr Fabian Micallef (Research Support Officer IV)
Dr Jean Marie Mifsud (Research Support Officer III)
Dr Leander Grech (Research Support Officer III)
Dr Mang Chen (Research Support Officer IV)
Ms Maria Aquilina (Research Support Officer I)
Dr Inġ. Mario Cordina (Research Support Officer III)
Mr Mirko Consiglio (Research Support Officer I)
Mr Ryan Debono (Research Support Officer II)
Dr Vijay Prakash (Research Support Officer III)
Mr Xandru Mifsud (Research Support Officer I)
ADMINISTRATIVE & TECHNICAL STAFF
Ms Rakelle Portelli (Administrator)
Mr Albert Sacco, (Senior Laboratory Officer)
Inġ. Maria Abela-Scicluna, B.Eng.(Hons.)(Melit.), M.Sc. ICT (Melit.) (Senior Systems Engineer)
Mr Jeanluc Mangion, B.Eng.(Hons.)(Melit.) (Systems Engineer)
RESEARCH AREAS
Computer Networks and
Telecommunications
→ Error Correction Codes
→ Multimedia Communications
→ Multi-view video coding
and transmission
→ Video Coding
→ Internet of Things
→ 5G/6G Networks
→ Green Telecommunications
→ Network Softwarization
→ Satellite Communications
→ Quantum Key Distribution
→ AI/ML techniques for
telecommunication systems
Signal Processing and
Machine Learning
→ Computer Vision
→ Image Processing
→ Light Field Image Processing
→ Medical Image Processing
and Coding
→ Earth Observation
→ Image Understanding
→ Vision and Language
tasks in Robotics
→ Visual Relation Detection
→ Visual Question Answering
→ Self Supervised Learning
→ Federated Learning
→ Reinforcement Learning
Computer Systems Engineering
→ Data Acquisition and
Control Systems for Particle
Accelerators and Detectors
→ Implementation on Massively
Parallel Systems (e.g. GPUs)
→ Reconfigurable Hardware
→ Implementation of Machine
Learning algorithms at the edge
→ Distributed Ledger Technology
DEPARTMENT OF COMPUTER INFORMATION
SYSTEMS
ASSOCIATE PROFESSOR
Professor Ernest Cachia, M.Sc.(Kiev), Ph.D.(Sheff.) (Head of Department)
Professor John Abela, B.Sc.(Hons.), M.Sc., Ph.D.(New Brunswick), I.E.E.E., A.C.M.
Professor Lalit Garg, B.Eng.(Barkt), PG Dip. I.T.(IIITM), Ph.D.(Ulster)
Professor Joseph Vella, B.Sc., Ph.D.(Sheffield)
SENIOR LECTURERS
Dr Conrad Attard, B.Sc.(Bus.&Comp.), M.Sc., Ph.D.(Sheffield) (Deputy Dean)
Dr Colin Layfield, B.Sc. (Calgary), M.Sc.(Calgary), Ph.D.(Leeds)
Dr Chris Porter, B.Sc.(Melit), M.Sc.(Melit), Ph.D.(UCL), A.C.M.
Dr Peter A. Xuereb, B.Sc.(Eng.)(Hons.)(Imp.Lond.), A.C.G.I., M.Phil.(Cantab.), Ph.D.(Cantab.)
LECTURERS
Dr Clyde Meli, B.Sc., M.Phil., Ph.D. (Melit.)
Dr Joseph Bonello, B.Sc.(Hons)IT(Melit.), M.ICT(Melit.), Ph.D(UCL)
ASSISTANT LECTURERS
Ms Rebecca Camilleri, B.Sc.(Hons) ICT(Melit.), M.Sc. ICT(Melit.)
VISITING SENIOR LECTURERS
Dr Michel Camilleri, B.Sc., M.Sc., Dip.Math.&Comp., Ph.D (Melit.)
Inġ. Saviour Baldacchino, B.Elec.Eng.(Hons.), M.Ent., D.Mgt.
SENIOR ASSOCIATE ACADEMIC
Mr Anthony Spiteri Staines, B.Sc., M.Sc., A.I.M.I.S., M.B.C.S.
VISITING ACADEMIC
Mr Norman Cutajar, M.Sc. Systems Engineering
AFFILIATE SENIOR RESEARCHER
Dr Vitezslav Nezval, M.Sc.(V.U.T.Brno),Ph.D.(V.A.Brno)
ADMINISTRATIVE STAFF
Ms Lilian Farrugia (Administrator)
Ms Precious Ikwudirim (Administrator)
RESEARCH AREAS
Software Engineering
→ Computational complexity
and optimisation
→ Integrated risk reduction
of information-based
infrastructure systems
→ Model extraction (informal
descriptions to formal
representations)
→ Automation of formal
programming syntax generation
→ Automation of project
process estimation
→ High-level description language design
→ Distributed computing systems and architectures
→ Requirements engineering - methods, management and automation
→ System development including real-time scheduling, stochastic modelling, and Petri-nets
→ Software testing, information anxiety and ergonomics
Data Science and
Database Technology
→ Data integration and
consolidation for data
warehousing and analytics
→ Database technology,
data sharing issues and
scalability performance
→ Data warehousing and data
mining: design, integration,
and performance
→ Data analysis, data quality,
pre-processing, and
missing data analysis
→ Data modelling including
spatial-temporal modelling
→ Distributed database systems
→ Predictive modelling
→ Big data and analytics
→ Search and optimization
→ Business intelligence
→ Processing of streaming data
→ Information retrieval
Human-Computer Interaction
→ Human-Computer
Interaction (HCI)
→ Digital Accessibility
→ Assistive technologies
→ Multi-modal interaction
→ Information architecture
→ Understanding the
User Experience (UX)
through physiological
and cognitive metrics
→ Human-to-instrumentation
interaction in the
aviation industry
→ User modelling in software
engineering processes
→ Human-factors and
ergonomics
→ Affordances and
learned behaviour
→ The lived experience of
information consumers
Bioinformatics, Biomedical
Computing and Digital Health
→ Gene regulation
ensemble effort for the
knowledge commons
→ Automation of gene curation;
gene ontology adaptation
→ Classification and effective application of curation tools
→ Pervasive electronic monitoring in healthcare
→ Health and social care modelling
→ Missing data in healthcare records
→ Virtual Health Twins
→ Health data exchange
→ Internet of Health Things
→ Extended Health Intelligence
→ mHealth
→ Neuroimaging
→ Metabolomics
→ Technology for an ageing population
→ Education, technology and cognitive disabilities (e.g. augmented reality)
→ Assistive technologies in the context of the elderly and individuals with sensory and motor impairments in institutional environments
→ Quality of life, independence and security - investigating the use of robotic vehicles, spoken dialogue systems, indoor positioning systems, smart wearables, mobile technology, data-driven systems, machine learning algorithms, optimisation and spatial analytic techniques
Applied Machine Learning,
Computational Mathematics
and Statistics
→ Applicative genetic algorithms
and genetic programming
→ Latent semantic analysis and
natural language processing
→ Heuristics and metaheuristics
→ Stochastic modelling
& simulation
→ Semantic keyword-based
search on structured
data sources
→ Application of AI and
machine learning to
business and industry
→ Application of AI techniques
for operational research,
forecasting and the
science of management
→ Application of AI techniques
to detect anomalies in the
European Electricity Grid
→ Knowledge discovery
→ Image Processing
(deconvolution)
→ Image super-resolution using
deep learning techniques
→ Optimization of manufacturing
production lines using
AI techniques
→ Square Kilometre Array
(SKA) Tile Processing
Module development
→ Spam detection using linear
genetic programming and
evolutionary computation
→ Scheduling/combinatorial
optimisation
→ Traffic analysis and
sustainable transportation
→ Automotive cyber-security
Fintech and DLT
→ Automatic Stock Trading
→ Distributed Ledger Technologies
DEPARTMENT OF COMPUTER SCIENCE
PROFESSOR
Professor Adrian Francalanza, B.Sc.I.T. (Hons.), M.Sc., D.Phil.(Sussex)
Professor Gordon J. Pace, B.Sc., M.Sc. (Oxon.), D.Phil. (Oxon.)
ASSOCIATE PROFESSORS
Professor Christian Colombo, B.Sc.I.T. (Hons.), M.Sc. Ph.D. (Melit.)
Professor Joshua Ellul, B.Sc.I.T. (Hons.), M.Sc. (Kent), Ph.D. (Soton)
Professor Mark Micallef, B.Sc. I.T. (Hons.), Ph.D. (Melit.), M.B.A.(Melit.)
Professor Kevin Vella, B.Sc., Ph.D. (Kent)
AFFILIATE PROFESSORS
Professor Alessio Magro, B.Sc. IT (Hons)(Melit.),Ph.D.(Melit)
SENIOR LECTURERS
Dr Sandro Spina, B.Sc.I.T.(Hons), M.Sc. (Melit), Ph.D.(Warw.) (Head of Department)
Dr Mark J. Vella, B.Sc.I.T.(Hons.), M.Sc. Ph.D. (Strath.)
Dr Keith Bugeja, B.A.(Hons), M.IT, Ph.D.(Warw.)
LECTURERS
Dr Neville Grech, B.Sc.(Hons),M.Sc.(S’ton),Ph.D.(S’ton)
RESEARCH SUPPORT OFFICERS
Robert Abela, B.Sc.(Hons), M.Sc.(Melit.) (Research Support Officer II)
Axel Curmi (Research Support Officer II)
ADMINISTRATIVE STAFF
Ms. Gianuaria Crugliano, P.G.Dip.Trans. & Interp. (Melit.) (Administrator)
RESEARCH AREAS
→ Blockchain, Distributed Ledger Technologies and Smart Contracts
→ Concurrency
→ Computer Graphics
→ Compilers
→ Distributed Systems
→ High Performance Computing and Grid Computing
→ Machine Learning and Game AI
→ Model Checking and Hardware/Software Verification
→ Operating Systems
→ Program Analysis
→ Runtime Verification
→ Security
→ Semantics of Programming Languages
→ Software Development Process Improvement and Agile Processes
→ Software Engineering
→ Software Testing
DEPARTMENT OF MICROELECTRONICS AND
NANOELECTRONICS
PROFESSOR
Professor Inġ. Joseph Micallef, B.Sc.(Eng.)(Hons.),M.Sc.(Sur.),Ph.D.(Sur.), M.I.E.E.E.
Professor Ivan Grech, B.Eng.(Hons.),M.Sc.,Ph.D.(Sur.),M.I.E.E.E.
Professor Inġ. Edward Gatt, B.Eng.(Hons.),M.Phil.,Ph.D.(Sur.),M.I.E.E.E.
ASSOCIATE PROFESSORS
Prof. Inġ. Owen Casha, B. Eng.(Hons.) (Melit.), Ph.D. (Melit.), M.I.E.E.E.
Prof. Eur. Inġ. Nicholas Sammut, B.Eng.(Hons.) (Melit.), M.Ent. (Melit.), Ph.D. (Melit.), M.I.E.E.E. (Head of Department)
ADMINISTRATIVE & TECHNICAL STAFF
Ms Alice Camilleri, Dip.Youth&Comm.Stud.(Melit.) (Administrator)
Dr Inġ. Francarl Galea, B.Eng.(Hons.)(Melit.),M.Sc.(Melit.),Ph.D.(Melit.) (Senior Systems Engineer)
Dr Inġ. Russell Farrugia B.Eng. (Hons)(Melit.),M.Sc.(Melit.),Ph.D.(Melit.) (Systems Engineer)
RESEARCH SUPPORT OFFICERS
Inġ. Barnaby Portelli (Research Support Officer II)
RESEARCH AREAS
→ Embedded Systems
→ System-in-Package (SiP)
→ Analogue and Mixed Mode ASIC Design
→ Biotechnology Chips
→ System-on-Chip (SoC)
→ Micro-Electro-Mechanical Systems (MEMS)
→ Accelerator Technology
→ Radio Frequency Integrated Circuits
→ Microfluidics
→ Quantum Nanostructures
→ Internet-of-Things (IoT)
FACULTY OFFICE
Ms Nathalie Cauchi, Dip.Youth&Comm.Stud.(Melit.), H.Dip.(Admin.&Mangt.)(Melit.),
M.B.A.(A.R.U.,UK) (Senior Manager)
Mr Rene’ Barun, BA (Hons.) Philosophy (Melit), (Administrator)
Ms Therese Caruana (Administrator)
Ms Dorina Ndoj (Administrator)
Ms Samantha Pace (Administrator)
Mr Mark Anthony Xuereb (Administrator)
SUPPORT STAFF
Mr Patrick Catania A.I.M.I.S. (Senior IT Officer I)
Mr Paul Bartolo (Senior Beadle)
Ms Melanie Gatt (Beadle)
Mr Raymond Vella (Technical Officer II)
A Voluntary Organization founded in 1978 by Engineers to represent the
Engineering Profession and cater for the interests of Engineers in Malta
Your membership ensures a strengthened voice for the Profession
www.coe.org.mt/membership
BECOME A
MEMBER
As a Chamber Of Engineers member, you will benefit from:
Enhanced Representation
Improved Career Opportunities
Participation In Various Events
Value Added Benefits
Learn more at www.coe.org.mt
Contact us on info@coe.org.mt with any further enquiries
Your future ICT
career
mita.gov.mt