13. Social Engineering and Phishing

13.5 State of the Art

Dhamija et al.'s study [154] is among the most cited works on phishing. Although dating back to 2006, this research was the first to provide empirical evidence of why phishing attacks work: by analyzing the (in)effectiveness of standard security indicators, the paper corroborates with objective findings the anecdotal belief that phishing and social engineering work because of the scarce security education of typical users. Albeit simple, this concept is still the foundation of today's social-engineering-based attacks. Three years later, Bilge et al. [104] showed that, once an attacker has managed to infiltrate a victim's online social circle, the victim will trust the attacker and blindly follow any link they post, regardless of whether the victim knows the attacker in real life. Throughout the years, phishing and social engineering have evolved to find new ways to exploit trust relationships between human subjects, or between a human subject and an institution or website. A recent example is the abuse of short URLs [265] (e.g., bit.ly, tinyurl.com), to which users have grown accustomed thanks to Twitter, to spread phishing and other malicious resources on social networks and in email campaigns. Unfortunately, many years later, security warnings, which are supposed to help inexperienced users distinguish between trustworthy and untrustworthy websites or resources, are still of debatable effectiveness [79].

Effective, personalized phishing and social-engineering-based attacks have long been considered a small-scale threat, because collecting sufficient information and launching tailored attacks require time and manual effort. However, Balduzzi et al. [92] and Polakis et al. [317] both demonstrated how online social networks can be used as oracles for mapping users' email addresses to their Facebook profiles.
Thus, using the information contained in the profiles, one could construct very convincing personalized spam emails. Furthermore, subsequent work [109] showed that automated social engineering in social networks is feasible. It introduced the concept of socialbots: automated programs that mimic real online social network users with the goal of infiltrating a victim's social circle. The authors operated their proof-of-concept socialbot on Facebook for eight weeks and showed that current online social networks can be infiltrated with a success rate of up to 80%. Additionally, they showed that, depending on users' privacy settings, an infiltration can result in privacy breaches involving even more users. Other past work has tackled the threat of automated social engineering on social networks. Notably, Irani et al. [218] measured the feasibility of "attracting" victims using honey profiles, to eventually lure them into clicking on a malicious link. This "passive" social engineering approach turned out to be effective and once again showed that humans are often the weakest security link.
Scammers operating in online social networks have been analyzed by Stringhini et al. [364], who observed that scammers' social network usage patterns are distinctive because of their malicious behavior. This allowed them to design a system to profile and detect likely-malicious accounts with high confidence. In their work, the authors collaborated with Twitter and detected and deleted 15,857 spamming accounts. Three years later, Egele et al. [164] improved previous approaches to adapt them to the new techniques used by attackers. Indeed, while in the past most of the scamming activity in online social networks used to be carried out through purpose-created bogus accounts, modern scammers have realized that compromising legitimate, real accounts makes their phishing and social engineering activities even more credible. Egele's approach copes with this aspect by using a combination of statistical modeling and anomaly detection to identify accounts that exhibit a sudden change in behavior. They tested their approach on a large-scale dataset of more than 1.4 billion publicly available Twitter messages and on a dataset of 106 million Facebook messages. Their approach was able to identify compromised accounts on both social networks.

Recently, Onarlioglu et al. [307] performed a large-scale measurement of how real users deal with Internet attacks, including phishing and other social-engineering-based threats. Their findings suggest that non-technical users can often avert relatively simple threats very effectively, although they do so by following their intuition, without actually perceiving the severity of the threat. Another interesting, yet unsurprising, finding is that the trick banners common on file sharing websites, as well as shortened URLs, have high success rates at deceiving non-technical users, thus posing a severe security risk.
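The behavioral-profiling idea underlying the detection of compromised accounts, as described for Egele et al. [164], can be illustrated with a minimal sketch. The features chosen here (message length and posting hour), the z-score test, and the threshold are illustrative assumptions for exposition, not the actual model used in that work:

```python
from statistics import mean, stdev

def behavioral_profile(history):
    """Summarize an account's past behavior as the mean and standard
    deviation of simple features: message length and posting hour.
    `history` is a list of (message, hour_of_day) pairs."""
    lengths = [len(msg) for msg, _ in history]
    hours = [hour for _, hour in history]
    return {
        "len_mu": mean(lengths), "len_sigma": stdev(lengths),
        "hour_mu": mean(hours), "hour_sigma": stdev(hours),
    }

def is_anomalous(profile, msg, hour, threshold=3.0):
    """Flag a new message whose features deviate from the account's
    historical profile by more than `threshold` standard deviations,
    hinting that the account may have been compromised."""
    z_len = abs(len(msg) - profile["len_mu"]) / (profile["len_sigma"] or 1.0)
    z_hour = abs(hour - profile["hour_mu"]) / (profile["hour_sigma"] or 1.0)
    return z_len > threshold or z_hour > threshold
```

For example, an account that has only ever posted short messages around midday would be flagged when it suddenly posts at 3 a.m.; a real system would combine many more features and model them statistically rather than with a fixed z-score cutoff.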
Non-technical users, and in particular elderly users, have also been targeted through less sophisticated yet effective means: so-called "vishing" (i.e., voice phishing) is the practice of defrauding users through telephone calls. We cannot identify when vishing first appeared (probably back in the phreaking era), nor can we state that this threat has disappeared [3, 55]. Albeit not widespread, due to its limited scalability, vishing, also known as "phone scam" or "419 scam," has received some attention from researchers. To make this a viable business, modern scammers have begun to take advantage of customers' familiarity with "new technologies" such as Internet-based telephony, text messages [20], and automated telephone services. The first detailed description of the vishing phenomenon was by Ollmann [305], who provided brief, clear definitions of the emerging "*-ishing" practices (e.g., smishing, vishing) and pointed out the characteristics of vishing attack vectors. Maggi [264] was the first to analyze this phenomenon from user-provided reports of suspected vishing activity. The majority of the vishing activity registered was targeted against US phone users. By analyzing the content of the transcribed phone conversations, the author found that keywords such as "credit" and "press" (a key) or "account" are
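A crude version of such keyword analysis over call transcripts can be sketched as follows. The keyword list and the hit threshold below are illustrative assumptions (seeded with the terms the text mentions), not the methodology actually used in [264]:

```python
import re
from collections import Counter

# Terms over-represented in vishing transcripts according to the text
# ("credit", "press" a key, "account"); "verify" and "card" are added
# here purely for illustration.
VISHING_KEYWORDS = {"credit", "press", "account", "verify", "card"}

def keyword_hits(transcript):
    """Count occurrences of suspicious keywords in a call transcript."""
    words = re.findall(r"[a-z]+", transcript.lower())
    return Counter(w for w in words if w in VISHING_KEYWORDS)

def looks_like_vishing(transcript, min_hits=3):
    """Heuristically flag a call whose transcript accumulates at least
    `min_hits` suspicious-keyword occurrences."""
    return sum(keyword_hits(transcript).values()) >= min_hits
```

A transcript such as "Your credit card account has been suspended. Press 1 to verify your account." accumulates six hits and is flagged, whereas ordinary conversation is not; a realistic detector would of course weigh keywords statistically against benign call corpora rather than use a fixed threshold.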