The Legal Challenges in Regulating Content

By Stuart Macdonald, Professor of Law at Swansea University, Co-Director of Swansea University’s CHERISH Digital Economy Centre and of the University’s CyberTerrorism Project

Following the terrorist attacks in Manchester and London, the Prime Minister reiterated her commitment to halting the spread of ‘poisonous propaganda that is warping young minds’, referring particularly to online content and the role of social media companies. In considering any proposals for the creation of new legal powers, it is helpful to examine the already-existing offences of encouraging terrorism and disseminating terrorist publications. These illustrate the challenges involved in regulating online content whilst safeguarding basic rights.

Found in sections 1 and 2 of the Terrorism Act 2006 respectively, the encouraging terrorism and disseminating terrorist publications offences had multiple objectives: to prevent radicalisation by stopping the spread of violent extremist ideology; to protect members of the public from statements that might cause disgust or offence; and to reassure the public that the Government was taking steps to ensure their safety. The encouraging terrorism offence was also designed to fulfil the UK’s obligation, under Article 5 of the Council of Europe Convention on the Prevention of Terrorism, to criminalise ‘public provocation to commit a terrorist offence’ – although it is important to note that the UK offence goes further than required by the Convention in two respects: by encompassing reckless, as well as intentional, encouragement; and by stating that it is irrelevant whether the published statement did in fact create a danger that a terrorist offence would be committed.

To establish liability for one of these offences, three requirements must be satisfied. The first concerns the conduct of the defendant. For the encouragement of terrorism offence, the defendant must have published a statement or caused another to publish a statement. A ‘statement’ is defined as a ‘communication of any description’, and includes communications consisting solely of words or pictures, whilst publishing is defined as ‘publishing [the statement] in any manner to the public’ and expressly includes providing an electronic service by which the public have access to the statement and using such a service to enable public access to it (so could include both Internet Service Providers and website administrators). The conduct element of the section 2 offence is defined in a similarly expansive manner. A publication is ‘an article or record of any description’ that contains matter that can be read, listened to and/or looked at or watched. The five specified forms of dissemination include providing a service that enables others to look at the publication or acquire it, and transmitting the contents of such a publication electronically.

The second requirement focuses on the content of the statement and its likely interpretation. For the encouragement of terrorism offence, it must be shown that the statement was ‘likely to be understood by some or all of the members of the public to whom it is published as a direct or indirect encouragement or other inducement to them to the commission, preparation or instigation of acts of terrorism or Convention offences’. The public includes those in other countries, as well as the UK. For the section 2 offence, the publication in question must satisfy one of two tests. The first is almost identical to the test for the section 1 offence, whilst the alternative is that the publication was ‘likely to be useful in the commission or preparation of [acts of terrorism] and to be understood, by some or all of those persons, as contained in the publication, or made available to them, wholly or mainly for the purpose of being so useful to them’.

Both offences thus employ the nebulous term ‘indirect encouragement’. Whilst the statute leaves this term undefined, it does offer as an illustrative example statements/publications that satisfy two conditions. The first is that the statement/publication glorifies the commission or preparation of acts of terrorism. This could be a past or future terrorist act, or acts of terrorism in general. The second is that the statement/publication is one from which members of the public (section 1) or the recipient (section 2) ‘could reasonably be expected to infer that what is being glorified is being glorified as conduct that should be emulated by [them/him] in existing circumstances’. Glorification is itself defined as ‘any form of praise or celebration’. When combined with the UK’s broad statutory definition of ‘terrorism’, which contains no exception for industrial protest or just cause, it follows that statements or publications praising the actions of Nelson Mandela in the early 1960s, or the overthrow of Colonel Gaddafi in 2011, could amount to the indirect encouragement of terrorism. The breadth and ambiguity of the term indirect encouragement is increased still further by two additional factors: first, it is unclear how many members of the public amounts to ‘some’, particularly given that the statement/publication may be available to a global audience of millions; and, second, it is irrelevant whether anyone was in fact encouraged to commit, prepare or instigate an act of terrorism or made use of the publication in the commission or preparation of a terrorist act.

The final requirement of these offences concerns the defendant’s state of mind: he must either have intended to encourage terrorism (or, in the case of section 2, intended to assist in the commission or preparation of acts of terrorism), or have been reckless as to whether the statement/publication would have this effect. There is no requirement, as such, to prove a terrorist purpose. However, where the allegation is of reckless encouragement of terrorism, there is a defence of non-endorsement. This applies where: (a) the statement/publication neither expressed the defendant’s views nor had his endorsement; and, (b) in the circumstances it was clear that the statement/publication neither expressed his views nor had his endorsement.

From a human rights perspective, these offences raise (at least) two sets of concerns. First, their wording is deliberately expansive, so as to ensure flexibility and avoid under-inclusivity. The result, however, is that the offences are overly broad and their boundaries uncertain. In response to concerns that this could result in the offences being used inappropriately, the Government emphasised that prosecutions may only be brought with the consent of the Director of Public Prosecutions. Commenting on this practice of combining overly broad offence definitions with reliance on prosecutorial discretion, the Supreme Court in R v Gul stated that it amounts to an abdication of legislative responsibility to an unelected official. Second, the ambiguity and broad reach of these offences may have a chilling effect on the freedom of expression of members of so-called suspect communities.

Yet inhibiting discussion of political and religious ideology in this way appears to run contrary to the stated aim of the UK’s Prevent strategy: to engage in the battle of ideas. In turn, this inhibitory effect can contribute to the very sense of grievance and alienation that radicalisers seek to exploit. As such, there is a danger that – if not carefully delineated and curtailed – these offences, and any additional powers that may yet be created, could prove counter-productive, undermining one of the key rationales for their very existence.

This blog was originally published on the TechAgainstTerrorism website. TechAgainstTerrorism is directly mandated by the UN Security Council to engage with smaller tech companies and startups to help build operational capacity and inform debate about terrorists’ use of technology.


How Terrorist Groups Can Use Your Computer Against You

In today’s ever-expanding digital world, the apparent link between terrorism and the internet appears to be getting stronger. Gill et al. (2017) have indicated that although radicalisation is not dependent on internet use, the internet can facilitate the adoption of extreme views, and terrorist use of the internet is typically high. Nevertheless, the media and the government often make sweeping claims about how the internet, through social media and echo chambers, can be a weapon used against the safety and security of the masses, opening the door for massive security initiatives and sweeping government powers, e.g. the Patriot Act or the Planning Tool for Resource Integration, Synchronization, and Management (PRISM) program.

However, as is often the case when the media attempts to report on complicated issues, there’s a bit more to the story. Terrorists’ use of computers is not as simple as a couple of criminals sitting in a room working to hack into servers or spread radical propaganda. Understanding the different ways terrorists can use computers to further their goals will help improve defenses and make security efforts more effective.

Accessing Personal Information

When discussing terrorists’ use of personal computers, one of the first things that comes to many people’s minds is their personal information. Between the information stored on the device itself and the login information used for countless online services and servers, each computer is a treasure trove of data to be potentially used by terrorists. However, the real target is likely not the information of one person but rather the networks to which this information belongs.

Spear phishing is becoming an increasingly potent tool used by terrorists to carry out their cybercrime objectives. By sending emails that appear to be from legitimate sources, hackers can compromise browsers, gain access to servers or plant malware (which can then wreak havoc on an entire network). Most people think they won’t fall for this type of gimmick, but hackers and cybercrime groups can imitate even the most secure services to try and draw you in, as evidenced by the Google Docs phishing scam from earlier this year. Again, these attacks might not be directed at an individual, but allowing terrorists to access whole networks can be very dangerous for everyone involved with those networks.

For further evidence, one need look no further than today’s best-known terrorist group. The group known to many as the Islamic State of Iraq and Syria (ISIS) has certainly been attempting to harness the power of the internet. While it does not currently possess the capability to pose a serious threat in the cyber world, it is certainly interested in acquiring it. Back in 2014, the Syrian citizen media group Raqqah is being Slaughtered Silently (RSS) reported a spear phishing attack that attempted to strip the anonymity of the group’s members, indicating a desire by ISIS to identify and potentially track those creating anti-ISIS rhetoric. These threats are very real, and it is important that each individual understands the consequences of following suspicious links and growing lax about internet security.

Echo Chambers

Not all of the ways that terrorists use computers are direct attacks against you. Radical rhetoric also turns people off and discourages healthy debate. As Cass Sunstein argued in his seminal 2007 book, Republic.com 2.0, the myriad of information sources available to us seems to present us with the opportunity to hear and understand different points of view about an issue.

However, this ends up having the opposite effect: having so many different sources allows us to pick and choose what we would like to hear, closing us off to alternative opinions and informed debate. This creates a scenario where debates are isolated from external groups, which only serves to entrench beliefs and block out alternatives. This theory has been tested many times, and while it may sound superficially convincing, it remains unclear whether the internet actually drives radicalisation.

The role of the internet in creating echo chambers and radicalisation is muddled further by the work of O’Hara and Stevens (2015), which suggests that echo chambers are neither inherently linked to internet use nor naturally harmful. However, Lee et al. (2014) argue that because those on the fringes of the political spectrum are more likely to post content in line with what they already believe, those who find themselves outside of the mainstream could be more vulnerable to being on the receiving end of radicalised content. But is this caused by the internet? Or is it more a product of someone who already holds certain beliefs seeking out congruent content on today’s most accessible medium, the internet? Either way, the jury is still out.

Despite the lack of clarity as to the role of the internet and echo chambers, there is a space for terrorist groups to occupy. By infiltrating networks ‘on the fringes’, and by posting content congruent with these groups’ beliefs, terrorists can feed the confirmation bias that, left to its own devices, can contribute to radicalisation. As such, it is important to encourage people to work against this effect so that they are at least exposed to a more diverse range of opinions and perspectives. One way to do this is through the use of a Virtual Private Network (VPN). This tool hides your digital trail online, which disrupts the algorithms used by social media networks that often lead to homogeneous content and a lack of differing views. Those who find themselves closer to the center of the political spectrum may not feel vulnerable, but encouraging good practice to counter confirmation bias is needed to promote healthy debate throughout the public sphere.

Seeding Fear

It is important to remember that terrorism has as its ultimate goal inciting fear and terror to disrupt peace and order in society. Keeping this in mind, it should be obvious how terrorists can use your computer against you. By continuing to carry out bombings, shootings, cyberattacks, etc., terrorist groups continue to find themselves on the news. Images of chaotic post-explosion scenes or crying children play well in the Western media, and these visuals are exactly what terrorist groups want to be shared with the world. A car bombing may kill no more than 15 people, but the panic it incites around the world has a much larger and more damaging impact. It inflates the threat. For example, according to a Pew Research Poll, 74 percent of Trump supporters consider terrorism to be a “very big problem” facing the country, despite the fact that the odds of being killed in a terrorist attack are somewhere around 1 in 3.6 million and that 98.6 percent of terror-related deaths in America occurred all at once, on September 11, 2001. The fact that Fox News still hosts the video of the Jordanian pilot being burned alive in 2015 should indicate the audience-attracting value of this type of content.

Additionally, terror is vastly misreported in the media, with attacks involving Muslim or foreign-born perpetrators receiving disproportionate coverage. In this case, though, terrorists aren’t using computers against us so much as we are. The rapid spread of information facilitated by computers and the internet sends these messages out to the public quickly and easily, and no matter how much nuance is added later on, the initial shock value goes a long way towards shaping societal views about terrorism and its threats. This can have profound consequences, as these views are often what generates political support for military action abroad, adding more fuel to the fire and further destabilising entire regions of the world.

The rise of computers and digital technologies is one of history’s largest double-edged swords. It arms people with the ability to inform themselves about the world around them, but it also presents a window of opportunity for terrorists to attack the peace and security of the world in which many of us live. Constant vigilance and awareness are essential if this threat is to be addressed, and it is important that each individual realises their role in preventing radicalism and extremist behaviors from doing more damage than they already have.

About the Author: Sandra is a freelance blogger who specialises in internet security and cybercrime. As a student of how digital technology has reshaped modern life, she is concerned that the awesome power of the internet will fall victim to restriction due to fear from extremist groups, and she dedicates herself to educating people how to use the internet for good so that this does not happen.



The 27th and 28th June saw some of the world’s leading experts in counter-terrorism and 145 delegates from 15 countries converge on Swansea University’s Bay Campus for the Cyberterrorism Project’s Terrorism and Social Media conference (#TASMConf). Over the two days, 59 speakers presented their research into terrorists’ use of social media and responses to this phenomenon. The keynote speakers were Sir John Scarlett (former head of MI6), Max Hill QC (the UK’s Independent Reviewer of Terrorism Legislation), Dr Erin Marie Saltman (Facebook’s Policy Manager for counter-terrorism and counter-extremism in Europe, the Middle East and Africa), Professor Philip Bobbitt, Professor Maura Conway and Professor Bruce Hoffman. The conference covered a diverse range of disciplines including law, criminology, psychology, security studies, linguistics, and many more.

Proceedings kicked off with keynotes from Professor Bruce Hoffman and Professor Maura Conway. Professor Hoffman discussed the threat from the Islamic State (IS) and al-Qaeda (AQ). He discussed several issues, one of which was the quiet regrouping of AQ, stating that its presence in Syria should be seen as just as dangerous as, and even more pernicious than, IS. He concluded that the Internet is one of the main reasons why IS has been so successful, predicting that as communication technologies continue to evolve, so will terrorists’ use of social media and the nature of terrorism itself. Professor Conway followed with a presentation discussing the key challenges in researching online extremism and terrorism. She focused mainly on the importance of widening the groups we research (not just IS!), widening the platforms we research (not just Twitter!), and widening the mediums we research (not just text!), and additionally discussed the many ethical challenges that we face in this field.

The key point from the first keynote session was to widen the research undertaken in this field, and we think that the presenters at TASM were able to make a good start on this with research on different languages, different groups, different platforms, females, and children. Starting with different languages, Professor Haldun Yalcinkaya and Bedi Celik presented their research in which they adopted Berger and Morgan’s 2015 methodology on English-speaking Daesh supporters on Twitter and applied it to Turkish-speaking Daesh supporters on Twitter. They undertook this research while Twitter was carrying out major account suspensions, which dramatically reduced their dataset. They compared their findings with Berger and Morgan’s study and a previous Turkish study, finding a significant decrease in the follower and followed counts, and noting that the average followed count was even lower than that of the average Twitter user. They found that other average values followed a similar trend, suggesting that their dataset had less influence on Twitter than in previous findings, and that this could be interpreted as evidence that Twitter’s suspensions were working.

Next, we saw a focus away from the Middle East as Dr Pius Eromonsele Akhimien presented his research on Boko Haram and their social media war narratives. His research focused on the linguistics of YouTube videos from 2014, when the Chibok girls were abducted, to 2016, when some of the girls were released. Dr Akhimien emphasised the use of language as a weapon of war. His research revealed that Boko Haram displayed a lot of confidence in their language choices and reinforced this through the use of strong statements. They additionally used taunts to emphasise their control, for example, “yes I have your girls, what can you do?” Lastly, they used threats, and followed through with these offline.

Continuing the focus away from the Middle East, Dr Lella Nouri, Professor Nuria Lorenzo-Dus and Dr Matteo Di Cristofaro presented their inter-disciplinary research into the far-right’s Britain First (BF) and Reclaim Australia (RA). This research used corpus assisted discourse analysis (CADS) to analyse firstly why these groups are using social media and secondly, the ways in which these groups are achieving their use of social media. The datasets were collected from Twitter and Facebook using the social media analytic tool Blurrt. One of the key findings was that both groups clearly favoured the use of Facebook over Twitter, a preference not observed in other forms of extremism. Also, both groups saliently used othering, with Muslims and immigrants found to be the primary targets. The othering technique was further analysed to find that RA tended to use a specific topic or incident to support their goals and promote their ideology, while BF tended to portray Muslims as paedophiles and groomers to support their goals and ideology.

The diversity continued as Dr Aunshul Rege examined the role of females who have committed hijrah on Twitter. The most interesting finding from Dr Rege’s research was the contradicting duality of the role of these women. Many of the women were complaining post-hijrah of the issues that pushed them into committing hijrah in the first place: loneliness, cultural alienation, language barriers, differential treatment, and freedom restrictions. They tweeted using the hashtag #nobodycaresaboutawidow and advised young women who were thinking of committing hijrah to bring Western home comforts with them, such as make-up.

Dr Weeda Mehran, and the team of Amy-Louise Watkin and Sean Looney, presented on children in terrorist organisations and their portrayal through videos and images. Dr Mehran analysed eight videos and found that children help to create a spectacle as they generate memorability, novelty, visibility and competitiveness, and display high levels of confidence while undertaking executions. On the other hand, Watkin and Looney found in their analysis of images in online jihadist magazines that there are notable differences between IS and AQ in their use of children, with IS focusing on displaying brutality through images of child soldiers and AQ trying to create shame and guilt in their Western followers through images of children as victims of Western-backed warfare. They concluded that these differences need to be taken into account when creating counter-messages and foreign policy.

Joe Whittaker presented his research on online radicalisation. He began with a literature review of the field, concluding that the academic consensus was that the Internet is a facilitator, rather than a driver, of radicalisation. He then offered five reasons to doubt this consensus: the lack of empirical data, how old the data is compared to the growth of the Internet, the few dissenting voices in the field, the changing online threat since 2014, and the wealth of information that can be learned from other academic fields (such as Internet studies and psychology). He then offered three case studies of individuals radicalised in the previous three years to test whether the academic consensus still holds, finding that although it does in two cases, there may be good reason to believe that social media could drastically change the nature of some individuals’ radicalisation.

On the topic of corporate social responsibility in counter-terrorism, Chelsea Daymon and Sergei Boeke discussed different aspects of private entities engaging in policing extremist content on the Internet. Daymon drew upon the different projects and initiatives conducted by industry leaders, such as Google’s Jigsaw projects and the shared database between Microsoft, Twitter, Facebook, and YouTube. She, however, warned against the excessive use of predictive technology for countering violent extremism, suggesting that it could raise practical and ethical problems in the future. Drawing from Lawrence Lessig’s models, Boeke outlined four distinct categories of regulation that can be applied to the Internet – legal, architectural, market-based, and altering social norms – before offering different suggestions for how these can be used in the context of countering terrorism.

The final panel related to creating counter-narratives, which included Dr Paul Fitzpatrick, who discussed different models of radicalisation, and how it related to his work as Prevent Coordinator at Cardiff Metropolitan University. He began by critiquing a number of prevalent models including Moghaddam’s staircase, as well as all multi-stage, sequential models, observing that, having seen over one hundred cases first-hand, no-one had followed the stages in a linear fashion. He also highlighted the particular vulnerabilities of students coming to university, who have their traditional modes of thinking deliberately broken down, and are susceptible to many forms of extreme thinking. Sarah Carthy, who presented a meta-analysis of counter-narratives, followed Dr Fitzpatrick. She observed that specific narratives are particularly powerful because they are simple, present a singular version of a story, and are rational (but not necessarily reasonable). Importantly, Carthy noted that despite many assuming that counter-narratives can do little harm – the worst thing that can happen is that they are ignored – some were shown to have a detrimental effect on the target audience, raising important ethical considerations. The final member of the counter-narrative panel was Dr Haroro Ingram, who presented his strategic framework for countering terrorist propaganda. Ingram’s framework, which draws on findings from the field of behavioural economics, aims to disrupt the “linkages” between extremist groups’ “system of meaning”. Dr Ingram observes that the majority of IS propaganda leverages automatic, heuristic-based thinking, and encouraging more deliberative thinking when constructing a counter-narrative could yield positive results.

The last day of the conference saw keynote Max Hill QC argue that counter-narratives have an important role to play in discrediting extremist narratives, and he spoke of his experiences visiting British Muslims who have been affected by the recent UK terrorist attacks. He told of the powerful counter-narratives that these British Muslims hold and argued their importance in countering extremist propaganda both online and offline. Hill also argued against criminalising tech companies who ‘don’t do enough’, asking how we measure ‘enough’. His presentation was followed by Dr Erin Marie Saltman, who discussed Facebook’s advancing efforts in countering terrorism and extremism. She argued that both automated techniques and human intervention are required to tackle this and minimise errors on a site that is visited by 1.28 billion people daily. Saltman gave an overview of Facebook’s Violent Extremism Policies and spoke of the progress the organisation has made in identifying actors’ ability to make new accounts. Overall, Saltman made it crystal clear that Facebook is strongly dedicated to eradicating all forms of terrorism and violent extremism from its platform.

With the wealth of knowledge that was shared from the academics, practitioners and private sector companies that attended TASM, and the standard of research proposals that followed from the post-TASM research sandpit, it is clear that TASM was a success. The research presented made it very clear that online terrorism is a threat that affects society as a whole and the solutions will need to come from multiple directions, multiple disciplines, and multiple collaborations. You can find Max Hill QC’s TASM speech in full further down on the blog and follow us on Twitter @CTP_Swansea to find out when we will be releasing videos of TASM presentations.

Amy-Louise Watkin @CTP_ALW

Joe Whittaker @CTProject_JW


Reflections: Terrorism and Social Media Conference 2017

Last week, in a sleepy Welsh city by the sea, a group of social media and terrorism researchers came together to discuss the latest challenges in the field.

I learned a lot, met people doing admirable work and came away inspired with ideas to shape my own research in the future. This post is a short synopsis of topics from the conference that struck me as important, interesting and/or particularly thought-provoking.

The visual web

Maura Conway’s opening keynote was peppered with mentions of the visual web – and its importance in the study of terrorist and extremist activity. All extremist groups have a visual profile, and many use images as a central feature of their propaganda and recruiting efforts.

One look at the ISIS propaganda magazine, Dabiq, proves this point. And it’s not only about images, but also video, which terrorist groups have used for decades, from the grainy, muffled bin Laden recordings all the way through to the glossy ISIS productions. Far-right groups use images too – from the notorious Pepe the Frog to a range of logos featuring swords, swastikas and national flags.

The ‘post-truth’, digital era has ushered in a trend for using images as part of disinformation efforts, driving so-called ‘fake news’. A recent example springs to mind from the March 2017 Westminster attack. In the swirling social media aftermath of Khalid Masood’s attack there emerged a photo of a Muslim woman wearing a hijab, walking past victims on Westminster Bridge, engrossed in her phone as she walked.

The image was quickly hijacked, attached to numerous false claims attacking the unknown woman for her apparent ‘disdain’ for the injured victims. These claims spawned thousands of comments where people released their Islamophobic feelings to the full, feeding into the milieu of anti-Muslim sentiment that presently hangs over society.

Of course, the truth was very different. The woman had been messaging friends and family to let them know she was safe after the attack. Despite the truth being outed, the damage had already been done. Social perceptions of Muslims as ‘bad’ had been further reinforced.

Back to Prof Conway’s speech, in which she highlighted the ‘strong signalling function’ of images, making them critical subjects for further analysis. Yet most terrorism analysts still focus primarily on text, because the analysis of images is more challenging. Visual analytics tools and techniques do exist, both qualitative and quantitative, with big data research on images being especially popular in communication science at the moment.

In short: we need to pay more attention to the visual nature of the internet – and focus more on these ‘low-hanging fruit’ of visual analytics in the study of extremism.

The far-right

TASM didn’t focus only on the Islam-related side of extremism, but showcased a balanced view across the spectrum, with plenty of emphasis on research into the far-right. I attended several interesting panel talks on this subject, and came away with a number of key points.

One piece of research compared Britain First with Reclaim Australia, aiming to draw out the nuances within the umbrella term ‘far-right’. The methodology involved corpus assisted discourse analysis (CADS) on a static dataset of text that Britain First and Reclaim Australia supporters had posted on social media over a three-month period.

The researchers used a social media insights tool, Blurrt, to gather raw data, then used Python scripts to sort it into a workable format before finally analysing using CADS. In particular, they focused on collocations to reveal telling patterns in ideas and sentiments across the two groups.
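To make the collocation step concrete, here is a minimal sketch of what such a script might look like. This is an illustrative reconstruction, not the researchers’ actual pipeline: the function name, window size and example posts are all invented, and real CADS work would add stop-word filtering and a significance statistic such as log-likelihood or mutual information.

```python
from collections import Counter

def collocations(texts, window=5, min_count=2):
    """Count word pairs that co-occur within a fixed window of each other.

    A crude stand-in for the collocation step of corpus-assisted analysis:
    pairs are stored in sorted order so 'our invasion' and 'invasion ... our'
    accumulate in the same bucket.
    """
    pair_counts = Counter()
    for text in texts:
        tokens = text.lower().split()
        for i, token in enumerate(tokens):
            # Pair each token with the tokens in the window to its right
            for other in tokens[i + 1 : i + 1 + window]:
                if token != other:
                    pair_counts[tuple(sorted((token, other)))] += 1
    # Keep only pairs seen at least min_count times, most frequent first
    return [(pair, n) for pair, n in pair_counts.most_common() if n >= min_count]

# Invented example posts echoing the 'invasion' theme discussed below
posts = [
    "stop the invasion of our country",
    "they are an invasion of our culture",
    "defend our culture from invasion",
]
top = collocations(posts, window=5)
print(top[0])  # (('invasion', 'our'), 3)
```

Even on three toy posts, the strongest collocation surfaces the ‘invasion’/‘our’ pairing; at corpus scale, the same counting reveals the patterns of ideas and sentiments the researchers describe.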

Findings included a strong pattern of ‘othering’ – the core ‘us versus them’ narrative (which is a common theme not just among far-right discourse but also in some mainstream media and foreign policy: e.g. the Iraq war – ‘Axis of Evil’).

It was unsurprising therefore to find that Muslims and immigrants were particularly targeted. In what appears to be an extension of the ‘us versus them’ theme, ‘metaphors of invasion’ were often found in the discourse of both groups.

Other common themes included mentions of ‘our women’, ‘our religion’ and ‘our culture’ as being under threat from the ‘invaders’. All these themes feel very masculine. It could be interesting to reflect on the proportion of these sentiments that come from male authors; it could also be worth analysing what far-right discourse looks like from a female perspective.

In general, the researchers concluded that far-right propaganda is less ‘overtly’ violent than that of ISIS, and is mainly rooted in nationalistic tendencies. This raises many questions. Is this how the far-right have managed to fly ‘under the radar’ for so long? Are they seen as being defensive rather than offensive, and hence the ‘good guys’ on some level?

Could that be a factor in the much-discussed media under-reporting of far-right crimes, while focusing almost hysterically on those perpetrated by jihadists? Or, are ISIS and similar viewed as ‘worse’ simply because they are more ‘other’ (i.e. racism)?

Resonant narratives

Just as in commercial marketing, narratives work best when they intersect with individual agency and contexts. In his panel talk, Dr Akil Awan pointed out that CVE campaigns must not neglect the real-world issues that allow extremist narratives to resonate in the first place.

So how do ISIS narratives achieve success? They play on themes of belonging and identity; important for people experiencing ‘dual culture alterity’, i.e. feeling alienated from both their parents’ culture and the culture of their country of upbringing. In these cases, a return to fundamentalism becomes an ‘anchor’; a default setting of identity in a sea of alienation.

Awan highlighted the disparity between perceptions and reality around the true numbers of Muslims living in European countries. The media drives much of this misperception; making people feel ‘under siege’, creating fear, driving societies apart and destroying any sense of cohesion. In such a milieu, it is easy for ISIS to ‘eliminate the grey zone’ by means of terrorist acts. The media has already primed society for ISIS to succeed.

Understanding perceptions is as important as understanding reality; because how people perceive something will guide their course of action in response to it. Current CVE campaigns (based around tools such as counter-narrative videos) are cheap to implement and make it look like governments are taking action.

But recognising the ‘lived experience’ of minority groups is one of the keys to successful CVE efforts; neglecting to do so is hypocritical and unlikely to be effective.


In closing, we heard from the arbiter of all this – Facebook. Dr Erin Saltman explained the tools Facebook uses to tackle the online side of extremism and terrorism. These tools include a database of extremist propaganda images that relies on machine learning to match images as they surface, and automatically remove them.

But machine learning has its limitations, and humans are still required to take into account context and nuance. At present, the two work in tandem to surface the content (machine learning) and then interpret it as needed (humans).

Other tools include Facebook Insights, which is commonly used in commercial marketing but can also be leveraged to guide counter-speech initiatives and enable a precise reading of audiences.

The age of social media, although still in its infancy, has already had a profound impact on politics and society – as well as on the individual psychology of internet users. The long-term effects are unknown, with many changes no doubt still on the way.

By Samantha North

PhD Candidate

University of Bath




TASM Speech 27th-28th June 2017

There can be no doubt that social media plays a pivotal role in communication between those intent on terrorism, just as it is pivotal in the daily lives of most of us as we go about our lawful business. In that simple truth lies the dilemma which we face at this Conference. We all deplore the outbreaks of terrorist violence we have witnessed in four vicious attacks since March 22nd this year, the most recent of the four emanating from Wales, though there is little more we can say now that the case has been charged and is before the courts. We should remember, however, that five terrorist plots were successfully disrupted by the Police and security services during the same period, the last three months. And we all come together with renewed determination to face down the menace of modern terrorism. Where these awful crimes are facilitated by the use of social media, we want to close down the criminals’ ability to communicate. And yet, we must recognise that policing the internet and controlling social media comes at a very high price if it interferes with the freedom of communication which every citizen enjoys, and which is also enshrined in Article 10 of the European Convention on Human Rights.

Let us go straight to the limiting provision within Article 10, remembering that freedom of expression is not an absolute right. Article 10(2) reads: ‘The exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime’, and so on.

So, have we reached the point at which we need to legislate for further interference with Article 10?

Before we can answer that question, we should remember that statute already interferes with Article 10, where necessary and proportionate. Consider section 2 of the Terrorism Act 2006, the criminal offence of disseminating terrorist publications.

What is a terrorist publication? It is defined in section 2(3): a publication is a terrorist publication ‘if matter contained [in it] is likely … (a) to be understood as a direct or indirect encouragement or other inducement to the commission, preparation or instigation of acts of terrorism (CPI), or (b) to be [so] useful, and to be understood by some or all as … wholly or mainly for the purpose of being so useful …’

A person commits the offence (section 2(2)) by distributing or circulating such a publication, which includes (section 2(2)(e)) transmitting its contents electronically, where he or she (section 2(1)(b) and (c)) intends an effect of the conduct to be a direct or indirect encouragement to CPI, or intends to provide assistance in CPI.

The section 2 offence works in practice. One of the significant decided cases in this area (which I prosecuted myself) was R v Faraz, tried at Kingston Crown Court in 2011 and reviewed on appeal in [2013] 1 Cr App R 29. At trial, section 2 was ‘read down’ for compatibility with Article 10 in a number of ways, for example by requiring that the words ‘acts of terrorism’ in section 2 must mean criminal offences, not any lesser form of conduct. There were a number of further revisions, made by Calvert-Smith J after submissions which had considered a comparative analysis from many legal jurisdictions. The Court of Appeal later held that it was not arguable that a publication which, to the knowledge of the defendant, carried a real risk of being understood by a significant number of readers as encouraging terrorist offences was entitled to exemption because of Article 10, just because it expressed political or religious views.

Therefore, through laws already on the statute book, it is both possible and compatible with ECHR for investigators and prosecutors to reach into social media and the internet for material  which can properly be brought before the court.

The question for us is how much further if at all should legislation go into this arena?

A quick review of the social media imprint within recent criminal prosecutions might be helpful. I therefore looked through all of the successful terrorism prosecutions brought by the CPS last year, 2016. Allow me to discuss the relevant aspects of some of those cases now, because they indicate how prosecutors are currently dealing with social media as evidence, and they set the context for any consideration of where we go from here.

So, a sentence or two on some of the cases from 2016.

Tarik Hassane and Suhaib Majeed, the latter a physics undergraduate at King’s College London, used a variety of secure and encrypted systems to communicate with each other (Hassane was studying in Khartoum) concerning their plot to carry out terrorist murders in London using a silenced firearm. The evidence included online reconnaissance of a police station and a Territorial Army barracks. Charged with conspiracy to murder and preparation of terrorist acts under section 5 of the Terrorism Act 2006.

Tareena Shakil, 26 and the mother of an 18-month-old son, became prolific on social media in support of Daesh. Her messages included an exhortation ‘to take to arms and not the keyboard’. She took her son to Turkey and on to Raqqa in Syria, joining Daesh and using the internet both to maintain contact with other family members and to glorify Daesh. Charged with encouraging terrorism under section 1 of the Terrorism Act 2006 and belonging to a proscribed organisation, namely ISIS, under section 11 of the 2000 Act.

Ayman Shaukat engaged in coded communications with men whom he assisted in travelling to Syria. Shaukat drove a co-defendant Alex Nash to the airport and facilitated his desire to join ISIL. Charged with preparation, the section 5 2006 Act offence.

Forhad Rahman assisted a man called Aseel Muthana to leave the UK in order to fight in Syria; the two men first met via social media. Muthana and another made a video on a hill near Cardiff in possession of an imitation firearm, referring to ‘the Islamic State in Cardiff and Iraq and Sham’. Their co-defendant Kaleem Ulhaq used social media to send money to another whom he believed to be fighting in Syria. Charged with preparation under section 5 of the 2006 Act, and in respect of the funding arrangement under section 17 of the 2000 Act.

Junead and Shazib Khan (whom I prosecuted) were inspired and instructed online, firstly on how to get into so-called Islamic State, and secondly in the case of Junead Khan how to access the addresses of soldiers in the UK and how to attack USAF bases in Norfolk. Online Kik conversations spoke of aspirations to seek shahada or martyrdom, together with explicit instructions for ‘mujahid style’ knife and/or pipe bomb attacks. Both men were charged with section 5 of the 2006 Act preparation, and Junead Khan received a life sentence.

Zafreen Khadam was investigated after complaints to the police that a Twitter account was being used as a tool to post IS propaganda, to encourage others to join IS and to instigate acts of violence. This defendant was found to have opened 14 Twitter accounts in one month in the spring of 2015. Extreme content was posted, including a web-based IS document encouraging the online dissemination of IS literature in order to support its cause. The document had been viewed 1464 times by the time it was captured by the Police as evidence. In addition, the defendant used WhatsApp to send material including execution videos. Charged with ten counts of section 2 dissemination.

Mohammed Alam used Paltalk messenger to send links to an ISIS video. Charged with section 2 dissemination on the basis that he was reckless as to whether it would encourage CPI of terrorism.

Mohammed Ameen sent 8000 tweets over 7 months using 16 different Twitter accounts and using 42 different names, expressing support for Daesh. Charged with offences of encouragement under section 1 of the 2006 Act, one count of section 2 dissemination, and one of inviting support for a proscribed organisation under section 12 of the 2000 Act. The judge in passing sentence  (five years’ imprisonment) noted that the offending was aggravated by the explicit and intentional nature of the encouragement and by the persistence with which it was pursued.

Naseer Taj used Twitter and WhatsApp, seeking advice on where to go in Syria to satisfy his aim to become a suicide bomber. Charged with section 5 preparation, possession of material under section 58 of the 2000 Act and an offence under section 4 of  the Identity Documents Act 2010.

Rebecca Poole used social media to express her desire to marry a jihadi warrior, to travel to Syria to live under ISIS, and to become a suicide bomber. She was later found unfit to plead, but to have been in possession of material under section 58 of the 2000 Act, and sentenced to a restricted hospital order.

Mohammed Uddin travelled to Syria via Turkey, stayed for five weeks, then left under pressure to return to his wife in the UK, expressing disappointment with the slowness of progress in Syrian training camps; his social media messaging indicated an intention to return to Syria in the future. Charged with section 5 preparation.

Mr and Mrs Golamully pleaded guilty to terrorist funding under section 15 of the 2000 Act, having sent money to their nephew who had travelled from Mauritius to Syria to fight for ISIS. They used WhatsApp messaging, and sent money via Western Union.

Abdul Hamid both received and posted Daesh propaganda using his Facebook page. Police investigation revealed that a redacted version of the video was available via the BBC and other media outlets, but that the defendant had repeatedly posted the unredacted full version, latterly with a message reading ‘this video is strictly for education purposes only’. Charged with section 2 dissemination.

Aras Hamid, Shivan Zangana and Ahmed Ismail were variously charged with section 5 preparation, identity document offences under the 2010 Act, and failing to disclose information about acts of terrorism under section 38B(1)(b) of the 2000 Act. They used phone and social media contact to discuss and arrange  travel to join ISIS, discussing the planned travel with a facilitator abroad.

So that is the range and frequency of offending which sets the context for our discussion, and that is last year; we are yet to come to grips with a full review of the atrocities of 2017.

I have left out, for present purposes, an analysis of sentences passed in these cases, because that is beyond the confines of this conference, though I predict that the available maximum sentences for several of the main statutory offences will feature in the ongoing government review of counter extremism strategy, and some sentencing powers may rise. That said, it does not follow that the review will necessarily lead to the identification of new offences not currently on the statute book. Perhaps that is a discussion for another day, another conference.

Returning specifically to social media and its prevalence in current terrorism offending, it is clear how important this continues to be to investigators and commentators alike.  In some of the cases I have analysed briefly above, it is quite possible to observe single days of online communication between defendants and complicit third parties, where those communications range between WhatsApp, Twitter, Telegram, Viber, Kik and more, according to the perception of the participants as to the relative security and encryption levels of these various modern platforms. To catch them at it, we have to keep up with their technical knowledge and the march of progress made by the internet and other communication service providers.

So where do we go next? It seems to me, thinking about the range of statutes in use by prosecutors as shown by recent cases, that we do not lack for legal powers to bring these cases to court. We do need to encourage investigators and prosecutors to use the full range of current powers at their disposal; which is not to say that they are ignorant of what Parliament has provided, but we do need to see financial, identification, fraud, firearms, public order, offences against the person, and conspiracy offences being added to the indictment, in order to capture the full range of criminality represented by future cases. There should be nowhere safe for terrorists to hide. Terrorism-related cases charged in the year ended December 2016 involved 56 charges under the Terrorism Acts and 62 charges under other criminal statutes. More of this is the way forward.

I am on record, from when I first came into post as Independent Reviewer in March this year, saying that in general we don’t need more terrorism offences, and there may be examples of redundant terrorism offences which time has proved are not as necessary as Parliament thought. Interestingly, training for terrorism under sections 6 and 8 of the 2006 Act  was not charged at all in 2015 or 2016. Inciting terrorism overseas was charged once in the same two-year period. Possession of articles for terrorist purposes under section 57 of the 2000 Act was charged once in 2015 and not at all in 2016. Some revision and trimming of the current legislation may yet be possible, and that would be a good thing.

But I cannot say with certainty whether the ongoing government review will throw up an example or two that legislators have not yet covered. Maybe that will emerge, and if so it will be my job to take a hard look. It would be foolish to discount the possibility of one or more new offences for a new age, though I am yet to find any.

Which brings us to the big question, whether the investigation and prosecution of  terrorists’ use of social media needs specific new laws.

I spent time analysing recent past cases as a way of showing how much is already possible, utilising the laws we already have. To go further, would we risk unenforceable infringements on ECHR rights? And would we push the current abundance of evidence proving terrorist activity online to go offline or underground, into the dark chambers of Tor, the onion router: impenetrable places within the dark web from which clear evidence rarely emerges, and where the placement of a robust counter-narrative to terrorism is hard to effect and harder to gauge?

This is uncertain territory. Driving material, however offensive, from open availability into underground spaces online would be counter-productive if would-be terrorists could still access it. And once this material goes underground, it is harder for law enforcement to detect and much harder for good people to argue against it, to show how wrong the radical propaganda really is.

Last week, I made a speech at an event hosted by the Oxford Media Network, in which I said this to a large audience including some of our most distinguished security correspondents:

‘In my view, we should all spend less time - in public through the media at least - trying to elucidate the dogma behind these terrible events, and should instead spend far more time seeing these criminals for what they clearly were, criminals or demons, evil doers of evil deeds. There really is no justification for an individual detonating a bomb inside a concert filled with thousands of children and teenagers. We should not waste time in public airing the dogma behind the demonic work of Abedi and his like. Of course, this remains the vital, urgent work of the security services and Police, whose job it is to unpick the dogma, to unearth the radicalisers in person or online, and to stop the next criminal planning an attack, and the next and the next. But by publicising and analysing the dogma for all to see, you are perpetuating the myth that these crimes are for a religious reason, or still worse that they have a justification’.

I stand by those words. When criminals kill and are killed in the act, we should not give them the media platform they may have craved in life but are not entitled to receive in death. I was amongst many who were very pleased to see leading Mosques and British Islamic communities who refused to say funerary prayers for those responsible for the attacks in London and Manchester.

But that does not mean to say that we or the media need be silent when we see the vile propaganda with which those who are yet to commit attacks drench social media platforms. There is a place for a strong counter-narrative to be put in place to meet the online radicalisation efforts of these criminals. So to those of you who speak on this subject, and who have the technical expertise to support an online counter-narrative, I applaud your thinking and your efforts. To the rest of us, the message is: do not suffer in silence; speak up from wherever you are within law-abiding, multicultural British life, and do whatever you can to reject the messages of hate we see online. As some of the speakers at this conference have said already, the omnipresence of social media provides a great opportunity to meet the evils of terrorism, to take the opportunity to prove them wrong. Doing that is far better, in my view, than spending too much time dwelling on the actions of suicide bombers and telling their story for them. Criminals do not speak for us; we must find our own voice and set the record straight.

As Independent Reviewer for just three months thus far, I have made it my business to travel around the country, seeking out Muslim communities in particular because they have been badly affected by all four of the terrorist attacks this year; indirectly, as it were, at Westminster, Manchester and London Bridge, but directly at Finsbury Park Mosque. And so I have been to Finsbury Park Mosque, I have spent time with the Libyan community at a mosque and elsewhere in Manchester, and I have sat with community representatives from mosque chairmen to youth workers in Leicester and Bradford. They offer real insight into the impact of our terrorism legislation upon their communities, and they all say one thing that is the same, which I paraphrase as ‘nobody really speaks for us, though many claim to represent us’. The communities I have visited all detest terrorism, they have powerful counter-narratives to terrorism, and they must be part of the answer, online and offline, in dealing with the extreme propaganda which we are contemplating today.

It is beyond doubt that social media has played a significant role in the planning and perpetration of terrorist attacks both here and abroad. My digest of cases from last year makes that point. Beyond the need, and the opportunity, for a counter-narrative, should we be taking the chance to control social media and the tech companies who support it? In Germany we heard recently of the suggestion that heavy financial penalties should be imposed on companies who fail to take down extreme content. Discussions between our Prime Minister and President Macron in France suggest that there is a top table conversation in which solutions are being sought, there is an element of tough talking, and tech companies are not immune from censure. And we read each week of high-level meetings between COOs and even CEOs of the internet’s biggest players.

Much of this is both necessary and valuable. I firmly believe that tech companies should strain every muscle to stem the flow of extreme material online. I have sat next to Metropolitan Police specialist officers who spend every day searching the net to find extreme material, and who then systematically apply hash values and other algorithms to identify each and every posting of that material with a view to writing to every web host requesting the take down of that material. It is laborious work, and it is important. There must be ever greater liaison and cooperation between law enforcement and tech companies.
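The workflow those officers follow, fingerprinting a known file and then testing each new posting against the catalogue, can be sketched in a few lines. This is an illustrative sketch only, assuming simple SHA-256 exact matching; the function names and sample bytes are invented, and real systems also rely on perceptual hashes (PhotoDNA-style image fingerprints, for instance) that survive re-encoding and cropping, which a cryptographic hash does not.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact fingerprint of a file's raw bytes (SHA-256 hex digest)."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical catalogue of fingerprints of previously identified material.
known_material = {fingerprint(b"bytes of a previously identified video")}

def matches_known_material(upload: bytes) -> bool:
    """True only if the upload is byte-identical to catalogued material.

    A cryptographic hash misses re-encoded or cropped copies; catching
    those requires perceptual hashing, which tolerates small changes.
    """
    return fingerprint(upload) in known_material

reposted = b"bytes of a previously identified video"
altered = b"bytes of a previously identified video, re-encoded"
print(matches_known_material(reposted))  # True
print(matches_known_material(altered))   # False
```

The second result shows why the work remains laborious: the slightest alteration defeats an exact hash, so each variant of a video or image has to be found and catalogued in its own right unless more tolerant matching is used.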

But I struggle to see how it would help with this battle if our Parliament were to criminalise tech company bosses who ‘don’t do enough’. How do we measure ‘enough’? What is the appropriate sanction? We do not live in China, where the internet simply goes dark for millions when government so decides. Our democratic society cannot be treated that way. People have to be regarded as grown-ups, entitled to every freedom provided for in a mature democracy, but working together to reduce the menace and the prevalence of terrorism and terrorists, those who would do us all indiscriminate harm.

So there is a need to do more, and tech companies must realise, if they do not already, that they have to be part of the solution here. There must be a coming together on a corporate level as well as amongst the wider population. Engagement is the answer. To my mind, companies who make eye-watering sums of money from our everyday chatter need to be brought firmly onside, not forced offside by the application of criminal statutory offences with which to beat them, with the inevitable side-product of defeating the freedom which the net and social media platforms have opened up for the enjoyment and better understanding of us all about the world in which we live.

I finish where I started, as a lawyer not a politician, nor civil servant, and certainly not a regulator of social media nor a technocrat who understands the algorithms by which these communications platforms operate. Can we legislate to rid ourselves of online terrorism? My answer is that Parliament has already done so in meaningful ways including such offences as the dissemination offence under section 2 of the  2006 Act. We lawyers should look hard into such areas, to see whether any amendments might hone these offences given recent technological advances. We should also look to see whether sentencing provisions in 2017 are apt for our world, for example where Parliament drew a line in 2000, and where 17 years is a long time in tech terms. But apart from that, further legislation does not strike me as the answer. Criminalisation, and thereby alienation of tech companies who are there to serve us and to help us - albeit for colossal financial reward on their part - that cannot be the answer.

So no, or very little new legislation, as it seems to me.

That leaves investigation and prosecution, to complete my answer to the title of my talk. Both are vital. Both are working well, as last year’s cases show, and I wait to see the picture from this year. From my long experience of terrorism trials, it is the communications schedule which forms the backbone of almost every new trial. Where communication used to be by voice calls and SMS messages, that is now augmented by online messaging, much but not all of it from end-to-end encrypted platforms including WhatsApp. We need the assistance of tech companies to ensure that the comms schedules in trials from this year and next year incorporate every such platform used by these criminals. My technical knowledge runs out very quickly at this point. Many of you have important and creative solutions to offer, so I am here to learn.

And I finish, if I may, in the technical arena which I know least but am willing to learn about. Is quantum computing part of the answer? I first heard the term only recently. Could it hold the answer to breaking algorithms? What should we be doing towards the sharing of encryption keys? Can network providers enforce encryption and validation as precursors to content being published? And if these things are technically possible, by whom and when should this power be used? The future will be very interesting.


A letter from our 2016 Cyberterrorism Project Database Interns

During July and August 2016, three second-year undergraduate students from Swansea University took part in an internship to conduct research on definitions of cyberterrorism: Nathan Davies (Criminology), Damary Kyauka (Politics) and Callum Sullivan (Criminology). Our students have shared their experiences below:

“Being a part of the Cyberterrorism Project team was exciting and fulfilling, and we feel very privileged to have undertaken an internship last summer. We all individually and as a group developed skills that were essential for our career prospects and personal growth. Our first couple of weeks were concentrated on amending and collecting cyberterrorism definitions for a database. Handling, finding and collating these definitions proved to be quite challenging, but it enabled us to heighten our computer and analytical skills. One of the issues that we quickly identified was the lack of cyberterrorism definitions from governmental, non-governmental and public bodies. This was quite limiting, considering that a wide range of sources was needed to gain a picture of the variety of definitions being employed globally. Collecting the data was also equally challenging but it was also thrilling as we could acquire more knowledge on understandings of cyberterrorism. In addition to this, we could advance our team working skills. We had to communicate and explain to each other the information that we found and decide on which parts of the data were most important for inclusion in the database.

We were also able to familiarise ourselves with different academic databases, which will undoubtedly be extremely useful in our further studies. After collecting extensive data, we had to plan and co-ordinate how to analyse it. We had to determine whether we wanted to use thematic or content analysis. This was rather interesting as it was new territory for some of us. We decided the most effective and appropriate method was thematic analysis, and this approach proved to be more useful for our final report. Taking a thematic approach, it was interesting to discover the range of different types of definitions that are being used in the world today. Take the ‘target’ theme for example. Before starting the internship, the main consensus within the group was that the main target would have been a computer. After analysing the data, to our surprise, there was a wide range of targets within definitions, ranging from computers, to aeroplane systems and actual people. Therefore, what we learnt through the research challenged our initial thoughts about definitions of cyberterrorism and gave us a whole new perspective on this topic. During our analysis, we divided the work into sub-topics that aligned with the definitions of cyberterrorism, and allocated the sub-topics to each other based on our personal interests. Although we were continuously working as a team, we were able to shine independently in our allocated sub-topics, which enabled us to work in more depth on our chosen sub-topics of the database and more effectively, whilst still helping and giving guidance to one another.

One of the most difficult aspects of the internship was presenting our findings and data in a final report. This tested our team work and organisational skills as we had to produce a report that included an extensive amount of data and research collectively. We had to put our separate work together whilst still ensuring it was a collective report.

One of the most exciting aspects of the internship was the opportunity to attend a talk by a renowned expert on fascist ideology and far-right extremism, Professor Matthew Feldman, who was visiting the University to provide a research talk. We were able to speak to him privately and ask him some of our own questions. He was also able to assist us and give us some ideas on our research, which was a great networking opportunity. The analysis of our findings was also extremely fascinating and encouraged us all to carry this project forward for our third-year undergraduate dissertations”.

More details about this project, and the report produced from the students’ work, will be available for download from the project website in the coming months. The Cyberterrorism Project provides opportunities for students from across all Colleges at Swansea University to participate in internships every summer. If you are currently a student of Swansea University and are interested in these opportunities, please contact Dr Lella Nouri: for further information.


Posted in Uncategorized

Methodological problems in online radicalisation

Discussion of radicalisation to violent extremism now seems almost inseparable from discussion of the Internet. Despite this, online radicalisation remains under-researched and, as a result, ill-understood. This is perhaps surprising given the vast media attention given to the online presence of groups such as Islamic State, and to the tens of thousands of foreign fighters who have joined them; the implicit assumption is that many became radicalised via content they interacted with online. A large part of the reason for this under-research is not a lack of interest or desire, but a number of factors that make meaningful research in the field difficult. Below, I briefly outline three of the biggest methodological problems facing the field of online radicalisation: the problem of correlation and causation, the problematic online/offline dichotomy, and the vast amount of “poor” and “noisy” data.

Correlation, causation, and underdetermination

It takes only a cursory glance to observe that the Internet today features in most cases of radicalisation to terrorism. Gill et al. (2015) found evidence of online activity related to the ultimate attack or conviction in 61% of cases, and that, from 2012, 76% of actors used the Internet to learn about some aspect of their terrorist activity. Similar results have been found by Gill & Corner (2015) and Gill et al. (2017), seemingly confirming anecdotal evidence from the likes of Sir Norman Bettison, who remarked that “the internet [seems] to feature in most, if not all, of the routes of radicalisation” (Home Affairs Select Committee 2012, 16).

However, this rare empirical research takes us little closer to understanding whether there is a causal connection between the Internet and radicalisation. Even if the Internet were present in every single case of terrorism – which may come to pass as social life becomes ever more entwined with the online sphere – the evidence would still underdetermine the relationship between the two. Philosopher of science Willard Van Orman Quine made the most renowned contribution to this problem in 1951:

“The totality of our so-called knowledge or beliefs… [is so] underdetermined by its boundary conditions, experience, that there is much latitude of choice as to what statements to reevaluate in the light of any single contrary experience. No particular experiences are linked with any particular statements in the interior of the field, except indirectly through considerations of equilibrium affecting the field as a whole”. (Quine 1951, 42-43)

Quine is suggesting that it is impossible to test a single hypothesis in isolation from a host of background (or auxiliary) hypotheses, and that any evidence generated from empirical testing may be insufficient to favour one theory over its competitors. To offer a hypothetical example, it may seem intuitive to suggest that if greater use of the Internet is correlated with a higher chance of being involved in a terrorist incident, this is evidence for a causal explanation of the Internet as a driver, rather than a facilitator, of radicalisation. However, it could just as easily be suggested that becoming radicalised makes actors seek out like-minded people, and that they do so via the most effective method of communication available – the Internet. The underlying point is that correlation only suggests that two phenomena are connected: it could be that either causes the other, or that an entirely separate causal factor links the two.
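The point about competing causal explanations can be illustrated with a toy simulation (the numbers are purely illustrative, not real data): here neither observed variable causes the other, yet a strong correlation appears because both are driven by a third, unobserved factor.

```python
import random

random.seed(42)

# Toy model: a latent factor (e.g. prior commitment to an ideology) drives
# BOTH heavy Internet use AND involvement in extremism. Neither observed
# variable causes the other, yet they end up strongly correlated.
n = 10_000
latent = [random.random() for _ in range(n)]
internet_use = [l + random.gauss(0, 0.2) for l in latent]
involvement = [l + random.gauss(0, 0.2) for l in latent]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(internet_use, involvement)
print(f"correlation without any causal link between the two: r = {r:.2f}")
```

The same correlation would be observed whether the Internet drove radicalisation, radicalisation drove Internet use, or (as here) a third factor drove both; the data alone cannot discriminate between the three.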

For obvious reasons, it is both ethically and practically impossible to conduct a laboratory-style experiment that tests the dependent variable of radicalisation against participants’ use of the Internet while controlling for other variables. Academics will have to search for more novel methods if they wish to posit, or disprove, a causal relationship.

Online/offline dichotomy

The very phrase ‘online radicalisation’ can assume a dichotomy that is problematic. The little empirical data that is available suggests that, while use of the Internet is extremely prevalent in trajectories towards extremism, actors regularly engage in both domains (Gill et al. 2017). In fact, there are very few cases in which an actor has radicalised solely online (Ibid.). Part of the grounding for this dichotomy stems from a belief that radicalisation on the Internet operates on a different ontological plane than it does offline. This can be seen in the surprising number of academics and practitioners who refer to the offline domain as the ‘real world’ (Silber & Bhatt 2007; Weimann & Von Knop 2008; Hussain & Saltman 2014; O’Hara & Stevens 2015; Home Affairs Select Committee 2012; Holt et al. 2015 – to name but a few), a phrase which misses the point: the Internet, and the Web 2.0 in particular, is a social space which interacts with and complements offline interactions. Maura Conway makes this point well:

Today’s Internet does not simply allow for the dissemination and consumption of “extremist material” in a one-way broadcast from producer to consumer, but also high levels of online social interaction around this material. It is precisely the functionalities of the social Web that causes many scholars, policymakers, and others to believe that the Internet is playing a significant role in contemporary radicalization processes. (Conway 2016, 4)

Although there is a degree of pedantry in singling out research for using ill-judged terminology, the wider point is that the online domain cannot be studied in isolation from its offline counterpart (and vice versa). Although identities and habits can differ greatly online (Aresta et al. 2015; Krasodomski-Jones 2017; Gössling & Stavrinidi 2016), they are not separate from, but interconnected with, their offline counterparts.

Difficulty generating and interpreting data: “poor data”, “supply vs demand”, and “noise”

Access to good-quality data is a problem not just for online radicalisation but for the wider field of Terrorism and Extremism Studies. Rich data, as described by Nate Silver, is “data that’s accurate, precise and subjected to rigorous quality control”. Generating rich data is a problem to varying degrees in most of the Social Sciences and Humanities, but in Terrorism Studies scholars often have little-to-no access to extremists from whom to ascertain motivations, and must often rely on open-source secondary data. From this, many data-collection problems follow. For example, in empirical research by both Bakker (2006) and Horgan et al. (2016), data collectors had to code a hard ‘yes’ or ‘no’ when working through open-source data. The lack of readily available rich data means that this high burden of proof (correct given the circumstances) may often not have been met in cases where it ought to have been. The effect is that when empirical research reaches certain conclusions, we can have less confidence in them than we otherwise could.
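The consequence of strict yes/no coding can be made concrete with a quick simulation (all figures hypothetical, not drawn from Bakker or Horgan et al.): when open sources only document a behaviour some of the time, coding ‘yes’ only on documented evidence systematically underestimates the behaviour’s true prevalence.

```python
import random

random.seed(0)

# Hypothetical setup: 60% of actors truly exhibit a behaviour, but open
# sources document the evidence only 50% of the time. Coders record 'yes'
# only when documented evidence exists; undocumented cases become 'no'.
n = 100_000
true_rate = 0.60       # actual prevalence of the behaviour
documentation = 0.50   # chance open sources capture the evidence

coded_yes = sum(
    1 for _ in range(n)
    if random.random() < true_rate and random.random() < documentation
)

print(f"true prevalence:  {true_rate:.0%}")
print(f"coded prevalence: {coded_yes / n:.0%}")
```

Under these illustrative numbers the coded prevalence comes out at roughly half the true rate, which is exactly the kind of systematic undercount that should temper confidence in conclusions drawn from open-source coding.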

For online radicalisation, this problem takes on a new element. It remains difficult to obtain empirical evidence of the process by which an actor becomes radicalised – what von Behr et al. (2013) call the ‘demand side’ – with researchers often having to rely on fragmentary open-source, or occasionally closed-source, data. Such work represents the minority of research. Instead, scholars tend to opt for the ‘supply side’ of online radicalisation, using the extraordinary reach of the Internet to analyse data that is generated by and for extremists. In other words, the former tries to assess how people become radicalised, while the latter assesses what would-be radicals may see on the Internet. Clearly, knowledge about online radicalisation will only progress with some combination of both. However, the small amount of research drawing on relatively “poor” demand-side data takes the field down a cul-de-sac: however much research is conducted on, for example, the social media strategy of IS, there is a limit to what it can tell us about the process people go through in radicalising.

A further difficulty in ascertaining a potentially causal effect of the Internet on radicalisation is that the available data is extremely noisy. The “pathway” to radicalisation, according to different theorists, can involve many different factors, such as group deprivation, identity conflict, and personality characteristics (King & Taylor 2011); or different stages, such as pre-radicalization, self-identification, indoctrination, and “jihadization” (Silber & Bhatt 2007); or the twelve “mechanisms” of radicalisation of McCauley & Moskalenko (2008). It is difficult, if not impossible, to decipher whether the Internet is a ‘signal’ in radicalisation or just noise. As noted above, correlation does not mean causation. It is one thing to note the prevalence with which the Internet is used in a trajectory towards extremism; it is another altogether to make judgements about why the Internet was particularly important in certain cases and not in others, or why self-identification or a period of personal crisis was important, or, indeed, how those three potentially overlapping concepts can even be separated from one another. A large part of the problem with noisy data is evaluation. To evaluate whether a theory is well supported by evidence, it is helpful to test it continually – as any natural scientist would – to get immediate feedback on whether the posited hypothesis is correct. As we have already seen, data is difficult to collect and often incomplete, which makes testing our hypotheses very difficult.

In sum, online radicalisation studies suffer from a number of methodological problems that present a stumbling block to further meaningful research. These problems are not exclusive to this field: the problem of underdetermination underpins all scientific endeavours; the online/offline dichotomy is present in all fields that pertain to the Internet; and data-collection difficulties underpin Terrorism Studies (as well as many other fields). However, each problem takes on a new light when considered in the context of online radicalisation. For those, like this author, who have committed themselves to investigating this subject further, tackling these obstacles will underpin the ability to conduct consequential research in the future.

Joe Whittaker is a joint-PhD Candidate at Swansea University and Leiden University. You can follow him on Twitter @CTProject_JW


Aresta, M. et al., 2015. Portraying the self in online contexts: context-driven and user-driven online identity profiles. Contemporary Social Science, 10(1), pp.70–85.

Bakker, E., 2006. Jihadi Terrorists in Europe: Their characteristics and the circumstances in which they joined the jihad, The Hague.

von Behr, I. et al., 2013. Radicalisation in the Digital Era: The use of the internet in 15 cases of terrorism and extremism.

Conway, M., 2016. Determining the Role of the Internet in Violent Extremism and Terrorism: Six Suggestions for Progressing Research. Studies in Conflict & Terrorism, pp.1–22.

Gill, P. et al., 2017. Terrorist Use of the Internet by the Numbers. Criminology & Public Policy, 16, pp.1–19.

Gill, P. et al., 2015. What are the roles of the Internet in terrorism?

Gill, P. & Corner, E., 2015. Lone Actor Terrorist Use of the Internet and Behavioural Correlates. In L. Jarvis, S. Macdonald, & T. M. Chen, eds. Terrorism Online: Politics Law and Technology. Abingdon, Oxon: Routledge, pp. 35–53.

Gössling, S. & Stavrinidi, I., 2016. Social Networking, Mobilities, and the Rise of Liquid Identities. Mobilities, 11(5), pp.723–743.

Holt, T. et al., 2015. Political radicalization on the Internet: Extremist content, government control, and the power of victim and jihad videos. Dynamics of Asymmetric Conflict, 8(2), pp.107–120.

Home Affairs Select Committee, 2012. Home Affairs Committee Roots of Violent Radicalisation, London.

Horgan, J. et al., 2016. Actions Speak Louder than Words: A Behavioral Analysis of 183 Individuals Convicted for Terrorist Offenses in the United States from 1995 to 2012. Journal of Forensic Sciences, 61, pp.1228–1237.

Hussain, G. & Saltman, E.M., 2014. Jihad Trending: A Comprehensive Analysis of Online Extremism and How to Counter it.

King, M. & Taylor, D.M., 2011. The Radicalization of Homegrown Jihadists: A Review of Theoretical Models and Social Psychological Evidence. Terrorism and Political Violence, 23(4), pp.602–622.

Krasodomski-Jones, A., 2017. Talking To Ourselves ? Political Debate Online and the Echo Chamber Effect, London.

McCauley, C. & Moskalenko, S., 2008. Mechanisms of political radicalization: Pathways toward terrorism. Terrorism and Political Violence, 20(3), pp.415–433.

O’Hara, K. & Stevens, D., 2015. Echo Chambers and Online Radicalism: Assessing the Internet’s Complicity in Violent Extremism. Policy and Internet, 7(4), pp.401–422.

Quine, W.V.O., 1951. Two Dogmas of Empiricism. The Philosophical Review, 60, pp.20–43.

Silber, M.D. & Bhatt, A., 2007. Radicalization in the west: The homegrown threat.

Weimann, G. & Von Knop, K., 2008. Applying the Notion of Noise to Countering Online Terrorism. Studies in Conflict & Terrorism, 31, pp.883–902.



P2P Extremism Project Fall 2016

In the autumn semester, as part of an annual competition run by the US State Department, a group of Swansea University students undertook the challenge of tackling extremism.

Free to focus on any type of extremism we saw fit, we chose the far-right, because there is a growing presence of right-wing extremism both globally and locally in South Wales. By creating an easy-access platform offering information, support, and resources, we hoped to encourage people to educate themselves while becoming further involved in countering right-wing extremism in their local communities. Drawing on the students’ knowledge, spanning media, criminology, and law, we aimed to tackle the far-right in South Wales by developing methods to encourage the “silent majority” to report hate crime.

We hoped to encourage, engage, and educate by asking our audience #howfar?

  • How far is too far?
  • How far would you let it go?
  • How far until you break the silence?

With these questions, we aimed to prompt our audience into thinking about reporting far-right extremism and hate crimes, as increasing the number of people reporting these crimes is what the campaign ultimately aimed to achieve.

The Rationale

The rationale behind our project was based partly on a survey that we put together at the beginning of the campaign. The survey was conducted to gauge Swansea University students’ current awareness of far-right extremism and hate crime in the community, and their own experiences of these. We chose to focus on university students in South Wales, as this was a group we had significant access to and assessed to be the most receptive. We were also conscious of the importance of reaching and engaging the leaders of tomorrow.

The results of the survey showed that:

  • 21 percent of students had witnessed hate crime in South Wales, with 6.5 percent having been a victim of hate crime;
  • 17 percent of students had witnessed far-right extremism in South Wales;
  • 84 percent of those who had been a witness or victim of hate crime or far-right extremism did not report it;
  • only 5 percent indicated knowledge of where to report hate crimes and far-right extremism.

The results suggested a severe lack of awareness surrounding far-right extremism and hate crime, including how to report them, and indicated that students were unlikely to report hate crime.

The Strategy

This reluctance to report hate crime could be explained by the “bystander effect”: the tendency of an individual to fail to intervene in an emergency when others are present, because they assume that ‘others’ will do so – also known as diffusion of responsibility (Darley & Latane, 1968). For example, someone may fail to report a hate crime because they think that another witness will, or because they feel unqualified or unprepared to challenge the situation directly. The result is that incidents of far-right extremism and hate crime go unreported. Our research into this effect found that most studies suggest it can be mitigated by increasing awareness, and by removing the idea that there is such a thing as a silent bystander to a hate crime (van Bommel et al., 2012; van den Bos et al., 2009). We therefore decided to challenge this effect by empowering our target audience to recognise hate crimes, and by providing them with social support and knowledge of how and when to challenge hate crime, when safe to do so.
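The diffusion-of-responsibility mechanism can be sketched with a toy probability model (the 0.8 propensity figure and the diffusion rule are purely illustrative assumptions, not taken from the cited studies): as the number of witnesses grows, the chance that anyone at all reports can fall, even though more people saw the incident.

```python
# Toy model of diffusion of responsibility (illustrative numbers only):
# each witness would report with probability 0.8 if alone, but perceived
# responsibility is assumed to be split evenly across all n witnesses.
def prob_anyone_reports(n, solo_propensity=0.8):
    """Probability that at least one of n witnesses reports the incident."""
    individual = solo_propensity / n          # responsibility diffused over n
    return 1 - (1 - individual) ** n          # chance at least one reports

for n in [1, 2, 5, 20]:
    print(f"{n:>2} witnesses -> P(someone reports) = {prob_anyone_reports(n):.2f}")
```

Under this toy rule a lone witness reports 80% of the time, while with twenty witnesses the chance that anyone reports drops below 60% – a simple illustration of why campaigns target the assumption that “someone else will do it”.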

We hoped to elicit a feeling of self-awareness in our audience, with the aim of increasing the reporting of hate crime. The flagship of our campaign was a video designed to achieve exactly this: to make the audience aware of the prevalence of right-wing extremism and hate crime, but also to show the everyday situations that can lead to further extremism.

The video is available on our Facebook and Twitter pages:

The video features a protagonist who encounters different levels of extremism. We attempted to engage viewers by asking #howfar they would stay silent and reminding them of the acronym SAFE: Silence Always Favours Extremism. Several sensitive issues had to be taken into account during production. First, we wanted to avoid accidentally promoting any right-wing sentiments; second, we were very conscious of not encouraging people to engage in any form of vigilantism. Instead, we wanted to encourage people to educate themselves by directing their attention to the content on our web page.

To further promote our project and engage people, we also approached students at the University to examine their views on extremism. We asked them relevant questions and wrote their answers on a whiteboard alongside our slogan #HowFar; these were subsequently posted on our platforms. Furthermore, we had the idea of creating a product to give to students to help them fight the bystander effect. After positive feedback in a focus group, we created key rings and distributed 500 of them to Swansea University students on campus.

By the end of our campaign, our video had been viewed 26,000 times on Facebook over the course of sixteen days, fulfilling its role as a gateway for the audience to the rest of our project. We also achieved a 2.6 percent engagement rate on Twitter. Both of these results could be considered successes.

Although the competition is over, we hope our message #howfar continues to spread. If you would like to know more about far-right extremism, hate crime and our campaign please visit our website where you will find educational information in our blog and details on how to report far-right extremism and hate crime on our Report It! page.

Blog written by Anna Eva Heilmann and Mads Nyborg Anneberg (Swansea University MA students and members of How Far)


Darley, J. M., & Latane, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 8(4), 377-383.

van Bommel, M., van Prooijen, J. W., Elffers, H., & Van Lange, P. A. (2012). Be aware to care: Public self-awareness leads to a reversal of the bystander effect. Journal of Experimental Social Psychology, 48(4), 926-930.

van den Bos, K., Müller, P. A., & Van Bussel, A. A. (2009). Helping to overcome intervention inertia in bystander’s dilemmas: Behavioral disinhibition can improve the greater good. Journal of Experimental Social Psychology, 45(4), 873-878.


Defend, Deter and Develop: Exploring the UK’s Cybersecurity Strategy

Last week the government revealed the National Cyber Security Strategy. In this document the government sets out its agenda, along with the priorities and objectives that will direct policy, partnership and procurement for the next five years. This is the second such strategy the government has published (the previous one came in 2011), and while some elements remain constant between the two, there are a number of divergences from the earlier strategy. There is also over twice as much money to achieve its objectives – £860m between 2011-2016, rising to £1.9bn between 2016-2021 – reflecting cyber’s continued status as a tier 1 security threat and the expanding role of government outlined in the strategy. The strategy itself is broken down under a number of different headings, each with important implications for the future direction of the UK’s approach to securing cyberspace. Below I outline the main sections of the report and offer some reflections on the government’s new direction.

The Strategic Context

The implications of rapid technological change are acknowledged at an early stage in the strategy; indeed, the pace of this change is deemed to have accelerated markedly since the publication of the 2011 strategy. For the government, this is a reminder that while such developments have ‘offered increasing opportunities for economic and social development’ (p. 17), they come hand-in-hand with reliance and dependency upon the very same technologies and networks. Where there is reliance and dependency, questions of vulnerability soon follow, and the government outlines six predominant threat actors: cyber-criminals, states and state-sponsored groups, terrorists, hacktivists, insiders and script kiddies (less skilled individuals who use readily available programmes made by others).

The government’s assessment places cyber-criminals and states/state-sponsored groups at the top of the threat agenda, while correctly recognising that actors such as terrorists, hacktivists and script kiddies have to date operated in a way best described as disruptive rather than genuinely destructive. Interestingly, where the 2011 strategy made no mention of the ‘insider threat’, the 2016 version identifies and highlights the security implications of those who have privileged access to systems and can cause damage (be it physical, financial or reputational) through either malicious or inadvertent action. While the insider threat is not exclusive to cyberspace, it has been a topic of academic discussion in this context for at least the last 15 years (Cilluffo and Pattak, 2000; Hamin, 2000; Esen, 2002), and its inclusion is presumably an acknowledgement by the government that malicious actors in cyberspace are not all externally positioned states, terrorists or criminals.

The National Response

In light of the strategic context it identifies, the strategy introduces a threefold “defend, deter and develop” approach that seeks to respond to the breadth of the challenge facing the nation. Two elements underpinning this response are of particular note here: the need to conduct the strategy in accordance with a range of different principles, and a commitment to push forward with the strategy in collaboration with other actors and institutions.

The first of these elements refers to the government’s commitment to ensure that the strategy operates in accordance with principles such as national and international law, a rigorous promotion and protection of ‘core values’ (democracy, the rule of law, liberty, etc.) and the preservation and protection of privacy, among many others (pp. 25-26). The commitment to these principles will likely be the focus of intense scrutiny over the next five years, especially given recent rulings such as that of the Investigatory Powers Tribunal on the security services’ operation of an ‘illegal regime’ in its collection of vast amounts of communication data (Travis, 2016).

The second element reflects the government’s belief that this is not a strategy it has to, or indeed should be, championing and implementing on its own. The strategy remarks that in 2011 the focus was on promoting cybersecurity primarily through the market, but accepts that this approach did not bring change fast enough. Nevertheless, this has not prompted an about-turn in which cybersecurity becomes consumed by the government; instead, the strategy states that ‘securing the national cyberspace will require a collective effort’ – one that includes individuals, businesses, government, market forces and the intelligence community (pp. 24-28). Through newly created institutions such as the National Cyber Security Centre, the government hopes to build genuine and effective partnership between the different parties it has identified as necessary to ensuring the nation’s cyber defence.

Achieving genuine collaboration across the public and private sectors and internationally, as well as educating the population and workforce on cyber-hygiene, continues to prove difficult given differing ideas around governance internationally and differing priorities between the public and private sectors. Attention will therefore be on the ‘expanded role for the government’ and the extent to which it can deliver such collaboration and education.

Implementing the Strategy: Defend, Deter and Develop

In implementing this strategy the government has set itself the goal of a UK that is ‘secure and resilient to cyber threats’ by 2021 (p. 25). The first aspect of this is defence: accepting that while ‘it will never be possible to stop every cyber-attack’ (p. 33), it is nevertheless possible to develop layers of defence that significantly reduce the UK’s exposure to cyberattacks. The UK should be far more difficult to attack, and its networks, data and systems resilient. Deterrence is about increasing the cost, and reducing the benefits, of any attack on the UK. The UK should be a ‘hard target’, and the nation will have the means to respond effectively to attacks, be it via international law, the criminal justice system or offensive cyber means of its own. Finally, development refers to the drive to expand the cybersecurity industry and cultivate the necessary skills within our society to ensure the UK keeps pace with cyber-threats. This is a longer-term aim, with the government accepting that assessing success – for example, in ensuring that cybersecurity is taught effectively and that more young people enter the profession – will require a timeframe longer than the next five years.


The National Cyber Security Strategy 2016-2021 is a wide-ranging and ambitious document that looks to respond to a diverse range of perceived threats and the various stakeholders and interests that require attention. The government has set out clear objectives and sought to ensure that these are measurable against a set of metrics, which will provide a good benchmark for progress on cybersecurity over the next five years. In a time of austerity, cybersecurity has secured £1.9bn of public money, and it is of paramount importance that these resources are distributed in a manner that offers good value for money and serves the public interest.

The government has identified that doing this will require investment in defensive means, offensive means, and the skills needed to keep pace with a rapidly transforming domain. Pursuing some of these will necessarily require secrecy on the part of the state, but it remains integral that, throughout the roll-out of the strategy, the aforementioned principles of privacy, liberty, the rule of law and so on remain front and centre, and that the balance does not become skewed towards offensive means ahead of securing public data and improving cyber-literacy. A long-term approach to improving the security of networks and data must accept that collaboration, communication, diplomacy and the development and cultivation of expertise will be vital.

Dr Andrew Whiting is a lecturer in Security Studies at Birmingham City University and a member of the Cyberterrorism Project. You can follow him on Twitter @CTProject_AW.


Cilluffo, F. J. & Pattak, P. B. (2000) ‘Cyber threats: Ten issues for consideration’, Georgetown Journal of International Affairs, 1(1), pp. 41-50.

Esen, R. (2002) ‘Cybercrime a growing problem’, The Journal of Criminal Law, 66(3), pp. 269‑283.

Travis, A. (2016) ‘UK security agencies unlawfully collected data for 17 years, court rules’, The Guardian (17 October 2016), accessed 10 November 2016.

Hamin, Z. (2000) ‘Insider Cyber-Threats: Problems and Perspectives’, International Review of Law, Computers & Technology, 14(1), pp. 105-113.


Notable trends in the use of images in online terrorist magazines

As a new member of the Cyberterrorism Project, I have been very eager to assist with its latest research projects. One of these involves a large dataset, collected by previous Project interns, cataloguing thousands of images taken from the online magazines of terrorist organisations. These organisations have been creating and disseminating online magazines for some time; well-known examples include so-called Islamic State’s Dabiq and Al Qaeda’s Inspire. During my search for existing literature on images in online terrorist magazines, I became aware that the majority of the current literature focuses on the text of these publications, and that there is at present relatively little research into their use of images. Having read the small amount of academic research that has investigated this topic, I found that it reveals some noteworthy themes.

The first theme was noted in research by Winkler, El Damanhoury, Dicker and Lemieux (2016), who examined the recurring use of death-related images, which they term ‘about to die’ images. Although some death-related images are a display of martyrdom, the overwhelming majority depict the terrorist organisation killing its enemies. Some images are taken pre-death and are accompanied by a tagline confirming that the killing took place; others are taken post-death. Both types aim to instil fear and terror in readers by displaying that the death of their enemies is not a threat but a reality. Images taken pre-death without a confirming tagline, such as prisoners walking towards armed terrorists, aim to instil fear and terror differently: they leave the fate of the prisoner to the reader’s imagination, which in turn encourages the reader to consider their own vulnerability to death and to the organisation. The final type of death-related image showcases the range of weaponry and methods of killing (e.g., guns, fire, bombs) that the terrorist organisations have access to, and the traumatic aftermath that follows (e.g., destroyed homes). These images provide the least information about the outcome, and thus invite the reader to engage even more than other images in interpreting the deadly potential of the organisation.

The second theme, found across more than one article, is the use of techniques to create a positive portrayal of the ‘in-group’ (the terrorist organisation) and a negative portrayal of the ‘out-group’ (e.g., the West). The most common technique for portraying the in-group positively was the use of photographs of the organisation carrying out ‘charitable’ work (Wright & Bachmann, 2015). Noted examples include photographs of bags of food accompanied by taglines explaining that the organisation will donate them to individuals in need, and of members of the organisation ensuring that no spoiled food is sold at market and that there are no harmful substances in slaughterhouses (Greene, 2015). These photographs suggest that the organisation cares about the health and welfare of the communities in which it operates (Greene, 2015), and could appeal to individuals around the world in desperate need of a sense of belonging and responsibility. A common technique for portraying the out-group negatively was the use of photographs of innocent civilian victims, including children, killed by the out-group. Such photographs are likely to elicit sympathy towards the cause and anger towards the West (Iyer, Webster, Hornsey, & Vanman, 2014). Lastly, there were notable differences between photographs of terrorist leaders and of Western leaders. Terrorist leaders are often photographed from an angle that has readers ‘looking up’ at the leader; the photographs are usually staged, with the leader wearing religious or military attire and portrayed as in control of the situation. By contrast, photographs of, for example, President Obama are often unstaged, capturing him with a worried or unhappy facial expression that portrays him as vulnerable, weak and unable to handle difficult situations (Otterbacher, 2016).

The last theme, highlighted in research by Sivek (2013), is that images are often presented in the style of Western pop culture. Noted examples are photographs of international leaders captioned with humorous quotes in a handwriting-style font, not dissimilar to the style found in weekly Western fashion magazines, and photographs displaying the steps of bomb-building in a style resembling home-improvement instructions. The same style appears in the magazines’ advertisements, where pictures of terrorists are laid out much as a Western movie poster lays out pictures of actors. This Western style is most likely adopted because it is familiar to the magazines’ target audience, and could therefore help to normalise the jihadi content and make the ideas it presents appear acceptable. Once something is normalised to an individual, the chances of that individual incorporating those views into their own worldview are increased. Moreover, this style could add to the ‘street cred’ of the content by making it appear ‘cool’.

All three themes share the same underlying potential to radicalise and recruit those who are exposed to them. Although a start has been made, a great deal of work remains to be done to better understand the use of images by terrorist organisations in their online magazines. This understanding could prove crucial to the development of new counter-narrative and counterterrorism strategies. I am excited to explore the dataset we hold further and to contribute to this emerging area of research.

Amy-Louise Watkin is the Cyberterrorism Project’s new Project Officer. You can follow her on Twitter @CTP_ALW


Greene, K. J. (2015). ISIS: Trends in Terrorist Media and Propaganda. International Studies Capstone Research Papers, 3, 1-577.

Iyer, A., Webster, J., Hornsey, M. J., & Vanman, E. J. (2014). Understanding the power of the picture: The effect of image content on emotional and political responses to terrorism. Journal of Applied Social Psychology, 44(7), 511-521.

Otterbacher, K. (2016). A New Age of Terrorist Recruitment: Target Perceptions of the Islamic State’s Dabiq Magazine. UW-L Journal of Undergraduate Research, 19, 1-21.

Sivek, S. C. (2013). Packaging inspiration: Al Qaeda’s digital magazine Inspire in the self-radicalization process. International Journal of Communication, 7, 584-606.

Winkler, C. K., El Damanhoury, K., Dicker, A., & Lemieux, A. F. (2016). The medium is terrorism: Transformation of the about to die trope in Dabiq. Terrorism and Political Violence, 1-20.

Wright, J., & Bachmann, M. (2015). Al Qaida’s Persuasive Devices in the Digital World. Journal of Terrorism Research, 6(2).