While it’s widely recognized that online harassment is a threat to free speech, its chilling effects extend well beyond the media. Bad actors (aka abusers) use online harassment to threaten, silence and ultimately exclude people from conversation, activism, work, political or national discourse and institutions.
In this article, we’ll look at how harassment works, starting with a few examples that demonstrate a direct impact on one of democracy’s key features: participation.
Harassed Out of Office
Former California congresswoman Katie Hill resigned from her job in October 2019 after she became the target of cyber exploitation when intimate photos of her were leaked and distributed online.
In her resignation letter, Hill expresses the anxiety we hear from so many of our clients tormented by the uncertainty of what their abuser will do next. She says she’d “live fearful of what might come next and how much it will hurt” so long as she remained in Congress. Now, of course, Hill must deal not only with the man she says abused her and the profound losses to her career and reputation, but also the entire army of trolls his actions activated.
Not surprisingly, many targets of such abuse and manipulation are women and people from underrepresented communities. In 2018, Vermont’s only Black representative was harassed until she resigned. Adding to the severity of the abuse, the threats aimed at both her and her family were racist and occurred both online and offline. The Associated Press reported that “the harassment escalated into a break-in while the family was home, [as well as] vandalism and death threats seen by her young son.”
Vermont citizens lost a vital voice in government to hate and harassment. In the meantime, no charges were brought against the harasser.
In another case, a councilwoman in South Salt Lake, Utah, experienced sustained harassment and stalking (first by someone she knew, then by someone she didn’t know). She pursued restraining orders against the perpetrators, but like many targets of harassment, still suffered stress, fear and anxiety. Furthermore, her time, resources and energy were siphoned off from work to stave off the attacks. The councilwoman told a local news outlet, “All I want to do is be able to serve my city and serve my council term and feel safe doing it.”
Why These Cases Matter
Although these are but three of many harassment cases, they’re cases in which we can draw a direct line between online abuse and its impact on democratic participation. The tendency to self-censor or retreat from work or the public sphere in order to protect one’s safety—whether done consciously or unconsciously—can quickly and easily undermine the pluralistic, dynamic and democratic society we exalt and aspire to maintain. An empty crater is left behind whether the target of abuse is a regular denizen of the Twitterverse or a high-profile public figure.
This leaves us, as a society and body politic, especially vulnerable to manipulation and homogenization. Harassment shapes the stories we read in the paper, the representation we receive in government, and the advocacy we benefit from through civil society, art and activism.
We lose critical voices because of harassment. Whether someone retreats from a social media channel or leaves public office, self-censors or just “lives with it”, they suffer and so do we.
This summer, OnlineSOS released a comprehensive report about the current state of online harassment, including its chilling effects on press freedom and democracy. We also devoted a major portion of the report to the needs of individuals and the current options they have (or, more often, don’t have) when facing harassment. These observations are based on our casework from 2016-2018 and conversations with experts and people who experienced online harassment.
The first step toward addressing the effects of online harassment is better understanding how it works, and what targets need in order to remain both present and safe in the public sphere.
How Online Harassment Works
In our work, we use “online harassment” as an umbrella term that refers to a set of specific, damaging behaviors and tactics. Bad actors employ harassing tactics through many mediums and in multiple locations to harm targets, up to and including silencing and excluding them. The figure below demonstrates this interaction.
What can easily be overlooked is that people may experience more than one tactic at any given time. That’s part of what makes harassment overwhelming for people, who can go into “fight or flight” mode when an abuser or abusers target them.
This fight or flight mode may prevent individuals from communicating effectively or remembering specifics in the reporting process. This adds to another core communication challenge for targeted individuals: a lack of specific language for the many abusive tactics and behaviors they may experience. Toxic comments, non-consensual pornography and doxxing are, by now, well-recognized tactics of harassment. But what happens if someone experiences other tactics that are more difficult to describe and name?
To add to the confusion, platforms may define harassment in terms that are too broad to enforce properly. This has prompted frustration from users who cannot adequately describe the severity and urgency of their situation to moderators or support agents.
Combined with the emotional distress experienced by targets, which can affect their ability to communicate effectively or supply proper documentation, lack of a common vocabulary is yet another barrier for people to get the help they need when experiencing harassment.
Better Outcomes for Targets
“Some people say that you should just delete all your accounts and get offline as if this is your own fault. But like I said, our livelihoods are so online dependent now that that’s just not a solution. It’s almost impossible to report this stuff to social network admins because they are dealing with a huge amount of reports of abuse everyday. They minimize and trivialize your issue.” — Eliza Romero, pop culture writer
We believe that if we can center solutions to harassment on the needs and experiences of targets, then these solutions can offer better outcomes for targets, including more satisfactory resolution and recourse, thereby keeping individuals in the public sphere.
What do people need?
In addition to tactical options that address their specific incident or concerns (e.g. reporting), people experiencing online harassment have three key fundamental needs:
1. Physical safety — guaranteeing the safety of oneself and one’s family
2. Emotional and psychological well-being — managing the emotional impact of online harassment, including uncertainty and anxiety
3. Digital security — securing or managing online accounts to minimize risk of further exposure or harm
To address these needs, an individual commonly takes these three steps (to varying degrees):
- Conducts a threat assessment: Whether done consciously (e.g. by following a guide, like the one we provide on OnlineSOS) or not, the person evaluates their risk and threat(s) to create a plan of action.
- Documents the harassment: This includes saving messages, posts, comments and other harassing content, typically to report to platforms or in case of legal action.
- Communicates with others: This can include written and verbal communication with social media platforms, software providers, law enforcement, friends and family, employers, or other support organizations.
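The documentation step above lends itself to structured record-keeping. As a minimal sketch (all names here are hypothetical, not part of the OnlineSOS report), an evidence log might look like this:

```python
# Illustrative sketch only: a minimal evidence log a target (or a support
# tool) might keep while documenting harassment. All names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class EvidenceItem:
    platform: str          # e.g. "Twitter", "email"
    captured_at: datetime  # when the screenshot/message was saved
    description: str       # what the item shows
    url: str = ""          # permalink, if one exists

@dataclass
class IncidentLog:
    action_plan: str       # outcome of the threat assessment
    items: list[EvidenceItem] = field(default_factory=list)

    def add(self, item: EvidenceItem) -> None:
        self.items.append(item)

    def platforms(self) -> set[str]:
        # A cross-platform pattern is itself useful context for reports.
        return {i.platform for i in self.items}
```

Keeping evidence in one dated, per-platform log makes the later communication step easier: the same record can be handed to a platform, an employer, or law enforcement.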
There are several common decisions an individual also has to make. One might ask:
- Should I respond to the abuser?
- Should I delete or remove the content?
- Should I report the content / behavior and, if yes, to whom?
Understanding this process can help product managers, UX / UI professionals, policy professionals and other stakeholders develop processes that are more helpful and inclusive of people’s needs. Consequently, people may find it easier to document their case and reach a more appropriate and satisfactory resolution.
How can we better communicate what’s going on and improve responses?
In addition, properly defining and categorizing tactics of online harassment can help targets more effectively communicate urgency in the reporting process and help moderators make faster, more context-informed decisions about how to handle reports. Below is an example of how some harassment tactics could be organized for better mutual understanding between users and platforms.
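One hypothetical way to sketch such an organization in code (the category names and tactic labels here are illustrative, not the report’s official taxonomy):

```python
# Hypothetical taxonomy mapping harassment tactics to broader categories,
# so a report can surface the overall pattern rather than one incident.
TACTIC_CATEGORIES = {
    "toxic comments": "content-based",
    "non-consensual pornography": "content-based",
    "doxxing": "exposure-based",
    "impersonation": "identity-based",
    "stalking": "sustained-contact",
    "threats of violence": "sustained-contact",
}

def categorize(tactics: list[str]) -> dict[str, list[str]]:
    """Group reported tactics by category so a moderator sees the pattern."""
    grouped: dict[str, list[str]] = {}
    for tactic in tactics:
        category = TACTIC_CATEGORIES.get(tactic, "uncategorized")
        grouped.setdefault(category, []).append(tactic)
    return grouped
```

A shared vocabulary like this would let a user report “doxxing plus sustained threats” rather than filing three unrelated content flags.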
What else can we uncover?
While content is the mode through which harassment takes place (text, multimedia, operation), context demonstrates the urgency or intensity of a situation. A person can report one nasty Facebook comment, for example, but it’s critical that they can also communicate that they’ve received threatening emails and direct messages on Twitter every day for the past week.
Sustained harassment across platforms paints a very different picture for a moderator / support agent than one comment on its own. Context helps targets receive more appropriate, timely responses and support, whether from an employer, platforms, law enforcement or crisis management groups.
“According to Facebook’s rubric, there was no box to check or form to fill out that would adequately explain the situation. I couldn’t provide the documentation that might show a moderator why this person being able to contact me through their platform was a problem or why blocking him was not a solution. I was stuck.” — Sady Doyle, writing for Elle Magazine in 2017
Context can include:
- Who the abuser is and their relation (if any) to the person targeted: this can reveal a lot about a harasser’s motivations
- When or how often the person is targeted: timing can illuminate why someone is being targeted and to what end
- Where the person is harassed: an abuser’s chosen communication channels can reveal how they intended to maximize impact on a person
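The context fields above could, in principle, feed a triage heuristic in a reporting workflow. This is a toy sketch only; the weights and labels are invented for demonstration and do not come from any platform’s actual system:

```python
# Illustrative only: a toy heuristic showing how context (who, how often,
# where) could inform an urgency hint for moderators. Weights are invented.
def urgency_hint(known_abuser: bool, incidents_past_week: int,
                 platforms: set[str]) -> str:
    score = 0
    if known_abuser:
        score += 2              # a known abuser can signal targeted intent
    if incidents_past_week >= 5:
        score += 2              # sustained, frequent contact
    if len(platforms) > 1:
        score += 2              # cross-platform pursuit of the target
    if score >= 4:
        return "high"
    if score >= 2:
        return "elevated"
    return "standard"
```

Under this sketch, a single nasty comment scores “standard,” while a week of daily threats across email and Twitter scores “high” — exactly the distinction the prose above argues a moderator needs to see.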
There is growing awareness that context is an important piece of the online harassment puzzle. For example, in 2019 Reddit announced new policies against harassment and bullying that explicitly state context will have a larger role in moderation decisions. On the other hand, it’s clear that context can be unique and complex. It’s not easily standardized or categorized. More work is needed in this area to develop processes, designs and reporting workflows that address context in a way that serves both targets of online harassment and moderators and/or support staff.
We recognize that considering context improves outcomes for targets of harassment because it allows their unique situation to be more adequately addressed. Each harassment case and its context are specific to the targeted individual. And because harassment can take on many forms at once, and take place in many locations at once (both online and offline), these become incredibly important pieces of information for platforms, civil society or grassroots organizations, law enforcement or other groups to make assessments and take action that prioritize the health and safety of a targeted individual. With these individual-centered actions, we also prioritize the health and safety of our press, activists, doers, change-makers and society at large.
“Going Offline” Is Not an Option
Some people censor themselves or remove themselves from their work and online lives because they’ve been targeted. (We chronicle some of these cases in our brief history of trolling.) It’s a normal reaction to want to preserve one’s safety and sanity.
This is both tragic and dangerous. Exclusion homogenizes society and its most critical discourse. It fuels prejudices and dangerous ideologies like xenophobia and white supremacy. We rely on a diversity of voices, dynamic debate and a free press to hold powerful entities accountable and maintain some grip on reality at a time when it can feel increasingly adrift.
Besides, as more and more people rely on the internet and connectivity for career and social connection, this is simply not a realistic or just option.
Preserving a pluralistic and dynamic democratic society relies on creating better outcomes for individuals facing online harassment. Tactics of online harassment often parallel those used in the disinformation campaigns that have come to dominate our political consciousness, so understanding and addressing those tactics are a net positive outcome for our political and social frameworks. Furthermore, when targets get the support and recourse they need, their engagement and presence online—and, therefore, in society itself—can be secured.
Whether you’re a writer, politician or just day-to-day internet user, the experience of online harassment has many common denominators. The abuser’s motivation is often the same: to silence, exclude and intimidate. When they reach their goal, both the individual and we, as a society, lose. Heading into 2020, we cannot lose.
For more about the effects of online harassment on individuals, please see Part 2 of our in-depth report, Into 2020: The State of Online Harassment →