
Grant Details

   

2016 Research Grant Program / (A) Joint Research Grants
Grant Number
D16-R-0433
Project Title
Organizational Efforts to Prevent Harmful Online Communication: A cross-national analysis of online platform providers' policies
Representative
Sabine Einwiller
Organization
University of Vienna
Grant Amount
¥2,800,000
Abstract of Project Proposal


    In its early days, the internet was envisioned as an electronic forum where a plurality of voices engage in rational argument. This vision is severely hampered by the abundance of emotional and often aggressive, hateful, and thereby harmful voices disseminated online. Such harmful online communication (HOC), often debated as online "hate speech", aims at harming the dignity or safety of the attacked target. Online platform providers, mainly private organizations, play a central role in confining HOC because, as owners of these spaces, they hold decisive power to intervene. This research focuses on the measures taken by various types of platform providers in six different national environments. In the first phase of the research, organizations' comment policies are content-analyzed. In the second phase, interviews with representatives of the organizations are conducted to assess the effectiveness of these measures. The research generates in-depth insights into organizational policies aimed at curbing HOC and unveils good-practice examples while considering organization type and national context. Based on the findings, best practices and recommendations are derived to help organizations and policy makers tackle this issue and foster the value of "online considerateness".

Summary of Final Report



Harmful online communication (HOC) severely threatens the dignity and safety of persons, social groups, and organizations. Curbing this form of online expression, which is marked by aggressive and destructive diction, is a social responsibility of the organizations that provide platforms for online comments and discussion. This research focused on the measures taken by various types of organizations (e.g., web portals, news media, and online communities) in seven countries (USA, GBR, DEU, AUT, JPN, CHN, KOR). It included an analysis of the HOC policies of 266 organizations (38 per country) and in-depth interviews with 60 representatives of organizations responsible for community and/or social media management.
Results of the policy analysis reveal that organizations share their policies mainly through terms of service (especially Japan and China) or community guidelines/netiquette (especially Germany and Austria). While most policies are easy to find on the websites, those of Japanese and Chinese organizations are the hardest to find. Policies buried in the terms-of-service document are also the hardest to read. Organizations from South Korea provide the most readable and educational policy documents, often containing examples and illustrations. In their policies, organizations from all countries name deleting harmful comments without explanation as their predominant course of action. Regarding possibilities for user action, flagging a comment was the prevailing option.
Interviews reveal that manual inspection of comments is still the "gold standard" for identifying HOC. Before manual review, organizations with large user bases apply some form of machine filtering, often including machine learning. While these tools are advancing, they are far from perfect. Organizations with smaller user bases, and thus generally smaller amounts of HOC, often rely on simple blacklists and human inspection; some rely on manual inspection only. For manual inspection, large organizations outsource the work, while others employ moderators; in online communities, moderators are often volunteers. Chinese organizations are the only ones making extensive use of elaborate upload filters that prohibit posting certain words. Many organizations in the other countries are careful not to delete comments too quickly. Contrary to what they state in their policies, they usually try to moderate through communication with and warnings to users. Deleting or hiding posts is generally done only when comments are clearly offensive, harmful to other users, persons, or groups, or illegal. Policies are helpful moderation instruments, as they instruct and educate users. Nearly all interview partners perceive an increase in HOC in their country, which they often attribute to polarization in society, the fact that people can find like-minded others online for any opinion, and a lowered inhibition threshold for attacking others due to the distance perceived online.
Results of the research's first phase were presented at BledCom 2017. The comprehensive results were presented at the symposium "Against online hate – For more online civility", which was organized by the researchers in April 2018 in Vienna. The symposium served as a forum for fruitful discussion among researchers, organizations that deal with HOC, and NGOs working on the topic, and placed the research findings of this project in a broader context. This helped to advance the research goal: to support organizations and policy makers in tackling HOC and fostering the value of "online considerateness". Recommendations for identifying and managing HOC were derived and can be found in the final report.
