Individuals in diaspora tend to engage and interact with the people of their origin. Afghans in the 1980s and 90s, still one of the largest diaspora communities, used traditional channels such as radio and television to communicate with their loved ones inside Afghanistan. The Internet shifted this mode of communication toward near real-time updates. Diaspora and local communities engage in the same Social Awareness Streams (SAS), but there is a lack of research on the differences and similarities between these communities. This paper investigates the differences in emotional disclosure and expression between diaspora and local communities in Afghanistan using data from 'Afghanistan My Passion', the largest public Afghan Facebook community page. We investigate a corpus of 2,165 Persian-language words, considering the location and gender of the social media user. This work provides the first analysis towards understanding differences in diaspora and local communities' emotional reactions to social media content and extends the body of literature on Persian-language sentiment analysis.
To what extent do contemporary web technologies that seek to personalize users' browsing experience trap them in filter bubbles? Existing research has yielded mixed results on the possibility of such influence, but empirical research in this area has been entirely cross-sectional. In this paper, we report results from a longitudinal controlled web experiment conducted to determine whether passive recommendations embedded in common computer user interfaces (UIs) reinforce users' habitual browsing behaviors, to the detriment of the diversity of the set of pages they tend to visit online. Inspired by classical demonstrations of the part-set cueing effect in memory, our experimental design manipulates the behavior of the 'New Tab' page for consenting volunteers over a two-month period in randomized time blocks of equal length. Analysis of their browsing behavior shows that users visit on average 15% fewer unique web pages while their browser's 'New Tab' page displays recommendations based on conventional frequency- and recency-based algorithms than when the display is left blank. This effect is seen systematically for all participants in our study. Further analysis of browsing behavior in this experiment clearly identifies the source of the difference between these modes of browsing: users consistently visit a greater diversity of web pages by typing URLs into the URL/search bar when there are no recommendations on the 'New Tab' page. Finally, using a simulation study that models user behavior as a random walk on a graph, we extract quantitative predictions about the extent to which the discovery of new sources of information may be hindered by the concentration of browsing driven by such personalized 'New Tab' recommendations in classical browser UIs.
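The random-walk framing above can be illustrated with a minimal sketch; the graph, restart rule, and all parameters here are hypothetical stand-ins rather than the authors' model. Browsing is a walk on a web graph that occasionally opens a 'New Tab', which either sends the user back to their habitual (most-visited) page, simulating recommendations, or lets them jump anywhere, simulating a blank page:

```python
import random

def simulate_browsing(adj, steps, restart_prob, recommend, seed=0):
    """Random walk on a web graph `adj` (node -> list of linked nodes).
    On a 'New Tab' restart, either jump to the most-frequently visited
    page (recommendation mode) or to a uniformly random page (blank tab).
    Returns the number of unique pages visited."""
    rng = random.Random(seed)
    nodes = list(adj)
    current = nodes[0]
    visits = {current: 1}
    for _ in range(steps):
        if rng.random() < restart_prob:
            if recommend:
                current = max(visits, key=visits.get)  # habitual page
            else:
                current = rng.choice(nodes)            # free exploration
        else:
            current = rng.choice(adj[current])         # follow a link
        visits[current] = visits.get(current, 0) + 1
    return len(visits)
```

On a toy ring graph, the blank-tab mode discovers substantially more unique pages than the recommendation mode, in the direction of the effect the abstract reports.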
Recently, there have been many research efforts aiming to understand the fake news phenomenon and to identify typical patterns and features of fake news. Yet, the real discriminating power of these features is still unknown: some are more general, but others perform well only with specific data. In this work, we conduct a highly exploratory investigation that produced hundreds of thousands of models from a large and diverse set of features. These models are unbiased in the sense that their features are randomly chosen from the pool of available features. While the vast majority of models are ineffective, we were able to produce a number of models that yield highly accurate decisions, thus effectively separating fake news from actual stories. Specifically, we focused our analysis on models that rank a randomly chosen fake news story higher than a randomly chosen fact with probability greater than 0.85. For these models we found a strong link between features and model predictions, showing that some features are clearly tailored for detecting certain types of fake news, thus evidencing that different combinations of features cover specific regions of the fake news space. Finally, we present an explanation of factors contributing to model decisions, thus promoting civic reasoning by complementing our ability to evaluate digital content and reach warranted conclusions.
An important political and social phenomenon discussed in several countries, such as India and Brazil, is the use of WhatsApp to spread false or misleading content. However, little is known about the information dissemination process in WhatsApp groups. Attention affects the dissemination of information in WhatsApp groups, determining which topics or subjects are more attractive to the participants of a group. In this paper, we characterize and analyze how attention propagates among the participants of a WhatsApp group. An attention cascade begins when a user asserts a topic in a message to the group, which can include written text, photos, or links to online articles. Others then propagate the information by responding to it. We analyzed attention cascades in more than 1.7 million messages posted in 120 groups over one year. Our analysis focused on the structural and temporal evolution of attention cascades as well as on the behavior of the users that participate in them. We found specific characteristics in cascades associated with groups that discuss political subjects and false information. For instance, we observe that cascades containing false information tend to be deeper, reach more users, and last longer in political groups than in non-political groups.
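As a rough illustration of the structural measurements described above, an attention cascade can be represented as a reply forest and its depth and size computed directly. The flat parent-map representation below is an assumption made for this sketch, not the authors' pipeline:

```python
def cascade_stats(parent):
    """Structural stats for an attention cascade given as a reply tree:
    `parent` maps message id -> id of the message it responds to
    (None for the root message that started the cascade).
    Returns (depth in levels, including the root; number of messages)."""
    def depth(m):
        d = 0
        while parent[m] is not None:  # walk up the reply chain
            m = parent[m]
            d += 1
        return d
    return max(depth(m) for m in parent) + 1, len(parent)
```

For example, a root with two direct replies and one reply-to-a-reply forms a cascade of depth 3 with 4 messages.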
Introduction. Cyberbullying, a form of abusive online behavior that is not well defined, is a repetitive process, i.e., a sequence of harassing messages sent from a bully to a victim over a period of time with the intent to harm the victim. Numerous automated, data-driven approaches have been developed for the automatic classification of cyberbullying instances, with emphasis on classification accuracy. While the importance of highly accurate classifiers is undoubted, a key pitfall of existing cyberbullying detection methods is that (i) they disregard the repetitive nature of the harassing process, and (ii) they work retrospectively (i.e., after a cyberbullying incident has occurred), making it difficult to intervene before an interaction escalates. Motivated by the scarcity of methods to anticipate cyberbullying, we focus on cyberbullying prediction with the goal of reducing the time from detection to intervention. Methods. We formulate the prediction of the number of harassing comments a media session will receive over a period of time as a regularized multi-task regression problem. In our formulation, we consider two settings where (i) the progression of cyberbullying behavior from some time point in the near future to subsequent time points further into the future is modeled given limited knowledge of the recent past, and (ii) increasingly more historical data is accumulated to improve prediction accuracy. To validate our approach, we conduct an extensive experimental evaluation on a real-world dataset from Instagram, the online social media platform with the highest percentage of users reporting experiences of cyberbullying. Results. Intuitively, the larger the number of observed comments in the recent past of a media session, the better the predictive power of our approach. The downside of using more historical data is that decisions must be postponed until more comments are collected.
Therefore, the trade-off between accuracy and decision speed is examined. In general, our approach outperforms competing approaches by up to 31.4% and 46.2% in Recall and Matthews correlation coefficient, respectively. Discussion. Our approach can be used to effectively prioritize media sessions for increased monitoring as time goes by, or for immediate intervention before a conversation escalates. In future work, we plan to incorporate additional features and investigate the generalizability of our approach on other key social networking venues where users frequently become victims of cyberbullying. Beyond cyberbullying prediction, our work is, to the best of our knowledge, the first to provide insights into the forecasting performance of multi-task regression as a function of the prediction horizon and the length of available historical data. We thus believe that our work can serve as a reference point on the forecasting performance of multi-task regression for both researchers and practitioners.
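For reference, the Matthews correlation coefficient used in the evaluation above is a standard chance-corrected measure that is robust to class imbalance; a minimal self-contained implementation (not the authors' code):

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (0/1).
    Ranges from -1 (total disagreement) to +1 (perfect prediction);
    0 corresponds to chance-level performance."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0  # degenerate case: a class or prediction is absent
    return (tp * tn - fp * fn) / denom
```

Unlike accuracy, MCC only reaches 1.0 when both classes are predicted well, which is why it is a common choice when harassing sessions are rare.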
The arms race between spambots and spambot detectors consists of several cycles (or generations): a new wave of spambots is created (and new spam is spread), new spambot filters are derived, and old spambots mutate (or evolve) into new species. Recently, with the diffusion of the adversarial learning approach, a new practice is emerging: deliberately manipulating target samples in order to build stronger detection models. Here, we manipulate generations of Twitter social bots to obtain, and study, their possible future evolutions, with the aim of eventually deriving more effective detection techniques. In detail, we propose and experiment with a novel genetic algorithm for the synthesis of online accounts. The algorithm makes it possible to create synthetic, evolved versions of current state-of-the-art social bots. Results demonstrate that the synthetic bots do evade current detection techniques. However, they provide all the elements needed to improve such techniques, making possible a proactive approach to the design of social bot detection systems.
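A generic genetic-algorithm skeleton of the kind the abstract describes might look as follows. The fixed-length feature vectors and the fitness function are hypothetical placeholders for this sketch; in the paper's setting, fitness would reflect how well a synthetic account evades a bot detector:

```python
import random

def evolve(population, fitness, generations, mut_rate=0.1, seed=0):
    """Minimal genetic algorithm: evolve fixed-length numeric feature
    vectors (a crude stand-in for synthetic account behaviours).
    `fitness(vec)` returns a score; higher means harder to detect.
    The top half survives each generation (elitism), so the best
    individual never gets worse."""
    rng = random.Random(seed)
    pop = [list(ind) for ind in population]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: max(2, len(pop) // 2)]
        children = []
        while len(survivors) + len(children) < len(pop):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(a))
            child = a[:cut] + b[cut:]                # one-point crossover
            if rng.random() < mut_rate:
                i = rng.randrange(len(child))
                child[i] += rng.uniform(-0.5, 0.5)   # small mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

Because survivors carry over unchanged, the returned individual is at least as fit as the best member of the initial population.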
Crowd financing is a burgeoning phenomenon that promises to improve access to capital by enabling borrowers with limited financial opportunities to receive small contributions from individual lenders towards unsecured loan requests. Faced with information asymmetry about borrowers' credibility, individual lenders bear the entire loss in case of loan default. Predicting loan payment is therefore crucial for lenders and for the sustainability of these platforms. To this end, we examine whether the "wisdom" of the lending crowd can provide reliable decision support with respect to projects' long-term success. Using data from Prosper.com, we investigate the association between the dynamics of lending behaviour and successful loan payment through interpretable classification models. We find evidence for collective intelligence signals in lending behaviour and observe variability in crowd wisdom across loan categories. We find that the wisdom of the lending crowd is most prominent in the auto loan category, but it is statistically significant for all other categories except student debt. Our study contributes new insights into how signals deduced from lending behaviour can improve the efficiency of crowd financing, thereby contributing to economic growth and societal development.
The concept of Social Machines has become an established lens for describing the sociotechnical systems of Web Science, and has been applied to some archetypical cyberphysical systems. In this paper we apply this lens to a larger system, the location-based online augmented reality game Pokémon GO. The contributions are an illustrative application of the descriptive Social Machines lens to a system of this scale and type, and the use of simulation as a method for an executable description, including the use of an ontology to partially represent the universe of the game.
Introduction: Citizen involvement in scientific projects has become a way of encouraging curiosity and a greater understanding of science, whilst providing unprecedented engagement between professional scientists and the general public. In this paper we specifically focus on the impact of online citizen science (OCS) participation on the science education of primary school age children in New Zealand. Methods: We use four exploratory cases within a broader research project to examine the nature and impact of embedding OCS projects that use web-based online crowdsourcing and collaboration tools within the classroom environments of primary school science learners. Results & Discussion: Our findings provide insights into primary school teachers' perceptions of OCS. They offer initial insights into how teachers embed OCS in a classroom environment, and why this improves science learning aptitudes, inquisitiveness, and capabilities in primary school age children. We also observe that successfully embedding OCS projects in education is affected by the project context, how the results are disseminated, and inclusivity in socio-cultural aspects.
Research is being conducted to understand social and innovative behavior in human interactions on the Web as a biological ecosystem. Keystone species in a biological ecosystem are defined as species that significantly impact the ecosystem if they are removed from the system, irrespective of their small biomass. Identifying keystone species is an important problem, as they play an important role in maintaining diversity and stability in ecosystems. A human community in a web system also possesses keystone species: they can be influential users or content on the web system, even though their commitments to the web are relatively small. We use data from an online bulletin board and identify keystone threads (= "species") that have a large impact if they are removed or become unpopular, despite their small population size. Here, the removal of a thread can be regarded as a state in which users pay no attention to, and take no actions on, the thread. The multivariate Hawkes process is used to measure the degree of influence among all threads and to calculate the overall activity level on the online bulletin board. Our analysis confirms that keystone threads do exist in the system, and the number of keystone species appears to increase as the service matures. The keystone concept for online services proposed in this study offers a new viewpoint on their stable operation.
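The multivariate Hawkes machinery mentioned above can be sketched minimally. The exponential kernel and all parameter values below are illustrative assumptions, and the removal function is only a crude proxy for the keystone-removal experiment, not the authors' estimation procedure:

```python
import math

def intensity(mu, alpha, beta, events, d, t):
    """Conditional intensity of thread d at time t under a multivariate
    Hawkes process with exponential kernels:
        lambda_d(t) = mu[d] + sum over past events (k, s) of
                      alpha[d][k] * beta * exp(-beta * (t - s))
    `events` is a list of (thread, timestamp) pairs."""
    lam = mu[d]
    for k, s in events:
        if s < t:
            lam += alpha[d][k] * beta * math.exp(-beta * (t - s))
    return lam

def total_intensity_without(mu, alpha, beta, events, threads, t, removed):
    """Overall activity level at time t when the `removed` thread and its
    events are ignored; a large drop marks a keystone candidate."""
    kept = [(k, s) for k, s in events if k != removed]
    return sum(intensity(mu, alpha, beta, kept, d, t)
               for d in threads if d != removed)
```

Comparing the total intensity with and without each thread gives a simple removal-impact score in the spirit of the keystone analysis.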
The 'manosphere' has been a recent subject of feminist scholarship on the web. Serious accusations have been levied against it for its role in encouraging misogyny and violent threats towards women online, as well as for potentially radicalising lonely or disenfranchised men. Feminist scholars evidence this through a shift in the language and interests of some men's rights activists on the manosphere, away from traditional subjects of family law or mental health and towards more sexually explicit, violent, racist, and homophobic language. In this paper, we study this phenomenon by investigating the flow of extreme language across seven online communities on Reddit with openly misogynistic members (e.g., Men Going Their Own Way, Involuntarily Celibates), and investigate if and how misogynistic ideas spread within and across these communities. Grounded in feminist critiques of language, we created nine lexicons capturing specific misogynistic rhetoric (Physical Violence, Sexual Violence, Hostility, Patriarchy, Stoicism, Racism, Homophobia, Belittling, and Flipped Narrative) and used these lexicons to explore how language evolves within and across misogynistic groups. This analysis was conducted on 6 million posts from 300K conversations created between 2011 and December 2018. Our results show increasing patterns of misogynistic content and users, as well as violent attitudes, corroborating existing feminist theories that the amount of misogyny, hostility, and violence in the manosphere is steadily increasing.
Identifying, investigating, and potentially disrupting organised criminal networks is difficult. Data gathered by law enforcement and regulatory authorities are often inconsistent, incomplete, and inaccurate. Computational criminology attempts to address these limitations by modelling the behaviour of virtual "humans" in virtual places. However, virtual humans are rule-based and can never fully replicate actual human behaviour. This study takes a new approach by utilising the benefits of the observable and controllable environment of virtual worlds while examining real people and real behaviour. To do this, it explores real people's behaviour in a virtual environment similar to the circumstances found in organised criminal networks. Massively Multiplayer Online (MMO) video games with player-driven markets present real humans with similar circumstances in controlled and observable virtual environments. Market conditions within both MMO games and illicit markets are characterised by trust, reputation, and, when all else fails, violence. Overall, MMO games are a novel data source to identify, investigate, and provide prevention strategies for the problem of organised criminal networks. Using social network analysis of real-world players from data broadcast by EVE Online (an MMO), spatial, temporal, and behavioural patterns of both offenders and victims are examined. The data broadcast from the game are consistent, complete, and accurate, and provide a much larger sample size than is obtainable in real-world environments. The data set covers a seven-year period containing approximately 7M-9M events. It captures the activities of ~600,000 individuals and ~2,500 groups. This paper proposes that video games can approximate the circumstances found in the real world and that human agents can and do act in the most rational way to maximise success in those circumstances.
Overall, MMO games offer a powerful social science data generator that provides insights into real-world social problems (such as organised criminal networks) that are typically difficult to examine.
Hate speech, offensive language, sexism, racism, and other types of abusive behavior have become a common phenomenon on many online social media platforms. In recent years, such diverse abusive behaviors have been manifesting with increased frequency and intensity. Despite social media platforms' efforts to combat online abusive behaviors, the problem is still apparent. In fact, these platforms have entered an arms race with the perpetrators, who constantly change tactics to evade the detection algorithms they deploy. Such algorithms, not disclosed to the public for obvious reasons, are typically custom-designed and tuned to detect only one specific type of abusive behavior, and usually miss other related behaviors. In the present paper, we study this complex problem by following a more holistic approach, which considers the various aspects of abusive behavior. We focus on Twitter, due to its popularity, and analyze user and textual properties from different angles of abusive posting behavior. We propose a deep learning architecture that utilizes a wide variety of available metadata and combines it with automatically extracted hidden patterns within the text of the tweets to detect multiple abusive behavioral norms which are highly inter-related. The proposed unified architecture is applied in a seamless and transparent fashion, without any change to the architecture but only the training of a model for each task (i.e., each type of abusive behavior). We test the proposed approach on multiple datasets addressing different abusive behaviors on Twitter. Our results demonstrate high performance across all datasets, with AUC values ranging from 92% to 98%.
In this keynote I will present a number of works from the research team Wimmics, which has been studying the challenges of bridging social semantics and formal semantics on the Web. These contributions address some of the challenges in connecting AIs to the Web.
When shopping for a lender, most consumers choose a financial institution based on just a few key factors: the interest rate, the distance to the lender's nearest branch, an existing relationship with the lender, and the lender's reputation. But most consumers fail to consider an important element that will be key to their long-term satisfaction: whether the customer service provided by the lender is commensurate with the price. Our underlying assumption in this paper is that a consumer's personality traits are associated with the issues they will face. We use state-of-the-art cross-domain word vector space mapping and representative trait vectors in this space to estimate ten personality traits corresponding to each text, and use topic modeling to find the topics in a complaint. We then use two modified collaborative topic regression methods to create two complaint topic trait spaces for each lender, and test our underlying assumption with statistical tests for this unsupervised learning problem in three cases: mortgage loans, student loans, and payday loans. We propose that lenders could be recommended to a specific user by analyzing this space, recommending the lender with the fewest complaints per retail customer in the complaint-space neighborhood of that customer. We suggest future work that may be undertaken for the three types of loans, including the possibility that lenders evaluate their service from a customer's perspective to track customer satisfaction over time, and extensions to other parts of the service economy.
Social media can impact how people feel both in the short and long term. Most studies in this area have focused on longer-term feelings of happiness and life satisfaction, but the immediate impact on users' sense of well-being and anxiety levels is not well studied. In this work, we had 1,880 subjects complete surveys about their immediate sense of well-being and contentment and then view one of three possible social media pages: a collection of happy dog pictures and videos; a collection of non-dog related images and videos that generally were funny, non-political, and popular; and Donald Trump's Twitter account. After viewing this content, they were re-surveyed on their sense of well-being. We found viewing dogs led to a large and significant increase in the sense of well-being, viewing popular content led to a smaller but still significant improvement, and viewing Donald Trump's Twitter account led to a very large decrease in sense of well-being. This work has implications for recommender systems, which may consider these results as a step toward optimizing user well-being rather than simply engagement, and for users who may want to manage their own happiness through social media channels and following patterns.
Social life is full of paradoxes. Our intentional actions often trigger outcomes that we did not intend or even envision. How do we explain those unintended effects, and what can we do to regulate them? In this talk, I will discuss research that illustrates how data science and digital traces help us solve the puzzle of unintended consequences, offering a solution to a social paradox that has intrigued thinkers for centuries. Communication has always been the force that makes a collection of people more than the sum of individuals, but only now can we explain why: digital technologies have made it possible to parse the information we generate by being social in new, imaginative ways. Yet we must look at that data, I will argue, through the lens of theories that capture the nature of social life. The technologies we use, in the end, are also a manifestation of the social world we inhabit. In this talk I will discuss how the unpredictability of social life relates to communication networks, social influence, and the unintended effects that derive from individual decisions. I will focus on empirical research in the field of political communication, with special emphasis on the analysis of social media, mobilization dynamics, exposure to information, and news consumption. I will describe how communication generates social dynamics in aggregate (leading to episodes of "collective effervescence") and I will discuss the mechanisms that underlie large-scale diffusion, when information and behavior spread "like wildfire." I will use the theory of networks to illuminate why collective outcomes can differ drastically even when they arise from the same individual actions. By opening the black box of unintended effects, and how they relate to communication dynamics, I hope to identify strategies for social intervention, illuminate policy implications, and show how data science and the analysis of digital traces embolden critical thinking in a world that is constantly changing.
Today, more than ever, social networks and micro-blogging platforms are used as tools for political exchange. However, these platforms are biased in several aspects, from their algorithms to the population participating in them. With respect to the latter, we analyze the discussion on Twitter about an abortion bill in Chile, proposed in January 2015, and approved as law in September 2017. We find that Twitter has strong biases in population representation. Still, when carefully paired with demographic attributes, Twitter-based insights on the characteristics of political discussion match those from national-level surveys.
We present an efficient graph-based method for filtering tweets relevant to a given breaking news event from large tweet streams. Unlike existing models that require manual effort or strong supervision, or do not scale, our method can automatically and effectively filter incoming relevant tweets starting from just a small number of past relevant tweets. Extensive experiments on both synthetic and real datasets show that our proposed method significantly outperforms other methods in filtering relevant tweets while being as fast as the most efficient state-of-the-art method.
The recently introduced General Data Protection Regulation (GDPR) requires that when information that could be used to identify individuals is obtained online, their consent must be obtained. Among other things, this affects many common forms of cookies, and users in the EU have been presented with notices asking for their approval of data collection. This paper examines the prevalence of third-party cookies before and after GDPR using two datasets: accesses to the top 500 websites according to Alexa.com, and weekly data on cookies placed in users' browsers by websites accessed by 16 users in the UK and China over one year. We find that on average the number of third parties dropped by more than 10% after GDPR, but when we examine real users' browsing histories over a year, we find no material reduction in the long-term numbers of third-party cookies, suggesting that users are not making use of the choices offered by GDPR for increased privacy. Also, among websites which offer users a choice in whether and how they are tracked, accepting the default choices typically ends up storing more cookies on average than websites which provide a notice of stored cookies without giving users a choice, or those that do not provide a cookie notice at all. We also find that top non-EU websites have fewer cookie notices, suggesting higher levels of tracking when visiting international sites. Our findings have deep implications both for understanding compliance with GDPR and for understanding the evolution of tracking on the web.
Timing of supply and demand for information on a topic does not always coincide. Sometimes one of them rises first, then the other follows. We show a classification of hot topics on the Web in the past based on the temporal relationship between their supply and demand, and also show that our classification is useful for predicting the timing of supply peaks in some cases.
Advances in blockchain and distributed ledger technologies are driving the rise of incentivized social media platforms over blockchains, where no single entity can take control of the information and users can receive cryptocurrency as rewards for creating or curating high-quality content. This paper presents an empirical analysis of Steemit, a key representative of the emerging incentivized social media platforms over blockchains, to understand and evaluate the actual level of decentralization and the practical effects of the cryptocurrency-driven reward system in these modern social media platforms. Similar to Bitcoin, Steemit is operated by a decentralized community, in which 21 members are periodically elected to cooperatively operate the platform through the Delegated Proof-of-Stake (DPoS) consensus protocol. Our study of 539 million operations performed by 1.12 million Steemit users during the period 2016/03 to 2018/08 reveals that the actual level of decentralization in Steemit is far lower than the ideal level, indicating that the DPoS consensus protocol may not be a desirable approach for establishing a highly decentralized social media platform. In Steemit, users create content as posts, which get curated based on votes from other users. The platform periodically issues cryptocurrency as rewards to creators and curators of popular posts. Although such a reward system was originally driven by the desire to incentivize users to contribute high-quality content, our analysis of the underlying cryptocurrency transfer network on the blockchain reveals that more than 16% of cryptocurrency transfers in Steemit are sent to curators suspected of being bots, and also reveals the existence of an underlying supply network for these bots, both suggesting a significant misuse of the current reward system in Steemit.
Our study is designed to provide insights into the current state of this emerging blockchain-based social media platform, including the effectiveness of its design and the operation of its consensus protocol and reward system.
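The abstract does not specify how the "level of decentralization" is quantified; one common illustrative measure for how concentrated stake, votes, or rewards are among participants is the Gini coefficient, sketched here as a self-contained helper (an assumption for illustration, not the paper's metric):

```python
def gini(values):
    """Gini coefficient of a non-negative distribution: 0 means influence
    is spread perfectly evenly (fully decentralized), and values near 1
    mean influence is concentrated in a few participants."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula based on the sorted cumulative distribution.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n
```

For instance, four participants with equal stake score 0, while one participant holding everything scores 0.75 for n = 4 (approaching 1 as n grows).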
The democratic role of the press relies on maintaining independence, ensuring citizens can access controversial materials without fear of persecution, and promoting transparency. However, as news has moved to the web, reliance on third parties has centralized revenue and hosting infrastructure, fostered an environment of pervasive surveillance, and led to widespread adoption of opaque and poorly disclosed tracking practices. In this study, 4,000 US-based news sites, 4,000 non-news sites, and privacy policies for 1,892 news sites and 2,194 non-news sites are examined. We find news sites are more reliant on third parties than non-news sites, user privacy is compromised to a greater degree on news sites, and privacy policies lack transparency with regard to observed tracking behaviors. Overall, findings indicate the democratic role of the press is being undermined by reliance on the "surveillance capitalism" funding model.
In his seminal work 'Stability in Competition', Hotelling developed a model for identifying the spatial equilibrium of two competing firms such that they maximize their market share. He considered a linear area of fixed length and showed that in this setting the two competing firms should be located side-by-side in the middle of the line. Hotelling's study has since been adopted and used to analyze and explain other phenomena in a variety of fields. However, the linear city model is purely theoretical, without any empirical validation. The goal of this study is to explore Hotelling's Law in its original space, that of firm competition, and identify possible adjustments needed to describe its application and validity in a non-linear city. In particular, we collect data from location-based social networks that include information on the number of customers in a venue, and we compare them with the expectations of Hotelling's original law. Overall, we identify that at a large geographic scale there is a correlation between market share and inter-venue distance, which is consistent with Hotelling's Law. However, as we zoom into smaller scales there are deviations from the expectations of Hotelling's Law, possibly due to higher sensitivity to the necessary assumptions. Our findings enhance the literature on optimal location placement for a venue and can provide additional insights for owners with regard to the linear city model.
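The large-scale correlation claim above amounts to computing a correlation between each venue's market share and its distance to nearby competitors; a minimal self-contained sketch using the standard Pearson coefficient (the choice of statistic and the variable pairing are assumptions for illustration, not necessarily the authors' exact method):

```python
import math

def pearson(xs, ys):
    """Pearson correlation between paired samples, e.g. a venue's
    market share vs. its distance to the nearest competing venue.
    Returns a value in [-1, 1]."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A positive coefficient over city-level aggregates would be consistent with the Hotelling-style relationship between market share and inter-venue distance.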
Link prediction has emerged as an important problem with the recent interest in studying large-scale social graphs. User interactions on social networks can be represented as signed directed graphs, where the links represent the nature of the relation between nodes. Positive links correspond to trust or friendship among nodes. Negative links typically map to distrust or antagonism among the graph nodes, and are useful in analyzing social graphs when coupled with positive links. In this paper, we study the prediction of positive and negative links in a large-scale graph. We propose a classification-after-clustering approach and design maximum margin classifiers which can be formulated as standard Second Order Cone Programs. Our proposed approach is immune to class imbalance and scales well to large graphs. We apply it to various link prediction scenarios, separating negative links from positive links both globally and within a neighborhood. Empirical evaluations are reported on a real-world social graph.
Hate speech is considered to be one of the major issues currently plaguing online social media. With online hate speech culminating in gruesome scenarios like the Rohingya genocide in Myanmar, anti-Muslim mob violence in Sri Lanka, and the Pittsburgh synagogue shooting, there is a dire need to understand the dynamics of user interaction that facilitate the spread of such hateful content. In this paper, we perform the first study that looks into the diffusion dynamics of the posts made by hateful and non-hateful users on Gab (Gab.com). We collect a massive dataset of 341K users with 21M posts and investigate the diffusion of the posts generated by hateful and non-hateful users. We observe that the content generated by hateful users tends to spread faster, farther, and reach a much wider audience as compared to the content generated by normal users. We further analyze the hateful and non-hateful users on the basis of their account and network characteristics. An important finding is that the hateful users are far more densely connected among themselves. Overall, our study provides the first cross-sectional view of how hateful users diffuse hate content in online social media.
Within online social networks (OSNs), many of our supposedly online friends may instead be fake accounts called social bots, part of large groups that purposely re-share targeted content. Here, we study retweeting behaviors on Twitter, with the ultimate goal of detecting retweeting social bots. We collect a dataset of 10M retweets. We design a novel visualization that we leverage to highlight benign and malicious patterns of retweeting activity. In this way, we uncover a "normal" retweeting pattern that is peculiar to human-operated accounts, and suspicious patterns related to bot activities. Then, we propose a bot detection technique that stems from this exploration of retweeting behaviors. Our technique, called Retweet-Buster (RTbust), leverages unsupervised feature extraction and clustering. An LSTM autoencoder converts the retweet time series into compact and informative latent feature vectors, which are then clustered with a hierarchical density-based algorithm. Accounts belonging to large clusters characterized by malicious retweeting patterns are labeled as bots. RTbust obtains excellent detection results, with F1 = 0.87, whereas competitors achieve F1 ≤ 0.76. Finally, we apply RTbust to a large dataset of retweets, uncovering 2 previously unknown active botnets with hundreds of accounts.
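The RTbust pipeline (learned latent features, then hierarchical density-based clustering) can be illustrated with much simpler stand-ins. The sketch below runs on synthetic data only: it replaces the LSTM-autoencoder latent vectors with two hand-crafted inter-retweet-gap statistics, and the hierarchical algorithm with a minimal flat DBSCAN. None of this is the paper's actual implementation.

```python
import math
import statistics

def features(timestamps):
    # Stand-in for the learned latent vector: mean and population
    # stdev of the inter-retweet gaps (in seconds).
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return (statistics.mean(gaps), statistics.pstdev(gaps))

def dbscan(points, eps, min_pts):
    # Simplified (flat, not hierarchical) density-based clustering;
    # returns one cluster id per point, -1 meaning noise.
    def near(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]
    labels = [None] * len(points)
    cid = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neigh = near(i)
        if len(neigh) < min_pts:
            labels[i] = -1          # noise; may later join as a border point
            continue
        cid += 1
        labels[i] = cid
        seeds = [j for j in neigh if j != i]
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cid     # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            jn = near(j)
            if len(jn) >= min_pts:  # core point: keep expanding
                seeds.extend(jn)
    return labels

# Synthetic accounts: four "bots" retweeting at near-fixed intervals,
# two "humans" with irregular gaps.
accounts = [
    [0, 60, 120, 180, 240],
    [0, 61, 122, 183, 244],
    [0, 59, 118, 177, 236],
    [0, 60.5, 121, 181.5, 242],
    [0, 100, 600, 700, 1500],
    [0, 30, 530, 570, 2570],
]
labels = dbscan([features(t) for t in accounts], eps=3.0, min_pts=3)
```

Accounts with near-periodic retweet timing collapse into one dense cluster (the bot-like group), while irregular accounts fall out as noise.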
This study investigates gender bias in political interactions on digital platforms by considering how politicians present themselves on Twitter and how they are approached by others. Incorporating social identity theory, we use dictionary analyses to detect biases in individual tweets connected to the German federal elections in 2017. Besides sentiment analysis, we introduce a new measure of personal vs. job-related content in text data, which is validated with structural topic models. Our results indicate that politicians' communication on Twitter is driven by party identity rather than gender. However, we find systematic gender differences in tweets directed at politicians: compared to male politicians, female politicians are significantly more likely to be reduced to their gender rather than their profession.
In this work, we tackled the problem of automatically classifying extremist propaganda on Twitter, focusing on the Islamic State of Iraq and al-Sham (ISIS). We built and published several datasets, obtained by mixing 15,684 ISIS propaganda tweets with a variable number of neutral tweets, both related to ISIS and random, accounting for imbalances of up to 1%. We considered three state-of-the-art deep learning techniques, representative of the main current approaches to text classification, and two strong linear machine learning baselines. We compared their performance when varying the composition of the training and test sets, in order to explore different training strategies and to evaluate the results under realistic conditions. We demonstrated that a Recurrent-Convolutional Neural Network based on pre-trained word embeddings can reach an excellent F1 score of 0.9 in the most challenging test condition (1% imbalance).
Empowering end users to be directly involved in the development and composition of the smart devices surrounding them, so that those devices achieve the users' goals, is a major challenge for End User Development (EUD) in the context of the Web of Things (WoT). This can be achieved through Artificial Intelligence (AI) planning. Planning here means the ability of a WoT system to construct a sequence of actions that, when executed by the smart devices, achieves an effect on the environment in response to a goal issued by the end user. The problem of planning specifically for the WoT domain has not been sufficiently dealt with in the existing literature. Existing planning approaches do not deal with one or more of the following important factors in the context of WoT: (1) random unexpected events, (2) unpredictable device effects leading to side effects at runtime, and (3) durative effects. In this work, we propose a cyclic planning system that adopts a PDCA (Plan-Do-Check-Act) process to address these shortcomings through continuous improvement. The planner employs domain knowledge based on the WoTDL (Web of Things Description Language) ontology. The cyclic planner enables continuous plan monitoring to cope with inconsistencies with user-issued goals. We demonstrate the feasibility of the proposed approach on our smart home testbed. The proposed planner further enhances ease of use for end users in the context of our goal-oriented approach GrOWTH.
Social network and publishing platforms, such as Twitter, support the concept of a secret proprietary verification process for handles they deem worthy of platform-wide public interest. In line with significant prior work suggesting that such a status symbolizes enhanced credibility in the eyes of the platform audience, a verified badge is clearly coveted among public figures and brands. Less obvious are the inner workings of the verification process and what being verified represents. This lack of clarity, coupled with the flak Twitter received for extending the aforementioned status to political extremists in 2017, backed Twitter into publicly admitting that the process, and what the status represents, needed to be rethought. With this in mind, we seek to unravel the aspects of a user's profile which likely engender or preclude verification. The aim of the paper is two-fold: first, we test whether discerning the verification status of a handle from profile metadata and content features is feasible; second, we unravel the features which have the greatest bearing on a handle's verification status. We collected a dataset consisting of profile metadata of all 231,235 verified English-speaking users (as of July 2018), a control sample of 175,930 non-verified English-speaking users, and all their 494 million tweets over a one-year collection period. Our proposed models are able to reliably identify verification status (area under curve, AUC > 99%). We show that the number of public list memberships, the presence of neutral sentiment in tweets, and an authoritative language style are the most pertinent predictors of verification status. To the best of our knowledge, this work represents the first attempt at discerning and classifying verification-worthy users on Twitter.
WhatsApp is a messenger app that is currently very popular around the world. With a user-friendly interface, it allows people to instantaneously exchange messages in a very intuitive and fluid way. The app also allows people to interact using group chats, sharing messages, videos, audio, and images. These groups can also be a fertile ground to spread rumors and misinformation. In this work, we analyzed the messages shared in a number of political-oriented WhatsApp groups, focusing on textual content, as it is the most shared media type. Our study relied on a dataset containing all textual messages shared in those groups during the 2018 Brazilian presidential campaign. We identified the presence of misinformation in the contents of these messages using a dataset of previously checked misinformation from six Brazilian fact-checking sites. Our study aims at identifying characteristics that distinguish such messages from the other textual messages (with unchecked content). To that end, we analyzed various properties of the textual content (e.g., language usage, main topics, and sentiment of a message's content) and the propagation dynamics of both sets of messages. Our analyses revealed that textual messages with misinformation tend to be concentrated on fewer topics, often carrying words related to the cognitive process of insight, which characterizes chain messages. We also found that their propagation process is much more viral, with a distinct behavior: they tend to propagate faster within particular groups but take longer to cross group boundaries.
Autocomplete algorithms, by design, steer inquiry. When a user provides a root input, such as a search query, these algorithms dynamically retrieve, curate, and present a list of related inputs, such as search suggestions. Although ubiquitous in online platforms, a lack of research addressing the ephemerality of their outputs and the opacity of their functioning raises concerns of transparency and accountability on where inquiry is steered. Here, we introduce recursive algorithm interrogation (RAI), a breadth-first search method for auditing autocomplete by recursively submitting a root query and its child suggestions to create a network of algorithmic associations. We used RAI to conduct a longitudinal audit of autocomplete on Google and Bing using a focused set of root queries -- the names of 38 US governors who were up for reelection -- during the summer of 2018. Comparing across search engines, we found a higher turnover rate among longer and lower ranked suggestions on both search engines, a higher prevalence of social media websites in Google's suggestions, a higher prevalence of words classified as a swear or a negative emotion in Bing's suggestions, and periodic shocks that spanned across most of our root queries. We open source our code for conducting RAI and discuss how it could be applied to other platforms, topics, and settings.
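The recursive interrogation loop itself is a plain breadth-first search: submit the root, then feed every returned suggestion back in, recording query-to-suggestion edges. A minimal sketch, with a hypothetical offline `suggest` function and made-up queries standing in for a live autocomplete endpoint (the paper's audits query Google and Bing):

```python
from collections import deque

# Hypothetical, hard-coded autocomplete responses for illustration.
MOCK = {
    "governor smith": ["governor smith polls", "governor smith news"],
    "governor smith polls": ["governor smith polls 2018"],
}

def suggest(query):
    # Stand-in for a real autocomplete API call.
    return MOCK.get(query, [])

def rai(root, max_depth=2):
    """Breadth-first search over suggestions: returns the set of
    (query, suggestion) edges reachable from the root query."""
    edges, seen = set(), {root}
    frontier = deque([(root, 0)])
    while frontier:
        query, depth = frontier.popleft()
        if depth >= max_depth:
            continue
        for child in suggest(query):
            edges.add((query, child))
            if child not in seen:
                seen.add(child)
                frontier.append((child, depth + 1))
    return edges

edges = rai("governor smith")
```

The `edges` set is the network of algorithmic associations; running the same crawl repeatedly over time is what turns this into a longitudinal audit.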
Brands produce content to engage with their audience continually and tend to maintain a set of human characteristics in their marketing campaigns. In this era of digital marketing, they need to create a lot of content to keep up engagement with their audiences. However, such content authoring at scale introduces challenges in maintaining consistency in a brand's messaging tone, which is very important from a brand's perspective to ensure a persistent impression for its customers and audiences. In this work, we quantify brand personality and formulate its linguistic features. We score text articles extracted from brand communications on five personality dimensions: sincerity, excitement, competence, ruggedness, and sophistication, and show that a linear SVM model achieves a decent F1 score of $0.822$. The linear SVM allows us to annotate a large set of data points free of any annotation error. We utilize this large annotated dataset to characterize the notion of brand consistency, i.e., maintaining a company's targeted brand personality across time and over different content categories, and we make several interesting observations. To the best of our knowledge, this is the first study which investigates brand personality from companies' official websites, and which formulates and analyzes the notion of brand consistency at such a large scale.
Background. Hateful speech bears negative repercussions and is particularly damaging in college communities. Efforts to regulate hateful speech on college campuses pose vexing socio-political problems, and interventions to mitigate its effects require evaluating the pervasiveness of the phenomenon on campuses as well as its impacts on students' psychological state.
Data and Methods. Given the growing use of social media among college students, we target the above issues by studying the online aspect of hateful speech in a dataset of 6 million Reddit comments shared in 174 college communities. To quantify the prevalence of hateful speech in an online college community, we devise a College Hate Index (CHX). Next, we examine its distribution across the categories of hateful speech: behavior, class, disability, ethnicity, gender, physical appearance, race, religion, and sexual orientation. We then employ a causal-inference framework to study the psychological effects of hateful speech, particularly in the form of individuals' online stress expression. Finally, we characterize their psychological endurance to hateful speech by analyzing their language -- their discriminatory keyword use, and their personality traits.
Results. We find that hateful speech is prevalent in college subreddits, and that 25% of them show greater hateful speech than non-college subreddits. We also find that exposure to hate leads to greater stress expression. However, not everybody exposed is equally affected; some show lower psychological endurance than others. Low-endurance individuals are more vulnerable to emotional outbursts, and are more neurotic than those with higher endurance.
Discussion. Our work bears implications for policy-making and intervention efforts to tackle the damaging effects of online hateful speech in colleges. From a technological perspective, our work caters to mental health support provisions on college campuses and to moderation efforts in online college communities. In addition, given the charged nature of the speech dilemma, we highlight the ethical implications of our work. Our work lays the foundation for studying the psychological impacts of hateful speech in online communities in general, and in situated communities in particular (those that have both an offline and an online analog).
Despite increased interest in the study of fake news, how to aid users' decisions in handling suspicious or false information is not well understood. To better understand the impact of warnings on individuals' fake news decisions, we conducted two online experiments evaluating the effect of three warnings (one Fact-Checking and two Machine-Learning based) against a control condition. Each experiment consisted of three phases examining participants' recognition, detection, and sharing of fake news, respectively. In Experiment 1, relative to the control condition, participants' detection of both fake and real news was better when the Fact-Checking warning, but not the two Machine-Learning warnings, was presented with fake news. Post-session questionnaire results revealed that participants showed more trust in the Fact-Checking warning. In Experiment 2, we proposed a Machine-Learning-Graph warning that contains the detailed results of machine-learning based detection, and removed the source within each news headline to test its impact on individuals' fake news detection with warnings. We did not replicate the effect of the Fact-Checking warning obtained in Experiment 1, but the Machine-Learning-Graph warning increased participants' sensitivity in differentiating fake news from real news. Although the best performance was obtained with the Machine-Learning-Graph warning, participants trusted it less than the Fact-Checking warning. Therefore, our results indicate that a transparent machine learning warning is critical to improving individuals' fake news detection, but does not necessarily increase their trust in the model.
As AI technologies rapidly advance, artifacts created by machines will become prevalent. As recent deepfake incidents illustrate, being able to differentiate man-made vs. machine-made artifacts, especially in the social media space, becomes more important. In this preliminary work, we formulate such a classification task as a Reverse Turing Test (RTT) and investigate how well man-made vs. machine-made texts can currently be classified. Studying real-life machine-made texts in three domains -- financial earnings reports, research articles, and chatbot dialogues -- we found that the classification of man-made vs. machine-made texts can be done with an F1 score of at least 0.84. We also found differences between man-made and machine-made texts in sentiment, readability, and textual features, which can help differentiate them.
For more than three years, Google has offered users four gender options: "Male", "Female", "Rather Not Say", and "Custom". "Rather Not Say" is for users who prefer not to disclose their gender identity, and "Custom" is for users who do not identify with the conventional gender labels (male or female). By this, it is evident that Google provides its users the choice to classify themselves among non-conventional gender groups. This work attempts to assess choice, transparency, and privacy in Google Ad Settings when the option "Rather Not Say" is selected as gender. It was observed that even though the gender was set as "Rather Not Say", a conventional gender was displayed as a demographic on the Ad Personalization page of Google Ad Settings. Therefore, even though Google provides choice to the user, it is not an absolute choice, as Google still classifies an individual into one of the two traditional categories. Our experiment infers that websites might be categorized as male- or female-oriented. Therefore, while trying to create a preference of websites for a particular user, the system often introduces bias towards a gender for a predefined interest demographic on the Google Ad Personalization page. This paper focuses on the statistical analysis of the prediction of gender for the different categories of websites and how this affects a user's choice, privacy, and transparency.
In this research, we hypothesize that some social media users are more gullible to fake news than others, and accordingly investigate users' susceptibility to fake news -- i.e., how to identify susceptible users, what their characteristics are, and whether one can build a prediction model. Building on crowdsourced annotations of 5 types of susceptible users on Twitter, we found that: (1) susceptible users are correlated with a combination of user, network, and content features; (2) one can build a reasonably accurate prediction model with 0.82 AUC-ROC for the multinomial classification task; and (3) there exists a correlation between the dominant susceptibility level of center nodes and that of the entire network.
News about massive data breaches is increasingly common. But what proportion of Americans are exposed in these breaches is still unknown. We combine data from a large, representative sample of American adults (n = 5,000), recruited by YouGov, with data from Have I Been Pwned to estimate the lower bound of the number of times Americans' private information has been exposed. We find that at least 82.84% of Americans have had their private information, such as account credentials, Social Security Number, etc., exposed. On average, Americans' private information has been exposed in at least three breaches. The better educated, the middle-aged, women, and Whites are more likely to have had their accounts breached than the complementary groups.
In this paper, we present a study to identify similarities and differences in how users express themselves on Twitter during two editions of the most watched sports event in the world, the finals of the FIFA Soccer World Cup of 2014 and 2018. Our findings suggest that in 2014 users tended to post more negative content than in 2018, and that fewer hateful and offensive messages were posted in 2018 than in 2014. This study also showcases the challenges of performing analysis of emotional reactions in sports-related posts due to the specificity of the colorful language employed by the fans.
Transport planners face the growing need to understand the behavior of their users, who base their mobility decisions on several factors, including travel time, quality of service, and security. However, transportation is usually designed with an average user in mind, without considering the needs of important groups, such as women. In this context, we analyzed 300K tweets about transportation in Santiago, Chile. We classified users into modes of transportation, and then estimated the associations between mode of transportation, gender, and the categories of a psycho-linguistic lexicon. Our results show that women express more anger and sadness than expected, and are worried about sexual harassment. Conversely, men focus more on the spatial aspects of transportation, leisure, and work. Thus, our work provides evidence on which aspects of transportation are relevant in the daily experience, enabling the measurement of the travel experience using social media.
When people interact through technical infrastructure such as that of organisations or the World Wide Web, this infrastructure will change, and in some cases new identifiable structures or even ecosystems may emerge. Examples of such emergent socio-technical systems on the Web include some social machines and phenomena such as echo chambers. To model these complex social systems, we develop a method for formalising the moments, or occasions, of experience of an individual person by contriving a 'chemistry' encoding possible sequences of their external stimuli, internal experience, and reactions to this internal experience. We take a process-oriented approach and formalise this as a stochastic Petri net. We wire together a number of these to form a fixed social network in which experience is shared. The resulting model unfolds into many possible causal graphs of occasions of experience, which we show using an interactive visualisation. We demonstrate the utility of this method by encoding models exhibiting information diffusion and what we call multiple phase diffusion, and then consensus formation, before encoding a mechanism of echo chamber formation. We then demonstrate the conflation of individuals' positions on otherwise separate issues through emotion. The approach results in a single Petri net model which may be analysed using qualitative and quantitative techniques supporting web science research. It provides a way to describe and reason about the internal experience of individuals within multi-scale socio-technical systems.
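The firing semantics underlying such a model can be sketched with a toy Petri net. The places and transitions below (a single stimulus → experience → reaction chain) are illustrative only, not the paper's actual encoding of occasions of experience: an enabled transition is picked at random and fired, and the resulting firing sequence is one possible causal trace.

```python
import random

# transition name -> (tokens consumed, tokens produced)
TRANSITIONS = {
    "perceive": ({"stimulus": 1}, {"experience": 1}),
    "react":    ({"experience": 1}, {"reaction": 1}),
}

def enabled(marking):
    # A transition is enabled if every input place holds enough tokens.
    return [t for t, (pre, _) in TRANSITIONS.items()
            if all(marking[p] >= n for p, n in pre.items())]

def fire(marking, t):
    pre, post = TRANSITIONS[t]
    for p, n in pre.items():
        marking[p] -= n
    for p, n in post.items():
        marking[p] += n

def run(marking, rng):
    # Fire randomly chosen enabled transitions until none remain,
    # returning the firing sequence (one possible causal trace).
    trace = []
    while (ts := enabled(marking)):
        t = rng.choice(ts)
        fire(marking, t)
        trace.append(t)
    return trace

marking = {"stimulus": 1, "experience": 0, "reaction": 0}
trace = run(marking, random.Random(0))
```

Wiring many such nets together, with shared places carrying tokens between individuals, is what lets the full model unfold into causal graphs of shared experience.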
It is only natural that events related to a listed company may cause its stock price to move (either up or down), and the trend of the price movement is largely determined by public opinion towards such events. With the help of the Internet and advanced natural language processing techniques, it becomes possible to predict the stock trend by analyzing a great amount of online textual resources, such as news from websites and posts on social media. In this paper, we propose an event attention network (EAN) to exploit sentimental event-embedding for stock price trend prediction. Specifically, this model combines the merits of both event-driven and sentiment-driven prediction models, in addition to exploiting sentimental event-embedding. Furthermore, we employ an attention mechanism to figure out which event contributes the most to the result or, in other words, which event is the main cause of the price fluctuation. In our model, a convolutional neural network (CNN) layer is used to extract salient features from transformed event representations, which originate from a bi-directional long short-term memory (BiLSTM) layer. We conduct extensive experiments on a manually collected real-world dataset. Experimental results show that our model performs significantly better in terms of short-term stock trend prediction.
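The attention step that attributes a price move to a particular event can be sketched independently of the BiLSTM/CNN pipeline. In this toy version, the per-event vectors and the query vector are hand-written numbers (in the real model both would be learned); a dot-product score plus softmax yields the per-event weights, and the largest weight marks the event "blamed" for the fluctuation.

```python
import math

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attend(events, query):
    # Dot-product score per event, softmax to weights,
    # then a weighted sum (the pooled representation).
    scores = [sum(a * b for a, b in zip(e, query)) for e in events]
    weights = softmax(scores)
    dim = len(events[0])
    pooled = [sum(w * e[i] for w, e in zip(weights, events))
              for i in range(dim)]
    return pooled, weights

# Hypothetical per-event vectors (produced by BiLSTM+CNN in the paper).
events = [
    [0.1, 0.0],   # routine earnings note
    [0.9, 0.8],   # major lawsuit announcement
    [0.2, 0.1],   # minor press release
]
query = [1.0, 1.0]  # stand-in for the learned attention query
pooled, weights = attend(events, query)
```

Here the second event receives the largest attention weight, so under this sketch it would be reported as the main cause of the price move.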
This paper attempts to understand the perspectives of seniors (aged 65 years and above) on misinformation in the Indian context. Interviews with 33 seniors who use social media regularly revealed three themes. The seniors viewed and rationalized sharing news, irrespective of its veracity, as a process of building sociality. Sharing information was also based on the logic of superimposing information with an epistemic ascription to the networks from which they received it. Finally, a kind of normative dualism becomes apparent from an acknowledgment of the role they may play in the spread of misinformation as agents on the one hand, and a resounding need to stop it on the other due to its potential social ramifications.
Social networks have been instrumental in spreading misinformation such as fake news and false rumors. Research in rumor intervention to date has concentrated on launching an intervening campaign to limit the number of infectees. However, many emerging and important tasks focus more on early intervention. Social and psychological studies have revealed that a rumor's content may change by as much as 70% after 6 transmissions. Therefore, ignoring the earliness of intervention makes the intervening campaign degrade rapidly due to the evolved content. In real social networks, the number of social actors is usually large, while the budget for an intervening campaign is relatively small. The limited budget makes early intervention particularly challenging. Nonetheless, we present an efficient containment method that promptly terminates the diffusion with the least cost. To our knowledge, this work is the first to study the earliness of rumor intervention in a large real-world social network. Evaluations on a network of 3 million users show that the key social actors who can terminate the spread earliest are not necessarily the most influential users or friends of rumor initiators, and that the proposed method effectively reduces the life span of rumors.
Tor is among the most well-known dark nets in the world. It has noble uses, including as a platform for free speech and information dissemination under true anonymity, but may be culturally better known as a conduit for criminal activity and a platform to market illicit goods and data. Past studies of the content of Tor support this notion, but were carried out by targeting popular domains likely to contain illicit content. A survey of past studies may thus not yield a complete evaluation of the content and use of Tor. This work addresses this gap by presenting a broad evaluation of the content of the English Tor ecosystem. We perform a comprehensive crawl of the Tor dark web and, through topic and network analysis, characterize the 'types' of information and services hosted across a broad swath of Tor domains and their hyperlink relational structure. We recover nine domain types defined by the information or service they host and, among other findings, unveil how some types of domains intentionally silo themselves from the rest of Tor. We also present measurements that (regrettably) suggest that marketplaces of illegal drugs and services emerge as the dominant type of Tor domain. Our study is the product of crawling over 1 million pages from 20,000 Tor seed addresses, yielding a collection of over 150,000 Tor pages. The domain structure is publicly available as a dataset at https://github.com/wsu-wacs/TorEnglishContent.
Crowdfunding platforms promise to disrupt investing as they bypass traditional financial institutions through peer-to-peer transactions. To stay functional, these platforms require a supply of investors who are willing to contribute to campaigns. Yet, little is known about the retention of investors in this setting. Using four years of data from a leading equity crowdfunding platform, we empirically study the length and success of investor activity on the platform. We analyze temporal variations in these outcomes and explain patterns using statistical modeling. Our models are based on information about users' past and current investment decisions, i.e., content-based and structural similarities between the campaigns they invest in. We uncover the role of past successes and diversity of investment decisions for novice vs. serial investors. Our results inform potential strategies for increasing the retention of investors and improving their decisions on crowdfunding platforms.
Recent evidence has emerged linking coordinated campaigns by state-sponsored actors to manipulate public opinion on the Web. Campaigns revolving around major political events are enacted via mission-focused "trolls." While trolls are involved in spreading disinformation on social media, there is little understanding of how they operate, what type of content they disseminate, how their strategies evolve over time, and how they influence the Web's information ecosystem. In this paper, we begin to address this gap by analyzing 10M posts by 5.5K Twitter and Reddit users identified as Russian and Iranian state-sponsored trolls. We compare the behavior of each group of state-sponsored trolls with a focus on how their strategies change over time, the different campaigns they embark on, and the differences between the trolls operated by Russia and Iran. Among other things, we find: 1) that Russian trolls were pro-Trump while Iranian trolls were anti-Trump; 2) evidence that campaigns undertaken by such actors are influenced by real-world events; and 3) that the behavior of such actors is not consistent over time, hence detection is not straightforward. Using Hawkes processes, we quantify the influence these accounts have on pushing URLs on four platforms: Twitter, Reddit, 4chan's Politically Incorrect board (/pol/), and Gab. In general, Russian trolls were more influential and efficient in pushing URLs to all the other platforms, with the exception of /pol/, where Iranians were more influential. Finally, we release our source code to ensure the reproducibility of our results and to encourage other researchers to work on understanding other emerging kinds of state-sponsored troll accounts on Twitter.
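The Hawkes-process machinery behind such influence estimates rests on a simple idea: each observed event temporarily raises the expected rate of future events. Below is a minimal univariate sketch with an exponential kernel; the parameter values mu, alpha, and beta are illustrative only, and the paper's actual model is multivariate across platforms.

```python
import math

def hawkes_intensity(t, events, mu=0.1, alpha=0.5, beta=1.0):
    """Conditional intensity lambda(t) = mu + sum over past events
    of alpha * exp(-beta * (t - t_i)): a baseline rate mu plus an
    exponentially decaying bump for every event before time t."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in events if ti < t)

# Three example event times (e.g., URL shares on one platform).
events = [1.0, 2.0, 2.5]
```

Before any event the intensity sits at the baseline mu; right after a burst of events it is elevated, then decays back at rate beta. In the multivariate case, cross-excitation terms between platforms play the role of alpha, which is what lets one attribute events on one platform to activity on another.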
Multichannel retail is now prevalent, with retailers and consumers utilizing a number of channels in parallel or, in some instances, in an interconnected way. There is a degree of understanding of what each channel can offer, but the Relative Advantage of each channel in relation to the others is less understood. This research evaluates the Relative Advantage between the three channels of three-dimensional Virtual Worlds, two-dimensional websites, and offline retail stores. Consumers' preferences across the three channels were distinguished across six Relative Advantages, and the three channels were then compared across the six Relative Advantages identified. Participants showed a preference for the offline and 2D channels in most situations, apart from enjoyment, entertainment, sociable shopping, the ability to reinvent yourself, convenience, and institutional trust, where the Virtual Worlds were preferred.
The mood of individuals in the workplace has been well studied due to its influence on task performance and work engagement. However, the effect of mood has not been studied in detail in the context of microtask crowdsourcing. In this paper, we investigate the influence of one's mood, a fundamental psychosomatic dimension of a worker's behaviour, on their interaction with tasks, task performance, and perceived engagement. To this end, we conducted two comprehensive studies: (i) a survey exploring crowd workers' perception of the role of mood in shaping their work, and (ii) an experimental study to measure and analyze the actual impact of workers' moods in information finding microtasks. We found evidence of the impact of mood on a worker's perceived engagement through the feeling of reward or accomplishment, and we argue as to why the same impact is not perceived in the evaluation of task performance. Our findings have broad implications for the design and workflow of crowdsourcing systems.