A social bot is an agent that communicates more or less autonomously on social media, often with the task of influencing the course of discussion and/or the opinions of its readers. Social bots are related to chatbots but mostly use only rather simple interactions or no reactivity at all. The messages they distribute are mostly either very simple or prefabricated, and they often operate in groups and in various configurations of partial human control. They usually advocate certain ideas, support campaigns, or aggregate other sources, either by acting as "followers" and/or by gathering followers themselves. In this very limited respect, social bots can be said to have passed the Turing test. If the expectation is that behind every social media profile there is a human, social bots always use fake accounts; in this they are no different from other uses of social media APIs.

Social bots appear to have played a significant role in the 2016 United States presidential election, and their history appears to go back at least to the 2010 United States midterm elections. It is estimated that 9–15% of active Twitter accounts may be social bots, and that 15% of the Twitter accounts active in the US presidential election discussion were bots. At least 400,000 bots were responsible for about 3.8 million tweets, roughly 19% of the total volume. Twitterbots are already well-known examples, but corresponding autonomous agents have also been observed on Facebook and elsewhere. Nowadays, social bots are equipped with, or can generate, convincing internet personas, although these are not always reliable. Besides being able to produce or reuse messages autonomously, social bots share many traits with spambots in their tendency to infiltrate large user groups.

Using social bots is against the terms of service of many platforms, especially Twitter and Instagram. However, a certain degree of automation is of course intended by making social media APIs available.
The legal regulation of social bots is currently being discussed in many countries. However, owing to the difficulty of recognizing social bots and distinguishing them from "legitimate" automation via social media APIs, it is currently unclear how such regulation could be designed, or whether it could be enforced. In any case, social bots are expected to play a role in the future shaping of public opinion by acting autonomously as incessant and never-tiring influencers.
Uses
Lutz Finger identifies five immediate uses for social bots:
foster fame: having an arbitrary number of bots as followers can help simulate real success
spamming: having advertising bots in online chats is similar to email spam, but a lot more direct
mischief: e.g. signing up an opponent with many fake identities and spamming the account, or helping others discover it, in order to discredit the opponent
bias public opinion: influence trends by countless messages of similar content with different phrasings
limit free speech: important messages can be pushed out of sight by a deluge of automated bot messages
The first generation of bots could sometimes be distinguished from real users by their often superhuman capacity to post messages around the clock. Later developments succeeded in imprinting more "human" activity and behavioral patterns on the agents. To unambiguously identify social bots as such, a variety of criteria must be applied together using pattern-detection techniques, some of which are:
cartoon figures as user pictures
sometimes random pictures of real users are also appropriated
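As an illustration, several such criteria can be combined into a single heuristic score. The following sketch uses hypothetical features and weights (posting rate, round-the-clock activity, default profile picture, repost ratio); it is not any actual platform's detection system:

```python
# Heuristic bot-likelihood score combining several simple account features.
# Feature names and thresholds here are illustrative assumptions, not a real detector.

def bot_score(account: dict) -> float:
    """Return a value in [0, 1]; higher means more bot-like."""
    score = 0.0
    # Default or cartoon profile picture is a weak signal.
    if account.get("default_profile_picture", False):
        score += 0.2
    # Superhuman posting rate, e.g. more than 50 posts per day on average.
    if account.get("posts_per_day", 0) > 50:
        score += 0.4
    # Round-the-clock activity: posts in nearly every hour of the day.
    if account.get("active_hours", 0) >= 22:
        score += 0.2
    # Very high ratio of reposts to original content.
    if account.get("repost_ratio", 0.0) > 0.9:
        score += 0.2
    return min(score, 1.0)

print(bot_score({"posts_per_day": 120, "active_hours": 24, "repost_ratio": 0.95}))
# → 0.8 (three of the four signals fire)
```

The point of combining weak signals is that no single criterion is decisive: many human accounts have a default picture, and some professional accounts post frequently, but accounts that trip several signals at once are far more likely to be automated.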
Botometer is a public web service that checks the activity of a Twitter account and gives it a score based on how likely the account is to be a bot. The system leverages over a thousand features. An active method that worked well for detecting early spam bots was to set up honeypot accounts on which obviously nonsensical content was posted; this content was then blindly reposted by bots. However, recent studies show that bots evolve quickly, so detection methods have to be updated constantly, as otherwise they may become useless after a few years.
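The honeypot idea can be sketched in a few lines: any account that reposts a known nonsensical honeypot message is flagged as a likely bot, since no human would plausibly share it. The data structures and names below are illustrative assumptions, not part of any real detection pipeline:

```python
# Honeypot-style detection sketch: accounts that repost known nonsense
# honeypot messages are flagged as likely bots. All identifiers are made up.

HONEYPOT_IDS = {"hp-001", "hp-002"}  # IDs of deliberately nonsensical posts

def flag_reposters(repost_log):
    """repost_log: iterable of (account, original_post_id) pairs.
    Return the set of accounts that reposted any honeypot message."""
    return {account for account, post_id in repost_log if post_id in HONEYPOT_IDS}

log = [
    ("alice", "news-17"),
    ("bot42", "hp-001"),
    ("bot99", "hp-002"),
    ("bob", "news-18"),
]
print(flag_reposters(log))  # → {'bot42', 'bot99'} (set order may vary)
```

This illustrates why the method stopped working against later bots: once bots began filtering what they repost, simply posting nonsense no longer produced a clean separation between humans and automation.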