Americans lose confidence they can sniff out social media bots

Credit: CC0 Public Domain

How times have changed. Just a few years ago, Americans were uncomfortably amused at the presence of social bots but were confident they could tell tofu from prime rib.

In the newest study, about half, 47 percent, of people who had heard about bots were very or somewhat confident they could recognize these kinds of accounts on social media. In the earlier study, a more substantial 84 percent expressed confidence in spotting made-up news.

Here is how the Pew Research Center explained the then-and-now: "About half of those who have heard of bots (47%) are very or somewhat confident that they can recognize them, and just 7% are very confident. About four-in-ten (38%) are not very confident, and 15% say they are not at all confident. This stands in contrast to the confidence Americans had in their ability to detect made-up news: In a December 2016 survey, 84% of Americans were very or somewhat confident in their ability to recognize made-up news."

So, fast forward to October 2018 and it appears that most Americans cannot distinguish between a human comment and that delivered by an automated bot. And they are not amused; you would need to scroll all the way south to find social bots' numbers on a popularity poll.

Galen Stocking, a computational social scientist, and Nami Sumida, a research analyst, wrote the article reporting on the survey and its findings.

"While many Americans are aware of the existence of social media bots, fewer are confident they can identify them. About half of those who have heard about bots (47%) are very or somewhat confident they can recognize these accounts on social media, with just 7% saying they are very confident. In contrast, 84% of Americans expressed confidence in their ability to recognize made-up news in an earlier study."

The new survey by the Pew Research Center explored American thought on automated accounts on social media platforms and found that many think social bots have a negative impact on how people stay informed. Opposition was apparent toward any organization or individual using bots to share false information. Majorities also opposed a celebrity using bots to gain more social media followers and a political party using bots to share information that favors or opposes a candidate.

"About two-thirds of American have heard about social media bots, most of whom believe they are used maliciously."

The two authors wrote: "About eight-in-ten of those who have heard of bots (81%) think that at least a fair amount of the news people get from social media comes from these accounts, including 17% who think a great deal comes from bots. And about two-thirds (66%) think that social media bots have a mostly negative effect on how well-informed Americans are about current events, while far fewer (11%) believe they have a mostly positive effect."

Stocking and Sumida defined what they mean by social media bots – "accounts that operate on their own, without human involvement, to post and interact with others on social media sites."

Shannon Liao of The Verge noticed something interesting about the naysayers: they could not be categorized by age or by political persuasion; those who disliked bots crossed those lines.

"Regardless of whether a person is a Republican or Democrat or young or old, most think that bots are bad. And the more that a person knows about social media bots, the less supportive they are of bots being used for various purposes, like activists drawing attention to topics or a political party using bots to promote candidates."

The Pew Research Center's survey drew on responses from 4,581 U.S. adults in July and August.

One question begs for closer analysis: why would even a small percentage find anything positive about the bots, about being lied to, about information that may be the product of paid influence or automated word strings? As one commenter on The Verge put it, this might simply be human nature. If the planted message favors the team or celebrity or public official we root for, then we like to agree with the propaganda, plain and simple. Also, one must not confuse being "lied to" with automated messages from government agencies posting emergency updates for our safety.

In addition, another issue begs for closer analysis: no matter the flavor of the lie, we do not like a lie, but the propaganda playing field today is layered and confusing. Site visitors rebelling against phony-sounding comments are quick to brand them as coming from "bots" when they may come from human opportunists running fake accounts just to prop up their employers, friends, or idols. Those comments come from humans, so they do not fit easily into the definition of automated accounts.

Will more and more bots be good homework for our ability to ferret out truth versus propaganda and get savvier with the times? After all, The Atlantic reported last year: "Overall, bots—good and bad—are responsible for 52 percent of web traffic, according to a new report by the security firm Imperva, which issues an annual assessment of bot activity online. The 52-percent stat is significant because it represents a tip of the scales since last year's report."

© 2018 Tech Xplore

