content moderation keeps horrible internet content away from the public -- but is there a cost? what happens to the actual moderators tasked with sorting through the disturbing content?
vice says these moderators see a lot of ugly things like murders, animal abuse, and sexual violence. a former facebook moderator said the job can be overwhelming at times, and there isn't enough psychological support
a paper from the conference on human factors in computing systems says the lack of support is a problem -- there are *tons* of psychological health concerns with content moderation. repeated exposure to disturbing images takes a real toll on the people doing the viewing. we simply cannot ignore the human cost
harvard law agrees -- these moderators go through hell to keep us safe. there are only so many beheading videos somebody can watch before they become completely desensitized, and large companies need to step up and protect their employees
but new media services says there's really no other way -- moderation is too crucial to the business. without it, users wouldn't be safe from harmful content
business insider agrees -- extremists tend to benefit from less content moderation, and that’s bad for everybody. we need moderation
innodata agrees, but maybe we don't need human moderators -- ai could be a solution. it's not a full reality yet, though: some content is already moderated by ai, but most of it still requires human review
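to make that hybrid split concrete, here's a minimal sketch of how an ai-assisted pipeline is often described as working: the model only acts on its own when it's very confident, and everything in the uncertain middle band gets routed to a human reviewer. all names, thresholds, and the keyword "classifier" below are hypothetical placeholders, not any specific company's system.

```python
# minimal sketch of a hybrid (ai + human) moderation pipeline.
# everything here -- classifier, thresholds, queue -- is a hypothetical
# placeholder meant only to illustrate the routing idea.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Decision:
    item_id: str
    action: str   # "auto_remove", "auto_approve", or "human_review"
    score: float  # model's estimated probability the item violates policy


@dataclass
class ModerationPipeline:
    remove_threshold: float = 0.95   # very confident it violates policy -> remove automatically
    approve_threshold: float = 0.05  # very confident it's benign -> approve automatically
    review_queue: List[Decision] = field(default_factory=list)

    def score(self, text: str) -> float:
        # placeholder for a real ml classifier (e.g. a toxicity model);
        # a crude keyword check stands in so the sketch actually runs.
        lowered = text.lower()
        if any(t in lowered for t in ("beheading", "abuse")):
            return 0.99
        if any(t in lowered for t in ("fight", "blood")):
            return 0.50  # ambiguous -- the model isn't sure
        return 0.01

    def moderate(self, item_id: str, text: str) -> Decision:
        s = self.score(text)
        if s >= self.remove_threshold:
            action = "auto_remove"
        elif s <= self.approve_threshold:
            action = "auto_approve"
        else:
            # uncertain cases still land on a person -- this middle band is
            # why ai moderation hasn't removed the need for human moderators.
            action = "human_review"
        decision = Decision(item_id, action, s)
        if action == "human_review":
            self.review_queue.append(decision)
        return decision


if __name__ == "__main__":
    pipeline = ModerationPipeline()
    print(pipeline.moderate("post-1", "look at this cute dog"))        # auto_approve
    print(pipeline.moderate("post-2", "animal abuse caught on tape"))  # auto_remove
    print(pipeline.moderate("post-3", "two guys get into a fight"))    # human_review
    print(f"{len(pipeline.review_queue)} item(s) waiting on a human reviewer")
```

the point of the sketch: the better the model, the narrower that middle band gets, but as long as it exists, humans are still the ones looking at the worst of it.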
harvard business review doubts ai will fix things -- ai still requires human oversight, and more ai moderation has actually resulted in more content moderation jobs. the real solution is to completely change the way social media companies do business, and maybe question their role in society
ttec is more optimistic -- all we need is a solid psychological health plan, corporate accountability, and good ai tools to help workers