This semester, I’m taking Ethan Zuckerman’s “Future of News and Participatory Media” class at the MIT Media Lab. This is a brilliant class that’s introduced me to a ton of great ideas, and revived my long-dormant love of blogging. This post is a repost of an assignment I originally did for the class, in which we were asked to profile a tool that we believe has the potential to significantly change the practice of journalism.
The tool I’m interested in is Civil Comments.
How It Works
Their video offers a great overview of the system.
Why It’s Interesting
The Civil Comments tool is an interesting intervention in the comments space because it offers a novel yet intuitive answer to some of the key problems online content publishers face with their comments.
- The Expense of Moderating, Especially at Scale – many content publishers currently face a huge problem when it comes to moderating comments. Most moderation requires a human editor. Although many tools will automatically filter out abusive terms, it still takes a human moderator with judgment (and maturity!) to read user-flagged comments. For publishers who deal with heavy comment volume, human moderation can be very expensive. By leveraging the power of the audience, Civil offers to make comment moderation free and scalable. This is huge. The Civil interface is also pretty simple – it looks a lot like TripAdvisor’s or Amazon’s ratings, which are proven interfaces that people like to use.
- Limiting Incentives to Abuse – the most thought-provoking claim that Civil makes is that they’re able to single out abusive commenters through this crowd-sourced system. I’m not entirely sure that’s the case, but their initial run with Willamette Week appears to have garnered some positive reviews. Although untrained folks may NOT always be able to filter out abusive comments, this crowd-moderated system raises some interesting questions about the incentives to post uncivil commentary. If comments are social, as many people (including Joseph Reagle of Northeastern, in his book on the topic) have suggested, then that raises the question: if at most one or two people are going to see an abusive comment before it gets buried, will trolls even want to post abuse in this kind of system? Is the incentive to abuse lessened when there’s no audience? As for mass troll attacks, Civil claims to have a system that will detect them.
- Hierarchy? Values? – by enabling a form of peer moderation, it’s possible that publishers who use the Civil system will send a positive message about the role that their community plays in setting the site’s values. It also marks the comment section as an independent space, one where both readers and journalists get to set priorities. At the same time, because readers get random comments to review, this peer moderation system might offer ways to avoid some of the bias (towards highly ranked commenters, towards familiar commenters, towards early comments) that other peer moderation systems are prone to (Lampe et al.).
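Civil hasn’t published its algorithm, but the core mechanic the bullets above describe – before your comment goes live, you rate a few randomly assigned pending comments, and a comment is published or buried based on aggregated peer verdicts – can be sketched roughly like this. Everything here (the class, the three-review requirement, the majority threshold) is my own guess at one plausible design, not Civil’s actual implementation:

```python
import random
from collections import defaultdict

REVIEWS_REQUIRED = 3   # assumption: each pending comment needs 3 peer ratings
CIVIL_THRESHOLD = 0.5  # assumption: a majority of raters must mark it "civil"

class ModerationQueue:
    """Toy sketch of crowd moderation. Random assignment of comments to
    reviewers is what sidesteps the familiarity/ranking biases Lampe et al.
    describe; a comment only publishes once enough random peers approve it."""

    def __init__(self):
        self.pending = {}                  # comment_id -> text, awaiting votes
        self.ratings = defaultdict(list)   # comment_id -> [True/False votes]
        self.published = []                # texts that cleared peer review

    def submit(self, comment_id, text, reviewer):
        # The price of admission: rate some randomly chosen pending
        # comments before your own enters the queue. `reviewer` is a
        # callable standing in for the human, returning True for "civil".
        for other_id in self._random_assignment(exclude=comment_id):
            self._record(other_id, reviewer(self.pending[other_id]))
        self.pending[comment_id] = text

    def _random_assignment(self, exclude):
        candidates = [cid for cid in self.pending if cid != exclude]
        return random.sample(candidates, min(REVIEWS_REQUIRED, len(candidates)))

    def _record(self, comment_id, is_civil):
        self.ratings[comment_id].append(is_civil)
        votes = self.ratings[comment_id]
        if len(votes) >= REVIEWS_REQUIRED:
            text = self.pending.pop(comment_id)
            if sum(votes) / len(votes) > CIVIL_THRESHOLD:
                self.published.append(text)
            # else: buried silently – at most a couple of reviewers ever saw it
            del self.ratings[comment_id]
```

Note what this structure implies about incentives: an abusive comment is seen by at most `REVIEWS_REQUIRED` randomly chosen strangers before it disappears, which is precisely the "no audience" point above.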
What it Won’t (Necessarily) Do
- Eliminate issues of site-wide bias: Many moderators of peer-moderated sites I’ve spoken with have mentioned that their sites have a particular political bias. I don’t see that Civil will address site-wide bias very effectively, especially considering that people tend to moderate comments more favorably when those comments reflect their own views.
- Invite minority views/communities into conversations. One of the moderators I spoke with offered a compelling case study of how their site had drawn flak from trans members over transphobic language. The moderators made an executive decision to change community norms, and to enforce those changes, even though the majority of site users weren’t as affected by the issue. Sometimes moderators might want to enforce values that the community does not share. How are these more subtle social norms introduced? How are they maintained and shown to new members who are visiting for the first time? It seems like the initial judgment made by the crowd might be a large-grained filter at best, and exclusionary at worst.
- Protect identities and data. Conspicuously absent from the Civil Comments webpage: any mention of what happens to users’ comment data. Civil says that they offer analytics, which means that they must collect data or offer a data collection option. But publishers run their own ‘instances’ of Civil. How are those data stored and anonymized? Who has access? Will Civil turn around and sell that information? This is particularly relevant in conjunction with point #2, but it also bears on problems of online harassment in general.
Editor’s Note: In class, Ethan linked Civil Comments to the idea of reCAPTCHA. It’s a great connection, and reCAPTCHA is a fascinating way to subtly mine the wisdom of the crowd, in very specific (and socially beneficial!) ways. But moderating three comments is significantly more labor-intensive than typing two words, which makes me wonder how much it inhibits online conversation to require moderation for EVERY act of commenting (including a reply).