Advocating for Shades of Censorship
Facebook and Twitter (as well as many other online platforms) censor their content. The legality of this is pretty clear, as they are private platforms; the ethics are interesting and sticky, and touch on whether they claim to be a common carrier. Even for a common carrier, we have precedent, legal and emotional, for censorship that applies to specific mediums, like the restriction of telemarketing. We are also comfortable with other kinds of censorship performed by platforms on our behalf, like the blocking of spam, which makes up about half of all email, or ad-blockers in browsers.
Ethics aside, censorship is something that is incredibly difficult to do coherently. Any set of published guidelines will immediately become riddled with semantic issues and blurry edge cases. Some of these issues, such as whether breastfeeding constitutes nudity, are covered in a Radiolab episode about the subject. Others, like attempting to moderate a community to reduce flame wars and encourage civil conversation, or to prevent the spread of potentially deadly rumors, are both subtle and real-time problems. The subjectivity of what constitutes nudity is absolutely trivial compared to trying to decide what constitutes trolling, or attempting to ensure that only truth is spread.
At the moment, most social media platforms use an approximately four-tiered version of censorship to respond to rule violations. The four typical responses are:
1) Legitimize content, do nothing
2) Delegitimize content, delete content
3) Delegitimize content, delete content, and punish user with suspension
4) Delete content, remove user
This is actually a pretty coarse set of responses, given what can be imagined.
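As a sketch, the coarse response set amounts to an enum with four values (the names here are my own, not any platform's actual API):

```python
from enum import Enum, auto

class ModerationResponse(Enum):
    ALLOW = auto()               # 1) legitimize content, do nothing
    DELETE = auto()              # 2) delegitimize content, delete it
    DELETE_AND_SUSPEND = auto()  # 3) delete it and suspend the user
    DELETE_AND_BAN = auto()      # 4) delete it and remove the user
```

The rest of this post is essentially about widening this enum.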
When content is allowed through, it is implicitly deemed acceptable, and (because most of us think in common-law terms) this implies that it can be repeated. Allow a nipple once, and you are implicitly saying that nipples in that context are always OK.
Conversely, removing content entirely, along with potentially punishing the poster, is almost the only other option used. The "punishment" may also serve, in a very mechanical way, to throttle the volume of content that needs censorship attention.
Other Potential Responses
I'm organizing these by the reasons one might want to censor content, and covering less blunt techniques by which communication could be shaped in order to manage it. In my mind, most of these are softer forms of censorship: they allow communication to continue between consenting adults, but they blur the definition of censorship itself. Some techniques we are all pretty familiar with, such as how Gmail sorts your inbox into "category" tabs; others you may never have thought of before. If anything, using these tricks is probably more ethically complicated than outright censorship, but I think it would also frequently be closer to the right answer.
Keeping content away from children
As a society, there are a number of things that we think children shouldn't be able to view. The obvious response in the case of social media is, rather than removing a post, to simply screen which users it is shown to by age. Admittedly, many social media platforms other than Facebook do not police user age very carefully, and age gates can be worked around pretty easily, but this seems like an obvious start.
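A minimal sketch of what this could look like, assuming a hypothetical Post record that carries a minimum viewer age:

```python
from dataclasses import dataclass

@dataclass
class Post:
    body: str
    min_viewer_age: int = 0  # 0 means unrestricted

def visible_posts(posts: list[Post], viewer_age: int) -> list[Post]:
    """Screen posts by the viewer's (self-reported) age instead of deleting them."""
    return [p for p in posts if viewer_age >= p.min_viewer_age]
```

A post tagged `min_viewer_age=18` is never deleted; it simply never appears in a thirteen-year-old's feed.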
Preventing people from seeing disturbing or unpleasant content
Few of us want to log onto Facebook in the morning and see a picture of an open-heart surgery, or a beheading. There are all sorts of gruesome and awful things that people post. In many cases these posts share things that are important or personally relevant; in many others, posters are just trying to generate a reaction.
A fairly obvious solution to this problem is to place these behind a "spoiler tag", along with a trigger warning indicating roughly what is going to be behind that tag (sex, gore, violence, etc.).
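As a sketch, assuming posts are rendered server-side to HTML, this can be as simple as wrapping the content in a native `<details>` element:

```python
def render_with_warning(body_html: str, warning: str) -> str:
    """Hide content behind a click-to-reveal block instead of deleting it."""
    return (
        '<details class="content-warning">'
        f"<summary>Content warning: {warning} (click to view)</summary>"
        f"{body_html}"
        "</details>"
    )

# e.g. render_with_warning('<img src="surgery.jpg">', "gore")
```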
Another option, and a far more subtle one, is to reduce the intensity of an image by lowering both its color saturation and its contrast. This can work the same way as a spoiler tag, with a click revealing the image in its original form. I would imagine this fits best for the "look at this huge cut I got on my foot" kind of photo.
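A sketch of the desaturation step using Pillow; the factors are hypothetical knobs a platform would tune (1.0 leaves the image unchanged):

```python
from PIL import Image, ImageEnhance

def soften(image: Image.Image, saturation: float = 0.4, contrast: float = 0.6) -> Image.Image:
    """Return a washed-out copy of the image; keep the original for click-to-reveal."""
    dimmed = ImageEnhance.Color(image).enhance(saturation)  # 0.0 would be grayscale
    return ImageEnhance.Contrast(dimmed).enhance(contrast)  # 0.0 would be solid grey
```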
Preventing the spread of misinformation
Rumors and falsehoods spread like wildfire on social media, and as we have seen recently they can sway national sentiment and lead to atrocities. There are basically two approaches that I can see to controlling the spread of rumors on social media, and both are probably useful. The first is to match known links, reposts, or strings of false information against fact-checking sites such as Snopes or PolitiFact. Rather than removing the content outright, it could be preceded by a link to the debunking and a summary of the hoax. This can be applied asynchronously at the display layer (that is to say, posts which were previously considered clear could carry the warning once they are later revealed to be a hoax) and asynchronously at the feed layer (i.e., if you have seen something in your feed that is later determined to be a hoax, the correction could be scrolled through your feed as well, possibly with a reference link to where you saw it). This technique could be escalated even further by displaying the debunking link first, above a spoiler-tagged version of the hoax content.
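A sketch of the matching half, assuming a hypothetical table that maps content fingerprints (shared links, repost IDs, known hoax strings) to a debunking link and summary:

```python
# Hypothetical fingerprint -> (debunking link, summary) table, populated from
# fact-checking sources; the entries here are placeholders, not real records.
KNOWN_HOAXES = {
    "example-hoax.com/story": (
        "https://www.snopes.com/",  # a real entry would point at the specific debunking
        "This story has been debunked.",
    ),
}

def hoax_warning(post_body: str) -> str | None:
    """Return warning text to display above the post, or None if it looks clean."""
    for fingerprint, (link, summary) in KNOWN_HOAXES.items():
        if fingerprint in post_body:
            return f"{summary} See: {link}"
    return None
```

Because the check runs at the display layer, a post that was clean yesterday can start carrying the warning today without anyone editing the post itself.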
The second technique for reducing the spread of misinformation is to reduce its virality. One trivial way to do this is to put a countdown timer on the repost button before it is allowed to be used. One could also suspend reposts in limbo for a few hours, or even require a "verification" dialog before reposting.
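A sketch of both friction mechanisms, with made-up delay values:

```python
import time

REPOST_COUNTDOWN_S = 30  # repost button stays disabled this long after viewing
LIMBO_S = 2 * 60 * 60    # flagged reposts sit in limbo this long before fanning out

def repost_button_enabled(first_viewed_at: float, now: float | None = None) -> bool:
    """The repost button only activates after a short countdown."""
    now = time.time() if now is None else now
    return now - first_viewed_at >= REPOST_COUNTDOWN_S

def ready_to_fan_out(repost_requested_at: float, now: float | None = None) -> bool:
    """A repost is held in limbo for a few hours before it actually spreads."""
    now = time.time() if now is None else now
    return now - repost_requested_at >= LIMBO_S
```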
Prevent or extinguish flame wars
Lots of communities don't care about flame wars at all; for others, keeping conversations civil is a pretty big priority. For those that care, there are three general classes of solution. The first is to reduce the velocity of the conversation: most people get angry fast and then cool down a little, so if you can slow the rate at which they read, or the rate at which they write a response, they might behave better. Techniques for lowering reading speed include disemvoweling, deliberately slow-to-read fonts, or forcing users to click to reveal each level of a comment thread. Techniques for lowering writing speed include countdown timers on the "post" button, or setting a maximum WPM on the input field.
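Two of these throttles are trivial to sketch: disemvoweling a comment, and rejecting replies typed faster than a maximum WPM (the limit here is an arbitrary placeholder):

```python
import re

def disemvowel(text: str) -> str:
    """Strip vowels so a hostile comment can still be decoded, but only with effort."""
    return re.sub(r"[aeiouAEIOU]", "", text)

def within_wpm_limit(text: str, seconds_spent_typing: float, max_wpm: float = 30.0) -> bool:
    """Reject a reply composed faster than max_wpm words per minute."""
    words = len(text.split())
    minutes = max(seconds_spent_typing / 60.0, 1e-9)  # guard against division by zero
    return words / minutes <= max_wpm
```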
A second method of slowing down flame wars is to reduce the intensity of the comments themselves. This can be done by decreasing font size or font contrast, or by reducing color saturation and contrast on images. These techniques are pretty much the same as the ones for dealing with "disturbing or unpleasant content" above.
Finally, one can reduce virality, in this case by slowing the natural spread of links into the flame war. This would include things like not listing the thread on the front page, or at least not above the fold. One could also pull it out of the list of trending conversations.
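As a sketch, de-trending a flame war can be as simple as excluding flagged threads from the ranking, assuming each conversation carries a hypothetical moderator-set flag:

```python
def trending(conversations: list[dict], limit: int = 10) -> list[dict]:
    """Rank threads by engagement, silently dropping flagged flame wars."""
    eligible = [c for c in conversations if not c.get("flame_war", False)]
    return sorted(eligible, key=lambda c: c["engagement"], reverse=True)[:limit]
```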
Prevent "inbox stuffing" harassment
Another common problem is the "internet lynch mob" effect, where someone who becomes a viral target receives hundreds, or even hundreds of thousands, of hate-posts through the system. Their inbox is stuffed with abuse. This could be handled in a few ways. Abusive mail and posts can be passably identified using the same techniques used to identify spam, and a platform could detect a sudden, massive spike of such posts incoming to a single user and filter them in a variety of ways. Probably the best is to divert them into something like a "spam" bucket and simply show a count of them. Other options include deleting them outright, throttling the maximum number of such messages that can be delivered per day, auto-directing the poster to a policy about these types of posts, or blocking the post and redirecting the poster to something like a collective "petition post".
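A sketch of the spike detector, assuming some upstream spam-filter-style classifier has already scored each message as abusive-looking; the window and threshold are made-up numbers:

```python
import time
from collections import deque

class InboxSpikeFilter:
    """Divert a sudden flood of abusive-looking messages into a counted bucket."""

    def __init__(self, window_s: float = 3600.0, threshold: int = 50):
        self.window_s = window_s      # how far back to look for a spike
        self.threshold = threshold    # abusive messages per window before diverting
        self.arrivals: deque[float] = deque()
        self.diverted = 0             # surface only this count to the victim

    def deliver(self, looks_abusive: bool, now: float | None = None) -> bool:
        """Return True to deliver normally, False to divert into the bucket."""
        now = time.time() if now is None else now
        if not looks_abusive:
            return True
        self.arrivals.append(now)
        while self.arrivals and now - self.arrivals[0] > self.window_s:
            self.arrivals.popleft()   # drop arrivals outside the window
        if len(self.arrivals) > self.threshold:
            self.diverted += 1
            return False
        return True
```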
Prevent doxxing, swatting, and active physical harassment
In some cases the "internet lynch mob" will step across the purely digital line. This usually starts with the discovery and posting of people's direct, physical-world information. This is one of the few cases where immediate and instant removal of the content is probably warranted. In fact, there are several ways that responses more aggressive than simply removing the content could be brought into play, including intentionally altering the information so that it is incorrect, notifying the victim, or even escalating to the local authorities.
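Detection is the hard part. As a very crude sketch, one could flag posts containing things shaped like phone numbers or street addresses; these toy, US-centric regexes are purely illustrative, and a real system would need far better PII detection:

```python
import re

# Toy patterns for illustration only.
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")
STREET = re.compile(r"\b\d+\s+\w+\s+(?:Street|St|Avenue|Ave|Road|Rd|Blvd|Lane|Ln)\b", re.IGNORECASE)

def looks_like_doxxing(text: str) -> bool:
    """Flag posts that appear to contain someone's physical-world contact details."""
    return bool(PHONE.search(text) or STREET.search(text))
```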
Hate Speech
As hot-button as hate speech is, it is really just a special case of "disturbing content" or a "flame war", and the same intermediate options for responding to it are available. Keep in mind that they don't have to be applied equally to all "disturbing content": one might do something mild, such as greying the text of a comment like "Vim is universally better than Emacs in all cases", while going with something far heavier, like a spoiler bar plus disemvoweling, for hate speech like "Kill all the _________s", with a content warning of "this post may contain hate speech". The option of complete removal also, as always, remains on the table.
Legal Compliance
There are also many types of content that are legally required to be removed. This gets super complicated as a site becomes international, since some posts only need to be blocked in some regions; in other cases removal is straightforward worldwide. The one additional level of nuance that can be added here is that, rather than simply removing illegal posts, one can replace them with a "this post removed because of legal compliance xxx" notice, as a subtle indicator that the hosting site is having its hand forced.
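A sketch of region-aware tombstoning, assuming each post carries a hypothetical map from region codes to the legal basis forcing its removal there:

```python
def render_for_region(post: dict, viewer_region: str) -> str:
    """Show the post, or a tombstone naming the legal basis, depending on region."""
    blocked = post.get("blocked_regions", {})  # e.g. {"DE": "NetzDG"}
    if viewer_region in blocked:
        return f"[This post was removed for legal compliance: {blocked[viewer_region]}]"
    return post["body"]
```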