Great! We should be congratulating Facebook for this, for several reasons:
1. now Facebook can start designing against this. What design features promote negativity? Where is negativity being inadvertently fostered?
2. now everyone else can benefit from the research, as they chose not to keep it private despite the potential PR ramifications like this one
3. this is research that only a handful of entities in the world can or will do.
Ethically, I don't see the problem. Users already rely on Facebook to tinker with user feeds in a variety of ways, and this tinkering causes them no harm, and may benefit them and all future users. (When the users signed up for Facebook, did they sign an agreement saying 'you agree to be mentally poisoned with rage, frustration, anger and disappointment, without recourse, by anyone you are foolish enough to friend and let flood your feed'? I rather doubt it.) Given that users are already buffeted by countless unpredictable random forces beyond their control or comprehension, what is tweaking one factor going to amount to? This is the same as clinical drug trials: the question should not be whether it is moral to run a trial, the question should always be whether it is moral to not run a trial.
The OP shows no sign of understanding any of this.
A private corporation is developing the capability to alter the moods of massive populations in a controlled way; unlike mass media, society doesn't (yet?) understand this sort of influence.
Perhaps soon Facebook will understand how to tweak the news feed algorithm to e.g. whip up negative sentiment in response to an attack? Or stop a particular candidate being elected? Etc.
Perhaps they will use this for good, as you mention. But it's still huge power for a corporation to wield, effectively in secret. It's easy enough for an observer to make a judgement about a bias in, e.g., Fox News and call it out, but it would be very hard to observe subtle changes to individual news feeds.
I don't really buy your argument that they have a clear moral responsibility to develop this capability. That depends on the potential good if the capability is used well, the potential harm if it is misused, and the likelihood of each.
So I don't see this as analogous to a corporation conducting a clinical trial of a drug; this is more analogous to a clinical trial of a drug that could also be used as a weapon, with the more complicated ethical issues that would entail.
I think it's absolutely something there should be ethical discussion around.
> Perhaps they will use this for good, as you mention. But it's still huge power for a corporation to wield, effectively in secret. It's easy enough for an observer to make a judgement about a bias in, e.g., Fox News and call it out, but it would be very hard to observe subtle changes to individual news feeds.
This is power and capability given to all media organizations. When you permit their existence, you permit both their deliberate use of their capabilities for goals you dislike and their accidental, unintentional biases. There is no difference in terms of the consequences.
> In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6. “What are you doing?”, asked Minsky. “I am training a randomly wired neural net to play Tic-Tac-Toe” Sussman replied. “Why is the net wired randomly?”, asked Minsky. “I do not want it to have any preconceptions of how to play”, Sussman said. Minsky then shut his eyes. “Why do you close your eyes?”, Sussman asked his teacher. “So that the room will be empty.” At that moment, Sussman was enlightened.
Isn't your reference to other media organizations an appeal to popularity? Why shouldn't we strive for media that are more concerned about privacy and dignity? Furthermore, the interactivity and the private data involved make social networks a whole different beast.
> There is no difference in terms of the consequences.
Not in terms of consequences, but there is the difference that in a scientific study you need the consent of the experimental subjects. I think it's problematic that they assume this was given when the users clicked the TOS checkbox, since they must know that a large percentage does not know what it entails.
The problem is, classically, lack of informed consent: the vast majority of Facebook users simply don't know they can be made subjects of emotionally manipulative behavioral experiments at any time via UI tinkering, so it could be argued that the TOS small-print waiver does not, de facto, ethically (or legally) constitute proper informed consent.
An altogether different matter of course is whether you agree with the Belmont report principles in the first place, or the adequacy of principlism for social science research in general [1,2]. But as far as vanilla regulatory research ethics is concerned, this study should rightly raise several flags in any self-respecting IRB.
> An altogether different matter of course is whether you agree with the Belmont report principles in the first place, or the adequacy of principlism for social science research in general
Absolutely. I've always found 'informed consent' to be a bizarre and ill-founded moral concept, with no place in consequentialist ethics, and this is an example of why: here is an experiment of great importance to our understanding of social interactions & community design, where the intervention was just one of countless influences on the subjects, and not a large one at that (what about the 'lack of informed consent' inherent in every form of advertising?), which did not harm them, and yet... this is somehow bad? Why? Uh, because 'informed consent'!
Very well said. If I proposed this study to the IRB where I work they would likely freak out. I think Facebook should be taking advantage of their data and conducting research. But they need to do this the way most respectable research organizations do and run all their studies through a credible IRB.
Why would you assume FB is going to engineer/optimize against negativity?
Who knows? The next study, probably internal, might reveal that people on certain parts of the negative emotional spectrum are way more likely to click on ads than happy people.