# What is Responsible Research and Innovation?
## Section 2: Collective and Distributed Responsibility

> **Note**
> The following sections are also part of this module:
>
> - Section 1: [Understanding Responsibility](rri-100-1.md)
> - Section 2: [Collective and Distributed Responsibility](rri-100-2.md)
> - Section 3: [Defining Responsible Research and Innovation](rri-100-3.md)
> - Section 4: [The Scope and Horizon of Responsibility](rri-100-4.md)

## The Problem of Many Hands

<!-- Reflective Question -->
!!! question

    How should we assign moral responsibility when large groups of people, organized or unorganized, wrongfully cause some harm?


This is not just an abstract challenge for moral philosophers. It is known as the 'Problem of Many Hands' and affects areas including:

- Anthropogenic Climate Change
- Medical Negligence
- Public Policy and Governance


## Collective Responsibility

One response to the problem of many hands is to ascribe "collective responsibility" to groups of people or organisations.

<!-- Admonition -->
!!! info "Collective responsibility"

Collective responsibility, here, is not the same as mechanisms of legal accountability that are used to fine organisations (e.g. limited liability companies).


Whether we can ascribe collective responsibility is independent of any practical consequence (e.g. fines, sanctions). Here, we are interested only in whether it makes sense, conceptually, to blame a collective.


### Arguments *For* Collective Responsibility

In large-scale data science and AI projects, roles and responsibilities are necessarily distributed, for several reasons:

- Individuals have specific forms of expertise, and the delivery of a fully functional AI system requires a plurality of skills.
- Reuse of datasets is common, and the collection and curation of the original dataset may not have been performed by the software engineer responsible for training a model.
- Pre-trained models and cloud computing services are increasingly common (e.g. AI as a service, APIs), such that project teams are reliant on third-party infrastructure.



As such, when we ask where responsibility for the behaviour of an AI system lies (e.g. responsibility for biased or discriminatory inferences), it is incredibly difficult to locate it at an individualistic level of abstraction because of the distributed nature of the system's development.
Rather, it seems best located at the level of a project team or organisation (i.e. collective and distributed responsibility).
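To make this concrete, consider how many distinct parties a single model prediction can depend on. The following is a purely illustrative sketch; every name in it (`load_shared_dataset`, `fetch_pretrained_model`, the dataset and model identifiers) is a hypothetical placeholder, not a real library or service:

```python
# A minimal sketch of the 'many hands' behind a typical ML pipeline.
# Every name here is a hypothetical placeholder, not a real library.

def load_shared_dataset(name: str) -> list[dict]:
    """Stands in for a dataset collected and curated by a different team,
    possibly years earlier and for a different purpose."""
    return [{"text": "example record", "label": 0}]

def fetch_pretrained_model(name: str):
    """Stands in for a model trained by a third party, whose data and
    training choices the project team never directly observes."""
    return lambda record: {"prediction": 1, "confidence": 0.87}

# The project team's own contribution may be a thin layer of glue code:
dataset = load_shared_dataset("public-benchmark-v2")   # the curators' hands
model = fetch_pretrained_model("vendor/base-model")    # the vendor's hands
outputs = [model(record) for record in dataset]        # the engineers' hands

# If the outputs turn out to be biased, whose hands are responsible?
print(outputs)
```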



<!-- Reflective Question -->

!!! question

    If moral praise and blame are only directed at moral agents, does this make companies or organisations moral agents?



### Arguments *Against* Collective Responsibility

Holding a group collectively responsible can have pragmatic benefits (e.g. punishing a group for the misbehaviour of an individual creates disincentives against future misconduct).
However, our intuitions may still lead us to think that only one person was truly *responsible*.

> "Value belongs to the individual and it is the individual who is the sole bearer of moral responsibility. No one is morally guilty except in relation to some conduct which he himself considered to be wrong... Collective responsibility is... barbarous." (Lewis 1948, pp. 3–6)

It could also be argued that collective responsibility lets individuals off the hook for personal responsibility.
For instance, consider the phenomenon of the 'bystander effect', explained in the following video:

<iframe width="560" height="315" src="https://www.youtube.com/embed/OSsPfbup0ac?start=35" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>


Those who wish to argue against collective notions of responsibility, therefore, may appeal to such intuitions or point to empirical phenomena like the bystander effect.
However, defenders of collective responsibility could respond that the real failure here is conceptual: an inability to adequately and precisely specify particular types of responsibility.

For instance, in the context of a data science or AI project, while we may not be able to locate responsibility for the system's overall behaviour, we can identify specific forms of individual responsibility (the first of which is sketched below):

- Responsibility for transparent documentation of data provenance
- Responsibility for due diligence when reusing existing artefacts (e.g. datasets, pre-trained models)
- Responsibility for carrying out thorough risk assessments
- Responsibility for organising meaningful forms of stakeholder engagement
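
As a hedged illustration of the first of these responsibilities, a team might record data provenance in a simple, machine-readable form. The sketch below is loosely inspired by the idea of 'datasheets for datasets'; the class and field names are assumptions made for illustration, not an established schema:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ProvenanceRecord:
    """A minimal, illustrative record of where a dataset came from.
    The fields are assumptions for this sketch, not a standard schema."""
    dataset_name: str
    collected_by: str                # who gathered the original data
    collection_method: str           # e.g. survey, web scrape, sensor logs
    known_limitations: list[str] = field(default_factory=list)
    downstream_users: list[str] = field(default_factory=list)

record = ProvenanceRecord(
    dataset_name="public-benchmark-v2",
    collected_by="External research consortium (hypothetical)",
    collection_method="Web scrape of public forums",
    known_limitations=["English-only", "Over-represents younger users"],
    downstream_users=["sentiment-model-v3"],
)

# Serialising the record makes this individual responsibility auditable.
print(json.dumps(asdict(record), indent=2))
```

Even a lightweight record like this gives later teams something concrete to check when they reuse the dataset, which is precisely the kind of well-specified individual responsibility the argument above calls for.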


<!-- Reflective Question -->
!!! question

    What do you think?
    Is collective responsibility a coherent concept?
    Or should we make do with structured forms of individual responsibility?


## One More Thing: The Replicability Crisis

The philosopher of science Sabina Leonelli has argued that the replicability crisis is evidence of the challenge of locating moral responsibility and professional accountability within large-scale research projects:

> "The recent ‘replicability crisis’ in psychology and biomedicine, which many perceive as evidencing an overwhelming lack of research integrity and a failure of peer review, could also be interpreted as illustrating the difficulties in making individuals accountable for their data processing actions within large research networks—which in turn generates problems when attempting to reconstruct, describe and evaluate the methods and assumptions made in any one piece of research." > (Leonelli, 2016)
