Algorithms, Law and Society: Decision Makers between Algorithmic Guidance and Personal Responsibility

Investigating non-discriminatory and employee-friendly implementation of algorithmic decision support systems in the workplace

Background 

The world of work is undergoing major changes due to digitalisation. Increasingly, decisions that fundamentally affect people's lives are taken with algorithmic support: from hiring, promotion and dismissal to creditworthiness, policing, training for job seekers and social welfare benefit claims. Such algorithmic decision support (ADS) or automated decision-making (ADM) is often perceived as particularly “objective” and “neutral”, but in fact it raises many questions about potential and actual discriminatory treatment throughout the process. These range from the underlying data, through implicit or explicit modelling and forecasting assumptions, to the user interface and the presentation of the resulting “recommendations”.
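To make the audit problem concrete, here is a minimal Python sketch (purely illustrative, not part of the project's methodology) of how the recommendations of an ADS system could be checked for group-level disparities. The data, the group labels and the four-fifths threshold are assumptions made for this example.

    from collections import defaultdict

    # Hypothetical (recommendation, group) pairs from an ADS system:
    # 1 = "recommend", 0 = "do not recommend".
    decisions = [
        (1, "group_a"), (1, "group_a"), (0, "group_a"), (1, "group_a"),
        (1, "group_b"), (0, "group_b"), (0, "group_b"), (0, "group_b"),
    ]

    totals = defaultdict(int)
    positives = defaultdict(int)
    for outcome, group in decisions:
        totals[group] += 1
        positives[group] += outcome

    # Selection rate per group: share of positive recommendations.
    rates = {g: positives[g] / totals[g] for g in totals}
    print("selection rates:", rates)

    # Disparate impact ratio: lowest selection rate over highest.
    ratio = min(rates.values()) / max(rates.values())
    print("disparate impact ratio: %.2f" % ratio)
    if ratio < 0.8:  # the common "four-fifths" rule of thumb
        print("warning: possible adverse impact; review the system")

Such a check only detects statistical disparities in outcomes; whether a disparity amounts to discrimination, and where in the pipeline it originates, are precisely the legal and technical questions the project addresses.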

Research Objective

The AlgoJus project focuses on problems that arise in the interaction of algorithms with humans. In contrast to human decision-making, where discretionary decisions are at least possible in principle, there is no such leeway when algorithms are used. Their ostensible objectivity and neutrality often lead to discriminatory results being denied or explained away. Existing discriminatory practices are thus reinforced and become even more difficult to address due to the lack of transparency of such algorithmic systems. All these issues constitute fundamental obstacles to equality, inclusion and freedom from discrimination.

The project deals with the question of how algorithmic decision support systems can be introduced and used in a way that mitigates the drawbacks and problems outlined above. We will consider these issues from a legal, ethical and technical viewpoint.
In particular, the project addresses the following questions:

  • Which types of regulations and measures are required to delineate the responsibility of decision makers who work with algorithmic decision support from that of the algorithm developers?
  • How can one ensure that full responsibility can be taken for every decision when algorithms are used – without burdening employees with responsibility they cannot bear due to their limited scope of action? How can one map and track where responsibility shifts, or has to shift?

Moreover, the use of algorithmic decision support in the context of policing and justice represents a special case. Here, the effects of such systems on society as a whole will also be examined.

Result 

In addition to the scientific results, the project will develop guidelines for employers as well as for works councils that aim to ensure that such algorithms are used in the workplace in a manner that is as non-discriminatory and employee-friendly as possible.

 

The project is funded by the Digitalisation Fund of the Vienna Chamber of Labour and is carried out in cooperation with VICESSE.


Researcher
Institute of IT Security Research
Department of Computer Science and Security
Location: B - Campus-Platz 1
Partners
  • VICESSE
Funding
Digitalisation Fund of the Vienna Chamber of Labour
Runtime
02/01/2020 – 01/31/2021
Status
finished
Involved Institutes, Groups and Centers
Research Group Data Intelligence
Institute of IT Security Research