Algorithmic Management in the gig economy: the lack of transparency and the risks of discrimination

Ilaria Cendret
12 December 2022

In the context of the digital transformation of the 21st century, which impacts various economic sectors, the development of digital technologies has proved essential to the rapid spread of new working models. The drivers of this digital evolution lie in new technologies, namely the growing use of artificial intelligence and, more specifically, the use of algorithms for work organisation purposes. Algorithmic management is used in several sectors, including home delivery and passenger transport, two services typically offered by digital platforms. The rapid spread of this management model entails the risk of algorithmic management being used in a discriminatory manner. The algorithm cannot, therefore, be considered a neutral tool in the organisation of work. An example of discrimination based on the functioning of the algorithm has recently been recognised by an Italian court (Tribunale di Bologna).
Typologies of algorithms and the concept of Algorithmic Management

Over the last century, the processes of automation and digitalisation have been increasingly applied to industry and production. The OECD (Organisation for Economic Co-operation and Development) refers to the spread of new technologies as the era of digital transformation (1). The reasons for this are to be found in the development of new technologies (especially artificial intelligence), globalisation, and demographic changes, which are having a profound impact on our societies. The use of Artificial Intelligence (AI) has expanded rapidly in recent years, especially with the emergence of increasingly sophisticated algorithms, which are at the core of the development of new ways of working.

1.1 The concept of Algorithmic Management

The use of algorithms to automate organisational services traditionally performed by human managers has been defined as “algorithmic management” and has been applied to both digital platform-based work and conventional work settings. At the heart of this new way of working is the concept of algorithm, which, according to the online Oxford English Dictionary, is “a process or set of rules to be followed in calculations or other problem-solving operations” (2).

The roots of algorithmic thinking in business organisation date back to the 19th century. The sociologist Max Weber argued that management should be organised in layers (a management hierarchy) and through an objective selection process (based on employees' technical skills and competencies): his theory of bureaucratic management influenced modern business organisation throughout the 20th century.

Algorithmic management has taken on a broader dimension in recent years in the context of the so-called Fourth Industrial Revolution (3). More precisely, algorithmic management is based on software algorithms, defined as “computer-programmed procedures for transforming input data into a desired output” (4), as opposed to the simple algorithmic support used in work organisation. Algorithmic management thus refers to the use of computer procedures to organise work using new technologies such as data collection (cameras, sensors, audio devices, etc.).

The term algorithmic management was first used by Professor Min Kyung Lee in 2015 (5) to refer to "software algorithms that assume managerial functions and surrounding institutional devices that support algorithms in practice". Indeed, algorithmic management consists of assigning and evaluating human tasks through the use of algorithms.

Similarly, some researchers (6) have defined algorithmic management as a set of technological tools and techniques used to remotely manage the workforce through data collection and control of the workers in order to enable automated or semi-automated decision making.

However, to date there has been no large-scale representative research on algorithmic management, which means that existing data and statistics derive from qualitative case studies, particularly in the context of platform-based work. It should be remembered that data collection is at the heart of the digital platform business model, but algorithmic management has also been used in other work contexts, such as warehouse organisation, marketing, legal services, banking and consultancy.

1.2 The different types of algorithms

Algorithmic management is a process of automation of human management services and is not limited to the use of new technologies (such as the algorithm) in the organisation of work. In fact, it is a new and specific way of working, based on a computer process (which could be automated or semi-automated) that transforms input data into a determined outcome, allowing the development of business models. In order to understand algorithmic management, it is essential to illustrate and understand the functioning of the algorithm and the computer processes on which it is based.

A computer algorithm is a set of defined instructions, translated into a machine-understandable programming language, to perform various types of operations. The variety of operations such systems can perform is matched by a plurality of algorithm types. Within the broad concept of computer algorithms there are, for example, "the sum algorithm", which performs mathematical operations and can be used by banking applications to calculate monthly expenses, and "the Google search algorithm", which returns hundreds of web page results from a word or a set of characters. These examples show that an algorithm does not require any form of complexity (as far as rational mental functions are concerned) or an ability to understand and adapt to new situations.

In a context of incremental use of algorithmic systems, it is necessary to point out that the category of computer algorithms encompasses a wide variety of systems. In order to better understand the concept of algorithm, some authors (7) have pointed out a tripartition of the category of computer algorithm. Taking the multiplicity of algorithmic systems into account, three main categories of algorithms can be outlined based on their ability to recognise different concepts and on the technological process used to arrive at a given result: the classical algorithm (rule-based algorithm), the machine learning algorithm and the strong artificial intelligence algorithm.

The most common typology is the classical algorithm, capable of producing a result solely on the basis of the inputs entered (such as the numbers to be summed or the characters that make up a search phrase) and the coded instructions (computer instructions translated into a programming language that can be recognised by a computer). This category is characterised by a final result that depends only on the code instructions and the input information, without any external data. These algorithms were the first to appear in computer science and have the advantage of being easier to understand (due to their simplicity). For example, a sorting algorithm organises the different elements of a list according to their alphanumeric order.
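The sorting example can be sketched in a few lines of Python. This is purely illustrative: the point is that the output is fully determined by the fixed instructions and the input list, with no learning phase and no external data.

```python
# A classical (rule-based) algorithm: insertion sort.
# The result depends only on the input list and the fixed
# instructions below -- no external data, no learning phase.
def insertion_sort(items):
    result = list(items)  # copy, so the input list is left untouched
    for i in range(1, len(result)):
        current = result[i]
        j = i - 1
        # shift larger elements one position to the right
        while j >= 0 and result[j] > current:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = current
    return result

print(insertion_sort([3, 1, 2]))  # deterministic: always [1, 2, 3]
```

Run twice with the same input, the algorithm always produces the same output, which is exactly what distinguishes this category from the learning algorithms discussed next.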

The second type of algorithm is the machine learning algorithm, which is characterised by the use of external data during a learning phase. This learning process allows the algorithm to adapt to solve a wide variety of problems by modifying its internal parameters (the number of which varies according to the model). In this way, the algorithm adapts itself to solve different problems, but it cannot depart from the data provided during the learning phase. Criticism of the use of algorithms often refers to this typology, defined as "blind" by the French National Data Protection Commission, the Commission nationale de l'informatique et des libertés (CNIL): "the algorithm without data is blind, data without algorithms is voiceless" (8).
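A toy example can make the contrast with the classical algorithm concrete. Here a single internal parameter is adjusted from training data (the figures and update rule below are invented for illustration): the final behaviour depends on what the algorithm has seen, not only on the coded rules.

```python
# Toy machine learning sketch: fit one internal parameter w so that
# the prediction w * x approximates the training outputs. Unlike a
# rule-based algorithm, the result depends on the training data.
def train(samples, learning_rate=0.01, epochs=200):
    w = 0.0  # the internal parameter, adjusted during learning
    for _ in range(epochs):
        for x, y in samples:
            error = w * x - y              # prediction error on this sample
            w -= learning_rate * error * x  # nudge the parameter to reduce it
    return w

# training data: outputs are roughly 2 * input
w = train([(1, 2.0), (2, 4.1), (3, 5.9)])
print(round(w, 1))  # converges close to 2.0
```

The "blindness" criticised by the CNIL is visible even here: the learned parameter reflects whatever the training samples contained, including any bias in how they were collected.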

In recent years, the learning process of algorithms has advanced, as has the branch of machine learning known as deep learning (9), which is based on a network of algorithms, each of which passes a simplified representation of the data to the next level. Deep learning algorithms can process large amounts of data and are therefore useful for dealing with unstructured data.

Finally, the third typology is the theoretical category of strong artificial intelligence algorithms, still in the making: a multifunctional, intelligent algorithm capable of self-learning. However, some authors (10) have pointed out that this category remains theoretical, without any technology that could be used in practice.

1.3 The three areas of application of Algorithmic Management

A recent report by the European Commission (11) shows that algorithmic management is mainly expressed through the design of organisational control in three main areas: direction, the evaluation of workers and the exercise of disciplinary power.

Algorithmic management is used in both digital platform-based models and conventional work models, and consists of giving workers instructions about the tasks to be performed, the order of execution and the time interval.

Firstly, in a platform-based work setting, algorithms perform some management functions by automatically assigning tasks to workers via smartphones and computers. For example, ride-hailing drivers in the United States receive requests to transport people after activating the smartphone app and have a limited amount of time to refuse or accept the ride (12). Similarly, food delivery platforms send instructions to workers via a handheld device or smartphone specifying the task to be performed, the location where the food is to be picked up and delivered, and the route to be taken to reach the customer's destination. Failure to comply with the platform's instructions can lead to a range of sanctions, such as restricting access to the app or deactivating the worker's account, penalties for taking a different route or, more generally, a bonus-malus system aimed at verifying that the driver follows the predetermined schedule.

In conventional employment settings (such as warehouses, factories and marketing companies), algorithmic management is used to automatically allocate tasks to workers, to schedule the workforce, and to direct workers so that the timing of their tasks is respected. Amazon's warehouses, for example, use wearable devices with barcode scanners to track the movement and location of workers. The advantage of wearable devices over handheld ones is that they leave workers' hands free to handle different materials (13).

Secondly, the algorithm is used to evaluate workers. In a platform-based work setting, workers are ranked based on customer ratings and job acceptance rates, as is done on the Uber and Lyft platforms. Drivers then receive weekly performance statistics via an app on their smartphones. However, customer reviews are also becoming increasingly important in conventional employment settings, as evidenced by the inclusion of online customer reviews, such as those on TripAdvisor, in individual performance management and team meetings (14). Moreover, in the warehouse context, algorithmic management allows real-time data (collected by wearable devices that compile productivity statistics) to be compared with past performance.

Finally, algorithmic discipline is linked to worker evaluation. Indeed, in the context of work on digital platforms, access to work via smartphones or computers usually depends on workers' rankings. For example, the Uber and Lyft apps automatically deactivate a worker's account if they don't maintain a minimum average rating set by the platform. In conventional work activities, especially in the warehousing sector, the data collected reinforces the disciplinary function of human managers, which is often based on the ratings generated by the algorithm; moreover, workers receive work instructions via their smartphones, which also compare the worker's productivity with that of previous days, and if poor work performance is detected, access to work can be automatically restricted, as is the case for platform workers.
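The automatic-deactivation rule described above can be sketched as follows. The threshold and names here are hypothetical, chosen only to illustrate the mechanism; they are not the actual parameters used by Uber or Lyft.

```python
# Hypothetical sketch of an automated deactivation rule of the kind
# described above: if a worker's average rating falls below a
# platform-defined minimum, access is restricted automatically.
# The threshold below is assumed for illustration only.
MIN_AVERAGE_RATING = 4.6

def account_active(ratings, minimum=MIN_AVERAGE_RATING):
    if not ratings:
        return True  # no reviews yet: no basis for deactivation
    return sum(ratings) / len(ratings) >= minimum

print(account_active([5, 5, 4]))  # average ~4.67 -> True
print(account_active([5, 4, 4]))  # average ~4.33 -> False
```

The point of the sketch is that the disciplinary decision is taken without any human assessment of context: a single numerical condition stands in for the manager.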

2. Risks of discrimination and the need for transparency in Algorithmic Management

Algorithmic management may raise some issues related to workers' rights, including the treatment of personal data and protection from potential discrimination. Indeed, as the International Labour Organisation (ILO) states in the Philadelphia Declaration, “labour is not a commodity” (15) and the protection of workers' rights and working conditions flows from this principle.

Algorithmic management raises the issue of data transparency, as many companies do not publicly disclose the evaluation method used or the incidence of customer feedback on worker ranking. The need for transparency is all the more pressing because the algorithmic organisation is carried out through electronic performance monitoring, defined (16) as "email monitoring, wiretapping, content and computer time tracking, video monitoring, and GPS tracking". This data, combined with the use of big data, forms the so-called People Analytics (17) practices: a human resources management process aimed at better understanding job performance.

The practices used to collect productivity data can be highly invasive, as they require the monitoring of personal data, such as the level of integration with colleagues and the mood of workers, through the use of "sociometric badges" (18). The monitoring process can easily be carried out, for example, through smartphone applications that collect information such as the number of rides accepted, the average speed of drivers and customer satisfaction. Thus, algorithmic management allows access to confidential information, with a risk of continuous and invasive tracking methods.

In the context of the gig economy, workers may be exposed to intrusive methods of monitoring, even as gig-economy companies deny the existence of a subordinate employment relationship, claiming that control over workers' performance is not directly exercised and is minimal. Excessive monitoring of workers can have a major impact on the collection of their personal data, as well as on the relationship between the employer or customer and the worker (19).

Moreover, the use of algorithms in a work context risks reproducing the biases of human programmers, who focus exclusively on the productivity of work performance, and thus, reproducing possible discrimination. The European Economic and Social Committee has noted that “the development of AI is currently taking place within a homogenous environment principally consisting of young, white men, with the result that (whether intentionally or unintentionally) cultural and gender disparities are being embedded in AI, among other things because AI systems learn from training data” (20). This highlights the importance and consequences of the data entered into the algorithm.

In fact, the inputs made by programmers into the algorithm, although objective, can lead to a discriminatory result due to the automated process of standardisation and repetition. As a result, algorithmic management can reinforce discriminatory practices that may result from the modification of the data entered into the system or from the self-learning process that enables the algorithm to reprogram its own criteria in order to achieve a predetermined objective entered by the programmers (21).

Another essential aspect of algorithmic management is that workers have no control over it: they are in a weak position with regard to working conditions and decisions. For this reason, the role of trade unions can prove essential in gaining a better understanding of how the algorithm works and in involving workers in the input data that is entered into it. Despite workers' freedom of association and the right to collective bargaining (22), the actual improvement of algorithmic management achieved by workers may be very limited, partly due to the fragmentation of the workforce and strong competition between unions.

An example of the role of trade unions in improving working conditions for platform workers was the consultation of the European Trade Union Confederation (ETUC) (23) prior to the European Commission's Proposal for a Directive, published on December 9, 2021. The Confederation underlined the need for a directive - based on Article 153(2) TFEU and respecting both national practices and the autonomy of social actors - that would establish a rebuttable presumption of employment, with the burden of proof on the platform company. The Confederation emphasised the essential role of trade unions and collective bargaining in the organisation of work by digital platforms and in guaranteeing workers' rights and interests. In the context of algorithmic management, trade union activism is crucial for democratic control, transparency of the algorithm and protection of workers' personal data. This is why the Confederation - during the consultation phase in accordance with Article 154 TFEU - demanded the right of workers (regardless of their legal qualification) to be informed about the management and control mechanisms used by platform companies, with particular reference to data processing (results obtained, performance statistics, etc.) and the means of control used. The impact of the ETUC consultation is clearly visible in the European Commission's proposal for a directive (24) aimed at improving the working conditions of platform workers.

The issue of informing workers about the methods of control and data collection used in algorithmic management is central because, as some authors have pointed out (25), it can have two main consequences. Firstly, the scale and diversity of organisational systems entails a greater power of control and direction (for the companies that own the digital platforms), which in turn means more occasions for the exercise of disciplinary power. Secondly, algorithmic management raises the issue of transparency, since algorithms are defined as "black boxes" that operate in an opaque way from the point of view of those who receive their instructions; some legal mechanisms, such as trade secret clauses or confidentiality obligations, may increase the lack of transparency and workers' ignorance of algorithmic functioning, referred to as "computer illiteracy".

As the French National Artificial Intelligence Strategy (26) project has also pointed out, trust in artificial intelligence is closely linked to the transparency and fairness (27) of algorithmic data processing, so that it does not become an obstacle to further innovation and exploitation of new technologies. Problems arising from algorithmic management must be balanced with the needs of productivity and development: a transparent system using algorithmic management would be a response to potential algorithmic discrimination. However, the fluctuating and evolving nature of algorithms represents a limit to the attempt to regulate the functioning of algorithms (28).

Thus, the need for algorithmic transparency emerges as a response to the possible violation of personal data processing and algorithmic discrimination. Transparency must be aimed at providing information about the processing of users' data and the purpose pursued. Some researchers at the Data & Society Research Institute (29) have proposed five common and flexible principles for creating reliable algorithms: responsibility, explicability, accuracy, auditability and fairness. With regard to the last two criteria, it has been stressed that control and possible correction of the algorithm by third parties, such as trade unions, can avoid a discriminatory use of the algorithm. The publication of the results of the assessment and criteria used in algorithms would make it possible to avoid discriminatory and prejudicial decisions being taken automatically.

The development of common principles for designers and programmers is an attempt to standardise the operation of algorithms that are used in specific work contexts and are subject to constant technological change due to innovative purposes or company policies (30).

3. The central role of the platform. An example of discrimination based on the functioning of the algorithm taken from the Italian case-law

The rapid spread of algorithmic management may entail the risk of discriminatory decisions, which can be particularly pervasive in the work context of digital platforms. Therefore, the algorithm cannot be considered neutral, as it determines the organisation of service provision.

In recent years, case-law concerning discrimination through the use of artificial intelligence has increased in multiple countries. Several examples of algorithmic discrimination, especially based on gender and race, have been detected in the United States of America. The "Gender, Race and Power in AI" research of the AI Now Institute at New York University (31) has identified a persistent problem of discrimination, via the use of artificial intelligence, based on gender and ethnicity. The research underlines that image recognition technologies misclassify the faces of people of colour. It has been found that denial of access to the app - due to the software's failure to recognise a transgender worker's face - led to the deactivation of the driver's account. The facial recognition problem has also occurred with reference to other drivers claiming to be victims of racial discrimination (32).

A notable case is that of an Italian court, which recently recognised the discriminatory nature of an algorithmic system, particularly with regard to the possibility of participating in a strike.

In order to recognise algorithmic discrimination, it must be pointed out that the use of algorithms gives decisions the appearance of neutrality, with the risk of reproducing or even systematising discrimination. The necessary condition for an objective justification of the algorithmic decision is an understanding of how the algorithm works (33). According to the defenders of algorithmic management, the algorithm would be a neutral tool because it is based on numbers and reduces the time and errors of human intervention. However, some authors have pointed out that the algorithm cannot be considered as such, since it uses numerical data that derive from potentially subjective information (34).

In addition, the intervention of the digital platform is crucial in the organisation of the service: the digital platform not only plays the role of intermediary between workers and clients, but also determines the organisation of work in order to guarantee the transport service (35). The Court of Justice of the European Union had the opportunity to rule on the involvement of digital platforms in the provision of services carried out by workers (without ruling on their legal qualification) in the case of Asociación Profesional Elite Taxi vs. Uber (36).

While algorithmic discrimination against consumers has started to be recognised, mainly as a result of the General Data Protection Regulation (GDPR) of 2016, case-law has only recently ruled on algorithmic discrimination in the organisation of work. In fact, Italian case-law has recognised a case of algorithmic discrimination after analysing the labour relationship that exists between the platform, the user and the worker in the execution of food delivery services, taking into account how deliveries are organised.

According to the ruling of the Tribunale di Bologna (37), the Deliveroo platform allows riders to book availability sessions, during which they receive work proposals, but they do not all gain access to the booking calendar at the same time. In fact, workers' access to the booking calendar - which opens at different times for different workers - depends on the "booking index", composed of the worker's reliability index (concerning his participation in booked sessions) and the peak participation index (i.e. the most relevant times for home meal consumption). These two elements therefore constitute criteria of preference or exclusion for booking sessions, and thus for the possibility of making deliveries and receiving remuneration. The combination of these indexes determines the worker's "ranking", which is central to access to reservations: the better the ranking, the better the time slots the worker can choose.
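A hypothetical reconstruction of such a booking index can illustrate the mechanism the court describes. The weights and formula below are invented for illustration only: as noted later in the article, the actual calculation was never proven in court.

```python
# Hypothetical reconstruction of a "booking index" of the kind
# described in the ruling: a score combining a reliability index
# and a peak participation index. The weights are invented for
# illustration -- the real calculation was not disclosed in court.
def booking_index(sessions_booked, sessions_attended,
                  peak_slots_offered, peak_slots_worked,
                  w_reliability=0.7, w_peak=0.3):
    reliability = sessions_attended / sessions_booked if sessions_booked else 0.0
    peak = peak_slots_worked / peak_slots_offered if peak_slots_offered else 0.0
    return w_reliability * reliability + w_peak * peak

# A rider who misses a booked session (e.g. to join a strike) scores
# lower than one who attends, whatever the reason for the absence.
striker = booking_index(10, 9, 5, 4)   # one booked session missed
regular = booking_index(10, 10, 5, 4)  # all booked sessions attended
print(striker < regular)               # True: the absence lowers the rank
```

Whatever the real weights, the structure itself is what the court found discriminatory: the score mechanically penalises any absence, blind to its justification.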

The Tribunale di Bologna emphasises that such a reservation system is discriminatory because it prevents the worker from deviating, as regards access to work, from the pattern established by the algorithm. In fact, the geolocation system binds the worker to perform tasks in a specific geographical area, which consequently limits the choice of future work opportunities for personal reasons or, as in the case at hand, for joining union initiatives. Even though the case deals only with discrimination based on trade union activity, the Tribunale mentions, among the possible factors of discrimination, the case of a disabled family member, as stated by the Court of Justice in the Coleman case (38), and the case of illness, where it limits full and effective participation in personal life (39).

As far as Italian and European jurisprudence on the interpretation of the term "convictions" (40) is concerned, freedom of association and trade union activity are included among personal convictions. Some authors (41) argue that trade union freedom cannot be dissociated from participation in a strike as a discriminatory ground, since trade union freedom is expressed not only through a personal conviction, but also through a series of actions, such as participation in a strike. Thus, even occasional participation in a strike remains an expression of that freedom.

Deliveroo's system of organising work treats the justification for not attending a booked session as irrelevant. In fact, the platform does not differentiate between the various possible reasons behind a worker's cancellation or refusal to perform, which is followed by a lower ranking. Trade unions have complained about the "blindness" or lack of awareness of the algorithm, particularly with regard to the possibility of participating in a strike (42). Nevertheless, the company does treat two situations differently, namely injury over consecutive shifts and system malfunction. As a result, the inclusion or non-inclusion of certain justifications in the worker's evaluation system appears to be a deliberate choice by the company.

It is important to bear in mind the difference identified by the Court of Justice between the concept of "discrimination" (which appears in the European directives) and "judicial remedies" (which do not prevent Member States from introducing a more favourable regime). Member States may provide for a specific remedy for collective discrimination, the actio popularis; in the case-law referred to above, the action was brought by trade unions, associations and organisations representing the right or interest concerned, in accordance with Article 5 c. 2 d.lgs. 216/2003, in the case of collective discrimination against a group of workers (43).

The recognition of algorithmic discrimination overrides the presumption of platform workers' freedom, which, as previously ruled by the Tribunale di Palermo (44), is in fact only fictitious, since the choice of working time slots depends on elements unrelated to the workers' preferences. Indeed, the discriminatory element of the algorithm is the reservation system based on a ranking, which undermines the apparent freedom of choice of the worker mentioned in the contract signed between the Glovo platform and the platform worker. The decision points out that the worker "feels powerless against the unknown mode of operation of the platform (...) exactly like a worker of the last century with respect to the mode of operation of the assembly line, on which, however, the chief worker could probably provide him with operations in real time" (45). When the platform worker arrives at the booked time slot, he only receives offers for individual deliveries, according to the algorithm's selection of workers to carry out the tasks. The centrality of the platform also emerges here, since the Tribunale di Palermo, after analysing the organisation of the work activity, states that "the plaintiff's work was managed and organised by the platform (as mentioned above, organised solely by the employer party and in its own exclusive interest), in the sense that only by accessing it and submitting to its rules the plaintiff could perform the work services" (46).

Finally, it is essential to underline that in both decisions (Tribunale di Bologna and Tribunale di Palermo), the companies that own the platforms did not prove in court the actual functioning of the algorithm (i.e. the exact calculation process determining the rankings of individual workers), but only argued that the organisation of the system had been changed.

In conclusion, the decision of the Tribunale di Bologna is certainly an important reference in Italian and European labour law, as it recognises the discriminatory nature of the algorithmic system. This jurisprudence enriches the rights of platform workers and highlights the problem of opacity and lack of transparency that characterises algorithms. The recognition of the discriminatory nature of algorithms has a strong impact on work platforms, as a universalist protection seems to emerge regardless of the legal qualification of the relationship between the worker and the platform. The planned interventions of the European Union and the Member States are essential to understand the specificities of artificial intelligence systems (47) and to regulate technological innovation. For this reason, the participation of trade unions in the debate on the protection of workers against the risks of algorithmic management is essential, even if the enforcement of an excessively detailed regulation could allow digital platforms to circumvent European and national regulations, weakening their effectiveness.

Note

(*) Ilaria Cendret, Dottoressa.


(1) Science, Technology and Industry Scoreboard, OECD, 2011.

(2) OED Online, 2021.

(3) As Professor Tae Kyung Sung points out, the concept of the Fourth Industrial Revolution must be distinguished from the concept of Industry 4.0 (which refers only to the manufacturing sector). In fact, the Fourth Industrial Revolution is a broader concept that encompasses all the changes affecting society, such as political systems, culture, and means of communication and living.

Sung T.K., Industry 4.0: A Korea perspective, Technological Forecasting and Social Change, Volume 132, July 2018.

(4) Kellogg KC., Valentine M. and Christin A., Algorithms at Work: The New Contested Terrain of Control, 2020, Academy of Management Annals 14(1): 366–410.

(5) Lee MK., Kusbit D., Metsky E. and Dabbish L., Working with machines: the impact of algorithmic, data-driven management on human workers, 2015, Proceedings of the 33rd Annual ACM SIGCHI Conference, Seoul, South Korea, 18–23 April. New York: ACM Press, 1603–1612.

(6) Mateescu A., Nguyen A., Explainer Algorithmic Management in the Workplace, 2019, New York: Data & Society.

(7) Fischman S., Gomes B., Intelligences artificielles et droit du travail : contribution à l'étude du fonctionnement des plateformes numériques, in Adam P., Le Friant M., Tarasewicz Y., Intelligence artificielle, gestion algorithmique du personnel et droit du travail, « Thèmes & commentaires », Dalloz, August 2020, 37-54.

Gamet L., Le travailleur et (les deux visages de) l'algorithme, Droit social, n°10, October 2022, 775.

(8) CNIL, Comment permettre à l'homme de garder la main ? Les enjeux éthiques des algorithmes et de l'intelligence artificielle, December 2017.

(9) Lee MK., Kusbit D., Metsky E. and Dabbish L., Working with machines: the impact of algorithmic, data-driven management on human workers, 2015, Proceedings of the 33rd Annual ACM SIGCHI Conference, Seoul, South Korea, 18–23 April. New York: ACM Press, 1603–1612.

(10) Fischman S., Gomes B., Intelligences artificielles et droit du travail : contribution à l'étude du fonctionnement des plateformes numériques, in Adam P., Le Friant M., Tarasewicz Y., Intelligence artificielle, gestion algorithmique du personnel et droit du travail, « Thèmes & commentaires », Dalloz, August 2020, 37-54.

Gamet L., Le travailleur et (les deux visages de) l'algorithme, Droit social, n°10, October 2022, 775.

(11) Wood A., Algorithmic Management: Consequences for Work Organisation and Working Conditions, JRC Working Papers Series on Labour, Education and Technology, July 2021.

(12) Lee MK., Kusbit D., Metsky E. and Dabbish L., Working with machines: the impact of algorithmic, data-driven management on human workers, 2015, Proceedings of the 33rd Annual ACM SIGCHI Conference, Seoul, South Korea, 18–23 April. New York: ACM Press, 1603–1612.

(13) Delfanti A., Machinic dispossession and augmented despotism: Digital work in an Amazon warehouse, New Media & Society, 2019.

(14) Orlikowski W.J. and Scott S.V., What Happens When Evaluation Goes Online? Exploring Apparatuses of Valuation in the Travel Sector, 2014, Organization Science 25(3): 868-891.

(15) ILO, Declaration of Philadelphia, Declaration concerning the aims and purposes of the International Labour Organisation, 1944.

(16) Moore P., Akhtar P., Upchurch M., “Digitalisation of Work and Resistance” in Moore P., Upchurch M., Whittaker X., “Humans and Machines at Work: Monitoring, Surveillance and Automation in Contemporary Capitalism”, Palgrave Macmillan, 2018.

(17) Bodie M.T., Cherry M.A., McCormick M.L., Tang J., “The Law and Policy of People Analytics”, University of Colorado Law Review, Forthcoming; Saint Louis U. Legal Studies Research Paper n. 2016-6.

(18) Fischbach K., Gloor P.A., Lassenius C., Olguin Olguin D., Pentland A., Putzke J., Schoder D., Analyzing the Flow of Knowledge with Sociometric Badges, COINs2009, 2009.

(19) Similarly, users' trust in artificial intelligence and in the processing of personal data is a competitive factor for companies. With regard to the relationship with platform users, platform practices have been discussed by Google, Facebook, IBM, Microsoft and Amazon within the “Partnership on Artificial Intelligence to Benefit People and Society”.

OECD (2019), Artificial Intelligence in Society, OECD Publishing, Paris, 142.

(20) European Economic and Social Committee, 526th EESC Plenary Session of 31st May and 1st June 2017, Opinion of the European Economic and Social Committee on ‘Artificial intelligence — The consequences of artificial intelligence on the (digital) single market, production, consumption, employment and society', Rapporteur: Catelijne Muller.

(21) De Stefano V., “Negotiating the algorithm”: Automation, artificial intelligence and labour protection, ILO Employment Policy Department, 2018, Working paper n. 246.

(22) Recognised by the ILO among the four categories of fundamental principles and rights in the workplace. ILO, Declaration on Fundamental Principles and Rights at Work, adopted by the International Labour Conference at its Eighty-sixth Session, Geneva, 18 June 1998 (Annex revised 15 June 2010).

(23) ETUC Executive Committee, ETUC reply to the Second phase consultation of social partners under Article 154 TFEU on possible action addressing the challenges related to working conditions in platform work, 9 September 2021.

(24) European Commission, Proposal for a Directive of the European Parliament and of the Council on improving working conditions in platform work, 9 December 2021.

(25) Gaudio G., “Workers' Rights Litigation in the Time of Algorithmic Management”, conference talk at “Labor and Rights in the Internet Revolution”, Ca' Foscari University Venice, 14 January 2022, Session 3, “The Qualification of Labor Relations: Subordination, Autonomy, Protection Needs”, organised as part of the project “PRIN2017 - Dis/Connection: Labor and Rights in the Internet Revolution”.

(26) Stratégie nationale en intelligence artificielle, contribution of subgroup 3.2.B “Intelligence artificielle, enjeux juridiques” to working group 3.2 “Anticiper les impacts économiques et sociaux de l'intelligence artificielle”, March 2017.

(27) In the audio-visual field, the Conseil supérieur de l'audiovisuel (CSA) relies on the notion of fairness and proposes the publication of indicators measuring the qualitative aspects of algorithms.

(28) Observation of the Conseil Supérieur de l'Audiovisuel (CSA) at the opening of the public debate concerning the ethical issues raised by algorithms. CNIL, Compte rendu de l'événement de lancement du cycle de débats publics sur les enjeux éthiques des algorithmes, 23 January 2017.

(29) Diakopoulos N., Friedler S., “How to Hold Algorithms Accountable”, MIT Technology Review, 17 November 2016.

(30) Reference is made to the changes made to the algorithmic management systems used by digital platforms, such as Deliveroo and Uber, in order to avoid the qualification of the relationship between the platform and the worker as an employment relationship.

(31) West S.M., Whittaker M. and Crawford K., Discriminating Systems: Gender, Race and Power in AI, AI Now Institute, 2019.

(32) Melendez S., Uber driver troubles raise concerns about transgender face recognition, 9 August 2018.

Booth R., Ex-Uber driver takes legal action over ‘racist' face-recognition software, The Guardian, 5 October 2021.

(33) Sereno S., Focus sur les discriminations par algorithme, Revue de droit du travail 2020, 680.

(34) Pietrogiovanni V., Frank è sordo (alle norme giuridiche) ma non è cieco, Labour Law Community, 28 May 2021.

(35) Falletti E., Il leading role della giurisprudenza comparata in materia di discriminazioni algoritmiche subite dai lavoratori della gig economy, Questione giustizia, 11 November 2021.

(36) CJEU, 20 December 2017, Asociación Profesional Elite Taxi v Uber Systems Spain SL, C-434/15.

(37) Tribunale di Bologna, 31 December 2020, n. 2949/2019.

(38) CJEU, Coleman, 17 July 2008, C-303/06.

(39) CJEU, HK Danmark, 11 April 2013, C-335/11 and C-337/11.

(40) ECtHR, Campbell and Cosans v. the United Kingdom, 25 February 1982, n. 7511/76 and 7743/76; Corte di Cassazione, 2 January 2020, n. 1/2020; ECtHR, Danilenkov v. Russia, 30 July 2009, n. 67336/01.

(41) Tarquini E., L'algoritmo Frank e il diritto al conflitto sindacale, Labour Law Community, 3 June 2021.

(42) Purificato I., Behind the scenes of Deliveroo's algorithm: the discriminatory effect of Frank's blindness, Italian Labour Law e-Journal, Issue 1, Vol. 14, 2021.

(43) Federazione Italiana Lavoratori dei Trasporti - Filt Cgil di Bologna, Filcams Cgil di Bologna and Nidil Cgil di Bologna. Rizzi F., L'algoritmo Frank: un caso di discriminazione ipotetica?, Labour Law Community, 5 May 2021.

(44) Tribunale di Palermo, 24 November 2020, n. 3570/2020. The Tribunale di Palermo ruled that a subordinate employment relationship existed between the platform and the worker.

(45) The worker “feels powerless in the face of the unknown way the platform functions (…) exactly like a factory worker of the last century with respect to the functioning of the assembly line, on which, however, the foreman could presumably give him instructions in real time” (translated from the Italian).

(46) “The claimant's work was managed and organised by the platform (which, as noted, was set up solely by the employer and in its exclusive interest), in the sense that the claimant could perform his work only by accessing it and submitting to its rules” (translated from the Italian).

Fava G., Nota alla sentenza del Tribunale di Palermo n. 3570/2020 pubbl. il 24/11/2020 - il rapporto di lavoro dei riders, Lavoro Diritti Europa, 14 January 2021.

(47) European Commission, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, 21 April 2021.