A report from the All-Party Parliamentary Group on the Future of Work highlights the mental health impact of pervasive monitoring of workers and how this has increased during the pandemic.
Pervasive monitoring and target-setting technologies harm workers’ mental and physical wellbeing and put extreme pressure on them through constant, real-time micro-management and automated assessment, according to a new report.
The All-Party Parliamentary Group on the Future of Work’s report, The New Frontier: Artificial Intelligence at Work, says the use of algorithmic surveillance, management and monitoring technologies that undertake new advisory functions, as well as traditional ones, has significantly increased during the pandemic. It states: “AI technologies are changing the nature of work, who does it and how it is done.”
It says that, while AI offers invaluable opportunities to create new work and improve its quality if it is designed and deployed with this as an objective, it also has negative impacts linked to access to work; fair pay, terms and conditions; equality, dignity and autonomy; and support, participation and learning.
It says: “The evidence we have heard indicates that adverse impacts of AI are economy-wide but that key workers in essential service sectors have been hit particularly hard.”
It highlights “a pronounced sense of unfairness and lack of agency” around automated decisions that determine access to work or its fundamental aspects. It states: “Workers do not understand how personal, and potentially sensitive, information is used to make decisions about the work that they do; and there is a marked absence of available routes to challenge or seek redress.”
This leads to a lack of trust in AI technologies, a feeling that there is little accountability and a sense that the law has been far outpaced by the magnitude and pervasive use of AI at work. The report states: “It is the role of the law to shape innovation and organisational behaviours in ways which serve the public interest. And it is the role of legislators to regulate for real accountability and real AI innovation, squarely addressing the toughest challenges we face and redirecting our trajectory towards the high road: human-centred AI and the creation of better work for all.”
It recommends the creation of an Accountability for Algorithms Act to establish a simple new corporate and public sector duty to undertake, disclose and act on pre-emptive Algorithmic Impact Assessments (AIAs), which would always include a dedicated equality impact assessment. The Act would also include an easy-to-access right to a full explanation of the purpose, outcomes and significant impacts of algorithmic systems at work, with means of redress, as well as a right to be ‘involved’ in shaping the design and use of algorithmic systems at work.
Other recommendations include collective rights for unions and specialist third sector organisations to exercise the new duties on behalf of members or other groups. The report also calls for the expansion of the joint Digital Regulation Cooperation Forum (DRCF), with new powers to create certification schemes, suspend use, impose terms and issue cross-cutting statutory guidance to supplement the work of individual regulators and sector-specific standards. Finally, it proposes the creation of a human-centred AI Strategy, based on Good Work principles, and a Work 5.0 Strategy to address the challenges and opportunities of automation resulting from AI and other modern technologies.