Trustworthy Digital Society submission in response to the House Standing Committee on Employment, Education and Training’s inquiry into the Digital Transformation of Workplaces

CREDS members were part of the team that was called as witnesses to this committee.

Recently, as part of the Trustworthy Digital Society, a number of CREDS members worked with other UTS academics to prepare a submission to the inquiry into the digital transformation of workplaces. As a result, CREDS members, along with Professor Asif Gill from FEIT, were invited to give evidence at the inquiry. We will post the submission when it is publicly available, but here is a brief summary.

I'd like to open by saying that the benefits of automated decision-making (ADM) and AI should not be evaluated solely on their ability to improve the efficiency of specific tasks; their broader social and workplace effects must also be weighed.

We have recommended a focus on four key areas:

Equity and fairness in digitalisation: We need to consider the long-term implications and psychosocial risks of workplace digital transformation. These include unfair work displacement, covert monitoring, and other societal harms that increase inequity through digital oppression. These can affect vulnerable populations to a greater extent; for example, through facial recognition technologies.

Regulation of and privacy at workplaces: Governments need to play a crucial role in regulating digital technologies, including ADM and machine learning (ML). We need laws that make workplace digital technologies trustworthy for all stakeholders by ensuring data privacy, transparency, and accountability. We also need protections for workers and their families, especially in work-from-home settings.

Ethical monitoring and governance of organisations: It is essential to implement context-based ethical, transparent, and reliable monitoring systems to foster responsible technology use. Robust governance frameworks are also needed to protect workers from cyberbullying, cyberstalking, and disinformation, and to prevent mental, physical, and reputational harm to both workers and organisations.

AI literacy for the public and ethical literacy for AI developers: We think it is crucial to provide affordable, accessible, and informed learning opportunities in AI literacy for all, and also to integrate AI ethics into STEM education so that future AI developers understand the societal impact of their work.

Keith Heggart
Senior Lecturer

Dr Keith Heggart is an early career researcher in the School of International Studies and Education, with a focus on learning and instructional design, educational technology, and civics and citizenship education.