A new report from the University of Technology Sydney (UTS) Human Technology Institute outlines a model law for facial recognition technology, designed to protect against harmful uses while also fostering innovation for public benefit.
Australian law was not drafted with widespread use of facial recognition in mind. The report, led by UTS industry professors Edward Santow and Nicholas Davis, recommends reforms to modernise Australian law, especially to address threats to privacy and other human rights.
The use of facial recognition and other remote biometric technologies has grown rapidly in recent years, raising concerns about privacy, mass surveillance and unfairness when the technology makes mistakes, a harm experienced especially by people of colour and women.
In June 2022, an investigation by consumer advocacy group CHOICE revealed that several large Australian retailers were using facial recognition to identify customers entering their stores, leading to considerable community alarm and calls for improved regulation. There have also been widespread calls for reform of facial recognition law – in Australia and internationally.
This new report – Facial Recognition Technology: Towards a Model Law [1] – responds to those calls. ‘When facial recognition applications are designed and regulated well, there can be real benefits, helping to identify people efficiently and at scale. The technology is widely used by people who are blind or have a vision impairment, making the world more accessible for those groups,’ said Prof Santow, the former Australian Human Rights Commissioner and now Co-Director of the Human Technology Institute.
‘This report proposes a risk-based model law for facial recognition. The starting point should be to ensure that facial recognition is developed and used in ways that uphold people’s basic human rights,’ he said.
This report calls on Australian Attorney-General Mark Dreyfus to lead a national facial recognition reform process. This should start by introducing a bill into the Australian Parliament based on the model law set out in the report.
The report also recommends assigning regulatory responsibility to the Office of the Australian Information Commissioner to regulate the development and use of this technology in the federal jurisdiction, with a harmonised approach in state and territory jurisdictions.
The model law sets out three levels of risk to the human rights of individuals affected by a particular facial recognition application, as well as of risk to the broader community.
Under the model law, anyone who develops or deploys facial recognition technology must first assess the level of human rights risk that would apply to their application. That assessment can then be challenged by members of the public and the regulator.
Based on the risk assessment, the model law then sets out a cumulative set of legal requirements, restrictions and prohibitions.
The publication of the report has particular resonance in Australia at a time when the Office of the Australian Information Commissioner is embroiled in the aftermath of the theft of 2.1 million personal identification details, including 150,000 passport numbers and 50,000 Medicare numbers, in the data breach at the Australian telecommunications company Optus.
[1] www.uts.edu.au/sites/default/files/2022-09/Facial%20recognition%20model%20law%20report.pdf