What the AI Bill of Rights means for real estate
The White House has published a blueprint for rules to govern automated systems to ensure they don’t interfere with people’s rights or exacerbate inequalities.
The AI Bill of Rights, published by the White House Office of Science and Technology Policy, proposes five principles for the use of AI. Under them, people should:
- be protected from “unsafe or ineffective systems”
- not face discrimination from algorithms
- have control over how data about them is used
- know when an automated system is used and understand why, and
- be able to opt out “where appropriate”.
While artificial intelligence and machine learning have “brought about extraordinary benefits”, “this important progress must not come at the price of civil rights or democratic values,” the White House said in a statement.
What impact those principles will have in reality – and how real estate will have to respond – is another matter.
What regulatory power does the AI Bill of Rights have?
None. These are non-binding guidelines that companies and government agencies can choose to comply with voluntarily.
In that case, should real estate pay attention?
Anna C Westfelt, head of privacy at law firm Gunderson Dettmer, says that while the document isn't binding, its scope touches most industries, and it forms part of a wider ramping-up of AI regulation in the US at federal, state and local level.
She points to a New York City law on automated employment decision tools, which comes into force in January 2023 and will impose “onerous requirements” on employers using automated tools in hiring.
“Several state laws coming into effect in 2023 include provisions regulating the use of AI and automated decision-making, calling for risk assessments and human oversight and, in some cases, a right to opt out,” she says.
What areas of real estate could be affected?
Future regulation could affect the use of AI for everything from tenant, asset and occupancy management to property or market analysis. “In essence, where the use of AI and ML affects individuals, we can expect future regulation,” Westfelt says.
How can a developer or asset manager minimise those risks?
Because the White House’s principles are not binding, they give real estate the opportunity to prepare for any future regulations that are. If you use AI in any way, make sure that use conforms with the AI Bill of Rights. Is it potentially discriminatory? Can people opt out?
The National Institute of Standards and Technology recently released the second draft of its AI Risk Management Framework, which will help organisations identify and address risks in their AI products, services and systems.
Westfelt also recommends requesting details of any underlying data if you use third-party AI tools.
Could the Bill of Rights affect AI regulations beyond the US?
Maybe, but not necessarily.
John Buyers, international head of AI at Osborne Clarke, says: “It might indirectly influence regulation beyond the US as it focuses on the key elements of ‘ethical’ AI. However, what it is saying is not innovative. It is basic common sense and many of the themes in the Bill of Rights will have been written about extensively elsewhere.”
As usual, Europe is ahead on the regulatory front, with the EU introducing a draft AI Act in 2021. The proposed law splits systems into four risk categories: unacceptable risk (e.g. social scoring by governments), high-risk (e.g. safety components in products), limited risk (e.g. chatbots where people are aware they’re talking to a machine) and minimal or no risk (e.g. spam filters).
The rules, set to come into force in late 2022 or early 2023, would ban anything that falls under “unacceptable risk” and set strict requirements for those in the “high risk” category. US companies operating in Europe will have to be aware of, and comply with, those regulations.