Mary Louis received a score of 324 from SafeRent, an AI-powered tenant screening tool. The report did not explain how the score was calculated or what it meant; it simply directed the property manager to decline her application. That absence of clarity marked a troubling instance of the algorithmic decision-making many people now face when applying for housing.

After applying for an apartment in an eastern Massachusetts suburb, Louis was informed by the management company that, despite her solid rental history and a housing voucher, her application had been rejected because her score fell below the acceptable threshold of 443. Louis, a security guard with a low credit score but a landlord reference confirming her timely rent payments, felt the sting of a decision made by a faceless algorithm with no room for appeal.

The Legal Challenge Against Discrimination

Realizing that her situation was part of a larger trend, Louis joined more than 400 other Black and Hispanic tenants in Massachusetts who had been similarly affected by SafeRent’s screening decisions. In 2022, they filed a lawsuit under the Fair Housing Act, alleging that the software discriminated against them by disproportionately disadvantaging applicants who rely on housing vouchers.

Louis and class action co-plaintiff Monica Douglas argued that SafeRent’s algorithm gave undue weight to information irrelevant to their ability to pay rent, such as credit scores and non-rental debt, while ignoring their employment histories and the financial guarantee provided by their housing vouchers. Data show that Black and Hispanic applicants are more likely to have lower credit scores and to use housing vouchers, building systemic bias into such scoring.

Reflecting on the experience, Louis lamented, “It was a waste of time waiting to get a decline. I knew my credit wasn’t good, but the AI doesn’t know my behavior. It knew I fell behind on paying my credit card but it didn’t know I always pay my rent.” Although the lawsuit dragged on for two years, her fight has the potential to protect future renters from arbitrary rejections based on algorithmic scores.

Settlement and Its Implications

In a notable turn of events, SafeRent agreed to a settlement, paying $2.3 million and committing to halt its tenant scoring system for applicants using housing vouchers for five years, a rare outcome in which a tech company alters its operational practices in response to litigation. Although SafeRent maintained that it had acted within legal bounds, a company spokesperson said the litigation had become a distraction from its core mission.

Attorney Todd Kaplan, who represents Louis and the other plaintiffs, criticized the trend toward automation that removes human involvement from tenant screening, pointing to a lack of transparency in how algorithms assess applicants. Many property managers, he noted, are equally in the dark about how such systems work.

As a result of the settlement, SafeRent can no longer reduce applicants who use housing vouchers to a simple thumbs-up or thumbs-down score, opening the door to more individualized assessment based on their full profiles. Kaplan described this as a crucial step towards holding landlords and their screening processes more accountable.

The Broader Context of AI in Housing

The ramifications of AI decision-making reach deep into everyday life, touching nearly all 92 million low-income Americans across essential areas such as employment and housing. This reality was highlighted in a report detailing the potential harms of AI to vulnerable populations, which noted that automated systems often overlook the complexity of human circumstances.

Advocates like Kevin de Liban have long highlighted the absurdities that arise from algorithmic decision-making, pointing to cases in which automated systems miss critical factors in people’s lives, such as cutting eligibility for state-funded services on the basis of flawed predictions. Despite some progress, regulatory oversight of these AI systems remains limited, with few laws addressing their pervasive influence.

Consumer sentiment reflects discomfort with automated processes in significant life matters, compounded by a lack of clarity and notification when algorithms dictate decisions. Survey findings show that many respondents feel uneasy not knowing what data AI systems use to assess them.

Looking Towards the Future

As federal efforts aim to bring regulation up to speed with the fast-evolving AI landscape, including initiatives targeting algorithmic discrimination, lawsuits like Louis’s become all the more vital in establishing accountability for these systems. The recent involvement of the US Department of Justice and the Department of Housing and Urban Development in the case signals heightened attention to discriminatory practices in housing.

Kaplan suggests Louis’s case could serve as a precedent, offering a framework for tackling similar challenges in the future. Yet, as the legal landscape struggles to keep pace with technology, reliance on class-action lawsuits as a vehicle for accountability may intensify, since corporate practices often shift more swiftly than regulation, leaving marginalized populations unprotected.