Facebook's housing advertisement discrimination is what happens when the ethics of data collection and usage are not considered. Multiple levels of management and third parties had to let this happen for profit; does one need any more proof of the need for more education in Data Ethics?
If regulations regarding Data Ethics are not imposed on large companies and enforced, then situations where data is collected and used harmfully will undoubtedly occur.
This ban on using FHA-protected characteristics to discriminate in housing needs to be applied more generally to the commercial use of data, with limits on the collection and use of such data for direct commercial benefit. Doing so would limit data discrimination in other forms of advertising as well, along with other algorithms driven by user activity and demographics.
The U.S. Department of Housing and Urban Development's (HUD's) response to unethical data usage, as well as the more contemporary attempt to ban TikTok, shows that America has a vested interest in preventing frivolous data usage; we are simply not tempering that interest as we should. If we nurture a culture and laws surrounding data treatment that do not permit the use of data such as FHA-protected characteristics for commercial purposes of any kind, then we are taking steps toward removing systems of oppression built directly into our society. If the only way to prevent commercial discrimination is to make it illegal enough to be cost-prohibitive, we have to do so.
The ethicality of data collection and usage in relation to advertising is an ongoing question, because isn't the very act of ad targeting discrimination? When the goal is to discriminate between which people and groups want certain products, FHA-protected characteristics seem like some of the most important factors for classifying customers. We begin to see the issue when it comes to housing, which obviously has a more extreme effect on lives than, say, a simple cosmetics ad, but the process is similar. Advertising is discrimination; it falls on us to decide how comfortable we are with that.
Given the proposed changes to Facebook's machine learning algorithms that remove the algorithm's access to FHA-protected characteristics about a customer, what will be done if one of these characteristics is unknowingly recovered through data analysis and used for discriminatory classification just the same? Data science tells us that if a protected characteristic explains enough of the variation in a dataset, the data can be re-organized so that it separates along that characteristic without anyone ever seeing it; a ZIP code, for example, can act as a stand-in for race. Is this still a violation of the FHA? I say no, but what does it matter if the discriminatory effect ends up the same? Is double-blind discrimination any better than complacent discrimination if the effect is no different?
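To make this "double-blind" discrimination concrete, here is a minimal sketch in Python. Everything in it is synthetic and purely illustrative: a hypothetical protected attribute is withheld from the model entirely, yet a simple classifier reconstructs it from correlated stand-in features (a ZIP-code-like variable and an activity score), mirroring how real-world segregation patterns leak protected information into "neutral" data.

```python
# Sketch of proxy discrimination: a protected attribute removed from
# the data can still be recovered from correlated features.
# All data below is synthetic; the feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical protected attribute (never given to the model directly).
protected = rng.integers(0, 2, size=n)

# "Neutral" features that happen to correlate with it.
zip_code = protected * 2 + rng.integers(0, 3, size=n)          # strong proxy
activity = rng.normal(loc=protected * 0.5, scale=1.0, size=n)  # weak proxy
noise = rng.normal(size=n)                                     # no signal

X = np.column_stack([zip_code, activity, noise])
X_train, X_test, y_train, y_test = train_test_split(
    X, protected, test_size=0.3, random_state=0)

# A basic classifier recovers the "removed" attribute from the proxies.
clf = LogisticRegression().fit(X_train, y_train)
print(f"protected attribute recovered with "
      f"{clf.score(X_test, y_test):.0%} accuracy")  # well above the 50% chance rate
```

The point of the sketch is that deleting the protected column changes nothing about what the remaining columns encode; any ad-targeting model trained on them can still sort people along the protected line without anyone having asked it to.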
Because of situations such as this, I see data science as a tool where responsibility and care need to be taken at every step of the data collection and usage processes whenever real, human lives are involved.