When a guesthouse owner discriminates over who may use their accommodation, the law steps in. But what happens when someone renting out their private home on a platform like Airbnb refuses a guest for the same reason?
The sharing economy offers many people the opportunity to get involved, but discrimination is a serious threat. Could technology overcome prejudice, or does discrimination represent a major flaw in the entire system?
Many such platforms have been created, but the best known are the home rental platform Airbnb and the taxi service Uber. Both exist to make it easy for one person to earn revenue from another who wants to use an asset, a home or a car, for a short period.
The system’s biggest innovation was a rating system for both parties.
Ratings are excellent for those who actually use the service and foster trust, if not outright enforce good behaviour: a poor rating can see you overlooked by the next potential user. Trust has always been a significant indicator of human behaviour, and it is likely to become more important as we engage in non-personal ways online with brands and one another.
But ratings can be affected by prejudice. That is the first issue. The second is that ratings do not work at all when the service is declined.
Declining someone with a low rating makes sense, assuming the rating was justified. However, there are no rules for how to rate someone. It is possible that a rating, especially when there are only a few, was based on prejudice.
Airbnb is currently dealing with a growing number of incidents in which those looking to book on the platform are being refused not because of their rating, but because of their race.
#AirbnbWhileBlack is a current trend following a campaign by the NGO Share Better to highlight the issue. Airbnb has undertaken to address the issue with hosts and staff, but is only expected to have a formal intervention in place by September.
If it succeeds in using technology to overcome discrimination, the approach may prove useful for other platforms.
What could they do?
The single largest improvement that comes with web-based interactions is the collection and processing of vast amounts of data. It was this data that allowed a Harvard study to determine that discrimination is real. Using the data to highlight discrimination and initiate a process would be better than waiting for a complaint to be filed, and should lower the number of incidents.
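As a rough illustration of what such data-driven monitoring might look like, the sketch below flags hosts whose decline rate for one guest group far exceeds their rate for others. Everything here is an assumption for illustration: the function name, the grouping of guests, and the thresholds are hypothetical, not Airbnb's actual method, and a real system would need careful statistical controls.

```python
from collections import defaultdict

def flag_possible_bias(requests, min_requests=20, gap_threshold=0.25):
    """Flag hosts whose decline rate for one guest group exceeds their
    rate for another group by more than gap_threshold.

    requests: iterable of (host_id, guest_group, accepted) tuples.
    Thresholds and field names are illustrative assumptions only.
    """
    # host -> group -> [declined_count, total_count]
    stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for host, group, accepted in requests:
        entry = stats[host][group]
        entry[1] += 1
        if not accepted:
            entry[0] += 1

    flagged = []
    for host, groups in stats.items():
        # Only compare groups with enough requests to be meaningful.
        rates = {g: d / t for g, (d, t) in groups.items() if t >= min_requests}
        if len(rates) >= 2 and max(rates.values()) - min(rates.values()) > gap_threshold:
            flagged.append(host)
    return flagged
```

A flag would not prove discrimination; it would merely trigger the kind of review process the paragraph above describes, rather than waiting for a complaint.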
Uber allows drivers to refuse a ride, but limits the number they may refuse in a shift to ensure as many riders as possible are served. The company recently settled a lawsuit with the US National Federation of the Blind after Uber drivers had been refusing to transport the guide dogs of blind passengers.
The easiest fix, and one that has been suggested, might be to remove the elements that can identify race, such as pictures and names.
If a host were allowed only a certain number of declines, it might nudge those with bias to see that their prejudice was unfounded, without needing a rule, like those binding registered hotels, that forces them to accept a booking. The platforms would be reluctant to implement anything that might drive hosts away.
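A decline cap like the one Uber applies per shift, or the one suggested above for hosts, could be sketched as a rolling-window limiter. The class name, cap, and window below are hypothetical assumptions; neither platform publishes its actual rules.

```python
from collections import defaultdict, deque

class DeclineLimiter:
    """Allow each host at most max_declines declines per rolling window
    of `window` seconds. All parameters are illustrative assumptions."""

    def __init__(self, max_declines=3, window=7 * 24 * 3600):
        self.max_declines = max_declines
        self.window = window
        self.history = defaultdict(deque)  # host -> timestamps of declines

    def try_decline(self, host, now):
        """Return True if the decline is permitted, False if the cap is hit."""
        q = self.history[host]
        # Drop declines that have fallen outside the rolling window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_declines:
            return False  # cap reached: booking must be accepted or escalated
        q.append(now)
        return True
```

The nudge is in what happens at the cap: rather than banning the host, the platform could require a reason or route the request to review, keeping hosts on the platform while making reflexive declines harder.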
Could it go further though? Some of the worst prejudice can be found on social networks. Might a tech solution help make humanity less discriminating?
Analysing what people post on Twitter and Facebook could trigger a suggestion, or a requirement, to take an implicit association test to reveal the user's underlying bias. Blocking those with high bias scores would lower the incidence of inflammatory posts.
Unfortunately, attempts to determine whether diversity interventions work do not suggest that they do. We can, at least, begin by understanding that prejudice is real and more of a hindrance than a help.