The future is here. ‘Artificial Intelligence’ (AI) surrounds us every day – in real-time traffic apps, online store shopping recommendations, even medical diagnostic tests. And while AI is here to stay, its potential has yet to be fully understood and tapped.
AI: Role in COVID-19
After months of ‘shelter in place’ and ‘work from home’ mandates due to the COVID-19 pandemic, it has become clear that technology allows us to do (nearly) everything from our screens at home. But AI has since taken on a new role: a technology that until recently served our more superficial conveniences is now urgently needed to help curb the spread of COVID-19. The health care and technology sectors are working together to deploy AI systems in response to the virus – systems to track the spread, support medical responses and maintain a sense of control. These systems are still being tested.
Hopefully, one day they will be able to reduce the strain on overwhelmed healthcare systems and personnel, but there is a potential downside. Because these algorithms are human creations, they are subject to bias, and biased systems can perpetuate existing societal inequity, posing serious risk.
AI: Bias and Ability for Objectivity
How can a technological system hold bias? In principle, AI can be programmed to be objective, applying universal standards that directly combat bias. In practice, however, AI systems are created by everyday people, so a high level of awareness and dedicated review processes are needed to check whether human bias has (unknowingly) been programmed in. Another potential issue: AI systems rely on data to learn, and while data sounds objective enough, data collection is rife with social and cultural biases. For some marginalized populations, data does not exist at all because of barriers to access. It is important to understand, as we weave technology into our lives and sometimes depend on it fully, that data and technology aspire to objectivity but are created by subjective humans. There must be regular reviews to ensure that technology is in fact an objective tool rather than a further enforcement of bias.
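One such review can be surprisingly simple to start. As a minimal, hypothetical sketch (the data and threshold below are invented for illustration, not taken from any real system), a reviewer might compare a model's positive-prediction rates across demographic groups, a check often called the demographic parity gap:

```python
def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_group_a, preds_group_b):
    """Absolute difference in positive-prediction rates between two groups.
    A large gap is one warning sign (not proof) that a model may encode bias."""
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

# Hypothetical model outputs (1 = favorable decision, 0 = unfavorable)
# for two demographic groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 75% favorable
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 25% favorable

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.2f}")
```

A gap this wide would not by itself prove discrimination, since groups can differ for legitimate reasons, but it flags exactly the kind of pattern a human review should investigate before the system is trusted.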
AI: A Call for Digital Literacy in Workers
While programmers will have to be diligent to guarantee the objectivity of future AI systems, current AI demands a shift in worker skills. There has been much debate over whether AI will make the average worker redundant, but the reality is that the required skillset is different while the worker is still needed. Many blue-collar workers, nationally and internationally, are not traditionally trained in digital literacy, yet this is the call of the future. Various NGO efforts, mostly in Southeast Asia, are currently providing digital literacy courses to help minimum wage and informal workers gain the skills necessary to earn jobs in technology. If resources and training are provided so the average worker can gain digital skills, not only will they be able to find stable employment moving forward, but AI will become a space for inclusivity. Instead of ‘digital literacy’ being synonymous with wealthy or privileged candidates, the pool of talent will open up and allow for a diverse and equitable workforce as we move toward the future.