Reworking the gender balance in the AI, IoT industries
Re•Work’s third Women in AI dinner was held in London in February 2018. This regular networking event celebrates women in artificial intelligence (AI) and showcases their achievements. Although the speakers are women, these are not women-only events. This is important, because diversity is about inclusivity, not segregation.
There are not enough women working in tech, let alone in AI. In the UK, for example, 83% of people working in science, technology, engineering and math (STEM) careers are men, according to figures presented at UK Robotics Week 2017. It has been reported that less than 10% of coders are women, despite Ada Lovelace being widely considered to be the first computer programmer.
Re•Work is attempting to improve the gender balance in the burgeoning AI community by organizing a series of dinners featuring female expert speakers who talk about their work at the cutting edge of emerging technology. Re•Work founder Nikita Johnson and her team are careful not to dwell on traditional "women’s challenges." Instead, Re•Work is focusing on technology and research, showcasing women in AI in a way that overrides traditional preconceptions.
However, the challenge is that bias and preconception are deeply ingrained in society, which means they are also ingrained in the data used to develop AI applications. Accordingly, the first presentation, by Silvia Chiappa, senior research scientist at DeepMind, was about innovating towards algorithmic fairness.
Curing the bias virus
Machine learning is already used to make and support decisions that affect people's lives: in hiring, education, lending, policing, and law. Judges and parole officers, for example, use algorithms to predict the likelihood a defendant or prisoner will reoffend. It is critical to ensure these algorithms are not biased toward or against individuals from particular social or racial groups.
The big challenge is that it is impossible to take the bias out of historical or precedent data, which reflects the preconceptions that existed in society at the time it was collected, so DeepMind is innovating ways to increase algorithmic fairness.
In AI terms, it is ineffective to disregard sensitive factors like race or gender, or give them a negative weighting, because this can have a negative impact on system performance. And it may not increase fairness because these factors are correlated with other attributes. For example, there is a positive correlation between race and neighborhood.
This underlines the importance of contextualizing problems: identifying conscious and unconscious bias and looking for solutions. In other words, we can’t eliminate biases, but we can use them to work towards a fairer society, Chiappa said.
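The point about correlated attributes can be illustrated with a small sketch. The data, groups, and decision rule below are entirely hypothetical (they are not from DeepMind's research): a "blind" rule that ignores the sensitive attribute and decides only on a correlated proxy, such as neighborhood, still reproduces the historical disparity between groups.

```python
# Toy illustration (hypothetical data): dropping a sensitive attribute
# does not remove bias when a proxy attribute is correlated with it.

# Each record: (group, neighborhood, historical_approval).
# Group "A" mostly lives in neighborhood 1, group "B" in neighborhood 0,
# and historical approvals track the neighborhood rather than merit.
records = [
    ("A", 1, True), ("A", 1, True), ("A", 1, True), ("A", 0, False),
    ("B", 0, False), ("B", 0, False), ("B", 0, False), ("B", 1, True),
]

def approval_rate(rows, group):
    """Fraction of historical approvals for a given group."""
    members = [r for r in rows if r[0] == group]
    return sum(r[2] for r in members) / len(members)

def blind_decision(neighborhood):
    """A rule that never sees the group, only the neighborhood proxy."""
    return neighborhood == 1

# The blind rule reproduces the historical disparity between groups,
# because neighborhood is strongly correlated with group membership.
blind_rate_A = sum(blind_decision(n) for g, n, _ in records if g == "A") / 4
blind_rate_B = sum(blind_decision(n) for g, n, _ in records if g == "B") / 4
print(approval_rate(records, "A"), approval_rate(records, "B"))  # 0.75 0.25
print(blind_rate_A, blind_rate_B)                                # 0.75 0.25
```

Removing the group column changes nothing here: the proxy carries the same information, which is why fairness work focuses on identifying such correlations in context rather than simply deleting sensitive fields.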
The second presentation was from Cecilia Mascolo, professor of mobile systems at the University of Cambridge and The Alan Turing Institute. Her talk covered potential applications for built-in computational units on smartphones and wearables, particularly in developing countries that may have limited or slow access to cloud platforms.
These possibilities include using a smartphone's built-in AI capabilities to support healthcare applications. For example, voice recognition can be used for mood monitoring and early diagnosis of Alzheimer's disease. However, constant monitoring and the collection of detailed location data have privacy implications. These are analogous to the side effects of a drug, suggested Mascolo, who added that more localized computation could reduce privacy concerns while maintaining the benefits of personalized healthcare monitoring.
Self-diagnosis in wind turbines
The final presentation was from Fujitsu’s lead deal architect, Marian Nicholson, who discussed the application of deep learning in advanced image recognition. Examples include teaching wind turbines to recognize a defective blade.
Fujitsu’s work starts from the premise that humans are predominantly visual conceptualizers—i.e., babies recognize images and relate them to what’s happening around them. Today, image recognition is important for autonomous and semi-autonomous vehicles, delivery drones, and so on.
Nicholson referred to recent headlines about the dangers of AI and highlighted the need for organizations to choose to use technology for good. Fujitsu’s own mission is to build technology that will benefit society, she said. But for society to accept AI demands transparency about the data, how the system works, and—critically—why it was designed, along with the ability to identify and minimize bias. The power and potential of AI are balanced by our responsibility to ensure it is used in a safe and fair way, she said.
Joanna Goodman, Internet of Business. This article originally appeared on Internet of Business' website. Internet of Business is a CFE Media content partner. Edited by Chris Vavra, production editor, Control Engineering, CFE Media, firstname.lastname@example.org.
Keywords: artificial intelligence, IoT, gender gap
Too few women join STEM industries, especially AI and IoT industries.
Bias and preconception are deeply ingrained in society, which means they are also ingrained in the data used to develop AI applications.
Accepting AI requires transparency about how the system has been designed to help minimize any potential bias.
About the author
Joanna Goodman is a freelance journalist who writes about business and technology for national publications, including The Guardian newspaper and the Law Society Gazette, where she is IT columnist. Her book, Robots in Law: How Artificial Intelligence is Transforming Legal Services, was published in 2016.