Technology has been biased for centuries, or so admits Fei-Fei Li, chief scientist for Artificial Intelligence (AI) at Google. She cites scissors as an example: their design represents a bias against the 10-15% of the population who are left-handed. The problem with artificial intelligence is that the bias is hidden away, and even the researchers who train AI systems often cannot explain how those systems reached their conclusions. That is worrying when AI systems are used for medical diagnoses and financial decisions, and it raises future risks around how those areas might be regulated.

The data used to train AI systems is never neutral, and it can perpetuate and even amplify existing bias. If an HR selection system looks at historical data on what makes a successful CEO, it might conclude that white males are most likely to succeed, owing to the predominance of white males in the sample. It would then make its decisions on that basis, and with that bias.
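The CEO example can be made concrete with a short sketch. The snippet below is purely illustrative: the synthetic dataset, the features, and the use of scikit-learn's LogisticRegression are assumptions for the sake of demonstration, not anything described in the article. It shows how a model trained on historical records in which "successful" leaders are disproportionately of one gender ends up scoring a candidate from that group higher than an identically qualified candidate from another.

```python
# A minimal sketch (hypothetical data, not the article's system) showing how a
# model trained on historically skewed records reproduces that skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Features: years of experience, and gender encoded as 1 = male, 0 = female.
experience = rng.normal(15, 5, n)
gender = rng.binomial(1, 0.5, n)

# Historical "successful CEO" labels: success depends on experience, but the
# historical sample also over-represents men among those labelled successful.
logit = 0.3 * (experience - 15) + 2.0 * gender - 1.0
success = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([experience, gender])
model = LogisticRegression().fit(X, success)

# Two candidates with identical experience, differing only in gender.
candidates = np.array([[15.0, 1], [15.0, 0]])
print(model.predict_proba(candidates)[:, 1])
# The first (male) candidate receives a noticeably higher score purely because
# the training data encoded the historical imbalance.
```

Nothing in the code tells the model to prefer one group; the preference is inherited entirely from the skew in the training data, which is the point the article is making.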
Artificial Intelligence is estimated to contribute up to $15.7 trillion a year to the global economy by 2030, a figure greater than the combined GDPs of China and India, so it is worth working out how to manage some of the risks it brings with it.
Source: Fortune magazine