In a significant move this week, Google announced the formation of an external global advisory council designed to offer “guidance on ethical issues relating to artificial intelligence, automation and related technologies”.
The Advanced Technology External Advisory Council (ATEAC) will consist of eight leading academics and policy experts from around the world, including former US deputy secretary of state William Joseph Burns, University of Bath computer science professor Joanna Bryson, and mathematician Bubacarr Bah. The council will meet for the first time in April and on three further occasions throughout the year.
This move doesn’t necessarily represent a sea change in the tech giant’s policy and attitude towards AI ethics — the company had already established internal councils, panels and review teams to confront the challenges posed by AI and related technologies, and last June it published its seven guiding AI principles, outlining its approach to the adoption of AI. Notably, however, this is the first time Google has sought worldwide expertise on AI to inform its overall strategy, and it will be interesting to see how the development shapes the company’s future business decisions, which have often drawn heavy criticism.
Google is not the only tech powerhouse examining ethics as it adopts, invests in and incorporates AI innovations. Perhaps coincidentally, just a day before Google launched the external advisory council at the MIT Technology Review’s EmTech Digital conference, Amazon revealed a collaboration with the National Science Foundation and a $10m cash injection to help develop systems based on fairness in AI. Meanwhile at Microsoft, Harry Shum, executive VP of its AI and Research Group, announced at the same conference that the company will be adding “an ethics review focusing on AI issues to its standard checklist of audits that precede the release of new products”.
The discourse around AI, particularly from the heavy hitters in Silicon Valley, has certainly changed; that much is clear. Whether this is down to pressure on these firms to adopt a less gung-ho and more measured approach as they slog it out on the AI innovation battlefield remains to be seen.
But is it realistic to expect the likes of Google to genuinely care about AI ethics to the point of prioritising these issues above their own sizeable business interests? This week the general mood at the summit in San Francisco was sceptical. Rashida Richardson, director of policy research at the AI Now Institute, was quoted by Reuters as saying: “Considering the amount of resources and the level of acceleration that’s going into commercial products, I don’t think the same level of investment is going into making sure their products are also safe and not discriminatory.”
While AI ethics may now be at the forefront of the agenda at conferences such as EmTech Digital, companies are still not being held accountable by the regulation and legislation needed to keep them in check and ensure that their roll-outs are responsible and ethical. In the absence of a single, global regulatory body for AI, large tech firms are largely left to self-regulate, developing AI-driven products and services without external directives or consequences. It’s a dangerous situation, and one which has led to several high-profile, real-world incidents in which AI-based innovations were rushed through and members of the general public paid the price.
If we want the tech giants to offer more than lip service and tokenistic gestures on AI ethics, perhaps now is the time for the industry to consider independent regulation that enforces ethical standards rather than merely talking about them.