By Stéphane Amarsy,
CEO of D-AIM

There is still a lack of transparency in how algorithms work and how artificial intelligence makes decisions, even for their designers. Yet the public needs to be informed about the many individual decisions machines make by accumulating large volumes of personal data.

How do we define transparency?

The results of Machine Learning and Deep Learning, not to mention the models themselves, are often inexplicable, even to those who program them. That said, beyond the European Union's political commitment to a right to explanation, its citizens will likely want even more transparency around automated decisions in order to understand, assess and even oppose discriminatory practices. How exactly this transparency should be defined remains an open question. Does it require a full explanation, or simply an appreciation of a system's complexity? Must we choose between explainability and accuracy? For certain predictive techniques, accuracy is inversely proportional to explainability.
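
To make that trade-off concrete, here is a minimal sketch in Python (assuming scikit-learn; the synthetic dataset and the particular models are illustrative choices, not a benchmark). A shallow decision tree can print its entire decision logic, while a boosted ensemble of hundreds of trees typically scores higher but offers no comparably readable account of itself:

```python
# A sketch of the explainability/accuracy trade-off, using scikit-learn
# on synthetic data. Models and dataset are illustrative, not a benchmark.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An interpretable model: a shallow tree whose rules a human can read.
tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print(export_text(tree))  # the model's full decision logic, as text
print("tree accuracy:", tree.score(X_test, y_test))

# A more accurate but opaque model: hundreds of trees combined,
# with no equivalent human-readable summary of its decisions.
ensemble = GradientBoostingClassifier(n_estimators=300).fit(X_train, y_train)
print("ensemble accuracy:", ensemble.score(X_test, y_test))
```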

A better understanding of how AI-driven techniques work shouldn't be seen as a lost cause. For example, several researchers have been working to understand Deep Learning using methods derived from biological research. For others, the pursuit of interpretability is a mistake: by restricting these technologies to human capabilities, we prevent them from being used to their full potential. Indeed, to be interpretable, a model must be relatively simple. But what is simplicity in this case? Does it mean keeping explanatory factors to a minimum, or ensuring methods are as basic as possible? Or is it more about strong discriminative power? The question remains unresolved.
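
One common reading of "keeping explanatory factors to a minimum" is sparsity. As a hedged sketch (again assuming scikit-learn; the dataset and penalty strength are illustrative), an L1-penalized logistic regression drives most coefficients to zero, leaving a model that can be summarized by a handful of weighted factors:

```python
# A sketch of simplicity-as-sparsity: an L1 penalty keeps only a few
# explanatory factors. Dataset and penalty strength are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=4, random_state=0)

# The 'saga' solver supports the L1 penalty; a small C means strong
# regularization, which pushes most coefficients to exactly zero.
model = LogisticRegression(penalty="l1", solver="saga", C=0.05,
                           max_iter=5000).fit(X, y)

kept = np.flatnonzero(model.coef_[0])
print(f"{kept.size} of {X.shape[1]} factors kept:")
for i in kept:
    print(f"  feature {i}: weight {model.coef_[0][i]:+.3f}")
```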

The solution lies in hybrid artificial intelligence

One solution would be to intelligently combine symbolic AI (expert systems) with Machine Learning and Deep Learning. Expert systems were successful before Machine Learning took over, but symbolic AI retains the advantage of being something humans can read and understand. It is based on modeling logical reasoning and on representing and manipulating knowledge through formal symbols. In more mundane terms, these are expert systems that reproduce rules and decisions.
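
As a minimal sketch of what "readable" means here (the credit-scoring rules below are invented purely for illustration), an expert system is essentially explicit if/then knowledge applied by a generic engine, so every decision can be traced back to the named rule that produced it:

```python
# A toy rule-based expert system: knowledge lives in readable rules,
# and every decision is traceable to the rule that fired.
# The rules and the credit-scoring domain are invented for illustration.

RULES = [
    ("reject: income too low",  lambda f: f["income"] < 20_000),
    ("reject: recent default",  lambda f: f["defaulted_recently"]),
    ("review: high loan ratio", lambda f: f["loan"] > 0.5 * f["income"]),
    ("approve: default case",   lambda f: True),  # fallback rule
]

def decide(facts: dict) -> str:
    """Return the first matching rule's decision; the rule name doubles as the explanation."""
    for name, condition in RULES:
        if condition(facts):
            return name
    raise ValueError("no rule matched")

print(decide({"income": 45_000, "loan": 30_000, "defaulted_recently": False}))
# -> "review: high loan ratio"
```

The trade-off is equally visible: every rule must be written by hand, which is exactly the burden Machine Learning removes.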

Murray Shanahan, a Professor of Cognitive Robotics at Imperial College London, is working to create hybrid AI that brings together the best of both worlds. The idea is to "train" the system by teaching a machine the rules of a game and the state of the world around it, so that it can describe what is going on in more abstract terms. Such a system would have clear advantages over pure Deep Learning: greater transparency, and the ability to learn from a smaller quantity of data. This shows that, in AI, no technology is ever completely obsolete or abandoned; hybridizing different methods is always more useful. Legislative and social changes, with the new constraints they continually create, will strongly reinforce this trend, which is why the future shape of AI is not yet settled.
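
The general pattern can be sketched as follows (this is a generic illustration of hybridization, not Shanahan's actual architecture; the classifier and rules are stand-ins): a learned component turns raw observations into symbols, and a symbolic layer applies readable rules to those symbols, so the final decision is auditable even though the perception step is statistical.

```python
# A sketch of one hybrid pattern: a learned model produces symbols,
# and readable symbolic rules make the final, auditable decision.
# This is a generic illustration, not Shanahan's actual system.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Learned layer: maps raw features to a symbolic label ("risky"/"safe").
X, y = make_classification(n_samples=500, n_features=8, random_state=1)
perception = LogisticRegression(max_iter=1000).fit(X, y)

def to_symbol(observation) -> str:
    return "risky" if perception.predict([observation])[0] == 1 else "safe"

# Symbolic layer: explicit rules over symbols; every outcome cites its rule.
def decide(symbol: str, human_override: bool) -> str:
    if human_override:
        return "escalate (rule: human override always wins)"
    if symbol == "risky":
        return "deny (rule: risky cases are denied)"
    return "allow (rule: safe cases are allowed)"

print(decide(to_symbol(X[0]), human_override=False))
```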

Beyond transparency: accountability

Transparency, however, is not the same as accountability. Algorithmic transparency can raise false hopes by hiding the real policies at stake. Simply opening up a model's code doesn't mean everyone can inspect it, nor does it by itself create accountability. Without consideration of the data itself, algorithmic transparency gets us nowhere; transparency for its own sake is not a sustainable goal. We need systems that are accountable in order to counter discriminatory effects. We must always remember that these systems are based on experience: they will only ever reproduce what they have observed. How are decisions made? What are the criteria? Is it fairer to give everyone equal opportunities or to combat inequity? Who decides?
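
Accountability can at least be made measurable. As a minimal sketch (the decisions, the groups, and the 80% threshold are illustrative conventions rather than a legal standard), one common check compares a system's positive-decision rates across groups:

```python
# A sketch of one accountability check: compare a system's positive-
# decision rates across groups (demographic parity / disparate impact).
# The data and the 0.8 threshold are illustrative, not a legal standard.

decisions = [  # (group, decision) pairs, e.g. from an audit log
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def acceptance_rate(group: str) -> float:
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = acceptance_rate("A"), acceptance_rate("B")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"group A: {rate_a:.0%}, group B: {rate_b:.0%}, ratio: {ratio:.2f}")
if ratio < 0.8:  # the 'four-fifths' convention, used here as an example
    print("warning: acceptance rates diverge; decisions need review")
```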

One potential way forward lies in strict requirements for potentially discriminatory processing, with a duty of total transparency and a right of opposition. Non-discriminatory processing, on the other hand, should remain entirely free, in the interest of a continuous search for innovation and progress. The resulting applications, operating with each individual's consent, will be the final judges of the quality of the output.