Summary
The Centre for Information Policy Leadership (CIPL), a “think and do tank” based in Washington, DC, Brussels, and London, recently published a white paper titled Building Accountable AI Programs: Mapping Emerging Best Practices to the CIPL Accountability Framework. You can find the original white paper on the CIPL website.
This white paper shows how organizations adopting AI technologies are managing the risks arising from their use. In particular, it examines how promoting accountability around these technologies builds trust internally, with customers, and with regulators.
CIPL interviewed representatives from Accenture, Cisco, Google, Mastercard, Meta, Microsoft, PayPal, SAP, and others to learn about their efforts to promote accountability around AI. Some of these organizations have already published public resources on these topics. The findings presented in this paper were informed by these interviews, case studies, and associated research.
In brief, the paper suggests that:
- There is a general awareness across all levels of these organizations, particularly among leadership, of the need to build and use AI responsibly to inspire confidence in internal and external stakeholders.
- There is a belief that, given the apparent staying power of these technologies, it is wise to invest in building sound governance frameworks.
- Direction and guidance from governments on how best to regulate these technologies would be welcome. Nevertheless, these organizations have begun formulating internal governance while leaving themselves room to maneuver in response to new and changing regulations.
- The general consensus appears to favor regulating applications of AI rather than the technology itself. The argument is that excessively restrictive policies may inhibit the development of these technologies.
- These organizations agree on the need for standardized terminology around these technologies and believe that peers across industries may need to work together to learn from each other’s best practices.
- Existing compliance frameworks may be updated to cover AI. The novelty of these technologies has compelled these organizations to network experts across disciplines to remain competitive while also mitigating risks.
Commentary
This is a timely paper that tackles a subject of great debate in the technology sector today. AI technologies are making great strides in capabilities and reach. Excitement, fear, and controversy seem to be staple features of conversations on the subject. The public, governments, and industry leaders are taking notice.
This paper provides a glimpse of how organizations at the leading edge of these technologies are maintaining human control and building accountable AI programs. The participating organizations took part out of an interest in learning how their peers were tackling the same issues. This, alongside the nascent regulatory efforts by governments around the world, suggests that there is a great deal of uncertainty in creating governance frameworks independently.
The document is somewhat repetitive in what it examines. Presenting its findings both as a general list and by classifying them into the think tank’s Accountability Framework exacerbates the repetitiveness. By doing so, CIPL seemingly wants to champion its framework to all readers, whether or not they are familiar with its prior work. This is a logical way to build mindshare, but the flow of information suffers. The reader first encounters 10 general findings, which are then broken down into the seven elements of the Accountability Framework. To the publisher’s credit, all of the findings are listed again within those elements in an easy-to-read table in the appendix.
Minor quibbles like fully justified margins aside, I believe this paper provides insight into initial attempts at self-regulating AI applications. More generally, it shows how innovation, competition, and regulation promote organizational change.
Do you seek to produce a white paper to disseminate your ideas or research? I can help. Please visit the Solutions page to learn more.
Disclaimer: I am not affiliated with CIPL or any of the organizations that were a part of this study. If you would like to provide feedback or correct something in this article, please contact me.