Some interesting new analysis was recently published on Lawfare, written by Matt O’Shaughnessy, an AI and algorithms fellow at the Carnegie Endowment for International Peace. Here is an extract:
From major policy addresses and influential strategy documents to joint statements with key partners, a major stated U.S. policy goal has been to develop “rules and norms” that ensure technologies such as artificial intelligence (AI) are “developed and used in ways that reflect our democratic values and interests.” Unfortunately, this is much easier said than done.
A closer look at one of the most accepted norms for AI systems—algorithmic transparency—demonstrates the challenges inherent in incorporating democratic values into technology.
Like other norms and principles for AI governance, efforts to make the inner workings of algorithms more transparent provide utility to policymakers and researchers alike. More detailed information about how AI systems are used can enable better evidence-based policy for algorithms, help users understand when algorithmic systems are reliable, and expose developers’ thorny design trade-offs to meaningful debate. Calling for transparency is an easy and noncontroversial step for policymakers—and one that does not require deep engagement with the technical details of AI systems. But it also avoids the more difficult and value-laden questions of what algorithms should do and how complex trade-offs should be made in their design.
Read the full article here.