Who Will Monitor the AI Monitors? And What Should They Watch?

As all good policy practitioners know, all regulatory activities are constrained by the agency problem: Those who are being regulated know far more about what is going on than those who are charged with overseeing their activities.

But there are even more problems of information asymmetry associated with AI regulation, because even those who are on the "inside" are still learning about the capabilities of the new technologies they are developing.

AI technologies, because of their humanlike and constantly evolving characteristics, which develop along dimensions their own creators are not entirely sure about, have the potential to create totally unexpected "surprises": the true "black swans" that not even prudent risk management can reasonably be expected to anticipate. This invokes Frank Knight's insightful 1921 distinction between risk, which can be measured and quantified, and uncertainty, which cannot.

Making decisions under such Knightian uncertainty is both an art and a science, relying on intuitive judgement and tacit knowledge built through experience navigating complex, unstable environments, synthesized with what little tangible information is available to inform the process. Knight saw entrepreneurial judgement as an important skill in making such decisions.

AI is a frontier technology that should be regulated as such.  

The EU has proposed, and the 27 member states have endorsed, an AI Act comprising 12 Titles, 85 Articles, and 13 Annexes (272 pages of text) stipulating the requirements for developers, importers, distributors, and providers of AI applications used in Europe. Even if an AI is developed outside of Europe, these rules require models used there to comply with the EU legislation, which will come into effect two years after ratification, likely in 2026 or 2027.

In the US, the White House announced an "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" on October 30, 2023, outlining the obligations of federal entities with regard to AI development and use. This was backed up on March 28, 2024, by the Office of Management and Budget's (OMB) release of the operational directions (rules) that apply to the agencies. This more modest set of rules still amounts to 34 pages of instructions, reflecting the fact that it operates within an existing legislative framework for government agencies, whereas the EU Act stands alone in governing all AI activity.

Despite their very different general requirements for governing the deployment of AI applications, both sets of rules claim to have risk and risk management at their core. Yet both rely on a very narrow notion of risk. There are 767 instances of "risk" in the final EU text and 101 mentions of it in the US EO. Both start from the assumption that the developers and deployers of AIs have clear notions of the risks their applications pose and can confidently identify risk management strategies. There is no mention at all of uncertainty in the US EO, and the only mentions of it in the voluminous EU text relate to legal uncertainty.

Hence, both sets of rules focus on bureaucratic processes of risk management and accountability. Both require the deployers of covered AI applications to have named and registered individuals responsible for their operation, and both require extensive (and audited) documentation of the risk management plans for those applications. The regulatory requirements are tangible and quantifiable, even though the outcomes of the AIs they govern still embody considerable uncertainties.

In both the EU and the US, a veritable army of skilled people will be required to monitor and report on AI activities. The EU AI Office has been established for this purpose, and each member state will need an office to certify AI deployments. In the US, the OMB will do the job. The EU is already actively hiring for the AI Office. In the US, the National Telecommunications and Information Administration has identified a shortage of people with the requisite skills, as these same people are also in high demand among AI developers and regulators. It has proposed government scholarships, subsidies, and extensive in-house education and training to ensure the workforce is developed.

But what skills will these new bureaucrats need? Clearly not just knowledge of how AI works and the practical application of risk management, but also the requisite understanding of how the firms, sectors, and industries in which the AIs are deployed actually operate: that is, the entrepreneurial knowledge and experience Knight foresaw as necessary for making good decisions under uncertainty.

Successful AI regulation, just like the development and implementation of AIs themselves, requires a workforce with very different skills from those needed for most regulatory tasks in recent history. Only by embracing the art and science of decision-making under uncertainty can we truly harness the potential of AI while mitigating its risks and preparing for its surprises.
