Compliance Issues That Arise When Deploying AI
April 20, 2023
The fact that AI is a potential loose cannon was seared into the consciousness of the general public a few weeks ago by a New York Times reporter’s widely read account of his extended conversation with an early release of Microsoft’s AI-powered Bing search engine, during which the bot declared its love for him and tried to convince him to leave his wife. Even before that article appeared, however, a number of influential organizations had recognized the possibility of less spectacular but more insidious problems with AI and recommended “AI frameworks,” going back at least as far as 2019, when member countries of the Organization for Economic Cooperation and Development (OECD) adopted a set of “AI Principles,” according to a post from law firm Clark Hill.
Since then, there have been many other versions, from an alphabet soup of organizations that includes NIST (the National Institute of Standards and Technology), IEEE (the Institute of Electrical and Electronics Engineers), and the White House Office of Science and Technology Policy. The sheer number of frameworks, laws, and proposals in this area can be overwhelming, but there are some common elements, according to the Clark Hill writers, who break them down into seven components that should be considered in any comprehensive AI/ML compliance program.