There is no shortage of principles and concepts that aim to promote the fair and responsible use of artificial intelligence. Yet organizations and their leaders often find themselves in a quandary when faced with hard questions about how to manage and deploy AI systems responsibly.

The U.S. Government Accountability Office recently created the first federal framework for ensuring accountability and the responsible use of AI systems. The framework lays out the essential conditions for accountability across the entire AI life cycle, from design and development through deployment and monitoring, and it provides the specific questions leaders and organizations should ask, along with the audit procedures they can use, when assessing AI systems across four dimensions: governance, data, performance, and monitoring. The aim is to help leaders and organizations move from theories and principles to practices they can actually use to evaluate and manage AI in the real world.

Understanding the Entire AI Life Cycle

Too often, oversight questions about an AI system are raised only after it has been built and deployed. That is not enough. An AI system or machine-learning application should be assessed at every stage of its life cycle, which makes it possible to identify system-wide problems that narrowly defined, point-in-time assessments cannot catch.

Drawing on the work of the Organisation for Economic Co-operation and Development (OECD) and others, we identified four important stages in an AI system's life cycle:

Design: defining the system's goals and objectives, along with any underlying assumptions and performance requirements.

Development: collecting and processing data, defining technical requirements, building the model, and validating the system.

Deployment: piloting, testing compatibility with other systems, ensuring regulatory compliance, and evaluating the user experience.

Monitoring: continuously assessing the system's outputs and impacts (both intended and unintended), refining the model, and making decisions about expanding or retiring the system.

This life-cycle view of AI resembles the one used in software development. As we have noted previously, organizations need to establish appropriate life-cycle activities that integrate planning, design, building, and testing so they can continuously measure progress and respond to stakeholder feedback.

Including the Entire Community of Stakeholders

Having the right mix of stakeholders at every stage of the AI life cycle is crucial. Technical experts are needed to weigh in on a system's performance; these could include software developers, data scientists, cybersecurity specialists, engineers, and others. But the full community of stakeholders extends beyond technical experts: it must also include stakeholders who can assess the social impact of implementing an AI system. Additional stakeholders include policy and legal professionals, subject-matter specialists, users of the system and, most importantly, the individuals affected by it.

Every stakeholder plays an important role in ensuring that ethical, legal, and economic concerns about an AI system are identified, assessed, and mitigated, and input from a wide range of technical and non-technical stakeholders helps prevent bias and unintended consequences.

Four Dimensions of AI Accountability

Organizations, leaders, and third-party assessors must focus on accountability throughout the entire life cycle of an AI system. There are four dimensions to attend to: governance, data, performance, and monitoring. Each involves important actions to take and issues to watch for.

Examine governance structures. Governance processes and structures are essential to a healthy ecosystem for managing AI. A proper governance system for AI helps manage risk, demonstrate ethical values, and ensure compliance. Accountability for AI means looking for evidence of governance at the organizational level, including clear goals and objectives, well-defined roles and responsibilities, clear lines of authority, a multidisciplinary workforce capable of managing AI systems, and the involvement of a broad set of stakeholders. It is also important to examine system-level governance elements such as technical specifications, compliance, and stakeholder access to information about the system's design and operation.

Learn the data. We all know that data is vital to many AI and machine-learning systems, but the same data that gives AI systems their power can also make them vulnerable. It is crucial to document how data is used at two stages: when the underlying model is being built, and when the AI system is in operation. Documenting the sources and origins of the data used to create the AI models is essential to good oversight. Attention must also be paid to technical issues such as variable selection and the use of altered data, and it is important to examine the data's reliability and representativeness, as well as its potential for bias, inequality, or other societal concerns. Accountability also includes evaluating an AI system's data security and privacy.
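To illustrate what one of these data checks might look like in practice, here is a minimal sketch of a representativeness test an assessor could run on training data. The pandas DataFrame, the "region" column, the reference shares, and the 10-point tolerance are all hypothetical values chosen for the example; they are not part of the GAO framework.

```python
# Illustrative only: compare the share of each group in the training data
# against a reference population (e.g., the population the system will
# serve) and flag groups that are substantially over- or under-represented.
import pandas as pd

# Hypothetical training data and reference distribution.
train = pd.DataFrame({"region": ["north", "north", "north", "south", "east", "west"]})
reference_share = {"north": 0.25, "south": 0.25, "east": 0.25, "west": 0.25}

train_share = train["region"].value_counts(normalize=True)

TOLERANCE = 0.10  # flag a group whose share deviates by more than 10 points
for group, expected in reference_share.items():
    observed = float(train_share.get(group, 0.0))
    if abs(observed - expected) > TOLERANCE:
        print(f"Representativeness flag: '{group}' is {observed:.0%} of the "
              f"training data but {expected:.0%} of the reference population.")
```

A check like this does not settle questions of fairness on its own, but it turns one of the framework's data questions into something a review team can document and repeat.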
Define metrics and performance goals. Once an AI system has been developed and deployed, it is crucial not to lose sight of the basic questions: Why did we build this system, and how can we tell whether it works? Answering them requires detailed documentation of the AI system's purpose, the definitions of its performance metrics, and the methods for evaluating its performance. The ability to assess and manage an AI system's performance matters both to the managers of the system and to those who evaluate it. These performance assessments should be carried out not only at the level of the overall system but also for the components that interact with and support it.

Plan for monitoring. AI should not be treated as a "set it and forget it" system. Many of AI's benefits come from its ability to automate certain tasks at a speed and scale beyond human capability, but continuous performance monitoring by humans remains essential. That includes setting a tolerance level for model drift and monitoring continuously to ensure the system is producing the desired results. Long-term monitoring must also include assessments of the operating environment and of whether to scale up or expand the system to other operational settings. It is important to evaluate whether the AI system can still achieve its intended goals, and to define the metrics that will be used to decide when it should be retired.
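As one sketch of how a drift tolerance can be made operational, the example below compares a feature's production distribution with the distribution seen at training time using the population stability index (PSI). The simulated data, the bin count, and the 0.2 alert threshold (a commonly cited rule of thumb) are assumptions for illustration; the framework itself does not prescribe a particular drift metric.

```python
# Illustrative only: a simple drift check that compares a model input's
# production distribution with its training-time distribution using the
# population stability index (PSI) and alerts when a tolerance is exceeded.
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between a baseline sample and a current sample of one feature."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)  # avoid log(0) for empty bins
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
training_sample = rng.normal(loc=0.0, scale=1.0, size=5_000)    # baseline
production_sample = rng.normal(loc=0.6, scale=1.0, size=5_000)  # shifted input

DRIFT_TOLERANCE = 0.2  # set per system during development, not universally
psi = population_stability_index(training_sample, production_sample)
if psi > DRIFT_TOLERANCE:
    print(f"PSI {psi:.2f} exceeds the tolerance of {DRIFT_TOLERANCE}; investigate drift.")
```

The same pattern extends naturally to the system's outputs and headline performance metrics, which can be tracked against the thresholds documented when the metrics were defined.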
Think Like an Auditor

Our framework is grounded in existing standards for internal control and government auditing, which means its audit practices and questions can be taken up by the accountability and oversight functions organizations already have. It is written in plain English so that anyone can use its principles and practices to engage with technical teams. And although our focus was accountability for the government's use of AI, the framework and approach can easily be adapted to other sectors.

Covering the four dimensions of governance, data, performance, and monitoring, the framework is well suited to executives, risk managers, audit professionals, and anyone else who works to ensure accountability for an organization's AI systems, and it provides specific questions and audit procedures for assessing those systems.

It never hurts to think like an auditor when it comes to creating accountability for AI.