Use of Sensitive and Protected Data
This Policy aims to promote the responsible and ethical use of generative artificial intelligence (AI) technologies. By establishing clear minimum requirements and prohibited uses of AI, it ensures that AI technology is used to enhance efficiency, innovation, and productivity while safeguarding against potential risks and misuse.
All Supported Entity workforce members (employees and contractors) and any contracted third party performing work on behalf of the Agency must comply with this Policy. Supported Entity senior leadership is responsible for establishing procedures for its Agency's compliance with the requirements of this Policy. This Policy is binding on Supported Entities and should be considered a best practice for others.
Blind Trust of AI Outputs
Generative AI technology, including chatbots, virtual assistants, and similar applications, is increasingly prevalent in professional settings. These technologies, such as OpenAI's ChatGPT and DALL-E, Google Gemini, and Microsoft Copilot, offer diverse functionalities and integration options, ranging from standalone systems to seamless embedding within existing infrastructures.
Generative AI tools can significantly enhance productivity, efficiency, and innovation across a range of tasks, including drafting documents, editing text, generating ideas and images, writing software code, analyzing data, and detecting anomalies. However, they also pose risks such as bias, inaccuracies, and intellectual property concerns. As AI technology evolves rapidly, organizations must carefully assess both the benefits and risks associated with its adoption. The State of Iowa is actively developing procedures to navigate this dynamic landscape of AI technology.
Personal Accounts
Department of Management (DOM) Information Technology Procurement
Department of Administrative Services (DAS) Procurement