On September 28, 2024, the UNESCO Information for All Programme Working Group on Information Accessibility held its fifth annual Artificial Intelligence for Information Accessibility (AI4IA) Conference.

The AI4IA Conference explored AI risks and opportunities, focusing on presentations that would appeal to the “widest cross-section of people.” Specifically, the conference examined critical international themes for the use of AI, including presentations on:

  • AI as a tool for access to justice and the preservation of human rights
  • The affordances of generative AI, the devaluation of truth and the meaning of “vulnerability” in the digital age
  • Generation Alpha, hyper-personalization and the divides in information accessibility
  • How generative AI is bracketing power and reinventing the value of information, intellectual property and information accessibility
  • The higher dimensions of human creativity versus AI
  • Access, not atrophy: autonomy, personhood and digital agency
  • Esports as a tool for upskilling, empowerment, inclusion and access to science, technology, engineering, arts and math
  • AI technologies: risks and opportunities for disabling disability
  • The paradox of information access and carbon footprint
  • The intersection of AI and neurotechnology

BakerHostetler participated in the 2024 AI4IA Conference with a presentation titled “Improving Accessibility Through AI Program Accountability,” which built on those themes, asserting that constructing and maintaining a strong AI program with articulated components and documentation benefits all stakeholders in an organization using or affected by AI, thereby improving accessibility to AI and extending its benefits.

The BakerHostetler presentation covered the basics of AI program development, focusing on explaining AI and related documentation in a way that is accessible to the general public, including:

  • Components of internal AI programs
  • How interpretability intersects with, but is not synonymous with, transparency
  • Benchmarking among organizations that have developed AI policies and explored AI program development, including considerations of internal AI charters and committees
  • The application of legal standards, including the Colorado AI Act, and frameworks, including the National Institute of Standards and Technology AI Risk Management Framework
  • Considerations related to developer (creator) and deployer (implementer) roles within organizations and how procurement and contracting are opportunities for organizations to utilize gating measures to responsibly deploy AI technologies
  • How conformity assessments, audits, data privacy impact assessments and other internal documentation methods help improve organizational knowledge as well as defensibility in the case of a regulatory inquiry

The presentation, available in its entirety here, is a follow-on to BakerHostetler’s presentation at the 2021 AI4IA Conference, “Ethics in Artificial Intelligence and a Practical Approach to Presentation & Defense,” which discussed the importance of understanding the data involved in AI technologies rather than focusing on the math, and correctly predicted that consent issues arising where AI intersects with data privacy would grow in importance.
