Not a Threat: Deploy AI at Work

The NIST AI Risk Management Framework: Building Trustworthy AI Systems

Written by Sam Chappell | Jan 28, 2024

 

“Most people don’t like change. They revolt against it unless they can clearly see the advantage it brings. For that reason, when good leaders prepare to take action or make changes, they take people through a process to get them ready for it.”

— John C. Maxwell

Trust and Performance

Trust is an emotional, subjective and fluid concept. There is no guarantee that trust placed in one person will transfer to another, or that your trust in me today will still exist tomorrow.

However, as elusive as it may be, trust is the foundation of all productive relationships and a vital element of any high-functioning society.

Research has shown that countries whose citizens report higher levels of interpersonal trust experience greater happiness and better economic outcomes.

The Value of Trust in Business

The value of trust in a business setting is just as quantifiable.

62% of consumers say their purchasing decisions are influenced by a company's authenticity, with 94% remaining loyal to companies that operate with transparency and 73% even willing to pay more for such transparency!

Trust from employees is equally beneficial. Through employee surveys conducted over the last 30 years, Great Place to Work has uncovered that high-trust cultures lead to:

  • Employee turnover rates 50% lower than industry competitors;
  • Stock market returns 2-3X greater than the market average;
  • Increased employee willingness to go above and beyond for clients and customers.

Executives are not blind to these benefits - 91% acknowledge that their ability to build and maintain trust tangibly improves the bottom line, and more than half of Fortune 100 companies list "integrity" as a core value.

But for an organization to access the benefits enjoyed within trusting cultures, leaders must do more than update their list of corporate values. They must take deliberate and consistent action.

Shopify: The 'Trust Battery'

Tobi Lütke, the CEO of Shopify, built respect and credibility amongst his employees by introducing the concept of a 'trust battery' early in the company's history:

"Another concept we talk a lot about is something called a trust battery. It’s charged at 50 percent when people are first hired. And then every time you work with someone at the company, the trust battery between the two of you is either charged or discharged, based on things like whether you deliver on what you promise."

This metaphor was easy for employees to grasp and the results are hard to deny - over the last decade, Shopify's annual revenues have grown more than 230X.
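The trust-battery metaphor maps naturally onto a simple state machine: start at 50%, then charge or discharge with each interaction. Here is a minimal Python sketch of that mechanic; the class and method names, and the 5-point step size, are my own illustrative choices, not anything Shopify has published:

```python
class TrustBattery:
    """Illustrative model of the 'trust battery' metaphor.

    The battery starts at 50% for a new hire and is charged or
    discharged by each interaction, clamped to the 0-100 range.
    """

    def __init__(self, charge: float = 50.0):
        self.charge = charge

    def record_interaction(self, delivered_on_promise: bool, amount: float = 5.0) -> float:
        """Charge the battery when a promise is kept, discharge it otherwise."""
        delta = amount if delivered_on_promise else -amount
        self.charge = max(0.0, min(100.0, self.charge + delta))
        return self.charge


battery = TrustBattery()           # new hire: 50%
battery.record_interaction(True)   # kept a promise -> 55%
battery.record_interaction(False)  # missed one -> back to 50%
print(f"Trust battery: {battery.charge:.0f}%")  # Trust battery: 50%
```

The clamping matters: trust can be fully drained or fully charged, but each individual interaction only moves it a little, which is what makes the metaphor useful for day-to-day behavior.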

Trust in the Age of AI

As the use of artificial intelligence becomes more widespread, business leaders must consider how to foster trust in these systems from their employees, customers and suppliers.

The potential for AI to improve business processes and customer experience has been widely reported. However, the risks to privacy, data security and jobs have also dominated the headlines.

PwC found that the vast majority of consumers say protecting their data is very important to building trust, and research from McKinsey has shown that many customers value trustworthiness almost as highly as other common purchase decision factors like cost and quality:

Chart credit: McKinsey

The AI Trust Gap

When it comes to implementation of technology in the workplace, there is often a gap in trust between employees and company leadership.

Workday recently uncovered an "AI Trust Gap" in a survey of 1,375 business leaders and 4,000 employees, finding that 62% of leaders welcome AI whilst only 52% of employees do.

"Employees aren’t confident their company takes a people-first approach. 70% of leaders say AI should be developed in a way that easily allows for human review and intervention, and yet... 42% of employees believe their company does not have a clear understanding of which systems should be fully automated and which require human intervention."

The one thing they can agree on - organizational frameworks will be essential for ensuring that AI systems are trustworthy:

Chart credit: Workday

The NIST AI Risk Management Framework

Enter the National Institute of Standards and Technology ('NIST'), an agency of the US Department of Commerce with the stated mission to "promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life."

NIST contributes heavily to the AI research community and is working to establish benchmarks and develop metrics for evaluating the technology. The agency is leading the development of technical AI standards and contributing to the discussion around the development of AI policies.

NIST's AI Risk Management Framework (AI RMF) was crafted using a transparent, multidisciplinary, and multi-stakeholder approach. The tool is designed to help businesses incorporate trustworthiness into the design, development, use, and evaluation of AI systems. 

AI Risks and Trustworthiness

The AI RMF acknowledges that "trustworthiness is a social concept that ranges across a spectrum and is only as strong as its weakest characteristics" - an AI system will only be deemed trustworthy by its users if it successfully balances their various interests.

The 7 core characteristics of a trustworthy AI system are:

  • Validity and Reliability - the AI system fulfills the requirements for its intended use, delivering consistent performance over time and under expected conditions. 
  • Safety - the AI system operates without endangering human life, health, property, or the environment, and comes with clear guidelines for use.
  • Security and Resiliency - the AI system is protected from unauthorized access and capable of maintaining function in adverse conditions, or degrading gracefully if necessary.
  • Accountability and Transparency - information about the AI system and its outputs is available and accessible, enabling appropriate, context-dependent AI-human interaction.
  • Explainability and Interpretability - the AI system enables users to understand how a specific output is generated and why the AI came to that decision.
  • Privacy - the AI system safeguards human autonomy by giving people choice over how their personal data is accessed and used.
  • Fairness with Mitigation of Harmful Bias - the AI system prevents discrimination by recognizing and addressing systemic, computational, statistical, and human-cognitive biases across the AI lifecycle, from data collection to decision-making processes.

Image credit: NIST
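The "only as strong as its weakest characteristics" principle can be made concrete with a simple scoring sketch: rate a system on each of the seven characteristics and take the minimum as the overall trustworthiness. This is my own illustrative construction, not a scoring method defined by NIST; the names and 0-1 scale are assumptions for the demo:

```python
# The seven AI RMF characteristics, as listed above.
CHARACTERISTICS = [
    "validity_and_reliability",
    "safety",
    "security_and_resiliency",
    "accountability_and_transparency",
    "explainability_and_interpretability",
    "privacy",
    "fairness_with_bias_mitigation",
]


def trustworthiness(scores: dict[str, float]) -> float:
    """Overall trustworthiness is the weakest characteristic's score."""
    missing = [c for c in CHARACTERISTICS if c not in scores]
    if missing:
        raise ValueError(f"unscored characteristics: {missing}")
    return min(scores[c] for c in CHARACTERISTICS)


scores = {c: 0.9 for c in CHARACTERISTICS}
scores["explainability_and_interpretability"] = 0.4  # one weak link
print(trustworthiness(scores))  # 0.4
```

Using `min` rather than an average captures the RMF's point: excelling at six characteristics does not compensate for neglecting the seventh.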

Managing Potential Tradeoffs

Organizations will face tough decisions when attempting to balance these characteristics and appeal to their various stakeholders.

To take one example, in situations where there are limited data available to train an AI system, privacy-enhancing techniques may result in a loss of accuracy, reduction in validity and increase in bias of the system.

Every scenario is unique and will require humans to employ judgment when determining how to balance these characteristics, manage tradeoffs and maximize benefits and positive impacts.
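The privacy/accuracy tradeoff mentioned above can be demonstrated numerically with Laplace noise in the style of differential privacy: the stronger the privacy guarantee (smaller epsilon), the noisier, and therefore less accurate, the result. The dataset, epsilon values, and sensitivity of 1.0 below are all assumptions chosen for illustration:

```python
import math
import random


def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))


def noisy_mean(data: list[float], epsilon: float, sensitivity: float = 1.0) -> float:
    """Mean of `data` with privacy-preserving noise; smaller epsilon = more noise."""
    scale = sensitivity / (epsilon * len(data))
    return sum(data) / len(data) + laplace_noise(scale)


data = [42.0, 57.0, 35.0, 61.0, 48.0]
true_mean = sum(data) / len(data)


def avg_error(epsilon: float, trials: int = 2000) -> float:
    """Average absolute error of the noisy mean over many trials."""
    return sum(abs(noisy_mean(data, epsilon) - true_mean) for _ in range(trials)) / trials


random.seed(0)
# Strong privacy (epsilon=0.01) produces far larger errors than weak privacy (epsilon=1.0).
print(avg_error(epsilon=0.01) > avg_error(epsilon=1.0))  # True
```

The effect is amplified when data are limited, as the text notes: the noise scale shrinks with the size of the dataset, so small datasets pay a proportionally higher accuracy cost for the same privacy guarantee.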

Cultivating Trust and Capitalizing on Opportunity

As AI continues to evolve, it brings unparalleled opportunities for efficiency, innovation, and growth. Yet, it also presents challenges – concerns over privacy, data security, and the ethical use of AI are paramount in the minds of stakeholders.

To navigate this complexity, business leaders must prioritize the cultivation of trust – not just in their human relationships but in their technology.

This includes bridging the 'AI Trust Gap' by adopting a people-first approach, ensuring clear communication about the role of AI in the workplace, and actively involving employees in the AI integration process. By doing so, they can build a culture where AI is seen as an enabler of human potential, not a replacement.

As organizations build out their AI capabilities, balancing the recommended characteristics of trustworthy systems will require tradeoffs, but it is a necessary step to ensure that they are accepted and effectively utilized by all stakeholders.

There are no shortcuts - if you want to cultivate trust with your stakeholders and fully capitalize on the promise of AI, heed John C. Maxwell's advice and "take people through a process to get them ready for it.”