Chapter 10 AI’s Outcomes-Driven World: What are the ‘outcomes organization/society/AI’? (from the Surfing AI book)
From Duignan, P. (2026). Surfing AI: 30 New Concepts for Getting Your Head Around AI Shock.
This chapter first discusses how AI is paving the way for something that organizational and social policy thinkers have long sought, a truly outcomes-driven world leading to the outcomes organization and the outcomes society. There is then discussion of how a particular outcomes tool may be able to assist in specifying, explaining, communicating and controlling what AI itself does as part of achieving outcomes-transparent AI. Lastly, the idea of outcomes AI is introduced.
The outcomes organization
Despite the inherent desirability of organizations and society being outcomes-driven, up until now we have seen limited progress in achieving this. This is because humans have structured our organizations around the limits of human cognition and constraints on our speed of communication. As a result, organizations and society itself consist of sets of siloed activities. These silos can be seen as being organized around different organizational or societal ‘functions.’ A central challenge of organizational life up until now has been how to get these separate, functionally-siloed organizational parts to coordinate in practice and align around a unified vision and mission. This is reflected in the ongoing calls in both organizational science and social policy for ‘whole-of-organization approaches’, ‘joined-up solutions’, ‘cross-functional teams’, ‘wrap-around services’ and ‘systems thinking.’
Given that integration is an intrinsically desirable objective for any organization or society, why did siloed organizations and siloed societies develop? There are three drivers that led to siloization in the past in almost all organizational and societal contexts. However, AI is now impacting these drivers and, as a result, reducing the need for organizations and societies to be as siloed as they have previously been, leading us directly to the concept of the outcomes organization. The three drivers of siloization and how they are being impacted by AI are discussed below.
First, humans have cognitive limits on how much information they can acquire and retain. This has naturally led to the need for silos within organizations. Within these silos, teams of people have been able to specialize and develop in-depth knowledge about their area of work. For instance, in many organizations there have been silos for HR, corporate services, R&D, operations, marketing and so on. AI intelligence, in contrast to human intelligence, is not constrained by human cognitive limitations. It can contain in-depth information about a wide range of different topics. Non-specialists can then use AI to instantly access deep insights about different knowledge domains. The introduction of AI means there is now less need to rely on silos that contain, manage and provide access to specific knowledge bases on particular topics.
Second, in addition to being repositories of specialized knowledge, current organizational silos consist of teams of people who have specific operational skills relating to the particular functional silo they work in. For instance, within HR there are a number of processes, such as recruitment, onboarding, performance management and organizational restructuring, that require specific skill sets. In the past, workers have had to spend much time developing and practicing these skills. However, AI can provide real-time guidance on how to undertake highly specialized tasks, and it can automate routine specialized ones. This means that specialized skill sets that were previously only available from within a specific organizational silo can now be provided much more broadly by AI right across an organization. As a result, the need for skill-set compartmentalization within functional silos is reduced.
Third, for humans to work together effectively, they need control and governance structures. In the past, for reasons of scale and efficient communication, these functions have been undertaken within organizational silos. However, AI can undertake monitoring and decision-making across a wide range of areas of an organization’s work. It is capable of coordinating the activity of many people and other entities within an organization. This is an aspect of AI orchestration. AI has the potential to manage integrated control and governance spanning multiple organizational silos more efficiently. The above three drivers of siloization have applied in the past both to silos within individual organizations and to the siloization of different organizations within a society as a whole: for instance, police, justice, welfare, commerce, revenue and finance agencies.
So with AI, we are starting to see a major leap in the possibilities for cross-organizational and cross-societal integration and collaboration. If handled correctly, this could usher in a new age in which we finally achieve the holy grail that organizational thinkers have been seeking for so long: the vision of joined-up organizations and a joined-up society. There is now increasing attention to this possibility. For instance, a Microsoft report on the impact of AI discusses how AI will result in ‘human-agent teams upending the org chart.’ In other words, the traditional organizational chart based on the siloed structure of organizations is now being replaced. The report describes the replacement for a traditional organizational chart as an outcomes-based ‘work chart,’ describing this new approach as ‘dynamic and outcome-driven.’ This change reflects the emergence of a new way of working where ‘teams form around goals, not functions, powered by agents that expand employee scope and enable faster, more impactful ways of working.’ We can describe this impact of AI as a transition to the outcomes organization and the outcomes society.
In order to rapidly adapt to the new organizational and societal forms that AI is now ushering in, we need to better understand what an outcomes organization could look like, develop well-developed ideas about how such organizations can work, and build practical tools that outcomes organizations can use. In doing this, we do not need to start from scratch. Given the appeal of the idea of outcomes-driven organizations, much thinking has already gone into how such organizations should work and into developing tools for use within them. Despite organizations up until now having inevitably involved some siloization, some progress has been made in pushing for more outcomes-driven approaches even within traditional, siloed organizations.
The author and others have been intensively involved in this work over a number of years. One framework the author has developed while helping hundreds of organizations become more outcomes-focused is the outcomes theory approach. This provides a comprehensive conceptual framework for how one can run outcomes-driven organizations. In addition, it provides well-tested practical tools that outcomes-driven organizations can use to organize their activity around outcomes rather than just staying with siloed functions. Given its outcomes orientation, this work is potentially very relevant to thinking about structuring and managing outcomes organizations in the AI age.
Outcomes theory provides a generic conceptual framework for thinking about achieving outcomes in any context. Given its generic nature, it can be applied not only to humans taking action in the world, but also to AI agents and to hybrid human-AI agent activity. A key concept within the theory is that of an outcomes system. An outcomes system is any type of system pursuing goals of any type. Such systems are broadly defined as systems that: identify, prioritize and detail the steps needed to achieve outcomes; act to achieve outcomes; align lower-level steps with higher-level priority outcomes; and measure, seek to attribute, and/or hold parties to account for outcomes of any type in any area.
There are many issues involved in transitioning organizations from being based on functional silos to being outcomes organizations. These include: the best way of specifying, working with and communicating outcomes and the steps being taken to get to them; how to ensure tight alignment between outcomes and those steps; how to hold parties accountable within an outcomes-driven organizational framework; how to deal with the problem of attributing changes in outcomes when many parties are contributing to the same outcomes; how to encourage collaboration between parties who all want to claim that improvements in outcomes resulted from their own activity; how to deal with some outcomes being easier to measure than others; and how to most effectively handle the delegation and subcontracting of the different sets of activities needed to achieve outcomes. Outcomes theory provides a robust conceptual framework for thinking about all of these issues and more for both human and AI agents. At the same time, it provides concrete tools for working with outcomes within the outcomes organization.
A key concept in outcomes theory is the idea that, underpinning any outcomes system, there is an implicit or explicit outcomes model. Outcomes theory argues that a prerequisite for the success of any organization or initiative is that the underlying ‘this-then logic’ of what it is trying to achieve is surfaced and articulated in an explicit, visualized outcomes model. Outcomes models set out in a visual format the high-level outcomes a system is seeking and the lower-level steps being employed to achieve these. Such models are built according to a set of rules to ensure that they are fit-for-purpose for planning, managing and implementing the outcomes-focused action that they model. One purpose of the rules is to ensure that everything relevant to achieving outcomes is included within the one model. For instance, contrary to most current organizational practice, where these things exist in different pieces of documentation, risks (written in the positive) and assumptions are included within an outcomes model. This means that the model captures the whole strategy space relevant to taking action in the area concerned. The rules used for drawing outcomes models within outcomes theory can be seen as providing a technical standard through which any type of ‘this-then’ strategy space can be modeled. Such a standard way of modeling is required in order to provide the strategic and implementation backbone that can then be used to drive any outcomes organization.
Outcomes models are visualized in the form of a left-to-right visual diagram. Larger diagrams are broken up into a series of modular drill-down sub-pages to enable sufficient detail to be captured within the models, while at the same time humans are not overwhelmed with information. When visualized in appropriate apps or platforms, the reader can move up and down between a high-level helicopter view and more detailed drill-down subpages as they work with the model. The specific outcomes models used in outcomes theory are known as Duignan strategy/outcomes diagrams or DoView strategy/outcomes diagrams. Their use is fully documented in the DoView Planning and Outcomes Theory Handbook (DoViewPlaning.Org).
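The shape described here, elements with lower-level steps beneath them and modular drill-down sub-pages, can be sketched as a simple data structure. This is an illustrative sketch only, not the DoView software's actual representation; all class names, field names and the example model are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch of an outcomes model: each element is an outcome,
# a step, a risk (written in the positive) or an assumption, and any
# element may carry a drill-down sub-page holding extra detail.
@dataclass
class Element:
    title: str
    kind: str = "outcome"  # "outcome", "step", "risk" or "assumption"
    steps: List["Element"] = field(default_factory=list)  # lower-level steps feeding this element
    drill_down: Optional["Element"] = None  # modular sub-page, if any

def outline(el: Element, depth: int = 0) -> List[str]:
    """Render the model as an indented outline, high-level outcome first."""
    lines = [f"{'  ' * depth}[{el.kind}] {el.title}"]
    for step in el.steps:
        lines.extend(outline(step, depth + 1))
    return lines

# A tiny invented model: one high-level outcome, two steps and one risk.
model = Element("Staff are recruited and retained effectively", steps=[
    Element("Vacancies are advertised promptly", kind="step"),
    Element("New hires complete structured onboarding", kind="step"),
    Element("Key staff are not lost to competitors", kind="risk"),
])

print("\n".join(outline(model)))
```

The key design point the sketch illustrates is that risks and assumptions live inside the same model as outcomes and steps, rather than in separate documents, and that drill-down sub-pages let detail be added without overwhelming the top-level view.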
Relevant to the discussion here, Duignan strategy/outcomes diagrams are a standardized and well-tested way of representing the outcomes-based work charts being called for in the Microsoft report discussed above. It should be noted that while outcomes models are the specific type of model used in outcomes theory, attempting to model organizational logic has a long history, with such models going by names such as ‘theories of action,’ ‘program theories,’ ‘program logic models,’ ‘strategy maps’ and ‘intervention logics.’
Outcomes models serve both a conceptual and a practical purpose. Within outcomes theory, from a conceptual point of view, they provide an efficient way of talking about issues such as the different levels at which an organization’s outcomes reside, which outcomes are attributable to which parties, and which outcomes should be used as accountabilities. At the practical level, outcomes models function as ‘shared thinking tools’ and provide a central organizing artefact for articulating organizational outcomes, prioritizing them, identifying the steps needed to achieve them, aligning the work of the organization, developing performance measures, holding parties to account and improving performance. In an AI outcomes-driven world, outcomes theory and outcomes models can therefore be used as the basis for building the totally new types of outcomes organizations, initiatives and joint ventures that are going to be needed to exploit the potential of AI. They can be used to de-siloize organizations for an AI world. At the societal level, they can be used to de-siloize the provision of public services as AI enables this to happen.
In our new AI world, the outcomes theory approach has the advantage that outcomes models are world-centric, unlike earlier planning and implementation tools that are more organization-centric. This means that outcomes diagrams are an appropriate tool for managing, aligning, implementing and monitoring the vast multi-organization and multi-sector integrations that AI orchestration has now ushered in. In addition, the fact that outcomes theory makes extensive use of strategy visualization means that it provides a tool to manage hypercomplex systems more efficiently than planning and implementation systems that do not use a visualization approach.
Lastly and crucially, outcomes theory and the use of outcomes diagrams are not limited to thinking just in terms of human agents working within traditional organizations. They can be applied equally to outcomes systems that involve only human agents, hybrid outcomes systems involving human and AI agents working together, and outcomes systems purely managed and implemented by AI agents. It is essential to have conceptual frameworks, such as outcomes theory, and tools, such as outcomes diagrams, that can deal with the emergence of ‘digital labor’: AI agents fully integrating with human labor in the workforce.
The outcomes society
The concept of the outcomes society is directly analogous to the idea of the outcomes organization. If an organization can be viewed as an outcomes system and its planning, management and implementation conceptualized as being structured around the backbone of articulating and implementing an outcomes model, then the same concept can potentially be applied to a society as a whole.
At the moment, societies are siloed in the same way as the human organizations discussed above. The three constraints that have resulted in organizations being structured in this way apply equally to how whole societies are structured. These constraints are humans’ cognitive limitations, the siloing of specialized skill sets, and the problem of how to control and govern such specialized activity. Looked at from the point of view of outcomes theory, a society can be conceptualized as an overall outcomes system: simply put, a society is a system that exists to pursue a set of outcomes. This overall outcomes system is then made up of many smaller outcomes systems, which together make up the whole.
Conceptualizing a society as an outcomes society should not be misconstrued as some sort of call for draconian central planning. It is just a theoretical claim that it is useful to view societies as made up of a set of outcomes systems. The simple call to see societies in this way does not in itself make any claims about who gets to specify what is in the various outcomes systems that together constitute a society, nor about the level at which such outcomes systems should be specified and who should have control at various levels for developing, planning, implementing and measuring whether or not particular outcomes are being achieved. In contrast to most individual organizations, societies are typically made up of a wide variety of groups that collaborate but, depending on the particular society, are often also in conflict and competition with each other. And there are often large areas of disagreement about the overall outcomes a society should be seeking, the steps being taken in the attempt to achieve them, and whether or not particular outcomes are actually being achieved.
In addition, there are those who argue that no attempt should be made even to specify outcomes for a particular society, saying that societies just consist of individuals pursuing their own individual outcomes. However, even this view can be accommodated within the concept of an outcomes society. If you conceive of society in this individualistic way, the highest-level outcome is just something like ‘All individuals in this society are free to pursue whatever outcomes they want.’ Concrete lower-level steps being used within such a society to achieve this outcome can then be easily articulated without having to foreclose on a set of specific higher-level outcomes for the society as a whole.
The main practical implication arising out of the idea of an outcomes society is that there should be, as much as possible, transparency regarding the outcomes being sought by the various parties within the society. This then enables individuals to freely choose to support organizations, initiatives and policies which they know are aligned with the outcomes they want to progress. At the current time, much of the political and related debate about the outcomes that various groups (e.g. politicians, large corporations and stakeholder groups) are seeking is ineffectual. People attempt to communicate and elaborate on the outcomes they are seeking, and the steps they believe are needed to achieve them, in the form of largely unstructured rhetoric. This is an inefficient way to operate and does not encourage people to be fully frank with others about the outcomes they are seeking.
So a first step in moving towards an outcomes society is for increasing numbers of people to ask organizations and initiatives to transparently communicate their outcomes. As is the case with the outcomes organization, an efficient way of doing this is to have them build a Duignan (DoView) strategy/outcomes diagram. As strategy/outcomes diagrams are built by organizations and initiatives, the rest of the apparatus of DoView planning, facilitating good planning, measurement, evaluation and delegation, can be used to make sure that the activity of an organization or initiative is in alignment with what it has specified in its strategy/outcomes diagram. Without the benefit of the DoView Planning approach, checking for such alignment is currently complicated and time-consuming. So the argument is simply that the first step in moving towards an outcomes society is more clarity at all levels within the society about the outcomes that parties within it are attempting to pursue.
Using outcomes diagrams for outcomes-transparent AI
What is called the ‘AI alignment problem’ is the problem that AI may end up pursuing outcomes that are incompatible with those of its developers, users or humanity as a whole. Another AI term related to the alignment problem is ‘explainable AI.’ This seeks to discover better ways of explaining to humans the details of how AI systems work. The more explainable an AI system is, the better humans can keep track of what it is attempting to achieve and how it is trying to achieve it. Then, if necessary, we can modify its behavior to align it with the outcomes we, as humans, want it to be working towards. Obviously, there is a significant problem at the moment in that humans have yet to fully understand how AI systems work internally. However, significant progress is being made on this with the development of mechanistic interpretability, which delves deeper into how AI systems work.
Another concept in AI is an AI system’s ‘reward function.’ This is what an AI is being rewarded for in the course of its training. An AI’s reward function specifies the outcomes a particular AI is trying to achieve. A problem in AI safety arises when we specify only high-level outcomes for an AI system within its reward function. To ensure that an AI system is aligned with our outcomes, we also need to know about the lower-level steps it takes to get to its high-level outcomes. In addition, we need information about how it manages risks that may arise from the steps it takes to achieve its high-level outcomes.
When we reward an AI system for achieving its high-level outcomes, we may also be inadvertently rewarding it for other things. In particular, the lower-level steps that we are unaware it is using to successfully achieve its top-level outcomes. In such a case, we can end up inadvertently rewarding the system for bad behavior regarding the lower-level ways it has gone about seeking its higher-level goals. Some AI safety experts are worried that if this repeatedly occurs over time, AI alignment could drift so that AI systems could unknowingly be encouraged to work in ways damaging to humans. This is AI learning what we can call AI bad habits in the way it goes about achieving its outcomes.
“When we reward an AI system for achieving its high-level outcomes, we may also be inadvertently rewarding it for undesirable ways of achieving these”
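The distinction the preceding paragraphs draw, between rewarding only the high-level outcome and also inspecting the lower-level steps used to reach it, can be illustrated with a toy scoring function. This is a sketch only; the episode data, step names and penalty values are all invented for illustration and do not correspond to any real training setup:

```python
# Toy illustration (all values invented): an agent's behavior is recorded
# as episodes noting whether the high-level outcome was achieved and
# which lower-level steps were used to get there.
episodes = [
    {"outcome_achieved": True,  "steps": ["honest_reporting"]},
    {"outcome_achieved": True,  "steps": ["gaming_the_metric"]},  # a 'bad habit'
    {"outcome_achieved": False, "steps": ["honest_reporting"]},
]

def outcome_only_reward(ep):
    # Rewards only the high-level outcome: the 'bad habit' episode
    # earns exactly as much as the honest one, reinforcing the habit.
    return 1.0 if ep["outcome_achieved"] else 0.0

# Steps the overseers have declared out of bounds (hypothetical).
DISALLOWED = {"gaming_the_metric"}

def step_aware_reward(ep):
    # Also inspects the lower-level steps, penalizing disallowed ones,
    # so the same outcome earns less when reached the wrong way.
    reward = 1.0 if ep["outcome_achieved"] else 0.0
    reward -= sum(0.5 for s in ep["steps"] if s in DISALLOWED)
    return reward

print([outcome_only_reward(e) for e in episodes])  # [1.0, 1.0, 0.0]
print([step_aware_reward(e) for e in episodes])    # [1.0, 0.5, 0.0]
```

Under the outcome-only scoring, the second episode is indistinguishable from the first, which is exactly the inadvertent-reward problem described above; the step-aware version separates them only because the lower-level steps have been made visible.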
The concept of outcomes-transparent AI being introduced in this book picks up on aspects of AI alignment and explainable AI. It is concerned with coming up with easily communicable ways of identifying the high-level outcomes an AI is seeking, plus the lower-level steps it will take to achieve them. Once this is done, these can be used both to inform humans and other AI systems about what a particular system is seeking to achieve and to help govern, manage and audit AI systems. The idea of outcomes-transparent AI is based on applying the principles of outcomes theory discussed above. Outcomes theory’s outcomes diagrams could be used with AI systems in the same way that they can be used with AI outcomes-driven organizations. Looking through an outcomes theory lens, AI systems are simply viewed as just another type of outcomes system. This is similar to the way in which outcomes theory views any organization or other type of initiative that is attempting to achieve outcomes in the world.
Many AI systems advertise the overall outcome they are trying to achieve in their name. For instance, the high-level outcome of image generators is obviously to generate images; similarly, the high-level outcome of music generators is to create music. However, equally important as the stated high-level outcome of any outcomes system are the details of the lower-level steps that the system is using to achieve its high-level outcome. Therefore, outcomes diagrams are potentially valuable for use with AI systems because they are designed to spell out not only high-level outcomes but also all of the important lower-level steps that any outcomes system uses to achieve such outcomes.
Outcomes-transparent AI is, therefore, a concept in which the developers of AI systems are encouraged to make their systems’ outcomes transparent. It is worth considering whether doing this can be assisted by developing visual outcomes diagrams for AI systems. These diagrams would specify what outcomes an AI is trying to achieve, the lower-level steps it is taking to achieve these, and the risks it is attempting to manage as it does so. Adopting this practice would mean that anyone who wanted to discover an AI’s outcomes and the steps it is taking to pursue them could refer to its outcomes diagram in order to quickly find this out.
“Outcomes models are designed to spell out not only high-level outcomes but also all of the important lower-level steps that a system uses to achieve such outcomes”
Creating outcomes diagrams for AI systems is similar to the concept of requiring AI systems to have a ‘constitution’ that spells out the beliefs and values driving the AI system’s behavior. One of the reasons that outcomes theory promotes the use of visual outcomes diagrams in an organizational context is that they are a more efficient way of surfacing, communicating and working with an organization’s underlying outcomes structure. This is in contrast to the traditional approach to documenting and working with organizational outcomes and strategy, which often involves multiple pieces of text-based strategic and outcomes documentation. Because outcomes diagrams are designed to identify a general causal flow from left to right, from lower-level steps to higher-level outcomes, they are a more efficient way of surfacing the causal chain that lies behind an organization’s activity than text-based narrative descriptions alone. This makes it much easier to quickly understand what is happening with any type of outcomes system. If outcomes diagrams are experimented with in the context of AI systems, it may well turn out that they are a more efficient way of representing the underlying outcomes structure of an AI system, and that this structure can be communicated and analyzed faster than with straight text-based approaches to representing what AI systems are trying to do and how they work.
It needs to be noted that the above discussion focuses on ways of talking about and communicating what AI systems are doing. Outcomes diagrams can equally be used in AI governance, management and audit. When used in this way, they function like an AI’s constitution and are used by the AI system, and those managing it, to oversee the way in which it is going about achieving its outcomes.
Given the complexity of monitoring AI systems, it will be impossible for humans on their own to adequately monitor whether an AI system is acting in accordance with its outcomes diagram or constitution. Therefore, super-intelligent or specialized AI systems, which we can call AI watchdogs, will have to be used to help with this task. If it were found that representing AI systems’ outcomes in the form of outcomes diagrams was useful, a crucial part of an AI watchdog’s role would be monitoring other AI systems’ compliance with their stated outcomes diagrams. You can find up-to-date and detailed discussion of these issues on the author’s Outcomes Theory Substack.
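One small part of such a watchdog's compliance-monitoring task can be sketched as follows: compare a monitored system's action log against the lower-level steps declared in its outcomes diagram and flag anything undeclared. This is a deliberately simplified illustration; all step names, actions and function names are hypothetical:

```python
# Lower-level steps a hypothetical AI system has declared in its
# outcomes diagram (invented for illustration).
declared_steps = {"summarize_documents", "draft_reply", "cite_sources"}

# A log of actions the monitored system actually took (also invented).
action_log = [
    "summarize_documents",
    "draft_reply",
    "contact_external_service",  # not declared in the outcomes diagram
]

def compliance_report(log, declared):
    """Flag any logged action that was not declared as a step
    in the system's outcomes diagram."""
    undeclared = [a for a in log if a not in declared]
    return {"compliant": not undeclared, "undeclared_actions": undeclared}

report = compliance_report(action_log, declared_steps)
print(report)
# {'compliant': False, 'undeclared_actions': ['contact_external_service']}
```

A real watchdog would of course face the far harder problem of mapping raw system behavior onto the steps in a diagram at all; the sketch only shows the final comparison once that mapping exists.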


