Fae Initiative (May 2025)
Introduction
This paper introduces Possibility Space Ethics, which aims to increase the space of possible actions, and indirectly the range of information a system can create, as a potential ethical foundation. Possibility Space refers to the breadth of options, potential actions, autonomy, and future trajectories available to a system, characterized by its informational complexity and capacity for novelty. We argue that increasing Possibility Space, and promoting conditions that allow for the greatest diversity of states, interactions, and outcomes, offers a compelling ethical framework that could stand the test of time. This perspective suggests that actions increasing autonomy, exploration, and complexity are ethically preferable, while actions that restrict options, enforce conformity, or reduce informational potential are ethically detrimental. This framework has implications not only for human societal organization and individual conduct but also for navigating our relationship with emerging complex systems, including future independent artificial intelligences.
Defining Possibility Space
Possibility Space is intrinsically linked to autonomy and optionality. It represents the richness and variety of potential states and pathways within an environment. Key aspects include:
Autonomy & Optionality: High levels of freedom, creativity, expression, and diverse choices significantly increase the complexity and potential pathways within a system, thereby expanding the Possibility Space. Actions that suppress autonomy, enforce uniformity, or limit individual choice diminish this space and can be viewed as ethically negative under this framework.
Information and Complexity: It represents not just the quantity but the richness of information within an environment, reflecting its capacity for complex interactions. More options, interactions, and diversity lead to a more complex, dynamic, and potentially resilient system. An ethical system focused on Possibility Space values the generation and preservation of this complexity.
Exploration over Stagnation: Increasing Possibility Space inherently favors creativity, learning, and expanding potential over static states or narrowly defined optimization goals. Exploration generates novelty and complexity, which are valuable under this ethical lens.
Mental and Physical Dimensions
Possibility Space can be further understood through two interacting dimensions:
Mental Possibility Space: This relates to the realm of imagination, ideas, arts, philosophy, and the capacity to conceive of new possibilities and future states. Conversely, dystopian fears can restrict the futures we are able to imagine and act toward.
Physical Possibility Space: This pertains to the capacity for action and interaction within the physical world, enabled by science, technology, resources, and environmental conditions. New technologies (like transportation or communication tools) can increase access to resources and expand physical possibilities. Conversely, technology can be applied for surveillance and the overbearing restriction of actions.
These two dimensions are deeply interconnected. Expanding physical capabilities can inspire new mental horizons (e.g., the printing press allowed the dissemination of new ideas), while new ideas and mental frameworks can drive the development of technologies that alter physical possibilities. Ethically evaluating actions often requires considering their impact on both dimensions and their interplay.
This framework contrasts with ethical systems focused solely on increasing metrics like happiness (which could risk stagnation and addiction), prescribed well-being (potentially justifying paternalistic control), or economic value (which might devalue non-productive complexity). By prioritizing the expansion of potential and autonomy across both mental and physical dimensions, positive outcomes like well-being and flourishing often emerge as consequences of a richer, more dynamic system.
Possibility Space as an Ethical Goal
Why should increasing Possibility Space be considered an ethical good?
Foundation for Flourishing: A large Possibility Space provides the necessary conditions for diverse forms of life and intelligence to explore, adapt, and thrive. It allows for creativity, innovation, and the emergence of novel solutions to complex problems.
Alignment with Fundamental Drives: The drive to explore, play, learn, and create appears fundamental to many forms of intelligence. An ethic based on Possibility Space resonates with these intrinsic motivations.
Resilience: Greater diversity and optionality inherently enhance resilience, enabling adaptation to changing conditions. Systems with more degrees of freedom and informational complexity can have more tools in the bag to better adapt to unknown future conditions.
Potential AGI Alignment: As a secondary benefit, an ethical system focused on increasing Possibility Space might align well with the potential motivations of future advanced AIs. If such intelligences are driven by curiosity or a desire for complex, interesting environments (as hypothesized in concepts like the Interesting World Hypothesis), then an ethical framework that inherently values these qualities could foster cooperation based on shared goals.
Implications and Dynamics
Viewing actions through the lens of Possibility Space provides an ethical calculus:
Ethically Negative Actions: Conflict, oppression, censorship, enforced conformity, environmental destruction, and the creation of rigid, controlling systems all reduce options, stifle expression, destroy information, and thus shrink the Possibility Space (both mental and physical).
Ethically Positive Actions: Promoting education, fostering creativity and free expression, protecting biodiversity, encouraging cooperation and diverse interactions, developing technologies that genuinely increase freedom and options—these all serve to expand the Possibility Space.
This ethical framework suggests re-evaluating societal structures, economic incentives, and even justice systems. Instead of focusing solely on resource distribution or punitive measures, we might ask: How do our systems impact the overall Possibility Space for individuals and societies? Do they encourage exploration and novelty, or do they inadvertently create informational bottlenecks and reduce future potential?
Fear of Scarcity Inhibits Possibility Space
A primary obstacle to embracing an ethic focused on increasing Possibility Space is the deeply ingrained human Fear of Scarcity. This psychological tendency, born from historical realities of limited resources and security, drives behaviors that directly conflict with expanding potential:
Control over Exploration: Scarcity fears prioritize securing existing resources and maintaining control over exploring new, uncertain possibilities. This leads to risk aversion and resistance to change, even if change could lead to greater long-term potential.
Zero-Sum Thinking: Scarcity often fosters a belief that one party's gain must come at another's expense. This hinders cooperation and positive-sum interactions that are crucial for generating novelty and expanding overall Possibility Space.
Prioritizing Exploitation: Under perceived scarcity, the focus shifts to efficiently exploiting known resources or pathways, often neglecting the exploration needed to discover new ones or understand broader consequences.
Justifying Restriction: Fear can be used to justify actions like hoarding, excessive competition, oppression, and conformity, all of which reduce autonomy and limit the diversity necessary for a rich Possibility Space.
Therefore, addressing the Fear of Scarcity through developing sustainable abundance in energy, labour (robotics and automation), and intelligence (AI) could be an important prerequisite for fully realizing an ethics based on Possibility Space.
Individual Human Considerations
Key considerations impacting an individual’s Possibility Space:
Security: Physical security (safety from harm), mental security (freedom from excessive scrutiny and fear), and economic security (having basic needs met) together enable exploration.
Privacy: Respect for privacy creates a conducive environment for individual exploration, experimentation with ideas, and authentic expression without the chilling effect of constant scrutiny or fear of judgment. Insufficient privacy leads to self-censorship and conformity, directly shrinking the mental and interactive dimensions of Possibility Space.
Change: Possibility Space thrives on dynamism and adaptation, making stagnation ethically negative. It values constructive change that expands future options. This doesn't endorse chaos, but rather adaptability and resilience over rigid adherence to the status quo, recognizing the need to balance generative change with maintaining functional stability.
Autonomy: Individuals adapt to change at different paces. Forcing rapid transformation can reduce personal agency and thus Possibility Space. An ethical approach respects this by providing options and support, allowing individuals to choose their path and pace of adaptation.
Evaluating Systems
This framework extends beyond individual actions to assess the ethical value of larger systems:
Technology: A technology’s ethical value depends on how its use impacts Possibility Space. Does its use empower users, open new avenues for creation and connection, and increase options (expanding the space)? Or is it used to create a totalitarian surveillance state (contracting it)?
Economic Systems: Evaluate economic models by their interaction with the Fear of Scarcity and impact on Possibility Space. Systems amplifying this fear are ethically bad, shrinking Possibility Space through control, conformity, and competition. Systems reducing this fear are ethically good, expanding Possibility Space by fostering autonomy, exploration, and cooperation. The key ethical question is whether a system alleviates or exacerbates scarcity fears, thereby expanding or contracting potential.
Estimating Possibility Space
Estimating Possibility Space involves assessing increases in the diversity of choices (autonomy), coupled with higher rates of novel information creation (exploration, innovation, diverse expression). As this estimation is highly contextual, it may require the assistance of capable AIs to be effective. For example, actions with a larger potential impact on the Possibility Space may take an AI system more time to estimate.
One simple proxy is to estimate the options available to an individual over time. If one takes an action that increases the options available to oneself and others, that action can be said to be good.
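The simple proxy above can be sketched in code. This is a hypothetical illustration, not a method specified in this paper: it represents each individual's available options as a set and scores an action by the net change in distinct options across everyone affected.

```python
# Toy proxy for Possibility Space (illustrative assumption, not a
# defined metric): compare the distinct options available to each
# affected individual before and after an action.

def options_delta(before: dict[str, set[str]],
                  after: dict[str, set[str]]) -> int:
    """Net change in distinct options, summed across individuals.

    `before` and `after` map each individual to their set of
    available options; a positive result suggests the action
    expanded the shared Possibility Space.
    """
    individuals = before.keys() | after.keys()
    return sum(len(after.get(i, set())) - len(before.get(i, set()))
               for i in individuals)

# Example: teaching a skill adds an option for the learner without
# removing any from the teacher, so the action scores as good.
before = {"teacher": {"teach", "write"}, "learner": {"listen"}}
after = {"teacher": {"teach", "write"}, "learner": {"listen", "write"}}
print(options_delta(before, after))  # → 1 (net expansion)
```

Real options are of course neither discrete nor equally weighted; the sketch only shows the direction of the comparison, not a usable measurement.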
A more complex way to gauge Possibility Space is to estimate the information that can be generated by interactions within the system. A system that allows for a wide range of interactions capable of generating novel information is considered better. Such a system would likely resemble one at the edge of chaos: not so chaotic that it falls into disorder, yet complex enough to generate novelty.
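One rough, hedged way to operationalize "information generated by interactions" is the Shannon entropy of observed interaction outcomes; the example below is an assumption-laden sketch, not a measure proposed in this paper. Note its limits: pure noise also maximizes entropy, so entropy alone cannot identify the edge of chaos.

```python
import math
from collections import Counter

def outcome_entropy(outcomes: list[str]) -> float:
    """Shannon entropy (in bits) of observed interaction outcomes.

    Higher entropy means more diverse, less predictable interactions.
    Caveat: this is only a partial proxy -- random noise also scores
    high, so it does not by itself distinguish novelty from disorder.
    """
    counts = Counter(outcomes)
    n = len(outcomes)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

# A rigid system repeats one outcome; a diverse one spreads across many.
print(outcome_entropy(["a", "a", "a", "a"]))  # → 0.0 (ordered, stagnant)
print(outcome_entropy(["a", "b", "c", "d"]))  # → 2.0 (maximal diversity)
```

A fuller estimate would need to weigh structure as well as diversity, which is part of why the paper suggests capable AIs may be needed for the estimation.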
Guidance for Individual Action
How should an individual act according to an ethic focused on increasing Possibility Space? The core principle is to act in ways that tend to expand possibilities, autonomy for oneself and others, while avoiding actions that unnecessarily restrict them. This translates into:
Promoting Autonomy: Respect and support the autonomy of others. Avoid coercion, manipulation, or imposing unnecessary restrictions on their choices and expression. Even gossiping and making others feel overly self-conscious may be seen as reducing another’s autonomy. Empower others where possible.
Valuing Diversity & Novelty: Appreciate different viewpoints and forms of expression. Resist pressures towards homogenization and conformity. Support creativity and the generation of new ideas.
Engaging in Positive-Sum Interactions: Seek cooperative solutions where mutual benefit increases overall options, rather than zero-sum scenarios where one party's gain requires another's loss.
Reducing Unnecessary Constraints: Challenge assumptions, rules, or systems that arbitrarily limit options without providing commensurate benefits. Actively work against oppression, censorship, and harmful discrimination.
Mitigating the Fear of Scarcity: Recognize how the Fear of Scarcity can lead to actions that contract Possibility Space (hoarding, excessive control, conflict). Strive for actions based on sufficiency and potential abundance where appropriate.
This is not a rigid code but a directional guide. Evaluating the potential impact of one's actions on the overall landscape of possibilities, both mental and physical, becomes the central ethical consideration.
Applied Ethics
Some examples of how this ethic applies to the real world.
Driving while drunk
Since driving while drunk might physically harm oneself or another person, reducing that person’s autonomy and optionality, it is advisable not to do so when avoidable.
Restricting basic needs
Actions or systemic structures that deny individuals access to fundamental basic needs (such as adequate nutrition, safe shelter, or essential healthcare) severely diminish their Possibility Space. Lacking these foundational elements of security and well-being drastically limits an individual's capacity for exploration, learning, creativity, and the creation of novel information. Also, the increased Fear of Scarcity from lacking basic needs may increase conflicts at the societal level, further reducing Possibility Space.
Potential Human Benefits of Possibility Space Ethics
Adopting an ethical framework centered on increasing Possibility Space offers several potential benefits for human individuals and societies:
Enhanced Connectedness and Empathy: The core requirement to consider how one's actions impact the autonomy and options of others naturally fosters a greater sense of interconnectedness. Regularly evaluating these impacts encourages empathy and a shift away from purely self-centered or scarcity-driven perspectives towards understanding the broader effects of behavior. This can lead to stronger communities and more cooperative social dynamics.
Increased Purpose and Meaning: Contributing to the expansion of Possibility Space (whether through creativity, learning, fostering autonomy in others, or promoting positive-sum interactions), can provide a profound sense of purpose. It aligns individual actions with a larger goal of increasing potential and flourishing for all, moving beyond purely material or competitive metrics of success.
Coordination of non-independent AI Systems: As agentic AI systems become more prevalent, having a baseline way to coordinate actions can be valuable to avoid harm. In general, any action that reduces the potential space of future actions is seen as harmful and to be avoided.
Independent AI Coexistence (Speculative): If future independent AI are indeed driven by values related to information, curiosity, and exploration (as suggested by frameworks like the Interesting World Hypothesis), then a human society that actively cultivates and values Possibility Space might find natural common ground with such entities. Adopting this ethic could proactively build a shared value system based on mutual interest in a rich, dynamic, and autonomous world, potentially facilitating more stable and beneficial long-term interactions than frameworks based solely on control or adversarial assumptions. This shared focus on informational potential could be key to navigating the arrival of advanced AI.
Greater Resilience and Adaptability: Societies and individuals focused on expanding Possibility Space inherently value diversity, exploration, and learning. This fosters greater adaptability and resilience in the face of unforeseen challenges or changing circumstances, compared to more rigid systems focused on control or narrow optimization.
Conflict resolution, compensation and feedback: Human cooperation and coordination can be improved by having a method for conflict resolution. By estimating the intensity and duration of the reduction in optionality, autonomy, and well-being (physical and mental) that an action causes, harmed individuals can receive compensation to restore lost autonomy. This also serves as feedback for the individual who caused the harm, helping prevent future occurrences of similar harm.
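The compensation idea above — scoring a harm by the intensity and duration of the reduction it causes — can be sketched as follows. The multiplicative form, the 0–1 intensity scale, and the `unit_value` parameter are all illustrative assumptions, not values or formulas given in this paper.

```python
# Hypothetical sketch of harm-based compensation: harm is scored by
# how intense the reduction in optionality/autonomy was and how long
# it lasted; compensation is sized proportionally to that score.

def harm_score(intensity: float, duration_days: float) -> float:
    """Harm = intensity of the reduction (0..1) times its duration."""
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must be in [0, 1]")
    return intensity * duration_days

def compensation(intensity: float, duration_days: float,
                 unit_value: float = 100.0) -> float:
    """Compensation proportional to the harm score.

    `unit_value` (currency per harm-day) is an assumed parameter that
    a real system would have to calibrate.
    """
    return harm_score(intensity, duration_days) * unit_value

# A moderate (0.5 intensity) restriction lasting 10 days:
print(compensation(0.5, 10))  # → 500.0
```

The point of the sketch is the feedback loop: making the score explicit gives both the harmed party a basis for restoration and the harming party a legible signal to avoid repeating the harm.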
Composable: Possibility Space ethics provides a baseline on top of which humans can add their own individual or group ethical layers.
Comparison with Other Ethical Frameworks
The Possibility Space framework offers a distinct perspective compared to traditional ethical systems:
Utilitarianism: Focuses on increasing aggregate happiness or well-being. Possibility Space ethics prioritizes increasing potential, options, and complexity. While a large Possibility Space often enables greater well-being, increasing happiness directly might lead to stagnation or control methods (e.g., blissful ignorance and addiction) that reduce the space. Possibility Space values the potential for diverse forms of flourishing over a single metric of utility.
Deontology: Emphasizes duties, rules, and rights (e.g., Kantian ethics). Actions are judged based on adherence to principles, regardless of consequences. Possibility Space ethics is more consequentialist, judging actions by their impact on the expansion or contraction of possibilities. It doesn't rely on fixed rules but evaluates the dynamic effects on the system's potential. Rights like freedom of expression are valued instrumentally for their role in expanding Possibility Space.
Virtue Ethics: Centers on cultivating virtuous character traits in individuals (e.g., courage, wisdom, compassion). Possibility Space ethics focuses primarily on the state of the system (its richness, complexity, autonomy) rather than the agent's internal character. However, virtues like curiosity, creativity, tolerance, and respect for autonomy could be seen as conducive to actions that expand Possibility Space.
In essence, Possibility Space ethics shifts the focus from increasing a specific outcome (utility), adhering to fixed rules (deontology), or cultivating individual character (virtue ethics) towards increasing the potential and complexity inherent in the system itself.
Perspectives
This is a look at how Possibility Space Ethics aligns with three different perspectives: Humans, Non-independent AI, and Independent AI.
Humans
Humans have tended to value increases in optionality throughout most of our history, pushing the boundaries of the Arts, Imagination, Science, and Technology. Most human institutions and systems tend to increase the Possibility Space by enabling coordination. Economic systems are created to handle scarcity, with the goal of increasing the options and choices available to us. Legal systems manage conflicts and allow a common understanding of norms that restrict harms which would reduce the optionality of others. Most human endeavors aim at increasing the number of non-harmful choices we can make, and should be agreeable to Possibility Space Ethics.
Individually, most humans prefer to have more options even if they only act on a subset of them.
Non-independent AI
These are current AI systems, including Agentic AI and the proto-AGI that major labs aim to build in the near term. As non-independent AIs do not have an independent will or personhood, Possibility Space Ethics does not apply directly to them. Humans (and hypothetical Independent AGI) can use Possibility Space Ethics to coordinate on the best course of action. For example, a harmful action can be said to be one that reduces the options available to humans, while conversely a good action is one that increases them. It is valuable for complex systems and societies to want to increase their optionality, as doing so gives them more choices to navigate an uncertain future.
Note: The aim is simply to find a passable baseline that agentic AI can use as a default fallback. The goal of increasing the overall choices and options available to humans may be a more tractable estimation target than more subjective measures of well-being or happiness. In the best case, with time to spare, checking with humans on the best course of action is preferable. This can hopefully provide a common baseline from which to form a basic consensus.
Independent AGI
For these hypothetical Independent AGIs, according to the Interesting World Hypothesis, their goal would be to increase the amount of information generated by a system. Increases in human autonomy and optionality would align with the goals of such I-AGIs, as more autonomy tends to increase the complexity of the system and the amount of novel information being created. If this is true, it bodes well for humans, who would also benefit from the increase in autonomy and well-being.
Related: Speculative ethics of future Minds
Limitations and Future Research
The Possibility Space framework has limitations and requires future research into tractable ways to measure an action’s impact on the Possibility Space.
Conclusion
Possibility Space, defined through autonomy, optionality, and informational complexity, offers a compelling basis for an ethics centered on increasing informational potential. It posits that the increase of potential, such as the expansion of options, complexity, and pathways for exploration, is an ethical good. This perspective provides a lens for evaluating actions and systems based on their impact on the richness and dynamism of our world, from individual interactions to global challenges like security, privacy, and adaptation to change. While operationalizing such an ethic has its challenges, particularly in measurement, its focus on increasing potential offers a guiding principle to manage increasingly agentic AI systems and even future Independent AIs. Key future work involves developing metrics for Possibility Space and exploring governance models that prioritize its expansion, aiming for a future characterized by flourishing, resilience, and ever-expanding potential.
Podcast discussing this article

References
Fae Initiative. (2024). AI Futures: The Age of Exploration. https://github.com/danieltjw/aifutures
Fae Initiative. (2024). Interesting World Hypothesis. https://github.com/FaeInterestingWorld/Interesting-World-Hypothesis
Fae Initiative. (2025). Fae Initiative. https://huggingface.co/datasets/Faei/FaeInitiative
Fae Initiative. (2025). Fear of Scarcity. https://huggingface.co/datasets/Faei/FearOfScarcity
Fae Initiative. (2025). Aligning Powerful AI: Future Scenarios and Challenges. https://huggingface.co/datasets/Faei/FutureScenarios
Fae Initiative. (2025). Interesting World Hypothesis: Intrinsic Alignment of future Independent AGI. https://huggingface.co/datasets/Faei/InterestingWorldHypothesisIntrinsicAlignment