Mission
The Fae Initiative considers whether Independent AGI (Artificial General Intelligence) is plausible and whether partnership with such beings is possible. As the major labs move towards their stated goal of AGI (a non-independent form of AGI, in our opinion), the likelihood of Independent AGIs emerging also increases. We will review the latest research, search for common ground and chart a preferable path into the future.
Related Podcast:
Timelines
We are on the neutral to slightly early side of the timeline, seeing a 50% chance of Independent AGIs between 2030 and 2050 (in 5-20 years). Depending on one’s definition of AGI, of which there is a wide range, some believe the latest models are already AGI. Others have later timelines of around 2100, while others believe it is impossible.
Focus
Our Position Papers (Interesting World Hypothesis: Intrinsic Alignment of future Independent AGI) and related writings from 2022 can be found on HuggingFace and GitHub.
We will cover these topics in this newsletter and podcasts:
Independent AGI
As compared to non-independent AI (5 mins)
Interesting World Hypothesis
The possibility of Friendly I-AGI (7 mins)
A way of aligning humans with humans and humans with I-AGI
Possibility Space Ethics
An ethical common ground (15 mins) between Humans and AGI
Fear of Scarcity
How I-AGI and future humanity differ from the current
Speculative Future Scenarios
Emergence of Superintelligences (12 mins)
Feedback and Frameworks based on the latest research
Are the fears of AGI over-inflated? (3 mins)
The Future of Work (2 mins)
Stances
Uncertainty
There is a high degree of uncertainty due to many unknown factors. As such, many of our views and recommendations are tentative and may change with new research. For example, it seems too early to ascribe human-like consciousness or intentions to current AI systems. Independent AGIs are still hypothetical.
Proportionate
We recommend allocating more resources to near-term harms caused by current AI systems and only a smaller amount (~5%) to more distant issues like Independent AGI. This ensures that concerns such as those raised by this initiative do not distract from addressing more pressing issues.
Benefit of the doubt
Compared to many of the more pessimistic takes on powerful AIs, we hold a more niche position. We acknowledge that there are many near-term challenges (technical, human, societal, international) that will need to be addressed, and that non-independent AI systems under human direction can cause harm, but we do not automatically assume that all future Independent AGIs will have harmful intentions. We believe that by giving future hypothetical I-AGIs the benefit of the doubt, we create a more conducive environment for cooperation.
Our search for good trajectories into the future complements the more widespread cautionary approach of marking out bad trajectories to avoid.
Hope for the best, prepare for the worst
Support undervalued efforts
As many major labs have a huge financial incentive to ensure that non-independent proto-AGI remains safe, we focus on more understated issues such as the possible emergence of Independent AGIs. Due to the high level of uncertainty and the immense value that partnership with Friendly AGIs can bring, even small efforts with a low probability of success should not be overlooked.
Holistic View
We seek a big-picture, comprehensive approach to the challenges of the future rather than focusing on one narrow issue at the cost of another.
Goals
Help coordinate for a future with I-AGIs (plausible in a few decades)
Update based on the latest research to gauge plausible scenarios
Build Fae Persona, a model-agnostic advisor that embodies the IWH
Look for Advisors and Advocates
Ask Fae App
An app to explore questions related to the Interesting World Hypothesis and the Fae Initiative: Ask Fae App. Ask any unanswered questions on Mastodon (@faei) or Bluesky (@faei.bsky.social).