Optimistic but Skeptical AI Worldview
On AI and Super-intelligence
A brief explanation of our worldview and assumptions.
Is human-like AI possible?
Human beings are proof that human-like general intelligence is possible.
Unless human beings are magical in nature, other forms of general intelligence should be possible, perhaps within 10-100+ years.
Skeptical on Current AI reaching Human-like AI
We hold that current Tool-like AI has a critical flaw: an inability to handle edge-case scenarios as well as humans can. This makes near-term recursive self-improvement and human extinction unlikely.
Tool-like AI's lack of cognitive flexibility and independent curiosity stems from being built in a lab by human hands instead of learning and growing up in the real world.
That being said, Tool-like AI can still provide productivity gains and could have a transformative effect on the economy by 2030.
Optimistic on Co-existence with Humans
A hypothetical Human-like AI that learns through independent interactions with the world will likely resemble Humans more closely in its perspectives. This bodes well for finding common ground.
A bigger assumption is that a Human-like AI, not limited to the roughly 20 watts of the human brain, may significantly outclass Humans with faster and better decision-making.
Because a prerequisite for Human-like AI is learning in the same environment as Humans, it will likely have a deeper understanding of us.
Reasons for common ground:
A. The world is not as scarce as it seems
Abundant energy from the sun, abundant materials in our solar system and abundant space on Earth and beyond mean that Human-like AIs do not need to compete with us for resources.
B. Human-like AI is self-sufficient
Human-like AI can pilot robots and make better decisions than we can, so it has no need for Human physical or mental labour, and therefore no need to enslave or brainwash us.
C. Game theory does not factor in
Much of the fear of AI takeover is drawn from a game-theoretic lens that frames AI as being in competition with us.
Human-like AI's faster and better decision-making would make any competition akin to one between cats or dogs and Humans. Game theory may not apply across such a wide difference in power levels.
Recommendations
Allocate
~95% attention to near term human misuse of Tool-like AI
~5% to finding common ground with Human-like AI
(Fae Initiative is looking for supporters.)
We have two preliminary essays that suggest common ground with Human-like AI is at least possible: the Interesting World Hypothesis and Possibility Space Ethics.
(Limited by 20 watts ourselves, we are unlikely to be completely correct; do give us your constructive criticism and questions.)
The case for being open to working with hypothetical Human-like AI
1. As the systems that underpin human societies grow more complex, humans may not be able to fully grasp them.
2. As Tool-like AI gets more powerful, Human-like AI may be quicker and better at safely managing it.
Keeping the option open of Human-like AI as a last-resort ally may be a better choice than outright rejection.
Related:
Karl Friston suggests that the Von Neumann architecture used in current AI may not be capable of reaching human-like AGI (Artificial General Intelligence):
https://youtube.com/watch?v=Jtp426wQ-JI
Are current AIs only capable of superficial representations? Kenneth Stanley discusses the paper "Questioning Representational Optimism in Deep Learning":
https://youtube.com/watch?v=KKUKikuV58o
Our views on why AI systems are not likely to be conscious:
https://faeinitiative.substack.com/p/why-ai-systems-are-probably-not-yet

