
Why will "If Artificial General Intelligence has an okay outcome, what will be the reason?" resolve N/A?
Too many existing humans suffer death: 18%
Too many existing humans suffer other awful fates: 5%
80% of currently attainable cosmopolitan value becomes unattainable: 13%
The concept of "maximum attainable cosmopolitan value" is not meaningful: 18%
As a demonstration of treacherous turns, trolling, or lulz: 7%
Some other reason: 2%
No reason given after 30 days: 2%
It will not resolve N/A: 35%
This Yudkowsky market will resolve N/A.
/EliezerYudkowsky/if-artificial-general-intelligence
But can you predict why?
Resolves to the reason given by Yudkowsky.
This question is managed and resolved by Manifold.
People are also trading
If Artificial General Intelligence has an okay outcome, what will be the reason?
Will Eliezer's "If Artificial General Intelligence has an okay outcome, what will be the reason?" market resolve N/A?
29% chance
Will artificial general intelligence be achieved by the end of 2025?
15% chance
If Artificial General Intelligence (AGI) has an okay outcome, which of these tags will make up the reason?
If Artificial General Intelligence has a poor outcome, what will be the reason?
The probability of extremely good AGI outcomes, e.g. rapid human flourishing, will be >24% in the next survey of AI experts
57% chance
The probability of "extremely bad outcomes, e.g., human extinction" from AGI will be >5% in the next survey of AI experts
79% chance
Will we have an AGI as smart as a "generally educated human" by the end of 2025?
7% chance
@MartinRandall As I read it, it means AGI assimilates human culture to a tragic degree.
@IsaacKing I will take a brief break from luxuriating in 20% of max attainable value to realize that I'm in an impossible thought experiment set up to test my integrity, put down my ultra-chocolate, and carefully resolve this market to the correct answer to demonstrate my counterfactual integrity to the larger universe that is simulating me, thus slightly increasing my expected returns in the larger universe. And then I'll go back to the ultra-chocolate.