There are many possible ways to judge the intelligence of a potential AGI, but two thresholds matter most.
1. Is the AGI intelligent enough to destroy humanity in the pursuit of its goals?
2. Is the AGI intelligent enough to be able to steer the future into a region where no AGI will ever arise that would destroy humanity?
If an Unfriendly AGI reaches the level of intelligence described in (1) before a Friendly AGI reaches the level of intelligence described in (2), mankind perishes.
To paraphrase Glengarry Glen Ross: First prize in the race between FAI and UFAI is universal utopia. Second prize is everyone dies. (No gift certificate, not even a lousy copy of our home game.)