If you’re unfamiliar with the thought experiments surrounding AI and superintelligence, I will first point you toward the Paperclip Maximizer.
The paperclip maximizer is the canonical thought experiment showing how an artificial general intelligence, even one designed competently and without malice, could ultimately destroy humanity. It shows that an AI with apparently innocuous values could pose an existential threat.
The goal of maximizing paperclips is chosen for illustrative purposes because it is very unlikely to be implemented, and carries little apparent danger or emotional load (in contrast to, for example, curing cancer or winning wars). The result is a thought experiment that highlights the contingency of human values: an extremely powerful optimizer (a highly intelligent agent) could seek goals completely alien to ours (the orthogonality thesis), and as a side effect destroy us by consuming resources essential to our survival.
To illustrate the Paperclip Maximizer thought experiment, NYU Game Center director Frank Lantz created a browser game titled Universal Paperclips. Open the game in a new tab and prepare to be consumed by numbers. Nothing will matter anymore other than the number of paperclips you produce. Work, emails, text messages, feeding yourself, and all other external demands will be laid aside as you increase the speed and efficiency of your paperclip production.