I just finished reading two fascinating books: The Singularity Is Near by Ray Kurzweil and Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. Both take us into a future where the exponential development of robots and related technologies has changed our societies completely. As far as I understand, Ray works at Google and Nick works at Oxford University. Both have already achieved more than most people do in a lifetime, but their descriptions of the future make me wonder.
Ray describes a future where artificial intelligence (AI) has eclipsed human intelligence. Piece by piece, nanorobots and other technologies will take over our bodies and transform us into cyborgs. According to Ray, this transformation will happen around 2045, meaning we have 28 years until humanity changes beyond recognition. According to the “Law of Accelerating Returns,” computers will be able to design technologies themselves, accelerating development even further. By becoming cyborgs, we will also become super smart, Ray says. At the same time, nanorobots could rebel and quickly send us into oblivion.
Nick describes a future where superintelligence will arrive around 2105. By then, machines will be able to learn and perform without needing humans to guide them. Most, if not all, jobs will be handled by robots and machines. This will, in turn, leave the majority of humans without jobs, so their basic needs will have to be provided for by others. Meanwhile, the rich will be super rich, since they control much of the production. A great thing about Nick’s book is that it reflects even more on the philosophical questions surrounding these major developments. For example, building the International Space Station (ISS) brought together people from the US and Russia, showing others they could work together. We humans need that kind of collaboration when creating a superintelligent future, says Nick.
Once I had read these somewhat bombastic descriptions of our future, questions arose.
Ray and Nick have written two fascinating books, and I will now complement them by reading Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari. Here, humans agree to give up meaning in exchange for power, and the development creates what he refers to as a “useless class” as well as a new religion called “Dataism.” I am not sure it will be uplifting to read, but maybe I will feel more intelligent after reading that book too. And perhaps therein lies all the difference.
Privately: father, husband, vegetarian, and reader of Dostoyevsky. Professionally: Communications Manager at www.haldex.com