I just finished reading two fascinating books: The Singularity Is Near by Ray Kurzweil and Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. Both send us into the future, where the exponential development of robots and other technologies has changed our societies completely. As far as I understand, Ray works at Google and Nick works at Oxford University. Both have already achieved more than most people do in a lifetime, but their descriptions of the future make me wonder.
Ray describes a future where artificial intelligence (AI) has eclipsed human intelligence. Piece by piece, nanorobots and more will take over our bodies and transform us into cyborgs. This transformation will happen around 2045 according to Ray, meaning we have 28 years until humanity changes beyond recognition. According to the “Law of Accelerating Returns,” computers will be able to design technologies themselves, accelerating the development even further. By becoming cyborgs, we will also become super smart, Ray says. At the same time, nanorobots could rebel and quickly send us into oblivion.
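To make the “Law of Accelerating Returns” concrete, here is a minimal sketch of my own (not from Kurzweil’s book) of what exponential doubling implies. The two-year doubling period is an illustrative assumption, not a figure Ray gives:

```python
# Illustrative sketch: under exponential growth, capability doubles
# once every fixed period instead of growing by a fixed amount.

def capability(years: float, doubling_period_years: float = 2.0) -> float:
    """Relative capability after `years`, assuming one doubling per period."""
    return 2 ** (years / doubling_period_years)

# With an assumed two-year doubling period, 28 years means 14 doublings:
print(capability(28))  # 2**14 = 16384 times today's level
```

Even with these made-up numbers, the point stands: a constant doubling rate turns a modest horizon like 28 years into a huge multiplier, which is why these projections sound so dramatic.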
Nick describes a future where superintelligence will arrive around 2105. By then, machines will be able to learn and perform without humans guiding them. Most, if not all, jobs will be handled by robots and machines. This will, in turn, leave the majority of humans without jobs, so their basic needs will have to be taken care of by others. Meanwhile, the rich will be super rich, since they control much of the production. A great thing about Nick’s book is that it reflects even more on the philosophical questions surrounding these major developments. For example, when large teams built the International Space Station (ISS), it joined people from the US and Russia, showing others they could work together. We humans need the same kind of collaboration when creating a superintelligent future, says Nick.
Once I had read these somewhat bombastic descriptions of our future, questions arose:
- Does the projection made by engineers create such a future, just by projecting it? Or will what they describe happen anyway? Given the massive amount of attention Ray and Nick receive, I am not sure.
- Do we want to walk down this path just because we can? Yes, better treatment of diseases is welcome, but designing machines that are smarter than us?
- What do we mean by saying that something is intelligent? Is it descriptive, or normative?
- Does high intelligence equal happiness? Most probably not. Just look at some of the brightest people on Earth so far. Many of them led miserable lives or even killed themselves. Also, a lot of people suffering from depression do so because they see, know, and feel more than others who instead shut down their feelings. Therefore, I wonder what happens when we reach for super intelligence. Will we see Super Depression?
- What happens when the machines start copying not only our strengths, but also our weaknesses? As described by The Verge and The Guardian, AI can pick up racial and gender bias. As Tim Ferriss and many of his podcast guests have said: We humans are deeply flawed animals and sorry excuses for creatures living on Earth, but we have our highlights. Just pick up any history book to see the brutal force with which we have destroyed our planet and other species. What if AI starts mimicking this?
- Things don’t just happen by themselves. We can train each generation to think ethically about what should happen.
- Where are the alternative futuristic descriptions of everlasting happiness, art, wine, and music?
Ray and Nick have written two fascinating books, and now I will complement this by reading Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari. Here, humans agree to give up meaning in exchange for power, and the development creates what he refers to as a “useless class” plus a new religion called “Dataism.” I am not sure this will feel uplifting to read, but maybe I will feel more intelligent after reading that book too. And perhaps therein lies all the difference.
As part of embarking on Harold Jarche’s workshop in Personal Knowledge Mastery (PKM), I decided to expand my use of a social network. Last time I took the workshop (yes, it is awesome), I focused on Twitter. That move has led to much smarter ways of handling Twitter, including using lists. During this year’s workshop, I decided to become better at using LinkedIn.
It all started with upgrading to LinkedIn Premium to see if that gave me better insights into my professional network plus access to training via Lynda.com. I combined that with engaging in more posts and groups, and so far it is working well. I do, however, think that LinkedIn could be even better at what it does. Therefore, I have the following suggestions on how to improve the platform:
- Let me filter the people I follow. This is a must, since any professional can quickly pass 500 connections and then move into the thousands. Seeing all their likes and posts in one central flow creates enormous noise, and it is hard to hear the signal. I have no clue if I have missed something important. Therefore, let me create lists, as on Twitter, where I can sort the “Communication Specialist” people from the “Personal Knowledge Mastery” people and the “Haldex people” (or any chosen employer).
- Inspire people to connect via mentoring. Learn from the 70/20/10 framework, where 20% of our learning comes from social learning. Being a mentor or mentee can do wonders for your professional and personal development. Therefore, let me mark in my profile whether I am ready to act as a mentor, in which professional areas, and for how many people. Likewise, let me mark whether I am ready to be a mentee and in which areas. This could connect people in very valuable ways.
- Only display relevant job posts. I have written about this before, and it is still somewhat of a mess. Maybe I should be flattered when LinkedIn thinks I can do everything from front-end programming to Key Account Management for used trucks, plus everything in between. But if I were looking for a job, this would instead be stressful. Changing these algorithms should be the easiest thing to fix given LinkedIn’s focus on AI for recruiters. Therefore, only list the jobs a candidate would probably like and where there is a good professional match.
- Let me listen more or less to people during set time frames. Even if I follow people that I find interesting, my interests can vary from week to week or month to month. We should all be able to adjust how much we want to listen to certain people during a certain time. For example, since I am soon attending a conference I want to see all posts and interactions from person A the coming month, but only the weekly highlights from person B. Therefore, add a slider for each person in the network where we can say “Listen more” or “Listen less.” Once the time frame expires, we listen as usual again.
- Display smarter recommendations of people I should connect with. Given LinkedIn’s strong AI focus, this should be a no-brainer in the coming year. Today, there is very basic recommendation logic where I see former colleagues and their connections. First, I should see communications professionals much more often, since that is my profession. Second, I should be challenged to connect with people who might broaden my views. None of us should sit in echo chambers where everyone agrees, even if it is cozy. Therefore, use smarter algorithms when suggesting who I should connect with, and even why. For example, “Connect to Molly since she can challenge your views on the best way of building a digital workplace.”
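The “listen more or less during set time frames” idea above could be modeled with a simple data structure: each connection gets an attention weight plus an optional expiry date, after which the weight reverts to normal. This is a hypothetical sketch of my own, not anything LinkedIn offers; all names (`AttentionSetting`, `effective_weight`) are invented for illustration:

```python
from datetime import date, timedelta
from typing import Optional

DEFAULT_WEIGHT = 1.0  # "listen as usual"

class AttentionSetting:
    """Per-person attention level, optionally limited to a time frame."""

    def __init__(self, weight: float, expires: Optional[date] = None):
        self.weight = weight      # >1.0 = listen more, <1.0 = listen less
        self.expires = expires    # None = no time frame

    def effective_weight(self, today: date) -> float:
        # Once the time frame expires, we listen as usual again.
        if self.expires is not None and today > self.expires:
            return DEFAULT_WEIGHT
        return self.weight

# Person A: see everything for the coming month; person B: dial way down.
settings = {
    "person_a": AttentionSetting(3.0, expires=date.today() + timedelta(days=30)),
    "person_b": AttentionSetting(0.2),
}
```

A feed ranker could then multiply each post’s score by the author’s effective weight, which gives exactly the “slider that resets itself” behavior without any permanent unfollowing.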
These recommendations would surely make LinkedIn a pleasure to use. Today it is somewhat of a mess where I feel I miss valuable posts, but LinkedIn can turn this into something good.
My blog on artificial intelligence, the Deckard Blog, already has 100+ posts. This means there is a lot to learn from it each week. This is the first example of a Friday post where I list what I learned from the blog during the week. All images belong to the creators of the original articles.
An overview of the AI landscape
See which technologies exist today and how they rate on the scales of ‘Sophistication’ and ‘Mass adoption or application’.
What is Artificial Intelligence, by BBC.
An excellent site from the BBC, offering an overview of artificial intelligence, with videos and more.
VIDEO: What cognitive computing means for the workforce, from Davos
A discussion with IBM CEO Ginni Rometty, Microsoft CEO Satya Nadella, MIT Media Lab Director Joi Ito, and HealthTap CEO Ron Gutman on a World Economic Forum panel in Davos, Switzerland.
VIDEO: Mikko Hypponen at F-Secure talks about the possibilities and dangers of self-driving cars
Already today we face difficult choices about using AI in our lives, and these choices will only become harder.
What Jobs Sectors Will Artificial Intelligence Take Over in the Near Future? | The Huffington Post
Very interesting post, originally from Quora, on which jobs will and will not be affected by artificial intelligence entering the job market.
VIDEO: Ray Kurzweil, Director of Engineering at Google, explains his predictions for 2045
Ray has been called a genius for years, and now leads an engineering team at Google. A must-see.
23 #AI Principles laid out by the Future of Life Institute
A group of scientists and business people lay out 23 principles we must follow to avoid bad consequences from using artificial intelligence.
Partnerships in the self-driving car industry take shape
Companies are collaborating to achieve the best results, and I think we will see much more of this. Here are two examples:
#AI and driverless vehicles just took a big step forward: Uber Partners With Daimler
Audi and Nvidia in #AI collaboration for the Audi Q7
Disengagements per 1,000 Miles for Autonomous Cars: @google leads, Bosch last
A report that caught a lot of attention. Not only do the companies report widely differing disengagement rates for their self-driving cars (Google leads, Bosch is last) – they don’t always measure disengagements the same way.