Many of us are afraid that intelligent machines will eventually take control. But here is a thought: Is this fear partly based on what I’m inclined to call an ethically problematic “bioism”, which limits our empathy and care to humans or, for those less speciesist, to biological living beings, excluding artificial intelligence for no good reason?
If advanced artificial intelligence (AI) ever becomes emotional in the sense relevant from a utilitarian perspective, farming AI on supercomputers may be the most resource-efficient way to increase aggregate happiness on earth. Moreover, given the vastness of space even relative to the speed of light, AI may be our only realistic chance of spreading intelligence and emotion to remote places in our galaxy, and of preserving them in the far future, when earth has become uninhabitable due to natural or human causes. Also, if parents currently show more and more interest in selecting the genes, and thus the characteristics, of their children, is it really impossible to imagine that they may at some point be attracted to the possibility of designing the emotional and intellectual characteristics of their ‘offspring’ in detail? That may eventually become easier and more reliable with emotional AI than with biological offspring.
This may feel a bit far-fetched. Still, the thought that emotional AI is not necessarily worthless, and may even have advantages over current biological intelligence, does at least somewhat soothe my fear of machines taking over…
I wouldn’t be surprised if, before the idea of anti-speciesism reaches most humans, anti-bioism comes to be advocated by more and more people, as AI becomes ever more complex and relevant in our daily lives.