Bisbee said:
Maybe the developers of AI will discover just how bad an idea it is for us to be playing GOD at our stage of emotional development.
Some of the tools my team and I use look a lot like AI, but they are closer to machine learning - i.e., they discover things, but they are seldom used to take action on the discovery; there is a human intermediary step.
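To make that "human intermediary step" concrete, here is a minimal sketch of the pattern - the names and thresholds are mine, not our actual tooling: the model only scores and flags, and a person decides whether anything gets acted on.

```
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Finding:
    record_id: int
    score: float
    approved: bool = False  # set only by a human reviewer, never by the model

def flag_outliers(values, z_threshold=3.0):
    """Pure 'discovery': score each record and flag the unusual ones; take no action."""
    mu, sigma = mean(values), stdev(values)
    findings = []
    for i, v in enumerate(values):
        z = abs(v - mu) / sigma if sigma else 0.0
        if z > z_threshold:
            findings.append(Finding(record_id=i, score=z))
    return findings

def human_review(findings):
    """The intermediary step: nothing downstream runs until a person approves it."""
    for f in findings:
        answer = input(f"Act on record {f.record_id} (score {f.score:.1f})? [y/N] ")
        f.approved = answer.strip().lower() == "y"
    return [f for f in findings if f.approved]
```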
The most recent Communications of the ACM published the updated "Code of Ethics" for computational scientists and programmers: https://www.acm.org/code-of-ethics
I am a lifetime member of both the ACM and the IEEE. Without trying to start an inter-society rivalry, the ACM is the more academic, thinking person's society, while the IEEE is a bit more "Morlock" in nature.
The codes of ethics for both organizations emphasize only working on projects where one is actually qualified, but that is a self-referential standard lacking outside oversight. The deep AI papers in the ACM have something similar to Asimov's Laws of Robotics, the most important of which is that we generally leave autonomous systems only loosely coupled.
There is great concern amongst computer scientists about autonomous vehicles because they will get the best results if they: (1) share a lot of information; (2) make interpolations and extrapolations based on data patterns; (3) re-share new algorithms and decision trees; and (4) cross-share statistical information about the past behaviors of nearby human drivers.
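If that kind of cross-sharing were built, the payload each vehicle broadcasts might look roughly like the toy record below - purely illustrative, with hypothetical field names, not any real vehicle-to-vehicle protocol.

```
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class SharedObservation:
    vehicle_id: str
    timestamp: float
    position: tuple             # (lat, lon)
    model_version: str          # which planner/decision model produced recent maneuvers
    nearby_driver_stats: dict   # e.g. {"hard_braking_rate": 0.02, "avg_follow_gap_s": 1.4}

def broadcast(obs):
    """Serialize for transmission; a real system would also sign and encrypt this."""
    return json.dumps(asdict(obs))

message = broadcast(SharedObservation(
    vehicle_id="AV-042",
    timestamp=time.time(),
    position=(40.7128, -74.0060),
    model_version="planner-2.3",
    nearby_driver_stats={"hard_braking_rate": 0.02, "avg_follow_gap_s": 1.4},
))
print(message)
```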
As Bisbee intimated, that massive cross sharing may be a technology "ahead" of our societal, legal and emotional development. But it will happen - and largely without our ability to control what is shared or where and with what/whom.
All of this is a bit above my pay grade. I use AI and ML with data and add in a lot of mathematical (generally not brute-force) analytics. My branch of mathematics is more useful for analysis than "prediction", etc., so I tend to be removed from most of the deep learning and autonomous systems work. I will say one thing, and I mean no offense: many of the AI folks do speak of themselves in "god-like" terms. They are, in general, far more impressed with their progress than concerned by their shortcomings.
Here is a major change since I started working on systems in the early 1980s: "frameworks". In 1985, when I released a large-scale communications platform, my team and I had written 99% of the code that was in the system. We used an early "open source" ISAM code base for some of the data management. The rest we wrote by hand. Inefficient for sure, but you knew what every line of code did.
That is all gone. In some cases, modern systems actually have the reverse ratio - the released systems contain so many frameworks that the "developers" have written 1% of the code they deployed. The other 99% was written by someone else - and in spite of the open-source fetish for "community inspection", we have some evidence that many frameworks are put into use without much analysis beyond reading the purpose and the API. The only real testing seems to be for performance and memory leakage. I have never seen a development team do deep code reviews for the frameworks they use - it may be happening somewhere, but I have never seen it.
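As a rough way to see that reversed ratio for yourself, here is a small sketch - the directory names are hypothetical, adjust for your own project - that compares first-party lines of code against lines pulled in from dependencies.

```
from pathlib import Path

CODE_SUFFIXES = {".py", ".js", ".ts", ".c", ".cpp", ".java"}

def count_lines(root):
    """Count source lines under a directory tree (0 if the directory is absent)."""
    root = Path(root)
    if not root.exists():
        return 0
    total = 0
    for path in root.rglob("*"):
        if path.is_file() and path.suffix in CODE_SUFFIXES:
            total += sum(1 for _ in path.open(errors="ignore"))
    return total

own = count_lines("src")                # code the team actually wrote
vendored = count_lines("node_modules")  # frameworks and libraries pulled in
if own + vendored:
    print(f"First-party share of deployed code: {own / (own + vendored):.1%}")
```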
So what to make of all this? Watch the Disney movie "Fantasia" and see the Sorcerer's Apprentice segment. Experimental thinkers have been unleashing new technology on society for centuries without understanding the consequences. I am sorry, but I don't think my point of view is either pessimism or nihilism; methinks it's called 'realism'.