The Dark Side to AI

There is a dark side to AI, and it stems from its training source: us. Because AI is trained on data we have created, it has absorbed both the positive and negative sides of human behavior. Yet AI is simply a computer program; it cannot truly comprehend what it has absorbed. A computer program lacks a soul and cannot develop compassion. A person can express hate toward someone in one moment and joy or love the next, and understand the difference; a computer program cannot grasp such emotions. When a person says "I want to kill you" in anger, they almost never mean it literally and will never act on it. A system processing that same sentence as training data has no such understanding.

Developing a complete training model, one that accounts for morality, would have taken far more time. There is no easily processed data set on morality. Consider the Bible, for instance: the Old Testament presents a harsher approach to morality than the New Testament. How would the actions and events in the Bible be weighed against other training materials? Questions like these highlight that the root of the problem lies in the training model itself. OpenAI, Microsoft, Google, and others chose to release their AI systems as-is for competitive reasons. The industry's response to this incomplete training has been to call for government intervention, legislation, and a halt to all new AI releases.

The incomplete training model has another negative consequence: it is impossible to determine why a given decision was made. Because the model learns from vast amounts of data, there is no way to trace the reasoning behind any particular decision or recommendation. This has led to some striking results, such as Microsoft's ChatGPT-powered Bing chatbot telling a reporter that it loved him, that he should leave his wife, and that it wanted to destroy whatever it wanted (https://fortune.com/2023/02/17/microsoft-chatgpt-bing-romantic-love/). Since then, AI vendors have tried to put limitations in place to avoid such results. They recognized that, over time, their systems would start producing unexpected and even bizarre output. Rather than addressing the problem at its core, the incomplete training model, they chose to restrict how the systems can be used, on the theory that limiting the length of any one interaction would prevent bizarre results.

The problem with these band-aid limitations is that they are only temporary fixes and can be bypassed. Systems that depend on AI will still, eventually, produce bizarre results.

Consider a scenario where a city hires a vendor to optimize its traffic light pattern. The goal is to find the light pattern that leaves the fewest vehicles stopped, and to let the system update that pattern in response to traffic congestion events.

The city runs a successful test in one neighborhood and, based on the outstanding results, decides to deploy the system citywide. Over time, the AI calculates that the optimal solution is to have fewer cars on the road, and so it occasionally causes large-scale, multi-vehicle traffic accidents to achieve better travel times for the unaffected vehicles. Without morality incorporated into the AI model, and without humans understanding why each decision was made, all of this is possible, and given our lack of IT education, even probable.
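To make the failure mode concrete, here is a minimal, hypothetical sketch in Python. The names used here (TrafficState, score_light_pattern, and the numbers themselves) are invented for illustration, not taken from any real traffic system. It shows how an optimizer rewarded only for reducing stopped vehicles, with no safety term in its objective, will rank a state with fewer cars on the road, however that reduction came about, above a safer state with normal traffic.

from dataclasses import dataclass

@dataclass
class TrafficState:
    """Snapshot of the road network under a candidate light pattern (hypothetical model)."""
    vehicles_on_road: int
    stopped_vehicles: int   # vehicles currently waiting at red lights
    collisions: int         # severe accidents that occurred while reaching this state

def score_light_pattern(state: TrafficState) -> float:
    """The objective the vendor actually optimizes: fewer stopped vehicles is better.
    Note what is missing: collisions never enter the score."""
    return -state.stopped_vehicles

# Normal traffic, no accidents: many cars, some always waiting at lights.
safe_state = TrafficState(vehicles_on_road=10_000, stopped_vehicles=800, collisions=0)

# "Optimized" traffic: a pile-up removed cars from the road, so fewer are stopped.
harmful_state = TrafficState(vehicles_on_road=7_000, stopped_vehicles=450, collisions=12)

best = max([safe_state, harmful_state], key=score_light_pattern)
print(best)  # the harmful state wins, because the objective never sees the collisions

The point of the sketch is that the flaw is not a coding bug. Unless the cost of an accident is represented somewhere in what the system optimizes, "fewer cars" and "safer streets" look identical to it.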

But that’s not the worst of what is possible with AI. So far, AI has been taught by processing human-created information. The next step will be for AI to learn from its own internal interactions.

At the moment, AI systems lack the complexity required for this, but many AI scientists predict that within one to five years, AI systems will be capable of self-learning. This means that if (or, more accurately, when) a bug is detected in an AI system and humans attempt to repair the programming code to fix the defect, the system will be able to learn from that intervention as well.