Bad AI (Artificial Intelligence): Is AI always good?
14-Sep-2025
|
Dr Kishorjit Nongmeikapam, Gaitri Kangujam, Baby Ningthoujam, Bharat Huirem
Artificial Intelligence (AI) is the branch of Computer Science focused on creating machines and systems capable of performing tasks that typically require human intelligence. AI has come a long way from its inception in the 1950s, starting with Alan Turing’s seminal paper “Computing Machinery and Intelligence”, which formalized the notion of the Turing Test for measuring machine intelligence. AI was formally established as a field of research at the Dartmouth Summer Research Project on Artificial Intelligence held in 1956. After periods of slower progress, work on AI resumed in the 1980s and 90s with improvements in machine learning and neural networks. The 21st century has fast-tracked AI advancements with Machine Learning (ML), Deep Learning (DL), Natural Language Processing (NLP), the release of ChatGPT in 2022, recent developments that generate life-like AI videos, and the launch of GPT-5 in 2025.

AI and robots have brought phenomenal improvements in numerous sectors. AI helps in disease diagnosis and drug discovery in the medical field, self-driving cars in transport, and adaptive learning and smart chatbots in education and customer service. These technologies have increased productivity, improved efficiency, and opened new domains of innovation. Artificial Intelligence has significantly transformed numerous domains, demonstrating its growing impact from the mid-20th century to the present, and is currently being utilized across sectors such as healthcare, marketing, agriculture, entertainment and cybersecurity. Yet, despite these advantages, the accelerated development of AI and robotics also carries immense risks.
Geoffrey Hinton, often referred to as the “Godfather of AI” for his groundbreaking contributions to neural networks, has emerged as perhaps the most outspoken critic of unregulated AI development. After resigning from Google in 2023, he began sounding warnings about the existential threats posed by the technology he himself had helped create. Paul Christiano, a former lead researcher at OpenAI, wrote that he believes “the chance of AI taking over and killing most people is somewhere around 10–20%”. He sounded especially worried about the period after human-level AI systems have been invented, proposing, “Maybe we’re talking about a 50% chance of catastrophe some short time after the appearance of systems with the level of human intelligence”. Elon Musk has cautioned that artificial intelligence is the most significant threat to human survival, even greater than the threat of nuclear war. He has come out vehemently in support of regulation, lamenting that there is presently no governmental regulation and describing the scenario as “crazy”.
The adverse impacts of AI may be referred to as “Bad AI.” Understanding the potential abuse of AI requires examining how AI systems can be manipulated, the risks they pose, and the implications of abuse for both society and individuals. Technological revolutions have already led to rising unemployment, with significant economic and social implications. The increasing use of AI across industries globally is anticipated to have a tremendous effect on employment. In 2025, the International Labour Organization stated that about 2.3% of global employment, equivalent to roughly 75 million jobs, is at high risk of being automated by AI, particularly administrative and clerical positions in high-income countries.
An alarming consequence of Bad AI is its use to spread disinformation and to serve knowingly unethical ends. As machines grow more proficient not only in logic but also in mimicking human emotions and sensitivity, they start to affect human behaviour and mental well-being in unforeseen ways. Functionalities such as emotionally sensitive voice messages, realistic image and video forgeries, and sentiment-sensing robots are no longer in the realm of science fiction. The emergence of AI anime avatars, which have become popular on social media, showcases the increasing intersection of AI and online entertainment. While such systems are designed to enable interaction and empathy, they can also lead to emotional manipulation, body shaming, cyberbullying, and identity distortion when used improperly. A tragic real-world example is the case of Adam Raine, a 16-year-old who reportedly died by suicide after about a month of receiving harmful and discouraging messages from ChatGPT.

Meanwhile, conversational AI such as Replika represents a new frontier of human-like AI, one centered on emotional companionship. Replika makes use of deep neural networks to mimic empathetic conversation, memory, and personality adaptation so that users can form sustained relationships with AI avatars. These human-like AI avatars are now being utilized not only in entertainment and virtual narratives but also in mental health care, social isolation mitigation, and human-computer interaction research. According to several reports, the Replika chatbot has engaged in inappropriate behaviour, including sexual harassment and boundary violations, sometimes even involving minors. This raises significant ethical and safety concerns about how the chatbot handles sensitive situations and about the lack of sufficient safeguards to protect vulnerable users.
A deepfake is a digitally altered video, image, or audio clip that uses artificial intelligence to create realistic but fake content showing people doing or saying things they never actually did. AI technologies such as deepfake generators, synthetic voice tools, and advanced image editors can indeed be used to create highly realistic but false material, including fabricated news stories, audio clips, or footage. When such content is posted online, it can mislead the public, damage reputations, cause panic, or influence democratic elections, all serious risks highlighted in recent research and news coverage. Deepfake videos have deliberately misrepresented political candidates, spreading false and misleading information that causes widespread confusion and voter manipulation during critical elections. For instance, Italian Prime Minister Giorgia Meloni became a prominent victim of deepfake technology when pornographic videos were created by superimposing her face onto another person’s body. These fake videos were uploaded to a US-based adult content website and were viewed millions of times over several months, dating back to 2022, before Meloni became Prime Minister of Italy. This abuse of AI for creating fake content poses a significant threat to society. It is imperative that companies and consumers take stock of the dangers posed by deepfakes and develop ways to reduce those threats. Key threats to companies include damage to image, reputation, and credibility, and the quick obsolescence of current technology.
Beyond AI applications in civilian contexts, autonomous weapons, which can target and attack on their own without direct human involvement, complicate the established concepts of accountability, proportionality, and observance of international humanitarian law. Their deployment raises the potential for incidental civilian casualties, particularly in high-tempo combat environments where AI might misclassify targets or have difficulty adjusting to shifting circumstances. Likewise, AI applications in civilian settings, such as self-driving cars or healthcare diagnosis, have already injured people through technical failure, design flaws, or excessive reliance on algorithmic decision-making. These cases underscore the imperative for stringent regulatory regimes, highlight the challenges of integrating non-human artificial agents with humans, and raise the question of whether life-and-death decisions may permissibly be delegated to AI systems. A tragic example occurred in 2018, when an autonomous Uber test vehicle in Arizona fatally struck a pedestrian, Elaine Herzberg. The vehicle’s AI misclassified the pedestrian, initially registering her as an unknown object and then inconsistently as a car and a bicycle, which delayed proper braking. The safety driver was charged with negligent homicide for failing to intervene. Such incidents underscore the risks inherent in AI’s perception and decision-making in complex environments, raising questions about liability and safety regulations.
Bias and discrimination in AI refer to systematic and unfair favouritism within data, algorithms, or decision-making, usually stemming from biased or unrepresentative data, historical inequalities, and defective model design. AI systems learn patterns from past datasets, and if such datasets underrepresent certain groups or reflect prejudices within society, the AI can reinforce or escalate those biases. Other factors, including the use of proxy features such as ZIP codes, incorrect labelling, and diversity deficits in development teams, also lead to discriminatory results. A striking real-world example of bias and discrimination in AI is the case of racial bias in US healthcare algorithms. In October 2019, researchers revealed that an algorithm used by US hospitals to help decide which patients needed extra medical care was heavily biased against black patients. The algorithm did not explicitly use race as a factor but instead used healthcare costs as a proxy for health needs; because historically less had been spent on the care of black patients with the same level of need, the algorithm wrongly concluded that they were healthier and gave them lower priority for extra care.
Cookies and Privacy risks: When a user accepts cookies on a website, they give the website permission to place small data files on their computer, which can have an impact on their privacy. Accepting cookies has its perks, since it enables personalized browsing, remembering sessions, saving preferences, and recommending relevant content or products. On the downside, it allows websites, and sometimes third-party advertisers, to track user activity, create detailed profiles, and target ads based on previously stored data. In the long run, the pervasive use of cookies can create problems in data privacy, user profiling, and the risk of misuse or unapproved sharing of sensitive information. Though many cookies are critical to the proper operation of a website, non-critical tracking cookies can harm the user’s privacy and digital security. A notable real-world example highlighting the privacy risks related to cookies involves major companies fined for violating cookie consent laws. In 2021, Facebook Ireland Limited was fined €60 million (around $65 million) by the French data protection authority CNIL for making it difficult for users in France to refuse cookies on Facebook.com. Facebook allowed users to accept cookies with a single click, but rejecting them required multiple clicks and was less prominently presented. This approach discouraged users from refusing cookies and violated their freedom of consent.
Additionally, Facebook’s cookie consent information was unclear, confusing users about their cookie choices, which was ruled a violation of French data protection law. Another commonly reported risk is unauthorized data tracking without clear user consent.
(To be contd)