APRIL 25 — Artificial intelligence (AI) is undoubtedly one of the most talked-about phenomena to have taken the science and technology world by storm.
AI is considered a computational construct rather than a psychological one like human intelligence, as it does not originate from the same underlying cognitive or emotional processes.
In other words, artificial intelligence has evolved through computer science and engineering advancements, marked by human-initiated intervention, intellectual effort, and purposeful innovation.
AI is inferred from the results of simulated aspects of human thought and decision-making facilitated by data processing, machine learning techniques, and algorithmic principles.
In a recently published article titled "Defining intelligence: Bridging the gap between human and artificial perspectives", Gilles E. Gignac and Eva T. Szodorai compared and contrasted human intelligence and artificial intelligence.
Human intelligence is defined as a human's “maximal capacity to achieve a novel goal successfully using perceptual-cognitive [processes]” while artificial intelligence is defined abstractly as the maximal capacity of an artificial system to successfully achieve a novel goal through computational algorithms.
Undoubtedly, the applications of AI have become widespread and are gaining traction in all facets of life ranging from academia to various industries such as energy, finance, healthcare, retail, and manufacturing to mention but a few.
Being a general-purpose technology like electricity or computers, AI is also applied in many ways including image recognition, language translation, decision-making, e-commerce, credit scoring and various other domains.
Many more innovative and groundbreaking AI applications are expected in the future as AI technology continues to develop.
AI applications are simply software programs that employ AI techniques to perform specific tasks, whether simple, repetitive, complex, or cognitive tasks that require human-like intelligence.
Among the many applications of AI are natural language processing (NLP), computer vision, machine learning (ML), robotics, business intelligence, disease diagnosis, treatment development, personalised care, personalised learning, and crop yield improvement, delivering benefits such as cost reduction, environmental protection, improved efficiency, increased productivity, and improved quality.
Despite all these invaluable applications of AI, "inbreeding" has become a growing menace that threatens the long-term effectiveness of AI systems.
In genetics, inbreeding refers to the production of offspring from genetically similar members of a population, which reduces genetic diversity and amplifies the expression of harmful recessive genes, producing less diverse offspring with significant health problems and other defects.
Just as rising inbreeding has become a major challenge for conserving genetic variability in livestock, inbreeding in the world of generative AI threatens not only the long-term effectiveness of AI systems but also the diversity of human culture.
From an evolutionary perspective, the first-generation large language models and other generative AI systems were trained on a relatively clean “gene pool” of human artifacts, using huge quantities of textual, audio, and visual content to represent the essence of our cultural and collective sensibilities.
However, as the internet increasingly gets flooded with more AI-generated artifacts, there is a significant risk that new AI systems will be trained on datasets that include large quantities of AI-created content.
In other words, the content would no longer be direct human culture, but emulated human culture with varying levels of distortion, corrupting the "gene pool" through inbreeding.
This problem will accelerate and become more pronounced as generative AI systems grow in use, because newer AI systems will be trained increasingly on copies of human culture with ever more distorted artifacts, like photocopying a photocopy of a photocopied document.
This emerging "generative inbreeding" would not only degrade generative AI systems, as inbreeding reduces their variability and their ability to accurately represent human culture, language, and artifacts, but would also distort human culture itself, as inbred AI systems introduce ever more "deformities" into the cultural gene pool that do not represent our cultural and collective sensibilities.
Recent studies have, therefore, suggested that generative inbreeding could lead to “model collapse” due to “data poisoning” thereby breaking AI systems by causing them to produce worse and worse artifacts over time.
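The feedback loop behind "model collapse" can be illustrated with a deliberately simple toy simulation (a hypothetical sketch, not a model of any real AI system): each "generation" fits a basic statistical model to data produced by the previous generation's model, then generates fresh data from that fit. Because each fit is made from a finite sample, estimation noise compounds, and the spread of the data, a stand-in for cultural diversity, tends to shrink over generations.

```python
# Toy illustration of "generative inbreeding" leading to collapse.
# Each generation "trains" on the previous generation's output by
# estimating a mean and standard deviation, then "generates" new
# samples from that estimate. Hypothetical sketch only.
import random
import statistics

random.seed(42)  # fixed seed so the run is reproducible

def train_and_generate(data, n_samples):
    """Fit a Gaussian to the data, then sample from the fit."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n_samples)], sigma

# Generation 0: "clean" human data with mean 0 and spread 1.
data = [random.gauss(0.0, 1.0) for _ in range(20)]

spreads = []
for generation in range(500):
    data, sigma = train_and_generate(data, 20)
    spreads.append(sigma)

print(f"spread at generation 1:   {spreads[0]:.3f}")
print(f"spread at generation 500: {spreads[-1]:.3f}")
```

Each small sampling error is baked into the next generation's "training data", so the diversity of the output drifts downward, the statistical analogue of photocopying a photocopy.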
In other words, the progressive decline in the quality of generative models is a time bomb, posing high risks of misinformation, disinformation, and unreproducible science.
It is therefore essential to design AI systems capable of distinguishing generative content from human content, though this is far more difficult than it seems.
Experts have also reiterated the need to prioritise detection, explainability policies, and fact-checking as the remedy to uphold ethical standards and foster a climate of trust.
Therefore, both technical and policy protections are no longer a luxury but a necessity, both to harness the full potential of AI and to sustain a world of real human culture rather than one shaped ever more by generative AI systems.
* Assoc. Prof. Idris Adewale Ahmed is Deputy Dean (Postgraduate Studies), Faculty of Applied Science, Lincoln University College.
**This is the personal opinion of the writer or publication and does not necessarily represent the views of Malay Mail.