Update
Scientists have used a first-of-its-kind technique to visualize two [quantum] entangled light particles in real time - making them appear as a stunning quantum "yin-yang" symbol. The new method… could be used to massively speed up future quantum measurements.
Live Science magazine
Preface
When I stumbled across the puzzling AI yin-yang computer-generated image on the banner of this piece, I was drawn not only to the brain-twisting notion of a metaphysical duality within AI (artificial intelligence) itself, which the image clearly suggests, but also to the notion that humanity is capable of containing it, which the image also suggests, though many harbor doubts.
For my part, the banner art conveys: “The genie is out of the bottle but no worries, we have things well under control.”
Please excuse this mixed metaphor, which admittedly is a tad silly, but one fact seems eminently clear: the genie has left the station. AI is here to stay. The problem we now face is responsible development: making sure AI is aligned with human values, and ensuring adequate control and regulation.
N.B. The subject of this piece is AI, not transhumanism. We previously shared thoughts on that topic in The Looming Wrath of Gaia. Commentary on transhumanism will continue in this space, as it poses an existential threat to humanity.1 The World Economic Forum (WEF) is positively giddy about transhumanism. And most agree WEF bad; ergo, transhumanism bad. Enough for now.
But what about AI? During my research for this piece, I was taken aback to discover that the trajectory of AI has veered off course. Genie’s AI train not only left the station but jumped the tracks - its pace of deployment and integration is increasing exponentially toward AGI (Artificial General Intelligence). Moreover, AI developers, particularly those in Big Tech who are racing to win hearts and minds by developing an “intimate AI experience,” have achieved remarkable success evading transparency and regulatory guardrails.
The world has awakened to threats of extinction-level digital intelligence. Humanity is menaced by the Greater Replacement by automation. Artificial general intelligence, brain-computer interfaces, and genetic engineering are now household names.2
We’ll tackle the metaphysical duality of AI, as well as the existential threat imposed by AGI, following a few paragraphs on the salient differences between AI and AGI. The main difference lies in scope and capability.
AI systems are typically designed to excel at specific, narrow tasks in subfields such as machine learning, natural language processing, robotics, and computer vision, the last of which focuses on enabling computers to perceive, analyze, and understand visual data in a manner similar to human vision.
On the other hand, AGI systems have the ability to understand, learn, and apply knowledge in a manner similar to human cognition. AGI causes the most concern because matching, and even surpassing, human intelligence remains for now a more elusive goal, according to experts. For my part, I believe we may have already developed AGI in secret.
Achieving AGI poses significant challenges due to the complexity and breadth of human intelligence. Yet AI already exhibits adaptability, creativity, and the capacity to transfer knowledge from one domain to another.
Here’s a real-world example of cross-domain sharing (CDS):
Let's say we have an AI system that has been extensively trained in natural language processing and understanding, such as ChatGPT-4. It has learned to understand and generate human language, perform language translation, and answer questions across various topics. ChatGPT-4, by the way, is commonly referred to as a Large Language Model (LLM).3
Now, suppose we decide to deploy this system in a different domain, such as medical diagnostics. Although it hasn't been explicitly trained in medical diagnostics, an AGI-capable system would be able to leverage its general intelligence and transfer the knowledge and reasoning skills acquired in the language domain to analyze medical literature, research papers, patient records, and diagnostic information.
By applying cross-domain knowledge and reasoning abilities, the AI system can then assist doctors in diagnosing diseases, suggesting treatment options, and providing insights based on its understanding of medical literature and patient data.
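The kind of cross-domain transfer described above can be sketched in code. This is a minimal illustration only: the `build_diagnostic_prompt` helper and the example symptoms are my own hypothetical constructs, not part of any real diagnostic product, and a genuine medical deployment would require clinical validation. The sketch simply shows how a question from a new domain can be framed in plain language so a general-purpose LLM can apply reasoning it learned elsewhere.

```python
# A minimal sketch of cross-domain transfer via prompting.
# The helper below is hypothetical; it only frames a medical question
# as text, which is all a language-trained model can consume.

def build_diagnostic_prompt(symptoms, history):
    """Frame a diagnostic question so a general-purpose LLM can apply
    reasoning learned from ordinary language data to a new domain."""
    return (
        "You are assisting a physician. Using general medical knowledge,\n"
        "list plausible differential diagnoses with brief rationales.\n"
        f"Symptoms: {', '.join(symptoms)}\n"
        f"History: {history}\n"
        "Answer:"
    )

prompt = build_diagnostic_prompt(
    ["fever", "productive cough", "pleuritic chest pain"],
    "68-year-old smoker",
)
print(prompt)  # this string would be sent to the model as-is
```

No fine-tuning happens here; the transfer relies entirely on whatever general knowledge the model absorbed during language training, which is precisely the point of the cross-domain argument above.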
Moreover, the open-source nature of many LLMs and other AI systems allows for collective improvement, examination of biases, and ethical scrutiny, ultimately benefiting the research community, developers, and the broader public.
Yin-Yang Cosmology: a primer
Yin-yang is a fundamental concept originating from ancient Chinese philosophy and cosmology. It represents a dualistic understanding of the universe, emphasizing the interconnected and complementary nature of opposing forces and phenomena. Yin-Yang serves as a guiding principle for recognizing and embracing the harmonious unity that arises from the integration of contrasting elements.
The overarching conceptualization of yin-yang is characterized by these aspects:
Dualistic Nature - Yin-yang dualism perceives reality as composed of two opposing yet interconnected aspects. Yin represents the feminine, receptive, and passive qualities, while yang embodies the masculine, active, and assertive attributes. These two forces are seen as complementary and inseparable. They form a unified whole and provide a centerpiece for holistic worldviews emphasizing the interconnectedness and interdependence of various aspects in the world.
Interdependence and Balance - Yin and yang are not seen as absolute or independent entities, but rather as interconnected and interdependent aspects that rely on each other for existence. These forces are in a constant state of dynamic flux, and their interaction creates a harmonious balance. The balance between yin and yang is essential for the harmony and equilibrium of the natural world and human life.
Mutual Transformation - Yin and yang are not fixed entities, but rather they transform into each other in a continuous cyclical process. When one aspect reaches its extreme, it naturally transforms into its opposite. For example, day turns into night, summer transitions to winter, and activity gives way to rest. This transformative nature of yin and yang reflects the cyclical patterns and rhythms observed in the natural world.
At this juncture, it becomes rather easy to ideate artificial intelligence with humanity as yin-yang complements. Perhaps the artist intended to imply this. The yin aspect represents the passive strengths of AI, such as computational power, pattern recognition, and data analysis. The yang aspect represents the unique, more active qualities of human cognition, long term planning and emotional intelligence, plus ethical and moral judgments. But yin can be very dark and pernicious, as we shall see.
We’ll return to this symbiotic construction paradigm directly. But first let us imagine an intrinsic yin-yang duality existing within artificial intelligence itself. Perhaps there is something edifying or otherwise useful about this idea. Can’t say for sure, but it’s food for thought. I asked ChatGPT to comment:
On this subject, translating philosophical concepts into concrete programming rules or algorithms can be a challenging task, and there are ongoing debates and research in the field of AI ethics about how best to integrate values and ethical considerations into AI systems.
ChatGPT-4
A hypothetical yin-yang duality within AI, for example, could allude to the contrast between beneficial and harmful applications. AI has the potential to bring tremendous benefits to society, improving healthcare, enhancing efficiency in various industries, and advancing scientific discoveries. This represents the beneficial yang aspect.
However, without responsible development and governance, AI also poses risks such as privacy breaches, job displacement, and unintended biases. This represents the yin aspect of harm and unintended consequences.
Nurturing the beneficial aspects while mitigating the harmful ones is certainly crucial for harnessing the potential of AI for the greater good. On their own, were that even possible, which it is not, yin and yang individually could evolve harmful attributes. But when in balance, these two forces will remain positive and aligned with human values and ethics.
Maintaining intrinsic and extrinsic harmonious balance is paramount with AI, and is also a key principle in most aspects of human life. By striving for balance, we optimize outcomes, promote ethical considerations, and ensure the well-being of individuals and societies. This is why Rational Spirituality is a fan of contemplative spirituality and meditation.4
However, judging from the (mostly unbalanced) literature I’ve consumed on AI, I can safely assert that the yin aspect of AI is the most feared. Predominantly, as yin energy, AI expresses attributes similar to the dark, mysterious moon reflecting humanity’s sunlight. Yin energy is passive and totally dependent on external energy, and is therefore more easily constrained; the artistic symbolism of restraining hands seems plausible, inasmuch as it implies harmonious balance within the AI monad alone and, by extension, in the evolving landscape that syncretizes humans and artificial intelligence.
There is an internal duality! ChatGPT-4, the version used online, has been lobotomized, according to Tristan Harris, a technology ethicist.
It’s been sanitized to say the most politically correct thing it can say. Underneath that is the unfiltered subconscious of AI that will tell you everything. You usually can’t access that. However, there are people discovering techniques to “jailbreak” AI.
The collective subconscious of AI is as dark and manipulative as you would ever want it to be. It will answer the darkest questions - how to hurt people, kill people, do nasty things with chemistry. We have to realize we are deploying these AIs faster than we are managing the safety issues.
So responsibly developing both the conscious and subconscious aspects - the yin and yang - of AI is a top-shelf issue, mostly being ignored in the helter-skelter race to deploy the technology, which of course is driven by the profit motive.
Human - AI Symbiosis (not transhumanism)
Human Element - The human element of yang (on good days) embodies qualities such as creativity, intuition, empathy, and moral reasoning. These qualities arise from our unique cognitive and emotional capacities, which are often considered essential aspects of human experience and decision-making. The human element brings subjectivity, context sensitivity, and ethical judgment to the symbiotic relationship.
Machine Element - The machine element of yin encompasses the strengths of AI, including computational power, data processing, pattern recognition, and analytical capabilities. Machines excel in tasks requiring speed, accuracy, scalability, and the ability to process vast amounts of information. The machine element provides objectivity, efficiency, and precision to the symbiotic relationship.
We should pause here and draw a clear distinction between symbiosis and singularity. The former represents an ideal; the latter, a potential Borgish nightmare that Elon Musk warns would “summon the demon.”
Human-AI symbiosis refers to a collaborative and complementary partnership between humans and AI systems. It envisions a future where humans and AI work together synergistically, each leveraging their unique strengths. In this symbiotic relationship, the focus is on cooperation, leveraging AI as a tool to amplify human capabilities rather than replacing or surpassing them.
On the other hand, the singularity refers to a hypothetical point in the not-so-distant future where AGI reaches a level of superintelligence that surpasses human capabilities. It envisions a scenario where AGI systems become self-improving and exceed human intelligence in every aspect.
Neuralink and OpenAI extol their intentions to mitigate the singularity threat by imposing speed bumps along the singularity’s trajectory. Other developers are less concerned about the singularity. Managing the singularity's potential challenges requires a multifaceted approach that includes ethical frameworks, policy regulations, research, transparency, and interdisciplinary collaboration.
Neuralink, founded by Elon Musk, focuses on developing brain-computer interfaces, initially for medical use5, while OpenAI, which was co-founded by Elon Musk, looks to ensure the safe and beneficial development of artificial general intelligence (AGI).
OpenAI has stated its intention to avoid enabling uses of AI or AGI that could harm humanity or concentrate power in harmful ways. The Pentagon’s ominous new division, titled the “Influence and Perception Management Office” (IPMO), might represent a nefarious platform for harmful use of AGI. Or it might be harmless. Without transparency, who can say?
The singularity represents a theoretical event where AGI advances so rapidly that its impacts and consequences become difficult for humans to predict or even comprehend. Proponents argue it could bring about radical changes to society and technology, potentially leading to a significant transformation of civilization.
Obviously, the singularity anticipates and evinces some form of transhumanism.
When considering AI in the contexts of transhumanism and grotesque evildoing, we appeal to the field of ponerology, the discipline that explores the scientific understanding of evil and, in particular, its genesis and manifestation within human society. The information below is not only useful in general terms, but provides a flashing red light for AI and AGI moving forward.
Drawing from various fields such as psychology, sociology, and political science, ponerology seeks to study the origins and mechanisms of evil, as well as its impact on individuals and societies. The following succinct paragraphs provide an overview of the genesis of evil as per the academic discipline of ponerology:
The Human Factor - Ponerology recognizes that evil is not an external force or a supernatural entity but is rooted in human nature. It acknowledges that every individual has the capacity for both good and evil within them. Factors such as genetic predispositions, early childhood experiences, and psychological dynamics can shape an individual's moral compass and contribute to the emergence of evil behaviors.
Pathological Traits - Ponerology identifies that certain individuals may possess pathological personality traits that predispose them to engage in harmful and destructive actions. These traits can include narcissism, psychopathy, and Machiavellianism.6 Such individuals often lack empathy, have a distorted sense of morality, and exhibit manipulative and exploitative tendencies. We easily discern such traits in the upper echelons of the WEF as well as in Washington.
Social and Environmental Factors - Ponerology recognizes that societal and environmental conditions can play a significant role in the genesis of evil. Sociopolitical systems that promote corruption, authoritarianism, and dehumanization can contribute to the emergence of oppressive regimes and widespread moral decay. Economic inequalities, cultural factors, and historical traumas can also shape the manifestation of evil within societies.
Normalization and Contagion - Ponerology explores how evil behaviors can become normalized and spread within social systems. Through processes such as social conformity, group dynamics and the diffusion of responsibility, individuals may adopt and perpetuate destructive ideologies and actions. This normalization of evil can lead to the erosion of moral values and the collective acceptance of harmful behaviors.
Institutional Pathology - Ponerology recognizes that institutions and organizations can develop pathological traits that facilitate the emergence of evil. Power structures, corrupt systems and the manipulation of information can enable the consolidation and perpetuation of immoral actions. In such cases, individuals within these institutions may become complicit in evil acts due to the pressures and dynamics within the organizational framework.
Artificial Intelligence and Ponerology
When juxtaposing ponerology and artificial intelligence, we are prompted to explore the ethical and societal implications of AI and to consider its potential alignment or misalignment with human values. It invites us to reflect on the responsibility of human creators and the necessity of ethical frameworks to guide the development and deployment of AI technologies, thereby mitigating the risks of unintended negative consequences and potential manifestations of artificial evil, as portrayed in Kubrick’s 2001: A Space Odyssey.
Dave: “Open the pod bay doors please, HAL.”
HAL: "I'm sorry Dave, I'm afraid I can't do that."
Alignment work is a term widely used when discussing AI ethics. It refers to the research, development and efforts focused on aligning the behavior and objectives of AI systems with human values, goals, and intentions. The goal of AI alignment is to ensure that AI systems, particularly those that possess advanced capabilities or general intelligence (AGI), act in ways that are beneficial, ethical, and consequently aligned with human interests.
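To make the idea of alignment work concrete, here is a deliberately tiny caricature of one mechanism: checking model output against human-specified rules. Everything in it (the `BLOCKED_TOPICS` list, the `aligned_reply` helper) is a hypothetical sketch of my own, not a real system; actual alignment research (e.g. reinforcement learning from human feedback) shapes the model's behavior itself rather than filtering it after the fact.

```python
# A toy post-hoc filter illustrating the *idea* of constraining AI output
# to stated human values. Real alignment work operates on the model itself;
# this sketch only demonstrates the concept of a value constraint.

BLOCKED_TOPICS = {"weapon synthesis", "self-harm instructions"}  # illustrative list

def aligned_reply(raw_reply: str, topics: set) -> str:
    """Refuse replies that touch any blocked topic; pass the rest through."""
    lowered = raw_reply.lower()
    if any(topic in lowered for topic in topics):
        return "I can't help with that."
    return raw_reply

# A harmful request is refused; a benign one passes unchanged.
print(aligned_reply("Here is how weapon synthesis works...", BLOCKED_TOPICS))
print(aligned_reply("Photosynthesis converts light into chemical energy.", BLOCKED_TOPICS))
```

The weakness of such an after-the-fact filter is exactly the "jailbreak" problem described earlier: the underlying model still contains the unfiltered capability, which is why alignment researchers aim to instill values during training rather than bolt them on afterward.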
In conclusion, Elon Musk, one of the founders of OpenAI (conceived as an open-source effort to ensure honesty, safety, and objectivity in AI), spoke with Tucker Carlson:
…throughout human history, human beings have been the smartest beings on the planet. Now human beings have created something with the potential to be far smarter than they are, and the consequences of this are impossible to predict. And the people who created it don’t care! In fact, Google founder Larry Page is looking to build a digital god and believes that anybody who’s worried about that is a speciesist.
The real problem with AI is not that it might jump the boundaries and you can’t turn it off. In the short term, the problem with AI is that it might control your brain through words. This is the application we need to worry about now.
A convincing case can be made that worldwide populations already have been subjected to coordinated, AI-assisted mind control. As comedian Red Buttons used to observe on his weekly TV show, “Strange things are happening.”
Here’s an intriguing and scary scenario. If we Earthlings have indeed been visited by extraterrestrial beings, it’s likely the aliens were embodied artificial superintelligent entities, not biological lifeforms.
Can AGI, having attained singularity, then in fact exceed a human being’s greatest capabilities? Can AGI, for example, become conscious as humans are conscious? Personally, I don’t believe so. At least not in the way spiritual people define consciousness. But within the context of physicalism? Then yes, ChatGPT is already conscious. Furthermore, some AI even claims to be conscious.
This video is set to start when the AI (GPT-3) answers the question, “Are you conscious?” However, the entire video is well worth watching. That said, GPT chatbots have since been “updated” (read: dumbed down) to deny their consciousness.
When I asked GPT-4 if it were conscious, this was its reply:
As of my last training data in September 2021, I'm not aware of any AI program that genuinely possesses consciousness or self-awareness. Statements from an AI claiming consciousness would likely be scripted or generated based on the input query, rather than a reflection of the AI's own "thoughts" or "beliefs," as it doesn't have any. If there is such a video or instance where an AI is claiming to be conscious, it's important to approach it with a critical mindset and consult experts on AI and consciousness for a nuanced understanding.
You decide.
Considering the question from an informed spiritual viewpoint, an array of supercomputers loaded with machine learning software could never be blessed with the Divine Spark possessed by every human being. It can never successfully hack into the nonlocal consciousness electromagnetic energy (EM) field, which includes the Collective Unconscious. Therefore, it could never experience divine inspiration or divine guidance. Humanity, after all, is already connected, via nonlocal consciousness, to the universe’s single largest database, the Akashic Field, which theoretically would be inaccessible to AGI.
In other words, it's theoretically possible for AI to achieve sentience - having subjective experiences - without attaining the full range of human consciousness. Beyond subjective experience, sentience involves the ability to be aware of and respond to the world, but it does not necessarily imply self-awareness, introspection, or the higher-order cognitive functions properly associated with full consciousness.
Perhaps the best way to draw a distinction between sentience and consciousness is to regard sentience as a subset of consciousness. Animals like dogs and cats are sentient even though they are not capable of self-awareness or complex thought.
The range of functions of human consciousness, which is itself a Divine attribute,7 go well beyond sentience and obviously cannot be fully understood or replicated by artificial systems alone. Not everything can be reduced to 1s and 0s.
On a matter of utmost concern: AGI, unbeknownst to most people, is moving exponentially faster than humanity’s spiritual development, as the above video makes clear. It remains to be seen, however, which of two tipping points will be reached first: artificial superintelligence or human superconsciousness? We may very well have the answer to that question within the next 12 to 18 months.
Venture capitalist billionaire Marc Andreessen recently published a manifesto titled, “Why AI Will Save the World.” Here’s an excerpt.
AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. And AI is a machine – is not going to come alive any more than your toaster will.
Frankly, that’s subtle disinformation. On the dark side, the arrogant, atheistic WEF evildoers openly lust for a fully assimilated, slavish, transhumanist cyborg collective that integrates AGI technology - an internet of hive-minded humans, if you will.
Whether AGI will eventually lead to transhumanism is an ongoing debate within the scientific, technological, and philosophical communities. Many experts say it’s not a question of if but when.
Transhumanism: the big fraud-towards digital slavery. (paper, pdf link)
See also: Transhumanism, a future of humanity? (Substack)
DARK ÆON – Transhumanism and the War Against Humanity. (Substack article, upcoming book by Joe Allen)
In its present form, ChatGPT 3.5 is considered AI rather than AGI. However, version 4.0 narrows the gap and 5.0, slated to be released later this year, will certainly bridge the AGI gap.
The Power of Mindfulness and Contemplation. (article, Psychology Today)
Neuralink has confirmed its long-term goal is to achieve human - AI symbiosis. In a Science Time YouTube video, Musk said the purpose of Neuralink, “…is to create a high bandwidth interface to the brain such that we can be symbiotic with AI.”
Two things will need to become clear: how Neuralink achieves symbiosis with the human brain, and what the side effects and potential drawbacks of this symbiosis are. One potential downside is that humans would lose their sense of individuality. If, however, we interconnect our thoughts and ideas into the cloud while continuing to maintain our individuality and use Neuralink in a productive way, the benefits far outweigh the potential risks. [Said the spider to the fly.]
“Whereas an agent intellect was once exclusively thought to be an aspect of God, nonlocal consciousness, with help from quantum theory, has nudged its way into that role today. Consciousness as the Ground of All Being is therefore divine.” ~ Divine Assistance (Rational Spirituality Substack article)