A.I. Are We There Yet?

Reading our thoughts

We keep hearing about devices that attach to the brain, most recently Elon Musk’s Neuralink. Klaus Schwab of the WEF talks about being able to “intercept” thoughts within ten years. With Neuralink, we are promised all kinds of incredible advancements, from curing Parkinson’s disease to enabling the blind to see, to name a few. We have recently seen work aimed at curing paralysis in paraplegics: Neuralink presented a study in which a pig named Gertrude had two devices implanted into her brain, one to record neural activity and the other to stimulate specific regions. The study demonstrated the ability of the devices to record and interpret brain signals in real time and to stimulate specific regions of the brain, leading to observable changes in the pig’s behaviour. It was presented as a proof of concept for developing more advanced Brain-Machine Interfaces (BMIs) for human use, though much more research and development is needed before BMIs can be safely and effectively used in humans.

Incredible as this is, it is not the same as reading thoughts and emotions. Moving an appendage, remarkable an achievement as it is, is simple compared to other brain activities; cognitive thought sits on a scale of complexity exponentially greater. There has been much excitement in the field of brain-computer interfaces (BCIs) recently, with researchers claiming they can monitor and interpret the electrical signals in the brain to understand a person’s thoughts and feelings. But let’s not get too far ahead of ourselves: just because we can “hear” what is going on in the brain does not mean we can modify thoughts and transmit them through a BMI, signalling them back into another brain. Much of what surrounds anything to do with brain links is hyperbole, with rather less science and fact.

Before we go further, I want to clarify that while I have some knowledge regarding this subject and the salient topics that relate to it, I am not a specialist in these areas. My opinions are formed through my research and exposure to cognitive behavioural concepts. It is important to note that what follows is based on the information available to me and may not always be the most accurate or up to date.

How the Brain works

To understand why this is the case, it’s helpful to know a bit about how the brain works. The electrical signals in the brain, known as neural spikes, are generated by the activity of neurons and are used to communicate information between different parts of the brain. However, the meaning of these signals is not well understood, and the relationship between neural activity and our thoughts and feelings is complex and multi-layered.

I have learned that to “edit” a signal in the brain, we need to know what the signal represents and how to manipulate it to produce the desired effect. This is an extremely challenging task, and we do not yet understand how the brain works at this level of detail. It is one thing to intercept and retransmit motor signals generated by the motor cortex and sent through the spinal cord to the muscles, which has been done. It is another story entirely for cognitive thought and emotion, which involve different brain regions, such as the prefrontal cortex and the limbic system. The signals involved in these processes are less specific and often involve the synchronized activity of multiple brain regions. Adding even more complexity, cognitive thought and emotion can also directly affect motor signals, causing changes in body posture, facial expressions, and other movement-related responses.

Even if we could modify a signal in the brain, there is no guarantee that the brain would interpret it correctly or that it would produce the desired effect. The brain is a highly sophisticated system, and the effects of modifying neural activity are poorly understood. There is a real risk that any attempt to manipulate the brain in this way could have undesirable consequences, such as brain damage or negative changes in a person’s behaviour or personality.

Thoughts and emotions are not physical entities but complex, multi-faceted mental experiences. They are generated by millions of neurons, and before we could hope to process them, we would need to understand the dynamic interactions between different parts of the brain and how those interactions give rise to these mental experiences.
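To make the distinction concrete, here is a deliberately toy sketch of what “interpreting” motor signals actually means in practice: fitting a mapping from neural firing rates to an intended movement. Everything below (the neuron count, the hidden “tuning”, the noise levels) is invented purely for illustration; real BMI decoding is far more involved, and nothing here resembles an actual Neuralink pipeline.

```python
import numpy as np

# Toy illustration of motor-signal decoding: learn a linear map from
# simulated neural firing rates to a 2-D hand velocity. All numbers
# here are made up for the sake of the example.
rng = np.random.default_rng(42)

n_neurons, n_samples = 20, 500
true_map = rng.normal(size=(2, n_neurons))   # hypothetical "tuning" of each neuron

# Simulated spike counts per time bin, plus the velocities they encode (with noise)
rates = rng.poisson(5.0, size=(n_samples, n_neurons)).astype(float)
velocity = rates @ true_map.T + rng.normal(0.0, 0.5, size=(n_samples, 2))

# Least-squares decoder: find W so that velocity ≈ rates @ W
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

pred = rates @ W
err = np.mean((pred - velocity) ** 2)
print(f"mean squared decoding error: {err:.3f}")
```

The point of the sketch is the asymmetry the paragraph describes: a motor signal has a reasonably direct, learnable relationship to an output, which is why decoding it is tractable. No comparably simple mapping exists for a thought or an emotion.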

We are not there yet

While the idea of being able to “edit” the signal in the brain is tantalizing, it is important to remember that we are still a long way from truly understanding how the brain works and from being able to safely and effectively manipulate its activity. We must proceed cautiously and avoid making unrealistic claims about the capabilities of BCIs.

Therefore, rest assured: this media-spun fear of some evil device implanted in our brains, reading our every thought and emotion and transmitting it to overlords who ensure we are not committing some form of wrongdoing or thought crime, is not something we will see soon.

Much of the reason behind all these debates and discussions is the recent release of ChatGPT, which has brought the advancements in Artificial Intelligence to the forefront. It simultaneously impresses us and scares the living hell out of us. The fact that all of this arrives as a firehose of information only makes matters worse. In many instances the coverage omits specifics, conveniently ignored to create enticing clickbait headlines. We tend to wrap pseudoscience we know around things we do not fully understand so that we can feel comfortable with them. Artificial Intelligence is one of the most misused terms in technology, featured in news articles, magazines, books, political speeches, and even company PowerPoint slides. Even the Pope talks about it, and in 2019 the Gartner Group reported that one in three corporations claimed to have some form of A.I.

Artificial Intelligence

Arguably, what people refer to when talking about A.I. is what they call Machine Learning: a set of statistical methods, self-improving learning algorithms, and performance optimization procedures, excelling at the task of “learning by association.” Machine Learning algorithms power the data products pervading our daily life, such as our social media feeds, GPS systems or home assistants. Their capability grew exponentially in the last decade, mainly for two reasons: one, in-memory computational capabilities scaled up an order of magnitude in the last few years (both vertically and horizontally); and two, huge sets of data became available to feed to those algorithms.

They are complex functions mapping an input to an output: data is represented in a certain bespoke way and fed to a function that improves itself through a feedback mechanism and smart optimization. Neural Networks, for example, sold interchangeably as A.I., Deep Learning or Machine Learning, are at their core an iterated sequence of matrix multiplications.
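That claim is easy to demonstrate. Below is a minimal two-layer network written out directly as the matrix multiplications it consists of; the layer sizes and random weights are arbitrary choices for the sketch, not any particular trained model.

```python
import numpy as np

# A toy two-layer neural network: literally two matrix multiplications
# with a simple nonlinearity (ReLU) between them. Shapes and weights
# are arbitrary, chosen only to illustrate the structure.
rng = np.random.default_rng(0)

W1 = rng.normal(size=(4, 3))   # layer 1: maps 3 inputs to 4 hidden units
W2 = rng.normal(size=(2, 4))   # layer 2: maps 4 hidden units to 2 outputs

def forward(x):
    h = np.maximum(0.0, W1 @ x)  # matrix multiply, then ReLU
    return W2 @ h                # another matrix multiply

y = forward(np.array([0.5, -1.0, 2.0]))
print(y.shape)
```

“Learning” then amounts to nudging the numbers inside `W1` and `W2` so the outputs improve, which is the feedback-and-optimization loop described above. Impressive in aggregate, but there is no thinking in it.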

Some classify A.I. as follows:

  • Artificial General Intelligence (AGI). We are not there yet, with still a long way to go.
  • Artificial Super Intelligence (ASI). This is left to the realm of sci-fi.
  • Artificial Narrow Intelligence (ANI). Machines “using algorithms to make decisions regarding a single subject.”

Confusion arises when the differences and interpretations become foggy and overlap, making it difficult to discern where and when each applies to any given situation. However close we may seem to be, we are still a significant distance from being able to say we have developed a complete and true A.I.

Although today we have “thinking robots” that perform a multitude of tasks, from packing boxes and cutting grass to painting cars and stocking shelves, to show how far we are from this achievement it helps to illustrate a situation where the human brain flexes its superiority.

While on a mission aboard the space shuttle, the astronauts encountered a problem with a connector that would not fit together: the extreme temperatures of space had distorted it. To resolve the issue, the astronauts came up with a unique solution. They used the sun’s rays to heat one piece of the connector and the cool shadow of the shuttle to cool the other. With this simple yet effective technique, they joined the pieces and continued their mission successfully. Such quick thinking and resourcefulness demonstrated the astronauts’ ingenuity and problem-solving skills, even in the challenging environment of space.

Sentient Beings

Artificial intelligence (AI) would not be able to handle this situation because it cannot physically manipulate objects in the real world. The task of heating and cooling the connector required hands-on interaction and the ability to perceive and respond to changes in temperature. Currently, AI systems are limited to processing information and making decisions based on that data; they cannot interact with the physical world in a meaningful way. AI systems also lack the common sense and physical intuition that would be necessary to understand the consequences of heating and cooling a connector and how to perform the task effectively. Therefore, in this situation, human intervention and problem-solving skills were required to resolve the issue.

For several reasons, we are not yet at the point of having developed a true, thinking, sentient AI system. Firstly, current AI systems lack consciousness, self-awareness, and free will. They operate based on pre-programmed rules and algorithms and cannot think and make decisions independently. Secondly, AI systems also lack common sense and physical intuition, which are necessary for understanding and interacting with the real world in a meaningful way. Thirdly, AI systems do not have emotions and cannot experience the world as humans do. Finally, there are still significant technical challenges to overcome, such as ensuring that AI systems are robust, secure, and ethical in their decision-making. Developing a truly sentient AI system will require significant advancements in multiple research areas, including natural language processing, computer vision, robotics, and cognitive science.

To use an analogy, the A.I. systems we see today are in reality incredible impersonators. Watch someone on YouTube impersonate a famous person, close your eyes, and you can almost imagine the real person speaking or singing. But as close as the impersonator sounds to the person they are impersonating, that is as far as it goes: the impersonator could not then drive to that person’s home, open the front door, and fool their family. They would quickly be met with a 911 call. Just because we can mimic human intelligence does not mean we have created a sentient being. With all that said, I am still incredibly excited to be living at this time, since over the next few decades we will see advancements not seen in hundreds of years. We have learned to fly, we have landed on the moon, and now we are witnessing the beginnings of artificial intelligence.
