ChatGPT - The beginning of the end? Why not BNNs as ANNs instead?
Lately, it seems that around every corner there is another podcast, video, or office or dinner conversation echoing the pervasive nature, the danger, the panacea, the good, and the bad of Artificial Intelligence. Are we missing the big picture, though? Are we possibly missing the smaller one too?
As my wife and I drove with our 12-year-old son, Gideon, through the mangrove swamps of the Yucatán, we reflected on the pink flamingos and on waters pigmented rust red by tiny biota. We were traveling on an incredible stretch of alone time with our middle son, surrounded by immense biodiversity and marinating in the intense idea of an earth destroyed overnight, the Cretaceous falling silent when the comet struck all those years ago. In that moment, a suddenly deeper-than-life conversation spilled onto us from over the airwaves: Spotify blasted the voices of Aza Raskin and Tristan Harris as they explained to Joe the dangers we all now could face. AI is here, and something has changed…
Interwoven with views of iguanas wandering the ruins of old piers and buildings that some hurricane had left to the ways of sand, we felt both the awe and the audacity of the moment. What had OpenAI, Google, and Microsoft done? What was Sam really thinking, after all? Where were these people now leading us? Along the beachfront at Sisal, my son offered a single resonating quote that has hung on my mind since: “We are like the chick in the egg, with AI opening it for us. But a chick needs to struggle and fight and open the egg itself,” he said. “If it doesn’t have the struggle, it will not live to grow. It will die…”
Maybe this should be reasoned out within the Sam Altman versus Sam Harris epoch. Are we on that precipitous knife’s edge, delivered to a moment crafted by a few expert coders whose hope was simply to see what could happen? Or are we still in control out here in the world of the mere mortal? Could it turn out that this thing we have created is the cure for cancer, or will it instead be the mother of all bioweapons, poised in the form of a mad scientist with a desktop DNA sequencer?
We seem to have chosen the bigger models, the Generative Pre-trained Transformers, the GPTs. We have looked at the ability of Machine Learning but were afraid of the implications of asking the wrong questions instead of the right ones. If you ask ChatGPT whether one should try to test “jailbreaks” on it, it might say that “it is a bad idea”; at least it did for me, warning that “unintended consequences may arise.” A shocking revelation to me now, after years of hearing that the only bad question is no question at all.
Weeks later, back in the States, driving the Central Valley roads, waiting for the next atmospheric river to come, watching the sandhill cranes dance in a field I have driven past my entire life, the same field that in 1850 was an oak grove with grizzly bears and elk, the same place that was once an inland sea, a simple question came to me: why not BNNs instead? All this worry over the largest supercomputing capability ever, built on a platform of billions of words and trillions of moments. We gave it too much data; it simply has too much information, and information is power. What if we were to think simpler, more like a human, more like an animal, more like a Biological Neural Network, a BNN? What if this thing we have now is like nuclear power, with reactors that need to be downsized to be safe, controllable, and usable at the level of human judgment and control?
In this space, imagine a human-based Artificial Neural Network built within the framework and context of the human brain: perhaps with one area dedicated to speech (the temporal lobe) and another to perception (the parietal lobe). Perhaps with less expectation, perhaps within the realm of humanity, we could be more realistic about deriving a use and a capacity, with basic control over the power it could potentially yield.
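The brain-region idea above can be sketched in toy form. This is purely illustrative, not an actual BNN or any real architecture: the names (`Region`, `ModularNet`, the capacity caps) are hypothetical inventions for this sketch. The point it demonstrates is the essay’s modular premise, small specialized regions with deliberately bounded capacity, rather than one enormous undifferentiated model.

```python
# Toy sketch of a brain-region-inspired modular network.
# All names here are hypothetical; this only illustrates the idea of
# small, specialized, capacity-bounded regions.

class Region:
    """A sub-network with its own deliberately finite budget."""

    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity  # a hard cap, like a lobe's finite neurons
        self.memory = []

    def process(self, signal):
        # Forget the oldest item once capacity is reached: the region
        # stays small by design instead of accumulating everything.
        self.memory.append(signal)
        if len(self.memory) > self.capacity:
            self.memory.pop(0)
        return f"{self.name} handled {signal!r}"


class ModularNet:
    """Routes each input to one specialized region, not one giant model."""

    def __init__(self):
        self.regions = {
            "speech": Region("temporal", capacity=3),   # language, like the temporal lobe
            "percept": Region("parietal", capacity=3),  # perception, like the parietal lobe
        }

    def route(self, kind, signal):
        return self.regions[kind].process(signal)


net = ModularNet()
print(net.route("speech", "hello"))  # the speech region, not the whole net, answers
```

The design choice being dramatized is the capacity cap: unlike a model trained on billions of words, each region can only hold so much before it must forget.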
The common human knows somewhere between 40,000 and 50,000 words at best, yet we as a species are known for the most uncommon of feats: an ability to adapt to nearly any environment. We use electrical impulses and neurotransmitters to communicate, perceive, estimate, create, develop, plan, execute, and traverse the globe. Even if a person knew nothing, never saw another human, never spoke a word, even then there would be growth, skills developed, thought, perception, and, more important, a remaining capacity to learn.
It is this simple fact that makes me believe a more hinged version of AI should be the path forward. We should be pushing BNNs as the model for ANNs and looking deeper into our capacity as individual beings as a proxy for the larger AI model. When we create an AI model that is the culmination of all cognitive thought, information unbound, there is no means by which we will ever control or understand it once it takes that next and very final leap. We are not there yet. We are still waiting, or perhaps it is still waiting, but one way or another that eggshell will crack open, and we will emerge; the question is, will it open our shell, or will we? To my point, and the irony of it all: when I asked ChatGPT about this piece of writing, it stated that “while it is a compelling piece of writing, it is not entirely accurate,” and it gave me only a 7 out of 10. So here’s to me struggling out of my own eggshell while I still can, I suppose.