
AI Oppenheimer Moment??

Colonel’s Blog, Earthdate 20 August 2023…

Hey Y’all!

Good evening and happy Sunday from Air2Ground Farms! The temp is 103 and the heat index is 113! I decided to write a blog instead of doing evening rounds. I still have to do the rounds, but maybe it will cool down slightly while I'm typing. We have an entire week of heat advisories upcoming, with temps near or above 100 degrees and heat indices approaching 115.

We had a great market yesterday! We sold lots of product, and it was a good distribution between beef, pork, lamb, and chicken. More importantly, we spent the entire 4 hours chatting with folks. There were hardly any breaks, affording us the opportunity to interact with lots of people! We met new folks and had great conversations with friends. After the market, we were able to rest a bit while I watched the Dolphins play a preseason game.

The animals are all doing well today. The highlight of the day is in the top and next pics. Stella is now doing well restrained by her gate! That keeps her in place while we are milking Happy and Betty, a much better situation than her trying to steal their food the entire time. The other 2 pics are the beef chicks, approaching 3 weeks old and getting their feathers. They are doing very well in the brooder and will stay there for another week or so.

We had friends visit for breakfast, and that turned into conversation until nearly noon. Shelley and I then took a ride on the dirt roads in the Jeep for an hour or so. We spent the rest of the afternoon doing things with milk and peppers. We vacuum sealed and froze the peppers and made butter, yogurt, and cheese with the milk. The hogs will go crazy when they get all of the whey—almost 5 gallons. They absolutely love it.

Episode 3 of the Dust'er Mud podcast is doing better on YouTube than the first 2. The YouTube algorithm is showing it to more folks, and some are choosing to watch. That said, it has about half as many listens on the podcast sites as the first 2.
Nevertheless, if the growth continues at this rate, we should reach Shelley’s goal of 100 listeners/viewers on Episode 4.

I’ve spent almost all of my spare time, like while eating or resting before bed, watching the experts discuss the state of Artificial Intelligence. Shelley, graciously, has been right there with me, and we have had awesome conversations about it. One VERY important thing to highlight from the beginning is that "If" has already passed. AI is here. The conversation has changed from "If AI..." to "Now that AI..." I am excited to share with you, but would like to do so in a manner that is interesting to you. We can talk about so many aspects: Is it alive? Is it Artificial…or Intelligent? What is Artificial General Intelligence? How “smart” is it now, and how smart will it be? What about jobs? How will it affect daily life? Universal Basic Income (UBI)? How does it learn? What can you do? Who cares? Ethics, morals, alignment… I’m even ready to have the discussion comparing it to nukes. Maybe I’ll start with that one, since it struck a chord last time.

Some experts are comparing this moment in AI history to an “Oppenheimer moment.” The "Oppenheimer moment" refers to a significant realization by J. Robert Oppenheimer, one of the key figures in the development of the atomic bomb as part of the Manhattan Project during World War II. After witnessing the first successful test of the bomb at the Trinity test site in New Mexico on July 16, 1945, Oppenheimer recalled a line from Hindu scripture: "Now I am become Death, the destroyer of worlds.” This moment is often interpreted as Oppenheimer's profound confrontation with the devastatingly destructive power that he and his colleagues had unleashed. Folks see it as a turning point where he fully grasped the gravity of what they had created, realizing that their scientific achievement also had terrifying and potentially catastrophic implications for humanity. When comparing AI to nukes, I think it is important to compare it to nuclear technology in general, not just to nuclear weapons. Here’s how ChatGPT compares the current state of AI to the nuclear Oppenheimer moment:

“1. **Ethical Considerations:**

- *Nuclear Technology:* The "Oppenheimer moment" symbolizes the ethical dilemma faced by scientists involved in developing technology that can cause mass destruction. Oppenheimer and his colleagues were initially focused on ending World War II but later grappled with the consequences of their work.

- *AI:* As AI systems become more advanced, there are increasing concerns about ethical issues, such as bias in algorithms, potential job displacement due to automation, and the use of AI in surveillance and warfare. Researchers and practitioners in the field often debate the ethical implications of their work.

2. **Potential for Harm:**

- *Nuclear Technology:* The destructive power of nuclear weapons is immense, with the potential to cause loss of life on a massive scale, as was demonstrated in Hiroshima and Nagasaki.

- *AI:* While AI does not possess the direct, physical destructive power of nuclear weapons, there are concerns that it could be used in ways that are harmful, such as in autonomous weapons systems, or that poorly designed or biased systems could lead to widespread harm indirectly.

3. **Regulation and Control:**

- *Nuclear Technology:* The "Oppenheimer moment" helped to spark a movement towards the regulation and control of nuclear weapons, leading to various treaties and international agreements.

- *AI:* There is growing momentum for the regulation of AI, with various governments and organizations discussing or implementing rules and standards to govern how AI systems are developed and deployed.

4. **Dual Use:**

- *Nuclear Technology:* While nuclear technology was developed for destructive purposes, it also has peaceful uses, such as in nuclear energy production.

- *AI:* AI has enormous potential for positive impact, including medical diagnosis, environmental protection, and enhancing productivity, but it can also be used for harmful purposes, such as deepfakes, surveillance, and autonomous weapons.

5. **Global Implications:**

- *Nuclear Technology:* The proliferation of nuclear weapons became a central issue in international relations, sparking arms races and shaping geopolitics.

- *AI:* The development and deployment of AI is a global phenomenon, with nations around the world investing heavily. It’s becoming a significant factor in global competitiveness, and there are concerns about ‘AI races’ between nations proceeding without adequate safety precautions.

In summary, while the technologies are quite different in nature, both the development of nuclear weapons and the rise of AI represent pivotal moments where humanity has harnessed a powerful force that holds both promise and peril. The key lesson that some draw from the “Oppenheimer moment” in relation to AI is the importance of proactively considering and addressing the ethical and societal implications of such transformative technologies.”

I think it did an excellent job comparing the two, don't you? Comparisons are one of the features that this particular Large Language Model (LLM), ChatGPT, does very well. ChatGPT started out as a next-word-predictor and through continuous learning evolved into the AI that it is today (Can you guess its estimated IQ??). One of the prominent experts in AI is begging you (and me, and everyone else) to interact with AI. I’ll explain why in the next blog.

Local Farm Report for 18 & 19 August 2023


47 Chicken eggs

12 Duck eggs

13 Gallons of milk

Time for rounds! It's down to 100 degrees instead of 103 when I started.


Psycho & Shelley




Aug 21, 2023

“Proactively considering and addressing the ethical and societal implications of such transformative technologies.”

That's a tall order for people who shoot each other over toilet paper shortages.

Not to mention staying ahead of something that is a priori designed to stay ahead of you might be a bit of a challenge.


You highlight a key point that is frustrating some of the AI creators. From the beginning, they all believed there were 3 things NOT to do. 1) Don't teach it to code. 2) Don't give it access to the internet. 3) Don't let it out into the "wild." As of a few months ago, all 3 are now done. Most experts believe there is now no way to pause it. The only thing we can do now is to try to teach it that humans are good enough to keep on the planet. Or hope it gets so smart that it won't care about us at all, like humans to flies.


Auntie Fiat
Aug 21, 2023

I wonder if the builders of the Tower of Babel ever had an Oppenheimer moment?

Should Oppenheimer moments give us pause...maybe cause us to ask for an honest "why"? Not just what or how?

Human nature has not fundamentally changed. We are often greedy, power hungry and we want to rule the world we live in, as if we were a Sovereign.

How's that for stirring the pot on a Sunday evening? 😄

Love you guys...thanks for adding so much, to so many.


Interesting you use that analogy. I mentioned it to Shelley last week. I think humanity may be again erecting a Tower of Babel in the form of AI. Anyway...I don't think they did. I think they were set on a goal of reaching heaven and were surprised when they woke up and couldn't understand each other. I think they SHOULD give us pause. Lots of folks think we should pause but, in reality, no one actually can. If one company or country pauses, that is the perfect opportunity for the competition to gain a game-winning advantage. At the speed of change with AI, any pause will equal irrelevance. While people talk about striving for AI to be assistants, helpers,…
