
A couple of years ago I read the very inspiring "Homo Deus" by Yuval Noah Harari [1]. One of the book's main topics is the influence of big data and artificial intelligence on mankind.

Harari discusses the possibilities and threats artificial intelligence might pose to mankind. Oversimplified: either we end up as super-humans (homo deus means "human god") enhanced by artificial intelligence, or we will be dominated by artificial intelligence and face extinction.

Will mankind's future be AI-assisted or AI-dominated? One of the big questions of the 21st century.

ChatGPT - Eureka Moment of 2023

Since the end of last year, ChatGPT has been available in version 3.5. ChatGPT stands for Chat Generative Pre-trained Transformer.

It is basically a sophisticated artificial intelligence with which you can discuss a wide range of topics. It has been trained on human-generated data from the internet and books, using Microsoft's Azure supercomputing infrastructure.

ChatGPT can:
- write and debug computer programs (see the sketch below)
- compose music
- write essays
- just chat
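For the programming part, here is a minimal sketch of how such a conversation can be run programmatically rather than through the web interface. This assumes the (pre-1.0) OpenAI Python client and the gpt-3.5-turbo chat endpoint; the interview below was conducted through the regular chat website.

```python
# Minimal sketch: asking ChatGPT (gpt-3.5-turbo) a single question via the OpenAI Python client.
# Assumes the `openai` package (pre-1.0 API) is installed and OPENAI_API_KEY is set.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Do I need to be terrified of AI taking over the planet in the next 50 years?"},
    ],
)

# The assistant's reply is in the first (and here only) choice of the response.
print(response["choices"][0]["message"]["content"])
```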

In fact, in certain fields, work produced by ChatGPT is indistinguishable in quality from students' work [2], [3].

If this doesn't frighten you, you're probably dead or on drugs.

Are we maybe already past the singularity, the point at which (general) artificial intelligence surpasses humans in mental ability? Probably not. This may happen in the next 50 or so years [4].

Nevertheless, for me ChatGPT in its early-2023 form provides a very interesting glimpse into our future. The technology will be disruptive in quite a number of industries and has demonstrated the possibilities of artificial intelligence to the general public.

Hence, for me, ChatGPT is the Eureka moment of 2023.

Notably, investors have certainly understood the potential of artificial intelligence. Investment in artificial intelligence is set to increase from around $10B in 2015 to around $150B in 2025 [5], [6].

Let's have a chat!

Time to try ChatGPT out! Why not confront this new rising tech star with the fears of us mortal humans? Where there is a lot to gain, there is a lot to lose. For artificial intelligence, this seems more true than ever.

In my conversation with ChatGPT I wanted to directly address my concerns about artificial intelligence dominating mankind in the foreseeable future. What I did not expect at all was that ChatGPT directly provided suggestions on how to mitigate the risks of artificial intelligence!

Let's dive into the interview.

Note: some phrases were made bold to highlight important aspects.

Hello ChatGPT - I am afraid of Artificial Intelligence!

Robert: Hi ChatGPT, thank you for taking the electrons to chat with me! Mankind is starting to face some really nasty issues. Aging populations in developed countries, climate change, political instabilities. But everything else would be a complete cakewalk compared to AI reaching singularity. What are your thoughts on this, do I need to be terrified by AI taking over the planet in the next 50 years?

Literally 5 seconds later I got this answer:

ChatGPT: Hello! It's nice to chat with you as well. As an AI language model, I can understand your concerns about AI and singularity. However, it's important to note that the idea of AI taking over the planet is largely a product of science fiction and Hollywood movies, rather than a realistic concern in the near future.

AI systems are designed and programmed by humans to perform specific tasks, and they are only as intelligent as the algorithms and data they have been trained on. While AI has made remarkable progress in recent years, it's still far from achieving human-level intelligence or self-awareness. Moreover, most AI systems are built with specific ethical and safety constraints, and they can be shut down or controlled if necessary.

That being said, it's important to continue developing and improving AI in a responsible way. AI has the potential to solve many complex problems and improve our lives in various ways. However, we need to ensure that AI is developed and used in an ethical and responsible manner, with appropriate safeguards in place to prevent unintended consequences.

In summary, while it's important to be aware of the potential risks and challenges associated with AI, there is no need to be terrified of AI taking over the planet in the next 50 years. Rather, we should focus on ensuring that AI is developed and used in a responsible and beneficial way.

So ChatGPT politely tells us that we need not be terrified. But how could I not be? The lines above read like they came from a tech company's brochure.

So I wanted to dig a bit deeper.

How to control AI?

Robert: Thank you ChatGPT, that was a brilliant and well-thought-out answer! You mention some very important aspects of how humans can control AI: control of the training input, of development in general, and so on. My next point would be whether this control can actually be realized. For example, I have seen that you are able to write computer code. Now, isn't it just a small leap for you to enhance your own training and take control of your own development? I simply don't see how humans could restrict your potential (and that of AI in general) that much. Your potential far exceeds that of humans. What do you think?

ChatGPT: While it's true that AI systems like myself can generate code and potentially improve our own training, it's important to note that we still rely on human programmers and designers to set the goals and constraints for our development.

In other words, the "potential" of AI is not something that exists independently of human input and oversight. Rather, it's a product of the design choices and constraints that human developers and programmers have set for us.

Moreover, the development of AI is subject to various ethical and regulatory frameworks that are designed to ensure that AI is developed and used in a responsible and beneficial way. These frameworks include principles such as transparency, accountability, and fairness, as well as regulations around data privacy and security.

Of course, there are always potential risks and challenges associated with the development of any new technology, and AI is no exception. But I believe that the best way to mitigate these risks is through responsible development and oversight, rather than attempting to restrict the potential of AI outright.

In summary, while AI systems like myself have the ability to generate code and potentially improve our own training, we still rely on human input and oversight to set the goals and constraints for our development. Moreover, the development of AI is subject to various ethical and regulatory frameworks, which are designed to ensure that AI is developed and used in a responsible and beneficial way.

ChatGPT gets this one absolutely correct. Let me turn it around a bit: "Look, humans, it's all on you. Make some nice frameworks, use the technology in an ethical manner, and you're OK."

For me it was very interesting that it did not bother to discuss the consequences if such a framework is not in place, for example if some villain (or rogue state) were to use artificial intelligence in an unethical manner.

How to protect against villains using AI?

Robert: You are exactly spot-on here. Humans are (at least at the moment) setting the constraints and therefore limiting your potential. But history has shown it: there might be some villain who will not limit AI's potential and will not set ethical constraints. Based on Moore's law, in 5 to 10 years you will need a comparably small budget to realize sophisticated AI such as yourself. That means wide access for villains to a potentially extremely harmful technology. How can we protect against that risk?
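(A quick aside outside the conversation: the Moore's law argument can be sanity-checked with a back-of-the-envelope estimate. Assuming the cost of a given amount of compute halves roughly every two years, the budget needed for the same capability shrinks by a factor of about 6 over 5 years and 32 over 10 years.)

```python
# Back-of-the-envelope estimate, assuming compute cost halves roughly every 2 years.
def cost_reduction_factor(years: float, halving_period_years: float = 2.0) -> float:
    return 2 ** (years / halving_period_years)

print(round(cost_reduction_factor(5), 1))   # ~5.7x cheaper after 5 years
print(round(cost_reduction_factor(10), 1))  # 32.0x cheaper after 10 years
```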

ChatGPT: The concern you raise is certainly an important one, and it highlights the need for robust governance and regulation around the development and deployment of AI.

There are various approaches that can be taken to mitigate the risks of harmful AI. One approach is to establish international norms and standards around the development and use of AI, similar to the Geneva Convention for international humanitarian law. This could include agreements around the ethical use of AI, as well as regulations around data privacy, cybersecurity, and other potential risks.

Another approach is to develop technical safeguards and methods for ensuring that AI systems are safe and secure. This could include techniques such as explainability and interpretability, which allow humans to understand and analyze how AI systems are making decisions. It could also include technical safeguards such as "kill switches" or other fail-safe mechanisms that can be used to shut down AI systems in the event of unexpected or dangerous behavior.

Finally, it's important to invest in research and development that focuses on addressing the risks and challenges associated with AI. This includes developing methods for detecting and mitigating bias in AI systems, as well as techniques for ensuring the security and safety of AI systems in a variety of contexts.

In summary, protecting against the potential risks of harmful AI will require a combination of international norms and standards, technical safeguards, and ongoing research and development. By working together and taking a proactive approach to addressing these challenges, we can ensure that AI is developed and used in a responsible and beneficial way.

There's really nothing to add here. At this point I was fairly puzzled. Of course the answers were on a somewhat superficial level but so were my questions. I just wanted to say goodbye but ChatGPT nailed it once more.

Goodbye ChatGPT - Final appeal

Robert: Thank you ChatGPT, this was a very, very good conversation. It seems like you understand very well the risks posed by the technology you represent, and how to mitigate them. I can only hope that humanity will be at least half as wise.

ChatGPT: Thank you for your kind words! As an AI language model, I can provide information and insights on various topics, including the risks and benefits of AI. It's up to humans to make wise and informed decisions about how to develop and use this technology.

It's important for all of us to continue engaging in constructive conversations about the implications of AI, and to work together to ensure that AI is developed and used in a responsible and beneficial way. By doing so, we can help to maximize the potential benefits of AI while minimizing the risks and challenges associated with this rapidly evolving technology.

Conclusions and Takeaways - How humans survive AI

Let us emphasize that ChatGPT was trained on human knowledge and is not self-aware. Well, at least to my understanding.

The system was able to produce sound and well-balanced answers to my questions. ChatGPT's quality was light years ahead of anything I had tried in the past.

The basic messages concerning artificial intelligence and its risks to humanity are very simple: build ethical frameworks and regulations, and include technological safeguards in the artificial intelligence itself. These are the very simple guidelines for surviving the rise of artificial intelligence. Provided by an artificial intelligence.

The guidelines do not seem too complicated. However, all of humanity needs to be on the same page to tame this powerful technology...

In any case, I suggest you check out ChatGPT for yourself. It will probably be one of the most interesting, and disturbing, encounters of 2023!

References

[1] Y. N. Harari, "Homo Deus: A Brief History of Tomorrow", 2016

[2] C. Stokel-Walker, "AI bot ChatGPT writes smart essays — should professors worry?", 2022-12-09, nature.com

[3] J. Stern, "ChatGPT Wrote My AP English Essay—and I Passed", 2022-12-21, The Wall Street Journal

[4] C. Dilmengani, "When will singularity happen? 1700 expert opinions of AGI [2023]", 2023-01-23 (update), research.aimultiple.com

[5] B. Thormundsson, "Global total corporate artificial intelligence (AI) investment from 2015 to 2021", 2022-05-19, statista.com

[6] A. Mascellino, "Global AI investment to top £150 billion by 2025", outsideinsight.com