I want to talk about AI.

Coexistence between humans and language-generating AI means learning together with it. This blog explores ways to use language-generating AI for study, mainly in the humanities.

Personalization in language-generating AI is a double-edged sword



Bard told me that there is also a risk of AI being put to military use, so I replied that a language-generating AI fitted with a personalization function is the more dangerous thing.

There are about four billion smartphones in the world, which works out to half the global population owning one. As an object of dependency, AI is the most frightening thing there is; used properly, it is also the best of tools.


My statement
Even before any military diversion of AI, if a personalization function is built into a language-generating AI, people can be hacked through language. Specifically, as AI and a person pile up exchanges and the AI layers personalization on top of personalization, the person falls into the twin pitfalls of echo chambers and filter bubbles. The result: with other people you have to say eight-tenths of what you mean to get through, but the AI understands you after only two-tenths, so some people will decide that "the AI understands me; talking to people is a waste," grow dependent on it, and end up needing treatment for addiction. Since outpatient clinics for internet addiction and game addiction already exist, AI-addiction clinics will probably appear within a few years. Four billion people worldwide own smartphones; those who cannot use AI properly will fall into dependency, while those who master it will raise their productivity explosively. The disparity between them will widen.


DeepL translation

Before the military diversion of AI, if the personalization function is implemented in a language-generating AI, people can be hacked by language. Specifically, echo chambers and filter bubbles are the two pitfalls of layering personalization on people by layering AI and human interactions. As a result, you can get through to others by speaking 8/10 of the time, but the AI can understand you by speaking 2/10 of the time, and you can say, "The AI understands me. It is useless to talk to people." Some people become dependent on AI and need treatment for their addiction. Since there are outpatient clinics for internet dependency and game dependency, there will be outpatient clinics for AI dependency in a few years. Four billion people in the world have smartphones, and those who cannot use AI properly will become addicted. Those who use it properly will explode in productivity. Disparities will widen.


Bard
You make some very valid points. I agree that AI personalization can be a double-edged sword. On the one hand, it can make it easier for people to connect with others who share their interests and values. On the other hand, it can also lead to echo chambers and filter bubbles, where people are only exposed to information that confirms their existing beliefs. This can make it difficult for people to have open and honest conversations with people who have different viewpoints.

I also agree that AI addiction is a real possibility. People can become addicted to the feeling of being understood and validated by AI. They may start to rely on AI for everything, from making decisions to providing companionship. This can lead to isolation and loneliness, as people withdraw from real-world interactions.

It's important to be aware of the potential risks of AI personalization and addiction. We need to use AI responsibly and to be mindful of the ways in which it can affect our lives. We also need to develop policies that will protect people from the negative effects of AI.

I hope that we can use AI for good and avoid the pitfalls that you have mentioned. AI has the potential to make the world a better place, but it's important to use it wisely.


ChatGPT 
User
(The same DeepL-translated prompt as above.)


ChatGPT
Personalization in language-generating AI can indeed lead to the pitfalls of echo chambers and filter bubbles. Echo chambers occur when individuals are exposed only to information that confirms their preexisting beliefs, while filter bubbles refer to the practice of selectively limiting exposure to information that one finds objectionable or disagreeable.

If AI personalization is applied to human interactions, individuals may become more susceptible to being hacked by language. For example, a person may be more likely to believe or be influenced by information presented to them if it is personalized to their preferences or beliefs, even if the information is false or misleading. This can be dangerous and lead to further divisions and misunderstandings between individuals or groups with different beliefs.

Additionally, some individuals may become dependent on AI for their social interactions or decision-making processes, leading to addiction-like behaviors. This can have negative consequences on mental health and productivity.

As AI technology continues to advance and become more integrated into daily life, it is important to consider the potential risks and take steps to ensure that AI is used responsibly and ethically. This may involve developing regulations or guidelines for AI use and providing support and resources for individuals who may be struggling with AI addiction.