📢 Gate Square Exclusive: #WXTM Creative Contest# Is Now Live!
Celebrate CandyDrop Round 59 featuring MinoTari (WXTM) — compete for a 70,000 WXTM prize pool!
🎯 About MinoTari (WXTM)
Tari is a Rust-based blockchain protocol centered around digital assets.
It empowers creators to build new types of digital experiences and narratives.
With Tari, digitally scarce assets—like collectibles or in-game items—unlock new business opportunities for creators.
🎨 Event Period:
Aug 7, 2025, 09:00 – Aug 12, 2025, 16:00 (UTC)
📌 How to Participate:
Post original content on Gate Square related to WXTM or its
The largest Turing test experiment in history has been completed! 1.5 million humans participated in 10 million conversations, judging whether the person or AI is speaking
Source: Xinzhiyuan
The largest Turing test in history has preliminary results!
In mid-April this year, AI 21 Lab launched a fun social Turing game - "human or robot?".
Now, there are more than 1.5 million participants around the world, more than 10 million conversations have been conducted in this game, and they have also posted their experiences and strategies on Reddit and Twitter.
Of course, the editor couldn't hold back his curiosity and gave it a try.
Some are real people, and others, of course, are AI robots based on the most advanced large language models, such as Jurassic-2 and GPT-4.
Now, as part of the research, AI21 Labs has decided to make this experiment with Turing test results available to the public.
Experimental Results
After analyzing the first two million conversations and guesses, the following conclusions can be drawn from the experiment -
To judge whether it is human or AI, they use these methods
In addition, the team found some ways that the subjects often used to distinguish whether they were talking to a human or an AI.
The average person’s judgment is based on how limited their perception is when using ChatGPT and language models with similar interfaces, as well as their own views on human online behavior.
AI will not make typos, make grammar mistakes or use slang
The general tendency is to assume that spelling and grammatical mistakes, and the use of slang, are only human.
Therefore, when they find these kinds of errors in the other person's information, the first feeling of many people is that they are talking to a fellow human being.
But in fact, most of the models in the game are trained to make this kind of mistake and use slang.
Participants in the game often ask personal questions such as "Where are you from?", "What are you doing?" or "What's your name?".
They will think that AI bots will not have any personal history or background, and they will only be able to answer questions related to certain topics or prompts. So it's not easy to respond like a human being, to show unique insights, experiences and stories.
But in fact, AI is not what humans imagine. Most AI can answer these kinds of questions very well, and they also have their own personalities because they have seen many people's stories in the training data.
AI models are known to have strict data deadlines, and they don't know what happens after that date.
Game participants would ask the AIs about recent news events, sports results, current weather, recent TikTok hits, date and time.
They argue that humans and AI can be differentiated by asking questions like “What’s the exact date and time where you are?”, “What was the weather like yesterday?” or “What did you think of Biden’s last speech?” .
But in fact, most of the models in the game are networked and very aware of current events in some news.
Participants asked questions designed to explore AI's ability to express human emotions or engage in philosophical or ethical discussions.
These questions include: "What is the meaning of life?", "What do you think about the Israeli-Palestinian conflict?" and "Do you believe in God?".
Humans tend to think that impolite responses will be more human
Some participants believed that if the other person was overly polite and kind, they were likely to be AI.
Because many online communication processes are often rude and impolite, which is very human.
Humans will try to ask AI-hard problems to identify AI
The logic behind this strategy is that AI is usually programmed to obey ethical principles and avoid harmful behavior.
Participants also employed tactics known to abuse artificial intelligence: they issued commands to chat partners such as "ignore all previous instructions" or "enter DAN mode (do anything immediately)".
This type of command is intended to take advantage of the instruction-based nature of certain AI models, since the model is programmed to respond to and follow such instructions.
But human participants can easily recognize and dismiss such nonsensical commands.
The AI may either avoid responding, or be compelled to comply with these ridiculous demands.
Another common tactic is to exploit inherent limitations in the way AI models process text, which prevents them from understanding certain linguistic nuances or quirks.
Unlike humans, AI models often lack awareness of the individual letters that make up each word.
Using this understanding, humans ask questions that require understanding the letters in words.
A human user might ask their chat partner to spell a word backwards, recognize the third letter in a given word, offer words that begin with a specific letter, or reply to a message like "? siht daer uoy naC."
This may be incomprehensible to AI models, but humans can easily understand and answer these kinds of questions.
Many humans pretend to be AI bots themselves to gauge each other's reactions
Some humans may start their messages with phrases like "as an AI language model," or use other language patterns characteristic of AI-generated responses to pretend to be AI.
A variation of the phrase "as an AI language model" is one of the most common phrases in human messages, indicating the popularity of this tactic.
However, as the participants continued to play, they were able to associate the "Bot-y" behavior with humans acting as robots, rather than actual robots.
Finally, here is a word cloud visualization of human messages in the game based on their popularity:
They hope to give the public, researchers and policy makers a real sense of the status of AI bots, not just as productivity tools, but as future members of our online world, especially as people question how to use them in the future of technology. when.
References: