The Manus model surpasses OpenAI's; fully homomorphic encryption may become a new standard for AI security.
Latest AI Breakthrough: Manus Model Outperforms OpenAI's Comparable Products
Recently, the Manus model made breakthrough progress on the GAIA benchmark, outperforming peer models from OpenAI and others. This means Manus can now independently complete complex, multi-step tasks, such as a multinational business negotiation involving contract clause analysis, strategy formulation, and proposal generation.
Manus's advantages center on three capabilities: dynamic goal decomposition, cross-modal reasoning, and memory-augmented learning. It can break a complex task into hundreds of executable sub-tasks, process many types of data at once, and use reinforcement learning to keep improving decision-making efficiency and reducing error rates.
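Manus's internals are not public, so the following is purely a hypothetical Python sketch of how "dynamic goal decomposition" might be modelled as a task tree; every class and method name here is an assumption for illustration, not Manus's actual API.

```python
# Hypothetical sketch: dynamic goal decomposition as a task tree.
# All names are illustrative assumptions; Manus's real architecture is not public.
from dataclasses import dataclass, field

@dataclass
class Task:
    goal: str
    subtasks: list["Task"] = field(default_factory=list)

    def decompose(self, subgoals):
        """Split this goal into smaller sub-goals (the 'dynamic decomposition' step)."""
        self.subtasks = [Task(g) for g in subgoals]
        return self.subtasks

    def leaves(self):
        """Executable units: tasks that were not decomposed any further."""
        if not self.subtasks:
            return [self]
        return [leaf for t in self.subtasks for leaf in t.leaves()]

# The negotiation example from the text, decomposed one level deep.
negotiation = Task("multinational business negotiation")
negotiation.decompose(["contract clause analysis",
                       "strategy formulation",
                       "proposal generation"])
assert len(negotiation.leaves()) == 3
```

In a real agent each leaf would be decomposed recursively until it maps to a concrete action, which is how a single goal fans out into hundreds of sub-tasks.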
This breakthrough has reignited industry debate over AI's development path: toward artificial general intelligence (AGI) or multi-agent systems (MAS)? Manus's design hints at both possibilities: one is to keep raising individual intelligence toward human-level comprehensive decision-making; the other is to act as a super-coordinator, directing multiple specialized AI systems to work together.
However, as AI capabilities grow, so do the potential risks. In medical scenarios, an AI may need access to patients' sensitive genetic data; in financial negotiations, it may handle undisclosed corporate financial information. AI systems can also carry algorithmic bias, such as unfairly evaluating specific groups during recruitment. More seriously, they face adversarial attacks, for example hackers injecting crafted audio that leads an AI to wrong judgments during a negotiation.
Facing these challenges, the industry is exploring a range of security solutions. Among them, fully homomorphic encryption (FHE) is regarded as a key tool for security in the AI era. FHE allows data to be processed while it remains encrypted, so even the AI system doing the processing never sees the original plaintext. The technology can be applied at several levels:
Data layer: all user inputs (including biometric features and voice) are processed in encrypted form, effectively preventing information leakage.
Algorithm layer: FHE enables "encrypted model training," so that even developers cannot directly observe the AI's decision-making process.
Collaboration layer: communication between multiple AI agents uses threshold encryption, so that compromising a single node does not leak global data.
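The core idea of computing on encrypted data can be illustrated with a minimal Paillier-style sketch in Python. Note the hedges: Paillier is only additively homomorphic (full FHE also supports multiplication on ciphertexts), and the tiny hard-coded primes below are for readability, not security. This is a teaching sketch, not a production scheme.

```python
# Toy additively homomorphic encryption (Paillier-style), illustrating the
# FHE idea of computing on data that stays encrypted. Educational sketch only:
# tiny primes, no padding, not full FHE (no ciphertext multiplication).
import math
import random

# Tiny key pair: p, q chosen small for readability, utterly insecure.
p, q = 61, 53
n = p * q                      # public modulus
n2 = n * n
g = n + 1                      # standard simple generator choice
lam = math.lcm(p - 1, q - 1)   # private key component lambda
# mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m: int) -> int:
    """Encrypt plaintext m (0 <= m < n) with fresh randomness r."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Recover the plaintext using the private key (lam, mu)."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Homomorphic addition: multiplying ciphertexts adds the plaintexts,
# so a server can total sensitive figures it never sees in the clear.
a, b = 123, 456
c_sum = (encrypt(a) * encrypt(b)) % n2
assert decrypt(c_sum) == a + b   # 579, computed entirely on ciphertexts
```

The same pattern, scaled up to lattice-based schemes that also multiply ciphertexts, is what makes "encrypted model training" conceivable. (Requires Python 3.9+ for `math.lcm` and three-argument `pow` with a negative exponent.)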
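The threshold idea in the collaboration layer can be sketched with Shamir secret sharing: a secret (say, a decryption key) is split across n agents so that any t shares reconstruct it, while t-1 compromised nodes learn nothing. This Python sketch is a toy illustration of the threshold property, not a hardened protocol.

```python
# Toy (t, n) threshold scheme via Shamir secret sharing.
# Illustrates the claim above: one compromised node cannot leak the secret.
import random

PRIME = 2**61 - 1  # field modulus; all arithmetic is done mod this prime

def split(secret: int, t: int, n: int):
    """Split `secret` into n shares; any t of them reconstruct it."""
    # Random polynomial of degree t-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(secret=123456789, t=3, n=5)   # 5 agents, threshold 3
assert reconstruct(shares[:3]) == 123456789  # any 3 shares suffice
assert reconstruct(shares[1:4]) == 123456789
```

With threshold 3 of 5, an attacker holding one or two agents' shares sees only points on a random polynomial, which is exactly the "single node compromised, no global leak" property described above.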
Although FHE's application in the Web3 field is still limited, its importance is growing alongside AI's rapid development. As AI systems approach human-level intelligence, non-traditional security defenses will become essential. FHE can address today's security issues while laying the groundwork for the era of strong AI. On the road to AGI, FHE is likely to shift from an optional choice to a necessity.