The Manus model surpasses OpenAI's; fully homomorphic encryption may become a new standard for AI security.

Latest AI Breakthrough: Manus Outperforms Same-Tier OpenAI Models

Recently, the Manus model achieved breakthrough results on the GAIA benchmark, outperforming models of the same tier from OpenAI. This means Manus can now independently complete complex tasks, such as a multinational business negotiation involving multi-step work like contract-clause analysis, strategy formulation, and proposal generation.

The advantages of Manus lie mainly in three areas: dynamic goal decomposition, cross-modal reasoning, and memory-augmented learning. It can break a complex task into hundreds of executable sub-tasks, process multiple types of data simultaneously, and use reinforcement learning to continuously improve decision-making efficiency and reduce error rates.

This breakthrough has reignited industry debate over AI's development path: toward Artificial General Intelligence (AGI) or Multi-Agent Systems (MAS)? The design philosophy of Manus suggests both possibilities: one is to keep raising the intelligence of an individual agent until it approaches human-level comprehensive decision-making; the other is to act as a super-coordinator, directing multiple domain-specialized AI systems to work together.

Manus brings the dawn of AGI, but AI safety is also worth pondering

However, as AI capabilities improve, the potential risks grow as well. In medical scenarios, for example, an AI may need to access patients' sensitive genetic data; in financial negotiations, undisclosed corporate financial information may be involved. AI systems can also carry algorithmic biases, such as unfair evaluations of specific groups during recruitment. More seriously, AI systems may face adversarial attacks, such as hackers implanting crafted audio that causes the AI to make incorrect judgments during a negotiation.

In the face of these challenges, the industry is exploring various security solutions. Among them, Fully Homomorphic Encryption (FHE) is considered an important tool for addressing security issues in the AI era. FHE allows data to be processed while still encrypted, meaning that even the AI systems themselves cannot decrypt the original information. This technology can be applied at multiple levels:

  1. Data layer: all information users input (including biometric features and voice) is processed in encrypted form, effectively preventing information leakage.

  2. Algorithm layer: "encrypted model training" is achieved through FHE, so that even developers cannot directly observe the AI's decision-making process.

  3. Collaboration layer: communication among multiple AI agents uses threshold encryption, so that compromising a single node does not lead to global data leakage.
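To make "computing on encrypted data" concrete, here is a minimal toy sketch of the Paillier cryptosystem, which is *additively* homomorphic (full FHE schemes such as BGV or CKKS extend this to arbitrary computation). The tiny fixed primes are purely for demonstration and are not secure; this is an illustration of the principle, not of any production FHE library:

```python
# Toy additively homomorphic encryption (Paillier), illustrating the core
# idea behind FHE: a server can compute on ciphertexts without ever
# decrypting them. Educational sketch only: tiny primes, NOT secure.
import math
import random

def keygen():
    p, q = 1789, 1907            # small fixed primes for the demo (insecure)
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                    # standard simplification for Paillier
    mu = pow(lam, -1, n)         # valid because g = n + 1
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 20), encrypt(pub, 22)
c_sum = (c1 * c2) % (pub[0] ** 2)   # multiplying ciphertexts adds plaintexts
print(decrypt(pub, priv, c_sum))    # 42
```

The key point is the last two lines: the party holding `c1` and `c2` computes their "sum" without the secret key, and only the key holder can read the result; FHE generalizes this to multiplications and thus to arbitrary circuits.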

Although FHE adoption in the Web3 field is still limited today, its importance is growing alongside the rapid development of AI. As AI systems approach human-level intelligence, non-traditional security defenses will become crucial. FHE can not only address current security issues but also lay a foundation for a future era of strong AI. On the road to AGI, FHE is likely to shift from an optional choice to a necessity for survival.
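The threshold idea behind the collaboration layer rests on secret sharing: a key is split into n shares such that any t of them reconstruct it, while fewer than t reveal nothing. A minimal Shamir secret-sharing sketch (illustrative parameters, not production code) shows the mechanism:

```python
# Toy 3-of-5 Shamir secret sharing over a prime field: the threshold
# primitive underlying "threshold encryption". Any 3 shares reconstruct
# the secret; 2 or fewer reveal nothing. Illustrative sketch only.
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus

def split(secret, n=5, t=3):
    # Random polynomial of degree t-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of the polynomial at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(42)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover 42
```

In a multi-agent deployment, each agent would hold one share of a decryption key, so that a hacker compromising a single node gains nothing, which is exactly the failure-containment property described above.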
