Manus Breaks Records on the GAIA Benchmark; AI Security Challenges Highlight the Potential of Fully Homomorphic Encryption

Manus Achieves Breakthrough Progress in GAIA Benchmark Testing

Recently, Manus set a new record on the GAIA benchmark, outperforming other large language models in its category. This achievement indicates that Manus can independently handle complex tasks such as multinational business negotiations, which involve contract analysis, strategic planning, and proposal formulation.

Compared with traditional systems, Manus holds advantages in three main areas: dynamic goal decomposition, cross-modal reasoning, and memory-augmented learning. It can break a large task down into hundreds of executable subtasks, process multiple types of data, and, through reinforcement learning, continuously improve its decision-making efficiency while reducing the probability of errors.
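
Manus's internal architecture is not public, but the notion of dynamic goal decomposition can be illustrated with a small hypothetical sketch: a task tree is recursively flattened into an ordered list of executable subtasks. Everything below (the Task class, the decompose function, and the example negotiation goals) is invented for illustration and is not Manus's actual design.

```python
from dataclasses import dataclass, field


@dataclass
class Task:
    """A goal that is either executed directly or split into subtasks."""
    name: str
    subtasks: list["Task"] = field(default_factory=list)

    def is_atomic(self) -> bool:
        return not self.subtasks


def decompose(task: Task) -> list[Task]:
    """Flatten a task tree into an ordered list of executable subtasks."""
    if task.is_atomic():
        return [task]
    plan: list[Task] = []
    for sub in task.subtasks:
        plan.extend(decompose(sub))
    return plan


# Hypothetical example: a negotiation goal broken into smaller executable steps.
negotiation = Task("multinational business negotiation", [
    Task("contract analysis", [Task("extract key clauses"), Task("flag risky terms")]),
    Task("strategy planning"),
    Task("proposal drafting"),
])

for step in decompose(negotiation):
    print(step.name)
```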

Manus's progress has once again sparked industry discussion about the development path of artificial intelligence: will the future move toward a unified Artificial General Intelligence (AGI) model, or toward a collaborative Multi-Agent System (MAS) model?

This question touches on Manus's design philosophy, which suggests two possible directions of development:

  1. AGI Path: Continuously enhancing the capabilities of a single intelligent system to gradually approach the comprehensive decision-making level of humans.

  2. MAS Path: Positioning Manus as a super-coordinator that directs the collaboration of specialized agents across professional fields.

On the surface this is a debate about technology roadmaps, but it fundamentally reflects the core tension in AI development: how to balance efficiency against safety. As a single intelligent system approaches AGI, the risks arising from the opacity of its decision-making grow; multi-agent collaboration can disperse risk, but communication delays may cause it to miss critical decision points.

Manus's evolution has also amplified the inherent risks of AI development. In medical scenarios, for example, Manus requires real-time access to sensitive patient data; in financial negotiations, it may handle undisclosed corporate information. Algorithmic bias is another issue, such as unfair salary recommendations for specific groups during recruitment, or higher misjudgment rates for contract clauses in emerging industries during legal review. A further risk is adversarial attack: hackers may implant specific audio signals to interfere with Manus's judgment of an opponent's offer during negotiations.

These challenges highlight a harsh reality: the more advanced an intelligent system becomes, the broader its potential attack surface.

Manus Brings the Dawn of AGI, but AI Security Deserves Equal Attention

In the Web3 space, security has always been a central concern. Starting from the "impossible triangle" proposed by Ethereum founder Vitalik Buterin (the difficulty of simultaneously achieving security, decentralization, and scalability in a blockchain network), a range of cryptographic technologies has emerged:

  • Zero Trust Security Model: Based on the principle of "never trust, always verify," it enforces strict authentication and authorization for every access request.

  • Decentralized Identity (DID): A standard for identity that does not require a centralized registry, providing a new way to manage identity in the Web3 ecosystem.

  • Fully Homomorphic Encryption (FHE): An advanced technology that allows computations on data in an encrypted state, particularly suitable for scenarios such as cloud computing and data outsourcing.

Among these technologies, fully homomorphic encryption, the most recent of these cryptographic approaches, is expected to become a key technology for solving security problems in the AI era. Because it allows computation to be performed directly on encrypted data, it opens new possibilities for privacy protection.
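
To make the idea concrete, here is a minimal, deliberately insecure sketch of the Paillier cryptosystem in Python. Paillier is only additively homomorphic; full FHE schemes such as BGV, CKKS, or TFHE extend the same principle to arbitrary computation on ciphertexts. The demo primes and messages below are chosen purely for illustration.

```python
import math
import random

# Toy Paillier cryptosystem: additively homomorphic only, tiny insecure keys.
# It illustrates the core idea behind FHE -- computing on data that stays encrypted.
p, q = 1117, 1123              # demo primes; never use keys this small in practice
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)   # Carmichael's lambda(n)
g = n + 1                      # standard simple choice of generator
mu = pow(lam, -1, n)           # precomputed decryption constant


def encrypt(m: int) -> int:
    r = random.randrange(1, n)             # random blinding factor coprime to n
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2


def decrypt(c: int) -> int:
    L = (pow(c, lam, n2) - 1) // n         # L(x) = (x - 1) / n
    return (L * mu) % n


# Homomorphic property: multiplying ciphertexts adds the underlying plaintexts.
c1, c2 = encrypt(12), encrypt(30)
c_sum = (c1 * c2) % n2                     # computed without ever decrypting
assert decrypt(c_sum) == 42
print("decrypted sum:", decrypt(c_sum))
```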

To address the security challenges posed by AI, action can be taken at the following levels:

  1. Data Layer: Ensure that all user input (including biometric features, voice, and so on) is processed in encrypted form, so that even the AI system itself cannot decrypt the original data.

  2. Algorithm Layer: Use FHE to achieve "encrypted model training," so that even developers cannot directly observe the AI's decision-making process.

  3. Collaboration Layer: In multi-agent systems, use threshold encryption so that compromising a single node does not lead to a global data leak (a minimal sketch of the threshold idea follows this list).
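
Production threshold encryption combines a public-key scheme with secret sharing, so that decryption requires a quorum of nodes. The sketch below shows only the threshold part, using Shamir's t-of-n secret sharing over a prime field; the 3-of-5 parameters and the demo secret are arbitrary choices for illustration.

```python
import random

# Minimal Shamir (t-of-n) secret sharing over a prime field. Any t shares
# reconstruct the secret; fewer than t shares reveal nothing about it.
PRIME = 2**127 - 1  # a Mersenne prime, comfortably larger than the demo secret


def split(secret: int, n_shares: int, threshold: int):
    """Split `secret` into n_shares points on a random degree-(threshold - 1) polynomial."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]

    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

    return [(x, f(x)) for x in range(1, n_shares + 1)]


def recover(shares) -> int:
    """Lagrange interpolation at x = 0 recovers the polynomial's constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


shares = split(secret=123456789, n_shares=5, threshold=3)
assert recover(shares[:3]) == 123456789    # any 3 shares are enough
assert recover(shares[1:4]) == 123456789
# A single compromised node's share (or any two) leaks nothing about the secret.
```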

As AI technology continues to approach human-level intelligence, we need correspondingly stronger defenses. FHE not only addresses today's security problems but also lays a foundation for the coming era of strong AI. On the road to AGI, FHE is no longer optional but a necessary condition for the safe development of AI.
