The Intersection of AI and DePIN: The Rise of Decentralized GPU Networks Reshaping a $30 Billion Market

Recently, artificial intelligence (AI) and decentralized physical infrastructure networks (DePIN) have become hot topics in Web3, with market capitalizations reaching $30 billion and $23 billion, respectively. This article explores the intersection of the two and surveys the development of the related protocols.

In the AI technology stack, DePIN networks empower AI by providing computing resources. Because large tech companies have driven a GPU shortage, other teams developing AI models struggle to obtain sufficient GPU compute. The traditional approach is to turn to centralized cloud service providers, but this requires signing inflexible long-term contracts and is inefficient.

DePIN provides a more flexible and cost-effective alternative by incentivizing resource contributions through token rewards. In the field of AI, DePIN integrates GPU resources from individual owners into data centers, offering users a unified supply. This not only provides developers with customized and on-demand computing power but also creates additional income for GPU owners.
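
To make that model concrete, here is a minimal, hypothetical sketch of how such a marketplace could match a compute request to the cheapest suitable GPU listing and pay the provider in tokens. The data structures, prices, and 5% protocol fee are illustrative assumptions, not any listed protocol's actual mechanism.

```python
from dataclasses import dataclass

@dataclass
class GpuListing:
    provider: str
    model: str           # e.g. "RTX 4090", "A100" (illustrative)
    hourly_price: float  # quoted in USD-equivalent tokens

@dataclass
class ComputeRequest:
    model_required: str
    hours: int

def match(listings: list[GpuListing], request: ComputeRequest) -> GpuListing | None:
    """Pick the cheapest listing that satisfies the hardware requirement."""
    candidates = [l for l in listings if l.model == request.model_required]
    return min(candidates, key=lambda l: l.hourly_price, default=None)

def token_reward(listing: GpuListing, request: ComputeRequest, protocol_fee: float = 0.05) -> float:
    """Provider earns the job price minus an assumed 5% protocol fee."""
    gross = listing.hourly_price * request.hours
    return gross * (1 - protocol_fee)

listings = [
    GpuListing("alice", "A100", 1.50),
    GpuListing("bob", "A100", 1.20),
    GpuListing("carol", "RTX 4090", 0.40),
]
job = ComputeRequest(model_required="A100", hours=8)
best = match(listings, job)
if best:
    print(best.provider, round(token_reward(best, job), 2))  # bob 9.12
```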

Currently, there are multiple AI DePIN networks in the market, each with its own characteristics. Next, we will explore the functions, goals, and achievements of each protocol to gain a deeper understanding of the differences between them.

Overview of AI DePIN Networks

Render is a pioneer in P2P GPU computing networks, initially focused on content creation rendering and later expanded to AI computing tasks. The project was founded by the Oscar-winning cloud graphics company OTOY, and its GPU network has been used by major companies like Paramount and PUBG. Render has also collaborated with Stability AI to integrate AI models into the 3D content rendering process.

Akash is positioned as a "super cloud" platform that supports storage, GPU, and CPU computing. It utilizes a container platform and Kubernetes-managed computing nodes to seamlessly deploy software across different environments. Applications such as Mistral AI's LLM chatbot and Stability AI's text-to-image generation model are running on Akash.

io.net provides a distributed GPU cloud cluster specifically for AI and machine learning. The company originally started as a quantitative trading firm and later transformed into its current business. Its IO-SDK is compatible with frameworks such as PyTorch and TensorFlow, and the multi-layer architecture can dynamically scale according to demand. io.net also collaborates with Render, Filecoin, and others to integrate GPU resources.

Gensyn focuses on GPU networks for machine learning and deep learning computation. It achieves an efficient verification mechanism through techniques such as proof of learning and a graph-based pinpoint protocol. Gensyn can fine-tune pre-trained base models to accomplish more specific tasks.

Aethir specializes in providing enterprise-grade GPUs, mainly for compute-intensive fields such as AI, machine learning, and cloud gaming. Containers in its network act as virtual endpoints for cloud applications, shifting workloads from local devices into the network to deliver a low-latency experience. Aethir has also expanded into cloud phone services and has established partnerships with multiple Web2 and Web3 companies.

Phala Network serves as the execution layer for Web3 AI solutions, addressing privacy issues through a trusted execution environment (TEE). It enables AI agents to be controlled by on-chain smart contracts and plans to support TEE GPUs like H100 in the future to enhance computing power.

Project Comparison

| | Render | Akash | io.net | Gensyn | Aethir | Phala |
|---|---|---|---|---|---|---|
| Hardware | GPU & CPU | GPU & CPU | GPU & CPU | GPU | GPU | CPU |
| Business Focus | Graphics rendering and AI | Cloud computing, rendering and AI | AI | AI | AI, cloud gaming and telecom | On-chain AI execution |
| AI Task Type | Inference | Training & inference | Training & inference | Training | Training | Execution |
| Work Pricing | Performance-based pricing | Reverse auction | Market pricing | Market pricing | Bidding system | Stake-based calculation |
| Blockchain | Solana | Cosmos | Solana | Gensyn | Arbitrum | Polkadot |
| Data Privacy | Encryption & hashing | mTLS authentication | Data encryption | Secure mapping | Encryption | TEE |
| Work Fees | 0.5-5% per job | 20% USDC, 4% AKT | 2% USDC, 0.25% reserve fee | Low fees | 20% per session | Proportional to staked amount |
| Security | Proof of Render | Proof of Stake | Proof of Computation | Proof of Stake | Proof of Rendering Capacity | Inherited from relay chain |
| Completion Proof | - | - | Proof of Time-Lock | Proof of Learning | Proof of Rendering Work | TEE proof |
| Quality Assurance | Dispute resolution | - | - | Verifiers and whistleblowers | Checker nodes | Remote attestation |
| GPU Cluster | No | Yes | Yes | Yes | Yes | No |

Importance

Availability of Clusters and Parallel Computing

Training complex AI models requires powerful computing capabilities and often relies on distributed computing. Distributed frameworks that pool GPUs into clusters improve training efficiency and scalability while maintaining model accuracy. For example, OpenAI's GPT-4 model has over 1.8 trillion parameters and was trained using approximately 25,000 Nvidia A100 GPUs over a period of 3-4 months.
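
As a rough scale check, those figures translate into tens of millions of GPU-hours; the day counts below are the approximate 3-4 month range from the text, not exact numbers.

```python
# Approximate GPU-hours implied by ~25,000 A100s running for 3-4 months.
gpus = 25_000
for days in (90, 120):                      # roughly 3 and 4 months
    gpu_hours = gpus * days * 24
    print(f"{days} days -> {gpu_hours:,} GPU-hours")
# 90 days -> 54,000,000 GPU-hours
# 120 days -> 72,000,000 GPU-hours
```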

Most projects have now integrated clusters for parallel computing. io.net has collaborated with other projects and deployed over 3,800 clusters in the first quarter of 2024. Although Render does not support clusters, its working principle is similar, breaking down a single frame to be processed simultaneously across multiple nodes. Phala currently only supports CPUs but allows CPU worker clustering.
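
For a sense of how a single job spans a GPU cluster, below is a minimal data-parallel training sketch using PyTorch's DistributedDataParallel. It assumes a standard launcher such as `torchrun` sets the rank environment variables; the model and data are placeholders, and nothing here uses any specific network's SDK.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # One process per GPU; NCCL handles cross-node gradient communication.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)   # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(100):                                 # placeholder loop
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).pow(2).mean()
        loss.backward()          # gradients are all-reduced across all workers
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # e.g. launched with: torchrun --nproc_per_node=8 train.py
```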

Data Privacy

The development of AI models requires large datasets, which may contain sensitive information, so ensuring data privacy is crucial to returning control of the data to its providers. Most projects adopt some form of data encryption. io.net recently partnered with Mind Network to launch fully homomorphic encryption (FHE), which allows encrypted data to be processed without decryption. Phala Network uses a trusted execution environment (TEE) to prevent external processes from accessing or modifying data.
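
As a baseline illustration of the "encrypt before handing data to an untrusted provider" pattern, the sketch below uses the `cryptography` package's Fernet recipe. Note that FHE and TEEs go further by allowing computation on protected data, which plain encryption at rest does not; this example only shows the simpler baseline most projects adopt.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # stays with the data owner
cipher = Fernet(key)

# Placeholder training record standing in for a sensitive dataset row.
training_batch = b'{"prompt": "...", "label": "..."}'
payload = cipher.encrypt(training_batch)   # safe to upload to a provider

# Only a party holding the key (e.g. code running inside a TEE) can recover it.
assert cipher.decrypt(payload) == training_batch
```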

Proof of Computation Completion and Quality Checks

Because these networks offer a wide range of services, from rendering to AI computation, final output quality may not always meet user standards, so completion proofs and quality checks benefit users. Gensyn and Aethir generate proofs that work has been completed and that quality checks have been performed. io.net's proof indicates that the performance of the rented GPUs is being fully utilized. Render recommends a dispute-resolution process to penalize problematic nodes. Phala generates TEE proofs to ensure that AI agents perform the required operations.
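
The sketch below illustrates the general idea of a completion proof in simplified form: the provider commits to a hash of its deterministic output, and a verifier re-runs a cheap, sampled slice of the work and compares commitments. This is a conceptual illustration only, not Gensyn's proof-of-learning or Aethir's checker-node mechanism.

```python
import hashlib

def run_job(seed: int) -> bytes:
    """Placeholder for deterministic GPU work (e.g. rendering one frame)."""
    return hashlib.sha256(f"result-of-{seed}".encode()).digest()

def commit(output: bytes) -> str:
    """Hash commitment over the job output."""
    return hashlib.sha256(output).hexdigest()

# Provider side: run the job and publish a commitment alongside the result.
seed = 42
output = run_job(seed)
proof = commit(output)

# Verifier side: re-run the sampled work and check the commitment matches.
assert commit(run_job(seed)) == proof, "provider's completion proof is invalid"
```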

Hardware Statistics

| | Render | Akash | io.net | Gensyn | Aethir | Phala |
|---|---|---|---|---|---|---|
| Number of GPUs | 5,600 | 384 | 38,177 | - | 40,000+ | - |
| Number of CPUs | 114 | 14,672 | 5,433 | - | - | 30,000+ |
| H100/A100 Quantity | - | 157 | 2,330 | - | 2,000+ | - |
| H100 Cost/Hour | - | $1.46 | $1.19 | - | - | - |
| A100 Cost/Hour | - | $1.37 | $1.50 | $0.55 (expected) | $0.33 (expected) | - |

Requirements for High-Performance GPUs

AI model training tends to use high-performance GPUs such as Nvidia's A100 and H100. The H100's inference performance is 4 times faster than the A100's, making it the preferred choice for large companies training LLMs. Decentralized GPU marketplaces need to offer enough high-performance hardware to compete with their Web2 counterparts. io.net and Aethir each have over 2,000 H100 and A100 units, making them better suited for large-model computation.

The cost of decentralized GPU services is now much lower than that of centralized services. Gensyn and Aethir claim to offer A100-equivalent hardware for rent at less than $1 per hour. However, GPU clusters connected over the network may be limited in memory, making them less suitable for LLMs with large parameter counts and datasets than GPUs connected via NVLink.
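
A back-of-the-envelope comparison using the A100 rates quoted in the table above shows how quickly the difference compounds. The job size (8 A100s for 72 hours) and the $3.00/hour centralized baseline are illustrative assumptions, not quoted prices.

```python
# Cost of an assumed 8-GPU, 72-hour job at the A100 hourly rates listed above.
gpus, hours = 8, 72
gpu_hours = gpus * hours                      # 576 GPU-hours

rates = {
    "Akash (A100)": 1.37,
    "io.net (A100)": 1.50,
    "Gensyn (A100, expected)": 0.55,
    "Aethir (A100, expected)": 0.33,
    "Assumed centralized baseline": 3.00,     # illustrative assumption
}

for name, usd_per_hour in rates.items():
    print(f"{name:30s} ${gpu_hours * usd_per_hour:,.2f}")
# e.g. Gensyn's expected rate comes to ~$316.80 versus ~$1,728.00
# for the assumed centralized baseline.
```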

Nevertheless, decentralized GPU networks still provide powerful computing capacity and scalability for distributed computing tasks, opening up opportunities to build more AI and ML use cases.

Providing Consumer-Grade GPUs and CPUs

Although GPUs are the main processing units, CPUs also play an important role in AI model training. Consumer-grade GPUs can be used for smaller-scale tasks, such as fine-tuning pre-trained models or training small models on small datasets. Projects like Render, Akash, and io.net also serve this market by utilizing idle consumer GPU resources.
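
Below is a minimal sketch of the kind of workload a consumer GPU handles comfortably: freezing a pretrained backbone and fine-tuning only a small classification head, here using torchvision's ResNet-18 with placeholder data and hyperparameters.

```python
import torch
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load a pretrained backbone and freeze it; only the new head is trained.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = torch.nn.Linear(model.fc.in_features, 10)   # new 10-class head
model = model.to(device)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

# Placeholder batch standing in for a small labeled dataset.
images = torch.randn(16, 3, 224, 224, device=device)
labels = torch.randint(0, 10, (16,), device=device)

for epoch in range(3):
    logits = model(images)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```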

Conclusion

The AI DePIN field is still in its early stages and faces challenges. However, the number of tasks executed on these networks and the amount of hardware on offer have both increased significantly, highlighting the growing demand for alternatives to Web2 cloud providers' hardware resources. This trend demonstrates the product-market fit of AI DePIN networks, which effectively address challenges on both the supply and demand sides.

Looking ahead, AI is expected to develop into a thriving market worth trillions of dollars. These decentralized GPU networks will play a key role in providing developers with cost-effective computing alternatives, making significant contributions to the future landscape of AI and computing infrastructure.
