Cloudverse is OpenGPU’s decentralized, cloud-native compute environment that lets users seamlessly deploy workloads on distributed GPU infrastructure. It offers the flexibility of traditional cloud providers (such as AWS or GCP) with the cost-efficiency, transparency, and decentralization of a Web3-native system.

It is particularly well suited to compute-intensive workloads in fields such as AI/ML, high-performance computing, scientific research, data science, and generative media.

Core Attributes:

  • 🔗 Runs on a global network of GPU providers (individuals, data centers, edge nodes)
  • 🚀 Designed to deliver sub-60-second job deployment
  • 💰 Metered billing via smart contracts (Cloud Payments)
  • 🌐 Wallet-native authentication (no user accounts needed)
  • 📡 Real-time job streaming, logs, and output retrieval

Cloudverse turns the GPU economy into an open marketplace: programmable, scalable, and community-owned.
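The "wallet-native authentication" attribute above is the familiar sign-in-with-your-wallet pattern: the dashboard asks your wallet to sign a one-time message and identifies you by the recovered address, so no username or password is stored. The sketch below illustrates that pattern with the eth-account library; the nonce format and verification flow are illustrative assumptions, not OpenGPU's published scheme.

```python
# Minimal sketch of wallet-native authentication (sign-in via signature,
# no user account). The nonce and verification flow are generic examples,
# not OpenGPU's published implementation.
from eth_account import Account
from eth_account.messages import encode_defunct

# The dashboard/backend would normally issue this one-time nonce.
nonce = "cloudverse-login-7f3a9c"
message = encode_defunct(text=f"Sign in to Cloudverse: {nonce}")

# User side: the wallet (e.g., MetaMask) signs the message locally.
wallet = Account.create()  # stand-in for the user's real wallet key
signed = Account.sign_message(message, private_key=wallet.key)

# Server side: recover the address from the signature; the address itself
# identifies the user, so no password is ever stored.
recovered = Account.recover_message(message, signature=signed.signature)
assert recovered == wallet.address
print(f"Authenticated wallet: {recovered}")
```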


💳 Cloud Payments: Pay-as-You-Compute

Cloud Payments is the payment and metering layer within Cloudverse that powers trustless, on-chain billing. It ensures that users pay only for the exact amount of compute they consume, with smart contracts managing pricing, deposits, refunds, and payment settlement.

🌍 Supported Payment Options:

  • $OPEN – Native token (default and incentivized)
  • Stablecoins – $USDC and $DAI via cross-chain bridges (Polygon, Base)
  • Fiat – via integrated payment gateways (Stripe, Transak), coming soon

🔁 Workflow:

  1. The user estimates the job cost via the OpenGPU dashboard or CLI
  2. Funds are temporarily locked in a smart contract (escrow)
  3. Upon job success:
    • The GPU node is paid
    • Unused gas/credits are returned to the user
    • A receipt and audit log are generated

Cloud Payments removes the need for centralized invoicing, prepaid credit systems, or trust in intermediaries. The escrow flow is sketched below.
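The escrow workflow can be pictured as a small state machine: estimate, lock, execute, settle, refund. The sketch below models that accounting off-chain in plain Python so the numbers are easy to follow; the 20% pre-bill buffer and the function names are illustrative assumptions, not the deployed Cloud Payments contracts.

```python
# Illustrative off-chain model of the Cloud Payments flow
# (estimate -> lock in escrow -> execute -> settle/refund).
# The pre-bill buffer and names are hypothetical; the real flow runs on-chain.
from dataclasses import dataclass

@dataclass
class EscrowedJob:
    escrowed_open: float       # pre-billed amount locked in the escrow contract
    metered_open: float = 0.0  # actual usage measured during execution

def estimate_cost(rate_per_hr: float, est_hours: float, buffer: float = 1.2) -> float:
    """Pre-bill: predicted cost plus a refundable safety buffer (assumed 20%)."""
    return rate_per_hr * est_hours * buffer

def settle(job: EscrowedJob) -> tuple[float, float]:
    """On completion: pay the provider the metered amount, refund the rest."""
    payout = min(job.metered_open, job.escrowed_open)
    refund = job.escrowed_open - payout
    return payout, refund

# Example: a job estimated at 2 hours on a 140 OPEN/hr GPU that runs 1.6 hours.
job = EscrowedJob(escrowed_open=estimate_cost(140, 2.0))
job.metered_open = 140 * 1.6
payout, refund = settle(job)
print(f"escrowed={job.escrowed_open:.0f} OPEN, payout={payout:.0f}, refund={refund:.0f}")
```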


🛠 Use Cases: What You Can Build with Cloudverse

Whether you're a solo ML researcher or a startup deploying production AI, Cloudverse has you covered:

🧠 Machine Learning / AI

  • Train LLMs or fine-tune small/medium-sized models
  • Deploy inference endpoints for generative AI (chatbots, Stable Diffusion)
  • Experiment with custom model architectures in PyTorch or TensorFlow

🧪 Scientific Computing

  • GPU-accelerated simulations in physics, chemistry, and genomics
  • Monte Carlo methods or fluid dynamics
  • GPU offloading for compute-heavy research

🎮 Real-Time Rendering & Media

  • Render high-fidelity scenes in Blender or Unity
  • Offload GPU rendering for 3D NFTs or game environments
  • Generate synthetic datasets for AR/VR

🔍 Data Science & Quant

  • Large-scale data analysis (Pandas, Dask, RAPIDS)
  • Backtest trading algorithms or run ZK-proof circuits
  • Natural language processing (e.g., vector embeddings, RAG)

Cloudverse brings the infrastructure muscle to the open compute frontier.


🧭 How It Works (Expanded)

🔐 Step 1: Connect Wallet & Fund Cloud Wallet

  • Connect your crypto wallet (e.g., MetaMask, Phantom, Coinbase Wallet)
  • Deposit $OPEN (or stablecoins) into your Cloud Wallet, a non-custodial smart contract that holds your compute credits
  • View your balance, active jobs, and staked $OPEN from the OpenGPU dashboard

🔄 Optional: stake $OPEN to receive priority access and discounts
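Funding and checking the Cloud Wallet ultimately comes down to reading token balances on-chain. The sketch below reads an ERC-20 balance with web3.py; the RPC endpoint, token contract address, and wallet address are placeholders, and the Cloud Wallet contract interface itself is not shown because it is not documented here.

```python
# Read an ERC-20 ($OPEN-style) token balance with web3.py. The RPC URL and
# both addresses below are placeholders for illustration, not real deployments.
from web3 import Web3

ERC20_ABI = [
    {"name": "balanceOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "owner", "type": "address"}],
     "outputs": [{"name": "", "type": "uint256"}]},
    {"name": "decimals", "type": "function", "stateMutability": "view",
     "inputs": [], "outputs": [{"name": "", "type": "uint8"}]},
]

w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))  # placeholder RPC endpoint
token = w3.eth.contract(
    address=Web3.to_checksum_address("0x0000000000000000000000000000000000000000"),  # placeholder token
    abi=ERC20_ABI,
)
wallet = Web3.to_checksum_address("0x0000000000000000000000000000000000000000")      # your address

raw = token.functions.balanceOf(wallet).call()
decimals = token.functions.decimals().call()
print(f"Cloud Wallet credit: {raw / 10 ** decimals} OPEN")
```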

🚀 Step 2: Launch a Compute Job

  • Choose a GPU class: NVIDIA A100, RTX 4090, or tiered performance categories
  • Select a runtime environment:
    • Pre-configured: PyTorch, TensorFlow, CUDA, Jupyter
    • Custom Docker image: bring your own dependencies
  • Upload your code and data (via IPFS, GitHub, or direct upload)
  • Set job parameters (runtime, region preference, output location)

Cloudverse automatically matches your job to a node with compatible specs; a submission sketch follows.
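A job launch like this maps naturally onto the POST /jobs endpoint listed in the Developer API section. The sketch below shows what such a request could look like from Python; because the API is not yet released, the base URL, auth header, and payload field names are assumptions.

```python
# Hypothetical job submission against the upcoming Cloudverse REST API.
# The endpoint path comes from the Developer API section; the base URL,
# auth header, and payload fields are illustrative assumptions.
import requests

BASE_URL = "https://api.example-cloudverse.io"               # placeholder
HEADERS = {"Authorization": "Bearer <wallet-signed-token>"}  # placeholder auth scheme

job_spec = {
    "gpu_class": "RTX_4090",                     # or "A100", or a tiered category
    "runtime": "pytorch-2.3-cuda12",             # assumed name of a pre-configured image
    "code_source": "ipfs://<cid-of-your-code>",  # IPFS, GitHub, or direct upload
    "params": {"max_runtime_hours": 2, "region": "eu-west", "output": "ipfs"},
}

resp = requests.post(f"{BASE_URL}/jobs", json=job_spec, headers=HEADERS, timeout=30)
resp.raise_for_status()
job_id = resp.json()["id"]                       # assumed response shape
print(f"Submitted job {job_id}")
```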

⚙️ Step 3: Metering + Payment Flow

  • Pre-bill: a predicted job cost is generated from historical usage data
  • Escrow: the amount is held in a time-locked smart contract
  • Execution: the GPU node processes your job in a containerized sandbox
  • Completion:
    • The smart contract checks for a successful hash/output signature
    • Fees are disbursed instantly
    • Audit logs are made available via IPFS or Arweave
    • Any unused funds are refunded

This approach guarantees fair usage, verifiable work, and zero hidden fees.
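The completion check hinges on comparing a hash of the delivered output against what the provider committed. The snippet below illustrates that verification step off-chain; on Cloudverse this comparison happens inside the settlement contract, and the commit/verify structure shown here is an assumption.

```python
# Minimal illustration of the completion check: release payment only if the
# delivered output matches the digest the provider committed to earlier.
import hashlib

def output_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def settle_if_valid(delivered: bytes, committed_digest: str, fee_open: float) -> float:
    """Return the fee to disburse; 0 means the output failed verification."""
    if output_digest(delivered) != committed_digest:
        return 0.0        # dispute path: escrowed funds are not released
    return fee_open       # happy path: fees disbursed instantly

artifact = b"model-weights-v1"            # stand-in for the job output
commitment = output_digest(artifact)      # the provider's prior commitment
print(settle_if_valid(artifact, commitment, fee_open=224.0))     # -> 224.0
print(settle_if_valid(b"tampered", commitment, fee_open=224.0))  # -> 0.0
```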

📊 Step 4: Monitor, Scale, Repeat

From the OpenGPU dashboard or CLI:

  • Monitor GPU usage (memory, compute cycles, cost)
  • Pause, resume, or clone workloads
  • Schedule repeating jobs or pipeline runs
  • Export logs, outputs, or performance reports for compliance

All jobs are containerized, stateless, and can be re-run from identical seeds.
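Programmatic monitoring pairs naturally with the GET /jobs/{id} endpoint listed in the Developer API section. The polling loop below is a sketch under the same assumptions as the submission example (placeholder base URL, auth, and response fields).

```python
# Hypothetical status polling for a submitted job via GET /jobs/{id}.
# Base URL, auth header, and response fields are assumed for illustration.
import time
import requests

BASE_URL = "https://api.example-cloudverse.io"               # placeholder
HEADERS = {"Authorization": "Bearer <wallet-signed-token>"}  # placeholder

def wait_for_job(job_id: str, poll_seconds: int = 15) -> dict:
    """Poll until the job reaches a terminal state, then return its record."""
    while True:
        job = requests.get(f"{BASE_URL}/jobs/{job_id}", headers=HEADERS, timeout=30).json()
        status = job.get("status")                           # assumed field
        print(f"[{time.strftime('%H:%M:%S')}] {job_id}: {status}")
        if status in ("completed", "failed", "cancelled"):   # assumed states
            return job
        time.sleep(poll_seconds)

# record = wait_for_job("job_123")
# print(record.get("output_uri"), record.get("cost_open"))   # assumed fields
```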

💡 Key Features (Expanded)

| Feature | Description |
| --- | --- |
| 🔐 Smart Metering | On-chain cost estimation + billing using job size, GPU time, and bandwidth |
| 🧾 Transparent Logging | Job logs + receipts stored immutably on IPFS or Filecoin |
| 🧠 Job Templates | Launch complex pipelines from YAML specs or a GitHub repo |
| 🧰 Dev-Friendly CLI | Submit jobs, manage wallets, and retrieve outputs via the CLI tool |
| 📦 Storage Sync | Upload/download code and data from IPFS, S3, or Web3Storage |
| 🌐 Edge Deployment | Route jobs to providers nearest your data (latency-aware) |
| 📤 Auto-Scaling | Schedule larger jobs to be split across multiple nodes |
| 🛡️ Provider Reputation | Nodes earn a score based on uptime, completion rates, and latency |


📦 Developer API (Coming Soon)

A full-featured REST & Web3 API will support:

  • POST /jobs – Submit a compute job
  • GET /jobs/{id} – View job status/logs
  • GET /wallets/balance – Check available credits
  • POST /stake – Stake $OPEN for discounts + governance
  • GET /providers – List available nodes by region/specs

You’ll be able to integrate Cloudverse directly into MLOps pipelines, dev tools, and backend systems.
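Since the API is still unreleased, the wrapper below only illustrates how the listed endpoints could hang together in a thin Python client; the base URL, authentication scheme, request bodies, and response shapes are assumptions.

```python
# Thin, hypothetical client for the endpoints listed above. The API is not
# yet released, so base URL, auth, payloads, and response shapes are assumed.
import requests

class CloudverseClient:
    def __init__(self, base_url: str, token: str):
        self.base_url = base_url.rstrip("/")
        self.session = requests.Session()
        self.session.headers["Authorization"] = f"Bearer {token}"

    def submit_job(self, spec: dict) -> dict:          # POST /jobs
        return self._call("POST", "/jobs", json=spec)

    def job(self, job_id: str) -> dict:                # GET /jobs/{id}
        return self._call("GET", f"/jobs/{job_id}")

    def balance(self) -> dict:                         # GET /wallets/balance
        return self._call("GET", "/wallets/balance")

    def stake(self, amount_open: float) -> dict:       # POST /stake
        return self._call("POST", "/stake", json={"amount": amount_open})

    def providers(self, region: str | None = None) -> dict:  # GET /providers
        params = {"region": region} if region else None
        return self._call("GET", "/providers", params=params)

    def _call(self, method: str, path: str, **kwargs) -> dict:
        resp = self.session.request(method, f"{self.base_url}{path}", timeout=30, **kwargs)
        resp.raise_for_status()
        return resp.json()

# client = CloudverseClient("https://api.example-cloudverse.io", "<wallet-signed-token>")
# print(client.balance())
```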


🪙 Pricing & Rewards (Example)

| GPU Type | Hourly Rate (OPEN) | Discount w/ Stake | Notes |
| --- | --- | --- | --- |
| NVIDIA A100 | 250 OPEN/hr | -20% with ≥100k staked | Great for large models |
| RTX 4090 | 140 OPEN/hr | -15% with ≥50k staked | Best balance of power |
| RTX 3090 | 90 OPEN/hr | -10% with ≥25k staked | Efficient for CV models |

💡 Revenue from Cloudverse jobs flows into the protocol treasury, which then redistributes 12% to $OPEN stakers.
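As a worked example of how the example rates combine with staking discounts, the snippet below prices a hypothetical 3-hour RTX 4090 job for a user staking 60k $OPEN (which clears the ≥50k tier):

```python
# Worked example using the example rates and discount tiers from the table above.
RATES = {"A100": 250, "RTX_4090": 140, "RTX_3090": 90}               # OPEN per hour
DISCOUNT_TIERS = [(100_000, 0.20), (50_000, 0.15), (25_000, 0.10)]   # (min stake, discount)

def job_cost(gpu: str, hours: float, staked_open: float) -> float:
    """Hourly rate x hours, reduced by the highest discount tier the stake clears."""
    discount = next((d for threshold, d in DISCOUNT_TIERS if staked_open >= threshold), 0.0)
    return RATES[gpu] * hours * (1 - discount)

print(job_cost("RTX_4090", hours=3, staked_open=60_000))  # 140 * 3 * 0.85 = 357.0 OPEN
```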


🚀 Getting Started

Begin with OPEN Cloudverse:

  1. Connect your wallet
  2. Fund your Cloud Wallet with $OPEN
  3. Launch your first GPU-powered job in minutes
  4. Track usage + receive output via the dashboard or CLI
