Architecture
How viem-go is structured, how requests flow, and where the performance comes from
viem-go mirrors viem's Client → Transport → Actions architecture, but implemented in idiomatic Go (contexts, channels, goroutines) with a strong focus on reducing RPC round-trips and minimizing per-call overhead.
Clients (`PublicClient`, `WalletClient`) are built on top of a shared `BaseClient`, and every action ultimately goes through a single `client.Request(...)` entrypoint. Any transport (HTTP, WebSocket, Fallback) works with any client type -- mix and match as needed. Every request flows through the same layered stack: your application code sits at the top, and each layer adds a specific responsibility before the request reaches the JSON-RPC provider.
| Package | What it's for |
|---|---|
| `client/` | `BaseClient`, `PublicClient`, `WalletClient`: config + one `Request` entrypoint + convenience methods |
| `client/transport/` | HTTP/WS/custom/fallback JSON-RPC transports (retries, timeouts, batching) |
| `actions/public/` | Read + watch actions (call, logs, blocks, receipts, multicall, watchers) |
| `actions/wallet/` | Wallet actions (send tx, sign, `writeContract`, EIP-5792 `sendCalls`, etc.) |
| `abi/` | ABI parsing + function/event encode/decode helpers |
| `contract/` | Generic typed contract reads/writes and typed descriptors |
| `contracts/` | Prebuilt contract bindings (e.g. ERC-20) |
| `chain/` + `chain/definitions` | Chain metadata (IDs, RPC URLs, known contracts like Multicall3) |
| `types/` | Shared RPC/block/tx/log types and option structs used across the stack |
| `utils/` | Hot-path utilities: LRU cache, hashing/signature helpers, formatting, etc. |
| `utils/rpc` | Low-level HTTP + WebSocket JSON-RPC clients + batch scheduler |
| `utils/observe`, `utils/poll` | Shared subscription dedupe + polling primitives (Go equivalents of viem utilities) |
The `contracts/` package includes prebuilt ERC-20 bindings (`Name`, `Symbol`, `BalanceOf`, etc.) so you don't need to provide raw ABI JSON strings for common token interactions.

Every interaction with the blockchain follows one of three common paths. Understanding these helps you reason about latency, batching, and where optimizations apply.
The most common path. Your code calls ReadContract, which ABI-encodes the calldata, sends a single eth_call via the transport, and decodes the typed result back to you.
When you need to read many values at once, Multicall aggregates them into a single eth_call to the Multicall3 contract. This dramatically reduces RPC round-trips -- especially when combined with concurrent batching across goroutines.
There are two separate batching layers that can both apply:

- Transport-level JSON-RPC batching: the batch scheduler in `utils/rpc` coalesces concurrent requests into a single batched payload.
- Multicall aggregation: many contract reads are packed into one `eth_call` to Multicall3.

Watch actions stream real-time events over a Go channel. The transport determines the strategy: WebSocket uses native `eth_subscribe` for low-latency push notifications, while HTTP falls back to polling on an interval. An observer layer deduplicates watchers so multiple consumers share a single source.
viem-go's performance advantages come from two directions: reducing the number of network round-trips, and lowering the CPU cost of hot-path operations like ABI encoding and event decoding.
Most Ethereum "slowness" is network + provider latency. The #1 thing that improves real-world throughput is doing fewer requests:
- Aggregate many reads into one `eth_call` via multicall.
- Enable transport-level batching so concurrent JSON-RPC requests share a round-trip.

Even when RPC dominates, per-call overhead still matters on hot paths such as ABI encoding and event decoding, and viem-go includes specialized fast paths where they pay off most.
One concrete example is Multicall3 aggregate3 encoding/decoding: viem-go includes a hand-rolled encoder/decoder that writes/reads directly to/from bytes (no reflection, minimal allocations) for large call arrays.
The repo includes an apples-to-apples benchmark harness under benchmarks/, with generated reports in benchmarks/results/.
The headline numbers are in benchmarks/results/comparison.md.
The longer benchmarks/results/full-report.md also reports "average" speedups (a different aggregation), but the direction is consistent: viem-go's hot paths are substantially cheaper in this harness.
go-ethereum is the foundation for much of the Go ecosystem, and viem-go uses parts of it (e.g. ABI parsing and common types). The difference is layering and workload shape.
So the strongest claim is:

> viem-go is optimized for application workloads (lots of reads, lots of ABI work, batching, fan-out) by reducing round-trips and providing specialized fast paths where generic approaches become costly.