Architecture
How viem-go is structured, how requests flow, and where the performance comes from
viem-go mirrors viem's Client → Transport → Actions architecture, but implemented in idiomatic Go (contexts, channels, goroutines) with a strong focus on reducing RPC round-trips and minimizing per-call overhead.
The big idea
- Clients are configured runtimes (PublicClient, WalletClient) built on top of a shared BaseClient.
- Actions are typed operations implemented in terms of client.Request(...).
- Transports implement JSON-RPC over HTTP/WebSocket (plus fallback/custom patterns).
- ABI/contract utilities sit "above" actions to encode/decode calldata/logs and provide ergonomic typed APIs.

Clients (PublicClient, WalletClient) share a common BaseClient core. Any transport (HTTP, WebSocket, Fallback) works with any client type -- mix and match as needed; a minimal construction sketch follows the component map below.

Component map
Every request flows through the same layered stack. Your application code sits at the top, and each layer adds a specific responsibility before the request reaches the JSON-RPC provider.
- Actions & contract helpers
  - contract.ReadContract[T](...)
  - contracts/erc20 bindings (Name, Symbol, BalanceOf, ...)
  - public.Multicall / MulticallConcurrent
  - public.WatchBlockNumber / WatchEvent / WatchContractEvent
  - wallet.SendTransaction / WriteContract / SignMessage
- Clients
  - PublicClient / WalletClient embed BaseClient
  - BaseClient holds config + transport + UID
  - BaseClient.Request(ctx, method, params...) -- the one RPC door
- Transports
  - HTTP (optional JSON-RPC batching, retries, timeouts)
  - WebSocket (subscriptions, keep-alive, reconnect)
  - Fallback (multi-endpoint + ranking)
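To make the mix-and-match point concrete, here is a minimal construction sketch. It is illustrative only: the import paths, constructor names (transport.NewHTTP, client.NewPublic), and the Request return shape are assumptions about the API surface -- only Request(ctx, method, params...) itself appears in the component map above.

```go
package main

import (
	"context"
	"fmt"
	"log"

	// Hypothetical import paths -- substitute the real module path.
	"example.com/viem-go/client"
	"example.com/viem-go/client/transport"
)

func main() {
	ctx := context.Background()

	// Transport layer: HTTP JSON-RPC (batching, retries, and timeouts live here).
	// transport.NewHTTP is an assumed constructor name.
	httpTransport := transport.NewHTTP("https://eth.example-rpc.com")

	// Client layer: PublicClient embeds the shared BaseClient, so the transport
	// could be swapped for WebSocket or Fallback without touching the rest.
	// client.NewPublic is likewise an assumed constructor name.
	pubClient := client.NewPublic(httpTransport)

	// Every action ultimately funnels through the single RPC door:
	// BaseClient.Request(ctx, method, params...).
	latest, err := pubClient.Request(ctx, "eth_blockNumber")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("latest block (raw JSON-RPC result):", latest)
}
```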
Main packages & responsibilities
| Package | What it's for |
|---|---|
| client/ | BaseClient, PublicClient, WalletClient: config + one Request entrypoint + convenience methods |
| client/transport/ | HTTP/WS/custom/fallback JSON-RPC transports (retries, timeouts, batching) |
| actions/public/ | Read + watch actions (call, logs, blocks, receipts, multicall, watchers) |
| actions/wallet/ | Wallet actions (send tx, sign, writeContract, EIP-5792 sendCalls, etc.) |
| abi/ | ABI parse + function/event encode/decode helpers |
| contract/ | Generic typed contract reads/writes and typed descriptors |
| contracts/ | Prebuilt contract bindings (e.g. erc20) |
| chain/ + chain/definitions | Chain metadata (IDs, RPC URLs, known contracts like Multicall3) |
| types/ | Shared RPC/block/tx/log types and option structs used across the stack |
| utils/ | Hot-path utilities: LRU cache, hashing/signature helpers, formatting, etc. |
| utils/rpc | Low-level HTTP + WebSocket JSON-RPC clients + batch scheduler |
| utils/observe, utils/poll | Shared subscription dedupe + polling primitives (Go equivalents of viem utilities) |
The contracts/ package includes prebuilt ERC-20 bindings (Name, Symbol, BalanceOf, etc.) so you don't need to provide raw ABI JSON strings for common token interactions.

How requests flow
Every interaction with the blockchain follows one of three common paths. Understanding these helps you reason about latency, batching, and where optimizations apply.
1. Read contract
The most common path. Your code calls ReadContract, which ABI-encodes the calldata, sends a single eth_call via the transport, and decodes the typed result back to you.
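For example, here is a hedged sketch of that path, reusing ctx and pubClient from the setup sketch above. Addresses use go-ethereum's common package (viem-go reuses its common types, per the comparison section below); the erc20 binding signature and the generic option-struct field names are assumptions about the exact API, not verbatim signatures.

```go
// Option A: the prebuilt contracts/erc20 bindings -- no ABI handling in your code.
// The (ctx, client, token, holder) argument order is an assumption.
usdc := common.HexToAddress("0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48")   // mainnet USDC
holder := common.HexToAddress("0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045") // any account

balance, err := erc20.BalanceOf(ctx, pubClient, usdc, holder)
if err != nil {
	log.Fatal(err)
}
fmt.Println("USDC balance:", balance)

// Option B (sketch): the generic typed read. The options-struct field names are
// assumed; the flow is the same -- ABI-encode calldata, one eth_call, decode to *big.Int.
//
//	balance, err := contract.ReadContract[*big.Int](ctx, pubClient, contract.ReadContractOptions{
//		Address:      usdc,
//		ABI:          erc20ABI, // parsed via the abi/ package
//		FunctionName: "balanceOf",
//		Args:         []any{holder},
//	})
```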
2. Multicall
When you need to read many values at once, Multicall aggregates them into a single eth_call to the Multicall3 contract. This dramatically reduces RPC round-trips -- especially when combined with concurrent batching across goroutines.
There are two separate batching layers that can both apply:
- Transport-level JSON-RPC batching (HTTP): batches multiple JSON-RPC requests into one HTTP request.
- Action-level Multicall aggregation: batches many contract calls into one eth_call to Multicall3.
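A sketch of the action-level layer, again reusing ctx, pubClient, usdc, and holder from the earlier sketches. The option/result types (types.MulticallOptions, types.MulticallCall) and the erc20.ABI export are assumed names; only Multicall and MulticallConcurrent come from the component map above.

```go
results, err := public.Multicall(ctx, pubClient, types.MulticallOptions{
	Contracts: []types.MulticallCall{
		{Address: usdc, ABI: erc20.ABI, FunctionName: "name"},
		{Address: usdc, ABI: erc20.ABI, FunctionName: "symbol"},
		{Address: usdc, ABI: erc20.ABI, FunctionName: "balanceOf", Args: []any{holder}},
	},
	// A per-call allow-failure flag would map onto Multicall3 aggregate3.
})
if err != nil {
	log.Fatal(err)
}
for _, r := range results {
	fmt.Println(r) // each result carries the decoded value (or a per-call error)
}

// All three reads travel in ONE eth_call to Multicall3. MulticallConcurrent
// additionally coalesces multicalls issued from many goroutines within a wait
// window -- the action-level batching described above.
```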
3. Watch block number
Watch actions stream real-time events over a Go channel. The transport determines the strategy: WebSocket uses native eth_subscribe for low-latency push notifications, while HTTP falls back to polling on an interval. An observer layer deduplicates watchers so multiple consumers share a single source.
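A sketch of consuming a watcher, reusing ctx and pubClient from earlier. The return shape of public.WatchBlockNumber (a receive-only channel, stopped via context cancellation) and the options struct name are assumptions for illustration; channel-based delivery is what the paragraph above describes.

```go
watchCtx, stop := context.WithCancel(ctx)
defer stop()

blocks, err := public.WatchBlockNumber(watchCtx, pubClient, types.WatchBlockNumberOptions{
	// Over WebSocket this rides a native eth_subscribe("newHeads") push;
	// over HTTP it falls back to polling on an interval.
})
if err != nil {
	log.Fatal(err)
}

go func() {
	for bn := range blocks { // channel closes when watchCtx is cancelled
		fmt.Println("new block:", bn)
	}
}()
```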
Where performance comes from
viem-go's performance advantages come from two directions: reducing the number of network round-trips, and lowering the CPU cost of hot-path operations like ABI encoding and event decoding.
Fewer RPC round-trips
Most Ethereum "slowness" is network + provider latency. The #1 thing that improves real-world throughput is making fewer requests:
- Multicall collapses many reads into one eth_call.
- The Multicall batcher collapses many concurrent multicalls into fewer, bigger multicalls (within a wait window), mirroring viem's batch.multicall.
- Optional HTTP JSON-RPC batching can collapse unrelated requests into one HTTP round-trip (when the provider supports it).
Lower per-call overhead in hot paths
Even when RPC dominates, per-call overhead matters for:
- decoding event logs at scale
- large multicall arrays (hundreds/thousands of calls)
- "inner loop" utilities (hashing, signature recovery, unit parsing)
viem-go includes specialized fast paths where they pay off most.
One concrete example is Multicall3 aggregate3 encoding/decoding: viem-go includes a hand-rolled encoder/decoder that writes/reads directly to/from bytes (no reflection, minimal allocations) for large call arrays.
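To give a feel for what "writes/reads directly to/from bytes" means in practice, here is a stripped-down sketch of the technique on a trivial call. This is not viem-go's encoder -- aggregate3 adds dynamic-array and nested-bytes layout on top of the same idea -- but it shows the no-reflection, single-allocation style:

```go
package abifast

import "math/big"

// balanceOfSelector is the first 4 bytes of keccak256("balanceOf(address)").
var balanceOfSelector = [4]byte{0x70, 0xa0, 0x82, 0x31}

// EncodeBalanceOfCall lays out selector + one 32-byte word in a single,
// exactly-sized allocation -- no reflection, no intermediate []interface{}.
func EncodeBalanceOfCall(owner [20]byte) []byte {
	out := make([]byte, 4+32)
	copy(out[:4], balanceOfSelector[:]) // function selector
	copy(out[4+12:], owner[:])          // address right-aligned in its 32-byte slot
	return out
}

// DecodeUint256Word reads a 32-byte return word straight into a big.Int,
// again without any generic unpacking machinery.
func DecodeUint256Word(word []byte) *big.Int {
	return new(big.Int).SetBytes(word[:32])
}
```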
Benchmarks
The repo includes an apples-to-apples benchmark harness under benchmarks/, with generated reports in benchmarks/results/.
From benchmarks/results/comparison.md:
- Geometric mean speedup: 7.12x (Go faster overall)
- Wins: 59/59 benchmarks
- Suite highlights:
- call: ~45.9x
- event: ~21.8x
- multicall: ~2.9x
- abi: ~7.3x
The longer benchmarks/results/full-report.md also reports "average" speedups (a different aggregation), but the direction is consistent: viem-go's hot paths are substantially cheaper in this harness.
Comparison with go-ethereum
go-ethereum is the foundation for much of the Go ecosystem, and viem-go uses parts of it (e.g. ABI parsing and common types). The difference is layering and workload shape:
- go-ethereum exposes low-level primitives (RPC client, ABI pack/unpack, etc.).
- viem-go provides the higher-level "viem patterns" out of the box:
- action-based API surface
- multicall + deployless multicall
- multicall batching across goroutines
- optional JSON-RPC batching in transports
- watcher utilities (poll vs subscribe) with dedupe patterns
So the strongest claim is:
viem-go is optimized for application workloads (lots of reads, lots of ABI work, batching, fan-out) by reducing round-trips and providing specialized fast paths where generic approaches become costly.
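To make the layering difference concrete, here is the same ERC-20 balance read written against go-ethereum's low-level primitives (its real, current API), with the hedged viem-go one-liner from the earlier sketches for comparison:

```go
package compare

import (
	"context"
	"math/big"
	"strings"

	ethereum "github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

const balanceOfABI = `[{"name":"balanceOf","type":"function","stateMutability":"view",
  "inputs":[{"name":"owner","type":"address"}],"outputs":[{"name":"","type":"uint256"}]}]`

// balanceWithGeth hand-assembles the pieces yourself: dial, parse ABI,
// pack calldata, eth_call, unpack.
func balanceWithGeth(ctx context.Context, rpcURL string, token, holder common.Address) (*big.Int, error) {
	cl, err := ethclient.Dial(rpcURL)
	if err != nil {
		return nil, err
	}
	defer cl.Close()

	parsed, err := abi.JSON(strings.NewReader(balanceOfABI))
	if err != nil {
		return nil, err
	}
	data, err := parsed.Pack("balanceOf", holder)
	if err != nil {
		return nil, err
	}
	raw, err := cl.CallContract(ctx, ethereum.CallMsg{To: &token, Data: data}, nil)
	if err != nil {
		return nil, err
	}
	out, err := parsed.Unpack("balanceOf", raw)
	if err != nil {
		return nil, err
	}
	return out[0].(*big.Int), nil
}

// The equivalent viem-go call (hedged sketch, signatures assumed as earlier):
//
//	balance, err := erc20.BalanceOf(ctx, pubClient, token, holder)
//
// with multicall aggregation, batching, and watchers layered on the same client.
```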