viem-go

Architecture

How viem-go is structured, how requests flow, and where the performance comes from

viem-go mirrors viem's Client → Transport → Actions architecture, but implemented in idiomatic Go (contexts, channels, goroutines) with a strong focus on reducing RPC round-trips and minimizing per-call overhead.

The big idea

  • Clients are configured runtimes (PublicClient, WalletClient) built on top of a shared BaseClient.
  • Actions are typed operations implemented in terms of client.Request(...).
  • Transports implement JSON-RPC over HTTP/WebSocket (plus fallback/custom patterns).
  • ABI/contract utilities sit "above" actions to encode/decode calldata/logs and provide ergonomic typed APIs.
All client types (PublicClient, WalletClient) share a common BaseClient core. Any transport (HTTP, WebSocket, Fallback) works with any client type -- mix and match as needed.

Component map

Every request flows through the same layered stack. Your application code sits at the top, and each layer adds a specific responsibility before the request reaches the JSON-RPC provider.

Your code
High-level helpers
  • contract.ReadContract[T](...)
  • contracts/erc20 bindings (Name, Symbol, BalanceOf, ...)
Actions
actions/public, actions/wallet
  • public.Multicall / MulticallConcurrent
  • public.WatchBlockNumber / WatchEvent / WatchContractEvent
  • wallet.SendTransaction / WriteContract / SignMessage
Client runtime
client/
  • PublicClient / WalletClient embed BaseClient
  • BaseClient holds config + transport + UID
  • BaseClient.Request(ctx, method, params...) — the one RPC door
Transport layer
client/transport/
  • HTTP (optional JSON-RPC batching, retries, timeouts)
  • WebSocket (subscriptions, keep-alive, reconnect)
  • Fallback (multi-endpoint + ranking)
Node / Provider (JSON-RPC)

Main packages & responsibilities

Package -- what it's for:

  • client/ -- BaseClient, PublicClient, WalletClient: config + one Request entrypoint + convenience methods
  • client/transport/ -- HTTP/WS/custom/fallback JSON-RPC transports (retries, timeouts, batching)
  • actions/public/ -- Read + watch actions (call, logs, blocks, receipts, multicall, watchers)
  • actions/wallet/ -- Wallet actions (send tx, sign, writeContract, EIP-5792 sendCalls, etc.)
  • abi/ -- ABI parse + function/event encode/decode helpers
  • contract/ -- Generic typed contract reads/writes and typed descriptors
  • contracts/ -- Prebuilt contract bindings (e.g. erc20)
  • chain/ + chain/definitions -- Chain metadata (IDs, RPC URLs, known contracts like Multicall3)
  • types/ -- Shared RPC/block/tx/log types and option structs used across the stack
  • utils/ -- Hot-path utilities: LRU cache, hashing/signature helpers, formatting, etc.
  • utils/rpc -- Low-level HTTP + WebSocket JSON-RPC clients + batch scheduler
  • utils/observe, utils/poll -- Shared subscription dedupe + polling primitives (Go equivalents of viem utilities)
The contracts/ package includes prebuilt ERC-20 bindings (Name, Symbol, BalanceOf, etc.) so you don't need to provide raw ABI JSON strings for common token interactions.
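As a sketch of what such a binding does under the hood, here is how an ABI-encoded string return value (e.g. from a name() call) is decoded; the helper name is illustrative, not viem-go's actual API:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// decodeABIString decodes a Solidity `string` return value: word 0 holds
// the byte offset of the data section, which starts with a 32-byte length
// word followed by the UTF-8 bytes padded to a 32-byte boundary.
func decodeABIString(ret []byte) (string, error) {
	if len(ret) < 64 {
		return "", fmt.Errorf("return data too short")
	}
	offset := binary.BigEndian.Uint64(ret[24:32]) // low 8 bytes of word 0
	if offset+32 > uint64(len(ret)) {
		return "", fmt.Errorf("offset out of range")
	}
	length := binary.BigEndian.Uint64(ret[offset+24 : offset+32])
	if offset+32+length > uint64(len(ret)) {
		return "", fmt.Errorf("length out of range")
	}
	return string(ret[offset+32 : offset+32+length]), nil
}

func main() {
	// ABI encoding of the string "Dai", as an ERC-20 name() call would return it.
	ret := make([]byte, 96)
	ret[31] = 0x20        // offset = 32
	ret[63] = 3           // length = 3
	copy(ret[64:], "Dai") // data, zero-padded to a 32-byte word
	name, _ := decodeABIString(ret)
	fmt.Println(name) // prints "Dai"
}
```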

How requests flow

Every interaction with the blockchain follows one of three common paths. Understanding these helps you reason about latency, batching, and where optimizations apply.

1. Read contract

The most common path. Your code calls ReadContract, which ABI-encodes the calldata, sends a single eth_call via the transport, and decodes the typed result back to you.

  1. contract.ReadContract[T] (or PublicClient.ReadContract)
  2. ABI encode calldata
  3. PublicClient.Call / public.Call: format request + block tag + overrides
  4. BaseClient.Request("eth_call", ...)
  5. HTTP / WebSocket transport: JSON-RPC request → response
  6. ABI decode result → typed value (T / *big.Int / struct / ...)

2. Multicall

When you need to read many values at once, Multicall aggregates them into a single eth_call to the Multicall3 contract. This dramatically reduces RPC round-trips -- especially when combined with concurrent batching across goroutines.

  1. public.Multicall / MulticallConcurrent
  2. Encode N calls: aggregate3 calldata + optional merge of concurrent multicalls (batch.multicall)
  3. BaseClient.Request("eth_call", ...) against Multicall3's aggregate3
  4. Decode aggregate3 results → per-call decoded values

There are two separate batching layers that can both apply:

  • Transport-level JSON-RPC batching (HTTP): batches multiple JSON-RPC requests into one HTTP request.
  • Action-level Multicall aggregation: batches many contract calls into one eth_call to Multicall3.

3. Watch block number

Watch actions stream real-time events over a Go channel. The transport determines the strategy: WebSocket uses native eth_subscribe for low-latency push notifications, while HTTP falls back to polling on an interval. An observer layer deduplicates watchers so multiple consumers share a single source.

  1. public.WatchBlockNumber(ctx, client, params) → returns <-chan events
  2. Observer dedupe: multiple watchers share one source per client UID
  3. HTTP (or forced poll): poll eth_blockNumber on an interval
  4. WebSocket: eth_subscribe(newHeads) → stream of block headers

Where performance comes from

viem-go's performance advantages come from two directions: reducing the number of network round-trips, and lowering the CPU cost of hot-path operations like ABI encoding and event decoding.

Fewer RPC round-trips

Most Ethereum "slowness" is network + provider latency. The #1 thing that improves real-world throughput is doing fewer requests:

  • Multicall collapses many reads into one eth_call.
  • The Multicall batcher collapses many concurrent multicalls into fewer bigger multicalls (within a wait window), mirroring viem's batch.multicall.
  • Optional HTTP JSON-RPC batching can collapse unrelated requests (when the provider supports it).

Lower per-call overhead in hot paths

Even when RPC dominates, per-call overhead matters for:

  • decoding event logs at scale
  • large multicall arrays (hundreds/thousands of calls)
  • "inner loop" utilities (hashing, signature recovery, unit parsing)

viem-go includes specialized fast paths where they pay off most.

One concrete example is Multicall3 aggregate3 encoding/decoding: viem-go includes a hand-rolled encoder/decoder that writes/reads directly to/from bytes (no reflection, minimal allocations) for large call arrays.
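That direct-to-bytes style can be illustrated on a smaller case: ABI-encoding a single dynamic `bytes` value by writing words straight into a preallocated buffer, with no reflection:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeDynamicBytes ABI-encodes one dynamic `bytes` argument: an offset
// word pointing at the data section, a length word, then the payload
// zero-padded to a 32-byte boundary. All writes go directly into a
// preallocated slice, so there is exactly one allocation.
func encodeDynamicBytes(data []byte) []byte {
	padded := (len(data) + 31) / 32 * 32
	out := make([]byte, 32+32+padded)
	binary.BigEndian.PutUint64(out[24:32], 32)                // offset to the data section
	binary.BigEndian.PutUint64(out[56:64], uint64(len(data))) // length word
	copy(out[64:], data)                                      // payload, zero-padded
	return out
}

func main() {
	enc := encodeDynamicBytes([]byte("hello"))
	fmt.Println(len(enc)) // prints 96: offset word + length word + one padded data word
}
```

The aggregate3 fast path applies the same technique to the full array-of-structs layout, which is where reflection-based encoders spend most of their time.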

Benchmarks

The repo includes an apples-to-apples benchmark harness under benchmarks/, with generated reports in benchmarks/results/.

From benchmarks/results/comparison.md:

  • Geometric mean speedup: 7.12x (Go faster overall)
  • Wins: 59/59 benchmarks
  • Suite highlights:
    • call: ~45.9x
    • event: ~21.8x
    • multicall: ~2.9x
    • abi: ~7.3x

The longer benchmarks/results/full-report.md also reports "average" speedups (a different aggregation), but the direction is consistent: viem-go's hot paths are substantially cheaper in this harness.

Comparison with go-ethereum

go-ethereum is the foundation for much of the Go ecosystem, and viem-go uses parts of it (e.g. ABI parsing and common types). The difference is layering and workload shape:

  • go-ethereum exposes low-level primitives (RPC client, ABI pack/unpack, etc.).
  • viem-go provides the higher-level "viem patterns" out of the box:
    • action-based API surface
    • multicall + deployless multicall
    • multicall batching across goroutines
    • optional JSON-RPC batching in transports
    • watcher utilities (poll vs subscribe) with dedupe patterns

So the strongest claim is:

viem-go is optimized for application workloads (lots of reads, lots of ABI work, batching, fan-out) by reducing round-trips and providing specialized fast paths where generic approaches become costly.

Further reading