viem-go

Architecture

How viem-go is structured, how requests flow, and where the performance comes from

viem-go mirrors viem’s Client → Transport → Actions architecture, but implemented in idiomatic Go (contexts, channels, goroutines) with a strong focus on reducing RPC round-trips and minimizing per-call overhead.

The big idea (mental model)

  • Clients are configured runtimes (PublicClient, WalletClient) built on top of a shared BaseClient.
  • Actions are typed operations implemented in terms of client.Request(...).
  • Transports implement JSON-RPC over HTTP/WebSocket (plus fallback/custom patterns).
  • ABI/contract utilities sit “above” actions to encode/decode calldata/logs and provide ergonomic typed APIs.
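For example, here is a minimal sketch of how the layers compose. The import paths, constructor, and option names are illustrative assumptions, not the exact API:

  // Sketch only: import paths and constructor/option names are assumptions.
  package main

  import (
      "context"
      "fmt"

      "example.com/viem-go/client"           // PublicClient / WalletClient runtimes (hypothetical path)
      "example.com/viem-go/client/transport" // HTTP / WebSocket / fallback transports (hypothetical path)
  )

  func main() {
      // Transport: JSON-RPC over HTTP.
      t := transport.HTTP("https://rpc.example.org")

      // Client: a configured runtime built on BaseClient.
      c := client.NewPublicClient(client.WithTransport(t))

      // Action: a typed operation that bottoms out in client.Request(...).
      bn, err := c.BlockNumber(context.Background())
      if err != nil {
          panic(err)
      }
      fmt.Println("latest block:", bn)
  }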

Component map

Your code
  │
  ▼
High-level helpers
  - contract.ReadContract[T](...)
  - contracts/erc20 bindings (Name, Symbol, BalanceOf, ...)
  │
  ▼
Actions (actions/public, actions/wallet)
  - public.Multicall / MulticallConcurrent
  - public.WatchBlockNumber / WatchEvent / WatchContractEvent
  - wallet.SendTransaction / WriteContract / SignMessage / ...
  │
  ▼
Client runtime (client/)
  - PublicClient / WalletClient embed BaseClient
  - BaseClient holds config + transport + UID
  - BaseClient.Request(ctx, method, params...) is the “one RPC door”
  │
  ▼
Transport layer (client/transport/)
  - HTTP (optional JSON-RPC batching, retries, timeouts)
  - WebSocket (subscriptions, keep-alive, reconnect)
  - Fallback (multi-endpoint + ranking)
  │
  ▼
Node/provider (JSON-RPC)
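Because every action funnels through the same door, raw JSON-RPC methods stay reachable from application code. Continuing the sketch above (the exact Request signature and return type are assumptions):

  // Sketch: any JSON-RPC method through the "one RPC door".
  // Result handling depends on the real Request signature.
  raw, err := c.Request(context.Background(), "eth_chainId")
  if err != nil {
      panic(err)
  }
  fmt.Println("chain id:", raw) // e.g. "0x1"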

Main packages & responsibilities

Package                       What it's for
----------------------------  ------------------------------------------------------------
client/                       BaseClient, PublicClient, WalletClient: config + one Request entrypoint + convenience methods
client/transport/             HTTP/WS/custom/fallback JSON-RPC transports (retries, timeouts, batching)
actions/public/               Read + watch actions (call, logs, blocks, receipts, multicall, watchers)
actions/wallet/               Wallet actions (send tx, sign, writeContract, EIP-5792 sendCalls, etc.)
abi/                          ABI parsing + function/event encode/decode helpers
contract/                     Generic typed contract reads/writes and typed descriptors
contracts/                    Prebuilt contract bindings (e.g. erc20)
chain/ + chain/definitions    Chain metadata (IDs, RPC URLs, known contracts like Multicall3)
types/                        Shared RPC/block/tx/log types and option structs used across the stack
utils/                        Hot-path utilities: LRU cache, hashing/signature helpers, formatting, etc.
utils/rpc                     Low-level HTTP + WebSocket JSON-RPC clients + batch scheduler
utils/observe, utils/poll     Shared subscription dedupe + polling primitives (Go equivalents of viem utilities)

How requests flow (3 common paths)

1) Read contract (eth_call)

contract.ReadContract[T] (or PublicClient.ReadContract)
  │  ABI encode calldata
  ▼
PublicClient.Call / public.Call
  │  format request + block tag + overrides
  ▼
BaseClient.Request("eth_call", ...)
  │
  ▼
HTTP/WebSocket transport → JSON-RPC → response
  │
  ▼
ABI decode result → typed value (T / *big.Int / struct / ...)
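Concretely, continuing the client sketch above, a typed ERC-20 balance read might look like this (the parameter struct and field names are assumptions based on the flow diagram):

  // Sketch: a typed read via the generic helper (names assumed).
  balance, err := contract.ReadContract[*big.Int](context.Background(), c, contract.ReadContractParams{
      Address:      token,    // target contract
      ABI:          erc20ABI, // parsed ABI (abi/ package)
      FunctionName: "balanceOf",
      Args:         []any{holder},
  })
  if err != nil {
      panic(err)
  }
  fmt.Println("balance:", balance) // return data decoded straight to *big.Int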

2) Multicall (many reads, fewer round-trips)

public.Multicall / public.MulticallConcurrent
  │  encode N calls → aggregate3 calldata
  │  (optional) merge concurrent multicalls (batch.multicall)
  ▼
BaseClient.Request("eth_call", multicall3 aggregate3)
  ▼
decode aggregate3 results → per-call decoded values
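A usage sketch (call-struct and field names are assumptions; token, holder, and erc20ABI as in the read example above):

  // Sketch: three reads collapsed into one eth_call to Multicall3.
  results, err := public.Multicall(context.Background(), c, public.MulticallParams{
      Contracts: []public.MulticallContract{
          {Address: token, ABI: erc20ABI, FunctionName: "name"},
          {Address: token, ABI: erc20ABI, FunctionName: "symbol"},
          {Address: token, ABI: erc20ABI, FunctionName: "balanceOf", Args: []any{holder}},
      },
      AllowFailure: true, // aggregate3 lets individual calls fail without failing the batch
  })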

There are two separate batching layers that can both apply:

  • Transport-level JSON-RPC batching (HTTP): batches multiple JSON-RPC requests into one HTTP request.
  • Action-level Multicall aggregation: batches many contract calls into one eth_call to Multicall3.
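A configuration sketch with both layers enabled (the option names are assumptions mirroring viem's batch options):

  // Sketch: both batching layers on (option names assumed).
  t := transport.HTTP("https://rpc.example.org",
      transport.WithBatch(true)) // layer 1: many JSON-RPC requests per HTTP request

  c := client.NewPublicClient(
      client.WithTransport(t),
      client.WithMulticallBatch(true)) // layer 2: merge concurrent reads into Multicall3 calls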

3) Watch block number (poll or subscribe)

public.WatchBlockNumber(ctx, client, params) → <-chan events
  │
  ├─ if HTTP (or forced poll):
  │    poll eth_blockNumber on interval
  │
  └─ if WebSocket:
       eth_subscribe(newHeads) → stream block headers

In addition, watchers are deduplicated per client UID, so multiple watchers of the same data share one underlying poll loop or subscription.
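Consuming a watcher might look like this (continuing the earlier sketch; the channel element type and param names are assumptions, and error handling is elided):

  // Sketch: stream new block numbers until the context ends.
  // Assumes "context", "fmt", and "time" are imported.
  ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
  defer cancel()

  events := public.WatchBlockNumber(ctx, c, public.WatchBlockNumberParams{
      PollingInterval: 2 * time.Second, // used on HTTP; WS uses eth_subscribe instead
  })
  for ev := range events {
      fmt.Println("block:", ev.BlockNumber)
  }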

Where performance comes from

A) The obvious win: fewer RPC round-trips

Most perceived Ethereum “slowness” is network and provider latency, so the single biggest real-world throughput win is making fewer requests:

  • Multicall collapses many reads into one eth_call.
  • The Multicall batcher collapses many concurrent multicalls into fewer bigger multicalls (within a wait window), mirroring viem’s batch.multicall.
  • Optional HTTP JSON-RPC batching can collapse unrelated requests (when the provider supports it).

B) Lower per-call overhead in hot paths

Even when RPC dominates, per-call overhead matters for:

  • decoding event logs at scale
  • large multicall arrays (hundreds/thousands of calls)
  • “inner loop” utilities (hashing, signature recovery, unit parsing)

viem-go includes specialized fast paths where they pay off most.

One concrete example is Multicall3 aggregate3 encoding/decoding: viem-go includes a hand-rolled encoder/decoder that writes/reads directly to/from bytes (no reflection, minimal allocations) for large call arrays.
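To illustrate the style (a simplified stand-in, not the library's actual encoder): static ABI values are written as 32-byte words straight into a byte slice, with no reflection in the loop:

  // Simplified illustration of reflection-free ABI encoding
  // (needs "encoding/binary"): one 32-byte word per static value.
  func appendAddressWord(buf []byte, addr [20]byte) []byte {
      var word [32]byte
      copy(word[12:], addr[:]) // address right-aligned (left-padded with zeros)
      return append(buf, word[:]...)
  }

  func appendUint64Word(buf []byte, v uint64) []byte {
      var word [32]byte
      binary.BigEndian.PutUint64(word[24:], v) // big-endian, right-aligned in the word
      return append(buf, word[:]...)
  }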

Benchmarks: viem-go vs viem (TypeScript)

The repository includes an apples-to-apples benchmark harness under benchmarks/, with generated reports in benchmarks/results/.

From benchmarks/results/comparison.md:

  • Geometric mean speedup: 7.12x (Go faster overall)
  • Wins: 59/59 benchmarks
  • Suite highlights:
    • call: ~45.9x
    • event: ~21.8x
    • multicall: ~2.9x
    • abi: ~7.3x

The longer benchmarks/results/full-report.md also reports “average” speedups (arithmetic means, which weight large outliers like the call suite more heavily than the geometric mean above), but the direction is consistent: viem-go’s hot paths are substantially cheaper in this harness.

How to frame the go-ethereum comparison (accurately)

go-ethereum is the foundation for much of the Go Ethereum ecosystem, and viem-go uses parts of it (e.g. ABI parsing and common types). The difference is in layering and workload shape:

  • go-ethereum exposes low-level primitives (RPC client, ABI pack/unpack, etc.).
  • viem-go provides the higher-level “viem patterns” out of the box:
    • action-based API surface
    • multicall + deployless multicall
    • multicall batching across goroutines
    • optional JSON-RPC batching in transports
    • watcher utilities (poll vs subscribe) with dedupe patterns
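To make the layering concrete: with go-ethereum’s primitives you wire the ABI packing and the RPC call yourself. These are real go-ethereum APIs; the snippet reads an ERC-20 name():

  // Low-level read using go-ethereum primitives.
  import (
      "context"
      "strings"

      "github.com/ethereum/go-ethereum/accounts/abi"
      "github.com/ethereum/go-ethereum/common"
      "github.com/ethereum/go-ethereum/common/hexutil"
      "github.com/ethereum/go-ethereum/rpc"
  )

  const nameABI = `[{"type":"function","name":"name","stateMutability":"view","inputs":[],"outputs":[{"type":"string"}]}]`

  func erc20Name(ctx context.Context, url string, token common.Address) (string, error) {
      cl, err := rpc.DialContext(ctx, url)
      if err != nil {
          return "", err
      }
      defer cl.Close()

      parsed, err := abi.JSON(strings.NewReader(nameABI))
      if err != nil {
          return "", err
      }
      data, err := parsed.Pack("name") // ABI-encode calldata by hand
      if err != nil {
          return "", err
      }

      var out hexutil.Bytes
      call := map[string]any{"to": token, "data": hexutil.Bytes(data)}
      if err := cl.CallContext(ctx, &out, "eth_call", call, "latest"); err != nil {
          return "", err
      }

      vals, err := parsed.Unpack("name", out) // ABI-decode the return data
      if err != nil {
          return "", err
      }
      return vals[0].(string), nil
  }

With viem-go, the same read is a single call through the contracts/erc20 binding (exact signature assumed): name, err := erc20.Name(ctx, c, token).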

So the strongest claim is:

viem-go is optimized for application workloads (lots of reads, lots of ABI work, batching, fan-out) by reducing round-trips and providing specialized fast paths where generic approaches become costly.

Further reading