Deploying a Substreams-Powered Subgraph
In this chapter, we will cover the process of deploying a Substreams-powered Subgraph. We have built modules to extract and transform our data, and now we need to send this data to a Subgraph. Although we are using a Subgraph as our data sink here, other sinks are available as well, such as a SQL database.
Steps for Deployment
- Build a Schema for the Subgraph
- Declare the `graph_out` Module in the `substreams.yaml` File
- Build the `graph_out` Module Logic
- Build a Substreams Package
- Add/Update a `subgraph.yaml` File
- Deploy the Subgraph
Building a Schema for the Subgraph
First, we need to define a schema that will be used by the subgraph. In this example, our schema tracks pools and tokens along with their TVL.
```graphql
type Pool @entity {
  id: ID!
  token0: Token!
  token1: Token!
  createdAtTxHash: String!
  createdAtBlockNumber: BigInt!
  createdAtTimestamp: BigInt!
  tvl: BigDecimal!
}

type Token @entity {
  id: ID!
  name: String!
  symbol: String!
  decimals: Int!
}
```
The `Token` entity has a one-to-many relationship with the `Pool` entity, as one `Token` can be used as liquidity for multiple `Pool`s. For more information on building Subgraph schemas and the relationships entities can have with one another, refer to The Graph Subgraph documentation.
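To make the relationship concrete, here is a sample query against this schema once the Subgraph defined later in this chapter is deployed; it fetches pools together with the `Token` entities they reference (field names follow the schema above):

```graphql
{
  pools(first: 5) {
    id
    tvl
    token0 {
      symbol
      name
    }
    token1 {
      symbol
      name
    }
  }
}
```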
Declaring the `graph_out` Module
In the `substreams.yaml` file, we define the `graph_out` module. This module should include all the necessary input modules and declare the correct output type; in this case, we are outputting `EntityChanges` as defined in the `sf.substreams.entity.v1` protobuf package.
```yaml
- name: graph_out
  kind: map
  initialBlock: 12369621
  inputs:
    - map: map_pools_created
    - store: store_pool_tvl
      mode: deltas
  output:
    type: proto:sf.substreams.entity.v1.EntityChanges
```
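For reference, the two input modules named above were built in earlier chapters. Their declarations in `substreams.yaml` look roughly like the sketch below; the output type of `map_pools_created`, the store's update policy, and both modules' inputs are assumptions for illustration only, so keep the definitions you actually wrote earlier:

```yaml
# Sketch only: module names match the inputs above, but the output type,
# update policy, and inputs are illustrative assumptions.
- name: map_pools_created
  kind: map
  initialBlock: 12369621
  inputs:
    - source: sf.ethereum.type.v2.Block
  output:
    type: proto:contract.v1.Pools   # assumed protobuf package for the Pools message

- name: store_pool_tvl
  kind: store
  updatePolicy: set                 # assumed; depends on how TVL is computed
  valueType: bigdecimal
  inputs:
    - source: sf.ethereum.type.v2.Block   # assumed; actual inputs depend on earlier modules
```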
Building the `graph_out` Module Logic
The `graph_out` module is responsible for emitting entity changes to the Subgraph. Here's a basic implementation that uses the modules we have previously created:
```rust
use substreams::key;
use substreams::scalar::{BigDecimal, BigInt};
use substreams::store::{DeltaBigDecimal, Deltas};
use substreams_entity_change::pb::entity::EntityChanges;
// `Tables` is the entity-change builder from the `substreams-entity-change` crate;
// it is aliased here to match the name used throughout this chapter.
use substreams_entity_change::tables::Tables as EntityChangesTables;

use crate::pb::contract; // generated protobuf bindings for the Pools message (path may differ in your project)

#[substreams::handlers::map]
fn graph_out(
    pools: contract::Pools,
    tvl_deltas: Deltas<DeltaBigDecimal>,
) -> Result<EntityChanges, substreams::errors::Error> {
    // Container that accumulates the entity changes emitted to the Subgraph
    let mut tables = EntityChangesTables::new();

    for pool in pools.pools {
        let token0 = pool.token0.unwrap();
        let token1 = pool.token1.unwrap();

        // Create the two Token entities referenced by the pool
        tables
            .create_row("Token", token0.address.clone())
            .set("name", token0.name)
            .set("symbol", token0.symbol)
            .set("decimals", token0.decimals);
        tables
            .create_row("Token", token1.address.clone())
            .set("name", token1.name)
            .set("symbol", token1.symbol)
            .set("decimals", token1.decimals);

        // Create the Pool entity, initializing its TVL to zero.
        // Field names use camelCase to match the schema defined earlier.
        tables
            .create_row("Pool", pool.address)
            .set("token0", token0.address.clone())
            .set("token1", token1.address.clone())
            .set("createdAtTxHash", pool.created_at_tx_hash)
            .set("createdAtBlockNumber", BigInt::from(pool.created_at_block_number))
            .set("createdAtTimestamp", BigInt::from(pool.created_at_timestamp))
            .set("tvl", BigDecimal::zero());
    }

    // Apply TVL updates from the store deltas; the pool address is the second key segment
    for delta in tvl_deltas.deltas {
        let pool_address = key::segment_at(&delta.key, 1);
        tables
            .update_row("Pool", pool_address)
            .set("tvl", delta.new_value);
    }

    Ok(tables.to_entity_changes())
}
```
In this function, we use an `EntityChangesTables` object to build up all the table create, update, and delete operations that the Subgraph will perform. The module then outputs these changes for the Subgraph to ingest.
The entity and field names need to match up with the schema defined in the `schema.graphql` file. If there is a discrepancy in the name of an entity or field, the Subgraph will throw an error whilst syncing.
A note on `graph_out`
To keep your codebase manageable, the `graph_out` module's primary responsibility should be to emit the relevant `EntityChanges` to your Subgraph. Any complex event extraction or data processing should happen outside of this module to ensure a clear separation of concerns.
Building a Substreams Package
Once all of this is complete, we can build and pack our Substreams package using the `make pack` command. This builds an `.spkg` file that the Subgraph will use as its data source.
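If your project does not have a Makefile with a `pack` target, the same result can be achieved directly with Cargo and the Substreams CLI. The sketch below assumes the default manifest path of `./substreams.yaml`:

```bash
# Compile the Rust modules to WASM, then bundle the manifest, protobufs,
# and WASM binary into a single .spkg package
cargo build --target wasm32-unknown-unknown --release
substreams pack ./substreams.yaml
```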
Adding/Updating the `subgraph.yaml` File
Next, we update the `subgraph.yaml` file to declare our Substreams package as the Subgraph's data source, specifying the module that will emit the relevant entity change messages; in our case, this is `graph_out`.
```yaml
specVersion: 0.0.6
description: Uniswap V3 Substreams-based Subgraph
repository: # fill in with git remote URL
schema:
  file: ./schema.graphql
dataSources:
  - kind: substreams
    name: uniswap-v3
    network: mainnet
    source:
      package:
        moduleName: graph_out
        file: uniswap-v3-v0.1.0.spkg
    mapping:
      kind: substreams/graph-entities
      apiVersion: 0.0.5
```
Deploying a Substreams-Powered Subgraph
Now that we have packed our Substreams and defined our Subgraph's manifest file, we can build and deploy the Subgraph by following these steps:
- Navigate to The Graph Studio and create a subgraph.
- If the Graph CLI is not already installed globally on your machine, run:

  ```bash
  npm install -g @graphprotocol/graph-cli
  ```

  or via yarn:

  ```bash
  yarn global add @graphprotocol/graph-cli
  ```

- Authenticate within the CLI:

  ```bash
  graph auth --studio <DEPLOYMENT_KEY>
  ```

- Build the Subgraph:

  ```bash
  graph build
  ```

- Deploy the Subgraph:

  ```bash
  graph deploy --studio <SUBGRAPH_NAME>
  ```
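Once the Subgraph has been deployed and has synced, it can be queried from the Studio playground or over HTTP. Below is a sketch of an HTTP query using `curl`; the endpoint shown is a placeholder, so substitute the query URL that The Graph Studio displays for your deployment:

```bash
# Placeholder endpoint: copy the actual query URL from The Graph Studio
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"query": "{ pools(first: 5) { id tvl token0 { symbol } token1 { symbol } } }"}' \
  "https://api.studio.thegraph.com/query/<USER_ID>/<SUBGRAPH_NAME>/<VERSION>"
```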
In this section, we covered the steps necessary to deploy a Substreams-powered Subgraph. We started by defining the schema, declared the `graph_out` module, and implemented the logic for emitting entity changes to our Subgraph. We then built our Substreams package, updated the `subgraph.yaml` file to declare that package as the Subgraph's data source, and finally deployed the Subgraph.
By following these steps, you have now successfully deployed a Substreams-powered subgraph that can be queried on The Graph Network. This powerful integration allows you to leverage the performance of Substreams alongside the robust querying capabilities of The Graph.