Those who read our newsletters know that Kaiko Research is fond of liquidity analysis. This week, we felt it would be useful to take a step back and explore the liquidity data that we use, how we use it, what it means and why it matters. But first, what is liquidity?
What is liquidity?
In the simplest terms, liquidity in crypto measures how efficiently a token can be bought and sold. According to our token liquidity rankings, BTC is the most liquid token, meaning that someone could, for example, buy or sell $100k worth without significantly affecting its price. Someone trying to sell $100k of a less liquid token would face slippage, meaning they would receive less in USD terms than what they sold is theoretically worth. For example, if someone tried to sell $100k worth of WLD on Uniswap V3, they would face slippage of 6.3% and receive only $93.7k USDC.
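A quick sanity check on that arithmetic, using only the figures from the example above:

```python
# Proceeds from a $100k sell at 6.3% slippage (figures from the WLD example above)
order_usd = 100_000
slippage_pct = 6.3

proceeds = order_usd * (1 - slippage_pct / 100)
print(f"${proceeds:,.0f}")  # $93,700
```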
It’s been our contention that liquidity is a better measure of a token’s “true” value than market capitalization; the former measures how much of a token can be converted into fiat/stablecoins while the latter simply multiplies token supply by price.
The Data
When we first pull the data from our API using Python, we get a CSV with roughly 70 columns (too many to display in full).
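For illustration, here is a minimal sketch of what that pull might look like; the endpoint URL, header, and parameter names below are hypothetical placeholders, not Kaiko's documented API:

```python
import requests
import pandas as pd

# Placeholder endpoint, header, and parameter names -- illustrative only,
# not Kaiko's documented API.
URL = "https://api.example.com/v2/order-book-aggregations"
HEADERS = {"X-Api-Key": "YOUR_API_KEY"}
PARAMS = {
    "exchange": "krkn",       # hypothetical exchange code
    "instrument": "btc-usd",  # hypothetical pair code
    "start_time": "2024-01-01T00:00:00Z",
    "end_time": "2024-01-02T00:00:00Z",
}

resp = requests.get(URL, headers=HEADERS, params=PARAMS)
resp.raise_for_status()

# Assume the payload carries rows under a "data" key; flatten to a DataFrame
# and write out the CSV described above.
df = pd.DataFrame(resp.json()["data"])
df.to_csv("order_book.csv", index=False)
print(df.shape)
```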
This data can be broken down into a few main categories:
- Date and Time
- Depth (bid and ask, from 0.1% to 10% from the mid-price)
  - Depth is the sum of limit orders resting a certain percentage away from the mid-price (more on this later).
- Mid-Price
- Spread
  - The spread is the difference between the best bid and best ask.
- Reference Data (exchange and pair)
- Slippage (bid and ask)
  - Slippage is expressed as a percentage for a hypothetical order size. For example, we can get slippage data for a hypothetical $100k buy (executed on the ask side) or sell (bid side). The sketch after this list walks through both the depth and slippage calculations on a toy order book.
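To make those two definitions concrete, here is a toy walkthrough; the order book levels are invented and the functions are simplified illustrations, not Kaiko's production methodology:

```python
# Toy order book: (price, quantity) levels, best-priced first. All figures invented.
bids = [(99.9, 500), (99.5, 800), (99.0, 1200), (90.0, 3000)]
asks = [(100.1, 400), (100.5, 700), (101.0, 1000), (110.0, 2500)]

mid_price = (bids[0][0] + asks[0][0]) / 2  # 100.0

def depth(levels, pct):
    """Sum the USD value of limit orders within pct of the mid-price."""
    lo, hi = mid_price * (1 - pct), mid_price * (1 + pct)
    return sum(p * q for p, q in levels if lo <= p <= hi)

print(f"1% bid depth:  ${depth(bids, 0.01):,.0f}")   # $248,350
print(f"10% bid depth: ${depth(bids, 0.10):,.0f}")   # $518,350

def sell_slippage(levels, order_usd):
    """Walk down the bids to fill a sell sized at `order_usd` (valued at the
    mid-price) and return the shortfall vs. the mid as a percentage."""
    remaining = order_usd / mid_price  # quantity to sell
    received = 0.0
    for price, qty in levels:
        fill = min(remaining, qty)
        received += fill * price
        remaining -= fill
        if remaining <= 0:
            break
    return (order_usd - received) / order_usd * 100

print(f"Slippage on a $100k sell: {sell_slippage(bids, 100_000):.2f}%")  # 0.30%
```

On a deep book like BTC's, the 0.30% here would be far smaller still; on an illiquid token it balloons, exactly as in the WLD example above.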
Granularities and Aggregations
Kaiko’s most granular order book data is taken at the snapshot level. We collect two order book snapshots per minute, for all instruments traded on all exchanges. This data contains every individual bid and ask on an order book.
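As a rough picture of the shape of that raw data, a single snapshot might look something like this in Python (the field names and figures are illustrative, not our exact schema):

```python
# Illustrative shape of one raw order book snapshot -- field names and
# values are placeholders, not Kaiko's exact schema.
snapshot = {
    "timestamp": "2024-01-01T00:00:30Z",
    "exchange": "krkn",
    "instrument": "btc-usd",
    "bids": [            # every resting bid: [price, quantity]
        [42345.5, 0.85],
        [42345.1, 1.20],
        # ...potentially thousands more levels
    ],
    "asks": [            # every resting ask: [price, quantity]
        [42346.0, 0.40],
        [42347.2, 2.10],
        # ...
    ],
}
```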
However, this data is massive and quite difficult for researchers to work with, which is why we designed some more convenient aggregations.
All order book data that we use is aggregated, meaning bids and asks are summed at various price levels, for example “1% bid depth” vs. “10% bid depth.”
This makes it a lot easier to analyze market depth, but we still run into a granularity problem: for any given day, market depth for a single instrument contains ~2,880 data points (two snapshots per minute × 60 minutes × 24 hours). That’s why we also have an additional level of aggregation: we take averages of this data over time intervals.
We can take average market depth at 1-minute, hourly, or daily intervals. The research team typically analyzes market depth for an instrument at daily intervals.
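In pandas, moving between those granularities is a one-line resample; a sketch assuming the snapshot-level CSV from the earlier example, with placeholder column names:

```python
import pandas as pd

# Snapshot-level data from the earlier sketch; "timestamp", "bid_depth_1pct",
# and "ask_depth_1pct" are placeholder column names.
df = pd.read_csv("order_book.csv", parse_dates=["timestamp"]).set_index("timestamp")
depth_cols = ["bid_depth_1pct", "ask_depth_1pct"]

minute_avg = df[depth_cols].resample("1min").mean()  # ~2 snapshots per bucket
hourly_avg = df[depth_cols].resample("1h").mean()
daily_avg = df[depth_cols].resample("1D").mean()     # what the research team uses

print(daily_avg.head())
```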
Asset Level Liquidity
Instrument-level order book analysis is quite useful when assessing specific market events, but it remains difficult to understand liquidity at the asset level.
That’s why we developed an even easier order book data type: Asset Liquidity Metrics.
This data sums market depth and volume across all traded instruments for an asset. For Bitcoin, this would include liquidity for BTC-USD, BTC-USDT, ETH-BTC, etc. on all exchanges in our coverage (powerful!).
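Conceptually, the aggregation looks something like this; a sketch assuming a DataFrame of daily instrument-level depth with placeholder column names and toy figures:

```python
import pandas as pd

# Daily instrument-level depth across exchanges (toy data, placeholder names).
df = pd.DataFrame({
    "date": ["2024-01-01"] * 3,
    "exchange": ["exch_a", "exch_b", "exch_a"],
    "instrument": ["btc-usd", "btc-usdt", "eth-btc"],
    "asset": ["btc", "btc", "btc"],  # every pair containing BTC maps to btc
    "depth_1pct_usd": [12_000_000, 9_500_000, 3_000_000],
})

# Sum depth across all of an asset's instruments, on all exchanges.
asset_liquidity = df.groupby(["date", "asset"])["depth_1pct_usd"].sum()
print(asset_liquidity)  # btc: $24.5M of 1% depth on 2024-01-01
```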
To summarize, raw order book data is really tough to work with, which is why aggregations are important, especially for researchers.