[ARFC] Aave V3 Deployment on zkEVM L2

Proposal updated for Snapshot escalation


Title: [ARFC] Aave V3 Deployment on zkEVM L2

Author: @marczeller - Aave-Chan Initiative

Date: 2023-09-22


Summary:

Proposal for the deployment of Aave V3 on zkEVM L2, with limited assets onboarding (“MVP V3”) to establish a strategic Aave presence early in this new network while staying conservative in terms of risk.

Abstract:

This ARFC proposes the deployment of a Minimal Viable Product (MVP) version of Aave V3 on the zkEVM L2 network, an EVM-equivalent zk-rollup developed by the Polygon team.

Motivation:

The deployment of Aave V3 on zkEVM L2 aims to establish a strategic presence on this new network early on, promoting L2 diversity and expanding Aave’s reach in the DeFi space.

Specification:

The proposal outlines the deployment of Aave V3.0.1 on zkEVM L2 with three collaterals (WETH, WMATIC & USDC) and one borrowable asset (USDC). Risk parameters are provided for community discussion and feedback from risk service providers.

Recommended Initial Parameters

| Risk Parameter | WETH | USDC | WMATIC |
| --- | --- | --- | --- |
| Isolation Mode | NO | NO | NO |
| Enable Borrow | YES | YES | YES |
| Borrowable in Isolation | NO | YES | NO |
| Enable Collateral | YES | YES | YES |
| Loan To Value | 80% | 77% | 68% |
| Liquidation Threshold | 82.5% | 80% | 73% |
| Liquidation Bonus | 5% | 5% | 10% |
| Reserve Factor | 15% | 10% | 20% |
| Liquidation Protocol Fee | 10% | 10% | 10% |
| Borrow Cap | 280 | 1.26M | 375K |
| Supply Cap | 350 | 1.4M | 500K |
| Debt Ceiling | N/A | N/A | N/A |
| Base | 1% | 0% | 0% |
| Slope1 | 3.8% | 4% | 6.1% |
| Uoptimal | 80% | 90% | 75% |
| Slope2 | 80% | 60% | 100% |
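For context on the Base, Slope1, Slope2 and Uoptimal rows: Aave's variable borrow rate is a two-slope function of pool utilization, rising gently up to Uoptimal and steeply beyond it. A minimal sketch of that curve using the USDC parameters above (the function name is illustrative; rates are expressed as fractions):

```python
def borrow_rate(utilization, base, slope1, slope2, u_optimal):
    """Two-slope Aave-style variable borrow rate (all values as fractions)."""
    if utilization <= u_optimal:
        # Below the kink: interpolate from base up to base + slope1.
        return base + slope1 * utilization / u_optimal
    # Above the kink: add slope2 scaled by the excess utilization.
    return base + slope1 + slope2 * (utilization - u_optimal) / (1 - u_optimal)

# USDC parameters from the table: Base 0%, Slope1 4%, Slope2 60%, Uoptimal 90%
usdc = dict(base=0.00, slope1=0.04, slope2=0.60, u_optimal=0.90)
print(borrow_rate(0.90, **usdc))  # 4% exactly at the kink
print(borrow_rate(0.95, **usdc))  # roughly 34% above the kink
```

The steep Slope2 above the kink is what pushes utilization back toward Uoptimal when liquidity gets scarce.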

Deployment Checklist:

  • Risk service provider’s feedback ✅
  • Infrastructure/technical evaluation report ✅
  • Oracle recommendation ✅
  • The Graph support ✅
  • Cross-chain messaging infrastructure for governance ✅

Disclaimer:

The Aave-Chan Initiative (ACI) is not associated with or compensated by Polygon for publishing this ARFC.

As part of its delegate platform, the ACI promotes L2 diversity.

Marc Zeller is a former DeFi strategy advisor for the Polygon Foundation but resigned from this role at the time of the ACI service provider proposal publication.

This ARFC is created to facilitate the Aave community’s discussion and decision-making regarding the deployment of Aave V3 on zkEVM L2.

Next Steps:

  1. Conduct a snapshot vote for the Aave community to approve or reject the deployment.
  2. If approved, proceed with the necessary technical implementations for deployment on zkEVM L2.

Copyright:

Copyright and related rights waived via CC0.


On the topic of Oracle recommendation, at the time of writing, Chainlink, Aave Protocol’s historical price feed service provider, is not providing price feed services to the ZkEVM network.

Although it is anticipated that Chainlink will eventually support ZkEVM, discussions regarding the recommended oracle for the ZkEVM MVP market can take place within this ARFC discussion.

The ACI believes there are three options available to Aave governance for price feeds:

  1. Develop a replicator automated service that queries off-chain ETH L1 price feeds for the three assets and inputs them into the Aave protocol. This option assumes L1 prices are canon since the three proposed onboarded assets are of the highest quality and liquidity suitable for this deployment. If a discrepancy occurs on the zkEVM in secondary liquidity, arbitrage agents are expected to profit from it by sourcing liquidity on the L1.

  2. Onboard Pyth as the primary price feed service provider for the ZkEVM deployment and use the L1 replicator service as a “failsafe” secondary oracle. Implement a deviation-trigger contract that falls back to the secondary oracle in case of significant deviation between the two oracles’ outputs.

  3. Similar to option 2, onboard API3 as the primary price feed service provider and use the L1 replicator service as a “failsafe” secondary oracle.
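The fallback logic in options 2 and 3 could look like the following sketch; the function name, threshold, and selection rule are assumptions for illustration, not a specification of the proposed contract:

```python
def select_price(primary, secondary, max_deviation=0.02):
    """Return the primary oracle's price unless it deviates from the
    failsafe secondary (the L1 replicator) by more than max_deviation,
    in which case fall back to the secondary. 2% is an arbitrary example."""
    if secondary <= 0:
        return primary  # secondary unusable; nothing to compare against
    deviation = abs(primary - secondary) / secondary
    return secondary if deviation > max_deviation else primary

print(select_price(1850.0, 1852.0))  # small deviation: primary wins
print(select_price(2050.0, 1850.0))  # >2% gap: fall back to secondary
```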

It is expected that both Pyth and API3 will present their solutions in this thread, and the community is encouraged to provide technical feedback on these options.

Oracles are the most critical piece of infrastructure for a liquidity protocol, and the choice of a service provider should not be a “beauty contest” but rather a pragmatic decision based on the most suitable option in terms of risk and implementation.


Following on from Marc’s proposal, I’d like to provide some background information about API3 and our data feeds.

Background
API3 is a DAO-governed, first-party oracle project. Founded in 2020, API3 currently supports 20 different chains for RNG services and 11 for crypto and FOREX price data. API3 recently launched the API3 Market, available at market.api3.org, for fully permissionless data feed access across 11 mainnets and testnets, with documentation available here.

API3 is building towards data feeds that redirect the MEV used to gain priority for liquidations away from block producers and extract it at the oracle layer instead: Oracle Extractable Value (OEV). This value is then returned to the dapp using the feed, allowing dapps to retain up to approximately 40% of the value currently paid out to incentivize liquidations. It would be entirely up to AAVE to determine what to do with this value; examples include using it to improve liquidity (for instance by providing additional APY to improve protocol usage) or redirecting it towards the Safety Module.

The additional data feed updates triggered by searchers utilising OEV also contribute to a data feed that updates more during times of volatility, providing a more granular feed for users. The ability for theoretically infinite granularity from OEV enabled feeds gives a similar user experience to pull-based oracle designs, with the familiarity and ease of integration of a push-based approach.

(Continued in next post)


Data Feeds

API3 data feeds are called dAPIs, where a dAPI is a mapping managed by the API3 DAO. This mapping can point to a single-sourced data feed or an aggregated data feed, and it absorbs the management overhead associated with oracle infrastructure (e.g. switching out oracles). Users simply read a human-readable name like ‘AAVE/USD’, which is associated with a data feed ID at the contract level. The mapping can point to, e.g., AAVE/USD from Binance and later be switched to an aggregated AAVE/USD feed from Binance, Coinbase & Coingecko, without requiring any changes from the user.

dAPIs are powered by first-party oracles, which means the API provider directly runs the required oracle infrastructure without introducing additional third parties between the data producer and data consumer, creating full transparency of data sourcing as well as efficiency. Providers like Finage, dxfeed, Twelvedata, NewChangeFx, Tradermade and others are all running first-party oracles that power dAPIs. On-chain updates are triggered on the basis of deviation and heartbeat, akin to what AAVE is used to with Chainlink. Data can then simply be read directly on-chain without the need to, e.g., push it on-chain with each contract interaction.

dAPIs are differentiated between self-funded and managed data feeds. Self-funded dAPIs can be seen as introductory, basic oracle infrastructure. They are single-source and have an associated wallet that can be funded by anyone, which brings the data feed “to life”. They can be accessed entirely permissionlessly, and once data is on-chain it can be read by anyone for free, without any type of access control measures. Self-funded dAPIs (Phase 1) have recently been released alongside our API3 Market, which allows users to interact with them seamlessly. On the API3 Market you’ll find self-funded dAPIs on over 11 mainnets and testnets, including Polygon’s zkEVM and its respective testnet. All you have to do is send it some funds and updates will begin automatically within 15 minutes.

Managed data feeds can be seen as the ‘pro’ version of API3 data feeds. Here, data feeds are aggregated between multiple first-party oracles, with API3 and the data sources picking up the gas management overhead. Self-funded data feeds will be upgradable to managed data feeds via the API3 Market by paying for them in the native currency of the respective chain. The payment will roughly be based on the operational costs of the respective dAPI, and API3 does not expect to profit from this endeavour (our focus lies on OEV monetization). Managed dAPIs (Phase 2) are the next big milestone, which we’ll be releasing this quarter.

It is important to reiterate that once data is on-chain it can be read by anyone for free, which means that the cost of making e.g. managed dAPIs available doesn’t necessarily end up with AAVE. It is also important to note that an upgrade from self-funded to managed dAPIs won’t require any changes from the protocol utilising a specific dAPI as the management (e.g. redirecting the dAPI mapping) is handled by the API3 DAO. The consuming protocols simply continue reading “AAVE/USD” on the respective contract and will benefit from the change automatically.

Oracle Extractable Value

Following the phase-two launch, API3 will concentrate on enabling OEV extraction (Phase 3). This phase will enable third parties to trigger data feed updates in a tamper-proof way alongside the usual update conditions. Such third parties could, for instance, be searchers that see the opportunity to liquidate a position by updating the data feed instead of waiting for the ‘traditional’ update conditions to hit. Searchers will need to bid for the right to such an update, and API3 will redirect the majority (90%) of this bid back to the opportunity-creating entity (in this case AAVE). In essence, this shifts the competition of searchers for liquidations from the block producer level to the oracle level and allows value that is typically paid to block producers for priority to be paid to the dApps instead.

This subject is too detailed (and important) to condense into a single forum post, so links to resources about it are here:

For a typical MEV auction, searchers pay an average of 40% of fees to the block producer, whereas with OEV this value can be retained within your ecosystem. Using this value to improve Safety Incentive payouts, improve lending rates, or otherwise attract liquidity will create a beneficial effect that in turn helps generate more OEV over time, forming a beneficial flywheel. OEV also allows for more data feed updates during times of volatility, which gives more granularity when it matters most for users.

Ease of Integration

An existing Chainlink integration can be switched to API3 with changes to a single line of code. Compared to competitors, API3’s feeds can be used without risking extensive changes to audited and well used code, or having a protocol need to operate extra infrastructure. Not having to significantly change how a battle tested protocol works should be seen as important to keep user funds safeguarded. Recent DeFi events have unfortunately demonstrated that even ongoing audits can fail to pick up critical bugs.

Conclusion

API3 provides an oracle choice that has the advantages of the battle tested push architecture, with oracle extractable value data feeds later this year giving the infinite granularity of a pull model. Integration will require minimal changes to AAVE’s code, and API3 will assist wherever necessary. API3 are happy to answer any questions about this, and look forward to hearing feedback from the AAVE DAO.


Due to the sensitivity of this decision and the impact it may likely have across chains and assets outside of zkEVM, would it be a good investment by the DAO (or Grants DAO) to fund a special committee of community members to research the different provider options, their technical merits/concerns, asset and chain coverage, etc. and provide a recommendation to the broader community? This would ensure that the right experts are spending time analyzing this problem on behalf of the DAO and that each provider can be presented on a standardized basis.

I cannot imagine this is the only instance in which we must make oracle-related decisions and it may be wise to invest heavily up front to streamline decision making in the future while considering all potential implications.


The ACI is not in favor of the DAO deploying a budget for this; the DAO is already covering the services of BGD Labs, who are the core developers of the Aave DAO.

In our opinion, their voice has the highest relevance on this topic.

That being said, if the AGD wants to deploy a budget for it, it’s their choice.


We are excited to see Aave expand to zkEVM! In this post, we provide rationale for why the Aave community should use the Pyth oracle to provide reliable, robust, and scalable price feeds for consumption by the protocol and its users in this new deployment.

About Pyth

The Pyth network is an oracle provider that makes financial market data available to multiple blockchains. Pyth’s market data is sourced from 80 first-party data providers, including some of the largest exchanges and market-making firms in the world. The oracle currently offers 200+ price feeds across a number of different asset classes, including US equities, commodities, FX, and cryptocurrencies.

The oracle’s price feeds are designed to be fast, accurate, and reliable. Price feeds update multiple times per second—you can see live price updates on the Pyth website. Each price update is a robust aggregate of multiple data providers’ reported prices. The oracle uses an aggregation procedure to ensure that colluding (or accidentally incorrect) providers cannot substantially influence the aggregate price. Feeds are also reliable due to the redundancy between providers.

Since its launch in mid-December 2022, the Pyth on-demand model has already delivered close to 700K price updates on-chain—with an ATH of 25K on-chain price updates in a single day following the launch of 22 perpetual markets on Synthetix Perps and the USDC depeg.

How does Pyth work?

Pyth runs aggregation for its Pythnet Price Feeds on its own application-specific blockchain called Pythnet, a proof-of-authority chain where the data providers run the network’s validators. Prices are delivered to other blockchains via the Wormhole interoperability protocol and a permissionless on-demand price update model. Pythnet is a computation substrate that securely combines the data providers’ prices into a single aggregate price for each Pyth price feed, and it forms the core of Pyth’s cross-chain price feeds serving all blockchains.

Pythnet Price Feeds Workflow

When a user (application, end user, liquidator, anyone…) wants to use a Pyth price in a transaction, they retrieve the latest update message (a signed VAA) from the price service and submit it in their transaction. The Pyth Data Association provides a public price service that allows anyone to listen for these messages via an HTTP or Websocket API—the network strongly recommends that any application using Pyth in production run its own price service instance for maximum reliability and decentralization. Finally, the target chain Pyth contract will verify the validity of the price update message and, if it is valid, store the new price in its on-chain storage.

The Pyth on-demand model is already available on 15 different blockchains, and leading DeFi apps like Synthetix (Optimism), Ribbon Finance (Ethereum), CAP Finance (Arbitrum), and Aurigami (Aurora) have since integrated with Pyth’s innovative model. Pyth’s design enables anyone to access the full suite of Pyth price feeds, regardless of the blockchain they are on. Thus, whether an application is on Ethereum or any of its L2s, it has access—at will—to the 200+ and growing Pyth price feeds. Pyth Network already has its contract deployed on Polygon zkEVM, from which any on-chain application can read the current value of any price feed and onto which any user can pull the up-to-date Pythnet price.

Pyth already supports the assets mentioned in this temperature check (ETH, MATIC and USDC). In regards to the greater AAVE ecosystem, most of the assets accepted by Aave (including but not limited to ETH, USDT, USDC, DAI, cbETH, CRV, LINK, BTC, AVAX and MATIC) are also supported by Pyth. Pyth price feeds for rETH, stETH, BAL and FRAX are currently in the process of being listed, and if this proposal were to pass, the network would work toward adding any missing price feeds to further support Aave.

Robust and Decentralized

As mentioned above, the Pyth Network boasts 80 leading data providers spanning the largest crypto exchanges and leading market making firms including names like Wintermute, Jane Street, Binance, OKX and many more. Each data publisher streams first-party data on-chain to Pyth for subsequent aggregation and output.

Data publishers risk critical reputational and economic consequences should they attempt to act maliciously. Furthermore, individual data publishers have minimal incentive to do so, as Pyth’s conservative minimum publisher thresholds and robust aggregation logic prevent any single data provider from materially influencing the Pyth price for their own economic gain. That is, if most publishers are submitting a price of $100 and one publisher submits a price of $80, the aggregate price should remain near $100 and not be overly influenced by the single outlying price.


(Figure) Scenarios for the aggregation procedure: the lower thin bars represent the prices and confidence intervals of each publisher, and the bold red bar represents where we intuitively would like the aggregate price and confidence to be.

Pyth can attribute this robustness to its simple yet comprehensive two-step aggregation algorithm. In the first step, each data publisher provides three votes on the price of the asset — one vote at its price and one vote at each end (+/-) of its confidence interval — after which Pyth simply takes the median of all the votes. The second step computes the distance from the aggregate price to the 25th and 75th percentiles of the votes, then selects the larger of the two as the aggregate confidence interval.

This process acts like a hybrid between a mean and a median, giving confident publishers more influence, while still capping the maximum influence of any single publisher. The algorithm has an interpretation as computing the minimum of an objective function that penalizes the aggregate price from deviating too far from the publishers’ prices. This interpretation allows us to prove properties of the algorithm’s behavior: for example, the aggregate price will always lie between the 25th and 75th percentiles of the publishers’ votes.

In addition to robustness, Pyth has high uptime. For example, defining downtime as a period in which the most recent Pyth update was at least 5 slots (~2 secs) old, Pyth had 99.97% uptime for ETH/USD over the one-week period of March 28, 2023 through April 4, 2023. When that definition is altered to allow a max staleness of 10 slots (~4 secs), Pyth had 100% uptime for that feed over that period.
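That uptime definition can be made concrete with a small sketch that measures the fraction of a time window during which the latest update is fresher than a staleness bound (the function name and exact accounting are assumptions for illustration, not Pyth's methodology):

```python
def uptime_fraction(update_times, start, end, max_staleness):
    """Fraction of [start, end] during which the most recent update is at
    most max_staleness old. update_times must be sorted; all in seconds."""
    down = 0.0
    prev = start  # treat the window start as the last known update
    for t in list(update_times) + [end]:
        gap = t - prev
        if gap > max_staleness:
            down += gap - max_staleness  # time spent stale before this point
        prev = t
    return 1.0 - down / (end - start)

print(uptime_fraction([2, 4, 6, 8], 0, 10, 2))  # 1.0: never stale
print(uptime_fraction([5], 0, 10, 2))           # 0.4: stale 3s before and after
```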

Pyth Performance

To assess the potential benefits of this proposal, we evaluate Pyth between March 28, 2023 00:00:00 and April 4, 2023 00:00:00 (both UTC) on its AAVE/USD and ETH/USD price feeds. We constructed a price series for the replicator (option 1) using the Chainlink prices that would be bridged from Ethereum, and evaluated API3 (option 3) alongside it. Note: for API3, we use the AAVE/USD feed and the ETH/USD feed on zkEVM. As the centralized exchange (CEX) reference point for evaluating the accuracy and latency of the oracles’ prices, we use Binance’s extremely liquid AAVE/USDT and ETH/USDT price feeds.

  • AAVE

During this period, the Pythnet feed updated and sent out a Wormhole VAA 586,975 times (on average once every 1.03 seconds). In comparison, during this period Chainlink updated 186 times and API3 updated 50 times.

To showcase Pyth’s performance, we plot the Pyth, Chainlink, API3, and CEX prices for AAVE/USD. We show below a zoomed-in version of the price plots for the smaller period March 28, 2023 09:00:00 to March 28, 2023 20:00:00, to better visualize the differences in the price series. The plots show that the oracle price feeds generally follow the CEX price feed with a bit of lag—as can be seen in the instances where the CEX price slightly precedes the oracle price. As can be seen, the Chainlink feed updates more frequently (12 updates) and tracks the CEX price better than API3 (6 updates), though it is still low frequency and has high tracking error relative to Pyth.

The following charts show the deviations for the full time period between the Pyth Network price and the CEX price and between the CEX price and each of the prices of Chainlink and API3, expressed in basis points (1 basis point is 1/100th of 1 percent). These plots show that the deviations of Chainlink from the CEX are significantly higher than those of Pyth, and that API3 has the highest deviations of all.

Another way to visualize this is via the plot below, which showcases the distribution of Pyth deviations from CEX and each of the distributions of the other oracles’ deviations from CEX in the form of a CDF. When comparing two CDF curves, the curve to the left is better, indicating lower tracking error. As can be seen, the Pyth CDF is generally to the left of all other curves. Note that the x-axis is on a log scale. Thus, a difference of 1 tick represents an improvement by a factor of 10. The charts below show that Pyth typically outperforms the other oracles by a factor of at least 10x.

The following table reads off the deviations at various percentiles from the above charts:

  • ETH

During this period, the Pythnet feed updated and sent out a Wormhole VAA 587,225 times (on average once every 1.03 seconds). In comparison, during this period Chainlink updated 274 times and API3 updated 29 times.

Similar to above, the price plots are for the subperiod March 28, 2023 06:00:00 to March 28, 2023 12:00:00. They show that the Chainlink feed updates more frequently (12 times) than API3 (3 times), though it is still worse than Pyth.

The deviation plots below corroborate this:

The charts below show that Pyth typically outperforms the other oracles by a factor of around 10x for ETH/USD.

The following table reads off the deviations at various percentiles from the above charts:

We can estimate the latency of Pyth prices relative to the CEX by taking the deltas of the price feed (i.e. p_{t+1} - p_t) at the price service level and checking the correlation of the oracle deltas with the CEX deltas at different lags. The lag at which we observe the optimal correlation is one way of estimating the latency introduced by the oracle itself. We can do a similar calculation for the other oracles as well. The table below shows the observed latency for the different oracles on the two feeds over the weeklong period of relevance.
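The delta-correlation latency estimate described above can be sketched with NumPy; the function and the synthetic shifted series used to demonstrate it are illustrative:

```python
import numpy as np

def estimate_lag(oracle, cex, max_lag=30):
    """Return the lag (in samples) at which the correlation between oracle
    price deltas and CEX price deltas peaks. Both series must be sampled
    on the same regular time grid."""
    d_oracle, d_cex = np.diff(oracle), np.diff(cex)
    best_lag, best_corr = 0, -np.inf
    for lag in range(max_lag + 1):
        a = d_oracle[lag:] if lag else d_oracle
        b = d_cex[:-lag] if lag else d_cex
        corr = np.corrcoef(a, b)[0, 1]
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

# Synthetic check: an "oracle" that echoes the CEX price 3 samples late.
rng = np.random.default_rng(0)
cex = np.cumsum(rng.normal(size=500))
oracle = np.concatenate([np.full(3, cex[0]), cex[:-3]])
print(estimate_lag(oracle, cex))  # 3
```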

As can be seen, Pyth has a relatively low latency (which matches the much lower deviations it observes relative to CEX than Chainlink or API3). Both Chainlink and API3 have relatively long lags in producing prices that can be consumed, which corresponds to the higher deviations relative to CEX price. These latency numbers help to explain the order-of-magnitude difference we see in the deviations of the oracles from the CEX price.

Implementation

The first step to this will be a snapshot vote for the Aave community to determine deployment on zkEVM using Pyth’s Pythnet Price Feeds.

Integrating with Pyth Network on zkEVM is straightforward, though slightly different from other oracles. The integration work is analogous to what is outlined in the Synthetix SIP about using Pyth prices:

Integrating with Pyth Network has two steps:

  1. The on-chain contract reads prices from the Pyth Network contract.
  2. Any application that sends transactions to the on-chain contract—e.g., a web frontend—needs to simultaneously send a Pyth price update. This update can be retrieved from the Pyth Network price service (which simply relays it from Wormhole’s peer-to-peer network).

The Pyth Network SDKs help integrators with both of these steps. An example frontend and contract integration is shown here in the javascript SDK.

The changes are simple to make, and as soon as Pythnet price feeds are integrated into the protocol pricing logic, Aave v3 can be deployed on zkEVM.

Conclusion

Choosing an oracle for a new deployment requires careful thought around several considerations like accuracy and robustness. We recommend that the Aave community choose Pyth for its oracle on zkEVM for the following reasons:

  • Quality—Pyth provides accurate and frequently updated prices that track price movements on primary trading venues closely. This allows protocols to both remain safe and offer good prices to their consumers.
  • Readiness— Pyth is already live, and anyone on zkEVM can already access Pythnet price feeds on the L2.
  • Reliability—Segmenting reliability into availability and accuracy, Pyth does well in both categories. Pyth provides accurate prices with very little downtime.
  • Security—Pyth is a truly decentralized oracle, with robustness guarantees in the form of its set of highly reputable data publishers, aggregation algorithm’s properties, and the confidence interval. Pyth’s publishing workflow includes multiple parties at each part of the stack, which means there is no single point of failure. By contrast, using the replicator mechanism is dangerous because it is dependent on a single operator remaining live.
  • Asset coverage—Unlike other oracle services, once a new price feed is requested on one chain and is made available via Pyth, it instantly becomes available on all Pyth-supported chains. The Pyth model’s scalability means asset coverage is high from day one and grows continually as new symbols are added for any use case on any chain.

Hey @KemarTiti,

want to thank you for your post but at the same time clear some of this up in regard to “performance” of API3 feeds.

Self-funded dAPIs currently run with a deviation threshold of 1% on all chains for crypto assets, which means prices are updated on-chain when the off-chain price differs from the on-chain price by that margin. So the “lag” you describe here doesn’t take into account configuration details that are publicly available information.

It is also flawed to compare the update frequency between two protocols without mentioning that there are significant differences in their configurations. The Chainlink data for ETH/USD and AAVE/USD was collected from L1 data feeds run at 0.5% and 1% deviation respectively, with an hourly heartbeat for both. Self-funded dAPIs on zkEVM are run at 1% deviation with a 24-hour heartbeat. This makes statements like this one kinda weird, because it’s obvious that a lower deviation threshold and a more frequent heartbeat will trigger significantly more updates (it would be weird if it didn’t).

It also makes these graphs deceptive because you’re trying to paint a narrative (API3 producing less updates than Chainlink), when it is quite literally the difference in configuration of the data feeds that work exactly as intended.

The deviation threshold will be adjusted and lowered significantly for the managed version of dAPIs. Lower deviation thresholds will lead to significantly more updates and hence significantly less “lag”, and the price will already be on-chain at that point, consumable by anyone without requiring other parties to write to storage every time. AAVE will also not be required to “run its own price service instance for maximum reliability and decentralization” because this is baked into the product already. (At this point I’d also be interested in knowing what happens if AAVE doesn’t run this.)
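The deviation/heartbeat trigger that both configurations rely on amounts to a simple rule; a sketch with the self-funded dAPI numbers quoted above (1% deviation, 24-hour heartbeat) as example defaults:

```python
def should_update(onchain, offchain, last_update, now,
                  deviation=0.01, heartbeat=24 * 3600):
    """Trigger an on-chain update when the off-chain price drifts beyond
    the deviation threshold, or when the heartbeat interval has elapsed.
    Times in seconds; default thresholds mirror the configuration above."""
    drifted = abs(offchain - onchain) / onchain > deviation
    stale = now - last_update >= heartbeat
    return drifted or stale

print(should_update(100.0, 100.5, 0, 3600))       # False: 0.5% drift, not stale
print(should_update(100.0, 102.0, 0, 3600))       # True: 2% drift
print(should_update(100.0, 100.0, 0, 24 * 3600))  # True: heartbeat elapsed
```

A tighter deviation or shorter heartbeat mechanically produces more updates, which is exactly the configuration difference pointed out above.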

Another important factor to mention is that you’re comparing prices published on Pythnet with prices that are actually published on the respective chain they will be consumed on, which also seems disingenuous given the obvious cost limitation of writing data on-chain on these “consumer chains” (besides additional issues like RPC problems, etc., that the entity pushing prices on-chain has to deal with). In effect you’re saying “look, our signed data gateway is giving you a price every second, you just gotta put it on-chain” and comparing that with values that are already on-chain and paid for. To ensure this is always the case, projects like API3 (and Chainlink) run their own nodes in addition to making use of reputable providers in the space, which allows us to guarantee uptime. Your approach outsources this responsibility to the entities pushing the price on-chain, which opens it up to severe issues during times of high congestion (e.g. the Arbitrum airdrop).

Both of our approaches are similar when it comes to allowing third parties to update price feeds in a tamper proof way though, where liquidators can benefit from access to such a feature to more cost efficiently liquidate positions for AAVE (or even pay them for that right).

Didn’t wanna come off as too direct and want to keep it as technical as possible (and don’t worry i love some competition), but when you put such immense work into comparisons and graphs for price feeds it would be good to make sure that the things you compare are actually comparable and run on the same configuration (or same place), because otherwise people might accuse you of trying to paint a certain picture here :slight_smile:


In addition to the post provided by @API3dave i want to underline that the managed version of dAPIs will give AAVE access to data feeds that are:

  1. already on-chain, delivered in a decentralized manner without requiring AAVE to run additional infrastructure to obtain “more” decentralization
  2. run at deviation thresholds significantly lower than what is currently available for self-funded dAPIs
  3. offering OEV as an add-on that has the potential to massively reduce the liquidation incentives AAVE pays out to third parties
  4. operating in a similar fashion to what AAVE is already used to with Chainlink Price Feeds

I also thought I’d speak about the point of costs. As mentioned, the cost of the data feeds doesn’t necessarily end up with AAVE, as anybody (even the chains themselves, akin to Chainlink’s SKALE program) could sponsor these feeds, making them available for free to everyone on the chain. For instance, we’re going to use the Arbitrum grant to sponsor managed dAPIs on Arbitrum, which will make them free for everyone.

Besides that, it would be interesting to discuss who actually picks up the gas cost of operating these feeds, as costs are a severe factor in this.

I took the liberty of checking both the API3 contract 0x3dec619dc529363767dee9e71d8dd1a5bc270d76 (here) and the respective Pyth contract 0xC5E56d6b40F3e3B5fbfa266bCd35C37426537c65 (here) on Polygon’s zkEVM and their associated price feed update costs.

From the 28th of March to this date the average update cost of a data feed for API3 on Polygon zkEVM was 0.000210035 ETH or ~$0.4.
During the same time period the average update cost for a Pyth data feed was 0.001263113 ETH or ~$2.4.

This is a factor of 6x, which means we can put nearly 6 updates on zkEVM for the cost of a single Pyth update.

At the time of writing this, the latest API3 transaction cost 0.000259653405 ETH, while the latest Pyth transaction cost 0.001850769 ETH, which is even a factor of 7x.
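As a quick sanity check of the multiples quoted above, dividing the quoted per-update costs directly:

```python
# Average per-update costs quoted above (in ETH), March 28 onward
api3_avg, pyth_avg = 0.000210035, 0.001263113
print(round(pyth_avg / api3_avg, 1))    # 6.0, i.e. the ~6x factor

# Latest single transactions quoted above (in ETH)
api3_last, pyth_last = 0.000259653405, 0.001850769
print(round(pyth_last / api3_last, 1))  # 7.1, i.e. the ~7x factor
```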

Whoever ends up paying for these (or is advised to additionally run their own price service instance for maximum reliability and decentralization) might want to factor this into their calculations.

EDIT: Added calculations for the above measured with performances from other chains where Pyth data was available from March until today, to show the trend.
https://dune.com/ugurmersin/api3-v-pyth-cost-per-tx


Following the positive sentiment check of the community to deploy on Polygon’s zkEVM, we will be publishing our technical/infrastructural report of the network in the following weeks.

We want to highlight to the community that even though it is already in production, zkEVM is a weeks-old environment based on new and innovative technology, so this evaluation is really important.




Regarding the Oracle considerations: even though interesting technologies such as Pyth and API3 are appearing, we take a hard stance on using Chainlink as the main price source of the protocol and are not considering any type of replacement at the current moment.

The rationale is the following:

  • Chainlink has powered Aave since day 0, with no negative incidents, so we don’t have any reason to doubt its reliability.
  • Expansion of Aave to other networks involves multiple moving pieces; it is not reasonable to introduce more.
  • We have good confidence in Chainlink expanding to zkEVM soon, given that the Chainlink community is a long-term partner of Aave and has always covered all new feed needs promptly.

That being said, it is always important and healthy to have discussions around asset pricing on Aave and the different mechanisms around it; in our opinion, though, it is not an option that should be considered directly on a network expansion without some minor experimentation first.




Regarding the possibility of introducing a new mechanism like the one described in Option 1 by @MarcZeller, we have already had some preliminary thoughts in this direction, but we see enough disadvantages not to consider it a practical solution (at the moment).


Hi Ugur — Thank you for your prompt response to our message. We appreciate and share your enthusiasm for a friendly competition and our shared desire to help the AAVE community choose the best oracle solution! We agree that healthy competition will allow us to better evaluate and select the most suitable solution for AAVE’s needs.

You point out that the configurations of the three oracles are different: Chainlink and API3 have different heartbeat-based push updates, while Pyth has a pull-based update model. That’s exactly the point we’re trying to convey in these graphs! We believe a pull-based model leads to a higher frequency and quality of consumable updates than the push-based model. You make the point that because these configurations are not the same, it’s unfair or impossible to do a comparison among the oracles, but to the contrary, the quantitative comparisons are illustrating the effects of choosing different configurations. We believe the pull-based configuration helps Pyth achieve higher-quality updates relative to the configurations of other oracles that depend on a centralized pusher to crank according to a heartbeat model that trades off between frequent updates/accurate tracking and financial sustainability (lower thresholds for updates —> more updates —> higher gas costs for push-based oracles).

With regard to your point about protocols running their own price service instance, this is not a requirement. As stated in our original post and our docs, the Pyth Data Association runs a production-level endpoint for anyone to query recent price updates to pull and consume on target chains. Thus, if Aave didn’t run a separate instance to start, they would be able to consume the updates available at the PDA’s price service instance just fine. As you quoted, we recommend protocols to run their own price service instances “for maximum reliability and decentralization” because decentralization is a desirable quality in Web3. While push-based oracles rely on a single centralized party (a single point of failure) to push updates on chain, we want to create the option for protocols invested in complete decentralization to be able to operate their own price service instance.

Regarding your point on the comparison between Pyth data points on Pythnet and API3 on target chain (in this case Polygon’s zkEVM), we understand that you have concerns about how we have conducted the comparison. First, we would like to clarify that we are comparing prices as soon as they become consumable. This means that Pyth prices are consumable upon reaching the price service, while API3/CL prices only become consumable once they actually hit the target chain. This distinction is important because it affects the timing of when the prices are available for use on Aave. In addition, Pyth-integrated protocols are configured to avoid the use of stale prices on-chain by setting maximum staleness thresholds. Since consumer protocols are aware of the availability of Pyth prices on Pythnet and the price service, and are configured to prevent the use of stale prices on-chain, they consider signed price updates as “ready to be consumed” on-chain and allow users to bundle a price update with a transaction. (As to your point about uptime/RPC issues, if there was an inability to land transactions to update the price on-chain, that would also mean an inability to land transactions to consume those prices on-chain. Thus Pyth introduces no new uptime weak links.) Consequently, it is appropriate to compare these ready-for-consumption price updates with the push-based oracles’ on-chain updates.
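The pull flow described above (a consumer refuses stale on-chain prices, so users bundle a signed update with their transaction when needed) can be sketched roughly as follows. This is a toy model under stated assumptions: the class and function names and the 60-second threshold are illustrative, not Pyth’s actual API, and signature verification is omitted.

```python
MAX_STALENESS = 60  # seconds; hypothetical per-protocol staleness threshold


class PullOraclePrice:
    """A price report with its publish time, as stored on-chain or
    carried inside a signed update fetched from a price service."""

    def __init__(self, price: float, publish_time: float):
        self.price = price
        self.publish_time = publish_time

    def is_fresh(self, now: float) -> bool:
        return now - self.publish_time <= MAX_STALENESS


def consume_price(stored: PullOraclePrice,
                  signed_update: PullOraclePrice,
                  now: float) -> float:
    """If the stored price is fresh enough, use it directly; otherwise
    apply the user-supplied signed update first (signature check omitted),
    then consume the refreshed price in the same transaction."""
    if stored.is_fresh(now):
        return stored.price
    if not signed_update.is_fresh(now):
        raise ValueError("bundled update is itself stale")
    stored.price = signed_update.price
    stored.publish_time = signed_update.publish_time
    return stored.price
```

Note how this models the uptime argument above: landing the consuming transaction and landing the price update are the same transaction, so there is no separate update transaction that could fail independently.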

Gas costs for updates are indeed lower on API3 since price updates do not require signature verification of a signed price update. However, this comes at the explicit trade off of less granular pricing as we noted in the exhibits above. Having more accurate prices leads to better user price fulfillment and ultimately a safer pricing approach when valuing user positions on-chain.

Moreover, a per-message comparison is misleading because the entity who bears the cost is very different. In the push model, a centralized entity (such as API3) is responsible for the cost of all of the oracle updates. Thus, even a relatively small cost adds up over time as updates must be pushed consistently. In Pyth’s pull-model, the costs are distributed amongst all users of the protocol, which makes them more sustainable. Depending on grants from L1s is clearly not a sustainable strategy for maintaining an oracle feed in the long run.
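The cost-allocation argument above can be made concrete with back-of-the-envelope numbers (all figures below are hypothetical, chosen only to illustrate who bears the cost in each model):

```python
# Hypothetical illustration of push vs pull cost allocation.
COST_PER_UPDATE = 2.4       # $ per on-chain price update (zkEVM figure from this thread)
PUSH_UPDATES_PER_DAY = 96   # e.g. a 15-minute heartbeat, paid whether or not anyone reads

# Push model: one entity funds every scheduled update.
push_daily_cost_for_provider = PUSH_UPDATES_PER_DAY * COST_PER_UPDATE  # ~$230/day

# Pull model: only users who actually hit a stale price pay, one update each.
stale_interactions_per_day = 40
pull_daily_cost_total = stale_interactions_per_day * COST_PER_UPDATE   # ~$96/day, spread out
pull_cost_per_paying_user = COST_PER_UPDATE                            # ~$2.4 per affected user
```

Under these assumptions the total spend is lower and is borne by users rather than a single sponsor; the counter-argument later in the thread is that shifting that per-transaction cost onto users is itself undesirable for a lending protocol.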

In addition to this, we are continually working to decrease the gas cost through gas optimizations to our verifier contracts as well as some more technical rearchitecting to come in the near future—stay tuned for updates on our blog on this front!

The point above was to illustrate that your picture of “api3 produces less updates” is based on two separate configurations of Chainlink and API3: 0.5% deviation at 1-hour heartbeats vs 1% at 24-hour heartbeats.

Claims like “API3 has the highest deviations of all” and “they show that the Chainlink feed updates more frequently (12 times) than API3 (3 times)” are pretty moot if you’re simply reiterating configurations, because the result should be more than obvious.

Which essentially just means that decentralization on this front is an option and not really a requirement ;)

A pretty untrue statement, wouldn’t you say? Push-based approaches have proven time and again that any oracle node is able to submit the transaction, which means that every node supplying data for a specific data feed has the ability to push the price on-chain. Since nodes are run by multiple independent parties (in API3’s and Chainlink’s case alike)… I think you get the gist :smiley:

This is where I beg to differ heavily. During times of high congestion, API3 and Chainlink services will always be available on-chain directly, since we have dedicated infrastructure running that is used purely for our respective oracle services. During times of high congestion and/or black swan events, you’re simply trusting that users (who mostly use public RPCs, let’s be honest here) will actually be able to update prices, which is a bold statement to make. When users and/or liquidators are unable to rely on RPCs (e.g. during recent nukes or airdrop events) to push prices themselves, AAVE is potentially going to be left holding the bag when prices aren’t updated when they were supposed to be.

There is a reason dedicated infrastructure is typically made available for high-stakes services, and while I do “get” the idea of outsourcing parts of price pushing to the respective users, solely relying on them and their ability to manage/monitor/deal with RPCs is a risky bet to make, especially for AAVE when we’re talking millions in potential bad debt because prices were not published when they should have been. It is also a heavy ask to go to a protocol and tell it to run a “price pusher” while also having to monitor the infrastructure that this price pusher relies on (hosted where, using which RPCs, etc.) simply for consuming prices. You’re also asking these dApps to additionally pay the gas costs for this endeavour.

Untrue again, as we also use signed price updates in our contracts. The current gas usage for self-funded dAPIs (due to their single-source nature) is 37k gas, whereas your updates consume 270-280k gas for a single price update. The cost for our managed dAPIs will range from 75-100k (due to more signatures being verified), which is still going to be miles cheaper than what you’re accomplishing on-chain.

No, what you’re doing is asking each and every AAVE user to pay (at current prices) $2.4 every time they want to interact with the AAVE protocol on zkEVM and the price is not “fresh” enough, whereas we make sure the price is fresh for a fraction of that cost (and not in a centralized way, as you claim), without requiring the user to pay a single penny for updating the oracle. I get that this works for e.g. derivatives platforms, because it is mostly EV+ for a user to update the oracle to receive the “freshest” price for opening a 20x position, but for something like a lending protocol it is more than suboptimal.

The reduced cost also means that these services are actually affordable for chains and dApps alike, and we charge exactly at the cost of operating these products, without rolling the cost onto the users (or onto dApps by making them run an instance of a price pusher).
An additional benefit is that once data is on-chain it can be read by anyone for free, and our conversations with chains have already proven that it is EV+ for them to pay at operating cost so that dApps (and most importantly their users) can benefit from free oracle services.


Hey @bgdlabs,

thanks for the answer. Fully understand and respect the decision to stick with a valuable and proven partner.
If there is some “minor experimentation” that you guys would like to conduct, we’d love to get in touch to potentially explore some of that. :slight_smile:


Some intriguing discussions are getting hidden in this thread.

@bgdlabs in which instances are you more supportive of introducing oracle diversity?

We understand and acknowledge the importance of Chainlink in Aave’s growth, security, and success but we also have an appetite for more experimentation across specific markets.

It would be exciting to empower other infrastructure providers in controlled settings.


I would like to echo Ugur’s comment here about us being willing to work with BGD and AAVE on whatever proof of concept is needed to show the reliability of our dAPIs.

We understand why BGD are reluctant to use different oracle providers, but I (personally) feel that dismissing oracles without testing them could be seen as a protocol risk in itself, given that a fork with a potentially more optimal solution could drain TVL (or gain TVL before an AAVE protocol deployment).

Thank you Marc Zeller for initiating this ARFC and to the broader AAVE community for engaging in this important discussion.

We have read the different options presented here, and we would like to express our support for Aave exploring integration opportunities with both Pyth and API3 as primary oracles on the Polygon zkEVM.

Starting with Pyth, they have a strong record of providing reliable and timely prices on different EVM chains through their cross-chain design. Pyth’s on-demand model accomplishes scalable provision of prices without sacrificing user experience or overwhelming blockspace on different networks. In particular, Pyth already provides prices for 0vix, a borrow-lend protocol on Polygon and zkEVM (and in fact the first borrow-lend platform on the zkEVM). QuickSwap will also be integrating Pyth prices for their upcoming perpetual protocol. While there has been an important discussion on the push vs pull model and the trade-offs each model makes, we take the perspective that more frequent and granular pricing is best for protocol health and solvency.

On the other hand, API3’s ambitious roadmap (self-funded dAPIs, managed dAPIs for price feeds served by first-party oracles, and then oracle extractable value) is really interesting for apps. API3 does not require dApps to run and monitor a separate price pusher instance or deal with additional infrastructure concerns. API3 is also a close partner of ours, and we encourage new and novel oracle designs that push decentralization of this essential infrastructure stack.

The Polygon community appreciates the fair and reasonable arguments debated in public by @KemarTiti and @ugurmersin from both sides.

In the face of this conundrum, this may be an opportunity to improve oracle decentralisation and mitigate reliance on either oracle. We suggest the following:

Use Pyth oracles as primary source for price feeds while exploring Api3 as a secondary backup source.

The rationale for this proposition is that Pyth has been live in production for months, whereas API3 only recently released its price feeds.


Hey everyone, Ryan from Tellor here. Another approach could be to use the 3-pronged median approach, a structure that’s been live for 2.5 years with Ampleforth.

You can view the Ampleforth oracle’s track record here.

Benefits:
Using a 3-pronged median approach with API3, Tellor, and Pyth offers several benefits to the Aave protocol, including:

  1. Increased security: By using three different oracle providers, the risk of a single point of failure is reduced. This makes the oracle system more resilient to attacks and tampering.
  2. Decentralization: The oracle system becomes more decentralized with three oracles, reducing the risk of collusion and improving overall security.
  3. Accuracy: By taking the median of the responses from the three providers, the final price is more accurate and less prone to manipulation.
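As a sketch, the 3-pronged median described above reduces to taking the middle of the three reported prices, so a single faulty or manipulated feed cannot move the final answer (the parameter names are placeholders for the three providers; the figures are invented for illustration):

```python
from statistics import median


def median_price(api3: float, tellor: float, pyth: float) -> float:
    """Aggregate three independent oracle reports; the median is
    unaffected by one arbitrarily wrong outlier."""
    return median([api3, tellor, pyth])


# Normal operation: all three feeds agree closely.
assert median_price(1800.1, 1799.8, 1800.3) == 1800.1

# One compromised or stalled feed reports a wildly wrong price; the median holds.
assert median_price(1800.1, 1799.8, 0.01) == 1799.8
```

Note the limitation implicit in point 1: the median tolerates one bad feed out of three, but two colluding or simultaneously failing feeds can still move the reported price.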

Adding Tellor to the mix would be a good fit here with its track record as an alternative to Chainlink, exemplified not only by Ampleforth, but Liquity as well, which uses Tellor as a fallback.

While the Aave community’s comfort and familiarity with Chainlink is well understood, we felt this could be a good option for the community to consider while we’re on the topic.

We want to thank everyone for participating in the Oracle discussions.

Every potential actor has presented their solution, and we’re closing discussions on this topic.

The next step is @bgdlabs Infrastructure/technical evaluation report with their recommendations for Cross-chain messaging infrastructure for governance & oracle.

The ACI will follow the recommendation of the Aave DAO core devs, @bgdlabs.


Hello everyone, hi @MarcZeller,

Marcin from RedStone here - first of all, we’re glad to see that the discussion about Chainlink alternatives is opening up as the market matures. Monopolies are never good for innovation and can lead to groupthink, which is detrimental to security, especially in the distributed Web3 space - we are strongly convinced that Aave needs more than one oracle to be sustainable and safe in the long run (aligned with your approach @fig).

Why RedStone Oracles
RedStone is best positioned as a first-line oracle for Aave on chains that lack Chainlink implementation, such as Polygon zkEVM, let me explain why:

  • RedStone is “The” On-Demand Oracle - we have been promoting the on-demand model and developing our solution for the past 2 years. We know it’s “trendy” to call yourself an on-demand oracle, but the implementation of such a model is far from trivial.

  • RedStone has been developed & tested since Dec 2020, on mainnet since March 2022, well audited and securing real funds. Along the way we had to solve countless tech challenges (e.g. assembly-level cost optimisation), so we are confident that our implementation is the most technologically advanced when it comes to the on-demand oracle model.

  • Our flow is designed to be maximally cost efficient - happy to provide benchmarks vs other oracles on that front.

  • Our team has embraced various oracle needs from a diverse suite of protocols. Therefore, we leverage the modular architecture of RedStone to offer tailor-made models meeting dApps’ needs. Currently, we offer 3 bespoke data consumption models (details below) and are open to creating more, as RedStone components have been designed to be adjustable.

  • The DeFi ecosystem is built on a promise of decentralisation & permissionlessness. Looking at factors limiting the decentralisation of oracles, such as governance of an “ultimate” multisig or full control over the off-chain component, we designed the RedStone ecosystem so that it can operate under any circumstances.

  • While we give maximum possible freedom to dApp builders, we also acknowledge that the overhead of running oracle relayers can be overkill. Hence, in our RedStone Classic model, our team can take care of running the relayer or share the responsibility with a dApp. No need to run, maintain and secure your own service - we take care of that.

  • Battle-tested - on mainnet with 100% uptime since March 2022, and thousands of transactions secured, i.e. for DeltaPrime on Avalanche (dedicated Dune dashboard).

  • A protocol using RedStone Oracle has full clarity and transparency in terms of data sources and origins of consumed price feeds - as opposed to the black box offered by some of the popular incumbents. On top of that, we give dApps creators a spectrum of customisation options regarding off-chain and on-chain aggregation methods, as well as picking Data Providers they trust.

  • RedStone specialises in EVM L1s & L2s, our Founder knows all its quirks and features thanks to his Smart Contract auditing experience. At the same time, our team (80% devs) keeps up with the ZK-tech, therefore RedStone is available on zkSync Era, Polygon zkEVM, Scroll and Starknet. Our infrastructure is designed for Oracle use cases from day 1, as opposed to forking a legacy codebase or trying to adapt blockchains & bridges that haven’t been designed for that use case (true story).

On top of that, RedStone = working shoulder-to-shoulder with builders.
We collaborate hands-on with our Partners to come up with an optimal solution for their specific use case, customising many parameters such as:

  • data aggregation mechanisms,
  • data injection to the chain,
  • custom flows of liquidation/opening a position etc.

We are fully committed to building an entirely dedicated solution for Aave including risk mitigation mechanisms, i.e. lower liquidity in the early days of Polygon zkEVM.
Additionally, we plan to be in close contact and at disposal of Aave’s risk analysis partners.

Currently available RedStone models

  • Core model (On-Demand): data is dynamically injected into users’ transactions achieving maximum gas efficiency and maintaining great user experience as the whole process fits into a single transaction.
  • Classic model (Push): data is pushed into on-chain storage via decentralized and permissionless relayers. Dedicated to protocols designed for the traditional Oracles model + getting full control of the data source and update conditions (dApp contracts and relayer operators specify heartbeat & deviation threshold). No need to change the code logic vs Chainlink setup.
  • X model (No Front-running): targeting the needs of the most advanced DeFi protocols by eliminating the front-running risk through offsetting the delay between the user’s interaction and the Oracle price update
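The update condition of the Classic (push) model described above, where the relayer pushes when either the deviation threshold or the heartbeat is exceeded, can be sketched as follows. The default values (0.5% deviation, 1-hour heartbeat) are illustrative only, borrowed from the configuration debate earlier in this thread, not RedStone's actual parameters:

```python
def should_push(last_price: float, new_price: float,
                seconds_since_update: float,
                deviation_threshold: float = 0.005,  # 0.5%, hypothetical
                heartbeat: float = 3600.0) -> bool:   # 1 hour, hypothetical
    """Relayer condition for a push-model feed: write the new price
    on-chain when it moved more than the deviation threshold, OR when
    the heartbeat interval has elapsed since the last update."""
    deviation = abs(new_price - last_price) / last_price
    return deviation > deviation_threshold or seconds_since_update >= heartbeat


# Small move shortly after an update: no push needed.
assert should_push(1800.0, 1800.5, 60) is False
# Move larger than the deviation threshold: push immediately.
assert should_push(1800.0, 1815.0, 60) is True
# Heartbeat elapsed: push even if the price barely moved.
assert should_push(1800.0, 1800.1, 3600) is True
```

Tightening either parameter trades gas cost for tracking accuracy, which is exactly the push-model trade-off debated between API3 and Pyth above.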

We believe the best solution for Aave would be the RedStone Classic model, which offers full compatibility with the Chainlink interface (hence ease of risk analysis, @bgdlabs). If there’s interest, we’d be happy to facilitate the transition to the cost-optimal RedStone Core model in the long term.

We are also keen to support Aave on other existing and upcoming L1s & L2s. To learn more about our models and positioning, you can explore the below governance proposal on the Celo forum:

Conclusion
To sum up, we’re happy to work closely with @MarcZeller @bgdlabs or any entity contributing to Aave’s development on the oracle front and provide all the necessary data and input for them to make the most optimal decision. We’re open to joining governance calls where each Oracle can present and answer questions regarding implementation for Aave.

If we were to propose the next step in this discussion we would suggest that each oracle provider prepares a testnet POC for benchmarking purposes to gather data and see which oracle solution would be best suited for Aave’s specific needs.

End Note: Our sincere apologies for submitting this comment only now. A large part of the discussion took place over the Easter holidays and there was no official timeframe or submission period defined, so we wanted to get involved once our team was back from the family tables.


In my opinion none of the three points you gave are very strong, and instead show personal attachment to a solution based on legacy.

  • Aave hasn’t tried alternatives. Yes, Chainlink has succeeded, but what makes it technically superior to alternatives? Saying Chainlink has worked doesn’t mean anything without a comparison to alternatives.
  • I am not technical so I can’t say for sure, but my thought here is that if BGD isn’t willing to add this “extra piece”, another group would be fine doing it. If adding an extra piece is a barrier to changing the Aave protocol for the better, then that is not good.
  • If there is an alternative, it works well, and the DAO is ok with it, it is weird not to accept that alternative. It is backwards thinking to just go with Chainlink because it is a long-term partner. The DAO should strive for greatness, and there are obvious benefits to being open to Chainlink alternatives.

Overall, in my opinion there should be a temp check in the community to figure out if the DAO is ok with Chainlink alternatives. This comment section definitely implies many in the community are at least open to the idea.

Chainlink is important infrastructure and a key part of DeFi, but that doesn’t mean Aave can’t have controlled experiments with other providers.
