[ARFC] Aave V3 Deployment on zkEVM L2

I also thought I’d speak about the point of costs. As mentioned, the cost for the data feeds doesn’t necessarily end up with AAVE, as anybody (even the chains themselves, akin to Chainlink’s SKALE program) could sponsor these feeds, making them available for free to everyone on the chain. For instance, we’re going to use the Arbitrum grant to sponsor managed dAPIs on Arbitrum, which will make them free for everyone.

Besides that, it would be interesting to discuss who actually picks up the gas cost of operating these feeds, as cost is a significant factor here.

I took the liberty of checking both the API3 contract 0x3dec619dc529363767dee9e71d8dd1a5bc270d76 and the respective Pyth contract 0xC5E56d6b40F3e3B5fbfa266bCd35C37426537c65 on Polygon’s zkEVM, and the costs associated with their price feed updates.

From the 28th of March to date, the average update cost of a data feed for API3 on Polygon zkEVM was 0.000210035 ETH, or ~$0.40.
During the same time period, the average update cost for a Pyth data feed was 0.001263113 ETH, or ~$2.40.

This is a factor of roughly 6x, meaning nearly six API3 updates can be posted on zkEVM for the cost of a single Pyth update.

At the time of writing, the latest API3 transaction cost 0.000259653405 ETH, while the latest Pyth transaction cost 0.001850769 ETH, which is a factor of roughly 7x.
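For transparency, here is a quick back-of-the-envelope check of those ratios. The ETH amounts are the ones quoted above; the ~$1,900 ETH price is simply the conversion rate implied by the quoted USD figures, not an independent data point:

```python
# Update-cost figures quoted above (Polygon zkEVM, 28 March to date).
API3_AVG_ETH = 0.000210035
PYTH_AVG_ETH = 0.001263113
API3_LATEST_ETH = 0.000259653405
PYTH_LATEST_ETH = 0.001850769
ETH_USD = 1900  # approximate rate implied by the ~$0.40 / ~$2.40 figures

print(f"average ratio: {PYTH_AVG_ETH / API3_AVG_ETH:.2f}x")        # ~6.0x
print(f"latest ratio:  {PYTH_LATEST_ETH / API3_LATEST_ETH:.2f}x")  # ~7.1x
print(f"average USD:   API3 ~${API3_AVG_ETH * ETH_USD:.2f}, Pyth ~${PYTH_AVG_ETH * ETH_USD:.2f}")
```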

Whoever ends up paying for these updates (or is advised to additionally run their own price service instance “for maximum reliability and decentralization”) might want to factor this into their calculations.

EDIT: Added calculations for the above, measured against performance on other chains where Pyth data was available from March until today, to show the trend.
https://dune.com/ugurmersin/api3-v-pyth-cost-per-tx

5 Likes

Following the positive sentiment check of the community to deploy on Polygon’s zkEVM, we will be publishing our technical/infrastructural report of the network in the following weeks.

We want to highlight to the community that even if it is already in production, zkEVM is a weeks-old environment, based on new and innovative technology, so the evaluation is really important.




Regarding the Oracle considerations, even if interesting technologies appear like Pyth and API3, we have a hard stance on using Chainlink as the main price source of the protocol and not considering any type of replacement at the current moment.

The rationale is the following:

  • Chainlink has powered Aave since day 0, with no negative incidents, so we don’t have any reason to doubt its reliability.
  • Expansion of Aave to other networks already involves multiple moving pieces; it is not reasonable to introduce more.
  • We have good confidence in Chainlink expanding to zkEVM soon, given the Chainlink community is a long-term partner of Aave and they have always covered all the new needs for feeds promptly.

That being said, it is always important and healthy to have discussions around asset pricing on Aave and the different mechanisms around it; it is just not an option that, in our opinion, should be considered directly on a network expansion without other minor experimentation first.




Regarding the possibility of introducing a new mechanism like the one described in Option 1 by @MarcZeller, we already had some preliminary thoughts in this direction, but we see enough disadvantages to not consider it a practical solution (at the moment).

5 Likes

Hi Ugur — Thank you for your prompt response to our message. We appreciate and share your enthusiasm for a friendly competition and our shared desire to help the AAVE community choose the best oracle solution! We agree that healthy competition will allow us to better evaluate and select the most suitable solution for AAVE’s needs.

You point out that the configurations of the three oracles are different: Chainlink and API3 have different heartbeat-based push updates, while Pyth has a pull-based update model. That’s exactly the point we’re trying to convey in these graphs! We believe a pull-based model leads to a higher frequency and quality of consumable updates than the push-based model. You make the point that because these configurations are not the same, it’s unfair or impossible to do a comparison among the oracles, but to the contrary, the quantitative comparisons are illustrating the effects of choosing different configurations. We believe the pull-based configuration helps Pyth achieve higher-quality updates relative to the configurations of other oracles that depend on a centralized pusher to crank according to a heartbeat model that trades off between frequent updates/accurate tracking and financial sustainability (lower thresholds for updates -> more updates -> higher gas costs for push-based oracles).

With regard to your point about protocols running their own price service instance, this is not a requirement. As stated in our original post and our docs, the Pyth Data Association runs a production-level endpoint for anyone to query recent price updates to pull and consume on target chains. Thus, if Aave didn’t run a separate instance to start, it would be able to consume the updates available from the PDA’s price service instance just fine. As you quoted, we recommend that protocols run their own price service instances “for maximum reliability and decentralization” because decentralization is a desirable quality in Web3. While push-based oracles rely on a single centralized party (a single point of failure) to push updates on chain, we want to create the option for protocols invested in complete decentralization to operate their own price service instance.

Regarding your point on the comparison between Pyth data points on Pythnet and API3 on target chain (in this case Polygon’s zkEVM), we understand that you have concerns about how we have conducted the comparison. First, we would like to clarify that we are comparing prices as soon as they become consumable. This means that Pyth prices are consumable upon reaching the price service, while API3/CL prices only become consumable once they actually hit the target chain. This distinction is important because it affects the timing of when the prices are available for use on Aave. In addition, Pyth-integrated protocols are configured to avoid the use of stale prices on-chain by setting maximum staleness thresholds. Since consumer protocols are aware of the availability of Pyth prices on Pythnet and the price service, and are configured to prevent the use of stale prices on-chain, they consider signed price updates as “ready to be consumed” on-chain and allow users to bundle a price update with a transaction. (As to your point about uptime/RPC issues, if there was an inability to land transactions to update the price on-chain, that would also mean an inability to land transactions to consume those prices on-chain. Thus Pyth introduces no new uptime weak links.) Consequently, it is appropriate to compare these ready-for-consumption price updates with the push-based oracles’ on-chain updates.
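To illustrate the consumption pattern described above, here is a minimal, hypothetical sketch of the client-side decision a pull-model consumer makes, assuming a protocol-configured staleness threshold; the function and parameter names are ours for illustration and not Pyth’s actual SDK:

```python
import time

MAX_STALENESS_S = 60  # illustrative threshold a consumer protocol might configure

def needs_bundled_update(onchain_publish_ts: float, now: float) -> bool:
    """True when the price stored on-chain is older than the protocol's maximum
    staleness threshold, so the transaction must bundle a fresh signed update."""
    return (now - onchain_publish_ts) > MAX_STALENESS_S

def build_tx(user_action, onchain_publish_ts, fetch_signed_update):
    """Client-side flow: prepend an oracle update to the call only when needed."""
    calls = []
    if needs_bundled_update(onchain_publish_ts, time.time()):
        # e.g. a signed update fetched from a hosted price service endpoint
        calls.append(fetch_signed_update())
    calls.append(user_action)
    return calls
```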

Gas costs for updates are indeed lower on API3, since price updates do not require signature verification of a signed price update. However, this comes at the explicit trade-off of less granular pricing, as we noted in the exhibits above. Having more accurate prices leads to better user price fulfillment and ultimately a safer pricing approach when valuing user positions on-chain.

Moreover, a per-message comparison is misleading because the entity who bears the cost is very different. In the push model, a centralized entity (such as API3) is responsible for the cost of all of the oracle updates. Thus, even a relatively small cost adds up over time as updates must be pushed consistently. In Pyth’s pull-model, the costs are distributed amongst all users of the protocol, which makes them more sustainable. Depending on grants from L1s is clearly not a sustainable strategy for maintaining an oracle feed in the long run.

In addition to this, we are continually working to decrease the gas cost through gas optimizations to our verifier contracts as well as some more technical rearchitecting to come in the near future—stay tuned for updates on our blog on this front!

The point above was to illustrate that your picture of “api3 produces less updates” is based on two separate configurations of Chainlink and API3: 0.5% deviation at a 1-hour heartbeat vs 1% at a 24-hour heartbeat. Claiming things like “and that API3 has the highest deviations of all” and “They show that the Chainlink feed updates more frequently (12 times) than API3 (3 times)” is pretty moot if you’re simply reiterating configurations, because the result should be more than obvious.
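For readers following along, here is a minimal sketch of the push-model trigger both configurations reduce to; the 0.5%/1-hour and 1%/24-hour values are the ones quoted above, everything else is illustrative:

```python
def should_push(last_pushed_price: float, offchain_price: float,
                seconds_since_last_push: float,
                deviation_threshold: float, heartbeat_s: float) -> bool:
    """Push an on-chain update when the off-chain price deviates past the
    threshold OR the heartbeat interval has elapsed, whichever comes first."""
    deviation = abs(offchain_price - last_pushed_price) / last_pushed_price
    return deviation >= deviation_threshold or seconds_since_last_push >= heartbeat_s

# The two configurations being compared in the thread:
CHAINLINK_CFG = dict(deviation_threshold=0.005, heartbeat_s=1 * 3600)   # 0.5% / 1 h
API3_CFG      = dict(deviation_threshold=0.01,  heartbeat_s=24 * 3600)  # 1% / 24 h
# A tighter threshold and shorter heartbeat trivially yield more updates,
# which is the "reiterating configurations" point made above.
```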

Which essentially just means that decentralization on this front is an option and not really a requirement ;)

Pretty untrue statement, wouldn’t you say? Push-based approaches have time and time again proven that any oracle node is able to submit the transaction, which means that every node supplying data for a specific data feed has the ability to push the price on-chain. Since nodes are run by multiple independent parties (in API3’s and Chainlink’s case alike)… I think you get the gist :smiley:

This is where I beg to differ heavily. During times of high congestion, API3 and Chainlink services will always be available on-chain directly, since we have dedicated infrastructure running that is used purely for our respective oracle services. During times of high congestion and/or black swan events, you’re simply trusting that users (who mostly use public RPCs, let’s be honest here) will actually be able to update prices, which is a bold assumption to make. When users and/or liquidators are actually unable to rely on RPCs (e.g. during recent nukes or airdrop events) to push prices themselves, AAVE is going to be left potentially holding the bag when prices aren’t updated when they were supposed to be. There is a reason dedicated infrastructure is typically made available for high-stakes services, and while I do “get” the idea of outsourcing parts of price pushing to the respective users, solely relying on them and their ability to manage/monitor/deal with RPCs is a risky bet to make, especially for AAVE when we’re talking about millions in potential bad debt because prices have not been published when they should have been. It is also a heavy ask to simply go to a protocol and tell them to run a “price pusher” while also making sure to monitor the respective infrastructure that this price pusher relies on (hosted where, using which RPCs, etc.), simply for consuming prices. You’re also asking these dApps to additionally pay the gas costs for this endeavour.

Untrue again, as we also use signed price updates in our contracts. The current gas usage for self-funded dAPIs (due to their single-source nature) is 37k gas, whereas your updates consume 270-280k gas for a single price update. The cost for our managed dAPIs will range from 75-100k gas (due to more signatures being verified), which is still going to be miles cheaper than what you’re accomplishing on-chain.
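Putting the quoted gas figures side by side (these are the numbers stated above, with simple ratios for context, not an independent measurement):

```python
# Gas figures quoted above, with simple ratios for context.
SELF_FUNDED_DAPI_GAS = 37_000
MANAGED_DAPI_GAS = (75_000, 100_000)   # expected range once more signatures are verified
PYTH_UPDATE_GAS = (270_000, 280_000)

p_lo, p_hi = PYTH_UPDATE_GAS
m_lo, m_hi = MANAGED_DAPI_GAS
print(f"vs self-funded dAPIs: {p_lo / SELF_FUNDED_DAPI_GAS:.1f}x-{p_hi / SELF_FUNDED_DAPI_GAS:.1f}x")  # ~7.3x-7.6x
print(f"vs managed dAPIs:     {p_lo / m_hi:.1f}x-{p_hi / m_lo:.1f}x")                                   # ~2.7x-3.7x
```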

No, what you’re doing is asking each and every AAVE user to pay (at current prices) $2.40 every time they want to interact with the AAVE protocol on zkEVM and the price is not “fresh” enough, whereas we make sure that the price is fresh enough at a fraction of that cost (and not in a centralized way, as you claim), without requiring the user to pay a single penny for updating the oracle. I get that this works for e.g. derivatives platforms, because it is mostly EV+ for a user to update the oracle to receive the “freshest” price for opening a 20x position, but for something like a lending protocol it is more than suboptimal.

The reduced cost also means that these services are actually affordable for chains and dApps alike, and we charge exactly the cost of operating these products without rolling the cost onto the users (or even onto dApps by making them run an instance of a price pusher).
An additional benefit is that once data is on-chain it can be read by anyone for free, and our conversations with chains have already shown that it is EV+ for them to pay at operating cost so that dApps (and most importantly their users) can benefit from free oracle services.

3 Likes

Hey @bgdlabs,

Thanks for the answer. We fully understand and respect the decision to stick with a valuable and proven partner.
If there is some “minor experimentation” that you guys would like to conduct, we’d love to get in touch to potentially explore some of that. :slight_smile:

2 Likes

Some intriguing discussions are getting hidden in this thread.

@bgdlabs in which instances are you more supportive of introducing oracle diversity?

We understand and acknowledge the importance of Chainlink in Aave’s growth, security, and success but we also have an appetite for more experimentation across specific markets.

It would be exciting to empower other infrastructure providers in controlled settings.

9 Likes

I would like to echo Ugur’s comment here about us being willing to work with BGD and AAVE on whatever proof of concept is needed to show the reliability of our dAPIs.

We understand why BGD are reluctant to use different oracle providers, but I (personally) feel that dismissing oracles without testing could be seen as a protocol risk in itself, should a fork with a potentially more optimal solution be able to drain TVL (or gain TVL before an AAVE protocol deployment).

Thank you Marc Zeller for initiating this ARFC and to the broader AAVE community for engaging in this important discussion.

We have read the different options presented here, and we would like to express our support for Aave exploring integration opportunities with both Pyth and API3 as primary oracles on the Polygon zkEVM.

Starting with Pyth, they have a strong record of providing reliable and timely prices on different EVM chains through their cross-chain design. Pyth’s on-demand model accomplishes scalable provision of prices without sacrificing user experience or overwhelming blockspace on different networks. In particular, Pyth already provides prices for 0vix, a borrow-lend protocol on Polygon and zkEVM (and in fact the first borrow-lend platform on the zkEVM). QuickSwap will also be integrating Pyth prices for their upcoming perpetuals protocol. While there has been an important discussion on the push vs. pull model and the trade-offs each model makes, we take the perspective that more frequent and granular pricing is best for protocol health and solvency.

On the other hand, API3’s ambitious roadmap of self-funded dAPIs, managed dAPIs for price feeds as first-party oracles, and then Oracle Extractable Value is really interesting for apps. API3 does not require dApps to run and monitor a separate price pusher instance or deal with additional infrastructure concerns. API3 is also a close partner of ours, and we encourage new and novel oracle designs to push decentralization of this essential infrastructure stack.

The Polygon community appreciates the fair and reasonable arguments debated in public by @KemarTiti and @ugurmersin from both sides.

In the face of this conundrum, this may be an opportunity to improve oracle decentralisation and mitigate reliance on either oracle. We suggest the following:

Use Pyth oracles as the primary source for price feeds while exploring API3 as a secondary backup source.

The rationale for this proposition is that Pyth has been live in production for months, whereas API3 only recently released its price feeds.

1 Like

Hey everyone, Ryan from Tellor here. Another approach could be to use the 3-pronged median approach, a structure that’s been live for 2.5 years with Ampleforth.

You can view the Ampleforth oracle’s track record here.

Benefits:
Using a 3-pronged median approach with API3, Tellor, and Pyth offers several benefits to the Aave protocol, including:

  1. Increased security: By using three different oracle providers, the risk of a single point of failure is reduced. This makes the oracle system more resilient to attacks and tampering.
  2. Decentralization: The oracle system becomes more decentralized with three oracles, reducing the risk of collusion and improving overall security.
  3. Accuracy: By taking the median of the responses from the three providers, the final price is more accurate and less prone to manipulation (a minimal sketch of this aggregation is shown below).
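A minimal sketch of the median aggregation referenced in point 3; the function name and the sample values are purely illustrative, not any provider’s actual interface:

```python
from statistics import median

def aggregate_price(api3_price: float, tellor_price: float, pyth_price: float) -> float:
    """Median of three independent reports: a single faulty or manipulated feed
    cannot move the final answer beyond the bounds set by the other two."""
    return median([api3_price, tellor_price, pyth_price])

# Example: one outlier report is effectively ignored.
assert aggregate_price(1800.0, 1795.0, 950.0) == 1795.0
```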

Adding Tellor to the mix would be a good fit here, given its track record as an alternative to Chainlink, exemplified not only by Ampleforth but also by Liquity, which uses Tellor as a fallback.

While the Aave community’s comfort and familiarity with Chainlink is well understood, we felt this could be a good option for the community to consider while we’re on the topic.

We want to thank everyone for participating in the Oracle discussions.

Every potential actor has presented their solution, and we’re closing discussions on this topic.

The next step is @bgdlabs Infrastructure/technical evaluation report with their recommendations for Cross-chain messaging infrastructure for governance & oracle.

The ACI will follow the recommendation of the Aave DAO core devs, @bgdlabs.

2 Likes

Hello everyone, hi @MarcZeller,

Marcin from RedStone here. First of all, we’re glad to see that the discussion about Chainlink alternatives is opening up as the market matures. Monopolies are never good for innovation and can lead to groupthink, which is detrimental to security, especially in the distributed Web3 space. We are strongly convinced that Aave needs more than just one oracle to be sustainable and safe in the long run (aligned with your approach, @fig).

Why RedStone Oracles
RedStone is best positioned as a first-line oracle for Aave on chains that lack a Chainlink implementation, such as Polygon zkEVM. Let me explain why:

  • RedStone is “The” On-Demand Oracle - we have been promoting the on-demand model narrative and developing our solution for the past 2 years. We know it’s “trendy” to call yourself an On-Demand Oracle, but the implementation of such a model is far from trivial.

  • RedStone has been developed & tested since Dec 2020, on mainnet since March 2022, well audited and securing real funds. Along the way we had to solve countless tech challenges (e.g. Assembly-level optimisation of costs), so we are confident that our implementation is the most technologically advanced when it comes to the on-demand oracle model.

  • Our flow is designed to be maximally cost efficient - happy to provide benchmarks vs other oracles on that front.

  • Our team has embraced the varied oracle needs of a diverse suite of protocols. Therefore, we leverage the modular architecture of RedStone to offer tailor-made models meeting dApps’ needs. Currently, we offer 3 bespoke data consumption models (details below) and are open to creating the next ones, as RedStone components have been designed to be adjustable.

  • The DeFi ecosystem is built on a promise of decentralisation & permissionlessness. Looking at factors limiting the decentralisation of oracles, such as governance of an “ultimate” multisig or full control over the off-chain component, we designed the RedStone ecosystem so that it can operate under any circumstances.

  • While we give maximum possible freedom to dApp builders, we also acknowledge that the overhead of running oracle relayers can be overkill. Hence, in our RedStone Classic model, our team can take care of running the relayer or share the responsibility with a dApp. No need to run, maintain and secure your own service - we take care of that.

  • Battle-tested - on mainnet with 100% uptime since March 2022, and thousands of transactions secured, e.g. for DeltaPrime on Avalanche (dedicated Dune dashboard).

  • A protocol using RedStone Oracle has full clarity and transparency in terms of data sources and origins of consumed price feeds - as opposed to the black box offered by some of the popular incumbents. On top of that, we give dApps creators a spectrum of customisation options regarding off-chain and on-chain aggregation methods, as well as picking Data Providers they trust.

  • RedStone specialises in EVM L1s & L2s, our Founder knows all its quirks and features thanks to his Smart Contract auditing experience. At the same time, our team (80% devs) keeps up with the ZK-tech, therefore RedStone is available on zkSync Era, Polygon zkEVM, Scroll and Starknet. Our infrastructure is designed for Oracle use cases from day 1, as opposed to forking a legacy codebase or trying to adapt blockchains & bridges that haven’t been designed for that use case (true story).

On top of that, RedStone = working shoulder-to-shoulder with builders.
We collaborate hands-on with our Partners to come up with an optimal solution for their specific use case, customising many parameters such as:

  • data aggregation mechanisms,
  • data injection to the chain,
  • custom flows of liquidation/opening a position etc.

We are fully committed to building an entirely dedicated solution for Aave including risk mitigation mechanisms, i.e. lower liquidity in the early days of Polygon zkEVM.
Additionally, we plan to be in close contact and at disposal of Aave’s risk analysis partners.

Currently available RedStone models

  • Core model (On-Demand): data is dynamically injected into users’ transactions achieving maximum gas efficiency and maintaining great user experience as the whole process fits into a single transaction.
  • Classic model (Push): data is pushed into on-chain storage via decentralized and permissionless relayers. Dedicated to protocols designed for the traditional Oracles model + getting full control of the data source and update conditions (dApp contracts and relayer operators specify heartbeat & deviation threshold). No need to change the code logic vs Chainlink setup.
  • X model (No Front-running): targeting the needs of the most advanced DeFi protocols by eliminating the front-running risk through offsetting the delay between the user’s interaction and the Oracle price update

We believe the best solution for Aave would be the RedStone Classic model, which offers full compatibility with the Chainlink interface (hence ease of risk analysis, @bgdlabs). If there’s interest, we’d be happy to facilitate the transition to the cost-optimal RedStone Core model in the long term.

We are also keen to support Aave on other existing and upcoming L1s & L2s. To learn more about our models and positioning, you can explore the governance proposal we published on the Celo forum.

Conclusion
To sum up, we’re happy to work closely with @MarcZeller @bgdlabs or any entity contributing to Aave’s development on the oracle front and provide all the necessary data and input for them to make the most optimal decision. We’re open to joining governance calls where each Oracle can present and answer questions regarding implementation for Aave.

If we were to propose the next step in this discussion we would suggest that each oracle provider prepares a testnet POC for benchmarking purposes to gather data and see which oracle solution would be best suited for Aave’s specific needs.

End Note: Our sincere apologies for submitting this comment only now. A large part of the discussion took place over the Easter holidays, and since there was no official timeframe or submission period defined, we wanted to get involved once our team was back from their family tables.

6 Likes

In my opinion, none of the three points you gave is very strong; instead they show personal attachment to a solution based on legacy.

  • Aave hasn’t tried alternatives. Yes, Chainlink has succeeded, but what makes it technically superior to alternatives? Saying Chainlink has worked doesn’t mean anything without a comparison to alternatives.
  • I am not technical so I can’t say for sure, but my thought here is that if BGD isn’t willing to add this “extra piece”, another group would be fine doing it. If adding an extra piece is a barrier to changing the Aave protocol for the better, then that is not good.
  • If there is an alternative, and it works well, and the DAO is OK with it, it is weird not to accept that alternative. It is backwards thinking to just go with Chainlink because it is a long-term partner. The DAO should strive for greatness, and there are so many obvious benefits to accepting Chainlink alternatives.

Overall, in my opinion there should be a temp check in the community to figure out if the DAO is OK with Chainlink alternatives. This comment section definitely implies many in the community are at least open to the idea.

Chainlink is important infrastructure and a key part of DeFi, but that doesn’t mean Aave can’t have controlled experiments with other providers.

1 Like

Great discussion so far and we support the deployment of Aave V3 on Polygon’s zkEVM with the mentioned assets.

Thanks to @API3dave @UgurMersin and @KemarTiti for providing in-depth explanations of the differences and benefits between API3 and Pyth. On this note, we would also like to signal our appetite to explore alternative oracle solutions for Aave. Chainlink has continuously proven to be an exceptional oracle partner, however, we believe that Aave can certainly benefit from exploring alternative oracle solutions that could help the protocol’s expansion on chains where Chainlink does not exist and in general, provide further optionality to Aave.

Furthermore, higher-frequency pull-based solutions like Pyth could unlock areas of protocol utility, and it would be interesting to get @ChaosLabs’ and @Pauljlei’s opinions on the effects of more frequent, granular price updates on setting capital-efficient risk parameters (specifically e-mode).

We respect @bgdlabs’ peace of mind in sticking to the status quo, but we are curious how you’d picture Aave exploring different oracle sets outside of a new Aave deployment?

We believe Pyth as the primary oracle and Chainlink/API3 as the secondary oracle could offer higher security against bad debt for Aave, due to Pyth’s frequent price updates.

Thanks!

Disclaimer: Wintermute is a data provider on Pyth.

1 Like

Maybe this post wasn’t clear enough. The discussions about Oracles are closed.

Oracles are the most important piece of infrastructure for a liquidity protocol. Price feeds are how the protocol knows whether a position is sufficiently collateralized or needs to be liquidated.

Any oracle failure would have devastating effects on the protocol, its users, and the Aave brand.

That is why Oracle recommendations are not a beauty contest. Everyone had a fair shot at presenting their solution. Now the Aave DAO service provider will publish recommendations.

As the ACI, we will follow @bgdlabs recommendations on their upcoming report.

We invite the community to respect the governance process and refrain from adding to the oracle debate until BGD Labs recommendations are out.

2 Likes

As we also agree that having a discussion about everything around Aave is healthy, some extra considerations regarding our position:

  • First of all, we exclusively defend what we think is more optimal for Aave, from the standpoint of an entity with a deep understanding of all Aave technical infrastructure and the implications of changes.
    Our engagement with the DAO is to advise and contribute technically based on the previous principle, and the reality is that swapping an Oracle provider without a major reason is definitely not something that helps the perceived stability of the protocol, which goes before everything else.

  • We have absolutely no problem with proposals regarding more controlled experimentation of other types of oracles different from Chainlink. But we don’t think adding an extra variable on a potential deployment on a network with pretty new technology by itself (ZK/validity rollup) is positive for Aave under any circumstance unless it is simply impossible to have the current provider (Chainlink) there.

  • It is important to highlight that we are not saying that alternative solutions are not good, just that changing the approach precisely on a new network is not such a good idea.

  • From a technical standpoint, it is not a realistic possibility to change from a “push” to a “pull” model right now on Aave. The implications on protocol design and especially user experience are high.

  • It is not so appropriate for this thread to become a discussion between price providers, as the topic is more about an Aave V3 deployment on zkEVM. We recommend that the different participants create independent threads for specific comments instead.

  • We also welcome representatives from Chainlink to comment on their stance/timeline regarding Polygon zkEVM.

6 Likes

Thank you, @bgdlabs.

Hey everyone, Michael from Chainlink Labs here. I appreciate the community’s drive and excitement to expand to new networks.

Chainlink Labs has been actively working on deploying Chainlink Data Feeds on the Polygon zkEVM network, with additional zk-rollups such as zkSync and Starknet also being high priorities.

Chainlink Data Feeds are currently planned for deployment on zkEVM in Q2 of this year, pending a rigorous security and validation process and clearance of any technical issues, which is standard for new network launches. This would align with Aave’s roadmap and timelines for launching on the network.

As with every chain integration, Chainlink Labs always takes a methodical and security-focused approach, demonstrated by the proven track record of Chainlink Data Feed deployments on other blockchains, including those used across all existing Aave market deployments. Each blockchain and L2 has its own nuances, including differences in the EVM implementation, gas markets, consensus, finalization, and more. It is also not uncommon for new chains to iterate and modify how the chain operates, which may impact how oracles operate on that network. We are simply not willing to compromise on security.

We believe Aave shares a similar security-focused mindset, given the many decisions that have been made to protect the protocol. These initiatives include a robust and continuously improving Safety Module, adding Chainlink Proof of Reserve circuit breakers to Avalanche V3 Markets, onboarding two risk managers (@Gauntlet and @ChaosLabs), running the V3 protocol through six separate audits, and more plans on the way like Aave Forest.

As noted by @bgdlabs, zkEVM is still a very new network, so extra caution must be taken when deploying any DeFi-related infrastructure on a new network. It often takes time for a blockchain to become battle-tested once in production and address any potential issues not discovered in testing. A security-first approach is essential to ensuring Aave market deployments are robust against exploitation across a variety of verticals.

We are excited to continue supporting Aave’s multi-chain strategy and look forward to deploying Chainlink Data Feeds on zkEVM in the coming months.

5 Likes

Relaunching this thread as @bgdlabs have published an infrastructure report; editing the proposal accordingly.

3 Likes

Chaos Labs Launch Parameter Recommendations

Chaos Labs supports listing WETH, USDC, and WMATIC as the initial assets on the proposed deployment on Polygon zkEVM. Launching these assets on a new deployment will help ensure a smooth and successful debut while minimizing risk. Additionally, these assets are widely used in the DeFi space, making them a reliable choice for early adoption on the zkEVM L2 network.

Following the guidelines of our approach to start with the battle-tested parameters of V3, we begin with a baseline of LT, LTV, Liquidation Penalty, and IR Curves similar to the values set today for WETH and USDC on Ethereum V3 and WMATIC on Polygon V3.

For the initial supply and borrow caps, following the same principles as our recommendations for launching new assets and introducing Aave on other new deployments, we suggest setting the supply cap at 2x the liquidity level available under the Liquidation Penalty price impact for each asset, and the borrow cap as (supply cap * UOptimal). These caps will be optimized via the Risk Steward after observing usage on the new deployment and considering market and liquidity conditions. (A quick re-derivation of the recommended caps is shown after the table below.)

Recommended Initial Parameters

Risk Parameter             WETH    USDC    WMATIC
Isolation Mode             NO      NO      NO
Enable Borrow              YES     YES     YES
Borrowable in Isolation    NO      YES     NO
Enable Collateral          YES     YES     YES
Loan To Value              80%     77%     68%
Liquidation Threshold      82.5%   80%     73%
Liquidation Bonus          5%      5%      10%
Reserve Factor             15%     10%     20%
Liquidation Protocol Fee   10%     10%     10%
Borrow Cap                 280     1.26M   375K
Supply Cap                 350     1.4M    500K
Debt Ceiling               N/A     N/A     N/A
Base                       1%      0%      0%
Slope1                     3.8%    4%      6.1%
Uoptimal                   80%     90%     75%
Slope2                     80%     60%     100%
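As a quick sanity check of the cap methodology described above (borrow cap = supply cap * UOptimal), applied to the recommended values in the table; this only re-derives the published numbers:

```python
# borrow_cap = supply_cap * UOptimal, per the methodology above
recommended = {
    # asset: (supply_cap, u_optimal)
    "WETH":   (350,       0.80),
    "USDC":   (1_400_000, 0.90),
    "WMATIC": (500_000,   0.75),
}

for asset, (supply_cap, u_opt) in recommended.items():
    print(f"{asset}: borrow cap = {supply_cap * u_opt:,.0f}")
# WETH: 280, USDC: 1,260,000, WMATIC: 375,000 -- matching the table above.
```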
6 Likes

ARFC escalated to the snapshot stage

Voting starts tomorrow.

2 Likes

Summary

Gauntlet supports the proposed parameters, which offer room for growth while balancing out risks associated with newer deployments. There is strong liquidity on Quickswap and Balancer relative to the size of Polygon zkEVM.

We have a few additional recommendations.

  • We recommend listing WSTETH at launch as well, with the below parameters.
  • We recommend lowering the WETH Base to 0% and Slope1 to 3.3% to accommodate wstETH recursive borrowing strategies.

Recommended Initial Parameters

Risk Parameter             WETH    USDC    MATIC   WSTETH
Isolation Mode             NO      NO      NO      NO
Enable Borrow              YES     YES     YES     YES
Borrowable in Isolation    NO      YES     NO      NO
Enable Collateral          YES     YES     YES     YES
Emode Category             ETH     N/A     N/A     ETH
Loan To Value              80%     77%     68%     71%
Liquidation Threshold      82.5%   80%     73%     76%
Liquidation Bonus          5%      5%      10%     6%
Reserve Factor             15%     10%     20%     15%
Liquidation Protocol Fee   10%     10%     10%     10%
Borrow Cap                 280     1.26M   375K    25
Supply Cap                 350     1.4M    500K    250
Debt Ceiling               N/A     N/A     N/A     N/A
uOptimal                   80%     90%     75%     45%
Base                       0%      0%      0%      0%
Slope1                     3.3%    4%      6.1%    7%
Slope2                     80%     60%     100%    300%
1 Like