Building the First-of-Its-Kind, Permissionless DeFi Clearinghouse, with Barna Kiss @ Malda (Video)
Crypto Hipster
474
00:33:53 | 31.03 MB


Barna Kiss began his career in early-stage venture capital before transitioning to Layer-1 blockchains and decentralized storage networks, where he focused on scaling infrastructure and advancing decentralized ecosystems.

 

Leveraging his expertise in DeFi, Barna co-founded Malda, a first-of-its-kind permissionless clearinghouse that quickly gained traction. Malda has since grown to a peak Total Value Locked of $110 million and a peak Fully Diluted Valuation of $70 million, cementing its place as a key player in the decentralized finance space.

 

[00:00:00] Hello everybody and welcome to the Crypto Hipster Podcast. This is your host, Jamil Hasan, the Crypto Hipster, where I interview founders, entrepreneurs, executives, thought leaders, amazing people all around the world of crypto and blockchain. And today I have another amazing guest. I have the, I shouldn't ask him his title, I think it's CEO and founder, he'll correct me if I'm wrong. His name is Barna Kiss. He is the founder and CEO of Malda. Barna, welcome to the show.

[00:00:30] Pleasure to be here. Thank you for having me. You're very welcome. You're very welcome. So let me kick things off and I'll ask you first, you know, what is your background and is it a logical background for what you're doing now? I'm not one to judge what is logical or not, right? My background, I actually have like a business background. I studied business management back in university, but I started migrating towards technology during my master's.

[00:00:58] So I was very interested in machine learning. I already wrote my thesis doing machine learning models. It was actually exploring finding companies for venture capital investment, which was very interesting. And I think that started my transition towards technology and web three. So after university, I started working in the blockchain industry.

[00:01:21] I first started off working in layer one blockchains, ecosystem development, and the decentralized storage space, before starting, together with my co-founders, a decentralized lending protocol, a more simple implementation that we launched on the Linea blockchain, a layer two.

[00:01:49] It's a zero knowledge rollup layer two in the Ethereum ecosystem. And then we started looking at more difficult problems to solve in the space. The reason we already deployed on the ZK rollup was our interest in zero knowledge technology. Back then we didn't do anything with it, but we at least wanted to be close, you know, to have more exposure.

[00:02:11] And that started off our journey, where we saw that in this multi-rollup ecosystem that Ethereum is pushing for with the scaling, liquidity fragmentation is a large problem. And it's an especially large problem for lending protocols. I would also say it's a large problem for decentralized exchanges, because the quality of the service is fully determined by the depth of the liquidity.

[00:02:37] If we want to onboard new users into the Ethereum ecosystem, the quality of the service needs to be the same, universal across all of these rollups, which enables and unlocks huge economic growth potential that is right now untapped. Because you can access the liquidity on the different rollups in a rather permissionless and fast manner.

[00:02:59] But currently you would need to rely on the native bridges to do it securely. It's not abstracted away. So for the end users, it's inconvenient. And that prevents investment in the ecosystem. And I think that correlates with why Ethereum was not showing as much economic growth as competing ecosystems like Solana. So what are you looking to solve at Malda?

[00:03:26] And also, you know, what is a decentralized clearinghouse and how does that work? Yeah. So what we are looking to solve is to create the first seamless multi-rollup DeFi protocol that solves liquidity fragmentation. The reason we want to solve that is to remove these barriers and enable capital flow across all of the different rollups.

[00:03:54] Malda, at its heart, is a DeFi lending protocol, but it also serves the role of a decentralized clearinghouse, because capital can very easily flow from one rollup to another. So underlying how it works, we are using zero knowledge proofs for trustless interop. So we know what is happening on another chain in a fully trustless, verified manner.

[00:04:21] And based on that, users can interact with the same depth of liquidity. This is underpinned by the protocol acting as a rebalancer of liquidity to serve the orders. That means that the full global liquidity is available for the users to borrow or to withdraw. So this is the point where it is serving the purpose of a decentralized clearinghouse between the different rollups. The capital can seamlessly travel and flow.

[00:04:49] And we are positioning ourselves at this intersection between these borders that are defined right now by rollups by providing the first protocol that serves the full ecosystem. And the clearinghouse happens because we are not charging any fees right now for the users if they elect to deposit on one chain and then withdraw on another.

[00:05:14] In the future, in the future, if the order size and as we grow, it makes it necessary to start charging fees for those that are using the protocol as just like a bridge. We built in the technology to do that. But initially, we do not want to even prevent this. And we just want to enable the easiest user experience for users who have capital on multiple chains.

[00:05:39] They deposit it into this global protocol, and they can at any time withdraw for any investment opportunities, because it's a lending protocol. They are never actually receiving less. Even if we would charge a small fee, they would always receive an equal amount. And we have the ability to do that because the protocol, in this format, absorbs the order flow that would otherwise be done by bridging.

[00:06:04] Right now users need to bridge from one chain to another, and in the current architecture, intent-based bridges don't really absorb anything; they just serve the orders. What we do instead: a lending protocol has deep liquidity. So if a user wants to withdraw a smaller amount, they can freely do so. There is no bridging behind it. Then there is another user in the other direction.

[00:06:32] And in the end, we just essentially aggregate it naturally in the design. And we do a rebalancing daily. Or, if it is necessary, we can increase or decrease the frequency of that rebalancing. And that rebalancing in the end reaches equilibrium. So we also built formulas to determine what the expected demand for liquidity is on a given chain, so for borrows and withdrawals.

[00:07:00] And if it deviates by, initially, 20%, we rebalance to be able to satisfy all of the orders tomorrow as well. Meanwhile, as I mentioned, they access the full liquidity, right? So we maintain the capability to rebalance on demand. This is not fully built in yet for launch in a dynamic way, but it will be built in a dynamic way.
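
The 20% deviation rule described above can be sketched as a small decision function. This is a hedged illustration only: the threshold figure comes from the conversation, but the function name, data shapes, and demand model are assumptions, not Malda's actual rebalancer.

```python
# Illustrative sketch of a daily rebalance check: move liquidity toward a
# chain when its available funds deviate from expected demand by >= 20%.
# All names and data shapes are assumptions for illustration.

REBALANCE_THRESHOLD = 0.20  # deviation fraction that triggers a rebalance

def plan_rebalance(liquidity: dict, expected_demand: dict) -> dict:
    """Return the liquidity delta needed per chain (positive = needs inflow)."""
    plan = {}
    for chain, demand in expected_demand.items():
        have = liquidity.get(chain, 0)
        deviation = (demand - have) / demand if demand else 0.0
        if abs(deviation) >= REBALANCE_THRESHOLD:
            plan[chain] = demand - have  # top up (or drain) toward expected demand
    return plan

# Base is 30% short of tomorrow's expected borrow/withdraw demand, so it gets
# a 3M inflow; Linea's small surplus is within tolerance and is left alone.
print(plan_rebalance(
    liquidity={"base": 7_000_000, "linea": 30_000_000},
    expected_demand={"base": 10_000_000, "linea": 28_000_000},
))
```

Run daily (or at whatever frequency the protocol chooses), the plan's entries would then be handed to the bridging layer as rebalance orders.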

[00:07:27] So if someone wants to borrow, let's say there is 100 million USDC in the protocol in total across the rollups, and we use the underlying rollups to securely store it. They would essentially submit an order where they say, I want to borrow all of this 100 million. The ideal way is to use intents for this. So the order is only satisfied if we can satisfy 100 million.

[00:07:54] So there is no issue or problem if someone else borrows while the order is being processed. So they would submit intents, and they could also indicate a lower bound limit. Let's say I want to borrow all of the 100 million because I really need this for an investment opportunity, but I'm good with 95 as well. That's okay for me. If it's 94, I don't want to execute the order anymore. And that provides the most seamless user experience.
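
The lower-bound intent described here (borrow 100, accept 95, reject 94) amounts to an all-or-above-minimum fill against global liquidity. Below is a hedged sketch under assumed names; the greedy local-first routing is an illustration, not Malda's matching engine.

```python
# Sketch of an intent-style borrow with a lower-bound limit: execute only if
# at least `min_amount` can be sourced from global liquidity, preferring the
# local rollup (no rebalancing needed), then the deepest pools. Illustrative
# names and routing policy; not Malda's implementation.

def fill_borrow_intent(amount, min_amount, local_chain, liquidity):
    """Return {chain: amount_drawn} if fillable at or above min_amount, else None."""
    target = min(amount, sum(liquidity.values()))
    if target < min_amount:
        return None  # below the user's lower bound: do not execute at all
    fills, remaining = {}, target
    order = [local_chain] + sorted((c for c in liquidity if c != local_chain),
                                   key=liquidity.get, reverse=True)
    for chain in order:
        draw = min(liquidity.get(chain, 0), remaining)
        if draw:
            fills[chain] = draw
            remaining -= draw
        if not remaining:
            break
    return fills

pools = {"base": 40, "linea": 30, "optimism": 27}   # 97 available globally
print(fill_borrow_intent(100, 95, "base", pools))   # 97 >= 95: the intent executes
print(fill_borrow_intent(100, 95, "base",
                         {"base": 40, "linea": 30, "optimism": 24}))  # 94 total -> None
```

The same function covers the more common case mentioned next, where a mid-sized order is stitched together from several rollups' pools.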

[00:08:19] Although, of course, it is unlikely that we have this exact situation where someone wants to borrow out all of the liquidity, because of the money market mechanisms. So obviously what's more likely is that there is only, like, 10 million of USDC liquidity left on a given rollup, but they require, like, 18. And we have, let's say, 10 plus 10 plus 10 on other rollups. We find the optimal route and rebalance like that. Okay. It's pretty complex. That's a lot to follow.

[00:08:49] I have a lot of questions then. So there are a lot of projects out there that struggle, right, with being able to transfer liquidity in Web3. I mean, for some platforms it's not so much of an issue, but for others it's a real issue, right? So how do you fill that gap? There's a gap right now. How do you fill it? Yeah. Yeah.

[00:09:15] So essentially the main solution here is that there are certain intent-based bridges that satisfy these orders. So right now, on Everclear's side, we have the ability to submit an order for 500,000, and we can submit it multiple times. That's the maximum capacity that they can satisfy in one intent-based rebalance order that we route through them.

[00:09:44] So we can submit it, like, every 10, 15 minutes. It's very, very certain that we can submit that, let's say, every 15 minutes. It's a very conservative estimation. So if we need to do a daily rebalancing of 5 million, then we will do it in multiple orders. In 10 orders, Everclear is able to satisfy that, and it is solved. Across is also another option. They have great capabilities.
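
The figures above (a 5 million daily rebalance, a 500,000 per-order bridge cap, one conservative submission every ~15 minutes) imply simple chunking arithmetic, sketched below. The function name is illustrative, not part of any real API.

```python
# Split a rebalance amount into bridge-sized orders, per the figures mentioned:
# a 5M rebalance at a 500k per-order cap yields 10 orders; at one submission
# every 15 minutes, that clears in roughly 2.5 hours. Illustrative only.

def chunk_rebalance(total: int, max_order: int) -> list[int]:
    """Split `total` into orders of at most `max_order` each."""
    full, rest = divmod(total, max_order)
    return [max_order] * full + ([rest] if rest else [])

orders = chunk_rebalance(5_000_000, 500_000)
print(len(orders))  # 10
```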

[00:10:12] And for certain assets such as USDC, you can rely on the asset's designated burn-and-mint bridge to do the rebalancing. Right. These have costs, of course. So the protocol works slightly differently than a traditional DeFi protocol, in the sense that the reserve it generates is also used to offset the costs.

[00:10:37] Because we don't want the suppliers and the borrowers paying the cost of this. If there were no additional source of income for the fees, no designated income to satisfy these costs, it would accumulate bad debt. And that wouldn't be a sufficient user experience.

[00:10:56] In the future, I'm really looking forward when mainnet bridging becomes suitable because we are not time sensitive in these rebalances. Unless there is a given order, then it's more time sensitive. But at that point, you would also indicate for the user that you want to do something that requires a large amount of capital movement. It's not going to be satisfied in 20 seconds. It's going to be satisfied in five minutes.

[00:11:25] If they are going back to my previous example, if they want 100 million there, it's probably important for them. And more than likely, they can wait that time. But with mainnet bridging, this might even decrease further and become more optimal.

[00:11:43] If we are able to transfer from roll-up pool A to roll-up pool B via one mainnet transfer action, it might cost a relatively large amount in gas fees. Maybe it will cost like 50 USD of gas fees, but we would use that for large orders. And, just by the nature of the protocol, we would need to transfer, let's say, 10 to 30 million in value.

[00:12:10] Then 50 USD in gas fees is completely feasible. And second of all, we would prefer to pay for that security. So that is another lever that will shift that in the future. Another point here is that large rebalances will be more of an exception. It is not going to happen very often that we need to move huge amounts of liquidity.

[00:12:35] So for that reason, this absorption of orders, because they are more or less bidirectional, and based on the historical bridging volumes going between roll-ups, it is not going to be that huge. Even with this new protocol design, we don't foresee any reason not to be able to satisfy those orders. And I do want to mention one last point.

[00:13:00] So both Across and Everclear envision a future where they are satisfying B2B orders. They do not want to satisfy end-user orders. Everclear was already built with that in mind. Across is a more flexible setup, so they are satisfying B2C orders as well, and they are focusing on that given current market conditions. But ideally on large blue chip tokens that are relevant in a DeFi lending protocol design.

[00:13:31] It's much more optimal, in my opinion, and it is required for the future, that protocols create this omni-chain, omni-roll-up design, where the protocol leverages multiple roll-ups for transactions. But they have this global liquidity design. And then, as more protocols like this appear,

[00:13:55] the solvers will have a much larger, high-volume or high-value transfer demand. And this shifts the whole space, because right now, if you are a solver on an intent-based bridge design, you do not want to lose essentially 90% of the volume to satisfy a potential 10% that is of higher value, because you are generating more revenue on the 90%. That's just basic math, of course.

[00:14:23] Normal business and normal economic behavior. But if there are more protocols doing rebalancing, and they provide a large influx of B2B orders, and more protocols decide to abstract away the per-order bridging, there will be more solvers. Simply, if there is demand, they just appear. They are going to come, and they are going to store a large amount of value, and they are going to focus on the large orders.

[00:14:54] Simply, that will then be more revenue. And it can also happen that 50% or 70% of the orders will be large value transfers only, and we will be able to satisfy that. I'm wondering. You said a couple of things. You said there's mainnet bridging, and you said omni-bridging.

[00:15:16] And then I used to think that all DeFi and liquidity pools were in their own silos. But what you're mentioning is that there are unified liquidity protocols. What's been the role of unified liquidity protocols in the evolution of DeFi, and where do you see them heading going forward? Yeah.

[00:15:42] So, just to also state this for the audience, there are very few protocols that serve liquidity that are currently doing unified liquidity. I think the first proposal came last year, research about a unified liquidity design that fully used shared sequencers, just to go back a bit in history, around this time in February.

[00:16:11] So, it's a pretty new concept. But essentially, unified liquidity protocols will be the dominant force in the future, and that requires solving interop within the Ethereum ecosystem, and potentially outside of it. I think the major unlock here is zero-knowledge proofs. But before we jump into that, I just want to explain the other concept,

[00:16:38] which relies on shared sequencers or super builders, and leverages the fact that atomic transactions can be made between two rollups if the shared sequencer, or the sequencers using a shared builder, both drop the transactions and make them atomic if one is failing on one or the other rollup. So, instead of that, we decided to go with zero-knowledge proofs

[00:17:05] because we do not want to rely on a shared sequencer implementation. We already see that there are some advancements towards that, but it's also fragmented, right? Like, you have the Superchain, they want to do their own shared sequencing, potentially other ecosystems as well. But we just simply don't see the business perspective in that, because you are, again, just limiting yourself on some level.

[00:17:31] And what we realized when we started building is that we can use zero-knowledge proofs, and we can create a design where we have a host chain and an extension chain. The host chain is on Linea, extension chains are other major rollups, or even Ethereum mainnet in the current design. And we push all the logic, lending protocol logic onto Linea.

[00:17:57] And on an extension chain, essentially the zero-knowledge proof serves as a trustless messaging layer. So a user initiates a deposit transaction, and that deposit transaction is initiated on Base. Via that transaction, we generate a zero-knowledge proof of the state change and deliver that to Linea. And then we have implemented a new function, which is minting from extension.

[00:18:24] And that transaction appends the ledger without the funds being there. So it's this engineering design where our host chain calculates the full liquidity behind the protocol with the funds that are deposited on an extension chain. So then you essentially have a design where the liquidity is global and it is calculated. The interest rate is calculated, the user balances are calculated, everything is calculated on this global state.
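
The host/extension flow just described (deposit on Base, ZK proof of the state change delivered to Linea, then a mint-from-extension that appends the global ledger without moving funds) can be modeled very roughly as below. The class, method names, and stubbed verifier are assumptions for illustration, not Malda's contracts.

```python
# Toy model of the host-chain ledger: the host (Linea) accounts for deposits
# made on extension chains (e.g. Base) once a ZK proof of the state change
# verifies. Funds stay on the extension chain; only the accounting is global.
# Proof verification is stubbed; everything here is illustrative.

class HostChainLedger:
    def __init__(self, verify_proof):
        self.verify_proof = verify_proof   # ZK verifier stub: proof -> bool
        self.balances = {}                 # global user balances across all chains
        self.liquidity_by_chain = {}       # where the funds physically sit

    def mint_from_extension(self, user, amount, chain, proof):
        # Append the ledger without the funds being on the host chain:
        # the verified proof alone attests the deposit happened on `chain`.
        if not self.verify_proof(proof):
            raise ValueError("invalid state-change proof")
        self.balances[user] = self.balances.get(user, 0) + amount
        self.liquidity_by_chain[chain] = self.liquidity_by_chain.get(chain, 0) + amount

    def global_liquidity(self):
        # Interest rates, user balances, etc. would all be derived from this
        # single global state, as described above.
        return sum(self.liquidity_by_chain.values())

ledger = HostChainLedger(verify_proof=lambda proof: proof == "valid-proof")
ledger.mint_from_extension("alice", 1_000, chain="base", proof="valid-proof")
print(ledger.global_liquidity())  # 1000
```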

[00:18:54] And at that point, the last part that you need to solve is to rebalance, which we discussed previously, on how we are able to do that at scale. So the point of these protocols is that they are positioning themselves at this intersection of rollups. There will always be some opportunities here, some opportunities there,

[00:19:19] as you have some designs and some protocols or some payment systems that are going to limit themselves to one rollup for more efficiency and harder finality. So if a protocol is positioned like this, it is going to capture a large amount of these, let's say, cross-border-type transactions.

[00:19:44] As a user, you don't need to think about, like, am I depositing in this siloed deployment of a traditional or legacy lending protocol on rollup A or rollup B? Because if I need to move funds, it's not going to be quick. It's not going to be easy. But on the other hand, if they deposit in a design like this, the capital flow is very seamless.

[00:20:05] And I think the reason that these protocols will become very valuable and will be at the forefront on DeFi is that the Ethereum ecosystem really demands them. And we really need them because imagine you are trying to onboard someone new to Web3 or relatively new to Web3. And I have like some friends who are like this.

[00:20:31] They have already started investing in tokens, on centralized exchanges, and now they are playing around a bit with DeFi. So the first problem is that they come back to me when they decide they want to buy a certain meme coin to try it out. My funds ended up here, but the coin is over there. Like, what do I do? And then I need to explain, okay, then you need to bridge. You need to solve this to onboard the new users.

[00:21:00] Like, how are you going to make DeFi and Web3 a reality? So that's a major reason. Very advanced and very experienced users can navigate DeFi and Web3 like this. Non-experienced users have no chance. So, a protocol where they actually deposit into the Ethereum ecosystem, and they earn interest from the borrows that are happening on roll-up A, on roll-up B, borrows that are happening on mainnet.

[00:21:27] So it's just inherently a better UX for them. And they will only realize that there are roll-ups when they are borrowing or withdrawing for a certain opportunity. But if they are directly borrowing, for example, for something that they want to invest or spend on in real life, they don't even need to, again, touch the roll-ups.

[00:21:50] You can solve that and offset that with a direct integration with an off-ramp, right, where they just receive the funds in their traditional bank account and can continue from there. Or, if they just want to spend, you can integrate with DeFi debit card providers, where they can spend via the debit card straight from the protocol balances. And I do want to loop back for one second to traditional users. So it also solves large pain points for traditional users.

[00:22:17] I don't know if you have found yourself ever in a situation like this where you found an investment opportunity that you wanted to put money into, but it was on a specific chain and you didn't have funds there. And then you had to figure out which bridge to use. Okay, how do I bridge my assets? Is this bridge available there? Is it secure, right? Oh no, the investment opportunity is gone, right?

[00:22:39] So with this design, we have the lowest commitment for deposits because you know that you can use the funds on multiple destinations. Interesting you said that. You know, talking about meme coins, right, with your friends. Yeah, the window of opportunity really in the meme coins is like the first five to ten minutes.

[00:23:07] That's when the people who went in made the money; everybody else ends up losing, right? So how can you help your friends, like, you know, if they don't have any SOL on Solana and want to buy, you know, a meme coin and only have money in Ethereum? How do you help? How can you help them be able to do that so that they can be in the first five minutes? Yeah, I mean, I obviously don't encourage them to go into that.

[00:23:34] They are at the first steps of their DeFi experience, so to say. So going into meme coins is maybe not the wisest decision until they can understand the mechanisms behind it. And as you said, the window of opportunity is very short. So, yeah, but obviously we won't have Solana integrated at the start.

[00:24:00] So let's flip it around to a more Ethereum ecosystem example. Let's say that they want to have funds on Base, and they decide to store the funds in Malda directly. So when an investment opportunity, a meme coin, arises on Base, they can simply initiate a withdrawal transaction, and that withdrawal transaction right now goes through in 20 seconds.

[00:24:29] It's not instant yet. It's going to accelerate. So in this relevant example, we are bottlenecked by the proving time behind the zero knowledge proofs. It is accelerating a lot. So I expect that we will be in the seconds, potentially, by the end of the year. So sub 10 seconds, which is already very, very quick.

[00:24:54] And then I'm like hoping with real time proving of the blocks, if that becomes a reality, that accelerates it even further. But I don't want to also over promise on that. Right now we are at 20 seconds. It was a lot of engineering to get there. But that falls in very nicely with that 5-10 minute window that you mentioned. So an investment opportunity on base, you will never miss.

[00:25:23] 20 seconds is good compared to wires, which take three, five days. So, you know. At that point, we are going into how much untapped potential DeFi has for Web2 users, which is another favorite topic of mine. Yeah. So let's talk about that. You know, that requires trust. You know, and people have to trust their money is going to be there.

[00:25:47] But they also have to trust, you know, the rules of, you know, security, transparency, other things you mentioned. Why is building that trust important for Web2 to get into Web3? Yeah, yeah, yeah. Very, very good point. And we take that very seriously. So I just want to highlight that we have finished, and are soon going to publish before we launch, our audit report. And we are going to open source our repo for anyone to verify.

[00:26:17] And we decided to go with Veridise. For people who don't know who Veridise is, the reason we went with them is that we are leveraging the RISC Zero technology stack to build our zero knowledge proofs, and they have audited the components that we are building on. Right? So they were a very obvious choice.

[00:26:39] They were probably the only candidate who had this experience of actually working with the stack that we are using to build the zero knowledge proofs. And open sourcing is also very important, because a zero knowledge proof whose code is not open sourced could be anything.

[00:26:58] That's another factor that everyone in DeFi will need to demand and know: a proof can, on the surface level, function very nicely, but it can have some hidden components that you don't know about if the repo is not open sourced, you don't have a reputable auditor behind it, and you don't have the community auditing it as well, directly checking that everything matches up.

[00:27:24] So security is also critical in the interop ecosystem of Ethereum. It's also very challenging. You do need to take into account the complex design of each roll-up to determine what is the necessary trust level that you require for executing transactions.

[00:27:49] To translate it into a very simple language, this is the same thing that centralized exchanges use when they require multiple block confirmations or potentially higher amount of confirmations if the value is very high because they want to make sure that a transaction will not revert.

[00:28:08] We also built a dynamic system like this, where we have a fast lane for users. That fast lane operates on a sequencer that we run and have implemented. Obviously, in the future, we wish to decentralize this component even further as well, but from the start, users will have the option to self-sequence.

[00:28:32] As the repo is open sourced and there are also prover markets coming up, they can either generate a proof themselves, which is feasible with client-side proving and, I think, quite unique in the space, or they can request a proof in a prover market. And we will have a detailed guide on how to self-sequence and submit the transaction themselves.

[00:28:53] So, for example, the exact parameters on what level of security is required are going to be published at launch, but it will require L1 inclusion and multiple confirmations behind that on layer 1.

[00:29:11] So, that means that a user can trustlessly withdraw from the protocol in around 20 minutes, potentially, which, compared to some rollups that follow an optimistic design, is much, much faster. And another point of security is that while we are running the centralized sequencer, we would need to collude with the rollups to produce any faulty proofs.

[00:29:41] So, as you can imagine, it is very unlikely that the Arbitrum team or the Optimism team or the Linea team or Coinbase would collude with us. And you would need such collusion, because we have built-in safeguards: even if the sequencer key were compromised on a rollup, which has never happened and is very, very unlikely to happen, we would still not act on a faulty transaction.

[00:30:10] So, the whole thing is very robust, but it was very complex to design. That's why it's very important to have a highly reputable auditor behind it. For a Web2 audience, I think the most important thing is to also simplify the terms, right? Like, for them, understanding the security behind it, if we go into, like, ZK this, ZK that, it's quite a complex topic.

[00:30:37] So, I think if the space wants to reach a larger Web2 audience, which is a goal for us, we need to present these in simpler terms. And I think the future is that users have abstracted-away self-custody, for example. Like, protocols can deploy a smart contract account for them and leverage passkeys, so that only they can access those smart contract accounts.

[00:31:01] But it will be their choice to directly interact with the wallet or only interact with the passkey from their computer. It's still a self-custody design with potential recovery mechanisms, but they can also just fully ignore that. Not everyone is fully into wallet designs, smart contract accounts, ERC-4337, right? It's not everyone's favorite piece of conversation. It is mine, right?

[00:31:30] So, we are built a little bit differently, and I think that will underpin it, the convenience and usability. So, as I mentioned, integrating a debit card is an amazing way for DeFi protocols to attract liquidity from Web2 users. They are probably not necessarily going to be borrowing, but it's important for them that they are not locking in their funds, and they don't need to use an off-ramp, an exchange, all these complex components.

[00:31:56] So, they don't need to refill their cards from, I don't know, a hardware wallet, to name a couple of examples. They can just leverage the design behind that. And then reputation is more important for them than auditing the contracts, because that's, again, not something that they do. So, the brands that are expanding towards this space really have to build up the brand and the reputation directly for Web2 audiences.

[00:32:25] And, yeah, obviously the underlying security is important, but I think that's a more niche group of people who will dive deep into it. Yeah, I agree. But it's exciting to see what the rate of adoption is going to be. So, it sounds technical now, but I'm interested to see, like, it will be seamless someday. Yeah. Yeah. I agree with that. So, I want to thank you very much for your time today. I enjoyed speaking with you.

[00:32:54] I have one last question. It's how can people find out more information about you, about Malda? How can they become users? How can they do that? Yeah. So, we have a profile on X. It's Malda underscore XYZ, where they can find more information about the project as we are approaching Mainnet and as we are going to release the protocol first on Testnet,

[00:33:22] where they can interact with it, provide feedback, and then on Mainnet. They can also find my profile. It's at RealBarnaKiss, where I provide further information about the protocol. And they can also find all of this information directly on Malda.xyz, which is the website for the protocol. Awesome. Thank you very much for your time today. Thank you very much for having me. It was a pleasure to talk with you. Thank you.
