
ChatGPT, Google Bard and the Battle for Data with Don Gossen

In this episode, Tim is joined by Don Gossen, the Founder & CEO of Nevermined, a web3 company that provides tools to read, write and own your digital assets.

With ChatGPT very much at the forefront of the zeitgeist, and with Google Bard on the horizon, Tim and Don discuss what the impacts of this proliferation of content could be, the battle for data, and why, as time passes, we might be merely seeing an interpretation of the internet, rather than the real thing.

To find out how Grindery is building a Swiss army knife for existing DAO frameworks, head to grindery.io.

 

 

Transcript

Tim - 00:00:01: This is Tim Delhaes and you are listening to the DAO Talks podcast. Over the last couple of years, we've seen DAOs go through a roller coaster ride from being nowhere to everywhere. But what exactly are DAOs, or Decentralized Autonomous Organizations? And how can we use them as a lens to view the wider world? What is happening to them now in the bear market? Is it the end of them? Or is this just a normal hype cycle? All of that and much more is what we want to find out together.

 

Tim - 00:00:31: Today I'm going to be talking to Don Gossen. Don is the co-founder of Ocean Protocol and now co-founder and head of Nevermined. He's going to be talking about the future of data, data tracing and, most importantly, about the future of AI, ChatGPT and Google's Bard. Let's go.

 

Tim - 00:00:54: Morning, Don. Where are you?

 

Don - 00:00:57: I am currently in Lisbon, Portugal.

 

Tim - 00:01:00: This must be episode like 30 or so for me. And I'm so used to doing this late at night or super early that I think I've only done two or three in the morning like this. And when I go at this time, it's most likely somebody at least in Europe. So that was my guess. I wasn't sure if you were, like, around the corner or a little bit further away. Great. Thanks for joining. We're going to talk, obviously, about you, your background, Nevermined. Going to talk a lot about DAOs. So, first question, why Nevermined?

 

Don - 00:01:30: Why the name? So it's sort of a play on words. It started as a data sharing platform leveraging Web3 technology for capabilities that are enabled via provenance and then, by extension, attribution. And the play on words is effectively data that's Nevermined. Right. So you can retain control and the integrity of the assets themselves. You're not giving them up and centralizing them with somebody else.

 

Tim - 00:01:59: Makes sense. And when you said it originally comes from data sharing, does this refer to previous projects, or where does the origin come from?

 

Don - 00:02:09: So my background, I'm a subject matter expert in data and analytics. I spent the better part of a decade and a half traveling the world building data estates for some of the biggest companies on the planet: HSBC, L'Oréal, AXA, Sharp. So I got a broad set of exposure to the development and deployment of large scale data estates and how they function, how they're used. Interestingly enough, everybody will say that their implementation is unique, and in actual fact, after getting all this project exposure, they're not; they're pretty much the same across the board. What we found, and everybody on the team either comes out of the big data space or the Web3 space, and everyone on the big data side, we've all sort of come to the same revelation at some point in time: the contemporary model for data management, which is a consolidation model, basically like taking a vacuum and sucking up as much information as you possibly can and putting it into a data lake, starts to fail as a model. And so, when we were building Nevermined, and we incubated it over three years, and prior to building Nevermined I actually co-founded another project in the same domain space, Ocean Protocol. So we've been working on this singular challenge for quite a while, which is, in principle, moving away from this consolidation model, where you're reliant on third parties to manage your information, and pushing that control back out to the edge, back out to you and I, and doing that in a pattern or an architecture that's federated as opposed to consolidated. And so when we were thinking about this USP in particular, the data federation side, my co-founder Dimi, who loves puns, came up with the name. He's like, well, we're going to be in a situation where we're never mining anybody's data,

Why don't we call it Nevermined? And so that's sort of the etymology of the name and where it all started.

 

Tim - 00:04:16: How can I use Nevermined? What does it do for me?

 

Don - 00:04:20: Sure. So things have evolved since we first kicked things off. What we are now pushing is this idea of Web3 asset interactivity. So if you look at what we've done as an industry over the last 15 years, we've done a really good job of laying the foundations, putting in all the pieces that are going to start to allow this decentralized technology to really work its magic. In terms of the utility that's been created, broadly speaking, it's quite one dimensional, right? It's really based around the liquidity and the liquidity management of different assets. So, Tim, you can take an asset and you can mint a token, maybe an NFT, against that asset, and then I can come along and I can buy it from you. And so, what we've done is we've provisioned this utility value, which is the liquidity of this asset, but that's really all we can do so far, right? So the question is, can we go further? And the answer is yes. Really what we are focused on is unlocking the latent value that's tied to the payloads that the tokens represent, right? And so, the way that we enable that, very quickly, is through a process of registration and discovery and then decentralized access control. And then, another part of our USP, taking into account this federated architecture, is the ability to actually compute against these assets where the assets reside, without having to consolidate or move the data assets around. The Filecoin ecosystem calls this compute over data. We call it data in-situ computation, but it's the idea that instead of moving the data or the payload itself to the computation, you actually move the computation to the payload.
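To make that pattern concrete, here is a minimal TypeScript sketch of what moving the computation to the payload can look like. The endpoint paths, job schema and polling flow are illustrative assumptions for this page, not Nevermined's actual API.

```typescript
// Hypothetical "compute over data" flow: the dataset never leaves the
// node that hosts it; only a job description goes out and only the
// derived results come back.
interface ComputeJob {
  assetId: string;                  // identifier of the registered dataset
  algorithm: string;                // e.g. a script or container reference
  params: Record<string, string>;   // runtime parameters for the algorithm
}

async function runComputeOverData(
  nodeUrl: string,                  // node co-located with the data
  job: ComputeJob,
  accessToken: string               // proof the caller passed access control
): Promise<unknown> {
  // Submit the computation to where the data lives.
  const submitted = await fetch(`${nodeUrl}/compute`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${accessToken}`,
    },
    body: JSON.stringify(job),
  });
  const { jobId } = (await submitted.json()) as { jobId: string };

  // Poll for results. The raw payload is never transferred or
  // consolidated; only the output of the computation is returned.
  for (let attempt = 0; attempt < 60; attempt++) {
    const result = await fetch(`${nodeUrl}/compute/${jobId}/result`);
    if (result.status === 200) return result.json();
    await new Promise((resolve) => setTimeout(resolve, 2000));
  }
  throw new Error(`compute job ${jobId} timed out`);
}
```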

 

Tim - 00:06:21: So what are the top three examples of what I, or somebody else, could use it for? Run me through what you're seeing as the best use cases. And compared to that, how are people actually using it, and what are the surprising use cases that spring up?

 

Don - 00:06:39: Sure. So I think the best example from a production point of view is a project called VitaDAO. If you haven't heard of VitaDAO, it's the DAO that was set up to fund and support early stage longevity research. I have another company, it's called Keyko. It's a Web3 services company. And Keyko is what actually incubated Nevermined as an organization. Back around this time in 2021, we came together with a collective of about twelve other organizations and laid the foundations for creating VitaDAO. So we helped specifically with the smart contract development, et cetera, but also with the token design from a governance point of view, and launched the DAO in June of 2021. And then shortly thereafter, about a month and a half later, the DAO leveraged Nevermined to perform its first transaction. And the theory or the driver behind VitaDAO and a lot of the DeSci space is this concept of an IP NFT: capturing the value of early stage research within the confines of an NFT, and then making that NFT available, in this case to DAOs, so that the researchers doing this early stage work receive funding from DAOs in exchange for certain IP rights to the research that they're doing. So, VitaDAO used Nevermined. And in this case, simply walking you through the workflow, Nevermined takes a hash of all the research information, the research data as well as the research papers, hashes that into an NFT, in this case puts the NFT onto Ethereum mainnet, and then the DAO comes along and through its governance mechanisms decides to vote and purchase this IP from the researcher. In this case, the first instance was some longevity research from a researcher out of the University of Copenhagen. The DAO bought it for, I think, $350,000. And so basically what you had was the transferring of rights to that IP from the researcher to the DAO. And so this is classic NFT proof of authenticity.
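As a rough illustration of the workflow Don walks through, here is a hedged TypeScript sketch using ethers.js: hash the research payload, then mint a token that commits to that hash on-chain. The contract interface, function name and environment variables are hypothetical stand-ins, not VitaDAO's or Nevermined's actual contracts.

```typescript
import { readFileSync } from "node:fs";
import { ethers } from "ethers"; // ethers v6

// Illustrative IP-NFT interface; the real contracts differ.
const IP_NFT_ABI = [
  "function mint(address to, bytes32 contentHash, string tokenUri) returns (uint256)",
];

async function mintIpNft(researchFiles: string[], tokenUri: string) {
  // Hash the full research payload (papers plus data) so the token
  // commits to the exact content without putting the content on-chain.
  const payload = Buffer.concat(researchFiles.map((f) => readFileSync(f)));
  const contentHash = ethers.keccak256(payload);

  const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);
  const signer = new ethers.Wallet(process.env.PRIVATE_KEY!, provider);
  const ipNft = new ethers.Contract(process.env.IP_NFT_ADDRESS!, IP_NFT_ABI, signer);

  // Mint on Ethereum mainnet; the DAO can later acquire the token
  // through a governance vote, transferring the IP rights it represents.
  const tx = await ipNft.mint(await signer.getAddress(), contentHash, tokenUri);
  await tx.wait();
  return contentHash;
}
```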

 

Tim - 00:09:05: Let me ask you two questions. One, how far is the IP transfer backed by a legal framework in the real world, or not? And two, what does the IP transfer actually mean? Is it actually a transfer of ownership? Or are we talking about a specific license of use, under some Creative Commons-style scheme? I would love to learn more about how this connects.

 

Don - 00:09:30: So from a meatspace, traditional approach, and the constraints that are put on the implementation, that's still largely a work in progress, right? It's also dependent on the structure that the DAO takes. We can get into that; that's less a part of our remit in particular. But you set up a legal framework, an entity as a co-op or what have you, that sort of operates in support of the digital construct of the organization as a DAO, and then obviously there are the legal benefits you can get from that in terms of creating contracts and that sort of thing. But the way that we look at this is more from a technical point of view, right? An engineering point of view, and how we can start to translate some of the legalese into smart contracts, as well as, I think, the visceral or tangible aspect of ownership. Right? So what does ownership really convey? Probably the ability to prove control and/or access to the asset. So, as I mentioned, an NFT will purport to demonstrate proof of authenticity. Right? If it's a JPEG, though, Tim, and you've got one, I can come along, I right-click, I copy it. Right? So it's really hard to then subsequently prove that ownership. Nevermined goes a step further. We have a number of different ways of doing this. We can do it in a decentralized fashion with decentralized storage, or, given that a lot of assets are currently off chain, we take it a step further. We're agnostic as to where the actual asset resides, so that could be on chain or off chain. It could actually be on chain in a smart contract or, more likely, because we're talking about payloads of a significant size, if it's decentralized, it's going to be in something like Filecoin or Arweave or IPFS. Alternatively, it could all reside off chain, probably in the cloud, like an S3 bucket on Amazon. So the novelty from our access control point of view is that the token that's minted against that payload works with Nevermined nodes to validate the authentication when you make the request to actually access the asset. And so, this is more in line with proof of authentication. So now you've got this concept of proof of authenticity combined with proof of authentication, and you're moving much further along that sort of legal precedent to establish the fact that you're likely the rightful owner or participant within the confines of IP and that asset.
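A minimal sketch of what such a token-gated access check could look like, again with ethers.js: the gate node authenticates the wallet via a signed challenge, then verifies on-chain that the wallet holds the token minted against the payload. The contract address, token standard and node behavior are assumptions for illustration, not Nevermined's actual node logic.

```typescript
import { ethers } from "ethers"; // ethers v6

// Minimal ERC-1155 view needed for the ownership check.
const ERC1155_ABI = [
  "function balanceOf(address account, uint256 id) view returns (uint256)",
];

async function authorizeAccess(
  challenge: string,  // random nonce the gate node issued to the requester
  signature: string,  // requester's signature over that nonce
  tokenId: bigint     // token minted against the gated payload
): Promise<boolean> {
  // Proof of authentication: recover the wallet that signed the challenge.
  const requester = ethers.verifyMessage(challenge, signature);

  // Proof of authenticity/ownership: does that wallet hold the token?
  const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);
  const nft = new ethers.Contract(process.env.NFT_ADDRESS!, ERC1155_ABI, provider);
  const balance: bigint = await nft.balanceOf(requester, tokenId);

  // Only if both checks pass would the node release the off-chain
  // payload, e.g. a pre-signed S3 URL or an IPFS retrieval.
  return balance > 0n;
}
```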

 

Tim - 00:12:30: Yeah, now I get it, because I was first thinking, okay, NFT gating has been around for a while, right? Like, I build a web page and gate it. And I was like, okay, so where's the difference? But it's actually the link that matches the NFT against the content that's there. Okay, very good. Interesting.

 

Don - 00:12:51: And quite frankly, it is token gating. This implementation back in July of 2021 was the first implementation of an IP NFT, but it was also one of the first implementations of token gating. Subsequent to that, you've seen a number of different organizations pop up that support certain aspects of token gating. And quite frankly, this is more a means to an end for us, because we're interested in the provenance that goes along with the value transfer, the token acquisition, that transactional side, and what it means to actually gain access to an asset. But really what we're interested in is the next step: once you've gained access to that asset, what can you do with it? Right? So in VitaDAO's case, subsequent to that, there's been a series of transactions, half a dozen, to the tune of about two and a half million dollars that's flowed from the DAO over to different researchers, et cetera. And now you start to get into the analytics side of things, which is where Nevermined's real interest lies.

 

Tim - 00:14:04: Okay, and I want to go a bit further into DAOs, but where do you see the biggest application of what you have today and what you're building tomorrow? We've talked quite a bit about the research space and licensing of IP. Where do you think your sweet spot in the market is? What industry, area or use case will be able to produce a degree of disruption to how it's operating today?

 

Don - 00:14:36: So, broadly speaking, disrupting the analytics space. So it's not specific to one vertical or industry; more specifically, ML, machine learning, and federated learning, and I think, more topically, AI, artificial intelligence. We've been building towards this point in time. I will say this: this time last year, we were close to people working on Stable Diffusion, et cetera, these different AI models. We didn't anticipate the overall or general acceptance of these applications, in particular Stable Diffusion in the art space and then, more recently, ChatGPT from OpenAI. On the AI side there are a lot of challenges, and what we're seeing is sort of our thesis starting to manifest, because the biggest pushback is less around the threat per se of the AIs in particular and more the threat to IP and ownership, right? We saw it first with Stable Diffusion and DALL-E 2 in the art space, with artists saying, they're using my imagery, my likeness, that I put out onto websites, et cetera, that are part of this training corpus for these models, and if I type in some prompts, the result set could come back in the likeness of my art. Right? And I don't see any residuals or royalties from that creation. We're now seeing it more explicitly with Getty suing Stability AI for copyright infringement. And so there's this question being asked: can we allow this to happen? How do we address these concerns? From our point of view, we feel like there's no stopping it; the cat's out of the bag now. Right? So, okay, we're in a state where we need to respond in a way that's going to optimize for the attribution aspect of this. So what we are building is effectively this platform that enables the cataloging of these assets, basically the tag and trace. And when a model is run, it can either use a filter set that's very explicit, a sub corpus of the training corpus, and from that, the derivative work can then provide the proper attribution to whoever provided that filtering corpus; or, alternatively, we're working to define how the model is actually pulling its results from the latent space within the model and tying that back to the tokenized inputs. This sounds quite convoluted, I know. I can give an example and a walkthrough of how this works in practice. So we built a communal storytelling app just to sort of demonstrate the technology and the idea behind it. It's a website: you come in, you connect your wallet, and from that wallet you go into an interface that allows you to create prompts. Those prompts go to an instance of Stable Diffusion. Stable Diffusion outputs ten images based on that prompt. Okay? The prompt itself and those ten resulting images get tokenized as NFTs by Nevermined, and the payloads, both the prompt text and the images, get put into IPFS. Then we go further. We pick up those ten images as input to another AI, an augmentation of Stable Diffusion. It's a lerp, a linear interpolator, which basically takes those images and merges them together to create an animation, so that a single prompt now results in an animation as output. That derivative work also gets tokenized and put into IPFS. So I do this, I start to tell my story, and then, because it's communal, Tim, you can come in after I'm finished, and you can start to add to the prompts and continue the story, and you get a resulting set of animations. And so this storyboard starts to materialize with a series of different animations corresponding to a series of prompts.
And the intent is, we tell this story, and the derivative work at the end, when you choose to, say, publish the entire storyboard, is also tokenized and put into IPFS. And if that derivative work is commercialized, then we have this full provenance trail of who contributed what part of the story and the corresponding images, as well as the services; we can take into account that this Stable Diffusion application also contributed. And therefore, if there is a commercial opportunity for this, you pass the residuals and royalties back to everybody that contributed to it.
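To illustrate the royalty fan-out Don describes, here is a small, self-contained TypeScript sketch: the provenance trail is modeled as a tree of contributions, and revenue is split pro rata over the attributed shares. The data structure and weighting scheme are invented for illustration, not Nevermined's actual accounting.

```typescript
// Toy provenance trail: every prompt, image, animation and service
// that fed the final storyboard is a node with an attributed share.
interface ProvenanceNode {
  contributor: string;        // wallet address of the author or service
  share: number;              // relative weight of this contribution
  inputs: ProvenanceNode[];   // the works this node was derived from
}

function distributeRoyalties(
  published: ProvenanceNode,  // the commercialized derivative work
  revenue: number
): Map<string, number> {
  // Flatten the trail and total the weights.
  const nodes: ProvenanceNode[] = [];
  const walk = (n: ProvenanceNode) => {
    nodes.push(n);
    n.inputs.forEach(walk);
  };
  walk(published);
  const totalShare = nodes.reduce((sum, n) => sum + n.share, 0);

  // Pay each contributor pro rata; repeat contributors accumulate.
  const payouts = new Map<string, number>();
  for (const n of nodes) {
    const cut = (revenue * n.share) / totalShare;
    payouts.set(n.contributor, (payouts.get(n.contributor) ?? 0) + cut);
  }
  return payouts;
}
```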

 

Tim - 00:19:41: You know what? It's very interesting. In a way, 20 years ago, I was working in Chile as a consultant for a semi-public governmental organization. And it was back then that they started getting into traceability of food, right? It has a lot to do with food safety and security: if you go to the supermarket and you open a prepackaged meal and something is wrong with it, you can basically trace back. Where did the tomatoes come from, and where did the salad come from?

 

Don - 00:20:22: Where did the spinach come from? Recall the spinach. Exactly.

 

Tim - 00:20:26: And at what location was it processed? Where were the tomatoes made into tomato sauce? Where did all the different ingredients come from that made this? And this was the image that came into my mind, and we're not even talking about AI there.

 

Don - 00:20:40: It's a perfect analogy. It's this concept of track and trace, doing it in a purely digital construct or environment, right? And then adding to that, obviously, the workflow and business processing logic that you can bake into a smart contract. So what we demonstrated with this application that we built is that you can demonstrably do this with an AI, right? The sky is the limit in terms of the application. Now imagine that Getty has created their corpus of material, tokenized that content, and governed it with a set of smart contracts, and then perhaps decentralized access control as well as control over the type of processing, et cetera, that can take place. Now they're in a situation of: do you want to sue so that others don't use your work, or do you want to add a new line of business and commercialization opportunity where those works can be leveraged in a derivative fashion, but you still get part of the value capture from that derivative work? And I'm optimistic. I believe that it's the latter more than the former, right? That they want to commercialize these assets. It's just about finding the means to do so.

 

Tim - 00:22:03: That goes back somewhat to a fairly old topic starting in the 90s with Napster, right? It's a transformation of how content is distributed, and how those distribution models disrupt existing business models. Right? Like, exactly, how do you move from printing CDs to subscription based music? And it was also super important, for a good part of ten years, to have the illegal streaming services disrupt the existing business models and put them under threat to make them adapt, to bring us to where we are today. Right. And it's somewhat of a similar situation, you could say, in content distribution. And I just had this discussion with one of my partners yesterday: if you take image generation and text generation, there's going to be a lot of bullshit being produced.

 

Don - 00:23:05: It's not really about that; there will be a proliferation in terms of quantity, exactly. The relevance of the work is going to be tied to the quality and the appreciation, in particular on the artistic side of things. Right? That side of things doesn't change at all. What the proliferation does do, hopefully, from my point of view, is democratize access to the act of creation, where somebody may not have the means, may not have the skills to produce an artistic work, but they're highly creative, right? They can become really successful prompt engineers and create immersive experiences, et cetera. It's about the quality. It's about what you and I, as consumers of that content, actually appreciate. And so it's another way for content creation.

 

Tim - 00:24:08: Taking this one step further, and I think this is likely, or possibly, very much aligned with how you think about it. Right. If you look at what's happening, what will be happening with just ChatGPT and content production for blog posts: the internet is already flooded with garbage. Okay? It's going to be flooded with more garbage.

 

Tim - 00:24:29: Right.

 

Tim - 00:24:29: On one side, we're likely going to see something that happened ten, 15 years ago: a lot of companies paying people in underdeveloped countries to write shitty blog posts just to achieve volume of content for ranking. Until Google adjusted the ranking, and then the same companies paid to get all that shit unpublished, right? So you can already see this happening all over again, just that this time it's not some dude on Mechanical Turk writing shitty blog posts, but somebody sitting there with ChatGPT and spinning out ten blog posts for the corporation on any topic, which have a certain perceived value at first, but where there is no original content in there, there's no originality, right, except in combining it. And we were talking yesterday, back to your model, and I find it kind of an interesting way of thinking about it, right? If you produce original art or original content that at some point has a major impact on the output of those AIs, it is basically this kind of tracing back, where you will be able to find that thousands of pieces of content, a specific image, were influenced by somebody that created a new visual form of depicting something that's getting picked up by a lot of people. In the end, you can attribute the value back. And the interesting question there is, how does content consumption change? It's potentially best depicted in the Google versus ChatGPT conversation, where you have the Google search results and the ads in them. I have a friend that only does that now. He doesn't read any articles anymore, he only reads the summaries. Right. And what you're essentially doing, I mean.

 

Don - 00:26:23: Sam Altman, CEO of OpenAI, said the best part about ChatGPT is, I don't need to read anything anymore. Right? Other than the summary from ChatGPT.

 

Tim - 00:26:35: But you see you're still reading, but you're not reading the original pieces anymore. You're reading an abstraction or a summary of whatever you want to call it. And this allows you to ultimately, if you think about it, it's like a window to the Internet where you're not seeing the original Internet anymore, you're seeing an interpretation of it. And that obviously changes the entire advertising industry behind it. Right?

 

Don - 00:27:02: I mean, it definitely can. We'll see advertising baked into these responses from these different algorithms.

 

Tim - 00:27:10: I'm already scared of the garbage. But I'm telling you, when the AI starts, like, putting advertising inside the content, then…

 

Don - 00:27:19: Here's what's interesting. There's two aspects of this, right? On the prompt engineering side, which I'm pretty bullish on, I think this will become its own domain of expertise, and this is the ability to actually work with the model, right? If you think about the model in a 3D space, what the model does when it's given a question is go to what's called the latent space, the part of the model that's probabilistically most likely to hold the response to the question. Right?

 

Tim - 00:27:51: I think a way of putting this is smart questions get smart answers, right?

 

Don - 00:27:56: It's really no different than current data engineering. If you know how to program a SQL query or query unstructured data, you can get the response that you're looking for. This is no different. So there's that aspect of things. You talk about the proliferation of content, right. What's interesting is that there's two sort of paradigms in the ML space, and in particular the AI subset of machine learning. One is an algorithmic fine-tuning model, where you increase the number of parameters that are attached to the model to increase its accuracy. But that's a super expensive proposition. The alternative is a data centric model, where you throw as much data as you possibly can at the model in order to train it. With the latter, with data centricity, the interesting part is that if that's the main means of training models, there's only so much data actually available, especially publicly available, and that becomes your rate limiting factor. And so at some point in the not very far off future, depending on what literature you read, GPT-4, the next version of GPT that OpenAI is going to put out, will be trained on an order of magnitude more data than GPT-3 was trained on. And GPT-3 was trained on the largest corpus of freely available information there was. Basically, the prediction is we might have one more order of magnitude of data that can be publicly scraped from the internet. Then we've hit a wall. Now you've got to go private. But to this point of proliferation of content: if you hit that wall, all of these models converge to the same thing. So you're actually limited in what you can get unless you can either tune the model or get access to private data. And so we're of the opinion that yes, there will be content proliferation, but that content proliferation will be directly related to the amount of private data that can be accessed by any particular AI. And so at some point, there will be an inflection where the arms race will shift from a strictly AI development race to one where it's about bulking up on the data.

 

Tim - 00:30:23: Well, that's very interesting. And in this context, and what I'm hearing from you and underlying it, let me phrase it as a question: what do you think is going to come out of Google launching Bard? What do you think we're going to see there? What are you anticipating?

 

Don - 00:30:40: Here's what's interesting about, like, Google, Facebook, Tesla: these guys sit on a wealth of proprietary information, and so they're going to be able to push the envelope maybe even more than a Microsoft that doesn't have the same level of access to data, because people use Microsoft software on their machines, but nobody really uses Bing. I mean, they've been operating for longer, but they don't have that corpus of private information. The other interesting aspect, though, is when you throw in the cloud side of these businesses, they definitely already have the back door open, right? So they can convince their existing clients much more easily than, say, we can, for instance, to actually open up these assets. So what do I see? I see a huge advantage for these organizations, a potential for them to again corner the market. But being optimistic, this is where Web3 plays a significant role, and I think what we're doing, it's the openness of this, right? You and I being able to participate in an open ecosystem where we can control who or what has access to our information and control our rights, but we can make it available for certain applications in a public domain that isn't necessarily under the control of any one organization. I find it interesting. I talk about OpenAI quite a bit, but on December 11, 2015, that's when they released the blog post announcing OpenAI, right? And in it, I'm paraphrasing, but it basically said, we're doing this in the open as a nonprofit because we don't believe that any one organization should singularly have the power that this AI represents. And now, some years later, they've totally flipped. They're going private, right? And so I look at this and it's like, okay, what happened to this ethos? What happened to this mission? Where's the driver? I guess we've got to pick up the slack here. I think from our point of view, coming from the Web3 side of things, the provenance is awesome in terms of this track and trace and then the application to attribution. But the real power of Web3 is locked in the governance, right? And how that manifests, whether it's through a DAO or what have you, it's this contextualization in a digital fashion of governance rules, business processing logic, effectively at a smart contract level. And that's the power that we have to go up against these organizations that have massive war chests, not just of money, but also data.

 

Tim - 00:33:44: Yeah. I think that's the thing we all love about working in Web3, when you say 'we', coming in from the Web3 side. That was my fundamental drive in digging deeper into DAOs, because I also always thought this decentralizing of data is great, it's necessary, it's the underlying layer. But if you do not change governance and ownership, it is very much useless. And it's also somewhat disappointing how much value is always accumulated through centralization, and how organizations always tend back to that. And it doesn't really matter if you're Google saying 'don't be evil', or if you're OpenAI saying this is for the public good, or if you run the Apple App Store, which is finally also coming under scrutiny, right, at government levels. But it's just between scary and disappointing to see this happening over and over. Let me ask you the last question, following from what I asked before. What do you expect from Bard compared to ChatGPT, when Google actually pushes it out? You've been following the topic coming from data, so what do you expect? Do we expect the same thing, something significantly better, something significantly worse, or, as you said earlier, is it all going to end up being kind of the same in the end?

 

Don - 00:35:02: Well, I think that's kind of it. Now that OpenAI is affiliated with Microsoft, they're going to embed it into Azure services. So their opportunity to work with these private data sets, to establish the uniqueness of these models, is there. Equally so with Google and GCP, though GCP arguably is much further behind than Azure, which is itself further behind Amazon, though we haven't seen a lot from Amazon on the AI side of things as of yet. So it'll be interesting to see where they come into play. I mean, quite frankly, if I'm being honest, I don't really care about the differences. Sure, one might be better than the other now, one might provide more accurate responses, but this is a problem to solve. It's an engineering issue, right? And time will solve that and equalize it. Now, the two that I think could really push the envelope here, especially from a personal point of view, that we haven't seen a lot from yet, are Facebook and Tesla, because of the amount of personal information Tesla cars are recording, right? You get in a car, you drive somewhere, they know you're driving there. And then obviously Facebook, the amount of personal information that gets shared with that platform. So Facebook, from an AI personalization point of view, sits on the biggest repository. And that would have the potential, when you're looking at B2C applications of this stuff, to blow everybody else out of the water. But we haven't seen much from them yet. And actually, what's interesting with Facebook is everybody berates them for the data side of things, which I would agree with, but on the algorithmic side, they've been quite open, right? They've released a lot of technology into the open source. So anyway, getting back to the differences: do I see an advantage? I mean, Google, because they've been operating for so long with an alignment to this. We saw, six months ago, this whole sentience conversation around their LaMDA model, et cetera. So they've got some sophisticated tech under the hood, for sure. There is the legal side of things here. I think some of what's being held back from Google's point of view is probably related to legal and compliance. And this is where Microsoft might have a leg up, because as we've seen from the late '80s and '90s, Microsoft is willing to bend the rules and then litigate the shit out of things, and they've got the expertise to do that. So maybe that is ultimately Microsoft's play, I don't know. But yeah, from our point of view in particular, again, I don't particularly care. We're looking to take advantage of what's being put out there. What is compelling, what I am optimistic about, is that these models tend not to stay unique for too long, in terms of the models themselves or the design of the models. Again, it's the data sets that they get trained on that make them unique. And we've got a very specific outlook and view on how we think this should manifest: in particular, in the open, conforming to open standards, et cetera, and making it broadly available.

 

Don - 00:38:42: To everyone to participate in, to make their data assets available as opposed to a restrictive model that I think these other major corporations are going to follow.

 

Tim - 00:38:50: Don, like always, I think, a very exciting yet somewhat scary outlook. Great insight. Thank you for being here with us.

 

Don - 00:39:00: Thanks for having me. Sorry we couldn't get into more on the DAO side. We'll have to do that next time. But I mean, look, DAOs could be a really good management structure for a lot of this that's going to take shape over the next couple of years.

 

Tim - 00:39:15: DAO Talks is brought to you by Grindery. If you enjoyed this podcast, consider subscribing to DAO Talks on Apple Podcasts, Spotify, Google or any other platform you fancy. To find out more about Grindery, visit grindery.io. Thanks for joining me. Tim out.




About the Show

Decentralized autonomous organizations, or DAOs, are all the rage. We’re seeing explosive growth in this sector as people experiment with building companies on top of tokens and smart contracts. If you want to get a better understanding of why this is happening, listen to the people that work, build and invest in them: the members.

Join me on my personal journey of discovery, a series of talks with the Web3 builders about DAOs, Life and everything else.
