Welcome to this episode of Building Useful Sh*t! In this series, we dive deep into everything HumanAIx—a groundbreaking initiative pushing the boundaries of decentralized AI. HumanAIx brings together a diverse group of founding members, each playing a crucial role in shaping the future of DeAI. In every episode, we spotlight one of these key contributors, exploring their innovations, ongoing contributions, and impact on the decentralized AI ecosystem.
In this exclusive interview conducted by Beinginvested, Alastair Ong, Director at Holo—a P2P cloud network platform—explores how Holo is building the interface layer of HumanAIx, empowering users with full control over their data while enabling seamless interaction with decentralized AI applications. He also breaks down the stark differences between centralized and decentralized AI, highlighting big tech’s lack of transparency and the promise of a fairer, more open AI ecosystem.
From championing data ownership to pioneering user-driven reward systems, Holo is pushing the boundaries of what’s possible. But what obstacles stand in the way? How will decentralized AI evolve, and could it one day outpace traditional AI? If you’re curious about the future of AI beyond corporate control, this is an interview you won’t want to miss.
Below is the full transcription of the interview video.
Beinginvested: In this particular episode, we do have with us Holo, a P2P cloud network platform, and representing Holo is Alastair Ong. Alastair, thank you so much for joining us today. Oh yeah, so happy to have you on today's episode.
Alastair Ong: Likewise, thanks for inviting me. Awesome.
Beinginvested: Look, Alastair, we saw the OORT team unveil the HumanAIx program, and we've seen the entire run-up of all the different founding members. We are so excited to have you on as one of the founding members of HumanAIx. We just wanted to understand, for the audience, how exactly will Holo be contributing to the overall development of the decentralized AI space through HumanAIx?
Alastair Ong: Thanks for that question. So we at Holo provide open cloud infrastructure for social collaboration, and especially to enable people to retain control over their digital lives. Democratizing technology, especially something as complicated and powerful as AI, requires not just technology and economics but also an excellent and intuitive user experience. So we are contributing what is called the interface layer of HumanAIx, where we'll be providing the end-user applications that make it easy for users to interact with the broader ecosystem and the broader set of networks within the HumanAIx program. And within that, not everything is going to need to live on chain. Our focus is to make sure that what is off chain, especially user data, is held in a privacy-preserving and open way, where open means that users retain full control over it and can move to a different interface layer, a different platform, or whatever.
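To make that data-portability point concrete, here is a minimal sketch in Python, purely illustrative and not Holo's actual API (the class and method names are invented for this example), of user data held client-side, off chain, and exportable in one piece so the owner can move it to a different interface layer.

```python
import json
from dataclasses import dataclass, field


@dataclass
class UserDataStore:
    """Hypothetical client-side store: the data stays with the user, off chain."""
    owner: str
    records: dict = field(default_factory=dict)

    def put(self, key: str, value: str) -> None:
        # Only the owner's local agent writes here; nothing is pushed to a central server.
        self.records[key] = value

    def export(self) -> str:
        # A full export is what lets the user switch to a different interface layer at will.
        return json.dumps({"owner": self.owner, "records": self.records})

    @classmethod
    def import_blob(cls, blob: str) -> "UserDataStore":
        # Another front end can rehydrate the same data without asking anyone's permission.
        data = json.loads(blob)
        return cls(owner=data["owner"], records=data["records"])


store = UserDataStore(owner="alice")
store.put("profile.display_name", "Alice")
portable = store.export()
migrated = UserDataStore.import_blob(portable)
print(migrated.records)  # {'profile.display_name': 'Alice'}
```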
Beinginvested: That's fantastic. Thank you for that insight into how you guys are contributing overall. I guess my next question is more in line with the differences between AI in general and decentralized AI. One of them that has been notable for us has been the fact that you get rewarded for providing data and things like that. I guess my question is, is there already some sort of a reward system in play, or some sort of a system in place that we are drawing inspiration from, or is that something that you guys are creating from scratch? How does all of that work?
Alastair Ong: The foundations, to me, are relatively straightforward or baked in. It's, I think, very readily apparent that there's a huge amount of value in and demand for high-quality data, and on the other side, there's a concern and even a frustration about who does the work versus who gets the benefit when it comes to that data. When you have these factors at play, that's a recipe for a market-driven system, right? And that's a market-driven system where those who consume and make use of the data in different ways need to reward or pay those who are providing or enhancing that data. So within HumanAIx, I think we'll be putting in place a system that facilitates those payments and rewards. How exactly the system works, the specifics and the tokenomics, I think, are still things that we need to figure out. As you know, it's early days for this program.
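To make the market-driven idea concrete, here is a toy sketch in Python of one way a consumer's fee could be split among data providers in proportion to what they contributed. The pro-rata weighting and the function name are assumptions for illustration only; as Alastair notes, the actual HumanAIx reward mechanics and tokenomics are still being worked out.

```python
def split_data_rewards(fee_paid: float, contributions: dict[str, float]) -> dict[str, float]:
    """Divide a consumer's fee among data providers, pro rata to their contribution weight."""
    total = sum(contributions.values())
    if total == 0:
        # No recognized contributions, so nothing to distribute.
        return {provider: 0.0 for provider in contributions}
    return {provider: fee_paid * weight / total for provider, weight in contributions.items()}


# A consumer pays 100 units to use a dataset assembled and enhanced by three providers.
print(split_data_rewards(100.0, {"alice": 5.0, "bob": 3.0, "carol": 2.0}))
# {'alice': 50.0, 'bob': 30.0, 'carol': 20.0}
```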
Beinginvested: Okay, that's good to know. Just the other day, I was chatting with Marko Stokić from Oasis Protocol, and we were discussing why HumanAIx requires this collaborative effort from so many different projects, and it's understandable, since there's so much that goes into creating decentralized AI in general. My question here is, could you tell us some of the differences, especially for those that don't really see decentralized AI as separate from AI in general? What are some of the differences between what the big, centralized corporations are doing and the decentralized AI space? How does that work?
Alastair Ong: So I think that this is an area where it's a little hard to answer, and the reason it's hard to answer is itself one of the differences, which is a lack of transparency about what the large, centralized corporations are doing. They're clearly accessing huge amounts of data. No one really knows exactly from where. No one knows how people are really getting rewarded for it. No one really knows how that data is pre-processed, right? There's a huge amount of pre-processing required to get the data into a format and quality suitable for training. And all of these steps can, you know, inject biases. All of these can introduce economic inequity, in the sense that they may be using data whose creators or owners are not adequately rewarded. In the past, there have also been, you know, fairly high-profile missteps around the results that models have given: very strange levels of censorship, bias, or reverse bias, or whatever you want to call it. And all of that, I think, stems from this lack of transparency. So I think one of the key differences of decentralized AI is around that transparency. It starts with your previous question about rewards, but it extends to, hopefully, understanding more about the provenance of the data, how it's been treated, how it's been manipulated, and manipulation can be benign in the sense of pre-processing, or it can raise more ethical, moral questions. So I would say that's the primary difference, goal, and distinction of decentralized AI.
Beinginvested: Is that one of the reasons why you think a lot of these big corporations are closed source, so to speak? You know, we've seen this debate about open source and closed source. And is that one of the reasons, really, because they don't really want that information out there?
Alastair Ong: I mean, I can only speculate. Yeah, I can speculate. I suspect it's a factor. There are also clearly IP considerations, competitive pressures, and so on, right?
Beinginvested: How do you see the decentralized AI space in general, and how do you see it developing through HumanAIx? I know this is slightly generalized, but I would really love to hear your thoughts on the eventual trajectory of decentralized AI through HumanAIx.
Alastair Ong: Yeah, so to some extent, I think the honest answer is that nobody really knows, right? AI itself is brand new. Decentralized AI is even newer. What I think is exciting about an alliance or a program like HumanAIx is the openness. There will be multiple partners and multiple members in each role, and it's up to the user to choose who they want to interact with. That level of opt-in and transparency about how you participate is going to be a powerful driver in the space.
Beinginvested: Right, it's easy to ask the question, where are we heading? But sometimes uncertainty is what we start with. We've seen AI in general, just over these last three years, grow at an absolute breakneck pace. Is there a possibility, maybe not now but sometime in the near future, that we see decentralized AI surpass this AI growth? Is that a possibility at some point?
Alastair Ong: So I think there are definitely spaces and aspects in which decentralized AI can build an edge. I think how we define "surpass" is part of that question. But I think we're already seeing a lot of innovation around how to do more with big foundational models: fine-tuning on human feedback, RAG (retrieval-augmented generation), prompt engineering, and so on. And these are areas where decentralization can certainly build an edge, because it unlocks the diversity of thought, the different ideas and different demands, that can really create more experimentation and move the entire industry, and especially the decentralized parts of the industry, forward.
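For readers unfamiliar with the retrieval-augmented generation pattern he mentions, here is a minimal, self-contained sketch in Python: retrieve the documents most relevant to a query and prepend them to the prompt before calling whatever foundation model you have access to. The bag-of-words scoring is deliberately naive to keep the example dependency-free, and none of this describes a specific HumanAIx component.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query and keep the top k."""
    query_terms = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]


def build_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved context so the model can answer from the provided data."""
    context = retrieve(query, documents)
    bullets = "\n".join(f"- {passage}" for passage in context)
    return f"Context:\n{bullets}\n\nQuestion: {query}\nAnswer:"


docs = [
    "HumanAIx is an alliance of projects building decentralized AI.",
    "Holo provides peer-to-peer cloud infrastructure for end-user applications.",
    "Bitcoin halvings occur roughly every four years.",
]
print(build_prompt("What role does Holo play in HumanAIx", docs))
# The resulting prompt would then be sent to the foundation model of your choice.
```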
Beinginvested: I've always had this question: the reason why these big corporations have been closed source has primarily been because of the unethical use of AI. And you've probably seen a couple of them, you know, even in the crypto space, popping up in terms of agents where you could throw in a photograph of a woman and it can make a dude for you, and stuff like that. And I guess that's one of the reasons why closed source makes a lot of sense to big corporations, to prevent unethical use. My question is, is that something that we are, in the present day, talking about in regard to decentralized AI? Is ethical use and safety even a topic that we are currently discussing?
Alastair Ong: Yeah, I think the honest answer is no. I think we actually see the opposite: there is a frustration over very heavy-handed attempts at making AI safe by centralized corporations. And it's not just safety and ethics. Recently, there have been, I think, quite a few posts about Grok's system prompt basically saying not to say anything bad about Elon Musk. That's a post I've seen many times on Reddit in the last week. So I think this is probably one of the spaces where there's going to be divergence before convergence, where, right now, we're probably going to see the opposite. At first, we're going to see people demanding uncensored AI, perhaps unethical AI, and then hopefully what we'll see, as the ecosystem matures, is more incremental but cautious safeguards put in place. Like crypto in general: it used to have the Wild West days, and in the Wild West days, generally, I would say that the voice of the community was a lot more extreme, like completely anti-regulation, completely anti-government. Whereas now there's perhaps a level of balance that comes with maturity and the understanding that there are some places where these things make sense. But I think this is one of the places where we're going to see a go-slow approach, where it'll start with trying to step away from these heavy-handed approaches and then come back with some other ways of doing it.
Beinginvested: In the crypto space, with regard to altcoins being traded in specific narratives, we've had AI, we've had RWAs, GameFi, and all of these have moved in accordance with their narrative. We are now seeing nothing moving, primarily because the overall markets are down. Bitcoin is now trading at under $80,000, which sucks now, but would probably have been a good thing two months ago. When it comes to decentralized AI, are we always going to look at a specific project, or a specific token, as valuable only because of the price action of its token, or will it, at some point in time, also be about the stack?
Alastair Ong: I think that those things aren't opposites. This project is to build useful shit, and that's the goal. Like, the fundamental goal is to build something that people use, and tech is an enabler of that, and when people use it, then that has an implication for token prices. As long as you've got reasonable tokenomics, then there is going to be some correlation between token prices and use. And we see that with some of the more mature parts of the ecosystem, some of the more used tokens, right? Like, obviously, Ethereum, or Solana with meme coins, which absolutely are used, and that requires people using things like Solana's coin. We saw that with things like Uniswap and so on. So I think that there is a correlation, but, crypto being crypto, there are a lot of other factors in the marketplace.
Beinginvested: That's awesome. Look, I guess my last question to you would be more in line with some of the challenges that, you know, developing decentralized AI is already facing, with the founding members and yourselves at Holo also sort of tackling this from a broader perspective. What are they?
Alastair Ong: We don't really know, it's such a new space, right? I'd say that right now, with the exception of reward systems for data, the space is currently trying to mirror the centralized AI industry but transpose it onto blockchains. So far it's just about getting training data, training models, and using the models. And there's going to be divergence somewhere; we don't know where. There will be challenges too, and we're not entirely sure where, either. What I think the real value of a program like this with so many founding members is, to me, is specialization, right? Like, we can each bring something different to the table, different areas of expertise, different perspectives, and then, in parallel, solve the different problems.
Beinginvested: Is there going to be some sort of a token that the broad markets can trade? If yes, how would that work? Or is it just an ecosystem that people can build on?
Alastair Ong: I haven't had huge amounts of conversations about tokenomics, but from the brief conversations I've had, there are two things. There are the general network tokens, and I believe what will happen is some form of swap system so that users can switch between them seamlessly, so hopefully it's a rising-tide-lifts-all-boats situation for all of the members in HumanAIx. But I also know that one of the other founding members is there specifically to provide some form of micro-transactions related to the work done in the system, particularly around rewarding and paying for data, data enhancements, and so on. I don't know how those tokens will work yet.
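Since the swap mechanism is still undefined, the snippet below is only a generic illustration of how such a network-token swap is often priced today: a Uniswap-style constant-product quote. It is an assumption-laden sketch of a common pattern, not a description of what HumanAIx or its members will actually ship.

```python
def constant_product_quote(amount_in: float, reserve_in: float, reserve_out: float,
                           fee: float = 0.003) -> float:
    """Quote a token swap against a pool using the x * y = k invariant, minus a small fee."""
    amount_in_after_fee = amount_in * (1 - fee)
    new_reserve_in = reserve_in + amount_in_after_fee
    # Keep the product of reserves constant; whatever leaves the output reserve is paid out.
    new_reserve_out = reserve_in * reserve_out / new_reserve_in
    return reserve_out - new_reserve_out


# Swapping 100 of token A against a pool holding 10,000 A and 5,000 B.
print(round(constant_product_quote(100, 10_000, 5_000), 2))  # ~49.36 B
```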
Beinginvested: Yeah, that's helpful to understand that a little better, so thank you for that. This was a fantastic conversation, Alastair. Thank you so much for joining us today and, you know, giving us a little bit of insight into how Holo is involved as one of the founding members of the overall HumanAIx program.
This episode of Building Useful Sh*t features an exclusive interview with Alastair Ong, Director at Holo, as he explores how Holo is shaping the interface layer of HumanAIx, empowering data ownership, and driving the evolution of decentralized AI.
Catch up with OORT's recent updates on the 2025 Roadmap, the Foundation grant program, the Olympus USDC integration and Halley upgrade, and the hackathon winners.