Google AI Lead: “Don’t take trading advice from ChatGPT”
ChatGPT and trading bots are on the rise, but can you trust their recommendations? Laurence Moroney, AI Lead at Google, discusses the intersection of AI and web3 in an exclusive interview with crypto.news.
Artificial intelligence (AI) and blockchain technologies are advancing rapidly. With the increased accessibility of AI, the risk of misinformation has grown, as tools like ChatGPT enable everyone to generate large volumes of misleading content. Experts suggest that blockchain has the potential to address this issue. Web3 could provide a secure and verified identity system, ensuring the accuracy and reliability of information in the AI era.
Crypto.news talked with Laurence Moroney, AI Lead at Google, during the SmartCon conference by Chainlink in Barcelona about this and other topics. We explored the synergy between web3 and AI, why you should not rely on ChatGPT and trading bots, and whether AI could really threaten humanity.
“Web3 enables AI3”
Crypto.news: To begin, could you share your journey with Bitcoin and cryptocurrencies?
Laurence Moroney: I first learned about Bitcoin and cryptocurrencies eight or nine years ago by reading the book “The End of Money” by the New Scientist magazine, and that opened my mind. At first, I thought this was a borderline scam, but then I read it and understood how it works and what it’s all about.
After that, I built my first mining rig. I need to go back and take a look at it to see if I ever actually mined anything. I remember leaving it running for a few weeks and getting a few scraps, but those scraps at the time were worth pennies.
I was also a little bit put off by the shilling of NFTs. I have a number of friends whose Twitter accounts got hacked by people wanting to sell NFTs, and it brought crypto into a bad light. That’s a shame because there’s so much good in there.
I think AI is facing a similar situation. When you go onto Twitter or other social media today, you see all these folks sharing how you can become a millionaire just by being a prompt engineer. In the same way, a year ago, they were sharing that you could become a millionaire by selling an NFT. Unfortunately, the industry got tainted by that, as well as by other things that we all know about. But I think it has legs because it has viable solutions. I’m mostly positive about crypto. And if web3 enables AI3, then the potential there is huge.
Crypto.news: In your view, what are the key intersections between AI and cryptocurrencies?
Laurence Moroney: I think there are so many points that it’s hard to list them briefly.
There’s a joke that the industry doesn’t get anything right until version 3. When we look at where generative AI stands now, it faces many challenges. For example, questions arise when data is sourced from potentially copyrighted material – who owns the output? Instances like AI-generated art winning competitions have led to controversies, showing that AI art cannot be copyrighted. Technology has raced ahead, but considerations regarding data and its implications haven’t yet caught up.
Crypto can help with that. For example, consider data. In the future, if I’m an author and I grant permission for my data to be used in training models, with identity verification and the data stored on-chain, then when someone uses a model trained with my data, some of the tokens generated could flow back to me. That benefits data creators.
For consumers of models, having the ability to verify information on-chain using trusted models increases trust in incoming data. Lastly, for model builders, working with validated data and user-permitted data allows them to go beyond those hurdles. In essence, web3 enables AI3.
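The payout idea Moroney describes can be sketched in a few lines. Everything here is hypothetical for illustration: the contributor names, the weights, and the fee amount are invented, and a real system would verify identities and record payouts on-chain rather than in a Python dictionary.

```python
# Hypothetical sketch: splitting an inference fee among data contributors.
# Contributor names, weights, and the fee amount are invented for illustration;
# a real system would verify identities and settle payouts on-chain.

def split_fee(fee_tokens, contributions):
    """Divide a fee proportionally to each contributor's share of training data."""
    total = sum(contributions.values())
    return {author: fee_tokens * share / total
            for author, share in contributions.items()}

# Shares here might represent how many permitted documents each author supplied.
contributions = {"author_a": 60, "author_b": 30, "author_c": 10}
payouts = split_fee(5.0, contributions)
print(payouts)  # author_a receives the largest share of the 5-token fee
```

The interesting design questions live outside this sketch: how contribution shares are measured, and how the chain attests that a given model was actually trained on a given author’s data.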
Why you shouldn’t trust ChatGPT
Crypto.news: When it comes to practical applications like trading bots or utilizing ChatGPT for investment advice, what are your thoughts on this? Do they currently function as intended?
Laurence Moroney: First, it’s crucial to understand what a bot is fundamentally. A bot is built on a transformer, a machine learning algorithm that learns how one pattern of text can be converted into another. It predicts what comes next based on the patterns it has learned. For example, if I say, “If you’re happy and you know it,” you’d typically expect the next part to be “Clap your hands.” A transformer learns those kinds of patterns.
Now, imagine a massive corpus of text on the Internet, which includes investment advice. Some of it is good advice, some bad, and some fictional. The model doesn’t know the difference between those. Generative models like GPT, where “G” stands for generative, create content, which means they make things up. Are you willing to trust trading advice or conduct trades based on a model trained on a mix of good, bad, and fictional advice?
Crypto.news: But what if you feed the model exclusively reliable data from reputable exchanges? Could it perform effectively in that scenario?
Laurence Moroney: If you were to train a model exclusively on data from reputable exchanges and only on successful transactions, you might have a better chance of getting reliable output. But you can never predict the future. Analysts have been attempting this for years, both with AI and with traditional data science. Nobody out there can predict prices correctly.
Moreover, if a million people use that model simultaneously, its verifiability and accuracy will go out the window. When everyone is buying something, the price goes up. If everybody sells it, the price goes down regardless of what you predict.
I don’t believe in the future of trading bots in that way. And I worry that if you have a million bots all doing the same thing, they can potentially move the market.
Crypto.news: So, you also don’t think that AI could eventually replace human financial advisors?
Laurence Moroney: AI will make human financial advisors much more efficient. Financial advisors often need to read extensive documents, which can contain thousands of words, to find valuable insights. LLMs can help them query and extract meaningful information from these documents, thereby improving efficiency.
If you think about accountants, the invention of the spreadsheet didn’t make them obsolete. It put the accountants who worked with pen and paper out of business. But it made the accountants much more efficient.
Crypto.news: Another challenge experts often talk about is that the trading data is too centralized.
Laurence Moroney: That could indeed be one of the challenges, I agree. Another challenge to consider is centralized computing. However, as models grow bigger and gain more capabilities, we’re likely to see them split into smaller ones.
We are already seeing signs of that. For example, with the latest GPT-4 release, the creators came out and said that it comprises several models working together.
We are entering an era where multiple models work together to solve problems. That’s when things start getting exciting, especially when we consider the potential for decentralizing models once we can distribute them. Placing a gargantuan 300 billion-parameter single model directly on the blockchain is impractical. It’s just too big. However, by developing smaller, distributed models, each operating on different blockchains, and using protocols like CCIP to bridge between chains, decentralization becomes a feasible future.
Is AI a threat to humanity?
Crypto.news: In “Scary Smart” by Mo Gawdat, former Chief Business Officer for Google [X], there’s a notion that AI, once surpassing human intelligence, could pose a threat. Additionally, experts have called for a pause in advanced AI development due to fears of unintended consequences. What’s your perspective on this?
Laurence Moroney: I guess my viewpoint is slightly different. AI models are tools used by humans for various purposes. And man’s inhumanity to man is a bigger danger than any particular tool.
I often say, and it might sound like a joke, but I think it’s true, that I’m far more afraid of biological ignorance than machine intelligence. And when you look at the most powerful AIs of today, they are largely harmless.
Crypto.news: But some argue that AI is advancing exponentially.
Laurence Moroney: Indeed, AI’s power is growing exponentially, but that growth is geared towards building better solutions, which doesn’t necessarily mean more dangerous solutions. For example, the hallucination level in a large language model may go down while its usefulness as a utility goes up.
However, there’s the famous case of a man in Belgium who committed suicide after a session chatting with ChatGPT. It’s obviously a very tragic situation. A more powerful model, ten times or 100 times more powerful, in my opinion, doesn’t mean that that will be ten times or 100 times more likely to happen. I think the circumstances that lead to a terrible outcome like that one have nothing to do with the model. The model may have been an enabler, and it would be good to learn guardrails on these things for circumstances like that one. But to me, the multiplication of the power of the model doesn’t mean a direct multiplication of the damage that the model can do.
I know my opinion isn’t shared by a lot of people, and I think only time will tell. But I have been in technology for a long time and have seen negative stories for a long time that didn’t pan out. Remember Y2K? Most people believed that on January 1st, 2000, every computer would crash. On YouTube, you can find videos that were made back then, including one by Leonard Nimoy, the actor who played Spock in “Star Trek”, talking about how the world’s going to end. And nothing happened. We’ve mostly forgotten about it.
I’m not going to say it’s going to be perfectly safe. I think there are bad people who use powerful things to do bad things. That’s exactly the problem. And that’s what I worry about. If you give somebody a bigger gun, they can kill more people. But you don’t stop that by stopping the development of the AI. I think the positive uses of AI greatly outweigh the negative uses. But having control, understanding how these things are used, and putting appropriate guardrails to prevent such things from happening would be, in my opinion, the right approach.
Crypto.news: And finally, recent research suggests that with the rapid progress of artificial intelligence, the “possibility of it becoming conscious is becoming less and less fantastical”. What do you think about it?
Laurence Moroney: It is possible but also depends on how we define consciousness. I don’t see any path to that with today’s technology, but technology does change rapidly. We’ll see.
One of the things around consciousness is the idea of emergence, where something isn’t intelligent, but based on how it behaves, we believe it’s intelligent. We’ve already seen instances of this where people believe ChatGPT is alive or Bard is alive, because it’s in the eye of the beholder.
I think that we’re going to see more of that: people believing something is alive, smart, intelligent, and self-aware. Does that make it alive, smart, and intelligent? It might be because what if a critical mass of people believe that to be true? If eight billion people suddenly believe that ChatGPT is alive, could it be alive? I don’t know.
When we look at plants or crows, we tend not to see them as intelligent. But where I live in the Seattle area, we have a lot of coyotes, and if I’m out walking my dog, the crows warn each other when they see a coyote, and as a result, they’re helping me. So, there are all of these places where we don’t see intelligence or self-awareness or sentience, and this is forcing us to rethink it. As we’re artificially creating stuff, we’re learning more about ourselves, and I find it fascinating.