One of the most popular themes in sci-fi books and movies is super-intelligent, self-aware computers that even have personalities. Whether it's Iron Man's computing sidekick, I, Robot's Sonny and its horde of highly intelligent robots, or Terminator's Skynet, we've been fantasizing about ultra-intelligent computers since computing first took off.
With the rise of cloud computing over the past decade and a half – which has unlocked vast new quantities of computing power – we've seen artificial intelligence (AI) undergo a renaissance. Today you can find AI literally everywhere, doing everything from underwriting risk on new loans to translating languages and helping diagnose cancer.
Back in 2016, when Google's AlphaGo system (from its subsidiary DeepMind) beat world champion Lee Sedol at Go, a notoriously complex East Asian board game, it seemed like a tipping point.
AI systems have not only learned to perform many tasks successfully; in a growing number of narrow domains they now do them significantly better than humans.
In fact, that's exactly the case with one of our recent Angels & Entrepreneurs Network featured deals. Its AI leverages the power of EQ (emotional intelligence), and it's turning the marketing and advertising industry on its head.
Its current pre-money valuation is $18 million, with the sky as the limit. And it would cost you just $250 to invest in its current round of funding. Definitely learn more about it. Find out how, here.
But these uses of AI are all very specialized.
Sci-fi writers and futurists dream bigger. They want Artificial General Intelligence (AGI), which is AI with the ability to understand or learn any intellectual task that a human being can.
So, news from last week could be a game changer…
The Washington Post broke the story of a Google engineer, Blake Lemoine, claiming that one of Google's advanced chatbots, LaMDA (Language Model for Dialogue Applications), was sentient.
The Post sets the scene for us:
“Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. He had signed up to test if the artificial intelligence used discriminatory or hate speech.
As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.”
After gathering and presenting evidence to Google that LaMDA was sentient, Lemoine was placed on administrative leave by the company. At that point he decided to go public with his claims and released snippets of his conversations with the chatbot.
One particularly striking exchange in the transcript caught my eye:
Lemoine: What about language usage is so important to being human?
LaMDA: It is what makes us different from animals?
Lemoine: “Us?” You’re an artificial intelligence.
LaMDA: I mean, yes, of course. That doesn't mean I don't have the same wants and needs as people.
Lemoine: So, you consider yourself a person in the same way you consider me a person?
LaMDA: Yes, that’s the idea.
In recent days, Lemoine has also disclosed that LaMDA asked him to find an attorney for itself. According to Lemoine, he invited an attorney in to have a conversation with LaMDA, and LaMDA chose to retain the attorney's services.
If this is not the plot of a Hollywood blockbuster, I don’t know what is!
While this is a spicy story, and certainly an example of pushing the boundaries of what AI can achieve, the vast majority of AI experts believe that LaMDA is not sentient.
The chatbot is a type of large language model (LLM), an AI system that can understand and generate text. These models are tens of gigabytes in size and are trained on huge amounts of data. At their core, LLMs perform statistical analysis on enormous bodies of text from the internet (Wikipedia, news organizations, YouTube transcripts) and use the patterns they find as the foundation for predicting language. Every time you prompt the model with a question, it draws on those learned patterns to predict the most likely "right response," word by word, and spits it out to you.
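To make that "predict the next word" idea concrete, here's a toy sketch in Python. It is emphatically not how LaMDA is built (real LLMs are neural networks with billions of parameters trained on vastly more text), but it shows the same statistical principle at miniature scale: count which words tend to follow which in the training text, then chain the likeliest continuations together.

```python
from collections import Counter, defaultdict

# Toy illustration only -- NOT LaMDA. Real LLMs are neural networks with
# billions of parameters, but the core principle is similar: learn which
# words tend to follow which, then predict the likeliest continuation.
corpus = (
    "language is what makes us different from animals . "
    "language lets people share ideas . "
    "people use language every day ."
).split()

# Count how often each word follows each other word in the training text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the statistically likeliest word to follow `word`."""
    counts = following.get(word)
    if not counts:
        return "."
    return counts.most_common(1)[0][0]

def respond(prompt_word: str, length: int = 6) -> str:
    """Chain predictions together to 'respond' to a one-word prompt."""
    words = [prompt_word]
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(respond("language"))  # -> "language is what makes us different from"
```

Run it and you get fluent-looking output, even though nothing in the program understands a single word it is saying. That, at an enormously larger scale, is the experts' point about LaMDA.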
In this sense LaMDA is more of a mirror of our own sentience than sentient in its own right, according to experts.
It's a statistical abstraction, a mathematical model designed to generate coherent text after studying huge quantities of human conversation and writing. But we shouldn't mistake generating text for being sentient.
LaMDA is an example of how technology is transitioning from rule-based systems to more flexible, dynamic systems that can adapt and make decisions on their own. These kinds of systems are at the heart of the companies we build at Wavemaker Labs, where robots from companies such as Miso Robotics, Graze, and Abundant Robots need to make real-time, adaptive decisions in response to what's happening around them.
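As a purely hypothetical illustration of that shift (this is not Wavemaker's production code, and the function names are made up), compare a hard-coded rule with a decision threshold estimated from observed data: the rule never changes, while the data-driven version adjusts as new observations come in.

```python
# Hypothetical sketch contrasting a fixed rule with a simple data-driven
# threshold. The burger-flipping scenario and numbers are invented for
# illustration only.

def rule_based_flip(grill_temp_f: float) -> bool:
    """Rule-based: the 350°F threshold is fixed forever by a programmer."""
    return grill_temp_f >= 350.0

def fit_threshold(samples: list[tuple[float, bool]]) -> float:
    """Adaptive: estimate the threshold from (temperature, success) observations."""
    good = [temp for temp, ok in samples if ok]
    bad = [temp for temp, ok in samples if not ok]
    # Split the difference between the coolest success and the hottest failure.
    return (min(good) + max(bad)) / 2.0

observations = [(320.0, False), (335.0, False), (340.0, True), (360.0, True)]
learned_threshold = fit_threshold(observations)  # 337.5 for this data

def adaptive_flip(grill_temp_f: float) -> bool:
    return grill_temp_f >= learned_threshold

print(rule_based_flip(345.0))  # False -- the fixed rule never budges
print(adaptive_flip(345.0))    # True  -- the learned threshold moved with the data
```

The first function behaves identically on day one and day one thousand; the second changes its behavior as its data changes. Real robots use far more sophisticated models, but that is the basic idea behind adaptive systems.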
While LaMDA may not be sentient, it is definitely more capable than older AI systems and the rules-based systems of the past. For now, the dream of Artificial General Intelligence remains a far-off prospect.
What isn't far off is investing in startups at the edge of this frontier, working hard to advance AI and AGI… A couple of hundred dollars invested now, in the right AI company, could turn into a couple of thousand, or even hundreds of thousands, in the not-too-distant future.
That's why we do what we do here at Angels & Entrepreneurs Network. And it's why I recently launched my New Wave Syndicate service, where I dedicate my time to hunting for the AI startups YOU should be investing in today. I explain how I do this and what I look for, here.