Everyone is debating Chat-GPT and AI. Engineers and entrepreneurs see it as a brave new world for building products, services, and solutions. Ezra Klein, the New York Times columnist, called it an “information warfare machine.”
God’s work? I see huge potential here, though as with any new technology, we cannot fully forecast the impact. My conclusion, despite some caveats along the way, is “hooray.”
What is Chat-GPT?
Simply put, this technology (and there are many others like it) is what is often called a “language machine” that uses statistics, reinforcement learning, and supervised learning to index words, phrases, and sentences. While it has no real “intelligence” (it doesn’t know what a word “means,” but it knows how to use it), it can very effectively answer questions, write articles, summarize information, and more.
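To make the “statistics, not meaning” point concrete, here is a toy sketch of my own (not anything OpenAI has published): a tiny bigram model that predicts the next word purely from frequency counts. Real systems use enormous neural networks, but the underlying idea of learning word usage from data is the same.

```python
# A toy "language machine": it learns word-pair statistics from text and
# predicts the next word purely by frequency -- no notion of meaning.
# Minimal sketch for illustration; real models use neural networks, not bigrams.
import random
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count which word tends to follow which."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows, start, length=8):
    """Pick each next word in proportion to how often it was seen."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

corpus = "the model predicts the next word and the next word follows the last"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

The output is often fluent-sounding nonsense, which is exactly the point: the machine knows how words are used together without knowing what any of them mean.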
Engines like Chat-GPT are “trained” (programmed and reinforced) to mimic writing styles, avoid certain types of conversations, and learn from your questions. In other words, the more advanced models can refine their answers as you ask more questions, and then store what they learn for others.
While this isn’t a new idea (we’ve had chatbots for a decade, including Siri, Alexa, Olivia, and more), the level of performance in GPT-3.5 (the latest version) is amazing. I asked it questions like “what are the best practices for recruiting?” or “how do you create a corporate training program?” and it responded quite well. Yes, the answers were fairly elementary and somewhat wrong, but with training they will clearly improve.
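I asked those questions through the chat interface, but programmatically it would look roughly like this. This is a hedged sketch assuming the older (pre-1.0) OpenAI Python SDK’s chat interface; the model name and questions are just placeholders from my experiments, not a recommendation.

```python
# Minimal sketch: asking Chat-GPT-style questions programmatically.
# Assumes the pre-1.0 OpenAI Python SDK (pip install openai) and an API key;
# the model name and the questions are illustrative placeholders.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

questions = [
    "What are the best practices for recruiting?",
    "How do you create a corporate training program?",
]

for q in questions:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": q}],
    )
    print(q)
    print(response["choices"][0]["message"]["content"], "\n")
```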
And it has many other capabilities. It can answer historical questions (who was president of the US in 1956?), it can write code (Satya Nadella believes 80% of code will be generated automatically), and it can write news articles, briefings, and more.
One of the vendors I spoke to last week is using a derivative of GPT-3 to create automated quizzes from courses and serve as a “virtual teaching assistant.” And that brings me to the possible use cases here.
(PS: In some ways, the chatbot itself can be a commodity: there are at least 20 startups with highly-funded AI teams creating spin-offs or competitors.)
How can Chat-GPT and similar technologies be used?
Before I get into the market, let me talk about why I think this is going to be so huge. These systems are “trained and educated” by the corpus (database) of information that they index. The GPT-3 system has been trained on the Internet and some highly validated data sets, so it can answer questions about almost anything. That means it’s a bit “stupid” in a way, because the “Internet” is a jumble of marketing, self-promotion, news, and opinion. Honestly, I think we all have enough trouble figuring out what’s real (try looking up health information about your latest affliction; it’s scary what you find).
Google’s competitor to GPT-3 (rumored to be Sparrow) was built with “ethical rules” from the start. According to my sources, it includes ideas like “don’t give financial advice” and “don’t discuss race or discriminate” and “don’t give medical advice.” I still don’t know if GPT-3 has this level of “ethics”, but you can bet OpenAI (the company building this) and Microsoft (one of their biggest partners) are working on it (announcement here).
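Nobody outside these labs knows exactly how such rules are wired in, but here is a deliberately naive sketch of the idea: a pre-filter that refuses restricted topics before the model ever sees the question. The keyword lists and the call_model() stub are purely illustrative assumptions; real systems use trained classifiers, not keyword matching.

```python
# Hedged sketch of "ethical rules" enforced outside the model: a naive
# pre-filter that refuses restricted topics before any model call.
# The topic keywords and call_model() stub are illustrative assumptions.
RESTRICTED = {
    "financial advice": ["invest", "stock tip", "which shares"],
    "medical advice": ["diagnose", "what medication", "treatment for"],
}

def check_rules(user_message: str):
    """Return the rule a message violates, or None if it looks fine."""
    text = user_message.lower()
    for rule, keywords in RESTRICTED.items():
        if any(k in text for k in keywords):
            return rule
    return None

def call_model(message: str) -> str:
    # Stand-in for the actual language model call.
    return "(model response placeholder)"

def answer(user_message: str) -> str:
    violated = check_rules(user_message)
    if violated:
        return f"Sorry, I'm not able to give {violated}."
    return call_model(user_message)

print(answer("Which shares should I buy?"))  # refused by the financial-advice rule
```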
So here is what I’m implying: while “talk and language” skills are important, some very erudite people (I won’t name names) are actually jerks. And that means chatbots like Chat-GPT need deep, refined content to build truly industrial-strength intelligence. It’s fine if the chatbot works “pretty well” when you’re using it to overcome writer’s block. But if you really want it to work reliably, you need to feed it expansive, deep, valid domain data.
I suppose an example would be Elon Musk’s overrated self-driving software. I, for one, don’t want to drive, or even be on the road with, a bunch of cars that are 99% safe. Even 99.9% certainty is not enough. The same is true here: if the corpus of information is flawed and the algorithms are not “constantly checking for reliability,” this could become a “disinformation machine.” And one of the most experienced AI engineers I know told me that Chat-GPT is most likely skewed, simply because of the data it tends to consume.
Imagine, for example, if the Russians used GPT-3 to build a chatbot about “US Government Policy” and pointed it at every conspiracy theory website ever written. It seems to me that this would not be very difficult, and if they put an American flag on it, many people would use it. So the source of information matters.
AI engineers know this well, yet many believe that “more data is better.” OpenAI CEO Sam Altman believes these systems will “learn” their way past invalid data as long as the data set keeps growing. While I understand that idea, I tend to believe otherwise. I believe the most valuable business uses of OpenAI will come from pointing this system at refined, smaller, validated, deep databases that we trust. (Microsoft, as a major investor, has its own ethical framework for AI, which we should assume will apply through their partnership.)
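Here is a rough sketch of what “pointing the system at a trusted database” could look like: retrieve the most relevant vetted document, then constrain the prompt to answer from that source only. The scoring below is crude word overlap (real systems use embeddings), and the documents and prompt wording are invented purely for illustration.

```python
# Sketch of grounding a model in a refined, validated corpus:
# retrieve the most relevant trusted document, then build a prompt
# that instructs the model to answer from that source alone.
# Word-overlap scoring is deliberately crude; real systems use embeddings.
TRUSTED_DOCS = [
    "Onboarding checklist: day-one paperwork, buddy assignment, 30-day goals.",
    "Compliance policy: annual training is mandatory for all managers.",
    "Recruiting guide: structured interviews reduce bias and improve quality.",
]

def score(query: str, doc: str) -> int:
    """Crude relevance: how many query words appear in the document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_prompt(query: str) -> str:
    best = max(TRUSTED_DOCS, key=lambda d: score(query, d))
    return (
        "Answer using ONLY the trusted source below.\n"
        f"Source: {best}\n"
        f"Question: {query}"
    )

print(build_prompt("How should we structure recruiting interviews?"))
```

The design choice matters: rather than hoping a giant general model happens to be right, you narrow its world to content you have already vetted.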
In the demos I’ve seen over the years, the most impressive solutions are those that focus on a single domain. Olivia, the AI chatbot developed by Paradox, is smart enough to recruit, interview, and hire a McDonald’s employee with amazing efficiency. Another vendor created a chatbot for banking compliance that works like a “compliance officer,” and it works very well.
Imagine, as I discuss in the podcast, if we built an AI that targeted all of our HR professional research and development. It would be a “virtual Josh Bersin” and might even be smarter than me. (We’re starting to prototype this now.)
I saw a demo of a system last week that took existing courseware in software engineering and data science and automatically created quizzes, a virtual teaching assistant, course outlines, and even learning objectives. This type of work normally requires a lot of cognitive effort from instructional designers and subject matter experts. If we “point” AI at our content, we can suddenly release it to the world at scale, and we as experts or designers can train it behind the scenes (a sketch of the quiz idea follows below).
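To show how simple the core of the quiz-generation idea can be, here is an illustrative prompt builder. The template and the sample course text are my assumptions; the vendor’s actual approach is not public.

```python
# Illustrative sketch of auto-generating quiz questions from courseware.
# The prompt template, parameters, and sample text are assumptions only.
def quiz_prompt(course_text: str, num_questions: int = 3) -> str:
    return (
        f"From the course material below, write {num_questions} "
        "multiple-choice questions with four options each and mark "
        "the correct answer.\n\n"
        f"Course material:\n{course_text}"
    )

course = (
    "Supervised learning fits a model to labeled examples; "
    "reinforcement learning optimizes behavior from reward signals."
)
print(quiz_prompt(course))  # send this to the language model of your choice
```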
Imagine the hundreds of applications in business: recruiting, onboarding, sales training, manufacturing training, compliance training, leadership development, even personal and professional coaching. If you focus AI on a trusted content domain (and most companies have loads of this), the possible applications are almost endless.