Are generative AI apps such as ChatGPT going to make us more productive, save us time, help us be healthier, smarter, happier? The answer, for now, is “maybe.”
Generative AI is as real as it gets in terms of revolutionizing work, culture, and the nature of creativity. It is transformative for many industries and is potentially on track to become as ubiquitous in our homes as Siri and Alexa.
“Star Wars” director George Lucas imagined it clearly decades ago. If you ask me, our kids will unwrap one heck of a Christmas gift this year: talking robots, whether in the form of a small R2-D2 or a sleek golden C-3PO powered by generative AI, placed under the tree.
I root for generative AI not only as a tech executive but as a parent. The thought of my kids playing with AI doesn’t scare me. I would much rather they interact with AI that indexes trusted information than learn about science, health care and life hacks on TikTok. Likewise, I prefer that my kids challenge their thinking skills with video games such as Zelda rather than watch mindless TV.
Despite its popularity, generative AI is still in its infancy.
Here are two things that need to happen for AI to usher in a brand-new boom, one that will benefit our kids as well as tech innovation, education and investment:
1. Generative AI needs training on trusted information
The large language models, or LLMs, that underlie generative AI apps learn to produce conversation by training on massive amounts of text scraped from the web and predicting the word that would most plausibly come next. It’s a bit like Google’s “auto-complete” when we search, or Google’s “did you mean” suggestions, only on steroids.
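To make the “auto-complete on steroids” idea concrete, here is a deliberately simplified sketch. Real LLMs use neural networks over billions of parameters; this toy bigram model (an illustration of the next-word-prediction idea, not of any actual product) just predicts the word it most often saw following the current one in its training text. The corpus and function names are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    follow = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follow[current][nxt] += 1
    return follow

def predict_next(follow, word):
    """Predict the next word as the most frequent follower seen in training."""
    candidates = follow.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# A tiny made-up "web corpus" stands in for the massive scraped datasets.
corpus = "the force is strong the force is with you"
model = train_bigrams(corpus)
print(predict_next(model, "force"))  # -> "is"
```

The point of the toy: the model has no notion of truth, only of what words tended to follow other words in its training data. That is exactly why the quality of that data matters so much.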
If you spend some time with generative AI, you’ll see it’s pretty good already. Is it perfect? No, but it can definitely talk to you about many different things and seem intelligent.
But because generative AI has been trained on content produced by fallible human beings, it is also spreading misinformation. Some of this bad information can be humorous, like asking Google’s chatbot Bard “Did Anakin Skywalker fight Darth Vader?” and getting a “Yes, they fought three times.” (That’s funny because they are the same person.)
Or it can be harmful, like asking the AI “Is sunscreen good for you?” and getting a “maybe” because it was trained on posts that spread after a popular TikTok misinformation campaign.
That’s where publishers come in, and why they play a big role in our AI-driven future. If these systems are trained on credible information and high-quality media, generative AI can reflect the best of what the world has to offer.
News publishers have checks and balances in place to report the news accurately. News editors dedicate their entire careers to this. I would trust a journalist’s assessment of breaking news over a TikTok influencer’s hot take any day. Yes, I said it.
2. Generative AI needs attribution and compensation for its sources
There is a fundamental question about the business model of generative AI companies: how their sources of information are indexed, how those sources receive credit for their contributions, and how they ultimately get paid.
Generative AI companies need to standardize what exactly is being indexed, how often, and how that translates into the answers they produce. There needs to be more transparency here, beyond listing sources as bullet points beneath answers.
Generative AI companies need to take a stand: will they pay for the sources of data they ingest daily? News publishers that contribute to correct answers from generative AI are providing a critical service at a time when misinformation runs wild on social networks. News outlets should be paid; the question is how.
Adam Singolda is the CEO of contextual online advertising company Taboola.