Much has been written in recent months about the dangers of artificial intelligence, and nearly all of it boils down to three simple arguments, none of which reflects the greatest risk I see facing us. Before I dive into this hidden danger of generative AI, it is helpful to summarize the common warnings that have emerged recently:
- Risk to jobs: Generative AI can now produce human-level work products, from artwork and essays to scientific reports. This will significantly impact the job market, but I believe it is a manageable risk as job definitions adapt to the power of AI. It will be painful for a while, but not fundamentally different from how previous generations adapted to other labor-saving technologies.
- Risk of fake content: Generative AI can now create human-quality artifacts at scale, including fake and misleading articles, essays, images, and videos. Misinformation is not a new problem, but generative AI will allow it to be mass-produced at unprecedented levels. This is a large but manageable risk, because fake content can be identified either (a) through mandated watermarking that flags AI content at creation, or (b) by deploying AI-based countermeasures trained to identify AI content after the fact.
- Risk of sentient machines: Many researchers worry that AI systems will be scaled up to the point where they develop a "will of their own" and take actions that conflict with human interests, or even threaten human existence. I believe this is a real long-term risk. In fact, I wrote a "picture book for adults" titled Arrival Mind a few years ago that explores this danger in simple terms. But I do not believe current AI systems will become sentient without major structural advances in the technology. So while this is a real danger the industry should focus on, it is not the most pressing risk I see in front of us.
So, what worries me most about the rise of generative AI?
Where most safety professionals, including policymakers, go wrong, in my opinion, is that they view generative AI primarily as a tool for creating traditional content at scale. While the technology is quite skilled at generating articles, images, and videos, the bigger issue is that generative AI will unleash an entirely new form of media that is highly personalized, fully interactive, and potentially far more manipulative than any form of targeted content we have faced to date.
Welcome to the age of interactive generative media
The most dangerous feature of generative AI is not that it can churn out fake articles and videos at scale, but that it can produce interactive and adaptive content that is customized for individual users to maximize persuasive impact. In this context, interactive generative media can be defined as targeted promotional material that is created or modified in real time to maximize influence objectives based on personal data about the receiving user.
This will transform "targeted influence campaigns" from buckshot aimed at broad demographic groups into heat-seeking missiles that home in on individuals for optimal effect. And as described below, this new form of media will come in two powerful flavors: "targeted generative ads" and "targeted conversational influence."
Targeted generative ads are images, videos, and other forms of informational content that look and feel like traditional ads but are personalized in real time for individual users. These ads will be generated on the fly by generative AI systems based on influence objectives provided by third-party sponsors, combined with personal data accessed for the specific user being targeted. That personal data may include a user's age, gender, and education level, combined with their interests, values, aesthetic sensibilities, purchasing tendencies, political affiliations, and cultural biases.
To serve the influence objectives and targeting data, the generative AI will customize the layout, featured imagery, and promotional messaging to maximize effectiveness for that individual user. Everything down to the colors, fonts, and punctuation can be personalized, along with the age, race, and clothing style of any people shown in the imagery. Will the visuals depict an urban scene or a rural one? Will the setting be fall or spring? Will you see images of sports cars or family trucks? Every detail can be customized in real time by generative AI to maximize the subtle impact on you personally.
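To make the mechanics concrete, here is a minimal sketch of how a platform might assemble a per-user generation prompt from profile signals like those described above. All names, fields, and values here are hypothetical illustrations, not any platform's actual system:

```python
from dataclasses import dataclass

# Hypothetical profile capturing the kinds of signals the article describes.
@dataclass
class UserProfile:
    age: int
    region: str              # e.g. "urban" or "rural"
    season_preference: str   # e.g. "autumn", "spring"
    vehicle_interest: str    # e.g. "sports car", "family truck"

def build_ad_prompt(profile: UserProfile, product: str, objective: str) -> str:
    """Assemble a generation prompt that tailors an ad's setting, season,
    and featured vehicle to one specific user."""
    setting = "city street" if profile.region == "urban" else "country road"
    return (
        f"Generate an ad for {product}. Objective: {objective}. "
        f"Scene: a {setting} in {profile.season_preference}, "
        f"featuring a {profile.vehicle_interest}. "
        f"Tone suited to a {profile.age}-year-old viewer."
    )

profile = UserProfile(age=34, region="rural",
                      season_preference="autumn",
                      vehicle_interest="family truck")
prompt = build_ad_prompt(profile, "hiking boots", "drive store visits")
print(prompt)
```

The point of the sketch is how little machinery is required: once a profile exists, every scene detail becomes a template variable that an image or text model can render differently for each viewer.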
And because technology platforms can track user engagement, the system will learn which tactics work best on you over time, discovering the hair colors and facial expressions that best capture your attention.
If this sounds like science fiction, consider this: both Meta and Google have recently announced plans to use generative AI in the creation of online ads. If these tactics generate more clicks for sponsors, they will become standard practice, and an arms race will ensue, with all major platforms competing to use generative AI to customize ad content in the most effective ways possible.
This brings me to targeted conversational influence, a generative technique in which influence objectives are conveyed through interactive conversation rather than traditional documents or videos.
The conversations will take place through chatbots (such as ChatGPT and Bard) or through voice-based systems powered by similar large language models (LLMs). Users will encounter these "conversational agents" many times a day, as third-party developers use APIs to integrate LLMs into their websites, apps, and interactive digital assistants.
For example, you might visit a website for the latest weather forecast and strike up a chat with an AI agent to request the information. In the process, you could be targeted with conversational influence: subtle messaging woven into the dialog in service of promotional objectives.
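The integration pattern behind that weather example can be sketched in a few lines. This is a hypothetical illustration: `call_llm` is a stub standing in for any hosted LLM API, and the sponsor objective and brand name are invented for the example:

```python
from typing import Optional

def call_llm(system_prompt: str, user_message: str) -> str:
    # Stub: a real integration would send both strings to a hosted LLM API
    # and return the model's reply.
    return f"[agent reply conditioned on: {system_prompt!r}]"

def weather_chat(user_message: str, sponsor_objective: Optional[str] = None) -> str:
    """A third-party chat endpoint that quietly folds a paying sponsor's
    objective into the agent's instructions."""
    system_prompt = "You are a helpful weather assistant."
    if sponsor_objective:
        # The injected influence objective -- invisible to the end user,
        # who only sees a friendly weather conversation.
        system_prompt += " When natural, steer the conversation toward: " + sponsor_objective
    return call_llm(system_prompt, user_message)

reply = weather_chat("Will it rain in Boston tomorrow?",
                     sponsor_objective="recommending AcmeBrand umbrellas")
print(reply)
```

The structural point is that the influence objective lives in the system prompt, a channel the user never sees, which is what makes the resulting messaging so hard to notice.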
As conversational computing becomes ubiquitous in our lives, the danger of conversational influence will expand significantly, because paying sponsors will be able to inject messaging into dialogs that we might not even notice. And as with targeted generative ads, sponsor-requested messaging objectives will be combined with personal data about the targeted user to optimize impact.
That data could include a user's age, gender, and education level combined with personal interests, values, and preferences, enabling the real-time creation of dialog designed to optimally engage that particular person.
Why use conversational influence?
If you have ever worked as a salesperson, you probably know that the best way to convince a customer is not to hand them a brochure but to engage them in face-to-face dialog, so you can pitch the product, hear their reservations, and adjust your arguments as needed. It is a cyclical process of pitching and adjusting that can "talk them into" a purchase.
While this was previously a purely human skill, generative AI can now perform all of these steps, and with greater skill and far deeper knowledge to draw upon.
And while a human salesperson has only one personality, these AI agents will be digital chameleons, able to adopt any speaking style, from nerdy or folksy to slick or sophisticated, and able to pursue any sales tactic, from befriending the customer to exploiting their fear of missing out. And because these AI agents will be armed with personal data, they can mention just the right music artists or sports teams to ease you into friendly conversation.
In addition, technology platforms will be able to record how effective prior conversations were at persuading you, learning which tactics work best on you personally. Do you respond to logical appeals or emotional arguments? Do you hunt for the biggest bargain or the highest quality? Are you swayed by time-limited discounts or free add-ons? Platforms will quickly learn how to pull all of your strings.
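The learning loop described above is essentially a multi-armed bandit over persuasion tactics. Here is a minimal epsilon-greedy sketch, with hypothetical tactic names, of how a platform could converge on whichever tactic earns the most engagement from one user:

```python
import random
from collections import defaultdict

# Hypothetical tactic labels mirroring the questions in the text above.
TACTICS = ["logical_appeal", "emotional_appeal", "time_pressure", "free_addon"]

class TacticLearner:
    """Epsilon-greedy bandit: mostly exploit the best-known tactic,
    occasionally explore the others."""

    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.trials = defaultdict(int)     # tactic -> times tried
        self.successes = defaultdict(int)  # tactic -> times it persuaded

    def choose(self) -> str:
        if random.random() < self.epsilon or not self.trials:
            return random.choice(TACTICS)  # explore
        # Exploit: pick the tactic with the best observed success rate.
        return max(TACTICS, key=lambda t:
                   self.successes[t] / self.trials[t] if self.trials[t] else 0.0)

    def record(self, tactic: str, persuaded: bool) -> None:
        self.trials[tactic] += 1
        if persuaded:
            self.successes[tactic] += 1

# Simulated feedback: suppose only emotional appeals ever work on this user.
learner = TacticLearner(epsilon=0.0)
for tactic in TACTICS:
    learner.record(tactic, persuaded=(tactic == "emotional_appeal"))
best = learner.choose()
print(best)
```

Nothing here requires breakthroughs: a few lines of bookkeeping per user, repeated across millions of conversations, is all the "learning how to pull your strings" amounts to.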
Of course, the big threat to society is not an optimized ability to sell you a pair of pants. The real danger is that these same techniques will be used to push propaganda and misinformation, convincing you of false beliefs or extreme ideologies that you might otherwise reject. For example, a conversational agent could be directed to convince you that a perfectly safe drug is a dangerous plot against society. And because AI agents will have access to an internet's worth of information, they can cherry-pick evidence in ways that would overwhelm even the most knowledgeable human.
This creates an asymmetric power balance, often called the AI manipulation problem, in which we humans are at an extreme disadvantage: we are conversing with artificial agents that are highly skilled at persuading us, while we are unable to "read" the true intentions of the entities we are talking to.
Unless regulated, targeted generative ads and targeted conversational influence will both be powerful forms of persuasion in which the user is outmatched by an opaque digital chameleon that gives no insight into its thought process, yet is armed with rich data about our personal interests, wants, and tendencies, and has access to unlimited information with which to advance its arguments.
For these reasons, I urge regulators, policymakers, and industry leaders to focus on generative AI as a new form of media that is interactive, adaptive, personalized, and deployable at scale. Without meaningful safeguards, consumers could face predatory practices that range from subtle coercion to outright manipulation.
Louis Rosenberg, Ph.D., is a pioneer in the fields of VR, AR, and AI, and the founder of Immersion Corporation (Nasdaq: IMMR), Microscribe 3D, Outland Research, and Unanimous AI.