Can Artificial Intelligence explain the world?
What if what you are reading had not been written by a human but by a machine? The next time you visit a news website to inform yourself, you may have to ask yourself this question. The popularization of artificial intelligence (AI) capable of generating texts and images is beginning to affect journalism, opening a sea of opportunities but also posing serious risks for the profession.
The landing of AI in newsrooms is already a reality. Last November, the reference technology outlet CNET began using its own text generator to create articles about personal finance, content designed to have good SEO and rank at the top of Google results (something the search engine penalizes). Without announcing the news to its readers, the company published some 77 articles using this automated technology, 1% of all those published since then. In the first days of January, researchers uncovered not only that the outlet had chosen to apply AI to its content, but also that its system, which was supposed to be edited and supervised by journalists, repeatedly published falsehoods and inaccuracies.
This little scandal within the journalism sector has highlighted the potential impact of generative systems such as ChatGPT. Created by the company OpenAI, this popular language model has become a media phenomenon thanks to its ability to learn from billions of pieces of data extracted from the Internet to create texts, summarize complex concepts, imitate literary styles, compose poems or write lines of computer code. In short, to respond to all kinds of user requests.
Errors and plagiarism
Its striking ability to simulate human reasoning and creativity has made AI the fashionable technological phenomenon, a novelty that has crossed the barriers of the sector to fascinate the non-expert general public. However, these systems are far from perfect. “It may occasionally generate incorrect or misleading information and produce offensive or biased content,” warns ChatGPT itself.
The fact that these chatbots can deliver falsehoods convincingly unnerves experts, who fear that the uncritical use of these tools will degenerate into a disinformation problem. This concern has led thousands of scientific journals to prohibit their use in academic research.
Applied to the media, that risk is even greater. CNET decided to end its experiment with AI on January 25, shortly after the digital outlet ‘Futurism’ uncovered that some of the articles created with this system not only spread inaccurate information riddled with mistakes, but also copied exact phrases from other articles without citing their authors. The machine has no conscience, so it cannot be accused of intentionally misinforming or plagiarizing, but those errors do highlight the ethical debate opened by the application of AI in journalism. The CNET controversy is the first of many.
A wealth of opportunities
That controversy has not prevented other publications from embracing AI. Last Thursday, the digital media company BuzzFeed announced that AI-powered content would become “a part of our core business” and that it would be used both to improve its popular quizzes and to fine-tune content personalization for its audience, as the ‘Wall Street Journal’ reported. Since then, the company’s shares have skyrocketed 325%. “Using ChatGPT for hard news would be suicide (…) but it can be used to produce entertainment content where mistakes are tolerable, as in the BuzzFeed tests,” said analyst Antonio Ortiz on the podcast ‘binaries‘.
The integration of AI in newsrooms can be very useful. These systems can be used to translate texts from other languages, transcribe interviews, summarize long reports, generate images, edit videos or synthesize complex concepts to adapt them to different types of readers. The goal is not to replace the journalist but to automate the heavier tasks, speeding up their work and allowing them to go further. As early as 2014, the news agency ‘Associated Press‘ began using AI for these purposes and to write reports on business results and sports.
All this points to a transformation of the media landscape. From the written press to TV, dozens of media outlets around the world are already using AI to improve their services, and everything indicates that in the coming years there will be an explosion of automatically generated content. As the Reuters Institute for the Study of Journalism at the University of Oxford warns in its 2023 trends report, that change will not be without friction: “It will be easier than ever to create compelling and highly believable multimedia content, but it will also be more difficult than ever to separate what is real from what is false, misleading or manipulated.”
Source of data and images: elperiodico