
Can AI replace journalists?

The field of journalism has undergone a transformation, and artificial intelligence (AI) has become a crucial instrument in digital journalism. News delivery is now more effective and efficient, though often redundant. Thanks to its capacity to analyse enormous volumes of data and spot trends, AI can help journalists take advantage of new technology by producing more accurate and engaging material (though with little accountability). It has, however, also sparked worries about the profession’s future, along the following lines:

Biased Reporting

The objectivity of AI algorithms depends on how well they are created by humans. Algorithms built without attention to diversity can produce biased reporting, allowing systemic sexism, racism, or other forms of prejudice to persist.

Lack of Creativity

Artificial intelligence can only work with the data it is given. This means that AI-generated news cannot match the originality and inventiveness of human journalists.

Job Losses

The automation of tasks that humans formerly performed by hand, such as content curation, may lead to job losses in the media sector. The standard of journalism may suffer if fewer people are available to fact-check and report on complex issues.

Ethical Concerns

The use of AI in journalism raises ethical concerns, particularly around the use of personal data. AI algorithms could be designed to sway readers’ opinions and behaviour in order to increase revenue. As a result, people may become disillusioned with online journalism platforms.

Lack of Empathy

AI-generated material lacks empathy and an understanding of human emotions, which makes it difficult to use when covering delicate stories.

Lack of Accountability

To guarantee truth and impartiality, the use of AI in journalism must be transparent and subject to scrutiny. Without adequate oversight, AI could reinforce prejudice and misinformation, ultimately damaging the credibility of online journalism.

Guidelines released by the Associated Press

The Associated Press (AP) published new standards for the use of generative AI in its newsroom on August 16. Journalists may experiment with AI tools such as ChatGPT as long as they exercise care and refrain from using the technology to produce publishable content. The organisation also emphasises that any output from a generative AI tool should be treated as “unvetted source material,” meaning journalists must exercise their editorial discretion and adhere to AP’s sourcing rules before using such information. In other words, journalists must thoroughly fact-check any AI-generated content before using it.

Aware of these dangers, AP has urged its reporters to “exercise the same caution and scepticism they would normally,” which includes identifying sources, conducting reverse image searches, and looking for comparable articles in reliable media.

Generative AI

Additionally, AP specifies that generative AI may not be used to add elements to, or remove elements from, images, videos, or audio files. This guideline is likely aimed at generative tools that can extend pictures beyond their original boundaries and change their aspect ratios seamlessly, such as Adobe’s Firefly-powered Photoshop. The standards do include an exemption for cases in which AI illustrations or art are the focus of the story, but even then the image must be clearly labelled as such.

Prohibitions

AP has also instructed personnel not to enter confidential or sensitive material into AI tools, following the lead of several firms that have barred employees from entering sensitive information into AI chatbots.
