When seeing is no longer believing: how AI is reshaping trust in news for Bangladesh’s youth

A video appears on your phone. A well-known political leader is speaking. The setting looks official, the tone is serious, and thousands of people have already liked and shared it. It feels real. It looks real. But what if it is not?

For many young people in Bangladesh, this is no longer a distant possibility. We conducted a study showing that artificial intelligence is quietly changing the way they judge news on social media. The findings suggest that trust is no longer built on facts or sources alone. Instead, it is increasingly shaped by appearance, emotion, and online behaviour.

Drawing on focus group discussions with 50 young users aged 18 to 25, along with an analysis of social media content, the study reveals a troubling pattern. Young people are active consumers of news, yet many rely on quick, surface-level cues to decide what is true.

The rise of visual trust

One of the strongest findings from the research is how much importance young people place on visual quality. Participants said that clear images, realistic videos, and professional presentation styles make content feel credible.

The study identifies this as “visual and formal realism” and finds it to be the main basis for judging truth. High-definition videos, natural lighting, and familiar news formats all act as signals of authenticity. Many participants admitted they assume that if something looks polished and realistic, it must be accurate.

This matters because artificial intelligence can now replicate these features with ease. News-style graphics, official-looking logos, and lifelike human faces can all be generated or manipulated. As a result, the very elements that once signalled credibility are no longer reliable.

Emotion comes first

Another key insight is the role of emotion in shaping belief. The study shows that young people are more likely to trust and share content that triggers strong feelings such as fear, sadness, or anger.

Participants openly acknowledged that emotional content affects their judgement. When a post feels shocking or deeply moving, they are less likely to question it. This creates a situation where emotional appeal can outweigh factual accuracy.

The research also notes that social media platforms amplify this effect. Algorithms tend to prioritise content that generates strong reactions. As a result, emotionally charged posts are more visible and spread more quickly than neutral or fact-based information.

Popularity as proof

In the digital environment, popularity often becomes a shortcut for truth. The study finds that many young users see likes, shares, and comments as signs of credibility.

If a post has been widely shared, it feels trustworthy. Participants described a common belief that large numbers of interactions mean the information cannot be false. This creates a powerful feedback loop where popular content gains even more visibility and trust.

However, this reliance on engagement can be misleading. The research highlights that people often do not verify the source or accuracy of a post if it appears widely accepted. Instead, they take the reaction of others as enough evidence.

Trust in familiar sources

The study also shows that credibility is closely linked to personal connections and perceived authority. Young people tend to trust content shared by people they know, as well as recognised figures such as teachers, public personalities, or religious leaders.

Verified pages and well-known organisations also carry weight. When content appears to come from a familiar or authoritative source, it is less likely to be questioned.

This pattern reflects a broader shift from institutional trust to network-based trust. Rather than evaluating information independently, users often rely on their social circle and online communities to decide what is believable.

The influence of algorithms

Social media platforms play a major role in shaping what users see and believe. The study highlights how algorithmic personalisation affects perceptions of credibility.

Content that matches a user’s interests, language, and location feels more relevant. Because it aligns with existing preferences, it is also more likely to be accepted as true. Participants said they trust information more when it reflects their own experiences or beliefs.

Repetition is another important factor. Many participants admitted that seeing the same content multiple times makes it feel familiar, and that this familiarity leads them to assume the information must be true.

This shows how algorithms can reinforce belief, not by verifying information, but by increasing its visibility.

Awareness with limitations

Interestingly, the research finds that young people are not oblivious to artificial intelligence and its risks. Many participants said they know that AI can create fake or manipulated content.

However, this awareness does not always translate into better judgement. The study reveals that participants often have a limited understanding of how AI actually works. They may recognise the term but lack the knowledge needed to identify manipulated media.

As a result, they continue to rely on visual quality, emotional impact, and social proof as indicators of credibility. This creates a gap between awareness and practical ability.

A growing sense of uncertainty

One of the most concerning findings is the sense of confusion among users. Participants reported that they sometimes doubt even genuine content because they know how easily images and videos can be altered.

The study describes this as a growing uncertainty about what is real. Traditional cues such as professional editing or official presentation no longer provide clear answers. At the same time, users feel pressure to process information quickly in a fast-moving digital environment.

This combination leads to what can be described as information fatigue. Users are aware of the risks but lack the tools to respond effectively.

The Bangladesh context

Bangladesh has seen rapid growth in social media use, especially among young people. Platforms like Facebook and YouTube have become primary sources of news and information.

At the same time, the country faces challenges such as limited AI literacy, a lack of Bangla-based detection tools, and evolving regulatory frameworks. These factors create an environment where misinformation can spread easily and quickly.

The study highlights that young users are particularly vulnerable because they are both highly active online and still developing critical evaluation skills. Their reliance on surface-level cues makes them more susceptible to misleading content.

What can be done

The research points to several practical steps that could help address the problem.

One important measure is clearer labelling of AI-generated content. If users can easily identify when a video or image has been created or altered, they are more likely to question it.

Education is also key. Media literacy programmes need to go beyond simple warnings about fake news. They should teach how algorithms work, how AI generates content, and how to recognise signs of manipulation.

Social media platforms have a role to play as well. They can introduce tools that show the origin of content and provide alerts for potentially misleading material. Developing systems that work effectively in Bangla and reflect local contexts is especially important.

For individuals, small behavioural changes can make a difference. Taking a moment to pause before sharing, especially when content is emotional or surprising, can reduce the spread of misinformation.

Rethinking trust in the digital age

The study makes one thing clear. Trust in news is no longer shaped in the same way as before. Appearance, emotion, and popularity have taken on a larger role, while traditional indicators such as sourcing and verification have become less central.

Young people in Bangladesh are navigating this complex environment with awareness but limited tools. They know that not everything they see is real, yet they still rely on cues that can easily be manipulated.

This creates a fragile situation where both belief and doubt are increasing at the same time.

In a world where artificial intelligence can recreate reality with convincing detail, the challenge is no longer just identifying false information. It is learning how to think more carefully about everything we see.

Because in today’s digital landscape, seeing is no longer believing.