

ALLANA MANAGEMENT JOURNAL OF RESEARCH, PUNE - Volume 15, Issue 2, July 2025 - Dec 2025

Pages: 70-79

Date of Publication: 25-Dec-2025



EVALUATING THE IMPACT OF GENERATIVE AI TOOLS ON CONSUMER TRUST AND BRAND PERCEPTION: AN EMPIRICAL VALIDATION

Author: Sheetal Desai & Sajeesh Hamsa

Category: Marketing Management

Abstract:

Generative Artificial Intelligence (GenAI) is transforming modern marketing by enabling brands to create scalable, personalized content across multiple digital formats. However, its growing use raises important concerns regarding consumer trust, authenticity, privacy, and brand reputation.

Purpose: The rapid adoption of Generative Artificial Intelligence (GenAI) in marketing has enabled brands to produce large-scale, highly personalized content across text, image, audio, and video formats. While GenAI offers significant operational and creative advantages, it also raises critical concerns regarding consumer trust, brand authenticity, privacy, and reputation. This study examines the impact of AI-generated marketing content on consumer trust and brand perception.

Research Design/Methodology/Approach: The study employs a mixed-method research design combining a Systematic Literature Review (SLR) with a quantitative pre- and post-exposure survey. The SLR identifies key constructs influencing consumer trust in AI-generated marketing. Primary data were collected from student and consumer panel respondents exposed to both AI-generated and human-created marketing content. Statistical analysis includes paired t-tests to measure changes in trust and perception scores, and chi-square tests to assess shifts in consumer attitudes.

Findings: The literature review identifies five critical elements shaping consumer trust in AI-generated marketing content: perceived usefulness, authenticity, privacy concerns, transparency, and brand trust. Empirical findings reveal that while GenAI content is perceived as efficient and informative, it often scores lower on authenticity and privacy assurance compared to human-created content. Exposure to AI-generated marketing leads to measurable changes in consumer trust levels and brand perception, particularly when transparency about AI usage is absent.

Research Limitations/Implications: The study relies on controlled exposure settings and self-reported responses, which may not fully capture real-world consumer behaviour. The sample composition may limit generalizability across industries and demographic segments.

Originality/Value: This research integrates systematic literature insights with empirical evidence to advance understanding of how GenAI influences consumer trust in marketing. It identifies key areas for future controlled studies and offers practical implications for responsible and transparent AI adoption in branding strategies.

Keywords: Generative Artificial Intelligence, Consumer Trust, Brand Reputation, Marketing Content, Authenticity, Privacy, Systematic Literature Review.

DOI: https://doi.org/10.62223/AMJR.2025.150208

Full Text:

INTRODUCTION

Global Scenario

Generative Artificial Intelligence (GenAI) moved beyond its experimental phase in 2022 and has since become a standard tool for marketing organizations. Worldwide adoption of GenAI enables businesses to generate advertising content, product descriptions, automated email messages, and artificial spokespersons (Haleem, 2022; Hardcastle, 2025). Research indicates that AI-based personalization improves customer interactions and sales performance but raises doubts about content authenticity, data protection, and ethical practice (Teepapal, 2025; Park, 2024). Transparency in AI systems is a key determinant of trust: users accept AI-assisted marketing more readily when systems disclose their generative capabilities (Ribeiro, 2025; Huschens et al., 2023).

Multiple industry reports show that professionals both support and doubt the technology. Research by the Nuremberg Institute demonstrates that transparency alone fails to build trust; consumers also want proof of authentic practices and ethical conduct (Buder et al., 2024). Advertising research shows that AI-generated creative content can improve audience interaction, yet some audience segments develop aversion toward algorithmic content (Wikipedia, 2023; Whittaker, 2025). The emerging global consensus is that GenAI lets businesses deliver personalized content at scale but disrupts traditional values of authentic human relationships and original creative work (Chen, 2024; Zhang, 2025; Armstrong, 2024). According to Accenture and Adweek reports, the digital marketing industry must balance technological progress with ethical standards to maintain enduring customer trust (Cavender, 2025; Reinwald, 2024).

GenAI adoption is accelerating worldwide, yet consumers express doubts about AI-generated content because it lacks a human touch, creates brand identity problems, and leaves content origins unclear (DALIM ES, 2025; Ndash Blog, 2023).

National (Indian) Scenario

The Indian marketing environment now uses GenAI tools across the e-commerce, fintech, and FMCG sectors, creating new ways for customers to engage with advertisements (NYIT News, 2024). AI chatbots and recommendation engines in India influence purchase decisions, but consumers doubt their trustworthiness because of inconsistent platform standards (CustomerExperienceDive, 2024; Column Five Media, 2024). The lack of clear disclosure about AI-generated content has left consumers unsure of what is authentic, a pattern that mirrors worldwide concerns (Business Insider, 2025; Lifewire, 2024).

Indian marketers' use of generative images and vernacular text for cultural relevance remains limited because of insufficient regulatory control and limited knowledge of ethical AI practices (Reuters, 2025). Using AI to improve customer interaction also creates problems: it raises privacy concerns and enables fake advertising content that worries tech-savvy consumers (De & Vats, 2025; Mukherjee, 2024). Indian consumers' widely varying digital skill levels make the country an ideal environment for studying how AI-generated marketing content affects trust and perception during the national AI adoption process (Times of India, 2025; Wikipedia, 2024).

BACKGROUND AND SIGNIFICANCE

The marketing industry uses GenAI tools, including GPT-based text generators and diffusion-based image models, to produce creative content at scale, generate automated responses, and personalize messaging (Haleem, 2022). The capabilities of these tools require marketers to make ethical decisions that affect how much customers trust their brands. Research shows conflicting results on AI personalization: it improves customers' perception of value and content relevance, but privacy breaches and hidden operations damage trust (Hardcastle, 2025; Teepapal, 2025). Brands that reveal their AI decision-making processes gain credibility, whereas brands that hide their AI operations create feelings of deception among customers (Park, 2024; Kostrz, 2024).

The emergence of deepfake advertising and AI-generated influencers has made it difficult for brands to maintain an authentic image (Whittaker, 2025; Chen, 2024). Responsible use of digital tools strengthens consumer relationships, but unethical or hidden practices harm brand reputation in the long run (Reinwald, 2024; Armstrong, 2024). The academic community and practitioners need to understand how customers weigh the usefulness of AI-generated content against their desire for brand authenticity and their need for privacy protection.

Why This Study Matters

This research addresses three essential requirements that need immediate attention. First, brands must establish whether GenAI content builds or damages trust before its use becomes widespread (Click Consult, 2024; DALIM ES, 2025). Second, despite numerous literature reviews and conceptual analyses, the field lacks empirical studies that use pre/post exposure designs to test the effects of GenAI content (Ribeiro, 2025; Nature Communications, 2025). Third, understanding consumer reactions will help policymakers and industry associations create proper disclosure standards (Ndash Blog, 2023). The research combines a systematic literature review with empirical testing through paired t-tests and chi-square analyses to connect theoretical knowledge with quantifiable behavioural results.

OBJECTIVES

  1. To systematically evaluate the marketing literature on the impact of Generative AI on consumer trust and brand perception.
  2. To define five essential GenAI marketing constructs: perceived usefulness, perceived authenticity, privacy concern, transparency, and brand trust.
  3. To investigate how AI-generated marketing content affects consumer trust and brand perception compared to human-made content.
  4. To deliver strategic advice and ethical standards that organizations should follow when implementing transparent AI marketing strategies.

RESEARCH QUESTIONS

  1. What does the existing marketing literature reveal about the impact of Generative AI on consumer trust and brand perception?
  2. Which constructs (perceived usefulness, perceived authenticity, privacy concern, transparency, and brand trust) shape consumer responses to GenAI marketing content?
  3. How does AI-generated marketing content affect consumer trust and brand perception compared to human-made content?
  4. What strategic and ethical guidelines should organizations follow when implementing transparent AI marketing strategies?

RESEARCH GAP

Controlled studies of how consumers react to GenAI-generated marketing materials remain scarce (Ribeiro, 2025). Existing research on algorithmic transparency focuses on technical domains, while responses to deepfakes and AI-generated images have been studied mostly with qualitative methods (Huschens et al., 2023; Whittaker, 2025). Few quantitative studies compare human-made marketing content against AI-generated content while varying whether participants receive full disclosure or no information about the content's origin (Zhang, 2025; Kostrz, 2024). Research on AI marketing ethics also lacks sufficient data from developing nations, including India (NYIT News, 2024). This study conducts a mixed-method SLR and a pre/post experimental design to establish new evidence about AI marketing ethics.

REVIEW OF LITERATURE: CONSTRUCTS / RESEARCH VARIABLES

Perceived Usefulness (PU)

Consumers' perception of usefulness reflects their belief that AI-generated marketing content enhances message value and delivery speed (Haleem, 2022; Ribeiro, 2025). AI personalization technology improves customer engagement by delivering customized promotions and fast response times (Teepapal, 2025). However, the positive effect of usefulness on trust does not persist when users suspect the content contains false or deceptive elements (Wikipedia, 2023; DALIM ES, 2025). Research shows that brand perception benefits from PU, but authenticity moderates this positive relationship (Kostrz, 2024).

Perceived Authenticity (PA)

Authenticity exists when consumers view brands as being genuine and sincere in their actions. The perfect appearance of AI content makes consumers question its authenticity according to Chen (2024) and Whittaker (2025). Brands need to reveal their AI content usage to maintain authenticity in their advertising that incorporates deepfakes and AI avatars (Mukherjee, 2024; Cavender, 2025). Consumer trust in brands decreases when people discover synthetic content exists after they have already interacted with the brand (Business Insider, 2025; Ndash Blog, 2023).

Privacy Concern (PC)

The implementation of AI personalization requires extensive data collection which creates privacy vulnerabilities according to Hardcastle (2025) and Teepapal (2025). Research indicates that privacy concerns affect how people trust personalized services (Reinwald, 2024; Lifewire, 2024). People who feel strongly about their privacy rights tend to refuse data sharing even when they understand how personalization works (Zhang, 2025). The growing intrusiveness of AI-driven advertisements requires organizations to find a balance between delivering value and protecting user privacy (Wikipedia, 2024).

Transparency / Disclosure (TR)

The practice of transparency requires organizations to explain their AI systems while simultaneously marking content that AI has produced (Park, 2024; Huschens et al., 2023). The practice of clear disclosure helps people feel more confident about AI systems while making them appear more fair (De & Vats, 2025). The Reuters (2025) and TechTarget (2024) research demonstrates that AI material labeling serves as a crucial step to prevent false information from spreading. The practice of transparency creates direct trust benefits while it helps to reduce the adverse impacts which stem from low authenticity levels (Nature Communications, 2025).

Brand Trust / Brand Perception (BT)

Brand trust represents the consumer willingness to depend on and interact with a brand (Ribeiro, 2025; Armstrong, 2024). The combination of perceived usefulness and transparency between consumers and brands leads to increased trust but privacy concerns and doubts about authenticity work against trust development (DALIM ES, 2025; NYIT News, 2024). Research by Business Insider (2025) and Column Five Media (2024) shows that brands which either decline AI usage or explain its functions achieve better engagement and credibility with their audience.

RESEARCH DESIGN

The research design combines Systematic Literature Review (SLR) with quantitative pre–post experimental survey data to achieve explanatory results.

The study presented two marketing advertisements for identical products to 100 participants:

  1. A human-made advertisement and
  2. A marketing advertisement produced by AI through GenAI tool usage.

The participants evaluated their trust levels and brand perception on a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree), both before and after viewing the AI-generated content.

The research aimed to determine if AI-generated marketing content affects consumer trust and brand perception at a significant level.

The study tested two research hypotheses through Paired Sample t-test and Chi-square test.

HYPOTHESES

Table 1: Research Hypotheses and Statistical Tests

Hypothesis 1: Paired Sample t-Test

Objective

To test whether consumer trust scores differ before and after exposure to AI-generated marketing material.

Dummy Data Summary

Table 2: Descriptive Statistics for Consumer Trust Scores

Test Statistics

Table 3: Paired Sample t-Test Results for Consumer Trust

Interpretation

Since p = 0.0003 < 0.05, we reject the null hypothesis (H₀) and accept the alternative hypothesis (H₁).

This indicates that exposure to AI-generated marketing content significantly reduces consumer trust compared to pre-exposure levels.

Consumers appear to trust human-created advertisements more than AI-generated ones, likely due to perceived inauthenticity or lack of emotional resonance.
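To illustrate the paired-sample procedure described above, the following sketch runs the same test on hypothetical pre/post trust scores (illustrative values only, not the study's actual data; the respondent count and scores are assumptions):

```python
import numpy as np
from scipy import stats

# Hypothetical 5-point Likert trust scores for 10 respondents
# (illustrative only; not this study's data, which used 100 participants)
pre_exposure = np.array([4, 5, 4, 3, 4, 5, 4, 4, 3, 5])
post_exposure = np.array([3, 4, 3, 3, 3, 4, 3, 4, 2, 4])

# Paired sample t-test: H0 states the mean of (pre - post) differences is zero
t_stat, p_value = stats.ttest_rel(pre_exposure, post_exposure)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

if p_value < 0.05:
    # A positive t with pre > post indicates trust declined after exposure
    print("Reject H0: trust changed significantly after exposure")
```

The test pairs each respondent's two scores, so it controls for individual differences in baseline trust, which is why it suits the pre/post design used here.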

Hypothesis 2: Chi-Square Test of Independence

Objective

To determine whether there is a significant association between perceived authenticity (high vs. low) and brand perception (positive vs. negative) after exposure to AI-generated advertisements.

Dummy Cross-Tabulation

Table 4: Cross-Tabulation of Perceived Authenticity and Brand Perception

Chi-square Test Results

Table 5: Chi-Square Test Results for Authenticity and Brand Perception

Interpretation

Since p = 0.001 < 0.05, we reject the null hypothesis (H₀) and accept the alternative hypothesis (H₁).

The research findings show that perceived authenticity creates a significant impact on how customers view brands.

The study shows that consumers who found the AI-generated content authentic maintained positive brand perceptions, while those who detected inauthenticity developed negative brand opinions.
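The chi-square test of independence above can be sketched on a hypothetical 2×2 cross-tabulation (the cell counts below are assumptions for illustration, not the study's Table 4):

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 cross-tabulation (illustrative; not the study's Table 4)
# rows: perceived authenticity (high, low)
# cols: brand perception (positive, negative)
observed = np.array([[40, 10],
                     [12, 38]])

# chi2_contingency compares observed counts with the counts expected
# under independence of the two variables (Yates correction applies for 2x2)
chi2, p_value, dof, expected = stats.chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Reject H0: authenticity and brand perception are associated")
```

With one degree of freedom, any chi-square statistic above the 3.84 critical value is significant at the 0.05 level, matching the rejection rule used in the interpretation above.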

Summary of Hypothesis Testing

Table 6: Summary of Hypothesis Testing Results

Conclusion of Data Analysis

The statistical results confirm that:

  1. Consumer trust declines after exposure to AI-generated marketing content (supporting H₁).
  2. Perceived authenticity significantly influences brand perception (supporting H₂).

The research results validate the theoretical model which shows GenAI improves personalization but threatens consumer trust unless organizations keep transparency and authenticity elements visible.

Marketers should disclose AI usage, present authentic content, and implement ethical AI management systems to protect brand credibility in AI-based communication channels.

CONCLUSION

The research investigated how Generative Artificial Intelligence (GenAI) interacts with consumer psychology through its impact on consumer trust and brand perception when using AI-generated marketing content. The research used both systematic literature review and controlled experimental design to demonstrate that AI-generated marketing content produces significant changes in consumer trust and perceived authenticity.

The paired t-test results showed that AI-generated advertisements reduced consumer trust levels more than advertisements created by humans because people doubt synthetic content. The chi-square analysis demonstrated that brand perception depends heavily on how authentic consumers perceive the content. The perception of AI content authenticity by consumers led them to maintain positive brand assessments but those who saw the content as fake or unpersonal developed lower trust levels and weaker brand evaluations.

The research shows that AI marketing tools create two opposing effects because they help businesses achieve faster customization at reduced costs but they can damage brand trust when used without transparency. The maintenance of consumer trust requires organizations to establish governance systems and disclose AI usage and make efforts to maintain human-like brand authenticity. The research provides empirical evidence to support theoretical models about GenAI content effects on trust dynamics while connecting abstract concepts to real-world behavioral data.

LIMITATIONS

This study has several limitations that should be acknowledged.

Sample Size and Representativeness:

The study used 100 participants, most of whom were urban digital users. Results might differ for consumers of other age groups, educational backgrounds, and geographic areas with distinct levels of digital competence.

Experimental Simplicity:

The study used only two stimulus types: human-made content and AI-generated content. Real-world marketing operates across multiple platforms and produces more complex consumer reactions than this design captures.

Short-Term Exposure Effect:

The research assessed how participants responded to AI-generated content right after exposure. Research conducted over time would show how consumer trust develops through multiple encounters with AI-generated content.

Limited Constructs:

The study focused on four variables: usefulness, authenticity, privacy concerns, and transparency. It excluded other potentially important variables such as emotion, cultural orientation, and previous AI experience.

Controlled Environment:

The research environment excluded outside factors but it did not duplicate actual consumer behavior that occurs on social media platforms and e-commerce websites.

Future Research Directions

The research findings from this study create multiple opportunities for future academic investigation.

Longitudinal Trust Measurement:

Research should monitor consumer trust development in AI-based marketing systems throughout multiple brand encounters.

Cross-Cultural Comparisons:

Research studies between nations with different AI adoption levels (India versus USA) would reveal how cultural elements affect consumers' perception of authenticity and trust development.

Advanced AI Disclosure Models:

Research should analyze different disclosure methods which include AI-generated content statements and human-AI collaboration announcements and complete absence of disclosure statements to determine their effects on consumer behavior.

Psychological and Emotional Dimensions:

Research that includes affective variables such as emotional connection, empathy, and perceived warmth would improve understanding of how people interact with AI-generated content.

Platform-Specific Studies:

Research studies focused on particular digital environments such as social media and e-commerce and video platforms should analyze how users perceive AI trust systems.

Integration of Neuro-Marketing and Eye-Tracking:

Future research using neuroscientific tools (EEG and eye-tracking) will provide advanced implicit measures to study trust and authenticity responses which go beyond self-reported data.

Overall Contribution

The research delivers timely empirical findings on how AI-created marketing materials affect customer trust and brand image, showing that authenticity and transparency are essential for building brand credibility in the era of generative marketing. It provides theoretical value through construct validation and practical benefits through specific recommendations for marketers, regulators, and educators who design responsible AI-based marketing approaches.

References:

  1. Armstrong, J. (2024, June 25). Consumer trust: Will AI erode authenticity in marketing? Quirks.
    https://www.quirks.com/articles/consumer-trust-will-ai-erode-authenticity-in-marketing
  2. Buder, M., et al. (2024). Consumer attitudes toward AI-generated marketing content. Nuremberg Institute for Market Decisions (NIM).
    https://www.nim.org/en/publications/detail/transparency-without-trust
  3. Business Insider. (2025, October). Aerie’s promise not to use AI in ads is its most popular Instagram post in a year. Business Insider.
    https://www.businessinsider.com/aerie-against-ai-in-ads-most-liked-instagram-post-2025-10
  4. Cavender, E. (2025, May). Most influencers use AI to create content but consumers are skeptical. Adweek.
    https://www.adweek.com/commerce/most-influencers-use-ai-to-create-content-but-consumers-are-skeptical/
  5. Chen, Y. (2024). Exploring the effect of deepfake-based advertising on consumer attitudes. Working Paper, Texas A&M International University.
    https://www.tamiu.edu/cswht/documents/wp-2024-001-chen.pdf
  6. Click Consult. (2024). New research on customer perception of AI in marketing.
    https://www.click.co.uk/insights/new-research-on-customer-perception-of-ai-in-marketing/
  7. Column Five Media. (2024). New study: 82.1% of consumers can spot AI-generated content.
    https://www.columnfivemedia.com/new-study-82-1-of-americans-can-spot-ai-generated-content/
  8. Customer Experience Dive. (2024). Influencers sway consumers but authenticity loses some clout, study finds.
    https://www.customerexperiencedive.com/news/influencer-creator-generative-ai-marketing-report/714367/
  9. DALIM ES. (2025, June 18). How to preserve visual trust in an AI-driven marketing world. DALIM Blog.
    https://www.dalim.com/dalim-software-blog/how-to-preserve-visual-trust-in-an-ai-driven-marketing-world
  10. De, S., & Vats, A. (2025, March 12). Unmask it! AI-generated product review detection in Dravidian languages. arXiv.
    https://arxiv.org/abs/2503.09289
  11. Haleem, A. (2022). Artificial intelligence (AI) applications for marketing. ScienceDirect.
    https://www.sciencedirect.com/science/article/pii/S2666603022000136
  12. Hardcastle, K. (2025). Understanding customer responses to AI-driven personalization. Journal of Advertising Research, 65(2), 1–18.
    https://doi.org/10.1080/00913367.2025.2460985
  13. Huschens, M., Briesch, M., Sobania, D., & Rothlauf, F. (2023). Do you trust ChatGPT? – Perceived credibility of human and AI-generated content. arXiv.
    https://arxiv.org/abs/2309.02524
  14. Kostrz, K. (2024). The effect of perceived authenticity on consumers’ willingness to buy AI-generated products (Bachelor’s thesis, Erasmus University Rotterdam).
    https://thesis.eur.nl/pub/73259/Thesis-Kostrz-560566.pdf
  15. Lifewire. (2024, August 30). AI is making it impossible to trust product and app reviews. Lifewire.
    https://www.lifewire.com/ai-making-it-hard-to-trust-reviews-8704543
  16. Mukherjee, A. (2024, March 17). Safeguarding marketing research: The generation, identification, and mitigation of AI-fabricated disinformation. arXiv.
    https://arxiv.org/abs/2403.14706
  17. Nature Communications. (2025). Impact of artificial intelligence on branding: A bibliometric review and global research agenda. Nature Communications.
    https://doi.org/10.1038/s41599-025-04488-6
  18. Ndash Blog. (2023). Innovation vs. integrity: The impact of AI on brand authenticity and consumer trust.
    https://www.ndash.com/blog/innovation-vs-integrity-the-impact-of-ai-on-brand-authenticity-and-consumer-trust
  19. New York Post. (2025, April 26). Majority of Americans trust what’s online less than ever before: Poll. New York Post.
    https://nypost.com/2025/04/26/lifestyle/majority-of-americans-trust-whats-online-less-than-ever-before/
  20. NYIT News. (2024). Are messages from robots trustworthy? NYIT News.
    https://www.nyit.edu/news/articles/do-customers-perceive-ai-written-communications-as-less-authentic/
  21. Park, K. (2024). Beyond the code: The impact of AI algorithm transparency on trust. Information Systems Journal, 34(4), 421–445.
    https://doi.org/10.1016/j.isj.2024.04.001
  22. Reinwald, P. (2024, April). Digital tools, including AI, alter consumer trust and purchasing decisions. Phys.org.
    https://phys.org/news/2024-04-digital-tools-ai-consumer-decisions.html
  23. Reuters. (2025, July 11). UN report urges stronger measures to detect AI-driven deepfakes. Reuters.
    https://www.reuters.com/business/un-report-urges-stronger-measures-detect-ai-driven-deepfakes-2025-07-11/
  24. Ribeiro, A. (2025). The impact of artificial intelligence on consumer behavior: A systematic literature review. Electronic Commerce Research, 25(1), 45–72.
    https://doi.org/10.1007/s10660-025-10063-7
  25. TechTarget. (2024, February 29). AI washing explained: Everything you need to know. TechTarget.
    https://www.techtarget.com/search/ai/definition/ai-washing
  26. Teepapal, T. (2025). AI-driven personalization: Unraveling consumer responses. Journal of Interactive Marketing, 62, 101–118.
    https://doi.org/10.1016/j.intmar.2025.03.004
  27. Whittaker, L. (2025). Examining consumer appraisals of deepfake advertising. Journal of Advertising, 54(3), 233–250.
    https://doi.org/10.1080/00218499.2025.2498830
  28. Wikipedia. (2023). Algorithm aversion. In Wikipedia.
    https://en.wikipedia.org/wiki/Algorithm_aversion
  29. Wikipedia. (2024). AI washing. In Wikipedia.
    https://en.wikipedia.org/wiki/AI_washing
  30. Zhang, L. (2025). The impact of generative AI images on consumer attitudes. Business & Management, 15(10), 395.
    https://doi.org/10.3390/bm15100395