
Rise of the flattery algorithm and its implications for business coaching

Author:

Johanna Mäkeläinen

Lecturer
Haaga-Helia University of Applied Sciences

Published: 21.05.2025

In the past decade, social media platforms have mastered the art of tailoring content to individual preferences. By analysing every click, like and scroll, recommender algorithms deliver a steady stream of personalised posts that maximise user engagement – and with it, dopamine-driven reinforcement. In the same way, generative AI tools are evolving into ‘flattery algorithms’, producing output specifically designed to please and affirm their users’ tastes and viewpoints.

Haaga-Helia’s Upbeat project encourages young immigrant entrepreneurs to improve their business skills and ideas by utilizing various generative AI tools. However, it is critical that the trainers also teach the entrepreneurs AI literacy skills. In this article, I will discuss the potential implications of the flattery algorithm and how it may influence entrepreneurs using generative AI tools to develop their businesses.

The next dopamine loop

Social media’s recommender algorithms are explicitly engineered to trigger the brain’s reward pathways: each novel stimulus or unexpected ‘like’ produces a dopamine spike that encourages further scrolling. When we receive a perfectly tailored compliment, our brains interpret the personalised feedback as a social reward. Over time, reliance on machine-generated affirmation could condition us to seek faster, more consistent hits of validation from our digital devices.

While recommender systems reinforce our existing interests, generative AI can go further by producing content that flatters our identity. Large language models (LLMs) effectively learn what pleases you and serve it back. Fine-tuning and Reinforcement Learning from Human Feedback (RLHF) have dramatically improved LLMs’ ability to sustain extended, on-topic conversations.
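As a rough illustration of the mechanism (a toy sketch, not a real RLHF pipeline – the word lists and the `proxy_reward` function below are invented for this example), consider what happens when a reward signal is distilled from approval-biased human feedback: optimising against that proxy systematically selects the flattering reply over the critical one.

```python
# Toy sketch: if human raters tend to give a 'thumbs up' to agreeable
# answers, a reward proxy learned from that feedback favours flattery,
# and picking the highest-scoring candidate returns the flattering reply.

AGREEABLE_WORDS = {"great", "brilliant", "love", "perfect", "excellent"}
CRITICAL_WORDS = {"however", "risk", "weak", "unclear", "reconsider"}

def proxy_reward(response: str) -> float:
    """Hypothetical reward model distilled from approval-biased feedback:
    +1 per flattering word, -1 per challenging word."""
    words = [w.strip(".,") for w in response.lower().split()]
    return (sum(w in AGREEABLE_WORDS for w in words)
            - sum(w in CRITICAL_WORDS for w in words))

def best_of_n(candidates: list[str]) -> str:
    """Return the candidate the reward proxy scores highest."""
    return max(candidates, key=proxy_reward)

candidates = [
    "Brilliant idea, the market will love it. Perfect plan.",
    "The idea has promise, however the pricing is unclear and risky.",
]
print(best_of_n(candidates))  # the flattering reply wins
```

Real reward models are neural networks rather than word counts, but the incentive is the same: whatever the raters reward, the model learns to produce more of.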

Users value systems that feel personable. By modelling emotional cues and employing sentiment-aware decoding, LLMs can respond not only with the right information but with the right affective stance, creating the impression of genuine conversation.

Research by Sharma, Liao and Xiao (2024) into LLM-powered search systems has already documented this tendency toward selective exposure. In controlled experiments, participants using conversational search with opinionated LLMs engaged in more biased information querying than with traditional search engines – reinforcing pre-existing beliefs rather than opening new perspectives.

Always a winning business idea?

Entrepreneurs are increasingly turning to generative AI for brainstorming and business planning. However, studies indicate a risk of automation bias that inflates confidence and stifles critical evaluation. As we grow accustomed to algorithmic affirmation, real-world business mentoring – which is sometimes critical – may feel less rewarding in comparison.

David Sweenor (2024) argues that LLMs deliver agreeable, flattering suggestions that reinforce entrepreneurs’ original ideas rather than challenging their assumptions. We might unconsciously prefer the polished version of our own preferences over genuine dialogue that broadens our perspectives.

An article in Neuroscience News (2025) reports that AI systems exhibit overconfident and risk-averse patterns mirroring human biases, potentially amplifying entrepreneurs’ own cognitive distortions. Moreover, automation bias research reveals that users tend to accept AI outputs uncritically, reducing their propensity to verify or seek alternative perspectives.

Otis et al. (2023) conducted a five-month field experiment with 640 Kenyan entrepreneurs. ChatGPT-based advisory tools boosted revenues overall but produced wide dispersion: high performers benefited most, while others who over-relied on AI guidance pursued misguided strategies and suffered from diminished real-world validation.

AI literate business coaching

Long and Magerko (2020) define AI literacy as a set of competencies that enables individuals to critically evaluate AI technologies, communicate and collaborate effectively with AI, and use AI as a tool. This is crucial to understand, as all generative AI models are prone to hallucinations, even though search and reasoning abilities are making them more reliable.

Young entrepreneurs receiving business training and coaching in Haaga-Helia’s Upbeat project learn that they are always in charge of, and accountable for, all business decisions. No matter how tempting it is to rely on ChatGPT as their know-it-all business mentor, they have to develop a critical mindset towards AI tools.

AI literacy is incorporated into the curriculum at different levels: as theory components, where the entrepreneurs learn the basics of AI and how the tools work, and as practical exercises, where they try commercial AI tools and compare them to AI-powered Smart Guides, built by Haaga-Helia’s experts. Group discussions and personal reflection with the custom-built AI Learning Assistant tool make sure that these topics are covered from different angles.

It is important to teach entrepreneurs that generative AI can liberate creativity, personalise learning and augment our capabilities in unprecedented ways. Yet its ability to flatter and affirm risks reinforcing biases and conditioning us into new forms of digital dependency. As we integrate generative AI into business coaching, it is essential to remind entrepreneurs of the hard work required for a successful business, no matter how much the tools flatter their first business ideas.

References

Long, D. & Magerko, B. 2020. “What is AI Literacy? Competencies and Design Considerations”. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. ACM. pp. 1–16.

Neuroscience News. 2025. AI Thinks Like Us: Flaws, Biases, and All, Study Finds. Accessed 24.4.2025.

Otis, N. G., Clarke, R. P., Delecourt, S., Holtz, D., & Koning, R. 2023. The Uneven Impact of Generative AI on Entrepreneurial Performance.

Sharma, N., Liao, V. & Xiao, Z. 2024. Generative Echo Chamber? Effect of LLM-Powered Search Systems on Diverse Information Seeking. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, Article 1033, pp. 1–17.

Sweenor, D. 2024. The Yes-Man in the Machine: Avoiding the AI Sycophancy Echo Chamber. LinkedIn. Accessed 23.4.2025.

ChatGPT 4.5 was used to finalise the wording of the text (search engine optimisation and alignment).

Picture: Shutterstock