AI reduces cognitive load and improves operational flow, helping hospitality professionals reclaim time. However, as AI becomes more integrated into communication workflows, ethical and operational issues arise. In this article, drawing on our experience and work in the HosByte: Smart Omnichannel Sales in the Hospitality Industry project, we examine the risks, vulnerabilities, and unintended consequences of relying on AI to manage email inboxes.
Corporate digital responsibility
Lobschat et al. (2021) introduce Corporate Digital Responsibility (CDR) to address ethical and security issues in intelligent email management. CDR involves shared values guiding an organisation’s digital processes: creation, operation, inspection, and refinement of technology and data. A strong CDR framework benefits hospitality companies by reducing digital risks and boosting reputation, customer trust, and loyalty (Wirtz et al. 2022).
Wirtz et al. (2022) propose a CDR model grounded in the ethical and fair treatment of customers, emphasising their wellbeing and the protection of their privacy. Key measures include embedding shared CDR values across the organisation, actively involving employees through targeted training, developing specialised digital roles within operational units, and cultivating a culture of transparency. They also advocate for reducing customer coercion by offering meaningful alternatives and ensuring robust privacy safeguards.
These measures require clear communication among customers, partners, and employees, together with cybersecurity protocols that limit access to data. Techniques such as encryption, anonymisation, and data minimisation add protective layers, safeguarding customer data and limiting the damage of a potential breach.
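As a minimal illustration of the data-minimisation idea, a company might redact obvious personal identifiers from an email body before the text is passed to any external AI service. The pattern names and regular expressions below are illustrative assumptions only, not a production-grade anonymiser, which would need far more robust PII detection:

```python
import re

# Illustrative redaction patterns (assumed for this sketch); real deployments
# must handle names, addresses, and locale-specific formats as well.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def minimise(text: str) -> str:
    """Replace matched identifiers with placeholder tags so the identifiers
    never leave the organisation's own systems."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

message = "Hi, I'm arriving at 18:30. Reach me at +358 40 123 4567 or guest@example.com."
# Prints the message with [PHONE] and [EMAIL] placeholders in place of the identifiers.
print(minimise(message))
```

The key design choice is that redaction happens before the AI system ever sees the message, so minimisation does not depend on the downstream model's own privacy guarantees.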
Ethical considerations and risks of intelligent email management
While many applications of intelligent email management may appear benign, the ability of AI models to access email content and sender contact information raises serious ethical concerns, particularly around data privacy. In the hospitality industry, employees routinely receive emails from two critical stakeholder groups: customers and partners. These messages often contain sensitive information, including preferences, behavioural patterns, and personal identifiers (Chen 2025).
To exemplify the above, consider an apartment hotel where most guest communication happens via email. In this case, automated systems handle booking confirmations, access codes, and service requests. If an AI email assistant processes these exchanges, it gains access to guest arrival times, door codes, payment details, and personal preferences. Without explicit consent mechanisms, guests remain unaware their communications flow through AI systems.
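One operational safeguard the apartment-hotel scenario suggests is a consent gate: only messages from guests who have explicitly opted in are routed to the AI assistant, while all others default to human handling. The data structure and function names here are hypothetical, sketched purely to illustrate the pattern:

```python
from dataclasses import dataclass

# Hypothetical consent record; a real system would persist this in the
# property-management database and log when consent was given or withdrawn.
@dataclass
class Guest:
    email: str
    ai_processing_consent: bool = False

def route_message(sender: Guest, body: str) -> str:
    """Route an inbound email to the AI assistant only with explicit consent;
    everything else stays with human staff."""
    if sender.ai_processing_consent:
        return "ai_assistant"   # explicit opt-in: the model may read the message
    return "human_inbox"        # default: no consent, no AI processing

guest = Guest(email="guest@example.com")   # guest has never opted in
assert route_message(guest, "What is my door code?") == "human_inbox"
guest.ai_processing_consent = True         # explicit opt-in recorded
assert route_message(guest, "What is my door code?") == "ai_assistant"
```

Making "no consent" the default, rather than an opt-out, keeps the system aligned with the transparency principle discussed above.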
In our own work at Haaga-Helia, we process industry newsletters and research updates using AI tools to efficiently extract key insights from dozens of hospitality publications weekly. Yet each newsletter contains proprietary analysis and subscriber data. Where does efficiency cross into ethical ambiguity? Even when supporting legitimate academic work, the lack of transparency about data processing creates uncomfortable grey areas.
Concerns about data collection practices in hospitality are mounting, especially in relation to cybersecurity vulnerabilities, continuous data harvesting, and manipulative tactics such as hidden disclosures and forced consent mechanisms (Law, Ye & Lei 2025). This lack of transparency is evident in a routine task: senders are often unaware that their messages are being processed by AI systems, implying that no explicit consent has been provided for such use.
Beyond data breaches, the use of AI in digital systems can amplify traditional cyber threats, such as phishing and disinformation campaigns, while also introducing new vulnerabilities that give rise to AI-powered cyberattacks. Representative examples include data poisoning, where the training process of an AI model is maliciously manipulated to degrade its performance, and adversarial attacks, which involve subtly altering input data to mislead the system's output (Veprytska, Kharchenko & Illiashenko 2025). Even when the code and training processes appear straightforward, the resulting models can exhibit unpredictable behaviour, often functioning as opaque 'black boxes' that defy easy interpretation (Wirtz et al. 2022).
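A toy illustration of the adversarial-input idea: the naive keyword filter below stands in for a learned phishing classifier (an assumption made purely for illustration) and can be evaded by a trivially perturbed input that a human still reads the same way:

```python
# Naive keyword filter standing in for a trained phishing classifier.
SUSPICIOUS = {"password", "urgent transfer", "verify your account"}

def flags_message(text: str) -> bool:
    """Return True if the message contains any suspicious phrase."""
    lowered = text.lower()
    return any(term in lowered for term in SUSPICIOUS)

original = "Please verify your account password immediately."
# Adversarial perturbation: replacing Latin 'a' with the visually identical
# Cyrillic 'а' (U+0430) leaves the text readable to a human but breaks the
# literal string match, so the filter no longer fires.
perturbed = original.replace("a", "\u0430")

assert flags_message(original) is True
assert flags_message(perturbed) is False
```

Real classifiers are more robust than a keyword list, but the same principle applies: small, carefully chosen input changes can flip a model's output without changing the message's meaning to a human reader.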
To conclude, the use of AI for email management in hospitality should be guided by ethical principles including explainability, inclusiveness, beneficence (with a focus on customer wellbeing), autonomy and, above all, accountability — highlighting the legal responsibility of organisations in deploying AI systems responsibly (Law et al. 2025).
Across this and our previous article Hospitality rewired: the strategic advantage of intelligent email management, we have aimed to reflect on both the strategic benefits and ethical considerations and risks of intelligent email management, offering a foundation for more informed decision-making among hospitality stakeholders.
Hospitality Rewired is an article series exploring the role of artificial intelligence in hospitality operations, offering a critical lens on both its potential and its challenges—with a particular focus on operational efficiency.
Platform economy, artificial intelligence, service robotics, and XR technologies offer new opportunities for small and medium-sized enterprises (SMEs) in the hospitality sector to reach customers and enhance their business operations. The HosByte: Smart Omnichannel Sales in the Hospitality Industry project's outcomes support profitable and responsible growth for SMEs in the Uusimaa region. The project is co-financed by the European Union and the Helsinki-Uusimaa Regional Council and will be implemented from 09/2024 to 08/2026.


References
Chen, Z. 2025. Exploring the Transformative Potential of Artificial Intelligence in Hospitality: A Systematic Review of Applications and Challenges. Cornell Hospitality Quarterly.
Law, R., Ye, H. & Lei, S. S. I. 2025. Ethical artificial intelligence (AI): Principles and Practices. International Journal of Contemporary Hospitality Management, 37, 1, pp. 279-295.
Lobschat, L., Mueller, B., Eggers, F., Brandimarte, L., Diefenbach, S., Kroschke, M. & Wirtz, J. 2021. Corporate Digital Responsibility. Journal of Business Research, 122, pp. 875-888.
Veprytska, O., Kharchenko, V. & Illiashenko, O. 2025. Cybersecurity and Artificial Intelligence: Triad-Based Analysis and Attacks Review. Cybernetics and Information Technologies : CIT, 25, 3, pp. 156-185.
Wirtz, J., Kunz, W. H., Hartley, N. & Tarbit, J. 2022. Corporate Digital Responsibility in Service Firms and Their Ecosystems. Journal of Service Research, 26, 2, pp. 173-190.
The authors have used Grammarly for grammar and style checks.
Picture: Shutterstock