This study was designed to explore the intricate dynamics among AI-induced job insecurity, perceived contract breach (PCB), pro-environmental behavior at work (PEBW), and the moderating influence of ethical leadership. The findings shed light on the ways in which technological advancements, especially the implementation of AI, affect employee behaviors and organizational sustainability initiatives (see Fig. 3).
Contrary to our initial hypothesis (H1), we did not observe a direct negative association between AI-induced job insecurity and PEBW. This unexpected result suggests that the relationship between AI-induced job insecurity and pro-environmental behaviors is more complex than initially theorized, and it aligns with recent literature indicating that the effect of job insecurity on discretionary behaviors is not consistently direct but may operate through mediating mechanisms (Shoss et al., 2023; Sverke et al., 2019). The absence of a direct link underscores the importance of our mediation hypothesis (H4) and highlights the need to consider intervening psychological processes when examining the effects of AI-induced job insecurity on employee behaviors.
Several alternative mechanisms might also explain this finding. First, AI-induced job insecurity may trigger a complex set of reactions with counterbalancing effects on PEBW: while job insecurity might reduce some employees’ willingness to engage in discretionary behaviors, it might simultaneously motivate others to engage in PEBW as a form of impression management or job preservation (Bolino, 1999). Second, the relationship between AI-induced job insecurity and PEBW might be moderated by factors not captured in our model, such as environmental values (Kim et al., 2017) or perceived organizational support (Paillé & Raineri, 2015); future research could explore these potential moderators. Lastly, the timeframe of our study may not have been sufficient to capture the full effects of AI-induced job insecurity on PEBW, suggesting the need for longer-term longitudinal studies (Sverke et al., 2019).
Our results robustly affirm the positive association between AI-induced job insecurity and PCB (H2). This aligns with psychological contract theory, which argues that significant organizational changes can lead to perceptions of breached implicit agreements between employees and employers (Rousseau & McLean Parks, 1993). The deployment of AI technologies, perceived as jeopardizing job security, seems to prompt a reassessment of the employment relationship, potentially diminishing the perceived mutual obligations between employees and the organization. This insight expands our comprehension of how technological shifts can alter psychological contracts in the workplace.
The study further verifies the negative relationship between PCB and PEBW (H3), which is in line with social exchange theory (Cropanzano & Mitchell, 2005). When employees sense a breach in their psychological contract, they may respond by reducing their involvement in discretionary behaviors that benefit the organization, including pro-environmental activities. This finding highlights the significance of maintaining robust psychological contracts to promote organizational citizenship behaviors, especially those connected to environmental sustainability.
Importantly, our analysis substantiates the mediating role of PCB in the linkage between AI-induced job insecurity and PEBW (H4). This offers a deeper understanding of the mechanisms through which technological shifts affect employee behaviors. It suggests that AI-induced job insecurity does not directly diminish pro-environmental behaviors; instead, it influences these behaviors through its impact on employees’ perceptions of their psychological contracts. This mediation aligns with the Context-Attitudes-Behavior (CAB) framework (Guagnano et al., 1995), illustrating how contextual factors (AI deployment) affect attitudes (perceptions of psychological contract breach), which in turn shape behaviors (PEBW).
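In formal terms, and as a simplified illustration rather than the exact structural model estimated in this study, the hypothesized mediation can be expressed in the standard product-of-coefficients form:

indirect effect = a × b,  total effect c = c′ + (a × b)

where a denotes the path from AI-induced job insecurity to PCB (H2), b the path from PCB to PEBW (H3), and c′ the direct path from AI-induced job insecurity to PEBW (H1). Under this decomposition, support for H4 corresponds to a × b differing reliably from zero, typically assessed with a bootstrapped confidence interval.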
The moderating role of ethical leadership in the relationship between AI-induced job insecurity and PCB (H5) provides valuable insights into possible mitigation strategies for organizations implementing AI. This finding corroborates and extends previous research on ethical leadership (Brown & Treviño, 2006) to the realms of technological change and sustainability. Ethical leaders, by exhibiting fairness, integrity, and a commitment to employee well-being, appear to mitigate the adverse effects of AI-induced job insecurity on psychological contract perceptions. This underscores the pivotal role of leadership in navigating the human aspects of technological changes and sustaining employee involvement in sustainability initiatives.
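As a simple illustration of this buffering pattern (an illustrative regression specification, not necessarily the exact model estimated in our analysis), the moderation can be written as:

PCB = β0 + β1·JI + β2·EL + β3·(JI × EL) + ε

where JI is AI-induced job insecurity and EL is ethical leadership. H5 corresponds to β3 < 0: the slope of JI on PCB, β1 + β3·EL, becomes weaker as ethical leadership increases, a pattern conventionally probed with simple-slope tests at high and low levels of EL.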
Our findings offer several concrete recommendations for organizational leaders dealing with AI-induced job insecurity. First, leaders should establish regular channels for communicating about AI initiatives, their potential impacts on job roles, and the organization’s plans for workforce adaptation. This transparent communication strategy could include AI-focused town hall meetings and dedicated update newsletters (Brynjolfsson & McAfee, 2017). Additionally, organizations should invest in reskilling and upskilling programs that prepare employees for evolving job roles in an AI-enhanced workplace. For instance, companies like IBM have implemented ‘AI Ethics Boards’ that involve employees in AI-related decision-making processes (Fountaine et al., 2019). Organizations should also prioritize the development of ethical leadership capabilities, particularly among managers overseeing AI implementation, which could involve incorporating ethical decision-making modules into leadership training programs (Brown & Treviño, 2006). Furthermore, forming cross-functional teams that include both technical experts and employee representatives can help ensure that AI implementation considers both technological and human factors, potentially reducing job insecurity (Shrestha et al., 2019). Finally, organizations should create and communicate clear career development paths that account for the integration of AI, helping employees visualize their future in the company despite technological changes (Makarius et al., 2020).
Theoretical implications
This research offers several significant theoretical implications that contribute to and extend previous works in organizational behavior, environmental psychology, and technology management.
Firstly, this study advances the theoretical understanding of employee PEBW by integrating it with the emerging concept of AI-induced job insecurity. By doing so, it expands the scope of PEBW antecedents beyond traditional organizational and individual factors (Norton et al., 2015) to include technological disruption as a critical contextual influence. This integration bridges the gap between technological advancement literature and environmental sustainability research, offering a novel perspective on how emerging technologies shape employee behaviors. The findings contribute to the refinement of the Context-Attitudes-Behavior (CAB) framework (Guagnano et al., 1995) by demonstrating how AI, as a contextual factor, influences pro-environmental attitudes and behaviors. This expansion of the CAB framework to include technological contexts enhances its applicability in contemporary organizational settings and provides a more comprehensive model for understanding PEBW in the digital age.
Secondly, the exploration of psychological contract breach as a mediating mechanism offers valuable perspectives on the psychological mechanisms underlying the link between AI-induced job insecurity and PEBW. This contribution extends psychological contract theory (Rousseau, 1989) by examining how technological changes can alter employees’ perceptions of their implicit agreements with organizations. By demonstrating the role of PCB in the relationship, the study enriches our understanding of the cognitive and affective pathways through which job insecurity influences discretionary behaviors. This finding not only advances psychological contract theory but also contributes to the broader literature on employee responses to organizational change (Oreg et al., 2011), highlighting the importance of perceived organizational obligations in shaping employee behaviors during periods of technological disruption.
Thirdly, the investigation of ethical leadership as a moderating factor provides significant theoretical implications for leadership research within the realm of technological change and sustainability. By investigating how ethical leadership influences the relationship between AI-induced job insecurity and PEBW, this study extends ethical leadership theory (Brown & Treviño, 2006) into the domain of environmental sustainability and technological disruption. The findings contribute to our understanding of how leadership practices can weaken the negative influences of AI-induced job insecurity on pro-environmental behaviors, offering novel perspectives on the role of leadership in fostering organizational sustainability in turbulent technological environments. Integrating ethical leadership with AI and sustainability research provides a richer theoretical framework for understanding leadership effectiveness in the face of complex, contemporary organizational challenges.
Lastly, the study’s comprehensive theoretical approach, which integrates multiple frameworks including Social Information Processing Theory, the Uncertainty Management Model, and Social Exchange Theory, offers a robust foundation for understanding the intricate interplay among technological change, employees’ perceptions, leadership, and pro-environmental behaviors. This integrative approach advances theoretical development in organizational behavior by demonstrating how diverse theoretical perspectives can be synthesized to explain complex phenomena. The resulting theoretical model offers a comprehensive framework for future research, encouraging a more holistic approach to investigating organizational sustainability in the context of technological change. By illustrating how these theories can be combined to explain the complex associations between AI, job insecurity, psychological contracts, leadership, and PEBW, this research paves the way for more sophisticated theoretical models in organizational and environmental psychology.
In conclusion, these theoretical implications collectively deepen our understanding of PEBW in the context of technological disruption. They extend existing theories, bridge gaps between disparate research streams, and provide a solid foundation for future investigations into the complex dynamics of organizational sustainability in an increasingly digital world.
Practical implications
The current study has valuable practical implications for top management teams, executives, and practitioners navigating the complex intersection of artificial intelligence (AI) implementation, employee behavior, and organizational sustainability. These insights provide actionable strategies for fostering pro-environmental behaviors while managing the challenges associated with technological disruption.
The first key implication for organizations is the need for a strategic and employee-centric approach to AI implementation. The findings underscore the potentially harmful effects of AI-induced job insecurity on PEBW, highlighting the importance of carefully considering the human implications of AI adoption alongside technological and operational benefits. Top management teams should develop comprehensive change management strategies that address employee concerns about job security during AI implementation, in line with research by Brougham and Haar (2020) on managing employee perceptions during technological transitions. Implementing transparent communication protocols is crucial to keep employees informed about AI initiatives, their potential impacts, and the organization’s plans for workforce adaptation. Such clear communication can help mitigate uncertainty and reduce perceptions of PCB (Morrison & Robinson, 1997).
The second practical implication stems from the study’s findings on the mediating effect of PCB, emphasizing the importance of actively managing employee expectations and perceptions of organizational duties in the AI era. Organizations should regularly assess and realign psychological contracts to reflect the changing nature of work in AI-enhanced environments. This may involve explicitly addressing expectations around job roles, skill development, and job security in the context of AI adoption (Coyle-Shapiro et al., 2019). Developing policies and practices that demonstrate organizational commitment to employee well-being and career development, even as AI technologies are integrated into work processes, can help maintain a positive social exchange relationship and encourage reciprocal pro-environmental behaviors (Cropanzano & Mitchell, 2005).
Third, the moderating effect of ethical leadership on the relationship between AI-induced job insecurity and PCB has significant practical implications. This finding suggests that ethical leadership practices, such as transparent communication about AI implementation and demonstrating concern for employee well-being, can mitigate the negative effects of AI-induced job insecurity on psychological contract perceptions. For instance, leaders who openly discuss the potential impacts of AI on job roles and provide opportunities for skill development may help maintain the psychological contract, even in the face of technological uncertainty.
Our findings extend previous research on job insecurity and pro-environmental behaviors. While previous work such as Shoss et al. (2023) reported direct relationships between job insecurity and organizational citizenship behaviors, our results reveal a more complex picture. The non-significant direct path from AI-induced job insecurity to PEBW (β = −0.048, p > 0.05), coupled with the significant indirect effect through PCB, suggests that the impact of job insecurity on pro-environmental behaviors is more nuanced than previously thought. This aligns with recent calls in the literature for more complex models of employee behavior in the face of technological change (Brougham & Haar, 2018). Furthermore, our study is among the first to empirically demonstrate the role of ethical leadership in the context of AI-induced job insecurity. While Brown and Treviño (2006) established the general importance of ethical leadership, our findings specifically show its buffering effect during technological transitions. This contributes to the literature on leadership in the digital age (Shrestha et al., 2019).
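For readers who wish to probe such an indirect-only pattern in their own data, a minimal percentile-bootstrap sketch of the indirect effect is shown below. It assumes a pandas DataFrame with hypothetical column names (ai_job_insecurity, pcb, pebw) holding composite scores; it illustrates the general procedure rather than the exact analysis reported in this study.

```python
# Minimal percentile-bootstrap test of the indirect effect (a * b).
# Hypothetical file and column names; illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_data.csv")  # hypothetical file of composite scores

indirect = []
for _ in range(5000):
    boot = df.sample(n=len(df), replace=True)  # resample respondents with replacement
    a = smf.ols("pcb ~ ai_job_insecurity", data=boot).fit().params["ai_job_insecurity"]
    b = smf.ols("pebw ~ pcb + ai_job_insecurity", data=boot).fit().params["pcb"]
    indirect.append(a * b)

lo, hi = np.percentile(indirect, [2.5, 97.5])
print(f"95% bootstrap CI for the indirect effect: [{lo:.3f}, {hi:.3f}]")
# A confidence interval that excludes zero indicates a significant indirect effect,
# even when the direct path is non-significant.
```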
The fourth practical implication derives from the study’s comprehensive theoretical framework, suggesting the need for a holistic approach to organizational sustainability that integrates technological, human, and environmental considerations. Organizations should develop cross-functional teams that bring together expertise in AI implementation, human resources, sustainability, and ethics to ensure a balanced approach to organizational initiatives. Implementing sustainability scorecards or key performance indicators (KPIs) that include measures of AI adoption, employee well-being, and environmental performance can help organizations track progress across multiple dimensions of sustainability (Epstein, 2018).
Fifth, our findings suggest several specific leadership practices and organizational policies to mitigate AI-induced job insecurity and foster PEBW. Organizations should proactively redesign job roles to incorporate AI, clearly defining how human employees and AI will collaborate. This can reduce uncertainty and job insecurity, as demonstrated by companies like Accenture that have implemented ‘Human-AI collaboration’ workshops to help employees understand and adapt to their evolving roles (Wilson & Daugherty, 2018). Furthermore, organizations should create dedicated committees or task forces responsible for overseeing the ethical implementation of AI. These structures can help ensure that AI deployment considers employee well-being and job security, as exemplified by Microsoft’s AI ethics review process, which involves cross-functional teams to assess the potential impacts of AI projects on various stakeholders, including employees (Floridi et al., 2018). Organizations can also link AI implementation to environmental sustainability goals, potentially increasing employee engagement in PEBW. For example, Google’s DeepMind AI has been used to reduce energy consumption in data centers, a project that can inspire employees to consider AI’s potential for environmental benefits (DeepMind, 2016). Additionally, organizations should ensure that AI-driven decisions, especially those affecting job roles or employment status, are transparent and explainable. This can help maintain trust and reduce perceptions of psychological contract breach, as demonstrated by IBM’s development of AI explainability toolkits that provide clear explanations of AI-driven decisions to employees (Gunning & Aha, 2019). Lastly, establishing voluntary, employee-led groups focused on AI can provide a platform for employees to share concerns, learn about AI developments, and contribute ideas for ethical AI implementation, fostering a sense of involvement and potentially reducing job insecurity.
Sixth, while our study was conducted in South Korea, the implications of our findings may vary across different cultural contexts. For instance, the effect of ethical leadership on mitigating AI-induced job insecurity might be stronger in cultures with higher power distance, where leaders’ behaviors have a more significant impact on employees’ perceptions (Hofstede, 2001). Similarly, the relationship between psychological contract breach and PEBW might be influenced by cultural differences in individualism versus collectivism. In more collectivist cultures, employees might maintain pro-environmental behaviors despite perceived contract breaches due to a stronger sense of collective responsibility (Triandis, 1995).
Furthermore, the perception of AI-induced job insecurity itself might vary across cultures. In societies with stronger social safety nets or lifetime employment traditions, such as Japan, the impact of AI on job insecurity might be less pronounced (Kato & Kodama, 2018). Conversely, in countries with more fluid labor markets, like the United States, AI-induced job insecurity might have a more significant effect on employee attitudes and behaviors. Therefore, while our findings provide valuable insights, organizations operating in diverse cultural contexts should consider these cultural nuances when implementing AI and designing strategies to maintain psychological contracts and promote pro-environmental behaviors. Future research could explicitly examine these cross-cultural variations to provide more globally applicable insights.
Limitations and suggestions for future research
While this paper offers valuable perspectives on the relationships between AI-induced job insecurity, PCB, PEBW, and ethical leadership, it is important to acknowledge several limitations and to suggest avenues for future investigation.
First, while our three-wave time-lagged design offers several advantages over cross-sectional studies, it is not without limitations. The potential for attrition bias must be considered: our final response rate of 48.21% indicates that a substantial portion of initial participants did not complete all three waves. This attrition may have introduced bias if the characteristics of those who dropped out differed systematically from those who completed the study (Wolke et al., 2009). Moreover, the time gaps between data collection waves (5–6 weeks) may have influenced our results. While this interval was chosen to minimize common method bias and capture the dynamic nature of our constructs, it may not have been optimal for detecting all relevant changes. For instance, the effects of AI-induced job insecurity on psychological contract breach might manifest over a longer period, or conversely, might be more immediate and thus partially missed by our measurement intervals (Podsakoff et al., 2012).
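One practical way to gauge such attrition bias, sketched below under the assumption of a Wave-1 dataset with a completion indicator and hypothetical variable names, is to compare the baseline scores of completers and dropouts:

```python
# Illustrative attrition-bias check: compare Wave-1 scores of respondents
# who completed all three waves with those who dropped out.
# Hypothetical file and column names.
import pandas as pd
from scipy import stats

df = pd.read_csv("wave1_data.csv")
completers = df[df["completed_all_waves"] == 1]
dropouts = df[df["completed_all_waves"] == 0]

for var in ["ai_job_insecurity", "pcb", "pebw", "age", "tenure"]:
    t, p = stats.ttest_ind(completers[var], dropouts[var],
                           equal_var=False, nan_policy="omit")
    print(f"{var}: t = {t:.2f}, p = {p:.3f}")  # significant differences signal possible attrition bias
```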
Additionally, our study’s timeframe coincided with the ongoing COVID-19 pandemic, which may have introduced confounding factors. The pandemic accelerated digital transformation in many organizations, potentially amplifying AI-induced job insecurity. Simultaneously, it may have affected pro-environmental behaviors due to changed work arrangements (e.g., increased remote work). These pandemic-related factors may have influenced our results in ways that are difficult to isolate (Carnevale & Hatak, 2020).
Second, our study was conducted in South Korea, which may limit the generalizability of our results to other cultural contexts. The perception of job insecurity, the nature of psychological contracts, and the manifestation of pro-environmental behaviors can vary significantly across cultures (Rousseau & Schalk, 2000). Future research should consider cross-cultural comparisons to examine how these relationships may differ in various cultural contexts. This could involve replicating the study in multiple countries or conducting a multi-level analysis that incorporates national culture as a higher-level factor influencing individual-level relationships (Tsui et al., 2007).
Third, while our study focused on ethical leadership as a moderating factor, other leadership styles or organizational factors may also play crucial roles in shaping the relationships we examined. Future studies could explore the moderating effects of transformational leadership, servant leadership, or green leadership on the link between AI-induced job insecurity and PCB or PEBW (Eva et al., 2019; Robertson & Barling, 2013). Additionally, investigating organizational-level factors such as organizational culture, climate for innovation, or environmental management systems could offer a more comprehensive understanding of the contextual influences on these relationships (Norton et al., 2015).
Fourth, our measurement of AI-induced job insecurity, while adapted from established scales, may not fully capture the nuanced nature of job insecurity related to AI implementation. The rapid evolution of AI technologies and their varied applications across different job roles and industries suggest that a more fine-grained measure of AI-induced job insecurity might be necessary. Future research could develop and validate a more comprehensive scale that considers different aspects of AI-induced job insecurity, such as fears of job displacement, concerns about skill obsolescence, or anxieties about human-AI collaboration (Brougham & Haar, 2020; Nam, 2019).
Fifth, while our study examined PEBW as an outcome, future research could investigate a more diverse range of outcomes affected by AI-induced job insecurity and psychological contract breach. This could include other forms of organizational citizenship behaviors, job performance, innovation behaviors, or employee well-being (Zhao et al., 2007). Additionally, investigating potential positive consequences of AI implementation, including augmented job enrichment or new skill development opportunities, could offer a more balanced view of the impact of AI at work (Makarius et al., 2020).
Lastly, our study focused on individual-level perceptions and behaviors. However, the implementation of AI technologies often occurs at the team or organizational level. Future research could adopt a multi-level approach to examine how team-level or organization-level AI implementation strategies influence individual-level perceptions of job insecurity, PCB, and subsequent behaviors (Kozlowski & Klein, 2000). This could involve collecting data from multiple sources in an organization, such as employees, team leaders, and senior management, to gain a more holistic understanding of the phenomena.
Addressing these limitations and pursuing these future research directions could significantly advance our understanding of the complex interplay between technological advancements, employee perceptions, leadership, and PEBW. Future research could also explore digital transformations beyond AI and their impact on PEBW. For instance, the increasing adoption of Internet of Things (IoT) technologies in workplaces presents an interesting avenue for investigation. IoT devices can provide real-time feedback on resource consumption, potentially influencing employees’ environmental behaviors. Research could examine how this constant availability of environmental data affects PEBW and whether it interacts with factors like job insecurity or psychological contract breach (Marikyan et al., 2019).
Additionally, emerging forms of leadership, such as digital leadership or sustainable leadership, could be examined in relation to PEBW. Digital leadership, which emphasizes the ability to drive digital transformation while considering its human implications, might have unique effects on how employees perceive and respond to technological changes in terms of their pro-environmental behaviors (Cortellazzo et al., 2019). Similarly, sustainable leadership, which explicitly incorporates environmental considerations into leadership practices, could have significant implications for PEBW and might interact differently with AI-induced job insecurity compared to traditional leadership styles (Suriyankietkaew & Avery, 2016).
Finally, given the rapid advancement of AI technologies, longitudinal studies spanning several years could offer valuable insights into how the relationship between AI-induced job insecurity and PEBW evolves over time. Such studies could capture potential adaptation processes and long-term changes in employee attitudes and behaviors as AI becomes more integrated into work processes.