Korean Institute of Information Technology
[ Article ]
The Journal of Korean Institute of Information Technology - Vol. 23, No. 1, pp.31-42
ISSN: 1598-8619 (Print) 2093-7571 (Online)
Print publication date 31 Jan 2025
Received 03 Nov 2024; Revised 09 Jan 2025; Accepted 12 Jan 2025
DOI: https://doi.org/10.14801/jkiit.2025.23.1.31

Security Challenges and Ethical Considerations in the Adoption of Generative AI Technology

Tae-Hyung Kim*
*Ph.D. Candidate in ICT Convergence Engineering, Graduate School, DanKook University

Correspondence to: Tae-Hyung Kim Dept. of Future ICT Convergence, DanKook University Tel.: +82-2-598-7297, Email: playsumer@naver.com

Abstract

In the rapidly evolving landscape of Information Technology(IT), Generative AI has emerged as a groundbreaking force, reshaping user interactions and functionalities. This research aims to understand the antecedents influencing the acceptance of Generative AI, particularly focusing on security and ethical considerations, within the IT sector. Employing the PLS-SEM technique, the study analyzes responses from a sample of 212 university students to gain insights into their perceptions and acceptance intentions. The study reveals that perceived usefulness and perceived ease of use significantly impact acceptance intention. However, it also highlights the crucial roles of information security and ethical concerns, which are often underemphasized in traditional technology acceptance models. The findings indicate that while users value functional benefits, their concerns about security and ethics significantly shape their acceptance of Generative AI technologies.


Keywords:

generative artificial intelligence, information technology, user acceptance, security, ethical concerns

Ⅰ. Introduction

Generative Artificial Intelligence(AI) is rapidly transforming human interaction, productivity, and problem-solving across industries[1]. This innovative technology leverages advanced machine learning algorithms to produce human-like content, including text, images, and even audio, thereby opening new avenues for creativity and efficiency[2]. Among the most prominent examples of generative AI is ChatGPT, developed by OpenAI, which utilizes deep learning models to generate coherent, contextually relevant, and natural-sounding text[3]. Its versatility allows it to be applied across diverse tasks, including customer support, content creation, and educational assistance. These capabilities have positioned ChatGPT as a key player in the generative AI landscape, offering substantial potential for practical and academic applications.

The adoption of ChatGPT among university students has been particularly notable in recent years[4][5]. Students frequently use ChatGPT for academic tasks such as summarizing articles, generating ideas for assignments, drafting reports, and even coding assistance[6][7]. This growing reliance highlights the need to understand the factors influencing their acceptance intention. The Technology Acceptance Model(TAM), a widely recognized theoretical framework, serves as an appropriate lens for examining this phenomenon[8]. According to TAM, perceived usefulness and perceived ease of use are key determinants of technology acceptance. In the context of ChatGPT, perceived usefulness refers to how well the tool supports students in academic tasks, while perceived ease of use pertains to how effortlessly they can interact with the platform. By applying TAM, this study seeks to provide insights into how these factors shape acceptance intention among university students, addressing an area of growing academic interest.

While TAM explains much of the technology acceptance process, additional factors such as information security and ethical concerns have emerged as critical considerations in the adoption of generative AI technologies[9]-[12]. Data breaches and privacy issues have become increasingly common in AI-driven systems, raising significant security concerns among users. Beyond the traditional TAM factors, issues such as data privacy, trust in the platform's handling of personal information, and ethical dilemmas arising from AI-generated content significantly affect user acceptance. Addressing these factors provides a more comprehensive understanding of the dynamics shaping users' acceptance intentions, emphasizing the importance of integrating security and ethical considerations into TAM when analyzing Generative AI adoption.

For university students, who frequently handle sensitive academic and personal data on AI platforms, these concerns are particularly relevant. Similarly, ethical considerations, including bias in AI-generated content, misuse of information, and moral dilemmas, can also influence user acceptance. These dimensions are underexplored in existing research, despite their significant influence on user perceptions and behavior.

Despite the growing body of literature on generative AI, limited research specifically explores how TAM factors interact with information security and ethical concerns in shaping students' acceptance intention toward ChatGPT. This study aims to address this gap by integrating these factors into the TAM framework to provide a comprehensive analysis of acceptance intention.

The objective is to contribute to a deeper understanding of how perceived usefulness, perceived ease of use, information security, and ethical concerns collectively influence adoption behavior. This research offers valuable insights for educators, developers, and policymakers, enabling them to design AI tools and policies that address both functional and ethical user concerns effectively. Through this contribution, the study aims to advance the academic discourse on generative AI adoption while offering practical implications for enhancing user trust and acceptance.


Ⅱ. Literature Review and Research Hypotheses

2.1 Perceived ease of use

Perceived ease of use, defined as users' perception of how effortlessly they can interact with a system, has been found to be a significant factor influencing acceptance intention across various technology acceptance cases[8]. Specifically, when users consider a system easy to use, they are more inclined to show an intention to adopt and utilize it in the future[13]-[15]. In the context of ChatGPT, where user-friendliness is crucial, it is logical to propose that a higher level of perceived ease of use will lead to a more positive acceptance intention. Thus, this study suggests that perceived ease of use positively affects acceptance intention in the context of ChatGPT.

H1. Perceived ease of use significantly impacts acceptance intention.

2.2 Perceived usefulness

Perceived usefulness, referring to the extent to which users believe that a system can enhance their productivity or performance, is a critical factor in technology acceptance[8]. Numerous studies have demonstrated that when individuals perceive a technology as useful, they are more likely to express a positive intention to accept and use it[9][16][17]. In the context of ChatGPT, which aims to provide valuable assistance, it is plausible to hypothesize that a higher level of perceived usefulness will lead to a significant and positive impact on acceptance intention. Therefore, this study suggests that perceived usefulness significantly affects acceptance intention in the context of ChatGPT.

H2. Perceived usefulness significantly impacts acceptance intention.

2.3 Information security

Information security, the confidence users have in the protection of their personal information, is a crucial factor in technology acceptance[18]. When individuals trust that their data will be secure, they are more likely to express a positive acceptance intention[9]. Thus, it is hypothesized that information security significantly affects acceptance intention. Information security may also play a moderating role in the relationship between perceived ease of use and acceptance intention. Users' confidence in data protection could enhance the positive impact of perceived ease of use on acceptance intention[19]. Similarly, information security may moderate the relationship between perceived usefulness and acceptance intention. A strong sense of data security could strengthen the positive influence of perceived usefulness on acceptance intention[20]. Therefore, this study suggests that information security is a significant factor affecting acceptance intention, with potential moderating effects on the relationships between perceived ease of use and perceived usefulness with acceptance intention.

H3a. Information security significantly impacts acceptance intention.

H3b. Information security significantly moderates the impact of perceived ease of use on acceptance intention.

H3c. Information security significantly moderates the impact of perceived usefulness on acceptance intention.
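
Statistically, H3b and H3c correspond to interaction terms in the structural model. A minimal specification of this moderated relationship (the notation here is ours, for illustration, and is not taken from the paper) is

$$ACI = \beta_0 + \beta_1 PES + \beta_2 PUS + \beta_3 ISC + \beta_4 (ISC \times PES) + \beta_5 (ISC \times PUS) + \varepsilon$$

where H3b and H3c predict significant interaction coefficients $\beta_4$ and $\beta_5$, respectively, and the construct abbreviations follow those used in the tables (PES, PUS, ISC, ACI).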

2.4 Ethical concerns

Ethical concerns, which encompass the moral dilemmas and considerations users experience, can significantly influence their acceptance intention. When individuals have ethical concerns while using a system like ChatGPT, it can impact their willingness to accept and use it[21][22]. Thus, it is hypothesized that ethical concerns significantly affect acceptance intention. Ethical concerns may also play a moderating role in the relationship between perceived ease of use and acceptance intention. When users have ethical concerns, the influence of perceived ease of use on acceptance intention could be significantly moderated. Likewise, ethical concerns may moderate the relationship between perceived usefulness and acceptance intention. The presence of ethical concerns might influence how perceived usefulness impacts acceptance intention. Therefore, this study suggests that ethical concerns are a significant factor affecting acceptance intention and may have moderating effects on the relationships between perceived ease of use and perceived usefulness with acceptance intention.

H4a. Ethical concerns significantly affect acceptance intention.

H4b. Ethical concerns significantly moderate the impact of perceived ease of use on acceptance intention.

H4c. Ethical concerns significantly moderate the impact of perceived usefulness on acceptance intention.


Ⅲ. Research Methodology

3.1 Measurement

In developing the instrument for this study, we meticulously crafted items for each construct, drawing on previously validated studies.

Table 1 presents the constructs and their corresponding measurement items used in this study, categorized into perceived ease of use, perceived usefulness, information security, ethical concerns, and acceptance intention. Perceived ease of use assesses how effortlessly users can interact with ChatGPT, focusing on accessibility, content downloadability, and ease of learning. Perceived usefulness evaluates the extent to which ChatGPT enhances productivity and task efficiency in daily activities. Information security emphasizes users' trust in ChatGPT's ability to safeguard personal information, ensure privacy, and prevent unauthorized data use. Ethical concerns are operationalized through three specific dimensions: general ethical concerns (ETC1), feelings of guilt related to AI reliance (ETC2), and a sense of moral responsibility while using AI (ETC3). These items aim to capture users' emotional and moral considerations when interacting with ChatGPT. Finally, acceptance intention measures users' willingness to continue using ChatGPT, incorporate it into their routines, and maintain long-term adoption. These constructs collectively provide a structured approach to understanding users' acceptance behavior towards ChatGPT.


To measure the constructs, we employed a seven-point Likert scale, ranging from 1 (strongly disagree) to 7 (strongly agree). This scale was chosen for its ability to capture a wide range of responses, allowing for nuanced differentiation between levels of agreement or disagreement among participants.

Prior to the main survey, we conducted a rigorous content validity assessment. This involved a pre-test with a panel of experts from both academia and industry. These experts were asked to evaluate the clarity, relevance, and appropriateness of each item in the context of Generative AI.

Their feedback was instrumental in refining the instrument to ensure its validity. Following the content validity check, a pilot test was conducted with voluntary participants who are knowledgeable in fields related to Generative AI. The aim of this pilot test was to preliminarily assess the reliability of the instrument and to make any necessary adjustments before deploying it in the main study.

3.2 Data and subjects

The survey for this research targeted university students, a choice grounded in the specific focus of the study. University students represent an appropriate demographic for this research as they are often more sensitive to issues of personal information disclosure, especially in the context of using technologies like ChatGPT. Additionally, their reliance on AI for task completion could potentially lead to ethical discomfort[25][26]. The survey was distributed nationwide across universities in South Korea through the researchers' academic network, with the assistance of several professors who actively supported the dissemination process.

The survey was shared via university online communities, official class announcements, and academic email networks, ensuring accessibility to a diverse group of university students from various disciplines and regions. Data collection occurred over two months, from April to May 2023, allowing sufficient time to gather a representative and balanced dataset while minimizing potential external influences on responses.

The survey began with an introductory section, explaining the purpose of the study: to investigate factors affecting university students' acceptance and academic use of ChatGPT. It highlighted ChatGPT's background, its development by OpenAI, and its rapid adoption globally. Participants were assured that the survey would take approximately 10-15 minutes, their responses would remain anonymous and confidential, and data would be used exclusively for academic research purposes.

To ensure the quality and reliability of the data, a pre-processing procedure was implemented. This involved checking for completeness, consistency, and outliers in the responses. The data filtering criteria included removing incomplete responses and those that failed consistency checks, such as straight-lining (where a participant selects the same response for all items).
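
As an illustration of these checks, the sketch below shows one way to implement them in Python with pandas. It is a minimal sketch under our own assumptions: the item column names (PES1 through ACI3) mirror the codes in Table 1, and the paper does not publish its actual filtering code.

```python
import pandas as pd

# Hypothetical column names mirroring the item codes in Table 1.
ITEM_COLS = [f"{c}{i}" for c in ("PES", "PUS", "ISC", "ETC", "ACI")
             for i in (1, 2, 3)]

def filter_responses(df: pd.DataFrame) -> pd.DataFrame:
    """Drop incomplete responses and straight-lined ones."""
    # Completeness: remove rows with any missing item.
    complete = df.dropna(subset=ITEM_COLS)
    # Straight-lining: the same answer on every single item
    # suggests inattentive responding, so those rows are removed.
    return complete[complete[ITEM_COLS].nunique(axis=1) > 1]

# Usage sketch:
# raw = pd.read_csv("survey_responses.csv")   # hypothetical file name
# clean = filter_responses(raw)
```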

Table 2 presents the demographic characteristics of the 212 participants in the study. It categorizes participants by gender and age. Among the participants, 44.3% (94) are male, and 55.7% (118) are female. The age distribution shows a varied range: 10.8% (23) are 18 years old, 20.8% (44) are 19, 13.2% (28) are 20, 9.4% (20) are 21, 8.0% (17) are 22, and the largest group, 37.7% (80), are 23 years or older. This table provides a comprehensive overview of the sample's demographic makeup, essential for understanding the study's context and applicability.



Ⅳ. Analysis and Results

In this study, the Partial Least Squares(PLS) approach was utilized to manage the reflective factors and the extensive range of constructs[27].

PLS is particularly appropriate for handling intricate predictive models, especially those comprising numerous constructs, including formative ones. Following the recommended two-step approach, we evaluated the measurement and structural models in terms of reliability, convergent validity, and discriminant validity[28].

4.1 Measurement model

Confirmatory factor analysis was performed to verify the convergent validity, reliability, and discriminant validity of the measurement scales. We assessed scale reliability through Composite Reliability(CR) and Cronbach's alpha. The composite reliability scores, ranging from 0.820 to 0.956, exceed the recommended threshold of 0.7 and demonstrate robust internal consistency within the model[29].

Convergent validity is deemed adequate when factor loading values surpass 0.70 and the Average Variance Extracted(AVE) is higher than 0.5. As shown in Table 3, convergent validity met the required threshold as item loadings were above 0.60, aligning with the criteria established[30].
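
For reference, CR and AVE can be computed directly from the standardized loadings using the standard formulas, and Cronbach's alpha from the raw item scores. The following is a minimal Python sketch of these textbook formulas (our illustration; the authors presumably obtained these values from standard PLS software):

```python
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # where each error variance is 1 - loading^2 for standardized loadings.
    s = loadings.sum()
    errors = (1.0 - loadings**2).sum()
    return s**2 / (s**2 + errors)

def average_variance_extracted(loadings: np.ndarray) -> float:
    # AVE = mean of the squared standardized loadings.
    return float((loadings**2).mean())

def cronbach_alpha(items: np.ndarray) -> float:
    # items: respondents x items matrix of raw Likert scores.
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

# Checking against the perceived ease of use loadings reported in Table 3:
pes_loadings = np.array([0.836, 0.848, 0.635])
print(f"{composite_reliability(pes_loadings):.3f}")        # 0.820
print(f"{average_variance_extracted(pes_loadings):.3f}")   # 0.607
```

Both printed values reproduce the CR and AVE reported for perceived ease of use in Table 3.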


The square root of the AVE values of our constructs exceeded the correlations between the construct and the other constructs, thus satisfying discriminant validity(Table 4).
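
The Fornell-Larcker criterion itself is mechanical to verify: the square root of each construct's AVE (the diagonal of Table 4) must exceed that construct's correlations with all other constructs. Below is a small Python sketch using the values reported in Table 4 (our illustration):

```python
import numpy as np

# Lower triangle from Table 4: diagonal = sqrt(AVE),
# off-diagonal = latent construct correlations.
# Order: PES, PUS, ISC, ETC, ACI.
fl = np.array([
    [ 0.779,  0.000,  0.000,  0.000, 0.000],
    [ 0.433,  0.919,  0.000,  0.000, 0.000],
    [ 0.155,  0.120,  0.937,  0.000, 0.000],
    [-0.114, -0.103, -0.038,  0.915, 0.000],
    [ 0.402,  0.620,  0.279, -0.313, 0.908],
])

def fornell_larcker_ok(m: np.ndarray) -> bool:
    # Symmetrize the lower-triangular matrix, then check that each
    # diagonal entry exceeds the absolute correlations in its row/column.
    full = m + m.T - np.diag(np.diag(m))
    for i in range(m.shape[0]):
        others = np.abs(np.delete(full[i], i))
        if not (full[i, i] > others).all():
            return False
    return True

print(fornell_larcker_ok(fl))  # True for the reported values
```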


4.2 Hypothesis test

The hypothesis testing was conducted based on the data summarized in Table 5, focusing on the relationships between various constructs and the acceptance intention of Generative AI, particularly ChatGPT.


Hypothesis H1 proposed that perceived ease of use positively affects acceptance intention. The results supported this hypothesis, with a coefficient of 0.112, a t-value of 2.316, and a p-value of 0.021. This indicates that as the perceived ease of use increases, so does the intention to accept and use the technology.

Hypothesis H2 suggested a positive effect of perceived usefulness on acceptance intention. This hypothesis was strongly supported, as indicated by a high coefficient of 0.532, a significant t-value of 8.350, and a p-value of 0.000. This result underscores the importance of perceived usefulness as a critical determinant of technology acceptance.

The H3a hypothesis posited that information security would positively influence acceptance intention. The hypothesis was supported, with a coefficient of 0.157, a t-value of 2.784, and a p-value of 0.005, indicating that users' confidence in information security is indeed an influential factor in acceptance intention.

Hypotheses H3b and H3c tested the interaction effects of information security with perceived ease of use and perceived usefulness, respectively, on acceptance intention. Both hypotheses were not supported, as evidenced by the coefficients and t-values (H3b: -0.014, 0.232; H3c: 0.074, 1.444), along with p-values that indicate a lack of statistical significance (H3b: 0.816; H3c: 0.149).

Hypothesis H4a examined the impact of ethical concerns on acceptance intention, predicting a negative effect. This hypothesis was supported with a coefficient of -0.250, a t-value of 4.427, and a p-value of 0.000, suggesting that ethical concerns significantly deter users from accepting the technology. Hypotheses H4b and H4c explored the moderating effects of ethical concerns in combination with perceived ease of use and perceived usefulness on acceptance intention. H4b was supported, showing a positive interaction effect with a coefficient of 0.120 and a t-value of 2.041. However, H4c was not supported, as indicated by a coefficient of -0.049 and a t-value of 0.751.

In summary, the hypothesis testing revealed critical insights into the factors influencing the acceptance of Generative AI technologies like ChatGPT. Perceived usefulness emerged as a highly significant predictor, while the roles of ease of use, information security, and ethical concerns were also confirmed to be important, albeit to varying degrees.
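
In PLS-SEM, the t-values and p-values in Table 5 are typically obtained by bootstrapping the path coefficients. The sketch below illustrates that idea in simplified form: it assumes construct scores have already been computed (e.g., as item averages), mean-centers them before building the interaction terms for H3b/H3c and H4b/H4c, and approximates the structural model with ordinary least squares rather than a full PLS estimator, so it illustrates the bootstrap logic rather than the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_t(X: np.ndarray, y: np.ndarray, n_boot: int = 5000) -> np.ndarray:
    """t-value = original coefficient / bootstrap standard error."""
    n = len(y)
    Xc = np.column_stack([np.ones(n), X])            # add intercept
    beta = np.linalg.lstsq(Xc, y, rcond=None)[0]     # original estimates
    boots = np.empty((n_boot, Xc.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)                  # resample respondents
        boots[b] = np.linalg.lstsq(Xc[idx], y[idx], rcond=None)[0]
    return beta / boots.std(axis=0, ddof=1)

# Usage sketch with hypothetical mean-centered construct scores:
# pes, pus, isc, etc_, aci = ...                     # arrays of length n
# X = np.column_stack([pes, pus, isc, etc_,
#                      isc * pes, isc * pus,         # H3b, H3c
#                      etc_ * pes, etc_ * pus])      # H4b, H4c
# t_values = bootstrap_t(X, aci)[1:]                 # drop the intercept
```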


Ⅴ. Discussion

In the discussion of the research findings, the results from Table 5 provide an intriguing insight into the factors influencing acceptance intention. Each hypothesis presents a unique aspect of this relationship, shedding light on the complexities of user interactions with technology.

The positive correlation between perceived ease of use and acceptance intention (Coefficient = 0.112, t-value = 2.316, p-value = 0.021) aligns with Davis's (1989) TAM. This suggests that users are more inclined to continue using a technology they find straightforward and user-friendly. This result is consistent with earlier findings by Almaiah and Man (2016), reinforcing the idea that ease of use is a crucial determinant of technology adoption. However, the relatively low coefficient implies that while important, perceived ease of use is not the sole driver of acceptance.

The strong correlation (Coefficient = 0.532) between perceived usefulness and acceptance intention underscores its significance. This is supported by the substantial t-value of 8.350, indicating a robust effect.

It is noteworthy that this factor has a more pronounced influence compared to perceived ease of use, suggesting that the practical benefits of technology play a more vital role in acceptance intention.

Information security’s direct effect on acceptance intention (H3a: Coefficient = 0.157) is significant. However, the interaction effects of information security with perceived ease of use (H3b) and perceived usefulness (H3c) are not supported, which is an interesting deviation from expectations. This suggests that information security acts as a standalone factor influencing acceptance intention, rather than amplifying the effects of ease of use or usefulness.

The negative impact of ethical concerns on acceptance intention (H4a: Coefficient = -0.250) is particularly noteworthy. This indicates that ethical considerations significantly deter users from adopting the technology. Interestingly, the interaction effect of ethical concerns and perceived ease of use is positive (H4b), suggesting that when a system is easy to use, the impact of ethical concerns on acceptance intention is somewhat mitigated. However, the interaction effect with perceived usefulness (H4c) was not significant, indicating a complex relationship between these factors.


Ⅵ. Conclusion

This study examined the acceptance of Generative AI, specifically ChatGPT, among 212 university students using PLS-SEM. The research focused on factors such as perceived ease of use, perceived usefulness, information security, and ethical concerns. Results indicated significant relationships between these factors and acceptance intention, highlighting the complex interplay between usability, utility, information security, and ethical considerations in adopting Generative AI technologies like ChatGPT.

The study makes a significant theoretical contribution by integrating information security and ethical concerns into the well-established TAM to examine the adoption of Generative AI, specifically focusing on ChatGPT, among university students. This integration addresses an important gap in existing literature, where traditional TAM frameworks often overlook these critical dimensions despite their growing relevance in modern AI adoption scenarios. By incorporating these factors, the study offers a more holistic perspective on the drivers of acceptance intention, moving beyond functional and usability aspects to include concerns about data protection and ethical responsibilities in AI usage.

One particularly noteworthy finding is the substantial impact of perceived usefulness on acceptance intention, evidenced by its higher coefficient compared to perceived ease of use. This result underscores the importance of functionality and performance outcomes in influencing user adoption behaviors. It indicates that while ease of use remains significant, university students prioritize tangible benefits such as productivity enhancement and task efficiency when deciding whether to adopt ChatGPT. This insight refines our understanding of technology adoption patterns in the context of advanced AI systems.

Furthermore, the study demonstrates that information security and ethical concerns play critical roles in shaping user perceptions and acceptance behaviors. Information security, characterized by trust in data protection and privacy measures, emerged as a key factor shaping students' willingness to adopt Generative AI technologies. Ethical concerns, on the other hand, highlight the emotional and moral dimensions of AI usage, such as potential misuse, biases, or ethical dilemmas, which can either hinder or facilitate acceptance depending on how these concerns are addressed.

By incorporating these dimensions into TAM, this research extends the theoretical boundaries of technology acceptance studies and emphasizes the need for future models to adopt a more comprehensive approach that considers functional, psychological, and moral determinants of technology acceptance. This study not only validates the importance of perceived usefulness and ease of use but also pioneers the exploration of security and ethical perceptions as central constructs in understanding the adoption of AI-driven technologies like ChatGPT. Through this contribution, the study lays the foundation for subsequent research to explore these relationships further across different user groups and technology contexts, enriching the theoretical discourse on AI adoption frameworks.

This study's findings offer valuable insights for practitioners in IT academia, industry, and the Generative AI field. Given the significant impact of perceived usefulness on acceptance intention, developers and marketers of Generative AI technologies should focus on highlighting and enhancing the functional benefits of their products. Emphasizing how such technologies can improve efficiency and productivity may encourage greater adoption among potential users. Additionally, the role of ethical concerns in influencing acceptance suggests that it is crucial for practitioners to address ethical issues proactively. This includes ensuring transparency, privacy, and fairness in AI operations. For academia, incorporating these insights into curriculum development can better prepare students for the ethical and practical challenges of using AI in various fields. In the industry, these findings can guide the development of user-centric AI solutions that are not only easy to use but also ethically sound and highly effective.

This study's primary limitation lies in its sample scope, focusing exclusively on 212 university students, which may not fully represent the broader population's experiences and attitudes toward Generative AI technologies like ChatGPT. Future research could expand the sample to include participants from diverse age groups, professions, and cultural backgrounds to enhance the generalizability of the findings. Additionally, the study was conducted within a single cultural and educational context, potentially limiting the ability to generalize results across different regions or educational systems. Cross-cultural studies could offer deeper insights into how cultural factors shape acceptance intention. Another key limitation is the cross-sectional design of this study, which captures data at a single point in time. Longitudinal studies are recommended to examine how user perceptions, security concerns, and ethical considerations evolve with prolonged exposure to ChatGPT. Furthermore, this research focused primarily on self-reported subjective perceptions, without analyzing actual usage data or behavioral patterns. Integrating objective usage analytics and behavioral tracking in future studies could provide a more comprehensive understanding of long-term adoption and sustained use of Generative AI technologies. Finally, exploring the interaction effects of ethical concerns and information security across various demographic groups may reveal nuanced insights that were not captured in this study.

References

  • Y. Wu, "Integrating Generative AI in Education: How ChatGPT Brings Challenges for Future Learning and Teaching", Journal of Advanced Research in Education, Vol. 2, No. 4, pp. 6-10, Jul. 2023. [https://doi.org/10.56397/JARE.2023.07.02]
  • S. F. Wamba, M. M. Queiroz, C. J. C. Jabbour, and C. V. Shi, "Are both generative AI and ChatGPT game changers for 21st-Century operations and supply chain excellence?", International Journal of Production Economics, Vol. 265, pp. 109015, Aug. 2023. [https://doi.org/10.1016/j.ijpe.2023.109015]
  • J. Qadir, "Engineering education in the era of ChatGPT: Promise and pitfalls of generative AI for education", 2023 IEEE Global Engineering Education Conference (EDUCON), Kuwait, Kuwait, May 2023. [https://doi.org/10.1109/EDUCON54358.2023.10125121]
  • J. M. R. Rodríguez, M. S. R. Montoya, M. B. Fernández, and F. L. Lara, "Use of ChatGPT at university as a tool for complex thinking: Students' perceived usefulness", NAER: Journal of New Approaches in Educational Research, Vol. 12, No. 2, pp. 323-339, Jul. 2023. [https://doi.org/10.7821/naer.2023.7.1458]
  • M. Perkins, "Academic integrity considerations of AI Large Language Models in the post-pandemic era: ChatGPT and beyond", Journal of University Teaching and Learning Practice, Vol. 20, No. 2, pp. 7-24, Feb. 2023. [https://doi.org/10.53761/1.20.02.07]
  • A. G. R. Castillo, et al., "Effect of Chat GPT on the digitized learning process of university students", Journal of Namibian Studies: History Politics Culture, Vol. 33, pp. 1-15, May 2023. [https://doi.org/10.59670/jns.v33i.411]
  • M. V. Vinichenko, A. V. Melnichuk, and P. Karácsony, "Technologies of improving the university efficiency by using artificial intelligence: Motivational aspect", Entrepreneurship and sustainability issues, Vol. 7, No. 4, pp. 2696, Jun. 2020. [https://doi.org/10.9770/jesi.2020.7.4.(9)]
  • F. D. Davis, "Perceived usefulness, perceived ease of use, and user acceptance of information technology", MIS Quarterly, Vol. 13, No. 3, pp. 319-340, Sep. 1989. [https://doi.org/10.2307/249008]
  • H. Siagian, Z. J. H. Tarigan, S. R. Basana, and R. Basuki, "The effect of perceived security, perceived ease of use, and perceived usefulness on consumer behavioral intention through trust in digital payment platform", International Journal of Data and Network Science, Vol. 6, No. 3, pp. 861-874, 2022. [https://doi.org/10.5267/j.ijdns.2022.2.010]
  • M. Hasal, J. Nowaková, K. Ahmed Saghair, H. Abdulla, V. Snášel, and L. Ogiela, "Chatbots: Security, privacy, data protection, and social aspects", Concurrency and Computation: Practice and Experience, Vol. 33, No. 19, pp. e6426, Jun. 2021. [https://doi.org/10.1002/cpe.6426]
  • C. Wang, S. Liu, H. Yang, J. Guo, Y. Wu, and J. Liu, "Ethical considerations of using ChatGPT in health care", Journal of Medical Internet Research, Vol. 25, pp. e48009, Apr. 2023. [https://doi.org/10.2196/48009]
  • S. Bankins and P. Formosa, "The Ethical Implications of Artificial Intelligence (AI) For Meaningful Work", Journal of Business Ethics, Vol. 185, No. 4, pp. 725-740, Feb. 2023. [https://doi.org/10.1007/s10551-023-05339-7]
  • V. Venkatesh, J. Y. L. Thong, and X. Xu, "Consumer Acceptance and Use of Information Technology: Extending the Unified Theory of Acceptance and Use of Technology", MIS Quarterly, Vol. 36, No. 1, pp. 157-178, Mar. 2012. [https://doi.org/10.2307/41410412]
  • S. Kelly, S.-A. Kaye, and O. Oviedo-Trespalacios, "What factors contribute to the acceptance of artificial intelligence? A systematic review", Telematics and Informatics, Vol. 77, pp. 101925, Feb. 2023. [https://doi.org/10.1016/j.tele.2022.101925]
  • L. Gao, G. Li, F. Tsai, C. Gao, M. Zhu, and X. Qu, "The impact of artificial intelligence stimuli on customer engagement and value co-creation: the moderating role of customer ability readiness", Journal of Research in Interactive Marketing, Vol. 17, No. 2, pp. 317-333, May 2023. [https://doi.org/10.1108/JRIM-10-2021-0260]
  • J. Kim, K. Merrill Jr, and C. Collins, "AI as a friend or assistant: The mediating role of perceived usefulness in social AI vs. functional AI", Telematics and Informatics, Vol. 64, pp. 101694, Aug. 2021. [https://doi.org/10.1016/j.tele.2021.101694]
  • J. Parviainen, T. Turja, and L. Van Aerschot, "Social robots and human touch in care: the perceived usefulness of robot assistance among healthcare professionals", Social robots: Technological, societal and ethical aspects of human-robot interaction, pp. 187-204, Jul. 2019. [https://doi.org/10.1007/978-3-030-17107-0_10]
  • Y.-L. Chi and Y.-C. Tsai, "The empirical study of impact critical security factors of mobile applications on technology acceptance model", Journal of Statistics and Management Systems, Vol. 20, No. 2, pp. 245-273, Feb. 2016. [https://doi.org/10.1080/09720510.2016.1232888]
  • S. Li, K. Peng, B. Zhu, Z. Li, B. Zhang, H. Chen, and R. Li, "Research on Users’ Privacy-Sharing Intentions in the Health Data Tracking System Providing Personalized Services and Public Services", Sustainability, Vol. 15, No. 22, pp. 15709, Nov. 2023. [https://doi.org/10.3390/su152215709]
  • Y. Lin and Z. Yu, "Extending Technology Acceptance Model to higher-education students’ use of digital academic reading tools on computers", International Journal of Educational Technology in Higher Education, Vol. 20, No. 1, pp. 34, Jun. 2023. [https://doi.org/10.1186/s41239-023-00403-8]
  • B. C. Stahl and D. Eke, "The ethics of ChatGPT–Exploring the ethical issues of an emerging technology", International Journal of Information Management, Vol. 74, pp. 102700, Feb. 2024. [https://doi.org/10.1016/j.ijinfomgt.2023.102700]
  • A. J. Rhem, "AI ethics and its impact on knowledge management", AI and Ethics, Vol. 1, No. 1, pp. 33-37, Oct. 2021. [https://doi.org/10.1007/s43681-020-00015-2]
  • M. A. Almaiah and M. Man, "Empirical investigation to explore factors that achieve high quality of mobile learning system based on students’ perspectives", Engineering science and technology, an international journal, Vol. 19, No. 3, pp. 1314-1320, Apr. 2016. [https://doi.org/10.1016/j.jestch.2016.03.004]
  • A. Masood, A. Luqman, Y. Feng, and F. Shahzad, "Untangling the Adverse Effect of SNS Stressors on Academic Performance and Its Impact on Students’ Social Media Discontinuation Intention: The Moderating Role of Guilt", SAGE Open, Vol. 12, No. 1, Mar. 2022. [https://doi.org/10.1177/21582440221079905]
  • L. Köbis and C. Mehner, "Ethical Questions Raised by AI-Supported Mentoring in Higher Education", Frontiers in Artificial Intelligence, Vol. 4, Apr. 2021. [https://doi.org/10.3389/frai.2021.624050]
  • T. Gundu, "Chatbots: A Framework for Improving Information Security Behaviours using ChatGPT", Human Aspects of Information Security and Assurance, Vol. 674, pp. 418-431, Jul. 2023. [https://doi.org/10.1007/978-3-031-38530-8_33]
  • J. F. Hair, M. Sarstedt, C. M. Ringle, and J. A. Mena, "An assessment of the use of partial least squares structural equation modeling in marketing research", Journal of the Academy of Marketing Science, Vol. 40, No. 3, pp. 414-433, Mar. 2012. [https://doi.org/10.1007/s11747-011-0261-6]
  • J. C. Anderson and D. W. Gerbing, "Structural equation modeling in practice: A review and recommended two-step approach", Psychological bulletin, Vol. 103, No. 3, pp. 411-423, Jan. 1988. [https://doi.org/10.1037/0033-2909.103.3.411]
  • J. F. Hair, W. C. Black, B. J. Babin, R. E. Anderson, and R. L. Tatham, "Multivariate Data Analysis", 6th ed., Pearson Prentice Hall, 2006.
  • C. Fornell and D. F. Larcker, "Evaluating structural equation models with unobservable variables and measurement Error", Journal of Marketing Research, Vol. 18, No. 1, pp. 39-50, Feb. 1981. [https://doi.org/10.1177/002224378101800104]
Authors
Tae-Hyung Kim

2017. 08 : MS degree, Dept. of Information Science, Korea National Open University

2019. 08 ~ Present : Ph.D. Candidate in ICT Convergence Engineering, Graduate School, DanKook University

Research interests : AI, ICT Convergence, Security

Table 1.

Constructs and measurements

Construct Items Description Reference
Perceived ease of use PES1 ChatGPT is readily accessible. [8][23]
PES2 The provided content on ChatGPT can be easily downloaded.
PES3 Learning how to utilize ChatGPT didn't require much effort.
Perceived usefulness PUS1 I find ChatGPT to be valuable in my everyday life. [8]
PUS2 Utilizing ChatGPT assists me in completing tasks more efficiently.
PUS3 Using ChatGPT enhances my productivity.
Information security ISC1 I have confidence that my personal information will not be utilized for any other purposes. [18]
ISC2 I have faith that my personal information is safeguarded.
ISC3 I am assured that my personal information is secure.
Ethical concerns ETC1 While using ChatGPT, I have experienced ethical concerns. [24]
ETC2 While using ChatGPT, I have experienced feelings of guilt.
ETC3 While using ChatGPT, I have felt a sense of moral responsibility.
Acceptance intention ACI1 I plan to continue using ChatGPT in the future. [13]
ACI2 I will consistently incorporate ChatGPT into my daily routine.
ACI3 I willingly agree to use ChatGPT.

Table 2.

Demographic characteristics of the samples

Category Item Subjects (N=212)
Frequency Percentage
Gender Male 94 44.3%
Female 118 55.7%
Age 18 23 10.8%
19 44 20.8%
20 28 13.2%
21 20 9.4%
22 17 8.0%
23 or older 80 37.7%

Table 3.

Reliability and convergent validity

Construct Items Mean St. Dev. Factor loading Cronbach's alpha CR AVE
Perceived
ease of use
PES1 5.368 1.305 0.836 0.710 0.820 0.607
PES2 5.255 1.364 0.848
PES3 5.019 1.377 0.635
Perceived usefulness PUS1 5.415 1.235 0.916 0.908 0.942 0.845
PUS2 5.481 1.230 0.941
PUS3 5.368 1.196 0.901
Information security ISC1 3.632 1.627 0.928 0.931 0.956 0.878
ISC2 3.571 1.566 0.948
ISC3 3.382 1.542 0.935
Ethical
concern
ETC1 3.099 1.650 0.880 0.902 0.939 0.837
ETC2 2.642 1.647 0.922
ETC3 3.005 1.803 0.940
Acceptance
intention
ACI1 5.090 1.500 0.919 0.893 0.934 0.824
ACI2 4.377 1.665 0.884
ACI3 5.038 1.456 0.921

Table 4.

Fornell-Larcker scale results

Construct 1 2 3 4 5
1. PES 0.779 - - - -
2. PUS 0.433 0.919 - - -
3. ISC 0.155 0.120 0.937 - -
4. ETC -0.114 -0.103 -0.038 0.915 -
5. ACI 0.402 0.620 0.279 -0.313 0.908
Note: The values on the diagonal represent the square root of AVE.

Table 5.

Summary of the results

H Cause Effect Coefficient t-value p-value Hypothesis
H1 Perceived Ease of Use Acceptance Intention 0.112 2.316 0.021 Supported
H2 Perceived Usefulness Acceptance Intention 0.532 8.350 0.000 Supported
H3a Information Security Acceptance Intention 0.157 2.784 0.005 Supported
H3b Information Security × Perceived Ease of Use Acceptance Intention -0.014 0.232 0.816 Not Supported
H3c Information Security × Perceived Usefulness Acceptance Intention 0.074 1.444 0.149 Not Supported
H4a Ethical Concerns Acceptance Intention -0.250 4.427 0.000 Supported
H4b Ethical Concerns × Perceived Ease of Use Acceptance Intention 0.120 2.041 0.041 Supported
H4c Ethical Concerns × Perceived Usefulness Acceptance Intention -0.049 0.751 0.453 Not Supported