An Introduction to Large Language Models and Ethics


Introduction

With any emerging technology, there are bound to be questions of ethics and morality, and so-called AI is no different. The primary ethical debates surrounding the technology have been with us for some time now: data security (especially when the technology is utilized by governments); data privacy (whether governments targeting civilians, or people surrendering vast quantities of personal data to a company); copyright infringement; environmental concerns (above all the vast power required to run datacenter-scale LLMs); and job displacement (much in the vein of the Industrial Revolution). [1] These are complex issues that, one by one, I will attempt to answer for you, dear reader, with the primary goal of alleviating the fearmongering surrounding these technologies and providing helpful insight into how a Christian should interact with them in day-to-day life.

However, as Siau and Wang suggest, one pressing issue in AI ethics is the public’s complex perception of AI combined with insufficient awareness of what it is, what it is not, and what its capabilities are. It would therefore be helpful to explain some of the more common misconceptions surrounding the term ‘AI.’ Only after clearing up these misconceptions will we turn our attention to the moral quandaries themselves.

Misconceptions

Before we look at Large Language Models (LLMs) themselves, it is worth examining what is meant by the overused, generic term Artificial Intelligence (AI). In general parlance, ‘AI’ is often used to describe products (or combinations of products) on the market such as OpenAI’s GPT or Google’s Gemini models. This usage is vastly misleading and often simply incorrect: these particular technologies are what are known as Large Language Models, or LLMs.

Let us now take a step further back and look at precisely what the term Artificial Intelligence means. Artificial Intelligence, as defined in the Merriam-Webster dictionary, is ‘the capability of computer systems or algorithms to imitate intelligent human behavior.’ [2] It is also the umbrella term for a field of academic study within the Computer Sciences, one that encompasses many subsets of AI research: the aforementioned LLMs, but also areas such as Machine Learning (ML) and Generative AI (genAI).

What is being referred to as ‘AI’ is in fact often one of these subsets within the field of Artificial Intelligence research. Both GPT (5.2 as of writing) and Gemini (3.0 as of writing) are Large Language Models. Essentially, they are algorithms trained in pattern recognition on vast arrays of data (many terabytes of it). They are not intelligent, nor can they create material independently. This is vital to understanding the ethics of such programs; viewed from an abstract and disinterested perspective, they are, in essence, no different from a student in the method by which they are trained, or from an artist in the ethos of ‘schools’ and ‘inspiration.’ (This will be very important when we consider the morality of training LLMs on copyrighted material.)
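The pattern-recognition point can be made concrete with a deliberately tiny sketch. The few lines of Python below (with an invented scrap of training text) do, in miniature, what an LLM does at enormous scale: count which words follow which in the training data and predict accordingly. Nothing here understands anything; it only reproduces patterns it was shown.

```python
from collections import Counter, defaultdict

# Toy illustration of the core idea behind an LLM: tally which word tends
# to follow which in reference text, then predict the likeliest continuation.
# The training text is invented; real models learn from terabytes of data.
training_text = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
).split()

follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    if word not in follows:
        return None  # no pattern to draw on: the 'model' is helpless
    return follows[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on" — the only word ever seen after "sat"
```

Ask it about a word it has never seen and it has nothing to say; its apparent fluency is entirely borrowed from its training data.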

Before we can turn to the ethics of LLMs in particular (they being the type the average person is most likely to encounter in the morality debate), I would like to discuss the two other key subsets of Artificial Intelligence you will commonly encounter: Machine Learning and Generative AI. Both are extraordinarily helpful in clearing up misconceptions that LLMs alone do not sufficiently highlight. One of the greatest scaremongering tactics, and a principal objection to so-called AI from luddites and theocrats alike, is the claim that LLMs are novel and that they are what ‘AI’ is. Neither claim is true: what people refer to as ‘AI’ has in fact been with us for a while, and to great effect.

Let us start with Machine Learning. Put simply, it is when a computer, often using neural networks (think of how humans learn), analyses a dataset and learns to perform a set task independently, without explicit programming to do so. [3] Machine Learning has existed in some form since the dawn of digital computing in the late 1950s and early 1960s, and you might be surprised to find that common everyday items have used, or currently use, it to provide you with services and products. Take your TV: the Netflix algorithm that eerily serves up an accurate recommendation for your next watch; the rerun of an old show enhanced to 1080p; the auto-HDR effect applied to non-HDR content. All of this is software or hardware learning to interpret various forms of data and then applying the results to complete its task, whether recommending a show or upscaling the picture. [4]
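To see what ‘learning from data rather than from explicit rules’ means, here is a minimal sketch in the spirit of the Netflix example above. The viewers, titles, and histories are all invented for illustration; no rule anywhere says ‘fans of documentaries like crime dramas’. The recommendation emerges from patterns in the data alone.

```python
# Invented viewing histories for three hypothetical viewers.
histories = {
    "ana":   {"Nature Docs", "Space Docs", "Baking Show"},
    "ben":   {"Nature Docs", "Space Docs", "Crime Drama"},
    "chloe": {"Baking Show", "Crime Drama"},
}

def recommend(user):
    """Suggest a title watched by the most similar other viewer."""
    watched = histories[user]
    # Similarity = number of shows two viewers have in common.
    others = sorted(
        (u for u in histories if u != user),
        key=lambda u: len(histories[u] & watched),
        reverse=True,
    )
    for other in others:
        unseen = histories[other] - watched
        if unseen:
            return sorted(unseen)[0]
    return None

print(recommend("ana"))  # "Crime Drama" — ben shares two shows with ana
```

Production recommenders learn far subtler statistical patterns from millions of histories, but the principle is the same: the behaviour comes from the dataset, not from a programmer enumerating rules.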

To mount a puritanical crusade against so-called AI, or to be frightened of a supposedly novel technology, would be futile and would invalidate much of how the modern world works. Machine learning is used in medicine, transportation, manufacturing, telecommunications and more; it is no overstatement to say that without this form of artificial intelligence we would lose much of the progress we have made as a society since the 1950s. [5]

Now, as machine learning has been around for so long and is used so extensively, its ethics and morality are well established, and I shall not go over them in this article. Generative AI, on the other hand, is a novel technology (even if the name is another slight misnomer), and it does raise ethical questions that must be answered. But first, what is meant by the term ‘generative AI?’ Here I will give the word-for-word dictionary definition, as its language dispels some potential pitfalls immediately: Generative AI is ‘artificial intelligence that is capable of generating new content (such as images or text) in response to a submitted prompt (such as a query) by learning from a large reference database of examples.’ [6]

The operative and substantive words are ‘generating,’ ‘learning from,’ and ‘reference database.’ Generative AI has no ability to create independently (which should dispel the luddites who believe that ‘AI is possessed by demons’), and therefore no creativity in and of itself; neither is it intelligent in the traditional sense, because it cannot operate without first being trained on data. GenAI is best thought of as an offshoot of the capabilities of an LLM, bringing with it the same questions surrounding use, though because its sole application is to generate something from a dataset, it carries an inherently higher risk of immoral misapplication, such as supposed copyright infringement.
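The definition’s ‘generating … by learning from a large reference database’ can also be sketched in a few lines. In this toy example (the reference sentence is invented), ‘new’ text is produced by randomly sampling word transitions tallied from the reference data; every word the generator emits was first seen in that data, which is precisely the point.

```python
import random
from collections import Counter, defaultdict

# Invented reference text standing in for a 'large reference database'.
reference = "a quick fox ran and a slow fox slept and a quick hare ran".split()

# Learn which words follow which, and how often.
transitions = defaultdict(Counter)
for cur, nxt in zip(reference, reference[1:]):
    transitions[cur][nxt] += 1

def generate(start, length, seed=0):
    """Assemble up to `length` words by sampling learned transitions."""
    rng = random.Random(seed)  # fixed seed so the sketch is repeatable
    out = [start]
    for _ in range(length - 1):
        options = transitions.get(out[-1])
        if not options:
            break  # nothing in the reference data to continue from
        words, weights = zip(*options.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("a", 5))
```

The output may be a sentence that never appears verbatim in the reference text, yet it is wholly derived from it, which is why the copyright question attaches so readily to this technology.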

Conclusion

Artificial Intelligence, as a subject, should not daunt you, nor should it cloud your judgment if you are to make sound moral judgments about its use. As a technology it has many constituent parts (some of which you may be using without knowing it), and some of those parts, owing perhaps to both their novelty and their potential, raise harder questions about moral use. As a whole, however, Artificial Intelligence is not new, and it has most certainly demonstrated its usefulness. With these misconceptions now cleared up, you can begin to engage with the issues of privacy, copyright infringement, and human obsolescence.


[1] Weiyu Wang and Keng Siau, ‘Ethical and Moral Issues with AI’, Proceedings of the 24th Conference on Information Systems (SMU, 2018) <https://ink.library.smu.edu.sg/sis_research/9404>; The Machine Question: AI, Ethics and Moral Responsibility, ed. David Gunkel, Joanna Bryson and Steve Torrance (AISB, 2012); Paolo Sommaggio and Samuela Marchiori, ‘Moral Dilemmas in the AI Era: A New Approach’, Journal of Ethics and Legal Technologies, vol. 2, iss. 1 (2020) doi: 10.14658/pupj-JELT-2020-1-5.

[2] Merriam-Webster, ‘Artificial Intelligence’, Merriam-Webster.com <https://www.merriam-webster.com/dictionary/artificial%20intelligence> [accessed 1 February 2026].

[3] Merriam-Webster, ‘Machine Learning’, Merriam-Webster.com <https://www.merriam-webster.com/dictionary/machine%20learning> [accessed 1 February 2026]; Kiran Bakshi and Kapil Bakshi, ‘Considerations for Artificial Intelligence and Machine Learning: Approaches and Use Cases’, 2018 IEEE Aerospace Conference (IEEE, 2018) doi: 10.1109/AERO.2018.8396488.

[4] For further reference on how ML is used to upscale digital media, see Jagyanseni Panda and Sukadev Meher, ‘Recent Advances in 2D Image Upscaling: A Comprehensive Review’, SN Computer Science, vol. 5, no. 735 (Springer Nature, 2024) <https://doi.org/10.1007/s42979-024-03070-2>.

[5] IBM, ‘Ten Everyday Machine Learning Use Cases’ <https://www.ibm.com/think/topics/machine-learning-use-cases> [accessed 1 February 2026]; Bakshi and Bakshi, ‘Considerations for Artificial Intelligence and Machine Learning’, pp. 1–9.

[6] Merriam-Webster, ‘Generative AI’, Merriam-Webster.com <https://www.merriam-webster.com/dictionary/generative%20AI> [accessed 1 February 2026].