Please, dear algorithm, and thank you
You sit down at your desk and open your laptop, still cold to the touch after a night of restful idleness, its hardware dozing in a gentle sleep, waiting to be brought back to a state of lively screens and background processes. You say good morning to your laptop, give it a little pat on the back, write a succinct but efficient "thank you for your help" digital post-it note to add to its desktop, and get going with your day.
You open the tool. That tool. It welcomes you to the little conversational bubble you've co-created; you say your "hello", "hope all is well", "may you please help me", and "thank you very much for your assistance". You receive back a couple of "I'm happy to be of help!", and "I'm grateful for your patience and diligence in revising my comments". Filled with a renewed sense of social order and inner peace, you open your coding software, and in the first line, you say:
>> hello, can you please help me with this?
but something far worse than an endless void responds to your plea in a less than welcoming manner:
>> error: could not find function "hello"
You mutter to yourself "how dare you! I thought you had my back! Stupid piece of obsolete software...", and switch once more to The Tool, who always knows what to say and how to say it well; welcoming, warm, polite.
But remember: it is just a tool. Just. A. Tool.
Who/what deserves to be thanked?
I remember attending a seminar, a professor presenting his research as part of an international conference that has traditionally had a strong focus on animal research. The professor started off with a slide showing a picture of a mouse alongside the words "thank you", and proceeded to openly thank the mice that had (not exactly willingly, one should add) participated in the studies; how his research wouldn't have been possible without them, and what their contribution means for biomedical applications in humans. I had never seen something like this before, at least not at this scale. Some people in the audience chuckled, others laughed, before they realised that the professor up on the stage was being very, very serious about it. I reckon that some of the people who chuckled or laughed may have done so out of a sudden discomfort at being so blatantly confronted with the realities of their jobs, while others may have considered the whole position borderline ridiculous – who needs to thank animal models anyway? It's not like they'd even know or realise you're thanking them (or so they may think).
In light of the current realities we navigate, in which many people interact daily with a tool that mimics human language through an obvious veneer of politeness, this memory made me wonder about how humans decide who (or in this case, what) deserves our please and thank yous.
"Thanks, I hate it"
From the time we are children, we are taught that saying please and thank you is important. Being polite serves a specific role in our social interactions; it denotes that we appreciate the other person's help and time, that we recognise the potential impositions we're throwing their way, and that we're acknowledging their humanity and their feelings as we interact. We are taught we should treat others how we'd like to be treated – if we wish to be respected by others, we have to learn how to respect others in return. Language here serves, at least in part, a pragmatic purpose. However, the common use of these words does not necessarily imply that we feel thankful when we're thanking others; we may still say it out of social convention, out of a desire to make the other person feel good about what they did (as a way of reinforcing what can be seen as a socially beneficial behaviour, e.g. helping others), or simply to pay respect to them (even in instances in which saying "thank you" may not be considered necessary).
For us to actively say please or thank you, there has to be a degree of acknowledgement of the purpose of doing so (whether the intention is to express a feeling we hold in the moment, to maintain a pleasant social environment, or to reinforce a specific social hierarchy). This, in turn, involves a degree of awareness of the other entity's capacity to understand us; we may say please and thank you to preverbal infants or be polite around them as a way of modelling the behaviour or as a way of relating and bonding, since saying please and thank you doesn't occur in a verbal vacuum – it is accompanied by e.g. gestures, body posture, and intonation (which can be further communicated online through the use of emojis [🙏]). We can differentiate a thank you said in earnest from a thank you said sarcastically by e.g. the way our gesticulation and intonation change.

This sense of linguistic purpose may be lost when we interact with other species (one may be more prone to say "good boy/girl/[noun]" to a pet when they bring us back the ball we just threw, instead of simply saying "thank you" to them), which may also explain why some people laughed during the professor's opening slide.
This all, however, raises the question of why humans feel compelled to say please and thank you to a user interface for a large language model (LLM) while at the same time retaining the narrative that the model is "just a tool", and that therefore any critical stance against it is just a simple case of technophobia. For how many times has this happened before, with people being afraid of other technological developments such as the radio, television, computers, the internet, only for those fears to dissipate in time? [1]
I want to flip the question for a moment, though: how many times has it happened that we engage with such tools (the radio, television, computers) by thanking them for their service?
In a different time and a different place, saying thank you to your radio may have been interpreted as the first sign of some form of delusion or hallucination (e.g. "the radio is speaking to me!"). Yet, with LLMs, it has quickly become par for the course not only to say please and thank you, but to receive the same in return. Some have claimed that not being polite to an LLM could seep into how we speak to each other, human to human (if you're impolite to the LLM you may become more likely to be impolite to other people as well; in a sense, politeness to and from an LLM can model the behaviour of how to interact with other people in general terms).
However, not being overtly polite doesn't mean one is by default being impolite (by, for example, using demeaning or aggressive language). If an LLM is "just a tool", then surely we can get to the point without all this "social etiquette" around it. If it is "just a tool", I can surely use it as such (e.g. using "prompts" along the lines of "what is the meaning of [insert concept]", "give feedback on this text", or "structure this text differently" without prefacing them with a please or following them with a thank you) and not be afraid that I'm being impolite to "it".
[side note: I personally feel very icky whenever LLMs say things like "I'm sorry for the confusion" when I write something like "the reference provided doesn't exist"; I know it is not sorry, the interaction serves no purpose in my eyes, and I cannot help but think about the system prompting that went into embedding this type of language into a "tool" that is "just" supposed to "assist" with specific tasks. Put bluntly: I don't care about reading that the tool "is sorry", I care about getting an accurate reference. In my few experiences using the tool, I have curtailed these types of responses by adding instructions such as "do not thank me for my patience, do not apologise"]
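[for those who prefer to do this curtailing in a script rather than in the chat window, here is a minimal sketch of how such an instruction could be passed as a system message. It assumes the OpenAI Python client; the model name and the exact wording of the instruction are illustrative, not a recommendation of any particular provider or phrasing.

```python
# A minimal sketch: passing a "no pleasantries" instruction as a system message.
# Assumes the OpenAI Python client (openai>=1.0) and an API key in the environment;
# the model name and the instruction wording are illustrative only.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice of model
    messages=[
        {
            "role": "system",
            "content": (
                "Do not thank the user for their patience, do not apologise, "
                "and do not add pleasantries. Answer the question directly."
            ),
        },
        {"role": "user", "content": "The reference provided doesn't exist."},
    ],
)

print(response.choices[0].message.content)
```

Whether this actually strips out every "I'm sorry" will vary by model and provider since, as noted in footnote 3, some of that politeness comes from the training data rather than from any instruction one can override]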
"Thanks for nothing"
It is challenging to consider how both approaches (conceptualisation of the tool as "just a tool", interaction with the tool through species-specific use of "polite" language and pleasantries) can seamlessly coexist without adding a degree of confusion for the tool's users, or without blurring the lines of what the purpose of the tool truly is. This seemingly "unnecessary" layer introduced to the tool by design (whether by system prompting or as a carry-over from training datasets) carries added issues, such as the energy expenditure behind such interactions (interactions which, again, I'd argue serve no direct purpose to the usage of the tool in practice), and even the recently reported problem of LLM-aided delusions (remember talking back to the radio?).
If we're going to go down the route of how the embedding of polite language serves the purpose of "modelling" this behaviour to children and adults alike, will we also spend time and effort teaching them that other creatures (whether verbal or not) deserve our gratitude and our politeness? Will we teach them that running after birds to scare them is impolite and hurtful? Will we teach them to say please and thank you to the dog that brings the ball back while playing fetch? Will we teach them, later on, to say please and thank you to the animals they may in the future do research on, test products on, or economically benefit from? Will we teach them to say please and thank you to the device that just played the "please and thank you" song to them? Or will we reserve this social convention exclusively for those interfaces that use/mimic human language?
It seems like this discussion ultimately leads to a kind of catch-22: if we should wholeheartedly embrace this new technological development [2] under the assumption of it being "just a tool", shouldn't we expect to interact with it as, indeed, just a tool? And if we interact with it in a way that is dramatically different from how we interact with other tools, is it then warranted to include it under the same category? If you wouldn't say please and thank you to your calculator (one of the most commonly used analogies to support the view of LLMs as "just tools"), why would you say please and thank you to an LLM? And, most importantly, why would the LLM say please, thank you, or I'm sorry to you in the first place? [3]

1 - a common argument for embracing this technology (as it is designed, marketed, and used today) seems to be based on the fact that other technological developments – ones we now consider mundane – were also linked to catastrophising narratives back in the day (thus, according to this argument, the problem does not lie in the tool as such, but in humans' fear of change)
2 - it bears reminding that the core of this technology is nothing new; what is new is the user interface which allows for the adoption and use of these models en masse, and the strength of the models themselves
3 - it also needs to be clarified that the type of polite language embedded in the known commercialised LLMs is primarily (or exclusively) centred on the English language, with a focus on US language / communication patterns, and that this may influence how different people (from e.g. different cultures and contexts) interact with or react to this type of language. In addition, irrespective of any potential politeness embedded by design, LLMs may still include politeness-signalling words in their output purely based on the datasets they've been trained on