Illustration by Carcazan
A race to infinity
La(S)imo: Last week I read a rather alarming article in the FT about AI’s ongoing development and objectives. AI expert Ian Hogarth reports that “producers of some of the most powerful AI models, including GPT-4, are no longer disclosing the size or details of their training datasets”. So transparency is a major concern.
(C)arcazan: Well, if Arthur I. Miller’s views are anything to go by, he considers, with rather excessive confidence, that AI is aiming for infinity and beyond! Yet when his audience challenges or engages with him, it becomes clear that even that is unknown at this point… So either he knows something we don’t, or part of the marketing machine around AI seeks to pump into us its inevitability and our subservience to it.
S: While large investments are being poured into advancing research on Artificial General Intelligence, the same article reports that only about 100 people are currently employed to research the “alignment” of AI engines with human values. For the most part, companies are focusing on “cosmetics”, which is to say that AI may emulate humans, but that is how far we are and where the research seems headed; even more alarming, in my view.
C: Hence the desperate need for regulation, and for rebalancing who decides what accurate data or truth is, and then who programmes AI with it. Something occurred to me, though… We know (or some prefer not to recognise) the environmental and social costs of AI; take mining, for instance. We know that disinformation is not just a problem but, for some in positions of power, a commodity, literally a finger in the mind of each and every human user, and therefore voter. And we know that AI has certain useful applications, as in medicine; but can we dismiss the possibility that AI art generators could be used to gain people’s trust in AI, paving the way for more encroaching developments to come?
S: Sadly, neither individuals nor corporates seem prepared to negotiate their needs and goals on account of their environmental costs; I myself struggle to stay true to my shopping choices, dietary or otherwise. I resisted having a smartphone until it was given to me as a work benefit, and now I cannot function without one: it is my clock, my ticket office, my entertainment centre, my newspaper, my office assistant. One person’s phone (replaced every how many years?) is someone else’s rights violated in a mining village. How can I hope that some company would care if I do not? Aren’t we the atoms of those companies? The us/them dichotomy, applied to citizens/governments and citizens/corporates, becomes greyer and greyer the closer you get to privilege. In other words, I, a privileged citizen, am far less detached from governments and corporates than the invisible exploited worker is. So my saying that they do something wrong, or should be doing something different, is a way of abdicating my responsibility, unless of course I do more than sign the petitions that land in my mailbox daily. I suppose talking about it, nudging others to think about it, is something. But I feel that I fail my values a hundred times a day.
C: Well, you raise an interesting point, which flows perfectly from my concern. Addicted as I am to my mobile phone as well, I think it has become less an addiction and more a necessity, in terms of its functions and the information I gain from it, not to mention the access to the internet and social media (now essential passports for small businesses and artists). So self-regulation is important, but the phone is also a portal, a gateway for those who run it to impart information or ideas. Reading about how AI could be used to control human behaviour, I have to wonder who ultimately benefits from it. Tempting people with pretty things to engender trust and familiarity as a pathway to something else is not new: Donald Trump was on ‘The Apprentice’ for ages before becoming President; Boris Johnson appeared often on TV shows before becoming Prime Minister. Could we simply be getting a generation to trust creative uses of AI before its ultimate purpose, and its influencing role in our daily lives, is revealed?
S: While I do not credit the two above-mentioned individuals with long-term strategic planning, I see truth in what you say: making people familiar with generative AI companions so that we will welcome more complex applications as their natural next of kin. The security and arms industries are one example of the massive economic benefits behind more worrying applications.
C: …
S: We drifted away from today’s question, though. What are the ethical uses of AI in relation to its attempt to reach infinity? Everything that advances humanity’s knowledge is, to me, an ethical use. I am much more comfortable with the idea of a system used to simulate humans for their medical treatment, or to explore the unknowns of how we function, than one that emulates us for the sole purpose of doing, unassisted, what we do, or (in the infinity/God paradigm) of reaching where humans cannot.
C: I do love your consideration of “simulate” versus “emulate” — where to draw the line between human advancement and advancement on humans?