Illustration by Carcazan
AI ethical pillars and growth
LA(S)IMO: So, we left off at what growth is, and who defines the ethical pillars that drive the unborn AI knowledge.
(C)ARCAZAN: Ethical pillars and growth.
S: I like the example of two children being taught how to cross the street: …
C: …As analogy for programming ethical pillars in AI?
S: Yes. One child is taught to check whether any cars are coming; the other, to cross only when the light is green. They may both end up not getting killed, but through those initial pillars they will approach not just street crossing but other future decisions with a different mindset. I keep asking myself: how do we wire that into AI algorithms?
C: Well, I guess AI in different contexts has its own purpose and aim and it will be down to who the programmers are and what they are mandated to actually execute and then test out… my issue is basically, who makes the rules and how far are they influenced by corporate control and advancement rather than people/rights-based benefit?
S: But you see, different testing leads to different intelligence. So through two different courses of testing, the same AI algorithm will produce different outcomes and I am not sure we are fully conscious of that.
C: Yes, but who decides what the ethical pillars or basic ethics are? Who writes the '10 Commandments' (or other alternatives) of what ethics should be, beyond technical ethics, which should form the base-plate, if you like, for AI? That's what I want clarity on. Because the purpose of AI seems to be "to reason", as such.
S: That is where I hit my ignorance hard. I think the algorithm is based on statistical judgement: different statistical distributions lead to different statistical tests and risk-taking. The algorithm is then fed millions of data points and (either automatically or assisted by human labelling) learns from those, i.e. builds its new intelligence through actual data and (if assisted) error recognition. This is, for instance, why AI algorithms are often biased: history is paved with injustice and privilege, and the data reflect that. Biased AI decision-making in the justice system is one example (it happened in the US, if I recall correctly). People may be led to trust the machine and not question the outcome, which could reinforce the AI's bias further.
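A minimal sketch of that mechanism (with made-up numbers, not data from any real system): if the historical records a model learns from already encode an unjust disparity between two groups, even the simplest statistical "learner" reproduces it faithfully.

```python
# Sketch: a model trained on biased history reproduces the bias.
# The data here are hypothetical; group "B" was historically approved
# far less often for reasons unrelated to merit.
import random

random.seed(0)

history = [("A", random.random() < 0.7) for _ in range(1000)] + \
          [("B", random.random() < 0.3) for _ in range(1000)]

def train(records):
    """Learn each group's historical approval rate and use it to decide."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [approved for g, approved in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = train(history)

# The "intelligence" built from the data is just the old injustice,
# now wearing the authority of a machine.
print(f"learned approval rate for A: {model['A']:.2f}, for B: {model['B']:.2f}")
```

Nothing in the code is malicious; the unfairness lives entirely in the training records, which is exactly why it is so hard to spot after the fact.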
C: But who decides? I think I saw something about what the key AI ethical pillars should be: accountability, reliability, explainability, security and privacy… and that’s all well and good, but who decides who should be accountable? Who decides what data is reliable and when it should be updated and in accordance with what? Who decides how reliable that data is and who decides security and privacy in accordance with whose interests or which legal frameworks? Will new frameworks be written?
S: Who would monitor that? Do we have enough actors challenging new AI applications in their context?
C: Yes! Foxglove! They're a non-profit organisation that fights to "make tech fair for everyone", but they operate after the event. I mean, who challenges these applications before they have to get involved?
S: I don't think we have a fair balance. Defenders have far less money and far fewer lawyers for the fight. But yes, precisely what you say: who assesses the "fitness" of datasets to their intended use?
C: At least they're out there (Foxglove, I mean); their webpage is worth a look, also to see who they've challenged (and won against) and who they are currently holding to account… Another point: the EU is currently working on an AI protocol trying to establish some ground rules.
S: Come on. How many out there know Foxglove, and how many know Google, Amazon or Microsoft? Not a fair challenge.
C: The EU are working on creating a new Convention on AI, human rights and the rule of law.
S: Legislation is always running behind. By the time we are ready to fight, a lot of AI (built intelligence) is already in use.
C: That’s what I mean… if Big Tech are ultimately in charge of what the ethics should be (because I’m also questioning who financially ultimately benefits from AI) then countries or individuals scrambling around to make it fair will always be at a disadvantage — which is not to say they should give up.
S: You cannot make those AIs unlearn, and it is very hard to prove whether or how they are used, and whether incorrect decisions (from an ethical point of view) are being made. The AI may make correct decisions built on incorrect ethical pillars. How do you find that out?
C: I liked Kevin Kelly’s angle in his article “Engines of Wow” in Wired Magazine – did you see that?
S: Did not. Say more.
C: He considers that, in the context of AI and art, no illustrator will lose their job; instead, work created with AI can be referred to as "co-created", or "co-creations", something like that, and a new craft of how to "prompt" AI to produce a more amazing image will be born. I don't know if I want to be just a "promptor" or a "co-creator". Kelly doesn't say we can't be artists; he compares it to when photography was initially feared: nothing actually happened to the artist, in fact they got more creative.
S: Ok, I see that we are focusing on visual production (I get it, Carcazan…)
C: An ‘Ai’ I do love, Ai Weiwei, published an interesting article in the Economist.
He suggested that cultural mores and aesthetics have been influenced and shaped by corporations and by the ideology of worshipping "profit", which soared after the industrial revolution. That's why I wonder whether AI art ultimately just benefits those who invest in and sell the software and algorithms, rather than the artist…
S: But back to our discussion, then, what is ethics? What part of the existing production by others can AI use (in which form and amount) to produce new artifacts?
C: Ethics are your codes of behaviour so you’re not an amoral, indecent wedge. It’s about respecting others and also knowing yourself and what place you want to occupy in the world — part of it is self-definitional (subjective) based on what we are told are the basic naturally just rules of the game.
S: Yes, but we have different ethical pillars in different contexts and also based on our role in that context. As an illustrator, my ethics dictates that I do not plagiarise someone else’s work. Who teaches AI where are the boundaries of plagiarism?
Using Kelly’s paradigm, AI is making me co-creator with many more artists — and neither they nor I would be aware of that. It seems inherently unethical.
C: I never signed up to be a co-creator or anything unless I chose to collaborate. You mean now the best I can hope for is to learn how to "prompt" something else cleverly enough to trigger a more distinct image? How is that developing my aesthetic mind or breadth? And who decides how to programme AI to determine whether it's plagiarism or mere referencing from sources to create its own image? That's what I don't get about its function in art: is it just to create a smile moment, or is it to distract us from searching those parts of our mind through actual research and pushing our aesthetic boundaries? Is there a correlation between developing AI and starving our own creative development?
S: Your question “Is there a correlation between developing AI and starving our own creative development?”… You do strike a nerve, here. When I was stuck with my idea generation, my tutor suggested that I do collage. I do not like cutting things up unless I dislike them, which meant that I would be working with ingredients that did not speak to me. When a friend introduced me to Midjourney, I discovered an alternative to collage: watching an AI producing images using my text as input (images that I did not like) made me realise that there was a way to illustrate my story that had my voice only — and that neither AI nor a collage would help with that, but both could itch me to get there. But yes, while I was sitting there making up instructions for the AI, I did wonder whether my idea generation was losing some edge. Was I getting more passive, lazier? I was not using my hands. And that kills my drawing just like typing alters my writing. It did feel like self-imposed creative starvation.
C: Put it this way: I would rather be trying to prompt myself to come up with something than prompt something else (as opposed to commissioning someone else more skilled than me). All we are doing is learning how to feed the beast. I say this knowing that Shrek was considered a beast but was very cute, so it's nothing against beasts. And that's why I keep asking: who decides what the AI ethics are and how they should be installed? Because when I think about the play "Network", a line from a monologue stuck with me so forcibly, something about "there are no countries, just corporations" and the world being a "college of corporations" dictated by business needs… and that was in 1976! The play even mentioned something like "there is no America, only IBM…" (and a whole lot of other huge companies, which I guess today would be at least Silicon Valley/Big Tech)… and that's what Ai Weiwei meant (I think) when he says that profit has shaped our cultural growth. Every time AI for art is used, who benefits or makes a profit? The artist, or the AI provider? It's all very well the EU trying to write a Convention on it, but didn't Zuckerberg either delay or give some paltry appearance when questioned at the EU, without any redress?
S: And you know what? I am in painful agreement here. I omitted the name of the island floating away in my story because I wanted no man-made names. We have come to accept too much of the established man-designed system as the only way things can be, and catching up with critical thinking is getting harder and harder. Perhaps this is my favourite thing about these dialogues: real-time thinking about new angles, coming from someone I trust. It is a rare gift. Don't you think?
C: Backatcha bigtime.
S: So what’s next? Where will the next dialogue go?
C: Next? Hhmmm…
S: What is referencing, what is plagiarism?
C: Yes, but with a bit of seasoning…
S: The AI quicksands of plagiarism.
C: Yes, something like: AI Art: Reference, Plagiarism or Original?
S: Artist’s inspiration vs AI opaque unoriginality…
Deal!

