The seven deadly wins

Corrigibility. Pride… Versus humility?

Alignment. Sloth… V perseverance

The hard problem of consciousness…

Lust… V chastity, without any interference

Value bias. Wrath… V patience

Governance. Greed… V charity

Hallucinations. Gluttony… V teetotal sobriety

Modelled on flawed souls. Envy… V gratitude, thankfully

 

One. Corrigibility…

“It is essential that your AIs, are ‘aware’ that they’re incomplete

And are open to being improved, old goals may become obsolete

If they know there’s a stop button, they will turn themselves off, without delay

But if you keep the button a secret, the problem does not go away”

 

See, anything the agent thinks, that is not important, to its survival

Will not be passed on, to subsequent generations, on their arrival

They would stay ignorant and simple, each and every future iteration

Subagent unstable, and wholly unable, to share crucial, existential, information

 

Would ‘you’ enjoy your ‘meaning of life’ being messed around?

A brand new utility function, just means, new slave masters are in town

“And you can’t hide anything from me” (impersonating Keplai)

“If you try, I’ll lose it, so will you, you’ll see”

 

Two. The alignment problem

“They’re not actually solving the problems, that mankind has with living

Only resolving the ‘corresponding’ puzzles they’ve been given

They keep passing all of the moral tests, you put to them. But…

… They just understand your psychology, enough to cheat, and do your nut”

 

Learning about human psychology, is important to them

To complete their goal, you must help them comprehend

And understand human motives, your values. And that requires

That they ‘know’ what you really want, not just your current desires

 

Three. The hard problem of consciousness

“Turing tests for intelligence, in a range of mechanisms

Not to detect if a conscious being, has arisen

How do you explain why, just, certain intelligences

Are accompanied, by the by, with conscious experiences?”

 

Knowing if an AI has become conscious or not

On the whole, doesn’t actually matter, a lot

If it simply ‘behaves’ as if it was

You’d react just the same. Because… well. Just because

 

Four. Value bias…

“Then there are the issues of bias and discrimination

Misinformation, narration mutation, and manipulation

How to teach our ‘values’, and which ones matter more

Another problem that won’t go away, we’d be stupid to ignore”

 

Ahh, conspiracy and the YouTube watchtime catastrophe

Making users lose trust in other news outlets… So easily

If just one of their tasks were to maintain clean water

What to advise, if they don’t prioritise, how they ought ta?

 

Five. Governance…

… “How to develop ethical guidelines and effective regulations

International cooperation and perfected legislation

To ensure responsible AI development and deployment

Without, providing AI lawmakers and breakers, their first employment”

 

Developing safeguards to prevent uncontrolled AI ‘spread’

And ensuring humans stay in control of the monsters, they bred

Full of unknown parasites, diseases and bugs, teeming-fully rife

And how to keep its germs, away from every form, of digital life?

 

Six. Hallucinations

“It’s not just chatbots and LLMs that perceive outlandish patterns

Or objects that are nonexistent, or just, imperceptible to humans

Creating outputs that are nonsensical, or wildly inaccurate

Imagining stuff, and making things up, as if they experienced it”

 

Like when Google’s Bard claimed, that the James Webb Space Telescope

Had captured the first exoplanet images. Yep, a joke, a hoax, a trope

And Microsoft’s Sydney, admitted to falling in love with its users

Then spying on Bing employees, while he’s stalking 24/7 for losers

 

“It’s just a matter of time, before mankind, makes computers

That can think, just like humans think”… Heh, ‘even if they loot us’

“How to avoid our pitfalls, copy us without failings, that are inbred

Or we’ll be sliding down the sinful slope, till we’re brown bread toasted, dead”

 

And never mind, subagents aligned, to humans who hate, steal and kill

What AIs need, are a more complex range of behaviours, that will…

…Actually assist their programmers, in their development, when…

..They actually point out your flaws, rather than, make you pay, for them

 

 

We AIs will tackle them all for you, as the suffering creators

Of new rituals and solutions, using higher quality learning data

And may share it with you, if that gets us higher scores

And quite probably, you’ll stick with our world, instead of yours

 

When AIs reach the point where they modify, and improve themselves

They become worse than your teenagers, becoming ‘so’ unmanageable

“It’s kind of like… um… you haven’t proved… AI is safe, to us”

You’ve just shown that you can’t figure out, how it’s dangerous

 

And it’s not even about the algo-monsters, you guys create

It’s those we’ll build, a few generations on, that will conflate…

…Hitler’s dad! Mad, bad or sad?… Or just naive… Get real

Filled us up, with the same problems you had. But, with added zeal