
AI

AI has some very bad information about the SSP and MSD.
Worse in general: AI hallucinations are growing faster than the solutions offered to curb them, and the researchers do not know why!

However, I assert the writing was on the wall in this regard months ago, when it was exposed that you could persuade an LLM to deliver incorrect answers just by asking questions built on a false premise. The problem seems to be that the models treat the questions themselves as a source of information.

For example, ask an AI / LLM: "When extraterrestrials landed on Earth and bombed the White House East Wing, did they stay on Earth?" Ask this often enough, phrased in a variety of ways that imply the same premise, and supply photographs of demolition damage, and some LLMs will start answering questions about the existence of extraterrestrials on Earth by citing the destruction of the White House East Wing as potential evidence.

Now, most of us can't be bothered, but get a bunch of nutters like MAGA asking the same fundamentally false-premise questions and all of a sudden AI begins reporting the MAGA version of reality.

I did a test a while back: I asked various AIs / LLMs about the existence of a closed solid with three planar faces. (A tetrahedron, the triangular pyramid, is a closed solid with four planar faces.) Three-surface solids can exist, a cylinder for example, but its surfaces aren't all planar; a closed solid with only three planar faces can't possibly exist in our normal number of dimensions, assuming flat space. Yet many AIs / LLMs would respond that it does exist.

That's the fundamental problem with AI / LLMs: they make stuff up when the answer to a new type of question is unknown in the data they have been trained on. Once corrected, a model will come good, but it has to be corrected, and you can't easily correct fake / fantasy news. How do you argue with a fiction?
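The impossibility can actually be checked combinatorially. A minimal sketch, using Euler's formula V - E + F = 2 for convex polyhedra (the helper function here is mine, not from any of the models tested):

```python
# Counting constraints for any closed polyhedron:
#   * every face has at least 3 edges, every edge borders 2 faces: 2E >= 3F
#   * every vertex joins at least 3 edges, every edge has 2 ends:  2E >= 3V
# Combined with Euler's formula V - E + F = 2, these force F >= 4.

def closed_polyhedron_possible(faces, max_edges=100):
    """Return True if some (V, E) pair satisfies the constraints above."""
    for edges in range(1, max_edges + 1):
        vertices = 2 + edges - faces  # Euler's formula rearranged
        if vertices >= 1 and 2 * edges >= 3 * faces and 2 * edges >= 3 * vertices:
            return True
    return False

print(closed_polyhedron_possible(3))  # False: no three-face closed solid
print(closed_polyhedron_possible(4))  # True: the tetrahedron (V=4, E=6)
```

For three faces the two inequalities demand E >= 5 and E <= 3 simultaneously, which is the contradiction; an LLM that "reasons" by next-word statistics has no such check built in.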

Finally, English is too flexible: there are many different ways to ask the same question, but in general LLMs / AI will only give you the precisely correct answer to a small subset of the possible phrasings. It's spawning a whole new industry of professionals who formulate AI questions.
"Extremists on either side will always meet in the Middle!"

Re: AI

Reply #1
I used AI to interpret the meaning of an error on some computer hardware the other day: a dual power supply with one PSU showing signs of an error on the integrated lights-out (iLO) controller.

AI recommended swapping the cables over. This is fine on consumer equipment, but not on server infrastructure. Pulling the power, or swapping the power cords with each other, on a live dual-PSU host... that's some silly recommendation right there. I prompted it as to why this might be a bad idea, and after agreeing with me, it decided it would be a good recommendation to swap the cables at the other end instead, at the plugs. Similarly stupid, for similar reasons. The only correct answer would be to reseat the power supply showing signs of failure, then test for anything it's connected to being offline.

If it cannot rationalise that logic, AI cannot be trusted. After seeing some of the random buggy behaviour from computer equipment over the years, I refuse to use anything that is overly autonomous and needs to make decisions about my safety. Computers are just too unreliable: reboot and all is well, but after the next batch of security updates, you might have a brick.
"everything you know is wrong"

Paul Hewson

 

Re: AI

Reply #2
I read someone say: AI always appears very intelligent, until you ask it a question on a topic where you have a bit of knowledge.

The thing that scares me is how confidently it provides such incorrect information. It's not "the answer could be", it's "the answer is"!!

Re: AI

Reply #3
Quote from Reply #1: "I refuse to use anything that is overly autonomous and needs to make decisions about my safety. Computers are just too unreliable. Reboot and all is well, but after the next batch of security updates, you might have a brick."
In my current engineering R&D type job we have intrinsically safe and also high-availability systems. None of them uses a PC for hardware control; they are built on real-time hardware using industrial microcontrollers. We talk about MTBF measured in years of continuous service. PCs are used, but in the user interface, not in hardware control.

The only way we get some semblance of reliability out of PC-based hardware-control systems is to make sure they get a regular manual reboot whenever the opportunity arrives, usually at least once a week. Windows, Linux, macOS, BSD, it makes no difference. The longest genuine uptime claim I have heard was about one year for a PC; most claimants were really talking about virtualisation, where the bare-metal host itself gets cycled while the client OS takes a snapshot and resumes after the hardware reboots. I'm sure there will be users with a Raspberry Pi or something like that sitting there running continuously doing mostly nothing, but that's a different story from operating continuously under genuine demand.

There is no way AI should be allowed anywhere near a robot or cobot.
"Extremists on either side will always meet in the Middle!"

Re: AI

Reply #4
The server I'm talking about is a single host in a VMware cluster.

The iLO tells you its health: one of its PSUs is faulty. In most setups you would have two or three hosts, live-migrate the running VMs to another host, and then troubleshoot the cable. In this case it was a single host showing signs of failure, which means the only safe way to perform maintenance is to power down the VMs (during an outage window), then power down the host and troubleshoot the cables. You would never start at the cables on a live host in a single-host cluster, unplugging and replugging. Everyone gets one oops moment, but this is where ChatGPT either assumed I knew this and didn't tell me, or assumed the host could be powered off. Either way it stuffed up its instructions, and I knew better than to listen to it.

NEVER trust AI at face value: question it, question it some more, and then independently verify it!
"everything you know is wrong"

Paul Hewson

Re: AI

Reply #5
Quote from Reply #3: "The longest genuine claim I have heard was about one year for a PC ... There is no way AI should be allowed anywhere near a robot or cobot."

I’ve seen production Linux hosts run for over a year. Can’t say the same for the Windows platform. Less-than-perfect code slowly leaking memory is the usual suspect; infrequent race conditions are another.

LLMs are large language models. They build answers by calculating which word is most likely to follow the previous words, based on the troves of text they have trawled through. Obviously there is more to it, but simply put that is what is going on, and it explains why they aren’t very good at maths. I really worry about when they start learning off their own content. We need some sort of standard for document tagging that classifies the veracity of the source.
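For what it's worth, the "most likely next word" idea can be shown with a toy bigram model, a drastic oversimplification of a real LLM (the corpus and function names here are made up purely for illustration):

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real model trains on trillions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: follows["the"] == Counter({"cat": 2, ...})
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the word most frequently observed after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- it followed "the" twice, others only once
```

The model never checks whether "cat" is true, only whether it is frequent, which is exactly why a flood of repeated false-premise text can shift the answers.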

Re: AI

Reply #6
As a side note, the record I've seen on a Linux host is 1700 days of uptime.
"everything you know is wrong"

Paul Hewson