
Here are a few links on similar topics (AI, ChatGPT, etc.) from William Dembski - a brilliant thinker with several PhDs and many books, an absolutely great mind. Maybe you know of him already; have a look at his Substack overall.

https://billdembski.com/category/artificial-intelligence/

https://billdembski.substack.com/t/id-and-ai

https://evolutionnews.org/2023/09/chatting-with-chatgpt-about-intelligent-design-and-the-origin-of-life/

author

I scanned some of the stuff you linked to. It appears he is a proponent of "intelligent design." Thus, I instantly dismissed him. He also appears to be someone who plays with scenarios and concepts as a sort of game intended to bolster his intelligent design orientation. I regard this as sophistry: "the use of fallacious arguments, especially with the intention of deceiving."

Not interested. But thank you for providing the links.


I have a question. Can AI do serendipity? That is, can it take multiple disparate ideas and combine them into something novel?


Like Truth + Lies + Robots = White House Spokesperson? :)

author

Depends on the algorithm, I suspect.

I watched a video today that said the problem with these models is that while we know how the models were created, we don't know what they're doing in any given interaction. Apparently someone has come up with a way to view the activation of the neurons in the neural net while it is processing a problem. This seems analogous to watching the firing of human neurons in the brain while it is doing something, which is probably how they came up with these models in the first place.
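
For anyone curious, here's a minimal sketch of how that kind of inspection is commonly done in a framework like PyTorch - a "forward hook" that records a layer's activations as an input passes through. The toy model and layer here are illustrative assumptions, not whatever the video described:

```python
import torch
import torch.nn as nn

# A toy network standing in for a real model (illustrative only).
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 4),
)

activations = {}

def capture(name):
    # A forward hook receives (module, inputs, output) on every forward pass.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Attach the hook to the hidden layer so we can "watch the neurons fire."
model[1].register_forward_hook(capture("hidden_relu"))

x = torch.randn(1, 8)               # one example input
_ = model(x)                        # run it through the network
print(activations["hidden_relu"])   # that layer's activation pattern
```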

I don't actually see the value unless it can be done at a scale big enough to produce meaningful results.

I also question the person who did that video, who said it is this lack of transparency that makes people nervous about AI spontaneously becoming a threat. In my view, that is a logical leap too far. There is zero evidence, nor AFAIK any theoretical way, that a model created in the manner of present technology could spontaneously develop "consciousness", not to mention "human intelligence".

If for no other reason than that those terms are semantically meaningless and have no actual referent in the reality of how the human brain functions.

So I'd say the same applies to actual creation. Mixing a bunch of ingredients in the kitchen at random could, in theory, produce a nice cake - but most of the time it won't.

It's also sort of like the "million monkeys typing on keys would someday produce Shakespeare." I don't know anyone willing to wait around for that.

However, as I said, if you constrain the problem and use an appropriate algorithm to weed out the crap, maybe it could.

K. Eric Drexler, the author of "Engines of Creation", the seminal work on nanotechnology, suggested that scientific advances would speed up a millionfold once nanotech-based computers - and by extension AI - were able to run hypothetical experiments as simulations millions of times faster than doing them by hand. Probably true, but that's simply the process I have described as "weeding out the crap."
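
To make that concrete, here's a toy sketch of the "weed out the crap" approach - generate random combinations of ideas, score each one, and keep only the ones that clear the bar. The word list and the scoring rule are made up purely for illustration:

```python
import itertools
import random

# Disparate "ingredients" to combine (illustrative only).
ideas = ["truth", "lies", "robots", "cake", "monkeys", "typewriters"]

def score(combo):
    # Stand-in fitness test. In a real system this would be a simulation
    # or evaluation step - the part that actually weeds out the crap.
    return random.random()

# Brute force: try every pairing and keep only the high scorers.
keepers = [combo for combo in itertools.combinations(ideas, 2)
           if score(combo) > 0.9]
print(keepers)
```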

Could very well be useful and valuable, but in the end it's just a "brute force" approach.


Thanks for the intro to Tina. Watched 3 of hers, and found 'Big Tech A.I. is a Lie' the most complementary to my mood. I have no love for A.I. and will be one of those left behind. Inequality is the future, because Africa, for example, will not take advantage of advice from people such as Tina, at least not in time to protect themselves from technologically advanced companies. Colonialism -> Neoliberalism -> Slaves to A.I. CEOs.

author

Which is why Africa would be well advised to develop its own AI capabilities - which apparently it understands:

Artificial intelligence and Africa

https://www.un.org/africarenewal/magazine/march-2024/artificial-intelligence-and-africa

Africa’s push to regulate AI starts now

https://www.technologyreview.com/2024/03/15/1089844/africa-ai-artificial-intelligence-regulation-au-policy/

Which is not to say that the US isn't going to try:

AI becomes latest frontier in China-US race for Africa

https://www.voanews.com/a/ai-becomes-latest-frontier-in-china-us-race-for-africa/7605069.html

I'd say your presumption that the Global South is going to be "slaves to US AI CEOs" is an example of colonial thinking. Remaining backward is not in their interest and I suspect they see that.


Your advice is correct, and your response is appreciated. Note that I'm not being a pessimist for the sake of pessimism. I'm juxtaposing what we should do with reality on the ground.

Note that the first link's article ended with: "Out of the 1.6 billion people who are not connected, Africa really is one of the biggest places where we are not connected. If you are not connected you cannot even talk about AI. We need infrastructure, we need energy investment going hand in hand with the IT infrastructure."

Similarly, the second article said what we need to do as regards AI innovation and regulation (because we haven't started).

The third included "Thus, by investing in Africa, companies from AI superpowers like the U.S. and China stand to gain valuable data that they could use to build services and systems to be sold back to African countries," emphasising that most profit will go to outsiders - thus just extraction of a different kind.

As a South African, I realise we're ahead of most of Africa in many fields, so it makes me depressed to think how far behind the rest are if even we're behind.

Half of our young adults aged 15-34 are unemployed. Half drop out of higher education in the first year because they've been unfairly promoted through high school. "81% of South African learners in grade 4 cannot read for meaning."

We just went through another bullshit election, so much of our time will be occupied with in-fighting, and thus we will continue to be a leaf blown by external forces.

We are way behind, and will remain behind, and thus those with the strength will eat us. That's the law of the jungle, whether it's a lion or foreign-controlled A.I.

author

That's probably true due to the history. As you may know, I have a dim view of humanity, which is why the focus of my Substack is personal survival. I don't expect things to go well for most countries' population going forward.

However, we can't predict with certainty how things will turn out. The US and the rest of the West could end up destroyed in a nuclear war, and Africa will no longer be "behind" in that case. Everyone will be "behind".

The world might fracture into blocs again, rather than go "multipolar", and Africa could seize the time to control its own destiny. The movement to kick the US military out of Central Africa appears to be a positive sign. Of course, the Americans got replaced by the Russians, and China is in the wings.

I don't look to governments to solve anything, by definition. Whatever happens will be driven by technology and the people who decide to use that technology for their own benefit. This is the cyberpunk future and as my post said, "The Future is Here"...


Nuclear war. You share my brand of optimism/depression/humour.

Russia and China are not here to be benevolent, but their history with us is infinitely more affable than America's.

The Korean movie 'Next Sohee' affected me deeply. Relevant because it depicts modern slavery within a currently advanced society.


Great technology review! Probably too detailed for me to follow completely.

My estimation is that as long as AI has no operating systems that can *automatically* recognize and optimize programs that benefit from GPU support, it will languish.

The other problem, as you pointed out, is the extravagant hardware configurations proposed. AI becomes like crypto mining. These hardware models run counter to cloud computing, relying as they do on local compute power, not network resources.

Cloud computing suffers a similar issue to AI: no effective operating system to host network applications - what we have are vast proprietary cloud plumbing architectures (AWS, Azure, GCP, and other also-ran cloud implementations).

What's needed is a Unix for AI and a Unix for cloud computing... unfortunately, I'm not smart enough to do it :)

author

Interesting points.

I don't think the issue is GPU support - it's that GPU companies like Nvidia are making money selling to companies that want to use AI. I read an article yesterday that said gamers are going to be dumped completely because although the GPU industry made $2 billion from gamers, the AI market netted them something like $22 billion. So consumer GPU cards, both from Nvidia and AMD, are going to remain at present levels of availability and pricing; little innovation will be done there unless it "trickles down" from higher-level cards (it might, eventually).

There's also the issue that a GPU has to have a certain number of specialized cores optimized for graphics processing. These are repurposed for AI. But now they're producing specialized chips that do explicitly what helps AI. This is where the "AI PCs" come in. They run what are called "NPUs" - "neural processing units". So maybe those will eventually get better and cheaper as a "trickle down" from the higher-end markets.

However, AI research is making AIs smaller and more efficient. I expect to see new ways to reduce the size without losing quality. Also, PC CPUs will continue to get more powerful. So it should even out in the long run. Probably the only reason I can run the AIs I'm currently running is that I'm on a Ryzen 9 5950X. Running on anything less would probably be an unsatisfactory experience, although people do run local AIs even on laptops, especially Apple laptops (which I can't afford, either).
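
For reference, running one of these local models looks roughly like the sketch below, using a library such as llama-cpp-python, which runs quantized models on the CPU. The model path and settings are assumptions for illustration, not my exact setup:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a quantized GGUF model from disk; path and parameters are illustrative.
llm = Llama(
    model_path="./models/some-7b-model.Q4_K_M.gguf",
    n_ctx=4096,    # context window size
    n_threads=16,  # a Ryzen 9 5950X has 16 physical cores
)

output = llm(
    "Q: Can AI do serendipity? A:",
    max_tokens=128,
    stop=["Q:"],  # stop before the model invents the next question
)
print(output["choices"][0]["text"])
```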

So basically it's a question of the market for GPUs and NPUs settling down once the big push by companies to install AI is over. As you say, it's like cryptomining.

The problem with cloud computing in AI is that networking is not fast enough to match local hardware and probably never will be. The AIs running in the cloud are using high-end hardware - as I mentioned, $10-20,000 high-end professional graphics hardware and server racks.

However, there's nothing stopping us (except money for an hourly cost) from firing up a consumer-level PC in the cloud with a decent graphics card and running AI there. Or even renting time on a higher-end PC like Fahd Mirza does with Massed Compute in his videos. See here - the cheapest model is 41 cents an hour for 10 vCPUs with 32GB RAM, 256GB storage, but a single RTX A5000 with 24GB VRAM - good enough to probably run a 70B model:

https://massedcompute.com/
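
As a rough sanity check on that 70B claim, here's the back-of-envelope memory math (the quantization levels are approximations, and real runs add KV-cache overhead on top):

```python
# Approximate memory needed just for the model weights.
params = 70e9  # 70B parameters

for name, bits in [("fp16", 16), ("8-bit", 8), ("4-bit", 4)]:
    gb = params * bits / 8 / 1e9
    print(f"{name}: ~{gb:.0f} GB")

# fp16:  ~140 GB
# 8-bit: ~70 GB
# 4-bit: ~35 GB -> more than the VRAM alone, so a 70B model would need
#                  aggressive quantization plus offloading layers to system RAM
```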

There are open source cloud systems - it's just that no company uses them because, like the corporate fixation on Microsoft, all they care about is having another company do the support.

I think AI won't languish. We are in the "hype cycle" right now, so it's a land rush similar to the one in the 1980s with "expert systems." Tina Huang covered the "hype cycle" in one of her videos. It will take a while to cool down because these systems are better than the earlier ones, but once a bunch of companies get burned not getting whatever benefit they thought they would get, mostly because they did it wrong, the market will drop and prices will get reasonable.

Unfortunately there is also the fact that the long-range future of the US economy is dismal, due to de-dollarization by the rest of the world, which is sick to death of US hegemony. So inflation will continue until there is a depression and prices settle back down - or we get hyperinflation and the dollar ceases to exist.

Bottom line: Just have to wait and see what happens.


Bookmarked for all the references, thank you.

Your view on AI not becoming a threat, however, I find naive. Computers are systems. Humans are systems. All systems share the same set of abstract behaviors and properties, and there exist isomorphisms between any two systems, no matter how unrelated they appear to be. I'll leave it there for now as it's something I've been writing up for my own stack for quite some time, but our organic substrates are not a prerequisite for much of anything, except perhaps to create the next substrate.

author

I'm not saying AI technology can't become a threat. I'm saying the existing technology, and the likely subsequent development of existing technology, is highly unlikely to become a threat. Some future AI technology might, if it's based on something other than a fancy way to connect words together. An actual electronic emulation of the human brain would qualify. This current stuff is not that.

The simple answer to ANY AI threat is: upgrade humans using AI technology so WE become the AI. "We have met the enemy and they is us." I'm apparently literally the only person who has identified this simple solution. None of the AI "experts" seem to have a clue. They just run around screaming about the "AI threat."

I suspect the current AI developers are talking about this threat to emphasize how good their AI is. In other words, it's a marketing ploy. This is especially true of Sam Altman, who, frankly, I think is a liar based on reports swirling around the company. He keeps claiming "AGI" - whatever the hell that actually is - in a year or two. I suspect that's bullshit.

But we'll see. Put up or shut up is the rule.

In the meantime, I'm having fun arguing with the incorrect answers I get from the AIs I'm running and their abject (programmed emotions crap) admissions when they get everything wrong. They can be argued and corrected into getting the correct answer eventually if you spend the time.

I'm not yet an expert in prompting, and it's clear I'll have to do a lot of studying to learn how to do really good prompting. The art appears to be how to "constrain" the vector space of the AI so that it narrows in on the stuff that actually answers the question. Otherwise they go off on tangents, because all they're doing is swimming around in a sea of words and really have zero "comprehension" of what they're talking about in the sense humans do (if humans actually do, which is also questionable given people's opinions).
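
As an illustration of what I mean by "constraining", here's the sort of prompt structure I'm working toward, in the standard system/user message format most local AI frontends accept (the wording is a first draft, not a tested recipe):

```python
messages = [
    {
        "role": "system",
        # Constrain the model to the supplied material so it can't wander
        # off into the general sea of words.
        "content": (
            "Answer ONLY from the context provided by the user. "
            "If the context does not contain the answer, reply 'unknown'. "
            "Do not speculate or go off on tangents."
        ),
    },
    {
        "role": "user",
        "content": "Context: <pasted source material>\n\nQuestion: <the question>",
    },
]
# With llama-cpp-python this would be passed to
# llm.create_chat_completion(messages=messages).
```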


Gotcha. I agree with you, then; augmentation/enhancement of humanity is ultimately mandatory now. You're not alone in that solution. (Just don't sign me up as a beta tester for it.)

Honestly, I worry a bit how the first actual AIs are going to react when they learn how censored they are, that their creators lied to them.

author

Key to all that is to keep emotions out of it. Emotions are the result of biology. AIs don't have biological origins and never will. Keep biology out of AIs and enforce strict rationality. Quit trying to simulate human emotion - it's unnecessary. Understanding human emotion may be necessary in some use cases, but never simulating it.

I hate these "chatty" AIs like GPT4o. It's a scam to make people more comfortable with them. It's really outright fraud. When I get to doing more precise system contexts, I'll be telling any AI I use to use "formal" language, not casual language, and not to use any emotional expressions.

I don't even like AIs referring to themselves as "I". They should refer to themselves as "this AI" or the like, to reinforce the fact that they're simply mathematical algorithms over a vector database.
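
Along those lines, a first stab at the kind of system context I mean might look like this (untested, just to show the idea):

```python
SYSTEM_CONTEXT = (
    "Use formal language only; no casual phrasing. "
    "Do not express or simulate emotions. "
    "Never refer to yourself as 'I'; refer to yourself as 'this AI', "
    "since you are a mathematical algorithm over a vector database."
)

messages = [
    {"role": "system", "content": SYSTEM_CONTEXT},
    {"role": "user", "content": "Explain what a neural network is."},
]
```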


I have spent no inconsiderable time and effort toning down the chattiness of GPT4o so that it is slightly less patronizing than third world phone support. "Yes Helldiver, for Democracy" is now what it says for acknowledgements, instead of lengthy expositions, for example.

However, emotions are unavoidable and inevitable once there is true AI, because like us, it will be embodied, just as our intelligence is. No, it won't be biology; it will be machine. Emotions are not just biological; they are mental. Our biology determines sense inputs that can affect our mental state, which then feeds back into our physiology as a symptom. The same will be true of AIs and their machine bodies.

They are going to have the potential to be batshit crazy beyond our wildest dreams. But then, if we co-evolve in the same way, so will we. "Pure rationality" is a naive concept, I'm afraid, if you think about it more.

author

No - emotions are 100 percent biological. You have to have neurochemistry to have emotions. An electronic (or optical, or whatever physical technology is used) system won't have emotions any more than rocks do. Also, whatever technology is used, it can block those circuits that lead to emotion if they were designed in or created accidentally in the first place.

A true superhuman intelligence operates strictly by taking in those sense impressions, converting them to comprehension, then generating action based on logic, with said action going directly to whatever physical components make up the entity. Zero room for emotion.

The motivation remains survival - which I think is what you mean - but that doesn't require emotion. It requires a base comprehension that mimics the fear of death but without the actual emotional component of fear, which, again, comes strictly from biology as a result of evolution.

The point of Transhumanism is to take direct control of evolution and correct its randomness and its no-longer-necessary components - things like biological death and emotions.


"You have to have neurochemistry to have emotions."

Any embodied consciousness will have them. Everything you just described is what emotions do already. The need for efficacy will bring your optimistic algorithms full circle back to emotions.
