

We are prediction machines, but nothing like chatgpt. Current AI has no ability to learn, adapt, or even consider the future.


It can at least get one unstuck, past decision paralysis, or give an outline of an idea. It can also be useful for searching through data.


If this works, it’s noteworthy. I don’t know if similar results have been achieved before because I don’t follow developments that closely, but I expect biological computing to attract a lot more attention in the near-to-mid-term future. Given its efficiency, and the increasingly tight constraints environmental pressures impose on us, I foresee it eventually eclipsing silicon-based computing.
FinalSpark says its Neuroplatform is capable of learning and processing information
They sneak that in there as if it’s just a cool little fact, but this should be the real headline. I can’t believe they just left it at that. Deep learning can not be the future of AI, because it doesn’t facilitate continuous learning. Active inference is a term that will probably be thrown about a lot more in the coming months and years, and as evidenced by all kinds of living things around us, wetware architectures are highly suitable for the purpose of instantiating agents doing active inference.


I don’t know about google because I don’t use it unless I really can’t find what I’m looking for, but here’s a quick ddg search with a very unambiguous and specific question, and from sampling only the top 9 results I see 2 that are at all relevant (2nd and 5th):

To answer my question, I need to first mentally filter out 7/9 of the results visible on my screen, then open both of the relevant ones in new tabs and read through lengthy discussions to find out whether anyone has shared a proper solution.
Here is the same search using perplexity’s default model (not pro, which is a lot better at breaking down queries and including relevant references):

and I don’t have to verify all the details because even if some of it is wrong, it is immediately more useful information to me.
I want to re-emphasise, though, that using LLMs for this can be incredibly frustrating too, because they will often insist assertively on falsehoods and generally act really dumb, so I’m not saying there aren’t pros and cons. Sometimes a simple keyword-based search and manual curation of the results is preferable to the nonsense produced by a stupid language model.
Edit: I didn’t answer your question about what counts as malicious, but I can give some examples of what I consider malicious, and you may agree that it happens frequently enough:
etc.


Maybe I can share some insight into why one might want to.
I hate searching the internet. It’s a massive mental drain for me to try to figure out how to put my problem into the words that others with similar problems will have used before me - it’s mental processing power wasted on purely linguistic overhead instead of on understanding and learning about the problem.
I hate the (dis-/mis-)informational assault I open myself to by skimming through the results, because the majority of them will be so laughably irrelevant, if not actively malicious, that I become a slightly worse person every time I expose myself.
And I hate visiting websites. Not only because of all the reasons modern websites suck, but because even if they are a delight in UX, they are distracting me from what I really want, which is (most of the time) information, not to experience someone’s idiosyncratic, artistic ideas for how to organise and present data, or how to keep me ‘engaged’.
So yes, I prefer a stupid language model that will lie about facts half the time and bastardise half my prompts if it means I can glean a bit of what the internet has to say about something, because I can more easily spot plausible bullshit and discard it or quickly check its veracity than I can magic my vague problem into a suitable query only to sift through more ignorance, hostility, and implausible bullshit conjured by internet randos instead.
And yes, LLMs really do suck even in their domain of speciality (language - because language serves a purpose, and they do not understand it), and they are all kinds of harmful, dangerous, and misused. Given how genuinely ignorant people are of what an LLM really is and what it is really doing, I think it’s irresponsible to embed one the way google has.
I think it’s probably best to… uhh… sort of gatekeep this tech so that it’s mostly utilised by people who understand the risks. But capitalism is incompatible with niches and bespoke products, so every piece of tech has to be made with absolutely everyone as a target audience.


We’re all living in amerikka
koka kola
santa klaus


Yeah, I don’t know why anyone knowledgeable would expect them to be good at chess. LLMs don’t generalise, reason or spot patterns, so unless they read a chess book where the problems came from…


Not well, apparently.


Because they have no basis on which to decide where to go. It’s like buying toothpaste but there are hundreds of options, none of which you know anything about, so you get whichever seems most popular. It minimises the risk of ending up with something which is unpopular for good reasons.
Firstly, I’m willing to bet only a minority of users regularly use those buttons. Secondly, you’re talking about the most popular LLM(s) out there. What about all the other LLMs that almost nobody is using but that are still being developed/researched? Where do they find humans willing to sit and rate all the garbage their LLM puts out?
I know LLMs are used to grade LLMs. That isn’t solving the problem, it’s just better than nothing because there are no alternatives. There aren’t enough humans willing to endlessly sit and grade LLM responses.
For that you need a program to judge the quality of output given some input. If we had that, LLMs could just improve themselves directly, bypassing any need for prompt engineering in the first place.
The reason prompt engineering is a thing is that people know what the expected and desired output is and what isn’t, and can adapt their interactions with the tool accordingly – a trait uniquely associated with adaptive complex systems.
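To illustrate the circularity, here’s a toy sketch – generate() and judge() are hypothetical stand-ins, not anyone’s real API. If a reliable judge(input, output) existed, you could skip prompt engineering entirely and just search over candidates:

```python
# Toy sketch: automatic "best of n" selection, which only works
# if a trustworthy judge exists. Both functions are hypothetical.
import random

def generate(prompt: str, temperature: float) -> str:
    # Stand-in for a language model producing one candidate response.
    return f"candidate response to '{prompt}' (t={temperature:.1f}, id={random.randint(0, 999)})"

def judge(prompt: str, response: str) -> float:
    # The missing piece: a program that reliably scores output quality.
    # Here it is a meaningless placeholder, which is rather the point.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    # With a real judge, this loop would quietly replace prompt engineering.
    candidates = [generate(prompt, temperature=0.3 + 0.1 * i) for i in range(n)]
    return max(candidates, key=lambda c: judge(prompt, c))

print(best_of_n("summarise this bug report"))
```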


Yeah a real problem here is how you get an AI which doesn’t understand what it is doing to create something complete and still coherent. These clips are cool and all, and so are the tiny essays put out by LLMs, but what you see is literally all you are getting; there are no thoughts, ideas or abstract concepts underlying any of it. There is no meaning or narrative to be found which connects one scene or paragraph to another. It’s a puzzle laid out by an idiot following generic instructions.
That which created the woman walking down that street doesn’t know what either of those things are, and so it can simply not use those concepts to create a coherent narrative. That job still falls onto the human instructing the AI, and nothing suggests that we are anywhere close to replacing that human glue.
Current AI can not conceptualise – much less realise – ideas, and so they can not be creative or create art by any sensible definition. That isn’t to say that what is produced using AI can’t be posed as, mistaken for, or used to make art. I’d like to see more of that last part and less of the former two, personally.


It’s not so much the hardware as it is the software and utilisation, and by software I don’t necessarily mean any specific algorithm, because I know they give much thought to optimisation strategies when it comes to implementation and design of machine learning architectures. What I mean by software is the full stack considered as a whole, and by utilisation I mean the way services advertise and make use of ill-suited architectures.
The full stack consists of general purpose computing devices with an unreasonable number of layers of abstraction between the hardware and the languages used in implementations of machine learning. A lot of this stuff is written in Python! While algorithmic complexity is naturally a major factor, how it is compiled and executed matters a lot, too.
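As a crude illustration of how much those layers cost (assuming NumPy as the stand-in for code that runs closer to the metal – the exact numbers depend on the machine, but the gap is typically a couple of orders of magnitude):

```python
# The same dot product computed through the Python interpreter vs. handed
# off to NumPy's compiled, BLAS-backed routine. The difference is the price
# of the abstraction layers, not of the algorithm itself.
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Pure Python: every multiply and add passes through the interpreter.
start = time.perf_counter()
total = 0.0
for x, y in zip(a.tolist(), b.tolist()):
    total += x * y
python_time = time.perf_counter() - start

# NumPy: the loop runs in compiled code, much closer to the hardware.
start = time.perf_counter()
total_np = np.dot(a, b)
numpy_time = time.perf_counter() - start

print(f"pure Python: {python_time:.4f}s, NumPy: {numpy_time:.4f}s")
print(f"speedup: ~{python_time / numpy_time:.0f}x")
```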
Once AI implementations stabilise, the theoretically most energy-efficient way to run them would be on custom hardware made to run only that code, with the code written at the lowest possible level of abstraction. The closer we get to the metal (or the closer the metal gets to our program), the more efficiently we can make it run. I don’t think we take bespoke hardware seriously enough; we’re stuck in this mindset of everything being general-purpose.
As for utilisation: LLMs are not fit for, or even capable of, dealing with logical problems or anything involving reasoning based on knowledge; they can’t even reliably regurgitate knowledge. Yet, as far as I can tell, this constitutes a significant portion of their current use.
If the use of LLMs were reserved for solving linguistic problems, then we wouldn’t be wasting so much energy generating text and expecting it to contain wisdom. A language model should serve as a surface layer – an interface – on top of bespoke tools, including other domain-specific types of models. I know we’re seeing this idea being iterated on, but I don’t see it being pushed nearly enough.[1]
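Something like this, as a deliberately trivial sketch of the “interface on top of bespoke tools” idea – generate() is a hypothetical stand-in for whatever language model you’d use, and a real setup would let the model itself pick the tool rather than rely on a hard-coded try/except:

```python
# Sketch: route requests to a bespoke tool where one fits, and only fall
# back to free-form text generation as a last resort.
import ast
import operator

def calculator(expression: str) -> str:
    """Bespoke tool: safely evaluate basic arithmetic instead of letting
    a language model guess the answer token by token."""
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def eval_node(node):
        if isinstance(node, ast.Expression):
            return eval_node(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in ops:
            return ops[type(node.op)](eval_node(node.left), eval_node(node.right))
        raise ValueError("unsupported expression")

    return str(eval_node(ast.parse(expression, mode="eval")))

def generate(prompt: str) -> str:
    # Hypothetical stand-in for an actual language model call.
    return f"[free-form text about: {prompt}]"

def answer(query: str) -> str:
    try:
        return calculator(query)   # the bespoke tool handles what it can
    except (ValueError, SyntaxError):
        return generate(query)     # everything else falls back to generation

print(answer("12 * (3 + 4)"))      # handled exactly, by the calculator
print(answer("what is entropy"))   # handed to the language model stub
```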
When it comes to image generation models, I think it’s wrong to focus on generating derivative art/remixes of existing works instead of on tools to help artists express themselves. All these image generation sites we have now consume so much power just so that artistically wanting people can generate 20 versions (give or take an order of magnitude) of the same generic thing. I would like to see AI technology made specifically for integration into professional workflows and tools, enabling creative people to enhance and iterate on their work through specific instructions.[2] The AI we have now are made for people who can’t tell (or don’t care about) the difference between remixing and creating and just want to tell the computer to make something nice so they can use it to sell their products.
The end result in all these cases is that fewer people can live off of being creative and/or knowledgeable while energy consumption spikes as computers generate shitty substitutes. After all, capitalism is all about efficient allocation of resources. Just so happens that quality (of life; art; anything) is inefficient and exploiting the planet is cheap.
For example, why does OpenAI gate external tool integration behind a payment plan while offering simple text generation for free? That just encourages people to rely on text generation for all kinds of tasks it’s not suitable for. Other examples include companies offering AI “assistants” or even AI “teachers”(!), all of which are incapable of even remembering the topic being discussed 2 minutes into a conversation. ↩︎
I get incredibly frustrated when I try to use image generation tools because I go into it with a vision, but since the models are incapable of creating anything new based on actual concepts I only ever end up with something incredibly artistically compromised and derivative. I can generate hundreds of images based on various contortions of the same prompt, reference image, masking, etc and still not get what I want. THAT is inefficient use of resources, and it’s all because the tools are just not made to help me do art. ↩︎


It’s not like corporations are some animal who can’t help but be who they are.
That’s exactly what they are. They are composed of people only to the extent that a car is composed of wheels.
If it’s otherwise in working order, a flat tire will be replaced and the car will be going wherever it’s meant to go. Profit city is where all roads lead to, and a flat tire (or four) can only delay for so long.
If you want to hold corporations to moral standards, you have to change the incentives (destinations) and restructure corporations to be actually owned and controlled by people who are then held to those moral standards (put more of the car into the wheels).


Is this going to be available for free? And if so, to what extent? I’m not paying for AI, but would be cool to try it out.
I’ve also been burnt a few times by registering for some “free” AI service only to realise after putting in some actual effort into trying to create something that literally any actual value you might extract from it is gated behind a payment plan. This was the case when I tried generating voices, for example: spend an hour crafting something I like; generating any actual audio with it? Pay up. It’s like trying out a free MMO where you spend a long time creating your character just the way you want it only to be greeted by “trial over - subscribe now!”


True, I could have identified those as suggested solutions (albeit rather broad and unspecific, which is perfectly fine). I also sympathise on both counts.
I have this personal intuition that a lot of social friction could be mitigated if we took some inspiration from the principle of locality in physics when designing social networks and structuring society in general. The idea of locality in physics is that physical systems interact only with their adjacent neighbours. The analogous social principle I have in mind is that interactions between people who understand and respect each other should be facilitated and emphasised, while (direct) interactions between people far apart from each other on (some notion of) a “compatibility spectrum” should be limited and de-emphasised. The idea here is that this would let political and cultural ideas propagate and spread with proportionate friction, resulting in a gradual dissipation of truly incompatible views and norms, which would hopefully reduce polarisation.
The way it works today is that people are constantly exposed directly to strangers’ unpalatable ideas and cultures, and there is zero reason for someone to seriously consider any of that since no trust or understanding exists between the (often largely unconsenting) audience and the (often loud) proponents. If some sentiment was instead communicated to a person after having passed through a series of increasingly trusted people (and after likely having undergone some revisions and filtering), that would make the person more likely to consider and extract value from it, and that would bring them a little bit closer to the opposite end of that chain.
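As a toy numerical analogy (my own illustration, nothing rigorous): put opinions on a ring where each person only ever interacts with their two neighbours, and the extremes still dissipate – just gradually, without direct confrontation between the far ends of the spectrum:

```python
# Toy diffusion on a ring: purely local interactions, yet the overall
# spread of opinions shrinks over time.
import random

N = 50        # people arranged in a ring
STEPS = 200   # rounds of local interaction
RATE = 0.1    # how strongly each round pulls neighbours together

opinions = [random.uniform(-1.0, 1.0) for _ in range(N)]

def spread(values):
    return max(values) - min(values)

print(f"initial spread: {spread(opinions):.2f}")
for _ in range(STEPS):
    nxt = []
    for i, x in enumerate(opinions):
        left, right = opinions[i - 1], opinions[(i + 1) % N]
        # Each person moves a little toward their local neighbourhood average.
        nxt.append(x + RATE * ((left + right) / 2 - x))
    opinions = nxt
print(f"spread after {STEPS} local rounds: {spread(opinions):.2f}")
```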
Anyway, those are my musings on this matter.


We don’t have to prove that the brain isn’t puppeted from some external realm of “consciousness” in order to be quite confident that it isn’t. Positing that there is such a thing as free will in the traditional sense of the term is magical thinking, which most of us might agree isn’t particularly respectable.
What we can do is take a compatibilist approach and say there is something that is “effectively indeterministic” about human decision making, because we can’t ever ourselves predict our own actions any faster than we observe them. I don’t have any moral contribution to make here; I just wanted to add this reflection.


I don’t see em suggesting any particular solutions, so I’m not sure what you are criticising, or why you think it would result in Elon remaining at large any more than figurative fruit-throwing would.
I agree that social repercussions have a place, but I also agree that it is only “good enough” for many – but not all – situations. Seeking a more sophisticated approach based on studying and identifying potential root causes seems to me like it would be more sustainable, not to mention an opportunity for individual growth.
Once. They do not have the ability to learn or adapt on their own. They are created by humans through “deep learning”, but that is fundamentally different from continuously learning based on one’s own actions and experiences.