Reasoning

I’ve written a couple of posts here using AI, and what I found very quickly is the loss of the sense of reasoning and accomplishment. While AI can very rapidly reach a solution that might otherwise take me a couple of hours, what I lose out on is far more profound. People say AI is going to improve so many fields in so many ways, but I think one place where we’re going to fall short is our feeling of accomplishment and our deeper understanding of how we got to the end result when performing hard tasks. Of course, they say this means we’ll move on to new forms of work that will give us an even greater sense of meaning, but I’d be remiss if I didn’t say that I feel slightly cheated by the ease of what’s output. If I’m writing a piece of content, so much of that process isn’t the output itself but the time I spent reasoning and thinking about it. If what we’re replacing is our rational mind and cognition, then we are losing so much of what it means to be human. Perhaps that time could be put to better use elsewhere, but I’d say we are very rapidly going to lose the ability to reason at all, and that’s scary.

If we look at the history of abstraction, of building blocks in science, nature and computing, then it’s only fair to say this is evolution, and eventually we’ll abstract away the hard work behind many of today’s human-intensive labour tasks. So much of that heavy labour is not physical but mental. Knowledge work is going to be replaced, which means reasoning about what we once reasoned about is going to disappear. This should concern us to some degree. While it has a way of democratising many fields, it also means we need to be more cognisant of which entirely human experiences we’re handing over to the machines. The way many religious scholars have looked at this, not through the lens of AI but through general guiding principles and predictions of the coming future, it’s essentially the loss of knowledge. We forget or unlearn how to do things over a multi-generational span because it’s been offloaded elsewhere, automated, or simply deemed unnecessary. So what happens when we do the same with our rational mind?

GPT-4o mini is seen as a reasoning engine. Could this be the beginning of the end for humanity? Are we on the slope of decline as our intelligence is effectively handed off to supercomputers of our own making? Offloading decisions to the gods isn’t anything new. Humanity used to cast lots, essentially a form of randomness, for decision making. How is asking the AI any different? It draws on a wealth of human knowledge, and yet its token generator is nothing more than assumed intelligence, choosing between scores assigned to candidate words. At best, it’s weighted randomness.
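
To make that “choosing between scores” claim concrete, here’s a minimal sketch of how next-token sampling typically works. The function name, the toy two-word vocabulary, and the scores are all made up for illustration; the mechanism, a softmax over scores followed by a weighted random draw, is the standard one.

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Pick the next token by weighted randomness over model scores."""
    # Scale the raw scores by temperature, then softmax them into probabilities.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_score = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(s - max_score) for tok, s in scaled.items()}
    total = sum(weights.values())
    probs = {tok: w / total for tok, w in weights.items()}
    # Draw one token at random, in proportion to its probability.
    tokens = list(probs)
    return random.choices(tokens, weights=[probs[t] for t in tokens], k=1)[0]

# Hypothetical scores for a toy two-word vocabulary. With two candidates
# scored nearly equally, the "choice" is a weighted coin flip.
print(sample_next_token({"dice": 2.1, "lots": 1.9}))
```

When two candidate words score nearly the same, the model’s “decision” really is a weighted coin flip, which is the modern echo of casting lots.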