So, I am wondering, what is gained by winning the debate over whether LLMs can achieve AGI or not? As far as I know, general intelligence is not defined, much less artificial general intelligence. It's kind of like arguing over the presence of a phantom.
Dangling the idea that "AGI" is just around the corner serves those who want to hype up investment and interest. So I can see how popping that balloon can save a lot of misspent money and energy. I can also see how refuting this path to AGI is a point for us meat sacks over the machines. It's an existential shot in the arm.
What matters to me, ultimately, is whether the technology can be reliably useful without burning up the planet in the process. If defeating LLM=AGI puts the industry on a more sustainable path, I'm all for it.
Why is this debate important to others?