LLMs performed best on questions related to legal systems and social complexity, but they struggled significantly with topics such as discrimination and social mobility.
“The main takeaway from this study is that LLMs, while impressive, still lack the depth of understanding required for advanced history,” said del Rio-Chanona. “They’re great for basic facts, but when it comes to more nuanced, PhD-level historical inquiry, they’re not yet up to the task.”
Among the tested models, GPT-4 Turbo ranked highest with 46% accuracy, while Llama-3.1-8B scored the lowest at 33.6%.
I’m sorry, you fucking what? How about you test the world’s population on PhD-level history and see if you get a 46%? Are you fucking kidding me? You’re telling me this machine is half accurate on PhD history and you’re trying to act like that doesn’t make your entire history department fucking useless? At most, you have 5 years until it’s better at the job than actual humans trained for it, because it’s already better than the public at large.
50% would be decent if it had any idea of when it was actually correct or not. But 50% is not very good when the faulty half results in it going off on a long tangent spewing lies. Lies that look incredibly real and take immense knowledge, or huge amounts of time, to check.
If you’re well versed enough in the subject to spot the lies, you likely won’t get much help from AI. And if you aren’t, well, you’re going to be learning a lot of incorrect information. Or spending a ridiculous amount of time fact-checking.
Works a bit like that for software development at the moment. AI is incredibly fast at spewing out code. But the time saved by copying it is lost hunting for errors that are extremely well hidden.
For it to be a totally fair test, you’d have to give the world’s population an open-book exam, since the model likely has every history book the researchers could find in its training data.
Well, that’s simply not true. The LLM is simply trained on patterns. Human history doesn’t really have clear rules like programming languages do, so the model isn’t going to internalise it very well. But the English language does have patterns, so if you used a semantic or hybrid search over a corpus of content and then used an LLM to synthesise well-structured summaries and responses, it would probably be fairly usable.
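For what it’s worth, the hybrid-search idea above is easy to sketch. This is a toy illustration, not a real retrieval stack: the corpus, scoring functions, and the 50/50 blend weight are all invented for the example, and a production system would use dense embeddings plus something like BM25 and a vector index, then pass the top passages to the LLM as grounding context.

```python
import math
from collections import Counter

# Hypothetical mini-corpus standing in for a library of history passages.
CORPUS = [
    "The Roman Republic transitioned to empire after decades of civil war.",
    "Medieval guilds regulated trade and apprenticeship in European cities.",
    "The printing press accelerated the spread of Reformation pamphlets.",
]

def tokens(text):
    """Lowercase, punctuation-stripped word list."""
    return [w.strip(".,").lower() for w in text.split()]

def keyword_score(query, doc):
    """Lexical half: fraction of query terms present in the document."""
    q, d = set(tokens(query)), set(tokens(doc))
    return len(q & d) / len(q) if q else 0.0

def cosine_score(query, doc):
    """'Semantic' half, crudely faked with term-count vectors.
    A real system would use dense embeddings here instead."""
    q, d = Counter(tokens(query)), Counter(tokens(doc))
    dot = sum(q[t] * d[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return dot / norm if norm else 0.0

def hybrid_search(query, corpus, alpha=0.5):
    """Blend lexical and vector-style scores; return docs best-first."""
    scored = [(alpha * keyword_score(query, doc)
               + (1 - alpha) * cosine_score(query, doc), doc)
              for doc in corpus]
    return [doc for _, doc in sorted(scored, reverse=True)]

# The top passage would then be handed to the LLM as context to summarise,
# rather than asking the model to answer from its weights alone.
best = hybrid_search("printing press Reformation", CORPUS)[0]
```

The point of blending the two scores is that keyword matching catches exact names and dates where embeddings get fuzzy, while the semantic side catches paraphrases that share no vocabulary with the query.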
The big challenge we’re facing with media today is that many authors have no understanding of statistics, programming, or data science/ML.
An LLM is not AI. It’s simply an application of a neural network over a large data set that works really well. So well, in fact, that the runtime penalty is outweighed by its utility.
I would have killed for these a decade ago, and they’re an absolute game changer with a lot of potential to do a lot of good. Unfortunately, the uninitiated among us have elected to treat them like a silver bullet because they think it’s the next dot-com bubble.