My subjective notes on the state of AI at the end of 2024
5 points · tiendil
1. Industry Transparency: https://tiendil.org/en/posts/ai-notes-2024-industry-transparency
2. Generative Knowledge Bases: https://tiendil.org/en/posts/ai-notes-2024-generative-knowledge-base
3. Current State: https://tiendil.org/en/posts/ai-notes-2024-the-current-state
4. Forecast: https://tiendil.org/en/posts/ai-notes-2024-prognosis
Since the posts are quite long, here are the key takeaways.
By analyzing the decisions of major AI developers, such as OpenAI or Google, we can make fairly accurate assumptions about the state of the AI industry.
All current progress is based on a single underlying technology: generative knowledge bases, which are large probabilistic models (see the sketch after this list).
The development of neural networks, a.k.a. generative knowledge bases, is reaching a plateau. Future progress is likely to be incremental/evolutionary rather than explosive/revolutionary.
We shouldn't expect singularity, strong AI, or job loss to robots (in the near future).
Instead, we should expect increased labor productivity, job redistribution, turbulence in education, and shifts in the education level of future generations.
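To make the "large probabilistic model" framing concrete, here is a minimal, hypothetical sketch of what a generative knowledge base does at inference time: it repeatedly samples the next token from a learned conditional distribution. The vocabulary, probability table, and two-token context window below are invented for illustration and are nothing like a real model's internals.

```python
import random

# Toy stand-in for a generative knowledge base: a conditional
# distribution P(next_token | context). Real models learn this from
# data across billions of parameters; this table is hand-written.
NEXT_TOKEN_PROBS = {
    ("the", "sky"): {"is": 0.9, "was": 0.1},
    ("sky", "is"): {"blue": 0.7, "clear": 0.2, "falling": 0.1},
}

def sample_next(context):
    """Sample one token from the model's distribution for this context."""
    dist = NEXT_TOKEN_PROBS.get(tuple(context[-2:]))
    if dist is None:
        return None  # the toy table holds no "knowledge" of this context
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

def generate(prompt, max_tokens=5):
    """Autoregressive generation: append sampled tokens one at a time."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        nxt = sample_next(tokens)
        if nxt is None:
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the sky"))  # e.g. "the sky is blue"
```

The point of the analogy: the "knowledge" lives in the probabilities, and retrieval is sampling, which is why the output is plausible rather than guaranteed correct.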
What do you think? How does the concept of "generative knowledge bases" resonate with your understanding of the current situation?
343rwerfd ·19 days ago
Rumors mention recursive "self"-improvement (training) already ongoing at a big scale: better AIs training lesser (but still powerful) AIs to become better AIs, and then the cycle restarts. Maybe o1 and o3 are just the beginning of what was chosen to be made publicly available (also the newer Sonnet); a schematic sketch of the cycle follows the link below.
https://www.thealgorithmicbridge.com/p/this-rumor-about-gpt-...
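The rumored cycle resembles iterated distillation. Here is a schematic sketch of that loop; every function body is a placeholder invented for illustration, not any lab's actual pipeline.

```python
# Schematic of the rumored "better AIs training lesser AIs" cycle.
# All functions are invented stubs; only the loop structure matters.

def generate_training_data(teacher, prompts):
    """The teacher model labels/solves prompts, producing synthetic data."""
    return [(p, teacher(p)) for p in prompts]

def train_student(data):
    """Fit a new model on the teacher's outputs. Stubbed: the 'student'
    simply replays the teacher's answers."""
    answers = dict(data)
    return lambda p: answers.get(p, "")

def improved(student, teacher, benchmark):
    """Continue cycling only while the student beats the teacher."""
    return benchmark(student) > benchmark(teacher)

def self_improvement_loop(model, prompts, benchmark, max_rounds=3):
    for _ in range(max_rounds):
        data = generate_training_data(model, prompts)
        student = train_student(data)
        if not improved(student, model, benchmark):
            break
        model = student  # the student becomes the next round's teacher
    return model
```

If the rumors are right, o1/o3-style models would be products of a few turns of a loop like this, with reinforcement learning and verification standing in for the stubs above.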
The pace of change is actually uncertain; you could see revolutionary advances maybe 4-7 times this year, because the tide has changed: massive hardware (available only to a few players) is no longer a blocker, given that algorithms and software are taking the lead as the main force advancing AI development (anyone on the planet with a brain could make a radical leap in AI tech, anytime going forward).
https://sakana.ai/transformer-squared/
Besides the rumors and the (still) relatively low-impact recent innovations, we have history: remember that the technology behind GPT-2 existed basically two years before it was made public, and the theory behind that technology existed maybe four years before anything close to practical appeared.
All the public information is just old news. If you want to know where everything is going, look at where the money is going and/or where the best teams are working (DeepSeek, or NovaSky's Sky-T1).
https://novasky-ai.github.io/posts/sky-t1/
kingkongjaffa ·19 days ago
Yes. The latest product releases from all of them have been chain-of-thought tweaks to existing models (see the sketch below) rather than entirely new models. Several models are perceivably the same as or worse than their predecessors (Sonnet 3.5 is sometimes worse than Opus 3.0, and Opus 3.5 is nowhere in sight).
GPT-4o is sometimes worse than base GPT-4 was when it was available.
The newest and largest models so far are too expensive to run and/or not much better than the previous best models, which is why they have not been released despite rumors that they were trained.
I would love announcements/data to the contrary.
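For readers who have not seen one, a chain-of-thought "tweak" of the kind mentioned above often changes only the prompt or the post-training recipe, not the underlying model. A minimal illustration; `call_model` is a hypothetical placeholder, not any vendor's API:

```python
# Same model, same weights; only the prompt differs. `call_model`
# is a stand-in for an LLM API call, invented for illustration.

def call_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real LLM API call")

QUESTION = "A train leaves at 3pm and arrives at 7pm. How long is the trip?"

# Baseline: ask for the answer directly.
plain_prompt = QUESTION

# Chain-of-thought variant: ask for intermediate reasoning first.
cot_prompt = QUESTION + "\nThink step by step, then give the final answer."
```

Reportedly, o1-style "reasoning" releases bake this behavior in via post-training rather than a literal prompt suffix, but the point stands: they layer on an existing base model.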
moomoo11 ·18 days ago