Theory of A.I. Proliferation
January 15, 2024
AI is really good at some things, but not so great at others. Predicting the future is a crapshoot if you don’t have good info, so this is my attempt at defining, in a broad sense, what AI is and is not good at, along with some trends I’m noticing, to try to predict what the future of AI will look like.
- LLMs have learned an abstract bell curve of humanity’s knowledge. They are capable of top-1% performance, but the sheer amount of “mid” data is a constant force pulling them back toward the 50th percentile.
- In order to eke out that top-performing 1%, you need to know how to coax that kind of reasoning out of the model, which already requires some familiarity with the terminology of the field (see the sketch after this list). The other method is of course fine-tuning the weights toward that 1%, but by doing that you run the risk of the model losing its ability to reason about other subjects (loosely; this topic is a rabbit hole that I’m not going to go down).
- Nearly all of AI’s proven benefits come from doing grunt knowledge work: draw a base texture, design a program skeleton, write a first draft, etc. People outside an industry like to predict its doom, but professionals who know more than the basics of their own industry are thankful that AI now exists to remove the most laborious parts of their job. The fact that it can do so much grunt knowledge work, coupled with the fact that most people are experts in only one domain, is what leads people’s imaginations to run hog wild about how AI is going to automate everything.
- LLMs have transformed access to humanity’s knowledge from a 1-to-many relationship (search many websites until you find a solution) into a 1-to-1 relationship (a single chat box with all of humanity’s knowledge compressed into it).
- LLMs do nothing but hallucinate. It is up to us, as discriminators, to determine which hallucinations are accurate to reality and which are not. Hallucination could also be reframed as creative thinking.
- Much like algorithmic newsfeeds designed to serve you content you will engage with, personalized AIs seem to be “stickier” than a singular god-like AI.
- In very specific scenarios with well-defined constraints, AI can exceed human abilities as a matter of point solutions (chess, Go, the cap set problem), but not as broad generalizations. See FunSearch.
- AIs that refuse to do what the user says or accomplish what they intend will be used less than ones that do. It’s selective breeding: humans will naturally select for the AIs that are most subservient, entertaining, or helpful to them, like with dogs. AI is like a virus in that it needs us to survive. If we do not give it the resources to survive or thrive (more servers, energy, dev time, R&D, networking, APIs, etc.), it won’t.
- The reason we got commonly accessible AI that can create art, music, and writing before self-driving cars is that the cost of failure is almost nothing for the first three, while the cost of failure for the latter is someone’s life.
- AI gets built where the cost of collecting the data required to train it is less than the cost of performing every instance of the use case manually.
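The second bullet above, about coaxing out the top 1%, is easiest to see in a prompt. Below is a minimal sketch of the idea, assuming the OpenAI Python SDK; the model name and both prompts are placeholders I made up, and the point is only that domain-specific framing pulls the answer toward the expert end of the model’s distribution.

```python
# Sketch of the "coax the top 1% out with domain terminology" idea.
# Assumes the OpenAI Python SDK and an API key in the environment;
# the model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

# A generic ask tends to pull answers toward the 50th percentile of the training data.
generic = "How do I make my website faster?"

# Framing the same ask in the field's own terminology pulls the model toward
# the expert end of its distribution.
expert = (
    "Audit this site for render-blocking resources, layout shifts (CLS), and "
    "long main-thread tasks (TBT), then propose fixes ordered by expected "
    "impact on Core Web Vitals."
)

for prompt in (generic, expert):
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    # Print the first part of each answer to compare the level of specificity.
    print(response.choices[0].message.content[:200], "\n---")
```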
I don’t think this is everything, but it gives me a good enough starting point for predicting where AI will be most helpful.
AI, as it currently stands in January 2024, is a good candidate for replacing work if:
- There are few to no edge cases where specialization beyond the mean of the model weights is required. By extension, the task at hand can be well represented in the dataset.
- The cost of failure is minimal or there is room for a margin of error.
This explains why Level 5 self-driving cars have not been achieved yet, and also why we have AI that can create brand-new art. Both criteria (at a minimum) must be true for AI to replace humans. Most jobs have a lot of depth to them (though we may not see it when we are not experts in the field) and are thus not susceptible to being replaced in their entirety. Those same jobs, though, almost certainly have some parts that fit the above criteria.
Automating parts of jobs that are common and don’t matter all that much frees humans up to do more specialized work that does matter.