17 Comments

I really wanted to make a communism joke in the bit about human preferences and economics but I managed to restrain myself.

Great post, Sergey!

Excellent as always! I keep worrying that LLMs have a lower ceiling than a lot of enthusiasts seem to think, but I know *nothing* about what can be done with / on top of / around LLMs, technically; their indecipherability seems problematic from an “extension” angle, but what would I know!

Anyway, great read!

I think we are barely scratching the surface of what can be done around them. Consider, in isolation, just the idea that we now have an autonomous planning engine that can be hooked up to anything.

That’s how it feels to me. I’m very irritated by my position: a Cambrian explosion is about to happen without my having any particular purchase on what’s possible or on the technical details of it all. Exciting, but annoying to be a pure bystander!

I want to come out and play

I think realistically the era of ambient computing starts in the 2030s. The era of prompting by text feels somewhat painful. What would it take for conversational AI to reach a real point of convenience and utility?

You'd need the actions it takes from a text prompt to be GREATLY more difficult to do yourself than writing the prompt is. So far, ChatGPT plug-ins don't clear that hurdle. Adept might in the future, but I wouldn't say that their demo does.

awesome read! need that Longformer attention window :kek:

btw i finally got that working 😎

GOAT

Even if you get stuck at human level... could you just create a lot of them and have them interact with and learn from each other?

I think, at the limit, the question is: at what point does speed matter more than ability? If I had to guess, I'd say that 100,000 median human-level AIs running around the clock would get to superintelligence more slowly than OpenAI will. But if they were 2 standard deviations above the median... I don't know.

Regardless, I think the idea of giving LLMs access to a longer-term memory, plus a way to shift their context windows around within it, is crucial. Right now, the models are human-level only on short tasks.
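One way to picture that memory idea concretely: store past text as embedded chunks and, at each step, retrieve the most relevant ones back into the prompt, so the "window" slides over a much larger store. A minimal sketch in Python, where `embed` is just a hashing stand-in for a real embedding model:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model; hashing trick so the sketch runs end to end.
    vec = np.zeros(256)
    for token in text.lower().split():
        vec[hash(token) % 256] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-8)

class LongTermMemory:
    """Stores text chunks; recalls the ones most relevant to the current task."""

    def __init__(self):
        self.chunks: list[str] = []
        self.vectors: list[np.ndarray] = []

    def remember(self, chunk: str) -> None:
        self.chunks.append(chunk)
        self.vectors.append(embed(chunk))

    def recall(self, query: str, k: int = 3) -> list[str]:
        scores = [float(v @ embed(query)) for v in self.vectors]
        top = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]
        return [self.chunks[i] for i in top]

memory = LongTermMemory()
memory.remember("The user prefers summaries under 200 words.")
memory.remember("Yesterday's task: draft the Q3 roadmap.")

# "Shifting the context window": only the recalled chunks go into the prompt.
context = "\n".join(memory.recall("write a short summary of the roadmap"))
print(f"Relevant memory:\n{context}\n\nTask: summarize the Q3 roadmap.")
```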

Langchain is kinda doing this: you can chain them up to chat w/ each other in sequence to accomplish a task: https://github.com/hwchase17/langchain
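Rough sketch of what that chaining looks like, assuming the classic langchain API of the time (LLMChain piped through SimpleSequentialChain); the prompts and the task string are purely illustrative:

```python
# Toy two-step chain: one LLM call drafts a plan, a second critiques it.
# Assumes an OPENAI_API_KEY in the environment.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = OpenAI(temperature=0.7)

planner = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["task"],
        template="Write a short step-by-step plan for this task:\n{task}",
    ),
)

critic = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["plan"],
        template="Point out the weakest step in this plan and suggest a fix:\n{plan}",
    ),
)

# SimpleSequentialChain pipes each chain's output into the next one,
# so the two "agents" effectively talk to each other once per step.
pipeline = SimpleSequentialChain(chains=[planner, critic], verbose=True)
print(pipeline.run("Summarize a 50-page PDF and email the summary to my team"))
```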

heh, I used to work with Harrison, cool to see his thing in the wild :)

oh wow! yeah people are losing their minds over langchain!

Yeah, langchain is really cool
