Commit

fix: typo
ethanthoma committed Oct 24, 2024
1 parent 84a1155 commit 3d413df
Showing 1 changed file with 14 additions and 15 deletions.
29 changes: 14 additions & 15 deletions posts/llms_dont_make_sense.md
@@ -12,7 +12,7 @@ computer. The data? Whatever is sent over the network...

You may be an optimist. Fair. I am not though. I simply cannot see how the
network will become a general purpose computer from looking at network data. And
-for that reason, I don't see how LLMs will learn to be AGI by looking a text.
+for that reason, I don't see how LLMs will learn to be AGI by looking at text.

## The Human Experience

@@ -62,26 +62,25 @@ Reinforcement learning (RL) was king for much of 2000s. Before AlexNet's big run
and the eventual transformer hegemony, RL was the coolest kid of the block. And
I think, it will outlive deep conv nets and transformers.

-Value systems are an integral part of decision making. I find the smartest
-someone is (not academically but on a more philosophical benchmark), the more
-self-consistent their value system tends to be. The decisions we are heavily
-influenced by multiple factors like emotion and what we value.
+Value systems are an integral part of decision making. Economics captures this
+part of us, at least partially (and why I think it and game theory are important
+for machine learning research). You may like fruit A more than fruit B (or
+you're like me who doesn't really eat fruit) but after giving you _x_ fruit A,
+you will probably want fruit B more...

-Economics captures this part of us, at least partially. You may like fruit A
-more than fruit B (or you're like me who doesn't really eat fruit) but after
-giving you _x_ fruit A, you will probably want fruit B...

-And this is important. Really important. OpenAI knows that their is a base
+And this is important. Really important. OpenAI knows that there is a base
entropy in language modeling. You know that too. Think about it. You probably
can guess what someone you're close to would say if you text them. But would
that change if they were more stressed? What if they normally like your texts
but today they got hammered by a bunch of people asking them the same thing?

-How your values changed (as influenced by many factors like emotion) affect how
-you interact with the environment, including communication, which can not be
-predicted on average, like a LLM tries to do.
+I find the smartest someone is, philosophically not academically, the more
+self-consistent their value system tends to be. The decisions we make are
+heavily influenced by multiple factors like emotion our values. How your values
+changed affect how you interact with the environment, including communication,
+which can not be predicted on average, like a LLM tries to do.

-I don't think we want an average-perdicting model, we want AGI, something I hope
+I don't think we want an average-predicting model, we want AGI, something I hope
is a bit better than the average person.

## Architecture is King
@@ -93,7 +92,7 @@ is a bit better than the average person.
OpenAI has very succesfully [psyoped](https://en.wikipedia.org/wiki/Psychological_operations_(United_States))
us into thinking scaling was king. More parameters, more compute, more data, more
more more. But, I think they're wrong. Transformers can scale really well. And
-so we do scale them. But if you think transformers is the finally architecture
+so we do scale them. But if you think transformers are the final architecture
then go off and spend your trillions on training your single model.

Transformers have one flaw. They're not the perfect model. Okay, maybe that's
