The Blessing and the Curse of Large Language Models

Language models are amazing. It's astonishing how natural conversations with GPT-3, the language model underpinning ChatGPT, can feel.

Nonetheless, the limitations of this fill-in-the-blank approach become apparent if you know where to look:

  • GPT-3 will fill in whatever it thinks is most probable, not what would actually be nice or useful. If it thinks a tirade filled with racial slurs is the best completion, that's what it'll respond with (a minimal sketch of this most-probable-token loop follows this list).
  • In a similar vein, GPT-3 will very convincingly lie to you: it has no inherent idea of what it knows and what it doesn't. Much of the work that went into ChatGPT as opposed to the base GPT-3 model involved trying to fix this, but it's a tricky problem to solve.
  • Sometimes it just can't figure out how to fill in a blank effectively. One example is arithmetic: it's pretty good with small numbers, but if you ask it to compute 1382 times 2349, it'll give you an answer that's close but not quite right—for example, 3,219,418 when the answer is 3,246,318.[1]

  [1] Note how it's almost right: clearly the network is picking up something here, but it doesn't know how to do the schoolbook multiplication algorithm because that's really hard to learn from raw English text. (A short sketch of that algorithm follows below.)
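To make the first point concrete, here is a minimal sketch, in Python, of what "fill in whatever it thinks is most probable" means as a procedure. The three-entry probability table is invented for illustration (a real model like GPT-3 computes these scores with a neural network over tens of thousands of candidate tokens), but the decision rule is the essential part: greedy decoding just takes the highest-probability next token, and nothing in the loop asks whether the result is nice, useful, or true.

    # Toy sketch of greedy next-token completion. The probability table
    # is made up for illustration; a real model scores every token in a
    # large vocabulary with a neural network instead of a lookup.
    next_token_probs = {
        ("the", "cat"): {"sat": 0.55, "ran": 0.30, "is": 0.15},
        ("cat", "sat"): {"on": 0.70, "down": 0.20, "there": 0.10},
        ("sat", "on"): {"the": 0.80, "a": 0.15, "my": 0.05},
    }

    def greedy_complete(prompt, steps):
        tokens = prompt.split()
        for _ in range(steps):
            context = tuple(tokens[-2:])  # this toy table only sees two tokens
            candidates = next_token_probs.get(context)
            if candidates is None:
                break
            # The sole criterion is probability: no check for niceness,
            # usefulness, or truth happens anywhere in this loop.
            tokens.append(max(candidates, key=candidates.get))
        return " ".join(tokens)

    print(greedy_complete("the cat", 3))  # -> "the cat sat on the"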

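And for the arithmetic point, the "schoolbook multiplication algorithm" from the footnote is short to write down, which also lets us check the numbers above. This is an ordinary long-multiplication sketch, not anything GPT-3 actually runs internally:

    # Schoolbook multiplication: one-digit partial products, shifted by
    # place value and summed. The explicit digit-by-digit procedure is
    # exactly what's hard to absorb from raw English text.
    def schoolbook_multiply(a, b):
        total = 0
        for place, digit in enumerate(reversed(str(b))):
            partial = a * int(digit)        # one-digit partial product
            total += partial * 10 ** place  # shift into the right column
        return total

    exact = schoolbook_multiply(1382, 2349)
    print(exact)            # 3246318 -- the correct answer above
    print(exact - 3219418)  # 26900   -- the size of GPT-3's miss

The miss of 26,900 is under one percent of the true product, which is what the footnote means by "almost right": the network has clearly absorbed the rough magnitude, even though it hasn't learned the carrying procedure itself.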