In-context learning is the technical (fancy) term for what we all do with the ChatGPT web app when we task it with doing things for us; essentially, it boils down to prompt engineering. It can be zero-shot learning, where the prompt contains only the instructions, or few-shot learning, where we also include a few examples to further specify the objective.
It is termed ‘in-context’ because we specify the task and the examples within the model’s context, via the prompt. In doing so, the model picks up the new task without changing any of its parameters, as it would during pre-training or fine-tuning.
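To make the zero-shot vs. few-shot distinction concrete, here is a minimal sketch using the OpenAI Python SDK. The sentiment task, example reviews, and model name are illustrative assumptions on my part, not part of any particular recipe:

```python
# Minimal sketch: zero-shot vs. few-shot prompting via the OpenAI Python SDK.
# The task (sentiment classification) and model name are just illustrative choices.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Zero-shot: instructions only, no examples in the context.
zero_shot_prompt = (
    "Classify the sentiment of this review as positive or negative:\n"
    "'The battery died after two days.'"
)

# Few-shot: the same instructions plus a few worked examples in the context.
few_shot_prompt = (
    "Classify the sentiment of each review as positive or negative.\n\n"
    "Review: 'Absolutely loved it, works like a charm.'\nSentiment: positive\n\n"
    "Review: 'Broke within a week, total waste of money.'\nSentiment: negative\n\n"
    "Review: 'The battery died after two days.'\nSentiment:"
)

for prompt in (zero_shot_prompt, few_shot_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
```

Note that nothing about the model's weights changes between the two calls; the only thing that differs is what we put into the context window.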


