The inference of large language models is incredibly computationally expensive. It comes down to the oomph in the device to actually run these models fast enough to be useful. You could in theory run these models on a very old device, but it would be so slow that it would not be useful.
On why Apple Intelligence won't work on older iPhones
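To make the "oomph" point concrete, here is a minimal back-of-envelope sketch of how hardware gates on-device decoding speed. Every number in it is an illustrative assumption (model size, quantization, peak throughput, memory bandwidth, utilization), not a published spec for any particular iPhone; the takeaway is the ratio between a newer and an older chip, not the exact figures.

```python
# Back-of-envelope sketch: why on-device LLM inference is gated by hardware.
# All numbers below are illustrative assumptions, not published device specs.

PARAMS = 3e9             # assumed ~3B-parameter on-device model
BYTES_PER_PARAM = 0.5    # assumed 4-bit quantized weights

def decode_tokens_per_sec(peak_flops: float, mem_bw: float,
                          utilization: float = 0.3) -> float:
    """Estimate decoding speed as the lesser of the compute-bound and
    memory-bandwidth-bound rates (decoding is usually bandwidth-bound)."""
    compute_bound = peak_flops * utilization / (2 * PARAMS)  # ~2 FLOPs/weight/token
    bandwidth_bound = mem_bw / (PARAMS * BYTES_PER_PARAM)    # weights re-read per token
    return min(compute_bound, bandwidth_bound)

# Hypothetical newer vs. older phone SoC (peak FLOPS, memory bandwidth in B/s):
newer = decode_tokens_per_sec(peak_flops=35e12, mem_bw=50e9)
older = decode_tokens_per_sec(peak_flops=5e12,  mem_bw=15e9)
print(f"newer SoC: ~{newer:.0f} tokens/sec")  # ~33 tokens/sec
print(f"older SoC: ~{older:.0f} tokens/sec")  # ~10 tokens/sec
```

Under these assumed numbers the older chip decodes several times slower, and that is before counting RAM: the weights alone occupy ~1.5 GB here, which older phones with small memory may not be able to spare alongside the OS and apps.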