The inference of large language models is incredibly computationally expensive. It's the oomph in the device to actually run these models fast enough to be useful. You could in theory run these models on a very old device, but it would be so slow that it would not be useful.

On why Apple Intelligence won't work on older iPhones

Business Insider, Jun 18, 2024
