I mean, I’m pretty sure that everyone running a free-to-use LLM service is logging and making use of data. Costs money for the hardware.
If you don’t want that, it’s either subscribe to some commercial service that covers their hardware costs and provides a no-log policy (assuming that anyone provides that, which I assume someone does) that you find trustworthy, or buy your own hardware and run an LLM yourself, which is gonna cost something.
I would guess that because Nvidia wants to segment the market — price discrimination based on on-card VRAM size — there’s gonna be a point, if we’re not there yet, where the major services are only gonna be running on hardware that’s more expensive than what the typical consumer is willing to get, though.